* [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver
@ 2019-12-20 22:36 James Smart
  2019-12-20 22:36 ` [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
                   ` (32 more replies)
  0 siblings, 33 replies; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart

This patch set is a request to incorporate the new Broadcom
(Emulex) FC target driver, efct, into the kernel source tree.

The driver source has been announced a couple of times, most recently
on 12/18/2018. The driver has been hosted on gitlab for review and has
had contributions from the community.
  gitlab (git@gitlab.com:jsmart/efct-Emulex_FC_Target.git)

The driver integrates into the source tree at the (new) drivers/scsi/elx
subdirectory.

The driver consists of the following components:
- A libefc_sli subdirectory: This subdirectory contains a library that
  encapsulates common definitions and routines for an Emulex SLI-4
  adapter.
- A libefc subdirectory: This subdirectory contains a library of
  common routines. Most notable is a set of routines that implement
  an FC Discovery engine for target mode.
- An efct subdirectory: This subdirectory contains the efct target
  mode device driver. The driver utilizes the above libraries and
  plugs into the SCSI LIO interfaces. The driver is SCSI only at
  this time.

The patches populate the libraries and device driver and can only
be compiled as a complete set.

This driver is completely independent of the lpfc device driver
and there is no overlap in PCI IDs.

The patches have been cut against the 5.6/scsi-queue branch.

Thank you to those that have contributed to the driver in the past.

Review comments welcome!

-- james


V2 modifications:

Contains the following modifications based on prior review comments:
  Indentation/Alignment/Spacing changes
  Comments: format cleanup; removed obvious or unnecessary comments;
    added comments for clarity.
  Headers use #ifndef guards to check for prior inclusion
  Cleanup structure names (remove _s suffix)
  Encapsulate use of macro arguments
  Refactor to remove static function declarations for static local routines
  Removed unused variables
  Fix SLI4_INTF_VALID_MASK for 32bits
  Ensure no BIT() use
  Use __ffs() in page count macro
  Reorg to move field defines out of structure definition
  Commonize command building routines to reduce duplication
  LIO interface:
    Removed scsi initiator includes
    Cleaned up interface defines
    Removed lio WWN version attribute.
    Expanded macros within logging macros
    Cleaned up lio state setting macro
    Remove __force use
    Modularized session debugfs code so it can be easily replaced.
    Cleaned up abort task handling. Return after initiating.
    Modularized where possible to reduce duplication
    Converted from kthread to workqueue use
    Remove unused macros
  Add missing TARGET_CORE build attribute
  Fix kbuild test robot warnings

Comments not addressed:
  Use of __packed: not believed necessary
  Session debugfs code remains. There is not yet a common lio
    mechanism to replace it with.


James Smart (32):
  elx: libefc_sli: SLI-4 register offsets and field definitions
  elx: libefc_sli: SLI Descriptors and Queue entries
  elx: libefc_sli: Data structures and defines for mbox commands
  elx: libefc_sli: queue create/destroy/parse routines
  elx: libefc_sli: Populate and post different WQEs
  elx: libefc_sli: bmbx routines and SLI config commands
  elx: libefc_sli: APIs to setup SLI library
  elx: libefc: Generic state machine framework
  elx: libefc: Emulex FC discovery library APIs and definitions
  elx: libefc: FC Domain state machine interfaces
  elx: libefc: SLI and FC PORT state machine interfaces
  elx: libefc: Remote node state machine interfaces
  elx: libefc: Fabric node state machine interfaces
  elx: libefc: FC node ELS and state handling
  elx: efct: Data structures and defines for hw operations
  elx: efct: Driver initialization routines
  elx: efct: Hardware queues creation and deletion
  elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  elx: efct: Hardware IO and SGL initialization
  elx: efct: Hardware queues processing
  elx: efct: Unsolicited FC frame processing routines
  elx: efct: Extended link Service IO handling
  elx: efct: SCSI IO handling routines
  elx: efct: LIO backend interface routines
  elx: efct: Hardware IO submission routines
  elx: efct: link statistics and SFP data
  elx: efct: xport and hardware teardown routines
  elx: efct: IO timeout handling routines
  elx: efct: Firmware update, async link processing
  elx: efct: scsi_transport_fc host interface support
  elx: efct: Add Makefile and Kconfig for efct driver
  elx: efct: Tie into kernel Kconfig and build process

 MAINTAINERS                            |    8 +
 drivers/scsi/Kconfig                   |    2 +
 drivers/scsi/Makefile                  |    1 +
 drivers/scsi/elx/Kconfig               |    9 +
 drivers/scsi/elx/Makefile              |   30 +
 drivers/scsi/elx/efct/efct_driver.c    | 1031 +++++
 drivers/scsi/elx/efct/efct_driver.h    |  150 +
 drivers/scsi/elx/efct/efct_els.c       | 1953 +++++++++
 drivers/scsi/elx/efct/efct_els.h       |  136 +
 drivers/scsi/elx/efct/efct_hw.c        | 6742 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h        | 1099 ++++++
 drivers/scsi/elx/efct/efct_hw_queues.c | 1648 ++++++++
 drivers/scsi/elx/efct/efct_hw_queues.h |   67 +
 drivers/scsi/elx/efct/efct_io.c        |  203 +
 drivers/scsi/elx/efct/efct_io.h        |  196 +
 drivers/scsi/elx/efct/efct_lio.c       | 1921 +++++++++
 drivers/scsi/elx/efct/efct_lio.h       |  192 +
 drivers/scsi/elx/efct/efct_scsi.c      | 1572 ++++++++
 drivers/scsi/elx/efct/efct_scsi.h      |  313 ++
 drivers/scsi/elx/efct/efct_unsol.c     |  835 ++++
 drivers/scsi/elx/efct/efct_unsol.h     |   49 +
 drivers/scsi/elx/efct/efct_utils.c     |  446 +++
 drivers/scsi/elx/efct/efct_utils.h     |   83 +
 drivers/scsi/elx/efct/efct_xport.c     | 1472 +++++++
 drivers/scsi/elx/efct/efct_xport.h     |  205 +
 drivers/scsi/elx/include/efc_common.h  |   52 +
 drivers/scsi/elx/libefc/efc.h          |   99 +
 drivers/scsi/elx/libefc/efc_device.c   | 1658 ++++++++
 drivers/scsi/elx/libefc/efc_device.h   |   72 +
 drivers/scsi/elx/libefc/efc_domain.c   | 1126 ++++++
 drivers/scsi/elx/libefc/efc_domain.h   |   52 +
 drivers/scsi/elx/libefc/efc_fabric.c   | 1762 +++++++++
 drivers/scsi/elx/libefc/efc_fabric.h   |  116 +
 drivers/scsi/elx/libefc/efc_lib.c      |  131 +
 drivers/scsi/elx/libefc/efc_node.c     | 1343 +++++++
 drivers/scsi/elx/libefc/efc_node.h     |  188 +
 drivers/scsi/elx/libefc/efc_sm.c       |  213 +
 drivers/scsi/elx/libefc/efc_sm.h       |  140 +
 drivers/scsi/elx/libefc/efc_sport.c    |  843 ++++
 drivers/scsi/elx/libefc/efc_sport.h    |   52 +
 drivers/scsi/elx/libefc/efclib.h       |  637 +++
 drivers/scsi/elx/libefc_sli/sli4.c     | 5748 +++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h     | 4296 ++++++++++++++++++++
 43 files changed, 38891 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_els.c
 create mode 100644 drivers/scsi/elx/efct/efct_els.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.h
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h
 create mode 100644 drivers/scsi/elx/efct/efct_utils.c
 create mode 100644 drivers/scsi/elx/efct/efct_utils.h
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h
 create mode 100644 drivers/scsi/elx/include/efc_common.h
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
 create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.h
 create mode 100644 drivers/scsi/elx/libefc/efclib.h
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

-- 
2.13.7



* [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  7:11   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
                   ` (31 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This is the initial patch for the new Emulex target mode SCSI
driver sources.

This patch:
- Creates the new Emulex source level directory drivers/scsi/elx
  and adds the directory to the MAINTAINERS file.
- Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
  This library is a SLI-4 interface library.
- Starts the population of the libefc_sli library with SLI-4
  hardware register offset and field definitions.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
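A rough usage sketch (not part of the patch; the function names and the
reg_base pointer are illustrative only, and standard kernel headers such
as linux/io.h and linux/errno.h are assumed) of how the SLI_INTF and EQ
doorbell definitions below might be used:

static int sli4_sketch_check_intf(void __iomem *reg_base)
{
	u32 intf = readl(reg_base + SLI4_INTF_REG);

	/* the valid field must read back as the architected value */
	if ((intf & SLI4_INTF_VALID_MASK) != SLI4_INTF_VALID_VALUE)
		return -ENODEV;

	/* only SLI-4 interface types 2 and 6 are covered by this library */
	switch (intf & SLI4_INTF_IF_TYPE_MASK) {
	case SLI4_INTF_IF_TYPE_2:
	case SLI4_INTF_IF_TYPE_6:
		return 0;
	default:
		return -ENODEV;
	}
}

/* re-arm an EQ after 'n' entries have been processed (if_type 2 doorbell) */
static void sli4_sketch_arm_eq(void __iomem *reg_base, u16 eq_id, u32 n)
{
	writel(SLI4_EQ_DOORBELL(n, eq_id, SLI4_EQCQ_ARM),
	       reg_base + SLI4_EQCQ_DB_REG);
}
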
 MAINTAINERS                        |   8 ++
 drivers/scsi/elx/libefc_sli/sli4.c |  26 ++++
 drivers/scsi/elx/libefc_sli/sli4.h | 239 +++++++++++++++++++++++++++++++++++++
 3 files changed, 273 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc0a4a8ae06a..dd8e5f340991 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6139,6 +6139,14 @@ W:	http://www.broadcom.com
 S:	Supported
 F:	drivers/scsi/lpfc/
 
+EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
+M:	James Smart <james.smart@broadcom.com>
+M:	Ram Vegesna <ram.vegesna@broadcom.com>
+L:	linux-scsi@vger.kernel.org
+W:	http://www.broadcom.com
+S:	Supported
+F:	drivers/scsi/elx/
+
 ENE CB710 FLASH CARD READER DRIVER
 M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
 S:	Maintained
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
new file mode 100644
index 000000000000..29d33becd334
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * All common (i.e. transport-independent) SLI-4 functions are implemented
+ * in this file.
+ */
+#include "sli4.h"
+
+struct sli4_asic_entry_t {
+	u32 rev_id;
+	u32 family;
+};
+
+static struct sli4_asic_entry_t sli4_asic_table[] = {
+	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
+};
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
new file mode 100644
index 000000000000..02c671cf57ef
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/*
+ * All common SLI-4 structures and function prototypes.
+ */
+
+#ifndef _SLI4_H
+#define _SLI4_H
+
+/*************************************************************************
+ * Common SLI-4 register offsets and field definitions
+ */
+
+/* SLI_INTF - SLI Interface Definition Register */
+#define SLI4_INTF_REG			0x0058
+enum {
+	SLI4_INTF_REV_SHIFT		= 4,
+	SLI4_INTF_REV_MASK		= 0x0F << SLI4_INTF_REV_SHIFT,
+
+	SLI4_INTF_REV_S3		= 3 << SLI4_INTF_REV_SHIFT,
+	SLI4_INTF_REV_S4		= 4 << SLI4_INTF_REV_SHIFT,
+
+	SLI4_INTF_FAMILY_SHIFT		= 8,
+	SLI4_INTF_FAMILY_MASK		= 0x0F << SLI4_INTF_FAMILY_SHIFT,
+
+	SLI4_FAMILY_CHECK_ASIC_TYPE	= 0xf << SLI4_INTF_FAMILY_SHIFT,
+
+	SLI4_INTF_IF_TYPE_SHIFT		= 12,
+	SLI4_INTF_IF_TYPE_MASK		= 0x0F << SLI4_INTF_IF_TYPE_SHIFT,
+
+	SLI4_INTF_IF_TYPE_2		= 2 << SLI4_INTF_IF_TYPE_SHIFT,
+	SLI4_INTF_IF_TYPE_6		= 6 << SLI4_INTF_IF_TYPE_SHIFT,
+
+	SLI4_INTF_VALID_SHIFT		= 29,
+	SLI4_INTF_VALID_MASK		= 7 << SLI4_INTF_VALID_SHIFT,
+
+	SLI4_INTF_VALID_VALUE		= 6 << SLI4_INTF_VALID_SHIFT,
+};
+
+/* ASIC_ID - SLI ASIC Type and Revision Register */
+#define SLI4_ASIC_ID_REG	0x009c
+enum {
+	SLI4_ASIC_GEN_SHIFT	= 8,
+	SLI4_ASIC_GEN_MASK	= 0xFF << SLI4_ASIC_GEN_SHIFT,
+	SLI4_ASIC_GEN_5		= 0x0b << SLI4_ASIC_GEN_SHIFT,
+	SLI4_ASIC_GEN_6		= 0x0c << SLI4_ASIC_GEN_SHIFT,
+	SLI4_ASIC_GEN_7		= 0x0d << SLI4_ASIC_GEN_SHIFT,
+};
+
+enum {
+	SLI4_ASIC_REV_A0 = 0x00,
+	SLI4_ASIC_REV_A1 = 0x01,
+	SLI4_ASIC_REV_A2 = 0x02,
+	SLI4_ASIC_REV_A3 = 0x03,
+	SLI4_ASIC_REV_B0 = 0x10,
+	SLI4_ASIC_REV_B1 = 0x11,
+	SLI4_ASIC_REV_B2 = 0x12,
+	SLI4_ASIC_REV_C0 = 0x20,
+	SLI4_ASIC_REV_C1 = 0x21,
+	SLI4_ASIC_REV_C2 = 0x22,
+	SLI4_ASIC_REV_D0 = 0x30,
+};
+
+/* BMBX - Bootstrap Mailbox Register */
+#define SLI4_BMBX_REG		0x0160
+#define SLI4_BMBX_MASK_HI	0x3
+#define SLI4_BMBX_MASK_LO	0xf
+#define SLI4_BMBX_RDY		(1 << 0)
+#define SLI4_BMBX_HI		(1 << 1)
+#define SLI4_BMBX_WRITE_HI(r) \
+	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | SLI4_BMBX_HI)
+#define SLI4_BMBX_WRITE_LO(r) \
+	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) << 30) | \
+	 (((r) & ~SLI4_BMBX_MASK_LO) >> 2))
+#define SLI4_BMBX_SIZE				256
+
+/* SLIPORT_CONTROL - SLI Port Control Register */
+#define SLI4_PORT_CTRL_REG	0x0408
+#define SLI4_PORT_CTRL_IP	(1 << 27)
+#define SLI4_PORT_CTRL_IDIS	(1 << 22)
+#define SLI4_PORT_CTRL_FDD	(1 << 31)
+
+/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
+#define SLI4_PORT_ERROR1	0x040c
+#define SLI4_PORT_ERROR2	0x0410
+
+/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
+#define SLI4_EQCQ_DB_REG	0x120
+enum {
+	SLI4_EQ_ID_LO_MASK	= 0x01FF,
+
+	SLI4_CQ_ID_LO_MASK	= 0x03FF,
+
+	SLI4_EQCQ_CI_EQ		= 0x0200,
+
+	SLI4_EQCQ_QT_EQ		= 0x00000400,
+	SLI4_EQCQ_QT_CQ		= 0x00000000,
+
+	SLI4_EQCQ_ID_HI_SHIFT	= 11,
+	SLI4_EQCQ_ID_HI_MASK	= 0xF800,
+
+	SLI4_EQCQ_NUM_SHIFT	= 16,
+	SLI4_EQCQ_NUM_MASK	= 0x1FFF0000,
+
+	SLI4_EQCQ_ARM		= 0x20000000,
+	SLI4_EQCQ_UNARM		= 0x00000000,
+};
+
+#define SLI4_EQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ | \
+	 ((((id) >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
+	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | \
+	 (a) | SLI4_EQCQ_CI_EQ)
+
+#define SLI4_CQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ | \
+	 ((((id) >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
+	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | (a))
+
+/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
+#define SLI4_IF6_EQ_DB_REG	0x120
+enum {
+	SLI4_IF6_EQ_ID_MASK	= 0x0FFF,
+
+	SLI4_IF6_EQ_NUM_SHIFT	= 16,
+	SLI4_IF6_EQ_NUM_MASK	= 0x1FFF0000,
+};
+
+#define SLI4_IF6_EQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_IF6_EQ_ID_MASK) | \
+	 (((n) << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK) | (a))
+
+/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6 */
+#define SLI4_IF6_CQ_DB_REG	0xC0
+enum {
+	SLI4_IF6_CQ_ID_MASK	= 0xFFFF,
+
+	SLI4_IF6_CQ_NUM_SHIFT	= 16,
+	SLI4_IF6_CQ_NUM_MASK	= 0x1FFF0000,
+};
+
+#define SLI4_IF6_CQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_IF6_CQ_ID_MASK) | \
+	 (((n) << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK) | (a))
+
+/* MQ_DOORBELL - MQ Doorbell Register */
+#define SLI4_MQ_DB_REG		0x0140
+#define SLI4_IF6_MQ_DB_REG	0x0160
+enum {
+	SLI4_MQ_ID_MASK		= 0xFFFF,
+
+	SLI4_MQ_NUM_SHIFT	= 16,
+	SLI4_MQ_NUM_MASK	= 0x3FFF0000,
+};
+
+#define SLI4_MQ_DOORBELL(n, i) \
+	(((i) & SLI4_MQ_ID_MASK) | \
+	 (((n) << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK))
+
+/* RQ_DOORBELL - RQ Doorbell Register */
+#define SLI4_RQ_DB_REG		0x0a0
+#define SLI4_IF6_RQ_DB_REG	0x0080
+enum {
+	SLI4_RQ_DB_ID_MASK	= 0xFFFF,
+
+	SLI4_RQ_DB_NUM_SHIFT	= 16,
+	SLI4_RQ_DB_NUM_MASK	= 0x3FFF0000,
+};
+
+#define SLI4_RQ_DOORBELL(n, i) \
+	(((i) & SLI4_RQ_DB_ID_MASK) | \
+	 (((n) << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK))
+
+/* WQ_DOORBELL - WQ Doorbell Register */
+#define SLI4_IO_WQ_DB_REG	0x040
+#define SLI4_IF6_WQ_DB_REG	0x040
+enum {
+	SLI4_WQ_ID_MASK		= 0xFFFF,
+
+	SLI4_WQ_IDX_SHIFT	= 16,
+	SLI4_WQ_IDX_MASK	= 0xFF << SLI4_WQ_IDX_SHIFT,
+
+	SLI4_WQ_NUM_SHIFT	= 24,
+	SLI4_WQ_NUM_MASK	= 0xFF << SLI4_WQ_NUM_SHIFT,
+};
+
+#define SLI4_WQ_DOORBELL(n, x, i) \
+	(((i) & SLI4_WQ_ID_MASK) | \
+	 (((x) << SLI4_WQ_IDX_SHIFT) & SLI4_WQ_IDX_MASK) | \
+	 (((n) << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK))
+
+/* SLIPORT_SEMAPHORE - SLI Port Host and Port Status Register */
+#define SLI4_PORT_SEMP_REG		0x0400
+enum {
+	SLI4_PORT_SEMP_ERR_MASK		= 0xF000,
+	SLI4_PORT_SEMP_UNRECOV_ERR	= 0xF000,
+};
+
+/* SLIPORT_STATUS - SLI Port Status Register */
+#define SLI4_PORT_STATUS_REGOFF		0x0404
+#define SLI4_PORT_STATUS_FDP		(1 << 21)
+#define SLI4_PORT_STATUS_RDY		(1 << 23)
+#define SLI4_PORT_STATUS_RN		(1 << 24)
+#define SLI4_PORT_STATUS_DIP		(1 << 25)
+#define SLI4_PORT_STATUS_OTI		(1 << 29)
+#define SLI4_PORT_STATUS_ERR		(1 << 31)
+
+#define SLI4_PHYDEV_CTRL_REG		0x0414
+#define SLI4_PHYDEV_CTRL_FRST		(1 << 1)
+#define SLI4_PHYDEV_CTRL_DD		(1 << 2)
+
+/* Register name enums */
+enum sli4_regname_en {
+	SLI4_REG_BMBX,
+	SLI4_REG_EQ_DOORBELL,
+	SLI4_REG_CQ_DOORBELL,
+	SLI4_REG_RQ_DOORBELL,
+	SLI4_REG_IO_WQ_DOORBELL,
+	SLI4_REG_MQ_DOORBELL,
+	SLI4_REG_PHYSDEV_CONTROL,
+	SLI4_REG_PORT_CONTROL,
+	SLI4_REG_PORT_ERROR1,
+	SLI4_REG_PORT_ERROR2,
+	SLI4_REG_PORT_SEMAPHORE,
+	SLI4_REG_PORT_STATUS,
+	SLI4_REG_MAX			/* must be last */
+};
+
+struct sli4_reg {
+	u32	rset;
+	u32	off;
+};
+
+#endif /* !_SLI4_H */
-- 
2.13.7



* [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2019-12-20 22:36 ` [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  7:24   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
                   ` (30 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds SLI-4 data structures and defines for:
- Buffer Descriptors (BDEs)
- Scatter/Gather List elements (SGEs)
- Queues and their Entry Descriptions for:
   Event Queues (EQs), Completion Queues (CQs),
   Receive Queues (RQs), and the Mailbox Queue (MQ).

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
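A rough sketch (not part of the patch; the helper names are illustrative
only, 'dma' is assumed to be an already-mapped efc_dma, and standard
kernel helpers such as cpu_to_le32() and upper/lower_32_bits() are
assumed) of how a 64-bit data BDE and a data SGE defined below might be
filled in:

static void sli4_sketch_fill_bde(struct sli4_bde *bde, struct efc_dma *dma)
{
	/* BDE type in the top byte, buffer length in the low 24 bits */
	u32 type_len = (BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
		       (dma->size & SLI4_BDE_MASK_BUFFER_LEN);

	bde->bde_type_buflen = cpu_to_le32(type_len);
	bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
	bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
}

static void sli4_sketch_fill_sge(struct sli4_sge *sge, struct efc_dma *dma,
				 bool last)
{
	/* data SGE; the LAST bit marks the final entry of the list */
	u32 flags = SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;

	if (last)
		flags |= SLI4_SGE_LAST;

	sge->buffer_address_high = cpu_to_le32(upper_32_bits(dma->phys));
	sge->buffer_address_low = cpu_to_le32(lower_32_bits(dma->phys));
	sge->dw2_flags = cpu_to_le32(flags);
	sge->buffer_length = cpu_to_le32(dma->size);
}
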
 drivers/scsi/elx/include/efc_common.h |   25 +
 drivers/scsi/elx/libefc_sli/sli4.h    | 1768 +++++++++++++++++++++++++++++++++
 2 files changed, 1793 insertions(+)
 create mode 100644 drivers/scsi/elx/include/efc_common.h

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
new file mode 100644
index 000000000000..3fc48876c531
--- /dev/null
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_COMMON_H__
+#define __EFC_COMMON_H__
+
+#include <linux/pci.h>
+
+#define EFC_SUCCESS 0
+#define EFC_FAIL 1
+
+struct efc_dma {
+	void		*virt;
+	void            *alloc;
+	dma_addr_t	phys;
+
+	size_t		size;
+	size_t          len;
+	struct pci_dev	*pdev;
+};
+
+#endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 02c671cf57ef..f86a9e72ed43 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -12,6 +12,8 @@
 #ifndef _SLI4_H
 #define _SLI4_H
 
+#include "../include/efc_common.h"
+
 /*************************************************************************
  * Common SLI-4 register offsets and field definitions
  */
@@ -236,4 +238,1770 @@ struct sli4_reg {
 	u32	off;
 };
 
+struct sli4_dmaaddr {
+	__le32 low;
+	__le32 high;
+};
+
+/* a 3-word BDE with address 1st 2 words, length last word */
+struct sli4_bufptr {
+	struct sli4_dmaaddr addr;
+	__le32 length;
+};
+
+/* a 3-word BDE with length as first word, address last 2 words */
+struct sli4_bufptr_len1st {
+	__le32 length0;
+	struct sli4_dmaaddr addr;
+};
+
+/* Buffer Descriptor Entry (BDE) */
+enum {
+	SLI4_BDE_MASK_BUFFER_LEN	= 0x00ffffff,
+	SLI4_BDE_MASK_BDE_TYPE		= 0xff000000,
+};
+
+struct sli4_bde {
+	__le32		bde_type_buflen;
+	union {
+		struct sli4_dmaaddr data;
+		struct {
+			__le32	offset;
+			__le32	rsvd2;
+		} imm;
+		struct sli4_dmaaddr blp;
+	} u;
+};
+
+/* Buffer Descriptors */
+enum {
+	BDE_TYPE_SHIFT		= 24,
+	BDE_TYPE_BDE_64		= 0x00,	/* Generic 64-bit data */
+	BDE_TYPE_BDE_IMM	= 0x01,	/* Immediate data */
+	BDE_TYPE_BLP		= 0x40,	/* Buffer List Pointer */
+};
+
+/* Scatter-Gather Entry (SGE) */
+#define SLI4_SGE_MAX_RESERVED			3
+
+enum {
+	/* DW2 */
+	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,
+	/*DW2W1*/
+	SLI4_SGE_TYPE_SHIFT		= 27,
+	SLI4_SGE_TYPE_MASK		= 0xf << SLI4_SGE_TYPE_SHIFT,
+	/*SGE Types*/
+	SLI4_SGE_TYPE_DATA		= 0x00,
+	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
+	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
+	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
+	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
+	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
+	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
+	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
+	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
+
+	SLI4_SGE_LAST			= (1 << 31),
+};
+
+struct sli4_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		buffer_length;
+};
+
+/* T10 DIF Scatter-Gather Entry (SGE) */
+struct sli4_dif_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		rsvd12;
+};
+
+/* Data Integrity Seed (DISEED) SGE */
+enum {
+	/* DW2W1 */
+	DISEED_SGE_HS			= (1 << 2),
+	DISEED_SGE_WS			= (1 << 3),
+	DISEED_SGE_IC			= (1 << 4),
+	DISEED_SGE_ICS			= (1 << 5),
+	DISEED_SGE_ATRT			= (1 << 6),
+	DISEED_SGE_AT			= (1 << 7),
+	DISEED_SGE_FAT			= (1 << 8),
+	DISEED_SGE_NA			= (1 << 9),
+	DISEED_SGE_HI			= (1 << 10),
+
+	/* DW3W1 */
+	DISEED_SGE_BS_MASK		= 0x0007,
+	DISEED_SGE_AI			= (1 << 3),
+	DISEED_SGE_ME			= (1 << 4),
+	DISEED_SGE_RE			= (1 << 5),
+	DISEED_SGE_CE			= (1 << 6),
+	DISEED_SGE_NR			= (1 << 7),
+
+	DISEED_SGE_OP_RX_SHIFT		= 8,
+	DISEED_SGE_OP_RX_MASK		= (0xf << DISEED_SGE_OP_RX_SHIFT),
+	DISEED_SGE_OP_TX_SHIFT		= 12,
+	DISEED_SGE_OP_TX_MASK		= (0xf << DISEED_SGE_OP_TX_SHIFT),
+
+	/* Opcode values */
+	DISEED_SGE_OP_IN_NODIF_OUT_CRC	= 0x00,
+	DISEED_SGE_OP_IN_CRC_OUT_NODIF	= 0x01,
+	DISEED_SGE_OP_IN_NODIF_OUT_CSUM	= 0x02,
+	DISEED_SGE_OP_IN_CSUM_OUT_NODIF	= 0x03,
+	DISEED_SGE_OP_IN_CRC_OUT_CRC	= 0x04,
+	DISEED_SGE_OP_IN_CSUM_OUT_CSUM	= 0x05,
+	DISEED_SGE_OP_IN_CRC_OUT_CSUM	= 0x06,
+	DISEED_SGE_OP_IN_CSUM_OUT_CRC	= 0x07,
+	DISEED_SGE_OP_IN_RAW_OUT_RAW	= 0x08,
+
+};
+
+#define DISEED_SGE_OP_RX_VALUE(stype) \
+	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_RX_SHIFT)
+#define DISEED_SGE_OP_TX_VALUE(stype) \
+	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_TX_SHIFT)
+
+struct sli4_diseed_sge {
+	__le32		ref_tag_cmp;
+	__le32		ref_tag_repl;
+	__le16		app_tag_repl;
+	__le16		dw2w1_flags;
+	__le16		app_tag_cmp;
+	__le16		dw3w1_flags;
+};
+
+/* List Segment Pointer Scatter-Gather Entry (SGE) */
+enum {
+	SLI4_LSP_SGE_SEGLEN	= 0x00ffffff,
+};
+
+struct sli4_lsp_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		dw3_seglen;
+};
+
+enum {
+	SLI4_EQE_VALID	= 1,
+	SLI4_EQE_MJCODE	= 0xe,
+	SLI4_EQE_MNCODE	= 0xfff0,
+};
+
+struct sli4_eqe {
+	__le16		dw0w0_flags;
+	__le16		resource_id;
+};
+
+#define SLI4_MAJOR_CODE_STANDARD	0
+#define SLI4_MAJOR_CODE_SENTINEL	1
+
+enum {
+	SLI4_MCQE_CONSUMED	= (1 << 27),
+	SLI4_MCQE_COMPLETED	= (1 << 28),
+	SLI4_MCQE_AE		= (1 << 30),
+	SLI4_MCQE_VALID		= (1 << 31),
+};
+
+struct sli4_mcqe {
+	__le16		completion_status;
+	__le16		extended_status;
+	__le32		mqe_tag_low;
+	__le32		mqe_tag_high;
+	__le32		dw3_flags;
+};
+
+enum {
+	SLI4_ACQE_AE	= (1 << 6), /* async event - this is an ACQE */
+	SLI4_ACQE_VAL	= (1 << 7), /* valid - contents of CQE are valid */
+};
+
+struct sli4_acqe {
+	__le32		event_data[3];
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		ae_val;
+};
+
+#define SLI4_ACQE_EVENT_CODE_LINK_STATE		0x01
+#define SLI4_ACQE_EVENT_CODE_FIP		0x02
+#define SLI4_ACQE_EVENT_CODE_DCBX		0x03
+#define SLI4_ACQE_EVENT_CODE_ISCSI		0x04
+#define SLI4_ACQE_EVENT_CODE_GRP_5		0x05
+#define SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT	0x10
+#define SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT	0x11
+#define SLI4_ACQE_EVENT_CODE_VF_EVENT		0x12
+#define SLI4_ACQE_EVENT_CODE_MR_EVENT		0x13
+
+enum sli4_qtype {
+	SLI_QTYPE_EQ,
+	SLI_QTYPE_CQ,
+	SLI_QTYPE_MQ,
+	SLI_QTYPE_WQ,
+	SLI_QTYPE_RQ,
+	SLI_QTYPE_MAX,			/* must be last */
+};
+
+#define SLI_USER_MQ_COUNT	1
+#define SLI_MAX_CQ_SET_COUNT	16
+#define SLI_MAX_RQ_SET_COUNT	16
+
+enum sli4_qentry {
+	SLI_QENTRY_ASYNC,
+	SLI_QENTRY_MQ,
+	SLI_QENTRY_RQ,
+	SLI_QENTRY_WQ,
+	SLI_QENTRY_WQ_RELEASE,
+	SLI_QENTRY_OPT_WRITE_CMD,
+	SLI_QENTRY_OPT_WRITE_DATA,
+	SLI_QENTRY_XABT,
+	SLI_QENTRY_MAX			/* must be last */
+};
+
+enum {
+	/* CQ has MQ/Async completion */
+	SLI4_QUEUE_FLAG_MQ	= (1 << 0),
+
+	/* RQ for packet headers */
+	SLI4_QUEUE_FLAG_HDR	= (1 << 1),
+
+	/* RQ index increment by 8 */
+	SLI4_QUEUE_FLAG_RQBATCH	= (1 << 2),
+};
+
+struct sli4_queue {
+	/* Common to all queue types */
+	struct efc_dma	dma;
+	spinlock_t	lock;	/* protect the queue operations */
+	u32	index;		/* current host entry index */
+	u16	size;		/* entry size */
+	u16	length;		/* number of entries */
+	u16	n_posted;	/* number entries posted */
+	u16	id;		/* Port assigned xQ_ID */
+	u16	ulp;		/* ULP assigned to this queue */
+	void __iomem    *db_regaddr;	/* register address for the doorbell */
+	u8		type;		/* queue type ie EQ, CQ, ... */
+	u32	proc_limit;	/* limit CQE processed per iteration */
+	u32	posted_limit;	/* CQEs/EQEs processed before ringing doorbell */
+	u32	max_num_processed;
+	time_t		max_process_time;
+	u16	phase;		/* For if_type = 6, this value toggles
+				 * for each iteration of the queue;
+				 * a queue entry is valid when a cqe
+				 * valid bit matches this value
+				 */
+
+	union {
+		u32	r_idx;	/* "read" index (MQ only) */
+		struct {
+			u32	dword;
+		} flag;
+	} u;
+};
+
+/* Generic Command Request header */
+enum {
+	CMD_V0 = 0x00,
+	CMD_V1 = 0x01,
+	CMD_V2 = 0x02,
+};
+
+struct sli4_rqst_hdr {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	__le32		timeout;
+	__le32		request_length;
+	__le32		dw3_version;
+};
+
+/* Generic Command Response header */
+struct sli4_rsp_hdr {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	u8		status;
+	u8		additional_status;
+	__le16		rsvd6;
+	__le32		response_length;
+	__le32		actual_response_length;
+};
+
+#define SLI4_QUEUE_DEFAULT_CQ	U16_MAX
+
+#define SLI4_QUEUE_RQ_BATCH	8
+
+#define CFG_RQST_CMDSZ(stype)	sizeof(struct sli4_rqst_##stype)
+
+#define CFG_RQST_PYLD_LEN(stype) \
+		cpu_to_le32(sizeof(struct sli4_rqst_##stype) - \
+			sizeof(struct sli4_rqst_hdr))
+
+#define CFG_RQST_PYLD_LEN_VAR(stype, varpyld) \
+		cpu_to_le32((sizeof(struct sli4_rqst_##stype) + \
+			varpyld) - sizeof(struct sli4_rqst_hdr))
+
+#define SZ_DMAADDR		sizeof(struct sli4_dmaaddr)
+
+#define SLI_CONFIG_PYLD_LENGTH(stype) \
+		max(sizeof(struct sli4_rqst_##stype), \
+		sizeof(struct sli4_rsp_##stype))
+
+enum {
+	/* DW5_flags values*/
+	CREATE_CQV2_CLSWM_MASK	= 0x00003000,
+	CREATE_CQV2_NODELAY	= 0x00004000,
+	CREATE_CQV2_AUTOVALID	= 0x00008000,
+	CREATE_CQV2_CQECNT_MASK	= 0x18000000,
+	CREATE_CQV2_VALID	= 0x20000000,
+	CREATE_CQV2_EVT		= 0x80000000,
+	/* DW6W1_flags values*/
+	CREATE_CQV2_ARM		= 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_v2 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		page_size;
+	u8		rsvd19;
+	__le32		dw5_flags;
+	__le16		eq_id;
+	__le16		dw6w1_arm;
+	__le16		cqe_count;
+	__le16		rsvd30;
+	__le32		rsvd32;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+enum {
+	/* DW5_flags values*/
+	CREATE_CQSETV0_CLSWM_MASK  = 0x00003000,
+	CREATE_CQSETV0_NODELAY	   = 0x00004000,
+	CREATE_CQSETV0_AUTOVALID   = 0x00008000,
+	CREATE_CQSETV0_CQECNT_MASK = 0x18000000,
+	CREATE_CQSETV0_VALID	   = 0x20000000,
+	CREATE_CQSETV0_EVT	   = 0x80000000,
+	/* DW5W1_flags values */
+	CREATE_CQSETV0_CQE_COUNT   = 0x7fff,
+	CREATE_CQSETV0_ARM	   = 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_set_v0 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		page_size;
+	u8		rsvd19;
+	__le32		dw5_flags;
+	__le16		num_cq_req;
+	__le16		dw6w1_flags;
+	__le16		eq_id[16];
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+/* CQE count */
+enum {
+	CQ_CNT_SHIFT	= 27,
+
+	CQ_CNT_256	= 0,
+	CQ_CNT_512	= 1,
+	CQ_CNT_1024	= 2,
+	CQ_CNT_LARGE	= 3,
+};
+#define CQ_CNT_VAL(type)		(CQ_CNT_##type << CQ_CNT_SHIFT)
+
+#define SLI4_CQE_BYTES			(4 * sizeof(u32))
+
+#define SLI4_CMN_CREATE_CQ_V2_MAX_PAGES	8
+
+/* Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion */
+struct sli4_rsp_cmn_create_queue {
+	struct sli4_rsp_hdr	hdr;
+	__le16	q_id;
+	u8	rsvd18;
+	u8	ulp;
+	__le32	db_offset;
+	__le16	db_rs;
+	__le16	db_fmt;
+};
+
+struct sli4_rsp_cmn_create_queue_set {
+	struct sli4_rsp_hdr	hdr;
+	__le16	q_id;
+	__le16	num_q_allocated;
+};
+
+/* Common Destroy Queue */
+struct sli4_rqst_cmn_destroy_q {
+	struct sli4_rqst_hdr	hdr;
+	__le16	q_id;
+	__le16	rsvd;
+};
+
+struct sli4_rsp_cmn_destroy_q {
+	struct sli4_rsp_hdr	hdr;
+};
+
+/* Modify the delay multiplier for EQs */
+struct sli4_rqst_cmn_modify_eq_delay {
+	struct sli4_rqst_hdr	hdr;
+	__le32	num_eq;
+	struct {
+		__le32	eq_id;
+		__le32	phase;
+		__le32	delay_multiplier;
+	} eq_delay_record[8];
+};
+
+struct sli4_rsp_cmn_modify_eq_delay {
+	struct sli4_rsp_hdr	hdr;
+};
+
+enum {
+	/* DW5 */
+	CREATE_EQ_AUTOVALID		= (1 << 28),
+	CREATE_EQ_VALID			= (1 << 29),
+	CREATE_EQ_EQESZ			= (1 << 31),
+	/* DW6 */
+	CREATE_EQ_COUNT			= (7 << 26),
+	CREATE_EQ_ARM			= (1 << 31),
+	/* DW7 */
+	CREATE_EQ_DELAYMULTI_SHIFT	= 13,
+	CREATE_EQ_DELAYMULTI_MASK	= (0x3FF << CREATE_EQ_DELAYMULTI_SHIFT),
+	CREATE_EQ_DELAYMULTI		= (32 << CREATE_EQ_DELAYMULTI_SHIFT),
+};
+
+struct sli4_rqst_cmn_create_eq {
+	struct sli4_rqst_hdr	hdr;
+	__le16	num_pages;
+	__le16	rsvd18;
+	__le32	dw5_flags;
+	__le32	dw6_flags;
+	__le32	dw7_delaymulti;
+	__le32	rsvd32;
+	struct sli4_dmaaddr page_address[8];
+};
+
+struct sli4_rsp_cmn_create_eq {
+	struct sli4_rsp_cmn_create_queue q_rsp;
+};
+
+/* EQ count */
+enum {
+	EQ_CNT_SHIFT	= 26,
+
+	EQ_CNT_256	= 0,
+	EQ_CNT_512	= 1,
+	EQ_CNT_1024	= 2,
+	EQ_CNT_2048	= 3,
+	EQ_CNT_4096	= 3,
+};
+#define EQ_CNT_VAL(type) (EQ_CNT_##type << EQ_CNT_SHIFT)
+
+#define SLI4_EQE_SIZE_4			0
+#define SLI4_EQE_SIZE_16		1
+
+/* Create a Mailbox Queue; accommodate v0 and v1 forms. */
+enum {
+	/* DW6W1 */
+	CREATE_MQEXT_RINGSIZE		= 0xf,
+	CREATE_MQEXT_CQID_SHIFT		= 6,
+	CREATE_MQEXT_CQIDV0_MASK	= 0xffc0,
+	/* DW7 */
+	CREATE_MQEXT_VAL		= (1 << 31),
+	/* DW8 */
+	CREATE_MQEXT_ACQV		= (1 << 0),
+	CREATE_MQEXT_ASYNC_CQIDV0	= 0x7fe,
+};
+
+struct sli4_rqst_cmn_create_mq_ext {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	__le16		cq_id_v1;
+	__le32		async_event_bitmap;
+	__le16		async_cq_id_v1;
+	__le16		dw6w1_flags;
+	__le32		dw7_val;
+	__le32		dw8_flags;
+	__le32		rsvd36;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+struct sli4_rsp_cmn_create_mq_ext {
+	struct sli4_rsp_cmn_create_queue q_rsp;
+};
+
+#define SLI4_MQE_SIZE_16		0x05
+#define SLI4_MQE_SIZE_32		0x06
+#define SLI4_MQE_SIZE_64		0x07
+#define SLI4_MQE_SIZE_128		0x08
+
+#define SLI4_ASYNC_EVT_LINK_STATE	(1 << 1)
+#define SLI4_ASYNC_EVT_FIP		(1 << 2)
+#define SLI4_ASYNC_EVT_GRP5		(1 << 5)
+#define SLI4_ASYNC_EVT_FC		(1 << 16)
+#define SLI4_ASYNC_EVT_SLI_PORT		(1 << 17)
+
+#define	SLI4_ASYNC_EVT_FC_ALL \
+		(SLI4_ASYNC_EVT_LINK_STATE	| \
+		 SLI4_ASYNC_EVT_FIP		| \
+		 SLI4_ASYNC_EVT_GRP5		| \
+		 SLI4_ASYNC_EVT_FC		| \
+		 SLI4_ASYNC_EVT_SLI_PORT)
+
+/* Create a Completion Queue. */
+struct sli4_rqst_cmn_create_cq_v0 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	__le16		rsvd18;
+	__le32		dw5_flags;
+	__le32		dw6_flags;
+	__le32		rsvd28;
+	__le32		rsvd32;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+enum {
+	SLI4_RQ_CREATE_DUA		= 0x1,
+	SLI4_RQ_CREATE_BQU		= 0x2,
+
+	SLI4_RQE_SIZE			= 8,
+	SLI4_RQE_SIZE_8			= 0x2,
+	SLI4_RQE_SIZE_16		= 0x3,
+	SLI4_RQE_SIZE_32		= 0x4,
+	SLI4_RQE_SIZE_64		= 0x5,
+	SLI4_RQE_SIZE_128		= 0x6,
+
+	SLI4_RQ_PAGE_SIZE_4096		= 0x1,
+	SLI4_RQ_PAGE_SIZE_8192		= 0x2,
+	SLI4_RQ_PAGE_SIZE_16384		= 0x4,
+	SLI4_RQ_PAGE_SIZE_32768		= 0x8,
+	SLI4_RQ_PAGE_SIZE_64536		= 0x10,
+
+	SLI4_RQ_CREATE_V0_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V0_MIN_BUF_SIZE	= 128,
+	SLI4_RQ_CREATE_V0_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		dua_bqu_byte;
+	u8		ulp;
+	__le16		rsvd16;
+	u8		rqe_count_byte;
+	u8		rsvd19;
+	__le32		rsvd20;
+	__le16		buffer_size;
+	__le16		cq_id;
+	__le32		rsvd28;
+	struct sli4_dmaaddr page_phys_addr[SLI4_RQ_CREATE_V0_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum {
+	SLI4_RQ_CREATE_V1_DNB		= 0x80,
+	SLI4_RQ_CREATE_V1_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V1_MIN_BUF_SIZE	= 64,
+	SLI4_RQ_CREATE_V1_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create_v1 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		rsvd14;
+	u8		dim_dfd_dnb;
+	u8		page_size;
+	u8		rqe_size_byte;
+	__le16		rqe_count;
+	__le32		rsvd20;
+	__le16		rsvd24;
+	__le16		cq_id;
+	__le32		buffer_size;
+	struct sli4_dmaaddr page_phys_addr[SLI4_RQ_CREATE_V1_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create_v1 {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum {
+	SLI4_RQCREATEV2_DNB = 0x80,
+};
+
+struct sli4_rqst_rq_create_v2 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		rq_count;
+	u8		dim_dfd_dnb;
+	u8		page_size;
+	u8		rqe_size_byte;
+	__le16		rqe_count;
+	__le16		hdr_buffer_size;
+	__le16		payload_buffer_size;
+	__le16		base_cq_id;
+	__le16		rsvd26;
+	__le32		rsvd42;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+struct sli4_rsp_rq_create_v2 {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+#define SLI4_CQE_CODE_OFFSET			14
+
+#define SLI4_CQE_CODE_WORK_REQUEST_COMPLETION	0x01
+#define SLI4_CQE_CODE_RELEASE_WQE		0x02
+#define SLI4_CQE_CODE_RQ_ASYNC			0x04
+#define SLI4_CQE_CODE_XRI_ABORTED		0x05
+#define SLI4_CQE_CODE_RQ_COALESCING		0x06
+#define SLI4_CQE_CODE_RQ_CONSUMPTION		0x07
+#define SLI4_CQE_CODE_MEASUREMENT_REPORTING	0x08
+#define SLI4_CQE_CODE_RQ_ASYNC_V1		0x09
+#define SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD	0x0B
+#define SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA	0x0C
+
+#define SLI4_WQ_CREATE_MAX_PAGES		8
+struct sli4_rqst_wq_create {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	__le16		cq_id;
+	u8		page_size;
+	u8		wqe_size_byte;
+	__le16		wqe_count;
+	__le32		rsvd;
+	struct	sli4_dmaaddr
+			page_phys_addr[SLI4_WQ_CREATE_MAX_PAGES];
+};
+
+struct sli4_rsp_wq_create {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum {
+	LINK_TYPE_SHIFT			= 6,
+	LINK_TYPE_MASK			= 0x03 << LINK_TYPE_SHIFT,
+	LINK_TYPE_ETHERNET		= 0x00 << LINK_TYPE_SHIFT,
+	LINK_TYPE_FC			= 0x01 << LINK_TYPE_SHIFT,
+
+	PORT_SPEED_NO_LINK		= 0x0,
+	PORT_SPEED_10_MBPS		= 0x1,
+	PORT_SPEED_100_MBP		= 0x2,
+	PORT_SPEED_1_GBPS		= 0x3,
+	PORT_SPEED_10_GBPS		= 0x4,
+	PORT_SPEED_20_GBPS		= 0x5,
+	PORT_SPEED_25_GBPS		= 0x6,
+	PORT_SPEED_40_GBPS		= 0x7,
+	PORT_SPEED_100_GBPS		= 0x8,
+
+	PORT_LINK_STATUS_PHYSICAL_DOWN	= 0x0,
+	PORT_LINK_STATUS_PHYSICAL_UP	= 0x1,
+	PORT_LINK_STATUS_LOGICAL_DOWN	= 0x2,
+	PORT_LINK_STATUS_LOGICAL_UP	= 0x3,
+
+	PORT_DUPLEX_NONE		= 0x0,
+	PORT_DUPLEX_HWF			= 0x1,
+	PORT_DUPLEX_FULL		= 0x2,
+
+	/*Link Event Type*/
+	LINK_STATE_PHYSICAL		= 0x00,
+	LINK_STATE_LOGICAL		= 0x01,
+};
+
+struct sli4_link_state {
+	u8		link_num_type;
+	u8		port_link_status;
+	u8		port_duplex;
+	u8		port_speed;
+	u8		port_fault;
+	u8		rsvd5;
+	__le16		logical_link_speed;
+	__le32		event_tag;
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		flags;
+};
+
+enum {
+	LINK_ATTN_TYPE_LINK_UP		= 0x01,
+	LINK_ATTN_TYPE_LINK_DOWN	= 0x02,
+	LINK_ATTN_TYPE_NO_HARD_ALPA	= 0x03,
+
+	LINK_ATTN_P2P			= 0x01,
+	LINK_ATTN_FC_AL			= 0x02,
+	LINK_ATTN_INTERNAL_LOOPBACK	= 0x03,
+	LINK_ATTN_SERDES_LOOPBACK	= 0x04,
+
+	LINK_ATTN_1G			= 0x01,
+	LINK_ATTN_2G			= 0x02,
+	LINK_ATTN_4G			= 0x04,
+	LINK_ATTN_8G			= 0x08,
+	LINK_ATTN_10G			= 0x0a,
+	LINK_ATTN_16G			= 0x10,
+};
+
+struct sli4_link_attention {
+	u8		link_number;
+	u8		attn_type;
+	u8		topology;
+	u8		port_speed;
+	u8		port_fault;
+	u8		shared_link_status;
+	__le16		logical_link_speed;
+	__le32		event_tag;
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		flags;
+};
+
+enum {
+	FC_EVENT_LINK_ATTENTION		= 0x01,
+	FC_EVENT_SHARED_LINK_ATTENTION	= 0x02,
+};
+
+enum {
+	SLI4_WCQE_XB = 0x10,
+	SLI4_WCQE_QX = 0x80,
+};
+
+struct sli4_fc_wcqe {
+	u8		hw_status;
+	u8		status;
+	__le16		request_tag;
+	__le32		wqe_specific_1;
+	__le32		wqe_specific_2;
+	u8		rsvd12;
+	u8		qx_byte;
+	u8		code;
+	u8		flags;
+};
+
+/* FC WQ consumed CQ queue entry */
+struct sli4_fc_wqec {
+	__le32		rsvd0;
+	__le32		rsvd1;
+	__le16		wqe_index;
+	__le16		wq_id;
+	__le16		rsvd12;
+	u8		code;
+	u8		vld_byte;
+};
+
+/* FC Completion Status Codes. */
+#define SLI4_FC_WCQE_STATUS_SUCCESS			0x00
+#define SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE		0x01
+#define SLI4_FC_WCQE_STATUS_REMOTE_STOP			0x02
+#define SLI4_FC_WCQE_STATUS_LOCAL_REJECT		0x03
+#define SLI4_FC_WCQE_STATUS_NPORT_RJT			0x04
+#define SLI4_FC_WCQE_STATUS_FABRIC_RJT			0x05
+#define SLI4_FC_WCQE_STATUS_NPORT_BSY			0x06
+#define SLI4_FC_WCQE_STATUS_FABRIC_BSY			0x07
+#define SLI4_FC_WCQE_STATUS_LS_RJT			0x09
+#define SLI4_FC_WCQE_STATUS_CMD_REJECT			0x0b
+#define SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK		0x0c
+#define SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED		0x11
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED	0x12
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC		0x13
+#define SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE		0x14
+#define SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE		0x15
+#define SLI4_FC_WCQE_STATUS_DI_ERROR			0x16
+#define SLI4_FC_WCQE_STATUS_BA_RJT			0x17
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED	0x18
+#define SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC		0x19
+#define SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT		0x1a
+#define SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST		0x1b
+
+/* DI_ERROR Extended Status */
+#define SLI4_FC_DI_ERROR_GE			(1 << 0)
+#define SLI4_FC_DI_ERROR_AE			(1 << 1)
+#define SLI4_FC_DI_ERROR_RE			(1 << 2)
+#define SLI4_FC_DI_ERROR_TDPV			(1 << 3)
+#define SLI4_FC_DI_ERROR_UDB			(1 << 4)
+#define SLI4_FC_DI_ERROR_EDIR			(1 << 5)
+
+/* WQE DIF field contents */
+#define SLI4_DIF_DISABLED			0
+#define SLI4_DIF_PASS_THROUGH			1
+#define SLI4_DIF_STRIP				2
+#define SLI4_DIF_INSERT				3
+
+/* driver generated status codes */
+#define SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT	0xff
+#define SLI4_FC_WCQE_STATUS_SHUTDOWN		0xfe
+#define SLI4_FC_WCQE_STATUS_DISPATCH_ERROR	0xfd
+
+/* Work Queue Entry (WQE) types */
+#define SLI4_WQE_ABORT				0x0f
+#define SLI4_WQE_ELS_REQUEST64			0x8a
+#define SLI4_WQE_FCP_IBIDIR64			0xac
+#define SLI4_WQE_FCP_IREAD64			0x9a
+#define SLI4_WQE_FCP_IWRITE64			0x98
+#define SLI4_WQE_FCP_ICMND64			0x9c
+#define SLI4_WQE_FCP_TRECEIVE64			0xa1
+#define SLI4_WQE_FCP_CONT_TRECEIVE64		0xe5
+#define SLI4_WQE_FCP_TRSP64			0xa3
+#define SLI4_WQE_FCP_TSEND64			0x9f
+#define SLI4_WQE_GEN_REQUEST64			0xc2
+#define SLI4_WQE_SEND_FRAME			0xe1
+#define SLI4_WQE_XMIT_BCAST64			0x84
+#define SLI4_WQE_XMIT_BLS_RSP			0x97
+#define SLI4_WQE_ELS_RSP64			0x95
+#define SLI4_WQE_XMIT_SEQUENCE64		0x82
+#define SLI4_WQE_REQUEUE_XRI			0x93
+
+/* WQE command types */
+#define SLI4_CMD_FCP_IREAD64_WQE		0x00
+#define SLI4_CMD_FCP_ICMND64_WQE		0x00
+#define SLI4_CMD_FCP_IWRITE64_WQE		0x01
+#define SLI4_CMD_FCP_TRECEIVE64_WQE		0x02
+#define SLI4_CMD_FCP_TRSP64_WQE			0x03
+#define SLI4_CMD_FCP_TSEND64_WQE		0x07
+#define SLI4_CMD_GEN_REQUEST64_WQE		0x08
+#define SLI4_CMD_XMIT_BCAST64_WQE		0x08
+#define SLI4_CMD_XMIT_BLS_RSP64_WQE		0x08
+#define SLI4_CMD_ABORT_WQE			0x08
+#define SLI4_CMD_XMIT_SEQUENCE64_WQE		0x08
+#define SLI4_CMD_REQUEUE_XRI_WQE		0x0A
+#define SLI4_CMD_SEND_FRAME_WQE			0x0a
+
+#define SLI4_WQE_SIZE				0x05
+#define SLI4_WQE_EXT_SIZE			0x06
+
+#define SLI4_WQE_BYTES				(16 * sizeof(u32))
+#define SLI4_WQE_EXT_BYTES			(32 * sizeof(u32))
+
+/* Mask for ccp (CS_CTL) */
+#define SLI4_MASK_CCP				0xfe
+
+/* Generic WQE */
+enum {
+	SLI4_GEN_WQE_EBDECNT	= (0xf << 0),
+	SLI4_GEN_WQE_LEN_LOC	= (0x3 << 7),
+	SLI4_GEN_WQE_QOSD	= (1 << 9),
+	SLI4_GEN_WQE_XBL	= (1 << 11),
+	SLI4_GEN_WQE_HLM	= (1 << 12),
+	SLI4_GEN_WQE_IOD	= (1 << 13),
+	SLI4_GEN_WQE_DBDE	= (1 << 14),
+	SLI4_GEN_WQE_WQES	= (1 << 15),
+
+	SLI4_GEN_WQE_PRI	= (0x7),
+	SLI4_GEN_WQE_PV		= (1 << 3),
+	SLI4_GEN_WQE_EAT	= (1 << 4),
+	SLI4_GEN_WQE_XC		= (1 << 5),
+	SLI4_GEN_WQE_CCPE	= (1 << 7),
+
+	SLI4_GEN_WQE_CMDTYPE	= (0xf),
+	SLI4_GEN_WQE_WQEC	= (1 << 7),
+};
+
+struct sli4_generic_wqe {
+	__le32		cmd_spec0_5[6];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+/* WQE used to abort exchanges. */
+enum {
+	SLI4_ABRT_WQE_IR	= 0x02,
+
+	SLI4_ABRT_WQE_EBDECNT	= (0xf << 0),
+	SLI4_ABRT_WQE_LEN_LOC	= (0x3 << 7),
+	SLI4_ABRT_WQE_QOSD	= (1 << 9),
+	SLI4_ABRT_WQE_XBL	= (1 << 11),
+	SLI4_ABRT_WQE_IOD	= (1 << 13),
+	SLI4_ABRT_WQE_DBDE	= (1 << 14),
+	SLI4_ABRT_WQE_WQES	= (1 << 15),
+
+	SLI4_ABRT_WQE_PRI	= (0x7),
+	SLI4_ABRT_WQE_PV	= (1 << 3),
+	SLI4_ABRT_WQE_EAT	= (1 << 4),
+	SLI4_ABRT_WQE_XC	= (1 << 5),
+	SLI4_ABRT_WQE_CCPE	= (1 << 7),
+
+	SLI4_ABRT_WQE_CMDTYPE	= (0xf),
+	SLI4_ABRT_WQE_WQEC	= (1 << 7),
+};
+
+struct sli4_abort_wqe {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		ext_t_tag;
+	u8		ia_ir_byte;
+	u8		criteria;
+	__le16		rsvd10;
+	__le32		ext_t_mask;
+	__le32		t_mask;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		t_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+#define SLI4_ABORT_CRITERIA_XRI_TAG		0x01
+#define SLI4_ABORT_CRITERIA_ABORT_TAG		0x02
+#define SLI4_ABORT_CRITERIA_REQUEST_TAG		0x03
+#define SLI4_ABORT_CRITERIA_EXT_ABORT_TAG	0x04
+
+enum sli4_abort_type {
+	SLI_ABORT_XRI,
+	SLI_ABORT_ABORT_ID,
+	SLI_ABORT_REQUEST_ID,
+	SLI_ABORT_MAX,		/* must be last */
+};
+
+/* WQE used to create an ELS request. */
+enum {
+	SLI4_REQ_WQE_QOSD		= 0x2,
+	SLI4_REQ_WQE_DBDE		= 0x40,
+	SLI4_REQ_WQE_XBL		= 0x8,
+	SLI4_REQ_WQE_XC			= 0x20,
+	SLI4_REQ_WQE_IOD		= 0x20,
+	SLI4_REQ_WQE_HLM		= 0x10,
+	SLI4_REQ_WQE_CCPE		= 0x80,
+	SLI4_REQ_WQE_EAT		= 0x10,
+	SLI4_REQ_WQE_WQES		= 0x80,
+	SLI4_REQ_WQE_PU_SHFT		= 4,
+	SLI4_REQ_WQE_CT_SHFT		= 2,
+	SLI4_REQ_WQE_CT			= 0xc,
+	SLI4_REQ_WQE_ELSID_SHFT		= 4,
+	SLI4_REQ_WQE_SP_SHFT		= 24,
+	SLI4_REQ_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_REQ_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_els_request64_wqe {
+	struct sli4_bde	els_request_payload;
+	__le32		els_request_payload_length;
+	__le32		sid_sp_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		temporary_rpi;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_elsid_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	struct sli4_bde	els_response_payload_bde;
+	__le32		max_response_payload_length;
+};
+
+/* WQE used to create an FCP initiator no data command. */
+enum {
+	SLI4_ICMD_WQE_DBDE		= 0x40,
+	SLI4_ICMD_WQE_XBL		= 0x8,
+	SLI4_ICMD_WQE_XC		= 0x20,
+	SLI4_ICMD_WQE_IOD		= 0x20,
+	SLI4_ICMD_WQE_HLM		= 0x10,
+	SLI4_ICMD_WQE_CCPE		= 0x80,
+	SLI4_ICMD_WQE_EAT		= 0x10,
+	SLI4_ICMD_WQE_APPID		= 0x10,
+	SLI4_ICMD_WQE_WQES		= 0x80,
+	SLI4_ICMD_WQE_PU_SHFT		= 4,
+	SLI4_ICMD_WQE_CT_SHFT		= 2,
+	SLI4_ICMD_WQE_BS_SHFT		= 4,
+	SLI4_ICMD_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_ICMD_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_icmnd64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le32		rsvd12;
+	__le32		remote_n_port_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create an FCP initiator read. */
+enum {
+	SLI4_IR_WQE_DBDE		= 0x40,
+	SLI4_IR_WQE_XBL			= 0x8,
+	SLI4_IR_WQE_XC			= 0x20,
+	SLI4_IR_WQE_IOD			= 0x20,
+	SLI4_IR_WQE_HLM			= 0x10,
+	SLI4_IR_WQE_CCPE		= 0x80,
+	SLI4_IR_WQE_EAT			= 0x10,
+	SLI4_IR_WQE_APPID		= 0x10,
+	SLI4_IR_WQE_WQES		= 0x80,
+	SLI4_IR_WQE_PU_SHFT		= 4,
+	SLI4_IR_WQE_CT_SHFT		= 2,
+	SLI4_IR_WQE_BS_SHFT		= 4,
+	SLI4_IR_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_IR_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_iread64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+
+	__le32		total_transfer_length;
+
+	__le32		remote_n_port_id_dword;
+
+	__le16		xri_tag;
+	__le16		context_tag;
+
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+
+	__le32		abort_tag;
+
+	__le16		request_tag;
+	__le16		rsvd34;
+
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+
+	__le32		rsvd44;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create an FCP initiator write. */
+enum {
+	SLI4_IWR_WQE_DBDE		= 0x40,
+	SLI4_IWR_WQE_XBL		= 0x8,
+	SLI4_IWR_WQE_XC			= 0x20,
+	SLI4_IWR_WQE_IOD		= 0x20,
+	SLI4_IWR_WQE_HLM		= 0x10,
+	SLI4_IWR_WQE_DNRX		= 0x10,
+	SLI4_IWR_WQE_CCPE		= 0x80,
+	SLI4_IWR_WQE_EAT		= 0x10,
+	SLI4_IWR_WQE_APPID		= 0x10,
+	SLI4_IWR_WQE_WQES		= 0x80,
+	SLI4_IWR_WQE_PU_SHFT		= 4,
+	SLI4_IWR_WQE_CT_SHFT		= 2,
+	SLI4_IWR_WQE_BS_SHFT		= 4,
+	SLI4_IWR_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_IWR_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_iwrite64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le16		total_transfer_length;
+	__le16		initial_transfer_length;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	struct sli4_bde	first_data_bde;
+};
+
+struct sli4_fcp_128byte_wqe {
+	u32 dw[32];
+};
+
+/* WQE used to create an FCP target receive */
+enum {
+	SLI4_TRCV_WQE_DBDE		= 0x40,
+	SLI4_TRCV_WQE_XBL		= 0x8,
+	SLI4_TRCV_WQE_AR		= 0x8,
+	SLI4_TRCV_WQE_XC		= 0x20,
+	SLI4_TRCV_WQE_IOD		= 0x20,
+	SLI4_TRCV_WQE_HLM		= 0x10,
+	SLI4_TRCV_WQE_DNRX		= 0x10,
+	SLI4_TRCV_WQE_CCPE		= 0x80,
+	SLI4_TRCV_WQE_EAT		= 0x10,
+	SLI4_TRCV_WQE_APPID		= 0x10,
+	SLI4_TRCV_WQE_WQES		= 0x80,
+	SLI4_TRCV_WQE_PU_SHFT		= 4,
+	SLI4_TRCV_WQE_CT_SHFT		= 2,
+	SLI4_TRCV_WQE_BS_SHFT		= 4,
+	SLI4_TRCV_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_treceive64_wqe {
+	struct sli4_bde	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	union {
+		__le16		sec_xri_tag;
+		__le16		rsvd;
+		__le32		dword;
+	} dword5;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_ar_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fcp_data_receive_length;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create an FCP target response */
+enum {
+	SLI4_TRSP_WQE_AG	= 0x8,
+	SLI4_TRSP_WQE_DBDE	= 0x40,
+	SLI4_TRSP_WQE_XBL	= 0x8,
+	SLI4_TRSP_WQE_XC	= 0x20,
+	SLI4_TRSP_WQE_HLM	= 0x10,
+	SLI4_TRSP_WQE_DNRX	= 0x10,
+	SLI4_TRSP_WQE_CCPE	= 0x80,
+	SLI4_TRSP_WQE_EAT	= 0x10,
+	SLI4_TRSP_WQE_APPID	= 0x10,
+	SLI4_TRSP_WQE_WQES	= 0x80,
+};
+
+struct sli4_fcp_trsp64_wqe {
+	struct sli4_bde	bde;
+	__le32		fcp_response_length;
+	__le32		rsvd12;
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_dnrx_byte;
+	u8		command;
+	u8		class_ag_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create an FCP target send (DATA IN). */
+enum {
+	SLI4_TSEND_WQE_XBL	= 0x8,
+	SLI4_TSEND_WQE_DBDE	= 0x40,
+	SLI4_TSEND_WQE_IOD	= 0x20,
+	SLI4_TSEND_WQE_QOSD	= 0x2,
+	SLI4_TSEND_WQE_HLM	= 0x10,
+	SLI4_TSEND_WQE_PU_SHFT	= 4,
+	SLI4_TSEND_WQE_AR	= 0x8,
+	SLI4_TSEND_CT_SHFT	= 2,
+	SLI4_TSEND_BS_SHFT	= 4,
+	SLI4_TSEND_LEN_LOC_BIT2 = 0x1,
+	SLI4_TSEND_CCPE		= 0x80,
+	SLI4_TSEND_APPID_VALID	= 0x20,
+	SLI4_TSEND_WQES		= 0x80,
+	SLI4_TSEND_XC		= 0x20,
+	SLI4_TSEND_EAT		= 0x10,
+};
+
+struct sli4_fcp_tsend64_wqe {
+	struct sli4_bde	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_byte;
+	u8		command;
+	u8		class_pu_ar_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		dw10byte0;
+	u8		ll_qd_xbl_hlm_iod_dbde;
+	u8		dw10byte2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		fcp_data_transmit_length;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create a general request. */
+enum {
+	SLI4_GEN_REQ64_WQE_XBL	= 0x8,
+	SLI4_GEN_REQ64_WQE_DBDE	= 0x40,
+	SLI4_GEN_REQ64_WQE_IOD	= 0x20,
+	SLI4_GEN_REQ64_WQE_QOSD	= 0x2,
+	SLI4_GEN_REQ64_WQE_HLM	= 0x10,
+	SLI4_GEN_REQ64_CT_SHFT	= 2,
+};
+
+struct sli4_gen_request64_wqe {
+	struct sli4_bde	bde;
+	__le32		request_payload_length;
+	__le32		relative_offset;
+	u8		rsvd17;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		max_response_payload_length;
+};
+
+/* WQE used to create a send frame request */
+enum {
+	SLI4_SF_WQE_DBDE	= 0x40,
+	SLI4_SF_PU		= 0x30,
+	SLI4_SF_CT		= 0xc,
+	SLI4_SF_QOSD		= 0x2,
+	SLI4_SF_LEN_LOC_BIT1	= 0x80,
+	SLI4_SF_LEN_LOC_BIT2	= 0x1,
+	SLI4_SF_XC		= 0x20,
+	SLI4_SF_XBL		= 0x8,
+};
+
+struct sli4_send_frame_wqe {
+	struct sli4_bde	bde;
+	__le32		frame_length;
+	__le32		fc_header_0_1[2];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		dw7flags0;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	u8		eof;
+	u8		sof;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fc_header_2_5[4];
+};
+
+/* WQE used to create a transmit sequence */
+enum {
+	SLI4_SEQ_WQE_DBDE		= 0x4000,
+	SLI4_SEQ_WQE_XBL		= 0x800,
+	SLI4_SEQ_WQE_SI			= 0x4,
+	SLI4_SEQ_WQE_FT			= 0x8,
+	SLI4_SEQ_WQE_XO			= 0x40,
+	SLI4_SEQ_WQE_LS			= 0x80,
+	SLI4_SEQ_WQE_DIF		= 0x3,
+	SLI4_SEQ_WQE_BS			= 0x70,
+	SLI4_SEQ_WQE_PU			= 0x30,
+	SLI4_SEQ_WQE_HLM		= 0x1000,
+	SLI4_SEQ_WQE_IOD_SHIFT		= 13,
+	SLI4_SEQ_WQE_CT_SHIFT		= 2,
+	SLI4_SEQ_WQE_LEN_LOC_SHIFT	= 7,
+};
+
+struct sli4_xmit_sequence64_wqe {
+	struct sli4_bde	bde;
+	__le32		remote_n_port_id_dword;
+	__le32		relative_offset;
+	u8		dw5flags0;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw7flags0;
+	u8		command;
+	u8		dw7flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	__le16		dw10w0;
+	u8		dw10flags0;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		sequence_payload_len;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/*
+ * WQE used to unblock the specified XRI and to release
+ * it to the SLI Port's free pool.
+ */
+enum {
+	SLI4_REQU_XRI_WQE_XC	= 0x20,
+	SLI4_REQU_XRI_WQE_QOSD	= 0x2,
+};
+
+struct sli4_requeue_xri_wqe {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		rsvd8;
+	__le32		rsvd12;
+	__le32		rsvd16;
+	__le32		rsvd20;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		rsvd32;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		flags0;
+	__le16		flags1;
+	__le16		flags2;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd42;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to send a single frame sequence to broadcast address */
+enum {
+	SLI4_BCAST_WQE_DBDE		= 0x4000,
+	SLI4_BCAST_WQE_CT_SHIFT		= 2,
+	SLI4_BCAST_WQE_LEN_LOC_SHIFT	= 7,
+	SLI4_BCAST_WQE_IOD_SHIFT	= 13,
+};
+
+struct sli4_xmit_bcast64_wqe {
+	struct sli4_bde	sequence_payload;
+	__le32		sequence_payload_length;
+	__le32		rsvd16;
+	u8		rsvd17;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		dw7flags0;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		temporary_rpi;
+	__le16		dw10w0;
+	u8		dw10flags1;
+	u8		ccp;
+	u8		dw11flags0;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd45;
+	__le32		rsvd46;
+	__le32		rsvd47;
+};
+
+/* WQE used to create a BLS response */
+enum {
+	SLI4_BLS_RSP_RID		= 0xffffff,
+	SLI4_BLS_RSP_WQE_AR		= 0x40000000,
+	SLI4_BLS_RSP_WQE_CT_SHFT	= 2,
+	SLI4_BLS_RSP_WQE_QOSD		= 0x2,
+	SLI4_BLS_RSP_WQE_HLM		= 0x10,
+};
+
+struct sli4_xmit_bls_rsp_wqe {
+	__le32		payload_word0;
+	__le16		rx_id;
+	__le16		ox_id;
+	__le16		high_seq_cnt;
+	__le16		low_seq_cnt;
+	__le32		rsvd12;
+	__le32		local_n_port_id_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw8flags0;
+	u8		command;
+	u8		dw8flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd38;
+	u8		dw11flags0;
+	u8		dw11flags1;
+	u8		dw11flags2;
+	u8		ccp;
+	u8		dw12flags0;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	u8		rsvd50;
+	u8		rsvd51;
+	__le32		rsvd52;
+	__le32		rsvd56;
+	__le32		rsvd60;
+};
+
+enum sli_bls_type {
+	SLI4_SLI_BLS_ACC,
+	SLI4_SLI_BLS_RJT,
+	SLI4_SLI_BLS_MAX
+};
+
+struct sli_bls_payload {
+	enum sli_bls_type	type;
+	__le16			ox_id;
+	__le16			rx_id;
+	union {
+		struct {
+			u8	seq_id_validity;
+			u8	seq_id_last;
+			u8	rsvd2;
+			u8	rsvd3;
+			u16	ox_id;
+			u16	rx_id;
+			__le16	low_seq_cnt;
+			__le16	high_seq_cnt;
+		} acc;
+		struct {
+			u8	vendor_unique;
+			u8	reason_explanation;
+			u8	reason_code;
+			u8	rsvd3;
+		} rjt;
+	} u;
+};
+
+/* WQE used to create an ELS response */
+
+enum {
+	SLI4_ELS_SID		= 0xffffff,
+	SLI4_ELS_RID		= 0xffffff,
+	SLI4_ELS_DBDE		= 0x40,
+	SLI4_ELS_XBL		= 0x8,
+	SLI4_ELS_IOD		= 0x20,
+	SLI4_ELS_QOSD		= 0x2,
+	SLI4_ELS_XC		= 0x20,
+	SLI4_ELS_CT_OFFSET	= 0x2,
+	SLI4_ELS_SP		= 0x1000000,
+	SLI4_ELS_HLM		= 0x10,
+};
+
+struct sli4_xmit_els_rsp64_wqe {
+	struct sli4_bde	els_response_payload;
+	__le32		els_response_payload_length;
+	__le32		sid_dw;
+	__le32		rid_dw;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		ox_id;
+	u8		flags1;
+	u8		flags2;
+	u8		flags3;
+	u8		flags4;
+	u8		cmd_type_wqec;
+	u8		rsvd34;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	__le16		rsvd38;
+	__le32		rsvd40;
+	__le32		rsvd44;
+	__le32		rsvd48;
+};
+
+/* Local Reject Reason Codes */
+#define SLI4_FC_LOCAL_REJECT_MISSING_CONTINUE		0x01
+#define SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT		0x02
+#define SLI4_FC_LOCAL_REJECT_INTERNAL_ERROR		0x03
+#define SLI4_FC_LOCAL_REJECT_INVALID_RPI		0x04
+#define SLI4_FC_LOCAL_REJECT_NO_XRI			0x05
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_COMMAND		0x06
+#define SLI4_FC_LOCAL_REJECT_XCHG_DROPPED		0x07
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_FIELD		0x08
+#define SLI4_FC_LOCAL_REJECT_NO_ABORT_MATCH		0x0c
+#define SLI4_FC_LOCAL_REJECT_TX_DMA_FAILED		0x0d
+#define SLI4_FC_LOCAL_REJECT_RX_DMA_FAILED		0x0e
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_FRAME		0x0f
+#define SLI4_FC_LOCAL_REJECT_NO_RESOURCES		0x11
+#define SLI4_FC_LOCAL_REJECT_FCP_CONF_FAILURE		0x12
+#define SLI4_FC_LOCAL_REJECT_ILLEGAL_LENGTH		0x13
+#define SLI4_FC_LOCAL_REJECT_UNSUPPORTED_FEATURE	0x14
+#define SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS		0x15
+#define SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED		0x16
+#define SLI4_FC_LOCAL_REJECT_RCV_BUFFER_TIMEOUT	0x17
+#define SLI4_FC_LOCAL_REJECT_LOOP_OPEN_FAILURE		0x18
+#define SLI4_FC_LOCAL_REJECT_LINK_DOWN			0x1a
+#define SLI4_FC_LOCAL_REJECT_CORRUPTED_DATA		0x1b
+#define SLI4_FC_LOCAL_REJECT_CORRUPTED_RPI		0x1c
+#define SLI4_FC_LOCAL_REJECT_OUTOFORDER_DATA		0x1d
+#define SLI4_FC_LOCAL_REJECT_OUTOFORDER_ACK		0x1e
+#define SLI4_FC_LOCAL_REJECT_DUP_FRAME			0x1f
+#define SLI4_FC_LOCAL_REJECT_LINK_CONTROL_FRAME	0x20
+#define SLI4_FC_LOCAL_REJECT_BAD_HOST_ADDRESS		0x21
+#define SLI4_FC_LOCAL_REJECT_MISSING_HDR_BUFFER	0x23
+#define SLI4_FC_LOCAL_REJECT_MSEQ_CHAIN_CORRUPTED	0x24
+#define SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED	0x25
+#define SLI4_FC_LOCAL_REJECT_BUFFER_SHORTAGE		0x28
+#define SLI4_FC_LOCAL_REJECT_RCV_XRIBUF_WAITING	0x29
+#define SLI4_FC_LOCAL_REJECT_INVALID_VPI		0x2e
+#define SLI4_FC_LOCAL_REJECT_MISSING_XRIBUF		0x30
+#define SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET		0x40
+#define SLI4_FC_LOCAL_REJECT_MISSING_RELOFFSET		0x41
+#define SLI4_FC_LOCAL_REJECT_INSUFF_BUFFERSPACE	0x42
+#define SLI4_FC_LOCAL_REJECT_MISSING_SI		0x43
+#define SLI4_FC_LOCAL_REJECT_MISSING_ES		0x44
+#define SLI4_FC_LOCAL_REJECT_INCOMPLETE_XFER		0x45
+#define SLI4_FC_LOCAL_REJECT_SLER_FAILURE		0x46
+#define SLI4_FC_LOCAL_REJECT_SLER_CMD_RCV_FAILURE	0x47
+#define SLI4_FC_LOCAL_REJECT_SLER_REC_RJT_ERR		0x48
+#define SLI4_FC_LOCAL_REJECT_SLER_REC_SRR_RETRY_ERR	0x49
+#define SLI4_FC_LOCAL_REJECT_SLER_SRR_RJT_ERR		0x4a
+#define SLI4_FC_LOCAL_REJECT_SLER_RRQ_RJT_ERR		0x4c
+#define SLI4_FC_LOCAL_REJECT_SLER_RRQ_RETRY_ERR	0x4d
+#define SLI4_FC_LOCAL_REJECT_SLER_ABTS_ERR		0x4e
+
+enum {
+	SLI4_RACQE_RQ_EL_INDX	= 0xfff,
+	SLI4_RACQE_FCFI		= 0x3f,
+	SLI4_RACQE_HDPL		= 0x3f,
+	SLI4_RACQE_RQ_ID	= 0xffc0,
+};
+
+struct sli4_fc_async_rcqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		fcfi_rq_id_word;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+struct sli4_fc_async_rcqe_v1 {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	u8		fcfi_byte;
+	u8		rsvd5;
+	__le16		rsvd6;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+#define SLI4_FC_ASYNC_RQ_SUCCESS		0x10
+#define SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED	0x11
+#define SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED	0x12
+#define SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC	0x13
+#define SLI4_FC_ASYNC_RQ_DMA_FAILURE		0x14
+enum {
+	SLI4_RCQE_RQ_EL_INDX = 0xfff,
+};
+
+struct sli4_fc_coalescing_rcqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		rq_id;
+	__le16		sequence_reporting_placement_length;
+	__le16		rsvd14;
+	u8		code;
+	u8		vld_byte;
+};
+
+#define SLI4_FC_COALESCE_RQ_SUCCESS		0x10
+#define SLI4_FC_COALESCE_RQ_INSUFF_XRI_NEEDED	0x18
+/*
+ * @SLI4_OCQE_RQ_EL_INDX: bits 0 to 15 in word1
+ * @SLI4_OCQE_FCFI: bits 0 to 6 in dw1
+ * @SLI4_OCQE_OOX: bit 15 in dw1
+ * @SLI4_OCQE_AGXR: bit 16 in dw1
+ */
+enum {
+	SLI4_OCQE_RQ_EL_INDX = 0x7f,
+	SLI4_OCQE_FCFI = 0x3f,
+	SLI4_OCQE_OOX = (1 << 6),
+	SLI4_OCQE_AGXR = (1 << 7),
+	SLI4_OCQE_HDPL = 0x3f,
+};
+
+struct sli4_fc_optimized_write_cmd_cqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		w1;
+	u8		flags0;
+	u8		flags1;
+	__le16		xri;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	__le16		rpi;
+	u8		code;
+	u8		hdpl_vld;
+};
+
+enum {
+	SLI4_OCQE_XB = (1 << 4),
+};
+
+struct sli4_fc_optimized_write_data_cqe {
+	u8		hw_status;
+	u8		status;
+	__le16		xri;
+	__le32		total_data_placed;
+	__le32		extended_status;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+struct sli4_fc_xri_aborted_cqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rsvd2;
+	__le32		extended_status;
+	__le16		xri;
+	__le16		remote_xid;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+#define SLI4_GENERIC_CONTEXT_RPI		0x0
+#define SLI4_GENERIC_CONTEXT_VPI		0x1
+#define SLI4_GENERIC_CONTEXT_VFI		0x2
+#define SLI4_GENERIC_CONTEXT_FCFI		0x3
+
+#define SLI4_GENERIC_CLASS_CLASS_2		0x1
+#define SLI4_GENERIC_CLASS_CLASS_3		0x2
+
+#define SLI4_ELS_REQUEST64_DIR_WRITE		0x0
+#define SLI4_ELS_REQUEST64_DIR_READ		0x1
+
+#define SLI4_ELS_REQUEST64_OTHER		0x0
+#define SLI4_ELS_REQUEST64_LOGO		0x1
+#define SLI4_ELS_REQUEST64_FDISC		0x2
+#define SLI4_ELS_REQUEST64_FLOGIN		0x3
+#define SLI4_ELS_REQUEST64_PLOGI		0x4
+
+#define SLI4_ELS_REQUEST64_CMD_GEN		0x08
+#define SLI4_ELS_REQUEST64_CMD_NON_FABRIC	0x0c
+#define SLI4_ELS_REQUEST64_CMD_FABRIC		0x0d
+
 #endif /* !_SLI4_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2019-12-20 22:36 ` [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
  2019-12-20 22:36 ` [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  7:32   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
                   ` (29 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

It adds the definitions for SLI-4 mailbox commands
and their responses.
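
For illustration only (not part of the patch), here is a minimal sketch of
how a caller might fill one of these structures in a mailbox buffer before
posting it. The example_build_request_features() name and the caller-provided
buffer are assumptions; the structure, command code and feature bits come
from this patch:

	static void example_build_request_features(void *buf, size_t size)
	{
		struct sli4_cmd_request_features *req = buf;

		memset(buf, 0, size);
		req->hdr.command = MBX_CMD_RQST_FEATURES;
		/* dw1_qry == 0 selects "set"; request target-mode FCP and NPIV */
		req->dw1_qry = cpu_to_le32(0);
		req->cmd = cpu_to_le32(SLI4_REQFEAT_FCPT | SLI4_REQFEAT_NPIV);
	}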

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc_sli/sli4.h | 1728 +++++++++++++++++++++++++++++++++++-
 1 file changed, 1727 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index f86a9e72ed43..c9bd3f71b27b 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -1995,7 +1995,7 @@ struct sli4_fc_xri_aborted_cqe {
 #define SLI4_ELS_REQUEST64_DIR_READ		0x1
 
 #define SLI4_ELS_REQUEST64_OTHER		0x0
-#define SLI4_ELS_REQUEST64_LOGO		0x1
+#define SLI4_ELS_REQUEST64_LOGO			0x1
 #define SLI4_ELS_REQUEST64_FDISC		0x2
 #define SLI4_ELS_REQUEST64_FLOGIN		0x3
 #define SLI4_ELS_REQUEST64_PLOGI		0x4
@@ -2004,4 +2004,1730 @@ struct sli4_fc_xri_aborted_cqe {
 #define SLI4_ELS_REQUEST64_CMD_NON_FABRIC	0x0c
 #define SLI4_ELS_REQUEST64_CMD_FABRIC		0x0d
 
+#define SLI_PAGE_SIZE				(1 << 12)	/* 4096 */
+#define SLI_SUB_PAGE_MASK			(SLI_PAGE_SIZE - 1)
+#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)
+
+#define SLI4_BMBX_TIMEOUT_MSEC			30000
+#define SLI4_FW_READY_TIMEOUT_MSEC		30000
+
+#define SLI4_BMBX_DELAY_US			1000	/* 1 ms */
+#define SLI4_INIT_PORT_DELAY_US			10000	/* 10 ms */
+
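+/*
+ * Number of pages of size page_size needed to hold 'bytes'; page_size is
+ * assumed to be a power of two, e.g. sli_page_count(8192, 4096) == 2.
+ */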
+static inline u32
+sli_page_count(size_t bytes, u32 page_size)
+{
+	if (!page_size)
+		return 0;
+
+	return (bytes + (page_size - 1)) >> __ffs(page_size);
+}
+
+/*************************************************************************
+ * SLI-4 mailbox command formats and definitions
+ */
+
+struct sli4_mbox_command_header {
+	u8	resvd0;
+	u8	command;
+	__le16	status;	/* Port writes to indicate success/fail */
+};
+
+enum {
+	MBX_CMD_CONFIG_LINK	= 0x07,
+	MBX_CMD_DUMP		= 0x17,
+	MBX_CMD_DOWN_LINK	= 0x06,
+	MBX_CMD_INIT_LINK	= 0x05,
+	MBX_CMD_INIT_VFI	= 0xa3,
+	MBX_CMD_INIT_VPI	= 0xa4,
+	MBX_CMD_POST_XRI	= 0xa7,
+	MBX_CMD_RELEASE_XRI	= 0xac,
+	MBX_CMD_READ_CONFIG	= 0x0b,
+	MBX_CMD_READ_STATUS	= 0x0e,
+	MBX_CMD_READ_NVPARMS	= 0x02,
+	MBX_CMD_READ_REV	= 0x11,
+	MBX_CMD_READ_LNK_STAT	= 0x12,
+	MBX_CMD_READ_SPARM64	= 0x8d,
+	MBX_CMD_READ_TOPOLOGY	= 0x95,
+	MBX_CMD_REG_FCFI	= 0xa0,
+	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
+	MBX_CMD_REG_RPI		= 0x93,
+	MBX_CMD_REG_RX_RQ	= 0xa6,
+	MBX_CMD_REG_VFI		= 0x9f,
+	MBX_CMD_REG_VPI		= 0x96,
+	MBX_CMD_RQST_FEATURES	= 0x9d,
+	MBX_CMD_SLI_CONFIG	= 0x9b,
+	MBX_CMD_UNREG_FCFI	= 0xa2,
+	MBX_CMD_UNREG_RPI	= 0x14,
+	MBX_CMD_UNREG_VFI	= 0xa1,
+	MBX_CMD_UNREG_VPI	= 0x97,
+	MBX_CMD_WRITE_NVPARMS	= 0x03,
+	MBX_CMD_CFG_AUTO_XFER_RDY = 0xAD,
+
+	MBX_STATUS_SUCCESS	= 0x0000,
+	MBX_STATUS_FAILURE	= 0x0001,
+	MBX_STATUS_RPI_NOT_REG	= 0x1400,
+};
+
+/* CONFIG_LINK */
+enum {
+	SLI4_CFG_LINK_BBSCN = 0xf00,
+	SLI4_CFG_LINK_CSCN  = 0x1000,
+};
+
+struct sli4_cmd_config_link {
+	struct sli4_mbox_command_header	hdr;
+	u8		maxbbc;
+	u8		rsvd5;
+	u8		rsvd6;
+	u8		rsvd7;
+	u8		alpa;
+	__le16		n_port_id;
+	u8		rsvd11;
+	__le32		rsvd12;
+	__le32		e_d_tov;
+	__le32		lp_tov;
+	__le32		r_a_tov;
+	__le32		r_t_tov;
+	__le32		al_tov;
+	__le32		rsvd36;
+	__le32		bbscn_dword;
+};
+
+enum {
+	SLI4_DUMP4_TYPE = 0xf,
+};
+
+#define SLI4_WKI_TAG_SAT_TEM 0x1040
+
+struct sli4_cmd_dump4 {
+	struct sli4_mbox_command_header	hdr;
+	__le32		type_dword;
+	__le16		wki_selection;
+	__le16		rsvd10;
+	__le32		rsvd12;
+	__le32		returned_byte_cnt;
+	__le32		resp_data[59];
+};
+
+/* INIT_LINK - initialize the link for a FC port */
+#define FC_TOPOLOGY_FCAL	0
+#define FC_TOPOLOGY_P2P		1
+
+#define SLI4_INIT_LINK_F_LOOP_BACK	(1 << 0)
+#define SLI4_INIT_LINK_F_UNFAIR		(1 << 6)
+#define SLI4_INIT_LINK_F_NO_LIRP	(1 << 7)
+#define SLI4_INIT_LINK_F_LOOP_VALID_CHK	(1 << 8)
+#define SLI4_INIT_LINK_F_NO_LISA	(1 << 9)
+#define SLI4_INIT_LINK_F_FAIL_OVER	(1 << 10)
+#define SLI4_INIT_LINK_F_NO_AUTOSPEED	(1 << 11)
+#define SLI4_INIT_LINK_F_PICK_HI_ALPA	(1 << 15)
+
+#define SLI4_INIT_LINK_F_P2P_ONLY	1
+#define SLI4_INIT_LINK_F_FCAL_ONLY	2
+
+#define SLI4_INIT_LINK_F_FCAL_FAIL_OVER	0
+#define SLI4_INIT_LINK_F_P2P_FAIL_OVER	1
+
+enum {
+	SLI4_INIT_LINK_SEL_RESET_AL_PA		= 0xff,
+	SLI4_INIT_LINK_FLAG_LOOPBACK		= 0x1,
+	SLI4_INIT_LINK_FLAG_TOPOLOGY		= 0x6,
+	SLI4_INIT_LINK_FLAG_UNFAIR		= 0x40,
+	SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP	= 0x80,
+	SLI4_INIT_LINK_FLAG_LOOP_VALIDITY	= 0x100,
+	SLI4_INIT_LINK_FLAG_SKIP_LISA		= 0x200,
+	SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER	= 0x400,
+	SLI4_INIT_LINK_FLAG_FIXED_SPEED		= 0x800,
+	SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA	= 0x8000,
+};
+
+#define FC_LINK_SPEED_1G		1
+#define FC_LINK_SPEED_2G		2
+#define FC_LINK_SPEED_AUTO_1_2		3
+#define FC_LINK_SPEED_4G		4
+#define FC_LINK_SPEED_AUTO_4_1		5
+#define FC_LINK_SPEED_AUTO_4_2		6
+#define FC_LINK_SPEED_AUTO_4_2_1	7
+#define FC_LINK_SPEED_8G		8
+#define FC_LINK_SPEED_AUTO_8_1		9
+#define FC_LINK_SPEED_AUTO_8_2		10
+#define FC_LINK_SPEED_AUTO_8_2_1	11
+#define FC_LINK_SPEED_AUTO_8_4		12
+#define FC_LINK_SPEED_AUTO_8_4_1	13
+#define FC_LINK_SPEED_AUTO_8_4_2	14
+#define FC_LINK_SPEED_10G		16
+#define FC_LINK_SPEED_16G		17
+#define FC_LINK_SPEED_AUTO_16_8_4	18
+#define FC_LINK_SPEED_AUTO_16_8		19
+#define FC_LINK_SPEED_32G		20
+#define FC_LINK_SPEED_AUTO_32_16_8	21
+#define FC_LINK_SPEED_AUTO_32_16	22
+
+struct sli4_cmd_init_link {
+	struct sli4_mbox_command_header       hdr;
+	__le32	sel_reset_al_pa_dword;
+	__le32	flags0;
+	__le32	link_speed_sel_code;
+};
+
+/* INIT_VFI - initialize the VFI resource */
+enum {
+	SLI4_INIT_VFI_FLAG_VP	= 0x1000,
+	SLI4_INIT_VFI_FLAG_VF	= 0x2000,
+	SLI4_INIT_VFI_FLAG_VT	= 0x4000,
+	SLI4_INIT_VFI_FLAG_VR	= 0x8000,
+
+	SLI4_INIT_VFI_VFID	= 0x1fff,
+	SLI4_INIT_VFI_PRI	= 0xe000,
+
+	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,
+};
+
+struct sli4_cmd_init_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		vfi;
+	__le16		flags0_word;
+	__le16		fcfi;
+	__le16		vpi;
+	__le32		vf_id_pri_dword;
+	__le32		hop_cnt_dword;
+};
+
+/* INIT_VPI - initialize the VPI resource */
+struct sli4_cmd_init_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/* POST_XRI - post XRI resources to the SLI Port */
+enum {
+	SLI4_POST_XRI_COUNT	= 0xfff,
+	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
+	SLI4_POST_XRI_FLAG_DL	= 0x2000,
+	SLI4_POST_XRI_FLAG_DI	= 0x4000,
+	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
+};
+
+struct sli4_cmd_post_xri {
+	struct sli4_mbox_command_header	hdr;
+	__le16		xri_base;
+	__le16		xri_count_flags;
+};
+
+/* RELEASE_XRI - Release XRI resources from the SLI Port */
+enum {
+	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,
+	SLI4_RELEASE_XRI_COUNT		= 0x1f,
+};
+
+struct sli4_cmd_release_xri {
+	struct sli4_mbox_command_header	hdr;
+	__le16		rel_xri_count_word;
+	__le16		xri_count_word;
+
+	struct {
+		__le16	xri_tag0;
+		__le16	xri_tag1;
+	} xri_tbl[62];
+};
+
+/* READ_CONFIG - read SLI port configuration parameters */
+struct sli4_cmd_read_config {
+	struct sli4_mbox_command_header	hdr;
+};
+
+enum {
+	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
+	SLI4_READ_CFG_RESP_TOPOLOGY	= 0xff000000,	/* DW2 */
+};
+
+struct sli4_rsp_read_config {
+	struct sli4_mbox_command_header	hdr;
+	__le32		ext_dword;
+	__le32		topology_dword;
+	__le32		resvd8;
+	__le16		e_d_tov;
+	__le16		resvd14;
+	__le32		resvd16;
+	__le16		r_a_tov;
+	__le16		resvd22;
+	__le32		resvd24;
+	__le32		resvd28;
+	__le16		lmt;
+	__le16		resvd34;
+	__le32		resvd36;
+	__le32		resvd40;
+	__le16		xri_base;
+	__le16		xri_count;
+	__le16		rpi_base;
+	__le16		rpi_count;
+	__le16		vpi_base;
+	__le16		vpi_count;
+	__le16		vfi_base;
+	__le16		vfi_count;
+	__le16		resvd60;
+	__le16		fcfi_count;
+	__le16		rq_count;
+	__le16		eq_count;
+	__le16		wq_count;
+	__le16		cq_count;
+	__le32		pad[45];
+};
+
+#define SLI4_READ_CFG_TOPO_FC		0x1	/* FC topology unknown */
+#define SLI4_READ_CFG_TOPO_FC_DA	0x2	/* FC Direct Attach */
+#define SLI4_READ_CFG_TOPO_FC_AL	0x3	/* FC-AL topology */
+
+/* READ_NVPARMS - read SLI port configuration parameters */
+enum {
+	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
+	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
+};
+
+struct sli4_cmd_read_nvparms {
+	struct sli4_mbox_command_header	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/* WRITE_NVPARMS - write SLI port configuration parameters */
+struct sli4_cmd_write_nvparms {
+	struct sli4_mbox_command_header	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/* READ_REV - read the Port revision levels */
+enum {
+	SLI4_READ_REV_FLAG_SLI_LEVEL	= 0xf,
+	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
+	SLI4_READ_REV_FLAG_CEEV		= 0x60,
+	SLI4_READ_REV_FLAG_VPD		= 0x2000,
+
+	SLI4_READ_REV_AVAILABLE_LENGTH	= 0xffffff,
+};
+
+struct sli4_cmd_read_rev {
+	struct sli4_mbox_command_header	hdr;
+	__le16			resvd0;
+	__le16			flags0_word;
+	__le32			first_hw_rev;
+	__le32			second_hw_rev;
+	__le32			resvd12;
+	__le32			third_hw_rev;
+	u8			fc_ph_low;
+	u8			fc_ph_high;
+	u8			feature_level_low;
+	u8			feature_level_high;
+	__le32			resvd24;
+	__le32			first_fw_id;
+	u8			first_fw_name[16];
+	__le32			second_fw_id;
+	u8			second_fw_name[16];
+	__le32			rsvd18[30];
+	__le32			available_length_dword;
+	struct sli4_dmaaddr	hostbuf;
+	__le32			returned_vpd_length;
+	__le32			actual_vpd_length;
+};
+
+/* READ_SPARM64 - read the Port service parameters */
+struct sli4_cmd_read_sparm64 {
+	struct sli4_mbox_command_header hdr;
+	__le32			resvd0;
+	__le32			resvd4;
+	struct sli4_bde	bde_64;
+	__le16			vpi;
+	__le16			resvd22;
+	__le16			port_name_start;
+	__le16			port_name_len;
+	__le16			node_name_start;
+	__le16			node_name_len;
+};
+
+#define SLI4_READ_SPARM64_VPI_DEFAULT	0
+#define SLI4_READ_SPARM64_VPI_SPECIAL	U16_MAX
+
+#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
+#define SLI4_READ_SPARM64_WWNN_OFFSET	(SLI4_READ_SPARM64_WWPN_OFFSET \
+					+ sizeof(u64))
+
+/* READ_TOPOLOGY - read the link event information */
+enum {
+	SLI4_READTOPO_ATTEN_TYPE	= 0xff,
+	SLI4_READTOPO_FLAG_IL		= 0x100,
+	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
+
+	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
+	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
+	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
+	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
+	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
+	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
+
+	SLI4_READTOPO_SCN_BBSCN		= 0xf,
+	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
+
+	SLI4_READTOPO_R_T_TOV		= 0x1ff,
+	SLI4_READTOPO_AL_TOV		= 0xf000,
+
+	SLI4_READTOPO_PB_FLAG		= 0x80,
+
+	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
+};
+
+struct sli4_cmd_read_topology {
+	struct sli4_mbox_command_header	hdr;
+	__le32			event_tag;
+	__le32			dw2_attentype;
+	u8			topology;
+	u8			lip_type;
+	u8			lip_al_ps;
+	u8			al_pa_granted;
+	struct sli4_bde	bde_loop_map;
+	__le32			linkdown_state;
+	__le32			currlink_state;
+	u8			max_bbc;
+	u8			init_bbc;
+	u8			scn_flags;
+	u8			rsvd39;
+	__le16			dw10w0_al_rt_tov;
+	__le16			lp_tov;
+	u8			acquired_al_pa;
+	u8			pb_flags;
+	__le16			specified_al_pa;
+	__le32			dw12_init_n_port_id;
+};
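+
+/*
+ * For example, the reported link attention type can be recovered as
+ * le32_to_cpu(read_topo->dw2_attentype) & SLI4_READTOPO_ATTEN_TYPE and
+ * then compared against the SLI4_READ_TOPOLOGY_LINK_* values below.
+ */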
+
+#define SLI4_MIN_LOOP_MAP_BYTES	128
+
+#define SLI4_READ_TOPOLOGY_LINK_UP	0x1
+#define SLI4_READ_TOPOLOGY_LINK_DOWN	0x2
+#define SLI4_READ_TOPOLOGY_LINK_NO_ALPA	0x3
+
+#define SLI4_READ_TOPOLOGY_UNKNOWN	0x0
+#define SLI4_READ_TOPOLOGY_NPORT	0x1
+#define SLI4_READ_TOPOLOGY_FC_AL	0x2
+
+#define SLI4_READ_TOPOLOGY_SPEED_NONE	0x00
+#define SLI4_READ_TOPOLOGY_SPEED_1G	0x04
+#define SLI4_READ_TOPOLOGY_SPEED_2G	0x08
+#define SLI4_READ_TOPOLOGY_SPEED_4G	0x10
+#define SLI4_READ_TOPOLOGY_SPEED_8G	0x20
+#define SLI4_READ_TOPOLOGY_SPEED_10G	0x40
+#define SLI4_READ_TOPOLOGY_SPEED_16G	0x80
+#define SLI4_READ_TOPOLOGY_SPEED_32G	0x90
+
+/* REG_FCFI - activate a FC Forwarder */
+struct sli4_cmd_reg_fcfi_rq_cfg {
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+enum {
+	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
+	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
+};
+
+#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
+struct sli4_cmd_reg_fcfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+			rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+};
+
+#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
+#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
+#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
+#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
+
+enum {
+	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
+	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
+	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
+
+	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
+	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
+	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
+};
+
+struct sli4_cmd_reg_fcfi_mrq {
+	struct sli4_mbox_command_header	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+			rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+	__le32		dw9_mrqflags;
+};
+
+/* REG_RPI - register a Remote Port Indicator */
+enum {
+	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
+	SLI4_REGRPI_UPD			= 0x1000000,
+	SLI4_REGRPI_ETOW		= 0x8000000,
+	SLI4_REGRPI_TERP		= 0x20000000,
+	SLI4_REGRPI_CI			= 0x80000000,
+};
+
+struct sli4_cmd_reg_rpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16			rpi;
+	__le16			rsvd2;
+	__le32			dw2_rportid_flags;
+	struct sli4_bde	bde_64;
+	__le16			vpi;
+	__le16			rsvd26;
+};
+
+#define SLI4_REG_RPI_BUF_LEN		0x70
+
+/* REG_VFI - register a Virtual Fabric Indicator */
+enum {
+	SLI4_REGVFI_VP			= 0x1000,	/* DW1 */
+	SLI4_REGVFI_UPD			= 0x2000,
+
+	SLI4_REGVFI_LOCAL_N_PORTID	= 0xffffff,	/* DW10 */
+};
+
+struct sli4_cmd_reg_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16			vfi;
+	__le16			dw0w1_flags;
+	__le16			fcfi;
+	__le16			vpi;
+	u8			wwpn[8];
+	struct sli4_bde	sparm;
+	__le32			e_d_tov;
+	__le32			r_a_tov;
+	__le32			dw10_lportid_flags;
+};
+
+/* REG_VPI - register a Virtual Port Indicator */
+enum {
+	SLI4_REGVPI_LOCAL_N_PORTID	= 0xffffff,
+	SLI4_REGVPI_UPD			= 0x1000000,
+};
+
+struct sli4_cmd_reg_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		dw2_lportid_flags;
+	u8		wwpn[8];
+	__le32		rsvd12;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/* REQUEST_FEATURES - request / query SLI features */
+enum {
+	SLI4_REQFEAT_QRY	= 0x1,		/* Dw1 */
+
+	SLI4_REQFEAT_IAAB	= (1 << 0),	/* DW2 & DW3 */
+	SLI4_REQFEAT_NPIV	= (1 << 1),
+	SLI4_REQFEAT_DIF	= (1 << 2),
+	SLI4_REQFEAT_VF		= (1 << 3),
+	SLI4_REQFEAT_FCPI	= (1 << 4),
+	SLI4_REQFEAT_FCPT	= (1 << 5),
+	SLI4_REQFEAT_FCPC	= (1 << 6),
+	SLI4_REQFEAT_RSVD	= (1 << 7),
+	SLI4_REQFEAT_RQD	= (1 << 8),
+	SLI4_REQFEAT_IAAR	= (1 << 9),
+	SLI4_REQFEAT_HLM	= (1 << 10),
+	SLI4_REQFEAT_PERFH	= (1 << 11),
+	SLI4_REQFEAT_RXSEQ	= (1 << 12),
+	SLI4_REQFEAT_RXRI	= (1 << 13),
+	SLI4_REQFEAT_DCL2	= (1 << 14),
+	SLI4_REQFEAT_RSCO	= (1 << 15),
+	SLI4_REQFEAT_MRQP	= (1 << 16),
+};
+
+struct sli4_cmd_request_features {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_qry;
+	__le32		cmd;
+	__le32		resp;
+};
+
+/*
+ * SLI_CONFIG - submit a configuration command to Port
+ *
+ * Command is either embedded as part of the payload (embed) or located
+ * in a separate memory buffer (mem)
+ */
+enum {
+	SLI4_SLICONF_EMB		= 0x1,		/* DW1 */
+	SLI4_SLICONF_PMDCMD_SHIFT	= 3,
+	SLI4_SLICONF_PMDCMD_MASK	= 0x1F << SLI4_SLICONF_PMDCMD_SHIFT,
+	SLI4_SLICONF_PMDCMD_VAL_1	= 1 << SLI4_SLICONF_PMDCMD_SHIFT,
+	SLI4_SLICONF_PMDCNT		= 0xf8,
+
+	SLI4_SLICONFIG_PMD_LEN		= 0x00ffffff,
+};
+
+struct sli4_cmd_sli_config {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_flags;
+	__le32		payload_len;
+	__le32		rsvd12[3];
+	union {
+		u8 embed[58 * sizeof(u32)];
+		struct sli4_bufptr mem;
+	} payload;
+};
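+
+/*
+ * Note: payload.embed is 58 * sizeof(u32) = 232 bytes; a command longer
+ * than that cannot be embedded and must be carried in an external DMA
+ * buffer described by payload.mem.
+ */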
+
+/* READ_STATUS - read tx/rx status of a particular port */
+enum {
+	SLI4_READSTATUS_CLEAR_COUNTERS	= 0x1,	/* DW1 */
+};
+
+struct sli4_cmd_read_status {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_flags;
+	__le32		rsvd4;
+	__le32		trans_kbyte_cnt;
+	__le32		recv_kbyte_cnt;
+	__le32		trans_frame_cnt;
+	__le32		recv_frame_cnt;
+	__le32		trans_seq_cnt;
+	__le32		recv_seq_cnt;
+	__le32		tot_exchanges_orig;
+	__le32		tot_exchanges_resp;
+	__le32		recv_p_bsy_cnt;
+	__le32		recv_f_bsy_cnt;
+	__le32		no_rq_buf_dropped_frames_cnt;
+	__le32		empty_rq_timeout_cnt;
+	__le32		no_xri_dropped_frames_cnt;
+	__le32		empty_xri_pool_cnt;
+};
+
+/* READ_LNK_STAT - read link status of a particular port */
+enum {
+	SLI4_READ_LNKSTAT_REC	= (1 << 0),
+	SLI4_READ_LNKSTAT_GEC	= (1 << 1),
+	SLI4_READ_LNKSTAT_W02OF	= (1 << 2),
+	SLI4_READ_LNKSTAT_W03OF	= (1 << 3),
+	SLI4_READ_LNKSTAT_W04OF	= (1 << 4),
+	SLI4_READ_LNKSTAT_W05OF	= (1 << 5),
+	SLI4_READ_LNKSTAT_W06OF	= (1 << 6),
+	SLI4_READ_LNKSTAT_W07OF	= (1 << 7),
+	SLI4_READ_LNKSTAT_W08OF	= (1 << 8),
+	SLI4_READ_LNKSTAT_W09OF	= (1 << 9),
+	SLI4_READ_LNKSTAT_W10OF = (1 << 10),
+	SLI4_READ_LNKSTAT_W11OF = (1 << 11),
+	SLI4_READ_LNKSTAT_W12OF	= (1 << 12),
+	SLI4_READ_LNKSTAT_W13OF	= (1 << 13),
+	SLI4_READ_LNKSTAT_W14OF	= (1 << 14),
+	SLI4_READ_LNKSTAT_W15OF	= (1 << 15),
+	SLI4_READ_LNKSTAT_W16OF	= (1 << 16),
+	SLI4_READ_LNKSTAT_W17OF	= (1 << 17),
+	SLI4_READ_LNKSTAT_W18OF	= (1 << 18),
+	SLI4_READ_LNKSTAT_W19OF	= (1 << 19),
+	SLI4_READ_LNKSTAT_W20OF	= (1 << 20),
+	SLI4_READ_LNKSTAT_W21OF	= (1 << 21),
+	SLI4_READ_LNKSTAT_CLRC	= (1 << 30),
+	SLI4_READ_LNKSTAT_CLOF	= (1 << 31),
+};
+
+struct sli4_cmd_read_link_stats {
+	struct sli4_mbox_command_header	hdr;
+	__le32	dw1_flags;
+	__le32	linkfail_errcnt;
+	__le32	losssync_errcnt;
+	__le32	losssignal_errcnt;
+	__le32	primseq_errcnt;
+	__le32	inval_txword_errcnt;
+	__le32	crc_errcnt;
+	__le32	primseq_eventtimeout_cnt;
+	__le32	elastic_bufoverrun_errcnt;
+	__le32	arbit_fc_al_timeout_cnt;
+	__le32	adv_rx_buftor_to_buf_credit;
+	__le32	curr_rx_buf_to_buf_credit;
+	__le32	adv_tx_buf_to_buf_credit;
+	__le32	curr_tx_buf_to_buf_credit;
+	__le32	rx_eofa_cnt;
+	__le32	rx_eofdti_cnt;
+	__le32	rx_eofni_cnt;
+	__le32	rx_soff_cnt;
+	__le32	rx_dropped_no_aer_cnt;
+	__le32	rx_dropped_no_avail_rpi_rescnt;
+	__le32	rx_dropped_no_avail_xri_rescnt;
+};
+
+/* Format a WQE with WQ_ID Association performance hint */
+static inline void
+sli_set_wq_id_association(void *entry, u16 q_id)
+{
+	u32 *wqe = entry;
+
+	/*
+	 * Set Word 10, bit 0 to zero
+	 * Set Word 10, bits 15:1 to the WQ ID
+	 */
+	wqe[10] &= ~0xffff;
+	wqe[10] |= q_id << 1;
+}
+
+/* UNREG_FCFI - unregister a FCFI */
+struct sli4_cmd_unreg_fcfi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		fcfi;
+	__le16		rsvd6;
+};
+
+/* UNREG_RPI - unregister one or more RPI */
+enum {
+	UNREG_RPI_DP		= 0x2000,
+	UNREG_RPI_II_SHIFT	= 14,
+	UNREG_RPI_II_MASK	= 0x03 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_RPI	= 0x00 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_VPI	= 0x01 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_VFI	= 0x02 << UNREG_RPI_II_SHIFT,
+	UNREG_RPI_II_FCFI	= 0x03 << UNREG_RPI_II_SHIFT,
+
+	UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff,
+};
+
+struct sli4_cmd_unreg_rpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		index;
+	__le16		dw1w1_flags;
+	__le32		dw2_dest_n_portid;
+};
+
+/* UNREG_VFI - unregister one or more VFI */
+enum {
+	UNREG_VFI_II_SHIFT	= 14,
+	UNREG_VFI_II_MASK	= 0x03 << UNREG_VFI_II_SHIFT,
+	UNREG_VFI_II_VFI	= 0x00 << UNREG_VFI_II_SHIFT,
+	UNREG_VFI_II_FCFI	= 0x03 << UNREG_VFI_II_SHIFT,
+};
+
+struct sli4_cmd_unreg_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2_flags;
+};
+
+enum sli4_unreg_type {
+	SLI4_UNREG_TYPE_PORT,
+	SLI4_UNREG_TYPE_DOMAIN,
+	SLI4_UNREG_TYPE_FCF,
+	SLI4_UNREG_TYPE_ALL
+};
+
+/* UNREG_VPI - unregister one or more VPI */
+enum {
+	UNREG_VPI_II_SHIFT	= 14,
+	UNREG_VPI_II_MASK	= 0x03 << UNREG_VPI_II_SHIFT,
+	UNREG_VPI_II_VPI	= 0x00 << UNREG_VPI_II_SHIFT,
+	UNREG_VPI_II_VFI	= 0x02 << UNREG_VPI_II_SHIFT,
+	UNREG_VPI_II_FCFI	= 0x03 << UNREG_VPI_II_SHIFT,
+};
+
+struct sli4_cmd_unreg_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2w0_flags;
+};
+
+/* AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature */
+struct sli4_cmd_config_auto_xfer_rdy {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+};
+
+#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE	0xffff
+
+struct sli4_cmd_config_auto_xfer_rdy_hp {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+	__le32		dw3_esoc_flags;
+	__le16		block_size;
+	__le16		rsvd14;
+};
+
+/*************************************************************************
+ * SLI-4 common configuration command formats and definitions
+ */
+
+#define SLI4_CFG_STATUS_SUCCESS			0x00
+#define SLI4_CFG_STATUS_FAILED			0x01
+#define SLI4_CFG_STATUS_ILLEGAL_REQUEST		0x02
+#define SLI4_CFG_STATUS_ILLEGAL_FIELD		0x03
+
+#define SLI4_MGMT_STATUS_FLASHROM_READ_FAILED	0xcb
+
+#define SLI4_CFG_ADD_STATUS_NO_STATUS		0x00
+#define SLI4_CFG_ADD_STATUS_INVALID_OPCODE	0x1e
+
+/*
+ * Subsystem values.
+ */
+#define SLI4_SUBSYSTEM_COMMON			0x01
+#define SLI4_SUBSYSTEM_LOWLEVEL			0x0B
+#define SLI4_SUBSYSTEM_FC			0x0c
+#define SLI4_SUBSYSTEM_DMTF			0x11
+
+#define	SLI4_OPC_LOWLEVEL_SET_WATCHDOG		0x36
+
+/*
+ * Common opcode (OPC) values.
+ */
+enum {
+	CMN_FUNCTION_RESET	= 0x3d,
+	CMN_CREATE_CQ		= 0x0c,
+	CMN_CREATE_CQ_SET	= 0x1d,
+	CMN_DESTROY_CQ		= 0x36,
+	CMN_MODIFY_EQ_DELAY	= 0x29,
+	CMN_CREATE_EQ		= 0x0d,
+	CMN_DESTROY_EQ		= 0x37,
+	CMN_CREATE_MQ_EXT	= 0x5a,
+	CMN_DESTROY_MQ		= 0x35,
+	CMN_GET_CNTL_ATTRIBUTES	= 0x20,
+	CMN_NOP			= 0x21,
+	CMN_GET_RSC_EXTENT_INFO = 0x9a,
+	CMN_GET_SLI4_PARAMS	= 0xb5,
+	CMN_QUERY_FW_CONFIG	= 0x3a,
+	CMN_GET_PORT_NAME	= 0x4d,
+
+	CMN_WRITE_FLASHROM	= 0x07,
+	/* TRANSCEIVER Data */
+	CMN_READ_TRANS_DATA	= 0x49,
+	CMN_GET_CNTL_ADDL_ATTRS = 0x79,
+	CMN_GET_FUNCTION_CFG	= 0xa0,
+	CMN_GET_PROFILE_CFG	= 0xa4,
+	CMN_SET_PROFILE_CFG	= 0xa5,
+	CMN_GET_PROFILE_LIST	= 0xa6,
+	CMN_GET_ACTIVE_PROFILE	= 0xa7,
+	CMN_SET_ACTIVE_PROFILE	= 0xa8,
+	CMN_READ_OBJECT		= 0xab,
+	CMN_WRITE_OBJECT	= 0xac,
+	CMN_DELETE_OBJECT	= 0xae,
+	CMN_READ_OBJECT_LIST	= 0xad,
+	CMN_SET_DUMP_LOCATION	= 0xb8,
+	CMN_SET_FEATURES	= 0xbf,
+	CMN_GET_RECFG_LINK_INFO = 0xc9,
+	CMN_SET_RECNG_LINK_ID	= 0xca,
+};
+
+/* DMTF opcode (OPC) values */
+#define DMTF_EXEC_CLP_CMD 0x01
+
+/*
+ * COMMON_FUNCTION_RESET
+ *
+ * Resets the Port, returning it to a power-on state. This configuration
+ * command does not have a payload and should set/expect the lengths to
+ * be zero.
+ */
+struct sli4_rqst_cmn_function_reset {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_function_reset {
+	struct sli4_rsp_hdr	hdr;
+};
+
+/*
+ * COMMON_GET_CNTL_ATTRIBUTES
+ *
+ * Query for information about the SLI Port
+ */
+enum {
+	SLI4_CNTL_ATTR_PORTNUM	= 0x3f,
+	SLI4_CNTL_ATTR_PORTTYPE	= 0xc0,
+};
+
+struct sli4_rsp_cmn_get_cntl_attributes {
+	struct sli4_rsp_hdr	hdr;
+	u8			version_str[32];
+	u8			manufacturer_name[32];
+	__le32			supported_modes;
+	u8			eprom_version_lo;
+	u8			eprom_version_hi;
+	__le16			rsvd17;
+	__le32			mbx_ds_version;
+	__le32			ep_fw_ds_version;
+	u8			ncsi_version_str[12];
+	__le32			def_extended_timeout;
+	u8			model_number[32];
+	u8			description[64];
+	u8			serial_number[32];
+	u8			ip_version_str[32];
+	u8			fw_version_str[32];
+	u8			bios_version_str[32];
+	u8			redboot_version_str[32];
+	u8			driver_version_str[32];
+	u8			fw_on_flash_version_str[32];
+	__le32			functionalities_supported;
+	__le16			max_cdb_length;
+	u8			asic_revision;
+	u8			generational_guid0;
+	__le32			generational_guid1_12[3];
+	__le16			generational_guid13_14;
+	u8			generational_guid15;
+	u8			hba_port_count;
+	__le16			default_link_down_timeout;
+	u8			iscsi_version_min_max;
+	u8			multifunctional_device;
+	u8			cache_valid;
+	u8			hba_status;
+	u8			max_domains_supported;
+	u8			port_num_type_flags;
+	__le32			firmware_post_status;
+	__le32			hba_mtu;
+	u8			iscsi_features;
+	u8			rsvd121[3];
+	__le16			pci_vendor_id;
+	__le16			pci_device_id;
+	__le16			pci_sub_vendor_id;
+	__le16			pci_sub_system_id;
+	u8			pci_bus_number;
+	u8			pci_device_number;
+	u8			pci_function_number;
+	u8			interface_type;
+	__le64			unique_identifier;
+	u8			number_of_netfilters;
+	u8			rsvd122[3];
+};
+
+/*
+ * COMMON_GET_CNTL_ADDL_ATTRIBUTES
+ *
+ * This command queries the controller information from the Flash ROM.
+ */
+struct sli4_rqst_cmn_get_cntl_addl_attributes {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_get_cntl_addl_attributes {
+	struct sli4_rsp_hdr	hdr;
+	__le16		ipl_file_number;
+	u8		ipl_file_version;
+	u8		rsvd4;
+	u8		on_die_temperature;
+	u8		rsvd5[3];
+	__le32		driver_advanced_features_supported;
+	__le32		rsvd7[4];
+	char		universal_bios_version[32];
+	char		x86_bios_version[32];
+	char		efi_bios_version[32];
+	char		fcode_version[32];
+	char		uefi_bios_version[32];
+	char		uefi_nic_version[32];
+	char		uefi_fcode_version[32];
+	char		uefi_iscsi_version[32];
+	char		iscsi_x86_bios_version[32];
+	char		pxe_x86_bios_version[32];
+	u8		default_wwpn[8];
+	u8		ext_phy_version[32];
+	u8		fc_universal_bios_version[32];
+	u8		fc_x86_bios_version[32];
+	u8		fc_efi_bios_version[32];
+	u8		fc_fcode_version[32];
+	u8		ext_phy_crc_label[8];
+	u8		ipl_file_name[16];
+	u8		rsvd139[72];
+};
+
+/*
+ * COMMON_NOP
+ *
+ * This command does not do anything; it only returns
+ * the payload in the completion.
+ */
+struct sli4_rqst_cmn_nop {
+	struct sli4_rqst_hdr	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rsp_cmn_nop {
+	struct sli4_rsp_hdr	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rqst_cmn_get_resource_extent_info {
+	struct sli4_rqst_hdr	hdr;
+	__le16	resource_type;
+	__le16	rsvd16;
+};
+
+#define SLI4_RSC_TYPE_ISCSI_INI_XRI	0x0c
+#define SLI4_RSC_TYPE_VFI		0x20
+#define SLI4_RSC_TYPE_VPI		0x21
+#define SLI4_RSC_TYPE_RPI		0x22
+#define SLI4_RSC_TYPE_XRI		0x23
+
+struct sli4_rsp_cmn_get_resource_extent_info {
+	struct sli4_rsp_hdr	hdr;
+	__le16			resource_extent_count;
+	__le16			resource_extent_size;
+};
+
+#define SLI4_128BYTE_WQE_SUPPORT	0x02
+
+#define GET_Q_CNT_METHOD(m) \
+	(((m) & RSP_GET_PARAM_Q_CNT_MTHD_MASK) >> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
+#define GET_Q_CREATE_VERSION(v) \
+	(((v) & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)
+
+enum {
+	/*GENERIC*/
+	RSP_GET_PARAM_Q_CNT_MTHD_SHFT	= 24,
+	RSP_GET_PARAM_Q_CNT_MTHD_MASK	= (0xF << 24),
+	RSP_GET_PARAM_QV_SHIFT		= 14,
+	RSP_GET_PARAM_QV_MASK		= (3 << 14),
+
+	/* DW4 */
+	RSP_GET_PARAM_PROTO_TYPE_MASK	= 0xFF,
+	/* DW5 */
+	RSP_GET_PARAM_FT		= (1 << 0),
+	RSP_GET_PARAM_SLI_REV_MASK	= (0xF << 4),
+	RSP_GET_PARAM_SLI_FAM_MASK	= (0xF << 8),
+	RSP_GET_PARAM_IF_TYPE_MASK	= (0xF << 12),
+	RSP_GET_PARAM_SLI_HINT1_MASK	= (0xFF << 16),
+	RSP_GET_PARAM_SLI_HINT2_MASK	= (0x1F << 24),
+	/* DW6 */
+	RSP_GET_PARAM_EQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_EQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_EQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW8 */
+	RSP_GET_PARAM_CQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_CQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_CQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW10 */
+	RSP_GET_PARAM_MQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_MQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW12 */
+	RSP_GET_PARAM_WQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_WQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_WQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW14 */
+	RSP_GET_PARAM_RQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_RQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_RQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW15W1*/
+	RSP_GET_PARAM_RQ_DB_WINDOW_MASK	= 0xF000,
+	/* DW16 */
+	RSP_GET_PARAM_FC		= (1 << 0),
+	RSP_GET_PARAM_EXT		= (1 << 1),
+	RSP_GET_PARAM_HDRR		= (1 << 2),
+	RSP_GET_PARAM_SGLR		= (1 << 3),
+	RSP_GET_PARAM_FBRR		= (1 << 4),
+	RSP_GET_PARAM_AREG		= (1 << 5),
+	RSP_GET_PARAM_TGT		= (1 << 6),
+	RSP_GET_PARAM_TERP		= (1 << 7),
+	RSP_GET_PARAM_ASSI		= (1 << 8),
+	RSP_GET_PARAM_WCHN		= (1 << 9),
+	RSP_GET_PARAM_TCCA		= (1 << 10),
+	RSP_GET_PARAM_TRTY		= (1 << 11),
+	RSP_GET_PARAM_TRIR		= (1 << 12),
+	RSP_GET_PARAM_PHOFF		= (1 << 13),
+	RSP_GET_PARAM_PHON		= (1 << 14),
+	RSP_GET_PARAM_PHWQ		= (1 << 15),
+	RSP_GET_PARAM_BOUND_4GA		= (1 << 16),
+	RSP_GET_PARAM_RXC		= (1 << 17),
+	RSP_GET_PARAM_HLM		= (1 << 18),
+	RSP_GET_PARAM_IPR		= (1 << 19),
+	RSP_GET_PARAM_RXRI		= (1 << 20),
+	RSP_GET_PARAM_SGLC		= (1 << 21),
+	RSP_GET_PARAM_TIMM		= (1 << 22),
+	RSP_GET_PARAM_TSMM		= (1 << 23),
+	RSP_GET_PARAM_OAS		= (1 << 25),
+	RSP_GET_PARAM_LC		= (1 << 26),
+	RSP_GET_PARAM_AGXF		= (1 << 27),
+	RSP_GET_PARAM_LOOPBACK_MASK	= (0xF << 28),
+	/* DW18 */
+	RSP_GET_PARAM_SGL_PAGE_CNT_MASK = (0xF << 0),
+	RSP_GET_PARAM_SGL_PAGE_SZS_MASK = (0xFF << 8),
+	RSP_GET_PARAM_SGL_PP_ALIGN_MASK = (0xFF << 16),
+};
+
+struct sli4_rqst_cmn_get_sli4_params {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_get_sli4_params {
+	struct sli4_rsp_hdr	hdr;
+	__le32		dw4_protocol_type;
+	__le32		dw5_sli;
+	__le32		dw6_eq_page_cnt;
+	__le16		eqe_count_mask;
+	__le16		rsvd26;
+	__le32		dw8_cq_page_cnt;
+	__le16		cqe_count_mask;
+	__le16		rsvd34;
+	__le32		dw10_mq_page_cnt;
+	__le16		mqe_count_mask;
+	__le16		rsvd42;
+	__le32		dw12_wq_page_cnt;
+	__le16		wqe_count_mask;
+	__le16		rsvd50;
+	__le32		dw14_rq_page_cnt;
+	__le16		rqe_count_mask;
+	__le16		dw15w1_rq_db_window;
+	__le32		dw16_loopback_scope;
+	__le32		sge_supported_length;
+	__le32		dw18_sgl_page_cnt;
+	__le16		min_rq_buffer_size;
+	__le16		rsvd75;
+	__le32		max_rq_buffer_size;
+	__le16		physical_xri_max;
+	__le16		physical_rpi_max;
+	__le16		physical_vpi_max;
+	__le16		physical_vfi_max;
+	__le32		rsvd88;
+	__le16		frag_num_field_offset;
+	__le16		frag_num_field_size;
+	__le16		sgl_index_field_offset;
+	__le16		sgl_index_field_size;
+	__le32		chain_sge_initial_value_lo;
+	__le32		chain_sge_initial_value_hi;
+};
+
+/*
+ * COMMON_QUERY_FW_CONFIG
+ *
+ * This command retrieves firmware configuration parameters and adapter
+ * resources available to the driver.
+ */
+struct sli4_rqst_cmn_query_fw_config {
+	struct sli4_rqst_hdr	hdr;
+};
+
+#define SLI4_FUNCTION_MODE_INI_MODE 0x40
+#define SLI4_FUNCTION_MODE_TGT_MODE 0x80
+#define SLI4_FUNCTION_MODE_DUA_MODE 0x800
+
+#define SLI4_ULP_MODE_INI           0x40
+#define SLI4_ULP_MODE_TGT           0x80
+
+struct sli4_rsp_cmn_query_fw_config {
+	struct sli4_rsp_hdr	hdr;
+	__le32		config_number;
+	__le32		asic_rev;
+	__le32		physical_port;
+	__le32		function_mode;
+	__le32		ulp0_mode;
+	__le32		ulp0_nic_wqid_base;
+	__le32		ulp0_nic_wq_total; /* DW10 */
+	__le32		ulp0_toe_wqid_base;
+	__le32		ulp0_toe_wq_total;
+	__le32		ulp0_toe_rqid_base;
+	__le32		ulp0_toe_rq_total;
+	__le32		ulp0_toe_defrqid_base;
+	__le32		ulp0_toe_defrq_total;
+	__le32		ulp0_lro_rqid_base;
+	__le32		ulp0_lro_rq_total;
+	__le32		ulp0_iscsi_icd_base;
+	__le32		ulp0_iscsi_icd_total; /* DW20 */
+	__le32		ulp1_mode;
+	__le32		ulp1_nic_wqid_base;
+	__le32		ulp1_nic_wq_total;
+	__le32		ulp1_toe_wqid_base;
+	__le32		ulp1_toe_wq_total;
+	__le32		ulp1_toe_rqid_base;
+	__le32		ulp1_toe_rq_total;
+	__le32		ulp1_toe_defrqid_base;
+	__le32		ulp1_toe_defrq_total;
+	__le32		ulp1_lro_rqid_base; /* DW30 */
+	__le32		ulp1_lro_rq_total;
+	__le32		ulp1_iscsi_icd_base;
+	__le32		ulp1_iscsi_icd_total;
+	__le32		function_capabilities;
+	__le32		ulp0_cq_base;
+	__le32		ulp0_cq_total;
+	__le32		ulp0_eq_base;
+	__le32		ulp0_eq_total;
+	__le32		ulp0_iscsi_chain_icd_base;
+	__le32		ulp0_iscsi_chain_icd_total; /* DW40 */
+	__le32		ulp1_iscsi_chain_icd_base;
+	__le32		ulp1_iscsi_chain_icd_total;
+};
+
+/* Port Types */
+enum {
+	PORT_TYPE_ETH	= 0,
+	PORT_TYPE_FC	= 1,
+};
+
+struct sli4_rqst_cmn_get_port_name {
+	struct sli4_rqst_hdr	hdr;
+	u8      port_type;
+	u8      rsvd4[3];
+};
+
+struct sli4_rsp_cmn_get_port_name {
+	struct sli4_rsp_hdr	hdr;
+	char		port_name[4];
+};
+
+struct sli4_rqst_cmn_write_flashrom {
+	struct sli4_rqst_hdr	hdr;
+	__le32		flash_rom_access_opcode;
+	__le32		flash_rom_access_operation_type;
+	__le32		data_buffer_size;
+	__le32		offset;
+	u8		data_buffer[4];
+};
+
+#define SLI4_MGMT_FLASHROM_OPCODE_FLASH			0x01
+#define SLI4_MGMT_FLASHROM_OPCODE_SAVE			0x02
+#define SLI4_MGMT_FLASHROM_OPCODE_CLEAR			0x03
+#define SLI4_MGMT_FLASHROM_OPCODE_REPORT		0x04
+#define SLI4_MGMT_FLASHROM_OPCODE_IMAGE_INFO		0x05
+#define SLI4_MGMT_FLASHROM_OPCODE_IMAGE_CRC		0x06
+#define SLI4_MGMT_FLASHROM_OPCODE_OFFSET_BASED_FLASH	0x07
+#define SLI4_MGMT_FLASHROM_OPCODE_OFFSET_BASED_SAVE	0x08
+#define SLI4_MGMT_PHY_FLASHROM_OPCODE_FLASH		0x09
+#define SLI4_MGMT_PHY_FLASHROM_OPCODE_SAVE		0x0a
+
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ISCSI		0x00
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_REDBOOT		0x01
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_BIOS		0x02
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_PXE_BIOS		0x03
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_CODE_CONTROL	0x04
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_IPSEC_CFG		0x05
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_INIT_DATA		0x06
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ROM_OFFSET	0x07
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_FC_BIOS		0x08
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ISCSI_BAK		0x09
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_FC_ACT		0x0a
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_FC_BAK		0x0b
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_CODE_CTRL_P	0x0c
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_NCSI		0x0d
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_NIC		0x0e
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_DCBX		0x0f
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_PXE_BIOS_CFG	0x10
+#define SLI4_FLASH_ROM_ACCESS_OP_TYPE_ALL_CFG_DATA	0x11
+
+/*
+ * COMMON_READ_TRANSCEIVER_DATA
+ *
+ * This command reads SFF transceiver data (format defined
+ * by the SFF-8472 specification).
+ */
+struct sli4_rqst_cmn_read_transceiver_data {
+	struct sli4_rqst_hdr	hdr;
+	__le32			page_number;
+	__le32			port;
+};
+
+struct sli4_rsp_cmn_read_transceiver_data {
+	struct sli4_rsp_hdr	hdr;
+	__le32			page_number;
+	__le32			port;
+	u8			page_data[128];
+	u8			page_data_2[128];
+};
+
+enum {
+	SLI4_REQ_DESIRE_READLEN = 0xFFFFFF
+};
+
+struct sli4_rqst_cmn_read_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_read_length_dword;
+	__le32			read_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde	host_buffer_descriptor[0];
+};
+
+enum {
+	RSP_COM_READ_OBJ_EOF = 0x80000000
+};
+
+struct sli4_rsp_cmn_read_object {
+	struct sli4_rsp_hdr	hdr;
+	__le32			actual_read_length;
+	__le32			eof_dword;
+};
+
+enum {
+	SLI4_RQ_DES_WRITE_LEN		= 0xFFFFFF,
+	SLI4_RQ_DES_WRITE_LEN_NOC	= 0x40000000,
+	SLI4_RQ_DES_WRITE_LEN_EOF	= 0x80000000,
+};
+
+struct sli4_rqst_cmn_write_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_write_len_dword;
+	__le32			write_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde	host_buffer_descriptor[0];
+};
+
+enum {
+	RSP_CHANGE_STATUS = 0xFF,
+};
+
+struct sli4_rsp_cmn_write_object {
+	struct sli4_rsp_hdr	hdr;
+	__le32			actual_write_length;
+	__le32			change_status_dword;
+};
+
+struct sli4_rqst_cmn_delete_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			rsvd4;
+	__le32			rsvd5;
+	u8			object_name[104];
+};
+
+enum {
+	SLI4_RQ_OBJ_LIST_READ_LEN = 0xFFFFFF,
+};
+
+struct sli4_rqst_cmn_read_object_list {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_read_length_dword;
+	__le32			read_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde	host_buffer_descriptor[0];
+};
+
+enum {
+	SLI4_RQ_COM_SET_DUMP_BUFFER_LEN	= 0xFFFFFF,
+	SLI4_RQ_COM_SET_DUMP_FDB	= 0x20000000,
+	SLI4_RQ_COM_SET_DUMP_BLP	= 0x40000000,
+	SLI4_RQ_COM_SET_DUMP_QRY	= 0x80000000,
+};
+
+struct sli4_rqst_cmn_set_dump_location {
+	struct sli4_rqst_hdr	hdr;
+	__le32			buffer_length_dword;
+	__le32			buf_addr_low;
+	__le32			buf_addr_high;
+};
+
+enum {
+	RSP_SET_DUMP_BUFFER_LEN = 0xFFFFFF,
+};
+
+struct sli4_rsp_cmn_set_dump_location {
+	struct sli4_rsp_hdr	hdr;
+	__le32			buffer_length_dword;
+};
+
+#define SLI4_SET_FEATURES_DIF_SEED			0x01
+#define SLI4_SET_FEATURES_XRI_TIMER			0x03
+#define SLI4_SET_FEATURES_MAX_PCIE_SPEED		0x04
+#define SLI4_SET_FEATURES_FCTL_CHECK			0x05
+#define SLI4_SET_FEATURES_FEC				0x06
+#define SLI4_SET_FEATURES_PCIE_RECV_DETECT		0x07
+#define SLI4_SET_FEATURES_DIF_MEMORY_MODE		0x08
+#define SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	0x09
+#define SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		0x0A
+#define SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI		0x0C
+#define SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	0x0D
+#define SLI4_SET_FEATURES_SET_FTD_XFER_HINT		0x0F
+#define SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		0x11
+
+struct sli4_rqst_cmn_set_features {
+	struct sli4_rqst_hdr	hdr;
+	__le32			feature;
+	__le32			param_len;
+	__le32			params[8];
+};
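+
+/*
+ * Each feature-specific structure below is carried in params[] above,
+ * with 'feature' set to the matching SLI4_SET_FEATURES_* code and
+ * 'param_len' set to that structure's size.
+ */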
+
+struct sli4_rqst_cmn_set_features_dif_seed {
+	__le16		seed;
+	__le16		rsvd16;
+};
+
+enum {
+	SLI4_RQ_COM_SET_T10_PI_MEM_MODEL = 0x1,
+};
+
+struct sli4_rqst_cmn_set_features_t10_pi_mem_model {
+	__le32		tmm_dword;
+};
+
+enum {
+	SLI4_RQ_MULTIRQ_ISR = 0x1,
+	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
+
+	SLI4_RQ_MULTIRQ_NUM_RQS = 0xFF,
+	SLI4_RQ_MULTIRQ_RQ_SELECT = 0xF00,
+};
+
+struct sli4_rqst_cmn_set_features_multirq {
+	__le32		auto_gen_xfer_dword;
+	__le32		num_rqs_dword;
+};
+
+enum {
+	SLI4_SETFEAT_XFERRDY_T10PI_RTC		= (1 << 0),
+	SLI4_SETFEAT_XFERRDY_T10PI_ATV		= (1 << 1),
+	SLI4_SETFEAT_XFERRDY_T10PI_TMM		= (1 << 2),
+	SLI4_SETFEAT_XFERRDY_T10PI_PTYPE	= (0x7 << 4),
+	SLI4_SETFEAT_XFERRDY_T10PI_BLKSIZ	= (0x7 << 7),
+};
+
+struct sli4_rqst_cmn_set_features_xfer_rdy_t10pi {
+	__le32		dw0_flags;
+	__le16		app_tag;
+	__le16		rsvd6;
+};
+
+enum {
+	SLI4_RQ_HEALTH_CHECK_ENABLE	= 0x1,
+	SLI4_RQ_HEALTH_CHECK_QUERY	= 0x2,
+};
+
+struct sli4_rqst_cmn_set_features_health_check {
+	__le32		health_check_dword;
+};
+
+struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint {
+	__le32		fdt_xfer_hint;
+};
+
+struct sli4_rqst_dmtf_exec_clp_cmd {
+	struct sli4_rqst_hdr	hdr;
+	__le32			cmd_buf_length;
+	__le32			resp_buf_length;
+	__le32			cmd_buf_addr_low;
+	__le32			cmd_buf_addr_high;
+	__le32			resp_buf_addr_low;
+	__le32			resp_buf_addr_high;
+};
+
+struct sli4_rsp_dmtf_exec_clp_cmd {
+	struct sli4_rsp_hdr	hdr;
+	__le32			rsvd4;
+	__le32			resp_length;
+	__le32			rsvd6;
+	__le32			rsvd7;
+	__le32			rsvd8;
+	__le32			rsvd9;
+	__le32			clp_status;
+	__le32			clp_detailed_status;
+};
+
+#define SLI4_PROTOCOL_FC		0x10
+#define SLI4_PROTOCOL_DEFAULT		0xff
+
+struct sli4_rspource_descriptor_v1 {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		rsvd16;
+	__le32		type_specific[0];
+};
+
+enum {
+	SLI4_PCIE_DESC_IMM		= 0x4000,
+	SLI4_PCIE_DESC_NOSV		= 0x8000,
+
+	SLI4_PCIE_DESC_PF_NO		= 0x3FF0000,
+
+	SLI4_PCIE_DESC_MISSN_ROLE	= 0xFF,
+	SLI4_PCIE_DESC_PCHG		= 0x8000000,
+	SLI4_PCIE_DESC_SCHG		= 0x10000000,
+	SLI4_PCIE_DESC_XCHG		= 0x20000000,
+	SLI4_PCIE_DESC_XROM		= 0xC0000000
+};
+
+struct sli4_pcie_resource_descriptor_v1 {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		imm_nosv_dword;
+	__le32		pf_number_dword;
+	__le32		rsvd3;
+	u8		sriov_state;
+	u8		pf_state;
+	u8		pf_type;
+	u8		rsvd4;
+	__le16		number_of_vfs;
+	__le16		rsvd5;
+	__le32		mission_roles_dword;
+	__le32		rsvd7[16];
+};
+
+struct sli4_rqst_cmn_get_function_config {
+	struct sli4_rqst_hdr  hdr;
+};
+
+struct sli4_rsp_cmn_get_function_config {
+	struct sli4_rsp_hdr	hdr;
+	__le32			desc_count;
+	__le32			desc[54];
+};
+
+/* Link Config Descriptor for link config functions */
+struct sli4_link_config_descriptor {
+	u8		link_config_id;
+	u8		rsvd1[3];
+	__le32		config_description[8];
+};
+
+#define MAX_LINK_DES	10
+
+struct sli4_rqst_cmn_get_reconfig_link_info {
+	struct sli4_rqst_hdr  hdr;
+};
+
+struct sli4_rsp_cmn_get_reconfig_link_info {
+	struct sli4_rsp_hdr	hdr;
+	u8			active_link_config_id;
+	u8			rsvd17;
+	u8			next_link_config_id;
+	u8			rsvd19;
+	__le32			link_configuration_descriptor_count;
+	struct sli4_link_config_descriptor
+				desc[MAX_LINK_DES];
+};
+
+enum {
+	SLI4_SET_RECONFIG_LINKID_NEXT	= 0xff,
+	SLI4_SET_RECONFIG_LINKID_FD	= (1 << 31),
+};
+
+struct sli4_rqst_cmn_set_reconfig_link_id {
+	struct sli4_rqst_hdr  hdr;
+	__le32			dw4_flags;
+};
+
+struct sli4_rsp_cmn_set_reconfig_link_id {
+	struct sli4_rsp_hdr	hdr;
+};
+
+struct sli4_rqst_lowlevel_set_watchdog {
+	struct sli4_rqst_hdr	hdr;
+	__le16			watchdog_timeout;
+	__le16			rsvd18;
+};
+
+struct sli4_rsp_lowlevel_set_watchdog {
+	struct sli4_rsp_hdr	hdr;
+	__le32			rsvd;
+};
+
+/* FC opcode (OPC) values */
+#define SLI4_OPC_WQ_CREATE		0x1
+#define SLI4_OPC_WQ_DESTROY		0x2
+#define SLI4_OPC_POST_SGL_PAGES		0x3
+#define SLI4_OPC_RQ_CREATE		0x5
+#define SLI4_OPC_RQ_DESTROY		0x6
+#define SLI4_OPC_READ_FCF_TABLE		0x8
+#define SLI4_OPC_POST_HDR_TEMPLATES	0xb
+#define SLI4_OPC_REDISCOVER_FCF		0x10
+
+/* Use the default CQ associated with the WQ */
+#define SLI4_CQ_DEFAULT 0xffff
+
+/*
+ * POST_SGL_PAGES
+ *
+ * Register the scatter gather list (SGL) memory and
+ * associate it with an XRI.
+ */
+struct sli4_rqst_post_sgl_pages {
+	struct sli4_rqst_hdr	hdr;
+	__le16			xri_start;
+	__le16			xri_count;
+	struct {
+		__le32		page0_low;
+		__le32		page0_high;
+		__le32		page1_low;
+		__le32		page1_high;
+	} page_set[10];
+};
+
+struct sli4_rsp_post_sgl_pages {
+	struct sli4_rsp_hdr	hdr;
+};
+
+struct sli4_rqst_post_hdr_templates {
+	struct sli4_rqst_hdr	hdr;
+	__le16			rpi_offset;
+	__le16			page_count;
+	struct sli4_dmaaddr	page_descriptor[0];
+};
+
+#define SLI4_HDR_TEMPLATE_SIZE		64
+
+/* The XRI associated with this IO is already active */
+#define SLI4_IO_CONTINUATION		(1 << 0)
+/* Automatically generate a good RSP frame */
+#define SLI4_IO_AUTO_GOOD_RESPONSE	(1 << 1)
+#define SLI4_IO_NO_ABORT		(1 << 2)
+/* Set the DNRX bit because no auto xfer rdy buffer is posted */
+#define SLI4_IO_DNRX			(1 << 3)
+
+
+enum sli4_callback {
+	SLI4_CB_LINK,
+	SLI4_CB_MAX,
+};
+
+enum sli4_link_status {
+	SLI_LINK_STATUS_UP,
+	SLI_LINK_STATUS_DOWN,
+	SLI_LINK_STATUS_NO_ALPA,
+	SLI_LINK_STATUS_MAX,
+};
+
+enum sli4_link_topology {
+	SLI_LINK_TOPO_NPORT = 1,
+	SLI_LINK_TOPO_LOOP,
+	SLI_LINK_TOPO_LOOPBACK_INTERNAL,
+	SLI_LINK_TOPO_LOOPBACK_EXTERNAL,
+	SLI_LINK_TOPO_NONE,
+	SLI_LINK_TOPO_MAX,
+};
+
+enum sli4_link_medium {
+	SLI_LINK_MEDIUM_ETHERNET,
+	SLI_LINK_MEDIUM_FC,
+	SLI_LINK_MEDIUM_MAX,
+};
+
+/* Driver specific structures */
+
+struct sli4_link_event {
+	enum sli4_link_status		status;
+	enum sli4_link_topology	topology;
+	enum sli4_link_medium		medium;
+	u32				speed;
+	u8				*loop_map;
+	u32				fc_id;
+};
+
+enum sli4_resource {
+	SLI_RSRC_VFI,
+	SLI_RSRC_VPI,
+	SLI_RSRC_RPI,
+	SLI_RSRC_XRI,
+	SLI_RSRC_FCFI,
+	SLI_RSRC_MAX,
+};
+
+struct sli4_extent {
+	u32		number;
+	u32		size;
+	u32		n_alloc;
+	u32		*base;
+	unsigned long	*use_map;
+	u32		map_size;
+};
+
+struct sli4_queue_info {
+	u16	max_qcount[SLI_QTYPE_MAX];
+	u32	max_qentries[SLI_QTYPE_MAX];
+	u16	count_mask[SLI_QTYPE_MAX];
+	u16	count_method[SLI_QTYPE_MAX];
+	u32	qpage_count[SLI_QTYPE_MAX];
+};
+
+#define	SLI_PCI_MAX_REGS		6
+struct sli4 {
+	void				*os;
+	struct pci_dev			*pcidev;
+	void __iomem			*reg[SLI_PCI_MAX_REGS];
+
+	u32				sli_rev;
+	u32				sli_family;
+	u32				if_type;
+
+	u16				asic_type;
+	u16				asic_rev;
+
+	u16				e_d_tov;
+	u16				r_a_tov;
+	struct sli4_queue_info	qinfo;
+	u16				link_module_type;
+	u8				rq_batch;
+	u16				rq_min_buf_size;
+	u32				rq_max_buf_size;
+	u8				topology;
+	u8				wwpn[8];
+	u8				wwnn[8];
+	u32				fw_rev[2];
+	u8				fw_name[2][16];
+	char				ipl_name[16];
+	u32				hw_rev[3];
+	u8				port_number;
+	char				port_name[2];
+	char				modeldesc[64];
+	char				bios_version_string[32];
+	/*
+	 * Tracks the port resources using the extents model. For
+	 * devices that don't implement extents (i.e.
+	 * has_extents == false), the code models each resource as
+	 * a single large extent.
+	 */
+	struct sli4_extent		extent[SLI_RSRC_MAX];
+	u32				features;
+	u32				has_extents:1,
+					auto_reg:1,
+					auto_xfer_rdy:1,
+					hdr_template_req:1,
+					perf_hint:1,
+					perf_wq_id_association:1,
+					cq_create_version:2,
+					mq_create_version:2,
+					high_login_mode:1,
+					sgl_pre_registered:1,
+					sgl_pre_registration_required:1,
+					t10_dif_inline_capable:1,
+					t10_dif_separate_capable:1;
+	u32				sge_supported_length;
+	u32				sgl_page_sizes;
+	u32				max_sgl_pages;
+	u32				wqe_size;
+
+	/*
+	 * Callback functions
+	 */
+	int				(*link)(void *ctx, void *event);
+	void				*link_arg;
+
+	struct efc_dma		bmbx;
+
+	/* Save pointer to physical memory descriptor for non-embedded
+	 * SLI_CONFIG commands for BMBX dumping purposes
+	 */
+	struct efc_dma		*bmbx_non_emb_pmd;
+
+	struct efc_dma		vpd_data;
+	u32				vpd_length;
+};
+
 #endif /* !_SLI4_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (2 preceding siblings ...)
  2019-12-20 22:36 ` [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  7:45   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
                   ` (28 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

It adds service routines to create mailbox commands, and APIs to
create, destroy and parse SLI-4 EQ, CQ, RQ and MQ queues.
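
For illustration only (not part of the patch), the calling pattern these
routines are built around looks roughly like the sketch below. The
sli_bmbx_command() poster and SLI4_BMBX_SIZE are hypothetical stand-ins for
the bootstrap mailbox helpers added elsewhere in the series;
sli_cmd_common_create_cq() and the sli4->bmbx buffer come from this series:

	/* sketch: build a CQ-create command in the BMBX buffer, then post it */
	if (sli_cmd_common_create_cq(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
				     qmem, eq_id) != EFC_SUCCESS)
		return EFC_FAIL;
	if (sli_bmbx_command(sli4))	/* hypothetical: post and wait */
		return EFC_FAIL;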

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/include/efc_common.h |   27 +
 drivers/scsi/elx/libefc_sli/sli4.c    | 1556 +++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h    |    9 +
 3 files changed, 1592 insertions(+)

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
index 3fc48876c531..c339b22c35b5 100644
--- a/drivers/scsi/elx/include/efc_common.h
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -22,4 +22,31 @@ struct efc_dma {
 	struct pci_dev	*pdev;
 };
 
+#define efc_log_crit(efc, fmt, args...) \
+		dev_crit(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_err(efc, fmt, args...) \
+		dev_err(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_warn(efc, fmt, args...) \
+		dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_info(efc, fmt, args...) \
+		dev_info(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_test(efc, fmt, args...) \
+		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_debug(efc, fmt, args...) \
+		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_assert(cond, ...) \
+	do { \
+		if (!(cond)) { \
+			pr_err("%s(%d) assertion (%s) failed\n", \
+				__FILE__, __LINE__, #cond); \
+			dump_stack(); \
+		} \
+	} while (0)
+
 #endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 29d33becd334..7061f7980fad 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -24,3 +24,1559 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
 	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
 	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
 };
+
+/* Convert queue type enum (SLI_QTYPE_*) into a string */
+static const char *SLI_QNAME[] = {
+	"Event Queue",
+	"Completion Queue",
+	"Mailbox Queue",
+	"Work Queue",
+	"Receive Queue",
+	"Undefined"
+};
+
+/*
+ * Write a SLI_CONFIG command to the provided buffer.
+ *
+ * @sli4 SLI context pointer.
+ * @buf Destination buffer for the command.
+ * @size Size of the destination buffer (buf).
+ * @length Length in bytes of the attached command.
+ * @dma DMA buffer for non-embedded commands.
+ */
+static void *
+sli_config_cmd_init(struct sli4 *sli4, void *buf,
+		    size_t size, u32 length,
+		    struct efc_dma *dma)
+{
+	struct sli4_cmd_sli_config *config = NULL;
+	u32 flags = 0;
+
+	if (length > sizeof(config->payload.embed) && !dma) {
+		efc_log_err(sli4, "Too big for an embedded cmd with len(%d)\n",
+			    length);
+		return NULL;
+	}
+
+	config = buf;
+
+	memset(buf, 0, size);
+
+	config->hdr.command = MBX_CMD_SLI_CONFIG;
+	if (!dma) {
+		flags |= SLI4_SLICONF_EMB;
+		config->dw1_flags = cpu_to_le32(flags);
+		config->payload_len = cpu_to_le32(length);
+		buf += offsetof(struct sli4_cmd_sli_config, payload.embed);
+		return buf;
+	}
+
+	flags = SLI4_SLICONF_PMDCMD_VAL_1;
+	flags &= ~SLI4_SLICONF_EMB;
+	config->dw1_flags = cpu_to_le32(flags);
+
+	config->payload.mem.addr.low = cpu_to_le32(lower_32_bits(dma->phys));
+	config->payload.mem.addr.high =	cpu_to_le32(upper_32_bits(dma->phys));
+	config->payload.mem.length =
+			cpu_to_le32(dma->size & SLI4_SLICONFIG_PMD_LEN);
+	config->payload_len = cpu_to_le32(dma->size);
+	/* save pointer to DMA for BMBX dumping purposes */
+	sli4->bmbx_non_emb_pmd = dma;
+	return dma->virt;
+}
+
+/*
+ * Write a COMMON_CREATE_CQ command.
+ *
+ * This creates a Version 2 message.
+ *
+ * Returns 0 on success, or non-zero otherwise.
+ */
+static int
+sli_cmd_common_create_cq(struct sli4 *sli4, void *buf, size_t size,
+			 struct efc_dma *qmem,
+			 u16 eq_id)
+{
+	struct sli4_rqst_cmn_create_cq_v2 *cqv2 = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 page_bytes = 0;
+	u32 num_pages = 0;
+	size_t cmd_size = 0;
+	u32 page_size = 0;
+	u32 n_cqe = 0;
+	u32 dw5_flags = 0;
+	u16 dw6w1_arm = 0;
+	__le32 len;
+
+	/* First calculate number of pages and the mailbox cmd length */
+	n_cqe = qmem->size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages = sli_page_count(qmem->size, page_bytes);
+
+	cmd_size = CFG_RQST_CMDSZ(cmn_create_cq_v2) + SZ_DMAADDR * num_pages;
+
+	cqv2 = sli_config_cmd_init(sli4, buf, size, cmd_size, NULL);
+	if (!cqv2)
+		return EFC_FAIL;
+
+	len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_v2,
+					 SZ_DMAADDR * num_pages);
+	sli_cmd_fill_hdr(&cqv2->hdr, CMN_CREATE_CQ, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V2, len);
+	cqv2->page_size = page_size;
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
+	cqv2->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_CMN_CREATE_CQ_V2_MAX_PAGES)
+		return EFC_FAIL;
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= CQ_CNT_VAL(LARGE);
+		cqv2->cqe_count = cpu_to_le16(n_cqe);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_CQV2_AUTOVALID;
+
+	dw5_flags |= CREATE_CQV2_EVT;
+	dw5_flags |= CREATE_CQV2_VALID;
+
+	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
+	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
+	cqv2->eq_id = cpu_to_le16(eq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
+		cqv2->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
+		cqv2->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/* Write a COMMON_CREATE_EQ command */
+static int
+sli_cmd_common_create_eq(struct sli4 *sli4, void *buf, size_t size,
+			 struct efc_dma *qmem)
+{
+	struct sli4_rqst_cmn_create_eq *eq;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+	u32 dw5_flags = 0;
+	u32 dw6_flags = 0, ver = CMD_V0;
+
+	eq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);
+	if (!eq)
+		return EFC_FAIL;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		ver = CMD_V2;
+
+	sli_cmd_fill_hdr(&eq->hdr, CMN_CREATE_EQ, SLI4_SUBSYSTEM_COMMON,
+			 ver, CFG_RQST_PYLD_LEN(cmn_create_eq));
+
+	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	eq->num_pages = cpu_to_le16(num_pages);
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(1024);
+		break;
+	case 2:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(2048);
+		break;
+	case 4:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(4096);
+		break;
+	default:
+		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_EQ_AUTOVALID;
+
+	dw5_flags |= CREATE_EQ_VALID;
+	dw6_flags &= (~CREATE_EQ_ARM);
+	eq->dw5_flags = cpu_to_le32(dw5_flags);
+	eq->dw6_flags = cpu_to_le32(dw6_flags);
+	eq->dw7_delaymulti = cpu_to_le32(CREATE_EQ_DELAYMULTI);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
+		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_create_mq_ext(struct sli4 *sli4, void *buf, size_t size,
+			     struct efc_dma *qmem,
+			     u16 cq_id)
+{
+	struct sli4_rqst_cmn_create_mq_ext *mq;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+	u16 dw6w1_flags = 0;
+
+	mq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_create_mq_ext),
+				 NULL);
+	if (!mq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&mq->hdr, CMN_CREATE_MQ_EXT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_create_mq_ext));
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	mq->num_pages = cpu_to_le16(num_pages);
+	switch (num_pages) {
+	case 1:
+		dw6w1_flags |= SLI4_MQE_SIZE_16;
+		break;
+	case 2:
+		dw6w1_flags |= SLI4_MQE_SIZE_32;
+		break;
+	case 4:
+		dw6w1_flags |= SLI4_MQE_SIZE_64;
+		break;
+	case 8:
+		dw6w1_flags |= SLI4_MQE_SIZE_128;
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
+
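+	/* MQ_CREATE_EXT V1 carries the CQ id in its own field; V0 packs it in dw6 */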
+	if (sli4->mq_create_version) {
+		mq->cq_id_v1 = cpu_to_le16(cq_id);
+		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
+	} else {
+		dw6w1_flags |= (cq_id << CREATE_MQEXT_CQID_SHIFT);
+	}
+	mq->dw7_val = cpu_to_le32(CREATE_MQEXT_VAL);
+
+	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		mq->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
+		mq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *qmem, u16 cq_id)
+{
+	struct sli4_rqst_wq_create *wq;
+	u32 p;
+	uintptr_t addr;
+	u32 page_size = 0;
+	u32 page_bytes = 0;
+	u32 n_wqe = 0;
+	u16 num_pages;
+
+	wq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(wq_create), NULL);
+	if (!wq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&wq->hdr, SLI4_OPC_WQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V1, CFG_RQST_PYLD_LEN(wq_create));
+	n_wqe = qmem->size / sli4->wqe_size;
+
+	switch (qmem->size) {
+	case 4096:
+	case 8192:
+	case 16384:
+	case 32768:
+		page_size = 1;
+		break;
+	case 65536:
+		page_size = 2;
+		break;
+	case 131072:
+		page_size = 4;
+		break;
+	case 262144:
+		page_size = 8;
+		break;
+	case 524288:
+		page_size = 10;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	page_bytes = page_size * SLI_PAGE_SIZE;
+
+	/* valid values for number of pages (num_pages): 1-8 */
+	num_pages = sli_page_count(qmem->size, page_bytes);
+	wq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_WQ_CREATE_MAX_PAGES)
+		return EFC_FAIL;
+
+	wq->cq_id = cpu_to_le16(cq_id);
+
+	wq->page_size = page_size;
+
+	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
+		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
+	else
+		wq->wqe_size_byte |= SLI4_WQE_SIZE;
+
+	wq->wqe_count = cpu_to_le16(n_wqe);
+
+	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
+		wq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		wq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
+		  struct efc_dma *qmem,
+		  u16 cq_id, u16 buffer_size)
+{
+	struct sli4_rqst_rq_create *rq;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_create), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, CFG_RQST_PYLD_LEN(rq_create));
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_RQ_CREATE_V0_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the log base 2 of the total number of entries
+	 */
+	rq->rqe_count_byte |= 31 - __builtin_clz(qmem->size / SLI4_RQE_SIZE);
+
+	if (buffer_size < SLI4_RQ_CREATE_V0_MIN_BUF_SIZE ||
+	    buffer_size > SLI4_RQ_CREATE_V0_MAX_BUF_SIZE) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+			    buffer_size, SLI4_RQ_CREATE_V0_MIN_BUF_SIZE,
+			    SLI4_RQ_CREATE_V0_MAX_BUF_SIZE);
+		return EFC_FAIL;
+	}
+	rq->buffer_size = cpu_to_le16(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *qmem, u16 cq_id,
+		     u16 buffer_size)
+{
+	struct sli4_rqst_rq_create_v1 *rq;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_create_v1), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V1, CFG_RQST_PYLD_LEN(rq_create_v1));
+	/* Disable "no buffer warnings" to avoid Lancer bug */
+	rq->dim_dfd_dnb |= SLI4_RQ_CREATE_V1_DNB;
+
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_RQ_CREATE_V1_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid, max %d\n",
+			num_pages, SLI4_RQ_CREATE_V1_MAX_PAGES);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the total number of entries (note not lg2(# entries))
+	 */
+	rq->rqe_count = cpu_to_le16(qmem->size / SLI4_RQE_SIZE);
+
+	rq->rqe_size_byte |= SLI4_RQE_SIZE_8;
+
+	rq->page_size = SLI4_RQ_PAGE_SIZE_4096;
+
+	if (buffer_size < sli4->rq_min_buf_size ||
+	    buffer_size > sli4->rq_max_buf_size) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+			    buffer_size, sli4->rq_min_buf_size,
+			    sli4->rq_max_buf_size);
+		return EFC_FAIL;
+	}
+	rq->buffer_size = cpu_to_le32(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_rq_create_v2(struct sli4 *sli4, u32 num_rqs,
+		     struct sli4_queue *qs[], u32 base_cq_id,
+		     u32 header_buffer_size,
+		     u32 payload_buffer_size, struct efc_dma *dma)
+{
+	struct sli4_rqst_rq_create_v2 *req = NULL;
+	u32 i, p, offset = 0;
+	u32 payload_size, page_count;
+	uintptr_t addr;
+	u32 num_pages;
+	__le32 req_len;
+
+	page_count = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE) * num_rqs;
+
+	/* Payload length must accommodate both request and response */
+	payload_size = max(CFG_RQST_CMDSZ(rq_create_v2) +
+			   SZ_DMAADDR * page_count,
+			   sizeof(struct sli4_rsp_cmn_create_queue_set));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+			       payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	req_len = CFG_RQST_PYLD_LEN_VAR(rq_create_v2, SZ_DMAADDR * page_count);
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V2, req_len);
+	/* Fill Payload fields */
+	req->dim_dfd_dnb  |= SLI4_RQCREATEV2_DNB;
+	num_pages = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE);
+	req->num_pages	   = cpu_to_le16(num_pages);
+	req->rqe_count     = cpu_to_le16(qs[0]->dma.size / SLI4_RQE_SIZE);
+	req->rqe_size_byte |= SLI4_RQE_SIZE_8;
+	req->page_size     = SLI4_RQ_PAGE_SIZE_4096;
+	req->rq_count      = num_rqs;
+	req->base_cq_id    = cpu_to_le16(base_cq_id);
+	req->hdr_buffer_size     = cpu_to_le16(header_buffer_size);
+	req->payload_buffer_size = cpu_to_le16(payload_buffer_size);
+
+	for (i = 0; i < num_rqs; i++) {
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages;
+		     p++, addr += SLI_PAGE_SIZE) {
+			req->page_phys_addr[offset].low =
+					cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+					cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+__sli_queue_destroy(struct sli4 *sli4, struct sli4_queue *q)
+{
+	if (!q->dma.size)
+		return;
+
+	dma_free_coherent(&sli4->pcidev->dev, q->dma.size,
+			  q->dma.virt, q->dma.phys);
+}
+
+int
+__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q,
+		 u32 qtype, size_t size, u32 n_entries,
+		      u32 align)
+{
+	if (!q->dma.virt || size != q->size ||
+	    n_entries != q->length) {
+		if (q->dma.size)
+			__sli_queue_destroy(sli4, q);
+
+		memset(q, 0, sizeof(struct sli4_queue));
+
+		q->dma.size = size * n_entries;
+		q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev,
+						 q->dma.size, &q->dma.phys,
+						 GFP_DMA);
+		if (!q->dma.virt) {
+			memset(&q->dma, 0, sizeof(struct efc_dma));
+			efc_log_err(sli4, "%s allocation failed\n",
+				    SLI_QNAME[qtype]);
+			return EFC_FAIL;
+		}
+
+		memset(q->dma.virt, 0, size * n_entries);
+
+		spin_lock_init(&q->lock);
+
+		q->type = qtype;
+		q->size = size;
+		q->length = n_entries;
+
+		if (q->type == SLI_QTYPE_EQ || q->type == SLI_QTYPE_CQ) {
+			/* For prism, phase will be flipped after
+			 * a sweep through eq and cq
+			 */
+			q->phase = 1;
+		}
+
+		/* Limit to half the queue size per interrupt */
+		q->proc_limit = n_entries / 2;
+
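+		/* Number of consumed entries to accumulate before updating the doorbell */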
+		switch (q->type) {
+		case SLI_QTYPE_EQ:
+			q->posted_limit = q->length / 2;
+			break;
+		default:
+			q->posted_limit = 64;
+			break;
+		}
+	} else {
+		efc_log_err(sli4, "%s failed\n", __func__);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q,
+		u32 n_entries, u32 buffer_size,
+		struct sli4_queue *cq, bool is_hdr)
+{
+	if (__sli_queue_init(sli4, q, SLI_QTYPE_RQ, SLI4_RQE_SIZE,
+			     n_entries, SLI_PAGE_SIZE))
+		return EFC_FAIL;
+
+	if (!sli_cmd_rq_create_v1(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  &q->dma, cq->id, buffer_size)) {
+		if (__sli_create_queue(sli4, q)) {
+			efc_log_info(sli4, "Create queue failed %d\n", q->id);
+			goto error;
+		}
+		if (is_hdr && (q->id & 1)) {
+			efc_log_info(sli4, "bad header RQ_ID %d\n", q->id);
+			goto error;
+		} else if (!is_hdr && (q->id & 1) == 0) {
+			efc_log_info(sli4, "bad data RQ_ID %d\n", q->id);
+			goto error;
+		}
+	} else {
+		goto error;
+	}
+	if (is_hdr)
+		q->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
+	else
+		q->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
+	return EFC_SUCCESS;
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+int
+sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs,
+		    struct sli4_queue *qs[], u32 base_cq_id,
+		    u32 n_entries, u32 header_buffer_size,
+		    u32 payload_buffer_size)
+{
+	u32 i;
+	struct efc_dma dma = {0};
+	struct sli4_rsp_cmn_create_queue_set *rsp = NULL;
+	void __iomem *db_regaddr = NULL;
+	u32 num_rqs = num_rq_pairs * 2;
+
+	for (i = 0; i < num_rqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_RQ,
+				     SLI4_RQE_SIZE, n_entries,
+				     SLI_PAGE_SIZE)) {
+			goto error;
+		}
+	}
+
+	if (sli_cmd_rq_create_v2(sli4, num_rqs, qs, base_cq_id,
+			       header_buffer_size, payload_buffer_size, &dma)) {
+		goto error;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_err(sli4, "bootstrap mailbox write failed RQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_RQ_DB_REG;
+
+	rsp = dma.virt;
+	if (rsp->hdr.status) {
+		efc_log_err(sli4, "bad create RQSet status=%#x addl=%#x\n",
+		       rsp->hdr.status, rsp->hdr.additional_status);
+		goto error;
+	} else {
+		for (i = 0; i < num_rqs; i++) {
+			qs[i]->id = i + le16_to_cpu(rsp->q_id);
+			if ((qs[i]->id & 1) == 0)
+				qs[i]->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
+			else
+				qs[i]->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
+
+			qs[i]->db_regaddr = db_regaddr;
+		}
+	}
+
+	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_rqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+static int
+sli_res_sli_config(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_sli_config *sli_config = buf;
+
+	/* sanity check */
+	if (!buf || sli_config->hdr.command != MBX_CMD_SLI_CONFIG) {
+		efc_log_err(sli4, "bad parameter buf=%p cmd=%#x\n", buf,
+			    buf ? sli_config->hdr.command : -1);
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(sli_config->hdr.status))
+		return le16_to_cpu(sli_config->hdr.status);
+
+	if (le32_to_cpu(sli_config->dw1_flags) & SLI4_SLICONF_EMB)
+		return sli_config->payload.embed[4];
+
+	efc_log_info(sli4, "external buffers not supported\n");
+	return EFC_FAIL;
+}
+
+int
+__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q)
+{
+	struct sli4_rsp_cmn_create_queue *res_q = NULL;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail %s\n",
+			SLI_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status create %s\n",
+		       SLI_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	res_q = (void *)((u8 *)sli4->bmbx.virt +
+			offsetof(struct sli4_cmd_sli_config, payload));
+
+	if (res_q->hdr.status) {
+		efc_log_err(sli4, "bad create %s status=%#x addl=%#x\n",
+		       SLI_QNAME[q->type], res_q->hdr.status,
+			    res_q->hdr.additional_status);
+		return EFC_FAIL;
+	}
+	q->id = le16_to_cpu(res_q->q_id);
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_EQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_EQCQ_DB_REG;
+		break;
+	case SLI_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_EQCQ_DB_REG;
+		break;
+	case SLI_QTYPE_MQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_MQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_MQ_DB_REG;
+		break;
+	case SLI_QTYPE_RQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_RQ_DB_REG;
+		break;
+	case SLI_QTYPE_WQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_WQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_IO_WQ_DB_REG;
+		break;
+	default:
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype)
+{
+	u32 size = 0;
+
+	switch (qtype) {
+	case SLI_QTYPE_EQ:
+		size = sizeof(u32);
+		break;
+	case SLI_QTYPE_CQ:
+		size = 16;
+		break;
+	case SLI_QTYPE_MQ:
+		size = 256;
+		break;
+	case SLI_QTYPE_WQ:
+		size = sli4->wqe_size;
+		break;
+	case SLI_QTYPE_RQ:
+		size = SLI4_RQE_SIZE;
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		return -1;
+	}
+	return size;
+}
+
+int
+sli_queue_alloc(struct sli4 *sli4, u32 qtype,
+		struct sli4_queue *q, u32 n_entries,
+		     struct sli4_queue *assoc)
+{
+	int size;
+	u32 align = 0;
+
+	/* get queue size */
+	size = sli_get_queue_entry_size(sli4, qtype);
+	if (size < 0)
+		return EFC_FAIL;
+	align = SLI_PAGE_SIZE;
+
+	if (__sli_queue_init(sli4, q, qtype, size, n_entries, align)) {
+		efc_log_err(sli4, "%s allocation failed\n",
+			    SLI_QNAME[qtype]);
+		return EFC_FAIL;
+	}
+
+	switch (qtype) {
+	case SLI_QTYPE_EQ:
+		if (!sli_cmd_common_create_eq(sli4, sli4->bmbx.virt,
+					     SLI4_BMBX_SIZE, &q->dma)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+
+		break;
+	case SLI_QTYPE_CQ:
+		if (!sli_cmd_common_create_cq(sli4, sli4->bmbx.virt,
+					     SLI4_BMBX_SIZE, &q->dma,
+						assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+		break;
+	case SLI_QTYPE_MQ:
+		assoc->u.flag.dword |= SLI4_QUEUE_FLAG_MQ;
+		if (!sli_cmd_common_create_mq_ext(sli4, sli4->bmbx.virt,
+						  SLI4_BMBX_SIZE, &q->dma,
+						  assoc->id)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+
+		break;
+	case SLI_QTYPE_WQ:
+		if (!sli_cmd_wq_create(sli4, sli4->bmbx.virt,
+					 SLI4_BMBX_SIZE, &q->dma,
+					assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		goto error;
+	}
+
+	return EFC_SUCCESS;
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+static int sli_cmd_cq_set_create(struct sli4 *sli4,
+				 struct sli4_queue *qs[], u32 num_cqs,
+				 struct sli4_queue *eqs[],
+				 struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_create_cq_set_v0 *req = NULL;
+	uintptr_t addr;
+	u32 i, offset = 0,  page_bytes = 0, payload_size;
+	u32 p = 0, page_size = 0, n_cqe = 0, num_pages_cq;
+	u32 dw5_flags = 0;
+	u16 dw6w1_flags = 0;
+	__le32 req_len;
+
+	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages_cq = sli_page_count(qs[0]->dma.size, page_bytes);
+	payload_size = max(CFG_RQST_CMDSZ(cmn_create_cq_set_v0) +
+			   (SZ_DMAADDR * num_pages_cq * num_cqs),
+			   sizeof(struct sli4_rsp_cmn_create_queue_set));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	req_len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_set_v0,
+					SZ_DMAADDR * num_pages_cq * num_cqs);
+	sli_cmd_fill_hdr(&req->hdr, CMN_CREATE_CQ_SET, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, req_len);
+	req->page_size = page_size;
+
+	req->num_pages = cpu_to_le16(num_pages_cq);
+	switch (num_pages_cq) {
+	case 1:
+		dw5_flags |= CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= CQ_CNT_VAL(LARGE);
+		dw6w1_flags |= (n_cqe & CREATE_CQSETV0_CQE_COUNT);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages_cq);
+		return EFC_FAIL;
+	}
+
+	dw5_flags |= CREATE_CQSETV0_EVT;
+	dw5_flags |= CREATE_CQSETV0_VALID;
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_CQSETV0_AUTOVALID;
+
+	dw6w1_flags &= (~CREATE_CQSETV0_ARM);
+
+	req->dw5_flags = cpu_to_le32(dw5_flags);
+	req->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+
+	req->num_cq_req = cpu_to_le16(num_cqs);
+
+	/* Fill page addresses of all the CQs. */
+	for (i = 0; i < num_cqs; i++) {
+		req->eq_id[i] = cpu_to_le16(eqs[i]->id);
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages_cq;
+		     p++, addr += page_bytes) {
+			req->page_phys_addr[offset].low =
+				cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+				cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[],
+		 u32 num_cqs, u32 n_entries, struct sli4_queue *eqs[])
+{
+	u32 i;
+	struct efc_dma dma = {0};
+	struct sli4_rsp_cmn_create_queue_set *res = NULL;
+	void __iomem *db_regaddr = NULL;
+
+	/* Align the queue DMA memory */
+	for (i = 0; i < num_cqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_CQ,
+				     SLI4_CQE_BYTES,
+					  n_entries, SLI_PAGE_SIZE)) {
+			efc_log_err(sli4, "Queue init failed.\n");
+			goto error;
+		}
+	}
+
+	if (sli_cmd_cq_set_create(sli4, qs, num_cqs, eqs, &dma))
+		goto error;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail CQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
+
+	res = dma.virt;
+	if (res->hdr.status) {
+		efc_log_err(sli4, "bad create CQSet status=%#x addl=%#x\n",
+		       res->hdr.status, res->hdr.additional_status);
+		goto error;
+	} else {
+		/* Check if we got all requested CQs. */
+		if (le16_to_cpu(res->num_q_allocated) != num_cqs) {
+			efc_log_crit(sli4, "Requested CQ count doesn't match.\n");
+			goto error;
+		}
+		/* Fill the resp cq ids. */
+		for (i = 0; i < num_cqs; i++) {
+			qs[i]->id = le16_to_cpu(res->q_id) + i;
+			qs[i]->db_regaddr = db_regaddr;
+		}
+	}
+
+	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_cqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+static int
+sli_cmd_common_destroy_q(struct sli4 *sli4, u8 opc, u8 subsystem, u16 q_id)
+{
+	struct sli4_rqst_cmn_destroy_q *req = NULL;
+
+	/* Payload length must accommodate both request and response */
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_destroy_q), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, opc, subsystem,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_destroy_q));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_free(struct sli4 *sli4, struct sli4_queue *q,
+	       u32 destroy_queues, u32 free_memory)
+{
+	int rc = EFC_SUCCESS;
+	u8 opcode, subsystem;
+	struct sli4_rsp_hdr *res;
+
+	if (!q) {
+		efc_log_err(sli4, "bad parameter sli4=%p q=%p\n", sli4, q);
+		return EFC_FAIL;
+	}
+
+	if (!destroy_queues)
+		goto free_mem;
+
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		opcode = CMN_DESTROY_EQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI_QTYPE_CQ:
+		opcode = CMN_DESTROY_CQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI_QTYPE_MQ:
+		opcode = CMN_DESTROY_MQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI_QTYPE_WQ:
+		opcode = SLI4_OPC_WQ_DESTROY;
+		subsystem = SLI4_SUBSYSTEM_FC;
+		break;
+	case SLI_QTYPE_RQ:
+		opcode = SLI4_OPC_RQ_DESTROY;
+		subsystem = SLI4_SUBSYSTEM_FC;
+		break;
+	default:
+		efc_log_info(sli4, "bad queue type %d\n", q->type);
+		return EFC_FAIL;
+	}
+
+	rc = sli_cmd_common_destroy_q(sli4, opcode, subsystem, q->id);
+	if (rc)
+		goto free_mem;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox fail destroy %s\n",
+			     SLI_QNAME[q->type]);
+		rc = EFC_FAIL;
+	} else if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status %s\n", SLI_QNAME[q->type]);
+		rc = EFC_FAIL;
+	} else {
+		res = (void *)((u8 *)sli4->bmbx.virt +
+				offsetof(struct sli4_cmd_sli_config, payload));
+
+		if (res->status) {
+			efc_log_err(sli4, "destroy %s st=%#x addl=%#x\n",
+				    SLI_QNAME[q->type], res->status,
+				    res->additional_status);
+			rc = EFC_FAIL;
+		}
+	}
+
+free_mem:
+	if (free_memory)
+		__sli_queue_destroy(sli4, q);
+
+	return rc;
+}
+
+int
+sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
+	else
+		val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
+
+	writel(val, q->db_regaddr);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
+		else
+			val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	case SLI_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = SLI4_IF6_CQ_DOORBELL(q->n_posted, q->id, a);
+		else
+			val = SLI4_CQ_DOORBELL(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	default:
+		efc_log_info(sli4, "should only be used for EQ/CQ, not %s\n",
+			SLI_QNAME[q->type]);
+	}
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_wq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
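+	/* If the adapter requests it, stamp the WQE with the id of the WQ it is posted to */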
+	if (sli4->perf_wq_id_association)
+		sli_set_wq_id_association(entry, q->id);
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		/* non-dpp write for iftype = 6 */
+		val = SLI4_WQ_DOORBELL(q->n_posted, 0, q->id);
+	else
+		val = SLI4_WQ_DOORBELL(q->n_posted, q->index, q->id);
+
+	writel(val, q->db_regaddr);
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+
+	return qindex;
+}
+
+int
+sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&q->lock, flags);
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	val = SLI4_MQ_DOORBELL(q->n_posted, q->id);
+	writel(val, q->db_regaddr);
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return qindex;
+}
+
+int
+sli_rq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex, n_posted;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	n_posted = q->n_posted;
+
+	/*
+	 * In RQ-pair, an RQ either contains the FC header
+	 * (i.e. is_hdr == TRUE) or the payload.
+	 *
+	 * Don't ring doorbell for payload RQ
+	 */
+	if (!(q->u.flag.dword & SLI4_QUEUE_FLAG_HDR))
+		goto skip;
+
+	/*
+	 * Some RQ cannot be incremented one entry at a time.
+	 * Instead, the driver collects a number of entries
+	 * and updates the RQ in batches.
+	 */
+	if (q->u.flag.dword & SLI4_QUEUE_FLAG_RQBATCH) {
+		if (((q->index + q->n_posted) %
+		    SLI4_QUEUE_RQ_BATCH)) {
+			goto skip;
+		}
+		n_posted = SLI4_QUEUE_RQ_BATCH;
+	}
+
+	val = SLI4_RQ_DOORBELL(n_posted, q->id);
+	writel(val, q->db_regaddr);
+skip:
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+
+	return qindex;
+}
+
+int
+sli_eq_read(struct sli4 *sli4,
+	    struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long flags = 0;
+	u8 clear = false, valid = false;
+	u16 wflags = 0;
+
+	clear = (sli4->if_type == SLI4_INTF_IF_TYPE_6) ? false : true;
+
+	qindex = &q->index;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if eqe is valid */
+	wflags = le16_to_cpu(((struct sli4_eqe *)qe)->dw0w0_flags);
+	valid = ((wflags & SLI4_EQE_VALID) == q->phase);
+	if (!valid) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (valid && clear) {
+		wflags &= ~SLI4_EQE_VALID;
+		((struct sli4_eqe *)qe)->dw0w0_flags =
+						cpu_to_le16(wflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cq_read(struct sli4 *sli4,
+	    struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long	flags = 0;
+	u8 clear = false;
+	u32 dwflags = 0;
+	bool valid = false, valid_bit_set = false;
+
+	clear = (sli4->if_type == SLI4_INTF_IF_TYPE_6) ? false : true;
+
+	qindex = &q->index;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if cqe is valid */
+	dwflags = le32_to_cpu(((struct sli4_mcqe *)qe)->dw3_flags);
+	valid_bit_set = (dwflags & SLI4_MCQE_VALID) != 0;
+
+	valid = (valid_bit_set == q->phase);
+	if (!valid) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (valid && clear) {
+		dwflags &= ~SLI4_MCQE_VALID;
+		((struct sli4_mcqe *)qe)->dw3_flags =
+					cpu_to_le32(dwflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_mq_read(struct sli4 *sli4,
+	    struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long flags = 0;
+
+	qindex = &q->u.r_idx;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if mqe is valid */
+	if (q->index == q->u.r_idx) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_index(struct sli4 *sli4, struct sli4_queue *q)
+{
+	if (q)
+		return q->index;
+	else
+		return -1;
+}
+
+int
+sli_queue_poke(struct sli4 *sli4, struct sli4_queue *q,
+	       u32 index, u8 *entry)
+{
+	int rc;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&q->lock, flags);
+	rc = _sli_queue_poke(sli4, q, index, entry);
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return rc;
+}
+
+int
+_sli_queue_poke(struct sli4 *sli4, struct sli4_queue *q,
+		u32 index, u8 *entry)
+{
+	int rc = 0;
+	u8 *qe = q->dma.virt;
+
+	if (index >= q->length)
+		return -1;
+
+	qe += index * q->size;
+
+	if (entry)
+		memcpy(qe, entry, q->size);
+
+	return rc;
+}
+
+int
+sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id)
+{
+	struct sli4_eqe *eqe = (void *)buf;
+	int rc = EFC_SUCCESS;
+	u16 flags = 0;
+	u16 majorcode;
+	u16 minorcode;
+
+	if (!buf || !cq_id) {
+		efc_log_err(sli4, "bad parameters sli4=%p buf=%p cq_id=%p\n",
+		       sli4, buf, cq_id);
+		return -1;
+	}
+
+	flags = le16_to_cpu(eqe->dw0w0_flags);
+	majorcode = (flags & SLI4_EQE_MJCODE) >> 1;
+	minorcode = (flags & SLI4_EQE_MNCODE) >> 4;
+	switch (majorcode) {
+	case SLI4_MAJOR_CODE_STANDARD:
+		*cq_id = le16_to_cpu(eqe->resource_id);
+		break;
+	case SLI4_MAJOR_CODE_SENTINEL:
+		efc_log_info(sli4, "sentinel EQE\n");
+		rc = EFC_FAIL;
+		break;
+	default:
+		efc_log_info(sli4, "Unsupported EQE: major %x minor %x\n",
+			majorcode, minorcode);
+		rc = EFC_FAIL;
+	}
+
+	return rc;
+}
+
+/* Parse a CQ entry to retrieve the event type and the associated queue */
+int
+sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
+	     enum sli4_qentry *etype, u16 *q_id)
+{
+	int rc = EFC_SUCCESS;
+
+	if (!cq || !cqe || !etype) {
+		efc_log_err(sli4, "bad params sli4=%p cq=%p cqe=%p etype=%p q_id=%p\n",
+		       sli4, cq, cqe, etype, q_id);
+		return -1;
+	}
+
+	if (cq->u.flag.dword & SLI4_QUEUE_FLAG_MQ) {
+		struct sli4_mcqe	*mcqe = (void *)cqe;
+
+		if (le32_to_cpu(mcqe->dw3_flags) & SLI4_MCQE_AE) {
+			*etype = SLI_QENTRY_ASYNC;
+		} else {
+			*etype = SLI_QENTRY_MQ;
+			rc = sli_cqe_mq(sli4, mcqe);
+		}
+		*q_id = -1;
+	} else {
+		rc = sli_fc_cqe_parse(sli4, cq, cqe, etype, q_id);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index c9bd3f71b27b..1846a28d5fd8 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -3730,4 +3730,13 @@ struct sli4 {
 	u32				vpd_length;
 };
 
+/* Fill the common SLI-4 request header for SLI_CONFIG command payloads */
+static inline void
+sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
+{
+	hdr->opcode = opc;
+	hdr->subsystem = sub;
+	hdr->dw3_version = cpu_to_le32(ver);
+	hdr->request_length = len;
+}
+
 #endif /* !_SLI4_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (3 preceding siblings ...)
  2019-12-20 22:36 ` [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  7:54   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
                   ` (27 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines to create different WQEs and adds
APIs to issue iread, iwrite, treceive, tsend and other work queue
entries.
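
As a hedged illustration (not part of the patch itself), a caller would
typically build a WQE into a scratch buffer with one of these routines and
then post it with sli_wq_write(). The function name, "wq", "abort_xri" and
"io_tag" below are assumptions made only for this sketch, and the buffer is
sized to SLI4_WQE_EXT_BYTES on the assumption that it is the larger of the
two supported WQE sizes:

    /* Sketch only: abort an exchange by XRI on an already-created WQ */
    static int example_post_abort(struct sli4 *sli4, struct sli4_queue *wq,
                                  u16 abort_xri, u16 io_tag, u16 cq_id)
    {
            u8 wqe[SLI4_WQE_EXT_BYTES] = { 0 };

            if (sli_abort_wqe(sli4, wqe, sizeof(wqe), SLI_ABORT_XRI,
                              false, abort_xri, 0, io_tag, cq_id))
                    return EFC_FAIL;

            /* copies the WQE into the queue and rings the WQ doorbell */
            sli_wq_write(sli4, wq, wqe);
            return EFC_SUCCESS;
    }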

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1717 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |    2 +
 2 files changed, 1719 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 7061f7980fad..2ebe0235bc9e 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -1580,3 +1580,1720 @@ sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
 
 	return rc;
 }
+
+/* Write an ABORT_WQE work queue entry */
+int
+sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
+	      enum sli4_abort_type type, bool send_abts, u32 ids,
+	      u32 mask, u16 tag, u16 cq_id)
+{
+	struct sli4_abort_wqe	*abort = buf;
+
+	memset(buf, 0, size);
+
+	switch (type) {
+	case SLI_ABORT_XRI:
+		abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
+		if (mask) {
+			efc_log_warn(sli4, "non-zero mask %#x aborting XRI %#x\n",
+				     mask, ids);
+			mask = 0;
+		}
+		break;
+	case SLI_ABORT_ABORT_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_ABORT_TAG;
+		break;
+	case SLI_ABORT_REQUEST_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_REQUEST_TAG;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported type %#x\n", type);
+		return EFC_FAIL;
+	}
+
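+	/* Set the IA bit to inhibit sending an ABTS when send_abts is false */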
+	abort->ia_ir_byte |= send_abts ? 0 : 1;
+
+	/* Suppress ABTS retries */
+	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
+
+	abort->t_mask = cpu_to_le32(mask);
+	abort->t_tag  = cpu_to_le32(ids);
+	abort->command = SLI4_WQE_ABORT;
+	abort->request_tag = cpu_to_le16(tag);
+
+	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
+
+	abort->cq_id = cpu_to_le16(cq_id);
+	abort->cmdtype_wqec_byte |= SLI4_CMD_ABORT_WQE;
+
+	return EFC_SUCCESS;
+}
+
+/* Write an ELS_REQUEST64_WQE work queue entry */
+int
+sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *sgl,
+		      u8 req_type, u32 req_len, u32 max_rsp_len,
+		      u8 timeout, u16 xri, u16 tag,
+		      u16 cq_id, u16 rnodeindicator, u16 sportindicator,
+		      bool hlm, bool rnodeattached, u32 rnode_fcid,
+		      u32 sport_fcid)
+{
+	struct sli4_els_request64_wqe	*els = buf;
+	struct sli4_sge *sge = sgl->virt;
+	bool is_fabric = false;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, size);
+
+	bptr = &els->els_request_payload;
+	if (sli4->sgl_pre_registered) {
+		els->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_REQ_WQE_XBL;
+
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    ((2 * sizeof(struct sli4_sge)) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	els->els_request_payload_length = cpu_to_le32(req_len);
+	els->max_response_payload_length = cpu_to_le32(max_rsp_len);
+
+	els->xri_tag = cpu_to_le16(xri);
+	els->timer = timeout;
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_REQUEST64;
+
+	els->request_tag = cpu_to_le16(tag);
+
+	if (hlm) {
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_HLM;
+		els->remote_id_dword = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_IOD;
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_QOSD;
+
+	/* figure out the ELS_ID value from the request buffer */
+
+	switch (req_type) {
+	case ELS_LOGO:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_LOGO << SLI4_REQ_WQE_ELSID_SHFT;
+		if (rnodeattached) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(rnodeindicator);
+		} else {
+			els->ct_byte |= SLI4_GENERIC_CONTEXT_VPI <<
+					SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag = cpu_to_le16(sportindicator);
+		}
+		if (rnode_fcid == FC_FID_FLOGI)
+			is_fabric = true;
+		break;
+	case ELS_FDISC:
+		if (rnode_fcid == FC_FID_FLOGI)
+			is_fabric = true;
+		if (sport_fcid == 0) {
+			els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_FDISC <<
+						   SLI4_REQ_WQE_ELSID_SHFT;
+			is_fabric = true;
+		} else {
+			els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_OTHER <<
+						   SLI4_REQ_WQE_ELSID_SHFT;
+		}
+		els->ct_byte |= (SLI4_GENERIC_CONTEXT_VPI <<
+				 SLI4_REQ_WQE_CT_SHFT);
+		els->context_tag = cpu_to_le16(sportindicator);
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		break;
+	case ELS_FLOGI:
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		/*
+		 * Set SP here ... we haven't done a REG_VPI yet
+		 * need to maybe not set this when we have
+		 * completed VFI/VPI registrations ...
+		 *
+		 * Use the FC_ID of the SPORT if it has been allocated,
+		 * otherwise use an S_ID of zero.
+		 */
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		if (sport_fcid != U32_MAX)
+			els->sid_sp_dword |= cpu_to_le32(sport_fcid);
+		break;
+	case ELS_PLOGI:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_PLOGI << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		break;
+	case ELS_SCR:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		break;
+	default:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		if (rnodeattached) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(sportindicator);
+		} else {
+			els->ct_byte |= SLI4_GENERIC_CONTEXT_VPI <<
+					SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag = cpu_to_le16(sportindicator);
+		}
+		break;
+	}
+
+	if (is_fabric)
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_FABRIC;
+	else
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_NON_FABRIC;
+
+	els->cq_id = cpu_to_le16(cq_id);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) !=
+					SLI4_GENERIC_CONTEXT_RPI)
+		els->remote_id_dword = cpu_to_le32(rnode_fcid);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) ==
+					SLI4_GENERIC_CONTEXT_VPI)
+		els->temporary_rpi = cpu_to_le16(rnodeindicator);
+
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_ICMND64_WQE work queue entry */
+int
+sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, bool hlm,
+		    u32 rnode_fcid, u8 timeout)
+{
+	struct sli4_fcp_icmnd64_wqe *icmnd = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 len;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+			    sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &icmnd->bde;
+	if (sli4->sgl_pre_registered) {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_ICMD_WQE_XBL;
+
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	icmnd->payload_offset_length = cpu_to_le16(len);
+	icmnd->xri_tag = cpu_to_le16(xri);
+	icmnd->context_tag = cpu_to_le16(rpi);
+	icmnd->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	icmnd->class_pu_byte |= 2 << SLI4_ICMD_WQE_PU_SHFT;
+	icmnd->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	icmnd->command = SLI4_WQE_FCP_ICMND64;
+	icmnd->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_ICMD_WQE_CT_SHFT;
+
+	icmnd->abort_tag = cpu_to_le32(xri);
+
+	icmnd->request_tag = cpu_to_le16(tag);
+	icmnd->len_loc1_byte |= SLI4_ICMD_WQE_LEN_LOC_BIT1;
+	icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_LEN_LOC_BIT2;
+	if (hlm) {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_HLM;
+		icmnd->remote_n_port_id_dword =
+				cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+	icmnd->cmd_type_byte |= SLI4_CMD_FCP_ICMND64_WQE;
+	icmnd->cq_id = cpu_to_le16(cq_id);
+
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_IREAD64_WQE work queue entry */
+int
+sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl, u32 first_data_sge,
+		    u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, bool hlm, u32 rnode_fcid,
+		    u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iread64_wqe *iread = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 sge_flags, len;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+			    sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &iread->bde;
+	if (sli4->sgl_pre_registered) {
+		iread->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IR_WQE_XBL;
+
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  = sge[0].buffer_address_low;
+		bptr->u.blp.high = sge[0].buffer_address_high;
+	} else {
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		len = le32_to_cpu(sge[0].buffer_length);
+		iread->fcp_cmd_buffer_length = cpu_to_le16(len);
+
+		sge_flags = le32_to_cpu(sge[1].dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	iread->payload_offset_length = cpu_to_le16(len);
+	iread->total_transfer_length = cpu_to_le32(xfer_len);
+
+	iread->xri_tag = cpu_to_le16(xri);
+	iread->context_tag = cpu_to_le16(rpi);
+
+	iread->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	iread->class_pu_byte |= 2 << SLI4_IR_WQE_PU_SHFT;
+	iread->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iread->command = SLI4_WQE_FCP_IREAD64;
+	iread->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_IR_WQE_CT_SHFT;
+	iread->dif_ct_bs_byte |= dif;
+	iread->dif_ct_bs_byte  |= bs << SLI4_IR_WQE_BS_SHFT;
+
+	iread->abort_tag = cpu_to_le32(xri);
+
+	iread->request_tag = cpu_to_le16(tag);
+	iread->len_loc1_byte |= SLI4_IR_WQE_LEN_LOC_BIT1;
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_LEN_LOC_BIT2;
+	if (hlm) {
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_HLM;
+		iread->remote_n_port_id_dword =
+				cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_IOD;
+	iread->cmd_type_byte |= SLI4_CMD_FCP_IREAD64_WQE;
+	iread->cq_id = cpu_to_le16(cq_id);
+
+	if (sli4->perf_hint) {
+		bptr = &iread->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			  (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_IWRITE64_WQE work queue entry */
+int
+sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *sgl,
+		     u32 first_data_sge, u32 xfer_len,
+		     u32 first_burst, u16 xri, u16 tag,
+		     u16 cq_id, u32 rpi,
+		     bool hlm, u32 rnode_fcid,
+		     u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iwrite64_wqe *iwrite = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 sge_flags, min, len;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+			    sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &iwrite->bde;
+	if (sli4->sgl_pre_registered) {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IWR_WQE_XBL;
+
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		len = le32_to_cpu(sge[0].buffer_length);
+		iwrite->fcp_cmd_buffer_length = cpu_to_le16(len);
+		sge_flags = le32_to_cpu(sge[1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_TYPE_MASK;
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	iwrite->payload_offset_length = cpu_to_le16(len);
+	iwrite->total_transfer_length = cpu_to_le16(xfer_len);
+	min = (xfer_len < first_burst) ? xfer_len : first_burst;
+	iwrite->initial_transfer_length = cpu_to_le16(min);
+
+	iwrite->xri_tag = cpu_to_le16(xri);
+	iwrite->context_tag = cpu_to_le16(rpi);
+
+	iwrite->timer = timeout;
+	/* WQE word 4 contains read transfer length */
+	iwrite->class_pu_byte |= 2 << SLI4_IWR_WQE_PU_SHFT;
+	iwrite->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iwrite->command = SLI4_WQE_FCP_IWRITE64;
+	iwrite->dif_ct_bs_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_IWR_WQE_CT_SHFT;
+	iwrite->dif_ct_bs_byte |= dif;
+	iwrite->dif_ct_bs_byte |= bs << SLI4_IWR_WQE_BS_SHFT;
+
+	iwrite->abort_tag = cpu_to_le32(xri);
+
+	iwrite->request_tag = cpu_to_le16(tag);
+	iwrite->len_loc1_byte |= SLI4_IWR_WQE_LEN_LOC_BIT1;
+	iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_LEN_LOC_BIT2;
+	if (hlm) {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_HLM;
+		iwrite->remote_n_port_id_dword =
+			cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+	iwrite->cmd_type_byte |= SLI4_CMD_FCP_IWRITE64_WQE;
+	iwrite->cq_id = cpu_to_le16(cq_id);
+
+	if (sli4->perf_hint) {
+		bptr = &iwrite->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			 (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_TRECEIVE64_WQE work queue entry */
+int
+sli_fcp_treceive64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct efc_dma *sgl,
+		       u32 first_data_sge, u32 relative_off,
+		       u32 xfer_len, u16 xri, u16 tag,
+		       u16 cq_id, u16 xid, u32 rpi, bool hlm,
+		       u32 rnode_fcid, u32 flags, u8 dif,
+		       u8 bs, u8 csctl, u32 app_id)
+{
+	struct sli4_fcp_treceive64_wqe *trecv = buf;
+	struct sli4_fcp_128byte_wqe *trecv_128 = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+			    sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &trecv->bde;
+	if (sli4->sgl_pre_registered) {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_TRCV_WQE_XBL;
+
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length)
+					& SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trecv->payload_offset_length = sge[0].buffer_length;
+	} else {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
+			trecv->qosd_xbl_hlm_iod_dbde_wqes |=
+							SLI4_TRCV_WQE_DBDE;
+			bptr->bde_type_buflen =
+			      cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+					  (le32_to_cpu(sge[2].buffer_length)
+					  & SLI4_BDE_MASK_BUFFER_LEN));
+
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				(sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	trecv->relative_offset = cpu_to_le32(relative_off);
+
+	if (flags & SLI4_IO_CONTINUATION)
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_XC;
+
+	trecv->xri_tag = cpu_to_le16(xri);
+
+	trecv->context_tag = cpu_to_le16(rpi);
+
+	/* WQE uses relative offset */
+	trecv->class_ar_pu_byte |= 1 << SLI4_TRCV_WQE_PU_SHFT;
+
+	if (flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		trecv->class_ar_pu_byte |= SLI4_TRCV_WQE_AR;
+
+	trecv->command = SLI4_WQE_FCP_TRECEIVE64;
+	trecv->class_ar_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	trecv->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_TRCV_WQE_CT_SHFT;
+	trecv->dif_ct_bs_byte |= bs << SLI4_TRCV_WQE_BS_SHFT;
+
+	trecv->remote_xid = cpu_to_le16(xid);
+
+	trecv->request_tag = cpu_to_le16(tag);
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_IOD;
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_LEN_LOC_BIT2;
+
+	if (hlm) {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_HLM;
+		trecv->dword5.dword = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	trecv->cmd_type_byte |= SLI4_CMD_FCP_TRECEIVE64_WQE;
+
+	trecv->cq_id = cpu_to_le16(cq_id);
+
+	trecv->fcp_data_receive_length = cpu_to_le32(xfer_len);
+
+	if (sli4->perf_hint) {
+		bptr = &trecv->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (csctl & SLI4_MASK_CCP) {
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_CCPE;
+		trecv->ccp = (csctl & SLI4_MASK_CCP);
+	}
+
+	if (app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trecv->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trecv->lloc1_appid |= SLI4_TRCV_WQE_APPID;
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_WQES;
+		trecv_128->dw[31] = app_id;
+	}
+	return 0;
+}
+
+/* Write an FCP_CONT_TRECEIVE64_WQE work queue entry */
+int
+sli_fcp_cont_treceive64_wqe(struct sli4 *sli4, void *buf, size_t size,
+			    struct efc_dma *sgl, u32 first_data_sge,
+			    u32 relative_off, u32 xfer_len,
+			    u16 xri, u16 sec_xri, u16 tag,
+			    u16 cq_id, u16 xid, u32 rpi,
+			    bool hlm, u32 rnode_fcid, u32 flags,
+			    u8 dif, u8 bs, u8 csctl,
+			    u32 app_id)
+{
+	int rc;
+
+	rc = sli_fcp_treceive64_wqe(sli4, buf, size, sgl, first_data_sge,
+				    relative_off, xfer_len, xri, tag, cq_id,
+				    xid, rpi, hlm, rnode_fcid, flags, dif, bs,
+				    csctl, app_id);
+	if (rc == 0) {
+		struct sli4_fcp_treceive64_wqe *trecv = buf;
+
+		trecv->command = SLI4_WQE_FCP_CONT_TRECEIVE64;
+		trecv->dword5.sec_xri_tag = cpu_to_le16(sec_xri);
+	}
+	return rc;
+}
+
+/* Write an FCP_TRSP64_WQE work queue entry */
+int
+sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		   struct efc_dma *sgl,
+		   u32 rsp_len, u16 xri, u16 tag, u16 cq_id,
+		   u16 xid, u32 rpi, bool hlm, u32 rnode_fcid,
+		   u32 flags, u8 csctl, u8 port_owned,
+		   u32 app_id)
+{
+	struct sli4_fcp_trsp64_wqe *trsp = buf;
+	struct sli4_fcp_128byte_wqe *trsp_128 = buf;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, size);
+
+	if (flags & SLI4_IO_AUTO_GOOD_RESPONSE) {
+		trsp->class_ag_byte |= SLI4_TRSP_WQE_AG;
+	} else {
+		struct sli4_sge	*sge = sgl->virt;
+
+		if (sli4->sgl_pre_registered || port_owned)
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_DBDE;
+		else
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_XBL;
+		bptr = &trsp->bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trsp->fcp_response_length = cpu_to_le32(rsp_len);
+	}
+
+	if (flags & SLI4_IO_CONTINUATION)
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_XC;
+
+	if (hlm) {
+		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_HLM;
+		trsp->dword5 = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	trsp->xri_tag = cpu_to_le16(xri);
+	trsp->rpi = cpu_to_le16(rpi);
+
+	trsp->command = SLI4_WQE_FCP_TRSP64;
+	trsp->class_ag_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	trsp->remote_xid = cpu_to_le16(xid);
+	trsp->request_tag = cpu_to_le16(tag);
+	if (flags & SLI4_IO_DNRX)
+		trsp->ct_dnrx_byte |= SLI4_TRSP_WQE_DNRX;
+	else
+		trsp->ct_dnrx_byte &= ~SLI4_TRSP_WQE_DNRX;
+
+	trsp->lloc1_appid |= 0x1;
+	trsp->cq_id = cpu_to_le16(cq_id);
+	trsp->cmd_type_byte = SLI4_CMD_FCP_TRSP64_WQE;
+
+	/* The upper 7 bits of csctl are the priority */
+	if (csctl & SLI4_MASK_CCP) {
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_CCPE;
+		trsp->ccp = (csctl & SLI4_MASK_CCP);
+	}
+
+	if (app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trsp->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trsp->lloc1_appid |= SLI4_TRSP_WQE_APPID;
+		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_WQES;
+		trsp_128->dw[31] = app_id;
+	}
+	return 0;
+}
+
+/* Write an FCP_TSEND64_WQE work queue entry */
+int
+sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl,
+		    u32 first_data_sge, u32 relative_off,
+		    u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u16 xid, u32 rpi,
+		    bool hlm, u32 rnode_fcid, u32 flags, u8 dif,
+		    u8 bs, u8 csctl, u32 app_id)
+{
+	struct sli4_fcp_tsend64_wqe *tsend = buf;
+	struct sli4_fcp_128byte_wqe *tsend_128 = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+
+	bptr = &tsend->bde;
+	if (sli4->sgl_pre_registered) {
+		tsend->ll_qd_xbl_hlm_iod_dbde &= ~SLI4_TSEND_WQE_XBL;
+
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				   (le32_to_cpu(sge[2].buffer_length) &
+				    SLI4_BDE_MASK_BUFFER_LEN));
+
+		/* TSEND64_WQE specifies that the first two SGEs are
+		 * skipped (i.e. the 3rd is valid)
+		 */
+		bptr->u.data.low  = sge[2].buffer_address_low;
+		bptr->u.data.high = sge[2].buffer_address_high;
+	} else {
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
+			tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+			bptr->bde_type_buflen =
+			    cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+					(le32_to_cpu(sge[2].buffer_length) &
+					SLI4_BDE_MASK_BUFFER_LEN));
+			/*
+			 * TSEND64_WQE specifies that the first two SGEs
+			 * are skipped (i.e. the 3rd is valid)
+			 */
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+					    (sgl->size &
+					     SLI4_BDE_MASK_BUFFER_LEN));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	tsend->relative_offset = cpu_to_le32(relative_off);
+
+	if (flags & SLI4_IO_CONTINUATION)
+		tsend->dw10byte2 |= SLI4_TSEND_XC;
+
+	tsend->xri_tag = cpu_to_le16(xri);
+
+	tsend->rpi = cpu_to_le16(rpi);
+	/* WQE uses relative offset */
+	tsend->class_pu_ar_byte |= 1 << SLI4_TSEND_WQE_PU_SHFT;
+
+	if (flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		tsend->class_pu_ar_byte |= SLI4_TSEND_WQE_AR;
+
+	tsend->command = SLI4_WQE_FCP_TSEND64;
+	tsend->class_pu_ar_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	tsend->ct_byte |= SLI4_GENERIC_CONTEXT_RPI << SLI4_TSEND_CT_SHFT;
+	tsend->ct_byte |= dif;
+	tsend->ct_byte |= bs << SLI4_TSEND_BS_SHFT;
+
+	tsend->remote_xid = cpu_to_le16(xid);
+
+	tsend->request_tag = cpu_to_le16(tag);
+
+	tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_LEN_LOC_BIT2;
+
+	if (hlm) {
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_HLM;
+		tsend->dword5 = cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	tsend->cq_id = cpu_to_le16(cq_id);
+
+	tsend->cmd_type_byte |= SLI4_CMD_FCP_TSEND64_WQE;
+
+	tsend->fcp_data_transmit_length = cpu_to_le32(xfer_len);
+
+	if (sli4->perf_hint) {
+		bptr = &tsend->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (csctl & SLI4_MASK_CCP) {
+		tsend->dw10byte2 |= SLI4_TSEND_CCPE;
+		tsend->ccp = (csctl & SLI4_MASK_CCP);
+	}
+
+	if (app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(tsend->dw10byte2 & SLI4_TSEND_EAT)) {
+		tsend->dw10byte0 |= SLI4_TSEND_APPID_VALID;
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQES;
+		tsend_128->dw[31] = app_id;
+	}
+	return 0;
+}
+
+/* Write a GEN_REQUEST64 work queue entry */
+int
+sli_gen_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *sgl, u32 req_len,
+		      u32 max_rsp_len, u8 timeout, u16 xri,
+		      u16 tag, u16 cq_id, bool hlm, u32 rnode_fcid,
+		      u16 rnodeindicator, u8 r_ctl,
+		      u8 type, u8 df_ctl)
+{
+	struct sli4_gen_request64_wqe	*gen = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return -1;
+	}
+	sge = sgl->virt;
+	bptr = &gen->bde;
+
+	if (sli4->sgl_pre_registered) {
+		gen->dw10flags1 &= ~SLI4_GEN_REQ64_WQE_XBL;
+
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    ((2 * sizeof(struct sli4_sge)) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	gen->request_payload_length = cpu_to_le32(req_len);
+	gen->max_response_payload_length = cpu_to_le32(max_rsp_len);
+
+	gen->df_ctl = df_ctl;
+	gen->type = type;
+	gen->r_ctl = r_ctl;
+
+	gen->xri_tag = cpu_to_le16(xri);
+
+	gen->ct_byte = SLI4_GENERIC_CONTEXT_RPI << SLI4_GEN_REQ64_CT_SHFT;
+	gen->context_tag = cpu_to_le16(rnodeindicator);
+
+	gen->class_byte = SLI4_GENERIC_CLASS_CLASS_3;
+
+	gen->command = SLI4_WQE_GEN_REQUEST64;
+
+	gen->timer = timeout;
+
+	gen->request_tag = cpu_to_le16(tag);
+
+	gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_IOD;
+
+	gen->dw10flags0 |= SLI4_GEN_REQ64_WQE_QOSD;
+
+	if (hlm) {
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_HLM;
+		gen->remote_n_port_id_dword =
+			cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	gen->cmd_type_byte = SLI4_CMD_GEN_REQUEST64_WQE;
+
+	gen->cq_id = cpu_to_le16(cq_id);
+
+	return 0;
+}
+
+/* Write a SEND_FRAME work queue entry */
+int
+sli_send_frame_wqe(struct sli4 *sli4, void *buf, size_t size,
+		   u8 sof, u8 eof, u32 *hdr,
+			struct efc_dma *payload, u32 req_len,
+			u8 timeout, u16 xri, u16 req_tag)
+{
+	struct sli4_send_frame_wqe *sf = buf;
+
+	memset(buf, 0, size);
+
+	sf->dw10flags1 |= SLI4_SF_WQE_DBDE;
+	sf->bde.bde_type_buflen = cpu_to_le32(req_len &
+					      SLI4_BDE_MASK_BUFFER_LEN);
+	sf->bde.u.data.low =
+		cpu_to_le32(lower_32_bits(payload->phys));
+	sf->bde.u.data.high =
+		cpu_to_le32(upper_32_bits(payload->phys));
+
+	/* Copy FC header */
+	sf->fc_header_0_1[0] = cpu_to_le32(hdr[0]);
+	sf->fc_header_0_1[1] = cpu_to_le32(hdr[1]);
+	sf->fc_header_2_5[0] = cpu_to_le32(hdr[2]);
+	sf->fc_header_2_5[1] = cpu_to_le32(hdr[3]);
+	sf->fc_header_2_5[2] = cpu_to_le32(hdr[4]);
+	sf->fc_header_2_5[3] = cpu_to_le32(hdr[5]);
+
+	sf->frame_length = cpu_to_le32(req_len);
+
+	sf->xri_tag = cpu_to_le16(xri);
+	sf->dw7flags0 &= ~SLI4_SF_PU;
+	sf->context_tag = 0;
+
+	sf->ct_byte &= ~SLI4_SF_CT;
+	sf->command = SLI4_WQE_SEND_FRAME;
+	sf->dw7flags0 |= SLI4_GENERIC_CLASS_CLASS_3;
+	sf->timer = timeout;
+
+	sf->request_tag = cpu_to_le16(req_tag);
+	sf->eof = eof;
+	sf->sof = sof;
+
+	sf->dw10flags1 &= ~SLI4_SF_QOSD;
+	sf->dw10flags0 |= SLI4_SF_LEN_LOC_BIT1;
+	sf->dw10flags2 &= ~SLI4_SF_XC;
+
+	sf->dw10flags1 |= SLI4_SF_XBL;
+
+	sf->cmd_type_byte |= SLI4_CMD_SEND_FRAME_WQE;
+	sf->cq_id = cpu_to_le16(0xffff);
+
+	return 0;
+}
+
+/* Write an XMIT_BLS_RSP64_WQE work queue entry */
+int
+sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct sli_bls_payload *payload, u16 xri,
+		       u16 tag, u16 cq_id,
+		       bool rnodeattached, bool hlm, u16 rnodeindicator,
+		       u16 sportindicator, u32 rnode_fcid,
+		       u32 sport_fcid, u32 s_id)
+{
+	struct sli4_xmit_bls_rsp_wqe *bls = buf;
+	u32 dw_ridflags = 0;
+
+	/*
+	 * Callers can specify either RPI or S_ID, but not both
+	 */
+	if (rnodeattached && s_id != U32_MAX) {
+		efc_log_info(sli4, "S_ID specified for attached remote node %d\n",
+			rnodeindicator);
+		return -1;
+	}
+
+	memset(buf, 0, size);
+
+	if (payload->type == SLI4_SLI_BLS_ACC) {
+		bls->payload_word0 =
+			cpu_to_le32((payload->u.acc.seq_id_last << 16) |
+				    (payload->u.acc.seq_id_validity << 24));
+		bls->high_seq_cnt = payload->u.acc.high_seq_cnt;
+		bls->low_seq_cnt = payload->u.acc.low_seq_cnt;
+	} else if (payload->type == SLI4_SLI_BLS_RJT) {
+		bls->payload_word0 =
+				cpu_to_le32(*((u32 *)&payload->u.rjt));
+		dw_ridflags |= SLI4_BLS_RSP_WQE_AR;
+	} else {
+		efc_log_info(sli4, "bad BLS type %#x\n", payload->type);
+		return -1;
+	}
+
+	bls->ox_id = payload->ox_id;
+	bls->rx_id = payload->rx_id;
+
+	if (rnodeattached) {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(rnodeindicator);
+	} else {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_VPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(sportindicator);
+
+		if (s_id != U32_MAX)
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(s_id & 0x00ffffff);
+		else
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(sport_fcid & 0x00ffffff);
+
+		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
+			       (rnode_fcid & SLI4_BLS_RSP_RID);
+
+		bls->temporary_rpi = cpu_to_le16(rnodeindicator);
+	}
+
+	bls->xri_tag = cpu_to_le16(xri);
+
+	bls->dw8flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	bls->command = SLI4_WQE_XMIT_BLS_RSP;
+
+	bls->request_tag = cpu_to_le16(tag);
+
+	bls->dw11flags1 |= SLI4_BLS_RSP_WQE_QOSD;
+
+	if (hlm) {
+		bls->dw11flags1 |= SLI4_BLS_RSP_WQE_HLM;
+		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
+			       (rnode_fcid & SLI4_BLS_RSP_RID);
+	}
+
+	bls->remote_id_dword = cpu_to_le32(dw_ridflags);
+	bls->cq_id = cpu_to_le16(cq_id);
+
+	bls->dw12flags0 |= SLI4_CMD_XMIT_BLS_RSP64_WQE;
+
+	return 0;
+}
+
+/* Write an XMIT_ELS_RSP64_WQE work queue entry */
+int
+sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct efc_dma *rsp, u32 rsp_len,
+				u16 xri, u16 tag, u16 cq_id,
+				u16 ox_id, u16 rnodeindicator,
+				u16 sportindicator, bool hlm,
+				bool rnodeattached, u32 rnode_fcid,
+				u32 flags, u32 s_id)
+{
+	struct sli4_xmit_els_rsp64_wqe *els = buf;
+
+	memset(buf, 0, size);
+
+	if (sli4->sgl_pre_registered)
+		els->flags2 |= SLI4_ELS_DBDE;
+	else
+		els->flags2 |= SLI4_ELS_XBL;
+
+	els->els_response_payload.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (rsp_len & SLI4_BDE_MASK_BUFFER_LEN));
+	els->els_response_payload.u.data.low =
+		cpu_to_le32(lower_32_bits(rsp->phys));
+	els->els_response_payload.u.data.high =
+		cpu_to_le32(upper_32_bits(rsp->phys));
+
+	els->els_response_payload_length = cpu_to_le32(rsp_len);
+
+	els->xri_tag = cpu_to_le16(xri);
+
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_RSP64;
+
+	els->request_tag = cpu_to_le16(tag);
+
+	els->ox_id = cpu_to_le16(ox_id);
+
+	els->flags2 |= (SLI4_ELS_IOD & SLI4_ELS_REQUEST64_DIR_WRITE);
+
+	els->flags2 |= SLI4_ELS_QOSD;
+
+	if (flags & SLI4_IO_CONTINUATION)
+		els->flags3 |= SLI4_ELS_XC;
+
+	if (rnodeattached) {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(rnodeindicator);
+	} else {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(sportindicator);
+		els->rid_dw = cpu_to_le32(rnode_fcid & SLI4_ELS_RID);
+		els->temporary_rpi = cpu_to_le16(rnodeindicator);
+		if (s_id != U32_MAX) {
+			els->sid_dw |= cpu_to_le32(SLI4_ELS_SP |
+						   (s_id & SLI4_ELS_SID));
+		}
+	}
+
+	if (hlm) {
+		els->flags2 |= SLI4_ELS_HLM;
+		els->rid_dw = cpu_to_le32(rnode_fcid & SLI4_ELS_RID);
+	}
+
+	els->cmd_type_wqec = SLI4_ELS_REQUEST64_CMD_GEN;
+
+	els->cq_id = cpu_to_le16(cq_id);
+
+	return 0;
+}
+
+/* Write an XMIT_SEQUENCE64 work queue entry */
+int
+sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf, size_t size,
+			struct efc_dma *payload, u32 payload_len,
+		u8 timeout, u16 ox_id, u16 xri,
+		u16 tag, bool hlm, u32 rnode_fcid,
+		u16 rnodeindicator, u8 r_ctl,
+		u8 type, u8 df_ctl)
+{
+	struct sli4_xmit_sequence64_wqe *xmit = buf;
+
+	memset(buf, 0, size);
+
+	if (!payload || !payload->virt) {
+		efc_log_err(sli4, "bad parameter payload=%p virt=%p\n",
+		       payload, payload ? payload->virt : NULL);
+		return -1;
+	}
+
+	if (sli4->sgl_pre_registered)
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_DBDE);
+	else
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_XBL);
+
+	xmit->bde.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			(payload_len & SLI4_BDE_MASK_BUFFER_LEN));
+	xmit->bde.u.data.low  =
+			cpu_to_le32(lower_32_bits(payload->phys));
+	xmit->bde.u.data.high =
+			cpu_to_le32(upper_32_bits(payload->phys));
+	xmit->sequence_payload_len = cpu_to_le32(payload_len);
+
+	xmit->remote_n_port_id_dword |= cpu_to_le32(rnode_fcid & 0x00ffffff);
+
+	xmit->relative_offset = 0;
+
+	/* sequence initiative - this matches what is seen from
+	 * FC switches in response to FCGS commands
+	 */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_SI);
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_FT);/* force transmit */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_XO);/* exchange responder */
+	xmit->dw5flags0 |= SLI4_SEQ_WQE_LS;/* last in sequence */
+	xmit->df_ctl = df_ctl;
+	xmit->type = type;
+	xmit->r_ctl = r_ctl;
+
+	xmit->xri_tag = cpu_to_le16(xri);
+	xmit->context_tag = cpu_to_le16(rnodeindicator);
+
+	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_DIF);
+	xmit->dw7flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_SEQ_WQE_CT_SHIFT;
+	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_BS);
+
+	xmit->command = SLI4_WQE_XMIT_SEQUENCE64;
+	xmit->dw7flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+	xmit->dw7flags1 &= (~SLI4_SEQ_WQE_PU);
+	xmit->timer = timeout;
+
+	xmit->abort_tag = 0;
+	xmit->request_tag = cpu_to_le16(tag);
+	xmit->remote_xid = cpu_to_le16(ox_id);
+
+	xmit->dw10w0 |=
+	cpu_to_le16(SLI4_ELS_REQUEST64_DIR_READ << SLI4_SEQ_WQE_IOD_SHIFT);
+
+	if (hlm) {
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_HLM);
+		xmit->remote_n_port_id_dword |=
+			cpu_to_le32(rnode_fcid & 0x00ffffff);
+	}
+
+	xmit->cmd_type_wqec_byte |= SLI4_CMD_XMIT_SEQUENCE64_WQE;
+
+	xmit->dw10w0 |= cpu_to_le16(2 << SLI4_SEQ_WQE_LEN_LOC_SHIFT);
+
+	xmit->cq_id = cpu_to_le16(0xFFFF);
+
+	return 0;
+}
+
+/* Write a REQUEUE_XRI_WQE work queue entry */
+int
+sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    u16 xri, u16 tag, u16 cq_id)
+{
+	struct sli4_requeue_xri_wqe *requeue = buf;
+
+	memset(buf, 0, size);
+
+	requeue->command = SLI4_WQE_REQUEUE_XRI;
+	requeue->xri_tag = cpu_to_le16(xri);
+	requeue->request_tag = cpu_to_le16(tag);
+	requeue->flags2 |= cpu_to_le16(SLI4_REQU_XRI_WQE_XC);
+	requeue->flags1 |= cpu_to_le16(SLI4_REQU_XRI_WQE_QOSD);
+	requeue->cq_id = cpu_to_le16(cq_id);
+	requeue->cmd_type_wqec_byte = SLI4_CMD_REQUEUE_XRI_WQE;
+	return 0;
+}
+
+/* Process an asynchronous Link State event entry */
+int
+sli_fc_process_link_state(struct sli4 *sli4, void *acqe)
+{
+	struct sli4_link_state *link_state = acqe;
+	struct sli4_link_event event = { 0 };
+	int rc = 0;
+	u8 link_type = (link_state->link_num_type & LINK_TYPE_MASK);
+
+	if (!sli4->link) {
+		/* bail if there is no callback */
+		return 0;
+	}
+
+	if (link_type == LINK_TYPE_ETHERNET) {
+		event.topology = SLI_LINK_TOPO_NPORT;
+		event.medium   = SLI_LINK_MEDIUM_ETHERNET;
+	} else {
+		efc_log_info(sli4, "unsupported link type %#x\n",
+			link_type);
+		event.topology = SLI_LINK_TOPO_MAX;
+		event.medium   = SLI_LINK_MEDIUM_MAX;
+		rc = -1;
+	}
+
+	switch (link_state->port_link_status) {
+	case PORT_LINK_STATUS_PHYSICAL_DOWN:
+	case PORT_LINK_STATUS_LOGICAL_DOWN:
+		event.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case PORT_LINK_STATUS_PHYSICAL_UP:
+	case PORT_LINK_STATUS_LOGICAL_UP:
+		event.status = SLI_LINK_STATUS_UP;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported link status %#x\n",
+			link_state->port_link_status);
+		event.status = SLI_LINK_STATUS_MAX;
+		rc = -1;
+	}
+
+	switch (link_state->port_speed) {
+	case PORT_SPEED_NO_LINK:
+		event.speed = 0;
+		break;
+	case PORT_SPEED_10_MBPS:
+		event.speed = 10;
+		break;
+	case PORT_SPEED_100_MBP:
+		event.speed = 100;
+		break;
+	case PORT_SPEED_1_GBPS:
+		event.speed = 1000;
+		break;
+	case PORT_SPEED_10_GBPS:
+		event.speed = 10000;
+		break;
+	case PORT_SPEED_20_GBPS:
+		event.speed = 20000;
+		break;
+	case PORT_SPEED_25_GBPS:
+		event.speed = 25000;
+		break;
+	case PORT_SPEED_40_GBPS:
+		event.speed = 40000;
+		break;
+	case PORT_SPEED_100_GBPS:
+		event.speed = 100000;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported port_speed %#x\n",
+			link_state->port_speed);
+		rc = -1;
+	}
+
+	sli4->link(sli4->link_arg, (void *)&event);
+
+	return rc;
+}
+
+/* Process an asynchronous Link Attention event entry */
+int
+sli_fc_process_link_attention(struct sli4 *sli4, void *acqe)
+{
+	struct sli4_link_attention *link_attn = acqe;
+	struct sli4_link_event event = { 0 };
+
+	efc_log_info(sli4, "link=%d attn_type=%#x top=%#x speed=%#x pfault=%#x\n",
+		link_attn->link_number, link_attn->attn_type,
+		      link_attn->topology, link_attn->port_speed,
+		      link_attn->port_fault);
+	efc_log_info(sli4, "shared_lnk_status=%#x logl_lnk_speed=%#x evnttag=%#x\n",
+		link_attn->shared_link_status,
+		      le16_to_cpu(link_attn->logical_link_speed),
+		      le32_to_cpu(link_attn->event_tag));
+
+	if (!sli4->link)
+		return 0;
+
+	event.medium   = SLI_LINK_MEDIUM_FC;
+
+	switch (link_attn->attn_type) {
+	case LINK_ATTN_TYPE_LINK_UP:
+		event.status = SLI_LINK_STATUS_UP;
+		break;
+	case LINK_ATTN_TYPE_LINK_DOWN:
+		event.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case LINK_ATTN_TYPE_NO_HARD_ALPA:
+		efc_log_info(sli4, "attn_type: no hard alpa\n");
+		event.status = SLI_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		efc_log_info(sli4, "attn_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->event_type) {
+	case FC_EVENT_LINK_ATTENTION:
+		break;
+	case FC_EVENT_SHARED_LINK_ATTENTION:
+		efc_log_info(sli4, "event_type: FC shared link event\n");
+		break;
+	default:
+		efc_log_info(sli4, "event_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->topology) {
+	case LINK_ATTN_P2P:
+		event.topology = SLI_LINK_TOPO_NPORT;
+		break;
+	case LINK_ATTN_FC_AL:
+		event.topology = SLI_LINK_TOPO_LOOP;
+		break;
+	case LINK_ATTN_INTERNAL_LOOPBACK:
+		efc_log_info(sli4, "topology Internal loopback\n");
+		event.topology = SLI_LINK_TOPO_LOOPBACK_INTERNAL;
+		break;
+	case LINK_ATTN_SERDES_LOOPBACK:
+		efc_log_info(sli4, "topology serdes loopback\n");
+		event.topology = SLI_LINK_TOPO_LOOPBACK_EXTERNAL;
+		break;
+	default:
+		efc_log_info(sli4, "topology: unknown\n");
+		break;
+	}
+
+	event.speed    = link_attn->port_speed * 1000;
+
+	sli4->link(sli4->link_arg, (void *)&event);
+
+	return 0;
+}
+
+/* Parse an FC work queue CQ entry */
+int
+sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
+		 u8 *cqe, enum sli4_qentry *etype, u16 *r_id)
+{
+	u8 code = cqe[SLI4_CQE_CODE_OFFSET];
+	int rc = -1;
+
+	switch (code) {
+	case SLI4_CQE_CODE_WORK_REQUEST_COMPLETION:
+	{
+		struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_WQ;
+		*r_id = le16_to_cpu(wcqe->request_tag);
+		rc = wcqe->status;
+
+		/* Flag errors except for FCP_RSP_FAILURE */
+		if (rc && rc != SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE) {
+			efc_log_info(sli4, "WCQE: status=%#x hw_status=%#x tag=%#x\n",
+				wcqe->status, wcqe->hw_status,
+				le16_to_cpu(wcqe->request_tag));
+			efc_log_info(sli4, "w1=%#x w2=%#x xb=%d\n",
+				le32_to_cpu(wcqe->wqe_specific_1),
+				     le32_to_cpu(wcqe->wqe_specific_2),
+				     (wcqe->flags & SLI4_WCQE_XB));
+			efc_log_info(sli4, "      %08X %08X %08X %08X\n",
+				((u32 *)cqe)[0],
+				     ((u32 *)cqe)[1],
+				     ((u32 *)cqe)[2],
+				     ((u32 *)cqe)[3]);
+		}
+
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC:
+	{
+		struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1:
+	{
+		struct sli4_fc_async_rcqe_v1 *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD:
+	{
+		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_OPT_WRITE_CMD;
+		*r_id = le16_to_cpu(optcqe->rq_id);
+		rc = optcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA:
+	{
+		struct sli4_fc_optimized_write_data_cqe *dcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_OPT_WRITE_DATA;
+		*r_id = le16_to_cpu(dcqe->xri);
+		rc = dcqe->status;
+
+		/* Flag errors */
+		if (rc != SLI4_FC_WCQE_STATUS_SUCCESS) {
+			efc_log_info(sli4, "Optimized DATA CQE: status=%#x\n",
+				dcqe->status);
+			efc_log_info(sli4, "hstat=%#x xri=%#x dpl=%#x w3=%#x xb=%d\n",
+				dcqe->hw_status, le16_to_cpu(dcqe->xri),
+				le32_to_cpu(dcqe->total_data_placed),
+				((u32 *)cqe)[3],
+				(dcqe->flags & SLI4_OCQE_XB));
+		}
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_COALESCING:
+	{
+		struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_XRI_ABORTED:
+	{
+		struct sli4_fc_xri_aborted_cqe *xa = (void *)cqe;
+
+		*etype = SLI_QENTRY_XABT;
+		*r_id = le16_to_cpu(xa->xri);
+		rc = 0;
+		break;
+	}
+	case SLI4_CQE_CODE_RELEASE_WQE: {
+		struct sli4_fc_wqec *wqec = (void *)cqe;
+
+		*etype = SLI_QENTRY_WQ_RELEASE;
+		*r_id = le16_to_cpu(wqec->wq_id);
+		rc = 0;
+		break;
+	}
+	default:
+		efc_log_info(sli4, "CQE completion code %d not handled\n",
+			code);
+		*etype = SLI_QENTRY_MAX;
+		*r_id = U16_MAX;
+	}
+
+	return rc;
+}
+
+u32
+sli_fc_response_length(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+u32
+sli_fc_io_length(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+int
+sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	*d_id = 0;
+
+	if (wcqe->status)
+		return -1;
+	*d_id = le32_to_cpu(wcqe->wqe_specific_2) & 0x00ffffff;
+	return 0;
+}
+
+u32
+sli_fc_ext_status(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+	u32	mask;
+
+	switch (wcqe->status) {
+	case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+	case SLI4_FC_WCQE_STATUS_CMD_REJECT:
+		mask = 0xff;
+		break;
+	case SLI4_FC_WCQE_STATUS_NPORT_RJT:
+	case SLI4_FC_WCQE_STATUS_FABRIC_RJT:
+	case SLI4_FC_WCQE_STATUS_NPORT_BSY:
+	case SLI4_FC_WCQE_STATUS_FABRIC_BSY:
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_DI_ERROR:
+		mask = U32_MAX;
+		break;
+	default:
+		mask = 0;
+	}
+
+	return le32_to_cpu(wcqe->wqe_specific_2) & mask;
+}
+
+/* Retrieve the RQ index from the completion */
+int
+sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
+			  u16 *rq_id, u32 *index)
+{
+	struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+	struct sli4_fc_async_rcqe_v1 *rcqe_v1 = (void *)cqe;
+	int rc = -1;
+	u8 code = 0;
+	u16 rq_element_index;
+
+	*rq_id = 0;
+	*index = U32_MAX;
+
+	code = cqe[SLI4_CQE_CODE_OFFSET];
+
+	if (code == SLI4_CQE_CODE_RQ_ASYNC) {
+		*rq_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rq_element_index =
+		le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX;
+		*index = rq_element_index;
+		if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = 0;
+		} else {
+			rc = rcqe->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->fcfi_rq_id_word) &
+				SLI4_RACQE_RQ_ID);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe->data_placement_length),
+				rcqe->sof_byte, rcqe->eof_byte,
+				rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_ASYNC_V1) {
+		*rq_id = le16_to_cpu(rcqe_v1->rq_id);
+		rq_element_index =
+			(le16_to_cpu(rcqe_v1->rq_elmt_indx_word) &
+			 SLI4_RACQE_RQ_EL_INDX);
+		*index = rq_element_index;
+		if (rcqe_v1->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = 0;
+		} else {
+			rc = rcqe_v1->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d, index=%x\n",
+				rcqe_v1->status,
+				sli_fc_get_status_string(rcqe_v1->status),
+				le16_to_cpu(rcqe_v1->rq_id), rq_element_index);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe_v1->data_placement_length),
+			rcqe_v1->sof_byte, rcqe_v1->eof_byte,
+			rcqe_v1->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD) {
+		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
+
+		*rq_id = le16_to_cpu(optcqe->rq_id);
+		*index = le16_to_cpu(optcqe->w1) & SLI4_OCQE_RQ_EL_INDX;
+		if (optcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = 0;
+		} else {
+			rc = optcqe->status;
+			efc_log_info(sli4, "stat=%02x (%s) rqid=%d, idx=%x pdpl=%x\n",
+				optcqe->status,
+				sli_fc_get_status_string(optcqe->status),
+				le16_to_cpu(optcqe->rq_id), *index,
+				le16_to_cpu(optcqe->data_placement_length));
+
+			efc_log_info(sli4, "hdpl=%x oox=%d agxr=%d xri=0x%x rpi=%x\n",
+				(optcqe->hdpl_vld & SLI4_OCQE_HDPL),
+				(optcqe->flags1 & SLI4_OCQE_OOX),
+				(optcqe->flags1 & SLI4_OCQE_AGXR), optcqe->xri,
+				le16_to_cpu(optcqe->rpi));
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_COALESCING) {
+		struct sli4_fc_coalescing_rcqe	*rcqe = (void *)cqe;
+		u16 rq_element_index =
+				(le16_to_cpu(rcqe->rq_elmt_indx_word) &
+				 SLI4_RCQE_RQ_EL_INDX);
+
+		*rq_id = le16_to_cpu(rcqe->rq_id);
+		if (rcqe->status == SLI4_FC_COALESCE_RQ_SUCCESS) {
+			*index = rq_element_index;
+			rc = 0;
+		} else {
+			*index = U32_MAX;
+			rc = rcqe->status;
+
+			efc_log_info(sli4, "stat=%02x (%s) rq_id=%d, idx=%x\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->rq_id), rq_element_index);
+			efc_log_info(sli4, "rq_id=%#x sdpl=%x\n",
+				le16_to_cpu(rcqe->rq_id),
+		    le16_to_cpu(rcqe->sequence_reporting_placement_length));
+		}
+	} else {
+		*index = U32_MAX;
+
+		rc = rcqe->status;
+
+		efc_log_info(sli4, "status=%02x rq_id=%d, index=%x pdpl=%x\n",
+			rcqe->status,
+		le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID,
+		(le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX),
+		le16_to_cpu(rcqe->data_placement_length));
+		efc_log_info(sli4, "sof=%02x eof=%02x hdpl=%x\n",
+			rcqe->sof_byte, rcqe->eof_byte,
+			rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 1846a28d5fd8..5c9609e7c72c 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -12,6 +12,8 @@
 #ifndef _SLI4_H
 #define _SLI4_H
 
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
 #include "../include/efc_common.h"
 
 /*************************************************************************
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 06/32] elx: libefc_sli: bmbx routines and SLI config commands
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (4 preceding siblings ...)
  2019-12-20 22:36 ` [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  8:05   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
                   ` (26 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds routines to create mailbox commands used during
adapter initialization and adds APIs to issue mailbox commands to the
adapter through the bootstrap mailbox register.
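
As an illustrative sketch only (not part of this patch; the real
callers come with the efct driver patches later in the series), a
mailbox command is built into the bootstrap mailbox buffer and then
posted through the bootstrap mailbox register roughly as follows,
where rc is just a caller-local status variable:

	/*
	 * Build a CONFIG_LINK mailbox command in the bootstrap mailbox
	 * buffer, then post it through the bootstrap mailbox register
	 * and wait for its completion status.
	 */
	if (sli_cmd_config_link(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE) ==
	    EFC_SUCCESS)
		rc = sli_bmbx_command(sli4);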

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1229 +++++++++++++++++++++++++++++++++++-
 drivers/scsi/elx/libefc_sli/sli4.h |    2 +
 2 files changed, 1230 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 2ebe0235bc9e..3cdabb917df6 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -942,7 +942,6 @@ static int sli_cmd_cq_set_create(struct sli4 *sli4,
 	u16 dw6w1_flags = 0;
 	__le32 req_len;
 
-
 	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
 	switch (n_cqe) {
 	case 256:
@@ -3297,3 +3296,1231 @@ sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
 
 	return rc;
 }
+
+/* Wait for the bootstrap mailbox to report "ready" */
+static int
+sli_bmbx_wait(struct sli4 *sli4, u32 msec)
+{
+	u32 val = 0;
+
+	do {
+		mdelay(1);	/* 1 ms */
+		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+		msec--;
+	} while (msec && !(val & SLI4_BMBX_RDY));
+
+	/* return 0 if the port reported ready, nonzero on timeout */
+	return !(val & SLI4_BMBX_RDY);
+}
+
+/* Write bootstrap mailbox */
+static int
+sli_bmbx_write(struct sli4 *sli4)
+{
+	u32 val = 0;
+
+	/* write buffer location to bootstrap mailbox register */
+	val = SLI4_BMBX_WRITE_HI(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
+		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
+		return -1;
+	}
+	val = SLI4_BMBX_WRITE_LO(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	/* wait for SLI Port to set ready bit */
+	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
+}
+
+/* Submit a command to the bootstrap mailbox and check the status */
+int
+sli_bmbx_command(struct sli4 *sli4)
+{
+	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
+
+	if (sli_fw_error_status(sli4) > 0) {
+		efc_log_crit(sli4, "Chip is in an error state - Mailbox command rejected");
+		efc_log_crit(sli4, " status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(sli4),
+			sli_reg_read_err1(sli4),
+			sli_reg_read_err2(sli4));
+		return -1;
+	}
+
+	if (sli_bmbx_write(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail phys=%p reg=%#x\n",
+			(void *)sli4->bmbx.phys,
+			readl(sli4->reg[0] + SLI4_BMBX_REG));
+		return -1;
+	}
+
+	/* check completion queue entry status */
+	if (le32_to_cpu(((struct sli4_mcqe *)cqe)->dw3_flags) &
+	    SLI4_MCQE_VALID) {
+		return sli_cqe_mq(sli4, cqe);
+	}
+	efc_log_crit(sli4, "invalid or wrong type\n");
+	return -1;
+}
+
+/* Write a CONFIG_LINK command to the provided buffer */
+int
+sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_config_link *config_link = buf;
+
+	memset(buf, 0, size);
+
+	config_link->hdr.command = MBX_CMD_CONFIG_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+/* Write a DOWN_LINK command to the provided buffer */
+int
+sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_mbox_command_header *hdr = buf;
+
+	memset(buf, 0, size);
+
+	hdr->command = MBX_CMD_DOWN_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+/* Write a DUMP Type 4 command to the provided buffer */
+int
+sli_cmd_dump_type4(struct sli4 *sli4, void *buf,
+		   size_t size, u16 wki)
+{
+	struct sli4_cmd_dump4 *cmd = buf;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_DUMP;
+	cmd->type_dword = cpu_to_le32(0x4);
+	cmd->wki_selection = cpu_to_le16(wki);
+	return EFC_SUCCESS;
+}
+
+/* Write a COMMON_READ_TRANSCEIVER_DATA command */
+int
+sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf,
+				     size_t size, u32 page_num,
+				     struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_read_transceiver_data *req = NULL;
+	u32 psize;
+
+	if (!dma)
+		psize = SLI_CONFIG_PYLD_LENGTH(cmn_read_transceiver_data);
+	else
+		psize = dma->size;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+				  psize, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, CMN_READ_TRANS_DATA, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_read_transceiver_data));
+
+	req->page_number = cpu_to_le32(page_num);
+	req->port = cpu_to_le32(sli4->port_number);
+
+	return EFC_SUCCESS;
+}
+
+/* Write a READ_LINK_STAT command to the provided buffer */
+int
+sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size,
+			u8 req_ext_counters,
+			u8 clear_overflow_flags,
+			u8 clear_all_counters)
+{
+	struct sli4_cmd_read_link_stats *cmd = buf;
+	u32 flags;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_READ_LNK_STAT;
+
+	flags = 0;
+	if (req_ext_counters)
+		flags |= SLI4_READ_LNKSTAT_REC;
+	if (clear_all_counters)
+		flags |= SLI4_READ_LNKSTAT_CLRC;
+	if (clear_overflow_flags)
+		flags |= SLI4_READ_LNKSTAT_CLOF;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+/* Write a READ_STATUS command to the provided buffer */
+int
+sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size,
+		    u8 clear_counters)
+{
+	struct sli4_cmd_read_status *cmd = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_READ_STATUS;
+	if (clear_counters)
+		flags |= SLI4_READSTATUS_CLEAR_COUNTERS;
+	else
+		flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+/* Write an INIT_LINK command to the provided buffer */
+int
+sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size,
+		  u32 speed, u8 reset_alpa)
+{
+	struct sli4_cmd_init_link *init_link = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	init_link->hdr.command = MBX_CMD_INIT_LINK;
+
+	init_link->sel_reset_al_pa_dword =
+				cpu_to_le32(reset_alpa);
+	flags &= ~SLI4_INIT_LINK_FLAG_LOOPBACK;
+
+	init_link->link_speed_sel_code = cpu_to_le32(speed);
+	switch (speed) {
+	case FC_LINK_SPEED_1G:
+	case FC_LINK_SPEED_2G:
+	case FC_LINK_SPEED_4G:
+	case FC_LINK_SPEED_8G:
+	case FC_LINK_SPEED_16G:
+	case FC_LINK_SPEED_32G:
+		flags |= SLI4_INIT_LINK_FLAG_FIXED_SPEED;
+		break;
+	case FC_LINK_SPEED_10G:
+		efc_log_info(sli4, "unsupported FC speed %d\n", speed);
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+		/* Attempt P2P but failover to FC-AL */
+		flags |= SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER;
+
+		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
+		flags |= (SLI4_INIT_LINK_F_P2P_FAIL_OVER << 1);
+		break;
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
+		flags |= (SLI4_INIT_LINK_F_FCAL_ONLY << 1);
+		if (speed == FC_LINK_SPEED_16G ||
+		    speed == FC_LINK_SPEED_32G) {
+			efc_log_info(sli4, "unsupported FC-AL speed %d\n",
+				speed);
+			init_link->flags0 = cpu_to_le32(flags);
+			return EFC_FAIL;
+		}
+		break;
+	case SLI4_READ_CFG_TOPO_FC_DA:
+		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
+		flags |= (FC_TOPOLOGY_P2P << 1);
+		break;
+	default:
+
+		efc_log_info(sli4, "unsupported topology %#x\n",
+			sli4->topology);
+
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	flags &= (~SLI4_INIT_LINK_FLAG_UNFAIR);
+	flags &= (~SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP);
+	flags &= (~SLI4_INIT_LINK_FLAG_LOOP_VALIDITY);
+	flags &= (~SLI4_INIT_LINK_FLAG_SKIP_LISA);
+	flags &= (~SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA);
+	init_link->flags0 = cpu_to_le32(flags);
+
+	return EFC_SUCCESS;
+}
+
+/* Write an INIT_VFI command to the provided buffer */
+int
+sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 vfi, u16 fcfi, u16 vpi)
+{
+	struct sli4_cmd_init_vfi *init_vfi = buf;
+	u16 flags = 0;
+
+	memset(buf, 0, size);
+
+	init_vfi->hdr.command = MBX_CMD_INIT_VFI;
+
+	init_vfi->vfi = cpu_to_le16(vfi);
+	init_vfi->fcfi = cpu_to_le16(fcfi);
+
+	/*
+	 * If the VPI is valid, initialize it at the same time as
+	 * the VFI
+	 */
+	if (vpi != U16_MAX) {
+		flags |= SLI4_INIT_VFI_FLAG_VP;
+		init_vfi->flags0_word = cpu_to_le16(flags);
+		init_vfi->vpi = cpu_to_le16(vpi);
+	}
+
+	return EFC_SUCCESS;
+}
+
+/* Write an INIT_VPI command to the provided buffer */
+int
+sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 vpi, u16 vfi)
+{
+	struct sli4_cmd_init_vpi *init_vpi = buf;
+
+	memset(buf, 0, size);
+
+	init_vpi->hdr.command = MBX_CMD_INIT_VPI;
+	init_vpi->vpi = cpu_to_le16(vpi);
+	init_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size,
+		 u16 xri_base, u16 xri_count)
+{
+	struct sli4_cmd_post_xri *post_xri = buf;
+	u16 xri_count_flags = 0;
+
+	memset(buf, 0, size);
+
+	post_xri->hdr.command = MBX_CMD_POST_XRI;
+	post_xri->xri_base = cpu_to_le16(xri_base);
+	xri_count_flags = (xri_count & SLI4_POST_XRI_COUNT);
+	xri_count_flags |= SLI4_POST_XRI_FLAG_ENX;
+	xri_count_flags |= SLI4_POST_XRI_FLAG_VAL;
+	post_xri->xri_count_flags = cpu_to_le16(xri_count_flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size,
+		    u8 num_xri)
+{
+	struct sli4_cmd_release_xri *release_xri = buf;
+
+	memset(buf, 0, size);
+
+	release_xri->hdr.command = MBX_CMD_RELEASE_XRI;
+	release_xri->xri_count_word = cpu_to_le16(num_xri &
+					SLI4_RELEASE_XRI_COUNT);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_read_config(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_read_config *read_config = buf;
+
+	memset(buf, 0, size);
+
+	read_config->hdr.command = MBX_CMD_READ_CONFIG;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_read_nvparms *read_nvparms = buf;
+
+	memset(buf, 0, size);
+
+	read_nvparms->hdr.command = MBX_CMD_READ_NVPARMS;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size,
+		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		u32 preferred_d_id)
+{
+	struct sli4_cmd_write_nvparms *write_nvparms = buf;
+
+	memset(buf, 0, size);
+
+	write_nvparms->hdr.command = MBX_CMD_WRITE_NVPARMS;
+	memcpy(write_nvparms->wwpn, wwpn, 8);
+	memcpy(write_nvparms->wwnn, wwnn, 8);
+
+	write_nvparms->hard_alpa_d_id =
+			cpu_to_le32((preferred_d_id << 8) | hard_alpa);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_read_rev(struct sli4 *sli4, void *buf, size_t size,
+		 struct efc_dma *vpd)
+{
+	struct sli4_cmd_read_rev *read_rev = buf;
+
+	memset(buf, 0, size);
+
+	read_rev->hdr.command = MBX_CMD_READ_REV;
+
+	if (vpd && vpd->size) {
+		read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD);
+
+		read_rev->available_length_dword =
+			cpu_to_le32(vpd->size &
+				    SLI4_READ_REV_AVAILABLE_LENGTH);
+
+		read_rev->hostbuf.low =
+				cpu_to_le32(lower_32_bits(vpd->phys));
+		read_rev->hostbuf.high =
+				cpu_to_le32(upper_32_bits(vpd->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *dma,
+		     u16 vpi)
+{
+	struct sli4_cmd_read_sparm64 *read_sparm64 = buf;
+
+	memset(buf, 0, size);
+
+	if (vpi == SLI4_READ_SPARM64_VPI_SPECIAL) {
+		efc_log_info(sli4, "special VPI not supported\n");
+		return -1;
+	}
+
+	if (!dma || !dma->phys) {
+		efc_log_info(sli4, "bad DMA buffer\n");
+		return -1;
+	}
+
+	read_sparm64->hdr.command = MBX_CMD_READ_SPARM64;
+
+	read_sparm64->bde_64.bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
+	read_sparm64->bde_64.u.data.low =
+			cpu_to_le32(lower_32_bits(dma->phys));
+	read_sparm64->bde_64.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+
+	read_sparm64->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *dma)
+{
+	struct sli4_cmd_read_topology *read_topo = buf;
+
+	memset(buf, 0, size);
+
+	read_topo->hdr.command = MBX_CMD_READ_TOPOLOGY;
+
+	if (dma && dma->size) {
+		if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) {
+			efc_log_info(sli4, "loop map buffer too small %jd\n",
+				dma->size);
+			return 0;
+		}
+
+		memset(dma->virt, 0, dma->size);
+
+		read_topo->bde_loop_map.bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
+		read_topo->bde_loop_map.u.data.low  =
+			cpu_to_le32(lower_32_bits(dma->phys));
+		read_topo->bde_loop_map.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 index,
+		 struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
+{
+	struct sli4_cmd_reg_fcfi *reg_fcfi = buf;
+	u32 i;
+
+	memset(buf, 0, size);
+
+	reg_fcfi->hdr.command = MBX_CMD_REG_FCFI;
+
+	reg_fcfi->fcf_index = cpu_to_le16(index);
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		switch (i) {
+		case 0:
+			reg_fcfi->rqid0 = rq_cfg[0].rq_id;
+			break;
+		case 1:
+			reg_fcfi->rqid1 = rq_cfg[1].rq_id;
+			break;
+		case 2:
+			reg_fcfi->rqid2 = rq_cfg[2].rq_id;
+			break;
+		case 3:
+			reg_fcfi->rqid3 = rq_cfg[3].rq_id;
+			break;
+		}
+		reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size,
+		     u8 mode, u16 fcf_index,
+		     u8 rq_selection_policy, u8 mrq_bit_mask,
+		     u16 num_mrqs,
+		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
+{
+	struct sli4_cmd_reg_fcfi_mrq *reg_fcfi_mrq = buf;
+	u32 i;
+	u32 mrq_flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_fcfi_mrq->hdr.command = MBX_CMD_REG_FCFI_MRQ;
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) {
+		reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index);
+		goto done;
+	}
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match;
+
+		switch (i) {
+		case 3:
+			reg_fcfi_mrq->rqid3 = rq_cfg[i].rq_id;
+			break;
+		case 2:
+			reg_fcfi_mrq->rqid2 = rq_cfg[i].rq_id;
+			break;
+		case 1:
+			reg_fcfi_mrq->rqid1 = rq_cfg[i].rq_id;
+			break;
+		case 0:
+			reg_fcfi_mrq->rqid0 = rq_cfg[i].rq_id;
+			break;
+		}
+	}
+
+	mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS;
+	mrq_flags |= (mrq_bit_mask << 8);
+	mrq_flags |= (rq_selection_policy << 12);
+	reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags);
+done:
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size,
+		u32 nport_id, u16 rpi, u16 vpi,
+		struct efc_dma *dma, u8 update,
+		u8 enable_t10_pi)
+{
+	struct sli4_cmd_reg_rpi *reg_rpi = buf;
+	u32 rportid_flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_rpi->hdr.command = MBX_CMD_REG_RPI;
+
+	reg_rpi->rpi = cpu_to_le16(rpi);
+
+	rportid_flags = nport_id & SLI4_REGRPI_REMOTE_N_PORTID;
+
+	if (update)
+		rportid_flags |= SLI4_REGRPI_UPD;
+	else
+		rportid_flags &= ~SLI4_REGRPI_UPD;
+
+	if (enable_t10_pi)
+		rportid_flags |= SLI4_REGRPI_ETOW;
+	else
+		rportid_flags &= ~SLI4_REGRPI_ETOW;
+
+	reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags);
+
+	reg_rpi->bde_64.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
+	reg_rpi->bde_64.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma->phys));
+	reg_rpi->bde_64.u.data.high =
+		cpu_to_le32(upper_32_bits(dma->phys));
+
+	reg_rpi->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id)
+{
+	struct sli4_cmd_reg_vfi *reg_vfi = buf;
+
+	if (!sli4 || !buf)
+		return 0;
+
+	memset(buf, 0, size);
+
+	reg_vfi->hdr.command = MBX_CMD_REG_VFI;
+
+	reg_vfi->vfi = cpu_to_le16(vfi);
+
+	reg_vfi->fcfi = cpu_to_le16(fcfi);
+
+	reg_vfi->sparm.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
+	reg_vfi->sparm.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma.phys));
+	reg_vfi->sparm.u.data.high =
+		cpu_to_le32(upper_32_bits(dma.phys));
+
+	reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov);
+	reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov);
+
+	reg_vfi->dw0w1_flags |= cpu_to_le16(SLI4_REGVFI_VP);
+	reg_vfi->vpi = cpu_to_le16(vpi);
+	memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn));
+	reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size,
+		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
+		bool update)
+{
+	struct sli4_cmd_reg_vpi *reg_vpi = buf;
+	u32 flags = 0;
+
+	if (!sli4 || !buf)
+		return 0;
+
+	memset(buf, 0, size);
+
+	reg_vpi->hdr.command = MBX_CMD_REG_VPI;
+
+	flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID);
+	if (update)
+		flags |= SLI4_REGVPI_UPD;
+	else
+		flags &= ~SLI4_REGVPI_UPD;
+
+	reg_vpi->dw2_lportid_flags = cpu_to_le32(flags);
+	memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn));
+	reg_vpi->vpi = cpu_to_le16(vpi);
+	reg_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_request_features(struct sli4 *sli4, void *buf, size_t size,
+			 u32 features_mask, bool query)
+{
+	struct sli4_cmd_request_features *req_features = buf;
+
+	memset(buf, 0, size);
+
+	req_features->hdr.command = MBX_CMD_RQST_FEATURES;
+
+	if (query)
+		req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY);
+
+	req_features->cmd = cpu_to_le32(features_mask);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size,
+		   u16 indicator)
+{
+	struct sli4_cmd_unreg_fcfi *unreg_fcfi = buf;
+
+	if (!sli4 || !buf)
+		return 0;
+
+	memset(buf, 0, size);
+
+	unreg_fcfi->hdr.command = MBX_CMD_UNREG_FCFI;
+
+	unreg_fcfi->fcfi = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 indicator,
+		  enum sli4_resource which, u32 fc_id)
+{
+	struct sli4_cmd_unreg_rpi *unreg_rpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	unreg_rpi->hdr.command = MBX_CMD_UNREG_RPI;
+
+	switch (which) {
+	case SLI_RSRC_RPI:
+		flags |= UNREG_RPI_II_RPI;
+		if (fc_id == U32_MAX)
+			break;
+
+		flags |= UNREG_RPI_DP;
+		unreg_rpi->dw2_dest_n_portid =
+			cpu_to_le32(fc_id & UNREG_RPI_DEST_N_PORTID_MASK);
+		break;
+	case SLI_RSRC_VPI:
+		flags |= UNREG_RPI_II_VPI;
+		break;
+	case SLI_RSRC_VFI:
+		flags |= UNREG_RPI_II_VFI;
+		break;
+	case SLI_RSRC_FCFI:
+		flags |= UNREG_RPI_II_FCFI;
+		break;
+	default:
+		efc_log_info(sli4, "unknown type %#x\n", which);
+		return EFC_FAIL;
+	}
+
+	unreg_rpi->dw1w1_flags = cpu_to_le16(flags);
+	unreg_rpi->index = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 index, u32 which)
+{
+	struct sli4_cmd_unreg_vfi *unreg_vfi = buf;
+
+	memset(buf, 0, size);
+
+	unreg_vfi->hdr.command = MBX_CMD_UNREG_VFI;
+	switch (which) {
+	case SLI4_UNREG_TYPE_DOMAIN:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		unreg_vfi->index = cpu_to_le16(U16_MAX);
+		break;
+	default:
+		return 0;
+	}
+
+	if (which != SLI4_UNREG_TYPE_DOMAIN)
+		unreg_vfi->dw2_flags =
+			cpu_to_le16(UNREG_VFI_II_FCFI);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 indicator, u32 which)
+{
+	struct sli4_cmd_unreg_vpi *unreg_vpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	unreg_vpi->hdr.command = MBX_CMD_UNREG_VPI;
+	unreg_vpi->index = cpu_to_le16(indicator);
+	switch (which) {
+	case SLI4_UNREG_TYPE_PORT:
+		flags |= UNREG_VPI_II_VPI;
+		break;
+	case SLI4_UNREG_TYPE_DOMAIN:
+		flags |= UNREG_VPI_II_VFI;
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		flags |= UNREG_VPI_II_FCFI;
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		/* override indicator */
+		unreg_vpi->index = cpu_to_le16(U16_MAX);
+		flags |= UNREG_VPI_II_FCFI;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	unreg_vpi->dw2w0_flags = cpu_to_le16(flags);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_modify_eq_delay(struct sli4 *sli4, void *buf, size_t size,
+			       struct sli4_queue *q, int num_q, u32 shift,
+			       u32 delay_mult)
+{
+	struct sli4_rqst_cmn_modify_eq_delay *req = NULL;
+	int i;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+				SLI_CONFIG_PYLD_LENGTH(cmn_modify_eq_delay),
+				NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, CMN_MODIFY_EQ_DELAY, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_modify_eq_delay));
+	req->num_eq = cpu_to_le32(num_q);
+
+	for (i = 0; i < num_q; i++) {
+		req->eq_delay_record[i].eq_id = cpu_to_le32(q[i].id);
+		req->eq_delay_record[i].phase = cpu_to_le32(shift);
+		req->eq_delay_record[i].delay_multiplier =
+			cpu_to_le32(delay_mult);
+	}
+
+	return EFC_SUCCESS;
+}
+
+void
+sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf,
+			       size_t size, u16 timeout)
+{
+	struct sli4_rqst_lowlevel_set_watchdog *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+			SLI_CONFIG_PYLD_LENGTH(lowlevel_set_watchdog),
+			NULL);
+	if (!req)
+		return;
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_LOWLEVEL_SET_WATCHDOG,
+			 SLI4_SUBSYSTEM_LOWLEVEL, CMD_V0,
+			 CFG_RQST_PYLD_LEN(lowlevel_set_watchdog));
+	req->watchdog_timeout = cpu_to_le16(timeout);
+}
+
+static int
+sli_cmd_common_get_cntl_attributes(struct sli4 *sli4, void *buf, size_t size,
+				   struct efc_dma *dma)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_CNTL_ATTRIBUTES;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_get_cntl_addl_attributes(struct sli4 *sli4, void *buf,
+					size_t size, struct efc_dma *dma)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_CNTL_ADDL_ATTRS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_nop(struct sli4 *sli4, void *buf,
+		   size_t size, uint64_t context)
+{
+	struct sli4_rqst_cmn_nop *nop = NULL;
+
+	nop = sli_config_cmd_init(sli4, buf, size,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_nop), NULL);
+	if (!nop)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&nop->hdr, CMN_NOP, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_nop));
+
+	memcpy(&nop->context, &context, sizeof(context));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
+					size_t size, u16 rtype)
+{
+	struct sli4_rqst_cmn_get_resource_extent_info *extent = NULL;
+
+	extent = sli_config_cmd_init(sli4, buf, size,
+			CFG_RQST_CMDSZ(cmn_get_resource_extent_info),
+				     NULL);
+	if (!extent)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&extent->hdr, CMN_GET_RSC_EXTENT_INFO,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 CFG_RQST_PYLD_LEN(cmn_get_resource_extent_info));
+
+	extent->resource_type = cpu_to_le16(rtype);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf,
+				   size_t size)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_get_sli4_params),
+				  NULL);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_SLI4_PARAMS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(CFG_RQST_PYLD_LEN(cmn_get_sli4_params));
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_get_port_name(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_rqst_cmn_get_port_name *pname;
+
+	pname = sli_config_cmd_init(sli4, buf, size,
+				    SLI_CONFIG_PYLD_LENGTH(cmn_get_port_name),
+				    NULL);
+	if (!pname)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&pname->hdr, CMN_GET_PORT_NAME, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V1, CFG_RQST_PYLD_LEN(cmn_get_port_name));
+
+	/* Set the port type value (ethernet=0, FC=1) for V1 commands */
+	pname->port_type = PORT_TYPE_FC;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size,
+			    u16 noc,
+			    u16 eof, u32 desired_write_length,
+			    u32 offset, char *object_name,
+			    struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_write_object *wr_obj = NULL;
+	struct sli4_bde *bde;
+	u32 dwflags = 0;
+
+	wr_obj = sli_config_cmd_init(sli4, buf, size,
+				     CFG_RQST_CMDSZ(cmn_write_object) +
+				     sizeof(*bde), NULL);
+	if (!wr_obj)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&wr_obj->hdr, CMN_WRITE_OBJECT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0,
+			 CFG_RQST_PYLD_LEN_VAR(cmn_write_object, sizeof(*bde)));
+
+	if (noc)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_NOC;
+	if (eof)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_EOF;
+	dwflags |= (desired_write_length & SLI4_RQ_DES_WRITE_LEN);
+
+	wr_obj->desired_write_len_dword = cpu_to_le32(dwflags);
+
+	wr_obj->write_offset = cpu_to_le32(offset);
+	strncpy(wr_obj->object_name, object_name,
+		sizeof(wr_obj->object_name));
+	wr_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	bde->bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (desired_write_length & SLI4_BDE_MASK_BUFFER_LEN));
+	bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
+	bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
+			     char *object_name)
+{
+	struct sli4_rqst_cmn_delete_object *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+				  CFG_RQST_CMDSZ(cmn_delete_object), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, CMN_DELETE_OBJECT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_delete_object));
+
+	strncpy(req->object_name, object_name, sizeof(req->object_name));
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
+			   u32 desired_read_length, u32 offset,
+			   char *object_name, struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_read_object *rd_obj = NULL;
+	struct sli4_bde *bde;
+
+	rd_obj = sli_config_cmd_init(sli4, buf, size,
+				     CFG_RQST_CMDSZ(cmn_read_object) +
+				     sizeof(*bde), NULL);
+	if (!rd_obj)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rd_obj->hdr, CMN_READ_OBJECT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0,
+			 CFG_RQST_PYLD_LEN_VAR(cmn_read_object, sizeof(*bde)));
+	rd_obj->desired_read_length_dword =
+		cpu_to_le32(desired_read_length & SLI4_REQ_DESIRE_READLEN);
+
+	rd_obj->read_offset = cpu_to_le32(offset);
+	strncpy(rd_obj->object_name, object_name,
+		sizeof(rd_obj->object_name));
+	rd_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	bde->bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (desired_read_length & SLI4_BDE_MASK_BUFFER_LEN));
+	if (dma) {
+		bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
+		bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
+	} else {
+		bde->u.data.low = 0;
+		bde->u.data.high = 0;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
+			  struct efc_dma *cmd,
+			  struct efc_dma *resp)
+{
+	struct sli4_rqst_dmtf_exec_clp_cmd *clp_cmd = NULL;
+
+	clp_cmd = sli_config_cmd_init(sli4, buf, size,
+				      CFG_RQST_CMDSZ(dmtf_exec_clp_cmd), NULL);
+	if (!clp_cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&clp_cmd->hdr, DMTF_EXEC_CLP_CMD, SLI4_SUBSYSTEM_DMTF,
+			 CMD_V0, CFG_RQST_PYLD_LEN(dmtf_exec_clp_cmd));
+
+	clp_cmd->cmd_buf_length = cpu_to_le32(cmd->size);
+	clp_cmd->cmd_buf_addr_low =  cpu_to_le32(lower_32_bits(cmd->phys));
+	clp_cmd->cmd_buf_addr_high =  cpu_to_le32(upper_32_bits(cmd->phys));
+	clp_cmd->resp_buf_length = cpu_to_le32(resp->size);
+	clp_cmd->resp_buf_addr_low =  cpu_to_le32(lower_32_bits(resp->phys));
+	clp_cmd->resp_buf_addr_high =  cpu_to_le32(upper_32_bits(resp->phys));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf,
+				 size_t size, bool query,
+				 bool is_buffer_list,
+				 struct efc_dma *buffer, u8 fdb)
+{
+	struct sli4_rqst_cmn_set_dump_location *set_dump_loc = NULL;
+	u32 buffer_length_flag = 0;
+
+	set_dump_loc = sli_config_cmd_init(sli4, buf, size,
+					CFG_RQST_CMDSZ(cmn_set_dump_location),
+					NULL);
+	if (!set_dump_loc)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&set_dump_loc->hdr, CMN_SET_DUMP_LOCATION,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 CFG_RQST_PYLD_LEN(cmn_set_dump_location));
+
+	if (is_buffer_list)
+		buffer_length_flag |= SLI4_RQ_COM_SET_DUMP_BLP;
+
+	if (query)
+		buffer_length_flag |= SLI4_RQ_COM_SET_DUMP_QRY;
+
+	if (fdb)
+		buffer_length_flag |= SLI4_RQ_COM_SET_DUMP_FDB;
+
+	if (buffer) {
+		set_dump_loc->buf_addr_low =
+			cpu_to_le32(lower_32_bits(buffer->phys));
+		set_dump_loc->buf_addr_high =
+			cpu_to_le32(upper_32_bits(buffer->phys));
+
+		buffer_length_flag |= (buffer->len &
+				       SLI4_RQ_COM_SET_DUMP_BUFFER_LEN);
+	} else {
+		set_dump_loc->buf_addr_low = 0;
+		set_dump_loc->buf_addr_high = 0;
+		set_dump_loc->buffer_length_dword = 0;
+	}
+	set_dump_loc->buffer_length_dword = cpu_to_le32(buffer_length_flag);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
+			    u32 feature,
+			    u32 param_len,
+			    void *parameter)
+{
+	struct sli4_rqst_cmn_set_features *cmd = NULL;
+
+	cmd = sli_config_cmd_init(sli4, buf, size,
+				  CFG_RQST_CMDSZ(cmn_set_features), NULL);
+	if (!cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&cmd->hdr, CMN_SET_FEATURES, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_set_features));
+
+	cmd->feature = cpu_to_le32(feature);
+	cmd->param_len = cpu_to_le32(param_len);
+	memcpy(cmd->params, parameter, param_len);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cqe_mq(struct sli4 *sli4, void *buf)
+{
+	struct sli4_mcqe *mcqe = buf;
+	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
+	/*
+	 * Firmware can split mbx completions into two MCQEs: first with only
+	 * the "consumed" bit set and a second with the "complete" bit set.
+	 * Thus, ignore MCQE unless "complete" is set.
+	 */
+	if (!(dwflags & SLI4_MCQE_COMPLETED))
+		return -2;
+
+	if (le16_to_cpu(mcqe->completion_status)) {
+		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
+			le16_to_cpu(mcqe->completion_status),
+			      le16_to_cpu(mcqe->extended_status),
+			      (dwflags & SLI4_MCQE_CONSUMED),
+			      (dwflags & SLI4_MCQE_COMPLETED),
+			      (dwflags & SLI4_MCQE_AE),
+			      (dwflags & SLI4_MCQE_VALID));
+	}
+
+	return le16_to_cpu(mcqe->completion_status);
+}
+
+int
+sli_cqe_async(struct sli4 *sli4, void *buf)
+{
+	struct sli4_acqe *acqe = buf;
+	int rc = -1;
+
+	if (!buf) {
+		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
+		return -1;
+	}
+
+	switch (acqe->event_code) {
+	case SLI4_ACQE_EVENT_CODE_LINK_STATE:
+		rc = sli_fc_process_link_state(sli4, buf);
+		break;
+	case SLI4_ACQE_EVENT_CODE_GRP_5:
+		efc_log_info(sli4, "ACQE GRP5\n");
+		break;
+	case SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT:
+		efc_log_info(sli4, "ACQE SLI Port, type=0x%x, data1,2=0x%08x,0x%08x\n",
+			acqe->event_type,
+			le32_to_cpu(acqe->event_data[0]),
+			le32_to_cpu(acqe->event_data[1]));
+		break;
+	case SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT:
+		rc = sli_fc_process_link_attention(sli4, buf);
+		break;
+	default:
+		efc_log_info(sli4, "ACQE unknown=%#x\n",
+			acqe->event_code);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 5c9609e7c72c..4184a7e0069a 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -12,6 +12,8 @@
 #ifndef _SLI4_H
 #define _SLI4_H
 
+#include <linux/pci.h>
+#include <linux/delay.h>
 #include "scsi/fc/fc_els.h"
 #include "scsi/fc/fc_fs.h"
 #include "../include/efc_common.h"
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (5 preceding siblings ...)
  2019-12-20 22:36 ` [PATCH v2 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-08  8:22   ` Hannes Reinecke
  2019-12-20 22:36 ` [PATCH v2 08/32] elx: libefc: Generic state machine framework James Smart
                   ` (25 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds APIs to initialize the library, initialize
the SLI Port, reset the firmware, terminate the SLI Port, and
terminate the library.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
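Reviewer note, not part of the patch: a minimal sketch, assuming a
hypothetical wrapper structure, of the order in which the new APIs are
meant to be called. The names example_hw, example_link_cb,
example_bring_up and example_take_down are placeholders only; the real
efct driver glue that performs this sequencing arrives in later patches
of this series.

/* Placeholder wrapper; the real driver embeds struct sli4 elsewhere. */
struct example_hw {
	struct sli4 sli;
};

/* Placeholder link-event handler registered via SLI4_CB_LINK. */
static void example_link_cb(void *arg, void *event)
{
}

static int example_bring_up(struct example_hw *hw, struct pci_dev *pdev,
			    void __iomem *regs[])
{
	/* regs[] is assumed to hold the already-mapped PCI BARs.
	 * sli_setup() reads SLI_INTF, identifies the ASIC, allocates the
	 * bootstrap mailbox and VPD buffers, and reads the port config.
	 */
	if (sli_setup(&hw->sli, hw, pdev, regs))
		return -1;

	/* Ask to be notified of link state changes. */
	if (sli_callback(&hw->sli, SLI4_CB_LINK, (void *)example_link_cb, hw))
		return -1;

	/* Commit the negotiated feature set to the SLI Port. */
	if (sli_init(&hw->sli))
		return -1;

	return 0;
}

static void example_take_down(struct example_hw *hw)
{
	/* Resets the SLI Port and frees the bootstrap mailbox/VPD DMA. */
	sli_teardown(&hw->sli);
}

sli_reset() and sli_fw_reset() are intended to slot in between bring-up
and teardown on recovery paths (sli_fw_reset() to reset the firmware,
sli_reset() to re-read the configuration afterwards).
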
 drivers/scsi/elx/libefc_sli/sli4.c | 1222 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |  552 +++++++++++++++-
 2 files changed, 1773 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 3cdabb917df6..e2bea34b445a 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -4524,3 +4524,1225 @@ sli_cqe_async(struct sli4 *sli4, void *buf)
 
 	return rc;
 }
+
+/* Determine if the chip FW is in a ready state */
+int
+sli_fw_ready(struct sli4 *sli4)
+{
+	u32 val;
+	/*
+	 * Is firmware ready for operation? Check needed depends on IF_TYPE
+	 */
+	val = sli_reg_read_status(sli4);
+	return (val & SLI4_PORT_STATUS_RDY) ? 1 : 0;
+}
+
+static int
+sli_sliport_reset(struct sli4 *sli4)
+{
+	u32 iter, val;
+	int rc = -1;
+
+	val = SLI4_PORT_CTRL_IP;
+	/* Initialize port, endian */
+	writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+
+	for (iter = 0; iter < 3000; iter++) {
+		mdelay(10);	/* 10 ms */
+		if (sli_fw_ready(sli4) == 1) {
+			rc = 0;
+			break;
+		}
+	}
+
+	if (rc != 0)
+		efc_log_crit(sli4, "port failed to become ready after initialization\n");
+
+	return rc;
+}
+
+static bool
+sli_wait_for_fw_ready(struct sli4 *sli4, u32 timeout_ms)
+{
+	u32 iter = timeout_ms / (SLI4_INIT_PORT_DELAY_US / 1000);
+	bool ready = false;
+
+	do {
+		iter--;
+		mdelay(10);	/* 10 ms */
+		if (sli_fw_ready(sli4) == 1)
+			ready = true;
+	} while (!ready && (iter > 0));
+
+	return ready;
+}
+
+static int
+sli_fw_init(struct sli4 *sli4)
+{
+	bool ready;
+
+	/*
+	 * Is firmware ready for operation?
+	 */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return -1;
+	}
+
+	/*
+	 * Reset port to a known state
+	 */
+	if (sli_sliport_reset(sli4))
+		return -1;
+
+	return 0;
+}
+
+static int
+sli_fw_term(struct sli4 *sli4)
+{
+	/* type 2 etc. use SLIPORT_CONTROL to initialize port */
+	sli_sliport_reset(sli4);
+	return 0;
+}
+
+static int
+sli_request_features(struct sli4 *sli4, u32 *features, bool query)
+{
+	if (!sli_cmd_request_features(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				     *features, query)) {
+		struct sli4_cmd_request_features *req_features =
+							sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		}
+		if (le16_to_cpu(req_features->hdr.status)) {
+			efc_log_err(sli4, "REQUEST_FEATURES bad status %#x\n",
+			       le16_to_cpu(req_features->hdr.status));
+			return -1;
+		}
+		*features = le32_to_cpu(req_features->resp);
+	} else {
+		efc_log_err(sli4, "bad REQUEST_FEATURES write\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+void
+sli_calc_max_qentries(struct sli4 *sli4)
+{
+	enum sli4_qtype q;
+	u32 qentries;
+
+	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
+		sli4->qinfo.max_qentries[q] =
+			sli_convert_mask_to_count(sli4->qinfo.count_method[q],
+						  sli4->qinfo.count_mask[q]);
+	}
+
+	/* a single, contiguous DMA allocation will be made for each queue
+	 * of size (max_qentries * queue entry size); since these can be large,
+	 * check against the OS max DMA allocation size
+	 */
+	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
+		qentries = sli4->qinfo.max_qentries[q];
+
+		efc_log_info(sli4, "[%s]: max_qentries from %d to %d\n",
+			     SLI_QNAME[q],
+			     sli4->qinfo.max_qentries[q], qentries);
+		sli4->qinfo.max_qentries[q] = qentries;
+	}
+}
+
+static int
+sli_get_config(struct sli4 *sli4)
+{
+	struct efc_dma data;
+	u32 psize;
+
+	/*
+	 * Read the device configuration
+	 */
+	if (!sli_cmd_read_config(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_read_config	*read_config = sli4->bmbx.virt;
+		u32 i;
+		u32 total, total_size;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox fail (READ_CONFIG)\n");
+			return -1;
+		}
+		if (le16_to_cpu(read_config->hdr.status)) {
+			efc_log_err(sli4, "READ_CONFIG bad status %#x\n",
+			       le16_to_cpu(read_config->hdr.status));
+			return -1;
+		}
+
+		sli4->has_extents =
+			le32_to_cpu(read_config->ext_dword) &
+				    SLI4_READ_CFG_RESP_RESOURCE_EXT;
+		if (!sli4->has_extents) {
+			u32	i = 0, size = 0;
+			u32	*base = sli4->extent[0].base;
+
+			if (!base) {
+				size = SLI_RSRC_MAX * sizeof(u32);
+				base = kzalloc(size, GFP_ATOMIC);
+				if (!base)
+					return -1;
+
+				memset(base, 0,
+				       SLI_RSRC_MAX * sizeof(u32));
+			}
+
+			for (i = 0; i < SLI_RSRC_MAX; i++) {
+				sli4->extent[i].number = 1;
+				sli4->extent[i].n_alloc = 0;
+				sli4->extent[i].base = &base[i];
+			}
+
+			sli4->extent[SLI_RSRC_VFI].base[0] =
+				le16_to_cpu(read_config->vfi_base);
+			sli4->extent[SLI_RSRC_VFI].size =
+				le16_to_cpu(read_config->vfi_count);
+
+			sli4->extent[SLI_RSRC_VPI].base[0] =
+				le16_to_cpu(read_config->vpi_base);
+			sli4->extent[SLI_RSRC_VPI].size =
+				le16_to_cpu(read_config->vpi_count);
+
+			sli4->extent[SLI_RSRC_RPI].base[0] =
+				le16_to_cpu(read_config->rpi_base);
+			sli4->extent[SLI_RSRC_RPI].size =
+				le16_to_cpu(read_config->rpi_count);
+
+			sli4->extent[SLI_RSRC_XRI].base[0] =
+				le16_to_cpu(read_config->xri_base);
+			sli4->extent[SLI_RSRC_XRI].size =
+				le16_to_cpu(read_config->xri_count);
+
+			sli4->extent[SLI_RSRC_FCFI].base[0] = 0;
+			sli4->extent[SLI_RSRC_FCFI].size =
+				le16_to_cpu(read_config->fcfi_count);
+		}
+
+		for (i = 0; i < SLI_RSRC_MAX; i++) {
+			total = sli4->extent[i].number *
+				sli4->extent[i].size;
+			total_size = BITS_TO_LONGS(total) * sizeof(long);
+			sli4->extent[i].use_map =
+				kzalloc(total_size, GFP_ATOMIC);
+			if (!sli4->extent[i].use_map) {
+				efc_log_err(sli4, "bitmap memory allocation failed %d\n",
+				       i);
+				return -1;
+			}
+			sli4->extent[i].map_size = total;
+		}
+
+		sli4->topology =
+				(le32_to_cpu(read_config->topology_dword) &
+				 SLI4_READ_CFG_RESP_TOPOLOGY) >> 24;
+		switch (sli4->topology) {
+		case SLI4_READ_CFG_TOPO_FC:
+			efc_log_info(sli4, "FC (unknown)\n");
+			break;
+		case SLI4_READ_CFG_TOPO_FC_DA:
+			efc_log_info(sli4, "FC (direct attach)\n");
+			break;
+		case SLI4_READ_CFG_TOPO_FC_AL:
+			efc_log_info(sli4, "FC (arbitrated loop)\n");
+			break;
+		default:
+			efc_log_info(sli4, "bad topology %#x\n",
+				sli4->topology);
+		}
+
+		sli4->e_d_tov = le16_to_cpu(read_config->e_d_tov);
+		sli4->r_a_tov = le16_to_cpu(read_config->r_a_tov);
+
+		sli4->link_module_type = le16_to_cpu(read_config->lmt);
+
+		sli4->qinfo.max_qcount[SLI_QTYPE_EQ] =
+				le16_to_cpu(read_config->eq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_CQ] =
+				le16_to_cpu(read_config->cq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_WQ] =
+				le16_to_cpu(read_config->wq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_RQ] =
+				le16_to_cpu(read_config->rq_count);
+
+		/*
+		 * READ_CONFIG doesn't give the max number of MQ. Applications
+		 * will typically want 1, but we may need another at some future
+		 * date. Dummy up a "max" MQ count here.
+		 */
+		sli4->qinfo.max_qcount[SLI_QTYPE_MQ] = SLI_USER_MQ_COUNT;
+	} else {
+		efc_log_err(sli4, "bad READ_CONFIG write\n");
+		return -1;
+	}
+
+	if (!sli_cmd_common_get_sli4_parameters(sli4, sli4->bmbx.virt,
+					       SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_cmn_get_sli4_params	*parms =
+			(struct sli4_rsp_cmn_get_sli4_params *)
+			(((u8 *)sli4->bmbx.virt) +
+			offsetof(struct sli4_cmd_sli_config, payload.embed));
+		u32 dwflags_loopback;
+		u32 dwflags_eq_page_cnt;
+		u32 dwflags_cq_page_cnt;
+		u32 dwflags_mq_page_cnt;
+		u32 dwflags_wq_page_cnt;
+		u32 dwflags_rq_page_cnt;
+		u32 dwflags_sgl_page_cnt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		} else if (parms->hdr.status) {
+			efc_log_err(sli4, "COMMON_GET_SLI4_PARAMETERS bad status %#x",
+			       parms->hdr.status);
+			efc_log_err(sli4, "additional status %#x\n",
+			       parms->hdr.additional_status);
+			return -1;
+		}
+
+		dwflags_loopback = le32_to_cpu(parms->dw16_loopback_scope);
+		dwflags_eq_page_cnt = le32_to_cpu(parms->dw6_eq_page_cnt);
+		dwflags_cq_page_cnt = le32_to_cpu(parms->dw8_cq_page_cnt);
+		dwflags_mq_page_cnt = le32_to_cpu(parms->dw10_mq_page_cnt);
+		dwflags_wq_page_cnt = le32_to_cpu(parms->dw12_wq_page_cnt);
+		dwflags_rq_page_cnt = le32_to_cpu(parms->dw14_rq_page_cnt);
+
+		sli4->auto_reg =
+			(dwflags_loopback & RSP_GET_PARAM_AREG);
+		sli4->auto_xfer_rdy =
+			(dwflags_loopback & RSP_GET_PARAM_AGXF);
+		sli4->hdr_template_req =
+			(dwflags_loopback & RSP_GET_PARAM_HDRR);
+		sli4->t10_dif_inline_capable =
+			(dwflags_loopback & RSP_GET_PARAM_TIMM);
+		sli4->t10_dif_separate_capable =
+			(dwflags_loopback & RSP_GET_PARAM_TSMM);
+
+		sli4->mq_create_version =
+				GET_Q_CREATE_VERSION(dwflags_mq_page_cnt);
+		sli4->cq_create_version =
+				GET_Q_CREATE_VERSION(dwflags_cq_page_cnt);
+
+		sli4->rq_min_buf_size =
+			le16_to_cpu(parms->min_rq_buffer_size);
+		sli4->rq_max_buf_size =
+			le32_to_cpu(parms->max_rq_buffer_size);
+
+		sli4->qinfo.qpage_count[SLI_QTYPE_EQ] =
+			(dwflags_eq_page_cnt & RSP_GET_PARAM_EQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_CQ] =
+			(dwflags_cq_page_cnt & RSP_GET_PARAM_CQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_MQ] =
+			(dwflags_mq_page_cnt & RSP_GET_PARAM_MQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_WQ] =
+			(dwflags_wq_page_cnt & RSP_GET_PARAM_WQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_RQ] =
+			(dwflags_rq_page_cnt & RSP_GET_PARAM_RQ_PAGE_CNT_MASK);
+
+		/* save count methods and masks for each queue type */
+
+		sli4->qinfo.count_mask[SLI_QTYPE_EQ] =
+				le16_to_cpu(parms->eqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_EQ] =
+				GET_Q_CNT_METHOD(dwflags_eq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_CQ] =
+				le16_to_cpu(parms->cqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_CQ] =
+				GET_Q_CNT_METHOD(dwflags_cq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_MQ] =
+				le16_to_cpu(parms->mqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_MQ] =
+				GET_Q_CNT_METHOD(dwflags_mq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_WQ] =
+				le16_to_cpu(parms->wqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_WQ] =
+				GET_Q_CNT_METHOD(dwflags_wq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_RQ] =
+				le16_to_cpu(parms->rqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_RQ] =
+				GET_Q_CNT_METHOD(dwflags_rq_page_cnt);
+
+		/* now calculate max queue entries */
+		sli_calc_max_qentries(sli4);
+
+		dwflags_sgl_page_cnt = le32_to_cpu(parms->dw18_sgl_page_cnt);
+
+		/* max # of pages */
+		sli4->max_sgl_pages =
+				(dwflags_sgl_page_cnt &
+				 RSP_GET_PARAM_SGL_PAGE_CNT_MASK);
+
+		/* bit map of available sizes */
+		sli4->sgl_page_sizes =
+				(dwflags_sgl_page_cnt &
+				 RSP_GET_PARAM_SGL_PAGE_SZS_MASK) >> 8;
+		/* ignore HLM here. Use value from REQUEST_FEATURES */
+		sli4->sge_supported_length =
+				le32_to_cpu(parms->sge_supported_length);
+		sli4->sgl_pre_registration_required =
+			(dwflags_loopback & RSP_GET_PARAM_SGLR);
+		/* default to using pre-registered SGL's */
+		sli4->sgl_pre_registered = true;
+
+		sli4->perf_hint =
+			(dwflags_loopback & RSP_GET_PARAM_PHON);
+		sli4->perf_wq_id_association =
+			(dwflags_loopback & RSP_GET_PARAM_PHWQ);
+
+		sli4->rq_batch =
+			(le16_to_cpu(parms->dw15w1_rq_db_window) &
+			 RSP_GET_PARAM_RQ_DB_WINDOW_MASK) >> 12;
+
+		/* Use the highest available WQE size. */
+		if (((dwflags_wq_page_cnt &
+		    RSP_GET_PARAM_WQE_SZS_MASK) >> 8) &
+		    SLI4_128BYTE_WQE_SUPPORT)
+			sli4->wqe_size = SLI4_WQE_EXT_BYTES;
+		else
+			sli4->wqe_size = SLI4_WQE_BYTES;
+	}
+
+	sli4->port_number = 0;
+
+	/*
+	 * Issue COMMON_GET_CNTL_ATTRIBUTES to get port_number. Temporarily
+	 * uses VPD DMA buffer as the response won't fit in the embedded
+	 * buffer.
+	 */
+	if (!sli_cmd_common_get_cntl_attributes(sli4, sli4->bmbx.virt,
+					       SLI4_BMBX_SIZE,
+					       &sli4->vpd_data)) {
+		struct sli4_rsp_cmn_get_cntl_attributes *attr =
+			sli4->vpd_data.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		} else if (attr->hdr.status) {
+			efc_log_err(sli4, "COMMON_GET_CNTL_ATTRIBUTES bad status %#x",
+			       attr->hdr.status);
+			efc_log_err(sli4, "additional status %#x\n",
+			       attr->hdr.additional_status);
+			return -1;
+		}
+
+		sli4->port_number = (attr->port_num_type_flags &
+					    SLI4_CNTL_ATTR_PORTNUM);
+
+		memcpy(sli4->bios_version_string,
+		       attr->bios_version_str,
+		       sizeof(sli4->bios_version_string));
+	} else {
+		efc_log_err(sli4, "bad COMMON_GET_CNTL_ATTRIBUTES write\n");
+		return -1;
+	}
+
+	psize = sizeof(struct sli4_rsp_cmn_get_cntl_addl_attributes);
+	data.size = psize;
+	data.virt = dma_alloc_coherent(&sli4->pcidev->dev, data.size,
+				       &data.phys, GFP_DMA);
+	if (!data.virt) {
+		memset(&data, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "Failed to allocate memory for GET_CNTL_ADDL_ATTR\n");
+	} else {
+		if (!sli_cmd_common_get_cntl_addl_attributes(sli4,
+							    sli4->bmbx.virt,
+							    SLI4_BMBX_SIZE,
+							    &data)) {
+			struct sli4_rsp_cmn_get_cntl_addl_attributes *attr;
+
+			attr = data.virt;
+			if (sli_bmbx_command(sli4)) {
+				efc_log_crit(sli4, "mailbox fail (GET_CNTL_ADDL_ATTR)\n");
+				dma_free_coherent(&sli4->pcidev->dev, data.size,
+						  data.virt, data.phys);
+				return -1;
+			}
+			if (attr->hdr.status) {
+				efc_log_err(sli4, "GET_CNTL_ADDL_ATTR bad status %#x\n",
+				       attr->hdr.status);
+				dma_free_coherent(&sli4->pcidev->dev, data.size,
+						  data.virt, data.phys);
+				return -1;
+			}
+
+			memcpy(sli4->ipl_name, attr->ipl_file_name,
+			       sizeof(sli4->ipl_name));
+
+			efc_log_info(sli4, "IPL:%s\n",
+				(char *)sli4->ipl_name);
+		} else {
+			efc_log_err(sli4, "bad GET_CNTL_ADDL_ATTR write\n");
+			dma_free_coherent(&sli4->pcidev->dev, data.size,
+					  data.virt, data.phys);
+			return -1;
+		}
+
+		dma_free_coherent(&sli4->pcidev->dev, data.size, data.virt,
+				  data.phys);
+		memset(&data, 0, sizeof(struct efc_dma));
+	}
+
+	if (!sli_cmd_common_get_port_name(sli4, sli4->bmbx.virt,
+					 SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_cmn_get_port_name	*port_name =
+			(struct sli4_rsp_cmn_get_port_name *)
+			(((u8 *)sli4->bmbx.virt) +
+			offsetof(struct sli4_cmd_sli_config, payload.embed));
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return -1;
+		}
+
+		sli4->port_name[0] =
+			port_name->port_name[sli4->port_number];
+	}
+	sli4->port_name[1] = '\0';
+
+	if (!sli_cmd_read_rev(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+			     &sli4->vpd_data)) {
+		struct sli4_cmd_read_rev	*read_rev = sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox write fail (READ_REV)\n");
+			return -1;
+		}
+		if (le16_to_cpu(read_rev->hdr.status)) {
+			efc_log_err(sli4, "READ_REV bad status %#x\n",
+			       le16_to_cpu(read_rev->hdr.status));
+			return -1;
+		}
+
+		sli4->fw_rev[0] =
+				le32_to_cpu(read_rev->first_fw_id);
+		memcpy(sli4->fw_name[0], read_rev->first_fw_name,
+		       sizeof(sli4->fw_name[0]));
+
+		sli4->fw_rev[1] =
+				le32_to_cpu(read_rev->second_fw_id);
+		memcpy(sli4->fw_name[1], read_rev->second_fw_name,
+		       sizeof(sli4->fw_name[1]));
+
+		sli4->hw_rev[0] = le32_to_cpu(read_rev->first_hw_rev);
+		sli4->hw_rev[1] = le32_to_cpu(read_rev->second_hw_rev);
+		sli4->hw_rev[2] = le32_to_cpu(read_rev->third_hw_rev);
+
+		efc_log_info(sli4, "FW1:%s (%08x) / FW2:%s (%08x)\n",
+			read_rev->first_fw_name,
+			      le32_to_cpu(read_rev->first_fw_id),
+			      read_rev->second_fw_name,
+			      le32_to_cpu(read_rev->second_fw_id));
+
+		efc_log_info(sli4, "HW1: %08x / HW2: %08x\n",
+			le32_to_cpu(read_rev->first_hw_rev),
+			      le32_to_cpu(read_rev->second_hw_rev));
+
+		/* Check that all VPD data was returned */
+		if (le32_to_cpu(read_rev->returned_vpd_length) !=
+		    le32_to_cpu(read_rev->actual_vpd_length)) {
+			efc_log_info(sli4, "VPD length: avail=%d returned=%d actual=%d\n",
+				le32_to_cpu(read_rev->available_length_dword) &
+					    SLI4_READ_REV_AVAILABLE_LENGTH,
+				le32_to_cpu(read_rev->returned_vpd_length),
+				le32_to_cpu(read_rev->actual_vpd_length));
+		}
+		sli4->vpd_length = le32_to_cpu(read_rev->returned_vpd_length);
+	} else {
+		efc_log_err(sli4, "bad READ_REV write\n");
+		return -1;
+	}
+
+	if (!sli_cmd_read_nvparms(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {
+		struct sli4_cmd_read_nvparms *read_nvparms = sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox fail (READ_NVPARMS)\n");
+			return -1;
+		}
+		if (le16_to_cpu(read_nvparms->hdr.status)) {
+			efc_log_err(sli4, "READ_NVPARMS bad status %#x\n",
+			       le16_to_cpu(read_nvparms->hdr.status));
+			return -1;
+		}
+
+		memcpy(sli4->wwpn, read_nvparms->wwpn,
+		       sizeof(sli4->wwpn));
+		memcpy(sli4->wwnn, read_nvparms->wwnn,
+		       sizeof(sli4->wwnn));
+
+		efc_log_info(sli4, "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+			sli4->wwpn[0],
+			      sli4->wwpn[1],
+			      sli4->wwpn[2],
+			      sli4->wwpn[3],
+			      sli4->wwpn[4],
+			      sli4->wwpn[5],
+			      sli4->wwpn[6],
+			      sli4->wwpn[7]);
+		efc_log_info(sli4, "WWNN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+			sli4->wwnn[0],
+			      sli4->wwnn[1],
+			      sli4->wwnn[2],
+			      sli4->wwnn[3],
+			      sli4->wwnn[4],
+			      sli4->wwnn[5],
+			      sli4->wwnn[6],
+			      sli4->wwnn[7]);
+	} else {
+		efc_log_err(sli4, "bad READ_NVPARMS write\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+sli_setup(struct sli4 *sli4, void *os, struct pci_dev  *pdev,
+	  void __iomem *reg[])
+{
+	u32 intf = U32_MAX;
+	u32 pci_class_rev = 0;
+	u32 rev_id = 0;
+	u32 family = 0;
+	u32 asic_id = 0;
+	u32 i;
+	struct sli4_asic_entry_t *asic;
+
+	memset(sli4, 0, sizeof(struct sli4));
+
+	sli4->os = os;
+	sli4->pcidev = pdev;
+
+	for (i = 0; i < 6; i++)
+		sli4->reg[i] = reg[i];
+	/*
+	 * Read the SLI_INTF register to discover the register layout
+	 * and other capability information
+	 */
+	pci_read_config_dword(pdev, SLI4_INTF_REG, &intf);
+
+	if ((intf & SLI4_INTF_VALID_MASK) != (u32)SLI4_INTF_VALID_VALUE) {
+		efc_log_err(sli4, "SLI_INTF is not valid\n");
+		return -1;
+	}
+
+	/* driver only support SLI-4 */
+	if ((intf & SLI4_INTF_REV_MASK) != SLI4_INTF_REV_S4) {
+		efc_log_err(sli4, "Unsupported SLI revision (intf=%#x)\n",
+		       intf);
+		return -1;
+	}
+
+	sli4->sli_family = intf & SLI4_INTF_FAMILY_MASK;
+
+	sli4->if_type = intf & SLI4_INTF_IF_TYPE_MASK;
+	efc_log_info(sli4, "status=%#x error1=%#x error2=%#x\n",
+		sli_reg_read_status(sli4),
+			sli_reg_read_err1(sli4),
+			sli_reg_read_err2(sli4));
+
+	/*
+	 * set the ASIC type and revision
+	 */
+	pci_read_config_dword(pdev, PCI_CLASS_REVISION, &pci_class_rev);
+	rev_id = pci_class_rev & 0xff;
+	family = sli4->sli_family;
+	if (family == SLI4_FAMILY_CHECK_ASIC_TYPE) {
+		pci_read_config_dword(pdev, SLI4_ASIC_ID_REG, &asic_id);
+
+		family = asic_id & SLI4_ASIC_GEN_MASK;
+	}
+
+	for (i = 0, asic = sli4_asic_table; i < ARRAY_SIZE(sli4_asic_table);
+	     i++, asic++) {
+		if (rev_id == asic->rev_id && family == asic->family) {
+			sli4->asic_type = family;
+			sli4->asic_rev = rev_id;
+			break;
+		}
+	}
+	/* Fail if no matching asic type/rev was found */
+	if (!sli4->asic_type || !sli4->asic_rev) {
+		efc_log_err(sli4, "no matching asic family/rev found: %02x/%02x\n",
+		       family, rev_id);
+		return -1;
+	}
+
+	/*
+	 * The bootstrap mailbox is equivalent to a MQ with a single 256 byte
+	 * entry, a CQ with a single 16 byte entry, and no event queue.
+	 * Alignment must be 16 bytes as the low order address bits in the
+	 * address register are also control / status.
+	 */
+	sli4->bmbx.size = SLI4_BMBX_SIZE + sizeof(struct sli4_mcqe);
+	sli4->bmbx.virt = dma_alloc_coherent(&pdev->dev, sli4->bmbx.size,
+					     &sli4->bmbx.phys, GFP_DMA);
+	if (!sli4->bmbx.virt) {
+		memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "bootstrap mailbox allocation failed\n");
+		return -1;
+	}
+
+	if (sli4->bmbx.phys & SLI4_BMBX_MASK_LO) {
+		efc_log_err(sli4, "bad alignment for bootstrap mailbox\n");
+		return -1;
+	}
+
+	efc_log_info(sli4, "bmbx v=%p p=0x%x %08x s=%zd\n", sli4->bmbx.virt,
+		upper_32_bits(sli4->bmbx.phys),
+		      lower_32_bits(sli4->bmbx.phys), sli4->bmbx.size);
+
+	/* 4096 is arbitrary. What should this value actually be? */
+	sli4->vpd_data.size = 4096;
+	sli4->vpd_data.virt = dma_alloc_coherent(&pdev->dev,
+						 sli4->vpd_data.size,
+						 &sli4->vpd_data.phys,
+						 GFP_DMA);
+	if (!sli4->vpd_data.virt) {
+		memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
+		/* Note that failure isn't fatal in this specific case */
+		efc_log_info(sli4, "VPD buffer allocation failed\n");
+	}
+
+	if (sli_fw_init(sli4)) {
+		efc_log_err(sli4, "FW initialization failed\n");
+		return -1;
+	}
+
+	/*
+	 * Set one of fcpi(initiator), fcpt(target), fcpc(combined) to true
+	 * in addition to any other desired features
+	 */
+	sli4->features = (SLI4_REQFEAT_IAAB | SLI4_REQFEAT_NPIV |
+				 SLI4_REQFEAT_DIF | SLI4_REQFEAT_VF |
+				 SLI4_REQFEAT_FCPC | SLI4_REQFEAT_IAAR |
+				 SLI4_REQFEAT_HLM | SLI4_REQFEAT_PERFH |
+				 SLI4_REQFEAT_RXSEQ | SLI4_REQFEAT_RXRI |
+				 SLI4_REQFEAT_MRQP);
+
+	/* use performance hints if available */
+	if (sli4->perf_hint)
+		sli4->features |= SLI4_REQFEAT_PERFH;
+
+	if (sli_request_features(sli4, &sli4->features, true))
+		return -1;
+
+	if (sli_get_config(sli4))
+		return -1;
+
+	return 0;
+}
+
+int
+sli_init(struct sli4 *sli4)
+{
+	if (sli4->has_extents) {
+		efc_log_info(sli4, "XXX need to implement extent allocation\n");
+		return -1;
+	}
+
+	if (sli4->high_login_mode)
+		sli4->features |= SLI4_REQFEAT_HLM;
+	else
+		sli4->features &= (~SLI4_REQFEAT_HLM);
+	sli4->features &= (~SLI4_REQFEAT_RXSEQ);
+	sli4->features &= (~SLI4_REQFEAT_RXRI);
+
+	if (sli_request_features(sli4, &sli4->features, false))
+		return -1;
+
+	return 0;
+}
+
+int
+sli_reset(struct sli4 *sli4)
+{
+	u32	i;
+
+	if (sli_fw_init(sli4)) {
+		efc_log_crit(sli4, "FW initialization failed\n");
+		return -1;
+	}
+
+	kfree(sli4->extent[0].base);
+	sli4->extent[0].base = NULL;
+
+	for (i = 0; i < SLI_RSRC_MAX; i++) {
+		kfree(sli4->extent[i].use_map);
+		sli4->extent[i].use_map = NULL;
+		sli4->extent[i].base = NULL;
+	}
+
+	if (sli_get_config(sli4))
+		return -1;
+
+	return 0;
+}
+
+int
+sli_fw_reset(struct sli4 *sli4)
+{
+	u32 val;
+	bool ready;
+
+	/*
+	 * Firmware must be ready before issuing the reset.
+	 */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return -1;
+	}
+	/* Lancer uses PHYDEV_CONTROL */
+
+	val = SLI4_PHYDEV_CTRL_FRST;
+	writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+
+	/* wait for the FW to become ready after the reset */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "Failed to become ready after firmware reset\n");
+		return -1;
+	}
+	return 0;
+}
+
+int
+sli_teardown(struct sli4 *sli4)
+{
+	u32 i;
+
+	kfree(sli4->extent[0].base);
+	sli4->extent[0].base = NULL;
+
+	for (i = 0; i < SLI_RSRC_MAX; i++) {
+		sli4->extent[i].base = NULL;
+
+		kfree(sli4->extent[i].use_map);
+		sli4->extent[i].use_map = NULL;
+	}
+
+	if (sli_fw_term(sli4))
+		efc_log_err(sli4, "FW deinitialization failed\n");
+
+	dma_free_coherent(&sli4->pcidev->dev, sli4->vpd_data.size,
+			  sli4->vpd_data.virt, sli4->vpd_data.phys);
+	dma_free_coherent(&sli4->pcidev->dev, sli4->bmbx.size,
+			  sli4->bmbx.virt, sli4->bmbx.phys);
+
+	return 0;
+}
+
+int
+sli_callback(struct sli4 *sli4, enum sli4_callback which,
+	     void *func, void *arg)
+{
+	if (!func) {
+		efc_log_err(sli4, "bad parameter sli4=%p which=%#x func=%p\n",
+		       sli4, which, func);
+		return -1;
+	}
+
+	switch (which) {
+	case SLI4_CB_LINK:
+		sli4->link = func;
+		sli4->link_arg = arg;
+		break;
+	default:
+		efc_log_info(sli4, "unknown callback %#x\n", which);
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq,
+		    u32 num_eq, u32 shift, u32 delay_mult)
+{
+	sli_cmd_common_modify_eq_delay(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				       eq, num_eq, shift, delay_mult);
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail (MODIFY EQ DELAY)\n");
+		return -1;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status MODIFY EQ DELAY\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
+		   u32 *rid, u32 *index)
+{
+	int rc = 0;
+	u32 size;
+	u32 extent_idx;
+	u32 item_idx;
+	u32 position;
+
+	*rid = U32_MAX;
+	*index = U32_MAX;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		position =
+		find_first_zero_bit(sli4->extent[rtype].use_map,
+				    sli4->extent[rtype].map_size);
+		if (position >= sli4->extent[rtype].map_size) {
+			efc_log_err(sli4, "out of resource %d (alloc=%d)\n",
+				    rtype, sli4->extent[rtype].n_alloc);
+			rc = -1;
+			break;
+		}
+		set_bit(position, sli4->extent[rtype].use_map);
+		*index = position;
+
+		size = sli4->extent[rtype].size;
+
+		extent_idx = *index / size;
+		item_idx   = *index % size;
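+		/* with extent support absent there is a single extent per
+		 * resource (number == 1), so extent_idx is always 0 and the
+		 * resource ID is simply base[0] plus the bitmap position
+		 */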
+
+		*rid = sli4->extent[rtype].base[extent_idx] + item_idx;
+
+		sli4->extent[rtype].n_alloc++;
+		break;
+	default:
+		rc = -1;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_free(struct sli4 *sli4,
+		  enum sli4_resource rtype, u32 rid)
+{
+	int rc = -1;
+	u32 x;
+	u32 size, *base;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		/*
+		 * Figure out which extent contains the resource ID. I.e. find
+		 * the extent such that
+		 *   extent->base <= resource ID < extent->base + extent->size
+		 */
+		base = sli4->extent[rtype].base;
+		size = sli4->extent[rtype].size;
+
+		/*
+		 * In the case of FW reset, this may be cleared
+		 * but the force_free path will still attempt to
+		 * free the resource. Prevent a NULL pointer access.
+		 */
+		if (base) {
+			for (x = 0; x < sli4->extent[rtype].number;
+			     x++) {
+				if (rid >= base[x] &&
+				    (rid < (base[x] + size))) {
+					rid -= base[x];
+					clear_bit((x * size) + rid,
+						  sli4->extent[rtype].use_map);
+					rc = 0;
+					break;
+				}
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype)
+{
+	int rc = -1;
+	u32 i;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		for (i = 0; i < sli4->extent[rtype].map_size; i++)
+			clear_bit(i, sli4->extent[rtype].use_map);
+		rc = 0;
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int sli_raise_ue(struct sli4 *sli4, u8 dump)
+{
+	u32 val = 0;
+#define FDD 2
+	if (dump == FDD) {
+		val = SLI4_PORT_CTRL_FDD | SLI4_PORT_CTRL_IP;
+		writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+	} else {
+		val = SLI4_PHYDEV_CTRL_FRST;
+
+		if (dump == 1)
+			val |= SLI4_PHYDEV_CTRL_DD;
+		writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+	}
+
+	return 0;
+}
+
+int sli_dump_is_ready(struct sli4 *sli4)
+{
+	int rc = 0;
+	u32 port_val;
+	u32 bmbx_val;
+
+	/*
+	 * Ensure that the port is ready AND the mailbox is
+	 * ready before signaling that the dump is ready to go.
+	 */
+	port_val = sli_reg_read_status(sli4);
+	bmbx_val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+
+	if ((bmbx_val & SLI4_BMBX_RDY) &&
+	    (port_val & SLI4_PORT_STATUS_RDY)) {
+		if (port_val & SLI4_PORT_STATUS_DIP)
+			rc = 1;
+		else if (port_val & SLI4_PORT_STATUS_FDP)
+			rc = 2;
+	}
+
+	return rc;
+}
+
+int sli_dump_is_present(struct sli4 *sli4)
+{
+	u32 val;
+	bool ready;
+
+	/* If the chip is not ready, then there cannot be a dump */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_INIT_PORT_DELAY_US);
+	if (!ready)
+		return 0;
+
+	val = sli_reg_read_status(sli4);
+	if (val == U32_MAX) {
+		efc_log_err(sli4, "error reading SLIPORT_STATUS\n");
+		return -1;
+	} else {
+		return (val & SLI4_PORT_STATUS_DIP) ? 1 : 0;
+	}
+}
+
+int sli_reset_required(struct sli4 *sli4)
+{
+	u32 val;
+
+	val = sli_reg_read_status(sli4);
+	if (val == U32_MAX) {
+		efc_log_err(sli4, "error reading SLIPORT_STATUS\n");
+		return -1;
+	} else {
+		return (val & SLI4_PORT_STATUS_RN) ? 1 : 0;
+	}
+}
+
+int
+sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, size_t size,
+		       u16 xri,
+		       u32 xri_count, struct efc_dma *page0[],
+		       struct efc_dma *page1[], struct efc_dma *dma)
+{
+	struct sli4_rqst_post_sgl_pages *post = NULL;
+	u32 i;
+	__le32 req_len;
+
+	post = sli_config_cmd_init(sli4, buf, size,
+				   SLI_CONFIG_PYLD_LENGTH(post_sgl_pages),
+				   dma);
+	if (!post)
+		return EFC_FAIL;
+
+	/* payload size calculation */
+	/* 4 = xri_start + xri_count */
+	/* xri_count = # of XRI's registered */
+	/* sizeof(uint64_t) = physical address size */
+	/* 2 = # of physical addresses per page set */
+	req_len = cpu_to_le32(4 + (xri_count * (sizeof(uint64_t) * 2)));
+	sli_cmd_fill_hdr(&post->hdr, SLI4_OPC_POST_SGL_PAGES, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, req_len);
+	post->xri_start = cpu_to_le16(xri);
+	post->xri_count = cpu_to_le16(xri_count);
+
+	for (i = 0; i < xri_count; i++) {
+		post->page_set[i].page0_low  =
+				cpu_to_le32(lower_32_bits(page0[i]->phys));
+		post->page_set[i].page0_high =
+				cpu_to_le32(upper_32_bits(page0[i]->phys));
+	}
+
+	if (page1) {
+		for (i = 0; i < xri_count; i++) {
+			post->page_set[i].page1_low =
+				cpu_to_le32(lower_32_bits(page1[i]->phys));
+			post->page_set[i].page1_high =
+				cpu_to_le32(upper_32_bits(page1[i]->phys));
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf,
+			   size_t size, struct efc_dma *dma,
+			   u16 rpi,
+			   struct efc_dma *payload_dma)
+{
+	struct sli4_rqst_post_hdr_templates *req = NULL;
+	uintptr_t phys = 0;
+	u32 i = 0;
+	u32 page_count, payload_size;
+
+	page_count = sli_page_count(dma->size, SLI_PAGE_SIZE);
+
+	payload_size = ((sizeof(struct sli4_rqst_post_hdr_templates) +
+		(page_count * SZ_DMAADDR)) - sizeof(struct sli4_rqst_hdr));
+
+	if (page_count > 16) {
+		/*
+		 * We can't fit more than 16 descriptors into an embedded mbox
+		 * command, it has to be non-embedded
+		 */
+		payload_dma->size = payload_size;
+		payload_dma->virt = dma_alloc_coherent(&sli4->pcidev->dev,
+						       payload_dma->size,
+					     &payload_dma->phys, GFP_DMA);
+		if (!payload_dma->virt) {
+			memset(payload_dma, 0, sizeof(struct efc_dma));
+			efc_log_err(sli4, "mbox payload memory allocation fail\n");
+			return EFC_FAIL;
+		}
+		req = sli_config_cmd_init(sli4, buf, size,
+					  payload_size, payload_dma);
+	} else {
+		req = sli_config_cmd_init(sli4, buf, size,
+					  payload_size, NULL);
+	}
+
+	if (!req)
+		return EFC_FAIL;
+
+	if (rpi == U16_MAX)
+		rpi = sli4->extent[SLI_RSRC_RPI].base[0];
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_POST_HDR_TEMPLATES,
+			 SLI4_SUBSYSTEM_FC, CMD_V0,
+			 CFG_RQST_PYLD_LEN(post_hdr_templates));
+
+	req->rpi_offset = cpu_to_le16(rpi);
+	req->page_count = cpu_to_le16(page_count);
+	phys = dma->phys;
+	for (i = 0; i < page_count; i++) {
+		req->page_descriptor[i].low  = cpu_to_le32(lower_32_bits(phys));
+		req->page_descriptor[i].high = cpu_to_le32(upper_32_bits(phys));
+
+		phys += SLI_PAGE_SIZE;
+	}
+
+	return EFC_SUCCESS;
+}
+
+u32
+sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi)
+{
+	u32 bytes = 0;
+
+	/* Check if header templates needed */
+	if (sli4->hdr_template_req)
+		/* round up to a page */
+		bytes = SLI_ROUND_PAGE(n_rpi * SLI4_HDR_TEMPLATE_SIZE);
+
+	return bytes;
+}
+
+const char *
+sli_fc_get_status_string(u32 status)
+{
+	static struct {
+		u32 code;
+		const char *label;
+	} lookup[] = {
+		{SLI4_FC_WCQE_STATUS_SUCCESS,		"SUCCESS"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,	"FCP_RSP_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_REMOTE_STOP,	"REMOTE_STOP"},
+		{SLI4_FC_WCQE_STATUS_LOCAL_REJECT,	"LOCAL_REJECT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_RJT,		"NPORT_RJT"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_RJT,	"FABRIC_RJT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_BSY,		"NPORT_BSY"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_BSY,	"FABRIC_BSY"},
+		{SLI4_FC_WCQE_STATUS_LS_RJT,		"LS_RJT"},
+		{SLI4_FC_WCQE_STATUS_CMD_REJECT,	"CMD_REJECT"},
+		{SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,	"FCP_TGT_LENCHECK"},
+		{SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED, "BUF_LEN_EXCEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
+				"RQ_INSUFF_BUF_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC, "RQ_INSUFF_FRM_DESC"},
+		{SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,	"RQ_DMA_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,	"FCP_RSP_TRUNCATE"},
+		{SLI4_FC_WCQE_STATUS_DI_ERROR,		"DI_ERROR"},
+		{SLI4_FC_WCQE_STATUS_BA_RJT,		"BA_RJT"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
+				"RQ_INSUFF_XRI_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC, "INSUFF_XRI_DISC"},
+		{SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,	"RX_ERROR_DETECT"},
+		{SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,	"RX_ABORT_REQUEST"},
+		};
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(lookup); i++) {
+		if (status == lookup[i].code)
+			return lookup[i].label;
+	}
+	return "unknown";
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 4184a7e0069a..212ed7fc6b83 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -3731,7 +3731,7 @@ struct sli4 {
 	struct efc_dma		*bmbx_non_emb_pmd;
 
 	struct efc_dma		vpd_data;
-	u32				vpd_length;
+	u32			vpd_length;
 };
 
 static inline void
@@ -3743,4 +3743,554 @@ sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
 	hdr->request_length = len;
 }
 
+/*
+ * Get / set parameter functions
+ */
+
+static inline int
+sli_set_hlm(struct sli4 *sli4, u32 value)
+{
+	if (value && !(sli4->features & SLI4_REQFEAT_HLM)) {
+		efc_log_err(sli4, "HLM not supported\n");
+		return -1;
+	}
+
+	sli4->high_login_mode = value != 0 ? true : false;
+
+	return 0;
+}
+
+static inline int
+sli_set_sgl_preregister(struct sli4 *sli4, u32 value)
+{
+	if (value == 0 && sli4->sgl_pre_registration_required) {
+		efc_log_err(sli4, "SGL pre-registration required\n");
+		return -1;
+	}
+
+	sli4->sgl_pre_registered = value != 0 ? true : false;
+
+	return 0;
+}
+
+static inline u32
+sli_get_max_sgl(struct sli4 *sli4)
+{
+	if (sli4->sgl_page_sizes != 1) {
+		efc_log_err(sli4, "unsupported SGL page sizes %#x\n",
+			sli4->sgl_page_sizes);
+		return 0;
+	}
+
+	return ((sli4->max_sgl_pages * SLI_PAGE_SIZE)
+		/ sizeof(struct sli4_sge));
+}
+
+static inline enum sli4_link_medium
+sli_get_medium(struct sli4 *sli4)
+{
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_DA:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		return SLI_LINK_MEDIUM_FC;
+	default:
+		return SLI_LINK_MEDIUM_MAX;
+	}
+}
+
+static inline int
+sli_set_topology(struct sli4 *sli4, u32 value)
+{
+	int	rc = 0;
+
+	switch (value) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_DA:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		sli4->topology = value;
+		break;
+	default:
+		efc_log_err(sli4, "unsupported topology %#x\n", value);
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static inline u32
+sli_convert_mask_to_count(u32 method, u32 mask)
+{
+	u32 count = 0;
+
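+	/* the queue entry count is either taken directly from the mask
+	 * (method == 0) or encoded as 16 times the highest power of two
+	 * set in the mask
+	 */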
+	if (method) {
+		count = 1 << (31 - __builtin_clz(mask));
+		count *= 16;
+	} else {
+		count = mask;
+	}
+
+	return count;
+}
+
+static inline u32
+sli_reg_read_status(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_STATUS_REGOFF);
+}
+
+static inline int
+sli_fw_error_status(struct sli4 *sli4)
+{
+	return ((sli_reg_read_status(sli4) & SLI4_PORT_STATUS_ERR) ? 1 : 0);
+}
+
+static inline u32
+sli_reg_read_err1(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR1);
+}
+
+static inline u32
+sli_reg_read_err2(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR2);
+}
+
+static inline int
+sli_fc_rqe_length(struct sli4 *sli4, void *cqe, u32 *len_hdr,
+		  u32 *len_data)
+{
+	struct sli4_fc_async_rcqe	*rcqe = cqe;
+
+	*len_hdr = *len_data = 0;
+
+	if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+		*len_hdr  = rcqe->hdpl_byte & SLI4_RACQE_HDPL;
+		*len_data = le16_to_cpu(rcqe->data_placement_length);
+		return 0;
+	} else {
+		return -1;
+	}
+}
+
+static inline u8
+sli_fc_rqe_fcfi(struct sli4 *sli4, void *cqe)
+{
+	u8 code = ((u8 *)cqe)[SLI4_CQE_CODE_OFFSET];
+	u8 fcfi = U8_MAX;
+
+	switch (code) {
+	case SLI4_CQE_CODE_RQ_ASYNC: {
+		struct sli4_fc_async_rcqe *rcqe = cqe;
+
+		fcfi = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1: {
+		struct sli4_fc_async_rcqe_v1 *rcqev1 = cqe;
+
+		fcfi = rcqev1->fcfi_byte & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD: {
+		struct sli4_fc_optimized_write_cmd_cqe *opt_wr = cqe;
+
+		fcfi = opt_wr->flags0 & SLI4_OCQE_FCFI;
+		break;
+	}
+	}
+
+	return fcfi;
+}
+
+/****************************************************************************
+ * Function prototypes
+ */
+extern int
+sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_dump_type4(struct sli4 *sli4, void *buf,
+		   size_t size, u16 wki);
+extern int
+sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf,
+				     size_t size, u32 page_num,
+				     struct efc_dma *dma);
+extern int
+sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size,
+			u8 req_ext_counters, u8 clear_overflow_flags,
+			u8 clear_all_counters);
+extern int
+sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size,
+		    u8 clear_counters);
+extern int
+sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size,
+		  u32 speed, u8 reset_alpa);
+extern int
+sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size, u16 vfi,
+		 u16 fcfi, u16 vpi);
+extern int
+sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size, u16 vpi,
+		 u16 vfi);
+extern int
+sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size,
+		 u16 xri_base, u16 xri_count);
+extern int
+sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size,
+		    u8 num_xri);
+extern int
+sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *dma, u16 vpi);
+extern int
+sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *dma);
+extern int
+sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size,
+		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		      u32 preferred_d_id);
+struct sli4_cmd_rq_cfg {
+	__le16	rq_id;
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+extern int
+sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 index,
+		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
+extern int
+sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size,
+		     u8 mode, u16 fcf_index,
+	    u8 rq_selection_policy, u8 mrq_bit_mask,
+	    u16 num_mrqs,
+	    struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
+
+extern int
+sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size,
+		u32 nport_id, u16 rpi, u16 vpi,
+		     struct efc_dma *dma, u8 update,
+		     u8 enable_t10_pi);
+extern int
+sli_cmd_sli_config(struct sli4 *sli4, void *buf, size_t size,
+		   u32 length, struct efc_dma *dma);
+extern int
+sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size,
+		   u16 indicator);
+extern int
+sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 indicator,
+		  enum sli4_resource which, u32 fc_id);
+extern int
+sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size,
+		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
+		bool update);
+extern int
+sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id);
+extern int
+sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 indicator, u32 which);
+extern int
+sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 index, u32 which);
+extern int
+sli_cmd_common_nop(struct sli4 *sli4, void *buf, size_t size,
+		   uint64_t context);
+extern int
+sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
+					size_t size, u16 rtype);
+extern int
+sli_cmd_common_get_sli4_parameters(struct sli4 *sli4,
+				   void *buf, size_t size);
+extern int
+sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size,
+			    u16 noc, u16 eof, u32 desired_write_length,
+		u32 offset, char *object_name, struct efc_dma *dma);
+extern int
+sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
+			     char *object_name);
+extern int
+sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
+			   u32 desired_read_length, u32 offset,
+			   char *object_name, struct efc_dma *dma);
+extern int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
+			  struct efc_dma *cmd, struct efc_dma *resp);
+extern int
+sli_cmd_common_set_dump_location(struct sli4 *sli4,
+				 void *buf, size_t size, bool query,
+				 bool is_buffer_list,
+				 struct efc_dma *buffer, u8 fdb);
+extern int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
+			    u32 feature, u32 param_len,
+			    void *parameter);
+
+int sli_cqe_mq(struct sli4 *sli4, void *buf);
+int sli_cqe_async(struct sli4 *sli4, void *buf);
+
+extern int
+sli_setup(struct sli4 *sli4, void *os, struct pci_dev  *pdev,
+	  void __iomem *reg[]);
+void sli_calc_max_qentries(struct sli4 *sli4);
+int sli_init(struct sli4 *sli4);
+int sli_reset(struct sli4 *sli4);
+int sli_fw_reset(struct sli4 *sli4);
+int sli_teardown(struct sli4 *sli4);
+extern int
+sli_callback(struct sli4 *sli4, enum sli4_callback which,
+	     void *func, void *arg);
+extern int
+sli_bmbx_command(struct sli4 *sli4);
+extern int
+__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q,
+		 u32 qtype, size_t size, u32 n_entries,
+		      u32 align);
+extern int
+__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q);
+extern int
+sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq,
+		    u32 num_eq, u32 shift, u32 delay_mult);
+extern int
+sli_queue_alloc(struct sli4 *sli4, u32 qtype,
+		struct sli4_queue *q, u32 n_entries,
+		     struct sli4_queue *assoc);
+extern int
+sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[],
+		 u32 num_cqs, u32 n_entries, struct sli4_queue *eqs[]);
+extern int
+sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype);
+extern int
+sli_queue_free(struct sli4 *sli4, struct sli4_queue *q,
+	       u32 destroy_queues, u32 free_memory);
+extern int
+sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
+extern int
+sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
+
+extern int
+sli_wq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry);
+extern int
+sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry);
+extern int
+sli_rq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry);
+extern int
+sli_eq_read(struct sli4 *sli4, struct sli4_queue *q,
+	    u8 *entry);
+extern int
+sli_cq_read(struct sli4 *sli4, struct sli4_queue *q,
+	    u8 *entry);
+extern int
+sli_mq_read(struct sli4 *sli4, struct sli4_queue *q,
+	    u8 *entry);
+extern int
+sli_queue_index(struct sli4 *sli4, struct sli4_queue *q);
+extern int
+_sli_queue_poke(struct sli4 *sli4, struct sli4_queue *q,
+		u32 index, u8 *entry);
+extern int
+sli_queue_poke(struct sli4 *sli4, struct sli4_queue *q, u32 index,
+	       u8 *entry);
+extern int
+sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
+		   u32 *rid, u32 *index);
+extern int
+sli_resource_free(struct sli4 *sli4, enum sli4_resource rtype,
+		  u32 rid);
+extern int
+sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype);
+extern int
+sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id);
+extern int
+sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
+	     enum sli4_qentry *etype, u16 *q_id);
+
+int sli_raise_ue(struct sli4 *sli4, u8 dump);
+int sli_dump_is_ready(struct sli4 *sli4);
+int sli_dump_is_present(struct sli4 *sli4);
+int sli_reset_required(struct sli4 *sli4);
+int sli_fw_ready(struct sli4 *sli4);
+
+extern int
+sli_fc_process_link_state(struct sli4 *sli4, void *acqe);
+extern int
+sli_fc_process_link_attention(struct sli4 *sli4, void *acqe);
+extern int
+sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
+		 u8 *cqe, enum sli4_qentry *etype,
+		 u16 *rid);
+u32 sli_fc_response_length(struct sli4 *sli4, u8 *cqe);
+u32 sli_fc_io_length(struct sli4 *sli4, u8 *cqe);
+int sli_fc_els_did(struct sli4 *sli4, u8 *cqe,
+		   u32 *d_id);
+u32 sli_fc_ext_status(struct sli4 *sli4, u8 *cqe);
+extern int
+sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
+			  u16 *rq_id, u32 *index);
+extern int
+sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
+		  struct efc_dma *qmem, u16 cq_id);
+int sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf,
+			   size_t size, u16 xri, u32 xri_count,
+			   struct efc_dma *page0[],
+			   struct efc_dma *page1[], struct efc_dma *dma);
+extern int
+sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
+		  struct efc_dma *qmem,
+		       u16 cq_id, u16 buffer_size);
+extern int
+sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *qmem, u16 cq_id,
+			  u16 buffer_size);
+extern int
+sli_cmd_read_fcf_table(struct sli4 *sli4, void *buf, size_t size,
+		       struct efc_dma *dma, u16 index);
+extern int
+sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf,
+			   size_t size, struct efc_dma *dma,
+				     u16 rpi,
+				     struct efc_dma *payload_dma);
+extern int
+sli_cmd_rediscover_fcf(struct sli4 *sli4, void *buf, size_t size,
+		       u16 index);
+extern int
+sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q,
+		u32 n_entries, u32 buffer_size,
+		struct sli4_queue *cq, bool is_hdr);
+extern int
+sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs,
+		    struct sli4_queue *qs[], u32 base_cq_id,
+		    u32 n_entries, u32 header_buffer_size,
+		    u32 payload_buffer_size);
+u32 sli_fc_get_rpi_requirements(struct sli4 *sli4,
+				u32 n_rpi);
+extern int
+sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
+	      enum sli4_abort_type type, bool send_abts,
+	u32 ids, u32 mask, u16 tag, u16 cq_id);
+
+extern int
+sli_send_frame_wqe(struct sli4 *sli4, void *buf, size_t size,
+		   u8 sof, u8 eof, u32 *hdr,
+			struct efc_dma *payload, u32 req_len,
+			u8 timeout, u16 xri, u16 req_tag);
+
+extern int
+sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct efc_dma *rsp, u32 rsp_len,
+		u16 xri, u16 tag, u16 cq_id,
+		u16 ox_id, u16 rnodeindicator,
+		u16 sportindicator, bool hlm, bool rnodeattached,
+		u32 rnode_fcid, u32 flags, u32 s_id);
+
+extern int
+sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *sgl,
+		u8 req_type, u32 req_len, u32 max_rsp_len,
+		u8 timeout, u16 xri, u16 tag,
+		u16 cq_id, u16 rnodeindicator,
+		u16 sportindicator, bool hlm, bool rnodeattached,
+		u32 rnode_fcid, u32 sport_fcid);
+
+extern int
+sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi, bool hlm,
+		u32 rnode_fcid, u8 timeout);
+
+extern int
+sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl, u32 first_data_sge,
+		u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi, bool hlm, u32 rnode_fcid,
+		u8 dif, u8 bs, u8 timeout);
+
+extern int
+sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *sgl,
+		u32 first_data_sge, u32 xfer_len,
+		u32 first_burst, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi,
+		bool hlm, u32 rnode_fcid,
+		u8 dif, u8 bs, u8 timeout);
+
+extern int
+sli_fcp_treceive64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct efc_dma *sgl,
+		u32 first_data_sge, u32 relative_off,
+		u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u16 xid, u32 rpi, bool hlm,
+		u32 rnode_fcid, u32 flags, u8 dif,
+		u8 bs, u8 csctl, u32 app_id);
+
+extern int
+sli_fcp_cont_treceive64_wqe(struct sli4 *sli4, void *buf, size_t size,
+			    struct efc_dma *sgl, u32 first_data_sge,
+		u32 relative_off, u32 xfer_len,
+		u16 xri, u16 sec_xri, u16 tag,
+		u16 cq_id, u16 xid, u32 rpi,
+		bool hlm, u32 rnode_fcid, u32 flags,
+		u8 dif, u8 bs, u8 csctl,
+		u32 app_id);
+
+extern int
+sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		   struct efc_dma *sgl,
+		u32 rsp_len, u16 xri, u16 tag, u16 cq_id,
+		u16 xid, u32 rpi, bool hlm, u32 rnode_fcid,
+		u32 flags, u8 csctl, u8 port_owned,
+		u32 app_id);
+
+extern int
+sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl,
+		u32 first_data_sge, u32 relative_off,
+		u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u16 xid, u32 rpi,
+		bool hlm, u32 rnode_fcid, u32 flags, u8 dif,
+		u8 bs, u8 csctl, u32 app_id);
+
+extern int
+sli_gen_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *sgl, u32 req_len,
+		u32 max_rsp_len, u8 timeout, u16 xri,
+		u16 tag, u16 cq_id, bool hlm, u32 rnode_fcid,
+		u16 rnodeindicator, u8 r_ctl, u8 type,
+		u8 df_ctl);
+
+extern int
+sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct sli_bls_payload *payload, u16 xri,
+		u16 tag, u16 cq_id,
+		bool rnodeattached, bool hlm, u16 rnodeindicator,
+		u16 sportindicator, u32 rnode_fcid,
+		u32 sport_fcid, u32 s_id);
+
+extern int
+sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf, size_t size,
+			struct efc_dma *payload, u32 payload_len,
+		u8 timeout, u16 ox_id, u16 xri,
+		u16 tag, bool hlm, u32 rnode_fcid,
+		u16 rnodeindicator, u8 r_ctl,
+		u8 type, u8 df_ctl);
+
+extern int
+sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    u16 xri, u16 tag, u16 cq_id);
+extern void
+sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf,
+			       size_t size, u16 timeout);
+
+const char *sli_fc_get_status_string(u32 status);
+
 #endif /* !_SLI4_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 08/32] elx: libefc: Generic state machine framework
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (6 preceding siblings ...)
  2019-12-20 22:36 ` [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
@ 2019-12-20 22:36 ` James Smart
  2020-01-09  7:05   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
                   ` (24 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:36 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch starts the population of the libefc library.
The library will contain common tasks usable by a target or initiator
driver. The library will also contain an FC discovery state machine
interface.

This patch creates the library directory and adds definitions
for the discovery state machine interface.
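
As an illustration (not part of this patch; the my_* names are
hypothetical), a user of the framework embeds a struct efc_sm_ctx in
its object, points ctx->app back at that object, and drives it with
efc_sm_transition()/efc_sm_post_event(), roughly like:

#include "efc.h"

struct my_object {
	struct efc_sm_ctx sm;	/* embedded state machine context */
};

static void *my_state_ready(struct efc_sm_ctx *ctx,
			    enum efc_sm_event evt, void *arg)
{
	/* ctx->app points back at the embedding my_object */
	switch (evt) {
	case EFC_EVT_ENTER:	/* posted by efc_sm_transition() */
		break;
	case EFC_EVT_SHUTDOWN:
		efc_sm_disable(ctx);	/* drop any further events */
		break;
	default:
		break;
	}
	return NULL;
}

static void my_object_start(struct my_object *obj)
{
	obj->sm.app = obj;
	/* enter the initial state; EFC_EVT_ENTER is delivered to it */
	efc_sm_transition(&obj->sm, my_state_ready, NULL);
	/* later events are fed in via efc_sm_post_event() */
	efc_sm_post_event(&obj->sm, EFC_EVT_SHUTDOWN, NULL);
}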

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_sm.c | 213 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_sm.h | 140 +++++++++++++++++++++++++
 2 files changed, 353 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h

diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c
new file mode 100644
index 000000000000..90e60c0e6638
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Generic state machine framework.
+ */
+#include "efc.h"
+#include "efc_sm.h"
+
+const char *efc_sm_id[] = {
+	"common",
+	"domain",
+	"login"
+};
+
+/**
+ * efc_sm_post_event() - Post an event to a context.
+ *
+ * @ctx: State machine context
+ * @evt: Event to post
+ * @data: Event-specific data (if any)
+ */
+int
+efc_sm_post_event(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *data)
+{
+	if (ctx->current_state) {
+		ctx->current_state(ctx, evt, data);
+		return 0;
+	} else {
+		return -1;
+	}
+}
+
+void
+efc_sm_transition(struct efc_sm_ctx *ctx,
+		  void *(*state)(struct efc_sm_ctx *,
+				 enum efc_sm_event, void *), void *data)
+
+{
+	if (ctx->current_state == state) {
+		efc_sm_post_event(ctx, EFC_EVT_REENTER, data);
+	} else {
+		efc_sm_post_event(ctx, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_sm_post_event(ctx, EFC_EVT_ENTER, data);
+	}
+}
+
+void
+efc_sm_disable(struct efc_sm_ctx *ctx)
+{
+	ctx->current_state = NULL;
+}
+
+const char *efc_sm_event_name(enum efc_sm_event evt)
+{
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		return "EFC_EVT_ENTER";
+	case EFC_EVT_REENTER:
+		return "EFC_EVT_REENTER";
+	case EFC_EVT_EXIT:
+		return "EFC_EVT_EXIT";
+	case EFC_EVT_SHUTDOWN:
+		return "EFC_EVT_SHUTDOWN";
+	case EFC_EVT_RESPONSE:
+		return "EFC_EVT_RESPONSE";
+	case EFC_EVT_RESUME:
+		return "EFC_EVT_RESUME";
+	case EFC_EVT_TIMER_EXPIRED:
+		return "EFC_EVT_TIMER_EXPIRED";
+	case EFC_EVT_ERROR:
+		return "EFC_EVT_ERROR";
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		return "EFC_EVT_SRRS_ELS_REQ_OK";
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		return "EFC_EVT_SRRS_ELS_CMPL_OK";
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		return "EFC_EVT_SRRS_ELS_REQ_FAIL";
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		return "EFC_EVT_SRRS_ELS_CMPL_FAIL";
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		return "EFC_EVT_SRRS_ELS_REQ_RJT";
+	case EFC_EVT_NODE_ATTACH_OK:
+		return "EFC_EVT_NODE_ATTACH_OK";
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		return "EFC_EVT_NODE_ATTACH_FAIL";
+	case EFC_EVT_NODE_FREE_OK:
+		return "EFC_EVT_NODE_FREE_OK";
+	case EFC_EVT_ELS_REQ_TIMEOUT:
+		return "EFC_EVT_ELS_REQ_TIMEOUT";
+	case EFC_EVT_ELS_REQ_ABORTED:
+		return "EFC_EVT_ELS_REQ_ABORTED";
+	case EFC_EVT_ABORT_ELS:
+		return "EFC_EVT_ABORT_ELS";
+	case EFC_EVT_ELS_ABORT_CMPL:
+		return "EFC_EVT_ELS_ABORT_CMPL";
+
+	case EFC_EVT_DOMAIN_FOUND:
+		return "EFC_EVT_DOMAIN_FOUND";
+	case EFC_EVT_DOMAIN_ALLOC_OK:
+		return "EFC_EVT_DOMAIN_ALLOC_OK";
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+		return "EFC_EVT_DOMAIN_ALLOC_FAIL";
+	case EFC_EVT_DOMAIN_REQ_ATTACH:
+		return "EFC_EVT_DOMAIN_REQ_ATTACH";
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		return "EFC_EVT_DOMAIN_ATTACH_OK";
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		return "EFC_EVT_DOMAIN_ATTACH_FAIL";
+	case EFC_EVT_DOMAIN_LOST:
+		return "EFC_EVT_DOMAIN_LOST";
+	case EFC_EVT_DOMAIN_FREE_OK:
+		return "EFC_EVT_DOMAIN_FREE_OK";
+	case EFC_EVT_DOMAIN_FREE_FAIL:
+		return "EFC_EVT_DOMAIN_FREE_FAIL";
+	case EFC_EVT_HW_DOMAIN_REQ_ATTACH:
+		return "EFC_EVT_HW_DOMAIN_REQ_ATTACH";
+	case EFC_EVT_HW_DOMAIN_REQ_FREE:
+		return "EFC_EVT_HW_DOMAIN_REQ_FREE";
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		return "EFC_EVT_ALL_CHILD_NODES_FREE";
+
+	case EFC_EVT_SPORT_ALLOC_OK:
+		return "EFC_EVT_SPORT_ALLOC_OK";
+	case EFC_EVT_SPORT_ALLOC_FAIL:
+		return "EFC_EVT_SPORT_ALLOC_FAIL";
+	case EFC_EVT_SPORT_ATTACH_OK:
+		return "EFC_EVT_SPORT_ATTACH_OK";
+	case EFC_EVT_SPORT_ATTACH_FAIL:
+		return "EFC_EVT_SPORT_ATTACH_FAIL";
+	case EFC_EVT_SPORT_FREE_OK:
+		return "EFC_EVT_SPORT_FREE_OK";
+	case EFC_EVT_SPORT_FREE_FAIL:
+		return "EFC_EVT_SPORT_FREE_FAIL";
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
+		return "EFC_EVT_SPORT_TOPOLOGY_NOTIFY";
+	case EFC_EVT_HW_PORT_ALLOC_OK:
+		return "EFC_EVT_HW_PORT_ALLOC_OK";
+	case EFC_EVT_HW_PORT_ALLOC_FAIL:
+		return "EFC_EVT_HW_PORT_ALLOC_FAIL";
+	case EFC_EVT_HW_PORT_ATTACH_OK:
+		return "EFC_EVT_HW_PORT_ATTACH_OK";
+	case EFC_EVT_HW_PORT_REQ_ATTACH:
+		return "EFC_EVT_HW_PORT_REQ_ATTACH";
+	case EFC_EVT_HW_PORT_REQ_FREE:
+		return "EFC_EVT_HW_PORT_REQ_FREE";
+	case EFC_EVT_HW_PORT_FREE_OK:
+		return "EFC_EVT_HW_PORT_FREE_OK";
+
+	case EFC_EVT_NODE_FREE_FAIL:
+		return "EFC_EVT_NODE_FREE_FAIL";
+
+	case EFC_EVT_ABTS_RCVD:
+		return "EFC_EVT_ABTS_RCVD";
+
+	case EFC_EVT_NODE_MISSING:
+		return "EFC_EVT_NODE_MISSING";
+	case EFC_EVT_NODE_REFOUND:
+		return "EFC_EVT_NODE_REFOUND";
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		return "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO";
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		return "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO";
+
+	case EFC_EVT_ELS_FRAME:
+		return "EFC_EVT_ELS_FRAME";
+	case EFC_EVT_PLOGI_RCVD:
+		return "EFC_EVT_PLOGI_RCVD";
+	case EFC_EVT_FLOGI_RCVD:
+		return "EFC_EVT_FLOGI_RCVD";
+	case EFC_EVT_LOGO_RCVD:
+		return "EFC_EVT_LOGO_RCVD";
+	case EFC_EVT_PRLI_RCVD:
+		return "EFC_EVT_PRLI_RCVD";
+	case EFC_EVT_PRLO_RCVD:
+		return "EFC_EVT_PRLO_RCVD";
+	case EFC_EVT_PDISC_RCVD:
+		return "EFC_EVT_PDISC_RCVD";
+	case EFC_EVT_FDISC_RCVD:
+		return "EFC_EVT_FDISC_RCVD";
+	case EFC_EVT_ADISC_RCVD:
+		return "EFC_EVT_ADISC_RCVD";
+	case EFC_EVT_RSCN_RCVD:
+		return "EFC_EVT_RSCN_RCVD";
+	case EFC_EVT_SCR_RCVD:
+		return "EFC_EVT_SCR_RCVD";
+	case EFC_EVT_ELS_RCVD:
+		return "EFC_EVT_ELS_RCVD";
+	case EFC_EVT_LAST:
+		return "EFC_EVT_LAST";
+	case EFC_EVT_FCP_CMD_RCVD:
+		return "EFC_EVT_FCP_CMD_RCVD";
+
+	case EFC_EVT_GIDPT_DELAY_EXPIRED:
+		return "EFC_EVT_GIDPT_DELAY_EXPIRED";
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		return "EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY";
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+		return "EFC_EVT_NODE_DEL_INI_COMPLETE";
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		return "EFC_EVT_NODE_DEL_TGT_COMPLETE";
+
+	default:
+		break;
+	}
+	return "unknown";
+}
diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h
new file mode 100644
index 000000000000..c76352d1d527
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/**
+ * Generic state machine framework declarations.
+ */
+
+#ifndef _EFC_SM_H
+#define _EFC_SM_H
+
+/**
+ * State Machine (SM) IDs.
+ */
+enum {
+	EFC_SM_COMMON = 0,
+	EFC_SM_DOMAIN,
+	EFC_SM_PORT,
+	EFC_SM_LOGIN,
+	EFC_SM_LAST
+};
+
+#define EFC_SM_EVENT_SHIFT		24
+#define EFC_SM_EVENT_START(id)		((id) << EFC_SM_EVENT_SHIFT)
+
+extern const char *efc_sm_id[];
+
+struct efc_sm_ctx;
+
+/* State Machine events */
+enum efc_sm_event {
+	/* Common Events */
+	EFC_EVT_ENTER = EFC_SM_EVENT_START(EFC_SM_COMMON),
+	EFC_EVT_REENTER,
+	EFC_EVT_EXIT,
+	EFC_EVT_SHUTDOWN,
+	EFC_EVT_ALL_CHILD_NODES_FREE,
+	EFC_EVT_RESUME,
+	EFC_EVT_TIMER_EXPIRED,
+
+	/* Domain Events */
+	EFC_EVT_RESPONSE = EFC_SM_EVENT_START(EFC_SM_DOMAIN),
+	EFC_EVT_ERROR,
+
+	EFC_EVT_DOMAIN_FOUND,
+	EFC_EVT_DOMAIN_ALLOC_OK,
+	EFC_EVT_DOMAIN_ALLOC_FAIL,
+	EFC_EVT_DOMAIN_REQ_ATTACH,
+	EFC_EVT_DOMAIN_ATTACH_OK,
+	EFC_EVT_DOMAIN_ATTACH_FAIL,
+	EFC_EVT_DOMAIN_LOST,
+	EFC_EVT_DOMAIN_FREE_OK,
+	EFC_EVT_DOMAIN_FREE_FAIL,
+	EFC_EVT_HW_DOMAIN_REQ_ATTACH,
+	EFC_EVT_HW_DOMAIN_REQ_FREE,
+
+	/* Sport Events */
+	EFC_EVT_SPORT_ALLOC_OK = EFC_SM_EVENT_START(EFC_SM_PORT),
+	EFC_EVT_SPORT_ALLOC_FAIL,
+	EFC_EVT_SPORT_ATTACH_OK,
+	EFC_EVT_SPORT_ATTACH_FAIL,
+	EFC_EVT_SPORT_FREE_OK,
+	EFC_EVT_SPORT_FREE_FAIL,
+	EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
+	EFC_EVT_HW_PORT_ALLOC_OK,
+	EFC_EVT_HW_PORT_ALLOC_FAIL,
+	EFC_EVT_HW_PORT_ATTACH_OK,
+	EFC_EVT_HW_PORT_REQ_ATTACH,
+	EFC_EVT_HW_PORT_REQ_FREE,
+	EFC_EVT_HW_PORT_FREE_OK,
+
+	/* Login Events */
+	EFC_EVT_SRRS_ELS_REQ_OK = EFC_SM_EVENT_START(EFC_SM_LOGIN),
+	EFC_EVT_SRRS_ELS_CMPL_OK,
+	EFC_EVT_SRRS_ELS_REQ_FAIL,
+	EFC_EVT_SRRS_ELS_CMPL_FAIL,
+	EFC_EVT_SRRS_ELS_REQ_RJT,
+	EFC_EVT_NODE_ATTACH_OK,
+	EFC_EVT_NODE_ATTACH_FAIL,
+	EFC_EVT_NODE_FREE_OK,
+	EFC_EVT_NODE_FREE_FAIL,
+	EFC_EVT_ELS_FRAME,
+	EFC_EVT_ELS_REQ_TIMEOUT,
+	EFC_EVT_ELS_REQ_ABORTED,
+	/* request an ELS IO be aborted */
+	EFC_EVT_ABORT_ELS,
+	/* ELS abort process complete */
+	EFC_EVT_ELS_ABORT_CMPL,
+
+	EFC_EVT_ABTS_RCVD,
+
+	/* node is not in the GID_PT payload */
+	EFC_EVT_NODE_MISSING,
+	/* node is allocated and in the GID_PT payload */
+	EFC_EVT_NODE_REFOUND,
+	/* node shutting down due to PLOGI recvd (implicit logo) */
+	EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+	/* node shutting down due to LOGO recvd/sent (explicit logo) */
+	EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+
+	EFC_EVT_PLOGI_RCVD,
+	EFC_EVT_FLOGI_RCVD,
+	EFC_EVT_LOGO_RCVD,
+	EFC_EVT_PRLI_RCVD,
+	EFC_EVT_PRLO_RCVD,
+	EFC_EVT_PDISC_RCVD,
+	EFC_EVT_FDISC_RCVD,
+	EFC_EVT_ADISC_RCVD,
+	EFC_EVT_RSCN_RCVD,
+	EFC_EVT_SCR_RCVD,
+	EFC_EVT_ELS_RCVD,
+
+	EFC_EVT_FCP_CMD_RCVD,
+
+	EFC_EVT_GIDPT_DELAY_EXPIRED,
+
+	/* SCSI Target Server events */
+	EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY,
+	EFC_EVT_NODE_DEL_INI_COMPLETE,
+	EFC_EVT_NODE_DEL_TGT_COMPLETE,
+
+	/* Must be last */
+	EFC_EVT_LAST
+};
+
+int
+efc_sm_post_event(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *data);
+void
+efc_sm_transition(struct efc_sm_ctx *ctx,
+		  void *(*state)(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg),
+		  void *data);
+void efc_sm_disable(struct efc_sm_ctx *ctx);
+const char *efc_sm_event_name(enum efc_sm_event evt);
+
+#endif /* ! _EFC_SM_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 09/32] elx: libefc: Emulex FC discovery library APIs and definitions
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (7 preceding siblings ...)
  2019-12-20 22:36 ` [PATCH v2 08/32] elx: libefc: Generic state machine framework James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  7:16   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 10/32] elx: libefc: FC Domain state machine interfaces James Smart
                   ` (23 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- SLI/Local FC port objects
- efc_domain: FC domain (aka fabric) objects
- efc_node: FC node (aka remote port) objects
- A sparse vector interface that manages lookup tables
  for the objects.
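
As an illustration of the sparse vector interface (not part of this
patch; the example_ function name is hypothetical), mapping a 24-bit
FC_ID to an object pointer looks roughly like:

#include "efc.h"

static void example_fcid_lookup(struct efc *efc, void *node, u32 fc_id)
{
	struct sparse_vector *lookup;

	lookup = efc_spv_new(efc);	/* rows are allocated on demand */
	if (!lookup)
		return;

	/* may allocate up to three SPV_ROWLEN-entry rows */
	efc_spv_set(lookup, fc_id, node);

	/* constant-time lookup: three indexed array references */
	if (efc_spv_get(lookup, fc_id) == node)
		efc_log_debug(efc, "fc_id x%06x found\n", fc_id);

	efc_spv_del(lookup);		/* recursively frees the rows */
}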

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc.h     |  99 ++++++
 drivers/scsi/elx/libefc/efc_lib.c | 131 ++++++++
 drivers/scsi/elx/libefc/efclib.h  | 637 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 867 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
 create mode 100644 drivers/scsi/elx/libefc/efclib.h

diff --git a/drivers/scsi/elx/libefc/efc.h b/drivers/scsi/elx/libefc/efc.h
new file mode 100644
index 000000000000..ef7c83e44167
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_H__
+#define __EFC_H__
+
+#include "../include/efc_common.h"
+#include "efclib.h"
+#include "efc_sm.h"
+#include "efc_domain.h"
+#include "efc_sport.h"
+#include "efc_node.h"
+#include "efc_fabric.h"
+#include "efc_device.h"
+
+#define EFC_MAX_REMOTE_NODES			2048
+
+enum efc_hw_rtn {
+	EFC_HW_RTN_SUCCESS = 0,
+	EFC_HW_RTN_SUCCESS_SYNC = 1,
+	EFC_HW_RTN_ERROR = -1,
+	EFC_HW_RTN_NO_RESOURCES = -2,
+	EFC_HW_RTN_NO_MEMORY = -3,
+	EFC_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFC_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFC_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFC_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFC_HW_RTN_IS_ERROR(e) ((e) < 0)
+
+enum efc_scsi_del_initiator_reason {
+	EFC_SCSI_INITIATOR_DELETED,
+	EFC_SCSI_INITIATOR_MISSING,
+};
+
+enum efc_scsi_del_target_reason {
+	EFC_SCSI_TARGET_DELETED,
+	EFC_SCSI_TARGET_MISSING,
+};
+
+#define EFC_SCSI_CALL_COMPLETE			0
+#define EFC_SCSI_CALL_ASYNC			1
+
+#define EFC_FC_ELS_DEFAULT_RETRIES		3
+
+/* Timeouts */
+#define EFC_FC_ELS_SEND_DEFAULT_TIMEOUT		0
+#define EFC_FC_FLOGI_TIMEOUT_SEC		5
+#define EFC_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC	30000000
+
+#define domain_sm_trace(domain) \
+	efc_log_debug(domain->efc, "[domain:%s] %-20s %-20s\n", \
+		      domain->display_name, __func__, efc_sm_event_name(evt)) \
+
+#define domain_trace(domain, fmt, ...) \
+	efc_log_debug(domain->efc, \
+		      "[%s]" fmt, domain->display_name, ##__VA_ARGS__) \
+
+#define node_sm_trace() \
+	efc_log_debug(node->efc, \
+		"[%s] %-20s\n", node->display_name, efc_sm_event_name(evt)) \
+
+#define sport_sm_trace(sport) \
+	efc_log_debug(sport->efc, \
+		"[%s] %-20s\n", sport->display_name, efc_sm_event_name(evt)) \
+
+/**
+ * Sparse Vector API
+ *
+ * This is a trimmed down sparse vector implementation tuned to the problem of
+ * 24-bit FC_IDs. The 24-bit index value is broken down into three 8-bit
+ * values, which are used to index up to three 256-element arrays. Arrays are
+ * allocated only when needed.
+ * The lookup completes in constant time (3 indexed array references).
+ * A typical use case would be that the fabric/directory FC_IDs cause two
+ * rows to be allocated, and the fabric-assigned remote nodes cause two more
+ * rows to be allocated, with the root row always allocated. This gives five
+ * rows of 256 x sizeof(void *), resulting in about 10KB.
+ */
+
+struct sparse_vector {
+	struct efc *efc;
+	u32 max_idx;
+	void **array;
+};
+
+#define SPV_ROWLEN	256
+#define SPV_DIM		3
+
+void efc_spv_del(struct sparse_vector *spv);
+struct sparse_vector *efc_spv_new(struct efc *efc);
+void efc_spv_set(struct sparse_vector *sv, u32 idx, void *value);
+void *efc_spv_get(struct sparse_vector *sv, u32 idx);
+
+#endif /* __EFC_H__ */
diff --git a/drivers/scsi/elx/libefc/efc_lib.c b/drivers/scsi/elx/libefc/efc_lib.c
new file mode 100644
index 000000000000..9ab8538d6e1f
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_lib.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include "efc.h"
+
+int efcport_init(struct efc *efc)
+{
+	u32 rc = 0;
+
+	spin_lock_init(&efc->lock);
+	INIT_LIST_HEAD(&efc->vport_list);
+
+	/* Create Node pool */
+	rc = efc_node_create_pool(efc, EFC_MAX_REMOTE_NODES);
+	if (rc)
+		efc_log_err(efc, "Can't allocate node pool\n");
+
+	return rc;
+}
+
+void efcport_destroy(struct efc *efc)
+{
+	efc_node_free_pool(efc);
+}
+
+static void **efc_spv_new_row(u32 rowcount)
+{
+	return kcalloc(rowcount, sizeof(void *), GFP_ATOMIC);
+}
+
+/* Recursively delete the rows in this sparse vector */
+static void
+_efc_spv_del(struct efc *efc, void **a, u32 n, u32 depth)
+{
+	if (a) {
+		if (depth) {
+			u32 i;
+
+			for (i = 0; i < n; i++)
+				_efc_spv_del(efc, a[i], n, depth - 1);
+
+			kfree(a);
+		}
+	}
+}
+
+void
+efc_spv_del(struct sparse_vector *spv)
+{
+	if (spv) {
+		_efc_spv_del(spv->efc, spv->array, SPV_ROWLEN, SPV_DIM);
+		kfree(spv);
+	}
+}
+
+struct sparse_vector *
+efc_spv_new(struct efc *efc)
+{
+	struct sparse_vector *spv;
+	u32 i;
+
+	spv = kzalloc(sizeof(*spv), GFP_ATOMIC);
+	if (!spv)
+		return NULL;
+
+	spv->efc = efc;
+	spv->max_idx = 1;
+	for (i = 0; i < SPV_DIM; i++)
+		spv->max_idx *= SPV_ROWLEN;
+
+	return spv;
+}
+
+static void *
+efc_spv_new_cell(struct sparse_vector *sv, u32 idx, bool alloc_new_rows)
+{
+	void **p;
+	u32 a = (idx >> 16) & 0xff;
+	u32 b = (idx >>  8) & 0xff;
+	u32 c = (idx >>  0) & 0xff;
+
+	if (idx >= sv->max_idx)
+		return NULL;
+
+	if (!sv->array) {
+		sv->array = (alloc_new_rows ?
+			     efc_spv_new_row(SPV_ROWLEN) : NULL);
+		if (!sv->array)
+			return NULL;
+	}
+	p = sv->array;
+	if (!p[a]) {
+		p[a] = (alloc_new_rows ? efc_spv_new_row(SPV_ROWLEN) : NULL);
+		if (!p[a])
+			return NULL;
+	}
+	p = p[a];
+	if (!p[b]) {
+		p[b] = (alloc_new_rows ? efc_spv_new_row(SPV_ROWLEN) : NULL);
+		if (!p[b])
+			return NULL;
+	}
+	p = p[b];
+
+	return &p[c];
+}
+
+void
+efc_spv_set(struct sparse_vector *sv, u32 idx, void *value)
+{
+	void **ref = efc_spv_new_cell(sv, idx, true);
+
+	if (ref)
+		*ref = value;
+}
+
+void *
+efc_spv_get(struct sparse_vector *sv, u32 idx)
+{
+	void **ref = efc_spv_new_cell(sv, idx, false);
+
+	if (ref)
+		return *ref;
+
+	return NULL;
+}
diff --git a/drivers/scsi/elx/libefc/efclib.h b/drivers/scsi/elx/libefc/efclib.h
new file mode 100644
index 000000000000..56f6e4afca65
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efclib.h
@@ -0,0 +1,637 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCLIB_H__
+#define __EFCLIB_H__
+
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+#include "scsi/fc/fc_gs.h"
+#include "scsi/fc_frame.h"
+#include "../include/efc_common.h"
+
+#define EFC_SERVICE_PARMS_LENGTH	0x74
+#define EFC_DISPLAY_NAME_LENGTH		32
+#define EFC_DISPLAY_BUS_INFO_LENGTH	16
+
+#define EFC_WWN_LENGTH			32
+
+/* Local port topology */
+enum efc_sport_topology {
+	EFC_SPORT_TOPOLOGY_UNKNOWN = 0,
+	EFC_SPORT_TOPOLOGY_FABRIC,
+	EFC_SPORT_TOPOLOGY_P2P,
+	EFC_SPORT_TOPOLOGY_LOOP,
+};
+
+#define enable_target_rscn(efc)		1
+
+enum efc_node_shutd_rsn {
+	EFC_NODE_SHUTDOWN_DEFAULT = 0,
+	EFC_NODE_SHUTDOWN_EXPLICIT_LOGO,
+	EFC_NODE_SHUTDOWN_IMPLICIT_LOGO,
+};
+
+enum efc_node_send_ls_acc {
+	EFC_NODE_SEND_LS_ACC_NONE = 0,
+	EFC_NODE_SEND_LS_ACC_PLOGI,
+	EFC_NODE_SEND_LS_ACC_PRLI,
+};
+
+#define EFC_LINK_STATUS_UP		0
+#define EFC_LINK_STATUS_DOWN		1
+
+/* State machine context header  */
+struct efc_sm_ctx {
+	void *(*current_state)(struct efc_sm_ctx *ctx,
+			       u32 evt, void *arg);
+
+	const char	*description;
+	void		*app;
+};
+
+/* Description of discovered Fabric Domain */
+struct efc_domain_record {
+	u32		index;
+	u32		priority;
+	u8		address[6];
+	u8		wwn[8];
+	union {
+		u8	vlan[512];
+		u8	loop[128];
+	} map;
+	u32		speed;
+	u32		fc_id;
+	bool		is_loop;
+	bool		is_nport;
+};
+
+/* Fabric/Domain events */
+enum efc_hw_domain_event {
+	EFC_HW_DOMAIN_ALLOC_OK,
+	EFC_HW_DOMAIN_ALLOC_FAIL,
+	EFC_HW_DOMAIN_ATTACH_OK,
+	EFC_HW_DOMAIN_ATTACH_FAIL,
+	EFC_HW_DOMAIN_FREE_OK,
+	EFC_HW_DOMAIN_FREE_FAIL,
+	EFC_HW_DOMAIN_LOST,
+	EFC_HW_DOMAIN_FOUND,
+	EFC_HW_DOMAIN_CHANGED,
+};
+
+enum efc_hw_port_event {
+	EFC_HW_PORT_ALLOC_OK,
+	EFC_HW_PORT_ALLOC_FAIL,
+	EFC_HW_PORT_ATTACH_OK,
+	EFC_HW_PORT_ATTACH_FAIL,
+	EFC_HW_PORT_FREE_OK,
+	EFC_HW_PORT_FREE_FAIL,
+};
+
+enum efc_hw_remote_node_event {
+	EFC_HW_NODE_ATTACH_OK,
+	EFC_HW_NODE_ATTACH_FAIL,
+	EFC_HW_NODE_FREE_OK,
+	EFC_HW_NODE_FREE_FAIL,
+	EFC_HW_NODE_FREE_ALL_OK,
+	EFC_HW_NODE_FREE_ALL_FAIL,
+};
+
+enum efc_hw_node_els_event {
+	EFC_HW_SRRS_ELS_REQ_OK,
+	EFC_HW_SRRS_ELS_CMPL_OK,
+	EFC_HW_SRRS_ELS_REQ_FAIL,
+	EFC_HW_SRRS_ELS_CMPL_FAIL,
+	EFC_HW_SRRS_ELS_REQ_RJT,
+	EFC_HW_ELS_REQ_ABORTED,
+};
+
+struct efc_sli_port {
+	struct list_head	list_entry;
+	struct efc		*efc;
+	u32			tgt_id;
+	u32			index;
+	u32			instance_index;
+	char			display_name[EFC_DISPLAY_NAME_LENGTH];
+	struct efc_domain	*domain;
+	bool			is_vport;
+	u64			wwpn;
+	u64			wwnn;
+	struct list_head	node_list;
+	void			*ini_sport;
+	void			*tgt_sport;
+	void			*tgt_data;
+	void			*ini_data;
+
+	/* Members private to HW/SLI */
+	void			*hw;
+	u32			indicator;
+	u32			fc_id;
+	struct efc_dma		dma;
+
+	u8			wwnn_str[EFC_WWN_LENGTH];
+	__be64			sli_wwpn;
+	__be64			sli_wwnn;
+	bool			free_req_pending;
+	bool			attached;
+
+	struct efc_sm_ctx	sm;
+	struct sparse_vector	*lookup;
+	bool			enable_ini;
+	bool			enable_tgt;
+	bool			enable_rscn;
+	bool			shutting_down;
+	bool			p2p_winner;
+	enum efc_sport_topology topology;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	u32			p2p_remote_port_id;
+	u32			p2p_port_id;
+};
+
+/**
+ * Fibre Channel domain object
+ *
+ * This object is a container for the various SLI components needed
+ * to connect to the domain of a FC or FCoE switch
+ * @efc:		pointer back to efc
+ * @instance_index:	unique instance index value
+ * @display_name:	Node display name
+ * @sport_list:		linked list of SLI ports
+ * @ini_domain:		initiator backend private domain data
+ * @tgt_domain:		target backend private domain data
+ * @hw:			pointer to HW
+ * @sm:			state machine context
+ * @fcf:		FC Forwarder table index
+ * @fcf_indicator:	FCFI
+ * @indicator:		VFI
+ * @dma:		memory for Service Parameters
+ * @fcf_wwn:		WWN for FCF/switch
+ * @drvsm:		driver domain sm context
+ * @drvsm_lock:		driver domain sm lock
+ * @attached:		set true after attach completes
+ * @is_fc:		is FC
+ * @is_loop:		is loop topology
+ * @is_nlport:		is public loop
+ * @domain_found_pending: A domain found is pending, drec is updated
+ * @req_domain_free:	True if domain object should be free'd
+ * @req_accept_frames:	set in domain state machine to enable frames
+ * @domain_notify_pend:	Set in domain SM to avoid duplicate node event post
+ * @pending_drec:	Pending drec if a domain found is pending
+ * @service_params:	any sports service parameters
+ * @flogi_service_params: Fabric/P2P service parameters from FLOGI
+ * @lookup:		d_id to node lookup object
+ * @sport:		Pointer to first (physical) SLI port
+ */
+struct efc_domain {
+	struct efc		*efc;
+	char			display_name[EFC_DISPLAY_NAME_LENGTH];
+	struct list_head	sport_list;
+	void			*ini_domain;
+	void			*tgt_domain;
+
+	/* Declarations private to HW/SLI */
+	void			*hw;
+	u32			fcf;
+	u32			fcf_indicator;
+	u32			indicator;
+	struct efc_dma		dma;
+
+	/* Declarations private to FC transport */
+	u64			fcf_wwn;
+	struct efc_sm_ctx	drvsm;
+	bool			attached;
+	bool			is_fc;
+	bool			is_loop;
+	bool			is_nlport;
+	bool			domain_found_pending;
+	bool			req_domain_free;
+	bool			req_accept_frames;
+	bool			domain_notify_pend;
+
+	struct efc_domain_record pending_drec;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	u8			flogi_service_params[EFC_SERVICE_PARMS_LENGTH];
+
+	struct sparse_vector	*lookup;
+
+	struct efc_sli_port	*sport;
+	u32			sport_instance_count;
+};
+
+/**
+ * Remote Node object
+ *
+ * This object represents a connection between the SLI port and another
+ * Nx_Port on the fabric. Note this can be either a well known port such
+ * as a F_Port (i.e. ff:ff:fe) or another N_Port.
+ * @indicator:		RPI
+ * @fc_id:		FC address
+ * @attached:		true if attached
+ * @node_group:		true if in node group
+ * @free_group:		true if the node group should be free'd
+ * @sport:		associated SLI port
+ * @node:		associated node
+ */
+struct efc_remote_node {
+	u32			indicator;
+	u32			index;
+	u32			fc_id;
+
+	bool			attached;
+	bool			node_group;
+	bool			free_group;
+
+	struct efc_sli_port	*sport;
+	void			*node;
+};
+
+/**
+ * FC Node object
+ * @efc:		pointer back to efc structure
+ * @instance_index:	unique instance index value
+ * @display_name:	Node display name
+ * @hold_frames:	hold incoming frames if true
+ * @lock:		node wide lock
+ * @active_ios:		active I/O's for this node
+ * @max_wr_xfer_size:	Max write IO size per phase for the transport
+ * @ini_node:		backend initiator private node data
+ * @tgt_node:		backend target private node data
+ * @rnode:		Remote node
+ * @sm:			state machine context
+ * @evtdepth:		current event posting nesting depth
+ * @req_free:		this node is to be free'd
+ * @attached:		node is attached (REGLOGIN complete)
+ * @fcp_enabled:	node is enabled to handle FCP
+ * @rscn_pending:	for name server node RSCN is pending
+ * @send_plogi:		send PLOGI accept, upon completion of node attach
+ * @send_plogi_acc:	TRUE if io_alloc() is enabled.
+ * @send_ls_acc:	type of LS acc to send
+ * @ls_acc_io:		SCSI IO for LS acc
+ * @ls_acc_oxid:	OX_ID for pending accept
+ * @ls_acc_did:		D_ID for pending accept
+ * @shutdown_reason:	reason for node shutdown
+ * @sparm_dma_buf:	service parameters buffer
+ * @service_params:	plogi/acc frame from remote device
+ * @pend_frames_lock:	lock for inbound pending frames list
+ * @pend_frames:	inbound pending frames list
+ * @pend_frames_processed: count of frames processed in hold frames interval
+ * @ox_id_in_use:	used to verify one-at-a-time use of ox_id
+ * @els_retries_remaining:for ELS, number of retries remaining
+ * @els_req_cnt:	number of outstanding ELS requests
+ * @els_cmpl_cnt:	number of outstanding ELS completions
+ * @abort_cnt:		Abort counter for debugging purposes
+ * @current_state_name:	current node state
+ * @prev_state_name:	previous node state
+ * @current_evt:	current event
+ * @prev_evt:		previous event
+ * @targ:		node is target capable
+ * @init:		node is init capable
+ * @refound:		Handle node refound case when node is being deleted
+ * @els_io_pend_list:	list of pending (not yet processed) ELS IOs
+ * @els_io_active_list:	list of active (processed) ELS IOs
+ * @nodedb_state:	Node debugging, saved state
+ * @gidpt_delay_timer:	GIDPT delay timer
+ * @time_last_gidpt_msec: Start time of last target RSCN GIDPT
+ * @wwnn:		remote port WWNN
+ * @wwpn:		remote port WWPN
+ * @chained_io_count:	Statistics : count of IOs with chained SGL's
+ */
+struct efc_node {
+	struct list_head	list_entry;
+	struct efc		*efc;
+	u32			instance_index;
+	char			display_name[EFC_DISPLAY_NAME_LENGTH];
+	struct efc_sli_port	*sport;
+	bool			hold_frames;
+	spinlock_t		active_ios_lock;
+	struct list_head	active_ios;
+	u64			max_wr_xfer_size;
+	void			*ini_node;
+	void			*tgt_node;
+
+	struct efc_remote_node	rnode;
+	/* Declarations private to FC transport */
+	struct efc_sm_ctx	sm;
+	u32			evtdepth;
+
+	bool			req_free;
+	bool			attached;
+	bool			fcp_enabled;
+	bool			rscn_pending;
+	bool			send_plogi;
+	bool			send_plogi_acc;
+	bool			io_alloc_enabled;
+
+	enum efc_node_send_ls_acc send_ls_acc;
+	void			*ls_acc_io;
+	u32			ls_acc_oxid;
+	u32			ls_acc_did;
+	enum efc_node_shutd_rsn	shutdown_reason;
+	struct efc_dma		sparm_dma_buf;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	spinlock_t		pend_frames_lock;
+	struct list_head	pend_frames;
+	u32			pend_frames_processed;
+	u32			ox_id_in_use;
+	u32			els_retries_remaining;
+	u32			els_req_cnt;
+	u32			els_cmpl_cnt;
+	u32			abort_cnt;
+
+	char			current_state_name[EFC_DISPLAY_NAME_LENGTH];
+	char			prev_state_name[EFC_DISPLAY_NAME_LENGTH];
+	int			current_evt;
+	int			prev_evt;
+	bool			targ;
+	bool			init;
+	bool			refound;
+	struct list_head	els_io_pend_list;
+	struct list_head	els_io_active_list;
+
+	void *(*nodedb_state)(struct efc_sm_ctx *ctx,
+			      u32 evt, void *arg);
+	struct timer_list	gidpt_delay_timer;
+	time_t			time_last_gidpt_msec;
+
+	char			wwnn[EFC_WWN_LENGTH];
+	char			wwpn[EFC_WWN_LENGTH];
+
+	u32			chained_io_count;
+};
+
+/**
+ * NPIV port
+ *
+ * Collection of the information required to restore a virtual port across
+ * link events
+ * @wwnn:		node name
+ * @wwpn:		port name
+ * @fc_id:		port id
+ * @tgt_data:		target backend pointer
+ * @ini_data:		initiator backend pointer
+ * @sport:		Used to match record after attaching for update
+ *
+ */
+
+struct efc_vport_spec {
+	struct list_head	list_entry;
+	u64			wwnn;
+	u64			wwpn;
+	u32			fc_id;
+	bool			enable_tgt;
+	bool			enable_ini;
+	void			*tgt_data;
+	void			*ini_data;
+	struct efc_sli_port	*sport;
+};
+
+#define node_printf(node, fmt, args...) \
+	pr_info("[%s] " fmt, node->display_name, ##args)
+
+/* Node SM IO Context Callback structure */
+struct efc_node_cb {
+	int			status;
+	int			ext_status;
+	struct efc_hw_rq_buffer *header;
+	struct efc_hw_rq_buffer *payload;
+	struct efc_dma		els_rsp;
+};
+
+/* HW unsolicited callback status */
+enum efc_hw_unsol_status {
+	EFC_HW_UNSOL_SUCCESS,
+	EFC_HW_UNSOL_ERROR,
+	EFC_HW_UNSOL_ABTS_RCVD,
+	EFC_HW_UNSOL_MAX,	/* must be last */
+};
+
+enum efc_hw_rq_buffer_type {
+	EFC_HW_RQ_BUFFER_TYPE_HDR,
+	EFC_HW_RQ_BUFFER_TYPE_PAYLOAD,
+	EFC_HW_RQ_BUFFER_TYPE_MAX,
+};
+
+struct efc_hw_rq_buffer {
+	u16			rqindex;
+	struct efc_dma		dma;
+};
+
+/*
+ * Defines a general FC sequence object,
+ * consisting of a header, payload buffers
+ * and a HW IO in the case of port owned XRI
+ */
+struct efc_hw_sequence {
+	struct list_head	list_entry;
+	void			*hw;
+	u8			fcfi;
+	u8			auto_xrdy;
+	u8			out_of_xris;
+
+	struct efc_hw_rq_buffer *header;
+	struct efc_hw_rq_buffer *payload;
+
+	enum efc_hw_unsol_status status;
+	struct efct_hw_io	*hio;
+
+	void			*hw_priv;
+};
+
+struct libefc_function_template {
+	/*Domain*/
+	int (*hw_domain_alloc)(struct efc *efc, struct efc_domain *d, u32 fcf);
+	int (*hw_domain_attach)(struct efc *efc, struct efc_domain *d, u32 id);
+
+	int (*hw_domain_free)(struct efc *hw, struct efc_domain *d);
+	int (*hw_domain_force_free)(struct efc *efc, struct efc_domain *d);
+
+	int (*new_domain)(struct efc *efc, struct efc_domain *d);
+	void (*del_domain)(struct efc *efc, struct efc_domain *d);
+
+	void (*domain_hold_frames)(struct efc *efc, struct efc_domain *d);
+	void (*domain_accept_frames)(struct efc *efc, struct efc_domain *d);
+
+	/*Sport*/
+	int (*hw_port_alloc)(struct efc *hw, struct efc_sli_port *sp,
+			     struct efc_domain *d, u8 *val);
+	int (*hw_port_attach)(struct efc *hw, struct efc_sli_port *sp,
+			      u32 fc_id);
+
+	int (*hw_port_free)(struct efc *hw, struct efc_sli_port *sp);
+
+	int (*new_sport)(struct efc *efc, struct efc_sli_port *sp);
+	void (*del_sport)(struct efc *efc, struct efc_sli_port *sp);
+
+	/*Node*/
+	int (*hw_node_alloc)(struct efc *hw, struct efc_remote_node *n,
+			     u32 fc_addr, struct efc_sli_port *sport);
+
+	int (*hw_node_attach)(struct efc *hw, struct efc_remote_node *n,
+			      struct efc_dma *sparams);
+
+	int (*hw_node_detach)(struct efc *hw, struct efc_remote_node *r);
+
+	int (*hw_node_free_resources)(struct efc *efc,
+				      struct efc_remote_node *node);
+	int (*node_purge_pending)(struct efc *efc, struct efc_node *n);
+
+	void (*node_io_cleanup)(struct efc *efc, struct efc_node *n,
+				bool force);
+	void (*node_els_cleanup)(struct efc *efc, struct efc_node *n,
+				bool force);
+	void (*node_abort_all_els)(struct efc *efc, struct efc_node *n);
+
+	/*Scsi*/
+	void (*scsi_io_alloc_disable)(struct efc *efc, struct efc_node *node);
+	void (*scsi_io_alloc_enable)(struct efc *efc, struct efc_node *node);
+
+	int (*scsi_validate_node)(struct efc *efc, struct efc_node *n);
+	int (*scsi_new_node)(struct efc *efc, struct efc_node *n);
+
+	int (*scsi_del_node)(struct efc *efc, struct efc_node *n, int reason);
+
+	/*Send ELS*/
+	void *(*els_send)(struct efc *efc, struct efc_node *node,
+			  u32 cmd, u32 timeout_sec, u32 retries);
+
+	void *(*els_send_ct)(struct efc *efc, struct efc_node *node,
+			     u32 cmd, u32 timeout_sec, u32 retries);
+
+	void *(*els_send_resp)(struct efc *efc, struct efc_node *node,
+			       u32 cmd, u16 ox_id);
+
+	void *(*bls_send_acc_hdr)(struct efc *efc, struct efc_node *n,
+				  struct fc_frame_header *hdr);
+	void *(*send_flogi_p2p_acc)(struct efc *efc, struct efc_node *n,
+				    u32 ox_id, u32 s_id);
+
+	int (*send_ct_rsp)(struct efc *efc, struct efc_node *node,
+			   u16 ox_id, struct fc_ct_hdr *hdr,
+			   u32 rsp_code, u32 reason_code, u32 rsn_code_expl);
+
+	void *(*send_ls_rjt)(struct efc *efc, struct efc_node *node,
+			     u32 ox, u32 rcode, u32 rcode_expl, u32 vendor);
+
+	int (*dispatch_fcp_cmd)(struct efc_node *node,
+				struct efc_hw_sequence *seq);
+
+	int (*recv_abts_frame)(struct efc *efc, struct efc_node *node,
+			       struct efc_hw_sequence *seq);
+};
+
+#define EFC_LOG_LIB		0x01
+#define EFC_LOG_NODE		0x02
+#define EFC_LOG_PORT		0x04
+#define EFC_LOG_DOMAIN		0x08
+#define EFC_LOG_ELS		0x10
+#define EFC_LOG_DOMAIN_SM	0x20
+#define EFC_LOG_SM		0x40
+
+/* efc library port structure */
+struct efc {
+	void			*base;
+	struct pci_dev		*pcidev;
+	u64			req_wwpn;
+	u64			req_wwnn;
+
+	u64			def_wwpn;
+	u64			def_wwnn;
+	u64			max_xfer_size;
+	u32			nodes_count;
+	struct efc_node		**nodes;
+	struct list_head	nodes_free_list;
+
+	u32			link_status;
+
+	/* vport */
+	struct list_head	vport_list;
+
+	struct libefc_function_template tt;
+	spinlock_t		lock;
+
+	bool			enable_ini;
+	bool			enable_tgt;
+
+	u32			log_level;
+
+	struct efc_domain	*domain;
+	void (*domain_free_cb)(struct efc *efc, void *arg);
+	void			*domain_free_cb_arg;
+
+	time_t			tgt_rscn_delay_msec;
+	time_t			tgt_rscn_period_msec;
+
+	bool			external_loopback;
+	u32			nodedb_mask;
+};
+
+/*
+ * EFC library registration
+ * **********************************/
+int efcport_init(struct efc *efc);
+void efcport_destroy(struct efc *efc);
+/*
+ * EFC Domain
+ * **********************************/
+int efc_domain_cb(void *arg, int event, void *data);
+void efc_domain_force_free(struct efc_domain *domain);
+void
+efc_register_domain_free_cb(struct efc *efc,
+			    void (*callback)(struct efc *efc, void *arg),
+			    void *arg);
+
+/*
+ * EFC Local port
+ * **********************************/
+int efc_lport_cb(void *arg, int event, void *data);
+int8_t efc_vport_create_spec(struct efc *efc, u64 wwnn,
+			     u64 wwpn, u32 fc_id, bool enable_ini,
+			     bool enable_tgt, void *tgt_data, void *ini_data);
+int efc_sport_vport_new(struct efc_domain *domain, u64 wwpn,
+			u64 wwnn, u32 fc_id, bool ini, bool tgt,
+			void *tgt_data, void *ini_data, bool restore_vport);
+int efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
+			u64 wwpn, u64 wwnn);
+
+void efc_vport_del_all(struct efc *efc);
+
+struct efc_sli_port *efc_sport_find(struct efc_domain *domain, u32 d_id);
+
+/*
+ * EFC Node
+ * **********************************/
+int efc_remote_node_cb(void *arg, int event, void *data);
+u64 efc_node_get_wwnn(struct efc_node *node);
+u64 efc_node_get_wwpn(struct efc_node *node);
+struct efc_node *efc_node_find(struct efc_sli_port *sport, u32 id);
+void efc_node_fcid_display(u32 fc_id, char *buffer, u32 buf_len);
+
+void efc_node_post_els_resp(struct efc_node *node, u32 evt, void *arg);
+void efc_node_post_shutdown(struct efc_node *node, u32 evt, void *arg);
+/*
+ * EFC FCP/ELS/CT interface
+ * **********************************/
+int efc_node_recv_abts_frame(struct efc *efc,
+			     struct efc_node *node,
+			     struct efc_hw_sequence *seq);
+int efc_node_recv_els_frame(struct efc_node *node, struct efc_hw_sequence *s);
+int efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
+
+int efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
+
+int efc_node_recv_ct_frame(struct efc_node *node, struct efc_hw_sequence *seq);
+int efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
+int efc_node_recv_bls_no_sit(struct efc_node *node, struct efc_hw_sequence *s);
+
+/*
+ * EFC SCSI INTERACTION LAYER
+ * **********************************/
+void efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node);
+void efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node);
+void efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node);
+
+#endif /* __EFCLIB_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 10/32] elx: libefc: FC Domain state machine interfaces
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (8 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  7:27   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 11/32] elx: libefc: SLI and FC PORT " James Smart
                   ` (22 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC Domain registration, allocation and deallocation sequence
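
As an illustration (not part of this patch; the my_* names are
hypothetical), a hardware/SLI layer reports a discovered fabric into
the domain state machine and registers for the domain-free
notification roughly like this; the efct driver wires up these paths
in later patches:

static void my_domain_freed(struct efc *efc, void *arg)
{
	/* all sports and nodes are gone; safe to tear down or re-discover */
}

static void my_hw_report_fabric(struct efc *efc,
				struct efc_domain_record *drec)
{
	/* called when the domain object has been freed (or none exists) */
	efc_register_domain_free_cb(efc, my_domain_freed, NULL);

	/* allocates the efc_domain and starts the discovery state machine */
	efc_domain_cb(efc, EFC_HW_DOMAIN_FOUND, drec);

	/* a later fabric loss would be reported as:
	 * efc_domain_cb(efc, EFC_HW_DOMAIN_LOST, efc->domain);
	 */
}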

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_domain.c | 1126 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_domain.h |   52 ++
 2 files changed, 1178 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h

diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c
new file mode 100644
index 000000000000..a386d866c77b
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.c
@@ -0,0 +1,1126 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * domain_sm Domain State Machine: States
+ */
+
+#include "efc.h"
+
+/* Accept domain callback events from the user driver */
+int
+efc_domain_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_domain *domain = NULL;
+	int rc = 0;
+
+	if (event != EFC_HW_DOMAIN_FOUND)
+		domain = data;
+
+	switch (event) {
+	case EFC_HW_DOMAIN_FOUND: {
+		u64 fcf_wwn = 0;
+		struct efc_domain_record *drec = data;
+
+		/* extract the fcf_wwn */
+		fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));
+
+		efc_log_debug(efc, "Domain allocated: wwn %016llX\n", fcf_wwn);
+		/*
+		 * lookup domain, or allocate a new one
+		 * if one doesn't exist already
+		 */
+		domain = efc->domain;
+		if (!domain) {
+			domain = efc_domain_alloc(efc, fcf_wwn);
+			if (!domain) {
+				efc_log_err(efc, "efc_domain_alloc() failed\n");
+				rc = -1;
+				break;
+			}
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+		}
+
+		if (fcf_wwn != domain->fcf_wwn) {
+			efc_log_err(efc, "evt: FOUND for existing domain\n");
+			efc_log_err(efc, "wwn:%016llX domain wwn:%016llX\n",
+				    fcf_wwn, domain->fcf_wwn);
+		}
+
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
+		break;
+	}
+
+	case EFC_HW_DOMAIN_LOST:
+		domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n");
+		efc->tt.domain_hold_frames(efc, domain);
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL,
+				      NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n");
+		efc_domain_post_event(domain,
+				      EFC_EVT_DOMAIN_ATTACH_FAIL, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL);
+		break;
+
+	default:
+		efc_log_warn(efc, "unsupported event %#x\n", event);
+	}
+
+	return rc;
+}
+
+struct efc_domain *
+efc_domain_alloc(struct efc *efc, u64 fcf_wwn)
+{
+	struct efc_domain *domain;
+
+	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
+	if (domain) {
+		domain->efc = efc;
+		domain->drvsm.app = domain;
+
+		/* Allocate a sparse vector for sport FC_ID's */
+		domain->lookup = efc_spv_new(efc);
+		if (!domain->lookup) {
+			efc_log_err(efc, "efc_spv_new() failed\n");
+			kfree(domain);
+			return NULL;
+		}
+
+		INIT_LIST_HEAD(&domain->sport_list);
+		domain->fcf_wwn = fcf_wwn;
+		efc_log_debug(efc, "Domain allocated: wwn %016llX\n",
+			      domain->fcf_wwn);
+		efc->domain = domain;
+	} else {
+		efc_log_err(efc, "domain allocation failed\n");
+	}
+
+	return domain;
+}
+
+void
+efc_domain_free(struct efc_domain *domain)
+{
+	struct efc *efc;
+
+	efc = domain->efc;
+
+	/* Hold frames to clear the domain pointer from the xport lookup */
+	efc->tt.domain_hold_frames(efc, domain);
+
+	efc_log_debug(efc, "Domain free: wwn %016llX\n",
+		      domain->fcf_wwn);
+
+	efc_spv_del(domain->lookup);
+	domain->lookup = NULL;
+	efc->domain = NULL;
+
+	if (efc->domain_free_cb)
+		(*efc->domain_free_cb)(efc, efc->domain_free_cb_arg);
+
+	kfree(domain);
+}
+
+/* Free memory resources of a domain object */
+void
+efc_domain_force_free(struct efc_domain *domain)
+{
+	struct efc_sli_port *sport;
+	struct efc_sli_port *next;
+	struct efc *efc = domain->efc;
+
+	/* Shutdown domain sm */
+	efc_sm_disable(&domain->drvsm);
+
+	list_for_each_entry_safe(sport, next, &domain->sport_list, list_entry) {
+		efc_sport_force_free(sport);
+	}
+
+	efc->tt.hw_domain_force_free(efc, domain);
+	efc_domain_free(domain);
+}
+
+/* Register a callback to be called when the domain is freed */
+void
+efc_register_domain_free_cb(struct efc *efc,
+			    void (*callback)(struct efc *efc, void *arg),
+			    void *arg)
+{
+	efc->domain_free_cb = callback;
+	efc->domain_free_cb_arg = arg;
+	if (!efc->domain && callback)
+		(*callback)(efc, arg);
+}
+
+static void *
+__efc_domain_common(const char *funcname, struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_domain *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/*
+		 * this can arise if an FLOGI fails on the SPORT,
+		 * and the SPORT is shutdown
+		 */
+		break;
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+static void *
+__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_domain *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+		break;
+	case EFC_EVT_DOMAIN_FOUND:
+		/* save drec, mark domain_found_pending */
+		memcpy(&domain->pending_drec, arg,
+		       sizeof(domain->pending_drec));
+		domain->domain_found_pending = true;
+		break;
+	case EFC_EVT_DOMAIN_LOST:
+		/* unmark domain_found_pending */
+		domain->domain_found_pending = false;
+		break;
+
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+#define std_domain_state_decl(...)\
+	struct efc_domain *domain = NULL;\
+	struct efc *efc = NULL;\
+	\
+	efc_assert(ctx, NULL);\
+	efc_assert(ctx->app, NULL);\
+	domain = ctx->app;\
+	efc_assert(domain->efc, NULL);\
+	efc = domain->efc
+
+void *
+__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		domain->attached = false;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND: {
+		u32	i;
+		struct efc_domain_record *drec = arg;
+		struct efc_sli_port *sport;
+
+		u64	my_wwnn = efc->req_wwnn;
+		u64	my_wwpn = efc->req_wwpn;
+		__be64		be_wwpn;
+
+		if (my_wwpn == 0 || my_wwnn == 0) {
+			efc_log_debug(efc,
+				"using default hardware WWN configuration\n");
+			my_wwpn = efc->def_wwpn;
+			my_wwnn = efc->def_wwnn;
+		}
+
+		efc_log_debug(efc,
+			"Creating base sport using WWPN %016llX WWNN %016llX\n",
+			my_wwpn, my_wwnn);
+
+		/* Allocate a sport and transition to __efc_sport_allocated */
+		sport = efc_sport_alloc(domain, my_wwpn, my_wwnn, U32_MAX,
+					efc->enable_ini, efc->enable_tgt);
+
+		if (!sport) {
+			efc_log_err(efc, "efc_sport_alloc() failed\n");
+			break;
+		}
+		efc_sm_transition(&sport->sm, __efc_sport_allocated, NULL);
+
+		be_wwpn = cpu_to_be64(sport->wwpn);
+
+		/* allocate struct efc_sli_port object for local port
+		 * Note: drec->fc_id is ALPA from read_topology only if loop
+		 */
+		if (efc->tt.hw_port_alloc(efc, sport, NULL,
+					  (uint8_t *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			efc_sport_free(sport);
+			break;
+		}
+
+		domain->is_loop = drec->is_loop;
+
+		/*
+		 * If the loop position map includes ALPA == 0,
+		 * then we are in a public loop (NL_PORT)
+		 * Note that the first element of the loopmap[]
+		 * contains the count of elements, and if
+		 * ALPA == 0 is present, it will occupy the first
+		 * location after the count.
+		 */
+		domain->is_nlport = drec->map.loop[1] == 0x00;
+
+		if (!domain->is_loop) {
+			/* Initiate HW domain alloc */
+			if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
+				efc_log_err(efc,
+					    "Failed to initiate HW domain allocation\n");
+				break;
+			}
+			efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+			break;
+		}
+
+		efc_log_debug(efc, "%s fc_id=%#x speed=%d\n",
+			      drec->is_loop ?
+			      (domain->is_nlport ?
+			      "public-loop" : "loop") : "other",
+			      drec->fc_id, drec->speed);
+
+		sport->fc_id = drec->fc_id;
+		sport->topology = EFC_SPORT_TOPOLOGY_LOOP;
+		snprintf(sport->display_name, sizeof(sport->display_name),
+			 "s%06x", drec->fc_id);
+
+		if (efc->enable_ini) {
+			u32 count = drec->map.loop[0];
+
+			efc_log_debug(efc, "%d position map entries\n",
+				      count);
+			for (i = 1; i <= count; i++) {
+				if (drec->map.loop[i] != drec->fc_id) {
+					struct efc_node *node;
+
+					efc_log_debug(efc, "%#x -> %#x\n",
+						      drec->fc_id,
+						      drec->map.loop[i]);
+					node = efc_node_alloc(sport,
+							      drec->map.loop[i],
+							      false, true);
+					if (!node) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						break;
+					}
+					efc_node_transition(node,
+							    __efc_d_wait_loop,
+							    NULL);
+				}
+			}
+		}
+
+		/* Initiate HW domain alloc */
+		if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
+			efc_log_err(efc,
+				    "Failed to initiate HW domain allocation\n");
+			break;
+		}
+		efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+		break;
+	}
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for the domain allocation to complete */
+void *
+__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport;
+
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK: {
+		struct fc_els_flogi  *sp;
+
+		sport = domain->sport;
+		efc_assert(sport, NULL);
+		sp = (struct fc_els_flogi  *)sport->service_params;
+
+		/* Save the domain service parameters */
+		memcpy(domain->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+		memcpy(sport->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+
+		/*
+		 * Update the sport's service parameters,
+		 * user might have specified non-default names
+		 */
+		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
+
+		/*
+		 * Take the loop topology path,
+		 * unless we are an NL_PORT (public loop)
+		 */
+		if (domain->is_loop && !domain->is_nlport) {
+			/*
+			 * For loop, we already have our FC ID
+			 * and don't need fabric login.
+			 * Transition to the allocated state and
+			 * post an event to attach to
+			 * the domain. Note that this breaks the
+			 * normal action/transition
+			 * pattern here to avoid a race with the
+			 * domain attach callback.
+			 */
+			/* sm: is_loop / domain_attach */
+			efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+			__efc_domain_attach_internal(domain, sport->fc_id);
+			break;
+		}
+		{
+			struct efc_node *node;
+
+			/* alloc fabric node, send FLOGI */
+			node = efc_node_find(sport, FC_FID_FLOGI);
+			if (node) {
+				efc_log_err(efc,
+					    "Fabric Controller node already exists\n");
+				break;
+			}
+			node = efc_node_alloc(sport, FC_FID_FLOGI,
+					      false, false);
+			if (!node) {
+				efc_log_err(efc,
+					    "Error: efc_node_alloc() failed\n");
+			} else {
+				efc_node_transition(node,
+						    __efc_fabric_init, NULL);
+			}
+			/* Accept frames */
+			domain->req_accept_frames = true;
+		}
+		/* sm: / start fabric logins */
+		efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+		efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;",
+			    efc_sm_event_name(evt));
+		efc_log_err(efc, "shutting down domain\n");
+		domain->req_domain_free = true;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw_domain_alloc()\n",
+			efc_sm_event_name(evt));
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for the domain attach request */
+void *
+__efc_domain_allocated(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	int rc = 0;
+
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		u32 fc_id;
+
+		efc_assert(arg, NULL);
+
+		fc_id = *((u32 *)arg);
+		efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n",
+			      fc_id);
+		/* Update sport lookup */
+		efc_spv_set(domain->lookup, fc_id, domain->sport);
+
+		/* Update display name for the sport */
+		efc_node_fcid_display(fc_id, domain->sport->display_name,
+				      sizeof(domain->sport->display_name));
+
+		/* Issue domain attach call */
+		rc = efc->tt.hw_domain_attach(efc, domain, fc_id);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_attach failed: %d\n",
+				    rc);
+			return NULL;
+		}
+		/* sm: / domain_attach */
+		efc_sm_transition(ctx, __efc_domain_wait_attach, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST: {
+		int rc;
+
+		efc_log_debug(efc,
+			      "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n",
+			efc_sm_event_name(evt));
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to
+			 * wait state and send shutdown to each
+			 * sport
+			 */
+			struct efc_sli_port	*sport = NULL;
+			struct efc_sli_port	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "hw_domain_free failed: %d\n", rc);
+			}
+		}
+
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for the HW domain attach to complete */
+void *
+__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_node *node = NULL;
+		struct efc_node *next_node = NULL;
+		struct efc_sli_port *sport;
+		struct efc_sli_port *next_sport;
+
+		/*
+		 * Set domain notify pending state to avoid
+		 * duplicate domain event post
+		 */
+		domain->domain_notify_pend = true;
+
+		/* Mark as attached */
+		domain->attached = true;
+
+		/* Register with SCSI API */
+		efc->tt.new_domain(efc, domain);
+
+		/* Transition to ready */
+		/* sm: / forward event to all sports and nodes */
+		efc_sm_transition(ctx, __efc_domain_ready, NULL);
+
+		/* We have an FCFI, so we can accept frames */
+		domain->req_accept_frames = true;
+
+		/*
+		 * Notify all nodes that the domain attach request
+		 * has completed
+		 * Note: sport will have already received notification
+		 * of sport attached as a result of the HW's port attach.
+		 */
+		list_for_each_entry_safe(sport, next_sport,
+					 &domain->sport_list, list_entry) {
+			list_for_each_entry_safe(node, next_node,
+						 &sport->node_list,
+						 list_entry) {
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		}
+		domain->domain_notify_pend = false;
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw attach\n",
+			      efc_sm_event_name(evt));
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		/*
+		 * Domain lost while waiting for an attach to complete,
+		 * go to a state that waits for the domain attach to
+		 * complete, then handle domain lost
+		 */
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH:
+		/*
+		 * In P2P we can get an attach request from
+		 * the other FLOGI path, so drop this one
+		 */
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Ready state */
+void *
+__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* start any pending vports */
+		if (efc_vport_start(domain)) {
+			efc_log_debug(domain->efc,
+				      "efc_vport_start didn't start vports\n");
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_LOST: {
+		int rc;
+
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to wait state
+			 * and send shutdown to each sport
+			 */
+			struct efc_sli_port	*sport = NULL;
+			struct efc_sli_port	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "hw_domain_free failed: %d\n", rc);
+			}
+		}
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		/* can happen during p2p */
+		u32 fc_id;
+
+		fc_id = *((u32 *)arg);
+
+		/* Assume that the domain is attached */
+		efc_assert(domain->attached, NULL);
+
+		/*
+		 * Verify that the requested FC_ID
+		 * is the same as the one we're working with
+		 */
+		efc_assert(domain->sport->fc_id == fc_id, NULL);
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for nodes to free prior to the domain shutdown */
+void *
+__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			      void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		int rc;
+
+		/* sm: / efc_hw_domain_free */
+		efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL);
+
+		/* Request efc_hw_domain_free and wait for completion */
+		rc = efc->tt.hw_domain_free(efc, domain);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_free() failed: %d\n",
+				    rc);
+		}
+		break;
+	}
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Complete the domain shutdown */
+void *
+__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_FREE_OK: {
+		efc->tt.del_domain(efc, domain);
+
+		/* sm: / domain_free */
+		if (domain->domain_found_pending) {
+			/*
+			 * Save the fcf_wwn and drec from this domain,
+			 * free the current domain, and allocate a new one
+			 * with the same fcf_wwn. A SLI-4 "re-register VPI"
+			 * operation could perhaps be used here instead.
+			 */
+			u64 fcf_wwn = domain->fcf_wwn;
+			struct efc_domain_record drec = domain->pending_drec;
+
+			efc_log_debug(efc, "Reallocating domain\n");
+			domain->req_domain_free = true;
+			domain = efc_domain_alloc(efc, fcf_wwn);
+
+			if (!domain) {
+				efc_log_err(efc,
+					    "efc_domain_alloc() failed\n");
+				return NULL;
+			}
+			/*
+			 * Got a new domain; at this point there are at
+			 * least two domains. Once the req_domain_free flag
+			 * is processed, the associated domain will be
+			 * removed.
+			 */
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+			efc_sm_post_event(&domain->drvsm,
+					  EFC_EVT_DOMAIN_FOUND, &drec);
+		} else {
+			domain->req_domain_free = true;
+		}
+		break;
+	}
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/*
+ * Domain state machine: Wait for the domain alloc/attach completion
+ * after receiving a domain lost.
+ */
+void *
+__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK:
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		int rc;
+
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to
+			 * wait state and send shutdown to each sport
+			 */
+			struct efc_sli_port	*sport = NULL;
+			struct efc_sli_port	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "efc_hw_domain_free() failed: %d\n",
+					    rc);
+			}
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_err(efc, "[domain] %-20s: failed\n",
+			    efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void
+__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id)
+{
+	memcpy(domain->dma.virt,
+	       ((uint8_t *)domain->flogi_service_params) + 4,
+		   sizeof(struct fc_els_flogi) - 4);
+	(void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH,
+				 &s_id);
+}
+
+void
+efc_domain_attach(struct efc_domain *domain, u32 s_id)
+{
+	__efc_domain_attach_internal(domain, s_id);
+}
+
+int
+efc_domain_post_event(struct efc_domain *domain,
+		      enum efc_sm_event event, void *arg)
+{
+	int rc;
+	bool accept_frames;
+	bool req_domain_free;
+	struct efc *efc = domain->efc;
+
+	rc = efc_sm_post_event(&domain->drvsm, event, arg);
+
+	req_domain_free = domain->req_domain_free;
+	domain->req_domain_free = false;
+
+	accept_frames = domain->req_accept_frames;
+	domain->req_accept_frames = false;
+
+	if (accept_frames)
+		efc->tt.domain_accept_frames(efc, domain);
+
+	if (req_domain_free)
+		efc_domain_free(domain);
+
+	return rc;
+}
+
+/* Dispatch unsolicited FC frame */
+int
+efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
+{
+	struct efc_domain *domain = (struct efc_domain *)arg;
+	struct efc *efc = domain->efc;
+	struct fc_frame_header *hdr;
+	u32 s_id;
+	u32 d_id;
+	struct efc_node *node = NULL;
+	struct efc_sli_port *sport = NULL;
+	unsigned long flags = 0;
+
+	if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) {
+		efc_log_err(efc, "Sequence header or payload is null\n");
+		return -1;
+	}
+
+	hdr = seq->header->dma.virt;
+
+	/* extract the s_id and d_id */
+	s_id = ntoh24(hdr->fh_s_id);
+	d_id = ntoh24(hdr->fh_d_id);
+
+	sport = domain->sport;
+	if (!sport) {
+		efc_log_err(efc,
+			    "Drop frame, sport for FC ID 0x%06x is NULL\n", d_id);
+		return -1;
+	}
+
+	if (sport->fc_id != d_id) {
+		/*
+		 * Not a physical port; look up the sport associated
+		 * with the NPIV port (lookup is done without the lock)
+		 */
+		sport = efc_sport_find(domain, d_id);
+		if (!sport) {
+			if (hdr->fh_type == FC_TYPE_FCP) {
+				/* Drop frame */
+				efc_log_warn(efc,
+					     "unsolicited FCP frame with invalid d_id x%x\n",
+					     d_id);
+				return -1;
+			}
+			/* p2p will use this case */
+			sport = domain->sport;
+		}
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Lookup the node given the remote s_id */
+	node = efc_node_find(sport, s_id);
+
+	/* If not found, then create a new node */
+	if (!node) {
+		/* If this is solicited data or control based on R_CTL and
+		 * there is no node context,
+		 * then we can drop the frame
+		 */
+		if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) ||
+			(hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) {
+			efc_log_debug(efc,
+				      "solicited data/ctrl frame without node, drop\n");
+			spin_unlock_irqrestore(&efc->lock, flags);
+			return -1;
+		}
+
+		node = efc_node_alloc(sport, s_id, false, false);
+		if (!node) {
+			efc_log_err(efc, "efc_node_alloc() failed\n");
+			spin_unlock_irqrestore(&efc->lock, flags);
+			return -1;
+		}
+		/* don't send PLOGI on efc_d_init entry */
+		efc_node_init_device(node, false);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	if (node->hold_frames || !list_empty(&node->pend_frames)) {
+		/* add frame to node's pending list */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &node->pend_frames);
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		return 0;
+	}
+
+	/* now dispatch frame to the node frame handler */
+	return efc_node_dispatch_frame(node, seq);
+}
+
+int
+efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
+{
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	u32 port_id;
+	struct efc_node *node = (struct efc_node *)arg;
+	int rc = -1;
+	int sit_set = 0;
+
+	struct efc *efc = node->efc;
+
+	port_id = ntoh24(hdr->fh_s_id);
+	efc_assert(port_id == node->rnode.fc_id, -1);
+
+	if (!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) {
+		node_printf(node,
+			    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
+		    cpu_to_be32(((u32 *)hdr)[0]),
+		    cpu_to_be32(((u32 *)hdr)[1]),
+		    cpu_to_be32(((u32 *)hdr)[2]),
+		    cpu_to_be32(((u32 *)hdr)[3]),
+		    cpu_to_be32(((u32 *)hdr)[4]),
+		    cpu_to_be32(((u32 *)hdr)[5]));
+		return rc;
+	}
+
+	/* if SIT is set */
+	if (ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)
+		sit_set = 1;
+
+	switch (hdr->fh_r_ctl) {
+	case FC_RCTL_ELS_REQ:
+	case FC_RCTL_ELS_REP:
+		if (sit_set)
+			rc = efc_node_recv_els_frame(node, seq);
+
+		/* failure status to release the seq */
+		if (!rc)
+			rc = 2;
+		break;
+
+	case FC_RCTL_BA_ABTS:
+	case FC_RCTL_BA_ACC:
+	case FC_RCTL_BA_RJT:
+	case FC_RCTL_BA_NOP:
+		if (sit_set)
+			rc = efc->tt.recv_abts_frame(efc, node, seq);
+		else
+			rc = efc_node_recv_bls_no_sit(node, seq);
+		break;
+
+	case FC_RCTL_DD_UNSOL_CMD:
+	case FC_RCTL_DD_UNSOL_CTL:
+		switch (hdr->fh_type) {
+		case FC_TYPE_FCP:
+			if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) {
+				if (!node->fcp_enabled) {
+					rc = efc_node_recv_fcp_cmd(node, seq);
+					break;
+				}
+
+				if (sit_set) {
+					rc = efc->tt.dispatch_fcp_cmd(node,
+									seq);
+				} else {
+					node_printf(node,
+					   "Unsol cmd received with no SIT\n");
+				}
+			} else if ((hdr->fh_r_ctl & 0xf) ==
+							FC_RCTL_DD_SOL_DATA) {
+				node_printf(node,
+				    "solicited data received. Dropping IO\n");
+			}
+			break;
+		case FC_TYPE_CT:
+			if (sit_set)
+				rc = efc_node_recv_ct_frame(node, seq);
+			break;
+		default:
+			break;
+		}
+		break;
+	default:
+		efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h
new file mode 100644
index 000000000000..d318dda5935c
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declare driver's domain handler exported interface
+ */
+
+#ifndef __EFCT_DOMAIN_H__
+#define __EFCT_DOMAIN_H__
+
+extern struct efc_domain *
+efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn);
+extern void
+efc_domain_free(struct efc_domain *domain);
+
+extern void *
+__efc_domain_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_allocated(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+
+extern void
+efc_domain_attach(struct efc_domain *domain, u32 s_id);
+extern int
+efc_domain_post_event(struct efc_domain *domain,
+		      enum efc_sm_event event, void *arg);
+extern void
+__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id);
+
+#endif /* __EFCT_DOMAIN_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 11/32] elx: libefc: SLI and FC PORT state machine interfaces
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (9 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 10/32] elx: libefc: FC Domain state machine interfaces James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  7:34   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 12/32] elx: libefc: Remote node " James Smart
                   ` (21 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- SLI and FC port (aka n_port_id) registration, allocation and
  deallocation.
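
A rough usage sketch of these interfaces (illustration only, not part
of the patch; wwpn/wwnn and the ini/tgt data pointers are placeholder
values chosen by the caller):

	/* request a new target-mode NPIV port; U32_MAX lets the fabric
	 * FDISC assign the fc_id, and restore_vport=true saves a vport
	 * spec so the port is recreated on a later link up
	 */
	if (efc_sport_vport_new(domain, wwpn, wwnn, U32_MAX,
				false, true, NULL, NULL, true))
		return -1;

	/* tear the same virtual port down again */
	efc_sport_vport_del(efc, domain, wwpn, wwnn);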

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_sport.c | 843 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_sport.h |  52 +++
 2 files changed, 895 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.h

diff --git a/drivers/scsi/elx/libefc/efc_sport.c b/drivers/scsi/elx/libefc/efc_sport.c
new file mode 100644
index 000000000000..11f3ba73ec6e
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sport.c
@@ -0,0 +1,843 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Details SLI port (sport) functions.
+ */
+
+#include "efc.h"
+
+/* HW sport callback events from the user driver */
+int
+efc_lport_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_sli_port *sport = data;
+
+	switch (event) {
+	case EFC_HW_PORT_ALLOC_OK:
+		efc_log_debug(efc, "EFC_HW_PORT_ALLOC_OK\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ALLOC_OK, NULL);
+		break;
+	case EFC_HW_PORT_ALLOC_FAIL:
+		efc_log_debug(efc, "EFC_HW_PORT_ALLOC_FAIL\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ALLOC_FAIL, NULL);
+		break;
+	case EFC_HW_PORT_ATTACH_OK:
+		efc_log_debug(efc, "EFC_HW_PORT_ATTACH_OK\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ATTACH_OK, NULL);
+		break;
+	case EFC_HW_PORT_ATTACH_FAIL:
+		efc_log_debug(efc, "EFC_HW_PORT_ATTACH_FAIL\n");
+		efc_sm_post_event(&sport->sm,
+				  EFC_EVT_SPORT_ATTACH_FAIL, NULL);
+		break;
+	case EFC_HW_PORT_FREE_OK:
+		efc_log_debug(efc, "EFC_HW_PORT_FREE_OK\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_FREE_OK, NULL);
+		break;
+	case EFC_HW_PORT_FREE_FAIL:
+		efc_log_debug(efc, "EFC_HW_PORT_FREE_FAIL\n");
+		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_FREE_FAIL, NULL);
+		break;
+	default:
+		efc_log_test(efc, "unknown event %#x\n", event);
+	}
+
+	return 0;
+}
+
+struct efc_sli_port *
+efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt)
+{
+	struct efc_sli_port *sport;
+
+	/* a sport can only be initiator-enabled if the driver is */
+	if (!domain->efc->enable_ini)
+		enable_ini = 0;
+
+	/* Return a failure if this sport has already been allocated */
+	if (wwpn != 0) {
+		sport = efc_sport_find_wwn(domain, wwnn, wwpn);
+		if (sport) {
+			efc_log_err(domain->efc,
+				    "Failed: SPORT %016llX %016llX already allocated\n",
+				    wwnn, wwpn);
+			return NULL;
+		}
+	}
+
+	sport = kzalloc(sizeof(*sport), GFP_ATOMIC);
+	if (!sport)
+		return sport;
+
+	sport->efc = domain->efc;
+	snprintf(sport->display_name, sizeof(sport->display_name), "------");
+	sport->domain = domain;
+	sport->lookup = efc_spv_new(domain->efc);
+	sport->instance_index = domain->sport_instance_count++;
+	INIT_LIST_HEAD(&sport->node_list);
+	sport->sm.app = sport;
+	sport->enable_ini = enable_ini;
+	sport->enable_tgt = enable_tgt;
+	sport->enable_rscn = (sport->enable_ini ||
+			(sport->enable_tgt && enable_target_rscn(sport->efc)));
+
+	/* Copy service parameters from domain */
+	memcpy(sport->service_params, domain->service_params,
+		sizeof(struct fc_els_flogi));
+
+	/* Update requested fc_id */
+	sport->fc_id = fc_id;
+
+	/* Update the sport's service parameters for the new wwn's */
+	sport->wwpn = wwpn;
+	sport->wwnn = wwnn;
+	snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
+
+	/*
+	 * if this is the "first" sport of the domain,
+	 * then make it the "phys" sport
+	 */
+	if (list_empty(&domain->sport_list))
+		domain->sport = sport;
+
+	INIT_LIST_HEAD(&sport->list_entry);
+	list_add_tail(&sport->list_entry, &domain->sport_list);
+
+	efc_log_debug(domain->efc, "[%s] allocate sport\n",
+		      sport->display_name);
+
+	return sport;
+}
+
+void
+efc_sport_free(struct efc_sli_port *sport)
+{
+	struct efc_domain *domain;
+
+	if (!sport)
+		return;
+
+	domain = sport->domain;
+	efc_log_debug(domain->efc, "[%s] free sport\n", sport->display_name);
+	list_del(&sport->list_entry);
+	/*
+	 * if this is the physical sport,
+	 * then clear it out of the domain
+	 */
+	if (sport == domain->sport)
+		domain->sport = NULL;
+
+	efc_spv_del(sport->lookup);
+	sport->lookup = NULL;
+
+	efc_spv_set(domain->lookup, sport->fc_id, NULL);
+
+	if (list_empty(&domain->sport_list))
+		efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE,
+				      NULL);
+
+	kfree(sport);
+}
+
+void
+efc_sport_force_free(struct efc_sli_port *sport)
+{
+	struct efc_node *node;
+	struct efc_node *next;
+
+	/* shutdown sm processing */
+	efc_sm_disable(&sport->sm);
+
+	list_for_each_entry_safe(node, next, &sport->node_list, list_entry) {
+		efc_node_force_free(node);
+	}
+
+	efc_sport_free(sport);
+}
+
+/* Find a SLI port object, given an FC_ID */
+struct efc_sli_port *
+efc_sport_find(struct efc_domain *domain, u32 d_id)
+{
+	struct efc_sli_port *sport;
+
+	if (!domain->lookup) {
+		efc_log_test(domain->efc,
+			     "assertion failed: domain->lookup is not valid\n");
+		return NULL;
+	}
+
+	sport = efc_spv_get(domain->lookup, d_id);
+	return sport;
+}
+
+/* Find a SLI port, given the WWNN and WWPN */
+struct efc_sli_port *
+efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
+{
+	struct efc_sli_port *sport = NULL;
+
+	list_for_each_entry(sport, &domain->sport_list, list_entry) {
+		if (sport->wwnn == wwnn && sport->wwpn == wwpn)
+			return sport;
+	}
+	return NULL;
+}
+
+/* External call to request an attach for a sport, given an FC_ID */
+int
+efc_sport_attach(struct efc_sli_port *sport, u32 fc_id)
+{
+	int rc;
+	struct efc_node *node;
+	struct efc *efc = sport->efc;
+
+	/* Set our lookup */
+	efc_spv_set(sport->domain->lookup, fc_id, sport);
+
+	/* Update our display_name */
+	efc_node_fcid_display(fc_id, sport->display_name,
+			      sizeof(sport->display_name));
+
+	list_for_each_entry(node, &sport->node_list, list_entry) {
+		efc_node_update_display_name(node);
+	}
+
+	efc_log_debug(sport->efc, "[%s] attach sport: fc_id x%06x\n",
+		      sport->display_name, fc_id);
+
+	rc = efc->tt.hw_port_attach(efc, sport, fc_id);
+	if (rc != EFC_HW_RTN_SUCCESS) {
+		efc_log_err(sport->efc,
+			    "efc_hw_port_attach failed: %d\n", rc);
+		return -1;
+	}
+	return 0;
+}
+
+static void
+efc_sport_shutdown(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	struct efc_node *node;
+	struct efc_node *node_next;
+
+	list_for_each_entry_safe(node, node_next,
+				 &sport->node_list, list_entry) {
+		if (node->rnode.fc_id != FC_FID_FLOGI ||
+		    !sport->is_vport) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+			continue;
+		}
+
+		/*
+		 * If this is a vport, logout of the fabric
+		 * controller so that it deletes the vport
+		 * on the switch.
+		 */
+		/* if link is down, don't send logo */
+		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		} else {
+			efc_log_debug(efc,
+				      "[%s] sport shutdown vport, sending logo to node\n",
+				      node->display_name);
+
+			if (efc->tt.els_send(efc, node, ELS_LOGO,
+					     EFC_FC_FLOGI_TIMEOUT_SEC,
+					EFC_FC_ELS_DEFAULT_RETRIES)) {
+				/* sent LOGO, wait for response */
+				efc_node_transition(node,
+						    __efc_d_wait_logo_rsp,
+						     NULL);
+				continue;
+			}
+
+			/*
+			 * failed to send LOGO,
+			 * go ahead and cleanup node anyways
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		}
+	}
+}
+
+/* Clear the sport reference in the vport specification */
+static void
+efc_vport_link_down(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	struct efc_vport_spec *vport;
+
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->sport == sport) {
+			vport->sport = NULL;
+			break;
+		}
+	}
+}
+
+static void *
+__efc_sport_common(const char *funcname, struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc_domain *domain = sport->domain;
+	struct efc *efc = sport->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		break;
+	case EFC_EVT_SPORT_ATTACH_OK:
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN: {
+		int node_list_empty;
+
+		/* Flag this sport as shutting down */
+		sport->shutting_down = true;
+
+		if (sport->is_vport)
+			efc_vport_link_down(sport);
+
+		node_list_empty = list_empty(&sport->node_list);
+
+		if (node_list_empty) {
+			/* sm: node list is empty / efc_hw_port_free */
+			/*
+			 * Remove the sport from the domain's
+			 * sparse vector lookup table
+			 */
+			efc_spv_set(domain->lookup, sport->fc_id, NULL);
+			efc_sm_transition(ctx, __efc_sport_wait_port_free,
+					  NULL);
+			if (efc->tt.hw_port_free(efc, sport)) {
+				efc_log_test(sport->efc,
+					     "efc_hw_port_free failed\n");
+				/* Not much we can do, free the sport anyways */
+				efc_sport_free(sport);
+			}
+		} else {
+			/* sm: node list is not empty / shutdown nodes */
+			efc_sm_transition(ctx,
+					  __efc_sport_wait_shutdown, NULL);
+			efc_sport_shutdown(sport);
+		}
+		break;
+	}
+	default:
+		efc_log_test(sport->efc, "[%s] %-20s %-20s not handled\n",
+			     sport->display_name, funcname,
+			     efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+/* SLI port state machine: Physical sport allocated */
+void *
+__efc_sport_allocated(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc_domain *domain = sport->domain;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	/* the physical sport is attached */
+	case EFC_EVT_SPORT_ATTACH_OK:
+		efc_assert(sport == domain->sport, NULL);
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+
+	case EFC_EVT_SPORT_ALLOC_OK:
+		/* ignore */
+		break;
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* SLI port state machine: Handle initial virtual port events */
+void *
+__efc_sport_vport_init(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		__be64 be_wwpn = cpu_to_be64(sport->wwpn);
+
+		if (sport->wwpn == 0)
+			efc_log_debug(efc, "vport: letting f/w select WWN\n");
+
+		if (sport->fc_id != U32_MAX) {
+			efc_log_debug(efc, "vport: hard coding port id: %x\n",
+				      sport->fc_id);
+		}
+
+		efc_sm_transition(ctx, __efc_sport_vport_wait_alloc, NULL);
+		/* If wwpn is zero, then we'll let the f/w select the WWPN */
+		if (efc->tt.hw_port_alloc(efc, sport, sport->domain,
+					  sport->wwpn == 0 ? NULL :
+					  (uint8_t *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			break;
+		}
+
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * SLI port state machine:
+ * Wait for the HW SLI port allocation to complete
+ */
+void *
+__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ALLOC_OK: {
+		struct fc_els_flogi *sp;
+		struct efc_node *fabric;
+
+		sp = (struct fc_els_flogi *)sport->service_params;
+		/*
+		 * If we let the f/w assign the wwn's, then update the
+		 * sport wwn's with those returned by the hw
+		 */
+		if (sport->wwnn == 0) {
+			sport->wwnn = be64_to_cpu(sport->sli_wwnn);
+			sport->wwpn = be64_to_cpu(sport->sli_wwpn);
+			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+				 "%016llX", sport->wwnn);
+		}
+
+		/* Update the sport's service parameters */
+		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
+
+		/*
+		 * if sport->fc_id is uninitialized,
+		 * then request that the fabric node use FDISC
+		 * to find an fc_id.
+		 * Otherwise we're restoring vports, or we're in
+		 * fabric emulation mode, so attach the fc_id
+		 */
+		if (sport->fc_id == U32_MAX) {
+			fabric = efc_node_alloc(sport, FC_FID_FLOGI, false,
+						false);
+			if (!fabric) {
+				efc_log_err(efc, "efc_node_alloc() failed\n");
+				return NULL;
+			}
+			efc_node_transition(fabric, __efc_vport_fabric_init,
+					    NULL);
+		} else {
+			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+				 "%016llX", sport->wwnn);
+			efc_sport_attach(sport, sport->fc_id);
+		}
+		efc_sm_transition(ctx, __efc_sport_vport_allocated, NULL);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * SLI port state machine: virtual sport allocated.
+ *
+ * This state is entered after the sport is allocated;
+ * it then waits for a fabric node
+ * FDISC to complete, which requests a sport attach.
+ * The sport attach complete is handled in this state.
+ */
+
+void *
+__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ATTACH_OK: {
+		struct efc_node *node;
+
+		/* Find our fabric node, and forward this event */
+		node = efc_node_find(sport, FC_FID_FLOGI);
+		if (!node) {
+			efc_log_test(efc, "can't find node %06x\n",
+				     FC_FID_FLOGI);
+			break;
+		}
+		/* sm: / forward sport attach to fabric node */
+		efc_node_post_event(node, evt, NULL);
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+static void
+efc_vport_update_spec(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	struct efc_vport_spec *vport;
+
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->sport == sport) {
+			vport->wwnn = sport->wwnn;
+			vport->wwpn = sport->wwpn;
+			vport->tgt_data = sport->tgt_data;
+			vport->ini_data = sport->ini_data;
+			break;
+		}
+	}
+}
+
+/* State entered after the sport attach has completed */
+void *
+__efc_sport_attached(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		struct efc_node *node;
+
+		efc_log_debug(efc,
+			      "[%s] SPORT attached WWPN %016llX WWNN %016llX\n",
+			      sport->display_name,
+			      sport->wwpn, sport->wwnn);
+
+		list_for_each_entry(node, &sport->node_list, list_entry) {
+			efc_node_update_display_name(node);
+		}
+
+		sport->tgt_id = sport->fc_id;
+
+		efc->tt.new_sport(efc, sport);
+
+		/*
+		 * Update the vport parameters (if it's not the
+		 * physical sport)
+		 */
+		if (sport->is_vport)
+			efc_vport_update_spec(sport);
+		break;
+	}
+
+	case EFC_EVT_EXIT:
+		efc_log_debug(efc,
+			      "[%s] SPORT detached WWPN %016llX WWNN %016llX\n",
+			      sport->display_name,
+			      sport->wwpn, sport->wwnn);
+
+		efc->tt.del_sport(efc, sport);
+		break;
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* SLI port state machine: Wait for the node shutdowns to complete */
+void *
+__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc_domain *domain = sport->domain;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ALLOC_OK:
+	case EFC_EVT_SPORT_ALLOC_FAIL:
+	case EFC_EVT_SPORT_ATTACH_OK:
+	case EFC_EVT_SPORT_ATTACH_FAIL:
+		/* ignore these events - just wait for the all free event */
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		/*
+		 * Remove the sport from the domain's
+		 * sparse vector lookup table
+		 */
+		efc_spv_set(domain->lookup, sport->fc_id, NULL);
+		efc_sm_transition(ctx, __efc_sport_wait_port_free, NULL);
+		if (efc->tt.hw_port_free(efc, sport)) {
+			efc_log_err(sport->efc, "efc_hw_port_free failed\n");
+			/* Not much we can do, free the sport anyways */
+			efc_sport_free(sport);
+		}
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* SLI port state machine: Wait for the HW's port free to complete */
+void *
+__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ATTACH_OK:
+		/* Ignore as we are waiting for the free CB */
+		break;
+	case EFC_EVT_SPORT_FREE_OK: {
+		/* All done, free myself */
+		/* sm: / efc_sport_free */
+		efc_sport_free(sport);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* Use the vport specification to find the associated vports and start them */
+int
+efc_vport_start(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	struct efc_sli_port *sport;
+	int rc = 0;
+	bool found = false;
+
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (!vport->sport) {
+			found = true;
+			break;
+		}
+	}
+
+	if (found && vport) {
+		sport = efc_sport_alloc(domain, vport->wwpn,
+					vport->wwnn, vport->fc_id,
+					vport->enable_ini,
+					vport->enable_tgt);
+		vport->sport = sport;
+		if (!sport) {
+			rc = -1;
+		} else {
+			sport->is_vport = true;
+			sport->tgt_data = vport->tgt_data;
+			sport->ini_data = vport->ini_data;
+
+			efc_sm_transition(&sport->sm, __efc_sport_vport_init,
+					  NULL);
+		}
+	}
+
+	return rc;
+}
+
+/* Allocate a new virtual SLI port */
+int
+efc_sport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
+		    void *ini_data, bool restore_vport)
+{
+	struct efc_sli_port *sport;
+
+	if (ini && domain->efc->enable_ini == 0) {
+		efc_log_test(domain->efc,
+			     "driver initiator functionality not enabled\n");
+		return -1;
+	}
+
+	if (tgt && domain->efc->enable_tgt == 0) {
+		efc_log_test(domain->efc,
+			     "driver target functionality not enabled\n");
+		return -1;
+	}
+
+	/*
+	 * Create a vport spec if we need to recreate
+	 * this vport after a link up event
+	 */
+	if (restore_vport) {
+		if (efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id,
+					  ini, tgt, tgt_data, ini_data)) {
+			efc_log_test(domain->efc,
+				     "failed to create vport object entry\n");
+			return -1;
+		}
+		return efc_vport_start(domain);
+	}
+
+	/* Allocate a sport */
+	sport = efc_sport_alloc(domain, wwpn, wwnn, fc_id, ini, tgt);
+
+	if (!sport)
+		return -1;
+
+	sport->is_vport = true;
+	sport->tgt_data = tgt_data;
+	sport->ini_data = ini_data;
+
+	/* Transition to vport_init */
+	efc_sm_transition(&sport->sm, __efc_sport_vport_init, NULL);
+
+	return 0;
+}
+
+/* Remove a previously-allocated virtual port */
+int
+efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
+		    u64 wwpn, uint64_t wwnn)
+{
+	struct efc_sli_port *sport;
+	int found = 0;
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+
+	/* walk the efc_vport_list and remove from there */
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
+			list_del(&vport->list_entry);
+			kfree(vport);
+			break;
+		}
+	}
+
+	if (!domain) {
+		/* No domain means no sport to look for */
+		return 0;
+	}
+
+	list_for_each_entry(sport, &domain->sport_list, list_entry) {
+		if (sport->wwpn == wwpn && sport->wwnn == wwnn) {
+			found = 1;
+			break;
+		}
+	}
+
+	if (found) {
+		/* Shutdown this SPORT */
+		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
+	}
+	return 0;
+}
+
+/* Force free all saved vports */
+void
+efc_vport_del_all(struct efc *efc)
+{
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		list_del(&vport->list_entry);
+		kfree(vport);
+	}
+}
+
+/**
+ * Create a saved vport entry.
+ *
+ * A saved vport entry is added to the vport list,
+ * which is restored following a link up.
+ * This function is used to allow vports to be created the first time
+ * the link comes up without having to go through the ioctl() API.
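+ *
+ * Illustration only (these values are not used anywhere in this
+ * patch): a target-only vport that should come back on every link up
+ * could be saved with
+ *
+ *	efc_vport_create_spec(efc, wwnn, wwpn, U32_MAX,
+ *			      false, true, tgt_data, NULL);
+ *
+ * efc_vport_start() then allocates the sport once the domain is
+ * ready.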
+ */
+
+int8_t
+efc_vport_create_spec(struct efc *efc, uint64_t wwnn, uint64_t wwpn,
+		      u32 fc_id, bool enable_ini,
+		      bool enable_tgt, void *tgt_data, void *ini_data)
+{
+	struct efc_vport_spec *vport;
+
+	/*
+	 * Walk the efc_vport_list and return failure if a valid
+	 * vport entry (one with a non-zero WWPN and WWNN) has
+	 * already been created
+	 */
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if ((wwpn && vport->wwpn == wwpn) &&
+		    (wwnn && vport->wwnn == wwnn)) {
+			efc_log_test(efc,
+				     "Failed: VPORT %016llX %016llX already allocated\n",
+				     wwnn, wwpn);
+			return -1;
+		}
+	}
+
+	vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
+	if (!vport)
+		return -1;
+
+	vport->wwnn = wwnn;
+	vport->wwpn = wwpn;
+	vport->fc_id = fc_id;
+	vport->enable_tgt = enable_tgt;
+	vport->enable_ini = enable_ini;
+	vport->tgt_data = tgt_data;
+	vport->ini_data = ini_data;
+
+	INIT_LIST_HEAD(&vport->list_entry);
+	list_add_tail(&vport->list_entry, &efc->vport_list);
+	return 0;
+}
diff --git a/drivers/scsi/elx/libefc/efc_sport.h b/drivers/scsi/elx/libefc/efc_sport.h
new file mode 100644
index 000000000000..3269e29c6f57
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sport.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * EFC FC SLI port (SPORT) exported declarations
+ *
+ */
+
+#ifndef __EFC_SPORT_H__
+#define __EFC_SPORT_H__
+
+extern struct efc_sli_port *
+efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt);
+extern void
+efc_sport_free(struct efc_sli_port *sport);
+extern void
+efc_sport_force_free(struct efc_sli_port *sport);
+extern struct efc_sli_port *
+efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn);
+extern int
+efc_sport_attach(struct efc_sli_port *sport, u32 fc_id);
+
+extern void *
+__efc_sport_allocated(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_vport_init(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_attached(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+
+extern int
+efc_vport_start(struct efc_domain *domain);
+
+#endif /* __EFC_SPORT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 12/32] elx: libefc: Remote node state machine interfaces
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (10 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 11/32] elx: libefc: SLI and FC PORT " James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  8:31   ` Hannes Reinecke
  2020-01-09  9:57   ` Daniel Wagner
  2019-12-20 22:37 ` [PATCH v2 13/32] elx: libefc: Fabric " James Smart
                   ` (20 subsequent siblings)
  32 siblings, 2 replies; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Remote node (aka remote port) allocation, initialization and
  destroy routines.
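
A minimal sketch of the intended calling sequence (illustration only,
not part of the patch; node_count and port_id are placeholders, and
the efc lock is held by the caller, as in efc_domain_dispatch_frame()
from the previous patch):

	/* size the free node pool once, at init time */
	if (efc_node_create_pool(efc, node_count))
		return -1;

	/* a new remote port has been discovered */
	node = efc_node_alloc(sport, port_id, false, true);
	if (!node)
		return -1;

	rc = efc_node_attach(node);
	if (EFC_HW_RTN_IS_ERROR(rc))
		node_printf(node, "node attach failed: %d\n", rc);

	/* when discovery decides the remote port is gone */
	efc_node_free(node);

In the driver these calls come from the discovery state machines (for
example, efc_domain_dispatch_frame() allocates a node when a frame
arrives from an unknown remote port).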

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_node.c | 1343 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_node.h |  188 +++++
 2 files changed, 1531 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h

diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
new file mode 100644
index 000000000000..57bf25a5d76a
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.c
@@ -0,0 +1,1343 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efc.h"
+
+/* HW node callback events from the user driver */
+int
+efc_remote_node_cb(void *arg, int event,
+		   void *data)
+{
+	struct efc *efc = arg;
+	enum efc_sm_event sm_event = EFC_EVT_LAST;
+	struct efc_remote_node *rnode = data;
+	struct efc_node *node = rnode->node;
+	unsigned long flags = 0;
+
+	switch (event) {
+	case EFC_HW_NODE_ATTACH_OK:
+		sm_event = EFC_EVT_NODE_ATTACH_OK;
+		break;
+
+	case EFC_HW_NODE_ATTACH_FAIL:
+		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
+		break;
+
+	case EFC_HW_NODE_FREE_OK:
+		sm_event = EFC_EVT_NODE_FREE_OK;
+		break;
+
+	case EFC_HW_NODE_FREE_FAIL:
+		sm_event = EFC_EVT_NODE_FREE_FAIL;
+		break;
+
+	default:
+		efc_log_test(efc, "unhandled event %#x\n", event);
+		return -1;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, sm_event, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return 0;
+}
+
+/* Find an FC node structure given the FC port ID */
+struct efc_node *
+efc_node_find(struct efc_sli_port *sport, u32 port_id)
+{
+	struct efc_node *node;
+
+	node = efc_spv_get(sport->lookup, port_id);
+	return node;
+}
+
+int
+efc_node_create_pool(struct efc *efc, u32 node_count)
+{
+	u32 i;
+	struct efc_node *node;
+	u64 max_xfer_size;
+	struct efc_dma *dma;
+
+	efc->nodes_count = node_count;
+
+	efc->nodes = kcalloc(node_count, sizeof(struct efc_node *),
+			     GFP_ATOMIC);
+	if (!efc->nodes)
+		return -1;
+
+	if (efc->max_xfer_size)
+		max_xfer_size = efc->max_xfer_size;
+	else
+		max_xfer_size = 65536;
+
+	INIT_LIST_HEAD(&efc->nodes_free_list);
+
+	for (i = 0; i < node_count; i++) {
+		dma = NULL;
+		node = kzalloc(sizeof(*node), GFP_ATOMIC);
+		if (!node) {
+			efc_log_err(efc, "node allocation failed\n");
+			goto error;
+		}
+		/* Assign any persistent field values */
+		node->instance_index = i;
+		node->max_wr_xfer_size = max_xfer_size;
+		node->rnode.indicator = U32_MAX;
+
+		dma = &node->sparm_dma_buf;
+		dma->size = 256;
+		dma->virt = dma_alloc_coherent(&efc->pcidev->dev, dma->size,
+					       &dma->phys, GFP_DMA);
+		if (!dma->virt) {
+			kfree(node);
+			efc_log_err(efc, "dma_alloc_coherent failed\n");
+			goto error;
+		}
+
+		efc->nodes[i] = node;
+		INIT_LIST_HEAD(&node->list_entry);
+		list_add_tail(&node->list_entry, &efc->nodes_free_list);
+	}
+	return 0;
+
+error:
+	efc_node_free_pool(efc);
+	return -1;
+}
+
+void
+efc_node_free_pool(struct efc *efc)
+{
+	struct efc_node *node;
+	u32 i;
+	struct efc_dma *dma;
+
+	if (!efc->nodes)
+		return;
+
+	for (i = 0; i < efc->nodes_count; i++) {
+		node = efc->nodes[i];
+		if (node) {
+			/* free sparam_dma_buf */
+			dma = &node->sparm_dma_buf;
+			dma_free_coherent(&efc->pcidev->dev, dma->size,
+					  dma->virt, dma->phys);
+
+			kfree(node);
+		}
+		efc->nodes[i] = NULL;
+	}
+}
+
+struct efc_node *
+efc_node_get_instance(struct efc *efc, u32 index)
+{
+	struct efc_node *node = NULL;
+
+	if (index >= efc->nodes_count) {
+		efc_log_test(efc, "invalid index: %d\n", index);
+		return NULL;
+	}
+	node = efc->nodes[index];
+	return node->attached ? node : NULL;
+}
+
+struct efc_node *efc_node_alloc(struct efc_sli_port *sport,
+				  u32 port_id, bool init, bool targ)
+{
+	int rc;
+	struct efc_node *node = NULL;
+	u32 instance_index;
+	u64 max_wr_xfer_size;
+	struct efc *efc = sport->efc;
+	struct efc_dma sparm_dma_buf;
+
+	if (sport->shutting_down) {
+		efc_log_debug(efc, "node allocation when shutting down %06x\n",
+			      port_id);
+		return NULL;
+	}
+
+	if (!list_empty(&efc->nodes_free_list)) {
+		node = list_first_entry(&efc->nodes_free_list,
+					struct efc_node, list_entry);
+		list_del(&node->list_entry);
+	}
+
+	if (!node) {
+		efc_log_err(efc, "node allocation failed %06x\n", port_id);
+		return NULL;
+	}
+
+	/* Save persistent values across memset zero */
+	instance_index = node->instance_index;
+	max_wr_xfer_size = node->max_wr_xfer_size;
+	sparm_dma_buf = node->sparm_dma_buf;
+
+	memset(node, 0, sizeof(*node));
+	node->instance_index = instance_index;
+	node->max_wr_xfer_size = max_wr_xfer_size;
+	node->sparm_dma_buf = sparm_dma_buf;
+	node->rnode.indicator = U32_MAX;
+
+	node->sport = sport;
+	INIT_LIST_HEAD(&node->list_entry);
+	list_add_tail(&node->list_entry, &sport->node_list);
+
+	node->efc = efc;
+	node->init = init;
+	node->targ = targ;
+
+	spin_lock_init(&node->pend_frames_lock);
+	INIT_LIST_HEAD(&node->pend_frames);
+	spin_lock_init(&node->active_ios_lock);
+	INIT_LIST_HEAD(&node->active_ios);
+	INIT_LIST_HEAD(&node->els_io_pend_list);
+	INIT_LIST_HEAD(&node->els_io_active_list);
+	efc->tt.scsi_io_alloc_enable(efc, node);
+
+	rc = efc->tt.hw_node_alloc(efc, &node->rnode, port_id, sport);
+	if (rc) {
+		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
+		return NULL;
+	}
+	/* zero the service parameters */
+	memset(node->sparm_dma_buf.virt, 0, node->sparm_dma_buf.size);
+
+	node->rnode.node = node;
+	node->sm.app = node;
+	node->evtdepth = 0;
+
+	efc_node_update_display_name(node);
+
+	efc_spv_set(sport->lookup, port_id, node);
+
+	return node;
+}
+
+int
+efc_node_free(struct efc_node *node)
+{
+	struct efc_sli_port *sport;
+	struct efc *efc;
+	int rc = 0;
+	struct efc_node *ns = NULL;
+
+	sport = node->sport;
+	efc = node->efc;
+
+	node_printf(node, "Free'd\n");
+
+	if (node->refound) {
+		/*
+		 * Save the name server node. We will send fake RSCN event at
+		 * the end to handle ignored RSCN event during node deletion
+		 */
+		ns = efc_node_find(node->sport, FC_FID_DIR_SERV);
+	}
+
+	list_del(&node->list_entry);
+
+	/* Free HW resources */
+	rc = efc->tt.hw_node_free_resources(efc, &node->rnode);
+	if (EFC_HW_RTN_IS_ERROR(rc)) {
+		efc_log_test(efc, "efc_hw_node_free failed: %d\n", rc);
+		rc = -1;
+	}
+
+	/* if the gidpt_delay_timer is still running, then delete it */
+	if (timer_pending(&node->gidpt_delay_timer))
+		del_timer(&node->gidpt_delay_timer);
+
+	/* remove entry from sparse vector list */
+	if (!sport->lookup) {
+		efc_log_test(node->efc,
+			     "assertion failed: sport lookup is NULL\n");
+		return -1;
+	}
+
+	efc_spv_set(sport->lookup, node->rnode.fc_id, NULL);
+
+	/*
+	 * If the node_list is empty,
+	 * then post a ALL_CHILD_NODES_FREE event to the sport,
+	 * after the lock is released.
+	 * The sport may be free'd as a result of the event.
+	 */
+	if (list_empty(&sport->node_list))
+		efc_sm_post_event(&sport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
+				  NULL);
+
+	node->sport = NULL;
+	node->sm.current_state = NULL;
+
+	/* return to free list */
+	INIT_LIST_HEAD(&node->list_entry);
+	list_add_tail(&node->list_entry, &efc->nodes_free_list);
+
+	if (ns) {
+		/* sending fake RSCN event to name server node */
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
+	}
+
+	return rc;
+}
+
+void
+efc_node_force_free(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	/* shutdown sm processing */
+	efc_sm_disable(&node->sm);
+
+	strncpy(node->prev_state_name, node->current_state_name,
+		sizeof(node->prev_state_name));
+	strncpy(node->current_state_name, "disabled",
+		sizeof(node->current_state_name));
+
+	efc->tt.node_io_cleanup(efc, node, true);
+	efc->tt.node_els_cleanup(efc, node, true);
+
+	/* manually purge pending frames (if any) */
+	efc->tt.node_purge_pending(efc, node);
+
+	efc_node_free(node);
+}
+
+static void
+efc_dma_copy_in(struct efc_dma *dma, void *buffer, u32 buffer_length)
+{
+	if (!dma)
+		return;
+	if (!buffer)
+		return;
+	if (buffer_length == 0)
+		return;
+	if (buffer_length > dma->size)
+		buffer_length = dma->size;
+
+	memcpy(dma->virt, buffer, buffer_length);
+	dma->len = buffer_length;
+}
+
+int
+efc_node_attach(struct efc_node *node)
+{
+	int rc = 0;
+	struct efc_sli_port *sport = node->sport;
+	struct efc_domain *domain = sport->domain;
+	struct efc *efc = node->efc;
+
+	if (!domain->attached) {
+		efc_log_test(efc,
+			     "Warning: unattached domain\n");
+		return -1;
+	}
+	/* Update node->wwpn/wwnn */
+
+	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
+				efc_node_get_wwpn(node));
+	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
+				efc_node_get_wwnn(node));
+
+	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
+			sizeof(node->service_params) - 4);
+
+	/* take lock to protect node->rnode.attached */
+	rc = efc->tt.hw_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
+	if (EFC_HW_RTN_IS_ERROR(rc))
+		efc_log_test(efc, "efc_hw_node_attach failed: %d\n", rc);
+
+	return rc;
+}
+
+void
+efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
+{
+	switch (fc_id) {
+	case FC_FID_FLOGI:
+		snprintf(buffer, buffer_length, "fabric");
+		break;
+	case FC_FID_FCTRL:
+		snprintf(buffer, buffer_length, "fabctl");
+		break;
+	case FC_FID_DIR_SERV:
+		snprintf(buffer, buffer_length, "nserve");
+		break;
+	default:
+		if (fc_id == FC_FID_DOM_MGR) {
+			snprintf(buffer, buffer_length, "dctl%02x",
+				 (fc_id & 0x0000ff));
+		} else {
+			snprintf(buffer, buffer_length, "%06x", fc_id);
+		}
+		break;
+	}
+}
+
+void
+efc_node_update_display_name(struct efc_node *node)
+{
+	u32 port_id = node->rnode.fc_id;
+	struct efc_sli_port *sport = node->sport;
+	char portid_display[16];
+
+	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
+
+	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
+		 sport->display_name, portid_display);
+}
+
+void
+efc_node_send_ls_io_cleanup(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+
+	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
+		efc_log_debug(efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
+			      node->display_name, node->ls_acc_oxid);
+
+		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+		node->ls_acc_io = NULL;
+	}
+}
+
+void *
+__efc_node_shutdown(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	unsigned long flags = 0;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		efc_node_hold_frames(node);
+		efc_assert(efc_node_active_ios_empty(node), NULL);
+		efc_assert(efc_els_io_list_empty(node,
+						 &node->els_io_active_list),
+			   NULL);
+
+		/* by default, we will be freeing node after we unwind */
+		node->req_free = true;
+
+		switch (node->shutdown_reason) {
+		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
+			/*
+			 * sm: if shutdown reason is
+			 * implicit logout / efc_node_attach
+			 */
+			/* Node shutdown because of PLOGI received when node
+			 * already logged in. We have PLOGI service
+			 * parameters, so submit node attach; we won't be
+			 * freeing this node
+			 */
+
+			/* currently, only case for implicit logo is PLOGI
+			 * recvd. Thus, node's ELS IO pending list won't be
+			 * empty (PLOGI will be on it)
+			 */
+			efc_assert(node->send_ls_acc ==
+				   EFC_NODE_SEND_LS_ACC_PLOGI, NULL);
+			node_printf(node,
+				    "Shutdown reason: implicit logout, re-authenticate\n");
+
+			efc->tt.scsi_io_alloc_enable(efc, node);
+
+			/* Re-attach node with the same HW node resources */
+			node->req_free = false;
+			rc = efc_node_attach(node);
+			efc_node_transition(node, __efc_d_wait_node_attach,
+					    NULL);
+			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
+				efc_node_post_event(node,
+						    EFC_EVT_NODE_ATTACH_OK,
+						    NULL);
+			}
+			break;
+		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO: {
+			s8 pend_frames_empty;
+			struct list_head *list;
+
+			/* cleanup any pending LS_ACC ELSs */
+			efc_node_send_ls_io_cleanup(node);
+			list = &node->els_io_pend_list;
+			efc_assert(efc_els_io_list_empty(node, list), NULL);
+
+			spin_lock_irqsave(&node->pend_frames_lock, flags);
+			pend_frames_empty = list_empty(&node->pend_frames);
+			spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+			/*
+			 * there are two scenarios where we want to keep
+			 * this node alive:
+			 * 1. there are pending frames that need to be
+			 *    processed or
+			 * 2. we're an initiator and the remote node is
+			 *    a target and we need to re-authenticate
+			 */
+			node_printf(node,
+				    "Shutdown: explicit logo pend=%d sport.ini=%d node.tgt=%d\n",
+				    !pend_frames_empty,
+				    node->sport->enable_ini, node->targ);
+
+			if (!pend_frames_empty ||
+			    (node->sport->enable_ini && node->targ)) {
+				u8 send_plogi = false;
+
+				if (node->sport->enable_ini && node->targ) {
+					/*
+					 * we're an initiator and
+					 * node shutting down is a target;
+					 * we'll need to re-authenticate in
+					 * initial state
+					 */
+					send_plogi = true;
+				}
+
+				/*
+				 * transition to __efc_d_init
+				 * (will retain HW node resources)
+				 */
+				efc->tt.scsi_io_alloc_enable(efc, node);
+				node->req_free = false;
+
+				/*
+				 * either pending frames exist,
+				 * or we're re-authenticating with PLOGI
+				 * (or both); in either case,
+				 * return to initial state
+				 */
+				efc_node_init_device(node, send_plogi);
+			}
+			/* else: let node shutdown occur */
+			break;
+		}
+		case EFC_NODE_SHUTDOWN_DEFAULT:
+		default: {
+			struct list_head *list;
+
+			/*
+			 * shutdown due to link down,
+			 * node going away (xport event) or
+			 * sport shutdown, purge pending and
+			 * proceed to cleanup node
+			 */
+
+			/* cleanup any pending LS_ACC ELSs */
+			efc_node_send_ls_io_cleanup(node);
+			list = &node->els_io_pend_list;
+			efc_assert(efc_els_io_list_empty(node, list), NULL);
+
+			node_printf(node,
+				    "Shutdown reason: default, purge pending\n");
+			efc->tt.node_purge_pending(efc, node);
+			break;
+		}
+		}
+
+		break;
+	}
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static int
+efc_node_check_els_quiesced(struct efc_node *node)
+{
+	/* check to see if ELS requests, completions are quiesced */
+	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
+	    efc_els_io_list_empty(node, &node->els_io_active_list)) {
+		if (!node->attached) {
+			/* hw node detach already completed, proceed */
+			node_printf(node, "HW node not attached\n");
+			efc_node_transition(node,
+					    __efc_node_wait_ios_shutdown,
+					     NULL);
+		} else {
+			/*
+			 * hw node detach hasn't completed,
+			 * transition and wait
+			 */
+			node_printf(node, "HW node still attached\n");
+			efc_node_transition(node, __efc_node_wait_node_free,
+					    NULL);
+		}
+		return 1;
+	}
+	return 0;
+}
+
+void
+efc_node_initiate_cleanup(struct efc_node *node)
+{
+	struct efc *efc;
+
+	efc = node->efc;
+	efc->tt.node_els_cleanup(efc, node, false);
+
+	/*
+	 * if ELS's have already been quiesced, will move to next state
+	 * if ELS's have not been quiesced, abort them
+	 */
+	if (efc_node_check_els_quiesced(node) == 0) {
+		/*
+		 * Abort all ELS's since ELS's won't be aborted by HW
+		 * node free.
+		 */
+		efc_node_hold_frames(node);
+		efc->tt.node_abort_all_els(efc, node);
+		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
+	}
+}
+
+/* Node state machine: Wait for all ELSs to complete */
+void *
+__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	bool check_quiesce = false;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			node_printf(node, "All ELS IOs complete\n");
+			check_quiesce = true;
+		}
+		break;
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* all ELS IO's complete */
+		node_printf(node, "All ELS IOs complete\n");
+		efc_assert(efc_els_io_list_empty(node,
+						 &node->els_io_active_list),
+			   NULL);
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	if (check_quiesce)
+		efc_node_check_els_quiesced(node);
+
+	return NULL;
+}
+
+/* Node state machine: Wait for a HW node free event to complete */
+void *
+__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+		/* node is officially no longer attached */
+		node->attached = false;
+		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		/* As IOs and ELS IOs complete we expect to get these events */
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* Fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * State is entered when a node receives a shutdown event, and it's waiting
+ * for all the active IOs and ELS IOs associated with the node to complete.
+ */
+void *
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+
+		/* first check to see if no ELS IOs are outstanding */
+		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			/* If there are any active IOs, free them. */
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		}
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		if (efc_node_active_ios_empty(node) &&
+		    efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		}
+		break;
+	}
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
+			      efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
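+/* Node state machine: Default event handler, common to all node states */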
+void *
+__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+	struct efc *efc = NULL;
+	struct efc_node_cb *cbdata = arg;
+
+	efc_assert(ctx, NULL);
+	efc_assert(ctx->app, NULL);
+	node = ctx->app;
+	efc_assert(node->efc, NULL);
+	efc = node->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
+	case EFC_EVT_NODE_MISSING:
+	case EFC_EVT_FCP_CMD_RCVD:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		node->refound = true;
+		break;
+
+	/*
+	 * node->attached must be set appropriately
+	 * for all node attach/detach events
+	 */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		break;
+
+	/*
+	 * handle any ELS completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		break;
+
+	/*
+	 * handle any ELS request completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_ELS_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * Unsupported ELS was received,
+		 * send LS_RJT, command not supported
+		 */
+		efc_log_debug(efc,
+			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
+			      node->display_name, funcname,
+			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
+
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD:
+	case EFC_EVT_FLOGI_RCVD:
+	case EFC_EVT_LOGO_RCVD:
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/* sm: / send ELS_RJT */
+		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+		/* if we didn't catch this in a state, send generic LS_RJT */
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+						ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+
+		break;
+	}
+	case EFC_EVT_ABTS_RCVD: {
+		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+
+		/* sm: / send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		efc_log_test(node->efc, "[%s] %-20s %-20s not handled\n",
+			     node->display_name, funcname,
+			     efc_sm_event_name(evt));
+		break;
+	}
+	return NULL;
+}
+
+void
+efc_node_save_sparms(struct efc_node *node, void *payload)
+{
+	memcpy(node->service_params, payload, sizeof(node->service_params));
+}
+
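+/* Post an event to the node state machine, then dispatch pending frames */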
+void
+efc_node_post_event(struct efc_node *node,
+		    enum efc_sm_event evt, void *arg)
+{
+	bool free_node = false;
+
+	node->evtdepth++;
+
+	efc_sm_post_event(&node->sm, evt, arg);
+
+	/* If our event call depth is one and
+	 * we're not holding frames
+	 * then we can dispatch any pending frames.
+	 * We don't want to allow the efc_process_node_pending()
+	 * call to recurse.
+	 */
+	if (!node->hold_frames && node->evtdepth == 1)
+		efc_process_node_pending(node);
+
+	node->evtdepth--;
+
+	/*
+	 * Free the node object if so requested,
+	 * and we're at an event call depth of zero
+	 */
+	if (node->evtdepth == 0 && node->req_free)
+		free_node = true;
+
+	if (free_node)
+		efc_node_free(node);
+}
+
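+/* Transition the node SM, posting EXIT/ENTER (or REENTER) events */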
+void
+efc_node_transition(struct efc_node *node,
+		    void *(*state)(struct efc_sm_ctx *,
+				   enum efc_sm_event, void *), void *data)
+{
+	struct efc_sm_ctx *ctx = &node->sm;
+
+	if (ctx->current_state == state) {
+		efc_node_post_event(node, EFC_EVT_REENTER, data);
+	} else {
+		efc_node_post_event(node, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_node_post_event(node, EFC_EVT_ENTER, data);
+	}
+}
+
+void
+efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name)
+{
+	memset(buffer, 0, buffer_len);
+
+	snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
+}
+
+uint64_t
+efc_node_get_wwpn(struct efc_node *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+uint64_t
+efc_node_get_wwnn(struct efc_node *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwnn);
+}
+
+int
+efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		       void *arg, uint8_t cmd,
+			void *(*efc_node_common_func)(const char *,
+						      struct efc_sm_ctx *,
+			       enum efc_sm_event, void *),
+			const char *funcname)
+{
+	return 0;
+}
+
+int
+efc_node_check_ns_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		      void *arg, uint16_t cmd,
+		       void *(*efc_node_common_func)(const char *,
+						     struct efc_sm_ctx *,
+			      enum efc_sm_event, void *),
+		       const char *funcname)
+{
+	return 0;
+}
+
+int
+efc_node_active_ios_empty(struct efc_node *node)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	empty = list_empty(&node->active_ios);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	empty = list_empty(list);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+void
+efc_node_pause(struct efc_node *node,
+	       void *(*state)(struct efc_sm_ctx *,
+			      enum efc_sm_event, void *))
+
+{
+	node->nodedb_state = state;
+	efc_node_transition(node, __efc_node_paused, NULL);
+}
+
+/**
+ * This state is entered when a node is "paused". When resumed, the node
+ * is transitioned to the previously saved state (node->nodedb_state).
+ */
+void *
+__efc_node_paused(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node_printf(node, "Paused\n");
+		break;
+
+	case EFC_EVT_RESUME: {
+		void *(*pf)(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+
+		pf = node->nodedb_state;
+
+		node->nodedb_state = NULL;
+		efc_node_transition(node, pf, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node->req_free = true;
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+/* Posts a resume event to the paused node */
+int
+efc_node_resume(struct efc_node *node)
+{
+	efc_node_post_event(node, EFC_EVT_RESUME, NULL);
+
+	return 0;
+}
+
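+/* Map a received ELS frame to a node SM event and post it to the node */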
+int
+efc_node_recv_els_frame(struct efc_node *node,
+			struct efc_hw_sequence *seq)
+{
+	unsigned long flags = 0;
+	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
+	struct {
+		u32 cmd;
+		enum efc_sm_event evt;
+		u32 payload_size;
+	} els_cmd_list[] = {
+		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
+		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
+		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
+		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
+		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
+		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
+	};
+	struct efc_node_cb cbdata;
+	u8 *buf = seq->payload->dma.virt;
+	enum efc_sm_event evt = EFC_EVT_ELS_RCVD;
+	u32 i;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	/* find a matching event for the ELS command */
+	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
+		if (els_cmd_list[i].cmd == buf[0]) {
+			evt = els_cmd_list[i].evt;
+			break;
+		}
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, evt, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+
+	return 0;
+}
+
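+/* CT commands are not handled by the node; reject them with FC_FS_RJT */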
+int
+efc_node_recv_ct_frame(struct efc_node *node,
+		       struct efc_hw_sequence *seq)
+{
+	struct fc_ct_hdr *iu = seq->payload->dma.virt;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efc *efc = node->efc;
+	u16 gscmd = be16_to_cpu(iu->ct_cmd);
+
+	efc_log_err(efc, "[%s] Received cmd :%x sending CT_REJECT\n",
+		    node->display_name, gscmd);
+	efc->tt.send_ct_rsp(efc, node, be16_to_cpu(hdr->fh_ox_id), iu,
+			    FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
+	return 0;
+}
+
+int
+efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
+{
+	struct efc_node_cb cbdata;
+	unsigned long flags = 0;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+
+	return 1;
+}
+
+int
+efc_node_recv_bls_no_sit(struct efc_node *node,
+			 struct efc_hw_sequence *seq)
+{
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+
+	node_printf(node,
+		    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
+		    cpu_to_be32(((u32 *)hdr)[0]),
+		    cpu_to_be32(((u32 *)hdr)[1]),
+		    cpu_to_be32(((u32 *)hdr)[2]),
+		    cpu_to_be32(((u32 *)hdr)[3]),
+		    cpu_to_be32(((u32 *)hdr)[4]),
+		    cpu_to_be32(((u32 *)hdr)[5]));
+
+	return -1;
+}
+
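+/* Dispatch frames queued on the node's pending frame list */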
+int
+efc_process_node_pending(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (node->hold_frames)
+			break;
+
+		/* Get next frame/sequence */
+		seq = NULL;
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		if (!list_empty(&node->pend_frames)) {
+			seq = list_first_entry(&node->pend_frames,
+					       struct efc_hw_sequence,
+					       list_entry);
+			list_del(&seq->list_entry);
+		}
+		if (!seq) {
+			pend_frames_processed = node->pend_frames_processed;
+			node->pend_frames_processed = 0;
+			spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+			break;
+		}
+		node->pend_frames_processed++;
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		efc_node_dispatch_frame(node, seq);
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efc, "%u node frames held and processed\n",
+			      pend_frames_processed);
+
+	return 0;
+}
+
+void
+efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void
+efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void
+efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
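+/* Translate a HW ELS event into an SM event and post it to the node */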
+void efc_node_post_els_resp(struct efc_node *node,
+			    enum efc_hw_node_els_event evt, void *arg)
+{
+	enum efc_sm_event sm_event = EFC_EVT_LAST;
+	struct efc *efc = node->efc;
+	unsigned long flags = 0;
+
+	switch (evt) {
+	case EFC_HW_SRRS_ELS_REQ_OK:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
+		break;
+	case EFC_HW_SRRS_ELS_CMPL_OK:
+		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
+		break;
+	case EFC_HW_SRRS_ELS_REQ_FAIL:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
+		break;
+	case EFC_HW_SRRS_ELS_CMPL_FAIL:
+		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
+		break;
+	case EFC_HW_SRRS_ELS_REQ_RJT:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
+		break;
+	case EFC_HW_ELS_REQ_ABORTED:
+		sm_event = EFC_EVT_ELS_REQ_ABORTED;
+		break;
+	default:
+		efc_log_test(efc, "unhandled event %#x\n", evt);
+		return;
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, sm_event, arg);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void efc_node_post_shutdown(struct efc_node *node,
+			    u32 evt, void *arg)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
new file mode 100644
index 000000000000..a8e7b7a7fe13
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFC_NODE_H__)
+#define __EFC_NODE_H__
+#include "scsi/fc/fc_ns.h"
+
+#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	(1 << 0)
+#define EFC_NODEDB_PAUSE_NAMESERVER	(1 << 1)
+#define EFC_NODEDB_PAUSE_NEW_NODES	(1 << 2)
+
+#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efc, "[%s] [%04x][i:%04x t:%04x h:%04x]" fmt, \
+	io->node->display_name, io->instance_index, io->init_task_tag, \
+	io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+static inline void
+efc_node_evt_set(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		 const char *handler)
+{
+	struct efc_node *node = ctx->app;
+
+	if (evt == EFC_EVT_ENTER) {
+		strncpy(node->current_state_name, handler,
+			sizeof(node->current_state_name));
+	} else if (evt == EFC_EVT_EXIT) {
+		strncpy(node->prev_state_name, node->current_state_name,
+			sizeof(node->prev_state_name));
+		strncpy(node->current_state_name, "invalid",
+			sizeof(node->current_state_name));
+	}
+	node->prev_evt = node->current_evt;
+	node->current_evt = evt;
+}
+
+/**
+ * hold frames in pending frame list
+ *
+ * Unsolicited receive frames are held on the node pending frame list,
+ * rather than being processed.
+ */
+
+static inline void
+efc_node_hold_frames(struct efc_node *node)
+{
+	efc_assert(node);
+	node->hold_frames = true;
+}
+
+/**
+ * accept frames
+ *
+ * Unsolicited receive frames are processed rather than being held on the
+ * node pending frame list.
+ */
+
+static inline void
+efc_node_accept_frames(struct efc_node *node)
+{
+	efc_assert(node);
+	node->hold_frames = false;
+}
+
+extern int
+efc_node_create_pool(struct efc *efc, u32 node_count);
+extern void
+efc_node_free_pool(struct efc *efc);
+extern struct efc_node *
+efc_node_get_instance(struct efc *efc, u32 instance);
+
+/* Node initiator/target enable defines */
+enum efc_node_enable {
+	EFC_NODE_ENABLE_x_TO_x,
+	EFC_NODE_ENABLE_x_TO_T,
+	EFC_NODE_ENABLE_x_TO_I,
+	EFC_NODE_ENABLE_x_TO_IT,
+	EFC_NODE_ENABLE_T_TO_x,
+	EFC_NODE_ENABLE_T_TO_T,
+	EFC_NODE_ENABLE_T_TO_I,
+	EFC_NODE_ENABLE_T_TO_IT,
+	EFC_NODE_ENABLE_I_TO_x,
+	EFC_NODE_ENABLE_I_TO_T,
+	EFC_NODE_ENABLE_I_TO_I,
+	EFC_NODE_ENABLE_I_TO_IT,
+	EFC_NODE_ENABLE_IT_TO_x,
+	EFC_NODE_ENABLE_IT_TO_T,
+	EFC_NODE_ENABLE_IT_TO_I,
+	EFC_NODE_ENABLE_IT_TO_IT,
+};
+
+static inline enum efc_node_enable
+efc_node_get_enable(struct efc_node *node)
+{
+	u32 retval = 0;
+
+	if (node->sport->enable_ini)
+		retval |= (1U << 3);
+	if (node->sport->enable_tgt)
+		retval |= (1U << 2);
+	if (node->init)
+		retval |= (1U << 1);
+	if (node->targ)
+		retval |= (1U << 0);
+	return (enum efc_node_enable)retval;
+}
+
+extern int
+efc_node_check_els_req(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg,
+		       u8 cmd, void *(*efc_node_common_func)(const char *,
+		       struct efc_sm_ctx *, enum efc_sm_event, void *),
+		       const char *funcname);
+extern int
+efc_node_check_ns_req(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg,
+		  u16 cmd, void *(*efc_node_common_func)(const char *,
+		  struct efc_sm_ctx *, enum efc_sm_event, void *),
+		  const char *funcname);
+extern int
+efc_node_attach(struct efc_node *node);
+extern struct efc_node *
+efc_node_alloc(struct efc_sli_port *sport, u32 port_id,
+		bool init, bool targ);
+extern int
+efc_node_free(struct efc_node *efc);
+extern void
+efc_node_force_free(struct efc_node *efc);
+extern void
+efc_node_update_display_name(struct efc_node *node);
+void efc_node_post_event(struct efc_node *node, enum efc_sm_event evt,
+			 void *arg);
+
+extern void *
+__efc_node_shutdown(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void
+efc_node_save_sparms(struct efc_node *node, void *payload);
+extern void
+efc_node_transition(struct efc_node *node,
+		    void *(*state)(struct efc_sm_ctx *,
+		    enum efc_sm_event, void *), void *data);
+extern void *
+__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+
+extern void
+efc_node_initiate_cleanup(struct efc_node *node);
+
+extern void
+efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name);
+extern uint64_t
+efc_node_get_wwpn(struct efc_node *node);
+
+extern void
+efc_node_pause(struct efc_node *node,
+	       void *(*state)(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg));
+extern int
+efc_node_resume(struct efc_node *node);
+extern void *
+__efc_node_paused(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+extern int
+efc_node_active_ios_empty(struct efc_node *node);
+extern void
+efc_node_send_ls_io_cleanup(struct efc_node *node);
+
+extern int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+extern int
+efc_process_node_pending(struct efc_node *node);
+
+#endif /* __EFC_NODE_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 13/32] elx: libefc: Fabric node state machine interfaces
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (11 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 12/32] elx: libefc: Remote node " James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  8:34   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 14/32] elx: libefc: FC node ELS and state handling James Smart
                   ` (19 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Fabric node initialization and logins.
- Name/Directory Services node.
- Fabric Controller node to process RSCN events.

These are all interactions with remote ports that correspond
to well-known fabric entities.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_fabric.c | 1762 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_fabric.h |  116 +++
 2 files changed, 1878 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h

diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
new file mode 100644
index 000000000000..382a8dc32ce0
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.c
@@ -0,0 +1,1762 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * This file implements remote node state machines for:
+ * - Fabric logins.
+ * - Fabric controller events.
+ * - Name/directory services interaction.
+ * - Point-to-point logins.
+ */
+
+/*
+ * fabric_sm Node State Machine: Fabric States
+ * ns_sm Node State Machine: Name/Directory Services States
+ * p2p_sm Node State Machine: Point-to-Point Node States
+ */
+
+#include "efc.h"
+
+static void
+efc_fabric_initiate_shutdown(struct efc_node *node)
+{
+	int rc;
+	struct efc *efc = node->efc;
+
+	efc->tt.scsi_io_alloc_disable(efc, node);
+
+	if (node->attached) {
+		/* issue hw node free; don't care if succeeds right away
+		 * or sometime later, will check node->attached later in
+		 * shutdown process
+		 */
+		rc = efc->tt.hw_node_detach(efc, &node->rnode);
+		if (rc != EFC_HW_RTN_SUCCESS &&
+		    rc != EFC_HW_RTN_SUCCESS_SYNC) {
+			node_printf(node, "Failed freeing HW node, rc=%d\n",
+				    rc);
+		}
+	}
+	/*
+	 * node has either been detached or is in the process of being detached,
+	 * call common node's initiate cleanup function
+	 */
+	efc_node_initiate_cleanup(node);
+}
+
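+/* Fabric node state machine: Default event handler for all fabric states */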
+static void *
+__efc_fabric_common(const char *funcname, struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+
+	efc_assert(ctx, NULL);
+	efc_assert(ctx->app, NULL);
+	node = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	case EFC_EVT_SHUTDOWN:
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
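+/* Fabric node state machine: Initial state; send FLOGI */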
+void *
+__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_REENTER:	/* not sure why we're getting these ... */
+		efc_log_debug(efc, ">>> reenter !!\n");
+		/* fall through */
+	case EFC_EVT_ENTER:
+		/*  sm: / send FLOGI */
+		efc->tt.els_send(efc, node, ELS_FLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void
+efc_fabric_set_topology(struct efc_node *node,
+			enum efc_sport_topology topology)
+{
+	node->sport->topology = topology;
+}
+
+void
+efc_fabric_notify_topology(struct efc_node *node)
+{
+	struct efc_node *tmp_node;
+	struct efc_node *next;
+	enum efc_sport_topology topology = node->sport->topology;
+
+	/*
+	 * now loop through the nodes in the sport
+	 * and send topology notification
+	 */
+	list_for_each_entry_safe(tmp_node, next, &node->sport->node_list,
+				 list_entry) {
+		if (tmp_node != node) {
+			efc_node_post_event(tmp_node,
+					    EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
+					    (void *)topology);
+		}
+	}
+}
+
+static bool efc_rnode_is_nport(struct fc_els_flogi *rsp)
+{
+	return !(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_FPORT);
+}
+
+static bool efc_rnode_is_npiv_capable(struct fc_els_flogi *rsp)
+{
+	return !!(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_NPIV_ACC);
+}
+
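+/* Fabric node state machine: Wait for an FLOGI response */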
+void *
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+
+		memcpy(node->sport->domain->flogi_service_params,
+		       cbdata->els_rsp.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* Check to see if the fabric is an F_PORT or an N_PORT */
+		if (!efc_rnode_is_nport(cbdata->els_rsp.virt)) {
+			/* sm: if not nport / efc_domain_attach */
+			/* ext_status has the fc_id, attach domain */
+			if (efc_rnode_is_npiv_capable(cbdata->els_rsp.virt)) {
+				efc_log_debug(node->efc,
+					      " NPIV is enabled at switch side\n");
+				//node->efc->sw_feature_cap |= 1<<10;
+			}
+			efc_fabric_set_topology(node,
+						EFC_SPORT_TOPOLOGY_FABRIC);
+			efc_fabric_notify_topology(node);
+			efc_assert(!node->sport->domain->attached, NULL);
+			efc_domain_attach(node->sport->domain,
+					  cbdata->ext_status);
+			efc_node_transition(node,
+					    __efc_fabric_wait_domain_attach,
+					    NULL);
+			break;
+		}
+
+		/*  sm: if nport and p2p_winner / efc_domain_attach */
+		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
+		if (efc_p2p_setup(node->sport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+			break;
+		}
+
+		if (node->sport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					     NULL);
+			if (node->sport->domain->attached &&
+			    !node->sport->domain->domain_notify_pend) {
+				/*
+				 * already attached,
+				 * just send ATTACH_OK
+				 */
+				node_printf(node,
+					    "p2p winner, domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/*
+			 * peer is p2p winner;
+			 * PLOGI will be received on the
+			 * remote SID=1 node;
+			 * this node has served its purpose
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_ELS_REQ_ABORTED:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		struct efc_sli_port *sport = node->sport;
+		/*
+		 * with these errors, we have no recovery,
+		 * so shutdown the sport, leave the link
+		 * up and the domain ready
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		node_printf(node,
+			    "FLOGI failed evt=%s, shutting down sport [%s]\n",
+			    efc_sm_event_name(evt), sport->display_name);
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
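+/* Vport fabric node state machine: Initial state; send FDISC */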
+void *
+__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send FDISC */
+		efc->tt.els_send(efc, node, ELS_FDISC,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+
+		efc_node_transition(node, __efc_fabric_fdisc_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* fc_id is in ext_status */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / efc_sport_attach */
+		efc_sport_attach(node->sport, cbdata->ext_status);
+		efc_node_transition(node, __efc_fabric_wait_domain_attach,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_log_err(node->efc, "FDISC failed, shutting down sport\n");
+		/* sm: / shutdown sport */
+		efc_sm_post_event(&node->sport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
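+/* Find or allocate the name services node and start its state machine */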
+static int
+efc_start_ns_node(struct efc_sli_port *sport)
+{
+	struct efc_node *ns;
+
+	/* Instantiate a name services node */
+	ns = efc_node_find(sport, FC_FID_DIR_SERV);
+	if (!ns) {
+		ns = efc_node_alloc(sport, FC_FID_DIR_SERV, false, false);
+		if (!ns)
+			return -1;
+	}
+	/*
+	 * for found ns, should we be transitioning from here?
+	 * breaks transition only
+	 *  1. from within state machine or
+	 *  2. if after alloc
+	 */
+	if (ns->efc->nodedb_mask & EFC_NODEDB_PAUSE_NAMESERVER)
+		efc_node_pause(ns, __efc_ns_init);
+	else
+		efc_node_transition(ns, __efc_ns_init, NULL);
+	return 0;
+}
+
+static int
+efc_start_fabctl_node(struct efc_sli_port *sport)
+{
+	struct efc_node *fabctl;
+
+	fabctl = efc_node_find(sport, FC_FID_FCTRL);
+	if (!fabctl) {
+		fabctl = efc_node_alloc(sport, FC_FID_FCTRL,
+					false, false);
+		if (!fabctl)
+			return -1;
+	}
+	/*
+	 * for a found fabctl node, should we be transitioning from here?
+	 * breaks transition only
+	 *  1. from within state machine or
+	 *  2. if after alloc
+	 */
+	efc_node_transition(fabctl, __efc_fabctl_init, NULL);
+	return 0;
+}
+
+void *
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+	case EFC_EVT_SPORT_ATTACH_OK: {
+		int rc;
+
+		rc = efc_start_ns_node(node->sport);
+		if (rc)
+			return NULL;
+
+		/* sm: if enable_ini / start fabctl node */
+		/* Instantiate the fabric controller (sends SCR) */
+		if (node->sport->enable_rscn) {
+			rc = efc_start_fabctl_node(node->sport);
+			if (rc)
+				return NULL;
+		}
+		efc_node_transition(node, __efc_fabric_idle, NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabric_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
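+/* Name services node state machine: Initial state; send PLOGI to the NS */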
+void *
+__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_plogi_wait_rsp, NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* Save service parameters */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_ns_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send RFTID */
+		efc->tt.els_send_ct(efc, node, FC_RCTL_ELS_REQ,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_rftid_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	/* ignore shutdown event as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFT_ID,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / send RFFID */
+		efc->tt.els_send_ct(efc, node, FC_NS_RFF_ID,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_rffid_wait_rsp, NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Waits for an RFFID response event; if configured for an initiator operation,
+ * a GIDPT name services request is issued.
+ */
+void *
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFF_ID,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		if (node->sport->enable_rscn) {
+			/* sm: if enable_rscn / send GIDPT */
+			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		} else {
+			/* if 'T' only, we're done, go to idle */
+			efc_node_transition(node, __efc_ns_idle, NULL);
+		}
+		break;
+	}
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
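+/*
+ * Process a GID_PT response payload: post MISSING events for nodes no
+ * longer reported, and create new remote nodes for newly reported FC_IDs.
+ */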
+static int
+efc_process_gidpt_payload(struct efc_node *node,
+			  void *data, u32 gidpt_len)
+{
+	u32 i, j;
+	struct efc_node *newnode;
+	struct efc_sli_port *sport = node->sport;
+	struct efc *efc = node->efc;
+	u32 port_id = 0, port_count, portlist_count;
+	struct efc_node *n;
+	struct efc_node **active_nodes;
+	int residual;
+	struct fc_ct_hdr *hdr = data;
+	struct fc_gid_pn_resp *gidpt = data + sizeof(*hdr);
+
+	residual = be16_to_cpu(hdr->ct_mr_size);
+
+	if (residual != 0)
+		efc_log_debug(node->efc, "residual is %u words\n", residual);
+
+	if (be16_to_cpu(hdr->ct_cmd) == FC_FS_RJT) {
+		node_printf(node,
+			    "GIDPT request failed: rsn x%x rsn_expl x%x\n",
+			    hdr->ct_reason, hdr->ct_explan);
+		return -1;
+	}
+
+	portlist_count = (gidpt_len - sizeof(*hdr)) / sizeof(*gidpt);
+
+	/* Count the number of nodes */
+	port_count = 0;
+	list_for_each_entry(n, &sport->node_list, list_entry) {
+		port_count++;
+	}
+
+	/* Allocate a buffer for all nodes */
+	active_nodes = kcalloc(port_count, sizeof(*active_nodes), GFP_ATOMIC);
+	if (!active_nodes) {
+		node_printf(node, "active_nodes allocation failed\n");
+		return -1;
+	}
+
+	/* Fill buffer with fc_id of active nodes */
+	i = 0;
+	list_for_each_entry(n, &sport->node_list, list_entry) {
+		port_id = n->rnode.fc_id;
+		switch (port_id) {
+		case FC_FID_FLOGI:
+		case FC_FID_FCTRL:
+		case FC_FID_DIR_SERV:
+			break;
+		default:
+			if (port_id != FC_FID_DOM_MGR)
+				active_nodes[i++] = n;
+			break;
+		}
+	}
+
+	/* update the active nodes buffer */
+	for (i = 0; i < portlist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		for (j = 0; j < port_count; j++) {
+			if (active_nodes[j] &&
+			    port_id == active_nodes[j]->rnode.fc_id) {
+				active_nodes[j] = NULL;
+			}
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+
+	/* Those remaining in the active_nodes[] are now gone ! */
+	for (i = 0; i < port_count; i++) {
+		/*
+		 * if we're an initiator and the remote node
+		 * is a target, then post the node missing event.
+		 * if we're target and we have enabled
+		 * target RSCN, then post the node missing event.
+		 */
+		if (active_nodes[i]) {
+			if ((node->sport->enable_ini &&
+			     active_nodes[i]->targ) ||
+			     (node->sport->enable_tgt &&
+			     enable_target_rscn(efc))) {
+				efc_node_post_event(active_nodes[i],
+						    EFC_EVT_NODE_MISSING,
+						     NULL);
+			} else {
+				node_printf(node,
+					    "GID_PT: skipping non-tgt port_id x%06x\n",
+					    active_nodes[i]->rnode.fc_id);
+			}
+		}
+	}
+	kfree(active_nodes);
+
+	for (i = 0; i < portlist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		/* Don't create node for ourselves */
+		if (port_id != node->rnode.sport->fc_id) {
+			newnode = efc_node_find(sport, port_id);
+			if (!newnode) {
+				if (node->sport->enable_ini) {
+					newnode = efc_node_alloc(sport,
+								 port_id,
+								  false,
+								  false);
+					if (!newnode) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						return -1;
+					}
+					/*
+					 * send PLOGI automatically
+					 * if initiator
+					 */
+					efc_node_init_device(newnode, true);
+				}
+				continue;
+			}
+
+			if (node->sport->enable_ini && newnode->targ) {
+				efc_node_post_event(newnode,
+						    EFC_EVT_NODE_REFOUND,
+						    NULL);
+			}
+			/*
+			 * original code sends ADISC,
+			 * has notion of "refound"
+			 */
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+	return 0;
+}
+
+/**
+ * Wait for a GIDPT response from the name server. Process the FC_IDs that are
+ * reported by creating new remote ports, as needed.
+ */
+void *
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_GID_PT,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / process GIDPT payload */
+		efc_process_gidpt_payload(node, cbdata->els_rsp.virt,
+					  cbdata->els_rsp.len);
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	{
+		/* not much we can do; will retry with the next RSCN */
+		node_printf(node, "GID_PT failed to complete\n");
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	/* if receive RSCN here, queue up another discovery processing */
+	case EFC_EVT_RSCN_RCVD: {
+		node_printf(node, "RSCN received during GID_PT processing\n");
+		node->rscn_pending = true;
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Idle state. Wait for RSCN received events (posted from the fabric
+ * controller), then restart the GIDPT name services query and processing.
+ */
+void *
+__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->rscn_pending)
+			break;
+
+		node_printf(node, "RSCN pending, restart discovery\n");
+		node->rscn_pending = false;
+
+			/* fall through */
+
+	case EFC_EVT_RSCN_RCVD: {
+		/* sm: / send GIDPT */
+		/*
+		 * If target RSCN processing is enabled,
+		 * and this is target only (not initiator),
+		 * and tgt_rscn_delay is non-zero,
+		 * then we delay issuing the GID_PT
+		 */
+		if (efc->tgt_rscn_delay_msec != 0 &&
+		    !node->sport->enable_ini && node->sport->enable_tgt &&
+		    enable_target_rscn(efc)) {
+			efc_node_transition(node, __efc_ns_gidpt_delay, NULL);
+		} else {
+			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * Handle the GIDPT delay timer callback.
+ * Post an EFC_EVT_GIDPT_DELAY_EXPIRED event to the passed-in node.
+ */
+static void
+gidpt_delay_timer_cb(struct timer_list *t)
+{
+	struct efc_node *node = from_timer(node, t, gidpt_delay_timer);
+
+	del_timer(&node->gidpt_delay_timer);
+
+	efc_node_post_event(node, EFC_EVT_GIDPT_DELAY_EXPIRED, NULL);
+}
+
+/**
+ * Name services node state machine: Delayed GIDPT.
+ *
+ * Waiting for GIDPT delay to expire before submitting GIDPT to name server.
+ */
+void *
+__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		time_t delay_msec;
+
+		/*
+		 * Compute the delay time.
+		 * Set to tgt_rscn_delay; if the time since the last GIDPT
+		 * is less than tgt_rscn_period, use tgt_rscn_period instead.
+		 */
+		delay_msec = efc->tgt_rscn_delay_msec;
+		if ((jiffies_to_msecs(jiffies) - node->time_last_gidpt_msec)
+		    < efc->tgt_rscn_period_msec) {
+			delay_msec = efc->tgt_rscn_period_msec;
+		}
+		timer_setup(&node->gidpt_delay_timer, &gidpt_delay_timer_cb,
+			    0);
+		mod_timer(&node->gidpt_delay_timer,
+			  jiffies + msecs_to_jiffies(delay_msec));
+
+		break;
+	}
+
+	case EFC_EVT_GIDPT_DELAY_EXPIRED:
+		node->time_last_gidpt_msec = jiffies_to_msecs(jiffies);
+
+		efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_gidpt_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_RSCN_RCVD: {
+		efc_log_debug(efc,
+			      "RSCN received while in GIDPT delay - no action\n");
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * Fabric controller node state machine: Initial state.
+ *
+ * Issue a PLOGI to a well-known fabric controller address.
+ */
+void *
+__efc_fabctl_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* no need to login to fabric controller, just send SCR */
+		efc->tt.els_send(efc, node, ELS_SCR,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Fabric controller node state machine: Wait for a node attach request
+ * to complete.
+ *
+ * If the attach completes successfully, issue an SCR to the fabric
+ * controller, subscribing to all RSCN events.
+ */
+void *
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send SCR */
+		efc->tt.els_send(efc, node, ELS_SCR,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Fabric controller node state machine:
+ * Wait for an SCR response from the fabric controller.
+ */
+void *
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_SCR,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static void
+efc_process_rscn(struct efc_node *node, struct efc_node_cb *cbdata)
+{
+	struct efc *efc = node->efc;
+	struct efc_sli_port *sport = node->sport;
+	struct efc_node *ns;
+
+	/* Forward this event to the name-services node */
+	ns = efc_node_find(sport, FC_FID_DIR_SERV);
+	if (ns)
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, cbdata);
+	else
+		efc_log_warn(efc, "can't find name server node\n");
+}
+
+/* Fabric controller node state machine: Ready.
+ * In this state, the fabric controller sends a RSCN, which is received
+ * by this node and is forwarded to the name services node object; and
+ * the RSCN LS_ACC is sent.
+ */
+void *
+__efc_fabctl_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * sm: / process RSCN (forward to name services node),
+		 * send LS_ACC
+		 */
+		efc_process_rscn(node, cbdata);
+		efc->tt.els_send_resp(efc, node, ELS_LS_ACC,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_fabctl_wait_ls_acc_cmpl,
+				    NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static uint64_t
+efc_get_wwpn(struct fc_els_flogi *sp)
+{
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
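+/* Compare WWPNs to determine which port is the point-to-point winner */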
+static int
+efc_rnode_is_winner(struct efc_sli_port *sport)
+{
+	struct fc_els_flogi *remote_sp;
+	u64 remote_wwpn;
+	u64 local_wwpn = sport->wwpn;
+
+	remote_sp = (struct fc_els_flogi *)sport->domain->flogi_service_params;
+	remote_wwpn = efc_get_wwpn(remote_sp);
+
+	efc_log_debug(sport->efc, "r: %llx\n",
+		      be64_to_cpu(remote_sp->fl_wwpn));
+	efc_log_debug(sport->efc, "l: %llx\n", local_wwpn);
+
+	if (remote_wwpn == local_wwpn) {
+		efc_log_warn(sport->efc,
+			     "WWPN of remote node [%08x %08x] matches local WWPN\n",
+			     (u32)(local_wwpn >> 32ll),
+			     (u32)local_wwpn);
+		return -1;
+	}
+
+	return (remote_wwpn > local_wwpn);
+}
+
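+/* P2P node state machine: Wait for the domain attach to complete */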
+void *
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_sli_port *sport = node->sport;
+		struct efc_node *rnode;
+
+		/*
+		 * this transient node (SID=0 (recv'd FLOGI)
+		 * or DID=fabric (sent FLOGI))
+		 * is the p2p winner, will use a separate node
+		 * to send PLOGI to peer
+		 */
+		efc_assert(node->sport->p2p_winner, NULL);
+
+		rnode = efc_node_find(sport, node->sport->p2p_remote_port_id);
+		if (rnode) {
+			/*
+			 * the "other" transient p2p node has
+			 * already kicked off the
+			 * new node from which PLOGI is sent
+			 */
+			node_printf(node,
+				    "Node with fc_id x%x already exists\n",
+				    rnode->rnode.fc_id);
+		} else {
+			/*
+			 * create new node (SID=1, DID=2)
+			 * from which to send PLOGI
+			 */
+			rnode = efc_node_alloc(sport,
+					       sport->p2p_remote_port_id,
+						false, false);
+			if (!rnode) {
+				efc_log_err(efc, "node alloc failed\n");
+				return NULL;
+			}
+
+			efc_fabric_notify_topology(node);
+			/* sm: / allocate p2p remote node */
+			efc_node_transition(rnode, __efc_p2p_rnode_init,
+					    NULL);
+		}
+
+		/*
+		 * the transient node (SID=0 or DID=fabric)
+		 * has served its purpose
+		 */
+		if (node->rnode.fc_id == 0) {
+			/*
+			 * if this is the SID=0 node,
+			 * move to the init state in case peer
+			 * has restarted FLOGI discovery and FLOGI is pending
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		} else {
+			/*
+			 * if this is the DID=fabric node
+			 * (we initiated FLOGI), shut it down
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp, NULL);
+		break;
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+
+		/* sm: if p2p_winner / domain_attach */
+		if (node->sport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					NULL);
+			if (!node->sport->domain->attached) {
+				node_printf(node, "Domain not attached\n");
+				efc_domain_attach(node->sport->domain,
+						  node->sport->p2p_port_id);
+			} else {
+				node_printf(node, "Domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/* this node has served its purpose;
+			 * we'll expect a PLOGI on a separate
+			 * node (remote SID=0x1); return this node
+			 * to init state in case peer
+			 * restarts discovery -- it may already
+			 * have (pending frames may exist).
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		}
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/*
+		 * LS_ACC failed, possibly due to link down;
+		 * shutdown node and wait
+		 * for FLOGI discovery to restart
+		 */
+		node_printf(node, "FLOGI LS_ACC failed, shutting down\n");
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_ABTS_RCVD: {
+		/* sm: / send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node,
+					 cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		node_printf(node, "PLOGI failed, shutting down\n");
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* if we're in external loopback mode, just send LS_ACC */
+		if (node->efc->external_loopback) {
+			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
+						be16_to_cpu(hdr->fh_ox_id));
+		} else {
+			/*
+			 * if this isn't external loopback,
+			 * pass to default handler
+			 */
+			__efc_fabric_common(__func__, ctx, evt, arg);
+		}
+		break;
+	}
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* We sent a PLOGI and, before its completion was seen,
+		 * received the PRLI from the remote node (WCQEs and RCQEs
+		 * come in on different queues, so ordering cannot be
+		 * assumed). Save the OX_ID so the PRLI LS_ACC can be sent
+		 * after the node attach, and continue waiting for the
+		 * PLOGI response.
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				    enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login and will
+		 * just need to wait for the PLOGI response to do the node
+		 * attach; then we can send the LS_ACC for the PRLI. If,
+		 * during this time, we receive FCP_CMNDs (which is possible
+		 * since we've already sent a PRLI and our peer may have
+		 * accepted it), they will simply be held.
+		 * At this time, we are not waiting on any other unsolicited
+		 * frames to continue with the login process, so it will not
+		 * hurt to hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node->ls_acc_io,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PLOGI: /* Can't happen in P2P */
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node, __efc_d_port_logged_in,
+					    NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+	case EFC_EVT_PRLI_RCVD:
+		node_printf(node, "%s: PRLI received before node is attached\n",
+			    efc_sm_event_name(evt));
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+int
+efc_p2p_setup(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	int rnode_winner;
+
+	rnode_winner = efc_rnode_is_winner(sport);
+
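+	/*
+	 * The port with the higher WWPN is the p2p winner: the winner
+	 * registers itself as N_Port ID 1 and its peer as ID 2, while the
+	 * loser leaves both IDs at 0 (they are assigned by the winner).
+	 */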
+	/* set sport flags to indicate p2p "winner" */
+	if (rnode_winner == 1) {
+		sport->p2p_remote_port_id = 0;
+		sport->p2p_port_id = 0;
+		sport->p2p_winner = false;
+	} else if (rnode_winner == 0) {
+		sport->p2p_remote_port_id = 2;
+		sport->p2p_port_id = 1;
+		sport->p2p_winner = true;
+	} else {
+		/* no winner; only okay if external loopback enabled */
+		if (sport->efc->external_loopback) {
+			/*
+			 * External loopback mode enabled;
+			 * local sport and remote node
+			 * will be registered with an NPortID = 1;
+			 */
+			efc_log_debug(efc,
+				      "External loopback mode enabled\n");
+			sport->p2p_remote_port_id = 1;
+			sport->p2p_port_id = 1;
+			sport->p2p_winner = true;
+		} else {
+			efc_log_warn(efc,
+				     "failed to determine p2p winner\n");
+			return rnode_winner;
+		}
+	}
+	return 0;
+}
diff --git a/drivers/scsi/elx/libefc/efc_fabric.h b/drivers/scsi/elx/libefc/efc_fabric.h
new file mode 100644
index 000000000000..9571b4b7b2ce
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declarations for the interface exported by efc_fabric
+ */
+
+#ifndef __EFCT_FABRIC_H__
+#define __EFCT_FABRIC_H__
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+
+void *
+__efc_fabric_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_domain_attach_wait(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg);
+
+void *
+__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_wait_sport_attach(struct efc_sm_ctx *ctx,
+			       enum efc_sm_event evt, void *arg);
+
+void *
+__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				      enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_logo_wait_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_idle(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+
+void *
+__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_domain_attach_wait(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				    enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+
+int
+efc_p2p_setup(struct efc_sli_port *sport);
+void
+efc_fabric_set_topology(struct efc_node *node,
+			enum efc_sport_topology topology);
+void efc_fabric_notify_topology(struct efc_node *node);
+
+#endif /* __EFCT_FABRIC_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 14/32] elx: libefc: FC node ELS and state handling
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (12 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 13/32] elx: libefc: Fabric " James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  8:39   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 15/32] elx: efct: Data structures and defines for hw operations James Smart
                   ` (18 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC node PRLI handling and state management
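
All of the node state handlers added here follow the same shape: each takes
the state machine context, an event, and an optional argument, handles the
events it cares about, and defers everything else to a common handler. The
sketch below is illustrative only (the function name is made up); the
helpers it calls are the ones introduced by this series:

  void *
  __efc_d_example_state(struct efc_sm_ctx *ctx,
			enum efc_sm_event evt, void *arg)
  {
	struct efc_node *node = ctx->app;

	efc_node_evt_set(ctx, evt, __func__);
	node_sm_trace();

	switch (evt) {
	case EFC_EVT_ENTER:
		/* queue unsolicited frames while in this state */
		efc_node_hold_frames(node);
		break;
	case EFC_EVT_EXIT:
		efc_node_accept_frames(node);
		break;
	default:
		/* everything else goes to the common device-node handler */
		__efc_d_common(__func__, ctx, evt, arg);
		return NULL;
	}
	return NULL;
  }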

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/libefc/efc_device.c | 1658 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_device.h |   72 ++
 2 files changed, 1730 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h

diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
new file mode 100644
index 000000000000..f87525f65b72
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.c
@@ -0,0 +1,1658 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * device_sm Node State Machine: Remote Device States
+ */
+
+#include "efc.h"
+#include "efc_device.h"
+#include "efc_fabric.h"
+
+void
+efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id)
+{
+	struct efc *efc = node->efc;
+	/* If the back-end doesn't want to talk to this initiator,
+	 * we send an LS_RJT
+	 */
+	if (node->sport->enable_tgt &&
+	    (efc->tt.scsi_validate_node(efc, node) == 0)) {
+		node_printf(node, "PRLI rejected by target-server\n");
+
+		efc->tt.send_ls_rjt(efc, node, ox_id,
+				    ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+	} else {
+		/*
+		 * sm: / process PRLI payload, send PRLI acc
+		 */
+		efc->tt.els_send_resp(efc, node, ELS_PRLI, ox_id);
+
+		/* Immediately go to ready state to avoid window where we're
+		 * waiting for the PRLI LS_ACC to complete while holding
+		 * FCP_CMNDs
+		 */
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+	}
+}
+
+static void *
+__efc_d_common(const char *funcname, struct efc_sm_ctx *ctx,
+	       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+	struct efc *efc = NULL;
+
+	efc_assert(ctx, NULL);
+	node = ctx->app;
+	efc_assert(node, NULL);
+	efc = node->efc;
+	efc_assert(efc, NULL);
+
+	switch (evt) {
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n",
+			      node->display_name, funcname,
+				efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+/*
+ * State is entered when a node sends a delete initiator/target call to the
+ * target-server/initiator-client and needs to wait for that work to complete.
+ */
+static void *
+__efc_d_wait_del_node(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		/* Fall through */
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		/*
+		 * node has either been detached or is in the process
+		 * of being detached,
+		 * call common node's initiate cleanup function
+		 */
+		efc_node_initiate_cleanup(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static void *
+__efc_d_wait_del_ini_tgt(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		/* Fall through */
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		efc_node_transition(node, __efc_d_wait_del_node, NULL);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* assume no wait needed */
+		int rc = EFC_SCSI_CALL_COMPLETE;
+
+		efc->tt.scsi_io_alloc_disable(efc, node);
+
+		/* make necessary delete upcall(s) */
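+		/*
+		 * If a delete upcall completes synchronously
+		 * (EFC_SCSI_CALL_COMPLETE), the matching DEL_*_COMPLETE
+		 * event is posted here; otherwise it arrives later and is
+		 * handled in the wait state entered below.
+		 */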
+		if (node->init && !node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->sport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+
+		} else if (node->targ && !node->init) {
+			efc_log_info(node->efc,
+				     "[%s] delete (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->sport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+
+		} else if (node->init && node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (I+T) WWPN %s WWNN %s\n",
+				node->display_name, node->wwpn, node->wwnn);
+			efc_node_transition(node, __efc_d_wait_del_ini_tgt,
+					    NULL);
+			if (node->sport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+			/* assume no wait needed */
+			rc = EFC_SCSI_CALL_COMPLETE;
+			if (node->sport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+		}
+
+		/* we've initiated the upcalls as needed, now kick off the node
+		 * detach to precipitate the aborting of outstanding exchanges
+		 * associated with said node
+		 *
+		 * Beware: if we've made upcall(s), we've already transitioned
+		 * to a new state by the time we execute this.
+		 * consider doing this before the upcalls?
+		 */
+		if (node->attached) {
+			/* issue hw node free; don't care if succeeds right
+			 * away or sometime later, will check node->attached
+			 * later in shutdown process
+			 */
+			rc = efc->tt.hw_node_detach(efc, &node->rnode);
+			if (rc != EFC_HW_RTN_SUCCESS &&
+			    rc != EFC_HW_RTN_SUCCESS_SYNC)
+				node_printf(node,
+					    "Failed freeing HW node, rc=%d\n",
+					rc);
+		}
+
+		/* if neither initiator nor target, proceed to cleanup */
+		if (!node->init && !node->targ) {
+			/*
+			 * node has either been detached or is in
+			 * the process of being detached,
+			 * call common node's initiate cleanup function
+			 */
+			efc_node_initiate_cleanup(node);
+		}
+		break;
+	}
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* Ignore, this can happen if an ELS is
+		 * aborted while in a delay/retry state
+		 */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void *
+__efc_d_wait_loop(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		/* send PLOGI automatically if initiator */
+		efc_node_init_device(node, true);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Save the OX_ID for sending LS_ACC sometime later */
+void
+efc_send_ls_acc_after_attach(struct efc_node *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc ls)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+
+	efc_assert(node->send_ls_acc == EFC_NODE_SEND_LS_ACC_NONE);
+
+	node->ls_acc_oxid = ox_id;
+	node->send_ls_acc = ls;
+	node->ls_acc_did = ntoh24(hdr->fh_d_id);
+}
+
+void
+efc_process_prli_payload(struct efc_node *node, void *prli)
+{
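+	/* first service parameter page follows the PRLI header */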
+	struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
+
+	node->init = (sp->spp_flags & FCP_SPPF_INIT_FCN) != 0;
+	node->targ = (sp->spp_flags & FCP_SPPF_TARG_FCN) != 0;
+}
+
+void *
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:	/* PLOGI ACC completions */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_d_port_logged_in, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* LOGO response received, sent shutdown */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_LOGO,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		node_printf(node,
+			    "LOGO sent (evt=%s), shutdown node\n",
+			efc_sm_event_name(evt));
+		/* sm: / post explicit logout */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				    NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void
+efc_node_init_device(struct efc_node *node, bool send_plogi)
+{
+	node->send_plogi = send_plogi;
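+	/*
+	 * If the node database mask asks to pause new nodes, park the node
+	 * in the paused state and record the state it should resume in;
+	 * the domain-manager node is never paused.
+	 */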
+	if ((node->efc->nodedb_mask & EFC_NODEDB_PAUSE_NEW_NODES) &&
+	    (node->rnode.fc_id != FC_FID_DOM_MGR)) {
+		node->nodedb_state = __efc_d_init;
+		efc_node_transition(node, __efc_node_paused, NULL);
+	} else {
+		efc_node_transition(node, __efc_d_init, NULL);
+	}
+}
+
+/*
+ * Device node state machine: Initial node state for an initiator or
+ * a target.
+ *
+ * This state is entered when a node is instantiated, either having been
+ * discovered from a name services query, or having received a PLOGI/FLOGI.
+ */
+void *
+__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->send_plogi)
+			break;
+		/* only send if we have initiator capability,
+		 * and domain is attached
+		 */
+		if (node->sport->enable_ini &&
+		    node->sport->domain->attached) {
+			efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+
+			efc_node_transition(node, __efc_d_wait_plogi_rsp, NULL);
+		} else {
+			node_printf(node,
+				    "not sending plogi sport.ini=%d, domain attached=%d\n",
+				    node->sport->enable_ini,
+				    node->sport->domain->attached);
+		}
+		break;
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* domain already attached */
+		if (node->sport->domain->attached) {
+			rc = efc_node_attach(node);
+			efc_node_transition(node,
+					    __efc_d_wait_node_attach, NULL);
+			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
+				efc_node_post_event(node,
+						    EFC_EVT_NODE_ATTACH_OK,
+						    NULL);
+			}
+			break;
+		}
+
+		/* domain not attached; several possibilities: */
+		switch (node->sport->topology) {
+		case EFC_SPORT_TOPOLOGY_P2P:
+			/* we're not attached and sport is p2p,
+			 * need to attach
+			 */
+			efc_domain_attach(node->sport->domain, d_id);
+			efc_node_transition(node,
+					    __efc_d_wait_domain_attach,
+					    NULL);
+			break;
+		case EFC_SPORT_TOPOLOGY_FABRIC:
+			/* we're not attached and sport is fabric, domain
+			 * attach should have already been requested as part
+			 * of the fabric state machine, wait for it
+			 */
+			efc_node_transition(node, __efc_d_wait_domain_attach,
+					    NULL);
+			break;
+		case EFC_SPORT_TOPOLOGY_UNKNOWN:
+			/* Two possibilities:
+			 * 1. received a PLOGI before our FLOGI has completed
+			 *    (possible since completion comes in on another
+			 *    CQ), thus we don't know what we're connected to
+			 *    yet; transition to a state to wait for the
+			 *    fabric node to tell us;
+			 * 2. PLOGI received before link went down and we
+			 * haven't performed domain attach yet.
+			 * Note: we cannot distinguish between 1. and 2.
+			 * so have to assume PLOGI
+			 * was received after link back up.
+			 */
+			node_printf(node,
+				    "received PLOGI, unknown topology did=0x%x\n",
+				d_id);
+			efc_node_transition(node,
+					    __efc_d_wait_topology_notify,
+					    NULL);
+			break;
+		default:
+			node_printf(node,
+				    "received PLOGI, with unexpected topology %d\n",
+				node->sport->topology);
+			efc_assert(false, NULL);
+			break;
+		}
+		break;
+	}
+
+	case EFC_EVT_FDISC_RCVD: {
+		__efc_d_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	case EFC_EVT_FLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		/* sm: / save sparams, send FLOGI acc */
+		memcpy(node->sport->domain->flogi_service_params,
+		       cbdata->payload->dma.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* send FC LS_ACC response, override s_id */
+		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
+		efc->tt.send_flogi_p2p_acc(efc, node,
+				be16_to_cpu(hdr->fh_ox_id), d_id);
+		if (efc_p2p_setup(node->sport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		} else {
+			efc_node_transition(node,
+					    __efc_p2p_wait_flogi_acc_cmpl,
+					    NULL);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and shut node down w/ "explicit logout"
+			 * so pending frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* note: problem, we're now expecting an ELS REQ completion
+		 * from both the LOGO and PLOGI
+		 */
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a
+			 * link down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+
+		/* Send LOGO */
+		node_printf(node, "FCP_CMND received, send LOGO\n");
+		if (efc->tt.els_send(efc, node, ELS_LOGO,
+				     EFC_FC_FLOGI_TIMEOUT_SEC,
+			EFC_FC_ELS_DEFAULT_RETRIES) == NULL) {
+			/*
+			 * failed to send LOGO; go ahead and clean up the
+			 * node anyway
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		} else {
+			/* sent LOGO, wait for response */
+			efc_node_transition(node,
+					    __efc_d_wait_logo_rsp, NULL);
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		/* received PLOGI with svc parms, go ahead and attach node
+		 * when PLOGI that was sent ultimately completes, it'll be a
+		 * no-op
+		 *
+		 * If there is an outstanding PLOGI sent, can we set a flag
+		 * to indicate that we don't want to retry it if it times out?
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+		/* sm: domain->attached / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* We sent a PLOGI and, before its completion was seen,
+		 * received the PRLI from the remote node (WCQEs and RCQEs
+		 * come in on different queues, so ordering cannot be
+		 * assumed). Save the OX_ID so the PRLI LS_ACC can be sent
+		 * after the node attach, and continue waiting for the
+		 * PLOGI response.
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_d_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: /* why don't we do a shutdown here?? */
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* Our PLOGI was rejected, this is ok in some cases */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* not logged in yet and outstanding PLOGI so don't send LOGO,
+		 * just drop
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				  enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login and will
+		 * just need to wait for the PLOGI response to do the node
+		 * attach; then we can send the LS_ACC for the PRLI. If,
+		 * during this time, we receive FCP_CMNDs (which is possible
+		 * since we've already sent a PRLI and our peer may have
+		 * accepted it), they will simply be held. We are not waiting
+		 * on any other unsolicited frames to continue with the login
+		 * process, so it will not hurt to hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		efc_assert(node->sport->domain->attached, NULL);
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void *
+__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY: {
+		enum efc_sport_topology topology =
+					(enum efc_sport_topology)arg;
+
+		efc_assert(!node->sport->domain->attached, NULL);
+
+		efc_assert(node->send_ls_acc == EFC_NODE_SEND_LS_ACC_PLOGI,
+			   NULL);
+		node_printf(node, "topology notification, topology=%d\n",
+			    topology);
+
+		/* At the time the PLOGI was received, the topology was unknown,
+		 * so we didn't know which node would perform the domain attach:
+		 * 1. The node from which the PLOGI was sent (p2p) or
+		 * 2. The node to which the FLOGI was sent (fabric).
+		 */
+		if (topology == EFC_SPORT_TOPOLOGY_P2P) {
+			/* if this is p2p, need to attach to the domain using
+			 * the d_id from the PLOGI received
+			 */
+			efc_domain_attach(node->sport->domain,
+					  node->ls_acc_did);
+		}
+		/* else, if this is fabric, the domain attach
+		 * should be performed by the fabric node (node sending FLOGI);
+		 * just wait for attach to complete
+		 */
+
+		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		efc_assert(node->sport->domain->attached, NULL);
+		node_printf(node, "domain attach ok\n");
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void *
+__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PLOGI: {
+			/* sm: send_plogi_acc is set / send PLOGI acc */
+			/* Normal case for T, or I+T */
+			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
+							node->ls_acc_oxid);
+			efc_node_transition(node,
+					    __efc_d_wait_plogi_acc_cmpl,
+					     NULL);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node,
+					    __efc_d_port_logged_in, NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node,
+				    __efc_d_wait_attach_evt_shutdown, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* Normal case for I or I+T */
+		if (node->sport->enable_ini &&
+		    node->rnode.fc_id == FC_FID_DOM_MGR) {
+			/* sm: if enable_ini / send PRLI */
+			efc->tt.els_send(efc, node, ELS_PRLI,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+			/* can now expect ELS_REQ_OK/FAIL/RJT */
+		}
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/* Normal for T or I+T */
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_d_send_prli_rsp(node, be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PRLI response */
+		/* Normal case for I or I+T */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / process PRLI payload */
+		efc_process_prli_payload(node, cbdata->els_rsp.virt);
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {	/* PRLI response failed */
+		/* I, I+T, assume some link failure, shutdown node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT: {
+		/* PRLI rejected by remote
+		 * Normal for I, I+T (connected to an I)
+		 * Node doesn't want to be a target, stay here and wait for a
+		 * PRLI from the remote node
+		 * if it really wants to connect to us as target
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK: {
+		/* Normal T, I+T, target-server rejected the process login */
+		/* This would be received only in the case where we sent
+		 * LS_RJT for the PRLI, so do nothing.
+		 * (note: as T only, we could shut down the node)
+		 */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/*sm: / save sparams, set send_plogi_acc,
+		/* sm: / save sparams, set send_plogi_acc,
+		 * post implicit logout
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt),
+					node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* sm: / post explicit logout */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_device_ready(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	if (evt != EFC_EVT_FCP_CMD_RCVD)
+		node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node->fcp_enabled = true;
+		if (node->init) {
+			efc_log_info(efc,
+				     "[%s] found (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->sport->enable_tgt)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		if (node->targ) {
+			efc_log_info(efc,
+				     "[%s] found (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->sport->enable_ini)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		break;
+
+	case EFC_EVT_EXIT:
+		node->fcp_enabled = false;
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		/* T, I+T: remote initiator is slow to get started */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+
+		/* sm: / send PRLI acc */
+
+		efc->tt.els_send_resp(efc, node, ELS_PRLI,
+					be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_PRLO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send PRLO acc */
+		efc->tt.els_send_resp(efc, node, ELS_PRLO,
+					be16_to_cpu(hdr->fh_ox_id));
+		/* need implicit logout? */
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_ADISC_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send ADISC acc */
+		efc->tt.els_send_resp(efc, node, ELS_ADISC,
+					be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: / process ABTS */
+		/* This should not happen */
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		break;
+
+	case EFC_EVT_NODE_MISSING:
+		if (node->sport->enable_rscn)
+			efc_node_transition(node, __efc_d_device_gone, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		/* T, or I+T, PRLI accept completed ok */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* T, or I+T, PRLI accept failed to complete */
+		efc_assert(node->els_cmpl_cnt, NULL);
+		node->els_cmpl_cnt--;
+		node_printf(node, "Failed to send PRLI LS_ACC\n");
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_device_gone(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	int rc = EFC_SCSI_CALL_COMPLETE;
+	int rc_2 = EFC_SCSI_CALL_COMPLETE;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		static char const *labels[] = {"none", "initiator", "target",
+							"initiator+target"};
+
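+		/* labels[] above is indexed by (targ << 1) | init */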
+		efc_log_info(efc, "[%s] missing (%s)    WWPN %s WWNN %s\n",
+			     node->display_name,
+				labels[(node->targ << 1) | (node->init)],
+						node->wwpn, node->wwnn);
+
+		switch (efc_node_get_enable(node)) {
+		case EFC_NODE_ENABLE_T_TO_T:
+		case EFC_NODE_ENABLE_I_TO_T:
+		case EFC_NODE_ENABLE_IT_TO_T:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_I:
+		case EFC_NODE_ENABLE_I_TO_I:
+		case EFC_NODE_ENABLE_IT_TO_I:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_I_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+						  EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_IT_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			rc_2 = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		default:
+			rc = EFC_SCSI_CALL_COMPLETE;
+			break;
+		}
+
+		if (rc == EFC_SCSI_CALL_COMPLETE &&
+		    rc_2 == EFC_SCSI_CALL_COMPLETE)
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+
+		break;
+	}
+	case EFC_EVT_NODE_REFOUND:
+		/* two approaches, reauthenticate with PLOGI/PRLI, or ADISC */
+
+		/* reauthenticate with PLOGI/PRLI */
+		/* efc_node_transition(node, __efc_d_discovered, NULL); */
+
+		/* reauthenticate with ADISC */
+		/* sm: / send ADISC */
+		efc->tt.els_send(efc, node, ELS_ADISC,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_d_wait_adisc_rsp, NULL);
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters, and send
+		 * ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* most likely a stale frame (received prior to link down);
+		 * attempting to send a LOGO would probably time out and eat
+		 * up 20s, so drop the FCP_CMND
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* received an LS_RJT; in this case, send the shutdown
+		 * (explicit logo) event, which will unregister the node
+		 * and start over with PLOGI
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		efc_assert(node->els_req_cnt, NULL);
+		node->els_req_cnt--;
+		/* sm: / post explicit logout */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				     NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* In this case, we have the equivalent of an LS_RJT for
+		 * the ADISC, so we need to abort the ADISC, and re-login
+		 * with PLOGI
+		 */
+		/* sm: / request abort, send LOGO acc */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
diff --git a/drivers/scsi/elx/libefc/efc_device.h b/drivers/scsi/elx/libefc/efc_device.h
new file mode 100644
index 000000000000..513096b8f875
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Node state machine functions for remote device node sm
+ */
+
+#ifndef __EFCT_DEVICE_H__
+#define __EFCT_DEVICE_H__
+extern void
+efc_node_init_device(struct efc_node *node, bool send_plogi);
+extern void
+efc_process_prli_payload(struct efc_node *node,
+			 void *prli);
+extern void
+efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id);
+extern void
+efc_send_ls_acc_after_attach(struct efc_node *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc ls);
+extern void *
+__efc_d_wait_loop(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_device_ready(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_device_gone(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg);
+
+#endif /* __EFCT_DEVICE_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 15/32] elx: efct: Data structures and defines for hw operations
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (13 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 14/32] elx: libefc: FC node ELS and state handling James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  8:41   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 16/32] elx: efct: Driver initialization routines James Smart
                   ` (17 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch starts the population of the efct target mode
driver.  The driver is contained in the drivers/scsi/elx/efct
subdirectory.

This patch creates the efct directory and begins populating the
driver by adding SLI-4 configuration parameters, data structures
for configuring SLI-4 queues, conversion of OS requests to SLI-4 IO
requests, and handling of async events.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
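[Note for reviewers, not part of the patch: a minimal, hypothetical
sketch of how the hardware configuration structure introduced by this
header might be seeded with its defaults. Field and macro names are
taken from efct_hw.h below; the helper name and the values chosen are
illustrative only.]

  /* assumes efct_hw.h from this patch is included */
  static void example_seed_hw_config(struct efct_hw *hw)
  {
  	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
  	hw->config.topology = EFCT_HW_TOPOLOGY_AUTO;
  	hw->config.n_eq = EFCT_HW_MAX_NUM_EQ;
  	hw->config.n_wq = EFCT_HW_MAX_NUM_WQ;
  	hw->config.rr_quanta = 1;	/* used when rq_selection_policy == 2 */
  }
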
 drivers/scsi/elx/efct/efct_hw.h | 852 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 852 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h

diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
new file mode 100644
index 000000000000..ff6de91923fa
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -0,0 +1,852 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef _EFCT_HW_H
+#define _EFCT_HW_H
+
+#include "../libefc_sli/sli4.h"
+#include "efct_utils.h"
+
+/*
+ * EFCT PCI IDs
+ */
+#define EFCT_VENDOR_ID			0x10df
+/* LightPulse 16Gb x 4 FC (lancer-g6) */
+#define EFCT_DEVICE_ID_LPE31004		0xe307
+#define PCI_PRODUCT_EMULEX_LPE32002	0xe307
+/* LightPulse 32Gb x 4 FC (lancer-g7) */
+#define EFCT_DEVICE_ID_G7		0xf407
+
+/* RQ entry counts (min/default/max) used by the driver */
+#define EFCT_HW_RQ_ENTRIES_MIN		512
+#define EFCT_HW_RQ_ENTRIES_DEF		1024
+#define EFCT_HW_RQ_ENTRIES_MAX		4096
+
+/* Defines the size of the RQ buffers used for each RQ */
+#define EFCT_HW_RQ_SIZE_HDR             128
+#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
+
+/* Define the maximum number of multi-receive queues */
+#define EFCT_HW_MAX_MRQS		8
+
+/*
+ * Interval (in WQEs) at which to set the WQEC bit in a submitted
+ * WQE, causing a consumed/released completion to be posted.
+ */
+#define EFCT_HW_WQEC_SET_COUNT		32
+
+/* Send frame timeout in seconds */
+#define EFCT_HW_SEND_FRAME_TIMEOUT	10
+
+/*
+ * FDT Transfer Hint value; reads greater than this value
+ * will be segmented to implement fairness. A value of zero disables
+ * the feature.
+ */
+#define EFCT_HW_FDT_XFER_HINT		8192
+
+#define EFCT_HW_TIMECHECK_ITERATIONS	100
+#define EFCT_HW_MAX_NUM_MQ		1
+#define EFCT_HW_MAX_NUM_RQ		32
+#define EFCT_HW_MAX_NUM_EQ		16
+#define EFCT_HW_MAX_NUM_WQ		32
+
+#define OCE_HW_MAX_NUM_MRQ_PAIRS	16
+
+#define EFCT_HW_MAX_WQ_CLASS		4
+#define EFCT_HW_MAX_WQ_CPU		128
+
+/*
+ * A CQ will be assigned to each WQ
+ * (the CQ must have 2X the entries of the WQ for abort
+ * processing), plus a separate one for each RQ pair and one for the MQ
+ */
+#define EFCT_HW_MAX_NUM_CQ \
+	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
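+/* e.g. with the maximums above: (32 * 2) + 1 + (16 * 2) = 97 CQs */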
+
+#define EFCT_HW_Q_HASH_SIZE		128
+#define EFCT_HW_RQ_HEADER_SIZE		128
+#define EFCT_HW_RQ_HEADER_INDEX		0
+
+/* Options for efct_hw_command() */
+enum {
+	/* command executes synchronously and busy-waits for completion */
+	EFCT_CMD_POLL,
+	/* command executes asynchronously. Uses callback */
+	EFCT_CMD_NOWAIT,
+};
+
+enum efct_hw_rtn {
+	EFCT_HW_RTN_SUCCESS = 0,
+	EFCT_HW_RTN_SUCCESS_SYNC = 1,
+	EFCT_HW_RTN_ERROR = -1,
+	EFCT_HW_RTN_NO_RESOURCES = -2,
+	EFCT_HW_RTN_NO_MEMORY = -3,
+	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFCT_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
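+/* e.g. EFCT_HW_RTN_IS_ERROR(EFCT_HW_RTN_NO_MEMORY) is true,
+ * EFCT_HW_RTN_IS_ERROR(EFCT_HW_RTN_SUCCESS_SYNC) is false
+ */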
+
+enum efct_hw_reset {
+	EFCT_HW_RESET_FUNCTION,
+	EFCT_HW_RESET_FIRMWARE,
+	EFCT_HW_RESET_MAX
+};
+
+enum efct_hw_property {
+	EFCT_HW_N_IO,
+	EFCT_HW_N_SGL,
+	EFCT_HW_MAX_IO,
+	EFCT_HW_MAX_SGE,
+	EFCT_HW_MAX_SGL,
+	EFCT_HW_MAX_NODES,
+	EFCT_HW_MAX_RQ_ENTRIES,
+	EFCT_HW_TOPOLOGY,
+	EFCT_HW_WWN_NODE,
+	EFCT_HW_WWN_PORT,
+	EFCT_HW_FW_REV,
+	EFCT_HW_FW_REV2,
+	EFCT_HW_IPL,
+	EFCT_HW_VPD,
+	EFCT_HW_VPD_LEN,
+	EFCT_HW_MODE,
+	EFCT_HW_LINK_SPEED,
+	EFCT_HW_IF_TYPE,
+	EFCT_HW_SLI_REV,
+	EFCT_HW_SLI_FAMILY,
+	EFCT_HW_RQ_PROCESS_LIMIT,
+	EFCT_HW_RQ_DEFAULT_BUFFER_SIZE,
+	EFCT_HW_AUTO_XFER_RDY_CAPABLE,
+	EFCT_HW_AUTO_XFER_RDY_XRI_CNT,
+	EFCT_HW_AUTO_XFER_RDY_SIZE,
+	EFCT_HW_AUTO_XFER_RDY_BLK_SIZE,
+	EFCT_HW_AUTO_XFER_RDY_T10_ENABLE,
+	EFCT_HW_AUTO_XFER_RDY_P_TYPE,
+	EFCT_HW_AUTO_XFER_RDY_REF_TAG_IS_LBA,
+	EFCT_HW_AUTO_XFER_RDY_APP_TAG_VALID,
+	EFCT_HW_AUTO_XFER_RDY_APP_TAG_VALUE,
+	EFCT_HW_DIF_CAPABLE,
+	EFCT_HW_DIF_SEED,
+	EFCT_HW_DIF_MODE,
+	EFCT_HW_DIF_MULTI_SEPARATE,
+	EFCT_HW_DUMP_MAX_SIZE,
+	EFCT_HW_DUMP_READY,
+	EFCT_HW_DUMP_PRESENT,
+	EFCT_HW_RESET_REQUIRED,
+	EFCT_HW_FW_ERROR,
+	EFCT_HW_FW_READY,
+	EFCT_HW_HIGH_LOGIN_MODE,
+	EFCT_HW_PREREGISTER_SGL,
+	EFCT_HW_HW_REV1,
+	EFCT_HW_HW_REV2,
+	EFCT_HW_HW_REV3,
+	EFCT_HW_ETH_LICENSE,
+	EFCT_HW_LINK_MODULE_TYPE,
+	EFCT_HW_NUM_CHUTES,
+	EFCT_HW_WAR_VERSION,
+	/* enable driver timeouts for target WQEs */
+	EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT,
+	EFCT_HW_LINK_CONFIG_SPEED,
+	EFCT_HW_CONFIG_TOPOLOGY,
+	EFCT_HW_BOUNCE,
+	EFCT_HW_PORTNUM,
+	EFCT_HW_BIOS_VERSION_STRING,
+	EFCT_HW_RQ_SELECT_POLICY,
+	EFCT_HW_SGL_CHAINING_CAPABLE,
+	EFCT_HW_SGL_CHAINING_ALLOWED,
+	EFCT_HW_SGL_CHAINING_HOST_ALLOCATED,
+	EFCT_HW_SEND_FRAME_CAPABLE,
+	EFCT_HW_RQ_SELECTION_POLICY,
+	EFCT_HW_RR_QUANTA,
+	EFCT_HW_FILTER_DEF,
+	EFCT_HW_MAX_VPORTS,
+	EFCT_ESOC,
+};
+
+enum {
+	EFCT_HW_TOPOLOGY_AUTO,
+	EFCT_HW_TOPOLOGY_NPORT,
+	EFCT_HW_TOPOLOGY_LOOP,
+	EFCT_HW_TOPOLOGY_NONE,
+	EFCT_HW_TOPOLOGY_MAX
+};
+
+enum {
+	EFCT_HW_MODE_INITIATOR,
+	EFCT_HW_MODE_TARGET,
+	EFCT_HW_MODE_BOTH,
+	EFCT_HW_MODE_MAX
+};
+
+/* pack fw revision values into a single uint64_t */
+#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32) \
+			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
+
+#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
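+/*
+ * e.g. HW_FWREV(12, 0, 193, 13) packs the (hypothetical) version
+ * 12.0.193.13 into one 64-bit value; EFCT_FW_VER_STR(12, 0, 193, 13)
+ * expands to the string "12.0.193.13".
+ */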
+
+/* Defines DIF operation modes */
+enum {
+	EFCT_HW_DIF_MODE_INLINE,
+	EFCT_HW_DIF_MODE_SEPARATE,
+};
+
+/* T10 DIF operations */
+enum efct_hw_dif_oper {
+	EFCT_HW_DIF_OPER_DISABLED,
+	EFCT_HW_SGE_DIFOP_INNODIFOUTCRC,
+	EFCT_HW_SGE_DIFOP_INCRCOUTNODIF,
+	EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM,
+	EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF,
+	EFCT_HW_SGE_DIFOP_INCRCOUTCRC,
+	EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM,
+	EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM,
+	EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC,
+	EFCT_HW_SGE_DIFOP_INRAWOUTRAW,
+};
+
+#define EFCT_HW_DIF_OPER_PASS_THRU	EFCT_HW_SGE_DIFOP_INCRCOUTCRC
+#define EFCT_HW_DIF_OPER_STRIP		EFCT_HW_SGE_DIFOP_INCRCOUTNODIF
+#define EFCT_HW_DIF_OPER_INSERT		EFCT_HW_SGE_DIFOP_INNODIFOUTCRC
+
+/* T10 DIF block sizes */
+enum efct_hw_dif_blk_size {
+	EFCT_HW_DIF_BK_SIZE_512,
+	EFCT_HW_DIF_BK_SIZE_1024,
+	EFCT_HW_DIF_BK_SIZE_2048,
+	EFCT_HW_DIF_BK_SIZE_4096,
+	EFCT_HW_DIF_BK_SIZE_520,
+	EFCT_HW_DIF_BK_SIZE_4104,
+	EFCT_HW_DIF_BK_SIZE_NA = 0
+};
+
+/* link module types */
+enum {
+	EFCT_HW_LINK_MODULE_TYPE_1GB	= 0x0004,
+	EFCT_HW_LINK_MODULE_TYPE_2GB	= 0x0008,
+	EFCT_HW_LINK_MODULE_TYPE_4GB	= 0x0040,
+	EFCT_HW_LINK_MODULE_TYPE_8GB	= 0x0080,
+	EFCT_HW_LINK_MODULE_TYPE_10GB	= 0x0100,
+	EFCT_HW_LINK_MODULE_TYPE_16GB	= 0x0200,
+	EFCT_HW_LINK_MODULE_TYPE_32GB	= 0x0400,
+};
+
+/* T10 DIF information passed to the transport */
+struct efct_hw_dif_info {
+	enum efct_hw_dif_oper dif_oper;
+	enum efct_hw_dif_blk_size blk_size;
+	u32 ref_tag_cmp;
+	u32 ref_tag_repl;
+	u16 app_tag_cmp;
+	u16 app_tag_repl;
+	bool check_ref_tag;
+	bool check_app_tag;
+	bool check_guard;
+	bool auto_incr_ref_tag;
+	bool repl_app_tag;
+	bool repl_ref_tag;
+	bool dif_separate;
+
+	/* If the APP TAG is 0xFFFF, disable REF TAG and CRC field chk */
+	bool disable_app_ffff;
+
+	/* if the APP TAG is 0xFFFF and REF TAG is 0xFFFF_FFFF,
+	 * disable checking the received CRC field.
+	 */
+	bool disable_app_ref_ffff;
+	u16 dif_seed;
+	u8 dif;
+};
+
+enum efct_hw_io_type {
+	EFCT_HW_ELS_REQ,
+	EFCT_HW_ELS_RSP,
+	EFCT_HW_ELS_RSP_SID,
+	EFCT_HW_FC_CT,
+	EFCT_HW_FC_CT_RSP,
+	EFCT_HW_BLS_ACC,
+	EFCT_HW_BLS_ACC_SID,
+	EFCT_HW_BLS_RJT,
+	EFCT_HW_IO_TARGET_READ,
+	EFCT_HW_IO_TARGET_WRITE,
+	EFCT_HW_IO_TARGET_RSP,
+	EFCT_HW_IO_DNRX_REQUEUE,
+	EFCT_HW_IO_MAX,
+};
+
+enum efct_hw_io_state {
+	EFCT_HW_IO_STATE_FREE,
+	EFCT_HW_IO_STATE_INUSE,
+	EFCT_HW_IO_STATE_WAIT_FREE,
+	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
+};
+
+struct efct_hw;
+
+/**
+ * HW command context.
+ * Stores the state for the asynchronous commands sent to the hardware.
+ */
+struct efct_command_ctx {
+	struct list_head	list_entry;
+	int (*cb)(struct efct_hw *hw, int status, u8 *mqe, void *arg);
+	void			*arg;	/* Argument for callback */
+	u8			*buf;	/* buffer holding command / results */
+	void			*ctx;	/* upper layer context */
+};
+
+struct efct_hw_sgl {
+	uintptr_t		addr;
+	size_t			len;
+};
+
+union efct_hw_io_param_u {
+	struct {
+		u16		ox_id;
+		u16		rx_id;
+		u8		payload[12];
+	} bls;
+	struct {
+		u32		s_id;
+		u16		ox_id;
+		u16		rx_id;
+		u8		payload[12];
+	} bls_sid;
+	struct {
+		u8		r_ctl;
+		u8		type;
+		u8		df_ctl;
+		u8		timeout;
+	} bcast;
+	struct {
+		u16		ox_id;
+		u8		timeout;
+	} els;
+	struct {
+		u32		s_id;
+		u16		ox_id;
+		u8		timeout;
+	} els_sid;
+	struct {
+		u8		r_ctl;
+		u8		type;
+		u8		df_ctl;
+		u8		timeout;
+	} fc_ct;
+	struct {
+		u8		r_ctl;
+		u8		type;
+		u8		df_ctl;
+		u8		timeout;
+		u16		ox_id;
+	} fc_ct_rsp;
+	struct {
+		u32		offset;
+		u16		ox_id;
+		u16		flags;
+		u8		cs_ctl;
+		enum efct_hw_dif_oper dif_oper;
+		enum efct_hw_dif_blk_size blk_size;
+		u8		timeout;
+		u32		app_id;
+	} fcp_tgt;
+	struct {
+		struct efc_dma	*cmnd;
+		struct efc_dma	*rsp;
+		enum efct_hw_dif_oper dif_oper;
+		enum efct_hw_dif_blk_size blk_size;
+		u32		cmnd_size;
+		u16		flags;
+		u8		timeout;
+		u32		first_burst;
+	} fcp_ini;
+};
+
+/* WQ steering mode */
+enum efct_hw_wq_steering {
+	EFCT_HW_WQ_STEERING_CLASS,
+	EFCT_HW_WQ_STEERING_REQUEST,
+	EFCT_HW_WQ_STEERING_CPU,
+};
+
+/* HW wqe object */
+struct efct_hw_wqe {
+	struct list_head	list_entry;
+	bool			abort_wqe_submit_needed;
+	bool			send_abts;
+	u32			id;
+	u32			abort_reqtag;
+	u8			*wqebuf;
+};
+
+/**
+ * HW IO object.
+ *
+ * Stores the per-IO information necessary
+ * for both the lower (SLI) and upper
+ * layers (efct).
+ */
+struct efct_hw_io {
+	/* Owned by HW */
+
+	/* reference counter and callback function */
+	struct kref		ref;
+	void (*release)(struct kref *arg);
+	/* used for busy, wait_free, free lists */
+	struct list_head	list_entry;
+	/* used for timed_wqe list */
+	struct list_head	wqe_link;
+	/* used for io posted dnrx list */
+	struct list_head	dnrx_link;
+	/* state of IO: free, busy, wait_free */
+	enum efct_hw_io_state	state;
+	/* Work queue object, with link for pending */
+	struct efct_hw_wqe	wqe;
+	/* pointer back to hardware context */
+	struct efct_hw		*hw;
+	struct efc_remote_node	*rnode;
+	struct efc_dma		xfer_rdy;
+	u16	type;
+	/* WQ assigned to the exchange */
+	struct hw_wq		*wq;
+	/* Exchange is active in FW */
+	bool			xbusy;
+	/* Function called on IO completion */
+	int
+	(*done)(struct efct_hw_io *hio,
+		struct efc_remote_node *rnode,
+			u32 len, int status,
+			u32 ext, void *ul_arg);
+	/* argument passed to "IO done" callback */
+	void			*arg;
+	/* Function called on abort completion */
+	int
+	(*abort_done)(struct efct_hw_io *hio,
+		      struct efc_remote_node *rnode,
+			u32 len, int status,
+			u32 ext, void *ul_arg);
+	/* argument passed to "abort done" callback */
+	void			*abort_arg;
+	/* needed for bug O127585: length of IO */
+	size_t			length;
+	/* timeout value for target WQEs */
+	u8			tgt_wqe_timeout;
+	/* timestamp when current WQE was submitted */
+	u64			submit_ticks;
+
+	/* if TRUE, latched status should be returned */
+	bool			status_saved;
+	/* if TRUE, abort is in progress */
+	bool			abort_in_progress;
+	u32			saved_status;
+	u32			saved_len;
+	u32			saved_ext;
+
+	/* EQ that this HIO came up on */
+	struct hw_eq		*eq;
+	/* WQ steering mode request */
+	enum efct_hw_wq_steering wq_steering;
+	/* WQ class if steering mode is Class */
+	u8			wq_class;
+
+	/* request tag for this HW IO */
+	u16			reqtag;
+	/* request tag for an abort of this HW IO
+	 * (note: this is a 32 bit value
+	 * to allow us to use UINT32_MAX as an uninitialized value)
+	 */
+	u32			abort_reqtag;
+	u32			indicator;	/* XRI */
+	struct efc_dma		def_sgl;	/* default SGL*/
+	/* Count of SGEs in default SGL */
+	u32			def_sgl_count;
+	/* pointer to current active SGL */
+	struct efc_dma		*sgl;
+	u32			sgl_count;	/* count of SGEs in io->sgl */
+	u32			first_data_sge;	/* index of first data SGE */
+	struct efc_dma		*ovfl_sgl;	/* overflow SGL */
+	u32			ovfl_sgl_count;
+	 /* pointer to overflow segment len */
+	struct sli4_lsp_sge	*ovfl_lsp;
+	u32			n_sge;		/* number of active SGEs */
+	u32			sge_offset;
+
+	/* where upper layer can store ref to its IO */
+	void			*ul_io;
+};
+
+
+/* Typedef for HW "done" callback */
+typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
+			      u32 len, int status, u32 ext, void *ul_arg);
+
+enum efct_hw_port {
+	EFCT_HW_PORT_INIT,
+	EFCT_HW_PORT_SHUTDOWN,
+};
+
+/* Node group rpi reference */
+struct efct_hw_rpi_ref {
+	atomic_t rpi_count;
+	atomic_t rpi_attached;
+};
+
+enum efct_hw_link_stat {
+	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
+	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
+	EFCT_HW_LINK_STAT_CRC_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
+	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
+	EFCT_HW_LINK_STAT_MAX,
+};
+
+enum efct_hw_host_stat {
+	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
+	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
+	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
+	EFCT_HW_HOST_STAT_MAX,
+};
+
+enum efct_hw_state {
+	EFCT_HW_STATE_UNINITIALIZED,
+	EFCT_HW_STATE_QUEUES_ALLOCATED,
+	EFCT_HW_STATE_ACTIVE,
+	EFCT_HW_STATE_RESET_IN_PROGRESS,
+	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,
+};
+
+struct efct_hw_link_stat_counts {
+	u8		overflow;
+	u32		counter;
+};
+
+struct efct_hw_host_stat_counts {
+	u32		counter;
+};
+
+#include "efct_hw_queues.h"
+
+/* Structure used for the hash lookup of queue IDs */
+struct efct_queue_hash {
+	bool		in_use;
+	u16		id;
+	u16		index;
+};
+
+/* WQ callback object */
+struct hw_wq_callback {
+	u16		instance_index;	/* use for request tag */
+	void (*callback)(void *arg, u8 *cqe, int status);
+	void		*arg;
+};
+
+struct efct_hw_config {
+	u32		n_eq;
+	u32		n_cq;
+	u32		n_mq;
+	u32		n_rq;
+	u32		n_wq;
+	u32		n_io;
+	u32		n_sgl;
+	u32		speed;
+	u32		topology;
+	/* size of the buffers for first burst */
+	u32		rq_default_buffer_size;
+	u8		esoc;
+	/* The seed for the DIF CRC calculation */
+	u16		dif_seed;
+	u8		dif_mode;
+	/* Enable driver target wqe timeouts */
+	u8		emulate_tgt_wqe_timeout;
+	bool		bounce;
+	/* Queue topology string */
+	const char	*queue_topology;
+	/* MRQ RQ selection policy */
+	u8		rq_selection_policy;
+	/* RQ quanta if rq_selection_policy == 2 */
+	u8		rr_quanta;
+	u32		filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+};
+
+struct efct_hw {
+	struct efct		*os;
+	struct sli4		sli;
+	u16			ulp_start;
+	u16			ulp_max;
+	u32			dump_size;
+	enum efct_hw_state	state;
+	bool			hw_setup_called;
+	u8			sliport_healthcheck;
+	u16			watchdog_timeout;
+
+	/* HW configuration, subject to efct_hw_set()  */
+	struct efct_hw_config	config;
+
+	/* calculated queue sizes for each type */
+	u32			num_qentries[SLI_QTYPE_MAX];
+
+	/* Storage for SLI queue objects */
+	struct sli4_queue	wq[EFCT_HW_MAX_NUM_WQ];
+	struct sli4_queue	rq[EFCT_HW_MAX_NUM_RQ];
+	u16			hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
+	struct sli4_queue	mq[EFCT_HW_MAX_NUM_MQ];
+	struct sli4_queue	cq[EFCT_HW_MAX_NUM_CQ];
+	struct sli4_queue	eq[EFCT_HW_MAX_NUM_EQ];
+
+	/* HW queue */
+	u32			eq_count;
+	u32			cq_count;
+	u32			mq_count;
+	u32			wq_count;
+	u32			rq_count;
+	struct list_head	eq_list;
+
+	struct efct_queue_hash	cq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash	rq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash	wq_hash[EFCT_HW_Q_HASH_SIZE];
+
+	/* Storage for HW queue objects */
+	struct hw_wq		*hw_wq[EFCT_HW_MAX_NUM_WQ];
+	struct hw_rq		*hw_rq[EFCT_HW_MAX_NUM_RQ];
+	struct hw_mq		*hw_mq[EFCT_HW_MAX_NUM_MQ];
+	struct hw_cq		*hw_cq[EFCT_HW_MAX_NUM_CQ];
+	struct hw_eq		*hw_eq[EFCT_HW_MAX_NUM_EQ];
+	/* count of hw_rq[] entries */
+	u32			hw_rq_count;
+	/* count of multirq RQs */
+	u32			hw_mrq_count;
+
+	 /* pool per class WQs */
+	struct efct_varray	*wq_class_array[EFCT_HW_MAX_WQ_CLASS];
+	/* pool per CPU WQs */
+	struct efct_varray	*wq_cpu_array[EFCT_HW_MAX_WQ_CPU];
+
+	/* Sequence objects used in incoming frame processing */
+	struct efct_array	*seq_pool;
+
+	/* Maintain an ordered, linked list of outstanding HW commands. */
+	spinlock_t		cmd_lock;
+	struct list_head	cmd_head;
+	struct list_head	cmd_pending;
+	u32			cmd_head_count;
+
+	struct sli4_link_event	link;
+	struct efc_domain	*domain;
+
+	u16			fcf_indicator;
+
+	/* pointer array of IO objects */
+	struct efct_hw_io	**io;
+	/* array of WQE buffs mapped to IO objects */
+	u8			*wqe_buffs;
+
+	/* IO lock to synchronize list access */
+	spinlock_t		io_lock;
+	/* IO lock to synchronize IO aborting */
+	spinlock_t		io_abort_lock;
+	/* List of IO objects in use */
+	struct list_head	io_inuse;
+	/* List of IO objects with a timed target WQE */
+	struct list_head	io_timed_wqe;
+	/* List of IO objects waiting to be freed */
+	struct list_head	io_wait_free;
+	/* List of IO objects available for allocation */
+	struct list_head	io_free;
+
+	struct efc_dma		loop_map;
+
+	struct efc_dma		xfer_rdy;
+
+	struct efc_dma		dump_sges;
+
+	struct efc_dma		rnode_mem;
+
+	struct efct_hw_rpi_ref	*rpi_ref;
+
+	atomic_t		io_alloc_failed_count;
+
+	struct efct_hw_qtop	*qtop;
+
+	/* stat: wq submit count */
+	u32			tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
+	/* stat: wq complete count */
+	u32			tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
+	/* Timer to periodically check for WQE timeouts */
+	struct timer_list	wqe_timer;
+	/* Timer for heartbeat */
+	struct timer_list	watchdog_timer;
+	bool			in_active_wqe_timer;
+	bool			active_wqe_timer_shutdown;
+
+	struct efct_pool	*wq_reqtag_pool;
+	atomic_t		send_frame_seq_id;
+};
+
+enum efct_hw_io_count_type {
+	EFCT_HW_IO_INUSE_COUNT,
+	EFCT_HW_IO_FREE_COUNT,
+	EFCT_HW_IO_WAIT_FREE_COUNT,
+	EFCT_HW_IO_N_TOTAL_IO_COUNT,
+};
+
+/* HW queue data structures */
+struct hw_eq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	u32			entry_count;
+	u32			entry_size;
+	struct efct_hw		*hw;
+	struct sli4_queue	*queue;
+	struct list_head	cq_list;
+	u32			use_count;
+	struct efct_varray	*wq_array;
+};
+
+struct hw_cq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_eq		*eq;
+	struct sli4_queue	*queue;
+	struct list_head	q_list;
+	u32			use_count;
+};
+
+struct hw_q {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+};
+
+struct hw_mq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_cq		*cq;
+	struct sli4_queue	*queue;
+
+	u32			use_count;
+};
+
+struct hw_wq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	struct efct_hw		*hw;
+
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_cq		*cq;
+	struct sli4_queue	*queue;
+	u32			class;
+	u8			ulp;
+
+	/* WQ consumed */
+	u32			wqec_set_count;
+	u32			wqec_count;
+	u32			free_count;
+	u32			total_submit_count;
+	struct list_head	pending_list;
+
+	/* HW IO allocated for use with Send Frame */
+	struct efct_hw_io	*send_frame_io;
+
+	/* Stats */
+	u32			use_count;
+	u32			wq_pending_count;
+};
+
+struct hw_rq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+
+	u32			entry_count;
+	u32			use_count;
+	u32			hdr_entry_size;
+	u32			first_burst_entry_size;
+	u32			data_entry_size;
+	u8			ulp;
+	bool			is_mrq;
+	u32			base_mrq_id;
+
+	struct hw_cq		*cq;
+
+	u8			filter_mask;
+	struct sli4_queue	*hdr;
+	struct sli4_queue	*first_burst;
+	struct sli4_queue	*data;
+
+	struct efc_hw_rq_buffer	*hdr_buf;
+	struct efc_hw_rq_buffer	*fb_buf;
+	struct efc_hw_rq_buffer	*payload_buf;
+	/* RQ tracker for this RQ */
+	struct efc_hw_sequence	**rq_tracker;
+};
+
+struct efct_hw_global {
+	const char		*queue_topology_string;
+};
+
+extern struct efct_hw_global	hw_global;
+
+struct efct_hw_send_frame_context {
+	struct efct_hw		*hw;
+	struct hw_wq_callback	*wqcb;
+	struct efct_hw_wqe	wqe;
+	void (*callback)(int status, void *arg);
+	void			*arg;
+
+	/* General purpose elements */
+	struct efc_hw_sequence	*seq;
+	struct efc_dma		payload;
+};
+
+#define EFCT_HW_OBJECT_G5              0xfeaa0001
+#define EFCT_HW_OBJECT_G6              0xfeaa0003
+struct efct_hw_grp_hdr {
+	u32			size;
+	__be32			magic_number;
+	u32			word2;
+	u8			rev_name[128];
+	u8			date[12];
+	u8			revision[32];
+};
+
+#endif /* _EFCT_HW_H */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 16/32] elx: efct: Driver initialization routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (14 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 15/32] elx: efct: Data structures and defines for hw operations James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:01   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 17/32] elx: efct: Hardware queues creation and deletion James Smart
                   ` (16 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Emulex FC Target driver init, attach and hardware setup routines.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
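[Note for reviewers, not part of the patch: a summary of the bring-up
call flow implemented below. Function names are from this patch; the
ordering reflects the code as posted.]

  efct_init()
    efct_device_init()                  - target-server init, FC transport registration
    pci_register_driver(&efct_pci_driver)
      efct_pci_probe()
        efct_device_alloc()             - claim a slot in efct_devices[]
        efct_device_interrupts_required() - efct_hw_setup() + EQ count
        efct_setup_msix()               - enable MSI-X, request IRQs
        efct_device_attach()            - xport alloc/attach/init, efclib config,
                                          start per-vector event threads,
                                          optional firmware update
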
 drivers/scsi/elx/efct/efct_driver.c | 1031 +++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_driver.h |  150 +++++
 drivers/scsi/elx/efct/efct_hw.c     | 1222 +++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h     |   16 +-
 drivers/scsi/elx/efct/efct_xport.c  |  587 +++++++++++++++++
 drivers/scsi/elx/efct/efct_xport.h  |  205 ++++++
 6 files changed, 3210 insertions(+), 1 deletion(-)
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h

diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
new file mode 100644
index 000000000000..f0ec132bdd0e
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.c
@@ -0,0 +1,1031 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_utils.h"
+
+#include "efct_els.h"
+#include "efct_hw.h"
+#include "efct_unsol.h"
+#include "efct_scsi.h"
+
+struct efct *efct_devices[MAX_EFCT_DEVICES];
+
+static int logmask;
+module_param(logmask, int, 0444);
+MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
+
+static struct libefc_function_template efct_libefc_templ = {
+	.hw_domain_alloc = efct_hw_domain_alloc,
+	.hw_domain_attach = efct_hw_domain_attach,
+	.hw_domain_free = efct_hw_domain_free,
+	.hw_domain_force_free = efct_hw_domain_force_free,
+	.domain_hold_frames = efct_domain_hold_frames,
+	.domain_accept_frames = efct_domain_accept_frames,
+
+	.hw_port_alloc = efct_hw_port_alloc,
+	.hw_port_attach = efct_hw_port_attach,
+	.hw_port_free = efct_hw_port_free,
+
+	.hw_node_alloc = efct_hw_node_alloc,
+	.hw_node_attach = efct_hw_node_attach,
+	.hw_node_detach = efct_hw_node_detach,
+	.hw_node_free_resources = efct_hw_node_free_resources,
+	.node_purge_pending = efct_node_purge_pending,
+
+	.scsi_io_alloc_disable = efct_scsi_io_alloc_disable,
+	.scsi_io_alloc_enable = efct_scsi_io_alloc_enable,
+	.scsi_validate_node = efct_scsi_validate_initiator,
+	.new_domain = efct_scsi_tgt_new_domain,
+	.del_domain = efct_scsi_tgt_del_domain,
+	.new_sport = efct_scsi_tgt_new_sport,
+	.del_sport = efct_scsi_tgt_del_sport,
+	.scsi_new_node = efct_scsi_new_initiator,
+	.scsi_del_node = efct_scsi_del_initiator,
+
+	.els_send = efct_els_req_send,
+	.els_send_ct = efct_els_send_ct,
+	.els_send_resp = efct_els_resp_send,
+	.bls_send_acc_hdr = efct_bls_send_acc_hdr,
+	.send_flogi_p2p_acc = efct_send_flogi_p2p_acc,
+	.send_ct_rsp = efct_send_ct_rsp,
+	.send_ls_rjt = efct_send_ls_rjt,
+
+	.node_io_cleanup = efct_node_io_cleanup,
+	.node_els_cleanup = efct_node_els_cleanup,
+	.node_abort_all_els = efct_node_abort_all_els,
+
+	.dispatch_fcp_cmd = efct_dispatch_fcp_cmd,
+	.recv_abts_frame = efct_node_recv_abts_frame,
+};
+
+static char *queue_topology =
+	"eq cq rq cq mq $nulp($nwq(cq wq:ulp=$rpt1)) cq wq:len=256:class=1";
+
+static int
+efct_device_init(void)
+{
+	int rc;
+
+	hw_global.queue_topology_string = queue_topology;
+
+	/* driver-wide init for target-server */
+	rc = efct_scsi_tgt_driver_init();
+	if (rc) {
+		pr_err("efct_scsi_tgt_driver_init failed rc=%d\n", rc);
+		return -1;
+	}
+
+	rc = efct_scsi_reg_fc_transport();
+	if (rc) {
+		pr_err("failed to register to FC host\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+static void
+efct_device_shutdown(void)
+{
+	efct_scsi_release_fc_transport();
+
+	efct_scsi_tgt_driver_exit();
+}
+
+static void *
+efct_device_alloc(u32 nid)
+{
+	struct efct *efct = NULL;
+	u32 i;
+
+	efct = kmalloc_node(sizeof(*efct), GFP_ATOMIC, nid);
+
+	if (efct) {
+		memset(efct, 0, sizeof(*efct));
+		for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+			if (!efct_devices[i]) {
+				efct->instance_index = i;
+				efct_devices[i] = efct;
+				break;
+			}
+		}
+
+		if (i == ARRAY_SIZE(efct_devices)) {
+			pr_err("Exceeded max supported devices.\n");
+			kfree(efct);
+			efct = NULL;
+		} else {
+			efct->attached = false;
+		}
+	}
+	return efct;
+}
+
+static struct efct *
+efct_get_instance(u32 index)
+{
+	if (index < ARRAY_SIZE(efct_devices))
+		return efct_devices[index];
+
+	return NULL;
+}
+
+static void
+efct_device_interrupt_handler(struct efct *efct, u32 vector)
+{
+	efct_hw_process(&efct->hw, vector, efct->max_isr_time_msec);
+}
+
+static int
+efct_intr_thread(struct efct_intr_context *intr_context)
+{
+	struct efct *efct = intr_context->efct;
+	int rc;
+	u32 tstart, tnow;
+
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!kthread_should_stop()) {
+		rc = wait_for_completion_timeout(&intr_context->done,
+				  usecs_to_jiffies(100000));
+		if (!rc)
+			continue;
+
+		efct_device_interrupt_handler(efct, intr_context->index);
+
+		/* If we've been running for too long, then yield */
+		tnow = jiffies_to_msecs(jiffies);
+		if ((tnow - tstart) > 5000) {
+			cond_resched();
+			tstart = tnow;
+		}
+	}
+
+	return 0;
+}
+
+static int
+efct_start_event_processing(struct efct *efct)
+{
+	u32 i;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		char label[32];
+		struct efct_intr_context *intr_ctx = NULL;
+
+		intr_ctx = &efct->intr_context[i];
+
+		intr_ctx->efct = efct;
+		intr_ctx->index = i;
+
+		init_completion(&intr_ctx->done);
+
+		snprintf(label, sizeof(label),
+			 "efct:%d:%d", efct->instance_index, i);
+
+		intr_ctx->thread =
+			kthread_create((int(*)(void *)) efct_intr_thread,
+				       intr_ctx, label);
+
+		if (IS_ERR(intr_ctx->thread)) {
+			efc_log_err(efct, "kthread_create failed: %ld\n",
+				     PTR_ERR(intr_ctx->thread));
+			intr_ctx->thread = NULL;
+
+			return -1;
+		}
+
+		wake_up_process(intr_ctx->thread);
+	}
+
+	return 0;
+}
+
+static void
+efct_teardown_msix(struct efct *efct)
+{
+	u32 i;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		synchronize_irq(efct->msix_vec[i].vector);
+		free_irq(efct->msix_vec[i].vector,
+			 &efct->intr_context[i]);
+	}
+	pci_disable_msix(efct->pcidev);
+}
+
+static int
+efct_efclib_config(struct efct *efct, struct libefc_function_template *tt)
+{
+	struct efc *efc;
+	struct sli4 *sli;
+
+	efc = kmalloc(sizeof(*efc), GFP_KERNEL);
+	if (!efc)
+		return -1;
+
+	memset(efc, 0, sizeof(struct efc));
+	efct->efcport = efc;
+
+	memcpy(&efc->tt, tt, sizeof(*tt));
+	efc->base = efct;
+	efc->pcidev = efct->pcidev;
+
+	efc->def_wwnn = efct_get_wwn(&efct->hw, EFCT_HW_WWN_NODE);
+	efc->def_wwpn = efct_get_wwn(&efct->hw, EFCT_HW_WWN_PORT);
+	efc->enable_tgt = 1;
+	efc->log_level = EFC_LOG_LIB;
+
+	sli = &efct->hw.sli;
+	efc->max_xfer_size = sli->sge_supported_length *
+			     sli_get_max_sgl(&efct->hw.sli);
+
+	efcport_init(efc);
+
+	return 0;
+}
+
+static int efct_request_firmware_update(struct efct *efct);
+
+static int
+efct_device_attach(struct efct *efct)
+{
+	u32 rc = 0, i = 0;
+
+	if (efct->attached) {
+		efc_log_warn(efct, "Device is already attached\n");
+		rc = -1;
+	} else {
+		snprintf(efct->display_name, sizeof(efct->display_name),
+			 "[%s%d] ", "fc",  efct->instance_index);
+
+		efct->logmask = logmask;
+		efct->enable_numa_support = 1;
+		efct->filter_def = "0,0,0,0";
+		efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
+		efct->model =
+			(efct->pcidev->device == EFCT_DEVICE_ID_LPE31004) ?
+			"LPE31004" : "unknown";
+		efct->fw_version = (const char *)efct_hw_get_ptr(&efct->hw,
+							EFCT_HW_FW_REV);
+		efct->driver_version = EFCT_DRIVER_VERSION;
+
+		efct->efct_req_fw_upgrade = true;
+
+		/* Allocate transport object and bring online */
+		efct->xport = efct_xport_alloc(efct);
+		if (!efct->xport) {
+			efc_log_err(efct, "failed to allocate transport object\n");
+			rc = -1;
+		} else if (efct_xport_attach(efct->xport) != 0) {
+			efc_log_err(efct, "failed to attach transport object\n");
+			rc = -1;
+		} else if (efct_xport_initialize(efct->xport) != 0) {
+			efc_log_err(efct, "failed to initialize transport object\n");
+			rc = -1;
+		} else if (efct_efclib_config(efct, &efct_libefc_templ)) {
+			efc_log_err(efct, "failed to init efclib\n");
+			rc = -1;
+		} else if (efct_start_event_processing(efct)) {
+			efc_log_err(efct, "failed to start event processing\n");
+			rc = -1;
+		} else {
+			for (i = 0; i < efct->n_msix_vec; i++) {
+				efc_log_debug(efct, "irq %d enabled\n",
+					efct->msix_vec[i].vector);
+				enable_irq(efct->msix_vec[i].vector);
+			}
+		}
+
+		efct->desc = efct->hw.sli.modeldesc;
+		efc_log_info(efct, "adapter model description: %s\n",
+			      efct->desc);
+
+		if (rc == 0) {
+			efct->attached = true;
+		} else {
+			efct_teardown_msix(efct);
+			if (efct->xport) {
+				efct_xport_free(efct->xport);
+				efct->xport = NULL;
+			}
+		}
+
+		if (efct->efct_req_fw_upgrade) {
+			efc_log_debug(efct, "firmware update is in progress\n");
+			efct_request_firmware_update(efct);
+		}
+	}
+
+	return rc;
+}
+
+static void
+efct_stop_event_processing(struct efct *efct)
+{
+	u32 i;
+	struct task_struct *thread = NULL;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		disable_irq(efct->msix_vec[i].vector);
+
+		thread = efct->intr_context[i].thread;
+		if (!thread)
+			continue;
+
+		/* Call stop */
+		kthread_stop(thread);
+	}
+}
+
+static int
+efct_device_detach(struct efct *efct)
+{
+	int rc = 0;
+
+	if (efct) {
+		if (!efct->attached) {
+			efc_log_warn(efct, "Device is not attached\n");
+			return -1;
+		}
+
+		rc = efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN);
+		if (rc)
+			efc_log_err(efct, "Transport Shutdown timed out\n");
+
+		efct_stop_event_processing(efct);
+
+		if (efct_xport_detach(efct->xport) != 0)
+			efc_log_err(efct, "Transport detach failed\n");
+
+		efct_xport_free(efct->xport);
+		efct->xport = NULL;
+
+		efcport_destroy(efct->efcport);
+		kfree(efct->efcport);
+
+		efct->attached = false;
+	}
+
+	return 0;
+}
+
+static int
+efct_fw_reset(struct efct *efct)
+{
+	int rc = 0;
+	int index = 0;
+	u8 bus, dev;
+	struct efct *other_efct;
+
+	bus = efct->pcidev->bus->number;
+	dev = PCI_SLOT(efct->pcidev->devfn);
+
+	while ((other_efct = efct_get_instance(index++)) != NULL) {
+		u8 other_bus, other_dev;
+
+		other_bus = other_efct->pcidev->bus->number;
+		other_dev = PCI_SLOT(other_efct->pcidev->devfn);
+
+		if (bus == other_bus && dev == other_dev &&
+		    timer_pending(&other_efct->xport->stats_timer)) {
+			efc_log_debug(other_efct,
+				       "removing link stats timer\n");
+			del_timer(&other_efct->xport->stats_timer);
+		}
+	}
+
+	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
+		efc_log_test(efct, "failed to reset firmware\n");
+		rc = -1;
+	} else {
+		efc_log_debug(efct,
+			       "successfully reset firmware. Now resetting port\n");
+		/* now flag all functions on the same device
+		 * as this port as uninitialized
+		 */
+		index = 0;
+
+		while ((other_efct = efct_get_instance(index++)) != NULL) {
+			u8 other_bus, other_dev;
+
+			other_bus = other_efct->pcidev->bus->number;
+			other_dev = PCI_SLOT(other_efct->pcidev->devfn);
+
+			if (bus == other_bus && dev == other_dev) {
+				if (other_efct->hw.state !=
+						EFCT_HW_STATE_UNINITIALIZED) {
+					other_efct->hw.state =
+						EFCT_HW_STATE_QUEUES_ALLOCATED;
+				}
+				efct_device_detach(efct);
+				rc = efct_device_attach(efct);
+
+				efc_log_debug(other_efct,
+					       "re-start driver with new firmware\n");
+			}
+		}
+	}
+	return rc;
+}
+
+static void
+efct_fw_write_cb(int status, u32 actual_write_length,
+		 u32 change_status, void *arg)
+{
+	struct efct_fw_write_result *result = arg;
+
+	result->status = status;
+	result->actual_xfer = actual_write_length;
+	result->change_status = change_status;
+
+	complete(&result->done);
+}
+
+static int
+efct_firmware_write(struct efct *efct, const u8 *buf, size_t buf_len,
+		    u8 *change_status)
+{
+	int rc = 0;
+	u32 bytes_left;
+	u32 xfer_size;
+	u32 offset;
+	struct efc_dma dma;
+	int last = 0;
+	struct efct_fw_write_result result;
+
+	init_completion(&result.done);
+
+	bytes_left = buf_len;
+	offset = 0;
+
+	dma.size = FW_WRITE_BUFSIZE;
+	dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+				      dma.size, &dma.phys, GFP_DMA);
+	if (!dma.virt)
+		return -ENOMEM;
+
+	while (bytes_left > 0) {
+		if (bytes_left > FW_WRITE_BUFSIZE)
+			xfer_size = FW_WRITE_BUFSIZE;
+		else
+			xfer_size = bytes_left;
+
+		memcpy(dma.virt, buf + offset, xfer_size);
+
+		if (bytes_left == xfer_size)
+			last = 1;
+
+		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
+				       last, efct_fw_write_cb, &result);
+
+		if (wait_for_completion_interruptible(&result.done) != 0) {
+			rc = -ENXIO;
+			break;
+		}
+
+		if (result.actual_xfer == 0 || result.status != 0) {
+			rc = -EFAULT;
+			break;
+		}
+
+		if (last)
+			*change_status = result.change_status;
+
+		bytes_left -= result.actual_xfer;
+		offset += result.actual_xfer;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev, dma.size, dma.virt, dma.phys);
+	return rc;
+}
+
+static int
+efct_request_firmware_update(struct efct *efct)
+{
+	int rc = 0;
+	u8 file_name[256], fw_change_status = 0;
+	const struct firmware *fw;
+	struct efct_hw_grp_hdr *fw_image;
+
+	snprintf(file_name, 256, "%s.grp", efct->model);
+	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
+	if (rc) {
+		efc_log_err(efct, "Firmware file(%s) not found.\n", file_name);
+		return rc;
+	}
+	fw_image = (struct efct_hw_grp_hdr *)fw->data;
+
+	/* Check whether the firmware provided is compatible with this
+	 * particular adapter
+	 */
+	if ((be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G5) &&
+	    (be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G6)) {
+		efc_log_warn(efct,
+			      "Invalid FW image found. Magic: 0x%x Size: %zu\n",
+			be32_to_cpu(fw_image->magic_number), fw->size);
+		rc = -1;
+		goto exit;
+	}
+
+	if (!strncmp(efct->fw_version, fw_image->revision,
+		     strnlen(fw_image->revision, 16))) {
+		efc_log_debug(efct,
+			       "No update required. Firmware is already up to date.\n");
+		rc = 0;
+		goto exit;
+	}
+	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
+	if (rc) {
+		efc_log_err(efct,
+			     "Firmware update failed. Return code = %d\n", rc);
+	} else {
+		efc_log_info(efct, "Firmware updated successfully\n");
+		switch (fw_change_status) {
+		case 0x00:
+			efc_log_debug(efct,
+				       "No reset needed, new firmware is active.\n");
+			break;
+		case 0x01:
+			efc_log_warn(efct,
+				      "A physical device reset (host reboot) is needed to activate the new firmware\n");
+			break;
+		case 0x02:
+		case 0x03:
+			efc_log_debug(efct,
+				       "firmware is resetting to activate the new firmware\n");
+			efct_fw_reset(efct);
+			break;
+		default:
+			efc_log_debug(efct,
+				       "Unexpected value change_status: %d\n",
+				fw_change_status);
+			break;
+		}
+	}
+
+exit:
+	release_firmware(fw);
+
+	return rc;
+}
+
+static void
+efct_device_free(struct efct *efct)
+{
+	if (efct) {
+		efct_devices[efct->instance_index] = NULL;
+
+		kfree(efct);
+	}
+}
+
+static int
+efct_device_interrupts_required(struct efct *efct)
+{
+	if (efct_hw_setup(&efct->hw, efct, efct->pcidev)
+				!= EFCT_HW_RTN_SUCCESS) {
+		return -1;
+	}
+	return efct_hw_qtop_eq_count(&efct->hw);
+}
+
+static irqreturn_t
+efct_intr_msix(int irq, void *handle)
+{
+	struct efct_intr_context *intr_context = handle;
+
+	complete(&intr_context->done);
+	return IRQ_HANDLED;
+}
+
+static int
+efct_setup_msix(struct efct *efct, u32 num_interrupts)
+{
+	int	rc = 0;
+	u32 i;
+
+	if (!pci_find_capability(efct->pcidev, PCI_CAP_ID_MSIX)) {
+		dev_err(&efct->pcidev->dev,
+			"%s : MSI-X not available\n", __func__);
+		return -EINVAL;
+	}
+
+	if (num_interrupts > ARRAY_SIZE(efct->msix_vec)) {
+		dev_err(&efct->pcidev->dev,
+			"%s : num_interrupts: %d greater than vectors\n",
+			__func__, num_interrupts);
+		return -1;
+	}
+
+	efct->n_msix_vec = num_interrupts;
+	for (i = 0; i < num_interrupts; i++)
+		efct->msix_vec[i].entry = i;
+
+	rc = pci_enable_msix_exact(efct->pcidev,
+				   efct->msix_vec, efct->n_msix_vec);
+	if (!rc) {
+		for (i = 0; i < num_interrupts; i++) {
+			rc = request_irq(efct->msix_vec[i].vector,
+					 efct_intr_msix,
+					 0, EFCT_DRIVER_NAME,
+					 &efct->intr_context[i]);
+			if (rc)
+				break;
+		}
+	} else {
+		dev_err(&efct->pcidev->dev,
+			"%s : rc %d returned, IRQ allocation failed\n",
+			   __func__, rc);
+	}
+
+	return rc;
+}
+
+static struct pci_device_id efct_pci_table[] = {
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_ID_LPE31004), 0},
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_ID_G7), 0},
+	{}	/* terminate list */
+};
+
+static int
+efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+	struct efct *efct = NULL;
+	int rc;
+	u32 i, r;
+	int num_interrupts = 0;
+	int nid;
+	struct task_struct *thread = NULL;
+
+	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc)
+		goto efct_pci_probe_err_enable;
+
+	pci_set_master(pdev);
+
+	rc = pci_set_mwi(pdev);
+	if (rc) {
+		dev_info(&pdev->dev,
+			 "pci_set_mwi returned %d\n", rc);
+		goto efct_pci_probe_err_set_mwi;
+	}
+
+	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
+	if (rc) {
+		dev_err(&pdev->dev, "pci_request_regions failed\n");
+		goto efct_pci_probe_err_request_regions;
+	}
+
+	/* Fetch the NUMA node id for this device */
+	nid = dev_to_node(&pdev->dev);
+	if (nid < 0) {
+		dev_err(&pdev->dev,
+			"Warning: NUMA node ID is %d\n", nid);
+		nid = 0;
+	}
+
+	/* Allocate efct */
+	efct = efct_device_alloc(nid);
+	if (!efct) {
+		dev_err(&pdev->dev, "Failed to allocate efct\n");
+		rc = -ENOMEM;
+		goto efct_pci_probe_err_efct_device_alloc;
+	}
+
+	efct->pcidev = pdev;
+
+	if (efct->enable_numa_support)
+		efct->numa_node = nid;
+
+	/* Map all memory BARs */
+	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
+			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
+						  pci_resource_len(pdev, i));
+			r++;
+		}
+
+		/*
+		 * If the 64-bit attribute is set, both this BAR and the
+		 * next form the complete address. Skip processing the
+		 * next BAR.
+		 */
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
+			i++;
+	}
+
+	pci_set_drvdata(pdev, efct);
+
+	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 ||
+	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
+		dev_warn(&pdev->dev,
+			 "trying DMA_BIT_MASK(32)\n");
+		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 ||
+		    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
+			dev_err(&pdev->dev,
+				"setting DMA_BIT_MASK failed\n");
+			rc = -1;
+			goto efct_pci_probe_err_setup_thread;
+		}
+	}
+
+	num_interrupts = efct_device_interrupts_required(efct);
+	if (num_interrupts < 0) {
+		efc_log_err(efct, "efct_device_interrupts_required failed\n");
+		rc = -1;
+		goto efct_pci_probe_err_setup_thread;
+	}
+
+	/*
+	 * Initialize MSIX interrupts, note,
+	 * efct_setup_msix() enables the interrupt
+	 */
+	rc = efct_setup_msix(efct, num_interrupts);
+	if (rc) {
+		dev_err(&pdev->dev, "Can't setup msix\n");
+		goto efct_pci_probe_err_setup_msix;
+	}
+	/* Disable interrupt for now */
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		efc_log_debug(efct, "irq %d disabled\n",
+			       efct->msix_vec[i].vector);
+		disable_irq(efct->msix_vec[i].vector);
+	}
+
+	rc = efct_device_attach((struct efct *)efct);
+	if (rc)
+		goto efct_pci_probe_err_setup_msix;
+
+	return 0;
+
+efct_pci_probe_err_setup_msix:
+	for (i = 0; i < (u32)num_interrupts; i++) {
+		thread = efct->intr_context[i].thread;
+		if (!thread)
+			continue;
+
+		/* Call stop */
+		kthread_stop(thread);
+	}
+
+efct_pci_probe_err_setup_thread:
+	pci_set_drvdata(pdev, NULL);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+	efct_device_free(efct);
+efct_pci_probe_err_efct_device_alloc:
+	pci_release_regions(pdev);
+efct_pci_probe_err_request_regions:
+	pci_clear_mwi(pdev);
+efct_pci_probe_err_set_mwi:
+	pci_disable_device(pdev);
+efct_pci_probe_err_enable:
+	return rc;
+}
+
+static void
+efct_pci_remove(struct pci_dev *pdev)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+	u32	i;
+
+	if (!efct)
+		return;
+
+	efct_device_detach(efct);
+
+	efct_teardown_msix(efct);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+
+	pci_set_drvdata(pdev, NULL);
+
+	efct_devices[efct->instance_index] = NULL;
+
+	efct_device_free(efct);
+
+	pci_release_regions(pdev);
+
+	pci_disable_device(pdev);
+}
+
+static void
+efct_device_prep_for_reset(struct efct *efct, struct pci_dev *pdev)
+{
+	if (efct) {
+		efc_log_debug(efct,
+			       "PCI channel disable preparing for reset\n");
+		efct_device_detach(efct);
+		/* Disable interrupt and pci device */
+		efct_teardown_msix(efct);
+	}
+	pci_disable_device(pdev);
+}
+
+static void
+efct_device_prep_for_recover(struct efct *efct)
+{
+	if (efct) {
+		efc_log_debug(efct, "PCI channel preparing for recovery\n");
+		efct_hw_io_abort_all(&efct->hw);
+	}
+}
+
+/**
+ * efct_pci_io_error_detected - method for handling PCI I/O error
+ * @pdev: pointer to PCI device.
+ * @state: the current PCI connection state.
+ *
+ * This routine is registered to the PCI subsystem for error handling. This
+ * function is called by the PCI subsystem after a PCI bus error affecting
+ * this device has been detected. When this routine is invoked, it dispatches
+ * device error detected handling routine, which will perform the proper
+ * error detected operation.
+ *
+ * Return codes
+ * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
+ * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
+ */
+static pci_ers_result_t
+efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+	pci_ers_result_t rc;
+
+	switch (state) {
+	case pci_channel_io_normal:
+		efct_device_prep_for_recover(efct);
+		rc = PCI_ERS_RESULT_CAN_RECOVER;
+		break;
+	case pci_channel_io_frozen:
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	case pci_channel_io_perm_failure:
+		efct_device_detach(efct);
+		rc = PCI_ERS_RESULT_DISCONNECT;
+		break;
+	default:
+		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
+			       state);
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	}
+
+	return rc;
+}
+
+static pci_ers_result_t
+efct_pci_io_slot_reset(struct pci_dev *pdev)
+{
+	int rc;
+	struct efct *efct = pci_get_drvdata(pdev);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc) {
+		efc_log_err(efct,
+			     "failed to re-enable PCI device after reset.\n");
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	/*
+	 * Newer kernels' pci_restore_state() clears the device's saved_state
+	 * flag, so the restored state needs to be saved again.
+	 */
+
+	pci_save_state(pdev);
+
+	pci_set_master(pdev);
+
+	rc = efct_setup_msix(efct, efct->n_msix_vec);
+	if (rc)
+		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
+			    rc);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void
+efct_pci_io_resume(struct pci_dev *pdev)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+}
+
+MODULE_DEVICE_TABLE(pci, efct_pci_table);
+
+static struct pci_error_handlers efct_pci_err_handler = {
+	.error_detected = efct_pci_io_error_detected,
+	.slot_reset = efct_pci_io_slot_reset,
+	.resume = efct_pci_io_resume,
+};
+
+static struct pci_driver efct_pci_driver = {
+	.name		= EFCT_DRIVER_NAME,
+	.id_table	= efct_pci_table,
+	.probe		= efct_pci_probe,
+	.remove		= efct_pci_remove,
+	.err_handler	= &efct_pci_err_handler,
+};
+
+static int efct_proc_get(struct seq_file *m, void *v)
+{
+	u32 i;
+	u32 j;
+	u32 device_count = 0;
+
+	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+		if (efct_devices[i])
+			device_count++;
+	}
+
+	seq_printf(m, "%d\n", device_count);
+
+	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+		if (efct_devices[i]) {
+			struct efct *efct = efct_devices[i];
+
+			for (j = 0; j < efct->n_msix_vec; j++) {
+				seq_printf(m, "%d,%d,%d\n", i,
+					   efct->msix_vec[j].vector,
+					-1);
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int efct_proc_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, efct_proc_get, NULL);
+}
+
+static const struct file_operations efct_proc_fops = {
+	.owner = THIS_MODULE,
+	.open = efct_proc_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+
+static
+int __init efct_init(void)
+{
+	int	rc = -ENODEV;
+
+	rc = efct_device_init();
+	if (rc) {
+		pr_err("efct_device_init failed rc=%d\n", rc);
+		return -ENOMEM;
+	}
+
+	rc = pci_register_driver(&efct_pci_driver);
+	if (rc)
+		goto l1;
+
+	proc_create(EFCT_DRIVER_NAME, 0444, NULL, &efct_proc_fops);
+	return rc;
+
+l1:
+	efct_device_shutdown();
+	return rc;
+}
+
+static void __exit efct_exit(void)
+{
+	pci_unregister_driver(&efct_pci_driver);
+	remove_proc_entry(EFCT_DRIVER_NAME, NULL);
+	efct_device_shutdown();
+}
+
+module_init(efct_init);
+module_exit(efct_exit);
+MODULE_VERSION(EFCT_DRIVER_VERSION);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Broadcom");
diff --git a/drivers/scsi/elx/efct/efct_driver.h b/drivers/scsi/elx/efct/efct_driver.h
new file mode 100644
index 000000000000..70c0dfaa4c7c
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.h
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_DRIVER_H__)
+#define __EFCT_DRIVER_H__
+
+/***************************************************************************
+ * OS specific includes
+ */
+#include <stdarg.h>
+#include <linux/version.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <asm-generic/ioctl.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <asm/byteorder.h>
+#include <linux/timer.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/sched.h>
+#include <asm/current.h>
+#include <asm/cacheflush.h>
+#include <linux/pagemap.h>
+#include <linux/kthread.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+#include <linux/jiffies.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
+#include <linux/firmware.h>
+#include <linux/sched/signal.h>
+#include "../include/efc_common.h"
+
+#define EFCT_DRIVER_NAME			"efct"
+#define EFCT_DRIVER_VERSION			"1.0.0.0"
+
+/* EFCT_OS_MAX_ISR_TIME_MSEC -
+ * maximum time driver code should spend in an interrupt
+ * or kernel thread context without yielding
+ */
+#define EFCT_OS_MAX_ISR_TIME_MSEC		1000
+
+#define EFCT_FC_RQ_SIZE_DEFAULT			1024
+#define EFCT_FC_MAX_SGL				64
+#define EFCT_FC_DIF_SEED			0
+
+/* Timeouts */
+#define EFCT_FC_ELS_SEND_DEFAULT_TIMEOUT	0
+#define EFCT_FC_ELS_DEFAULT_RETRIES		3
+#define EFCT_FC_FLOGI_TIMEOUT_SEC		5
+#define EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC    30000000 /* 30 seconds */
+
+/* Watermark */
+#define EFCT_WATERMARK_HIGH_PCT			90
+#define EFCT_WATERMARK_LOW_PCT			80
+#define EFCT_IO_WATERMARK_PER_INITIATOR		8
+
+#include "efct_utils.h"
+#include "../libefc/efclib.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+#include "efct_xport.h"
+
+#define EFCT_PCI_MAX_REGS			6
+#define MAX_PCI_INTERRUPTS			16
+
+struct efct_intr_context {
+	struct efct		*efct;
+	u32			index;
+	struct completion	done;
+	struct task_struct	*thread;
+};
+
+struct efct {
+	struct pci_dev			*pcidev;
+	void __iomem			*reg[EFCT_PCI_MAX_REGS];
+
+	struct msix_entry		msix_vec[MAX_PCI_INTERRUPTS];
+	u32				n_msix_vec;
+	struct efct_intr_context	intr_context[MAX_PCI_INTERRUPTS];
+	u32				numa_node;
+
+	char				display_name[EFC_DISPLAY_NAME_LENGTH];
+	bool				attached;
+	struct efct_scsi_tgt		tgt_efct;
+	struct efct_xport		*xport;
+	struct efc			*efcport;
+	struct Scsi_Host		*shost;
+	int				logmask;
+	u32				max_isr_time_msec;
+
+	const char			*desc;
+	u32				instance_index;
+
+	const char			*model;
+	const char			*driver_version;
+	const char			*fw_version;
+
+	struct efct_hw			hw;
+
+	u32				num_vports;
+	u32				rq_selection_policy;
+	char				*filter_def;
+
+	bool				soft_wwn_enable;
+
+	/*
+	 * Target IO timer value:
+	 * Zero: target command timeout disabled.
+	 * Non-zero: Timeout value, in seconds, for target commands
+	 */
+	u32				target_io_timer_sec;
+
+	int				speed;
+	int				topology;
+
+	bool				enable_numa_support;
+	u8				efct_req_fw_upgrade;
+	u16				sw_feature_cap;
+	struct dentry			*sess_debugfs_dir;
+};
+
+#define FW_WRITE_BUFSIZE		(64 * 1024)
+
+struct efct_fw_write_result {
+	struct completion done;
+	int status;
+	u32 actual_xfer;
+	u32 change_status;
+};
+
+#define MAX_EFCT_DEVICES		64
+extern struct efct			*efct_devices[MAX_EFCT_DEVICES];
+
+#endif /* __EFCT_DRIVER_H__ */
diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
new file mode 100644
index 000000000000..41e400f9d401
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -0,0 +1,1222 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+
+#define EFCT_HW_MQ_DEPTH		128
+#define EFCT_HW_WQ_TIMER_PERIOD_MS	500
+
+#define EFCT_HW_REQUE_XRI_REGTAG	65534
+
+/* HW global data */
+struct efct_hw_global hw_global;
+
+static enum efct_hw_rtn
+efct_hw_link_event_init(struct efct_hw *hw)
+{
+	hw->link.status = SLI_LINK_STATUS_MAX;
+	hw->link.topology = SLI_LINK_TOPO_NONE;
+	hw->link.medium = SLI_LINK_MEDIUM_MAX;
+	hw->link.speed = 0;
+	hw->link.loop_map = NULL;
+	hw->link.fc_id = U32_MAX;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * efct_hw_adjust_wqs() - Adjust the number of WQs and CQs within the HW.
+ * @hw: Hardware context allocated by the caller.
+ *
+ * Calculates the number of WQs and associated CQs needed in the HW based on
+ * the number of IOs. Calculates the starting CQ index for each WQ, RQ and
+ * MQ.
+ */
+static void
+efct_hw_adjust_wqs(struct efct_hw *hw)
+{
+	u32 max_wq_num = hw->sli.qinfo.max_qcount[SLI_QTYPE_WQ];
+	u32 max_wq_entries = hw->num_qentries[SLI_QTYPE_WQ];
+	u32 max_cq_entries = hw->num_qentries[SLI_QTYPE_CQ];
+
+	/*
+	 * possibly adjust the size of the WQs so that the CQ is twice as
+	 * big as the WQ to allow for 2 completions per IO. This allows us to
+	 * handle multi-phase as well as aborts.
+	 */
+	if (max_cq_entries < max_wq_entries * 2) {
+		hw->num_qentries[SLI_QTYPE_WQ] = max_cq_entries / 2;
+		max_wq_entries =  hw->num_qentries[SLI_QTYPE_WQ];
+	}
+
+	/*
+	 * Calculate the number of WQs to use based on the number of IOs.
+	 *
+	 * Note: We need to reserve room for aborts which must be sent down
+	 *       the same WQ as the IO. So we allocate enough WQ space to
+	 *       handle 2 times the number of IOs. Half of the space will be
+	 *       used for normal IOs and the other half is reserved for aborts.
+	 */
+	hw->config.n_wq = ((hw->config.n_io * 2) + (max_wq_entries - 1))
+			    / max_wq_entries;
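+	/*
+	 * Illustrative example (values assumed, not read from hardware):
+	 * with n_io = 1024 and max_wq_entries = 512 this works out to
+	 * ((1024 * 2) + 511) / 512 = 4 WQs, before the clamps below.
+	 */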
+
+	/* make sure we haven't exceeded the max supported in the HW */
+	if (hw->config.n_wq > EFCT_HW_MAX_NUM_WQ)
+		hw->config.n_wq = EFCT_HW_MAX_NUM_WQ;
+
+	/* make sure we haven't exceeded the chip maximum */
+	if (hw->config.n_wq > max_wq_num)
+		hw->config.n_wq = max_wq_num;
+}
+
+static inline void
+efct_hw_add_io_timed_wqe(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	unsigned long flags = 0;
+
+	if (hw->config.emulate_tgt_wqe_timeout && io->tgt_wqe_timeout) {
+		/*
+		 * Active WQE list currently only used for
+		 * target WQE timeouts.
+		 */
+		spin_lock_irqsave(&hw->io_lock, flags);
+		INIT_LIST_HEAD(&io->wqe_link);
+		list_add_tail(&io->wqe_link, &hw->io_timed_wqe);
+		io->submit_ticks = jiffies_64;
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+	}
+}
+
+static inline void
+efct_hw_remove_io_timed_wqe(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	unsigned long flags = 0;
+
+	if (hw->config.emulate_tgt_wqe_timeout) {
+		/*
+		 * If target wqe timeouts are enabled,
+		 * remove from active wqe list.
+		 */
+		spin_lock_irqsave(&hw->io_lock, flags);
+		if (io->wqe_link.next)
+			list_del(&io->wqe_link);
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+	}
+}
+
+static enum efct_hw_rtn
+efct_hw_read_max_dump_size(struct efct_hw *hw)
+{
+	u8	buf[SLI4_BMBX_SIZE];
+	u8 func;
+	struct efct *efct = hw->os;
+	int	rc = 0;
+
+	/* attempt to determine the dump size for function 0 only. */
+	func = PCI_FUNC(efct->pcidev->devfn);
+	if (func == 0) {
+		if (!sli_cmd_common_set_dump_location(&hw->sli, buf,
+						     SLI4_BMBX_SIZE, 1, 0,
+						     NULL, 0)) {
+			struct sli4_rsp_cmn_set_dump_location *rsp =
+				(struct sli4_rsp_cmn_set_dump_location *)
+				(buf + offsetof(struct sli4_cmd_sli_config,
+						payload.embed));
+
+			rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL,
+					     NULL);
+			if (rc != EFCT_HW_RTN_SUCCESS) {
+				efc_log_test(hw->os,
+					      "set dump location cmd failed\n");
+				return rc;
+			}
+			hw->dump_size =
+				(le32_to_cpu(rsp->buffer_length_dword) &
+				 RSP_SET_DUMP_BUFFER_LEN);
+			efc_log_debug(hw->os, "Dump size %x\n",
+				       hw->dump_size);
+		}
+	}
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
+{
+	u32 i;
+	struct sli4 *sli = &hw->sli;
+
+	if (!hw) {
+		pr_err("bad parameter(s) hw=%p\n", hw);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->hw_setup_called)
+		return EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * efct_hw_init() relies on NULL pointers indicating that a structure
+	 * needs allocation. If a structure is non-NULL, efct_hw_init() won't
+	 * free/realloc that memory
+	 */
+	memset(hw, 0, sizeof(struct efct_hw));
+
+	hw->hw_setup_called = true;
+
+	hw->os = os;
+
+	spin_lock_init(&hw->cmd_lock);
+	INIT_LIST_HEAD(&hw->cmd_head);
+	INIT_LIST_HEAD(&hw->cmd_pending);
+	hw->cmd_head_count = 0;
+
+	spin_lock_init(&hw->io_lock);
+	spin_lock_init(&hw->io_abort_lock);
+
+	atomic_set(&hw->io_alloc_failed_count, 0);
+
+	hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
+	hw->config.dif_seed = 0;
+	if (sli_setup(&hw->sli, hw->os, pdev, ((struct efct *)os)->reg)) {
+		efc_log_err(hw->os, "SLI setup failed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_link_event_init(hw);
+
+	sli_callback(&hw->sli, SLI4_CB_LINK, efct_hw_cb_link, hw);
+
+	/*
+	 * Set all the queue sizes to the maximum allowed.
+	 */
+	for (i = 0; i < ARRAY_SIZE(hw->num_qentries); i++)
+		hw->num_qentries[i] = hw->sli.qinfo.max_qentries[i];
+
+	/*
+	 * The RQ assignment for RQ pair mode.
+	 */
+	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
+	hw->config.n_io = hw->sli.extent[SLI_RSRC_XRI].size;
+
+	(void)efct_hw_read_max_dump_size(hw);
+
+	/* calculate the number of WQs required. */
+	efct_hw_adjust_wqs(hw);
+
+	/* Set the default dif mode */
+	if (!(sli->features & SLI4_REQFEAT_DIF &&
+	      sli->t10_dif_inline_capable)) {
+		efc_log_test(hw->os,
+			      "not inline capable, setting mode to separate\n");
+		hw->config.dif_mode = EFCT_HW_DIF_MODE_SEPARATE;
+	}
+
+	hw->config.queue_topology = hw_global.queue_topology_string;
+
+	hw->qtop = efct_hw_qtop_parse(hw, hw->config.queue_topology);
+	if (!hw->qtop) {
+		efc_log_crit(hw->os, "Queue topology string is invalid\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	hw->config.n_eq = hw->qtop->entry_counts[QTOP_EQ];
+	hw->config.n_cq = hw->qtop->entry_counts[QTOP_CQ];
+	hw->config.n_rq = hw->qtop->entry_counts[QTOP_RQ];
+	hw->config.n_wq = hw->qtop->entry_counts[QTOP_WQ];
+	hw->config.n_mq = hw->qtop->entry_counts[QTOP_MQ];
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static void
+efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
+{
+	efc_log_info(hw->os,
+		      "REG_FCFI: filter[%d] %08X -> RQ[%d] id=%d\n",
+		     j, hw->config.filter_def[j], i, id);
+}
+
+enum efct_hw_rtn
+efct_hw_init(struct efct_hw *hw)
+{
+	enum efct_hw_rtn rc;
+	u32 i = 0;
+	u8 buf[SLI4_BMBX_SIZE];
+	u32 max_rpi;
+	int rem_count;
+	u32 count;
+	unsigned long flags = 0;
+	struct efct_hw_io *temp;
+	struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	struct sli4 *sli = &hw->sli;
+	struct efct *efct = hw->os;
+
+	/*
+	 * Make sure the command lists are empty. If this is start-of-day,
+	 * they'll be empty since they were just initialized in efct_hw_setup.
+	 * If we've just gone through a reset, the command and command pending
+	 * lists should have been cleaned up as part of the reset
+	 * (efct_hw_reset()).
+	 */
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		efc_log_test(hw->os, "command found on cmd list\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (!list_empty(&hw->cmd_pending)) {
+		efc_log_test(hw->os, "command found on pending list\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	/* Free RQ buffers if previously allocated */
+	efct_hw_rx_free(hw);
+
+	/*
+	 * The IO queues must be initialized here for the reset case. The
+	 * efct_hw_init_io() function will re-add the IOs to the free list.
+	 * The cmd_head list should be OK since we free all entries in
+	 * efct_hw_command_cancel() that is called in the efct_hw_reset().
+	 */
+
+	/* If we are in this function due to a reset, there may be stale items
+	 * on lists that need to be removed.  Clean them up.
+	 */
+	rem_count = 0;
+	if (hw->io_wait_free.next) {
+		while ((!list_empty(&hw->io_wait_free))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_wait_free,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0) {
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_wait_free list\n",
+				rem_count);
+		}
+	}
+	rem_count = 0;
+	if (hw->io_inuse.next) {
+		while ((!list_empty(&hw->io_inuse))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_inuse,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0)
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_inuse list\n",
+				       rem_count);
+	}
+	rem_count = 0;
+	if (hw->io_free.next) {
+		while ((!list_empty(&hw->io_free))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_free,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0)
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_free list\n",
+				       rem_count);
+	}
+
+	INIT_LIST_HEAD(&hw->io_inuse);
+	INIT_LIST_HEAD(&hw->io_free);
+	INIT_LIST_HEAD(&hw->io_wait_free);
+	INIT_LIST_HEAD(&hw->io_timed_wqe);
+
+	/* If MRQ is not required, make sure we don't request the feature. */
+	if (hw->config.n_rq == 1)
+		hw->sli.features &= (~SLI4_REQFEAT_MRQP);
+
+	if (sli_init(&hw->sli)) {
+		efc_log_err(hw->os, "SLI failed to initialize\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->sliport_healthcheck) {
+		rc = efct_hw_config_sli_port_health_check(hw, 0, 1);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "Enable port Health check fail\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * Set FDT transfer hint, only works on Lancer
+	 */
+	if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2 &&
+	    EFCT_HW_FDT_XFER_HINT != 0) {
+		/*
+		 * Non-fatal error. In particular, we can disregard failure to
+		 * set EFCT_HW_FDT_XFER_HINT on devices with legacy firmware
+		 * that do not support EFCT_HW_FDT_XFER_HINT feature.
+		 */
+		efct_hw_config_set_fdt_xfer_hint(hw, EFCT_HW_FDT_XFER_HINT);
+	}
+
+	/*
+	 * Verify that we have not exceeded any queue sizes
+	 */
+	if (hw->config.n_eq > sli->qinfo.max_qcount[SLI_QTYPE_EQ]) {
+		efc_log_err(hw->os, "requested %d EQ but %d allowed\n",
+			     hw->config.n_eq,
+			sli->qinfo.max_qcount[SLI_QTYPE_EQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_cq > sli->qinfo.max_qcount[SLI_QTYPE_CQ]) {
+		efc_log_err(hw->os, "requested %d CQ but %d allowed\n",
+			     hw->config.n_cq,
+			sli->qinfo.max_qcount[SLI_QTYPE_CQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_mq > sli->qinfo.max_qcount[SLI_QTYPE_MQ]) {
+		efc_log_err(hw->os, "requested %d MQ but %d allowed\n",
+			     hw->config.n_mq,
+			sli->qinfo.max_qcount[SLI_QTYPE_MQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_rq > sli->qinfo.max_qcount[SLI_QTYPE_RQ]) {
+		efc_log_err(hw->os, "requested %d RQ but %d allowed\n",
+			     hw->config.n_rq,
+			sli->qinfo.max_qcount[SLI_QTYPE_RQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (hw->config.n_wq > sli->qinfo.max_qcount[SLI_QTYPE_WQ]) {
+		efc_log_err(hw->os, "requested %d WQ but %d allowed\n",
+			     hw->config.n_wq,
+			sli->qinfo.max_qcount[SLI_QTYPE_WQ]);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* zero the hashes */
+	memset(hw->cq_hash, 0, sizeof(hw->cq_hash));
+	efc_log_debug(hw->os, "Max CQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_CQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->rq_hash, 0, sizeof(hw->rq_hash));
+	efc_log_debug(hw->os, "Max RQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_RQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->wq_hash, 0, sizeof(hw->wq_hash));
+	efc_log_debug(hw->os, "Max WQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_WQ, EFCT_HW_Q_HASH_SIZE);
+
+	rc = efct_hw_init_queues(hw, hw->qtop);
+	if (rc != EFCT_HW_RTN_SUCCESS)
+		return rc;
+
+	max_rpi = sli->extent[SLI_RSRC_RPI].size;
+	i = sli_fc_get_rpi_requirements(&hw->sli, max_rpi);
+	if (i) {
+		struct efc_dma payload_memory;
+
+		rc = EFCT_HW_RTN_ERROR;
+
+		if (hw->rnode_mem.size) {
+			dma_free_coherent(&efct->pcidev->dev,
+					  hw->rnode_mem.size,
+					  hw->rnode_mem.virt,
+					  hw->rnode_mem.phys);
+			memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
+		}
+
+		hw->rnode_mem.size = i;
+		hw->rnode_mem.virt = dma_alloc_coherent(&efct->pcidev->dev,
+							hw->rnode_mem.size,
+							&hw->rnode_mem.phys,
+							GFP_DMA);
+		if (!hw->rnode_mem.virt) {
+			efc_log_err(hw->os,
+				     "remote node memory allocation fail\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+
+		payload_memory.size = 0;
+		if (!sli_cmd_post_hdr_templates(&hw->sli, buf, SLI4_BMBX_SIZE,
+						&hw->rnode_mem, U16_MAX,
+						&payload_memory)) {
+			rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL,
+					     NULL);
+
+			if (payload_memory.size != 0) {
+				/*
+				 * The command was non-embedded - need to
+				 * free the dma buffer
+				 */
+				dma_free_coherent(&efct->pcidev->dev,
+						  payload_memory.size,
+						  payload_memory.virt,
+						  payload_memory.phys);
+				memset(&payload_memory, 0,
+				       sizeof(struct efc_dma));
+			}
+		}
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "header template registration failed\n");
+			return rc;
+		}
+	}
+
+	/* Allocate and post RQ buffers */
+	rc = efct_hw_rx_allocate(hw);
+	if (rc) {
+		efc_log_err(hw->os, "rx_allocate failed\n");
+		return rc;
+	}
+
+	/* Populate hw->seq_free_list */
+	if (!hw->seq_pool) {
+		u32 count = 0;
+		u32 i;
+
+		/*
+		 * Sum up the total number of RQ entries, to use to allocate
+		 * the sequence object pool
+		 */
+		for (i = 0; i < hw->hw_rq_count; i++)
+			count += hw->hw_rq[i]->entry_count;
+
+		hw->seq_pool = efct_array_alloc(hw->os,
+					sizeof(struct efc_hw_sequence),
+						count);
+		if (!hw->seq_pool) {
+			efc_log_err(hw->os, "malloc seq_pool failed\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	if (efct_hw_rx_post(hw))
+		efc_log_err(hw->os, "WARNING - error posting RQ buffers\n");
+
+	/* Allocate rpi_ref if not previously allocated */
+	if (!hw->rpi_ref) {
+		hw->rpi_ref = kcalloc(max_rpi, sizeof(*hw->rpi_ref),
+				      GFP_KERNEL);
+		if (!hw->rpi_ref)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	for (i = 0; i < max_rpi; i++) {
+		atomic_set(&hw->rpi_ref[i].rpi_count, 0);
+		atomic_set(&hw->rpi_ref[i].rpi_attached, 0);
+	}
+
+	/*
+	 * Register a FCFI to allow unsolicited frames to be routed to the
+	 * driver
+	 */
+	if (hw->hw_mrq_count) {
+		efc_log_info(hw->os, "using REG_FCFI MRQ\n");
+
+		rc = efct_hw_config_mrq(hw,
+					SLI4_CMD_REG_FCFI_SET_FCFI_MODE, 0);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "REG_FCFI_MRQ FCFI reg failed\n");
+			return rc;
+		}
+
+		rc = efct_hw_config_mrq(hw,
+					SLI4_CMD_REG_FCFI_SET_MRQ_MODE, 0);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "REG_FCFI_MRQ MRQ reg failed\n");
+			return rc;
+		}
+	} else {
+		u32 min_rq_count;
+
+		efc_log_info(hw->os, "using REG_FCFI standard\n");
+
+		/*
+		 * Set the filter match/mask values from hw's
+		 * filter_def values
+		 */
+		for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+			rq_cfg[i].rq_id = cpu_to_le16(0xffff);
+			rq_cfg[i].r_ctl_mask = (u8)hw->config.filter_def[i];
+			rq_cfg[i].r_ctl_match = (u8)
+					(hw->config.filter_def[i] >> 8);
+			rq_cfg[i].type_mask = (u8)
+					 (hw->config.filter_def[i] >> 16);
+			rq_cfg[i].type_match = (u8)
+					 (hw->config.filter_def[i] >> 24);
+		}
+
+		/*
+		 * Update the rq_id's of the FCF configuration
+		 * (don't update more than the number of rq_cfg
+		 * elements)
+		 */
+		min_rq_count = (hw->hw_rq_count <
+				SLI4_CMD_REG_FCFI_NUM_RQ_CFG)
+				? hw->hw_rq_count :
+				SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
+		for (i = 0; i < min_rq_count; i++) {
+			struct hw_rq *rq = hw->hw_rq[i];
+			u32 j;
+
+			for (j = 0; j < SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
+			     j++) {
+				u32 mask = (rq->filter_mask != 0) ?
+						 rq->filter_mask : 1;
+
+				if (!(mask & (1U << j)))
+					continue;
+
+				rq_cfg[j].rq_id = cpu_to_le16(rq->hdr->id);
+				efct_logfcfi(hw, j, i, rq->hdr->id);
+			}
+		}
+
+		rc = EFCT_HW_RTN_ERROR;
+		if (!sli_cmd_reg_fcfi(&hw->sli, buf, SLI4_BMBX_SIZE, 0,
+				      rq_cfg)) {
+			rc = efct_hw_command(hw, buf, EFCT_CMD_POLL,
+					     NULL, NULL);
+		}
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os,
+				     "FCFI registration failed\n");
+			return rc;
+		}
+		hw->fcf_indicator =
+		le16_to_cpu(((struct sli4_cmd_reg_fcfi *)buf)->fcfi);
+	}
+
+	/*
+	 * Allocate the WQ request tag pool, if not previously allocated
+	 * (the request tag value is 16 bits, thus the pool allocation size
+	 * of 64k)
+	 */
+	rc = efct_hw_reqtag_init(hw);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_reqtag_init failed %d\n", rc);
+		return rc;
+	}
+
+	rc = efct_hw_setup_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO allocation failure\n");
+		return rc;
+	}
+
+	rc = efct_hw_init_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO initialization failure\n");
+		return rc;
+	}
+
+	/* Set the DIF seed - only for Lancer right now */
+	if (sli->if_type == SLI4_INTF_IF_TYPE_2 &&
+	    efct_hw_set_dif_seed(hw) != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os, "Failed to set DIF seed value\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Arming the EQ allows (e.g.) interrupts when CQ completions write EQ
+	 * entries
+	 */
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_arm(&hw->sli, &hw->eq[i], true);
+
+	/*
+	 * Initialize RQ hash
+	 */
+	for (i = 0; i < hw->rq_count; i++)
+		efct_hw_queue_hash_add(hw->rq_hash, hw->rq[i].id, i);
+
+	/*
+	 * Initialize WQ hash
+	 */
+	for (i = 0; i < hw->wq_count; i++)
+		efct_hw_queue_hash_add(hw->wq_hash, hw->wq[i].id, i);
+
+	/*
+	 * Arming the CQ allows (e.g.) MQ completions to write CQ entries
+	 */
+	for (i = 0; i < hw->cq_count; i++) {
+		efct_hw_queue_hash_add(hw->cq_hash, hw->cq[i].id, i);
+		sli_queue_arm(&hw->sli, &hw->cq[i], true);
+	}
+
+	/* record the fact that the queues are functional */
+	hw->state = EFCT_HW_STATE_ACTIVE;
+
+	/* finally kick off periodic timer to check for timed out target WQEs */
+	if (hw->config.emulate_tgt_wqe_timeout) {
+		timer_setup(&hw->wqe_timer, &target_wqe_timer_cb, 0);
+
+		mod_timer(&hw->wqe_timer, jiffies +
+			  msecs_to_jiffies(EFCT_HW_WQ_TIMER_PERIOD_MS));
+	}
+	/*
+	 * Allocate HW IOs for send frame: one for each class 1 WQ, or if
+	 * there are none of those, one for WQ[0].
+	 */
+	count = efct_varray_get_count(hw->wq_class_array[1]);
+	if (count > 0) {
+		struct hw_wq *wq;
+
+		for (i = 0; i < count; i++) {
+			wq = efct_varray_iter_next(hw->wq_class_array[1]);
+			wq->send_frame_io = efct_hw_io_alloc(hw);
+			if (!wq->send_frame_io)
+				efc_log_err(hw->os,
+					     "alloc for send_frame_io failed\n");
+		}
+	} else {
+		hw->hw_wq[0]->send_frame_io = efct_hw_io_alloc(hw);
+		if (!hw->hw_wq[0]->send_frame_io)
+			efc_log_err(hw->os,
+				     "alloc for send_frame_io failed\n");
+	}
+
+	/* Initialize send frame sequence id */
+	atomic_set(&hw->send_frame_seq_id, 0);
+
+	/* Initialize watchdog timer if enabled by user */
+	if (hw->watchdog_timeout) {
+		if (hw->watchdog_timeout < 1 ||
+		    hw->watchdog_timeout > 65534)
+			efc_log_err(hw->os,
+				     "WDT out of range: range is 1 - 65534\n");
+		else if (!efct_hw_config_watchdog_timer(hw))
+			efc_log_info(hw->os,
+				      "WDT timer config with tmo = %d secs\n",
+				     hw->watchdog_timeout);
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/**
+ * efct_hw_config_mrq() - Configure Multi-RQ
+ *
+ * @hw: Hardware context allocated by the caller.
+ * @mode: 1 to set MRQ filters and 0 to set FCFI index
+ * @fcf_index: valid in mode 0
+ *
+ * Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+efct_hw_config_mrq(struct efct_hw *hw, u8 mode, u16 fcf_index)
+{
+	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
+	struct hw_rq *rq;
+	struct sli4_cmd_reg_fcfi_mrq *rsp = NULL;
+	u32 i, j;
+	struct sli4_cmd_rq_cfg rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	int rc;
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		goto issue_cmd;
+
+	/* Set the filter match/mask values from hw's filter_def values */
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		rq_filter[i].rq_id = cpu_to_le16(0xffff);
+		rq_filter[i].r_ctl_mask  = (u8)hw->config.filter_def[i];
+		rq_filter[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
+		rq_filter[i].type_mask   = (u8)(hw->config.filter_def[i] >> 16);
+		rq_filter[i].type_match  = (u8)(hw->config.filter_def[i] >> 24);
+	}
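+	/*
+	 * Example with a hypothetical filter_def value (illustration only):
+	 * filter_def[i] = 0x08ff0601 unpacks as r_ctl_mask = 0x01,
+	 * r_ctl_match = 0x06, type_mask = 0xff and type_match = 0x08.
+	 */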
+
+	/* Accumulate counts for each filter type used, build rq_ids[] list */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		for (j = 0; j < SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG; j++) {
+			if (!(rq->filter_mask & (1U << j)))
+				continue;
+
+			if (rq_filter[j].rq_id != cpu_to_le16(0xffff)) {
+				/*
+				 * Already used. Bail out if this is not
+				 * the RQ set case.
+				 */
+				if (!rq->is_mrq ||
+				    rq_filter[j].rq_id !=
+				    cpu_to_le16(rq->base_mrq_id)) {
+					efc_log_err(hw->os, "Wrong q top.\n");
+					return EFCT_HW_RTN_ERROR;
+				}
+				continue;
+			}
+
+			if (!rq->is_mrq) {
+				rq_filter[j].rq_id = cpu_to_le16(rq->hdr->id);
+				continue;
+			}
+
+			rq_filter[j].rq_id = cpu_to_le16(rq->base_mrq_id);
+			mrq_bitmask |= (1U << j);
+		}
+	}
+
+issue_cmd:
+	/* Invoke REG_FCFI_MRQ */
+	rc = sli_cmd_reg_fcfi_mrq(&hw->sli,
+				  buf,	/* buf */
+				 SLI4_BMBX_SIZE, /* size */
+				 mode, /* mode */
+				 fcf_index, /* fcf_index */
+				 /* RQ selection policy*/
+				 hw->config.rq_selection_policy,
+				 mrq_bitmask, /* MRQ bitmask */
+				 hw->hw_mrq_count, /* num_mrqs */
+				 rq_filter);/* RQ filter */
+	if (rc) {
+		efc_log_err(hw->os,
+			     "sli_cmd_reg_fcfi_mrq() failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+
+	rsp = (struct sli4_cmd_reg_fcfi_mrq *)buf;
+
+	if (rc != EFCT_HW_RTN_SUCCESS ||
+	    le16_to_cpu(rsp->hdr.status)) {
+		efc_log_err(hw->os,
+			     "FCFI MRQ reg failed. cmd = %x status = %x\n",
+			     rsp->hdr.command,
+			     le16_to_cpu(rsp->hdr.status));
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
+	return 0;
+}
+
+enum efct_hw_rtn
+efct_hw_set(struct efct_hw *hw, enum efct_hw_property prop, u32 value)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	struct sli4 *sli = &hw->sli;
+
+	switch (prop) {
+	case EFCT_HW_N_IO:
+		if (value > sli->extent[SLI_RSRC_XRI].size ||
+		    value == 0) {
+			efc_log_test(hw->os,
+				      "IO value out of range %d vs %d\n",
+				     value,
+				sli->extent[SLI_RSRC_XRI].size);
+			rc = EFCT_HW_RTN_ERROR;
+		} else {
+			hw->config.n_io = value;
+		}
+		break;
+	case EFCT_HW_N_SGL:
+		value += SLI4_SGE_MAX_RESERVED;
+		if (value > sli_get_max_sgl(&hw->sli)) {
+			efc_log_test(hw->os,
+				      "SGL value out of range %d vs %d\n",
+				     value, sli_get_max_sgl(&hw->sli));
+			rc = EFCT_HW_RTN_ERROR;
+		} else {
+			hw->config.n_sgl = value;
+		}
+		break;
+	case EFCT_HW_TOPOLOGY:
+		switch (value) {
+		case EFCT_HW_TOPOLOGY_AUTO:
+			sli_set_topology(&hw->sli,
+					 SLI4_READ_CFG_TOPO_FC);
+			break;
+		case EFCT_HW_TOPOLOGY_NPORT:
+			sli_set_topology(&hw->sli, SLI4_READ_CFG_TOPO_FC_DA);
+			break;
+		case EFCT_HW_TOPOLOGY_LOOP:
+			sli_set_topology(&hw->sli, SLI4_READ_CFG_TOPO_FC_AL);
+			break;
+		default:
+			efc_log_test(hw->os,
+				      "unsupported topology %#x\n", value);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		hw->config.topology = value;
+		break;
+	case EFCT_HW_LINK_SPEED:
+
+		switch (value) {
+		case 0:		/* Auto-speed negotiation */
+			hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
+			break;
+		case 2000:	/* FC speeds */
+			hw->config.speed = FC_LINK_SPEED_2G;
+			break;
+		case 4000:
+			hw->config.speed = FC_LINK_SPEED_4G;
+			break;
+		case 8000:
+			hw->config.speed = FC_LINK_SPEED_8G;
+			break;
+		case 16000:
+			hw->config.speed = FC_LINK_SPEED_16G;
+			break;
+		case 32000:
+			hw->config.speed = FC_LINK_SPEED_32G;
+			break;
+		default:
+			efc_log_test(hw->os, "unsupported speed %d\n", value);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_RQ_PROCESS_LIMIT: {
+		struct hw_rq *rq;
+		u32 i;
+
+		/* For each hw_rq object, set its parent CQ limit value */
+		for (i = 0; i < hw->hw_rq_count; i++) {
+			rq = hw->hw_rq[i];
+			hw->cq[rq->cq->instance].proc_limit = value;
+		}
+		break;
+	}
+	case EFCT_HW_RQ_DEFAULT_BUFFER_SIZE:
+		hw->config.rq_default_buffer_size = value;
+		break;
+	case EFCT_ESOC:
+		hw->config.esoc = value;
+		break;
+	case EFCT_HW_HIGH_LOGIN_MODE:
+		rc = sli_set_hlm(&hw->sli, value);
+		break;
+	case EFCT_HW_PREREGISTER_SGL:
+		rc = sli_set_sgl_preregister(&hw->sli, value);
+		break;
+	case EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT:
+		hw->config.emulate_tgt_wqe_timeout = value;
+		break;
+	case EFCT_HW_BOUNCE:
+		hw->config.bounce = value;
+		break;
+	case EFCT_HW_RQ_SELECTION_POLICY:
+		hw->config.rq_selection_policy = value;
+		break;
+	case EFCT_HW_RR_QUANTA:
+		hw->config.rr_quanta = value;
+		break;
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_set_ptr(struct efct_hw *hw, enum efct_hw_property prop,
+		void *value)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	switch (prop) {
+	case EFCT_HW_FILTER_DEF: {
+		char *buf, *p, *token;
+		u32 idx;
+
+		for (idx = 0; idx < ARRAY_SIZE(hw->config.filter_def); idx++)
+			hw->config.filter_def[idx] = 0;
+
+		buf = kstrdup(value, GFP_KERNEL);
+		if (!buf || !*buf) {
+			efc_log_err(hw->os, "invalid filter_def\n");
+			kfree(buf);
+			break;
+		}
+
+		/* strsep() advances p, so keep buf around for kfree() */
+		p = buf;
+		idx = 0;
+		while ((token = strsep(&p, ",")) && *token) {
+			if (kstrtou32(token, 0, &hw->config.filter_def[idx++]))
+				efc_log_err(hw->os, "kstrtou32 failed\n");
+
+			if (!p || !*p)
+				break;
+
+			if (idx == ARRAY_SIZE(hw->config.filter_def))
+				break;
+		}
+		kfree(buf);
+
+		break;
+	}
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+		rc = EFCT_HW_RTN_ERROR;
+		break;
+	}
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_get(struct efct_hw *hw, enum efct_hw_property prop,
+	    u32 *value)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	int			tmp;
+	struct sli4 *sli = &hw->sli;
+
+	if (!value)
+		return EFCT_HW_RTN_ERROR;
+
+	*value = 0;
+
+	switch (prop) {
+	case EFCT_HW_N_IO:
+		*value = hw->config.n_io;
+		break;
+	case EFCT_HW_N_SGL:
+		*value = (hw->config.n_sgl - SLI4_SGE_MAX_RESERVED);
+		break;
+	case EFCT_HW_MAX_IO:
+		*value = sli->extent[SLI_RSRC_XRI].size;
+		break;
+	case EFCT_HW_MAX_NODES:
+		*value = sli->extent[SLI_RSRC_RPI].size;
+		break;
+	case EFCT_HW_MAX_RQ_ENTRIES:
+		*value = hw->num_qentries[SLI_QTYPE_RQ];
+		break;
+	case EFCT_HW_RQ_DEFAULT_BUFFER_SIZE:
+		*value = hw->config.rq_default_buffer_size;
+		break;
+	case EFCT_HW_MAX_SGE:
+		*value = sli->sge_supported_length;
+		break;
+	case EFCT_HW_MAX_SGL:
+		*value = sli_get_max_sgl(&hw->sli);
+		break;
+	case EFCT_HW_TOPOLOGY:
+		/*
+		 * Infer link.status based on link.speed.
+		 * Report EFCT_HW_TOPOLOGY_NONE if the link is down.
+		 */
+		if (hw->link.speed == 0) {
+			*value = EFCT_HW_TOPOLOGY_NONE;
+			break;
+		}
+		switch (hw->link.topology) {
+		case SLI_LINK_TOPO_NPORT:
+			*value = EFCT_HW_TOPOLOGY_NPORT;
+			break;
+		case SLI_LINK_TOPO_LOOP:
+			*value = EFCT_HW_TOPOLOGY_LOOP;
+			break;
+		case SLI_LINK_TOPO_NONE:
+			*value = EFCT_HW_TOPOLOGY_NONE;
+			break;
+		default:
+			efc_log_test(hw->os,
+				      "unsupported topology %#x\n",
+				     hw->link.topology);
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		break;
+	case EFCT_HW_CONFIG_TOPOLOGY:
+		*value = hw->config.topology;
+		break;
+	case EFCT_HW_LINK_SPEED:
+		*value = hw->link.speed;
+		break;
+	case EFCT_HW_LINK_CONFIG_SPEED:
+		switch (hw->config.speed) {
+		case FC_LINK_SPEED_10G:
+			*value = 10000;
+			break;
+		case FC_LINK_SPEED_AUTO_16_8_4:
+			*value = 0;
+			break;
+		case FC_LINK_SPEED_2G:
+			*value = 2000;
+			break;
+		case FC_LINK_SPEED_4G:
+			*value = 4000;
+			break;
+		case FC_LINK_SPEED_8G:
+			*value = 8000;
+			break;
+		case FC_LINK_SPEED_16G:
+			*value = 16000;
+			break;
+		case FC_LINK_SPEED_32G:
+			*value = 32000;
+			break;
+		default:
+			efc_log_test(hw->os,
+				      "unsupported speed %#x\n",
+				     hw->config.speed);
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		break;
+	case EFCT_HW_IF_TYPE:
+		*value = sli->if_type;
+		break;
+	case EFCT_HW_SLI_REV:
+		*value = sli->sli_rev;
+		break;
+	case EFCT_HW_SLI_FAMILY:
+		*value = sli->sli_family;
+		break;
+	case EFCT_HW_DIF_CAPABLE:
+		*value = sli->features & SLI4_REQFEAT_DIF;
+		break;
+	case EFCT_HW_DIF_SEED:
+		*value = hw->config.dif_seed;
+		break;
+	case EFCT_HW_DIF_MODE:
+		*value = hw->config.dif_mode;
+		break;
+	case EFCT_HW_DIF_MULTI_SEPARATE:
+		/* Lancer supports multiple DIF separates */
+		if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2)
+			*value = true;
+		else
+			*value = false;
+		break;
+	case EFCT_HW_DUMP_MAX_SIZE:
+		*value = hw->dump_size;
+		break;
+	case EFCT_HW_DUMP_READY:
+		*value = sli_dump_is_ready(&hw->sli);
+		break;
+	case EFCT_HW_DUMP_PRESENT:
+		*value = sli_dump_is_present(&hw->sli);
+		break;
+	case EFCT_HW_RESET_REQUIRED:
+		tmp = sli_reset_required(&hw->sli);
+		if (tmp < 0)
+			rc = EFCT_HW_RTN_ERROR;
+		else
+			*value = tmp;
+		break;
+	case EFCT_HW_FW_ERROR:
+		*value = sli_fw_error_status(&hw->sli);
+		break;
+	case EFCT_HW_FW_READY:
+		*value = sli_fw_ready(&hw->sli);
+		break;
+	case EFCT_HW_HIGH_LOGIN_MODE:
+		*value = sli->features & SLI4_REQFEAT_HLM;
+		break;
+	case EFCT_HW_PREREGISTER_SGL:
+		*value = sli->sgl_pre_registration_required;
+		break;
+	case EFCT_HW_HW_REV1:
+		*value = sli->hw_rev[0];
+		break;
+	case EFCT_HW_HW_REV2:
+		*value = sli->hw_rev[1];
+		break;
+	case EFCT_HW_HW_REV3:
+		*value = sli->hw_rev[2];
+		break;
+	case EFCT_HW_LINK_MODULE_TYPE:
+		*value = sli->link_module_type;
+		break;
+	case EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT:
+		*value = hw->config.emulate_tgt_wqe_timeout;
+		break;
+	case EFCT_HW_VPD_LEN:
+		*value = sli->vpd_length;
+		break;
+	case EFCT_HW_SEND_FRAME_CAPABLE:
+		*value = 0;
+		break;
+	case EFCT_HW_RQ_SELECTION_POLICY:
+		*value = hw->config.rq_selection_policy;
+		break;
+	case EFCT_HW_RR_QUANTA:
+		*value = hw->config.rr_quanta;
+		break;
+	case EFCT_HW_MAX_VPORTS:
+		*value = sli->extent[SLI_RSRC_VPI].size;
+		break;
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+void *
+efct_hw_get_ptr(struct efct_hw *hw, enum efct_hw_property prop)
+{
+	void *rc = NULL;
+	struct sli4 *sli = &hw->sli;
+
+	switch (prop) {
+	case EFCT_HW_WWN_NODE:
+		rc = sli->wwnn;
+		break;
+	case EFCT_HW_WWN_PORT:
+		rc = sli->wwpn;
+		break;
+	case EFCT_HW_VPD:
+		/* make sure VPD length is non-zero */
+		if (sli->vpd_length)
+			rc = sli->vpd_data.virt;
+		break;
+	case EFCT_HW_FW_REV:
+		rc = sli->fw_name[0];
+		break;
+	case EFCT_HW_FW_REV2:
+		rc = sli->fw_name[1];
+		break;
+	case EFCT_HW_IPL:
+		rc = sli->ipl_name;
+		break;
+	case EFCT_HW_PORTNUM:
+		rc = sli->port_name;
+		break;
+	case EFCT_HW_BIOS_VERSION_STRING:
+		rc = sli->bios_version_string;
+		break;
+	default:
+		efc_log_test(hw->os, "unsupported property %#x\n", prop);
+	}
+
+	return rc;
+}
+
+uint64_t
+efct_get_wwn(struct efct_hw *hw, enum efct_hw_property prop)
+{
+	u8 *p = efct_hw_get_ptr(hw, prop);
+	u64 value = 0;
+
+	if (p) {
+		u32 i;
+
+		for (i = 0; i < sizeof(value); i++)
+			value = (value << 8) | p[i];
+	}
+
+	return value;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index ff6de91923fa..bbba73969de3 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -149,7 +149,6 @@ enum efct_hw_property {
 	EFCT_HW_ETH_LICENSE,
 	EFCT_HW_LINK_MODULE_TYPE,
 	EFCT_HW_NUM_CHUTES,
-	EFCT_HW_WAR_VERSION,
 	/* enable driver timeouts for target WQEs */
 	EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT,
 	EFCT_HW_LINK_CONFIG_SPEED,
@@ -849,4 +848,19 @@ struct efct_hw_grp_hdr {
 	u8			revision[32];
 };
 
+extern enum efct_hw_rtn
+efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev);
+enum efct_hw_rtn efct_hw_init(struct efct_hw *hw);
+extern enum efct_hw_rtn
+efct_hw_get(struct efct_hw *hw, enum efct_hw_property prop, u32 *value);
+extern void *
+efct_hw_get_ptr(struct efct_hw *hw, enum efct_hw_property prop);
+extern enum efct_hw_rtn
+efct_hw_set(struct efct_hw *hw, enum efct_hw_property prop, u32 value);
+extern enum efct_hw_rtn
+efct_hw_set_ptr(struct efct_hw *hw, enum efct_hw_property prop,
+		void *value);
+extern uint64_t
+efct_get_wwn(struct efct_hw *hw, enum efct_hw_property prop);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
new file mode 100644
index 000000000000..e6d6f2000168
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -0,0 +1,587 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_unsol.h"
+
+/* Post node event callback argument. */
+struct efct_xport_post_node_event {
+	struct completion done;
+	atomic_t refcnt;
+	struct efc_node *node;
+	u32	evt;
+	void *context;
+};
+
+static struct dentry *efct_debugfs_root;
+static atomic_t efct_debugfs_count;
+
+static struct scsi_host_template efct_template = {
+	.module			= THIS_MODULE,
+	.name			= EFCT_DRIVER_NAME,
+	.supported_mode		= MODE_TARGET,
+};
+
+/* globals */
+static struct fc_function_template efct_xport_functions;
+static struct fc_function_template efct_vport_functions;
+
+static struct scsi_transport_template *efct_xport_fc_tt;
+static struct scsi_transport_template *efct_vport_fc_tt;
+
+/*
+ * transport object is allocated,
+ * and associated with a device instance
+ */
+struct efct_xport *
+efct_xport_alloc(struct efct *efct)
+{
+	struct efct_xport *xport;
+
+	xport = kzalloc(sizeof(*xport), GFP_KERNEL);
+	if (!xport)
+		return NULL;
+
+	xport->efct = efct;
+	return xport;
+}
+
+static int
+efct_xport_init_debugfs(struct efct *efct)
+{
+	/* Setup efct debugfs root directory */
+	if (!efct_debugfs_root) {
+		efct_debugfs_root = debugfs_create_dir("efct", NULL);
+		atomic_set(&efct_debugfs_count, 0);
+		if (!efct_debugfs_root) {
+			efc_log_err(efct, "failed to create debugfs entry\n");
+			goto debugfs_fail;
+		}
+	}
+
+	/* Create a directory for sessions in root */
+	if (!efct->sess_debugfs_dir) {
+		efct->sess_debugfs_dir = debugfs_create_dir("sessions", NULL);
+		if (!efct->sess_debugfs_dir) {
+			efc_log_err(efct,
+				     "failed to create debugfs entry for sessions\n");
+			goto debugfs_fail;
+		}
+		atomic_inc(&efct_debugfs_count);
+	}
+
+	return 0;
+
+debugfs_fail:
+	return -1;
+}
+
+static void efct_xport_delete_debugfs(struct efct *efct)
+{
+	/* Remove session debugfs directory */
+	debugfs_remove(efct->sess_debugfs_dir);
+	efct->sess_debugfs_dir = NULL;
+	atomic_dec(&efct_debugfs_count);
+
+	if (atomic_read(&efct_debugfs_count) == 0) {
+		/* remove root debugfs directory */
+		debugfs_remove(efct_debugfs_root);
+		efct_debugfs_root = NULL;
+	}
+}
+
+int
+efct_xport_attach(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	int rc;
+	u32 max_sgl;
+	u32 n_sgl;
+	u32 value;
+
+	xport->fcfi.hold_frames = true;
+	spin_lock_init(&xport->fcfi.pend_frames_lock);
+	INIT_LIST_HEAD(&xport->fcfi.pend_frames);
+
+	rc = efct_hw_setup(&efct->hw, efct, efct->pcidev);
+	if (rc) {
+		efc_log_err(efct, "%s: Can't setup hardware\n", efct->desc);
+		return -1;
+	}
+
+	efct_hw_set(&efct->hw, EFCT_HW_RQ_SELECTION_POLICY,
+		    efct->rq_selection_policy);
+	efct_hw_get(&efct->hw, EFCT_HW_RQ_SELECTION_POLICY, &value);
+	efc_log_debug(efct, "RQ Selection Policy: %d\n", value);
+
+	efct_hw_set_ptr(&efct->hw, EFCT_HW_FILTER_DEF,
+			(void *)efct->filter_def);
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_SGL, &max_sgl);
+	max_sgl -= SLI4_SGE_MAX_RESERVED;
+	n_sgl = (max_sgl > EFCT_FC_MAX_SGL) ? EFCT_FC_MAX_SGL : max_sgl;
+
+	/* Note: number of SGLs must be set for efc_node_create_pool */
+	if (efct_hw_set(&efct->hw, EFCT_HW_N_SGL, n_sgl) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct,
+			     "%s: Can't set number of SGLs\n", efct->desc);
+		return -1;
+	}
+
+	efc_log_debug(efct, "%s: Configured for %d SGLs\n", efct->desc,
+		       n_sgl);
+
+	xport->io_pool = efct_io_pool_create(efct, EFCT_NUM_SCSI_IOS, n_sgl);
+	if (!xport->io_pool) {
+		efc_log_err(efct, "Can't allocate IO pool\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+static void
+efct_xport_config_stats_timer(struct efct *efct);
+
+static void
+efct_xport_stats_timer_cb(struct timer_list *t)
+{
+	struct efct_xport *xport = from_timer(xport, t, stats_timer);
+	struct efct *efct = xport->efct;
+
+	efct_xport_config_stats_timer(efct);
+}
+
+static void
+efct_xport_config_stats_timer(struct efct *efct)
+{
+	u32 timeout = 3 * 1000;
+	struct efct_xport *xport = NULL;
+
+	if (!efct) {
+		pr_err("%s: failed to locate EFCT device\n", __func__);
+		return;
+	}
+
+	xport = efct->xport;
+	efct_hw_get_link_stats(&efct->hw, 0, 0, 0,
+			       efct_xport_async_link_stats_cb,
+			       &xport->fc_xport_stats);
+	efct_hw_get_host_stats(&efct->hw, 0, efct_xport_async_host_stats_cb,
+			       &xport->fc_xport_stats);
+
+	timer_setup(&xport->stats_timer,
+		    &efct_xport_stats_timer_cb, 0);
+	mod_timer(&xport->stats_timer,
+		  jiffies + msecs_to_jiffies(timeout));
+}
+
+int
+efct_xport_initialize(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	int rc;
+	u32 max_hw_io;
+	u32 max_sgl;
+	u32 rq_limit;
+
+	bool tgt_device_set = false;
+	bool hw_initialized = false;
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_IO, &max_hw_io);
+	if (efct_hw_set(&efct->hw, EFCT_HW_N_IO, max_hw_io) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct, "%s: Can't set number of IOs\n",
+			     efct->desc);
+		return -1;
+	}
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_SGL, &max_sgl);
+	max_sgl -= SLI4_SGE_MAX_RESERVED;
+
+	efct_hw_get(&efct->hw, EFCT_HW_MAX_IO, &max_hw_io);
+
+	if (efct_hw_set(&efct->hw, EFCT_HW_TOPOLOGY, efct->topology) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct, "%s: Can't set the topology\n", efct->desc);
+		return -1;
+	}
+	efct_hw_set(&efct->hw, EFCT_HW_RQ_DEFAULT_BUFFER_SIZE,
+		    EFCT_FC_RQ_SIZE_DEFAULT);
+
+	if (efct_hw_set(&efct->hw, EFCT_HW_LINK_SPEED, efct->speed) !=
+			EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(efct, "%s: Can't set the link speed\n",
+			     efct->desc);
+		return -1;
+	}
+
+	if (efct->target_io_timer_sec) {
+		efc_log_debug(efct, "setting target io timer=%d\n",
+			       efct->target_io_timer_sec);
+		efct_hw_set(&efct->hw, EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT,
+			    true);
+	}
+
+	/* Initialize vport list */
+	INIT_LIST_HEAD(&xport->vport_list);
+	spin_lock_init(&xport->io_pending_lock);
+	INIT_LIST_HEAD(&xport->io_pending_list);
+	atomic_set(&xport->io_active_count, 0);
+	atomic_set(&xport->io_pending_count, 0);
+	atomic_set(&xport->io_total_free, 0);
+	atomic_set(&xport->io_total_pending, 0);
+	atomic_set(&xport->io_alloc_failed_count, 0);
+	atomic_set(&xport->io_pending_recursing, 0);
+	rc = efct_hw_init(&efct->hw);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_init failure\n");
+		goto efct_xport_init_cleanup;
+	} else {
+		hw_initialized = true;
+	}
+
+	rq_limit = max_hw_io / 2;
+	if (efct_hw_set(&efct->hw, EFCT_HW_RQ_PROCESS_LIMIT, rq_limit) !=
+			EFCT_HW_RTN_SUCCESS)
+		efc_log_err(efct, "%s: Can't set the RQ process limit\n",
+			     efct->desc);
+
+	rc = efct_scsi_tgt_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize target\n");
+		goto efct_xport_init_cleanup;
+	} else {
+		tgt_device_set = true;
+	}
+
+	rc = efct_scsi_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize initiator\n");
+		goto efct_xport_init_cleanup;
+	}
+
+	/* Get FC link and host statistics periodically */
+	efct_xport_config_stats_timer(efct);
+
+	efct_xport_init_debugfs(efct);
+
+	return 0;
+
+efct_xport_init_cleanup:
+	if (tgt_device_set)
+		efct_scsi_tgt_del_device(efct);
+
+	if (hw_initialized) {
+		/* efct_hw_teardown can only execute after efct_hw_init */
+		efct_hw_teardown(&efct->hw);
+	}
+
+	return -1;
+}
+
+int
+efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
+		  union efct_xport_stats_u *result)
+{
+	int rc = 0;
+	struct efct *efct = NULL;
+	union efct_xport_stats_u value;
+	enum efct_hw_rtn hw_rc;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_CONFIG_PORT_STATUS:
+		if (xport->configured_link_state == 0) {
+			/*
+			 * Initial state is offline. configured_link_state is
+			 * set to online explicitly when port is brought online
+			 */
+			xport->configured_link_state = EFCT_XPORT_PORT_OFFLINE;
+		}
+		result->value = xport->configured_link_state;
+		break;
+
+	case EFCT_XPORT_PORT_STATUS:
+		/* Determine port status based on link speed. */
+		hw_rc = efct_hw_get(&efct->hw, EFCT_HW_LINK_SPEED,
+				    &value.value);
+		if (hw_rc == EFCT_HW_RTN_SUCCESS) {
+			if (value.value == 0)
+				result->value = 0;
+			else
+				result->value = 1;
+			rc = 0;
+		} else {
+			rc = -1;
+		}
+		break;
+
+	case EFCT_XPORT_LINK_SPEED: {
+		u32 speed;
+
+		result->value = 0;
+
+		rc = efct_hw_get(&efct->hw, EFCT_HW_LINK_SPEED, &speed);
+		if (rc == 0)
+			result->value = speed;
+		break;
+	}
+
+	case EFCT_XPORT_IS_SUPPORTED_LINK_SPEED: {
+		u32 speed;
+		u32 link_module_type;
+
+		speed = result->value;
+
+		rc = efct_hw_get(&efct->hw, EFCT_HW_LINK_MODULE_TYPE,
+				 &link_module_type);
+		if (rc == 0) {
+			switch (speed) {
+			case 1000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_1GB) != 0;
+				break;
+			case 2000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_2GB) != 0;
+				break;
+			case 4000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_4GB) != 0;
+				break;
+			case 8000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_8GB) != 0;
+				break;
+			case 10000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_10GB) != 0;
+				break;
+			case 16000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_16GB) != 0;
+				break;
+			case 32000:
+				rc = (link_module_type &
+					EFCT_HW_LINK_MODULE_TYPE_32GB) != 0;
+				break;
+			default:
+				rc = 0;
+				break;
+			}
+		} else {
+			rc = 0;
+		}
+		break;
+	}
+	case EFCT_XPORT_LINK_STATISTICS:
+		memcpy((void *)result, &efct->xport->fc_xport_stats,
+		       sizeof(union efct_xport_stats_u));
+		break;
+	case EFCT_XPORT_LINK_STAT_RESET: {
+		/* Create a completion to synchronize the stat reset process. */
+		init_completion(&result->stats.done);
+
+		/* First reset the link stats */
+		rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
+					    efct_xport_link_stats_cb, result);
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_test(efct, "sem wait failed\n");
+			rc = -ENXIO;
+			break;
+		}
+
+		/* Next reset the host stats */
+		rc = efct_hw_get_host_stats(&efct->hw, 1,
+					    efct_xport_host_stats_cb, result);
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_test(efct, "sem wait failed\n");
+			rc = -ENXIO;
+			break;
+		}
+		break;
+	}
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
+
+int
+efct_scsi_new_device(struct efct *efct)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return -1;
+	}
+
+	/* save shost to initiator-client context */
+	efct->shost = shost;
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport *)shost->hostdata;
+	vport->efct = efct;
+
+	/*
+	 * Set initial can_queue value to the max SCSI IOs. This is the maximum
+	 * global queue depth (as opposed to the per-LUN queue depth,
+	 * .cmd_per_lun). This may need to be adjusted for I+T mode.
+	 */
+	shost->can_queue = efct_scsi_get_property(efct, EFCT_SCSI_MAX_IOS);
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/*
+	 * can only accept (from mid-layer) as many SGEs as we've
+	 * pre-registered
+	 */
+	shost->sg_tablesize = efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_xport_fc_tt;
+	efc_log_debug(efct, "transport template=%p\n", efct_xport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, &efct->pcidev->dev,
+				       &efct->pcidev->dev);
+	if (error) {
+		efc_log_test(efct, "failed scsi_add_host_with_dma\n");
+		scsi_host_put(shost);
+		return -1;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		     "Emulex %s FV%s DV%s", efct->model,
+		     efct->fw_version, efct->driver_version);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+
+	fc_host_node_name(shost) = efct_get_wwn(&efct->hw, EFCT_HW_WWN_NODE);
+	fc_host_port_name(shost) = efct_get_wwn(&efct->hw, EFCT_HW_WWN_PORT);
+	fc_host_max_npiv_vports(shost) = 128;
+
+	return 0;
+}
+
+struct scsi_transport_template *
+efct_attach_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_xport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+struct scsi_transport_template *
+efct_attach_vport_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_vport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+int
+efct_scsi_reg_fc_transport(void)
+{
+	/* attach to appropriate scsi_tranport_* module */
+	efct_xport_fc_tt = efct_attach_fc_transport();
+	if (!efct_xport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		return -1;
+	}
+
+	efct_vport_fc_tt = efct_attach_vport_fc_transport();
+	if (!efct_vport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		efct_release_fc_transport(efct_xport_fc_tt);
+		efct_xport_fc_tt = NULL;
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+efct_scsi_release_fc_transport(void)
+{
+	/* detach from scsi_transport_* */
+	efct_release_fc_transport(efct_xport_fc_tt);
+	efct_xport_fc_tt = NULL;
+	if (efct_vport_fc_tt)
+		efct_release_fc_transport(efct_vport_fc_tt);
+	efct_vport_fc_tt = NULL;
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_xport.h b/drivers/scsi/elx/efct/efct_xport.h
new file mode 100644
index 000000000000..c390aea8ff01
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.h
@@ -0,0 +1,205 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_XPORT_H__)
+#define __EFCT_XPORT_H__
+
+/* FCFI lookup/pending frames */
+struct efct_xport_fcfi {
+	/* lock to protect pending frames access*/
+	spinlock_t	pend_frames_lock;
+	struct list_head	pend_frames;
+	/* hold pending frames */
+	bool hold_frames;
+	/* count of pending frames that were processed */
+	u32	pend_frames_processed;
+};
+
+enum efct_xport_ctrl {
+	EFCT_XPORT_PORT_ONLINE = 1,
+	EFCT_XPORT_PORT_OFFLINE,
+	EFCT_XPORT_SHUTDOWN,
+	EFCT_XPORT_POST_NODE_EVENT,
+	EFCT_XPORT_WWNN_SET,
+	EFCT_XPORT_WWPN_SET,
+};
+
+enum efct_xport_status {
+	EFCT_XPORT_PORT_STATUS,
+	EFCT_XPORT_CONFIG_PORT_STATUS,
+	EFCT_XPORT_LINK_SPEED,
+	EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+	EFCT_XPORT_LINK_STATISTICS,
+	EFCT_XPORT_LINK_STAT_RESET,
+	EFCT_XPORT_IS_QUIESCED
+};
+
+struct efct_xport_link_stats {
+	bool		rec;
+	bool		gec;
+	bool		w02of;
+	bool		w03of;
+	bool		w04of;
+	bool		w05of;
+	bool		w06of;
+	bool		w07of;
+	bool		w08of;
+	bool		w09of;
+	bool		w10of;
+	bool		w11of;
+	bool		w12of;
+	bool		w13of;
+	bool		w14of;
+	bool		w15of;
+	bool		w16of;
+	bool		w17of;
+	bool		w18of;
+	bool		w19of;
+	bool		w20of;
+	bool		w21of;
+	bool		clrc;
+	bool		clof1;
+	u32		link_failure_error_count;
+	u32		loss_of_sync_error_count;
+	u32		loss_of_signal_error_count;
+	u32		primitive_sequence_error_count;
+	u32		invalid_transmission_word_error_count;
+	u32		crc_error_count;
+	u32		primitive_sequence_event_timeout_count;
+	u32		elastic_buffer_overrun_error_count;
+	u32		arbitration_fc_al_timeout_count;
+	u32		advertised_receive_bufftor_to_buffer_credit;
+	u32		current_receive_buffer_to_buffer_credit;
+	u32		advertised_transmit_buffer_to_buffer_credit;
+	u32		current_transmit_buffer_to_buffer_credit;
+	u32		received_eofa_count;
+	u32		received_eofdti_count;
+	u32		received_eofni_count;
+	u32		received_soff_count;
+	u32		received_dropped_no_aer_count;
+	u32		received_dropped_no_available_rpi_resources_count;
+	u32		received_dropped_no_available_xri_resources_count;
+};
+
+struct efct_xport_host_stats {
+	bool		cc;
+	u32		transmit_kbyte_count;
+	u32		receive_kbyte_count;
+	u32		transmit_frame_count;
+	u32		receive_frame_count;
+	u32		transmit_sequence_count;
+	u32		receive_sequence_count;
+	u32		total_exchanges_originator;
+	u32		total_exchanges_responder;
+	u32		receive_p_bsy_count;
+	u32		receive_f_bsy_count;
+	u32		dropped_frames_due_to_no_rq_buffer_count;
+	u32		empty_rq_timeout_count;
+	u32		dropped_frames_due_to_no_xri_count;
+	u32		empty_xri_pool_count;
+};
+
+struct efct_xport_host_statistics {
+	struct completion		done;
+	struct efct_xport_link_stats	link_stats;
+	struct efct_xport_host_stats	host_stats;
+};
+
+union efct_xport_stats_u {
+	u32	value;
+	struct efct_xport_host_statistics stats;
+};
+
+struct efct_xport_fcp_stats {
+	u64		input_bytes;
+	u64		output_bytes;
+	u64		input_requests;
+	u64		output_requests;
+	u64		control_requests;
+};
+
+struct efct_xport {
+	struct efct		*efct;
+	/* wwpn requested by user for primary sport */
+	u64			req_wwpn;
+	/* wwnn requested by user for primary sport */
+	u64			req_wwnn;
+
+	struct efct_xport_fcfi	fcfi;
+
+	/* Nodes */
+	/* number of allocated nodes */
+	u32			nodes_count;
+	/* array of pointers to nodes */
+	struct efc_node		**nodes;
+	/* linked list of free nodes */
+	struct list_head	nodes_free_list;
+
+	/* Io pool and counts */
+	/* pointer to IO pool */
+	struct efct_io_pool	*io_pool;
+	/* used to track how often IO pool is empty */
+	atomic_t		io_alloc_failed_count;
+	/* lock for io_pending_list */
+	spinlock_t		io_pending_lock;
+	/* list of IOs waiting for HW resources
+	 *  lock: xport->io_pending_lock
+	 *  link: efct_io_s->io_pending_link
+	 */
+	struct list_head	io_pending_list;
+	/* count of totals IOS allocated */
+	atomic_t		io_total_alloc;
+	/* count of totals IOS free'd */
+	atomic_t		io_total_free;
+	/* count of totals IOS that were pended */
+	atomic_t		io_total_pending;
+	/* count of active IOS */
+	atomic_t		io_active_count;
+	/* count of pending IOS */
+	atomic_t		io_pending_count;
+	/* non-zero if efct_scsi_check_pending is executing */
+	atomic_t		io_pending_recursing;
+
+	/* vport */
+	/* list of VPORTS (NPIV) */
+	struct list_head	vport_list;
+
+	/* Port */
+	/* requested link state */
+	u32			configured_link_state;
+
+	/* Timer for Statistics */
+	struct timer_list	stats_timer;
+	union efct_xport_stats_u fc_xport_stats;
+	struct efct_xport_fcp_stats fcp_stats;
+};
+
+struct efct_rport_data {
+	struct efc_node		*node;
+};
+
+extern struct efct_xport *
+efct_xport_alloc(struct efct *efct);
+extern int
+efct_xport_attach(struct efct_xport *xport);
+extern int
+efct_xport_initialize(struct efct_xport *xport);
+extern int
+efct_xport_detach(struct efct_xport *xport);
+extern int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...);
+extern int
+efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
+		  union efct_xport_stats_u *result);
+extern void
+efct_xport_free(struct efct_xport *xport);
+
+struct scsi_transport_template *efct_attach_fc_transport(void);
+struct scsi_transport_template *efct_attach_vport_fc_transport(void);
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template);
+
+#endif /* __EFCT_XPORT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 17/32] elx: efct: Hardware queues creation and deletion
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (15 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 16/32] elx: efct: Driver initialization routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:10   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
                   ` (15 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for queue creation, deletion, and configuration, driven by
strings that describe the configuration topology, along with parsers
for those strings.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw_queues.c | 1456 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw_queues.h |   67 ++
 2 files changed, 1523 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.h

diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
new file mode 100644
index 000000000000..8bbeef8ad22d
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw_queues.c
@@ -0,0 +1,1456 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_hw_queues.h"
+#include "efct_unsol.h"
+
+/**
+ * Given the parsed queue topology spec, the SLI queues are created and
+ * initialized
+ */
+enum efct_hw_rtn
+efct_hw_init_queues(struct efct_hw *hw, struct efct_hw_qtop *qtop)
+{
+	u32 i, j, k;
+	u32 default_lengths[QTOP_LAST], len;
+	u32 rqset_len = 0, rqset_count = 0;
+	u8 rqset_filter_mask = 0;
+	struct hw_eq *eqs[EFCT_HW_MAX_MRQS];
+	struct hw_cq *cqs[EFCT_HW_MAX_MRQS];
+	struct hw_rq *rqs[EFCT_HW_MAX_MRQS];
+	struct efct_hw_qtop_entry *qt, *next_qt;
+	struct efct_hw_mrq mrq;
+	bool use_mrq = false;
+
+	struct hw_eq *eq = NULL;
+	struct hw_cq *cq = NULL;
+	struct hw_wq *wq = NULL;
+	struct hw_rq *rq = NULL;
+	struct hw_mq *mq = NULL;
+
+	mrq.num_pairs = 0;
+	default_lengths[QTOP_EQ] = 1024;
+	default_lengths[QTOP_CQ] = hw->num_qentries[SLI_QTYPE_CQ];
+	default_lengths[QTOP_WQ] = hw->num_qentries[SLI_QTYPE_WQ];
+	default_lengths[QTOP_RQ] = hw->num_qentries[SLI_QTYPE_RQ];
+	default_lengths[QTOP_MQ] = EFCT_HW_MQ_DEPTH;
+
+	hw->eq_count = 0;
+	hw->cq_count = 0;
+	hw->mq_count = 0;
+	hw->wq_count = 0;
+	hw->rq_count = 0;
+	hw->hw_rq_count = 0;
+	INIT_LIST_HEAD(&hw->eq_list);
+
+	/* If MRQ is requested, check if it is supported by SLI. */
+	if (hw->config.n_rq > 1 &&
+	    !(hw->sli.features & SLI4_REQFEAT_MRQP)) {
+		efc_log_err(hw->os, "MRQ topology not supported by SLI4.\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->config.n_rq > 1)
+		use_mrq = true;
+
+	/* Allocate class WQ pools */
+	for (i = 0; i < ARRAY_SIZE(hw->wq_class_array); i++) {
+		hw->wq_class_array[i] = efct_varray_alloc(hw->os,
+							  EFCT_HW_MAX_NUM_WQ);
+		if (!hw->wq_class_array[i]) {
+			efc_log_err(hw->os,
+				     "efct_varray_alloc for wq_class failed\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	/* Allocate per CPU WQ pools */
+	for (i = 0; i < ARRAY_SIZE(hw->wq_cpu_array); i++) {
+		hw->wq_cpu_array[i] = efct_varray_alloc(hw->os,
+							EFCT_HW_MAX_NUM_WQ);
+		if (!hw->wq_cpu_array[i]) {
+			efc_log_err(hw->os,
+				     "efct_varray_alloc for wq_cpu failed\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	for (i = 0, qt = qtop->entries; i < qtop->inuse_count; i++, qt++) {
+		if (i == qtop->inuse_count - 1)
+			next_qt = NULL;
+		else
+			next_qt = qt + 1;
+
+		switch (qt->entry) {
+		case QTOP_EQ:
+			len = (qt->len) ? qt->len : default_lengths[QTOP_EQ];
+
+			if (qt->set_default) {
+				default_lengths[QTOP_EQ] = len;
+				break;
+			}
+
+			eq = efct_hw_new_eq(hw, len);
+			if (!eq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+			break;
+
+		case QTOP_CQ:
+			len = (qt->len) ? qt->len : default_lengths[QTOP_CQ];
+
+			if (qt->set_default) {
+				default_lengths[QTOP_CQ] = len;
+				break;
+			}
+
+			/* If this CQ is for MRQ, then delay the creation */
+			if (!use_mrq || next_qt->entry != QTOP_RQ) {
+				if (!eq)
+					return EFCT_HW_RTN_NO_MEMORY;
+
+				cq = efct_hw_new_cq(eq, len);
+				if (!cq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+			}
+			break;
+
+		case QTOP_WQ: {
+			len = (qt->len) ? qt->len : default_lengths[QTOP_WQ];
+			if (qt->set_default) {
+				default_lengths[QTOP_WQ] = len;
+				break;
+			}
+
+			if ((hw->ulp_start + qt->ulp) > hw->ulp_max) {
+				efc_log_err(hw->os,
+					     "invalid ULP %d WQ\n", qt->ulp);
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			wq = efct_hw_new_wq(cq, len,
+					    qt->class, hw->ulp_start + qt->ulp);
+			if (!wq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			/* Place this WQ on the EQ WQ array */
+			if (efct_varray_add(eq->wq_array, wq)) {
+				efc_log_err(hw->os,
+					     "QTOP_WQ:EQ efct_varray_add fail\n");
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			/* Place this WQ on the HW class array */
+			if (qt->class < ARRAY_SIZE(hw->wq_class_array)) {
+				if (efct_varray_add
+					(hw->wq_class_array[qt->class], wq)) {
+					efc_log_err(hw->os,
+						     "HW wq_class_array efct_varray_add failed\n");
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_ERROR;
+				}
+			} else {
+				efc_log_err(hw->os,
+					     "Invalid class value: %d\n",
+					    qt->class);
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			/*
+			 * Place this WQ on the per CPU list, assuming that EQs
+			 * are mapped to cpu given by the EQ instance modulo
+			 * number of CPUs
+			 */
+			if (efct_varray_add(hw->wq_cpu_array[eq->instance %
+					   num_online_cpus()], wq)) {
+				efc_log_err(hw->os,
+					     "HW wq_cpu_array efct_varray_add failed\n");
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			break;
+		}
+		case QTOP_RQ: {
+			len = (qt->len) ? qt->len : EFCT_HW_RQ_ENTRIES_DEF;
+
+			/*
+			 * Use the max supported queue length
+			 * if qtop rq len is not a valid value
+			 */
+			if (len > default_lengths[QTOP_RQ] ||
+			    (len % EFCT_HW_RQ_ENTRIES_MIN)) {
+				efc_log_info(hw->os,
+					      "QTOP RQ len %d is invalid. Using max supported RQ len %d\n",
+					len, default_lengths[QTOP_RQ]);
+				len = default_lengths[QTOP_RQ];
+			}
+
+			if (qt->set_default) {
+				default_lengths[QTOP_RQ] = len;
+				break;
+			}
+
+			if ((hw->ulp_start + qt->ulp) > hw->ulp_max) {
+				efc_log_err(hw->os,
+					     "invalid ULP %d RQ\n", qt->ulp);
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			if (use_mrq) {
+				k = mrq.num_pairs;
+				mrq.rq_cfg[k].len = len;
+				mrq.rq_cfg[k].ulp = hw->ulp_start + qt->ulp;
+				mrq.rq_cfg[k].filter_mask = qt->filter_mask;
+				mrq.rq_cfg[k].eq = eq;
+				mrq.num_pairs++;
+			} else {
+				rq = efct_hw_new_rq(cq, len,
+						    hw->ulp_start + qt->ulp);
+				if (!rq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+				rq->filter_mask = qt->filter_mask;
+			}
+			break;
+		}
+
+		case QTOP_MQ:
+			len = (qt->len) ? qt->len : default_lengths[QTOP_MQ];
+			if (qt->set_default) {
+				default_lengths[QTOP_MQ] = len;
+				break;
+			}
+
+			if (!cq)
+				return EFCT_HW_RTN_NO_MEMORY;
+
+			mq = efct_hw_new_mq(cq, len);
+			if (!mq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+			break;
+
+		default:
+			efc_log_crit(hw->os, "Unknown Queue\n");
+			break;
+		}
+	}
+
+	if (mrq.num_pairs) {
+		/* First create normal RQs. */
+		for (i = 0; i < mrq.num_pairs; i++) {
+			for (j = 0; j < mrq.num_pairs; j++) {
+				if (i != j &&
+				    mrq.rq_cfg[i].filter_mask ==
+				     mrq.rq_cfg[j].filter_mask) {
+					/* This should be created using set */
+					if (rqset_filter_mask &&
+					    rqset_filter_mask !=
+					     mrq.rq_cfg[i].filter_mask) {
+						efc_log_crit(hw->os,
+							      "Cannot create > 1 RQ Set\n");
+						efct_hw_queue_teardown(hw);
+						return EFCT_HW_RTN_ERROR;
+					} else if (!rqset_filter_mask) {
+						rqset_filter_mask =
+						      mrq.rq_cfg[i].filter_mask;
+						rqset_len = mrq.rq_cfg[i].len;
+					}
+					eqs[rqset_count] = mrq.rq_cfg[i].eq;
+					rqset_count++;
+					break;
+				}
+			}
+			if (j == mrq.num_pairs) {
+				/* Normal RQ */
+				cq = efct_hw_new_cq(mrq.rq_cfg[i].eq,
+						    default_lengths[QTOP_CQ]);
+				if (!cq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+
+				rq = efct_hw_new_rq(cq, mrq.rq_cfg[i].len,
+						    mrq.rq_cfg[i].ulp);
+				if (!rq) {
+					efct_hw_queue_teardown(hw);
+					return EFCT_HW_RTN_NO_MEMORY;
+				}
+				rq->filter_mask = mrq.rq_cfg[i].filter_mask;
+			}
+		}
+
+		/* Now create RQ Set */
+		if (rqset_count) {
+			/* Create CQ set */
+			if (efct_hw_new_cq_set(eqs, cqs, rqset_count,
+					       default_lengths[QTOP_CQ])) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			/* Create RQ set */
+			if (efct_hw_new_rq_set(cqs, rqs, rqset_count,
+					       rqset_len)) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_ERROR;
+			}
+
+			for (i = 0; i < rqset_count; i++) {
+				rqs[i]->filter_mask = rqset_filter_mask;
+				rqs[i]->is_mrq = true;
+				rqs[i]->base_mrq_id = rqs[0]->hdr->id;
+			}
+
+			hw->hw_mrq_count = rqset_count;
+		}
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/* Allocate a new EQ object */
+struct hw_eq *
+efct_hw_new_eq(struct efct_hw *hw, u32 entry_count)
+{
+	struct hw_eq *eq = kmalloc(sizeof(*eq), GFP_KERNEL);
+
+	if (eq) {
+		memset(eq, 0, sizeof(*eq));
+		eq->type = SLI_QTYPE_EQ;
+		eq->hw = hw;
+		eq->entry_count = entry_count;
+		eq->instance = hw->eq_count++;
+		eq->queue = &hw->eq[eq->instance];
+		INIT_LIST_HEAD(&eq->cq_list);
+
+		eq->wq_array = efct_varray_alloc(hw->os, EFCT_HW_MAX_NUM_WQ);
+		if (!eq->wq_array) {
+			kfree(eq);
+			eq = NULL;
+		} else {
+			if (sli_queue_alloc(&hw->sli, SLI_QTYPE_EQ,
+					    eq->queue,
+					    entry_count, NULL)) {
+				efc_log_err(hw->os,
+					     "EQ[%d] allocation failure\n",
+					    eq->instance);
+				kfree(eq);
+				eq = NULL;
+			} else {
+				sli_eq_modify_delay(&hw->sli, eq->queue,
+						    1, 0, 8);
+				hw->hw_eq[eq->instance] = eq;
+				INIT_LIST_HEAD(&eq->list_entry);
+				list_add_tail(&eq->list_entry, &hw->eq_list);
+				efc_log_debug(hw->os,
+					       "create eq[%2d] id %3d len %4d\n",
+					      eq->instance, eq->queue->id,
+					      eq->entry_count);
+			}
+		}
+	}
+	return eq;
+}
+
+/* Allocate a new CQ object */
+struct hw_cq *
+efct_hw_new_cq(struct hw_eq *eq, u32 entry_count)
+{
+	struct efct_hw *hw = eq->hw;
+	struct hw_cq *cq = kmalloc(sizeof(*cq), GFP_KERNEL);
+
+	if (cq) {
+		memset(cq, 0, sizeof(*cq));
+		cq->eq = eq;
+		cq->type = SLI_QTYPE_CQ;
+		cq->instance = eq->hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue = &hw->cq[cq->instance];
+
+		INIT_LIST_HEAD(&cq->q_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_CQ, cq->queue,
+				    cq->entry_count, eq->queue)) {
+			efc_log_err(hw->os,
+				     "CQ[%d] allocation failure len=%d\n",
+				    cq->instance,
+				    cq->entry_count);
+			kfree(cq);
+			cq = NULL;
+		} else {
+			hw->hw_cq[cq->instance] = cq;
+			INIT_LIST_HEAD(&cq->list_entry);
+			list_add_tail(&cq->list_entry, &eq->cq_list);
+			efc_log_debug(hw->os,
+				       "create cq[%2d] id %3d len %4d\n",
+				      cq->instance, cq->queue->id,
+				      cq->entry_count);
+		}
+	}
+	return cq;
+}
+
+/* Allocate a new CQ Set of objects */
+u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count)
+{
+	u32 i;
+	struct efct_hw *hw = eqs[0]->hw;
+	struct sli4 *sli4 = &hw->sli;
+	struct hw_cq *cq = NULL;
+	struct sli4_queue *qs[SLI_MAX_CQ_SET_COUNT];
+	struct sli4_queue *assocs[SLI_MAX_CQ_SET_COUNT];
+
+	/* Initialise CQS pointers to NULL */
+	for (i = 0; i < num_cqs; i++)
+		cqs[i] = NULL;
+
+	for (i = 0; i < num_cqs; i++) {
+		cq = kmalloc(sizeof(*cq), GFP_KERNEL);
+		if (!cq)
+			goto error;
+
+		memset(cq, 0, sizeof(*cq));
+		cqs[i]          = cq;
+		cq->eq          = eqs[i];
+		cq->type        = SLI_QTYPE_CQ;
+		cq->instance    = hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue       = &hw->cq[cq->instance];
+		qs[i]           = cq->queue;
+		assocs[i]        = eqs[i]->queue;
+		INIT_LIST_HEAD(&cq->q_list);
+	}
+
+	if (!sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assocs)) {
+		efc_log_err(hw->os, "Failed to create CQ Set.\n");
+		goto error;
+	}
+
+	for (i = 0; i < num_cqs; i++) {
+		hw->hw_cq[cqs[i]->instance] = cqs[i];
+		INIT_LIST_HEAD(&cqs[i]->list_entry);
+		list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list);
+	}
+
+	return 0;
+
+error:
+	for (i = 0; i < num_cqs; i++) {
+		kfree(cqs[i]);
+		cqs[i] = NULL;
+	}
+	return -1;
+}
+
+/* Allocate a new MQ object */
+struct hw_mq *
+efct_hw_new_mq(struct hw_cq *cq, u32 entry_count)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_mq *mq = kmalloc(sizeof(*mq), GFP_KERNEL);
+
+	if (mq) {
+		memset(mq, 0, sizeof(*mq));
+		mq->cq = cq;
+		mq->type = SLI_QTYPE_MQ;
+		mq->instance = cq->eq->hw->mq_count++;
+		mq->entry_count = entry_count;
+		mq->entry_size = EFCT_HW_MQ_DEPTH;
+		mq->queue = &hw->mq[mq->instance];
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_MQ,
+				    mq->queue,
+				    mq->entry_size,
+				    cq->queue)) {
+			efc_log_err(hw->os, "MQ allocation failure\n");
+			kfree(mq);
+			mq = NULL;
+		} else {
+			hw->hw_mq[mq->instance] = mq;
+			INIT_LIST_HEAD(&mq->list_entry);
+			list_add_tail(&mq->list_entry, &cq->q_list);
+			efc_log_debug(hw->os,
+				       "create mq[%2d] id %3d len %4d\n",
+				      mq->instance, mq->queue->id,
+				      mq->entry_count);
+		}
+	}
+	return mq;
+}
+
+/* Allocate a new WQ object */
+struct hw_wq *
+efct_hw_new_wq(struct hw_cq *cq, u32 entry_count,
+	       u32 class, u32 ulp)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_wq *wq = kmalloc(sizeof(*wq), GFP_KERNEL);
+
+	if (wq) {
+		memset(wq, 0, sizeof(*wq));
+		wq->hw = cq->eq->hw;
+		wq->cq = cq;
+		wq->type = SLI_QTYPE_WQ;
+		wq->instance = cq->eq->hw->wq_count++;
+		wq->entry_count = entry_count;
+		wq->queue = &hw->wq[wq->instance];
+		wq->ulp = ulp;
+		wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT;
+		wq->wqec_count = wq->wqec_set_count;
+		wq->free_count = wq->entry_count - 1;
+		wq->class = class;
+		INIT_LIST_HEAD(&wq->pending_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_WQ, wq->queue,
+				    wq->entry_count, cq->queue)) {
+			efc_log_err(hw->os, "WQ allocation failure\n");
+			kfree(wq);
+			wq = NULL;
+		} else {
+			hw->hw_wq[wq->instance] = wq;
+			INIT_LIST_HEAD(&wq->list_entry);
+			list_add_tail(&wq->list_entry, &cq->q_list);
+			efc_log_debug(hw->os,
+				       "create wq[%2d] id %3d len %4d cls %d ulp %d\n",
+				wq->instance, wq->queue->id,
+				wq->entry_count, wq->class, wq->ulp);
+		}
+	}
+	return wq;
+}
+
+/* Allocate an RQ object, which encapsulates 2 SLI queues (an RQ pair) */
+struct hw_rq *
+efct_hw_new_rq(struct hw_cq *cq, u32 entry_count, u32 ulp)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_rq *rq = kmalloc(sizeof(*rq), GFP_KERNEL);
+
+	if (rq) {
+		memset(rq, 0, sizeof(*rq));
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cq;
+		rq->type = SLI_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Create the header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+
+		if (sli_fc_rq_alloc(&hw->sli, rq->hdr,
+				    rq->entry_count,
+				    rq->hdr_entry_size,
+				    cq->queue,
+				    true)) {
+			efc_log_err(hw->os,
+				     "RQ allocation failure - header\n");
+			kfree(rq);
+			return NULL;
+		}
+		/* Update hw_rq_lookup[] */
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		efc_log_debug(hw->os,
+			      "create rq[%2d] id %3d len %4d hdr  size %4d\n",
+			      rq->instance, rq->hdr->id, rq->entry_count,
+			      rq->hdr_entry_size);
+
+		/* Create the default data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+
+		if (sli_fc_rq_alloc(&hw->sli, rq->data,
+				    rq->entry_count,
+				    rq->data_entry_size,
+				    cq->queue,
+				    false)) {
+			efc_log_err(hw->os,
+				     "RQ allocation failure - first burst\n");
+			kfree(rq);
+			return NULL;
+		}
+		/* Update hw_rq_lookup[] */
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		efc_log_debug(hw->os,
+			       "create rq[%2d] id %3d len %4d data size %4d\n",
+			 rq->instance, rq->data->id, rq->entry_count,
+			 rq->data_entry_size);
+
+		hw->hw_rq[rq->instance] = rq;
+		INIT_LIST_HEAD(&rq->list_entry);
+		list_add_tail(&rq->list_entry, &cq->q_list);
+
+		rq->rq_tracker = kmalloc_array(rq->entry_count,
+					sizeof(struct efc_hw_sequence *),
+					GFP_ATOMIC);
+		if (!rq->rq_tracker)
+			return NULL;
+
+		memset(rq->rq_tracker, 0,
+		       rq->entry_count * sizeof(struct efc_hw_sequence *));
+	}
+	return rq;
+}
+
+/**
+ * Allocate an RQ object set, where each element in the set
+ * encapsulates 2 SLI queues (an RQ pair)
+ */
+u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count)
+{
+	struct efct_hw *hw = cqs[0]->eq->hw;
+	struct hw_rq *rq = NULL;
+	struct sli4_queue *qs[SLI_MAX_RQ_SET_COUNT * 2] = { NULL };
+	u32 i, q_count, size;
+
+	/* Initialise RQS pointers */
+	for (i = 0; i < num_rq_pairs; i++)
+		rqs[i] = NULL;
+
+	for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) {
+		rq = kmalloc(sizeof(*rq), GFP_KERNEL);
+		if (!rq)
+			goto error;
+
+		memset(rq, 0, sizeof(*rq));
+		rqs[i] = rq;
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cqs[i];
+		rq->type = SLI_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count] = rq->hdr;
+
+		/* Data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count + 1] = rq->data;
+
+		rq->rq_tracker = NULL;
+	}
+
+	if (!sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs,
+				cqs[0]->queue->id,
+			    rqs[0]->entry_count,
+			    rqs[0]->hdr_entry_size,
+			    rqs[0]->data_entry_size)) {
+		efc_log_err(hw->os,
+			     "RQ Set allocation failure for base CQ=%d\n",
+			    cqs[0]->queue->id);
+		goto error;
+	}
+
+	for (i = 0; i < num_rq_pairs; i++) {
+		hw->hw_rq[rqs[i]->instance] = rqs[i];
+		INIT_LIST_HEAD(&rqs[i]->list_entry);
+		list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list);
+		size = sizeof(struct efc_hw_sequence *) * rqs[i]->entry_count;
+		rqs[i]->rq_tracker = kmalloc(size, GFP_KERNEL);
+		if (!rqs[i]->rq_tracker)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	for (i = 0; i < num_rq_pairs; i++) {
+		if (rqs[i]) {
+			kfree(rqs[i]->rq_tracker);
+			kfree(rqs[i]);
+		}
+	}
+
+	return -1;
+}
+
+void
+efct_hw_del_eq(struct hw_eq *eq)
+{
+	if (eq) {
+		struct hw_cq *cq;
+		struct hw_cq *cq_next;
+
+		list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry)
+			efct_hw_del_cq(cq);
+		efct_varray_free(eq->wq_array);
+		list_del(&eq->list_entry);
+		eq->hw->hw_eq[eq->instance] = NULL;
+		kfree(eq);
+	}
+}
+
+void
+efct_hw_del_cq(struct hw_cq *cq)
+{
+	if (cq) {
+		struct hw_q *q;
+		struct hw_q *q_next;
+
+		list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) {
+			switch (q->type) {
+			case SLI_QTYPE_MQ:
+				efct_hw_del_mq((struct hw_mq *)q);
+				break;
+			case SLI_QTYPE_WQ:
+				efct_hw_del_wq((struct hw_wq *)q);
+				break;
+			case SLI_QTYPE_RQ:
+				efct_hw_del_rq((struct hw_rq *)q);
+				break;
+			default:
+				break;
+			}
+		}
+		list_del(&cq->list_entry);
+		cq->eq->hw->hw_cq[cq->instance] = NULL;
+		kfree(cq);
+	}
+}
+
+void
+efct_hw_del_mq(struct hw_mq *mq)
+{
+	if (mq) {
+		list_del(&mq->list_entry);
+		mq->cq->eq->hw->hw_mq[mq->instance] = NULL;
+		kfree(mq);
+	}
+}
+
+void
+efct_hw_del_wq(struct hw_wq *wq)
+{
+	if (wq) {
+		list_del(&wq->list_entry);
+		wq->cq->eq->hw->hw_wq[wq->instance] = NULL;
+		kfree(wq);
+	}
+}
+
+void
+efct_hw_del_rq(struct hw_rq *rq)
+{
+	struct efct_hw *hw = NULL;
+
+	if (rq) {
+		/* Free RQ tracker */
+		kfree(rq->rq_tracker);
+		rq->rq_tracker = NULL;
+		list_del(&rq->list_entry);
+		hw = rq->cq->eq->hw;
+		hw->hw_rq[rq->instance] = NULL;
+		kfree(rq);
+	}
+}
+
+void
+efct_hw_queue_dump(struct efct_hw *hw)
+{
+	struct hw_eq *eq;
+	struct hw_cq *cq;
+	struct hw_q *q;
+	struct hw_mq *mq;
+	struct hw_wq *wq;
+	struct hw_rq *rq;
+
+	list_for_each_entry(eq, &hw->eq_list, list_entry) {
+		efc_log_debug(hw->os, "eq[%d] id %2d\n",
+			       eq->instance, eq->queue->id);
+		list_for_each_entry(cq, &eq->cq_list, list_entry) {
+			efc_log_debug(hw->os, "cq[%d] id %2d current\n",
+				       cq->instance, cq->queue->id);
+			list_for_each_entry(q, &cq->q_list, list_entry) {
+				switch (q->type) {
+				case SLI_QTYPE_MQ:
+					mq = (struct hw_mq *)q;
+					efc_log_debug(hw->os,
+						       "    mq[%d] id %2d\n",
+					       mq->instance, mq->queue->id);
+					break;
+				case SLI_QTYPE_WQ:
+					wq = (struct hw_wq *)q;
+					efc_log_debug(hw->os,
+						       "    wq[%d] id %2d\n",
+						wq->instance, wq->queue->id);
+					break;
+				case SLI_QTYPE_RQ:
+					rq = (struct hw_rq *)q;
+					efc_log_debug(hw->os,
+						       "    rq[%d] hdr id %2d\n",
+					       rq->instance, rq->hdr->id);
+					break;
+				default:
+					break;
+				}
+			}
+		}
+	}
+}
+
+void
+efct_hw_queue_teardown(struct efct_hw *hw)
+{
+	u32 i;
+	struct hw_eq *eq;
+	struct hw_eq *eq_next;
+
+	if (hw->eq_list.next) {
+		list_for_each_entry_safe(eq, eq_next, &hw->eq_list,
+					 list_entry) {
+			efct_hw_del_eq(eq);
+		}
+	}
+	for (i = 0; i < ARRAY_SIZE(hw->wq_cpu_array); i++) {
+		efct_varray_free(hw->wq_cpu_array[i]);
+		hw->wq_cpu_array[i] = NULL;
+	}
+	for (i = 0; i < ARRAY_SIZE(hw->wq_class_array); i++) {
+		efct_varray_free(hw->wq_class_array[i]);
+		hw->wq_class_array[i] = NULL;
+	}
+}
+
+/**
+ * Allocate a WQ to an IO object
+ *
+ * The next work queue index is used to assign a WQ to an IO.
+ *
+ * If wq_steering is EFCT_HW_WQ_STEERING_CLASS, a WQ from io->wq_class is
+ * selected.
+ *
+ * If wq_steering is EFCT_HW_WQ_STEERING_REQUEST, then a WQ from the EQ that
+ * the IO request came in on is selected.
+ *
+ * If wq_steering is EFCT_HW_WQ_STEERING_CPU, then a WQ associated with the
+ * CPU the request is made on is selected.
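+ *
+ * For example (illustrative only): an IO with wq_steering set to
+ * EFCT_HW_WQ_STEERING_CPU that is submitted from CPU 3 is handed the next
+ * WQ from hw->wq_cpu_array[3]; if no WQ is found, hw->hw_wq[0] is used.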
+ */
+struct hw_wq *
+efct_hw_queue_next_wq(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	struct hw_eq *eq;
+	struct hw_wq *wq = NULL;
+	u32 cpuidx;
+
+	switch (io->wq_steering) {
+	case EFCT_HW_WQ_STEERING_CLASS:
+		if (unlikely(io->wq_class >= ARRAY_SIZE(hw->wq_class_array)))
+			break;
+
+		wq = efct_varray_iter_next(hw->wq_class_array[io->wq_class]);
+		break;
+	case EFCT_HW_WQ_STEERING_REQUEST:
+		eq = io->eq;
+		if (likely(eq))
+			wq = efct_varray_iter_next(eq->wq_array);
+		break;
+	case EFCT_HW_WQ_STEERING_CPU:
+		cpuidx = in_interrupt() ?
+			raw_smp_processor_id() : task_cpu(current);
+
+		if (likely(cpuidx < ARRAY_SIZE(hw->wq_cpu_array)))
+			wq = efct_varray_iter_next(hw->wq_cpu_array[cpuidx]);
+		break;
+	}
+
+	if (unlikely(!wq))
+		wq = hw->hw_wq[0];
+
+	return wq;
+}
+
+u32
+efct_hw_qtop_eq_count(struct efct_hw *hw)
+{
+	return hw->qtop->entry_counts[QTOP_EQ];
+}
+
+#define TOKEN_LEN		32
+
+/* token types */
+enum tok_type {
+	TOK_LPAREN = 1,
+	TOK_RPAREN,
+	TOK_COLON,
+	TOK_EQUALS,
+	TOK_QUEUE,
+	TOK_ATTR_NAME,
+	TOK_NUMBER,
+	TOK_NUMBER_VALUE,
+	TOK_NUMBER_LIST,
+};
+
+/* token sub-types */
+enum tok_subtype {
+	TOK_SUB_EQ = 100,
+	TOK_SUB_CQ,
+	TOK_SUB_RQ,
+	TOK_SUB_MQ,
+	TOK_SUB_WQ,
+	TOK_SUB_LEN,
+	TOK_SUB_CLASS,
+	TOK_SUB_ULP,
+	TOK_SUB_FILTER,
+};
+
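+/*
+ * Illustrative example: the fragment "cq:len=1024" tokenizes as
+ * TOK_QUEUE/TOK_SUB_CQ, TOK_COLON, TOK_ATTR_NAME/TOK_SUB_LEN, TOK_EQUALS
+ * and TOK_NUMBER "1024", which parse_topology() turns into a QTOP_CQ
+ * entry with len = 1024.
+ */
+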
+/* convert queue subtype to QTOP entry */
+static enum efct_hw_qtop_type
+subtype2qtop(enum tok_subtype q)
+{
+	switch (q) {
+	case TOK_SUB_EQ:	return QTOP_EQ;
+	case TOK_SUB_CQ:	return QTOP_CQ;
+	case TOK_SUB_RQ:	return QTOP_RQ;
+	case TOK_SUB_MQ:	return QTOP_MQ;
+	case TOK_SUB_WQ:	return QTOP_WQ;
+	default:
+		break;
+	}
+	return 0;
+}
+
+/* Declare token object */
+struct tok {
+	enum tok_type type;
+	enum tok_subtype subtype;
+	char string[TOKEN_LEN];
+};
+
+/* Declare token array object */
+struct tokarray {
+	struct tok *tokens;
+	u32 alloc_count;
+	u32 inuse_count;
+	u32 iter_idx;
+};
+
+/* token match structure */
+struct tokmatch {
+	char *s;
+	enum tok_type type;
+	enum tok_subtype subtype;
+};
+
+static int
+idstart(int c)
+{
+	return	isalpha(c) || (c == '_') || (c == '$');
+}
+
+static int
+idchar(int c)
+{
+	return idstart(c) || isdigit(c);
+}
+
+/* single character matches */
+static struct tokmatch cmatches[] = {
+	{"(", TOK_LPAREN},
+	{")", TOK_RPAREN},
+	{":", TOK_COLON},
+	{"=", TOK_EQUALS},
+};
+
+/* identifier match strings */
+static struct tokmatch smatches[] = {
+	{"eq", TOK_QUEUE, TOK_SUB_EQ},
+	{"cq", TOK_QUEUE, TOK_SUB_CQ},
+	{"rq", TOK_QUEUE, TOK_SUB_RQ},
+	{"mq", TOK_QUEUE, TOK_SUB_MQ},
+	{"wq", TOK_QUEUE, TOK_SUB_WQ},
+	{"len", TOK_ATTR_NAME, TOK_SUB_LEN},
+	{"class", TOK_ATTR_NAME, TOK_SUB_CLASS},
+	{"ulp", TOK_ATTR_NAME, TOK_SUB_ULP},
+	{"filter", TOK_ATTR_NAME, TOK_SUB_FILTER},
+};
+
+/* The string is scanned and the next token is returned */
+static const char *
+tokenize(const char *s, struct tok *tok)
+{
+	u32 i;
+
+	memset(tok, 0, sizeof(*tok));
+
+	/* Skip over whitespace */
+	while (*s && isspace(*s))
+		s++;
+
+	/* Return if nothing left in this string */
+	if (*s == 0)
+		return NULL;
+
+	/* Look for single character matches */
+	for (i = 0; i < ARRAY_SIZE(cmatches); i++) {
+		if (cmatches[i].s[0] == *s) {
+			tok->type = cmatches[i].type;
+			tok->subtype = cmatches[i].subtype;
+			tok->string[0] = *s++;
+			return s;
+		}
+	}
+
+	/* Scan for a hex number or decimal */
+	if ((s[0] == '0') && ((s[1] == 'x') || (s[1] == 'X'))) {
+		char *p = tok->string;
+
+		tok->type = TOK_NUMBER;
+
+		*p++ = *s++;
+		*p++ = *s++;
+		while ((*s == ',') || isxdigit(*s)) {
+			if ((p - tok->string) < (int)sizeof(tok->string))
+				*p++ = *s;
+			if (*s == ',')
+				tok->type = TOK_NUMBER_LIST;
+			s++;
+		}
+		*p = 0;
+		return s;
+	} else if (isdigit(*s)) {
+		char *p = tok->string;
+
+		tok->type = TOK_NUMBER;
+		while ((*s == ',') || isdigit(*s)) {
+			if ((p - tok->string) < (int)sizeof(tok->string))
+				*p++ = *s;
+			if (*s == ',')
+				tok->type = TOK_NUMBER_LIST;
+			s++;
+		}
+		*p = 0;
+		return s;
+	}
+
+	/* Scan for an ID */
+	if (idstart(*s)) {
+		char *p = tok->string;
+
+		for (*p++ = *s++; idchar(*s); s++) {
+			if ((p - tok->string) < TOKEN_LEN)
+				*p++ = *s;
+		}
+
+		/* See if this is a $ number value */
+		if (tok->string[0] == '$') {
+			tok->type = TOK_NUMBER_VALUE;
+		} else {
+			/* Look for a string match */
+			for (i = 0; i < ARRAY_SIZE(smatches); i++) {
+				if (strcmp(smatches[i].s, tok->string) == 0) {
+					tok->type = smatches[i].type;
+					tok->subtype = smatches[i].subtype;
+					return s;
+				}
+			}
+		}
+	}
+	return s;
+}
+
+/* convert token type to string */
+static const char *
+token_type2s(enum tok_type type)
+{
+	switch (type) {
+	case TOK_LPAREN:
+		return "TOK_LPAREN";
+	case TOK_RPAREN:
+		return "TOK_RPAREN";
+	case TOK_COLON:
+		return "TOK_COLON";
+	case TOK_EQUALS:
+		return "TOK_EQUALS";
+	case TOK_QUEUE:
+		return "TOK_QUEUE";
+	case TOK_ATTR_NAME:
+		return "TOK_ATTR_NAME";
+	case TOK_NUMBER:
+		return "TOK_NUMBER";
+	case TOK_NUMBER_VALUE:
+		return "TOK_NUMBER_VALUE";
+	case TOK_NUMBER_LIST:
+		return "TOK_NUMBER_LIST";
+	}
+	return "unknown";
+}
+
+/* convert token sub-type to string */
+static const char *
+token_subtype2s(enum tok_subtype subtype)
+{
+	switch (subtype) {
+	case TOK_SUB_EQ:
+		return "TOK_SUB_EQ";
+	case TOK_SUB_CQ:
+		return "TOK_SUB_CQ";
+	case TOK_SUB_RQ:
+		return "TOK_SUB_RQ";
+	case TOK_SUB_MQ:
+		return "TOK_SUB_MQ";
+	case TOK_SUB_WQ:
+		return "TOK_SUB_WQ";
+	case TOK_SUB_LEN:
+		return "TOK_SUB_LEN";
+	case TOK_SUB_CLASS:
+		return "TOK_SUB_CLASS";
+	case TOK_SUB_ULP:
+		return "TOK_SUB_ULP";
+	case TOK_SUB_FILTER:
+		return "TOK_SUB_FILTER";
+	}
+	return "";
+}
+
+/*
+ * When a syntax error is found, the input tokens are dumped up to and
+ * including the token that failed, as indicated by the current iterator index.
+ */
+static void
+tok_syntax(struct efct_hw *hw, struct tokarray *tokarray)
+{
+	u32 i;
+	struct tok *tok;
+
+	efc_log_test(hw->os, "Syntax error:\n");
+
+	for (i = 0, tok = tokarray->tokens; (i <= tokarray->inuse_count);
+	     i++, tok++) {
+		efc_log_test(hw->os, "%s [%2d]    %-16s %-16s %s\n",
+			      (i == tokarray->iter_idx) ? ">>>" : "   ", i,
+			     token_type2s(tok->type),
+			     token_subtype2s(tok->subtype), tok->string);
+	}
+}
+
+/*
+ * Parses tokens of type TOK_NUMBER and TOK_NUMBER_VALUE, returning a numeric
+ * value
+ */
+static u32
+tok_getnumber(struct efct_hw *hw, struct efct_hw_qtop *qtop,
+	      struct tok *tok)
+{
+	u32 rval = 0;
+	u32 num_cpus = num_online_cpus();
+
+	switch (tok->type) {
+	case TOK_NUMBER_VALUE:
+		if (strcmp(tok->string, "$ncpu") == 0)
+			rval = num_cpus;
+		else if (strcmp(tok->string, "$ncpu1") == 0)
+			rval = num_cpus - 1;
+		else if (strcmp(tok->string, "$nwq") == 0)
+			rval = (hw) ? hw->config.n_wq : 0;
+		else if (strcmp(tok->string, "$maxmrq") == 0)
+			rval = (num_cpus < EFCT_HW_MAX_MRQS)
+				? num_cpus : EFCT_HW_MAX_MRQS;
+		else if (strcmp(tok->string, "$nulp") == 0)
+			rval = hw->ulp_max - hw->ulp_start + 1;
+		else if ((qtop->rptcount_idx > 0) &&
+			 strcmp(tok->string, "$rpt0") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 1];
+		else if ((qtop->rptcount_idx > 1) &&
+			 strcmp(tok->string, "$rpt1") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 2];
+		else if ((qtop->rptcount_idx > 2) &&
+			 strcmp(tok->string, "$rpt2") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 3];
+		else if ((qtop->rptcount_idx > 3) &&
+			 strcmp(tok->string, "$rpt3") == 0)
+			rval = qtop->rptcount[qtop->rptcount_idx - 4];
+		else if (kstrtou32(tok->string, 0, &rval))
+			efc_log_debug(hw->os, "kstrtou32 failed\n");
+
+		break;
+	case TOK_NUMBER:
+		if (kstrtou32(tok->string, 0, &rval))
+			efc_log_debug(hw->os, "kstrtou32 failed\n");
+		break;
+	default:
+		break;
+	}
+	return rval;
+}
+
+/* The tokens are semantically parsed, to generate QTOP entries */
+static void
+parse_sub_filter(struct efct_hw *hw, struct efct_hw_qtop_entry *qt,
+		 struct tok *tok, struct efct_hw_qtop *qtop)
+{
+	u32 mask = 0;
+	char *p;
+	u32 v;
+
+	if (tok[3].type == TOK_NUMBER_LIST) {
+		mask = 0;
+		p = tok[3].string;
+
+		while ((p) && *p) {
+			if (kstrtou32(p, 0, &v))
+				efc_log_debug(hw->os, "kstrtou32 failed\n");
+			if (v < 32)
+				mask |= (1U << v);
+
+			p = strchr(p, ',');
+			if (p)
+				p++;
+		}
+		qt->filter_mask = mask;
+	} else {
+		qt->filter_mask = (1U << tok_getnumber(hw, qtop, &tok[3]));
+	}
+}
+
+/* The tokens are semantically parsed, to generate QTOP entries */
+static int
+parse_topology(struct efct_hw *hw, struct tokarray *tokarray,
+	       struct efct_hw_qtop *qtop)
+{
+	struct efct_hw_qtop_entry *qt = qtop->entries + qtop->inuse_count;
+	struct tok *tok;
+	u32 num = 0;
+
+	for (; (tokarray->iter_idx < tokarray->inuse_count) &&
+	     ((tok = &tokarray->tokens[tokarray->iter_idx]) != NULL);) {
+		if (qtop->inuse_count >= qtop->alloc_count)
+			return -1;
+
+		qt = qtop->entries + qtop->inuse_count;
+
+		switch (tok[0].type) {
+		case TOK_QUEUE:
+			qt->entry = subtype2qtop(tok[0].subtype);
+			qt->set_default = false;
+			qt->len = 0;
+			qt->class = 0;
+			qtop->inuse_count++;
+
+			/* Advance current token index */
+			tokarray->iter_idx++;
+
+			/*
+			 * Parse for queue attributes, possibly multiple
+			 * instances
+			 */
+			while ((tokarray->iter_idx + 4) <=
+				tokarray->inuse_count) {
+				tok = &tokarray->tokens[tokarray->iter_idx];
+				if (tok[0].type == TOK_COLON &&
+				    tok[1].type == TOK_ATTR_NAME &&
+					tok[2].type == TOK_EQUALS &&
+					(tok[3].type == TOK_NUMBER ||
+					 tok[3].type == TOK_NUMBER_VALUE ||
+					 tok[3].type == TOK_NUMBER_LIST)) {
+					num = tok_getnumber(hw, qtop, &tok[3]);
+
+					switch (tok[1].subtype) {
+					case TOK_SUB_LEN:
+						qt->len = num;
+						break;
+					case TOK_SUB_CLASS:
+						qt->class = num;
+						break;
+					case TOK_SUB_ULP:
+						qt->ulp = num;
+						break;
+					case TOK_SUB_FILTER:
+						parse_sub_filter(hw, qt, tok,
+								 qtop);
+						break;
+					default:
+						break;
+					}
+					/* Advance current token index */
+					tokarray->iter_idx += 4;
+				} else {
+					break;
+				}
+				num = 0;
+			}
+			qtop->entry_counts[qt->entry]++;
+			break;
+
+		case TOK_ATTR_NAME:
+			if (((tokarray->iter_idx + 5) <=
+			      tokarray->inuse_count) &&
+			      tok[1].type == TOK_COLON &&
+			      tok[2].type == TOK_QUEUE &&
+			      tok[3].type == TOK_EQUALS &&
+			      (tok[4].type == TOK_NUMBER ||
+			      tok[4].type == TOK_NUMBER_VALUE)) {
+				qt->entry = subtype2qtop(tok[2].subtype);
+				qt->set_default = true;
+				switch (tok[0].subtype) {
+				case TOK_SUB_LEN:
+					qt->len = tok_getnumber(hw, qtop,
+								&tok[4]);
+					break;
+				case TOK_SUB_CLASS:
+					qt->class = tok_getnumber(hw, qtop,
+								  &tok[4]);
+					break;
+				case TOK_SUB_ULP:
+					qt->ulp = tok_getnumber(hw, qtop,
+								&tok[4]);
+					break;
+				default:
+					break;
+				}
+				qtop->inuse_count++;
+				tokarray->iter_idx += 5;
+			} else {
+				tok_syntax(hw, tokarray);
+				return -1;
+			}
+			break;
+
+		case TOK_NUMBER:
+		case TOK_NUMBER_VALUE: {
+			u32 rpt_count = 1;
+			u32 i;
+			u32 rpt_idx;
+
+			rpt_count = tok_getnumber(hw, qtop, tok);
+
+			if (tok[1].type == TOK_LPAREN) {
+				u32 iter_idx_save;
+
+				tokarray->iter_idx += 2;
+
+				/* save token array iteration index */
+				iter_idx_save = tokarray->iter_idx;
+
+				for (i = 0; i < rpt_count; i++) {
+					rpt_idx = qtop->rptcount_idx;
+
+					if (qtop->rptcount_idx <
+					    ARRAY_SIZE(qtop->rptcount)) {
+						qtop->rptcount[rpt_idx + 1] = i;
+					}
+
+					/* restore token array iteration idx */
+					tokarray->iter_idx = iter_idx_save;
+
+					/* parse, append to qtop */
+					parse_topology(hw, tokarray, qtop);
+
+					qtop->rptcount_idx = rpt_idx;
+				}
+			}
+			break;
+		}
+
+		case TOK_RPAREN:
+			tokarray->iter_idx++;
+			return 0;
+
+		default:
+			tok_syntax(hw, tokarray);
+			return -1;
+		}
+	}
+	return 0;
+}
+
+/*
+ * The queue topology object is allocated, and filled with the results of
+ * parsing the passed in queue topology string
+ */
+struct efct_hw_qtop *
+efct_hw_qtop_parse(struct efct_hw *hw, const char *qtop_string)
+{
+	struct efct_hw_qtop *qtop;
+	struct tokarray tokarray;
+	const char *s;
+
+	efc_log_debug(hw->os, "queue topology: %s\n", qtop_string);
+
+	/* Allocate a token array */
+	tokarray.tokens = kmalloc_array(MAX_TOKENS, sizeof(*tokarray.tokens),
+					GFP_KERNEL);
+	if (!tokarray.tokens)
+		return NULL;
+	memset(tokarray.tokens, 0, MAX_TOKENS * sizeof(*tokarray.tokens));
+	tokarray.alloc_count = MAX_TOKENS;
+	tokarray.inuse_count = 0;
+	tokarray.iter_idx = 0;
+
+	/* Parse the tokens */
+	for (s = qtop_string; (tokarray.inuse_count < tokarray.alloc_count) &&
+	     ((s = tokenize(s, &tokarray.tokens[tokarray.inuse_count]))) !=
+	       NULL;)
+		tokarray.inuse_count++;
+
+	/* Allocate a queue topology structure */
+	qtop = kmalloc(sizeof(*qtop), GFP_KERNEL);
+	if (!qtop) {
+		kfree(tokarray.tokens);
+		efc_log_err(hw->os, "malloc qtop failed\n");
+		return NULL;
+	}
+	memset(qtop, 0, sizeof(*qtop));
+	qtop->os = hw->os;
+
+	/* Allocate queue topology entries */
+	qtop->entries = kzalloc((EFCT_HW_MAX_QTOP_ENTRIES *
+				sizeof(*qtop->entries)), GFP_ATOMIC);
+	if (!qtop->entries) {
+		kfree(qtop);
+		kfree(tokarray.tokens);
+		return NULL;
+	}
+	qtop->alloc_count = EFCT_HW_MAX_QTOP_ENTRIES;
+	qtop->inuse_count = 0;
+
+	/* Parse the tokens */
+	if (parse_topology(hw, &tokarray, qtop)) {
+		efc_log_err(hw->os, "failed to parse tokens\n");
+		efct_hw_qtop_free(qtop);
+		kfree(tokarray.tokens);
+		return NULL;
+	}
+
+	/* Free the tokens array */
+	kfree(tokarray.tokens);
+
+	return qtop;
+}
+
+void
+efct_hw_qtop_free(struct efct_hw_qtop *qtop)
+{
+	if (qtop) {
+		kfree(qtop->entries);
+		kfree(qtop);
+	}
+}
diff --git a/drivers/scsi/elx/efct/efct_hw_queues.h b/drivers/scsi/elx/efct/efct_hw_queues.h
new file mode 100644
index 000000000000..afa43209f823
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw_queues.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_HW_QUEUES_H__
+#define __EFCT_HW_QUEUES_H__
+
+#include "efct_hw.h"
+
+#define EFCT_HW_MQ_DEPTH	128
+
+enum efct_hw_qtop_type {
+	QTOP_EQ = 0,
+	QTOP_CQ,
+	QTOP_WQ,
+	QTOP_RQ,
+	QTOP_MQ,
+	QTOP_LAST,
+};
+
+struct efct_hw_qtop_entry {
+	enum		efct_hw_qtop_type entry;
+	bool		set_default;
+	u32		len;
+	u8		class;
+	u8		ulp;
+	u8		filter_mask;
+};
+
+struct efct_hw_mrq {
+	struct rq_config {
+		struct hw_eq *eq;
+		u32	len;
+		u8	class;
+		u8	ulp;
+		u8	filter_mask;
+	} rq_cfg[16];
+	u32 num_pairs;
+};
+
+#define MAX_TOKENS			256
+#define EFCT_HW_MAX_QTOP_ENTRIES	200
+
+struct efct_hw_qtop {
+	void		*os;
+	struct efct_hw_qtop_entry *entries;
+	u32		alloc_count;
+	u32		inuse_count;
+	u32		entry_counts[QTOP_LAST];
+	u32		rptcount[10];
+	u32		rptcount_idx;
+};
+
+struct efct_hw_qtop *
+efct_hw_qtop_parse(struct efct_hw *hw, const char *qtop_string);
+void efct_hw_qtop_free(struct efct_hw_qtop *qtop);
+const char *efct_hw_qtop_entry_name(enum efct_hw_qtop_type entry);
+u32 efct_hw_qtop_eq_count(struct efct_hw *hw);
+
+enum efct_hw_rtn
+efct_hw_init_queues(struct efct_hw *hw, struct efct_hw_qtop *qtop);
+extern struct hw_wq *
+efct_hw_queue_next_wq(struct efct_hw *hw, struct efct_hw_io *io);
+
+#endif /* __EFCT_HW_QUEUES_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (16 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 17/32] elx: efct: Hardware queues creation and deletion James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:13   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 19/32] elx: efct: Hardware IO and SGL initialization James Smart
                   ` (14 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
RQ data buffer allocation and deallocation.
Memory pool allocation and deallocation APIs.
Mailbox command submission and completion routines.
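
A minimal usage sketch of the memory pool API added below (the object
type, pool size, and caller context are hypothetical and shown only to
illustrate the get/put pairing):

  struct my_ctx { u32 tag; };
  struct efct_pool *p = efct_pool_alloc(efct, sizeof(struct my_ctx), 64);
  struct my_ctx *c;

  c = efct_pool_get(p);          /* NULL when the free list is empty */
  if (c) {
          c->tag = 1;
          efct_pool_put(p, c);   /* return the item to the free list */
  }
  efct_pool_free(p);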

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c    | 355 +++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |   7 +
 drivers/scsi/elx/efct/efct_utils.c | 446 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_utils.h |  83 +++++++
 4 files changed, 891 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_utils.c
 create mode 100644 drivers/scsi/elx/efct/efct_utils.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 41e400f9d401..339e904b0276 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -1220,3 +1220,358 @@ efct_get_wwn(struct efct_hw *hw, enum efct_hw_property prop)
 
 	return value;
 }
+
+static struct efc_hw_rq_buffer *
+efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
+			u32 size)
+{
+	struct efct *efct = hw->os;
+	struct efc_hw_rq_buffer *rq_buf = NULL;
+	struct efc_hw_rq_buffer *prq;
+	u32 i;
+
+	if (count != 0) {
+		rq_buf = kmalloc_array(count, sizeof(*rq_buf), GFP_ATOMIC);
+		if (!rq_buf)
+			return NULL;
+		memset(rq_buf, 0, sizeof(*rq_buf) * count);
+
+		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
+			prq->rqindex = rqindex;
+			prq->dma.size = size;
+			prq->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+							   prq->dma.size,
+							   &prq->dma.phys,
+							   GFP_DMA);
+			if (!prq->dma.virt) {
+				efc_log_err(hw->os, "DMA allocation failed\n");
+				kfree(rq_buf);
+				rq_buf = NULL;
+				break;
+			}
+		}
+	}
+	return rq_buf;
+}
+
+static void
+efct_hw_rx_buffer_free(struct efct_hw *hw,
+		       struct efc_hw_rq_buffer *rq_buf,
+			u32 count)
+{
+	struct efct *efct = hw->os;
+	u32 i;
+	struct efc_hw_rq_buffer *prq;
+
+	if (rq_buf) {
+		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
+			dma_free_coherent(&efct->pcidev->dev,
+					  prq->dma.size, prq->dma.virt,
+					  prq->dma.phys);
+			memset(&prq->dma, 0, sizeof(struct efc_dma));
+		}
+
+		kfree(rq_buf);
+	}
+}
+
+enum efct_hw_rtn
+efct_hw_rx_allocate(struct efct_hw *hw)
+{
+	struct efct *efct = hw->os;
+	u32 i;
+	int rc = EFCT_HW_RTN_SUCCESS;
+	u32 rqindex = 0;
+	struct hw_rq *rq;
+	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
+	u32 payload_size = hw->config.rq_default_buffer_size;
+
+	rqindex = 0;
+
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+
+		/* Allocate header buffers */
+		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+						      rq->entry_count,
+						      hdr_size);
+		if (!rq->hdr_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
+			      i, rq->hdr->id, rq->entry_count, hdr_size);
+
+		rqindex++;
+
+		/* Allocate payload buffers */
+		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+							  rq->entry_count,
+							  payload_size);
+		if (!rq->payload_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc fb_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
+			      i, rq->data->id, rq->entry_count, payload_size);
+		rqindex++;
+	}
+
+	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_rx_post(struct efct_hw *hw)
+{
+	u32 i;
+	u32 idx;
+	u32 rq_idx;
+	int rc = 0;
+
+	/*
+	 * In RQ pair mode, we MUST post the header and payload buffer at the
+	 * same time.
+	 */
+	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
+		struct hw_rq *rq = hw->hw_rq[rq_idx];
+
+		for (i = 0; i < rq->entry_count - 1; i++) {
+			struct efc_hw_sequence *seq;
+
+			seq = efct_array_get(hw->seq_pool, idx++);
+			if (!seq) {
+				rc = -1;
+				break;
+			}
+			seq->header = &rq->hdr_buf[i];
+			seq->payload = &rq->payload_buf[i];
+			rc = efct_hw_sequence_free(hw, seq);
+			if (rc)
+				break;
+		}
+		if (rc)
+			break;
+	}
+
+	return rc;
+}
+
+void
+efct_hw_rx_free(struct efct_hw *hw)
+{
+	struct hw_rq *rq;
+	u32 i;
+
+	/* Free hw_rq buffers */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		if (rq) {
+			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
+					       rq->entry_count);
+			rq->hdr_buf = NULL;
+			efct_hw_rx_buffer_free(hw, rq->payload_buf,
+					       rq->entry_count);
+			rq->payload_buf = NULL;
+		}
+	}
+}
+
+static int
+efct_hw_cmd_submit_pending(struct efct_hw *hw)
+{
+	struct efct_command_ctx *ctx = NULL;
+	int rc = 0;
+
+	/* Assumes lock held */
+
+	/* Only submit MQE if there's room */
+	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
+	       !list_empty(&hw->cmd_pending)) {
+		ctx = list_first_entry(&hw->cmd_pending,
+				       struct efct_command_ctx, list_entry);
+		if (!ctx)
+			break;
+
+		list_del(&ctx->list_entry);
+
+		INIT_LIST_HEAD(&ctx->list_entry);
+		list_add_tail(&ctx->list_entry, &hw->cmd_head);
+		hw->cmd_head_count++;
+		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
+			efc_log_test(hw->os, "sli_mq_write failed\n");
+			rc = -1;
+			break;
+		}
+	}
+	return rc;
+}
+
+/**
+ * Send a mailbox command to the hardware, and either wait for a completion
+ * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
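+ *
+ * Illustrative call patterns (hypothetical callers): a synchronous user
+ * issues
+ *	efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+ * while an asynchronous user issues
+ *	efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, done_cb, arg);
+ * where buf is an SLI4_BMBX_SIZE mailbox buffer and done_cb() is invoked
+ * from efct_hw_command_process() when the MQE completes.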
+ */
+enum efct_hw_rtn
+efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
+		void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	unsigned long flags = 0;
+	void *bmbx = NULL;
+
+	/*
+	 * If the chip is in an error state (UE'd) then reject this mailbox
+	 *  command.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		efc_log_crit(hw->os,
+			      "status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(&hw->sli),
+			sli_reg_read_err1(&hw->sli),
+			sli_reg_read_err2(&hw->sli));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (opts == EFCT_CMD_POLL) {
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+		bmbx = hw->sli.bmbx.virt;
+
+		memset(bmbx, 0, SLI4_BMBX_SIZE);
+		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
+
+		if (sli_bmbx_command(&hw->sli) == 0) {
+			rc = EFCT_HW_RTN_SUCCESS;
+			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
+		}
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	} else if (opts == EFCT_CMD_NOWAIT) {
+		struct efct_command_ctx	*ctx = NULL;
+
+		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+		if (!ctx)
+			return EFCT_HW_RTN_NO_RESOURCES;
+
+		memset(ctx, 0, sizeof(struct efct_command_ctx));
+
+		if (hw->state != EFCT_HW_STATE_ACTIVE) {
+			efc_log_err(hw->os,
+				     "Can't send command, HW state=%d\n",
+				    hw->state);
+			kfree(ctx);
+			return EFCT_HW_RTN_ERROR;
+		}
+
+		if (cb) {
+			ctx->cb = cb;
+			ctx->arg = arg;
+		}
+		ctx->buf = cmd;
+		ctx->ctx = hw;
+
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+
+		/* Add to pending list */
+		INIT_LIST_HEAD(&ctx->list_entry);
+		list_add_tail(&ctx->list_entry, &hw->cmd_pending);
+
+		/* Submit as much of the pending list as we can */
+		if (efct_hw_cmd_submit_pending(hw) == 0)
+			rc = EFCT_HW_RTN_SUCCESS;
+
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_command_process(struct efct_hw *hw, int status, u8 *mqe,
+			size_t size)
+{
+	struct efct_command_ctx *ctx = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		ctx = list_first_entry(&hw->cmd_head,
+				       struct efct_command_ctx, list_entry);
+		list_del(&ctx->list_entry);
+	}
+	if (!ctx) {
+		efc_log_err(hw->os, "no command context?!?\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return -1;
+	}
+
+	hw->cmd_head_count--;
+
+	/* Post any pending requests */
+	efct_hw_cmd_submit_pending(hw);
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	if (ctx->cb) {
+		if (ctx->buf)
+			memcpy(ctx->buf, mqe, size);
+
+		ctx->cb(hw, status, ctx->buf, ctx->arg);
+	}
+
+	memset(ctx, 0, sizeof(struct efct_command_ctx));
+	kfree(ctx);
+
+	return 0;
+}
+
+static int
+efct_hw_mq_process(struct efct_hw *hw,
+		   int status, struct sli4_queue *mq)
+{
+	u8		mqe[SLI4_BMBX_SIZE];
+
+	if (!sli_mq_read(&hw->sli, mq, mqe))
+		efct_hw_command_process(hw, status, mqe, mq->size);
+
+	return 0;
+}
+
+static int
+efct_hw_command_cancel(struct efct_hw *hw)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+
+	/*
+	 * Manually clean up remaining commands. Note: since this calls
+	 * efct_hw_command_process(), we'll also process the cmd_pending
+	 * list, so no need to manually clean that out.
+	 */
+	while (!list_empty(&hw->cmd_head)) {
+		u8		mqe[SLI4_BMBX_SIZE] = { 0 };
+		struct efct_command_ctx *ctx =
+	list_first_entry(&hw->cmd_head, struct efct_command_ctx, list_entry);
+
+		efc_log_test(hw->os, "hung command %08x\n",
+			      !ctx ? U32_MAX :
+			      (!ctx->buf ? U32_MAX :
+			       *((u32 *)ctx->buf)));
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		efct_hw_command_process(hw, -1, mqe, SLI4_BMBX_SIZE);
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+	}
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index bbba73969de3..2360b64fc2c3 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -863,4 +863,11 @@ efct_hw_set_ptr(struct efct_hw *hw, enum efct_hw_property prop,
 extern uint64_t
 efct_get_wwn(struct efct_hw *hw, enum efct_hw_property prop);
 
+enum efct_hw_rtn efct_hw_rx_allocate(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_rx_post(struct efct_hw *hw);
+void efct_hw_rx_free(struct efct_hw *hw);
+extern enum efct_hw_rtn
+efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
+		void *arg);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_utils.c b/drivers/scsi/elx/efct/efct_utils.c
new file mode 100644
index 000000000000..1d28be633a41
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_utils.c
@@ -0,0 +1,446 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_utils.h"
+
+#define DEFAULT_SLAB_LEN		(64 * 1024)
+
+struct pool_hdr {
+	struct list_head list_entry;
+};
+
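+/*
+ * Fixed-size element array backed by multiple "rows": elements are grouped
+ * into rows of at most slab_len bytes so that no single allocation exceeds
+ * the configured slab length; efct_array_get() maps a flat index onto a
+ * (row, offset) pair.
+ */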
+struct efct_array {
+	void *os;
+
+	u32 size;
+	u32 count;
+
+	u32 n_rows;
+	u32 elems_per_row;
+	u32 bytes_per_row;
+
+	void **array_rows;
+	u32 array_rows_len;
+};
+
+static u32 slab_len = DEFAULT_SLAB_LEN;
+
+/**
+ * Void pointer array structure
+ *
+ * This structure describes an object consisting of an array of void
+ * pointers. The object is allocated with a maximum array size; entries
+ * are then added to the array while an entry count is maintained. A set of
+ * iterator APIs is included to facilitate cycling through the array
+ * entries in a circular fashion.
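+ *
+ * Illustrative use (hypothetical caller): populate the array with
+ * efct_varray_add(), then call efct_varray_iter_next() repeatedly to hand
+ * entries out in round-robin order, as efct_hw_queue_next_wq() does for
+ * work queue steering.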
+ *
+ */
+struct efct_varray {
+	void *os;
+	u32 array_count;	/* maximum entry count in array */
+	void **array;		/* pointer to allocated array memory */
+	u32 entry_count;	/* number of entries added to the array */
+	uint next_index;	/* iterator next index */
+	spinlock_t lock;	/* iterator lock */
+};
+
+void
+efct_array_set_slablen(u32 len)
+{
+	slab_len = len;
+}
+
+struct efct_array *
+efct_array_alloc(void *os, u32 size, u32 count)
+{
+	struct efct_array *array = NULL;
+	u32 i;
+
+	/* Fail if the item size exceeds slab_len - caller should increase
+	 * slab_size, or not use this API.
+	 */
+	if (size > slab_len) {
+		pr_err("Error: size exceeds slab length\n");
+		return NULL;
+	}
+
+	array = kmalloc(sizeof(*array), GFP_KERNEL);
+	if (!array)
+		return NULL;
+
+	memset(array, 0, sizeof(*array));
+	array->os = os;
+	array->size = size;
+	array->count = count;
+	array->elems_per_row = slab_len / size;
+	array->n_rows = (count + array->elems_per_row - 1) /
+			array->elems_per_row;
+	array->bytes_per_row = array->elems_per_row * array->size;
+
+	array->array_rows_len = array->n_rows * sizeof(*array->array_rows);
+	array->array_rows = kmalloc(array->array_rows_len, GFP_ATOMIC);
+	if (!array->array_rows) {
+		efct_array_free(array);
+		return NULL;
+	}
+	memset(array->array_rows, 0, array->array_rows_len);
+	for (i = 0; i < array->n_rows; i++) {
+		array->array_rows[i] = kmalloc(array->bytes_per_row,
+					       GFP_KERNEL);
+		if (!array->array_rows[i]) {
+			efct_array_free(array);
+			return NULL;
+		}
+		memset(array->array_rows[i], 0, array->bytes_per_row);
+	}
+
+	return array;
+}
+
+void
+efct_array_free(struct efct_array *array)
+{
+	u32 i;
+
+	if (array) {
+		if (array->array_rows) {
+			for (i = 0; i < array->n_rows; i++)
+				kfree(array->array_rows[i]);
+
+			kfree(array->array_rows);
+		}
+		kfree(array);
+	}
+}
+
+void *efct_array_get(struct efct_array *array, u32 idx)
+{
+	void *entry = NULL;
+
+	if (idx < array->count) {
+		u32 row = idx / array->elems_per_row;
+		u32 offset = idx % array->elems_per_row;
+
+		entry = ((u8 *)array->array_rows[row]) +
+			 (offset * array->size);
+	}
+	return entry;
+}
+
+u32
+efct_array_get_count(struct efct_array *array)
+{
+	return array->count;
+}
+
+u32
+efct_array_get_size(struct efct_array *array)
+{
+	return array->size;
+}
+
+struct efct_varray *
+efct_varray_alloc(void *os, u32 array_count)
+{
+	struct efct_varray *va;
+
+	va = kmalloc(sizeof(*va), GFP_ATOMIC);
+	if (va) {
+		memset(va, 0, sizeof(*va));
+		va->os = os;
+		va->array_count = array_count;
+		va->array = kmalloc_array(va->array_count, sizeof(*va->array),
+					  GFP_KERNEL);
+		if (va->array) {
+			va->next_index = 0;
+			spin_lock_init(&va->lock);
+		} else {
+			kfree(va);
+			va = NULL;
+		}
+	}
+	return va;
+}
+
+void
+efct_varray_free(struct efct_varray *va)
+{
+	if (va) {
+		kfree(va->array);
+		kfree(va);
+	}
+}
+
+int
+efct_varray_add(struct efct_varray *va, void *entry)
+{
+	u32 rc = -1;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&va->lock, flags);
+	if (va->entry_count < va->array_count) {
+		va->array[va->entry_count++] = entry;
+		rc = 0;
+	}
+	spin_unlock_irqrestore(&va->lock, flags);
+
+	return rc;
+}
+
+void
+efct_varray_iter_reset(struct efct_varray *va)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&va->lock, flags);
+	va->next_index = 0;
+	spin_unlock_irqrestore(&va->lock, flags);
+}
+
+void *
+efct_varray_iter_next(struct efct_varray *va)
+{
+	void *rval = NULL;
+	unsigned long flags = 0;
+
+	if (va) {
+		spin_lock_irqsave(&va->lock, flags);
+		rval = _efct_varray_iter_next(va);
+		spin_unlock_irqrestore(&va->lock, flags);
+	}
+	return rval;
+}
+
+void *
+_efct_varray_iter_next(struct efct_varray *va)
+{
+	void *rval;
+
+	rval = va->array[va->next_index];
+	if (++va->next_index >= va->entry_count)
+		va->next_index = 0;
+	return rval;
+}
+
+u32
+efct_varray_get_count(struct efct_varray *va)
+{
+	u32 rc;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&va->lock, flags);
+	rc = va->entry_count;
+	spin_unlock_irqrestore(&va->lock, flags);
+	return rc;
+}
+
+/**
+ * The efct_pool data structure consists of:
+ *
+ *	pool->a		An efct_array_s.
+ *	pool->freelist	A linked list of free items.
+ *
+ *	When a pool is allocated using efct_pool_alloc(), the caller
+ *	provides the size in bytes of each memory pool item (size), and
+ *	a count of items (count). Since efct_pool_alloc() has no visibility
+ *	into the object the caller is allocating, a link for the linked list
+ *	is "pre-pended".  Thus when allocating the efct_array, the size used
+ *	is the size of the pool_hdr plus the requested memory pool item size.
+ *
+ *	array item layout:
+ *
+ *		pool_hdr
+ *		pool data[size]
+ *
+ *	The address of the pool data is returned when allocated (using
+ *	efct_pool_get(), or efct_pool_get_instance()), and received when being
+ *	freed (using efct_pool_put()). So the address returned by the array item
+ *	(efct_array_get()) must be offset by the size of pool_hdr.
+ */
+struct efct_pool *
+efct_pool_alloc(void *os, u32 size, u32 count)
+{
+	struct efct_pool *pool;
+	struct pool_hdr *pool_entry;
+	u32 i;
+
+	pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return NULL;
+
+	memset(pool, 0, sizeof(*pool));
+	pool->os = os;
+
+	/* Allocate an array where each array item is the size of a pool_hdr
+	 * plus the requested memory item size (size)
+	 */
+	pool->a = efct_array_alloc(os, size + sizeof(struct pool_hdr),
+				   count);
+	if (!pool->a) {
+		efct_pool_free(pool);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&pool->freelist);
+	for (i = 0; i < count; i++) {
+		pool_entry = (struct pool_hdr *)efct_array_get(pool->a, i);
+		INIT_LIST_HEAD(&pool_entry->list_entry);
+		list_add_tail(&pool_entry->list_entry, &pool->freelist);
+	}
+
+	spin_lock_init(&pool->lock);
+
+	return pool;
+}
+
+void
+efct_pool_reset(struct efct_pool *pool)
+{
+	u32 i;
+	u32 count = efct_array_get_count(pool->a);
+	u32 size = efct_array_get_size(pool->a);
+	unsigned long flags = 0;
+	struct pool_hdr *pool_entry;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	/*
+	 * Remove all the entries from the free list, otherwise we will
+	 * encounter linked list asserts when they are re-added.
+	 */
+	while (!list_empty(&pool->freelist)) {
+		pool_entry = list_first_entry(&pool->freelist,
+					      struct pool_hdr, list_entry);
+		list_del(&pool_entry->list_entry);
+	}
+
+	/* Reset the free list */
+	INIT_LIST_HEAD(&pool->freelist);
+
+	/* Return all elements to the free list and zero the elements */
+	for (i = 0; i < count; i++) {
+		pool_entry = (struct pool_hdr *)efct_array_get(pool->a, i);
+		memset(pool_entry, 0, size - sizeof(struct pool_hdr));
+		INIT_LIST_HEAD(&pool_entry->list_entry);
+		list_add_tail(&pool_entry->list_entry, &pool->freelist);
+	}
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+void
+efct_pool_free(struct efct_pool *pool)
+{
+	if (pool) {
+		if (pool->a)
+			efct_array_free(pool->a);
+		kfree(pool);
+	}
+}
+
+void *
+efct_pool_get(struct efct_pool *pool)
+{
+	struct pool_hdr *h = NULL;
+	void *item = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	if (!list_empty(&pool->freelist)) {
+		h = list_first_entry(&pool->freelist, struct pool_hdr,
+				     list_entry);
+	}
+
+	if (h) {
+		list_del(&h->list_entry);
+		/*
+		 * Return the array item address offset by the size of
+		 * pool_hdr
+		 */
+		item = &h[1];
+	}
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return item;
+}
+
+void
+efct_pool_put(struct efct_pool *pool, void *item)
+{
+	struct pool_hdr *h;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	/* Fetch the address of the array item, which is the item address
+	 * negatively offset by the size of pool_hdr (note the index of [-1]).
+	 */
+	h = &((struct pool_hdr *)item)[-1];
+
+	INIT_LIST_HEAD(&h->list_entry);
+	list_add_tail(&h->list_entry, &pool->freelist);
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+void
+efct_pool_put_head(struct efct_pool *pool, void *item)
+{
+	struct pool_hdr *h;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	/* Fetch the address of the array item, which is the item address
+	 * negatively offset by the size of pool_hdr (note the index of [-1]).
+	 */
+	h = &((struct pool_hdr *)item)[-1];
+
+	INIT_LIST_HEAD(&h->list_entry);
+	list_add(&h->list_entry, &pool->freelist);
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+u32
+efct_pool_get_count(struct efct_pool *pool)
+{
+	u32 count;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+	count = efct_array_get_count(pool->a);
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return count;
+}
+
+void *
+efct_pool_get_instance(struct efct_pool *pool, u32 idx)
+{
+	struct pool_hdr *h = efct_array_get(pool->a, idx);
+
+	if (!h)
+		return NULL;
+	return &h[1];
+}
+
+u32
+efct_pool_get_freelist_count(struct efct_pool *pool)
+{
+	u32 count = 0;
+	struct pool_hdr *item;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&pool->lock, flags);
+
+	list_for_each_entry(item, &pool->freelist, list_entry) {
+		count++;
+	}
+
+	spin_unlock_irqrestore(&pool->lock, flags);
+	return count;
+}
diff --git a/drivers/scsi/elx/efct/efct_utils.h b/drivers/scsi/elx/efct/efct_utils.h
new file mode 100644
index 000000000000..1c24fef138f3
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_utils.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_UTILS_H__
+#define __EFCT_UTILS_H__
+
+/* Sparse vector structure. */
+struct sparse_vector {
+	void	*os;
+	u32	max_idx;
+	void	**array;
+};
+
+#define EFCT_LOG_ENABLE_SCSI_TRACE(efct)                \
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 2)) != 0) : 0)
+#define EFCT_LOG_ENABLE_ELS_TRACE(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 1)) != 0) : 0)
+#define EFCT_LOG_ENABLE_IO_ERRORS(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 6)) != 0) : 0)
+
+#define SPV_ROWLEN	256
+#define SPV_DIM		3
+
+struct efct_pool {
+	void			*os;
+	struct efct_array	*a;
+	struct list_head	freelist;
+	/* Protects freelist */
+	spinlock_t		lock;
+};
+
+extern void
+efct_array_set_slablen(u32 len);
+extern struct efct_array *
+efct_array_alloc(void *os, u32 size, u32 count);
+extern void
+efct_array_free(struct efct_array *array);
+extern void *
+efct_array_get(struct efct_array *array, u32 idx);
+extern u32
+efct_array_get_count(struct efct_array *array);
+extern u32
+efct_array_get_size(struct efct_array *array);
+
+extern struct efct_varray *
+efct_varray_alloc(void *os, u32 entry_count);
+extern void
+efct_varray_free(struct efct_varray *ai);
+extern int
+efct_varray_add(struct efct_varray *ai, void *entry);
+extern void
+efct_varray_iter_reset(struct efct_varray *ai);
+extern void *
+efct_varray_iter_next(struct efct_varray *ai);
+extern void *
+_efct_varray_iter_next(struct efct_varray *ai);
+extern void
+efct_varray_unlock(struct efct_varray *ai);
+extern u32
+efct_varray_get_count(struct efct_varray *ai);
+
+extern struct efct_pool *
+efct_pool_alloc(void *os, u32 size, u32 count);
+extern void
+efct_pool_reset(struct efct_pool *pool);
+extern void
+efct_pool_free(struct efct_pool *pool);
+extern void *
+efct_pool_get(struct efct_pool *pool);
+extern void
+efct_pool_put(struct efct_pool *pool, void *arg);
+extern void
+efct_pool_put_head(struct efct_pool *pool, void *arg);
+extern u32
+efct_pool_get_count(struct efct_pool *pool);
+extern void *
+efct_pool_get_instance(struct efct_pool *pool, u32 instance);
+extern u32
+efct_pool_get_freelist_count(struct efct_pool *pool);
+#endif /* __EFCT_UTILS_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 19/32] elx: efct: Hardware IO and SGL initialization
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (17 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:22   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 20/32] elx: efct: Hardware queues processing James Smart
                   ` (13 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to create IO interfaces (WQs, etc.), initialize SGLs,
and configure hardware features.
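
A minimal caller-side sketch of the new IO/SGL APIs, for orientation only
(this is not code from the driver; buf_phys and buf_len are placeholder
DMA address/length values):

	struct efct_hw_io *io;

	io = efct_hw_io_alloc(hw);
	if (!io)
		return EFCT_HW_RTN_NO_RESOURCES;

	/* lay out the special SGEs required for a target read */
	efct_hw_io_init_sges(hw, io, EFCT_HW_IO_TARGET_READ);

	/* append a data SGE for the payload buffer */
	efct_hw_io_add_sge(hw, io, buf_phys, buf_len);

	/* ... send the IO; when done, drop the reference ... */
	efct_hw_io_free(hw, io);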

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 1480 ++++++++++++++++++++++++++++++++++++---
 drivers/scsi/elx/efct/efct_hw.h |   46 ++
 2 files changed, 1427 insertions(+), 99 deletions(-)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 339e904b0276..beca8534813d 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -240,6 +240,505 @@ efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
 		     j, hw->config.filter_def[j], i, id);
 }
 
+static inline void
+efct_hw_init_free_io(struct efct_hw_io *io)
+{
+	/*
+	 * Set io->done to NULL, to avoid any callbacks, should
+	 * a completion be received for one of these IOs
+	 */
+	io->done = NULL;
+	io->abort_done = NULL;
+	io->status_saved = false;
+	io->abort_in_progress = false;
+	io->rnode = NULL;
+	io->type = 0xFFFF;
+	io->wq = NULL;
+	io->ul_io = NULL;
+	io->tgt_wqe_timeout = 0;
+}
+
+static void
+efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* Restore the default */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+
+	/* Clear the overflow SGL */
+	io->ovfl_sgl = NULL;
+	io->ovfl_sgl_count = 0;
+	io->ovfl_lsp = NULL;
+}
+
+/* Initialize the pool of HW IO objects */
+static enum efct_hw_rtn
+efct_hw_setup_io(struct efct_hw *hw)
+{
+	u32	i = 0;
+	struct efct_hw_io	*io = NULL;
+	uintptr_t	xfer_virt = 0;
+	uintptr_t	xfer_phys = 0;
+	u32	index;
+	bool new_alloc = true;
+	struct efc_dma *dma;
+	struct efct *efct = hw->os;
+
+	if (!hw->io) {
+		hw->io = kmalloc_array(hw->config.n_io, sizeof(io),
+				 GFP_KERNEL);
+
+		if (!hw->io)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(hw->io, 0, hw->config.n_io * sizeof(io));
+
+		for (i = 0; i < hw->config.n_io; i++) {
+			hw->io[i] = kmalloc(sizeof(*io), GFP_KERNEL);
+			if (!hw->io[i])
+				goto error;
+
+			memset(hw->io[i], 0, sizeof(struct efct_hw_io));
+		}
+
+		/* Create WQE buffs for IO */
+		hw->wqe_buffs = kmalloc((hw->config.n_io *
+					     hw->sli.wqe_size),
+					     GFP_ATOMIC);
+		if (!hw->wqe_buffs) {
+			kfree(hw->io);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		memset(hw->wqe_buffs, 0, (hw->config.n_io *
+					hw->sli.wqe_size));
+
+	} else {
+		/* re-use existing IOs, including SGLs */
+		new_alloc = false;
+	}
+
+	if (new_alloc) {
+		dma = &hw->xfer_rdy;
+		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
+		dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+					       dma->size, &dma->phys, GFP_DMA);
+		if (!dma->virt)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
+	xfer_phys = hw->xfer_rdy.phys;
+
+	for (i = 0; i < hw->config.n_io; i++) {
+		struct hw_wq_callback *wqcb;
+
+		io = hw->io[i];
+
+		/* initialize IO fields */
+		io->hw = hw;
+
+		/* Assign a WQE buff */
+		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
+
+		/* Allocate the request tag for this IO */
+		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
+		if (!wqcb) {
+			efc_log_err(hw->os, "can't allocate request tag\n");
+			return EFCT_HW_RTN_NO_RESOURCES;
+		}
+		io->reqtag = wqcb->instance_index;
+
+		/* Now for the fields that are initialized on each free */
+		efct_hw_init_free_io(io);
+
+		/* The XB flag isn't cleared on IO free, so init to zero */
+		io->xbusy = 0;
+
+		if (sli_resource_alloc(&hw->sli, SLI_RSRC_XRI,
+				       &io->indicator, &index)) {
+			efc_log_err(hw->os,
+				     "sli_resource_alloc failed @ %d\n", i);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		if (new_alloc) {
+			dma = &io->def_sgl;
+			dma->size = hw->config.n_sgl *
+					sizeof(struct sli4_sge);
+			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt) {
+				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
+				memset(&io->def_sgl, 0,
+				       sizeof(struct efc_dma));
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+		}
+		io->def_sgl_count = hw->config.n_sgl;
+		io->sgl = &io->def_sgl;
+		io->sgl_count = io->def_sgl_count;
+
+		if (hw->xfer_rdy.size) {
+			io->xfer_rdy.virt = (void *)xfer_virt;
+			io->xfer_rdy.phys = xfer_phys;
+			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
+
+			xfer_virt += sizeof(struct fcp_txrdy);
+			xfer_phys += sizeof(struct fcp_txrdy);
+		}
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+error:
+	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
+		kfree(hw->io[i]);
+		hw->io[i] = NULL;
+	}
+
+	kfree(hw->io);
+	hw->io = NULL;
+
+	return EFCT_HW_RTN_NO_MEMORY;
+}
+
+static enum efct_hw_rtn
+efct_hw_init_io(struct efct_hw *hw)
+{
+	u32	i = 0, io_index = 0;
+	bool prereg = false;
+	struct efct_hw_io	*io = NULL;
+	u8		cmd[SLI4_BMBX_SIZE];
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32	nremaining;
+	u32	n = 0;
+	u32	sgls_per_request = 256;
+	struct efc_dma	**sgls = NULL;
+	struct efc_dma	reqbuf;
+	struct efct *efct = hw->os;
+
+	prereg = hw->sli.sgl_pre_registered;
+
+	memset(&reqbuf, 0, sizeof(struct efc_dma));
+	if (prereg) {
+		sgls = kmalloc_array(sgls_per_request, sizeof(*sgls),
+				     GFP_ATOMIC);
+		if (!sgls)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		reqbuf.size = 32 + sgls_per_request * 16;
+		reqbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						 reqbuf.size, &reqbuf.phys,
+						 GFP_DMA);
+		if (!reqbuf.virt) {
+			efc_log_err(hw->os, "dma_alloc reqbuf failed\n");
+			kfree(sgls);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	for (nremaining = hw->config.n_io; nremaining; nremaining -= n) {
+		if (prereg) {
+			/* Copy addresses of the SGLs into the local sgls[]
+			 * array; break out if the XRI values are not
+			 * contiguous.
+			 */
+			u32 min = (sgls_per_request < nremaining)
+					? sgls_per_request : nremaining;
+			for (n = 0; n < min; n++) {
+				/* Check that we have contiguous xri values */
+				if (n > 0) {
+					if (hw->io[io_index + n]->indicator !=
+					    hw->io[io_index + n - 1]->indicator
+					    + 1)
+						break;
+				}
+				sgls[n] = hw->io[io_index + n]->sgl;
+			}
+
+			if (!sli_cmd_post_sgl_pages(&hw->sli, cmd,
+						   sizeof(cmd),
+						hw->io[io_index]->indicator,
+						n, sgls, NULL, &reqbuf)) {
+				if (efct_hw_command(hw, cmd, EFCT_CMD_POLL,
+						    NULL, NULL)) {
+					rc = EFCT_HW_RTN_ERROR;
+					efc_log_err(hw->os,
+						     "SGL post failed\n");
+					break;
+				}
+			}
+		} else {
+			n = nremaining;
+		}
+
+		/* Add to tail if successful */
+		for (i = 0; i < n; i++, io_index++) {
+			io = hw->io[io_index];
+			io->state = EFCT_HW_IO_STATE_FREE;
+			INIT_LIST_HEAD(&io->list_entry);
+			list_add_tail(&io->list_entry, &hw->io_free);
+		}
+	}
+
+	if (prereg) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  reqbuf.size, reqbuf.virt, reqbuf.phys);
+		memset(&reqbuf, 0, sizeof(struct efc_dma));
+		kfree(sgls);
+	}
+
+	return rc;
+}
+
+static enum efct_hw_rtn
+efct_hw_config_set_fdt_xfer_hint(struct efct_hw *hw, u32 fdt_xfer_hint)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint param;
+
+	memset(&param, 0, sizeof(param));
+	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    SLI4_SET_FEATURES_SET_FTD_XFER_HINT,
+				    sizeof(param),
+				    &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
+			      fdt_xfer_hint, rc);
+	else
+		efc_log_info(hw->os, "Set FDT transfer hint to %d\n",
+			      le32_to_cpu(param.fdt_xfer_hint));
+
+	return rc;
+}
+
+/**
+ * efct_hw_config_mrq() - Configure Multi-RQ
+ *
+ * @hw: Hardware context allocated by the caller.
+ * @mode: 1 to set MRQ filters and 0 to set FCFI index
+ * @fcf_index: valid in mode 0
+ *
+ * Returns 0 on success, or a non-zero value on failure.
+ */
+static int
+efct_hw_config_mrq(struct efct_hw *hw, u8 mode, u16 fcf_index)
+{
+	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
+	struct hw_rq *rq;
+	struct sli4_cmd_reg_fcfi_mrq *rsp = NULL;
+	u32 i, j;
+	struct sli4_cmd_rq_cfg rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	int rc;
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		goto issue_cmd;
+
+	/* Set the filter match/mask values from hw's filter_def values */
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		rq_filter[i].rq_id = cpu_to_le16(0xffff);
+		rq_filter[i].r_ctl_mask  = (u8)hw->config.filter_def[i];
+		rq_filter[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
+		rq_filter[i].type_mask   = (u8)(hw->config.filter_def[i] >> 16);
+		rq_filter[i].type_match  = (u8)(hw->config.filter_def[i] >> 24);
+	}
+
+	/* Accumulate counts for each filter type used, build rq_ids[] list */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		for (j = 0; j < SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG; j++) {
+			if (!(rq->filter_mask & (1U << j)))
+				continue;
+
+			if (rq_filter[j].rq_id != cpu_to_le16(0xffff)) {
+				/*
+				 * Already used. Bail out if this is not
+				 * the RQ set case.
+				 */
+				if (!rq->is_mrq ||
+				    rq_filter[j].rq_id !=
+				    cpu_to_le16(rq->base_mrq_id)) {
+					efc_log_err(hw->os, "Wrong q top.\n");
+					return EFCT_HW_RTN_ERROR;
+				}
+				continue;
+			}
+
+			if (!rq->is_mrq) {
+				rq_filter[j].rq_id = cpu_to_le16(rq->hdr->id);
+				continue;
+			}
+
+			rq_filter[j].rq_id = cpu_to_le16(rq->base_mrq_id);
+			mrq_bitmask |= (1U << j);
+		}
+	}
+
+issue_cmd:
+	/* Invoke REG_FCFI_MRQ */
+	rc = sli_cmd_reg_fcfi_mrq(&hw->sli,
+				  buf,	/* buf */
+				 SLI4_BMBX_SIZE, /* size */
+				 mode, /* mode 1 */
+				 fcf_index, /* fcf_index */
+				 /* RQ selection policy*/
+				 hw->config.rq_selection_policy,
+				 mrq_bitmask, /* MRQ bitmask */
+				 hw->hw_mrq_count, /* num_mrqs */
+				 rq_filter);/* RQ filter */
+	if (rc) {
+		efc_log_err(hw->os,
+			     "sli_cmd_reg_fcfi_mrq() failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+
+	rsp = (struct sli4_cmd_reg_fcfi_mrq *)buf;
+
+	if (rc != EFCT_HW_RTN_SUCCESS ||
+	    le16_to_cpu(rsp->hdr.status)) {
+		efc_log_err(hw->os,
+			     "FCFI MRQ reg failed. cmd = %x status = %x\n",
+			     rsp->hdr.command,
+			     le16_to_cpu(rsp->hdr.status));
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
+	return 0;
+}
+
+static enum efct_hw_rtn
+efct_hw_config_watchdog_timer(struct efct_hw *hw);
+
+static void
+efct_hw_watchdog_timer_cb(struct timer_list *t)
+{
+	struct efct_hw *hw = from_timer(hw, t, watchdog_timer);
+
+	efct_hw_config_watchdog_timer(hw);
+}
+
+static void
+efct_hw_cb_cfg_watchdog(struct efct_hw *hw, int status, u8 *mqe,
+			void  *arg)
+{
+	u16 timeout = hw->watchdog_timeout;
+
+	if (status != 0) {
+		efc_log_err(hw->os, "config watchdog timer failed, rc = %d\n",
+			     status);
+	} else {
+		if (timeout != 0) {
+			/*
+			 * schedule the callback 500 ms before the timeout
+			 * to keep the heartbeat alive
+			 */
+			timer_setup(&hw->watchdog_timer,
+				    &efct_hw_watchdog_timer_cb, 0);
+
+			mod_timer(&hw->watchdog_timer,
+				  jiffies +
+				  msecs_to_jiffies(timeout * 1000 - 500));
+		} else {
+			del_timer(&hw->watchdog_timer);
+		}
+	}
+
+	kfree(mqe);
+}
+
+/* Set configuration parameters for watchdog timer feature */
+static enum efct_hw_rtn
+efct_hw_config_watchdog_timer(struct efct_hw *hw)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 *buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+
+	if (!buf)
+		return EFCT_HW_RTN_ERROR;
+
+	sli4_cmd_lowlevel_set_watchdog(&hw->sli, buf, SLI4_BMBX_SIZE,
+				       hw->watchdog_timeout);
+	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, efct_hw_cb_cfg_watchdog,
+			     NULL);
+	if (rc) {
+		kfree(buf);
+		efc_log_err(hw->os, "config watchdog timer failed, rc = %d\n",
+			     rc);
+	}
+	return rc;
+}
+
+static enum efct_hw_rtn
+efct_hw_set_dif_seed(struct efct_hw *hw)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_dif_seed seed_param;
+
+	memset(&seed_param, 0, sizeof(seed_param));
+	seed_param.seed = cpu_to_le16(hw->config.dif_seed);
+
+	/* send set_features command */
+	if (!sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+					SLI4_SET_FEATURES_DIF_SEED,
+					4,
+					(u32 *)&seed_param)) {
+		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+		if (rc)
+			efc_log_err(hw->os,
+				     "efct_hw_command returns %d\n", rc);
+		else
+			efc_log_debug(hw->os, "DIF seed set to 0x%x\n",
+				       hw->config.dif_seed);
+	} else {
+		efc_log_err(hw->os,
+			     "sli_cmd_common_set_features failed\n");
+		rc = EFCT_HW_RTN_ERROR;
+	}
+	return rc;
+}
+
+/* enable sli port health check */
+static enum efct_hw_rtn
+efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query,
+				     u8 enable)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_health_check param;
+	u32	health_check_flag = 0;
+
+	memset(&param, 0, sizeof(param));
+
+	if (enable)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
+
+	if (query)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
+
+	param.health_check_dword = cpu_to_le32(health_check_flag);
+
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK,
+				    sizeof(param),
+				    &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
+	else
+		efc_log_test(hw->os, "SLI Port Health Check is enabled\n");
+
+	return rc;
+}
+
 enum efct_hw_rtn
 efct_hw_init(struct efct_hw *hw)
 {
@@ -712,104 +1211,6 @@ efct_hw_init(struct efct_hw *hw)
 	return EFCT_HW_RTN_SUCCESS;
 }
 
-/**
- * efct_hw_config_mrq() - Configure Multi-RQ
- *
- * @hw: Hardware context allocated by the caller.
- * @mode: 1 to set MRQ filters and 0 to set FCFI index
- * @fcf_index: valid in mode 0
- *
- * Returns 0 on success, or a non-zero value on failure.
- */
-static int
-efct_hw_config_mrq(struct efct_hw *hw, u8 mode, u16 fcf_index)
-{
-	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
-	struct hw_rq *rq;
-	struct sli4_cmd_reg_fcfi_mrq *rsp = NULL;
-	u32 i, j;
-	struct sli4_cmd_rq_cfg rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
-	int rc;
-
-	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
-		goto issue_cmd;
-
-	/* Set the filter match/mask values from hw's filter_def values */
-	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
-		rq_filter[i].rq_id = cpu_to_le16(0xffff);
-		rq_filter[i].r_ctl_mask  = (u8)hw->config.filter_def[i];
-		rq_filter[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
-		rq_filter[i].type_mask   = (u8)(hw->config.filter_def[i] >> 16);
-		rq_filter[i].type_match  = (u8)(hw->config.filter_def[i] >> 24);
-	}
-
-	/* Accumulate counts for each filter type used, build rq_ids[] list */
-	for (i = 0; i < hw->hw_rq_count; i++) {
-		rq = hw->hw_rq[i];
-		for (j = 0; j < SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG; j++) {
-			if (!(rq->filter_mask & (1U << j)))
-				continue;
-
-			if (rq_filter[j].rq_id != cpu_to_le16(0xffff)) {
-				/*
-				 * Already used. Bailout ifts not RQset
-				 * case
-				 */
-				if (!rq->is_mrq ||
-				    rq_filter[j].rq_id !=
-				    cpu_to_le16(rq->base_mrq_id)) {
-					efc_log_err(hw->os, "Wrong q top.\n");
-					return EFCT_HW_RTN_ERROR;
-				}
-				continue;
-			}
-
-			if (!rq->is_mrq) {
-				rq_filter[j].rq_id = cpu_to_le16(rq->hdr->id);
-				continue;
-			}
-
-			rq_filter[j].rq_id = cpu_to_le16(rq->base_mrq_id);
-			mrq_bitmask |= (1U << j);
-		}
-	}
-
-issue_cmd:
-	/* Invoke REG_FCFI_MRQ */
-	rc = sli_cmd_reg_fcfi_mrq(&hw->sli,
-				  buf,	/* buf */
-				 SLI4_BMBX_SIZE, /* size */
-				 mode, /* mode 1 */
-				 fcf_index, /* fcf_index */
-				 /* RQ selection policy*/
-				 hw->config.rq_selection_policy,
-				 mrq_bitmask, /* MRQ bitmask */
-				 hw->hw_mrq_count, /* num_mrqs */
-				 rq_filter);/* RQ filter */
-	if (rc) {
-		efc_log_err(hw->os,
-			     "sli_cmd_reg_fcfi_mrq() failed: %d\n", rc);
-		return EFCT_HW_RTN_ERROR;
-	}
-
-	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
-
-	rsp = (struct sli4_cmd_reg_fcfi_mrq *)buf;
-
-	if (rc != EFCT_HW_RTN_SUCCESS ||
-	    le16_to_cpu(rsp->hdr.status)) {
-		efc_log_err(hw->os,
-			     "FCFI MRQ reg failed. cmd = %x status = %x\n",
-			     rsp->hdr.command,
-			     le16_to_cpu(rsp->hdr.status));
-		return EFCT_HW_RTN_ERROR;
-	}
-
-	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
-		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
-	return 0;
-}
-
 enum efct_hw_rtn
 efct_hw_set(struct efct_hw *hw, enum efct_hw_property prop, u32 value)
 {
@@ -1221,6 +1622,10 @@ efct_get_wwn(struct efct_hw *hw, enum efct_hw_property prop)
 	return value;
 }
 
+/*
+ * An array of struct efc_hw_rq_buffer is allocated,
+ * along with the required DMA memory
+ */
 static struct efc_hw_rq_buffer *
 efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
 			u32 size)
@@ -1327,6 +1732,7 @@ efct_hw_rx_allocate(struct efct_hw *hw)
 	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
 }
 
+/* Post the RQ data buffers to the chip */
 enum efct_hw_rtn
 efct_hw_rx_post(struct efct_hw *hw)
 {
@@ -1414,7 +1820,7 @@ efct_hw_cmd_submit_pending(struct efct_hw *hw)
 	return rc;
 }
 
-/**
+/*
  * Send a mailbox command to the hardware, and either wait for a completion
  * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
  */
@@ -1575,3 +1981,879 @@ efct_hw_command_cancel(struct efct_hw *hw)
 
 	return 0;
 }
+
+static inline struct efct_hw_io *
+_efct_hw_io_alloc(struct efct_hw *hw)
+{
+	struct efct_hw_io	*io = NULL;
+
+	if (!list_empty(&hw->io_free)) {
+		io = list_first_entry(&hw->io_free, struct efct_hw_io,
+				      list_entry);
+		list_del(&io->list_entry);
+	}
+	if (io) {
+		INIT_LIST_HEAD(&io->list_entry);
+		INIT_LIST_HEAD(&io->wqe_link);
+		INIT_LIST_HEAD(&io->dnrx_link);
+		list_add_tail(&io->list_entry, &hw->io_inuse);
+		io->state = EFCT_HW_IO_STATE_INUSE;
+		io->abort_reqtag = U32_MAX;
+		kref_init(&io->ref);
+		io->release = efct_hw_io_free_internal;
+	} else {
+		atomic_add_return(1, &hw->io_alloc_failed_count);
+	}
+
+	return io;
+}
+
+struct efct_hw_io *
+efct_hw_io_alloc(struct efct_hw *hw)
+{
+	struct efct_hw_io	*io = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	io = _efct_hw_io_alloc(hw);
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return io;
+}
+
+/*
+ * When an IO is freed, depending on the exchange busy flag, and other
+ * workarounds, move it to the correct list.
+ */
+static void
+efct_hw_io_free_move_correct_list(struct efct_hw *hw,
+				  struct efct_hw_io *io)
+{
+	if (io->xbusy) {
+		/*
+		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
+		 * up
+		 */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_wait_free);
+		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
+	} else {
+		/* IO not busy, add to free list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_free);
+		io->state = EFCT_HW_IO_STATE_FREE;
+	}
+}
+
+static inline void
+efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* initialize IO fields */
+	efct_hw_init_free_io(io);
+
+	/* Restore default SGL */
+	efct_hw_io_restore_sgl(hw, io);
+}
+
+/**
+ * Free a previously-allocated HW IO object. Called when
+ * IO refcount goes to zero (host-owned IOs only).
+ */
+void
+efct_hw_io_free_internal(struct kref *arg)
+{
+	unsigned long flags = 0;
+	struct efct_hw_io *io =
+			container_of(arg, struct efct_hw_io, ref);
+	struct efct_hw *hw = io->hw;
+
+	/* perform common cleanup */
+	efct_hw_io_free_common(hw, io);
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+		/* remove from in-use list */
+		if (io->list_entry.next &&
+		    !list_empty(&hw->io_inuse)) {
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+int
+efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* just put refcount */
+	if (refcount_read(&io->ref.refcount) <= 0) {
+		efc_log_err(hw->os,
+			     "Bad parameter: refcount <= 0 xri=%x tag=%x\n",
+			    io->indicator, io->reqtag);
+		return -1;
+	}
+
+	return kref_put(&io->ref, io->release);
+}
+
+u8
+efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	return (refcount_read(&io->ref.refcount) > 0);
+}
+
+struct efct_hw_io *
+efct_hw_io_lookup(struct efct_hw *hw, u32 xri)
+{
+	u32 ioindex;
+
+	ioindex = xri - hw->sli.extent[SLI_RSRC_XRI].base[0];
+	return hw->io[ioindex];
+}
+
+/**
+ * Issue any pending callbacks for an IO and remove off the timer and
+ * pending lists.
+ */
+static void
+efct_hw_io_cancel_cleanup(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	efct_hw_done_t done = io->done;
+	efct_hw_done_t abort_done = io->abort_done;
+	unsigned long flags = 0;
+
+	/* first check active_wqe list and remove if there */
+	if (io->wqe_link.next)
+		list_del(&io->wqe_link);
+
+	/* Remove from WQ pending list */
+	if (io->wq && io->wq->pending_list.next)
+		list_del(&io->list_entry);
+
+	if (io->done) {
+		void *arg = io->arg;
+
+		io->done = NULL;
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+		done(io, io->rnode, 0, SLI4_FC_WCQE_STATUS_SHUTDOWN, 0, arg);
+		spin_lock_irqsave(&hw->io_lock, flags);
+	}
+
+	if (io->abort_done) {
+		void		*abort_arg = io->abort_arg;
+
+		io->abort_done = NULL;
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+		abort_done(io, io->rnode, 0, SLI4_FC_WCQE_STATUS_SHUTDOWN, 0,
+			   abort_arg);
+		spin_lock_irqsave(&hw->io_lock, flags);
+	}
+}
+
+static int
+efct_hw_io_cancel(struct efct_hw *hw)
+{
+	struct efct_hw_io *io = NULL;
+	struct efct_hw_io *tmp_io = NULL;
+	u32 iters = 100; /* One second limit */
+	unsigned long flags = 0;
+
+	/*
+	 * Manually clean up outstanding IO.
+	 * Only walk through list once: the backend will cleanup any IOs when
+	 * done/abort_done is called.
+	 */
+	spin_lock_irqsave(&hw->io_lock, flags);
+	list_for_each_entry_safe(io, tmp_io, &hw->io_inuse, list_entry) {
+		efct_hw_done_t  done = io->done;
+		efct_hw_done_t  abort_done = io->abort_done;
+
+		efct_hw_io_cancel_cleanup(hw, io);
+
+		/*
+		 * Since this is called in a reset/shutdown
+		 * case, if there is no callback, then just
+		 * free the IO.
+		 *
+		 * Note: A port owned XRI cannot be on
+		 *       the in use list. We cannot call
+		 *       efct_hw_io_free() because we already
+		 *       hold the io_lock.
+		 */
+		if (!done &&
+		    !abort_done) {
+			efct_hw_io_free_common(hw, io);
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	}
+
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	/* Give time for the callbacks to complete */
+	do {
+		mdelay(10);
+		iters--;
+	} while (!list_empty(&hw->io_inuse) && iters);
+
+	/* Leave a breadcrumb that cleanup is not yet complete. */
+	if (!list_empty(&hw->io_inuse))
+		efc_log_test(hw->os, "io_inuse list is not empty\n");
+
+	return 0;
+}
+
+enum efct_hw_rtn
+efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efc_dma *sgl,
+			u32 sgl_count)
+{
+	if (hw->sli.sgl_pre_registered) {
+		efc_log_err(hw->os,
+			     "can't use temp SGL with pre-registered SGLs\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	io->ovfl_sgl = sgl;
+	io->ovfl_sgl_count = sgl_count;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_init_sges(struct efct_hw *hw, struct efct_hw_io *io,
+		     enum efct_hw_io_type type)
+{
+	struct sli4_sge	*data = NULL;
+	u32 i = 0;
+	u32 skips = 0;
+	u32 sge_flags = 0;
+
+	if (!io) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n", hw, io);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Clear / reset the scatter-gather list */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+	io->first_data_sge = 0;
+
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
+	io->n_sge = 0;
+	io->sge_offset = 0;
+
+	io->type = type;
+
+	data = io->sgl->virt;
+
+	/*
+	 * Some IO types have underlying hardware requirements on the order
+	 * of SGEs. Process all special entries here.
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE:
+#define EFCT_TARGET_WRITE_SKIPS	2
+		skips = EFCT_TARGET_WRITE_SKIPS;
+
+		/* populate host resident XFER_RDY buffer */
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		data->buffer_address_high =
+			cpu_to_le32(upper_32_bits(io->xfer_rdy.phys));
+		data->buffer_address_low  =
+			cpu_to_le32(lower_32_bits(io->xfer_rdy.phys));
+		data->buffer_length = cpu_to_le32(io->xfer_rdy.size);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+
+		skips--;
+
+		io->n_sge = 1;
+		break;
+	case EFCT_HW_IO_TARGET_READ:
+		/*
+		 * For FCP_TSEND64, the first 2 entries are SKIP SGE's
+		 */
+#define EFCT_TARGET_READ_SKIPS	2
+		skips = EFCT_TARGET_READ_SKIPS;
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		/*
+		 * No skips, etc. for FCP_TRSP64
+		 */
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Write skip entries
+	 */
+	for (i = 0; i < skips; i++) {
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+	}
+
+	io->n_sge += skips;
+
+	/*
+	 * Set last
+	 */
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_add_seed_sge(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efct_hw_dif_info *dif_info)
+{
+	struct sli4_sge	*data = NULL;
+	struct sli4_diseed_sge *dif_seed;
+	u32 sge_flags;
+	u16 dif_flags;
+
+	/* If no dif_info, or dif_oper is disabled, then just return success */
+	if (!dif_info ||
+	    dif_info->dif_oper == EFCT_HW_DIF_OPER_DISABLED)
+		return EFCT_HW_RTN_SUCCESS;
+
+	if (!io) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p dif_info=%p\n", hw,
+			    io, dif_info);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	/* If we are doing T10 DIF add the DIF Seed SGE */
+	memset(data, 0, sizeof(struct sli4_diseed_sge));
+	dif_seed = (struct sli4_diseed_sge *)data;
+
+	dif_seed->ref_tag_cmp = cpu_to_le32(dif_info->ref_tag_cmp);
+	dif_seed->ref_tag_repl = cpu_to_le32(dif_info->ref_tag_repl);
+	dif_seed->app_tag_repl = cpu_to_le16(dif_info->app_tag_repl);
+
+	dif_flags = 0;
+	if (dif_info->repl_app_tag)
+		dif_flags |= DISEED_SGE_RE;
+
+	if (hw->sli.if_type != SLI4_INTF_IF_TYPE_2) {
+		if (dif_info->disable_app_ref_ffff)
+			dif_flags |= DISEED_SGE_ATRT;
+
+		if (dif_info->disable_app_ffff)
+			dif_flags |= DISEED_SGE_AT;
+	}
+	dif_flags |= SLI4_SGE_TYPE_DISEED << 11;
+
+	if ((io->type == EFCT_HW_IO_TARGET_WRITE) &&
+	    hw->sli.if_type != SLI4_INTF_IF_TYPE_2 &&
+	    dif_info->dif_separate) {
+		dif_flags &= ~SLI4_SGE_TYPE_MASK;
+		dif_flags |= SLI4_SGE_TYPE_SKIP << 11;
+	}
+
+	dif_seed->dw2w1_flags = cpu_to_le16(dif_flags);
+	dif_seed->app_tag_cmp = cpu_to_le16(dif_info->app_tag_cmp);
+
+	dif_flags = 0;
+	dif_flags |= (dif_info->blk_size & DISEED_SGE_BS_MASK);
+	if (dif_info->auto_incr_ref_tag)
+		dif_flags |= DISEED_SGE_AI;
+	if (dif_info->check_app_tag)
+		dif_flags |= DISEED_SGE_ME;
+	if (dif_info->check_ref_tag)
+		dif_flags |= DISEED_SGE_RE;
+	if (dif_info->check_guard)
+		dif_flags |= DISEED_SGE_CE;
+	if (dif_info->repl_ref_tag)
+		dif_flags |= DISEED_SGE_NR;
+
+	switch (dif_info->dif_oper) {
+	case EFCT_HW_SGE_DIFOP_INNODIFOUTCRC:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_NODIF_OUT_CRC);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_NODIF_OUT_CRC);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCRCOUTNODIF:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CRC_OUT_NODIF);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CRC_OUT_NODIF);
+		break;
+	case EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_NODIF_OUT_CSUM);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_NODIF_OUT_CSUM);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CSUM_OUT_NODIF);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CSUM_OUT_NODIF);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCRCOUTCRC:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CRC_OUT_CRC);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CRC_OUT_CRC);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CSUM_OUT_CSUM);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CSUM_OUT_CSUM);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CRC_OUT_CSUM);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CRC_OUT_CSUM);
+		break;
+	case EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_CSUM_OUT_CRC);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_CSUM_OUT_CRC);
+		break;
+	case EFCT_HW_SGE_DIFOP_INRAWOUTRAW:
+		dif_flags |= DISEED_SGE_OP_RX_VALUE(IN_RAW_OUT_RAW);
+		dif_flags |= DISEED_SGE_OP_TX_VALUE(IN_RAW_OUT_RAW);
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported DIF operation %#x\n",
+			     dif_info->dif_oper);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	dif_seed->dw3w1_flags = cpu_to_le16(dif_flags);
+	/*
+	 * Set last, clear previous last
+	 */
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+	if (io->n_sge) {
+		sge_flags = le32_to_cpu(data[-1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	io->n_sge++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_io_overflow_sgl(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	struct sli4_lsp_sge *lsp;
+	u32 dw2_flags = 0;
+
+	/* fail if we're already pointing to the overflow SGL */
+	if (io->sgl == io->ovfl_sgl)
+		return EFCT_HW_RTN_ERROR;
+
+	/* fail if we don't have an overflow SGL registered */
+	if (!io->ovfl_sgl)
+		return EFCT_HW_RTN_ERROR;
+
+	/*
+	 * Overflow, we need to put a link SGE in the last location of the
+	 * current SGL, after copying the last SGE to the overflow SGL
+	 */
+
+	((struct sli4_sge *)io->ovfl_sgl->virt)[0] =
+			 ((struct sli4_sge *)io->sgl->virt)[io->n_sge - 1];
+
+	lsp = &((struct sli4_lsp_sge *)io->sgl->virt)[io->n_sge - 1];
+	memset(lsp, 0, sizeof(*lsp));
+
+	lsp->buffer_address_high =
+		cpu_to_le32(upper_32_bits(io->ovfl_sgl->phys));
+	lsp->buffer_address_low  =
+		cpu_to_le32(lower_32_bits(io->ovfl_sgl->phys));
+	dw2_flags = SLI4_SGE_TYPE_LSP << SLI4_SGE_TYPE_SHIFT;
+	dw2_flags &= ~SLI4_SGE_LAST;
+	lsp->dw2_flags = cpu_to_le32(dw2_flags);
+
+	io->ovfl_lsp = lsp;
+	io->ovfl_lsp->dw3_seglen =
+		cpu_to_le32(sizeof(struct sli4_sge) &
+			    SLI4_LSP_SGE_SEGLEN);
+
+	/* Update the current SGL pointer, and n_sgl */
+	io->sgl = io->ovfl_sgl;
+	io->sgl_count = io->ovfl_sgl_count;
+	io->n_sge = 1;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length)
+{
+	struct sli4_sge	*data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr || !length) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p addr=%lx length=%u\n",
+			    hw, io, addr, length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (length && (io->n_sge + 1) > io->sgl_count) {
+		if (efct_hw_io_overflow_sgl(hw, io) != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "SGL full (%d)\n", io->n_sge);
+			return EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	if (length > hw->sli.sge_supported_length) {
+		efc_log_err(hw->os,
+			     "length of SGE %d bigger than allowed %d\n",
+			    length, hw->sli.sge_supported_length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
+	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
+	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
+	data->buffer_length = cpu_to_le32(length);
+
+	/*
+	 * Always assume this is the last entry and mark as such.
+	 * If this is not the first entry unset the "last SGE"
+	 * indication for the previous entry
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	if (io->n_sge) {
+		sge_flags = le32_to_cpu(data[-1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	/* Set first_data_bde if not previously set */
+	if (io->first_data_sge == 0)
+		io->first_data_sge = io->n_sge;
+
+	io->sge_offset += length;
+	io->n_sge++;
+
+	/* Update the linked segment length (only executed after overflow has
+	 * begun)
+	 */
+	if (io->ovfl_lsp)
+		io->ovfl_lsp->dw3_seglen =
+			cpu_to_le32(io->n_sge * sizeof(struct sli4_sge) &
+				    SLI4_LSP_SGE_SEGLEN);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_add_dif_sge(struct efct_hw *hw,
+		       struct efct_hw_io *io, uintptr_t addr)
+{
+	struct sli4_dif_sge *data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p addr=%lx\n",
+			    hw, io, addr);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if ((io->n_sge + 1) > hw->config.n_sgl) {
+		if (efct_hw_io_overflow_sgl(hw, io) != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "SGL full (%d)\n", io->n_sge);
+			return EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DIF << SLI4_SGE_TYPE_SHIFT;
+
+	if ((io->type == EFCT_HW_IO_TARGET_WRITE) &&
+	    hw->sli.if_type != SLI4_INTF_IF_TYPE_2) {
+		sge_flags &= ~SLI4_SGE_TYPE_MASK;
+		sge_flags |= SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT;
+	}
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
+
+	/*
+	 * Always assume this is the last entry and mark as such.
+	 * If this is not the first entry unset the "last SGE"
+	 * indication for the previous entry
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+	if (io->n_sge) {
+		sge_flags = le32_to_cpu(data[-1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	io->n_sge++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+void
+efct_hw_io_abort_all(struct efct_hw *hw)
+{
+	struct efct_hw_io *io_to_abort	= NULL;
+	struct efct_hw_io *next_io = NULL;
+
+	list_for_each_entry_safe(io_to_abort, next_io,
+				 &hw->io_inuse, list_entry) {
+		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
+	}
+}
+
+enum efct_hw_rtn
+efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
+		 bool send_abts, void *cb, void *arg)
+{
+	enum sli4_abort_type atype = SLI_ABORT_MAX;
+	u32 id = 0, mask = 0;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	if (!io_to_abort) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n",
+			    hw, io_to_abort);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* take a reference on IO being aborted */
+	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
+		/* command no longer active */
+		efc_log_test(hw->os,
+			      "io not active xri=0x%x tag=0x%x\n",
+			     io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/* Must have a valid WQ reference */
+	if (!io_to_abort->wq) {
+		efc_log_test(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
+			      io_to_abort->indicator);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/*
+	 * Validation checks complete; now check to see if already being
+	 * aborted
+	 */
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	if (io_to_abort->abort_in_progress) {
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		efc_log_debug(hw->os,
+			       "io already being aborted xri=0x%x tag=0x%x\n",
+			      io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
+	}
+
+	/*
+	 * This IO is not already being aborted. Set flag so we won't try to
+	 * abort it again. After all, we only have one abort_done callback.
+	 */
+	io_to_abort->abort_in_progress = true;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/*
+	 * If we got here, the possibilities are:
+	 * - host owned xri
+	 *	- io_to_abort->wq_index != U32_MAX
+	 *		- submit ABORT_WQE to same WQ
+	 * - port owned xri:
+	 *	- rxri: io_to_abort->wq_index == U32_MAX
+	 *		- submit ABORT_WQE to any WQ
+	 *	- non-rxri
+	 *		- io_to_abort->index != U32_MAX
+	 *			- submit ABORT_WQE to same WQ
+	 *		- io_to_abort->index == U32_MAX
+	 *			- submit ABORT_WQE to any WQ
+	 */
+	io_to_abort->abort_done = cb;
+	io_to_abort->abort_arg  = arg;
+
+	atype = SLI_ABORT_XRI;
+	id = io_to_abort->indicator;
+
+	/* Allocate a request tag for the abort portion of this IO */
+	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
+	if (!wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+	io_to_abort->abort_reqtag = wqcb->instance_index;
+
+	/*
+	 * If the wqe is on the pending list, then set this wqe to be
+	 * aborted when the IO's wqe is removed from the list.
+	 */
+	if (io_to_abort->wq) {
+		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
+		if (io_to_abort->wqe.list_entry.next) {
+			io_to_abort->wqe.abort_wqe_submit_needed = true;
+			io_to_abort->wqe.send_abts = send_abts;
+			io_to_abort->wqe.id = id;
+			io_to_abort->wqe.abort_reqtag =
+						 io_to_abort->abort_reqtag;
+			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
+					       flags);
+			return 0;
+		}
+		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
+	}
+
+	if (sli_abort_wqe(&hw->sli, io_to_abort->wqe.wqebuf,
+			  hw->sli.wqe_size, atype, send_abts, id, mask,
+			  io_to_abort->abort_reqtag, SLI4_CQ_DEFAULT)) {
+		efc_log_err(hw->os, "ABORT WQE error\n");
+		io_to_abort->abort_reqtag = U32_MAX;
+		efct_hw_reqtag_free(hw, wqcb);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		if (!io_to_abort->wq)
+			io_to_abort->wq = efct_hw_queue_next_wq(hw,
+								io_to_abort);
+
+		/* ABORT_WQE does not actually utilize an XRI on the Port,
+		 * therefore, keep xbusy as-is to track the exchange's state,
+		 * not the ABORT_WQE's state
+		 */
+		rc = efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe);
+		if (rc > 0)
+			/* non-negative return is success */
+			rc = 0;
+			/*
+			 * can't abort an abort so skip adding to timed wqe
+			 * list
+			 */
+	}
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		spin_lock_irqsave(&hw->io_abort_lock, flags);
+		io_to_abort->abort_in_progress = false;
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+	}
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_reqtag_init(struct efct_hw *hw)
+{
+	if (!hw->wq_reqtag_pool) {
+		hw->wq_reqtag_pool = efct_pool_alloc(hw->os,
+					sizeof(struct hw_wq_callback),
+					65536);
+		if (!hw->wq_reqtag_pool) {
+			efc_log_err(hw->os,
+				     "efct_pool_alloc struct hw_wq_callback fail\n");
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+	efct_hw_reqtag_reset(hw);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+struct hw_wq_callback *
+efct_hw_reqtag_alloc(struct efct_hw *hw,
+		     void (*callback)(void *arg, u8 *cqe, int status),
+		     void *arg)
+{
+	struct hw_wq_callback *wqcb = NULL;
+
+	if (!callback)
+		return wqcb;
+
+	wqcb = efct_pool_get(hw->wq_reqtag_pool);
+	if (wqcb) {
+		wqcb->callback = callback;
+		wqcb->arg = arg;
+	}
+	return wqcb;
+}
+
+void
+efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb)
+{
+	if (!wqcb->callback)
+		efc_log_err(hw->os, "WQCB is already freed\n");
+
+	wqcb->callback = NULL;
+	wqcb->arg = NULL;
+	efct_pool_put(hw->wq_reqtag_pool, wqcb);
+}
+
+struct hw_wq_callback *
+efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index)
+{
+	struct hw_wq_callback *wqcb;
+
+	wqcb = efct_pool_get_instance(hw->wq_reqtag_pool, instance_index);
+	if (!wqcb)
+		efc_log_err(hw->os, "wqcb for instance %d is null\n",
+			     instance_index);
+
+	return wqcb;
+}
+
+void
+efct_hw_reqtag_reset(struct efct_hw *hw)
+{
+	struct hw_wq_callback *wqcb;
+	u32 i;
+
+	/* Remove all from freelist */
+	while (efct_pool_get(hw->wq_reqtag_pool))
+		;
+
+	/* Put them all back */
+	for (i = 0;
+	     ((wqcb = efct_pool_get_instance(hw->wq_reqtag_pool, i)) != NULL);
+	     i++) {
+		wqcb->instance_index = i;
+		wqcb->callback = NULL;
+		wqcb->arg = NULL;
+		efct_pool_put(hw->wq_reqtag_pool, wqcb);
+	}
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 2360b64fc2c3..9e4ac83a81d4 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -869,5 +869,51 @@ void efct_hw_rx_free(struct efct_hw *hw);
 extern enum efct_hw_rtn
 efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
 		void *arg);
+struct efct_hw_io *efct_hw_io_alloc(struct efct_hw *hw);
+int efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io);
+u8 efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io);
+extern enum efct_hw_rtn
+efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		struct efct_hw_io *io, u32 len,
+		union efct_hw_io_param_u *iparam,
+		struct efc_remote_node *rnode, void *cb, void *arg);
+extern enum efct_hw_rtn
+efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efc_dma *sgl,
+			u32 sgl_count);
+extern enum efct_hw_rtn
+efct_hw_io_init_sges(struct efct_hw *hw,
+		     struct efct_hw_io *io, enum efct_hw_io_type type);
+extern enum efct_hw_rtn
+efct_hw_io_add_seed_sge(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efct_hw_dif_info *dif_info);
+extern enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length);
+extern enum efct_hw_rtn
+efct_hw_io_add_dif_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		       uintptr_t addr);
+extern enum efct_hw_rtn
+efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
+		 bool send_abts, void *cb, void *arg);
+extern u32
+efct_hw_io_get_count(struct efct_hw *hw,
+		     enum efct_hw_io_count_type io_count_type);
+extern struct efct_hw_io
+*efct_hw_io_lookup(struct efct_hw *hw, u32 indicator);
+void efct_hw_io_abort_all(struct efct_hw *hw);
+void efct_hw_io_free_internal(struct kref *arg);
+
+/* HW WQ request tag API */
+enum efct_hw_rtn efct_hw_reqtag_init(struct efct_hw *hw);
+extern struct hw_wq_callback
+*efct_hw_reqtag_alloc(struct efct_hw *hw,
+			void (*callback)(void *arg, u8 *cqe,
+					 int status), void *arg);
+extern void
+efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb);
+extern struct hw_wq_callback
+*efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
+void efct_hw_reqtag_reset(struct efct_hw *hw);
 
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 20/32] elx: efct: Hardware queues processing
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (18 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 19/32] elx: efct: Hardware IO and SGL initialization James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:24   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
                   ` (12 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for EQ, CQ, WQ and RQ processing.
Routines for IO object pool allocation and deallocation.
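
A minimal sketch of how the queue processing routines chain together
(caller side only; "vector" and the time budget are placeholders rather
than values taken from the driver):

	/* typically invoked from the per-vector interrupt/poll path */
	efct_hw_process(hw, vector, max_isr_time_msec);

	/*
	 * efct_hw_process() looks up the hw_eq for the vector and calls
	 * efct_hw_eq_process(), which reads EQEs, maps each CQ id to an
	 * index via efct_hw_queue_hash_find(), runs efct_hw_cq_process()
	 * for that CQ, and re-arms the EQ before returning.
	 */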

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c        | 531 +++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h        |  36 +++
 drivers/scsi/elx/efct/efct_hw_queues.c | 192 ++++++++++++
 drivers/scsi/elx/efct/efct_io.c        | 203 +++++++++++++
 drivers/scsi/elx/efct/efct_io.h        | 196 ++++++++++++
 5 files changed, 1158 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index beca8534813d..2f30c7322a62 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -258,6 +258,17 @@ efct_hw_init_free_io(struct efct_hw_io *io)
 	io->tgt_wqe_timeout = 0;
 }
 
+static u8 efct_hw_iotype_is_originator(u16 io_type)
+{
+	switch (io_type) {
+	case EFCT_HW_FC_CT:
+	case EFCT_HW_ELS_REQ:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
 static void
 efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
 {
@@ -271,6 +282,127 @@ efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
 	io->ovfl_lsp = NULL;
 }
 
+static void
+efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+	u32	len = 0;
+	u32 ext = 0;
+
+	efct_hw_remove_io_timed_wqe(hw, io);
+
+	/* clear xbusy flag if WCQE[XB] is clear */
+	if (io->xbusy && (wcqe->flags & SLI4_WCQE_XB) == 0)
+		io->xbusy = false;
+
+	/* get extended CQE status */
+	switch (io->type) {
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_ACC_SID:
+		break;
+	case EFCT_HW_ELS_REQ:
+		sli_fc_els_did(&hw->sli, cqe, &ext);
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_ELS_RSP:
+	case EFCT_HW_ELS_RSP_SID:
+	case EFCT_HW_FC_CT_RSP:
+		break;
+	case EFCT_HW_FC_CT:
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_WRITE:
+		len = sli_fc_io_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_READ:
+		len = sli_fc_io_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		break;
+	case EFCT_HW_IO_DNRX_REQUEUE:
+		/* release the count for re-posting the buffer */
+		/* efct_hw_io_free(hw, io); */
+		break;
+	default:
+		efc_log_test(hw->os, "unhandled io type %#x for XRI 0x%x\n",
+			      io->type, io->indicator);
+		break;
+	}
+	if (status) {
+		ext = sli_fc_ext_status(&hw->sli, cqe);
+		/*
+		 * If we're not an originator IO, and XB is set, then issue
+		 * abort for the IO from within the HW
+		 */
+		if ((!efct_hw_iotype_is_originator(io->type)) &&
+		    wcqe->flags & SLI4_WCQE_XB) {
+			enum efct_hw_rtn rc;
+
+			efc_log_debug(hw->os, "aborting xri=%#x tag=%#x\n",
+				       io->indicator, io->reqtag);
+
+			/*
+			 * Because targets may send a response when the IO
+			 * completes using the same XRI, we must wait for the
+			 * XRI_ABORTED CQE to issue the IO callback
+			 */
+			rc = efct_hw_io_abort(hw, io, false, NULL, NULL);
+			if (rc == EFCT_HW_RTN_SUCCESS) {
+				/*
+				 * latch status to return after abort is
+				 * complete
+				 */
+				io->status_saved = true;
+				io->saved_status = status;
+				io->saved_ext = ext;
+				io->saved_len = len;
+				goto exit_efct_hw_wq_process_io;
+			} else if (rc == EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+				/*
+				 * Already being aborted by someone else (ABTS
+				 * perhaps). Just fall thru and return original
+				 * error.
+				 */
+				efc_log_debug(hw->os, "%s%#x tag=%#x\n",
+					       "abort in progress xri=",
+					      io->indicator, io->reqtag);
+
+			} else {
+				/* Failed to abort for some other reason, log
+				 * error
+				 */
+				efc_log_test(hw->os, "%s%#x tag=%#x rc=%d\n",
+					      "Failed to abort xri=",
+					     io->indicator, io->reqtag, rc);
+			}
+		}
+	}
+
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void *arg = io->arg;
+
+		io->done = NULL;
+
+		if (io->status_saved) {
+			/* use latched status if exists */
+			status = io->saved_status;
+			len = io->saved_len;
+			ext = io->saved_ext;
+			io->status_saved = false;
+		}
+
+		/* Restore default SGL */
+		efct_hw_io_restore_sgl(hw, io);
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+exit_efct_hw_wq_process_io:
+	return;
+}
+
 /* Initialize the pool of HW IO objects */
 static enum efct_hw_rtn
 efct_hw_setup_io(struct efct_hw *hw)
@@ -704,6 +836,25 @@ efct_hw_set_dif_seed(struct efct_hw *hw)
 	return rc;
 }
 
+static void
+efct_hw_queue_hash_add(struct efct_queue_hash *hash,
+		       u16 id, u16 index)
+{
+	u32	hash_index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the number of queues, we
+	 * never have to worry about an infinite loop.
+	 */
+	while (hash[hash_index].in_use)
+		hash_index = (hash_index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/* not used, claim the entry */
+	hash[hash_index].id = id;
+	hash[hash_index].in_use = true;
+	hash[hash_index].index = index;
+}
+
 /* enable sli port health check */
 static enum efct_hw_rtn
 efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query,
@@ -2630,6 +2781,73 @@ efct_hw_io_abort_all(struct efct_hw *hw)
 	}
 }
 
+static void
+efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	u32 ext = 0;
+	u32 len = 0;
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	/*
+	 * For IOs that were aborted internally, we may need to issue the
+	 * callback here depending on whether an XRI_ABORTED CQE is expected
+	 * or not. If the status is Local Reject/No XRI, then issue the
+	 * callback now.
+	 */
+	ext = sli_fc_ext_status(&hw->sli, cqe);
+	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
+	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI &&
+		io->done) {
+		efct_hw_done_t done = io->done;
+		void *arg = io->arg;
+
+		io->done = NULL;
+
+		/*
+		 * Use latched status as this is always saved for an internal
+		 * abort. Note: we won't have both a done and an abort_done
+		 * function, so don't worry about clobbering the len, status
+		 * and ext fields.
+		 */
+		status = io->saved_status;
+		len = io->saved_len;
+		ext = io->saved_ext;
+		io->status_saved = false;
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+	if (io->abort_done) {
+		efct_hw_done_t done = io->abort_done;
+		void *arg = io->abort_arg;
+
+		io->abort_done = NULL;
+
+		done(io, io->rnode, len, status, ext, arg);
+	}
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	/* clear abort bit to indicate abort is complete */
+	io->abort_in_progress = false;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/* Free the WQ callback */
+	if (io->abort_reqtag == U32_MAX) {
+		efc_log_err(hw->os, "HW IO already freed\n");
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag);
+	efct_hw_reqtag_free(hw, wqcb);
+
+	/*
+	 * Call efct_hw_io_free() because this releases the WQ reservation as
+	 * well as doing the refcount put. Don't duplicate the code here.
+	 */
+	(void)efct_hw_io_free(hw, io);
+}
+
 enum efct_hw_rtn
 efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
 		 bool send_abts, void *cb, void *arg)
@@ -2857,3 +3075,316 @@ efct_hw_reqtag_reset(struct efct_hw *hw)
 		efct_pool_put(hw->wq_reqtag_pool, wqcb);
 	}
 }
+
+int
+efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id)
+{
+	int	rc = -1;
+	int	index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the maximum number of queues,
+	 * we never have to worry about an infinite loop: we will always find
+	 * an unused entry.
+	 */
+	do {
+		if (hash[index].in_use &&
+		    hash[index].id == id)
+			rc = hash[index].index;
+		else
+			index = (index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+	} while (rc == -1 && hash[index].in_use);
+
+	return rc;
+}
+
+int
+efct_hw_process(struct efct_hw *hw, u32 vector,
+		u32 max_isr_time_msec)
+{
+	struct hw_eq *eq;
+	int rc = 0;
+
+	/*
+	 * The caller should disable interrupts if they wish to prevent us
+	 * from processing during a shutdown. The following states are defined:
+	 *   EFCT_HW_STATE_UNINITIALIZED - No queues allocated
+	 *   EFCT_HW_STATE_QUEUES_ALLOCATED - The state after a chip reset,
+	 *                                    queues are cleared.
+	 *   EFCT_HW_STATE_ACTIVE - Chip and queues are operational
+	 *   EFCT_HW_STATE_RESET_IN_PROGRESS - reset, we still want completions
+	 *   EFCT_HW_STATE_TEARDOWN_IN_PROGRESS - We still want mailbox
+	 *                                        completions.
+	 */
+	if (hw->state == EFCT_HW_STATE_UNINITIALIZED)
+		return 0;
+
+	/* Get pointer to struct hw_eq */
+	eq = hw->hw_eq[vector];
+	if (!eq)
+		return 0;
+
+	eq->use_count++;
+
+	rc = efct_hw_eq_process(hw, eq, max_isr_time_msec);
+
+	return rc;
+}
+
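+/*
+ * Process entries on an event queue: consume EQEs, look up the referenced
+ * CQ in the queue hash and process that CQ, bounded by max_isr_time_msec.
+ */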
+int
+efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
+		   u32 max_isr_time_msec)
+{
+	u8		eqe[sizeof(struct sli4_eqe)] = { 0 };
+	u32	tcheck_count;
+	time_t		tstart;
+	time_t		telapsed;
+	bool		done = false;
+
+	tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!done && !sli_eq_read(&hw->sli, eq->queue, eqe)) {
+		u16	cq_id = 0;
+		int		rc;
+
+		rc = sli_eq_parse(&hw->sli, eqe, &cq_id);
+		if (unlikely(rc)) {
+			if (rc > 0) {
+				u32 i;
+
+				/*
+				 * Received a sentinel EQE indicating the
+				 * EQ is full. Process all CQs
+				 */
+				for (i = 0; i < hw->cq_count; i++)
+					efct_hw_cq_process(hw, hw->hw_cq[i]);
+				continue;
+			} else {
+				return rc;
+			}
+		} else {
+			int index;
+
+			index  = efct_hw_queue_hash_find(hw->cq_hash, cq_id);
+
+			if (likely(index >= 0))
+				efct_hw_cq_process(hw, hw->hw_cq[index]);
+			else
+				efc_log_err(hw->os, "bad CQ_ID %#06x\n",
+					     cq_id);
+		}
+
+		if (eq->queue->n_posted > eq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, eq->queue, false);
+
+		if (tcheck_count && (--tcheck_count == 0)) {
+			tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+			telapsed = jiffies_to_msecs(jiffies) - tstart;
+			if (telapsed >= max_isr_time_msec)
+				done = true;
+		}
+	}
+	sli_queue_eq_arm(&hw->sli, eq->queue, true);
+
+	return 0;
+}
+
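+/*
+ * Process entries on a completion queue: parse each CQE and dispatch it by
+ * type (async event, MQ, WQ, WQ release, RQ or XRI-aborted completion).
+ */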
+void
+efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
+{
+	u8		cqe[sizeof(struct sli4_mcqe)];
+	u16	rid = U16_MAX;
+	enum sli4_qentry	ctype;		/* completion type */
+	int		status;
+	u32	n_processed = 0;
+	u32	tstart, telapsed;
+
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!sli_cq_read(&hw->sli, cq->queue, cqe)) {
+		status = sli_cq_parse(&hw->sli, cq->queue,
+				      cqe, &ctype, &rid);
+		/*
+		 * The sign of status is significant. If status is:
+		 *   == 0 : call completed correctly and the CQE indicated
+		 *          success
+		 *   >  0 : call completed correctly and the CQE indicated
+		 *          an error
+		 *   <  0 : call failed and no information is available about
+		 *          the CQE
+		 */
+		if (status < 0) {
+			if (status == -2)
+				/*
+				 * Notification that an entry was consumed,
+				 * but not completed
+				 */
+				continue;
+
+			break;
+		}
+
+		switch (ctype) {
+		case SLI_QENTRY_ASYNC:
+			sli_cqe_async(&hw->sli, cqe);
+			break;
+		case SLI_QENTRY_MQ:
+			/*
+			 * Process MQ entry. Note there is no way to determine
+			 * the MQ_ID from the completion entry.
+			 */
+			efct_hw_mq_process(hw, status, hw->mq);
+			break;
+		case SLI_QENTRY_WQ:
+			efct_hw_wq_process(hw, cq, cqe, status, rid);
+			break;
+		case SLI_QENTRY_WQ_RELEASE: {
+			u32 wq_id = rid;
+			int index;
+			struct hw_wq *wq = NULL;
+
+			index = efct_hw_queue_hash_find(hw->wq_hash, wq_id);
+
+			if (likely(index >= 0)) {
+				wq = hw->hw_wq[index];
+			} else {
+				efc_log_err(hw->os, "bad WQ_ID %#06x\n", wq_id);
+				break;
+			}
+			/* Submit any HW IOs that are on the WQ pending list */
+			hw_wq_submit_pending(wq, wq->wqec_set_count);
+
+			break;
+		}
+
+		case SLI_QENTRY_RQ:
+			efct_hw_rqpair_process_rq(hw, cq, cqe);
+			break;
+		case SLI_QENTRY_XABT: {
+			efct_hw_xabt_process(hw, cq, cqe, rid);
+			break;
+		}
+		default:
+			efc_log_test(hw->os,
+				      "unhandled ctype=%#x rid=%#x\n",
+				     ctype, rid);
+			break;
+		}
+
+		n_processed++;
+		if (n_processed == cq->queue->proc_limit)
+			break;
+
+		if (cq->queue->n_posted >= cq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, cq->queue, false);
+	}
+
+	sli_queue_arm(&hw->sli, cq->queue, true);
+
+	if (n_processed > cq->queue->max_num_processed)
+		cq->queue->max_num_processed = n_processed;
+	telapsed = jiffies_to_msecs(jiffies) - tstart;
+	if (telapsed > cq->queue->max_process_time)
+		cq->queue->max_process_time = telapsed;
+}
+
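+/*
+ * Process a WQ completion: use the request tag from the WCQE to look up the
+ * registered callback and invoke it with the CQE and status.
+ */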
+void
+efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
+		   u8 *cqe, int status, u16 rid)
+{
+	struct hw_wq_callback *wqcb;
+
+	if (rid == EFCT_HW_REQUE_XRI_REGTAG) {
+		if (status)
+			efc_log_err(hw->os, "reque xri failed, status = %d\n",
+				     status);
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, rid);
+	if (!wqcb) {
+		efc_log_err(hw->os, "invalid request tag: x%x\n", rid);
+		return;
+	}
+
+	if (!wqcb->callback) {
+		efc_log_err(hw->os, "wqcb callback is NULL\n");
+		return;
+	}
+
+	(*wqcb->callback)(wqcb->arg, cqe, status);
+}
+
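+/*
+ * Process an XRI-aborted CQE: clear the busy state of the HW IO, run any
+ * pending completion callback and, if the caller has already freed the IO,
+ * move it from the wait_free list to the free list.
+ */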
+void
+efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
+		     u8 *cqe, u16 rid)
+{
+	/* search IOs wait free list */
+	struct efct_hw_io *io = NULL;
+	unsigned long flags = 0;
+
+	io = efct_hw_io_lookup(hw, rid);
+	if (!io) {
+		/* IO lookup failure should never happen */
+		efc_log_err(hw->os,
+			     "Error: xabt io lookup failed rid=%#x\n", rid);
+		return;
+	}
+
+	if (!io->xbusy)
+		efc_log_debug(hw->os, "xabt io not busy rid=%#x\n", rid);
+	else
+		/* mark IO as no longer busy */
+		io->xbusy = false;
+
+	/*
+	 * For IOs that were aborted internally, we need to issue any pending
+	 * callback here.
+	 */
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void		*arg = io->arg;
+
+		/*
+		 * Use latched status as this is always saved for an internal
+		 * abort
+		 */
+		int status = io->saved_status;
+		u32 len = io->saved_len;
+		u32 ext = io->saved_ext;
+
+		io->done = NULL;
+		io->status_saved = false;
+
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	if (io->state == EFCT_HW_IO_STATE_INUSE ||
+	    io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+		/* if on wait_free list, caller has already freed IO;
+		 * remove from wait_free list and add to free list.
+		 * if on in-use list, already marked as no longer busy;
+		 * just leave there and wait for caller to free.
+		 */
+		if (io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+			io->state = EFCT_HW_IO_STATE_FREE;
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+static int
+efct_hw_flush(struct efct_hw *hw)
+{
+	u32	i = 0;
+
+	/* Process any remaining completions */
+	for (i = 0; i < hw->eq_count; i++)
+		efct_hw_process(hw, i, ~0);
+
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 9e4ac83a81d4..55679e40cc49 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -916,4 +916,40 @@ extern struct hw_wq_callback
 *efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
 void efct_hw_reqtag_reset(struct efct_hw *hw);
 
+/* RQ completion handlers for RQ pair mode */
+extern int
+efct_hw_rqpair_process_rq(struct efct_hw *hw,
+			  struct hw_cq *cq, u8 *cqe);
+extern enum efct_hw_rtn
+efct_hw_rqpair_sequence_free(struct efct_hw *hw,
+			     struct efc_hw_sequence *seq);
+static inline void
+efct_hw_sequence_copy(struct efc_hw_sequence *dst,
+		      struct efc_hw_sequence *src)
+{
+	/* Copy src to dst */
+	*dst = *src;
+}
+
+static inline enum efct_hw_rtn
+efct_hw_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	/* Only RQ pair mode is supported */
+	return efct_hw_rqpair_sequence_free(hw, seq);
+}
+extern int
+efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
+		   u32 max_isr_time_msec);
+void efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq);
+extern void
+efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
+		   u8 *cqe, int status, u16 rid);
+extern void
+efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
+		     u8 *cqe, u16 rid);
+extern int
+efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
+extern int
+efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
index 8bbeef8ad22d..ac266fe9db19 100644
--- a/drivers/scsi/elx/efct/efct_hw_queues.c
+++ b/drivers/scsi/elx/efct/efct_hw_queues.c
@@ -1454,3 +1454,195 @@ efct_hw_qtop_free(struct efct_hw_qtop *qtop)
 		kfree(qtop);
 	}
 }
+
+static inline int
+efct_hw_rqpair_find(struct efct_hw *hw, u16 rq_id)
+{
+	return efct_hw_queue_hash_find(hw->rq_hash, rq_id);
+}
+
+static struct efc_hw_sequence *
+efct_hw_rqpair_get(struct efct_hw *hw, u16 rqindex, u16 bufindex)
+{
+	struct sli4_queue *rq_hdr = &hw->rq[rqindex];
+	struct efc_hw_sequence *seq = NULL;
+	struct hw_rq *rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	unsigned long flags = 0;
+
+	if (bufindex >= rq_hdr->length) {
+		efc_log_err(hw->os,
+				"RQidx %d bufidx %d exceed ring len %d for id %d\n",
+				rqindex, bufindex, rq_hdr->length, rq_hdr->id);
+		return NULL;
+	}
+
+	/* rq_hdr lock also covers rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	seq = rq->rq_tracker[bufindex];
+	rq->rq_tracker[bufindex] = NULL;
+
+	if (!seq) {
+		efc_log_err(hw->os,
+			     "RQbuf NULL, rqidx %d, bufidx %d, cur q idx = %d\n",
+			     rqindex, bufindex, rq_hdr->index);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return seq;
+}
+
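+/*
+ * Handle an RQ completion in RQ pair mode: retrieve the posted header and
+ * payload buffers, fill in the sequence lengths and FCFI, and hand the frame
+ * to the unsolicited frame handler (efct_unsolicited_cb).
+ */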
+int
+efct_hw_rqpair_process_rq(struct efct_hw *hw, struct hw_cq *cq,
+			  u8 *cqe)
+{
+	u16 rq_id;
+	u32 index;
+	int rqindex;
+	int	 rq_status;
+	u32 h_len;
+	u32 p_len;
+	struct efc_hw_sequence *seq;
+	struct hw_rq *rq;
+
+	rq_status = sli_fc_rqe_rqid_and_index(&hw->sli, cqe,
+					      &rq_id, &index);
+	if (rq_status != 0) {
+		switch (rq_status) {
+		case SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED:
+		case SLI4_FC_ASYNC_RQ_DMA_FAILURE:
+			/* just get RQ buffer then return to chip */
+			rqindex = efct_hw_rqpair_find(hw, rq_id);
+			if (rqindex < 0) {
+				efc_log_test(hw->os,
+					      "status=%#x: lookup fail id=%#x\n",
+					     rq_status, rq_id);
+				break;
+			}
+
+			/* get RQ buffer */
+			seq = efct_hw_rqpair_get(hw, rqindex, index);
+
+			/* return to chip */
+			if (efct_hw_rqpair_sequence_free(hw, seq)) {
+				efc_log_test(hw->os,
+					      "status=%#x,fail rtrn buf to RQ\n",
+					     rq_status);
+				break;
+			}
+			break;
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED:
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC:
+			/*
+			 * Since the RQ buffers were not consumed, they cannot
+			 * be returned to the chip.
+			 */
+			efc_log_debug(hw->os, "Warning: RCQE status=%#x,\n",
+				       rq_status);
+			/* fall through */
+		default:
+			break;
+		}
+		return -1;
+	}
+
+	rqindex = efct_hw_rqpair_find(hw, rq_id);
+	if (rqindex < 0) {
+		efc_log_test(hw->os, "Error: rq_id lookup failed for id=%#x\n",
+			      rq_id);
+		return -1;
+	}
+
+	rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	rq->use_count++;
+
+	seq = efct_hw_rqpair_get(hw, rqindex, index);
+	if (WARN_ON(!seq))
+		return -1;
+
+	seq->hw = hw;
+	seq->auto_xrdy = 0;
+	seq->out_of_xris = 0;
+	seq->hio = NULL;
+
+	sli_fc_rqe_length(&hw->sli, cqe, &h_len, &p_len);
+	seq->header->dma.len = h_len;
+	seq->payload->dma.len = p_len;
+	seq->fcfi = sli_fc_rqe_fcfi(&hw->sli, cqe);
+	seq->hw_priv = cq->eq;
+
+	efct_unsolicited_cb(hw->os, seq);
+
+	return 0;
+}
+
+static int
+efct_hw_rqpair_put(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	struct sli4_queue *rq_hdr = &hw->rq[seq->header->rqindex];
+	struct sli4_queue *rq_payload = &hw->rq[seq->payload->rqindex];
+	u32 hw_rq_index = hw->hw_rq_lookup[seq->header->rqindex];
+	struct hw_rq *rq = hw->hw_rq[hw_rq_index];
+	u32     phys_hdr[2];
+	u32     phys_payload[2];
+	int      qindex_hdr;
+	int      qindex_payload;
+	unsigned long flags = 0;
+
+	/* Update the RQ verification lookup tables */
+	phys_hdr[0] = upper_32_bits(seq->header->dma.phys);
+	phys_hdr[1] = lower_32_bits(seq->header->dma.phys);
+	phys_payload[0] = upper_32_bits(seq->payload->dma.phys);
+	phys_payload[1] = lower_32_bits(seq->payload->dma.phys);
+
+	/* rq_hdr lock also covers payload / header->rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	/*
+	 * Note: The header must be posted last for buffer pair mode because
+	 *       posting on the header queue posts the payload queue as well.
+	 *       We do not ring the payload queue independently in RQ pair mode.
+	 */
+	qindex_payload = sli_rq_write(&hw->sli, rq_payload,
+				      (void *)phys_payload);
+	qindex_hdr = sli_rq_write(&hw->sli, rq_hdr, (void *)phys_hdr);
+	if (qindex_hdr < 0 ||
+	    qindex_payload < 0) {
+		efc_log_err(hw->os, "RQ_ID=%#x write failed\n", rq_hdr->id);
+		spin_unlock_irqrestore(&rq_hdr->lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* ensure the indexes are the same */
+	WARN_ON(qindex_hdr != qindex_payload);
+
+	/* Update the lookup table */
+	if (!rq->rq_tracker[qindex_hdr]) {
+		rq->rq_tracker[qindex_hdr] = seq;
+	} else {
+		efc_log_test(hw->os,
+			      "expected rq_tracker[%d][%d] buffer to be NULL\n",
+			     hw_rq_index, qindex_hdr);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_rqpair_sequence_free(struct efct_hw *hw,
+			     struct efc_hw_sequence *seq)
+{
+	enum efct_hw_rtn   rc = EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * Post the data buffer first. Because in RQ pair mode, ringing the
+	 * doorbell of the header ring will post the data buffer as well.
+	 */
+	if (efct_hw_rqpair_put(hw, seq)) {
+		efc_log_err(hw->os, "error writing buffers\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_io.c b/drivers/scsi/elx/efct/efct_io.c
new file mode 100644
index 000000000000..a31c18824ec7
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_utils.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+
+struct efct_io_pool {
+	struct efct *efct;
+	spinlock_t lock;	/* IO pool lock */
+	u32 io_num_ios;		/* Total IOs allocated */
+	struct efct_pool *pool;
+};
+
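+/*
+ * Create a pool of efct_io objects; each IO gets a DMA-able SCSI response
+ * buffer and an SGL sized for num_sgl entries.
+ */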
+struct efct_io_pool *
+efct_io_pool_create(struct efct *efct, u32 num_io, u32 num_sgl)
+{
+	u32 i = 0;
+	struct efct_io_pool *io_pool;
+
+	/* Allocate the IO pool */
+	io_pool = kzalloc(sizeof(*io_pool), GFP_KERNEL);
+	if (!io_pool)
+		return NULL;
+
+	io_pool->efct = efct;
+	io_pool->io_num_ios = num_io;
+
+	/* initialize IO pool lock */
+	spin_lock_init(&io_pool->lock);
+
+	io_pool->pool = efct_pool_alloc(efct, sizeof(struct efct_io),
+					io_pool->io_num_ios);
+
+	for (i = 0; i < io_pool->io_num_ios; i++) {
+		struct efct_io *io = efct_pool_get_instance(io_pool->pool, i);
+
+		io->tag = i;
+		io->instance_index = i;
+		io->efct = efct;
+
+		/* Allocate a response buffer */
+		io->rspbuf.size = SCSI_RSP_BUF_LENGTH;
+		io->rspbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						     io->rspbuf.size,
+						     &io->rspbuf.phys, GFP_DMA);
+		if (!io->rspbuf.virt) {
+			efc_log_err(efct, "dma_alloc cmdbuf failed\n");
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		/* Allocate SGL */
+		io->sgl = kzalloc(sizeof(*io->sgl) * num_sgl, GFP_ATOMIC);
+		if (!io->sgl) {
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		io->sgl_allocated = num_sgl;
+		io->sgl_count = 0;
+	}
+
+	return io_pool;
+}
+
+int
+efct_io_pool_free(struct efct_io_pool *io_pool)
+{
+	struct efct *efct;
+	u32 i;
+	struct efct_io *io;
+
+	if (io_pool) {
+		efct = io_pool->efct;
+
+		for (i = 0; i < io_pool->io_num_ios; i++) {
+			io = efct_pool_get_instance(io_pool->pool, i);
+			if (!io)
+				continue;
+
+			kfree(io->sgl);
+			dma_free_coherent(&efct->pcidev->dev,
+					  io->cmdbuf.size, io->cmdbuf.virt,
+					  io->cmdbuf.phys);
+			memset(&io->cmdbuf, 0, sizeof(struct efc_dma));
+			dma_free_coherent(&efct->pcidev->dev,
+					  io->rspbuf.size, io->rspbuf.virt,
+					  io->rspbuf.phys);
+			memset(&io->rspbuf, 0, sizeof(struct efc_dma));
+		}
+
+		if (io_pool->pool)
+			efct_pool_free(io_pool->pool);
+
+		kfree(io_pool);
+		efct->xport->io_pool = NULL;
+	}
+
+	return 0;
+}
+
+u32 efct_io_pool_allocated(struct efct_io_pool *io_pool)
+{
+	return io_pool->io_num_ios;
+}
+
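+/* Allocate an IO object from the pool and reset its per-command fields */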
+struct efct_io *
+efct_io_pool_io_alloc(struct efct_io_pool *io_pool)
+{
+	struct efct_io *io = NULL;
+	struct efct *efct;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+	io = efct_pool_get(io_pool->pool);
+	if (io) {
+		spin_unlock_irqrestore(&io_pool->lock, flags);
+
+		io->io_type = EFCT_IO_TYPE_MAX;
+		io->hio_type = EFCT_HW_IO_MAX;
+		io->hio = NULL;
+		io->transferred = 0;
+		io->efct = efct;
+		io->timeout = 0;
+		io->sgl_count = 0;
+		io->tgt_task_tag = 0;
+		io->init_task_tag = 0;
+		io->hw_tag = 0;
+		io->display_name = "pending";
+		io->seq_init = 0;
+		io->els_req_free = false;
+		io->io_free = 0;
+		io->release = NULL;
+		atomic_add_return(1, &efct->xport->io_active_count);
+		atomic_add_return(1, &efct->xport->io_total_alloc);
+	} else {
+		spin_unlock_irqrestore(&io_pool->lock, flags);
+	}
+	return io;
+}
+
+/* Free an object used to track an IO */
+void
+efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io)
+{
+	struct efct *efct;
+	struct efct_hw_io *hio = NULL;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+	hio = io->hio;
+	io->hio = NULL;
+	io->io_free = 1;
+	efct_pool_put_head(io_pool->pool, io);
+	spin_unlock_irqrestore(&io_pool->lock, flags);
+
+	if (hio)
+		efct_hw_io_free(&efct->hw, hio);
+
+	atomic_sub_return(1, &efct->xport->io_active_count);
+	atomic_add_return(1, &efct->xport->io_total_free);
+}
+
+/* Find an I/O given its node and ox_id */
+struct efct_io *
+efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
+		    u16 ox_id, u16 rx_id)
+{
+	struct efct_io	*io = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry(io, &node->active_ios, list_entry) {
+		if ((io->cmd_tgt && io->init_task_tag == ox_id) &&
+		    (rx_id == 0xffff || io->tgt_task_tag == rx_id)) {
+			if (!kref_get_unless_zero(&io->ref))
+				io = NULL;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return io;
+}
+
+struct efct_io *
+efct_io_get_instance(struct efct *efct, u32 index)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io_pool *io_pool = xport->io_pool;
+
+	return efct_pool_get_instance(io_pool->pool, index);
+}
diff --git a/drivers/scsi/elx/efct/efct_io.h b/drivers/scsi/elx/efct/efct_io.h
new file mode 100644
index 000000000000..06784a8afcb1
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.h
@@ -0,0 +1,196 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_IO_H__)
+#define __EFCT_IO_H__
+
+#include "efct_lio.h"
+
+#define io_error_log(io, fmt, ...)  \
+	do { \
+		if (EFCT_LOG_ENABLE_IO_ERRORS(io->efct)) \
+			efc_log_warn(io->efct, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define SCSI_CMD_BUF_LENGTH	48
+#define SCSI_RSP_BUF_LENGTH	(FCP_RESP_WITH_EXT + SCSI_SENSE_BUFFERSIZE)
+#define EFCT_NUM_SCSI_IOS	8192
+
+enum efct_io_type {
+	EFCT_IO_TYPE_IO = 0,
+	EFCT_IO_TYPE_ELS,
+	EFCT_IO_TYPE_CT,
+	EFCT_IO_TYPE_CT_RESP,
+	EFCT_IO_TYPE_BLS_RESP,
+	EFCT_IO_TYPE_ABORT,
+
+	EFCT_IO_TYPE_MAX,
+};
+
+enum efct_els_state {
+	EFCT_ELS_REQUEST = 0,
+	EFCT_ELS_REQUEST_DELAYED,
+	EFCT_ELS_REQUEST_DELAY_ABORT,
+	EFCT_ELS_REQ_ABORT,
+	EFCT_ELS_REQ_ABORTED,
+	EFCT_ELS_ABORT_IO_COMPL,
+};
+
+struct efct_io {
+	struct list_head	list_entry;
+	struct list_head	io_pending_link;
+	/* reference counter and callback function */
+	struct kref		ref;
+	void (*release)(struct kref *arg);
+	/* pointer back to efct */
+	struct efct		*efct;
+	/* unique instance index value */
+	u32			instance_index;
+	/* display name */
+	const char		*display_name;
+	/* pointer to node */
+	struct efc_node		*node;
+	/* initiator task tag (OX_ID) for back-end and SCSI logging */
+	u32			init_task_tag;
+	/* target task tag (RX_ID) - for back-end and SCSI logging */
+	u32			tgt_task_tag;
+	/* HW layer unique IO id - for back-end and SCSI logging */
+	u32			hw_tag;
+	/* unique IO identifier */
+	u32			tag;
+	/* SGL */
+	struct efct_scsi_sgl	*sgl;
+	/* Number of allocated SGEs */
+	u32			sgl_allocated;
+	/* Number of SGEs in this SGL */
+	u32			sgl_count;
+	/* backend target private IO data */
+	struct efct_scsi_tgt_io tgt_io;
+	/* expected data transfer length, based on FC header */
+	u32			exp_xfer_len;
+
+	/* Declarations private to HW/SLI */
+	void			*hw_priv;
+
+	/* indicates what this struct efct_io structure is used for */
+	enum efct_io_type	io_type;
+	struct efct_hw_io	*hio;
+	size_t			transferred;
+
+	/* set if auto_trsp was set */
+	bool			auto_resp;
+	/* set if low latency request */
+	bool			low_latency;
+	/* selected WQ steering request */
+	u8			wq_steering;
+	/* selected WQ class if steering is class */
+	u8			wq_class;
+	/* transfer size for current request */
+	u64			xfer_req;
+	/* target callback function */
+	efct_scsi_io_cb_t	scsi_tgt_cb;
+	/* target callback function argument */
+	void			*scsi_tgt_cb_arg;
+	/* abort callback function */
+	efct_scsi_io_cb_t	abort_cb;
+	/* abort callback function argument */
+	void			*abort_cb_arg;
+	/* BLS callback function */
+	efct_scsi_io_cb_t	bls_cb;
+	/* BLS callback function argument */
+	void			*bls_cb_arg;
+	/* TMF command being processed */
+	enum efct_scsi_tmf_cmd	tmf_cmd;
+	/* rx_id from the ABTS that initiated the command abort */
+	u16			abort_rx_id;
+
+	/* True if this is a Target command */
+	bool			cmd_tgt;
+	/* when aborting, indicates ABTS is to be sent */
+	bool			send_abts;
+	/* True if this is an Initiator command */
+	bool			cmd_ini;
+	/* True if local node has sequence initiative */
+	bool			seq_init;
+	/* iparams for hw io send call */
+	union efct_hw_io_param_u iparam;
+	/* HW formatted DIF parameters */
+	struct efct_hw_dif_info hw_dif;
+	/* DIF info saved for DIF error recovery */
+	struct efct_scsi_dif_info scsi_dif_info;
+	/* HW IO type */
+	enum efct_hw_io_type	hio_type;
+	/* wire length */
+	u64			wire_len;
+	/* saved HW callback */
+	void			*hw_cb;
+	/* Overflow SGL */
+	struct efc_dma		ovfl_sgl;
+
+	/* for ELS requests/responses */
+	/* True if ELS is pending */
+	bool			els_pend;
+	/* True if ELS is active */
+	bool			els_active;
+	/* ELS request payload buffer */
+	struct efc_dma		els_req;
+	/* ELS response payload buffer */
+	struct efc_dma		els_rsp;
+	bool			els_req_free;
+	/* Retries remaining */
+	u32			els_retries_remaining;
+	void (*els_callback)(struct efc_node *node,
+			     struct efc_node_cb *cbdata, void *cbarg);
+	void			*els_callback_arg;
+	/* timeout */
+	u32			els_timeout_sec;
+
+	/* delay timer */
+	struct timer_list	delay_timer;
+
+	/* for abort handling */
+	/* pointer to IO to abort */
+	struct efct_io		*io_to_abort;
+
+	enum efct_els_state	state;
+	/* Protects els cmds */
+	spinlock_t		els_lock;
+
+	/* SCSI Command buffer, used for CDB (initiator) */
+	struct efc_dma		cmdbuf;
+	/* SCSI Response buffer (i+t) */
+	struct efc_dma		rspbuf;
+	/* Timeout value in seconds for this IO */
+	u32			timeout;
+	/* CS_CTL priority for this IO */
+	u8			cs_ctl;
+	/* Is io object in freelist? */
+	u8			io_free;
+	u32			app_id;
+};
+
+struct efct_io_cb_arg {
+	int status;		/* completion status */
+	int ext_status;		/* extended completion status */
+	void *app;		/* application argument */
+};
+
+struct efct_io_pool *
+efct_io_pool_create(struct efct *efct, u32 num_io, u32 num_sgl);
+extern int
+efct_io_pool_free(struct efct_io_pool *io_pool);
+extern u32
+efct_io_pool_allocated(struct efct_io_pool *io_pool);
+
+extern struct efct_io *
+efct_io_pool_io_alloc(struct efct_io_pool *io_pool);
+extern void
+efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io);
+extern struct efct_io *
+efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
+		    u16 ox_id, u16 rx_id);
+#endif /* __EFCT_IO_H__ */
-- 
2.13.7



* [PATCH v2 21/32] elx: efct: Unsolicited FC frame processing routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (19 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 20/32] elx: efct: Hardware queues processing James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:26   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 22/32] elx: efct: Extended Link Service IO handling James Smart
                   ` (11 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to handle unsolicited FC frames.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c    |   2 +
 drivers/scsi/elx/efct/efct_unsol.c | 835 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_unsol.h |  49 +++
 3 files changed, 886 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 2f30c7322a62..43f1ff526694 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -6,6 +6,8 @@
 
 #include "efct_driver.h"
 #include "efct_hw.h"
+#include "efct_hw_queues.h"
+#include "efct_unsol.h"
 
 #define EFCT_HW_MQ_DEPTH		128
 #define EFCT_HW_WQ_TIMER_PERIOD_MS	500
diff --git a/drivers/scsi/elx/efct/efct_unsol.c b/drivers/scsi/elx/efct/efct_unsol.c
new file mode 100644
index 000000000000..f2bee349b77f
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.c
@@ -0,0 +1,835 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+#include "efct_unsol.h"
+
+#define frame_printf(efct, hdr, fmt, ...) \
+	do { \
+		char s_id_text[16]; \
+		efc_node_fcid_display(ntoh24((hdr)->fh_s_id), \
+			s_id_text, sizeof(s_id_text)); \
+		efc_log_debug(efct, "[%06x.%s] %02x/%04x/%04x: " fmt, \
+			ntoh24((hdr)->fh_d_id), s_id_text, \
+			(hdr)->fh_r_ctl, be16_to_cpu((hdr)->fh_ox_id), \
+			be16_to_cpu((hdr)->fh_rx_id), ##__VA_ARGS__); \
+	} while (0)
+
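+/*
+ * Entry point for an unsolicited frame: queue it on the transport FCFI
+ * pending list if frames are being held, the domain is not yet registered,
+ * or other frames are already pending; otherwise dispatch it directly to
+ * the libefc discovery engine.
+ */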
+static int
+efct_unsol_process(struct efct *efct, struct efc_hw_sequence *seq)
+{
+	struct efct_xport_fcfi *xport_fcfi = NULL;
+	struct efc_domain *domain;
+	struct efct_hw *hw = &efct->hw;
+	unsigned long flags = 0;
+
+	xport_fcfi = &efct->xport->fcfi;
+
+	/* If the transport FCFI entry is NULL, then drop the frame */
+	if (!xport_fcfi) {
+		efc_log_test(efct,
+			      "FCFI %d is not valid, dropping frame\n",
+			seq->fcfi);
+
+		efct_hw_sequence_free(&efct->hw, seq);
+		return 0;
+	}
+
+	domain = hw->domain;
+
+	/*
+	 * If we are holding frames or the domain is not yet registered or
+	 * there's already frames on the pending list,
+	 * then add the new frame to pending list
+	 */
+	if (!domain ||
+	    xport_fcfi->hold_frames ||
+	    !list_empty(&xport_fcfi->pend_frames)) {
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &xport_fcfi->pend_frames);
+		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
+
+		if (domain) {
+			/* immediately process pending frames */
+			efct_domain_process_pending(domain);
+		}
+	} else {
+		/*
+		 * We are not holding frames and pending list is empty,
+		 * just process frame. A non-zero return means the frame
+		 * was not handled - so cleanup
+		 */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+	}
+	return 0;
+}
+
+int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = arg;
+	int rc;
+
+	rc = efct_unsol_process(efct, seq);
+	if (rc)
+		efct_hw_sequence_free(&efct->hw, seq);
+
+	return 0;
+}
+
+int
+efct_process_node_pending(struct efc_node *node)
+{
+	struct efct *efct = node->efc->base;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (node->hold_frames)
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+			if (!list_empty(&node->pend_frames)) {
+				seq = list_first_entry(&node->pend_frames,
+						       struct efc_hw_sequence,
+						       list_entry);
+				list_del(&seq->list_entry);
+			}
+			if (!seq) {
+				pend_frames_processed =
+					node->pend_frames_processed;
+				node->pend_frames_processed = 0;
+				spin_unlock_irqrestore(&node->pend_frames_lock,
+						       flags);
+				break;
+			}
+			node->pend_frames_processed++;
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		if (efc_node_dispatch_frame(node, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efct, "%u node frames held and processed\n",
+			       pend_frames_processed);
+
+	return 0;
+}
+
+static bool efct_domain_frames_held(void *arg)
+{
+	struct efc_domain *domain = (struct efc_domain *)arg;
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	return xport_fcfi->hold_frames;
+}
+
+int
+efct_domain_process_pending(struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	xport_fcfi = &efct->xport->fcfi;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (efct_domain_frames_held(domain))
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+			if (!list_empty(&xport_fcfi->pend_frames)) {
+				seq = list_first_entry(&xport_fcfi->pend_frames,
+						       struct efc_hw_sequence,
+						       list_entry);
+				list_del(&seq->list_entry);
+			}
+			if (!seq) {
+				pend_frames_processed =
+					xport_fcfi->pend_frames_processed;
+				xport_fcfi->pend_frames_processed = 0;
+				spin_unlock_irqrestore(&
+						xport_fcfi->pend_frames_lock,
+						flags);
+				break;
+			}
+			xport_fcfi->pend_frames_processed++;
+		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+
+		seq = NULL;
+	}
+	if (pend_frames_processed != 0)
+		efc_log_debug(efct, "%u domain frames held and processed\n",
+			       pend_frames_processed);
+	return 0;
+}
+
+static struct efc_hw_sequence *
+efct_frame_next(struct list_head *pend_list, spinlock_t *list_lock)
+{
+	struct efc_hw_sequence *frame = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(list_lock, flags);
+
+	if (!list_empty(pend_list)) {
+		frame = list_first_entry(pend_list,
+					 struct efc_hw_sequence, list_entry);
+		list_del(&frame->list_entry);
+	}
+
+	spin_unlock_irqrestore(list_lock, flags);
+	return frame;
+}
+
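+/*
+ * Discard all frames on a pending-frames list, returning their RQ buffers
+ * to the chip.
+ */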
+static int
+efct_purge_pending(struct efct *efct, struct list_head *pend_list,
+		   spinlock_t *list_lock)
+{
+	struct efc_hw_sequence *frame;
+
+	for (;;) {
+		frame = efct_frame_next(pend_list, list_lock);
+		if (!frame)
+			break;
+
+		frame_printf(efct,
+			     (struct fc_frame_header *)frame->header->dma.virt,
+			     "Discarding held frame\n");
+		efct_hw_sequence_free(&efct->hw, frame);
+	}
+
+	return 0;
+}
+
+int
+efct_node_purge_pending(struct efc *efc, struct efc_node *node)
+{
+	struct efct *efct = efc->base;
+
+	return efct_purge_pending(efct, &node->pend_frames,
+				&node->pend_frames_lock);
+}
+
+int
+efct_domain_purge_pending(struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	return efct_purge_pending(efct,
+				 &xport_fcfi->pend_frames,
+				 &xport_fcfi->pend_frames_lock);
+}
+
+void
+efct_domain_hold_frames(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	if (!xport_fcfi->hold_frames) {
+		efc_log_debug(efct, "hold frames set for FCFI %d\n",
+			       domain->fcf_indicator);
+		xport_fcfi->hold_frames = true;
+	}
+}
+
+void
+efct_domain_accept_frames(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	if (xport_fcfi->hold_frames) {
+		efc_log_debug(efct, "hold frames cleared for FCFI %d\n",
+			       domain->fcf_indicator);
+	}
+	xport_fcfi->hold_frames = false;
+	efct_domain_process_pending(domain);
+}
+
+static int
+efct_fc_tmf_rejected_cb(struct efct_io *io,
+			enum efct_scsi_io_status scsi_status,
+		       u32 flags, void *arg)
+{
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+static void
+efct_dispatch_unsolicited_tmf(struct efct_io *io,
+			      u8 task_management_flags,
+			      struct efc_node *node, u32 lun)
+{
+	u32 i;
+	struct {
+		u32 mask;
+		enum efct_scsi_tmf_cmd cmd;
+	} tmflist[] = {
+	{FCP_TMF_ABT_TASK_SET, EFCT_SCSI_TMF_ABORT_TASK_SET},
+	{FCP_TMF_CLR_TASK_SET, EFCT_SCSI_TMF_CLEAR_TASK_SET},
+	{FCP_TMF_LUN_RESET, EFCT_SCSI_TMF_LOGICAL_UNIT_RESET},
+	{FCP_TMF_TGT_RESET, EFCT_SCSI_TMF_TARGET_RESET},
+	{FCP_TMF_CLR_ACA, EFCT_SCSI_TMF_CLEAR_ACA} };
+
+	io->exp_xfer_len = 0;
+
+	for (i = 0; i < ARRAY_SIZE(tmflist); i++) {
+		if (tmflist[i].mask & task_management_flags) {
+			io->tmf_cmd = tmflist[i].cmd;
+			efct_scsi_recv_tmf(io, lun, tmflist[i].cmd, NULL, 0);
+			break;
+		}
+	}
+	if (i == ARRAY_SIZE(tmflist)) {
+		/* Not handled */
+		node_printf(node, "TMF x%x rejected\n", task_management_flags);
+		efct_scsi_send_tmf_resp(io, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+					NULL, efct_fc_tmf_rejected_cb, NULL);
+	}
+}
+
+static int
+efct_validate_fcp_cmd(struct efct *efct, struct efc_hw_sequence *seq)
+{
+	/*
+	 * If we received less than FCP_CMND_IU bytes, assume that the frame is
+	 * corrupted in some way and drop it.
+	 * This was seen when jamming the FCTL
+	 * fill bytes field.
+	 */
+	if (seq->payload->dma.len < sizeof(struct fcp_cmnd)) {
+		struct fc_frame_header	*fchdr = seq->header->dma.virt;
+
+		efc_log_debug(efct,
+			"drop ox_id %04x with payload (%zd) less than (%zd)\n",
+				    be16_to_cpu(fchdr->fh_ox_id),
+				    seq->payload->dma.len,
+				    sizeof(struct fcp_cmnd));
+		return -1;
+	}
+	return 0;
+}
+
+static void
+efct_populate_io_fcp_cmd(struct efct_io *io, struct fcp_cmnd *cmnd,
+			 struct fc_frame_header *fchdr, bool sit)
+{
+	io->init_task_tag = be16_to_cpu(fchdr->fh_ox_id);
+	/* note, tgt_task_tag, hw_tag  set when HW io is allocated */
+	io->exp_xfer_len = be32_to_cpu(cmnd->fc_dl);
+	io->transferred = 0;
+
+	/* The upper 7 bits of CS_CTL is the frame priority through the SAN.
+	 * Our assertion here is that the priority given to a frame containing
+	 * the FCP cmd should be the priority given to ALL frames contained
+	 * in that IO. Thus we need to save the incoming CS_CTL here.
+	 */
+	if (ntoh24(fchdr->fh_f_ctl) & FC_FC_RES_B17)
+		io->cs_ctl = fchdr->fh_cs_ctl;
+	else
+		io->cs_ctl = 0;
+
+	io->seq_init = sit;
+}
+
+static u32
+efct_get_flags_fcp_cmd(struct fcp_cmnd *cmnd)
+{
+	u32 flags = 0;
+
+	switch (cmnd->fc_pri_ta & FCP_PTA_MASK) {
+	case FCP_PTA_SIMPLE:
+		flags |= EFCT_SCSI_CMD_SIMPLE;
+		break;
+	case FCP_PTA_HEADQ:
+		flags |= EFCT_SCSI_CMD_HEAD_OF_QUEUE;
+		break;
+	case FCP_PTA_ORDERED:
+		flags |= EFCT_SCSI_CMD_ORDERED;
+		break;
+	case FCP_PTA_ACA:
+		flags |= EFCT_SCSI_CMD_ACA;
+		break;
+	}
+	if (cmnd->fc_flags & FCP_CFL_WRDATA)
+		flags |= EFCT_SCSI_CMD_DIR_IN;
+	if (cmnd->fc_flags & FCP_CFL_RDDATA)
+		flags |= EFCT_SCSI_CMD_DIR_OUT;
+
+	return flags;
+}
+
+static void
+efct_sframe_common_send_cb(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_send_frame_context *ctx = arg;
+	struct efct_hw *hw = ctx->hw;
+
+	/* Free WQ completion callback */
+	efct_hw_reqtag_free(hw, ctx->wqcb);
+
+	/* Free sequence */
+	efct_hw_sequence_free(hw, ctx->seq);
+}
+
+static int
+efct_sframe_common_send(struct efc_node *node,
+			struct efc_hw_sequence *seq,
+			enum fc_rctl r_ctl, u32 f_ctl,
+			u8 type, void *payload, u32 payload_len)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_hw *hw = &efct->hw;
+	enum efct_hw_rtn rc = 0;
+	struct fc_frame_header *req_hdr = seq->header->dma.virt;
+	struct fc_frame_header hdr;
+	struct efct_hw_send_frame_context *ctx;
+
+	u32 heap_size = seq->payload->dma.size;
+	uintptr_t heap_phys_base = seq->payload->dma.phys;
+	u8 *heap_virt_base = seq->payload->dma.virt;
+	u32 heap_offset = 0;
+
+	/* Build the FC header reusing the RQ header DMA buffer */
+	memset(&hdr, 0, sizeof(hdr));
+	hdr.fh_r_ctl = r_ctl;
+	/* send it back to whomever sent it to us */
+	memcpy(hdr.fh_d_id, req_hdr->fh_s_id, sizeof(hdr.fh_d_id));
+	memcpy(hdr.fh_s_id, req_hdr->fh_d_id, sizeof(hdr.fh_s_id));
+	hdr.fh_type = type;
+	hton24(hdr.fh_f_ctl, f_ctl);
+	hdr.fh_ox_id = req_hdr->fh_ox_id;
+	hdr.fh_rx_id = req_hdr->fh_rx_id;
+	hdr.fh_cs_ctl = 0;
+	hdr.fh_df_ctl = 0;
+	hdr.fh_seq_cnt = 0;
+	hdr.fh_parm_offset = 0;
+
+	/*
+	 * send_frame_seq_id is an atomic, we just let it increment,
+	 * while storing only the low 8 bits to hdr->seq_id
+	 */
+	hdr.fh_seq_id = (u8)atomic_add_return(1, &hw->send_frame_seq_id);
+	hdr.fh_seq_id--;
+
+	/* Allocate and fill in the send frame request context */
+	ctx = (void *)(heap_virt_base + heap_offset);
+	heap_offset += sizeof(*ctx);
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return -1;
+	}
+
+	memset(ctx, 0, sizeof(*ctx));
+
+	/* Save sequence */
+	ctx->seq = seq;
+
+	/* Allocate a response payload DMA buffer from the heap */
+	ctx->payload.phys = heap_phys_base + heap_offset;
+	ctx->payload.virt = heap_virt_base + heap_offset;
+	ctx->payload.size = payload_len;
+	ctx->payload.len = payload_len;
+	heap_offset += payload_len;
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return -1;
+	}
+
+	/* Copy the payload in */
+	memcpy(ctx->payload.virt, payload, payload_len);
+
+	/* Send */
+	rc = efct_hw_send_frame(&efct->hw, (void *)&hdr, FC_SOF_N3,
+				FC_EOF_T, &ctx->payload, ctx,
+				efct_sframe_common_send_cb, ctx);
+	if (rc)
+		efc_log_test(efct, "efct_hw_send_frame failed: %d\n", rc);
+
+	return rc ? -1 : 0;
+}
+
+static int
+efct_sframe_send_fcp_rsp(struct efc_node *node,
+			 struct efc_hw_sequence *seq,
+			 void *rsp, u32 rsp_len)
+{
+	return efct_sframe_common_send(node, seq,
+				      FC_RCTL_DD_CMD_STATUS,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ |
+				      FC_FC_SEQ_INIT,
+				      FC_TYPE_FCP,
+				      rsp, rsp_len);
+}
+
+static int
+efct_sframe_send_task_set_full_or_busy(struct efc_node *node,
+				       struct efc_hw_sequence *seq)
+{
+	struct fcp_resp_with_ext fcprsp;
+	struct fcp_cmnd *fcpcmd = seq->payload->dma.virt;
+	int rc = 0;
+	unsigned long flags = 0;
+	struct efct *efct = node->efc->base;
+
+	/* construct task set full or busy response */
+	memset(&fcprsp, 0, sizeof(fcprsp));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		fcprsp.resp.fr_status = list_empty(&node->active_ios) ?
+				SAM_STAT_BUSY : SAM_STAT_TASK_SET_FULL;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	*((u32 *)&fcprsp.ext.fr_resid) = be32_to_cpu(fcpcmd->fc_dl);
+
+	/* send it using send_frame */
+	rc = efct_sframe_send_fcp_rsp(node, seq, &fcprsp, sizeof(fcprsp));
+	if (rc)
+		efc_log_test(efct,
+			      "efct_sframe_send_fcp_rsp failed: %d\n",
+			rc);
+
+	return rc;
+}
+
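+/*
+ * Dispatch an unsolicited FCP_CMND frame: validate it, allocate a SCSI IO
+ * (answering with task set full/busy via send_frame if allocation fails),
+ * then hand the command or task management function to the SCSI backend.
+ */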
+int
+efct_dispatch_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
+{
+	struct efc *efc = node->efc;
+	struct efct *efct = efc->base;
+	struct fc_frame_header *fchdr = seq->header->dma.virt;
+	struct fcp_cmnd	*cmnd = NULL;
+	struct efct_io *io = NULL;
+	u32 lun = U32_MAX;
+	int rc = 0;
+
+	if (!seq->payload) {
+		efc_log_err(efct, "Sequence payload is NULL.\n");
+		return -1;
+	}
+
+	cmnd = seq->payload->dma.virt;
+
+	/* perform FCP_CMND validation check(s) */
+	if (efct_validate_fcp_cmd(efct, seq))
+		return -1;
+
+	lun = scsilun_to_int(&cmnd->fc_lun);
+	if (lun == U32_MAX)
+		return -1;
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (!io) {
+		u32 send_frame_capable;
+
+		/* If we have SEND_FRAME capability, then use it to send
+		 * task set full or busy
+		 */
+		rc = efct_hw_get(&efct->hw, EFCT_HW_SEND_FRAME_CAPABLE,
+				 &send_frame_capable);
+		if (!rc && send_frame_capable) {
+			rc = efct_sframe_send_task_set_full_or_busy(node, seq);
+			if (rc)
+				efc_log_test(efct,
+					      "efct_sframe_task_full_or_busy failed: %d\n",
+					rc);
+			return rc;
+		}
+
+		efc_log_err(efct, "IO allocation failed ox_id %04x\n",
+			     be16_to_cpu(fchdr->fh_ox_id));
+		return -1;
+	}
+	io->hw_priv = seq->hw_priv;
+
+	io->app_id = 0;
+
+	/* RQ pair, if we got here, SIT=1 */
+	efct_populate_io_fcp_cmd(io, cmnd, fchdr, true);
+
+	if (cmnd->fc_tm_flags) {
+		efct_dispatch_unsolicited_tmf(io,
+					      cmnd->fc_tm_flags,
+					      node, lun);
+	} else {
+		u32 flags = efct_get_flags_fcp_cmd(cmnd);
+
+		if (cmnd->fc_flags & FCP_CFL_LEN_MASK) {
+			efc_log_err(efct, "Additional CDB not supported\n");
+			return -1;
+		}
+		/*
+		 * Can return failure for things like task set full and UAs,
+		 * no need to treat as a dropped frame if rc != 0
+		 */
+		efct_scsi_recv_cmd(io, lun, cmnd->fc_cdb,
+				   sizeof(cmnd->fc_cdb), flags);
+	}
+
+	/* successfully processed, now return RX buffer to the chip */
+	efct_hw_sequence_free(&efct->hw, seq);
+	return 0;
+}
+
+int
+efct_sframe_send_bls_acc(struct efc_node *node,
+			 struct efc_hw_sequence *seq)
+{
+	struct fc_frame_header *behdr = seq->header->dma.virt;
+	u16 ox_id = be16_to_cpu(behdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(behdr->fh_rx_id);
+	struct fc_ba_acc acc = {0};
+
+	acc.ba_ox_id = cpu_to_be16(ox_id);
+	acc.ba_rx_id = cpu_to_be16(rx_id);
+	acc.ba_low_seq_cnt = cpu_to_be16(U16_MAX);
+	acc.ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	return efct_sframe_common_send(node, seq,
+				      FC_RCTL_BA_ACC,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ,
+				      FC_TYPE_BLS,
+				      &acc, sizeof(acc));
+}
+
+void
+efct_node_io_cleanup(struct efc *efc, struct efc_node *node, bool force)
+{
+	struct efct_io *io;
+	struct efct_io *next;
+	unsigned long flags = 0;
+	struct efct *efct = efc->base;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(io, next, &node->active_ios, list_entry) {
+		list_del(&io->list_entry);
+		efct_io_pool_io_free(efct->xport->io_pool, io);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+void
+efct_node_els_cleanup(struct efc *efc, struct efc_node *node,
+		      bool force)
+{
+	struct efct_io *els;
+	struct efct_io *els_next;
+	struct efct_io *ls_acc_io;
+	unsigned long flags = 0;
+	struct efct *efct = efc->base;
+
+	/* first cleanup ELS's that are pending (not yet active) */
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(els, els_next, &node->els_io_pend_list,
+				 list_entry) {
+		/*
+		 * skip the ELS IO for which a response
+		 * will be sent after shutdown
+		 */
+		if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE &&
+		    els == node->ls_acc_io) {
+			continue;
+		}
+		/*
+		 * can't call efct_els_io_free()
+		 * because lock is held; cleanup manually
+		 */
+		node_printf(node, "Freeing pending els %s\n",
+			    els->display_name);
+		list_del(&els->list_entry);
+
+		dma_free_coherent(&efct->pcidev->dev,
+				  els->els_rsp.size, els->els_rsp.virt,
+				  els->els_rsp.phys);
+		dma_free_coherent(&efct->pcidev->dev,
+				  els->els_req.size, els->els_req.virt,
+				  els->els_req.phys);
+
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	ls_acc_io = node->ls_acc_io;
+
+	if (node->ls_acc_io && ls_acc_io->hio) {
+		/*
+		 * if there's an IO that will result in an LS_ACC after
+		 * shutdown and its HW IO is non-NULL, it better be an
+		 * implicit logout in vanilla sequence coalescing. In this
+		 * case, force the LS_ACC to go out on another XRI (hio)
+		 * since the previous will have been aborted by the UNREG_RPI
+		 */
+		node_printf(node,
+			    "invalidating ls_acc_io due to implicit logo\n");
+
+		/*
+		 * No need to abort because the unreg_rpi
+		 * takes care of it, just free
+		 */
+		efct_hw_io_free(&efct->hw, ls_acc_io->hio);
+
+		/* NULL out hio to force the LS_ACC to grab a new XRI */
+		ls_acc_io->hio = NULL;
+	}
+}
+
+void
+efct_node_abort_all_els(struct efc *efc, struct efc_node *node)
+{
+	struct efct_io *els;
+	struct efct_io *els_next;
+	struct efc_node_cb cbdata;
+	struct efct *efct = efc->base;
+	unsigned long flags = 0;
+
+	memset(&cbdata, 0, sizeof(struct efc_node_cb));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(els, els_next, &node->els_io_active_list,
+				 list_entry) {
+		if (els->els_req_free)
+			continue;
+		efc_log_debug(efct, "[%s] initiate ELS abort %s\n",
+			       node->display_name, els->display_name);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		efct_els_abort(els, &cbdata);
+		spin_lock_irqsave(&node->active_ios_lock, flags);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
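+/*
+ * Handle a received ABTS: look up the target IO by ox_id/rx_id and, if it is
+ * found, notify the backend with an ABORT_TASK TMF; otherwise send a BA_RJT.
+ */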
+static int
+efct_process_abts(struct efct_io *io, struct fc_frame_header *hdr)
+{
+	struct efc_node *node = io->node;
+	struct efct *efct = io->efct;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	struct efct_io *abortio;
+
+	/* Find IO and attempt to take a reference on it */
+	abortio = efct_io_find_tgt_io(efct, node, ox_id, rx_id);
+
+	if (abortio) {
+		/* Got a reference on the IO. Hold it until backend
+		 * is notified below
+		 */
+		node_printf(node, "Abort request: ox_id [%04x] rx_id [%04x]\n",
+			    ox_id, rx_id);
+
+		/*
+		 * Save the ox_id for the ABTS as the init_task_tag in our
+		 * manufactured
+		 * TMF IO object
+		 */
+		io->display_name = "abts";
+		io->init_task_tag = ox_id;
+		/* don't set tgt_task_tag, don't want to confuse with XRI */
+
+		/*
+		 * Save the rx_id from the ABTS as it is
+		 * needed for the BLS response,
+		 * regardless of the IO context's rx_id
+		 */
+		io->abort_rx_id = rx_id;
+
+		/* Call target server command abort */
+		io->tmf_cmd = EFCT_SCSI_TMF_ABORT_TASK;
+		efct_scsi_recv_tmf(io, abortio->tgt_io.lun,
+				   EFCT_SCSI_TMF_ABORT_TASK, abortio, 0);
+
+		/*
+		 * Backend will have taken an additional
+		 * reference on the IO if needed;
+		 * done with current reference.
+		 */
+		kref_put(&abortio->ref, abortio->release);
+	} else {
+		/*
+		 * Either the IO was not found or it was freed between
+		 * finding it and attempting to take the reference.
+		 */
+		node_printf(node,
+			    "Abort request: ox_id [%04x], IO not found (exists=%d)\n",
+			    ox_id, (abortio != NULL));
+
+		/* Send a BA_RJT */
+		efct_bls_send_rjt_hdr(io, hdr);
+	}
+	return 0;
+}
+
+int
+efct_node_recv_abts_frame(struct efc *efc, struct efc_node *node,
+			  struct efc_hw_sequence *seq)
+{
+	struct efct *efct = efc->base;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efct_io *io = NULL;
+
+	node->abort_cnt++;
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (io) {
+		io->hw_priv = seq->hw_priv;
+		/* If we got this far, SIT=1 */
+		io->seq_init = 1;
+
+		/* fill out generic fields */
+		io->efct = efct;
+		io->node = node;
+		io->cmd_tgt = true;
+
+		efct_process_abts(io, seq->header->dma.virt);
+	} else {
+		node_printf(node,
+			    "SCSI IO allocation failed for ABTS received s_id %06x d_id %06x ox_id %04x rx_id %04x\n",
+			    ntoh24(hdr->fh_s_id), ntoh24(hdr->fh_d_id),
+			    be16_to_cpu(hdr->fh_ox_id),
+			    be16_to_cpu(hdr->fh_rx_id));
+	}
+
+	/* ABTS processed, return RX buffer to the chip */
+	efct_hw_sequence_free(&efct->hw, seq);
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_unsol.h b/drivers/scsi/elx/efct/efct_unsol.h
new file mode 100644
index 000000000000..69e9ce57021c
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_UNSOL_H__)
+#define __EFCT_UNSOL_H__
+
+extern int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq);
+extern int
+efct_node_purge_pending(struct efc *efc, struct efc_node *node);
+extern int
+efct_process_node_pending(struct efc_node *domain);
+extern int
+efct_domain_process_pending(struct efc_domain *domain);
+extern int
+efct_domain_purge_pending(struct efc_domain *domain);
+extern int
+efct_dispatch_unsolicited_bls(struct efc_node *node,
+			      struct efc_hw_sequence *seq);
+extern void
+efct_domain_hold_frames(struct efc *efc, struct efc_domain *domain);
+extern void
+efct_domain_accept_frames(struct efc *efc, struct efc_domain *domain);
+extern void
+efct_seq_coalesce_cleanup(struct efct_hw_io *io, u8 count);
+extern int
+efct_sframe_send_bls_acc(struct efc_node *node,
+			 struct efc_hw_sequence *seq);
+extern int
+efct_dispatch_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
+
+extern int
+efct_node_recv_abts_frame(struct efc *efc, struct efc_node *node,
+			  struct efc_hw_sequence *seq);
+extern void
+efct_node_els_cleanup(struct efc *efc, struct efc_node *node,
+		      bool force);
+
+extern void
+efct_node_io_cleanup(struct efc *efc, struct efc_node *node,
+		     bool force);
+
+void
+efct_node_abort_all_els(struct efc *efc, struct efc_node *node);
+
+#endif /* __EFCT_UNSOL_H__ */
-- 
2.13.7



* [PATCH v2 22/32] elx: efct: Extended Link Service IO handling
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (20 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:38   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 23/32] elx: efct: SCSI IO handling routines James Smart
                   ` (10 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Functions to build and send ELS/CT/BLS commands and responses.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_els.c | 1953 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_els.h |  136 +++
 2 files changed, 2089 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_els.c
 create mode 100644 drivers/scsi/elx/efct/efct_els.h

diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
new file mode 100644
index 000000000000..9c964302505b
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_els.c
@@ -0,0 +1,1953 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Functions to build and send ELS/CT/BLS commands and responses.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+
+#define ELS_IOFMT "[i:%04x t:%04x h:%04x]"
+
+#define node_els_trace()  \
+	do { \
+		if (EFCT_LOG_ENABLE_ELS_TRACE(efct)) \
+			efc_log_info(efct, "[%s] %-20s\n", \
+				node->display_name, __func__); \
+	} while (0)
+
+#define els_io_printf(els, fmt, ...) \
+	efc_log_debug((struct efct *)els->node->efc->base,\
+		      "[%s]" ELS_IOFMT " %-8s " fmt, \
+		      els->node->display_name,\
+		      els->init_task_tag, els->tgt_task_tag, els->hw_tag,\
+		      els->display_name, ##__VA_ARGS__)
+
+#define EFCT_ELS_RSP_LEN		1024
+#define EFCT_ELS_GID_PT_RSP_LEN		8096
+
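+/*
+ * Originate an ELS request to the remote node; cmd selects the ELS opcode
+ * (ELS_PLOGI, ELS_FLOGI, ELS_LOGO, ELS_PRLI, ELS_ADISC or ELS_SCR).
+ */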
+void *
+efct_els_req_send(struct efc *efc, struct efc_node *node, u32 cmd,
+		  u32 timeout_sec, u32 retries)
+{
+	struct efct *efct = efc->base;
+
+	switch (cmd) {
+	case ELS_PLOGI:
+		efc_log_debug(efct, "send efct_send_plogi\n");
+		efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_FLOGI:
+		efc_log_debug(efct, "send efct_send_flogi\n");
+		efct_send_flogi(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_LOGO:
+		efc_log_debug(efct, "send efct_send_logo\n");
+		efct_send_logo(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_PRLI:
+		efc_log_debug(efct, "send efct_send_prli\n");
+		efct_send_prli(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_ADISC:
+		efc_log_debug(efct, "send efct_send_adisc\n");
+		efct_send_adisc(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case ELS_SCR:
+		efc_log_debug(efct, "send efct_send_scr\n");
+		efct_send_scr(node, timeout_sec, retries, NULL, NULL);
+		break;
+	default:
+		efc_log_debug(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+void *
+efct_els_resp_send(struct efc *efc, struct efc_node *node,
+		   u32 cmd, u16 ox_id)
+{
+	struct efct *efct = efc->base;
+
+	switch (cmd) {
+	case ELS_PLOGI:
+		efct_send_plogi_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_FLOGI:
+		efct_send_flogi_acc(node, ox_id, 0, NULL, NULL);
+		break;
+	case ELS_LOGO:
+		efct_send_logo_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PRLI:
+		efct_send_prli_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PRLO:
+		efct_send_prlo_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_ADISC:
+		efct_send_adisc_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_LS_ACC:
+		efct_send_ls_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PDISC:
+	case ELS_FDISC:
+	case ELS_RSCN:
+	case ELS_SCR:
+		efct_send_ls_rjt(efc, node, ox_id, ELS_RJT_UNAB,
+				 ELS_EXPL_NONE, 0);
+		break;
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+struct efct_io *
+efct_els_io_alloc(struct efc_node *node, u32 reqlen,
+		  enum efct_els_role role)
+{
+	return efct_els_io_alloc_size(node, reqlen, EFCT_ELS_RSP_LEN, role);
+}
+
+struct efct_io *
+efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
+		       u32 rsplen, enum efct_els_role role)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	struct efct_io *els;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+	if (!node->io_alloc_enabled) {
+		efc_log_debug(efct,
+			       "called with io_alloc_enabled = FALSE\n");
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	els = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!els) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* initialize refcount */
+	kref_init(&els->ref);
+	els->release = _efct_els_io_free;
+
+	switch (role) {
+	case EFCT_ELS_ROLE_ORIGINATOR:
+		els->cmd_ini = true;
+		els->cmd_tgt = false;
+		break;
+	case EFCT_ELS_ROLE_RESPONDER:
+		els->cmd_ini = false;
+		els->cmd_tgt = true;
+		break;
+	}
+
+	/* IO should not have an associated HW IO yet.
+	 * Assigned below.
+	 */
+	if (els->hio) {
+		efc_log_err(efct,
+			     "assertion failed: HIO is not NULL\n");
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* populate generic io fields */
+	els->efct = efct;
+	els->node = node;
+
+	/* set type and ELS-specific fields */
+	els->io_type = EFCT_IO_TYPE_ELS;
+	els->display_name = "pending";
+
+	/* now allocate DMA for request and response */
+	els->els_req.size = reqlen;
+	els->els_req.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					       els->els_req.size,
+					       &els->els_req.phys,
+					       GFP_DMA);
+	if (els->els_req.virt) {
+		els->els_rsp.size = rsplen;
+		els->els_rsp.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       els->els_rsp.size,
+						       &els->els_rsp.phys,
+						       GFP_DMA);
+		if (!els->els_rsp.virt) {
+			efc_log_err(efct, "dma_alloc rsp\n");
+			dma_free_coherent(&efct->pcidev->dev,
+					  els->els_req.size,
+				els->els_req.virt, els->els_req.phys);
+			efct_io_pool_io_free(efct->xport->io_pool, els);
+			els = NULL;
+		}
+	} else {
+		efc_log_err(efct, "dma_alloc req\n");
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+		els = NULL;
+	}
+
+	if (els) {
+		/* initialize fields */
+		els->els_retries_remaining =
+					EFCT_FC_ELS_DEFAULT_RETRIES;
+		els->els_pend = false;
+		els->els_active = false;
+
+		/* add els structure to ELS IO list */
+		INIT_LIST_HEAD(&els->list_entry);
+		list_add_tail(&els->list_entry,
+			      &node->els_io_pend_list);
+		els->els_pend = true;
+	}
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return els;
+}
+
+void
+efct_els_io_free(struct efct_io *els)
+{
+	kref_put(&els->ref, els->release);
+}
+
+void
+_efct_els_io_free(struct kref *arg)
+{
+	struct efct_io *els = container_of(arg, struct efct_io, ref);
+	struct efct *efct;
+	struct efc_node *node;
+	bool send_empty_event = false;
+	unsigned long flags = 0;
+
+	node = els->node;
+	efct = node->efc->base;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		if (els->els_active) {
+			/* if active, remove from active list and check empty */
+			list_del(&els->list_entry);
+			/* Send list empty event if the IO allocator
+			 * is disabled, and the list is empty
+			 * If node->io_alloc_enabled was not checked,
+			 * the event would be posted continually
+			 */
+			send_empty_event = (!node->io_alloc_enabled) &&
+				list_empty(&node->els_io_active_list);
+			els->els_active = false;
+		} else if (els->els_pend) {
+			/* if pending, remove from pending list;
+			 * node shutdown isn't gated off the
+			 * pending list (only the active list),
+			 * so no need to check if pending list is empty
+			 */
+			list_del(&els->list_entry);
+			els->els_pend = false;
+		} else {
+			efc_log_err(efct,
+				     "assertion fail: neither els_pend nor active set\n");
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return;
+		}
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	/* free ELS request and response buffers */
+	dma_free_coherent(&efct->pcidev->dev, els->els_rsp.size,
+			  els->els_rsp.virt, els->els_rsp.phys);
+	dma_free_coherent(&efct->pcidev->dev, els->els_req.size,
+			  els->els_req.virt, els->els_req.phys);
+
+	efct_io_pool_io_free(efct->xport->io_pool, els);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+
+	efct_scsi_check_pending(efct);
+}
+
+static void
+efct_els_make_active(struct efct_io *els)
+{
+	struct efc_node *node = els->node;
+	unsigned long flags = 0;
+
+	/* move ELS from pending list to active list */
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		if (els->els_pend) {
+			if (els->els_active) {
+				efc_log_err(node->efc,
+					     "assertion fail: both els_pend and active set\n");
+				spin_unlock_irqrestore(&node->active_ios_lock,
+						       flags);
+				return;
+			}
+			/* remove from pending list */
+			list_del(&els->list_entry);
+			els->els_pend = false;
+
+			/* add els structure to ELS IO list */
+			INIT_LIST_HEAD(&els->list_entry);
+			list_add_tail(&els->list_entry,
+				      &node->els_io_active_list);
+			els->els_active = true;
+		} else {
+			/* must be retrying; make sure it's already active */
+			if (!els->els_active) {
+				efc_log_err(node->efc,
+					     "assertion fail: neither els_pend nor active set\n");
+			}
+		}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+static int efct_els_send(struct efct_io *els, u32 reqlen,
+			 u32 timeout_sec, efct_hw_srrs_cb_t cb)
+{
+	struct efc_node *node = els->node;
+
+	/* update ELS request counter */
+	node->els_req_cnt++;
+
+	/* move ELS from pending list to active list */
+	efct_els_make_active(els);
+
+	els->wire_len = reqlen;
+	return efct_scsi_io_dispatch(els, cb);
+}
+
+static void
+efct_els_retry(struct efct_io *els);
+
+static void
+efct_els_delay_timer_cb(struct timer_list *t)
+{
+	struct efct_io *els = from_timer(els, t, delay_timer);
+	struct efc_node *node = els->node;
+
+	/* Retry delay timer expired: unless the request was aborted while
+	 * delayed, retry it; efct_els_retry() frees the HW IO so that a
+	 * new OX_ID is used.
+	 */
+	if (els->state == EFCT_ELS_REQUEST_DELAY_ABORT) {
+		node->els_req_cnt++;
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    NULL);
+	} else {
+		efct_els_retry(els);
+	}
+}
+
+static void
+efct_els_abort_cleanup(struct efct_io *els)
+{
+	/* handle event for ABORT_WQE
+	 * whatever state the ELS happened to be in, propagate an aborted event
+	 * up to node state machine in lieu of EFC_HW_SRRS_ELS_* event
+	 */
+	struct efc_node_cb cbdata;
+
+	cbdata.status = 0;
+	cbdata.ext_status = 0;
+	cbdata.els_rsp = els->els_rsp;
+	els_io_printf(els, "Request aborted\n");
+	efct_els_io_cleanup(els, EFC_HW_ELS_REQ_ABORTED, &cbdata);
+}
+
+static int
+efct_els_req_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		u32 length, int status, u32 ext_status, void *arg)
+{
+	struct efct_io *els;
+	struct efc_node *node;
+	struct efct *efct;
+	struct efc_node_cb cbdata;
+	u32 reason_code;
+
+	els = arg;
+	node = els->node;
+	efct = node->efc->base;
+
+	if (status != 0)
+		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
+
+	/* set the response len element of els->rsp */
+	els->els_rsp.len = length;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->els_rsp;
+
+	/* FW returns the number of bytes received on the link in
+	 * the WCQE, not the amount placed in the buffer; use this info to
+	 * check if there was an overrun.
+	 */
+	if (length > els->els_rsp.size) {
+		efc_log_warn(efct,
+			      "ELS response returned len=%d > buflen=%zu\n",
+			     length, els->els_rsp.size);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
+		return 0;
+	}
+
+	/* Post event to ELS IO object */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_OK, &cbdata);
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		reason_code = (ext_status >> 16) & 0xff;
+
+		/* delay and retry if reason code is Logical Busy */
+		switch (reason_code) {
+		case ELS_RJT_BUSY:
+			els->node->els_req_cnt--;
+			els_io_printf(els,
+				      "LS_RJT Logical Busy response, delay and retry\n");
+			timer_setup(&els->delay_timer,
+				    efct_els_delay_timer_cb, 0);
+			mod_timer(&els->delay_timer,
+				  jiffies + msecs_to_jiffies(5000));
+			els->state = EFCT_ELS_REQUEST_DELAYED;
+			break;
+		default:
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_RJT,
+					    &cbdata);
+			break;
+		}
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+		switch (ext_status) {
+		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
+			efct_els_retry(els);
+			break;
+
+		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+			if (els->state == EFCT_ELS_ABORT_IO_COMPL) {
+				/* completion for ELS that was aborted */
+				efct_els_abort_cleanup(els);
+			} else {
+				/* completion for ELS received first,
+				 * transition to wait for abort cmpl
+				 */
+				els->state = EFCT_ELS_REQ_ABORTED;
+			}
+
+			break;
+		default:
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    &cbdata);
+			break;
+		}
+		break;
+	default:	/* Other error */
+		efc_log_warn(efct,
+			      "els req failed status x%x, ext_status x%x\n",
+					status, ext_status);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
+		break;
+	}
+
+	return 0;
+}
+
+static void efct_els_send_req(struct efc_node *node, struct efct_io *els)
+{
+	int rc = 0;
+	struct efct *efct;
+
+	efct = node->efc->base;
+	rc = efct_els_send(els, els->els_req.size,
+			   els->els_timeout_sec, efct_els_req_cb);
+
+	if (rc) {
+		struct efc_node_cb cbdata;
+
+		cbdata.status = INT_MAX;
+		cbdata.ext_status = INT_MAX;
+		cbdata.els_rsp = els->els_rsp;
+		efc_log_err(efct, "efct_els_send failed: %d\n", rc);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+				    &cbdata);
+	}
+}
+
+static void
+efct_els_retry(struct efct_io *els)
+{
+	struct efct *efct;
+	struct efc_node_cb cbdata;
+
+	efct = els->node->efc->base;
+	cbdata.status = INT_MAX;
+	cbdata.ext_status = INT_MAX;
+	cbdata.els_rsp = els->els_rsp;
+
+	if (!els->els_retries_remaining) {
+		efc_log_err(efct, "ELS retries exhausted\n");
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+				    &cbdata);
+		return;
+	}
+
+	els->els_retries_remaining--;
+	/* Free the HW IO so that a new OX_ID is used. */
+	if (els->hio) {
+		efct_hw_io_free(&efct->hw, els->hio);
+		els->hio = NULL;
+	}
+
+	efct_els_send_req(els->node, els);
+}
+
+static int
+efct_els_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		u32 length, int status, u32 ext_status, void *arg)
+{
+	struct efct_io *els;
+	struct efc_node *node;
+	struct efct *efct;
+	struct efc_node_cb cbdata;
+
+	els = arg;
+	node = els->node;
+	efct = node->efc->base;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->els_rsp;
+
+	/* Post node event */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_OK, &cbdata);
+		break;
+
+	default:	/* Other error */
+		efc_log_warn(efct,
+			      "[%s] %-8s failed status x%x, ext_status x%x\n",
+			    node->display_name, els->display_name,
+			    status, ext_status);
+		efc_log_warn(efct,
+			      "els acc complete: failed status x%x, ext_status x%x\n",
+		     status, ext_status);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_FAIL, &cbdata);
+		break;
+	}
+
+	return 0;
+}
+
+static int
+efct_els_send_rsp(struct efct_io *els, u32 rsplen)
+{
+	struct efc_node *node = els->node;
+
+	/* increment ELS completion counter */
+	node->els_cmpl_cnt++;
+
+	/* move ELS from pending list to active list */
+	efct_els_make_active(els);
+
+	els->wire_len = rsplen;
+	return efct_scsi_io_dispatch(els, efct_els_acc_cb);
+}
+
+struct efct_io *
+efct_send_plogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries,
+	      void (*cb)(struct efc_node *node,
+			 struct efc_node_cb *cbdata, void *arg), void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_flogi  *plogi;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "plogi";
+
+		/* Build PLOGI request */
+		plogi = els->els_req.virt;
+
+		memcpy(plogi, node->sport->service_params, sizeof(*plogi));
+
+		plogi->fl_cmd = ELS_PLOGI;
+		memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_flogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_flogi  *flogi;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "flogi";
+
+		/* Build FLOGI request */
+		flogi = els->els_req.virt;
+
+		memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+		flogi->fl_cmd = ELS_FLOGI;
+		memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_flogi *fdisc;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*fdisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "fdisc";
+
+		/* Build FDISC request */
+		fdisc = els->els_req.virt;
+
+		memcpy(fdisc, node->sport->service_params, sizeof(*fdisc));
+		fdisc->fl_cmd = ELS_FDISC;
+		memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_prli(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_io *els;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "prli";
+
+		/* Build PRLI request */
+		pp = els->els_req.virt;
+
+		memset(pp, 0, sizeof(*pp));
+
+		pp->prli.prli_cmd = ELS_PRLI;
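+		/* each service parameter page is 16 bytes */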
+		pp->prli.prli_spp_len = 16;
+		pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+		pp->spp.spp_type = FC_TYPE_FCP;
+		pp->spp.spp_type_ext = 0;
+		pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
+		pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+				       (node->sport->enable_ini ?
+				       FCP_SPPF_INIT_FCN : 0) |
+				       (node->sport->enable_tgt ?
+				       FCP_SPPF_TARG_FCN : 0));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+
+	return els;
+}
+
+struct efct_io *
+efct_send_prlo(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_io *els;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "prlo";
+
+		/* Build PRLO request */
+		pp = els->els_req.virt;
+
+		memset(pp, 0, sizeof(*pp));
+		pp->prlo.prlo_cmd = ELS_PRLO;
+		pp->prlo.prlo_obs = 0x10;
+		pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+		pp->spp.spp_type = FC_TYPE_FCP;
+		pp->spp.spp_type_ext = 0;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_logo(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_logo *logo;
+	struct fc_els_flogi  *sparams;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->sport->service_params;
+
+	els = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "logo";
+
+		/* Build LOGO request */
+
+		logo = els->els_req.virt;
+
+		memset(logo, 0, sizeof(*logo));
+		logo->fl_cmd = ELS_LOGO;
+		hton24(logo->fl_n_port_id, node->rnode.sport->fc_id);
+		logo->fl_n_port_wwn = sparams->fl_wwpn;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_adisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efc_sli_port *sport = node->sport;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->sport->service_params;
+
+	els = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "adisc";
+
+		/* Build ADISC request */
+
+		adisc = els->els_req.virt;
+
+		memset(adisc, 0, sizeof(*adisc));
+		adisc->adisc_cmd = ELS_ADISC;
+		hton24(adisc->adisc_hard_addr, sport->fc_id);
+		adisc->adisc_wwpn = sparams->fl_wwpn;
+		adisc->adisc_wwnn = sparams->fl_wwnn;
+		hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_flogi  *pdisc;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pdisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "pdisc";
+
+		pdisc = els->els_req.virt;
+
+		memcpy(pdisc, node->sport->service_params, sizeof(*pdisc));
+
+		pdisc->fl_cmd = ELS_PDISC;
+		memset(pdisc->_fl_resvd, 0, sizeof(pdisc->_fl_resvd));
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_scr(struct efc_node *node, u32 timeout_sec, u32 retries,
+	      els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_scr *req;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "scr";
+
+		req = els->els_req.virt;
+
+		memset(req, 0, sizeof(*req));
+		req->scr_cmd = ELS_SCR;
+		req->scr_reg_func = ELS_SCRF_FULL;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_rrq(struct efc_node *node, u32 timeout_sec, u32 retries,
+	      els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_scr *req;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rrq";
+
+		req = els->els_req.virt;
+
+		memset(req, 0, sizeof(*req));
+		req->scr_cmd = ELS_RRQ;
+		req->scr_reg_func = ELS_SCRF_FULL;
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_send_rscn(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       void *port_ids, u32 port_ids_count, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_rscn *req;
+	struct fc_els_rscn_page *rscn_page;
+	u32 length = sizeof(*rscn_page) * port_ids_count;
+
+	length += sizeof(*req);
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, length, EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->els_timeout_sec = timeout_sec;
+		els->els_retries_remaining = retries;
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rscn";
+
+		req = els->els_req.virt;
+
+		req->rscn_cmd = ELS_RSCN;
+		req->rscn_page_len = sizeof(struct fc_els_rscn_page);
+		req->rscn_plen = cpu_to_be16(length);
+
+		els->hio_type = EFCT_HW_ELS_REQ;
+		els->iparam.els.timeout = timeout_sec;
+
+		/* copy in the payload */
+		rscn_page = els->els_req.virt + sizeof(*req);
+		memcpy(rscn_page, port_ids,
+		       port_ids_count * sizeof(*rscn_page));
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+void *
+efct_send_ls_rjt(struct efc *efc, struct efc_node *node,
+		 u32 ox_id, u32 reason_code,
+		u32 reason_code_expl, u32 vendor_unique)
+{
+	struct efct_io *io = NULL;
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct fc_els_ls_rjt *rjt;
+
+	io = efct_els_io_alloc(node, sizeof(*rjt), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	node_els_trace();
+
+	io->els_callback = NULL;
+	io->els_callback_arg = NULL;
+	io->display_name = "ls_rjt";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	rjt = io->els_req.virt;
+	memset(rjt, 0, sizeof(*rjt));
+
+	rjt->er_cmd = ELS_LS_RJT;
+	rjt->er_reason = reason_code;
+	rjt->er_explan = reason_code_expl;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*rjt));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct fc_els_flogi  *plogi;
+	struct fc_els_flogi  *req = (struct fc_els_flogi *)node->service_params;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "plogi_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	plogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
+	plogi->fl_cmd = ELS_LS_ACC;
+	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+	/* Set Application header support bit if requested */
+	if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST))
+		plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST);
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*plogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+void *
+efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
+			u32 ox_id, u32 s_id)
+{
+	struct efct_io *io = NULL;
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = NULL;
+	io->els_callback_arg = NULL;
+	io->display_name = "flogi_p2p_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els_sid.ox_id = ox_id;
+	io->iparam.els_sid.s_id = s_id;
+
+	flogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	io->hio_type = EFCT_HW_ELS_RSP_SID;
+	rc = efct_els_send_rsp(io, sizeof(*flogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_flogi_acc(struct efc_node *node, u32 ox_id, u32 is_fport,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "flogi_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els_sid.ox_id = ox_id;
+	io->iparam.els_sid.s_id = io->node->sport->fc_id;
+
+	flogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+
+	/* Set F_port */
+	if (is_fport) {
+		/* Set F_PORT and Multiple N_PORT_ID Assignment */
+		flogi->fl_csp.sp_r_a_tov |=  cpu_to_be32(3U << 28);
+	}
+
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	io->hio_type = EFCT_HW_ELS_RSP_SID;
+	rc = efct_els_send_rsp(io, sizeof(*flogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *efct_send_prli_acc(struct efc_node *node,
+				     u32 ox_id, els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "prli_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	pp = io->els_req.virt;
+	memset(pp, 0, sizeof(*pp));
+
+	pp->prli.prli_cmd = ELS_LS_ACC;
+	pp->prli.prli_spp_len = 0x10;
+	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK;
+
+	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+					(node->sport->enable_ini ?
+					 FCP_SPPF_INIT_FCN : 0) |
+					(node->sport->enable_tgt ?
+					 FCP_SPPF_TARG_FCN : 0));
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*pp));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "prlo_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	pp = io->els_req.virt;
+	memset(pp, 0, sizeof(*pp));
+	pp->prlo.prlo_cmd = ELS_LS_ACC;
+	pp->prlo.prlo_obs = 0x10;
+	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_RESP_ACK;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*pp));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_ls_acc(struct efc_node *node, u32 ox_id, els_cb_t cb,
+		 void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct fc_els_ls_acc *acc;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*acc), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "ls_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	acc = io->els_req.virt;
+	memset(acc, 0, sizeof(*acc));
+
+	acc->la_cmd = ELS_LS_ACC;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*acc));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_logo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_io *io = NULL;
+	struct efct *efct = node->efc->base;
+	struct fc_els_ls_acc *logo;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "logo_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	logo = io->els_req.virt;
+	memset(logo, 0, sizeof(*logo));
+
+	logo->la_cmd = ELS_LS_ACC;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*logo));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_io *io = NULL;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efct *efct;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "adisc_acc";
+	io->init_task_tag = ox_id;
+
+	/* Go ahead and send the ELS_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	sparams = (struct fc_els_flogi  *)node->sport->service_params;
+	adisc = io->els_req.virt;
+	memset(adisc, 0, sizeof(*adisc));
+	adisc->adisc_cmd = ELS_LS_ACC;
+	adisc->adisc_wwpn = sparams->fl_wwpn;
+	adisc->adisc_wwnn = sparams->fl_wwnn;
+	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*adisc));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+void *
+efct_els_send_ct(struct efc *efc, struct efc_node *node, u32 cmd,
+		 u32 timeout_sec, u32 retries)
+{
+	struct efct *efct = efc->base;
+
+	switch (cmd) {
+	case FC_RCTL_ELS_REQ:
+		efc_log_err(efct, "send efct_ns_send_rftid\n");
+		efct_ns_send_rftid(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case FC_NS_RFF_ID:
+		efc_log_err(efct, "send efct_ns_send_rffid\n");
+		efct_ns_send_rffid(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case FC_NS_GID_PT:
+		efc_log_err(efct, "send efct_ns_send_gidpt\n");
+		efct_ns_send_gidpt(node, timeout_sec, retries, NULL, NULL);
+		break;
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+static inline void fcct_build_req_header(struct fc_ct_hdr  *hdr,
+					 u16 cmd, u16 max_size)
+{
+	hdr->ct_rev = FC_CT_REV;
+	hdr->ct_fs_type = FC_FST_DIR;
+	hdr->ct_fs_subtype = FC_NS_SUBTYPE;
+	hdr->ct_options = 0;
+	hdr->ct_cmd = cpu_to_be16(cmd);
+	/* max/residual size is expressed in 32-bit words */
+	hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32)));
+	hdr->ct_reason = 0;
+	hdr->ct_explan = 0;
+	hdr->ct_vendor = 0;
+}
+
+struct efct_io *
+efct_ns_send_rftid(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_rft_id *rftid;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*ct) + sizeof(*rftid),
+				EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+		els->iparam.fc_ct.type = FC_TYPE_CT;
+		els->iparam.fc_ct.df_ctl = 0;
+		els->iparam.fc_ct.timeout = timeout_sec;
+
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rftid";
+
+		ct = els->els_req.virt;
+		memset(ct, 0, sizeof(*ct));
+		fcct_build_req_header(ct, FC_NS_RFT_ID, sizeof(*rftid));
+
+		rftid = els->els_req.virt + sizeof(*ct);
+		memset(rftid, 0, sizeof(*rftid));
+		hton24(rftid->fr_fid.fp_fid, node->rnode.sport->fc_id);
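+		/* set the FCP bit in the FC-4 types bitmap being registered */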
+		rftid->fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] =
+			cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW));
+
+		els->hio_type = EFCT_HW_FC_CT;
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_ns_send_rffid(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_rff_id *rffid;
+	u32 size = 0;
+
+	node_els_trace();
+
+	size = sizeof(*ct) + sizeof(*rffid);
+
+	els = efct_els_io_alloc(node, size, EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+	} else {
+		els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+		els->iparam.fc_ct.type = FC_TYPE_CT;
+		els->iparam.fc_ct.df_ctl = 0;
+		els->iparam.fc_ct.timeout = timeout_sec;
+
+		els->els_callback = cb;
+		els->els_callback_arg = cbarg;
+		els->display_name = "rffid";
+		ct = els->els_req.virt;
+
+		memset(ct, 0, sizeof(*ct));
+		fcct_build_req_header(ct, FC_NS_RFF_ID, sizeof(*rffid));
+
+		rffid = els->els_req.virt + sizeof(*ct);
+		memset(rffid, 0, sizeof(*rffid));
+
+		hton24(rffid->fr_fid.fp_fid, node->rnode.sport->fc_id);
+		if (node->sport->enable_ini)
+			rffid->fr_feat |= FCP_FEAT_INIT;
+		if (node->sport->enable_tgt)
+			rffid->fr_feat |= FCP_FEAT_TARG;
+		rffid->fr_type = FC_TYPE_FCP;
+
+		els->hio_type = EFCT_HW_FC_CT;
+
+		efct_els_send_req(node, els);
+	}
+	return els;
+}
+
+struct efct_io *
+efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els = NULL;
+	struct efct *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_gid_pt *gidpt;
+	u32 size = 0;
+
+	node_els_trace();
+
+	size = sizeof(*ct) + sizeof(*gidpt);
+	els = efct_els_io_alloc_size(node, size,
+				     EFCT_ELS_GID_PT_RSP_LEN,
+				   EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+
+	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->iparam.fc_ct.type = FC_TYPE_CT;
+	els->iparam.fc_ct.df_ctl = 0;
+	els->iparam.fc_ct.timeout = timeout_sec;
+
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "gidpt";
+
+	ct = els->els_req.virt;
+
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(ct, FC_NS_GID_PT, sizeof(*gidpt));
+
+	gidpt = els->els_req.virt + sizeof(*ct);
+	memset(gidpt, 0, sizeof(*gidpt));
+	gidpt->fn_pt_type = FC_TYPE_FCP;
+
+	els->hio_type = EFCT_HW_FC_CT;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+static int efct_bls_send_rjt_cb(struct efct_hw_io *hio,
+				struct efc_remote_node *rnode, u32 length,
+		int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+static struct efct_io *
+efct_bls_send_rjt(struct efct_io *io, u32 s_id,
+		  u16 ox_id, u16 rx_id)
+{
+	struct efc_node *node = io->node;
+	int rc;
+	struct fc_ba_rjt *acc;
+	struct efct *efct;
+
+	efct = node->efc->base;
+
+	if (node->rnode.sport->fc_id == s_id)
+		s_id = U32_MAX;
+
+	/* fill out generic fields */
+	io->efct = efct;
+	io->node = node;
+	io->cmd_tgt = true;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_rjt";
+	io->hio_type = EFCT_HW_BLS_RJT;
+	io->init_task_tag = ox_id;
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls_sid.ox_id = ox_id;
+	io->iparam.bls_sid.rx_id = rx_id;
+
+	acc = (void *)io->iparam.bls_sid.payload;
+
+	memset(io->iparam.bls_sid.payload, 0,
+	       sizeof(io->iparam.bls_sid.payload));
+	acc->br_reason = ELS_RJT_UNAB;
+	acc->br_explan = ELS_EXPL_NONE;
+
+	rc = efct_scsi_io_dispatch(io, efct_bls_send_rjt_cb);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+struct efct_io *
+efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	u32 d_id = ntoh24(hdr->fh_d_id);
+
+	return efct_bls_send_rjt(io, d_id, ox_id, rx_id);
+}
+
+static int efct_bls_send_acc_cb(struct efct_hw_io *hio,
+				struct efc_remote_node *rnode, u32 length,
+		int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+static struct efct_io *
+efct_bls_send_acc(struct efct_io *io, u32 s_id,
+		  u16 ox_id, u16 rx_id)
+{
+	struct efc_node *node = io->node;
+	int rc;
+	struct fc_ba_acc *acc;
+	struct efct *efct;
+
+	efct = node->efc->base;
+
+	if (node->rnode.sport->fc_id == s_id)
+		s_id = U32_MAX;
+
+	/* fill out generic fields */
+	io->efct = efct;
+	io->node = node;
+	io->cmd_tgt = true;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_acc";
+	io->hio_type = EFCT_HW_BLS_ACC_SID;
+	io->init_task_tag = ox_id;
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls_sid.s_id = s_id;
+	io->iparam.bls_sid.ox_id = ox_id;
+	io->iparam.bls_sid.rx_id = rx_id;
+
+	acc = (void *)io->iparam.bls_sid.payload;
+
+	memset(io->iparam.bls_sid.payload, 0,
+	       sizeof(io->iparam.bls_sid.payload));
+	acc->ba_ox_id = cpu_to_be16(io->iparam.bls_sid.ox_id);
+	acc->ba_rx_id = cpu_to_be16(io->iparam.bls_sid.rx_id);
+	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	rc = efct_scsi_io_dispatch(io, efct_bls_send_acc_cb);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+void *
+efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
+		      struct fc_frame_header *hdr)
+{
+	struct efct_io *io = NULL;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	u32 d_id = ntoh24(hdr->fh_d_id);
+
+	io = efct_scsi_io_alloc(node, EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return io;
+	}
+
+	return efct_bls_send_acc(io, d_id, ox_id, rx_id);
+}
+
+static int
+efct_els_abort_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		  u32 length, int status, u32 ext_status,
+		 void *app)
+{
+	struct efct_io *els;
+	struct efct_io *abort_io = NULL; /* IO structure used to abort ELS */
+	struct efct *efct;
+
+	abort_io = app;
+	els = abort_io->io_to_abort;
+
+	if (!els || !els->node || !els->node->efc)
+		return -1;
+
+	efct = els->node->efc->base;
+
+	if (status != 0)
+		efc_log_warn(efct, "status x%x ext x%x\n", status, ext_status);
+
+	/* now free the abort IO */
+	efct_io_pool_io_free(efct->xport->io_pool, abort_io);
+
+	/* send completion event to indicate abort process is complete
+	 * Note: The ELS SM will already be receiving
+	 * ELS_REQ_OK/FAIL/RJT/ABORTED
+	 */
+	if (els->state == EFCT_ELS_REQ_ABORTED) {
+		/* completion for ELS that was aborted */
+		efct_els_abort_cleanup(els);
+	} else {
+		/* completion for abort was received first,
+		 * transition to wait for req cmpl
+		 */
+		els->state = EFCT_ELS_ABORT_IO_COMPL;
+	}
+
+	/* done with ELS IO to abort */
+	kref_put(&els->ref, els->release);
+	return 0;
+}
+
+static struct efct_io *
+efct_els_abort_io(struct efct_io *els, bool send_abts)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	int rc;
+	struct efct_io *abort_io = NULL;
+
+	efct = els->node->efc->base;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if ((kref_get_unless_zero(&els->ref) == 0)) {
+		/* command no longer active */
+		efc_log_debug(efct, "els no longer active\n");
+		return NULL;
+	}
+
+	/* allocate IO structure to send abort */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+	} else {
+		/* set generic fields */
+		abort_io->efct = efct;
+		abort_io->node = els->node;
+		abort_io->cmd_ini = true;
+
+		/* set type and ABORT-specific fields */
+		abort_io->io_type = EFCT_IO_TYPE_ABORT;
+		abort_io->display_name = "abort_els";
+		abort_io->io_to_abort = els;
+		abort_io->send_abts = send_abts;
+
+		/* now dispatch IO */
+		rc = efct_scsi_io_dispatch_abort(abort_io, efct_els_abort_cb);
+		if (rc) {
+			efc_log_err(efct,
+				     "efct_scsi_io_dispatch failed: %d\n", rc);
+			efct_io_pool_io_free(efct->xport->io_pool, abort_io);
+			abort_io = NULL;
+		}
+	}
+
+	/* if something failed, put reference on ELS to abort */
+	if (!abort_io)
+		kref_put(&els->ref, els->release);
+	return abort_io;
+}
+
+void
+efct_els_abort(struct efct_io *els, struct efc_node_cb *arg)
+{
+	struct efct_io *io = NULL;
+	struct efc_node *node;
+	struct efct *efct;
+
+	node = els->node;
+	efct = node->efc->base;
+
+	/* request to abort this ELS without an ABTS */
+	els_io_printf(els, "ELS abort requested\n");
+	/* Set retries to zero, we are done */
+	els->els_retries_remaining = 0;
+	if (els->state == EFCT_ELS_REQUEST) {
+		els->state = EFCT_ELS_REQ_ABORT;
+		io = efct_els_abort_io(els, false);
+		if (!io) {
+			efc_log_err(efct, "efct_els_abort_io failed\n");
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    arg);
+		}
+
+	} else if (els->state == EFCT_ELS_REQUEST_DELAYED) {
+		/* mod/resched the timer for a short duration */
+		mod_timer(&els->delay_timer,
+			  jiffies + msecs_to_jiffies(1));
+
+		els->state = EFCT_ELS_REQUEST_DELAY_ABORT;
+	}
+}
+
+void
+efct_els_io_cleanup(struct efct_io *els,
+		    enum efc_hw_node_els_event node_evt, void *arg)
+{
+	/* don't want further events that could come; e.g. abort requests
+	 * from the node state machine; thus, disable state machine
+	 */
+	els->els_req_free = true;
+	efc_node_post_els_resp(els->node, node_evt, arg);
+
+	/* If this IO has a callback, invoke it */
+	if (els->els_callback) {
+		(*els->els_callback)(els->node, arg,
+				    els->els_callback_arg);
+	}
+	efct_els_io_free(els);
+}
+
+int
+efct_els_io_list_empty(struct efc_node *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		empty = list_empty(list);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+static int
+efct_ct_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+	       u32 length, int status, u32 ext_status,
+	      void *arg)
+{
+	struct efct_io *io = arg;
+
+	efct_els_io_free(io);
+
+	return 0;
+}
+
+int
+efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		 struct fc_ct_hdr  *ct_hdr, u32 cmd_rsp_code,
+		u32 reason_code, u32 reason_code_explanation)
+{
+	struct efct_io *io = NULL;
+	struct fc_ct_hdr  *rsp = NULL;
+
+	io = efct_els_io_alloc(node, 256, EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return -1;
+	}
+
+	rsp = io->els_rsp.virt;
+	io->io_type = EFCT_IO_TYPE_CT_RESP;
+
+	*rsp = *ct_hdr;
+
+	fcct_build_req_header(rsp, cmd_rsp_code, 0);
+	rsp->ct_reason = reason_code;
+	rsp->ct_explan = reason_code_explanation;
+
+	io->display_name = "ct response";
+	io->init_task_tag = ox_id;
+	io->wire_len += sizeof(*rsp);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+
+	io->io_type = EFCT_IO_TYPE_CT_RESP;
+	io->hio_type = EFCT_HW_FC_CT_RSP;
+	io->iparam.fc_ct_rsp.ox_id = ox_id;
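+	/* R_CTL 0x3: device data, solicited control (used for CT responses) */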
+	io->iparam.fc_ct_rsp.r_ctl = 3;
+	io->iparam.fc_ct_rsp.type = FC_TYPE_CT;
+	io->iparam.fc_ct_rsp.df_ctl = 0;
+	io->iparam.fc_ct_rsp.timeout = 5;
+
+	if (efct_scsi_io_dispatch(io, efct_ct_acc_cb) < 0) {
+		efct_els_io_free(io);
+		return -1;
+	}
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_els.h b/drivers/scsi/elx/efct/efct_els.h
new file mode 100644
index 000000000000..b7d587050264
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_els.h
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_ELS_H__)
+#define __EFCT_ELS_H__
+
+enum efct_els_role {
+	EFCT_ELS_ROLE_ORIGINATOR,
+	EFCT_ELS_ROLE_RESPONDER,
+};
+
+void _efct_els_io_free(struct kref *arg);
+extern struct efct_io *
+efct_els_io_alloc(struct efc_node *node, u32 reqlen,
+		  enum efct_els_role role);
+extern struct efct_io *
+efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
+		       u32 rsplen,
+				       enum efct_els_role role);
+void efct_els_io_free(struct efct_io *els);
+
+extern void *
+efct_els_req_send(struct efc *efc, struct efc_node *node,
+		  u32 cmd, u32 timeout_sec, u32 retries);
+extern void *
+efct_els_send_ct(struct efc *efc, struct efc_node *node,
+		 u32 cmd, u32 timeout_sec, u32 retries);
+extern void *
+efct_els_resp_send(struct efc *efc, struct efc_node *node,
+		   u32 cmd, u16 ox_id);
+void
+efct_els_abort(struct efct_io *els, struct efc_node_cb *arg);
+/* ELS command send */
+typedef void (*els_cb_t)(struct efc_node *node,
+			 struct efc_node_cb *cbdata, void *arg);
+extern struct efct_io *
+efct_send_plogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_flogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prli(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prlo(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_logo(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_adisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_scr(struct efc_node *node, u32 timeout_sec,
+	      u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_rrq(struct efc_node *node, u32 timeout_sec,
+	      u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_ns_send_rftid(struct efc_node *node,
+		   u32 timeout_sec,
+		  u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_ns_send_rffid(struct efc_node *node,
+		   u32 timeout_sec,
+		  u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_rscn(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, void *port_ids,
+	      u32 port_ids_count, els_cb_t cb, void *cbarg);
+extern void
+efct_els_io_cleanup(struct efct_io *els, enum efc_hw_node_els_event,
+		    void *arg);
+
+/* ELS acc send */
+extern struct efct_io *
+efct_send_ls_acc(struct efc_node *node, u32 ox_id,
+		 els_cb_t cb, void *cbarg);
+
+extern void *
+efct_send_ls_rjt(struct efc *efc, struct efc_node *node, u32 ox_id,
+		 u32 reason_code, u32 reason_code_expl,
+		u32 vendor_unique);
+extern void *
+efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
+			u32 ox_id, u32 s_id);
+extern struct efct_io *
+efct_send_flogi_acc(struct efc_node *node, u32 ox_id,
+		    u32 is_fport, els_cb_t cb,
+		   void *cbarg);
+extern struct efct_io *
+efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prli_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_logo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg);
+
+extern void *
+efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
+		      struct fc_frame_header *hdr);
+extern struct efct_io *
+efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr);
+
+extern int
+efct_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+/* CT */
+extern int
+efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		 struct fc_ct_hdr *ct_hdr,
+		 u32 cmd_rsp_code, u32 reason_code,
+		 u32 reason_code_explanation);
+
+#endif /* __EFCT_ELS_H__ */
-- 
2.13.7



* [PATCH v2 23/32] elx: efct: SCSI IO handling routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (21 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 22/32] elx: efct: Extended link Service IO handling James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:41   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 24/32] elx: efct: LIO backend interface routines James Smart
                   ` (9 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for SCSI transport IO allocation, and for building and sending IOs.
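
As a rough sketch (not part of the patch; node is assumed to be in scope),
the target backend allocates a responder-role IO before building and
dispatching it:

	struct efct_io *io;

	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
	if (!io)
		return -ENOMEM;	/* alloc fails while the node is shutting down */
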

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_scsi.c | 1572 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_scsi.h |  313 ++++++++
 2 files changed, 1885 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h

diff --git a/drivers/scsi/elx/efct/efct_scsi.c b/drivers/scsi/elx/efct/efct_scsi.c
new file mode 100644
index 000000000000..eedb5385837f
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.c
@@ -0,0 +1,1572 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+#include "efct_utils.h"
+#include "efct_hw.h"
+
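+/* compile-time toggles for automatic response generation on TSEND/TRECEIVE IOs */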
+#define enable_tsend_auto_resp(efct)	1
+#define enable_treceive_auto_resp(efct)	0
+
+#define SCSI_IOFMT "[%04x][i:%04x t:%04x h:%04x]"
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, "[%s]" SCSI_IOFMT fmt, \
+		io->node->display_name, io->instance_index,\
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+#define scsi_io_trace(io, fmt, ...) \
+	do { \
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(io->efct)) \
+			scsi_io_printf(io, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+/* Enable the SCSI and Transport IO allocations */
+void
+efct_scsi_io_alloc_enable(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		node->io_alloc_enabled = true;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+/* Disable the SCSI and Transport IO allocations */
+void
+efct_scsi_io_alloc_disable(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		node->io_alloc_enabled = false;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+struct efct_io *
+efct_scsi_io_alloc(struct efc_node *node, enum efct_scsi_io_role role)
+{
+	struct efct *efct;
+	struct efc *efcp;
+	struct efct_xport *xport;
+	struct efct_io *io;
+	unsigned long flags = 0;
+
+	efcp = node->efc;
+	efct = efcp->base;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+		if (!node->io_alloc_enabled) {
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		io = efct_io_pool_io_alloc(efct->xport->io_pool);
+		if (!io) {
+			atomic_add_return(1, &xport->io_alloc_failed_count);
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		/* initialize refcount */
+		kref_init(&io->ref);
+		io->release = _efct_scsi_io_free;
+
+		if (io->hio) {
+			efc_log_err(efct,
+				     "assertion failed: io->hio is not NULL\n");
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		/* set generic fields */
+		io->efct = efct;
+		io->node = node;
+
+		/* set type and name */
+		io->io_type = EFCT_IO_TYPE_IO;
+		io->display_name = "scsi_io";
+
+		switch (role) {
+		case EFCT_SCSI_IO_ROLE_ORIGINATOR:
+			io->cmd_ini = true;
+			io->cmd_tgt = false;
+			break;
+		case EFCT_SCSI_IO_ROLE_RESPONDER:
+			io->cmd_ini = false;
+			io->cmd_tgt = true;
+			break;
+		}
+
+		/* Add to node's active_ios list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &node->active_ios);
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	return io;
+}
+
+void
+_efct_scsi_io_free(struct kref *arg)
+{
+	struct efct_io *io = container_of(arg, struct efct_io, ref);
+	struct efct *efct = io->efct;
+	struct efc_node *node = io->node;
+	int send_empty_event;
+	unsigned long flags = 0;
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+
+	if (io->io_free) {
+		efc_log_err(efct, "IO already freed.\n");
+		return;
+	}
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		list_del(&io->list_entry);
+		send_empty_event = (!node->io_alloc_enabled) &&
+					list_empty(&node->active_ios);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+
+	io->node = NULL;
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+}
+
+void
+efct_scsi_io_free(struct efct_io *io)
+{
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	WARN_ON(!refcount_read(&io->ref.refcount));
+	kref_put(&io->ref, io->release);
+}
+
+static void
+efct_scsi_io_free_ovfl(struct efct_io *io)
+{
+	if (io->ovfl_sgl.size) {
+		dma_free_coherent(&io->efct->pcidev->dev,
+				  io->ovfl_sgl.size, io->ovfl_sgl.virt,
+				  io->ovfl_sgl.phys);
+		memset(&io->ovfl_sgl, 0, sizeof(struct efc_dma));
+	}
+}
+
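+/*
+ * HW completion callback for target data and response phases: map the SLI-4
+ * status to a SCSI status and call back the target server.
+ */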
+static void
+efct_target_io_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		  u32 length, int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status scsi_stat = EFCT_SCSI_STATUS_GOOD;
+
+	if (!io || !io->efct) {
+		pr_err("%s: IO or efct is NULL\n", __func__);
+		return;
+	}
+
+	scsi_io_trace(io, "status x%x ext_status x%x\n", status, ext_status);
+
+	efct = io->efct;
+
+	efct_scsi_io_free_ovfl(io);
+
+	io->transferred += length;
+
+	/* Call target server completion */
+	if (io->scsi_tgt_cb) {
+		efct_scsi_io_cb_t cb = io->scsi_tgt_cb;
+		u32 flags = 0;
+
+		/* Clear the callback before invoking the callback */
+		io->scsi_tgt_cb = NULL;
+
+		/* If the status was good and auto-good-response was set,
+		 * then call back the target server with IO_CMPL_RSP_SENT;
+		 * otherwise send IO_CMPL.
+		 */
+		if (status == 0 && io->auto_resp)
+			flags |= EFCT_SCSI_IO_CMPL_RSP_SENT;
+		else
+			flags |= EFCT_SCSI_IO_CMPL;
+
+		switch (status) {
+		case SLI4_FC_WCQE_STATUS_SUCCESS:
+			scsi_stat = EFCT_SCSI_STATUS_GOOD;
+			break;
+		case SLI4_FC_WCQE_STATUS_DI_ERROR:
+			if (ext_status & SLI4_FC_DI_ERROR_GE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_GUARD_ERR;
+			else if (ext_status & SLI4_FC_DI_ERROR_AE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR;
+			else if (ext_status & SLI4_FC_DI_ERROR_RE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR;
+			else
+				scsi_stat = EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR;
+			break;
+		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+			switch (ext_status) {
+			case SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET:
+			case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+				scsi_stat = EFCT_SCSI_STATUS_ABORTED;
+				break;
+			case SLI4_FC_LOCAL_REJECT_INVALID_RPI:
+				scsi_stat = EFCT_SCSI_STATUS_NEXUS_LOST;
+				break;
+			case SLI4_FC_LOCAL_REJECT_NO_XRI:
+				scsi_stat = EFCT_SCSI_STATUS_NO_IO;
+				break;
+			default:
+				/* we have seen 0x0d (TX_DMA_FAILED err) */
+				scsi_stat = EFCT_SCSI_STATUS_ERROR;
+				break;
+			}
+			break;
+
+		case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT:
+			/* target IO timed out */
+			scsi_stat = EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED;
+			break;
+
+		case SLI4_FC_WCQE_STATUS_SHUTDOWN:
+			/* Target IO cancelled by HW */
+			scsi_stat = EFCT_SCSI_STATUS_SHUTDOWN;
+			break;
+
+		default:
+			scsi_stat = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+
+		cb(io, scsi_stat, flags, io->scsi_tgt_cb_arg);
+	}
+	efct_scsi_check_pending(efct);
+}
+
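+/* Count the SGEs needed for an SGL, including any DIF seed/DIF entries */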
+static u32
+efct_scsi_count_sgls(struct efct_hw_dif_info *hw_dif,
+		     struct efct_scsi_sgl *sgl, u32 sgl_count)
+{
+	u32 count = 0;
+	u32 i;
+
+	/* Convert DIF Information */
+	if (hw_dif->dif_oper != EFCT_HW_DIF_OPER_DISABLED) {
+		/* If we're not DIF separate, then emit a seed SGE */
+		if (!hw_dif->dif_separate)
+			count++;
+
+		for (i = 0; i < sgl_count; i++) {
+			/* If DIF is enabled, and DIF is separate,
+			 * then append a SEED then DIF SGE
+			 */
+			if (hw_dif->dif_separate)
+				count += 2;
+
+			count++;
+		}
+	} else {
+		count = sgl_count;
+	}
+	return count;
+}
+
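+/* Build the hardware SGL from the SCSI SGL, adding DIF SGEs when enabled */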
+static int
+efct_scsi_build_sgls(struct efct_hw *hw, struct efct_hw_io *hio,
+		     struct efct_hw_dif_info *hw_dif,
+		struct efct_scsi_sgl *sgl, u32 sgl_count,
+		enum efct_hw_io_type type)
+{
+	int rc;
+	u32 i;
+	struct efct *efct = hw->os;
+	u32 blocksize = 0;
+	u32 blockcount;
+
+	/* Initialize HW SGL */
+	rc = efct_hw_io_init_sges(hw, hio, type);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_io_init_sges failed: %d\n", rc);
+		return -1;
+	}
+
+	/* Convert DIF Information */
+	if (hw_dif->dif_oper != EFCT_HW_DIF_OPER_DISABLED) {
+		/* If we're not DIF separate, then emit a seed SGE */
+		if (!hw_dif->dif_separate) {
+			rc = efct_hw_io_add_seed_sge(hw, hio, hw_dif);
+			if (rc)
+				return rc;
+		}
+
+		/* if we are doing DIF separate, then figure out the
+		 * block size so that we can update the ref tag in the
+		 * DIF seed SGE. Also verify that the sgl lengths are
+		 * all multiples of the blocksize.
+		 */
+		if (hw_dif->dif_separate) {
+			switch (hw_dif->blk_size) {
+			case EFCT_HW_DIF_BK_SIZE_512:
+				blocksize = 512;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_1024:
+				blocksize = 1024;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_2048:
+				blocksize = 2048;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_4096:
+				blocksize = 4096;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_520:
+				blocksize = 520;
+				break;
+			case EFCT_HW_DIF_BK_SIZE_4104:
+				blocksize = 4104;
+				break;
+			default:
+				efc_log_test(efct,
+					      "Invalid hw_dif blocksize %d\n",
+					hw_dif->blk_size);
+				return -1;
+			}
+			for (i = 0; i < sgl_count; i++) {
+				if ((sgl[i].len % blocksize) != 0) {
+					efc_log_test(efct,
+						      "sgl[%d] len of %zu is not a multiple of blocksize\n",
+					i, sgl[i].len);
+					return -1;
+				}
+			}
+		}
+
+		for (i = 0; i < sgl_count; i++) {
+
+			/* If DIF is enabled, and DIF is separate,
+			 * then append a SEED then DIF SGE
+			 */
+			if (hw_dif->dif_separate) {
+				rc = efct_hw_io_add_seed_sge(hw, hio,
+							     hw_dif);
+				if (rc)
+					return rc;
+				rc = efct_hw_io_add_dif_sge(hw, hio,
+							    sgl[i].dif_addr);
+				if (rc)
+					return rc;
+				/* Update the ref_tag for next DIF seed SGE*/
+				blockcount = sgl[i].len / blocksize;
+				if (hw_dif->dif_oper ==
+					EFCT_HW_DIF_OPER_INSERT)
+					hw_dif->ref_tag_repl += blockcount;
+				else
+					hw_dif->ref_tag_cmp += blockcount;
+			}
+
+			/* Add data SGE */
+			rc = efct_hw_io_add_sge(hw, hio,
+						sgl[i].addr, sgl[i].len);
+			if (rc) {
+				efc_log_err(efct,
+					     "add sge failed cnt=%d rc=%d\n",
+					     sgl_count, rc);
+				return rc;
+			}
+		}
+	} else {
+		for (i = 0; i < sgl_count; i++) {
+
+			/* Add data SGE */
+			rc = efct_hw_io_add_sge(hw, hio,
+						sgl[i].addr, sgl[i].len);
+			if (rc) {
+				efc_log_err(efct,
+					     "add sge failed cnt=%d rc=%d\n",
+					     sgl_count, rc);
+				return rc;
+			}
+		}
+	}
+	return 0;
+}
+
+/* Convert SCSI API T10 DIF information into the FC HW format */
+static int
+efct_scsi_convert_dif_info(struct efct *efct,
+			   struct efct_scsi_dif_info *scsi_dif_info,
+			  struct efct_hw_dif_info *hw_dif_info)
+{
+	u32 dif_seed;
+
+	memset(hw_dif_info, 0,
+	       sizeof(struct efct_hw_dif_info));
+
+	if (!scsi_dif_info) {
+		hw_dif_info->dif_oper = EFCT_HW_DIF_OPER_DISABLED;
+		hw_dif_info->blk_size =  EFCT_HW_DIF_BK_SIZE_NA;
+		return 0;
+	}
+
+	/* Convert the DIF operation */
+	switch (scsi_dif_info->dif_oper) {
+	case EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CRC:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INNODIFOUTCRC;
+		hw_dif_info->dif = SLI4_DIF_INSERT;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CRC_OUT_NODIF:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCRCOUTNODIF;
+		hw_dif_info->dif = SLI4_DIF_STRIP;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CHKSUM:
+		hw_dif_info->dif_oper =
+				EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM;
+		hw_dif_info->dif = SLI4_DIF_INSERT;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_NODIF:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF;
+		hw_dif_info->dif = SLI4_DIF_STRIP;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CRC:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCRCOUTCRC;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CHKSUM:
+		hw_dif_info->dif_oper =
+			EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CHKSUM:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CRC:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	case EFCT_SCSI_DIF_OPER_IN_RAW_OUT_RAW:
+		hw_dif_info->dif_oper = EFCT_HW_SGE_DIFOP_INRAWOUTRAW;
+		hw_dif_info->dif = SLI4_DIF_PASS_THROUGH;
+		break;
+	default:
+		efc_log_test(efct, "unhandled SCSI DIF operation %d\n",
+			      scsi_dif_info->dif_oper);
+		return -1;
+	}
+
+	switch (scsi_dif_info->blk_size) {
+	case EFCT_SCSI_DIF_BK_SIZE_512:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_512;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_1024:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_1024;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_2048:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_2048;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_4096:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_4096;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_520:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_520;
+		break;
+	case EFCT_SCSI_DIF_BK_SIZE_4104:
+		hw_dif_info->blk_size = EFCT_HW_DIF_BK_SIZE_4104;
+		break;
+	default:
+		efc_log_test(efct, "unhandled SCSI DIF block size %d\n",
+			      scsi_dif_info->blk_size);
+		return -1;
+	}
+
+	/* If the operation is an INSERT the tags provided are the
+	 * ones that should be inserted, otherwise they're the ones
+	 * to be checked against.
+	 */
+	if (hw_dif_info->dif == SLI4_DIF_INSERT) {
+		hw_dif_info->ref_tag_repl = scsi_dif_info->ref_tag;
+		hw_dif_info->app_tag_repl = scsi_dif_info->app_tag;
+	} else {
+		hw_dif_info->ref_tag_cmp = scsi_dif_info->ref_tag;
+		hw_dif_info->app_tag_cmp = scsi_dif_info->app_tag;
+	}
+
+	hw_dif_info->check_ref_tag = scsi_dif_info->check_ref_tag;
+	hw_dif_info->check_app_tag = scsi_dif_info->check_app_tag;
+	hw_dif_info->check_guard = scsi_dif_info->check_guard;
+	hw_dif_info->auto_incr_ref_tag = true;
+	hw_dif_info->dif_separate = scsi_dif_info->dif_separate;
+	hw_dif_info->disable_app_ffff = scsi_dif_info->disable_app_ffff;
+	hw_dif_info->disable_app_ref_ffff =
+			scsi_dif_info->disable_app_ref_ffff;
+
+	efct_hw_get(&efct->hw, EFCT_HW_DIF_SEED, &dif_seed);
+	hw_dif_info->dif_seed = dif_seed;
+
+	return 0;
+}
+
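+/* Trace the default (and any overflow) SGL entries of a HW IO */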
+static void efc_log_sgl(struct efct_io *io)
+{
+	struct efct_hw_io *hio = io->hio;
+	struct sli4_sge *data = NULL;
+	u32 *dword = NULL;
+	u32 i;
+	u32 n_sge;
+
+	scsi_io_trace(io, "def_sgl at 0x%x 0x%08x\n",
+		      upper_32_bits(hio->def_sgl.phys),
+		      lower_32_bits(hio->def_sgl.phys));
+	n_sge = (hio->sgl == &hio->def_sgl ?
+			hio->n_sge : hio->def_sgl_count);
+	for (i = 0, data = hio->def_sgl.virt; i < n_sge; i++, data++) {
+		dword = (u32 *)data;
+
+		scsi_io_trace(io, "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
+			      i, dword[0], dword[1], dword[2], dword[3]);
+
+		if (dword[2] & (1U << 31))
+			break;
+	}
+
+	if (hio->ovfl_sgl &&
+	    hio->sgl == hio->ovfl_sgl) {
+		scsi_io_trace(io, "Overflow at 0x%x 0x%08x\n",
+			      upper_32_bits(hio->ovfl_sgl->phys),
+			      lower_32_bits(hio->ovfl_sgl->phys));
+		for (i = 0, data = hio->ovfl_sgl->virt; i < hio->n_sge;
+			i++, data++) {
+			dword = (u32 *)data;
+
+			scsi_io_trace(io,
+				      "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
+				i, dword[0], dword[1], dword[2], dword[3]);
+			if (dword[2] & (1U << 31))
+				break;
+		}
+	}
+}
+
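+/* Async-call context: report a dispatch error through the IO's HW callback */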
+static int
+efct_scsi_check_pending_async_cb(struct efct_hw *hw, int status,
+				 u8 *mqe, void *arg)
+{
+	struct efct_io *io = arg;
+
+	if (io) {
+		if (io->hw_cb) {
+			efct_hw_done_t cb = io->hw_cb;
+
+			io->hw_cb = NULL;
+			(cb)(io->hio, NULL, 0,
+			 SLI4_FC_WCQE_STATUS_DISPATCH_ERROR, 0, io);
+		}
+	}
+	return 0;
+}
+
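+/* Bind the HW IO to this IO and dispatch it to the hardware by IO type */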
+static int
+efct_scsi_io_dispatch_hw_io(struct efct_io *io, struct efct_hw_io *hio)
+{
+	int rc = 0;
+	struct efct *efct = io->efct;
+
+	/* Got a HW IO;
+	 * update ini/tgt_task_tag with HW IO info and dispatch
+	 */
+	io->hio = hio;
+	if (io->cmd_tgt)
+		io->tgt_task_tag = hio->indicator;
+	else if (io->cmd_ini)
+		io->init_task_tag = hio->indicator;
+	io->hw_tag = hio->reqtag;
+
+	hio->eq = io->hw_priv;
+
+	/* Copy WQ steering */
+	switch (io->wq_steering) {
+	case EFCT_SCSI_WQ_STEERING_CLASS >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CLASS;
+		break;
+	case EFCT_SCSI_WQ_STEERING_REQUEST >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_REQUEST;
+		break;
+	case EFCT_SCSI_WQ_STEERING_CPU >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CPU;
+		break;
+	}
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_IO: {
+		u32 max_sgl;
+		u32 total_count;
+		u32 host_allocated;
+
+		efct_hw_get(&efct->hw, EFCT_HW_N_SGL, &max_sgl);
+		efct_hw_get(&efct->hw, EFCT_HW_SGL_CHAINING_HOST_ALLOCATED,
+			    &host_allocated);
+
+		/*
+		 * If the requested SGL is larger than the default size,
+		 * then we can allocate an overflow SGL.
+		 */
+		total_count = efct_scsi_count_sgls(&io->hw_dif,
+						   io->sgl, io->sgl_count);
+
+		/*
+		 * Lancer requires us to allocate the chained memory area
+		 */
+		if (host_allocated && total_count > max_sgl) {
+			/* Compute the count needed: the number of extra
+			 * SGEs plus 1 for the link SGE
+			 */
+			u32 count = total_count - max_sgl + 1;
+
+			io->ovfl_sgl.size = count * sizeof(struct sli4_sge);
+			io->ovfl_sgl.virt =
+				dma_alloc_coherent(&efct->pcidev->dev,
+						   io->ovfl_sgl.size,
+						&io->ovfl_sgl.phys, GFP_DMA);
+			if (!io->ovfl_sgl.virt) {
+				efc_log_err(efct,
+					     "dma alloc overflow sgl failed\n");
+				break;
+			}
+			rc = efct_hw_io_register_sgl(&efct->hw,
+						     io->hio, &io->ovfl_sgl,
+						     count);
+			if (rc) {
+				efct_scsi_io_free_ovfl(io);
+				efc_log_err(efct,
+					     "efct_hw_io_register_sgl() failed\n");
+				break;
+			}
+			/* EVT: update chained_io_count */
+			io->node->chained_io_count++;
+		}
+
+		rc = efct_scsi_build_sgls(&efct->hw, io->hio, &io->hw_dif,
+					  io->sgl, io->sgl_count, io->hio_type);
+		if (rc) {
+			efct_scsi_io_free_ovfl(io);
+			break;
+		}
+
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(efct))
+			efc_log_sgl(io);
+
+		if (io->app_id)
+			io->iparam.fcp_tgt.app_id = io->app_id;
+
+		rc = efct_hw_io_send(&io->efct->hw, io->hio_type, io->hio,
+				     io->wire_len, &io->iparam,
+				     &io->node->rnode, io->hw_cb, io);
+		break;
+	}
+	case EFCT_IO_TYPE_ELS:
+	case EFCT_IO_TYPE_CT: {
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       &io->els_req, io->wire_len,
+			&io->els_rsp, &io->node->rnode, &io->iparam,
+			io->hw_cb, io);
+		break;
+	}
+	case EFCT_IO_TYPE_CT_RESP: {
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       &io->els_rsp, io->wire_len,
+			NULL, &io->node->rnode, &io->iparam,
+			io->hw_cb, io);
+		break;
+	}
+	case EFCT_IO_TYPE_BLS_RESP: {
+		/* no need to update tgt_task_tag for BLS response since
+		 * the RX_ID will be specified by the payload, not the XRI
+		 */
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       NULL, 0, NULL, &io->node->rnode,
+			&io->iparam, io->hw_cb, io);
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = -1;
+		break;
+	}
+	return rc;
+}
+
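+/* Dispatch an IO that does not require a HW IO (currently only aborts) */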
+static int
+efct_scsi_io_dispatch_no_hw_io(struct efct_io *io)
+{
+	int rc;
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_ABORT: {
+		struct efct_hw_io *hio_to_abort = NULL;
+
+		hio_to_abort = io->io_to_abort->hio;
+
+		if (!hio_to_abort) {
+			/*
+			 * If "IO to abort" does not have an
+			 * associated HW IO, immediately make callback with
+			 * success. The command must have been sent to
+			 * the backend, but the data phase has not yet
+			 * started, so we don't have a HW IO.
+			 *
+			 * Note: since the backend shims should be
+			 * taking a reference on io_to_abort, it should not
+			 * be possible to have been completed and freed by
+			 * the backend before the abort got here.
+			 */
+			scsi_io_printf(io, "IO: not active\n");
+			((efct_hw_done_t)io->hw_cb)(io->hio, NULL, 0,
+					SLI4_FC_WCQE_STATUS_SUCCESS, 0, io);
+			rc = 0;
+		} else {
+			/* HW IO is valid, abort it */
+			scsi_io_printf(io, "aborting\n");
+			rc = efct_hw_io_abort(&io->efct->hw, hio_to_abort,
+					      io->send_abts, io->hw_cb, io);
+			if (rc) {
+				int status = SLI4_FC_WCQE_STATUS_SUCCESS;
+
+				if (rc != EFCT_HW_RTN_IO_NOT_ACTIVE &&
+				    rc != EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+					status = -1;
+					scsi_io_printf(io,
+						       "Failed to abort IO: status=%d\n",
+						rc);
+				}
+				((efct_hw_done_t)io->hw_cb)(io->hio,
+						NULL, 0, status, 0, io);
+				rc = 0;
+			}
+		}
+
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = -1;
+		break;
+	}
+	return rc;
+}
+
+/**
+ * Check for pending IOs to dispatch.
+ *
+ * If there are IOs on the pending list, and a HW IO is available, then
+ * dispatch the IOs.
+ */
+void
+efct_scsi_check_pending(struct efct *efct)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io *io = NULL;
+	struct efct_hw_io *hio;
+	int status;
+	int count = 0;
+	int dispatch;
+	unsigned long flags = 0;
+
+	/* Guard against recursion */
+	if (atomic_add_return(1, &xport->io_pending_recursing)) {
+		/* This function is already running.  Decrement and return. */
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	do {
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		status = 0;
+		hio = NULL;
+		if (!list_empty(&xport->io_pending_list)) {
+			io = list_first_entry(&xport->io_pending_list,
+					      struct efct_io,
+					      io_pending_link);
+		}
+		if (io) {
+			list_del(&io->io_pending_link);
+			if (io->io_type == EFCT_IO_TYPE_ABORT) {
+				hio = NULL;
+			} else {
+				hio = efct_hw_io_alloc(&efct->hw);
+				if (!hio) {
+					/*
+					 * No HW IO available. Put the IO back
+					 * on the front of the pending list.
+					 */
+					list_add(&io->io_pending_link,
+						 &xport->io_pending_list);
+					io = NULL;
+				} else {
+					hio->eq = io->hw_priv;
+				}
+			}
+		}
+		/* Must drop the lock before dispatching the IO */
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		if (io) {
+			count++;
+
+			/*
+			 * We pulled an IO off the pending list,
+			 * and either got an HW IO or don't need one
+			 */
+			atomic_sub_return(1, &xport->io_pending_count);
+			if (!hio)
+				status = efct_scsi_io_dispatch_no_hw_io(io);
+			else
+				status = efct_scsi_io_dispatch_hw_io(io, hio);
+			if (status) {
+				/*
+				 * Invoke the HW callback, but do so in a
+				 * separate execution context, provided by the
+				 * NOP mailbox completion processing context
+				 * of efct_hw_async_call().
+				 */
+				if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb,
+					io)) {
+					efc_log_test(efct,
+						      "call hw async failed\n");
+				}
+			}
+		}
+	} while (io);
+
+	/*
+	 * If nothing was removed from the list,
+	 * we might be in a case where we need to abort an
+	 * active IO and the abort is on the pending list.
+	 * Look for an abort we can dispatch.
+	 */
+	if (count == 0) {
+		dispatch = 0;
+
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		list_for_each_entry(io, &xport->io_pending_list,
+				    io_pending_link) {
+			if (io->io_type == EFCT_IO_TYPE_ABORT) {
+				if (io->io_to_abort->hio) {
+					/* This IO has a HW IO, so it is
+					 * active.  Dispatch the abort.
+					 */
+					dispatch = 1;
+				} else {
+					/* Leave this abort on the pending
+					 * list and keep looking
+					 */
+					dispatch = 0;
+				}
+			}
+			if (dispatch) {
+				list_del(&io->io_pending_link);
+				atomic_sub_return(1, &xport->io_pending_count);
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		if (dispatch) {
+			status = efct_scsi_io_dispatch_no_hw_io(io);
+			if (status) {
+				if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb,
+					io)) {
+					efc_log_test(efct,
+						      "call to hw async failed\n");
+				}
+			}
+		}
+	}
+
+	atomic_sub_return(1, &xport->io_pending_recursing);
+}
+
+/**
+ * An IO is dispatched:
+ * - if the pending list is not empty, add IO to pending list
+ *   and call a function to process the pending list.
+ * - if pending list is empty, try to allocate a HW IO. If none
+ *   is available, place this IO at the tail of the pending IO
+ *   list.
+ * - if HW IO is available, attach this IO to the HW IO and
+ *   submit it.
+ */
+int
+efct_scsi_io_dispatch(struct efct_io *io, void *cb)
+{
+	struct efct_hw_io *hio;
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * If this IO already has a HW IO, then this is not the first
+	 * phase of the IO. Send it to the HW.
+	 */
+	if (io->hio)
+		return efct_scsi_io_dispatch_hw_io(io, io->hio);
+
+	/*
+	 * We don't already have a HW IO associated with the IO. First check
+	 * the pending list. If not empty, add IO to the tail and process the
+	 * pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+		if (!list_empty(&xport->io_pending_list)) {
+			/*
+			 * If this is a low latency request, put it at the
+			 * front of the IO pending queue; otherwise put it
+			 * at the end of the queue.
+			 */
+			if (io->low_latency) {
+				INIT_LIST_HEAD(&io->io_pending_link);
+				list_add(&io->io_pending_link,
+					 &xport->io_pending_list);
+			} else {
+				INIT_LIST_HEAD(&io->io_pending_link);
+				list_add_tail(&io->io_pending_link,
+					      &xport->io_pending_list);
+			}
+			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+			atomic_add_return(1, &xport->io_pending_count);
+			atomic_add_return(1, &xport->io_total_pending);
+
+			/* process pending list */
+			efct_scsi_check_pending(efct);
+			return 0;
+		}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/*
+	 * We don't have a HW IO associated with the IO and there's nothing
+	 * on the pending list. Attempt to allocate a HW IO and dispatch it.
+	 */
+	hio = efct_hw_io_alloc(&io->efct->hw);
+	if (!hio) {
+		/* Couldn't get a HW IO. Save this IO on the pending list */
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		INIT_LIST_HEAD(&io->io_pending_link);
+		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		atomic_add_return(1, &xport->io_total_pending);
+		atomic_add_return(1, &xport->io_pending_count);
+		return 0;
+	}
+
+	/* We successfully allocated a HW IO; dispatch to HW */
+	return efct_scsi_io_dispatch_hw_io(io, hio);
+}
+
+/**
+ * An Abort IO is dispatched:
+ * - if the pending list is not empty, add IO to pending list
+ *   and call a function to process the pending list.
+ * - if pending list is empty, send abort to the HW.
+ */
+
+int
+efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb)
+{
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * For aborts, we don't need a HW IO, but we still want
+	 * to pass through the pending list to preserve ordering.
+	 * Thus, if the pending list is not empty, add this abort
+	 * to the pending list and process the pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+		if (!list_empty(&xport->io_pending_list)) {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add_tail(&io->io_pending_link,
+				      &xport->io_pending_list);
+			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+			atomic_add_return(1, &xport->io_pending_count);
+			atomic_add_return(1, &xport->io_total_pending);
+
+			/* process pending list */
+			efct_scsi_check_pending(efct);
+			return 0;
+		}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/* nothing on pending list, dispatch abort */
+	return efct_scsi_io_dispatch_no_hw_io(io);
+}
+
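+/*
+ * Common data-phase setup for target reads and writes: convert DIF info,
+ * trim the SGL for any overrun, and dispatch the IO to the hardware.
+ */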
+static inline int
+efct_scsi_xfer_data(struct efct_io *io, u32 flags,
+		    struct efct_scsi_dif_info *dif_info,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 xwire_len,
+	enum efct_hw_io_type type, int enable_ar,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc;
+	struct efct *efct;
+	size_t residual = 0;
+
+	if (dif_info &&
+	    dif_info->dif_oper == EFCT_SCSI_DIF_OPER_DISABLED)
+		dif_info = NULL;
+
+	io->sgl_count = sgl_count;
+
+	efct = io->efct;
+
+	scsi_io_trace(io, "%s wire_len %llu\n",
+		      (type == EFCT_HW_IO_TARGET_READ) ? "send" : "recv",
+		      xwire_len);
+
+	io->hio_type = type;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	rc = efct_scsi_convert_dif_info(efct, dif_info, &io->hw_dif);
+	if (rc)
+		return rc;
+
+	/* If DIF is used, then save lba for error recovery */
+	if (dif_info)
+		io->scsi_dif_info = *dif_info;
+
+	residual = io->exp_xfer_len - io->transferred;
+	io->wire_len = (xwire_len < residual) ? xwire_len : residual;
+	residual = (xwire_len - io->wire_len);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = io->transferred;
+	io->iparam.fcp_tgt.dif_oper = io->hw_dif.dif;
+	io->iparam.fcp_tgt.blk_size = io->hw_dif.blk_size;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* if this is the last data phase and there is no residual, enable
+	 * auto-good-response
+	 */
+	if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) &&
+	    residual == 0 &&
+		((io->transferred + io->wire_len) == io->exp_xfer_len) &&
+		(!(flags & EFCT_SCSI_NO_AUTO_RESPONSE))) {
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+		io->auto_resp = true;
+	} else {
+		io->auto_resp = false;
+	}
+
+	/* save this transfer length */
+	io->xfer_req = io->wire_len;
+
+	/* Adjust the transferred count to account for overrun
+	 * when the residual is calculated in efct_scsi_send_resp
+	 */
+	io->transferred += residual;
+
+	/* Adjust the SGL size if there is overrun */
+
+	if (residual) {
+		struct efct_scsi_sgl  *sgl_ptr = &io->sgl[sgl_count - 1];
+
+		while (residual) {
+			size_t len = sgl_ptr->len;
+
+			if (len > residual) {
+				sgl_ptr->len = len - residual;
+				residual = 0;
+			} else {
+				sgl_ptr->len = 0;
+				residual -= len;
+				io->sgl_count--;
+			}
+			sgl_ptr--;
+		}
+	}
+
+	/* Set latency and WQ steering */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (efct->xport) {
+		struct efct_xport *xport = efct->xport;
+
+		if (type == EFCT_HW_IO_TARGET_READ) {
+			xport->fcp_stats.input_requests++;
+			xport->fcp_stats.input_bytes += xwire_len;
+		} else if (type == EFCT_HW_IO_TARGET_WRITE) {
+			xport->fcp_stats.output_requests++;
+			xport->fcp_stats.output_bytes += xwire_len;
+		}
+	}
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
+
+int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
+		       struct efct_scsi_dif_info *dif_info,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, dif_info, sgl, sgl_count,
+				 len, EFCT_HW_IO_TARGET_READ,
+				 enable_tsend_auto_resp(io->efct), cb, arg);
+}
+
+int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
+		       struct efct_scsi_dif_info *dif_info,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, dif_info, sgl, sgl_count, len,
+				 EFCT_HW_IO_TARGET_WRITE,
+				 enable_treceive_auto_resp(io->efct), cb, arg);
+}
+
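+/* Send the FCP response, including any sense data, for a target IO */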
+int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+		    struct efct_scsi_cmd_resp *rsp,
+		   efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	int residual;
+	bool auto_resp = true;		/* Always try auto resp */
+	u8 scsi_status = 0;
+	u16 scsi_status_qualifier = 0;
+	u8 *sense_data = NULL;
+	u32 sense_data_length = 0;
+
+	efct = io->efct;
+
+	efct_scsi_convert_dif_info(efct, NULL, &io->hw_dif);
+
+	if (rsp) {
+		scsi_status = rsp->scsi_status;
+		scsi_status_qualifier = rsp->scsi_status_qualifier;
+		sense_data = rsp->sense_data;
+		sense_data_length = rsp->sense_data_length;
+		residual = rsp->residual;
+	} else {
+		residual = io->exp_xfer_len - io->transferred;
+	}
+
+	io->wire_len = 0;
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* Set low latency queueing request */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (scsi_status != 0 || residual || sense_data_length) {
+		struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
+		u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
+
+		if (!fcprsp) {
+			efc_log_err(efct, "NULL response buffer\n");
+			return -1;
+		}
+
+		auto_resp = false;
+
+		memset(fcprsp, 0, sizeof(*fcprsp));
+
+		io->wire_len += sizeof(*fcprsp);
+
+		fcprsp->resp.fr_status = scsi_status;
+		fcprsp->resp.fr_retry_delay =
+			cpu_to_be16(scsi_status_qualifier);
+
+		/* set residual status if necessary */
+		if (residual != 0) {
+			/* FCP: if data transferred is less than the
+			 * amount expected, then this is an underflow.
+			 * If data transferred would have been greater
+			 * than the amount expected this is an overflow
+			 */
+			if (residual > 0) {
+				fcprsp->resp.fr_flags |= FCP_RESID_UNDER;
+				fcprsp->ext.fr_resid =	cpu_to_be32(residual);
+			} else {
+				fcprsp->resp.fr_flags |= FCP_RESID_OVER;
+				fcprsp->ext.fr_resid = cpu_to_be32(-residual);
+			}
+		}
+
+		if (EFCT_SCSI_SNS_BUF_VALID(sense_data) && sense_data_length) {
+			if (sense_data_length > SCSI_SENSE_BUFFERSIZE) {
+				efc_log_err(efct, "Sense exceeds max size.\n");
+				return -1;
+			}
+
+			fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
+			memcpy(sns_data, sense_data, sense_data_length);
+			fcprsp->ext.fr_sns_len = cpu_to_be32(sense_data_length);
+			io->wire_len += sense_data_length;
+		}
+
+		io->sgl[0].addr = io->rspbuf.phys;
+		io->sgl[0].dif_addr = 0;
+		io->sgl[0].len = io->wire_len;
+		io->sgl_count = 1;
+	}
+
+	if (auto_resp)
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
+
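+/* Completion callback for a BLS (BA_ACC) response */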
+static int
+efct_target_bls_resp_cb(struct efct_hw_io *hio,
+			struct efc_remote_node *rnode,
+	u32 length, int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status bls_status;
+
+	efct = io->efct;
+
+	/* BLS isn't really a "SCSI" concept, but use SCSI status */
+	if (status) {
+		io_error_log(io, "s=%#x x=%#x\n", status, ext_status);
+		bls_status = EFCT_SCSI_STATUS_ERROR;
+	} else {
+		bls_status = EFCT_SCSI_STATUS_GOOD;
+	}
+
+	if (io->bls_cb) {
+		efct_scsi_io_cb_t bls_cb = io->bls_cb;
+		void *bls_cb_arg = io->bls_cb_arg;
+
+		io->bls_cb = NULL;
+		io->bls_cb_arg = NULL;
+
+		/* invoke callback */
+		bls_cb(io, bls_status, 0, bls_cb_arg);
+	}
+
+	efct_scsi_check_pending(efct);
+	return 0;
+}
+
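+/* Build and send a BA_ACC for the ABTS being responded to */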
+static int
+efct_target_send_bls_resp(struct efct_io *io,
+			  efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc;
+	struct fc_ba_acc *acc;
+
+	/* fill out IO structure with everything needed to send BA_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls.ox_id = io->init_task_tag;
+	io->iparam.bls.rx_id = io->abort_rx_id;
+
+	acc = (void *)io->iparam.bls.payload;
+
+	memset(io->iparam.bls.payload, 0,
+	       sizeof(io->iparam.bls.payload));
+	acc->ba_ox_id = cpu_to_be16(io->iparam.bls.ox_id);
+	acc->ba_rx_id = cpu_to_be16(io->iparam.bls.rx_id);
+	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	/* generic io fields have already been populated */
+
+	/* set type and BLS-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "bls_rsp";
+	io->hio_type = EFCT_HW_BLS_ACC;
+	io->bls_cb = cb;
+	io->bls_cb_arg = arg;
+
+	/* dispatch IO */
+	rc = efct_scsi_io_dispatch(io, efct_target_bls_resp_cb);
+	return rc;
+}
+
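+/* Send a TMF response; an ABORT TASK TMF is completed with a BA_ACC instead */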
+int
+efct_scsi_send_tmf_resp(struct efct_io *io,
+			enum efct_scsi_tmf_resp rspcode,
+			u8 addl_rsp_info[3],
+			efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc = -1;
+	struct efct *efct = NULL;
+	struct fcp_resp_with_ext *fcprsp = NULL;
+	struct fcp_resp_rsp_info *rspinfo = NULL;
+	u8 fcp_rspcode;
+
+	efct = io->efct;
+
+	io->wire_len = 0;
+	efct_scsi_convert_dif_info(efct, NULL, &io->hw_dif);
+
+	switch (rspcode) {
+	case EFCT_SCSI_TMF_FUNCTION_COMPLETE:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_SUCCEEDED:
+	case EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_REJECTED:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	case EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER:
+		fcp_rspcode = FCP_TMF_INVALID_LUN;
+		break;
+	case EFCT_SCSI_TMF_SERVICE_DELIVERY:
+		fcp_rspcode = FCP_TMF_FAILED;
+		break;
+	default:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	}
+
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	if (io->tmf_cmd == EFCT_SCSI_TMF_ABORT_TASK) {
+		rc = efct_target_send_bls_resp(io, cb, arg);
+		return rc;
+	}
+
+	/* populate the FCP TMF response */
+	fcprsp = io->rspbuf.virt;
+	memset(fcprsp, 0, sizeof(*fcprsp));
+
+	fcprsp->resp.fr_flags |= FCP_RSP_LEN_VAL;
+
+	rspinfo = io->rspbuf.virt + sizeof(*fcprsp);
+	if (addl_rsp_info) {
+		memcpy(rspinfo->_fr_resvd, addl_rsp_info,
+		       sizeof(rspinfo->_fr_resvd));
+	}
+	rspinfo->rsp_code = fcp_rspcode;
+
+	io->wire_len = sizeof(*fcprsp) + sizeof(*rspinfo);
+
+	fcprsp->ext.fr_rsp_len = cpu_to_be32(sizeof(*rspinfo));
+
+	io->sgl[0].addr = io->rspbuf.phys;
+	io->sgl[0].dif_addr = 0;
+	io->sgl[0].len = io->wire_len;
+	io->sgl_count = 1;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	rc = efct_scsi_io_dispatch(io, efct_target_io_cb);
+
+	return rc;
+}
+
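+/*
+ * Completion callback for a target abort request: invoke the caller's
+ * callback and drop the reference taken on the aborted IO.
+ */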
+static int
+efct_target_abort_cb(struct efct_hw_io *hio,
+		     struct efc_remote_node *rnode,
+		     u32 length, int status,
+		     u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status scsi_status;
+
+	efct = io->efct;
+
+	if (io->abort_cb) {
+		efct_scsi_io_cb_t abort_cb = io->abort_cb;
+		void *abort_cb_arg = io->abort_cb_arg;
+
+		io->abort_cb = NULL;
+		io->abort_cb_arg = NULL;
+
+		switch (status) {
+		case SLI4_FC_WCQE_STATUS_SUCCESS:
+			scsi_status = EFCT_SCSI_STATUS_GOOD;
+			break;
+		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+			switch (ext_status) {
+			case SLI4_FC_LOCAL_REJECT_NO_XRI:
+				scsi_status = EFCT_SCSI_STATUS_NO_IO;
+				break;
+			case SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS:
+				scsi_status =
+					EFCT_SCSI_STATUS_ABORT_IN_PROGRESS;
+				break;
+			default:
+				/* we have seen 0x15 (abort in progress) */
+				scsi_status = EFCT_SCSI_STATUS_ERROR;
+				break;
+			}
+			break;
+		case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+			scsi_status = EFCT_SCSI_STATUS_CHECK_RESPONSE;
+			break;
+		default:
+			scsi_status = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+		/* invoke callback */
+		abort_cb(io->io_to_abort, scsi_status, 0, abort_cb_arg);
+	}
+
+	/* done with IO to abort; drop ref taken in efct_scsi_tgt_abort_io() */
+	kref_put(&io->io_to_abort->ref, io->io_to_abort->release);
+
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+
+	efct_scsi_check_pending(efct);
+	return 0;
+}
+
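+/* Abort a target IO: allocate a separate abort IO and dispatch it */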
+int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	int rc;
+	struct efct_io *abort_io = NULL;
+
+	efct = io->efct;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if (!kref_get_unless_zero(&io->ref)) {
+		/* command no longer active */
+		scsi_io_printf(io, "command no longer active\n");
+		return -1;
+	}
+
+	/*
+	 * Allocate a new IO to send the abort request. Use
+	 * efct_io_pool_io_alloc() directly, as we need an IO object that
+	 * will not fail allocation due to allocations being disabled
+	 * (as they can be in efct_scsi_io_alloc())
+	 */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		kref_put(&io->ref, io->release);
+		return -1;
+	}
+
+	/* Save the target server callback and argument */
+	/* set generic fields */
+	abort_io->cmd_tgt = true;
+	abort_io->node = io->node;
+
+	/* set type and abort-specific fields */
+	abort_io->io_type = EFCT_IO_TYPE_ABORT;
+	abort_io->display_name = "tgt_abort";
+	abort_io->io_to_abort = io;
+	abort_io->send_abts = false;
+	abort_io->abort_cb = cb;
+	abort_io->abort_cb_arg = arg;
+
+	/* now dispatch IO */
+	rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb);
+	if (rc)
+		kref_put(&io->ref, io->release);
+	return rc;
+}
+
+void
+efct_scsi_io_complete(struct efct_io *io)
+{
+	if (io->io_free) {
+		efc_log_test(io->efct,
+			      "Got completion for non-busy io with tag 0x%x\n",
+		    io->tag);
+		return;
+	}
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	kref_put(&io->ref, io->release);
+}
+
+u32
+efct_scsi_get_property(struct efct *efct, enum efct_scsi_property prop)
+{
+	struct efct_xport *xport = efct->xport;
+	u32	val;
+
+	switch (prop) {
+	case EFCT_SCSI_MAX_SGE:
+		if (efct_hw_get(&efct->hw, EFCT_HW_MAX_SGE, &val) == 0)
+			return val;
+		break;
+	case EFCT_SCSI_MAX_SGL:
+		if (efct_hw_get(&efct->hw, EFCT_HW_N_SGL, &val) == 0)
+			return val;
+		break;
+	case EFCT_SCSI_MAX_IOS:
+		return efct_io_pool_allocated(xport->io_pool);
+	case EFCT_SCSI_DIF_CAPABLE:
+		if (efct_hw_get(&efct->hw,
+				EFCT_HW_DIF_CAPABLE, &val) == 0) {
+			return val;
+		}
+		break;
+	case EFCT_SCSI_MAX_FIRST_BURST:
+		return 0;
+	case EFCT_SCSI_DIF_MULTI_SEPARATE:
+		if (efct_hw_get(&efct->hw,
+				EFCT_HW_DIF_MULTI_SEPARATE, &val) == 0) {
+			return val;
+		}
+		break;
+	case EFCT_SCSI_ENABLE_TASK_SET_FULL:
+		/* Return FALSE if we are send frame capable */
+		if (efct_hw_get(&efct->hw,
+				EFCT_HW_SEND_FRAME_CAPABLE, &val) == 0) {
+			return !val;
+		}
+		break;
+	default:
+		break;
+	}
+
+	efc_log_debug(efct, "invalid property request %d\n", prop);
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_scsi.h b/drivers/scsi/elx/efct/efct_scsi.h
new file mode 100644
index 000000000000..cefbaa38d99a
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.h
@@ -0,0 +1,313 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_SCSI_H__)
+#define __EFCT_SCSI_H__
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_fc.h>
+
+/* efct_scsi_rcv_cmd() efct_scsi_rcv_tmf() flags */
+#define EFCT_SCSI_CMD_DIR_IN		(1 << 0)
+#define EFCT_SCSI_CMD_DIR_OUT		(1 << 1)
+#define EFCT_SCSI_CMD_SIMPLE		(1 << 2)
+#define EFCT_SCSI_CMD_HEAD_OF_QUEUE	(1 << 3)
+#define EFCT_SCSI_CMD_ORDERED		(1 << 4)
+#define EFCT_SCSI_CMD_UNTAGGED		(1 << 5)
+#define EFCT_SCSI_CMD_ACA		(1 << 6)
+#define EFCT_SCSI_FIRST_BURST_ERR	(1 << 7)
+#define EFCT_SCSI_FIRST_BURST_ABORTED	(1 << 8)
+
+/* efct_scsi_send_rd_data/recv_wr_data/send_resp flags */
+#define EFCT_SCSI_LAST_DATAPHASE	(1 << 0)
+#define EFCT_SCSI_NO_AUTO_RESPONSE	(1 << 1)
+#define EFCT_SCSI_LOW_LATENCY		(1 << 2)
+
+#define EFCT_SCSI_SNS_BUF_VALID(sense)	((sense) && \
+			(0x70 == (((const u8 *)(sense))[0] & 0x70)))
+
+#define EFCT_SCSI_WQ_STEERING_SHIFT	16
+#define EFCT_SCSI_WQ_STEERING_MASK	(0xf << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CLASS	(0 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_REQUEST	(1 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CPU	(2 << EFCT_SCSI_WQ_STEERING_SHIFT)
+
+#define EFCT_SCSI_WQ_CLASS_SHIFT		(20)
+#define EFCT_SCSI_WQ_CLASS_MASK		(0xf << EFCT_SCSI_WQ_CLASS_SHIFT)
+#define EFCT_SCSI_WQ_CLASS(x)		(((x) << EFCT_SCSI_WQ_CLASS_SHIFT) & \
+						EFCT_SCSI_WQ_CLASS_MASK)
+
+#define EFCT_SCSI_WQ_CLASS_LOW_LATENCY	1
+
+struct efct_scsi_cmd_resp {
+	u8 scsi_status;			/* SCSI status */
+	u16 scsi_status_qualifier;	/* SCSI status qualifier */
+	/* pointer to response data buffer */
+	u8 *response_data;
+	/* length of response data buffer (bytes) */
+	u32 response_data_length;
+	u8 *sense_data;		/* pointer to sense data buffer */
+	/* length of sense data buffer (bytes) */
+	u32 sense_data_length;
+	/* command residual (not used for target), positive value
+	 * indicates an underflow, negative value indicates overflow
+	 */
+	int residual;
+	/* Command response length received in wcqe */
+	u32 response_wire_length;
+};
+
+struct efct_vport {
+	struct efct		*efct;
+	bool			is_vport;
+	struct fc_host_statistics fc_host_stats;
+	struct Scsi_Host	*shost;
+	struct fc_vport		*fc_vport;
+	u64			npiv_wwpn;
+	u64			npiv_wwnn;
+};
+
+/* Status values returned by IO callbacks */
+enum efct_scsi_io_status {
+	EFCT_SCSI_STATUS_GOOD = 0,
+	EFCT_SCSI_STATUS_ABORTED,
+	EFCT_SCSI_STATUS_ERROR,
+	EFCT_SCSI_STATUS_DIF_GUARD_ERR,
+	EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR,
+	EFCT_SCSI_STATUS_PROTOCOL_CRC_ERROR,
+	EFCT_SCSI_STATUS_NO_IO,
+	EFCT_SCSI_STATUS_ABORT_IN_PROGRESS,
+	EFCT_SCSI_STATUS_CHECK_RESPONSE,
+	EFCT_SCSI_STATUS_COMMAND_TIMEOUT,
+	EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED,
+	EFCT_SCSI_STATUS_SHUTDOWN,
+	EFCT_SCSI_STATUS_NEXUS_LOST,
+};
+
+struct efct_io;
+struct efc_node;
+struct efc_domain;
+struct efc_sli_port;
+
+/* Callback used by send_rd_data(), recv_wr_data(), send_resp() */
+typedef int (*efct_scsi_io_cb_t)(struct efct_io *io,
+				    enum efct_scsi_io_status status,
+				    u32 flags, void *arg);
+
+/* Callback used by send_rd_io(), send_wr_io() */
+typedef int (*efct_scsi_rsp_io_cb_t)(struct efct_io *io,
+			enum efct_scsi_io_status status,
+			struct efct_scsi_cmd_resp *rsp,
+			u32 flags, void *arg);
+
+/* efct_scsi_cb_t flags */
+#define EFCT_SCSI_IO_CMPL		(1 << 0)
+/* IO completed, response sent */
+#define EFCT_SCSI_IO_CMPL_RSP_SENT	(1 << 1)
+#define EFCT_SCSI_IO_ABORTED		(1 << 2)
+
+/* efct_scsi_recv_tmf() request values */
+enum efct_scsi_tmf_cmd {
+	EFCT_SCSI_TMF_ABORT_TASK = 1,
+	EFCT_SCSI_TMF_QUERY_TASK_SET,
+	EFCT_SCSI_TMF_ABORT_TASK_SET,
+	EFCT_SCSI_TMF_CLEAR_TASK_SET,
+	EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT,
+	EFCT_SCSI_TMF_LOGICAL_UNIT_RESET,
+	EFCT_SCSI_TMF_CLEAR_ACA,
+	EFCT_SCSI_TMF_TARGET_RESET,
+};
+
+/* efct_scsi_send_tmf_resp() response values */
+enum efct_scsi_tmf_resp {
+	EFCT_SCSI_TMF_FUNCTION_COMPLETE = 1,
+	EFCT_SCSI_TMF_FUNCTION_SUCCEEDED,
+	EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND,
+	EFCT_SCSI_TMF_FUNCTION_REJECTED,
+	EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER,
+	EFCT_SCSI_TMF_SERVICE_DELIVERY,
+};
+
+/**
+ * property names for efct_scsi_get_property() functions
+ */
+
+enum efct_scsi_property {
+	EFCT_SCSI_MAX_SGE,
+	EFCT_SCSI_MAX_SGL,
+	EFCT_SCSI_WWNN,
+	EFCT_SCSI_WWPN,
+	EFCT_SCSI_SERIALNUMBER,
+	EFCT_SCSI_PARTNUMBER,
+	EFCT_SCSI_PORTNUM,
+	EFCT_SCSI_BIOS_VERSION_STRING,
+	EFCT_SCSI_MAX_IOS,
+	EFCT_SCSI_DIF_CAPABLE,
+	EFCT_SCSI_DIF_MULTI_SEPARATE,
+	EFCT_SCSI_MAX_FIRST_BURST,
+	EFCT_SCSI_ENABLE_TASK_SET_FULL,
+};
+
+#define DIF_SIZE		8
+
+/* T10 DIF operations */
+enum efct_scsi_dif_oper {
+	EFCT_SCSI_DIF_OPER_DISABLED,
+	EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CRC,
+	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_NODIF,
+	EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CHKSUM,
+	EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_NODIF,
+	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CRC,
+	EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CHKSUM,
+	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CHKSUM,
+	EFCT_SCSI_DIF_OPER_IN_CHKSUM_OUT_CRC,
+	EFCT_SCSI_DIF_OPER_IN_RAW_OUT_RAW,
+};
+
+#define EFCT_SCSI_DIF_OPER_PASS_THRU	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_CRC
+#define EFCT_SCSI_DIF_OPER_STRIP	EFCT_SCSI_DIF_OPER_IN_CRC_OUT_NODIF
+#define EFCT_SCSI_DIF_OPER_INSERT	EFCT_SCSI_DIF_OPER_IN_NODIF_OUT_CRC
+
+/* T10 DIF block sizes */
+enum efct_scsi_dif_blk_size {
+	EFCT_SCSI_DIF_BK_SIZE_512,
+	EFCT_SCSI_DIF_BK_SIZE_1024,
+	EFCT_SCSI_DIF_BK_SIZE_2048,
+	EFCT_SCSI_DIF_BK_SIZE_4096,
+	EFCT_SCSI_DIF_BK_SIZE_520,
+	EFCT_SCSI_DIF_BK_SIZE_4104
+};
+
+struct efct_scsi_sgl {
+	uintptr_t	addr;
+	uintptr_t	dif_addr;
+	size_t		len;
+};
+
+/* T10 DIF information passed to the transport */
+struct efct_scsi_dif_info {
+	enum efct_scsi_dif_oper dif_oper;
+	enum efct_scsi_dif_blk_size blk_size;
+	u32 ref_tag;
+	bool check_ref_tag;
+	bool check_app_tag;
+	bool check_guard;
+	bool dif_separate;
+
+	/* If the APP TAG is 0xFFFF, disable checking
+	 * the REF TAG and CRC fields
+	 */
+	bool disable_app_ffff;
+
+	/* if the APP TAG is 0xFFFF and REF TAG is 0xFFFF_FFFF,
+	 * disable checking the received CRC field.
+	 */
+	bool disable_app_ref_ffff;
+	u64 lba;
+	u16 app_tag;
+};
+
+/* Return values for calls from base driver to libefc */
+#define EFCT_SCSI_CALL_COMPLETE	0 /* All work is done */
+#define EFCT_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
+
+enum efct_scsi_io_role {
+	EFCT_SCSI_IO_ROLE_ORIGINATOR,
+	EFCT_SCSI_IO_ROLE_RESPONDER,
+};
+
+void efct_scsi_io_alloc_enable(struct efc *efc, struct efc_node *node);
+void efct_scsi_io_alloc_disable(struct efc *efc, struct efc_node *node);
+extern struct efct_io *
+efct_scsi_io_alloc(struct efc_node *node, enum efct_scsi_io_role);
+void efct_scsi_io_free(struct efct_io *io);
+struct efct_io *efct_io_get_instance(struct efct *efct, u32 index);
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+int efct_scsi_tgt_new_device(struct efct *efct);
+int efct_scsi_tgt_del_device(struct efct *efct);
+int
+efct_scsi_tgt_new_domain(struct efc *efc, struct efc_domain *domain);
+void
+efct_scsi_tgt_del_domain(struct efc *efc, struct efc_domain *domain);
+int
+efct_scsi_tgt_new_sport(struct efc *efc, struct efc_sli_port *sport);
+void
+efct_scsi_tgt_del_sport(struct efc *efc, struct efc_sli_port *sport);
+int
+efct_scsi_validate_initiator(struct efc *efc, struct efc_node *node);
+int
+efct_scsi_new_initiator(struct efc *efc, struct efc_node *node);
+
+enum efct_scsi_del_initiator_reason {
+	EFCT_SCSI_INITIATOR_DELETED,
+	EFCT_SCSI_INITIATOR_MISSING,
+};
+
+extern int
+efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
+			int reason);
+extern int
+efct_scsi_recv_cmd(struct efct_io *io, uint64_t lun, u8 *cdb,
+		   u32 cdb_len, u32 flags);
+extern int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
+		   enum efct_scsi_tmf_cmd cmd, struct efct_io *abortio,
+		  u32 flags);
+
+extern int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
+		       struct efct_scsi_dif_info *dif_info,
+		      struct efct_scsi_sgl *sgl, u32 sgl_count,
+		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
+		       struct efct_scsi_dif_info *dif_info,
+		      struct efct_scsi_sgl *sgl, u32 sgl_count,
+		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+		    struct efct_scsi_cmd_resp *rsp, efct_scsi_io_cb_t cb,
+		   void *arg);
+extern int
+efct_scsi_send_tmf_resp(struct efct_io *io,
+			enum efct_scsi_tmf_resp rspcode,
+		       u8 addl_rsp_info[3],
+		       efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg);
+
+void efct_scsi_io_complete(struct efct_io *io);
+
+extern u32
+efct_scsi_get_property(struct efct *efct, enum efct_scsi_property prop);
+
+int efct_scsi_reg_fc_transport(void);
+int efct_scsi_release_fc_transport(void);
+int efct_scsi_new_device(struct efct *efct);
+int efct_scsi_del_device(struct efct *efct);
+void _efct_scsi_io_free(struct kref *arg);
+
+int efct_scsi_send_tmf(struct efc_node *node,
+		       struct efct_io *io,
+		       struct efct_io *io_to_abort, u32 lun,
+		       enum efct_scsi_tmf_cmd tmf,
+		       struct efct_scsi_sgl *sgl,
+		       u32 sgl_count, u32 len,
+		       efct_scsi_rsp_io_cb_t cb, void *arg);
+
+extern int
+efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost);
+extern struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev);
+
+int efct_scsi_io_dispatch(struct efct_io *io, void *cb);
+int efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb);
+void efct_scsi_check_pending(struct efct *efct);
+
+#endif /* __EFCT_SCSI_H__ */
-- 
2.13.7



* [PATCH v2 24/32] elx: efct: LIO backend interface routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (22 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 23/32] elx: efct: SCSI IO handling routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  3:56   ` Bart Van Assche
  2019-12-20 22:37 ` [PATCH v2 25/32] elx: efct: Hardware IO submission routines James Smart
                   ` (8 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
LIO backend template registration and template functions.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_lio.c | 1921 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_lio.h |  192 ++++
 2 files changed, 2113 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h

diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
new file mode 100644
index 000000000000..89bd8c0efb24
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.c
@@ -0,0 +1,1921 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+
+#include <scsi/scsi_tcq.h>
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+
+#include "efct_lio.h"
+
+static struct workqueue_struct *lio_wq;
+
+static int
+efct_format_wwn(char *str, size_t len, char *pre, u64 wwn)
+{
+	u8 a[8];
+
+	put_unaligned_be64(wwn, a);
+	return snprintf(str, len,
+			"%s%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
+			pre, a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7]);
+}
+
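+/* Parse a WWN string (colon separated, or plain hex for NPIV) into a u64 */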
+static int
+efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
+{
+	int a[8], num;
+	u8 b[8];
+
+	if (npiv) {
+		num = sscanf(name, "%02x%02x%02x%02x%02x%02x%02x%02x",
+			     &a[0], &a[1], &a[2], &a[3], &a[4],
+				 &a[5], &a[6], &a[7]);
+	} else {
+		num = sscanf(name,
+			     "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
+			     &a[0], &a[1], &a[2], &a[3], &a[4],
+			     &a[5], &a[6], &a[7]);
+	}
+
+	if (num != 8)
+		return -EINVAL;
+
+	for (num = 0; num < 8; ++num)
+		b[num] = (u8) a[num];
+
+	*wwp = get_unaligned_be64(b);
+	return 0;
+}
+
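+/* Parse an NPIV "<wwpn>:<wwnn>" configfs name into its two WWNs */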
+static int
+efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn)
+{
+	unsigned int cnt = size;
+	int rc;
+
+	*wwpn = *wwnn = 0;
+	if (name[cnt - 1] == '\n' || name[cnt - 1] == 0)
+		cnt--;
+
+	/* validate we have the exact "<wwpn>:<wwnn>" length */
+	if ((cnt != (16 + 1 + 16)) || (name[16] != ':'))
+		return -EINVAL;
+
+	rc = efct_lio_parse_wwn(&name[0], wwpn, 1);
+	if (rc)
+		return rc;
+
+	rc = efct_lio_parse_wwn(&name[17], wwnn, 1);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+static ssize_t
+efct_lio_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
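+/* configfs 'enable' attribute store: bring the physical port online or offline */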
+static ssize_t
+efct_lio_tpg_enable_store(struct config_item *item, const char *page,
+			  size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+	struct efct *efct;
+	struct efc *efc;
+	unsigned long op;
+	int ret;
+
+	if (!tpg->sport || !tpg->sport->efct) {
+		pr_err("%s: Unable to find EFCT device\n", __func__);
+		return -EINVAL;
+	}
+
+	efct = tpg->sport->efct;
+	efc = efct->efcport;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		ret = efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE);
+		if (ret) {
+			efct->tgt_efct.lio_sport = NULL;
+			efc_log_test(efct, "cannot bring port online\n");
+			return ret;
+		}
+	} else if (op == 0) {
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain && efc->domain->sport)
+			efct_scsi_tgt_del_sport(efc, efc->domain->sport);
+
+		atomic_set(&tpg->enabled, 0);
+	} else {
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+static ssize_t
+efct_lio_npiv_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
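+/* configfs 'enable' attribute store for an NPIV tpg: create or remove the vport */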
+static ssize_t
+efct_lio_npiv_tpg_enable_store(struct config_item *item, const char *page,
+			       size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+	struct efct_lio_vport *lio_vport = tpg->vport;
+	struct efct_lio_vport_data_t *vport_data;
+	struct efct *efct;
+	struct efc *efc;
+	int ret = -1;
+	unsigned long op, flags = 0;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (!lio_vport) {
+		pr_err("Unable to find vport\n");
+		return -EINVAL;
+	}
+
+	efct = lio_vport->efct;
+	efc = efct->efcport;
+
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain) {
+			ret = efc_sport_vport_new(efc->domain,
+						  lio_vport->npiv_wwpn,
+						  lio_vport->npiv_wwnn,
+						  U32_MAX, false, true,
+						  NULL, NULL, true);
+			if (ret != 0) {
+				efc_log_err(efct, "Failed to create Vport\n");
+				return ret;
+			}
+			return count;
+		}
+
+		vport_data = kmalloc(sizeof(*vport_data), GFP_KERNEL);
+		if (!vport_data)
+			return -ENOMEM;
+
+		memset(vport_data, 0, sizeof(struct efct_lio_vport_data_t));
+		vport_data->phy_wwpn            = lio_vport->wwpn;
+		vport_data->vport_wwpn          = lio_vport->npiv_wwpn;
+		vport_data->vport_wwnn          = lio_vport->npiv_wwnn;
+		vport_data->target_mode         = 1;
+		vport_data->initiator_mode      = 0;
+		vport_data->lio_vport           = lio_vport;
+
+		/* There is no domain.  Add to pending list. When the
+		 * domain is created, the driver will create the vport.
+		 */
+		efc_log_debug(efct, "link down, move to pending\n");
+		spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+		INIT_LIST_HEAD(&vport_data->list_entry);
+		list_add_tail(&vport_data->list_entry,
+			      &efct->tgt_efct.vport_pend_list);
+		spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+
+	} else if (op == 0) {
+		struct efct_lio_vport_data_t *virt_target_data, *next;
+
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		atomic_set(&tpg->enabled, 0);
+		/* only physical sport should exist, free lio_sport
+		 * allocated in efct_lio_make_sport
+		 */
+		if (efc->domain) {
+			efc_sport_vport_del(efct->efcport, efc->domain,
+					    lio_vport->npiv_wwpn,
+					    lio_vport->npiv_wwnn);
+			return count;
+		}
+		spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+		list_for_each_entry_safe(virt_target_data, next,
+					 &efct->tgt_efct.vport_pend_list,
+					 list_entry) {
+			if (virt_target_data->lio_vport == lio_vport) {
+				list_del(&virt_target_data->list_entry);
+				kfree(virt_target_data);
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	} else {
+		return -EINVAL;
+	}
+	return count;
+}
+
+static bool efct_lio_node_is_initiator(struct efc_node *node)
+{
+	if (!node)
+		return false;
+
+	if (node->rnode.fc_id && node->rnode.fc_id != FC_FID_FLOGI &&
+	    node->rnode.fc_id != FC_FID_DIR_SERV &&
+	    node->rnode.fc_id != FC_FID_FCTRL)
+		return true;
+
+	return false;
+}
+
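+/* Format one line per remote initiator session for the debugfs session file */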
+static int  efct_lio_tgt_session_data(struct efct *efct, u64 wwpn,
+				      char *buf, int size)
+{
+	struct efc_sli_port *sport = NULL;
+	struct efc_node *node = NULL;
+	struct efc *efc = efct->efcport;
+	u16 loop_id = 0;
+	int off = 0, rc = 0;
+
+	if (!efc->domain) {
+		efc_log_err(efct, "failed to find efct/domain\n");
+		return -1;
+	}
+
+	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
+		if (sport->wwpn != wwpn)
+			continue;
+		list_for_each_entry(node, &sport->node_list,
+				    list_entry) {
+			/* Dump only remote NPORT sessions */
+			if (!efct_lio_node_is_initiator(node))
+				continue;
+
+			rc = snprintf(buf + off, size - off,
+				"0x%016llx,0x%08x,0x%04x\n",
+				get_unaligned_be64(node->wwpn),
+				node->rnode.fc_id, loop_id);
+			if (rc < 0)
+				break;
+			off += rc;
+		}
+	}
+
+	buf[size - 1] = '\0';
+	return 0;
+}
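
For orientation, efct_lio_tgt_session_data() above emits one line per remote
NPORT session in the form "WWPN,FC_ID,loop id" (the loop id is always reported
as 0 here), so reading the debugfs session file created below yields lines
such as the following (values are hypothetical):

0x10000090fa942779,0x00010200,0x0000
0x10000090fa94277a,0x00010300,0x0000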
+
+static int efct_debugfs_session_open(struct inode *inode, struct file *filp)
+{
+	struct efct_lio_sport *sport = inode->i_private;
+	int size = 17 * PAGE_SIZE; /* 34 bytes per session * 2048 sessions */
+
+	if (!(filp->f_mode & FMODE_READ)) {
+		filp->private_data = sport;
+		return 0;
+	}
+
+	filp->private_data = kmalloc(size, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
+	memset(filp->private_data, 0, size);
+	efct_lio_tgt_session_data(sport->efct, sport->wwpn, filp->private_data,
+				  size);
+	return 0;
+}
+
+static int efct_debugfs_session_close(struct inode *inode, struct file *filp)
+{
+	if (filp->f_mode & FMODE_READ)
+		kfree(filp->private_data);
+
+	return 0;
+}
+
+static ssize_t efct_debugfs_session_read(struct file *filp, char __user *buf,
+					 size_t count, loff_t *ppos)
+{
+	if (!(filp->f_mode & FMODE_READ))
+		return -EPERM;
+	return simple_read_from_buffer(buf, count, ppos, filp->private_data,
+				       strlen(filp->private_data));
+}
+
+static int efct_npiv_debugfs_session_open(struct inode *inode,
+					  struct file *filp)
+{
+	struct efct_lio_vport *sport = inode->i_private;
+	int size = 17 * PAGE_SIZE; /* 34 bytes per session * 2048 sessions */
+
+	if (!(filp->f_mode & FMODE_READ)) {
+		filp->private_data = sport;
+		return 0;
+	}
+
+	filp->private_data = kmalloc(size, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
+	memset(filp->private_data, 0, size);
+	efct_lio_tgt_session_data(sport->efct, sport->npiv_wwpn,
+				  filp->private_data, size);
+	return 0;
+}
+
+static const struct file_operations efct_debugfs_session_fops = {
+	.owner		= THIS_MODULE,
+	.open		= efct_debugfs_session_open,
+	.release	= efct_debugfs_session_close,
+	.read		= efct_debugfs_session_read,
+	.llseek		= default_llseek,
+};
+
+static const struct file_operations efct_npiv_debugfs_session_fops = {
+	.owner		= THIS_MODULE,
+	.open		= efct_npiv_debugfs_session_open,
+	.release	= efct_debugfs_session_close,
+	.read		= efct_debugfs_session_read,
+	.llseek		= default_llseek,
+};
+
+static char *efct_lio_get_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->sport->wwpn_str;
+}
+
+static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->vport->wwpn_str;
+}
+
+static u16 efct_lio_get_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static u16 efct_lio_get_npiv_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static int efct_lio_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int efct_lio_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	return 0;
+}
+
+static int efct_lio_check_stop_free(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_CHK_STOP_FREE);
+	return target_put_sess_cmd(se_cmd);
+}
+
+static int
+efct_lio_abort_tgt_cb(struct efct_io *io,
+		      enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_io_printf(io, "%s\n", __func__);
+	return 0;
+}
+
+/* command has been aborted, cleanup here */
+static void efct_lio_aborted_task(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_ABORTED_TASK);
+
+	if (!(se_cmd->transport_state & CMD_T_ABORTED) || ocp->rsp_sent)
+		return;
+
+	ocp->aborting = true;
+	ocp->err = EFCT_SCSI_STATUS_ABORTED;
+	/* terminate the exchange */
+	efct_scsi_tgt_abort_io(io, efct_lio_abort_tgt_cb, NULL);
+}
+
+/* Called when se_cmd's ref count goes to 0 */
+static void efct_lio_release_cmd(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct *efct = io->efct;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_RELEASE_CMD);
+	efct_scsi_io_complete(io);
+	atomic_sub_return(1, &efct->tgt_efct.ios_in_use);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
+}
+
+static void efct_lio_close_session(struct se_session *se_sess)
+{
+	struct efc_node *node = se_sess->fabric_sess_ptr;
+	struct efct *efct = NULL;
+	int rc;
+
+	pr_debug("se_sess=%p node=%p", se_sess, node);
+
+	if (!node) {
+		pr_debug("node is NULL");
+		return;
+	}
+
+	efct = node->efc->base;
+	rc = efct_xport_control(efct->xport,
+				EFCT_XPORT_POST_NODE_EVENT, node,
+		EFCT_XPORT_SHUTDOWN, NULL);
+	if (rc != 0) {
+		efc_log_test(efct,
+			      "Failed to shutdown session %p node %p\n",
+			     se_sess, node);
+		return;
+	}
+}
+
+static u32 efct_lio_sess_get_index(struct se_session *se_sess)
+{
+	return 0;
+}
+
+static void efct_lio_set_default_node_attrs(struct se_node_acl *nacl)
+{
+}
+
+static int efct_lio_get_cmd_state(struct se_cmd *se_cmd)
+{
+	return 0;
+}
+
+static int
+efct_lio_sg_map(struct efct_io *io)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	ocp->seg_map_cnt = pci_map_sg(io->efct->pcidev, cmd->t_data_sg,
+				      cmd->t_data_nents, cmd->data_direction);
+	if (ocp->seg_map_cnt == 0)
+		return -EFAULT;
+	return 0;
+}
+
+static void
+efct_lio_sg_unmap(struct efct_io *io)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	if (WARN_ON(!ocp->seg_map_cnt || !cmd->t_data_sg))
+		return;
+
+	pci_unmap_sg(io->efct->pcidev, cmd->t_data_sg,
+		     ocp->seg_map_cnt, cmd->data_direction);
+	ocp->seg_map_cnt = 0;
+}
+
+static int
+efct_lio_status_done(struct efct_io *io,
+		     enum efct_scsi_io_status scsi_status,
+		     u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RSP_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	if (ocp->seg_map_cnt)
+		efct_lio_sg_unmap(io);
+
+	efct_lio_io_printf(io, "status=%d, err=%d flags=0x%x, dir=%d\n",
+				scsi_status, ocp->err, flags, ocp->ddir);
+
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	return 0;
+}
+
+static int
+efct_lio_datamove_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		       u32 flags, void *arg);
+
+static int
+efct_lio_write_pending(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct_scsi_sgl *sgl = io->sgl;
+	struct scatterlist *sg;
+	u32 flags = 0, cnt, curcnt;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_WRITE_PENDING);
+	efct_lio_io_printf(io, "trans_state=0x%x se_cmd_flags=0x%x\n",
+			  cmd->transport_state, cmd->se_cmd_flags);
+
+	if (ocp->seg_cnt == 0) {
+		ocp->seg_cnt = cmd->t_data_nents;
+		ocp->cur_seg = 0;
+		if (efct_lio_sg_map(io)) {
+			efct_lio_io_printf(io, "efct_lio_sg_map failed\n");
+			return -EFAULT;
+		}
+	}
+	curcnt = (ocp->seg_map_cnt - ocp->cur_seg);
+	curcnt = (curcnt < io->sgl_allocated) ? curcnt : io->sgl_allocated;
+	/* find current sg */
+	for (cnt = 0, sg = cmd->t_data_sg; cnt < ocp->cur_seg; cnt++,
+	     sg = sg_next(sg))
+		;
+
+	for (cnt = 0; cnt < curcnt; cnt++, sg = sg_next(sg)) {
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		sgl[cnt].len = sg_dma_len(sg);
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+	}
+	if (ocp->cur_seg == ocp->seg_cnt)
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+	rc = efct_scsi_recv_wr_data(io, flags, NULL, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	return rc;
+}
+
+static int
+efct_lio_queue_data_in(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct_scsi_sgl *sgl = io->sgl;
+	struct scatterlist *sg = NULL;
+	uint flags = 0, cnt = 0, curcnt = 0;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_DATA_IN);
+
+	if (ocp->seg_cnt == 0) {
+		if (cmd->data_length) {
+			ocp->seg_cnt = cmd->t_data_nents;
+			ocp->cur_seg = 0;
+			if (efct_lio_sg_map(io)) {
+				efct_lio_io_printf(io,
+						   "efct_lio_sg_map failed\n");
+				return -EAGAIN;
+			}
+		} else {
+			/* If command length is 0, send the response status */
+			struct efct_scsi_cmd_resp rsp;
+
+			memset(&rsp, 0, sizeof(rsp));
+			efct_lio_io_printf(io,
+					   "cmd : %p length 0, send status\n",
+					   cmd);
+			return efct_scsi_send_resp(io, 0, &rsp,
+						  efct_lio_status_done, NULL);
+		}
+	}
+	curcnt = min(ocp->seg_map_cnt - ocp->cur_seg, io->sgl_allocated);
+
+	while (cnt < curcnt) {
+		sg = &cmd->t_data_sg[ocp->cur_seg];
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		if (ocp->transferred_len + sg_dma_len(sg) >= cmd->data_length)
+			sgl[cnt].len = cmd->data_length - ocp->transferred_len;
+		else
+			sgl[cnt].len = sg_dma_len(sg);
+
+		ocp->transferred_len += sgl[cnt].len;
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+		cnt++;
+		if (ocp->transferred_len == cmd->data_length)
+			break;
+	}
+
+	if (ocp->transferred_len == cmd->data_length) {
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+		ocp->seg_cnt = ocp->cur_seg;
+	}
+
+	/* If there is residual, disable Auto Good Response */
+	if (cmd->residual_count)
+		flags |= EFCT_SCSI_NO_AUTO_RESPONSE;
+
+	rc = efct_scsi_send_rd_data(io, flags, NULL, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RD_DATA);
+	return rc;
+}
+
+static int
+efct_lio_datamove_done(struct efct_io *io,
+		       enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &io->tgt_io.cmd;
+	int rc;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_DATA_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	efct_lio_io_printf(io, "seg_map_cnt=%d\n", ocp->seg_map_cnt);
+	if (ocp->seg_map_cnt) {
+		if (ocp->err == EFCT_SCSI_STATUS_GOOD &&
+		    ocp->cur_seg < ocp->seg_cnt) {
+			efct_lio_io_printf(io, "continuing cmd at segm=%d\n",
+					  ocp->cur_seg);
+			if (ocp->ddir == DMA_TO_DEVICE)
+				rc = efct_lio_write_pending(&ocp->cmd);
+			else
+				rc = efct_lio_queue_data_in(&ocp->cmd);
+			if (rc == 0)
+				return 0;
+			ocp->err = EFCT_SCSI_STATUS_ERROR;
+			efct_lio_io_printf(io, "could not continue command\n");
+		}
+		efct_lio_sg_unmap(io);
+	}
+
+	if (io->tgt_io.aborting) {
+		efct_lio_io_printf(io, "IO done aborted\n");
+		return 0;
+	}
+
+	if (ocp->ddir == DMA_TO_DEVICE) {
+		efct_lio_io_printf(io, "Write done, trans_state=0x%x\n",
+				  io->tgt_io.cmd.transport_state);
+		if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+			transport_generic_request_failure(&io->tgt_io.cmd,
+					TCM_CHECK_CONDITION_ABORT_CMD);
+			efct_set_lio_io_state(io,
+				EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE);
+		} else {
+			efct_set_lio_io_state(io,
+						EFCT_LIO_STATE_TGT_EXECUTE_CMD);
+			target_execute_cmd(&io->tgt_io.cmd);
+		}
+	} else {
+		if ((flags & EFCT_SCSI_IO_CMPL_RSP_SENT) == 0) {
+			struct efct_scsi_cmd_resp rsp;
+			/* send check condition if an error occurred */
+			memset(&rsp, 0, sizeof(rsp));
+			rsp.scsi_status = cmd->scsi_status;
+			rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+			rsp.sense_data_length = cmd->scsi_sense_length;
+
+			/* Check for residual underrun or overrun */
+			if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+				rsp.residual = -cmd->residual_count;
+			else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+				rsp.residual = cmd->residual_count;
+
+			rc = efct_scsi_send_resp(io, 0, &rsp,
+						 efct_lio_status_done, NULL);
+			efct_set_lio_io_state(io,
+						EFCT_LIO_STATE_SCSI_SEND_RSP);
+			if (rc != 0) {
+				efct_lio_io_printf(io,
+						   "Read done, failed to send rsp, rc=%d\n",
+				      rc);
+				transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+				efct_set_lio_io_state(io,
+					EFCT_LIO_STATE_TGT_GENERIC_FREE);
+			} else {
+				ocp->rsp_sent = true;
+			}
+		} else {
+			ocp->rsp_sent = true;
+			transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+			efct_set_lio_io_state(io,
+					EFCT_LIO_STATE_TGT_GENERIC_FREE);
+		}
+	}
+	return 0;
+}
+
+static int
+efct_lio_tmf_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		  u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(io, "cmd=%p status=%d, flags=0x%x\n",
+			      &io->tgt_io.cmd, scsi_status, flags);
+
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	return 0;
+}
+
+static int
+efct_lio_null_tmf_done(struct efct_io *tmfio,
+		       enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(tmfio, "cmd=%p status=%d, flags=0x%x\n",
+			      &tmfio->tgt_io.cmd, scsi_status, flags);
+
+	/* free struct efct_io only, no active se_cmd */
+	efct_scsi_io_complete(tmfio);
+	return 0;
+}
+
+static int
+efct_lio_queue_status(struct se_cmd *cmd)
+{
+	struct efct_scsi_cmd_resp rsp;
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_STATUS);
+	efct_lio_io_printf(io,
+		"status=0x%x trans_state=0x%x se_cmd_flags=0x%x sns_len=%d\n",
+		cmd->scsi_status, cmd->transport_state, cmd->se_cmd_flags,
+		cmd->scsi_sense_length);
+
+	memset(&rsp, 0, sizeof(rsp));
+	rsp.scsi_status = cmd->scsi_status;
+	rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+	rsp.sense_data_length = cmd->scsi_sense_length;
+
+	/* Check for residual underrun or overrun; mark a negative value for
+	 * underrun so it can be recognized in HW
+	 */
+	if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+		rsp.residual = -cmd->residual_count;
+	else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+		rsp.residual = cmd->residual_count;
+
+	rc = efct_scsi_send_resp(io, 0, &rsp, efct_lio_status_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
+	if (rc == 0)
+		ocp->rsp_sent = true;
+	return rc;
+}
+
+static void efct_lio_queue_tm_rsp(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *tmfio = container_of(ocp, struct efct_io, tgt_io);
+	struct se_tmr_req *se_tmr = cmd->se_tmr_req;
+	u8 rspcode;
+
+	efct_lio_tmfio_printf(tmfio, "cmd=%p function=0x%x tmr->response=%d\n",
+			      cmd, se_tmr->function, se_tmr->response);
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_COMPLETE;
+		break;
+	case TMR_TASK_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND;
+		break;
+	case TMR_LUN_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER;
+		break;
+	case TMR_FUNCTION_REJECTED:
+	default:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_REJECTED;
+		break;
+	}
+	efct_scsi_send_tmf_resp(tmfio, rspcode, NULL, efct_lio_tmf_done, NULL);
+}
+
+static struct efct *efct_find_wwpn(u64 wwpn)
+{
+	int efctidx;
+	struct efct *efct;
+
+	/* Search for the HBA that has this WWPN */
+	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
+		u64 pwwn;
+		u8 pn[8];
+
+		efct = efct_devices[efctidx];
+		if (!efct)
+			continue;
+
+		memcpy(pn, efct_hw_get_ptr(&efct->hw, EFCT_HW_WWN_PORT),
+		       sizeof(pn));
+
+		pwwn = get_unaligned_be64(pn);
+		if (pwwn == wwpn)
+			break;
+	}
+
+	if (efctidx == MAX_EFCT_DEVICES)
+		return NULL;
+
+	return efct_devices[efctidx];
+}
+
+static struct dentry *
+efct_create_dfs_session(struct efct *efct, void *data, u8 npiv)
+{
+	char name[16];
+
+	if (!efct->sess_debugfs_dir)
+		return NULL;
+
+	if (npiv)
+		snprintf(name, sizeof(name), "sessions-npiv-%d",
+			 efct->instance_index);
+	else
+		snprintf(name, sizeof(name), "efct-sessions-%d",
+			 efct->instance_index);
+
+	return debugfs_create_file(name, 0644, efct->sess_debugfs_dir, data,
+				   npiv ? &efct_npiv_debugfs_session_fops :
+					  &efct_debugfs_session_fops);
+}
+
+static struct se_wwn *
+efct_lio_make_sport(struct target_fabric_configfs *tf,
+		    struct config_group *group, const char *name)
+{
+	struct efct_lio_sport *lio_sport;
+	struct efct *efct;
+	int ret;
+	u64 wwpn;
+
+	ret = efct_lio_parse_wwn(name, &wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	efct = efct_find_wwpn(wwpn);
+	if (!efct) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+
+	lio_sport = kzalloc(sizeof(*lio_sport), GFP_KERNEL);
+	if (!lio_sport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_sport->efct = efct;
+	lio_sport->wwpn = wwpn;
+	efct_format_wwn(lio_sport->wwpn_str, sizeof(lio_sport->wwpn_str),
+			"naa.", wwpn);
+	efct->tgt_efct.lio_sport = lio_sport;
+
+	lio_sport->sessions = efct_create_dfs_session(efct, lio_sport, 0);
+	return &lio_sport->sport_wwn;
+}
+
+static struct se_wwn *
+efct_lio_npiv_make_sport(struct target_fabric_configfs *tf,
+			 struct config_group *group, const char *name)
+{
+	struct efct_lio_vport *lio_vport;
+	struct efct *efct;
+	int ret = -1;
+	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
+	char *p, tmp[128];
+	struct efct_lio_vport_list_t *vport_list;
+	struct fc_vport *new_fc_vport;
+	struct fc_vport_identifiers vport_id;
+	unsigned long flags = 0;
+
+	snprintf(tmp, 128, "%s", name);
+
+	p = strchr(tmp, '@');
+
+	if (!p) {
+		pr_err("Unable to find separator character '@'\n");
+		return ERR_PTR(-EINVAL);
+	}
+	*p++ = '\0';
+
+	ret = efct_lio_parse_wwn(tmp, &p_wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	ret = efct_lio_parse_npiv_wwn(p, strlen(p) + 1, &npiv_wwpn, &npiv_wwnn);
+	if (ret)
+		return ERR_PTR(ret);
+
+	efct = efct_find_wwpn(p_wwpn);
+	if (!efct) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+
+	lio_vport = kzalloc(sizeof(*lio_vport), GFP_KERNEL);
+	if (!lio_vport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_vport->efct = efct;
+	lio_vport->wwpn = p_wwpn;
+	lio_vport->npiv_wwpn = npiv_wwpn;
+	lio_vport->npiv_wwnn = npiv_wwnn;
+
+	efct_format_wwn(lio_vport->wwpn_str, sizeof(lio_vport->wwpn_str),
+			"naa.", npiv_wwpn);
+
+	vport_list = kmalloc(sizeof(*vport_list), GFP_KERNEL);
+	if (!vport_list) {
+		kfree(lio_vport);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	memset(vport_list, 0, sizeof(struct efct_lio_vport_list_t));
+	vport_list->lio_vport = lio_vport;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	INIT_LIST_HEAD(&vport_list->list_entry);
+	list_add_tail(&vport_list->list_entry, &efct->tgt_efct.vport_list);
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+
+	lio_vport->sessions = efct_create_dfs_session(efct, lio_vport, 1);
+
+	memset(&vport_id, 0, sizeof(vport_id));
+	vport_id.port_name = npiv_wwpn;
+	vport_id.node_name = npiv_wwnn;
+	vport_id.roles = FC_PORT_ROLE_FCP_INITIATOR;
+	vport_id.vport_type = FC_PORTTYPE_NPIV;
+	vport_id.disable = false;
+
+	new_fc_vport = fc_vport_create(efct->shost, 0, &vport_id);
+	if (!new_fc_vport) {
+		efc_log_err(efct, "fc_vport_create failed\n");
+		debugfs_remove(lio_vport->sessions);
+		spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+		list_del(&vport_list->list_entry);
+		spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+		kfree(lio_vport);
+		kfree(vport_list);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	lio_vport->fc_vport = new_fc_vport;
+
+	return &lio_vport->vport_wwn;
+}
+
+static void
+efct_lio_drop_sport(struct se_wwn *wwn)
+{
+	struct efct_lio_sport *lio_sport = container_of(wwn,
+					    struct efct_lio_sport, sport_wwn);
+	struct efct *efct = lio_sport->efct;
+
+	/* only physical sport should exist, free lio_sport allocated
+	 * in efct_lio_make_sport.
+	 */
+
+	debugfs_remove(lio_sport->sessions);
+	lio_sport->sessions = NULL;
+
+	kfree(efct->tgt_efct.lio_sport);
+	efct->tgt_efct.lio_sport = NULL;
+}
+
+static void
+efct_lio_npiv_drop_sport(struct se_wwn *wwn)
+{
+	struct efct_lio_vport *lio_vport = container_of(wwn,
+			struct efct_lio_vport, vport_wwn);
+	struct efct_lio_vport_list_t *vport, *next_vport;
+	struct efct *efct = lio_vport->efct;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+
+	debugfs_remove(lio_vport->sessions);
+
+	if (lio_vport->fc_vport)
+		fc_vport_terminate(lio_vport->fc_vport);
+
+	lio_vport->sessions = NULL;
+
+	list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		if (vport->lio_vport == lio_vport) {
+			list_del(&vport->list_entry);
+			kfree(vport->lio_vport);
+			kfree(vport);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+}
+
+static struct se_portal_group *
+efct_lio_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_sport *lio_sport = container_of(wwn,
+					    struct efct_lio_sport, sport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct *efct;
+	unsigned long n;
+	int ret;
+
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->sport = lio_sport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	efct = lio_sport->efct;
+	efct->tgt_efct.tpg = tpg;
+	efc_log_debug(efct, "create portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->sport->efct, "drop portal group %d\n", tpg->tpgt);
+	tpg->sport->efct->tgt_efct.tpg = NULL;
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
+static struct se_portal_group *
+efct_lio_npiv_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_vport *lio_vport = container_of(wwn,
+			struct efct_lio_vport, vport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct *efct;
+	unsigned long n;
+	int ret;
+
+	efct = lio_vport->efct;
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	if (n != 1) {
+		efc_log_err(efct, "Invalid tpgt index: %lu provided\n", n);
+		return ERR_PTR(-EINVAL);
+	}
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->vport = lio_vport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	lio_vport->tpg = tpg;
+	efc_log_debug(efct, "create vport portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_npiv_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->vport->efct, "drop npiv portal group %d\n",
+		       tpg->tpgt);
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
+static int
+efct_lio_init_nodeacl(struct se_node_acl *se_nacl, const char *name)
+{
+	struct efct_lio_nacl *nacl;
+	u64 wwnn;
+
+	if (efct_lio_parse_wwn(name, &wwnn, 0) < 0)
+		return -EINVAL;
+
+	nacl = container_of(se_nacl, struct efct_lio_nacl, se_node_acl);
+	nacl->nport_wwnn = wwnn;
+
+	efct_format_wwn(nacl->nport_name, sizeof(nacl->nport_name), "", wwnn);
+	return 0;
+}
+
+static int efct_lio_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static int
+efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static struct efct_lio_tpg *
+efct_get_vport_tpg(struct efc_node *node)
+{
+	struct efct *efct;
+	u64 wwpn = node->sport->wwpn;
+	struct efct_lio_vport_list_t *vport, *next;
+	struct efct_lio_vport *lio_vport = NULL;
+	struct efct_lio_tpg *tpg = NULL;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	list_for_each_entry_safe(vport, next, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		lio_vport = vport->lio_vport;
+		if (wwpn && lio_vport &&
+		    lio_vport->npiv_wwpn == wwpn) {
+			efc_log_test(efct, "found tpg on vport\n");
+			tpg = lio_vport->tpg;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	return tpg;
+}
+
+static int efct_session_cb(struct se_portal_group *se_tpg,
+			   struct se_session *se_sess, void *private)
+{
+	struct efc_node *node = private;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+
+	tgt_node = kzalloc(sizeof(*tgt_node), GFP_KERNEL);
+	if (!tgt_node)
+		return -1;
+
+	tgt_node->session = se_sess;
+	node->tgt_node = tgt_node;
+
+	return 0;
+}
+
+int efct_scsi_tgt_new_device(struct efct *efct)
+{
+	int rc = 0;
+	u32 total_ios;
+
+	/* Get the max settings */
+	efct->tgt_efct.max_sge =
+			efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGE);
+	efct->tgt_efct.max_sgl =
+			efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+
+	/* initialize IO watermark fields */
+	atomic_set(&efct->tgt_efct.ios_in_use, 0);
+	total_ios = efct_scsi_get_property(efct, EFCT_SCSI_MAX_IOS);
+	efc_log_debug(efct, "total_ios=%d\n", total_ios);
+	efct->tgt_efct.watermark_min =
+			(total_ios * EFCT_WATERMARK_LOW_PCT) / 100;
+	efct->tgt_efct.watermark_max =
+			(total_ios * EFCT_WATERMARK_HIGH_PCT) / 100;
+	atomic_set(&efct->tgt_efct.io_high_watermark,
+		   efct->tgt_efct.watermark_max);
+	atomic_set(&efct->tgt_efct.watermark_hit, 0);
+	atomic_set(&efct->tgt_efct.initiator_count, 0);
+
+	lio_wq = create_singlethread_workqueue("efct_lio_worker");
+	if (!lio_wq) {
+		efc_log_err(efct, "workqueue create failed\n");
+		return -ENOMEM;
+	}
+
+	spin_lock_init(&efct->tgt_efct.efct_lio_lock);
+	INIT_LIST_HEAD(&efct->tgt_efct.vport_pend_list);
+	INIT_LIST_HEAD(&efct->tgt_efct.vport_list);
+
+	return rc;
+}
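
To make the high-watermark scheme above concrete, here is a tiny standalone
sketch of the arithmetic. The percentages and per-initiator step are
illustrative assumptions only; the real EFCT_WATERMARK_LOW_PCT,
EFCT_WATERMARK_HIGH_PCT and EFCT_IO_WATERMARK_PER_INITIATOR constants are
defined elsewhere in the driver and may differ:

/* Illustration only -- the constants below are assumed, not the driver's */
#include <stdio.h>

int main(void)
{
	unsigned int total_ios = 2048;		/* e.g. EFCT_SCSI_MAX_IOS */
	unsigned int low_pct = 15, high_pct = 90;
	unsigned int per_initiator = 8;
	unsigned int wm_min = total_ios * low_pct / 100;	/* 307 */
	unsigned int wm_max = total_ios * high_pct / 100;	/* 1843 */
	unsigned int initiators, wm;

	/* high watermark drops as initiators log in, but never below wm_min */
	for (initiators = 1; initiators <= 4; initiators++) {
		wm = wm_max - initiators * per_initiator;
		if (wm < wm_min)
			wm = wm_min;
		printf("%u initiator(s): io_high_watermark=%u\n",
		       initiators, wm);
	}
	return 0;
}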
+
+int efct_scsi_tgt_del_device(struct efct *efct)
+{
+	int rc = 0;
+
+	flush_workqueue(lio_wq);
+
+	return rc;
+}
+
+int
+efct_scsi_tgt_new_domain(struct efc *efc, struct efc_domain *domain)
+{
+	int status = 0;
+	struct efct *efct = domain->efc->base;
+	struct efct_lio_vport_data_t *virt_target_data, *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	list_for_each_entry_safe(virt_target_data, next,
+		 &efct->tgt_efct.vport_pend_list, list_entry) {
+		list_del(&virt_target_data->list_entry);
+
+		status = efc_sport_vport_new(domain,
+					     virt_target_data->vport_wwpn,
+					     virt_target_data->vport_wwnn,
+					     U32_MAX,
+					     virt_target_data->initiator_mode,
+					     virt_target_data->target_mode,
+					     virt_target_data, NULL, true);
+		if (status != 0) {
+			/* Put this back on list and try again next time */
+			efc_log_test(efct,
+				      "Could not create new vport for WWPN:0x%llx\n",
+				 virt_target_data->vport_wwpn);
+			list_add(&virt_target_data->list_entry,
+				 &efct->tgt_efct.vport_pend_list);
+		} else {
+			efc_log_debug(efct,
+				       "Created new vport for WWPN: 0x%llx\n",
+				      virt_target_data->vport_wwpn);
+			kfree(virt_target_data);
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	return status;
+}
+
+void
+efct_scsi_tgt_del_domain(struct efc *efc, struct efc_domain *domain)
+{
+}
+
+/* Called by libefc when a new SLI port (sport) is discovered */
+int
+efct_scsi_tgt_new_sport(struct efc *efc, struct efc_sli_port *sport)
+{
+	struct efct *efct = sport->efc->base;
+
+	efc_log_debug(efct, "New SPORT: %s bound to %s\n", sport->display_name,
+		       efct->tgt_efct.lio_sport->wwpn_str);
+
+	return 0;
+}
+
+/* Called by the libefc when a sport goes away. */
+void
+efct_scsi_tgt_del_sport(struct efc *efc, struct efc_sli_port *sport)
+{
+	efc_log_debug(efc, "Del SPORT: %s\n",
+		       sport->display_name);
+}
+/* Called by libefc to validate node. */
+int
+efct_scsi_validate_initiator(struct efc *efc, struct efc_node *node)
+{
+	return 1;
+}
+
+static void efct_lio_setup_session(struct work_struct *work)
+{
+	struct efct_lio_wq_data *wq_data = container_of(work,
+					   struct efct_lio_wq_data, work);
+	struct efct *efct = wq_data->efct;
+	struct efc_node *node = wq_data->ptr;
+	char wwpn[WWN_NAME_LEN];
+	struct efct_lio_tpg *tpg = NULL;
+	struct se_portal_group *se_tpg;
+	struct se_session *se_sess;
+	int watermark;
+	int initiator_count;
+
+	/* Check to see if it belongs to a vport;
+	 * if not, use the physical port
+	 */
+	tpg = efct_get_vport_tpg(node);
+	if (tpg) {
+		se_tpg = &tpg->tpg;
+	} else if (efct->tgt_efct.tpg) {
+		tpg = efct->tgt_efct.tpg;
+		se_tpg = &tpg->tpg;
+	} else {
+		efc_log_err(efct, "failed to init session\n");
+		return;
+	}
+
+	/*
+	 * Format the FCP Initiator port_name into colon
+	 * separated values to match the format used by our explicit
+	 * ConfigFS NodeACLs.
+	 */
+	efct_format_wwn(wwpn, sizeof(wwpn), "",
+			efc_node_get_wwpn(node));
+
+	se_sess = target_setup_session(se_tpg, 0, 0,
+				       TARGET_PROT_NORMAL,
+				       wwpn, node,
+				       efct_session_cb);
+	if (IS_ERR(se_sess)) {
+		efc_log_err(efct, "failed to setup session\n");
+		return;
+	}
+
+	efc_log_debug(efct, "new initiator se_sess=%p node=%p\n",
+		      se_sess, node);
+
+	/* update IO watermark: increment initiator count */
+	initiator_count =
+	atomic_add_return(1, &efct->tgt_efct.initiator_count);
+	watermark = (efct->tgt_efct.watermark_max -
+	     initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+		efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark,
+		   watermark);
+
+	kfree(wq_data);
+}
+
+/* Called by libefc when a new remote initiator is discovered */
+int efct_scsi_new_initiator(struct efc *efc, struct efc_node *node)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_lio_wq_data *wq_data;
+
+	/*
+	 * Since LIO only supports initiator validation at thread level,
+	 * we are open minded and accept all callers.
+	 */
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return -ENOMEM;
+
+	wq_data->ptr = node;
+	wq_data->efct = efct;
+	INIT_WORK(&wq_data->work, efct_lio_setup_session);
+	queue_work(lio_wq, &wq_data->work);
+	return 0;
+}
+
+static void efct_lio_remove_session(struct work_struct *work)
+{
+	struct efct_lio_wq_data *wq_data = container_of(work,
+					   struct efct_lio_wq_data, work);
+	struct efct *efct = wq_data->efct;
+	struct efc_node *node = wq_data->ptr;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+	struct se_session *se_sess;
+
+	tgt_node = node->tgt_node;
+	se_sess = tgt_node->session;
+
+	if (!se_sess) {
+		/* base driver has sent back-to-back requests
+		 * to unreg session with no intervening
+		 * register
+		 */
+		efc_log_test(efct,
+			      "unreg session for NULL session\n");
+		efc_scsi_del_initiator_complete(node->efc,
+						node);
+		return;
+	}
+
+	efc_log_debug(efct, "unreg session se_sess=%p node=%p\n",
+		       se_sess, node);
+
+	/* first flag all session commands to complete */
+	target_sess_cmd_list_set_waiting(se_sess);
+
+	/* now wait for session commands to complete */
+	target_wait_for_sess_cmds(se_sess);
+	target_remove_session(se_sess);
+
+	kfree(node->tgt_node);
+
+	node->tgt_node = NULL;
+	efc_scsi_del_initiator_complete(node->efc, node);
+
+	kfree(wq_data);
+}
+
+/* Called by the libefc when an initiator goes away. */
+int efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
+			int reason)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_lio_wq_data *wq_data;
+	int watermark;
+	int initiator_count;
+
+	if (reason == EFCT_SCSI_INITIATOR_MISSING)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	wq_data = kmalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	memset(wq_data, 0, sizeof(*wq_data));
+	wq_data->ptr = node;
+	wq_data->efct = efct;
+	INIT_WORK(&wq_data->work, efct_lio_remove_session);
+	queue_work(lio_wq, &wq_data->work);
+
+	/*
+	 * update IO watermark: decrement initiator count
+	 */
+	initiator_count =
+		atomic_sub_return(1, &efct->tgt_efct.initiator_count);
+	watermark = (efct->tgt_efct.watermark_max -
+			initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+			efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
+
+	return EFCT_SCSI_CALL_ASYNC;
+}
+
+int efct_scsi_recv_cmd(struct efct_io *io, u64 lun, u8 *cdb,
+		       u32 cdb_len, u32 flags)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct efct *efct = io->efct;
+	char *ddir;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+	struct se_session *se_sess;
+	int rc = 0;
+
+	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io));
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RECV_CMD);
+	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
+
+	/* set target timeout */
+	io->timeout = efct->target_io_timer_sec;
+
+	if (flags & EFCT_SCSI_CMD_SIMPLE)
+		ocp->task_attr = TCM_SIMPLE_TAG;
+	else if (flags & EFCT_SCSI_CMD_HEAD_OF_QUEUE)
+		ocp->task_attr = TCM_HEAD_TAG;
+	else if (flags & EFCT_SCSI_CMD_ORDERED)
+		ocp->task_attr = TCM_ORDERED_TAG;
+	else if (flags & EFCT_SCSI_CMD_ACA)
+		ocp->task_attr = TCM_ACA_TAG;
+
+	switch (flags & (EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT)) {
+	case EFCT_SCSI_CMD_DIR_IN:
+		ddir = "FROM_INITIATOR";
+		ocp->ddir = DMA_TO_DEVICE;
+		break;
+	case EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "TO_INITIATOR";
+		ocp->ddir = DMA_FROM_DEVICE;
+		break;
+	case EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "BIDIR";
+		ocp->ddir = DMA_BIDIRECTIONAL;
+		break;
+	default:
+		ddir = "NONE";
+		ocp->ddir = DMA_NONE;
+		break;
+	}
+
+	ocp->cdb_opcode = cdb[0];
+	ocp->cdb_len = cdb_len;
+	ocp->lun = lun;
+	efct_lio_io_printf(io, "new cmd=0x%x ddir=%s dl=%u\n",
+			  cdb[0], ddir, io->exp_xfer_len);
+
+	tgt_node = io->node->tgt_node;
+	se_sess = tgt_node->session;
+	if (se_sess) {
+		efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_SUBMIT_CMD);
+		rc = target_submit_cmd(&io->tgt_io.cmd, se_sess,
+				       cdb, &io->tgt_io.sense_buffer[0],
+				       ocp->lun, io->exp_xfer_len,
+				       ocp->task_attr, ocp->ddir,
+				       TARGET_SCF_ACK_KREF);
+		if (rc) {
+			efc_log_err(efct, "failed to submit cmd se_cmd: %p\n",
+				    &ocp->cmd);
+			efct_scsi_io_free(io);
+		}
+	}
+
+	return rc;
+}
+
+int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
+		   enum efct_scsi_tmf_cmd cmd,
+		  struct efct_io *io_to_abort, u32 flags)
+{
+	unsigned char tmr_func;
+	struct efct *efct = tmfio->efct;
+	struct efct_scsi_tgt_io *ocp = &tmfio->tgt_io;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+	struct se_session *se_sess;
+	int rc;
+
+	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io));
+	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_SCSI_RECV_TMF);
+	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
+	efct_lio_tmfio_printf(tmfio, "%s: new tmf %x lun=%u\n",
+			      tmfio->display_name, cmd, lun);
+
+	switch (cmd) {
+	case EFCT_SCSI_TMF_ABORT_TASK:
+		tmr_func = TMR_ABORT_TASK;
+		break;
+	case EFCT_SCSI_TMF_ABORT_TASK_SET:
+		tmr_func = TMR_ABORT_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_TASK_SET:
+		tmr_func = TMR_CLEAR_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_LOGICAL_UNIT_RESET:
+		tmr_func = TMR_LUN_RESET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_ACA:
+		tmr_func = TMR_CLEAR_ACA;
+		break;
+	case EFCT_SCSI_TMF_TARGET_RESET:
+		tmr_func = TMR_TARGET_WARM_RESET;
+		break;
+	case EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT:
+	case EFCT_SCSI_TMF_QUERY_TASK_SET:
+	default:
+		goto tmf_fail;
+	}
+
+	tmfio->tgt_io.tmf = tmr_func;
+	tmfio->tgt_io.lun = lun;
+	tmfio->tgt_io.io_to_abort = io_to_abort;
+
+	tgt_node = tmfio->node->tgt_node;
+
+	se_sess = tgt_node->session;
+	if (!se_sess)
+		return 0;
+
+	rc = target_submit_tmr(&ocp->cmd, se_sess, NULL, lun, ocp, tmr_func,
+			GFP_ATOMIC, tmfio->init_task_tag, TARGET_SCF_ACK_KREF);
+
+	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_TGT_SUBMIT_TMR);
+	if (rc)
+		goto tmf_fail;
+
+	return 0;
+
+tmf_fail:
+	efct_scsi_send_tmf_resp(tmfio, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+				NULL, efct_lio_null_tmf_done, NULL);
+	return 0;
+}
+
+/* Start items for efct_lio_tpg_attrib_cit */
+
+#define DEF_EFCT_TPG_ATTRIB(name)					  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_show(			  \
+		struct config_item *item, char *page)			  \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+			struct efct_lio_tpg, tpg);			  \
+									  \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		  \
+}									  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_store(			  \
+		struct config_item *item, const char *page, size_t count) \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+					struct efct_lio_tpg, tpg);	  \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		  \
+	unsigned long val;						  \
+	int ret;							  \
+									  \
+	ret = kstrtoul(page, 0, &val);					  \
+	if (ret < 0) {							  \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	  \
+		return -EINVAL;						  \
+	}								  \
+									  \
+	if (val != 0 && val != 1) {					  \
+		pr_err("Illegal boolean value %lu\n", val);		  \
+		return -EINVAL;						  \
+	}								  \
+									  \
+	a->name = val;							  \
+									  \
+	return count;							  \
+}									  \
+CONFIGFS_ATTR(efct_lio_tpg_attrib_, name)
+
+DEF_EFCT_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_tpg_attrib_attrs[] = {
+	&efct_lio_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
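
For reference, DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect) above expands to
roughly the following show accessor (the matching _store() and the
CONFIGFS_ATTR() instantiation follow the same pattern); this is only a hand
expansion of the macro for illustration:

/* Rough hand expansion of DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect) */
static ssize_t efct_lio_tpg_attrib_demo_mode_write_protect_show(
		struct config_item *item, char *page)
{
	struct se_portal_group *se_tpg = to_tpg(item);
	struct efct_lio_tpg *tpg = container_of(se_tpg,
			struct efct_lio_tpg, tpg);

	/* report the current attribute value through configfs */
	return sprintf(page, "%u\n", tpg->tpg_attrib.demo_mode_write_protect);
}

The generated efct_lio_tpg_attrib_attr_demo_mode_write_protect is what the
efct_lio_tpg_attrib_attrs[] table above references, and it surfaces under the
TPG's attrib/ directory in configfs (for the "efct" fabric, a path along the
lines of /sys/kernel/config/target/efct/<wwpn>/tpgt_<n>/attrib/..., shown here
only as an illustration).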
+
+#define DEF_EFCT_NPIV_TPG_ATTRIB(name)					   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_show(			   \
+		struct config_item *item, char *page)			   \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+									   \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		   \
+}									   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_store(			   \
+		struct config_item *item, const char *page, size_t count)  \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		   \
+	unsigned long val;						   \
+	int ret;							   \
+									   \
+	ret = kstrtoul(page, 0, &val);					   \
+	if (ret < 0) {							   \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	   \
+		return -EINVAL;						   \
+	}								   \
+									   \
+	if (val != 0 && val != 1) {					   \
+		pr_err("Illegal boolean value %lu\n", val);		   \
+		return -EINVAL;						   \
+	}								   \
+									   \
+	a->name = val;							   \
+									   \
+	return count;							   \
+}									   \
+CONFIGFS_ATTR(efct_lio_npiv_tpg_attrib_, name)
+
+DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_NPIV_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_npiv_tpg_attrib_attrs[] = {
+	&efct_lio_npiv_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_npiv_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_npiv_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
+
+CONFIGFS_ATTR(efct_lio_tpg_, enable);
+static struct configfs_attribute *efct_lio_tpg_attrs[] = {
+				&efct_lio_tpg_attr_enable, NULL };
+CONFIGFS_ATTR(efct_lio_npiv_tpg_, enable);
+static struct configfs_attribute *efct_lio_npiv_tpg_attrs[] = {
+				&efct_lio_npiv_tpg_attr_enable, NULL };
+
+static const struct target_core_fabric_ops efct_lio_ops = {
+	.module				= THIS_MODULE,
+	.fabric_name			= "efct",
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect = efct_lio_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect = efct_lio_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_make_sport,
+	.fabric_drop_wwn		= efct_lio_drop_sport,
+	.fabric_make_tpg		= efct_lio_make_tpg,
+	.fabric_drop_tpg		= efct_lio_drop_tpg,
+	.tpg_check_demo_mode_login_only = efct_lio_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_tpg_base_attrs		= efct_lio_tpg_attrs,
+	.tfc_tpg_attrib_attrs           = efct_lio_tpg_attrib_attrs,
+};
+
+static const struct target_core_fabric_ops efct_lio_npiv_ops = {
+	.module				= THIS_MODULE,
+	.fabric_name			= "efct_npiv",
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_npiv_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_npiv_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect =
+					efct_lio_npiv_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect =
+					efct_lio_npiv_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_npiv_make_sport,
+	.fabric_drop_wwn		= efct_lio_npiv_drop_sport,
+	.fabric_make_tpg		= efct_lio_npiv_make_tpg,
+	.fabric_drop_tpg		= efct_lio_npiv_drop_tpg,
+	.tpg_check_demo_mode_login_only =
+				efct_lio_npiv_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_tpg_base_attrs		= efct_lio_npiv_tpg_attrs,
+	.tfc_tpg_attrib_attrs		= efct_lio_npiv_tpg_attrib_attrs,
+};
+
+int efct_scsi_tgt_driver_init(void)
+{
+	int rc;
+
+	/* Register the top level struct config_item_type with TCM core */
+	rc = target_register_template(&efct_lio_ops);
+	if (rc < 0) {
+		pr_err("target_fabric_configfs_register failed with %d\n", rc);
+		return rc;
+	}
+	rc = target_register_template(&efct_lio_npiv_ops);
+	if (rc < 0) {
+		pr_err("target_fabric_configfs_register failed with %d\n", rc);
+		target_unregister_template(&efct_lio_ops);
+		return rc;
+	}
+	return 0;
+}
+
+int efct_scsi_tgt_driver_exit(void)
+{
+	target_unregister_template(&efct_lio_ops);
+	target_unregister_template(&efct_lio_npiv_ops);
+	return 0;
+}
diff --git a/drivers/scsi/elx/efct/efct_lio.h b/drivers/scsi/elx/efct/efct_lio.h
new file mode 100644
index 000000000000..66d3790bea45
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.h
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_LIO_H__
+#define __EFCT_LIO_H__
+
+#include "efct_scsi.h"
+#include <target/target_core_base.h>
+
+#define efct_lio_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, \
+		"[%s] [%04x][i:%04x t:%04x h:%04x][c:%02x]" fmt, \
+		io->node->display_name, io->instance_index, \
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, \
+		io->tgt_io.cdb_opcode, ##__VA_ARGS__)
+
+#define efct_lio_tmfio_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, \
+		"[%s] [%04x][i:%04x t:%04x h:%04x][f:%02x]" fmt, \
+		io->node->display_name, io->instance_index, \
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, \
+		io->tgt_io.tmf,  ##__VA_ARGS__)
+
+#define efct_set_lio_io_state(io, value) ((io)->tgt_io.state |= (value))
+
+struct efct_lio_wq_data {
+	struct efct		*efct;
+	void			*ptr;
+	struct work_struct	work;
+};
+
+/* Target private efct structure */
+struct efct_scsi_tgt {
+	u32			max_sge;
+	u32			max_sgl;
+
+	/*
+	 * Variables used to send task set full. We are using a high watermark
+	 * method to send task set full. We will reserve a fixed number of IOs
+	 * per initiator plus a fudge factor. Once we reach this number,
+	 * per initiator plus a fudge factor. Once this number is reached,
+	 * the target will start sending task set full/busy responses.
+	atomic_t		initiator_count;
+	atomic_t		ios_in_use;
+	atomic_t		io_high_watermark;
+
+	atomic_t		watermark_hit;
+	int			watermark_min;
+	int			watermark_max;
+
+	struct efct_lio_sport	*lio_sport;
+	struct efct_lio_tpg	*tpg;
+
+	struct list_head	vport_pend_list;
+	struct list_head	vport_list;
+	/* Protects the vport lists */
+	spinlock_t		efct_lio_lock;
+
+	u64			wwnn;
+};
+
+struct efct_scsi_tgt_sport {
+	struct efct_lio_sport	*lio_sport;
+};
+
+struct efct_scsi_tgt_node {
+	struct se_session	*session;
+};
+
+#define EFCT_LIO_STATE_SCSI_RECV_CMD		(1 << 0)
+#define EFCT_LIO_STATE_TGT_SUBMIT_CMD		(1 << 1)
+#define EFCT_LIO_STATE_TFO_QUEUE_DATA_IN	(1 << 2)
+#define EFCT_LIO_STATE_TFO_WRITE_PENDING	(1 << 3)
+#define EFCT_LIO_STATE_TGT_EXECUTE_CMD		(1 << 4)
+#define EFCT_LIO_STATE_SCSI_SEND_RD_DATA	(1 << 5)
+#define EFCT_LIO_STATE_TFO_CHK_STOP_FREE	(1 << 6)
+#define EFCT_LIO_STATE_SCSI_DATA_DONE		(1 << 7)
+#define EFCT_LIO_STATE_TFO_QUEUE_STATUS		(1 << 8)
+#define EFCT_LIO_STATE_SCSI_SEND_RSP		(1 << 9)
+#define EFCT_LIO_STATE_SCSI_RSP_DONE		(1 << 10)
+#define EFCT_LIO_STATE_TGT_GENERIC_FREE		(1 << 11)
+#define EFCT_LIO_STATE_SCSI_RECV_TMF		(1 << 12)
+#define EFCT_LIO_STATE_TGT_SUBMIT_TMR		(1 << 13)
+#define EFCT_LIO_STATE_TFO_WRITE_PEND_STATUS	(1 << 14)
+#define EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE  (1 << 15)
+
+#define EFCT_LIO_STATE_TFO_ABORTED_TASK		(1 << 29)
+#define EFCT_LIO_STATE_TFO_RELEASE_CMD		(1 << 30)
+#define EFCT_LIO_STATE_SCSI_CMPL_CMD		(1U << 31)
+
+struct efct_scsi_tgt_io {
+	struct se_cmd		cmd;
+	unsigned char		sense_buffer[TRANSPORT_SENSE_BUFFER];
+	int			ddir;
+	int			task_attr;
+	u64			lun;
+
+	u32			state;
+	u8			cdb_opcode;
+	u8			tmf;
+	struct efct_io		*io_to_abort;
+	u32			cdb_len;
+	u32			seg_map_cnt;
+	u32			seg_cnt;
+	u32			cur_seg;
+	enum efct_scsi_io_status err;
+	bool			aborting;
+	bool			rsp_sent;
+	u32			transferred_len;
+};
+
+/* Handler return codes */
+enum {
+	SCSI_HANDLER_DATAPHASE_STARTED = 1,
+	SCSI_HANDLER_RESP_STARTED,
+	SCSI_HANDLER_VALIDATED_DATAPHASE_STARTED,
+	SCSI_CMD_NOT_SUPPORTED,
+};
+
+#define WWN_NAME_LEN		32
+struct efct_lio_vport {
+	u64			wwpn;
+	u64			npiv_wwpn;
+	u64			npiv_wwnn;
+	unsigned char		wwpn_str[WWN_NAME_LEN];
+	struct se_wwn		vport_wwn;
+	struct efct_lio_tpg	*tpg;
+	struct efct		*efct;
+	struct dentry		*sessions;
+	struct Scsi_Host	*shost;
+	struct fc_vport		*fc_vport;
+	atomic_t		enable;
+};
+
+struct efct_lio_sport {
+	u64			wwpn;
+	unsigned char		wwpn_str[WWN_NAME_LEN];
+	struct se_wwn		sport_wwn;
+	struct efct_lio_tpg	*tpg;
+	struct efct		*efct;
+	struct dentry		*sessions;
+	atomic_t		enable;
+};
+
+struct efct_lio_tpg_attrib {
+	int			generate_node_acls;
+	int			cache_dynamic_acls;
+	int			demo_mode_write_protect;
+	int			prod_mode_write_protect;
+	int			demo_mode_login_only;
+	bool			session_deletion_wait;
+};
+
+struct efct_lio_tpg {
+	struct se_portal_group	tpg;
+	struct efct_lio_sport	*sport;
+	struct efct_lio_vport	*vport;
+	struct efct_lio_tpg_attrib tpg_attrib;
+	unsigned short		tpgt;
+	atomic_t		enabled;
+};
+
+struct efct_lio_nacl {
+	u64			nport_wwnn;
+	char			nport_name[WWN_NAME_LEN];
+	struct se_session	*session;
+	struct se_node_acl	se_node_acl;
+};
+
+struct efct_lio_vport_data_t {
+	struct list_head	list_entry;
+	bool			initiator_mode;
+	bool			target_mode;
+	u64			phy_wwpn;
+	u64			phy_wwnn;
+	u64			vport_wwpn;
+	u64			vport_wwnn;
+	struct efct_lio_vport	*lio_vport;
+};
+
+struct efct_lio_vport_list_t {
+	struct list_head	list_entry;
+	struct efct_lio_vport	*lio_vport;
+};
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+
+#endif /* __EFCT_LIO_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 25/32] elx: efct: Hardware IO submission routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (23 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 24/32] elx: efct: LIO backend interface routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09  9:52   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 26/32] elx: efct: link statistics and SFP data James Smart
                   ` (7 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines that write IOs to the work queue and send SRRs and raw frames.
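
As a rough orientation, the submission model implemented by efct_hw_wq_write()
in this patch can be sketched as follows (simplified; the real routine runs
under wq->queue->lock and also drains the pending list and re-queues abort
WQEs):

/* Simplified sketch of the WQ submission model (illustration only) */
static int wq_write_sketch(struct hw_wq *wq, struct efct_hw_wqe *wqe)
{
	/*
	 * If WQEs are already pending, or the WQ has no free entries,
	 * queue this WQE behind the others to preserve ordering.
	 */
	if (!list_empty(&wq->pending_list) || wq->free_count == 0) {
		INIT_LIST_HEAD(&wqe->list_entry);
		list_add_tail(&wqe->list_entry, &wq->pending_list);
		wq->wq_pending_count++;
		return 0;
	}

	/* Otherwise write it to hardware now; _efct_hw_wq_write() consumes
	 * one free_count credit and posts the entry via sli_wq_write().
	 */
	return _efct_hw_wq_write(wq, wqe);
}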

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 625 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  19 ++
 2 files changed, 644 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 43f1ff526694..440c4fa196bf 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -3192,6 +3192,68 @@ efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
 	return 0;
 }
 
+static int
+_efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
+{
+	int rc;
+	int queue_rc;
+
+	/* Every so often, set the wqec bit to generate consumed completions */
+	if (wq->wqec_count)
+		wq->wqec_count--;
+
+	if (wq->wqec_count == 0) {
+		struct sli4_generic_wqe *genwqe = (void *)wqe->wqebuf;
+
+		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
+		wq->wqec_count = wq->wqec_set_count;
+	}
+
+	/* Decrement WQ free count */
+	wq->free_count--;
+
+	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
+
+	if (queue_rc < 0)
+		rc = -1;
+	else
+		rc = 0;
+
+	return rc;
+}
+
+static void
+hw_wq_submit_pending(struct hw_wq *wq, u32 update_free_count)
+{
+	struct efct_hw_wqe *wqe;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+
+	/* Update free count with value passed in */
+	wq->free_count += update_free_count;
+
+	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
+		wqe = list_first_entry(&wq->pending_list,
+				       struct efct_hw_wqe, list_entry);
+		list_del(&wqe->list_entry);
+		_efct_hw_wq_write(wq, wqe);
+
+		if (wqe->abort_wqe_submit_needed) {
+			wqe->abort_wqe_submit_needed = false;
+			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
+				      wq->hw->sli.wqe_size,
+				      SLI_ABORT_XRI, wqe->send_abts, wqe->id,
+				      0, wqe->abort_reqtag, SLI4_CQ_DEFAULT);
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+}
+
 void
 efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
 {
@@ -3390,3 +3452,566 @@ efct_hw_flush(struct efct_hw *hw)
 
 	return 0;
 }
+
+int
+efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
+{
+	int rc = 0;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+	if (!list_empty(&wq->pending_list)) {
+		INIT_LIST_HEAD(&wqe->list_entry);
+		list_add_tail(&wqe->list_entry, &wq->pending_list);
+		wq->wq_pending_count++;
+		while ((wq->free_count > 0) &&
+		       ((wqe = list_first_entry(&wq->pending_list,
+					struct efct_hw_wqe, list_entry))
+			 != NULL)) {
+			list_del(&wqe->list_entry);
+			rc = _efct_hw_wq_write(wq, wqe);
+			if (rc < 0)
+				break;
+			if (wqe->abort_wqe_submit_needed) {
+				wqe->abort_wqe_submit_needed = false;
+				sli_abort_wqe(&wq->hw->sli,
+					      wqe->wqebuf,
+					      wq->hw->sli.wqe_size,
+					      SLI_ABORT_XRI,
+					      wqe->send_abts, wqe->id,
+					      0, wqe->abort_reqtag,
+					      SLI4_CQ_DEFAULT);
+
+				INIT_LIST_HEAD(&wqe->list_entry);
+				list_add_tail(&wqe->list_entry,
+					      &wq->pending_list);
+				wq->wq_pending_count++;
+			}
+		}
+	} else {
+		if (wq->free_count > 0) {
+			rc = _efct_hw_wq_write(wq, wqe);
+		} else {
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+
+	return rc;
+}
+
+/**
+ * This routine supports communication sequences consisting of a single
+ * request and a single response between two endpoints. Examples include:
+ *  - Sending an ELS request.
+ *  - Sending an ELS response; the caller must provide the OX_ID from
+ *    the received request.
+ *  - Sending an FC Common Transport (FC-CT) request; the caller must
+ *    provide the R_CTL, TYPE, and DF_CTL values to place in the
+ *    FC frame header.
+ */
+enum efct_hw_rtn
+efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		  struct efct_hw_io *io,
+		  struct efc_dma *send, u32 len,
+		  struct efc_dma *receive, struct efc_remote_node *rnode,
+		  union efct_hw_io_param_u *iparam,
+		  efct_hw_srrs_cb_t cb, void *arg)
+{
+	struct sli4_sge	*sge = NULL;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+	u16	local_flags = 0;
+	u32 sge0_flags;
+	u32 sge1_flags;
+
+	if (!io || !rnode || !iparam) {
+		pr_err("bad parm hw=%p io=%p s=%p r=%p rn=%p iparm=%p\n",
+			hw, io, send, receive, rnode, iparam);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_test(hw->os,
+			      "cannot send SRRS, HW state=%d\n", hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	io->rnode = rnode;
+	io->type  = type;
+	io->done = cb;
+	io->arg  = arg;
+
+	sge = io->sgl->virt;
+
+	/* clear both SGE */
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
+
+	sge0_flags = le32_to_cpu(sge[0].dw2_flags);
+	sge1_flags = le32_to_cpu(sge[1].dw2_flags);
+	if (send) {
+		sge[0].buffer_address_high =
+			cpu_to_le32(upper_32_bits(send->phys));
+		sge[0].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(send->phys));
+
+		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+
+		sge[0].buffer_length = cpu_to_le32(len);
+	}
+
+	if (type == EFCT_HW_ELS_REQ || type == EFCT_HW_FC_CT) {
+		sge[1].buffer_address_high =
+			cpu_to_le32(upper_32_bits(receive->phys));
+		sge[1].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(receive->phys));
+
+		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		sge1_flags |= SLI4_SGE_LAST;
+
+		sge[1].buffer_length = cpu_to_le32(receive->size);
+	} else {
+		sge0_flags |= SLI4_SGE_LAST;
+	}
+
+	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
+	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
+
+	switch (type) {
+	case EFCT_HW_ELS_REQ:
+		if (!send ||
+		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
+					  hw->sli.wqe_size, io->sgl,
+					*((u8 *)send->virt),
+					len, receive->size,
+					iparam->els.timeout,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, rnode->indicator,
+					rnode->sport->indicator,
+					rnode->node_group, rnode->attached,
+					rnode->fc_id, rnode->sport->fc_id)) {
+			efc_log_err(hw->os, "REQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_ELS_RSP:
+		if (!send ||
+		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, send, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, iparam->els.ox_id,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->node_group, rnode->attached,
+					rnode->fc_id,
+					local_flags, U32_MAX)) {
+			efc_log_err(hw->os, "RSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_ELS_RSP_SID:
+		if (!send ||
+		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, send, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					iparam->els_sid.ox_id,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->node_group, rnode->attached,
+					rnode->fc_id,
+					local_flags, iparam->els_sid.s_id)) {
+			efc_log_err(hw->os, "RSP (SID) WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_FC_CT:
+		if (!send ||
+		    sli_gen_request64_wqe(&hw->sli, io->wqe.wqebuf,
+					  hw->sli.wqe_size, io->sgl,
+					len, receive->size,
+					iparam->fc_ct.timeout, io->indicator,
+					io->reqtag, SLI4_CQ_DEFAULT,
+					rnode->node_group, rnode->fc_id,
+					rnode->indicator,
+					iparam->fc_ct.r_ctl,
+					iparam->fc_ct.type,
+					iparam->fc_ct.df_ctl)) {
+			efc_log_err(hw->os, "GEN WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_FC_CT_RSP:
+		if (!send ||
+		    sli_xmit_sequence64_wqe(&hw->sli, io->wqe.wqebuf,
+					    hw->sli.wqe_size, io->sgl,
+					len, iparam->fc_ct_rsp.timeout,
+					iparam->fc_ct_rsp.ox_id,
+					io->indicator, io->reqtag,
+					rnode->node_group, rnode->fc_id,
+					rnode->indicator,
+					iparam->fc_ct_rsp.r_ctl,
+					iparam->fc_ct_rsp.type,
+					iparam->fc_ct_rsp.df_ctl)) {
+			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_RJT:
+	{
+		struct sli_bls_payload	bls;
+
+		if (type == EFCT_HW_BLS_ACC) {
+			bls.type = SLI4_SLI_BLS_ACC;
+			memcpy(&bls.u.acc, iparam->bls.payload,
+			       sizeof(bls.u.acc));
+		} else {
+			bls.type = SLI4_SLI_BLS_RJT;
+			memcpy(&bls.u.rjt, iparam->bls.payload,
+			       sizeof(bls.u.rjt));
+		}
+
+		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
+		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
+
+		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, &bls,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					rnode->attached, rnode->node_group,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->fc_id, rnode->sport->fc_id,
+					U32_MAX)) {
+			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_BLS_ACC_SID:
+	{
+		struct sli_bls_payload	bls;
+
+		bls.type = SLI4_SLI_BLS_ACC;
+		memcpy(&bls.u.acc, iparam->bls_sid.payload,
+		       sizeof(bls.u.acc));
+
+		bls.ox_id = cpu_to_le16(iparam->bls_sid.ox_id);
+		bls.rx_id = cpu_to_le16(iparam->bls_sid.rx_id);
+
+		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, &bls,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					rnode->attached, rnode->node_group,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->fc_id, rnode->sport->fc_id,
+					iparam->bls_sid.s_id)) {
+			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE SID error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "bad SRRS type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		if (!io->wq)
+			io->wq = efct_hw_queue_next_wq(hw, io);
+
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		io->wq->use_count++;
+		efct_hw_add_io_timed_wqe(hw, io);
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "sli_queue_write failed: %d\n", rc);
+			io->xbusy = false;
+			efct_hw_remove_io_timed_wqe(hw, io);
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * Send a read, write, or response IO.
+ *
+ * This routine supports sending a higher-level IO (for example, FCP) between
+ * two endpoints as a target or initiator. Examples include:
+ *  - Sending read data and good response (target).
+ *  - Sending a response (target with no data or after receiving write data).
+ *  .
+ * This routine assumes all IOs use the SGL associated with the HW IO. Prior to
+ * calling this routine, the data should be loaded using efct_hw_io_add_sge().
+ */
+enum efct_hw_rtn
+efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		struct efct_hw_io *io,
+		u32 len, union efct_hw_io_param_u *iparam,
+		struct efc_remote_node *rnode, void *cb, void *arg)
+{
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+	u32	rpi;
+	bool send_wqe = true;
+
+	if (!io || !rnode || !iparam) {
+		pr_err("bad parm hw=%p io=%p iparam=%p rnode=%p\n",
+			hw, io, iparam, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rpi = rnode->indicator;
+
+	/*
+	 * Save state needed during later stages
+	 */
+	io->rnode = rnode;
+	io->type  = type;
+	io->done  = cb;
+	io->arg   = arg;
+
+	/*
+	 * Format the work queue entry used to send the IO
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE: {
+		u16 flags = iparam->fcp_tgt.flags;
+		struct fcp_txrdy *xfer = io->xfer_rdy.virt;
+
+		/*
+		 * Fill in the XFER_RDY for IF_TYPE 0 devices
+		 */
+		xfer->ft_data_ro = cpu_to_be32(iparam->fcp_tgt.offset);
+		xfer->ft_burst_len = cpu_to_be32(len);
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+
+		if (sli_fcp_treceive64_wqe(&hw->sli,
+					   io->wqe.wqebuf,
+					   hw->sli.wqe_size,
+					   &io->def_sgl,
+					   io->first_data_sge,
+					   iparam->fcp_tgt.offset, len,
+					   io->indicator, io->reqtag,
+					   SLI4_CQ_DEFAULT,
+					   iparam->fcp_tgt.ox_id, rpi,
+					   rnode->node_group,
+					   rnode->fc_id, flags,
+					   iparam->fcp_tgt.dif_oper,
+					   iparam->fcp_tgt.blk_size,
+					   iparam->fcp_tgt.cs_ctl,
+					   iparam->fcp_tgt.app_id)) {
+			efc_log_err(hw->os, "TRECEIVE WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_READ: {
+		u16 flags = iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+		if (sli_fcp_tsend64_wqe(&hw->sli, io->wqe.wqebuf,
+					hw->sli.wqe_size, &io->def_sgl,
+					io->first_data_sge,
+					iparam->fcp_tgt.offset, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, iparam->fcp_tgt.ox_id,
+					rpi, rnode->node_group,
+					rnode->fc_id, flags,
+					iparam->fcp_tgt.dif_oper,
+					iparam->fcp_tgt.blk_size,
+					iparam->fcp_tgt.cs_ctl,
+					iparam->fcp_tgt.app_id)) {
+			efc_log_err(hw->os, "TSEND WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_RSP: {
+		u16 flags = iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+		if (sli_fcp_trsp64_wqe(&hw->sli, io->wqe.wqebuf,
+				       hw->sli.wqe_size, &io->def_sgl,
+				       len, io->indicator, io->reqtag,
+				       SLI4_CQ_DEFAULT, iparam->fcp_tgt.ox_id,
+					rpi, rnode->node_group, rnode->fc_id,
+					flags, iparam->fcp_tgt.cs_ctl,
+				       0, iparam->fcp_tgt.app_id)) {
+			efc_log_err(hw->os, "TRSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (send_wqe && rc == EFCT_HW_RTN_SUCCESS) {
+		if (!io->wq)
+			io->wq = efct_hw_queue_next_wq(hw, io);
+
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		hw->tcmd_wq_submit[io->wq->instance]++;
+		io->wq->use_count++;
+		efct_hw_add_io_timed_wqe(hw, io);
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "sli_queue_write failed: %d\n", rc);
+			io->xbusy = false;
+			efct_hw_remove_io_timed_wqe(hw, io);
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * Send a raw frame
+ *
+ * Using the SEND_FRAME_WQE, a frame consisting of header and payload is sent.
+ */
+enum efct_hw_rtn
+efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma *payload,
+		   struct efct_hw_send_frame_context *ctx,
+		   void (*callback)(void *arg, u8 *cqe, int status),
+		   void *arg)
+{
+	int rc;
+	struct efct_hw_wqe *wqe;
+	u32 xri;
+	struct hw_wq *wq;
+
+	wqe = &ctx->wqe;
+
+	/* populate the callback object */
+	ctx->hw = hw;
+
+	/* Fetch and populate request tag */
+	ctx->wqcb = efct_hw_reqtag_alloc(hw, callback, arg);
+	if (!ctx->wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+
+	/* Choose a work queue, first look for a class[1] wq, otherwise just
+	 * use wq[0]
+	 */
+	wq = efct_varray_iter_next(hw->wq_class_array[1]);
+	if (!wq)
+		wq = hw->hw_wq[0];
+
+	/* Set XRI and RX_ID in the header based on which WQ, and which
+	 * send_frame_io we are using
+	 */
+	xri = wq->send_frame_io->indicator;
+
+	/* Build the send frame WQE */
+	rc = sli_send_frame_wqe(&hw->sli, wqe->wqebuf,
+				hw->sli.wqe_size, sof, eof,
+				(u32 *)hdr, payload, payload->len,
+				EFCT_HW_SEND_FRAME_TIMEOUT, xri,
+				ctx->wqcb->instance_index);
+	if (rc) {
+		efc_log_err(hw->os, "sli_send_frame_wqe failed: %d\n",
+			     rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Write to WQ */
+	rc = efct_hw_wq_write(wq, wqe);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_wq_write failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	wq->use_count++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+u32
+efct_hw_io_get_count(struct efct_hw *hw,
+		     enum efct_hw_io_count_type io_count_type)
+{
+	struct efct_hw_io *io = NULL;
+	u32 count = 0;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+
+	switch (io_count_type) {
+	case EFCT_HW_IO_INUSE_COUNT:
+		list_for_each_entry(io, &hw->io_inuse, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_FREE_COUNT:
+		list_for_each_entry(io, &hw->io_free, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_WAIT_FREE_COUNT:
+		list_for_each_entry(io, &hw->io_wait_free, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_N_TOTAL_IO_COUNT:
+		count = hw->config.n_io;
+		break;
+	}
+
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return count;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 55679e40cc49..1a019594c471 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -952,4 +952,23 @@ efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
 extern int
 efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
 
+int efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe);
+enum efct_hw_rtn
+efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma *payload,
+		struct efct_hw_send_frame_context *ctx,
+		void (*callback)(void *arg, u8 *cqe, int status),
+		void *arg);
+typedef int(*efct_hw_srrs_cb_t)(struct efct_hw_io *io,
+				struct efc_remote_node *rnode, u32 length,
+				int status, u32 ext_status, void *arg);
+extern enum efct_hw_rtn
+efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		  struct efct_hw_io *io,
+		  struct efc_dma *send, u32 len,
+		  struct efc_dma *receive, struct efc_remote_node *rnode,
+		  union efct_hw_io_param_u *iparam,
+		  efct_hw_srrs_cb_t cb,
+		  void *arg);
+
 #endif /* __EFCT_H__ */
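
As a usage illustration for the target IO path documented above (not part of
the patch itself): data SGEs are loaded into the HW IO first, then the IO is
submitted with efct_hw_io_send(). A minimal sketch, where my_send_read_data(),
the OX_ID argument, and the 30 second timeout are illustrative placeholders:

/* Illustrative only: queueing read data for a target read */
static enum efct_hw_rtn
my_send_read_data(struct efct_hw *hw, struct efct_hw_io *io,
		  struct efc_remote_node *rnode, u16 ox_id, u32 len,
		  void *cb, void *arg)
{
	union efct_hw_io_param_u iparam;

	memset(&iparam, 0, sizeof(iparam));

	/* data SGEs are assumed to already be loaded via efct_hw_io_add_sge() */
	iparam.fcp_tgt.ox_id = ox_id;
	iparam.fcp_tgt.offset = 0;
	iparam.fcp_tgt.timeout = 30;	/* example timeout, in seconds */

	return efct_hw_io_send(hw, EFCT_HW_IO_TARGET_READ, io, len,
			       &iparam, rnode, cb, arg);
}
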
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 26/32] elx: efct: link statistics and SFP data
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (24 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 25/32] elx: efct: Hardware IO submission routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09 10:12   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 27/32] elx: efct: xport and hardware teardown routines James Smart
                   ` (6 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to retrieve link stats and SFP transceiver data.
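
As an illustration of how the new asynchronous interface is expected to be
consumed (a minimal sketch; my_ctx, my_link_stats_done() and
my_get_crc_errors() are placeholder names, not part of the driver):

struct my_ctx {
	struct completion done;
	u32 crc_errors;
};

static void my_link_stats_done(int status, u32 num_counters,
			       struct efct_hw_link_stat_counts *counters,
			       void *arg)
{
	struct my_ctx *ctx = arg;

	if (!status)
		ctx->crc_errors =
			counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
	complete(&ctx->done);
}

static void my_get_crc_errors(struct efct_hw *hw)
{
	struct my_ctx ctx;

	init_completion(&ctx.done);
	if (efct_hw_get_link_stats(hw, 0, 0, 0, my_link_stats_done, &ctx) ==
	    EFCT_HW_RTN_SUCCESS)
		wait_for_completion(&ctx.done);
}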

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 468 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  39 ++++
 2 files changed, 507 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 440c4fa196bf..33eefda7ba51 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -14,6 +14,40 @@
 
 #define EFCT_HW_REQUE_XRI_REGTAG	65534
 
+struct efct_hw_sfp_cb_arg {
+	void (*cb)(int status, u32 bytes_written,
+		   u8 *data, void *arg);
+	void *arg;
+	struct efc_dma payload;
+};
+
+struct efct_hw_temp_cb_arg {
+	void (*cb)(int status, u32 curr_temp,
+		   u32 crit_temp_thrshld,
+		   u32 warn_temp_thrshld,
+		   u32 norm_temp_thrshld,
+		   u32 fan_off_thrshld,
+		   u32 fan_on_thrshld,
+		   void *arg);
+	void *arg;
+};
+
+struct efct_hw_link_stat_cb_arg {
+	void (*cb)(int status,
+		   u32 num_counters,
+		struct efct_hw_link_stat_counts *counters,
+		void *arg);
+	void *arg;
+};
+
+struct efct_hw_host_stat_cb_arg {
+	void (*cb)(int status,
+		   u32 num_counters,
+		struct efct_hw_host_stat_counts *counters,
+		void *arg);
+	void *arg;
+};
+
 /* HW global data */
 struct efct_hw_global hw_global;
 
@@ -4015,3 +4049,437 @@ efct_hw_io_get_count(struct efct_hw *hw,
 
 	return count;
 }
+
+static int
+efct_hw_cb_sfp(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
+{
+	struct efct_hw_sfp_cb_arg *cb_arg = arg;
+	struct efc_dma *payload = &cb_arg->payload;
+	struct sli4_rsp_cmn_read_transceiver_data *mbox_rsp;
+	struct efct *efct = hw->os;
+	u32 bytes_written;
+
+	mbox_rsp =
+		(struct sli4_rsp_cmn_read_transceiver_data *)payload->virt;
+	bytes_written = le32_to_cpu(mbox_rsp->hdr.response_length);
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (!status && mbox_rsp->hdr.status)
+				status = mbox_rsp->hdr.status;
+			cb_arg->cb(status, bytes_written, mbox_rsp->page_data,
+				   cb_arg->arg);
+		}
+
+		dma_free_coherent(&efct->pcidev->dev,
+				  cb_arg->payload.size, cb_arg->payload.virt,
+				  cb_arg->payload.phys);
+		memset(&cb_arg->payload, 0, sizeof(struct efc_dma));
+		kfree(cb_arg);
+	}
+
+	kfree(mqe);
+	return 0;
+}
+
+/* Function to retrieve the SFP information */
+enum efct_hw_rtn
+efct_hw_get_sfp(struct efct_hw *hw, u16 page,
+		void (*cb)(int, u32, u8 *, void *), void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_sfp_cb_arg *cb_arg;
+	u8 *mbxdata;
+	struct efct *efct = hw->os;
+	struct efc_dma *dma;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+	memset(cb_arg, 0, sizeof(struct efct_hw_sfp_cb_arg));
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* payload holds the non-embedded portion */
+	dma = &cb_arg->payload;
+	dma->size = sizeof(struct sli4_rsp_cmn_read_transceiver_data);
+	dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+				       dma->size, &dma->phys, GFP_DMA);
+	if (!dma->virt) {
+		kfree(cb_arg);
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/* Send the HW command */
+	if (!sli_cmd_common_read_transceiver_data(&hw->sli, mbxdata,
+						 SLI4_BMBX_SIZE, page,
+						 &cb_arg->payload))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_sfp, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os,
+			      "READ_TRANSCEIVER_DATA failed with status %d\n",
+			     rc);
+		dma_free_coherent(&efct->pcidev->dev,
+				  cb_arg->payload.size, cb_arg->payload.virt,
+				  cb_arg->payload.phys);
+		memset(&cb_arg->payload, 0, sizeof(struct efc_dma));
+		kfree(cb_arg);
+		kfree(mbxdata);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_temp(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_dump4 *mbox_rsp = (struct sli4_cmd_dump4 *)mqe;
+	struct efct_hw_temp_cb_arg *cb_arg = arg;
+	u32 curr_temp = le32_to_cpu(mbox_rsp->resp_data[0]); /* word 5 */
+	u32 crit_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[1]); /* word 6 */
+	u32 warn_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[2]); /* word 7 */
+	u32 norm_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[3]); /* word 8 */
+	u32 fan_off_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[4]);   /* word 9 */
+	u32 fan_on_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[5]);    /* word 10 */
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status,
+				   curr_temp,
+				   crit_temp_thrshld,
+				   warn_temp_thrshld,
+				   norm_temp_thrshld,
+				   fan_off_thrshld,
+				   fan_on_thrshld,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return 0;
+}
+
+/* Function to retrieve the temperature information */
+enum efct_hw_rtn
+efct_hw_get_temperature(struct efct_hw *hw,
+			void (*cb)(int status,
+				   u32 curr_temp,
+				u32 crit_temp_thrshld,
+				u32 warn_temp_thrshld,
+				u32 norm_temp_thrshld,
+				u32 fan_off_thrshld,
+				u32 fan_on_thrshld,
+				void *arg),
+			void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_temp_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_dump_type4(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+			       SLI4_WKI_TAG_SAT_TEM))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_temp, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "DUMP_TYPE4 failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_link_stat(struct efct_hw *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_link_stats *mbox_rsp;
+	struct efct_hw_link_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_link_stat_counts counts[EFCT_HW_LINK_STAT_MAX];
+	u32 num_counters;
+	u32 mbox_rsp_flags = 0;
+
+	mbox_rsp = (struct sli4_cmd_read_link_stats *)mqe;
+	mbox_rsp_flags = le32_to_cpu(mbox_rsp->dw1_flags);
+	num_counters = (mbox_rsp_flags & SLI4_READ_LNKSTAT_GEC) ? 20 : 13;
+	memset(counts, 0, sizeof(struct efct_hw_link_stat_counts) *
+				 EFCT_HW_LINK_STAT_MAX);
+
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W02OF);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W03OF);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W04OF);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W05OF);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W06OF);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W07OF);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W08OF);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W09OF);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W10OF);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W11OF);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W12OF);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W13OF);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W14OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W15OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W16OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W17OF);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W18OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W19OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W20OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W21OF);
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->linkfail_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssync_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssignal_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->primseq_errcnt);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->inval_txword_errcnt);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].counter =
+		le32_to_cpu(mbox_rsp->crc_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].counter =
+		le32_to_cpu(mbox_rsp->primseq_eventtimeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->elastic_bufoverrun_errcnt);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->arbit_fc_al_timeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_rx_buftor_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_rx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofa_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofdti_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofni_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_soff_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_aer_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_rpi_rescnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_xri_rescnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return 0;
+}
+
+enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw,
+		       u8 req_ext_counters,
+		       u8 clear_overflow_flags,
+		       u8 clear_all_counters,
+		       void (*cb)(int status,
+				  u32 num_counters,
+			struct efct_hw_link_stat_counts *counters,
+			void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_link_stat_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_link_stats(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+				    req_ext_counters,
+				    clear_overflow_flags,
+				    clear_all_counters))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_link_stat, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_host_stat(struct efct_hw *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_status *mbox_rsp =
+					(struct sli4_cmd_read_status *)mqe;
+	struct efct_hw_host_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_host_stat_counts counts[EFCT_HW_HOST_STAT_MAX];
+	u32 num_counters = EFCT_HW_HOST_STAT_MAX;
+
+	memset(counts, 0, sizeof(struct efct_hw_host_stat_counts) *
+		   EFCT_HW_HOST_STAT_MAX);
+
+	counts[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_orig);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_resp);
+	counts[EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_p_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_F_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_f_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_rq_buf_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_rq_timeout_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_xri_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_xri_pool_cnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return 0;
+}
+
+enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
+		       void (*cb)(int status,
+				  u32 num_counters,
+				  struct efct_hw_host_stat_counts *counters,
+				  void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_host_stat_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command to get the host stats */
+	if (!sli_cmd_read_status(&hw->sli, mbxdata, SLI4_BMBX_SIZE, cc))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_host_stat, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "READ_HOST_STATS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 1a019594c471..278f241e8705 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -970,5 +970,44 @@ efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
 		  union efct_hw_io_param_u *iparam,
 		  efct_hw_srrs_cb_t cb,
 		  void *arg);
+/* Function for retrieving SFP data */
+extern enum efct_hw_rtn
+efct_hw_get_sfp(struct efct_hw *hw, u16 page,
+		void (*cb)(int, u32, u8 *, void *), void *arg);
+
+/* Function for retrieving temperature data */
+extern enum efct_hw_rtn
+efct_hw_get_temperature(struct efct_hw *hw,
+			void (*efct_hw_temp_cb_t)(int status,
+						  u32 curr_temp,
+				u32 crit_temp_thrshld,
+				u32 warn_temp_thrshld,
+				u32 norm_temp_thrshld,
+				u32 fan_off_thrshld,
+				u32 fan_on_thrshld,
+				void *arg),
+			void *arg);
+
+/* Function for retrieving link statistics */
+extern enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw,
+		       u8 req_ext_counters,
+		u8 clear_overflow_flags,
+		u8 clear_all_counters,
+		void (*efct_hw_link_stat_cb_t)(int status,
+					       u32 num_counters,
+			struct efct_hw_link_stat_counts *counters,
+			void *arg),
+		void *arg);
+/* Function for retrieving host statistics */
+extern enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw,
+		       u8 cc,
+		void (*efct_hw_host_stat_cb_t)(int status,
+					       u32 num_counters,
+			struct efct_hw_host_stat_counts *counters,
+			void *arg),
+		void *arg);
+
 
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 27/32] elx: efct: xport and hardware teardown routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (25 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 26/32] elx: efct: link statistics and SFP data James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09 10:14   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 28/32] elx: efct: IO timeout handling routines James Smart
                   ` (5 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to detach xport and hardware objects.
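
For context, a minimal sketch of the expected teardown ordering on device
removal (my_device_remove() is a placeholder; the actual remove path is wired
up elsewhere in the series):

/* Illustrative only: approximate teardown order on device removal */
static void my_device_remove(struct efct *efct)
{
	struct efct_xport *xport = efct->xport;

	/* quiesce the link so no new discovery or IO work arrives */
	efct_xport_control(xport, EFCT_XPORT_SHUTDOWN);

	/* unregister from SCSI/LIO, stop timers, tear down the hardware */
	efct_xport_detach(xport);

	/* finally release the transport object itself */
	efct_xport_free(xport);
	efct->xport = NULL;
}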

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c    | 437 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |  31 +++
 drivers/scsi/elx/efct/efct_xport.c | 389 +++++++++++++++++++++++++++++++++
 3 files changed, 857 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 33eefda7ba51..fb33317caa0d 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -4483,3 +4483,440 @@ efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
 
 	return rc;
 }
+
+static int
+efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
+			void  *arg)
+{
+	kfree(mqe);
+	return 0;
+}
+
+/* Control a port (initialize, shutdown, or set link configuration) */
+enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		void (*cb)(int status, uintptr_t value, void *arg),
+		void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+
+	switch (ctrl) {
+	case EFCT_HW_PORT_INIT:
+	{
+		u8	*init_link;
+		u32 speed = 0;
+		u8 reset_alpa = 0;
+
+		u8	*cfg_link;
+
+		cfg_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!cfg_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		if (!sli_cmd_config_link(&hw->sli, cfg_link,
+					SLI4_BMBX_SIZE))
+			rc = efct_hw_command(hw, cfg_link,
+					     EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control,
+					     NULL);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(cfg_link);
+			efc_log_err(hw->os, "CONFIG_LINK failed\n");
+			break;
+		}
+		speed = hw->config.speed;
+		reset_alpa = (u8)(value & 0xff);
+
+		/* Allocate a new buffer for the init_link command */
+		init_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!init_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		rc = EFCT_HW_RTN_ERROR;
+		if (!sli_cmd_init_link(&hw->sli, init_link, SLI4_BMBX_SIZE,
+				      speed, reset_alpa))
+			rc = efct_hw_command(hw, init_link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(init_link);
+			efc_log_err(hw->os, "INIT_LINK failed\n");
+		}
+		break;
+	}
+	case EFCT_HW_PORT_SHUTDOWN:
+	{
+		u8	*down_link;
+
+		down_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!down_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		if (!sli_cmd_down_link(&hw->sli, down_link, SLI4_BMBX_SIZE))
+			rc = efct_hw_command(hw, down_link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(down_link);
+			efc_log_err(hw->os, "DOWN_LINK failed\n");
+		}
+		break;
+	}
+	default:
+		efc_log_test(hw->os, "unhandled control %#x\n", ctrl);
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_teardown(struct efct_hw *hw)
+{
+	u32	i = 0;
+	u32	iters = 10;
+	u32	max_rpi;
+	u32 destroy_queues;
+	u32 free_memory;
+	struct efc_dma *dma;
+	struct efct *efct = hw->os;
+
+	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+
+	/* shutdown target wqe timer */
+	shutdown_target_wqe_timer(hw);
+
+	/* Cancel watchdog timer if enabled */
+	if (hw->watchdog_timeout) {
+		hw->watchdog_timeout = 0;
+		efct_hw_config_watchdog_timer(hw);
+	}
+
+	/* Cancel Sliport Healthcheck */
+	if (hw->sliport_healthcheck) {
+		hw->sliport_healthcheck = 0;
+		efct_hw_config_sli_port_health_check(hw, 0, 0);
+	}
+
+	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+
+		efct_hw_flush(hw);
+
+		/*
+		 * If there are outstanding commands, wait for them to complete
+		 */
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some cmds still pending on MQ queue\n");
+
+		/* Cancel any remaining commands */
+		efct_hw_command_cancel(hw);
+	} else {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+	}
+
+	max_rpi = hw->sli.qinfo.max_qcount[SLI_RSRC_RPI];
+	if (hw->rpi_ref) {
+		for (i = 0; i < max_rpi; i++) {
+			u32 count;
+
+			count = atomic_read(&hw->rpi_ref[i].rpi_count);
+			if (count)
+				efc_log_debug(hw->os,
+					       "non-zero ref [%d]=%d\n",
+					       i, count);
+		}
+		kfree(hw->rpi_ref);
+		hw->rpi_ref = NULL;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  hw->rnode_mem.size, hw->rnode_mem.virt,
+			  hw->rnode_mem.phys);
+	memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
+
+	if (hw->io) {
+		for (i = 0; i < hw->config.n_io; i++) {
+			if (hw->io[i] && hw->io[i]->sgl &&
+			    hw->io[i]->sgl->virt) {
+				dma_free_coherent(&efct->pcidev->dev,
+						  hw->io[i]->sgl->size,
+						  hw->io[i]->sgl->virt,
+						  hw->io[i]->sgl->phys);
+				memset(hw->io[i]->sgl, 0,
+				       sizeof(struct efc_dma));
+			}
+			kfree(hw->io[i]);
+			hw->io[i] = NULL;
+		}
+		kfree(hw->io);
+		hw->io = NULL;
+		kfree(hw->wqe_buffs);
+		hw->wqe_buffs = NULL;
+	}
+
+	dma = &hw->xfer_rdy;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	dma = &hw->dump_sges;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	dma = &hw->loop_map;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	for (i = 0; i < hw->wq_count; i++)
+		sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->rq_count; i++)
+		sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->mq_count; i++)
+		sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->cq_count; i++)
+		sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
+			       free_memory);
+
+	efct_hw_qtop_free(hw->qtop);
+
+	/* Free rq buffers */
+	efct_hw_rx_free(hw);
+
+	efct_hw_queue_teardown(hw);
+
+	if (sli_teardown(&hw->sli))
+		efc_log_err(hw->os, "SLI teardown failed\n");
+
+	/* record the fact that the queues are non-functional */
+	hw->state = EFCT_HW_STATE_UNINITIALIZED;
+
+	/* free sequence free pool */
+	efct_array_free(hw->seq_pool);
+	hw->seq_pool = NULL;
+
+	/* free hw_wq_callback pool */
+	efct_pool_free(hw->wq_reqtag_pool);
+
+	/* Mark HW setup as not having been called */
+	hw->hw_setup_called = false;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_sli_reset(struct efct_hw *hw, enum efct_hw_reset reset,
+		  enum efct_hw_state prev_state)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	switch (reset) {
+	case EFCT_HW_RESET_FUNCTION:
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_RESET_FIRMWARE:
+		efc_log_debug(hw->os, "issuing firmware reset\n");
+		if (sli_fw_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_soft_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		/*
+		 * Because the FW reset leaves the FW in a non-running state,
+		 * follow that with a regular reset.
+		 */
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	default:
+		efc_log_err(hw->os,
+			     "unknown reset type - no reset performed\n");
+		hw->state = prev_state;
+		rc = EFCT_HW_RTN_INVALID_ARG;
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset)
+{
+	u32	i;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32	iters;
+	enum efct_hw_state prev_state = hw->state;
+	unsigned long flags = 0;
+	struct efct_hw_io *temp;
+	u32 destroy_queues;
+	u32 free_memory;
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE)
+		efc_log_test(hw->os,
+			      "HW state %d is not active\n", hw->state);
+
+	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+	hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
+
+	/*
+	 * If the prev_state is already reset/teardown in progress,
+	 * don't continue further
+	 */
+	if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
+	    prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
+		return efct_hw_sli_reset(hw, reset, prev_state);
+
+	/* shutdown target wqe timer */
+	shutdown_target_wqe_timer(hw);
+
+	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+		efct_hw_flush(hw);
+
+		/*
+		 * If an mailbox command requiring a DMA is outstanding
+		 * (SFP/DDM), then the FW will UE when the reset is issued.
+		 * So attempt to complete all mailbox commands.
+		 */
+		iters = 10;
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some commands still pending on MQ queue\n");
+	}
+
+	/* Reset the chip */
+	rc = efct_hw_sli_reset(hw, reset, prev_state);
+	if (rc == EFCT_HW_RTN_INVALID_ARG)
+		return EFCT_HW_RTN_ERROR;
+
+	/* Not safe to walk command/io lists unless they've been initialized */
+	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+		efct_hw_command_cancel(hw);
+
+		/* Try to clean up the io_inuse list */
+		efct_hw_io_cancel(hw);
+
+		efct_hw_link_event_init(hw);
+
+		spin_lock_irqsave(&hw->io_lock, flags);
+		/*
+		 * The io lists should be empty, but remove any that
+		 * didn't get cleaned up.
+		 */
+		while (!list_empty(&hw->io_timed_wqe)) {
+			temp = list_first_entry(&hw->io_timed_wqe,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->wqe_link);
+		}
+
+		while (!list_empty(&hw->io_free)) {
+			temp = list_first_entry(&hw->io_free,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+
+		while (!list_empty(&hw->io_wait_free)) {
+			temp = list_first_entry(&hw->io_wait_free,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		spin_unlock_irqrestore(&hw->io_lock, flags);
+
+		for (i = 0; i < hw->wq_count; i++)
+			sli_queue_free(&hw->sli, &hw->wq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->rq_count; i++)
+			sli_queue_free(&hw->sli, &hw->rq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->hw_rq_count; i++) {
+			struct hw_rq *rq = hw->hw_rq[i];
+
+			if (rq->rq_tracker) {
+				u32 j;
+
+				for (j = 0; j < rq->entry_count; j++)
+					rq->rq_tracker[j] = NULL;
+			}
+		}
+
+		for (i = 0; i < hw->mq_count; i++)
+			sli_queue_free(&hw->sli, &hw->mq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->cq_count; i++)
+			sli_queue_free(&hw->sli, &hw->cq[i],
+				       destroy_queues, free_memory);
+
+		for (i = 0; i < hw->eq_count; i++)
+			sli_queue_free(&hw->sli, &hw->eq[i],
+				       destroy_queues, free_memory);
+
+		/* Free rq buffers */
+		efct_hw_rx_free(hw);
+
+		/* Teardown the HW queue topology */
+		efct_hw_queue_teardown(hw);
+
+		/*
+		 * Reset the request tag pool, the HW IO request tags
+		 * are reassigned in efct_hw_setup_io()
+		 */
+		efct_hw_reqtag_reset(hw);
+	} else {
+		/* Free rq buffers */
+		efct_hw_rx_free(hw);
+	}
+
+	return rc;
+}
+
+int
+efct_hw_get_num_eq(struct efct_hw *hw)
+{
+	return hw->eq_count;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 278f241e8705..862504b96a23 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -1009,5 +1009,36 @@ efct_hw_get_host_stats(struct efct_hw *hw,
 			void *arg),
 		void *arg);
 
+struct hw_eq *efct_hw_new_eq(struct efct_hw *hw, u32 entry_count);
+struct hw_cq *efct_hw_new_cq(struct hw_eq *eq, u32 entry_count);
+extern u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count);
+struct hw_mq *efct_hw_new_mq(struct hw_cq *cq, u32 entry_count);
+extern struct hw_wq
+*efct_hw_new_wq(struct hw_cq *cq, u32 entry_count,
+		u32 class, u32 ulp);
+extern struct hw_rq
+*efct_hw_new_rq(struct hw_cq *cq, u32 entry_count, u32 ulp);
+extern u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count);
+void efct_hw_del_eq(struct hw_eq *eq);
+void efct_hw_del_cq(struct hw_cq *cq);
+void efct_hw_del_mq(struct hw_mq *mq);
+void efct_hw_del_wq(struct hw_wq *wq);
+void efct_hw_del_rq(struct hw_rq *rq);
+void efct_hw_queue_dump(struct efct_hw *hw);
+void efct_hw_queue_teardown(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_teardown(struct efct_hw *hw);
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset);
+int efct_hw_get_num_eq(struct efct_hw *hw);
+
+extern enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		void (*cb)(int status, uintptr_t value, void *arg),
+		void *arg);
 
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index e6d6f2000168..6d8e0cefa903 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -146,6 +146,80 @@ efct_xport_attach(struct efct_xport *xport)
 }
 
 static void
+efct_xport_link_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_link_stat_counts *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_host_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_host_stat_counts *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_async_link_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_link_stat_counts *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+}
+
+static void
+efct_xport_async_host_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_host_stat_counts *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+}
+
+static void
 efct_xport_config_stats_timer(struct efct *efct);
 
 static void
@@ -585,3 +659,318 @@ efct_scsi_release_fc_transport(void)
 
 	return 0;
 }
+
+int
+efct_xport_detach(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+
+	/* free resources associated with target-server and initiator-client */
+	efct_scsi_tgt_del_device(efct);
+
+	efct_scsi_del_device(efct);
+
+	/* Shutdown FC statistics timer */
+	del_timer(&efct->xport->stats_timer);
+
+	efct_hw_teardown(&efct->hw);
+
+	efct_xport_delete_debugfs(efct);
+
+	return 0;
+}
+
+static void
+efct_xport_domain_free_cb(struct efc *efc, void *arg)
+{
+	struct completion *done = arg;
+
+	complete(done);
+}
+
+static int
+efct_xport_post_node_event_cb(struct efct_hw *hw, int status,
+			      u8 *mqe, void *arg)
+{
+	struct efct_xport_post_node_event *payload = arg;
+
+	if (payload) {
+		efc_node_post_shutdown(payload->node, payload->evt,
+				       payload->context);
+		complete(&payload->done);
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+	}
+	return 0;
+}
+
+static void
+efct_xport_force_free(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	struct efc *efc = efct->efcport;
+
+	efc_log_debug(efct, "reset required, do force shutdown\n");
+
+	if (!efc->domain) {
+		efc_log_err(efct, "Domain is already freed\n");
+		return;
+	}
+
+	efc_domain_force_free(efc->domain);
+}
+
+int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...)
+{
+	u32 rc = 0;
+	struct efct *efct = NULL;
+	va_list argp;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_PORT_ONLINE: {
+		/* Bring the port on-line */
+		rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
+					  NULL, NULL);
+		if (rc)
+			efc_log_err(efct,
+				     "%s: Can't init port\n", efct->desc);
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+	case EFCT_XPORT_PORT_OFFLINE: {
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL))
+			efc_log_err(efct, "port shutdown failed\n");
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+
+	case EFCT_XPORT_SHUTDOWN: {
+		struct completion done;
+		u32 reset_required;
+		unsigned long timeout;
+
+		/* if a PHYSDEV reset was performed (e.g. hw dump), will affect
+		 * all PCI functions; orderly shutdown won't work,
+		 * just force free
+		 */
+		if (efct_hw_get(&efct->hw, EFCT_HW_RESET_REQUIRED,
+				&reset_required) != EFCT_HW_RTN_SUCCESS)
+			reset_required = 0;
+
+		if (reset_required) {
+			efc_log_debug(efct,
+				       "reset required, do force shutdown\n");
+			efct_xport_force_free(xport);
+			break;
+		}
+		init_completion(&done);
+
+		efc_register_domain_free_cb(efct->efcport,
+					efct_xport_domain_free_cb, &done);
+
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL)) {
+			efc_log_debug(efct,
+				       "port shutdown failed, do force shutdown\n");
+			efct_xport_force_free(xport);
+		} else {
+			efc_log_debug(efct,
+				       "Waiting %d seconds for domain shutdown.\n",
+			(EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC / 1000000));
+
+			timeout = usecs_to_jiffies(
+					EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC);
+			if (!wait_for_completion_timeout(&done, timeout)) {
+				efc_log_debug(efct,
+					       "Domain shutdown timed out!!\n");
+				efct_xport_force_free(xport);
+			}
+		}
+
+		efc_register_domain_free_cb(efct->efcport, NULL, NULL);
+
+		/* Free up any saved virtual ports */
+		efc_vport_del_all(efct->efcport);
+		break;
+	}
+
+	/*
+	 * POST_NODE_EVENT:  post an event to a node object
+	 *
+	 * This transport function is used to post an event to a node object.
+	 * It does this by submitting a NOP mailbox command to defer execution
+	 * to the interrupt context (thereby enforcing the serialized execution
+	 * of event posting to the node state machine instances)
+	 */
+	case EFCT_XPORT_POST_NODE_EVENT: {
+		struct efc_node *node;
+		u32	evt;
+		void *context;
+		struct efct_xport_post_node_event *payload = NULL;
+		struct efct *efct;
+		struct efct_hw *hw;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		node = va_arg(argp, struct efc_node *);
+		evt = va_arg(argp, u32);
+		context = va_arg(argp, void *);
+		va_end(argp);
+
+		payload = kmalloc(sizeof(*payload), GFP_KERNEL);
+		if (!payload)
+			return -1;
+
+		memset(payload, 0, sizeof(*payload));
+
+		efct = node->efc->base;
+		hw = &efct->hw;
+
+		/* if node's state machine is disabled,
+		 * don't bother continuing
+		 */
+		if (!node->sm.current_state) {
+			efc_log_test(efct, "node %p state machine disabled\n",
+				      node);
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Setup payload */
+		init_completion(&payload->done);
+
+		/* one for self and one for callback */
+		atomic_set(&payload->refcnt, 2);
+		payload->node = node;
+		payload->evt = evt;
+		payload->context = context;
+
+		if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
+				       payload)) {
+			efc_log_test(efct, "efct_hw_async_call failed\n");
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Wait for completion */
+		if (wait_for_completion_interruptible(&payload->done)) {
+			efc_log_test(efct,
+				      "POST_NODE_EVENT: completion failed\n");
+			rc = -1;
+		}
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+
+		break;
+	}
+	/*
+	 * Set wwnn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWNN_SET: {
+		u64 wwnn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwnn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWNN %016llx\n", wwnn);
+		xport->req_wwnn = wwnn;
+
+		break;
+	}
+	/*
+	 * Set wwpn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWPN_SET: {
+		u64 wwpn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwpn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWPN %016llx\n", wwpn);
+		xport->req_wwpn = wwpn;
+
+		break;
+	}
+
+	default:
+		break;
+	}
+	return rc;
+}
+
+void
+efct_xport_free(struct efct_xport *xport)
+{
+	if (xport) {
+		efct_io_pool_free(xport->io_pool);
+
+		kfree(xport);
+	}
+}
+
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template)
+{
+	if (transport_template)
+		pr_err("releasing transport layer\n");
+
+	/* Releasing FC transport */
+	fc_release_transport(transport_template);
+}
+
+static void
+efct_xport_remove_host(struct Scsi_Host *shost)
+{
+	/*
+	 * Remove host from FC Transport layer
+	 *
+	 * 1. fc_remove_host()
+	 * a. for each vport: queue vport_delete_work (fc_vport_sched_delete())
+	 *	b. for each rport: queue rport_delete_work
+	 *		(fc_rport_final_delete())
+	 *	c. scsi_flush_work()
+	 * 2. fc_rport_final_delete()
+	 * a. fc_terminate_rport_io
+	 *		i. call LLDD's terminate_rport_io()
+	 *		ii. scsi_target_unblock()
+	 *	b. fc_starget_delete()
+	 *		i. fc_terminate_rport_io()
+	 *			1. call LLDD's terminate_rport_io()
+	 *			2. scsi_target_unblock()
+	 *		ii. scsi_remove_target()
+	 *      c. invoke LLDD devloss callback
+	 *      d. transport_remove_device(&rport->dev)
+	 *      e. device_del(&rport->dev)
+	 *      f. transport_destroy_device(&rport->dev)
+	 *      g. put_device(&shost->shost_gendev) (for fc_host->rport list)
+	 *      h. put_device(&rport->dev)
+	 */
+	fc_remove_host(shost);
+}
+
+int efct_scsi_del_device(struct efct *efct)
+{
+	if (efct->shost) {
+		efc_log_debug(efct, "Unregistering with Transport Layer\n");
+		efct_xport_remove_host(efct->shost);
+		efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
+		scsi_remove_host(efct->shost);
+		scsi_host_put(efct->shost);
+		efct->shost = NULL;
+	}
+	return 0;
+}
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 28/32] elx: efct: IO timeout handling routines
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (26 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 27/32] elx: efct: xport and hardware teardown routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09 11:27   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 29/32] elx: efct: Firmware update, async link processing James Smart
                   ` (4 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Support for a WQE timer used to detect and handle WQE and IO timeouts.
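
For reviewers, a condensed sketch (illustration only, not part of the
patch) of the shape the code below takes: the timer callback never walks
the IO lists itself; it defers the scan to the mailbox completion context
by posting an async NOP command, and the NOP completion re-arms the timer.
Names mirror the driver code, but the two helpers here are hypothetical:

	/* runs in mailbox completion context once the NOP completes */
	static int check_wqe_timeouts_cb(struct efct_hw *hw, int status,
					 u8 *mqe, void *arg)
	{
		/*
		 * Walk hw->io_timed_wqe under hw->io_lock, abort any IO
		 * whose tgt_wqe_timeout has elapsed, then re-arm the timer
		 * unless a shutdown has been requested.
		 */
		if (!hw->active_wqe_timer_shutdown)
			mod_timer(&hw->wqe_timer,
				  jiffies +
				  msecs_to_jiffies(EFCT_HW_WQ_TIMER_PERIOD_MS));
		return 0;
	}

	/* timer callback: softirq context, so only defer the real work */
	static void wqe_timer_fires(struct timer_list *t)
	{
		struct efct_hw *hw = from_timer(hw, t, wqe_timer);

		if (efct_hw_async_call(hw, check_wqe_timeouts_cb, hw))
			efc_log_test(hw->os, "efct_hw_async_call failed\n");
	}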

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 187 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 187 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index fb33317caa0d..c18bda1351cc 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -276,6 +276,98 @@ efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
 		     j, hw->config.filter_def[j], i, id);
 }
 
+static void
+target_wqe_timer_cb(struct timer_list *t);
+
+static int
+target_wqe_timer_nop_cb(struct efct_hw *hw, int status,
+			u8 *mqe, void *arg)
+{
+	struct efct_hw_io *io = NULL;
+	struct efct_hw_io *io_next = NULL;
+	u64 ticks_current = jiffies_64;
+	u32 sec_elapsed;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	unsigned long flags = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status st=%x hdr=%x\n",
+			       status,
+			       le16_to_cpu(hdr->status));
+		/* go ahead and proceed with wqe timer checks... */
+	}
+
+	/* loop through active WQE list and check for timeouts */
+	spin_lock_irqsave(&hw->io_lock, flags);
+	list_for_each_entry_safe(io, io_next, &hw->io_timed_wqe, wqe_link) {
+		sec_elapsed = ((u32)(ticks_current - io->submit_ticks) / HZ);
+
+		/*
+		 * If elapsed time > timeout, abort it. No need to check type
+		 * since it wouldn't be on this list unless it was a target WQE
+		 */
+		if (sec_elapsed > io->tgt_wqe_timeout) {
+			efc_log_test(hw->os,
+				      "IO timeout xri=0x%x tag=0x%x type=%d\n",
+				     io->indicator, io->reqtag, io->type);
+
+			/*
+			 * remove from active_wqe list so won't try to abort
+			 * again
+			 */
+			list_del(&io->list_entry);
+
+			/* save status of timed_out for when abort completes */
+			io->status_saved = true;
+			io->saved_status =
+					 SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT;
+			io->saved_ext = 0;
+			io->saved_len = 0;
+
+			/* now abort outstanding IO */
+			efct_hw_io_abort(hw, io, false, NULL, NULL);
+		}
+		/*
+		 * need to go through entire list since each IO could have a
+		 * different timeout value
+		 */
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	/* if we're not in the middle of shutting down, schedule next timer */
+	if (!hw->active_wqe_timer_shutdown) {
+		timer_setup(&hw->wqe_timer,
+			    &target_wqe_timer_cb, 0);
+
+		mod_timer(&hw->wqe_timer,
+			  jiffies +
+			  msecs_to_jiffies(EFCT_HW_WQ_TIMER_PERIOD_MS));
+	}
+	hw->in_active_wqe_timer = false;
+	return 0;
+}
+
+static void
+target_wqe_timer_cb(struct timer_list *t)
+{
+	struct efct_hw *hw = from_timer(hw, t, wqe_timer);
+
+	/*
+	 * delete existing timer; will kick off new timer after checking wqe
+	 * timeouts
+	 */
+	hw->in_active_wqe_timer = true;
+	del_timer(&hw->wqe_timer);
+
+	/*
+	 * Forward timer callback to execute in the mailbox completion
+	 * processing context
+	 */
+	if (efct_hw_async_call(hw, target_wqe_timer_nop_cb, hw))
+		efc_log_test(hw->os, "efct_hw_async_call failed\n");
+}
+
 static inline void
 efct_hw_init_free_io(struct efct_hw_io *io)
 {
@@ -4572,6 +4664,40 @@ efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
 	return rc;
 }
 
+static void
+shutdown_target_wqe_timer(struct efct_hw *hw)
+{
+	u32	iters = 100;
+
+	if (hw->config.emulate_tgt_wqe_timeout) {
+		/*
+		 * request active wqe timer shutdown, then wait for it to
+		 * complete
+		 */
+		hw->active_wqe_timer_shutdown = true;
+
+		/*
+		 * delete WQE timer and wait for timer handler to complete
+		 * (if necessary)
+		 */
+		del_timer(&hw->wqe_timer);
+
+		/* now wait for timer handler to complete (if necessary) */
+		while (hw->in_active_wqe_timer && iters) {
+			/*
+			 * if we happen to have just sent a NOP mbox cmd, make
+			 * sure completions are being processed
+			 */
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (iters == 0)
+			efc_log_test(hw->os,
+				      "Failed to shutdown active wqe timer\n");
+	}
+}
+
 enum efct_hw_rtn
 efct_hw_teardown(struct efct_hw *hw)
 {
@@ -4920,3 +5046,64 @@ efct_hw_get_num_eq(struct efct_hw *hw)
 {
 	return hw->eq_count;
 }
+
+/* HW async call context structure */
+struct efct_hw_async_call_ctx {
+	efct_hw_async_cb_t callback;
+	void *arg;
+	u8 cmd[SLI4_BMBX_SIZE];
+};
+
+static void
+efct_hw_async_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+	struct efct_hw_async_call_ctx *ctx = arg;
+
+	if (ctx) {
+		if (ctx->callback)
+			(*ctx->callback)(hw, status, mqe, ctx->arg);
+
+		kfree(ctx);
+	}
+}
+
+/*
+ * Post a NOP mbox cmd; the callback with argument is invoked upon completion
+ * while in the event processing context.
+ */
+int
+efct_hw_async_call(struct efct_hw *hw,
+		   efct_hw_async_cb_t callback, void *arg)
+{
+	int rc = 0;
+	struct efct_hw_async_call_ctx *ctx;
+
+	/*
+	 * Allocate a callback context (which includes the mbox cmd buffer);
+	 * it must be persistent, as the mbox cmd submission may be
+	 * queued and executed later.
+	 */
+	ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+	if (!ctx)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(ctx, 0, sizeof(*ctx));
+	ctx->callback = callback;
+	ctx->arg = arg;
+
+	/* Build and send a NOP mailbox command */
+	if (sli_cmd_common_nop(&hw->sli, ctx->cmd,
+			       sizeof(ctx->cmd), 0)) {
+		efc_log_err(hw->os, "COMMON_NOP format failure\n");
+		kfree(ctx);
+		return -1;
+	}
+
+	if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
+			    ctx)) {
+		efc_log_err(hw->os, "COMMON_NOP command failure\n");
+		kfree(ctx);
+		rc = -1;
+	}
+	return rc;
+}
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 29/32] elx: efct: Firmware update, async link processing
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (27 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 28/32] elx: efct: IO timeout handling routines James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09 11:45   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 30/32] elx: efct: scsi_transport_fc host interface support James Smart
                   ` (3 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Handling of async link events.
Registration of VFI, VPI and RPI resources.
Firmware update helper routines.
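
For reviewers, the VPI/VFI/RPI paths below all follow one submission
shape: format the mailbox command into a SLI4_BMBX_SIZE buffer, post it
with EFCT_CMD_NOWAIT plus a completion callback, and release the buffer
and SLI resource if either step fails. A condensed sketch of that shape,
using the REG_VPI step as the example (illustrative only, not part of the
patch; the wrapper name is hypothetical):

	static void example_attach_reg_vpi(struct efc_sli_port *sport,
					   void *data)
	{
		struct efct_hw *hw = sport->hw;

		/* 1. format the command into the SLI4_BMBX_SIZE buffer */
		if (sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
				    sport->fc_id, sport->sli_wwpn,
				    sport->indicator,
				    sport->domain->indicator, false))
			goto fail;

		/* 2. post it; the callback runs in completion context */
		if (efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
				    efct_hw_port_attach_reg_vpi_cb, sport))
			goto fail;

		return;
	fail:
		/* 3. on any failure, free the buffer and the VPI resource */
		efct_hw_port_free_resources(sport, EFC_HW_PORT_ATTACH_FAIL,
					    data);
	}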

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_hw.c | 1633 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |   57 +-
 2 files changed, 1689 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index c18bda1351cc..23d55d0d26c3 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -48,6 +48,12 @@ struct efct_hw_host_stat_cb_arg {
 	void *arg;
 };
 
+struct efct_hw_fw_wr_cb_arg {
+	void (*cb)(int status, u32 bytes_written,
+		   u32 change_status, void *arg);
+	void *arg;
+};
+
 /* HW global data */
 struct efct_hw_global hw_global;
 
@@ -180,6 +186,175 @@ efct_hw_read_max_dump_size(struct efct_hw *hw)
 	return EFCT_HW_RTN_SUCCESS;
 }
 
+static int
+__efct_read_topology_cb(struct efct_hw *hw, int status,
+			u8 *mqe, void *arg)
+{
+	struct sli4_cmd_read_topology *read_topo =
+				(struct sli4_cmd_read_topology *)mqe;
+	u8 speed;
+	struct efc_domain_record drec = {0};
+	struct efct *efct = hw->os;
+
+	if (status || le16_to_cpu(read_topo->hdr.status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
+			       status,
+			       le16_to_cpu(read_topo->hdr.status));
+		kfree(mqe);
+		return -1;
+	}
+
+	switch (le32_to_cpu(read_topo->dw2_attentype) &
+		SLI4_READTOPO_ATTEN_TYPE) {
+	case SLI4_READ_TOPOLOGY_LINK_UP:
+		hw->link.status = SLI_LINK_STATUS_UP;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_DOWN:
+		hw->link.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
+		hw->link.status = SLI_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		hw->link.status = SLI_LINK_STATUS_MAX;
+		break;
+	}
+
+	switch (read_topo->topology) {
+	case SLI4_READ_TOPOLOGY_NPORT:
+		hw->link.topology = SLI_LINK_TOPO_NPORT;
+		break;
+	case SLI4_READ_TOPOLOGY_FC_AL:
+		hw->link.topology = SLI_LINK_TOPO_LOOP;
+		if (hw->link.status == SLI_LINK_STATUS_UP)
+			hw->link.loop_map = hw->loop_map.virt;
+		hw->link.fc_id = read_topo->acquired_al_pa;
+		break;
+	default:
+		hw->link.topology = SLI_LINK_TOPO_MAX;
+		break;
+	}
+
+	hw->link.medium = SLI_LINK_MEDIUM_FC;
+
+	speed = (le32_to_cpu(read_topo->currlink_state) &
+		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
+	switch (speed) {
+	case SLI4_READ_TOPOLOGY_SPEED_1G:
+		hw->link.speed =  1 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_2G:
+		hw->link.speed =  2 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_4G:
+		hw->link.speed =  4 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_8G:
+		hw->link.speed =  8 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_16G:
+		hw->link.speed = 16 * 1000;
+		hw->link.loop_map = NULL;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_32G:
+		hw->link.speed = 32 * 1000;
+		hw->link.loop_map = NULL;
+		break;
+	}
+
+	kfree(mqe);
+
+	drec.speed = hw->link.speed;
+	drec.fc_id = hw->link.fc_id;
+	drec.is_nport = true;
+	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
+
+	return 0;
+}
+
+/* Callback function for the SLI link events */
+static int
+efct_hw_cb_link(void *ctx, void *e)
+{
+	struct efct_hw	*hw = ctx;
+	struct sli4_link_event *event = e;
+	struct efc_domain	*d = NULL;
+	int		rc = EFCT_HW_RTN_ERROR;
+	struct efct	*efct = hw->os;
+	struct efc_dma *dma;
+
+	efct_hw_link_event_init(hw);
+
+	switch (event->status) {
+	case SLI_LINK_STATUS_UP:
+
+		hw->link = *event;
+		efct->efcport->link_status = EFC_LINK_STATUS_UP;
+
+		if (event->topology == SLI_LINK_TOPO_NPORT) {
+			struct efc_domain_record drec = {0};
+
+			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
+				      event->speed);
+			drec.speed = event->speed;
+			drec.fc_id = event->fc_id;
+			drec.is_nport = true;
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
+				      &drec);
+		} else if (event->topology == SLI_LINK_TOPO_LOOP) {
+			u8	*buf = NULL;
+
+			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
+				      event->speed);
+			dma = &hw->loop_map;
+			dma->size = SLI4_MIN_LOOP_MAP_BYTES;
+			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt) {
+				efc_log_err(hw->os, "efct_dma_alloc_fail\n");
+				break;
+			}
+
+			buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+			if (!buf)
+				break;
+
+			if (!sli_cmd_read_topology(&hw->sli, buf,
+						  SLI4_BMBX_SIZE,
+						       &hw->loop_map)) {
+				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+						     __efct_read_topology_cb,
+						     NULL);
+			}
+
+			if (rc != EFCT_HW_RTN_SUCCESS) {
+				efc_log_test(hw->os, "READ_TOPOLOGY failed\n");
+				kfree(buf);
+			}
+		} else {
+			efc_log_info(hw->os, "%s(%#x), speed is %d\n",
+				      "Link Up, unsupported topology ",
+				     event->topology, event->speed);
+		}
+		break;
+	case SLI_LINK_STATUS_DOWN:
+		efc_log_info(hw->os, "Link down\n");
+
+		hw->link.status = event->status;
+		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
+
+		d = hw->domain;
+		if (d)
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
+		break;
+	default:
+		efc_log_test(hw->os, "unhandled link status %#x\n",
+			      event->status);
+		break;
+	}
+
+	return 0;
+}
+
 enum efct_hw_rtn
 efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
 {
@@ -5107,3 +5282,1461 @@ efct_hw_async_call(struct efct_hw *hw,
 	}
 	return rc;
 }
+
+static void
+efct_hw_port_free_resources(struct efc_sli_port *sport, int evt, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	struct efct *efct = hw->os;
+
+	/* Clear the sport attached flag */
+	sport->attached = false;
+
+	/* Free the service parameters buffer */
+	if (sport->dma.virt) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  sport->dma.size, sport->dma.virt,
+				  sport->dma.phys);
+		memset(&sport->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the command buffer */
+	kfree(data);
+
+	/* Free the SLI resources */
+	sli_resource_free(&hw->sli, SLI_RSRC_VPI, sport->indicator);
+
+	efc_lport_cb(efct->efcport, evt, sport);
+}
+
+static int
+efct_hw_port_get_mbox_status(struct efc_sli_port *sport,
+			     u8 *mqe, int status)
+{
+	struct efct_hw *hw = sport->hw;
+	struct sli4_mbox_command_header *hdr =
+			(struct sli4_mbox_command_header *)mqe;
+	int rc = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status vpi=%#x st=%x hdr=%x\n",
+			       sport->indicator, status,
+			       le16_to_cpu(hdr->status));
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_port_free_unreg_vpi_cb(struct efct_hw *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	int evt = EFC_HW_PORT_FREE_OK;
+	int rc = 0;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		evt = EFC_HW_PORT_FREE_FAIL;
+		rc = -1;
+	}
+
+	efct_hw_port_free_resources(sport, evt, mqe);
+	return rc;
+}
+
+static void
+efct_hw_port_free_unreg_vpi(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	int rc;
+
+	/* Allocate memory and send unreg_vpi */
+	if (!data) {
+		data = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!data) {
+			efct_hw_port_free_resources(sport,
+						    EFC_HW_PORT_FREE_FAIL,
+						    data);
+			return;
+		}
+		memset(data, 0, SLI4_BMBX_SIZE);
+	}
+
+	rc = sli_cmd_unreg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			       sport->indicator, SLI4_UNREG_TYPE_PORT);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_FREE_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_free_unreg_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_FREE_FAIL, data);
+	}
+}
+
+static void
+efct_hw_port_send_evt(struct efc_sli_port *sport, int evt, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	struct efct *efct = hw->os;
+
+	/* Free the mbox buffer */
+	kfree(data);
+
+	/* Now inform the registered callbacks */
+	efc_lport_cb(efct->efcport, evt, sport);
+
+	/* Set the sport attached flag */
+	if (evt == EFC_HW_PORT_ATTACH_OK)
+		sport->attached = true;
+
+	/* If there is a pending free request, then handle it now */
+	if (sport->free_req_pending)
+		efct_hw_port_free_unreg_vpi(sport, NULL);
+}
+
+static int
+efct_hw_port_alloc_init_vpi_cb(struct efct_hw *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_port_send_evt(sport, EFC_HW_PORT_ALLOC_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_port_alloc_init_vpi(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	int rc;
+
+	/* If there is a pending free request, then handle it now */
+	if (sport->free_req_pending) {
+		efct_hw_port_free_resources(sport, EFC_HW_PORT_FREE_OK, data);
+		return;
+	}
+
+	rc = sli_cmd_init_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			      sport->indicator, sport->domain->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_alloc_init_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_port_alloc_read_sparm64_cb(struct efct_hw *hw,
+				   int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	u8 *payload = NULL;
+	struct efct *efct = hw->os;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	payload = sport->dma.virt;
+
+	memcpy(&sport->sli_wwpn,
+	       payload + SLI4_READ_SPARM64_WWPN_OFFSET,
+		sizeof(sport->sli_wwpn));
+	memcpy(&sport->sli_wwnn,
+	       payload + SLI4_READ_SPARM64_WWNN_OFFSET,
+		sizeof(sport->sli_wwnn));
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  sport->dma.size, sport->dma.virt, sport->dma.phys);
+	memset(&sport->dma, 0, sizeof(struct efc_dma));
+	efct_hw_port_alloc_init_vpi(sport, mqe);
+	return 0;
+}
+
+static void
+efct_hw_port_alloc_read_sparm64(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	struct efct *efct = hw->os;
+	int rc;
+
+	/* Allocate memory for the service parameters */
+	sport->dma.size = 112;
+	sport->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					     sport->dma.size, &sport->dma.phys,
+					     GFP_DMA);
+	if (!sport->dma.virt) {
+		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
+				  &sport->dma, sport->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_alloc_read_sparm64_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+	}
+}
+
+/*
+ * This function allocates a VPI object for the port and stores it in the
+ * indicator field of the port object.
+ */
+enum efct_hw_rtn
+efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
+		   struct efc_domain *domain, u8 *wwpn)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	u8	*cmd = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32 index;
+
+	sport->indicator = U32_MAX;
+	sport->hw = hw;
+	sport->free_req_pending = false;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (wwpn)
+		memcpy(&sport->sli_wwpn, wwpn, sizeof(sport->sli_wwpn));
+
+	if (sli_resource_alloc(&hw->sli, SLI_RSRC_VPI,
+			       &sport->indicator, &index)) {
+		efc_log_err(hw->os, "VPI allocation failure\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (domain) {
+		cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!cmd) {
+			rc = EFCT_HW_RTN_NO_MEMORY;
+			goto efct_hw_port_alloc_out;
+		}
+		memset(cmd, 0, SLI4_BMBX_SIZE);
+
+		/*
+		 * If the WWPN is NULL, fetch the default
+		 * WWPN and WWNN before initializing the VPI
+		 */
+		if (!wwpn)
+			efct_hw_port_alloc_read_sparm64(sport, cmd);
+		else
+			efct_hw_port_alloc_init_vpi(sport, cmd);
+	} else if (!wwpn) {
+		/* This is the convention for the HW, not SLI */
+		efc_log_test(hw->os, "need WWN for physical port\n");
+		rc = EFCT_HW_RTN_ERROR;
+	}
+	/* domain NULL and wwpn non-NULL: nothing more to do */
+
+efct_hw_port_alloc_out:
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		kfree(cmd);
+
+		sli_resource_free(&hw->sli, SLI_RSRC_VPI,
+				  sport->indicator);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_port_attach_reg_vpi_cb(struct efct_hw *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_port_send_evt(sport, EFC_HW_PORT_ATTACH_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_port_attach_reg_vpi(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	int rc;
+
+	if (sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			    sport->fc_id, sport->sli_wwpn,
+			    sport->indicator, sport->domain->indicator,
+			    false)) {
+		efc_log_err(hw->os, "REG_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_attach_reg_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, data);
+	}
+}
+
+/*
+ * This function registers a previously-allocated VPI with the
+ * device.
+ */
+enum efct_hw_rtn
+efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
+		    u32 fc_id)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	u8	*buf = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!sport) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p sport=%p\n", hw,
+			sport);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	sport->fc_id = fc_id;
+	efct_hw_port_attach_reg_vpi(sport, buf);
+	return rc;
+}
+
+/* Issue the UNREG_VPI command to free the assigned VPI context */
+enum efct_hw_rtn
+efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!sport) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p sport=%p\n", hw,
+			sport);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (sport->attached)
+		efct_hw_port_free_unreg_vpi(sport, NULL);
+	else
+		sport->free_req_pending = true;
+
+	return rc;
+}
+
+static int
+efct_hw_domain_get_mbox_status(struct efc_domain *domain,
+			       u8 *mqe, int status)
+{
+	struct efct_hw *hw = domain->hw;
+	struct sli4_mbox_command_header *hdr =
+			(struct sli4_mbox_command_header *)mqe;
+	int rc = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status vfi=%#x st=%x hdr=%x\n",
+			       domain->indicator, status,
+			       le16_to_cpu(hdr->status));
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static void
+efct_hw_domain_free_resources(struct efc_domain *domain,
+			      int evt, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	struct efct *efct = hw->os;
+
+	/* Free the service parameters buffer */
+	if (domain->dma.virt) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the command buffer */
+	kfree(data);
+
+	/* Free the SLI resources */
+	sli_resource_free(&hw->sli, SLI_RSRC_VFI, domain->indicator);
+
+	efc_domain_cb(efct->efcport, evt, domain);
+}
+
+static void
+efct_hw_domain_send_sport_evt(struct efc_domain *domain,
+			      int port_evt, int domain_evt, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	struct efct *efct = hw->os;
+
+	/* Free the mbox buffer */
+	kfree(data);
+
+	/* Send alloc/attach ok to the physical sport */
+	efct_hw_port_send_evt(domain->sport, port_evt, NULL);
+
+	/* Now inform the registered callbacks */
+	efc_domain_cb(efct->efcport, domain_evt, domain);
+}
+
+static int
+efct_hw_domain_alloc_read_sparm64_cb(struct efct_hw *hw,
+				     int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	hw->domain = domain;
+	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ALLOC_OK,
+				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_domain_alloc_read_sparm64(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	int rc;
+
+	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
+				  &domain->dma, SLI4_READ_SPARM64_VPI_DEFAULT);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_alloc_read_sparm64_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_domain_alloc_init_vfi_cb(struct efct_hw *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_domain_alloc_read_sparm64(domain, mqe);
+	return 0;
+}
+
+static void
+efct_hw_domain_alloc_init_vfi(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	struct efc_sli_port *sport = domain->sport;
+	int rc;
+
+	/*
+	 * For FC, the HW has already registered an FCFI.
+	 * Copy FCF information into the domain and jump to INIT_VFI.
+	 */
+	domain->fcf_indicator = hw->fcf_indicator;
+	rc = sli_cmd_init_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			      domain->indicator, domain->fcf_indicator,
+			sport->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VFI format failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_alloc_init_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VFI command failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+/*
+ * This function starts a series of commands needed to connect to the domain,
+ * including
+ *   - REG_FCFI
+ *   - INIT_VFI
+ *   - READ_SPARMS
+ */
+enum efct_hw_rtn
+efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
+		     u32 fcf)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+	u8 *cmd = NULL;
+	u32 index;
+
+	if (!domain || !domain->sport) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p domain=%p sport=%p\n",
+			    hw, domain, domain ? domain->sport : NULL);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(efct,
+			     "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!cmd)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(cmd, 0, SLI4_BMBX_SIZE);
+
+	/* allocate memory for the service parameters */
+	domain->dma.size = 112;
+	domain->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					      domain->dma.size,
+					      &domain->dma.phys, GFP_DMA);
+	if (!domain->dma.virt) {
+		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
+		kfree(cmd);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	domain->hw = hw;
+	domain->fcf = fcf;
+	domain->fcf_indicator = U32_MAX;
+	domain->indicator = U32_MAX;
+
+	if (sli_resource_alloc(&hw->sli,
+			       SLI_RSRC_VFI, &domain->indicator,
+				    &index)) {
+		efc_log_err(hw->os, "VFI allocation failure\n");
+
+		kfree(cmd);
+		dma_free_coherent(&efct->pcidev->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_domain_alloc_init_vfi(domain, cmd);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static int
+efct_hw_domain_attach_reg_vfi_cb(struct efct_hw *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		hw->domain = NULL;
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
+		return -1;
+	}
+
+	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ATTACH_OK,
+				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
+	return 0;
+}
+
+static void
+efct_hw_domain_attach_reg_vfi(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	int rc;
+
+	if (sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			    domain->indicator, domain->fcf_indicator,
+			    domain->dma, domain->sport->indicator,
+			    domain->sport->sli_wwpn,
+			    domain->sport->fc_id)) {
+		efc_log_err(hw->os, "REG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_attach_reg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return;
+
+cleanup:
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain,
+				      EFC_HW_DOMAIN_ATTACH_FAIL, data);
+}
+
+enum efct_hw_rtn
+efct_hw_domain_attach(struct efc *efc,
+		      struct efc_domain *domain, u32 fc_id)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	u8	*buf = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p domain=%p\n",
+			hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	domain->sport->fc_id = fc_id;
+	efct_hw_domain_attach_reg_vfi(domain, buf);
+	return rc;
+}
+
+static int
+efct_hw_domain_free_unreg_vfi_cb(struct efct_hw *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int evt = EFC_HW_DOMAIN_FREE_OK;
+	int rc = 0;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		evt = EFC_HW_DOMAIN_FREE_FAIL;
+		rc = -1;
+	}
+
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, evt, mqe);
+	return rc;
+}
+
+static void
+efct_hw_domain_free_unreg_vfi(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	int rc;
+
+	if (!data) {
+		data = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!data)
+			goto cleanup;
+	}
+
+	rc = sli_cmd_unreg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			       domain->indicator, SLI4_UNREG_TYPE_DOMAIN);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_free_unreg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return;
+
+cleanup:
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
+}
+
+enum efct_hw_rtn
+efct_hw_domain_free(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p domain=%p\n",
+			hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_domain_free_unreg_vfi(domain, NULL);
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	if (!domain) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p domain=%p\n", hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  domain->dma.size, domain->dma.virt, domain->dma.phys);
+	memset(&domain->dma, 0, sizeof(struct efc_dma));
+	sli_resource_free(&hw->sli, SLI_RSRC_VFI,
+			  domain->indicator);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
+		   u32 fc_addr, struct efc_sli_port *sport)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	/* Check for invalid indicator */
+	if (rnode->indicator != U32_MAX) {
+		efc_log_err(hw->os,
+			     "RPI allocation failure addr=%#x rpi=%#x\n",
+			    fc_addr, rnode->indicator);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* NULL SLI port indicates an unallocated remote node */
+	rnode->sport = NULL;
+
+	if (sli_resource_alloc(&hw->sli, SLI_RSRC_RPI,
+			       &rnode->indicator, &rnode->index)) {
+		efc_log_err(hw->os, "RPI allocation failure addr=%#x\n",
+			     fc_addr);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rnode->fc_id = fc_addr;
+	rnode->sport = sport;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static int
+efct_hw_cb_node_attach(struct efct_hw *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efc_remote_node *rnode = arg;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	enum efc_hw_remote_node_event	evt = 0;
+
+	struct efct   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+		atomic_sub_return(1, &hw->rpi_ref[rnode->index].rpi_count);
+		rnode->attached = false;
+		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 0);
+		evt = EFC_HW_NODE_ATTACH_FAIL;
+	} else {
+		rnode->attached = true;
+		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 1);
+		evt = EFC_HW_NODE_ATTACH_OK;
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, rnode);
+
+	kfree(mqe);
+
+	return 0;
+}
+
+/* Update a remote node object with the remote port's service parameters */
+enum efct_hw_rtn
+efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
+	u8		*buf = NULL;
+	u32	count = 0;
+
+	if (!hw || !rnode || !sparms) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p rnode=%p sparms=%p\n",
+			    hw, rnode, sparms);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	/*
+	 * If the attach count is non-zero, this RPI has already been reg'd.
+	 * Otherwise, register the RPI
+	 */
+	if (rnode->index == U32_MAX) {
+		efc_log_err(efct, "bad parameter rnode->index invalid\n");
+		kfree(buf);
+		return EFCT_HW_RTN_ERROR;
+	}
+	count = atomic_add_return(1, &hw->rpi_ref[rnode->index].rpi_count);
+	count--;
+	if (count) {
+		/*
+		 * Can't attach multiple FC_ID's to a node unless High Login
+		 * Mode is enabled
+		 */
+		if (!hw->sli.high_login_mode) {
+			efc_log_test(hw->os,
+				      "attach to attached node HLM=%d cnt=%d\n",
+				     hw->sli.high_login_mode, count);
+			rc = EFCT_HW_RTN_SUCCESS;
+		} else {
+			rnode->node_group = true;
+			rnode->attached =
+			 atomic_read(&hw->rpi_ref[rnode->index].rpi_attached);
+			rc = rnode->attached  ? EFCT_HW_RTN_SUCCESS_SYNC :
+							 EFCT_HW_RTN_SUCCESS;
+		}
+	} else {
+		rnode->node_group = false;
+
+		if (!sli_cmd_reg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    rnode->fc_id,
+				    rnode->indicator, rnode->sport->indicator,
+				    sparms, 0, 0))
+			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_node_attach, rnode);
+	}
+
+	if (count || rc) {
+		if (rc < EFCT_HW_RTN_SUCCESS) {
+			atomic_sub_return(1,
+					  &hw->rpi_ref[rnode->index].rpi_count);
+			efc_log_err(hw->os,
+				     "%s error\n", count ? "HLM" : "REG_RPI");
+		}
+		kfree(buf);
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_node_free_resources(struct efc *efc,
+			    struct efc_remote_node *rnode)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!hw || !rnode) {
+		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
+			     hw, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (rnode->sport) {
+		if (rnode->attached) {
+			efc_log_err(hw->os, "Err: rnode is still attached\n");
+			return EFCT_HW_RTN_ERROR;
+		}
+		if (rnode->indicator != U32_MAX) {
+			if (sli_resource_free(&hw->sli, SLI_RSRC_RPI,
+					      rnode->indicator)) {
+				efc_log_err(hw->os,
+					     "RPI free fail RPI %d addr=%#x\n",
+					    rnode->indicator,
+					    rnode->fc_id);
+				rc = EFCT_HW_RTN_ERROR;
+			} else {
+				rnode->node_group = false;
+				rnode->indicator = U32_MAX;
+				rnode->index = U32_MAX;
+				rnode->free_group = false;
+			}
+		}
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_node_free(struct efct_hw *hw,
+		     int status, u8 *mqe, void *arg)
+{
+	struct efc_remote_node *rnode = arg;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
+	int		rc = 0;
+	struct efct   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+
+		/*
+		 * In certain cases, a non-zero MQE status is OK (all must be
+		 * true):
+		 *   - node is attached
+		 *   - if High Login Mode is enabled, node is part of a node
+		 * group
+		 *   - status is 0x1400
+		 */
+		if (!rnode->attached ||
+		    (hw->sli.high_login_mode && !rnode->node_group) ||
+				(le16_to_cpu(hdr->status) !=
+				 MBX_STATUS_RPI_NOT_REG))
+			rc = -1;
+	}
+
+	if (rc == 0) {
+		rnode->node_group = false;
+		rnode->attached = false;
+
+		if (atomic_read(&hw->rpi_ref[rnode->index].rpi_count) == 0)
+			atomic_set(&hw->rpi_ref[rnode->index].rpi_attached,
+				   0);
+		 evt = EFC_HW_NODE_FREE_OK;
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, rnode);
+
+	kfree(mqe);
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+	u8	*buf = NULL;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS_SYNC;
+	u32	index = U32_MAX;
+
+	if (!hw || !rnode) {
+		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
+			     hw, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	index = rnode->index;
+
+	if (rnode->sport) {
+		u32	count = 0;
+		u32	fc_id;
+
+		if (!rnode->attached)
+			return EFCT_HW_RTN_SUCCESS_SYNC;
+
+		buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!buf)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(buf, 0, SLI4_BMBX_SIZE);
+		count = atomic_sub_return(1, &hw->rpi_ref[index].rpi_count);
+		count++;
+		if (count <= 1) {
+			/*
+			 * There are no other references to this RPI so
+			 * unregister it
+			 */
+			fc_id = U32_MAX;
+			/* and free the resource */
+			rnode->node_group = false;
+			rnode->free_group = true;
+		} else {
+			if (!hw->sli.high_login_mode)
+				efc_log_test(hw->os,
+					      "Inval cnt with HLM off, cnt=%d\n",
+					     count);
+			fc_id = rnode->fc_id & 0x00ffffff;
+		}
+
+		rc = EFCT_HW_RTN_ERROR;
+
+		if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
+				      rnode->indicator,
+				      SLI_RSRC_RPI, fc_id))
+			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_node_free, rnode);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "UNREG_RPI failed\n");
+			kfree(buf);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_node_free_all(struct efct_hw *hw, int status, u8 *mqe,
+			 void *arg)
+{
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
+	int		rc = 0;
+	u32	i;
+	struct efct   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+	} else {
+		evt = EFC_HW_NODE_FREE_ALL_OK;
+	}
+
+	if (evt == EFC_HW_NODE_FREE_ALL_OK) {
+		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size;
+		     i++)
+			atomic_set(&hw->rpi_ref[i].rpi_count, 0);
+
+		if (sli_resource_reset(&hw->sli, SLI_RSRC_RPI)) {
+			efc_log_test(hw->os, "RPI free all failure\n");
+			rc = -1;
+		}
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, NULL);
+
+	kfree(mqe);
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_node_free_all(struct efct_hw *hw)
+{
+	u8	*buf = NULL;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE, 0xffff,
+			      SLI_RSRC_FCFI, U32_MAX))
+		rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_node_free_all,
+				     NULL);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os, "UNREG_RPI failed\n");
+		kfree(buf);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+struct efct_hw_get_nvparms_cb_arg {
+	void (*cb)(int status,
+		   u8 *wwpn, u8 *wwnn,
+		u8 hard_alpa, u32 preferred_d_id,
+		void *arg);
+	void *arg;
+};
+
+static int
+efct_hw_get_nvparms_cb(struct efct_hw *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efct_hw_get_nvparms_cb_arg *cb_arg = arg;
+	struct sli4_cmd_read_nvparms *mbox_rsp =
+			(struct sli4_cmd_read_nvparms *)mqe;
+	u8 hard_alpa;
+	u32 preferred_d_id;
+
+	hard_alpa = le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
+				SLI4_READ_NVPARAMS_HARD_ALPA;
+	preferred_d_id = (le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
+			  SLI4_READ_NVPARAMS_PREFERRED_D_ID) >> 8;
+	if (cb_arg->cb)
+		cb_arg->cb(status, mbox_rsp->wwpn, mbox_rsp->wwnn,
+			   hard_alpa, preferred_d_id,
+			   cb_arg->arg);
+
+	kfree(mqe);
+	kfree(cb_arg);
+
+	return 0;
+}
+
+enum efct_hw_rtn
+efct_hw_get_nvparms(struct efct_hw *hw,
+		    void (*cb)(int status, u8 *wwpn,
+			       u8 *wwnn, u8 hard_alpa,
+			       u32 preferred_d_id, void *arg),
+		    void *ul_arg)
+{
+	u8 *mbxdata;
+	struct efct_hw_get_nvparms_cb_arg *cb_arg;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = ul_arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_get_nvparms_cb, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "READ_NVPARMS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+struct efct_hw_set_nvparms_cb_arg {
+	void (*cb)(int status, void *arg);
+	void *arg;
+};
+
+static int
+efct_hw_set_nvparms_cb(struct efct_hw *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efct_hw_set_nvparms_cb_arg *cb_arg = arg;
+
+	if (cb_arg->cb)
+		cb_arg->cb(status, cb_arg->arg);
+
+	kfree(mqe);
+	kfree(cb_arg);
+
+	return 0;
+}
+
+enum efct_hw_rtn
+efct_hw_set_nvparms(struct efct_hw *hw,
+		    void (*cb)(int status, void *arg),
+		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		u32 preferred_d_id,
+		void *ul_arg)
+{
+	u8 *mbxdata;
+	struct efct_hw_set_nvparms_cb_arg *cb_arg;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = ul_arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_write_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE, wwpn,
+				  wwnn, hard_alpa, preferred_d_id))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_set_nvparms_cb, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "SET_NVPARMS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_fw_write(struct efct_hw *hw, int status,
+		    u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_sli_config *mbox_rsp =
+					(struct sli4_cmd_sli_config *)mqe;
+	struct sli4_rsp_cmn_write_object *wr_obj_rsp;
+	struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
+	u32 bytes_written;
+	u16 mbox_status;
+	u32 change_status;
+
+	wr_obj_rsp = (struct sli4_rsp_cmn_write_object *)
+		      &mbox_rsp->payload.embed;
+	bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
+	mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
+	change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
+			 RSP_CHANGE_STATUS);
+
+	kfree(mqe);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (!status && mbox_status)
+				status = mbox_status;
+			cb_arg->cb(status, bytes_written, change_status,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+
+	return 0;
+}
+
+static enum efct_hw_rtn
+efct_hw_firmware_write_sli4_intf_2(struct efct_hw *hw, struct efc_dma *dma,
+				   u32 size, u32 offset, int last,
+			      void (*cb)(int status, u32 bytes_written,
+					 u32 change_status, void *arg),
+				void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	u8 *mbxdata;
+	struct efct_hw_fw_wr_cb_arg *cb_arg;
+	int noc = 0;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+	memset(cb_arg, 0, sizeof(struct efct_hw_fw_wr_cb_arg));
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_common_write_object(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+					noc, last, size, offset, "/prg/",
+					dma))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_fw_write, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "COMMON_WRITE_OBJECT failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+/* Write a portion of a firmware image to the device */
+enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
+		       u32 size, u32 offset, int last,
+			void (*cb)(int status, u32 bytes_written,
+				   u32 change_status, void *arg),
+			void *arg)
+{
+	return efct_hw_firmware_write_sli4_intf_2(hw, dma, size, offset,
+						     last, cb, arg);
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 862504b96a23..598d05694ac3 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -479,7 +479,6 @@ struct efct_hw_io {
 	void			*ul_io;
 };
 
-
 /* Typedef for HW "done" callback */
 typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
 			      u32 len, int status, u32 ext, void *ul_arg);
@@ -1040,5 +1039,61 @@ efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
 		     uintptr_t value,
 		void (*cb)(int status, uintptr_t value, void *arg),
 		void *arg);
+extern enum efct_hw_rtn
+efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
+		   struct efc_domain *domain, u8 *wwpn);
+extern enum efct_hw_rtn
+efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
+		    u32 fc_id);
+extern enum efct_hw_rtn
+efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport);
+extern enum efct_hw_rtn
+efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
+		     u32 fcf);
+extern enum efct_hw_rtn
+efct_hw_domain_attach(struct efc *efc,
+		      struct efc_domain *domain, u32 fc_id);
+extern enum efct_hw_rtn
+efct_hw_domain_free(struct efc *efc, struct efc_domain *domain);
+extern enum efct_hw_rtn
+efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain);
+extern enum efct_hw_rtn
+efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
+		   u32 fc_addr, struct efc_sli_port *sport);
+extern enum efct_hw_rtn
+efct_hw_node_free_all(struct efct_hw *hw);
+extern enum efct_hw_rtn
+efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms);
+extern enum efct_hw_rtn
+efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode);
+extern enum efct_hw_rtn
+efct_hw_node_free_resources(struct efc *efc,
+			    struct efc_remote_node *rnode);
+
+extern enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
+		       u32 size, u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				  u32 change_status, void *arg),
+		       void *arg);
+
+extern enum efct_hw_rtn
+efct_hw_get_nvparms(struct efct_hw *hw,
+		    void (*mgmt_cb)(int status, u8 *wwpn,
+				    u8 *wwnn, u8 hard_alpa,
+				    u32 preferred_d_id, void *arg),
+		    void *arg);
+extern enum efct_hw_rtn
+efct_hw_set_nvparms(struct efct_hw *hw,
+		    void (*mgmt_cb)(int status, void *arg),
+		    u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		    u32 preferred_d_id, void *arg);
+
+typedef int (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
+				  u8 *mqe, void *arg);
+extern int
+efct_hw_async_call(struct efct_hw *hw,
+		   efct_hw_async_cb_t callback, void *arg);
 
 #endif /* __EFCT_H__ */
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 30/32] elx: efct: scsi_transport_fc host interface support
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (28 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 29/32] elx: efct: Firmware update, async link processing James Smart
@ 2019-12-20 22:37 ` James Smart
  2020-01-09 11:46   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
                   ` (2 subsequent siblings)
  32 siblings, 1 reply; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Integration with the scsi_transport_fc host interfaces.
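
As a reviewer aid (not part of this patch): the host callbacks added
below are the kind normally wired into a struct fc_function_template and
registered with fc_attach_transport(). The template below is only a
sketch; the exact field selection and the show_* attribute bits are
illustrative:

	static struct fc_function_template efct_xport_functions = {
		.get_host_port_id	= efct_get_host_port_id,
		.show_host_port_id	= 1,
		.get_host_port_type	= efct_get_host_port_type,
		.show_host_port_type	= 1,
		.get_host_port_state	= efct_get_host_port_state,
		.show_host_port_state	= 1,
		.get_host_speed		= efct_get_host_speed,
		.show_host_speed	= 1,
		.get_host_fabric_name	= efct_get_host_fabric_name,
		.show_host_fabric_name	= 1,
		.get_fc_host_stats	= efct_get_stats,
		.reset_fc_host_stats	= efct_reset_stats,
		.issue_fc_host_lip	= efct_issue_lip,
	};

	/* e.g., registered once at attach time (illustrative):
	 * template = fc_attach_transport(&efct_xport_functions);
	 */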

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/efct/efct_xport.c | 496 +++++++++++++++++++++++++++++++++++++
 1 file changed, 496 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index 6d8e0cefa903..024f65ee113a 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -974,3 +974,499 @@ int efct_scsi_del_device(struct efct *efct)
 	}
 	return 0;
 }
+
+static void
+efct_get_host_port_id(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	struct efc_sli_port *sport;
+
+	if (efc->domain && efc->domain->sport) {
+		sport = efc->domain->sport;
+		fc_host_port_id(shost) = sport->fc_id;
+	}
+}
+
+static void
+efct_get_host_port_type(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	struct efc_sli_port *sport;
+	int type = FC_PORTTYPE_UNKNOWN;
+
+	if (efc->domain && efc->domain->sport) {
+		if (efc->domain->is_loop) {
+			type = FC_PORTTYPE_LPORT;
+		} else {
+			sport = efc->domain->sport;
+			if (sport->is_vport)
+				type = FC_PORTTYPE_NPIV;
+			else if (sport->topology == EFC_SPORT_TOPOLOGY_P2P)
+				type = FC_PORTTYPE_PTP;
+			else if (sport->topology == EFC_SPORT_TOPOLOGY_UNKNOWN)
+				type = FC_PORTTYPE_UNKNOWN;
+			else
+				type = FC_PORTTYPE_NPORT;
+		}
+	}
+	fc_host_port_type(shost) = type;
+}
+
+static void
+efct_get_host_vport_type(struct Scsi_Host *shost)
+{
+	fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
+}
+
+static void
+efct_get_host_port_state(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+
+	if (efc->domain)
+		fc_host_port_state(shost) = FC_PORTSTATE_ONLINE;
+	else
+		fc_host_port_state(shost) = FC_PORTSTATE_OFFLINE;
+}
+
+static void
+efct_get_host_speed(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	union efct_xport_stats_u speed;
+	u32 fc_speed = FC_PORTSPEED_UNKNOWN;
+	int rc;
+
+	if (efc->domain && efc->domain->sport) {
+		rc = efct_xport_status(efct->xport,
+				       EFCT_XPORT_LINK_SPEED, &speed);
+		if (rc == 0) {
+			switch (speed.value) {
+			case 1000:
+				fc_speed = FC_PORTSPEED_1GBIT;
+				break;
+			case 2000:
+				fc_speed = FC_PORTSPEED_2GBIT;
+				break;
+			case 4000:
+				fc_speed = FC_PORTSPEED_4GBIT;
+				break;
+			case 8000:
+				fc_speed = FC_PORTSPEED_8GBIT;
+				break;
+			case 10000:
+				fc_speed = FC_PORTSPEED_10GBIT;
+				break;
+			case 16000:
+				fc_speed = FC_PORTSPEED_16GBIT;
+				break;
+			case 32000:
+				fc_speed = FC_PORTSPEED_32GBIT;
+				break;
+			}
+		}
+	}
+	fc_host_speed(shost) = fc_speed;
+}
+
+static void
+efct_get_host_fabric_name(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+
+	if (efc->domain) {
+		struct fc_els_flogi  *sp =
+			(struct fc_els_flogi  *)
+				efc->domain->flogi_service_params;
+
+		fc_host_fabric_name(shost) = be64_to_cpu(sp->fl_wwnn);
+	}
+}
+
+static struct fc_host_statistics *
+efct_get_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	union efct_xport_stats_u stats;
+	struct efct_xport *xport = efct->xport;
+	u32 rc = 1;
+
+	rc = efct_xport_status(xport, EFCT_XPORT_LINK_STATISTICS, &stats);
+	if (rc != 0) {
+		pr_err("efct_xport_status returned non 0 - %d\n", rc);
+		return NULL;
+	}
+
+	vport->fc_host_stats.loss_of_sync_count =
+		stats.stats.link_stats.loss_of_sync_error_count;
+	vport->fc_host_stats.link_failure_count =
+		stats.stats.link_stats.link_failure_error_count;
+	vport->fc_host_stats.prim_seq_protocol_err_count =
+		stats.stats.link_stats.primitive_sequence_error_count;
+	vport->fc_host_stats.invalid_tx_word_count =
+		stats.stats.link_stats.invalid_transmission_word_error_count;
+	vport->fc_host_stats.invalid_crc_count =
+		stats.stats.link_stats.crc_error_count;
+	/* mbox returns kbytes; convert to 4-byte words (1 kbyte = 256 words) */
+	vport->fc_host_stats.tx_words =
+		stats.stats.host_stats.transmit_kbyte_count * 256;
+	/* likewise, convert the receive kbyte count to words */
+	vport->fc_host_stats.rx_words =
+		stats.stats.host_stats.receive_kbyte_count * 256;
+	vport->fc_host_stats.tx_frames =
+		stats.stats.host_stats.transmit_frame_count;
+	vport->fc_host_stats.rx_frames =
+		stats.stats.host_stats.receive_frame_count;
+
+	vport->fc_host_stats.fcp_input_requests =
+			xport->fcp_stats.input_requests;
+	vport->fc_host_stats.fcp_output_requests =
+			xport->fcp_stats.output_requests;
+	vport->fc_host_stats.fcp_output_megabytes =
+			xport->fcp_stats.output_bytes >> 20;
+	vport->fc_host_stats.fcp_input_megabytes =
+			xport->fcp_stats.input_bytes >> 20;
+	vport->fc_host_stats.fcp_control_requests =
+			xport->fcp_stats.control_requests;
+
+	return &vport->fc_host_stats;
+}
+
+static void
+efct_reset_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	/* argument has no purpose for this action */
+	union efct_xport_stats_u dummy;
+	u32 rc = 0;
+
+	rc = efct_xport_status(efct->xport, EFCT_XPORT_LINK_STAT_RESET, &dummy);
+	if (rc != 0)
+		pr_err("efct_xport_status returned non 0 - %d\n", rc);
+}
+
+static void
+efct_get_starget_port_id(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_get_starget_node_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_get_starget_port_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_set_vport_symbolic_name(struct fc_vport *fc_vport)
+{
+	pr_err("%s\n", __func__);
+}
+
+/*
+ * Bring the link down gracefully, then re-initialize it. The firmware
+ * re-initializes the Fibre Channel interface as required; this handler
+ * does not send an actual LIP primitive.
+ */
+static int
+efct_issue_lip(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport =
+			shost ? (struct efct_vport *)shost->hostdata : NULL;
+	struct efct *efct = vport ? vport->efct : NULL;
+
+	if (!shost || !vport || !efct) {
+		pr_err("%s: shost=%p vport=%p efct=%p\n", __func__,
+		       shost, vport, efct);
+		return -EPERM;
+	}
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_OFFLINE))
+		efc_log_test(efct, "EFCT_XPORT_PORT_OFFLINE failed\n");
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE))
+		efc_log_test(efct, "EFCT_XPORT_PORT_ONLINE failed\n");
+
+	return 0;
+}
+
+struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return NULL;
+	}
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport *)shost->hostdata;
+	vport->efct = efct;
+	vport->is_vport = true;
+
+	shost->can_queue = efct_scsi_get_property(efct, EFCT_SCSI_MAX_IOS);
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/* only accept as many mid-layer SGEs as we have pre-registered */
+	shost->sg_tablesize = efct_scsi_get_property(efct, EFCT_SCSI_MAX_SGL);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_vport_fc_tt;
+	efc_log_debug(efct, "vport transport template=%p\n",
+		       efct_vport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, dev, &efct->pcidev->dev);
+	if (error) {
+		efc_log_test(efct, "failed scsi_add_host_with_dma\n");
+		return NULL;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		     "Emulex %s FV%s DV%s", efct->model,
+		     efct->fw_version, efct->driver_version);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+	vport->shost = shost;
+
+	return vport;
+}
+
+int efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost)
+{
+	if (shost) {
+		efc_log_debug(efct,
+			       "Unregistering vport with Transport Layer\n");
+		efct_xport_remove_host(shost);
+		efc_log_debug(efct, "Unregistering vport with SCSI Midlayer\n");
+		scsi_remove_host(shost);
+		scsi_host_put(shost);
+		return 0;
+	}
+	return -1;
+}
+
+static int
+efct_vport_create(struct fc_vport *fc_vport, bool disable)
+{
+	struct Scsi_Host *shost = fc_vport ? fc_vport->shost : NULL;
+	struct efct_vport *pport = shost ?
+					(struct efct_vport *)shost->hostdata :
+					NULL;
+	struct efct *efct = pport ? pport->efct : NULL;
+	struct efct_vport *vport = NULL;
+
+	if (!fc_vport || !shost || !efct)
+		goto fail;
+
+	vport = efct_scsi_new_vport(efct, &fc_vport->dev);
+	if (!vport) {
+		efc_log_err(efct, "failed to create vport\n");
+		goto fail;
+	}
+
+	vport->fc_vport = fc_vport;
+	vport->npiv_wwpn = fc_vport->port_name;
+	vport->npiv_wwnn = fc_vport->node_name;
+	fc_host_node_name(vport->shost) = vport->npiv_wwnn;
+	fc_host_port_name(vport->shost) = vport->npiv_wwpn;
+	*(struct efct_vport **)fc_vport->dd_data = vport;
+
+	return 0;
+
+fail:
+	return -1;
+}
+
+static int
+efct_vport_delete(struct fc_vport *fc_vport)
+{
+	struct efct_vport *vport = *(struct efct_vport **)fc_vport->dd_data;
+	struct Scsi_Host *shost = vport ? vport->shost : NULL;
+	struct efct *efct = vport ? vport->efct : NULL;
+	int rc = -1;
+
+	rc = efct_scsi_del_vport(efct, shost);
+
+	if (rc)
+		pr_err("%s: vport delete failed\n", __func__);
+
+	return rc;
+}
+
+static int
+efct_vport_disable(struct fc_vport *fc_vport, bool disable)
+{
+	return 0;
+}
+
+static struct fc_function_template efct_xport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_port_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+	.vport_disable = efct_vport_disable,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+	.vport_create = efct_vport_create,
+	.vport_delete = efct_vport_delete,
+};
+
+static struct fc_function_template efct_vport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_vport_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+};
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (29 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 30/32] elx: efct: scsi_transport_fc host interface support James Smart
@ 2019-12-20 22:37 ` James Smart
  2019-12-20 23:17   ` Randy Dunlap
  2020-01-09 11:47   ` Hannes Reinecke
  2019-12-20 22:37 ` [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
  2019-12-29 18:27 ` [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Sebastian Herbszt
  32 siblings, 2 replies; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This patch completes the efct driver population.

This patch adds the efct driver Kconfig and Makefile.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/elx/Kconfig  |  9 +++++++++
 drivers/scsi/elx/Makefile | 30 ++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile

diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
new file mode 100644
index 000000000000..ec710ade44f3
--- /dev/null
+++ b/drivers/scsi/elx/Kconfig
@@ -0,0 +1,9 @@
+config SCSI_EFCT
+	tristate "Emulex Fibre Channel Target"
+	depends on PCI && SCSI
+	depends on TARGET_CORE
+	depends on SCSI_FC_ATTRS
+	select CRC_T10DIF
+	help
+          The efct driver provides enhanced SCSI Target Mode
+	  support for specific SLI-4 adapters.
diff --git a/drivers/scsi/elx/Makefile b/drivers/scsi/elx/Makefile
new file mode 100644
index 000000000000..79cc4e57676e
--- /dev/null
+++ b/drivers/scsi/elx/Makefile
@@ -0,0 +1,30 @@
+#/*******************************************************************
+# * This file is part of the Emulex Linux Device Driver for         *
+# * Fibre Channel Host Bus Adapters.                                *
+# * Copyright (C) 2018 Broadcom. All Rights Reserved. The term	   *
+# * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
+# *                                                                 *
+# * This program is free software; you can redistribute it and/or   *
+# * modify it under the terms of version 2 of the GNU General       *
+# * Public License as published by the Free Software Foundation.    *
+# * This program is distributed in the hope that it will be useful. *
+# * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+# * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+# * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+# * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+# * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+# * more details, a copy of which can be found in the file COPYING  *
+# * included with this package.                                     *
+# ********************************************************************/
+
+obj-$(CONFIG_SCSI_EFCT) := efct.o
+
+efct-objs := efct/efct_driver.o efct/efct_io.o efct/efct_scsi.o efct/efct_els.o \
+	     efct/efct_xport.o efct/efct_hw.o efct/efct_hw_queues.o \
+	     efct/efct_utils.o efct/efct_lio.o efct/efct_unsol.o
+
+efct-objs += libefc/efc_domain.o libefc/efc_fabric.o libefc/efc_node.o \
+	     libefc/efc_sport.o libefc/efc_device.o \
+	     libefc/efc_lib.o libefc/efc_sm.o
+
+efct-objs += libefc_sli/sli4.o
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (30 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2019-12-20 22:37 ` James Smart
  2019-12-24  7:45     ` kbuild test robot
                     ` (2 more replies)
  2019-12-29 18:27 ` [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Sebastian Herbszt
  32 siblings, 3 replies; 78+ messages in thread
From: James Smart @ 2019-12-20 22:37 UTC (permalink / raw)
  To: linux-scsi; +Cc: maier, dwagner, bvanassche, James Smart, Ram Vegesna

This final patch ties the efct driver into the kernel Kconfig
and build linkages in the drivers/scsi directory.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
---
 drivers/scsi/Kconfig  | 2 ++
 drivers/scsi/Makefile | 1 +
 2 files changed, 3 insertions(+)
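
For anyone wanting to give the series a spin, a rough smoke build once all 32
patches are applied could look like the following (a sketch only; it assumes
an existing .config for the target arch, and uses the config symbols
introduced in the previous patch):

    scripts/config -e PCI -e SCSI -e SCSI_FC_ATTRS -e TARGET_CORE -m SCSI_EFCT
    make olddefconfig
    make drivers/scsi/elx/      # compile just the new directory
    make modules                # a full modules build produces efct.ko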

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 90cf4691b8c3..78822ae45457 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1176,6 +1176,8 @@ config SCSI_LPFC_DEBUG_FS
 	  This makes debugging information from the lpfc driver
 	  available via the debugfs filesystem.
 
+source "drivers/scsi/elx/Kconfig"
+
 config SCSI_SIM710
 	tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
 	depends on EISA && SCSI
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c00e3dd57990..844db573283c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
 obj-$(CONFIG_SCSI_QLA_FC)	+= qla2xxx/
 obj-$(CONFIG_SCSI_QLA_ISCSI)	+= libiscsi.o qla4xxx/
 obj-$(CONFIG_SCSI_LPFC)		+= lpfc/
+obj-$(CONFIG_SCSI_EFCT)		+= elx/
 obj-$(CONFIG_SCSI_BFA_FC)	+= bfa/
 obj-$(CONFIG_SCSI_CHELSIO_FCOE)	+= csiostor/
 obj-$(CONFIG_SCSI_DMX3191D)	+= dmx3191d.o
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver
  2019-12-20 22:37 ` [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2019-12-20 23:17   ` Randy Dunlap
  2020-01-09 11:47   ` Hannes Reinecke
  1 sibling, 0 replies; 78+ messages in thread
From: Randy Dunlap @ 2019-12-20 23:17 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 2:37 PM, James Smart wrote:
> diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
> new file mode 100644
> index 000000000000..ec710ade44f3
> --- /dev/null
> +++ b/drivers/scsi/elx/Kconfig
> @@ -0,0 +1,9 @@
> +config SCSI_EFCT
> +	tristate "Emulex Fibre Channel Target"
> +	depends on PCI && SCSI
> +	depends on TARGET_CORE
> +	depends on SCSI_FC_ATTRS
> +	select CRC_T10DIF
> +	help
> +          The efct driver provides enhanced SCSI Target Mode

Use tab + 2 spaces above, instead of all spaces, please.

> +	  support for specific SLI-4 adapters.
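
I.e. indent the first help line the same way as the second one quoted above
(one tab followed by two spaces), per the usual Kconfig convention:

	help
	  The efct driver provides enhanced SCSI Target Mode
	  support for specific SLI-4 adapters.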


-- 
~Randy


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-12-20 22:37 ` [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
@ 2019-12-24  7:45     ` kbuild test robot
  2019-12-24 21:01   ` Nathan Chancellor
  2020-01-09 11:47   ` Hannes Reinecke
  2 siblings, 0 replies; 78+ messages in thread
From: kbuild test robot @ 2019-12-24  7:45 UTC (permalink / raw)
  To: James Smart
  Cc: kbuild-all, linux-scsi, maier, dwagner, bvanassche, James Smart,
	Ram Vegesna

[-- Attachment #1: Type: text/plain, Size: 9211 bytes --]

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on scsi/for-next linus/master v5.5-rc3 next-20191220]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20191224-054519
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
config: i386-allyesconfig (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from include/linux/pci.h:37:0,
                    from drivers/scsi/elx/efct/efct_driver.h:23,
                    from drivers/scsi/elx/efct/efct_driver.c:7:
   drivers/scsi/elx/efct/efct_driver.c: In function 'efct_request_firmware_update':
>> drivers/scsi/elx/efct/efct_driver.c:530:10: warning: format '%ld' expects argument of type 'long int', but argument 4 has type 'size_t {aka const unsigned int}' [-Wformat=]
             "Invalid FW image found Magic: 0x%x Size: %ld\n",
             ^
   include/linux/device.h:1691:22: note: in definition of macro 'dev_fmt'
    #define dev_fmt(fmt) fmt
                         ^~~
>> drivers/scsi/elx/efct/../include/efc_common.h:32:3: note: in expansion of macro 'dev_warn'
      dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
      ^~~~~~~~
>> drivers/scsi/elx/efct/efct_driver.c:529:3: note: in expansion of macro 'efc_log_warn'
      efc_log_warn(efct,
      ^~~~~~~~~~~~
--
   In file included from include/linux/pci.h:37:0,
                    from drivers/scsi/elx/efct/efct_driver.h:23,
                    from drivers/scsi/elx/efct/efct_scsi.c:7:
   drivers/scsi/elx/efct/efct_scsi.c: In function 'efct_scsi_build_sgls':
>> drivers/scsi/elx/efct/efct_scsi.c:346:13: warning: format '%ld' expects argument of type 'long int', but argument 5 has type 'size_t {aka unsigned int}' [-Wformat=]
                "sgl[%d] len of %ld is not multiple of blocksize\n",
                ^
   include/linux/device.h:1691:22: note: in definition of macro 'dev_fmt'
    #define dev_fmt(fmt) fmt
                         ^~~
>> drivers/scsi/elx/efct/../include/efc_common.h:38:3: note: in expansion of macro 'dev_dbg'
      dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
      ^~~~~~~
>> drivers/scsi/elx/efct/efct_scsi.c:345:6: note: in expansion of macro 'efc_log_test'
         efc_log_test(efct,
         ^~~~~~~~~~~~
--
   In file included from include/linux/pci.h:37:0,
                    from drivers/scsi/elx/libefc_sli/sli4.h:15,
                    from drivers/scsi/elx/libefc_sli/sli4.c:11:
   drivers/scsi/elx/libefc_sli/sli4.c: In function 'sli_cmd_read_topology':
>> drivers/scsi/elx/libefc_sli/sli4.c:3753:23: warning: format '%jd' expects argument of type 'intmax_t', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
       efc_log_info(sli4, "loop map buffer too small %jd\n",
                          ^
   include/linux/device.h:1691:22: note: in definition of macro 'dev_fmt'
    #define dev_fmt(fmt) fmt
                         ^~~
>> drivers/scsi/elx/libefc_sli/../include/efc_common.h:35:3: note: in expansion of macro 'dev_info'
      dev_info(&((efc)->pcidev)->dev, fmt, ##args)
      ^~~~~~~~
>> drivers/scsi/elx/libefc_sli/sli4.c:3753:4: note: in expansion of macro 'efc_log_info'
       efc_log_info(sli4, "loop map buffer too small %jd\n",
       ^~~~~~~~~~~~
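
(All three look like the usual size_t-on-32-bit format mismatch; presumably
the fix is just the %zu length modifier for the size_t arguments, e.g. for
the first warning:

-			      "Invalid FW image found Magic: 0x%x Size: %ld\n",
+			      "Invalid FW image found Magic: 0x%x Size: %zu\n",

with the same treatment for the %jd at sli4.c:3753.)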

vim +530 drivers/scsi/elx/efct/efct_driver.c

3bd67f890edb8f James Smart 2019-12-20  507  
3bd67f890edb8f James Smart 2019-12-20  508  static int
3bd67f890edb8f James Smart 2019-12-20  509  efct_request_firmware_update(struct efct *efct)
3bd67f890edb8f James Smart 2019-12-20  510  {
3bd67f890edb8f James Smart 2019-12-20  511  	int rc = 0;
3bd67f890edb8f James Smart 2019-12-20  512  	u8 file_name[256], fw_change_status = 0;
3bd67f890edb8f James Smart 2019-12-20  513  	const struct firmware *fw;
3bd67f890edb8f James Smart 2019-12-20  514  	struct efct_hw_grp_hdr *fw_image;
3bd67f890edb8f James Smart 2019-12-20  515  
3bd67f890edb8f James Smart 2019-12-20  516  	snprintf(file_name, 256, "%s.grp", efct->model);
3bd67f890edb8f James Smart 2019-12-20  517  	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
3bd67f890edb8f James Smart 2019-12-20  518  	if (rc) {
3bd67f890edb8f James Smart 2019-12-20  519  		efc_log_err(efct, "Firmware file(%s) not found.\n", file_name);
3bd67f890edb8f James Smart 2019-12-20  520  		return rc;
3bd67f890edb8f James Smart 2019-12-20  521  	}
3bd67f890edb8f James Smart 2019-12-20  522  	fw_image = (struct efct_hw_grp_hdr *)fw->data;
3bd67f890edb8f James Smart 2019-12-20  523  
3bd67f890edb8f James Smart 2019-12-20  524  	/* Check if firmware provided is compatible with this particular
3bd67f890edb8f James Smart 2019-12-20  525  	 * Adapter of not
3bd67f890edb8f James Smart 2019-12-20  526  	 */
3bd67f890edb8f James Smart 2019-12-20  527  	if ((be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G5) &&
3bd67f890edb8f James Smart 2019-12-20  528  	    (be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G6)) {
3bd67f890edb8f James Smart 2019-12-20 @529  		efc_log_warn(efct,
3bd67f890edb8f James Smart 2019-12-20 @530  			      "Invalid FW image found Magic: 0x%x Size: %ld\n",
3bd67f890edb8f James Smart 2019-12-20  531  			be32_to_cpu(fw_image->magic_number), fw->size);
3bd67f890edb8f James Smart 2019-12-20  532  		rc = -1;
3bd67f890edb8f James Smart 2019-12-20  533  		goto exit;
3bd67f890edb8f James Smart 2019-12-20  534  	}
3bd67f890edb8f James Smart 2019-12-20  535  
3bd67f890edb8f James Smart 2019-12-20  536  	if (!strncmp(efct->fw_version, fw_image->revision,
3bd67f890edb8f James Smart 2019-12-20  537  		     strnlen(fw_image->revision, 16))) {
3bd67f890edb8f James Smart 2019-12-20  538  		efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  539  			       "No update req. Firmware is already up to date.\n");
3bd67f890edb8f James Smart 2019-12-20  540  		rc = 0;
3bd67f890edb8f James Smart 2019-12-20  541  		goto exit;
3bd67f890edb8f James Smart 2019-12-20  542  	}
3bd67f890edb8f James Smart 2019-12-20  543  	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
3bd67f890edb8f James Smart 2019-12-20  544  	if (rc) {
3bd67f890edb8f James Smart 2019-12-20  545  		efc_log_err(efct,
3bd67f890edb8f James Smart 2019-12-20  546  			     "Firmware update failed. Return code = %d\n", rc);
3bd67f890edb8f James Smart 2019-12-20  547  	} else {
3bd67f890edb8f James Smart 2019-12-20  548  		efc_log_info(efct, "Firmware updated successfully\n");
3bd67f890edb8f James Smart 2019-12-20  549  		switch (fw_change_status) {
3bd67f890edb8f James Smart 2019-12-20  550  		case 0x00:
3bd67f890edb8f James Smart 2019-12-20  551  			efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  552  				       "No reset needed, new firmware is active.\n");
3bd67f890edb8f James Smart 2019-12-20  553  			break;
3bd67f890edb8f James Smart 2019-12-20  554  		case 0x01:
3bd67f890edb8f James Smart 2019-12-20  555  			efc_log_warn(efct,
3bd67f890edb8f James Smart 2019-12-20  556  				      "A physical device reset (host reboot) is needed to activate the new firmware\n");
3bd67f890edb8f James Smart 2019-12-20  557  			break;
3bd67f890edb8f James Smart 2019-12-20  558  		case 0x02:
3bd67f890edb8f James Smart 2019-12-20  559  		case 0x03:
3bd67f890edb8f James Smart 2019-12-20  560  			efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  561  				       "firmware is resetting to activate the new firmware\n");
3bd67f890edb8f James Smart 2019-12-20  562  			efct_fw_reset(efct);
3bd67f890edb8f James Smart 2019-12-20  563  			break;
3bd67f890edb8f James Smart 2019-12-20  564  		default:
3bd67f890edb8f James Smart 2019-12-20  565  			efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  566  				       "Unexected value change_status: %d\n",
3bd67f890edb8f James Smart 2019-12-20  567  				fw_change_status);
3bd67f890edb8f James Smart 2019-12-20  568  			break;
3bd67f890edb8f James Smart 2019-12-20  569  		}
3bd67f890edb8f James Smart 2019-12-20  570  	}
3bd67f890edb8f James Smart 2019-12-20  571  
3bd67f890edb8f James Smart 2019-12-20  572  exit:
3bd67f890edb8f James Smart 2019-12-20  573  	release_firmware(fw);
3bd67f890edb8f James Smart 2019-12-20  574  
3bd67f890edb8f James Smart 2019-12-20  575  	return rc;
3bd67f890edb8f James Smart 2019-12-20  576  }
3bd67f890edb8f James Smart 2019-12-20  577  

:::::: The code at line 530 was first introduced by commit
:::::: 3bd67f890edb8fd4fc7c9902b8f11e2041571d9a elx: efct: Driver initialization routines

:::::: TO: James Smart <jsmart2021@gmail.com>
:::::: CC: 0day robot <lkp@intel.com>

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 70326 bytes --]

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process
@ 2019-12-24  7:45     ` kbuild test robot
  0 siblings, 0 replies; 78+ messages in thread
From: kbuild test robot @ 2019-12-24  7:45 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 9370 bytes --]

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on scsi/for-next linus/master v5.5-rc3 next-20191220]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20191224-054519
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
config: i386-allyesconfig (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from include/linux/pci.h:37:0,
                    from drivers/scsi/elx/efct/efct_driver.h:23,
                    from drivers/scsi/elx/efct/efct_driver.c:7:
   drivers/scsi/elx/efct/efct_driver.c: In function 'efct_request_firmware_update':
>> drivers/scsi/elx/efct/efct_driver.c:530:10: warning: format '%ld' expects argument of type 'long int', but argument 4 has type 'size_t {aka const unsigned int}' [-Wformat=]
             "Invalid FW image found Magic: 0x%x Size: %ld\n",
             ^
   include/linux/device.h:1691:22: note: in definition of macro 'dev_fmt'
    #define dev_fmt(fmt) fmt
                         ^~~
>> drivers/scsi/elx/efct/../include/efc_common.h:32:3: note: in expansion of macro 'dev_warn'
      dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
      ^~~~~~~~
>> drivers/scsi/elx/efct/efct_driver.c:529:3: note: in expansion of macro 'efc_log_warn'
      efc_log_warn(efct,
      ^~~~~~~~~~~~
--
   In file included from include/linux/pci.h:37:0,
                    from drivers/scsi/elx/efct/efct_driver.h:23,
                    from drivers/scsi/elx/efct/efct_scsi.c:7:
   drivers/scsi/elx/efct/efct_scsi.c: In function 'efct_scsi_build_sgls':
>> drivers/scsi/elx/efct/efct_scsi.c:346:13: warning: format '%ld' expects argument of type 'long int', but argument 5 has type 'size_t {aka unsigned int}' [-Wformat=]
                "sgl[%d] len of %ld is not multiple of blocksize\n",
                ^
   include/linux/device.h:1691:22: note: in definition of macro 'dev_fmt'
    #define dev_fmt(fmt) fmt
                         ^~~
>> drivers/scsi/elx/efct/../include/efc_common.h:38:3: note: in expansion of macro 'dev_dbg'
      dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
      ^~~~~~~
>> drivers/scsi/elx/efct/efct_scsi.c:345:6: note: in expansion of macro 'efc_log_test'
         efc_log_test(efct,
         ^~~~~~~~~~~~
--
   In file included from include/linux/pci.h:37:0,
                    from drivers/scsi/elx/libefc_sli/sli4.h:15,
                    from drivers/scsi/elx/libefc_sli/sli4.c:11:
   drivers/scsi/elx/libefc_sli/sli4.c: In function 'sli_cmd_read_topology':
>> drivers/scsi/elx/libefc_sli/sli4.c:3753:23: warning: format '%jd' expects argument of type 'intmax_t', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
       efc_log_info(sli4, "loop map buffer too small %jd\n",
                          ^
   include/linux/device.h:1691:22: note: in definition of macro 'dev_fmt'
    #define dev_fmt(fmt) fmt
                         ^~~
>> drivers/scsi/elx/libefc_sli/../include/efc_common.h:35:3: note: in expansion of macro 'dev_info'
      dev_info(&((efc)->pcidev)->dev, fmt, ##args)
      ^~~~~~~~
>> drivers/scsi/elx/libefc_sli/sli4.c:3753:4: note: in expansion of macro 'efc_log_info'
       efc_log_info(sli4, "loop map buffer too small %jd\n",
       ^~~~~~~~~~~~

vim +530 drivers/scsi/elx/efct/efct_driver.c

3bd67f890edb8f James Smart 2019-12-20  507  
3bd67f890edb8f James Smart 2019-12-20  508  static int
3bd67f890edb8f James Smart 2019-12-20  509  efct_request_firmware_update(struct efct *efct)
3bd67f890edb8f James Smart 2019-12-20  510  {
3bd67f890edb8f James Smart 2019-12-20  511  	int rc = 0;
3bd67f890edb8f James Smart 2019-12-20  512  	u8 file_name[256], fw_change_status = 0;
3bd67f890edb8f James Smart 2019-12-20  513  	const struct firmware *fw;
3bd67f890edb8f James Smart 2019-12-20  514  	struct efct_hw_grp_hdr *fw_image;
3bd67f890edb8f James Smart 2019-12-20  515  
3bd67f890edb8f James Smart 2019-12-20  516  	snprintf(file_name, 256, "%s.grp", efct->model);
3bd67f890edb8f James Smart 2019-12-20  517  	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
3bd67f890edb8f James Smart 2019-12-20  518  	if (rc) {
3bd67f890edb8f James Smart 2019-12-20  519  		efc_log_err(efct, "Firmware file(%s) not found.\n", file_name);
3bd67f890edb8f James Smart 2019-12-20  520  		return rc;
3bd67f890edb8f James Smart 2019-12-20  521  	}
3bd67f890edb8f James Smart 2019-12-20  522  	fw_image = (struct efct_hw_grp_hdr *)fw->data;
3bd67f890edb8f James Smart 2019-12-20  523  
3bd67f890edb8f James Smart 2019-12-20  524  	/* Check if firmware provided is compatible with this particular
3bd67f890edb8f James Smart 2019-12-20  525  	 * Adapter of not
3bd67f890edb8f James Smart 2019-12-20  526  	 */
3bd67f890edb8f James Smart 2019-12-20  527  	if ((be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G5) &&
3bd67f890edb8f James Smart 2019-12-20  528  	    (be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G6)) {
3bd67f890edb8f James Smart 2019-12-20 @529  		efc_log_warn(efct,
3bd67f890edb8f James Smart 2019-12-20 @530  			      "Invalid FW image found Magic: 0x%x Size: %ld\n",
3bd67f890edb8f James Smart 2019-12-20  531  			be32_to_cpu(fw_image->magic_number), fw->size);
3bd67f890edb8f James Smart 2019-12-20  532  		rc = -1;
3bd67f890edb8f James Smart 2019-12-20  533  		goto exit;
3bd67f890edb8f James Smart 2019-12-20  534  	}
3bd67f890edb8f James Smart 2019-12-20  535  
3bd67f890edb8f James Smart 2019-12-20  536  	if (!strncmp(efct->fw_version, fw_image->revision,
3bd67f890edb8f James Smart 2019-12-20  537  		     strnlen(fw_image->revision, 16))) {
3bd67f890edb8f James Smart 2019-12-20  538  		efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  539  			       "No update req. Firmware is already up to date.\n");
3bd67f890edb8f James Smart 2019-12-20  540  		rc = 0;
3bd67f890edb8f James Smart 2019-12-20  541  		goto exit;
3bd67f890edb8f James Smart 2019-12-20  542  	}
3bd67f890edb8f James Smart 2019-12-20  543  	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
3bd67f890edb8f James Smart 2019-12-20  544  	if (rc) {
3bd67f890edb8f James Smart 2019-12-20  545  		efc_log_err(efct,
3bd67f890edb8f James Smart 2019-12-20  546  			     "Firmware update failed. Return code = %d\n", rc);
3bd67f890edb8f James Smart 2019-12-20  547  	} else {
3bd67f890edb8f James Smart 2019-12-20  548  		efc_log_info(efct, "Firmware updated successfully\n");
3bd67f890edb8f James Smart 2019-12-20  549  		switch (fw_change_status) {
3bd67f890edb8f James Smart 2019-12-20  550  		case 0x00:
3bd67f890edb8f James Smart 2019-12-20  551  			efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  552  				       "No reset needed, new firmware is active.\n");
3bd67f890edb8f James Smart 2019-12-20  553  			break;
3bd67f890edb8f James Smart 2019-12-20  554  		case 0x01:
3bd67f890edb8f James Smart 2019-12-20  555  			efc_log_warn(efct,
3bd67f890edb8f James Smart 2019-12-20  556  				      "A physical device reset (host reboot) is needed to activate the new firmware\n");
3bd67f890edb8f James Smart 2019-12-20  557  			break;
3bd67f890edb8f James Smart 2019-12-20  558  		case 0x02:
3bd67f890edb8f James Smart 2019-12-20  559  		case 0x03:
3bd67f890edb8f James Smart 2019-12-20  560  			efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  561  				       "firmware is resetting to activate the new firmware\n");
3bd67f890edb8f James Smart 2019-12-20  562  			efct_fw_reset(efct);
3bd67f890edb8f James Smart 2019-12-20  563  			break;
3bd67f890edb8f James Smart 2019-12-20  564  		default:
3bd67f890edb8f James Smart 2019-12-20  565  			efc_log_debug(efct,
3bd67f890edb8f James Smart 2019-12-20  566  				       "Unexected value change_status: %d\n",
3bd67f890edb8f James Smart 2019-12-20  567  				fw_change_status);
3bd67f890edb8f James Smart 2019-12-20  568  			break;
3bd67f890edb8f James Smart 2019-12-20  569  		}
3bd67f890edb8f James Smart 2019-12-20  570  	}
3bd67f890edb8f James Smart 2019-12-20  571  
3bd67f890edb8f James Smart 2019-12-20  572  exit:
3bd67f890edb8f James Smart 2019-12-20  573  	release_firmware(fw);
3bd67f890edb8f James Smart 2019-12-20  574  
3bd67f890edb8f James Smart 2019-12-20  575  	return rc;
3bd67f890edb8f James Smart 2019-12-20  576  }
3bd67f890edb8f James Smart 2019-12-20  577  

:::::: The code at line 530 was first introduced by commit
:::::: 3bd67f890edb8fd4fc7c9902b8f11e2041571d9a elx: efct: Driver initialization routines

:::::: TO: James Smart <jsmart2021@gmail.com>
:::::: CC: 0day robot <lkp@intel.com>

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 70326 bytes --]

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-12-20 22:37 ` [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
  2019-12-24  7:45     ` kbuild test robot
@ 2019-12-24 21:01   ` Nathan Chancellor
  2019-12-25 16:09     ` James Smart
  2020-01-09 11:47   ` Hannes Reinecke
  2 siblings, 1 reply; 78+ messages in thread
From: Nathan Chancellor @ 2019-12-24 21:01 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, dwagner, bvanassche, Ram Vegesna, clang-built-linux

On Fri, Dec 20, 2019 at 02:37:23PM -0800, James Smart wrote:
> This final patch ties the efct driver into the kernel Kconfig
> and build linkages in the drivers/scsi directory.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>

Hi James,

The 0day bot reported a few new clang warnings with this series. Would
you mind fixing them in the next version? I've included how I would
resolve them inline; feel free to use them or fix the warnings in a
different way.


On Wed, Dec 25, 2019 at 04:31:56AM +0800, kbuild test robot wrote:
> CC: kbuild-all@lists.01.org
> In-Reply-To: <20191220223723.26563-33-jsmart2021@gmail.com>
> References: <20191220223723.26563-33-jsmart2021@gmail.com>
> TO: James Smart <jsmart2021@gmail.com>
> CC: linux-scsi@vger.kernel.org
> CC: maier@linux.ibm.com, dwagner@suse.de, bvanassche@acm.org, James Smart <jsmart2021@gmail.com>, Ram Vegesna <ram.vegesna@broadcom.com>
> 
> Hi James,
> 
> I love your patch! Perhaps something to improve:
> 
> [auto build test WARNING on mkp-scsi/for-next]
> [also build test WARNING on scsi/for-next linus/master v5.5-rc3 next-20191219]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
> 
> url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20191224-054519
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
> config: x86_64-allyesconfig (attached as .config)
> compiler: clang version 10.0.0 (git://gitmirror/llvm_project e5a743c4f6e3639ba3bee778c894a996ef96391a)
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64 
> 
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
> 
> All warnings (new ones prefixed by >>):
> 
> >> drivers/scsi/elx/efct/efct_els.c:1736:32: warning: implicit conversion from enumeration type 'enum efct_els_role' to different enumeration type 'enum efct_scsi_io_role' [-Wenum-conversion]
>            io = efct_scsi_io_alloc(node, EFCT_ELS_ROLE_RESPONDER);
>                 ~~~~~~~~~~~~~~~~~~       ^~~~~~~~~~~~~~~~~~~~~~~
>    1 warning generated.


diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
index 9c964302505b..10e60128a527 100644
--- a/drivers/scsi/elx/efct/efct_els.c
+++ b/drivers/scsi/elx/efct/efct_els.c
@@ -1733,7 +1733,7 @@ efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
 	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
 	u32 d_id = ntoh24(hdr->fh_d_id);
 
-	io = efct_scsi_io_alloc(node, EFCT_ELS_ROLE_RESPONDER);
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
 	if (!io) {
 		efc_log_err(efc, "els IO alloc failed\n");
 		return io;

> >> drivers/scsi/elx/efct/efct_hw.c:5270:6: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
>            if (!sli_cmd_common_nop(&hw->sli, ctx->cmd,
>                ^
>    drivers/scsi/elx/efct/efct_hw.c:5270:6: note: add parentheses after the '!' to evaluate the comparison first
>            if (!sli_cmd_common_nop(&hw->sli, ctx->cmd,
>                ^
>                 (
>    drivers/scsi/elx/efct/efct_hw.c:5270:6: note: add parentheses around left hand side expression to silence this warning
>            if (!sli_cmd_common_nop(&hw->sli, ctx->cmd,
>                ^
>                (
>    drivers/scsi/elx/efct/efct_hw.c:5619:6: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
>            if (!sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
>                ^
>    drivers/scsi/elx/efct/efct_hw.c:5619:6: note: add parentheses after the '!' to evaluate the comparison first
>            if (!sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
>                ^
>                 (
>    drivers/scsi/elx/efct/efct_hw.c:5619:6: note: add parentheses around left hand side expression to silence this warning
>            if (!sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
>                ^
>                (
>    drivers/scsi/elx/efct/efct_hw.c:5962:6: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
>            if (!sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
>                ^
>    drivers/scsi/elx/efct/efct_hw.c:5962:6: note: add parentheses after the '!' to evaluate the comparison first
>            if (!sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
>                ^
>                 (
>    drivers/scsi/elx/efct/efct_hw.c:5962:6: note: add parentheses around left hand side expression to silence this warning
>            if (!sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
>                ^
>                (
>    3 warnings generated.

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 23d55d0d26c3..8428c7ff9d72 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -5267,8 +5267,8 @@ efct_hw_async_call(struct efct_hw *hw,
 	ctx->arg = arg;
 
 	/* Build and send a NOP mailbox command */
-	if (!sli_cmd_common_nop(&hw->sli, ctx->cmd,
-			       sizeof(ctx->cmd), 0) == 0) {
+	if (sli_cmd_common_nop(&hw->sli, ctx->cmd,
+			       sizeof(ctx->cmd), 0)) {
 		efc_log_err(hw->os, "COMMON_NOP format failure\n");
 		kfree(ctx);
 		rc = -1;
@@ -5616,10 +5616,10 @@ efct_hw_port_attach_reg_vpi(struct efc_sli_port *sport, void *data)
 	struct efct_hw *hw = sport->hw;
 	int rc;
 
-	if (!sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+	if (sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
 			    sport->fc_id, sport->sli_wwpn,
 			sport->indicator, sport->domain->indicator,
-			false) == 0) {
+			false)) {
 		efc_log_err(hw->os, "REG_VPI format failure\n");
 		efct_hw_port_free_resources(sport,
 					    EFC_HW_PORT_ATTACH_FAIL, data);
@@ -5959,11 +5959,11 @@ efct_hw_domain_attach_reg_vfi(struct efc_domain *domain, void *data)
 	struct efct_hw *hw = domain->hw;
 	int rc;
 
-	if (!sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+	if (sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
 			    domain->indicator, domain->fcf_indicator,
 			domain->dma, domain->sport->indicator,
 			domain->sport->sli_wwpn,
-			domain->sport->fc_id) == 0) {
+			domain->sport->fc_id)) {
 		efc_log_err(hw->os, "REG_VFI format failure\n");
 		goto cleanup;
 	}

> >> drivers/scsi/elx/libefc_sli/sli4.c:202:6: warning: variable 'ver' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
>            if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
>                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>    drivers/scsi/elx/libefc_sli/sli4.c:206:5: note: uninitialized use occurs here
>                             ver, CFG_RQST_PYLD_LEN(cmn_create_eq));
>                             ^~~
>    drivers/scsi/elx/libefc_sli/sli4.c:202:2: note: remove the 'if' if its condition is always true
>            if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
>            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>    drivers/scsi/elx/libefc_sli/sli4.c:195:24: note: initialize the variable 'ver' to silence this warning
>            u32 dw6_flags = 0, ver;
>                                  ^
>                                   = 0
>    1 warning generated.

Presumably, ver should be initialized to either CMD_V0 or CMD_V1 but I
cannot tell.
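
Something like the following would at least silence it (a sketch only, on the
assumption that plain V0 is the safe default when the interface type is not
6; the right value needs confirming against the SLI-4 spec):

-	u32 dw6_flags = 0, ver;
+	u32 dw6_flags = 0, ver = CMD_V0;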

Cheers,
Nathan

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-12-24 21:01   ` Nathan Chancellor
@ 2019-12-25 16:09     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2019-12-25 16:09 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: linux-scsi, maier, dwagner, bvanassche, Ram Vegesna, clang-built-linux



> On Dec 24, 2019, at 1:01 PM, Nathan Chancellor <natechancellor@gmail.com> wrote:
> 
> On Fri, Dec 20, 2019 at 02:37:23PM -0800, James Smart wrote:
>> This final patch ties the efct driver into the kernel Kconfig
>> and build linkages in the drivers/scsi directory.
>> 
>> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
>> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> Hi James,
> 
> The 0day bot reported a few new clang warnings with this series. Would
> you mind fixing them in the next version? I've attached how I would
> resolve them inline, feel free to use them or fix the warnings in a
> different way.
> 

Hi Nathan,

I will gladly take care of them

— james

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver
  2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (31 preceding siblings ...)
  2019-12-20 22:37 ` [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
@ 2019-12-29 18:27 ` Sebastian Herbszt
  32 siblings, 0 replies; 78+ messages in thread
From: Sebastian Herbszt @ 2019-12-29 18:27 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, maier, dwagner, bvanassche, Sebastian Herbszt

James Smart wrote:
> This patch set is a request to incorporate the new Broadcom
> (Emulex) FC target driver, efct, into the kernel source tree.
>
> The driver source has been Announced a couple of times, the last
> version on 12/18/2018. The driver has been hosted on gitlab for
> review has had contributions from the community.
>   gitlab (git@gitlab.com:jsmart/efct-Emulex_FC_Target.git)
>
> The driver integrates into the source tree at the (new)
> drivers/scsi/elx subdirectory.
>
> The driver consists of the following components:
> - A libefc_sli subdirectory: This subdirectory contains a library that
>   encapsulates common definitions and routines for an Emulex SLI-4
>   adapter.
> - A libefc subdirectory: This subdirectory contains a library of
>   common routines. Of major import is a number of routines that
>   implement a FC Discovery engine for target mode.
> - An efct subdirectory: This subdirectory contains the efct target
>   mode device driver. The driver utilizes the above librarys and
>   plugs into the SCSI LIO interfaces. The driver is SCSI only at
>   this time.
>
> The patches populate the libraries and device driver and can only
> be compiled as a complete set.
>
> This driver is completely independent from the lpfc device driver
> and there is no overlap on PCI ID's.
>
> The patches have been cut against the 5.6/scsi-queue branch.
>
> Thank you to those that have contributed to the driver in the past.
>
> Review comments welcome!
>
> -- james
>
>
> V2 modifications:
>
> Contains the following modifications based on prior review comments:
>   Indentation/Alignment/Spacing changes
>   Comments: format cleanup; removed obvious or unnecessary comments;
>     Added comments for clarity.
>   Headers use #ifndef comparing for prior inclusion
>   Cleanup structure names (remove _s suffix)
>   Encapsulate use of macro arguments
>   Refactor to remove static function declarations for static local
> routines Removed unused variables
>   Fix SLI4_INTF_VALID_MASK for 32bits
>   Ensure no BIT() use
>   Use __ffs() in page count macro
>   Reorg to move field defines out of structure definition
>   Commonize command building routines to reduce duplication
>   LIO interface:
>     Removed scsi initiator includes
>     Cleaned up interface defines
>     Removed lio WWN version attribute.
>     Expanded macros within logging macros
>     Cleaned up lio state setting macro
>     Remove __force use
>     Modularized session debugfs code so can be easily replaced.
>     Cleaned up abort task handling. Return after initiating.
>     Modularized where possible to reduce duplication
>     Convert from kthread to workqueue use
>     Remove unused macros
>   Add missing TARGET_CORE build attribute
>   Fix kbuild test robot warnings
>
> Comments not addressed:
>   Use of __packed: not believed necessary
>   Session debugfs code remains. There is not yet a common lio
>     mechanism to replace with.

There seems to be an issue with this version and also the code from
October on my setup. I am running 5.5.0-rc3 but it also happens
on earlier kernel versions.

While shutting down the target after some testing I execute

rmdir /sys/kernel/config/target/efct/10:00:00:90:fa:f0:89:ba/tpgt_0/acls/10:00:00:90:fa:f0:89:bb

but this command never returns and the shutdown script hangs.

The code from August [1] and a refactored version [2] do not exhibit
this problem.

[  245.485090] efct_TPG[0]_LUN[0] - Removed ACL for InitiatorNode: 10:00:00:90:fa:f0:89:bb Mapped LUN: 0
[  245.497691] efct_TPG[0] - Freeing ACL for efct InitiatorNode: 10:00:00:90:fa:f0:89:bb Mapped LUN: 0

[  385.687531] sysrq: Show Blocked State
[  385.687547]   task                PC stack   pid father
[  385.687610] efct:0:0        D    0  3241      2 0x80004000
[  385.687615] Call Trace:
[  385.687628]  __schedule+0x28e/0x7a0
[  385.687635]  ? try_to_del_timer_sync+0x45/0x70
[  385.687639]  ? _raw_spin_lock_irqsave+0x14/0x40
[  385.687643]  schedule+0x46/0xb0
[  385.687646]  schedule_timeout+0x118/0x2d0
[  385.687650]  ? __next_timer_interrupt+0xb0/0xb0
[  385.687653]  wait_for_completion_timeout+0x87/0xf0
[  385.687657]  ? wake_up_q+0x90/0x90
[  385.687682]  efct_intr_thread+0x5a/0xa0 [efct]
[  385.687695]  ? efct_device_detach+0x110/0x110 [efct]
[  385.687700]  kthread+0xdc/0x110
[  385.687713]  ? efct_device_detach+0x110/0x110 [efct]
[  385.687716]  ? kthread_park+0xa0/0xa0
[  385.687720]  ret_from_fork+0x2e/0x40
[  385.687726] rmdir           D    0  3368   3365 0x00000000
[  385.687730] Call Trace:
[  385.687735]  __schedule+0x28e/0x7a0
[  385.687739]  schedule+0x46/0xb0
[  385.687742]  schedule_timeout+0x1bd/0x2d0
[  385.687762]  ? efct_lio_close_session+0x3e/0xd0 [efct]
[  385.687780]  ? efct_lio_close_session+0x3e/0xd0 [efct]
[  385.687783]  ? wait_for_completion+0x2a/0xe0
[  385.687786]  wait_for_completion+0x8f/0xe0
[  385.687789]  ? wake_up_q+0x90/0x90
[  385.687820]  core_tpg_del_initiator_node_acl+0x73/0x100 [target_core_mod]
[  385.687827]  ? config_item_put.part.0+0x57/0xe0 [configfs]
[  385.687845]  target_fabric_nacl_base_release+0x20/0x30 [target_core_mod]
[  385.687851]  config_item_put.part.0+0x78/0xe0 [configfs]
[  385.687856]  config_item_put+0x11/0x20 [configfs]
[  385.687861]  configfs_rmdir+0x299/0x300 [configfs]
[  385.687866]  vfs_rmdir+0x6a/0x150
[  385.687869]  do_rmdir+0x16d/0x1a0
[  385.687873]  sys_rmdir+0x15/0x20
[  385.687876]  do_fast_syscall_32+0x87/0x280
[  385.687880]  entry_SYSENTER_32+0xaa/0x102
[  385.687884] EIP: 0xb7eebb89
[  385.687888] Code: ff 00 06 fc ff 30 06 fc ff 60 06 fc ff 90 06 fc ff d0 06 fc ff 00 07 fc ff 30 07 fc ff 70 07 fc ff 15 06 fc ff 35 06 fc ff 55 <06> fc ff 75 06 fc ff 12 06 fc ff 32 06 fc ff 52 06 fc ff 72 06 fc
[  385.687891] EAX: ffffffda EBX: bf9ea909 ECX: 00000000 EDX: bf9ea909
[  385.687893] ESI: bf9e9114 EDI: 00000002 EBP: bf9e9078 ESP: bf9e901c
[  385.687895] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000292

[1]
https://repo.or.cz/efct-Emulex_FC_Target/sherbszt.git/shortlog/refs/heads/v2-20190804
[2]
https://repo.or.cz/efct-Emulex_FC_Target/sherbszt.git/shortlog/refs/heads/v2-20191125

Sebastian

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions
  2019-12-20 22:36 ` [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2020-01-08  7:11   ` Hannes Reinecke
  2020-01-09  0:59     ` James Smart
  0 siblings, 1 reply; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  7:11 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This is the initial patch for the new Emulex target mode SCSI
> driver sources.
> 
> This patch:
> - Creates the new Emulex source level directory drivers/scsi/elx
>   and adds the directory to the MAINTAINERS file.
> - Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
>   This library is a SLI-4 interface library.
> - Starts the population of the libefc_sli library with definitions
>   of SLI-4 hardware register offsets and definitions.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  MAINTAINERS                        |   8 ++
>  drivers/scsi/elx/libefc_sli/sli4.c |  26 ++++
>  drivers/scsi/elx/libefc_sli/sli4.h | 239 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 273 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
>  create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index cc0a4a8ae06a..dd8e5f340991 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6139,6 +6139,14 @@ W:	http://www.broadcom.com
>  S:	Supported
>  F:	drivers/scsi/lpfc/
>  
> +EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
> +M:	James Smart <james.smart@broadcom.com>
> +M:	Ram Vegesna <ram.vegesna@broadcom.com>
> +L:	linux-scsi@vger.kernel.org
> +W:	http://www.broadcom.com
> +S:	Supported
> +F:	drivers/scsi/elx/
> +
>  ENE CB710 FLASH CARD READER DRIVER
>  M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
>  S:	Maintained
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> new file mode 100644
> index 000000000000..29d33becd334
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -0,0 +1,26 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/**
> + * All common (i.e. transport-independent) SLI-4 functions are implemented
> + * in this file.
> + */
> +#include "sli4.h"
> +
> +struct sli4_asic_entry_t {
> +	u32 rev_id;
> +	u32 family;
> +};
> +
> +static struct sli4_asic_entry_t sli4_asic_table[] = {
> +	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
> +	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
> +	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
> +};
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> new file mode 100644
> index 000000000000..02c671cf57ef
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -0,0 +1,239 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + *
> + */
> +
> +/*
> + * All common SLI-4 structures and function prototypes.
> + */
> +
> +#ifndef _SLI4_H
> +#define _SLI4_H
> +
> +/*************************************************************************
> + * Common SLI-4 register offsets and field definitions
> + */
> +
> +/* SLI_INTF - SLI Interface Definition Register */
> +#define SLI4_INTF_REG			0x0058
> +enum {
> +	SLI4_INTF_REV_SHIFT		= 4,
> +	SLI4_INTF_REV_MASK		= 0x0F << SLI4_INTF_REV_SHIFT,
> +
> +	SLI4_INTF_REV_S3		= 3 << SLI4_INTF_REV_SHIFT,
> +	SLI4_INTF_REV_S4		= 4 << SLI4_INTF_REV_SHIFT,
> +
> +	SLI4_INTF_FAMILY_SHIFT		= 8,
> +	SLI4_INTF_FAMILY_MASK		= 0x0F << SLI4_INTF_FAMILY_SHIFT,
> +
> +	SLI4_FAMILY_CHECK_ASIC_TYPE	= 0xf << SLI4_INTF_FAMILY_SHIFT,
> +
> +	SLI4_INTF_IF_TYPE_SHIFT		= 12,
> +	SLI4_INTF_IF_TYPE_MASK		= 0x0F << SLI4_INTF_IF_TYPE_SHIFT,
> +
> +	SLI4_INTF_IF_TYPE_2		= 2 << SLI4_INTF_IF_TYPE_SHIFT,
> +	SLI4_INTF_IF_TYPE_6		= 6 << SLI4_INTF_IF_TYPE_SHIFT,
> +
> +	SLI4_INTF_VALID_SHIFT		= 29,
> +	SLI4_INTF_VALID_MASK		= 7 << SLI4_INTF_VALID_SHIFT,
> +
> +	SLI4_INTF_VALID_VALUE		= 6 << SLI4_INTF_VALID_SHIFT,
> +};
> +
> +/* ASIC_ID - SLI ASIC Type and Revision Register */
> +#define SLI4_ASIC_ID_REG	0x009c
> +enum {
> +	SLI4_ASIC_GEN_SHIFT	= 8,
> +	SLI4_ASIC_GEN_MASK	= 0xFF << SLI4_ASIC_GEN_SHIFT,
> +	SLI4_ASIC_GEN_5		= 0x0b << SLI4_ASIC_GEN_SHIFT,
> +	SLI4_ASIC_GEN_6		= 0x0c << SLI4_ASIC_GEN_SHIFT,
> +	SLI4_ASIC_GEN_7		= 0x0d << SLI4_ASIC_GEN_SHIFT,
> +};
> +
> +enum {
> +	SLI4_ASIC_REV_A0 = 0x00,
> +	SLI4_ASIC_REV_A1 = 0x01,
> +	SLI4_ASIC_REV_A2 = 0x02,
> +	SLI4_ASIC_REV_A3 = 0x03,
> +	SLI4_ASIC_REV_B0 = 0x10,
> +	SLI4_ASIC_REV_B1 = 0x11,
> +	SLI4_ASIC_REV_B2 = 0x12,
> +	SLI4_ASIC_REV_C0 = 0x20,
> +	SLI4_ASIC_REV_C1 = 0x21,
> +	SLI4_ASIC_REV_C2 = 0x22,
> +	SLI4_ASIC_REV_D0 = 0x30,
> +};
> +
> +/* BMBX - Bootstrap Mailbox Register */
> +#define SLI4_BMBX_REG		0x0160
> +#define SLI4_BMBX_MASK_HI	0x3
> +#define SLI4_BMBX_MASK_LO	0xf
> +#define SLI4_BMBX_RDY		(1 << 0)
> +#define SLI4_BMBX_HI		(1 << 1)
> +#define SLI4_BMBX_WRITE_HI(r) \
> +	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | SLI4_BMBX_HI)
> +#define SLI4_BMBX_WRITE_LO(r) \
> +	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) << 30) | \
> +	 (((r) & ~SLI4_BMBX_MASK_LO) >> 2))
> +#define SLI4_BMBX_SIZE				256
> +
> +/* SLIPORT_CONTROL - SLI Port Control Register */
> +#define SLI4_PORT_CTRL_REG	0x0408
> +#define SLI4_PORT_CTRL_IP	(1 << 27)
> +#define SLI4_PORT_CTRL_IDIS	(1 << 22)
> +#define SLI4_PORT_CTRL_FDD	(1 << 31)
> +
> +/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
> +#define SLI4_PORT_ERROR1	0x040c
> +#define SLI4_PORT_ERROR2	0x0410
> +
> +/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
> +#define SLI4_EQCQ_DB_REG	0x120
> +enum {
> +	SLI4_EQ_ID_LO_MASK	= 0x01FF,
> +
> +	SLI4_CQ_ID_LO_MASK	= 0x03FF,
> +
> +	SLI4_EQCQ_CI_EQ		= 0x0200,
> +
> +	SLI4_EQCQ_QT_EQ		= 0x00000400,
> +	SLI4_EQCQ_QT_CQ		= 0x00000000,
> +
> +	SLI4_EQCQ_ID_HI_SHIFT	= 11,
> +	SLI4_EQCQ_ID_HI_MASK	= 0xF800,
> +
> +	SLI4_EQCQ_NUM_SHIFT	= 16,
> +	SLI4_EQCQ_NUM_MASK	= 0x1FFF0000,
> +
> +	SLI4_EQCQ_ARM		= 0x20000000,
> +	SLI4_EQCQ_UNARM		= 0x00000000,
> +};
> +
Please be consistent here wrt the _SHIFT and _MASK definitions.
Either spell the masks out (as you do in this case) and change the first
hunk to avoid the explicit shift, or keep the style of the first hunk and
define the _MASK values here in terms of the _SHIFT values
(i.e. SLI4_EQCQ_ID_HI_MASK = 0x1F << SLI4_EQCQ_ID_HI_SHIFT).
I don't mind either way, but keep it consistent.
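
Something like this would be the second variant (just a sketch, reusing
the values from the hunk above):

enum {
	SLI4_EQCQ_ID_HI_SHIFT	= 11,
	SLI4_EQCQ_ID_HI_MASK	= 0x1F << SLI4_EQCQ_ID_HI_SHIFT,

	SLI4_EQCQ_NUM_SHIFT	= 16,
	SLI4_EQCQ_NUM_MASK	= 0x1FFF << SLI4_EQCQ_NUM_SHIFT,
};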

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries
  2019-12-20 22:36 ` [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
@ 2020-01-08  7:24   ` Hannes Reinecke
  2020-01-09  1:00     ` James Smart
  0 siblings, 1 reply; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  7:24 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch add SLI-4 Data structures and defines for:
> - Buffer Descriptors (BDEs)
> - Scatter/Gather List elements (SGEs)
> - Queues and their Entry Descriptions for:
>    Event Queues (EQs), Completion Queues (CQs),
>    Receive Queues (RQs), and the Mailbox Queue (MQ).
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/include/efc_common.h |   25 +
>  drivers/scsi/elx/libefc_sli/sli4.h    | 1768 +++++++++++++++++++++++++++++++++
>  2 files changed, 1793 insertions(+)
>  create mode 100644 drivers/scsi/elx/include/efc_common.h
> 
> diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
> new file mode 100644
> index 000000000000..3fc48876c531
> --- /dev/null
> +++ b/drivers/scsi/elx/include/efc_common.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFC_COMMON_H__
> +#define __EFC_COMMON_H__
> +
> +#include <linux/pci.h>
> +
> +#define EFC_SUCCESS 0
> +#define EFC_FAIL 1
> +
> +struct efc_dma {
> +	void		*virt;
> +	void            *alloc;
> +	dma_addr_t	phys;
> +
> +	size_t		size;
> +	size_t          len;
> +	struct pci_dev	*pdev;
> +};
> +
> +#endif /* __EFC_COMMON_H__ */
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index 02c671cf57ef..f86a9e72ed43 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -12,6 +12,8 @@
>  #ifndef _SLI4_H
>  #define _SLI4_H
>  
> +#include "../include/efc_common.h"
> +
>  /*************************************************************************
>   * Common SLI-4 register offsets and field definitions
>   */
> @@ -236,4 +238,1770 @@ struct sli4_reg {
>  	u32	off;
>  };
>  
> +struct sli4_dmaaddr {
> +	__le32 low;
> +	__le32 high;
> +};
> +
> +/* a 3-word BDE with address 1st 2 words, length last word */
> +struct sli4_bufptr {
> +	struct sli4_dmaaddr addr;
> +	__le32 length;
> +};
> +
> +/* a 3-word BDE with length as first word, address last 2 words */
> +struct sli4_bufptr_len1st {
> +	__le32 length0;
> +	struct sli4_dmaaddr addr;
> +};
> +
> +/* Buffer Descriptor Entry (BDE) */
> +enum {
> +	SLI4_BDE_MASK_BUFFER_LEN	= 0x00ffffff,
> +	SLI4_BDE_MASK_BDE_TYPE		= 0xff000000,
> +};
> +
> +struct sli4_bde {
> +	__le32		bde_type_buflen;
> +	union {
> +		struct sli4_dmaaddr data;
> +		struct {
> +			__le32	offset;
> +			__le32	rsvd2;
> +		} imm;
> +		struct sli4_dmaaddr blp;
> +	} u;
> +};
> +
> +/* Buffer Descriptors */
> +enum {
> +	BDE_TYPE_SHIFT		= 24,
> +	BDE_TYPE_BDE_64		= 0x00,	/* Generic 64-bit data */
> +	BDE_TYPE_BDE_IMM	= 0x01,	/* Immediate data */
> +	BDE_TYPE_BLP		= 0x40,	/* Buffer List Pointer */
> +};
> +
> +/* Scatter-Gather Entry (SGE) */
> +#define SLI4_SGE_MAX_RESERVED			3
> +
> +enum {
> +	/* DW2 */
> +	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,
> +	/*DW2W1*/
> +	SLI4_SGE_TYPE_SHIFT		= 27,
> +	SLI4_SGE_TYPE_MASK		= 0xf << SLI4_SGE_TYPE_SHIFT,
> +	/*SGE Types*/
> +	SLI4_SGE_TYPE_DATA		= 0x00,
> +	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
> +	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
> +	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
> +	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
> +	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
> +	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
> +	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
> +	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
> +
> +	SLI4_SGE_LAST			= (1 << 31),
> +};
> +
> +struct sli4_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		buffer_length;
> +};
> +
I am really not a big fan of anonymous enums, especially not when they are
scoped to specific structures.
Can you please avoid the use of anonymous enums, and name them according
to the structure where they are intended to be used?
Ideally the structure would reference the named enums directly, but I do
agree that this is not always possible or desired.
But we should at least name them accordingly, to give the developer a
hint where these values are expected to occur.

Eg in the above case

enum sli4_sge_flags {

or similar would make the intended usage clearer.
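
To illustrate (only a sketch, picking up the SLI4_SGE_* values from the
hunk above; the exact grouping is of course up to you):

enum sli4_sge_flags {
	/* DW2 */
	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,
	/* DW2W1 */
	SLI4_SGE_TYPE_SHIFT		= 27,
	SLI4_SGE_TYPE_MASK		= 0xf << SLI4_SGE_TYPE_SHIFT,
	SLI4_SGE_LAST			= (1 << 31),
};

enum sli4_sge_type {
	SLI4_SGE_TYPE_DATA		= 0x00,
	SLI4_SGE_TYPE_DIF		= 0x04,
	/* ... remaining SGE types ... */
};

A comment on the dw2_flags member of struct sli4_sge could then point at
the enum names.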

> +/* T10 DIF Scatter-Gather Entry (SGE) */
> +struct sli4_dif_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		rsvd12;
> +};
> +
> +/* Data Integrity Seed (DISEED) SGE */
> +enum {
> +	/* DW2W1 */
> +	DISEED_SGE_HS			= (1 << 2),
> +	DISEED_SGE_WS			= (1 << 3),
> +	DISEED_SGE_IC			= (1 << 4),
> +	DISEED_SGE_ICS			= (1 << 5),
> +	DISEED_SGE_ATRT			= (1 << 6),
> +	DISEED_SGE_AT			= (1 << 7),
> +	DISEED_SGE_FAT			= (1 << 8),
> +	DISEED_SGE_NA			= (1 << 9),
> +	DISEED_SGE_HI			= (1 << 10),
> +
> +	/* DW3W1 */
> +	DISEED_SGE_BS_MASK		= 0x0007,
> +	DISEED_SGE_AI			= (1 << 3),
> +	DISEED_SGE_ME			= (1 << 4),
> +	DISEED_SGE_RE			= (1 << 5),
> +	DISEED_SGE_CE			= (1 << 6),
> +	DISEED_SGE_NR			= (1 << 7),
> +
> +	DISEED_SGE_OP_RX_SHIFT		= 8,
> +	DISEED_SGE_OP_RX_MASK		= (0xf << DISEED_SGE_OP_RX_SHIFT),
> +	DISEED_SGE_OP_TX_SHIFT		= 12,
> +	DISEED_SGE_OP_TX_MASK		= (0xf << DISEED_SGE_OP_TX_SHIFT),
> +
> +	/* Opcode values */
> +	DISEED_SGE_OP_IN_NODIF_OUT_CRC	= 0x00,
> +	DISEED_SGE_OP_IN_CRC_OUT_NODIF	= 0x01,
> +	DISEED_SGE_OP_IN_NODIF_OUT_CSUM	= 0x02,
> +	DISEED_SGE_OP_IN_CSUM_OUT_NODIF	= 0x03,
> +	DISEED_SGE_OP_IN_CRC_OUT_CRC	= 0x04,
> +	DISEED_SGE_OP_IN_CSUM_OUT_CSUM	= 0x05,
> +	DISEED_SGE_OP_IN_CRC_OUT_CSUM	= 0x06,
> +	DISEED_SGE_OP_IN_CSUM_OUT_CRC	= 0x07,
> +	DISEED_SGE_OP_IN_RAW_OUT_RAW	= 0x08,
> +
> +};
> +
Similar here: please use individual named enums, not one giant anonymous
enum containing different values for different use-cases.
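
E.g. (sketch only, values taken from the hunk above):

enum sli4_diseed_sge_flags {
	/* DW2W1 */
	DISEED_SGE_HS			= (1 << 2),
	DISEED_SGE_WS			= (1 << 3),
	/* ... remaining flag bits ... */
};

enum sli4_diseed_sge_opcode {
	DISEED_SGE_OP_IN_NODIF_OUT_CRC	= 0x00,
	DISEED_SGE_OP_IN_CRC_OUT_NODIF	= 0x01,
	/* ... remaining opcode values ... */
};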

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2019-12-20 22:36 ` [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2020-01-08  7:32   ` Hannes Reinecke
  2020-01-09  1:03     ` James Smart
  0 siblings, 1 reply; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  7:32 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds definitions for SLI-4 mailbox commands
> and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc_sli/sli4.h | 1728 +++++++++++++++++++++++++++++++++++-
>  1 file changed, 1727 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index f86a9e72ed43..c9bd3f71b27b 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -1995,7 +1995,7 @@ struct sli4_fc_xri_aborted_cqe {
>  #define SLI4_ELS_REQUEST64_DIR_READ		0x1
>  
>  #define SLI4_ELS_REQUEST64_OTHER		0x0
> -#define SLI4_ELS_REQUEST64_LOGO		0x1
> +#define SLI4_ELS_REQUEST64_LOGO			0x1
>  #define SLI4_ELS_REQUEST64_FDISC		0x2
>  #define SLI4_ELS_REQUEST64_FLOGIN		0x3
>  #define SLI4_ELS_REQUEST64_PLOGI		0x4
Shouldn't this rather be merged with the previous patch?

> @@ -2004,4 +2004,1730 @@ struct sli4_fc_xri_aborted_cqe {
>  #define SLI4_ELS_REQUEST64_CMD_NON_FABRIC	0x0c
>  #define SLI4_ELS_REQUEST64_CMD_FABRIC		0x0d
>  
> +#define SLI_PAGE_SIZE				(1 << 12)	/* 4096 */
> +#define SLI_SUB_PAGE_MASK			(SLI_PAGE_SIZE - 1)
> +#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)
> +
> +#define SLI4_BMBX_TIMEOUT_MSEC			30000
> +#define SLI4_FW_READY_TIMEOUT_MSEC		30000
> +
> +#define SLI4_BMBX_DELAY_US			1000	/* 1 ms */
> +#define SLI4_INIT_PORT_DELAY_US			10000	/* 10 ms */
> +
> +static inline u32
> +sli_page_count(size_t bytes, u32 page_size)
> +{
> +	if (!page_size)
> +		return 0;
> +
> +	return (bytes + (page_size - 1)) >> __ffs(page_size);
> +}
> +
> +/*************************************************************************
> + * SLI-4 mailbox command formats and definitions
> + */
> +
> +struct sli4_mbox_command_header {
> +	u8	resvd0;
> +	u8	command;
> +	__le16	status;	/* Port writes to indicate success/fail */
> +};
> +
> +enum {
> +	MBX_CMD_CONFIG_LINK	= 0x07,
> +	MBX_CMD_DUMP		= 0x17,
> +	MBX_CMD_DOWN_LINK	= 0x06,
> +	MBX_CMD_INIT_LINK	= 0x05,
> +	MBX_CMD_INIT_VFI	= 0xa3,
> +	MBX_CMD_INIT_VPI	= 0xa4,
> +	MBX_CMD_POST_XRI	= 0xa7,
> +	MBX_CMD_RELEASE_XRI	= 0xac,
> +	MBX_CMD_READ_CONFIG	= 0x0b,
> +	MBX_CMD_READ_STATUS	= 0x0e,
> +	MBX_CMD_READ_NVPARMS	= 0x02,
> +	MBX_CMD_READ_REV	= 0x11,
> +	MBX_CMD_READ_LNK_STAT	= 0x12,
> +	MBX_CMD_READ_SPARM64	= 0x8d,
> +	MBX_CMD_READ_TOPOLOGY	= 0x95,
> +	MBX_CMD_REG_FCFI	= 0xa0,
> +	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
> +	MBX_CMD_REG_RPI		= 0x93,
> +	MBX_CMD_REG_RX_RQ	= 0xa6,
> +	MBX_CMD_REG_VFI		= 0x9f,
> +	MBX_CMD_REG_VPI		= 0x96,
> +	MBX_CMD_RQST_FEATURES	= 0x9d,
> +	MBX_CMD_SLI_CONFIG	= 0x9b,
> +	MBX_CMD_UNREG_FCFI	= 0xa2,
> +	MBX_CMD_UNREG_RPI	= 0x14,
> +	MBX_CMD_UNREG_VFI	= 0xa1,
> +	MBX_CMD_UNREG_VPI	= 0x97,
> +	MBX_CMD_WRITE_NVPARMS	= 0x03,
> +	MBX_CMD_CFG_AUTO_XFER_RDY = 0xAD,
> +
> +	MBX_STATUS_SUCCESS	= 0x0000,
> +	MBX_STATUS_FAILURE	= 0x0001,
> +	MBX_STATUS_RPI_NOT_REG	= 0x1400,
> +};
> +
Make this two enums, one 'enum sli4_mbx_cmd' and one 'enum sli4_mbx_status'.
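
I.e. something like (sketch, values unchanged):

enum sli4_mbx_cmd {
	MBX_CMD_CONFIG_LINK	= 0x07,
	MBX_CMD_DUMP		= 0x17,
	/* ... remaining mailbox commands ... */
};

enum sli4_mbx_status {
	MBX_STATUS_SUCCESS	= 0x0000,
	MBX_STATUS_FAILURE	= 0x0001,
	MBX_STATUS_RPI_NOT_REG	= 0x1400,
};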

> +/* CONFIG_LINK */
> +enum {
> +	SLI4_CFG_LINK_BBSCN = 0xf00,
> +	SLI4_CFG_LINK_CSCN  = 0x1000,
> +};
> +
> +struct sli4_cmd_config_link {
> +	struct sli4_mbox_command_header	hdr;
> +	u8		maxbbc;
> +	u8		rsvd5;
> +	u8		rsvd6;
> +	u8		rsvd7;
> +	u8		alpa;
> +	__le16		n_port_id;
> +	u8		rsvd11;
> +	__le32		rsvd12;
> +	__le32		e_d_tov;
> +	__le32		lp_tov;
> +	__le32		r_a_tov;
> +	__le32		r_t_tov;
> +	__le32		al_tov;
> +	__le32		rsvd36;
> +	__le32		bbscn_dword;
> +};
> +
> +enum {
> +	SLI4_DUMP4_TYPE = 0xf,
> +};

Single enum should rather be converted into a #define ..
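
I.e. simply:

#define SLI4_DUMP4_TYPE		0xf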

> +
> +#define SLI4_WKI_TAG_SAT_TEM 0x1040
> +
> +struct sli4_cmd_dump4 {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		type_dword;
> +	__le16		wki_selection;
> +	__le16		rsvd10;
> +	__le32		rsvd12;
> +	__le32		returned_byte_cnt;
> +	__le32		resp_data[59];
> +};
> +
> +/* INIT_LINK - initialize the link for a FC port */
> +#define FC_TOPOLOGY_FCAL	0
> +#define FC_TOPOLOGY_P2P		1
> +
> +#define SLI4_INIT_LINK_F_LOOP_BACK	(1 << 0)
> +#define SLI4_INIT_LINK_F_UNFAIR		(1 << 6)
> +#define SLI4_INIT_LINK_F_NO_LIRP	(1 << 7)
> +#define SLI4_INIT_LINK_F_LOOP_VALID_CHK	(1 << 8)
> +#define SLI4_INIT_LINK_F_NO_LISA	(1 << 9)
> +#define SLI4_INIT_LINK_F_FAIL_OVER	(1 << 10)
> +#define SLI4_INIT_LINK_F_NO_AUTOSPEED	(1 << 11)
> +#define SLI4_INIT_LINK_F_PICK_HI_ALPA	(1 << 15)
> +
> +#define SLI4_INIT_LINK_F_P2P_ONLY	1
> +#define SLI4_INIT_LINK_F_FCAL_ONLY	2
> +
> +#define SLI4_INIT_LINK_F_FCAL_FAIL_OVER	0
> +#define SLI4_INIT_LINK_F_P2P_FAIL_OVER	1
> +
> +enum {
> +	SLI4_INIT_LINK_SEL_RESET_AL_PA		= 0xff,
> +	SLI4_INIT_LINK_FLAG_LOOPBACK		= 0x1,
> +	SLI4_INIT_LINK_FLAG_TOPOLOGY		= 0x6,
> +	SLI4_INIT_LINK_FLAG_UNFAIR		= 0x40,
> +	SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP	= 0x80,
> +	SLI4_INIT_LINK_FLAG_LOOP_VALIDITY	= 0x100,
> +	SLI4_INIT_LINK_FLAG_SKIP_LISA		= 0x200,
> +	SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER	= 0x400,
> +	SLI4_INIT_LINK_FLAG_FIXED_SPEED		= 0x800,
> +	SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA	= 0x8000,
> +};
> +
Why is this an enum, while the SLI4_INIT_LINK_F_XXX values above are defines?
Please be consistent.

And this applies throughout the remainder of the patch.
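
One option (sketch only, values unchanged) would be to fold the defines
into a named enum:

enum sli4_init_link_flags {
	SLI4_INIT_LINK_F_LOOP_BACK		= (1 << 0),
	SLI4_INIT_LINK_F_UNFAIR			= (1 << 6),
	/* ... */
	SLI4_INIT_LINK_SEL_RESET_AL_PA		= 0xff,
	SLI4_INIT_LINK_FLAG_LOOPBACK		= 0x1,
	/* ... */
};

The other option would be to turn the enum values into plain #defines.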

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines
  2019-12-20 22:36 ` [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2020-01-08  7:45   ` Hannes Reinecke
  2020-01-09  1:04     ` James Smart
  0 siblings, 1 reply; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  7:45 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds service routines to create mailbox commands
> and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/include/efc_common.h |   27 +
>  drivers/scsi/elx/libefc_sli/sli4.c    | 1556 +++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc_sli/sli4.h    |    9 +
>  3 files changed, 1592 insertions(+)
> 
> diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
> index 3fc48876c531..c339b22c35b5 100644
> --- a/drivers/scsi/elx/include/efc_common.h
> +++ b/drivers/scsi/elx/include/efc_common.h
> @@ -22,4 +22,31 @@ struct efc_dma {
>  	struct pci_dev	*pdev;
>  };
>  
> +#define efc_log_crit(efc, fmt, args...) \
> +		dev_crit(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_err(efc, fmt, args...) \
> +		dev_err(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_warn(efc, fmt, args...) \
> +		dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_info(efc, fmt, args...) \
> +		dev_info(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_test(efc, fmt, args...) \
> +		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_debug(efc, fmt, args...) \
> +		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_assert(cond, ...) \
> +	do { \
> +		if (!(cond)) { \
> +			pr_err("%s(%d) assertion (%s) failed\n", \
> +				__FILE__, __LINE__, #cond); \
> +			dump_stack(); \
> +		} \
> +	} while (0)
> +
>  #endif /* __EFC_COMMON_H__ */
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 29d33becd334..7061f7980fad 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -24,3 +24,1559 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
>  	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
>  	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
>  };
> +
> +/* Convert queue type enum (SLI_QTYPE_*) into a string */
> +static char *SLI_QNAME[] = {
> +	"Event Queue",
> +	"Completion Queue",
> +	"Mailbox Queue",
> +	"Work Queue",
> +	"Receive Queue",
> +	"Undefined"
> +};
> +
> +/*
> + * Write a SLI_CONFIG command to the provided buffer.
> + *
> + * @sli4 SLI context pointer.
> + * @buf Destination buffer for the command.
> + * @size size of the destination buffer(buf).
> + * @length Length in bytes of attached command.
> + * @dma DMA buffer for non-embedded commands.
> + *
> + */
> +static void *
> +sli_config_cmd_init(struct sli4 *sli4, void *buf,
> +		    size_t size, u32 length,
> +		    struct efc_dma *dma)
> +{
> +	struct sli4_cmd_sli_config *config = NULL;
> +	u32 flags = 0;
> +
> +	if (length > sizeof(config->payload.embed) && !dma) {
> +		efc_log_err(sli4, "Too big for an embedded cmd with len(%d)\n",
> +			    length);
> +		return NULL;
> +	}
> +
> +	config = buf;
> +
> +	memset(buf, 0, size);
> +
> +	config->hdr.command = MBX_CMD_SLI_CONFIG;
> +	if (!dma) {
> +		flags |= SLI4_SLICONF_EMB;
> +		config->dw1_flags = cpu_to_le32(flags);
> +		config->payload_len = cpu_to_le32(length);
> +		buf += offsetof(struct sli4_cmd_sli_config, payload.embed);
> +		return buf;
> +	}
> +
> +	flags = SLI4_SLICONF_PMDCMD_VAL_1;
> +	flags &= ~SLI4_SLICONF_EMB;
> +	config->dw1_flags = cpu_to_le32(flags);
> +
> +	config->payload.mem.addr.low = cpu_to_le32(lower_32_bits(dma->phys));
> +	config->payload.mem.addr.high =	cpu_to_le32(upper_32_bits(dma->phys));
> +	config->payload.mem.length =
> +			cpu_to_le32(dma->size & SLI4_SLICONFIG_PMD_LEN);
> +	config->payload_len = cpu_to_le32(dma->size);
> +	/* save pointer to DMA for BMBX dumping purposes */
> +	sli4->bmbx_non_emb_pmd = dma;
> +	return dma->virt;
> +}
> +
> +/*
> + * Write a COMMON_CREATE_CQ command.
> + *
> + * This creates a Version 2 message.
> + *
> + * Returns 0 on success, or non-zero otherwise.
> + */
> +static int
> +sli_cmd_common_create_cq(struct sli4 *sli4, void *buf, size_t size,
> +			 struct efc_dma *qmem,
> +			 u16 eq_id)
> +{
> +	struct sli4_rqst_cmn_create_cq_v2 *cqv2 = NULL;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 page_bytes = 0;
> +	u32 num_pages = 0;
> +	size_t cmd_size = 0;
> +	u32 page_size = 0;
> +	u32 n_cqe = 0;
> +	u32 dw5_flags = 0;
> +	u16 dw6w1_arm = 0;
> +	__le32 len;
> +
> +	/* First calculate number of pages and the mailbox cmd length */
> +	n_cqe = qmem->size / SLI4_CQE_BYTES;
> +	switch (n_cqe) {
> +	case 256:
> +	case 512:
> +	case 1024:
> +	case 2048:
> +		page_size = 1;
> +		break;
> +	case 4096:
> +		page_size = 2;
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +	page_bytes = page_size * SLI_PAGE_SIZE;
> +	num_pages = sli_page_count(qmem->size, page_bytes);
> +
> +	cmd_size = CFG_RQST_CMDSZ(cmn_create_cq_v2) + SZ_DMAADDR * num_pages;
> +
> +	cqv2 = sli_config_cmd_init(sli4, buf, size, cmd_size, NULL);
> +	if (!cqv2)
> +		return EFC_FAIL;
> +
> +	len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_v2,
> +					 SZ_DMAADDR * num_pages);
> +	sli_cmd_fill_hdr(&cqv2->hdr, CMN_CREATE_CQ, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V2, len);
> +	cqv2->page_size = page_size;
> +
> +	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
> +	cqv2->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages || num_pages > SLI4_CMN_CREATE_CQ_V2_MAX_PAGES)
> +		return EFC_FAIL;
> +
> +	switch (num_pages) {
> +	case 1:
> +		dw5_flags |= CQ_CNT_VAL(256);
> +		break;
> +	case 2:
> +		dw5_flags |= CQ_CNT_VAL(512);
> +		break;
> +	case 4:
> +		dw5_flags |= CQ_CNT_VAL(1024);
> +		break;
> +	case 8:
> +		dw5_flags |= CQ_CNT_VAL(LARGE);
> +		cqv2->cqe_count = cpu_to_le16(n_cqe);
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return -EFC_FAIL;
> +	}
> +
Hmm. Why do you return -EFC_FAIL here, and EFC_FAIL in the two cases above?
Do you differentiate between EFC_FAIL and -EFC_FAIL?
If so you should probably use different #defines ...

> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_CQV2_AUTOVALID;
> +
> +	dw5_flags |= CREATE_CQV2_EVT;
> +	dw5_flags |= CREATE_CQV2_VALID;
> +
> +	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
> +	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
> +	cqv2->eq_id = cpu_to_le16(eq_id);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
> +		cqv2->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
> +		cqv2->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a COMMON_CREATE_EQ command */
> +static int
> +sli_cmd_common_create_eq(struct sli4 *sli4, void *buf, size_t size,
> +			 struct efc_dma *qmem)
> +{
> +	struct sli4_rqst_cmn_create_eq *eq;
> +	u32 p;
> +	uintptr_t addr;
> +	u16 num_pages;
> +	u32 dw5_flags = 0;
> +	u32 dw6_flags = 0, ver;
> +
> +	eq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);
> +	if (!eq)
> +		return EFC_FAIL;
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		ver = CMD_V2;
> +
> +	sli_cmd_fill_hdr(&eq->hdr, CMN_CREATE_EQ, SLI4_SUBSYSTEM_COMMON,
> +			 ver, CFG_RQST_PYLD_LEN(cmn_create_eq));
> +
> +	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
> +	num_pages = qmem->size / SLI_PAGE_SIZE;
> +	eq->num_pages = cpu_to_le16(num_pages);
> +
> +	switch (num_pages) {
> +	case 1:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(1024);
> +		break;
> +	case 2:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(2048);
> +		break;
> +	case 4:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(4096);
> +		break;
> +	default:
> +		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_EQ_AUTOVALID;
> +
> +	dw5_flags |= CREATE_EQ_VALID;
> +	dw6_flags &= (~CREATE_EQ_ARM);
> +	eq->dw5_flags = cpu_to_le32(dw5_flags);
> +	eq->dw6_flags = cpu_to_le32(dw6_flags);
> +	eq->dw7_delaymulti = cpu_to_le32(CREATE_EQ_DELAYMULTI);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {
> +		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
> +		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_common_create_mq_ext(struct sli4 *sli4, void *buf, size_t size,
> +			     struct efc_dma *qmem,
> +			     u16 cq_id)
> +{
> +	struct sli4_rqst_cmn_create_mq_ext *mq;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 num_pages;
> +	u16 dw6w1_flags = 0;
> +
> +	mq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_create_mq_ext),
> +				 NULL);
> +	if (!mq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&mq->hdr, CMN_CREATE_MQ_EXT, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_create_mq_ext));
> +
> +	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
> +	num_pages = qmem->size / SLI_PAGE_SIZE;
> +	mq->num_pages = cpu_to_le16(num_pages);
> +	switch (num_pages) {
> +	case 1:
> +		dw6w1_flags |= SLI4_MQE_SIZE_16;
> +		break;
> +	case 2:
> +		dw6w1_flags |= SLI4_MQE_SIZE_32;
> +		break;
> +	case 4:
> +		dw6w1_flags |= SLI4_MQE_SIZE_64;
> +		break;
> +	case 8:
> +		dw6w1_flags |= SLI4_MQE_SIZE_128;
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
> +
> +	if (sli4->mq_create_version) {
> +		mq->cq_id_v1 = cpu_to_le16(cq_id);
> +		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
> +	} else {
> +		dw6w1_flags |= (cq_id << CREATE_MQEXT_CQID_SHIFT);
> +	}
> +	mq->dw7_val = cpu_to_le32(CREATE_MQEXT_VAL);
> +
> +	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {
> +		mq->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
> +		mq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
> +		     struct efc_dma *qmem, u16 cq_id)
> +{
> +	struct sli4_rqst_wq_create *wq;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 page_size = 0;
> +	u32 page_bytes = 0;
> +	u32 n_wqe = 0;
> +	u16 num_pages;
> +
> +	wq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(wq_create), NULL);
> +	if (!wq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&wq->hdr, SLI4_OPC_WQ_CREATE, SLI4_SUBSYSTEM_FC,
> +			 CMD_V1, CFG_RQST_PYLD_LEN(wq_create));
> +	n_wqe = qmem->size / sli4->wqe_size;
> +
> +	switch (qmem->size) {
> +	case 4096:
> +	case 8192:
> +	case 16384:
> +	case 32768:
> +		page_size = 1;
> +		break;
> +	case 65536:
> +		page_size = 2;
> +		break;
> +	case 131072:
> +		page_size = 4;
> +		break;
> +	case 262144:
> +		page_size = 8;
> +		break;
> +	case 524288:
> +		page_size = 10;
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +	page_bytes = page_size * SLI_PAGE_SIZE;
> +
> +	/* valid values for number of pages(num_pages): 1-8 */
> +	num_pages = sli_page_count(qmem->size, page_bytes);
> +	wq->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages || num_pages > SLI4_WQ_CREATE_MAX_PAGES)
> +		return EFC_FAIL;
> +
> +	wq->cq_id = cpu_to_le16(cq_id);
> +
> +	wq->page_size = page_size;
> +
> +	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
> +		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
> +	else
> +		wq->wqe_size_byte |= SLI4_WQE_SIZE;
> +
> +	wq->wqe_count = cpu_to_le16(n_wqe);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
> +		wq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
> +		wq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
> +		  struct efc_dma *qmem,
> +		  u16 cq_id, u16 buffer_size)
> +{
> +	struct sli4_rqst_rq_create *rq;
> +	u32 p;
> +	uintptr_t addr;
> +	u16 num_pages;
> +
> +	rq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(rq_create), NULL);
> +	if (!rq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(rq_create));
> +	/* valid values for number of pages: 1-8 (sec 4.5.6) */
> +	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
> +	rq->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages ||
> +	    num_pages > SLI4_RQ_CREATE_V0_MAX_PAGES) {
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return 0;
> +	}
> +

'0'? Why not EFC_FAIL/EFC_SUCCESS?

> +	/*
> +	 * RQE count is the log base 2 of the total number of entries
> +	 */
> +	rq->rqe_count_byte |= 31 - __builtin_clz(qmem->size / SLI4_RQE_SIZE);
> +
> +	if (buffer_size < SLI4_RQ_CREATE_V0_MIN_BUF_SIZE ||
> +	    buffer_size > SLI4_RQ_CREATE_V0_MAX_BUF_SIZE) {
> +		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
> +		       buffer_size,
> +		       SLI4_RQ_CREATE_V0_MIN_BUF_SIZE,
> +		       SLI4_RQ_CREATE_V0_MAX_BUF_SIZE);
> +		return -1;
> +	}

'-1'? Not EFC_FAIL?

[ .. ]
> +int
> +__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q,
> +		 u32 qtype, size_t size, u32 n_entries,
> +		      u32 align)
> +{
> +	if (!q->dma.virt || size != q->size ||
> +	    n_entries != q->length) {
> +		if (q->dma.size)
> +			__sli_queue_destroy(sli4, q);
> +
> +		memset(q, 0, sizeof(struct sli4_queue));
> +
> +		q->dma.size = size * n_entries;
> +		q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev,
> +						 q->dma.size, &q->dma.phys,
> +						 GFP_DMA);
> +		if (!q->dma.virt) {
> +			memset(&q->dma, 0, sizeof(struct efc_dma));
> +			efc_log_err(sli4, "%s allocation failed\n",
> +			       SLI_QNAME[qtype]);
> +			return -1;
> +		}

EFC_FAIL?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs
  2019-12-20 22:36 ` [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
@ 2020-01-08  7:54   ` Hannes Reinecke
  2020-01-09  1:04     ` James Smart
  0 siblings, 1 reply; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  7:54 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds service routines to create different WQEs and adds
> APIs to issue iread, iwrite, treceive, tsend and other work queue
> entries.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc_sli/sli4.c | 1717 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc_sli/sli4.h |    2 +
>  2 files changed, 1719 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 7061f7980fad..2ebe0235bc9e 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -1580,3 +1580,1720 @@ sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
>  
>  	return rc;
>  }
> +
> +/* Write an ABORT_WQE work queue entry */
> +int
> +sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
> +	      enum sli4_abort_type type, bool send_abts, u32 ids,
> +	      u32 mask, u16 tag, u16 cq_id)
> +{
> +	struct sli4_abort_wqe	*abort = buf;
> +
> +	memset(buf, 0, size);
> +
> +	switch (type) {
> +	case SLI_ABORT_XRI:
> +		abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
> +		if (mask) {
> +			efc_log_warn(sli4, "%#x aborting XRI %#x warning non-zero mask",
> +				mask, ids);
> +			mask = 0;
> +		}
> +		break;
> +	case SLI_ABORT_ABORT_ID:
> +		abort->criteria = SLI4_ABORT_CRITERIA_ABORT_TAG;
> +		break;
> +	case SLI_ABORT_REQUEST_ID:
> +		abort->criteria = SLI4_ABORT_CRITERIA_REQUEST_TAG;
> +		break;
> +	default:
> +		efc_log_info(sli4, "unsupported type %#x\n", type);
> +		return EFC_FAIL;
> +	}
> +
> +	abort->ia_ir_byte |= send_abts ? 0 : 1;
> +
> +	/* Suppress ABTS retries */
> +	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
> +
> +	abort->t_mask = cpu_to_le32(mask);
> +	abort->t_tag  = cpu_to_le32(ids);
> +	abort->command = SLI4_WQE_ABORT;
> +	abort->request_tag = cpu_to_le16(tag);
> +
> +	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
> +
> +	abort->cq_id = cpu_to_le16(cq_id);
> +	abort->cmdtype_wqec_byte |= SLI4_CMD_ABORT_WQE;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an ELS_REQUEST64_WQE work queue entry */
> +int
> +sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		      struct efc_dma *sgl,
> +		      u8 req_type, u32 req_len, u32 max_rsp_len,
> +		      u8 timeout, u16 xri, u16 tag,
> +		      u16 cq_id, u16 rnodeindicator, u16 sportindicator,
> +		      bool hlm, bool rnodeattached, u32 rnode_fcid,
> +		      u32 sport_fcid)
> +{
> +	struct sli4_els_request64_wqe	*els = buf;
> +	struct sli4_sge *sge = sgl->virt;
> +	bool is_fabric = false;
> +	struct sli4_bde *bptr;
> +
> +	memset(buf, 0, size);
> +
> +	bptr = &els->els_request_payload;
> +	if (sli4->sgl_pre_registered) {
> +		els->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_REQ_WQE_XBL;
> +
> +		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_DBDE;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +	} else {
> +		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_XBL;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +				    ((2 * sizeof(struct sli4_sge)) &
> +				     SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
> +		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
> +	}
> +
> +	els->els_request_payload_length = cpu_to_le32(req_len);
> +	els->max_response_payload_length = cpu_to_le32(max_rsp_len);
> +
> +	els->xri_tag = cpu_to_le16(xri);
> +	els->timer = timeout;
> +	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +
> +	els->command = SLI4_WQE_ELS_REQUEST64;
> +
> +	els->request_tag = cpu_to_le16(tag);
> +
> +	if (hlm) {
> +		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_HLM;
> +		els->remote_id_dword = cpu_to_le32(rnode_fcid & 0x00ffffff);
> +	}
> +
> +	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_IOD;
> +
> +	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_QOSD;
> +
> +	/* figure out the ELS_ID value from the request buffer */
> +
> +	switch (req_type) {
> +	case ELS_LOGO:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_LOGO << SLI4_REQ_WQE_ELSID_SHFT;
> +		if (rnodeattached) {
> +			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
> +					 SLI4_REQ_WQE_CT_SHFT);
> +			els->context_tag = cpu_to_le16(rnodeindicator);
> +		} else {
> +			els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +			els->context_tag =
> +				cpu_to_le16(sportindicator);
> +		}
> +		if (rnode_fcid == FC_FID_FLOGI)
> +			is_fabric = true;
> +		break;
> +	case ELS_FDISC:
> +		if (rnode_fcid == FC_FID_FLOGI)
> +			is_fabric = true;
> +		if (sport_fcid == 0) {
> +			els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_FDISC << SLI4_REQ_WQE_ELSID_SHFT;
> +			is_fabric = true;
> +		} else {
> +			els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
> +		}
> +		els->ct_byte |= (SLI4_GENERIC_CONTEXT_VPI <<
> +				 SLI4_REQ_WQE_CT_SHFT);
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
> +		break;
> +	case ELS_FLOGI:
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		/*
> +		 * Set SP here ... we haven't done a REG_VPI yet
> +		 * need to maybe not set this when we have
> +		 * completed VFI/VPI registrations ...
> +		 *
> +		 * Use the FC_ID of the SPORT if it has been allocated,
> +		 * otherwise use an S_ID of zero.
> +		 */
> +		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
> +		if (sport_fcid != U32_MAX)
> +			els->sid_sp_dword |= cpu_to_le32(sport_fcid);
> +		break;
> +	case ELS_PLOGI:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_PLOGI << SLI4_REQ_WQE_ELSID_SHFT;
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		break;
> +	case ELS_SCR:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		break;
> +	default:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
> +		if (rnodeattached) {
> +			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
> +					 SLI4_REQ_WQE_CT_SHFT);
> +			els->context_tag = cpu_to_le16(sportindicator);
> +		} else {
> +			els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +			els->context_tag =
> +				cpu_to_le16(sportindicator);
> +		}
> +		break;
> +	}
> +
> +	if (is_fabric)
> +		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_FABRIC;
> +	else
> +		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_NON_FABRIC;
> +
> +	els->cq_id = cpu_to_le16(cq_id);
> +
> +	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) !=
> +					SLI4_GENERIC_CONTEXT_RPI)
> +		els->remote_id_dword = cpu_to_le32(rnode_fcid);
> +
> +	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) ==
> +					SLI4_GENERIC_CONTEXT_VPI)
> +		els->temporary_rpi = cpu_to_le16(rnodeindicator);
> +
> +	return EFC_SUCCESS;
> +}
> +
You seem to have given up using EFC_SUCCESS / EFC_FAIL for the next few
functions.
Please be consistent here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 06/32] elx: libefc_sli: bmbx routines and SLI config commands
  2019-12-20 22:36 ` [PATCH v2 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
@ 2020-01-08  8:05   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  8:05 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds routines to create mailbox commands used during
> adapter initialization and adds APIs to issue mailbox commands to the
> adapter through the bootstrap mailbox register.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc_sli/sli4.c | 1229 +++++++++++++++++++++++++++++++++++-
>  drivers/scsi/elx/libefc_sli/sli4.h |    2 +
>  2 files changed, 1230 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 2ebe0235bc9e..3cdabb917df6 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -942,7 +942,6 @@ static int sli_cmd_cq_set_create(struct sli4 *sli4,
>  	u16 dw6w1_flags = 0;
>  	__le32 req_len;
>  
> -
>  	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
>  	switch (n_cqe) {
>  	case 256:
> @@ -3297,3 +3296,1231 @@ sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
>  
>  	return rc;
>  }
> +
> +/* Wait for the bootstrap mailbox to report "ready" */
> +static int
> +sli_bmbx_wait(struct sli4 *sli4, u32 msec)
> +{
> +	u32 val = 0;
> +
> +	do {
> +		mdelay(1);	/* 1 ms */
> +		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
> +		msec--;
> +	} while (msec && !(val & SLI4_BMBX_RDY));
> +
> +	val = (!(val & SLI4_BMBX_RDY));
> +	return val;
> +}
> +
> +/* Write bootstrap mailbox */
> +static int
> +sli_bmbx_write(struct sli4 *sli4)
> +{
> +	u32 val = 0;
> +
> +	/* write buffer location to bootstrap mailbox register */
> +	val = SLI4_BMBX_WRITE_HI(sli4->bmbx.phys);
> +	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
> +
> +	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
> +		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
> +		return -1;
> +	}

EFC_FAIL?

> +	val = SLI4_BMBX_WRITE_LO(sli4->bmbx.phys);
> +	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
> +
> +	/* wait for SLI Port to set ready bit */
> +	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
> +}
> +
> +/* Submit a command to the bootstrap mailbox and check the status */
> +int
> +sli_bmbx_command(struct sli4 *sli4)
> +{
> +	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
> +
> +	if (sli_fw_error_status(sli4) > 0) {
> +		efc_log_crit(sli4, "Chip is in an error state -Mailbox command rejected");
> +		efc_log_crit(sli4, " status=%#x error1=%#x error2=%#x\n",
> +			sli_reg_read_status(sli4),
> +			sli_reg_read_err1(sli4),
> +			sli_reg_read_err2(sli4));
> +		return -1;
> +	}
> +
Same here.

> +	if (sli_bmbx_write(sli4)) {
> +		efc_log_crit(sli4, "bootstrap mailbox write fail phys=%p reg=%#x\n",
> +			(void *)sli4->bmbx.phys,
> +			readl(sli4->reg[0] + SLI4_BMBX_REG));
> +		return -1;
> +	}
> +
And here.

> +	/* check completion queue entry status */
> +	if (le32_to_cpu(((struct sli4_mcqe *)cqe)->dw3_flags) &
> +	    SLI4_MCQE_VALID) {
> +		return sli_cqe_mq(sli4, cqe);
> +	}
> +	efc_log_crit(sli4, "invalid or wrong type\n");
> +	return -1;
> +}
> +
> +/* Write a CONFIG_LINK command to the provided buffer */
> +int
> +sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_cmd_config_link *config_link = buf;
> +
> +	memset(buf, 0, size);
> +
> +	config_link->hdr.command = MBX_CMD_CONFIG_LINK;
> +
> +	/* Port interprets zero in a field as "use default value" */
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a DOWN_LINK command to the provided buffer */
> +int
> +sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_mbox_command_header *hdr = buf;
> +
> +	memset(buf, 0, size);
> +
> +	hdr->command = MBX_CMD_DOWN_LINK;
> +
> +	/* Port interprets zero in a field as "use default value" */
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a DUMP Type 4 command to the provided buffer */
> +int
> +sli_cmd_dump_type4(struct sli4 *sli4, void *buf,
> +		   size_t size, u16 wki)
> +{
> +	struct sli4_cmd_dump4 *cmd = buf;
> +
> +	memset(buf, 0, size);
> +
> +	cmd->hdr.command = MBX_CMD_DUMP;
> +	cmd->type_dword = cpu_to_le32(0x4);
> +	cmd->wki_selection = cpu_to_le16(wki);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a COMMON_READ_TRANSCEIVER_DATA command */
> +int
> +sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf,
> +				     size_t size, u32 page_num,
> +				     struct efc_dma *dma)
> +{
> +	struct sli4_rqst_cmn_read_transceiver_data *req = NULL;
> +	u32 psize;
> +
> +	if (!dma)
> +		psize = SLI_CONFIG_PYLD_LENGTH(cmn_read_transceiver_data);
> +	else
> +		psize = dma->size;
> +
> +	req = sli_config_cmd_init(sli4, buf, size,
> +					    psize, dma);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&req->hdr, CMN_READ_TRANS_DATA, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_read_transceiver_data));
> +
> +	req->page_number = cpu_to_le32(page_num);
> +	req->port = cpu_to_le32(sli4->port_number);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a READ_LINK_STAT command to the provided buffer */
> +int
> +sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size,
> +			u8 req_ext_counters,
> +			u8 clear_overflow_flags,
> +			u8 clear_all_counters)
> +{
> +	struct sli4_cmd_read_link_stats *cmd = buf;
> +	u32 flags;
> +
> +	memset(buf, 0, size);
> +
> +	cmd->hdr.command = MBX_CMD_READ_LNK_STAT;
> +
> +	flags = 0;
> +	if (req_ext_counters)
> +		flags |= SLI4_READ_LNKSTAT_REC;
> +	if (clear_all_counters)
> +		flags |= SLI4_READ_LNKSTAT_CLRC;
> +	if (clear_overflow_flags)
> +		flags |= SLI4_READ_LNKSTAT_CLOF;
> +
> +	cmd->dw1_flags = cpu_to_le32(flags);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a READ_STATUS command to the provided buffer */
> +int
> +sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size,
> +		    u8 clear_counters)
> +{
> +	struct sli4_cmd_read_status *cmd = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	cmd->hdr.command = MBX_CMD_READ_STATUS;
> +	if (clear_counters)
> +		flags |= SLI4_READSTATUS_CLEAR_COUNTERS;
> +	else
> +		flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS;
> +
> +	cmd->dw1_flags = cpu_to_le32(flags);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an INIT_LINK command to the provided buffer */
> +int
> +sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size,
> +		  u32 speed, u8 reset_alpa)
> +{
> +	struct sli4_cmd_init_link *init_link = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	init_link->hdr.command = MBX_CMD_INIT_LINK;
> +
> +	init_link->sel_reset_al_pa_dword =
> +				cpu_to_le32(reset_alpa);
> +	flags &= ~SLI4_INIT_LINK_FLAG_LOOPBACK;
> +
> +	init_link->link_speed_sel_code = cpu_to_le32(speed);
> +	switch (speed) {
> +	case FC_LINK_SPEED_1G:
> +	case FC_LINK_SPEED_2G:
> +	case FC_LINK_SPEED_4G:
> +	case FC_LINK_SPEED_8G:
> +	case FC_LINK_SPEED_16G:
> +	case FC_LINK_SPEED_32G:
> +		flags |= SLI4_INIT_LINK_FLAG_FIXED_SPEED;
> +		break;

I was under the impression that 64G FC is already underway, is it not?

> +	case FC_LINK_SPEED_10G:
> +		efc_log_info(sli4, "unsupported FC speed %d\n", speed);
> +		init_link->flags0 = cpu_to_le32(flags);
> +		return EFC_FAIL;
> +	}
> +
> +	switch (sli4->topology) {
> +	case SLI4_READ_CFG_TOPO_FC:
> +		/* Attempt P2P but failover to FC-AL */
> +		flags |= SLI4_INIT_LINK_FLAG_EN_TOPO_FAILOVER;
> +
> +		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
> +		flags |= (SLI4_INIT_LINK_F_P2P_FAIL_OVER << 1);
> +		break;
> +	case SLI4_READ_CFG_TOPO_FC_AL:
> +		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
> +		flags |= (SLI4_INIT_LINK_F_FCAL_ONLY << 1);
> +		if (speed == FC_LINK_SPEED_16G ||
> +		    speed == FC_LINK_SPEED_32G) {
> +			efc_log_info(sli4, "unsupported FC-AL speed %d\n",
> +				speed);
> +			init_link->flags0 = cpu_to_le32(flags);
> +			return EFC_FAIL;
> +		}
> +		break;
> +	case SLI4_READ_CFG_TOPO_FC_DA:
> +		flags &= ~SLI4_INIT_LINK_FLAG_TOPOLOGY;
> +		flags |= (FC_TOPOLOGY_P2P << 1);
> +		break;
> +	default:
> +
> +		efc_log_info(sli4, "unsupported topology %#x\n",
> +			sli4->topology);
> +
> +		init_link->flags0 = cpu_to_le32(flags);
> +		return EFC_FAIL;
> +	}
> +
> +	flags &= (~SLI4_INIT_LINK_FLAG_UNFAIR);
> +	flags &= (~SLI4_INIT_LINK_FLAG_SKIP_LIRP_LILP);
> +	flags &= (~SLI4_INIT_LINK_FLAG_LOOP_VALIDITY);
> +	flags &= (~SLI4_INIT_LINK_FLAG_SKIP_LISA);
> +	flags &= (~SLI4_INIT_LINK_FLAG_SEL_HIGHTEST_AL_PA);
> +	init_link->flags0 = cpu_to_le32(flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an INIT_VFI command to the provided buffer */
> +int
> +sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 vfi, u16 fcfi, u16 vpi)
> +{
> +	struct sli4_cmd_init_vfi *init_vfi = buf;
> +	u16 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	init_vfi->hdr.command = MBX_CMD_INIT_VFI;
> +
> +	init_vfi->vfi = cpu_to_le16(vfi);
> +	init_vfi->fcfi = cpu_to_le16(fcfi);
> +
> +	/*
> +	 * If the VPI is valid, initialize it at the same time as
> +	 * the VFI
> +	 */
> +	if (vpi != U16_MAX) {
> +		flags |= SLI4_INIT_VFI_FLAG_VP;
> +		init_vfi->flags0_word = cpu_to_le16(flags);
> +		init_vfi->vpi = cpu_to_le16(vpi);
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an INIT_VPI command to the provided buffer */
> +int
> +sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 vpi, u16 vfi)
> +{
> +	struct sli4_cmd_init_vpi *init_vpi = buf;
> +
> +	memset(buf, 0, size);
> +
> +	init_vpi->hdr.command = MBX_CMD_INIT_VPI;
> +	init_vpi->vpi = cpu_to_le16(vpi);
> +	init_vpi->vfi = cpu_to_le16(vfi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 xri_base, u16 xri_count)
> +{
> +	struct sli4_cmd_post_xri *post_xri = buf;
> +	u16 xri_count_flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	post_xri->hdr.command = MBX_CMD_POST_XRI;
> +	post_xri->xri_base = cpu_to_le16(xri_base);
> +	xri_count_flags = (xri_count & SLI4_POST_XRI_COUNT);
> +	xri_count_flags |= SLI4_POST_XRI_FLAG_ENX;
> +	xri_count_flags |= SLI4_POST_XRI_FLAG_VAL;
> +	post_xri->xri_count_flags = cpu_to_le16(xri_count_flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size,
> +		    u8 num_xri)
> +{
> +	struct sli4_cmd_release_xri *release_xri = buf;
> +
> +	memset(buf, 0, size);
> +
> +	release_xri->hdr.command = MBX_CMD_RELEASE_XRI;
> +	release_xri->xri_count_word = cpu_to_le16(num_xri &
> +					SLI4_RELEASE_XRI_COUNT);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_read_config(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_cmd_read_config *read_config = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_config->hdr.command = MBX_CMD_READ_CONFIG;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_cmd_read_nvparms *read_nvparms = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_nvparms->hdr.command = MBX_CMD_READ_NVPARMS;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size,
> +		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
> +		u32 preferred_d_id)
> +{
> +	struct sli4_cmd_write_nvparms *write_nvparms = buf;
> +
> +	memset(buf, 0, size);
> +
> +	write_nvparms->hdr.command = MBX_CMD_WRITE_NVPARMS;
> +	memcpy(write_nvparms->wwpn, wwpn, 8);
> +	memcpy(write_nvparms->wwnn, wwnn, 8);
> +
> +	write_nvparms->hard_alpa_d_id =
> +			cpu_to_le32((preferred_d_id << 8) | hard_alpa);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_read_rev(struct sli4 *sli4, void *buf, size_t size,
> +		 struct efc_dma *vpd)
> +{
> +	struct sli4_cmd_read_rev *read_rev = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_rev->hdr.command = MBX_CMD_READ_REV;
> +
> +	if (vpd && vpd->size) {
> +		read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD);
> +
> +		read_rev->available_length_dword =
> +			cpu_to_le32(vpd->size &
> +				    SLI4_READ_REV_AVAILABLE_LENGTH);
> +
> +		read_rev->hostbuf.low =
> +				cpu_to_le32(lower_32_bits(vpd->phys));
> +		read_rev->hostbuf.high =
> +				cpu_to_le32(upper_32_bits(vpd->phys));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
> +		     struct efc_dma *dma,
> +		     u16 vpi)
> +{
> +	struct sli4_cmd_read_sparm64 *read_sparm64 = buf;
> +
> +	memset(buf, 0, size);
> +
> +	if (vpi == SLI4_READ_SPARM64_VPI_SPECIAL) {
> +		efc_log_info(sli4, "special VPI not supported!!!\n");
> +		return -1;
> +	}

EFC_FAIL

> +
> +	if (!dma || !dma->phys) {
> +		efc_log_info(sli4, "bad DMA buffer\n");
> +		return -1;
> +	}
> +
Same here.

> +	read_sparm64->hdr.command = MBX_CMD_READ_SPARM64;
> +
> +	read_sparm64->bde_64.bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
> +	read_sparm64->bde_64.u.data.low =
> +			cpu_to_le32(lower_32_bits(dma->phys));
> +	read_sparm64->bde_64.u.data.high =
> +			cpu_to_le32(upper_32_bits(dma->phys));
> +
> +	read_sparm64->vpi = cpu_to_le16(vpi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
> +		      struct efc_dma *dma)
> +{
> +	struct sli4_cmd_read_topology *read_topo = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_topo->hdr.command = MBX_CMD_READ_TOPOLOGY;
> +
> +	if (dma && dma->size) {
> +		if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) {
> +			efc_log_info(sli4, "loop map buffer too small %jd\n",
> +				dma->size);
> +			return 0;
> +		}

Ah. So this is not an error?
And if this function can't fail, why not make it a void function?

> +
> +		memset(dma->virt, 0, dma->size);
> +
> +		read_topo->bde_loop_map.bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
> +		read_topo->bde_loop_map.u.data.low  =
> +			cpu_to_le32(lower_32_bits(dma->phys));
> +		read_topo->bde_loop_map.u.data.high =
> +			cpu_to_le32(upper_32_bits(dma->phys));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 index,
> +		 struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
> +{
> +	struct sli4_cmd_reg_fcfi *reg_fcfi = buf;
> +	u32 i;
> +
> +	memset(buf, 0, size);
> +
> +	reg_fcfi->hdr.command = MBX_CMD_REG_FCFI;
> +
> +	reg_fcfi->fcf_index = cpu_to_le16(index);
> +
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		switch (i) {
> +		case 0:
> +			reg_fcfi->rqid0 = rq_cfg[0].rq_id;
> +			break;
> +		case 1:
> +			reg_fcfi->rqid1 = rq_cfg[1].rq_id;
> +			break;
> +		case 2:
> +			reg_fcfi->rqid2 = rq_cfg[2].rq_id;
> +			break;
> +		case 3:
> +			reg_fcfi->rqid3 = rq_cfg[3].rq_id;
> +			break;
> +		}
> +		reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
> +		reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
> +		reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
> +		reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size,
> +		     u8 mode, u16 fcf_index,
> +		     u8 rq_selection_policy, u8 mrq_bit_mask,
> +		     u16 num_mrqs,
> +		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
> +{
> +	struct sli4_cmd_reg_fcfi_mrq *reg_fcfi_mrq = buf;
> +	u32 i;
> +	u32 mrq_flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	reg_fcfi_mrq->hdr.command = MBX_CMD_REG_FCFI_MRQ;
> +	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) {
> +		reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index);
> +		goto done;
> +	}
> +
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
> +		reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
> +		reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
> +		reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match;
> +
> +		switch (i) {
> +		case 3:
> +			reg_fcfi_mrq->rqid3 = rq_cfg[i].rq_id;
> +			break;
> +		case 2:
> +			reg_fcfi_mrq->rqid2 = rq_cfg[i].rq_id;
> +			break;
> +		case 1:
> +			reg_fcfi_mrq->rqid1 = rq_cfg[i].rq_id;
> +			break;
> +		case 0:
> +			reg_fcfi_mrq->rqid0 = rq_cfg[i].rq_id;
> +			break;
> +		}
> +	}
> +
> +	mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS;
> +	mrq_flags |= (mrq_bit_mask << 8);
> +	mrq_flags |= (rq_selection_policy << 12);
> +	reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags);
> +done:
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size,
> +		u32 nport_id, u16 rpi, u16 vpi,
> +		struct efc_dma *dma, u8 update,
> +		u8 enable_t10_pi)
> +{
> +	struct sli4_cmd_reg_rpi *reg_rpi = buf;
> +	u32 rportid_flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	reg_rpi->hdr.command = MBX_CMD_REG_RPI;
> +
> +	reg_rpi->rpi = cpu_to_le16(rpi);
> +
> +	rportid_flags = nport_id & SLI4_REGRPI_REMOTE_N_PORTID;
> +
> +	if (update)
> +		rportid_flags |= SLI4_REGRPI_UPD;
> +	else
> +		rportid_flags &= ~SLI4_REGRPI_UPD;
> +
> +	if (enable_t10_pi)
> +		rportid_flags |= SLI4_REGRPI_ETOW;
> +	else
> +		rportid_flags &= ~SLI4_REGRPI_ETOW;
> +
> +	reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags);
> +
> +	reg_rpi->bde_64.bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
> +	reg_rpi->bde_64.u.data.low  =
> +		cpu_to_le32(lower_32_bits(dma->phys));
> +	reg_rpi->bde_64.u.data.high =
> +		cpu_to_le32(upper_32_bits(dma->phys));
> +
> +	reg_rpi->vpi = cpu_to_le16(vpi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
> +		u16 vfi, u16 fcfi, struct efc_dma dma,
> +		u16 vpi, __be64 sli_wwpn, u32 fc_id)
> +{
> +	struct sli4_cmd_reg_vfi *reg_vfi = buf;
> +
> +	if (!sli4 || !buf)
> +		return 0;
> +

EFC_SUCCESS?
And why is this not an error?

> +	memset(buf, 0, size);
> +
> +	reg_vfi->hdr.command = MBX_CMD_REG_VFI;
> +
> +	reg_vfi->vfi = cpu_to_le16(vfi);
> +
> +	reg_vfi->fcfi = cpu_to_le16(fcfi);
> +
> +	reg_vfi->sparm.bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
> +	reg_vfi->sparm.u.data.low  =
> +		cpu_to_le32(lower_32_bits(dma.phys));
> +	reg_vfi->sparm.u.data.high =
> +		cpu_to_le32(upper_32_bits(dma.phys));
> +
> +	reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov);
> +	reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov);
> +
> +	reg_vfi->dw0w1_flags |= cpu_to_le16(SLI4_REGVFI_VP);
> +	reg_vfi->vpi = cpu_to_le16(vpi);
> +	memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn));
> +	reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size,
> +		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
> +		bool update)
> +{
> +	struct sli4_cmd_reg_vpi *reg_vpi = buf;
> +	u32 flags = 0;
> +
> +	if (!sli4 || !buf)
> +		return 0;
> +

Same here. Why is this not returning EFC_SUCCESS?
And why is it not an error?

> +	memset(buf, 0, size);
> +
> +	reg_vpi->hdr.command = MBX_CMD_REG_VPI;
> +
> +	flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID);
> +	if (update)
> +		flags |= SLI4_REGVPI_UPD;
> +	else
> +		flags &= ~SLI4_REGVPI_UPD;
> +
> +	reg_vpi->dw2_lportid_flags = cpu_to_le32(flags);
> +	memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn));
> +	reg_vpi->vpi = cpu_to_le16(vpi);
> +	reg_vpi->vfi = cpu_to_le16(vfi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_request_features(struct sli4 *sli4, void *buf, size_t size,
> +			 u32 features_mask, bool query)
> +{
> +	struct sli4_cmd_request_features *req_features = buf;
> +

And why does this function _not_ have the check?

> +	memset(buf, 0, size);
> +
> +	req_features->hdr.command = MBX_CMD_RQST_FEATURES;
> +
> +	if (query)
> +		req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY);
> +
> +	req_features->cmd = cpu_to_le32(features_mask);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size,
> +		   u16 indicator)
> +{
> +	struct sli4_cmd_unreg_fcfi *unreg_fcfi = buf;
> +
> +	if (!sli4 || !buf)
> +		return 0;
> +
> +	memset(buf, 0, size);
> +
> +	unreg_fcfi->hdr.command = MBX_CMD_UNREG_FCFI;
> +
> +	unreg_fcfi->fcfi = cpu_to_le16(indicator);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size,
> +		  u16 indicator,
> +		  enum sli4_resource which, u32 fc_id)
> +{
> +	struct sli4_cmd_unreg_rpi *unreg_rpi = buf;
> +	u32 flags = 0;
> +

Same here.
Why is there no check?

[ .. ]
> +
> +int
> +sli_cqe_mq(struct sli4 *sli4, void *buf)
> +{
> +	struct sli4_mcqe *mcqe = buf;
> +	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
> +	/*
> +	 * Firmware can split mbx completions into two MCQEs: first with only
> +	 * the "consumed" bit set and a second with the "complete" bit set.
> +	 * Thus, ignore MCQE unless "complete" is set.
> +	 */
> +	if (!(dwflags & SLI4_MCQE_COMPLETED))
> +		return -2;
> +

Now you're getting creative.
-2 ?

What happened to EFC_SUCCESS/EFC_FAIL?
If this is deliberate, please add another EFC_XXX define and document
why this is not the standard return value.
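For illustration, a minimal sketch of what such a define could look like
(the name below is an assumption, not something already in the driver):

/* MCQE only partially consumed; the completion status arrives in a
 * later MCQE with the "complete" bit set.
 */
#define EFC_MCQE_NOT_COMPLETED	(-2)

	if (!(dwflags & SLI4_MCQE_COMPLETED))
		return EFC_MCQE_NOT_COMPLETED;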

> +	if (le16_to_cpu(mcqe->completion_status)) {
> +		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
> +			le16_to_cpu(mcqe->completion_status),
> +			      le16_to_cpu(mcqe->extended_status),
> +			      (dwflags & SLI4_MCQE_CONSUMED),
> +			      (dwflags & SLI4_MCQE_COMPLETED),
> +			      (dwflags & SLI4_MCQE_AE),
> +			      (dwflags & SLI4_MCQE_VALID));
> +	}
> +
> +	return le16_to_cpu(mcqe->completion_status);
> +}
> +
> +int
> +sli_cqe_async(struct sli4 *sli4, void *buf)
> +{
> +	struct sli4_acqe *acqe = buf;
> +	int rc = -1;
> +
> +	if (!buf) {
> +		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
> +		return -1;
> +	}
> +

EFC_FAIL...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library
  2019-12-20 22:36 ` [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
@ 2020-01-08  8:22   ` Hannes Reinecke
  2020-01-09  1:29     ` James Smart
  0 siblings, 1 reply; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-08  8:22 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds APIS to initialize the library, initialize
> the SLI Port, reset firmware, terminate the SLI Port, and
> terminate the library.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc_sli/sli4.c | 1222 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc_sli/sli4.h |  552 +++++++++++++++-
>  2 files changed, 1773 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 3cdabb917df6..e2bea34b445a 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -4524,3 +4524,1225 @@ sli_cqe_async(struct sli4 *sli4, void *buf)
>  
>  	return rc;
>  }
> +
> +/* Determine if the chip FW is in a ready state */
> +int
> +sli_fw_ready(struct sli4 *sli4)
> +{
> +	u32 val;
> +	/*
> +	 * Is firmware ready for operation? Check needed depends on IF_TYPE
> +	 */
> +	val = sli_reg_read_status(sli4);
> +	return (val & SLI4_PORT_STATUS_RDY) ? 1 : 0;
> +}
> +

boolean?

> +static int
> +sli_sliport_reset(struct sli4 *sli4)
> +{
> +	u32 iter, val;
> +	int rc = -1;
> +
> +	val = SLI4_PORT_CTRL_IP;
> +	/* Initialize port, endian */
> +	writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
> +
> +	for (iter = 0; iter < 3000; iter++) {
> +		mdelay(10);	/* 10 ms */
> +		if (sli_fw_ready(sli4) == 1) {
> +			rc = 0;
> +			break;
> +		}
> +	}
> +
> +	if (rc != 0)
> +		efc_log_crit(sli4, "port failed to become ready after initialization\n");
> +
> +	return rc;
> +}
> +

Same here?

> +static bool
> +sli_wait_for_fw_ready(struct sli4 *sli4, u32 timeout_ms)
> +{
> +	u32 iter = timeout_ms / (SLI4_INIT_PORT_DELAY_US / 1000);
> +	bool ready = false;
> +
> +	do {
> +		iter--;
> +		mdelay(10);	/* 10 ms */
> +		if (sli_fw_ready(sli4) == 1)
> +			ready = true;
> +	} while (!ready && (iter > 0));
> +
> +	return ready;
> +}
> +

See? It doesn't even hurt ...

> +static int
> +sli_fw_init(struct sli4 *sli4)
> +{
> +	bool ready;
> +
> +	/*
> +	 * Is firmware ready for operation?
> +	 */
> +	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
> +	if (!ready) {
> +		efc_log_crit(sli4, "FW status is NOT ready\n");
> +		return -1;
> +	}
> +
> +	/*
> +	 * Reset port to a known state
> +	 */
> +	if (sli_sliport_reset(sli4))
> +		return -1;
> +
> +	return 0;
> +}
> +

boolean?

[ .. ]
> +int
> +sli_init(struct sli4 *sli4)
> +{
> +	if (sli4->has_extents) {
> +		efc_log_info(sli4, "XXX need to implement extent allocation\n");
> +		return -1;
> +	}
> +
Ho-hum.
Maybe 'extent allocation not implemented' ?

[ .. ]
> +int
> +sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
> +		   u32 *rid, u32 *index)
> +{
> +	int rc = 0;
> +	u32 size;
> +	u32 extent_idx;
> +	u32 item_idx;
> +	u32 position;
> +
> +	*rid = U32_MAX;
> +	*index = U32_MAX;
> +
> +	switch (rtype) {
> +	case SLI_RSRC_VFI:
> +	case SLI_RSRC_VPI:
> +	case SLI_RSRC_RPI:
> +	case SLI_RSRC_XRI:
> +		position =
> +		find_first_zero_bit(sli4->extent[rtype].use_map,
> +				    sli4->extent[rtype].map_size);
> +		if (position >= sli4->extent[rtype].map_size) {
> +			efc_log_err(sli4, "out of resource %d (alloc=%d)\n",
> +				    rtype, sli4->extent[rtype].n_alloc);
> +			rc = -1;
> +			break;
> +		}
> +		set_bit(position, sli4->extent[rtype].use_map);
> +		*index = position;
> +
> +		size = sli4->extent[rtype].size;
> +
> +		extent_idx = *index / size;
> +		item_idx   = *index % size;
> +
> +		*rid = sli4->extent[rtype].base[extent_idx] + item_idx;
> +
> +		sli4->extent[rtype].n_alloc++;
> +		break;
> +	default:
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +

Didn't you mention extent allocation is not implemented?
So is this a different type of extent?

> +int
> +sli_resource_free(struct sli4 *sli4,
> +		  enum sli4_resource rtype, u32 rid)
> +{
> +	int rc = -1;
> +	u32 x;
> +	u32 size, *base;
> +
> +	switch (rtype) {
> +	case SLI_RSRC_VFI:
> +	case SLI_RSRC_VPI:
> +	case SLI_RSRC_RPI:
> +	case SLI_RSRC_XRI:
> +		/*
> +		 * Figure out which extent contains the resource ID. I.e. find
> +		 * the extent such that
> +		 *   extent->base <= resource ID < extent->base + extent->size
> +		 */
> +		base = sli4->extent[rtype].base;
> +		size = sli4->extent[rtype].size;
> +
> +		/*
> +		 * In the case of FW reset, this may be cleared
> +		 * but the force_free path will still attempt to
> +		 * free the resource. Prevent a NULL pointer access.
> +		 */
> +		if (base) {
> +			for (x = 0; x < sli4->extent[rtype].number;
> +			     x++) {
> +				if (rid >= base[x] &&
> +				    (rid < (base[x] + size))) {
> +					rid -= base[x];
> +					clear_bit((x * size) + rid,
> +						  sli4->extent[rtype].use_map);
> +					rc = 0;
> +					break;
> +				}
> +			}
> +		}
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype)
> +{
> +	int rc = -1;
> +	u32 i;
> +
> +	switch (rtype) {
> +	case SLI_RSRC_VFI:
> +	case SLI_RSRC_VPI:
> +	case SLI_RSRC_RPI:
> +	case SLI_RSRC_XRI:
> +		for (i = 0; i < sli4->extent[rtype].map_size; i++)
> +			clear_bit(i, sli4->extent[rtype].use_map);
> +		rc = 0;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +int sli_raise_ue(struct sli4 *sli4, u8 dump)
> +{
> +	u32 val = 0;
> +#define FDD 2

Oh, come on.
You have defines for everything but the kitchen sink.
So why do you have to define this one inline?

> +	if (dump == FDD) {
> +		val = SLI4_PORT_CTRL_FDD | SLI4_PORT_CTRL_IP;
> +		writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
> +	} else {
> +		val = SLI4_PHYDEV_CTRL_FRST;
> +
> +		if (dump == 1)
> +			val |= SLI4_PHYDEV_CTRL_DD;
> +		writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
> +	}
> +
> +	return 0;
> +}
> +
> +int sli_dump_is_ready(struct sli4 *sli4)
> +{
> +	int rc = 0;
> +	u32 port_val;
> +	u32 bmbx_val;
> +
> +	/*
> +	 * Ensure that the port is ready AND the mailbox is
> +	 * ready before signaling that the dump is ready to go.
> +	 */
> +	port_val = sli_reg_read_status(sli4);
> +	bmbx_val = readl(sli4->reg[0] + SLI4_BMBX_REG);
> +
> +	if ((bmbx_val & SLI4_BMBX_RDY) &&
> +	    (port_val & SLI4_PORT_STATUS_RDY)) {
> +		if (port_val & SLI4_PORT_STATUS_DIP)
> +			rc = 1;
> +		else if (port_val & SLI4_PORT_STATUS_FDP)
> +			rc = 2;
> +	}
> +
> +	return rc;
> +}
> +

Please use defines for the return values here.
One has no idea why '1' or '2' is returned here.
At the very least some documentation.
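For example, a hedged sketch (the names are assumptions, derived from the
DIP/FDP status bits tested above):

#define SLI4_DUMP_NOT_READY	0
#define SLI4_DUMP_READY		1	/* SLI4_PORT_STATUS_DIP set */
#define SLI4_FUNC_DUMP_READY	2	/* SLI4_PORT_STATUS_FDP set */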

> +int sli_dump_is_present(struct sli4 *sli4)
> +{
> +	u32 val;
> +	bool ready;
> +
> +	/* If the chip is not ready, then there cannot be a dump */
> +	ready = sli_wait_for_fw_ready(sli4, SLI4_INIT_PORT_DELAY_US);
> +	if (!ready)
> +		return 0;
> +
> +	val = sli_reg_read_status(sli4);
> +	if (val == U32_MAX) {
> +		efc_log_err(sli4, "error reading SLIPORT_STATUS\n");
> +		return -1;
> +	} else {
> +		return (val & SLI4_PORT_STATUS_DIP) ? 1 : 0;
> +	}
> +}
> +
> +int sli_reset_required(struct sli4 *sli4)
> +{
> +	u32 val;
> +
> +	val = sli_reg_read_status(sli4);
> +	if (val == U32_MAX) {
> +		efc_log_err(sli4, "error reading SLIPORT_STATUS\n");
> +		return -1;
> +	} else {
> +		return (val & SLI4_PORT_STATUS_RN) ? 1 : 0;
> +	}
> +}
> +
> +int
> +sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, size_t size,
> +		       u16 xri,
> +		       u32 xri_count, struct efc_dma *page0[],
> +		       struct efc_dma *page1[], struct efc_dma *dma)
> +{
> +	struct sli4_rqst_post_sgl_pages *post = NULL;
> +	u32 i;
> +	__le32 req_len;
> +
> +	post = sli_config_cmd_init(sli4, buf, size,
> +				   SLI_CONFIG_PYLD_LENGTH(post_sgl_pages),
> +				   dma);
> +	if (!post)
> +		return EFC_FAIL;
> +
> +	/* payload size calculation */
> +	/* 4 = xri_start + xri_count */
> +	/* xri_count = # of XRI's registered */
> +	/* sizeof(uint64_t) = physical address size */
> +	/* 2 = # of physical addresses per page set */
> +	req_len = cpu_to_le32(4 + (xri_count * (sizeof(uint64_t) * 2)));
> +	sli_cmd_fill_hdr(&post->hdr, SLI4_OPC_POST_SGL_PAGES, SLI4_SUBSYSTEM_FC,
> +			 CMD_V0, req_len);
> +	post->xri_start = cpu_to_le16(xri);
> +	post->xri_count = cpu_to_le16(xri_count);
> +
> +	for (i = 0; i < xri_count; i++) {
> +		post->page_set[i].page0_low  =
> +				cpu_to_le32(lower_32_bits(page0[i]->phys));
> +		post->page_set[i].page0_high =
> +				cpu_to_le32(upper_32_bits(page0[i]->phys));
> +	}
> +
> +	if (page1) {
> +		for (i = 0; i < xri_count; i++) {
> +			post->page_set[i].page1_low =
> +				cpu_to_le32(lower_32_bits(page1[i]->phys));
> +			post->page_set[i].page1_high =
> +				cpu_to_le32(upper_32_bits(page1[i]->phys));
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +

EFC_SUCCESS is back!
I'd already missed them; none of the previous functions in this patch
use them.
Please fix.

[ .. ]
> +extern int
> +sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size,
> +		  u16 index, u32 which);
> +extern int
> +sli_cmd_common_nop(struct sli4 *sli4, void *buf, size_t size,
> +		   uint64_t context);
> +extern int
> +sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
> +					size_t size, u16 rtype);
> +extern int
> +sli_cmd_common_get_sli4_parameters(struct sli4 *sli4,
> +				   void *buf, size_t size);
> +extern int
> +sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size,
> +			    u16 noc, u16 eof, u32 desired_write_length,
> +		u32 offset, char *object_name, struct efc_dma *dma);
> +extern int
> +sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
> +			     char *object_name);
> +extern int
> +sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
> +			   u32 desired_read_length, u32 offset,
> +			   char *object_name, struct efc_dma *dma);
> +extern int
> +sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
> +			  struct efc_dma *cmd, struct efc_dma *resp);
> +extern int
> +sli_cmd_common_set_dump_location(struct sli4 *sli4,
> +				 void *buf, size_t size, bool query,
> +				 bool is_buffer_list,
> +				 struct efc_dma *buffer, u8 fdb);
> +extern int
> +sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
> +			    u32 feature, u32 param_len,
> +			    void *parameter);
> +
> +int sli_cqe_mq(struct sli4 *sli4, void *buf);
> +int sli_cqe_async(struct sli4 *sli4, void *buf);
> +
> +extern int
> +sli_setup(struct sli4 *sli4, void *os, struct pci_dev  *pdev,
> +	  void __iomem *reg[]);
> +void sli_calc_max_qentries(struct sli4 *sli4);
> +int sli_init(struct sli4 *sli4);
> +int sli_reset(struct sli4 *sli4);
> +int sli_fw_reset(struct sli4 *sli4);
> +int sli_teardown(struct sli4 *sli4);
> +extern int
> +sli_callback(struct sli4 *sli4, enum sli4_callback which,
> +	     void *func, void *arg);
> +extern int
> +sli_bmbx_command(struct sli4 *sli4);
> +extern int
> +__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q,
> +		 u32 qtype, size_t size, u32 n_entries,
> +		      u32 align);
> +extern int
> +__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q);
> +extern int
> +sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq,
> +		    u32 num_eq, u32 shift, u32 delay_mult);
> +extern int
> +sli_queue_alloc(struct sli4 *sli4, u32 qtype,
> +		struct sli4_queue *q, u32 n_entries,
> +		     struct sli4_queue *assoc);
> +extern int
> +sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[],
> +		 u32 num_cqs, u32 n_entries, struct sli4_queue *eqs[]);
> +extern int
> +sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype);
> +extern int
> +sli_queue_free(struct sli4 *sli4, struct sli4_queue *q,
> +	       u32 destroy_queues, u32 free_memory);
> +extern int
> +sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
> +extern int
> +sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
> +
> +extern int
> +sli_wq_write(struct sli4 *sli4, struct sli4_queue *q,
> +	     u8 *entry);
> +extern int
> +sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
> +	     u8 *entry);
> +extern int
> +sli_rq_write(struct sli4 *sli4, struct sli4_queue *q,
> +	     u8 *entry);
> +extern int
> +sli_eq_read(struct sli4 *sli4, struct sli4_queue *q,
> +	    u8 *entry);
> +extern int
> +sli_cq_read(struct sli4 *sli4, struct sli4_queue *q,
> +	    u8 *entry);
> +extern int
> +sli_mq_read(struct sli4 *sli4, struct sli4_queue *q,
> +	    u8 *entry);
> +extern int
> +sli_queue_index(struct sli4 *sli4, struct sli4_queue *q);
> +extern int
> +_sli_queue_poke(struct sli4 *sli4, struct sli4_queue *q,
> +		u32 index, u8 *entry);
> +extern int
> +sli_queue_poke(struct sli4 *sli4, struct sli4_queue *q, u32 index,
> +	       u8 *entry);
> +extern int
> +sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
> +		   u32 *rid, u32 *index);
> +extern int
> +sli_resource_free(struct sli4 *sli4, enum sli4_resource rtype,
> +		  u32 rid);
> +extern int
> +sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype);
> +extern int
> +sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id);
> +extern int
> +sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
> +	     enum sli4_qentry *etype, u16 *q_id);
> +

I guess you can reformat those; the Linux line length limit is 80 characters,
and one really should use it ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions
  2020-01-08  7:11   ` Hannes Reinecke
@ 2020-01-09  0:59     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2020-01-09  0:59 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 1/7/2020 11:11 PM, Hannes Reinecke wrote:
> Please be consistent here wrt _SHIFT and _MASK statements.
> Either have them spelled out (as you do in this case), but then please
> change the first hunk to avoid an explicit shift.
> Or keep the style in the first hunk, and change the _MASK values here
> to use the _SHIFT values
> (ie SLI4_EQCQ_ID_HI_MASK = 0x1F << SLI4_EQCQ_ID_HI_SHIFT).
> I don't mind either way, but keep it consistent.
> 
> Cheers,
> 
> Hannes
> 

We will change these to spell out the _MASK values directly.
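As a rough illustration of that style (the numeric value below is made up,
not taken from the SLI-4 spec):

#define SLI4_EQCQ_ID_HI_MASK	0x001f0000	/* 0x1f field, spelled out */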

Thanks

-- james

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-01-08  7:24   ` Hannes Reinecke
@ 2020-01-09  1:00     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2020-01-09  1:00 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: maier, dwagner, bvanassche, Ram Vegesna, James Smart

On 1/7/2020 11:24 PM, Hannes Reinecke wrote:
> I am really not a big fan of anonymous enums, especially not if they are
> scoped for specific structures.
> Can you please avoid the use of anonymous enums, and name them according
> to the structure where they are intended to be used?
> Ideally the structure should reference named enums directly, but I do
> agree that this it not always possible or desired.
> But we should at least name them accordingly to give the developer a
> hint where these values are expected to occur.
> 
> Eg in the above case
> 
> enum sli4_sge_flags {
> 
> or similar would make the intended usage clearer.
> 

We will add names to the enums as suggested.

Thanks

-- james



^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands
  2020-01-08  7:32   ` Hannes Reinecke
@ 2020-01-09  1:03     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2020-01-09  1:03 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 1/7/2020 11:32 PM, Hannes Reinecke wrote:
> On 12/20/19 11:36 PM, James Smart wrote:
>> @@ -1995,7 +1995,7 @@ struct sli4_fc_xri_aborted_cqe {
>>   #define SLI4_ELS_REQUEST64_DIR_READ		0x1
>>   
>>   #define SLI4_ELS_REQUEST64_OTHER		0x0
>> -#define SLI4_ELS_REQUEST64_LOGO		0x1
>> +#define SLI4_ELS_REQUEST64_LOGO			0x1
>>   #define SLI4_ELS_REQUEST64_FDISC		0x2
>>   #define SLI4_ELS_REQUEST64_FLOGIN		0x3
>>   #define SLI4_ELS_REQUEST64_PLOGI		0x4
> Shouldn't this rather be merged with the previous patch?

yes - will do so


> Make this two enums, one 'enum sli4_mbx_cmd' and one 'enum sli4_mbx_status'.
> 
...
> 
> Single enum should rather be converted into a #define ..
> 
...
> Why is this an enum, and the above SLI4_INIT_LINK_F_XXX value are defines?
> Please be consistent.
> 
> And this applies throughout the remainder of the patch.
> 
> Cheers,
> 
> Hannes
> 

We will convert to multiple enums.

Thanks

-- james

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines
  2020-01-08  7:45   ` Hannes Reinecke
@ 2020-01-09  1:04     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2020-01-09  1:04 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 1/7/2020 11:45 PM, Hannes Reinecke wrote:
> Hmm. Why do you return -EFC_FAIL here, and EFC_FAIL in the two cases above?
> Do you differentiate between EFC_FAIL and -EFC_FAIL?
> If so you should probably use different #defines ...
> 
...
> 
> '0'? Why not EFC_FAIL/EFC_SUCCESS?
> 
...
> EFC_FAIL?
> 
> Cheers,
> 
> Hannes
> 

We will remove all -1 and -EFC_FAIL returns and return only EFC_SUCCESS/EFC_FAIL.

Thanks

-- james

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs
  2020-01-08  7:54   ` Hannes Reinecke
@ 2020-01-09  1:04     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2020-01-09  1:04 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 1/7/2020 11:54 PM, Hannes Reinecke wrote:
> You seem to have given up using EFC_SUCCESS / EFC_FAIL for the next few
> functions.
> Please be consistent here.
> 
> Cheers,
> 
> Hannes
> 

We will convert over to the EFC_xxx statuses.

Thanks

-- james

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library
  2020-01-08  8:22   ` Hannes Reinecke
@ 2020-01-09  1:29     ` James Smart
  0 siblings, 0 replies; 78+ messages in thread
From: James Smart @ 2020-01-09  1:29 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: maier, dwagner, bvanassche, Ram Vegesna, James Smart

On 1/8/2020 12:22 AM, Hannes Reinecke wrote:

> 
> boolean?
> 
...
> 
> Same here?
> 
...
> See? It doesn't even hurt ...

:)

...
> 
> boolean?
> 

yep - we'll convert them to booleans
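For instance, a hedged sketch of what the sli_fw_ready() conversion could
look like (callers would then test the result directly):

/* Determine if the chip FW is in a ready state */
bool
sli_fw_ready(struct sli4 *sli4)
{
	u32 val = sli_reg_read_status(sli4);

	return val & SLI4_PORT_STATUS_RDY;
}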


> Ho-hum.
> Maybe 'extent allocation not implemented' ?

ok


> 
> Didn't you mention extent allocation is not implemented?
> So is this a different type of extent?

kinda - there was a comment header that tried to clarify this:

/*
 * Tracks the port resources using extents metaphor. For
 * devices that don't implement extents (i.e.
 * has_extents == FALSE), the code models each resource as
 * a single large extent.
 */

regardless - we'll clarify what's going on.


>> +#define FDD 2
> 
> Oh, come on.
> You have defines for everything but the kitchen sink.
> So why do you have to define this one inline?

yeah - there are a lot.

Agree - the define will be moved to a header. It's a dump-type selection
(function scope or adapter scope; only newer adapters do function-scope only).
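Something along these lines, say in sli4.h (the enum and member names are
assumptions):

enum sli4_dump_scope {
	SLI4_DUMP_ADAPTER	= 1,	/* adapter-scope dump */
	SLI4_DUMP_FUNC		= 2,	/* function-scope dump, was FDD */
};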


> 
> Please use defines for the return values here.
> One has no idea why '1' or '2' is returned here.
> At the very least some documentation.
> 

yep - will do.

> EFC_SUCCESS is back!
> I've already missed them; none of the previous functions in this patch
> use them.
> Please fix.

yep.


> I guess you can reformat those; the linux line length is 80 characters,
> and one really should use them ...
> 
> Cheers,
> 
> Hannes
> 

We'll fix the line lengths.

-- james

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 24/32] elx: efct: LIO backend interface routines
  2019-12-20 22:37 ` [PATCH v2 24/32] elx: efct: LIO backend interface routines James Smart
@ 2020-01-09  3:56   ` Bart Van Assche
  0 siblings, 0 replies; 78+ messages in thread
From: Bart Van Assche @ 2020-01-09  3:56 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, Ram Vegesna

On 2019-12-20 14:37, James Smart wrote:
> +#include <scsi/scsi_tcq.h>

Including the scsi_tcq.h header file is only useful in source files that
implement initiator functionality. This source file implements SCSI
target functionality. Is this include really necessary?

> +static struct workqueue_struct *lio_wq;
> +
> +static int
> +efct_format_wwn(char *str, size_t len, char *pre, u64 wwn)
> +{
> +	u8 a[8];
> +
> +	put_unaligned_be64(wwn, a);
> +	return snprintf(str, len,
> +			"%s%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
> +			pre, a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7]);
> +}

Can the type of 'pre' be changed from 'char *' into 'const char *'?

Can %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x be changed into %8phC?
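For reference, a sketch of what the %8phC variant might look like
(together with the const char * change suggested above):

static int
efct_format_wwn(char *str, size_t len, const char *pre, u64 wwn)
{
	u8 a[8];

	put_unaligned_be64(wwn, a);
	return snprintf(str, len, "%s%8phC", pre, a);
}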

> +static int
> +efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
> +{
> +	int a[8], num;
> +	u8 b[8];
> +
> +	if (npiv) {
> +		num = sscanf(name, "%02x%02x%02x%02x%02x%02x%02x%02x",
> +			     &a[0], &a[1], &a[2], &a[3], &a[4],
> +				 &a[5], &a[6], &a[7]);
> +	} else {
> +		num = sscanf(name,
> +			     "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
> +			     &a[0], &a[1], &a[2], &a[3], &a[4],
> +			     &a[5], &a[6], &a[7]);
> +	}
> +
> +	if (num != 8)
> +		return -EINVAL;
> +
> +	for (num = 0; num < 8; ++num)
> +		b[num] = (u8) a[num];
> +
> +	*wwp = get_unaligned_be64(b);
> +	return 0;
> +}

If the %02x sscanf specifiers are changed into %02hhx, can the int a[8]
array be left out?
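A possible sketch with %02hhx, scanning straight into the u8 array:

static int
efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
{
	u8 b[8];
	int num;

	if (npiv)
		num = sscanf(name, "%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx",
			     &b[0], &b[1], &b[2], &b[3],
			     &b[4], &b[5], &b[6], &b[7]);
	else
		num = sscanf(name, "%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx",
			     &b[0], &b[1], &b[2], &b[3],
			     &b[4], &b[5], &b[6], &b[7]);

	if (num != 8)
		return -EINVAL;

	*wwp = get_unaligned_be64(b);
	return 0;
}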

> +static ssize_t
> +efct_lio_npiv_tpg_enable_store(struct config_item *item, const char *page,
> +			       size_t count)
> +{
> +	struct se_portal_group *se_tpg = to_tpg(item);
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +	struct efct_lio_vport *lio_vport = tpg->vport;
> +	struct efct_lio_vport_data_t *vport_data;
> +	struct efct *efct;
> +	struct efc *efc;
> +	int ret = -1;
> +	unsigned long op, flags = 0;
> +
> +	if (kstrtoul(page, 0, &op) < 0)
> +		return -EINVAL;
> +
> +	if (!lio_vport) {
> +		pr_err("Unable to find vport\n");
> +		return -EINVAL;
> +	}
> +
> +	efct = lio_vport->efct;
> +	efc = efct->efcport;
> +
> +	if (op == 1) {
> +		atomic_set(&tpg->enabled, 1);
> +		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
> +
> +		if (efc->domain) {
> +			ret = efc_sport_vport_new(efc->domain,
> +						  lio_vport->npiv_wwpn,
> +						  lio_vport->npiv_wwnn,
> +						  U32_MAX, false, true,
> +						  NULL, NULL, true);
> +			if (ret != 0) {
> +				efc_log_err(efct, "Failed to create Vport\n");
> +				return ret;
> +			}
> +			return count;
> +		}
> +
> +		vport_data = kmalloc(sizeof(*vport_data), GFP_KERNEL);
> +		if (!vport_data)
> +			return ret;
> +
> +		memset(vport_data, 0, sizeof(struct efct_lio_vport_data_t));
> +		vport_data->phy_wwpn            = lio_vport->wwpn;
> +		vport_data->vport_wwpn          = lio_vport->npiv_wwpn;
> +		vport_data->vport_wwnn          = lio_vport->npiv_wwnn;
> +		vport_data->target_mode         = 1;
> +		vport_data->initiator_mode      = 0;
> +		vport_data->lio_vport           = lio_vport;
> +
> +		/* There is no domain.  Add to pending list. When the
> +		 * domain is created, the driver will create the vport.
> +		 */
> +		efc_log_debug(efct, "link down, move to pending\n");
> +		spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +		INIT_LIST_HEAD(&vport_data->list_entry);
> +		list_add_tail(&vport_data->list_entry,
> +			      &efct->tgt_efct.vport_pend_list);
> +		spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +
> +	} else if (op == 0) {
> +		struct efct_lio_vport_data_t *virt_target_data, *next;
> +
> +		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
> +
> +		atomic_set(&tpg->enabled, 0);
> +		/* only physical sport should exist, free lio_sport
> +		 * allocated in efct_lio_make_sport
> +		 */
> +		if (efc->domain) {
> +			efc_sport_vport_del(efct->efcport, efc->domain,
> +					    lio_vport->npiv_wwpn,
> +					    lio_vport->npiv_wwnn);
> +			return count;
> +		}
> +		spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +		list_for_each_entry_safe(virt_target_data, next,
> +					 &efct->tgt_efct.vport_pend_list,
> +					 list_entry) {
> +			if (virt_target_data->lio_vport == lio_vport) {
> +				list_del(&virt_target_data->list_entry);
> +				kfree(virt_target_data);
> +				break;
> +			}
> +		}
> +		spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +	} else {
> +		return -EINVAL;
> +	}
> +	return count;
> +}

I think the above function can return -1. Please make sure that this
function returns an appropriate error code if something fails.

> +static bool efct_lio_node_is_initiator(struct efc_node *node)
> +{
> +	if (!node)
> +		return 0;
> +
> +	if (node->rnode.fc_id && node->rnode.fc_id != FC_FID_FLOGI &&
> +	    node->rnode.fc_id != FC_FID_DIR_SERV &&
> +	    node->rnode.fc_id != FC_FID_FCTRL) {
> +		return 1;
> +	}
> +
> +	return 0;
> +}

Should return 0 / return 1 perhaps be changed into return false / return
true?

> +static int  efct_lio_tgt_session_data(struct efct *efct, u64 wwpn,
> +				      char *buf, int size)
> +{
> +	struct efc_sli_port *sport = NULL;
> +	struct efc_node *node = NULL;
> +	struct efc *efc = efct->efcport;
> +	u16 loop_id = 0;
> +	int off = 0, rc = 0;
> +
> +	if (!efc->domain) {
> +		efc_log_err(efct, "failed to find efct/domain\n");
> +		return -1;
> +	}
> +
> +	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
> +		if (sport->wwpn != wwpn)
> +			continue;
> +		list_for_each_entry(node, &sport->node_list,
> +				    list_entry) {
> +			/* Dump only remote NPORT sessions */
> +			if (!efct_lio_node_is_initiator(node))
> +				continue;
> +
> +			rc = snprintf(buf + off, size - off,
> +				"0x%016llx,0x%08x,0x%04x\n",
> +				get_unaligned_be64(node->wwpn),
> +				node->rnode.fc_id, loop_id);
> +			if (rc < 0)
> +				break;
> +			off += rc;
> +		}
> +	}
> +
> +	buf[size - 1] = '\0';
> +	return 0;
> +}

Does the caller of this function initialize buf[]? If not, should this
function initialize buf[] before calling snprintf()?

Since snprintf() guarantees '\0' termination I think that the buf[size -
1] = '\0' at the end of this function can be left out.

> +static int
> +efct_lio_datamove_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
> +		       u32 flags, void *arg);

Can this forward declaration be avoided by reordering the function
definitions?

> +static struct se_wwn *
> +efct_lio_npiv_make_sport(struct target_fabric_configfs *tf,
> +			 struct config_group *group, const char *name)
> +{
> +	struct efct_lio_vport *lio_vport;
> +	struct efct *efct;
> +	int ret = -1;
> +	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
> +	char *p, tmp[128];
> +	struct efct_lio_vport_list_t *vport_list;
> +	struct fc_vport *new_fc_vport;
> +	struct fc_vport_identifiers vport_id;
> +	unsigned long flags = 0;
> +
> +	snprintf(tmp, 128, "%s", name);

How about using sizeof(tmp) instead of hardcoding the array size?

> +	p = strchr(tmp, '@');
> +
> +	if (!p) {
> +		pr_err("Unable to find separator operator(@)\n");
> +		return ERR_PTR(ret);
> +	}
> +	*p++ = '\0';

Can this be changed into a strsep() call?
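Roughly (a sketch only, keeping the existing error handling):

	char *rest = tmp;

	p = strsep(&rest, "@");
	if (!rest) {
		pr_err("Unable to find separator operator(@)\n");
		return ERR_PTR(ret);
	}
	/* the two halves of "xxx@yyy" are now in 'p' and 'rest' */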

> +int efct_scsi_tgt_del_device(struct efct *efct)
> +{
> +	int rc = 0;
> +
> +	flush_workqueue(lio_wq);
> +
> +	return rc;
> +}

Is the 'rc' variable necessary in the above function? Can it be removed?

> +/* Called by the libefc when an initiator goes away. */
> +int efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
> +			int reason)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_lio_wq_data *wq_data;
> +	int watermark;
> +	int initiator_count;
> +
> +	if (reason == EFCT_SCSI_INITIATOR_MISSING)
> +		return EFCT_SCSI_CALL_COMPLETE;
> +
> +	wq_data = kmalloc(sizeof(*wq_data), GFP_ATOMIC);
> +	if (!wq_data)
> +		return EFCT_SCSI_CALL_COMPLETE;
> +
> +	memset(wq_data, 0, sizeof(*wq_data));
> +	wq_data->ptr = node;
> +	wq_data->efct = efct;
> +	INIT_WORK(&wq_data->work, efct_lio_remove_session);
> +	queue_work(lio_wq, &wq_data->work);
> +
> +	/*
> +	 * update IO watermark: decrement initiator count
> +	 */
> +	initiator_count =
> +		atomic_sub_return(1, &efct->tgt_efct.initiator_count);
> +	watermark = (efct->tgt_efct.watermark_max -
> +			initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
> +	watermark = (efct->tgt_efct.watermark_min > watermark) ?
> +			efct->tgt_efct.watermark_min : watermark;
> +	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
> +
> +	return EFCT_SCSI_CALL_ASYNC;
> +}

Is the lio_wq work queue really necessary? Could one of the system
workqueues have been used instead?
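E.g. (assuming the removal work doesn't need its own ordering/flushing
domain):

	INIT_WORK(&wq_data->work, efct_lio_remove_session);
	schedule_work(&wq_data->work);	/* i.e. queue_work(system_wq, ...) */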

> +	ret = kstrtoul(page, 0, &val);					  \
> +	if (ret < 0) {							  \
> +		pr_err("kstrtoul() failed with ret: %d\n", ret);	  \
> +		return -EINVAL;						  \
> +	}								  \

Has it been considered to return 'ret' (the kstrtoul() return value)
instead of -EINVAL?

> +	ret = kstrtoul(page, 0, &val);					   \
> +	if (ret < 0) {							   \
> +		pr_err("kstrtoul() failed with ret: %d\n", ret);	   \
> +		return -EINVAL;						   \
> +	}								   \

Same comment here.

> +#define efct_set_lio_io_state(io, value) (io->tgt_io.state |= value)

Is this macro really useful? Can it be removed?

> +struct efct_scsi_tgt_io {
> +	struct se_cmd		cmd;
> +	unsigned char		sense_buffer[TRANSPORT_SENSE_BUFFER];
> +	int			ddir;

Should 'int' perhaps be changed into 'enum dma_data_direction'?

> +	u8			cdb_opcode;

Does this duplicate cmd.t_task_cdb[0]? If so, is it useful to duplicate
that value?

> +	u32			cdb_len;

Is this value identical to scsi_command_size(cmd.t_task_cdb)? Is it
essential to have this member in this data structure?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 08/32] elx: libefc: Generic state machine framework
  2019-12-20 22:36 ` [PATCH v2 08/32] elx: libefc: Generic state machine framework James Smart
@ 2020-01-09  7:05   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  7:05 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:36 PM, James Smart wrote:
> This patch starts the population of the libefc library.
> The library will contain common tasks usable by a target or initiator
> driver. The library will also contain a FC discovery state machine
> interface.
> 
> This patch creates the library directory and adds definitions
> for the discovery state machine interface.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_sm.c | 213 +++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_sm.h | 140 +++++++++++++++++++++++++
>  2 files changed, 353 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c
> new file mode 100644
> index 000000000000..90e60c0e6638
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sm.c
> @@ -0,0 +1,213 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Generic state machine framework.
> + */
> +#include "efc.h"
> +#include "efc_sm.h"
> +
> +const char *efc_sm_id[] = {
> +	"common",
> +	"domain",
> +	"login"
> +};
> +
> +/**
> + * efc_sm_post_event() - Post an event to a context.
> + *
> + * @ctx: State machine context
> + * @evt: Event to post
> + * @data: Event-specific data (if any)
> + */
> +int
> +efc_sm_post_event(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *data)
> +{
> +	if (ctx->current_state) {
> +		ctx->current_state(ctx, evt, data);
> +		return 0;
> +	} else {
> +		return -1;
> +	}
> +}
> +
> +void
> +efc_sm_transition(struct efc_sm_ctx *ctx,
> +		  void *(*state)(struct efc_sm_ctx *,
> +				 enum efc_sm_event, void *), void *data)
> +
> +{
> +	if (ctx->current_state == state) {
> +		efc_sm_post_event(ctx, EFC_EVT_REENTER, data);
> +	} else {
> +		efc_sm_post_event(ctx, EFC_EVT_EXIT, data);
> +		ctx->current_state = state;
> +		efc_sm_post_event(ctx, EFC_EVT_ENTER, data);
> +	}
> +}
> +
> +void
> +efc_sm_disable(struct efc_sm_ctx *ctx)
> +{
> +	ctx->current_state = NULL;
> +}
> +
> +const char *efc_sm_event_name(enum efc_sm_event evt)
> +{
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		return "EFC_EVT_ENTER";
> +	case EFC_EVT_REENTER:
> +		return "EFC_EVT_REENTER";
> +	case EFC_EVT_EXIT:
> +		return "EFC_EVT_EXIT";
> +	case EFC_EVT_SHUTDOWN:
> +		return "EFC_EVT_SHUTDOWN";
> +	case EFC_EVT_RESPONSE:
> +		return "EFC_EVT_RESPONSE";
> +	case EFC_EVT_RESUME:
> +		return "EFC_EVT_RESUME";
> +	case EFC_EVT_TIMER_EXPIRED:
> +		return "EFC_EVT_TIMER_EXPIRED";
> +	case EFC_EVT_ERROR:
> +		return "EFC_EVT_ERROR";
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +		return "EFC_EVT_SRRS_ELS_REQ_OK";
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +		return "EFC_EVT_SRRS_ELS_CMPL_OK";
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		return "EFC_EVT_SRRS_ELS_REQ_FAIL";
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		return "EFC_EVT_SRRS_ELS_CMPL_FAIL";
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		return "EFC_EVT_SRRS_ELS_REQ_RJT";
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		return "EFC_EVT_NODE_ATTACH_OK";
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		return "EFC_EVT_NODE_ATTACH_FAIL";
> +	case EFC_EVT_NODE_FREE_OK:
> +		return "EFC_EVT_NODE_FREE_OK";
> +	case EFC_EVT_ELS_REQ_TIMEOUT:
> +		return "EFC_EVT_ELS_REQ_TIMEOUT";
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		return "EFC_EVT_ELS_REQ_ABORTED";
> +	case EFC_EVT_ABORT_ELS:
> +		return "EFC_EVT_ABORT_ELS";
> +	case EFC_EVT_ELS_ABORT_CMPL:
> +		return "EFC_EVT_ELS_ABORT_CMPL";
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		return "EFC_EVT_DOMAIN_FOUND";
> +	case EFC_EVT_DOMAIN_ALLOC_OK:
> +		return "EFC_EVT_DOMAIN_ALLOC_OK";
> +	case EFC_EVT_DOMAIN_ALLOC_FAIL:
> +		return "EFC_EVT_DOMAIN_ALLOC_FAIL";
> +	case EFC_EVT_DOMAIN_REQ_ATTACH:
> +		return "EFC_EVT_DOMAIN_REQ_ATTACH";
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		return "EFC_EVT_DOMAIN_ATTACH_OK";
> +	case EFC_EVT_DOMAIN_ATTACH_FAIL:
> +		return "EFC_EVT_DOMAIN_ATTACH_FAIL";
> +	case EFC_EVT_DOMAIN_LOST:
> +		return "EFC_EVT_DOMAIN_LOST";
> +	case EFC_EVT_DOMAIN_FREE_OK:
> +		return "EFC_EVT_DOMAIN_FREE_OK";
> +	case EFC_EVT_DOMAIN_FREE_FAIL:
> +		return "EFC_EVT_DOMAIN_FREE_FAIL";
> +	case EFC_EVT_HW_DOMAIN_REQ_ATTACH:
> +		return "EFC_EVT_HW_DOMAIN_REQ_ATTACH";
> +	case EFC_EVT_HW_DOMAIN_REQ_FREE:
> +		return "EFC_EVT_HW_DOMAIN_REQ_FREE";
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		return "EFC_EVT_ALL_CHILD_NODES_FREE";
> +
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +		return "EFC_EVT_SPORT_ALLOC_OK";
> +	case EFC_EVT_SPORT_ALLOC_FAIL:
> +		return "EFC_EVT_SPORT_ALLOC_FAIL";
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		return "EFC_EVT_SPORT_ATTACH_OK";
> +	case EFC_EVT_SPORT_ATTACH_FAIL:
> +		return "EFC_EVT_SPORT_ATTACH_FAIL";
> +	case EFC_EVT_SPORT_FREE_OK:
> +		return "EFC_EVT_SPORT_FREE_OK";
> +	case EFC_EVT_SPORT_FREE_FAIL:
> +		return "EFC_EVT_SPORT_FREE_FAIL";
> +	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
> +		return "EFC_EVT_SPORT_TOPOLOGY_NOTIFY";
> +	case EFC_EVT_HW_PORT_ALLOC_OK:
> +		return "EFC_EVT_HW_PORT_ALLOC_OK";
> +	case EFC_EVT_HW_PORT_ALLOC_FAIL:
> +		return "EFC_EVT_HW_PORT_ALLOC_FAIL";
> +	case EFC_EVT_HW_PORT_ATTACH_OK:
> +		return "EFC_EVT_HW_PORT_ATTACH_OK";
> +	case EFC_EVT_HW_PORT_REQ_ATTACH:
> +		return "EFC_EVT_HW_PORT_REQ_ATTACH";
> +	case EFC_EVT_HW_PORT_REQ_FREE:
> +		return "EFC_EVT_HW_PORT_REQ_FREE";
> +	case EFC_EVT_HW_PORT_FREE_OK:
> +		return "EFC_EVT_HW_PORT_FREE_OK";
> +
> +	case EFC_EVT_NODE_FREE_FAIL:
> +		return "EFC_EVT_NODE_FREE_FAIL";
> +
> +	case EFC_EVT_ABTS_RCVD:
> +		return "EFC_EVT_ABTS_RCVD";
> +
> +	case EFC_EVT_NODE_MISSING:
> +		return "EFC_EVT_NODE_MISSING";
> +	case EFC_EVT_NODE_REFOUND:
> +		return "EFC_EVT_NODE_REFOUND";
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		return "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO";
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +		return "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO";
> +
> +	case EFC_EVT_ELS_FRAME:
> +		return "EFC_EVT_ELS_FRAME";
> +	case EFC_EVT_PLOGI_RCVD:
> +		return "EFC_EVT_PLOGI_RCVD";
> +	case EFC_EVT_FLOGI_RCVD:
> +		return "EFC_EVT_FLOGI_RCVD";
> +	case EFC_EVT_LOGO_RCVD:
> +		return "EFC_EVT_LOGO_RCVD";
> +	case EFC_EVT_PRLI_RCVD:
> +		return "EFC_EVT_PRLI_RCVD";
> +	case EFC_EVT_PRLO_RCVD:
> +		return "EFC_EVT_PRLO_RCVD";
> +	case EFC_EVT_PDISC_RCVD:
> +		return "EFC_EVT_PDISC_RCVD";
> +	case EFC_EVT_FDISC_RCVD:
> +		return "EFC_EVT_FDISC_RCVD";
> +	case EFC_EVT_ADISC_RCVD:
> +		return "EFC_EVT_ADISC_RCVD";
> +	case EFC_EVT_RSCN_RCVD:
> +		return "EFC_EVT_RSCN_RCVD";
> +	case EFC_EVT_SCR_RCVD:
> +		return "EFC_EVT_SCR_RCVD";
> +	case EFC_EVT_ELS_RCVD:
> +		return "EFC_EVT_ELS_RCVD";
> +	case EFC_EVT_LAST:
> +		return "EFC_EVT_LAST";
> +	case EFC_EVT_FCP_CMD_RCVD:
> +		return "EFC_EVT_FCP_CMD_RCVD";
> +
> +	case EFC_EVT_GIDPT_DELAY_EXPIRED:
> +		return "EFC_EVT_GIDPT_DELAY_EXPIRED";
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		return "EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY";
> +	case EFC_EVT_NODE_DEL_INI_COMPLETE:
> +		return "EFC_EVT_NODE_DEL_INI_COMPLETE";
> +	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
> +		return "EFC_EVT_NODE_DEL_TGT_COMPLETE";
> +
> +	default:
> +		break;
> +	}
> +	return "unknown";
> +}
Please convert into a lookup array.
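Something like the sketch below; note this assumes the event values are
small and reasonably dense (i.e. without the EFC_SM_EVENT_START() offsets),
otherwise the array would be mostly holes:

static const char * const efc_sm_event_names[] = {
	[EFC_EVT_ENTER]		= "EFC_EVT_ENTER",
	[EFC_EVT_REENTER]	= "EFC_EVT_REENTER",
	[EFC_EVT_EXIT]		= "EFC_EVT_EXIT",
	/* ... remaining events ... */
};

const char *efc_sm_event_name(enum efc_sm_event evt)
{
	if (evt >= ARRAY_SIZE(efc_sm_event_names) ||
	    !efc_sm_event_names[evt])
		return "unknown";

	return efc_sm_event_names[evt];
}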

> diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h
> new file mode 100644
> index 000000000000..c76352d1d527
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sm.h
> @@ -0,0 +1,140 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + *
> + */
> +
> +/**
> + * Generic state machine framework declarations.
> + */
> +
> +#ifndef _EFC_SM_H
> +#define _EFC_SM_H
> +
> +/**
> + * State Machine (SM) IDs.
> + */
> +enum {
> +	EFC_SM_COMMON = 0,
> +	EFC_SM_DOMAIN,
> +	EFC_SM_PORT,
> +	EFC_SM_LOGIN,
> +	EFC_SM_LAST
> +};
> +
> +#define EFC_SM_EVENT_SHIFT		24
> +#define EFC_SM_EVENT_START(id)		((id) << EFC_SM_EVENT_SHIFT)
> +
> +extern const char *efc_sm_id[];
> +
Curious.
You define 4 state machine IDs, yet declare names for only 3 of them.
Omission?

And I would probably use a lookup function for the state machines; this
'const char *efc_sm_id[]' looks a bit lonely. Plus I'm always wary of
variable-sized global consts ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 09/32] elx: libefc: Emulex FC discovery library APIs and definitions
  2019-12-20 22:37 ` [PATCH v2 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
@ 2020-01-09  7:16   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  7:16 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - SLI/Local FC port objects
> - efc_domain_s: FC domain (aka fabric) objects
> - efc_node_s: FC node (aka remote ports) objects
> - A sparse vector interface that manages lookup tables
>   for the objects.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc.h     |  99 ++++++
>  drivers/scsi/elx/libefc/efc_lib.c | 131 ++++++++
>  drivers/scsi/elx/libefc/efclib.h  | 637 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 867 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc.h
>  create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
>  create mode 100644 drivers/scsi/elx/libefc/efclib.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc.h b/drivers/scsi/elx/libefc/efc.h
> new file mode 100644
> index 000000000000..ef7c83e44167
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc.h
> @@ -0,0 +1,99 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFC_H__
> +#define __EFC_H__
> +
> +#include "../include/efc_common.h"
> +#include "efclib.h"
> +#include "efc_sm.h"
> +#include "efc_domain.h"
> +#include "efc_sport.h"
> +#include "efc_node.h"
> +#include "efc_fabric.h"
> +#include "efc_device.h"
> +
> +#define EFC_MAX_REMOTE_NODES			2048
> +
> +enum efc_hw_rtn {
> +	EFC_HW_RTN_SUCCESS = 0,
> +	EFC_HW_RTN_SUCCESS_SYNC = 1,
> +	EFC_HW_RTN_ERROR = -1,
> +	EFC_HW_RTN_NO_RESOURCES = -2,
> +	EFC_HW_RTN_NO_MEMORY = -3,
> +	EFC_HW_RTN_IO_NOT_ACTIVE = -4,
> +	EFC_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
> +	EFC_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
> +	EFC_HW_RTN_INVALID_ARG = -7,
> +};
> +

(Silent applause for the named enum :-)

> +#define EFC_HW_RTN_IS_ERROR(e) ((e) < 0)
> +
> +enum efc_scsi_del_initiator_reason {
> +	EFC_SCSI_INITIATOR_DELETED,
> +	EFC_SCSI_INITIATOR_MISSING,
> +};
> +
> +enum efc_scsi_del_target_reason {
> +	EFC_SCSI_TARGET_DELETED,
> +	EFC_SCSI_TARGET_MISSING,
> +};
> +
> +#define EFC_SCSI_CALL_COMPLETE			0
> +#define EFC_SCSI_CALL_ASYNC			1
> +
> +#define EFC_FC_ELS_DEFAULT_RETRIES		3
> +
> +/* Timeouts */
> +#define EFC_FC_ELS_SEND_DEFAULT_TIMEOUT		0
> +#define EFC_FC_FLOGI_TIMEOUT_SEC		5
> +#define EFC_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC	30000000
> +
> +#define domain_sm_trace(domain) \
> +	efc_log_debug(domain->efc, "[domain:%s] %-20s %-20s\n", \
> +		      domain->display_name, __func__, efc_sm_event_name(evt)) \
> +
> +#define domain_trace(domain, fmt, ...) \
> +	efc_log_debug(domain->efc, \
> +		      "[%s]" fmt, domain->display_name, ##__VA_ARGS__) \
> +
> +#define node_sm_trace() \
> +	efc_log_debug(node->efc, \
> +		"[%s] %-20s\n", node->display_name, efc_sm_event_name(evt)) \
> +
> +#define sport_sm_trace(sport) \
> +	efc_log_debug(sport->efc, \
> +		"[%s] %-20s\n", sport->display_name, efc_sm_event_name(evt)) \
> +
> +/**
> + * Sparse Vector API
> + *
> + * This is a trimmed down sparse vector implementation tuned to the problem of
> + * 24-bit FC_IDs. In this case, the 24-bit index value is broken down in three
> + * 8-bit values. These values are used to index up to three 256 element arrays.
> + * Arrays are allocated, only when needed. @n @n

@n @n ?

> + * The lookup can complete in constant time (3 indexed array references). @n @n
> + * A typical use case would be that the fabric/directory FC_IDs would cause two
> + * rows to be allocated, and the fabric assigned remote nodes would cause two
> + * rows to be allocated, with the root row always allocated. This gives five
> + * rows of 256 x sizeof(void*), resulting in 10k.
> + */
> +
> +struct sparse_vector {
> +	struct efc *efc;
> +	u32 max_idx;
> +	void **array;
> +};
> +
> +#define SPV_ROWLEN	256
> +#define SPV_DIM		3
> +
Hmm. One wonders if xarrays wouldn't work better (and be simpler to implement).

Have you looked at that?
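For comparison, the xarray equivalent would be roughly (sketch only,
indexing directly by the 24-bit FC_ID):

	struct xarray lookup;

	xa_init(&lookup);

	/* insert */
	err = xa_err(xa_store(&lookup, fc_id, node, GFP_ATOMIC));

	/* lookup */
	node = xa_load(&lookup, fc_id);

	/* remove */
	xa_erase(&lookup, fc_id);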

> +void efc_spv_del(struct sparse_vector *spv);
> +struct sparse_vector *efc_spv_new(struct efc *efc);
> +void efc_spv_set(struct sparse_vector *sv, u32 idx, void *value);
> +void *efc_spv_get(struct sparse_vector *sv, u32 idx);
> +
> +#endif /* __EFC_H__ */
> diff --git a/drivers/scsi/elx/libefc/efc_lib.c b/drivers/scsi/elx/libefc/efc_lib.c
> new file mode 100644
> index 000000000000..9ab8538d6e1f
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_lib.c
> @@ -0,0 +1,131 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include "efc.h"
> +
> +int efcport_init(struct efc *efc)
> +{
> +	u32 rc = 0;
> +
> +	spin_lock_init(&efc->lock);
> +	INIT_LIST_HEAD(&efc->vport_list);
> +
> +	/* Create Node pool */
> +	rc = efc_node_create_pool(efc, EFC_MAX_REMOTE_NODES);
> +	if (rc)
> +		efc_log_err(efc, "Can't allocate node pool\n");
> +
> +	return rc;
> +}
> +
> +void efcport_destroy(struct efc *efc)
> +{
> +	efc_node_free_pool(efc);
> +}
> +
> +static void **efc_spv_new_row(u32 rowcount)
> +{
> +	return kzalloc(sizeof(void *) * rowcount, GFP_ATOMIC);
> +}
> +
> +/* Recursively delete the rows in this sparse vector */
> +static void
> +_efc_spv_del(struct efc *efc, void **a, u32 n, u32 depth)
> +{
> +	if (a) {
> +		if (depth) {
> +			u32 i;
> +
> +			for (i = 0; i < n; i++)
> +				_efc_spv_del(efc, a[i], n, depth - 1);
> +
> +			kfree(a);
> +		}
> +	}
> +}
> +
> +void
> +efc_spv_del(struct sparse_vector *spv)
> +{
> +	if (spv) {
> +		_efc_spv_del(spv->efc, spv->array, SPV_ROWLEN, SPV_DIM);
> +		kfree(spv);
> +	}
> +}
> +
> +struct sparse_vector
> +*efc_spv_new(struct efc *efc)
> +{
> +	struct sparse_vector *spv;
> +	u32 i;
> +
> +	spv = kzalloc(sizeof(*spv), GFP_ATOMIC);
> +	if (!spv)
> +		return NULL;
> +
> +	spv->efc = efc;
> +	spv->max_idx = 1;
> +	for (i = 0; i < SPV_DIM; i++)
> +		spv->max_idx *= SPV_ROWLEN;
> +
> +	return spv;
> +}
> +
> +static void
> +*efc_spv_new_cell(struct sparse_vector *sv, u32 idx, bool alloc_new_rows)
> +{
> +	void **p;
> +	u32 a = (idx >> 16) & 0xff;
> +	u32 b = (idx >>  8) & 0xff;
> +	u32 c = (idx >>  0) & 0xff;
> +
> +	if (idx >= sv->max_idx)
> +		return NULL;
> +
> +	if (!sv->array) {
> +		sv->array = (alloc_new_rows ?
> +			     efc_spv_new_row(SPV_ROWLEN) : NULL);
> +		if (!sv->array)
> +			return NULL;
> +	}
> +	p = sv->array;
> +	if (!p[a]) {
> +		p[a] = (alloc_new_rows ? efc_spv_new_row(SPV_ROWLEN) : NULL);
> +		if (!p[a])
> +			return NULL;
> +	}
> +	p = p[a];
> +	if (!p[b]) {
> +		p[b] = (alloc_new_rows ? efc_spv_new_row(SPV_ROWLEN) : NULL);
> +		if (!p[b])
> +			return NULL;
> +	}
> +	p = p[b];
> +
> +	return &p[c];
> +}
> +
> +void
> +efc_spv_set(struct sparse_vector *sv, u32 idx, void *value)
> +{
> +	void **ref = efc_spv_new_cell(sv, idx, true);
> +
> +	if (ref)
> +		*ref = value;
> +}
> +
> +void
> +*efc_spv_get(struct sparse_vector *sv, u32 idx)
> +{
> +	void **ref = efc_spv_new_cell(sv, idx, false);
> +
> +	if (ref)
> +		return *ref;
> +
> +	return NULL;
> +}
What about locking?

But maybe this will become clear in the next patches.
Let's see.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 10/32] elx: libefc: FC Domain state machine interfaces
  2019-12-20 22:37 ` [PATCH v2 10/32] elx: libefc: FC Domain state machine interfaces James Smart
@ 2020-01-09  7:27   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  7:27 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - FC Domain registration, allocation and deallocation sequence
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_domain.c | 1126 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_domain.h |   52 ++
>  2 files changed, 1178 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c
> new file mode 100644
> index 000000000000..a386d866c77b
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_domain.c
> @@ -0,0 +1,1126 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * domain_sm Domain State Machine: States
> + */
> +
> +#include "efc.h"
> +
> +/* Accept domain callback events from the user driver */
> +int
> +efc_domain_cb(void *arg, int event, void *data)
> +{
> +	struct efc *efc = arg;
> +	struct efc_domain *domain = NULL;
> +	int rc = 0;
> +
> +	if (event != EFC_HW_DOMAIN_FOUND)
> +		domain = data;
> +
> +	switch (event) {
> +	case EFC_HW_DOMAIN_FOUND: {
> +		u64 fcf_wwn = 0;
> +		struct efc_domain_record *drec = data;
> +
> +		/* extract the fcf_wwn */
> +		fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));
> +
> +		efc_log_debug(efc, "Domain allocated: wwn %016llX\n", fcf_wwn);
> +		/*
> +		 * lookup domain, or allocate a new one
> +		 * if one doesn't exist already
> +		 */
> +		domain = efc->domain;
> +		if (!domain) {
> +			domain = efc_domain_alloc(efc, fcf_wwn);
> +			if (!domain) {
> +				efc_log_err(efc, "efc_domain_alloc() failed\n");
> +				rc = -1;
> +				break;
> +			}
> +			efc_sm_transition(&domain->drvsm, __efc_domain_init,
> +					  NULL);
> +		}
> +
> +		if (fcf_wwn != domain->fcf_wwn) {
> +			efc_log_err(efc, "evt: FOUND for existing domain\n");
> +			efc_log_err(efc, "wwn:%016llX domain wwn:%016llX\n",
> +				    fcf_wwn, domain->fcf_wwn);
> +		}
> +
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
> +		break;
> +	}
> +
> +	case EFC_HW_DOMAIN_LOST:
> +		domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n");
> +		efc->tt.domain_hold_frames(efc, domain);
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ALLOC_OK:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ALLOC_FAIL:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL,
> +				      NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ATTACH_OK:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ATTACH_FAIL:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n");
> +		efc_domain_post_event(domain,
> +				      EFC_EVT_DOMAIN_ATTACH_FAIL, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_FREE_OK:
> +		domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_FREE_FAIL:
> +		domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL);
> +		break;
> +
> +	default:
> +		efc_log_warn(efc, "unsupported event %#x\n", event);
> +	}
> +
> +	return rc;
> +}
> +

You painfully declared a mapping for all events to strings in
efc_sm_event_name(), yet you spell them out here.
Wouldn't the above be a great place to use that function?
It would even simplify the code, as several cases can be collapsed into
one ...
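
Purely as an illustration (untested sketch; efc_domain_hw_to_sm() would be a
new translation helper and sm_event a new local, neither of which exists in
the patch), the OK/FAIL cases could become:

	case EFC_HW_DOMAIN_ALLOC_OK:
	case EFC_HW_DOMAIN_ALLOC_FAIL:
	case EFC_HW_DOMAIN_ATTACH_OK:
	case EFC_HW_DOMAIN_ATTACH_FAIL:
	case EFC_HW_DOMAIN_FREE_OK:
	case EFC_HW_DOMAIN_FREE_FAIL:
		sm_event = efc_domain_hw_to_sm(event);
		domain_trace(domain, efc_sm_event_name(sm_event));
		efc_domain_post_event(domain, sm_event, NULL);
		break;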

> +struct efc_domain *
> +efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn)
> +{
> +	struct efc_domain *domain;
> +
> +	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
> +	if (domain) {
> +		domain->efc = efc;
> +		domain->drvsm.app = domain;
> +
> +		/* Allocate a sparse vector for sport FC_ID's */
> +		domain->lookup = efc_spv_new(efc);
> +		if (!domain->lookup) {
> +			efc_log_err(efc, "efc_spv_new() failed\n");
> +			kfree(domain);
> +			return NULL;
> +		}
> +
> +		INIT_LIST_HEAD(&domain->sport_list);
> +		domain->fcf_wwn = fcf_wwn;
> +		efc_log_debug(efc, "Domain allocated: wwn %016llX\n",
> +			      domain->fcf_wwn);
> +		efc->domain = domain;
> +	} else {
> +		efc_log_err(efc, "domain allocation failed\n");
> +	}
> +
> +	return domain;
> +}
> +
> +void
> +efc_domain_free(struct efc_domain *domain)
> +{
> +	struct efc *efc;
> +
> +	efc = domain->efc;
> +
> +	/* Hold frames to clear the domain pointer from the xport lookup */
> +	efc->tt.domain_hold_frames(efc, domain);
> +
> +	efc_log_debug(efc, "Domain free: wwn %016llX\n",
> +		      domain->fcf_wwn);
> +
> +	efc_spv_del(domain->lookup);
> +	domain->lookup = NULL;
> +	efc->domain = NULL;
> +
> +	if (efc->domain_free_cb)
> +		(*efc->domain_free_cb)(efc, efc->domain_free_cb_arg);
> +
> +	kfree(domain);
> +}
> +
> +/* Free memory resources of a domain object */
> +void
> +efc_domain_force_free(struct efc_domain *domain)
> +{
> +	struct efc_sli_port *sport;
> +	struct efc_sli_port *next;
> +	struct efc *efc = domain->efc;
> +
> +	/* Shutdown domain sm */
> +	efc_sm_disable(&domain->drvsm);
> +
> +	list_for_each_entry_safe(sport, next, &domain->sport_list, list_entry) {
> +		efc_sport_force_free(sport);
> +	}
> +
> +	efc->tt.hw_domain_force_free(efc, domain);
> +	efc_domain_free(domain);
> +}
> +
> +/* Register a callback to be called when the domain is freed */
> +void
> +efc_register_domain_free_cb(struct efc *efc,
> +			    void (*callback)(struct efc *efc, void *arg),
> +			    void *arg)
> +{
> +	efc->domain_free_cb = callback;
> +	efc->domain_free_cb_arg = arg;
> +	if (!efc->domain && callback)
> +		(*callback)(efc, arg);
> +}
> +
> +static void *
> +__efc_domain_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_domain *domain = ctx->app;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/*
> +		 * this can arise if an FLOGI fails on the SPORT,
> +		 * and the SPORT is shutdown
> +		 */
> +		break;
> +	default:
> +		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
> +			     funcname, efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +static void *
> +__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_domain *domain = ctx->app;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +		break;
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* save drec, mark domain_found_pending */
> +		memcpy(&domain->pending_drec, arg,
> +		       sizeof(domain->pending_drec));
> +		domain->domain_found_pending = true;
> +		break;
> +	case EFC_EVT_DOMAIN_LOST:
> +		/* unmark domain_found_pending */
> +		domain->domain_found_pending = false;
> +		break;
> +
> +	default:
> +		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
> +			     funcname, efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +#define std_domain_state_decl(...)\
> +	struct efc_domain *domain = NULL;\
> +	struct efc *efc = NULL;\
> +	\
> +	efc_assert(ctx, NULL);\
> +	efc_assert(ctx->app, NULL);\
> +	domain = ctx->app;\
> +	efc_assert(domain->efc, NULL);\
> +	efc = domain->efc
> +
> +void *
> +__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		  void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		domain->attached = false;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_FOUND: {
> +		u32	i;
> +		struct efc_domain_record *drec = arg;
> +		struct efc_sli_port *sport;
> +
> +		u64	my_wwnn = efc->req_wwnn;
> +		u64	my_wwpn = efc->req_wwpn;
> +		__be64		be_wwpn;
> +
> +		if (my_wwpn == 0 || my_wwnn == 0) {
> +			efc_log_debug(efc,
> +				"using default hardware WWN configuration\n");
> +			my_wwpn = efc->def_wwpn;
> +			my_wwnn = efc->def_wwnn;
> +		}
> +
> +		efc_log_debug(efc,
> +			"Creating base sport using WWPN %016llX WWNN %016llX\n",
> +			my_wwpn, my_wwnn);
> +
> +		/* Allocate a sport and transition to __efc_sport_allocated */
> +		sport = efc_sport_alloc(domain, my_wwpn, my_wwnn, U32_MAX,
> +					efc->enable_ini, efc->enable_tgt);
> +
> +		if (!sport) {
> +			efc_log_err(efc, "efc_sport_alloc() failed\n");
> +			break;
> +		}
> +		efc_sm_transition(&sport->sm, __efc_sport_allocated, NULL);
> +
> +		be_wwpn = cpu_to_be64(sport->wwpn);
> +
> +		/* allocate struct efc_sli_port object for local port
> +		 * Note: drec->fc_id is ALPA from read_topology only if loop
> +		 */
> +		if (efc->tt.hw_port_alloc(efc, sport, NULL,
> +					  (uint8_t *)&be_wwpn)) {
> +			efc_log_err(efc, "Can't allocate port\n");
> +			efc_sport_free(sport);
> +			break;
> +		}
> +
> +		domain->is_loop = drec->is_loop;
> +
> +		/*
> +		 * If the loop position map includes ALPA == 0,
> +		 * then we are in a public loop (NL_PORT)
> +		 * Note that the first element of the loopmap[]
> +		 * contains the count of elements, and if
> +		 * ALPA == 0 is present, it will occupy the first
> +		 * location after the count.
> +		 */
> +		domain->is_nlport = drec->map.loop[1] == 0x00;
> +
> +		if (!domain->is_loop) {
> +			/* Initiate HW domain alloc */
> +			if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
> +				efc_log_err(efc,
> +					    "Failed to initiate HW domain allocation\n");
> +				break;
> +			}
> +			efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
> +			break;
> +		}
> +
> +		efc_log_debug(efc, "%s fc_id=%#x speed=%d\n",
> +			      drec->is_loop ?
> +			      (domain->is_nlport ?
> +			      "public-loop" : "loop") : "other",
> +			      drec->fc_id, drec->speed);
> +
> +		sport->fc_id = drec->fc_id;
> +		sport->topology = EFC_SPORT_TOPOLOGY_LOOP;
> +		snprintf(sport->display_name, sizeof(sport->display_name),
> +			 "s%06x", drec->fc_id);
> +
> +		if (efc->enable_ini) {
> +			u32 count = drec->map.loop[0];
> +
> +			efc_log_debug(efc, "%d position map entries\n",
> +				      count);
> +			for (i = 1; i <= count; i++) {
> +				if (drec->map.loop[i] != drec->fc_id) {
> +					struct efc_node *node;
> +
> +					efc_log_debug(efc, "%#x -> %#x\n",
> +						      drec->fc_id,
> +						      drec->map.loop[i]);
> +					node = efc_node_alloc(sport,
> +							      drec->map.loop[i],
> +							      false, true);
> +					if (!node) {
> +						efc_log_err(efc,
> +							    "efc_node_alloc() failed\n");
> +						break;
> +					}
> +					efc_node_transition(node,
> +							    __efc_d_wait_loop,
> +							    NULL);
> +				}
> +			}
> +		}
> +
> +		/* Initiate HW domain alloc */
> +		if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
> +			efc_log_err(efc,
> +				    "Failed to initiate HW domain allocation\n");
> +			break;
> +		}
> +		efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
> +		break;
> +	}
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for the domain allocation to complete */
> +void *
> +__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport;
> +
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ALLOC_OK: {
> +		struct fc_els_flogi  *sp;
> +
> +		sport = domain->sport;
> +		efc_assert(sport, NULL);
The assert is pretty much pointless; you'll crash anyway in the next line.

if (WARN_ON())
  break;

maybe?
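
i.e. something like (sketch only):

	sport = domain->sport;
	if (WARN_ON(!sport))
		break;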

> +		sp = (struct fc_els_flogi  *)sport->service_params;
> +
> +		/* Save the domain service parameters */
> +		memcpy(domain->service_params + 4, domain->dma.virt,
> +		       sizeof(struct fc_els_flogi) - 4);
> +		memcpy(sport->service_params + 4, domain->dma.virt,
> +		       sizeof(struct fc_els_flogi) - 4);
> +
> +		/*
> +		 * Update the sport's service parameters,
> +		 * user might have specified non-default names
> +		 */
> +		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
> +		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
> +
> +		/*
> +		 * Take the loop topology path,
> +		 * unless we are an NL_PORT (public loop)
> +		 */
> +		if (domain->is_loop && !domain->is_nlport) {
> +			/*
> +			 * For loop, we already have our FC ID
> +			 * and don't need fabric login.
> +			 * Transition to the allocated state and
> +			 * post an event to attach to
> +			 * the domain. Note that this breaks the
> +			 * normal action/transition
> +			 * pattern here to avoid a race with the
> +			 * domain attach callback.
> +			 */
> +			/* sm: is_loop / domain_attach */
> +			efc_sm_transition(ctx, __efc_domain_allocated, NULL);
> +			__efc_domain_attach_internal(domain, sport->fc_id);
> +			break;
> +		}

Locking would help ...

> +		{
> +			struct efc_node *node;
> +
> +			/* alloc fabric node, send FLOGI */
> +			node = efc_node_find(sport, FC_FID_FLOGI);
> +			if (node) {
> +				efc_log_err(efc,
> +					    "Fabric Controller node already exists\n");
> +				break;
> +			}
> +			node = efc_node_alloc(sport, FC_FID_FLOGI,
> +					      false, false);
> +			if (!node) {
> +				efc_log_err(efc,
> +					    "Error: efc_node_alloc() failed\n");
> +			} else {
> +				efc_node_transition(node,
> +						    __efc_fabric_init, NULL);
> +			}
> +			/* Accept frames */
> +			domain->req_accept_frames = true;
> +		}
> +		/* sm: / start fabric logins */
> +		efc_sm_transition(ctx, __efc_domain_allocated, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ALLOC_FAIL:
> +		efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;",
> +			    efc_sm_event_name(evt));
> +		efc_log_err(efc, "shutting down domain\n");
> +		domain->req_domain_free = true;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		break;
> +
> +	case EFC_EVT_DOMAIN_LOST:
> +		efc_log_debug(efc,
> +			      "%s received while waiting for hw_domain_alloc()\n",
> +			efc_sm_event_name(evt));
> +		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
> +		break;
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for the domain attach request */
> +void *
> +__efc_domain_allocated(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	int rc = 0;
> +
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_REQ_ATTACH: {
> +		u32 fc_id;
> +
> +		efc_assert(arg, NULL);
> +
> +		fc_id = *((u32 *)arg);
> +		efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n",
> +			      fc_id);
> +		/* Update sport lookup */
> +		efc_spv_set(domain->lookup, fc_id, domain->sport);
> +
> +		/* Update display name for the sport */
> +		efc_node_fcid_display(fc_id, domain->sport->display_name,
> +				      sizeof(domain->sport->display_name));
> +
> +		/* Issue domain attach call */
> +		rc = efc->tt.hw_domain_attach(efc, domain, fc_id);
> +		if (rc) {
> +			efc_log_err(efc, "efc_hw_domain_attach failed: %d\n",
> +				    rc);
> +			return NULL;
> +		}
> +		/* sm: / domain_attach */
> +		efc_sm_transition(ctx, __efc_domain_wait_attach, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		efc_log_err(efc, "%s: evt: %d should not happen\n",
> +			    __func__, evt);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_LOST: {
> +		int rc;
> +
> +		efc_log_debug(efc,
> +			      "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n",
> +			efc_sm_event_name(evt));
> +		if (!list_empty(&domain->sport_list)) {
> +			/*
> +			 * if there are sports, transition to
> +			 * wait state and send shutdown to each
> +			 * sport
> +			 */
> +			struct efc_sli_port	*sport = NULL;
> +			struct efc_sli_port	*sport_next = NULL;
> +
> +			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
> +					  NULL);
> +			list_for_each_entry_safe(sport, sport_next,
> +						 &domain->sport_list,
> +						 list_entry) {
> +				efc_sm_post_event(&sport->sm,
> +						  EFC_EVT_SHUTDOWN, NULL);
> +			}
> +		} else {
> +			/* no sports exist, free domain */
> +			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
> +					  NULL);
> +			rc = efc->tt.hw_domain_free(efc, domain);
> +			if (rc) {
> +				efc_log_err(efc,
> +					    "hw_domain_free failed: %d\n", rc);
> +			}
> +		}
> +
> +		break;
> +	}
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for the HW domain attach to complete */
> +void *
> +__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		struct efc_node *node = NULL;
> +		struct efc_node *next_node = NULL;
> +		struct efc_sli_port *sport;
> +		struct efc_sli_port *next_sport;
> +
> +		/*
> +		 * Set domain notify pending state to avoid
> +		 * duplicate domain event post
> +		 */
> +		domain->domain_notify_pend = true;
> +
> +		/* Mark as attached */
> +		domain->attached = true;
> +
> +		/* Register with SCSI API */
> +		efc->tt.new_domain(efc, domain);
> +
> +		/* Transition to ready */
> +		/* sm: / forward event to all sports and nodes */
> +		efc_sm_transition(ctx, __efc_domain_ready, NULL);
> +
> +		/* We have an FCFI, so we can accept frames */
> +		domain->req_accept_frames = true;
> +
> +		/*
> +		 * Notify all nodes that the domain attach request
> +		 * has completed
> +		 * Note: sport will have already received notification
> +		 * of sport attached as a result of the HW's port attach.
> +		 */
> +		list_for_each_entry_safe(sport, next_sport,
> +					 &domain->sport_list, list_entry) {
> +			list_for_each_entry_safe(node, next_node,
> +						 &sport->node_list,
> +						 list_entry) {
> +				efc_node_post_event(node,
> +						    EFC_EVT_DOMAIN_ATTACH_OK,
> +						    NULL);
> +			}
> +		}
> +		domain->domain_notify_pend = false;
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ATTACH_FAIL:
> +		efc_log_debug(efc,
> +			      "%s received while waiting for hw attach\n",
> +			      efc_sm_event_name(evt));
> +		break;
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		efc_log_err(efc, "%s: evt: %d should not happen\n",
> +			    __func__, evt);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_LOST:
> +		/*
> +		 * Domain lost while waiting for an attach to complete,
> +		 * go to a state that waits for  the domain attach to
> +		 * complete, then handle domain lost
> +		 */
> +		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_REQ_ATTACH:
> +		/*
> +		 * In P2P we can get an attach request from
> +		 * the other FLOGI path, so drop this one
> +		 */
> +		break;
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Ready state */
> +void *
> +__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		/* start any pending vports */
> +		if (efc_vport_start(domain)) {
> +			efc_log_debug(domain->efc,
> +				      "efc_vport_start didn't start vports\n");
> +		}
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_LOST: {
> +		int rc;
> +
> +		if (!list_empty(&domain->sport_list)) {
> +			/*
> +			 * if there are sports, transition to wait state
> +			 * and send shutdown to each sport
> +			 */
> +			struct efc_sli_port	*sport = NULL;
> +			struct efc_sli_port	*sport_next = NULL;
> +
> +			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
> +					  NULL);
> +			list_for_each_entry_safe(sport, sport_next,
> +						 &domain->sport_list,
> +						 list_entry) {
> +				efc_sm_post_event(&sport->sm,
> +						  EFC_EVT_SHUTDOWN, NULL);
> +			}
> +		} else {
> +			/* no sports exist, free domain */
> +			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
> +					  NULL);
> +			rc = efc->tt.hw_domain_free(efc, domain);
> +			if (rc) {
> +				efc_log_err(efc,
> +					    "hw_domain_free failed: %d\n", rc);
> +			}
> +		}
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		efc_log_err(efc, "%s: evt: %d should not happen\n",
> +			    __func__, evt);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_REQ_ATTACH: {
> +		/* can happen during p2p */
> +		u32 fc_id;
> +
> +		fc_id = *((u32 *)arg);
> +
> +		/* Assume that the domain is attached */
> +		efc_assert(domain->attached, NULL);
> +
> +		/*
> +		 * Verify that the requested FC_ID
> +		 * is the same as the one we're working with
> +		 */
> +		efc_assert(domain->sport->fc_id == fc_id, NULL);
> +		break;
> +	}
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for nodes to free prior to the domain shutdown */
> +void *
> +__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +			      void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_ALL_CHILD_NODES_FREE: {
> +		int rc;
> +
> +		/* sm: / efc_hw_domain_free */
> +		efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL);
> +
> +		/* Request efc_hw_domain_free and wait for completion */
> +		rc = efc->tt.hw_domain_free(efc, domain);
> +		if (rc) {
> +			efc_log_err(efc, "efc_hw_domain_free() failed: %d\n",
> +				    rc);
> +		}
> +		break;
> +	}
> +	default:
> +		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> + /* Domain state machine: Complete the domain shutdown */
> +void *
> +__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_FREE_OK: {
> +		efc->tt.del_domain(efc, domain);
> +
> +		/* sm: / domain_free */
> +		if (domain->domain_found_pending) {
> +			/*
> +			 * save fcf_wwn and drec from this domain,
> +			 * free current domain and allocate
> +			 * a new one with the same fcf_wwn
> +			 * could use a SLI-4 "re-register VPI"
> +			 * operation here?
> +			 */
> +			u64 fcf_wwn = domain->fcf_wwn;
> +			struct efc_domain_record drec = domain->pending_drec;
> +
> +			efc_log_debug(efc, "Reallocating domain\n");
> +			domain->req_domain_free = true;
> +			domain = efc_domain_alloc(efc, fcf_wwn);
> +
> +			if (!domain) {
> +				efc_log_err(efc,
> +					    "efc_domain_alloc() failed\n");
> +				return NULL;
> +			}
> +			/*
> +			 * got a new domain; at this point,
> +			 * there are at least two domains
> +			 * once the req_domain_free flag is processed,
> +			 * the associated domain will be removed.
> +			 */
> +			efc_sm_transition(&domain->drvsm, __efc_domain_init,
> +					  NULL);
> +			efc_sm_post_event(&domain->drvsm,
> +					  EFC_EVT_DOMAIN_FOUND, &drec);
> +		} else {
> +			domain->req_domain_free = true;
> +		}
> +		break;
> +	}
> +
> +	default:
> +		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/*
> + * Domain state machine: Wait for the domain alloc/attach completion
> + * after receiving a domain lost.
> + */
> +void *
> +__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ALLOC_OK:
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		int rc;
> +
> +		if (!list_empty(&domain->sport_list)) {
> +			/*
> +			 * if there are sports, transition to
> +			 * wait state and send shutdown to each sport
> +			 */
> +			struct efc_sli_port	*sport = NULL;
> +			struct efc_sli_port	*sport_next = NULL;
> +
> +			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
> +					  NULL);
> +			list_for_each_entry_safe(sport, sport_next,
> +						 &domain->sport_list,
> +						 list_entry) {
> +				efc_sm_post_event(&sport->sm,
> +						  EFC_EVT_SHUTDOWN, NULL);
> +			}
> +		} else {
> +			/* no sports exist, free domain */
> +			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
> +					  NULL);
> +			rc = efc->tt.hw_domain_free(efc, domain);
> +			if (rc) {
> +				efc_log_err(efc,
> +					    "efc_hw_domain_free() failed: %d\n",
> +									rc);
> +			}
> +		}
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_ALLOC_FAIL:
> +	case EFC_EVT_DOMAIN_ATTACH_FAIL:
> +		efc_log_err(efc, "[domain] %-20s: failed\n",
> +			    efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void
> +__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id)
> +{
> +	memcpy(domain->dma.virt,
> +	       ((uint8_t *)domain->flogi_service_params) + 4,
> +		   sizeof(struct fc_els_flogi) - 4);
> +	(void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH,
> +				 &s_id);
> +}
> +
> +void
> +efc_domain_attach(struct efc_domain *domain, u32 s_id)
> +{
> +	__efc_domain_attach_internal(domain, s_id);
> +}
> +
> +int
> +efc_domain_post_event(struct efc_domain *domain,
> +		      enum efc_sm_event event, void *arg)
> +{
> +	int rc;
> +	bool accept_frames;
> +	bool req_domain_free;
> +	struct efc *efc = domain->efc;
> +
> +	rc = efc_sm_post_event(&domain->drvsm, event, arg);
> +
> +	req_domain_free = domain->req_domain_free;
> +	domain->req_domain_free = false;
> +
> +	accept_frames = domain->req_accept_frames;
> +	domain->req_accept_frames = false;
> +
> +	if (accept_frames)
> +		efc->tt.domain_accept_frames(efc, domain);
> +
> +	if (req_domain_free)
> +		efc_domain_free(domain);
> +
> +	return rc;
> +}
> +
> +/* Dispatch unsolicited FC frame */
> +int
> +efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
> +{
> +	struct efc_domain *domain = (struct efc_domain *)arg;
> +	struct efc *efc = domain->efc;
> +	struct fc_frame_header *hdr;
> +	u32 s_id;
> +	u32 d_id;
> +	struct efc_node *node = NULL;
> +	struct efc_sli_port *sport = NULL;
> +	unsigned long flags = 0;
> +
> +	if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) {
> +		efc_log_err(efc, "Sequence header or payload is null\n");
> +		return -1;
> +	}
> +
> +	hdr = seq->header->dma.virt;
> +
> +	/* extract the s_id and d_id */
> +	s_id = ntoh24(hdr->fh_s_id);
> +	d_id = ntoh24(hdr->fh_d_id);
> +
> +	sport = domain->sport;
> +	if (!sport) {
> +		efc_log_err(efc,
> +			    "Drop frame, sport for FC ID 0x%06x is NULL", d_id);
> +		return -1;
> +	}
> +
> +	if (sport->fc_id != d_id) {
> +		/* Not a physical port IO lookup sport associated with the
> +		 * npiv port
> +		 */
> +		/* Look up without lock */
> +		sport = efc_sport_find(domain, d_id);
> +		if (!sport) {
> +			if (hdr->fh_type == FC_TYPE_FCP) {
> +				/* Drop frame */
> +				efc_log_warn(efc,
> +					     "unsolicited FCP frame with invalid d_id x%x\n",
> +					d_id);
> +				return -1;
> +			}
> +				/* p2p will use this case */
> +				sport = domain->sport;
> +		}
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	/* Lookup the node given the remote s_id */
> +	node = efc_node_find(sport, s_id);
> +
> +	/* If not found, then create a new node */
> +	if (!node) {
> +		/* If this is solicited data or control based on R_CTL and
> +		 * there is no node context,
> +		 * then we can drop the frame
> +		 */
> +		if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) ||
> +			(hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) {
> +			efc_log_debug(efc,
> +				      "solicited data/ctrl frame without node,drop\n");
> +			spin_unlock_irqrestore(&efc->lock, flags);
> +			return -1;
> +		}
> +
> +		node = efc_node_alloc(sport, s_id, false, false);
> +		if (!node) {
> +			efc_log_err(efc, "efc_node_alloc() failed\n");
> +			spin_unlock_irqrestore(&efc->lock, flags);
> +			return -1;
> +		}
> +		/* don't send PLOGI on efc_d_init entry */
> +		efc_node_init_device(node, false);
> +	}
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	if (node->hold_frames || !list_empty(&node->pend_frames)) {
> +
> +		/* add frame to node's pending list */
> +		spin_lock_irqsave(&node->pend_frames_lock, flags);
> +			INIT_LIST_HEAD(&seq->list_entry);
> +			list_add_tail(&seq->list_entry, &node->pend_frames);
> +		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +		return 0;
> +	}
> +
> +	/* now dispatch frame to the node frame handler */
> +	return efc_node_dispatch_frame(node, seq);
> +}
> +
> +int
> +efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
> +{
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +	u32 port_id;
> +	struct efc_node *node = (struct efc_node *)arg;
> +	int rc = -1;
> +	int sit_set = 0;
> +
> +	struct efc *efc = node->efc;
> +
> +	port_id = ntoh24(hdr->fh_s_id);
> +	efc_assert(port_id == node->rnode.fc_id, -1);
> +
> +	if (!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) {
> +		node_printf(node,
> +			    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
> +		    cpu_to_be32(((u32 *)hdr)[0]),
> +		    cpu_to_be32(((u32 *)hdr)[1]),
> +		    cpu_to_be32(((u32 *)hdr)[2]),
> +		    cpu_to_be32(((u32 *)hdr)[3]),
> +		    cpu_to_be32(((u32 *)hdr)[4]),
> +		    cpu_to_be32(((u32 *)hdr)[5]));
> +		return rc;
> +	}
> +
> +	/*if SIT is set */
> +	if (ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)
> +		sit_set = 1;
> +
> +	switch (hdr->fh_r_ctl) {
> +	case FC_RCTL_ELS_REQ:
> +	case FC_RCTL_ELS_REP:
> +		if (sit_set)
> +			rc = efc_node_recv_els_frame(node, seq);
> +
> +		//failure status to release the seq
> +		if (!rc)
> +			rc = 2;
> +		break;
> +
> +	case FC_RCTL_BA_ABTS:
> +	case FC_RCTL_BA_ACC:
> +	case FC_RCTL_BA_RJT:
> +	case FC_RCTL_BA_NOP:
> +		if (sit_set)
> +			rc = efc->tt.recv_abts_frame(efc, node, seq);
> +		else
> +			rc = efc_node_recv_bls_no_sit(node, seq);
> +		break;
> +
> +	case FC_RCTL_DD_UNSOL_CMD:
> +	case FC_RCTL_DD_UNSOL_CTL:
> +		switch (hdr->fh_type) {
> +		case FC_TYPE_FCP:
> +			if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) {
> +				if (!node->fcp_enabled) {
> +					rc = efc_node_recv_fcp_cmd(node, seq);
> +					break;
> +				}
> +
> +				if (sit_set) {
> +					rc = efc->tt.dispatch_fcp_cmd(node,
> +									seq);
> +				} else {
> +					node_printf(node,
> +					   "Unsol cmd received with no SIT\n");
> +				}
> +			} else if ((hdr->fh_r_ctl & 0xf) ==
> +							FC_RCTL_DD_SOL_DATA) {
> +				node_printf(node,
> +				    "solicited data received.Dropping IO\n");
> +			}
> +			break;
> +		case FC_TYPE_CT:
> +			if (sit_set)
> +				rc = efc_node_recv_ct_frame(node, seq);
> +			break;
> +		default:
> +			break;
> +		}
> +		break;
> +	default:
> +		efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl);
> +	}
> +
> +	return rc;
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h
> new file mode 100644
> index 000000000000..d318dda5935c
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_domain.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Declare driver's domain handler exported interface
> + */
> +
> +#ifndef __EFCT_DOMAIN_H__
> +#define __EFCT_DOMAIN_H__
> +
> +extern struct efc_domain *
> +efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn);
> +extern void
> +efc_domain_free(struct efc_domain *domain);
> +
> +extern void *
> +__efc_domain_init(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_allocated(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_ready(struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +
> +extern void
> +efc_domain_attach(struct efc_domain *domain, u32 s_id);
> +extern int
> +efc_domain_post_event(struct efc_domain *domain,
> +		      enum efc_sm_event event, void *arg);
> +extern void
> +__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id);
> +
> +#endif /* __EFCT_DOMAIN_H__ */
> 
What makes me slightly nervous is the complete lack of locking in the
domain and sport structures. I could imagine that they are mostly
single-threaded, but I still can't see why they wouldn't race with, e.g.,
sport lookups during interrupt handling.
Can you explain the logic behind it?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 11/32] elx: libefc: SLI and FC PORT state machine interfaces
  2019-12-20 22:37 ` [PATCH v2 11/32] elx: libefc: SLI and FC PORT " James Smart
@ 2020-01-09  7:34   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  7:34 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - SLI and FC port (aka n_port_id) registration, allocation and
>   deallocation.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_sport.c | 843 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_sport.h |  52 +++
>  2 files changed, 895 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_sport.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_sport.c b/drivers/scsi/elx/libefc/efc_sport.c
> new file mode 100644
> index 000000000000..11f3ba73ec6e
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sport.c
> @@ -0,0 +1,843 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Details SLI port (sport) functions.
> + */
> +
> +#include "efc.h"
> +
> +/* HW sport callback events from the user driver */
> +int
> +efc_lport_cb(void *arg, int event, void *data)
> +{
> +	struct efc *efc = arg;
> +	struct efc_sli_port *sport = data;
> +
> +	switch (event) {
> +	case EFC_HW_PORT_ALLOC_OK:
> +		efc_log_debug(efc, "EFC_HW_PORT_ALLOC_OK\n");
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ALLOC_OK, NULL);
> +		break;
> +	case EFC_HW_PORT_ALLOC_FAIL:
> +		efc_log_debug(efc, "EFC_HW_PORT_ALLOC_FAIL\n");
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ALLOC_FAIL, NULL);
> +		break;
> +	case EFC_HW_PORT_ATTACH_OK:
> +		efc_log_debug(efc, "EFC_HW_PORT_ATTACH_OK\n");
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_ATTACH_OK, NULL);
> +		break;
> +	case EFC_HW_PORT_ATTACH_FAIL:
> +		efc_log_debug(efc, "EFC_HW_PORT_ATTACH_FAIL\n");
> +		efc_sm_post_event(&sport->sm,
> +				  EFC_EVT_SPORT_ATTACH_FAIL, NULL);
> +		break;
> +	case EFC_HW_PORT_FREE_OK:
> +		efc_log_debug(efc, "EFC_HW_PORT_FREE_OK\n");
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_FREE_OK, NULL);
> +		break;
> +	case EFC_HW_PORT_FREE_FAIL:
> +		efc_log_debug(efc, "EFC_HW_PORT_FREE_FAIL\n");
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SPORT_FREE_FAIL, NULL);
> +		break;
> +	default:
> +		efc_log_test(efc, "unknown event %#x\n", event);
> +	}
> +
> +	return 0;
> +}
> +

Same here; please use the name mapping function and collapse the cases.
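
As with efc_domain_cb(), a hedged sketch (efc_lport_hw_to_sm() and the
sm_event local are hypothetical, not part of the patch):

	case EFC_HW_PORT_ALLOC_OK:
	case EFC_HW_PORT_ALLOC_FAIL:
	case EFC_HW_PORT_ATTACH_OK:
	case EFC_HW_PORT_ATTACH_FAIL:
	case EFC_HW_PORT_FREE_OK:
	case EFC_HW_PORT_FREE_FAIL:
		sm_event = efc_lport_hw_to_sm(event);
		efc_log_debug(efc, "%s\n", efc_sm_event_name(sm_event));
		efc_sm_post_event(&sport->sm, sm_event, NULL);
		break;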

> +struct efc_sli_port *
> +efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		u32 fc_id, bool enable_ini, bool enable_tgt)
> +{
> +	struct efc_sli_port *sport;
> +
> +	if (domain->efc->enable_ini)
> +		enable_ini = 0;
> +
> +	/* Return a failure if this sport has already been allocated */
> +	if (wwpn != 0) {
> +		sport = efc_sport_find_wwn(domain, wwnn, wwpn);
> +		if (sport) {
> +			efc_log_err(domain->efc,
> +				    "Failed: SPORT %016llX %016llX already allocated\n",
> +				    wwnn, wwpn);
> +			return NULL;
> +		}
> +	}
> +
> +	sport = kzalloc(sizeof(*sport), GFP_ATOMIC);
> +	if (!sport)
> +		return sport;
> +
> +	sport->efc = domain->efc;
> +	snprintf(sport->display_name, sizeof(sport->display_name), "------");
> +	sport->domain = domain;
> +	sport->lookup = efc_spv_new(domain->efc);
> +	sport->instance_index = domain->sport_instance_count++;
> +	INIT_LIST_HEAD(&sport->node_list);
> +	sport->sm.app = sport;
> +	sport->enable_ini = enable_ini;
> +	sport->enable_tgt = enable_tgt;
> +	sport->enable_rscn = (sport->enable_ini ||
> +			(sport->enable_tgt && enable_target_rscn(sport->efc)));
> +
> +	/* Copy service parameters from domain */
> +	memcpy(sport->service_params, domain->service_params,
> +		sizeof(struct fc_els_flogi));
> +
> +	/* Update requested fc_id */
> +	sport->fc_id = fc_id;
> +
> +	/* Update the sport's service parameters for the new wwn's */
> +	sport->wwpn = wwpn;
> +	sport->wwnn = wwnn;
> +	snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
> +
> +	/*
> +	 * if this is the "first" sport of the domain,
> +	 * then make it the "phys" sport
> +	 */
> +	if (list_empty(&domain->sport_list))
> +		domain->sport = sport;
> +
> +	INIT_LIST_HEAD(&sport->list_entry);
> +	list_add_tail(&sport->list_entry, &domain->sport_list);
> +
> +	efc_log_debug(domain->efc, "[%s] allocate sport\n",
> +		      sport->display_name);
> +
> +	return sport;
> +}

And this is what I meant by missing locking: if this function is
called concurrently with the same wwnn you might end up with two
identical entries in the sport list.
At the very least explain why this is safe; but still I would prefer
locking here.
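
A minimal sketch of what I mean, assuming efc->lock (the spinlock the frame
dispatch path already takes) is the intended protection for the sport list;
substitute whatever lock is actually meant to guard it:

	unsigned long flags;

	spin_lock_irqsave(&domain->efc->lock, flags);
	if (wwpn != 0 && efc_sport_find_wwn(domain, wwnn, wwpn)) {
		spin_unlock_irqrestore(&domain->efc->lock, flags);
		efc_log_err(domain->efc,
			    "SPORT %016llX %016llX already allocated\n",
			    wwnn, wwpn);
		return NULL;
	}
	/* ... allocate and initialize the sport as before ... */
	list_add_tail(&sport->list_entry, &domain->sport_list);
	spin_unlock_irqrestore(&domain->efc->lock, flags);

so the duplicate check and the list insertion can't interleave.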

> +
> +void
> +efc_sport_free(struct efc_sli_port *sport)
> +{
> +	struct efc_domain *domain;
> +
> +	if (!sport)
> +		return;
> +
> +	domain = sport->domain;
> +	efc_log_debug(domain->efc, "[%s] free sport\n", sport->display_name);
> +	list_del(&sport->list_entry);
> +	/*
> +	 * if this is the physical sport,
> +	 * then clear it out of the domain
> +	 */
> +	if (sport == domain->sport)
> +		domain->sport = NULL;
> +
> +	efc_spv_del(sport->lookup);
> +	sport->lookup = NULL;
> +
> +	efc_spv_set(domain->lookup, sport->fc_id, NULL);
> +
> +	if (list_empty(&domain->sport_list))
> +		efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE,
> +				      NULL);
> +
> +	kfree(sport);
> +}
> +
> +void
> +efc_sport_force_free(struct efc_sli_port *sport)
> +{
> +	struct efc_node *node;
> +	struct efc_node *next;
> +
> +	/* shutdown sm processing */
> +	efc_sm_disable(&sport->sm);
> +
> +	list_for_each_entry_safe(node, next, &sport->node_list, list_entry) {
> +		efc_node_force_free(node);
> +	}
> +
> +	efc_sport_free(sport);
> +}
> +
> +/* Find a SLI port object, given an FC_ID */
> +struct efc_sli_port *
> +efc_sport_find(struct efc_domain *domain, u32 d_id)
> +{
> +	struct efc_sli_port *sport;
> +
> +	if (!domain->lookup) {
> +		efc_log_test(domain->efc,
> +			     "assertion failed: domain->lookup is not valid\n");
> +		return NULL;
> +	}
> +
> +	sport = efc_spv_get(domain->lookup, d_id);
> +	return sport;
> +}
> +
> +/* Find a SLI port, given the WWNN and WWPN */
> +struct efc_sli_port *
> +efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
> +{
> +	struct efc_sli_port *sport = NULL;
> +
> +	list_for_each_entry(sport, &domain->sport_list, list_entry) {
> +		if (sport->wwnn == wwnn && sport->wwpn == wwpn)
> +			return sport;
> +	}
> +	return NULL;
> +}
> +
> +/* External call to request an attach for a sport, given an FC_ID */
> +int
> +efc_sport_attach(struct efc_sli_port *sport, u32 fc_id)
> +{
> +	int rc;
> +	struct efc_node *node;
> +	struct efc *efc = sport->efc;
> +
> +	/* Set our lookup */
> +	efc_spv_set(sport->domain->lookup, fc_id, sport);
> +
> +	/* Update our display_name */
> +	efc_node_fcid_display(fc_id, sport->display_name,
> +			      sizeof(sport->display_name));
> +
> +	list_for_each_entry(node, &sport->node_list, list_entry) {
> +		efc_node_update_display_name(node);
> +	}
> +
> +	efc_log_debug(sport->efc, "[%s] attach sport: fc_id x%06x\n",
> +		      sport->display_name, fc_id);
> +
> +	rc = efc->tt.hw_port_attach(efc, sport, fc_id);
> +	if (rc != EFC_HW_RTN_SUCCESS) {
> +		efc_log_err(sport->efc,
> +			    "efc_hw_port_attach failed: %d\n", rc);
> +		return -1;
> +	}
> +	return 0;
> +}
> +
> +static void
> +efc_sport_shutdown(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_node *node;
> +	struct efc_node *node_next;
> +
> +	list_for_each_entry_safe(node, node_next,
> +				 &sport->node_list, list_entry) {
> +		if (node->rnode.fc_id != FC_FID_FLOGI ||
> +		    !sport->is_vport) {
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +			continue;
> +		}
> +
> +		/*
> +		 * If this is a vport, logout of the fabric
> +		 * controller so that it deletes the vport
> +		 * on the switch.
> +		 */
> +		/* if link is down, don't send logo */
> +		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		} else {
> +			efc_log_debug(efc,
> +				      "[%s] sport shutdown vport, sending logo to node\n",
> +				      node->display_name);
> +
> +			if (efc->tt.els_send(efc, node, ELS_LOGO,
> +					     EFC_FC_FLOGI_TIMEOUT_SEC,
> +					EFC_FC_ELS_DEFAULT_RETRIES)) {
> +				/* sent LOGO, wait for response */
> +				efc_node_transition(node,
> +						    __efc_d_wait_logo_rsp,
> +						     NULL);
> +				continue;
> +			}
> +
> +			/*
> +			 * failed to send LOGO,
> +			 * go ahead and cleanup node anyways
> +			 */
> +			node_printf(node, "Failed to send LOGO\n");
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +		}
> +	}
> +}
> +
> +/* Clear the sport reference in the vport specification */
> +static void
> +efc_vport_link_down(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_vport_spec *vport;
> +
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if (vport->sport == sport) {
> +			vport->sport = NULL;
> +			break;
> +		}
> +	}
> +}
> +

Similar here: Why is there no locking?
Or RCU lists?
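
If the vport list is read often (link events, lookups) and modified rarely,
RCU would fit; a rough sketch of the read side only, assuming the writers
were converted to list_add_tail_rcu()/list_del_rcu() (which this series does
not do today):

	rcu_read_lock();
	list_for_each_entry_rcu(vport, &efc->vport_list, list_entry) {
		if (vport->sport == sport) {
			vport->sport = NULL;
			break;
		}
	}
	rcu_read_unlock();

The store to vport->sport would of course still need its own synchronization
against concurrent writers.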

> +static void *
> +__efc_sport_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = sport->efc;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		break;
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +			efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN: {
> +		int node_list_empty;
> +
> +		/* Flag this sport as shutting down */
> +		sport->shutting_down = true;
> +
> +		if (sport->is_vport)
> +			efc_vport_link_down(sport);
> +
> +		node_list_empty = list_empty(&sport->node_list);
> +
> +		if (node_list_empty) {
> +			/* sm: node list is empty / efc_hw_port_free */
> +			/*
> +			 * Remove the sport from the domain's
> +			 * sparse vector lookup table
> +			 */
> +			efc_spv_set(domain->lookup, sport->fc_id, NULL);
> +			efc_sm_transition(ctx, __efc_sport_wait_port_free,
> +					  NULL);
> +			if (efc->tt.hw_port_free(efc, sport)) {
> +				efc_log_test(sport->efc,
> +					     "efc_hw_port_free failed\n");
> +				/* Not much we can do, free the sport anyways */
> +				efc_sport_free(sport);
> +			}
> +		} else {
> +			/* sm: node list is not empty / shutdown nodes */
> +			efc_sm_transition(ctx,
> +					  __efc_sport_wait_shutdown, NULL);
> +			efc_sport_shutdown(sport);
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_test(sport->efc, "[%s] %-20s %-20s not handled\n",
> +			     sport->display_name, funcname,
> +			     efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Physical sport allocated */
> +void *
> +__efc_sport_allocated(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	/* the physical sport is attached */
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		efc_assert(sport == domain->sport, NULL);
> +		efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +		/* ignore */
> +		break;
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Handle initial virtual port events */
> +void *
> +__efc_sport_vport_init(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		__be64 be_wwpn = cpu_to_be64(sport->wwpn);
> +
> +		if (sport->wwpn == 0)
> +			efc_log_debug(efc, "vport: letting f/w select WWN\n");
> +
> +		if (sport->fc_id != U32_MAX) {
> +			efc_log_debug(efc, "vport: hard coding port id: %x\n",
> +				      sport->fc_id);
> +		}
> +
> +		efc_sm_transition(ctx, __efc_sport_vport_wait_alloc, NULL);
> +		/* If wwpn is zero, then we'll let the f/w select the WWPN */
> +		if (efc->tt.hw_port_alloc(efc, sport, sport->domain,
> +					  sport->wwpn == 0 ? NULL :
> +					  (uint8_t *)&be_wwpn)) {
> +			efc_log_err(efc, "Can't allocate port\n");
> +			break;
> +		}
> +
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * SLI port state machine:
> + * Wait for the HW SLI port allocation to complete
> + */
> +void *
> +__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ALLOC_OK: {
> +		struct fc_els_flogi *sp;
> +		struct efc_node *fabric;
> +
> +		sp = (struct fc_els_flogi *)sport->service_params;
> +		/*
> +		 * If we let f/w assign wwn's,
> +		 * then update sport wwn's with those returned by hw
> +		 */
> +		if (sport->wwnn == 0) {
> +			sport->wwnn = be64_to_cpu(sport->sli_wwnn);
> +			sport->wwpn = be64_to_cpu(sport->sli_wwpn);
> +			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
> +				 "%016llX", sport->wwpn);
> +		}
> +
> +		/* Update the sport's service parameters */
> +		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
> +		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
> +
> +		/*
> +		 * if sport->fc_id is uninitialized,
> +		 * then request that the fabric node use FDISC
> +		 * to find an fc_id.
> +		 * Otherwise we're restoring vports, or we're in
> +		 * fabric emulation mode, so attach the fc_id
> +		 */
> +		if (sport->fc_id == U32_MAX) {
> +			fabric = efc_node_alloc(sport, FC_FID_FLOGI, false,
> +						false);
> +			if (!fabric) {
> +				efc_log_err(efc, "efc_node_alloc() failed\n");
> +				return NULL;
> +			}
> +			efc_node_transition(fabric, __efc_vport_fabric_init,
> +					    NULL);
> +		} else {
> +			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
> +				 "%016llX", sport->wwpn);
> +			efc_sport_attach(sport, sport->fc_id);
> +		}
> +		efc_sm_transition(ctx, __efc_sport_vport_allocated, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * SLI port state machine: virtual sport allocated.
> + *
> + * This state is entered after the sport is allocated;
> + * it then waits for a fabric node
> + * FDISC to complete, which requests a sport attach.
> + * The sport attach complete is handled in this state.
> + */
> +
> +void *
> +__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ATTACH_OK: {
> +		struct efc_node *node;
> +
> +		/* Find our fabric node, and forward this event */
> +		node = efc_node_find(sport, FC_FID_FLOGI);
> +		if (!node) {
> +			efc_log_test(efc, "can't find node %06x\n",
> +				     FC_FID_FLOGI);
> +			break;
> +		}
> +		/* sm: / forward sport attach to fabric node */
> +		efc_node_post_event(node, evt, NULL);
> +		efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +static void
> +efc_vport_update_spec(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_vport_spec *vport;
> +
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if (vport->sport == sport) {
> +			vport->wwnn = sport->wwnn;
> +			vport->wwpn = sport->wwpn;
> +			vport->tgt_data = sport->tgt_data;
> +			vport->ini_data = sport->ini_data;
> +			break;
> +		}
> +	}
> +}
> +
> +/* State entered after the sport attach has completed */
> +void *
> +__efc_sport_attached(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		struct efc_node *node;
> +
> +		efc_log_debug(efc,
> +			      "[%s] SPORT attached WWPN %016llX WWNN %016llX\n",
> +			      sport->display_name,
> +			      sport->wwpn, sport->wwnn);
> +
> +		list_for_each_entry(node, &sport->node_list, list_entry) {
> +			efc_node_update_display_name(node);
> +		}
> +
> +		sport->tgt_id = sport->fc_id;
> +
> +		efc->tt.new_sport(efc, sport);
> +
> +		/*
> +		 * Update the vport (if its not the physical sport)
> +		 * parameters
> +		 */
> +		if (sport->is_vport)
> +			efc_vport_update_spec(sport);
> +		break;
> +	}
> +
> +	case EFC_EVT_EXIT:
> +		efc_log_debug(efc,
> +			      "[%s] SPORT deattached WWPN %016llX WWNN %016llX\n",
> +			      sport->display_name,
> +			      sport->wwpn, sport->wwnn);
> +
> +		efc->tt.del_sport(efc, sport);
> +		break;
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +
> +/* SLI port state machine: Wait for the node shutdowns to complete */
> +void *
> +__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +	case EFC_EVT_SPORT_ALLOC_FAIL:
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +	case EFC_EVT_SPORT_ATTACH_FAIL:
> +		/* ignore these events - just wait for the all free event */
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE: {
> +		/*
> +		 * Remove the sport from the domain's
> +		 * sparse vector lookup table
> +		 */
> +		efc_spv_set(domain->lookup, sport->fc_id, NULL);
> +		efc_sm_transition(ctx, __efc_sport_wait_port_free, NULL);
> +		if (efc->tt.hw_port_free(efc, sport)) {
> +			efc_log_err(sport->efc, "efc_hw_port_free failed\n");
> +			/* Not much we can do, free the sport anyways */
> +			efc_sport_free(sport);
> +		}
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Wait for the HW's port free to complete */
> +void *
> +__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		/* Ignore as we are waiting for the free CB */
> +		break;
> +	case EFC_EVT_SPORT_FREE_OK: {
> +		/* All done, free myself */
> +		/* sm: / efc_sport_free */
> +		efc_sport_free(sport);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* Use the vport specification to find the associated vports and start them */
> +int
> +efc_vport_start(struct efc_domain *domain)
> +{
> +	struct efc *efc = domain->efc;
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	struct efc_sli_port *sport;
> +	int rc = 0;
> +	u8 found = false;
> +
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		if (!vport->sport) {
> +			found = true;
> +			break;
> +		}
> +	}
> +
> +	if (found && vport) {
> +		sport = efc_sport_alloc(domain, vport->wwpn,
> +					vport->wwnn, vport->fc_id,
> +					vport->enable_ini,
> +					vport->enable_tgt);
> +		vport->sport = sport;
> +		if (!sport) {
> +			rc = -1;
> +		} else {
> +			sport->is_vport = true;
> +			sport->tgt_data = vport->tgt_data;
> +			sport->ini_data = vport->ini_data;
> +
> +			efc_sm_transition(&sport->sm, __efc_sport_vport_init,
> +					  NULL);
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +/* Allocate a new virtual SLI port */
> +int
> +efc_sport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
> +		    void *ini_data, bool restore_vport)
> +{
> +	struct efc_sli_port *sport;
> +
> +	if (ini && domain->efc->enable_ini == 0) {
> +		efc_log_test(domain->efc,
> +			     "driver initiator functionality not enabled\n");
> +		return -1;
> +	}
> +
> +	if (tgt && domain->efc->enable_tgt == 0) {
> +		efc_log_test(domain->efc,
> +			     "driver target functionality not enabled\n");
> +		return -1;
> +	}
> +
> +	/*
> +	 * Create a vport spec if we need to recreate
> +	 * this vport after a link up event
> +	 */
> +	if (restore_vport) {
> +		if (efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id,
> +					  ini, tgt, tgt_data, ini_data)) {
> +			efc_log_test(domain->efc,
> +				     "failed to create vport object entry\n");
> +			return -1;
> +		}
> +		return efc_vport_start(domain);
> +	}
> +
> +	/* Allocate a sport */
> +	sport = efc_sport_alloc(domain, wwpn, wwnn, fc_id, ini, tgt);
> +
> +	if (!sport)
> +		return -1;
> +
> +	sport->is_vport = true;
> +	sport->tgt_data = tgt_data;
> +	sport->ini_data = ini_data;
> +

Isn't there a race condition?
The port is already allocated, but not fully populated.
Can someone access the sport before these three lines are executed?

> +	/* Transition to vport_init */
> +	efc_sm_transition(&sport->sm, __efc_sport_vport_init, NULL);
> +
> +	return 0;
> +}
> +
> +/* Remove a previously-allocated virtual port */
> +int
> +efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
> +		    u64 wwpn, uint64_t wwnn)
> +{
> +	struct efc_sli_port *sport;
> +	int found = 0;
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +
> +	/* walk the efc_vport_list and remove from there */
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
> +			list_del(&vport->list_entry);
> +			kfree(vport);
> +			break;
> +		}
> +	}
> +

Locking?


Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 12/32] elx: libefc: Remote node state machine interfaces
  2019-12-20 22:37 ` [PATCH v2 12/32] elx: libefc: Remote node " James Smart
@ 2020-01-09  8:31   ` Hannes Reinecke
  2020-01-09  9:57   ` Daniel Wagner
  1 sibling, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  8:31 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Remote node (aka remote port) allocation, initialization and
>   destroy routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_node.c | 1343 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_node.h |  188 +++++
>  2 files changed, 1531 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_node.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_node.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
> new file mode 100644
> index 000000000000..57bf25a5d76a
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.c
> @@ -0,0 +1,1343 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efc.h"
> +
> +/* HW node callback events from the user driver */
> +int
> +efc_remote_node_cb(void *arg, int event,
> +		   void *data)
> +{
> +	struct efc *efc = arg;
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc_remote_node *rnode = data;
> +	struct efc_node *node = rnode->node;
> +	unsigned long flags = 0;
> +
> +	switch (event) {
> +	case EFC_HW_NODE_ATTACH_OK:
> +		sm_event = EFC_EVT_NODE_ATTACH_OK;
> +		break;
> +
> +	case EFC_HW_NODE_ATTACH_FAIL:
> +		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_OK:
> +		sm_event = EFC_EVT_NODE_FREE_OK;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_FAIL:
> +		sm_event = EFC_EVT_NODE_FREE_FAIL;
> +		break;
> +
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", event);
> +		return -1;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, sm_event, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return 0;
> +}
> +
> +/* Find an FC node structure given the FC port ID */
> +struct efc_node *
> +efc_node_find(struct efc_sli_port *sport, u32 port_id)
> +{
> +	struct efc_node *node;
> +
> +	node = efc_spv_get(sport->lookup, port_id);
> +	return node;
> +}
> +
> +int
> +efc_node_create_pool(struct efc *efc, u32 node_count)
> +{
> +	u32 i;
> +	struct efc_node *node;
> +	u64 max_xfer_size;
> +	struct efc_dma *dma;
> +
> +	efc->nodes_count = node_count;
> +
> +	efc->nodes = kmalloc_array(node_count, sizeof(struct efc_node *),
> +				   GFP_ATOMIC);
> +	if (!efc->nodes)
> +		return -1;
> +
> +	memset(efc->nodes, 0, node_count * sizeof(struct efc_node *));
> +
> +	if (efc->max_xfer_size)
> +		max_xfer_size = efc->max_xfer_size;
> +	else
> +		max_xfer_size = 65536;
> +
> +	INIT_LIST_HEAD(&efc->nodes_free_list);
> +
> +	for (i = 0; i < node_count; i++) {
> +		dma = NULL;
> +		node = kzalloc(sizeof(*node), GFP_ATOMIC);
> +		if (!node) {
> +			efc_log_err(efc, "node allocation failed");
> +			goto error;
> +		}
> +		/* Assign any persistent field values */
> +		node->instance_index = i;
> +		node->max_wr_xfer_size = max_xfer_size;
> +		node->rnode.indicator = U32_MAX;
> +
> +		dma = &node->sparm_dma_buf;
> +		dma->size = 256;
> +		dma->virt = dma_alloc_coherent(&efc->pcidev->dev, dma->size,
> +					       &dma->phys, GFP_DMA);
> +		if (!dma->virt) {
> +			kfree(node);
> +			efc_log_err(efc, "efc_dma_alloc failed");
> +			goto error;
> +		}
> +
> +		efc->nodes[i] = node;
> +		INIT_LIST_HEAD(&node->list_entry);
> +		list_add_tail(&node->list_entry, &efc->nodes_free_list);
> +	}
> +	return 0;
> +
> +error:
> +	efc_node_free_pool(efc);
> +	return -1;
> +}
> +

Can't you use a normal mempool here, and allocate the dma region when
required? I guess the node pool is used only infrequently, so
performance shouldn't be impacted ...
But it would reduce the pressure on the IOMMU, no?
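
Roughly what I'm thinking of, as a sketch only (efc->node_pool and
EFC_NODE_POOL_MIN are made-up names, not part of the patch):

	/* once, at init time: a mempool backed by kmalloc */
	efc->node_pool = mempool_create_kmalloc_pool(EFC_NODE_POOL_MIN,
						     sizeof(struct efc_node));
	if (!efc->node_pool)
		return -ENOMEM;

	/* per node: allocate the node and its DMA buffer only when needed */
	node = mempool_alloc(efc->node_pool, GFP_KERNEL);
	if (!node)
		return -ENOMEM;
	memset(node, 0, sizeof(*node));

	node->sparm_dma_buf.size = 256;
	node->sparm_dma_buf.virt = dma_alloc_coherent(&efc->pcidev->dev,
						      node->sparm_dma_buf.size,
						      &node->sparm_dma_buf.phys,
						      GFP_KERNEL);
	if (!node->sparm_dma_buf.virt) {
		mempool_free(node, efc->node_pool);
		return -ENOMEM;
	}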

[ .. ]
> +void efc_node_post_els_resp(struct efc_node *node,
> +			    enum efc_hw_node_els_event evt, void *arg)
> +{
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc *efc = node->efc;
> +	unsigned long flags = 0;
> +
> +	switch (evt) {
> +	case EFC_HW_SRRS_ELS_REQ_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_RJT:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
> +		break;
> +	case EFC_HW_ELS_REQ_ABORTED:
> +		sm_event = EFC_EVT_ELS_REQ_ABORTED;
> +		break;

Please collapse.
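
E.g. a small lookup table instead of the switch would do -- just a
sketch, the entries simply mirror the cases above:

	static const struct {
		enum efc_hw_node_els_event hw_evt;
		enum efc_sm_event sm_evt;
	} els_evt_map[] = {
		{ EFC_HW_SRRS_ELS_REQ_OK,	EFC_EVT_SRRS_ELS_REQ_OK },
		{ EFC_HW_SRRS_ELS_CMPL_OK,	EFC_EVT_SRRS_ELS_CMPL_OK },
		{ EFC_HW_SRRS_ELS_REQ_FAIL,	EFC_EVT_SRRS_ELS_REQ_FAIL },
		{ EFC_HW_SRRS_ELS_CMPL_FAIL,	EFC_EVT_SRRS_ELS_CMPL_FAIL },
		{ EFC_HW_SRRS_ELS_REQ_RJT,	EFC_EVT_SRRS_ELS_REQ_RJT },
		{ EFC_HW_ELS_REQ_ABORTED,	EFC_EVT_ELS_REQ_ABORTED },
	};
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(els_evt_map); i++) {
		if (els_evt_map[i].hw_evt == evt) {
			sm_event = els_evt_map[i].sm_evt;
			break;
		}
	}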

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 13/32] elx: libefc: Fabric node state machine interfaces
  2019-12-20 22:37 ` [PATCH v2 13/32] elx: libefc: Fabric " James Smart
@ 2020-01-09  8:34   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  8:34 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Fabric node initialization and logins.
> - Name/Directory Services node.
> - Fabric Controller node to process rscn events.
> 
> These are all interactions with remote ports that correspond
> to well-known fabric entities.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_fabric.c | 1762 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_fabric.h |  116 +++
>  2 files changed, 1878 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 14/32] elx: libefc: FC node ELS and state handling
  2019-12-20 22:37 ` [PATCH v2 14/32] elx: libefc: FC node ELS and state handling James Smart
@ 2020-01-09  8:39   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  8:39 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - FC node PRLI handling and state management
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_device.c | 1658 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_device.h |   72 ++
>  2 files changed, 1730 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_device.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_device.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
> new file mode 100644
> index 000000000000..f87525f65b72
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_device.c
[ .. ]
> +	case EFC_EVT_LOGO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_ADISC_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		/* sm: / send ADISC acc */
> +		efc->tt.els_send_resp(efc, node, ELS_ADISC,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_ABTS_RCVD:
> +		/* sm: / process ABTS */
> +		// This should not happen

... then I would expect a logging message, not a C++ style comment.
Please fix.
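
I.e. something along these lines (a sketch, reusing the helpers the
function already calls):

	case EFC_EVT_ABTS_RCVD:
		/* sm: / process ABTS */
		node_printf(node, "unexpected %s, attached=%d\n",
			    efc_sm_event_name(evt), node->attached);
		break;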


Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 15/32] elx: efct: Data structures and defines for hw operations
  2019-12-20 22:37 ` [PATCH v2 15/32] elx: efct: Data structures and defines for hw operations James Smart
@ 2020-01-09  8:41   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  8:41 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch starts the population of the efct target mode
> driver.  The driver is contained in the drivers/scsi/elx/efct
> subdirectory.
> 
> This patch creates the efct directory and starts population of
> the driver by adding SLI-4 configuration parameters, data structures
> for configuring SLI-4 queues, converting from os to SLI-4 IO requests,
> and handling async events.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.h | 852 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 852 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_hw.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> new file mode 100644
> index 000000000000..ff6de91923fa
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -0,0 +1,852 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef _EFCT_HW_H
> +#define _EFCT_HW_H
> +
> +#include "../libefc_sli/sli4.h"
> +#include "efct_utils.h"
> +
> +/*
> + * EFCT PCI IDs
> + */
> +#define EFCT_VENDOR_ID			0x10df
> +/* LightPulse 16Gb x 4 FC (lancer-g6) */
> +#define EFCT_DEVICE_ID_LPE31004		0xe307
> +#define PCI_PRODUCT_EMULEX_LPE32002	0xe307
> +/* LightPulse 32Gb x 4 FC (lancer-g7) */
> +#define EFCT_DEVICE_ID_G7		0xf407
> +
> +/*Default RQ entries len used by driver*/
> +#define EFCT_HW_RQ_ENTRIES_MIN		512
> +#define EFCT_HW_RQ_ENTRIES_DEF		1024
> +#define EFCT_HW_RQ_ENTRIES_MAX		4096
> +
> +/*Defines the size of the RQ buffers used for each RQ*/
> +#define EFCT_HW_RQ_SIZE_HDR             128
> +#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
> +
> +/*Define the maximum number of multi-receive queues*/
> +#define EFCT_HW_MAX_MRQS		8
> +
> +/*
> + * Define count of when to set the WQEC bit in a submitted
> + * WQE, causing a consumed/released completion to be posted.
> + */
> +#define EFCT_HW_WQEC_SET_COUNT		32
> +
> +/*Send frame timeout in seconds*/
> +#define EFCT_HW_SEND_FRAME_TIMEOUT	10
> +
> +/*
> + * FDT Transfer Hint value, reads greater than this value
> + * will be segmented to implement fairness. A value of zero disables
> + * the feature.
> + */
> +#define EFCT_HW_FDT_XFER_HINT		8192
> +
> +#define EFCT_HW_TIMECHECK_ITERATIONS	100
> +#define EFCT_HW_MAX_NUM_MQ		1
> +#define EFCT_HW_MAX_NUM_RQ		32
> +#define EFCT_HW_MAX_NUM_EQ		16
> +#define EFCT_HW_MAX_NUM_WQ		32
> +
> +#define OCE_HW_MAX_NUM_MRQ_PAIRS	16
> +
> +#define EFCT_HW_MAX_WQ_CLASS		4
> +#define EFCT_HW_MAX_WQ_CPU		128
> +
> +/*
> + * A CQ will be assigned to each WQ
> + * (CQ must have 2X entries of the WQ for abort
> + * processing), plus a separate one for each RQ PAIR and one for MQ
> + */
> +#define EFCT_HW_MAX_NUM_CQ \
> +	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
> +
> +#define EFCT_HW_Q_HASH_SIZE		128
> +#define EFCT_HW_RQ_HEADER_SIZE		128
> +#define EFCT_HW_RQ_HEADER_INDEX		0
> +
> +/* Options for efct_hw_command() */
> +enum {
> +	/* command executes synchronously and busy-waits for completion */
> +	EFCT_CMD_POLL,
> +	/* command executes asynchronously. Uses callback */
> +	EFCT_CMD_NOWAIT,
> +};
> +
> +enum efct_hw_rtn {
> +	EFCT_HW_RTN_SUCCESS = 0,
> +	EFCT_HW_RTN_SUCCESS_SYNC = 1,
> +	EFCT_HW_RTN_ERROR = -1,
> +	EFCT_HW_RTN_NO_RESOURCES = -2,
> +	EFCT_HW_RTN_NO_MEMORY = -3,
> +	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
> +	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
> +	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
> +	EFCT_HW_RTN_INVALID_ARG = -7,
> +};
> +
> +#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
> +
> +enum efct_hw_reset {
> +	EFCT_HW_RESET_FUNCTION,
> +	EFCT_HW_RESET_FIRMWARE,
> +	EFCT_HW_RESET_MAX
> +};
> +
> +enum efct_hw_property {
> +	EFCT_HW_N_IO,
> +	EFCT_HW_N_SGL,
> +	EFCT_HW_MAX_IO,
> +	EFCT_HW_MAX_SGE,
> +	EFCT_HW_MAX_SGL,
> +	EFCT_HW_MAX_NODES,
> +	EFCT_HW_MAX_RQ_ENTRIES,
> +	EFCT_HW_TOPOLOGY,
> +	EFCT_HW_WWN_NODE,
> +	EFCT_HW_WWN_PORT,
> +	EFCT_HW_FW_REV,
> +	EFCT_HW_FW_REV2,
> +	EFCT_HW_IPL,
> +	EFCT_HW_VPD,
> +	EFCT_HW_VPD_LEN,
> +	EFCT_HW_MODE,
> +	EFCT_HW_LINK_SPEED,
> +	EFCT_HW_IF_TYPE,
> +	EFCT_HW_SLI_REV,
> +	EFCT_HW_SLI_FAMILY,
> +	EFCT_HW_RQ_PROCESS_LIMIT,
> +	EFCT_HW_RQ_DEFAULT_BUFFER_SIZE,
> +	EFCT_HW_AUTO_XFER_RDY_CAPABLE,
> +	EFCT_HW_AUTO_XFER_RDY_XRI_CNT,
> +	EFCT_HW_AUTO_XFER_RDY_SIZE,
> +	EFCT_HW_AUTO_XFER_RDY_BLK_SIZE,
> +	EFCT_HW_AUTO_XFER_RDY_T10_ENABLE,
> +	EFCT_HW_AUTO_XFER_RDY_P_TYPE,
> +	EFCT_HW_AUTO_XFER_RDY_REF_TAG_IS_LBA,
> +	EFCT_HW_AUTO_XFER_RDY_APP_TAG_VALID,
> +	EFCT_HW_AUTO_XFER_RDY_APP_TAG_VALUE,
> +	EFCT_HW_DIF_CAPABLE,
> +	EFCT_HW_DIF_SEED,
> +	EFCT_HW_DIF_MODE,
> +	EFCT_HW_DIF_MULTI_SEPARATE,
> +	EFCT_HW_DUMP_MAX_SIZE,
> +	EFCT_HW_DUMP_READY,
> +	EFCT_HW_DUMP_PRESENT,
> +	EFCT_HW_RESET_REQUIRED,
> +	EFCT_HW_FW_ERROR,
> +	EFCT_HW_FW_READY,
> +	EFCT_HW_HIGH_LOGIN_MODE,
> +	EFCT_HW_PREREGISTER_SGL,
> +	EFCT_HW_HW_REV1,
> +	EFCT_HW_HW_REV2,
> +	EFCT_HW_HW_REV3,
> +	EFCT_HW_ETH_LICENSE,
> +	EFCT_HW_LINK_MODULE_TYPE,
> +	EFCT_HW_NUM_CHUTES,
> +	EFCT_HW_WAR_VERSION,
> +	/* enable driver timeouts for target WQEs */
> +	EFCT_HW_EMULATE_TARGET_WQE_TIMEOUT,
> +	EFCT_HW_LINK_CONFIG_SPEED,
> +	EFCT_HW_CONFIG_TOPOLOGY,
> +	EFCT_HW_BOUNCE,
> +	EFCT_HW_PORTNUM,
> +	EFCT_HW_BIOS_VERSION_STRING,
> +	EFCT_HW_RQ_SELECT_POLICY,
> +	EFCT_HW_SGL_CHAINING_CAPABLE,
> +	EFCT_HW_SGL_CHAINING_ALLOWED,
> +	EFCT_HW_SGL_CHAINING_HOST_ALLOCATED,
> +	EFCT_HW_SEND_FRAME_CAPABLE,
> +	EFCT_HW_RQ_SELECTION_POLICY,
> +	EFCT_HW_RR_QUANTA,
> +	EFCT_HW_FILTER_DEF,
> +	EFCT_HW_MAX_VPORTS,
> +	EFCT_ESOC,
> +};
> +
> +enum {
> +	EFCT_HW_TOPOLOGY_AUTO,
> +	EFCT_HW_TOPOLOGY_NPORT,
> +	EFCT_HW_TOPOLOGY_LOOP,
> +	EFCT_HW_TOPOLOGY_NONE,
> +	EFCT_HW_TOPOLOGY_MAX
> +};
> +
> +enum {
> +	EFCT_HW_MODE_INITIATOR,
> +	EFCT_HW_MODE_TARGET,
> +	EFCT_HW_MODE_BOTH,
> +	EFCT_HW_MODE_MAX
> +};
> +

Anonymous enums ...
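
Naming them also lets the users of these values be typed, e.g. (the tag
name here is just a suggestion):

	enum efct_hw_topology {
		EFCT_HW_TOPOLOGY_AUTO,
		EFCT_HW_TOPOLOGY_NPORT,
		EFCT_HW_TOPOLOGY_LOOP,
		EFCT_HW_TOPOLOGY_NONE,
		EFCT_HW_TOPOLOGY_MAX
	};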

> +/* pack fw revision values into a single uint64_t */
> +#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32) \
> +			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
> +
> +#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
> +
> +/* Defines DIF operation modes */
> +enum {
> +	EFCT_HW_DIF_MODE_INLINE,
> +	EFCT_HW_DIF_MODE_SEPARATE,
> +};
> +
> +/* T10 DIF operations */
> +enum efct_hw_dif_oper {
> +	EFCT_HW_DIF_OPER_DISABLED,
> +	EFCT_HW_SGE_DIFOP_INNODIFOUTCRC,
> +	EFCT_HW_SGE_DIFOP_INCRCOUTNODIF,
> +	EFCT_HW_SGE_DIFOP_INNODIFOUTCHKSUM,
> +	EFCT_HW_SGE_DIFOP_INCHKSUMOUTNODIF,
> +	EFCT_HW_SGE_DIFOP_INCRCOUTCRC,
> +	EFCT_HW_SGE_DIFOP_INCHKSUMOUTCHKSUM,
> +	EFCT_HW_SGE_DIFOP_INCRCOUTCHKSUM,
> +	EFCT_HW_SGE_DIFOP_INCHKSUMOUTCRC,
> +	EFCT_HW_SGE_DIFOP_INRAWOUTRAW,
> +};
> +
> +#define EFCT_HW_DIF_OPER_PASS_THRU	EFCT_HW_SGE_DIFOP_INCRCOUTCRC
> +#define EFCT_HW_DIF_OPER_STRIP		EFCT_HW_SGE_DIFOP_INCRCOUTNODIF
> +#define EFCT_HW_DIF_OPER_INSERT		EFCT_HW_SGE_DIFOP_INNODIFOUTCRC
> +
> +/* T10 DIF block sizes */
> +enum efct_hw_dif_blk_size {
> +	EFCT_HW_DIF_BK_SIZE_512,
> +	EFCT_HW_DIF_BK_SIZE_1024,
> +	EFCT_HW_DIF_BK_SIZE_2048,
> +	EFCT_HW_DIF_BK_SIZE_4096,
> +	EFCT_HW_DIF_BK_SIZE_520,
> +	EFCT_HW_DIF_BK_SIZE_4104,
> +	EFCT_HW_DIF_BK_SIZE_NA = 0
> +};
> +
> +/* link module types */
> +enum {
> +	EFCT_HW_LINK_MODULE_TYPE_1GB	= 0x0004,
> +	EFCT_HW_LINK_MODULE_TYPE_2GB	= 0x0008,
> +	EFCT_HW_LINK_MODULE_TYPE_4GB	= 0x0040,
> +	EFCT_HW_LINK_MODULE_TYPE_8GB	= 0x0080,
> +	EFCT_HW_LINK_MODULE_TYPE_10GB	= 0x0100,
> +	EFCT_HW_LINK_MODULE_TYPE_16GB	= 0x0200,
> +	EFCT_HW_LINK_MODULE_TYPE_32GB	= 0x0400,
> +};
> +

Same here ...

> +/* T10 DIF information passed to the transport */
> +struct efct_hw_dif_info {
> +	enum efct_hw_dif_oper dif_oper;
> +	enum efct_hw_dif_blk_size blk_size;
> +	u32 ref_tag_cmp;
> +	u32 ref_tag_repl;
> +	u16 app_tag_cmp;
> +	u16 app_tag_repl;
> +	bool check_ref_tag;
> +	bool check_app_tag;
> +	bool check_guard;
> +	bool auto_incr_ref_tag;
> +	bool repl_app_tag;
> +	bool repl_ref_tag;
> +	bool dif_separate;
> +
> +	/* If the APP TAG is 0xFFFF, disable REF TAG and CRC field chk */
> +	bool disable_app_ffff;
> +
> +	/* if the APP TAG is 0xFFFF and REF TAG is 0xFFFF_FFFF,
> +	 * disable checking the received CRC field.
> +	 */
> +	bool disable_app_ref_ffff;
> +	u16 dif_seed;
> +	u8 dif;
> +};
> +
> +enum efct_hw_io_type {
> +	EFCT_HW_ELS_REQ,
> +	EFCT_HW_ELS_RSP,
> +	EFCT_HW_ELS_RSP_SID,
> +	EFCT_HW_FC_CT,
> +	EFCT_HW_FC_CT_RSP,
> +	EFCT_HW_BLS_ACC,
> +	EFCT_HW_BLS_ACC_SID,
> +	EFCT_HW_BLS_RJT,
> +	EFCT_HW_IO_TARGET_READ,
> +	EFCT_HW_IO_TARGET_WRITE,
> +	EFCT_HW_IO_TARGET_RSP,
> +	EFCT_HW_IO_DNRX_REQUEUE,
> +	EFCT_HW_IO_MAX,
> +};
> +
> +enum efct_hw_io_state {
> +	EFCT_HW_IO_STATE_FREE,
> +	EFCT_HW_IO_STATE_INUSE,
> +	EFCT_HW_IO_STATE_WAIT_FREE,
> +	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
> +};
> +
> +struct efct_hw;
> +
> +/**
> + * HW command context.
> + * Stores the state for the asynchronous commands sent to the hardware.
> + */
> +struct efct_command_ctx {
> +	struct list_head	list_entry;
> +	int (*cb)(struct efct_hw *hw, int status, u8 *mqe, void *arg);
> +	void			*arg;	/* Argument for callback */
> +	u8			*buf;	/* buffer holding command / results */
> +	void			*ctx;	/* upper layer context */
> +};
> +
> +struct efct_hw_sgl {
> +	uintptr_t		addr;
> +	size_t			len;
> +};
> +
> +union efct_hw_io_param_u {
> +	struct {
> +		u16		ox_id;
> +		u16		rx_id;
> +		u8		payload[12];
> +	} bls;
> +	struct {
> +		u32		s_id;
> +		u16		ox_id;
> +		u16		rx_id;
> +		u8		payload[12];
> +	} bls_sid;
> +	struct {
> +		u8		r_ctl;
> +		u8		type;
> +		u8		df_ctl;
> +		u8		timeout;
> +	} bcast;
> +	struct {
> +		u16		ox_id;
> +		u8		timeout;
> +	} els;
> +	struct {
> +		u32		s_id;
> +		u16		ox_id;
> +		u8		timeout;
> +	} els_sid;
> +	struct {
> +		u8		r_ctl;
> +		u8		type;
> +		u8		df_ctl;
> +		u8		timeout;
> +	} fc_ct;
> +	struct {
> +		u8		r_ctl;
> +		u8		type;
> +		u8		df_ctl;
> +		u8		timeout;
> +		u16		ox_id;
> +	} fc_ct_rsp;
> +	struct {
> +		u32		offset;
> +		u16		ox_id;
> +		u16		flags;
> +		u8		cs_ctl;
> +		enum efct_hw_dif_oper dif_oper;
> +		enum efct_hw_dif_blk_size blk_size;
> +		u8		timeout;
> +		u32		app_id;
> +	} fcp_tgt;
> +	struct {
> +		struct efc_dma	*cmnd;
> +		struct efc_dma	*rsp;
> +		enum efct_hw_dif_oper dif_oper;
> +		enum efct_hw_dif_blk_size blk_size;
> +		u32		cmnd_size;
> +		u16		flags;
> +		u8		timeout;
> +		u32		first_burst;
> +	} fcp_ini;
> +};
> +
> +/* WQ steering mode */
> +enum efct_hw_wq_steering {
> +	EFCT_HW_WQ_STEERING_CLASS,
> +	EFCT_HW_WQ_STEERING_REQUEST,
> +	EFCT_HW_WQ_STEERING_CPU,
> +};
> +
> +/* HW wqe object */
> +struct efct_hw_wqe {
> +	struct list_head	list_entry;
> +	bool			abort_wqe_submit_needed;
> +	bool			send_abts;
> +	u32			id;
> +	u32			abort_reqtag;
> +	u8			*wqebuf;
> +};
> +
> +/**
> + * HW IO object.
> + *
> + * Stores the per-IO information necessary
> + * for both the lower (SLI) and upper
> + * layers (efct).
> + */
> +struct efct_hw_io {
> +	/* Owned by HW */
> +
> +	/* reference counter and callback function */
> +	struct kref		ref;
> +	void (*release)(struct kref *arg);
> +	/* used for busy, wait_free, free lists */
> +	struct list_head	list_entry;
> +	/* used for timed_wqe list */
> +	struct list_head	wqe_link;
> +	/* used for io posted dnrx list */
> +	struct list_head	dnrx_link;
> +	/* state of IO: free, busy, wait_free */
> +	enum efct_hw_io_state	state;
> +	/* Work queue object, with link for pending */
> +	struct efct_hw_wqe	wqe;
> +	/* pointer back to hardware context */
> +	struct efct_hw		*hw;
> +	struct efc_remote_node	*rnode;
> +	struct efc_dma		xfer_rdy;
> +	u16	type;
> +	/* WQ assigned to the exchange */
> +	struct hw_wq		*wq;
> +	/* Exchange is active in FW */
> +	bool			xbusy;
> +	/* Function called on IO completion */
> +	int
> +	(*done)(struct efct_hw_io *hio,
> +		struct efc_remote_node *rnode,
> +			u32 len, int status,
> +			u32 ext, void *ul_arg);
> +	/* argument passed to "IO done" callback */
> +	void			*arg;
> +	/* Function called on abort completion */
> +	int
> +	(*abort_done)(struct efct_hw_io *hio,
> +		      struct efc_remote_node *rnode,
> +			u32 len, int status,
> +			u32 ext, void *ul_arg);
> +	/* argument passed to "abort done" callback */
> +	void			*abort_arg;
> +	/* needed for bug O127585: length of IO */
> +	size_t			length;
> +	/* timeout value for target WQEs */
> +	u8			tgt_wqe_timeout;
> +	/* timestamp when current WQE was submitted */
> +	u64			submit_ticks;
> +
> +	/* if TRUE, latched status should be returned */
> +	bool			status_saved;
> +	/* if TRUE, abort is in progress */
> +	bool			abort_in_progress;
> +	u32			saved_status;
> +	u32			saved_len;
> +	u32			saved_ext;
> +
> +	/* EQ that this HIO came up on */
> +	struct hw_eq		*eq;
> +	/* WQ steering mode request */
> +	enum efct_hw_wq_steering wq_steering;
> +	/* WQ class if steering mode is Class */
> +	u8			wq_class;
> +
> +	/* request tag for this HW IO */
> +	u16			reqtag;
> +	/* request tag for an abort of this HW IO
> +	 * (note: this is a 32 bit value
> +	 * to allow us to use U32_MAX as an uninitialized value)
> +	 */
> +	u32			abort_reqtag;
> +	u32			indicator;	/* XRI */
> +	struct efc_dma		def_sgl;	/* default SGL*/
> +	/* Count of SGEs in default SGL */
> +	u32			def_sgl_count;
> +	/* pointer to current active SGL */
> +	struct efc_dma		*sgl;
> +	u32			sgl_count;	/* count of SGEs in io->sgl */
> +	u32			first_data_sge;	/* index of first data SGE */
> +	struct efc_dma		*ovfl_sgl;	/* overflow SGL */
> +	u32			ovfl_sgl_count;
> +	 /* pointer to overflow segment len */
> +	struct sli4_lsp_sge	*ovfl_lsp;
> +	u32			n_sge;		/* number of active SGEs */
> +	u32			sge_offset;
> +
> +	/* where upper layer can store ref to its IO */
> +	void			*ul_io;
> +};
> +
> +
> +/* Typedef for HW "done" callback */
> +typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
> +			      u32 len, int status, u32 ext, void *ul_arg);
> +
> +enum efct_hw_port {
> +	EFCT_HW_PORT_INIT,
> +	EFCT_HW_PORT_SHUTDOWN,
> +};
> +
> +/* Node group rpi reference */
> +struct efct_hw_rpi_ref {
> +	atomic_t rpi_count;
> +	atomic_t rpi_attached;
> +};
> +
> +enum efct_hw_link_stat {
> +	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
> +	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
> +	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
> +	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
> +	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
> +	EFCT_HW_LINK_STAT_CRC_COUNT,
> +	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
> +	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
> +	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
> +	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
> +	EFCT_HW_LINK_STAT_MAX,
> +};
> +
> +enum efct_hw_host_stat {
> +	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
> +	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
> +	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
> +	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
> +	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
> +	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
> +	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
> +	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
> +	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
> +	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
> +	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
> +	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
> +	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
> +	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
> +	EFCT_HW_HOST_STAT_MAX,
> +};
> +
> +enum efct_hw_state {
> +	EFCT_HW_STATE_UNINITIALIZED,
> +	EFCT_HW_STATE_QUEUES_ALLOCATED,
> +	EFCT_HW_STATE_ACTIVE,
> +	EFCT_HW_STATE_RESET_IN_PROGRESS,
> +	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,
> +};
> +
> +struct efct_hw_link_stat_counts {
> +	u8		overflow;
> +	u32		counter;
> +};
> +
> +struct efct_hw_host_stat_counts {
> +	u32		counter;
> +};
> +
> +#include "efct_hw_queues.h"
> +

Errm. Please move to the start of the file, to keep all includes in the
same place.

> +/* Structure used for the hash lookup of queue IDs */
> +struct efct_queue_hash {
> +	bool		in_use;
> +	u16		id;
> +	u16		index;
> +};
> +
> +/* WQ callback object */
> +struct hw_wq_callback {
> +	u16		instance_index;	/* use for request tag */
> +	void (*callback)(void *arg, u8 *cqe, int status);
> +	void		*arg;
> +};
> +
> +struct efct_hw_config {
> +	u32		n_eq;
> +	u32		n_cq;
> +	u32		n_mq;
> +	u32		n_rq;
> +	u32		n_wq;
> +	u32		n_io;
> +	u32		n_sgl;
> +	u32		speed;
> +	u32		topology;
> +	/* size of the buffers for first burst */
> +	u32		rq_default_buffer_size;
> +	u8		esoc;
> +	/* The seed for the DIF CRC calculation */
> +	u16		dif_seed;
> +	u8		dif_mode;
> +	/* Enable driver target wqe timeouts */
> +	u8		emulate_tgt_wqe_timeout;
> +	bool		bounce;
> +	/* Queue topology string */
> +	const char	*queue_topology;
> +	/* MRQ RQ selection policy */
> +	u8		rq_selection_policy;
> +	/* RQ quanta if rq_selection_policy == 2 */
> +	u8		rr_quanta;
> +	u32		filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
> +};
> +
> +struct efct_hw {
> +	struct efct		*os;
> +	struct sli4		sli;
> +	u16			ulp_start;
> +	u16			ulp_max;
> +	u32			dump_size;
> +	enum efct_hw_state	state;
> +	bool			hw_setup_called;
> +	u8			sliport_healthcheck;
> +	u16			watchdog_timeout;
> +
> +	/* HW configuration, subject to efct_hw_set()  */
> +	struct efct_hw_config	config;
> +
> +	/* calculated queue sizes for each type */
> +	u32			num_qentries[SLI_QTYPE_MAX];
> +
> +	/* Storage for SLI queue objects */
> +	struct sli4_queue	wq[EFCT_HW_MAX_NUM_WQ];
> +	struct sli4_queue	rq[EFCT_HW_MAX_NUM_RQ];
> +	u16			hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
> +	struct sli4_queue	mq[EFCT_HW_MAX_NUM_MQ];
> +	struct sli4_queue	cq[EFCT_HW_MAX_NUM_CQ];
> +	struct sli4_queue	eq[EFCT_HW_MAX_NUM_EQ];
> +
> +	/* HW queue */
> +	u32			eq_count;
> +	u32			cq_count;
> +	u32			mq_count;
> +	u32			wq_count;
> +	u32			rq_count;
> +	struct list_head	eq_list;
> +
> +	struct efct_queue_hash	cq_hash[EFCT_HW_Q_HASH_SIZE];
> +	struct efct_queue_hash	rq_hash[EFCT_HW_Q_HASH_SIZE];
> +	struct efct_queue_hash	wq_hash[EFCT_HW_Q_HASH_SIZE];
> +
> +	/* Storage for HW queue objects */
> +	struct hw_wq		*hw_wq[EFCT_HW_MAX_NUM_WQ];
> +	struct hw_rq		*hw_rq[EFCT_HW_MAX_NUM_RQ];
> +	struct hw_mq		*hw_mq[EFCT_HW_MAX_NUM_MQ];
> +	struct hw_cq		*hw_cq[EFCT_HW_MAX_NUM_CQ];
> +	struct hw_eq		*hw_eq[EFCT_HW_MAX_NUM_EQ];
> +	/* count of hw_rq[] entries */
> +	u32			hw_rq_count;
> +	/* count of multirq RQs */
> +	u32			hw_mrq_count;
> +
> +	 /* pool per class WQs */
> +	struct efct_varray	*wq_class_array[EFCT_HW_MAX_WQ_CLASS];
> +	/* pool per CPU WQs */
> +	struct efct_varray	*wq_cpu_array[EFCT_HW_MAX_WQ_CPU];
> +
> +	/* Sequence objects used in incoming frame processing */
> +	struct efct_array	*seq_pool;
> +
> +	/* Maintain an ordered, linked list of outstanding HW commands. */
> +	spinlock_t		cmd_lock;
> +	struct list_head	cmd_head;
> +	struct list_head	cmd_pending;
> +	u32			cmd_head_count;
> +
> +	struct sli4_link_event	link;
> +	struct efc_domain	*domain;
> +
> +	u16			fcf_indicator;
> +
> +	/* pointer array of IO objects */
> +	struct efct_hw_io	**io;
> +	/* array of WQE buffs mapped to IO objects */
> +	u8			*wqe_buffs;
> +
> +	/* IO lock to synchronize list access */
> +	spinlock_t		io_lock;
> +	/* IO lock to synchronize IO aborting */
> +	spinlock_t		io_abort_lock;
> +	/* List of IO objects in use */
> +	struct list_head	io_inuse;
> +	/* List of IO objects with a timed target WQE */
> +	struct list_head	io_timed_wqe;
> +	/* List of IO objects waiting to be freed */
> +	struct list_head	io_wait_free;
> +	/* List of IO objects available for allocation */
> +	struct list_head	io_free;
> +
> +	struct efc_dma		loop_map;
> +
> +	struct efc_dma		xfer_rdy;
> +
> +	struct efc_dma		dump_sges;
> +
> +	struct efc_dma		rnode_mem;
> +
> +	struct efct_hw_rpi_ref	*rpi_ref;
> +
> +	atomic_t		io_alloc_failed_count;
> +
> +	struct efct_hw_qtop	*qtop;
> +
> +	/* stat: wq submit count */
> +	u32			tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
> +	/* stat: wq complete count */
> +	u32			tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
> +	/* Timer to periodically check for WQE timeouts */
> +	struct timer_list	wqe_timer;
> +	/* Timer for heartbeat */
> +	struct timer_list	watchdog_timer;
> +	bool			in_active_wqe_timer;
> +	bool			active_wqe_timer_shutdown;
> +
> +	struct efct_pool	*wq_reqtag_pool;
> +	atomic_t		send_frame_seq_id;
> +};
> +
> +enum efct_hw_io_count_type {
> +	EFCT_HW_IO_INUSE_COUNT,
> +	EFCT_HW_IO_FREE_COUNT,
> +	EFCT_HW_IO_WAIT_FREE_COUNT,
> +	EFCT_HW_IO_N_TOTAL_IO_COUNT,
> +};
> +
> +/* HW queue data structures */
> +struct hw_eq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct efct_hw		*hw;
> +	struct sli4_queue	*queue;
> +	struct list_head	cq_list;
> +	u32			use_count;
> +	struct efct_varray	*wq_array;
> +};
> +
> +struct hw_cq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_eq		*eq;
> +	struct sli4_queue	*queue;
> +	struct list_head	q_list;
> +	u32			use_count;
> +};
> +
> +struct hw_q {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +};
> +
> +struct hw_mq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_cq		*cq;
> +	struct sli4_queue	*queue;
> +
> +	u32			use_count;
> +};
> +
> +struct hw_wq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	struct efct_hw		*hw;
> +
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_cq		*cq;
> +	struct sli4_queue	*queue;
> +	u32			class;
> +	u8			ulp;
> +
> +	/* WQ consumed */
> +	u32			wqec_set_count;
> +	u32			wqec_count;
> +	u32			free_count;
> +	u32			total_submit_count;
> +	struct list_head	pending_list;
> +
> +	/* HW IO allocated for use with Send Frame */
> +	struct efct_hw_io	*send_frame_io;
> +
> +	/* Stats */
> +	u32			use_count;
> +	u32			wq_pending_count;
> +};
> +
> +struct hw_rq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +
> +	u32			entry_count;
> +	u32			use_count;
> +	u32			hdr_entry_size;
> +	u32			first_burst_entry_size;
> +	u32			data_entry_size;
> +	u8			ulp;
> +	bool			is_mrq;
> +	u32			base_mrq_id;
> +
> +	struct hw_cq		*cq;
> +
> +	u8			filter_mask;
> +	struct sli4_queue	*hdr;
> +	struct sli4_queue	*first_burst;
> +	struct sli4_queue	*data;
> +
> +	struct efc_hw_rq_buffer	*hdr_buf;
> +	struct efc_hw_rq_buffer	*fb_buf;
> +	struct efc_hw_rq_buffer	*payload_buf;
> +	/* RQ tracker for this RQ */
> +	struct efc_hw_sequence	**rq_tracker;
> +};
> +
> +struct efct_hw_global {
> +	const char		*queue_topology_string;
> +};
> +
> +extern struct efct_hw_global	hw_global;
> +
> +struct efct_hw_send_frame_context {
> +	struct efct_hw		*hw;
> +	struct hw_wq_callback	*wqcb;
> +	struct efct_hw_wqe	wqe;
> +	void (*callback)(int status, void *arg);
> +	void			*arg;
> +
> +	/* General purpose elements */
> +	struct efc_hw_sequence	*seq;
> +	struct efc_dma		payload;
> +};
> +
> +#define EFCT_HW_OBJECT_G5              0xfeaa0001
> +#define EFCT_HW_OBJECT_G6              0xfeaa0003
> +struct efct_hw_grp_hdr {
> +	u32			size;
> +	__be32			magic_number;
> +	u32			word2;
> +	u8			rev_name[128];
> +	u8			date[12];
> +	u8			revision[32];
> +};
> +
> +#endif /* _EFCT_HW_H */
> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 16/32] elx: efct: Driver initialization routines
  2019-12-20 22:37 ` [PATCH v2 16/32] elx: efct: Driver initialization routines James Smart
@ 2020-01-09  9:01   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:01 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Emulex FC Target driver init, attach and hardware setup routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_driver.c | 1031 +++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_driver.h |  150 +++++
>  drivers/scsi/elx/efct/efct_hw.c     | 1222 +++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h     |   16 +-
>  drivers/scsi/elx/efct/efct_xport.c  |  587 +++++++++++++++++
>  drivers/scsi/elx/efct/efct_xport.h  |  205 ++++++
>  6 files changed, 3210 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/scsi/elx/efct/efct_driver.c
>  create mode 100644 drivers/scsi/elx/efct/efct_driver.h
>  create mode 100644 drivers/scsi/elx/efct/efct_hw.c
>  create mode 100644 drivers/scsi/elx/efct/efct_xport.c
>  create mode 100644 drivers/scsi/elx/efct/efct_xport.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
> new file mode 100644
> index 000000000000..f0ec132bdd0e
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_driver.c
> @@ -0,0 +1,1031 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_utils.h"
> +
> +#include "efct_els.h"
> +#include "efct_hw.h"
> +#include "efct_unsol.h"
> +#include "efct_scsi.h"
> +
> +struct efct *efct_devices[MAX_EFCT_DEVICES];
> +
> +static int logmask;
> +module_param(logmask, int, 0444);
> +MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
> +
> +static struct libefc_function_template efct_libefc_templ = {
> +	.hw_domain_alloc = efct_hw_domain_alloc,
> +	.hw_domain_attach = efct_hw_domain_attach,
> +	.hw_domain_free = efct_hw_domain_free,
> +	.hw_domain_force_free = efct_hw_domain_force_free,
> +	.domain_hold_frames = efct_domain_hold_frames,
> +	.domain_accept_frames = efct_domain_accept_frames,
> +
> +	.hw_port_alloc = efct_hw_port_alloc,
> +	.hw_port_attach = efct_hw_port_attach,
> +	.hw_port_free = efct_hw_port_free,
> +
> +	.hw_node_alloc = efct_hw_node_alloc,
> +	.hw_node_attach = efct_hw_node_attach,
> +	.hw_node_detach = efct_hw_node_detach,
> +	.hw_node_free_resources = efct_hw_node_free_resources,
> +	.node_purge_pending = efct_node_purge_pending,
> +
> +	.scsi_io_alloc_disable = efct_scsi_io_alloc_disable,
> +	.scsi_io_alloc_enable = efct_scsi_io_alloc_enable,
> +	.scsi_validate_node = efct_scsi_validate_initiator,
> +	.new_domain = efct_scsi_tgt_new_domain,
> +	.del_domain = efct_scsi_tgt_del_domain,
> +	.new_sport = efct_scsi_tgt_new_sport,
> +	.del_sport = efct_scsi_tgt_del_sport,
> +	.scsi_new_node = efct_scsi_new_initiator,
> +	.scsi_del_node = efct_scsi_del_initiator,
> +
> +	.els_send = efct_els_req_send,
> +	.els_send_ct = efct_els_send_ct,
> +	.els_send_resp = efct_els_resp_send,
> +	.bls_send_acc_hdr = efct_bls_send_acc_hdr,
> +	.send_flogi_p2p_acc = efct_send_flogi_p2p_acc,
> +	.send_ct_rsp = efct_send_ct_rsp,
> +	.send_ls_rjt = efct_send_ls_rjt,
> +
> +	.node_io_cleanup = efct_node_io_cleanup,
> +	.node_els_cleanup = efct_node_els_cleanup,
> +	.node_abort_all_els = efct_node_abort_all_els,
> +
> +	.dispatch_fcp_cmd = efct_dispatch_fcp_cmd,
> +	.recv_abts_frame = efct_node_recv_abts_frame,
> +};
> +
> +static char *queue_topology =
> +	"eq cq rq cq mq $nulp($nwq(cq wq:ulp=$rpt1)) cq wq:len=256:class=1";
> +

What on earth ...
That _does_ warrant an explanation.

> +static int
> +efct_device_init(void)
> +{
> +	int rc;
> +
> +	hw_global.queue_topology_string = queue_topology;
> +
> +	/* driver-wide init for target-server */
> +	rc = efct_scsi_tgt_driver_init();
> +	if (rc) {
> +		pr_err("efct_scsi_tgt_init failed rc=%d\n",
> +			     rc);
> +		return -1;
> +	}
> +
> +	rc = efct_scsi_reg_fc_transport();
> +	if (rc) {
> +		pr_err("failed to register to FC host\n");
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static void
> +efct_device_shutdown(void)
> +{
> +	efct_scsi_release_fc_transport();
> +
> +	efct_scsi_tgt_driver_exit();
> +}
> +
> +static void *
> +efct_device_alloc(u32 nid)
> +{
> +	struct efct *efct = NULL;
> +	u32 i;
> +
> +	efct = kmalloc_node(sizeof(*efct), GFP_ATOMIC, nid);
> +
> +	if (efct) {
> +		memset(efct, 0, sizeof(*efct));
> +		for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
> +			if (!efct_devices[i]) {
> +				efct->instance_index = i;
> +				efct_devices[i] = efct;
> +				break;
> +			}
> +		}
> +
> +		if (i == ARRAY_SIZE(efct_devices)) {
> +			pr_err("Exceeded max supported devices.\n");
> +			kfree(efct);
> +			efct = NULL;
> +		} else {
> +			efct->attached = false;
> +		}
> +	}
> +	return efct;
> +}
> +
> +static struct efct *
> +efct_get_instance(u32 index)
> +{
> +	if (index < ARRAY_SIZE(efct_devices))
> +		return efct_devices[index];
> +
> +	return NULL;
> +}
> +
> +static void
> +efct_device_interrupt_handler(struct efct *efct, u32 vector)
> +{
> +	efct_hw_process(&efct->hw, vector, efct->max_isr_time_msec);
> +}
> +
> +static int
> +efct_intr_thread(struct efct_intr_context *intr_context)
> +{
> +	struct efct *efct = intr_context->efct;
> +	int rc;
> +	u32 tstart, tnow;
> +
> +	tstart = jiffies_to_msecs(jiffies);
> +
> +	while (!kthread_should_stop()) {
> +		rc = wait_for_completion_timeout(&intr_context->done,
> +				  usecs_to_jiffies(100000));
> +		if (!rc)
> +			continue;
> +
> +		efct_device_interrupt_handler(efct, intr_context->index);
> +
> +		/* If we've been running for too long, then yield */
> +		tnow = jiffies_to_msecs(jiffies);
> +		if ((tnow - tstart) > 5000) {
> +			cond_resched();
> +			tstart = tnow;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +efct_start_event_processing(struct efct *efct)
> +{
> +	u32 i;
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		char label[32];
> +		struct efct_intr_context *intr_ctx = NULL;
> +
> +		intr_ctx = &efct->intr_context[i];
> +
> +		intr_ctx->efct = efct;
> +		intr_ctx->index = i;
> +
> +		init_completion(&intr_ctx->done);
> +
> +		snprintf(label, sizeof(label),
> +			 "efct:%d:%d", efct->instance_index, i);
> +
> +		intr_ctx->thread =
> +			kthread_create((int(*)(void *)) efct_intr_thread,
> +				       intr_ctx, label);
> +
> +		if (IS_ERR(intr_ctx->thread)) {
> +			efc_log_err(efct, "kthread_create failed: %ld\n",
> +				     PTR_ERR(intr_ctx->thread));
> +			intr_ctx->thread = NULL;
> +
> +			return -1;
> +		}
> +
> +		wake_up_process(intr_ctx->thread);
> +	}
> +
> +	return 0;
> +}
> +

Hmpf.
We _do_ have a generic threaded interrupt model.
Any particular reason why you have to reimplement it?
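
A sketch of what request_threaded_irq() would look like here (the
handler names are made up; the rest reuses fields from the patch):

	static irqreturn_t efct_intr_hardirq(int irq, void *handle)
	{
		/* nothing to do in hard-irq context, defer to the thread */
		return IRQ_WAKE_THREAD;
	}

	static irqreturn_t efct_intr_threadfn(int irq, void *handle)
	{
		struct efct_intr_context *intr_ctx = handle;
		struct efct *efct = intr_ctx->efct;

		efct_hw_process(&efct->hw, intr_ctx->index,
				efct->max_isr_time_msec);
		return IRQ_HANDLED;
	}

	rc = request_threaded_irq(efct->msix_vec[i].vector,
				  efct_intr_hardirq, efct_intr_threadfn,
				  0, EFCT_DRIVER_NAME,
				  &efct->intr_context[i]);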

> +static void
> +efct_teardown_msix(struct efct *efct)
> +{
> +	u32 i;
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		synchronize_irq(efct->msix_vec[i].vector);
> +		free_irq(efct->msix_vec[i].vector,
> +			 &efct->intr_context[i]);
> +	}
> +	pci_disable_msix(efct->pcidev);
> +}
> +
> +static int
> +efct_efclib_config(struct efct *efct, struct libefc_function_template *tt)
> +{
> +	struct efc *efc;
> +	struct sli4 *sli;
> +
> +	efc = kmalloc(sizeof(*efc), GFP_KERNEL);
> +	if (!efc)
> +		return -1;
> +
> +	memset(efc, 0, sizeof(struct efc));
> +	efct->efcport = efc;
> +
> +	memcpy(&efc->tt, tt, sizeof(*tt));
> +	efc->base = efct;
> +	efc->pcidev = efct->pcidev;
> +
> +	efc->def_wwnn = efct_get_wwn(&efct->hw, EFCT_HW_WWN_NODE);
> +	efc->def_wwpn = efct_get_wwn(&efct->hw, EFCT_HW_WWN_PORT);
> +	efc->enable_tgt = 1;
> +	efc->log_level = EFC_LOG_LIB;
> +
> +	sli = &efct->hw.sli;
> +	efc->max_xfer_size = sli->sge_supported_length *
> +			     sli_get_max_sgl(&efct->hw.sli);
> +
> +	efcport_init(efc);
> +
> +	return 0;
> +}
> +
> +static int efct_request_firmware_update(struct efct *efct);
> +
> +static int
> +efct_device_attach(struct efct *efct)
> +{
> +	u32 rc = 0, i = 0;
> +
> +	if (efct->attached) {
> +		efc_log_warn(efct, "Device is already attached\n");
> +		rc = -1;
> +	} else {
> +		snprintf(efct->display_name, sizeof(efct->display_name),
> +			 "[%s%d] ", "fc",  efct->instance_index);
> +
> +		efct->logmask = logmask;
> +		efct->enable_numa_support = 1;
> +		efct->filter_def = "0,0,0,0";
> +		efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
> +		efct->model =
> +			(efct->pcidev->device == EFCT_DEVICE_ID_LPE31004) ?
> +			"LPE31004" : "unknown";

That is _so_ lame.
You already know which devices you bind to, and even the names.
So please update this check.
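
E.g. a switch on the device ID (sketch only; I've left the G7 string as
a placeholder since I don't know the marketing name off-hand):

	switch (efct->pcidev->device) {
	case EFCT_DEVICE_ID_LPE31004:
		efct->model = "LPE31004";
		break;
	case EFCT_DEVICE_ID_G7:
		efct->model = "G7";	/* fill in the proper model name */
		break;
	default:
		efct->model = "unknown";
		break;
	}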

> +		efct->fw_version = (const char *)efct_hw_get_ptr(&efct->hw,
> +							EFCT_HW_FW_REV);
> +		efct->driver_version = EFCT_DRIVER_VERSION;
> +
> +		efct->efct_req_fw_upgrade = true;
> +
> +		/* Allocate transport object and bring online */
> +		efct->xport = efct_xport_alloc(efct);
> +		if (!efct->xport) {
> +			efc_log_err(efct, "failed to allocate transport object\n");
> +			rc = -1;
> +		} else if (efct_xport_attach(efct->xport) != 0) {
> +			efc_log_err(efct, "failed to attach transport object\n");
> +			rc = -1;
> +		} else if (efct_xport_initialize(efct->xport) != 0) {
> +			efc_log_err(efct, "failed to initialize transport object\n");
> +			rc = -1;
> +		} else if (efct_efclib_config(efct, &efct_libefc_templ)) {
> +			efc_log_err(efct, "failed to init efclib\n");
> +			rc = -1;
> +		} else if (efct_start_event_processing(efct)) {
> +			efc_log_err(efct, "failed to start event processing\n");
> +			rc = -1;
> +		} else {
> +			for (i = 0; i < efct->n_msix_vec; i++) {
> +				efc_log_debug(efct, "irq %d enabled\n",
> +					efct->msix_vec[i].vector);
> +				enable_irq(efct->msix_vec[i].vector);
> +			}
> +		}

Curious programming.
The 'normal' way would be using plain if statements and gotos.
Please fix.
And check if the cleanup is done correctly.
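
I.e. the usual pattern, roughly (the label names are made up):

	rc = efct_xport_attach(efct->xport);
	if (rc) {
		efc_log_err(efct, "failed to attach transport object\n");
		goto out_free_xport;
	}

	rc = efct_xport_initialize(efct->xport);
	if (rc) {
		efc_log_err(efct, "failed to initialize transport object\n");
		goto out_detach_xport;
	}

	/* ... remaining setup steps, each with its own unwind label ... */

	return 0;

out_detach_xport:
	efct_xport_detach(efct->xport);
out_free_xport:
	efct_xport_free(efct->xport);
	efct->xport = NULL;
	return rc;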

> +
> +		efct->desc = efct->hw.sli.modeldesc;
> +		efc_log_info(efct, "adapter model description: %s\n",
> +			      efct->desc);
> +
> +		if (rc == 0) {
> +			efct->attached = true;
> +		} else {
> +			efct_teardown_msix(efct);
> +			if (efct->xport) {
> +				efct_xport_free(efct->xport);
> +				efct->xport = NULL;
> +			}
> +		}
> +
> +		if (efct->efct_req_fw_upgrade) {
> +			efc_log_debug(efct, "firmware update is in progress\n");
> +			efct_request_firmware_update(efct);
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static void
> +efct_stop_event_processing(struct efct *efct)
> +{
> +	u32 i;
> +	struct task_struct *thread = NULL;
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		disable_irq(efct->msix_vec[i].vector);
> +
> +		thread = efct->intr_context[i].thread;
> +		if (!thread)
> +			continue;
> +
> +		/* Call stop */
> +		kthread_stop(thread);
> +	}
> +}
> +
> +static int
> +efct_device_detach(struct efct *efct)
> +{
> +	int rc = 0;
> +
> +	if (efct) {
> +		if (!efct->attached) {
> +			efc_log_warn(efct, "Device is not attached\n");
> +			return -1;
> +		}
> +
> +		rc = efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN);
> +		if (rc)
> +			efc_log_err(efct, "Transport Shutdown timed out\n");
> +
> +		efct_stop_event_processing(efct);
> +
> +		if (efct_xport_detach(efct->xport) != 0)
> +			efc_log_err(efct, "Transport detach failed\n");
> +
> +		efct_xport_free(efct->xport);
> +		efct->xport = NULL;
> +
> +		efcport_destroy(efct->efcport);
> +		kfree(efct->efcport);
> +
> +		efct->attached = false;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +efct_fw_reset(struct efct *efct)
> +{
> +	int rc = 0;
> +	int index = 0;
> +	u8 bus, dev;
> +	struct efct *other_efct;
> +
> +	bus = efct->pcidev->bus->number;
> +	dev = PCI_SLOT(efct->pcidev->devfn);
> +
> +	while ((other_efct = efct_get_instance(index++)) != NULL) {
> +		u8 other_bus, other_dev;
> +
> +		other_bus = other_efct->pcidev->bus->number;
> +		other_dev = PCI_SLOT(other_efct->pcidev->devfn);
> +
> +		if (bus == other_bus && dev == other_dev &&
> +		    timer_pending(&other_efct->xport->stats_timer)) {
> +			efc_log_debug(other_efct,
> +				       "removing link stats timer\n");
> +			del_timer(&other_efct->xport->stats_timer);
> +		}
> +	}
> +

???
You're not telling me you're doing a cross-PCI device reset, are you?
This does need some documentation explaining what it's trying to do and
why this is necessary.

> +	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
> +		efc_log_test(efct, "failed to reset firmware\n");
> +		rc = -1;
> +	} else {
> +		efc_log_debug(efct,
> +			       "successfully reset firmware. Now resetting port\n");
> +		/* now flag all functions on the same device
> +		 * as this port as uninitialized
> +		 */
> +		index = 0;
> +
> +		while ((other_efct = efct_get_instance(index++)) != NULL) {
> +			u8 other_bus, other_dev;
> +
> +			other_bus = other_efct->pcidev->bus->number;
> +			other_dev = PCI_SLOT(other_efct->pcidev->devfn);
> +
> +			if (bus == other_bus && dev == other_dev) {
> +				if (other_efct->hw.state !=
> +						EFCT_HW_STATE_UNINITIALIZED) {
> +					other_efct->hw.state =
> +						EFCT_HW_STATE_QUEUES_ALLOCATED;
> +				}
> +				efct_device_detach(efct);
> +				rc = efct_device_attach(efct);
> +
> +				efc_log_debug(other_efct,
> +					       "re-start driver with new firmware\n");
> +			}
> +		}
> +	}
> +	return rc;
> +}
> +

Similar here.

> +static void
> +efct_fw_write_cb(int status, u32 actual_write_length,
> +		 u32 change_status, void *arg)
> +{
> +	struct efct_fw_write_result *result = arg;
> +
> +	result->status = status;
> +	result->actual_xfer = actual_write_length;
> +	result->change_status = change_status;
> +
> +	complete(&result->done);
> +}
> +
> +static int
> +efct_firmware_write(struct efct *efct, const u8 *buf, size_t buf_len,
> +		    u8 *change_status)
> +{
> +	int rc = 0;
> +	u32 bytes_left;
> +	u32 xfer_size;
> +	u32 offset;
> +	struct efc_dma dma;
> +	int last = 0;
> +	struct efct_fw_write_result result;
> +
> +	init_completion(&result.done);
> +
> +	bytes_left = buf_len;
> +	offset = 0;
> +
> +	dma.size = FW_WRITE_BUFSIZE;
> +	dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +				      dma.size, &dma.phys, GFP_DMA);
> +	if (!dma.virt)
> +		return -ENOMEM;
> +
> +	while (bytes_left > 0) {
> +		if (bytes_left > FW_WRITE_BUFSIZE)
> +			xfer_size = FW_WRITE_BUFSIZE;
> +		else
> +			xfer_size = bytes_left;
> +
> +		memcpy(dma.virt, buf + offset, xfer_size);
> +
> +		if (bytes_left == xfer_size)
> +			last = 1;
> +
> +		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
> +				       last, efct_fw_write_cb, &result);
> +
> +		if (wait_for_completion_interruptible(&result.done) != 0) {
> +			rc = -ENXIO;
> +			break;
> +		}
> +
> +		if (result.actual_xfer == 0 || result.status != 0) {
> +			rc = -EFAULT;
> +			break;
> +		}
> +
> +		if (last)
> +			*change_status = result.change_status;
> +
> +		bytes_left -= result.actual_xfer;
> +		offset += result.actual_xfer;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev, dma.size, dma.virt, dma.phys);
> +	return rc;
> +}
> +
> +static int
> +efct_request_firmware_update(struct efct *efct)
> +{
> +	int rc = 0;
> +	u8 file_name[256], fw_change_status = 0;
> +	const struct firmware *fw;
> +	struct efct_hw_grp_hdr *fw_image;
> +
> +	snprintf(file_name, 256, "%s.grp", efct->model);
> +	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
> +	if (rc) {
> +		efc_log_err(efct, "Firmware file(%s) not found.\n", file_name);
> +		return rc;
> +	}
> +	fw_image = (struct efct_hw_grp_hdr *)fw->data;
> +
> +	/* Check if firmware provided is compatible with this particular
> +	 * Adapter or not
> +	 */
> +	if ((be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G5) &&
> +	    (be32_to_cpu(fw_image->magic_number) != EFCT_HW_OBJECT_G6)) {
> +		efc_log_warn(efct,
> +			      "Invalid FW image found Magic: 0x%x Size: %ld\n",
> +			be32_to_cpu(fw_image->magic_number), fw->size);
> +		rc = -1;
> +		goto exit;
> +	}
> +
> +	if (!strncmp(efct->fw_version, fw_image->revision,
> +		     strnlen(fw_image->revision, 16))) {
> +		efc_log_debug(efct,
> +			       "No update req. Firmware is already up to date.\n");
> +		rc = 0;
> +		goto exit;
> +	}
> +	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
> +	if (rc) {
> +		efc_log_err(efct,
> +			     "Firmware update failed. Return code = %d\n", rc);
> +	} else {
> +		efc_log_info(efct, "Firmware updated successfully\n");
> +		switch (fw_change_status) {
> +		case 0x00:
> +			efc_log_debug(efct,
> +				       "No reset needed, new firmware is active.\n");
> +			break;
> +		case 0x01:
> +			efc_log_warn(efct,
> +				      "A physical device reset (host reboot) is needed to activate the new firmware\n");
> +			break;
> +		case 0x02:
> +		case 0x03:
> +			efc_log_debug(efct,
> +				       "firmware is resetting to activate the new firmware\n");
> +			efct_fw_reset(efct);
> +			break;
> +		default:
> +			efc_log_debug(efct,
> +				       "Unexpected value change_status: %d\n",
> +				fw_change_status);
> +			break;
> +		}
> +	}
> +
> +exit:
> +	release_firmware(fw);
> +
> +	return rc;
> +}
> +
> +static void
> +efct_device_free(struct efct *efct)
> +{
> +	if (efct) {
> +		efct_devices[efct->instance_index] = NULL;
> +
> +		kfree(efct);
> +	}
> +}
> +
> +static int
> +efct_device_interrupts_required(struct efct *efct)
> +{
> +	if (efct_hw_setup(&efct->hw, efct, efct->pcidev)
> +				!= EFCT_HW_RTN_SUCCESS) {
> +		return -1;
> +	}
> +	return efct_hw_qtop_eq_count(&efct->hw);
> +}
> +
> +static irqreturn_t
> +efct_intr_msix(int irq, void *handle)
> +{
> +	struct efct_intr_context *intr_context = handle;
> +
> +	complete(&intr_context->done);
> +	return IRQ_HANDLED;
> +}
> +
> +static int
> +efct_setup_msix(struct efct *efct, u32 num_interrupts)
> +{
> +	int	rc = 0;
> +	u32 i;
> +
> +	if (!pci_find_capability(efct->pcidev, PCI_CAP_ID_MSIX)) {
> +		dev_err(&efct->pcidev->dev,
> +			"%s : MSI-X not available\n", __func__);
> +		return -EINVAL;
> +	}
> +
> +	if (num_interrupts > ARRAY_SIZE(efct->msix_vec)) {
> +		dev_err(&efct->pcidev->dev,
> +			"%s : num_interrupts: %d greater than vectors\n",
> +			__func__, num_interrupts);
> +		return -1;
> +	}
> +
> +	efct->n_msix_vec = num_interrupts;
> +	for (i = 0; i < num_interrupts; i++)
> +		efct->msix_vec[i].entry = i;
> +
> +	rc = pci_enable_msix_exact(efct->pcidev,
> +				   efct->msix_vec, efct->n_msix_vec);
> +	if (!rc) {
> +		for (i = 0; i < num_interrupts; i++) {
> +			rc = request_irq(efct->msix_vec[i].vector,
> +					 efct_intr_msix,
> +					 0, EFCT_DRIVER_NAME,
> +					 &efct->intr_context[i]);
> +			if (rc)
> +				break;
> +		}
> +	} else {
> +		dev_err(&efct->pcidev->dev,
> +			"%s : rc %d returned, IRQ allocation failed\n",
> +			   __func__, rc);
> +	}
> +
> +	return rc;
> +}

Interrupt affinity?
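
pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY would spread the vectors
across CPUs for you -- a sketch (error unwinding omitted):

	rc = pci_alloc_irq_vectors(efct->pcidev, 1, num_interrupts,
				   PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (rc < 0)
		return rc;
	efct->n_msix_vec = rc;

	for (i = 0; i < efct->n_msix_vec; i++) {
		rc = request_irq(pci_irq_vector(efct->pcidev, i),
				 efct_intr_msix, 0, EFCT_DRIVER_NAME,
				 &efct->intr_context[i]);
		if (rc)
			return rc;
	}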

> +
> +static struct pci_device_id efct_pci_table[] = {
> +	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_ID_LPE31004), 0},
> +	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_ID_G7), 0},
> +	{}	/* terminate list */
> +};
> +

Ah. What happened to the G6 HW mentioned at the top?

> +static int
> +efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> +{
> +	struct efct *efct = NULL;
> +	int rc;
> +	u32 i, r;
> +	int num_interrupts = 0;
> +	int nid;
> +	struct task_struct *thread = NULL;
> +
> +	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
> +
> +	rc = pci_enable_device_mem(pdev);
> +	if (rc)
> +		goto efct_pci_probe_err_enable;
> +
> +	pci_set_master(pdev);
> +
> +	rc = pci_set_mwi(pdev);
> +	if (rc) {
> +		dev_info(&pdev->dev,
> +			 "pci_set_mwi returned %d\n", rc);
> +		goto efct_pci_probe_err_set_mwi;
> +	}
> +
> +	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
> +	if (rc) {
> +		dev_err(&pdev->dev, "pci_request_regions failed\n");
> +		goto efct_pci_probe_err_request_regions;
> +	}
> +
> +	/* Fetch the Numa node id for this device */
> +	nid = dev_to_node(&pdev->dev);
> +	if (nid < 0) {
> +		dev_err(&pdev->dev,
> +			"Warning Numa node ID is %d\n", nid);
> +		nid = 0;
> +	}
> +
> +	/* Allocate efct */
> +	efct = efct_device_alloc(nid);
> +	if (!efct) {
> +		dev_err(&pdev->dev, "Failed to allocate efct_t\n");
> +		rc = -ENOMEM;
> +		goto efct_pci_probe_err_efct_device_alloc;
> +	}
> +
> +	efct->pcidev = pdev;
> +
> +	if (efct->enable_numa_support)
> +		efct->numa_node = nid;
> +
> +	/* Map all memory BARs */
> +	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
> +			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
> +						  pci_resource_len(pdev, i));
> +			r++;
> +		}
> +
> +		/*
> +		 * If the 64-bit attribute is set, both this BAR and the
> +		 * next form the complete address. Skip processing the
> +		 * next BAR.
> +		 */
> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
> +			i++;
> +	}
> +
> +	pci_set_drvdata(pdev, efct);
> +
> +	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 ||
> +	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
> +		dev_warn(&pdev->dev,
> +			 "trying DMA_BIT_MASK(32)\n");
> +		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 ||
> +		    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
> +			dev_err(&pdev->dev,
> +				"setting DMA_BIT_MASK failed\n");
> +			rc = -1;
> +			goto efct_pci_probe_err_setup_thread;
> +		}
> +	}
> +
> +	num_interrupts = efct_device_interrupts_required(efct);
> +	if (num_interrupts < 0) {
> +		efc_log_err(efct, "efct_device_interrupts_required failed\n");
> +		rc = -1;
> +		goto efct_pci_probe_err_setup_thread;
> +	}
> +
> +	/*
> +	 * Initialize MSIX interrupts, note,
> +	 * efct_setup_msix() enables the interrupt
> +	 */
> +	rc = efct_setup_msix(efct, num_interrupts);
> +	if (rc) {
> +		dev_err(&pdev->dev, "Can't setup msix\n");
> +		goto efct_pci_probe_err_setup_msix;
> +	}
> +	/* Disable interrupt for now */
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		efc_log_debug(efct, "irq %d disabled\n",
> +			       efct->msix_vec[i].vector);
> +		disable_irq(efct->msix_vec[i].vector);
> +	}
> +
> +	rc = efct_device_attach((struct efct *)efct);
> +	if (rc)
> +		goto efct_pci_probe_err_setup_msix;
> +
> +	return 0;
> +
> +efct_pci_probe_err_setup_msix:
> +	for (i = 0; i < (u32)num_interrupts; i++) {
> +		thread = efct->intr_context[i].thread;
> +		if (!thread)
> +			continue;
> +
> +		/* Call stop */
> +		kthread_stop(thread);
> +	}
> +
> +efct_pci_probe_err_setup_thread:
> +	pci_set_drvdata(pdev, NULL);
> +
> +	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (efct->reg[i])
> +			iounmap(efct->reg[i]);
> +	}
> +	efct_device_free(efct);
> +efct_pci_probe_err_efct_device_alloc:
> +	pci_release_regions(pdev);
> +efct_pci_probe_err_request_regions:
> +	pci_clear_mwi(pdev);
> +efct_pci_probe_err_set_mwi:
> +	pci_disable_device(pdev);
> +efct_pci_probe_err_enable:
> +	return rc;
> +}
> +
> +static void
> +efct_pci_remove(struct pci_dev *pdev)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +	u32	i;
> +
> +	if (!efct)
> +		return;
> +
> +	efct_device_detach(efct);
> +
> +	efct_teardown_msix(efct);
> +
> +	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (efct->reg[i])
> +			iounmap(efct->reg[i]);
> +	}
> +
> +	pci_set_drvdata(pdev, NULL);
> +
> +	efct_devices[efct->instance_index] = NULL;
> +
> +	efct_device_free(efct);
> +
> +	pci_release_regions(pdev);
> +
> +	pci_disable_device(pdev);
> +}
> +
> +static void
> +efct_device_prep_for_reset(struct efct *efct, struct pci_dev *pdev)
> +{
> +	if (efct) {
> +		efc_log_debug(efct,
> +			       "PCI channel disable preparing for reset\n");
> +		efct_device_detach(efct);
> +		/* Disable interrupt and pci device */
> +		efct_teardown_msix(efct);
> +	}
> +	pci_disable_device(pdev);
> +}
> +
> +static void
> +efct_device_prep_for_recover(struct efct *efct)
> +{
> +	if (efct) {
> +		efc_log_debug(efct, "PCI channel preparing for recovery\n");
> +		efct_hw_io_abort_all(&efct->hw);
> +	}
> +}
> +
> +/**
> + * efct_pci_io_error_detected - method for handling PCI I/O error
> + * @pdev: pointer to PCI device.
> + * @state: the current PCI connection state.
> + *
> + * This routine is registered to the PCI subsystem for error handling. This
> + * function is called by the PCI subsystem after a PCI bus error affecting
> + * this device has been detected. When this routine is invoked, it dispatches
> + * device error detected handling routine, which will perform the proper
> + * error detected operation.
> + *
> + * Return codes
> + * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
> + * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
> + */
> +static pci_ers_result_t
> +efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +	pci_ers_result_t rc;
> +
> +	switch (state) {
> +	case pci_channel_io_normal:
> +		efct_device_prep_for_recover(efct);
> +		rc = PCI_ERS_RESULT_CAN_RECOVER;
> +		break;
> +	case pci_channel_io_frozen:
> +		efct_device_prep_for_reset(efct, pdev);
> +		rc = PCI_ERS_RESULT_NEED_RESET;
> +		break;
> +	case pci_channel_io_perm_failure:
> +		efct_device_detach(efct);
> +		rc = PCI_ERS_RESULT_DISCONNECT;
> +		break;
> +	default:
> +		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
> +			       state);
> +		efct_device_prep_for_reset(efct, pdev);
> +		rc = PCI_ERS_RESULT_NEED_RESET;
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +static pci_ers_result_t
> +efct_pci_io_slot_reset(struct pci_dev *pdev)
> +{
> +	int rc;
> +	struct efct *efct = pci_get_drvdata(pdev);
> +
> +	rc = pci_enable_device_mem(pdev);
> +	if (rc) {
> +		efc_log_err(efct,
> +			     "failed to re-enable PCI device after reset.\n");
> +		return PCI_ERS_RESULT_DISCONNECT;
> +	}
> +
> +	/*
> +	 * pci_restore_state() now clears the device's saved_state flag, so
> +	 * the state needs to be saved again after it has been restored.
> +	 */
> +
> +	pci_save_state(pdev);
> +
> +	pci_set_master(pdev);
> +
> +	rc = efct_setup_msix(efct, efct->n_msix_vec);
> +	if (rc)
> +		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
> +			    rc);
> +
> +	/* Perform device reset */
> +	efct_device_detach(efct);
> +	/* Bring device online */
> +	efct_device_attach(efct);
> +
> +	return PCI_ERS_RESULT_RECOVERED;
> +}
> +
> +static void
> +efct_pci_io_resume(struct pci_dev *pdev)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +
> +	/* Perform device reset */
> +	efct_device_detach(efct);
> +	/* Bring device online */
> +	efct_device_attach(efct);
> +}
> +
> +MODULE_DEVICE_TABLE(pci, efct_pci_table);
> +
> +static struct pci_error_handlers efct_pci_err_handler = {
> +	.error_detected = efct_pci_io_error_detected,
> +	.slot_reset = efct_pci_io_slot_reset,
> +	.resume = efct_pci_io_resume,
> +};
> +
> +static struct pci_driver efct_pci_driver = {
> +	.name		= EFCT_DRIVER_NAME,
> +	.id_table	= efct_pci_table,
> +	.probe		= efct_pci_probe,
> +	.remove		= efct_pci_remove,
> +	.err_handler	= &efct_pci_err_handler,
> +};
> +
> +static int efct_proc_get(struct seq_file *m, void *v)
> +{
> +	u32 i;
> +	u32 j;
> +	u32 device_count = 0;
> +
> +	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
> +		if (efct_devices[i])
> +			device_count++;
> +	}
> +
> +	seq_printf(m, "%d\n", device_count);
> +
> +	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
> +		if (efct_devices[i]) {
> +			struct efct *efct = efct_devices[i];
> +
> +			for (j = 0; j < efct->n_msix_vec; j++) {
> +				seq_printf(m, "%d,%d,%d\n", i,
> +					   efct->msix_vec[j].vector,
> +					-1);
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int efct_proc_open(struct inode *inode, struct file *file)
> +{
> +	return single_open(file, efct_proc_get, NULL);
> +}
> +
> +static const struct file_operations efct_proc_fops = {
> +	.owner = THIS_MODULE,
> +	.open = efct_proc_open,
> +	.read = seq_read,
> +	.llseek = seq_lseek,
> +	.release = single_release,
> +};
> +

Proc interface? Seriously?
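
For illustration only -- if the dump is worth keeping at all, a debugfs
file would be the usual home for it. A rough, untested sketch (every name
below is invented for the example; it assumes the body of efct_proc_get()
is reused as the show routine):

/* needs <linux/debugfs.h> and <linux/seq_file.h> */
static int efct_queues_show(struct seq_file *m, void *v)
{
	/* same body as efct_proc_get() above */
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(efct_queues);

static struct dentry *efct_debugfs_root;

static void efct_debugfs_init(void)
{
	efct_debugfs_root = debugfs_create_dir("efct", NULL);
	debugfs_create_file("devices", 0444, efct_debugfs_root, NULL,
			    &efct_queues_fops);
}

That drops the procfs dependency entirely and keeps a plain target driver
out of /proc.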

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 17/32] elx: efct: Hardware queues creation and deletion
  2019-12-20 22:37 ` [PATCH v2 17/32] elx: efct: Hardware queues creation and deletion James Smart
@ 2020-01-09  9:10   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:10 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for queue creation, deletion, and configuration.
> Driven by strings describing configuration topology with
> parsers for the strings.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw_queues.c | 1456 ++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw_queues.h |   67 ++
>  2 files changed, 1523 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
>  create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
> new file mode 100644
> index 000000000000..8bbeef8ad22d
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw_queues.c
> @@ -0,0 +1,1456 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_hw.h"
> +#include "efct_hw_queues.h"
> +#include "efct_unsol.h"
> +
> +/**
> + * Given the parsed queue topology spec, the SLI queues are created and
> + * initialized
> + */
> +enum efct_hw_rtn
> +efct_hw_init_queues(struct efct_hw *hw, struct efct_hw_qtop *qtop)
> +{
> +	u32 i, j, k;
> +	u32 default_lengths[QTOP_LAST], len;
> +	u32 rqset_len = 0, rqset_count = 0;
> +	u8 rqset_filter_mask = 0;
> +	struct hw_eq *eqs[EFCT_HW_MAX_MRQS];
> +	struct hw_cq *cqs[EFCT_HW_MAX_MRQS];
> +	struct hw_rq *rqs[EFCT_HW_MAX_MRQS];
> +	struct efct_hw_qtop_entry *qt, *next_qt;
> +	struct efct_hw_mrq mrq;
> +	bool use_mrq = false;
> +
> +	struct hw_eq *eq = NULL;
> +	struct hw_cq *cq = NULL;
> +	struct hw_wq *wq = NULL;
> +	struct hw_rq *rq = NULL;
> +	struct hw_mq *mq = NULL;
> +
> +	mrq.num_pairs = 0;
> +	default_lengths[QTOP_EQ] = 1024;
> +	default_lengths[QTOP_CQ] = hw->num_qentries[SLI_QTYPE_CQ];
> +	default_lengths[QTOP_WQ] = hw->num_qentries[SLI_QTYPE_WQ];
> +	default_lengths[QTOP_RQ] = hw->num_qentries[SLI_QTYPE_RQ];
> +	default_lengths[QTOP_MQ] = EFCT_HW_MQ_DEPTH;
> +
> +	hw->eq_count = 0;
> +	hw->cq_count = 0;
> +	hw->mq_count = 0;
> +	hw->wq_count = 0;
> +	hw->rq_count = 0;
> +	hw->hw_rq_count = 0;
> +	INIT_LIST_HEAD(&hw->eq_list);
> +
> +	/* If MRQ is requested, Check if it is supported by SLI. */
> +	if (hw->config.n_rq > 1 &&
> +	    !(hw->sli.features & SLI4_REQFEAT_MRQP)) {
> +		efc_log_err(hw->os, "MRQ topology not supported by SLI4.\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->config.n_rq > 1)
> +		use_mrq = true;
> +
> +	/* Allocate class WQ pools */
> +	for (i = 0; i < ARRAY_SIZE(hw->wq_class_array); i++) {
> +		hw->wq_class_array[i] = efct_varray_alloc(hw->os,
> +							  EFCT_HW_MAX_NUM_WQ);
> +		if (!hw->wq_class_array[i]) {
> +			efc_log_err(hw->os,
> +				     "efct_varray_alloc for wq_class failed\n");
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +	}
> +
> +	/* Allocate per CPU WQ pools */
> +	for (i = 0; i < ARRAY_SIZE(hw->wq_cpu_array); i++) {
> +		hw->wq_cpu_array[i] = efct_varray_alloc(hw->os,
> +							EFCT_HW_MAX_NUM_WQ);
> +		if (!hw->wq_cpu_array[i]) {
> +			efc_log_err(hw->os,
> +				     "efct_varray_alloc for wq_cpu failed\n");
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +	}
> +
> +	for (i = 0, qt = qtop->entries; i < qtop->inuse_count; i++, qt++) {
> +		if (i == qtop->inuse_count - 1)
> +			next_qt = NULL;
> +		else
> +			next_qt = qt + 1;
> +
> +		switch (qt->entry) {
> +		case QTOP_EQ:
> +			len = (qt->len) ? qt->len : default_lengths[QTOP_EQ];
> +
> +			if (qt->set_default) {
> +				default_lengths[QTOP_EQ] = len;
> +				break;
> +			}
> +
> +			eq = efct_hw_new_eq(hw, len);
> +			if (!eq) {
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +			break;
> +
> +		case QTOP_CQ:
> +			len = (qt->len) ? qt->len : default_lengths[QTOP_CQ];
> +
> +			if (qt->set_default) {
> +				default_lengths[QTOP_CQ] = len;
> +				break;
> +			}
> +
> +			/* If this CQ is for MRQ, then delay the creation */
> +			if (!use_mrq || next_qt->entry != QTOP_RQ) {
> +				if (!eq)
> +					return EFCT_HW_RTN_NO_MEMORY;
> +
> +				cq = efct_hw_new_cq(eq, len);
> +				if (!cq) {
> +					efct_hw_queue_teardown(hw);
> +					return EFCT_HW_RTN_NO_MEMORY;
> +				}
> +			}
> +			break;
> +
> +		case QTOP_WQ: {
> +			len = (qt->len) ? qt->len : default_lengths[QTOP_WQ];
> +			if (qt->set_default) {
> +				default_lengths[QTOP_WQ] = len;
> +				break;
> +			}
> +
> +			if ((hw->ulp_start + qt->ulp) > hw->ulp_max) {
> +				efc_log_err(hw->os,
> +					     "invalid ULP %d WQ\n", qt->ulp);
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +
> +			wq = efct_hw_new_wq(cq, len,
> +					    qt->class, hw->ulp_start + qt->ulp);
> +			if (!wq) {
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +
> +			/* Place this WQ on the EQ WQ array */
> +			if (efct_varray_add(eq->wq_array, wq)) {
> +				efc_log_err(hw->os,
> +					     "QTOP_WQ:EQ efct_varray_add fail\n");
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_ERROR;
> +			}
> +
> +			/* Place this WQ on the HW class array */
> +			if (qt->class < ARRAY_SIZE(hw->wq_class_array)) {
> +				if (efct_varray_add
> +					(hw->wq_class_array[qt->class], wq)) {
> +					efc_log_err(hw->os,
> +						     "HW wq_class_array efct_varray_add failed\n");
> +					efct_hw_queue_teardown(hw);
> +					return EFCT_HW_RTN_ERROR;
> +				}
> +			} else {
> +				efc_log_err(hw->os,
> +					     "Invalid class value: %d\n",
> +					    qt->class);
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_ERROR;
> +			}
> +
> +			/*
> +			 * Place this WQ on the per CPU list, assuming that EQs
> +			 * are mapped to cpu given by the EQ instance modulo
> +			 * number of CPUs
> +			 */
> +			if (efct_varray_add(hw->wq_cpu_array[eq->instance %
> +					   num_online_cpus()], wq)) {
> +				efc_log_err(hw->os,
> +					     "HW wq_cpu_array efct_varray_add failed\n");
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_ERROR;
> +			}
> +
> +			break;
> +		}
> +		case QTOP_RQ: {
> +			len = (qt->len) ? qt->len : EFCT_HW_RQ_ENTRIES_DEF;
> +
> +			/*
> +			 * Use the max supported queue length
> +			 * if qtop rq len is not a valid value
> +			 */
> +			if (len > default_lengths[QTOP_RQ] ||
> +			    (len % EFCT_HW_RQ_ENTRIES_MIN)) {
> +				efc_log_info(hw->os,
> +					      "QTOP RQ len %d is invalid. Using max supported RQ len %d\n",
> +					len, default_lengths[QTOP_RQ]);
> +				len = default_lengths[QTOP_RQ];
> +			}
> +
> +			if (qt->set_default) {
> +				default_lengths[QTOP_RQ] = len;
> +				break;
> +			}
> +
> +			if ((hw->ulp_start + qt->ulp) > hw->ulp_max) {
> +				efc_log_err(hw->os,
> +					     "invalid ULP %d RQ\n", qt->ulp);
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +
> +			if (use_mrq) {
> +				k = mrq.num_pairs;
> +				mrq.rq_cfg[k].len = len;
> +				mrq.rq_cfg[k].ulp = hw->ulp_start + qt->ulp;
> +				mrq.rq_cfg[k].filter_mask = qt->filter_mask;
> +				mrq.rq_cfg[k].eq = eq;
> +				mrq.num_pairs++;
> +			} else {
> +				rq = efct_hw_new_rq(cq, len,
> +						    hw->ulp_start + qt->ulp);
> +				if (!rq) {
> +					efct_hw_queue_teardown(hw);
> +					return EFCT_HW_RTN_NO_MEMORY;
> +				}
> +				rq->filter_mask = qt->filter_mask;
> +			}
> +			break;
> +		}
> +
> +		case QTOP_MQ:
> +			len = (qt->len) ? qt->len : default_lengths[QTOP_MQ];
> +			if (qt->set_default) {
> +				default_lengths[QTOP_MQ] = len;
> +				break;
> +			}
> +
> +			if (!cq)
> +				return EFCT_HW_RTN_NO_MEMORY;
> +
> +			mq = efct_hw_new_mq(cq, len);
> +			if (!mq) {
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +			break;
> +
> +		default:
> +			efc_log_crit(hw->os, "Unknown Queue\n");
> +			break;
> +		}
> +	}
> +
> +	if (mrq.num_pairs) {
> +		/* First create normal RQs. */
> +		for (i = 0; i < mrq.num_pairs; i++) {
> +			for (j = 0; j < mrq.num_pairs; j++) {
> +				if (i != j &&
> +				    mrq.rq_cfg[i].filter_mask ==
> +				     mrq.rq_cfg[j].filter_mask) {
> +					/* This should be created using set */
> +					if (rqset_filter_mask &&
> +					    rqset_filter_mask !=
> +					     mrq.rq_cfg[i].filter_mask) {
> +						efc_log_crit(hw->os,
> +							      "Can't create > 1 RQ Set\n");
> +						efct_hw_queue_teardown(hw);
> +						return EFCT_HW_RTN_ERROR;
> +					} else if (!rqset_filter_mask) {
> +						rqset_filter_mask =
> +						      mrq.rq_cfg[i].filter_mask;
> +						rqset_len = mrq.rq_cfg[i].len;
> +					}
> +					eqs[rqset_count] = mrq.rq_cfg[i].eq;
> +					rqset_count++;
> +					break;
> +				}
> +			}
> +			if (j == mrq.num_pairs) {
> +				/* Normal RQ */
> +				cq = efct_hw_new_cq(mrq.rq_cfg[i].eq,
> +						    default_lengths[QTOP_CQ]);
> +				if (!cq) {
> +					efct_hw_queue_teardown(hw);
> +					return EFCT_HW_RTN_NO_MEMORY;
> +				}
> +
> +				rq = efct_hw_new_rq(cq, mrq.rq_cfg[i].len,
> +						    mrq.rq_cfg[i].ulp);
> +				if (!rq) {
> +					efct_hw_queue_teardown(hw);
> +					return EFCT_HW_RTN_NO_MEMORY;
> +				}
> +				rq->filter_mask = mrq.rq_cfg[i].filter_mask;
> +			}
> +		}
> +
> +		/* Now create RQ Set */
> +		if (rqset_count) {
> +			/* Create CQ set */
> +			if (efct_hw_new_cq_set(eqs, cqs, rqset_count,
> +					       default_lengths[QTOP_CQ])) {
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_ERROR;
> +			}
> +
> +			/* Create RQ set */
> +			if (efct_hw_new_rq_set(cqs, rqs, rqset_count,
> +					       rqset_len)) {
> +				efct_hw_queue_teardown(hw);
> +				return EFCT_HW_RTN_ERROR;
> +			}
> +
> +			for (i = 0; i < rqset_count ; i++) {
> +				rqs[i]->filter_mask = rqset_filter_mask;
> +				rqs[i]->is_mrq = true;
> +				rqs[i]->base_mrq_id = rqs[0]->hdr->id;
> +			}
> +
> +			hw->hw_mrq_count = rqset_count;
> +		}
> +	}
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +/* Allocate a new EQ object */
> +struct hw_eq *
> +efct_hw_new_eq(struct efct_hw *hw, u32 entry_count)
> +{
> +	struct hw_eq *eq = kmalloc(sizeof(*eq), GFP_KERNEL);
> +
> +	if (eq) {
> +		memset(eq, 0, sizeof(*eq));
> +		eq->type = SLI_QTYPE_EQ;
> +		eq->hw = hw;
> +		eq->entry_count = entry_count;
> +		eq->instance = hw->eq_count++;
> +		eq->queue = &hw->eq[eq->instance];
> +		INIT_LIST_HEAD(&eq->cq_list);
> +
> +		eq->wq_array = efct_varray_alloc(hw->os, EFCT_HW_MAX_NUM_WQ);
> +		if (!eq->wq_array) {
> +			kfree(eq);
> +			eq = NULL;
> +		} else {
> +			if (sli_queue_alloc(&hw->sli, SLI_QTYPE_EQ,
> +					    eq->queue,
> +					    entry_count, NULL)) {
> +				efc_log_err(hw->os,
> +					     "EQ[%d] allocation failure\n",
> +					    eq->instance);
> +				kfree(eq);
> +				eq = NULL;
> +			} else {
> +				sli_eq_modify_delay(&hw->sli, eq->queue,
> +						    1, 0, 8);
> +				hw->hw_eq[eq->instance] = eq;
> +				INIT_LIST_HEAD(&eq->list_entry);
> +				list_add_tail(&eq->list_entry, &hw->eq_list);
> +				efc_log_debug(hw->os,
> +					       "create eq[%2d] id %3d len %4d\n",
> +					      eq->instance, eq->queue->id,
> +					      eq->entry_count);
> +			}
> +		}
> +	}
> +	return eq;
> +}
> +
> +/* Allocate a new CQ object */
> +struct hw_cq *
> +efct_hw_new_cq(struct hw_eq *eq, u32 entry_count)
> +{
> +	struct efct_hw *hw = eq->hw;
> +	struct hw_cq *cq = kmalloc(sizeof(*cq), GFP_KERNEL);
> +
> +	if (cq) {
> +		memset(cq, 0, sizeof(*cq));
> +		cq->eq = eq;
> +		cq->type = SLI_QTYPE_CQ;
> +		cq->instance = eq->hw->cq_count++;
> +		cq->entry_count = entry_count;
> +		cq->queue = &hw->cq[cq->instance];
> +
> +		INIT_LIST_HEAD(&cq->q_list);
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_CQ, cq->queue,
> +				    cq->entry_count, eq->queue)) {
> +			efc_log_err(hw->os,
> +				     "CQ[%d] allocation failure len=%d\n",
> +				    eq->instance,
> +				    eq->entry_count);
> +			kfree(cq);
> +			cq = NULL;
> +		} else {
> +			hw->hw_cq[cq->instance] = cq;
> +			INIT_LIST_HEAD(&cq->list_entry);
> +			list_add_tail(&cq->list_entry, &eq->cq_list);
> +			efc_log_debug(hw->os,
> +				       "create cq[%2d] id %3d len %4d\n",
> +				      cq->instance, cq->queue->id,
> +				      cq->entry_count);
> +		}
> +	}
> +	return cq;
> +}
> +
> +/* Allocate a new CQ Set of objects */
> +u32
> +efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
> +		   u32 num_cqs, u32 entry_count)
> +{
> +	u32 i;
> +	struct efct_hw *hw = eqs[0]->hw;
> +	struct sli4 *sli4 = &hw->sli;
> +	struct hw_cq *cq = NULL;
> +	struct sli4_queue *qs[SLI_MAX_CQ_SET_COUNT];
> +	struct sli4_queue *assocs[SLI_MAX_CQ_SET_COUNT];
> +
> +	/* Initialise CQS pointers to NULL */
> +	for (i = 0; i < num_cqs; i++)
> +		cqs[i] = NULL;
> +
> +	for (i = 0; i < num_cqs; i++) {
> +		cq = kmalloc(sizeof(*cq), GFP_KERNEL);
> +		if (!cq)
> +			goto error;
> +
> +		memset(cq, 0, sizeof(*cq));
> +		cqs[i]          = cq;
> +		cq->eq          = eqs[i];
> +		cq->type        = SLI_QTYPE_CQ;
> +		cq->instance    = hw->cq_count++;
> +		cq->entry_count = entry_count;
> +		cq->queue       = &hw->cq[cq->instance];
> +		qs[i]           = cq->queue;
> +		assocs[i]       = eqs[i]->queue;
> +		INIT_LIST_HEAD(&cq->q_list);
> +	}
> +
> +	if (!sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assocs)) {
> +		efc_log_err(hw->os, "Failed to create CQ Set.\n");
> +		goto error;
> +	}
> +
> +	for (i = 0; i < num_cqs; i++) {
> +		hw->hw_cq[cqs[i]->instance] = cqs[i];
> +		INIT_LIST_HEAD(&cqs[i]->list_entry);
> +		list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list);
> +	}
> +
> +	return 0;
> +
> +error:
> +	for (i = 0; i < num_cqs; i++) {
> +		kfree(cqs[i]);
> +		cqs[i] = NULL;
> +	}
> +	return -1;
> +}
> +
> +/* Allocate a new MQ object */
> +struct hw_mq *
> +efct_hw_new_mq(struct hw_cq *cq, u32 entry_count)
> +{
> +	struct efct_hw *hw = cq->eq->hw;
> +	struct hw_mq *mq = kmalloc(sizeof(*mq), GFP_KERNEL);
> +
> +	if (mq) {
> +		memset(mq, 0, sizeof(*mq));
> +		mq->cq = cq;
> +		mq->type = SLI_QTYPE_MQ;
> +		mq->instance = cq->eq->hw->mq_count++;
> +		mq->entry_count = entry_count;
> +		mq->entry_size = EFCT_HW_MQ_DEPTH;
> +		mq->queue = &hw->mq[mq->instance];
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_MQ,
> +				    mq->queue,
> +				    mq->entry_size,
> +				    cq->queue)) {
> +			efc_log_err(hw->os, "MQ allocation failure\n");
> +			kfree(mq);
> +			mq = NULL;
> +		} else {
> +			hw->hw_mq[mq->instance] = mq;
> +			INIT_LIST_HEAD(&mq->list_entry);
> +			list_add_tail(&mq->list_entry, &cq->q_list);
> +			efc_log_debug(hw->os,
> +				       "create mq[%2d] id %3d len %4d\n",
> +				      mq->instance, mq->queue->id,
> +				      mq->entry_count);
> +		}
> +	}
> +	return mq;
> +}
> +
> +/* Allocate a new WQ object */
> +struct hw_wq *
> +efct_hw_new_wq(struct hw_cq *cq, u32 entry_count,
> +	       u32 class, u32 ulp)
> +{
> +	struct efct_hw *hw = cq->eq->hw;
> +	struct hw_wq *wq = kmalloc(sizeof(*wq), GFP_KERNEL);
> +
> +	if (wq) {
> +		memset(wq, 0, sizeof(*wq));
> +		wq->hw = cq->eq->hw;
> +		wq->cq = cq;
> +		wq->type = SLI_QTYPE_WQ;
> +		wq->instance = cq->eq->hw->wq_count++;
> +		wq->entry_count = entry_count;
> +		wq->queue = &hw->wq[wq->instance];
> +		wq->ulp = ulp;
> +		wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT;
> +		wq->wqec_count = wq->wqec_set_count;
> +		wq->free_count = wq->entry_count - 1;
> +		wq->class = class;
> +		INIT_LIST_HEAD(&wq->pending_list);
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_WQ, wq->queue,
> +				    wq->entry_count, cq->queue)) {
> +			efc_log_err(hw->os, "WQ allocation failure\n");
> +			kfree(wq);
> +			wq = NULL;
> +		} else {
> +			hw->hw_wq[wq->instance] = wq;
> +			INIT_LIST_HEAD(&wq->list_entry);
> +			list_add_tail(&wq->list_entry, &cq->q_list);
> +			efc_log_debug(hw->os,
> +				       "create wq[%2d] id %3d len %4d cls %d ulp %d\n",
> +				wq->instance, wq->queue->id,
> +				wq->entry_count, wq->class, wq->ulp);
> +		}
> +	}
> +	return wq;
> +}
> +
> +/* Allocate an RQ object, which encapsulates 2 SLI queues (for rq pair) */
> +struct hw_rq *
> +efct_hw_new_rq(struct hw_cq *cq, u32 entry_count, u32 ulp)
> +{
> +	struct efct_hw *hw = cq->eq->hw;
> +	struct hw_rq *rq = kmalloc(sizeof(*rq), GFP_KERNEL);
> +
> +	if (rq) {
> +		memset(rq, 0, sizeof(*rq));
> +		rq->instance = hw->hw_rq_count++;
> +		rq->cq = cq;
> +		rq->type = SLI_QTYPE_RQ;
> +		rq->entry_count = entry_count;
> +
> +		/* Create the header RQ */
> +		rq->hdr = &hw->rq[hw->rq_count];
> +		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
> +
> +		if (sli_fc_rq_alloc(&hw->sli, rq->hdr,
> +				    rq->entry_count,
> +				    rq->hdr_entry_size,
> +				    cq->queue,
> +				    true)) {
> +			efc_log_err(hw->os,
> +				     "RQ allocation failure - header\n");
> +			kfree(rq);
> +			return NULL;
> +		}
> +		/* Update hw_rq_lookup[] */
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		efc_log_debug(hw->os,
> +			      "create rq[%2d] id %3d len %4d hdr  size %4d\n",
> +			      rq->instance, rq->hdr->id, rq->entry_count,
> +			      rq->hdr_entry_size);
> +
> +		/* Create the default data RQ */
> +		rq->data = &hw->rq[hw->rq_count];
> +		rq->data_entry_size = hw->config.rq_default_buffer_size;
> +
> +		if (sli_fc_rq_alloc(&hw->sli, rq->data,
> +				    rq->entry_count,
> +				    rq->data_entry_size,
> +				    cq->queue,
> +				    false)) {
> +			efc_log_err(hw->os,
> +				     "RQ allocation failure - first burst\n");
> +			kfree(rq);
> +			return NULL;
> +		}
> +		/* Update hw_rq_lookup[] */
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		efc_log_debug(hw->os,
> +			       "create rq[%2d] id %3d len %4d data size %4d\n",
> +			 rq->instance, rq->data->id, rq->entry_count,
> +			 rq->data_entry_size);
> +
> +		hw->hw_rq[rq->instance] = rq;
> +		INIT_LIST_HEAD(&rq->list_entry);
> +		list_add_tail(&rq->list_entry, &cq->q_list);
> +
> +		rq->rq_tracker = kmalloc_array(rq->entry_count,
> +					sizeof(struct efc_hw_sequence *),
> +					GFP_ATOMIC);
> +		if (!rq->rq_tracker)
> +			return NULL;
> +
> +		memset(rq->rq_tracker, 0,
> +		       rq->entry_count * sizeof(struct efc_hw_sequence *));
> +	}
> +	return rq;
> +}
> +
> +/**
> + * Allocate an RQ object SET, where each element in set
> + * encapsulates 2 SLI queues (for rq pair)
> + */
> +u32
> +efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
> +		   u32 num_rq_pairs, u32 entry_count)
> +{
> +	struct efct_hw *hw = cqs[0]->eq->hw;
> +	struct hw_rq *rq = NULL;
> +	struct sli4_queue *qs[SLI_MAX_RQ_SET_COUNT * 2] = { NULL };
> +	u32 i, q_count, size;
> +
> +	/* Initialise RQS pointers */
> +	for (i = 0; i < num_rq_pairs; i++)
> +		rqs[i] = NULL;
> +
> +	for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) {
> +		rq = kmalloc(sizeof(*rq), GFP_KERNEL);
> +		if (!rq)
> +			goto error;
> +
> +		memset(rq, 0, sizeof(*rq));
> +		rqs[i] = rq;
> +		rq->instance = hw->hw_rq_count++;
> +		rq->cq = cqs[i];
> +		rq->type = SLI_QTYPE_RQ;
> +		rq->entry_count = entry_count;
> +
> +		/* Header RQ */
> +		rq->hdr = &hw->rq[hw->rq_count];
> +		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		qs[q_count] = rq->hdr;
> +
> +		/* Data RQ */
> +		rq->data = &hw->rq[hw->rq_count];
> +		rq->data_entry_size = hw->config.rq_default_buffer_size;
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		qs[q_count + 1] = rq->data;
> +
> +		rq->rq_tracker = NULL;
> +	}
> +
> +	if (!sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs,
> +				cqs[0]->queue->id,
> +			    rqs[0]->entry_count,
> +			    rqs[0]->hdr_entry_size,
> +			    rqs[0]->data_entry_size)) {
> +		efc_log_err(hw->os,
> +			     "RQ Set allocation failure for base CQ=%d\n",
> +			    cqs[0]->queue->id);
> +		goto error;
> +	}
> +
> +	for (i = 0; i < num_rq_pairs; i++) {
> +		hw->hw_rq[rqs[i]->instance] = rqs[i];
> +		INIT_LIST_HEAD(&rqs[i]->list_entry);
> +		list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list);
> +		size = sizeof(struct efc_hw_sequence *) * rqs[i]->entry_count;
> +		rqs[i]->rq_tracker = kmalloc(size, GFP_KERNEL);
> +		if (!rqs[i]->rq_tracker)
> +			goto error;
> +	}
> +
> +	return 0;
> +
> +error:
> +	for (i = 0; i < num_rq_pairs; i++) {
> +		if (rqs[i]) {
> +			kfree(rqs[i]->rq_tracker);
> +			kfree(rqs[i]);
> +		}
> +	}
> +
> +	return -1;
> +}
> +
> +void
> +efct_hw_del_eq(struct hw_eq *eq)
> +{
> +	if (eq) {
> +		struct hw_cq *cq;
> +		struct hw_cq *cq_next;
> +
> +		list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry)
> +			efct_hw_del_cq(cq);
> +		efct_varray_free(eq->wq_array);
> +		list_del(&eq->list_entry);
> +		eq->hw->hw_eq[eq->instance] = NULL;
> +		kfree(eq);
> +	}
> +}
> +
> +void
> +efct_hw_del_cq(struct hw_cq *cq)
> +{
> +	if (cq) {
> +		struct hw_q *q;
> +		struct hw_q *q_next;
> +
> +		list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) {
> +			switch (q->type) {
> +			case SLI_QTYPE_MQ:
> +				efct_hw_del_mq((struct hw_mq *)q);
> +				break;
> +			case SLI_QTYPE_WQ:
> +				efct_hw_del_wq((struct hw_wq *)q);
> +				break;
> +			case SLI_QTYPE_RQ:
> +				efct_hw_del_rq((struct hw_rq *)q);
> +				break;
> +			default:
> +				break;
> +			}
> +		}
> +		list_del(&cq->list_entry);
> +		cq->eq->hw->hw_cq[cq->instance] = NULL;
> +		kfree(cq);
> +	}
> +}
> +
> +void
> +efct_hw_del_mq(struct hw_mq *mq)
> +{
> +	if (mq) {
> +		list_del(&mq->list_entry);
> +		mq->cq->eq->hw->hw_mq[mq->instance] = NULL;
> +		kfree(mq);
> +	}
> +}
> +
> +void
> +efct_hw_del_wq(struct hw_wq *wq)
> +{
> +	if (wq) {
> +		list_del(&wq->list_entry);
> +		wq->cq->eq->hw->hw_wq[wq->instance] = NULL;
> +		kfree(wq);
> +	}
> +}
> +
> +void
> +efct_hw_del_rq(struct hw_rq *rq)
> +{
> +	struct efct_hw *hw = NULL;
> +
> +	if (rq) {
> +		/* Free RQ tracker */
> +		kfree(rq->rq_tracker);
> +		rq->rq_tracker = NULL;
> +		list_del(&rq->list_entry);
> +		hw = rq->cq->eq->hw;
> +		hw->hw_rq[rq->instance] = NULL;
> +		kfree(rq);
> +	}
> +}
> +
> +void
> +efct_hw_queue_dump(struct efct_hw *hw)
> +{
> +	struct hw_eq *eq;
> +	struct hw_cq *cq;
> +	struct hw_q *q;
> +	struct hw_mq *mq;
> +	struct hw_wq *wq;
> +	struct hw_rq *rq;
> +
> +	list_for_each_entry(eq, &hw->eq_list, list_entry) {
> +		efc_log_debug(hw->os, "eq[%d] id %2d\n",
> +			       eq->instance, eq->queue->id);
> +		list_for_each_entry(cq, &eq->cq_list, list_entry) {
> +			efc_log_debug(hw->os, "cq[%d] id %2d current\n",
> +				       cq->instance, cq->queue->id);
> +			list_for_each_entry(q, &cq->q_list, list_entry) {
> +				switch (q->type) {
> +				case SLI_QTYPE_MQ:
> +					mq = (struct hw_mq *)q;
> +					efc_log_debug(hw->os,
> +						       "    mq[%d] id %2d\n",
> +					       mq->instance, mq->queue->id);
> +					break;
> +				case SLI_QTYPE_WQ:
> +					wq = (struct hw_wq *)q;
> +					efc_log_debug(hw->os,
> +						       "    wq[%d] id %2d\n",
> +						wq->instance, wq->queue->id);
> +					break;
> +				case SLI_QTYPE_RQ:
> +					rq = (struct hw_rq *)q;
> +					efc_log_debug(hw->os,
> +						       "    rq[%d] hdr id %2d\n",
> +					       rq->instance, rq->hdr->id);
> +					break;
> +				default:
> +					break;
> +				}
> +			}
> +		}
> +	}
> +}
> +
> +void
> +efct_hw_queue_teardown(struct efct_hw *hw)
> +{
> +	u32 i;
> +	struct hw_eq *eq;
> +	struct hw_eq *eq_next;
> +
> +	if (hw->eq_list.next) {
> +		list_for_each_entry_safe(eq, eq_next, &hw->eq_list,
> +					 list_entry) {
> +			efct_hw_del_eq(eq);
> +		}
> +	}
> +	for (i = 0; i < ARRAY_SIZE(hw->wq_cpu_array); i++) {
> +		efct_varray_free(hw->wq_cpu_array[i]);
> +		hw->wq_cpu_array[i] = NULL;
> +	}
> +	for (i = 0; i < ARRAY_SIZE(hw->wq_class_array); i++) {
> +		efct_varray_free(hw->wq_class_array[i]);
> +		hw->wq_class_array[i] = NULL;
> +	}
> +}
> +
> +/**
> + * Allocate a WQ to an IO object
> + *
> + * The next work queue index is used to assign a WQ to an IO.
> + *
> + * If wq_steering is EFCT_HW_WQ_STEERING_CLASS, a WQ from io->wq_class is
> + * selected.
> + *
> + * If wq_steering is EFCT_HW_WQ_STEERING_REQUEST, then a WQ from the EQ that
> + * the IO request came in on is selected.
> + *
> + * If wq_steering is EFCT_HW_WQ_STEERING_CPU, then a WQ associated with the
> + * CPU the request is made on is selected.
> + */
> +struct hw_wq *
> +efct_hw_queue_next_wq(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	struct hw_eq *eq;
> +	struct hw_wq *wq = NULL;
> +	u32 cpuidx;
> +
> +	switch (io->wq_steering) {
> +	case EFCT_HW_WQ_STEERING_CLASS:
> +		if (unlikely(io->wq_class >= ARRAY_SIZE(hw->wq_class_array)))
> +			break;
> +
> +		wq = efct_varray_iter_next(hw->wq_class_array[io->wq_class]);
> +		break;
> +	case EFCT_HW_WQ_STEERING_REQUEST:
> +		eq = io->eq;
> +		if (likely(eq))
> +			wq = efct_varray_iter_next(eq->wq_array);
> +		break;
> +	case EFCT_HW_WQ_STEERING_CPU:
> +		cpuidx = in_interrupt() ?
> +			raw_smp_processor_id() : task_cpu(current);
> +
> +		if (likely(cpuidx < ARRAY_SIZE(hw->wq_cpu_array)))
> +			wq = efct_varray_iter_next(hw->wq_cpu_array[cpuidx]);
> +		break;
> +	}
> +
> +	if (unlikely(!wq))
> +		wq = hw->hw_wq[0];
> +
> +	return wq;
> +}
> +
> +u32
> +efct_hw_qtop_eq_count(struct efct_hw *hw)
> +{
> +	return hw->qtop->entry_counts[QTOP_EQ];
> +}
> +
> +#define TOKEN_LEN		32
> +
> +/* token types */
> +enum tok_type {
> +	TOK_LPAREN = 1,
> +	TOK_RPAREN,
> +	TOK_COLON,
> +	TOK_EQUALS,
> +	TOK_QUEUE,
> +	TOK_ATTR_NAME,
> +	TOK_NUMBER,
> +	TOK_NUMBER_VALUE,
> +	TOK_NUMBER_LIST,
> +};
> +
> +/* token sub-types */
> +enum tok_subtype {
> +	TOK_SUB_EQ = 100,
> +	TOK_SUB_CQ,
> +	TOK_SUB_RQ,
> +	TOK_SUB_MQ,
> +	TOK_SUB_WQ,
> +	TOK_SUB_LEN,
> +	TOK_SUB_CLASS,
> +	TOK_SUB_ULP,
> +	TOK_SUB_FILTER,
> +};
> +
> +/* convert queue subtype to QTOP entry */
> +static enum efct_hw_qtop_type
> +subtype2qtop(enum tok_subtype q)
> +{
> +	switch (q) {
> +	case TOK_SUB_EQ:	return QTOP_EQ;
> +	case TOK_SUB_CQ:	return QTOP_CQ;
> +	case TOK_SUB_RQ:	return QTOP_RQ;
> +	case TOK_SUB_MQ:	return QTOP_MQ;
> +	case TOK_SUB_WQ:	return QTOP_WQ;
> +	default:
> +		break;
> +	}
> +	return 0;
> +}
> +
> +/* Declare token object */
> +struct tok {
> +	enum tok_type type;
> +	enum tok_subtype subtype;
> +	char string[TOKEN_LEN];
> +};
> +
> +/* Declare token array object */
> +struct tokarray {
> +	struct tok *tokens;
> +	u32 alloc_count;
> +	u32 inuse_count;
> +	u32 iter_idx;
> +};
> +
> +/* token match structure */
> +struct tokmatch {
> +	char *s;
> +	enum tok_type type;
> +	enum tok_subtype subtype;
> +};
> +
> +static int
> +idstart(int c)
> +{
> +	return	isalpha(c) || (c == '_') || (c == '$');
> +}
> +
> +static int
> +idchar(int c)
> +{
> +	return idstart(c) || isdigit(c);
> +}
> +
> +/* single character matches */
> +static struct tokmatch cmatches[] = {
> +	{"(", TOK_LPAREN},
> +	{")", TOK_RPAREN},
> +	{":", TOK_COLON},
> +	{"=", TOK_EQUALS},
> +};
> +
> +/* identifier match strings */
> +static struct tokmatch smatches[] = {
> +	{"eq", TOK_QUEUE, TOK_SUB_EQ},
> +	{"cq", TOK_QUEUE, TOK_SUB_CQ},
> +	{"rq", TOK_QUEUE, TOK_SUB_RQ},
> +	{"mq", TOK_QUEUE, TOK_SUB_MQ},
> +	{"wq", TOK_QUEUE, TOK_SUB_WQ},
> +	{"len", TOK_ATTR_NAME, TOK_SUB_LEN},
> +	{"class", TOK_ATTR_NAME, TOK_SUB_CLASS},
> +	{"ulp", TOK_ATTR_NAME, TOK_SUB_ULP},
> +	{"filter", TOK_ATTR_NAME, TOK_SUB_FILTER},
> +};
> +
> +/* The string is scanned and the next token is returned */
> +static const char *
> +tokenize(const char *s, struct tok *tok)
> +{
> +	u32 i;
> +
> +	memset(tok, 0, sizeof(*tok));
> +
> +	/* Skip over whitespace */
> +	while (*s && isspace(*s))
> +		s++;
> +
> +	/* Return if nothing left in this string */
> +	if (*s == 0)
> +		return NULL;
> +
> +	/* Look for single character matches */
> +	for (i = 0; i < ARRAY_SIZE(cmatches); i++) {
> +		if (cmatches[i].s[0] == *s) {
> +			tok->type = cmatches[i].type;
> +			tok->subtype = cmatches[i].subtype;
> +			tok->string[0] = *s++;
> +			return s;
> +		}
> +	}
> +
> +	/* Scan for a hex number or decimal */
> +	if ((s[0] == '0') && ((s[1] == 'x') || (s[1] == 'X'))) {
> +		char *p = tok->string;
> +
> +		tok->type = TOK_NUMBER;
> +
> +		*p++ = *s++;
> +		*p++ = *s++;
> +		while ((*s == ',') || isxdigit(*s)) {
> +			if ((p - tok->string) < (int)sizeof(tok->string))
> +				*p++ = *s;
> +			if (*s == ',')
> +				tok->type = TOK_NUMBER_LIST;
> +			s++;
> +		}
> +		*p = 0;
> +		return s;
> +	} else if (isdigit(*s)) {
> +		char *p = tok->string;
> +
> +		tok->type = TOK_NUMBER;
> +		while ((*s == ',') || isdigit(*s)) {
> +			if ((p - tok->string) < (int)sizeof(tok->string))
> +				*p++ = *s;
> +			if (*s == ',')
> +				tok->type = TOK_NUMBER_LIST;
> +			s++;
> +		}
> +		*p = 0;
> +		return s;
> +	}
> +
> +	/* Scan for an ID */
> +	if (idstart(*s)) {
> +		char *p = tok->string;
> +
> +		for (*p++ = *s++; idchar(*s); s++) {
> +			if ((p - tok->string) < TOKEN_LEN)
> +				*p++ = *s;
> +		}
> +
> +		/* See if this is a $ number value */
> +		if (tok->string[0] == '$') {
> +			tok->type = TOK_NUMBER_VALUE;
> +		} else {
> +			/* Look for a string match */
> +			for (i = 0; i < ARRAY_SIZE(smatches); i++) {
> +				if (strcmp(smatches[i].s, tok->string) == 0) {
> +					tok->type = smatches[i].type;
> +					tok->subtype = smatches[i].subtype;
> +					return s;
> +				}
> +			}
> +		}
> +	}
> +	return s;
> +}
> +
> +/* convert token type to string */
> +static const char *
> +token_type2s(enum tok_type type)
> +{
> +	switch (type) {
> +	case TOK_LPAREN:
> +		return "TOK_LPAREN";
> +	case TOK_RPAREN:
> +		return "TOK_RPAREN";
> +	case TOK_COLON:
> +		return "TOK_COLON";
> +	case TOK_EQUALS:
> +		return "TOK_EQUALS";
> +	case TOK_QUEUE:
> +		return "TOK_QUEUE";
> +	case TOK_ATTR_NAME:
> +		return "TOK_ATTR_NAME";
> +	case TOK_NUMBER:
> +		return "TOK_NUMBER";
> +	case TOK_NUMBER_VALUE:
> +		return "TOK_NUMBER_VALUE";
> +	case TOK_NUMBER_LIST:
> +		return "TOK_NUMBER_LIST";
> +	}
> +	return "unknown";
> +}
> +
> +/* convert token sub-type to string */
> +static const char *
> +token_subtype2s(enum tok_subtype subtype)
> +{
> +	switch (subtype) {
> +	case TOK_SUB_EQ:
> +		return "TOK_SUB_EQ";
> +	case TOK_SUB_CQ:
> +		return "TOK_SUB_CQ";
> +	case TOK_SUB_RQ:
> +		return "TOK_SUB_RQ";
> +	case TOK_SUB_MQ:
> +		return "TOK_SUB_MQ";
> +	case TOK_SUB_WQ:
> +		return "TOK_SUB_WQ";
> +	case TOK_SUB_LEN:
> +		return "TOK_SUB_LEN";
> +	case TOK_SUB_CLASS:
> +		return "TOK_SUB_CLASS";
> +	case TOK_SUB_ULP:
> +		return "TOK_SUB_ULP";
> +	case TOK_SUB_FILTER:
> +		return "TOK_SUB_FILTER";
> +	}
> +	return "";
> +}
> +
> +/*
> + * A syntax error message is found, the input tokens are dumped up to and
> + * including the token that failed as indicated by the current iterator index.
> + */
> +static void
> +tok_syntax(struct efct_hw *hw, struct tokarray *tokarray)
> +{
> +	u32 i;
> +	struct tok *tok;
> +
> +	efc_log_test(hw->os, "Syntax error:\n");
> +
> +	for (i = 0, tok = tokarray->tokens; (i <= tokarray->inuse_count);
> +	     i++, tok++) {
> +		efc_log_test(hw->os, "%s [%2d]    %-16s %-16s %s\n",
> +			      (i == tokarray->iter_idx) ? ">>>" : "   ", i,
> +			     token_type2s(tok->type),
> +			     token_subtype2s(tok->subtype), tok->string);
> +	}
> +}
> +
> +/*
> + * Parses tokens of type TOK_NUMBER and TOK_NUMBER_VALUE, returning a numeric
> + * value
> + */
> +static u32
> +tok_getnumber(struct efct_hw *hw, struct efct_hw_qtop *qtop,
> +	      struct tok *tok)
> +{
> +	u32 rval = 0;
> +	u32 num_cpus = num_online_cpus();
> +
> +	switch (tok->type) {
> +	case TOK_NUMBER_VALUE:
> +		if (strcmp(tok->string, "$ncpu") == 0)
> +			rval = num_cpus;
> +		else if (strcmp(tok->string, "$ncpu1") == 0)
> +			rval = num_cpus - 1;
> +		else if (strcmp(tok->string, "$nwq") == 0)
> +			rval = (hw) ? hw->config.n_wq : 0;
> +		else if (strcmp(tok->string, "$maxmrq") == 0)
> +			rval = (num_cpus < EFCT_HW_MAX_MRQS)
> +				? num_cpus : EFCT_HW_MAX_MRQS;
> +		else if (strcmp(tok->string, "$nulp") == 0)
> +			rval = hw->ulp_max - hw->ulp_start + 1;
> +		else if ((qtop->rptcount_idx > 0) &&
> +			 strcmp(tok->string, "$rpt0") == 0)
> +			rval = qtop->rptcount[qtop->rptcount_idx - 1];
> +		else if ((qtop->rptcount_idx > 1) &&
> +			 strcmp(tok->string, "$rpt1") == 0)
> +			rval = qtop->rptcount[qtop->rptcount_idx - 2];
> +		else if ((qtop->rptcount_idx > 2) &&
> +			 strcmp(tok->string, "$rpt2") == 0)
> +			rval = qtop->rptcount[qtop->rptcount_idx - 3];
> +		else if ((qtop->rptcount_idx > 3) &&
> +			 strcmp(tok->string, "$rpt3") == 0)
> +			rval = qtop->rptcount[qtop->rptcount_idx - 4];
> +		else if (kstrtou32(tok->string, 0, &rval))
> +			efc_log_debug(hw->os, "kstrtou32 failed\n");
> +
> +		break;
> +	case TOK_NUMBER:
> +		if (kstrtou32(tok->string, 0, &rval))
> +			efc_log_debug(hw->os, "kstrtou32 failed\n");
> +		break;
> +	default:
> +		break;
> +	}
> +	return rval;
> +}
> +
> +/* The tokens are semantically parsed, to generate QTOP entries */
> +static void
> +parse_sub_filter(struct efct_hw *hw, struct efct_hw_qtop_entry *qt,
> +		 struct tok *tok, struct efct_hw_qtop *qtop)
> +{
> +	u32 mask = 0;
> +	char *p;
> +	u32 v;
> +
> +	if (tok[3].type == TOK_NUMBER_LIST) {
> +		mask = 0;
> +		p = tok[3].string;
> +
> +		while ((p) && *p) {
> +			if (kstrtou32(p, 0, &v))
> +				efc_log_debug(hw->os, "kstrtou32 failed\n");
> +			if (v < 32)
> +				mask |= (1U << v);
> +
> +			p = strchr(p, ',');
> +			if (p)
> +				p++;
> +		}
> +		qt->filter_mask = mask;
> +	} else {
> +		qt->filter_mask = (1U << tok_getnumber(hw, qtop, &tok[3]));
> +	}
> +}
> +
> +/* The tokens are semantically parsed, to generate QTOP entries */
> +static int
> +parse_topology(struct efct_hw *hw, struct tokarray *tokarray,
> +	       struct efct_hw_qtop *qtop)
> +{
> +	struct efct_hw_qtop_entry *qt = qtop->entries + qtop->inuse_count;
> +	struct tok *tok;
> +	u32 num = 0;
> +
> +	for (; (tokarray->iter_idx < tokarray->inuse_count) &&
> +	     ((tok = &tokarray->tokens[tokarray->iter_idx]) != NULL);) {
> +		if (qtop->inuse_count >= qtop->alloc_count)
> +			return -1;
> +
> +		qt = qtop->entries + qtop->inuse_count;
> +
> +		switch (tok[0].type) {
> +		case TOK_QUEUE:
> +			qt->entry = subtype2qtop(tok[0].subtype);
> +			qt->set_default = false;
> +			qt->len = 0;
> +			qt->class = 0;
> +			qtop->inuse_count++;
> +
> +			/* Advance current token index */
> +			tokarray->iter_idx++;
> +
> +			/*
> +			 * Parse for queue attributes, possibly multiple
> +			 * instances
> +			 */
> +			while ((tokarray->iter_idx + 4) <=
> +				tokarray->inuse_count) {
> +				tok = &tokarray->tokens[tokarray->iter_idx];
> +				if (tok[0].type == TOK_COLON &&
> +				    tok[1].type == TOK_ATTR_NAME &&
> +					tok[2].type == TOK_EQUALS &&
> +					(tok[3].type == TOK_NUMBER ||
> +					 tok[3].type == TOK_NUMBER_VALUE ||
> +					 tok[3].type == TOK_NUMBER_LIST)) {
> +					num = tok_getnumber(hw, qtop, &tok[3]);
> +
> +					switch (tok[1].subtype) {
> +					case TOK_SUB_LEN:
> +						qt->len = num;
> +						break;
> +					case TOK_SUB_CLASS:
> +						qt->class = num;
> +						break;
> +					case TOK_SUB_ULP:
> +						qt->ulp = num;
> +						break;
> +					case TOK_SUB_FILTER:
> +						parse_sub_filter(hw, qt, tok,
> +								 qtop);
> +						break;
> +					default:
> +						break;
> +					}
> +					/* Advance current token index */
> +					tokarray->iter_idx += 4;
> +				} else {
> +					break;
> +				}
> +				num = 0;
> +			}
> +			qtop->entry_counts[qt->entry]++;
> +			break;
> +
> +		case TOK_ATTR_NAME:
> +			if (((tokarray->iter_idx + 5) <=
> +			      tokarray->inuse_count) &&
> +			      tok[1].type == TOK_COLON &&
> +			      tok[2].type == TOK_QUEUE &&
> +			      tok[3].type == TOK_EQUALS &&
> +			      (tok[4].type == TOK_NUMBER ||
> +			      tok[4].type == TOK_NUMBER_VALUE)) {
> +				qt->entry = subtype2qtop(tok[2].subtype);
> +				qt->set_default = true;
> +				switch (tok[0].subtype) {
> +				case TOK_SUB_LEN:
> +					qt->len = tok_getnumber(hw, qtop,
> +								&tok[4]);
> +					break;
> +				case TOK_SUB_CLASS:
> +					qt->class = tok_getnumber(hw, qtop,
> +								  &tok[4]);
> +					break;
> +				case TOK_SUB_ULP:
> +					qt->ulp = tok_getnumber(hw, qtop,
> +								&tok[4]);
> +					break;
> +				default:
> +					break;
> +				}
> +				qtop->inuse_count++;
> +				tokarray->iter_idx += 5;
> +			} else {
> +				tok_syntax(hw, tokarray);
> +				return -1;
> +			}
> +			break;
> +
> +		case TOK_NUMBER:
> +		case TOK_NUMBER_VALUE: {
> +			u32 rpt_count = 1;
> +			u32 i;
> +			u32 rpt_idx;
> +
> +			rpt_count = tok_getnumber(hw, qtop, tok);
> +
> +			if (tok[1].type == TOK_LPAREN) {
> +				u32 iter_idx_save;
> +
> +				tokarray->iter_idx += 2;
> +
> +				/* save token array iteration index */
> +				iter_idx_save = tokarray->iter_idx;
> +
> +				for (i = 0; i < rpt_count; i++) {
> +					rpt_idx = qtop->rptcount_idx;
> +
> +					if (qtop->rptcount_idx <
> +					    ARRAY_SIZE(qtop->rptcount)) {
> +						qtop->rptcount[rpt_idx + 1] = i;
> +					}
> +
> +					/* restore token array iteration idx */
> +					tokarray->iter_idx = iter_idx_save;
> +
> +					/* parse, append to qtop */
> +					parse_topology(hw, tokarray, qtop);
> +
> +					qtop->rptcount_idx = rpt_idx;
> +				}
> +			}
> +			break;
> +		}
> +
> +		case TOK_RPAREN:
> +			tokarray->iter_idx++;
> +			return 0;
> +
> +		default:
> +			tok_syntax(hw, tokarray);
> +			return -1;
> +		}
> +	}
> +	return 0;
> +}
> +
> +/*
> + * The queue topology object is allocated, and filled with the results of
> + * parsing the passed in queue topology string
> + */
> +struct efct_hw_qtop *
> +efct_hw_qtop_parse(struct efct_hw *hw, const char *qtop_string)
> +{
> +	struct efct_hw_qtop *qtop;
> +	struct tokarray tokarray;
> +	const char *s;
> +
> +	efc_log_debug(hw->os, "queue topology: %s\n", qtop_string);
> +
> +	/* Allocate a token array */
> +	tokarray.tokens = kmalloc_array(MAX_TOKENS, sizeof(*tokarray.tokens),
> +					GFP_KERNEL);
> +	if (!tokarray.tokens)
> +		return NULL;
> +	memset(tokarray.tokens, 0, MAX_TOKENS * sizeof(*tokarray.tokens));
> +	tokarray.alloc_count = MAX_TOKENS;
> +	tokarray.inuse_count = 0;
> +	tokarray.iter_idx = 0;
> +
> +	/* Parse the tokens */
> +	for (s = qtop_string; (tokarray.inuse_count < tokarray.alloc_count) &&
> +	     ((s = tokenize(s, &tokarray.tokens[tokarray.inuse_count]))) !=
> +	       NULL;)
> +		tokarray.inuse_count++;
> +
> +	/* Allocate a queue topology structure */
> +	qtop = kmalloc(sizeof(*qtop), GFP_KERNEL);
> +	if (!qtop) {
> +		kfree(tokarray.tokens);
> +		efc_log_err(hw->os, "malloc qtop failed\n");
> +		return NULL;
> +	}
> +	memset(qtop, 0, sizeof(*qtop));
> +	qtop->os = hw->os;
> +
> +	/* Allocate queue topology entries */
> +	qtop->entries = kzalloc((EFCT_HW_MAX_QTOP_ENTRIES *
> +				sizeof(*qtop->entries)), GFP_ATOMIC);
> +	if (!qtop->entries) {
> +		kfree(qtop);
> +		kfree(tokarray.tokens);
> +		return NULL;
> +	}
> +	qtop->alloc_count = EFCT_HW_MAX_QTOP_ENTRIES;
> +	qtop->inuse_count = 0;
> +
> +	/* Parse the tokens */
> +	if (parse_topology(hw, &tokarray, qtop)) {
> +		efc_log_err(hw->os, "failed to parse tokens\n");
> +		efct_hw_qtop_free(qtop);
> +		kfree(tokarray.tokens);
> +		return NULL;
> +	}
> +
> +	/* Free the tokens array */
> +	kfree(tokarray.tokens);
> +
> +	return qtop;
> +}
> +
> +void
> +efct_hw_qtop_free(struct efct_hw_qtop *qtop)
> +{
> +	if (qtop) {
> +		kfree(qtop->entries);
> +		kfree(qtop);
> +	}
> +}
Ah, so here is the magic token parsing.
So please, move the string from the previous patches into this patch to
make more sense of it.
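
For anyone following along, the grammar above boils down to queue tokens
("eq", "cq", "wq", "rq", "mq"), optional ":attr=value" suffixes (len,
class, ulp, filter) and numeric or "$ncpu"-style repeat groups in
parentheses. A purely hypothetical example -- not the driver's actual
default string, which lives in the earlier patch -- would be:

/* Illustration only: hypothetical topology string and usage. */
static int efct_qtop_example(struct efct_hw *hw)
{
	static const char qtop_str[] =
		"eq cq mq cq wq $ncpu(eq cq rq:filter=0:len=1024 cq wq)";
	struct efct_hw_qtop *qtop;
	enum efct_hw_rtn rc;

	qtop = efct_hw_qtop_parse(hw, qtop_str);
	if (!qtop)
		return -ENOMEM;

	rc = efct_hw_init_queues(hw, qtop);
	efct_hw_qtop_free(qtop);

	return rc == EFCT_HW_RTN_SUCCESS ? 0 : -EIO;
}

Having that (or the real default) next to the parser would make the
review much easier.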

> diff --git a/drivers/scsi/elx/efct/efct_hw_queues.h b/drivers/scsi/elx/efct/efct_hw_queues.h
> new file mode 100644
> index 000000000000..afa43209f823
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw_queues.h
> @@ -0,0 +1,67 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFCT_HW_QUEUES_H__
> +#define __EFCT_HW_QUEUES_H__
> +
> +#include "efct_hw.h"
> +
> +#define EFCT_HW_MQ_DEPTH	128
> +
> +enum efct_hw_qtop_type {
> +	QTOP_EQ = 0,
> +	QTOP_CQ,
> +	QTOP_WQ,
> +	QTOP_RQ,
> +	QTOP_MQ,
> +	QTOP_LAST,
> +};
> +
> +struct efct_hw_qtop_entry {
> +	enum		efct_hw_qtop_type entry;
> +	bool		set_default;
> +	u32		len;
> +	u8		class;
> +	u8		ulp;
> +	u8		filter_mask;
> +};
> +
> +struct efct_hw_mrq {
> +	struct rq_config {
> +		struct hw_eq *eq;
> +		u32	len;
> +		u8	class;
> +		u8	ulp;
> +		u8	filter_mask;
> +	} rq_cfg[16];
> +	u32 num_pairs;
> +};
> +
> +#define MAX_TOKENS			256
> +#define EFCT_HW_MAX_QTOP_ENTRIES	200
> +
> +struct efct_hw_qtop {
> +	void		*os;
> +	struct efct_hw_qtop_entry *entries;
> +	u32		alloc_count;
> +	u32		inuse_count;
> +	u32		entry_counts[QTOP_LAST];
> +	u32		rptcount[10];
> +	u32		rptcount_idx;
> +};
> +
> +struct efct_hw_qtop *
> +efct_hw_qtop_parse(struct efct_hw *hw, const char *qtop_string);
> +void efct_hw_qtop_free(struct efct_hw_qtop *qtop);
> +const char *efct_hw_qtop_entry_name(enum efct_hw_qtop_type entry);
> +u32 efct_hw_qtop_eq_count(struct efct_hw *hw);
> +
> +enum efct_hw_rtn
> +efct_hw_init_queues(struct efct_hw *hw, struct efct_hw_qtop *qtop);
> +extern  struct hw_wq
> +*efct_hw_queue_next_wq(struct efct_hw *hw, struct efct_hw_io *io);
> +
> +#endif /* __EFCT_HW_QUEUES_H__ */
> 
Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2019-12-20 22:37 ` [PATCH v2 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2020-01-09  9:13   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:13 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> RQ data buffer allocation and deallocate.
> Memory pool allocation and deallocation APIs.
> Mailbox command submission and completion routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c    | 355 +++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h    |   7 +
>  drivers/scsi/elx/efct/efct_utils.c | 446 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_utils.h |  83 +++++++
>  4 files changed, 891 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_utils.c
>  create mode 100644 drivers/scsi/elx/efct/efct_utils.h
> 
[ .. ]
> diff --git a/drivers/scsi/elx/efct/efct_utils.c b/drivers/scsi/elx/efct/efct_utils.c
> new file mode 100644
> index 000000000000..1d28be633a41
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_utils.c
> @@ -0,0 +1,446 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_utils.h"
> +
> +#define DEFAULT_SLAB_LEN		(64 * 1024)
> +
> +struct pool_hdr {
> +	struct list_head list_entry;
> +};
> +
> +struct efct_array {
> +	void *os;
> +
> +	u32 size;
> +	u32 count;
> +
> +	u32 n_rows;
> +	u32 elems_per_row;
> +	u32 bytes_per_row;
> +
> +	void **array_rows;
> +	u32 array_rows_len;
> +};
> +
I really wonder if xarray wouldn't be better suited here.
Have you checked?
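
For illustration, an xarray keyed by the pool/array index (untested
sketch; the efct_pool names here are made up) would reduce to something
like:

#include <linux/xarray.h>

/* Sketch of an index -> object map replacing the hand-rolled rows. */
struct efct_pool {
	struct xarray items;
};

static void efct_pool_init(struct efct_pool *pool)
{
	xa_init(&pool->items);
}

static int efct_pool_add(struct efct_pool *pool, u32 idx, void *item)
{
	/* xa_store() returns an xa_err()-encoded pointer on failure */
	return xa_err(xa_store(&pool->items, idx, item, GFP_KERNEL));
}

static void *efct_pool_get(struct efct_pool *pool, u32 idx)
{
	return xa_load(&pool->items, idx);
}

That would drop the row/size bookkeeping and give RCU-safe lookups for
free.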

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 19/32] elx: efct: Hardware IO and SGL initialization
  2019-12-20 22:37 ` [PATCH v2 19/32] elx: efct: Hardware IO and SGL initialization James Smart
@ 2020-01-09  9:22   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:22 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to create IO interfaces (wqs, etc), SGL initialization,
> and configure hardware features.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 1480 ++++++++++++++++++++++++++++++++++++---
>  drivers/scsi/elx/efct/efct_hw.h |   46 ++
>  2 files changed, 1427 insertions(+), 99 deletions(-)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 339e904b0276..beca8534813d 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -240,6 +240,505 @@ efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
>  		     j, hw->config.filter_def[j], i, id);
>  }
>  
> +static inline void
> +efct_hw_init_free_io(struct efct_hw_io *io)
> +{
> +	/*
> +	 * Set io->done to NULL, to avoid any callbacks, should
> +	 * a completion be received for one of these IOs
> +	 */
> +	io->done = NULL;
> +	io->abort_done = NULL;
> +	io->status_saved = false;
> +	io->abort_in_progress = false;
> +	io->rnode = NULL;
> +	io->type = 0xFFFF;
> +	io->wq = NULL;
> +	io->ul_io = NULL;
> +	io->tgt_wqe_timeout = 0;
> +}
> +
> +static void
> +efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* Restore the default */
> +	io->sgl = &io->def_sgl;
> +	io->sgl_count = io->def_sgl_count;
> +
> +	/* Clear the overflow SGL */
> +	io->ovfl_sgl = NULL;
> +	io->ovfl_sgl_count = 0;
> +	io->ovfl_lsp = NULL;
> +}
> +
> +/* Initialize the pool of HW IO objects */
> +static enum efct_hw_rtn
> +efct_hw_setup_io(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +	struct efct_hw_io	*io = NULL;
> +	uintptr_t	xfer_virt = 0;
> +	uintptr_t	xfer_phys = 0;
> +	u32	index;
> +	bool new_alloc = true;
> +	struct efc_dma *dma;
> +	struct efct *efct = hw->os;
> +
> +	if (!hw->io) {
> +		hw->io = kmalloc_array(hw->config.n_io, sizeof(io),
> +				 GFP_KERNEL);
> +
> +		if (!hw->io)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(hw->io, 0, hw->config.n_io * sizeof(io));
> +
> +		for (i = 0; i < hw->config.n_io; i++) {
> +			hw->io[i] = kmalloc(sizeof(*io), GFP_KERNEL);
> +			if (!hw->io[i])
> +				goto error;
> +
> +			memset(hw->io[i], 0, sizeof(struct efct_hw_io));
> +		}
> +
> +		/* Create WQE buffs for IO */
> +		hw->wqe_buffs = kmalloc((hw->config.n_io *
> +					     hw->sli.wqe_size),
> +					     GFP_ATOMIC);
> +		if (!hw->wqe_buffs) {
> +			kfree(hw->io);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +		memset(hw->wqe_buffs, 0, (hw->config.n_io *
> +					hw->sli.wqe_size));
> +
> +	} else {
> +		/* re-use existing IOs, including SGLs */
> +		new_alloc = false;
> +	}
> +
> +	if (new_alloc) {
> +		dma = &hw->xfer_rdy;
> +		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
> +		dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					       dma->size, &dma->phys, GFP_DMA);
> +		if (!dma->virt)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
> +	xfer_phys = hw->xfer_rdy.phys;
> +
> +	for (i = 0; i < hw->config.n_io; i++) {
> +		struct hw_wq_callback *wqcb;
> +
> +		io = hw->io[i];
> +
> +		/* initialize IO fields */
> +		io->hw = hw;
> +
> +		/* Assign a WQE buff */
> +		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
> +
> +		/* Allocate the request tag for this IO */
> +		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
> +		if (!wqcb) {
> +			efc_log_err(hw->os, "can't allocate request tag\n");
> +			return EFCT_HW_RTN_NO_RESOURCES;
> +		}
> +		io->reqtag = wqcb->instance_index;
> +
> +		/* Now for the fields that are initialized on each free */
> +		efct_hw_init_free_io(io);
> +
> +		/* The XB flag isn't cleared on IO free, so init to zero */
> +		io->xbusy = 0;
> +
> +		if (sli_resource_alloc(&hw->sli, SLI_RSRC_XRI,
> +				       &io->indicator, &index)) {
> +			efc_log_err(hw->os,
> +				     "sli_resource_alloc failed @ %d\n", i);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +		if (new_alloc) {
> +			dma = &io->def_sgl;
> +			dma->size = hw->config.n_sgl *
> +					sizeof(struct sli4_sge);
> +			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       dma->size, &dma->phys,
> +						       GFP_DMA);
> +			if (!dma->virt) {
> +				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
> +				memset(&io->def_sgl, 0,
> +				       sizeof(struct efc_dma));
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +		}
> +		io->def_sgl_count = hw->config.n_sgl;
> +		io->sgl = &io->def_sgl;
> +		io->sgl_count = io->def_sgl_count;
> +
> +		if (hw->xfer_rdy.size) {
> +			io->xfer_rdy.virt = (void *)xfer_virt;
> +			io->xfer_rdy.phys = xfer_phys;
> +			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
> +
> +			xfer_virt += sizeof(struct fcp_txrdy);
> +			xfer_phys += sizeof(struct fcp_txrdy);
> +		}
> +	}
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +error:
> +	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
> +		kfree(hw->io[i]);
> +		hw->io[i] = NULL;
> +	}
> +
> +	kfree(hw->io);
> +	hw->io = NULL;
> +
> +	return EFCT_HW_RTN_NO_MEMORY;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_init_io(struct efct_hw *hw)
> +{
> +	u32	i = 0, io_index = 0;
> +	bool prereg = false;
> +	struct efct_hw_io	*io = NULL;
> +	u8		cmd[SLI4_BMBX_SIZE];
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32	nremaining;
> +	u32	n = 0;
> +	u32	sgls_per_request = 256;
> +	struct efc_dma	**sgls = NULL;
> +	struct efc_dma	reqbuf;
> +	struct efct *efct = hw->os;
> +
> +	prereg = hw->sli.sgl_pre_registered;
> +
> +	memset(&reqbuf, 0, sizeof(struct efc_dma));
> +	if (prereg) {
> +		sgls = kmalloc_array(sgls_per_request, sizeof(*sgls),
> +				     GFP_ATOMIC);
> +		if (!sgls)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		reqbuf.size = 32 + sgls_per_request * 16;
> +		reqbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						 reqbuf.size, &reqbuf.phys,
> +						 GFP_DMA);
> +		if (!reqbuf.virt) {
> +			efc_log_err(hw->os, "dma_alloc reqbuf failed\n");
> +			kfree(sgls);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +	}
> +
> +	for (nremaining = hw->config.n_io; nremaining; nremaining -= n) {
> +		if (prereg) {
> +			/* Copy address of SGL's into local sgls[] array, break
> +			 * out if the xri is not contiguous.
> +			 */
> +			u32 min = (sgls_per_request < nremaining)
> +					? sgls_per_request : nremaining;
> +			for (n = 0; n < min; n++) {
> +				/* Check that we have contiguous xri values */
> +				if (n > 0) {
> +					if (hw->io[io_index + n]->indicator !=
> +					    hw->io[io_index + n - 1]->indicator
> +					    + 1)
> +						break;
> +				}
> +				sgls[n] = hw->io[io_index + n]->sgl;
> +			}
> +
> +			if (!sli_cmd_post_sgl_pages(&hw->sli, cmd,
> +						   sizeof(cmd),
> +						hw->io[io_index]->indicator,
> +						n, sgls, NULL, &reqbuf)) {
> +				if (efct_hw_command(hw, cmd, EFCT_CMD_POLL,
> +						    NULL, NULL)) {
> +					rc = EFCT_HW_RTN_ERROR;
> +					efc_log_err(hw->os,
> +						     "SGL post failed\n");
> +					break;
> +				}
> +			}
> +		} else {
> +			n = nremaining;
> +		}
> +
> +		/* Add to tail if successful */
> +		for (i = 0; i < n; i++, io_index++) {
> +			io = hw->io[io_index];
> +			io->state = EFCT_HW_IO_STATE_FREE;
> +			INIT_LIST_HEAD(&io->list_entry);
> +			list_add_tail(&io->list_entry, &hw->io_free);
> +		}
> +	}
> +
> +	if (prereg) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  reqbuf.size, reqbuf.virt, reqbuf.phys);
> +		memset(&reqbuf, 0, sizeof(struct efc_dma));
> +		kfree(sgls);
> +	}
> +
> +	return rc;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_config_set_fdt_xfer_hint(struct efct_hw *hw, u32 fdt_xfer_hint)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint param;
> +
> +	memset(&param, 0, sizeof(param));
> +	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
> +	/* build the set_features command */
> +	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    SLI4_SET_FEATURES_SET_FTD_XFER_HINT,
> +				    sizeof(param),
> +				    &param);
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc)
> +		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
> +			      fdt_xfer_hint, rc);
> +	else
> +		efc_log_info(hw->os, "Set FTD transfer hint to %d\n",
> +			      le32_to_cpu(param.fdt_xfer_hint));
> +
> +	return rc;
> +}
> +
> +/**
> + * efct_hw_config_mrq() - Configure Multi-RQ
> + *
> + * @hw: Hardware context allocated by the caller.
> + * @mode: 1 to set MRQ filters and 0 to set FCFI index
> + * @fcf_index: valid in mode 0
> + *
> + * Returns 0 on success, or a non-zero value on failure.
> + */
> +static int
> +efct_hw_config_mrq(struct efct_hw *hw, u8 mode, u16 fcf_index)
> +{
> +	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
> +	struct hw_rq *rq;
> +	struct sli4_cmd_reg_fcfi_mrq *rsp = NULL;
> +	u32 i, j;
> +	struct sli4_cmd_rq_cfg rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
> +	int rc;
> +
> +	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
> +		goto issue_cmd;
> +
> +	/* Set the filter match/mask values from hw's filter_def values */
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		rq_filter[i].rq_id = cpu_to_le16(0xffff);
> +		rq_filter[i].r_ctl_mask  = (u8)hw->config.filter_def[i];
> +		rq_filter[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
> +		rq_filter[i].type_mask   = (u8)(hw->config.filter_def[i] >> 16);
> +		rq_filter[i].type_match  = (u8)(hw->config.filter_def[i] >> 24);
> +	}
> +
> +	/* Accumulate counts for each filter type used, build rq_ids[] list */
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +		for (j = 0; j < SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG; j++) {
> +			if (!(rq->filter_mask & (1U << j)))
> +				continue;
> +
> +			if (rq_filter[j].rq_id != cpu_to_le16(0xffff)) {
> +				/*
> +				 * Already used. Bail out if it's not
> +				 * the RQ set case.
> +				 */
> +				if (!rq->is_mrq ||
> +				    rq_filter[j].rq_id !=
> +				    cpu_to_le16(rq->base_mrq_id)) {
> +					efc_log_err(hw->os, "Wrong q top.\n");
> +					return EFCT_HW_RTN_ERROR;
> +				}
> +				continue;
> +			}
> +
> +			if (!rq->is_mrq) {
> +				rq_filter[j].rq_id = cpu_to_le16(rq->hdr->id);
> +				continue;
> +			}
> +
> +			rq_filter[j].rq_id = cpu_to_le16(rq->base_mrq_id);
> +			mrq_bitmask |= (1U << j);
> +		}
> +	}
> +
> +issue_cmd:
> +	/* Invoke REG_FCFI_MRQ */
> +	rc = sli_cmd_reg_fcfi_mrq(&hw->sli,
> +				  buf,	/* buf */
> +				 SLI4_BMBX_SIZE, /* size */
> +				 mode, /* mode 1 */
> +				 fcf_index, /* fcf_index */
> +				 /* RQ selection policy*/
> +				 hw->config.rq_selection_policy,
> +				 mrq_bitmask, /* MRQ bitmask */
> +				 hw->hw_mrq_count, /* num_mrqs */
> +				 rq_filter);/* RQ filter */
> +	if (rc) {
> +		efc_log_err(hw->os,
> +			     "sli_cmd_reg_fcfi_mrq() failed: %d\n", rc);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +
> +	rsp = (struct sli4_cmd_reg_fcfi_mrq *)buf;
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS ||
> +	    le16_to_cpu(rsp->hdr.status)) {
> +		efc_log_err(hw->os,
> +			     "FCFI MRQ reg failed. cmd = %x status = %x\n",
> +			     rsp->hdr.command,
> +			     le16_to_cpu(rsp->hdr.status));
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
> +		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
> +	return 0;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_config_watchdog_timer(struct efct_hw *hw);
> +
> +static void
> +efct_hw_watchdog_timer_cb(struct timer_list *t)
> +{
> +	struct efct_hw *hw = from_timer(hw, t, watchdog_timer);
> +
> +	efct_hw_config_watchdog_timer(hw);
> +}
> +
> +static void
> +efct_hw_cb_cfg_watchdog(struct efct_hw *hw, int status, u8 *mqe,
> +			void  *arg)
> +{
> +	u16 timeout = hw->watchdog_timeout;
> +
> +	if (status != 0) {
> +		efc_log_err(hw->os, "config watchdog timer failed, rc = %d\n",
> +			     status);
> +	} else {
> +		if (timeout != 0) {
> +			/*
> +			 * keeping callback 500ms before timeout to keep
> +			 * heartbeat alive
> +			 */
> +			timer_setup(&hw->watchdog_timer,
> +				    &efct_hw_watchdog_timer_cb, 0);
> +
> +			mod_timer(&hw->watchdog_timer,
> +				  jiffies +
> +				  msecs_to_jiffies(timeout * 1000 - 500));
> +		} else {
> +			del_timer(&hw->watchdog_timer);
> +		}
> +	}
> +
> +	kfree(mqe);
> +}
> +
> +/* Set configuration parameters for watchdog timer feature */
> +static enum efct_hw_rtn
> +efct_hw_config_watchdog_timer(struct efct_hw *hw)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 *buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +
> +	if (!buf)
> +		return EFCT_HW_RTN_ERROR;
> +
> +	sli4_cmd_lowlevel_set_watchdog(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				       hw->watchdog_timeout);
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, efct_hw_cb_cfg_watchdog,
> +			     NULL);
> +	if (rc) {
> +		kfree(buf);
> +		efc_log_err(hw->os, "config watchdog timer failed, rc = %d\n",
> +			     rc);
> +	}
> +	return rc;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_set_dif_seed(struct efct_hw *hw)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_dif_seed seed_param;
> +
> +	memset(&seed_param, 0, sizeof(seed_param));
> +	seed_param.seed = cpu_to_le16(hw->config.dif_seed);
> +
> +	/* send set_features command */
> +	if (!sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +					SLI4_SET_FEATURES_DIF_SEED,
> +					4,
> +					(u32 *)&seed_param)) {
> +		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +		if (rc)
> +			efc_log_err(hw->os,
> +				     "efct_hw_command returns %d\n", rc);
> +		else
> +			efc_log_debug(hw->os, "DIF seed set to 0x%x\n",
> +				       hw->config.dif_seed);
> +	} else {
> +		efc_log_err(hw->os,
> +			     "sli_cmd_common_set_features failed\n");
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +	return rc;
> +}
> +
> +/* enable sli port health check */
> +static enum efct_hw_rtn
> +efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query,
> +				     u8 enable)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_health_check param;
> +	u32	health_check_flag = 0;
> +
> +	memset(&param, 0, sizeof(param));
> +
> +	if (enable)
> +		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
> +
> +	if (query)
> +		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
> +
> +	param.health_check_dword = cpu_to_le32(health_check_flag);
> +
> +	/* build the set_features command */
> +	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK,
> +				    sizeof(param),
> +				    &param);
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc)
> +		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
> +	else
> +		efc_log_test(hw->os, "SLI Port Health Check is enabled\n");
> +
> +	return rc;
> +}
> +
>  enum efct_hw_rtn
>  efct_hw_init(struct efct_hw *hw)
>  {
> @@ -712,104 +1211,6 @@ efct_hw_init(struct efct_hw *hw)
>  	return EFCT_HW_RTN_SUCCESS;
>  }
>  
> -/**
> - * efct_hw_config_mrq() - Configure Multi-RQ
> - *
> - * @hw: Hardware context allocated by the caller.
> - * @mode: 1 to set MRQ filters and 0 to set FCFI index
> - * @fcf_index: valid in mode 0
> - *
> - * Returns 0 on success, or a non-zero value on failure.
> - */
> -static int
> -efct_hw_config_mrq(struct efct_hw *hw, u8 mode, u16 fcf_index)
> -{
> -	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
> -	struct hw_rq *rq;
> -	struct sli4_cmd_reg_fcfi_mrq *rsp = NULL;
> -	u32 i, j;
> -	struct sli4_cmd_rq_cfg rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
> -	int rc;
> -
> -	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
> -		goto issue_cmd;
> -
> -	/* Set the filter match/mask values from hw's filter_def values */
> -	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> -		rq_filter[i].rq_id = cpu_to_le16(0xffff);
> -		rq_filter[i].r_ctl_mask  = (u8)hw->config.filter_def[i];
> -		rq_filter[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
> -		rq_filter[i].type_mask   = (u8)(hw->config.filter_def[i] >> 16);
> -		rq_filter[i].type_match  = (u8)(hw->config.filter_def[i] >> 24);
> -	}
> -
> -	/* Accumulate counts for each filter type used, build rq_ids[] list */
> -	for (i = 0; i < hw->hw_rq_count; i++) {
> -		rq = hw->hw_rq[i];
> -		for (j = 0; j < SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG; j++) {
> -			if (!(rq->filter_mask & (1U << j)))
> -				continue;
> -
> -			if (rq_filter[j].rq_id != cpu_to_le16(0xffff)) {
> -				/*
> -				 * Already used. Bailout ifts not RQset
> -				 * case
> -				 */
> -				if (!rq->is_mrq ||
> -				    rq_filter[j].rq_id !=
> -				    cpu_to_le16(rq->base_mrq_id)) {
> -					efc_log_err(hw->os, "Wrong q top.\n");
> -					return EFCT_HW_RTN_ERROR;
> -				}
> -				continue;
> -			}
> -
> -			if (!rq->is_mrq) {
> -				rq_filter[j].rq_id = cpu_to_le16(rq->hdr->id);
> -				continue;
> -			}
> -
> -			rq_filter[j].rq_id = cpu_to_le16(rq->base_mrq_id);
> -			mrq_bitmask |= (1U << j);
> -		}
> -	}
> -
> -issue_cmd:
> -	/* Invoke REG_FCFI_MRQ */
> -	rc = sli_cmd_reg_fcfi_mrq(&hw->sli,
> -				  buf,	/* buf */
> -				 SLI4_BMBX_SIZE, /* size */
> -				 mode, /* mode 1 */
> -				 fcf_index, /* fcf_index */
> -				 /* RQ selection policy*/
> -				 hw->config.rq_selection_policy,
> -				 mrq_bitmask, /* MRQ bitmask */
> -				 hw->hw_mrq_count, /* num_mrqs */
> -				 rq_filter);/* RQ filter */
> -	if (rc) {
> -		efc_log_err(hw->os,
> -			     "sli_cmd_reg_fcfi_mrq() failed: %d\n", rc);
> -		return EFCT_HW_RTN_ERROR;
> -	}
> -
> -	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> -
> -	rsp = (struct sli4_cmd_reg_fcfi_mrq *)buf;
> -
> -	if (rc != EFCT_HW_RTN_SUCCESS ||
> -	    le16_to_cpu(rsp->hdr.status)) {
> -		efc_log_err(hw->os,
> -			     "FCFI MRQ reg failed. cmd = %x status = %x\n",
> -			     rsp->hdr.command,
> -			     le16_to_cpu(rsp->hdr.status));
> -		return EFCT_HW_RTN_ERROR;
> -	}
> -
> -	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
> -		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
> -	return 0;
> -}
> -
>  enum efct_hw_rtn
>  efct_hw_set(struct efct_hw *hw, enum efct_hw_property prop, u32 value)
>  {
> @@ -1221,6 +1622,10 @@ efct_get_wwn(struct efct_hw *hw, enum efct_hw_property prop)
>  	return value;
>  }
>  
> +/*
> + * An efct_hw_rx_buffer_t array is allocated,
> + * along with the required DMA mem
> + */
>  static struct efc_hw_rq_buffer *
>  efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
>  			u32 size)
> @@ -1327,6 +1732,7 @@ efct_hw_rx_allocate(struct efct_hw *hw)
>  	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
>  }
>  
> +/* Post the RQ data buffers to the chip */
>  enum efct_hw_rtn
>  efct_hw_rx_post(struct efct_hw *hw)
>  {
> @@ -1414,7 +1820,7 @@ efct_hw_cmd_submit_pending(struct efct_hw *hw)
>  	return rc;
>  }
>  
> -/**
> +/*
>   * Send a mailbox command to the hardware, and either wait for a completion
>   * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
>   */

Pointless hunk.

> @@ -1575,3 +1981,879 @@ efct_hw_command_cancel(struct efct_hw *hw)
>  
>  	return 0;
>  }
> +
> +static inline struct efct_hw_io *
> +_efct_hw_io_alloc(struct efct_hw *hw)
> +{
> +	struct efct_hw_io	*io = NULL;
> +
> +	if (!list_empty(&hw->io_free)) {
> +		io = list_first_entry(&hw->io_free, struct efct_hw_io,
> +				      list_entry);
> +		list_del(&io->list_entry);
> +	}
> +	if (io) {
> +		INIT_LIST_HEAD(&io->list_entry);
> +		INIT_LIST_HEAD(&io->wqe_link);
> +		INIT_LIST_HEAD(&io->dnrx_link);
> +		list_add_tail(&io->list_entry, &hw->io_inuse);
> +		io->state = EFCT_HW_IO_STATE_INUSE;
> +		io->abort_reqtag = U32_MAX;
> +		kref_init(&io->ref);
> +		io->release = efct_hw_io_free_internal;
> +	} else {
> +		atomic_add_return(1, &hw->io_alloc_failed_count);
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_hw_io *
> +efct_hw_io_alloc(struct efct_hw *hw)
> +{
> +	struct efct_hw_io	*io = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	io = _efct_hw_io_alloc(hw);
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +	return io;
> +}
> +
> +/*
> + * When an IO is freed, depending on the exchange busy flag, and other
> + * workarounds, move it to the correct list.
> + */
> +static void
> +efct_hw_io_free_move_correct_list(struct efct_hw *hw,
> +				  struct efct_hw_io *io)
> +{
> +	if (io->xbusy) {
> +		/*
> +		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
> +		 * up
> +		 */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_wait_free);
> +		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
> +	} else {
> +		/* IO not busy, add to free list */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_free);
> +		io->state = EFCT_HW_IO_STATE_FREE;
> +	}
> +}
> +
> +static inline void
> +efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* initialize IO fields */
> +	efct_hw_init_free_io(io);
> +
> +	/* Restore default SGL */
> +	efct_hw_io_restore_sgl(hw, io);
> +}
> +
> +/**
> + * Free a previously-allocated HW IO object. Called when
> + * IO refcount goes to zero (host-owned IOs only).
> + */
> +void
> +efct_hw_io_free_internal(struct kref *arg)
> +{
> +	unsigned long flags = 0;
> +	struct efct_hw_io *io =
> +			container_of(arg, struct efct_hw_io, ref);
> +	struct efct_hw *hw = io->hw;
> +
> +	/* perform common cleanup */
> +	efct_hw_io_free_common(hw, io);
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +		/* remove from in-use list */
> +		if (io->list_entry.next &&
> +		    !list_empty(&hw->io_inuse)) {
> +			list_del(&io->list_entry);
> +			efct_hw_io_free_move_correct_list(hw, io);
> +		}
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +}
> +
> +int
> +efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* just put refcount */
> +	if (refcount_read(&io->ref.refcount) <= 0) {
> +		efc_log_err(hw->os,
> +			     "Bad parameter: refcount <= 0 xri=%x tag=%x\n",
> +			    io->indicator, io->reqtag);
> +		return -1;
> +	}
> +
> +	return kref_put(&io->ref, io->release);
> +}
> +

Why this check? Shouldn't kref_get_unless_zero() protect against this?
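Something along these lines (untested sketch; efct_hw_io_get() is a made-up
helper name, the kref calls are the stock <linux/kref.h> ones):

static struct efct_hw_io *
efct_hw_io_get(struct efct_hw *hw, u32 xri)
{
	/* look the IO up and take a reference only if it is still live */
	struct efct_hw_io *io = efct_hw_io_lookup(hw, xri);

	if (io && !kref_get_unless_zero(&io->ref))
		io = NULL;	/* already on its final put */

	return io;
}

A caller holding a reference obtained that way just drops it with
kref_put(&io->ref, io->release), and the refcount_read() check in
efct_hw_io_free() becomes unnecessary.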

> +u8
> +efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	return (refcount_read(&io->ref.refcount) > 0);
> +}
> +
> +struct efct_hw_io *
> +efct_hw_io_lookup(struct efct_hw *hw, u32 xri)
> +{
> +	u32 ioindex;
> +
> +	ioindex = xri - hw->sli.extent[SLI_RSRC_XRI].base[0];
> +	return hw->io[ioindex];
> +}
> +
> +/**
> + * Issue any pending callbacks for an IO and remove off the timer and
> + * pending lists.
> + */
> +static void
> +efct_hw_io_cancel_cleanup(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	efct_hw_done_t done = io->done;
> +	efct_hw_done_t abort_done = io->abort_done;
> +	unsigned long flags = 0;
> +
> +	/* first check active_wqe list and remove if there */
> +	if (io->wqe_link.next)
> +		list_del(&io->wqe_link);
> +
> +	/* Remove from WQ pending list */
> +	if (io->wq && io->wq->pending_list.next)
> +		list_del(&io->list_entry);
> +
> +	if (io->done) {
> +		void *arg = io->arg;
> +
> +		io->done = NULL;
> +		spin_unlock_irqrestore(&hw->io_lock, flags);
> +		done(io, io->rnode, 0, SLI4_FC_WCQE_STATUS_SHUTDOWN, 0, arg);
> +		spin_lock_irqsave(&hw->io_lock, flags);
> +	}
> +
> +	if (io->abort_done) {
> +		void		*abort_arg = io->abort_arg;
> +
> +		io->abort_done = NULL;
> +		spin_unlock_irqrestore(&hw->io_lock, flags);
> +		abort_done(io, io->rnode, 0, SLI4_FC_WCQE_STATUS_SHUTDOWN, 0,
> +			   abort_arg);
> +		spin_lock_irqsave(&hw->io_lock, flags);
> +	}
> +}
> +
> +static int
> +efct_hw_io_cancel(struct efct_hw *hw)
> +{
> +	struct efct_hw_io *io = NULL;
> +	struct efct_hw_io *tmp_io = NULL;
> +	u32 iters = 100; /* One second limit */
> +	unsigned long flags = 0;
> +
> +	/*
> +	 * Manually clean up outstanding IO.
> +	 * Only walk through list once: the backend will cleanup any IOs when
> +	 * done/abort_done is called.
> +	 */
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	list_for_each_entry_safe(io, tmp_io, &hw->io_inuse, list_entry) {
> +		efct_hw_done_t  done = io->done;
> +		efct_hw_done_t  abort_done = io->abort_done;
> +
> +		efct_hw_io_cancel_cleanup(hw, io);
> +
> +		/*
> +		 * Since this is called in a reset/shutdown
> +		 * case, If there is no callback, then just
> +		 * free the IO.
> +		 *
> +		 * Note: A port owned XRI cannot be on
> +		 *       the in use list. We cannot call
> +		 *       efct_hw_io_free() because we already
> +		 *       hold the io_lock.
> +		 */
> +		if (!done &&
> +		    !abort_done) {
> +			/*
> +			 * Since this is called in a reset/shutdown
> +			 * case, If there is no callback, then just
> +			 * free the IO.
> +			 */
> +			efct_hw_io_free_common(hw, io);
> +			list_del(&io->list_entry);
> +			efct_hw_io_free_move_correct_list(hw, io);
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +	/* Give time for the callbacks to complete */
> +	do {
> +		mdelay(10);
> +		iters--;
> +	} while (!list_empty(&hw->io_inuse) && iters);
> +

That is pretty lame.
Can't you use refcounts for 'hw' and just call 'kref_put(hw)'?
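A rough sketch of that idea (untested; the 'ref' and 'teardown_done'
members would have to be added to struct efct_hw):

static void efct_hw_release(struct kref *ref)
{
	struct efct_hw *hw = container_of(ref, struct efct_hw, ref);

	/* the last outstanding IO just went away */
	complete(&hw->teardown_done);
}

	/* io alloc:  kref_get(&hw->ref);
	 * io free:   kref_put(&hw->ref, efct_hw_release);
	 * shutdown:  kref_put(&hw->ref, efct_hw_release);
	 *            wait_for_completion(&hw->teardown_done);
	 */

Then shutdown blocks until the last IO completes instead of spinning on
mdelay().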


Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 20/32] elx: efct: Hardware queues processing
  2019-12-20 22:37 ` [PATCH v2 20/32] elx: efct: Hardware queues processing James Smart
@ 2020-01-09  9:24   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:24 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for EQ, CQ, WQ and RQ processing.
> Routines for IO object pool allocation and deallocation.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c        | 531 +++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h        |  36 +++
>  drivers/scsi/elx/efct/efct_hw_queues.c | 192 ++++++++++++
>  drivers/scsi/elx/efct/efct_io.c        | 203 +++++++++++++
>  drivers/scsi/elx/efct/efct_io.h        | 196 ++++++++++++
>  5 files changed, 1158 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_io.c
>  create mode 100644 drivers/scsi/elx/efct/efct_io.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 21/32] elx: efct: Unsolicited FC frame processing routines
  2019-12-20 22:37 ` [PATCH v2 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
@ 2020-01-09  9:26   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:26 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to handle unsolicited FC frames.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c    |   2 +
>  drivers/scsi/elx/efct/efct_unsol.c | 835 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_unsol.h |  49 +++
>  3 files changed, 886 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
>  create mode 100644 drivers/scsi/elx/efct/efct_unsol.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 22/32] elx: efct: Extended link Service IO handling
  2019-12-20 22:37 ` [PATCH v2 22/32] elx: efct: Extended link Service IO handling James Smart
@ 2020-01-09  9:38   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:38 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Functions to build and send ELS/CT/BLS commands and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_els.c | 1953 ++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_els.h |  136 +++
>  2 files changed, 2089 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_els.c
>  create mode 100644 drivers/scsi/elx/efct/efct_els.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
> new file mode 100644
> index 000000000000..9c964302505b
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_els.c
> @@ -0,0 +1,1953 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Functions to build and send ELS/CT/BLS commands and responses.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_els.h"
> +
> +#define ELS_IOFMT "[i:%04x t:%04x h:%04x]"
> +
> +#define node_els_trace()  \
> +	do { \
> +		if (EFCT_LOG_ENABLE_ELS_TRACE(efct)) \
> +			efc_log_info(efct, "[%s] %-20s\n", \
> +				node->display_name, __func__); \
> +	} while (0)
> +
> +#define els_io_printf(els, fmt, ...) \
> +	efc_log_debug((struct efct *)els->node->efc->base,\
> +		      "[%s]" ELS_IOFMT " %-8s " fmt, \
> +		      els->node->display_name,\
> +		      els->init_task_tag, els->tgt_task_tag, els->hw_tag,\
> +		      els->display_name, ##__VA_ARGS__)
> +
> +#define EFCT_ELS_RSP_LEN		1024
> +#define EFCT_ELS_GID_PT_RSP_LEN		8096
> +
> +void *
> +efct_els_req_send(struct efc *efc, struct efc_node *node, u32 cmd,
> +		  u32 timeout_sec, u32 retries)
> +{
> +	struct efct *efct = efc->base;
> +
> +	switch (cmd) {
> +	case ELS_PLOGI:
> +		efc_log_debug(efct, "send efct_send_plogi\n");
> +		efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case ELS_FLOGI:
> +		efc_log_debug(efct, "send efct_send_flogi\n");
> +		efct_send_flogi(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case ELS_LOGO:
> +		efc_log_debug(efct, "send efct_send_logo\n");
> +		efct_send_logo(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case ELS_PRLI:
> +		efc_log_debug(efct, "send efct_send_prli\n");
> +		efct_send_prli(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case ELS_ADISC:
> +		efc_log_debug(efct, "send efct_send_prli\n");
> +		efct_send_adisc(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case ELS_SCR:
> +		efc_log_debug(efct, "send efct_send_scr\n");
> +		efct_send_scr(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	default:
> +		efc_log_debug(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}

'send efct_send_plogi' ?
Maybe 'send %s', cmd_name(cmd) ?
Or move it into the function, and just call 'send %s', __function__ ...
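E.g. something like this (untested sketch, efct_els_cmd_name() is a
made-up helper); it would also stop the ADISC case above from printing
"send efct_send_prli":

static const char *efct_els_cmd_name(u32 cmd)
{
	switch (cmd) {
	case ELS_PLOGI:	return "PLOGI";
	case ELS_FLOGI:	return "FLOGI";
	case ELS_LOGO:	return "LOGO";
	case ELS_PRLI:	return "PRLI";
	case ELS_ADISC:	return "ADISC";
	case ELS_SCR:	return "SCR";
	default:	return "unknown ELS";
	}
}

	efc_log_debug(efct, "send %s\n", efct_els_cmd_name(cmd));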

> +
> +void *
> +efct_els_resp_send(struct efc *efc, struct efc_node *node,
> +		   u32 cmd, u16 ox_id)
> +{
> +	struct efct *efct = efc->base;
> +
> +	switch (cmd) {
> +	case ELS_PLOGI:
> +		efct_send_plogi_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_FLOGI:
> +		efct_send_flogi_acc(node, ox_id, 0, NULL, NULL);
> +		break;
> +	case ELS_LOGO:
> +		efct_send_logo_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PRLI:
> +		efct_send_prli_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PRLO:
> +		efct_send_prlo_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_ADISC:
> +		efct_send_adisc_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_LS_ACC:
> +		efct_send_ls_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PDISC:
> +	case ELS_FDISC:
> +	case ELS_RSCN:
> +	case ELS_SCR:
> +		efct_send_ls_rjt(efc, node, ox_id, ELS_RJT_UNAB,
> +				 ELS_EXPL_NONE, 0);
> +		break;
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +
> +struct efct_io *
> +efct_els_io_alloc(struct efc_node *node, u32 reqlen,
> +		  enum efct_els_role role)
> +{
> +	return efct_els_io_alloc_size(node, reqlen, EFCT_ELS_RSP_LEN, role);
> +}
> +
> +struct efct_io *
> +efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
> +		       u32 rsplen, enum efct_els_role role)
> +{
> +	struct efct *efct;
> +	struct efct_xport *xport;
> +	struct efct_io *els;
> +	unsigned long flags = 0;
> +
> +	efct = node->efc->base;
> +
> +	xport = efct->xport;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +
> +	if (!node->io_alloc_enabled) {
> +		efc_log_debug(efct,
> +			       "called with io_alloc_enabled = FALSE\n");
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	els = efct_io_pool_io_alloc(efct->xport->io_pool);
> +	if (!els) {
> +		atomic_add_return(1, &xport->io_alloc_failed_count);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	/* initialize refcount */
> +	kref_init(&els->ref);
> +	els->release = _efct_els_io_free;
> +
> +	switch (role) {
> +	case EFCT_ELS_ROLE_ORIGINATOR:
> +		els->cmd_ini = true;
> +		els->cmd_tgt = false;
> +		break;
> +	case EFCT_ELS_ROLE_RESPONDER:
> +		els->cmd_ini = false;
> +		els->cmd_tgt = true;
> +		break;
> +	}
> +
> +	/* IO should not have an associated HW IO yet.
> +	 * Assigned below.
> +	 */
> +	if (els->hio) {
> +		efc_log_err(efct,
> +			     "assertion failed.  HIO is not null\n");
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +

That is not an assertion, it's a plain logging message.
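If it really is a "can't happen" condition, something like this
(untested) would at least leave a backtrace in the log:

	if (WARN_ON_ONCE(els->hio)) {
		efct_io_pool_io_free(efct->xport->io_pool, els);
		spin_unlock_irqrestore(&node->active_ios_lock, flags);
		return NULL;
	}

Otherwise just reword the message.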

> +	/* populate generic io fields */
> +	els->efct = efct;
> +	els->node = node;
> +
> +	/* set type and ELS-specific fields */
> +	els->io_type = EFCT_IO_TYPE_ELS;
> +	els->display_name = "pending";
> +
> +	/* now allocate DMA for request and response */
> +	els->els_req.size = reqlen;
> +	els->els_req.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					       els->els_req.size,
> +					       &els->els_req.phys,
> +					       GFP_DMA);
> +	if (els->els_req.virt) {
> +		els->els_rsp.size = rsplen;
> +		els->els_rsp.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       els->els_rsp.size,
> +						       &els->els_rsp.phys,
> +						       GFP_DMA);
> +		if (!els->els_rsp.virt) {
> +			efc_log_err(efct, "dma_alloc rsp\n");
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  els->els_req.size,
> +				els->els_req.virt, els->els_req.phys);
> +			efct_io_pool_io_free(efct->xport->io_pool, els);
> +			els = NULL;
> +		}
> +	} else {
> +		efc_log_err(efct, "dma_alloc req\n");
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +		els = NULL;
> +	}
> +
> +	if (els) {
> +		/* initialize fields */
> +		els->els_retries_remaining =
> +					EFCT_FC_ELS_DEFAULT_RETRIES;
> +		els->els_pend = false;
> +		els->els_active = false;
> +
> +		/* add els structure to ELS IO list */
> +		INIT_LIST_HEAD(&els->list_entry);
> +		list_add_tail(&els->list_entry,
> +			      &node->els_io_pend_list);
> +		els->els_pend = true;
> +	}
> +
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return els;
> +}
> +
> +void
> +efct_els_io_free(struct efct_io *els)
> +{
> +	kref_put(&els->ref, els->release);
> +}
> +
> +void
> +_efct_els_io_free(struct kref *arg)
> +{
> +	struct efct_io *els = container_of(arg, struct efct_io, ref);
> +	struct efct *efct;
> +	struct efc_node *node;
> +	int send_empty_event = false;
> +	unsigned long flags = 0;
> +
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		if (els->els_active) {
> +			/* if active, remove from active list and check empty */
> +			list_del(&els->list_entry);
> +			/* Send list empty event if the IO allocator
> +			 * is disabled, and the list is empty.
> +			 * If node->io_alloc_enabled was not checked,
> +			 * the event would be posted continually
> +			 */
> +			send_empty_event = (!node->io_alloc_enabled) &&
> +				list_empty(&node->els_io_active_list);
> +			els->els_active = false;
> +		} else if (els->els_pend) {
> +			/* if pending, remove from pending list;
> +			 * node shutdown isn't gated off the
> +			 * pending list (only the active list),
> +			 * so no need to check if pending list is empty
> +			 */
> +			list_del(&els->list_entry);
> +			els->els_pend = 0;
> +		} else {
> +			efc_log_err(efct,
> +				     "assertion fail: neither els_pend nor active set\n");
> +			spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +			return;
> +		}
> +

Same here.

> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +
> +	/* free ELS request and response buffers */
> +	dma_free_coherent(&efct->pcidev->dev, els->els_rsp.size,
> +			  els->els_rsp.virt, els->els_rsp.phys);
> +	dma_free_coherent(&efct->pcidev->dev, els->els_req.size,
> +			  els->els_req.virt, els->els_req.phys);
> +
> +	efct_io_pool_io_free(efct->xport->io_pool, els);
> +
> +	if (send_empty_event)
> +		efc_scsi_io_list_empty(node->efc, node);
> +
> +	efct_scsi_check_pending(efct);
> +}
> +
> +static void
> +efct_els_make_active(struct efct_io *els)
> +{
> +	struct efc_node *node = els->node;
> +	unsigned long flags = 0;
> +
> +	/* move ELS from pending list to active list */
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		if (els->els_pend) {
> +			if (els->els_active) {
> +				efc_log_err(node->efc,
> +					     "assertion fail: both els_pend and active set\n");
> +				spin_unlock_irqrestore(&node->active_ios_lock,
> +						       flags);
> +				return;
> +			}

And here.

> +			/* remove from pending list */
> +			list_del(&els->list_entry);
> +			els->els_pend = false;
> +
> +			/* add els structure to ELS IO list */
> +			INIT_LIST_HEAD(&els->list_entry);
> +			list_add_tail(&els->list_entry,
> +				      &node->els_io_active_list);
> +			els->els_active = true;
> +		} else {
> +			/* must be retrying; make sure it's already active */
> +			if (!els->els_active) {
> +				efc_log_err(node->efc,
> +					     "assertion fail: neither els_pend nor active set\n");
> +			}

And here.

> +		}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +static int efct_els_send(struct efct_io *els, u32 reqlen,
> +			 u32 timeout_sec, efct_hw_srrs_cb_t cb)
> +{
> +	struct efc_node *node = els->node;
> +
> +	/* update ELS request counter */
> +	node->els_req_cnt++;
> +
> +	/* move ELS from pending list to active list */
> +	efct_els_make_active(els);
> +
> +	els->wire_len = reqlen;
> +	return efct_scsi_io_dispatch(els, cb);
> +}
> +
> +static void
> +efct_els_retry(struct efct_io *els);
> +
> +static void
> +efct_els_delay_timer_cb(struct timer_list *t)
> +{
> +	struct efct_io *els = from_timer(els, t, delay_timer);
> +	struct efc_node *node = els->node;
> +
> +	/* Retry delay timer expired, retry the ELS request,
> +	 * Free the HW IO so that a new oxid is used.
> +	 */
> +	if (els->state == EFCT_ELS_REQUEST_DELAY_ABORT) {
> +		node->els_req_cnt++;
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    NULL);
> +	} else {
> +		efct_els_retry(els);
> +	}
> +}
> +
> +static void
> +efct_els_abort_cleanup(struct efct_io *els)
> +{
> +	/* handle event for ABORT_WQE
> +	 * whatever state ELS happened to be in, propagate aborted even
> +	 * up to node state machine in lieu of EFC_HW_SRRS_ELS_* event
> +	 */
> +	struct efc_node_cb cbdata;
> +
> +	cbdata.status = 0;
> +	cbdata.ext_status = 0;
> +	cbdata.els_rsp = els->els_rsp;
> +	els_io_printf(els, "Request aborted\n");
> +	efct_els_io_cleanup(els, EFC_HW_ELS_REQ_ABORTED, &cbdata);
> +}
> +
> +static int
> +efct_els_req_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		u32 length, int status, u32 ext_status, void *arg)
> +{
> +	struct efct_io *els;
> +	struct efc_node *node;
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +	u32 reason_code;
> +
> +	els = arg;
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	if (status != 0)
> +		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
> +
> +	/* set the response len element of els->rsp */
> +	els->els_rsp.len = length;
> +
> +	cbdata.status = status;
> +	cbdata.ext_status = ext_status;
> +	cbdata.header = NULL;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	/* FW returns the number of bytes received on the link in
> +	 * the WCQE, not the amount placed in the buffer; use this info to
> +	 * check if there was an overrun.
> +	 */
> +	if (length > els->els_rsp.size) {
> +		efc_log_warn(efct,
> +			      "ELS response returned len=%d > buflen=%zu\n",
> +			     length, els->els_rsp.size);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
> +		return 0;
> +	}
> +
> +	/* Post event to ELS IO object */
> +	switch (status) {
> +	case SLI4_FC_WCQE_STATUS_SUCCESS:
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_OK, &cbdata);
> +		break;
> +
> +	case SLI4_FC_WCQE_STATUS_LS_RJT:
> +		reason_code = (ext_status >> 16) & 0xff;
> +
> +		/* delay and retry if reason code is Logical Busy */
> +		switch (reason_code) {
> +		case ELS_RJT_BUSY:
> +			els->node->els_req_cnt--;
> +			els_io_printf(els,
> +				      "LS_RJT Logical Busy response, delay and retry\n");
> +			timer_setup(&els->delay_timer,
> +				    efct_els_delay_timer_cb, 0);
> +			mod_timer(&els->delay_timer,
> +				  jiffies + msecs_to_jiffies(5000));
> +			els->state = EFCT_ELS_REQUEST_DELAYED;
> +			break;
> +		default:
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_RJT,
> +					    &cbdata);
> +			break;
> +		}
> +		break;
> +
> +	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
> +		switch (ext_status) {
> +		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
> +			efct_els_retry(els);
> +			break;
> +
> +		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
> +			if (els->state == EFCT_ELS_ABORT_IO_COMPL) {
> +				/* completion for ELS that was aborted */
> +				efct_els_abort_cleanup(els);
> +			} else {
> +				/* completion for ELS received first,
> +				 * transition to wait for abort cmpl
> +				 */
> +				els->state = EFCT_ELS_REQ_ABORTED;
> +			}
> +
> +			break;
> +		default:
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    &cbdata);
> +			break;
> +		}
> +		break;
> +	default:	/* Other error */
> +		efc_log_warn(efct,
> +			      "els req failed status x%x, ext_status, x%x\n",
> +					status, ext_status);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static void efct_els_send_req(struct efc_node *node, struct efct_io *els)
> +{
> +	int rc = 0;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +	rc = efct_els_send(els, els->els_req.size,
> +			   els->els_timeout_sec, efct_els_req_cb);
> +
> +	if (rc) {
> +		struct efc_node_cb cbdata;
> +
> +		cbdata.status = INT_MAX;
> +		cbdata.ext_status = INT_MAX;
> +		cbdata.els_rsp = els->els_rsp;
> +		efc_log_err(efct, "efct_els_send failed: %d\n", rc);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +				    &cbdata);
> +	}
> +}
> +
> +static void
> +efct_els_retry(struct efct_io *els)
> +{
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +
> +	efct = els->node->efc->base;
> +	cbdata.status = INT_MAX;
> +	cbdata.ext_status = INT_MAX;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	if (!els->els_retries_remaining) {
> +		efc_log_err(efct, "ELS retries exhausted\n");
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +				    &cbdata);
> +		return;
> +	}
> +
> +	els->els_retries_remaining--;
> +	 /* Free the HW IO so that a new oxid is used.*/
> +	if (els->hio) {
> +		efct_hw_io_free(&efct->hw, els->hio);
> +		els->hio = NULL;
> +	}
> +
> +	efct_els_send_req(els->node, els);
> +}
> +
> +static int
> +efct_els_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		u32 length, int status, u32 ext_status, void *arg)
> +{
> +	struct efct_io *els;
> +	struct efc_node *node;
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +
> +	els = arg;
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	cbdata.status = status;
> +	cbdata.ext_status = ext_status;
> +	cbdata.header = NULL;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	/* Post node event */
> +	switch (status) {
> +	case SLI4_FC_WCQE_STATUS_SUCCESS:
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_OK, &cbdata);
> +		break;
> +
> +	default:	/* Other error */
> +		efc_log_warn(efct,
> +			      "[%s] %-8s failed status x%x, ext_status x%x\n",
> +			    node->display_name, els->display_name,
> +			    status, ext_status);
> +		efc_log_warn(efct,
> +			      "els acc complete: failed status x%x, ext_status, x%x\n",
> +		     status, ext_status);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_FAIL, &cbdata);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +efct_els_send_rsp(struct efct_io *els, u32 rsplen)
> +{
> +	struct efc_node *node = els->node;
> +
> +	/* increment ELS completion counter */
> +	node->els_cmpl_cnt++;
> +
> +	/* move ELS from pending list to active list */
> +	efct_els_make_active(els);
> +
> +	els->wire_len = rsplen;
> +	return efct_scsi_io_dispatch(els, efct_els_acc_cb);
> +}
> +
> +struct efct_io *
> +efct_send_plogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries,
> +	      void (*cb)(struct efc_node *node,
> +			 struct efc_node_cb *cbdata, void *arg), void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *plogi;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "plogi";
> +
> +		/* Build PLOGI request */
> +		plogi = els->els_req.virt;
> +
> +		memcpy(plogi, node->sport->service_params, sizeof(*plogi));
> +
> +		plogi->fl_cmd = ELS_PLOGI;
> +		memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_flogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_flogi  *flogi;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "flogi";
> +
> +		/* Build FLOGI request */
> +		flogi = els->els_req.virt;
> +
> +		memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +		flogi->fl_cmd = ELS_FLOGI;
> +		memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_flogi *fdisc;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*fdisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "fdisc";
> +
> +		/* Build FDISC request */
> +		fdisc = els->els_req.virt;
> +
> +		memcpy(fdisc, node->sport->service_params, sizeof(*fdisc));
> +		fdisc->fl_cmd = ELS_FDISC;
> +		memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_prli(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *els;
> +	struct {
> +		struct fc_els_prli prli;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "prli";
> +
> +		/* Build PRLI request */
> +		pp = els->els_req.virt;
> +
> +		memset(pp, 0, sizeof(*pp));
> +
> +		pp->prli.prli_cmd = ELS_PRLI;
> +		pp->prli.prli_spp_len = 16;
> +		pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
> +		pp->spp.spp_type = FC_TYPE_FCP;
> +		pp->spp.spp_type_ext = 0;
> +		pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
> +		pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
> +				       (node->sport->enable_ini ?
> +				       FCP_SPPF_INIT_FCN : 0) |
> +				       (node->sport->enable_tgt ?
> +				       FCP_SPPF_TARG_FCN : 0));
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_prlo(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *els;
> +	struct {
> +		struct fc_els_prlo prlo;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "prlo";
> +
> +		/* Build PRLO request */
> +		pp = els->els_req.virt;
> +
> +		memset(pp, 0, sizeof(*pp));
> +		pp->prlo.prlo_cmd = ELS_PRLO;
> +		pp->prlo.prlo_obs = 0x10;
> +		pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
> +
> +		pp->spp.spp_type = FC_TYPE_FCP;
> +		pp->spp.spp_type_ext = 0;
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_logo(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_logo *logo;
> +	struct fc_els_flogi  *sparams;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	sparams = (struct fc_els_flogi *)node->sport->service_params;
> +
> +	els = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "logo";
> +
> +		/* Build LOGO request */
> +
> +		logo = els->els_req.virt;
> +
> +		memset(logo, 0, sizeof(*logo));
> +		logo->fl_cmd = ELS_LOGO;
> +		hton24(logo->fl_n_port_id, node->rnode.sport->fc_id);
> +		logo->fl_n_port_wwn = sparams->fl_wwpn;
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_adisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_adisc *adisc;
> +	struct fc_els_flogi  *sparams;
> +	struct efc_sli_port *sport = node->sport;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	sparams = (struct fc_els_flogi *)node->sport->service_params;
> +
> +	els = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "adisc";
> +
> +		/* Build ADISC request */
> +
> +		adisc = els->els_req.virt;
> +
> +		memset(adisc, 0, sizeof(*adisc));
> +		adisc->adisc_cmd = ELS_ADISC;
> +		hton24(adisc->adisc_hard_addr, sport->fc_id);
> +		adisc->adisc_wwpn = sparams->fl_wwpn;
> +		adisc->adisc_wwnn = sparams->fl_wwnn;
> +		hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *pdisc;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pdisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "pdisc";
> +
> +		pdisc = els->els_req.virt;
> +
> +		memcpy(pdisc, node->sport->service_params, sizeof(*pdisc));
> +
> +		pdisc->fl_cmd = ELS_PDISC;
> +		memset(pdisc->_fl_resvd, 0, sizeof(pdisc->_fl_resvd));
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_scr(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	      els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_scr *req;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "scr";
> +
> +		req = els->els_req.virt;
> +
> +		memset(req, 0, sizeof(*req));
> +		req->scr_cmd = ELS_SCR;
> +		req->scr_reg_func = ELS_SCRF_FULL;
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_rrq(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	      els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_scr *req;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "scr";
> +

"rrq" ?

> +		req = els->els_req.virt;
> +
> +		memset(req, 0, sizeof(*req));
> +		req->scr_cmd = ELS_RRQ;
> +		req->scr_reg_func = ELS_SCRF_FULL;
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_rscn(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       void *port_ids, u32 port_ids_count, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_rscn *req;
> +	struct fc_els_rscn_page *rscn_page;
> +	u32 length = sizeof(*rscn_page) * port_ids_count;
> +
> +	length += sizeof(*req);
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, length, EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->els_timeout_sec = timeout_sec;
> +		els->els_retries_remaining = retries;
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "rscn";
> +
> +		req = els->els_req.virt;
> +
> +		req->rscn_cmd = ELS_RSCN;
> +		req->rscn_page_len = sizeof(struct fc_els_rscn_page);
> +		req->rscn_plen = cpu_to_be16(length);
> +
> +		els->hio_type = EFCT_HW_ELS_REQ;
> +		els->iparam.els.timeout = timeout_sec;
> +
> +		/* copy in the payload */
> +		rscn_page = els->els_req.virt + sizeof(*req);
> +		memcpy(rscn_page, port_ids,
> +		       port_ids_count * sizeof(*rscn_page));
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +void *
> +efct_send_ls_rjt(struct efc *efc, struct efc_node *node,
> +		 u32 ox_id, u32 reason_code,
> +		u32 reason_code_expl, u32 vendor_unique)
> +{
> +	struct efct_io *io = NULL;
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_ls_rjt *rjt;
> +
> +	io = efct_els_io_alloc(node, sizeof(*rjt), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	node_els_trace();
> +
> +	io->els_callback = NULL;
> +	io->els_callback_arg = NULL;
> +	io->display_name = "ls_rjt";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	rjt = io->els_req.virt;
> +	memset(rjt, 0, sizeof(*rjt));
> +
> +	rjt->er_cmd = ELS_LS_RJT;
> +	rjt->er_reason = reason_code;
> +	rjt->er_explan = reason_code_expl;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*rjt));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
> +		    els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct fc_els_flogi  *plogi;
> +	struct fc_els_flogi  *req = (struct fc_els_flogi *)node->service_params;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "plog_acc";

Please make it 'plogi_acc';
that one additional byte won't harm anybody.

> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	plogi = io->els_req.virt;
> +
> +	/* copy our port's service parameters to payload */
> +	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
> +	plogi->fl_cmd = ELS_LS_ACC;
> +	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
> +
> +	/* Set Application header support bit if requested */
> +	if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST))
> +		plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST);
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*plogi));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +	return io;
> +}
> +
> +void *
> +efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
> +			u32 ox_id, u32 s_id)
> +{
> +	struct efct_io *io = NULL;
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *flogi;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = NULL;
> +	io->els_callback_arg = NULL;
> +	io->display_name = "flogi_p2p_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els_sid.ox_id = ox_id;
> +	io->iparam.els_sid.s_id = s_id;
> +
> +	flogi = io->els_req.virt;
> +
> +	/* copy our port's service parameters to payload */
> +	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +	flogi->fl_cmd = ELS_LS_ACC;
> +	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
> +
> +	io->hio_type = EFCT_HW_ELS_RSP_SID;
> +	rc = efct_els_send_rsp(io, sizeof(*flogi));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_flogi_acc(struct efc_node *node, u32 ox_id, u32 is_fport,
> +		    els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct fc_els_flogi  *flogi;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "flogi_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els_sid.ox_id = ox_id;
> +	io->iparam.els_sid.s_id = io->node->sport->fc_id;
> +
> +	flogi = io->els_req.virt;
> +
> +	/* copy our port's service parameters to payload */
> +	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +
> +	/* Set F_port */
> +	if (is_fport) {
> +		/* Set F_PORT and Multiple N_PORT_ID Assignment */
> +		flogi->fl_csp.sp_r_a_tov |=  cpu_to_be32(3U << 28);
> +	}
> +
> +	flogi->fl_cmd = ELS_LS_ACC;
> +	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
> +
> +	io->hio_type = EFCT_HW_ELS_RSP_SID;
> +	rc = efct_els_send_rsp(io, sizeof(*flogi));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *efct_send_prli_acc(struct efc_node *node,
> +				     u32 ox_id, els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct {
> +		struct fc_els_prli prli;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "prli_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	pp = io->els_req.virt;
> +	memset(pp, 0, sizeof(*pp));
> +
> +	pp->prli.prli_cmd = ELS_LS_ACC;
> +	pp->prli.prli_spp_len = 0x10;
> +	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK;
> +
> +	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
> +					(node->sport->enable_ini ?
> +					 FCP_SPPF_INIT_FCN : 0) |
> +					(node->sport->enable_tgt ?
> +					 FCP_SPPF_TARG_FCN : 0));
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*pp));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct {
> +		struct fc_els_prlo prlo;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "prlo_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	pp = io->els_req.virt;
> +	memset(pp, 0, sizeof(*pp));
> +	pp->prlo.prlo_cmd = ELS_LS_ACC;
> +	pp->prlo.prlo_obs = 0x10;
> +	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
> +
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +	pp->spp.spp_flags = FC_SPP_RESP_ACK;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*pp));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_ls_acc(struct efc_node *node, u32 ox_id, els_cb_t cb,
> +		 void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct fc_els_ls_acc *acc;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*acc), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "ls_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	acc = io->els_req.virt;
> +	memset(acc, 0, sizeof(*acc));
> +
> +	acc->la_cmd = ELS_LS_ACC;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*acc));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_logo_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct_io *io = NULL;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_ls_acc *logo;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "logo_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	logo = io->els_req.virt;
> +	memset(logo, 0, sizeof(*logo));
> +
> +	logo->la_cmd = ELS_LS_ACC;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*logo));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
> +		    els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct_io *io = NULL;
> +	struct fc_els_adisc *adisc;
> +	struct fc_els_flogi  *sparams;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "adisc_acc";
> +	io->init_task_tag = ox_id;
> +
> +	/* Go ahead and send the ELS_ACC */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	sparams = (struct fc_els_flogi  *)node->sport->service_params;
> +	adisc = io->els_req.virt;
> +	memset(adisc, 0, sizeof(*adisc));
> +	adisc->adisc_cmd = ELS_LS_ACC;
> +	adisc->adisc_wwpn = sparams->fl_wwpn;
> +	adisc->adisc_wwnn = sparams->fl_wwnn;
> +	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*adisc));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +void *
> +efct_els_send_ct(struct efc *efc, struct efc_node *node, u32 cmd,
> +		 u32 timeout_sec, u32 retries)
> +{
> +	struct efct *efct = efc->base;
> +
> +	switch (cmd) {
> +	case FC_RCTL_ELS_REQ:
> +		efc_log_err(efct, "send efct_ns_send_rftid\n");
> +		efct_ns_send_rftid(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case FC_NS_RFF_ID:
> +		efc_log_err(efct, "send efct_ns_send_rffid\n");
> +		efct_ns_send_rffid(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case FC_NS_GID_PT:
> +		efc_log_err(efct, "send efct_ns_send_gidpt\n");
> +		efct_ns_send_gidpt(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +

Same here; 'send efct_ns_send_rftid' has a duplicate 'send'.
Move the logging into the function itself and use 'send %s', __func__
or similar.
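E.g. (just a sketch, untested) each of the efct_ns_send_* routines could
log for itself:

	efc_log_debug(efct, "send %s\n", __func__);

and the per-case efc_log_err() calls in efct_els_send_ct() could then
be dropped.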

> +static inline void fcct_build_req_header(struct fc_ct_hdr  *hdr,
> +					 u16 cmd, u16 max_size)
> +{
> +	hdr->ct_rev = FC_CT_REV;
> +	hdr->ct_fs_type = FC_FST_DIR;
> +	hdr->ct_fs_subtype = FC_NS_SUBTYPE;
> +	hdr->ct_options = 0;
> +	hdr->ct_cmd = cpu_to_be16(cmd);
> +	/* words */
> +	hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32)));
> +	hdr->ct_reason = 0;
> +	hdr->ct_explan = 0;
> +	hdr->ct_vendor = 0;
> +}
> +
> +struct efct_io *
> +efct_ns_send_rftid(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_ct_hdr *ct;
> +	struct fc_ns_rft_id *rftid;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*ct) + sizeof(*rftid),
> +				EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +	} else {
> +		els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
> +		els->iparam.fc_ct.type = FC_TYPE_CT;
> +		els->iparam.fc_ct.df_ctl = 0;
> +		els->iparam.fc_ct.timeout = timeout_sec;
> +
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "rftid";
> +
> +		ct = els->els_req.virt;
> +		memset(ct, 0, sizeof(*ct));
> +		fcct_build_req_header(ct, FC_NS_RFT_ID, sizeof(*rftid));
> +
> +		rftid = els->els_req.virt + sizeof(*ct);
> +		memset(rftid, 0, sizeof(*rftid));
> +		hton24(rftid->fr_fid.fp_fid, node->rnode.sport->fc_id);
> +		rftid->fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] =
> +			cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW));
> +
> +		els->hio_type = EFCT_HW_FC_CT;
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_ns_send_rffid(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_ct_hdr *ct;
> +	struct fc_ns_rff_id *rffid;
> +	u32 size = 0;
> +
> +	node_els_trace();
> +
> +	size = sizeof(*ct) + sizeof(*rffid);
> +
> +	els = efct_els_io_alloc(node, size, EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");

For consistency with the other routines: use 'return NULL' here and
drop the 'else' branch.
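I.e. something like (sketch only):

	els = efct_els_io_alloc(node, size, EFCT_ELS_ROLE_ORIGINATOR);
	if (!els) {
		efc_log_err(efct, "IO alloc failed\n");
		return NULL;
	}

and then un-indent the remainder of the function body.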

> +	} else {
> +		els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
> +		els->iparam.fc_ct.type = FC_TYPE_CT;
> +		els->iparam.fc_ct.df_ctl = 0;
> +		els->iparam.fc_ct.timeout = timeout_sec;
> +
> +		els->els_callback = cb;
> +		els->els_callback_arg = cbarg;
> +		els->display_name = "rffid";
> +		ct = els->els_req.virt;
> +
> +		memset(ct, 0, sizeof(*ct));
> +		fcct_build_req_header(ct, FC_NS_RFF_ID, sizeof(*rffid));
> +
> +		rffid = els->els_req.virt + sizeof(*ct);
> +		memset(rffid, 0, sizeof(*rffid));
> +
> +		hton24(rffid->fr_fid.fp_fid, node->rnode.sport->fc_id);
> +		if (node->sport->enable_ini)
> +			rffid->fr_feat |= FCP_FEAT_INIT;
> +		if (node->sport->enable_tgt)
> +			rffid->fr_feat |= FCP_FEAT_TARG;
> +		rffid->fr_type = FC_TYPE_FCP;
> +
> +		els->hio_type = EFCT_HW_FC_CT;
> +
> +		efct_els_send_req(node, els);
> +	}
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els = NULL;
> +	struct efct *efct = node->efc->base;
> +	struct fc_ct_hdr *ct;
> +	struct fc_ns_gid_pt *gidpt;
> +	u32 size = 0;
> +
> +	node_els_trace();
> +
> +	size = sizeof(*ct) + sizeof(*gidpt);
> +	els = efct_els_io_alloc_size(node, size,
> +				     EFCT_ELS_GID_PT_RSP_LEN,
> +				   EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +
> +	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
> +	els->iparam.fc_ct.type = FC_TYPE_CT;
> +	els->iparam.fc_ct.df_ctl = 0;
> +	els->iparam.fc_ct.timeout = timeout_sec;
> +
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "gidpt";
> +
> +	ct = els->els_req.virt;
> +
> +	memset(ct, 0, sizeof(*ct));
> +	fcct_build_req_header(ct, FC_NS_GID_PT, sizeof(*gidpt));
> +
> +	gidpt = els->els_req.virt + sizeof(*ct);
> +	memset(gidpt, 0, sizeof(*gidpt));
> +	gidpt->fn_pt_type = FC_TYPE_FCP;
> +
> +	els->hio_type = EFCT_HW_FC_CT;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +static int efct_bls_send_rjt_cb(struct efct_hw_io *hio,
> +				struct efc_remote_node *rnode, u32 length,
> +		int status, u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +
> +	efct_scsi_io_free(io);
> +	return 0;
> +}
> +
> +static struct efct_io *
> +efct_bls_send_rjt(struct efct_io *io, u32 s_id,
> +		  u16 ox_id, u16 rx_id)
> +{
> +	struct efc_node *node = io->node;
> +	int rc;
> +	struct fc_ba_rjt *acc;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +
> +	if (node->rnode.sport->fc_id == s_id)
> +		s_id = U32_MAX;
> +
> +	/* fill out generic fields */
> +	io->efct = efct;
> +	io->node = node;
> +	io->cmd_tgt = true;
> +
> +	/* fill out BLS Response-specific fields */
> +	io->io_type = EFCT_IO_TYPE_BLS_RESP;
> +	io->display_name = "ba_rjt";
> +	io->hio_type = EFCT_HW_BLS_RJT;
> +	io->init_task_tag = ox_id;
> +
> +	/* fill out iparam fields */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.bls_sid.ox_id = ox_id;
> +	io->iparam.bls_sid.rx_id = rx_id;
> +
> +	acc = (void *)io->iparam.bls_sid.payload;
> +
> +	memset(io->iparam.bls_sid.payload, 0,
> +	       sizeof(io->iparam.bls_sid.payload));
> +	acc->br_reason = ELS_RJT_UNAB;
> +	acc->br_explan = ELS_EXPL_NONE;
> +
> +	rc = efct_scsi_io_dispatch(io, efct_bls_send_rjt_cb);
> +	if (rc) {
> +		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
> +		efct_scsi_io_free(io);
> +		io = NULL;
> +	}
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr)
> +{
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
> +	u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +	return efct_bls_send_rjt(io, d_id, ox_id, rx_id);
> +}
> +
> +static int efct_bls_send_acc_cb(struct efct_hw_io *hio,
> +				struct efc_remote_node *rnode, u32 length,
> +		int status, u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +
> +	efct_scsi_io_free(io);
> +	return 0;
> +}
> +
> +static struct efct_io *
> +efct_bls_send_acc(struct efct_io *io, u32 s_id,
> +		  u16 ox_id, u16 rx_id)
> +{
> +	struct efc_node *node = io->node;
> +	int rc;
> +	struct fc_ba_acc *acc;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +
> +	if (node->rnode.sport->fc_id == s_id)
> +		s_id = U32_MAX;
> +
> +	/* fill out generic fields */
> +	io->efct = efct;
> +	io->node = node;
> +	io->cmd_tgt = true;
> +
> +	/* fill out BLS Response-specific fields */
> +	io->io_type = EFCT_IO_TYPE_BLS_RESP;
> +	io->display_name = "ba_acc";
> +	io->hio_type = EFCT_HW_BLS_ACC_SID;
> +	io->init_task_tag = ox_id;
> +
> +	/* fill out iparam fields */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.bls_sid.s_id = s_id;
> +	io->iparam.bls_sid.ox_id = ox_id;
> +	io->iparam.bls_sid.rx_id = rx_id;
> +
> +	acc = (void *)io->iparam.bls_sid.payload;
> +
> +	memset(io->iparam.bls_sid.payload, 0,
> +	       sizeof(io->iparam.bls_sid.payload));
> +	acc->ba_ox_id = cpu_to_be16(io->iparam.bls_sid.ox_id);
> +	acc->ba_rx_id = cpu_to_be16(io->iparam.bls_sid.rx_id);
> +	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
> +
> +	rc = efct_scsi_io_dispatch(io, efct_bls_send_acc_cb);
> +	if (rc) {
> +		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
> +		efct_scsi_io_free(io);
> +		io = NULL;
> +	}
> +	return io;
> +}
> +
> +void *
> +efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
> +		      struct fc_frame_header *hdr)
> +{
> +	struct efct_io *io = NULL;
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
> +	u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +	io = efct_scsi_io_alloc(node, EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efc, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	return efct_bls_send_acc(io, d_id, ox_id, rx_id);
> +}
> +
> +static int
> +efct_els_abort_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		  u32 length, int status, u32 ext_status,
> +		 void *app)
> +{
> +	struct efct_io *els;
> +	struct efct_io *abort_io = NULL; /* IO structure used to abort ELS */
> +	struct efct *efct;
> +
> +	abort_io = app;
> +	els = abort_io->io_to_abort;
> +
> +	if (!els || !els->node || !els->node->efc)
> +		return -1;
> +
> +	efct = els->node->efc->base;
> +
> +	if (status != 0)
> +		efc_log_warn(efct, "status x%x ext x%x\n", status, ext_status);
> +
> +	/* now free the abort IO */
> +	efct_io_pool_io_free(efct->xport->io_pool, abort_io);
> +
> +	/* send completion event to indicate abort process is complete
> +	 * Note: The ELS SM will already be receiving
> +	 * ELS_REQ_OK/FAIL/RJT/ABORTED
> +	 */
> +	if (els->state == EFCT_ELS_REQ_ABORTED) {
> +		/* completion for ELS that was aborted */
> +		efct_els_abort_cleanup(els);
> +	} else {
> +		/* completion for abort was received first,
> +		 * transition to wait for req cmpl
> +		 */
> +		els->state = EFCT_ELS_ABORT_IO_COMPL;
> +	}
> +
> +	/* done with ELS IO to abort */
> +	kref_put(&els->ref, els->release);
> +	return 0;
> +}
> +
> +static struct efct_io *
> +efct_els_abort_io(struct efct_io *els, bool send_abts)
> +{
> +	struct efct *efct;
> +	struct efct_xport *xport;
> +	int rc;
> +	struct efct_io *abort_io = NULL;
> +
> +	efct = els->node->efc->base;
> +	xport = efct->xport;
> +
> +	/* take a reference on IO being aborted */
> +	if ((kref_get_unless_zero(&els->ref) == 0)) {
> +		/* command no longer active */
> +		efc_log_debug(efct, "els no longer active\n");
> +		return NULL;
> +	}
> +
> +	/* allocate IO structure to send abort */
> +	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
> +	if (!abort_io) {
> +		atomic_add_return(1, &xport->io_alloc_failed_count);
> +	} else {
> +		/* set generic fields */
> +		abort_io->efct = efct;
> +		abort_io->node = els->node;
> +		abort_io->cmd_ini = true;
> +
> +		/* set type and ABORT-specific fields */
> +		abort_io->io_type = EFCT_IO_TYPE_ABORT;
> +		abort_io->display_name = "abort_els";
> +		abort_io->io_to_abort = els;
> +		abort_io->send_abts = send_abts;
> +
> +		/* now dispatch IO */
> +		rc = efct_scsi_io_dispatch_abort(abort_io, efct_els_abort_cb);
> +		if (rc) {
> +			efc_log_err(efct,
> +				     "efct_scsi_io_dispatch failed: %d\n", rc);
> +			efct_io_pool_io_free(efct->xport->io_pool, abort_io);
> +			abort_io = NULL;
> +		}
> +	}
> +
> +	/* if something failed, put reference on ELS to abort */
> +	if (!abort_io)
> +		kref_put(&els->ref, els->release);
> +	return abort_io;
> +}
> +
> +void
> +efct_els_abort(struct efct_io *els, struct efc_node_cb *arg)
> +{
> +	struct efct_io *io = NULL;
> +	struct efc_node *node;
> +	struct efct *efct;
> +
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	/* request to abort this ELS without an ABTS */
> +	els_io_printf(els, "ELS abort requested\n");
> +	/* Set retries to zero,we are done */
> +	els->els_retries_remaining = 0;
> +	if (els->state == EFCT_ELS_REQUEST) {
> +		els->state = EFCT_ELS_REQ_ABORT;
> +		io = efct_els_abort_io(els, false);
> +		if (!io) {
> +			efc_log_err(efct, "efct_els_abort_io failed\n");
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    arg);
> +		}
> +
> +	} else if (els->state == EFCT_ELS_REQUEST_DELAYED) {
> +		/* mod/resched the timer for a short duration */
> +		mod_timer(&els->delay_timer,
> +			  jiffies + msecs_to_jiffies(1));
> +
> +		els->state = EFCT_ELS_REQUEST_DELAY_ABORT;
> +	}
> +}
> +
> +void
> +efct_els_io_cleanup(struct efct_io *els,
> +		    enum efc_hw_node_els_event node_evt, void *arg)
> +{
> +	/* don't want further events that could come; e.g. abort requests
> +	 * from the node state machine; thus, disable state machine
> +	 */
> +	els->els_req_free = true;
> +	efc_node_post_els_resp(els->node, node_evt, arg);
> +
> +	/* If this IO has a callback, invoke it */
> +	if (els->els_callback) {
> +		(*els->els_callback)(els->node, arg,
> +				    els->els_callback_arg);
> +	}
> +	efct_els_io_free(els);
> +}
> +
> +int
> +efct_els_io_list_empty(struct efc_node *node, struct list_head *list)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		empty = list_empty(list);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +static int
> +efct_ct_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +	       u32 length, int status, u32 ext_status,
> +	      void *arg)
> +{
> +	struct efct_io *io = arg;
> +
> +	efct_els_io_free(io);
> +
> +	return 0;
> +}
> +
> +int
> +efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
> +		 struct fc_ct_hdr  *ct_hdr, u32 cmd_rsp_code,
> +		u32 reason_code, u32 reason_code_explanation)
> +{
> +	struct efct_io *io = NULL;
> +	struct fc_ct_hdr  *rsp = NULL;
> +
> +	io = efct_els_io_alloc(node, 256, EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efc, "IO alloc failed\n");
> +		return -1;
> +	}
> +
> +	rsp = io->els_rsp.virt;
> +	io->io_type = EFCT_IO_TYPE_CT_RESP;
> +
> +	*rsp = *ct_hdr;
> +
> +	fcct_build_req_header(rsp, cmd_rsp_code, 0);
> +	rsp->ct_reason = reason_code;
> +	rsp->ct_explan = reason_code_explanation;
> +
> +	io->display_name = "ct response";

Please use 'ct_rsp' to be consistent with the previous naming.
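I.e.:

	io->display_name = "ct_rsp";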

> +	io->init_task_tag = ox_id;
> +	io->wire_len += sizeof(*rsp);
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +
> +	io->io_type = EFCT_IO_TYPE_CT_RESP;
> +	io->hio_type = EFCT_HW_FC_CT_RSP;
> +	io->iparam.fc_ct_rsp.ox_id = ox_id;
> +	io->iparam.fc_ct_rsp.r_ctl = 3;
> +	io->iparam.fc_ct_rsp.type = FC_TYPE_CT;
> +	io->iparam.fc_ct_rsp.df_ctl = 0;
> +	io->iparam.fc_ct_rsp.timeout = 5;
> +
> +	if (efct_scsi_io_dispatch(io, efct_ct_acc_cb) < 0) {
> +		efct_els_io_free(io);
> +		return -1;
> +	}
> +	return 0;
> +}
[ .. ]

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 23/32] elx: efct: SCSI IO handling routines
  2019-12-20 22:37 ` [PATCH v2 23/32] elx: efct: SCSI IO handling routines James Smart
@ 2020-01-09  9:41   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:41 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for SCSI transport IO alloc, build and send IO.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_scsi.c | 1572 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_scsi.h |  313 ++++++++
>  2 files changed, 1885 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
>  create mode 100644 drivers/scsi/elx/efct/efct_scsi.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 25/32] elx: efct: Hardware IO submission routines
  2019-12-20 22:37 ` [PATCH v2 25/32] elx: efct: Hardware IO submission routines James Smart
@ 2020-01-09  9:52   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09  9:52 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines that write IO to Work queue, send SRRs and raw frames.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 625 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |  19 ++
>  2 files changed, 644 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 43f1ff526694..440c4fa196bf 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -3192,6 +3192,68 @@ efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
>  	return 0;
>  }
>  
> +static int
> +_efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
> +{
> +	int rc;
> +	int queue_rc;
> +
> +	/* Every so often, set the wqec bit to generate comsummed completions */
> +	if (wq->wqec_count)
> +		wq->wqec_count--;
> +
> +	if (wq->wqec_count == 0) {
> +		struct sli4_generic_wqe *genwqe = (void *)wqe->wqebuf;
> +
> +		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
> +		wq->wqec_count = wq->wqec_set_count;
> +	}
> +
> +	/* Decrement WQ free count */
> +	wq->free_count--;
> +
> +	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
> +
> +	if (queue_rc < 0)
> +		rc = -1;
> +	else
> +		rc = 0;
> +
> +	return rc;
> +}
> +

return (queue_rc < 0) ? -1 : 0;

> +static void
> +hw_wq_submit_pending(struct hw_wq *wq, u32 update_free_count)
> +{
> +	struct efct_hw_wqe *wqe;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&wq->queue->lock, flags);
> +
> +	/* Update free count with value passed in */
> +	wq->free_count += update_free_count;
> +
> +	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
> +		wqe = list_first_entry(&wq->pending_list,
> +				       struct efct_hw_wqe, list_entry);
> +		list_del(&wqe->list_entry);
> +		_efct_hw_wq_write(wq, wqe);
> +
> +		if (wqe->abort_wqe_submit_needed) {
> +			wqe->abort_wqe_submit_needed = false;
> +			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
> +				      wq->hw->sli.wqe_size,
> +				      SLI_ABORT_XRI, wqe->send_abts, wqe->id,
> +				      0, wqe->abort_reqtag, SLI4_CQ_DEFAULT);
> +					  INIT_LIST_HEAD(&wqe->list_entry);
> +			list_add_tail(&wqe->list_entry, &wq->pending_list);
> +			wq->wq_pending_count++;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&wq->queue->lock, flags);
> +}
> +
>  void
>  efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
>  {
> @@ -3390,3 +3452,566 @@ efct_hw_flush(struct efct_hw *hw)
>  
>  	return 0;
>  }
> +
> +int
> +efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
> +{
> +	int rc = 0;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&wq->queue->lock, flags);
> +	if (!list_empty(&wq->pending_list)) {
> +		INIT_LIST_HEAD(&wqe->list_entry);
> +		list_add_tail(&wqe->list_entry, &wq->pending_list);
> +		wq->wq_pending_count++;
> +		while ((wq->free_count > 0) &&
> +		       ((wqe = list_first_entry(&wq->pending_list,
> +					struct efct_hw_wqe, list_entry))
> +			 != NULL)) {
> +			list_del(&wqe->list_entry);
> +			rc = _efct_hw_wq_write(wq, wqe);
> +			if (rc < 0)
> +				break;
> +			if (wqe->abort_wqe_submit_needed) {
> +				wqe->abort_wqe_submit_needed = false;
> +				sli_abort_wqe(&wq->hw->sli,
> +					      wqe->wqebuf,
> +					      wq->hw->sli.wqe_size,
> +					      SLI_ABORT_XRI,
> +					      wqe->send_abts, wqe->id,
> +					      0, wqe->abort_reqtag,
> +					      SLI4_CQ_DEFAULT);
> +
> +				INIT_LIST_HEAD(&wqe->list_entry);
> +				list_add_tail(&wqe->list_entry,
> +					      &wq->pending_list);
> +				wq->wq_pending_count++;
> +			}
> +		}
> +	} else {
> +		if (wq->free_count > 0) {
> +			rc = _efct_hw_wq_write(wq, wqe);
> +		} else {
> +			INIT_LIST_HEAD(&wqe->list_entry);
> +			list_add_tail(&wqe->list_entry, &wq->pending_list);
> +			wq->wq_pending_count++;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&wq->queue->lock, flags);
> +
> +	return rc;
> +}
> +
> +/**
> + * This routine supports communication sequences consisting of a single
> + * request and single response between two endpoints. Examples include:
> + *  - Sending an ELS request.
> + *  - Sending an ELS response - To send an ELS response, the caller must provide
> + * the OX_ID from the received request.
> + *  - Sending a FC Common Transport (FC-CT) request - To send a FC-CT request,
> + * the caller must provide the R_CTL, TYPE, and DF_CTL
> + * values to place in the FC frame header.
> + */
> +enum efct_hw_rtn
> +efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		  struct efct_hw_io *io,
> +		  struct efc_dma *send, u32 len,
> +		  struct efc_dma *receive, struct efc_remote_node *rnode,
> +		  union efct_hw_io_param_u *iparam,
> +		  efct_hw_srrs_cb_t cb, void *arg)
> +{
> +	struct sli4_sge	*sge = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +	u16	local_flags = 0;
> +	u32 sge0_flags;
> +	u32 sge1_flags;
> +
> +	if (!io || !rnode || !iparam) {
> +		pr_err("bad parm hw=%p io=%p s=%p r=%p rn=%p iparm=%p\n",
> +			hw, io, send, receive, rnode, iparam);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_test(hw->os,
> +			      "cannot send SRRS, HW state=%d\n", hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	io->rnode = rnode;
> +	io->type  = type;
> +	io->done = cb;
> +	io->arg  = arg;
> +
> +	sge = io->sgl->virt;
> +
> +	/* clear both SGE */
> +	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
> +
> +	sge0_flags = le32_to_cpu(sge[0].dw2_flags);
> +	sge1_flags = le32_to_cpu(sge[1].dw2_flags);
> +	if (send) {
> +		sge[0].buffer_address_high =
> +			cpu_to_le32(upper_32_bits(send->phys));
> +		sge[0].buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(send->phys));
> +
> +		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +
> +		sge[0].buffer_length = cpu_to_le32(len);
> +	}
> +
> +	if (type == EFCT_HW_ELS_REQ || type == EFCT_HW_FC_CT) {
> +		sge[1].buffer_address_high =
> +			cpu_to_le32(upper_32_bits(receive->phys));
> +		sge[1].buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(receive->phys));
> +
> +		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +		sge1_flags |= SLI4_SGE_LAST;
> +
> +		sge[1].buffer_length = cpu_to_le32(receive->size);
> +	} else {
> +		sge0_flags |= SLI4_SGE_LAST;
> +	}
> +
> +	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
> +	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
> +
> +	switch (type) {
> +	case EFCT_HW_ELS_REQ:
> +		if (!send ||
> +		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
> +					  hw->sli.wqe_size, io->sgl,
> +					*((u8 *)send->virt),
> +					len, receive->size,
> +					iparam->els.timeout,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->node_group, rnode->attached,
> +					rnode->fc_id, rnode->sport->fc_id)) {
> +			efc_log_err(hw->os, "REQ WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_ELS_RSP:
> +		if (!send ||
> +		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, send, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, iparam->els.ox_id,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->node_group, rnode->attached,
> +					rnode->fc_id,
> +					local_flags, U32_MAX)) {
> +			efc_log_err(hw->os, "RSP WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_ELS_RSP_SID:
> +		if (!send ||
> +		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, send, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					iparam->els_sid.ox_id,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->node_group, rnode->attached,
> +					rnode->fc_id,
> +					local_flags, iparam->els_sid.s_id)) {
> +			efc_log_err(hw->os, "RSP (SID) WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_FC_CT:
> +		if (!send ||
> +		    sli_gen_request64_wqe(&hw->sli, io->wqe.wqebuf,
> +					  hw->sli.wqe_size, io->sgl,
> +					len, receive->size,
> +					iparam->fc_ct.timeout, io->indicator,
> +					io->reqtag, SLI4_CQ_DEFAULT,
> +					rnode->node_group, rnode->fc_id,
> +					rnode->indicator,
> +					iparam->fc_ct.r_ctl,
> +					iparam->fc_ct.type,
> +					iparam->fc_ct.df_ctl)) {
> +			efc_log_err(hw->os, "GEN WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_FC_CT_RSP:
> +		if (!send ||
> +		    sli_xmit_sequence64_wqe(&hw->sli, io->wqe.wqebuf,
> +					    hw->sli.wqe_size, io->sgl,
> +					len, iparam->fc_ct_rsp.timeout,
> +					iparam->fc_ct_rsp.ox_id,
> +					io->indicator, io->reqtag,
> +					rnode->node_group, rnode->fc_id,
> +					rnode->indicator,
> +					iparam->fc_ct_rsp.r_ctl,
> +					iparam->fc_ct_rsp.type,
> +					iparam->fc_ct_rsp.df_ctl)) {
> +			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_BLS_ACC:
> +	case EFCT_HW_BLS_RJT:
> +	{
> +		struct sli_bls_payload	bls;
> +
> +		if (type == EFCT_HW_BLS_ACC) {
> +			bls.type = SLI4_SLI_BLS_ACC;
> +			memcpy(&bls.u.acc, iparam->bls.payload,
> +			       sizeof(bls.u.acc));
> +		} else {
> +			bls.type = SLI4_SLI_BLS_RJT;
> +			memcpy(&bls.u.rjt, iparam->bls.payload,
> +			       sizeof(bls.u.rjt));
> +		}
> +
> +		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
> +		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
> +
> +		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, &bls,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					rnode->attached, rnode->node_group,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->fc_id, rnode->sport->fc_id,
> +					U32_MAX)) {
> +			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_BLS_ACC_SID:
> +	{
> +		struct sli_bls_payload	bls;
> +
> +		bls.type = SLI4_SLI_BLS_ACC;
> +		memcpy(&bls.u.acc, iparam->bls_sid.payload,
> +		       sizeof(bls.u.acc));
> +
> +		bls.ox_id = cpu_to_le16(iparam->bls_sid.ox_id);
> +		bls.rx_id = cpu_to_le16(iparam->bls_sid.rx_id);
> +
> +		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, &bls,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					rnode->attached, rnode->node_group,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->fc_id, rnode->sport->fc_id,
> +					iparam->bls_sid.s_id)) {
> +			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE SID error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_err(hw->os, "bad SRRS type %#x\n", type);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (rc == EFCT_HW_RTN_SUCCESS) {
> +		if (!io->wq)
> +			io->wq = efct_hw_queue_next_wq(hw, io);
> +
> +		io->xbusy = true;
> +
> +		/*
> +		 * Add IO to active io wqe list before submitting, in case the
> +		 * wcqe processing preempts this thread.
> +		 */
> +		io->wq->use_count++;
> +		efct_hw_add_io_timed_wqe(hw, io);
> +		rc = efct_hw_wq_write(io->wq, &io->wqe);
> +		if (rc >= 0) {
> +			/* non-negative return is success */
> +			rc = 0;
> +		} else {
> +			/* failed to write wqe, remove from active wqe list */
> +			efc_log_err(hw->os,
> +				     "sli_queue_write failed: %d\n", rc);
> +			io->xbusy = false;
> +			efct_hw_remove_io_timed_wqe(hw, io);
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +/**
> + * Send a read, write, or response IO.
> + *
> + * This routine supports sending a higher-level IO (for example, FCP) between
> + * two endpoints as a target or initiator. Examples include:
> + *  - Sending read data and good response (target).
> + *  - Sending a response (target with no data or after receiving write data).
> + *  .
> + * This routine assumes all IOs use the SGL associated with the HW IO. Prior to
> + * calling this routine, the data should be loaded using efct_hw_io_add_sge().
> + */
> +enum efct_hw_rtn
> +efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		struct efct_hw_io *io,
> +		u32 len, union efct_hw_io_param_u *iparam,
> +		struct efc_remote_node *rnode, void *cb, void *arg)
> +{
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +	u32	rpi;
> +	bool send_wqe = true;
> +
> +	if (!io || !rnode || !iparam) {
> +		pr_err("bad parm hw=%p io=%p iparam=%p rnode=%p\n",
> +			hw, io, iparam, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_err(hw->os, "cannot send IO, HW state=%d\n",
> +			     hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	rpi = rnode->indicator;
> +
> +	/*
> +	 * Save state needed during later stages
> +	 */
> +	io->rnode = rnode;
> +	io->type  = type;
> +	io->done  = cb;
> +	io->arg   = arg;
> +
> +	/*
> +	 * Format the work queue entry used to send the IO
> +	 */
> +	switch (type) {
> +	case EFCT_HW_IO_TARGET_WRITE: {
> +		u16 flags = iparam->fcp_tgt.flags;
> +		struct fcp_txrdy *xfer = io->xfer_rdy.virt;
> +
> +		/*
> +		 * Fill in the XFER_RDY for IF_TYPE 0 devices
> +		 */
> +		xfer->ft_data_ro = cpu_to_be32(iparam->fcp_tgt.offset);
> +		xfer->ft_burst_len = cpu_to_be32(len);
> +
> +		if (io->xbusy)
> +			flags |= SLI4_IO_CONTINUATION;
> +		else
> +			flags &= ~SLI4_IO_CONTINUATION;
> +
> +		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
> +
> +		if (sli_fcp_treceive64_wqe(&hw->sli,
> +					   io->wqe.wqebuf,
> +					   hw->sli.wqe_size,
> +					   &io->def_sgl,
> +					   io->first_data_sge,
> +					   iparam->fcp_tgt.offset, len,
> +					   io->indicator, io->reqtag,
> +					   SLI4_CQ_DEFAULT,
> +					   iparam->fcp_tgt.ox_id, rpi,
> +					   rnode->node_group,
> +					   rnode->fc_id, flags,
> +					   iparam->fcp_tgt.dif_oper,
> +					   iparam->fcp_tgt.blk_size,
> +					   iparam->fcp_tgt.cs_ctl,
> +					   iparam->fcp_tgt.app_id)) {
Whoa.
Now _that_ is a lot of arguments.
I would invite you to whittle down that list, e.g. by passing in
'iparam->fcp_tgt' and 'rnode' instead of the individual elements.
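Something along these lines (hypothetical signature, purely to
illustrate the idea):

	sli_fcp_treceive64_wqe(&hw->sli, io->wqe.wqebuf, hw->sli.wqe_size,
			       &io->def_sgl, io->first_data_sge, len,
			       io->indicator, io->reqtag, rnode,
			       &iparam->fcp_tgt, flags);

letting the WQE builder pull the remaining fields out of 'fcp_tgt' and
'rnode' itself.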

> +			efc_log_err(hw->os, "TRECEIVE WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_IO_TARGET_READ: {
> +		u16 flags = iparam->fcp_tgt.flags;
> +
> +		if (io->xbusy)
> +			flags |= SLI4_IO_CONTINUATION;
> +		else
> +			flags &= ~SLI4_IO_CONTINUATION;
> +
> +		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
> +		if (sli_fcp_tsend64_wqe(&hw->sli, io->wqe.wqebuf,
> +					hw->sli.wqe_size, &io->def_sgl,
> +					io->first_data_sge,
> +					iparam->fcp_tgt.offset, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, iparam->fcp_tgt.ox_id,
> +					rpi, rnode->node_group,
> +					rnode->fc_id, flags,
> +					iparam->fcp_tgt.dif_oper,
> +					iparam->fcp_tgt.blk_size,
> +					iparam->fcp_tgt.cs_ctl,
> +					iparam->fcp_tgt.app_id)) {

Same here.

> +			efc_log_err(hw->os, "TSEND WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_IO_TARGET_RSP: {
> +		u16 flags = iparam->fcp_tgt.flags;
> +
> +		if (io->xbusy)
> +			flags |= SLI4_IO_CONTINUATION;
> +		else
> +			flags &= ~SLI4_IO_CONTINUATION;
> +
> +		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
> +		if (sli_fcp_trsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +				       hw->sli.wqe_size, &io->def_sgl,
> +				       len, io->indicator, io->reqtag,
> +				       SLI4_CQ_DEFAULT, iparam->fcp_tgt.ox_id,
> +					rpi, rnode->node_group, rnode->fc_id,
> +					flags, iparam->fcp_tgt.cs_ctl,
> +				       0, iparam->fcp_tgt.app_id)) {

And here.

> +			efc_log_err(hw->os, "TRSP WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +
> +		break;
> +	}
> +	default:
> +		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (send_wqe && rc == EFCT_HW_RTN_SUCCESS) {
> +		if (!io->wq)
> +			io->wq = efct_hw_queue_next_wq(hw, io);
> +
> +		io->xbusy = true;
> +
> +		/*
> +		 * Add IO to active io wqe list before submitting, in case the
> +		 * wcqe processing preempts this thread.
> +		 */
> +		hw->tcmd_wq_submit[io->wq->instance]++;
> +		io->wq->use_count++;
> +		efct_hw_add_io_timed_wqe(hw, io);
> +		rc = efct_hw_wq_write(io->wq, &io->wqe);
> +		if (rc >= 0) {
> +			/* non-negative return is success */
> +			rc = 0;
> +		} else {
> +			/* failed to write wqe, remove from active wqe list */
> +			efc_log_err(hw->os,
> +				     "sli_queue_write failed: %d\n", rc);
> +			io->xbusy = false;
> +			efct_hw_remove_io_timed_wqe(hw, io);
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +/**
> + * Send a raw frame
> + *
> + * Using the SEND_FRAME_WQE, a frame consisting of header and payload is sent.
> + */
> +enum efct_hw_rtn
> +efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
> +		   u8 sof, u8 eof, struct efc_dma *payload,
> +		   struct efct_hw_send_frame_context *ctx,
> +		   void (*callback)(void *arg, u8 *cqe, int status),
> +		   void *arg)
> +{
> +	int rc;
> +	struct efct_hw_wqe *wqe;
> +	u32 xri;
> +	struct hw_wq *wq;
> +
> +	wqe = &ctx->wqe;
> +
> +	/* populate the callback object */
> +	ctx->hw = hw;
> +
> +	/* Fetch and populate request tag */
> +	ctx->wqcb = efct_hw_reqtag_alloc(hw, callback, arg);
> +	if (!ctx->wqcb) {
> +		efc_log_err(hw->os, "can't allocate request tag\n");
> +		return EFCT_HW_RTN_NO_RESOURCES;
> +	}
> +
> +	/* Choose a work queue, first look for a class[1] wq, otherwise just
> +	 * use wq[0]
> +	 */
> +	wq = efct_varray_iter_next(hw->wq_class_array[1]);
> +	if (!wq)
> +		wq = hw->hw_wq[0];
> +
> +	/* Set XRI and RX_ID in the header based on which WQ, and which
> +	 * send_frame_io we are using
> +	 */
> +	xri = wq->send_frame_io->indicator;
> +
> +	/* Build the send frame WQE */
> +	rc = sli_send_frame_wqe(&hw->sli, wqe->wqebuf,
> +				hw->sli.wqe_size, sof, eof,
> +				(u32 *)hdr, payload, payload->len,
> +				EFCT_HW_SEND_FRAME_TIMEOUT, xri,
> +				ctx->wqcb->instance_index);
> +	if (rc) {
> +		efc_log_err(hw->os, "sli_send_frame_wqe failed: %d\n",
> +			     rc);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* Write to WQ */
> +	rc = efct_hw_wq_write(wq, wqe);
> +	if (rc) {
> +		efc_log_err(hw->os, "efct_hw_wq_write failed: %d\n", rc);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	wq->use_count++;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +u32
> +efct_hw_io_get_count(struct efct_hw *hw,
> +		     enum efct_hw_io_count_type io_count_type)
> +{
> +	struct efct_hw_io *io = NULL;
> +	u32 count = 0;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +
> +	switch (io_count_type) {
> +	case EFCT_HW_IO_INUSE_COUNT:
> +		list_for_each_entry(io, &hw->io_inuse, list_entry) {
> +			count = count + 1;
> +		}
> +		break;
> +	case EFCT_HW_IO_FREE_COUNT:
> +		list_for_each_entry(io, &hw->io_free, list_entry) {
> +			count = count + 1;
> +		}
> +		break;
> +	case EFCT_HW_IO_WAIT_FREE_COUNT:
> +		list_for_each_entry(io, &hw->io_wait_free, list_entry) {
> +			count = count + 1;
> +		}
> +		break;
> +	case EFCT_HW_IO_N_TOTAL_IO_COUNT:
> +		count = hw->config.n_io;
> +		break;
> +	}
> +
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +	return count;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 55679e40cc49..1a019594c471 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -952,4 +952,23 @@ efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
>  extern int
>  efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
>  
> +int efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe);
> +enum efct_hw_rtn
> +efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
> +		   u8 sof, u8 eof, struct efc_dma *payload,
> +		struct efct_hw_send_frame_context *ctx,
> +		void (*callback)(void *arg, u8 *cqe, int status),
> +		void *arg);
> +typedef int(*efct_hw_srrs_cb_t)(struct efct_hw_io *io,
> +				struct efc_remote_node *rnode, u32 length,
> +				int status, u32 ext_status, void *arg);
> +extern enum efct_hw_rtn
> +efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		  struct efct_hw_io *io,
> +		  struct efc_dma *send, u32 len,
> +		  struct efc_dma *receive, struct efc_remote_node *rnode,
> +		  union efct_hw_io_param_u *iparam,
> +		  efct_hw_srrs_cb_t cb,
> +		  void *arg);
> +
>  #endif /* __EFCT_H__ */
> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 12/32] elx: libefc: Remote node state machine interfaces
  2019-12-20 22:37 ` [PATCH v2 12/32] elx: libefc: Remote node " James Smart
  2020-01-09  8:31   ` Hannes Reinecke
@ 2020-01-09  9:57   ` Daniel Wagner
  1 sibling, 0 replies; 78+ messages in thread
From: Daniel Wagner @ 2020-01-09  9:57 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, maier, bvanassche, Ram Vegesna

Hi,

On Fri, Dec 20, 2019 at 02:37:03PM -0800, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Remote node (aka remote port) allocation, initializaion and
>   destroy routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/libefc/efc_node.c | 1343 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_node.h |  188 +++++
>  2 files changed, 1531 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_node.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_node.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
> new file mode 100644
> index 000000000000..57bf25a5d76a
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.c
> @@ -0,0 +1,1343 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efc.h"
> +
> +/* HW node callback events from the user driver */
> +int
> +efc_remote_node_cb(void *arg, int event,
> +		   void *data)
> +{
> +	struct efc *efc = arg;
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc_remote_node *rnode = data;
> +	struct efc_node *node = rnode->node;
> +	unsigned long flags = 0;
> +
> +	switch (event) {
> +	case EFC_HW_NODE_ATTACH_OK:
> +		sm_event = EFC_EVT_NODE_ATTACH_OK;
> +		break;
> +
> +	case EFC_HW_NODE_ATTACH_FAIL:
> +		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_OK:
> +		sm_event = EFC_EVT_NODE_FREE_OK;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_FAIL:
> +		sm_event = EFC_EVT_NODE_FREE_FAIL;
> +		break;
> +
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", event);
> +		return -1;

Use a defined return value.
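E.g. return -EINVAL (or a driver-specific error define) instead of the
bare -1.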

> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, sm_event, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return 0;
> +}
> +
> +/* Find an FC node structure given the FC port ID */
> +struct efc_node *
> +efc_node_find(struct efc_sli_port *sport, u32 port_id)
> +{
> +	struct efc_node *node;
> +
> +	node = efc_spv_get(sport->lookup, port_id);
> +	return node;
> +}

Is this helper function that useful? And if you insist on it, you
could just do a

	return efc_spv_get(sport->lookup, port_id);

> +int
> +efc_node_create_pool(struct efc *efc, u32 node_count)
> +{
> +	u32 i;
> +	struct efc_node *node;
> +	u64 max_xfer_size;
> +	struct efc_dma *dma;
> +
> +	efc->nodes_count = node_count;
> +
> +	efc->nodes = kmalloc_array(node_count, sizeof(struct efc_node *),
> +				   GFP_ATOMIC);
> +	if (!efc->nodes)
> +		return -1;
> +
> +	memset(efc->nodes, 0, node_count * sizeof(struct efc_node *));


Would using kcalloc() instead of kmalloc_array() + memset() make sense?
I am always slightly confused about what all the compiler magic and
macros are able to do here :)
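I.e. (untested):

	efc->nodes = kcalloc(node_count, sizeof(struct efc_node *),
			     GFP_ATOMIC);
	if (!efc->nodes)
		return -1;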

> +
> +	if (efc->max_xfer_size)
> +		max_xfer_size = efc->max_xfer_size;
> +	else
> +		max_xfer_size = 65536;
> +
> +	INIT_LIST_HEAD(&efc->nodes_free_list);
> +
> +	for (i = 0; i < node_count; i++) {
> +		dma = NULL;
> +		node = kzalloc(sizeof(*node), GFP_ATOMIC);
> +		if (!node) {
> +			efc_log_err(efc, "node allocation failed");
> +			goto error;
> +		}
> +		/* Assign any persistent field values */
> +		node->instance_index = i;
> +		node->max_wr_xfer_size = max_xfer_size;
> +		node->rnode.indicator = U32_MAX;
> +
> +		dma = &node->sparm_dma_buf;
> +		dma->size = 256;
> +		dma->virt = dma_alloc_coherent(&efc->pcidev->dev, dma->size,
> +					       &dma->phys, GFP_DMA);
> +		if (!dma->virt) {
> +			kfree(node);
> +			efc_log_err(efc, "efc_dma_alloc failed");
> +			goto error;

If you bail out here, we end up in efc_node_free_pool(), which calls
dma_free_coherent(). But as I understand it, that allocation is exactly
what just failed, so the undo doesn't work, right?

> +		}
> +
> +		efc->nodes[i] = node;
> +		INIT_LIST_HEAD(&node->list_entry);
> +		list_add_tail(&node->list_entry, &efc->nodes_free_list);
> +	}
> +	return 0;
> +
> +error:
> +	efc_node_free_pool(efc);
> +	return -1;

Use a defined return value.

> +}
> +
> +void
> +efc_node_free_pool(struct efc *efc)
> +{
> +	struct efc_node *node;
> +	u32 i;
> +	struct efc_dma *dma;
> +
> +	if (!efc->nodes)
> +		return;
> +
> +	for (i = 0; i < efc->nodes_count; i++) {
> +		node = efc->nodes[i];
> +		if (node) {
> +			/* free sparam_dma_buf */
> +			dma = &node->sparm_dma_buf;
> +			dma_free_coherent(&efc->pcidev->dev, dma->size,
> +					  dma->virt, dma->phys);
> +
> +			kfree(node);
> +		}
> +		efc->nodes[i] = NULL;
> +	}
> +}
> +
> +struct efc_node *
> +efc_node_get_instance(struct efc *efc, u32 index)
> +{
> +	struct efc_node *node = NULL;
> +
> +	if (index >= efc->nodes_count) {
> +		efc_log_test(efc, "invalid index: %d\n", index);
> +		return NULL;
> +	}
> +	node = efc->nodes[index];
> +	return node->attached ? node : NULL;
> +}
> +
> +struct efc_node *efc_node_alloc(struct efc_sli_port *sport,
> +				  u32 port_id, bool init, bool targ)
> +{
> +	int rc;
> +	struct efc_node *node = NULL;
> +	u32 instance_index;
> +	u64 max_wr_xfer_size;
> +	struct efc *efc = sport->efc;
> +	struct efc_dma sparm_dma_buf;
> +
> +	if (sport->shutting_down) {
> +		efc_log_debug(efc, "node allocation when shutting down %06x",
> +			      port_id);
> +		return NULL;
> +	}
> +
> +	if (!list_empty(&efc->nodes_free_list)) {
> +		node = list_first_entry(&efc->nodes_free_list,
> +					struct efc_node, list_entry);
> +		list_del(&node->list_entry);
> +	}
> +
> +	if (!node) {
> +		efc_log_err(efc, "node allocation failed %06x", port_id);
> +		return NULL;
> +	}
> +
> +	/* Save persistent values across memset zero */
> +	instance_index = node->instance_index;
> +	max_wr_xfer_size = node->max_wr_xfer_size;
> +	sparm_dma_buf = node->sparm_dma_buf;
> +
> +	memset(node, 0, sizeof(*node));
> +	node->instance_index = instance_index;
> +	node->max_wr_xfer_size = max_wr_xfer_size;
> +	node->sparm_dma_buf = sparm_dma_buf;
> +	node->rnode.indicator = U32_MAX;
> +
> +	node->sport = sport;
> +	INIT_LIST_HEAD(&node->list_entry);
> +	list_add_tail(&node->list_entry, &sport->node_list);
> +
> +	node->efc = efc;
> +	node->init = init;
> +	node->targ = targ;
> +
> +	spin_lock_init(&node->pend_frames_lock);
> +	INIT_LIST_HEAD(&node->pend_frames);
> +	spin_lock_init(&node->active_ios_lock);
> +	INIT_LIST_HEAD(&node->active_ios);
> +	INIT_LIST_HEAD(&node->els_io_pend_list);
> +	INIT_LIST_HEAD(&node->els_io_active_list);
> +	efc->tt.scsi_io_alloc_enable(efc, node);
> +
> +	rc = efc->tt.hw_node_alloc(efc, &node->rnode, port_id, sport);
> +	if (rc) {
> +		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);

Isn't node leaked here?
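If so, putting it back on the free list before returning would
presumably be enough, e.g. (sketch, untested):

	if (rc) {
		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
		list_del(&node->list_entry);
		list_add_tail(&node->list_entry, &efc->nodes_free_list);
		return NULL;
	}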

> +		return NULL;
> +	}
> +	/* zero the service parameters */
> +	memset(node->sparm_dma_buf.virt, 0, node->sparm_dma_buf.size);
> +
> +	node->rnode.node = node;
> +	node->sm.app = node;
> +	node->evtdepth = 0;
> +
> +	efc_node_update_display_name(node);
> +
> +	efc_spv_set(sport->lookup, port_id, node);
> +
> +	return node;
> +}
> +
> +int
> +efc_node_free(struct efc_node *node)
> +{
> +	struct efc_sli_port *sport;
> +	struct efc *efc;
> +	int rc = 0;
> +	struct efc_node *ns = NULL;
> +
> +	sport = node->sport;
> +	efc = node->efc;
> +
> +	node_printf(node, "Free'd\n");
> +
> +	if (node->refound) {
> +		/*
> +		 * Save the name server node. We will send fake RSCN event at
> +		 * the end to handle ignored RSCN event during node deletion
> +		 */
> +		ns = efc_node_find(node->sport, FC_FID_DIR_SERV);
> +	}
> +
> +	list_del(&node->list_entry);
> +
> +	/* Free HW resources */
> +	rc = efc->tt.hw_node_free_resources(efc, &node->rnode);
> +	if (EFC_HW_RTN_IS_ERROR(rc)) {
> +		efc_log_test(efc, "efc_hw_node_free failed: %d\n", rc);
> +		rc = -1;

Use a define here.

> +	}
> +
> +	/* if the gidpt_delay_timer is still running, then delete it */
> +	if (timer_pending(&node->gidpt_delay_timer))
> +		del_timer(&node->gidpt_delay_timer);
> +
> +	/* remove entry from sparse vector list */
> +	if (!sport->lookup) {
> +		efc_log_test(node->efc,
> +			     "assertion failed: sport lookup is NULL\n");
> +		return -1;

Use a define here.

> +	}
> +
> +	efc_spv_set(sport->lookup, node->rnode.fc_id, NULL);
> +
> +	/*
> +	 * If the node_list is empty,
> +	 * then post a ALL_CHILD_NODES_FREE event to the sport,
> +	 * after the lock is released.
> +	 * The sport may be free'd as a result of the event.
> +	 */
> +	if (list_empty(&sport->node_list))
> +		efc_sm_post_event(&sport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
> +				  NULL);
> +
> +	node->sport = NULL;
> +	node->sm.current_state = NULL;
> +
> +	/* return to free list */
> +	INIT_LIST_HEAD(&node->list_entry);
> +	list_add_tail(&node->list_entry, &efc->nodes_free_list);
> +
> +	if (ns) {
> +		/* sending fake RSCN event to name server node */
> +		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
> +	}
> +
> +	return rc;
> +}
> +
> +void
> +efc_node_force_free(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	/* shutdown sm processing */
> +	efc_sm_disable(&node->sm);
> +
> +	strncpy(node->prev_state_name, node->current_state_name,
> +		sizeof(node->prev_state_name));
> +	strncpy(node->current_state_name, "disabled",
> +		sizeof(node->current_state_name));
> +
> +	efc->tt.node_io_cleanup(efc, node, true);
> +	efc->tt.node_els_cleanup(efc, node, true);
> +
> +	/* manually purge pending frames (if any) */
> +	efc->tt.node_purge_pending(efc, node);
> +
> +	efc_node_free(node);
> +}
> +
> +static void
> +efc_dma_copy_in(struct efc_dma *dma, void *buffer, u32 buffer_length)
> +{
> +	if (!dma)
> +		return;
> +	if (!buffer)
> +		return;
> +	if (buffer_length == 0)
> +		return;

Merge the three independent conditions into one.
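I.e.:

	if (!dma || !buffer || buffer_length == 0)
		return;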

> +	if (buffer_length > dma->size)
> +		buffer_length = dma->size;
> +
> +	memcpy(dma->virt, buffer, buffer_length);
> +	dma->len = buffer_length;
> +}
> +
> +int
> +efc_node_attach(struct efc_node *node)
> +{
> +	int rc = 0;
> +	struct efc_sli_port *sport = node->sport;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = node->efc;
> +
> +	if (!domain->attached) {
> +		efc_log_test(efc,
> +			     "Warning: unattached domain\n");
> +		return -1;

Use a define here.

> +	}
> +	/* Update node->wwpn/wwnn */
> +
> +	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
> +				efc_node_get_wwpn(node));
> +	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
> +				efc_node_get_wwnn(node));
> +
> +	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
> +			sizeof(node->service_params) - 4);

Maybe add a comment explaining why the +/-4 offset is needed.
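E.g. something like this, assuming the offset is there to skip the
leading command code word (please confirm):

	/* +4/-4: skip the 4-byte command code that precedes the
	 * service parameters
	 */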

> +
> +	/* take lock to protect node->rnode.attached */
> +	rc = efc->tt.hw_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
> +	if (EFC_HW_RTN_IS_ERROR(rc))
> +		efc_log_test(efc, "efc_hw_node_attach failed: %d\n", rc);
> +
> +	return rc;
> +}
> +
> +void
> +efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
> +{
> +	switch (fc_id) {
> +	case FC_FID_FLOGI:
> +		snprintf(buffer, buffer_length, "fabric");
> +		break;
> +	case FC_FID_FCTRL:
> +		snprintf(buffer, buffer_length, "fabctl");
> +		break;
> +	case FC_FID_DIR_SERV:
> +		snprintf(buffer, buffer_length, "nserve");
> +		break;
> +	default:
> +		if (fc_id == FC_FID_DOM_MGR) {
> +			snprintf(buffer, buffer_length, "dctl%02x",
> +				 (fc_id & 0x0000ff));
> +		} else {
> +			snprintf(buffer, buffer_length, "%06x", fc_id);
> +		}
> +		break;
> +	}
> +}
> +
> +void
> +efc_node_update_display_name(struct efc_node *node)
> +{
> +	u32 port_id = node->rnode.fc_id;
> +	struct efc_sli_port *sport = node->sport;
> +	char portid_display[16];
> +
> +	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
> +
> +	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
> +		 sport->display_name, portid_display);
> +}
> +
> +void
> +efc_node_send_ls_io_cleanup(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +
> +	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
> +		efc_log_debug(efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
> +			      node->display_name, node->ls_acc_oxid);
> +
> +		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +		node->ls_acc_io = NULL;
> +	}
> +}
> +
> +void *
> +__efc_node_shutdown(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	unsigned long flags = 0;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		efc_node_hold_frames(node);
> +		efc_assert(efc_node_active_ios_empty(node), NULL);
> +		efc_assert(efc_els_io_list_empty(node,
> +						 &node->els_io_active_list),
> +			   NULL);

whitespace damage

> +
> +		/* by default, we will be freeing node after we unwind */
> +		node->req_free = true;
> +
> +		switch (node->shutdown_reason) {
> +		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
> +			/*
> +			 * sm: if shutdown reason is
> +			 * implicit logout / efc_node_attach
> +			 */
> +			/* Node shutdown b/c of PLOGI received when node
> +			 * already logged in. We have PLOGI service
> +			 * parameters, so submit node attach; we won't be
> +			 * freeing this node
> +			 */
> +
> +			/* currently, only case for implicit logo is PLOGI
> +			 * recvd. Thus, node's ELS IO pending list won't be
> +			 * empty (PLOGI will be on it)
> +			 */

Merge the comments into one consistent comment.

> +			efc_assert(node->send_ls_acc ==
> +				   EFC_NODE_SEND_LS_ACC_PLOGI, NULL);
> +			node_printf(node,
> +				    "Shutdown reason: implicit logout, re-authenticate\n");
> +
> +			efc->tt.scsi_io_alloc_enable(efc, node);
> +
> +			/* Re-attach node with the same HW node resources */
> +			node->req_free = false;
> +			rc = efc_node_attach(node);
> +			efc_node_transition(node, __efc_d_wait_node_attach,
> +					    NULL);
> +			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
> +				efc_node_post_event(node,
> +						    EFC_EVT_NODE_ATTACH_OK,
> +						    NULL);
> +			}

I would suggest moving this part (and also the other cases) into a
separate function. The heavy indentation makes it more difficult to
read, especially with all those additional newlines needed to stay
within the 80 character limit.
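E.g. the implicit-logout case could become something like (rough
sketch, helper name made up):

	static void efc_node_shutdown_implicit_logo(struct efc_node *node)
	{
		struct efc *efc = node->efc;
		int rc;

		node_printf(node,
			    "Shutdown reason: implicit logout, re-authenticate\n");
		efc->tt.scsi_io_alloc_enable(efc, node);

		/* Re-attach node with the same HW node resources */
		node->req_free = false;
		rc = efc_node_attach(node);
		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
					    NULL);
	}

with similar helpers for the explicit-logout and default cases.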

> +			break;
> +		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO: {
> +			s8 pend_frames_empty;
> +			struct list_head *list;
> +
> +			/* cleanup any pending LS_ACC ELSs */
> +			efc_node_send_ls_io_cleanup(node);
> +			list = &node->els_io_pend_list;
> +			efc_assert(efc_els_io_list_empty(node, list), NULL);
> +
> +			spin_lock_irqsave(&node->pend_frames_lock, flags);
> +			pend_frames_empty = list_empty(&node->pend_frames);
> +			spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +			/*
> +			 * there are two scenarios where we want to keep
> +			 * this node alive:
> +			 * 1. there are pending frames that need to be
> +			 *    processed or
> +			 * 2. we're an initiator and the remote node is
> +			 *    a target and we need to re-authenticate
> +			 */
> +			node_printf(node,
> +				    "Shutdown: explicit logo pend=%d ",
> +					!pend_frames_empty);
> +			 node_printf(node,
> +				     "sport.ini=%d node.tgt=%d\n",
> +				    node->sport->enable_ini, node->targ);

whitespace damage

> +
> +			if (!pend_frames_empty ||
> +			    (node->sport->enable_ini && node->targ)) {
> +				u8 send_plogi = false;
> +
> +				if (node->sport->enable_ini && node->targ) {
> +					/*
> +					 * we're an initiator and
> +					 * node shutting down is a target;
> +					 * we'll need to re-authenticate in
> +					 * initial state
> +					 */
> +					send_plogi = true;
> +				}
> +
> +				/*
> +				 * transition to __efc_d_init
> +				 * (will retain HW node resources)
> +				 */
> +				efc->tt.scsi_io_alloc_enable(efc, node);
> +				node->req_free = false;
> +
> +				/*
> +				 * either pending frames exist,
> +				 * or we're re-authenticating with PLOGI
> +				 * (or both); in either case,
> +				 * return to initial state
> +				 */
> +				efc_node_init_device(node, send_plogi);
> +			}
> +			/* else: let node shutdown occur */
> +			break;
> +		}
> +		case EFC_NODE_SHUTDOWN_DEFAULT:
> +		default: {
> +			struct list_head *list;
> +
> +			/*
> +			 * shutdown due to link down,
> +			 * node going away (xport event) or
> +			 * sport shutdown, purge pending and
> +			 * proceed to cleanup node
> +			 */
> +
> +			/* cleanup any pending LS_ACC ELSs */
> +			efc_node_send_ls_io_cleanup(node);
> +			list = &node->els_io_pend_list;
> +			efc_assert(efc_els_io_list_empty(node, list), NULL);
> +
> +			node_printf(node,
> +				    "Shutdown reason: default, purge pending\n");
> +			efc->tt.node_purge_pending(efc, node);
> +			break;
> +		}
> +		}
> +
> +		break;
> +	}
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int
> +efc_node_check_els_quiesced(struct efc_node *node)
> +{
> +	/* check to see if ELS requests, completions are quiesced */
> +	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
> +	    efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +		if (!node->attached) {
> +			/* hw node detach already completed, proceed */
> +			node_printf(node, "HW node not attached\n");
> +			efc_node_transition(node,
> +					    __efc_node_wait_ios_shutdown,
> +					     NULL);
> +		} else {
> +			/*
> +			 * hw node detach hasn't completed,
> +			 * transition and wait
> +			 */
> +			node_printf(node, "HW node still attached\n");
> +			efc_node_transition(node, __efc_node_wait_node_free,
> +					    NULL);
> +		}
> +		return 1;

Please use a defined return value here (a bool is probably the most
natural fit) instead of the magic 1/0.
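E.g. (untested):

	static bool
	efc_node_check_els_quiesced(struct efc_node *node)
	{
		/* check to see if ELS requests, completions are quiesced */
		if (node->els_req_cnt || node->els_cmpl_cnt ||
		    !efc_els_io_list_empty(node, &node->els_io_active_list))
			return false;

		if (!node->attached) {
			/* hw node detach already completed, proceed */
			node_printf(node, "HW node not attached\n");
			efc_node_transition(node, __efc_node_wait_ios_shutdown,
					    NULL);
		} else {
			/* hw node detach hasn't completed, transition and wait */
			node_printf(node, "HW node still attached\n");
			efc_node_transition(node, __efc_node_wait_node_free,
					    NULL);
		}
		return true;
	}

and let efc_node_initiate_cleanup() test !efc_node_check_els_quiesced().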

> +	}
> +	return 0;
> +}
> +
> +void
> +efc_node_initiate_cleanup(struct efc_node *node)
> +{
> +	struct efc *efc;
> +
> +	efc = node->efc;
> +	efc->tt.node_els_cleanup(efc, node, false);
> +
> +	/*
> +	 * if ELS's have already been quiesced, will move to next state
> +	 * if ELS's have not been quiesced, abort them
> +	 */
> +	if (efc_node_check_els_quiesced(node) == 0) {
> +		/*
> +		 * Abort all ELS's since ELS's won't be aborted by HW
> +		 * node free.
> +		 */
> +		efc_node_hold_frames(node);
> +		efc->tt.node_abort_all_els(efc, node);
> +		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
> +	}
> +}
> +
> +/* Node state machine: Wait for all ELSs to complete */
> +void *
> +__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	bool check_quiesce = false;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			node_printf(node, "All ELS IOs complete\n");
> +			check_quiesce = true;
> +		}
> +		break;
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		efc_assert(node->els_req_cnt, NULL);
> +		node->els_req_cnt--;
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		efc_assert(node->els_cmpl_cnt, NULL);
> +		node->els_cmpl_cnt--;
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* all ELS IO's complete */
> +		node_printf(node, "All ELS IOs complete\n");
> +		efc_assert(efc_els_io_list_empty(node,
> +						 &node->els_io_active_list),
> +			   NULL);
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	if (check_quiesce)
> +		efc_node_check_els_quiesced(node);
> +
> +	return NULL;
> +}
> +
> +/* Node state machine: Wait for a HW node free event to complete */
> +void *
> +__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_FREE_OK:
> +		/* node is officially no longer attached */
> +		node->attached = false;
> +		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		/* As IOs and ELS IO's complete we expect to get these events */
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* Fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * State is entered when a node receives a shutdown event, and it's waiting
> + * for all the active IOs and ELS IOs associated with the node to complete.
> + */
> +void *
> +__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +
> +		/* first check to see if no ELS IOs are outstanding */
> +		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			/* If there are any active IOS, Free them. */
> +			efc_node_transition(node, __efc_node_shutdown, NULL);
> +		}
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE: {
> +		if (efc_node_active_ios_empty(node) &&
> +		    efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			efc_node_transition(node, __efc_node_shutdown, NULL);
> +		}
> +		break;
> +	}

The brackets are not needed.

> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IO IO's complete */
> +		efc_assert(node->els_req_cnt, NULL);
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
> +			      efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +	struct efc *efc = NULL;
> +	struct efc_node_cb *cbdata = arg;
> +
> +	efc_assert(ctx, NULL);
> +	efc_assert(ctx->app, NULL);
> +	node = ctx->app;
> +	efc_assert(node->efc, NULL);
> +	efc = node->efc;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
> +	case EFC_EVT_NODE_MISSING:
> +	case EFC_EVT_FCP_CMD_RCVD:
> +		break;
> +
> +	case EFC_EVT_NODE_REFOUND:
> +		node->refound = true;
> +		break;
> +
> +	/*
> +	 * node->attached must be set appropriately
> +	 * for all node attach/detach events
> +	 */
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		break;
> +
> +	case EFC_EVT_NODE_FREE_OK:
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		node->attached = false;
> +		break;
> +
> +	/*
> +	 * handle any ELS completions that
> +	 * other states either didn't care about
> +	 * or forgot about
> +	 */
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		efc_assert(node->els_cmpl_cnt, NULL);
> +		node->els_cmpl_cnt--;
> +		break;
> +
> +	/*
> +	 * handle any ELS request completions that
> +	 * other states either didn't care about
> +	 * or forgot about
> +	 */
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		efc_assert(node->els_req_cnt, NULL);
> +		node->els_req_cnt--;
> +		break;
> +
> +	case EFC_EVT_ELS_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/*
> +		 * Unsupported ELS was received,
> +		 * send LS_RJT, command not supported
> +		 */
> +		efc_log_debug(efc,
> +			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
> +			      node->display_name, funcname,
> +			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
> +
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
> +		break;
> +	}
> +
> +	case EFC_EVT_PLOGI_RCVD:
> +	case EFC_EVT_FLOGI_RCVD:
> +	case EFC_EVT_LOGO_RCVD:
> +	case EFC_EVT_PRLI_RCVD:
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_FDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD:
> +	case EFC_EVT_SCR_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/* sm: / send ELS_RJT */
> +		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
> +			      node->display_name, funcname,
> +			      efc_sm_event_name(evt));
> +		/* if we didn't catch this in a state, send generic LS_RJT */
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +						ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
> +
> +		break;
> +	}
> +	case EFC_EVT_ABTS_RCVD: {
> +		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
> +			      node->display_name, funcname,
> +			      efc_sm_event_name(evt));
> +
> +		/* sm: / send BA_ACC */
> +		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
> +		break;
> +	}
> +
> +	default:
> +		efc_log_test(node->efc, "[%s] %-20s %-20s not handled\n",
> +			     node->display_name, funcname,
> +			     efc_sm_event_name(evt));
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_save_sparms(struct efc_node *node, void *payload)
> +{
> +	memcpy(node->service_params, payload, sizeof(node->service_params));
> +}
> +
> +void
> +efc_node_post_event(struct efc_node *node,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	bool free_node = false;
> +
> +	node->evtdepth++;
> +
> +	efc_sm_post_event(&node->sm, evt, arg);
> +
> +	/* If our event call depth is one and
> +	 * we're not holding frames
> +	 * then we can dispatch any pending frames.
> +	 * We don't want to allow the efc_process_node_pending()
> +	 * call to recurse.
> +	 */
> +	if (!node->hold_frames && node->evtdepth == 1)
> +		efc_process_node_pending(node);
> +
> +	node->evtdepth--;
> +
> +	/*
> +	 * Free the node object if so requested,
> +	 * and we're at an event call depth of zero
> +	 */
> +	if (node->evtdepth == 0 && node->req_free)
> +		free_node = true;
> +
> +	if (free_node)
> +		efc_node_free(node);
> +}
> +
> +void
> +efc_node_transition(struct efc_node *node,
> +		    void *(*state)(struct efc_sm_ctx *,
> +				   enum efc_sm_event, void *), void *data)
> +{
> +	struct efc_sm_ctx *ctx = &node->sm;
> +
> +	if (ctx->current_state == state) {
> +		efc_node_post_event(node, EFC_EVT_REENTER, data);
> +	} else {
> +		efc_node_post_event(node, EFC_EVT_EXIT, data);
> +		ctx->current_state = state;
> +		efc_node_post_event(node, EFC_EVT_ENTER, data);
> +	}
> +}
> +
> +void
> +efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name)
> +{
> +	memset(buffer, 0, buffer_len);
> +
> +	snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
> +}
> +
> +uint64_t
> +efc_node_get_wwpn(struct efc_node *node)
> +{
> +	struct fc_els_flogi *sp =
> +			(struct fc_els_flogi *)node->service_params;
> +
> +	return be64_to_cpu(sp->fl_wwpn);
> +}
> +
> +uint64_t
> +efc_node_get_wwnn(struct efc_node *node)
> +{
> +	struct fc_els_flogi *sp =
> +			(struct fc_els_flogi *)node->service_params;
> +
> +	return be64_to_cpu(sp->fl_wwnn);
> +}
> +
> +int
> +efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		       void *arg, uint8_t cmd,
> +			void *(*efc_node_common_func)(const char *,
> +						      struct efc_sm_ctx *,
> +			       enum efc_sm_event, void *),
> +			const char *funcname)

Uhh, this looks nasty, though I don't see many options for how to fix
it (maybe align it a bit better).
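One option would be a typedef for the common-handler callback, which
would also shorten the prototypes in efc_node.h (untested):

	typedef void *(*efc_node_common_fn)(const char *funcname,
					    struct efc_sm_ctx *ctx,
					    enum efc_sm_event evt, void *arg);

	int
	efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
			       void *arg, u8 cmd,
			       efc_node_common_fn node_common_func,
			       const char *funcname);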

> +{
> +	return 0;
> +}
> +
> +int
> +efc_node_check_ns_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		      void *arg, uint16_t cmd,
> +		       void *(*efc_node_common_func)(const char *,
> +						     struct efc_sm_ctx *,
> +			      enum efc_sm_event, void *),
> +		       const char *funcname)
> +{
> +	return 0;
> +}
> +
> +int
> +efc_node_active_ios_empty(struct efc_node *node)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	empty = list_empty(&node->active_ios);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +int
> +efc_els_io_list_empty(struct efc_node *node, struct list_head *list)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	empty = list_empty(list);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +void
> +efc_node_pause(struct efc_node *node,
> +	       void *(*state)(struct efc_sm_ctx *,
> +			      enum efc_sm_event, void *))
> +
> +{
> +	node->nodedb_state = state;
> +	efc_node_transition(node, __efc_node_paused, NULL);
> +}
> +
> +/**
> + * This state is entered when a state is "paused". When resumed, the node
> + * is transitioned to a previously saved state (node->ndoedb_state)
> + */
> +void *
> +__efc_node_paused(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		node_printf(node, "Paused\n");
> +		break;
> +
> +	case EFC_EVT_RESUME: {
> +		void *(*pf)(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +
> +		pf = node->nodedb_state;
> +
> +		node->nodedb_state = NULL;
> +		efc_node_transition(node, pf, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		break;
> +
> +	case EFC_EVT_SHUTDOWN:
> +		node->req_free = true;
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +/* Posts a resume event to the paused node */
> +int
> +efc_node_resume(struct efc_node *node)
> +{
> +	efc_node_post_event(node, EFC_EVT_RESUME, NULL);
> +
> +	return 0;
> +}
> +
> +int
> +efc_node_recv_els_frame(struct efc_node *node,
> +			struct efc_hw_sequence *seq)
> +{
> +	unsigned long flags = 0;
> +	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
> +	struct {
> +		u32 cmd;
> +		enum efc_sm_event evt;
> +		u32 payload_size;
> +	} els_cmd_list[] = {
> +		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
> +		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
> +		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
> +		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
> +		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
> +		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
> +		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
> +	};
> +	struct efc_node_cb cbdata;
> +	u8 *buf = seq->payload->dma.virt;
> +	enum efc_sm_event evt = EFC_EVT_ELS_RCVD;
> +	u32 i;
> +
> +	memset(&cbdata, 0, sizeof(cbdata));
> +	cbdata.header = seq->header;
> +	cbdata.payload = seq->payload;
> +
> +	/* find a matching event for the ELS command */
> +	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
> +		if (els_cmd_list[i].cmd == buf[0]) {
> +			evt = els_cmd_list[i].evt;
> +			break;
> +		}
> +	}
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, evt, &cbdata);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +
> +	return 0;
> +}
> +
> +int
> +efc_node_recv_ct_frame(struct efc_node *node,
> +		       struct efc_hw_sequence *seq)
> +{
> +	struct fc_ct_hdr *iu = seq->payload->dma.virt;
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +	struct efc *efc = node->efc;
> +	u16 gscmd = be16_to_cpu(iu->ct_cmd);
> +
> +	efc_log_err(efc, "[%s] Received cmd :%x sending CT_REJECT\n",
> +		    node->display_name, gscmd);
> +	efc->tt.send_ct_rsp(efc, node, be16_to_cpu(hdr->fh_ox_id), iu,
> +			    FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
> +	return 0;

Is a return value needed?

> +}
> +
> +int
> +efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
> +{
> +	struct efc_node_cb cbdata;
> +	unsigned long flags = 0;
> +
> +	memset(&cbdata, 0, sizeof(cbdata));
> +	cbdata.header = seq->header;
> +	cbdata.payload = seq->payload;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +
> +	return 1;

If the function can't fail, why bother returning anything?
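I'd just make it void, e.g. (untested, assuming no caller checks the
return value):

	void
	efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
	{
		struct efc_node_cb cbdata;
		unsigned long flags = 0;

		memset(&cbdata, 0, sizeof(cbdata));
		cbdata.header = seq->header;
		cbdata.payload = seq->payload;

		spin_lock_irqsave(&node->efc->lock, flags);
		efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
		spin_unlock_irqrestore(&node->efc->lock, flags);
	}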

> +}
> +
> +int
> +efc_node_recv_bls_no_sit(struct efc_node *node,
> +			 struct efc_hw_sequence *seq)
> +{
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +
> +	node_printf(node,
> +		    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
> +		    cpu_to_be32(((u32 *)hdr)[0]),
> +		    cpu_to_be32(((u32 *)hdr)[1]),
> +		    cpu_to_be32(((u32 *)hdr)[2]),
> +		    cpu_to_be32(((u32 *)hdr)[3]),
> +		    cpu_to_be32(((u32 *)hdr)[4]),
> +		    cpu_to_be32(((u32 *)hdr)[5]));
> +
> +	return -1;

Also, why return a value at all, and does -1 even make sense here?

> +}
> +
> +int
> +efc_process_node_pending(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	struct efc_hw_sequence *seq = NULL;
> +	u32 pend_frames_processed = 0;
> +	unsigned long flags = 0;
> +
> +	for (;;) {
> +		/* need to check for hold frames condition after each frame
> +		 * processed because any given frame could cause a transition
> +		 * to a state that holds frames
> +		 */
> +		if (node->hold_frames)
> +			break;
> +
> +		/* Get next frame/sequence */
> +		spin_lock_irqsave(&node->pend_frames_lock, flags);
> +			if (!list_empty(&node->pend_frames)) {

I don't think you need to indent the block.

> +				seq = list_first_entry(&node->pend_frames,
> +						       struct efc_hw_sequence,
> +						       list_entry);
> +				list_del(&seq->list_entry);
> +			}
> +			if (!seq) {
> +				pend_frames_processed =
> +						node->pend_frames_processed;
> +				node->pend_frames_processed = 0;
> +				spin_unlock_irqrestore(&node->pend_frames_lock,
> +						       flags);
> +				break;
> +			}
> +			node->pend_frames_processed++;
> +		spin_unlock_irqrestore(&node->pend_frames_lock, flags);

Why not move the spin_unlock_irqrestore() up, directly before the
'if (!seq)'? Or does node->pend_frames_processed = 0 need to be
protected? I assume the spin lock only protects the list.
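Something along these lines would keep a single unlock and drop the
extra indentation (untested; note that as far as I can see 'seq' also
needs to be reset each iteration, otherwise the last frame can be
dispatched a second time once the list is empty):

	for (;;) {
		/* a dispatched frame may transition us into a hold state */
		if (node->hold_frames)
			break;

		seq = NULL;
		spin_lock_irqsave(&node->pend_frames_lock, flags);
		if (!list_empty(&node->pend_frames)) {
			seq = list_first_entry(&node->pend_frames,
					       struct efc_hw_sequence,
					       list_entry);
			list_del(&seq->list_entry);
			node->pend_frames_processed++;
		} else {
			pend_frames_processed = node->pend_frames_processed;
			node->pend_frames_processed = 0;
		}
		spin_unlock_irqrestore(&node->pend_frames_lock, flags);

		if (!seq)
			break;

		/* now dispatch frame(s) to dispatch function */
		efc_node_dispatch_frame(node, seq);
	}

This keeps every access to pend_frames_processed under the lock, so it
stays safe either way.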

> +
> +		/* now dispatch frame(s) to dispatch function */
> +		efc_node_dispatch_frame(node, seq);
> +	}
> +
> +	if (pend_frames_processed != 0)
> +		efc_log_debug(efc, "%u node frames held and processed\n",
> +			      pend_frames_processed);
> +
> +	return 0;
> +}
> +
> +void
> +efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	/* Notify the node to resume */
> +	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	/* Notify the node to resume */
> +	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +}
> +
> +void
> +efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +}
> +
> +void efc_node_post_els_resp(struct efc_node *node,
> +			    enum efc_hw_node_els_event evt, void *arg)
> +{
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc *efc = node->efc;
> +	unsigned long flags = 0;
> +
> +	switch (evt) {
> +	case EFC_HW_SRRS_ELS_REQ_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_RJT:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
> +		break;
> +	case EFC_HW_ELS_REQ_ABORTED:
> +		sm_event = EFC_EVT_ELS_REQ_ABORTED;
> +		break;
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", evt);
> +		return;
> +	}
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, sm_event, arg);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void efc_node_post_shutdown(struct efc_node *node,
> +			    u32 evt, void *arg)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
> new file mode 100644
> index 000000000000..a8e7b7a7fe13
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.h
> @@ -0,0 +1,188 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFC_NODE_H__)
> +#define __EFC_NODE_H__
> +#include "scsi/fc/fc_ns.h"
> +
> +#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	(1 << 0)
> +#define EFC_NODEDB_PAUSE_NAMESERVER	(1 << 1)
> +#define EFC_NODEDB_PAUSE_NEW_NODES	(1 << 2)
> +
> +#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
> +
> +#define scsi_io_printf(io, fmt, ...) \
> +	efc_log_debug(io->efc, "[%s] [%04x][i:%04x t:%04x h:%04x]" fmt, \
> +	io->node->display_name, io->instance_index, io->init_task_tag, \
> +	io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
> +
> +static inline void
> +efc_node_evt_set(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		 const char *handler)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	if (evt == EFC_EVT_ENTER) {
> +		strncpy(node->current_state_name, handler,
> +			sizeof(node->current_state_name));
> +	} else if (evt == EFC_EVT_EXIT) {
> +		strncpy(node->prev_state_name, node->current_state_name,
> +			sizeof(node->prev_state_name));
> +		strncpy(node->current_state_name, "invalid",
> +			sizeof(node->current_state_name));
> +	}
> +	node->prev_evt = node->current_evt;
> +	node->current_evt = evt;
> +}
> +
> +/**
> + * hold frames in pending frame list
> + *
> + * Unsolicited receive frames are held on the node pending frame list,
> + * rather than being processed.
> + */
> +
> +static inline void
> +efc_node_hold_frames(struct efc_node *node)
> +{
> +	efc_assert(node);
> +	node->hold_frames = true;
> +}
> +
> +/**
> + * accept frames
> + *
> + * Unsolicited receive frames processed rather than being held on the node
> + * pending frame list.
> + */
> +
> +static inline void
> +efc_node_accept_frames(struct efc_node *node)
> +{
> +	efc_assert(node);
> +	node->hold_frames = false;
> +}
> +
> +extern int
> +efc_node_create_pool(struct efc *efc, u32 node_count);
> +extern void
> +efc_node_free_pool(struct efc *efc);
> +extern struct efc_node *
> +efc_node_get_instance(struct efc *efc, u32 instance);
> +
> +/* Node initiator/target enable defines */

I wouldn't mind a comment on the meaning of the encoding :)
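Maybe something like this, if I read efc_node_get_enable() below
correctly (untested):

	/*
	 * EFC_NODE_ENABLE_<local>_TO_<remote>:
	 *   <local>  = what the sport has enabled (I = initiator, T = target,
	 *              IT = both, x = neither), bits 3:2
	 *   <remote> = what the remote node supports, bits 1:0
	 * i.e. bit 3 = sport->enable_ini, bit 2 = sport->enable_tgt,
	 *      bit 1 = node->init, bit 0 = node->targ.
	 */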

> +enum efc_node_enable {
> +	EFC_NODE_ENABLE_x_TO_x,
> +	EFC_NODE_ENABLE_x_TO_T,
> +	EFC_NODE_ENABLE_x_TO_I,
> +	EFC_NODE_ENABLE_x_TO_IT,
> +	EFC_NODE_ENABLE_T_TO_x,
> +	EFC_NODE_ENABLE_T_TO_T,
> +	EFC_NODE_ENABLE_T_TO_I,
> +	EFC_NODE_ENABLE_T_TO_IT,
> +	EFC_NODE_ENABLE_I_TO_x,
> +	EFC_NODE_ENABLE_I_TO_T,
> +	EFC_NODE_ENABLE_I_TO_I,
> +	EFC_NODE_ENABLE_I_TO_IT,
> +	EFC_NODE_ENABLE_IT_TO_x,
> +	EFC_NODE_ENABLE_IT_TO_T,
> +	EFC_NODE_ENABLE_IT_TO_I,
> +	EFC_NODE_ENABLE_IT_TO_IT,
> +};
> +
> +static inline enum efc_node_enable
> +efc_node_get_enable(struct efc_node *node)
> +{
> +	u32 retval = 0;
> +
> +	if (node->sport->enable_ini)
> +		retval |= (1U << 3);
> +	if (node->sport->enable_tgt)
> +		retval |= (1U << 2);
> +	if (node->init)
> +		retval |= (1U << 1);
> +	if (node->targ)
> +		retval |= (1U << 0);
> +	return (enum efc_node_enable)retval;
> +}
> +
> +extern int
> +efc_node_check_els_req(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg,
> +		       u8 cmd, void *(*efc_node_common_func)(const char *,
> +		       struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		       const char *funcname);
> +extern int
> +efc_node_check_ns_req(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg,
> +		  u16 cmd, void *(*efc_node_common_func)(const char *,
> +		  struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		  const char *funcname);
> +extern int
> +efc_node_attach(struct efc_node *node);
> +extern struct efc_node *
> +efc_node_alloc(struct efc_sli_port *sport, u32 port_id,
> +		bool init, bool targ);
> +extern int
> +efc_node_free(struct efc_node *efc);
> +extern void
> +efc_node_force_free(struct efc_node *efc);
> +extern void
> +efc_node_update_display_name(struct efc_node *node);
> +void efc_node_post_event(struct efc_node *node, enum efc_sm_event evt,
> +			 void *arg);
> +
> +extern void *
> +__efc_node_shutdown(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void
> +efc_node_save_sparms(struct efc_node *node, void *payload);
> +extern void
> +efc_node_transition(struct efc_node *node,
> +		    void *(*state)(struct efc_sm_ctx *,
> +		    enum efc_sm_event, void *), void *data);
> +extern void *
> +__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +
> +extern void
> +efc_node_initiate_cleanup(struct efc_node *node);
> +
> +extern void
> +efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name);
> +extern uint64_t
> +efc_node_get_wwpn(struct efc_node *node);
> +
> +extern void
> +efc_node_pause(struct efc_node *node,
> +	       void *(*state)(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg));
> +extern int
> +efc_node_resume(struct efc_node *node);
> +extern void *
> +__efc_node_paused(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern int
> +efc_node_active_ios_empty(struct efc_node *node);
> +extern void
> +efc_node_send_ls_io_cleanup(struct efc_node *node);
> +
> +extern int
> +efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
> +
> +extern int
> +efc_process_node_pending(struct efc_node *domain);
> +
> +#endif /* __EFC_NODE_H__ */
> -- 
> 2.13.7
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 26/32] elx: efct: link statistics and SFP data
  2019-12-20 22:37 ` [PATCH v2 26/32] elx: efct: link statistics and SFP data James Smart
@ 2020-01-09 10:12   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 10:12 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to retrieve link stats and SFP transceiver data.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 468 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |  39 ++++
>  2 files changed, 507 insertions(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 27/32] elx: efct: xport and hardware teardown routines
  2019-12-20 22:37 ` [PATCH v2 27/32] elx: efct: xport and hardware teardown routines James Smart
@ 2020-01-09 10:14   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 10:14 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to detach xport and hardware objects.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c    | 437 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h    |  31 +++
>  drivers/scsi/elx/efct/efct_xport.c | 389 +++++++++++++++++++++++++++++++++
>  3 files changed, 857 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 33eefda7ba51..fb33317caa0d 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -4483,3 +4483,440 @@ efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
>  
>  	return rc;
>  }
> +
> +static int
> +efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
> +			void  *arg)
> +{
> +	kfree(mqe);
> +	return 0;
> +}
> +
> +/* Control a port (initialize, shutdown, or set link configuration) */
> +enum efct_hw_rtn
> +efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
> +		     uintptr_t value,
> +		void (*cb)(int status, uintptr_t value, void *arg),
> +		void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +
> +	switch (ctrl) {
> +	case EFCT_HW_PORT_INIT:
> +	{
> +		u8	*init_link;
> +		u32 speed = 0;
> +		u8 reset_alpa = 0;
> +
> +		u8	*cfg_link;
> +
> +		cfg_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!cfg_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		if (!sli_cmd_config_link(&hw->sli, cfg_link,
> +					SLI4_BMBX_SIZE))
> +			rc = efct_hw_command(hw, cfg_link,
> +					     EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control,
> +					     NULL);
> +
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(cfg_link);
> +			efc_log_err(hw->os, "CONFIG_LINK failed\n");
> +			break;
> +		}
> +		speed = hw->config.speed;
> +		reset_alpa = (u8)(value & 0xff);
> +
> +		/* Allocate a new buffer for the init_link command */
> +		init_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!init_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		rc = EFCT_HW_RTN_ERROR;
> +		if (!sli_cmd_init_link(&hw->sli, init_link, SLI4_BMBX_SIZE,
> +				      speed, reset_alpa))
> +			rc = efct_hw_command(hw, init_link, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control, NULL);
> +		/* Free buffer on error, since no callback is coming */
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(init_link);
> +			efc_log_err(hw->os, "INIT_LINK failed\n");
> +		}
> +		break;
> +	}
> +	case EFCT_HW_PORT_SHUTDOWN:
> +	{
> +		u8	*down_link;
> +
> +		down_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!down_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		if (!sli_cmd_down_link(&hw->sli, down_link, SLI4_BMBX_SIZE))
> +			rc = efct_hw_command(hw, down_link, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control, NULL);
> +		/* Free buffer on error, since no callback is coming */
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(down_link);
> +			efc_log_err(hw->os, "DOWN_LINK failed\n");
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_test(hw->os, "unhandled control %#x\n", ctrl);
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_teardown(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +	u32	iters = 10;
> +	u32	max_rpi;
> +	u32 destroy_queues;
> +	u32 free_memory;
> +	struct efc_dma *dma;
> +	struct efct *efct = hw->os;
> +
> +	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
> +	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
> +
> +	/* shutdown target wqe timer */
> +	shutdown_target_wqe_timer(hw);
> +
> +	/* Cancel watchdog timer if enabled */
> +	if (hw->watchdog_timeout) {
> +		hw->watchdog_timeout = 0;
> +		efct_hw_config_watchdog_timer(hw);
> +	}
> +
> +	/* Cancel Sliport Healthcheck */
> +	if (hw->sliport_healthcheck) {
> +		hw->sliport_healthcheck = 0;
> +		efct_hw_config_sli_port_health_check(hw, 0, 0);
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
> +		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
> +
> +		efct_hw_flush(hw);
> +
> +		/*
> +		 * If there are outstanding commands, wait for them to complete
> +		 */
> +		while (!list_empty(&hw->cmd_head) && iters) {
> +			mdelay(10);
> +			efct_hw_flush(hw);
> +			iters--;
> +		}
> +
> +		if (list_empty(&hw->cmd_head))
> +			efc_log_debug(hw->os,
> +				       "All commands completed on MQ queue\n");
> +		else
> +			efc_log_debug(hw->os,
> +				       "Some cmds still pending on MQ queue\n");
> +
> +		/* Cancel any remaining commands */
> +		efct_hw_command_cancel(hw);
> +	} else {
> +		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
> +	}
> +
> +	max_rpi = hw->sli.qinfo.max_qcount[SLI_RSRC_RPI];
> +	if (hw->rpi_ref) {
> +		for (i = 0; i < max_rpi; i++) {
> +			u32 count;
> +
> +			count = atomic_read(&hw->rpi_ref[i].rpi_count);
> +			if (count)
> +				efc_log_debug(hw->os,
> +					       "non-zero ref [%d]=%d\n",
> +					       i, count);
> +		}
> +		kfree(hw->rpi_ref);
> +		hw->rpi_ref = NULL;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  hw->rnode_mem.size, hw->rnode_mem.virt,
> +			  hw->rnode_mem.phys);
> +	memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
> +
> +	if (hw->io) {
> +		for (i = 0; i < hw->config.n_io; i++) {
> +			if (hw->io[i] && hw->io[i]->sgl &&
> +			    hw->io[i]->sgl->virt) {
> +				dma_free_coherent(&efct->pcidev->dev,
> +						  hw->io[i]->sgl->size,
> +						  hw->io[i]->sgl->virt,
> +						  hw->io[i]->sgl->phys);
> +				memset(&hw->io[i]->sgl, 0,
> +				       sizeof(struct efc_dma));
> +			}
> +			kfree(hw->io[i]);
> +			hw->io[i] = NULL;
> +		}
> +		kfree(hw->io);
> +		hw->io = NULL;
> +		kfree(hw->wqe_buffs);
> +		hw->wqe_buffs = NULL;
> +	}
> +
> +	dma = &hw->xfer_rdy;
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  dma->size, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +
> +	dma = &hw->dump_sges;
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  dma->size, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +
> +	dma = &hw->loop_map;
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  dma->size, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +
> +	for (i = 0; i < hw->wq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->rq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->mq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->cq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->eq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
> +			       free_memory);
> +
> +	efct_hw_qtop_free(hw->qtop);
> +
> +	/* Free rq buffers */
> +	efct_hw_rx_free(hw);
> +
> +	efct_hw_queue_teardown(hw);
> +
> +	if (sli_teardown(&hw->sli))
> +		efc_log_err(hw->os, "SLI teardown failed\n");
> +
> +	/* record the fact that the queues are non-functional */
> +	hw->state = EFCT_HW_STATE_UNINITIALIZED;
> +
> +	/* free sequence free pool */
> +	efct_array_free(hw->seq_pool);
> +	hw->seq_pool = NULL;
> +
> +	/* free hw_wq_callback pool */
> +	efct_pool_free(hw->wq_reqtag_pool);
> +
> +	/* Mark HW setup as not having been called */
> +	hw->hw_setup_called = false;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_sli_reset(struct efct_hw *hw, enum efct_hw_reset reset,
> +		  enum efct_hw_state prev_state)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	switch (reset) {
> +	case EFCT_HW_RESET_FUNCTION:
> +		efc_log_debug(hw->os, "issuing function level reset\n");
> +		if (sli_reset(&hw->sli)) {
> +			efc_log_err(hw->os, "sli_reset failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_RESET_FIRMWARE:
> +		efc_log_debug(hw->os, "issuing firmware reset\n");
> +		if (sli_fw_reset(&hw->sli)) {
> +			efc_log_err(hw->os, "sli_soft_reset failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		/*
> +		 * Because the FW reset leaves the FW in a non-running state,
> +		 * follow that with a regular reset.
> +		 */
> +		efc_log_debug(hw->os, "issuing function level reset\n");
> +		if (sli_reset(&hw->sli)) {
> +			efc_log_err(hw->os, "sli_reset failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	default:
> +		efc_log_err(hw->os,
> +			     "unknown reset type - no reset performed\n");
> +		hw->state = prev_state;
> +		rc = EFCT_HW_RTN_INVALID_ARG;
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset)
> +{
> +	u32	i;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32	iters;
> +	enum efct_hw_state prev_state = hw->state;
> +	unsigned long flags = 0;
> +	struct efct_hw_io *temp;
> +	u32 destroy_queues;
> +	u32 free_memory;
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE)
> +		efc_log_test(hw->os,
> +			      "HW state %d is not active\n", hw->state);
> +
> +	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
> +	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
> +	hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
> +
> +	/*
> +	 * If the prev_state is already reset/teardown in progress,
> +	 * don't continue further
> +	 */
> +	if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
> +	    prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
> +		return efct_hw_sli_reset(hw, reset, prev_state);
> +
> +	/* shutdown target wqe timer */
> +	shutdown_target_wqe_timer(hw);
> +
> +	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
> +		efct_hw_flush(hw);
> +
> +		/*
> +		 * If an mailbox command requiring a DMA is outstanding
> +		 * (SFP/DDM), then the FW will UE when the reset is issued.
> +		 * So attempt to complete all mailbox commands.
> +		 */
> +		iters = 10;
> +		while (!list_empty(&hw->cmd_head) && iters) {
> +			mdelay(10);
> +			efct_hw_flush(hw);
> +			iters--;
> +		}
> +
> +		if (list_empty(&hw->cmd_head))
> +			efc_log_debug(hw->os,
> +				       "All commands completed on MQ queue\n");
> +		else
> +			efc_log_debug(hw->os,
> +				       "Some commands still pending on MQ queue\n");
> +	}
> +
> +	/* Reset the chip */
> +	rc = efct_hw_sli_reset(hw, reset, prev_state);
> +	if (rc == EFCT_HW_RTN_INVALID_ARG)
> +		return EFCT_HW_RTN_ERROR;
> +
> +	/* Not safe to walk command/io lists unless they've been initialized */
> +	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
> +		efct_hw_command_cancel(hw);
> +
> +		/* Try to clean up the io_inuse list */
> +		efct_hw_io_cancel(hw);
> +
> +		efct_hw_link_event_init(hw);
> +
> +		spin_lock_irqsave(&hw->io_lock, flags);
> +			/*
> +			 * The io lists should be empty, but remove any that
> +			 * didn't get cleaned up.
> +			 */
> +			while (!list_empty(&hw->io_timed_wqe)) {
> +				temp = list_first_entry(&hw->io_timed_wqe,
> +							struct efct_hw_io,
> +							list_entry);
> +				list_del(&temp->wqe_link);
> +			}
> +
> +			while (!list_empty(&hw->io_free)) {
> +				temp = list_first_entry(&hw->io_free,
> +							struct efct_hw_io,
> +							list_entry);
> +				list_del(&temp->list_entry);
> +			}
> +
> +			while (!list_empty(&hw->io_wait_free)) {
> +				temp = list_first_entry(&hw->io_wait_free,
> +							struct efct_hw_io,
> +							list_entry);
> +				list_del(&temp->list_entry);
> +			}
> +		spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +		for (i = 0; i < hw->wq_count; i++)
> +			sli_queue_free(&hw->sli, &hw->wq[i],
> +				       destroy_queues, free_memory);
> +
> +		for (i = 0; i < hw->rq_count; i++)
> +			sli_queue_free(&hw->sli, &hw->rq[i],
> +				       destroy_queues, free_memory);
> +
> +		for (i = 0; i < hw->hw_rq_count; i++) {
> +			struct hw_rq *rq = hw->hw_rq[i];
> +
> +			if (rq->rq_tracker) {
> +				u32 j;
> +
> +				for (j = 0; j < rq->entry_count; j++)
> +					rq->rq_tracker[j] = NULL;
> +			}
> +		}
> +
> +		for (i = 0; i < hw->mq_count; i++)
> +			sli_queue_free(&hw->sli, &hw->mq[i],
> +				       destroy_queues, free_memory);
> +
> +		for (i = 0; i < hw->cq_count; i++)
> +			sli_queue_free(&hw->sli, &hw->cq[i],
> +				       destroy_queues, free_memory);
> +
> +		for (i = 0; i < hw->eq_count; i++)
> +			sli_queue_free(&hw->sli, &hw->eq[i],
> +				       destroy_queues, free_memory);
> +
> +		/* Free rq buffers */
> +		efct_hw_rx_free(hw);
> +
> +		/* Teardown the HW queue topology */
> +		efct_hw_queue_teardown(hw);
> +
> +		/*
> +		 * Reset the request tag pool, the HW IO request tags
> +		 * are reassigned in efct_hw_setup_io()
> +		 */
> +		efct_hw_reqtag_reset(hw);
> +	} else {
> +		/* Free rq buffers */
> +		efct_hw_rx_free(hw);
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +efct_hw_get_num_eq(struct efct_hw *hw)
> +{
> +	return hw->eq_count;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 278f241e8705..862504b96a23 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -1009,5 +1009,36 @@ efct_hw_get_host_stats(struct efct_hw *hw,
>  			void *arg),
>  		void *arg);
>  
> +struct hw_eq *efct_hw_new_eq(struct efct_hw *hw, u32 entry_count);
> +struct hw_cq *efct_hw_new_cq(struct hw_eq *eq, u32 entry_count);
> +extern u32
> +efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
> +		   u32 num_cqs, u32 entry_count);
> +struct hw_mq *efct_hw_new_mq(struct hw_cq *cq, u32 entry_count);
> +extern struct hw_wq
> +*efct_hw_new_wq(struct hw_cq *cq, u32 entry_count,
> +		u32 class, u32 ulp);
> +extern struct hw_rq
> +*efct_hw_new_rq(struct hw_cq *cq, u32 entry_count, u32 ulp);
> +extern u32
> +efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
> +		   u32 num_rq_pairs, u32 entry_count);
> +void efct_hw_del_eq(struct hw_eq *eq);
> +void efct_hw_del_cq(struct hw_cq *cq);
> +void efct_hw_del_mq(struct hw_mq *mq);
> +void efct_hw_del_wq(struct hw_wq *wq);
> +void efct_hw_del_rq(struct hw_rq *rq);
> +void efct_hw_queue_dump(struct efct_hw *hw);
> +void efct_hw_queue_teardown(struct efct_hw *hw);
> +enum efct_hw_rtn efct_hw_teardown(struct efct_hw *hw);
> +enum efct_hw_rtn
> +efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset);
> +int efct_hw_get_num_eq(struct efct_hw *hw);
> +
> +extern enum efct_hw_rtn
> +efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
> +		     uintptr_t value,
> +		void (*cb)(int status, uintptr_t value, void *arg),
> +		void *arg);
>  
>  #endif /* __EFCT_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
> index e6d6f2000168..6d8e0cefa903 100644
> --- a/drivers/scsi/elx/efct/efct_xport.c
> +++ b/drivers/scsi/elx/efct/efct_xport.c
> @@ -146,6 +146,80 @@ efct_xport_attach(struct efct_xport *xport)
>  }
>  
>  static void
> +efct_xport_link_stats_cb(int status, u32 num_counters,
> +			 struct efct_hw_link_stat_counts *counters, void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.link_stats.link_failure_error_count =
> +		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
> +	result->stats.link_stats.loss_of_sync_error_count =
> +		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
> +	result->stats.link_stats.primitive_sequence_error_count =
> +		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
> +	result->stats.link_stats.invalid_transmission_word_error_count =
> +		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
> +	result->stats.link_stats.crc_error_count =
> +		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
> +
> +	complete(&result->stats.done);
> +}
> +
> +static void
> +efct_xport_host_stats_cb(int status, u32 num_counters,
> +			 struct efct_hw_host_stat_counts *counters, void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.host_stats.transmit_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.receive_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.transmit_frame_count =
> +		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
> +	result->stats.host_stats.receive_frame_count =
> +		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
> +
> +	complete(&result->stats.done);
> +}
> +
> +static void
> +efct_xport_async_link_stats_cb(int status, u32 num_counters,
> +			       struct efct_hw_link_stat_counts *counters,
> +			       void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.link_stats.link_failure_error_count =
> +		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
> +	result->stats.link_stats.loss_of_sync_error_count =
> +		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
> +	result->stats.link_stats.primitive_sequence_error_count =
> +		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
> +	result->stats.link_stats.invalid_transmission_word_error_count =
> +		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
> +	result->stats.link_stats.crc_error_count =
> +		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
> +}
> +
> +static void
> +efct_xport_async_host_stats_cb(int status, u32 num_counters,
> +			       struct efct_hw_host_stat_counts *counters,
> +			       void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.host_stats.transmit_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.receive_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.transmit_frame_count =
> +		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
> +	result->stats.host_stats.receive_frame_count =
> +		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
> +}
> +
> +static void
>  efct_xport_config_stats_timer(struct efct *efct);
>  
>  static void
> @@ -585,3 +659,318 @@ efct_scsi_release_fc_transport(void)
>  
>  	return 0;
>  }
> +
> +int
> +efct_xport_detach(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +
> +	/* free resources associated with target-server and initiator-client */
> +	efct_scsi_tgt_del_device(efct);
> +
> +	efct_scsi_del_device(efct);
> +
> +	/*Shutdown FC Statistics timer*/
> +	del_timer(&efct->xport->stats_timer);
> +
> +	efct_hw_teardown(&efct->hw);
> +
> +	efct_xport_delete_debugfs(efct);
> +
> +	return 0;
> +}
> +
> +static void
> +efct_xport_domain_free_cb(struct efc *efc, void *arg)
> +{
> +	struct completion *done = arg;
> +
> +	complete(done);
> +}
> +
> +static int
> +efct_xport_post_node_event_cb(struct efct_hw *hw, int status,
> +			      u8 *mqe, void *arg)
> +{
> +	struct efct_xport_post_node_event *payload = arg;
> +
> +	if (payload) {
> +		efc_node_post_shutdown(payload->node, payload->evt,
> +				       payload->context);
> +		complete(&payload->done);
> +		if (atomic_sub_and_test(1, &payload->refcnt))
> +			kfree(payload);
> +	}
> +	return 0;
> +}
> +
> +static void
> +efct_xport_force_free(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +	struct efc *efc = efct->efcport;
> +
> +	efc_log_debug(efct, "reset required, do force shutdown\n");
> +
> +	if (!efc->domain) {
> +		efc_log_err(efct, "Domain is already freed\n");
> +		return;
> +	}
> +
> +	efc_domain_force_free(efc->domain);
> +}
> +
> +int
> +efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...)
> +{
> +	u32 rc = 0;
> +	struct efct *efct = NULL;
> +	va_list argp;
> +
> +	efct = xport->efct;
> +
> +	switch (cmd) {
> +	case EFCT_XPORT_PORT_ONLINE: {
> +		/* Bring the port on-line */
> +		rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
> +					  NULL, NULL);
> +		if (rc)
> +			efc_log_err(efct,
> +				     "%s: Can't init port\n", efct->desc);
> +		else
> +			xport->configured_link_state = cmd;
> +		break;
> +	}
> +	case EFCT_XPORT_PORT_OFFLINE: {
> +		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
> +					 NULL, NULL))
> +			efc_log_err(efct, "port shutdown failed\n");
> +		else
> +			xport->configured_link_state = cmd;
> +		break;
> +	}
> +
> +	case EFCT_XPORT_SHUTDOWN: {
> +		struct completion done;
> +		u32 reset_required;
> +		unsigned long timeout;
> +
> +		/* if a PHYSDEV reset was performed (e.g. hw dump), will affect
> +		 * all PCI functions; orderly shutdown won't work,
> +		 * just force free
> +		 */
> +		if (efct_hw_get(&efct->hw, EFCT_HW_RESET_REQUIRED,
> +				&reset_required) != EFCT_HW_RTN_SUCCESS)
> +			reset_required = 0;
> +
> +		if (reset_required) {
> +			efc_log_debug(efct,
> +				       "reset required, do force shutdown\n");
> +			efct_xport_force_free(xport);
> +			break;
> +		}
> +		init_completion(&done);
> +
> +		efc_register_domain_free_cb(efct->efcport,
> +					efct_xport_domain_free_cb, &done);
> +
> +		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
> +					 NULL, NULL)) {
> +			efc_log_debug(efct,
> +				       "port shutdown failed, do force shutdown\n");
> +			efct_xport_force_free(xport);
> +		} else {
> +			efc_log_debug(efct,
> +				       "Waiting %d seconds for domain shutdown.\n",
> +			(EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC / 1000000));
> +
> +			timeout = usecs_to_jiffies(
> +					EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC);
> +			if (!wait_for_completion_timeout(&done, timeout)) {
> +				efc_log_debug(efct,
> +					       "Domain shutdown timed out!!\n");
> +				efct_xport_force_free(xport);
> +			}
> +		}
> +
> +		efc_register_domain_free_cb(efct->efcport, NULL, NULL);
> +
> +		/* Free up any saved virtual ports */
> +		efc_vport_del_all(efct->efcport);
> +		break;
> +	}
> +
> +	/*
> +	 * POST_NODE_EVENT:  post an event to a node object
> +	 *
> +	 * This transport function is used to post an event to a node object.
> +	 * It does this by submitting a NOP mailbox command to defer execution
> +	 * to the interrupt context (thereby enforcing the serialized execution
> +	 * of event posting to the node state machine instances)
> +	 */
> +	case EFCT_XPORT_POST_NODE_EVENT: {
> +		struct efc_node *node;
> +		u32	evt;
> +		void *context;
> +		struct efct_xport_post_node_event *payload = NULL;
> +		struct efct *efct;
> +		struct efct_hw *hw;
> +
> +		/* Retrieve arguments */
> +		va_start(argp, cmd);
> +		node = va_arg(argp, struct efc_node *);
> +		evt = va_arg(argp, u32);
> +		context = va_arg(argp, void *);
> +		va_end(argp);
> +
> +		payload = kmalloc(sizeof(*payload), GFP_KERNEL);
> +		if (!payload)
> +			return -1;
> +
> +		memset(payload, 0, sizeof(*payload));
> +
> +		efct = node->efc->base;
> +		hw = &efct->hw;
> +
> +		/* if node's state machine is disabled,
> +		 * don't bother continuing
> +		 */
> +		if (!node->sm.current_state) {
> +			efc_log_test(efct, "node %p state machine disabled\n",
> +				      node);
> +			kfree(payload);
> +			rc = -1;
> +			break;
> +		}
> +
> +		/* Setup payload */
> +		init_completion(&payload->done);
> +
> +		/* one for self and one for callback */
> +		atomic_set(&payload->refcnt, 2);
> +		payload->node = node;
> +		payload->evt = evt;
> +		payload->context = context;
> +
> +		if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
> +				       payload)) {
> +			efc_log_test(efct, "efct_hw_async_call failed\n");
> +			kfree(payload);
> +			rc = -1;
> +			break;
> +		}
> +
> +		/* Wait for completion */
> +		if (wait_for_completion_interruptible(&payload->done)) {
> +			efc_log_test(efct,
> +				      "POST_NODE_EVENT: completion failed\n");
> +			rc = -1;
> +		}
> +		if (atomic_sub_and_test(1, &payload->refcnt))
> +			kfree(payload);
> +
> +		break;
> +	}
> +	/*
> +	 * Set wwnn for the port. This will be used instead of the default
> +	 * provided by FW.
> +	 */
> +	case EFCT_XPORT_WWNN_SET: {
> +		u64 wwnn;
> +
> +		/* Retrieve arguments */
> +		va_start(argp, cmd);
> +		wwnn = va_arg(argp, uint64_t);
> +		va_end(argp);
> +
> +		efc_log_debug(efct, " WWNN %016llx\n", wwnn);
> +		xport->req_wwnn = wwnn;
> +
> +		break;
> +	}
> +	/*
> +	 * Set wwpn for the port. This will be used instead of the default
> +	 * provided by FW.
> +	 */
> +	case EFCT_XPORT_WWPN_SET: {
> +		u64 wwpn;
> +
> +		/* Retrieve arguments */
> +		va_start(argp, cmd);
> +		wwpn = va_arg(argp, uint64_t);
> +		va_end(argp);
> +
> +		efc_log_debug(efct, " WWPN %016llx\n", wwpn);
> +		xport->req_wwpn = wwpn;
> +
> +		break;
> +	}
> +
> +	default:
> +		break;
> +	}
> +	return rc;
> +}
> +
> +void
> +efct_xport_free(struct efct_xport *xport)
> +{
> +	if (xport) {
> +		efct_io_pool_free(xport->io_pool);
> +
> +		kfree(xport);
> +	}
> +}
> +
> +void
> +efct_release_fc_transport(struct scsi_transport_template *transport_template)
> +{
> +	if (transport_template)
> +		pr_err("releasing transport layer\n");
> +
> +	/* Releasing FC transport */
> +	fc_release_transport(transport_template);
> +}
> +
> +static void
> +efct_xport_remove_host(struct Scsi_Host *shost)
> +{
> +	/*
> +	 * Remove host from FC Transport layer
> +	 *
> +	 * 1. fc_remove_host()
> +	 * a. for each vport: queue vport_delete_work (fc_vport_sched_delete())
> +	 *	b. for each rport: queue rport_delete_work
> +	 *		(fc_rport_final_delete())
> +	 *	c. scsi_flush_work()
> +	 * 2. fc_rport_final_delete()
> +	 * a. fc_terminate_rport_io
> +	 *		i. call LLDD's terminate_rport_io()
> +	 *		ii. scsi_target_unblock()
> +	 *	b. fc_starget_delete()
> +	 *		i. fc_terminate_rport_io()
> +	 *			1. call LLDD's terminate_rport_io()
> +	 *			2. scsi_target_unblock()
> +	 *		ii. scsi_remove_target()
> +	 *      c. invoke LLDD devloss callback
> +	 *      d. transport_remove_device(&rport->dev)
> +	 *      e. device_del(&rport->dev)
> +	 *      f. transport_destroy_device(&rport->dev)
> +	 *      g. put_device(&shost->shost_gendev) (for fc_host->rport list)
> +	 *      h. put_device(&rport->dev)
> +	 */
> +	fc_remove_host(shost);
> +}
> +

That looks a bit strange: the comment cites several steps, but only the
first one is actually called here. Please explain.

> +int efct_scsi_del_device(struct efct *efct)
> +{
> +	if (efct->shost) {
> +		efc_log_debug(efct, "Unregistering with Transport Layer\n");
> +		efct_xport_remove_host(efct->shost);
> +		efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
> +		scsi_remove_host(efct->shost);
> +		scsi_host_put(efct->shost);
> +		efct->shost = NULL;
> +	}
> +	return 0;
> +}
> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 28/32] elx: efct: IO timeout handling routines
  2019-12-20 22:37 ` [PATCH v2 28/32] elx: efct: IO timeout handling routines James Smart
@ 2020-01-09 11:27   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 11:27 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Add support for a WQE timer to handle the wqe and IO timeouts.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 187 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 187 insertions(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 29/32] elx: efct: Firmware update, async link processing
  2019-12-20 22:37 ` [PATCH v2 29/32] elx: efct: Firmware update, async link processing James Smart
@ 2020-01-09 11:45   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 11:45 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Handling of async link event.
> Registrations for VFI, VPI and RPI.
> Add Firmware update helper routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 1633 +++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |   57 +-
>  2 files changed, 1689 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index c18bda1351cc..23d55d0d26c3 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -48,6 +48,12 @@ struct efct_hw_host_stat_cb_arg {
>  	void *arg;
>  };
>  
> +struct efct_hw_fw_wr_cb_arg {
> +	void (*cb)(int status, u32 bytes_written,
> +		   u32 change_status, void *arg);
> +	void *arg;
> +};
> +
>  /* HW global data */
>  struct efct_hw_global hw_global;
>  
> @@ -180,6 +186,175 @@ efct_hw_read_max_dump_size(struct efct_hw *hw)
>  	return EFCT_HW_RTN_SUCCESS;
>  }
>  
> +static int
> +__efct_read_topology_cb(struct efct_hw *hw, int status,
> +			u8 *mqe, void *arg)
> +{
> +	struct sli4_cmd_read_topology *read_topo =
> +				(struct sli4_cmd_read_topology *)mqe;
> +	u8 speed;
> +	struct efc_domain_record drec = {0};
> +	struct efct *efct = hw->os;
> +
> +	if (status || le16_to_cpu(read_topo->hdr.status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
> +			       status,
> +			       le16_to_cpu(read_topo->hdr.status));
> +		kfree(mqe);
> +		return -1;
> +	}
> +
> +	switch (le32_to_cpu(read_topo->dw2_attentype) &
> +		SLI4_READTOPO_ATTEN_TYPE) {
> +	case SLI4_READ_TOPOLOGY_LINK_UP:
> +		hw->link.status = SLI_LINK_STATUS_UP;
> +		break;
> +	case SLI4_READ_TOPOLOGY_LINK_DOWN:
> +		hw->link.status = SLI_LINK_STATUS_DOWN;
> +		break;
> +	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
> +		hw->link.status = SLI_LINK_STATUS_NO_ALPA;
> +		break;
> +	default:
> +		hw->link.status = SLI_LINK_STATUS_MAX;
> +		break;
> +	}
> +
> +	switch (read_topo->topology) {
> +	case SLI4_READ_TOPOLOGY_NPORT:
> +		hw->link.topology = SLI_LINK_TOPO_NPORT;
> +		break;
> +	case SLI4_READ_TOPOLOGY_FC_AL:
> +		hw->link.topology = SLI_LINK_TOPO_LOOP;
> +		if (hw->link.status == SLI_LINK_STATUS_UP)
> +			hw->link.loop_map = hw->loop_map.virt;
> +		hw->link.fc_id = read_topo->acquired_al_pa;
> +		break;
> +	default:
> +		hw->link.topology = SLI_LINK_TOPO_MAX;
> +		break;
> +	}
> +

Using SLI_LINK_TOPO_MAX as the fallback here is a bit odd; SLI_LINK_TOPO_UNKNOWN, maybe?
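Purely as an illustration (SLI_LINK_TOPO_UNKNOWN is my invention and not in
the posted patches; the enum tag is assumed), i.e. something like:

	enum sli4_link_topo {
		SLI_LINK_TOPO_NPORT = 1,
		SLI_LINK_TOPO_LOOP,
		SLI_LINK_TOPO_UNKNOWN,	/* fallback for unrecognized values */
		SLI_LINK_TOPO_MAX,	/* keep purely as a sizing sentinel */
	};

	/* ... and in __efct_read_topology_cb(): */
	default:
		hw->link.topology = SLI_LINK_TOPO_UNKNOWN;
		break;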

> +	hw->link.medium = SLI_LINK_MEDIUM_FC;
> +
> +	speed = (le32_to_cpu(read_topo->currlink_state) &
> +		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
> +	switch (speed) {
> +	case SLI4_READ_TOPOLOGY_SPEED_1G:
> +		hw->link.speed =  1 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_2G:
> +		hw->link.speed =  2 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_4G:
> +		hw->link.speed =  4 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_8G:
> +		hw->link.speed =  8 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_16G:
> +		hw->link.speed = 16 * 1000;
> +		hw->link.loop_map = NULL;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_32G:
> +		hw->link.speed = 32 * 1000;
> +		hw->link.loop_map = NULL;
> +		break;
> +	}
> +
> +	kfree(mqe);
> +
> +	drec.speed = hw->link.speed;
> +	drec.fc_id = hw->link.fc_id;
> +	drec.is_nport = true;
> +	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
> +
> +	return 0;
> +}
> +
> +/* Callback function for the SLI link events */
> +static int
> +efct_hw_cb_link(void *ctx, void *e)
> +{
> +	struct efct_hw	*hw = ctx;
> +	struct sli4_link_event *event = e;
> +	struct efc_domain	*d = NULL;
> +	int		rc = EFCT_HW_RTN_ERROR;
> +	struct efct	*efct = hw->os;
> +	struct efc_dma *dma;
> +
> +	efct_hw_link_event_init(hw);
> +
> +	switch (event->status) {
> +	case SLI_LINK_STATUS_UP:
> +
> +		hw->link = *event;
> +		efct->efcport->link_status = EFC_LINK_STATUS_UP;
> +
> +		if (event->topology == SLI_LINK_TOPO_NPORT) {
> +			struct efc_domain_record drec = {0};
> +
> +			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
> +				      event->speed);
> +			drec.speed = event->speed;
> +			drec.fc_id = event->fc_id;
> +			drec.is_nport = true;
> +			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
> +				      &drec);
> +		} else if (event->topology == SLI_LINK_TOPO_LOOP) {
> +			u8	*buf = NULL;
> +
> +			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
> +				      event->speed);
> +			dma = &hw->loop_map;
> +			dma->size = SLI4_MIN_LOOP_MAP_BYTES;
> +			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       dma->size, &dma->phys,
> +						       GFP_DMA);
> +			if (!dma->virt)
> +				efc_log_err(hw->os, "efct_dma_alloc_fail\n");
> +
> +			buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +			if (!buf)
> +				break;
> +
> +			if (!sli_cmd_read_topology(&hw->sli, buf,
> +						  SLI4_BMBX_SIZE,
> +						       &hw->loop_map)) {
> +				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +						     __efct_read_topology_cb,
> +						     NULL);
> +			}
> +

Hmm. What do you need the 'buf' for?
It's clearly not for DMA, so it looks like an intermediate buffer before
the actual values are set.
Any particular reason why you can't use 'loop_map' directly?
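For reference, my reading of the quoted hunk (a sketch of the flow only, the
authors should confirm): 'buf' carries the READ_TOPOLOGY mailbox command
itself, while 'loop_map' is the separate DMA payload that command points the
adapter at:

	/* bounce buffer for the mailbox command entry (MQE) */
	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);

	/* command is built into buf; &hw->loop_map is the DMA buffer the
	 * adapter writes the AL_PA position map into
	 */
	sli_cmd_read_topology(&hw->sli, buf, SLI4_BMBX_SIZE, &hw->loop_map);
	efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, __efct_read_topology_cb, NULL);

	/* buf is eventually freed as 'mqe' in __efct_read_topology_cb() */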

> +			if (rc != EFCT_HW_RTN_SUCCESS) {
> +				efc_log_test(hw->os, "READ_TOPOLOGY failed\n");
> +				kfree(buf);
> +			}
> +		} else {
> +			efc_log_info(hw->os, "%s(%#x), speed is %d\n",
> +				      "Link Up, unsupported topology ",
> +				     event->topology, event->speed);
> +		}
> +		break;
> +	case SLI_LINK_STATUS_DOWN:
> +		efc_log_info(hw->os, "Link down\n");
> +
> +		hw->link.status = event->status;
> +		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
> +
> +		d = hw->domain;
> +		if (d)
> +			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
> +		break;
> +	default:
> +		efc_log_test(hw->os, "unhandled link status %#x\n",
> +			      event->status);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
>  enum efct_hw_rtn
>  efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
>  {
> @@ -5107,3 +5282,1461 @@ efct_hw_async_call(struct efct_hw *hw,
>  	}
>  	return rc;
>  }
> +
> +static void
> +efct_hw_port_free_resources(struct efc_sli_port *sport, int evt, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Clear the sport attached flag */
> +	sport->attached = false;
> +
> +	/* Free the service parameters buffer */
> +	if (sport->dma.virt) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  sport->dma.size, sport->dma.virt,
> +				  sport->dma.phys);
> +		memset(&sport->dma, 0, sizeof(struct efc_dma));
> +	}
> +
> +	/* Free the command buffer */
> +	kfree(data);
> +
> +	/* Free the SLI resources */
> +	sli_resource_free(&hw->sli, SLI_RSRC_VPI, sport->indicator);
> +
> +	efc_lport_cb(efct->efcport, evt, sport);
> +}
> +
> +static int
> +efct_hw_port_get_mbox_status(struct efc_sli_port *sport,
> +			     u8 *mqe, int status)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct sli4_mbox_command_header *hdr =
> +			(struct sli4_mbox_command_header *)mqe;
> +	int rc = 0;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status vpi=%#x st=%x hdr=%x\n",
> +			       sport->indicator, status,
> +			       le16_to_cpu(hdr->status));
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_port_free_unreg_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int evt = EFC_HW_PORT_FREE_OK;
> +	int rc = 0;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		evt = EFC_HW_PORT_FREE_FAIL;
> +		rc = -1;
> +	}
> +
> +	efct_hw_port_free_resources(sport, evt, mqe);
> +	return rc;
> +}
> +
> +static void
> +efct_hw_port_free_unreg_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	/* Allocate memory and send unreg_vpi */
> +	if (!data) {
> +		data = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!data) {
> +			efct_hw_port_free_resources(sport,
> +						    EFC_HW_PORT_FREE_FAIL,
> +						    data);
> +			return;
> +		}
> +		memset(data, 0, SLI4_BMBX_SIZE);
> +	}
> +
> +	rc = sli_cmd_unreg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			       sport->indicator, SLI4_UNREG_TYPE_PORT);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_FREE_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_free_unreg_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_FREE_FAIL, data);
> +	}
> +}
> +
> +static void
> +efct_hw_port_send_evt(struct efc_sli_port *sport, int evt, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the mbox buffer */
> +	kfree(data);
> +
> +	/* Now inform the registered callbacks */
> +	efc_lport_cb(efct->efcport, evt, sport);
> +
> +	/* Set the sport attached flag */
> +	if (evt == EFC_HW_PORT_ATTACH_OK)
> +		sport->attached = true;
> +
> +	/* If there is a pending free request, then handle it now */
> +	if (sport->free_req_pending)
> +		efct_hw_port_free_unreg_vpi(sport, NULL);
> +}
> +
> +static int
> +efct_hw_port_alloc_init_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, mqe);
> +		return -1;
> +	}
> +
> +	efct_hw_port_send_evt(sport, EFC_HW_PORT_ALLOC_OK, mqe);
> +	return 0;
> +}
> +
> +static void
> +efct_hw_port_alloc_init_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	/* If there is a pending free request, then handle it now */
> +	if (sport->free_req_pending) {
> +		efct_hw_port_free_resources(sport, EFC_HW_PORT_FREE_OK, data);
> +		return;
> +	}
> +
> +	rc = sli_cmd_init_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			      sport->indicator, sport->domain->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_alloc_init_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +	}
> +}
> +
> +static int
> +efct_hw_port_alloc_read_sparm64_cb(struct efct_hw *hw,
> +				   int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	u8 *payload = NULL;
> +	struct efct *efct = hw->os;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, mqe);
> +		return -1;
> +	}
> +
> +	payload = sport->dma.virt;
> +
> +	memcpy(&sport->sli_wwpn,
> +	       payload + SLI4_READ_SPARM64_WWPN_OFFSET,
> +		sizeof(sport->sli_wwpn));
> +	memcpy(&sport->sli_wwnn,
> +	       payload + SLI4_READ_SPARM64_WWNN_OFFSET,
> +		sizeof(sport->sli_wwnn));
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  sport->dma.size, sport->dma.virt, sport->dma.phys);
> +	memset(&sport->dma, 0, sizeof(struct efc_dma));
> +	efct_hw_port_alloc_init_vpi(sport, mqe);
> +	return 0;
> +}
> +
> +static void
> +efct_hw_port_alloc_read_sparm64(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +	int rc;
> +
> +	/* Allocate memory for the service parameters */
> +	sport->dma.size = 112;
> +	sport->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					     sport->dma.size, &sport->dma.phys,
> +					     GFP_DMA);
> +	if (!sport->dma.virt) {
> +		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
> +				  &sport->dma, sport->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_alloc_read_sparm64_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +	}
> +}
> +
> +/*
> + * This function allocates a VPI object for the port and stores it in the
> + * indicator field of the port object.
> + */
> +enum efct_hw_rtn
> +efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
> +		   struct efc_domain *domain, u8 *wwpn)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*cmd = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32 index;
> +
> +	sport->indicator = U32_MAX;
> +	sport->hw = hw;
> +	sport->free_req_pending = false;
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (wwpn)
> +		memcpy(&sport->sli_wwpn, wwpn, sizeof(sport->sli_wwpn));
> +
> +	if (sli_resource_alloc(&hw->sli, SLI_RSRC_VPI,
> +			       &sport->indicator, &index)) {
> +		efc_log_err(hw->os, "VPI allocation failure\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (domain) {
> +		cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!cmd) {
> +			rc = EFCT_HW_RTN_NO_MEMORY;
> +			goto efct_hw_port_alloc_out;
> +		}
> +		memset(cmd, 0, SLI4_BMBX_SIZE);
> +
> +		/*
> +		 * If the WWPN is NULL, fetch the default
> +		 * WWPN and WWNN before initializing the VPI
> +		 */
> +		if (!wwpn)
> +			efct_hw_port_alloc_read_sparm64(sport, cmd);
> +		else
> +			efct_hw_port_alloc_init_vpi(sport, cmd);
> +	} else if (!wwpn) {
> +		/* This is the convention for the HW, not SLI */
> +		efc_log_test(hw->os, "need WWN for physical port\n");
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +	/* domain NULL and wwpn non-NULL */
> +	// no-op;
> +
> +efct_hw_port_alloc_out:
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		kfree(cmd);
> +
> +		sli_resource_free(&hw->sli, SLI_RSRC_VPI,
> +				  sport->indicator);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_port_attach_reg_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, mqe);
> +		return -1;
> +	}
> +
> +	efct_hw_port_send_evt(sport, EFC_HW_PORT_ATTACH_OK, mqe);
> +	return 0;
> +}
> +
> +static void
> +efct_hw_port_attach_reg_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	if (!sli_cmd_reg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			    sport->fc_id, sport->sli_wwpn,
> +			sport->indicator, sport->domain->indicator,
> +			false) == 0) {
> +		efc_log_err(hw->os, "REG_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_attach_reg_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, data);
> +	}
> +}
> +
> +/**
> + * This function registers a previously-allocated VPI with the
> + * device.
> + */
> +enum efct_hw_rtn
> +efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
> +		    u32 fc_id)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!sport) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p sport=%p\n", hw,
> +			sport);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	sport->fc_id = fc_id;
> +	efct_hw_port_attach_reg_vpi(sport, buf);
> +	return rc;
> +}

Huh? What kind of interface is _that_?
Passing in a preallocated buffer, but then doing nothing with it?
How very curious.
I would suggest reworking this, e.g. by moving the allocation into the
function itself ...
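If I read that suggestion as moving the mailbox buffer allocation down into
efct_hw_port_attach_reg_vpi(), it could look roughly like this (untested
sketch reusing the names from the patch; note the helper now returns a
status, which is a behavioral change - the posted code reports failures only
through the EFC_HW_PORT_ATTACH_FAIL event):

	static enum efct_hw_rtn
	efct_hw_port_attach_reg_vpi(struct efc_sli_port *sport)
	{
		struct efct_hw *hw = sport->hw;
		u8 *buf;
		int rc;

		/* mailbox buffer is allocated here, not by the caller */
		buf = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
		if (!buf)
			return EFCT_HW_RTN_NO_MEMORY;

		rc = sli_cmd_reg_vpi(&hw->sli, buf, SLI4_BMBX_SIZE,
				     sport->fc_id, sport->sli_wwpn,
				     sport->indicator, sport->domain->indicator,
				     false);
		if (!rc)
			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
					     efct_hw_port_attach_reg_vpi_cb,
					     sport);
		if (rc) {
			efc_log_err(hw->os, "REG_VPI failed\n");
			/* as today: frees buf and the VPI, signals the event */
			efct_hw_port_free_resources(sport,
						    EFC_HW_PORT_ATTACH_FAIL,
						    buf);
			return EFCT_HW_RTN_ERROR;
		}

		return EFCT_HW_RTN_SUCCESS;
	}

efct_hw_port_attach() would then shrink to the parameter/error-state checks,
setting sport->fc_id, and returning the result of this call.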

> +
> +/* Issue the UNREG_VPI command to free the assigned VPI context */
> +enum efct_hw_rtn
> +efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!sport) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p sport=%p\n", hw,
> +			sport);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (sport->attached)
> +		efct_hw_port_free_unreg_vpi(sport, NULL);
> +	else
> +		sport->free_req_pending = true;
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_domain_get_mbox_status(struct efc_domain *domain,
> +			       u8 *mqe, int status)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct sli4_mbox_command_header *hdr =
> +			(struct sli4_mbox_command_header *)mqe;
> +	int rc = 0;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status vfi=%#x st=%x hdr=%x\n",
> +			       domain->indicator, status,
> +			       le16_to_cpu(hdr->status));
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static void
> +efct_hw_domain_free_resources(struct efc_domain *domain,
> +			      int evt, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the service parameters buffer */
> +	if (domain->dma.virt) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  domain->dma.size, domain->dma.virt,
> +				  domain->dma.phys);
> +		memset(&domain->dma, 0, sizeof(struct efc_dma));
> +	}
> +
> +	/* Free the command buffer */
> +	kfree(data);
> +
> +	/* Free the SLI resources */
> +	sli_resource_free(&hw->sli, SLI_RSRC_VFI, domain->indicator);
> +
> +	efc_domain_cb(efct->efcport, evt, domain);
> +}
> +
> +static void
> +efct_hw_domain_send_sport_evt(struct efc_domain *domain,
> +			      int port_evt, int domain_evt, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the mbox buffer */
> +	kfree(data);
> +
> +	/* Send alloc/attach ok to the physical sport */
> +	efct_hw_port_send_evt(domain->sport, port_evt, NULL);
> +
> +	/* Now inform the registered callbacks */
> +	efc_domain_cb(efct->efcport, domain_evt, domain);
> +}
> +
> +static int
> +efct_hw_domain_alloc_read_sparm64_cb(struct efct_hw *hw,
> +				     int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
> +		return -1;
> +	}
> +
> +	hw->domain = domain;
> +	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ALLOC_OK,
> +				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
> +	return 0;
> +}
> +
> +static void
> +efct_hw_domain_alloc_read_sparm64(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
> +				  &domain->dma, SLI4_READ_SPARM64_VPI_DEFAULT);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_alloc_read_sparm64_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +	}
> +}
> +
> +static int
> +efct_hw_domain_alloc_init_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
> +		return -1;
> +	}
> +
> +	efct_hw_domain_alloc_read_sparm64(domain, mqe);
> +	return 0;
> +}
> +
> +static void
> +efct_hw_domain_alloc_init_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efc_sli_port *sport = domain->sport;
> +	int rc;
> +
> +	/*
> +	 * For FC, the HW alread registered an FCFI.
> +	 * Copy FCF information into the domain and jump to INIT_VFI.
> +	 */
> +	domain->fcf_indicator = hw->fcf_indicator;
> +	rc = sli_cmd_init_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			      domain->indicator, domain->fcf_indicator,
> +			sport->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VFI format failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_alloc_init_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VFI command failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +	}
> +}
> +
> +/**
> + * This function starts a series of commands needed to connect to the domain,
> + * including
> + *   - REG_FCFI
> + *   - INIT_VFI
> + *   - READ_SPARMS
> + */
> +enum efct_hw_rtn
> +efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
> +		     u32 fcf)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	u8 *cmd = NULL;
> +	u32 index;
> +
> +	if (!domain || !domain->sport) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p domain=%p sport=%p\n",
> +			    hw, domain, domain ? domain->sport : NULL);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(efct,
> +			     "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!cmd)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(cmd, 0, SLI4_BMBX_SIZE);
> +
> +	/* allocate memory for the service parameters */
> +	domain->dma.size = 112;
> +	domain->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					      domain->dma.size,
> +					      &domain->dma.phys, GFP_DMA);
> +	if (!domain->dma.virt) {
> +		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
> +		kfree(cmd);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	domain->hw = hw;
> +	domain->fcf = fcf;
> +	domain->fcf_indicator = U32_MAX;
> +	domain->indicator = U32_MAX;
> +
> +	if (sli_resource_alloc(&hw->sli,
> +			       SLI_RSRC_VFI, &domain->indicator,
> +				    &index)) {
> +		efc_log_err(hw->os, "VFI allocation failure\n");
> +
> +		kfree(cmd);
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  domain->dma.size, domain->dma.virt,
> +				  domain->dma.phys);
> +		memset(&domain->dma, 0, sizeof(struct efc_dma));
> +
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_domain_alloc_init_vfi(domain, cmd);
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static int
> +efct_hw_domain_attach_reg_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		hw->domain = NULL;
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
> +		return -1;
> +	}
> +
> +	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ATTACH_OK,
> +				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
> +	return 0;
> +}
> +
> +static void
> +efct_hw_domain_attach_reg_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	if (!sli_cmd_reg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			    domain->indicator, domain->fcf_indicator,
> +			domain->dma, domain->sport->indicator,
> +			domain->sport->sli_wwpn,
> +			domain->sport->fc_id) == 0) {
> +		efc_log_err(hw->os, "REG_VFI format failure\n");
> +		goto cleanup;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_attach_reg_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VFI command failure\n");
> +		goto cleanup;
> +	}
> +
> +	return;
> +
> +cleanup:
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain,
> +				      EFC_HW_DOMAIN_ATTACH_FAIL, data);
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_attach(struct efc *efc,
> +		      struct efc_domain *domain, u32 fc_id)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!domain) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p domain=%p\n",
> +			hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	domain->sport->fc_id = fc_id;
> +	efct_hw_domain_attach_reg_vfi(domain, buf);
> +	return rc;
> +}
> +

Same here.

> +static int
> +efct_hw_domain_free_unreg_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int evt = EFC_HW_DOMAIN_FREE_OK;
> +	int rc = 0;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		evt = EFC_HW_DOMAIN_FREE_FAIL;
> +		rc = -1;
> +	}
> +
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, evt, mqe);
> +	return rc;
> +}
> +
> +static void
> +efct_hw_domain_free_unreg_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	if (!data) {
> +		data = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!data)
> +			goto cleanup;
> +	}
> +
> +	rc = sli_cmd_unreg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			       domain->indicator, SLI4_UNREG_TYPE_DOMAIN);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VFI format failure\n");
> +		goto cleanup;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_free_unreg_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VFI command failure\n");
> +		goto cleanup;
> +	}
> +
> +	return;
> +
> +cleanup:
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_free(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!domain) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p domain=%p\n",
> +			hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_domain_free_unreg_vfi(domain, NULL);
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	if (!domain) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p domain=%p\n", hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  domain->dma.size, domain->dma.virt, domain->dma.phys);
> +	memset(&domain->dma, 0, sizeof(struct efc_dma));
> +	sli_resource_free(&hw->sli, SLI_RSRC_VFI,
> +			  domain->indicator);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
> +		   u32 fc_addr, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	/* Check for invalid indicator */
> +	if (rnode->indicator != U32_MAX) {
> +		efc_log_err(hw->os,
> +			     "RPI allocation failure addr=%#x rpi=%#x\n",
> +			    fc_addr, rnode->indicator);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* NULL SLI port indicates an unallocated remote node */
> +	rnode->sport = NULL;
> +
> +	if (sli_resource_alloc(&hw->sli, SLI_RSRC_RPI,
> +			       &rnode->indicator, &rnode->index)) {
> +		efc_log_err(hw->os, "RPI allocation failure addr=%#x\n",
> +			     fc_addr);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	rnode->fc_id = fc_addr;
> +	rnode->sport = sport;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static int
> +efct_hw_cb_node_attach(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efc_remote_node *rnode = arg;
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event	evt = 0;
> +
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +		atomic_sub_return(1, &hw->rpi_ref[rnode->index].rpi_count);
> +		rnode->attached = false;
> +		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 0);
> +		evt = EFC_HW_NODE_ATTACH_FAIL;
> +	} else {
> +		rnode->attached = true;
> +		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 1);
> +		evt = EFC_HW_NODE_ATTACH_OK;
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, rnode);
> +
> +	kfree(mqe);
> +
> +	return 0;
> +}
> +
> +/* Update a remote node object with the remote port's service parameters */
> +enum efct_hw_rtn
> +efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
> +		    struct efc_dma *sparms)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
> +	u8		*buf = NULL;
> +	u32	count = 0;
> +
> +	if (!hw || !rnode || !sparms) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p rnode=%p sparms=%p\n",
> +			    hw, rnode, sparms);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	/*
> +	 * If the attach count is non-zero, this RPI has already been reg'd.
> +	 * Otherwise, register the RPI
> +	 */
> +	if (rnode->index == U32_MAX) {
> +		efc_log_err(efct, "bad parameter rnode->index invalid\n");
> +		kfree(buf);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	count = atomic_add_return(1, &hw->rpi_ref[rnode->index].rpi_count);
> +	count--;
> +	if (count) {
> +		/*
> +		 * Can't attach multiple FC_ID's to a node unless High Login
> +		 * Mode is enabled
> +		 */
> +		if (!hw->sli.high_login_mode) {
> +			efc_log_test(hw->os,
> +				      "attach to attached node HLM=%d cnt=%d\n",
> +				     hw->sli.high_login_mode, count);
> +			rc = EFCT_HW_RTN_SUCCESS;
> +		} else {
> +			rnode->node_group = true;
> +			rnode->attached =
> +			 atomic_read(&hw->rpi_ref[rnode->index].rpi_attached);
> +			rc = rnode->attached  ? EFCT_HW_RTN_SUCCESS_SYNC :
> +							 EFCT_HW_RTN_SUCCESS;
> +		}
> +	} else {
> +		rnode->node_group = false;
> +
> +		if (!sli_cmd_reg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    rnode->fc_id,
> +				    rnode->indicator, rnode->sport->indicator,
> +				    sparms, 0, 0))
> +			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_node_attach, rnode);
> +	}
> +
> +	if (count || rc) {
> +		if (rc < EFCT_HW_RTN_SUCCESS) {
> +			atomic_sub_return(1,
> +					  &hw->rpi_ref[rnode->index].rpi_count);
> +			efc_log_err(hw->os,
> +				     "%s error\n", count ? "HLM" : "REG_RPI");
> +		}
> +		kfree(buf);
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_free_resources(struct efc *efc,
> +			    struct efc_remote_node *rnode)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!hw || !rnode) {
> +		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
> +			     hw, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (rnode->sport) {
> +		if (rnode->attached) {
> +			efc_log_err(hw->os, "Err: rnode is still attached\n");
> +			return EFCT_HW_RTN_ERROR;
> +		}
> +		if (rnode->indicator != U32_MAX) {
> +			if (sli_resource_free(&hw->sli, SLI_RSRC_RPI,
> +					      rnode->indicator)) {
> +				efc_log_err(hw->os,
> +					     "RPI free fail RPI %d addr=%#x\n",
> +					    rnode->indicator,
> +					    rnode->fc_id);
> +				rc = EFCT_HW_RTN_ERROR;
> +			} else {
> +				rnode->node_group = false;
> +				rnode->indicator = U32_MAX;
> +				rnode->index = U32_MAX;
> +				rnode->free_group = false;
> +			}
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_node_free(struct efct_hw *hw,
> +		     int status, u8 *mqe, void *arg)
> +{
> +	struct efc_remote_node *rnode = arg;
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
> +	int		rc = 0;
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +
> +		/*
> +		 * In certain cases, a non-zero MQE status is OK (all must be
> +		 * true):
> +		 *   - node is attached
> +		 *   - if High Login Mode is enabled, node is part of a node
> +		 * group
> +		 *   - status is 0x1400
> +		 */
> +		if (!rnode->attached ||
> +		    (hw->sli.high_login_mode && !rnode->node_group) ||
> +				(le16_to_cpu(hdr->status) !=
> +				 MBX_STATUS_RPI_NOT_REG))
> +			rc = -1;
> +	}
> +
> +	if (rc == 0) {
> +		rnode->node_group = false;
> +		rnode->attached = false;
> +
> +		if (atomic_read(&hw->rpi_ref[rnode->index].rpi_count) == 0)
> +			atomic_set(&hw->rpi_ref[rnode->index].rpi_attached,
> +				   0);
> +		 evt = EFC_HW_NODE_FREE_OK;
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, rnode);
> +
> +	kfree(mqe);
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS_SYNC;
> +	u32	index = U32_MAX;
> +
> +	if (!hw || !rnode) {
> +		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
> +			     hw, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	index = rnode->index;
> +
> +	if (rnode->sport) {
> +		u32	count = 0;
> +		u32	fc_id;
> +
> +		if (!rnode->attached)
> +			return EFCT_HW_RTN_SUCCESS_SYNC;
> +
> +		buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!buf)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(buf, 0, SLI4_BMBX_SIZE);
> +		count = atomic_sub_return(1, &hw->rpi_ref[index].rpi_count);
> +		count++;
> +		if (count <= 1) {
> +			/*
> +			 * There are no other references to this RPI so
> +			 * unregister it
> +			 */
> +			fc_id = U32_MAX;
> +			/* and free the resource */
> +			rnode->node_group = false;
> +			rnode->free_group = true;
> +		} else {
> +			if (!hw->sli.high_login_mode)
> +				efc_log_test(hw->os,
> +					      "Inval cnt with HLM off, cnt=%d\n",
> +					     count);
> +			fc_id = rnode->fc_id & 0x00ffffff;
> +		}
> +
> +		rc = EFCT_HW_RTN_ERROR;
> +
> +		if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				      rnode->indicator,
> +				      SLI_RSRC_RPI, fc_id))
> +			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_node_free, rnode);
> +
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			efc_log_err(hw->os, "UNREG_RPI failed\n");
> +			kfree(buf);
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_node_free_all(struct efct_hw *hw, int status, u8 *mqe,
> +			 void *arg)
> +{
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
> +	int		rc = 0;
> +	u32	i;
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +	} else {
> +		evt = EFC_HW_NODE_FREE_ALL_OK;
> +	}
> +
> +	if (evt == EFC_HW_NODE_FREE_ALL_OK) {
> +		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size;
> +		     i++)
> +			atomic_set(&hw->rpi_ref[i].rpi_count, 0);
> +
> +		if (sli_resource_reset(&hw->sli, SLI_RSRC_RPI)) {
> +			efc_log_test(hw->os, "RPI free all failure\n");
> +			rc = -1;
> +		}
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, NULL);
> +
> +	kfree(mqe);
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_free_all(struct efct_hw *hw)
> +{
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +
> +	if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE, 0xffff,
> +			      SLI_RSRC_FCFI, U32_MAX))
> +		rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_node_free_all,
> +				     NULL);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_err(hw->os, "UNREG_RPI failed\n");
> +		kfree(buf);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	return rc;
> +}
> +
> +struct efct_hw_get_nvparms_cb_arg {
> +	void (*cb)(int status,
> +		   u8 *wwpn, u8 *wwnn,
> +		u8 hard_alpa, u32 preferred_d_id,
> +		void *arg);
> +	void *arg;
> +};
> +
> +static int
> +efct_hw_get_nvparms_cb(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efct_hw_get_nvparms_cb_arg *cb_arg = arg;
> +	struct sli4_cmd_read_nvparms *mbox_rsp =
> +			(struct sli4_cmd_read_nvparms *)mqe;
> +	u8 hard_alpa;
> +	u32 preferred_d_id;
> +
> +	hard_alpa = le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
> +				SLI4_READ_NVPARAMS_HARD_ALPA;
> +	preferred_d_id = (le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
> +			  SLI4_READ_NVPARAMS_PREFERRED_D_ID) >> 8;
> +	if (cb_arg->cb)
> +		cb_arg->cb(status, mbox_rsp->wwpn, mbox_rsp->wwnn,
> +			   hard_alpa, preferred_d_id,
> +			   cb_arg->arg);
> +
> +	kfree(mqe);
> +	kfree(cb_arg);
> +
> +	return 0;
> +}
> +
> +int
> +efct_hw_get_nvparms(struct efct_hw *hw,
> +		    void (*cb)(int status, u8 *wwpn,
> +			       u8 *wwnn, u8 hard_alpa,
> +			       u32 preferred_d_id, void *arg),
> +		    void *ul_arg)
> +{
> +	u8 *mbxdata;
> +	struct efct_hw_get_nvparms_cb_arg *cb_arg;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	/* mbxdata holds the header of the command */
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);
> +
> +	/*
> +	 * cb_arg holds the data that will be passed to the callback on
> +	 * completion
> +	 */
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = ul_arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_read_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_get_nvparms_cb, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "READ_NVPARMS failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +struct efct_hw_set_nvparms_cb_arg {
> +	void (*cb)(int status, void *arg);
> +	void *arg;
> +};
> +
> +static int
> +efct_hw_set_nvparms_cb(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efct_hw_set_nvparms_cb_arg *cb_arg = arg;
> +
> +	if (cb_arg->cb)
> +		cb_arg->cb(status, cb_arg->arg);
> +
> +	kfree(mqe);
> +	kfree(cb_arg);
> +
> +	return 0;
> +}
> +
> +int
> +efct_hw_set_nvparms(struct efct_hw *hw,
> +		    void (*cb)(int status, void *arg),
> +		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
> +		u32 preferred_d_id,
> +		void *ul_arg)
> +{
> +	u8 *mbxdata;
> +	struct efct_hw_set_nvparms_cb_arg *cb_arg;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	/* mbxdata holds the header of the command */
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	/*
> +	 * cb_arg holds the data that will be passed to the callback on
> +	 * completion
> +	 */
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = ul_arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_write_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE, wwpn,
> +				  wwnn, hard_alpa, preferred_d_id))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_set_nvparms_cb, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "SET_NVPARMS failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_fw_write(struct efct_hw *hw, int status,
> +		    u8 *mqe, void  *arg)
> +{
> +	struct sli4_cmd_sli_config *mbox_rsp =
> +					(struct sli4_cmd_sli_config *)mqe;
> +	struct sli4_rsp_cmn_write_object *wr_obj_rsp;
> +	struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
> +	u32 bytes_written;
> +	u16 mbox_status;
> +	u32 change_status;
> +
> +	wr_obj_rsp = (struct sli4_rsp_cmn_write_object *)
> +		      &mbox_rsp->payload.embed;
> +	bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
> +	mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
> +	change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
> +			 RSP_CHANGE_STATUS);
> +
> +	kfree(mqe);
> +
> +	if (cb_arg) {
> +		if (cb_arg->cb) {
> +			if (!status && mbox_status)
> +				status = mbox_status;
> +			cb_arg->cb(status, bytes_written, change_status,
> +				   cb_arg->arg);
> +		}
> +
> +		kfree(cb_arg);
> +	}
> +
> +	return 0;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_firmware_write_sli4_intf_2(struct efct_hw *hw, struct efc_dma *dma,
> +				   u32 size, u32 offset, int last,
> +			      void (*cb)(int status, u32 bytes_written,
> +					 u32 change_status, void *arg),
> +				void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	u8 *mbxdata;
> +	struct efct_hw_fw_wr_cb_arg *cb_arg;
> +	int noc = 0;
> +
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);
> +
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +	memset(cb_arg, 0, sizeof(struct efct_hw_fw_wr_cb_arg));
> +	cb_arg->cb = cb;
> +	cb_arg->arg = arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_common_write_object(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
> +					noc, last, size, offset, "/prg/",
> +					dma))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_fw_write, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "COMMON_WRITE_OBJECT failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +/* Write a portion of a firmware image to the device */
> +enum efct_hw_rtn
> +efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
> +		       u32 size, u32 offset, int last,
> +			void (*cb)(int status, u32 bytes_written,
> +				   u32 change_status, void *arg),
> +			void *arg)
> +{
> +	return efct_hw_firmware_write_sli4_intf_2(hw, dma, size, offset,
> +						     last, cb, arg);
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 862504b96a23..598d05694ac3 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -479,7 +479,6 @@ struct efct_hw_io {
>  	void			*ul_io;
>  };
>  
> -
>  /* Typedef for HW "done" callback */
>  typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
>  			      u32 len, int status, u32 ext, void *ul_arg);
> @@ -1040,5 +1039,61 @@ efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
>  		     uintptr_t value,
>  		void (*cb)(int status, uintptr_t value, void *arg),
>  		void *arg);
> +extern enum efct_hw_rtn
> +efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
> +		   struct efc_domain *domain, u8 *wwpn);
> +extern enum efct_hw_rtn
> +efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
> +		    u32 fc_id);
> +extern enum efct_hw_rtn
> +efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport);
> +extern enum efct_hw_rtn
> +efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
> +		     u32 fcf);
> +extern enum efct_hw_rtn
> +efct_hw_domain_attach(struct efc *efc,
> +		      struct efc_domain *domain, u32 fc_id);
> +extern enum efct_hw_rtn
> +efct_hw_domain_free(struct efc *efc, struct efc_domain *domain);
> +extern enum efct_hw_rtn
> +efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain);
> +extern enum efct_hw_rtn
> +efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
> +		   u32 fc_addr, struct efc_sli_port *sport);
> +extern enum efct_hw_rtn
> +efct_hw_node_free_all(struct efct_hw *hw);
> +extern enum efct_hw_rtn
> +efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
> +		    struct efc_dma *sparms);
> +extern enum efct_hw_rtn
> +efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode);
> +extern enum efct_hw_rtn
> +efct_hw_node_free_resources(struct efc *efc,
> +			    struct efc_remote_node *rnode);
> +
> +extern enum efct_hw_rtn
> +efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
> +		       u32 size, u32 offset, int last,
> +		       void (*cb)(int status, u32 bytes_written,
> +				  u32 change_status, void *arg),
> +		       void *arg);
> +
> +extern enum efct_hw_rtn
> +efct_hw_get_nvparms(struct efct_hw *hw,
> +		    void (*mgmt_cb)(int status, u8 *wwpn,
> +				    u8 *wwnn, u8 hard_alpa,
> +				    u32 preferred_d_id, void *arg),
> +		    void *arg);
> +extern
> +enum efct_hw_rtn efct_hw_set_nvparms(struct efct_hw *hw,
> +				       void (*mgmt_cb)(int status, void *arg),
> +		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
> +		u32 preferred_d_id, void *arg);
> +
> +typedef int (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
> +				  u8 *mqe, void *arg);
> +extern int
> +efct_hw_async_call(struct efct_hw *hw,
> +		   efct_hw_async_cb_t callback, void *arg);
>  
>  #endif /* __EFCT_H__ */
> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 30/32] elx: efct: scsi_transport_fc host interface support
  2019-12-20 22:37 ` [PATCH v2 30/32] elx: efct: scsi_transport_fc host interface support James Smart
@ 2020-01-09 11:46   ` Hannes Reinecke
  0 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 11:46 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Integration with the scsi_fc_transport host interfaces
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/efct/efct_xport.c | 496 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 496 insertions(+)
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver
  2019-12-20 22:37 ` [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
  2019-12-20 23:17   ` Randy Dunlap
@ 2020-01-09 11:47   ` Hannes Reinecke
  1 sibling, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 11:47 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This patch completes the efct driver population.
> 
> This patch adds driver definitions for:
> Adds the efct driver Kconfig and Makefiles
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/elx/Kconfig  |  9 +++++++++
>  drivers/scsi/elx/Makefile | 30 ++++++++++++++++++++++++++++++
>  2 files changed, 39 insertions(+)
>  create mode 100644 drivers/scsi/elx/Kconfig
>  create mode 100644 drivers/scsi/elx/Makefile
> 
> diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
> new file mode 100644
> index 000000000000..ec710ade44f3
> --- /dev/null
> +++ b/drivers/scsi/elx/Kconfig
> @@ -0,0 +1,9 @@
> +config SCSI_EFCT
> +	tristate "Emulex Fibre Channel Target"
> +	depends on PCI && SCSI
> +	depends on TARGET_CORE
> +	depends on SCSI_FC_ATTRS
> +	select CRC_T10DIF
> +	help
> +          The efct driver provides enhanced SCSI Target Mode
> +	  support for specific SLI-4 adapters.
> diff --git a/drivers/scsi/elx/Makefile b/drivers/scsi/elx/Makefile
> new file mode 100644
> index 000000000000..79cc4e57676e
> --- /dev/null
> +++ b/drivers/scsi/elx/Makefile
> @@ -0,0 +1,30 @@
> +#/*******************************************************************
> +# * This file is part of the Emulex Linux Device Driver for         *
> +# * Fibre Channel Host Bus Adapters.                                *
> +# * Copyright (C) 2018 Broadcom. All Rights Reserved. The term	   *
> +# * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
> +# *                                                                 *
> +# * This program is free software; you can redistribute it and/or   *
> +# * modify it under the terms of version 2 of the GNU General       *
> +# * Public License as published by the Free Software Foundation.    *
> +# * This program is distributed in the hope that it will be useful. *
> +# * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
> +# * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
> +# * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
> +# * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
> +# * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
> +# * more details, a copy of which can be found in the file COPYING  *
> +# * included with this package.                                     *
> +# ********************************************************************/
> +

Use an SPDX license identifier here instead of this boilerplate block.
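E.g. just a single line at the top, assuming GPL v2 is the intended license
(the removed boilerplate references version 2 of the GPL and the COPYING
file):

	# SPDX-License-Identifier: GPL-2.0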

> +obj-$(CONFIG_SCSI_EFCT) := efct.o
> +
> +efct-objs := efct/efct_driver.o efct/efct_io.o efct/efct_scsi.o efct/efct_els.o \
> +	     efct/efct_xport.o efct/efct_hw.o efct/efct_hw_queues.o \
> +	     efct/efct_utils.o efct/efct_lio.o efct/efct_unsol.o
> +
> +efct-objs += libefc/efc_domain.o libefc/efc_fabric.o libefc/efc_node.o \
> +	     libefc/efc_sport.o libefc/efc_device.o \
> +	     libefc/efc_lib.o libefc/efc_sm.o
> +
> +efct-objs += libefc_sli/sli4.o
> 
Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


* Re: [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process
  2019-12-20 22:37 ` [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
  2019-12-24  7:45     ` kbuild test robot
  2019-12-24 21:01   ` Nathan Chancellor
@ 2020-01-09 11:47   ` Hannes Reinecke
  2 siblings, 0 replies; 78+ messages in thread
From: Hannes Reinecke @ 2020-01-09 11:47 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: maier, dwagner, bvanassche, Ram Vegesna

On 12/20/19 11:37 PM, James Smart wrote:
> This final patch ties the efct driver into the kernel Kconfig
> and build linkages in the drivers/scsi directory.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> ---
>  drivers/scsi/Kconfig  | 2 ++
>  drivers/scsi/Makefile | 1 +
>  2 files changed, 3 insertions(+)
> 
> diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
> index 90cf4691b8c3..78822ae45457 100644
> --- a/drivers/scsi/Kconfig
> +++ b/drivers/scsi/Kconfig
> @@ -1176,6 +1176,8 @@ config SCSI_LPFC_DEBUG_FS
>  	  This makes debugging information from the lpfc driver
>  	  available via the debugfs filesystem.
>  
> +source "drivers/scsi/elx/Kconfig"
> +
>  config SCSI_SIM710
>  	tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
>  	depends on EISA && SCSI
> diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
> index c00e3dd57990..844db573283c 100644
> --- a/drivers/scsi/Makefile
> +++ b/drivers/scsi/Makefile
> @@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
>  obj-$(CONFIG_SCSI_QLA_FC)	+= qla2xxx/
>  obj-$(CONFIG_SCSI_QLA_ISCSI)	+= libiscsi.o qla4xxx/
>  obj-$(CONFIG_SCSI_LPFC)		+= lpfc/
> +obj-$(CONFIG_SCSI_EFCT)		+= elx/
>  obj-$(CONFIG_SCSI_BFA_FC)	+= bfa/
>  obj-$(CONFIG_SCSI_CHELSIO_FCOE)	+= csiostor/
>  obj-$(CONFIG_SCSI_DMX3191D)	+= dmx3191d.o
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      Teamlead Storage & Networking
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer



Thread overview: 78+ messages
2019-12-20 22:36 [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
2019-12-20 22:36 ` [PATCH v2 01/32] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
2020-01-08  7:11   ` Hannes Reinecke
2020-01-09  0:59     ` James Smart
2019-12-20 22:36 ` [PATCH v2 02/32] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
2020-01-08  7:24   ` Hannes Reinecke
2020-01-09  1:00     ` James Smart
2019-12-20 22:36 ` [PATCH v2 03/32] elx: libefc_sli: Data structures and defines for mbox commands James Smart
2020-01-08  7:32   ` Hannes Reinecke
2020-01-09  1:03     ` James Smart
2019-12-20 22:36 ` [PATCH v2 04/32] elx: libefc_sli: queue create/destroy/parse routines James Smart
2020-01-08  7:45   ` Hannes Reinecke
2020-01-09  1:04     ` James Smart
2019-12-20 22:36 ` [PATCH v2 05/32] elx: libefc_sli: Populate and post different WQEs James Smart
2020-01-08  7:54   ` Hannes Reinecke
2020-01-09  1:04     ` James Smart
2019-12-20 22:36 ` [PATCH v2 06/32] elx: libefc_sli: bmbx routines and SLI config commands James Smart
2020-01-08  8:05   ` Hannes Reinecke
2019-12-20 22:36 ` [PATCH v2 07/32] elx: libefc_sli: APIs to setup SLI library James Smart
2020-01-08  8:22   ` Hannes Reinecke
2020-01-09  1:29     ` James Smart
2019-12-20 22:36 ` [PATCH v2 08/32] elx: libefc: Generic state machine framework James Smart
2020-01-09  7:05   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 09/32] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
2020-01-09  7:16   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 10/32] elx: libefc: FC Domain state machine interfaces James Smart
2020-01-09  7:27   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 11/32] elx: libefc: SLI and FC PORT " James Smart
2020-01-09  7:34   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 12/32] elx: libefc: Remote node " James Smart
2020-01-09  8:31   ` Hannes Reinecke
2020-01-09  9:57   ` Daniel Wagner
2019-12-20 22:37 ` [PATCH v2 13/32] elx: libefc: Fabric " James Smart
2020-01-09  8:34   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 14/32] elx: libefc: FC node ELS and state handling James Smart
2020-01-09  8:39   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 15/32] elx: efct: Data structures and defines for hw operations James Smart
2020-01-09  8:41   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 16/32] elx: efct: Driver initialization routines James Smart
2020-01-09  9:01   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 17/32] elx: efct: Hardware queues creation and deletion James Smart
2020-01-09  9:10   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 18/32] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
2020-01-09  9:13   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 19/32] elx: efct: Hardware IO and SGL initialization James Smart
2020-01-09  9:22   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 20/32] elx: efct: Hardware queues processing James Smart
2020-01-09  9:24   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 21/32] elx: efct: Unsolicited FC frame processing routines James Smart
2020-01-09  9:26   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 22/32] elx: efct: Extended link Service IO handling James Smart
2020-01-09  9:38   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 23/32] elx: efct: SCSI IO handling routines James Smart
2020-01-09  9:41   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 24/32] elx: efct: LIO backend interface routines James Smart
2020-01-09  3:56   ` Bart Van Assche
2019-12-20 22:37 ` [PATCH v2 25/32] elx: efct: Hardware IO submission routines James Smart
2020-01-09  9:52   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 26/32] elx: efct: link statistics and SFP data James Smart
2020-01-09 10:12   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 27/32] elx: efct: xport and hardware teardown routines James Smart
2020-01-09 10:14   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 28/32] elx: efct: IO timeout handling routines James Smart
2020-01-09 11:27   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 29/32] elx: efct: Firmware update, async link processing James Smart
2020-01-09 11:45   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 30/32] elx: efct: scsi_transport_fc host interface support James Smart
2020-01-09 11:46   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 31/32] elx: efct: Add Makefile and Kconfig for efct driver James Smart
2019-12-20 23:17   ` Randy Dunlap
2020-01-09 11:47   ` Hannes Reinecke
2019-12-20 22:37 ` [PATCH v2 32/32] elx: efct: Tie into kernel Kconfig and build process James Smart
2019-12-24  7:45   ` kbuild test robot
2019-12-24  7:45     ` kbuild test robot
2019-12-24 21:01   ` Nathan Chancellor
2019-12-25 16:09     ` James Smart
2020-01-09 11:47   ` Hannes Reinecke
2019-12-29 18:27 ` [PATCH v2 00/32] [NEW] efct: Broadcom (Emulex) FC Target driver Sebastian Herbszt
