* [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver
@ 2020-04-12  3:32 James Smart
  2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
                   ` (30 more replies)
  0 siblings, 31 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart

This patch set is a request to incorporate the new Broadcom
(Emulex) FC target driver, efct, into the kernel source tree.

The driver source has been announced a couple of times, most recently
on 12/20/2019. The driver has been hosted on gitlab for review and has
received contributions from the community.
  gitlab (git@gitlab.com:jsmart/efct-Emulex_FC_Target.git)

The driver integrates into the source tree at the (new) drivers/scsi/elx
subdirectory.

The driver consists of the following components:
- A libefc_sli subdirectory: This subdirectory contains a library that
  encapsulates common definitions and routines for an Emulex SLI-4
  adapter.
- A libefc subdirectory: This subdirectory contains a library of
  common routines. Of major import are the routines that
  implement an FC Discovery engine for target mode.
- An efct subdirectory: This subdirectory contains the efct target
  mode device driver. The driver utilizes the above libraries and
  plugs into the SCSI LIO interfaces. The driver is SCSI only at
  this time.
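
For readers less familiar with LIO, the general shape of the hookup is
sketched below. This is a hypothetical illustration of the usual
target_core_fabric_ops registration pattern, not code taken from these
patches; the actual efct callbacks live in efct_lio.c.

	/*
	 * Hypothetical sketch of an LIO fabric-module registration;
	 * struct/function names and callbacks are illustrative only.
	 */
	#include <linux/module.h>
	#include <target/target_core_base.h>
	#include <target/target_core_fabric.h>

	static const struct target_core_fabric_ops efct_lio_ops_sketch = {
		.module		= THIS_MODULE,
		.fabric_name	= "efct",
		/* ...mandatory fabric callbacks elided for brevity... */
	};

	static int __init efct_lio_sketch_init(void)
	{
		/* Register this fabric with the target core (LIO). */
		return target_register_template(&efct_lio_ops_sketch);
	}
	module_init(efct_lio_sketch_init);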

The patches populate the libraries and device driver and can only
be compiled as a complete set.

This driver is completely independent of the lpfc device driver,
and there is no overlap in PCI IDs.

The patches have been cut against the 5.7/scsi-queue branch.

Thank you to those that have contributed to the driver in the past.

Review comments welcome!

-- james


V2 modifications:

Contains the following modifications based on prior review comments:
  Indentation/Alignment/Spacing changes
  Comments: format cleanup; removed obvious or unnecessary comments;
    Added comments for clarity.
  Headers use #ifndef guards to check for prior inclusion
  Cleanup structure names (remove _s suffix)
  Encapsulate use of macro arguments
  Refactor to remove static function declarations for static local routines
  Removed unused variables
  Fix SLI4_INTF_VALID_MASK for 32bits
  Ensure no BIT() use
  Use __ffs() in page count macro
  Reorg to move field defines out of structure definition
  Commonize command building routines to reduce duplication
  LIO interface:
    Removed scsi initiator includes
    Cleaned up interface defines
    Removed lio WWN version attribute.
    Expanded macros within logging macros
    Cleaned up lio state setting macro
    Remove __force use
    Modularized session debugfs code so it can be easily replaced.
    Cleaned up abort task handling. Return after initiating.
    Modularized where possible to reduce duplication
    Convert from kthread to workqueue use
    Remove unused macros
  Add missing TARGET_CORE build attribute
  Fix kbuild test robot warnings

  Comments not addressed:
    Use of __packed: not believed necessary
    Session debugfs code remains. There is not yet a common lio
      mechanism to replace it with.

V3 modifications:
  Changed anonymous enums to named enums
  Split giant enums into multiple enums
  Use defines to spell out _MASK values directly
  Changed multiple #defines to named enums and a few vice versa cases
    for consistency
  Added Link Speed support for up to 128G
  Removed efc_assert define. Replaced with WARN_ON.
  Returned the defined values EFC_SUCCESS & EFC_FAIL
  Added return values for routines returning more than those 2 values
  Reduction of calling arguments in various routines.
  Expanded dump type and status handling
  Fixed locking in discovery handling routines.
  Fixed line formatting length and indentation issues.
  Removed code that was not used.
  Removed Sparse Vector APIs and structures. Use xarray api instead.
  Changed node pool creation: use mempool and allocate DMA memory when
    required
  Bug Fix: Send LS_RJT for non-FCP PRLIs
  Removed Queue topology string and parsing routines and rework queue
    creation. Adapter configuration is implicitly known by the driver
  Used request_threaded_irq instead of using our thread
  Reworked efct_device_attach function to use if statements and gotos
  Changed efct_fw_reset, removed accessing other port
  Converted to use the pci_alloc_irq_vectors API
  Removed proc interface.
  Changed assertion log messages.
  Unified log message using cmd_name
  Removed DIF related code which is not used
  Removed SCSI get property
  Incorporated LIO interface review comments
  Reworked xxx_reg_vpi/vfi routines
  Use SPDX license in elx/Makefile
  Many more small changes.
  

James Smart (31):
  elx: libefc_sli: SLI-4 register offsets and field definitions
  elx: libefc_sli: SLI Descriptors and Queue entries
  elx: libefc_sli: Data structures and defines for mbox commands
  elx: libefc_sli: queue create/destroy/parse routines
  elx: libefc_sli: Populate and post different WQEs
  elx: libefc_sli: bmbx routines and SLI config commands
  elx: libefc_sli: APIs to setup SLI library
  elx: libefc: Generic state machine framework
  elx: libefc: Emulex FC discovery library APIs and definitions
  elx: libefc: FC Domain state machine interfaces
  elx: libefc: SLI and FC PORT state machine interfaces
  elx: libefc: Remote node state machine interfaces
  elx: libefc: Fabric node state machine interfaces
  elx: libefc: FC node ELS and state handling
  elx: efct: Data structures and defines for hw operations
  elx: efct: Driver initialization routines
  elx: efct: Hardware queues creation and deletion
  elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  elx: efct: Hardware IO and SGL initialization
  elx: efct: Hardware queues processing
  elx: efct: Unsolicited FC frame processing routines
  elx: efct: Extended link Service IO handling
  elx: efct: SCSI IO handling routines
  elx: efct: LIO backend interface routines
  elx: efct: Hardware IO submission routines
  elx: efct: link statistics and SFP data
  elx: efct: xport and hardware teardown routines
  elx: efct: Firmware update, async link processing
  elx: efct: scsi_transport_fc host interface support
  elx: efct: Add Makefile and Kconfig for efct driver
  elx: efct: Tie into kernel Kconfig and build process

 MAINTAINERS                            |    8 +
 drivers/scsi/Kconfig                   |    2 +
 drivers/scsi/Makefile                  |    1 +
 drivers/scsi/elx/Kconfig               |    9 +
 drivers/scsi/elx/Makefile              |   18 +
 drivers/scsi/elx/efct/efct_driver.c    |  856 +++++
 drivers/scsi/elx/efct/efct_driver.h    |  142 +
 drivers/scsi/elx/efct/efct_els.c       | 1928 +++++++++++
 drivers/scsi/elx/efct/efct_els.h       |  133 +
 drivers/scsi/elx/efct/efct_hw.c        | 5347 +++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h        |  864 +++++
 drivers/scsi/elx/efct/efct_hw_queues.c |  765 +++++
 drivers/scsi/elx/efct/efct_io.c        |  198 ++
 drivers/scsi/elx/efct/efct_io.h        |  191 ++
 drivers/scsi/elx/efct/efct_lio.c       | 1840 +++++++++++
 drivers/scsi/elx/efct/efct_lio.h       |  178 +
 drivers/scsi/elx/efct/efct_scsi.c      | 1192 +++++++
 drivers/scsi/elx/efct/efct_scsi.h      |  235 ++
 drivers/scsi/elx/efct/efct_unsol.c     |  813 +++++
 drivers/scsi/elx/efct/efct_unsol.h     |   49 +
 drivers/scsi/elx/efct/efct_xport.c     | 1310 ++++++++
 drivers/scsi/elx/efct/efct_xport.h     |  201 ++
 drivers/scsi/elx/include/efc_common.h  |   43 +
 drivers/scsi/elx/libefc/efc.h          |   72 +
 drivers/scsi/elx/libefc/efc_device.c   | 1672 ++++++++++
 drivers/scsi/elx/libefc/efc_device.h   |   72 +
 drivers/scsi/elx/libefc/efc_domain.c   | 1109 +++++++
 drivers/scsi/elx/libefc/efc_domain.h   |   52 +
 drivers/scsi/elx/libefc/efc_fabric.c   | 1759 ++++++++++
 drivers/scsi/elx/libefc/efc_fabric.h   |  116 +
 drivers/scsi/elx/libefc/efc_lib.c      |   41 +
 drivers/scsi/elx/libefc/efc_node.c     | 1196 +++++++
 drivers/scsi/elx/libefc/efc_node.h     |  183 ++
 drivers/scsi/elx/libefc/efc_sm.c       |   61 +
 drivers/scsi/elx/libefc/efc_sm.h       |  209 ++
 drivers/scsi/elx/libefc/efc_sport.c    |  846 +++++
 drivers/scsi/elx/libefc/efc_sport.h    |   52 +
 drivers/scsi/elx/libefc/efclib.h       |  640 ++++
 drivers/scsi/elx/libefc_sli/sli4.c     | 5523 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h     | 4133 ++++++++++++++++++++++++
 40 files changed, 34059 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_els.c
 create mode 100644 drivers/scsi/elx/efct/efct_els.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h
 create mode 100644 drivers/scsi/elx/include/efc_common.h
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
 create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.h
 create mode 100644 drivers/scsi/elx/libefc/efclib.h
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

-- 
2.16.4



* [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-14 15:23   ` Daniel Wagner
                     ` (2 more replies)
  2020-04-12  3:32 ` [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
                   ` (29 subsequent siblings)
  30 siblings, 3 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This is the initial patch for the new Emulex target mode SCSI
driver sources.

This patch:
- Creates the new Emulex source level directory drivers/scsi/elx
  and adds the directory to the MAINTAINERS file.
- Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
  This library is a SLI-4 interface library.
- Starts the population of the libefc_sli library with SLI-4
  hardware register offsets and field definitions.
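
As a purely illustrative aside (not part of the patch), the register
definitions added here are intended to be consumed roughly as follows,
assuming a mapped BAR 0; the helper name is hypothetical:

	/* Hypothetical usage sketch of the SLI_INTF definitions below. */
	static bool sli_intf_sketch_valid(void __iomem *bar0)
	{
		u32 intf = readl(bar0 + SLI4_INTF_REG);

		/* The valid bits must read back as the expected pattern. */
		if ((intf & SLI4_INTF_VALID_MASK) != SLI4_INTF_VALID_VALUE)
			return false;

		/* Only SLI interface types 2 and 6 are defined here. */
		switch (intf & SLI4_INTF_IF_TYPE_MASK) {
		case SLI4_INTF_IF_TYPE_2:
		case SLI4_INTF_IF_TYPE_6:
			return true;
		default:
			return false;
		}
	}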

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Changed anonymous enums to named.
  SLI defines to spell out _MASK value directly.
  Changed multiple #defines to named enums for consistency.
  SLI4_REG_MAX to SLI4_REG_UNKNOWN
---
 MAINTAINERS                        |   8 ++
 drivers/scsi/elx/libefc_sli/sli4.c |  26 ++++
 drivers/scsi/elx/libefc_sli/sli4.h | 252 +++++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 7bd5e23648b1..a7381c0088e4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6223,6 +6223,14 @@ W:	http://www.broadcom.com
 S:	Supported
 F:	drivers/scsi/lpfc/
 
+EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
+M:	James Smart <james.smart@broadcom.com>
+M:	Ram Vegesna <ram.vegesna@broadcom.com>
+L:	linux-scsi@vger.kernel.org
+W:	http://www.broadcom.com
+S:	Supported
+F:	drivers/scsi/elx/
+
 ENE CB710 FLASH CARD READER DRIVER
 M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
 S:	Maintained
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
new file mode 100644
index 000000000000..29d33becd334
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * All common (i.e. transport-independent) SLI-4 functions are implemented
+ * in this file.
+ */
+#include "sli4.h"
+
+struct sli4_asic_entry_t {
+	u32 rev_id;
+	u32 family;
+};
+
+static struct sli4_asic_entry_t sli4_asic_table[] = {
+	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
+};
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
new file mode 100644
index 000000000000..1fad48643f94
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/*
+ * All common SLI-4 structures and function prototypes.
+ */
+
+#ifndef _SLI4_H
+#define _SLI4_H
+
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "../include/efc_common.h"
+
+/*************************************************************************
+ * Common SLI-4 register offsets and field definitions
+ */
+
+/* SLI_INTF - SLI Interface Definition Register */
+#define SLI4_INTF_REG			0x0058
+enum sli4_intf {
+	SLI4_INTF_REV_SHIFT		= 4,
+	SLI4_INTF_REV_MASK		= 0xF0,
+
+	SLI4_INTF_REV_S3		= 0x30,
+	SLI4_INTF_REV_S4		= 0x40,
+
+	SLI4_INTF_FAMILY_SHIFT		= 8,
+	SLI4_INTF_FAMILY_MASK		= 0x0F00,
+
+	SLI4_FAMILY_CHECK_ASIC_TYPE	= 0x0F00,
+
+	SLI4_INTF_IF_TYPE_SHIFT		= 12,
+	SLI4_INTF_IF_TYPE_MASK		= 0xF000,
+
+	SLI4_INTF_IF_TYPE_2		= 0x2000,
+	SLI4_INTF_IF_TYPE_6		= 0x6000,
+
+	SLI4_INTF_VALID_SHIFT		= 29,
+	SLI4_INTF_VALID_MASK		= 0xE0000000,
+
+	SLI4_INTF_VALID_VALUE		= 0xC0000000,
+};
+
+/* ASIC_ID - SLI ASIC Type and Revision Register */
+#define SLI4_ASIC_ID_REG	0x009c
+enum sli4_asic {
+	SLI4_ASIC_GEN_SHIFT	= 8,
+	SLI4_ASIC_GEN_MASK	= 0xFF00,
+	SLI4_ASIC_GEN_5		= 0x0B00,
+	SLI4_ASIC_GEN_6		= 0x0C00,
+	SLI4_ASIC_GEN_7		= 0x0D00,
+};
+
+enum sli4_acic_revisions {
+	SLI4_ASIC_REV_A0 = 0x00,
+	SLI4_ASIC_REV_A1 = 0x01,
+	SLI4_ASIC_REV_A2 = 0x02,
+	SLI4_ASIC_REV_A3 = 0x03,
+	SLI4_ASIC_REV_B0 = 0x10,
+	SLI4_ASIC_REV_B1 = 0x11,
+	SLI4_ASIC_REV_B2 = 0x12,
+	SLI4_ASIC_REV_C0 = 0x20,
+	SLI4_ASIC_REV_C1 = 0x21,
+	SLI4_ASIC_REV_C2 = 0x22,
+	SLI4_ASIC_REV_D0 = 0x30,
+};
+
+/* BMBX - Bootstrap Mailbox Register */
+#define SLI4_BMBX_REG		0x0160
+enum sli4_bmbx {
+	SLI4_BMBX_MASK_HI	= 0x3,
+	SLI4_BMBX_MASK_LO	= 0xf,
+	SLI4_BMBX_RDY		= (1 << 0),
+	SLI4_BMBX_HI		= (1 << 1),
+	SLI4_BMBX_SIZE		= 256,
+};
+
+#define SLI4_BMBX_WRITE_HI(r) \
+	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | SLI4_BMBX_HI)
+#define SLI4_BMBX_WRITE_LO(r) \
+	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) << 30) | \
+	 (((r) & ~SLI4_BMBX_MASK_LO) >> 2))
+
+/* SLIPORT_CONTROL - SLI Port Control Register */
+#define SLI4_PORT_CTRL_REG	0x0408
+enum sli4_port_ctrl {
+	SLI4_PORT_CTRL_IP	= (1 << 27),
+	SLI4_PORT_CTRL_IDIS	= (1 << 22),
+	SLI4_PORT_CTRL_FDD	= (1 << 31),
+};
+
+/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
+#define SLI4_PORT_ERROR1	0x040c
+#define SLI4_PORT_ERROR2	0x0410
+
+/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
+#define SLI4_EQCQ_DB_REG	0x120
+enum sli4_eqcq_e {
+	SLI4_EQ_ID_LO_MASK	= 0x01FF,
+
+	SLI4_CQ_ID_LO_MASK	= 0x03FF,
+
+	SLI4_EQCQ_CI_EQ		= 0x0200,
+
+	SLI4_EQCQ_QT_EQ		= 0x00000400,
+	SLI4_EQCQ_QT_CQ		= 0x00000000,
+
+	SLI4_EQCQ_ID_HI_SHIFT	= 11,
+	SLI4_EQCQ_ID_HI_MASK	= 0xF800,
+
+	SLI4_EQCQ_NUM_SHIFT	= 16,
+	SLI4_EQCQ_NUM_MASK	= 0x1FFF0000,
+
+	SLI4_EQCQ_ARM		= 0x20000000,
+	SLI4_EQCQ_UNARM		= 0x00000000,
+};
+
+#define SLI4_EQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ | \
+	 ((((id) >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
+	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | \
+	 (a) | SLI4_EQCQ_CI_EQ)
+
+#define SLI4_CQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ | \
+	 ((((id) >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
+	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | (a))
+
+/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
+#define SLI4_IF6_EQ_DB_REG	0x120
+enum sli4_eq_e {
+	SLI4_IF6_EQ_ID_MASK	= 0x0FFF,
+
+	SLI4_IF6_EQ_NUM_SHIFT	= 16,
+	SLI4_IF6_EQ_NUM_MASK	= 0x1FFF0000,
+};
+
+#define SLI4_IF6_EQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_IF6_EQ_ID_MASK) | \
+	 (((n) << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK) | (a))
+
+/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6 */
+#define SLI4_IF6_CQ_DB_REG	0xC0
+enum sli4_cq_e {
+	SLI4_IF6_CQ_ID_MASK	= 0xFFFF,
+
+	SLI4_IF6_CQ_NUM_SHIFT	= 16,
+	SLI4_IF6_CQ_NUM_MASK	= 0x1FFF0000,
+};
+
+#define SLI4_IF6_CQ_DOORBELL(n, id, a) \
+	(((id) & SLI4_IF6_CQ_ID_MASK) | \
+	 (((n) << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK) | (a))
+
+/* MQ_DOORBELL - MQ Doorbell Register */
+#define SLI4_MQ_DB_REG		0x0140
+#define SLI4_IF6_MQ_DB_REG	0x0160
+enum sli4_mq_e {
+	SLI4_MQ_ID_MASK		= 0xFFFF,
+
+	SLI4_MQ_NUM_SHIFT	= 16,
+	SLI4_MQ_NUM_MASK	= 0x3FFF0000,
+};
+
+#define SLI4_MQ_DOORBELL(n, i) \
+	(((i) & SLI4_MQ_ID_MASK) | \
+	 (((n) << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK))
+
+/* RQ_DOORBELL - RQ Doorbell Register */
+#define SLI4_RQ_DB_REG		0x0a0
+#define SLI4_IF6_RQ_DB_REG	0x0080
+enum sli4_rq_e {
+	SLI4_RQ_DB_ID_MASK	= 0xFFFF,
+
+	SLI4_RQ_DB_NUM_SHIFT	= 16,
+	SLI4_RQ_DB_NUM_MASK	= 0x3FFF0000,
+};
+
+#define SLI4_RQ_DOORBELL(n, i) \
+	(((i) & SLI4_RQ_DB_ID_MASK) | \
+	 (((n) << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK))
+
+/* WQ_DOORBELL - WQ Doorbell Register */
+#define SLI4_IO_WQ_DB_REG	0x040
+#define SLI4_IF6_WQ_DB_REG	0x040
+enum sli4_wq_e {
+	SLI4_WQ_ID_MASK		= 0xFFFF,
+
+	SLI4_WQ_IDX_SHIFT	= 16,
+	SLI4_WQ_IDX_MASK	= 0xFF0000,
+
+	SLI4_WQ_NUM_SHIFT	= 24,
+	SLI4_WQ_NUM_MASK	= 0x0FF00000,
+};
+
+#define SLI4_WQ_DOORBELL(n, x, i) \
+	(((i) & SLI4_WQ_ID_MASK) | \
+	 (((x) << SLI4_WQ_IDX_SHIFT) & SLI4_WQ_IDX_MASK) | \
+	 (((n) << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK))
+
+/* SLIPORT_SEMAPHORE - SLI Port Host and Port Status Register */
+#define SLI4_PORT_SEMP_REG		0x0400
+enum sli4_port_sem_e {
+	SLI4_PORT_SEMP_ERR_MASK		= 0xF000,
+	SLI4_PORT_SEMP_UNRECOV_ERR	= 0xF000,
+};
+
+/* SLIPORT_STATUS - SLI Port Status Register */
+#define SLI4_PORT_STATUS_REGOFF		0x0404
+enum sli4_port_status {
+	SLI4_PORT_STATUS_FDP		= (1 << 21),
+	SLI4_PORT_STATUS_RDY		= (1 << 23),
+	SLI4_PORT_STATUS_RN		= (1 << 24),
+	SLI4_PORT_STATUS_DIP		= (1 << 25),
+	SLI4_PORT_STATUS_OTI		= (1 << 29),
+	SLI4_PORT_STATUS_ERR		= (1 << 31),
+};
+
+#define SLI4_PHYDEV_CTRL_REG		0x0414
+#define SLI4_PHYDEV_CTRL_FRST		(1 << 1)
+#define SLI4_PHYDEV_CTRL_DD		(1 << 2)
+
+/* Register name enums */
+enum sli4_regname_en {
+	SLI4_REG_BMBX,
+	SLI4_REG_EQ_DOORBELL,
+	SLI4_REG_CQ_DOORBELL,
+	SLI4_REG_RQ_DOORBELL,
+	SLI4_REG_IO_WQ_DOORBELL,
+	SLI4_REG_MQ_DOORBELL,
+	SLI4_REG_PHYSDEV_CONTROL,
+	SLI4_REG_PORT_CONTROL,
+	SLI4_REG_PORT_ERROR1,
+	SLI4_REG_PORT_ERROR2,
+	SLI4_REG_PORT_SEMAPHORE,
+	SLI4_REG_PORT_STATUS,
+	SLI4_REG_UNKWOWN			/* must be last */
+};
+
+struct sli4_reg {
+	u32	rset;
+	u32	off;
+};
+
+#endif /* !_SLI4_H */
-- 
2.16.4



* [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-14 18:02   ` Daniel Wagner
  2020-04-15 12:14   ` Hannes Reinecke
  2020-04-12  3:32 ` [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
                   ` (28 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds SLI-4 data structures and defines for:
- Buffer Descriptors (BDEs)
- Scatter/Gather List elements (SGEs)
- Queues and their Entry Descriptions for:
   Event Queues (EQs), Completion Queues (CQs),
   Receive Queues (RQs), and the Mailbox Queue (MQ).
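
As a hypothetical illustration (not part of the patch) of how the BDE
definitions are intended to be used, a 64-bit data BDE could be filled
in roughly like this; the helper name is made up:

	/* Hypothetical sketch: fill a 64-bit data BDE from a DMA address. */
	static void sli_sketch_fill_bde64(struct sli4_bde *bde,
					  dma_addr_t pa, u32 len)
	{
		/* Low 24 bits carry the buffer length, top byte the BDE type. */
		u32 type_len = (len & SLI4_BDE_MASK_BUFFER_LEN) |
			       (BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT);

		bde->bde_type_buflen = cpu_to_le32(type_len);
		bde->u.data.low  = cpu_to_le32(lower_32_bits(pa));
		bde->u.data.high = cpu_to_le32(upper_32_bits(pa));
	}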

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Changed anonymous enums to named.
  SLI defines to spell out _MASK value directly.
  Changed multiple defines to named enums for consistency.
  Converted a single enum to a #define.
---
 drivers/scsi/elx/include/efc_common.h |   25 +
 drivers/scsi/elx/libefc_sli/sli4.h    | 1761 +++++++++++++++++++++++++++++++++
 2 files changed, 1786 insertions(+)
 create mode 100644 drivers/scsi/elx/include/efc_common.h

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
new file mode 100644
index 000000000000..c427f75da4d5
--- /dev/null
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_COMMON_H__
+#define __EFC_COMMON_H__
+
+#include <linux/pci.h>
+
+#define EFC_SUCCESS	0
+#define EFC_FAIL	1
+
+struct efc_dma {
+	void		*virt;
+	void            *alloc;
+	dma_addr_t	phys;
+
+	size_t		size;
+	size_t          len;
+	struct pci_dev	*pdev;
+};
+
+#endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 1fad48643f94..07eef8df9690 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -249,4 +249,1765 @@ struct sli4_reg {
 	u32	off;
 };
 
+struct sli4_dmaaddr {
+	__le32 low;
+	__le32 high;
+};
+
+/* a 3-word BDE with address 1st 2 words, length last word */
+struct sli4_bufptr {
+	struct sli4_dmaaddr addr;
+	__le32 length;
+};
+
+/* a 3-word BDE with length as first word, address last 2 words */
+struct sli4_bufptr_len1st {
+	__le32 length0;
+	struct sli4_dmaaddr addr;
+};
+
+/* Buffer Descriptor Entry (BDE) */
+enum sli4_bde_e {
+	SLI4_BDE_MASK_BUFFER_LEN	= 0x00ffffff,
+	SLI4_BDE_MASK_BDE_TYPE		= 0xff000000,
+};
+
+struct sli4_bde {
+	__le32		bde_type_buflen;
+	union {
+		struct sli4_dmaaddr data;
+		struct {
+			__le32	offset;
+			__le32	rsvd2;
+		} imm;
+		struct sli4_dmaaddr blp;
+	} u;
+};
+
+/* Buffer Descriptors */
+enum sli4_bde_type {
+	BDE_TYPE_SHIFT		= 24,
+	BDE_TYPE_BDE_64		= 0x00,	/* Generic 64-bit data */
+	BDE_TYPE_BDE_IMM	= 0x01,	/* Immediate data */
+	BDE_TYPE_BLP		= 0x40,	/* Buffer List Pointer */
+};
+
+/* Scatter-Gather Entry (SGE) */
+#define SLI4_SGE_MAX_RESERVED			3
+
+enum sli4_sge_type {
+	/* DW2 */
+	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,
+	/*DW2W1*/
+	SLI4_SGE_TYPE_SHIFT		= 27,
+	SLI4_SGE_TYPE_MASK		= 0x78000000,
+	/*SGE Types*/
+	SLI4_SGE_TYPE_DATA		= 0x00,
+	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
+	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
+	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
+	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
+	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
+	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
+	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
+	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
+
+	SLI4_SGE_LAST			= (1 << 31),
+};
+
+struct sli4_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		buffer_length;
+};
+
+/* T10 DIF Scatter-Gather Entry (SGE) */
+struct sli4_dif_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		rsvd12;
+};
+
+/* Data Integrity Seed (DISEED) SGE */
+enum sli4_diseed_sge_flags {
+	/* DW2W1 */
+	DISEED_SGE_HS			= (1 << 2),
+	DISEED_SGE_WS			= (1 << 3),
+	DISEED_SGE_IC			= (1 << 4),
+	DISEED_SGE_ICS			= (1 << 5),
+	DISEED_SGE_ATRT			= (1 << 6),
+	DISEED_SGE_AT			= (1 << 7),
+	DISEED_SGE_FAT			= (1 << 8),
+	DISEED_SGE_NA			= (1 << 9),
+	DISEED_SGE_HI			= (1 << 10),
+
+	/* DW3W1 */
+	DISEED_SGE_BS_MASK		= 0x0007,
+	DISEED_SGE_AI			= (1 << 3),
+	DISEED_SGE_ME			= (1 << 4),
+	DISEED_SGE_RE			= (1 << 5),
+	DISEED_SGE_CE			= (1 << 6),
+	DISEED_SGE_NR			= (1 << 7),
+
+	DISEED_SGE_OP_RX_SHIFT		= 8,
+	DISEED_SGE_OP_RX_MASK		= 0x0F00,
+	DISEED_SGE_OP_TX_SHIFT		= 12,
+	DISEED_SGE_OP_TX_MASK		= 0xF000,
+};
+
+/* Opcode values */
+enum sli4_diseed_sge_opcodes {
+	DISEED_SGE_OP_IN_NODIF_OUT_CRC,
+	DISEED_SGE_OP_IN_CRC_OUT_NODIF,
+	DISEED_SGE_OP_IN_NODIF_OUT_CSUM,
+	DISEED_SGE_OP_IN_CSUM_OUT_NODIF,
+	DISEED_SGE_OP_IN_CRC_OUT_CRC,
+	DISEED_SGE_OP_IN_CSUM_OUT_CSUM,
+	DISEED_SGE_OP_IN_CRC_OUT_CSUM,
+	DISEED_SGE_OP_IN_CSUM_OUT_CRC,
+	DISEED_SGE_OP_IN_RAW_OUT_RAW,
+};
+
+#define DISEED_SGE_OP_RX_VALUE(stype) \
+	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_RX_SHIFT)
+#define DISEED_SGE_OP_TX_VALUE(stype) \
+	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_TX_SHIFT)
+
+struct sli4_diseed_sge {
+	__le32		ref_tag_cmp;
+	__le32		ref_tag_repl;
+	__le16		app_tag_repl;
+	__le16		dw2w1_flags;
+	__le16		app_tag_cmp;
+	__le16		dw3w1_flags;
+};
+
+/* List Segment Pointer Scatter-Gather Entry (SGE) */
+#define SLI4_LSP_SGE_SEGLEN	0x00ffffff
+
+struct sli4_lsp_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		dw3_seglen;
+};
+
+enum sli4_eqe_e {
+	SLI4_EQE_VALID	= 1,
+	SLI4_EQE_MJCODE	= 0xe,
+	SLI4_EQE_MNCODE	= 0xfff0,
+};
+
+struct sli4_eqe {
+	__le16		dw0w0_flags;
+	__le16		resource_id;
+};
+
+#define SLI4_MAJOR_CODE_STANDARD	0
+#define SLI4_MAJOR_CODE_SENTINEL	1
+
+/* Sentinel EQE indicating the EQ is full */
+#define SLI4_EQE_STATUS_EQ_FULL		2
+
+enum sli4_mcqe_e {
+	SLI4_MCQE_CONSUMED	= (1 << 27),
+	SLI4_MCQE_COMPLETED	= (1 << 28),
+	SLI4_MCQE_AE		= (1 << 30),
+	SLI4_MCQE_VALID		= (1 << 31),
+};
+
+/* Entry was consumed but not completed */
+#define SLI4_MCQE_STATUS_NOT_COMPLETED	-2
+
+struct sli4_mcqe {
+	__le16		completion_status;
+	__le16		extended_status;
+	__le32		mqe_tag_low;
+	__le32		mqe_tag_high;
+	__le32		dw3_flags;
+};
+
+enum sli4_acqe_e {
+	SLI4_ACQE_AE	= (1 << 6), /* async event - this is an ACQE */
+	SLI4_ACQE_VAL	= (1 << 7), /* valid - contents of CQE are valid */
+};
+
+struct sli4_acqe {
+	__le32		event_data[3];
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		ae_val;
+};
+
+enum sli4_acqe_event_code {
+	SLI4_ACQE_EVENT_CODE_LINK_STATE		= 0x01,
+	SLI4_ACQE_EVENT_CODE_FIP		= 0x02,
+	SLI4_ACQE_EVENT_CODE_DCBX		= 0x03,
+	SLI4_ACQE_EVENT_CODE_ISCSI		= 0x04,
+	SLI4_ACQE_EVENT_CODE_GRP_5		= 0x05,
+	SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT	= 0x10,
+	SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT	= 0x11,
+	SLI4_ACQE_EVENT_CODE_VF_EVENT		= 0x12,
+	SLI4_ACQE_EVENT_CODE_MR_EVENT		= 0x13,
+};
+
+enum sli4_qtype {
+	SLI_QTYPE_EQ,
+	SLI_QTYPE_CQ,
+	SLI_QTYPE_MQ,
+	SLI_QTYPE_WQ,
+	SLI_QTYPE_RQ,
+	SLI_QTYPE_MAX,			/* must be last */
+};
+
+#define SLI_USER_MQ_COUNT	1
+#define SLI_MAX_CQ_SET_COUNT	16
+#define SLI_MAX_RQ_SET_COUNT	16
+
+enum sli4_qentry {
+	SLI_QENTRY_ASYNC,
+	SLI_QENTRY_MQ,
+	SLI_QENTRY_RQ,
+	SLI_QENTRY_WQ,
+	SLI_QENTRY_WQ_RELEASE,
+	SLI_QENTRY_OPT_WRITE_CMD,
+	SLI_QENTRY_OPT_WRITE_DATA,
+	SLI_QENTRY_XABT,
+	SLI_QENTRY_MAX			/* must be last */
+};
+
+enum sli4_queue_flags {
+	/* CQ has MQ/Async completion */
+	SLI4_QUEUE_FLAG_MQ	= (1 << 0),
+
+	/* RQ for packet headers */
+	SLI4_QUEUE_FLAG_HDR	= (1 << 1),
+
+	/* RQ index increment by 8 */
+	SLI4_QUEUE_FLAG_RQBATCH	= (1 << 2),
+};
+
+struct sli4_queue {
+	/* Common to all queue types */
+	struct efc_dma	dma;
+	spinlock_t	lock;	/* protect the queue operations */
+	u32	index;		/* current host entry index */
+	u16	size;		/* entry size */
+	u16	length;		/* number of entries */
+	u16	n_posted;	/* number entries posted */
+	u16	id;		/* Port assigned xQ_ID */
+	u16	ulp;		/* ULP assigned to this queue */
+	void __iomem    *db_regaddr;	/* register address for the doorbell */
+	u8		type;		/* queue type ie EQ, CQ, ... */
+	u32	proc_limit;	/* limit CQE processed per iteration */
+	u32	posted_limit;	/* CQEs/EQEs to process before ringing doorbell */
+	u32	max_num_processed;
+	time_t		max_process_time;
+	u16	phase;		/* For if_type = 6, this value toggle
+				 * for each iteration of the queue,
+				 * a queue entry is valid when a cqe
+				 * valid bit matches this value
+				 */
+
+	union {
+		u32	r_idx;	/* "read" index (MQ only) */
+		struct {
+			u32	dword;
+		} flag;
+	} u;
+};
+
+/* Generic Command Request header */
+enum sli4_cmd_version {
+	CMD_V0,
+	CMD_V1,
+	CMD_V2,
+};
+
+struct sli4_rqst_hdr {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	__le32		timeout;
+	__le32		request_length;
+	__le32		dw3_version;
+};
+
+/* Generic Command Response header */
+struct sli4_rsp_hdr {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	u8		status;
+	u8		additional_status;
+	__le16		rsvd6;
+	__le32		response_length;
+	__le32		actual_response_length;
+};
+
+#define SLI4_QUEUE_RQ_BATCH	8
+
+#define CFG_RQST_CMDSZ(stype)	sizeof(struct sli4_rqst_##stype)
+
+#define CFG_RQST_PYLD_LEN(stype) \
+		cpu_to_le32(sizeof(struct sli4_rqst_##stype) - \
+			sizeof(struct sli4_rqst_hdr))
+
+#define CFG_RQST_PYLD_LEN_VAR(stype, varpyld) \
+		cpu_to_le32((sizeof(struct sli4_rqst_##stype) + \
+			varpyld) - sizeof(struct sli4_rqst_hdr))
+
+#define SZ_DMAADDR		sizeof(struct sli4_dmaaddr)
+
+#define SLI_CONFIG_PYLD_LENGTH(stype) \
+		max(sizeof(struct sli4_rqst_##stype), \
+		sizeof(struct sli4_rsp_##stype))
+
+enum sli4_create_cqv2_e {
+	/* DW5_flags values*/
+	CREATE_CQV2_CLSWM_MASK	= 0x00003000,
+	CREATE_CQV2_NODELAY	= 0x00004000,
+	CREATE_CQV2_AUTOVALID	= 0x00008000,
+	CREATE_CQV2_CQECNT_MASK	= 0x18000000,
+	CREATE_CQV2_VALID	= 0x20000000,
+	CREATE_CQV2_EVT		= 0x80000000,
+	/* DW6W1_flags values*/
+	CREATE_CQV2_ARM		= 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_v2 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		page_size;
+	u8		rsvd19;
+	__le32		dw5_flags;
+	__le16		eq_id;
+	__le16		dw6w1_arm;
+	__le16		cqe_count;
+	__le16		rsvd30;
+	__le32		rsvd32;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+enum sli4_create_cqset_e {
+	/* DW5_flags values*/
+	CREATE_CQSETV0_CLSWM_MASK  = 0x00003000,
+	CREATE_CQSETV0_NODELAY	   = 0x00004000,
+	CREATE_CQSETV0_AUTOVALID   = 0x00008000,
+	CREATE_CQSETV0_CQECNT_MASK = 0x18000000,
+	CREATE_CQSETV0_VALID	   = 0x20000000,
+	CREATE_CQSETV0_EVT	   = 0x80000000,
+	/* DW5W1_flags values */
+	CREATE_CQSETV0_CQE_COUNT   = 0x7fff,
+	CREATE_CQSETV0_ARM	   = 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_set_v0 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		page_size;
+	u8		rsvd19;
+	__le32		dw5_flags;
+	__le16		num_cq_req;
+	__le16		dw6w1_flags;
+	__le16		eq_id[16];
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+/* CQE count */
+enum sli4_cq_cnt {
+	CQ_CNT_256,
+	CQ_CNT_512,
+	CQ_CNT_1024,
+	CQ_CNT_LARGE,
+};
+
+#define CQ_CNT_SHIFT			27
+#define CQ_CNT_VAL(type)		(CQ_CNT_##type << CQ_CNT_SHIFT)
+
+#define SLI4_CQE_BYTES			(4 * sizeof(u32))
+
+#define SLI4_CMN_CREATE_CQ_V2_MAX_PAGES	8
+
+/* Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion */
+struct sli4_rsp_cmn_create_queue {
+	struct sli4_rsp_hdr	hdr;
+	__le16	q_id;
+	u8	rsvd18;
+	u8	ulp;
+	__le32	db_offset;
+	__le16	db_rs;
+	__le16	db_fmt;
+};
+
+struct sli4_rsp_cmn_create_queue_set {
+	struct sli4_rsp_hdr	hdr;
+	__le16	q_id;
+	__le16	num_q_allocated;
+};
+
+/* Common Destroy Queue */
+struct sli4_rqst_cmn_destroy_q {
+	struct sli4_rqst_hdr	hdr;
+	__le16	q_id;
+	__le16	rsvd;
+};
+
+struct sli4_rsp_cmn_destroy_q {
+	struct sli4_rsp_hdr	hdr;
+};
+
+/* Modify the delay multiplier for EQs */
+struct sli4_rqst_cmn_modify_eq_delay {
+	struct sli4_rqst_hdr	hdr;
+	__le32	num_eq;
+	struct {
+		__le32	eq_id;
+		__le32	phase;
+		__le32	delay_multiplier;
+	} eq_delay_record[8];
+};
+
+struct sli4_rsp_cmn_modify_eq_delay {
+	struct sli4_rsp_hdr	hdr;
+};
+
+enum sli4_create_cq_e {
+	/* DW5 */
+	CREATE_EQ_AUTOVALID		= (1 << 28),
+	CREATE_EQ_VALID			= (1 << 29),
+	CREATE_EQ_EQESZ			= (1 << 31),
+	/* DW6 */
+	CREATE_EQ_COUNT			= (7 << 26),
+	CREATE_EQ_ARM			= (1 << 31),
+	/* DW7 */
+	CREATE_EQ_DELAYMULTI_SHIFT	= 13,
+	CREATE_EQ_DELAYMULTI_MASK	= 0x007FE000,
+	CREATE_EQ_DELAYMULTI		= 0x00040000,
+};
+
+struct sli4_rqst_cmn_create_eq {
+	struct sli4_rqst_hdr	hdr;
+	__le16	num_pages;
+	__le16	rsvd18;
+	__le32	dw5_flags;
+	__le32	dw6_flags;
+	__le32	dw7_delaymulti;
+	__le32	rsvd32;
+	struct sli4_dmaaddr page_address[8];
+};
+
+struct sli4_rsp_cmn_create_eq {
+	struct sli4_rsp_cmn_create_queue q_rsp;
+};
+
+/* EQ count */
+enum sli4_eq_cnt {
+	EQ_CNT_256,
+	EQ_CNT_512,
+	EQ_CNT_1024,
+	EQ_CNT_2048,
+	EQ_CNT_4096 = 3,
+};
+
+#define EQ_CNT_SHIFT			26
+#define EQ_CNT_VAL(type)		(EQ_CNT_##type << EQ_CNT_SHIFT)
+
+#define SLI4_EQE_SIZE_4			0
+#define SLI4_EQE_SIZE_16		1
+
+/* Create a Mailbox Queue; accommodate v0 and v1 forms. */
+enum sli4_create_mq_flags {
+	/* DW6W1 */
+	CREATE_MQEXT_RINGSIZE		= 0xf,
+	CREATE_MQEXT_CQID_SHIFT		= 6,
+	CREATE_MQEXT_CQIDV0_MASK	= 0xffc0,
+	/* DW7 */
+	CREATE_MQEXT_VAL		= (1 << 31),
+	/* DW8 */
+	CREATE_MQEXT_ACQV		= (1 << 0),
+	CREATE_MQEXT_ASYNC_CQIDV0	= 0x7fe,
+};
+
+struct sli4_rqst_cmn_create_mq_ext {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	__le16		cq_id_v1;
+	__le32		async_event_bitmap;
+	__le16		async_cq_id_v1;
+	__le16		dw6w1_flags;
+	__le32		dw7_val;
+	__le32		dw8_flags;
+	__le32		rsvd36;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+struct sli4_rsp_cmn_create_mq_ext {
+	struct sli4_rsp_cmn_create_queue q_rsp;
+};
+
+enum sli4_mqe_size {
+	SLI4_MQE_SIZE_16 = 0x05,
+	SLI4_MQE_SIZE_32,
+	SLI4_MQE_SIZE_64,
+	SLI4_MQE_SIZE_128,
+};
+
+enum sli4_async_evt {
+	SLI4_ASYNC_EVT_LINK_STATE	= (1 << 1),
+	SLI4_ASYNC_EVT_FIP		= (1 << 2),
+	SLI4_ASYNC_EVT_GRP5		= (1 << 5),
+	SLI4_ASYNC_EVT_FC		= (1 << 16),
+	SLI4_ASYNC_EVT_SLI_PORT		= (1 << 17),
+};
+
+#define	SLI4_ASYNC_EVT_FC_ALL \
+		(SLI4_ASYNC_EVT_LINK_STATE	| \
+		 SLI4_ASYNC_EVT_FIP		| \
+		 SLI4_ASYNC_EVT_GRP5		| \
+		 SLI4_ASYNC_EVT_FC		| \
+		 SLI4_ASYNC_EVT_SLI_PORT)
+
+/* Create a Completion Queue. */
+struct sli4_rqst_cmn_create_cq_v0 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	__le16		rsvd18;
+	__le32		dw5_flags;
+	__le32		dw6_flags;
+	__le32		rsvd28;
+	__le32		rsvd32;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+enum sli4_create_rq_e {
+	SLI4_RQ_CREATE_DUA		= 0x1,
+	SLI4_RQ_CREATE_BQU		= 0x2,
+
+	SLI4_RQE_SIZE			= 8,
+	SLI4_RQE_SIZE_8			= 0x2,
+	SLI4_RQE_SIZE_16		= 0x3,
+	SLI4_RQE_SIZE_32		= 0x4,
+	SLI4_RQE_SIZE_64		= 0x5,
+	SLI4_RQE_SIZE_128		= 0x6,
+
+	SLI4_RQ_PAGE_SIZE_4096		= 0x1,
+	SLI4_RQ_PAGE_SIZE_8192		= 0x2,
+	SLI4_RQ_PAGE_SIZE_16384		= 0x4,
+	SLI4_RQ_PAGE_SIZE_32768		= 0x8,
+	SLI4_RQ_PAGE_SIZE_64536		= 0x10,
+
+	SLI4_RQ_CREATE_V0_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V0_MIN_BUF_SIZE	= 128,
+	SLI4_RQ_CREATE_V0_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		dua_bqu_byte;
+	u8		ulp;
+	__le16		rsvd16;
+	u8		rqe_count_byte;
+	u8		rsvd19;
+	__le32		rsvd20;
+	__le16		buffer_size;
+	__le16		cq_id;
+	__le32		rsvd28;
+	struct sli4_dmaaddr page_phys_addr[SLI4_RQ_CREATE_V0_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum sli4_create_rqv1_e {
+	SLI4_RQ_CREATE_V1_DNB		= 0x80,
+	SLI4_RQ_CREATE_V1_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V1_MIN_BUF_SIZE	= 64,
+	SLI4_RQ_CREATE_V1_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create_v1 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		rsvd14;
+	u8		dim_dfd_dnb;
+	u8		page_size;
+	u8		rqe_size_byte;
+	__le16		rqe_count;
+	__le32		rsvd20;
+	__le16		rsvd24;
+	__le16		cq_id;
+	__le32		buffer_size;
+	struct sli4_dmaaddr page_phys_addr[SLI4_RQ_CREATE_V1_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create_v1 {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+#define	SLI4_RQCREATEV2_DNB	0x80
+
+struct sli4_rqst_rq_create_v2 {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	u8		rq_count;
+	u8		dim_dfd_dnb;
+	u8		page_size;
+	u8		rqe_size_byte;
+	__le16		rqe_count;
+	__le16		hdr_buffer_size;
+	__le16		payload_buffer_size;
+	__le16		base_cq_id;
+	__le16		rsvd26;
+	__le32		rsvd42;
+	struct sli4_dmaaddr page_phys_addr[0];
+};
+
+struct sli4_rsp_rq_create_v2 {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+#define SLI4_CQE_CODE_OFFSET	14
+enum sli4_cqe_code {
+	SLI4_CQE_CODE_WORK_REQUEST_COMPLETION = 0x01,
+	SLI4_CQE_CODE_RELEASE_WQE,
+	SLI4_CQE_CODE_RSVD,
+	SLI4_CQE_CODE_RQ_ASYNC,
+	SLI4_CQE_CODE_XRI_ABORTED,
+	SLI4_CQE_CODE_RQ_COALESCING,
+	SLI4_CQE_CODE_RQ_CONSUMPTION,
+	SLI4_CQE_CODE_MEASUREMENT_REPORTING,
+	SLI4_CQE_CODE_RQ_ASYNC_V1,
+	SLI4_CQE_CODE_RQ_COALESCING_V1,
+	SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD,
+	SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA,
+};
+
+#define SLI4_WQ_CREATE_MAX_PAGES		8
+struct sli4_rqst_wq_create {
+	struct sli4_rqst_hdr	hdr;
+	__le16		num_pages;
+	__le16		cq_id;
+	u8		page_size;
+	u8		wqe_size_byte;
+	__le16		wqe_count;
+	__le32		rsvd;
+	struct	sli4_dmaaddr
+			page_phys_addr[SLI4_WQ_CREATE_MAX_PAGES];
+};
+
+struct sli4_rsp_wq_create {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum sli4_link_attention_flags {
+	LINK_ATTN_TYPE_LINK_UP		= 0x01,
+	LINK_ATTN_TYPE_LINK_DOWN	= 0x02,
+	LINK_ATTN_TYPE_NO_HARD_ALPA	= 0x03,
+
+	LINK_ATTN_P2P			= 0x01,
+	LINK_ATTN_FC_AL			= 0x02,
+	LINK_ATTN_INTERNAL_LOOPBACK	= 0x03,
+	LINK_ATTN_SERDES_LOOPBACK	= 0x04,
+
+	LINK_ATTN_1G			= 0x01,
+	LINK_ATTN_2G			= 0x02,
+	LINK_ATTN_4G			= 0x04,
+	LINK_ATTN_8G			= 0x08,
+	LINK_ATTN_10G			= 0x0a,
+	LINK_ATTN_16G			= 0x10,
+};
+
+struct sli4_link_attention {
+	u8		link_number;
+	u8		attn_type;
+	u8		topology;
+	u8		port_speed;
+	u8		port_fault;
+	u8		shared_link_status;
+	__le16		logical_link_speed;
+	__le32		event_tag;
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		flags;
+};
+
+enum sli4_link_event_type {
+	FC_EVENT_LINK_ATTENTION		= 0x01,
+	FC_EVENT_SHARED_LINK_ATTENTION	= 0x02,
+};
+
+enum sli4_wcqe_flags {
+	SLI4_WCQE_XB = 0x10,
+	SLI4_WCQE_QX = 0x80,
+};
+
+struct sli4_fc_wcqe {
+	u8		hw_status;
+	u8		status;
+	__le16		request_tag;
+	__le32		wqe_specific_1;
+	__le32		wqe_specific_2;
+	u8		rsvd12;
+	u8		qx_byte;
+	u8		code;
+	u8		flags;
+};
+
+/* FC WQ consumed CQ queue entry */
+struct sli4_fc_wqec {
+	__le32		rsvd0;
+	__le32		rsvd1;
+	__le16		wqe_index;
+	__le16		wq_id;
+	__le16		rsvd12;
+	u8		code;
+	u8		vld_byte;
+};
+
+/* FC Completion Status Codes. */
+enum sli4_wcqe_status {
+	SLI4_FC_WCQE_STATUS_SUCCESS,
+	SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,
+	SLI4_FC_WCQE_STATUS_REMOTE_STOP,
+	SLI4_FC_WCQE_STATUS_LOCAL_REJECT,
+	SLI4_FC_WCQE_STATUS_NPORT_RJT,
+	SLI4_FC_WCQE_STATUS_FABRIC_RJT,
+	SLI4_FC_WCQE_STATUS_NPORT_BSY,
+	SLI4_FC_WCQE_STATUS_FABRIC_BSY,
+	SLI4_FC_WCQE_STATUS_RSVD,
+	SLI4_FC_WCQE_STATUS_LS_RJT,
+	SLI4_FC_WCQE_STATUS_RX_BUF_OVERRUN,
+	SLI4_FC_WCQE_STATUS_CMD_REJECT,
+	SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,
+	SLI4_FC_WCQE_STATUS_RSVD1,
+	SLI4_FC_WCQE_STATUS_ELS_CMPLT_NO_AUTOREG,
+	SLI4_FC_WCQE_STATUS_RSVD2,
+	SLI4_FC_WCQE_STATUS_RQ_SUCCESS, //0x10
+	SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC,
+	SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,
+	SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,
+	SLI4_FC_WCQE_STATUS_DI_ERROR,
+	SLI4_FC_WCQE_STATUS_BA_RJT,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC,
+	SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,
+	SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,
+
+	/* driver generated status codes */
+	SLI4_FC_WCQE_STATUS_DISPATCH_ERROR	= 0xfd,
+	SLI4_FC_WCQE_STATUS_SHUTDOWN		= 0xfe,
+	SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT	= 0xff,
+};
+
+/* DI_ERROR Extended Status */
+enum sli4_fc_di_error_status {
+	SLI4_FC_DI_ERROR_GE			= (1 << 0),
+	SLI4_FC_DI_ERROR_AE			= (1 << 1),
+	SLI4_FC_DI_ERROR_RE			= (1 << 2),
+	SLI4_FC_DI_ERROR_TDPV			= (1 << 3),
+	SLI4_FC_DI_ERROR_UDB			= (1 << 4),
+	SLI4_FC_DI_ERROR_EDIR			= (1 << 5),
+};
+
+/* WQE DIF field contents */
+enum sli4_dif_fields {
+	SLI4_DIF_DISABLED,
+	SLI4_DIF_PASS_THROUGH,
+	SLI4_DIF_STRIP,
+	SLI4_DIF_INSERT,
+};
+
+/* Work Queue Entry (WQE) types */
+enum sli4_wqe_types {
+	SLI4_WQE_ABORT				= 0x0f,
+	SLI4_WQE_ELS_REQUEST64			= 0x8a,
+	SLI4_WQE_FCP_IBIDIR64			= 0xac,
+	SLI4_WQE_FCP_IREAD64			= 0x9a,
+	SLI4_WQE_FCP_IWRITE64			= 0x98,
+	SLI4_WQE_FCP_ICMND64			= 0x9c,
+	SLI4_WQE_FCP_TRECEIVE64			= 0xa1,
+	SLI4_WQE_FCP_CONT_TRECEIVE64		= 0xe5,
+	SLI4_WQE_FCP_TRSP64			= 0xa3,
+	SLI4_WQE_FCP_TSEND64			= 0x9f,
+	SLI4_WQE_GEN_REQUEST64			= 0xc2,
+	SLI4_WQE_SEND_FRAME			= 0xe1,
+	SLI4_WQE_XMIT_BCAST64			= 0x84,
+	SLI4_WQE_XMIT_BLS_RSP			= 0x97,
+	SLI4_WQE_ELS_RSP64			= 0x95,
+	SLI4_WQE_XMIT_SEQUENCE64		= 0x82,
+	SLI4_WQE_REQUEUE_XRI			= 0x93,
+};
+
+/* WQE command types */
+enum sli4_wqe_cmds {
+	SLI4_CMD_FCP_IREAD64_WQE		= 0x00,
+	SLI4_CMD_FCP_ICMND64_WQE		= 0x00,
+	SLI4_CMD_FCP_IWRITE64_WQE		= 0x01,
+	SLI4_CMD_FCP_TRECEIVE64_WQE		= 0x02,
+	SLI4_CMD_FCP_TRSP64_WQE			= 0x03,
+	SLI4_CMD_FCP_TSEND64_WQE		= 0x07,
+	SLI4_CMD_GEN_REQUEST64_WQE		= 0x08,
+	SLI4_CMD_XMIT_BCAST64_WQE		= 0x08,
+	SLI4_CMD_XMIT_BLS_RSP64_WQE		= 0x08,
+	SLI4_CMD_ABORT_WQE			= 0x08,
+	SLI4_CMD_XMIT_SEQUENCE64_WQE		= 0x08,
+	SLI4_CMD_REQUEUE_XRI_WQE		= 0x0A,
+	SLI4_CMD_SEND_FRAME_WQE			= 0x0a,
+};
+
+#define SLI4_WQE_SIZE				0x05
+#define SLI4_WQE_EXT_SIZE			0x06
+
+#define SLI4_WQE_BYTES				(16 * sizeof(u32))
+#define SLI4_WQE_EXT_BYTES			(32 * sizeof(u32))
+
+/* Mask for ccp (CS_CTL) */
+#define SLI4_MASK_CCP				0xfe
+
+/* Generic WQE */
+enum sli4_gen_wqe_flags {
+	SLI4_GEN_WQE_EBDECNT	= (0xf << 0),
+	SLI4_GEN_WQE_LEN_LOC	= (0x3 << 7),
+	SLI4_GEN_WQE_QOSD	= (1 << 9),
+	SLI4_GEN_WQE_XBL	= (1 << 11),
+	SLI4_GEN_WQE_HLM	= (1 << 12),
+	SLI4_GEN_WQE_IOD	= (1 << 13),
+	SLI4_GEN_WQE_DBDE	= (1 << 14),
+	SLI4_GEN_WQE_WQES	= (1 << 15),
+
+	SLI4_GEN_WQE_PRI	= (0x7),
+	SLI4_GEN_WQE_PV		= (1 << 3),
+	SLI4_GEN_WQE_EAT	= (1 << 4),
+	SLI4_GEN_WQE_XC		= (1 << 5),
+	SLI4_GEN_WQE_CCPE	= (1 << 7),
+
+	SLI4_GEN_WQE_CMDTYPE	= (0xf),
+	SLI4_GEN_WQE_WQEC	= (1 << 7),
+};
+
+struct sli4_generic_wqe {
+	__le32		cmd_spec0_5[6];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+/* WQE used to abort exchanges. */
+enum sli4_abort_wqe_flags {
+	SLI4_ABRT_WQE_IR	= 0x02,
+
+	SLI4_ABRT_WQE_EBDECNT	= (0xf << 0),
+	SLI4_ABRT_WQE_LEN_LOC	= (0x3 << 7),
+	SLI4_ABRT_WQE_QOSD	= (1 << 9),
+	SLI4_ABRT_WQE_XBL	= (1 << 11),
+	SLI4_ABRT_WQE_IOD	= (1 << 13),
+	SLI4_ABRT_WQE_DBDE	= (1 << 14),
+	SLI4_ABRT_WQE_WQES	= (1 << 15),
+
+	SLI4_ABRT_WQE_PRI	= (0x7),
+	SLI4_ABRT_WQE_PV	= (1 << 3),
+	SLI4_ABRT_WQE_EAT	= (1 << 4),
+	SLI4_ABRT_WQE_XC	= (1 << 5),
+	SLI4_ABRT_WQE_CCPE	= (1 << 7),
+
+	SLI4_ABRT_WQE_CMDTYPE	= (0xf),
+	SLI4_ABRT_WQE_WQEC	= (1 << 7),
+};
+
+struct sli4_abort_wqe {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		ext_t_tag;
+	u8		ia_ir_byte;
+	u8		criteria;
+	__le16		rsvd10;
+	__le32		ext_t_mask;
+	__le32		t_mask;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		t_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+enum sli4_abort_criteria {
+	SLI4_ABORT_CRITERIA_XRI_TAG = 0x01,
+	SLI4_ABORT_CRITERIA_ABORT_TAG,
+	SLI4_ABORT_CRITERIA_REQUEST_TAG,
+	SLI4_ABORT_CRITERIA_EXT_ABORT_TAG,
+};
+
+enum sli4_abort_type {
+	SLI_ABORT_XRI,
+	SLI_ABORT_ABORT_ID,
+	SLI_ABORT_REQUEST_ID,
+	SLI_ABORT_MAX,		/* must be last */
+};
+
+/* WQE used to create an ELS request. */
+enum sli4_els_req_wqe_flags {
+	SLI4_REQ_WQE_QOSD		= 0x2,
+	SLI4_REQ_WQE_DBDE		= 0x40,
+	SLI4_REQ_WQE_XBL		= 0x8,
+	SLI4_REQ_WQE_XC			= 0x20,
+	SLI4_REQ_WQE_IOD		= 0x20,
+	SLI4_REQ_WQE_HLM		= 0x10,
+	SLI4_REQ_WQE_CCPE		= 0x80,
+	SLI4_REQ_WQE_EAT		= 0x10,
+	SLI4_REQ_WQE_WQES		= 0x80,
+	SLI4_REQ_WQE_PU_SHFT		= 4,
+	SLI4_REQ_WQE_CT_SHFT		= 2,
+	SLI4_REQ_WQE_CT			= 0xc,
+	SLI4_REQ_WQE_ELSID_SHFT		= 4,
+	SLI4_REQ_WQE_SP_SHFT		= 24,
+	SLI4_REQ_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_REQ_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_els_request64_wqe {
+	struct sli4_bde	els_request_payload;
+	__le32		els_request_payload_length;
+	__le32		sid_sp_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		temporary_rpi;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_elsid_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	struct sli4_bde	els_response_payload_bde;
+	__le32		max_response_payload_length;
+};
+
+/* WQE used to create an FCP initiator no data command. */
+enum sli4_icmd_wqe_flags {
+	SLI4_ICMD_WQE_DBDE		= 0x40,
+	SLI4_ICMD_WQE_XBL		= 0x8,
+	SLI4_ICMD_WQE_XC		= 0x20,
+	SLI4_ICMD_WQE_IOD		= 0x20,
+	SLI4_ICMD_WQE_HLM		= 0x10,
+	SLI4_ICMD_WQE_CCPE		= 0x80,
+	SLI4_ICMD_WQE_EAT		= 0x10,
+	SLI4_ICMD_WQE_APPID		= 0x10,
+	SLI4_ICMD_WQE_WQES		= 0x80,
+	SLI4_ICMD_WQE_PU_SHFT		= 4,
+	SLI4_ICMD_WQE_CT_SHFT		= 2,
+	SLI4_ICMD_WQE_BS_SHFT		= 4,
+	SLI4_ICMD_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_ICMD_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_icmnd64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le32		rsvd12;
+	__le32		remote_n_port_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create an FCP initiator read. */
+enum sli4_ir_wqe_flags {
+	SLI4_IR_WQE_DBDE		= 0x40,
+	SLI4_IR_WQE_XBL			= 0x8,
+	SLI4_IR_WQE_XC			= 0x20,
+	SLI4_IR_WQE_IOD			= 0x20,
+	SLI4_IR_WQE_HLM			= 0x10,
+	SLI4_IR_WQE_CCPE		= 0x80,
+	SLI4_IR_WQE_EAT			= 0x10,
+	SLI4_IR_WQE_APPID		= 0x10,
+	SLI4_IR_WQE_WQES		= 0x80,
+	SLI4_IR_WQE_PU_SHFT		= 4,
+	SLI4_IR_WQE_CT_SHFT		= 2,
+	SLI4_IR_WQE_BS_SHFT		= 4,
+	SLI4_IR_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_IR_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_iread64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+
+	__le32		total_transfer_length;
+
+	__le32		remote_n_port_id_dword;
+
+	__le16		xri_tag;
+	__le16		context_tag;
+
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+
+	__le32		abort_tag;
+
+	__le16		request_tag;
+	__le16		rsvd34;
+
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+
+	__le32		rsvd44;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create an FCP initiator write. */
+enum sli4_iwr_wqe_flags {
+	SLI4_IWR_WQE_DBDE		= 0x40,
+	SLI4_IWR_WQE_XBL		= 0x8,
+	SLI4_IWR_WQE_XC			= 0x20,
+	SLI4_IWR_WQE_IOD		= 0x20,
+	SLI4_IWR_WQE_HLM		= 0x10,
+	SLI4_IWR_WQE_DNRX		= 0x10,
+	SLI4_IWR_WQE_CCPE		= 0x80,
+	SLI4_IWR_WQE_EAT		= 0x10,
+	SLI4_IWR_WQE_APPID		= 0x10,
+	SLI4_IWR_WQE_WQES		= 0x80,
+	SLI4_IWR_WQE_PU_SHFT		= 4,
+	SLI4_IWR_WQE_CT_SHFT		= 2,
+	SLI4_IWR_WQE_BS_SHFT		= 4,
+	SLI4_IWR_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_IWR_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_iwrite64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le16		total_transfer_length;
+	__le16		initial_transfer_length;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	struct sli4_bde	first_data_bde;
+};
+
+struct sli4_fcp_128byte_wqe {
+	u32 dw[32];
+};
+
+/* WQE used to create an FCP target receive */
+enum sli4_trcv_wqe_flags {
+	SLI4_TRCV_WQE_DBDE		= 0x40,
+	SLI4_TRCV_WQE_XBL		= 0x8,
+	SLI4_TRCV_WQE_AR		= 0x8,
+	SLI4_TRCV_WQE_XC		= 0x20,
+	SLI4_TRCV_WQE_IOD		= 0x20,
+	SLI4_TRCV_WQE_HLM		= 0x10,
+	SLI4_TRCV_WQE_DNRX		= 0x10,
+	SLI4_TRCV_WQE_CCPE		= 0x80,
+	SLI4_TRCV_WQE_EAT		= 0x10,
+	SLI4_TRCV_WQE_APPID		= 0x10,
+	SLI4_TRCV_WQE_WQES		= 0x80,
+	SLI4_TRCV_WQE_PU_SHFT		= 4,
+	SLI4_TRCV_WQE_CT_SHFT		= 2,
+	SLI4_TRCV_WQE_BS_SHFT		= 4,
+	SLI4_TRCV_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_treceive64_wqe {
+	struct sli4_bde	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	union {
+		__le16		sec_xri_tag;
+		__le16		rsvd;
+		__le32		dword;
+	} dword5;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_ar_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fcp_data_receive_length;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create an FCP target response */
+enum sli4_trsp_wqe_flags {
+	SLI4_TRSP_WQE_AG	= 0x8,
+	SLI4_TRSP_WQE_DBDE	= 0x40,
+	SLI4_TRSP_WQE_XBL	= 0x8,
+	SLI4_TRSP_WQE_XC	= 0x20,
+	SLI4_TRSP_WQE_HLM	= 0x10,
+	SLI4_TRSP_WQE_DNRX	= 0x10,
+	SLI4_TRSP_WQE_CCPE	= 0x80,
+	SLI4_TRSP_WQE_EAT	= 0x10,
+	SLI4_TRSP_WQE_APPID	= 0x10,
+	SLI4_TRSP_WQE_WQES	= 0x80,
+};
+
+struct sli4_fcp_trsp64_wqe {
+	struct sli4_bde	bde;
+	__le32		fcp_response_length;
+	__le32		rsvd12;
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_dnrx_byte;
+	u8		command;
+	u8		class_ag_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create an FCP target send (DATA IN). */
+enum sli4_tsend_wqe_flags {
+	SLI4_TSEND_WQE_XBL	= 0x8,
+	SLI4_TSEND_WQE_DBDE	= 0x40,
+	SLI4_TSEND_WQE_IOD	= 0x20,
+	SLI4_TSEND_WQE_QOSD	= 0x2,
+	SLI4_TSEND_WQE_HLM	= 0x10,
+	SLI4_TSEND_WQE_PU_SHFT	= 4,
+	SLI4_TSEND_WQE_AR	= 0x8,
+	SLI4_TSEND_CT_SHFT	= 2,
+	SLI4_TSEND_BS_SHFT	= 4,
+	SLI4_TSEND_LEN_LOC_BIT2 = 0x1,
+	SLI4_TSEND_CCPE		= 0x80,
+	SLI4_TSEND_APPID_VALID	= 0x20,
+	SLI4_TSEND_WQES		= 0x80,
+	SLI4_TSEND_XC		= 0x20,
+	SLI4_TSEND_EAT		= 0x10,
+};
+
+struct sli4_fcp_tsend64_wqe {
+	struct sli4_bde	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_byte;
+	u8		command;
+	u8		class_pu_ar_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		dw10byte0;
+	u8		ll_qd_xbl_hlm_iod_dbde;
+	u8		dw10byte2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		fcp_data_transmit_length;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create a general request. */
+enum sli4_gen_req_wqe_flags {
+	SLI4_GEN_REQ64_WQE_XBL	= 0x8,
+	SLI4_GEN_REQ64_WQE_DBDE	= 0x40,
+	SLI4_GEN_REQ64_WQE_IOD	= 0x20,
+	SLI4_GEN_REQ64_WQE_QOSD	= 0x2,
+	SLI4_GEN_REQ64_WQE_HLM	= 0x10,
+	SLI4_GEN_REQ64_CT_SHFT	= 2,
+};
+
+struct sli4_gen_request64_wqe {
+	struct sli4_bde	bde;
+	__le32		request_payload_length;
+	__le32		relative_offset;
+	u8		rsvd17;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		max_response_payload_length;
+};
+
+/* WQE used to create a send frame request */
+enum sli4_sf_wqe_flags {
+	SLI4_SF_WQE_DBDE	= 0x40,
+	SLI4_SF_PU		= 0x30,
+	SLI4_SF_CT		= 0xc,
+	SLI4_SF_QOSD		= 0x2,
+	SLI4_SF_LEN_LOC_BIT1	= 0x80,
+	SLI4_SF_LEN_LOC_BIT2	= 0x1,
+	SLI4_SF_XC		= 0x20,
+	SLI4_SF_XBL		= 0x8,
+};
+
+struct sli4_send_frame_wqe {
+	struct sli4_bde	bde;
+	__le32		frame_length;
+	__le32		fc_header_0_1[2];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		dw7flags0;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	u8		eof;
+	u8		sof;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fc_header_2_5[4];
+};
+
+/* WQE used to create a transmit sequence */
+enum sli4_seq_wqe_flags {
+	SLI4_SEQ_WQE_DBDE		= 0x4000,
+	SLI4_SEQ_WQE_XBL		= 0x800,
+	SLI4_SEQ_WQE_SI			= 0x4,
+	SLI4_SEQ_WQE_FT			= 0x8,
+	SLI4_SEQ_WQE_XO			= 0x40,
+	SLI4_SEQ_WQE_LS			= 0x80,
+	SLI4_SEQ_WQE_DIF		= 0x3,
+	SLI4_SEQ_WQE_BS			= 0x70,
+	SLI4_SEQ_WQE_PU			= 0x30,
+	SLI4_SEQ_WQE_HLM		= 0x1000,
+	SLI4_SEQ_WQE_IOD_SHIFT		= 13,
+	SLI4_SEQ_WQE_CT_SHIFT		= 2,
+	SLI4_SEQ_WQE_LEN_LOC_SHIFT	= 7,
+};
+
+struct sli4_xmit_sequence64_wqe {
+	struct sli4_bde	bde;
+	__le32		remote_n_port_id_dword;
+	__le32		relative_offset;
+	u8		dw5flags0;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw7flags0;
+	u8		command;
+	u8		dw7flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	__le16		dw10w0;
+	u8		dw10flags0;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		sequence_payload_len;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/*
+ * WQE used to unblock the specified XRI and to release
+ * it to the SLI Port's free pool.
+ */
+enum sli4_requeue_wqe_flags {
+	SLI4_REQU_XRI_WQE_XC	= 0x20,
+	SLI4_REQU_XRI_WQE_QOSD	= 0x2,
+};
+
+struct sli4_requeue_xri_wqe {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		rsvd8;
+	__le32		rsvd12;
+	__le32		rsvd16;
+	__le32		rsvd20;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		rsvd32;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		flags0;
+	__le16		flags1;
+	__le16		flags2;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd42;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create a BLS response */
+enum sli4_bls_rsp_wqe_flags {
+	SLI4_BLS_RSP_RID		= 0xffffff,
+	SLI4_BLS_RSP_WQE_AR		= 0x40000000,
+	SLI4_BLS_RSP_WQE_CT_SHFT	= 2,
+	SLI4_BLS_RSP_WQE_QOSD		= 0x2,
+	SLI4_BLS_RSP_WQE_HLM		= 0x10,
+};
+
+struct sli4_xmit_bls_rsp_wqe {
+	__le32		payload_word0;
+	__le16		rx_id;
+	__le16		ox_id;
+	__le16		high_seq_cnt;
+	__le16		low_seq_cnt;
+	__le32		rsvd12;
+	__le32		local_n_port_id_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw8flags0;
+	u8		command;
+	u8		dw8flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd38;
+	u8		dw11flags0;
+	u8		dw11flags1;
+	u8		dw11flags2;
+	u8		ccp;
+	u8		dw12flags0;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	u8		rsvd50;
+	u8		rsvd51;
+	__le32		rsvd52;
+	__le32		rsvd56;
+	__le32		rsvd60;
+};
+
+enum sli_bls_type {
+	SLI4_SLI_BLS_ACC,
+	SLI4_SLI_BLS_RJT,
+	SLI4_SLI_BLS_MAX
+};
+
+struct sli_bls_payload {
+	enum sli_bls_type	type;
+	__le16			ox_id;
+	__le16			rx_id;
+	union {
+		struct {
+			u8	seq_id_validity;
+			u8	seq_id_last;
+			u8	rsvd2;
+			u8	rsvd3;
+			u16	ox_id;
+			u16	rx_id;
+			__le16	low_seq_cnt;
+			__le16	high_seq_cnt;
+		} acc;
+		struct {
+			u8	vendor_unique;
+			u8	reason_explanation;
+			u8	reason_code;
+			u8	rsvd3;
+		} rjt;
+	} u;
+};
+
+/* WQE used to create an ELS response */
+
+enum sli4_els_rsp_flags {
+	SLI4_ELS_SID		= 0xffffff,
+	SLI4_ELS_RID		= 0xffffff,
+	SLI4_ELS_DBDE		= 0x40,
+	SLI4_ELS_XBL		= 0x8,
+	SLI4_ELS_IOD		= 0x20,
+	SLI4_ELS_QOSD		= 0x2,
+	SLI4_ELS_XC		= 0x20,
+	SLI4_ELS_CT_OFFSET	= 0x2,
+	SLI4_ELS_SP		= 0x1000000,
+	SLI4_ELS_HLM		= 0x10,
+};
+
+struct sli4_xmit_els_rsp64_wqe {
+	struct sli4_bde	els_response_payload;
+	__le32		els_response_payload_length;
+	__le32		sid_dw;
+	__le32		rid_dw;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		ox_id;
+	u8		flags1;
+	u8		flags2;
+	u8		flags3;
+	u8		flags4;
+	u8		cmd_type_wqec;
+	u8		rsvd34;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	__le16		rsvd38;
+	__le32		rsvd40;
+	__le32		rsvd44;
+	__le32		rsvd48;
+};
+
+/* Parameters used to populate WQEs */
+struct sli_bls_params {
+	u32		s_id;
+	u16		ox_id;
+	u16		rx_id;
+	u8		payload[12];
+};
+
+struct sli_els_params {
+	u32		s_id;
+	u16		ox_id;
+	u8		timeout;
+};
+
+struct sli_ct_params {
+	u8		r_ctl;
+	u8		type;
+	u8		df_ctl;
+	u8		timeout;
+	u16		ox_id;
+};
+
+struct sli_fcp_tgt_params {
+	u32		offset;
+	u16		ox_id;
+	u16		flags;
+	u8		cs_ctl;
+	u8		timeout;
+	u32		app_id;
+};
+
+/* Local Reject Reason Codes */
+enum sli4_fc_local_rej_codes {
+	SLI4_FC_LOCAL_REJECT_UNKNOWN,
+	SLI4_FC_LOCAL_REJECT_MISSING_CONTINUE,
+	SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT,
+	SLI4_FC_LOCAL_REJECT_INTERNAL_ERROR,
+	SLI4_FC_LOCAL_REJECT_INVALID_RPI,
+	SLI4_FC_LOCAL_REJECT_NO_XRI,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_COMMAND,
+	SLI4_FC_LOCAL_REJECT_XCHG_DROPPED,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_FIELD,
+	SLI4_FC_LOCAL_REJECT_RPI_SUSPENDED,
+	SLI4_FC_LOCAL_REJECT_RSVD,
+	SLI4_FC_LOCAL_REJECT_RSVD1,
+	SLI4_FC_LOCAL_REJECT_NO_ABORT_MATCH,
+	SLI4_FC_LOCAL_REJECT_TX_DMA_FAILED,
+	SLI4_FC_LOCAL_REJECT_RX_DMA_FAILED,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_FRAME,
+	SLI4_FC_LOCAL_REJECT_RSVD2,
+	SLI4_FC_LOCAL_REJECT_NO_RESOURCES,		/* 0x11 */
+	SLI4_FC_LOCAL_REJECT_FCP_CONF_FAILURE,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_LENGTH,
+	SLI4_FC_LOCAL_REJECT_UNSUPPORTED_FEATURE,
+	SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS,
+	SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED,
+	SLI4_FC_LOCAL_REJECT_RCV_BUFFER_TIMEOUT,
+	SLI4_FC_LOCAL_REJECT_LOOP_OPEN_FAILURE,
+	SLI4_FC_LOCAL_REJECT_RSVD3,
+	SLI4_FC_LOCAL_REJECT_LINK_DOWN,
+	SLI4_FC_LOCAL_REJECT_CORRUPTED_DATA,
+	SLI4_FC_LOCAL_REJECT_CORRUPTED_RPI,
+	SLI4_FC_LOCAL_REJECT_OUTOFORDER_DATA,
+	SLI4_FC_LOCAL_REJECT_OUTOFORDER_ACK,
+	SLI4_FC_LOCAL_REJECT_DUP_FRAME,
+	SLI4_FC_LOCAL_REJECT_LINK_CONTROL_FRAME,	/* 0x20 */
+	SLI4_FC_LOCAL_REJECT_BAD_HOST_ADDRESS,
+	SLI4_FC_LOCAL_REJECT_RSVD4,
+	SLI4_FC_LOCAL_REJECT_MISSING_HDR_BUFFER,
+	SLI4_FC_LOCAL_REJECT_MSEQ_CHAIN_CORRUPTED,
+	SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED,
+	SLI4_FC_LOCAL_REJECT_BUFFER_SHORTAGE	= 0x28,
+	SLI4_FC_LOCAL_REJECT_RCV_XRIBUF_WAITING,
+	SLI4_FC_LOCAL_REJECT_INVALID_VPI	= 0x2e,
+	SLI4_FC_LOCAL_REJECT_NO_FPORT_DETECTED,
+	SLI4_FC_LOCAL_REJECT_MISSING_XRIBUF,
+	SLI4_FC_LOCAL_REJECT_RSVD5,
+	SLI4_FC_LOCAL_REJECT_INVALID_XRI,
+	SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET	= 0x40,
+	SLI4_FC_LOCAL_REJECT_MISSING_RELOFFSET,
+	SLI4_FC_LOCAL_REJECT_INSUFF_BUFFERSPACE,
+	SLI4_FC_LOCAL_REJECT_MISSING_SI,
+	SLI4_FC_LOCAL_REJECT_MISSING_ES,
+	SLI4_FC_LOCAL_REJECT_INCOMPLETE_XFER,
+	SLI4_FC_LOCAL_REJECT_SLER_FAILURE,
+	SLI4_FC_LOCAL_REJECT_SLER_CMD_RCV_FAILURE,
+	SLI4_FC_LOCAL_REJECT_SLER_REC_RJT_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_REC_SRR_RETRY_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_SRR_RJT_ERR,
+	SLI4_FC_LOCAL_REJECT_RSVD6,
+	SLI4_FC_LOCAL_REJECT_SLER_RRQ_RJT_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_RRQ_RETRY_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_ABTS_ERR,
+};
+
+enum sli4_async_rcqe_flags {
+	SLI4_RACQE_RQ_EL_INDX	= 0xfff,
+	SLI4_RACQE_FCFI		= 0x3f,
+	SLI4_RACQE_HDPL		= 0x3f,
+	SLI4_RACQE_RQ_ID	= 0xffc0,
+};
+
+struct sli4_fc_async_rcqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		fcfi_rq_id_word;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+struct sli4_fc_async_rcqe_v1 {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	u8		fcfi_byte;
+	u8		rsvd5;
+	__le16		rsvd6;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+enum sli4_fc_async_rq_status {
+	SLI4_FC_ASYNC_RQ_SUCCESS = 0x10,
+	SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED,
+	SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED,
+	SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC,
+	SLI4_FC_ASYNC_RQ_DMA_FAILURE,
+};
+
+#define SLI4_RCQE_RQ_EL_INDX	0xfff
+
+struct sli4_fc_coalescing_rcqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		rq_id;
+	__le16		sequence_reporting_placement_length;
+	__le16		rsvd14;
+	u8		code;
+	u8		vld_byte;
+};
+
+#define SLI4_FC_COALESCE_RQ_SUCCESS		0x10
+#define SLI4_FC_COALESCE_RQ_INSUFF_XRI_NEEDED	0x18
+
+/*
+ * @SLI4_OCQE_RQ_EL_INDX: bits 0 to 15 in word1
+ * @SLI4_OCQE_FCFI: bits 0 to 6 in dw1
+ * @SLI4_OCQE_OOX: bit 15 in dw1
+ * @SLI4_OCQE_AGXR: bit 16 in dw1
+ */
+enum sli4_optimized_write_cmd_cqe_flags {
+	SLI4_OCQE_RQ_EL_INDX = 0x7f,
+	SLI4_OCQE_FCFI = 0x3f,
+	SLI4_OCQE_OOX = (1 << 6),
+	SLI4_OCQE_AGXR = (1 << 7),
+	SLI4_OCQE_HDPL = 0x3f,
+};
+
+struct sli4_fc_optimized_write_cmd_cqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		w1;
+	u8		flags0;
+	u8		flags1;
+	__le16		xri;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	__le16		rpi;
+	u8		code;
+	u8		hdpl_vld;
+};
+
+#define	SLI4_OCQE_XB		0x10
+
+struct sli4_fc_optimized_write_data_cqe {
+	u8		hw_status;
+	u8		status;
+	__le16		xri;
+	__le32		total_data_placed;
+	__le32		extended_status;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+struct sli4_fc_xri_aborted_cqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rsvd2;
+	__le32		extended_status;
+	__le16		xri;
+	__le16		remote_xid;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+enum sli4_generic_ctx {
+	SLI4_GENERIC_CONTEXT_RPI,
+	SLI4_GENERIC_CONTEXT_VPI,
+	SLI4_GENERIC_CONTEXT_VFI,
+	SLI4_GENERIC_CONTEXT_FCFI,
+};
+
+#define SLI4_GENERIC_CLASS_CLASS_2		0x1
+#define SLI4_GENERIC_CLASS_CLASS_3		0x2
+
+#define SLI4_ELS_REQUEST64_DIR_WRITE		0x0
+#define SLI4_ELS_REQUEST64_DIR_READ		0x1
+
+enum sli4_els_request {
+	SLI4_ELS_REQUEST64_OTHER,
+	SLI4_ELS_REQUEST64_LOGO,
+	SLI4_ELS_REQUEST64_FDISC,
+	SLI4_ELS_REQUEST64_FLOGIN,
+	SLI4_ELS_REQUEST64_PLOGI,
+};
+
+enum sli4_els_cmd_type {
+	SLI4_ELS_REQUEST64_CMD_GEN		= 0x08,
+	SLI4_ELS_REQUEST64_CMD_NON_FABRIC	= 0x0c,
+	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
+};
+
 #endif /* !_SLI4_H */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
  2020-04-12  3:32 ` [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-14 19:01   ` Daniel Wagner
  2020-04-15 12:22   ` Hannes Reinecke
  2020-04-12  3:32 ` [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines James Smart
                   ` (27 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds definitions for SLI-4 mailbox commands
and responses.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Changed anonymous enums to named.
  Split giant enums into multiple enums.
  Converted single-value enums to #defines.
  Added Link Speed defines to accommodate up to 128G.
  SLI defines spell out _MASK values directly.
  Changed multiple defines to named enums for consistency.
---
 drivers/scsi/elx/libefc_sli/sli4.h | 1677 ++++++++++++++++++++++++++++++++++++
 1 file changed, 1677 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 07eef8df9690..b360d809f144 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -2010,4 +2010,1681 @@ enum sli4_els_cmd_type {
 	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
 };
 
+#define SLI_PAGE_SIZE				(1 << 12)	/* 4096 */
+#define SLI_SUB_PAGE_MASK			(SLI_PAGE_SIZE - 1)
+#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)
+
+#define SLI4_BMBX_TIMEOUT_MSEC			30000
+#define SLI4_FW_READY_TIMEOUT_MSEC		30000
+
+#define SLI4_BMBX_DELAY_US			1000	/* 1 ms */
+#define SLI4_INIT_PORT_DELAY_US			10000	/* 10 ms */
+
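+/*
+ * Return the number of pages of 'page_size' bytes needed to hold 'bytes',
+ * rounded up. page_size is expected to be a power of two (its lowest set
+ * bit is used as the shift); returns 0 if page_size is 0.
+ */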
+static inline u32
+sli_page_count(size_t bytes, u32 page_size)
+{
+	if (!page_size)
+		return 0;
+
+	return (bytes + (page_size - 1)) >> __ffs(page_size);
+}
+
+/*************************************************************************
+ * SLI-4 mailbox command formats and definitions
+ */
+
+struct sli4_mbox_command_header {
+	u8	resvd0;
+	u8	command;
+	__le16	status;	/* Port writes to indicate success/fail */
+};
+
+enum sli4_mbx_cmd_value {
+	MBX_CMD_CONFIG_LINK	= 0x07,
+	MBX_CMD_DUMP		= 0x17,
+	MBX_CMD_DOWN_LINK	= 0x06,
+	MBX_CMD_INIT_LINK	= 0x05,
+	MBX_CMD_INIT_VFI	= 0xa3,
+	MBX_CMD_INIT_VPI	= 0xa4,
+	MBX_CMD_POST_XRI	= 0xa7,
+	MBX_CMD_RELEASE_XRI	= 0xac,
+	MBX_CMD_READ_CONFIG	= 0x0b,
+	MBX_CMD_READ_STATUS	= 0x0e,
+	MBX_CMD_READ_NVPARMS	= 0x02,
+	MBX_CMD_READ_REV	= 0x11,
+	MBX_CMD_READ_LNK_STAT	= 0x12,
+	MBX_CMD_READ_SPARM64	= 0x8d,
+	MBX_CMD_READ_TOPOLOGY	= 0x95,
+	MBX_CMD_REG_FCFI	= 0xa0,
+	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
+	MBX_CMD_REG_RPI		= 0x93,
+	MBX_CMD_REG_RX_RQ	= 0xa6,
+	MBX_CMD_REG_VFI		= 0x9f,
+	MBX_CMD_REG_VPI		= 0x96,
+	MBX_CMD_RQST_FEATURES	= 0x9d,
+	MBX_CMD_SLI_CONFIG	= 0x9b,
+	MBX_CMD_UNREG_FCFI	= 0xa2,
+	MBX_CMD_UNREG_RPI	= 0x14,
+	MBX_CMD_UNREG_VFI	= 0xa1,
+	MBX_CMD_UNREG_VPI	= 0x97,
+	MBX_CMD_WRITE_NVPARMS	= 0x03,
+	MBX_CMD_CFG_AUTO_XFER_RDY = 0xad,
+};
+
+enum sli4_mbx_status {
+	MBX_STATUS_SUCCESS	= 0x0000,
+	MBX_STATUS_FAILURE	= 0x0001,
+	MBX_STATUS_RPI_NOT_REG	= 0x1400,
+};
+
+/* CONFIG_LINK */
+enum sli4_cmd_config_link_flags {
+	SLI4_CFG_LINK_BBSCN = 0xf00,
+	SLI4_CFG_LINK_CSCN  = 0x1000,
+};
+
+struct sli4_cmd_config_link {
+	struct sli4_mbox_command_header	hdr;
+	u8		maxbbc;
+	u8		rsvd5;
+	u8		rsvd6;
+	u8		rsvd7;
+	u8		alpa;
+	__le16		n_port_id;
+	u8		rsvd11;
+	__le32		rsvd12;
+	__le32		e_d_tov;
+	__le32		lp_tov;
+	__le32		r_a_tov;
+	__le32		r_t_tov;
+	__le32		al_tov;
+	__le32		rsvd36;
+	__le32		bbscn_dword;
+};
+
+#define SLI4_DUMP4_TYPE		0xf
+
+#define SLI4_WKI_TAG_SAT_TEM	0x1040
+
+struct sli4_cmd_dump4 {
+	struct sli4_mbox_command_header	hdr;
+	__le32		type_dword;
+	__le16		wki_selection;
+	__le16		rsvd10;
+	__le32		rsvd12;
+	__le32		returned_byte_cnt;
+	__le32		resp_data[59];
+};
+
+/* INIT_LINK - initialize the link for a FC port */
+enum sli4_init_link_flags {
+	SLI4_INIT_LINK_F_LOOPBACK	= (1 << 0),
+
+	SLI4_INIT_LINK_F_P2P_ONLY	= (1 << 1),
+	SLI4_INIT_LINK_F_FCAL_ONLY	= (2 << 1),
+	SLI4_INIT_LINK_F_FCAL_FAIL_OVER	= (0 << 1),
+	SLI4_INIT_LINK_F_P2P_FAIL_OVER	= (1 << 1),
+
+	SLI4_INIT_LINK_F_UNFAIR		= (1 << 6),
+	SLI4_INIT_LINK_F_NO_LIRP	= (1 << 7),
+	SLI4_INIT_LINK_F_LOOP_VALID_CHK	= (1 << 8),
+	SLI4_INIT_LINK_F_NO_LISA	= (1 << 9),
+	SLI4_INIT_LINK_F_FAIL_OVER	= (1 << 10),
+	SLI4_INIT_LINK_F_FIXED_SPEED	= (1 << 11),
+	SLI4_INIT_LINK_F_PICK_HI_ALPA	= (1 << 15),
+};
+
+enum sli4_fc_link_speed {
+	FC_LINK_SPEED_1G = 1,
+	FC_LINK_SPEED_2G,
+	FC_LINK_SPEED_AUTO_1_2,
+	FC_LINK_SPEED_4G,
+	FC_LINK_SPEED_AUTO_4_1,
+	FC_LINK_SPEED_AUTO_4_2,
+	FC_LINK_SPEED_AUTO_4_2_1,
+	FC_LINK_SPEED_8G,
+	FC_LINK_SPEED_AUTO_8_1,
+	FC_LINK_SPEED_AUTO_8_2,
+	FC_LINK_SPEED_AUTO_8_2_1,
+	FC_LINK_SPEED_AUTO_8_4,
+	FC_LINK_SPEED_AUTO_8_4_1,
+	FC_LINK_SPEED_AUTO_8_4_2,
+	FC_LINK_SPEED_10G,
+	FC_LINK_SPEED_16G,
+	FC_LINK_SPEED_AUTO_16_8_4,
+	FC_LINK_SPEED_AUTO_16_8,
+	FC_LINK_SPEED_32G,
+	FC_LINK_SPEED_AUTO_32_16_8,
+	FC_LINK_SPEED_AUTO_32_16,
+	FC_LINK_SPEED_64G,
+	FC_LINK_SPEED_AUTO_64_32_16,
+	FC_LINK_SPEED_AUTO_64_32,
+	FC_LINK_SPEED_128G,
+	FC_LINK_SPEED_AUTO_128_64_32,
+	FC_LINK_SPEED_AUTO_128_64,
+};
+
+struct sli4_cmd_init_link {
+	struct sli4_mbox_command_header       hdr;
+	__le32	sel_reset_al_pa_dword;
+	__le32	flags0;
+	__le32	link_speed_sel_code;
+};
+
+/* INIT_VFI - initialize the VFI resource */
+enum sli4_init_vfi_flags {
+	SLI4_INIT_VFI_FLAG_VP	= 0x1000,
+	SLI4_INIT_VFI_FLAG_VF	= 0x2000,
+	SLI4_INIT_VFI_FLAG_VT	= 0x4000,
+	SLI4_INIT_VFI_FLAG_VR	= 0x8000,
+
+	SLI4_INIT_VFI_VFID	= 0x1fff,
+	SLI4_INIT_VFI_PRI	= 0xe000,
+
+	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,
+};
+
+struct sli4_cmd_init_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		vfi;
+	__le16		flags0_word;
+	__le16		fcfi;
+	__le16		vpi;
+	__le32		vf_id_pri_dword;
+	__le32		hop_cnt_dword;
+};
+
+/* INIT_VPI - initialize the VPI resource */
+struct sli4_cmd_init_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/* POST_XRI - post XRI resources to the SLI Port */
+enum sli4_post_xri_flags {
+	SLI4_POST_XRI_COUNT	= 0xfff,
+	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
+	SLI4_POST_XRI_FLAG_DL	= 0x2000,
+	SLI4_POST_XRI_FLAG_DI	= 0x4000,
+	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
+};
+
+struct sli4_cmd_post_xri {
+	struct sli4_mbox_command_header	hdr;
+	__le16		xri_base;
+	__le16		xri_count_flags;
+};
+
+/* RELEASE_XRI - Release XRI resources from the SLI Port */
+enum sli4_release_xri_flags {
+	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,
+	SLI4_RELEASE_XRI_COUNT		= 0x1f,
+};
+
+struct sli4_cmd_release_xri {
+	struct sli4_mbox_command_header	hdr;
+	__le16		rel_xri_count_word;
+	__le16		xri_count_word;
+
+	struct {
+		__le16	xri_tag0;
+		__le16	xri_tag1;
+	} xri_tbl[62];
+};
+
+/* READ_CONFIG - read SLI port configuration parameters */
+struct sli4_cmd_read_config {
+	struct sli4_mbox_command_header	hdr;
+};
+
+enum sli4_read_cfg_resp_flags {
+	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
+	SLI4_READ_CFG_RESP_TOPOLOGY	= 0xff000000,	/* DW2 */
+};
+
+enum sli4_read_cfg_topo {
+	SLI4_READ_CFG_TOPO_FC		= 0x1,	/* FC topology unknown */
+	SLI4_READ_CFG_TOPO_FC_DA	= 0x2,	/* FC Direct Attach */
+	SLI4_READ_CFG_TOPO_FC_AL	= 0x3,	/* FC-AL topology */
+};
+
+struct sli4_rsp_read_config {
+	struct sli4_mbox_command_header	hdr;
+	__le32		ext_dword;
+	__le32		topology_dword;
+	__le32		resvd8;
+	__le16		e_d_tov;
+	__le16		resvd14;
+	__le32		resvd16;
+	__le16		r_a_tov;
+	__le16		resvd22;
+	__le32		resvd24;
+	__le32		resvd28;
+	__le16		lmt;
+	__le16		resvd34;
+	__le32		resvd36;
+	__le32		resvd40;
+	__le16		xri_base;
+	__le16		xri_count;
+	__le16		rpi_base;
+	__le16		rpi_count;
+	__le16		vpi_base;
+	__le16		vpi_count;
+	__le16		vfi_base;
+	__le16		vfi_count;
+	__le16		resvd60;
+	__le16		fcfi_count;
+	__le16		rq_count;
+	__le16		eq_count;
+	__le16		wq_count;
+	__le16		cq_count;
+	__le32		pad[45];
+};
+
+/* READ_NVPARMS - read SLI port configuration parameters */
+enum sli4_read_nvparms_flags {
+	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
+	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
+};
+
+struct sli4_cmd_read_nvparms {
+	struct sli4_mbox_command_header	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/* WRITE_NVPARMS - write SLI port configuration parameters */
+struct sli4_cmd_write_nvparms {
+	struct sli4_mbox_command_header	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/* READ_REV - read the Port revision levels */
+enum {
+	SLI4_READ_REV_FLAG_SLI_LEVEL	= 0xf,
+	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
+	SLI4_READ_REV_FLAG_CEEV		= 0x60,
+	SLI4_READ_REV_FLAG_VPD		= 0x2000,
+
+	SLI4_READ_REV_AVAILABLE_LENGTH	= 0xffffff,
+};
+
+struct sli4_cmd_read_rev {
+	struct sli4_mbox_command_header	hdr;
+	__le16			resvd0;
+	__le16			flags0_word;
+	__le32			first_hw_rev;
+	__le32			second_hw_rev;
+	__le32			resvd12;
+	__le32			third_hw_rev;
+	u8			fc_ph_low;
+	u8			fc_ph_high;
+	u8			feature_level_low;
+	u8			feature_level_high;
+	__le32			resvd24;
+	__le32			first_fw_id;
+	u8			first_fw_name[16];
+	__le32			second_fw_id;
+	u8			second_fw_name[16];
+	__le32			rsvd18[30];
+	__le32			available_length_dword;
+	struct sli4_dmaaddr	hostbuf;
+	__le32			returned_vpd_length;
+	__le32			actual_vpd_length;
+};
+
+/* READ_SPARM64 - read the Port service parameters */
+#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
+#define SLI4_READ_SPARM64_WWNN_OFFSET	(6 * sizeof(u32))
+
+struct sli4_cmd_read_sparm64 {
+	struct sli4_mbox_command_header hdr;
+	__le32			resvd0;
+	__le32			resvd4;
+	struct sli4_bde	bde_64;
+	__le16			vpi;
+	__le16			resvd22;
+	__le16			port_name_start;
+	__le16			port_name_len;
+	__le16			node_name_start;
+	__le16			node_name_len;
+};
+
+/* READ_TOPOLOGY - read the link event information */
+enum sli4_read_topo_e {
+	SLI4_READTOPO_ATTEN_TYPE	= 0xff,
+	SLI4_READTOPO_FLAG_IL		= 0x100,
+	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
+
+	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
+	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
+	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
+	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
+	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
+	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
+
+	SLI4_READTOPO_SCN_BBSCN		= 0xf,
+	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
+
+	SLI4_READTOPO_R_T_TOV		= 0x1ff,
+	SLI4_READTOPO_AL_TOV		= 0xf000,
+
+	SLI4_READTOPO_PB_FLAG		= 0x80,
+
+	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
+};
+
+#define SLI4_MIN_LOOP_MAP_BYTES	128
+
+struct sli4_cmd_read_topology {
+	struct sli4_mbox_command_header	hdr;
+	__le32			event_tag;
+	__le32			dw2_attentype;
+	u8			topology;
+	u8			lip_type;
+	u8			lip_al_ps;
+	u8			al_pa_granted;
+	struct sli4_bde	bde_loop_map;
+	__le32			linkdown_state;
+	__le32			currlink_state;
+	u8			max_bbc;
+	u8			init_bbc;
+	u8			scn_flags;
+	u8			rsvd39;
+	__le16			dw10w0_al_rt_tov;
+	__le16			lp_tov;
+	u8			acquired_al_pa;
+	u8			pb_flags;
+	__le16			specified_al_pa;
+	__le32			dw12_init_n_port_id;
+};
+
+enum sli4_read_topo_link {
+	SLI4_READ_TOPOLOGY_LINK_UP	= 0x1,
+	SLI4_READ_TOPOLOGY_LINK_DOWN,
+	SLI4_READ_TOPOLOGY_LINK_NO_ALPA,
+};
+
+enum sli4_read_topo {
+	SLI4_READ_TOPOLOGY_UNKNOWN	= 0x0,
+	SLI4_READ_TOPOLOGY_NPORT,
+	SLI4_READ_TOPOLOGY_FC_AL,
+};
+
+enum sli4_read_topo_speed {
+	SLI4_READ_TOPOLOGY_SPEED_NONE	= 0x00,
+	SLI4_READ_TOPOLOGY_SPEED_1G	= 0x04,
+	SLI4_READ_TOPOLOGY_SPEED_2G	= 0x08,
+	SLI4_READ_TOPOLOGY_SPEED_4G	= 0x10,
+	SLI4_READ_TOPOLOGY_SPEED_8G	= 0x20,
+	SLI4_READ_TOPOLOGY_SPEED_10G	= 0x40,
+	SLI4_READ_TOPOLOGY_SPEED_16G	= 0x80,
+	SLI4_READ_TOPOLOGY_SPEED_32G	= 0x90,
+};
+
+/* REG_FCFI - activate a FC Forwarder */
+struct sli4_cmd_reg_fcfi_rq_cfg {
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+enum sli4_regfcfi_tag {
+	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
+	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
+};
+
+#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
+struct sli4_cmd_reg_fcfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+			rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+};
+
+#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
+#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
+#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
+#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
+
+enum sli4_reg_fcfi_mrq {
+	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
+	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
+	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
+
+	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
+	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
+	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
+};
+
+struct sli4_cmd_reg_fcfi_mrq {
+	struct sli4_mbox_command_header	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+			rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+	__le32		dw9_mrqflags;
+};
+
+struct sli4_cmd_rq_cfg {
+	__le16	rq_id;
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+/* REG_RPI - register a Remote Port Indicator */
+enum sli4_reg_rpi {
+	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
+	SLI4_REGRPI_UPD			= 0x1000000,
+	SLI4_REGRPI_ETOW		= 0x8000000,
+	SLI4_REGRPI_TERP		= 0x20000000,
+	SLI4_REGRPI_CI			= 0x80000000,
+};
+
+struct sli4_cmd_reg_rpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16			rpi;
+	__le16			rsvd2;
+	__le32			dw2_rportid_flags;
+	struct sli4_bde	bde_64;
+	__le16			vpi;
+	__le16			rsvd26;
+};
+
+#define SLI4_REG_RPI_BUF_LEN		0x70
+
+/* REG_VFI - register a Virtual Fabric Indicator */
+enum sli_reg_vfi {
+	SLI4_REGVFI_VP			= 0x1000,	/* DW1 */
+	SLI4_REGVFI_UPD			= 0x2000,
+
+	SLI4_REGVFI_LOCAL_N_PORTID	= 0xffffff,	/* DW10 */
+};
+
+struct sli4_cmd_reg_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16			vfi;
+	__le16			dw0w1_flags;
+	__le16			fcfi;
+	__le16			vpi;
+	u8			wwpn[8];
+	struct sli4_bde	sparm;
+	__le32			e_d_tov;
+	__le32			r_a_tov;
+	__le32			dw10_lportid_flags;
+};
+
+/* REG_VPI - register a Virtual Port Indicator */
+enum sli4_reg_vpi {
+	SLI4_REGVPI_LOCAL_N_PORTID	= 0xffffff,
+	SLI4_REGVPI_UPD			= 0x1000000,
+};
+
+struct sli4_cmd_reg_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		dw2_lportid_flags;
+	u8		wwpn[8];
+	__le32		rsvd12;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/* REQUEST_FEATURES - request / query SLI features */
+enum sli4_req_features_flags {
+	SLI4_REQFEAT_QRY	= 0x1,		/* Dw1 */
+
+	SLI4_REQFEAT_IAAB	= (1 << 0),	/* DW2 & DW3 */
+	SLI4_REQFEAT_NPIV	= (1 << 1),
+	SLI4_REQFEAT_DIF	= (1 << 2),
+	SLI4_REQFEAT_VF		= (1 << 3),
+	SLI4_REQFEAT_FCPI	= (1 << 4),
+	SLI4_REQFEAT_FCPT	= (1 << 5),
+	SLI4_REQFEAT_FCPC	= (1 << 6),
+	SLI4_REQFEAT_RSVD	= (1 << 7),
+	SLI4_REQFEAT_RQD	= (1 << 8),
+	SLI4_REQFEAT_IAAR	= (1 << 9),
+	SLI4_REQFEAT_HLM	= (1 << 10),
+	SLI4_REQFEAT_PERFH	= (1 << 11),
+	SLI4_REQFEAT_RXSEQ	= (1 << 12),
+	SLI4_REQFEAT_RXRI	= (1 << 13),
+	SLI4_REQFEAT_DCL2	= (1 << 14),
+	SLI4_REQFEAT_RSCO	= (1 << 15),
+	SLI4_REQFEAT_MRQP	= (1 << 16),
+};
+
+struct sli4_cmd_request_features {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_qry;
+	__le32		cmd;
+	__le32		resp;
+};
+
+/*
+ * SLI_CONFIG - submit a configuration command to Port
+ *
+ * Command is either embedded as part of the payload (embed) or located
+ * in a separate memory buffer (mem)
+ */
+enum sli4_sli_config {
+	SLI4_SLICONF_EMB		= 0x1,		/* DW1 */
+	SLI4_SLICONF_PMDCMD_SHIFT	= 3,
+	SLI4_SLICONF_PMDCMD_MASK	= 0xF8,
+	SLI4_SLICONF_PMDCMD_VAL_1	= 8,
+	SLI4_SLICONF_PMDCNT		= 0xF8,
+
+	SLI4_SLICONFIG_PMD_LEN		= 0x00ffffff,
+};
+
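+/*
+ * For embedded commands the request is written into payload.embed below;
+ * for non-embedded commands payload.mem describes the external DMA buffer
+ * that holds the request.
+ */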
+struct sli4_cmd_sli_config {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_flags;
+	__le32		payload_len;
+	__le32		rsvd12[3];
+	union {
+		u8 embed[58 * sizeof(u32)];
+		struct sli4_bufptr mem;
+	} payload;
+};
+
+/* READ_STATUS - read tx/rx status of a particular port */
+#define SLI4_READSTATUS_CLEAR_COUNTERS	0x1
+
+struct sli4_cmd_read_status {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_flags;
+	__le32		rsvd4;
+	__le32		trans_kbyte_cnt;
+	__le32		recv_kbyte_cnt;
+	__le32		trans_frame_cnt;
+	__le32		recv_frame_cnt;
+	__le32		trans_seq_cnt;
+	__le32		recv_seq_cnt;
+	__le32		tot_exchanges_orig;
+	__le32		tot_exchanges_resp;
+	__le32		recv_p_bsy_cnt;
+	__le32		recv_f_bsy_cnt;
+	__le32		no_rq_buf_dropped_frames_cnt;
+	__le32		empty_rq_timeout_cnt;
+	__le32		no_xri_dropped_frames_cnt;
+	__le32		empty_xri_pool_cnt;
+};
+
+/* READ_LNK_STAT - read link status of a particular port */
+enum sli4_read_link_stats_flags {
+	SLI4_READ_LNKSTAT_REC	= (1 << 0),
+	SLI4_READ_LNKSTAT_GEC	= (1 << 1),
+	SLI4_READ_LNKSTAT_W02OF	= (1 << 2),
+	SLI4_READ_LNKSTAT_W03OF	= (1 << 3),
+	SLI4_READ_LNKSTAT_W04OF	= (1 << 4),
+	SLI4_READ_LNKSTAT_W05OF	= (1 << 5),
+	SLI4_READ_LNKSTAT_W06OF	= (1 << 6),
+	SLI4_READ_LNKSTAT_W07OF	= (1 << 7),
+	SLI4_READ_LNKSTAT_W08OF	= (1 << 8),
+	SLI4_READ_LNKSTAT_W09OF	= (1 << 9),
+	SLI4_READ_LNKSTAT_W10OF = (1 << 10),
+	SLI4_READ_LNKSTAT_W11OF = (1 << 11),
+	SLI4_READ_LNKSTAT_W12OF	= (1 << 12),
+	SLI4_READ_LNKSTAT_W13OF	= (1 << 13),
+	SLI4_READ_LNKSTAT_W14OF	= (1 << 14),
+	SLI4_READ_LNKSTAT_W15OF	= (1 << 15),
+	SLI4_READ_LNKSTAT_W16OF	= (1 << 16),
+	SLI4_READ_LNKSTAT_W17OF	= (1 << 17),
+	SLI4_READ_LNKSTAT_W18OF	= (1 << 18),
+	SLI4_READ_LNKSTAT_W19OF	= (1 << 19),
+	SLI4_READ_LNKSTAT_W20OF	= (1 << 20),
+	SLI4_READ_LNKSTAT_W21OF	= (1 << 21),
+	SLI4_READ_LNKSTAT_CLRC	= (1 << 30),
+	SLI4_READ_LNKSTAT_CLOF	= 0x80000000,
+};
+
+struct sli4_cmd_read_link_stats {
+	struct sli4_mbox_command_header	hdr;
+	__le32	dw1_flags;
+	__le32	linkfail_errcnt;
+	__le32	losssync_errcnt;
+	__le32	losssignal_errcnt;
+	__le32	primseq_errcnt;
+	__le32	inval_txword_errcnt;
+	__le32	crc_errcnt;
+	__le32	primseq_eventtimeout_cnt;
+	__le32	elastic_bufoverrun_errcnt;
+	__le32	arbit_fc_al_timeout_cnt;
+	__le32	adv_rx_buftor_to_buf_credit;
+	__le32	curr_rx_buf_to_buf_credit;
+	__le32	adv_tx_buf_to_buf_credit;
+	__le32	curr_tx_buf_to_buf_credit;
+	__le32	rx_eofa_cnt;
+	__le32	rx_eofdti_cnt;
+	__le32	rx_eofni_cnt;
+	__le32	rx_soff_cnt;
+	__le32	rx_dropped_no_aer_cnt;
+	__le32	rx_dropped_no_avail_rpi_rescnt;
+	__le32	rx_dropped_no_avail_xri_rescnt;
+};
+
+/* Format a WQE with WQ_ID Association performance hint */
+static inline void
+sli_set_wq_id_association(void *entry, u16 q_id)
+{
+	u32 *wqe = entry;
+
+	/*
+	 * Set Word 10, bit 0 to zero
+	 * Set Word 10, bits 15:1 to the WQ ID
+	 */
+	wqe[10] &= ~0xffff;
+	wqe[10] |= q_id << 1;
+}
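+
+/*
+ * Example (hypothetical caller): after formatting a WQE, tag it with the
+ * ID of the work queue it will be posted to:
+ *
+ *	sli_set_wq_id_association(wqe, wq_id);
+ */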
+
+/* UNREG_FCFI - unregister a FCFI */
+struct sli4_cmd_unreg_fcfi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		fcfi;
+	__le16		rsvd6;
+};
+
+/* UNREG_RPI - unregister one or more RPI */
+enum sli4_unreg_rpi {
+	UNREG_RPI_DP		= 0x2000,
+	UNREG_RPI_II_SHIFT	= 14,
+	UNREG_RPI_II_MASK	= 0xC000,
+	UNREG_RPI_II_RPI	= 0x0000,
+	UNREG_RPI_II_VPI	= 0x4000,
+	UNREG_RPI_II_VFI	= 0x8000,
+	UNREG_RPI_II_FCFI	= 0xC000,
+
+	UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff,
+};
+
+struct sli4_cmd_unreg_rpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		index;
+	__le16		dw1w1_flags;
+	__le32		dw2_dest_n_portid;
+};
+
+/* UNREG_VFI - unregister one or more VFI */
+enum sli4_unreg_vfi {
+	UNREG_VFI_II_SHIFT	= 14,
+	UNREG_VFI_II_MASK	= 0xC000,
+	UNREG_VFI_II_VFI	= 0x0000,
+	UNREG_VFI_II_FCFI	= 0xC000,
+};
+
+struct sli4_cmd_unreg_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2_flags;
+};
+
+enum sli4_unreg_type {
+	SLI4_UNREG_TYPE_PORT,
+	SLI4_UNREG_TYPE_DOMAIN,
+	SLI4_UNREG_TYPE_FCF,
+	SLI4_UNREG_TYPE_ALL
+};
+
+/* UNREG_VPI - unregister one or more VPI */
+enum sli4_unreg_vpi {
+	UNREG_VPI_II_SHIFT	= 14,
+	UNREG_VPI_II_MASK	= 0xC000,
+	UNREG_VPI_II_VPI	= 0x0000,
+	UNREG_VPI_II_VFI	= 0x8000,
+	UNREG_VPI_II_FCFI	= 0xC000,
+};
+
+struct sli4_cmd_unreg_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2w0_flags;
+};
+
+/* AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature */
+struct sli4_cmd_config_auto_xfer_rdy {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+};
+
+#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE	0xffff
+
+struct sli4_cmd_config_auto_xfer_rdy_hp {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+	__le32		dw3_esoc_flags;
+	__le16		block_size;
+	__le16		rsvd14;
+};
+
+/*************************************************************************
+ * SLI-4 common configuration command formats and definitions
+ */
+
+/*
+ * Subsystem values.
+ */
+enum sli4_subsystem {
+	SLI4_SUBSYSTEM_COMMON	= 0x01,
+	SLI4_SUBSYSTEM_LOWLEVEL	= 0x0B,
+	SLI4_SUBSYSTEM_FC	= 0x0C,
+	SLI4_SUBSYSTEM_DMTF	= 0x11,
+};
+
+#define	SLI4_OPC_LOWLEVEL_SET_WATCHDOG		0x36
+
+/*
+ * Common opcode (OPC) values.
+ */
+enum sli4_cmn_opcode {
+	CMN_FUNCTION_RESET	= 0x3d,
+	CMN_CREATE_CQ		= 0x0c,
+	CMN_CREATE_CQ_SET	= 0x1d,
+	CMN_DESTROY_CQ		= 0x36,
+	CMN_MODIFY_EQ_DELAY	= 0x29,
+	CMN_CREATE_EQ		= 0x0d,
+	CMN_DESTROY_EQ		= 0x37,
+	CMN_CREATE_MQ_EXT	= 0x5a,
+	CMN_DESTROY_MQ		= 0x35,
+	CMN_GET_CNTL_ATTRIBUTES	= 0x20,
+	CMN_NOP			= 0x21,
+	CMN_GET_RSC_EXTENT_INFO = 0x9a,
+	CMN_GET_SLI4_PARAMS	= 0xb5,
+	CMN_QUERY_FW_CONFIG	= 0x3a,
+	CMN_GET_PORT_NAME	= 0x4d,
+
+	CMN_WRITE_FLASHROM	= 0x07,
+	/* TRANSCEIVER Data */
+	CMN_READ_TRANS_DATA	= 0x49,
+	CMN_GET_CNTL_ADDL_ATTRS = 0x79,
+	CMN_GET_FUNCTION_CFG	= 0xa0,
+	CMN_GET_PROFILE_CFG	= 0xa4,
+	CMN_SET_PROFILE_CFG	= 0xa5,
+	CMN_GET_PROFILE_LIST	= 0xa6,
+	CMN_GET_ACTIVE_PROFILE	= 0xa7,
+	CMN_SET_ACTIVE_PROFILE	= 0xa8,
+	CMN_READ_OBJECT		= 0xab,
+	CMN_WRITE_OBJECT	= 0xac,
+	CMN_DELETE_OBJECT	= 0xae,
+	CMN_READ_OBJECT_LIST	= 0xad,
+	CMN_SET_DUMP_LOCATION	= 0xb8,
+	CMN_SET_FEATURES	= 0xbf,
+	CMN_GET_RECFG_LINK_INFO = 0xc9,
+	CMN_SET_RECNG_LINK_ID	= 0xca,
+};
+
+/* DMTF opcode (OPC) values */
+#define DMTF_EXEC_CLP_CMD 0x01
+
+/*
+ * COMMON_FUNCTION_RESET
+ *
+ * Resets the Port, returning it to a power-on state. This configuration
+ * command does not have a payload and should set/expect the lengths to
+ * be zero.
+ */
+struct sli4_rqst_cmn_function_reset {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_function_reset {
+	struct sli4_rsp_hdr	hdr;
+};
+
+/*
+ * COMMON_GET_CNTL_ATTRIBUTES
+ *
+ * Query for information about the SLI Port
+ */
+enum sli4_cntrl_attr_flags {
+	SLI4_CNTL_ATTR_PORTNUM	= 0x3f,
+	SLI4_CNTL_ATTR_PORTTYPE	= 0xc0,
+};
+
+struct sli4_rsp_cmn_get_cntl_attributes {
+	struct sli4_rsp_hdr	hdr;
+	u8			version_str[32];
+	u8			manufacturer_name[32];
+	__le32			supported_modes;
+	u8			eprom_version_lo;
+	u8			eprom_version_hi;
+	__le16			rsvd17;
+	__le32			mbx_ds_version;
+	__le32			ep_fw_ds_version;
+	u8			ncsi_version_str[12];
+	__le32			def_extended_timeout;
+	u8			model_number[32];
+	u8			description[64];
+	u8			serial_number[32];
+	u8			ip_version_str[32];
+	u8			fw_version_str[32];
+	u8			bios_version_str[32];
+	u8			redboot_version_str[32];
+	u8			driver_version_str[32];
+	u8			fw_on_flash_version_str[32];
+	__le32			functionalities_supported;
+	__le16			max_cdb_length;
+	u8			asic_revision;
+	u8			generational_guid0;
+	__le32			generational_guid1_12[3];
+	__le16			generational_guid13_14;
+	u8			generational_guid15;
+	u8			hba_port_count;
+	__le16			default_link_down_timeout;
+	u8			iscsi_version_min_max;
+	u8			multifunctional_device;
+	u8			cache_valid;
+	u8			hba_status;
+	u8			max_domains_supported;
+	u8			port_num_type_flags;
+	__le32			firmware_post_status;
+	__le32			hba_mtu;
+	u8			iscsi_features;
+	u8			rsvd121[3];
+	__le16			pci_vendor_id;
+	__le16			pci_device_id;
+	__le16			pci_sub_vendor_id;
+	__le16			pci_sub_system_id;
+	u8			pci_bus_number;
+	u8			pci_device_number;
+	u8			pci_function_number;
+	u8			interface_type;
+	__le64			unique_identifier;
+	u8			number_of_netfilters;
+	u8			rsvd122[3];
+};
+
+/*
+ * COMMON_GET_CNTL_ATTRIBUTES
+ *
+ * This command queries the controller information from the Flash ROM.
+ */
+struct sli4_rqst_cmn_get_cntl_addl_attributes {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_get_cntl_addl_attributes {
+	struct sli4_rsp_hdr	hdr;
+	__le16		ipl_file_number;
+	u8		ipl_file_version;
+	u8		rsvd4;
+	u8		on_die_temperature;
+	u8		rsvd5[3];
+	__le32		driver_advanced_features_supported;
+	__le32		rsvd7[4];
+	char		universal_bios_version[32];
+	char		x86_bios_version[32];
+	char		efi_bios_version[32];
+	char		fcode_version[32];
+	char		uefi_bios_version[32];
+	char		uefi_nic_version[32];
+	char		uefi_fcode_version[32];
+	char		uefi_iscsi_version[32];
+	char		iscsi_x86_bios_version[32];
+	char		pxe_x86_bios_version[32];
+	u8		default_wwpn[8];
+	u8		ext_phy_version[32];
+	u8		fc_universal_bios_version[32];
+	u8		fc_x86_bios_version[32];
+	u8		fc_efi_bios_version[32];
+	u8		fc_fcode_version[32];
+	u8		ext_phy_crc_label[8];
+	u8		ipl_file_name[16];
+	u8		rsvd139[72];
+};
+
+/*
+ * COMMON_NOP
+ *
+ * This command does not do anything; it only returns
+ * the payload in the completion.
+ */
+struct sli4_rqst_cmn_nop {
+	struct sli4_rqst_hdr	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rsp_cmn_nop {
+	struct sli4_rsp_hdr	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rqst_cmn_get_resource_extent_info {
+	struct sli4_rqst_hdr	hdr;
+	__le16	resource_type;
+	__le16	rsvd16;
+};
+
+enum sli4_rsc_type {
+	SLI4_RSC_TYPE_VFI	= 0x20,
+	SLI4_RSC_TYPE_VPI	= 0x21,
+	SLI4_RSC_TYPE_RPI	= 0x22,
+	SLI4_RSC_TYPE_XRI	= 0x23,
+};
+
+struct sli4_rsp_cmn_get_resource_extent_info {
+	struct sli4_rsp_hdr	hdr;
+	__le16			resource_extent_count;
+	__le16			resource_extent_size;
+};
+
+#define SLI4_128BYTE_WQE_SUPPORT	0x02
+
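+/*
+ * Helpers to extract the queue count method and queue create version
+ * fields from COMMON_GET_SLI4_PARAMS response words (masks are defined
+ * in enum sli4_rsp_get_params_e below).
+ */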
+#define GET_Q_CNT_METHOD(m) \
+	(((m) & RSP_GET_PARAM_Q_CNT_MTHD_MASK) >> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
+#define GET_Q_CREATE_VERSION(v) \
+	(((v) & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)
+
+enum sli4_rsp_get_params_e {
+	/*GENERIC*/
+	RSP_GET_PARAM_Q_CNT_MTHD_SHFT	= 24,
+	RSP_GET_PARAM_Q_CNT_MTHD_MASK	= (0xF << 24),
+	RSP_GET_PARAM_QV_SHIFT		= 14,
+	RSP_GET_PARAM_QV_MASK		= (3 << 14),
+
+	/* DW4 */
+	RSP_GET_PARAM_PROTO_TYPE_MASK	= 0xFF,
+	/* DW5 */
+	RSP_GET_PARAM_FT		= (1 << 0),
+	RSP_GET_PARAM_SLI_REV_MASK	= (0xF << 4),
+	RSP_GET_PARAM_SLI_FAM_MASK	= (0xF << 8),
+	RSP_GET_PARAM_IF_TYPE_MASK	= (0xF << 12),
+	RSP_GET_PARAM_SLI_HINT1_MASK	= (0xFF << 16),
+	RSP_GET_PARAM_SLI_HINT2_MASK	= (0x1F << 24),
+	/* DW6 */
+	RSP_GET_PARAM_EQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_EQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_EQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW8 */
+	RSP_GET_PARAM_CQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_CQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_CQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW10 */
+	RSP_GET_PARAM_MQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_MQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW12 */
+	RSP_GET_PARAM_WQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_WQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_WQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW14 */
+	RSP_GET_PARAM_RQ_PAGE_CNT_MASK	= (0xF << 0),
+	RSP_GET_PARAM_RQE_SZS_MASK	= (0xF << 8),
+	RSP_GET_PARAM_RQ_PAGE_SZS_MASK	= (0xFF << 16),
+	/* DW15W1*/
+	RSP_GET_PARAM_RQ_DB_WINDOW_MASK	= 0xF000,
+	/* DW16 */
+	RSP_GET_PARAM_FC		= (1 << 0),
+	RSP_GET_PARAM_EXT		= (1 << 1),
+	RSP_GET_PARAM_HDRR		= (1 << 2),
+	RSP_GET_PARAM_SGLR		= (1 << 3),
+	RSP_GET_PARAM_FBRR		= (1 << 4),
+	RSP_GET_PARAM_AREG		= (1 << 5),
+	RSP_GET_PARAM_TGT		= (1 << 6),
+	RSP_GET_PARAM_TERP		= (1 << 7),
+	RSP_GET_PARAM_ASSI		= (1 << 8),
+	RSP_GET_PARAM_WCHN		= (1 << 9),
+	RSP_GET_PARAM_TCCA		= (1 << 10),
+	RSP_GET_PARAM_TRTY		= (1 << 11),
+	RSP_GET_PARAM_TRIR		= (1 << 12),
+	RSP_GET_PARAM_PHOFF		= (1 << 13),
+	RSP_GET_PARAM_PHON		= (1 << 14),
+	RSP_GET_PARAM_PHWQ		= (1 << 15),
+	RSP_GET_PARAM_BOUND_4GA		= (1 << 16),
+	RSP_GET_PARAM_RXC		= (1 << 17),
+	RSP_GET_PARAM_HLM		= (1 << 18),
+	RSP_GET_PARAM_IPR		= (1 << 19),
+	RSP_GET_PARAM_RXRI		= (1 << 20),
+	RSP_GET_PARAM_SGLC		= (1 << 21),
+	RSP_GET_PARAM_TIMM		= (1 << 22),
+	RSP_GET_PARAM_TSMM		= (1 << 23),
+	RSP_GET_PARAM_OAS		= (1 << 25),
+	RSP_GET_PARAM_LC		= (1 << 26),
+	RSP_GET_PARAM_AGXF		= (1 << 27),
+	RSP_GET_PARAM_LOOPBACK_MASK	= 0xF0000000,
+	/* DW18 */
+	RSP_GET_PARAM_SGL_PAGE_CNT_MASK = (0xF << 0),
+	RSP_GET_PARAM_SGL_PAGE_SZS_MASK = (0xFF << 8),
+	RSP_GET_PARAM_SGL_PP_ALIGN_MASK = (0xFF << 16),
+};
+
+struct sli4_rqst_cmn_get_sli4_params {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_get_sli4_params {
+	struct sli4_rsp_hdr	hdr;
+	__le32		dw4_protocol_type;
+	__le32		dw5_sli;
+	__le32		dw6_eq_page_cnt;
+	__le16		eqe_count_mask;
+	__le16		rsvd26;
+	__le32		dw8_cq_page_cnt;
+	__le16		cqe_count_mask;
+	__le16		rsvd34;
+	__le32		dw10_mq_page_cnt;
+	__le16		mqe_count_mask;
+	__le16		rsvd42;
+	__le32		dw12_wq_page_cnt;
+	__le16		wqe_count_mask;
+	__le16		rsvd50;
+	__le32		dw14_rq_page_cnt;
+	__le16		rqe_count_mask;
+	__le16		dw15w1_rq_db_window;
+	__le32		dw16_loopback_scope;
+	__le32		sge_supported_length;
+	__le32		dw18_sgl_page_cnt;
+	__le16		min_rq_buffer_size;
+	__le16		rsvd75;
+	__le32		max_rq_buffer_size;
+	__le16		physical_xri_max;
+	__le16		physical_rpi_max;
+	__le16		physical_vpi_max;
+	__le16		physical_vfi_max;
+	__le32		rsvd88;
+	__le16		frag_num_field_offset;
+	__le16		frag_num_field_size;
+	__le16		sgl_index_field_offset;
+	__le16		sgl_index_field_size;
+	__le32		chain_sge_initial_value_lo;
+	__le32		chain_sge_initial_value_hi;
+};
+
+/*
+ * COMMON_QUERY_FW_CONFIG
+ *
+ * This command retrieves firmware configuration parameters and adapter
+ * resources available to the driver.
+ */
+struct sli4_rqst_cmn_query_fw_config {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_query_fw_config {
+	struct sli4_rsp_hdr	hdr;
+	__le32		config_number;
+	__le32		asic_rev;
+	__le32		physical_port;
+	__le32		function_mode;
+	__le32		ulp0_mode;
+	__le32		ulp0_nic_wqid_base;
+	__le32		ulp0_nic_wq_total; /* DW10 */
+	__le32		ulp0_toe_wqid_base;
+	__le32		ulp0_toe_wq_total;
+	__le32		ulp0_toe_rqid_base;
+	__le32		ulp0_toe_rq_total;
+	__le32		ulp0_toe_defrqid_base;
+	__le32		ulp0_toe_defrq_total;
+	__le32		ulp0_lro_rqid_base;
+	__le32		ulp0_lro_rq_total;
+	__le32		ulp0_iscsi_icd_base;
+	__le32		ulp0_iscsi_icd_total; /* DW20 */
+	__le32		ulp1_mode;
+	__le32		ulp1_nic_wqid_base;
+	__le32		ulp1_nic_wq_total;
+	__le32		ulp1_toe_wqid_base;
+	__le32		ulp1_toe_wq_total;
+	__le32		ulp1_toe_rqid_base;
+	__le32		ulp1_toe_rq_total;
+	__le32		ulp1_toe_defrqid_base;
+	__le32		ulp1_toe_defrq_total;
+	__le32		ulp1_lro_rqid_base; /* DW30 */
+	__le32		ulp1_lro_rq_total;
+	__le32		ulp1_iscsi_icd_base;
+	__le32		ulp1_iscsi_icd_total;
+	__le32		function_capabilities;
+	__le32		ulp0_cq_base;
+	__le32		ulp0_cq_total;
+	__le32		ulp0_eq_base;
+	__le32		ulp0_eq_total;
+	__le32		ulp0_iscsi_chain_icd_base;
+	__le32		ulp0_iscsi_chain_icd_total; /* DW40 */
+	__le32		ulp1_iscsi_chain_icd_base;
+	__le32		ulp1_iscsi_chain_icd_total;
+};
+
+/* Port Types */
+enum sli4_port_types {
+	PORT_TYPE_ETH	= 0,
+	PORT_TYPE_FC	= 1,
+};
+
+struct sli4_rqst_cmn_get_port_name {
+	struct sli4_rqst_hdr	hdr;
+	u8      port_type;
+	u8      rsvd4[3];
+};
+
+struct sli4_rsp_cmn_get_port_name {
+	struct sli4_rsp_hdr	hdr;
+	char		port_name[4];
+};
+
+struct sli4_rqst_cmn_write_flashrom {
+	struct sli4_rqst_hdr	hdr;
+	__le32		flash_rom_access_opcode;
+	__le32		flash_rom_access_operation_type;
+	__le32		data_buffer_size;
+	__le32		offset;
+	u8		data_buffer[4];
+};
+
+/*
+ * COMMON_READ_TRANSCEIVER_DATA
+ *
+ * This command reads SFF transceiver data (format is defined
+ * by the SFF-8472 specification).
+ */
+struct sli4_rqst_cmn_read_transceiver_data {
+	struct sli4_rqst_hdr	hdr;
+	__le32			page_number;
+	__le32			port;
+};
+
+struct sli4_rsp_cmn_read_transceiver_data {
+	struct sli4_rsp_hdr	hdr;
+	__le32			page_number;
+	__le32			port;
+	u8			page_data[128];
+	u8			page_data_2[128];
+};
+
+#define SLI4_REQ_DESIRE_READLEN		0xFFFFFF
+
+struct sli4_rqst_cmn_read_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_read_length_dword;
+	__le32			read_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde	host_buffer_descriptor[];
+};
+
+#define RSP_COM_READ_OBJ_EOF		0x80000000
+
+struct sli4_rsp_cmn_read_object {
+	struct sli4_rsp_hdr	hdr;
+	__le32			actual_read_length;
+	__le32			eof_dword;
+};
+
+enum sli4_rqst_write_object_flags {
+	SLI4_RQ_DES_WRITE_LEN		= 0xFFFFFF,
+	SLI4_RQ_DES_WRITE_LEN_NOC	= 0x40000000,
+	SLI4_RQ_DES_WRITE_LEN_EOF	= 0x80000000,
+};
+
+struct sli4_rqst_cmn_write_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_write_len_dword;
+	__le32			write_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde	host_buffer_descriptor[];
+};
+
+#define	RSP_CHANGE_STATUS		0xFF
+
+struct sli4_rsp_cmn_write_object {
+	struct sli4_rsp_hdr	hdr;
+	__le32			actual_write_length;
+	__le32			change_status_dword;
+};
+
+struct sli4_rqst_cmn_delete_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			rsvd4;
+	__le32			rsvd5;
+	u8			object_name[104];
+};
+
+#define SLI4_RQ_OBJ_LIST_READ_LEN	0xFFFFFF
+
+struct sli4_rqst_cmn_read_object_list {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_read_length_dword;
+	__le32			read_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde	host_buffer_descriptor[];
+};
+
+enum sli4_rqst_set_dump_flags {
+	SLI4_CMN_SET_DUMP_BUFFER_LEN	= 0xFFFFFF,
+	SLI4_CMN_SET_DUMP_FDB		= 0x20000000,
+	SLI4_CMN_SET_DUMP_BLP		= 0x40000000,
+	SLI4_CMN_SET_DUMP_QRY		= 0x80000000,
+};
+
+struct sli4_rqst_cmn_set_dump_location {
+	struct sli4_rqst_hdr	hdr;
+	__le32			buffer_length_dword;
+	__le32			buf_addr_low;
+	__le32			buf_addr_high;
+};
+
+struct sli4_rsp_cmn_set_dump_location {
+	struct sli4_rsp_hdr	hdr;
+	__le32			buffer_length_dword;
+};
+
+enum sli4_dump_level {
+	SLI4_DUMP_LEVEL_NONE,
+	SLI4_CHIP_LEVEL_DUMP,
+	SLI4_FUNC_DESC_DUMP,
+};
+
+enum sli4_dump_state {
+	SLI4_DUMP_STATE_NONE,
+	SLI4_CHIP_DUMP_STATE_VALID,
+	SLI4_FUNC_DUMP_STATE_VALID,
+};
+
+enum sli4_dump_status {
+	SLI4_DUMP_READY_STATUS_NOT_READY,
+	SLI4_DUMP_READY_STATUS_DD_PRESENT,
+	SLI4_DUMP_READY_STATUS_FDB_PRESENT,
+	SLI4_DUMP_READY_STATUS_SKIP_DUMP,
+	SLI4_DUMP_READY_STATUS_FAILED = -1,
+};
+
+enum sli4_set_features {
+	SLI4_SET_FEATURES_DIF_SEED			= 0x01,
+	SLI4_SET_FEATURES_XRI_TIMER			= 0x03,
+	SLI4_SET_FEATURES_MAX_PCIE_SPEED		= 0x04,
+	SLI4_SET_FEATURES_FCTL_CHECK			= 0x05,
+	SLI4_SET_FEATURES_FEC				= 0x06,
+	SLI4_SET_FEATURES_PCIE_RECV_DETECT		= 0x07,
+	SLI4_SET_FEATURES_DIF_MEMORY_MODE		= 0x08,
+	SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	= 0x09,
+	SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		= 0x0A,
+	SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI		= 0x0C,
+	SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	= 0x0D,
+	SLI4_SET_FEATURES_SET_FTD_XFER_HINT		= 0x0F,
+	SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		= 0x11,
+};
+
+struct sli4_rqst_cmn_set_features {
+	struct sli4_rqst_hdr	hdr;
+	__le32			feature;
+	__le32			param_len;
+	__le32			params[8];
+};
+
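+/*
+ * Feature-specific parameter layouts; the caller is expected to place the
+ * appropriate structure in the params[] area of the request above.
+ */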
+struct sli4_rqst_cmn_set_features_dif_seed {
+	__le16		seed;
+	__le16		rsvd16;
+};
+
+enum sli4_rqst_set_mrq_features {
+	SLI4_RQ_MULTIRQ_ISR		 = 0x1,
+	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
+
+	SLI4_RQ_MULTIRQ_NUM_RQS		 = 0xFF,
+	SLI4_RQ_MULTIRQ_RQ_SELECT	 = 0xF00,
+};
+
+struct sli4_rqst_cmn_set_features_multirq {
+	__le32		auto_gen_xfer_dword;
+	__le32		num_rqs_dword;
+};
+
+enum sli4_rqst_health_check_flags {
+	SLI4_RQ_HEALTH_CHECK_ENABLE	= 0x1,
+	SLI4_RQ_HEALTH_CHECK_QUERY	= 0x2,
+};
+
+struct sli4_rqst_cmn_set_features_health_check {
+	__le32		health_check_dword;
+};
+
+struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint {
+	__le32		fdt_xfer_hint;
+};
+
+struct sli4_rqst_dmtf_exec_clp_cmd {
+	struct sli4_rqst_hdr	hdr;
+	__le32			cmd_buf_length;
+	__le32			resp_buf_length;
+	__le32			cmd_buf_addr_low;
+	__le32			cmd_buf_addr_high;
+	__le32			resp_buf_addr_low;
+	__le32			resp_buf_addr_high;
+};
+
+struct sli4_rsp_dmtf_exec_clp_cmd {
+	struct sli4_rsp_hdr	hdr;
+	__le32			rsvd4;
+	__le32			resp_length;
+	__le32			rsvd6;
+	__le32			rsvd7;
+	__le32			rsvd8;
+	__le32			rsvd9;
+	__le32			clp_status;
+	__le32			clp_detailed_status;
+};
+
+#define SLI4_PROTOCOL_FC		0x10
+#define SLI4_PROTOCOL_DEFAULT		0xff
+
+struct sli4_rspource_descriptor_v1 {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		rsvd16;
+	__le32		type_specific[];
+};
+
+enum sli4_pcie_desc_flags {
+	SLI4_PCIE_DESC_IMM		= 0x4000,
+	SLI4_PCIE_DESC_NOSV		= 0x8000,
+
+	SLI4_PCIE_DESC_PF_NO		= 0x3FF0000,
+
+	SLI4_PCIE_DESC_MISSN_ROLE	= 0xFF,
+	SLI4_PCIE_DESC_PCHG		= 0x8000000,
+	SLI4_PCIE_DESC_SCHG		= 0x10000000,
+	SLI4_PCIE_DESC_XCHG		= 0x20000000,
+	SLI4_PCIE_DESC_XROM		= 0xC0000000
+};
+
+struct sli4_pcie_resource_descriptor_v1 {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		imm_nosv_dword;
+	__le32		pf_number_dword;
+	__le32		rsvd3;
+	u8		sriov_state;
+	u8		pf_state;
+	u8		pf_type;
+	u8		rsvd4;
+	__le16		number_of_vfs;
+	__le16		rsvd5;
+	__le32		mission_roles_dword;
+	__le32		rsvd7[16];
+};
+
+struct sli4_rqst_cmn_get_function_config {
+	struct sli4_rqst_hdr  hdr;
+};
+
+struct sli4_rsp_cmn_get_function_config {
+	struct sli4_rsp_hdr	hdr;
+	__le32			desc_count;
+	__le32			desc[54];
+};
+
+/* Link Config Descriptor for link config functions */
+struct sli4_link_config_descriptor {
+	u8		link_config_id;
+	u8		rsvd1[3];
+	__le32		config_description[8];
+};
+
+#define MAX_LINK_DES	10
+
+struct sli4_rqst_cmn_get_reconfig_link_info {
+	struct sli4_rqst_hdr  hdr;
+};
+
+struct sli4_rsp_cmn_get_reconfig_link_info {
+	struct sli4_rsp_hdr	hdr;
+	u8			active_link_config_id;
+	u8			rsvd17;
+	u8			next_link_config_id;
+	u8			rsvd19;
+	__le32			link_configuration_descriptor_count;
+	struct sli4_link_config_descriptor
+				desc[MAX_LINK_DES];
+};
+
+enum sli4_set_reconfig_link_flags {
+	SLI4_SET_RECONFIG_LINKID_NEXT	= 0xff,
+	SLI4_SET_RECONFIG_LINKID_FD	= 0x80000000,
+};
+
+struct sli4_rqst_cmn_set_reconfig_link_id {
+	struct sli4_rqst_hdr  hdr;
+	__le32			dw4_flags;
+};
+
+struct sli4_rsp_cmn_set_reconfig_link_id {
+	struct sli4_rsp_hdr	hdr;
+};
+
+struct sli4_rqst_lowlevel_set_watchdog {
+	struct sli4_rqst_hdr	hdr;
+	__le16			watchdog_timeout;
+	__le16			rsvd18;
+};
+
+struct sli4_rsp_lowlevel_set_watchdog {
+	struct sli4_rsp_hdr	hdr;
+	__le32			rsvd;
+};
+
+/* FC opcode (OPC) values */
+enum sli4_fc_opcodes {
+	SLI4_OPC_WQ_CREATE		= 0x1,
+	SLI4_OPC_WQ_DESTROY		= 0x2,
+	SLI4_OPC_POST_SGL_PAGES		= 0x3,
+	SLI4_OPC_RQ_CREATE		= 0x5,
+	SLI4_OPC_RQ_DESTROY		= 0x6,
+	SLI4_OPC_READ_FCF_TABLE		= 0x8,
+	SLI4_OPC_POST_HDR_TEMPLATES	= 0xb,
+	SLI4_OPC_REDISCOVER_FCF		= 0x10,
+};
+
+/* Use the default CQ associated with the WQ */
+#define SLI4_CQ_DEFAULT 0xffff
+
+/*
+ * POST_SGL_PAGES
+ *
+ * Register the scatter gather list (SGL) memory and
+ * associate it with an XRI.
+ */
+struct sli4_rqst_post_sgl_pages {
+	struct sli4_rqst_hdr	hdr;
+	__le16			xri_start;
+	__le16			xri_count;
+	struct {
+		__le32		page0_low;
+		__le32		page0_high;
+		__le32		page1_low;
+		__le32		page1_high;
+	} page_set[10];
+};
+
+struct sli4_rsp_post_sgl_pages {
+	struct sli4_rsp_hdr	hdr;
+};
+
+struct sli4_rqst_post_hdr_templates {
+	struct sli4_rqst_hdr	hdr;
+	__le16			rpi_offset;
+	__le16			page_count;
+	struct sli4_dmaaddr	page_descriptor[];
+};
+
+#define SLI4_HDR_TEMPLATE_SIZE		64
+
+enum sli4_io_flags {
+/* The XRI associated with this IO is already active */
+	SLI4_IO_CONTINUATION		= (1 << 0),
+/* Automatically generate a good RSP frame */
+	SLI4_IO_AUTO_GOOD_RESPONSE	= (1 << 1),
+	SLI4_IO_NO_ABORT		= (1 << 2),
+/* Set the DNRX bit because no auto xref rdy buffer is posted */
+	SLI4_IO_DNRX			= (1 << 3),
+};
+
+enum sli4_callback {
+	SLI4_CB_LINK,
+	SLI4_CB_MAX,
+};
+
+enum sli4_link_status {
+	SLI_LINK_STATUS_UP,
+	SLI_LINK_STATUS_DOWN,
+	SLI_LINK_STATUS_NO_ALPA,
+	SLI_LINK_STATUS_MAX,
+};
+
+enum sli4_link_topology {
+	SLI_LINK_TOPO_NPORT = 1,
+	SLI_LINK_TOPO_LOOP,
+	SLI_LINK_TOPO_LOOPBACK_INTERNAL,
+	SLI_LINK_TOPO_LOOPBACK_EXTERNAL,
+	SLI_LINK_TOPO_NONE,
+	SLI_LINK_TOPO_MAX,
+};
+
+enum sli4_link_medium {
+	SLI_LINK_MEDIUM_ETHERNET,
+	SLI_LINK_MEDIUM_FC,
+	SLI_LINK_MEDIUM_MAX,
+};
+
+/* Driver specific structures */
+
+struct sli4_link_event {
+	enum sli4_link_status		status;
+	enum sli4_link_topology	topology;
+	enum sli4_link_medium		medium;
+	u32				speed;
+	u8				*loop_map;
+	u32				fc_id;
+};
+
+enum sli4_resource {
+	SLI_RSRC_VFI,
+	SLI_RSRC_VPI,
+	SLI_RSRC_RPI,
+	SLI_RSRC_XRI,
+	SLI_RSRC_FCFI,
+	SLI_RSRC_MAX,
+};
+
+struct sli4_extent {
+	u32		number;
+	u32		size;
+	u32		n_alloc;
+	u32		*base;
+	unsigned long	*use_map;
+	u32		map_size;
+};
+
+struct sli4_queue_info {
+	u16	max_qcount[SLI_QTYPE_MAX];
+	u32	max_qentries[SLI_QTYPE_MAX];
+	u16	count_mask[SLI_QTYPE_MAX];
+	u16	count_method[SLI_QTYPE_MAX];
+	u32	qpage_count[SLI_QTYPE_MAX];
+};
+
+#define	SLI_PCI_MAX_REGS		6
+struct sli4 {
+	void				*os;
+	struct pci_dev			*pcidev;
+	void __iomem			*reg[SLI_PCI_MAX_REGS];
+
+	u32				sli_rev;
+	u32				sli_family;
+	u32				if_type;
+
+	u16				asic_type;
+	u16				asic_rev;
+
+	u16				e_d_tov;
+	u16				r_a_tov;
+	struct sli4_queue_info	qinfo;
+	u16				link_module_type;
+	u8				rq_batch;
+	u16				rq_min_buf_size;
+	u32				rq_max_buf_size;
+	u8				topology;
+	u8				wwpn[8];
+	u8				wwnn[8];
+	u32				fw_rev[2];
+	u8				fw_name[2][16];
+	char				ipl_name[16];
+	u32				hw_rev[3];
+	u8				port_number;
+	char				port_name[2];
+	char				modeldesc[64];
+	char				bios_version_string[32];
+	/*
+	 * Tracks the port resources using the extent metaphor. For
+	 * devices that don't implement extents (i.e.
+	 * has_extents == false), the code models each resource as
+	 * a single large extent.
+	 */
+	struct sli4_extent		extent[SLI_RSRC_MAX];
+	u32				features;
+	u32				has_extents:1,
+					auto_reg:1,
+					auto_xfer_rdy:1,
+					hdr_template_req:1,
+					perf_hint:1,
+					perf_wq_id_association:1,
+					cq_create_version:2,
+					mq_create_version:2,
+					high_login_mode:1,
+					sgl_pre_registered:1,
+					sgl_pre_registration_required:1,
+					t10_dif_inline_capable:1,
+					t10_dif_separate_capable:1;
+	u32				sge_supported_length;
+	u32				sgl_page_sizes;
+	u32				max_sgl_pages;
+	u32				wqe_size;
+
+	/*
+	 * Callback functions
+	 */
+	int				(*link)(void *ctx, void *event);
+	void				*link_arg;
+
+	struct efc_dma		bmbx;
+
+	/* Save pointer to physical memory descriptor for non-embedded
+	 * SLI_CONFIG commands for BMBX dumping purposes
+	 */
+	struct efc_dma		*bmbx_non_emb_pmd;
+
+	struct efc_dma		vpd_data;
+	u32			vpd_length;
+};
+
 #endif /* !_SLI4_H */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (2 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 10:04   ` Daniel Wagner
  2020-04-15 12:27   ` Hannes Reinecke
  2020-04-12  3:32 ` [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs James Smart
                   ` (26 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines to create mailbox commands
and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.
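
As a rough illustration of how these builders are meant to be used (a
sketch only; the buffer size and the submit step below are placeholders,
not interfaces added by this patch):

	struct efc_dma qmem;	/* queue memory, already allocated */
	u8 buf[256];		/* mailbox command buffer (placeholder size) */

	if (sli_cmd_wq_create(&sli4, buf, sizeof(buf), &qmem, cq_id) ==
	    EFC_SUCCESS)
		rc = submit_bmbx_command(&sli4, buf);	/* placeholder */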

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Removed efc_assert define. Replaced with WARN_ON.
  Return the defined values EFC_SUCCESS/EFC_FAIL.
---
 drivers/scsi/elx/include/efc_common.h |   18 +
 drivers/scsi/elx/libefc_sli/sli4.c    | 1514 +++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h    |    9 +
 3 files changed, 1541 insertions(+)

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
index c427f75da4d5..4c7574dacb99 100644
--- a/drivers/scsi/elx/include/efc_common.h
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -22,4 +22,22 @@ struct efc_dma {
 	struct pci_dev	*pdev;
 };
 
+#define efc_log_crit(efc, fmt, args...) \
+		dev_crit(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_err(efc, fmt, args...) \
+		dev_err(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_warn(efc, fmt, args...) \
+		dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_info(efc, fmt, args...) \
+		dev_info(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_test(efc, fmt, args...) \
+		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
+
+#define efc_log_debug(efc, fmt, args...) \
+		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
+
 #endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 29d33becd334..224a06610c78 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -24,3 +24,1517 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
 	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
 	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
 };
+
+/* Convert queue type enum (SLI_QTYPE_*) into a string */
+static char *SLI_QNAME[] = {
+	"Event Queue",
+	"Completion Queue",
+	"Mailbox Queue",
+	"Work Queue",
+	"Receive Queue",
+	"Undefined"
+};
+
+/*
+ * Write a SLI_CONFIG command to the provided buffer.
+ *
+ * @sli4 SLI context pointer.
+ * @buf Destination buffer for the command.
+ * @size Size of the destination buffer (buf).
+ * @length Length in bytes of attached command.
+ * @dma DMA buffer for non-embedded commands.
+ *
+ * Returns a pointer to the command payload area (the embedded buffer, or
+ * dma->virt for non-embedded commands), or NULL on error.
+ */
+static void *
+sli_config_cmd_init(struct sli4 *sli4, void *buf,
+		    size_t size, u32 length,
+		    struct efc_dma *dma)
+{
+	struct sli4_cmd_sli_config *config = NULL;
+	u32 flags = 0;
+
+	if (length > sizeof(config->payload.embed) && !dma) {
+		efc_log_err(sli4, "Too big for an embedded cmd with len(%d)\n",
+			    length);
+		return NULL;
+	}
+
+	config = buf;
+
+	memset(buf, 0, size);
+
+	config->hdr.command = MBX_CMD_SLI_CONFIG;
+	if (!dma) {
+		flags |= SLI4_SLICONF_EMB;
+		config->dw1_flags = cpu_to_le32(flags);
+		config->payload_len = cpu_to_le32(length);
+		buf += offsetof(struct sli4_cmd_sli_config, payload.embed);
+		return buf;
+	}
+
+	flags = SLI4_SLICONF_PMDCMD_VAL_1;
+	flags &= ~SLI4_SLICONF_EMB;
+	config->dw1_flags = cpu_to_le32(flags);
+
+	config->payload.mem.addr.low = cpu_to_le32(lower_32_bits(dma->phys));
+	config->payload.mem.addr.high =	cpu_to_le32(upper_32_bits(dma->phys));
+	config->payload.mem.length =
+			cpu_to_le32(dma->size & SLI4_SLICONFIG_PMD_LEN);
+	config->payload_len = cpu_to_le32(dma->size);
+	/* save pointer to DMA for BMBX dumping purposes */
+	sli4->bmbx_non_emb_pmd = dma;
+	return dma->virt;
+}
+
+/*
+ * Write a COMMON_CREATE_CQ command.
+ *
+ * This creates a Version 2 message.
+ *
+ * Returns 0 on success, or non-zero otherwise.
+ */
+static int
+sli_cmd_common_create_cq(struct sli4 *sli4, void *buf, size_t size,
+			 struct efc_dma *qmem,
+			 u16 eq_id)
+{
+	struct sli4_rqst_cmn_create_cq_v2 *cqv2 = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 page_bytes = 0;
+	u32 num_pages = 0;
+	size_t cmd_size = 0;
+	u32 page_size = 0;
+	u32 n_cqe = 0;
+	u32 dw5_flags = 0;
+	u16 dw6w1_arm = 0;
+	__le32 len;
+
+	/* First calculate number of pages and the mailbox cmd length */
+	n_cqe = qmem->size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages = sli_page_count(qmem->size, page_bytes);
+
+	cmd_size = CFG_RQST_CMDSZ(cmn_create_cq_v2) + SZ_DMAADDR * num_pages;
+
+	cqv2 = sli_config_cmd_init(sli4, buf, size, cmd_size, NULL);
+	if (!cqv2)
+		return EFC_FAIL;
+
+	len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_v2,
+					 SZ_DMAADDR * num_pages);
+	sli_cmd_fill_hdr(&cqv2->hdr, CMN_CREATE_CQ, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V2, len);
+	cqv2->page_size = page_size;
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
+	cqv2->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_CMN_CREATE_CQ_V2_MAX_PAGES)
+		return EFC_FAIL;
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= CQ_CNT_VAL(LARGE);
+		cqv2->cqe_count = cpu_to_le16(n_cqe);
+		break;
+	default:
+		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_CQV2_AUTOVALID;
+
+	dw5_flags |= CREATE_CQV2_EVT;
+	dw5_flags |= CREATE_CQV2_VALID;
+
+	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
+	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
+	cqv2->eq_id = cpu_to_le16(eq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
+		cqv2->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
+		cqv2->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+/* Write a COMMON_CREATE_EQ command */
+static int
+sli_cmd_common_create_eq(struct sli4 *sli4, void *buf, size_t size,
+			 struct efc_dma *qmem)
+{
+	struct sli4_rqst_cmn_create_eq *eq;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+	u32 dw5_flags = 0;
+	u32 dw6_flags = 0, ver = CMD_V0;
+
+	eq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);
+	if (!eq)
+		return EFC_FAIL;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		ver = CMD_V2;
+
+	sli_cmd_fill_hdr(&eq->hdr, CMN_CREATE_EQ, SLI4_SUBSYSTEM_COMMON,
+			 ver, CFG_RQST_PYLD_LEN(cmn_create_eq));
+
+	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	eq->num_pages = cpu_to_le16(num_pages);
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(1024);
+		break;
+	case 2:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(2048);
+		break;
+	case 4:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= EQ_CNT_VAL(4096);
+		break;
+	default:
+		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_EQ_AUTOVALID;
+
+	dw5_flags |= CREATE_EQ_VALID;
+	dw6_flags &= (~CREATE_EQ_ARM);
+	eq->dw5_flags = cpu_to_le32(dw5_flags);
+	eq->dw6_flags = cpu_to_le32(dw6_flags);
+	eq->dw7_delaymulti = cpu_to_le32(CREATE_EQ_DELAYMULTI);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
+		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_create_mq_ext(struct sli4 *sli4, void *buf, size_t size,
+			     struct efc_dma *qmem,
+			     u16 cq_id)
+{
+	struct sli4_rqst_cmn_create_mq_ext *mq;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+	u16 dw6w1_flags = 0;
+
+	mq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(cmn_create_mq_ext),
+				 NULL);
+	if (!mq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&mq->hdr, CMN_CREATE_MQ_EXT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_create_mq_ext));
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	mq->num_pages = cpu_to_le16(num_pages);
+	switch (num_pages) {
+	case 1:
+		dw6w1_flags |= SLI4_MQE_SIZE_16;
+		break;
+	case 2:
+		dw6w1_flags |= SLI4_MQE_SIZE_32;
+		break;
+	case 4:
+		dw6w1_flags |= SLI4_MQE_SIZE_64;
+		break;
+	case 8:
+		dw6w1_flags |= SLI4_MQE_SIZE_128;
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
+
+	if (sli4->mq_create_version) {
+		mq->cq_id_v1 = cpu_to_le16(cq_id);
+		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
+	} else {
+		dw6w1_flags |= (cq_id << CREATE_MQEXT_CQID_SHIFT);
+	}
+	mq->dw7_val = cpu_to_le32(CREATE_MQEXT_VAL);
+
+	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		mq->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
+		mq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *qmem, u16 cq_id)
+{
+	struct sli4_rqst_wq_create *wq;
+	u32 p;
+	uintptr_t addr;
+	u32 page_size = 0;
+	u32 page_bytes = 0;
+	u32 n_wqe = 0;
+	u16 num_pages;
+
+	wq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(wq_create), NULL);
+	if (!wq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&wq->hdr, SLI4_OPC_WQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V1, CFG_RQST_PYLD_LEN(wq_create));
+	n_wqe = qmem->size / sli4->wqe_size;
+
+	switch (qmem->size) {
+	case 4096:
+	case 8192:
+	case 16384:
+	case 32768:
+		page_size = 1;
+		break;
+	case 65536:
+		page_size = 2;
+		break;
+	case 131072:
+		page_size = 4;
+		break;
+	case 262144:
+		page_size = 8;
+		break;
+	case 524288:
+		page_size = 10;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	page_bytes = page_size * SLI_PAGE_SIZE;
+
+	/* valid values for number of pages(num_pages): 1-8 */
+	num_pages = sli_page_count(qmem->size, page_bytes);
+	wq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_WQ_CREATE_MAX_PAGES)
+		return EFC_FAIL;
+
+	wq->cq_id = cpu_to_le16(cq_id);
+
+	wq->page_size = page_size;
+
+	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
+		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
+	else
+		wq->wqe_size_byte |= SLI4_WQE_SIZE;
+
+	wq->wqe_count = cpu_to_le16(n_wqe);
+
+	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
+		wq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		wq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
+		  struct efc_dma *qmem,
+		  u16 cq_id, u16 buffer_size)
+{
+	struct sli4_rqst_rq_create *rq;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_create), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, CFG_RQST_PYLD_LEN(rq_create));
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_RQ_CREATE_V0_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the log base 2 of the total number of entries
+	 */
+	rq->rqe_count_byte |= 31 - __builtin_clz(qmem->size / SLI4_RQE_SIZE);
+
+	if (buffer_size < SLI4_RQ_CREATE_V0_MIN_BUF_SIZE ||
+	    buffer_size > SLI4_RQ_CREATE_V0_MAX_BUF_SIZE) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+		       buffer_size,
+		       SLI4_RQ_CREATE_V0_MIN_BUF_SIZE,
+		       SLI4_RQ_CREATE_V0_MAX_BUF_SIZE);
+		return EFC_FAIL;
+	}
+	rq->buffer_size = cpu_to_le16(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *qmem, u16 cq_id,
+		     u16 buffer_size)
+{
+	struct sli4_rqst_rq_create_v1 *rq;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, size,
+				 SLI_CONFIG_PYLD_LENGTH(rq_create_v1), NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V1, CFG_RQST_PYLD_LEN(rq_create_v1));
+	/* Disable "no buffer warnings" to avoid Lancer bug */
+	rq->dim_dfd_dnb |= SLI4_RQ_CREATE_V1_DNB;
+
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_RQ_CREATE_V1_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid, max %d\n",
+			num_pages, SLI4_RQ_CREATE_V1_MAX_PAGES);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the total number of entries (note not lg2(# entries))
+	 */
+	rq->rqe_count = cpu_to_le16(qmem->size / SLI4_RQE_SIZE);
+
+	rq->rqe_size_byte |= SLI4_RQE_SIZE_8;
+
+	rq->page_size = SLI4_RQ_PAGE_SIZE_4096;
+
+	if (buffer_size < sli4->rq_min_buf_size ||
+	    buffer_size > sli4->rq_max_buf_size) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+		       buffer_size,
+				sli4->rq_min_buf_size,
+				sli4->rq_max_buf_size);
+		return EFC_FAIL;
+	}
+	rq->buffer_size = cpu_to_le32(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys;
+			p < num_pages;
+			p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_rq_create_v2(struct sli4 *sli4, u32 num_rqs,
+		     struct sli4_queue *qs[], u32 base_cq_id,
+		     u32 header_buffer_size,
+		     u32 payload_buffer_size, struct efc_dma *dma)
+{
+	struct sli4_rqst_rq_create_v2 *req = NULL;
+	u32 i, p, offset = 0;
+	u32 payload_size, page_count;
+	uintptr_t addr;
+	u32 num_pages;
+	__le32 req_len;
+
+	page_count =  sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE) * num_rqs;
+
+	/* Payload length must accommodate both request and response */
+	payload_size = max(CFG_RQST_CMDSZ(rq_create_v2) +
+			   SZ_DMAADDR * page_count,
+			   sizeof(struct sli4_rsp_cmn_create_queue_set));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+			       payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	req_len = CFG_RQST_PYLD_LEN_VAR(rq_create_v2, SZ_DMAADDR * page_count);
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V2, req_len);
+	/* Fill Payload fields */
+	req->dim_dfd_dnb  |= SLI4_RQCREATEV2_DNB;
+	num_pages = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE);
+	req->num_pages	   = cpu_to_le16(num_pages);
+	req->rqe_count     = cpu_to_le16(qs[0]->dma.size / SLI4_RQE_SIZE);
+	req->rqe_size_byte |= SLI4_RQE_SIZE_8;
+	req->page_size     = SLI4_RQ_PAGE_SIZE_4096;
+	req->rq_count      = num_rqs;
+	req->base_cq_id    = cpu_to_le16(base_cq_id);
+	req->hdr_buffer_size     = cpu_to_le16(header_buffer_size);
+	req->payload_buffer_size = cpu_to_le16(payload_buffer_size);
+
+	for (i = 0; i < num_rqs; i++) {
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages;
+		     p++, addr += SLI_PAGE_SIZE) {
+			req->page_phys_addr[offset].low =
+					cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+					cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+__sli_queue_destroy(struct sli4 *sli4, struct sli4_queue *q)
+{
+	if (!q->dma.size)
+		return;
+
+	dma_free_coherent(&sli4->pcidev->dev, q->dma.size,
+			  q->dma.virt, q->dma.phys);
+	memset(&q->dma, 0, sizeof(struct efc_dma));
+}
+
+int
+__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q,
+		 u32 qtype, size_t size, u32 n_entries,
+		      u32 align)
+{
+	if (!q->dma.virt || size != q->size ||
+	    n_entries != q->length) {
+		if (q->dma.size)
+			__sli_queue_destroy(sli4, q);
+
+		memset(q, 0, sizeof(struct sli4_queue));
+
+		q->dma.size = size * n_entries;
+		q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev,
+						 q->dma.size, &q->dma.phys,
+						 GFP_DMA);
+		if (!q->dma.virt) {
+			memset(&q->dma, 0, sizeof(struct efc_dma));
+			efc_log_err(sli4, "%s allocation failed\n",
+			       SLI_QNAME[qtype]);
+			return EFC_FAIL;
+		}
+
+		memset(q->dma.virt, 0, size * n_entries);
+
+		spin_lock_init(&q->lock);
+
+		q->type = qtype;
+		q->size = size;
+		q->length = n_entries;
+
+		if (q->type == SLI_QTYPE_EQ || q->type == SLI_QTYPE_CQ) {
+			/* For prism, phase will be flipped after
+			 * a sweep through eq and cq
+			 */
+			q->phase = 1;
+		}
+
+		/* Limit to half the queue size per interrupt */
+		q->proc_limit = n_entries / 2;
+
+		switch (q->type) {
+		case SLI_QTYPE_EQ:
+			q->posted_limit = q->length / 2;
+			break;
+		default:
+			q->posted_limit = 64;
+			break;
+		}
+	} else {
+		efc_log_err(sli4, "%s failed\n", __func__);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q,
+		u32 n_entries, u32 buffer_size,
+		struct sli4_queue *cq, bool is_hdr)
+{
+	if (__sli_queue_init(sli4, q, SLI_QTYPE_RQ, SLI4_RQE_SIZE,
+			     n_entries, SLI_PAGE_SIZE))
+		return EFC_FAIL;
+
+	if (!sli_cmd_rq_create_v1(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  &q->dma, cq->id, buffer_size)) {
+		if (__sli_create_queue(sli4, q)) {
+			efc_log_info(sli4, "Create queue failed %d\n", q->id);
+			goto error;
+		}
+		if (is_hdr && (q->id & 1)) {
+			efc_log_info(sli4, "bad header RQ_ID %d\n", q->id);
+			goto error;
+		} else if (!is_hdr && (q->id & 1) == 0) {
+			efc_log_info(sli4, "bad data RQ_ID %d\n", q->id);
+			goto error;
+		}
+	} else {
+		goto error;
+	}
+	if (is_hdr)
+		q->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
+	else
+		q->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
+	return EFC_SUCCESS;
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+int
+sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs,
+		    struct sli4_queue *qs[], u32 base_cq_id,
+		    u32 n_entries, u32 header_buffer_size,
+		    u32 payload_buffer_size)
+{
+	u32 i;
+	struct efc_dma dma = {0};
+	struct sli4_rsp_cmn_create_queue_set *rsp = NULL;
+	void __iomem *db_regaddr = NULL;
+	u32 num_rqs = num_rq_pairs * 2;
+
+	for (i = 0; i < num_rqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_RQ,
+				     SLI4_RQE_SIZE, n_entries,
+				     SLI_PAGE_SIZE)) {
+			goto error;
+		}
+	}
+
+	if (sli_cmd_rq_create_v2(sli4, num_rqs, qs, base_cq_id,
+			       header_buffer_size, payload_buffer_size, &dma)) {
+		goto error;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_err(sli4, "bootstrap mailbox write failed RQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_RQ_DB_REG;
+
+	rsp = dma.virt;
+	if (rsp->hdr.status) {
+		efc_log_err(sli4, "bad create RQSet status=%#x addl=%#x\n",
+		       rsp->hdr.status, rsp->hdr.additional_status);
+		goto error;
+	} else {
+		for (i = 0; i < num_rqs; i++) {
+			qs[i]->id = i + le16_to_cpu(rsp->q_id);
+			if ((qs[i]->id & 1) == 0)
+				qs[i]->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
+			else
+				qs[i]->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
+
+			qs[i]->db_regaddr = db_regaddr;
+		}
+	}
+
+	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_rqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+static int
+sli_res_sli_config(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_sli_config *sli_config = buf;
+
+	/* sanity check */
+	if (!buf || sli_config->hdr.command !=
+		    MBX_CMD_SLI_CONFIG) {
+		efc_log_err(sli4, "bad parameter buf=%p cmd=%#x\n", buf,
+		       buf ? sli_config->hdr.command : -1);
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(sli_config->hdr.status))
+		return le16_to_cpu(sli_config->hdr.status);
+
+	if (le32_to_cpu(sli_config->dw1_flags) & SLI4_SLICONF_EMB)
+		return sli_config->payload.embed[4];
+
+	efc_log_info(sli4, "external buffers not supported\n");
+	return EFC_FAIL;
+}
+
+int
+__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q)
+{
+	struct sli4_rsp_cmn_create_queue *res_q = NULL;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail %s\n",
+			SLI_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status create %s\n",
+		       SLI_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	res_q = (void *)((u8 *)sli4->bmbx.virt +
+			offsetof(struct sli4_cmd_sli_config, payload));
+
+	if (res_q->hdr.status) {
+		efc_log_err(sli4, "bad create %s status=%#x addl=%#x\n",
+		       SLI_QNAME[q->type], res_q->hdr.status,
+			    res_q->hdr.additional_status);
+		return EFC_FAIL;
+	}
+	q->id = le16_to_cpu(res_q->q_id);
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_EQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_EQCQ_DB_REG;
+		break;
+	case SLI_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_EQCQ_DB_REG;
+		break;
+	case SLI_QTYPE_MQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_MQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_MQ_DB_REG;
+		break;
+	case SLI_QTYPE_RQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_RQ_DB_REG;
+		break;
+	case SLI_QTYPE_WQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_WQ_DB_REG;
+		else
+			q->db_regaddr =	sli4->reg[0] + SLI4_IO_WQ_DB_REG;
+		break;
+	default:
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype)
+{
+	u32 size = 0;
+
+	switch (qtype) {
+	case SLI_QTYPE_EQ:
+		size = sizeof(u32);
+		break;
+	case SLI_QTYPE_CQ:
+		size = 16;
+		break;
+	case SLI_QTYPE_MQ:
+		size = 256;
+		break;
+	case SLI_QTYPE_WQ:
+		size = sli4->wqe_size;
+		break;
+	case SLI_QTYPE_RQ:
+		size = SLI4_RQE_SIZE;
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		return -1;
+	}
+	return size;
+}
+
+int
+sli_queue_alloc(struct sli4 *sli4, u32 qtype,
+		struct sli4_queue *q, u32 n_entries,
+		     struct sli4_queue *assoc)
+{
+	int size;
+	u32 align = 0;
+
+	/* get queue size */
+	size = sli_get_queue_entry_size(sli4, qtype);
+	if (size < 0)
+		return EFC_FAIL;
+	align = SLI_PAGE_SIZE;
+
+	if (__sli_queue_init(sli4, q, qtype, size, n_entries, align)) {
+		efc_log_err(sli4, "%s allocation failed\n",
+		       SLI_QNAME[qtype]);
+		return EFC_FAIL;
+	}
+
+	switch (qtype) {
+	case SLI_QTYPE_EQ:
+		if (!sli_cmd_common_create_eq(sli4, sli4->bmbx.virt,
+					     SLI4_BMBX_SIZE, &q->dma)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+
+		break;
+	case SLI_QTYPE_CQ:
+		if (!sli_cmd_common_create_cq(sli4, sli4->bmbx.virt,
+					     SLI4_BMBX_SIZE, &q->dma,
+						assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+		break;
+	case SLI_QTYPE_MQ:
+		assoc->u.flag.dword |= SLI4_QUEUE_FLAG_MQ;
+		if (!sli_cmd_common_create_mq_ext(sli4, sli4->bmbx.virt,
+						  SLI4_BMBX_SIZE, &q->dma,
+						  assoc->id)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+
+		break;
+	case SLI_QTYPE_WQ:
+		if (!sli_cmd_wq_create(sli4, sli4->bmbx.virt,
+					 SLI4_BMBX_SIZE, &q->dma,
+					assoc ? assoc->id : 0)) {
+			if (__sli_create_queue(sli4, q)) {
+				efc_log_err(sli4, "create %s failed\n",
+					    SLI_QNAME[qtype]);
+				goto error;
+			}
+		} else {
+			efc_log_err(sli4, "cannot create %s\n",
+				    SLI_QNAME[qtype]);
+			goto error;
+		}
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		goto error;
+	}
+
+	return EFC_SUCCESS;
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+static int sli_cmd_cq_set_create(struct sli4 *sli4,
+				 struct sli4_queue *qs[], u32 num_cqs,
+				 struct sli4_queue *eqs[],
+				 struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_create_cq_set_v0 *req = NULL;
+	uintptr_t addr;
+	u32 i, offset = 0, page_bytes = 0, payload_size;
+	u32 p = 0, page_size = 0, n_cqe = 0, num_pages_cq;
+	u32 dw5_flags = 0;
+	u16 dw6w1_flags = 0;
+	__le32 req_len;
+
+	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages_cq = sli_page_count(qs[0]->dma.size, page_bytes);
+	payload_size = max(CFG_RQST_CMDSZ(cmn_create_cq_set_v0) +
+			   (SZ_DMAADDR * num_pages_cq * num_cqs),
+			   sizeof(struct sli4_rsp_cmn_create_queue_set));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	req_len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_set_v0,
+					SZ_DMAADDR * num_pages_cq * num_cqs);
+	sli_cmd_fill_hdr(&req->hdr, CMN_CREATE_CQ_SET, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, req_len);
+	req->page_size = page_size;
+
+	req->num_pages = cpu_to_le16(num_pages_cq);
+	switch (num_pages_cq) {
+	case 1:
+		dw5_flags |= CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= CQ_CNT_VAL(LARGE);
+		dw6w1_flags |= (n_cqe & CREATE_CQSETV0_CQE_COUNT);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages_cq);
+		return EFC_FAIL;
+	}
+
+	dw5_flags |= CREATE_CQSETV0_EVT;
+	dw5_flags |= CREATE_CQSETV0_VALID;
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= CREATE_CQSETV0_AUTOVALID;
+
+	dw6w1_flags &= (~CREATE_CQSETV0_ARM);
+
+	req->dw5_flags = cpu_to_le32(dw5_flags);
+	req->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+
+	req->num_cq_req = cpu_to_le16(num_cqs);
+
+	/* Fill page addresses of all the CQs. */
+	for (i = 0; i < num_cqs; i++) {
+		req->eq_id[i] = cpu_to_le16(eqs[i]->id);
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages_cq;
+		     p++, addr += page_bytes) {
+			req->page_phys_addr[offset].low =
+				cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+				cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[],
+		 u32 num_cqs, u32 n_entries, struct sli4_queue *eqs[])
+{
+	u32 i;
+	struct efc_dma dma = {0};
+	struct sli4_rsp_cmn_create_queue_set *res = NULL;
+	void __iomem *db_regaddr = NULL;
+
+	/* Align the queue DMA memory */
+	for (i = 0; i < num_cqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_CQ,
+				     SLI4_CQE_BYTES,
+					  n_entries, SLI_PAGE_SIZE)) {
+			efc_log_err(sli4, "Queue init failed.\n");
+			goto error;
+		}
+	}
+
+	if (sli_cmd_cq_set_create(sli4, qs, num_cqs, eqs, &dma))
+		goto error;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail CQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
+
+	res = dma.virt;
+	if (res->hdr.status) {
+		efc_log_err(sli4, "bad create CQSet status=%#x addl=%#x\n",
+		       res->hdr.status, res->hdr.additional_status);
+		goto error;
+	} else {
+		/* Check if we got all requested CQs. */
+		if (le16_to_cpu(res->num_q_allocated) != num_cqs) {
+			efc_log_crit(sli4, "Requested count CQs doesn't match.\n");
+			goto error;
+		}
+		/* Fill the resp cq ids. */
+		for (i = 0; i < num_cqs; i++) {
+			qs[i]->id = le16_to_cpu(res->q_id) + i;
+			qs[i]->db_regaddr = db_regaddr;
+		}
+	}
+
+	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_cqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+static int
+sli_cmd_common_destroy_q(struct sli4 *sli4, u8 opc, u8 subsystem, u16 q_id)
+{
+	struct sli4_rqst_cmn_destroy_q *req = NULL;
+
+	/* Payload length must accommodate both request and response */
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_destroy_q), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, opc, subsystem,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_destroy_q));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_free(struct sli4 *sli4, struct sli4_queue *q,
+	       u32 destroy_queues, u32 free_memory)
+{
+	int rc = EFC_SUCCESS;
+	u8 opcode, subsystem;
+	struct sli4_rsp_hdr *res;
+
+	if (!q) {
+		efc_log_err(sli4, "bad parameter sli4=%p q=%p\n", sli4, q);
+		return EFC_FAIL;
+	}
+
+	if (!destroy_queues)
+		goto free_mem;
+
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		opcode = CMN_DESTROY_EQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI_QTYPE_CQ:
+		opcode = CMN_DESTROY_CQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI_QTYPE_MQ:
+		opcode = CMN_DESTROY_MQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI_QTYPE_WQ:
+		opcode = SLI4_OPC_WQ_DESTROY;
+		subsystem = SLI4_SUBSYSTEM_FC;
+		break;
+	case SLI_QTYPE_RQ:
+		opcode = SLI4_OPC_RQ_DESTROY;
+		subsystem = SLI4_SUBSYSTEM_FC;
+		break;
+	default:
+		efc_log_info(sli4, "bad queue type %d\n", q->type);
+		return EFC_FAIL;
+	}
+
+	rc = sli_cmd_common_destroy_q(sli4, opcode, subsystem, q->id);
+	if (rc)
+		goto free_mem;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox fail destroy %s\n",
+			     SLI_QNAME[q->type]);
+	} else if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status %s\n", SLI_QNAME[q->type]);
+	} else {
+		res = (void *)((u8 *)sli4->bmbx.virt +
+				offsetof(struct sli4_cmd_sli_config, payload));
+
+		if (res->status) {
+			efc_log_err(sli4, "destroy %s st=%#x addl=%#x\n",
+				    SLI_QNAME[q->type],	res->status,
+				    res->additional_status);
+		} else {
+			rc = EFC_SUCCESS;
+		}
+	}
+
+free_mem:
+	if (free_memory)
+		__sli_queue_destroy(sli4, q);
+
+	return rc;
+}
+
+int
+sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
+	else
+		val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
+
+	writel(val, q->db_regaddr);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	switch (q->type) {
+	case SLI_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
+		else
+			val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	case SLI_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = SLI4_IF6_CQ_DOORBELL(q->n_posted, q->id, a);
+		else
+			val = SLI4_CQ_DOORBELL(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	default:
+		efc_log_info(sli4, "should only be used for EQ/CQ, not %s\n",
+			SLI_QNAME[q->type]);
+	}
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_wq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	if (sli4->perf_wq_id_association)
+		sli_set_wq_id_association(entry, q->id);
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		/* non-dpp write for iftype = 6 */
+		val = SLI4_WQ_DOORBELL(q->n_posted, 0, q->id);
+	else
+		val = SLI4_WQ_DOORBELL(q->n_posted, q->index, q->id);
+
+	writel(val, q->db_regaddr);
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+
+	return qindex;
+}
+
+int
+sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&q->lock, flags);
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	val = SLI4_MQ_DOORBELL(q->n_posted, q->id);
+	writel(val, q->db_regaddr);
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return qindex;
+}
+
+int
+sli_rq_write(struct sli4 *sli4, struct sli4_queue *q,
+	     u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex, n_posted;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	q->n_posted = 1;
+
+	n_posted = q->n_posted;
+
+	/*
+	 * In RQ-pair, an RQ either contains the FC header
+	 * (i.e. is_hdr == TRUE) or the payload.
+	 *
+	 * Don't ring doorbell for payload RQ
+	 */
+	if (!(q->u.flag.dword & SLI4_QUEUE_FLAG_HDR))
+		goto skip;
+
+	/*
+	 * Some RQ cannot be incremented one entry at a time.
+	 * Instead, the driver collects a number of entries
+	 * and updates the RQ in batches.
+	 */
+	if (q->u.flag.dword & SLI4_QUEUE_FLAG_RQBATCH) {
+		if (((q->index + q->n_posted) %
+		    SLI4_QUEUE_RQ_BATCH)) {
+			goto skip;
+		}
+		n_posted = SLI4_QUEUE_RQ_BATCH;
+	}
+
+	val = SLI4_RQ_DOORBELL(n_posted, q->id);
+	writel(val, q->db_regaddr);
+skip:
+	q->index = (q->index + q->n_posted) & (q->length - 1);
+	q->n_posted = 0;
+
+	return qindex;
+}
+
+int
+sli_eq_read(struct sli4 *sli4,
+	    struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long flags = 0;
+	u8 clear = false, valid = false;
+	u16 wflags = 0;
+
+	clear = (sli4->if_type == SLI4_INTF_IF_TYPE_6) ? false : true;
+
+	qindex = &q->index;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if eqe is valid */
+	wflags = le16_to_cpu(((struct sli4_eqe *)qe)->dw0w0_flags);
+	valid = ((wflags & SLI4_EQE_VALID) == q->phase);
+	if (!valid) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (valid && clear) {
+		wflags &= ~SLI4_EQE_VALID;
+		((struct sli4_eqe *)qe)->dw0w0_flags =
+						cpu_to_le16(wflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cq_read(struct sli4 *sli4,
+	    struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long	flags = 0;
+	u8 clear = false;
+	u32 dwflags = 0;
+	bool valid = false, valid_bit_set = false;
+
+	clear = (sli4->if_type == SLI4_INTF_IF_TYPE_6) ? false : true;
+
+	qindex = &q->index;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if cqe is valid */
+	dwflags = le32_to_cpu(((struct sli4_mcqe *)qe)->dw3_flags);
+	valid_bit_set = (dwflags & SLI4_MCQE_VALID) != 0;
+
+	valid = (valid_bit_set == q->phase);
+	if (!valid) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (valid && clear) {
+		dwflags &= ~SLI4_MCQE_VALID;
+		((struct sli4_mcqe *)qe)->dw3_flags =
+					cpu_to_le32(dwflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_mq_read(struct sli4 *sli4,
+	    struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 *qindex = NULL;
+	unsigned long flags = 0;
+
+	qindex = &q->u.r_idx;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += *qindex * q->size;
+
+	/* Check if mqe is valid */
+	if (q->index == q->u.r_idx) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	memcpy(entry, qe, q->size);
+	*qindex = (*qindex + 1) & (q->length - 1);
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id)
+{
+	struct sli4_eqe *eqe = (void *)buf;
+	int rc = EFC_SUCCESS;
+	u16 flags = 0;
+	u16 majorcode;
+	u16 minorcode;
+
+	if (!buf || !cq_id) {
+		efc_log_err(sli4, "bad parameters sli4=%p buf=%p cq_id=%p\n",
+		       sli4, buf, cq_id);
+		return EFC_FAIL;
+	}
+
+	flags = le16_to_cpu(eqe->dw0w0_flags);
+	majorcode = (flags & SLI4_EQE_MJCODE) >> 1;
+	minorcode = (flags & SLI4_EQE_MNCODE) >> 4;
+	switch (majorcode) {
+	case SLI4_MAJOR_CODE_STANDARD:
+		*cq_id = le16_to_cpu(eqe->resource_id);
+		break;
+	case SLI4_MAJOR_CODE_SENTINEL:
+		efc_log_info(sli4, "sentinel EQE\n");
+		rc = SLI4_EQE_STATUS_EQ_FULL;
+		break;
+	default:
+		efc_log_info(sli4, "Unsupported EQE: major %x minor %x\n",
+			majorcode, minorcode);
+		rc = EFC_FAIL;
+	}
+
+	return rc;
+}
+
+/* Parse a CQ entry to retrieve the event type and the associated queue */
+int
+sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
+	     enum sli4_qentry *etype, u16 *q_id)
+{
+	int rc = EFC_SUCCESS;
+
+	if (!cq || !cqe || !etype) {
+		efc_log_err(sli4, "bad params sli4=%p cq=%p cqe=%p etype=%p q_id=%p\n",
+		       sli4, cq, cqe, etype, q_id);
+		return -EINVAL;
+	}
+
+	if (cq->u.flag.dword & SLI4_QUEUE_FLAG_MQ) {
+		struct sli4_mcqe	*mcqe = (void *)cqe;
+
+		if (le32_to_cpu(mcqe->dw3_flags) & SLI4_MCQE_AE) {
+			*etype = SLI_QENTRY_ASYNC;
+		} else {
+			*etype = SLI_QENTRY_MQ;
+			rc = sli_cqe_mq(sli4, mcqe);
+		}
+		*q_id = -1;
+	} else {
+		rc = sli_fc_cqe_parse(sli4, cq, cqe, etype, q_id);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index b360d809f144..13f5a0d8d31c 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -3687,4 +3687,13 @@ struct sli4 {
 	u32			vpd_length;
 };
 
+static inline void
+sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
+{
+	hdr->opcode = opc;
+	hdr->subsystem = sub;
+	hdr->dw3_version = cpu_to_le32(ver);
+	hdr->request_length = len;
+}
+
 #endif /* !_SLI4_H */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (3 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 14:34   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands James Smart
                   ` (25 subsequent siblings)
  30 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines to create different WQEs and adds
APIs to issue iread, iwrite, treceive, tsend and other work queue
entries.
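
As a rough illustration of the intended calling convention (the efct
driver layers XRI and queue bookkeeping on top of this; the function
name send_abort() and the local buffer sizing below are hypothetical
and shown only as a sketch), a WQE is built into a caller-supplied
buffer and then posted to a work queue with sli_wq_write():

  /* hypothetical caller: build an ABORT WQE and post it to a WQ */
  static int send_abort(struct sli4 *sli4, struct sli4_queue *wq,
                        u16 xri, u16 req_tag, u16 cq_id)
  {
          /* assumed large enough to hold any WQE on this adapter */
          u8 wqe[SLI4_WQE_EXT_BYTES] = { 0 };

          /* abort by XRI tag without sending an ABTS on the wire */
          if (sli_abort_wqe(sli4, wqe, sli4->wqe_size, SLI_ABORT_XRI,
                            false, xri, 0, req_tag, cq_id))
                  return EFC_FAIL;

          /* copy the WQE into the next ring slot and ring the WQ
           * doorbell; the return value is the ring index consumed
           */
          sli_wq_write(sli4, wq, wqe);

          return EFC_SUCCESS;
  }

Serialization of sli_wq_write() against other users of the same WQ is
assumed to be handled by the caller; sli_mq_write() takes the queue
lock itself.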

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Return defined return values EFC_SUCCESS/FAIL
  Removed HLM argument for WQE calls.
  Reduced args for sli_fcp_treceive64_wqe(),
    sli_fcp_cont_treceive64_wqe(), sli_fcp_trsp64_wqe(). Defined new
    structure sli_fcp_tgt_params.
  Removed sli_fc_process_link_state function which was not used for FC.
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1565 ++++++++++++++++++++++++++++++++++++
 1 file changed, 1565 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 224a06610c78..0365d7943468 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -1538,3 +1538,1568 @@ sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
 
 	return rc;
 }
+
+/* Write an ABORT_WQE work queue entry */
+int
+sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
+	      enum sli4_abort_type type, bool send_abts, u32 ids,
+	      u32 mask, u16 tag, u16 cq_id)
+{
+	struct sli4_abort_wqe	*abort = buf;
+
+	memset(buf, 0, size);
+
+	switch (type) {
+	case SLI_ABORT_XRI:
+		abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
+		if (mask) {
+			efc_log_warn(sli4, "%#x aborting XRI %#x warning non-zero mask",
+				mask, ids);
+			mask = 0;
+		}
+		break;
+	case SLI_ABORT_ABORT_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_ABORT_TAG;
+		break;
+	case SLI_ABORT_REQUEST_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_REQUEST_TAG;
+		break;
+	default:
+		efc_log_info(sli4, "unsupported type %#x\n", type);
+		return EFC_FAIL;
+	}
+
+	abort->ia_ir_byte |= send_abts ? 0 : 1;
+
+	/* Suppress ABTS retries */
+	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
+
+	abort->t_mask = cpu_to_le32(mask);
+	abort->t_tag  = cpu_to_le32(ids);
+	abort->command = SLI4_WQE_ABORT;
+	abort->request_tag = cpu_to_le16(tag);
+
+	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
+
+	abort->cq_id = cpu_to_le16(cq_id);
+	abort->cmdtype_wqec_byte |= SLI4_CMD_ABORT_WQE;
+
+	return EFC_SUCCESS;
+}
+
+/* Write an ELS_REQUEST64_WQE work queue entry */
+int
+sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *sgl,
+		      u8 req_type, u32 req_len, u32 max_rsp_len,
+		      u8 timeout, u16 xri, u16 tag,
+		      u16 cq_id, u16 rnodeindicator, u16 sportindicator,
+		      bool rnodeattached, u32 rnode_fcid, u32 sport_fcid)
+{
+	struct sli4_els_request64_wqe	*els = buf;
+	struct sli4_sge *sge = sgl->virt;
+	bool is_fabric = false;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, size);
+
+	bptr = &els->els_request_payload;
+	if (sli4->sgl_pre_registered) {
+		els->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_REQ_WQE_XBL;
+
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    ((2 * sizeof(struct sli4_sge)) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	els->els_request_payload_length = cpu_to_le32(req_len);
+	els->max_response_payload_length = cpu_to_le32(max_rsp_len);
+
+	els->xri_tag = cpu_to_le16(xri);
+	els->timer = timeout;
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_REQUEST64;
+
+	els->request_tag = cpu_to_le16(tag);
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_IOD;
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_QOSD;
+
+	/* figure out the ELS_ID value from the request buffer */
+
+	switch (req_type) {
+	case ELS_LOGO:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_LOGO << SLI4_REQ_WQE_ELSID_SHFT;
+		if (rnodeattached) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(rnodeindicator);
+		} else {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag =
+				cpu_to_le16(sportindicator);
+		}
+		if (rnode_fcid == FC_FID_FLOGI)
+			is_fabric = true;
+		break;
+	case ELS_FDISC:
+		if (rnode_fcid == FC_FID_FLOGI)
+			is_fabric = true;
+		if (sport_fcid == 0) {
+			els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_FDISC << SLI4_REQ_WQE_ELSID_SHFT;
+			is_fabric = true;
+		} else {
+			els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		}
+		els->ct_byte |= (SLI4_GENERIC_CONTEXT_VPI <<
+				 SLI4_REQ_WQE_CT_SHFT);
+		els->context_tag = cpu_to_le16(sportindicator);
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		break;
+	case ELS_FLOGI:
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		/*
+		 * Set SP here since a REG_VPI has not been done yet.
+		 * This may need to be cleared once the VFI/VPI
+		 * registrations have completed.
+		 *
+		 * Use the FC_ID of the SPORT if it has been allocated;
+		 * otherwise use an S_ID of zero.
+		 */
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		if (sport_fcid != U32_MAX)
+			els->sid_sp_dword |= cpu_to_le32(sport_fcid);
+		break;
+	case ELS_PLOGI:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_PLOGI << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		break;
+	case ELS_SCR:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(sportindicator);
+		break;
+	default:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		if (rnodeattached) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(rnodeindicator);
+		} else {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag =
+				cpu_to_le16(sportindicator);
+		}
+		break;
+	}
+
+	if (is_fabric)
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_FABRIC;
+	else
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_NON_FABRIC;
+
+	els->cq_id = cpu_to_le16(cq_id);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) !=
+					SLI4_GENERIC_CONTEXT_RPI)
+		els->remote_id_dword = cpu_to_le32(rnode_fcid);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) ==
+					SLI4_GENERIC_CONTEXT_VPI)
+		els->temporary_rpi = cpu_to_le16(rnodeindicator);
+
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_ICMND64_WQE work queue entry */
+int
+sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, u32 rnode_fcid, u8 timeout)
+{
+	struct sli4_fcp_icmnd64_wqe *icmnd = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 len;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &icmnd->bde;
+	if (sli4->sgl_pre_registered) {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_ICMD_WQE_XBL;
+
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	icmnd->payload_offset_length = cpu_to_le16(len);
+	icmnd->xri_tag = cpu_to_le16(xri);
+	icmnd->context_tag = cpu_to_le16(rpi);
+	icmnd->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	icmnd->class_pu_byte |= 2 << SLI4_ICMD_WQE_PU_SHFT;
+	icmnd->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	icmnd->command = SLI4_WQE_FCP_ICMND64;
+	icmnd->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_ICMD_WQE_CT_SHFT;
+
+	icmnd->abort_tag = cpu_to_le32(xri);
+
+	icmnd->request_tag = cpu_to_le16(tag);
+	icmnd->len_loc1_byte |= SLI4_ICMD_WQE_LEN_LOC_BIT1;
+	icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_LEN_LOC_BIT2;
+	icmnd->cmd_type_byte |= SLI4_CMD_FCP_ICMND64_WQE;
+	icmnd->cq_id = cpu_to_le16(cq_id);
+
+	return  EFC_SUCCESS;
+}
+
+/* Write an FCP_IREAD64_WQE work queue entry */
+int
+sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    struct efc_dma *sgl, u32 first_data_sge,
+		    u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, u32 rnode_fcid,
+		    u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iread64_wqe *iread = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 sge_flags, len;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+
+	sge = sgl->virt;
+	bptr = &iread->bde;
+	if (sli4->sgl_pre_registered) {
+		iread->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IR_WQE_XBL;
+
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		len = le32_to_cpu(sge[0].buffer_length);
+		iread->fcp_cmd_buffer_length = cpu_to_le16(len);
+
+		sge_flags = le32_to_cpu(sge[1].dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	iread->payload_offset_length = cpu_to_le16(len);
+	iread->total_transfer_length = cpu_to_le32(xfer_len);
+
+	iread->xri_tag = cpu_to_le16(xri);
+	iread->context_tag = cpu_to_le16(rpi);
+
+	iread->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	iread->class_pu_byte |= 2 << SLI4_IR_WQE_PU_SHFT;
+	iread->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iread->command = SLI4_WQE_FCP_IREAD64;
+	iread->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_IR_WQE_CT_SHFT;
+	iread->dif_ct_bs_byte |= dif;
+	iread->dif_ct_bs_byte |= bs << SLI4_IR_WQE_BS_SHFT;
+
+	iread->abort_tag = cpu_to_le32(xri);
+
+	iread->request_tag = cpu_to_le16(tag);
+	iread->len_loc1_byte |= SLI4_IR_WQE_LEN_LOC_BIT1;
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_LEN_LOC_BIT2;
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_IOD;
+	iread->cmd_type_byte |= SLI4_CMD_FCP_IREAD64_WQE;
+	iread->cq_id = cpu_to_le16(cq_id);
+
+	if (sli4->perf_hint) {
+		bptr = &iread->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			  (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return  EFC_SUCCESS;
+}
+
+/* Write an FCP_IWRITE64_WQE work queue entry */
+int
+sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *sgl,
+		     u32 first_data_sge, u32 xfer_len,
+		     u32 first_burst, u16 xri, u16 tag,
+		     u16 cq_id, u32 rpi,
+		     u32 rnode_fcid,
+		     u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iwrite64_wqe *iwrite = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 sge_flags, min, len;
+
+	memset(buf, 0, size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &iwrite->bde;
+	if (sli4->sgl_pre_registered) {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IWR_WQE_XBL;
+
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low  =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		len = le32_to_cpu(sge[0].buffer_length);
+		iwrite->fcp_cmd_buffer_length = cpu_to_le16(len);
+		sge_flags = le32_to_cpu(sge[1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_TYPE_MASK;
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	iwrite->payload_offset_length = cpu_to_le16(len);
+	iwrite->total_transfer_length = cpu_to_le16(xfer_len);
+	min = (xfer_len < first_burst) ? xfer_len : first_burst;
+	iwrite->initial_transfer_length = cpu_to_le16(min);
+
+	iwrite->xri_tag = cpu_to_le16(xri);
+	iwrite->context_tag = cpu_to_le16(rpi);
+
+	iwrite->timer = timeout;
+	/* WQE word 4 contains read transfer length */
+	iwrite->class_pu_byte |= 2 << SLI4_IWR_WQE_PU_SHFT;
+	iwrite->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iwrite->command = SLI4_WQE_FCP_IWRITE64;
+	iwrite->dif_ct_bs_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_IWR_WQE_CT_SHFT;
+	iwrite->dif_ct_bs_byte |= dif;
+	iwrite->dif_ct_bs_byte |= bs << SLI4_IWR_WQE_BS_SHFT;
+
+	iwrite->abort_tag = cpu_to_le32(xri);
+
+	iwrite->request_tag = cpu_to_le16(tag);
+	iwrite->len_loc1_byte |= SLI4_IWR_WQE_LEN_LOC_BIT1;
+	iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_LEN_LOC_BIT2;
+	iwrite->cmd_type_byte |= SLI4_CMD_FCP_IWRITE64_WQE;
+	iwrite->cq_id = cpu_to_le16(cq_id);
+
+	if (sli4->perf_hint) {
+		bptr = &iwrite->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			 (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return  EFC_SUCCESS;
+}
+
+/* Write an FCP_TRECEIVE64_WQE work queue entry */
+int
+sli_fcp_treceive64_wqe(struct sli4 *sli, void *buf,
+		       struct efc_dma *sgl,
+		       u32 first_data_sge,
+		       u32 xfer_len, u16 xri, u16 tag,
+		       u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		       struct sli_fcp_tgt_params *params)
+{
+	struct sli4_fcp_treceive64_wqe *trecv = buf;
+	struct sli4_fcp_128byte_wqe *trecv_128 = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &trecv->bde;
+	if (sli->sgl_pre_registered) {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_TRCV_WQE_XBL;
+
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (le32_to_cpu(sge[0].buffer_length)
+					& SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trecv->payload_offset_length = sge[0].buffer_length;
+	} else {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
+			trecv->qosd_xbl_hlm_iod_dbde_wqes |=
+							SLI4_TRCV_WQE_DBDE;
+			bptr->bde_type_buflen =
+			      cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+					  (le32_to_cpu(sge[2].buffer_length)
+					  & SLI4_BDE_MASK_BUFFER_LEN));
+
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				(sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	trecv->relative_offset = cpu_to_le32(params->offset);
+
+	if (params->flags & SLI4_IO_CONTINUATION)
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_XC;
+
+	trecv->xri_tag = cpu_to_le16(xri);
+
+	trecv->context_tag = cpu_to_le16(rpi);
+
+	/* WQE uses relative offset */
+	trecv->class_ar_pu_byte |= 1 << SLI4_TRCV_WQE_PU_SHFT;
+
+	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		trecv->class_ar_pu_byte |= SLI4_TRCV_WQE_AR;
+
+	trecv->command = SLI4_WQE_FCP_TRECEIVE64;
+	trecv->class_ar_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	trecv->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_TRCV_WQE_CT_SHFT;
+	trecv->dif_ct_bs_byte |= bs << SLI4_TRCV_WQE_BS_SHFT;
+
+	trecv->remote_xid = cpu_to_le16(params->ox_id);
+
+	trecv->request_tag = cpu_to_le16(tag);
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_IOD;
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_LEN_LOC_BIT2;
+
+	trecv->cmd_type_byte |= SLI4_CMD_FCP_TRECEIVE64_WQE;
+
+	trecv->cq_id = cpu_to_le16(cq_id);
+
+	trecv->fcp_data_receive_length = cpu_to_le32(xfer_len);
+
+	if (sli->perf_hint) {
+		bptr = &trecv->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (params->cs_ctl & SLI4_MASK_CCP) {
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_CCPE;
+		trecv->ccp = (params->cs_ctl & SLI4_MASK_CCP);
+	}
+
+	if (params->app_id && sli->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trecv->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trecv->lloc1_appid |= SLI4_TRCV_WQE_APPID;
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_WQES;
+		trecv_128->dw[31] = params->app_id;
+	}
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_CONT_TRECEIVE64_WQE work queue entry */
+int
+sli_fcp_cont_treceive64_wqe(struct sli4 *sli, void *buf,
+			    struct efc_dma *sgl, u32 first_data_sge,
+			    u32 xfer_len, u16 xri, u16 sec_xri, u16 tag,
+			    u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+			    struct sli_fcp_tgt_params *params)
+{
+	int rc;
+
+	rc = sli_fcp_treceive64_wqe(sli, buf, sgl, first_data_sge,
+				    xfer_len, xri, tag, cq_id,
+				    rpi, rnode_fcid, dif, bs, params);
+	if (rc == 0) {
+		struct sli4_fcp_treceive64_wqe *trecv = buf;
+
+		trecv->command = SLI4_WQE_FCP_CONT_TRECEIVE64;
+		trecv->dword5.sec_xri_tag = cpu_to_le16(sec_xri);
+	}
+	return rc;
+}
+
+/* Write an FCP_TRSP64_WQE work queue entry */
+int
+sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		   u32 rsp_len, u16 xri, u16 tag, u16 cq_id, u32 rpi,
+		   u32 rnode_fcid, u8 port_owned,
+		   struct sli_fcp_tgt_params *params)
+{
+	struct sli4_fcp_trsp64_wqe *trsp = buf;
+	struct sli4_fcp_128byte_wqe *trsp_128 = buf;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE) {
+		trsp->class_ag_byte |= SLI4_TRSP_WQE_AG;
+	} else {
+		struct sli4_sge	*sge = sgl->virt;
+
+		if (sli4->sgl_pre_registered || port_owned)
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_DBDE;
+		else
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_XBL;
+		bptr = &trsp->bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trsp->fcp_response_length = cpu_to_le32(rsp_len);
+	}
+
+	if (params->flags & SLI4_IO_CONTINUATION)
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_XC;
+
+	trsp->xri_tag = cpu_to_le16(xri);
+	trsp->rpi = cpu_to_le16(rpi);
+
+	trsp->command = SLI4_WQE_FCP_TRSP64;
+	trsp->class_ag_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	trsp->remote_xid = cpu_to_le16(params->ox_id);
+	trsp->request_tag = cpu_to_le16(tag);
+	if (params->flags & SLI4_IO_DNRX)
+		trsp->ct_dnrx_byte |= SLI4_TRSP_WQE_DNRX;
+	else
+		trsp->ct_dnrx_byte &= ~SLI4_TRSP_WQE_DNRX;
+
+	trsp->lloc1_appid |= 0x1;
+	trsp->cq_id = cpu_to_le16(cq_id);
+	trsp->cmd_type_byte = SLI4_CMD_FCP_TRSP64_WQE;
+
+	/* The upper 7 bits of csctl are the priority */
+	if (params->cs_ctl & SLI4_MASK_CCP) {
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_CCPE;
+		trsp->ccp = (params->cs_ctl & SLI4_MASK_CCP);
+	}
+
+	if (params->app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trsp->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trsp->lloc1_appid |= SLI4_TRSP_WQE_APPID;
+		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_WQES;
+		trsp_128->dw[31] = params->app_id;
+	}
+	return EFC_SUCCESS;
+}
+
+/* Write an FCP_TSEND64_WQE work queue entry */
+int
+sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		    u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		    struct sli_fcp_tgt_params *params)
+{
+	struct sli4_fcp_tsend64_wqe *tsend = buf;
+	struct sli4_fcp_128byte_wqe *tsend_128 = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+
+	bptr = &tsend->bde;
+	if (sli4->sgl_pre_registered) {
+		tsend->ll_qd_xbl_hlm_iod_dbde &= ~SLI4_TSEND_WQE_XBL;
+
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				   (le32_to_cpu(sge[2].buffer_length) &
+				    SLI4_BDE_MASK_BUFFER_LEN));
+
+		/* TSEND64_WQE specifies first two SGE are skipped (3rd is
+		 * valid)
+		 */
+		bptr->u.data.low  = sge[2].buffer_address_low;
+		bptr->u.data.high = sge[2].buffer_address_high;
+	} else {
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
+			tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+			bptr->bde_type_buflen =
+			    cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+					(le32_to_cpu(sge[2].buffer_length) &
+					SLI4_BDE_MASK_BUFFER_LEN));
+			/*
+			 * TSEND64_WQE specifies first two SGE are skipped
+			 * (i.e. 3rd is valid)
+			 */
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+					    (sgl->size &
+					     SLI4_BDE_MASK_BUFFER_LEN));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	tsend->relative_offset = cpu_to_le32(params->offset);
+
+	if (params->flags & SLI4_IO_CONTINUATION)
+		tsend->dw10byte2 |= SLI4_TSEND_XC;
+
+	tsend->xri_tag = cpu_to_le16(xri);
+
+	tsend->rpi = cpu_to_le16(rpi);
+	/* WQE uses relative offset */
+	tsend->class_pu_ar_byte |= 1 << SLI4_TSEND_WQE_PU_SHFT;
+
+	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		tsend->class_pu_ar_byte |= SLI4_TSEND_WQE_AR;
+
+	tsend->command = SLI4_WQE_FCP_TSEND64;
+	tsend->class_pu_ar_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	tsend->ct_byte |= SLI4_GENERIC_CONTEXT_RPI << SLI4_TSEND_CT_SHFT;
+	tsend->ct_byte |= dif;
+	tsend->ct_byte |= bs << SLI4_TSEND_BS_SHFT;
+
+	tsend->remote_xid = cpu_to_le16(params->ox_id);
+
+	tsend->request_tag = cpu_to_le16(tag);
+
+	tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_LEN_LOC_BIT2;
+
+	tsend->cq_id = cpu_to_le16(cq_id);
+
+	tsend->cmd_type_byte |= SLI4_CMD_FCP_TSEND64_WQE;
+
+	tsend->fcp_data_transmit_length = cpu_to_le32(xfer_len);
+
+	if (sli4->perf_hint) {
+		bptr = &tsend->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_MASK_BUFFER_LEN));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (params->cs_ctl & SLI4_MASK_CCP) {
+		tsend->dw10byte2 |= SLI4_TSEND_CCPE;
+		tsend->ccp = (params->cs_ctl & SLI4_MASK_CCP);
+	}
+
+	if (params->app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(tsend->dw10byte2 & SLI4_TSEND_EAT)) {
+		tsend->dw10byte0 |= SLI4_TSEND_APPID_VALID;
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQES;
+		tsend_128->dw[31] = params->app_id;
+	}
+	return EFC_SUCCESS;
+}
+
+/* Write a GEN_REQUEST64 work queue entry */
+int
+sli_gen_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		      u32 req_len, u32 max_rsp_len, u16 xri, u16 tag,
+		      u16 cq_id, u32 rnode_fcid, u16 rnodeindicator,
+		      struct sli_ct_params *params)
+{
+	struct sli4_gen_request64_wqe	*gen = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &gen->bde;
+
+	if (sli4->sgl_pre_registered) {
+		gen->dw10flags1 &= ~SLI4_GEN_REQ64_WQE_XBL;
+
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
+				    ((2 * sizeof(struct sli4_sge)) &
+				     SLI4_BDE_MASK_BUFFER_LEN));
+
+		bptr->u.blp.low =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	gen->request_payload_length = cpu_to_le32(req_len);
+	gen->max_response_payload_length = cpu_to_le32(max_rsp_len);
+
+	gen->df_ctl = params->df_ctl;
+	gen->type = params->type;
+	gen->r_ctl = params->r_ctl;
+
+	gen->xri_tag = cpu_to_le16(xri);
+
+	gen->ct_byte = SLI4_GENERIC_CONTEXT_RPI << SLI4_GEN_REQ64_CT_SHFT;
+	gen->context_tag = cpu_to_le16(rnodeindicator);
+
+	gen->class_byte = SLI4_GENERIC_CLASS_CLASS_3;
+
+	gen->command = SLI4_WQE_GEN_REQUEST64;
+
+	gen->timer = params->timeout;
+
+	gen->request_tag = cpu_to_le16(tag);
+
+	gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_IOD;
+
+	gen->dw10flags0 |= SLI4_GEN_REQ64_WQE_QOSD;
+
+	gen->cmd_type_byte = SLI4_CMD_GEN_REQUEST64_WQE;
+
+	gen->cq_id = cpu_to_le16(cq_id);
+
+	return EFC_SUCCESS;
+}
+
+/* Write a SEND_FRAME work queue entry */
+int
+sli_send_frame_wqe(struct sli4 *sli4, void *buf, size_t size,
+		   u8 sof, u8 eof, u32 *hdr,
+			struct efc_dma *payload, u32 req_len,
+			u8 timeout, u16 xri, u16 req_tag)
+{
+	struct sli4_send_frame_wqe *sf = buf;
+
+	memset(buf, 0, size);
+
+	sf->dw10flags1 |= SLI4_SF_WQE_DBDE;
+	sf->bde.bde_type_buflen = cpu_to_le32(req_len &
+					      SLI4_BDE_MASK_BUFFER_LEN);
+	sf->bde.u.data.low =
+		cpu_to_le32(lower_32_bits(payload->phys));
+	sf->bde.u.data.high =
+		cpu_to_le32(upper_32_bits(payload->phys));
+
+	/* Copy FC header */
+	sf->fc_header_0_1[0] = cpu_to_le32(hdr[0]);
+	sf->fc_header_0_1[1] = cpu_to_le32(hdr[1]);
+	sf->fc_header_2_5[0] = cpu_to_le32(hdr[2]);
+	sf->fc_header_2_5[1] = cpu_to_le32(hdr[3]);
+	sf->fc_header_2_5[2] = cpu_to_le32(hdr[4]);
+	sf->fc_header_2_5[3] = cpu_to_le32(hdr[5]);
+
+	sf->frame_length = cpu_to_le32(req_len);
+
+	sf->xri_tag = cpu_to_le16(xri);
+	sf->dw7flags0 &= ~SLI4_SF_PU;
+	sf->context_tag = 0;
+
+	sf->ct_byte &= ~SLI4_SF_CT;
+	sf->command = SLI4_WQE_SEND_FRAME;
+	sf->dw7flags0 |= SLI4_GENERIC_CLASS_CLASS_3;
+	sf->timer = timeout;
+
+	sf->request_tag = cpu_to_le16(req_tag);
+	sf->eof = eof;
+	sf->sof = sof;
+
+	sf->dw10flags1 &= ~SLI4_SF_QOSD;
+	sf->dw10flags0 |= SLI4_SF_LEN_LOC_BIT1;
+	sf->dw10flags2 &= ~SLI4_SF_XC;
+
+	sf->dw10flags1 |= SLI4_SF_XBL;
+
+	sf->cmd_type_byte |= SLI4_CMD_SEND_FRAME_WQE;
+	sf->cq_id = cpu_to_le16(0xffff);
+
+	return EFC_SUCCESS;
+}
+
+/* Write an XMIT_BLS_RSP64_WQE work queue entry */
+int
+sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct sli_bls_payload *payload, u16 xri,
+		       u16 tag, u16 cq_id,
+		       bool rnodeattached, u16 rnodeindicator,
+		       u16 sportindicator, u32 rnode_fcid,
+		       u32 sport_fcid, u32 s_id)
+{
+	struct sli4_xmit_bls_rsp_wqe *bls = buf;
+	u32 dw_ridflags = 0;
+
+	/*
+	 * Callers can either specify RPI or S_ID, but not both
+	 */
+	if (rnodeattached && s_id != U32_MAX) {
+		efc_log_info(sli4, "S_ID specified for attached remote node %d\n",
+			rnodeindicator);
+		return EFC_FAIL;
+	}
+
+	memset(buf, 0, size);
+
+	if (payload->type == SLI4_SLI_BLS_ACC) {
+		bls->payload_word0 =
+			cpu_to_le32((payload->u.acc.seq_id_last << 16) |
+				    (payload->u.acc.seq_id_validity << 24));
+		bls->high_seq_cnt = payload->u.acc.high_seq_cnt;
+		bls->low_seq_cnt = payload->u.acc.low_seq_cnt;
+	} else if (payload->type == SLI4_SLI_BLS_RJT) {
+		bls->payload_word0 =
+				cpu_to_le32(*((u32 *)&payload->u.rjt));
+		dw_ridflags |= SLI4_BLS_RSP_WQE_AR;
+	} else {
+		efc_log_info(sli4, "bad BLS type %#x\n", payload->type);
+		return EFC_FAIL;
+	}
+
+	bls->ox_id = payload->ox_id;
+	bls->rx_id = payload->rx_id;
+
+	if (rnodeattached) {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(rnodeindicator);
+	} else {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_VPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(sportindicator);
+
+		if (s_id != U32_MAX)
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(s_id & 0x00ffffff);
+		else
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(sport_fcid & 0x00ffffff);
+
+		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
+			       (rnode_fcid & SLI4_BLS_RSP_RID);
+
+		bls->temporary_rpi = cpu_to_le16(rnodeindicator);
+	}
+
+	bls->xri_tag = cpu_to_le16(xri);
+
+	bls->dw8flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	bls->command = SLI4_WQE_XMIT_BLS_RSP;
+
+	bls->request_tag = cpu_to_le16(tag);
+
+	bls->dw11flags1 |= SLI4_BLS_RSP_WQE_QOSD;
+
+	bls->remote_id_dword = cpu_to_le32(dw_ridflags);
+	bls->cq_id = cpu_to_le16(cq_id);
+
+	bls->dw12flags0 |= SLI4_CMD_XMIT_BLS_RSP64_WQE;
+
+	return EFC_SUCCESS;
+}
+
+/* Write a XMIT_ELS_RSP64_WQE work queue entry */
+int
+sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		       struct efc_dma *rsp, u32 rsp_len,
+				u16 xri, u16 tag, u16 cq_id,
+				u16 ox_id, u16 rnodeindicator,
+				u16 sportindicator,
+				bool rnodeattached, u32 rnode_fcid,
+				u32 flags, u32 s_id)
+{
+	struct sli4_xmit_els_rsp64_wqe *els = buf;
+
+	memset(buf, 0, size);
+
+	if (sli4->sgl_pre_registered)
+		els->flags2 |= SLI4_ELS_DBDE;
+	else
+		els->flags2 |= SLI4_ELS_XBL;
+
+	els->els_response_payload.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (rsp_len & SLI4_BDE_MASK_BUFFER_LEN));
+	els->els_response_payload.u.data.low =
+		cpu_to_le32(lower_32_bits(rsp->phys));
+	els->els_response_payload.u.data.high =
+		cpu_to_le32(upper_32_bits(rsp->phys));
+
+	els->els_response_payload_length = cpu_to_le32(rsp_len);
+
+	els->xri_tag = cpu_to_le16(xri);
+
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_RSP64;
+
+	els->request_tag = cpu_to_le16(tag);
+
+	els->ox_id = cpu_to_le16(ox_id);
+
+	els->flags2 |= (SLI4_ELS_IOD & SLI4_ELS_REQUEST64_DIR_WRITE);
+
+	els->flags2 |= SLI4_ELS_QOSD;
+
+	if (flags & SLI4_IO_CONTINUATION)
+		els->flags3 |= SLI4_ELS_XC;
+
+	if (rnodeattached) {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(rnodeindicator);
+	} else {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(sportindicator);
+		els->rid_dw = cpu_to_le32(rnode_fcid & SLI4_ELS_RID);
+		els->temporary_rpi = cpu_to_le16(rnodeindicator);
+		if (s_id != U32_MAX) {
+			els->sid_dw |= cpu_to_le32(SLI4_ELS_SP |
+						   (s_id & SLI4_ELS_SID));
+		}
+	}
+
+	els->cmd_type_wqec = SLI4_ELS_REQUEST64_CMD_GEN;
+
+	els->cq_id = cpu_to_le16(cq_id);
+
+	return EFC_SUCCESS;
+}
+
+/* Write a XMIT_SEQUENCE64 work queue entry */
+int
+sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf,
+			struct efc_dma *payload, u32 payload_len,
+			u16 xri, u16 tag, u32 rnode_fcid,
+			u16 rnodeindicator, struct sli_ct_params *params)
+{
+	struct sli4_xmit_sequence64_wqe *xmit = buf;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (!payload || !payload->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       payload, payload ? payload->virt : NULL);
+		return EFC_FAIL;
+	}
+
+	if (sli4->sgl_pre_registered)
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_DBDE);
+	else
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_XBL);
+
+	xmit->bde.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			(payload_len & SLI4_BDE_MASK_BUFFER_LEN));
+	xmit->bde.u.data.low  =
+			cpu_to_le32(lower_32_bits(payload->phys));
+	xmit->bde.u.data.high =
+			cpu_to_le32(upper_32_bits(payload->phys));
+	xmit->sequence_payload_len = cpu_to_le32(payload_len);
+
+	xmit->remote_n_port_id_dword |= cpu_to_le32(rnode_fcid & 0x00ffffff);
+
+	xmit->relative_offset = 0;
+
+	/* sequence initiative - this matches what is seen from
+	 * FC switches in response to FCGS commands
+	 */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_SI);
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_FT);/* force transmit */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_XO);/* exchange responder */
+	xmit->dw5flags0 |= SLI4_SEQ_WQE_LS; /* last in sequence */
+	xmit->df_ctl = params->df_ctl;
+	xmit->type = params->type;
+	xmit->r_ctl = params->r_ctl;
+
+	xmit->xri_tag = cpu_to_le16(xri);
+	xmit->context_tag = cpu_to_le16(rnodeindicator);
+
+	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_DIF);
+	xmit->dw7flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_SEQ_WQE_CT_SHIFT;
+	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_BS);
+
+	xmit->command = SLI4_WQE_XMIT_SEQUENCE64;
+	xmit->dw7flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+	xmit->dw7flags1 &= (~SLI4_SEQ_WQE_PU);
+	xmit->timer = params->timeout;
+
+	xmit->abort_tag = 0;
+	xmit->request_tag = cpu_to_le16(tag);
+	xmit->remote_xid = cpu_to_le16(params->ox_id);
+
+	xmit->dw10w0 |=
+	cpu_to_le16(SLI4_ELS_REQUEST64_DIR_READ << SLI4_SEQ_WQE_IOD_SHIFT);
+
+	xmit->cmd_type_wqec_byte |= SLI4_CMD_XMIT_SEQUENCE64_WQE;
+
+	xmit->dw10w0 |= cpu_to_le16(2 << SLI4_SEQ_WQE_LEN_LOC_SHIFT);
+
+	xmit->cq_id = cpu_to_le16(0xFFFF);
+
+	return EFC_SUCCESS;
+}
+
+/* Write a REQUEUE_XRI_WQE work queue entry */
+int
+sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, size_t size,
+		    u16 xri, u16 tag, u16 cq_id)
+{
+	struct sli4_requeue_xri_wqe *requeue = buf;
+
+	memset(buf, 0, size);
+
+	requeue->command = SLI4_WQE_REQUEUE_XRI;
+	requeue->xri_tag = cpu_to_le16(xri);
+	requeue->request_tag = cpu_to_le16(tag);
+	requeue->flags2 |= cpu_to_le16(SLI4_REQU_XRI_WQE_XC);
+	requeue->flags1 |= cpu_to_le16(SLI4_REQU_XRI_WQE_QOSD);
+	requeue->cq_id = cpu_to_le16(cq_id);
+	requeue->cmd_type_wqec_byte = SLI4_CMD_REQUEUE_XRI_WQE;
+	return EFC_SUCCESS;
+}
+
+/* Process an asynchronous Link Attention event entry */
+int
+sli_fc_process_link_attention(struct sli4 *sli4, void *acqe)
+{
+	struct sli4_link_attention *link_attn = acqe;
+	struct sli4_link_event event = { 0 };
+
+	efc_log_info(sli4, "link=%d attn_type=%#x top=%#x speed=%#x pfault=%#x\n",
+		link_attn->link_number, link_attn->attn_type,
+		      link_attn->topology, link_attn->port_speed,
+		      link_attn->port_fault);
+	efc_log_info(sli4, "shared_lnk_status=%#x logl_lnk_speed=%#x evttag=%#x\n",
+		link_attn->shared_link_status,
+		      le16_to_cpu(link_attn->logical_link_speed),
+		      le32_to_cpu(link_attn->event_tag));
+
+	if (!sli4->link)
+		return EFC_FAIL;
+
+	event.medium   = SLI_LINK_MEDIUM_FC;
+
+	switch (link_attn->attn_type) {
+	case LINK_ATTN_TYPE_LINK_UP:
+		event.status = SLI_LINK_STATUS_UP;
+		break;
+	case LINK_ATTN_TYPE_LINK_DOWN:
+		event.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case LINK_ATTN_TYPE_NO_HARD_ALPA:
+		efc_log_info(sli4, "attn_type: no hard alpa\n");
+		event.status = SLI_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		efc_log_info(sli4, "attn_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->event_type) {
+	case FC_EVENT_LINK_ATTENTION:
+		break;
+	case FC_EVENT_SHARED_LINK_ATTENTION:
+		efc_log_info(sli4, "event_type: FC shared link event\n");
+		break;
+	default:
+		efc_log_info(sli4, "event_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->topology) {
+	case LINK_ATTN_P2P:
+		event.topology = SLI_LINK_TOPO_NPORT;
+		break;
+	case LINK_ATTN_FC_AL:
+		event.topology = SLI_LINK_TOPO_LOOP;
+		break;
+	case LINK_ATTN_INTERNAL_LOOPBACK:
+		efc_log_info(sli4, "topology: internal loopback\n");
+		event.topology = SLI_LINK_TOPO_LOOPBACK_INTERNAL;
+		break;
+	case LINK_ATTN_SERDES_LOOPBACK:
+		efc_log_info(sli4, "topology: serdes loopback\n");
+		event.topology = SLI_LINK_TOPO_LOOPBACK_EXTERNAL;
+		break;
+	default:
+		efc_log_info(sli4, "topology: unknown\n");
+		break;
+	}
+
+	event.speed = link_attn->port_speed * 1000;
+
+	sli4->link(sli4->link_arg, (void *)&event);
+
+	return EFC_SUCCESS;
+}
+
+/* Parse an FC work queue CQ entry */
+int
+sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
+		 u8 *cqe, enum sli4_qentry *etype, u16 *r_id)
+{
+	u8 code = cqe[SLI4_CQE_CODE_OFFSET];
+	int rc;
+
+	switch (code) {
+	case SLI4_CQE_CODE_WORK_REQUEST_COMPLETION:
+	{
+		struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_WQ;
+		*r_id = le16_to_cpu(wcqe->request_tag);
+		rc = wcqe->status;
+
+		/* Flag errors except for FCP_RSP_FAILURE */
+		if (rc && rc != SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE) {
+			efc_log_info(sli4, "WCQE: status=%#x hw_status=%#x tag=%#x\n",
+				wcqe->status, wcqe->hw_status,
+				le16_to_cpu(wcqe->request_tag));
+			efc_log_info(sli4, "w1=%#x w2=%#x xb=%d\n",
+				le32_to_cpu(wcqe->wqe_specific_1),
+				     le32_to_cpu(wcqe->wqe_specific_2),
+				     (wcqe->flags & SLI4_WCQE_XB));
+			efc_log_info(sli4, "      %08X %08X %08X %08X\n",
+				((u32 *)cqe)[0],
+				     ((u32 *)cqe)[1],
+				     ((u32 *)cqe)[2],
+				     ((u32 *)cqe)[3]);
+		}
+
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC:
+	{
+		struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1:
+	{
+		struct sli4_fc_async_rcqe_v1 *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD:
+	{
+		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_OPT_WRITE_CMD;
+		*r_id = le16_to_cpu(optcqe->rq_id);
+		rc = optcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA:
+	{
+		struct sli4_fc_optimized_write_data_cqe *dcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_OPT_WRITE_DATA;
+		*r_id = le16_to_cpu(dcqe->xri);
+		rc = dcqe->status;
+
+		/* Flag errors */
+		if (rc != SLI4_FC_WCQE_STATUS_SUCCESS) {
+			efc_log_info(sli4, "Optimized DATA CQE: status=%#x\n",
+				dcqe->status);
+			efc_log_info(sli4, "hstat=%#x xri=%#x dpl=%#x w3=%#x xb=%d\n",
+				dcqe->hw_status, le16_to_cpu(dcqe->xri),
+				le32_to_cpu(dcqe->total_data_placed),
+				((u32 *)cqe)[3],
+				(dcqe->flags & SLI4_OCQE_XB));
+		}
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_COALESCING:
+	{
+		struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
+
+		*etype = SLI_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_XRI_ABORTED:
+	{
+		struct sli4_fc_xri_aborted_cqe *xa = (void *)cqe;
+
+		*etype = SLI_QENTRY_XABT;
+		*r_id = le16_to_cpu(xa->xri);
+		rc = EFC_SUCCESS;
+		break;
+	}
+	case SLI4_CQE_CODE_RELEASE_WQE: {
+		struct sli4_fc_wqec *wqec = (void *)cqe;
+
+		*etype = SLI_QENTRY_WQ_RELEASE;
+		*r_id = le16_to_cpu(wqec->wq_id);
+		rc = EFC_SUCCESS;
+		break;
+	}
+	default:
+		efc_log_info(sli4, "CQE completion code %d not handled\n",
+			code);
+		*etype = SLI_QENTRY_MAX;
+		*r_id = U16_MAX;
+		rc = -EINVAL;
+	}
+
+	return rc;
+}
+
+u32
+sli_fc_response_length(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+u32
+sli_fc_io_length(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+int
+sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	*d_id = 0;
+
+	if (wcqe->status)
+		return EFC_FAIL;
+	*d_id = le32_to_cpu(wcqe->wqe_specific_2) & 0x00ffffff;
+	return EFC_SUCCESS;
+}
+
+u32
+sli_fc_ext_status(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+	u32	mask;
+
+	switch (wcqe->status) {
+	case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+	case SLI4_FC_WCQE_STATUS_CMD_REJECT:
+		mask = 0xff;
+		break;
+	case SLI4_FC_WCQE_STATUS_NPORT_RJT:
+	case SLI4_FC_WCQE_STATUS_FABRIC_RJT:
+	case SLI4_FC_WCQE_STATUS_NPORT_BSY:
+	case SLI4_FC_WCQE_STATUS_FABRIC_BSY:
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_DI_ERROR:
+		mask = U32_MAX;
+		break;
+	default:
+		mask = 0;
+	}
+
+	return le32_to_cpu(wcqe->wqe_specific_2) & mask;
+}
+
+/* Retrieve the RQ index from the completion */
+int
+sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
+			  u16 *rq_id, u32 *index)
+{
+	struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+	struct sli4_fc_async_rcqe_v1 *rcqe_v1 = (void *)cqe;
+	int rc = EFC_FAIL;
+	u8 code = 0;
+	u16 rq_element_index;
+
+	*rq_id = 0;
+	*index = U32_MAX;
+
+	code = cqe[SLI4_CQE_CODE_OFFSET];
+
+	if (code == SLI4_CQE_CODE_RQ_ASYNC) {
+		*rq_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rq_element_index =
+		le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX;
+		*index = rq_element_index;
+		if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = EFC_SUCCESS;
+		} else {
+			rc = rcqe->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->fcfi_rq_id_word) &
+				SLI4_RACQE_RQ_ID);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe->data_placement_length),
+				rcqe->sof_byte, rcqe->eof_byte,
+				rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_ASYNC_V1) {
+		*rq_id = le16_to_cpu(rcqe_v1->rq_id);
+		rq_element_index =
+			(le16_to_cpu(rcqe_v1->rq_elmt_indx_word) &
+			 SLI4_RACQE_RQ_EL_INDX);
+		*index = rq_element_index;
+		if (rcqe_v1->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = EFC_SUCCESS;
+		} else {
+			rc = rcqe_v1->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d, index=%x\n",
+				rcqe_v1->status,
+				sli_fc_get_status_string(rcqe_v1->status),
+				le16_to_cpu(rcqe_v1->rq_id), rq_element_index);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe_v1->data_placement_length),
+			rcqe_v1->sof_byte, rcqe_v1->eof_byte,
+			rcqe_v1->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD) {
+		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
+
+		*rq_id = le16_to_cpu(optcqe->rq_id);
+		*index = le16_to_cpu(optcqe->w1) & SLI4_OCQE_RQ_EL_INDX;
+		if (optcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = EFC_SUCCESS;
+		} else {
+			rc = optcqe->status;
+			efc_log_info(sli4, "stat=%02x (%s) rqid=%d, idx=%x pdpl=%x\n",
+				optcqe->status,
+				sli_fc_get_status_string(optcqe->status),
+				le16_to_cpu(optcqe->rq_id), *index,
+				le16_to_cpu(optcqe->data_placement_length));
+
+			efc_log_info(sli4, "hdpl=%x oox=%d agxr=%d xri=0x%x rpi=%x\n",
+				(optcqe->hdpl_vld & SLI4_OCQE_HDPL),
+				(optcqe->flags1 & SLI4_OCQE_OOX),
+				(optcqe->flags1 & SLI4_OCQE_AGXR), optcqe->xri,
+				le16_to_cpu(optcqe->rpi));
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_COALESCING) {
+		struct sli4_fc_coalescing_rcqe	*rcqe = (void *)cqe;
+		u16 rq_element_index =
+				(le16_to_cpu(rcqe->rq_elmt_indx_word) &
+				 SLI4_RCQE_RQ_EL_INDX);
+
+		*rq_id = le16_to_cpu(rcqe->rq_id);
+		if (rcqe->status == SLI4_FC_COALESCE_RQ_SUCCESS) {
+			*index = rq_element_index;
+			rc = EFC_SUCCESS;
+		} else {
+			*index = U32_MAX;
+			rc = rcqe->status;
+
+			efc_log_info(sli4, "stat=%02x (%s) rq_id=%d, idx=%x\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->rq_id), rq_element_index);
+			efc_log_info(sli4, "rq_id=%#x sdpl=%x\n",
+				le16_to_cpu(rcqe->rq_id),
+		    le16_to_cpu(rcqe->sequence_reporting_placement_length));
+		}
+	} else {
+		*index = U32_MAX;
+
+		rc = rcqe->status;
+
+		efc_log_info(sli4, "status=%02x rq_id=%d, index=%x pdpl=%x\n",
+			rcqe->status,
+		le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID,
+		(le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX),
+		le16_to_cpu(rcqe->data_placement_length));
+		efc_log_info(sli4, "sof=%02x eof=%02x hdpl=%x\n",
+			rcqe->sof_byte, rcqe->eof_byte,
+			rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+	}
+
+	return rc;
+}
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (4 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 16:10   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
                   ` (24 subsequent siblings)
  30 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds routines to create mailbox commands used during
adapter initialization and adds APIs to issue mailbox commands to the
adapter through the bootstrap mailbox register.
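
For reviewers, here is a rough sketch of how these routines fit
together (illustrative only, not part of the patch; the wrapper
function below is made up, while the helpers it calls are the ones
added here):

  /* hypothetical caller: format a READ_NVPARMS mailbox command into
   * the bootstrap mailbox buffer and submit it synchronously
   */
  static int example_read_nvparms(struct sli4 *sli4)
  {
  	int rc;

  	rc = sli_cmd_read_nvparms(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE);
  	if (rc)
  		return rc;

  	/* writes bmbx.phys to the BMBX register, polls the ready bit,
  	 * and returns the MCQE completion status
  	 */
  	return sli_bmbx_command(sli4);
  }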

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Return defined return values EFC_SUCCESS/FAIL
  Added 64G link speed support.
  Defined return values for sli_cqe_mq.
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1216 ++++++++++++++++++++++++++++++++++++
 1 file changed, 1216 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 0365d7943468..6ecb0f1ad19b 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -3103,3 +3103,1219 @@ sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
 
 	return rc;
 }
+
+/* Wait for the bootstrap mailbox to report "ready" */
+static int
+sli_bmbx_wait(struct sli4 *sli4, u32 msec)
+{
+	u32 val = 0;
+
+	do {
+		mdelay(1);	/* 1 ms */
+		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+		msec--;
+	} while (msec && !(val & SLI4_BMBX_RDY));
+
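+	/* a non-zero return means the ready bit never asserted in time */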
+	return !(val & SLI4_BMBX_RDY);
+}
+
+/* Write bootstrap mailbox */
+static int
+sli_bmbx_write(struct sli4 *sli4)
+{
+	u32 val = 0;
+
+	/* write buffer location to bootstrap mailbox register */
+	val = SLI4_BMBX_WRITE_HI(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
+		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
+		return EFC_FAIL;
+	}
+	val = SLI4_BMBX_WRITE_LO(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	/* wait for SLI Port to set ready bit */
+	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
+}
+
+/* Submit a command to the bootstrap mailbox and check the status */
+int
+sli_bmbx_command(struct sli4 *sli4)
+{
+	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
+
+	if (sli_fw_error_status(sli4) > 0) {
+		efc_log_crit(sli4, "Chip is in an error state - Mailbox command rejected\n");
+		efc_log_crit(sli4, " status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(sli4),
+			sli_reg_read_err1(sli4),
+			sli_reg_read_err2(sli4));
+		return EFC_FAIL;
+	}
+
+	if (sli_bmbx_write(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail phys=%p reg=%#x\n",
+			(void *)sli4->bmbx.phys,
+			readl(sli4->reg[0] + SLI4_BMBX_REG));
+		return EFC_FAIL;
+	}
+
+	/* check completion queue entry status */
+	if (le32_to_cpu(((struct sli4_mcqe *)cqe)->dw3_flags) &
+	    SLI4_MCQE_VALID) {
+		return sli_cqe_mq(sli4, cqe);
+	}
+	efc_log_crit(sli4, "invalid or wrong type\n");
+	return EFC_FAIL;
+}
+
+/* Write a CONFIG_LINK command to the provided buffer */
+int
+sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_config_link *config_link = buf;
+
+	memset(buf, 0, size);
+
+	config_link->hdr.command = MBX_CMD_CONFIG_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+/* Write a DOWN_LINK command to the provided buffer */
+int
+sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_mbox_command_header *hdr = buf;
+
+	memset(buf, 0, size);
+
+	hdr->command = MBX_CMD_DOWN_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+/* Write a DUMP Type 4 command to the provided buffer */
+int
+sli_cmd_dump_type4(struct sli4 *sli4, void *buf,
+		   size_t size, u16 wki)
+{
+	struct sli4_cmd_dump4 *cmd = buf;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_DUMP;
+	cmd->type_dword = cpu_to_le32(0x4);
+	cmd->wki_selection = cpu_to_le16(wki);
+	return EFC_SUCCESS;
+}
+
+/* Write a COMMON_READ_TRANSCEIVER_DATA command */
+int
+sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf,
+				     size_t size, u32 page_num,
+				     struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_read_transceiver_data *req = NULL;
+	u32 psize;
+
+	if (!dma)
+		psize = SLI_CONFIG_PYLD_LENGTH(cmn_read_transceiver_data);
+	else
+		psize = dma->size;
+
+	req = sli_config_cmd_init(sli4, buf, size, psize, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, CMN_READ_TRANS_DATA, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_read_transceiver_data));
+
+	req->page_number = cpu_to_le32(page_num);
+	req->port = cpu_to_le32(sli4->port_number);
+
+	return EFC_SUCCESS;
+}
+
+/* Write a READ_LINK_STAT command to the provided buffer */
+int
+sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size,
+			u8 req_ext_counters,
+			u8 clear_overflow_flags,
+			u8 clear_all_counters)
+{
+	struct sli4_cmd_read_link_stats *cmd = buf;
+	u32 flags;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_READ_LNK_STAT;
+
+	flags = 0;
+	if (req_ext_counters)
+		flags |= SLI4_READ_LNKSTAT_REC;
+	if (clear_all_counters)
+		flags |= SLI4_READ_LNKSTAT_CLRC;
+	if (clear_overflow_flags)
+		flags |= SLI4_READ_LNKSTAT_CLOF;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+/* Write a READ_STATUS command to the provided buffer */
+int
+sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size,
+		    u8 clear_counters)
+{
+	struct sli4_cmd_read_status *cmd = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	cmd->hdr.command = MBX_CMD_READ_STATUS;
+	if (clear_counters)
+		flags |= SLI4_READSTATUS_CLEAR_COUNTERS;
+	else
+		flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+/* Write an INIT_LINK command to the provided buffer */
+int
+sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size,
+		  u32 speed, u8 reset_alpa)
+{
+	struct sli4_cmd_init_link *init_link = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	init_link->hdr.command = MBX_CMD_INIT_LINK;
+
+	init_link->sel_reset_al_pa_dword =
+				cpu_to_le32(reset_alpa);
+	flags &= ~SLI4_INIT_LINK_F_LOOPBACK;
+
+	init_link->link_speed_sel_code = cpu_to_le32(speed);
+	switch (speed) {
+	case FC_LINK_SPEED_1G:
+	case FC_LINK_SPEED_2G:
+	case FC_LINK_SPEED_4G:
+	case FC_LINK_SPEED_8G:
+	case FC_LINK_SPEED_16G:
+	case FC_LINK_SPEED_32G:
+	case FC_LINK_SPEED_64G:
+		flags |= SLI4_INIT_LINK_F_FIXED_SPEED;
+		break;
+	case FC_LINK_SPEED_10G:
+		efc_log_info(sli4, "unsupported FC speed %d\n", speed);
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+		/* Attempt P2P but failover to FC-AL */
+		flags |= SLI4_INIT_LINK_F_FAIL_OVER;
+		flags |= SLI4_INIT_LINK_F_P2P_FAIL_OVER;
+		break;
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		flags |= SLI4_INIT_LINK_F_FCAL_ONLY;
+		if (speed == FC_LINK_SPEED_16G || speed == FC_LINK_SPEED_32G) {
+			efc_log_info(sli4, "unsupported FC-AL speed %d\n",
+				     speed);
+			init_link->flags0 = cpu_to_le32(flags);
+			return EFC_FAIL;
+		}
+		break;
+	case SLI4_READ_CFG_TOPO_FC_DA:
+		flags |= SLI4_INIT_LINK_F_P2P_ONLY;
+		break;
+	default:
+
+		efc_log_info(sli4, "unsupported topology %#x\n",
+			sli4->topology);
+
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	flags &= (~SLI4_INIT_LINK_F_UNFAIR);
+	flags &= (~SLI4_INIT_LINK_F_NO_LIRP);
+	flags &= (~SLI4_INIT_LINK_F_LOOP_VALID_CHK);
+	flags &= (~SLI4_INIT_LINK_F_NO_LISA);
+	flags &= (~SLI4_INIT_LINK_F_PICK_HI_ALPA);
+	init_link->flags0 = cpu_to_le32(flags);
+
+	return EFC_SUCCESS;
+}
+
+/* Write an INIT_VFI command to the provided buffer */
+int
+sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 vfi, u16 fcfi, u16 vpi)
+{
+	struct sli4_cmd_init_vfi *init_vfi = buf;
+	u16 flags = 0;
+
+	memset(buf, 0, size);
+
+	init_vfi->hdr.command = MBX_CMD_INIT_VFI;
+
+	init_vfi->vfi = cpu_to_le16(vfi);
+	init_vfi->fcfi = cpu_to_le16(fcfi);
+
+	/*
+	 * If the VPI is valid, initialize it at the same time as
+	 * the VFI
+	 */
+	if (vpi != U16_MAX) {
+		flags |= SLI4_INIT_VFI_FLAG_VP;
+		init_vfi->flags0_word = cpu_to_le16(flags);
+		init_vfi->vpi = cpu_to_le16(vpi);
+	}
+
+	return EFC_SUCCESS;
+}
+
+/* Write an INIT_VPI command to the provided buffer */
+int
+sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 vpi, u16 vfi)
+{
+	struct sli4_cmd_init_vpi *init_vpi = buf;
+
+	memset(buf, 0, size);
+
+	init_vpi->hdr.command = MBX_CMD_INIT_VPI;
+	init_vpi->vpi = cpu_to_le16(vpi);
+	init_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size,
+		 u16 xri_base, u16 xri_count)
+{
+	struct sli4_cmd_post_xri *post_xri = buf;
+	u16 xri_count_flags = 0;
+
+	memset(buf, 0, size);
+
+	post_xri->hdr.command = MBX_CMD_POST_XRI;
+	post_xri->xri_base = cpu_to_le16(xri_base);
+	xri_count_flags = (xri_count & SLI4_POST_XRI_COUNT);
+	xri_count_flags |= SLI4_POST_XRI_FLAG_ENX;
+	xri_count_flags |= SLI4_POST_XRI_FLAG_VAL;
+	post_xri->xri_count_flags = cpu_to_le16(xri_count_flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size,
+		    u8 num_xri)
+{
+	struct sli4_cmd_release_xri *release_xri = buf;
+
+	memset(buf, 0, size);
+
+	release_xri->hdr.command = MBX_CMD_RELEASE_XRI;
+	release_xri->xri_count_word = cpu_to_le16(num_xri &
+					SLI4_RELEASE_XRI_COUNT);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_read_config(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_read_config *read_config = buf;
+
+	memset(buf, 0, size);
+
+	read_config->hdr.command = MBX_CMD_READ_CONFIG;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_cmd_read_nvparms *read_nvparms = buf;
+
+	memset(buf, 0, size);
+
+	read_nvparms->hdr.command = MBX_CMD_READ_NVPARMS;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size,
+		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		u32 preferred_d_id)
+{
+	struct sli4_cmd_write_nvparms *write_nvparms = buf;
+
+	memset(buf, 0, size);
+
+	write_nvparms->hdr.command = MBX_CMD_WRITE_NVPARMS;
+	memcpy(write_nvparms->wwpn, wwpn, 8);
+	memcpy(write_nvparms->wwnn, wwnn, 8);
+
+	write_nvparms->hard_alpa_d_id =
+			cpu_to_le32((preferred_d_id << 8) | hard_alpa);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_read_rev(struct sli4 *sli4, void *buf, size_t size,
+		 struct efc_dma *vpd)
+{
+	struct sli4_cmd_read_rev *read_rev = buf;
+
+	memset(buf, 0, size);
+
+	read_rev->hdr.command = MBX_CMD_READ_REV;
+
+	if (vpd && vpd->size) {
+		read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD);
+
+		read_rev->available_length_dword =
+			cpu_to_le32(vpd->size &
+				    SLI4_READ_REV_AVAILABLE_LENGTH);
+
+		read_rev->hostbuf.low =
+				cpu_to_le32(lower_32_bits(vpd->phys));
+		read_rev->hostbuf.high =
+				cpu_to_le32(upper_32_bits(vpd->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
+		     struct efc_dma *dma,
+		     u16 vpi)
+{
+	struct sli4_cmd_read_sparm64 *read_sparm64 = buf;
+
+	memset(buf, 0, size);
+
+	if (vpi == U16_MAX) {
+		efc_log_err(sli4, "special VPI not supported!!!\n");
+		return EFC_FAIL;
+	}
+
+	if (!dma || !dma->phys) {
+		efc_log_err(sli4, "bad DMA buffer\n");
+		return EFC_FAIL;
+	}
+
+	read_sparm64->hdr.command = MBX_CMD_READ_SPARM64;
+
+	read_sparm64->bde_64.bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
+	read_sparm64->bde_64.u.data.low =
+			cpu_to_le32(lower_32_bits(dma->phys));
+	read_sparm64->bde_64.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+
+	read_sparm64->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
+		      struct efc_dma *dma)
+{
+	struct sli4_cmd_read_topology *read_topo = buf;
+
+	memset(buf, 0, size);
+
+	read_topo->hdr.command = MBX_CMD_READ_TOPOLOGY;
+
+	if (dma && dma->size) {
+		if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) {
+			efc_log_err(sli4, "loop map buffer too small %zx\n",
+				dma->size);
+			return EFC_FAIL;
+		}
+
+		memset(dma->virt, 0, dma->size);
+
+		read_topo->bde_loop_map.bde_type_buflen =
+			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
+		read_topo->bde_loop_map.u.data.low  =
+			cpu_to_le32(lower_32_bits(dma->phys));
+		read_topo->bde_loop_map.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size,
+		 u16 index,
+		 struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
+{
+	struct sli4_cmd_reg_fcfi *reg_fcfi = buf;
+	u32 i;
+
+	memset(buf, 0, size);
+
+	reg_fcfi->hdr.command = MBX_CMD_REG_FCFI;
+
+	reg_fcfi->fcf_index = cpu_to_le16(index);
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		switch (i) {
+		case 0:
+			reg_fcfi->rqid0 = rq_cfg[0].rq_id;
+			break;
+		case 1:
+			reg_fcfi->rqid1 = rq_cfg[1].rq_id;
+			break;
+		case 2:
+			reg_fcfi->rqid2 = rq_cfg[2].rq_id;
+			break;
+		case 3:
+			reg_fcfi->rqid3 = rq_cfg[3].rq_id;
+			break;
+		}
+		reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size,
+		     u8 mode, u16 fcf_index,
+		     u8 rq_selection_policy, u8 mrq_bit_mask,
+		     u16 num_mrqs,
+		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
+{
+	struct sli4_cmd_reg_fcfi_mrq *reg_fcfi_mrq = buf;
+	u32 i;
+	u32 mrq_flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_fcfi_mrq->hdr.command = MBX_CMD_REG_FCFI_MRQ;
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) {
+		reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index);
+		goto done;
+	}
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match;
+
+		switch (i) {
+		case 3:
+			reg_fcfi_mrq->rqid3 = rq_cfg[i].rq_id;
+			break;
+		case 2:
+			reg_fcfi_mrq->rqid2 = rq_cfg[i].rq_id;
+			break;
+		case 1:
+			reg_fcfi_mrq->rqid1 = rq_cfg[i].rq_id;
+			break;
+		case 0:
+			reg_fcfi_mrq->rqid0 = rq_cfg[i].rq_id;
+			break;
+		}
+	}
+
+	mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS;
+	mrq_flags |= (mrq_bit_mask << 8);
+	mrq_flags |= (rq_selection_policy << 12);
+	reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags);
+done:
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size,
+		u32 nport_id, u16 rpi, u16 vpi,
+		struct efc_dma *dma, u8 update,
+		u8 enable_t10_pi)
+{
+	struct sli4_cmd_reg_rpi *reg_rpi = buf;
+	u32 rportid_flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_rpi->hdr.command = MBX_CMD_REG_RPI;
+
+	reg_rpi->rpi = cpu_to_le16(rpi);
+
+	rportid_flags = nport_id & SLI4_REGRPI_REMOTE_N_PORTID;
+
+	if (update)
+		rportid_flags |= SLI4_REGRPI_UPD;
+	else
+		rportid_flags &= ~SLI4_REGRPI_UPD;
+
+	if (enable_t10_pi)
+		rportid_flags |= SLI4_REGRPI_ETOW;
+	else
+		rportid_flags &= ~SLI4_REGRPI_ETOW;
+
+	reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags);
+
+	reg_rpi->bde_64.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
+	reg_rpi->bde_64.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma->phys));
+	reg_rpi->bde_64.u.data.high =
+		cpu_to_le32(upper_32_bits(dma->phys));
+
+	reg_rpi->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id)
+{
+	struct sli4_cmd_reg_vfi *reg_vfi = buf;
+
+	memset(buf, 0, size);
+
+	reg_vfi->hdr.command = MBX_CMD_REG_VFI;
+
+	reg_vfi->vfi = cpu_to_le16(vfi);
+
+	reg_vfi->fcfi = cpu_to_le16(fcfi);
+
+	reg_vfi->sparm.bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
+	reg_vfi->sparm.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma.phys));
+	reg_vfi->sparm.u.data.high =
+		cpu_to_le32(upper_32_bits(dma.phys));
+
+	reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov);
+	reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov);
+
+	reg_vfi->dw0w1_flags |= cpu_to_le16(SLI4_REGVFI_VP);
+	reg_vfi->vpi = cpu_to_le16(vpi);
+	memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn));
+	reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size,
+		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
+		bool update)
+{
+	struct sli4_cmd_reg_vpi *reg_vpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	reg_vpi->hdr.command = MBX_CMD_REG_VPI;
+
+	flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID);
+	if (update)
+		flags |= SLI4_REGVPI_UPD;
+	else
+		flags &= ~SLI4_REGVPI_UPD;
+
+	reg_vpi->dw2_lportid_flags = cpu_to_le32(flags);
+	memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn));
+	reg_vpi->vpi = cpu_to_le16(vpi);
+	reg_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_request_features(struct sli4 *sli4, void *buf, size_t size,
+			 u32 features_mask, bool query)
+{
+	struct sli4_cmd_request_features *req_features = buf;
+
+	memset(buf, 0, size);
+
+	req_features->hdr.command = MBX_CMD_RQST_FEATURES;
+
+	if (query)
+		req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY);
+
+	req_features->cmd = cpu_to_le32(features_mask);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size,
+		   u16 indicator)
+{
+	struct sli4_cmd_unreg_fcfi *unreg_fcfi = buf;
+
+	memset(buf, 0, size);
+
+	unreg_fcfi->hdr.command = MBX_CMD_UNREG_FCFI;
+
+	unreg_fcfi->fcfi = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 indicator,
+		  enum sli4_resource which, u32 fc_id)
+{
+	struct sli4_cmd_unreg_rpi *unreg_rpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	unreg_rpi->hdr.command = MBX_CMD_UNREG_RPI;
+
+	switch (which) {
+	case SLI_RSRC_RPI:
+		flags |= UNREG_RPI_II_RPI;
+		if (fc_id == U32_MAX)
+			break;
+
+		flags |= UNREG_RPI_DP;
+		unreg_rpi->dw2_dest_n_portid =
+			cpu_to_le32(fc_id & UNREG_RPI_DEST_N_PORTID_MASK);
+		break;
+	case SLI_RSRC_VPI:
+		flags |= UNREG_RPI_II_VPI;
+		break;
+	case SLI_RSRC_VFI:
+		flags |= UNREG_RPI_II_VFI;
+		break;
+	case SLI_RSRC_FCFI:
+		flags |= UNREG_RPI_II_FCFI;
+		break;
+	default:
+		efc_log_info(sli4, "unknown type %#x\n", which);
+		return EFC_FAIL;
+	}
+
+	unreg_rpi->dw1w1_flags = cpu_to_le16(flags);
+	unreg_rpi->index = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 index, u32 which)
+{
+	struct sli4_cmd_unreg_vfi *unreg_vfi = buf;
+
+	memset(buf, 0, size);
+
+	unreg_vfi->hdr.command = MBX_CMD_UNREG_VFI;
+	switch (which) {
+	case SLI4_UNREG_TYPE_DOMAIN:
+	case SLI4_UNREG_TYPE_FCF:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		unreg_vfi->index = cpu_to_le16(U16_MAX);
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	if (which != SLI4_UNREG_TYPE_DOMAIN)
+		unreg_vfi->dw2_flags =
+			cpu_to_le16(UNREG_VFI_II_FCFI);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, size_t size,
+		  u16 indicator, u32 which)
+{
+	struct sli4_cmd_unreg_vpi *unreg_vpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, size);
+
+	unreg_vpi->hdr.command = MBX_CMD_UNREG_VPI;
+	unreg_vpi->index = cpu_to_le16(indicator);
+	switch (which) {
+	case SLI4_UNREG_TYPE_PORT:
+		flags |= UNREG_VPI_II_VPI;
+		break;
+	case SLI4_UNREG_TYPE_DOMAIN:
+		flags |= UNREG_VPI_II_VFI;
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		flags |= UNREG_VPI_II_FCFI;
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		/* override indicator */
+		unreg_vpi->index = cpu_to_le16(U16_MAX);
+		flags |= UNREG_VPI_II_FCFI;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	unreg_vpi->dw2w0_flags = cpu_to_le16(flags);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_modify_eq_delay(struct sli4 *sli4, void *buf, size_t size,
+			       struct sli4_queue *q, int num_q, u32 shift,
+			       u32 delay_mult)
+{
+	struct sli4_rqst_cmn_modify_eq_delay *req = NULL;
+	int i;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+				SLI_CONFIG_PYLD_LENGTH(cmn_modify_eq_delay),
+				NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, CMN_MODIFY_EQ_DELAY, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_modify_eq_delay));
+	req->num_eq = cpu_to_le32(num_q);
+
+	for (i = 0; i < num_q; i++) {
+		req->eq_delay_record[i].eq_id = cpu_to_le32(q[i].id);
+		req->eq_delay_record[i].phase = cpu_to_le32(shift);
+		req->eq_delay_record[i].delay_multiplier =
+			cpu_to_le32(delay_mult);
+	}
+
+	return EFC_SUCCESS;
+}
+
+void
+sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf,
+			       size_t size, u16 timeout)
+{
+	struct sli4_rqst_lowlevel_set_watchdog *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+			SLI_CONFIG_PYLD_LENGTH(lowlevel_set_watchdog),
+			NULL);
+	if (!req)
+		return;
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_LOWLEVEL_SET_WATCHDOG,
+			 SLI4_SUBSYSTEM_LOWLEVEL, CMD_V0,
+			 CFG_RQST_PYLD_LEN(lowlevel_set_watchdog));
+	req->watchdog_timeout = cpu_to_le16(timeout);
+}
+
+static int
+sli_cmd_common_get_cntl_attributes(struct sli4 *sli4, void *buf, size_t size,
+				   struct efc_dma *dma)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_CNTL_ATTRIBUTES;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_get_cntl_addl_attributes(struct sli4 *sli4, void *buf,
+					size_t size, struct efc_dma *dma)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_CNTL_ADDL_ATTRS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_nop(struct sli4 *sli4, void *buf,
+		   size_t size, uint64_t context)
+{
+	struct sli4_rqst_cmn_nop *nop = NULL;
+
+	nop = sli_config_cmd_init(sli4, buf, size,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_nop), NULL);
+	if (!nop)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&nop->hdr, CMN_NOP, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_nop));
+
+	memcpy(&nop->context, &context, sizeof(context));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
+					size_t size, u16 rtype)
+{
+	struct sli4_rqst_cmn_get_resource_extent_info *extent = NULL;
+
+	extent = sli_config_cmd_init(sli4, buf, size,
+			CFG_RQST_CMDSZ(cmn_get_resource_extent_info),
+				     NULL);
+	if (!extent)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&extent->hdr, CMN_GET_RSC_EXTENT_INFO,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 CFG_RQST_PYLD_LEN(cmn_get_resource_extent_info));
+
+	extent->resource_type = cpu_to_le16(rtype);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf,
+				   size_t size)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, size,
+				  SLI_CONFIG_PYLD_LENGTH(cmn_get_sli4_params),
+				  NULL);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = CMN_GET_SLI4_PARAMS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length =
+		cpu_to_le32(CFG_RQST_PYLD_LEN(cmn_get_sli4_params));
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_get_port_name(struct sli4 *sli4, void *buf, size_t size)
+{
+	struct sli4_rqst_cmn_get_port_name *pname;
+
+	pname = sli_config_cmd_init(sli4, buf, size,
+				    SLI_CONFIG_PYLD_LENGTH(cmn_get_port_name),
+				    NULL);
+	if (!pname)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&pname->hdr, CMN_GET_PORT_NAME, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V1, CFG_RQST_PYLD_LEN(cmn_get_port_name));
+
+	/* Set the port type value (ethernet=0, FC=1) for V1 commands */
+	pname->port_type = PORT_TYPE_FC;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size,
+			    u16 noc,
+			    u16 eof, u32 desired_write_length,
+			    u32 offset, char *object_name,
+			    struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_write_object *wr_obj = NULL;
+	struct sli4_bde *bde;
+	u32 dwflags = 0;
+
+	wr_obj = sli_config_cmd_init(sli4, buf, size,
+				     CFG_RQST_CMDSZ(cmn_write_object) +
+				     sizeof(*bde), NULL);
+	if (!wr_obj)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&wr_obj->hdr, CMN_WRITE_OBJECT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0,
+			 CFG_RQST_PYLD_LEN_VAR(cmn_write_object, sizeof(*bde)));
+
+	if (noc)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_NOC;
+	if (eof)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_EOF;
+	dwflags |= (desired_write_length & SLI4_RQ_DES_WRITE_LEN);
+
+	wr_obj->desired_write_len_dword = cpu_to_le32(dwflags);
+
+	wr_obj->write_offset = cpu_to_le32(offset);
+	strncpy(wr_obj->object_name, object_name,
+		sizeof(wr_obj->object_name));
+	wr_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	bde->bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (desired_write_length & SLI4_BDE_MASK_BUFFER_LEN));
+	bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
+	bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
+			     char *object_name)
+{
+	struct sli4_rqst_cmn_delete_object *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf, size,
+				  CFG_RQST_CMDSZ(cmn_delete_object), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, CMN_DELETE_OBJECT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_delete_object));
+
+	strncpy(req->object_name, object_name, sizeof(req->object_name));
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
+			   u32 desired_read_length, u32 offset,
+			   char *object_name, struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_read_object *rd_obj = NULL;
+	struct sli4_bde *bde;
+
+	rd_obj = sli_config_cmd_init(sli4, buf, size,
+				     CFG_RQST_CMDSZ(cmn_read_object) +
+				     sizeof(*bde), NULL);
+	if (!rd_obj)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rd_obj->hdr, CMN_READ_OBJECT, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0,
+			 CFG_RQST_PYLD_LEN_VAR(cmn_read_object, sizeof(*bde)));
+	rd_obj->desired_read_length_dword =
+		cpu_to_le32(desired_read_length & SLI4_REQ_DESIRE_READLEN);
+
+	rd_obj->read_offset = cpu_to_le32(offset);
+	strncpy(rd_obj->object_name, object_name,
+		sizeof(rd_obj->object_name));
+	rd_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	bde->bde_type_buflen =
+		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
+			    (desired_read_length & SLI4_BDE_MASK_BUFFER_LEN));
+	if (dma) {
+		bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
+		bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
+	} else {
+		bde->u.data.low = 0;
+		bde->u.data.high = 0;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
+			  struct efc_dma *cmd,
+			  struct efc_dma *resp)
+{
+	struct sli4_rqst_dmtf_exec_clp_cmd *clp_cmd = NULL;
+
+	clp_cmd = sli_config_cmd_init(sli4, buf, size,
+				      CFG_RQST_CMDSZ(dmtf_exec_clp_cmd), NULL);
+	if (!clp_cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&clp_cmd->hdr, DMTF_EXEC_CLP_CMD, SLI4_SUBSYSTEM_DMTF,
+			 CMD_V0, CFG_RQST_PYLD_LEN(dmtf_exec_clp_cmd));
+
+	clp_cmd->cmd_buf_length = cpu_to_le32(cmd->size);
+	clp_cmd->cmd_buf_addr_low =  cpu_to_le32(lower_32_bits(cmd->phys));
+	clp_cmd->cmd_buf_addr_high =  cpu_to_le32(upper_32_bits(cmd->phys));
+	clp_cmd->resp_buf_length = cpu_to_le32(resp->size);
+	clp_cmd->resp_buf_addr_low =  cpu_to_le32(lower_32_bits(resp->phys));
+	clp_cmd->resp_buf_addr_high =  cpu_to_le32(upper_32_bits(resp->phys));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf,
+				 size_t size, bool query,
+				 bool is_buffer_list,
+				 struct efc_dma *buffer, u8 fdb)
+{
+	struct sli4_rqst_cmn_set_dump_location *set_dump_loc = NULL;
+	u32 buffer_length_flag = 0;
+
+	set_dump_loc = sli_config_cmd_init(sli4, buf, size,
+					CFG_RQST_CMDSZ(cmn_set_dump_location),
+					NULL);
+	if (!set_dump_loc)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&set_dump_loc->hdr, CMN_SET_DUMP_LOCATION,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 CFG_RQST_PYLD_LEN(cmn_set_dump_location));
+
+	if (is_buffer_list)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_BLP;
+
+	if (query)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_QRY;
+
+	if (fdb)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_FDB;
+
+	if (buffer) {
+		set_dump_loc->buf_addr_low =
+			cpu_to_le32(lower_32_bits(buffer->phys));
+		set_dump_loc->buf_addr_high =
+			cpu_to_le32(upper_32_bits(buffer->phys));
+
+		buffer_length_flag |= (buffer->len &
+				       SLI4_CMN_SET_DUMP_BUFFER_LEN);
+	} else {
+		set_dump_loc->buf_addr_low = 0;
+		set_dump_loc->buf_addr_high = 0;
+		set_dump_loc->buffer_length_dword = 0;
+	}
+	set_dump_loc->buffer_length_dword = cpu_to_le32(buffer_length_flag);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
+			    u32 feature,
+			    u32 param_len,
+			    void *parameter)
+{
+	struct sli4_rqst_cmn_set_features *cmd = NULL;
+
+	cmd = sli_config_cmd_init(sli4, buf, size,
+				  CFG_RQST_CMDSZ(cmn_set_features), NULL);
+	if (!cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&cmd->hdr, CMN_SET_FEATURES, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_set_features));
+
+	cmd->feature = cpu_to_le32(feature);
+	cmd->param_len = cpu_to_le32(param_len);
+	memcpy(cmd->params, parameter, param_len);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cqe_mq(struct sli4 *sli4, void *buf)
+{
+	struct sli4_mcqe *mcqe = buf;
+	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
+	/*
+	 * Firmware can split mbx completions into two MCQEs: first with only
+	 * the "consumed" bit set and a second with the "complete" bit set.
+	 * Thus, ignore MCQE unless "complete" is set.
+	 */
+	if (!(dwflags & SLI4_MCQE_COMPLETED))
+		return SLI4_MCQE_STATUS_NOT_COMPLETED;
+
+	if (le16_to_cpu(mcqe->completion_status)) {
+		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
+			     le16_to_cpu(mcqe->completion_status),
+			     le16_to_cpu(mcqe->extended_status),
+			     (dwflags & SLI4_MCQE_CONSUMED),
+			     (dwflags & SLI4_MCQE_COMPLETED),
+			     (dwflags & SLI4_MCQE_AE),
+			     (dwflags & SLI4_MCQE_VALID));
+	}
+
+	return le16_to_cpu(mcqe->completion_status);
+}
+
+int
+sli_cqe_async(struct sli4 *sli4, void *buf)
+{
+	struct sli4_acqe *acqe = buf;
+	int rc = EFC_FAIL;
+
+	if (!buf) {
+		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
+		return EFC_FAIL;
+	}
+
+	switch (acqe->event_code) {
+	case SLI4_ACQE_EVENT_CODE_LINK_STATE:
+		efc_log_info(sli4, "Unsupported by FC link, evt code:%#x\n",
+			     acqe->event_code);
+		break;
+	case SLI4_ACQE_EVENT_CODE_GRP_5:
+		efc_log_info(sli4, "ACQE GRP5\n");
+		break;
+	case SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT:
+		efc_log_info(sli4, "ACQE SLI Port, type=0x%x, data1,2=0x%08x,0x%08x\n",
+			acqe->event_type,
+			le32_to_cpu(acqe->event_data[0]),
+			le32_to_cpu(acqe->event_data[1]));
+		break;
+	case SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT:
+		rc = sli_fc_process_link_attention(sli4, buf);
+		break;
+	default:
+		efc_log_info(sli4, "ACQE unknown=%#x\n",
+			acqe->event_code);
+	}
+
+	return rc;
+}
-- 
2.16.4



* [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (5 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 12:49   ` Hannes Reinecke
  2020-04-15 17:06   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 08/31] elx: libefc: Generic state machine framework James Smart
                   ` (23 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc_sli SLI-4 library population.

This patch adds APIs to initialize the library, initialize
the SLI Port, reset the firmware, terminate the SLI Port, and
terminate the library.
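
As a rough usage sketch (not part of this patch): the wrapper function, its
parameters, and the error handling below are hypothetical; only the sli_*
calls and their signatures come from this series.

/*
 * Hypothetical caller: "hw" is the caller's per-adapter object and "regs"
 * holds the already-mapped BAR addresses.
 */
static int example_sli_bringup(void *hw, struct pci_dev *pdev,
			       void __iomem *regs[], struct sli4 *sli)
{
	/* Reads SLI_INTF, READ_CONFIG, queue limits, WWNs, ... */
	if (sli_setup(sli, hw, pdev, regs))
		return -EIO;

	/* Final REQUEST_FEATURES negotiation before queues are created */
	if (sli_init(sli)) {
		/* Resets the SLI port and frees the bmbx/VPD DMA buffers */
		sli_teardown(sli);
		return -EIO;
	}

	return 0;
}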

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Changed some function types to bool.
  Return the defined values EFC_SUCCESS/EFC_FAIL.
  Defined dump types SLI4_FUNC_DESC_DUMP, SLI4_CHIP_LEVEL_DUMP.
  Defined dump status return values for sli_dump_is_ready().
  Formatted function declarations to fit within 80-character lines.
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1202 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |  434 +++++++++++++
 2 files changed, 1636 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 6ecb0f1ad19b..c45a3ac8962d 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -4319,3 +4319,1205 @@ sli_cqe_async(struct sli4 *sli4, void *buf)
 
 	return rc;
 }
+
+/* Determine if the chip FW is in a ready state */
+bool
+sli_fw_ready(struct sli4 *sli4)
+{
+	u32 val;
+	/*
+	 * Is firmware ready for operation? Check needed depends on IF_TYPE
+	 */
+	val = sli_reg_read_status(sli4);
+	return (val & SLI4_PORT_STATUS_RDY) ? 1 : 0;
+}
+
+static bool
+sli_sliport_reset(struct sli4 *sli4)
+{
+	u32 iter, val;
+	bool rc = false;
+
+	val = SLI4_PORT_CTRL_IP;
+	/* Initialize port, endian */
+	writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+
+	for (iter = 0; iter < 3000; iter++) {
+		mdelay(10);	/* 10 ms */
+		if (sli_fw_ready(sli4)) {
+			rc = true;
+			break;
+		}
+	}
+
+	if (!rc)
+		efc_log_crit(sli4, "port failed to become ready after initialization\n");
+
+	return rc;
+}
+
+static bool
+sli_wait_for_fw_ready(struct sli4 *sli4, u32 timeout_ms)
+{
+	u32 iter = timeout_ms / (SLI4_INIT_PORT_DELAY_US / 1000);
+	bool ready = false;
+
+	do {
+		iter--;
+		mdelay(10);	/* 10 ms */
+		if (sli_fw_ready(sli4) == 1)
+			ready = true;
+	} while (!ready && (iter > 0));
+
+	return ready;
+}
+
+static bool
+sli_fw_init(struct sli4 *sli4)
+{
+	bool ready;
+
+	/*
+	 * Is firmware ready for operation?
+	 */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return false;
+	}
+
+	/*
+	 * Reset port to a known state
+	 */
+	if (!sli_sliport_reset(sli4))
+		return false;
+
+	return true;
+}
+
+static int
+sli_fw_term(struct sli4 *sli4)
+{
+	/* type 2 etc. use SLIPORT_CONTROL to initialize port */
+	sli_sliport_reset(sli4);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_request_features(struct sli4 *sli4, u32 *features, bool query)
+{
+	if (!sli_cmd_request_features(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				     *features, query)) {
+		struct sli4_cmd_request_features *req_features =
+							sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return EFC_FAIL;
+		}
+		if (le16_to_cpu(req_features->hdr.status)) {
+			efc_log_err(sli4, "REQUEST_FEATURES bad status %#x\n",
+			       le16_to_cpu(req_features->hdr.status));
+			return EFC_FAIL;
+		}
+		*features = le32_to_cpu(req_features->resp);
+	} else {
+		efc_log_err(sli4, "bad REQUEST_FEATURES write\n");
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+void
+sli_calc_max_qentries(struct sli4 *sli4)
+{
+	enum sli4_qtype q;
+	u32 qentries;
+
+	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
+		sli4->qinfo.max_qentries[q] =
+			sli_convert_mask_to_count(sli4->qinfo.count_method[q],
+						  sli4->qinfo.count_mask[q]);
+	}
+
+	/*
+	 * Single, contiguous DMA allocations will be made for each queue of
+	 * size (max_qentries * queue entry size); since these can be large,
+	 * check against the OS max DMA allocation size.
+	 */
+	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
+		qentries = sli4->qinfo.max_qentries[q];
+
+		efc_log_info(sli4, "[%s]: max_qentries from %d to %d\n",
+			     SLI_QNAME[q],
+			     sli4->qinfo.max_qentries[q], qentries);
+		sli4->qinfo.max_qentries[q] = qentries;
+	}
+}
+
+static int
+sli_get_config(struct sli4 *sli4)
+{
+	struct efc_dma data;
+	u32 psize;
+
+	/*
+	 * Read the device configuration
+	 */
+	if (!sli_cmd_read_config(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_read_config	*read_config = sli4->bmbx.virt;
+		u32 i;
+		u32 total, total_size;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox fail (READ_CONFIG)\n");
+			return EFC_FAIL;
+		}
+		if (le16_to_cpu(read_config->hdr.status)) {
+			efc_log_err(sli4, "READ_CONFIG bad status %#x\n",
+			       le16_to_cpu(read_config->hdr.status));
+			return EFC_FAIL;
+		}
+
+		sli4->has_extents =
+			le32_to_cpu(read_config->ext_dword) &
+				    SLI4_READ_CFG_RESP_RESOURCE_EXT;
+		if (!sli4->has_extents) {
+			u32	i = 0, size = 0;
+			u32	*base = sli4->extent[0].base;
+
+			if (!base) {
+				size = SLI_RSRC_MAX * sizeof(u32);
+				base = kzalloc(size, GFP_ATOMIC);
+				if (!base)
+					return EFC_FAIL;
+			}
+
+			for (i = 0; i < SLI_RSRC_MAX; i++) {
+				sli4->extent[i].number = 1;
+				sli4->extent[i].n_alloc = 0;
+				sli4->extent[i].base = &base[i];
+			}
+
+			sli4->extent[SLI_RSRC_VFI].base[0] =
+				le16_to_cpu(read_config->vfi_base);
+			sli4->extent[SLI_RSRC_VFI].size =
+				le16_to_cpu(read_config->vfi_count);
+
+			sli4->extent[SLI_RSRC_VPI].base[0] =
+				le16_to_cpu(read_config->vpi_base);
+			sli4->extent[SLI_RSRC_VPI].size =
+				le16_to_cpu(read_config->vpi_count);
+
+			sli4->extent[SLI_RSRC_RPI].base[0] =
+				le16_to_cpu(read_config->rpi_base);
+			sli4->extent[SLI_RSRC_RPI].size =
+				le16_to_cpu(read_config->rpi_count);
+
+			sli4->extent[SLI_RSRC_XRI].base[0] =
+				le16_to_cpu(read_config->xri_base);
+			sli4->extent[SLI_RSRC_XRI].size =
+				le16_to_cpu(read_config->xri_count);
+
+			sli4->extent[SLI_RSRC_FCFI].base[0] = 0;
+			sli4->extent[SLI_RSRC_FCFI].size =
+				le16_to_cpu(read_config->fcfi_count);
+		}
+
+		for (i = 0; i < SLI_RSRC_MAX; i++) {
+			total = sli4->extent[i].number *
+				sli4->extent[i].size;
+			total_size = BITS_TO_LONGS(total) * sizeof(long);
+			sli4->extent[i].use_map =
+				kzalloc(total_size, GFP_ATOMIC);
+			if (!sli4->extent[i].use_map) {
+				efc_log_err(sli4, "bitmap memory allocation failed %d\n",
+				       i);
+				return EFC_FAIL;
+			}
+			sli4->extent[i].map_size = total;
+		}
+
+		sli4->topology =
+				(le32_to_cpu(read_config->topology_dword) &
+				 SLI4_READ_CFG_RESP_TOPOLOGY) >> 24;
+		switch (sli4->topology) {
+		case SLI4_READ_CFG_TOPO_FC:
+			efc_log_info(sli4, "FC (unknown)\n");
+			break;
+		case SLI4_READ_CFG_TOPO_FC_DA:
+			efc_log_info(sli4, "FC (direct attach)\n");
+			break;
+		case SLI4_READ_CFG_TOPO_FC_AL:
+			efc_log_info(sli4, "FC (arbitrated loop)\n");
+			break;
+		default:
+			efc_log_info(sli4, "bad topology %#x\n",
+				sli4->topology);
+		}
+
+		sli4->e_d_tov = le16_to_cpu(read_config->e_d_tov);
+		sli4->r_a_tov = le16_to_cpu(read_config->r_a_tov);
+
+		sli4->link_module_type = le16_to_cpu(read_config->lmt);
+
+		sli4->qinfo.max_qcount[SLI_QTYPE_EQ] =
+				le16_to_cpu(read_config->eq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_CQ] =
+				le16_to_cpu(read_config->cq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_WQ] =
+				le16_to_cpu(read_config->wq_count);
+		sli4->qinfo.max_qcount[SLI_QTYPE_RQ] =
+				le16_to_cpu(read_config->rq_count);
+
+		/*
+		 * READ_CONFIG doesn't give the max number of MQ. Applications
+		 * will typically want 1, but we may need another at some future
+		 * date. Dummy up a "max" MQ count here.
+		 */
+		sli4->qinfo.max_qcount[SLI_QTYPE_MQ] = SLI_USER_MQ_COUNT;
+	} else {
+		efc_log_err(sli4, "bad READ_CONFIG write\n");
+		return EFC_FAIL;
+	}
+
+	if (!sli_cmd_common_get_sli4_parameters(sli4, sli4->bmbx.virt,
+					       SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_cmn_get_sli4_params	*parms =
+			(struct sli4_rsp_cmn_get_sli4_params *)
+			(((u8 *)sli4->bmbx.virt) +
+			offsetof(struct sli4_cmd_sli_config, payload.embed));
+		u32 dwflags_loopback;
+		u32 dwflags_eq_page_cnt;
+		u32 dwflags_cq_page_cnt;
+		u32 dwflags_mq_page_cnt;
+		u32 dwflags_wq_page_cnt;
+		u32 dwflags_rq_page_cnt;
+		u32 dwflags_sgl_page_cnt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return EFC_FAIL;
+		} else if (parms->hdr.status) {
+			efc_log_err(sli4, "COMMON_GET_SLI4_PARAMETERS bad status %#x",
+			       parms->hdr.status);
+			efc_log_err(sli4, "additional status %#x\n",
+			       parms->hdr.additional_status);
+			return EFC_FAIL;
+		}
+
+		dwflags_loopback = le32_to_cpu(parms->dw16_loopback_scope);
+		dwflags_eq_page_cnt = le32_to_cpu(parms->dw6_eq_page_cnt);
+		dwflags_cq_page_cnt = le32_to_cpu(parms->dw8_cq_page_cnt);
+		dwflags_mq_page_cnt = le32_to_cpu(parms->dw10_mq_page_cnt);
+		dwflags_wq_page_cnt = le32_to_cpu(parms->dw12_wq_page_cnt);
+		dwflags_rq_page_cnt = le32_to_cpu(parms->dw14_rq_page_cnt);
+
+		sli4->auto_reg =
+			(dwflags_loopback & RSP_GET_PARAM_AREG);
+		sli4->auto_xfer_rdy =
+			(dwflags_loopback & RSP_GET_PARAM_AGXF);
+		sli4->hdr_template_req =
+			(dwflags_loopback & RSP_GET_PARAM_HDRR);
+		sli4->t10_dif_inline_capable =
+			(dwflags_loopback & RSP_GET_PARAM_TIMM);
+		sli4->t10_dif_separate_capable =
+			(dwflags_loopback & RSP_GET_PARAM_TSMM);
+
+		sli4->mq_create_version =
+				GET_Q_CREATE_VERSION(dwflags_mq_page_cnt);
+		sli4->cq_create_version =
+				GET_Q_CREATE_VERSION(dwflags_cq_page_cnt);
+
+		sli4->rq_min_buf_size =
+			le16_to_cpu(parms->min_rq_buffer_size);
+		sli4->rq_max_buf_size =
+			le32_to_cpu(parms->max_rq_buffer_size);
+
+		sli4->qinfo.qpage_count[SLI_QTYPE_EQ] =
+			(dwflags_eq_page_cnt & RSP_GET_PARAM_EQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_CQ] =
+			(dwflags_cq_page_cnt & RSP_GET_PARAM_CQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_MQ] =
+			(dwflags_mq_page_cnt & RSP_GET_PARAM_MQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_WQ] =
+			(dwflags_wq_page_cnt & RSP_GET_PARAM_WQ_PAGE_CNT_MASK);
+		sli4->qinfo.qpage_count[SLI_QTYPE_RQ] =
+			(dwflags_rq_page_cnt & RSP_GET_PARAM_RQ_PAGE_CNT_MASK);
+
+		/* save count methods and masks for each queue type */
+
+		sli4->qinfo.count_mask[SLI_QTYPE_EQ] =
+				le16_to_cpu(parms->eqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_EQ] =
+				GET_Q_CNT_METHOD(dwflags_eq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_CQ] =
+				le16_to_cpu(parms->cqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_CQ] =
+				GET_Q_CNT_METHOD(dwflags_cq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_MQ] =
+				le16_to_cpu(parms->mqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_MQ] =
+				GET_Q_CNT_METHOD(dwflags_mq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_WQ] =
+				le16_to_cpu(parms->wqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_WQ] =
+				GET_Q_CNT_METHOD(dwflags_wq_page_cnt);
+
+		sli4->qinfo.count_mask[SLI_QTYPE_RQ] =
+				le16_to_cpu(parms->rqe_count_mask);
+		sli4->qinfo.count_method[SLI_QTYPE_RQ] =
+				GET_Q_CNT_METHOD(dwflags_rq_page_cnt);
+
+		/* now calculate max queue entries */
+		sli_calc_max_qentries(sli4);
+
+		dwflags_sgl_page_cnt = le32_to_cpu(parms->dw18_sgl_page_cnt);
+
+		/* max # of pages */
+		sli4->max_sgl_pages =
+				(dwflags_sgl_page_cnt &
+				 RSP_GET_PARAM_SGL_PAGE_CNT_MASK);
+
+		/* bit map of available sizes */
+		sli4->sgl_page_sizes =
+				(dwflags_sgl_page_cnt &
+				 RSP_GET_PARAM_SGL_PAGE_SZS_MASK) >> 8;
+		/* ignore HLM here. Use value from REQUEST_FEATURES */
+		sli4->sge_supported_length =
+				le32_to_cpu(parms->sge_supported_length);
+		sli4->sgl_pre_registration_required =
+			(dwflags_loopback & RSP_GET_PARAM_SGLR);
+		/* default to using pre-registered SGL's */
+		sli4->sgl_pre_registered = true;
+
+		sli4->perf_hint =
+			(dwflags_loopback & RSP_GET_PARAM_PHON);
+		sli4->perf_wq_id_association =
+			(dwflags_loopback & RSP_GET_PARAM_PHWQ);
+
+		sli4->rq_batch =
+			(le16_to_cpu(parms->dw15w1_rq_db_window) &
+			 RSP_GET_PARAM_RQ_DB_WINDOW_MASK) >> 12;
+
+		/* Use the highest available WQE size. */
+		if (((dwflags_wq_page_cnt &
+		    RSP_GET_PARAM_WQE_SZS_MASK) >> 8) &
+		    SLI4_128BYTE_WQE_SUPPORT)
+			sli4->wqe_size = SLI4_WQE_EXT_BYTES;
+		else
+			sli4->wqe_size = SLI4_WQE_BYTES;
+	}
+
+	sli4->port_number = 0;
+
+	/*
+	 * Issue COMMON_GET_CNTL_ATTRIBUTES to get port_number. Temporarily
+	 * uses VPD DMA buffer as the response won't fit in the embedded
+	 * buffer.
+	 */
+	memset(sli4->vpd_data.virt, 0, sli4->vpd_data.size);
+	if (!sli_cmd_common_get_cntl_attributes(sli4, sli4->bmbx.virt,
+					       SLI4_BMBX_SIZE,
+					       &sli4->vpd_data)) {
+		struct sli4_rsp_cmn_get_cntl_attributes *attr =
+			sli4->vpd_data.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return EFC_FAIL;
+		} else if (attr->hdr.status) {
+			efc_log_err(sli4, "COMMON_GET_CNTL_ATTRIBUTES bad status %#x",
+			       attr->hdr.status);
+			efc_log_err(sli4, "additional status %#x\n",
+			       attr->hdr.additional_status);
+			return EFC_FAIL;
+		}
+
+		sli4->port_number = (attr->port_num_type_flags &
+					    SLI4_CNTL_ATTR_PORTNUM);
+
+		memcpy(sli4->bios_version_string,
+		       attr->bios_version_str,
+		       sizeof(sli4->bios_version_string));
+	} else {
+		efc_log_err(sli4, "bad COMMON_GET_CNTL_ATTRIBUTES write\n");
+		return EFC_FAIL;
+	}
+
+	psize = sizeof(struct sli4_rsp_cmn_get_cntl_addl_attributes);
+	data.size = psize;
+	data.virt = dma_alloc_coherent(&sli4->pcidev->dev, data.size,
+				       &data.phys, GFP_DMA);
+	if (!data.virt) {
+		memset(&data, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "Failed to allocate memory for GET_CNTL_ADDL_ATTR\n");
+	} else {
+		if (!sli_cmd_common_get_cntl_addl_attributes(sli4,
+							    sli4->bmbx.virt,
+							    SLI4_BMBX_SIZE,
+							    &data)) {
+			struct sli4_rsp_cmn_get_cntl_addl_attributes *attr;
+
+			attr = data.virt;
+			if (sli_bmbx_command(sli4)) {
+				efc_log_crit(sli4, "mailbox fail (GET_CNTL_ADDL_ATTR)\n");
+				dma_free_coherent(&sli4->pcidev->dev, data.size,
+						  data.virt, data.phys);
+				return EFC_FAIL;
+			}
+			if (attr->hdr.status) {
+				efc_log_err(sli4, "GET_CNTL_ADDL_ATTR bad status %#x\n",
+				       attr->hdr.status);
+				dma_free_coherent(&sli4->pcidev->dev, data.size,
+						  data.virt, data.phys);
+				return EFC_FAIL;
+			}
+
+			memcpy(sli4->ipl_name, attr->ipl_file_name,
+			       sizeof(sli4->ipl_name));
+
+			efc_log_info(sli4, "IPL:%s\n",
+				(char *)sli4->ipl_name);
+		} else {
+			efc_log_err(sli4, "bad GET_CNTL_ADDL_ATTR write\n");
+			dma_free_coherent(&sli4->pcidev->dev, data.size,
+					  data.virt, data.phys);
+			return EFC_FAIL;
+		}
+
+		dma_free_coherent(&sli4->pcidev->dev, data.size, data.virt,
+				  data.phys);
+		memset(&data, 0, sizeof(struct efc_dma));
+	}
+
+	if (!sli_cmd_common_get_port_name(sli4, sli4->bmbx.virt,
+					 SLI4_BMBX_SIZE)) {
+		struct sli4_rsp_cmn_get_port_name	*port_name =
+			(struct sli4_rsp_cmn_get_port_name *)
+			(((u8 *)sli4->bmbx.virt) +
+			offsetof(struct sli4_cmd_sli_config, payload.embed));
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+			return EFC_FAIL;
+		}
+
+		sli4->port_name[0] =
+			port_name->port_name[sli4->port_number];
+	}
+	sli4->port_name[1] = '\0';
+
+	if (!sli_cmd_read_rev(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+			     &sli4->vpd_data)) {
+		struct sli4_cmd_read_rev	*read_rev = sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox write fail (READ_REV)\n");
+			return EFC_FAIL;
+		}
+		if (le16_to_cpu(read_rev->hdr.status)) {
+			efc_log_err(sli4, "READ_REV bad status %#x\n",
+			       le16_to_cpu(read_rev->hdr.status));
+			return EFC_FAIL;
+		}
+
+		sli4->fw_rev[0] =
+				le32_to_cpu(read_rev->first_fw_id);
+		memcpy(sli4->fw_name[0], read_rev->first_fw_name,
+		       sizeof(sli4->fw_name[0]));
+
+		sli4->fw_rev[1] =
+				le32_to_cpu(read_rev->second_fw_id);
+		memcpy(sli4->fw_name[1], read_rev->second_fw_name,
+		       sizeof(sli4->fw_name[1]));
+
+		sli4->hw_rev[0] = le32_to_cpu(read_rev->first_hw_rev);
+		sli4->hw_rev[1] = le32_to_cpu(read_rev->second_hw_rev);
+		sli4->hw_rev[2] = le32_to_cpu(read_rev->third_hw_rev);
+
+		efc_log_info(sli4, "FW1:%s (%08x) / FW2:%s (%08x)\n",
+			     read_rev->first_fw_name,
+			     le32_to_cpu(read_rev->first_fw_id),
+			     read_rev->second_fw_name,
+			     le32_to_cpu(read_rev->second_fw_id));
+
+		efc_log_info(sli4, "HW1: %08x / HW2: %08x\n",
+			     le32_to_cpu(read_rev->first_hw_rev),
+			     le32_to_cpu(read_rev->second_hw_rev));
+
+		/* Check that all VPD data was returned */
+		if (le32_to_cpu(read_rev->returned_vpd_length) !=
+		    le32_to_cpu(read_rev->actual_vpd_length)) {
+			efc_log_info(sli4, "VPD length: avail=%d returned=%d actual=%d\n",
+				le32_to_cpu(read_rev->available_length_dword) &
+					    SLI4_READ_REV_AVAILABLE_LENGTH,
+				le32_to_cpu(read_rev->returned_vpd_length),
+				le32_to_cpu(read_rev->actual_vpd_length));
+		}
+		sli4->vpd_length = le32_to_cpu(read_rev->returned_vpd_length);
+	} else {
+		efc_log_err(sli4, "bad READ_REV write\n");
+		return EFC_FAIL;
+	}
+
+	if (!sli_cmd_read_nvparms(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {
+		struct sli4_cmd_read_nvparms *read_nvparms = sli4->bmbx.virt;
+
+		if (sli_bmbx_command(sli4)) {
+			efc_log_crit(sli4, "bootstrap mailbox fail (READ_NVPARMS)\n");
+			return EFC_FAIL;
+		}
+		if (le16_to_cpu(read_nvparms->hdr.status)) {
+			efc_log_err(sli4, "READ_NVPARMS bad status %#x\n",
+			       le16_to_cpu(read_nvparms->hdr.status));
+			return EFC_FAIL;
+		}
+
+		memcpy(sli4->wwpn, read_nvparms->wwpn,
+		       sizeof(sli4->wwpn));
+		memcpy(sli4->wwnn, read_nvparms->wwnn,
+		       sizeof(sli4->wwnn));
+
+		efc_log_info(sli4, "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+			     sli4->wwpn[0], sli4->wwpn[1], sli4->wwpn[2],
+			     sli4->wwpn[3], sli4->wwpn[4], sli4->wwpn[5],
+			     sli4->wwpn[6], sli4->wwpn[7]);
+		efc_log_info(sli4, "WWNN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+			     sli4->wwnn[0], sli4->wwnn[1], sli4->wwnn[2],
+			     sli4->wwnn[3], sli4->wwnn[4], sli4->wwnn[5],
+			     sli4->wwnn[6], sli4->wwnn[7]);
+	} else {
+		efc_log_err(sli4, "bad READ_NVPARMS write\n");
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_setup(struct sli4 *sli4, void *os, struct pci_dev  *pdev,
+	  void __iomem *reg[])
+{
+	u32 intf = U32_MAX;
+	u32 pci_class_rev = 0;
+	u32 rev_id = 0;
+	u32 family = 0;
+	u32 asic_id = 0;
+	u32 i;
+	struct sli4_asic_entry_t *asic;
+
+	memset(sli4, 0, sizeof(struct sli4));
+
+	sli4->os = os;
+	sli4->pcidev = pdev;
+
+	for (i = 0; i < 6; i++)
+		sli4->reg[i] = reg[i];
+	/*
+	 * Read the SLI_INTF register to discover the register layout
+	 * and other capability information
+	 */
+	pci_read_config_dword(pdev, SLI4_INTF_REG, &intf);
+
+	if ((intf & SLI4_INTF_VALID_MASK) != (u32)SLI4_INTF_VALID_VALUE) {
+		efc_log_err(sli4, "SLI_INTF is not valid\n");
+		return EFC_FAIL;
+	}
+
+	/* driver only supports SLI-4 */
+	if ((intf & SLI4_INTF_REV_MASK) != SLI4_INTF_REV_S4) {
+		efc_log_err(sli4, "Unsupported SLI revision (intf=%#x)\n",
+		       intf);
+		return EFC_FAIL;
+	}
+
+	sli4->sli_family = intf & SLI4_INTF_FAMILY_MASK;
+
+	sli4->if_type = intf & SLI4_INTF_IF_TYPE_MASK;
+	efc_log_info(sli4, "status=%#x error1=%#x error2=%#x\n",
+		     sli_reg_read_status(sli4), sli_reg_read_err1(sli4),
+		     sli_reg_read_err2(sli4));
+
+	/*
+	 * set the ASIC type and revision
+	 */
+	pci_read_config_dword(pdev, PCI_CLASS_REVISION, &pci_class_rev);
+	rev_id = pci_class_rev & 0xff;
+	family = sli4->sli_family;
+	if (family == SLI4_FAMILY_CHECK_ASIC_TYPE) {
+		pci_read_config_dword(pdev, SLI4_ASIC_ID_REG, &asic_id);
+
+		family = asic_id & SLI4_ASIC_GEN_MASK;
+	}
+
+	for (i = 0, asic = sli4_asic_table; i < ARRAY_SIZE(sli4_asic_table);
+	     i++, asic++) {
+		if (rev_id == asic->rev_id && family == asic->family) {
+			sli4->asic_type = family;
+			sli4->asic_rev = rev_id;
+			break;
+		}
+	}
+	/* Fail if no matching asic type/rev was found */
+	if (!sli4->asic_type || !sli4->asic_rev) {
+		efc_log_err(sli4, "no matching asic family/rev found: %02x/%02x\n",
+		       family, rev_id);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * The bootstrap mailbox is equivalent to a MQ with a single 256 byte
+	 * entry, a CQ with a single 16 byte entry, and no event queue.
+	 * Alignment must be 16 bytes as the low order address bits in the
+	 * address register are also control / status.
+	 */
+	sli4->bmbx.size = SLI4_BMBX_SIZE + sizeof(struct sli4_mcqe);
+	sli4->bmbx.virt = dma_alloc_coherent(&pdev->dev, sli4->bmbx.size,
+					     &sli4->bmbx.phys, GFP_DMA);
+	if (!sli4->bmbx.virt) {
+		memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "bootstrap mailbox allocation failed\n");
+		return EFC_FAIL;
+	}
+
+	if (sli4->bmbx.phys & SLI4_BMBX_MASK_LO) {
+		efc_log_err(sli4, "bad alignment for bootstrap mailbox\n");
+		return EFC_FAIL;
+	}
+
+	efc_log_info(sli4, "bmbx v=%p p=0x%x %08x s=%zd\n", sli4->bmbx.virt,
+		     upper_32_bits(sli4->bmbx.phys),
+		     lower_32_bits(sli4->bmbx.phys), sli4->bmbx.size);
+
+	/* 4096 is arbitrary. What should this value actually be? */
+	sli4->vpd_data.size = 4096;
+	sli4->vpd_data.virt = dma_alloc_coherent(&pdev->dev,
+						 sli4->vpd_data.size,
+						 &sli4->vpd_data.phys,
+						 GFP_DMA);
+	if (!sli4->vpd_data.virt) {
+		memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
+		/* Note that failure isn't fatal in this specific case */
+		efc_log_info(sli4, "VPD buffer allocation failed\n");
+	}
+
+	if (!sli_fw_init(sli4)) {
+		efc_log_err(sli4, "FW initialization failed\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Set one of fcpi(initiator), fcpt(target), fcpc(combined) to true
+	 * in addition to any other desired features
+	 */
+	sli4->features = (SLI4_REQFEAT_IAAB | SLI4_REQFEAT_NPIV |
+				 SLI4_REQFEAT_DIF | SLI4_REQFEAT_VF |
+				 SLI4_REQFEAT_FCPC | SLI4_REQFEAT_IAAR |
+				 SLI4_REQFEAT_HLM | SLI4_REQFEAT_PERFH |
+				 SLI4_REQFEAT_RXSEQ | SLI4_REQFEAT_RXRI |
+				 SLI4_REQFEAT_MRQP);
+
+	/* use performance hints if available */
+	if (sli4->perf_hint)
+		sli4->features |= SLI4_REQFEAT_PERFH;
+
+	if (sli_request_features(sli4, &sli4->features, true))
+		return EFC_FAIL;
+
+	if (sli_get_config(sli4))
+		return EFC_FAIL;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_init(struct sli4 *sli4)
+{
+	if (sli4->has_extents) {
+		efc_log_info(sli4, "extent allocation not implemented\n");
+		return EFC_FAIL;
+	}
+
+	if (sli4->high_login_mode)
+		sli4->features |= SLI4_REQFEAT_HLM;
+	else
+		sli4->features &= (~SLI4_REQFEAT_HLM);
+	sli4->features &= (~SLI4_REQFEAT_RXSEQ);
+	sli4->features &= (~SLI4_REQFEAT_RXRI);
+
+	if (sli_request_features(sli4, &sli4->features, false))
+		return EFC_FAIL;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_reset(struct sli4 *sli4)
+{
+	u32	i;
+
+	if (!sli_fw_init(sli4)) {
+		efc_log_crit(sli4, "FW initialization failed\n");
+		return EFC_FAIL;
+	}
+
+	kfree(sli4->extent[0].base);
+	sli4->extent[0].base = NULL;
+
+	for (i = 0; i < SLI_RSRC_MAX; i++) {
+		kfree(sli4->extent[i].use_map);
+		sli4->extent[i].use_map = NULL;
+		sli4->extent[i].base = NULL;
+	}
+
+	if (sli_get_config(sli4))
+		return EFC_FAIL;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fw_reset(struct sli4 *sli4)
+{
+	u32 val;
+	bool ready;
+
+	/*
+	 * Firmware must be ready before issuing the reset.
+	 */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return EFC_FAIL;
+	}
+	/* Lancer uses PHYDEV_CONTROL */
+
+	val = SLI4_PHYDEV_CTRL_FRST;
+	writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+
+	/* wait for the FW to become ready after the reset */
+	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!ready) {
+		efc_log_crit(sli4, "Failed to become ready after firmware reset\n");
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+int
+sli_teardown(struct sli4 *sli4)
+{
+	u32 i;
+
+	kfree(sli4->extent[0].base);
+	sli4->extent[0].base = NULL;
+
+	for (i = 0; i < SLI_RSRC_MAX; i++) {
+		sli4->extent[i].base = NULL;
+
+		kfree(sli4->extent[i].use_map);
+		sli4->extent[i].use_map = NULL;
+	}
+
+	if (sli_fw_term(sli4))
+		efc_log_err(sli4, "FW deinitialization failed\n");
+
+	dma_free_coherent(&sli4->pcidev->dev, sli4->vpd_data.size,
+			  sli4->vpd_data.virt, sli4->vpd_data.phys);
+	memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
+
+	dma_free_coherent(&sli4->pcidev->dev, sli4->bmbx.size,
+			  sli4->bmbx.virt, sli4->bmbx.phys);
+	memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_callback(struct sli4 *sli4, enum sli4_callback which,
+	     void *func, void *arg)
+{
+	if (!func) {
+		efc_log_err(sli4, "bad parameter sli4=%p which=%#x func=%p\n",
+		       sli4, which, func);
+		return EFC_FAIL;
+	}
+
+	switch (which) {
+	case SLI4_CB_LINK:
+		sli4->link = func;
+		sli4->link_arg = arg;
+		break;
+	default:
+		efc_log_info(sli4, "unknown callback %#x\n", which);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq,
+		    u32 num_eq, u32 shift, u32 delay_mult)
+{
+	sli_cmd_common_modify_eq_delay(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
+				       eq, num_eq, shift, delay_mult);
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail (MODIFY EQ DELAY)\n");
+		return EFC_FAIL;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status MODIFY EQ DELAY\n");
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
+		   u32 *rid, u32 *index)
+{
+	int rc = EFC_SUCCESS;
+	u32 size;
+	u32 extent_idx;
+	u32 item_idx;
+	u32 position;
+
+	*rid = U32_MAX;
+	*index = U32_MAX;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		position = find_first_zero_bit(sli4->extent[rtype].use_map,
+					       sli4->extent[rtype].map_size);
+		if (position >= sli4->extent[rtype].map_size) {
+			efc_log_err(sli4, "out of resource %d (alloc=%d)\n",
+				    rtype, sli4->extent[rtype].n_alloc);
+			rc = EFC_FAIL;
+			break;
+		}
+		set_bit(position, sli4->extent[rtype].use_map);
+		*index = position;
+
+		size = sli4->extent[rtype].size;
+
+		extent_idx = *index / size;
+		item_idx   = *index % size;
+
+		*rid = sli4->extent[rtype].base[extent_idx] + item_idx;
+
+		sli4->extent[rtype].n_alloc++;
+		break;
+	default:
+		rc = EFC_FAIL;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_free(struct sli4 *sli4,
+		  enum sli4_resource rtype, u32 rid)
+{
+	int rc = EFC_FAIL;
+	u32 x;
+	u32 size, *base;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		/*
+		 * Figure out which extent contains the resource ID. I.e. find
+		 * the extent such that
+		 *   extent->base <= resource ID < extent->base + extent->size
+		 */
+		base = sli4->extent[rtype].base;
+		size = sli4->extent[rtype].size;
+
+		/*
+		 * In the case of FW reset, this may be cleared
+		 * but the force_free path will still attempt to
+		 * free the resource. Prevent a NULL pointer access.
+		 */
+		if (base) {
+			for (x = 0; x < sli4->extent[rtype].number;
+			     x++) {
+				if (rid >= base[x] &&
+				    (rid < (base[x] + size))) {
+					rid -= base[x];
+					clear_bit((x * size) + rid,
+						  sli4->extent[rtype].use_map);
+					rc = EFC_SUCCESS;
+					break;
+				}
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype)
+{
+	int rc = EFC_FAIL;
+	u32 i;
+
+	switch (rtype) {
+	case SLI_RSRC_VFI:
+	case SLI_RSRC_VPI:
+	case SLI_RSRC_RPI:
+	case SLI_RSRC_XRI:
+		for (i = 0; i < sli4->extent[rtype].map_size; i++)
+			clear_bit(i, sli4->extent[rtype].use_map);
+		rc = EFC_SUCCESS;
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int sli_raise_ue(struct sli4 *sli4, u8 dump)
+{
+	u32 val = 0;
+
+	if (dump == SLI4_FUNC_DESC_DUMP) {
+		val = SLI4_PORT_CTRL_FDD | SLI4_PORT_CTRL_IP;
+		writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+	} else {
+		val = SLI4_PHYDEV_CTRL_FRST;
+
+		if (dump == SLI4_CHIP_LEVEL_DUMP)
+			val |= SLI4_PHYDEV_CTRL_DD;
+		writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int sli_dump_is_ready(struct sli4 *sli4)
+{
+	int rc = SLI4_DUMP_READY_STATUS_NOT_READY;
+	u32 port_val;
+	u32 bmbx_val;
+
+	/*
+	 * Ensure that the port is ready AND the mailbox is
+	 * ready before signaling that the dump is ready to go.
+	 */
+	port_val = sli_reg_read_status(sli4);
+	bmbx_val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+
+	if ((bmbx_val & SLI4_BMBX_RDY) &&
+	    (port_val & SLI4_PORT_STATUS_RDY)) {
+		if (port_val & SLI4_PORT_STATUS_DIP)
+			rc = SLI4_DUMP_READY_STATUS_DD_PRESENT;
+		else if (port_val & SLI4_PORT_STATUS_FDP)
+			rc = SLI4_DUMP_READY_STATUS_FDB_PRESENT;
+	}
+
+	return rc;
+}
+
+bool sli_reset_required(struct sli4 *sli4)
+{
+	u32 val;
+
+	val = sli_reg_read_status(sli4);
+	return (val & SLI4_PORT_STATUS_RN);
+}
+
+int
+sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, size_t size,
+		       u16 xri,
+		       u32 xri_count, struct efc_dma *page0[],
+		       struct efc_dma *page1[], struct efc_dma *dma)
+{
+	struct sli4_rqst_post_sgl_pages *post = NULL;
+	u32 i;
+	__le32 req_len;
+
+	post = sli_config_cmd_init(sli4, buf, size,
+				   SLI_CONFIG_PYLD_LENGTH(post_sgl_pages),
+				   dma);
+	if (!post)
+		return EFC_FAIL;
+
+	/*
+	 * Payload size calculation:
+	 *   4 = xri_start + xri_count
+	 *   xri_count = # of XRIs registered
+	 *   sizeof(u64) = physical address size
+	 *   2 = # of physical addresses per page set
+	 */
+	req_len = cpu_to_le32(4 + (xri_count * (sizeof(u64) * 2)));
+	sli_cmd_fill_hdr(&post->hdr, SLI4_OPC_POST_SGL_PAGES, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, req_len);
+	post->xri_start = cpu_to_le16(xri);
+	post->xri_count = cpu_to_le16(xri_count);
+
+	for (i = 0; i < xri_count; i++) {
+		post->page_set[i].page0_low  =
+				cpu_to_le32(lower_32_bits(page0[i]->phys));
+		post->page_set[i].page0_high =
+				cpu_to_le32(upper_32_bits(page0[i]->phys));
+	}
+
+	if (page1) {
+		for (i = 0; i < xri_count; i++) {
+			post->page_set[i].page1_low =
+				cpu_to_le32(lower_32_bits(page1[i]->phys));
+			post->page_set[i].page1_high =
+				cpu_to_le32(upper_32_bits(page1[i]->phys));
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf,
+			   size_t size, struct efc_dma *dma,
+			   u16 rpi,
+			   struct efc_dma *payload_dma)
+{
+	struct sli4_rqst_post_hdr_templates *req = NULL;
+	uintptr_t phys = 0;
+	u32 i = 0;
+	u32 page_count, payload_size;
+
+	page_count = sli_page_count(dma->size, SLI_PAGE_SIZE);
+
+	payload_size = ((sizeof(struct sli4_rqst_post_hdr_templates) +
+		(page_count * SZ_DMAADDR)) - sizeof(struct sli4_rqst_hdr));
+
+	if (page_count > 16) {
+		/*
+		 * We can't fit more than 16 descriptors into an embedded mbox
+		 * command, it has to be non-embedded
+		 */
+		payload_dma->size = payload_size;
+		payload_dma->virt = dma_alloc_coherent(&sli4->pcidev->dev,
+						       payload_dma->size,
+					     &payload_dma->phys, GFP_DMA);
+		if (!payload_dma->virt) {
+			memset(payload_dma, 0, sizeof(struct efc_dma));
+			efc_log_err(sli4, "mbox payload memory allocation fail\n");
+			return EFC_FAIL;
+		}
+		req = sli_config_cmd_init(sli4, buf, size,
+					  payload_size, payload_dma);
+	} else {
+		req = sli_config_cmd_init(sli4, buf, size,
+					  payload_size, NULL);
+	}
+
+	if (!req)
+		return EFC_FAIL;
+
+	if (rpi == U16_MAX)
+		rpi = sli4->extent[SLI_RSRC_RPI].base[0];
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_POST_HDR_TEMPLATES,
+			 SLI4_SUBSYSTEM_FC, CMD_V0,
+			 CFG_RQST_PYLD_LEN(post_hdr_templates));
+
+	req->rpi_offset = cpu_to_le16(rpi);
+	req->page_count = cpu_to_le16(page_count);
+	phys = dma->phys;
+	for (i = 0; i < page_count; i++) {
+		req->page_descriptor[i].low  = cpu_to_le32(lower_32_bits(phys));
+		req->page_descriptor[i].high = cpu_to_le32(upper_32_bits(phys));
+
+		phys += SLI_PAGE_SIZE;
+	}
+
+	return EFC_SUCCESS;
+}
+
+u32
+sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi)
+{
+	u32 bytes = 0;
+
+	/* Check if header templates needed */
+	if (sli4->hdr_template_req)
+		/* round up to a page */
+		bytes = SLI_ROUND_PAGE(n_rpi * SLI4_HDR_TEMPLATE_SIZE);
+
+	return bytes;
+}
+
+const char *
+sli_fc_get_status_string(u32 status)
+{
+	static struct {
+		u32 code;
+		const char *label;
+	} lookup[] = {
+		{SLI4_FC_WCQE_STATUS_SUCCESS,		"SUCCESS"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,	"FCP_RSP_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_REMOTE_STOP,	"REMOTE_STOP"},
+		{SLI4_FC_WCQE_STATUS_LOCAL_REJECT,	"LOCAL_REJECT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_RJT,		"NPORT_RJT"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_RJT,	"FABRIC_RJT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_BSY,		"NPORT_BSY"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_BSY,	"FABRIC_BSY"},
+		{SLI4_FC_WCQE_STATUS_LS_RJT,		"LS_RJT"},
+		{SLI4_FC_WCQE_STATUS_CMD_REJECT,	"CMD_REJECT"},
+		{SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,	"FCP_TGT_LENCHECK"},
+		{SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED, "BUF_LEN_EXCEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
+				"RQ_INSUFF_BUF_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC, "RQ_INSUFF_FRM_DESC"},
+		{SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,	"RQ_DMA_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,	"FCP_RSP_TRUNCATE"},
+		{SLI4_FC_WCQE_STATUS_DI_ERROR,		"DI_ERROR"},
+		{SLI4_FC_WCQE_STATUS_BA_RJT,		"BA_RJT"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
+				"RQ_INSUFF_XRI_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC, "INSUFF_XRI_DISC"},
+		{SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,	"RX_ERROR_DETECT"},
+		{SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,	"RX_ABORT_REQUEST"},
+		};
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(lookup); i++) {
+		if (status == lookup[i].code)
+			return lookup[i].label;
+	}
+	return "unknown";
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 13f5a0d8d31c..30a951b9593d 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -3696,4 +3696,438 @@ sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
 	hdr->request_length = len;
 }
 
+/**
+ * Get / set parameter functions
+ */
+
+static inline int
+sli_set_sgl_preregister(struct sli4 *sli4, u32 value)
+{
+	if (value == 0 && sli4->sgl_pre_registration_required) {
+		efc_log_err(sli4, "SGL pre-registration required\n");
+		return -1;
+	}
+
+	sli4->sgl_pre_registered = value != 0;
+
+	return 0;
+}
+
+static inline u32
+sli_get_max_sge(struct sli4 *sli4)
+{
+	return sli4->sge_supported_length;
+}
+
+static inline u32
+sli_get_max_sgl(struct sli4 *sli4)
+{
+	if (sli4->sgl_page_sizes != 1) {
+		efc_log_err(sli4, "unsupported SGL page sizes %#x\n",
+			sli4->sgl_page_sizes);
+		return 0;
+	}
+
+	return ((sli4->max_sgl_pages * SLI_PAGE_SIZE)
+		/ sizeof(struct sli4_sge));
+}
+
+static inline enum sli4_link_medium
+sli_get_medium(struct sli4 *sli4)
+{
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_DA:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		return SLI_LINK_MEDIUM_FC;
+	default:
+		return SLI_LINK_MEDIUM_MAX;
+	}
+}
+
+static inline int
+sli_set_topology(struct sli4 *sli4, u32 value)
+{
+	int	rc = 0;
+
+	switch (value) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_DA:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		sli4->topology = value;
+		break;
+	default:
+		efc_log_err(sli4, "unsupported topology %#x\n", value);
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static inline u32
+sli_convert_mask_to_count(u32 method, u32 mask)
+{
+	u32 count = 0;
+
+	if (method) {
+		count = 1 << (31 - __builtin_clz(mask));
+		count *= 16;
+	} else {
+		count = mask;
+	}
+
+	return count;
+}
+
+static inline u32
+sli_reg_read_status(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_STATUS_REGOFF);
+}
+
+static inline int
+sli_fw_error_status(struct sli4 *sli4)
+{
+	return ((sli_reg_read_status(sli4) & SLI4_PORT_STATUS_ERR) ? 1 : 0);
+}
+
+static inline u32
+sli_reg_read_err1(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR1);
+}
+
+static inline u32
+sli_reg_read_err2(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR2);
+}
+
+static inline int
+sli_fc_rqe_length(struct sli4 *sli4, void *cqe, u32 *len_hdr,
+		  u32 *len_data)
+{
+	struct sli4_fc_async_rcqe	*rcqe = cqe;
+
+	*len_hdr = *len_data = 0;
+
+	if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+		*len_hdr  = rcqe->hdpl_byte & SLI4_RACQE_HDPL;
+		*len_data = le16_to_cpu(rcqe->data_placement_length);
+		return 0;
+	} else {
+		return -1;
+	}
+}
+
+static inline u8
+sli_fc_rqe_fcfi(struct sli4 *sli4, void *cqe)
+{
+	u8 code = ((u8 *)cqe)[SLI4_CQE_CODE_OFFSET];
+	u8 fcfi = U8_MAX;
+
+	switch (code) {
+	case SLI4_CQE_CODE_RQ_ASYNC: {
+		struct sli4_fc_async_rcqe *rcqe = cqe;
+
+		fcfi = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1: {
+		struct sli4_fc_async_rcqe_v1 *rcqev1 = cqe;
+
+		fcfi = rcqev1->fcfi_byte & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD: {
+		struct sli4_fc_optimized_write_cmd_cqe *opt_wr = cqe;
+
+		fcfi = opt_wr->flags0 & SLI4_OCQE_FCFI;
+		break;
+	}
+	}
+
+	return fcfi;
+}
+
+/****************************************************************************
+ * Function prototypes
+ */
+extern int
+sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_dump_type4(struct sli4 *sli4, void *buf, size_t size, u16 wki);
+extern int
+sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf, size_t size,
+				     u32 page_num, struct efc_dma *dma);
+extern int
+sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size, u8 req_stats,
+			u8 clear_overflow_flags, u8 clear_all_counters);
+extern int
+sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size, u8 clear);
+extern int
+sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size, u32 speed,
+		  u8 reset_alpa);
+extern int
+sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size, u16 vfi, u16 fcfi,
+		 u16 vpi);
+extern int
+sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size, u16 vpi, u16 vfi);
+extern int
+sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size, u16 base, u16 cnt);
+extern int
+sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size, u8 num_xri);
+extern int
+sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *dma, u16 vpi);
+extern int
+sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *dma);
+extern int
+sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size, u8 *wwpn,
+		u8 *wwnn, u8 hard_alpa, u32 preferred_d_id);
+extern int
+sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size, u16 index,
+		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
+extern int
+sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size, u8 mode,
+	u16 index, u8 rq_selection_policy, u8 mrq_bit_mask, u16 num_mrqs,
+	    struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
+extern int
+sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size, u32 nport_id,
+	u16 rpi, u16 vpi, struct efc_dma *dma, u8 update, u8 enable_t10_pi);
+extern int
+sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size, u16 indicator);
+extern int
+sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size, u16 indicator,
+		  enum sli4_resource which, u32 fc_id);
+extern int
+sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size, u32 fc_id,
+		__be64 sli_wwpn, u16 vpi, u16 vfi, bool update);
+extern int
+sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size, u16 vfi, u16 fcfi,
+		struct efc_dma dma, u16 vpi, __be64 sli_wwpn, u32 fc_id);
+extern int
+sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, size_t size, u16 id, u32 type);
+extern int
+sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size, u16 idx, u32 type);
+extern int
+sli_cmd_common_nop(struct sli4 *sli4, void *buf, size_t size, uint64_t context);
+extern int
+sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
+					size_t size, u16 rtype);
+extern int
+sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf, size_t size);
+extern int
+sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size, u16 noc,
+		u16 eof, u32 len, u32 offset, char *name, struct efc_dma *dma);
+extern int
+sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
+		char *object_name);
+extern int
+sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
+		u32 length, u32 offset, char *name, struct efc_dma *dma);
+extern int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *cmd, struct efc_dma *resp);
+extern int
+sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf, size_t size,
+		bool query, bool is_buffer_list, struct efc_dma *dma, u8 fdb);
+extern int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
+			    u32 feature, u32 param_len, void *parameter);
+
+extern int sli_cqe_mq(struct sli4 *sli4, void *buf);
+extern int sli_cqe_async(struct sli4 *sli4, void *buf);
+
+extern int
+sli_setup(struct sli4 *sli4, void *os, struct pci_dev *pdev, void __iomem *r[]);
+extern void sli_calc_max_qentries(struct sli4 *sli4);
+extern int sli_init(struct sli4 *sli4);
+extern int sli_reset(struct sli4 *sli4);
+extern int sli_fw_reset(struct sli4 *sli4);
+extern int sli_teardown(struct sli4 *sli4);
+extern int
+sli_callback(struct sli4 *sli4, enum sli4_callback cb, void *func, void *arg);
+extern int
+sli_bmbx_command(struct sli4 *sli4);
+extern int
+__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q, u32 qtype,
+		size_t size, u32 n_entries, u32 align);
+extern int
+__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q);
+extern int
+sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq, u32 num_eq,
+		u32 shift, u32 delay_mult);
+extern int
+sli_queue_alloc(struct sli4 *sli4, u32 qtype, struct sli4_queue *q,
+		u32 n_entries, struct sli4_queue *assoc);
+extern int
+sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[], u32 num_cqs,
+		u32 n_entries, struct sli4_queue *eqs[]);
+extern int
+sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype);
+extern int
+sli_queue_free(struct sli4 *sli4, struct sli4_queue *q, u32 destroy_queues,
+		u32 free_memory);
+extern int
+sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
+extern int
+sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
+
+extern int
+sli_wq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+extern int
+sli_mq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+extern int
+sli_rq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+extern int
+sli_eq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+extern int
+sli_cq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+extern int
+sli_mq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+extern int
+sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype, u32 *rid,
+		u32 *index);
+extern int
+sli_resource_free(struct sli4 *sli4, enum sli4_resource rtype, u32 rid);
+extern int
+sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype);
+extern int
+sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id);
+extern int
+sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
+		enum sli4_qentry *etype, u16 *q_id);
+
+extern int sli_raise_ue(struct sli4 *sli4, u8 dump);
+extern int sli_dump_is_ready(struct sli4 *sli4);
+extern bool sli_reset_required(struct sli4 *sli4);
+extern bool sli_fw_ready(struct sli4 *sli4);
+
+extern int
+sli_fc_process_link_attention(struct sli4 *sli4, void *acqe);
+extern int
+sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
+		 u8 *cqe, enum sli4_qentry *etype,
+		 u16 *rid);
+u32 sli_fc_response_length(struct sli4 *sli4, u8 *cqe);
+u32 sli_fc_io_length(struct sli4 *sli4, u8 *cqe);
+int sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id);
+u32 sli_fc_ext_status(struct sli4 *sli4, u8 *cqe);
+extern int
+sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe, u16 *rq_id, u32 *index);
+extern int
+sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *qmem, u16 cq_id);
+int sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, size_t size, u16 xri,
+		u32 xri_count, struct efc_dma *page0[], struct efc_dma *page1[],
+		struct efc_dma *dma);
+extern int
+sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *qmem, u16 cq_id, u16 buffer_size);
+extern int
+sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *qmem, u16 cq_id, u16 buffer_size);
+extern int
+sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *dma, u16 rpi, struct efc_dma *payload_dma);
+extern int
+sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q, u32 n_entries,
+		u32 buffer_size, struct sli4_queue *cq, bool is_hdr);
+extern int
+sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs, struct sli4_queue *q[],
+		u32 base_cq_id, u32 num, u32 hdr_buf_size, u32 data_buf_size);
+u32 sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi);
+extern int
+sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
+		enum sli4_abort_type type, bool send_abts, u32 ids, u32 mask,
+		u16 tag, u16 cq_id);
+
+extern int
+sli_send_frame_wqe(struct sli4 *sli4, void *buf, size_t size, u8 sof, u8 eof,
+		u32 *hdr, struct efc_dma *payload, u32 req_len, u8 timeout,
+		u16 xri, u16 req_tag);
+
+extern int
+sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *rsp, u32 rsp_len, u16 xri, u16 tag, u16 cq_id,
+		u16 ox_id, u16 rnodeid, u16 sportid, bool rnodeattached,
+		u32 rnode_fcid, u32 flags, u32 s_id);
+
+extern int
+sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *sgl, u8 req_type, u32 req_len, u32 max_rsp_len,
+		u8 timeout, u16 xri, u16 tag, u16 cq_id, u16 rnodeid,
+		u16 sport, bool rnodeattached, u32 rnode_fcid, u32 sport_fcid);
+
+extern int
+sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *sgl, u16 xri, u16 tag, u16 cq_id, u32 rpi,
+		u32 rnode_fcid, u8 timeout);
+
+extern int
+sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *sgl, u32 first_data_sge, u32 xfer_len, u16 xri,
+		u16 tag, u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		u8 timeout);
+
+extern int
+sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		struct efc_dma *sgl, u32 first_data_sge, u32 xfer_len,
+		u32 first_burst, u16 xri, u16 tag, u16 cq_id, u32 rpi,
+		u32 rnode_fcid, u8 dif, u8 bs, u8 timeout);
+
+extern int
+sli_fcp_treceive64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		struct sli_fcp_tgt_params *params);
+
+extern int
+sli_fcp_cont_treceive64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+		u32 first_data_sge, u32 xfer_len, u16 xri, u16 sec_xri, u16 tag,
+		u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		struct sli_fcp_tgt_params *params);
+
+extern int
+sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		u32 rsp_len, u16 xri, u16 tag, u16 cq_id, u32 rpi,
+		u32 rnode_fcid, u8 port_owned, struct sli_fcp_tgt_params *prms);
+
+extern int
+sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
+		u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		struct sli_fcp_tgt_params *params);
+
+extern int
+sli_gen_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		u32 req_len, u32 max_rsp_len, u16 xri, u16 tag, u16 cq_id,
+		u32 rnode_fcid, u16 rnodeid, struct sli_ct_params *params);
+
+extern int
+sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
+		struct sli_bls_payload *payload, u16 xri, u16 tag, u16 cq_id,
+		bool rnodeattached, u16 rnodeid, u16 sportid, u32 rnode_fcid,
+		u32 sport_fcid, u32 s_id);
+
+extern int
+sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *payload,
+		u32 payload_len, u16 xri, u16 tag, u32 rnode_fcid, u16 rnodeid,
+		struct sli_ct_params *params);
+
+extern int
+sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, size_t size, u16 xri, u16 tag,
+		u16 cq_id);
+extern void
+sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf, size_t size,
+		u16 timeout);
+
+const char *sli_fc_get_status_string(u32 status);
+
 #endif /* !_SLI4_H */
-- 
2.16.4



* [PATCH v3 08/31] elx: libefc: Generic state machine framework
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (6 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 12:37   ` Hannes Reinecke
  2020-04-15 17:20   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
                   ` (22 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch starts the population of the libefc library.
The library will contain common routines usable by a target or initiator
driver. The library will also contain a FC discovery state machine
interface.

This patch creates the library directory and adds definitions
for the discovery state machine interface.
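
As a rough usage sketch (not part of this patch): the two handlers below are
hypothetical, but their signature, the efc_sm_* calls, and the event names
are the ones introduced here. A state is simply a function; the owning
object transitions into a state and then posts events to the context.

/* Hypothetical terminal state */
static void *example_sm_done(struct efc_sm_ctx *ctx,
			     enum efc_sm_event evt, void *arg)
{
	if (evt == EFC_EVT_ENTER)
		pr_info("reached %s via %s\n", __func__, efc_sm_event_name(evt));
	return NULL;
}

/* Hypothetical state that waits for a timer before moving on */
static void *example_sm_wait(struct efc_sm_ctx *ctx,
			     enum efc_sm_event evt, void *arg)
{
	switch (evt) {
	case EFC_EVT_ENTER:		/* entered via efc_sm_transition() */
		break;
	case EFC_EVT_TIMER_EXPIRED:
		efc_sm_transition(ctx, example_sm_done, arg);
		break;
	default:
		break;
	}
	return NULL;
}

A caller would start the machine with efc_sm_transition(&ctx, example_sm_wait,
NULL) and later drive it with efc_sm_post_event(&ctx, EFC_EVT_TIMER_EXPIRED,
NULL).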

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Removed efc_sm_id array which is not used.
  Added State Machine event name lookup array.
---
 drivers/scsi/elx/libefc/efc_sm.c |  61 ++++++++++++
 drivers/scsi/elx/libefc/efc_sm.h | 209 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 270 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h

diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c
new file mode 100644
index 000000000000..aba9d542f22e
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Generic state machine framework.
+ */
+#include "efc.h"
+#include "efc_sm.h"
+
+/**
+ * efc_sm_post_event() - Post an event to a context.
+ *
+ * @ctx: State machine context
+ * @evt: Event to post
+ * @data: Event-specific data (if any)
+ */
+int
+efc_sm_post_event(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *data)
+{
+	if (ctx->current_state) {
+		ctx->current_state(ctx, evt, data);
+		return EFC_SUCCESS;
+	} else {
+		return EFC_FAIL;
+	}
+}
+
+void
+efc_sm_transition(struct efc_sm_ctx *ctx,
+		  void *(*state)(struct efc_sm_ctx *,
+				 enum efc_sm_event, void *), void *data)
+
+{
+	if (ctx->current_state == state) {
+		efc_sm_post_event(ctx, EFC_EVT_REENTER, data);
+	} else {
+		efc_sm_post_event(ctx, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_sm_post_event(ctx, EFC_EVT_ENTER, data);
+	}
+}
+
+void
+efc_sm_disable(struct efc_sm_ctx *ctx)
+{
+	ctx->current_state = NULL;
+}
+
+static const char *event_name[] = EFC_SM_EVENT_NAME;
+
+const char *efc_sm_event_name(enum efc_sm_event evt)
+{
+	if (evt > EFC_EVT_LAST)
+		return "unknown";
+
+	return event_name[evt];
+}
diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h
new file mode 100644
index 000000000000..9cb534a1b86e
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/**
+ * Generic state machine framework declarations.
+ */
+
+#ifndef _EFC_SM_H
+#define _EFC_SM_H
+
+/**
+ * State Machine (SM) IDs.
+ */
+enum {
+	EFC_SM_COMMON = 0,
+	EFC_SM_DOMAIN,
+	EFC_SM_PORT,
+	EFC_SM_LOGIN,
+	EFC_SM_LAST
+};
+
+#define EFC_SM_EVENT_SHIFT		8
+#define EFC_SM_EVENT_START(id)		((id) << EFC_SM_EVENT_SHIFT)
+
+struct efc_sm_ctx;
+
+/* State Machine events */
+enum efc_sm_event {
+	/* Common Events */
+	EFC_EVT_ENTER = EFC_SM_EVENT_START(EFC_SM_COMMON),
+	EFC_EVT_REENTER,
+	EFC_EVT_EXIT,
+	EFC_EVT_SHUTDOWN,
+	EFC_EVT_ALL_CHILD_NODES_FREE,
+	EFC_EVT_RESUME,
+	EFC_EVT_TIMER_EXPIRED,
+
+	/* Domain Events */
+	EFC_EVT_RESPONSE = EFC_SM_EVENT_START(EFC_SM_DOMAIN),
+	EFC_EVT_ERROR,
+
+	EFC_EVT_DOMAIN_FOUND,
+	EFC_EVT_DOMAIN_ALLOC_OK,
+	EFC_EVT_DOMAIN_ALLOC_FAIL,
+	EFC_EVT_DOMAIN_REQ_ATTACH,
+	EFC_EVT_DOMAIN_ATTACH_OK,
+	EFC_EVT_DOMAIN_ATTACH_FAIL,
+	EFC_EVT_DOMAIN_LOST,
+	EFC_EVT_DOMAIN_FREE_OK,
+	EFC_EVT_DOMAIN_FREE_FAIL,
+	EFC_EVT_HW_DOMAIN_REQ_ATTACH,
+	EFC_EVT_HW_DOMAIN_REQ_FREE,
+
+	/* Sport Events */
+	EFC_EVT_SPORT_ALLOC_OK = EFC_SM_EVENT_START(EFC_SM_PORT),
+	EFC_EVT_SPORT_ALLOC_FAIL,
+	EFC_EVT_SPORT_ATTACH_OK,
+	EFC_EVT_SPORT_ATTACH_FAIL,
+	EFC_EVT_SPORT_FREE_OK,
+	EFC_EVT_SPORT_FREE_FAIL,
+	EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
+	EFC_EVT_HW_PORT_ALLOC_OK,
+	EFC_EVT_HW_PORT_ALLOC_FAIL,
+	EFC_EVT_HW_PORT_ATTACH_OK,
+	EFC_EVT_HW_PORT_REQ_ATTACH,
+	EFC_EVT_HW_PORT_REQ_FREE,
+	EFC_EVT_HW_PORT_FREE_OK,
+
+	/* Login Events */
+	EFC_EVT_SRRS_ELS_REQ_OK = EFC_SM_EVENT_START(EFC_SM_LOGIN),
+	EFC_EVT_SRRS_ELS_CMPL_OK,
+	EFC_EVT_SRRS_ELS_REQ_FAIL,
+	EFC_EVT_SRRS_ELS_CMPL_FAIL,
+	EFC_EVT_SRRS_ELS_REQ_RJT,
+	EFC_EVT_NODE_ATTACH_OK,
+	EFC_EVT_NODE_ATTACH_FAIL,
+	EFC_EVT_NODE_FREE_OK,
+	EFC_EVT_NODE_FREE_FAIL,
+	EFC_EVT_ELS_FRAME,
+	EFC_EVT_ELS_REQ_TIMEOUT,
+	EFC_EVT_ELS_REQ_ABORTED,
+	/* request an ELS IO be aborted */
+	EFC_EVT_ABORT_ELS,
+	/* ELS abort process complete */
+	EFC_EVT_ELS_ABORT_CMPL,
+
+	EFC_EVT_ABTS_RCVD,
+
+	/* node is not in the GID_PT payload */
+	EFC_EVT_NODE_MISSING,
+	/* node is allocated and in the GID_PT payload */
+	EFC_EVT_NODE_REFOUND,
+	/* node shutting down due to PLOGI recvd (implicit logo) */
+	EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+	/* node shutting down due to LOGO recvd/sent (explicit logo) */
+	EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+
+	EFC_EVT_PLOGI_RCVD,
+	EFC_EVT_FLOGI_RCVD,
+	EFC_EVT_LOGO_RCVD,
+	EFC_EVT_PRLI_RCVD,
+	EFC_EVT_PRLO_RCVD,
+	EFC_EVT_PDISC_RCVD,
+	EFC_EVT_FDISC_RCVD,
+	EFC_EVT_ADISC_RCVD,
+	EFC_EVT_RSCN_RCVD,
+	EFC_EVT_SCR_RCVD,
+	EFC_EVT_ELS_RCVD,
+
+	EFC_EVT_FCP_CMD_RCVD,
+
+	EFC_EVT_GIDPT_DELAY_EXPIRED,
+
+	/* SCSI Target Server events */
+	EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY,
+	EFC_EVT_NODE_DEL_INI_COMPLETE,
+	EFC_EVT_NODE_DEL_TGT_COMPLETE,
+
+	/* Must be last */
+	EFC_EVT_LAST
+};
+
+/* State Machine event name lookup array */
+#define EFC_SM_EVENT_NAME {						\
+	[EFC_EVT_ENTER]			= "EFC_EVT_ENTER",		\
+	[EFC_EVT_REENTER]		= "EFC_EVT_REENTER",		\
+	[EFC_EVT_EXIT]			= "EFC_EVT_EXIT",		\
+	[EFC_EVT_SHUTDOWN]		= "EFC_EVT_SHUTDOWN",		\
+	[EFC_EVT_ALL_CHILD_NODES_FREE]	= "EFC_EVT_ALL_CHILD_NODES_FREE",\
+	[EFC_EVT_RESUME]		= "EFC_EVT_RESUME",		\
+	[EFC_EVT_TIMER_EXPIRED]		= "EFC_EVT_TIMER_EXPIRED",	\
+	[EFC_EVT_RESPONSE]		= "EFC_EVT_RESPONSE",		\
+	[EFC_EVT_ERROR]			= "EFC_EVT_ERROR",		\
+	[EFC_EVT_DOMAIN_FOUND]		= "EFC_EVT_DOMAIN_FOUND",	\
+	[EFC_EVT_DOMAIN_ALLOC_OK]	= "EFC_EVT_DOMAIN_ALLOC_OK",	\
+	[EFC_EVT_DOMAIN_ALLOC_FAIL]	= "EFC_EVT_DOMAIN_ALLOC_FAIL",	\
+	[EFC_EVT_DOMAIN_REQ_ATTACH]	= "EFC_EVT_DOMAIN_REQ_ATTACH",	\
+	[EFC_EVT_DOMAIN_ATTACH_OK]	= "EFC_EVT_DOMAIN_ATTACH_OK",	\
+	[EFC_EVT_DOMAIN_ATTACH_FAIL]	= "EFC_EVT_DOMAIN_ATTACH_FAIL",	\
+	[EFC_EVT_DOMAIN_LOST]		= "EFC_EVT_DOMAIN_LOST",	\
+	[EFC_EVT_DOMAIN_FREE_OK]	= "EFC_EVT_DOMAIN_FREE_OK",	\
+	[EFC_EVT_DOMAIN_FREE_FAIL]	= "EFC_EVT_DOMAIN_FREE_FAIL",	\
+	[EFC_EVT_HW_DOMAIN_REQ_ATTACH]	= "EFC_EVT_HW_DOMAIN_REQ_ATTACH",\
+	[EFC_EVT_HW_DOMAIN_REQ_FREE]	= "EFC_EVT_HW_DOMAIN_REQ_FREE",	\
+	[EFC_EVT_SPORT_ALLOC_OK]	= "EFC_EVT_SPORT_ALLOC_OK",	\
+	[EFC_EVT_SPORT_ALLOC_FAIL]	= "EFC_EVT_SPORT_ALLOC_FAIL",	\
+	[EFC_EVT_SPORT_ATTACH_OK]	= "EFC_EVT_SPORT_ATTACH_OK",	\
+	[EFC_EVT_SPORT_ATTACH_FAIL]	= "EFC_EVT_SPORT_ATTACH_FAIL",	\
+	[EFC_EVT_SPORT_FREE_OK]		= "EFC_EVT_SPORT_FREE_OK",	\
+	[EFC_EVT_SPORT_FREE_FAIL]	= "EFC_EVT_SPORT_FREE_FAIL",	\
+	[EFC_EVT_SPORT_TOPOLOGY_NOTIFY]	= "EFC_EVT_SPORT_TOPOLOGY_NOTIFY",\
+	[EFC_EVT_HW_PORT_ALLOC_OK]	= "EFC_EVT_HW_PORT_ALLOC_OK",	\
+	[EFC_EVT_HW_PORT_ALLOC_FAIL]	= "EFC_EVT_HW_PORT_ALLOC_FAIL",	\
+	[EFC_EVT_HW_PORT_ATTACH_OK]	= "EFC_EVT_HW_PORT_ATTACH_OK",	\
+	[EFC_EVT_HW_PORT_REQ_ATTACH]	= "EFC_EVT_HW_PORT_REQ_ATTACH",	\
+	[EFC_EVT_HW_PORT_REQ_FREE]	= "EFC_EVT_HW_PORT_REQ_FREE",	\
+	[EFC_EVT_HW_PORT_FREE_OK]	= "EFC_EVT_HW_PORT_FREE_OK",	\
+	[EFC_EVT_SRRS_ELS_REQ_OK]	= "EFC_EVT_SRRS_ELS_REQ_OK",	\
+	[EFC_EVT_SRRS_ELS_CMPL_OK]	= "EFC_EVT_SRRS_ELS_CMPL_OK",	\
+	[EFC_EVT_SRRS_ELS_REQ_FAIL]	= "EFC_EVT_SRRS_ELS_REQ_FAIL",	\
+	[EFC_EVT_SRRS_ELS_CMPL_FAIL]	= "EFC_EVT_SRRS_ELS_CMPL_FAIL",	\
+	[EFC_EVT_SRRS_ELS_REQ_RJT]	= "EFC_EVT_SRRS_ELS_REQ_RJT",	\
+	[EFC_EVT_NODE_ATTACH_OK]	= "EFC_EVT_NODE_ATTACH_OK",	\
+	[EFC_EVT_NODE_ATTACH_FAIL]	= "EFC_EVT_NODE_ATTACH_FAIL",	\
+	[EFC_EVT_NODE_FREE_OK]		= "EFC_EVT_NODE_FREE_OK",	\
+	[EFC_EVT_NODE_FREE_FAIL]	= "EFC_EVT_NODE_FREE_FAIL",	\
+	[EFC_EVT_ELS_FRAME]		= "EFC_EVT_ELS_FRAME",		\
+	[EFC_EVT_ELS_REQ_TIMEOUT]	= "EFC_EVT_ELS_REQ_TIMEOUT",	\
+	[EFC_EVT_ELS_REQ_ABORTED]	= "EFC_EVT_ELS_REQ_ABORTED",	\
+	[EFC_EVT_ABORT_ELS]		= "EFC_EVT_ABORT_ELS",		\
+	[EFC_EVT_ELS_ABORT_CMPL]	= "EFC_EVT_ELS_ABORT_CMPL",	\
+	[EFC_EVT_ABTS_RCVD]		= "EFC_EVT_ABTS_RCVD",		\
+	[EFC_EVT_NODE_MISSING]		= "EFC_EVT_NODE_MISSING",	\
+	[EFC_EVT_NODE_REFOUND]		= "EFC_EVT_NODE_REFOUND",	\
+	[EFC_EVT_SHUTDOWN_IMPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO",\
+	[EFC_EVT_SHUTDOWN_EXPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO",\
+	[EFC_EVT_PLOGI_RCVD]		= "EFC_EVT_PLOGI_RCVD",		\
+	[EFC_EVT_FLOGI_RCVD]		= "EFC_EVT_FLOGI_RCVD",		\
+	[EFC_EVT_LOGO_RCVD]		= "EFC_EVT_LOGO_RCVD",		\
+	[EFC_EVT_PRLI_RCVD]		= "EFC_EVT_PRLI_RCVD",		\
+	[EFC_EVT_PRLO_RCVD]		= "EFC_EVT_PRLO_RCVD",		\
+	[EFC_EVT_PDISC_RCVD]		= "EFC_EVT_PDISC_RCVD",		\
+	[EFC_EVT_FDISC_RCVD]		= "EFC_EVT_FDISC_RCVD",		\
+	[EFC_EVT_ADISC_RCVD]		= "EFC_EVT_ADISC_RCVD",		\
+	[EFC_EVT_RSCN_RCVD]		= "EFC_EVT_RSCN_RCVD",		\
+	[EFC_EVT_SCR_RCVD]		= "EFC_EVT_SCR_RCVD",		\
+	[EFC_EVT_ELS_RCVD]		= "EFC_EVT_ELS_RCVD",		\
+	[EFC_EVT_FCP_CMD_RCVD]		= "EFC_EVT_FCP_CMD_RCVD",	\
+	[EFC_EVT_NODE_DEL_INI_COMPLETE]	= "EFC_EVT_NODE_DEL_INI_COMPLETE",\
+	[EFC_EVT_NODE_DEL_TGT_COMPLETE]	= "EFC_EVT_NODE_DEL_TGT_COMPLETE",\
+	[EFC_EVT_LAST]			= "EFC_EVT_LAST",		\
+}
+
+int
+efc_sm_post_event(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *data);
+void
+efc_sm_transition(struct efc_sm_ctx *ctx,
+		  void *(*state)(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg),
+		  void *data);
+void efc_sm_disable(struct efc_sm_ctx *ctx);
+const char *efc_sm_event_name(enum efc_sm_event evt);
+
+#endif /* ! _EFC_SM_H */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (7 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 08/31] elx: libefc: Generic state machine framework James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 12:41   ` Hannes Reinecke
  2020-04-15 17:32   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces James Smart
                   ` (21 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- SLI/Local FC port objects
- efc_domain: FC domain (aka fabric) objects
- efc_node: FC node (aka remote port) objects
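
For reference, the structures introduced here nest roughly as follows
(summarized from the definitions in this patch):

  struct efc               per-port library context
    struct efc_domain      FC domain (fabric); one active per port
      struct efc_sli_port  local SLI/FC port(s), physical and NPIV
        struct efc_node    remote port; embeds struct efc_remote_node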

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Removed Sparse Vector APIs and structures.
---
 drivers/scsi/elx/libefc/efc.h     |  72 +++++
 drivers/scsi/elx/libefc/efc_lib.c |  41 +++
 drivers/scsi/elx/libefc/efclib.h  | 640 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 753 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
 create mode 100644 drivers/scsi/elx/libefc/efclib.h

diff --git a/drivers/scsi/elx/libefc/efc.h b/drivers/scsi/elx/libefc/efc.h
new file mode 100644
index 000000000000..c93c6d59b21a
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_H__
+#define __EFC_H__
+
+#include "../include/efc_common.h"
+#include "efclib.h"
+#include "efc_sm.h"
+#include "efc_domain.h"
+#include "efc_sport.h"
+#include "efc_node.h"
+#include "efc_fabric.h"
+#include "efc_device.h"
+
+#define EFC_MAX_REMOTE_NODES			2048
+#define NODE_SPARAMS_SIZE			256
+
+enum efc_hw_rtn {
+	EFC_HW_RTN_SUCCESS = 0,
+	EFC_HW_RTN_SUCCESS_SYNC = 1,
+	EFC_HW_RTN_ERROR = -1,
+	EFC_HW_RTN_NO_RESOURCES = -2,
+	EFC_HW_RTN_NO_MEMORY = -3,
+	EFC_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFC_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFC_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFC_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFC_HW_RTN_IS_ERROR(e) ((e) < 0)
+
+enum efc_scsi_del_initiator_reason {
+	EFC_SCSI_INITIATOR_DELETED,
+	EFC_SCSI_INITIATOR_MISSING,
+};
+
+enum efc_scsi_del_target_reason {
+	EFC_SCSI_TARGET_DELETED,
+	EFC_SCSI_TARGET_MISSING,
+};
+
+#define EFC_SCSI_CALL_COMPLETE			0
+#define EFC_SCSI_CALL_ASYNC			1
+
+#define EFC_FC_ELS_DEFAULT_RETRIES		3
+
+/* Timeouts */
+#define EFC_FC_ELS_SEND_DEFAULT_TIMEOUT		0
+#define EFC_FC_FLOGI_TIMEOUT_SEC		5
+#define EFC_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC	30000000
+
+#define domain_sm_trace(domain) \
+	efc_log_debug(domain->efc, "[domain:%s] %-20s %-20s\n", \
+		      domain->display_name, __func__, efc_sm_event_name(evt)) \
+
+#define domain_trace(domain, fmt, ...) \
+	efc_log_debug(domain->efc, \
+		      "[%s]" fmt, domain->display_name, ##__VA_ARGS__) \
+
+#define node_sm_trace() \
+	efc_log_debug(node->efc, \
+		"[%s] %-20s\n", node->display_name, efc_sm_event_name(evt)) \
+
+#define sport_sm_trace(sport) \
+	efc_log_debug(sport->efc, \
+		"[%s] %-20s\n", sport->display_name, efc_sm_event_name(evt)) \
+
+#endif /* __EFC_H__ */
diff --git a/drivers/scsi/elx/libefc/efc_lib.c b/drivers/scsi/elx/libefc/efc_lib.c
new file mode 100644
index 000000000000..894cce9a39f4
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_lib.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include "efc.h"
+
+int efcport_init(struct efc *efc)
+{
+	u32 rc = EFC_SUCCESS;
+
+	spin_lock_init(&efc->lock);
+	INIT_LIST_HEAD(&efc->vport_list);
+
+	/* Create Node pool */
+	efc->node_pool = mempool_create_kmalloc_pool(EFC_MAX_REMOTE_NODES,
+						sizeof(struct efc_node));
+	if (!efc->node_pool) {
+		efc_log_err(efc, "Can't allocate node pool\n");
+		return -ENOMEM;
+	}
+
+	efc->node_dma_pool = dma_pool_create("node_dma_pool", &efc->pcidev->dev,
+						NODE_SPARAMS_SIZE, 0, 0);
+	if (!efc->node_dma_pool) {
+		efc_log_err(efc, "Can't allocate node dma pool\n");
+		mempool_destroy(efc->node_pool);
+		return -ENOMEM;
+	}
+
+	return rc;
+}
+
+void efcport_destroy(struct efc *efc)
+{
+	mempool_destroy(efc->node_pool);
+	dma_pool_destroy(efc->node_dma_pool);
+}
diff --git a/drivers/scsi/elx/libefc/efclib.h b/drivers/scsi/elx/libefc/efclib.h
new file mode 100644
index 000000000000..9ac52ca7ec83
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efclib.h
@@ -0,0 +1,640 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCLIB_H__
+#define __EFCLIB_H__
+
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+#include "scsi/fc/fc_gs.h"
+#include "scsi/fc_frame.h"
+#include "../include/efc_common.h"
+
+#define EFC_SERVICE_PARMS_LENGTH	0x74
+#define EFC_NAME_LENGTH			32
+#define EFC_DISPLAY_BUS_INFO_LENGTH	16
+
+#define EFC_WWN_LENGTH			32
+
+/* Local port topology */
+enum efc_sport_topology {
+	EFC_SPORT_TOPOLOGY_UNKNOWN = 0,
+	EFC_SPORT_TOPOLOGY_FABRIC,
+	EFC_SPORT_TOPOLOGY_P2P,
+	EFC_SPORT_TOPOLOGY_LOOP,
+};
+
+#define enable_target_rscn(efc)		1
+
+enum efc_node_shutd_rsn {
+	EFC_NODE_SHUTDOWN_DEFAULT = 0,
+	EFC_NODE_SHUTDOWN_EXPLICIT_LOGO,
+	EFC_NODE_SHUTDOWN_IMPLICIT_LOGO,
+};
+
+enum efc_node_send_ls_acc {
+	EFC_NODE_SEND_LS_ACC_NONE = 0,
+	EFC_NODE_SEND_LS_ACC_PLOGI,
+	EFC_NODE_SEND_LS_ACC_PRLI,
+};
+
+#define EFC_LINK_STATUS_UP		0
+#define EFC_LINK_STATUS_DOWN		1
+
+/* State machine context header  */
+struct efc_sm_ctx {
+	void *(*current_state)(struct efc_sm_ctx *ctx,
+			       u32 evt, void *arg);
+
+	const char	*description;
+	void		*app;
+};
+
+/* Description of discovered Fabric Domain */
+struct efc_domain_record {
+	u32		index;
+	u32		priority;
+	u8		address[6];
+	u8		wwn[8];
+	union {
+		u8	vlan[512];
+		u8	loop[128];
+	} map;
+	u32		speed;
+	u32		fc_id;
+	bool		is_loop;
+	bool		is_nport;
+};
+
+/* Fabric/Domain events */
+enum efc_hw_domain_event {
+	EFC_HW_DOMAIN_ALLOC_OK,
+	EFC_HW_DOMAIN_ALLOC_FAIL,
+	EFC_HW_DOMAIN_ATTACH_OK,
+	EFC_HW_DOMAIN_ATTACH_FAIL,
+	EFC_HW_DOMAIN_FREE_OK,
+	EFC_HW_DOMAIN_FREE_FAIL,
+	EFC_HW_DOMAIN_LOST,
+	EFC_HW_DOMAIN_FOUND,
+	EFC_HW_DOMAIN_CHANGED,
+};
+
+enum efc_hw_port_event {
+	EFC_HW_PORT_ALLOC_OK,
+	EFC_HW_PORT_ALLOC_FAIL,
+	EFC_HW_PORT_ATTACH_OK,
+	EFC_HW_PORT_ATTACH_FAIL,
+	EFC_HW_PORT_FREE_OK,
+	EFC_HW_PORT_FREE_FAIL,
+};
+
+enum efc_hw_remote_node_event {
+	EFC_HW_NODE_ATTACH_OK,
+	EFC_HW_NODE_ATTACH_FAIL,
+	EFC_HW_NODE_FREE_OK,
+	EFC_HW_NODE_FREE_FAIL,
+	EFC_HW_NODE_FREE_ALL_OK,
+	EFC_HW_NODE_FREE_ALL_FAIL,
+};
+
+enum efc_hw_node_els_event {
+	EFC_HW_SRRS_ELS_REQ_OK,
+	EFC_HW_SRRS_ELS_CMPL_OK,
+	EFC_HW_SRRS_ELS_REQ_FAIL,
+	EFC_HW_SRRS_ELS_CMPL_FAIL,
+	EFC_HW_SRRS_ELS_REQ_RJT,
+	EFC_HW_ELS_REQ_ABORTED,
+};
+
+struct efc_sli_port {
+	struct list_head	list_entry;
+	struct efc		*efc;
+	u32			tgt_id;
+	u32			index;
+	u32			instance_index;
+	char			display_name[EFC_NAME_LENGTH];
+	struct efc_domain	*domain;
+	bool			is_vport;
+	u64			wwpn;
+	u64			wwnn;
+	struct list_head	node_list;
+	void			*ini_sport;
+	void			*tgt_sport;
+	void			*tgt_data;
+	void			*ini_data;
+
+	/* Members private to HW/SLI */
+	void			*hw;
+	u32			indicator;
+	u32			fc_id;
+	struct efc_dma		dma;
+
+	u8			wwnn_str[EFC_WWN_LENGTH];
+	__be64			sli_wwpn;
+	__be64			sli_wwnn;
+	bool			free_req_pending;
+	bool			attached;
+
+	struct efc_sm_ctx	sm;
+	struct xarray		lookup;
+	bool			enable_ini;
+	bool			enable_tgt;
+	bool			enable_rscn;
+	bool			shutting_down;
+	bool			p2p_winner;
+	enum efc_sport_topology topology;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	u32			p2p_remote_port_id;
+	u32			p2p_port_id;
+};
+
+/**
+ * Fibre Channel domain object
+ *
+ * This object is a container for the various SLI components needed
+ * to connect to the domain of an FC or FCoE switch.
+ * @efc:		pointer back to efc
+ * @instance_index:	unique instance index value
+ * @display_name:	Node display name
+ * @sport_list:		linked list of SLI ports
+ * @ini_domain:		initiator backend private domain data
+ * @tgt_domain:		target backend private domain data
+ * @hw:			pointer to HW
+ * @sm:			state machine context
+ * @fcf:		FC Forwarder table index
+ * @fcf_indicator:	FCFI
+ * @indicator:		VFI
+ * @dma:		memory for Service Parameters
+ * @fcf_wwn:		WWN for FCF/switch
+ * @drvsm:		driver domain sm context
+ * @drvsm_lock:		driver domain sm lock
+ * @attached:		set true after attach completes
+ * @is_fc:		is FC
+ * @is_loop:		is loop topology
+ * @is_nlport:		is public loop
+ * @domain_found_pending: a domain found is pending, drec is updated
+ * @req_domain_free:	True if domain object should be free'd
+ * @req_accept_frames:	set in domain state machine to enable frames
+ * @domain_notify_pend:	Set in domain SM to avoid duplicate node event post
+ * @pending_drec:	Pending drec if a domain found is pending
+ * @service_params:	any sports service parameters
+ * @flogi_service_params: Fabric/P2P service parameters from FLOGI
+ * @lookup:		d_id to node lookup object
+ * @sport:		Pointer to first (physical) SLI port
+ */
+struct efc_domain {
+	struct efc		*efc;
+	char			display_name[EFC_NAME_LENGTH];
+	struct list_head	sport_list;
+	void			*ini_domain;
+	void			*tgt_domain;
+
+	/* Declarations private to HW/SLI */
+	void			*hw;
+	u32			fcf;
+	u32			fcf_indicator;
+	u32			indicator;
+	struct efc_dma		dma;
+
+	/* Declarations private to FC transport */
+	u64			fcf_wwn;
+	struct efc_sm_ctx	drvsm;
+	bool			attached;
+	bool			is_fc;
+	bool			is_loop;
+	bool			is_nlport;
+	bool			domain_found_pending;
+	bool			req_domain_free;
+	bool			req_accept_frames;
+	bool			domain_notify_pend;
+
+	struct efc_domain_record pending_drec;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	u8			flogi_service_params[EFC_SERVICE_PARMS_LENGTH];
+
+	struct xarray		lookup;
+
+	struct efc_sli_port	*sport;
+	u32			sport_instance_count;
+};
+
+/**
+ * Remote Node object
+ *
+ * This object represents a connection between the SLI port and another
+ * Nx_Port on the fabric. Note this can be either a well-known port such
+ * as an F_Port (i.e. ff:ff:fe) or another N_Port.
+ * @indicator:		RPI
+ * @fc_id:		FC address
+ * @attached:		true if attached
+ * @node_group:		true if in node group
+ * @free_group:		true if the node group should be free'd
+ * @sport:		associated SLI port
+ * @node:		associated node
+ */
+struct efc_remote_node {
+	u32			indicator;
+	u32			index;
+	u32			fc_id;
+
+	bool			attached;
+	bool			node_group;
+	bool			free_group;
+
+	struct efc_sli_port	*sport;
+	void			*node;
+};
+
+/**
+ * FC Node object
+ * @efc:		pointer back to efc structure
+ * @display_name:	Node display name
+ * @hold_frames:	hold incoming frames if true
+ * @active_ios_lock:	node-wide lock, protects the active I/O list
+ * @active_ios:		active I/O's for this node
+ * @ini_node:		backend initiator private node data
+ * @tgt_node:		backend target private node data
+ * @rnode:		Remote node
+ * @sm:			state machine context
+ * @evtdepth:		current event posting nesting depth
+ * @req_free:		this node is to be free'd
+ * @attached:		node is attached (REGLOGIN complete)
+ * @fcp_enabled:	node is enabled to handle FCP
+ * @rscn_pending:	for name server node RSCN is pending
+ * @send_plogi:		send PLOGI request, upon completion of node attach
+ * @send_plogi_acc:	send PLOGI accept, upon completion of node attach
+ * @io_alloc_enabled:	TRUE if io_alloc() is enabled
+ * @send_ls_acc:	type of LS acc to send
+ * @ls_acc_io:		SCSI IO for LS acc
+ * @ls_acc_oxid:	OX_ID for pending accept
+ * @ls_acc_did:		D_ID for pending accept
+ * @shutdown_reason:	reason for node shutdown
+ * @sparm_dma_buf:	service parameters buffer
+ * @service_params:	plogi/acc frame from remote device
+ * @pend_frames_lock:	lock for inbound pending frames list
+ * @pend_frames:	inbound pending frames list
+ * @pend_frames_processed: count of frames processed in hold frames interval
+ * @ox_id_in_use:	used to verify one-at-a-time use of ox_id
+ * @els_retries_remaining:for ELS, number of retries remaining
+ * @els_req_cnt:	number of outstanding ELS requests
+ * @els_cmpl_cnt:	number of outstanding ELS completions
+ * @abort_cnt:		Abort counter for debugging purposes
+ * @current_state_name:	current node state
+ * @prev_state_name:	previous node state
+ * @current_evt:	current event
+ * @prev_evt:		previous event
+ * @targ:		node is target capable
+ * @init:		node is init capable
+ * @refound:		Handle node refound case when node is being deleted
+ * @els_io_pend_list:	list of pending (not yet processed) ELS IOs
+ * @els_io_active_list:	list of active (processed) ELS IOs
+ * @nodedb_state:	Node debugging, saved state
+ * @gidpt_delay_timer:	GIDPT delay timer
+ * @time_last_gidpt_msec: start time of last target RSCN GIDPT
+ * @wwnn:		remote port WWNN
+ * @wwpn:		remote port WWPN
+ * @chained_io_count:	Statistics : count of IOs with chained SGL's
+ */
+struct efc_node {
+	struct list_head	list_entry;
+	struct efc		*efc;
+	char			display_name[EFC_NAME_LENGTH];
+	struct efc_sli_port	*sport;
+	bool			hold_frames;
+	spinlock_t		active_ios_lock;
+	struct list_head	active_ios;
+	void			*ini_node;
+	void			*tgt_node;
+
+	struct efc_remote_node	rnode;
+	/* Declarations private to FC transport */
+	struct efc_sm_ctx	sm;
+	u32			evtdepth;
+
+	bool			req_free;
+	bool			attached;
+	bool			fcp_enabled;
+	bool			rscn_pending;
+	bool			send_plogi;
+	bool			send_plogi_acc;
+	bool			io_alloc_enabled;
+
+	enum efc_node_send_ls_acc send_ls_acc;
+	void			*ls_acc_io;
+	u32			ls_acc_oxid;
+	u32			ls_acc_did;
+	enum efc_node_shutd_rsn	shutdown_reason;
+	struct efc_dma		sparm_dma_buf;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	spinlock_t		pend_frames_lock;
+	struct list_head	pend_frames;
+	u32			pend_frames_processed;
+	u32			ox_id_in_use;
+	u32			els_retries_remaining;
+	u32			els_req_cnt;
+	u32			els_cmpl_cnt;
+	u32			abort_cnt;
+
+	char			current_state_name[EFC_NAME_LENGTH];
+	char			prev_state_name[EFC_NAME_LENGTH];
+	int			current_evt;
+	int			prev_evt;
+	bool			targ;
+	bool			init;
+	bool			refound;
+	struct list_head	els_io_pend_list;
+	struct list_head	els_io_active_list;
+
+	void *(*nodedb_state)(struct efc_sm_ctx *ctx,
+			      u32 evt, void *arg);
+	struct timer_list	gidpt_delay_timer;
+	u64			time_last_gidpt_msec;
+
+	char			wwnn[EFC_WWN_LENGTH];
+	char			wwpn[EFC_WWN_LENGTH];
+
+	u32			chained_io_count;
+};
+
+/**
+ * NPIV port
+ *
+ * Collection of the information required to restore a virtual port across
+ * link events
+ * @wwnn:		node name
+ * @wwpn:		port name
+ * @fc_id:		port id
+ * @tgt_data:		target backend pointer
+ * @ini_data:		initiator backend pointer
+ * @sport:		Used to match record after attaching for update
+ *
+ */
+
+struct efc_vport_spec {
+	struct list_head	list_entry;
+	u64			wwnn;
+	u64			wwpn;
+	u32			fc_id;
+	bool			enable_tgt;
+	bool			enable_ini;
+	void			*tgt_data;
+	void			*ini_data;
+	struct efc_sli_port	*sport;
+};
+
+#define node_printf(node, fmt, args...) \
+	pr_info("[%s] " fmt, node->display_name, ##args)
+
+/* Node SM IO Context Callback structure */
+struct efc_node_cb {
+	int			status;
+	int			ext_status;
+	struct efc_hw_rq_buffer *header;
+	struct efc_hw_rq_buffer *payload;
+	struct efc_dma		els_rsp;
+};
+
+/* HW unsolicited callback status */
+enum efc_hw_unsol_status {
+	EFC_HW_UNSOL_SUCCESS,
+	EFC_HW_UNSOL_ERROR,
+	EFC_HW_UNSOL_ABTS_RCVD,
+	EFC_HW_UNSOL_MAX,	/* must be last */
+};
+
+enum efc_hw_rq_buffer_type {
+	EFC_HW_RQ_BUFFER_TYPE_HDR,
+	EFC_HW_RQ_BUFFER_TYPE_PAYLOAD,
+	EFC_HW_RQ_BUFFER_TYPE_MAX,
+};
+
+struct efc_hw_rq_buffer {
+	u16			rqindex;
+	struct efc_dma		dma;
+};
+
+/*
+ * Defines a general FC sequence object,
+ * consisting of a header, payload buffers
+ * and a HW IO in the case of port owned XRI
+ */
+struct efc_hw_sequence {
+	struct list_head	list_entry;
+	void			*hw;
+	u8			fcfi;
+	u8			auto_xrdy;
+	u8			out_of_xris;
+
+	struct efc_hw_rq_buffer *header;
+	struct efc_hw_rq_buffer *payload;
+
+	enum efc_hw_unsol_status status;
+	struct efct_hw_io	*hio;
+
+	void			*hw_priv;
+};
+
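+/*
+ * Values returned by the unsolicited frame handlers (for example
+ * efc_domain_dispatch_frame()) to tell the caller whether the sequence
+ * buffers can be freed or must be held.
+ */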
+/* Return value indicating the sequence cannot be freed */
+#define EFC_HW_SEQ_HOLD		0
+/* Return value indicating the sequence can be freed */
+#define EFC_HW_SEQ_FREE		1
+
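+/*
+ * Function template: callbacks registered by the base driver and invoked
+ * by libefc through efc->tt (hardware domain/port/node operations, SCSI
+ * backend notifications, and ELS/CT send routines).
+ */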
+struct libefc_function_template {
+	/*Domain*/
+	int (*hw_domain_alloc)(struct efc *efc, struct efc_domain *d, u32 fcf);
+	int (*hw_domain_attach)(struct efc *efc, struct efc_domain *d, u32 id);
+
+	int (*hw_domain_free)(struct efc *hw, struct efc_domain *d);
+	int (*hw_domain_force_free)(struct efc *efc, struct efc_domain *d);
+
+	int (*new_domain)(struct efc *efc, struct efc_domain *d);
+	void (*del_domain)(struct efc *efc, struct efc_domain *d);
+
+	void (*domain_hold_frames)(struct efc *efc, struct efc_domain *d);
+	void (*domain_accept_frames)(struct efc *efc, struct efc_domain *d);
+
+	/*Sport*/
+	int (*hw_port_alloc)(struct efc *hw, struct efc_sli_port *sp,
+			     struct efc_domain *d, u8 *val);
+	int (*hw_port_attach)(struct efc *hw, struct efc_sli_port *sp,
+			      u32 fc_id);
+
+	int (*hw_port_free)(struct efc *hw, struct efc_sli_port *sp);
+
+	int (*new_sport)(struct efc *efc, struct efc_sli_port *sp);
+	void (*del_sport)(struct efc *efc, struct efc_sli_port *sp);
+
+	/*Node*/
+	int (*hw_node_alloc)(struct efc *hw, struct efc_remote_node *n,
+			     u32 fc_addr, struct efc_sli_port *sport);
+
+	int (*hw_node_attach)(struct efc *hw, struct efc_remote_node *n,
+			      struct efc_dma *sparams);
+
+	int (*hw_node_detach)(struct efc *hw, struct efc_remote_node *r);
+
+	int (*hw_node_free_resources)(struct efc *efc,
+				      struct efc_remote_node *node);
+	int (*node_purge_pending)(struct efc *efc, struct efc_node *n);
+
+	void (*node_io_cleanup)(struct efc *efc, struct efc_node *n,
+				bool force);
+	void (*node_els_cleanup)(struct efc *efc, struct efc_node *n,
+				bool force);
+	void (*node_abort_all_els)(struct efc *efc, struct efc_node *n);
+
+	/*Scsi*/
+	void (*scsi_io_alloc_disable)(struct efc *efc, struct efc_node *node);
+	void (*scsi_io_alloc_enable)(struct efc *efc, struct efc_node *node);
+
+	int (*scsi_validate_node)(struct efc *efc, struct efc_node *n);
+	int (*scsi_new_node)(struct efc *efc, struct efc_node *n);
+
+	int (*scsi_del_node)(struct efc *efc, struct efc_node *n, int reason);
+
+	/*Send ELS*/
+	void *(*els_send)(struct efc *efc, struct efc_node *node,
+			  u32 cmd, u32 timeout_sec, u32 retries);
+
+	void *(*els_send_ct)(struct efc *efc, struct efc_node *node,
+			     u32 cmd, u32 timeout_sec, u32 retries);
+
+	void *(*els_send_resp)(struct efc *efc, struct efc_node *node,
+			       u32 cmd, u16 ox_id);
+
+	void *(*bls_send_acc_hdr)(struct efc *efc, struct efc_node *n,
+				  struct fc_frame_header *hdr);
+	void *(*send_flogi_p2p_acc)(struct efc *efc, struct efc_node *n,
+				    u32 ox_id, u32 s_id);
+
+	int (*send_ct_rsp)(struct efc *efc, struct efc_node *node,
+			   u16 ox_id, struct fc_ct_hdr *hdr,
+			   u32 rsp_code, u32 reason_code, u32 rsn_code_expl);
+
+	void *(*send_ls_rjt)(struct efc *efc, struct efc_node *node,
+			     u32 ox, u32 rcode, u32 rcode_expl, u32 vendor);
+
+	int (*dispatch_fcp_cmd)(struct efc_node *node,
+				struct efc_hw_sequence *seq);
+
+	int (*recv_abts_frame)(struct efc *efc, struct efc_node *node,
+			       struct efc_hw_sequence *seq);
+};
+
+#define EFC_LOG_LIB		0x01
+#define EFC_LOG_NODE		0x02
+#define EFC_LOG_PORT		0x04
+#define EFC_LOG_DOMAIN		0x08
+#define EFC_LOG_ELS		0x10
+#define EFC_LOG_DOMAIN_SM	0x20
+#define EFC_LOG_SM		0x40
+
+/* efc library port structure */
+struct efc {
+	void			*base;
+	struct pci_dev		*pcidev;
+	u64			req_wwpn;
+	u64			req_wwnn;
+
+	u64			def_wwpn;
+	u64			def_wwnn;
+	u64			max_xfer_size;
+	u32			nodes_count;
+	mempool_t		*node_pool;
+	struct dma_pool		*node_dma_pool;
+
+	u32			link_status;
+
+	/* vport */
+	struct list_head	vport_list;
+	/* lock to protect the vport list*/
+	spinlock_t		vport_lock;
+
+	struct libefc_function_template tt;
+	/* lock to protect the discovery library */
+	spinlock_t		lock;
+
+	bool			enable_ini;
+	bool			enable_tgt;
+
+	u32			log_level;
+
+	struct efc_domain	*domain;
+	void (*domain_free_cb)(struct efc *efc, void *arg);
+	void			*domain_free_cb_arg;
+
+	u64			tgt_rscn_delay_msec;
+	u64			tgt_rscn_period_msec;
+
+	bool			external_loopback;
+	u32			nodedb_mask;
+};
+
+/*
+ * EFC library registration
+ * **********************************/
+int efcport_init(struct efc *efc);
+void efcport_destroy(struct efc *efc);
+/*
+ * EFC Domain
+ * **********************************/
+int efc_domain_cb(void *arg, int event, void *data);
+void efc_domain_force_free(struct efc_domain *domain);
+void
+efc_register_domain_free_cb(struct efc *efc,
+			    void (*callback)(struct efc *efc, void *arg),
+			    void *arg);
+
+/*
+ * EFC Local port
+ * **********************************/
+int efc_lport_cb(void *arg, int event, void *data);
+struct efc_vport_spec *efc_vport_create_spec(struct efc *efc, u64 wwnn,
+			u64 wwpn, u32 fc_id, bool enable_ini, bool enable_tgt,
+			void *tgt_data, void *ini_data);
+int efc_sport_vport_new(struct efc_domain *domain, u64 wwpn,
+			u64 wwnn, u32 fc_id, bool ini, bool tgt,
+			void *tgt_data, void *ini_data);
+int efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
+			u64 wwpn, u64 wwnn);
+
+void efc_vport_del_all(struct efc *efc);
+
+struct efc_sli_port *efc_sport_find(struct efc_domain *domain, u32 d_id);
+
+/*
+ * EFC Node
+ * **********************************/
+int efc_remote_node_cb(void *arg, int event, void *data);
+u64 efc_node_get_wwnn(struct efc_node *node);
+u64 efc_node_get_wwpn(struct efc_node *node);
+struct efc_node *efc_node_find(struct efc_sli_port *sport, u32 id);
+void efc_node_fcid_display(u32 fc_id, char *buffer, u32 buf_len);
+
+void efc_node_post_els_resp(struct efc_node *node, u32 evt, void *arg);
+void efc_node_post_shutdown(struct efc_node *node, u32 evt, void *arg);
+/*
+ * EFC FCP/ELS/CT interface
+ * **********************************/
+int efc_node_recv_abts_frame(struct efc *efc,
+			     struct efc_node *node,
+			     struct efc_hw_sequence *seq);
+void efc_node_recv_els_frame(struct efc_node *node, struct efc_hw_sequence *s);
+int efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
+
+void efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
+
+void efc_node_recv_ct_frame(struct efc_node *node, struct efc_hw_sequence *seq);
+void efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
+
+/*
+ * EFC SCSI INTERACTION LAYER
+ * **********************************/
+void efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node);
+void efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node);
+void efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node);
+
+#endif /* __EFCLIB_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (8 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 12:50   ` Hannes Reinecke
  2020-04-15 17:50   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 11/31] elx: libefc: SLI and FC PORT " James Smart
                   ` (20 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC Domain registration, allocation and deallocation sequence
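
For reference, the normal bring-up flow implemented by the state machine
below is roughly:

  __efc_domain_init -> __efc_domain_wait_alloc -> __efc_domain_allocated
    -> __efc_domain_wait_attach -> __efc_domain_ready

with __efc_domain_wait_sports_free, __efc_domain_wait_shutdown and
__efc_domain_wait_domain_lost covering the teardown and lost-domain paths.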

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Acquire efc->lock in efc_domain_cb to protect all the domain state
    transitions.
  Removed efc_assert and used WARN_ON.
  Note: Re: Locking:
    efc->lock is a global per-port lock which is used to synchronize and
    serialize all state machine event processing. As there is a
    single EQ, all events are serialized. This lock protects the
    sport list, sport, node list, node, and vport list. All the libefc
    APIs called by the driver will take this lock internally.
 Note: Re: "It would even simplify the code, as several cases can be
      collapsed into one ..."
    The hardware events cannot be collapsed, as each hardware event maps
    to a distinct state machine event. The code as written is more
    readable than a mapping array in this case.
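  Note: Illustrative sketch only (not part of the patch), simplified from
    efc_domain_cb() below, showing the serialization pattern described
    above:

	spin_lock_irqsave(&efc->lock, flags);
	/* dispatch the event to the domain state machine */
	efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
	spin_unlock_irqrestore(&efc->lock, flags);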
---
 drivers/scsi/elx/libefc/efc_domain.c | 1109 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_domain.h |   52 ++
 2 files changed, 1161 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h

diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c
new file mode 100644
index 000000000000..4d16e9742e86
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.c
@@ -0,0 +1,1109 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * domain_sm Domain State Machine: States
+ */
+
+#include "efc.h"
+
+/* Accept domain callback events from the user driver */
+int
+efc_domain_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_domain *domain = NULL;
+	int rc = 0;
+	unsigned long flags = 0;
+
+	if (event != EFC_HW_DOMAIN_FOUND)
+		domain = data;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	switch (event) {
+	case EFC_HW_DOMAIN_FOUND: {
+		u64 fcf_wwn = 0;
+		struct efc_domain_record *drec = data;
+
+		/* extract the fcf_wwn */
+		fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));
+
+		efc_log_debug(efc, "Domain allocated: wwn %016llX\n", fcf_wwn);
+		/*
+		 * lookup domain, or allocate a new one
+		 * if one doesn't exist already
+		 */
+		domain = efc->domain;
+		if (!domain) {
+			domain = efc_domain_alloc(efc, fcf_wwn);
+			if (!domain) {
+				efc_log_err(efc, "efc_domain_alloc() failed\n");
+				rc = -1;
+				break;
+			}
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+		}
+
+		if (fcf_wwn != domain->fcf_wwn) {
+			efc_log_err(efc, "evt: FOUND for existing domain\n");
+			efc_log_err(efc, "wwn:%016llX domain wwn:%016llX\n",
+				    fcf_wwn, domain->fcf_wwn);
+		}
+
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
+		break;
+	}
+
+	case EFC_HW_DOMAIN_LOST:
+		domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n");
+		efc->tt.domain_hold_frames(efc, domain);
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL,
+				      NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n");
+		efc_domain_post_event(domain,
+				      EFC_EVT_DOMAIN_ATTACH_FAIL, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL);
+		break;
+
+	default:
+		efc_log_warn(efc, "unsupported event %#x\n", event);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	if (efc->domain && domain->req_accept_frames) {
+		domain->req_accept_frames = false;
+		efc->tt.domain_accept_frames(efc, domain);
+	}
+
+	return rc;
+}
+
+struct efc_domain *
+efc_domain_alloc(struct efc *efc, u64 fcf_wwn)
+{
+	struct efc_domain *domain;
+
+	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
+	if (domain) {
+		domain->efc = efc;
+		domain->drvsm.app = domain;
+
+		xa_init(&domain->lookup);
+
+		INIT_LIST_HEAD(&domain->sport_list);
+		domain->fcf_wwn = fcf_wwn;
+		efc_log_debug(efc, "Domain allocated: wwn %016llX\n",
+			      domain->fcf_wwn);
+		efc->domain = domain;
+	} else {
+		efc_log_err(efc, "domain allocation failed\n");
+	}
+
+	return domain;
+}
+
+void
+efc_domain_free(struct efc_domain *domain)
+{
+	struct efc *efc;
+
+	efc = domain->efc;
+
+	/* Hold frames to clear the domain pointer from the xport lookup */
+	efc->tt.domain_hold_frames(efc, domain);
+
+	efc_log_debug(efc, "Domain free: wwn %016llX\n",
+		      domain->fcf_wwn);
+
+	xa_destroy(&domain->lookup);
+	efc->domain = NULL;
+
+	if (efc->domain_free_cb)
+		(*efc->domain_free_cb)(efc, efc->domain_free_cb_arg);
+
+	kfree(domain);
+}
+
+/* Free memory resources of a domain object */
+void
+efc_domain_force_free(struct efc_domain *domain)
+{
+	struct efc_sli_port *sport;
+	struct efc_sli_port *next;
+	struct efc *efc = domain->efc;
+
+	/* Shutdown domain sm */
+	efc_sm_disable(&domain->drvsm);
+
+	list_for_each_entry_safe(sport, next, &domain->sport_list, list_entry) {
+		efc_sport_force_free(sport);
+	}
+
+	efc->tt.hw_domain_force_free(efc, domain);
+	efc_domain_free(domain);
+}
+
+/* Register a callback to be called when the domain is freed */
+void
+efc_register_domain_free_cb(struct efc *efc,
+			    void (*callback)(struct efc *efc, void *arg),
+			    void *arg)
+{
+	efc->domain_free_cb = callback;
+	efc->domain_free_cb_arg = arg;
+	if (!efc->domain && callback)
+		(*callback)(efc, arg);
+}
+
+static void *
+__efc_domain_common(const char *funcname, struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_domain *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/*
+		 * this can arise if an FLOGI fails on the SPORT,
+		 * and the SPORT is shutdown
+		 */
+		break;
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+static void *
+__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_domain *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+		break;
+	case EFC_EVT_DOMAIN_FOUND:
+		/* save drec, mark domain_found_pending */
+		memcpy(&domain->pending_drec, arg,
+		       sizeof(domain->pending_drec));
+		domain->domain_found_pending = true;
+		break;
+	case EFC_EVT_DOMAIN_LOST:
+		/* unmark domain_found_pending */
+		domain->domain_found_pending = false;
+		break;
+
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
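+/*
+ * Common boilerplate for the domain state handlers below: recover the
+ * domain and efc pointers from the state machine context and sanity
+ * check them with WARN_ON().
+ */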
+#define std_domain_state_decl(...)\
+	struct efc_domain *domain = NULL;\
+	struct efc *efc = NULL;\
+	\
+	WARN_ON(!ctx || !ctx->app);\
+	domain = ctx->app;\
+	WARN_ON(!domain->efc);\
+	efc = domain->efc
+
+void *
+__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		domain->attached = false;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND: {
+		u32	i;
+		struct efc_domain_record *drec = arg;
+		struct efc_sli_port *sport;
+
+		u64	my_wwnn = efc->req_wwnn;
+		u64	my_wwpn = efc->req_wwpn;
+		__be64		be_wwpn;
+
+		if (my_wwpn == 0 || my_wwnn == 0) {
+			efc_log_debug(efc,
+				"using default hardware WWN configuration\n");
+			my_wwpn = efc->def_wwpn;
+			my_wwnn = efc->def_wwnn;
+		}
+
+		efc_log_debug(efc,
+			"Creating base sport using WWPN %016llX WWNN %016llX\n",
+			my_wwpn, my_wwnn);
+
+		/* Allocate a sport and transition to __efc_sport_allocated */
+		sport = efc_sport_alloc(domain, my_wwpn, my_wwnn, U32_MAX,
+					efc->enable_ini, efc->enable_tgt);
+
+		if (!sport) {
+			efc_log_err(efc, "efc_sport_alloc() failed\n");
+			break;
+		}
+		efc_sm_transition(&sport->sm, __efc_sport_allocated, NULL);
+
+		be_wwpn = cpu_to_be64(sport->wwpn);
+
+		/* allocate struct efc_sli_port object for local port
+		 * Note: drec->fc_id is ALPA from read_topology only if loop
+		 */
+		if (efc->tt.hw_port_alloc(efc, sport, NULL,
+					  (u8 *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			efc_sport_free(sport);
+			break;
+		}
+
+		domain->is_loop = drec->is_loop;
+
+		/*
+		 * If the loop position map includes ALPA == 0,
+		 * then we are in a public loop (NL_PORT)
+		 * Note that the first element of the loopmap[]
+		 * contains the count of elements, and if
+		 * ALPA == 0 is present, it will occupy the first
+		 * location after the count.
+		 */
+		domain->is_nlport = drec->map.loop[1] == 0x00;
+
+		if (!domain->is_loop) {
+			/* Initiate HW domain alloc */
+			if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
+				efc_log_err(efc,
+					    "Failed to initiate HW domain allocation\n");
+				break;
+			}
+			efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+			break;
+		}
+
+		efc_log_debug(efc, "%s fc_id=%#x speed=%d\n",
+			      drec->is_loop ?
+			      (domain->is_nlport ?
+			      "public-loop" : "loop") : "other",
+			      drec->fc_id, drec->speed);
+
+		sport->fc_id = drec->fc_id;
+		sport->topology = EFC_SPORT_TOPOLOGY_LOOP;
+		snprintf(sport->display_name, sizeof(sport->display_name),
+			 "s%06x", drec->fc_id);
+
+		if (efc->enable_ini) {
+			u32 count = drec->map.loop[0];
+
+			efc_log_debug(efc, "%d position map entries\n",
+				      count);
+			for (i = 1; i <= count; i++) {
+				if (drec->map.loop[i] != drec->fc_id) {
+					struct efc_node *node;
+
+					efc_log_debug(efc, "%#x -> %#x\n",
+						      drec->fc_id,
+						      drec->map.loop[i]);
+					node = efc_node_alloc(sport,
+							      drec->map.loop[i],
+							      false, true);
+					if (!node) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						break;
+					}
+					efc_node_transition(node,
+							    __efc_d_wait_loop,
+							    NULL);
+				}
+			}
+		}
+
+		/* Initiate HW domain alloc */
+		if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
+			efc_log_err(efc,
+				    "Failed to initiate HW domain allocation\n");
+			break;
+		}
+		efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+		break;
+	}
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for the domain allocation to complete */
+void *
+__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport;
+
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK: {
+		struct fc_els_flogi  *sp;
+
+		sport = domain->sport;
+		if (WARN_ON(!sport))
+			return NULL;
+
+		sp = (struct fc_els_flogi  *)sport->service_params;
+
+		/* Save the domain service parameters */
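+		/* Note: the +4 offset skips the 4-byte ELS command word at
+		 * the start of the FLOGI-payload layout of service_params[].
+		 */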
+		memcpy(domain->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+		memcpy(sport->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+
+		/*
+		 * Update the sport's service parameters,
+		 * user might have specified non-default names
+		 */
+		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
+
+		/*
+		 * Take the loop topology path,
+		 * unless we are an NL_PORT (public loop)
+		 */
+		if (domain->is_loop && !domain->is_nlport) {
+			/*
+			 * For loop, we already have our FC ID
+			 * and don't need fabric login.
+			 * Transition to the allocated state and
+			 * post an event to attach to
+			 * the domain. Note that this breaks the
+			 * normal action/transition
+			 * pattern here to avoid a race with the
+			 * domain attach callback.
+			 */
+			/* sm: is_loop / domain_attach */
+			efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+			__efc_domain_attach_internal(domain, sport->fc_id);
+			break;
+		}
+		{
+			struct efc_node *node;
+
+			/* alloc fabric node, send FLOGI */
+			node = efc_node_find(sport, FC_FID_FLOGI);
+			if (node) {
+				efc_log_err(efc,
+					    "Fabric Controller node already exists\n");
+				break;
+			}
+			node = efc_node_alloc(sport, FC_FID_FLOGI,
+					      false, false);
+			if (!node) {
+				efc_log_err(efc,
+					    "Error: efc_node_alloc() failed\n");
+			} else {
+				efc_node_transition(node,
+						    __efc_fabric_init, NULL);
+			}
+			/* Accept frames */
+			domain->req_accept_frames = true;
+		}
+		/* sm: / start fabric logins */
+		efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+		efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;",
+			    efc_sm_event_name(evt));
+		efc_log_err(efc, "shutting down domain\n");
+		domain->req_domain_free = true;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw_domain_alloc()\n",
+			efc_sm_event_name(evt));
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for the domain attach request */
+void *
+__efc_domain_allocated(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	int rc = 0;
+
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		u32 fc_id;
+
+		if (WARN_ON(!arg))
+			return NULL;
+
+		fc_id = *((u32 *)arg);
+		efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n",
+			      fc_id);
+		/* Update sport lookup */
+		rc = xa_err(xa_store(&domain->lookup, fc_id, domain->sport,
+				     GFP_ATOMIC));
+		if (rc) {
+			efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
+			return NULL;
+		}
+
+		/* Update display name for the sport */
+		efc_node_fcid_display(fc_id, domain->sport->display_name,
+				      sizeof(domain->sport->display_name));
+
+		/* Issue domain attach call */
+		rc = efc->tt.hw_domain_attach(efc, domain, fc_id);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_attach failed: %d\n",
+				    rc);
+			return NULL;
+		}
+		/* sm: / domain_attach */
+		efc_sm_transition(ctx, __efc_domain_wait_attach, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST: {
+		int rc;
+
+		efc_log_debug(efc,
+			      "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n",
+			efc_sm_event_name(evt));
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to
+			 * wait state and send shutdown to each
+			 * sport
+			 */
+			struct efc_sli_port	*sport = NULL;
+			struct efc_sli_port	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "hw_domain_free failed: %d\n", rc);
+			}
+		}
+
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for the HW domain attach to complete */
+void *
+__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_node *node = NULL;
+		struct efc_node *next_node = NULL;
+		struct efc_sli_port *sport;
+		struct efc_sli_port *next_sport;
+
+		/*
+		 * Set domain notify pending state to avoid
+		 * duplicate domain event post
+		 */
+		domain->domain_notify_pend = true;
+
+		/* Mark as attached */
+		domain->attached = true;
+
+		/* Register with SCSI API */
+		efc->tt.new_domain(efc, domain);
+
+		/* Transition to ready */
+		/* sm: / forward event to all sports and nodes */
+		efc_sm_transition(ctx, __efc_domain_ready, NULL);
+
+		/* We have an FCFI, so we can accept frames */
+		domain->req_accept_frames = true;
+
+		/*
+		 * Notify all nodes that the domain attach request
+		 * has completed
+		 * Note: sport will have already received notification
+		 * of sport attached as a result of the HW's port attach.
+		 */
+		list_for_each_entry_safe(sport, next_sport,
+					 &domain->sport_list, list_entry) {
+			list_for_each_entry_safe(node, next_node,
+						 &sport->node_list,
+						 list_entry) {
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		}
+		domain->domain_notify_pend = false;
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw attach\n",
+			      efc_sm_event_name(evt));
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		/*
+		 * Domain lost while waiting for an attach to complete,
+		 * go to a state that waits for the domain attach to
+		 * complete, then handle domain lost
+		 */
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH:
+		/*
+		 * In P2P we can get an attach request from
+		 * the other FLOGI path, so drop this one
+		 */
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Ready state */
+void *
+__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* start any pending vports */
+		if (efc_vport_start(domain)) {
+			efc_log_debug(domain->efc,
+				      "efc_vport_start didn't start vports\n");
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_LOST: {
+		int rc;
+
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to wait state
+			 * and send shutdown to each sport
+			 */
+			struct efc_sli_port	*sport = NULL;
+			struct efc_sli_port	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "hw_domain_free failed: %d\n", rc);
+			}
+		}
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		/* can happen during p2p */
+		u32 fc_id;
+
+		fc_id = *((u32 *)arg);
+
+		/* Assume that the domain is attached */
+		WARN_ON(!domain->attached);
+
+		/*
+		 * Verify that the requested FC_ID
+		 * is the same as the one we're working with
+		 */
+		WARN_ON(domain->sport->fc_id != fc_id);
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Domain state machine: Wait for nodes to free prior to the domain shutdown */
+void *
+__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			      void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		int rc;
+
+		/* sm: / efc_hw_domain_free */
+		efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL);
+
+		/* Request efc_hw_domain_free and wait for completion */
+		rc = efc->tt.hw_domain_free(efc, domain);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_free() failed: %d\n",
+				    rc);
+		}
+		break;
+	}
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+ /* Domain state machine: Complete the domain shutdown */
+void *
+__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_FREE_OK: {
+		efc->tt.del_domain(efc, domain);
+
+		/* sm: / domain_free */
+		if (domain->domain_found_pending) {
+			/*
+			 * save fcf_wwn and drec from this domain,
+			 * free current domain and allocate
+			 * a new one with the same fcf_wwn
+			 * could use a SLI-4 "re-register VPI"
+			 * operation here?
+			 */
+			u64 fcf_wwn = domain->fcf_wwn;
+			struct efc_domain_record drec = domain->pending_drec;
+
+			efc_log_debug(efc, "Reallocating domain\n");
+			domain->req_domain_free = true;
+			domain = efc_domain_alloc(efc, fcf_wwn);
+
+			if (!domain) {
+				efc_log_err(efc,
+					    "efc_domain_alloc() failed\n");
+				return NULL;
+			}
+			/*
+			 * got a new domain; at this point,
+			 * there are at least two domains
+			 * once the req_domain_free flag is processed,
+			 * the associated domain will be removed.
+			 */
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+			efc_sm_post_event(&domain->drvsm,
+					  EFC_EVT_DOMAIN_FOUND, &drec);
+		} else {
+			domain->req_domain_free = true;
+		}
+		break;
+	}
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/*
+ * Domain state machine: Wait for the domain alloc/attach completion
+ * after receiving a domain lost.
+ */
+void *
+__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK:
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		int rc;
+
+		if (!list_empty(&domain->sport_list)) {
+			/*
+			 * if there are sports, transition to
+			 * wait state and send shutdown to each sport
+			 */
+			struct efc_sli_port	*sport = NULL;
+			struct efc_sli_port	*sport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
+					  NULL);
+			list_for_each_entry_safe(sport, sport_next,
+						 &domain->sport_list,
+						 list_entry) {
+				efc_sm_post_event(&sport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no sports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			rc = efc->tt.hw_domain_free(efc, domain);
+			if (rc) {
+				efc_log_err(efc,
+					    "efc_hw_domain_free() failed: %d\n",
+									rc);
+			}
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_err(efc, "[domain] %-20s: failed\n",
+			    efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void
+__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id)
+{
+	memcpy(domain->dma.virt,
+	       ((u8 *)domain->flogi_service_params) + 4,
+	       sizeof(struct fc_els_flogi) - 4);
+	(void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH,
+				 &s_id);
+}
+
+void
+efc_domain_attach(struct efc_domain *domain, u32 s_id)
+{
+	__efc_domain_attach_internal(domain, s_id);
+}
+
+int
+efc_domain_post_event(struct efc_domain *domain,
+		      enum efc_sm_event event, void *arg)
+{
+	int rc;
+	bool req_domain_free;
+
+	rc = efc_sm_post_event(&domain->drvsm, event, arg);
+
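+	/*
+	 * State handlers never free the domain directly; they set
+	 * req_domain_free instead, so the object stays valid until the event
+	 * has been fully processed and is freed here.
+	 */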
+	req_domain_free = domain->req_domain_free;
+	domain->req_domain_free = false;
+
+	if (req_domain_free)
+		efc_domain_free(domain);
+
+	return rc;
+}
+
+/* Dispatch unsolicited FC frame */
+int
+efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
+{
+	struct efc_domain *domain = (struct efc_domain *)arg;
+	struct efc *efc = domain->efc;
+	struct fc_frame_header *hdr;
+	u32 s_id;
+	u32 d_id;
+	struct efc_node *node = NULL;
+	struct efc_sli_port *sport = NULL;
+	unsigned long flags = 0;
+
+	if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) {
+		efc_log_err(efc, "Sequence header or payload is null\n");
+		return EFC_HW_SEQ_FREE;
+	}
+
+	hdr = seq->header->dma.virt;
+
+	/* extract the s_id and d_id */
+	s_id = ntoh24(hdr->fh_s_id);
+	d_id = ntoh24(hdr->fh_d_id);
+
+	spin_lock_irqsave(&efc->lock, flags);
+	sport = domain->sport;
+	if (!sport) {
+		efc_log_err(efc,
+			    "Drop frame, sport for FC ID 0x%06x is NULL\n", d_id);
+		spin_unlock_irqrestore(&efc->lock, flags);
+		return EFC_HW_SEQ_FREE;
+	}
+
+	if (sport->fc_id != d_id) {
+		/*
+		 * Not destined for the physical port; look up the sport
+		 * associated with the NPIV port. The lookup itself takes
+		 * no lock.
+		 */
+		sport = efc_sport_find(domain, d_id);
+		if (!sport) {
+			if (hdr->fh_type == FC_TYPE_FCP) {
+				/* Drop frame */
+				efc_log_warn(efc,
+					     "unsolicited FCP frame with invalid d_id x%x\n",
+					d_id);
+				spin_unlock_irqrestore(&efc->lock, flags);
+				return EFC_HW_SEQ_FREE;
+			}
+			/* p2p will use this case */
+			sport = domain->sport;
+		}
+	}
+
+	/* Lookup the node given the remote s_id */
+	node = efc_node_find(sport, s_id);
+
+	/* If not found, then create a new node */
+	if (!node) {
+		/* If this is solicited data or control based on R_CTL and
+		 * there is no node context,
+		 * then we can drop the frame
+		 */
+		if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) ||
+			(hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) {
+			efc_log_debug(efc,
+				      "solicited data/ctrl frame without node, drop\n");
+			spin_unlock_irqrestore(&efc->lock, flags);
+			return EFC_HW_SEQ_FREE;
+		}
+
+		node = efc_node_alloc(sport, s_id, false, false);
+		if (!node) {
+			efc_log_err(efc, "efc_node_alloc() failed\n");
+			spin_unlock_irqrestore(&efc->lock, flags);
+			return EFC_HW_SEQ_FREE;
+		}
+		/* don't send PLOGI on efc_d_init entry */
+		efc_node_init_device(node, false);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	if (node->hold_frames || !list_empty(&node->pend_frames)) {
+		/* add frame to node's pending list */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &node->pend_frames);
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		return EFC_HW_SEQ_HOLD;
+	}
+
+	/* now dispatch frame to the node frame handler */
+	efc_node_dispatch_frame(node, seq);
+	return EFC_HW_SEQ_FREE;
+}
+
+void
+efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
+{
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	u32 port_id;
+	struct efc_node *node = (struct efc_node *)arg;
+	struct efc *efc = node->efc;
+
+	port_id = ntoh24(hdr->fh_s_id);
+
+	if (WARN_ON(port_id != node->rnode.fc_id))
+		return;
+
+	if ((!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) ||
+	    !(ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)) {
+		node_printf(node,
+		    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
+		    cpu_to_be32(((u32 *)hdr)[0]),
+		    cpu_to_be32(((u32 *)hdr)[1]),
+		    cpu_to_be32(((u32 *)hdr)[2]),
+		    cpu_to_be32(((u32 *)hdr)[3]),
+		    cpu_to_be32(((u32 *)hdr)[4]),
+		    cpu_to_be32(((u32 *)hdr)[5]));
+		return;
+	}
+
+	switch (hdr->fh_r_ctl) {
+	case FC_RCTL_ELS_REQ:
+	case FC_RCTL_ELS_REP:
+		efc_node_recv_els_frame(node, seq);
+		break;
+
+	case FC_RCTL_BA_ABTS:
+	case FC_RCTL_BA_ACC:
+	case FC_RCTL_BA_RJT:
+	case FC_RCTL_BA_NOP:
+		efc->tt.recv_abts_frame(efc, node, seq);
+		break;
+
+	case FC_RCTL_DD_UNSOL_CMD:
+	case FC_RCTL_DD_UNSOL_CTL:
+		switch (hdr->fh_type) {
+		case FC_TYPE_FCP:
+			if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) {
+				if (!node->fcp_enabled) {
+					efc_node_recv_fcp_cmd(node, seq);
+					break;
+				}
+				/* Dispatch FCP command */
+				efc->tt.dispatch_fcp_cmd(node, seq);
+			} else if ((hdr->fh_r_ctl & 0xf) ==
+							FC_RCTL_DD_SOL_DATA) {
+				node_printf(node,
+				    "solicited data received. Dropping IO\n");
+			}
+			break;
+
+		case FC_TYPE_CT:
+			efc_node_recv_ct_frame(node, seq);
+			break;
+		default:
+			break;
+		}
+		break;
+	default:
+		efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl);
+	}
+}
diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h
new file mode 100644
index 000000000000..d318dda5935c
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declare driver's domain handler exported interface
+ */
+
+#ifndef __EFCT_DOMAIN_H__
+#define __EFCT_DOMAIN_H__
+
+extern struct efc_domain *
+efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn);
+extern void
+efc_domain_free(struct efc_domain *domain);
+
+extern void *
+__efc_domain_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_allocated(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+
+extern void
+efc_domain_attach(struct efc_domain *domain, u32 s_id);
+extern int
+efc_domain_post_event(struct efc_domain *domain,
+		      enum efc_sm_event event, void *arg);
+extern void
+__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id);
+
+#endif /* __EFCT_DOMAIN_H__ */
-- 
2.16.4



* [PATCH v3 11/31] elx: libefc: SLI and FC PORT state machine interfaces
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (9 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 15:38   ` Hannes Reinecke
  2020-04-15 18:04   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 12/31] elx: libefc: Remote node " James Smart
                   ` (19 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- SLI and FC port (aka n_port_id) registration, allocation and
  deallocation.
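
For reviewers new to these interfaces, a rough usage sketch of the sport
allocation/attach pair follows. This is not part of the patch; the
example_* function is hypothetical, and it assumes the caller holds
efc->lock as the library expects (allocations are GFP_ATOMIC):

	/* Hypothetical consumer: bring up a target-only physical sport */
	static int example_bring_up_phys_sport(struct efc_domain *domain,
					       u64 wwpn, u64 wwnn, u32 fc_id)
	{
		struct efc_sli_port *sport;

		/* allocate the sport object: initiator off, target on */
		sport = efc_sport_alloc(domain, wwpn, wwnn, fc_id,
					false, true);
		if (!sport)
			return EFC_FAIL;

		/* register the FC_ID with the hardware and domain lookup */
		return efc_sport_attach(sport, fc_id);
	}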

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Acquire efc->lock in efc_lport_cb to protect all the port state
    transitions.
  Add vport_lock to protect vport_list access.
  Fixed vport_sport allocation race.
  Reworked the vport code.
---
 drivers/scsi/elx/libefc/efc_sport.c | 846 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_sport.h |  52 +++
 2 files changed, 898 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sport.h

diff --git a/drivers/scsi/elx/libefc/efc_sport.c b/drivers/scsi/elx/libefc/efc_sport.c
new file mode 100644
index 000000000000..99f5213e0902
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sport.c
@@ -0,0 +1,846 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Details SLI port (sport) functions.
+ */
+
+#include "efc.h"
+
+/* HW sport callback events from the user driver */
+int
+efc_lport_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_sli_port *sport = data;
+	enum efc_sm_event sm_event = EFC_EVT_LAST;
+	unsigned long flags = 0;
+
+	switch (event) {
+	case EFC_HW_PORT_ALLOC_OK:
+		sm_event = EFC_EVT_SPORT_ALLOC_OK;
+		break;
+	case EFC_HW_PORT_ALLOC_FAIL:
+		sm_event = EFC_EVT_SPORT_ALLOC_FAIL;
+		break;
+	case EFC_HW_PORT_ATTACH_OK:
+		sm_event = EFC_EVT_SPORT_ATTACH_OK;
+		break;
+	case EFC_HW_PORT_ATTACH_FAIL:
+		sm_event = EFC_EVT_SPORT_ATTACH_FAIL;
+		break;
+	case EFC_HW_PORT_FREE_OK:
+		sm_event = EFC_EVT_SPORT_FREE_OK;
+		break;
+	case EFC_HW_PORT_FREE_FAIL:
+		sm_event = EFC_EVT_SPORT_FREE_FAIL;
+		break;
+	default:
+		efc_log_err(efc, "unknown event %#x\n", event);
+		return EFC_FAIL;
+	}
+
+	efc_log_debug(efc, "sport event: %s\n", efc_sm_event_name(sm_event));
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_sm_post_event(&sport->sm, sm_event, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+struct efc_sli_port *
+efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt)
+{
+	struct efc_sli_port *sport;
+
+	if (domain->efc->enable_ini)
+		enable_ini = 0;
+
+	/* Return a failure if this sport has already been allocated */
+	if (wwpn != 0) {
+		sport = efc_sport_find_wwn(domain, wwnn, wwpn);
+		if (sport) {
+			efc_log_err(domain->efc,
+				    "Failed: SPORT %016llX %016llX already allocated\n",
+				    wwnn, wwpn);
+			return NULL;
+		}
+	}
+
+	sport = kzalloc(sizeof(*sport), GFP_ATOMIC);
+	if (!sport)
+		return sport;
+
+	sport->efc = domain->efc;
+	snprintf(sport->display_name, sizeof(sport->display_name), "------");
+	sport->domain = domain;
+	xa_init(&sport->lookup);
+	sport->instance_index = domain->sport_instance_count++;
+	INIT_LIST_HEAD(&sport->node_list);
+	sport->sm.app = sport;
+	sport->enable_ini = enable_ini;
+	sport->enable_tgt = enable_tgt;
+	sport->enable_rscn = (sport->enable_ini ||
+			(sport->enable_tgt && enable_target_rscn(sport->efc)));
+
+	/* Copy service parameters from domain */
+	memcpy(sport->service_params, domain->service_params,
+		sizeof(struct fc_els_flogi));
+
+	/* Update requested fc_id */
+	sport->fc_id = fc_id;
+
+	/* Update the sport's service parameters for the new wwn's */
+	sport->wwpn = wwpn;
+	sport->wwnn = wwnn;
+	snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
+
+	/*
+	 * if this is the "first" sport of the domain,
+	 * then make it the "phys" sport
+	 */
+	if (list_empty(&domain->sport_list))
+		domain->sport = sport;
+
+	INIT_LIST_HEAD(&sport->list_entry);
+	list_add_tail(&sport->list_entry, &domain->sport_list);
+
+	efc_log_debug(domain->efc, "[%s] allocate sport\n",
+		      sport->display_name);
+
+	return sport;
+}
+
+void
+efc_sport_free(struct efc_sli_port *sport)
+{
+	struct efc_domain *domain;
+
+	if (!sport)
+		return;
+
+	domain = sport->domain;
+	efc_log_debug(domain->efc, "[%s] free sport\n", sport->display_name);
+	list_del(&sport->list_entry);
+	/*
+	 * if this is the physical sport,
+	 * then clear it out of the domain
+	 */
+	if (sport == domain->sport)
+		domain->sport = NULL;
+
+	xa_destroy(&sport->lookup);
+	xa_erase(&domain->lookup, sport->fc_id);
+
+	if (list_empty(&domain->sport_list))
+		efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE,
+				      NULL);
+
+	kfree(sport);
+}
+
+void
+efc_sport_force_free(struct efc_sli_port *sport)
+{
+	struct efc_node *node;
+	struct efc_node *next;
+
+	/* shutdown sm processing */
+	efc_sm_disable(&sport->sm);
+
+	list_for_each_entry_safe(node, next, &sport->node_list, list_entry) {
+		efc_node_force_free(node);
+	}
+
+	efc_sport_free(sport);
+}
+
+/* Find a SLI port object, given an FC_ID */
+struct efc_sli_port *
+efc_sport_find(struct efc_domain *domain, u32 d_id)
+{
+	return xa_load(&domain->lookup, d_id);
+}
+
+/* Find a SLI port, given the WWNN and WWPN */
+struct efc_sli_port *
+efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
+{
+	struct efc_sli_port *sport = NULL;
+
+	list_for_each_entry(sport, &domain->sport_list, list_entry) {
+		if (sport->wwnn == wwnn && sport->wwpn == wwpn)
+			return sport;
+	}
+	return NULL;
+}
+
+/* External call to request an attach for a sport, given an FC_ID */
+int
+efc_sport_attach(struct efc_sli_port *sport, u32 fc_id)
+{
+	int rc;
+	struct efc_node *node;
+	struct efc *efc = sport->efc;
+
+	/* Set our lookup */
+	rc = xa_err(xa_store(&sport->domain->lookup, fc_id, sport, GFP_ATOMIC));
+	if (rc) {
+		efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
+		return rc;
+	}
+
+	/* Update our display_name */
+	efc_node_fcid_display(fc_id, sport->display_name,
+			      sizeof(sport->display_name));
+
+	list_for_each_entry(node, &sport->node_list, list_entry) {
+		efc_node_update_display_name(node);
+	}
+
+	efc_log_debug(sport->efc, "[%s] attach sport: fc_id x%06x\n",
+		      sport->display_name, fc_id);
+
+	rc = efc->tt.hw_port_attach(efc, sport, fc_id);
+	if (rc != EFC_HW_RTN_SUCCESS) {
+		efc_log_err(sport->efc,
+			    "efc_hw_port_attach failed: %d\n", rc);
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+static void
+efc_sport_shutdown(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	struct efc_node *node;
+	struct efc_node *node_next;
+
+	list_for_each_entry_safe(node, node_next,
+					&sport->node_list, list_entry) {
+		if (!(node->rnode.fc_id == FC_FID_FLOGI && sport->is_vport)) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+			continue;
+		}
+
+		/*
+		 * If this is a vport, logout of the fabric
+		 * controller so that it deletes the vport
+		 * on the switch.
+		 */
+		/* if link is down, don't send logo */
+		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		} else {
+			efc_log_debug(efc,
+				      "[%s] sport shutdown vport, sending logo to node\n",
+				      node->display_name);
+
+			if (efc->tt.els_send(efc, node, ELS_LOGO,
+					     EFC_FC_FLOGI_TIMEOUT_SEC,
+					EFC_FC_ELS_DEFAULT_RETRIES)) {
+				/* sent LOGO, wait for response */
+				efc_node_transition(node,
+						    __efc_d_wait_logo_rsp,
+						     NULL);
+				continue;
+			}
+
+			/*
+			 * failed to send LOGO,
+			 * go ahead and cleanup node anyways
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		}
+	}
+}
+
+/* Clear the sport reference in the vport specification */
+static void
+efc_vport_link_down(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	struct efc_vport_spec *vport;
+
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->sport == sport) {
+			vport->sport = NULL;
+			break;
+		}
+	}
+}
+
+static void *
+__efc_sport_common(const char *funcname, struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc_domain *domain = sport->domain;
+	struct efc *efc = sport->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		break;
+	case EFC_EVT_SPORT_ATTACH_OK:
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN: {
+		int node_list_empty;
+
+		/* Flag this sport as shutting down */
+		sport->shutting_down = true;
+
+		if (sport->is_vport)
+			efc_vport_link_down(sport);
+
+		node_list_empty = list_empty(&sport->node_list);
+
+		if (node_list_empty) {
+			/* Remove the sport from the domain's lookup table */
+			xa_erase(&domain->lookup, sport->fc_id);
+			efc_sm_transition(ctx, __efc_sport_wait_port_free,
+					  NULL);
+			if (efc->tt.hw_port_free(efc, sport)) {
+				efc_log_test(sport->efc,
+					     "efc_hw_port_free failed\n");
+				/* Not much we can do, free the sport anyways */
+				efc_sport_free(sport);
+			}
+		} else {
+			/* sm: node list is not empty / shutdown nodes */
+			efc_sm_transition(ctx,
+					  __efc_sport_wait_shutdown, NULL);
+			efc_sport_shutdown(sport);
+		}
+		break;
+	}
+	default:
+		efc_log_test(sport->efc, "[%s] %-20s %-20s not handled\n",
+			     sport->display_name, funcname,
+			     efc_sm_event_name(evt));
+		break;
+	}
+
+	return NULL;
+}
+
+/* SLI port state machine: Physical sport allocated */
+void *
+__efc_sport_allocated(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc_domain *domain = sport->domain;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	/* the physical sport is attached */
+	case EFC_EVT_SPORT_ATTACH_OK:
+		WARN_ON(sport != domain->sport);
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+
+	case EFC_EVT_SPORT_ALLOC_OK:
+		/* ignore */
+		break;
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* SLI port state machine: Handle initial virtual port events */
+void *
+__efc_sport_vport_init(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		__be64 be_wwpn = cpu_to_be64(sport->wwpn);
+
+		if (sport->wwpn == 0)
+			efc_log_debug(efc, "vport: letting f/w select WWN\n");
+
+		if (sport->fc_id != U32_MAX) {
+			efc_log_debug(efc, "vport: hard coding port id: %x\n",
+				      sport->fc_id);
+		}
+
+		efc_sm_transition(ctx, __efc_sport_vport_wait_alloc, NULL);
+		/* If wwpn is zero, then we'll let the f/w select one */
+		if (efc->tt.hw_port_alloc(efc, sport, sport->domain,
+					  sport->wwpn == 0 ? NULL :
+					  (uint8_t *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			break;
+		}
+
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * SLI port state machine:
+ * Wait for the HW SLI port allocation to complete
+ */
+void *
+__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ALLOC_OK: {
+		struct fc_els_flogi *sp;
+		struct efc_node *fabric;
+
+		sp = (struct fc_els_flogi *)sport->service_params;
+		/*
+		 * If we let f/w assign wwn's, then update the sport
+		 * wwn's with those returned by hw
+		 */
+		if (sport->wwnn == 0) {
+			sport->wwnn = be64_to_cpu(sport->sli_wwnn);
+			sport->wwpn = be64_to_cpu(sport->sli_wwpn);
+			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+				 "%016llX", sport->wwpn);
+		}
+
+		/* Update the sport's service parameters */
+		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
+
+		/*
+		 * if sport->fc_id is uninitialized,
+		 * then request that the fabric node use FDISC
+		 * to find an fc_id.
+		 * Otherwise we're restoring vports, or we're in
+		 * fabric emulation mode, so attach the fc_id
+		 */
+		if (sport->fc_id == U32_MAX) {
+			fabric = efc_node_alloc(sport, FC_FID_FLOGI, false,
+						false);
+			if (!fabric) {
+				efc_log_err(efc, "efc_node_alloc() failed\n");
+				return NULL;
+			}
+			efc_node_transition(fabric, __efc_vport_fabric_init,
+					    NULL);
+		} else {
+			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
+				 "%016llX", sport->wwpn);
+			efc_sport_attach(sport, sport->fc_id);
+		}
+		efc_sm_transition(ctx, __efc_sport_vport_allocated, NULL);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/**
+ * SLI port state machine: virtual sport allocated.
+ *
+ * This state is entered after the sport is allocated;
+ * it then waits for a fabric node
+ * FDISC to complete, which requests a sport attach.
+ * The sport attach complete is handled in this state.
+ */
+
+void *
+__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ATTACH_OK: {
+		struct efc_node *node;
+
+		/* Find our fabric node, and forward this event */
+		node = efc_node_find(sport, FC_FID_FLOGI);
+		if (!node) {
+			efc_log_test(efc, "can't find node %06x\n",
+				     FC_FID_FLOGI);
+			break;
+		}
+		/* sm: / forward sport attach to fabric node */
+		efc_node_post_event(node, evt, NULL);
+		efc_sm_transition(ctx, __efc_sport_attached, NULL);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+static void
+efc_vport_update_spec(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	struct efc_vport_spec *vport;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->sport == sport) {
+			vport->wwnn = sport->wwnn;
+			vport->wwpn = sport->wwpn;
+			vport->tgt_data = sport->tgt_data;
+			vport->ini_data = sport->ini_data;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+}
+
+/* State entered after the sport attach has completed */
+void *
+__efc_sport_attached(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		struct efc_node *node;
+
+		efc_log_debug(efc,
+			      "[%s] SPORT attached WWPN %016llX WWNN %016llX\n",
+			      sport->display_name,
+			      sport->wwpn, sport->wwnn);
+
+		list_for_each_entry(node, &sport->node_list, list_entry) {
+			efc_node_update_display_name(node);
+		}
+
+		sport->tgt_id = sport->fc_id;
+
+		efc->tt.new_sport(efc, sport);
+
+		/*
+		 * Update the vport parameters
+		 * (if it's not the physical sport)
+		 */
+		if (sport->is_vport)
+			efc_vport_update_spec(sport);
+		break;
+	}
+
+	case EFC_EVT_EXIT:
+		efc_log_debug(efc,
+			      "[%s] SPORT detached WWPN %016llX WWNN %016llX\n",
+			      sport->display_name,
+			      sport->wwpn, sport->wwnn);
+
+		efc->tt.del_sport(efc, sport);
+		break;
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* SLI port state machine: Wait for the node shutdowns to complete */
+void *
+__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+	struct efc_domain *domain = sport->domain;
+	struct efc *efc = sport->efc;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ALLOC_OK:
+	case EFC_EVT_SPORT_ALLOC_FAIL:
+	case EFC_EVT_SPORT_ATTACH_OK:
+	case EFC_EVT_SPORT_ATTACH_FAIL:
+		/* ignore these events - just wait for the all free event */
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		/*
+		 * Remove the sport from the domain's
+		 * sparse vector lookup table
+		 */
+		xa_erase(&domain->lookup, sport->fc_id);
+		efc_sm_transition(ctx, __efc_sport_wait_port_free, NULL);
+		if (efc->tt.hw_port_free(efc, sport)) {
+			efc_log_err(sport->efc, "efc_hw_port_free failed\n");
+			/* Not much we can do, free the sport anyways */
+			efc_sport_free(sport);
+		}
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+/* SLI port state machine: Wait for the HW's port free to complete */
+void *
+__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_sli_port *sport = ctx->app;
+
+	sport_sm_trace(sport);
+
+	switch (evt) {
+	case EFC_EVT_SPORT_ATTACH_OK:
+		/* Ignore as we are waiting for the free CB */
+		break;
+	case EFC_EVT_SPORT_FREE_OK: {
+		/* All done, free myself */
+		/* sm: / efc_sport_free */
+		efc_sport_free(sport);
+		break;
+	}
+	default:
+		__efc_sport_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+static int
+efc_vport_sport_alloc(struct efc_domain *domain, struct efc_vport_spec *vport)
+{
+	struct efc_sli_port *sport;
+
+	sport = efc_sport_alloc(domain, vport->wwpn,
+				vport->wwnn, vport->fc_id,
+				vport->enable_ini,
+				vport->enable_tgt);
+	vport->sport = sport;
+	if (!sport)
+		return EFC_FAIL;
+
+	sport->is_vport = true;
+	sport->tgt_data = vport->tgt_data;
+	sport->ini_data = vport->ini_data;
+
+	efc_sm_transition(&sport->sm, __efc_sport_vport_init, NULL);
+
+	return EFC_SUCCESS;
+}
+
+/* Use the vport specification to find the associated vports and start them */
+int
+efc_vport_start(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	int rc = EFC_SUCCESS;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (!vport->sport) {
+			if (efc_vport_sport_alloc(domain, vport))
+				rc = EFC_FAIL;
+		}
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+
+	return rc;
+}
+
+/* Allocate a new virtual SLI port */
+int
+efc_sport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
+		    void *ini_data)
+{
+	struct efc *efc = domain->efc;
+	struct efc_vport_spec *vport;
+	int rc = EFC_SUCCESS;
+	unsigned long flags = 0;
+
+	if (ini && domain->efc->enable_ini == 0) {
+		efc_log_debug(efc,
+			     "driver initiator functionality not enabled\n");
+		return EFC_FAIL;
+	}
+
+	if (tgt && domain->efc->enable_tgt == 0) {
+		efc_log_debug(efc,
+			     "driver target functionality not enabled\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Create a vport spec if we need to recreate
+	 * this vport after a link up event
+	 */
+	vport = efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id, ini, tgt,
+					tgt_data, ini_data);
+	if (!vport) {
+		efc_log_err(efc, "failed to create vport object entry\n");
+		return EFC_FAIL;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	rc = efc_vport_sport_alloc(domain, vport);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return rc;
+}
+
+/* Remove a previously-allocated virtual port */
+int
+efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
+		    u64 wwpn, uint64_t wwnn)
+{
+	struct efc_sli_port *sport;
+	int found = 0;
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	/* walk the efc_vport_list and remove from there */
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
+			list_del(&vport->list_entry);
+			kfree(vport);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+
+	if (!domain) {
+		/* No domain means no sport to look for */
+		return EFC_SUCCESS;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	list_for_each_entry(sport, &domain->sport_list, list_entry) {
+		if (sport->wwpn == wwpn && sport->wwnn == wwnn) {
+			found = 1;
+			break;
+		}
+	}
+
+	if (found) {
+		/* Shutdown this SPORT */
+		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+	return EFC_SUCCESS;
+}
+
+/* Force free all saved vports */
+void
+efc_vport_del_all(struct efc *efc)
+{
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		list_del(&vport->list_entry);
+		kfree(vport);
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+}
+
+/**
+ * Create a saved vport entry.
+ *
+ * A saved vport entry is added to the vport list,
+ * which is restored following a link up.
+ * This function is used to allow vports to be created the first time
+ * the link comes up without having to go through the ioctl() API.
+ */
+
+struct efc_vport_spec *
+efc_vport_create_spec(struct efc *efc, uint64_t wwnn, uint64_t wwpn,
+		      u32 fc_id, bool enable_ini,
+		      bool enable_tgt, void *tgt_data, void *ini_data)
+{
+	struct efc_vport_spec *vport;
+	unsigned long flags = 0;
+
+	/*
+	 * walk the efc_vport_list and return failure
+	 * if a valid (non-zero WWPN and WWNN) vport entry
+	 * is already created
+	 */
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if ((wwpn && vport->wwpn == wwpn) &&
+		    (wwnn && vport->wwnn == wwnn)) {
+			efc_log_err(efc,
+				"Failed: VPORT %016llX %016llX already allocated\n",
+				wwnn, wwpn);
+			spin_unlock_irqrestore(&efc->vport_lock, flags);
+			return NULL;
+		}
+	}
+
+	vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
+	if (!vport) {
+		spin_unlock_irqrestore(&efc->vport_lock, flags);
+		return NULL;
+	}
+
+	vport->wwnn = wwnn;
+	vport->wwpn = wwpn;
+	vport->fc_id = fc_id;
+	vport->enable_tgt = enable_tgt;
+	vport->enable_ini = enable_ini;
+	vport->tgt_data = tgt_data;
+	vport->ini_data = ini_data;
+
+	INIT_LIST_HEAD(&vport->list_entry);
+	list_add_tail(&vport->list_entry, &efc->vport_list);
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+	return vport;
+}
diff --git a/drivers/scsi/elx/libefc/efc_sport.h b/drivers/scsi/elx/libefc/efc_sport.h
new file mode 100644
index 000000000000..3269e29c6f57
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sport.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * EFC FC SLI port (SPORT) exported declarations
+ */
+
+#ifndef __EFC_SPORT_H__
+#define __EFC_SPORT_H__
+
+extern struct efc_sli_port *
+efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt);
+extern void
+efc_sport_free(struct efc_sli_port *sport);
+extern void
+efc_sport_force_free(struct efc_sli_port *sport);
+extern struct efc_sli_port *
+efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn);
+extern int
+efc_sport_attach(struct efc_sli_port *sport, u32 fc_id);
+
+extern void *
+__efc_sport_allocated(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_vport_init(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_sport_attached(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+
+extern int
+efc_vport_start(struct efc_domain *domain);
+
+#endif /* __EFC_SPORT_H__ */
-- 
2.16.4



* [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (10 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 11/31] elx: libefc: SLI and FC PORT " James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 15:51   ` Hannes Reinecke
  2020-04-15 18:19   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 13/31] elx: libefc: Fabric " James Smart
                   ` (18 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Remote node (aka remote port) allocation, initialization and
  destroy routines.
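
For reviewers new to these interfaces, a rough usage sketch follows. It is
not part of the patch; the example_* function is a hypothetical caller and
assumes efc->lock is held, as the library expects for node list and lookup
updates:

	/* Hypothetical discovery path: find or create the node for an ID */
	static struct efc_node *
	example_get_remote_port(struct efc_sli_port *sport, u32 port_id)
	{
		struct efc_node *node;

		/* reuse the existing node for this FC_ID, if any */
		node = efc_node_find(sport, port_id);
		if (node)
			return node;

		/* allocate a fresh node; roles are refined later via PLOGI */
		return efc_node_alloc(sport, port_id, false, false);
	}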

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Changed node pool creation. Use mempool for node structure and allocate
    dma mem when required.
  Added functions efc_node_handle_implicit_logo() and
    efc_node_handle_explicit_logo() for better indentation.
  Replace efc_assert with WARN_ON.
  Use linux xarray api for lookup instead of sparse vectors (sketched below).
  Use defined return values.
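
The xarray change above boils down to the following pattern (illustrative
only, not part of the patch; the example_* helper is hypothetical):

	/* Hypothetical helper: publish a node in the sport FC_ID lookup */
	static int example_publish_node(struct efc_sli_port *sport,
					u32 port_id, struct efc_node *node)
	{
		int rc;

		rc = xa_err(xa_store(&sport->lookup, port_id, node,
				     GFP_ATOMIC));
		if (rc)
			return rc;

		/* readers call xa_load(&sport->lookup, port_id);
		 * the free path calls xa_erase(&sport->lookup, port_id)
		 */
		return 0;
	}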
---
 drivers/scsi/elx/libefc/efc_node.c | 1196 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_node.h |  183 ++++++
 2 files changed, 1379 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h

diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
new file mode 100644
index 000000000000..e8fd631f1793
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.c
@@ -0,0 +1,1196 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efc.h"
+
+/* HW node callback events from the user driver */
+int
+efc_remote_node_cb(void *arg, int event,
+		   void *data)
+{
+	struct efc *efc = arg;
+	enum efc_sm_event sm_event = EFC_EVT_LAST;
+	struct efc_remote_node *rnode = data;
+	struct efc_node *node = rnode->node;
+	unsigned long flags = 0;
+
+	switch (event) {
+	case EFC_HW_NODE_ATTACH_OK:
+		sm_event = EFC_EVT_NODE_ATTACH_OK;
+		break;
+
+	case EFC_HW_NODE_ATTACH_FAIL:
+		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
+		break;
+
+	case EFC_HW_NODE_FREE_OK:
+		sm_event = EFC_EVT_NODE_FREE_OK;
+		break;
+
+	case EFC_HW_NODE_FREE_FAIL:
+		sm_event = EFC_EVT_NODE_FREE_FAIL;
+		break;
+
+	default:
+		efc_log_test(efc, "unhandled event %#x\n", event);
+		return EFC_FAIL;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, sm_event, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+/* Find an FC node structure given the FC port ID */
+struct efc_node *
+efc_node_find(struct efc_sli_port *sport, u32 port_id)
+{
+	return xa_load(&sport->lookup, port_id);
+}
+
+struct efc_node *efc_node_alloc(struct efc_sli_port *sport,
+				  u32 port_id, bool init, bool targ)
+{
+	int rc;
+	struct efc_node *node = NULL;
+	struct efc *efc = sport->efc;
+	struct efc_dma *dma;
+
+	if (sport->shutting_down) {
+		efc_log_debug(efc, "node allocation when shutting down %06x\n",
+			      port_id);
+		return NULL;
+	}
+
+	node = mempool_alloc(efc->node_pool, GFP_ATOMIC);
+	if (!node) {
+		efc_log_err(efc, "node allocation failed %06x\n", port_id);
+		return NULL;
+	}
+	memset(node, 0, sizeof(*node));
+
+	dma = &node->sparm_dma_buf;
+	dma->size = NODE_SPARAMS_SIZE;
+	dma->virt = dma_pool_zalloc(efc->node_dma_pool, GFP_ATOMIC, &dma->phys);
+	if (!dma->virt) {
+		efc_log_err(efc, "node dma alloc failed\n");
+		goto dma_fail;
+	}
+	node->rnode.indicator = U32_MAX;
+	node->sport = sport;
+	INIT_LIST_HEAD(&node->list_entry);
+	list_add_tail(&node->list_entry, &sport->node_list);
+
+	node->efc = efc;
+	node->init = init;
+	node->targ = targ;
+
+	spin_lock_init(&node->pend_frames_lock);
+	INIT_LIST_HEAD(&node->pend_frames);
+	spin_lock_init(&node->active_ios_lock);
+	INIT_LIST_HEAD(&node->active_ios);
+	INIT_LIST_HEAD(&node->els_io_pend_list);
+	INIT_LIST_HEAD(&node->els_io_active_list);
+	efc->tt.scsi_io_alloc_enable(efc, node);
+
+	rc = efc->tt.hw_node_alloc(efc, &node->rnode, port_id, sport);
+	if (rc) {
+		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
+		goto hw_alloc_fail;
+	}
+
+	node->rnode.node = node;
+	node->sm.app = node;
+	node->evtdepth = 0;
+
+	efc_node_update_display_name(node);
+
+	rc = xa_err(xa_store(&sport->lookup, port_id, node, GFP_ATOMIC));
+	if (rc) {
+		efc_log_err(efc, "Node lookup store failed: %d\n", rc);
+		goto xa_fail;
+	}
+
+	return node;
+
+xa_fail:
+	efc->tt.hw_node_free_resources(efc, &node->rnode);
+hw_alloc_fail:
+	list_del(&node->list_entry);
+	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
+dma_fail:
+	mempool_free(node, efc->node_pool);
+	return NULL;
+}
+
+void
+efc_node_free(struct efc_node *node)
+{
+	struct efc_sli_port *sport;
+	struct efc *efc;
+	int rc = 0;
+	struct efc_node *ns = NULL;
+	struct efc_dma *dma;
+
+	sport = node->sport;
+	efc = node->efc;
+
+	node_printf(node, "Free'd\n");
+
+	if (node->refound) {
+		/*
+		 * Save the name server node. We will send fake RSCN event at
+		 * the end to handle ignored RSCN event during node deletion
+		 */
+		ns = efc_node_find(node->sport, FC_FID_DIR_SERV);
+	}
+
+	list_del(&node->list_entry);
+
+	/* Free HW resources */
+	rc = efc->tt.hw_node_free_resources(efc, &node->rnode);
+	if (EFC_HW_RTN_IS_ERROR(rc))
+		efc_log_test(efc, "efc_hw_node_free failed: %d\n", rc);
+
+	/* if the gidpt_delay_timer is still running, then delete it */
+	if (timer_pending(&node->gidpt_delay_timer))
+		del_timer(&node->gidpt_delay_timer);
+
+	xa_erase(&sport->lookup, node->rnode.fc_id);
+
+	/*
+	 * If the node_list is empty,
+	 * then post an ALL_CHILD_NODES_FREE event to the sport,
+	 * after the lock is released.
+	 * The sport may be free'd as a result of the event.
+	 */
+	if (list_empty(&sport->node_list))
+		efc_sm_post_event(&sport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
+				  NULL);
+
+	node->sport = NULL;
+	node->sm.current_state = NULL;
+
+	dma = &node->sparm_dma_buf;
+	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+	mempool_free(node, efc->node_pool);
+
+	if (ns) {
+		/* sending fake RSCN event to name server node */
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
+	}
+}
+
+void
+efc_node_force_free(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	/* shutdown sm processing */
+	efc_sm_disable(&node->sm);
+
+	strncpy(node->prev_state_name, node->current_state_name,
+		sizeof(node->prev_state_name));
+	strncpy(node->current_state_name, "disabled",
+		sizeof(node->current_state_name));
+
+	efc->tt.node_io_cleanup(efc, node, true);
+	efc->tt.node_els_cleanup(efc, node, true);
+
+	/* manually purge pending frames (if any) */
+	efc->tt.node_purge_pending(efc, node);
+
+	efc_node_free(node);
+}
+
+static void
+efc_dma_copy_in(struct efc_dma *dma, void *buffer, u32 buffer_length)
+{
+	if (!dma || !buffer || !buffer_length)
+		return;
+
+	if (buffer_length > dma->size)
+		buffer_length = dma->size;
+
+	memcpy(dma->virt, buffer, buffer_length);
+	dma->len = buffer_length;
+}
+
+int
+efc_node_attach(struct efc_node *node)
+{
+	int rc = 0;
+	struct efc_sli_port *sport = node->sport;
+	struct efc_domain *domain = sport->domain;
+	struct efc *efc = node->efc;
+
+	if (!domain->attached) {
+		efc_log_err(efc, "Warning: unattached domain\n");
+		return EFC_FAIL;
+	}
+	/* Update node->wwpn/wwnn */
+
+	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
+				efc_node_get_wwpn(node));
+	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
+				efc_node_get_wwnn(node));
+
+	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
+			sizeof(node->service_params) - 4);
+
+	/* take lock to protect node->rnode.attached */
+	rc = efc->tt.hw_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
+	if (EFC_HW_RTN_IS_ERROR(rc))
+		efc_log_test(efc, "efc_hw_node_attach failed: %d\n", rc);
+
+	return rc;
+}
+
+void
+efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
+{
+	switch (fc_id) {
+	case FC_FID_FLOGI:
+		snprintf(buffer, buffer_length, "fabric");
+		break;
+	case FC_FID_FCTRL:
+		snprintf(buffer, buffer_length, "fabctl");
+		break;
+	case FC_FID_DIR_SERV:
+		snprintf(buffer, buffer_length, "nserve");
+		break;
+	default:
+		if (fc_id == FC_FID_DOM_MGR) {
+			snprintf(buffer, buffer_length, "dctl%02x",
+				 (fc_id & 0x0000ff));
+		} else {
+			snprintf(buffer, buffer_length, "%06x", fc_id);
+		}
+		break;
+	}
+}
+
+void
+efc_node_update_display_name(struct efc_node *node)
+{
+	u32 port_id = node->rnode.fc_id;
+	struct efc_sli_port *sport = node->sport;
+	char portid_display[16];
+
+	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
+
+	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
+		 sport->display_name, portid_display);
+}
+
+void
+efc_node_send_ls_io_cleanup(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+
+	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
+		efc_log_debug(efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
+			      node->display_name, node->ls_acc_oxid);
+
+		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+		node->ls_acc_io = NULL;
+	}
+}
+
+/* currently, only case for implicit logo is PLOGI
+ * recvd. Thus, node's ELS IO pending list won't be
+ * empty (PLOGI will be on it)
+ */
+static void efc_node_handle_implicit_logo(struct efc_node *node)
+{
+	int rc;
+	struct efc *efc = node->efc;
+
+	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
+	node_printf(node, "Reason: implicit logout, re-authenticate\n");
+
+	efc->tt.scsi_io_alloc_enable(efc, node);
+
+	/* Re-attach node with the same HW node resources */
+	node->req_free = false;
+	rc = efc_node_attach(node);
+	efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+
+	if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+		efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, NULL);
+}
+
+static void efc_node_handle_explicit_logo(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	s8 pend_frames_empty;
+	struct list_head *list;
+	unsigned long flags = 0;
+
+	/* cleanup any pending LS_ACC ELSs */
+	efc_node_send_ls_io_cleanup(node);
+	list = &node->els_io_pend_list;
+	WARN_ON(!efc_els_io_list_empty(node, list));
+
+	spin_lock_irqsave(&node->pend_frames_lock, flags);
+	pend_frames_empty = list_empty(&node->pend_frames);
+	spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+	/*
+	 * there are two scenarios where we want to keep
+	 * this node alive:
+	 * 1. there are pending frames that need to be
+	 *    processed or
+	 * 2. we're an initiator and the remote node is
+	 *    a target and we need to re-authenticate
+	 */
+	node_printf(node,
+		    "Shutdown: explicit logo pend=%d sport.ini=%d node.tgt=%d\n",
+		    !pend_frames_empty, node->sport->enable_ini, node->targ);
+	if (!pend_frames_empty || (node->sport->enable_ini && node->targ)) {
+		u8 send_plogi = false;
+
+		if (node->sport->enable_ini && node->targ) {
+			/*
+			 * we're an initiator and
+			 * node shutting down is a target;
+			 * we'll need to re-authenticate in
+			 * initial state
+			 */
+			send_plogi = true;
+		}
+
+		/*
+		 * transition to __efc_d_init
+		 * (will retain HW node resources)
+		 */
+		efc->tt.scsi_io_alloc_enable(efc, node);
+		node->req_free = false;
+
+		/*
+		 * either pending frames exist,
+		 * or we're re-authenticating with PLOGI
+		 * (or both); in either case,
+		 * return to initial state
+		 */
+		efc_node_init_device(node, send_plogi);
+	}
+	/* else: let node shutdown occur */
+}
+
+void *
+__efc_node_shutdown(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		efc_node_hold_frames(node);
+		WARN_ON(!efc_node_active_ios_empty(node));
+		WARN_ON(!efc_els_io_list_empty(node,
+						&node->els_io_active_list));
+		/* by default, we will be freeing node after we unwind */
+		node->req_free = true;
+
+		switch (node->shutdown_reason) {
+		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
+			/* Node shutdown b/c of PLOGI received when node
+			 * already logged in. We have PLOGI service
+			 * parameters, so submit node attach; we won't be
+			 * freeing this node
+			 */
+
+			efc_node_handle_implicit_logo(node);
+			break;
+
+		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO:
+			efc_node_handle_explicit_logo(node);
+			break;
+
+		case EFC_NODE_SHUTDOWN_DEFAULT:
+		default: {
+			struct list_head *list;
+
+			/*
+			 * shutdown due to link down,
+			 * node going away (xport event) or
+			 * sport shutdown, purge pending and
+			 * proceed to cleanup node
+			 */
+
+			/* cleanup any pending LS_ACC ELSs */
+			efc_node_send_ls_io_cleanup(node);
+			list = &node->els_io_pend_list;
+			WARN_ON(!efc_els_io_list_empty(node, list));
+
+			node_printf(node,
+				    "Shutdown reason: default, purge pending\n");
+			efc->tt.node_purge_pending(efc, node);
+			break;
+		}
+		}
+
+		break;
+	}
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static bool
+efc_node_check_els_quiesced(struct efc_node *node)
+{
+	/* check to see if ELS requests, completions are quiesced */
+	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
+	    efc_els_io_list_empty(node, &node->els_io_active_list)) {
+		if (!node->attached) {
+			/* hw node detach already completed, proceed */
+			node_printf(node, "HW node not attached\n");
+			efc_node_transition(node,
+					    __efc_node_wait_ios_shutdown,
+					     NULL);
+		} else {
+			/*
+			 * hw node detach hasn't completed,
+			 * transition and wait
+			 */
+			node_printf(node, "HW node still attached\n");
+			efc_node_transition(node, __efc_node_wait_node_free,
+					    NULL);
+		}
+		return true;
+	}
+	return false;
+}
+
+void
+efc_node_initiate_cleanup(struct efc_node *node)
+{
+	struct efc *efc;
+
+	efc = node->efc;
+	efc->tt.node_els_cleanup(efc, node, false);
+
+	/*
+	 * if ELS's have already been quiesced, will move to next state
+	 * if ELS's have not been quiesced, abort them
+	 */
+	if (!efc_node_check_els_quiesced(node)) {
+		/*
+		 * Abort all ELS's since ELS's won't be aborted by HW
+		 * node free.
+		 */
+		efc_node_hold_frames(node);
+		efc->tt.node_abort_all_els(efc, node);
+		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
+	}
+}
+
+/* Node state machine: Wait for all ELSs to complete */
+void *
+__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	bool check_quiesce = false;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			node_printf(node, "All ELS IOs complete\n");
+			check_quiesce = true;
+		}
+		break;
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		if (WARN_ON(!node->els_req_cnt))
+			break;
+		node->els_req_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		if (WARN_ON(!node->els_cmpl_cnt))
+			break;
+		node->els_cmpl_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* all ELS IO's complete */
+		node_printf(node, "All ELS IOs complete\n");
+		WARN_ON(!efc_els_io_list_empty(node,
+					 &node->els_io_active_list));
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	if (check_quiesce)
+		efc_node_check_els_quiesced(node);
+
+	return NULL;
+}
+
+/* Node state machine: Wait for a HW node free event to complete */
+void *
+__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+		/* node is officially no longer attached */
+		node->attached = false;
+		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		/* As IOs and ELS IOs complete, we expect to get these events */
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* Fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * State is entered when a node receives a shutdown event, and it's waiting
+ * for all the active IOs and ELS IOs associated with the node to complete.
+ */
+void *
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+
+		/* first check to see if no ELS IOs are outstanding */
+		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			/* If there are any active IOs, free them */
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		}
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		if (efc_node_active_ios_empty(node) &&
+		    efc_els_io_list_empty(node, &node->els_io_active_list)) {
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		}
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		if (WARN_ON(!node->els_req_cnt))
+			break;
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
+			      efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+	struct efc *efc = NULL;
+	struct efc_node_cb *cbdata = arg;
+
+	node = ctx->app;
+	efc = node->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
+	case EFC_EVT_NODE_MISSING:
+	case EFC_EVT_FCP_CMD_RCVD:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		node->refound = true;
+		break;
+
+	/*
+	 * node->attached must be set appropriately
+	 * for all node attach/detach events
+	 */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		break;
+
+	/*
+	 * handle any ELS completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		if (WARN_ON(!node->els_cmpl_cnt))
+			break;
+		node->els_cmpl_cnt--;
+		break;
+
+	/*
+	 * handle any ELS request completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		if (WARN_ON(!node->els_req_cnt))
+			break;
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_ELS_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * Unsupported ELS was received,
+		 * send LS_RJT, command not supported
+		 */
+		efc_log_debug(efc,
+			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
+			      node->display_name, funcname,
+			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
+
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD:
+	case EFC_EVT_FLOGI_RCVD:
+	case EFC_EVT_LOGO_RCVD:
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/* sm: / send ELS_RJT */
+		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+		/* if we didn't catch this in a state, send generic LS_RJT */
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+						ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+
+		break;
+	}
+	case EFC_EVT_ABTS_RCVD: {
+		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+
+		/* sm: / send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		efc_log_test(node->efc, "[%s] %-20s %-20s not handled\n",
+			     node->display_name, funcname,
+			     efc_sm_event_name(evt));
+		break;
+	}
+	return NULL;
+}
+
+void
+efc_node_save_sparms(struct efc_node *node, void *payload)
+{
+	memcpy(node->service_params, payload, sizeof(node->service_params));
+}
+
+void
+efc_node_post_event(struct efc_node *node,
+		    enum efc_sm_event evt, void *arg)
+{
+	bool free_node = false;
+
+	node->evtdepth++;
+
+	efc_sm_post_event(&node->sm, evt, arg);
+
+	/* If our event call depth is one and
+	 * we're not holding frames
+	 * then we can dispatch any pending frames.
+	 * We don't want to allow the efc_process_node_pending()
+	 * call to recurse.
+	 */
+	if (!node->hold_frames && node->evtdepth == 1)
+		efc_process_node_pending(node);
+
+	node->evtdepth--;
+
+	/*
+	 * Free the node object if so requested,
+	 * and we're at an event call depth of zero
+	 */
+	if (node->evtdepth == 0 && node->req_free)
+		free_node = true;
+
+	if (free_node)
+		efc_node_free(node);
+}
+
+void
+efc_node_transition(struct efc_node *node,
+		    void *(*state)(struct efc_sm_ctx *,
+				   enum efc_sm_event, void *), void *data)
+{
+	struct efc_sm_ctx *ctx = &node->sm;
+
+	if (ctx->current_state == state) {
+		efc_node_post_event(node, EFC_EVT_REENTER, data);
+	} else {
+		efc_node_post_event(node, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_node_post_event(node, EFC_EVT_ENTER, data);
+	}
+}
+
+void
+efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name)
+{
+	memset(buffer, 0, buffer_len);
+
+	snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
+}
+
+uint64_t
+efc_node_get_wwpn(struct efc_node *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+uint64_t
+efc_node_get_wwnn(struct efc_node *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwnn);
+}
+
+int
+efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
+		uint8_t cmd, void *(*efc_node_common_func)(const char *,
+				struct efc_sm_ctx *, enum efc_sm_event, void *),
+		const char *funcname)
+{
+	return 0;
+}
+
+int
+efc_node_check_ns_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
+		uint16_t cmd, void *(*efc_node_common_func)(const char *,
+				struct efc_sm_ctx *, enum efc_sm_event, void *),
+		const char *funcname)
+{
+	return 0;
+}
+
+int
+efc_node_active_ios_empty(struct efc_node *node)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	empty = list_empty(&node->active_ios);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	empty = list_empty(list);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+void
+efc_node_pause(struct efc_node *node,
+	       void *(*state)(struct efc_sm_ctx *,
+			      enum efc_sm_event, void *))
+{
+	node->nodedb_state = state;
+	efc_node_transition(node, __efc_node_paused, NULL);
+}
+
+/**
+ * This state is entered when a state is "paused". When resumed, the node
+ * is transitioned to a previously saved state (node->nodedb_state)
+ */
+void *
+__efc_node_paused(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node_printf(node, "Paused\n");
+		break;
+
+	case EFC_EVT_RESUME: {
+		void *(*pf)(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+
+		pf = node->nodedb_state;
+
+		node->nodedb_state = NULL;
+		efc_node_transition(node, pf, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node->req_free = true;
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+void
+efc_node_recv_els_frame(struct efc_node *node,
+			struct efc_hw_sequence *seq)
+{
+	unsigned long flags = 0;
+	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
+	struct {
+		u32 cmd;
+		enum efc_sm_event evt;
+		u32 payload_size;
+	} els_cmd_list[] = {
+		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
+		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
+		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
+		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
+		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
+		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
+	};
+	struct efc_node_cb cbdata;
+	u8 *buf = seq->payload->dma.virt;
+	enum efc_sm_event evt = EFC_EVT_ELS_RCVD;
+	u32 i;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	/* find a matching event for the ELS command */
+	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
+		if (els_cmd_list[i].cmd == buf[0]) {
+			evt = els_cmd_list[i].evt;
+			break;
+		}
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, evt, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void
+efc_node_recv_ct_frame(struct efc_node *node,
+		       struct efc_hw_sequence *seq)
+{
+	struct fc_ct_hdr *iu = seq->payload->dma.virt;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efc *efc = node->efc;
+	u16 gscmd = be16_to_cpu(iu->ct_cmd);
+
+	efc_log_err(efc, "[%s] Received cmd :%x sending CT_REJECT\n",
+		    node->display_name, gscmd);
+	efc->tt.send_ct_rsp(efc, node, be16_to_cpu(hdr->fh_ox_id), iu,
+			    FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
+}
+
+void
+efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
+{
+	struct efc_node_cb cbdata;
+	unsigned long flags = 0;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void
+efc_process_node_pending(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (node->hold_frames)
+			break;
+
+		/* Get next frame/sequence */
+		seq = NULL;
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+
+		if (!list_empty(&node->pend_frames)) {
+			seq = list_first_entry(&node->pend_frames,
+					struct efc_hw_sequence, list_entry);
+			list_del(&seq->list_entry);
+		}
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		if (!seq) {
+			pend_frames_processed =	node->pend_frames_processed;
+			node->pend_frames_processed = 0;
+			break;
+		}
+		node->pend_frames_processed++;
+
+		/* now dispatch frame(s) to dispatch function */
+		efc_node_dispatch_frame(node, seq);
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efc, "%u node frames held and processed\n",
+			      pend_frames_processed);
+}
+
+void
+efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void
+efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void
+efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void efc_node_post_els_resp(struct efc_node *node,
+			    enum efc_hw_node_els_event evt, void *arg)
+{
+	enum efc_sm_event sm_event = EFC_EVT_LAST;
+	struct efc *efc = node->efc;
+	unsigned long flags = 0;
+
+	switch (evt) {
+	case EFC_HW_SRRS_ELS_REQ_OK:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
+		break;
+	case EFC_HW_SRRS_ELS_CMPL_OK:
+		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
+		break;
+	case EFC_HW_SRRS_ELS_REQ_FAIL:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
+		break;
+	case EFC_HW_SRRS_ELS_CMPL_FAIL:
+		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
+		break;
+	case EFC_HW_SRRS_ELS_REQ_RJT:
+		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
+		break;
+	case EFC_HW_ELS_REQ_ABORTED:
+		sm_event = EFC_EVT_ELS_REQ_ABORTED;
+		break;
+	default:
+		efc_log_test(efc, "unhandled event %#x\n", evt);
+		return;
+	}
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, sm_event, arg);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
+
+void efc_node_post_shutdown(struct efc_node *node,
+			    u32 evt, void *arg)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
+	spin_unlock_irqrestore(&node->efc->lock, flags);
+}
diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
new file mode 100644
index 000000000000..0608703cfd04
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.h
@@ -0,0 +1,183 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFC_NODE_H__)
+#define __EFC_NODE_H__
+#include "scsi/fc/fc_ns.h"
+
+#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	(1 << 0)
+#define EFC_NODEDB_PAUSE_NAMESERVER	(1 << 1)
+#define EFC_NODEDB_PAUSE_NEW_NODES	(1 << 2)
+
+#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efc, "[%s] [%04x][i:%04x t:%04x h:%04x]" fmt, \
+	io->node->display_name, io->instance_index, io->init_task_tag, \
+	io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+static inline void
+efc_node_evt_set(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		 const char *handler)
+{
+	struct efc_node *node = ctx->app;
+
+	if (evt == EFC_EVT_ENTER) {
+		strncpy(node->current_state_name, handler,
+			sizeof(node->current_state_name));
+	} else if (evt == EFC_EVT_EXIT) {
+		strncpy(node->prev_state_name, node->current_state_name,
+			sizeof(node->prev_state_name));
+		strncpy(node->current_state_name, "invalid",
+			sizeof(node->current_state_name));
+	}
+	node->prev_evt = node->current_evt;
+	node->current_evt = evt;
+}
+
+/**
+ * hold frames in pending frame list
+ *
+ * Unsolicited receive frames are held on the node pending frame list,
+ * rather than being processed.
+ */
+
+static inline void
+efc_node_hold_frames(struct efc_node *node)
+{
+	node->hold_frames = true;
+}
+
+/**
+ * accept frames
+ *
+ * Unsolicited receive frames are processed rather than being held on the
+ * node pending frame list.
+ */
+
+static inline void
+efc_node_accept_frames(struct efc_node *node)
+{
+	node->hold_frames = false;
+}
+
+/*
+ * Node initiator/target enable defines
+ * All combinations of the SLI port (sport) initiator/target enable,
+ * and remote node initiator/target enable are enumerated.
+ * ex: EFC_NODE_ENABLE_T_TO_IT decodes to target mode is enabled on SLI port
+ * and I+T is enabled on remote node.
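+ * ex2: an I+T-enabled SLI port with a target-only remote node decodes to
+ * EFC_NODE_ENABLE_IT_TO_T (binary 1101, matching efc_node_get_enable()).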
+ */
+enum efc_node_enable {
+	EFC_NODE_ENABLE_x_TO_x,
+	EFC_NODE_ENABLE_x_TO_T,
+	EFC_NODE_ENABLE_x_TO_I,
+	EFC_NODE_ENABLE_x_TO_IT,
+	EFC_NODE_ENABLE_T_TO_x,
+	EFC_NODE_ENABLE_T_TO_T,
+	EFC_NODE_ENABLE_T_TO_I,
+	EFC_NODE_ENABLE_T_TO_IT,
+	EFC_NODE_ENABLE_I_TO_x,
+	EFC_NODE_ENABLE_I_TO_T,
+	EFC_NODE_ENABLE_I_TO_I,
+	EFC_NODE_ENABLE_I_TO_IT,
+	EFC_NODE_ENABLE_IT_TO_x,
+	EFC_NODE_ENABLE_IT_TO_T,
+	EFC_NODE_ENABLE_IT_TO_I,
+	EFC_NODE_ENABLE_IT_TO_IT,
+};
+
+static inline enum efc_node_enable
+efc_node_get_enable(struct efc_node *node)
+{
+	u32 retval = 0;
+
+	if (node->sport->enable_ini)
+		retval |= (1U << 3);
+	if (node->sport->enable_tgt)
+		retval |= (1U << 2);
+	if (node->init)
+		retval |= (1U << 1);
+	if (node->targ)
+		retval |= (1U << 0);
+	return (enum efc_node_enable)retval;
+}
+
+extern int
+efc_node_check_els_req(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg,
+		       u8 cmd, void *(*efc_node_common_func)(const char *,
+		       struct efc_sm_ctx *, enum efc_sm_event, void *),
+		       const char *funcname);
+extern int
+efc_node_check_ns_req(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg,
+		      u16 cmd, void *(*efc_node_common_func)(const char *,
+		      struct efc_sm_ctx *, enum efc_sm_event, void *),
+		      const char *funcname);
+extern int
+efc_node_attach(struct efc_node *node);
+extern struct efc_node *
+efc_node_alloc(struct efc_sli_port *sport, u32 port_id,
+		bool init, bool targ);
+extern void
+efc_node_free(struct efc_node *efc);
+extern void
+efc_node_force_free(struct efc_node *efc);
+extern void
+efc_node_update_display_name(struct efc_node *node);
+void efc_node_post_event(struct efc_node *node, enum efc_sm_event evt,
+			 void *arg);
+
+extern void *
+__efc_node_shutdown(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void
+efc_node_save_sparms(struct efc_node *node, void *payload);
+extern void
+efc_node_transition(struct efc_node *node,
+		    void *(*state)(struct efc_sm_ctx *,
+		    enum efc_sm_event, void *), void *data);
+extern void *
+__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+
+extern void
+efc_node_initiate_cleanup(struct efc_node *node);
+
+extern void
+efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name);
+extern uint64_t
+efc_node_get_wwpn(struct efc_node *node);
+
+extern void
+efc_node_pause(struct efc_node *node,
+	       void *(*state)(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg));
+extern void *
+__efc_node_paused(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+extern int
+efc_node_active_ios_empty(struct efc_node *node);
+extern void
+efc_node_send_ls_io_cleanup(struct efc_node *node);
+
+extern int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+extern void
+efc_process_node_pending(struct efc_node *node);
+
+#endif /* __EFC_NODE_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 13/31] elx: libefc: Fabric node state machine interfaces
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (11 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 12/31] elx: libefc: Remote node " James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 18:51   ` Daniel Wagner
  2020-04-16  6:37   ` Hannes Reinecke
  2020-04-12  3:32 ` [PATCH v3 14/31] elx: libefc: FC node ELS and state handling James Smart
                   ` (17 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Fabric node initialization and logins.
- Name/Directory Services node.
- Fabric Controller node to process RSCN events.

These are all interactions with remote ports that correspond
to well-known fabric entities.
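
For reference, every state added here follows the same handler convention
used throughout libefc. A rough sketch only (the handler and next-state
names below are placeholders; the real bodies are in the patch):

    void *
    __efc_fabric_example_state(struct efc_sm_ctx *ctx,
                               enum efc_sm_event evt, void *arg)
    {
        struct efc_node *node = ctx->app;

        efc_node_evt_set(ctx, evt, __func__);
        node_sm_trace();

        switch (evt) {
        case EFC_EVT_ENTER:
            /* do state-entry work, e.g. send an ELS, then transition */
            efc_node_transition(node, __efc_fabric_next_state, NULL);
            break;
        default:
            /* unhandled events go to the common fabric handler */
            __efc_fabric_common(__func__, ctx, evt, arg);
            break;
        }
        return NULL;
    }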

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Replace efc_assert with WARN_ON
  Return defined return values
---
 drivers/scsi/elx/libefc/efc_fabric.c | 1759 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_fabric.h |  116 +++
 2 files changed, 1875 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h

diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
new file mode 100644
index 000000000000..251f8702dbc5
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.c
@@ -0,0 +1,1759 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * This file implements remote node state machines for:
+ * - Fabric logins.
+ * - Fabric controller events.
+ * - Name/directory services interaction.
+ * - Point-to-point logins.
+ */
+
+/*
+ * fabric_sm Node State Machine: Fabric States
+ * ns_sm Node State Machine: Name/Directory Services States
+ * p2p_sm Node State Machine: Point-to-Point Node States
+ */
+
+#include "efc.h"
+
+static void
+efc_fabric_initiate_shutdown(struct efc_node *node)
+{
+	int rc;
+	struct efc *efc = node->efc;
+
+	efc->tt.scsi_io_alloc_disable(efc, node);
+
+	if (node->attached) {
+		/* issue hw node free; don't care if succeeds right away
+		 * or sometime later, will check node->attached later in
+		 * shutdown process
+		 */
+		rc = efc->tt.hw_node_detach(efc, &node->rnode);
+		if (rc != EFC_HW_RTN_SUCCESS &&
+		    rc != EFC_HW_RTN_SUCCESS_SYNC) {
+			node_printf(node, "Failed freeing HW node, rc=%d\n",
+				    rc);
+		}
+	}
+	/*
+	 * The node has either been detached or is in the process of being
+	 * detached; call the common node's initiate-cleanup function.
+	 */
+	efc_node_initiate_cleanup(node);
+}
+
+static void *
+__efc_fabric_common(const char *funcname, struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+
+	node = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	case EFC_EVT_SHUTDOWN:
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+void *
+__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_REENTER:	/* not sure why we're getting these ... */
+		efc_log_debug(efc, ">>> reenter !!\n");
+		/* fall through */
+	case EFC_EVT_ENTER:
+		/*  sm: / send FLOGI */
+		efc->tt.els_send(efc, node, ELS_FLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void
+efc_fabric_set_topology(struct efc_node *node,
+			enum efc_sport_topology topology)
+{
+	node->sport->topology = topology;
+}
+
+void
+efc_fabric_notify_topology(struct efc_node *node)
+{
+	struct efc_node *tmp_node;
+	struct efc_node *next;
+	enum efc_sport_topology topology = node->sport->topology;
+
+	/*
+	 * now loop through the nodes in the sport
+	 * and send topology notification
+	 */
+	list_for_each_entry_safe(tmp_node, next, &node->sport->node_list,
+				 list_entry) {
+		if (tmp_node != node) {
+			efc_node_post_event(tmp_node,
+					    EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
+					    (void *)topology);
+		}
+	}
+}
+
+static bool efc_rnode_is_nport(struct fc_els_flogi *rsp)
+{
+	return !(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_FPORT);
+}
+
+static bool efc_rnode_is_npiv_capable(struct fc_els_flogi *rsp)
+{
+	return !!(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_NPIV_ACC);
+}
+
+void *
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+
+		memcpy(node->sport->domain->flogi_service_params,
+		       cbdata->els_rsp.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* Check to see if the fabric is an F_PORT or an N_PORT */
+		if (!efc_rnode_is_nport(cbdata->els_rsp.virt)) {
+			/* sm: if not nport / efc_domain_attach */
+			/* ext_status has the fc_id, attach domain */
+			if (efc_rnode_is_npiv_capable(cbdata->els_rsp.virt)) {
+				efc_log_debug(node->efc,
+					      " NPIV is enabled at switch side\n");
+				//node->efc->sw_feature_cap |= 1<<10;
+			}
+			efc_fabric_set_topology(node,
+						EFC_SPORT_TOPOLOGY_FABRIC);
+			efc_fabric_notify_topology(node);
+			WARN_ON(node->sport->domain->attached);
+			efc_domain_attach(node->sport->domain,
+					  cbdata->ext_status);
+			efc_node_transition(node,
+					    __efc_fabric_wait_domain_attach,
+					    NULL);
+			break;
+		}
+
+		/*  sm: if nport and p2p_winner / efc_domain_attach */
+		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
+		if (efc_p2p_setup(node->sport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+			break;
+		}
+
+		if (node->sport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					     NULL);
+			if (node->sport->domain->attached &&
+			    !node->sport->domain->domain_notify_pend) {
+				/*
+				 * already attached,
+				 * just send ATTACH_OK
+				 */
+				node_printf(node,
+					    "p2p winner, domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/*
+			 * peer is p2p winner;
+			 * PLOGI will be received on the
+			 * remote SID=1 node;
+			 * this node has served its purpose
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_ELS_REQ_ABORTED:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		struct efc_sli_port *sport = node->sport;
+		/*
+		 * with these errors, we have no recovery,
+		 * so shutdown the sport, leave the link
+		 * up and the domain ready
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		node_printf(node,
+			    "FLOGI failed evt=%s, shutting down sport [%s]\n",
+			    efc_sm_event_name(evt), sport->display_name);
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send FDISC */
+		efc->tt.els_send(efc, node, ELS_FDISC,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+
+		efc_node_transition(node, __efc_fabric_fdisc_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* fc_id is in ext_status */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / efc_sport_attach */
+		efc_sport_attach(node->sport, cbdata->ext_status);
+		efc_node_transition(node, __efc_fabric_wait_domain_attach,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_log_err(node->efc, "FDISC failed, shutting down sport\n");
+		/* sm: / shutdown sport */
+		efc_sm_post_event(&node->sport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+static int
+efc_start_ns_node(struct efc_sli_port *sport)
+{
+	struct efc_node *ns;
+
+	/* Instantiate a name services node */
+	ns = efc_node_find(sport, FC_FID_DIR_SERV);
+	if (!ns) {
+		ns = efc_node_alloc(sport, FC_FID_DIR_SERV, false, false);
+		if (!ns)
+			return EFC_FAIL;
+	}
+	/*
+	 * for found ns, should we be transitioning from here?
+	 * breaks transition only
+	 *  1. from within state machine or
+	 *  2. if after alloc
+	 */
+	if (ns->efc->nodedb_mask & EFC_NODEDB_PAUSE_NAMESERVER)
+		efc_node_pause(ns, __efc_ns_init);
+	else
+		efc_node_transition(ns, __efc_ns_init, NULL);
+	return EFC_SUCCESS;
+}
+
+static int
+efc_start_fabctl_node(struct efc_sli_port *sport)
+{
+	struct efc_node *fabctl;
+
+	fabctl = efc_node_find(sport, FC_FID_FCTRL);
+	if (!fabctl) {
+		fabctl = efc_node_alloc(sport, FC_FID_FCTRL,
+					false, false);
+		if (!fabctl)
+			return EFC_FAIL;
+	}
+	/*
+	 * for a found fabctl node, should we be transitioning from here?
+	 * breaks transition only
+	 *  1. from within state machine or
+	 *  2. if after alloc
+	 */
+	efc_node_transition(fabctl, __efc_fabctl_init, NULL);
+	return EFC_SUCCESS;
+}
+
+void *
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+	case EFC_EVT_SPORT_ATTACH_OK: {
+		int rc;
+
+		rc = efc_start_ns_node(node->sport);
+		if (rc)
+			return NULL;
+
+		/* sm: if enable_ini / start fabctl node */
+		/* Instantiate the fabric controller (sends SCR) */
+		if (node->sport->enable_rscn) {
+			rc = efc_start_fabctl_node(node->sport);
+			if (rc)
+				return NULL;
+		}
+		efc_node_transition(node, __efc_fabric_idle, NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabric_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_plogi_wait_rsp, NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* Save service parameters */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_ns_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send RFTID */
+		efc->tt.els_send_ct(efc, node, FC_RCTL_ELS_REQ,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_rftid_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	/* ignore shutdown event as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFT_ID,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / send RFFID */
+		efc->tt.els_send_ct(efc, node, FC_NS_RFF_ID,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_rffid_wait_rsp, NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Waits for an RFF_ID response event; if RSCN handling is enabled, a GID_PT
+ * name services request is issued.
+ */
+void *
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFF_ID,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		if (node->sport->enable_rscn) {
+			/* sm: if enable_rscn / send GIDPT */
+			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		} else {
+			/* if 'T' only, we're done, go to idle */
+			efc_node_transition(node, __efc_ns_idle, NULL);
+		}
+		break;
+	}
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static int
+efc_process_gidpt_payload(struct efc_node *node,
+			  void *data, u32 gidpt_len)
+{
+	u32 i, j;
+	struct efc_node *newnode;
+	struct efc_sli_port *sport = node->sport;
+	struct efc *efc = node->efc;
+	u32 port_id = 0, port_count, portlist_count;
+	struct efc_node *n;
+	struct efc_node **active_nodes;
+	int residual;
+	struct fc_ct_hdr *hdr = data;
+	struct fc_gid_pn_resp *gidpt = data + sizeof(*hdr);
+
+	residual = be16_to_cpu(hdr->ct_mr_size);
+
+	if (residual != 0)
+		efc_log_debug(node->efc, "residual is %u words\n", residual);
+
+	if (be16_to_cpu(hdr->ct_cmd) == FC_FS_RJT) {
+		node_printf(node,
+			    "GIDPT request failed: rsn x%x rsn_expl x%x\n",
+			    hdr->ct_reason, hdr->ct_explan);
+		return EFC_FAIL;
+	}
+
+	portlist_count = (gidpt_len - sizeof(*hdr)) / sizeof(*gidpt);
+
+	/* Count the number of nodes */
+	port_count = 0;
+	list_for_each_entry(n, &sport->node_list, list_entry) {
+		port_count++;
+	}
+
+	/* Allocate a buffer for all nodes */
+	active_nodes = kcalloc(port_count, sizeof(*active_nodes), GFP_ATOMIC);
+	if (!active_nodes) {
+		node_printf(node, "active_nodes allocation failed\n");
+		return EFC_FAIL;
+	}
+
+	/* Fill buffer with fc_id of active nodes */
+	i = 0;
+	list_for_each_entry(n, &sport->node_list, list_entry) {
+		port_id = n->rnode.fc_id;
+		switch (port_id) {
+		case FC_FID_FLOGI:
+		case FC_FID_FCTRL:
+		case FC_FID_DIR_SERV:
+			break;
+		default:
+			if (port_id != FC_FID_DOM_MGR)
+				active_nodes[i++] = n;
+			break;
+		}
+	}
+
+	/* update the active nodes buffer */
+	for (i = 0; i < portlist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		for (j = 0; j < port_count; j++) {
+			if (active_nodes[j] &&
+			    port_id == active_nodes[j]->rnode.fc_id) {
+				active_nodes[j] = NULL;
+			}
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+
+	/* Those remaining in the active_nodes[] are now gone ! */
+	for (i = 0; i < port_count; i++) {
+		/*
+		 * if we're an initiator and the remote node
+		 * is a target, then post the node missing event.
+		 * if we're target and we have enabled
+		 * target RSCN, then post the node missing event.
+		 */
+		if (active_nodes[i]) {
+			if ((node->sport->enable_ini &&
+			     active_nodes[i]->targ) ||
+			     (node->sport->enable_tgt &&
+			     enable_target_rscn(efc))) {
+				efc_node_post_event(active_nodes[i],
+						    EFC_EVT_NODE_MISSING,
+						     NULL);
+			} else {
+				node_printf(node,
+					    "GID_PT: skipping non-tgt port_id x%06x\n",
+					    active_nodes[i]->rnode.fc_id);
+			}
+		}
+	}
+	kfree(active_nodes);
+
+	for (i = 0; i < portlist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		/* Don't create node for ourselves */
+		if (port_id != node->rnode.sport->fc_id) {
+			newnode = efc_node_find(sport, port_id);
+			if (!newnode) {
+				if (node->sport->enable_ini) {
+					newnode = efc_node_alloc(sport,
+								 port_id,
+								  false,
+								  false);
+					if (!newnode) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						return EFC_FAIL;
+					}
+					/*
+					 * send PLOGI automatically
+					 * if initiator
+					 */
+					efc_node_init_device(newnode, true);
+				}
+				continue;
+			}
+
+			if (node->sport->enable_ini && newnode->targ) {
+				efc_node_post_event(newnode,
+						    EFC_EVT_NODE_REFOUND,
+						    NULL);
+			}
+			/*
+			 * original code sends ADISC,
+			 * has notion of "refound"
+			 */
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+	return EFC_SUCCESS;
+}
+
+/**
+ * Wait for a GIDPT response from the name server. Process the FC_IDs that are
+ * reported by creating new remote ports, as needed.
+ */
+void *
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_GID_PT,
+					  __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / process GIDPT payload */
+		efc_process_gidpt_payload(node, cbdata->els_rsp.virt,
+					  cbdata->els_rsp.len);
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	{
+		/* not much we can do; will retry with the next RSCN */
+		node_printf(node, "GID_PT failed to complete\n");
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	/* if receive RSCN here, queue up another discovery processing */
+	case EFC_EVT_RSCN_RCVD: {
+		node_printf(node, "RSCN received during GID_PT processing\n");
+		node->rscn_pending = true;
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Idle. Wait for RSCN received events (posted from the fabric controller)
+ * and restart the GIDPT name services query and processing.
+ */
+void *
+__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->rscn_pending)
+			break;
+
+		node_printf(node, "RSCN pending, restart discovery\n");
+		node->rscn_pending = false;
+
+			/* fall through */
+
+	case EFC_EVT_RSCN_RCVD: {
+		/* sm: / send GIDPT */
+		/*
+		 * If target RSCN processing is enabled,
+		 * and this is target only (not initiator),
+		 * and tgt_rscn_delay is non-zero,
+		 * then we delay issuing the GID_PT
+		 */
+		if (efc->tgt_rscn_delay_msec != 0 &&
+		    !node->sport->enable_ini && node->sport->enable_tgt &&
+		    enable_target_rscn(efc)) {
+			efc_node_transition(node, __efc_ns_gidpt_delay, NULL);
+		} else {
+			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+					EFC_FC_ELS_DEFAULT_RETRIES);
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * Handle the GIDPT delay timer callback.
+ * Post an EFC_EVT_GIDPT_DELAY_EXPIRED event to the passed-in node.
+ */
+static void
+gidpt_delay_timer_cb(struct timer_list *t)
+{
+	struct efc_node *node = from_timer(node, t, gidpt_delay_timer);
+
+	del_timer(&node->gidpt_delay_timer);
+
+	efc_node_post_event(node, EFC_EVT_GIDPT_DELAY_EXPIRED, NULL);
+}
+
+/**
+ * Name services node state machine: Delayed GIDPT.
+ *
+ * Waiting for GIDPT delay to expire before submitting GIDPT to name server.
+ */
+void *
+__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		time_t delay_msec;
+
+		/*
+		 * Compute the delay time: use tgt_rscn_delay, but if the
+		 * time since the last GIDPT is less than tgt_rscn_period,
+		 * use tgt_rscn_period instead.
+		 */
+		delay_msec = efc->tgt_rscn_delay_msec;
+		if ((jiffies_to_msecs(jiffies) - node->time_last_gidpt_msec)
+		    < efc->tgt_rscn_period_msec) {
+			delay_msec = efc->tgt_rscn_period_msec;
+		}
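+		/*
+		 * For example (illustrative values only): with
+		 * tgt_rscn_delay_msec of 2000 and tgt_rscn_period_msec of
+		 * 10000, a GIDPT requested 3000 msec after the previous one
+		 * is delayed by 10000 msec, while one requested 15000 msec
+		 * later is delayed by only 2000 msec.
+		 */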
+		timer_setup(&node->gidpt_delay_timer, &gidpt_delay_timer_cb,
+			    0);
+		mod_timer(&node->gidpt_delay_timer,
+			  jiffies + msecs_to_jiffies(delay_msec));
+
+		break;
+	}
+
+	case EFC_EVT_GIDPT_DELAY_EXPIRED:
+		node->time_last_gidpt_msec = jiffies_to_msecs(jiffies);
+
+		efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_ns_gidpt_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_RSCN_RCVD: {
+		efc_log_debug(efc,
+			      "RSCN received while in GIDPT delay - no action\n");
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	return NULL;
+}
+
+/**
+ * Fabric controller node state machine: Initial state.
+ *
+ * Send an SCR to the well-known fabric controller address (no login needed).
+ */
+void *
+__efc_fabctl_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* no need to login to fabric controller, just send SCR */
+		efc->tt.els_send(efc, node, ELS_SCR,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Fabric controller node state machine: Wait for node attach to complete.
+ *
+ * If the attach succeeds, issue an SCR to the fabric controller,
+ * subscribing to all RSCN events.
+ */
+void *
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send SCR */
+		efc->tt.els_send(efc, node, ELS_SCR,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/**
+ * Fabric controller node state machine:
+ * Wait for an SCR response from the fabric controller.
+ */
+void *
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_SCR,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static void
+efc_process_rscn(struct efc_node *node, struct efc_node_cb *cbdata)
+{
+	struct efc *efc = node->efc;
+	struct efc_sli_port *sport = node->sport;
+	struct efc_node *ns;
+
+	/* Forward this event to the name-services node */
+	ns = efc_node_find(sport, FC_FID_DIR_SERV);
+	if (ns)
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, cbdata);
+	else
+		efc_log_warn(efc, "can't find name server node\n");
+}
+
+/* Fabric controller node state machine: Ready.
+ * In this state, an RSCN sent by the fabric controller is received by
+ * this node and forwarded to the name services node object, and an
+ * LS_ACC is sent in response.
+ */
+void *
+__efc_fabctl_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * sm: / process RSCN (forward to name services node),
+		 * send LS_ACC
+		 */
+		efc_process_rscn(node, cbdata);
+		efc->tt.els_send_resp(efc, node, ELS_LS_ACC,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_fabctl_wait_ls_acc_cmpl,
+				    NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static uint64_t
+efc_get_wwpn(struct fc_els_flogi *sp)
+{
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+static int
+efc_rnode_is_winner(struct efc_sli_port *sport)
+{
+	struct fc_els_flogi *remote_sp;
+	u64 remote_wwpn;
+	u64 local_wwpn = sport->wwpn;
+	u64 wwn_bump = 0;
+
+	remote_sp = (struct fc_els_flogi *)sport->domain->flogi_service_params;
+	remote_wwpn = efc_get_wwpn(remote_sp);
+
+	local_wwpn ^= wwn_bump;
+
+	efc_log_debug(sport->efc, "r: %llx\n",
+		      be64_to_cpu(remote_sp->fl_wwpn));
+	efc_log_debug(sport->efc, "l: %llx\n", local_wwpn);
+
+	if (remote_wwpn == local_wwpn) {
+		efc_log_warn(sport->efc,
+			     "WWPN of remote node [%08x %08x] matches local WWPN\n",
+			     (u32)(local_wwpn >> 32ll),
+			     (u32)local_wwpn);
+		return -1;
+	}
+
+	return (remote_wwpn > local_wwpn);
+}
+
+void *
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_sli_port *sport = node->sport;
+		struct efc_node *rnode;
+
+		/*
+		 * this transient node (SID=0 (recv'd FLOGI)
+		 * or DID=fabric (sent FLOGI))
+		 * is the p2p winner, will use a separate node
+		 * to send PLOGI to peer
+		 */
+		WARN_ON(!node->sport->p2p_winner);
+
+		rnode = efc_node_find(sport, node->sport->p2p_remote_port_id);
+		if (rnode) {
+			/*
+			 * the "other" transient p2p node has
+			 * already kicked off the
+			 * new node from which PLOGI is sent
+			 */
+			node_printf(node,
+				    "Node with fc_id x%x already exists\n",
+				    rnode->rnode.fc_id);
+		} else {
+			/*
+			 * create new node (SID=1, DID=2)
+			 * from which to send PLOGI
+			 */
+			rnode = efc_node_alloc(sport,
+					       sport->p2p_remote_port_id,
+						false, false);
+			if (!rnode) {
+				efc_log_err(efc, "node alloc failed\n");
+				return NULL;
+			}
+
+			efc_fabric_notify_topology(node);
+			/* sm: / allocate p2p remote node */
+			efc_node_transition(rnode, __efc_p2p_rnode_init,
+					    NULL);
+		}
+
+		/*
+		 * the transient node (SID=0 or DID=fabric)
+		 * has served its purpose
+		 */
+		if (node->rnode.fc_id == 0) {
+			/*
+			 * if this is the SID=0 node,
+			 * move to the init state in case peer
+			 * has restarted FLOGI discovery and FLOGI is pending
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		} else {
+			/*
+			 * if this is the DID=fabric node
+			 * (we initiated FLOGI), shut it down
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp, NULL);
+		break;
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+
+		/* sm: if p2p_winner / domain_attach */
+		if (node->sport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					NULL);
+			if (!node->sport->domain->attached) {
+				node_printf(node, "Domain not attached\n");
+				efc_domain_attach(node->sport->domain,
+						  node->sport->p2p_port_id);
+			} else {
+				node_printf(node, "Domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/* this node has served its purpose;
+			 * we'll expect a PLOGI on a separate
+			 * node (remote SID=0x1); return this node
+			 * to init state in case peer
+			 * restarts discovery -- it may already
+			 * have (pending frames may exist).
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		}
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/*
+		 * LS_ACC failed, possibly due to link down;
+		 * shutdown node and wait
+		 * for FLOGI discovery to restart
+		 */
+		node_printf(node, "FLOGI LS_ACC failed, shutting down\n");
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_ABTS_RCVD: {
+		/* sm: / send BA_ACC */
+		efc->tt.bls_send_acc_hdr(efc, node,
+					 cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		node_printf(node, "PLOGI failed, shutting down\n");
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* if we're in external loopback mode, just send LS_ACC */
+		if (node->efc->external_loopback) {
+			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
+						be16_to_cpu(hdr->fh_ox_id));
+		} else {
+			/*
+			 * if this isn't external loopback,
+			 * pass to default handler
+			 */
+			__efc_fabric_common(__func__, ctx, evt, arg);
+		}
+		break;
+	}
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* A PLOGI was sent, but a PRLI was received from the remote
+		 * node before the PLOGI completion was seen (WCQEs and RCQEs
+		 * arrive on different queues, so processing order cannot be
+		 * assumed). Save the OX_ID so the PRLI response can be sent
+		 * after the node attach, and keep waiting for the PLOGI
+		 * response.
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				    enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login; we just
+		 * need to wait for the PLOGI response to do the node attach,
+		 * after which we can send the LS_ACC for the PRLI. During this
+		 * time we may receive FCP_CMNDs (possible since we've already
+		 * sent a PRLI that our peer may have accepted), and those will
+		 * be held as well.
+		 * At this time we are not waiting on any other unsolicited
+		 * frames to continue with the login process, so it will not
+		 * hurt to hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return NULL;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node->ls_acc_io,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PLOGI: /* Can't happen in P2P */
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node, __efc_d_port_logged_in,
+					    NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+	case EFC_EVT_PRLI_RCVD:
+		node_printf(node, "%s: PRLI received before node is attached\n",
+			    efc_sm_event_name(evt));
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+int
+efc_p2p_setup(struct efc_sli_port *sport)
+{
+	struct efc *efc = sport->efc;
+	int rnode_winner;
+
+	rnode_winner = efc_rnode_is_winner(sport);
+
+	/* set sport flags to indicate p2p "winner" */
+	if (rnode_winner == 1) {
+		sport->p2p_remote_port_id = 0;
+		sport->p2p_port_id = 0;
+		sport->p2p_winner = false;
+	} else if (rnode_winner == 0) {
+		sport->p2p_remote_port_id = 2;
+		sport->p2p_port_id = 1;
+		sport->p2p_winner = true;
+	} else {
+		/* no winner; only okay if external loopback enabled */
+		if (sport->efc->external_loopback) {
+			/*
+			 * External loopback mode enabled;
+			 * local sport and remote node
+			 * will be registered with an NPortID = 1;
+			 */
+			efc_log_debug(efc,
+				      "External loopback mode enabled\n");
+			sport->p2p_remote_port_id = 1;
+			sport->p2p_port_id = 1;
+			sport->p2p_winner = true;
+		} else {
+			efc_log_warn(efc,
+				     "failed to determine p2p winner\n");
+			return rnode_winner;
+		}
+	}
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/libefc/efc_fabric.h b/drivers/scsi/elx/libefc/efc_fabric.h
new file mode 100644
index 000000000000..9571b4b7b2ce
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declarations for the interface exported by efc_fabric
+ */
+
+#ifndef __EFCT_FABRIC_H__
+#define __EFCT_FABRIC_H__
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+
+void *
+__efc_fabric_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_domain_attach_wait(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg);
+
+void *
+__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_wait_sport_attach(struct efc_sm_ctx *ctx,
+			       enum efc_sm_event evt, void *arg);
+
+void *
+__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				      enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_logo_wait_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event, void *arg);
+void *
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void *
+__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg);
+void *
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void *
+__efc_fabric_idle(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+
+void *
+__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_domain_attach_wait(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				    enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void *
+__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+
+int
+efc_p2p_setup(struct efc_sli_port *sport);
+void
+efc_fabric_set_topology(struct efc_node *node,
+			enum efc_sport_topology topology);
+void efc_fabric_notify_topology(struct efc_node *node);
+
+#endif /* __EFCT_FABRIC_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (12 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 13/31] elx: libefc: Fabric " James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-15 18:56   ` Daniel Wagner
  2020-04-16  6:47   ` Hannes Reinecke
  2020-04-12  3:32 ` [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations James Smart
                   ` (16 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC node PRLI handling and state management (a brief sketch of the state
  handler convention follows below)
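
For reviewers unfamiliar with the libefc node state machine, below is a
minimal sketch of the handler convention these routines follow. It is
illustrative only and not part of the diff; __efc_d_example_state is a
made-up name, and the types and helpers come from the earlier libefc
patches in this series:

  #include "efc.h"
  #include "efc_device.h"

  static void *
  __efc_d_example_state(struct efc_sm_ctx *ctx,
                        enum efc_sm_event evt, void *arg)
  {
          struct efc_node *node = ctx->app;

          switch (evt) {
          case EFC_EVT_ENTER:
                  /* hold unsolicited frames while waiting in this state */
                  efc_node_hold_frames(node);
                  break;
          case EFC_EVT_EXIT:
                  efc_node_accept_frames(node);
                  break;
          case EFC_EVT_NODE_ATTACH_OK:
                  /* event handled here: move the node to the next state */
                  efc_node_transition(node, __efc_d_device_ready, NULL);
                  break;
          default:
                  /* all other events go to the common event handler */
                  __efc_d_common(__func__, ctx, evt, arg);
                  break;
          }
          return NULL;
  }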

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Replace efc_assert with WARN_ON
  Bug Fix: Send LS_RJT for non-FCP PRLIs
---
 drivers/scsi/elx/libefc/efc_device.c | 1672 ++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_device.h |   72 ++
 2 files changed, 1744 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h

diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
new file mode 100644
index 000000000000..e279a6dd19fa
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.c
@@ -0,0 +1,1672 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * device_sm Node State Machine: Remote Device States
+ */
+
+#include "efc.h"
+#include "efc_device.h"
+#include "efc_fabric.h"
+
+void
+efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id)
+{
+	struct efc *efc = node->efc;
+	/* If the back-end doesn't want to talk to this initiator,
+	 * we send an LS_RJT
+	 */
+	if (node->sport->enable_tgt &&
+	    (efc->tt.scsi_validate_node(efc, node) == 0)) {
+		node_printf(node, "PRLI rejected by target-server\n");
+
+		efc->tt.send_ls_rjt(efc, node, ox_id,
+				    ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+	} else {
+		/*
+		 * sm: / process PRLI payload, send PRLI acc
+		 */
+		efc->tt.els_send_resp(efc, node, ELS_PRLI, ox_id);
+
+		/* Immediately go to ready state to avoid window where we're
+		 * waiting for the PRLI LS_ACC to complete while holding
+		 * FCP_CMNDs
+		 */
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+	}
+}
+
+static void *
+__efc_d_common(const char *funcname, struct efc_sm_ctx *ctx,
+	       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+	struct efc *efc = NULL;
+
+	node = ctx->app;
+	efc = node->efc;
+
+	switch (evt) {
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n",
+			      node->display_name, funcname,
+				efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+		break;
+	}
+	return NULL;
+}
+
+/**
+ * State is entered when a node sends a delete initiator/target call to the
+ * target-server/initiator-client and needs to wait for that work to complete.
+ */
+static void *
+__efc_d_wait_del_node(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		/* Fall through */
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		/*
+		 * node has either been detached or is in the process
+		 * of being detached,
+		 * call common node's initiate cleanup function
+		 */
+		efc_node_initiate_cleanup(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+static void *
+__efc_d_wait_del_ini_tgt(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		/* Fall through */
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		efc_node_transition(node, __efc_d_wait_del_node, NULL);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* assume no wait needed */
+		int rc = EFC_SCSI_CALL_COMPLETE;
+
+		efc->tt.scsi_io_alloc_disable(efc, node);
+
+		/* make necessary delete upcall(s) */
+		if (node->init && !node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->sport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+
+		} else if (node->targ && !node->init) {
+			efc_log_info(node->efc,
+				     "[%s] delete (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->sport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+
+		} else if (node->init && node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (I+T) WWPN %s WWNN %s\n",
+				node->display_name, node->wwpn, node->wwnn);
+			efc_node_transition(node, __efc_d_wait_del_ini_tgt,
+					    NULL);
+			if (node->sport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+			/* assume no wait needed */
+			rc = EFC_SCSI_CALL_COMPLETE;
+			if (node->sport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+		}
+
+		/* we've initiated the upcalls as needed, now kick off the node
+		 * detach to precipitate the aborting of outstanding exchanges
+		 * associated with said node
+		 *
+		 * Beware: if we've made upcall(s), we've already transitioned
+		 * to a new state by the time we execute this.
+		 * consider doing this before the upcalls?
+		 */
+		if (node->attached) {
+			/* issue hw node free; don't care if succeeds right
+			 * away or sometime later, will check node->attached
+			 * later in shutdown process
+			 */
+			rc = efc->tt.hw_node_detach(efc, &node->rnode);
+			if (rc != EFC_HW_RTN_SUCCESS &&
+			    rc != EFC_HW_RTN_SUCCESS_SYNC)
+				node_printf(node,
+					    "Failed freeing HW node, rc=%d\n",
+					rc);
+		}
+
+		/* if neither initiator nor target, proceed to cleanup */
+		if (!node->init && !node->targ) {
+			/*
+			 * node has either been detached or is in
+			 * the process of being detached,
+			 * call common node's initiate cleanup function
+			 */
+			efc_node_initiate_cleanup(node);
+		}
+		break;
+	}
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* Ignore, this can happen if an ELS is
+		 * aborted while in a delay/retry state
+		 */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void *
+__efc_d_wait_loop(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		/* send PLOGI automatically if initiator */
+		efc_node_init_device(node, true);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/* Save the OX_ID for sending LS_ACC sometime later */
+void
+efc_send_ls_acc_after_attach(struct efc_node *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc ls)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+
+	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE);
+
+	node->ls_acc_oxid = ox_id;
+	node->send_ls_acc = ls;
+	node->ls_acc_did = ntoh24(hdr->fh_d_id);
+}
+
+void
+efc_process_prli_payload(struct efc_node *node, void *prli)
+{
+	struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
+
+	node->init = (sp->spp_flags & FCP_SPPF_INIT_FCN) != 0;
+	node->targ = (sp->spp_flags & FCP_SPPF_TARG_FCN) != 0;
+}
+
+void *
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:	/* PLOGI ACC completions */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_d_port_logged_in, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* LOGO response received, send shutdown */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_LOGO,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		node_printf(node,
+			    "LOGO sent (evt=%s), shutdown node\n",
+			efc_sm_event_name(evt));
+		/* sm: / post explicit logout */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				    NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void
+efc_node_init_device(struct efc_node *node, bool send_plogi)
+{
+	node->send_plogi = send_plogi;
+	if ((node->efc->nodedb_mask & EFC_NODEDB_PAUSE_NEW_NODES) &&
+	    (node->rnode.fc_id != FC_FID_DOM_MGR)) {
+		node->nodedb_state = __efc_d_init;
+		efc_node_transition(node, __efc_node_paused, NULL);
+	} else {
+		efc_node_transition(node, __efc_d_init, NULL);
+	}
+}
+
+/**
+ * Device node state machine: Initial node state for an initiator or
+ * a target.
+ *
+ * This state is entered when a node is instantiated, either having been
+ * discovered from a name services query, or having received a PLOGI/FLOGI.
+ */
+void *
+__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->send_plogi)
+			break;
+		/* only send if we have initiator capability,
+		 * and domain is attached
+		 */
+		if (node->sport->enable_ini &&
+		    node->sport->domain->attached) {
+			efc->tt.els_send(efc, node, ELS_PLOGI,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+
+			efc_node_transition(node, __efc_d_wait_plogi_rsp, NULL);
+		} else {
+			node_printf(node,
+				    "not sending plogi sport.ini=%d, domain attached=%d\n",
+				    node->sport->enable_ini,
+				    node->sport->domain->attached);
+		}
+		break;
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* domain already attached */
+		if (node->sport->domain->attached) {
+			rc = efc_node_attach(node);
+			efc_node_transition(node,
+					    __efc_d_wait_node_attach, NULL);
+			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
+				efc_node_post_event(node,
+						    EFC_EVT_NODE_ATTACH_OK,
+						    NULL);
+			}
+			break;
+		}
+
+		/* domain not attached; several possibilities: */
+		switch (node->sport->topology) {
+		case EFC_SPORT_TOPOLOGY_P2P:
+			/* we're not attached and sport is p2p,
+			 * need to attach
+			 */
+			efc_domain_attach(node->sport->domain, d_id);
+			efc_node_transition(node,
+					    __efc_d_wait_domain_attach,
+					    NULL);
+			break;
+		case EFC_SPORT_TOPOLOGY_FABRIC:
+			/* we're not attached and sport is fabric, domain
+			 * attach should have already been requested as part
+			 * of the fabric state machine, wait for it
+			 */
+			efc_node_transition(node, __efc_d_wait_domain_attach,
+					    NULL);
+			break;
+		case EFC_SPORT_TOPOLOGY_UNKNOWN:
+			/* Two possibilities:
+			 * 1. received a PLOGI before our FLOGI has completed
+			 *    (possible since completion comes in on another
+			 *    CQ), thus we don't know what we're connected to
+			 *    yet; transition to a state to wait for the
+			 *    fabric node to tell us;
+			 * 2. PLOGI received before link went down and we
+			 * haven't performed domain attach yet.
+			 * Note: we cannot distinguish between 1. and 2.
+			 * so have to assume PLOGI
+			 * was received after link back up.
+			 */
+			node_printf(node,
+				    "received PLOGI, unknown topology did=0x%x\n",
+				d_id);
+			efc_node_transition(node,
+					    __efc_d_wait_topology_notify,
+					    NULL);
+			break;
+		default:
+			node_printf(node,
+				    "received PLOGI, with unexpected topology %d\n",
+				node->sport->topology);
+			break;
+		}
+		break;
+	}
+
+	case EFC_EVT_FDISC_RCVD: {
+		__efc_d_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	case EFC_EVT_FLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		/* sm: / save sparams, send FLOGI acc */
+		memcpy(node->sport->domain->flogi_service_params,
+		       cbdata->payload->dma.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* send FC LS_ACC response, override s_id */
+		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
+		efc->tt.send_flogi_p2p_acc(efc, node,
+				be16_to_cpu(hdr->fh_ox_id), d_id);
+		if (efc_p2p_setup(node->sport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		} else {
+			efc_node_transition(node,
+					    __efc_p2p_wait_flogi_acc_cmpl,
+					    NULL);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and shut node down w/ "explicit logout"
+			 * so pending frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* note: problem, we're now expecting an ELS REQ completion
+		 * from both the LOGO and PLOGI
+		 */
+		if (!node->sport->domain->attached) {
+			/* most likely a frame left over from before a
+			 * link down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+
+		/* Send LOGO */
+		node_printf(node, "FCP_CMND received, send LOGO\n");
+		if (efc->tt.els_send(efc, node, ELS_LOGO,
+				     EFC_FC_FLOGI_TIMEOUT_SEC,
+			EFC_FC_ELS_DEFAULT_RETRIES) == NULL) {
+			/*
+			 * failed to send LOGO, go ahead and clean up the
+			 * node anyway
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		} else {
+			/* sent LOGO, wait for response */
+			efc_node_transition(node,
+					    __efc_d_wait_logo_rsp, NULL);
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		/* received PLOGI with svc parms; go ahead and attach the
+		 * node. When the PLOGI that was sent ultimately completes,
+		 * it'll be a no-op.
+		 *
+		 * If there is an outstanding PLOGI sent, can we set a flag
+		 * to indicate that we don't want to retry it if it times out?
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+		/* sm: domain->attached / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* sent PLOGI and before completion was seen, received the
+		 * PRLI from the remote node (WCQEs and RCQEs come in on
+		 * different queues and order of processing cannot be assumed)
+		 * Save OXID so PRLI can be sent after the attach and continue
+		 * to wait for PLOGI response
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_d_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: /* why don't we do a shutdown here?? */
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+
+		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
+				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* Our PLOGI was rejected, this is ok in some cases */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* not logged in yet and outstanding PLOGI so don't send LOGO,
+		 * just drop
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				  enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login and just
+		 * need to wait for the PLOGI response to do the node attach;
+		 * then we can send the LS_ACC for the PRLI. During this time
+		 * we may receive FCP_CMNDs (possible since we've already sent
+		 * a PRLI and our peer may have accepted it). We are not
+		 * waiting on any other unsolicited frames to continue with
+		 * the login process, so it will not hurt to hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		WARN_ON(!node->sport->domain->attached);
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void *
+__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY: {
+		enum efc_sport_topology topology =
+					(enum efc_sport_topology)arg;
+
+		WARN_ON(node->sport->domain->attached);
+
+		WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		node_printf(node, "topology notification, topology=%d\n",
+			    topology);
+
+		/* At the time the PLOGI was received, the topology was unknown,
+		 * so we didn't know which node would perform the domain attach:
+		 * 1. The node from which the PLOGI was sent (p2p) or
+		 * 2. The node to which the FLOGI was sent (fabric).
+		 */
+		if (topology == EFC_SPORT_TOPOLOGY_P2P) {
+			/* if this is p2p, need to attach to the domain using
+			 * the d_id from the PLOGI received
+			 */
+			efc_domain_attach(node->sport->domain,
+					  node->ls_acc_did);
+		}
+		/* else, if this is fabric, the domain attach
+		 * should be performed by the fabric node (node sending FLOGI);
+		 * just wait for attach to complete
+		 */
+
+		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		WARN_ON(!node->sport->domain->attached);
+		node_printf(node, "domain attach ok\n");
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+	return NULL;
+}
+
+void *
+__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PLOGI: {
+			/* sm: send_plogi_acc is set / send PLOGI acc */
+			/* Normal case for T, or I+T */
+			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
+							node->ls_acc_oxid);
+			efc_node_transition(node,
+					    __efc_d_wait_plogi_acc_cmpl,
+					     NULL);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node,
+					    __efc_d_port_logged_in, NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node,
+				    __efc_d_wait_attach_evt_shutdown, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		/* fall through */
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* Normal case for I or I+T */
+		if (node->sport->enable_ini &&
+		    node->rnode.fc_id == FC_FID_DOM_MGR) {
+			/* sm: if enable_ini / send PRLI */
+			efc->tt.els_send(efc, node, ELS_PRLI,
+				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+			/* can now expect ELS_REQ_OK/FAIL/RJT */
+		}
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		/* Normal case for T or I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		struct fc_els_spp *sp = cbdata->payload->dma.virt
+					+ sizeof(struct fc_els_prli);
+
+		if (sp->spp_type != FC_TYPE_FCP) {
+			/* Only FCP is supported */
+			efc->tt.send_ls_rjt(efc, node,
+					be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
+			break;
+		}
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_d_send_prli_rsp(node, be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PRLI response */
+		/* Normal case for I or I+T */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / process PRLI payload */
+		efc_process_prli_payload(node, cbdata->els_rsp.virt);
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {	/* PRLI response failed */
+		/* I, I+T, assume some link failure, shutdown node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT: {
+		/* PRLI rejected by remote
+		 * Normal for I, I+T (connected to an I)
+		 * Node doesn't want to be a target, stay here and wait for a
+		 * PRLI from the remote node
+		 * if it really wants to connect to us as target
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK: {
+		/* Normal T, I+T, target-server rejected the process login */
+		/* This would be received only in the case where we sent
+		 * LS_RJT for the PRLI, so do nothing.
+		 * (note: as T only we could shut down the node)
+		 */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc,
+		 * post implicit logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt),
+					node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* sm: / post explicit logout */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_device_ready(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	if (evt != EFC_EVT_FCP_CMD_RCVD)
+		node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node->fcp_enabled = true;
+		if (node->init) {
+			efc_log_info(efc,
+				     "[%s] found (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->sport->enable_tgt)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		if (node->targ) {
+			efc_log_info(efc,
+				     "[%s] found (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->sport->enable_ini)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		break;
+
+	case EFC_EVT_EXIT:
+		node->fcp_enabled = false;
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		/* T, I+T: remote initiator is slow to get started */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		struct fc_els_spp *sp = cbdata->payload->dma.virt
+					+ sizeof(struct fc_els_prli);
+
+		if (sp->spp_type != FC_TYPE_FCP) {
+			/* Only FCP is supported */
+			efc->tt.send_ls_rjt(efc, node,
+					be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
+			break;
+		}
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+
+		efc->tt.els_send_resp(efc, node, ELS_PRLI,
+					be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_PRLO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send PRLO acc */
+		efc->tt.els_send_resp(efc, node, ELS_PRLO,
+					be16_to_cpu(hdr->fh_ox_id));
+		/* need implicit logout? */
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_ADISC_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send ADISC acc */
+		efc->tt.els_send_resp(efc, node, ELS_ADISC,
+					be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: / process ABTS */
+		efc_log_err(efc, "Unexpected event:%s\n",
+					efc_sm_event_name(evt));
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		break;
+
+	case EFC_EVT_NODE_MISSING:
+		if (node->sport->enable_rscn)
+			efc_node_transition(node, __efc_d_device_gone, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		/* T, or I+T, PRLI accept completed ok */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* T, or I+T, PRLI accept failed to complete */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		node_printf(node, "Failed to send PRLI LS_ACC\n");
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_device_gone(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	int rc = EFC_SCSI_CALL_COMPLETE;
+	int rc_2 = EFC_SCSI_CALL_COMPLETE;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		static char const *labels[] = {"none", "initiator", "target",
+							"initiator+target"};
+
+		efc_log_info(efc, "[%s] missing (%s)    WWPN %s WWNN %s\n",
+			     node->display_name,
+				labels[(node->targ << 1) | (node->init)],
+						node->wwpn, node->wwnn);
+
+		switch (efc_node_get_enable(node)) {
+		case EFC_NODE_ENABLE_T_TO_T:
+		case EFC_NODE_ENABLE_I_TO_T:
+		case EFC_NODE_ENABLE_IT_TO_T:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_I:
+		case EFC_NODE_ENABLE_I_TO_I:
+		case EFC_NODE_ENABLE_IT_TO_I:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_I_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+						  EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_IT_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			rc_2 = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		default:
+			rc = EFC_SCSI_CALL_COMPLETE;
+			break;
+		}
+
+		if (rc == EFC_SCSI_CALL_COMPLETE &&
+		    rc_2 == EFC_SCSI_CALL_COMPLETE)
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+
+		break;
+	}
+	case EFC_EVT_NODE_REFOUND:
+		/* two approaches, reauthenticate with PLOGI/PRLI, or ADISC */
+
+		/* reauthenticate with PLOGI/PRLI */
+		/* efc_node_transition(node, __efc_d_discovered, NULL); */
+
+		/* reauthenticate with ADISC */
+		/* sm: / send ADISC */
+		efc->tt.els_send(efc, node, ELS_ADISC,
+				EFC_FC_FLOGI_TIMEOUT_SEC,
+				EFC_FC_ELS_DEFAULT_RETRIES);
+		efc_node_transition(node, __efc_d_wait_adisc_rsp, NULL);
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters, and send
+		 * ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* most likely a stale frame (received prior to link down);
+		 * if we attempt to send a LOGO it will probably time out and
+		 * eat up 20s, so drop the FCP_CMND
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+void *
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* received an LS_RJT, in this case, send shutdown
+		 * (explicit logo) event which will unregister the node,
+		 * and start over with PLOGI
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return NULL;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / post explicit logout */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				     NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* In this case, we have the equivalent of an LS_RJT for
+		 * the ADISC, so we need to abort the ADISC, and re-login
+		 * with PLOGI
+		 */
+		/* sm: / request abort, send LOGO acc */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+
+		efc->tt.els_send_resp(efc, node, ELS_LOGO,
+					be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+		return NULL;
+	}
+
+	return NULL;
+}
diff --git a/drivers/scsi/elx/libefc/efc_device.h b/drivers/scsi/elx/libefc/efc_device.h
new file mode 100644
index 000000000000..513096b8f875
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Node state machine functions for remote device node sm
+ */
+
+#ifndef __EFCT_DEVICE_H__
+#define __EFCT_DEVICE_H__
+extern void
+efc_node_init_device(struct efc_node *node, bool send_plogi);
+extern void
+efc_process_prli_payload(struct efc_node *node,
+			 void *prli);
+extern void
+efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id);
+extern void
+efc_send_ls_acc_after_attach(struct efc_node *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc ls);
+extern void *
+__efc_d_wait_loop(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_device_ready(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_device_gone(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+extern void *
+__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg);
+
+#endif /* __EFCT_DEVICE_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (13 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 14/31] elx: libefc: FC node ELS and state handling James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  6:51   ` Hannes Reinecke
  2020-04-16  7:22   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 16/31] elx: efct: Driver initialization routines James Smart
                   ` (15 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch starts the population of the efct target mode
driver.  The driver is contained in the drivers/scsi/elx/efct
subdirectory.

This patch creates the efct directory and starts population of
the driver by adding SLI-4 configuration parameters, data structures
for configuring SLI-4 queues, converting OS-level IO requests to SLI-4
IO requests, and handling async events.
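
As a quick orientation, the header also establishes the return code
convention used throughout the hw layer. The fragment below is a made-up
caller (illustrative only, not part of the diff) showing how the
efct_hw_rtn values and the EFCT_HW_RTN_IS_ERROR() macro added by this
patch are intended to be used:

  #include "efct_hw.h"

  /* hypothetical caller, for illustration only */
  static int efct_hw_example_check(enum efct_hw_rtn rc)
  {
          if (EFCT_HW_RTN_IS_ERROR(rc))
                  return -1;     /* all error codes are negative */

          if (rc == EFCT_HW_RTN_SUCCESS_SYNC)
                  return 1;      /* request completed synchronously */

          return 0;              /* EFCT_HW_RTN_SUCCESS: async completion follows */
  }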

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Changed anonymous enums to named.
  Removed some structures and defines which are not used.
  Reworked on efct_hw_io_param struct which can be used for holding
    params in WQE submission.
---
 drivers/scsi/elx/efct/efct_hw.h | 617 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 617 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h

diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
new file mode 100644
index 000000000000..b3d4d4bc8d8c
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -0,0 +1,617 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef _EFCT_HW_H
+#define _EFCT_HW_H
+
+#include "../libefc_sli/sli4.h"
+
+/*
+ * EFCT PCI IDs
+ */
+#define EFCT_VENDOR_ID			0x10df
+/* LightPulse 16Gb x 4 FC (lancer-g6) */
+#define EFCT_DEVICE_LANCER_G6		0xe307
+/* LightPulse 32Gb x 4 FC (lancer-g7) */
+#define EFCT_DEVICE_LANCER_G7		0xf407
+
+/* Number of RQ entries used by the driver (min/default/max) */
+#define EFCT_HW_RQ_ENTRIES_MIN		512
+#define EFCT_HW_RQ_ENTRIES_DEF		1024
+#define EFCT_HW_RQ_ENTRIES_MAX		4096
+
+/* Size of the RQ buffers used for each RQ */
+#define EFCT_HW_RQ_SIZE_HDR             128
+#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
+
+/* Maximum number of multi-receive queues */
+#define EFCT_HW_MAX_MRQS		8
+
+/*
+ * Count of submitted WQEs after which the WQEC bit is set in a
+ * WQE, causing a consumed/released completion to be posted.
+ */
+#define EFCT_HW_WQEC_SET_COUNT		32
+
+/* Send frame timeout in seconds */
+#define EFCT_HW_SEND_FRAME_TIMEOUT	10
+
+/*
+ * FDT Transfer Hint value, reads greater than this value
+ * will be segmented to implement fairness. A value of zero disables
+ * the feature.
+ */
+#define EFCT_HW_FDT_XFER_HINT		8192
+
+#define EFCT_HW_TIMECHECK_ITERATIONS	100
+#define EFCT_HW_MAX_NUM_MQ		1
+#define EFCT_HW_MAX_NUM_RQ		32
+#define EFCT_HW_MAX_NUM_EQ		16
+#define EFCT_HW_MAX_NUM_WQ		32
+#define EFCT_HW_DEF_NUM_EQ		1
+
+#define OCE_HW_MAX_NUM_MRQ_PAIRS	16
+
+#define EFCT_HW_MQ_DEPTH		128
+#define EFCT_HW_EQ_DEPTH		1024
+
+/*
+ * A CQ will be assigned to each WQ
+ * (the CQ must have 2X the entries of the WQ for abort
+ * processing), plus a separate one for each RQ pair and one for the MQ
+ */
+#define EFCT_HW_MAX_NUM_CQ \
+	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
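+
+/*
+ * For illustration only: with the maximums above this evaluates to
+ * (32 * 2) + 1 + (16 * 2) = 97 CQs.
+ */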
+
+#define EFCT_HW_Q_HASH_SIZE		128
+#define EFCT_HW_RQ_HEADER_SIZE		128
+#define EFCT_HW_RQ_HEADER_INDEX		0
+
+#define EFCT_HW_REQUE_XRI_REGTAG	65534
+
+/* Options for efct_hw_command() */
+enum efct_cmd_opts {
+	/* command executes synchronously and busy-waits for completion */
+	EFCT_CMD_POLL,
+	/* command executes asynchronously. Uses callback */
+	EFCT_CMD_NOWAIT,
+};
+
+enum efct_hw_rtn {
+	EFCT_HW_RTN_SUCCESS = 0,
+	EFCT_HW_RTN_SUCCESS_SYNC = 1,
+	EFCT_HW_RTN_ERROR = -1,
+	EFCT_HW_RTN_NO_RESOURCES = -2,
+	EFCT_HW_RTN_NO_MEMORY = -3,
+	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFCT_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
+
+enum efct_hw_reset {
+	EFCT_HW_RESET_FUNCTION,
+	EFCT_HW_RESET_FIRMWARE,
+	EFCT_HW_RESET_MAX
+};
+
+enum efct_hw_topo {
+	EFCT_HW_TOPOLOGY_AUTO,
+	EFCT_HW_TOPOLOGY_NPORT,
+	EFCT_HW_TOPOLOGY_LOOP,
+	EFCT_HW_TOPOLOGY_NONE,
+	EFCT_HW_TOPOLOGY_MAX
+};
+
+/* pack fw revision values into a single uint64_t */
+#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32) \
+			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
+
+#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
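+
+/*
+ * Example with made-up revision values: HW_FWREV(12, 8, 340, 7) packs to
+ * 0x000c000801540007, and EFCT_FW_VER_STR(12, 8, 340, 7) yields "12.8.340.7".
+ */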
+
+enum efct_hw_io_type {
+	EFCT_HW_ELS_REQ,
+	EFCT_HW_ELS_RSP,
+	EFCT_HW_ELS_RSP_SID,
+	EFCT_HW_FC_CT,
+	EFCT_HW_FC_CT_RSP,
+	EFCT_HW_BLS_ACC,
+	EFCT_HW_BLS_ACC_SID,
+	EFCT_HW_BLS_RJT,
+	EFCT_HW_IO_TARGET_READ,
+	EFCT_HW_IO_TARGET_WRITE,
+	EFCT_HW_IO_TARGET_RSP,
+	EFCT_HW_IO_DNRX_REQUEUE,
+	EFCT_HW_IO_MAX,
+};
+
+enum efct_hw_io_state {
+	EFCT_HW_IO_STATE_FREE,
+	EFCT_HW_IO_STATE_INUSE,
+	EFCT_HW_IO_STATE_WAIT_FREE,
+	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
+};
+
+struct efct_hw;
+
+/**
+ * HW command context.
+ * Stores the state for the asynchronous commands sent to the hardware.
+ */
+struct efct_command_ctx {
+	struct list_head	list_entry;
+	int (*cb)(struct efct_hw *hw, int status, u8 *mqe, void *arg);
+	void			*arg;	/* Argument for callback */
+	u8			*buf;	/* buffer holding command / results */
+	void			*ctx;	/* upper layer context */
+};
+
+struct efct_hw_sgl {
+	uintptr_t		addr;
+	size_t			len;
+};
+
+union efct_hw_io_param_u {
+	struct sli_bls_params bls;
+	struct sli_els_params els;
+	struct sli_ct_params fc_ct;
+	struct sli_fcp_tgt_params fcp_tgt;
+};
+
+/* WQ steering mode */
+enum efct_hw_wq_steering {
+	EFCT_HW_WQ_STEERING_CLASS,
+	EFCT_HW_WQ_STEERING_REQUEST,
+	EFCT_HW_WQ_STEERING_CPU,
+};
+
+/* HW wqe object */
+struct efct_hw_wqe {
+	struct list_head	list_entry;
+	bool			abort_wqe_submit_needed;
+	bool			send_abts;
+	u32			id;
+	u32			abort_reqtag;
+	u8			*wqebuf;
+};
+
+/**
+ * HW IO object.
+ *
+ * Stores the per-IO information necessary
+ * for both the lower (SLI) and upper
+ * layers (efct).
+ */
+struct efct_hw_io {
+	/* Owned by HW */
+
+	/* reference counter and callback function */
+	struct kref		ref;
+	void (*release)(struct kref *arg);
+	/* used for busy, wait_free, free lists */
+	struct list_head	list_entry;
+	/* used for timed_wqe list */
+	struct list_head	wqe_link;
+	/* used for io posted dnrx list */
+	struct list_head	dnrx_link;
+	/* state of IO: free, busy, wait_free */
+	enum efct_hw_io_state	state;
+	/* Work queue object, with link for pending */
+	struct efct_hw_wqe	wqe;
+	/* pointer back to hardware context */
+	struct efct_hw		*hw;
+	struct efc_remote_node	*rnode;
+	struct efc_dma		xfer_rdy;
+	u16	type;
+	/* WQ assigned to the exchange */
+	struct hw_wq		*wq;
+	/* Exchange is active in FW */
+	bool			xbusy;
+	/* Function called on IO completion */
+	int
+	(*done)(struct efct_hw_io *hio,
+		struct efc_remote_node *rnode,
+			u32 len, int status,
+			u32 ext, void *ul_arg);
+	/* argument passed to "IO done" callback */
+	void			*arg;
+	/* Function called on abort completion */
+	int
+	(*abort_done)(struct efct_hw_io *hio,
+		      struct efc_remote_node *rnode,
+			u32 len, int status,
+			u32 ext, void *ul_arg);
+	/* argument passed to "abort done" callback */
+	void			*abort_arg;
+	/* needed for bug O127585: length of IO */
+	size_t			length;
+	/* timeout value for target WQEs */
+	u8			tgt_wqe_timeout;
+	/* timestamp when current WQE was submitted */
+	u64			submit_ticks;
+
+	/* if TRUE, latched status should be returned */
+	bool			status_saved;
+	/* if TRUE, abort is in progress */
+	bool			abort_in_progress;
+	u32			saved_status;
+	u32			saved_len;
+	u32			saved_ext;
+
+	/* EQ that this HIO came up on */
+	struct hw_eq		*eq;
+	/* WQ steering mode request */
+	enum efct_hw_wq_steering wq_steering;
+	/* WQ class if steering mode is Class */
+	u8			wq_class;
+
+	/* request tag for this HW IO */
+	u16			reqtag;
+	/* request tag for an abort of this HW IO
+	 * (note: this is a 32-bit value
+	 * to allow us to use U32_MAX as an uninitialized value)
+	 */
+	u32			abort_reqtag;
+	u32			indicator;	/* XRI */
+	struct efc_dma		def_sgl;	/* default SGL*/
+	/* Count of SGEs in default SGL */
+	u32			def_sgl_count;
+	/* pointer to current active SGL */
+	struct efc_dma		*sgl;
+	u32			sgl_count;	/* count of SGEs in io->sgl */
+	u32			first_data_sge;	/* index of first data SGE */
+	struct efc_dma		*ovfl_sgl;	/* overflow SGL */
+	u32			ovfl_sgl_count;
+	 /* pointer to overflow segment len */
+	struct sli4_lsp_sge	*ovfl_lsp;
+	u32			n_sge;		/* number of active SGEs */
+	u32			sge_offset;
+
+	/* where upper layer can store ref to its IO */
+	void			*ul_io;
+};
+
+/* Typedef for HW "done" callback */
+typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
+			      u32 len, int status, u32 ext, void *ul_arg);
+
+enum efct_hw_port {
+	EFCT_HW_PORT_INIT,
+	EFCT_HW_PORT_SHUTDOWN,
+};
+
+/* Node group rpi reference */
+struct efct_hw_rpi_ref {
+	atomic_t rpi_count;
+	atomic_t rpi_attached;
+};
+
+enum efct_hw_link_stat {
+	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
+	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
+	EFCT_HW_LINK_STAT_CRC_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
+	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
+	EFCT_HW_LINK_STAT_MAX,
+};
+
+enum efct_hw_host_stat {
+	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
+	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
+	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
+	EFCT_HW_HOST_STAT_MAX,
+};
+
+enum efct_hw_state {
+	EFCT_HW_STATE_UNINITIALIZED,
+	EFCT_HW_STATE_QUEUES_ALLOCATED,
+	EFCT_HW_STATE_ACTIVE,
+	EFCT_HW_STATE_RESET_IN_PROGRESS,
+	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,
+};
+
+struct efct_hw_link_stat_counts {
+	u8		overflow;
+	u32		counter;
+};
+
+struct efct_hw_host_stat_counts {
+	u32		counter;
+};
+
+/* Structure used for the hash lookup of queue IDs */
+struct efct_queue_hash {
+	bool		in_use;
+	u16		id;
+	u16		index;
+};
+
+/* WQ callback object */
+struct hw_wq_callback {
+	u16		instance_index;	/* use for request tag */
+	void (*callback)(void *arg, u8 *cqe, int status);
+	void		*arg;
+	struct list_head list_entry;
+};
+
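+/*
+ * Pool of pre-allocated request tags; maps the 16-bit request tag carried in
+ * a WQE back to the callback registered for that outstanding work request.
+ */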
+struct reqtag_pool {
+	spinlock_t lock;	/* pool lock */
+	struct hw_wq_callback *tags[U16_MAX];
+	struct list_head freelist;
+};
+
+struct efct_hw_config {
+	u32		n_eq;
+	u32		n_cq;
+	u32		n_mq;
+	u32		n_rq;
+	u32		n_wq;
+	u32		n_io;
+	u32		n_sgl;
+	u32		speed;
+	u32		topology;
+	/* size of the buffers for first burst */
+	u32		rq_default_buffer_size;
+	u8		esoc;
+	/* MRQ RQ selection policy */
+	u8		rq_selection_policy;
+	/* RQ quanta if rq_selection_policy == 2 */
+	u8		rr_quanta;
+	u32		filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+};
+
+struct efct_hw {
+	struct efct		*os;
+	struct sli4		sli;
+	u16			ulp_start;
+	u16			ulp_max;
+	u32			dump_size;
+	enum efct_hw_state	state;
+	bool			hw_setup_called;
+	u8			sliport_healthcheck;
+
+	/* HW configuration */
+	struct efct_hw_config	config;
+
+	/* calculated queue sizes for each type */
+	u32			num_qentries[SLI_QTYPE_MAX];
+
+	/* Storage for SLI queue objects */
+	struct sli4_queue	wq[EFCT_HW_MAX_NUM_WQ];
+	struct sli4_queue	rq[EFCT_HW_MAX_NUM_RQ];
+	u16			hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
+	struct sli4_queue	mq[EFCT_HW_MAX_NUM_MQ];
+	struct sli4_queue	cq[EFCT_HW_MAX_NUM_CQ];
+	struct sli4_queue	eq[EFCT_HW_MAX_NUM_EQ];
+
+	/* HW queue */
+	u32			eq_count;
+	u32			cq_count;
+	u32			mq_count;
+	u32			wq_count;
+	u32			rq_count;
+	struct list_head	eq_list;
+
+	struct efct_queue_hash	cq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash	rq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash	wq_hash[EFCT_HW_Q_HASH_SIZE];
+
+	/* Storage for HW queue objects */
+	struct hw_wq		*hw_wq[EFCT_HW_MAX_NUM_WQ];
+	struct hw_rq		*hw_rq[EFCT_HW_MAX_NUM_RQ];
+	struct hw_mq		*hw_mq[EFCT_HW_MAX_NUM_MQ];
+	struct hw_cq		*hw_cq[EFCT_HW_MAX_NUM_CQ];
+	struct hw_eq		*hw_eq[EFCT_HW_MAX_NUM_EQ];
+	/* count of hw_rq[] entries */
+	u32			hw_rq_count;
+	/* count of multirq RQs */
+	u32			hw_mrq_count;
+
+	/* Sequence objects used in incoming frame processing */
+	void			*seq_pool;
+
+	/* Maintain an ordered, linked list of outstanding HW commands. */
+	spinlock_t		cmd_lock;
+	struct list_head	cmd_head;
+	struct list_head	cmd_pending;
+	u32			cmd_head_count;
+
+	struct sli4_link_event	link;
+	struct efc_domain	*domain;
+
+	u16			fcf_indicator;
+
+	/* pointer array of IO objects */
+	struct efct_hw_io	**io;
+	/* array of WQE buffs mapped to IO objects */
+	u8			*wqe_buffs;
+
+	/* IO lock to synchronize list access */
+	spinlock_t		io_lock;
+	/* IO lock to synchronize IO aborting */
+	spinlock_t		io_abort_lock;
+	/* List of IO objects in use */
+	struct list_head	io_inuse;
+	/* List of IO objects waiting to be freed */
+	struct list_head	io_wait_free;
+	/* List of IO objects available for allocation */
+	struct list_head	io_free;
+
+	struct efc_dma		loop_map;
+
+	struct efc_dma		xfer_rdy;
+
+	struct efc_dma		dump_sges;
+
+	struct efc_dma		rnode_mem;
+
+	struct efct_hw_rpi_ref	*rpi_ref;
+
+	atomic_t		io_alloc_failed_count;
+
+	/* stat: wq submit count */
+	u32			tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
+	/* stat: wq complete count */
+	u32			tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
+
+	struct reqtag_pool	*wq_reqtag_pool;
+	atomic_t		send_frame_seq_id;
+};
+
+enum efct_hw_io_count_type {
+	EFCT_HW_IO_INUSE_COUNT,
+	EFCT_HW_IO_FREE_COUNT,
+	EFCT_HW_IO_WAIT_FREE_COUNT,
+	EFCT_HW_IO_N_TOTAL_IO_COUNT,
+};
+
+/* HW queue data structures */
+struct hw_eq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	u32			entry_count;
+	u32			entry_size;
+	struct efct_hw		*hw;
+	struct sli4_queue	*queue;
+	struct list_head	cq_list;
+	u32			use_count;
+};
+
+struct hw_cq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_eq		*eq;
+	struct sli4_queue	*queue;
+	struct list_head	q_list;
+	u32			use_count;
+};
+
+struct hw_q {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+};
+
+struct hw_mq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_cq		*cq;
+	struct sli4_queue	*queue;
+
+	u32			use_count;
+};
+
+struct hw_wq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	struct efct_hw		*hw;
+
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_cq		*cq;
+	struct sli4_queue	*queue;
+	u32			class;
+
+	/* WQ consumed */
+	u32			wqec_set_count;
+	u32			wqec_count;
+	u32			free_count;
+	u32			total_submit_count;
+	struct list_head	pending_list;
+
+	/* HW IO allocated for use with Send Frame */
+	struct efct_hw_io	*send_frame_io;
+
+	/* Stats */
+	u32			use_count;
+	u32			wq_pending_count;
+};
+
+struct hw_rq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+
+	u32			entry_count;
+	u32			use_count;
+	u32			hdr_entry_size;
+	u32			first_burst_entry_size;
+	u32			data_entry_size;
+	bool			is_mrq;
+	u32			base_mrq_id;
+
+	struct hw_cq		*cq;
+
+	u8			filter_mask;
+	struct sli4_queue	*hdr;
+	struct sli4_queue	*first_burst;
+	struct sli4_queue	*data;
+
+	struct efc_hw_rq_buffer	*hdr_buf;
+	struct efc_hw_rq_buffer	*fb_buf;
+	struct efc_hw_rq_buffer	*payload_buf;
+	/* RQ tracker for this RQ */
+	struct efc_hw_sequence	**rq_tracker;
+};
+
+struct efct_hw_send_frame_context {
+	struct efct_hw		*hw;
+	struct hw_wq_callback	*wqcb;
+	struct efct_hw_wqe	wqe;
+	void (*callback)(int status, void *arg);
+	void			*arg;
+
+	/* General purpose elements */
+	struct efc_hw_sequence	*seq;
+	struct efc_dma		payload;
+};
+
+struct efct_hw_grp_hdr {
+	u32			size;
+	__be32			magic_number;
+	u32			word2;
+	u8			rev_name[128];
+	u8			date[12];
+	u8			revision[32];
+};
+
+#endif /* __EFCT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 16/31] elx: efct: Driver initialization routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (14 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  7:11   ` Hannes Reinecke
  2020-04-16  8:03   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion James Smart
                   ` (14 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Emulex FC Target driver init, attach and hardware setup routines.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Removed Queue topology string.
  Used request_threaded_irq instead of a thread.
  Use a static function to get the model.
  Reworked efct_device_attach to use if statements and gotos.
  Changed efct_fw_reset, removed accessing other functions.
  Converted to use pci_alloc_irq_vectors api.
  Removed proc interface.
  Removed efct_hw_get and efct_hw_set functions. Driver implicitly
    knows adapter configuration.
  Many more small changes.
---
 drivers/scsi/elx/efct/efct_driver.c |  856 +++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_driver.h |  142 +++++
 drivers/scsi/elx/efct/efct_hw.c     | 1116 +++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h     |   15 +
 drivers/scsi/elx/efct/efct_xport.c  |  523 ++++++++++++++++
 drivers/scsi/elx/efct/efct_xport.h  |  201 +++++++
 6 files changed, 2853 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h

diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
new file mode 100644
index 000000000000..ff488fb774f1
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.c
@@ -0,0 +1,856 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+
+#include "efct_els.h"
+#include "efct_hw.h"
+#include "efct_unsol.h"
+#include "efct_scsi.h"
+
+struct efct *efct_devices[MAX_EFCT_DEVICES];
+
+static int logmask;
+module_param(logmask, int, 0444);
+MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
+
+static struct libefc_function_template efct_libefc_templ = {
+	.hw_domain_alloc = efct_hw_domain_alloc,
+	.hw_domain_attach = efct_hw_domain_attach,
+	.hw_domain_free = efct_hw_domain_free,
+	.hw_domain_force_free = efct_hw_domain_force_free,
+	.domain_hold_frames = efct_domain_hold_frames,
+	.domain_accept_frames = efct_domain_accept_frames,
+
+	.hw_port_alloc = efct_hw_port_alloc,
+	.hw_port_attach = efct_hw_port_attach,
+	.hw_port_free = efct_hw_port_free,
+
+	.hw_node_alloc = efct_hw_node_alloc,
+	.hw_node_attach = efct_hw_node_attach,
+	.hw_node_detach = efct_hw_node_detach,
+	.hw_node_free_resources = efct_hw_node_free_resources,
+	.node_purge_pending = efct_node_purge_pending,
+
+	.scsi_io_alloc_disable = efct_scsi_io_alloc_disable,
+	.scsi_io_alloc_enable = efct_scsi_io_alloc_enable,
+	.scsi_validate_node = efct_scsi_validate_initiator,
+	.new_domain = efct_scsi_tgt_new_domain,
+	.del_domain = efct_scsi_tgt_del_domain,
+	.new_sport = efct_scsi_tgt_new_sport,
+	.del_sport = efct_scsi_tgt_del_sport,
+	.scsi_new_node = efct_scsi_new_initiator,
+	.scsi_del_node = efct_scsi_del_initiator,
+
+	.els_send = efct_els_req_send,
+	.els_send_ct = efct_els_send_ct,
+	.els_send_resp = efct_els_resp_send,
+	.bls_send_acc_hdr = efct_bls_send_acc_hdr,
+	.send_flogi_p2p_acc = efct_send_flogi_p2p_acc,
+	.send_ct_rsp = efct_send_ct_rsp,
+	.send_ls_rjt = efct_send_ls_rjt,
+
+	.node_io_cleanup = efct_node_io_cleanup,
+	.node_els_cleanup = efct_node_els_cleanup,
+	.node_abort_all_els = efct_node_abort_all_els,
+
+	.dispatch_fcp_cmd = efct_dispatch_fcp_cmd,
+	.recv_abts_frame = efct_node_recv_abts_frame,
+};
+
+static int
+efct_device_init(void)
+{
+	int rc;
+
+	/* driver-wide init for target-server */
+	rc = efct_scsi_tgt_driver_init();
+	if (rc) {
+		pr_err("efct_scsi_tgt_driver_init failed rc=%d\n", rc);
+		return rc;
+	}
+
+	rc = efct_scsi_reg_fc_transport();
+	if (rc) {
+		pr_err("failed to register to FC host\n");
+		efct_scsi_tgt_driver_exit();
+		return rc;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_device_shutdown(void)
+{
+	efct_scsi_release_fc_transport();
+
+	efct_scsi_tgt_driver_exit();
+}
+
+static void *
+efct_device_alloc(u32 nid)
+{
+	struct efct *efct = NULL;
+	u32 i;
+
+	efct = kzalloc_node(sizeof(*efct), GFP_ATOMIC, nid);
+
+	if (efct) {
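+		/* Claim the first free slot in the global efct_devices[] table */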
+		for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
+			if (!efct_devices[i]) {
+				efct->instance_index = i;
+				efct_devices[i] = efct;
+				break;
+			}
+		}
+
+		if (i == ARRAY_SIZE(efct_devices)) {
+			pr_err("Exceeded max supported devices.\n");
+			kfree(efct);
+			efct = NULL;
+		} else {
+			efct->attached = false;
+		}
+	}
+	return efct;
+}
+
+static void
+efct_teardown_msix(struct efct *efct)
+{
+	u32 i;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		free_irq(pci_irq_vector(efct->pcidev, i),
+			 &efct->intr_context[i]);
+	}
+
+	pci_free_irq_vectors(efct->pcidev);
+}
+
+static int
+efct_efclib_config(struct efct *efct, struct libefc_function_template *tt)
+{
+	struct efc *efc;
+	struct sli4 *sli;
+	int rc = EFC_SUCCESS;
+
+	efc = kzalloc(sizeof(*efc), GFP_KERNEL);
+	if (!efc)
+		return EFC_FAIL;
+
+	efct->efcport = efc;
+
+	memcpy(&efc->tt, tt, sizeof(*tt));
+	efc->base = efct;
+	efc->pcidev = efct->pcidev;
+
+	efc->def_wwnn = efct_get_wwnn(&efct->hw);
+	efc->def_wwpn = efct_get_wwpn(&efct->hw);
+	efc->enable_tgt = 1;
+	efc->log_level = EFC_LOG_LIB;
+
+	sli = &efct->hw.sli;
+	efc->max_xfer_size = sli->sge_supported_length *
+			     sli_get_max_sgl(&efct->hw.sli);
+
+	rc = efcport_init(efc);
+	if (rc)
+		efc_log_err(efc, "efcport_init failed\n");
+
+	return rc;
+}
+
+static int efct_request_firmware_update(struct efct *efct);
+
+static const char *
+efct_pci_model(u16 device)
+{
+	switch (device) {
+	case EFCT_DEVICE_LANCER_G6:	return "LPE31004";
+	case EFCT_DEVICE_LANCER_G7:	return "LPE36000";
+	default:			return "unknown";
+	}
+}
+
+static int
+efct_device_attach(struct efct *efct)
+{
+	int rc = 0;
+	u32 i = 0;
+
+	if (efct->attached) {
+		efc_log_err(efct, "Device is already attached\n");
+		return EFC_FAIL;
+	}
+
+	snprintf(efct->name, sizeof(efct->name), "[%s%d] ", "fc",
+		 efct->instance_index);
+
+	efct->logmask = logmask;
+	efct->filter_def = "0,0,0,0";
+	efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
+
+	efct->model = efct_pci_model(efct->pcidev->device);
+
+	efct->efct_req_fw_upgrade = true;
+
+	/* Allocate transport object and bring online */
+	efct->xport = efct_xport_alloc(efct);
+	if (!efct->xport) {
+		efc_log_err(efct, "failed to allocate transport object\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rc = efct_xport_attach(efct->xport);
+	if (rc) {
+		efc_log_err(efct, "failed to attach transport object\n");
+		goto xport_out;
+	}
+
+	rc = efct_xport_initialize(efct->xport);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize transport object\n");
+		goto xport_out;
+	}
+
+	rc = efct_efclib_config(efct, &efct_libefc_templ);
+	if (rc) {
+		efc_log_err(efct, "failed to init efclib\n");
+		goto efclib_out;
+	}
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		efc_log_debug(efct, "irq %d enabled\n", i);
+		enable_irq(pci_irq_vector(efct->pcidev, i));
+	}
+
+	efct->attached = true;
+
+	if (efct->efct_req_fw_upgrade)
+		efct_request_firmware_update(efct);
+
+	return rc;
+
+efclib_out:
+	efct_xport_detach(efct->xport);
+xport_out:
+	if (efct->xport) {
+		efct_xport_free(efct->xport);
+		efct->xport = NULL;
+	}
+out:
+	return rc;
+}
+
+static int
+efct_device_detach(struct efct *efct)
+{
+	int rc = 0, i;
+
+	if (efct) {
+		if (!efct->attached) {
+			efc_log_warn(efct, "Device is not attached\n");
+			return EFC_FAIL;
+		}
+
+		rc = efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN);
+		if (rc)
+			efc_log_err(efct, "Transport Shutdown timed out\n");
+
+		for (i = 0; i < efct->n_msix_vec; i++)
+			disable_irq(pci_irq_vector(efct->pcidev, i));
+
+		if (efct_xport_detach(efct->xport) != 0)
+			efc_log_err(efct, "Transport detach failed\n");
+
+		efct_xport_free(efct->xport);
+		efct->xport = NULL;
+
+		efcport_destroy(efct->efcport);
+		kfree(efct->efcport);
+
+		efct->attached = false;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_fw_write_cb(int status, u32 actual_write_length,
+		 u32 change_status, void *arg)
+{
+	struct efct_fw_write_result *result = arg;
+
+	result->status = status;
+	result->actual_xfer = actual_write_length;
+	result->change_status = change_status;
+
+	complete(&result->done);
+}
+
+static int
+efct_firmware_write(struct efct *efct, const u8 *buf, size_t buf_len,
+		    u8 *change_status)
+{
+	int rc = 0;
+	u32 bytes_left;
+	u32 xfer_size;
+	u32 offset;
+	struct efc_dma dma;
+	int last = 0;
+	struct efct_fw_write_result result;
+
+	init_completion(&result.done);
+
+	bytes_left = buf_len;
+	offset = 0;
+
+	dma.size = FW_WRITE_BUFSIZE;
+	dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+				      dma.size, &dma.phys, GFP_DMA);
+	if (!dma.virt)
+		return -ENOMEM;
+
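+	/*
+	 * Write the image in FW_WRITE_BUFSIZE chunks, waiting for each
+	 * chunk's completion callback before issuing the next one.
+	 */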
+	while (bytes_left > 0) {
+		if (bytes_left > FW_WRITE_BUFSIZE)
+			xfer_size = FW_WRITE_BUFSIZE;
+		else
+			xfer_size = bytes_left;
+
+		memcpy(dma.virt, buf + offset, xfer_size);
+
+		if (bytes_left == xfer_size)
+			last = 1;
+
+		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
+				       last, efct_fw_write_cb, &result);
+
+		if (wait_for_completion_interruptible(&result.done) != 0) {
+			rc = -ENXIO;
+			break;
+		}
+
+		if (result.actual_xfer == 0 || result.status != 0) {
+			rc = -EFAULT;
+			break;
+		}
+
+		if (last)
+			*change_status = result.change_status;
+
+		bytes_left -= result.actual_xfer;
+		offset += result.actual_xfer;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev, dma.size, dma.virt, dma.phys);
+	return rc;
+}
+
+/*
+ * Firmware reset to activate the new firmware.
+ * Function 0 will update and load the new firmware
+ * during attach.
+ */
+static int
+efct_fw_reset(struct efct *efct)
+{
+	int rc = 0;
+
+	if (timer_pending(&efct->xport->stats_timer))
+		del_timer(&efct->xport->stats_timer);
+
+	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
+		efc_log_info(efct, "failed to reset firmware\n");
+		rc = -1;
+	} else {
+		efc_log_info(efct,
+			"successfully reset firmware. Now resetting port\n");
+
+		efct_device_detach(efct);
+		rc = efct_device_attach(efct);
+	}
+	return rc;
+}
+
+static int
+efct_request_firmware_update(struct efct *efct)
+{
+	int rc = 0;
+	char file_name[256];
+	u8 fw_change_status = 0;
+	const struct firmware *fw;
+	struct efct_hw_grp_hdr *fw_image;
+
+	snprintf(file_name, 256, "%s.grp", efct->model);
+
+	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
+	if (rc) {
+		efc_log_debug(efct, "Firmware file(%s) not found.\n",
+				file_name);
+		return rc;
+	}
+
+	fw_image = (struct efct_hw_grp_hdr *)fw->data;
+
+	if (!strncmp(efct->hw.sli.fw_name[0], fw_image->revision,
+		     strnlen(fw_image->revision, 16))) {
+		efc_log_debug(efct,
+			"Skipped update. Firmware is already up to date.\n");
+		goto exit;
+	}
+
+	efc_log_info(efct, "Firmware update is initiated. %s -> %s\n",
+		     efct->hw.sli.fw_name[0], fw_image->revision);
+
+	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
+	if (rc) {
+		efc_log_err(efct,
+			     "Firmware update failed. Return code = %d\n", rc);
+		goto exit;
+	}
+
+	efc_log_info(efct, "Firmware updated successfully\n");
+	switch (fw_change_status) {
+	case 0x00:
+		efc_log_info(efct, "New firmware is active.\n");
+		break;
+	case 0x01:
+		efc_log_info(efct,
+			"System reboot needed to activate the new firmware\n");
+		break;
+	case 0x02:
+	case 0x03:
+		efc_log_info(efct,
+			"firmware is resetting to activate the new firmware\n");
+		efct_fw_reset(efct);
+		break;
+	default:
+		efc_log_info(efct,
+			"Unexpected value change_status:%d\n", fw_change_status);
+		break;
+	}
+
+exit:
+	release_firmware(fw);
+
+	return rc;
+}
+
+static void
+efct_device_free(struct efct *efct)
+{
+	if (efct) {
+		efct_devices[efct->instance_index] = NULL;
+
+		kfree(efct);
+	}
+}
+
+static int
+efct_device_interrupts_required(struct efct *efct)
+{
+	if (efct_hw_setup(&efct->hw, efct, efct->pcidev)
+				!= EFCT_HW_RTN_SUCCESS) {
+		return -1;
+	}
+
+	return efct->hw.config.n_eq;
+}
+
+static irqreturn_t
+efct_intr_thread(int irq, void *handle)
+{
+	struct efct_intr_context *intr_ctx = handle;
+	struct efct *efct = intr_ctx->efct;
+
+	efct_hw_process(&efct->hw, intr_ctx->index, efct->max_isr_time_msec);
+	return IRQ_HANDLED;
+}
+
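+/*
+ * Hard IRQ handler: just wake the threaded handler; all EQ processing is
+ * done in efct_intr_thread() in process context.
+ */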
+static irqreturn_t
+efct_intr_msix(int irq, void *handle)
+{
+	return IRQ_WAKE_THREAD;
+}
+
+static int
+efct_setup_msix(struct efct *efct, u32 num_intrs)
+{
+	int	rc = 0, i;
+
+	if (!pci_find_capability(efct->pcidev, PCI_CAP_ID_MSIX)) {
+		dev_err(&efct->pcidev->dev,
+			"%s : MSI-X not available\n", __func__);
+		return -EINVAL;
+	}
+
+	efct->n_msix_vec = num_intrs;
+
+	rc = pci_alloc_irq_vectors(efct->pcidev, num_intrs, num_intrs,
+				   PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
+
+	if (rc < 0) {
+		dev_err(&efct->pcidev->dev, "Failed to alloc irq : %d\n", rc);
+		return rc;
+	}
+
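+	/*
+	 * Register a hard handler and a threaded handler per vector; the hard
+	 * handler only wakes the thread, which drains the EQ for that vector.
+	 */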
+	for (i = 0; i < num_intrs; i++) {
+		struct efct_intr_context *intr_ctx = NULL;
+
+		intr_ctx = &efct->intr_context[i];
+		intr_ctx->efct = efct;
+		intr_ctx->index = i;
+
+		rc = request_threaded_irq(pci_irq_vector(efct->pcidev, i),
+					  efct_intr_msix, efct_intr_thread, 0,
+					  EFCT_DRIVER_NAME, intr_ctx);
+		if (rc) {
+			dev_err(&efct->pcidev->dev,
+				"Failed to register %d vector: %d\n", i, rc);
+			goto out;
+		}
+	}
+
+	return rc;
+
+out:
+	while (--i >= 0)
+		free_irq(pci_irq_vector(efct->pcidev, i),
+			 &efct->intr_context[i]);
+
+	pci_free_irq_vectors(efct->pcidev);
+	return rc;
+}
+
+static const struct pci_device_id efct_pci_table[] = {
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G6), 0},
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G7), 0},
+	{}	/* terminate list */
+};
+
+static int
+efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+	struct efct *efct = NULL;
+	int rc;
+	u32 i, r;
+	int num_interrupts = 0;
+	int nid;
+
+	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc)
+		return rc;
+
+	pci_set_master(pdev);
+
+	rc = pci_set_mwi(pdev);
+	if (rc) {
+		dev_info(&pdev->dev,
+			 "pci_set_mwi returned %d\n", rc);
+		goto mwi_out;
+	}
+
+	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
+	if (rc) {
+		dev_err(&pdev->dev, "pci_request_regions failed %d\n", rc);
+		goto req_regions_out;
+	}
+
+	/* Fetch the Numa node id for this device */
+	nid = dev_to_node(&pdev->dev);
+	if (nid < 0) {
+		dev_warn(&pdev->dev,
+			 "NUMA node not set (%d), defaulting to node 0\n", nid);
+		nid = 0;
+	}
+
+	/* Allocate efct */
+	efct = efct_device_alloc(nid);
+	if (!efct) {
+		dev_err(&pdev->dev, "Failed to allocate efct\n");
+		rc = -ENOMEM;
+		goto alloc_out;
+	}
+
+	efct->pcidev = pdev;
+
+	efct->numa_node = nid;
+
+	/* Map all memory BARs */
+	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
+			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
+						  pci_resource_len(pdev, i));
+			r++;
+		}
+
+		/*
+		 * If the 64-bit attribute is set, both this BAR and the
+		 * next form the complete address. Skip processing the
+		 * next BAR.
+		 */
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
+			i++;
+	}
+
+	pci_set_drvdata(pdev, efct);
+
+	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 ||
+	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
+		dev_warn(&pdev->dev,
+			 "trying DMA_BIT_MASK(32)\n");
+		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 ||
+		    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
+			dev_err(&pdev->dev,
+				"setting DMA_BIT_MASK failed\n");
+			rc = -1;
+			goto dma_mask_out;
+		}
+	}
+
+	num_interrupts = efct_device_interrupts_required(efct);
+	if (num_interrupts < 0) {
+		efc_log_err(efct, "efct_device_interrupts_required failed\n");
+		rc = -1;
+		goto dma_mask_out;
+	}
+
+	/*
+	 * Initialize MSIX interrupts, note,
+	 * efct_setup_msix() enables the interrupt
+	 */
+	rc = efct_setup_msix(efct, num_interrupts);
+	if (rc) {
+		dev_err(&pdev->dev, "Can't setup msix\n");
+		goto dma_mask_out;
+	}
+	/* Disable interrupt for now */
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		efc_log_debug(efct, "irq %d disabled\n", i);
+		disable_irq(pci_irq_vector(efct->pcidev, i));
+	}
+
+	rc = efct_device_attach(efct);
+	if (rc)
+		goto attach_out;
+
+	return EFC_SUCCESS;
+
+attach_out:
+	efct_teardown_msix(efct);
+dma_mask_out:
+	pci_set_drvdata(pdev, NULL);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+	efct_device_free(efct);
+alloc_out:
+	pci_release_regions(pdev);
+req_regions_out:
+	pci_clear_mwi(pdev);
+mwi_out:
+	pci_disable_device(pdev);
+	return rc;
+}
+
+static void
+efct_pci_remove(struct pci_dev *pdev)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+	u32	i;
+
+	if (!efct)
+		return;
+
+	efct_device_detach(efct);
+
+	efct_teardown_msix(efct);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+
+	pci_set_drvdata(pdev, NULL);
+
+	efct_devices[efct->instance_index] = NULL;
+
+	efct_device_free(efct);
+
+	pci_release_regions(pdev);
+
+	pci_disable_device(pdev);
+}
+
+static void
+efct_device_prep_for_reset(struct efct *efct, struct pci_dev *pdev)
+{
+	if (efct) {
+		efc_log_debug(efct,
+			       "PCI channel disable preparing for reset\n");
+		efct_device_detach(efct);
+		/* Disable interrupt and pci device */
+		efct_teardown_msix(efct);
+	}
+	pci_disable_device(pdev);
+}
+
+static void
+efct_device_prep_for_recover(struct efct *efct)
+{
+	if (efct) {
+		efc_log_debug(efct, "PCI channel preparing for recovery\n");
+		efct_hw_io_abort_all(&efct->hw);
+	}
+}
+
+/**
+ * efct_pci_io_error_detected - method for handling PCI I/O error
+ * @pdev: pointer to PCI device.
+ * @state: the current PCI connection state.
+ *
+ * This routine is registered to the PCI subsystem for error handling. This
+ * function is called by the PCI subsystem after a PCI bus error affecting
+ * this device has been detected. When this routine is invoked, it dispatches
+ * device error detected handling routine, which will perform the proper
+ * error detected operation.
+ *
+ * Return codes
+ * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
+ * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
+ */
+static pci_ers_result_t
+efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+	pci_ers_result_t rc;
+
+	switch (state) {
+	case pci_channel_io_normal:
+		efct_device_prep_for_recover(efct);
+		rc = PCI_ERS_RESULT_CAN_RECOVER;
+		break;
+	case pci_channel_io_frozen:
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	case pci_channel_io_perm_failure:
+		efct_device_detach(efct);
+		rc = PCI_ERS_RESULT_DISCONNECT;
+		break;
+	default:
+		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
+			       state);
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	}
+
+	return rc;
+}
+
+static pci_ers_result_t
+efct_pci_io_slot_reset(struct pci_dev *pdev)
+{
+	int rc;
+	struct efct *efct = pci_get_drvdata(pdev);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc) {
+		efc_log_err(efct,
+			     "failed to re-enable PCI device after reset.\n");
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	/*
+	 * As the new kernel behavior of pci_restore_state() API call clears
+	 * device saved_state flag, need to save the restored state again.
+	 */
+
+	pci_save_state(pdev);
+
+	pci_set_master(pdev);
+
+	rc = efct_setup_msix(efct, efct->n_msix_vec);
+	if (rc)
+		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
+			    rc);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void
+efct_pci_io_resume(struct pci_dev *pdev)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+}
+
+MODULE_DEVICE_TABLE(pci, efct_pci_table);
+
+static struct pci_error_handlers efct_pci_err_handler = {
+	.error_detected = efct_pci_io_error_detected,
+	.slot_reset = efct_pci_io_slot_reset,
+	.resume = efct_pci_io_resume,
+};
+
+static struct pci_driver efct_pci_driver = {
+	.name		= EFCT_DRIVER_NAME,
+	.id_table	= efct_pci_table,
+	.probe		= efct_pci_probe,
+	.remove		= efct_pci_remove,
+	.err_handler	= &efct_pci_err_handler,
+};
+
+static int __init efct_init(void)
+{
+	int rc;
+
+	rc = efct_device_init();
+	if (rc) {
+		pr_err("efct_device_init failed rc=%d\n", rc);
+		return rc;
+	}
+
+	rc = pci_register_driver(&efct_pci_driver);
+	if (rc)
+		goto l1;
+
+	return rc;
+
+l1:
+	efct_device_shutdown();
+	return rc;
+}
+
+static void __exit efct_exit(void)
+{
+	pci_unregister_driver(&efct_pci_driver);
+	efct_device_shutdown();
+}
+
+module_init(efct_init);
+module_exit(efct_exit);
+MODULE_VERSION(EFCT_DRIVER_VERSION);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Broadcom");
diff --git a/drivers/scsi/elx/efct/efct_driver.h b/drivers/scsi/elx/efct/efct_driver.h
new file mode 100644
index 000000000000..07ca0b182d90
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_DRIVER_H__)
+#define __EFCT_DRIVER_H__
+
+/***************************************************************************
+ * OS specific includes
+ */
+#include <stdarg.h>
+#include <linux/version.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <asm-generic/ioctl.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <asm/byteorder.h>
+#include <linux/timer.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/sched.h>
+#include <asm/current.h>
+#include <asm/cacheflush.h>
+#include <linux/pagemap.h>
+#include <linux/kthread.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/random.h>
+#include <linux/jiffies.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
+#include <linux/firmware.h>
+#include <linux/sched/signal.h>
+#include "../include/efc_common.h"
+
+#define EFCT_DRIVER_NAME			"efct"
+#define EFCT_DRIVER_VERSION			"1.0.0.0"
+
+/* EFCT_OS_MAX_ISR_TIME_MSEC -
+ * maximum time driver code should spend in an interrupt
+ * or kernel thread context without yielding
+ */
+#define EFCT_OS_MAX_ISR_TIME_MSEC		1000
+
+#define EFCT_FC_MAX_SGL				64
+#define EFCT_FC_DIF_SEED			0
+
+/* Timeouts */
+#define EFCT_FC_ELS_SEND_DEFAULT_TIMEOUT	0
+#define EFCT_FC_ELS_DEFAULT_RETRIES		3
+#define EFCT_FC_FLOGI_TIMEOUT_SEC		5
+#define EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC    30000000 /* 30 seconds */
+
+/* Watermark */
+#define EFCT_WATERMARK_HIGH_PCT			90
+#define EFCT_WATERMARK_LOW_PCT			80
+#define EFCT_IO_WATERMARK_PER_INITIATOR		8
+
+#include "../libefc/efclib.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+#include "efct_xport.h"
+
+#define EFCT_PCI_MAX_REGS			6
+#define MAX_PCI_INTERRUPTS			16
+
+struct efct_intr_context {
+	struct efct		*efct;
+	u32			index;
+};
+
+struct efct {
+	struct pci_dev			*pcidev;
+	void __iomem			*reg[EFCT_PCI_MAX_REGS];
+
+	u32				n_msix_vec;
+	struct efct_intr_context	intr_context[MAX_PCI_INTERRUPTS];
+	u32				numa_node;
+
+	char				name[EFC_NAME_LENGTH];
+	bool				attached;
+	struct efct_scsi_tgt		tgt_efct;
+	struct efct_xport		*xport;
+	struct efc			*efcport;
+	struct Scsi_Host		*shost;
+	int				logmask;
+	u32				max_isr_time_msec;
+
+	const char			*desc;
+	u32				instance_index;
+
+	const char			*model;
+
+	struct efct_hw			hw;
+
+	u32				num_vports;
+	u32				rq_selection_policy;
+	char				*filter_def;
+
+	bool				soft_wwn_enable;
+
+	/*
+	 * Target IO timer value:
+	 * Zero: target command timeout disabled.
+	 * Non-zero: Timeout value, in seconds, for target commands
+	 */
+	u32				target_io_timer_sec;
+
+	int				speed;
+	int				topology;
+
+	u8				efct_req_fw_upgrade;
+	u16				sw_feature_cap;
+	struct dentry			*sess_debugfs_dir;
+};
+
+#define FW_WRITE_BUFSIZE		(64 * 1024)
+
+struct efct_fw_write_result {
+	struct completion done;
+	int status;
+	u32 actual_xfer;
+	u32 change_status;
+};
+
+#define MAX_EFCT_DEVICES		64
+extern struct efct			*efct_devices[MAX_EFCT_DEVICES];
+
+#endif /* __EFCT_DRIVER_H__ */
diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
new file mode 100644
index 000000000000..21fcaf7b3d2b
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -0,0 +1,1116 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+
+static enum efct_hw_rtn
+efct_hw_link_event_init(struct efct_hw *hw)
+{
+	hw->link.status = SLI_LINK_STATUS_MAX;
+	hw->link.topology = SLI_LINK_TOPO_NONE;
+	hw->link.medium = SLI_LINK_MEDIUM_MAX;
+	hw->link.speed = 0;
+	hw->link.loop_map = NULL;
+	hw->link.fc_id = U32_MAX;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_read_max_dump_size(struct efct_hw *hw)
+{
+	u8	buf[SLI4_BMBX_SIZE];
+	struct efct *efct = hw->os;
+	int	rc = EFCT_HW_RTN_SUCCESS;
+	struct sli4_rsp_cmn_set_dump_location *rsp;
+
+	/* attempt to determine the dump size for function 0 only. */
+	if (PCI_FUNC(efct->pcidev->devfn) != 0)
+		return rc;
+
+	rc = sli_cmd_common_set_dump_location(&hw->sli, buf, SLI4_BMBX_SIZE, 1,
+					     0,	NULL, 0);
+	if (rc)
+		return rc;
+
+	rsp = (struct sli4_rsp_cmn_set_dump_location *)
+	      (buf + offsetof(struct sli4_cmd_sli_config, payload.embed));
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "set dump location cmd failed\n");
+		return rc;
+	}
+	hw->dump_size =	(le32_to_cpu(rsp->buffer_length_dword) &
+			 SLI4_CMN_SET_DUMP_BUFFER_LEN);
+	efc_log_debug(hw->os, "Dump size %x\n",	hw->dump_size);
+
+	return rc;
+}
+
+static int
+__efct_read_topology_cb(struct efct_hw *hw, int status,
+			u8 *mqe, void *arg)
+{
+	struct sli4_cmd_read_topology *read_topo =
+				(struct sli4_cmd_read_topology *)mqe;
+	u8 speed;
+	struct efc_domain_record drec = {0};
+	struct efct *efct = hw->os;
+
+	if (status || le16_to_cpu(read_topo->hdr.status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
+			       status,
+			       le16_to_cpu(read_topo->hdr.status));
+		kfree(mqe);
+		return EFC_FAIL;
+	}
+
+	switch (le32_to_cpu(read_topo->dw2_attentype) &
+		SLI4_READTOPO_ATTEN_TYPE) {
+	case SLI4_READ_TOPOLOGY_LINK_UP:
+		hw->link.status = SLI_LINK_STATUS_UP;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_DOWN:
+		hw->link.status = SLI_LINK_STATUS_DOWN;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
+		hw->link.status = SLI_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		hw->link.status = SLI_LINK_STATUS_MAX;
+		break;
+	}
+
+	switch (read_topo->topology) {
+	case SLI4_READ_TOPOLOGY_NPORT:
+		hw->link.topology = SLI_LINK_TOPO_NPORT;
+		break;
+	case SLI4_READ_TOPOLOGY_FC_AL:
+		hw->link.topology = SLI_LINK_TOPO_LOOP;
+		if (hw->link.status == SLI_LINK_STATUS_UP)
+			hw->link.loop_map = hw->loop_map.virt;
+		hw->link.fc_id = read_topo->acquired_al_pa;
+		break;
+	default:
+		hw->link.topology = SLI_LINK_TOPO_MAX;
+		break;
+	}
+
+	hw->link.medium = SLI_LINK_MEDIUM_FC;
+
+	speed = (le32_to_cpu(read_topo->currlink_state) &
+		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
+	switch (speed) {
+	case SLI4_READ_TOPOLOGY_SPEED_1G:
+		hw->link.speed =  1 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_2G:
+		hw->link.speed =  2 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_4G:
+		hw->link.speed =  4 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_8G:
+		hw->link.speed =  8 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_16G:
+		hw->link.speed = 16 * 1000;
+		hw->link.loop_map = NULL;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_32G:
+		hw->link.speed = 32 * 1000;
+		hw->link.loop_map = NULL;
+		break;
+	}
+
+	kfree(mqe);
+
+	drec.speed = hw->link.speed;
+	drec.fc_id = hw->link.fc_id;
+	drec.is_nport = true;
+	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
+
+	return EFC_SUCCESS;
+}
+
+/* Callback function for the SLI link events */
+static int
+efct_hw_cb_link(void *ctx, void *e)
+{
+	struct efct_hw	*hw = ctx;
+	struct sli4_link_event *event = e;
+	struct efc_domain	*d = NULL;
+	int		rc = EFCT_HW_RTN_ERROR;
+	struct efct	*efct = hw->os;
+	struct efc_dma *dma;
+
+	efct_hw_link_event_init(hw);
+
+	switch (event->status) {
+	case SLI_LINK_STATUS_UP:
+
+		hw->link = *event;
+		efct->efcport->link_status = EFC_LINK_STATUS_UP;
+
+		if (event->topology == SLI_LINK_TOPO_NPORT) {
+			struct efc_domain_record drec = {0};
+
+			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
+				      event->speed);
+			drec.speed = event->speed;
+			drec.fc_id = event->fc_id;
+			drec.is_nport = true;
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
+				      &drec);
+		} else if (event->topology == SLI_LINK_TOPO_LOOP) {
+			u8	*buf = NULL;
+
+			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
+				      event->speed);
+			dma = &hw->loop_map;
+			dma->size = SLI4_MIN_LOOP_MAP_BYTES;
+			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt) {
+				efc_log_err(hw->os, "efct_dma_alloc_fail\n");
+				break;
+			}
+
+			buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+			if (!buf)
+				break;
+
+			if (!sli_cmd_read_topology(&hw->sli, buf,
+						  SLI4_BMBX_SIZE,
+						       &hw->loop_map)) {
+				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+						     __efct_read_topology_cb,
+						     NULL);
+			}
+
+			if (rc != EFCT_HW_RTN_SUCCESS) {
+				efc_log_test(hw->os, "READ_TOPOLOGY failed\n");
+				kfree(buf);
+			}
+		} else {
+			efc_log_info(hw->os,
+				     "Link Up, unsupported topology (%#x), speed is %d\n",
+				     event->topology, event->speed);
+		}
+		break;
+	case SLI_LINK_STATUS_DOWN:
+		efc_log_info(hw->os, "Link down\n");
+
+		hw->link.status = event->status;
+		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
+
+		d = hw->domain;
+		if (d)
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
+		break;
+	default:
+		efc_log_test(hw->os, "unhandled link status %#x\n",
+			      event->status);
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
+{
+	u32 i, max_sgl;
+
+	if (hw->hw_setup_called)
+		return EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * efct_hw_init() relies on NULL pointers indicating that a structure
+	 * needs allocation. If a structure is non-NULL, efct_hw_init() won't
+	 * free/realloc that memory
+	 */
+	memset(hw, 0, sizeof(struct efct_hw));
+
+	hw->hw_setup_called = true;
+
+	hw->os = os;
+
+	spin_lock_init(&hw->cmd_lock);
+	INIT_LIST_HEAD(&hw->cmd_head);
+	INIT_LIST_HEAD(&hw->cmd_pending);
+	hw->cmd_head_count = 0;
+
+	spin_lock_init(&hw->io_lock);
+	spin_lock_init(&hw->io_abort_lock);
+
+	atomic_set(&hw->io_alloc_failed_count, 0);
+
+	hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
+	if (sli_setup(&hw->sli, hw->os, pdev, ((struct efct *)os)->reg)) {
+		efc_log_err(hw->os, "SLI setup failed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_link_event_init(hw);
+
+	sli_callback(&hw->sli, SLI4_CB_LINK, efct_hw_cb_link, hw);
+
+	/*
+	 * Set all the queue sizes to the maximum allowed.
+	 */
+	for (i = 0; i < ARRAY_SIZE(hw->num_qentries); i++)
+		hw->num_qentries[i] = hw->sli.qinfo.max_qentries[i];
+	/*
+	 * Adjust the size of the WQs so that the CQ is twice as
+	 * big as the WQ to allow for 2 completions per IO. This allows us to
+	 * handle multi-phase as well as aborts.
+	 */
+	hw->num_qentries[SLI_QTYPE_WQ] = hw->num_qentries[SLI_QTYPE_CQ] / 2;
+
+	/*
+	 * The RQ assignment for RQ pair mode.
+	 */
+	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
+	hw->config.n_io = hw->sli.extent[SLI_RSRC_XRI].size;
+	hw->config.n_eq = EFCT_HW_DEF_NUM_EQ;
+
+	max_sgl = sli_get_max_sgl(&hw->sli) - SLI4_SGE_MAX_RESERVED;
+	max_sgl = (max_sgl > EFCT_FC_MAX_SGL) ? EFCT_FC_MAX_SGL : max_sgl;
+	hw->config.n_sgl = max_sgl;
+
+	(void)efct_hw_read_max_dump_size(hw);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static void
+efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
+{
+	efc_log_info(hw->os,
+		      "REG_FCFI: filter[%d] %08X -> RQ[%d] id=%d\n",
+		     j, hw->config.filter_def[j], i, id);
+}
+
+static inline void
+efct_hw_init_free_io(struct efct_hw_io *io)
+{
+	/*
+	 * Set io->done to NULL, to avoid any callbacks, should
+	 * a completion be received for one of these IOs
+	 */
+	io->done = NULL;
+	io->abort_done = NULL;
+	io->status_saved = false;
+	io->abort_in_progress = false;
+	io->rnode = NULL;
+	io->type = 0xFFFF;
+	io->wq = NULL;
+	io->ul_io = NULL;
+	io->tgt_wqe_timeout = 0;
+}
+
+static u8 efct_hw_iotype_is_originator(u16 io_type)
+{
+	switch (io_type) {
+	case EFCT_HW_FC_CT:
+	case EFCT_HW_ELS_REQ:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+static void
+efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* Restore the default */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+
+	/* Clear the overflow SGL */
+	io->ovfl_sgl = NULL;
+	io->ovfl_sgl_count = 0;
+	io->ovfl_lsp = NULL;
+}
+
+static void
+efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+	u32	len = 0;
+	u32 ext = 0;
+
+	/* clear xbusy flag if WCQE[XB] is clear */
+	if (io->xbusy && (wcqe->flags & SLI4_WCQE_XB) == 0)
+		io->xbusy = false;
+
+	/* get extended CQE status */
+	switch (io->type) {
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_ACC_SID:
+		break;
+	case EFCT_HW_ELS_REQ:
+		sli_fc_els_did(&hw->sli, cqe, &ext);
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_ELS_RSP:
+	case EFCT_HW_ELS_RSP_SID:
+	case EFCT_HW_FC_CT_RSP:
+		break;
+	case EFCT_HW_FC_CT:
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_WRITE:
+	case EFCT_HW_IO_TARGET_READ:
+		len = sli_fc_io_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		break;
+	case EFCT_HW_IO_DNRX_REQUEUE:
+		/* release the count for re-posting the buffer */
+		/* efct_hw_io_free(hw, io); */
+		break;
+	default:
+		efc_log_test(hw->os, "unhandled io type %#x for XRI 0x%x\n",
+			      io->type, io->indicator);
+		break;
+	}
+	if (status) {
+		ext = sli_fc_ext_status(&hw->sli, cqe);
+		/*
+		 * If we're not an originator IO, and XB is set, then issue
+		 * abort for the IO from within the HW
+		 */
+		if ((!efct_hw_iotype_is_originator(io->type)) &&
+		    wcqe->flags & SLI4_WCQE_XB) {
+			enum efct_hw_rtn rc;
+
+			efc_log_debug(hw->os, "aborting xri=%#x tag=%#x\n",
+				       io->indicator, io->reqtag);
+
+			/*
+			 * Because targets may send a response when the IO
+			 * completes using the same XRI, we must wait for the
+			 * XRI_ABORTED CQE to issue the IO callback
+			 */
+			rc = efct_hw_io_abort(hw, io, false, NULL, NULL);
+			if (rc == EFCT_HW_RTN_SUCCESS) {
+				/*
+				 * latch status to return after abort is
+				 * complete
+				 */
+				io->status_saved = true;
+				io->saved_status = status;
+				io->saved_ext = ext;
+				io->saved_len = len;
+				goto exit_efct_hw_wq_process_io;
+			} else if (rc == EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+				/*
+				 * Already being aborted by someone else (ABTS
+				 * perhaps). Just fall thru and return original
+				 * error.
+				 */
+				efc_log_debug(hw->os,
+					      "abort in progress xri=%#x tag=%#x\n",
+					      io->indicator, io->reqtag);
+
+			} else {
+				/* Failed to abort for some other reason, log
+				 * error
+				 */
+				efc_log_test(hw->os,
+					     "Failed to abort xri=%#x tag=%#x rc=%d\n",
+					     io->indicator, io->reqtag, rc);
+			}
+		}
+	}
+
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void *arg = io->arg;
+
+		io->done = NULL;
+
+		if (io->status_saved) {
+			/* use latched status if exists */
+			status = io->saved_status;
+			len = io->saved_len;
+			ext = io->saved_ext;
+			io->status_saved = false;
+		}
+
+		/* Restore default SGL */
+		efct_hw_io_restore_sgl(hw, io);
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+exit_efct_hw_wq_process_io:
+	return;
+}
+
+/* Initialize the pool of HW IO objects */
+static enum efct_hw_rtn
+efct_hw_setup_io(struct efct_hw *hw)
+{
+	u32	i = 0;
+	struct efct_hw_io	*io = NULL;
+	uintptr_t	xfer_virt = 0;
+	uintptr_t	xfer_phys = 0;
+	u32	index;
+	bool new_alloc = true;
+	struct efc_dma *dma;
+	struct efct *efct = hw->os;
+
+	if (!hw->io) {
+		hw->io = kmalloc_array(hw->config.n_io, sizeof(io),
+				 GFP_KERNEL);
+
+		if (!hw->io)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(hw->io, 0, hw->config.n_io * sizeof(io));
+
+		for (i = 0; i < hw->config.n_io; i++) {
+			hw->io[i] = kmalloc(sizeof(*io), GFP_KERNEL);
+			if (!hw->io[i])
+				goto error;
+
+			memset(hw->io[i], 0, sizeof(struct efct_hw_io));
+		}
+
+		/* Create WQE buffs for IO */
+		hw->wqe_buffs = kmalloc((hw->config.n_io *
+					     hw->sli.wqe_size),
+					     GFP_ATOMIC);
+		if (!hw->wqe_buffs) {
+			kfree(hw->io);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		memset(hw->wqe_buffs, 0, (hw->config.n_io *
+					hw->sli.wqe_size));
+
+	} else {
+		/* re-use existing IOs, including SGLs */
+		new_alloc = false;
+	}
+
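+	/*
+	 * Allocate one DMA region large enough to hold an FCP XFER_RDY for
+	 * every IO; each IO is handed a slice of it below.
+	 */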
+	if (new_alloc) {
+		dma = &hw->xfer_rdy;
+		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
+		dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+					       dma->size, &dma->phys, GFP_DMA);
+		if (!dma->virt)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
+	xfer_phys = hw->xfer_rdy.phys;
+
+	for (i = 0; i < hw->config.n_io; i++) {
+		struct hw_wq_callback *wqcb;
+
+		io = hw->io[i];
+
+		/* initialize IO fields */
+		io->hw = hw;
+
+		/* Assign a WQE buff */
+		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
+
+		/* Allocate the request tag for this IO */
+		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
+		if (!wqcb) {
+			efc_log_err(hw->os, "can't allocate request tag\n");
+			return EFCT_HW_RTN_NO_RESOURCES;
+		}
+		io->reqtag = wqcb->instance_index;
+
+		/* Now for the fields that are initialized on each free */
+		efct_hw_init_free_io(io);
+
+		/* The XB flag isn't cleared on IO free, so init to zero */
+		io->xbusy = 0;
+
+		if (sli_resource_alloc(&hw->sli, SLI_RSRC_XRI,
+				       &io->indicator, &index)) {
+			efc_log_err(hw->os,
+				     "sli_resource_alloc failed @ %d\n", i);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		if (new_alloc) {
+			dma = &io->def_sgl;
+			dma->size = hw->config.n_sgl *
+					sizeof(struct sli4_sge);
+			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt) {
+				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
+				memset(&io->def_sgl, 0,
+				       sizeof(struct efc_dma));
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+		}
+		io->def_sgl_count = hw->config.n_sgl;
+		io->sgl = &io->def_sgl;
+		io->sgl_count = io->def_sgl_count;
+
+		if (hw->xfer_rdy.size) {
+			io->xfer_rdy.virt = (void *)xfer_virt;
+			io->xfer_rdy.phys = xfer_phys;
+			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
+
+			xfer_virt += sizeof(struct fcp_txrdy);
+			xfer_phys += sizeof(struct fcp_txrdy);
+		}
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+error:
+	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
+		kfree(hw->io[i]);
+		hw->io[i] = NULL;
+	}
+
+	kfree(hw->io);
+	hw->io = NULL;
+
+	return EFCT_HW_RTN_NO_MEMORY;
+}
+
+static enum efct_hw_rtn
+efct_hw_init_io(struct efct_hw *hw)
+{
+	u32	i = 0, io_index = 0;
+	bool prereg = false;
+	struct efct_hw_io	*io = NULL;
+	u8		cmd[SLI4_BMBX_SIZE];
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32	nremaining;
+	u32	n = 0;
+	u32	sgls_per_request = 256;
+	struct efc_dma	**sgls = NULL;
+	struct efc_dma	reqbuf;
+	struct efct *efct = hw->os;
+
+	prereg = hw->sli.sgl_pre_registered;
+
+	memset(&reqbuf, 0, sizeof(struct efc_dma));
+	if (prereg) {
+		sgls = kmalloc_array(sgls_per_request, sizeof(*sgls),
+				     GFP_ATOMIC);
+		if (!sgls)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		reqbuf.size = 32 + sgls_per_request * 16;
+		reqbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						 reqbuf.size, &reqbuf.phys,
+						 GFP_DMA);
+		if (!reqbuf.virt) {
+			efc_log_err(hw->os, "dma_alloc reqbuf failed\n");
+			kfree(sgls);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
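+	/*
+	 * Walk all HW IOs; when SGLs are pre-registered, post them to the
+	 * adapter in batches of contiguous XRIs, then add each IO to the
+	 * free list.
+	 */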
+	for (nremaining = hw->config.n_io; nremaining; nremaining -= n) {
+		if (prereg) {
+			/* Copy address of SGL's into local sgls[] array, break
+			 * out if the xri is not contiguous.
+			 */
+			u32 min = (sgls_per_request < nremaining)
+					? sgls_per_request : nremaining;
+			for (n = 0; n < min; n++) {
+				/* Check that we have contiguous xri values */
+				if (n > 0) {
+					if (hw->io[io_index + n]->indicator !=
+					    hw->io[io_index + n - 1]->indicator
+					    + 1)
+						break;
+				}
+				sgls[n] = hw->io[io_index + n]->sgl;
+			}
+
+			if (!sli_cmd_post_sgl_pages(&hw->sli, cmd,
+						   sizeof(cmd),
+						hw->io[io_index]->indicator,
+						n, sgls, NULL, &reqbuf)) {
+				if (efct_hw_command(hw, cmd, EFCT_CMD_POLL,
+						    NULL, NULL)) {
+					rc = EFCT_HW_RTN_ERROR;
+					efc_log_err(hw->os,
+						     "SGL post failed\n");
+					break;
+				}
+			}
+		} else {
+			n = nremaining;
+		}
+
+		/* Add to tail if successful */
+		for (i = 0; i < n; i++, io_index++) {
+			io = hw->io[io_index];
+			io->state = EFCT_HW_IO_STATE_FREE;
+			INIT_LIST_HEAD(&io->list_entry);
+			list_add_tail(&io->list_entry, &hw->io_free);
+		}
+	}
+
+	if (prereg) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  reqbuf.size, reqbuf.virt, reqbuf.phys);
+		memset(&reqbuf, 0, sizeof(struct efc_dma));
+		kfree(sgls);
+	}
+
+	return rc;
+}
+
+static enum efct_hw_rtn
+efct_hw_config_set_fdt_xfer_hint(struct efct_hw *hw, u32 fdt_xfer_hint)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint param;
+
+	memset(&param, 0, sizeof(param));
+	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    SLI4_SET_FEATURES_SET_FTD_XFER_HINT,
+				    sizeof(param),
+				    &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
+			      fdt_xfer_hint, rc);
+	else
+		efc_log_info(hw->os, "Set FDT transfer hint to %d\n",
+			     le32_to_cpu(param.fdt_xfer_hint));
+
+	return rc;
+}
+
+static int
+efct_hw_config_rq(struct efct_hw *hw)
+{
+	u32 min_rq_count, i, rc;
+	struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	u8 buf[SLI4_BMBX_SIZE];
+
+	efc_log_info(hw->os, "using REG_FCFI standard\n");
+
+	/*
+	 * Set the filter match/mask values from hw's
+	 * filter_def values
+	 */
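+	/* Each filter_def word packs r_ctl mask/match and type mask/match bytes */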
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		rq_cfg[i].rq_id = cpu_to_le16(0xffff);
+		rq_cfg[i].r_ctl_mask = (u8)hw->config.filter_def[i];
+		rq_cfg[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
+		rq_cfg[i].type_mask = (u8)(hw->config.filter_def[i] >> 16);
+		rq_cfg[i].type_match = (u8)(hw->config.filter_def[i] >> 24);
+	}
+
+	/*
+	 * Update the rq_id's of the FCF configuration
+	 * (don't update more than the number of rq_cfg
+	 * elements)
+	 */
+	min_rq_count = (hw->hw_rq_count < SLI4_CMD_REG_FCFI_NUM_RQ_CFG)	?
+			hw->hw_rq_count : SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
+	for (i = 0; i < min_rq_count; i++) {
+		struct hw_rq *rq = hw->hw_rq[i];
+		u32 j;
+
+		for (j = 0; j < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; j++) {
+			u32 mask = (rq->filter_mask != 0) ?
+				rq->filter_mask : 1;
+
+			if (!(mask & (1U << j)))
+				continue;
+
+			rq_cfg[j].rq_id = cpu_to_le16(rq->hdr->id);
+			efct_logfcfi(hw, j, i, rq->hdr->id);
+		}
+	}
+
+	rc = EFCT_HW_RTN_ERROR;
+	if (!sli_cmd_reg_fcfi(&hw->sli, buf,
+				SLI4_BMBX_SIZE, 0,
+				rq_cfg)) {
+		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL,
+				NULL, NULL);
+	}
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os,
+				"FCFI registration failed\n");
+		return rc;
+	}
+	hw->fcf_indicator =
+		le16_to_cpu(((struct sli4_cmd_reg_fcfi *)buf)->fcfi);
+
+	return rc;
+}
+
+static void
+efct_hw_queue_hash_add(struct efct_queue_hash *hash,
+		       u16 id, u16 index)
+{
+	u32	hash_index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the number of queues, then we
+	 * never have to worry about an infinite loop.
+	 */
+	while (hash[hash_index].in_use)
+		hash_index = (hash_index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/* not used, claim the entry */
+	hash[hash_index].id = id;
+	hash[hash_index].in_use = true;
+	hash[hash_index].index = index;
+}
+
+/* enable sli port health check */
+static enum efct_hw_rtn
+efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query,
+				     u8 enable)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_health_check param;
+	u32	health_check_flag = 0;
+
+	memset(&param, 0, sizeof(param));
+
+	if (enable)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
+
+	if (query)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
+
+	param.health_check_dword = cpu_to_le32(health_check_flag);
+
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK,
+				    sizeof(param),
+				    &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
+	else
+		efc_log_test(hw->os, "SLI Port Health Check is enabled\n");
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_init(struct efct_hw *hw)
+{
+	enum efct_hw_rtn rc;
+	u32 i = 0;
+	u32 max_rpi;
+	int rem_count;
+	unsigned long flags = 0;
+	struct efct_hw_io *temp;
+	struct sli4 *sli = &hw->sli;
+	struct hw_rq *rq;
+
+	/*
+	 * Make sure the command lists are empty. If this is start-of-day,
+	 * they'll be empty since they were just initialized in efct_hw_setup.
+	 * If we've just gone through a reset, the command and command pending
+	 * lists should have been cleaned up as part of the reset
+	 * (efct_hw_reset()).
+	 */
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		efc_log_err(hw->os, "command found on cmd list\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (!list_empty(&hw->cmd_pending)) {
+		efc_log_err(hw->os,
+				"command found on pending list\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	/* Free RQ buffers if previously allocated */
+	efct_hw_rx_free(hw);
+
+	/*
+	 * The IO queues must be initialized here for the reset case. The
+	 * efct_hw_init_io() function will re-add the IOs to the free list.
+	 * The cmd_head list should be OK since we free all entries in
+	 * efct_hw_command_cancel() that is called in the efct_hw_reset().
+	 */
+
+	/* If we are in this function due to a reset, there may be stale items
+	 * on lists that need to be removed.  Clean them up.
+	 */
+	rem_count = 0;
+	if (hw->io_wait_free.next) {
+		while ((!list_empty(&hw->io_wait_free))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_wait_free,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0) {
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_wait_free list\n",
+				rem_count);
+		}
+	}
+	rem_count = 0;
+	if (hw->io_inuse.next) {
+		while ((!list_empty(&hw->io_inuse))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_inuse,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0)
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_inuse list\n",
+				       rem_count);
+	}
+	rem_count = 0;
+	if (hw->io_free.next) {
+		while ((!list_empty(&hw->io_free))) {
+			rem_count++;
+			temp = list_first_entry(&hw->io_free,
+						struct efct_hw_io,
+						list_entry);
+			list_del(&temp->list_entry);
+		}
+		if (rem_count > 0)
+			efc_log_debug(hw->os,
+				       "rmvd %d items from io_free list\n",
+				       rem_count);
+	}
+
+	INIT_LIST_HEAD(&hw->io_inuse);
+	INIT_LIST_HEAD(&hw->io_free);
+	INIT_LIST_HEAD(&hw->io_wait_free);
+
+	/* If MRQ is not required, make sure we don't request the feature. */
+	hw->sli.features &= (~SLI4_REQFEAT_MRQP);
+
+	if (sli_init(&hw->sli)) {
+		efc_log_err(hw->os, "SLI failed to initialize\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->sliport_healthcheck) {
+		rc = efct_hw_config_sli_port_health_check(hw, 0, 1);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "Enabling port health check failed\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * Set FDT transfer hint, only works on Lancer
+	 */
+	if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2) {
+		/*
+		 * Non-fatal error. In particular, we can disregard failure to
+		 * set EFCT_HW_FDT_XFER_HINT on devices with legacy firmware
+		 * that do not support EFCT_HW_FDT_XFER_HINT feature.
+		 */
+		efct_hw_config_set_fdt_xfer_hint(hw, EFCT_HW_FDT_XFER_HINT);
+	}
+
+	/* zero the hashes */
+	memset(hw->cq_hash, 0, sizeof(hw->cq_hash));
+	efc_log_debug(hw->os, "Max CQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_CQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->rq_hash, 0, sizeof(hw->rq_hash));
+	efc_log_debug(hw->os, "Max RQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_RQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->wq_hash, 0, sizeof(hw->wq_hash));
+	efc_log_debug(hw->os, "Max WQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_WQ, EFCT_HW_Q_HASH_SIZE);
+
+	rc = efct_hw_init_queues(hw);
+	if (rc != EFCT_HW_RTN_SUCCESS)
+		return rc;
+
+	/* Allocate and post RQ buffers */
+	rc = efct_hw_rx_allocate(hw);
+	if (rc) {
+		efc_log_err(hw->os, "rx_allocate failed\n");
+		return rc;
+	}
+
+	rc = efct_hw_rx_post(hw);
+	if (rc) {
+		efc_log_err(hw->os, "WARNING - error posting RQ buffers\n");
+		return rc;
+	}
+
+	max_rpi = sli->extent[SLI_RSRC_RPI].size;
+	/* Allocate rpi_ref if not previously allocated */
+	if (!hw->rpi_ref) {
+		hw->rpi_ref = kcalloc(max_rpi, sizeof(*hw->rpi_ref),
+				      GFP_KERNEL);
+		if (!hw->rpi_ref)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	for (i = 0; i < max_rpi; i++) {
+		atomic_set(&hw->rpi_ref[i].rpi_count, 0);
+		atomic_set(&hw->rpi_ref[i].rpi_attached, 0);
+	}
+
+	rc = efct_hw_config_rq(hw);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_config_rq failed %d\n", rc);
+		return rc;
+	}
+
+	/*
+	 * Allocate the WQ request tag pool, if not previously allocated
+	 * (the request tag value is 16 bits, thus the pool allocation size
+	 * of 64k)
+	 */
+	hw->wq_reqtag_pool = efct_hw_reqtag_pool_alloc(hw);
+	if (!hw->wq_reqtag_pool) {
+		efc_log_err(hw->os, "efct_hw_reqtag_pool_alloc failed\n");
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	rc = efct_hw_setup_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO allocation failure\n");
+		return rc;
+	}
+
+	rc = efct_hw_init_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO initialization failure\n");
+		return rc;
+	}
+
+	/*
+	 * Arming the EQ allows (e.g.) interrupts when CQ completions write EQ
+	 * entries
+	 */
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_arm(&hw->sli, &hw->eq[i], true);
+
+	/*
+	 * Initialize RQ hash
+	 */
+	for (i = 0; i < hw->rq_count; i++)
+		efct_hw_queue_hash_add(hw->rq_hash, hw->rq[i].id, i);
+
+	/*
+	 * Initialize WQ hash
+	 */
+	for (i = 0; i < hw->wq_count; i++)
+		efct_hw_queue_hash_add(hw->wq_hash, hw->wq[i].id, i);
+
+	/*
+	 * Arming the CQ allows (e.g.) MQ completions to write CQ entries
+	 */
+	for (i = 0; i < hw->cq_count; i++) {
+		efct_hw_queue_hash_add(hw->cq_hash, hw->cq[i].id, i);
+		sli_queue_arm(&hw->sli, &hw->cq[i], true);
+	}
+
+	/* Set RQ process limit */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		hw->cq[rq->cq->instance].proc_limit = hw->config.n_io / 2;
+	}
+
+	/* record the fact that the queues are functional */
+	hw->state = EFCT_HW_STATE_ACTIVE;
+	/*
+	 * Allocate a HW IO for send frame.
+	 */
+	hw->hw_wq[0]->send_frame_io = efct_hw_io_alloc(hw);
+	if (!hw->hw_wq[0]->send_frame_io)
+		efc_log_err(hw->os, "alloc for send_frame_io failed\n");
+
+	/* Initialize send frame sequence id */
+	atomic_set(&hw->send_frame_seq_id, 0);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
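+/*
+ * Parse the comma-separated "filter_def" string into the per-RQ filter
+ * definitions (hw->config.filter_def[]); each value packs the r_ctl/type
+ * match and mask bytes used when registering the FCFI.
+ */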
+enum efct_hw_rtn
+efct_hw_parse_filter(struct efct_hw *hw, void *value)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	char *p = NULL;
+	char *token;
+	u32 idx = 0;
+
+	for (idx = 0; idx < ARRAY_SIZE(hw->config.filter_def); idx++)
+		hw->config.filter_def[idx] = 0;
+
+	p = kstrdup(value, GFP_KERNEL);
+	if (!p || !*p) {
+		efc_log_err(hw->os, "filter string is empty or alloc failed\n");
+		kfree(p);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	idx = 0;
+	while ((token = strsep(&p, ",")) && *token) {
+		if (kstrtou32(token, 0, &hw->config.filter_def[idx++]))
+			efc_log_err(hw->os, "kstrtou32 failed\n");
+
+		if (!p || !*p)
+			break;
+
+		if (idx == ARRAY_SIZE(hw->config.filter_def))
+			break;
+	}
+	kfree(p);
+
+	return rc;
+}
+
+u64
+efct_get_wwnn(struct efct_hw *hw)
+{
+	struct sli4 *sli = &hw->sli;
+	u8 p[8];
+
+	memcpy(p, sli->wwnn, sizeof(p));
+	return get_unaligned_be64(p);
+}
+
+u64
+efct_get_wwpn(struct efct_hw *hw)
+{
+	struct sli4 *sli = &hw->sli;
+	u8 p[8];
+
+	memcpy(p, sli->wwpn, sizeof(p));
+	return get_unaligned_be64(p);
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index b3d4d4bc8d8c..e5839254c730 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -614,4 +614,19 @@ struct efct_hw_grp_hdr {
 	u8			revision[32];
 };
 
+static inline int
+efct_hw_get_link_speed(struct efct_hw *hw)
+{
+	return hw->link.speed;
+}
+
+extern enum efct_hw_rtn
+efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev);
+enum efct_hw_rtn efct_hw_init(struct efct_hw *hw);
+extern enum efct_hw_rtn
+efct_hw_parse_filter(struct efct_hw *hw, void *value);
+extern u64
+efct_get_wwnn(struct efct_hw *hw);
+extern u64
+efct_get_wwpn(struct efct_hw *hw);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
new file mode 100644
index 000000000000..b683208d396f
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -0,0 +1,523 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_unsol.h"
+
+/* Post node event callback argument. */
+struct efct_xport_post_node_event {
+	struct completion done;
+	atomic_t refcnt;
+	struct efc_node *node;
+	u32	evt;
+	void *context;
+};
+
+static struct dentry *efct_debugfs_root;
+static atomic_t efct_debugfs_count;
+
+static struct scsi_host_template efct_template = {
+	.module			= THIS_MODULE,
+	.name			= EFCT_DRIVER_NAME,
+	.supported_mode		= MODE_TARGET,
+};
+
+/* globals */
+static struct fc_function_template efct_xport_functions;
+static struct fc_function_template efct_vport_functions;
+
+static struct scsi_transport_template *efct_xport_fc_tt;
+static struct scsi_transport_template *efct_vport_fc_tt;
+
+/*
+ * transport object is allocated,
+ * and associated with a device instance
+ */
+struct efct_xport *
+efct_xport_alloc(struct efct *efct)
+{
+	struct efct_xport *xport;
+
+	xport = kmalloc(sizeof(*xport), GFP_KERNEL);
+	if (!xport)
+		return xport;
+
+	memset(xport, 0, sizeof(*xport));
+	xport->efct = efct;
+	return xport;
+}
+
+static int
+efct_xport_init_debugfs(struct efct *efct)
+{
+	/* Setup efct debugfs root directory */
+	if (!efct_debugfs_root) {
+		efct_debugfs_root = debugfs_create_dir("efct", NULL);
+		atomic_set(&efct_debugfs_count, 0);
+		if (!efct_debugfs_root) {
+			efc_log_err(efct, "failed to create debugfs entry\n");
+			goto debugfs_fail;
+		}
+	}
+
+	/* Create a directory for sessions in root */
+	if (!efct->sess_debugfs_dir) {
+		efct->sess_debugfs_dir = debugfs_create_dir("sessions", NULL);
+		if (!efct->sess_debugfs_dir) {
+			efc_log_err(efct,
+				     "failed to create debugfs entry for sessions\n");
+			goto debugfs_fail;
+		}
+		atomic_inc(&efct_debugfs_count);
+	}
+
+	return EFC_SUCCESS;
+
+debugfs_fail:
+	return EFC_FAIL;
+}
+
+static void efct_xport_delete_debugfs(struct efct *efct)
+{
+	/* Remove session debugfs directory */
+	debugfs_remove(efct->sess_debugfs_dir);
+	efct->sess_debugfs_dir = NULL;
+	atomic_dec(&efct_debugfs_count);
+
+	if (atomic_read(&efct_debugfs_count) == 0) {
+		/* remove root debugfs directory */
+		debugfs_remove(efct_debugfs_root);
+		efct_debugfs_root = NULL;
+	}
+}
+
+int
+efct_xport_attach(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	int rc;
+
+	xport->fcfi.hold_frames = true;
+	spin_lock_init(&xport->fcfi.pend_frames_lock);
+	INIT_LIST_HEAD(&xport->fcfi.pend_frames);
+
+	rc = efct_hw_setup(&efct->hw, efct, efct->pcidev);
+	if (rc) {
+		efc_log_err(efct, "%s: Can't setup hardware\n", efct->desc);
+		return rc;
+	}
+
+	efct_hw_parse_filter(&efct->hw, (void *)efct->filter_def);
+
+	xport->io_pool = efct_io_pool_create(efct, efct->hw.config.n_sgl);
+	if (!xport->io_pool) {
+		efc_log_err(efct, "Can't allocate IO pool\n");
+		return -ENOMEM;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_xport_link_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_link_stat_counts *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_host_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_host_stat_counts *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_async_link_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_link_stat_counts *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+}
+
+static void
+efct_xport_async_host_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_host_stat_counts *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+}
+
+static void
+efct_xport_config_stats_timer(struct efct *efct);
+
+static void
+efct_xport_stats_timer_cb(struct timer_list *t)
+{
+	struct efct_xport *xport = from_timer(xport, t, stats_timer);
+	struct efct *efct = xport->efct;
+
+	efct_xport_config_stats_timer(efct);
+}
+
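+/*
+ * Kick off asynchronous reads of the link and host statistics and re-arm
+ * the timer, so the cached fc_xport_stats are refreshed roughly every
+ * 3 seconds.
+ */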
+static void
+efct_xport_config_stats_timer(struct efct *efct)
+{
+	u32 timeout = 3 * 1000;
+	struct efct_xport *xport = NULL;
+
+	if (!efct) {
+		pr_err("%s: failed to locate EFCT device\n", __func__);
+		return;
+	}
+
+	xport = efct->xport;
+	efct_hw_get_link_stats(&efct->hw, 0, 0, 0,
+			       efct_xport_async_link_stats_cb,
+			       &xport->fc_xport_stats);
+	efct_hw_get_host_stats(&efct->hw, 0, efct_xport_async_host_stats_cb,
+			       &xport->fc_xport_stats);
+
+	timer_setup(&xport->stats_timer,
+		    &efct_xport_stats_timer_cb, 0);
+	mod_timer(&xport->stats_timer,
+		  jiffies + msecs_to_jiffies(timeout));
+}
+
+int
+efct_xport_initialize(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	int rc = 0;
+
+	/* Initialize io lists */
+	spin_lock_init(&xport->io_pending_lock);
+	INIT_LIST_HEAD(&xport->io_pending_list);
+	atomic_set(&xport->io_active_count, 0);
+	atomic_set(&xport->io_pending_count, 0);
+	atomic_set(&xport->io_total_free, 0);
+	atomic_set(&xport->io_total_pending, 0);
+	atomic_set(&xport->io_alloc_failed_count, 0);
+	atomic_set(&xport->io_pending_recursing, 0);
+	rc = efct_hw_init(&efct->hw);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_init failure\n");
+		goto out;
+	}
+
+	rc = efct_scsi_tgt_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize target\n");
+		goto hw_init_out;
+	}
+
+	rc = efct_scsi_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize initiator\n");
+		goto tgt_dev_out;
+	}
+
+	/* Get FC link and host statistics periodically */
+	efct_xport_config_stats_timer(efct);
+
+	efct_xport_init_debugfs(efct);
+
+	return rc;
+
+tgt_dev_out:
+	efct_scsi_tgt_del_device(efct);
+
+hw_init_out:
+	efct_hw_teardown(&efct->hw);
+out:
+	return rc;
+}
+
+int
+efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
+		  union efct_xport_stats_u *result)
+{
+	u32 rc = 0;
+	struct efct *efct = NULL;
+	union efct_xport_stats_u value;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_CONFIG_PORT_STATUS:
+		if (xport->configured_link_state == 0) {
+			/*
+			 * Initial state is offline. configured_link_state is
+			 * set to online explicitly when port is brought online
+			 */
+			xport->configured_link_state = EFCT_XPORT_PORT_OFFLINE;
+		}
+		result->value = xport->configured_link_state;
+		break;
+
+	case EFCT_XPORT_PORT_STATUS:
+		/* Determine port status based on link speed. */
+		value.value = efct_hw_get_link_speed(&efct->hw);
+		if (value.value == 0)
+			result->value = 0;
+		else
+			result->value = 1;
+		rc = 0;
+		break;
+
+	case EFCT_XPORT_LINK_SPEED: {
+		result->value = efct_hw_get_link_speed(&efct->hw);
+
+		break;
+	}
+	case EFCT_XPORT_LINK_STATISTICS:
+		memcpy((void *)result, &efct->xport->fc_xport_stats,
+		       sizeof(union efct_xport_stats_u));
+		break;
+	case EFCT_XPORT_LINK_STAT_RESET: {
+		/* Create a completion to synchronize the stat reset process. */
+		init_completion(&result->stats.done);
+
+		/* First reset the link stats */
+		rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
+					    efct_xport_link_stats_cb, result);
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_test(efct, "sem wait failed\n");
+			rc = -ENXIO;
+			break;
+		}
+
+		/* Next reset the host stats */
+		rc = efct_hw_get_host_stats(&efct->hw, 1,
+					    efct_xport_host_stats_cb, result);
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_test(efct, "sem wait failed\n");
+			rc = -ENXIO;
+			break;
+		}
+		break;
+	}
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
+
+int
+efct_scsi_new_device(struct efct *efct)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return EFC_FAIL;
+	}
+
+	/* save shost to initiator-client context */
+	efct->shost = shost;
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport *)shost->hostdata;
+	vport->efct = efct;
+
+	/*
+	 * Set initial can_queue value to the max SCSI IOs. This is the maximum
+	 * global queue depth (as opposed to the per-LUN queue depth,
+	 * .cmd_per_lun). This may need to be adjusted for I+T mode.
+	 */
+	shost->can_queue = efct->hw.config.n_io;
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/*
+	 * can only accept (from mid-layer) as many SGEs as we've
+	 * pre-registered
+	 */
+	shost->sg_tablesize = sli_get_max_sgl(&efct->hw.sli);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_xport_fc_tt;
+	efc_log_debug(efct, "transport template=%p\n", efct_xport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, &efct->pcidev->dev,
+				       &efct->pcidev->dev);
+	if (error) {
+		efc_log_err(efct, "failed scsi_add_host_with_dma\n");
+		scsi_host_put(shost);
+		return EFC_FAIL;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		     "Emulex %s FV%s DV%s", efct->model,
+		     efct->hw.sli.fw_name[0], EFCT_DRIVER_VERSION);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+
+	fc_host_node_name(shost) = efct_get_wwnn(&efct->hw);
+	fc_host_port_name(shost) = efct_get_wwpn(&efct->hw);
+	fc_host_max_npiv_vports(shost) = 128;
+
+	return EFC_SUCCESS;
+}
+
+struct scsi_transport_template *
+efct_attach_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_xport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+struct scsi_transport_template *
+efct_attach_vport_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_vport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+int
+efct_scsi_reg_fc_transport(void)
+{
+	/* attach to the appropriate scsi_transport_* module */
+	efct_xport_fc_tt = efct_attach_fc_transport();
+	if (!efct_xport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		return EFC_FAIL;
+	}
+
+	efct_vport_fc_tt = efct_attach_vport_fc_transport();
+	if (!efct_vport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		efct_release_fc_transport(efct_xport_fc_tt);
+		efct_xport_fc_tt = NULL;
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_scsi_release_fc_transport(void)
+{
+	/* detach from scsi_transport_* */
+	efct_release_fc_transport(efct_xport_fc_tt);
+	efct_xport_fc_tt = NULL;
+	if (efct_vport_fc_tt)
+		efct_release_fc_transport(efct_vport_fc_tt);
+	efct_vport_fc_tt = NULL;
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_xport.h b/drivers/scsi/elx/efct/efct_xport.h
new file mode 100644
index 000000000000..0866edc55c54
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.h
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_XPORT_H__)
+#define __EFCT_XPORT_H__
+
+/* FCFI lookup/pending frames */
+struct efct_xport_fcfi {
+	/* lock to protect pending frames access*/
+	spinlock_t	pend_frames_lock;
+	struct list_head	pend_frames;
+	/* hold pending frames */
+	bool hold_frames;
+	/* count of pending frames that were processed */
+	u32	pend_frames_processed;
+};
+
+enum efct_xport_ctrl {
+	EFCT_XPORT_PORT_ONLINE = 1,
+	EFCT_XPORT_PORT_OFFLINE,
+	EFCT_XPORT_SHUTDOWN,
+	EFCT_XPORT_POST_NODE_EVENT,
+	EFCT_XPORT_WWNN_SET,
+	EFCT_XPORT_WWPN_SET,
+};
+
+enum efct_xport_status {
+	EFCT_XPORT_PORT_STATUS,
+	EFCT_XPORT_CONFIG_PORT_STATUS,
+	EFCT_XPORT_LINK_SPEED,
+	EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+	EFCT_XPORT_LINK_STATISTICS,
+	EFCT_XPORT_LINK_STAT_RESET,
+	EFCT_XPORT_IS_QUIESCED
+};
+
+struct efct_xport_link_stats {
+	bool		rec;
+	bool		gec;
+	bool		w02of;
+	bool		w03of;
+	bool		w04of;
+	bool		w05of;
+	bool		w06of;
+	bool		w07of;
+	bool		w08of;
+	bool		w09of;
+	bool		w10of;
+	bool		w11of;
+	bool		w12of;
+	bool		w13of;
+	bool		w14of;
+	bool		w15of;
+	bool		w16of;
+	bool		w17of;
+	bool		w18of;
+	bool		w19of;
+	bool		w20of;
+	bool		w21of;
+	bool		clrc;
+	bool		clof1;
+	u32		link_failure_error_count;
+	u32		loss_of_sync_error_count;
+	u32		loss_of_signal_error_count;
+	u32		primitive_sequence_error_count;
+	u32		invalid_transmission_word_error_count;
+	u32		crc_error_count;
+	u32		primitive_sequence_event_timeout_count;
+	u32		elastic_buffer_overrun_error_count;
+	u32		arbitration_fc_al_timeout_count;
+	u32		advertised_receive_bufftor_to_buffer_credit;
+	u32		current_receive_buffer_to_buffer_credit;
+	u32		advertised_transmit_buffer_to_buffer_credit;
+	u32		current_transmit_buffer_to_buffer_credit;
+	u32		received_eofa_count;
+	u32		received_eofdti_count;
+	u32		received_eofni_count;
+	u32		received_soff_count;
+	u32		received_dropped_no_aer_count;
+	u32		received_dropped_no_available_rpi_resources_count;
+	u32		received_dropped_no_available_xri_resources_count;
+};
+
+struct efct_xport_host_stats {
+	bool		cc;
+	u32		transmit_kbyte_count;
+	u32		receive_kbyte_count;
+	u32		transmit_frame_count;
+	u32		receive_frame_count;
+	u32		transmit_sequence_count;
+	u32		receive_sequence_count;
+	u32		total_exchanges_originator;
+	u32		total_exchanges_responder;
+	u32		receive_p_bsy_count;
+	u32		receive_f_bsy_count;
+	u32		dropped_frames_due_to_no_rq_buffer_count;
+	u32		empty_rq_timeout_count;
+	u32		dropped_frames_due_to_no_xri_count;
+	u32		empty_xri_pool_count;
+};
+
+struct efct_xport_host_statistics {
+	struct completion		done;
+	struct efct_xport_link_stats	link_stats;
+	struct efct_xport_host_stats	host_stats;
+};
+
+union efct_xport_stats_u {
+	u32	value;
+	struct efct_xport_host_statistics stats;
+};
+
+struct efct_xport_fcp_stats {
+	u64		input_bytes;
+	u64		output_bytes;
+	u64		input_requests;
+	u64		output_requests;
+	u64		control_requests;
+};
+
+struct efct_xport {
+	struct efct		*efct;
+	/* wwpn requested by user for primary sport */
+	u64			req_wwpn;
+	/* wwnn requested by user for primary sport */
+	u64			req_wwnn;
+
+	struct efct_xport_fcfi	fcfi;
+
+	/* Nodes */
+	/* number of allocated nodes */
+	u32			nodes_count;
+	/* array of pointers to nodes */
+	struct efc_node		**nodes;
+	/* linked list of free nodes */
+	struct list_head	nodes_free_list;
+
+	/* Io pool and counts */
+	/* pointer to IO pool */
+	struct efct_io_pool	*io_pool;
+	/* used to track how often IO pool is empty */
+	atomic_t		io_alloc_failed_count;
+	/* lock for io_pending_list */
+	spinlock_t		io_pending_lock;
+	/* list of IOs waiting for HW resources
+	 *  lock: xport->io_pending_lock
+	 *  link: efct_io_s->io_pending_link
+	 */
+	struct list_head	io_pending_list;
+	/* count of total IOs allocated */
+	atomic_t		io_total_alloc;
+	/* count of total IOs freed */
+	atomic_t		io_total_free;
+	/* count of total IOs that were pended */
+	atomic_t		io_total_pending;
+	/* count of active IOs */
+	atomic_t		io_active_count;
+	/* count of pending IOs */
+	atomic_t		io_pending_count;
+	/* non-zero if efct_scsi_check_pending is executing */
+	atomic_t		io_pending_recursing;
+
+	/* Port */
+	/* requested link state */
+	u32			configured_link_state;
+
+	/* Timer for Statistics */
+	struct timer_list	stats_timer;
+	union efct_xport_stats_u fc_xport_stats;
+	struct efct_xport_fcp_stats fcp_stats;
+};
+
+struct efct_rport_data {
+	struct efc_node		*node;
+};
+
+extern struct efct_xport *
+efct_xport_alloc(struct efct *efct);
+extern int
+efct_xport_attach(struct efct_xport *xport);
+extern int
+efct_xport_initialize(struct efct_xport *xport);
+extern int
+efct_xport_detach(struct efct_xport *xport);
+extern int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...);
+extern int
+efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
+		  union efct_xport_stats_u *result);
+extern void
+efct_xport_free(struct efct_xport *xport);
+
+struct scsi_transport_template *efct_attach_fc_transport(void);
+struct scsi_transport_template *efct_attach_vport_fc_transport(void);
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template);
+
+#endif /* __EFCT_XPORT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (15 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 16/31] elx: efct: Driver initialization routines James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  7:14   ` Hannes Reinecke
  2020-04-16  8:24   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
                   ` (13 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for queue creation, deletion, and configuration (the default
topology built at init is sketched below).
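
efct_hw_init_queues() builds a single default topology: one EQ, with
separate CQs feeding an RQ pair, an MQ and a WQ. A rough sketch of the
creation chain (entry counts come from hw->num_qentries[] and the
EFCT_HW_*_DEPTH defines):

  eq = efct_hw_new_eq(hw, EFCT_HW_EQ_DEPTH);
  cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
  rq = efct_hw_new_rq(cq, EFCT_HW_RQ_ENTRIES_DEF);
  cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
  mq = efct_hw_new_mq(cq, EFCT_HW_MQ_DEPTH);
  cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
  wq = efct_hw_new_wq(cq, hw->num_qentries[SLI_QTYPE_WQ]);

Each efct_hw_new_*() allocates the object, calls sli_queue_alloc() and
links it to its parent; a failure at any step tears the whole set down
via efct_hw_queue_teardown().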

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Removed all Queue topology parsing code
  Reworked queue creation code.
---
 drivers/scsi/elx/efct/efct_hw_queues.c | 765 +++++++++++++++++++++++++++++++++
 1 file changed, 765 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c

diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
new file mode 100644
index 000000000000..c343e7c5b20d
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw_queues.c
@@ -0,0 +1,765 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_unsol.h"
+
+/* Create and initialize the SLI queues */
+enum efct_hw_rtn
+efct_hw_init_queues(struct efct_hw *hw)
+{
+	struct hw_eq *eq = NULL;
+	struct hw_cq *cq = NULL;
+	struct hw_wq *wq = NULL;
+	struct hw_rq *rq = NULL;
+	struct hw_mq *mq = NULL;
+
+	hw->eq_count = 0;
+	hw->cq_count = 0;
+	hw->mq_count = 0;
+	hw->wq_count = 0;
+	hw->rq_count = 0;
+	hw->hw_rq_count = 0;
+	INIT_LIST_HEAD(&hw->eq_list);
+
+	/* Create EQ */
+	eq = efct_hw_new_eq(hw, EFCT_HW_EQ_DEPTH);
+	if (!eq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/* Create RQ */
+	cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
+	if (!cq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	rq = efct_hw_new_rq(cq, EFCT_HW_RQ_ENTRIES_DEF);
+	if (!rq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/* Create MQ */
+	cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
+	if (!cq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	mq = efct_hw_new_mq(cq, EFCT_HW_MQ_DEPTH);
+	if (!mq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/* Create WQ */
+	cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
+	if (!cq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	wq = efct_hw_new_wq(cq, hw->num_qentries[SLI_QTYPE_WQ]);
+	if (!wq) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+/* Allocate a new EQ object */
+struct hw_eq *
+efct_hw_new_eq(struct efct_hw *hw, u32 entry_count)
+{
+	struct hw_eq *eq = kmalloc(sizeof(*eq), GFP_KERNEL);
+
+	if (eq) {
+		memset(eq, 0, sizeof(*eq));
+		eq->type = SLI_QTYPE_EQ;
+		eq->hw = hw;
+		eq->entry_count = entry_count;
+		eq->instance = hw->eq_count++;
+		eq->queue = &hw->eq[eq->instance];
+		INIT_LIST_HEAD(&eq->cq_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_EQ,
+					eq->queue,
+					entry_count, NULL)) {
+			efc_log_err(hw->os,
+					"EQ[%d] allocation failure\n",
+					eq->instance);
+			kfree(eq);
+			eq = NULL;
+		} else {
+			sli_eq_modify_delay(&hw->sli, eq->queue,
+					1, 0, 8);
+			hw->hw_eq[eq->instance] = eq;
+			INIT_LIST_HEAD(&eq->list_entry);
+			list_add_tail(&eq->list_entry, &hw->eq_list);
+			efc_log_debug(hw->os,
+					"create eq[%2d] id %3d len %4d\n",
+					eq->instance, eq->queue->id,
+					eq->entry_count);
+		}
+	}
+	return eq;
+}
+
+/* Allocate a new CQ object */
+struct hw_cq *
+efct_hw_new_cq(struct hw_eq *eq, u32 entry_count)
+{
+	struct efct_hw *hw = eq->hw;
+	struct hw_cq *cq = kmalloc(sizeof(*cq), GFP_KERNEL);
+
+	if (cq) {
+		memset(cq, 0, sizeof(*cq));
+		cq->eq = eq;
+		cq->type = SLI_QTYPE_CQ;
+		cq->instance = eq->hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue = &hw->cq[cq->instance];
+
+		INIT_LIST_HEAD(&cq->q_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_CQ, cq->queue,
+				    cq->entry_count, eq->queue)) {
+			efc_log_err(hw->os,
+				     "CQ[%d] allocation failure len=%d\n",
+				    cq->instance,
+				    cq->entry_count);
+			kfree(cq);
+			cq = NULL;
+		} else {
+			hw->hw_cq[cq->instance] = cq;
+			INIT_LIST_HEAD(&cq->list_entry);
+			list_add_tail(&cq->list_entry, &eq->cq_list);
+			efc_log_debug(hw->os,
+				       "create cq[%2d] id %3d len %4d\n",
+				      cq->instance, cq->queue->id,
+				      cq->entry_count);
+		}
+	}
+	return cq;
+}
+
+/* Allocate a new CQ Set of objects */
+u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count)
+{
+	u32 i;
+	struct efct_hw *hw = eqs[0]->hw;
+	struct sli4 *sli4 = &hw->sli;
+	struct hw_cq *cq = NULL;
+	struct sli4_queue *qs[SLI_MAX_CQ_SET_COUNT];
+	struct sli4_queue *assocs[SLI_MAX_CQ_SET_COUNT];
+
+	/* Initialise CQS pointers to NULL */
+	for (i = 0; i < num_cqs; i++)
+		cqs[i] = NULL;
+
+	for (i = 0; i < num_cqs; i++) {
+		cq = kmalloc(sizeof(*cq), GFP_KERNEL);
+		if (!cq)
+			goto error;
+
+		memset(cq, 0, sizeof(*cq));
+		cqs[i]          = cq;
+		cq->eq          = eqs[i];
+		cq->type        = SLI_QTYPE_CQ;
+		cq->instance    = hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue       = &hw->cq[cq->instance];
+		qs[i]           = cq->queue;
+		assocs[i]        = eqs[i]->queue;
+		INIT_LIST_HEAD(&cq->q_list);
+	}
+
+	if (!sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assocs)) {
+		efc_log_err(hw->os, "Failed to create CQ Set.\n");
+		goto error;
+	}
+
+	for (i = 0; i < num_cqs; i++) {
+		hw->hw_cq[cqs[i]->instance] = cqs[i];
+		INIT_LIST_HEAD(&cqs[i]->list_entry);
+		list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list);
+	}
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_cqs; i++) {
+		kfree(cqs[i]);
+		cqs[i] = NULL;
+	}
+	return EFC_FAIL;
+}
+
+/* Allocate a new MQ object */
+struct hw_mq *
+efct_hw_new_mq(struct hw_cq *cq, u32 entry_count)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_mq *mq = kmalloc(sizeof(*mq), GFP_KERNEL);
+
+	if (mq) {
+		memset(mq, 0, sizeof(*mq));
+		mq->cq = cq;
+		mq->type = SLI_QTYPE_MQ;
+		mq->instance = cq->eq->hw->mq_count++;
+		mq->entry_count = entry_count;
+		mq->entry_size = EFCT_HW_MQ_DEPTH;
+		mq->queue = &hw->mq[mq->instance];
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_MQ,
+				    mq->queue,
+				    mq->entry_size,
+				    cq->queue)) {
+			efc_log_err(hw->os, "MQ allocation failure\n");
+			kfree(mq);
+			mq = NULL;
+		} else {
+			hw->hw_mq[mq->instance] = mq;
+			INIT_LIST_HEAD(&mq->list_entry);
+			list_add_tail(&mq->list_entry, &cq->q_list);
+			efc_log_debug(hw->os,
+				       "create mq[%2d] id %3d len %4d\n",
+				      mq->instance, mq->queue->id,
+				      mq->entry_count);
+		}
+	}
+	return mq;
+}
+
+/* Allocate a new WQ object */
+struct hw_wq *
+efct_hw_new_wq(struct hw_cq *cq, u32 entry_count)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_wq *wq = kmalloc(sizeof(*wq), GFP_KERNEL);
+
+	if (wq) {
+		memset(wq, 0, sizeof(*wq));
+		wq->hw = cq->eq->hw;
+		wq->cq = cq;
+		wq->type = SLI_QTYPE_WQ;
+		wq->instance = cq->eq->hw->wq_count++;
+		wq->entry_count = entry_count;
+		wq->queue = &hw->wq[wq->instance];
+		wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT;
+		wq->wqec_count = wq->wqec_set_count;
+		wq->free_count = wq->entry_count - 1;
+		INIT_LIST_HEAD(&wq->pending_list);
+
+		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_WQ, wq->queue,
+				    wq->entry_count, cq->queue)) {
+			efc_log_err(hw->os, "WQ allocation failure\n");
+			kfree(wq);
+			wq = NULL;
+		} else {
+			hw->hw_wq[wq->instance] = wq;
+			INIT_LIST_HEAD(&wq->list_entry);
+			list_add_tail(&wq->list_entry, &cq->q_list);
+			efc_log_debug(hw->os,
+				       "create wq[%2d] id %3d len %4d cls %d\n",
+				wq->instance, wq->queue->id,
+				wq->entry_count, wq->class);
+		}
+	}
+	return wq;
+}
+
+/* Allocate an RQ object, which encapsulates 2 SLI queues (for rq pair) */
+struct hw_rq *
+efct_hw_new_rq(struct hw_cq *cq, u32 entry_count)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_rq *rq = kmalloc(sizeof(*rq), GFP_KERNEL);
+
+	if (rq) {
+		memset(rq, 0, sizeof(*rq));
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cq;
+		rq->type = SLI_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Create the header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+
+		if (sli_fc_rq_alloc(&hw->sli, rq->hdr,
+				    rq->entry_count,
+				    rq->hdr_entry_size,
+				    cq->queue,
+				    true)) {
+			efc_log_err(hw->os,
+				     "RQ allocation failure - header\n");
+			kfree(rq);
+			return NULL;
+		}
+		/* Update hw_rq_lookup[] */
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		efc_log_debug(hw->os,
+			      "create rq[%2d] id %3d len %4d hdr  size %4d\n",
+			      rq->instance, rq->hdr->id, rq->entry_count,
+			      rq->hdr_entry_size);
+
+		/* Create the default data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+
+		if (sli_fc_rq_alloc(&hw->sli, rq->data,
+				    rq->entry_count,
+				    rq->data_entry_size,
+				    cq->queue,
+				    false)) {
+			efc_log_err(hw->os,
+				     "RQ allocation failure - first burst\n");
+			kfree(rq);
+			return NULL;
+		}
+		/* Update hw_rq_lookup[] */
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		efc_log_debug(hw->os,
+			       "create rq[%2d] id %3d len %4d data size %4d\n",
+			 rq->instance, rq->data->id, rq->entry_count,
+			 rq->data_entry_size);
+
+		hw->hw_rq[rq->instance] = rq;
+		INIT_LIST_HEAD(&rq->list_entry);
+		list_add_tail(&rq->list_entry, &cq->q_list);
+
+		rq->rq_tracker = kcalloc(rq->entry_count,
+					 sizeof(struct efc_hw_sequence *),
+					 GFP_KERNEL);
+		if (!rq->rq_tracker)
+			return NULL;
+	}
+	return rq;
+}
+
+/*
+ * Allocate an RQ object SET, where each element in the set
+ * encapsulates 2 SLI queues (for an RQ pair)
+ */
+u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count)
+{
+	struct efct_hw *hw = cqs[0]->eq->hw;
+	struct hw_rq *rq = NULL;
+	struct sli4_queue *qs[SLI_MAX_RQ_SET_COUNT * 2] = { NULL };
+	u32 i, q_count, size;
+
+	/* Initialise RQS pointers */
+	for (i = 0; i < num_rq_pairs; i++)
+		rqs[i] = NULL;
+
+	for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) {
+		rq = kmalloc(sizeof(*rq), GFP_KERNEL);
+		if (!rq)
+			goto error;
+
+		memset(rq, 0, sizeof(*rq));
+		rqs[i] = rq;
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cqs[i];
+		rq->type = SLI_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count] = rq->hdr;
+
+		/* Data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count + 1] = rq->data;
+
+		rq->rq_tracker = NULL;
+	}
+
+	if (!sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs,
+				cqs[0]->queue->id,
+			    rqs[0]->entry_count,
+			    rqs[0]->hdr_entry_size,
+			    rqs[0]->data_entry_size)) {
+		efc_log_err(hw->os,
+			     "RQ Set allocation failure for base CQ=%d\n",
+			    cqs[0]->queue->id);
+		goto error;
+	}
+
+	for (i = 0; i < num_rq_pairs; i++) {
+		hw->hw_rq[rqs[i]->instance] = rqs[i];
+		INIT_LIST_HEAD(&rqs[i]->list_entry);
+		list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list);
+		size = sizeof(struct efc_hw_sequence *) * rqs[i]->entry_count;
+		rqs[i]->rq_tracker = kmalloc(size, GFP_KERNEL);
+		if (!rqs[i]->rq_tracker)
+			goto error;
+	}
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_rq_pairs; i++) {
+		if (rqs[i]) {
+			kfree(rqs[i]->rq_tracker);
+			kfree(rqs[i]);
+		}
+	}
+
+	return EFC_FAIL;
+}
+
+void
+efct_hw_del_eq(struct hw_eq *eq)
+{
+	if (eq) {
+		struct hw_cq *cq;
+		struct hw_cq *cq_next;
+
+		list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry)
+			efct_hw_del_cq(cq);
+		list_del(&eq->list_entry);
+		eq->hw->hw_eq[eq->instance] = NULL;
+		kfree(eq);
+	}
+}
+
+void
+efct_hw_del_cq(struct hw_cq *cq)
+{
+	if (cq) {
+		struct hw_q *q;
+		struct hw_q *q_next;
+
+		list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) {
+			switch (q->type) {
+			case SLI_QTYPE_MQ:
+				efct_hw_del_mq((struct hw_mq *)q);
+				break;
+			case SLI_QTYPE_WQ:
+				efct_hw_del_wq((struct hw_wq *)q);
+				break;
+			case SLI_QTYPE_RQ:
+				efct_hw_del_rq((struct hw_rq *)q);
+				break;
+			default:
+				break;
+			}
+		}
+		list_del(&cq->list_entry);
+		cq->eq->hw->hw_cq[cq->instance] = NULL;
+		kfree(cq);
+	}
+}
+
+void
+efct_hw_del_mq(struct hw_mq *mq)
+{
+	if (mq) {
+		list_del(&mq->list_entry);
+		mq->cq->eq->hw->hw_mq[mq->instance] = NULL;
+		kfree(mq);
+	}
+}
+
+void
+efct_hw_del_wq(struct hw_wq *wq)
+{
+	if (wq) {
+		list_del(&wq->list_entry);
+		wq->cq->eq->hw->hw_wq[wq->instance] = NULL;
+		kfree(wq);
+	}
+}
+
+void
+efct_hw_del_rq(struct hw_rq *rq)
+{
+	struct efct_hw *hw = NULL;
+
+	if (rq) {
+		/* Free RQ tracker */
+		kfree(rq->rq_tracker);
+		rq->rq_tracker = NULL;
+		list_del(&rq->list_entry);
+		hw = rq->cq->eq->hw;
+		hw->hw_rq[rq->instance] = NULL;
+		kfree(rq);
+	}
+}
+
+void
+efct_hw_queue_dump(struct efct_hw *hw)
+{
+	struct hw_eq *eq;
+	struct hw_cq *cq;
+	struct hw_q *q;
+	struct hw_mq *mq;
+	struct hw_wq *wq;
+	struct hw_rq *rq;
+
+	list_for_each_entry(eq, &hw->eq_list, list_entry) {
+		efc_log_debug(hw->os, "eq[%d] id %2d\n",
+			       eq->instance, eq->queue->id);
+		list_for_each_entry(cq, &eq->cq_list, list_entry) {
+			efc_log_debug(hw->os, "cq[%d] id %2d current\n",
+				       cq->instance, cq->queue->id);
+			list_for_each_entry(q, &cq->q_list, list_entry) {
+				switch (q->type) {
+				case SLI_QTYPE_MQ:
+					mq = (struct hw_mq *)q;
+					efc_log_debug(hw->os,
+						       "    mq[%d] id %2d\n",
+					       mq->instance, mq->queue->id);
+					break;
+				case SLI_QTYPE_WQ:
+					wq = (struct hw_wq *)q;
+					efc_log_debug(hw->os,
+						       "    wq[%d] id %2d\n",
+						wq->instance, wq->queue->id);
+					break;
+				case SLI_QTYPE_RQ:
+					rq = (struct hw_rq *)q;
+					efc_log_debug(hw->os,
+						       "    rq[%d] hdr id %2d\n",
+					       rq->instance, rq->hdr->id);
+					break;
+				default:
+					break;
+				}
+			}
+		}
+	}
+}
+
+void
+efct_hw_queue_teardown(struct efct_hw *hw)
+{
+	struct hw_eq *eq;
+	struct hw_eq *eq_next;
+
+	if (hw->eq_list.next) {
+		list_for_each_entry_safe(eq, eq_next, &hw->eq_list,
+					 list_entry) {
+			efct_hw_del_eq(eq);
+		}
+	}
+}
+
+static inline int
+efct_hw_rqpair_find(struct efct_hw *hw, u16 rq_id)
+{
+	return efct_hw_queue_hash_find(hw->rq_hash, rq_id);
+}
+
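+/* Fetch (and clear) the sequence posted at bufindex of the given RQ pair */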
+static struct efc_hw_sequence *
+efct_hw_rqpair_get(struct efct_hw *hw, u16 rqindex, u16 bufindex)
+{
+	struct sli4_queue *rq_hdr = &hw->rq[rqindex];
+	struct efc_hw_sequence *seq = NULL;
+	struct hw_rq *rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	unsigned long flags = 0;
+
+	if (bufindex >= rq_hdr->length) {
+		efc_log_err(hw->os,
+				"RQidx %d bufidx %d exceed ring len %d for id %d\n",
+				rqindex, bufindex, rq_hdr->length, rq_hdr->id);
+		return NULL;
+	}
+
+	/* rq_hdr lock also covers rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	seq = rq->rq_tracker[bufindex];
+	rq->rq_tracker[bufindex] = NULL;
+
+	if (!seq) {
+		efc_log_err(hw->os,
+			     "RQbuf NULL, rqidx %d, bufidx %d, cur q idx = %d\n",
+			     rqindex, bufindex, rq_hdr->index);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return seq;
+}
+
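+/*
+ * Handle an async RQ CQE: look up the RQ pair, claim the posted
+ * header/payload buffer pair from the tracker, fill in the sequence
+ * lengths and hand it to the unsolicited frame handler.
+ */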
+int
+efct_hw_rqpair_process_rq(struct efct_hw *hw, struct hw_cq *cq,
+			  u8 *cqe)
+{
+	u16 rq_id;
+	u32 index;
+	int rqindex;
+	int	 rq_status;
+	u32 h_len;
+	u32 p_len;
+	struct efc_hw_sequence *seq;
+	struct hw_rq *rq;
+
+	rq_status = sli_fc_rqe_rqid_and_index(&hw->sli, cqe,
+					      &rq_id, &index);
+	if (rq_status != 0) {
+		switch (rq_status) {
+		case SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED:
+		case SLI4_FC_ASYNC_RQ_DMA_FAILURE:
+			/* just get RQ buffer then return to chip */
+			rqindex = efct_hw_rqpair_find(hw, rq_id);
+			if (rqindex < 0) {
+				efc_log_test(hw->os,
+					      "status=%#x: lookup fail id=%#x\n",
+					     rq_status, rq_id);
+				break;
+			}
+
+			/* get RQ buffer */
+			seq = efct_hw_rqpair_get(hw, rqindex, index);
+
+			/* return to chip */
+			if (efct_hw_rqpair_sequence_free(hw, seq)) {
+				efc_log_test(hw->os,
+					      "status=%#x,fail rtrn buf to RQ\n",
+					     rq_status);
+				break;
+			}
+			break;
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED:
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC:
+			/*
+			 * Since RQ buffers were not consumed, they cannot be
+			 * returned to the chip.
+			 */
+			efc_log_debug(hw->os, "Warning: RCQE status=%#x\n",
+				       rq_status);
+			fallthrough;
+		default:
+			break;
+		}
+		return EFC_FAIL;
+	}
+
+	rqindex = efct_hw_rqpair_find(hw, rq_id);
+	if (rqindex < 0) {
+		efc_log_test(hw->os, "Error: rq_id lookup failed for id=%#x\n",
+			      rq_id);
+		return EFC_FAIL;
+	}
+
+	rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	rq->use_count++;
+
+	seq = efct_hw_rqpair_get(hw, rqindex, index);
+	if (WARN_ON(!seq))
+		return EFC_FAIL;
+
+	seq->hw = hw;
+	seq->auto_xrdy = 0;
+	seq->out_of_xris = 0;
+	seq->hio = NULL;
+
+	sli_fc_rqe_length(&hw->sli, cqe, &h_len, &p_len);
+	seq->header->dma.len = h_len;
+	seq->payload->dma.len = p_len;
+	seq->fcfi = sli_fc_rqe_fcfi(&hw->sli, cqe);
+	seq->hw_priv = cq->eq;
+
+	efct_unsolicited_cb(hw->os, seq);
+
+	return EFC_SUCCESS;
+}
+
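+/*
+ * Re-post a header/payload buffer pair to the hardware RQs and record
+ * the sequence in rq_tracker at the returned write index.
+ */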
+static int
+efct_hw_rqpair_put(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	struct sli4_queue *rq_hdr = &hw->rq[seq->header->rqindex];
+	struct sli4_queue *rq_payload = &hw->rq[seq->payload->rqindex];
+	u32 hw_rq_index = hw->hw_rq_lookup[seq->header->rqindex];
+	struct hw_rq *rq = hw->hw_rq[hw_rq_index];
+	u32     phys_hdr[2];
+	u32     phys_payload[2];
+	int      qindex_hdr;
+	int      qindex_payload;
+	unsigned long flags = 0;
+
+	/* Update the RQ verification lookup tables */
+	phys_hdr[0] = upper_32_bits(seq->header->dma.phys);
+	phys_hdr[1] = lower_32_bits(seq->header->dma.phys);
+	phys_payload[0] = upper_32_bits(seq->payload->dma.phys);
+	phys_payload[1] = lower_32_bits(seq->payload->dma.phys);
+
+	/* rq_hdr lock also covers payload / header->rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	/*
+	 * Note: The header must be posted last for buffer pair mode because
+	 *       posting on the header queue posts the payload queue as well.
+	 *       We do not ring the payload queue independently in RQ pair mode.
+	 */
+	qindex_payload = sli_rq_write(&hw->sli, rq_payload,
+				      (void *)phys_payload);
+	qindex_hdr = sli_rq_write(&hw->sli, rq_hdr, (void *)phys_hdr);
+	if (qindex_hdr < 0 ||
+	    qindex_payload < 0) {
+		efc_log_err(hw->os, "RQ_ID=%#x write failed\n", rq_hdr->id);
+		spin_unlock_irqrestore(&rq_hdr->lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* ensure the indexes are the same */
+	WARN_ON(qindex_hdr != qindex_payload);
+
+	/* Update the lookup table */
+	if (!rq->rq_tracker[qindex_hdr]) {
+		rq->rq_tracker[qindex_hdr] = seq;
+	} else {
+		efc_log_test(hw->os,
+			      "expected rq_tracker[%d][%d] buffer to be NULL\n",
+			     hw_rq_index, qindex_hdr);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_rqpair_sequence_free(struct efct_hw *hw,
+			     struct efc_hw_sequence *seq)
+{
+	enum efct_hw_rtn   rc = EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * Post the data buffer first. Because in RQ pair mode, ringing the
+	 * doorbell of the header ring will post the data buffer as well.
+	 */
+	if (efct_hw_rqpair_put(hw, seq)) {
+		efc_log_err(hw->os, "error writing buffers\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (16 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  7:24   ` Hannes Reinecke
  2020-04-16  8:41   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization James Smart
                   ` (12 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
RQ data buffer allocation and deallocation.
Memory pool allocation and deallocation APIs.
Mailbox command submission and completion routines (usage sketched
below).
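
Mailbox commands are sent through efct_hw_command() in one of two modes:
EFCT_CMD_POLL writes the bootstrap mailbox and waits for the chip to
complete it, while EFCT_CMD_NOWAIT queues the MQE on cmd_pending and
invokes the caller's callback from MQ completion processing. Given a
SLI4_BMBX_SIZE buffer built by one of the sli_cmd_*() helpers, a rough
caller sketch (callback/argument names are illustrative only):

  /* synchronous: poll the bootstrap mailbox */
  rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);

  /* asynchronous: my_done_cb(hw, status, buf, my_arg) runs on MQ
   * completion; the command buffer must remain valid until then
   */
  rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT, my_done_cb, my_arg);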

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  efct_utils.c file is removed. Replaced efct_pool, efct_varray and
  efct_array with other alternatives.
---
 drivers/scsi/elx/efct/efct_hw.c | 375 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |   7 +
 2 files changed, 382 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 21fcaf7b3d2b..3e9906749da2 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -1114,3 +1114,378 @@ efct_get_wwpn(struct efct_hw *hw)
 	memcpy(p, sli->wwpn, sizeof(p));
 	return get_unaligned_be64(p);
 }
+
+/*
+ * An array of struct efc_hw_rq_buffer is allocated,
+ * along with the required DMA memory.
+ */
+static struct efc_hw_rq_buffer *
+efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
+			u32 size)
+{
+	struct efct *efct = hw->os;
+	struct efc_hw_rq_buffer *rq_buf = NULL;
+	struct efc_hw_rq_buffer *prq;
+	u32 i;
+
+	if (count != 0) {
+		rq_buf = kmalloc_array(count, sizeof(*rq_buf), GFP_ATOMIC);
+		if (!rq_buf)
+			return NULL;
+		memset(rq_buf, 0, sizeof(*rq_buf) * count);
+
+		for (i = 0, prq = rq_buf; i < count; i ++, prq++) {
+			prq->rqindex = rqindex;
+			prq->dma.size = size;
+			prq->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+							   prq->dma.size,
+							   &prq->dma.phys,
+							   GFP_DMA);
+			if (!prq->dma.virt) {
+				efc_log_err(hw->os, "DMA allocation failed\n");
+				kfree(rq_buf);
+				rq_buf = NULL;
+				break;
+			}
+		}
+	}
+	return rq_buf;
+}
+
+static void
+efct_hw_rx_buffer_free(struct efct_hw *hw,
+		       struct efc_hw_rq_buffer *rq_buf,
+			u32 count)
+{
+	struct efct *efct = hw->os;
+	u32 i;
+	struct efc_hw_rq_buffer *prq;
+
+	if (rq_buf) {
+		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
+			dma_free_coherent(&efct->pcidev->dev,
+					  prq->dma.size, prq->dma.virt,
+					  prq->dma.phys);
+			memset(&prq->dma, 0, sizeof(struct efc_dma));
+		}
+
+		kfree(rq_buf);
+	}
+}
+
+enum efct_hw_rtn
+efct_hw_rx_allocate(struct efct_hw *hw)
+{
+	struct efct *efct = hw->os;
+	u32 i;
+	int rc = EFCT_HW_RTN_SUCCESS;
+	u32 rqindex = 0;
+	struct hw_rq *rq;
+	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
+	u32 payload_size = hw->config.rq_default_buffer_size;
+
+	rqindex = 0;
+
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+
+		/* Allocate header buffers */
+		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+						      rq->entry_count,
+						      hdr_size);
+		if (!rq->hdr_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
+			      i, rq->hdr->id, rq->entry_count, hdr_size);
+
+		rqindex++;
+
+		/* Allocate payload buffers */
+		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+							  rq->entry_count,
+							  payload_size);
+		if (!rq->payload_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc fb_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
+			      i, rq->data->id, rq->entry_count, payload_size);
+		rqindex++;
+	}
+
+	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
+}
+
+/* Post the RQ data buffers to the chip */
+enum efct_hw_rtn
+efct_hw_rx_post(struct efct_hw *hw)
+{
+	u32 i;
+	u32 idx;
+	u32 rq_idx;
+	int rc = 0;
+
+	if (!hw->seq_pool) {
+		u32 count = 0;
+
+		for (i = 0; i < hw->hw_rq_count; i++)
+			count += hw->hw_rq[i]->entry_count;
+
+		hw->seq_pool = kmalloc_array(count,
+				sizeof(struct efc_hw_sequence),	GFP_KERNEL);
+		if (!hw->seq_pool)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/*
+	 * In RQ pair mode, we MUST post the header and payload buffer at the
+	 * same time.
+	 */
+	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
+		struct hw_rq *rq = hw->hw_rq[rq_idx];
+
+		for (i = 0; i < rq->entry_count - 1; i++) {
+			struct efc_hw_sequence *seq;
+
+			seq = hw->seq_pool + idx * sizeof(*seq);
+			if (!seq) {
+				rc = -1;
+				break;
+			}
+			idx++;
+			seq->header = &rq->hdr_buf[i];
+			seq->payload = &rq->payload_buf[i];
+			rc = efct_hw_sequence_free(hw, seq);
+			if (rc)
+				break;
+		}
+		if (rc)
+			break;
+	}
+
+	if (rc) {
+		kfree(hw->seq_pool);
+		hw->seq_pool = NULL;
+	}
+
+	return rc;
+}
+
+void
+efct_hw_rx_free(struct efct_hw *hw)
+{
+	struct hw_rq *rq;
+	u32 i;
+
+	/* Free hw_rq buffers */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		rq = hw->hw_rq[i];
+		if (rq) {
+			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
+					       rq->entry_count);
+			rq->hdr_buf = NULL;
+			efct_hw_rx_buffer_free(hw, rq->payload_buf,
+					       rq->entry_count);
+			rq->payload_buf = NULL;
+		}
+	}
+}
+
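+/*
+ * Move queued mailbox commands from cmd_pending onto the MQ while there
+ * is room (at most EFCT_HW_MQ_DEPTH - 1 outstanding). Caller must hold
+ * cmd_lock.
+ */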
+static int
+efct_hw_cmd_submit_pending(struct efct_hw *hw)
+{
+	struct efct_command_ctx *ctx = NULL;
+	int rc = 0;
+
+	/* Assumes lock held */
+
+	/* Only submit MQE if there's room */
+	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
+	       !list_empty(&hw->cmd_pending)) {
+		ctx = list_first_entry(&hw->cmd_pending,
+				       struct efct_command_ctx, list_entry);
+		if (!ctx)
+			break;
+
+		list_del(&ctx->list_entry);
+
+		INIT_LIST_HEAD(&ctx->list_entry);
+		list_add_tail(&ctx->list_entry, &hw->cmd_head);
+		hw->cmd_head_count++;
+		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
+			efc_log_test(hw->os,
+				      "sli_queue_write failed: %d\n", rc);
+			rc = -1;
+			break;
+		}
+	}
+	return rc;
+}
+
+/*
+ * Send a mailbox command to the hardware, and either wait for a completion
+ * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
+ */
+enum efct_hw_rtn
+efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb, void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	unsigned long flags = 0;
+	void *bmbx = NULL;
+
+	/*
+	 * If the chip is in an error state (UE'd) then reject this mailbox
+	 *  command.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		efc_log_crit(hw->os,
+			      "status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(&hw->sli),
+			sli_reg_read_err1(&hw->sli),
+			sli_reg_read_err2(&hw->sli));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (opts == EFCT_CMD_POLL) {
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+		bmbx = hw->sli.bmbx.virt;
+
+		memset(bmbx, 0, SLI4_BMBX_SIZE);
+		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
+
+		if (sli_bmbx_command(&hw->sli) == 0) {
+			rc = EFCT_HW_RTN_SUCCESS;
+			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
+		}
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	} else if (opts == EFCT_CMD_NOWAIT) {
+		struct efct_command_ctx	*ctx;
+
+		ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
+		if (!ctx)
+			return EFCT_HW_RTN_NO_RESOURCES;
+
+		if (hw->state != EFCT_HW_STATE_ACTIVE) {
+			efc_log_err(hw->os,
+				     "Can't send command, HW state=%d\n",
+				    hw->state);
+			kfree(ctx);
+			return EFCT_HW_RTN_ERROR;
+		}
+
+		if (cb) {
+			ctx->cb = cb;
+			ctx->arg = arg;
+		}
+		ctx->buf = cmd;
+		ctx->ctx = hw;
+
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+
+		/* Add to pending list */
+		INIT_LIST_HEAD(&ctx->list_entry);
+		list_add_tail(&ctx->list_entry, &hw->cmd_pending);
+
+		/* Submit as much of the pending list as we can */
+		if (efct_hw_cmd_submit_pending(hw) == 0)
+			rc = EFCT_HW_RTN_SUCCESS;
+
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_command_process(struct efct_hw *hw, int status, u8 *mqe,
+			size_t size)
+{
+	struct efct_command_ctx *ctx = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		ctx = list_first_entry(&hw->cmd_head,
+				       struct efct_command_ctx, list_entry);
+		list_del(&ctx->list_entry);
+	}
+	if (!ctx) {
+		efc_log_err(hw->os, "no command context?!?\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFC_FAIL;
+	}
+
+	hw->cmd_head_count--;
+
+	/* Post any pending requests */
+	efct_hw_cmd_submit_pending(hw);
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	if (ctx->cb) {
+		if (ctx->buf)
+			memcpy(ctx->buf, mqe, size);
+
+		ctx->cb(hw, status, ctx->buf, ctx->arg);
+	}
+
+	memset(ctx, 0, sizeof(struct efct_command_ctx));
+	kfree(ctx);
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_mq_process(struct efct_hw *hw,
+		   int status, struct sli4_queue *mq)
+{
+	u8		mqe[SLI4_BMBX_SIZE];
+
+	if (!sli_mq_read(&hw->sli, mq, mqe))
+		efct_hw_command_process(hw, status, mqe, mq->size);
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_command_cancel(struct efct_hw *hw)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+
+	/*
+	 * Manually clean up remaining commands. Note: since this calls
+	 * efct_hw_command_process(), we'll also process the cmd_pending
+	 * list, so no need to manually clean that out.
+	 */
+	while (!list_empty(&hw->cmd_head)) {
+		u8		mqe[SLI4_BMBX_SIZE] = { 0 };
+		struct efct_command_ctx *ctx =
+			list_first_entry(&hw->cmd_head,
+					 struct efct_command_ctx, list_entry);
+
+		efc_log_test(hw->os, "hung command %08x\n",
+			      !ctx ? U32_MAX :
+			      (!ctx->buf ? U32_MAX :
+			       *((u32 *)ctx->buf)));
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		efct_hw_command_process(hw, -1, mqe, SLI4_BMBX_SIZE);
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+	}
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index e5839254c730..1b67e0721936 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -629,4 +629,11 @@ efct_get_wwnn(struct efct_hw *hw);
 extern uint64_t
 efct_get_wwpn(struct efct_hw *hw);
 
+enum efct_hw_rtn efct_hw_rx_allocate(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_rx_post(struct efct_hw *hw);
+void efct_hw_rx_free(struct efct_hw *hw);
+extern enum efct_hw_rtn
+efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
+		void *arg);
+
 #endif /* __EFCT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (17 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  7:32   ` Hannes Reinecke
  2020-04-16  8:47   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 20/31] elx: efct: Hardware queues processing James Smart
                   ` (11 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to create IO interfaces (WQs, etc.), initialize SGLs, and
configure hardware features.
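
A rough usage sketch for the SGL helpers added here (illustrative only;
hio, buf_phys and buf_len are placeholder names and error handling is
trimmed):

  struct efct_hw_io *hio;

  hio = efct_hw_io_alloc(hw);
  if (!hio)
          return EFCT_HW_RTN_NO_RESOURCES;

  /* lay down the type-specific skip/XFER_RDY entries first */
  efct_hw_io_init_sges(hw, hio, EFCT_HW_IO_TARGET_READ);

  /* then append one data SGE per buffer */
  efct_hw_io_add_sge(hw, hio, buf_phys, buf_len);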

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Request tag pool (reqtag_pool) handling functions.
---
 drivers/scsi/elx/efct/efct_hw.c | 657 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  42 +++
 2 files changed, 699 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 3e9906749da2..892493a3a35e 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -1489,3 +1489,660 @@ efct_hw_command_cancel(struct efct_hw *hw)
 
 	return EFC_SUCCESS;
 }
+
+static inline struct efct_hw_io *
+_efct_hw_io_alloc(struct efct_hw *hw)
+{
+	struct efct_hw_io	*io = NULL;
+
+	if (!list_empty(&hw->io_free)) {
+		io = list_first_entry(&hw->io_free, struct efct_hw_io,
+				      list_entry);
+		list_del(&io->list_entry);
+	}
+	if (io) {
+		INIT_LIST_HEAD(&io->list_entry);
+		INIT_LIST_HEAD(&io->wqe_link);
+		INIT_LIST_HEAD(&io->dnrx_link);
+		list_add_tail(&io->list_entry, &hw->io_inuse);
+		io->state = EFCT_HW_IO_STATE_INUSE;
+		io->abort_reqtag = U32_MAX;
+		io->wq = hw->hw_wq[0];
+		kref_init(&io->ref);
+		io->release = efct_hw_io_free_internal;
+	} else {
+		atomic_add_return(1, &hw->io_alloc_failed_count);
+	}
+
+	return io;
+}
+
+struct efct_hw_io *
+efct_hw_io_alloc(struct efct_hw *hw)
+{
+	struct efct_hw_io	*io = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	io = _efct_hw_io_alloc(hw);
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return io;
+}
+
+/*
+ * When an IO is freed, depending on the exchange busy flag, and other
+ * workarounds, move it to the correct list.
+ */
+static void
+efct_hw_io_free_move_correct_list(struct efct_hw *hw,
+				  struct efct_hw_io *io)
+{
+	if (io->xbusy) {
+		/*
+		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
+		 * up
+		 */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_wait_free);
+		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
+	} else {
+		/* IO not busy, add to free list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_free);
+		io->state = EFCT_HW_IO_STATE_FREE;
+	}
+}
+
+static inline void
+efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* initialize IO fields */
+	efct_hw_init_free_io(io);
+
+	/* Restore default SGL */
+	efct_hw_io_restore_sgl(hw, io);
+}
+
+/*
+ * Free a previously-allocated HW IO object. Called when
+ * the IO refcount goes to zero (host-owned IOs only).
+ */
+void
+efct_hw_io_free_internal(struct kref *arg)
+{
+	unsigned long flags = 0;
+	struct efct_hw_io *io =
+			container_of(arg, struct efct_hw_io, ref);
+	struct efct_hw *hw = io->hw;
+
+	/* perform common cleanup */
+	efct_hw_io_free_common(hw, io);
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	/* remove from in-use list */
+	if (io->list_entry.next && !list_empty(&hw->io_inuse)) {
+		list_del(&io->list_entry);
+		efct_hw_io_free_move_correct_list(hw, io);
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+int
+efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	return kref_put(&io->ref, io->release);
+}
+
+u8
+efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	return (refcount_read(&io->ref.refcount) > 0);
+}
+
+struct efct_hw_io *
+efct_hw_io_lookup(struct efct_hw *hw, u32 xri)
+{
+	u32 ioindex;
+
+	ioindex = xri - hw->sli.extent[SLI_RSRC_XRI].base[0];
+	return hw->io[ioindex];
+}
+
+enum efct_hw_rtn
+efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efc_dma *sgl,
+			u32 sgl_count)
+{
+	if (hw->sli.sgl_pre_registered) {
+		efc_log_err(hw->os,
+			     "can't use temp SGL with pre-registered SGLs\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	io->ovfl_sgl = sgl;
+	io->ovfl_sgl_count = sgl_count;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_init_sges(struct efct_hw *hw, struct efct_hw_io *io,
+		     enum efct_hw_io_type type)
+{
+	struct sli4_sge	*data = NULL;
+	u32 i = 0;
+	u32 skips = 0;
+	u32 sge_flags = 0;
+
+	if (!io) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n", hw, io);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Clear / reset the scatter-gather list */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+	io->first_data_sge = 0;
+
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
+	io->n_sge = 0;
+	io->sge_offset = 0;
+
+	io->type = type;
+
+	data = io->sgl->virt;
+
+	/*
+	 * Some IO types have underlying hardware requirements on the order
+	 * of SGEs. Process all special entries here.
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE:
+#define EFCT_TARGET_WRITE_SKIPS	2
+		skips = EFCT_TARGET_WRITE_SKIPS;
+
+		/* populate host resident XFER_RDY buffer */
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		data->buffer_address_high =
+			cpu_to_le32(upper_32_bits(io->xfer_rdy.phys));
+		data->buffer_address_low  =
+			cpu_to_le32(lower_32_bits(io->xfer_rdy.phys));
+		data->buffer_length = cpu_to_le32(io->xfer_rdy.size);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+
+		skips--;
+
+		io->n_sge = 1;
+		break;
+	case EFCT_HW_IO_TARGET_READ:
+		/*
+		 * For FCP_TSEND64, the first 2 entries are SKIP SGE's
+		 */
+#define EFCT_TARGET_READ_SKIPS	2
+		skips = EFCT_TARGET_READ_SKIPS;
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		/*
+		 * No skips, etc. for FCP_TRSP64
+		 */
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Write skip entries
+	 */
+	for (i = 0; i < skips; i++) {
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+	}
+
+	io->n_sge += skips;
+
+	/*
+	 * Set last
+	 */
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length)
+{
+	struct sli4_sge	*data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr || !length) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p addr=%lx length=%u\n",
+			    hw, io, addr, length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (length > hw->sli.sge_supported_length) {
+		efc_log_err(hw->os,
+			     "length of SGE %d bigger than allowed %d\n",
+			    length, hw->sli.sge_supported_length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
+	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
+	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
+	data->buffer_length = cpu_to_le32(length);
+
+	/*
+	 * Always assume this is the last entry and mark as such.
+	 * If this is not the first entry unset the "last SGE"
+	 * indication for the previous entry
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	if (io->n_sge) {
+		sge_flags = le32_to_cpu(data[-1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	/* Set first_data_bde if not previously set */
+	if (io->first_data_sge == 0)
+		io->first_data_sge = io->n_sge;
+
+	io->sge_offset += length;
+	io->n_sge++;
+
+	/* Update the linked segment length (only executed after overflow has
+	 * begun)
+	 */
+	if (io->ovfl_lsp)
+		io->ovfl_lsp->dw3_seglen =
+			cpu_to_le32(io->n_sge * sizeof(struct sli4_sge) &
+				    SLI4_LSP_SGE_SEGLEN);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+void
+efct_hw_io_abort_all(struct efct_hw *hw)
+{
+	struct efct_hw_io *io_to_abort	= NULL;
+	struct efct_hw_io *next_io = NULL;
+
+	list_for_each_entry_safe(io_to_abort, next_io,
+				 &hw->io_inuse, list_entry) {
+		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
+	}
+}
+
+static void
+efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	u32 ext = 0;
+	u32 len = 0;
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	/*
+	 * For IOs that were aborted internally, we may need to issue the
+	 * callback here, depending on whether an XRI_ABORTED CQE is expected
+	 * or not. If the status is Local Reject/No XRI, then issue the
+	 * callback now.
+	 */
+	ext = sli_fc_ext_status(&hw->sli, cqe);
+	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
+	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI &&
+		io->done) {
+		efct_hw_done_t done = io->done;
+		void *arg = io->arg;
+
+		io->done = NULL;
+
+		/*
+		 * Use the latched status, as this is always saved for an
+		 * internal abort. Note: we won't have both a done and an
+		 * abort_done function, so don't worry about clobbering the
+		 * len, status and ext fields.
+		 */
+		status = io->saved_status;
+		len = io->saved_len;
+		ext = io->saved_ext;
+		io->status_saved = false;
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+	if (io->abort_done) {
+		efct_hw_done_t done = io->abort_done;
+		void *arg = io->abort_arg;
+
+		io->abort_done = NULL;
+
+		done(io, io->rnode, len, status, ext, arg);
+	}
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	/* clear abort bit to indicate abort is complete */
+	io->abort_in_progress = false;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/* Free the WQ callback */
+	if (io->abort_reqtag == U32_MAX) {
+		efc_log_err(hw->os, "HW IO already freed\n");
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag);
+	efct_hw_reqtag_free(hw, wqcb);
+
+	/*
+	 * Call efct_hw_io_free() because this releases the WQ reservation as
+	 * well as doing the refcount put. Don't duplicate the code here.
+	 */
+	(void)efct_hw_io_free(hw, io);
+}
+
+enum efct_hw_rtn
+efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
+		 bool send_abts, void *cb, void *arg)
+{
+	enum sli4_abort_type atype = SLI_ABORT_MAX;
+	u32 id = 0, mask = 0;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	if (!io_to_abort) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n",
+			    hw, io_to_abort);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* take a reference on IO being aborted */
+	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
+		/* command no longer active */
+		efc_log_test(hw->os,
+			      "io not active xri=0x%x tag=0x%x\n",
+			     io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/* Must have a valid WQ reference */
+	if (!io_to_abort->wq) {
+		efc_log_test(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
+			      io_to_abort->indicator);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/*
+	 * Validation checks complete; now check to see if already being
+	 * aborted
+	 */
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	if (io_to_abort->abort_in_progress) {
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		efc_log_debug(hw->os,
+			       "io already being aborted xri=0x%x tag=0x%x\n",
+			      io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
+	}
+
+	/*
+	 * This IO is not already being aborted. Set flag so we won't try to
+	 * abort it again. After all, we only have one abort_done callback.
+	 */
+	io_to_abort->abort_in_progress = true;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/*
+	 * If we got here, the possibilities are:
+	 * - host owned xri
+	 *	- io_to_abort->wq_index != U32_MAX
+	 *		- submit ABORT_WQE to same WQ
+	 * - port owned xri:
+	 *	- rxri: io_to_abort->wq_index == U32_MAX
+	 *		- submit ABORT_WQE to any WQ
+	 *	- non-rxri
+	 *		- io_to_abort->index != U32_MAX
+	 *			- submit ABORT_WQE to same WQ
+	 *		- io_to_abort->index == U32_MAX
+	 *			- submit ABORT_WQE to any WQ
+	 */
+	io_to_abort->abort_done = cb;
+	io_to_abort->abort_arg  = arg;
+
+	atype = SLI_ABORT_XRI;
+	id = io_to_abort->indicator;
+
+	/* Allocate a request tag for the abort portion of this IO */
+	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
+	if (!wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		spin_lock_irqsave(&hw->io_abort_lock, flags);
+		io_to_abort->abort_in_progress = false;
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+	io_to_abort->abort_reqtag = wqcb->instance_index;
+
+	/*
+	 * If the wqe is on the pending list, then set this wqe to be
+	 * aborted when the IO's wqe is removed from the list.
+	 */
+	if (io_to_abort->wq) {
+		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
+		if (io_to_abort->wqe.list_entry.next) {
+			io_to_abort->wqe.abort_wqe_submit_needed = true;
+			io_to_abort->wqe.send_abts = send_abts;
+			io_to_abort->wqe.id = id;
+			io_to_abort->wqe.abort_reqtag =
+						 io_to_abort->abort_reqtag;
+			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
+					       flags);
+			return EFC_SUCCESS;
+		}
+		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
+	}
+
+	if (sli_abort_wqe(&hw->sli, io_to_abort->wqe.wqebuf,
+			  hw->sli.wqe_size, atype, send_abts, id, mask,
+			  io_to_abort->abort_reqtag, SLI4_CQ_DEFAULT)) {
+		efc_log_err(hw->os, "ABORT WQE error\n");
+		io_to_abort->abort_reqtag = U32_MAX;
+		efct_hw_reqtag_free(hw, wqcb);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		/*
+		 * ABORT_WQE does not actually utilize an XRI on the Port,
+		 * therefore, keep xbusy as-is to track the exchange's state,
+		 * not the ABORT_WQE's state.
+		 */
+		rc = efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe);
+		if (rc > 0)
+			/* non-negative return is success */
+			rc = 0;
+
+		/* can't abort an abort, so skip adding to the timed wqe list */
+	}
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		spin_lock_irqsave(&hw->io_abort_lock, flags);
+		io_to_abort->abort_in_progress = false;
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+	}
+	return rc;
+}
+
+void
+efct_hw_reqtag_pool_free(struct efct_hw *hw)
+{
+	u32 i = 0;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+	struct hw_wq_callback *wqcb = NULL;
+
+	if (reqtag_pool) {
+		for (i = 0; i < U16_MAX; i++) {
+			wqcb = reqtag_pool->tags[i];
+			if (!wqcb)
+				continue;
+
+			kfree(wqcb);
+		}
+		kfree(reqtag_pool);
+		hw->wq_reqtag_pool = NULL;
+	}
+}
+
+struct reqtag_pool *
+efct_hw_reqtag_pool_alloc(struct efct_hw *hw)
+{
+	u32 i = 0;
+	struct reqtag_pool *reqtag_pool;
+	struct hw_wq_callback *wqcb;
+
+	reqtag_pool = kzalloc(sizeof(*reqtag_pool), GFP_KERNEL);
+	if (!reqtag_pool)
+		return NULL;
+
+	INIT_LIST_HEAD(&reqtag_pool->freelist);
+	/* initialize reqtag pool lock */
+	spin_lock_init(&reqtag_pool->lock);
+	for (i = 0; i < U16_MAX; i++) {
+		wqcb = kmalloc(sizeof(*wqcb), GFP_KERNEL);
+		if (!wqcb)
+			break;
+
+		reqtag_pool->tags[i] = wqcb;
+		wqcb->instance_index = i;
+		wqcb->callback = NULL;
+		wqcb->arg = NULL;
+		INIT_LIST_HEAD(&wqcb->list_entry);
+		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
+	}
+
+	return reqtag_pool;
+}
+
+struct hw_wq_callback *
+efct_hw_reqtag_alloc(struct efct_hw *hw,
+		     void (*callback)(void *arg, u8 *cqe, int status),
+		     void *arg)
+{
+	struct hw_wq_callback *wqcb = NULL;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+	unsigned long flags = 0;
+
+	if (!callback)
+		return wqcb;
+
+	spin_lock_irqsave(&reqtag_pool->lock, flags);
+
+	if (!list_empty(&reqtag_pool->freelist)) {
+		wqcb = list_first_entry(&reqtag_pool->freelist,
+				      struct hw_wq_callback, list_entry);
+	}
+
+	if (wqcb) {
+		list_del(&wqcb->list_entry);
+		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
+		wqcb->callback = callback;
+		wqcb->arg = arg;
+	} else {
+		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
+	}
+
+	return wqcb;
+}
+
+void
+efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb)
+{
+	unsigned long flags = 0;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+
+	if (!wqcb->callback)
+		efc_log_err(hw->os, "WQCB is already freed\n");
+
+	spin_lock_irqsave(&reqtag_pool->lock, flags);
+	wqcb->callback = NULL;
+	wqcb->arg = NULL;
+	INIT_LIST_HEAD(&wqcb->list_entry);
+	list_add(&wqcb->list_entry, &hw->wq_reqtag_pool->freelist);
+	spin_unlock_irqrestore(&reqtag_pool->lock, flags);
+}
+
+struct hw_wq_callback *
+efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index)
+{
+	struct hw_wq_callback *wqcb;
+
+	wqcb = hw->wq_reqtag_pool->tags[instance_index];
+	if (!wqcb)
+		efc_log_err(hw->os, "wqcb for instance %d is null\n",
+			     instance_index);
+
+	return wqcb;
+}
+
+void
+efct_hw_reqtag_reset(struct efct_hw *hw)
+{
+	struct hw_wq_callback *wqcb;
+	u32 i;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+	struct list_head *p, *n;
+
+	/* Remove all from freelist */
+	list_for_each_safe(p, n, &reqtag_pool->freelist) {
+		wqcb = list_entry(p, struct hw_wq_callback, list_entry);
+
+		if (wqcb)
+			list_del(&wqcb->list_entry);
+	}
+
+	/* Put them all back */
+	for (i = 0; i < ARRAY_SIZE(reqtag_pool->tags); i++) {
+		wqcb = reqtag_pool->tags[i];
+		wqcb->instance_index = i;
+		wqcb->callback = NULL;
+		wqcb->arg = NULL;
+		INIT_LIST_HEAD(&wqcb->list_entry);
+		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
+	}
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 1b67e0721936..86736d5295ec 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -635,5 +635,47 @@ void efct_hw_rx_free(struct efct_hw *hw);
 extern enum efct_hw_rtn
 efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
 		void *arg);
+struct efct_hw_io *efct_hw_io_alloc(struct efct_hw *hw);
+int efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io);
+u8 efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io);
+extern enum efct_hw_rtn
+efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		struct efct_hw_io *io, u32 len,
+		union efct_hw_io_param_u *iparam,
+		struct efc_remote_node *rnode, void *cb, void *arg);
+extern enum efct_hw_rtn
+efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efc_dma *sgl,
+			u32 sgl_count);
+extern enum efct_hw_rtn
+efct_hw_io_init_sges(struct efct_hw *hw,
+		     struct efct_hw_io *io, enum efct_hw_io_type type);
+
+extern enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length);
+extern enum efct_hw_rtn
+efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
+		 bool send_abts, void *cb, void *arg);
+extern u32
+efct_hw_io_get_count(struct efct_hw *hw,
+		     enum efct_hw_io_count_type io_count_type);
+extern struct efct_hw_io
+*efct_hw_io_lookup(struct efct_hw *hw, u32 indicator);
+void efct_hw_io_abort_all(struct efct_hw *hw);
+void efct_hw_io_free_internal(struct kref *arg);
+
+/* HW WQ request tag API */
+struct reqtag_pool *efct_hw_reqtag_pool_alloc(struct efct_hw *hw);
+void efct_hw_reqtag_pool_free(struct efct_hw *hw);
+extern struct hw_wq_callback
+*efct_hw_reqtag_alloc(struct efct_hw *hw,
+			void (*callback)(void *arg, u8 *cqe,
+					 int status), void *arg);
+extern void
+efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb);
+extern struct hw_wq_callback
+*efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
+void efct_hw_reqtag_reset(struct efct_hw *hw);
 
 #endif /* __EFCT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 20/31] elx: efct: Hardware queues processing
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (18 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  7:37   ` Hannes Reinecke
  2020-04-16  9:17   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 21/31] elx: efct: Unsolicited FC frame processing routines James Smart
                   ` (10 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for EQ, CQ, WQ and RQ processing.
Routines for IO object pool allocation and deallocation.
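
A rough usage sketch for the IO pool added here (illustrative only;
num_sgl is a placeholder and error handling is trimmed):

  struct efct_io_pool *pool;
  struct efct_io *io;

  /* pre-allocates EFCT_NUM_SCSI_IOS entries with rsp buffer and SGL */
  pool = efct_io_pool_create(efct, num_sgl);

  io = efct_io_pool_io_alloc(pool);      /* NULL once the freelist is empty */
  if (io) {
          /* ... run the IO ... */
          efct_io_pool_io_free(pool, io); /* also frees any attached HW IO */
  }

  efct_io_pool_free(pool);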

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Return defined values
  Changed IO pool allocation logic to avoid using efct_pool.
---
 drivers/scsi/elx/efct/efct_hw.c | 369 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  36 ++++
 drivers/scsi/elx/efct/efct_io.c | 198 +++++++++++++++++++++
 drivers/scsi/elx/efct/efct_io.h | 191 +++++++++++++++++++++
 4 files changed, 794 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 892493a3a35e..6cdc7e27b148 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -2146,3 +2146,372 @@ efct_hw_reqtag_reset(struct efct_hw *hw)
 		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
 	}
 }
+
+int
+efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id)
+{
+	int	rc = -1;
+	int	index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the maximum number of Qs, then
+	 * we never have to worry about an infinite loop. We will always find
+	 * an unused entry.
+	 */
+	do {
+		if (hash[index].in_use &&
+		    hash[index].id == id)
+			rc = hash[index].index;
+		else
+			index = (index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+	} while (rc == -1 && hash[index].in_use);
+
+	return rc;
+}
+
+int
+efct_hw_process(struct efct_hw *hw, u32 vector,
+		u32 max_isr_time_msec)
+{
+	struct hw_eq *eq;
+	int rc = 0;
+
+	/*
+	 * The caller should disable interrupts if they wish to prevent us
+	 * from processing during a shutdown. The following states are defined:
+	 *   EFCT_HW_STATE_UNINITIALIZED - No queues allocated
+	 *   EFCT_HW_STATE_QUEUES_ALLOCATED - The state after a chip reset,
+	 *                                    queues are cleared.
+	 *   EFCT_HW_STATE_ACTIVE - Chip and queues are operational
+	 *   EFCT_HW_STATE_RESET_IN_PROGRESS - reset, we still want completions
+	 *   EFCT_HW_STATE_TEARDOWN_IN_PROGRESS - We still want mailbox
+	 *                                        completions.
+	 */
+	if (hw->state == EFCT_HW_STATE_UNINITIALIZED)
+		return EFC_SUCCESS;
+
+	/* Get pointer to struct hw_eq */
+	eq = hw->hw_eq[vector];
+	if (!eq)
+		return EFC_SUCCESS;
+
+	eq->use_count++;
+
+	rc = efct_hw_eq_process(hw, eq, max_isr_time_msec);
+
+	return rc;
+}
+
+int
+efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
+		   u32 max_isr_time_msec)
+{
+	u8 eqe[sizeof(struct sli4_eqe)] = { 0 };
+	u32 tcheck_count;
+	u32 tstart;
+	u32 telapsed;
+	bool done = false;
+
+	tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!done && !sli_eq_read(&hw->sli, eq->queue, eqe)) {
+		u16	cq_id = 0;
+		int		rc;
+
+		rc = sli_eq_parse(&hw->sli, eqe, &cq_id);
+		if (unlikely(rc)) {
+			if (rc == SLI4_EQE_STATUS_EQ_FULL) {
+				u32 i;
+
+				/*
+				 * Received a sentinel EQE indicating the
+				 * EQ is full. Process all CQs
+				 */
+				for (i = 0; i < hw->cq_count; i++)
+					efct_hw_cq_process(hw, hw->hw_cq[i]);
+				continue;
+			} else {
+				return rc;
+			}
+		} else {
+			int index;
+
+			index  = efct_hw_queue_hash_find(hw->cq_hash, cq_id);
+
+			if (likely(index >= 0))
+				efct_hw_cq_process(hw, hw->hw_cq[index]);
+			else
+				efc_log_err(hw->os, "bad CQ_ID %#06x\n",
+					     cq_id);
+		}
+
+		if (eq->queue->n_posted > eq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, eq->queue, false);
+
+		if (tcheck_count && (--tcheck_count == 0)) {
+			tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+			telapsed = jiffies_to_msecs(jiffies) - tstart;
+			if (telapsed >= max_isr_time_msec)
+				done = true;
+		}
+	}
+	sli_queue_eq_arm(&hw->sli, eq->queue, true);
+
+	return EFC_SUCCESS;
+}
+
+static int
+_efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
+{
+	int queue_rc;
+
+	/* Every so often, set the wqec bit to generate consumed completions */
+	if (wq->wqec_count)
+		wq->wqec_count--;
+
+	if (wq->wqec_count == 0) {
+		struct sli4_generic_wqe *genwqe = (void *)wqe->wqebuf;
+
+		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
+		wq->wqec_count = wq->wqec_set_count;
+	}
+
+	/* Decrement WQ free count */
+	wq->free_count--;
+
+	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
+
+	return (queue_rc < 0) ? -1 : 0;
+}
+
+static void
+hw_wq_submit_pending(struct hw_wq *wq, u32 update_free_count)
+{
+	struct efct_hw_wqe *wqe;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+
+	/* Update free count with value passed in */
+	wq->free_count += update_free_count;
+
+	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
+		wqe = list_first_entry(&wq->pending_list,
+				       struct efct_hw_wqe, list_entry);
+		list_del(&wqe->list_entry);
+		_efct_hw_wq_write(wq, wqe);
+
+		if (wqe->abort_wqe_submit_needed) {
+			wqe->abort_wqe_submit_needed = false;
+			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
+				      wq->hw->sli.wqe_size,
+				      SLI_ABORT_XRI, wqe->send_abts, wqe->id,
+				      0, wqe->abort_reqtag, SLI4_CQ_DEFAULT);
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+}
+
+void
+efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
+{
+	u8		cqe[sizeof(struct sli4_mcqe)];
+	u16	rid = U16_MAX;
+	enum sli4_qentry	ctype;		/* completion type */
+	int		status;
+	u32	n_processed = 0;
+	u32	tstart, telapsed;
+
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!sli_cq_read(&hw->sli, cq->queue, cqe)) {
+		status = sli_cq_parse(&hw->sli, cq->queue,
+				      cqe, &ctype, &rid);
+		/*
+		 * The sign of status is significant. If status is:
+		 * == 0 : call completed correctly and
+		 * the CQE indicated success
+		 * > 0 : call completed correctly and
+		 * the CQE indicated an error
+		 * < 0 : call failed and no information is available about the
+		 * CQE
+		 */
+		if (status < 0) {
+			if (status == SLI4_MCQE_STATUS_NOT_COMPLETED)
+				/*
+				 * Notification that an entry was consumed,
+				 * but not completed
+				 */
+				continue;
+
+			break;
+		}
+
+		switch (ctype) {
+		case SLI_QENTRY_ASYNC:
+			sli_cqe_async(&hw->sli, cqe);
+			break;
+		case SLI_QENTRY_MQ:
+			/*
+			 * Process MQ entry. Note there is no way to determine
+			 * the MQ_ID from the completion entry.
+			 */
+			efct_hw_mq_process(hw, status, hw->mq);
+			break;
+		case SLI_QENTRY_WQ:
+			efct_hw_wq_process(hw, cq, cqe, status, rid);
+			break;
+		case SLI_QENTRY_WQ_RELEASE: {
+			u32 wq_id = rid;
+			int index;
+			struct hw_wq *wq = NULL;
+
+			index = efct_hw_queue_hash_find(hw->wq_hash, wq_id);
+
+			if (likely(index >= 0)) {
+				wq = hw->hw_wq[index];
+			} else {
+				efc_log_err(hw->os, "bad WQ_ID %#06x\n", wq_id);
+				break;
+			}
+			/* Submit any HW IOs that are on the WQ pending list */
+			hw_wq_submit_pending(wq, wq->wqec_set_count);
+
+			break;
+		}
+
+		case SLI_QENTRY_RQ:
+			efct_hw_rqpair_process_rq(hw, cq, cqe);
+			break;
+		case SLI_QENTRY_XABT: {
+			efct_hw_xabt_process(hw, cq, cqe, rid);
+			break;
+		}
+		default:
+			efc_log_test(hw->os,
+				      "unhandled ctype=%#x rid=%#x\n",
+				     ctype, rid);
+			break;
+		}
+
+		n_processed++;
+		if (n_processed == cq->queue->proc_limit)
+			break;
+
+		if (cq->queue->n_posted >= cq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, cq->queue, false);
+	}
+
+	sli_queue_arm(&hw->sli, cq->queue, true);
+
+	if (n_processed > cq->queue->max_num_processed)
+		cq->queue->max_num_processed = n_processed;
+	telapsed = jiffies_to_msecs(jiffies) - tstart;
+	if (telapsed > cq->queue->max_process_time)
+		cq->queue->max_process_time = telapsed;
+}
+
+void
+efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
+		   u8 *cqe, int status, u16 rid)
+{
+	struct hw_wq_callback *wqcb;
+
+	if (rid == EFCT_HW_REQUE_XRI_REGTAG) {
+		if (status)
+			efc_log_err(hw->os, "reque xri failed, status = %d\n",
+				     status);
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, rid);
+	if (!wqcb) {
+		efc_log_err(hw->os, "invalid request tag: x%x\n", rid);
+		return;
+	}
+
+	if (!wqcb->callback) {
+		efc_log_err(hw->os, "wqcb callback is NULL\n");
+		return;
+	}
+
+	(*wqcb->callback)(wqcb->arg, cqe, status);
+}
+
+void
+efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
+		     u8 *cqe, u16 rid)
+{
+	/* search IOs wait free list */
+	struct efct_hw_io *io = NULL;
+	unsigned long flags = 0;
+
+	io = efct_hw_io_lookup(hw, rid);
+	if (!io) {
+		/* IO lookup failure should never happen */
+		efc_log_err(hw->os,
+			     "Error: xabt io lookup failed rid=%#x\n", rid);
+		return;
+	}
+
+	if (!io->xbusy)
+		efc_log_debug(hw->os, "xabt io not busy rid=%#x\n", rid);
+	else
+		/* mark IO as no longer busy */
+		io->xbusy = false;
+
+	/*
+	 * For IOs that were aborted internally, we need to issue any pending
+	 * callback here.
+	 */
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void		*arg = io->arg;
+
+		/*
+		 * Use latched status as this is always saved for an internal
+		 * abort
+		 */
+		int status = io->saved_status;
+		u32 len = io->saved_len;
+		u32 ext = io->saved_ext;
+
+		io->done = NULL;
+		io->status_saved = false;
+
+		done(io, io->rnode, len, status, ext, arg);
+	}
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	if (io->state == EFCT_HW_IO_STATE_INUSE ||
+	    io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+		/* if on wait_free list, caller has already freed IO;
+		 * remove from wait_free list and add to free list.
+		 * if on in-use list, already marked as no longer busy;
+		 * just leave there and wait for caller to free.
+		 */
+		if (io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+			io->state = EFCT_HW_IO_STATE_FREE;
+			list_del(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+static int
+efct_hw_flush(struct efct_hw *hw)
+{
+	u32	i = 0;
+
+	/* Process any remaining completions */
+	for (i = 0; i < hw->eq_count; i++)
+		efct_hw_process(hw, i, ~0);
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 86736d5295ec..b427a4eda5a3 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -678,4 +678,40 @@ extern struct hw_wq_callback
 *efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
 void efct_hw_reqtag_reset(struct efct_hw *hw);
 
+/* RQ completion handlers for RQ pair mode */
+extern int
+efct_hw_rqpair_process_rq(struct efct_hw *hw,
+			  struct hw_cq *cq, u8 *cqe);
+extern
+enum efct_hw_rtn efct_hw_rqpair_sequence_free(struct efct_hw *hw,
+						struct efc_hw_sequence *seq);
+static inline void
+efct_hw_sequence_copy(struct efc_hw_sequence *dst,
+		      struct efc_hw_sequence *src)
+{
+	/* Copy src to dst; note the list linkage is copied as-is */
+	*dst = *src;
+}
+
+static inline enum efct_hw_rtn
+efct_hw_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	/* Only RQ pair mode is supported */
+	return efct_hw_rqpair_sequence_free(hw, seq);
+}
+extern int
+efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
+		   u32 max_isr_time_msec);
+void efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq);
+extern void
+efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
+		   u8 *cqe, int status, u16 rid);
+extern void
+efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
+		     u8 *cqe, u16 rid);
+extern int
+efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
+extern int
+efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_io.c b/drivers/scsi/elx/efct/efct_io.c
new file mode 100644
index 000000000000..8ea05b59c892
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.c
@@ -0,0 +1,198 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+
+struct efct_io_pool {
+	struct efct *efct;
+	spinlock_t lock;	/* IO pool lock */
+	u32 io_num_ios;		/* Total IOs allocated */
+	struct efct_io *ios[EFCT_NUM_SCSI_IOS];
+	struct list_head freelist;
+};
+
+struct efct_io_pool *
+efct_io_pool_create(struct efct *efct, u32 num_sgl)
+{
+	u32 i = 0;
+	struct efct_io_pool *io_pool;
+	struct efct_io *io;
+
+	/* Allocate the IO pool */
+	io_pool = kzalloc(sizeof(*io_pool), GFP_KERNEL);
+	if (!io_pool)
+		return NULL;
+
+	io_pool->efct = efct;
+	INIT_LIST_HEAD(&io_pool->freelist);
+	/* initialize IO pool lock */
+	spin_lock_init(&io_pool->lock);
+
+	for (i = 0; i < EFCT_NUM_SCSI_IOS; i++) {
+		io = kmalloc(sizeof(*io), GFP_KERNEL);
+		if (!io)
+			break;
+
+		io_pool->io_num_ios++;
+		io_pool->ios[i] = io;
+		io->tag = i;
+		io->instance_index = i;
+
+		/* Allocate a response buffer */
+		io->rspbuf.size = SCSI_RSP_BUF_LENGTH;
+		io->rspbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						     io->rspbuf.size,
+						     &io->rspbuf.phys, GFP_DMA);
+		if (!io->rspbuf.virt) {
+			efc_log_err(efct, "dma_alloc rspbuf failed\n");
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		/* Allocate SGL */
+		io->sgl = kcalloc(num_sgl, sizeof(*io->sgl), GFP_ATOMIC);
+		if (!io->sgl) {
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		io->sgl_allocated = num_sgl;
+		io->sgl_count = 0;
+
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &io_pool->freelist);
+	}
+
+	return io_pool;
+}
+
+int
+efct_io_pool_free(struct efct_io_pool *io_pool)
+{
+	struct efct *efct;
+	u32 i;
+	struct efct_io *io;
+
+	if (io_pool) {
+		efct = io_pool->efct;
+
+		for (i = 0; i < io_pool->io_num_ios; i++) {
+			io = io_pool->ios[i];
+			if (!io)
+				continue;
+
+			kfree(io->sgl);
+			dma_free_coherent(&efct->pcidev->dev,
+					  io->rspbuf.size, io->rspbuf.virt,
+					  io->rspbuf.phys);
+			memset(&io->rspbuf, 0, sizeof(struct efc_dma));
+		}
+
+		kfree(io_pool);
+		efct->xport->io_pool = NULL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+u32 efct_io_pool_allocated(struct efct_io_pool *io_pool)
+{
+	return io_pool->io_num_ios;
+}
+
+struct efct_io *
+efct_io_pool_io_alloc(struct efct_io_pool *io_pool)
+{
+	struct efct_io *io = NULL;
+	struct efct *efct;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+
+	if (!list_empty(&io_pool->freelist)) {
+		io = list_first_entry(&io_pool->freelist, struct efct_io,
+				     list_entry);
+	}
+
+	if (io) {
+		list_del(&io->list_entry);
+		spin_unlock_irqrestore(&io_pool->lock, flags);
+
+		io->io_type = EFCT_IO_TYPE_MAX;
+		io->hio_type = EFCT_HW_IO_MAX;
+		io->hio = NULL;
+		io->transferred = 0;
+		io->efct = efct;
+		io->timeout = 0;
+		io->sgl_count = 0;
+		io->tgt_task_tag = 0;
+		io->init_task_tag = 0;
+		io->hw_tag = 0;
+		io->display_name = "pending";
+		io->seq_init = 0;
+		io->els_req_free = false;
+		io->io_free = 0;
+		io->release = NULL;
+		atomic_add_return(1, &efct->xport->io_active_count);
+		atomic_add_return(1, &efct->xport->io_total_alloc);
+	} else {
+		spin_unlock_irqrestore(&io_pool->lock, flags);
+	}
+	return io;
+}
+
+/* Free an object used to track an IO */
+void
+efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io)
+{
+	struct efct *efct;
+	struct efct_hw_io *hio = NULL;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+	hio = io->hio;
+	io->hio = NULL;
+	io->io_free = 1;
+	INIT_LIST_HEAD(&io->list_entry);
+	list_add(&io->list_entry, &io_pool->freelist);
+	spin_unlock_irqrestore(&io_pool->lock, flags);
+
+	if (hio)
+		efct_hw_io_free(&efct->hw, hio);
+
+	atomic_sub_return(1, &efct->xport->io_active_count);
+	atomic_add_return(1, &efct->xport->io_total_free);
+}
+
+/* Find an I/O given its node and ox_id */
+struct efct_io *
+efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
+		    u16 ox_id, u16 rx_id)
+{
+	struct efct_io	*io = NULL;
+	unsigned long flags = 0;
+	u8 found = false;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry(io, &node->active_ios, list_entry) {
+		if ((io->cmd_tgt && io->init_task_tag == ox_id) &&
+		    (rx_id == 0xffff || io->tgt_task_tag == rx_id)) {
+			if (kref_get_unless_zero(&io->ref))
+				found = true;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return found ? io : NULL;
+}
diff --git a/drivers/scsi/elx/efct/efct_io.h b/drivers/scsi/elx/efct/efct_io.h
new file mode 100644
index 000000000000..e28d8ed2f7ff
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.h
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_IO_H__)
+#define __EFCT_IO_H__
+
+#include "efct_lio.h"
+
+#define EFCT_LOG_ENABLE_IO_ERRORS(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 6)) != 0) : 0)
+
+#define io_error_log(io, fmt, ...)  \
+	do { \
+		if (EFCT_LOG_ENABLE_IO_ERRORS(io->efct)) \
+			efc_log_warn(io->efct, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define SCSI_CMD_BUF_LENGTH	48
+#define SCSI_RSP_BUF_LENGTH	(FCP_RESP_WITH_EXT + SCSI_SENSE_BUFFERSIZE)
+#define EFCT_NUM_SCSI_IOS	8192
+
+enum efct_io_type {
+	EFCT_IO_TYPE_IO = 0,
+	EFCT_IO_TYPE_ELS,
+	EFCT_IO_TYPE_CT,
+	EFCT_IO_TYPE_CT_RESP,
+	EFCT_IO_TYPE_BLS_RESP,
+	EFCT_IO_TYPE_ABORT,
+
+	EFCT_IO_TYPE_MAX,
+};
+
+enum efct_els_state {
+	EFCT_ELS_REQUEST = 0,
+	EFCT_ELS_REQUEST_DELAYED,
+	EFCT_ELS_REQUEST_DELAY_ABORT,
+	EFCT_ELS_REQ_ABORT,
+	EFCT_ELS_REQ_ABORTED,
+	EFCT_ELS_ABORT_IO_COMPL,
+};
+
+struct efct_io {
+	struct list_head	list_entry;
+	struct list_head	io_pending_link;
+	/* reference counter and callback function */
+	struct kref		ref;
+	void (*release)(struct kref *arg);
+	/* pointer back to efct */
+	struct efct		*efct;
+	/* unique instance index value */
+	u32			instance_index;
+	/* display name */
+	const char		*display_name;
+	/* pointer to node */
+	struct efc_node		*node;
+	/* (io_pool->io_free_list) free list link */
+	/* initiator task tag (OX_ID) for back-end and SCSI logging */
+	u32			init_task_tag;
+	/* target task tag (RX_ID) - for back-end and SCSI logging */
+	u32			tgt_task_tag;
+	/* HW layer unique IO id - for back-end and SCSI logging */
+	u32			hw_tag;
+	/* unique IO identifier */
+	u32			tag;
+	/* SGL */
+	struct efct_scsi_sgl	*sgl;
+	/* Number of allocated SGEs */
+	u32			sgl_allocated;
+	/* Number of SGEs in this SGL */
+	u32			sgl_count;
+	/* backend target private IO data */
+	struct efct_scsi_tgt_io tgt_io;
+	/* expected data transfer length, based on FC header */
+	u32			exp_xfer_len;
+
+	/* Declarations private to HW/SLI */
+	void			*hw_priv;
+
+	/* indicates what this struct efct_io structure is used for */
+	enum efct_io_type	io_type;
+	struct efct_hw_io	*hio;
+	size_t			transferred;
+
+	/* set if auto_trsp was set */
+	bool			auto_resp;
+	/* set if low latency request */
+	bool			low_latency;
+	/* selected WQ steering request */
+	u8			wq_steering;
+	/* selected WQ class if steering is class */
+	u8			wq_class;
+	/* transfer size for current request */
+	u64			xfer_req;
+	/* target callback function */
+	efct_scsi_io_cb_t	scsi_tgt_cb;
+	/* target callback function argument */
+	void			*scsi_tgt_cb_arg;
+	/* abort callback function */
+	efct_scsi_io_cb_t	abort_cb;
+	/* abort callback function argument */
+	void			*abort_cb_arg;
+	/* BLS callback function */
+	efct_scsi_io_cb_t	bls_cb;
+	/* BLS callback function argument */
+	void			*bls_cb_arg;
+	/* TMF command being processed */
+	enum efct_scsi_tmf_cmd	tmf_cmd;
+	/* rx_id from the ABTS that initiated the command abort */
+	u16			abort_rx_id;
+
+	/* True if this is a Target command */
+	bool			cmd_tgt;
+	/* when aborting, indicates ABTS is to be sent */
+	bool			send_abts;
+	/* True if this is an Initiator command */
+	bool			cmd_ini;
+	/* True if local node has sequence initiative */
+	bool			seq_init;
+	/* iparams for hw io send call */
+	union efct_hw_io_param_u iparam;
+	/* HW IO type */
+	enum efct_hw_io_type	hio_type;
+	/* wire length */
+	u64			wire_len;
+	/* saved HW callback */
+	void			*hw_cb;
+
+	/* for ELS requests/responses */
+	/* True if ELS is pending */
+	bool			els_pend;
+	/* True if ELS is active */
+	bool			els_active;
+	/* ELS request payload buffer */
+	struct efc_dma		els_req;
+	/* ELS response payload buffer */
+	struct efc_dma		els_rsp;
+	bool			els_req_free;
+	/* Retries remaining */
+	u32			els_retries_remaining;
+	void (*els_callback)(struct efc_node *node,
+			     struct efc_node_cb *cbdata, void *cbarg);
+	void			*els_callback_arg;
+	/* timeout */
+	u32			els_timeout_sec;
+
+	/* delay timer */
+	struct timer_list	delay_timer;
+
+	/* for abort handling */
+	/* pointer to IO to abort */
+	struct efct_io		*io_to_abort;
+
+	enum efct_els_state	state;
+	/* Protects els cmds */
+	spinlock_t		els_lock;
+
+	/* SCSI Response buffer */
+	struct efc_dma		rspbuf;
+	/* Timeout value in seconds for this IO */
+	u32			timeout;
+	/* CS_CTL priority for this IO */
+	u8			cs_ctl;
+	/* Is the IO object in the freelist? */
+	u8			io_free;
+	u32			app_id;
+};
+
+struct efct_io_cb_arg {
+	int status;		/* completion status */
+	int ext_status;		/* extended completion status */
+	void *app;		/* application argument */
+};
+
+struct efct_io_pool *
+efct_io_pool_create(struct efct *efct, u32 num_sgl);
+extern int
+efct_io_pool_free(struct efct_io_pool *io_pool);
+extern u32
+efct_io_pool_allocated(struct efct_io_pool *io_pool);
+
+extern struct efct_io *
+efct_io_pool_io_alloc(struct efct_io_pool *io_pool);
+extern void
+efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io);
+extern struct efct_io *
+efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
+		    u16 ox_id, u16 rx_id);
+#endif /* __EFCT_IO_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 21/31] elx: efct: Unsolicited FC frame processing routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (19 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 20/31] elx: efct: Hardware queues processing James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  9:36   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 22/31] elx: efct: Extended link Service IO handling James Smart
                   ` (9 subsequent siblings)
  30 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to handle unsolicited FC frames.
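
A rough sketch of the hold/accept pairing added here (illustrative only;
the surrounding state-change code is omitted):

  /* queue incoming frames on the FCFI pend_frames list */
  efct_domain_hold_frames(efc, domain);

  /* ... perform the domain state change ... */

  /* stop holding; drains pend_frames via efct_domain_process_pending() */
  efct_domain_accept_frames(efc, domain);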

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>

---
v3:
  Return defined values
---
 drivers/scsi/elx/efct/efct_hw.c    |   1 +
 drivers/scsi/elx/efct/efct_unsol.c | 813 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_unsol.h |  49 +++
 3 files changed, 863 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 6cdc7e27b148..fd3c2dec3ef6 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -6,6 +6,7 @@
 
 #include "efct_driver.h"
 #include "efct_hw.h"
+#include "efct_unsol.h"
 
 static enum efct_hw_rtn
 efct_hw_link_event_init(struct efct_hw *hw)
diff --git a/drivers/scsi/elx/efct/efct_unsol.c b/drivers/scsi/elx/efct/efct_unsol.c
new file mode 100644
index 000000000000..e8611524e2cd
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.c
@@ -0,0 +1,813 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+#include "efct_unsol.h"
+
+#define frame_printf(efct, hdr, fmt, ...) \
+	do { \
+		char s_id_text[16]; \
+		efc_node_fcid_display(ntoh24((hdr)->fh_s_id), \
+			s_id_text, sizeof(s_id_text)); \
+		efc_log_debug(efct, "[%06x.%s] %02x/%04x/%04x: " fmt, \
+			ntoh24((hdr)->fh_d_id), s_id_text, \
+			(hdr)->fh_r_ctl, be16_to_cpu((hdr)->fh_ox_id), \
+			be16_to_cpu((hdr)->fh_rx_id), ##__VA_ARGS__); \
+	} while (0)
+
+static int
+efct_unsol_process(struct efct *efct, struct efc_hw_sequence *seq)
+{
+	struct efct_xport_fcfi *xport_fcfi = NULL;
+	struct efc_domain *domain;
+	struct efct_hw *hw = &efct->hw;
+	unsigned long flags = 0;
+
+	xport_fcfi = &efct->xport->fcfi;
+
+	/* If the transport FCFI entry is NULL, then drop the frame */
+	if (!xport_fcfi) {
+		efc_log_test(efct,
+			      "FCFI %d is not valid, dropping frame\n",
+			seq->fcfi);
+
+		efct_hw_sequence_free(&efct->hw, seq);
+		return EFC_SUCCESS;
+	}
+
+	domain = hw->domain;
+
+	/*
+	 * If we are holding frames or the domain is not yet registered or
+	 * there's already frames on the pending list,
+	 * then add the new frame to pending list
+	 */
+	if (!domain ||
+	    xport_fcfi->hold_frames ||
+	    !list_empty(&xport_fcfi->pend_frames)) {
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &xport_fcfi->pend_frames);
+		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
+
+		if (domain) {
+			/* immediately process pending frames */
+			efct_domain_process_pending(domain);
+		}
+	} else {
+		/*
+		 * We are not holding frames and pending list is empty,
+		 * just process frame. A non-zero return means the frame
+		 * was not handled - so cleanup
+		 */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+	}
+	return EFC_SUCCESS;
+}
+
+int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = arg;
+	int rc;
+
+	rc = efct_unsol_process(efct, seq);
+	if (rc)
+		efct_hw_sequence_free(&efct->hw, seq);
+
+	return EFC_SUCCESS;
+}
+
+void
+efct_process_node_pending(struct efc_node *node)
+{
+	struct efct *efct = node->efc->base;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (node->hold_frames)
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		if (!list_empty(&node->pend_frames)) {
+			seq = list_first_entry(&node->pend_frames,
+					struct efc_hw_sequence, list_entry);
+			list_del(&seq->list_entry);
+		}
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		if (!seq) {
+			pend_frames_processed =	node->pend_frames_processed;
+			node->pend_frames_processed = 0;
+			break;
+		}
+		node->pend_frames_processed++;
+
+		/* now dispatch frame(s) to dispatch function */
+		efc_node_dispatch_frame(node, seq);
+		efct_hw_sequence_free(&efct->hw, seq);
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efct, "%u node frames held and processed\n",
+			       pend_frames_processed);
+}
+
+static bool efct_domain_frames_held(void *arg)
+{
+	struct efc_domain *domain = (struct efc_domain *)arg;
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	return xport_fcfi->hold_frames;
+}
+
+void
+efct_domain_process_pending(struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	xport_fcfi = &efct->xport->fcfi;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (efct_domain_frames_held(domain))
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
+
+		if (!list_empty(&xport_fcfi->pend_frames)) {
+			seq = list_first_entry(&xport_fcfi->pend_frames,
+					       struct efc_hw_sequence,
+					       list_entry);
+			list_del(&seq->list_entry);
+		}
+
+		if (!seq) {
+			pend_frames_processed =
+				xport_fcfi->pend_frames_processed;
+			xport_fcfi->pend_frames_processed = 0;
+			spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock,
+					       flags);
+			break;
+		}
+
+		xport_fcfi->pend_frames_processed++;
+		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efct_hw_sequence_free(&efct->hw, seq);
+
+		seq = NULL;
+	}
+	if (pend_frames_processed != 0)
+		efc_log_debug(efct, "%u domain frames held and processed\n",
+			       pend_frames_processed);
+}
+
+static struct efc_hw_sequence *
+efct_frame_next(struct list_head *pend_list, spinlock_t *list_lock)
+{
+	struct efc_hw_sequence *frame = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(list_lock, flags);
+
+	if (!list_empty(pend_list)) {
+		frame = list_first_entry(pend_list,
+					 struct efc_hw_sequence, list_entry);
+		list_del(&frame->list_entry);
+	}
+
+	spin_unlock_irqrestore(list_lock, flags);
+	return frame;
+}
+
+static int
+efct_purge_pending(struct efct *efct, struct list_head *pend_list,
+		   spinlock_t *list_lock)
+{
+	struct efc_hw_sequence *frame;
+
+	for (;;) {
+		frame = efct_frame_next(pend_list, list_lock);
+		if (!frame)
+			break;
+
+		frame_printf(efct,
+			     (struct fc_frame_header *)frame->header->dma.virt,
+			     "Discarding held frame\n");
+		efct_hw_sequence_free(&efct->hw, frame);
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_node_purge_pending(struct efc *efc, struct efc_node *node)
+{
+	struct efct *efct = efc->base;
+
+	return efct_purge_pending(efct, &node->pend_frames,
+				&node->pend_frames_lock);
+}
+
+int
+efct_domain_purge_pending(struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	return efct_purge_pending(efct,
+				 &xport_fcfi->pend_frames,
+				 &xport_fcfi->pend_frames_lock);
+}
+
+void
+efct_domain_hold_frames(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	if (!xport_fcfi->hold_frames) {
+		efc_log_debug(efct, "hold frames set for FCFI %d\n",
+			       domain->fcf_indicator);
+		xport_fcfi->hold_frames = true;
+	}
+}
+
+void
+efct_domain_accept_frames(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = domain->efc->base;
+	struct efct_xport_fcfi *xport_fcfi;
+
+	xport_fcfi = &efct->xport->fcfi;
+	if (xport_fcfi->hold_frames) {
+		efc_log_debug(efct, "hold frames cleared for FCFI %d\n",
+			       domain->fcf_indicator);
+	}
+	xport_fcfi->hold_frames = false;
+	efct_domain_process_pending(domain);
+}
+
+static int
+efct_fc_tmf_rejected_cb(struct efct_io *io,
+			enum efct_scsi_io_status scsi_status,
+		       u32 flags, void *arg)
+{
+	efct_scsi_io_free(io);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_dispatch_unsolicited_tmf(struct efct_io *io,
+			      u8 task_management_flags,
+			      struct efc_node *node, u32 lun)
+{
+	u32 i;
+	struct {
+		u32 mask;
+		enum efct_scsi_tmf_cmd cmd;
+	} tmflist[] = {
+	{FCP_TMF_ABT_TASK_SET, EFCT_SCSI_TMF_ABORT_TASK_SET},
+	{FCP_TMF_CLR_TASK_SET, EFCT_SCSI_TMF_CLEAR_TASK_SET},
+	{FCP_TMF_LUN_RESET, EFCT_SCSI_TMF_LOGICAL_UNIT_RESET},
+	{FCP_TMF_TGT_RESET, EFCT_SCSI_TMF_TARGET_RESET},
+	{FCP_TMF_CLR_ACA, EFCT_SCSI_TMF_CLEAR_ACA} };
+
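+	/* task management IUs carry no data, so no transfer is expected */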
+	io->exp_xfer_len = 0;
+
+	for (i = 0; i < ARRAY_SIZE(tmflist); i++) {
+		if (tmflist[i].mask & task_management_flags) {
+			io->tmf_cmd = tmflist[i].cmd;
+			efct_scsi_recv_tmf(io, lun, tmflist[i].cmd, NULL, 0);
+			break;
+		}
+	}
+	if (i == ARRAY_SIZE(tmflist)) {
+		/* Not handled */
+		node_printf(node, "TMF x%x rejected\n", task_management_flags);
+		efct_scsi_send_tmf_resp(io, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+					NULL, efct_fc_tmf_rejected_cb, NULL);
+	}
+}
+
+static int
+efct_validate_fcp_cmd(struct efct *efct, struct efc_hw_sequence *seq)
+{
+	/*
+	 * If we received less than FCP_CMND_IU bytes, assume that the frame
+	 * is corrupted in some way and drop it. This was seen when jamming
+	 * the FCTL fill bytes field.
+	 */
+	if (seq->payload->dma.len < sizeof(struct fcp_cmnd)) {
+		struct fc_frame_header	*fchdr = seq->header->dma.virt;
+
+		efc_log_debug(efct,
+			"drop ox_id %04x with payload (%zd) less than (%zd)\n",
+				    be16_to_cpu(fchdr->fh_ox_id),
+				    seq->payload->dma.len,
+				    sizeof(struct fcp_cmnd));
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+static void
+efct_populate_io_fcp_cmd(struct efct_io *io, struct fcp_cmnd *cmnd,
+			 struct fc_frame_header *fchdr, bool sit)
+{
+	io->init_task_tag = be16_to_cpu(fchdr->fh_ox_id);
+	/* note, tgt_task_tag, hw_tag  set when HW io is allocated */
+	io->exp_xfer_len = be32_to_cpu(cmnd->fc_dl);
+	io->transferred = 0;
+
+	/* The upper 7 bits of CS_CTL are the frame priority through the SAN.
+	 * Our assertion here is, the priority given to a frame containing
+	 * the FCP cmd should be the priority given to ALL frames contained
+	 * in that IO. Thus we need to save the incoming CS_CTL here.
+	 */
+	if (ntoh24(fchdr->fh_f_ctl) & FC_FC_RES_B17)
+		io->cs_ctl = fchdr->fh_cs_ctl;
+	else
+		io->cs_ctl = 0;
+
+	io->seq_init = sit;
+}
+
+static u32
+efct_get_flags_fcp_cmd(struct fcp_cmnd *cmnd)
+{
+	u32 flags = 0;
+
+	switch (cmnd->fc_pri_ta & FCP_PTA_MASK) {
+	case FCP_PTA_SIMPLE:
+		flags |= EFCT_SCSI_CMD_SIMPLE;
+		break;
+	case FCP_PTA_HEADQ:
+		flags |= EFCT_SCSI_CMD_HEAD_OF_QUEUE;
+		break;
+	case FCP_PTA_ORDERED:
+		flags |= EFCT_SCSI_CMD_ORDERED;
+		break;
+	case FCP_PTA_ACA:
+		flags |= EFCT_SCSI_CMD_ACA;
+		break;
+	}
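+	/*
+	 * Note: direction is from the target's perspective; an initiator
+	 * write (WRDATA) is inbound data and an initiator read (RDDATA) is
+	 * outbound data.
+	 */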
+	if (cmnd->fc_flags & FCP_CFL_WRDATA)
+		flags |= EFCT_SCSI_CMD_DIR_IN;
+	if (cmnd->fc_flags & FCP_CFL_RDDATA)
+		flags |= EFCT_SCSI_CMD_DIR_OUT;
+
+	return flags;
+}
+
+static void
+efct_sframe_common_send_cb(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_send_frame_context *ctx = arg;
+	struct efct_hw *hw = ctx->hw;
+
+	/* Free WQ completion callback */
+	efct_hw_reqtag_free(hw, ctx->wqcb);
+
+	/* Free sequence */
+	efct_hw_sequence_free(hw, ctx->seq);
+}
+
+static int
+efct_sframe_common_send(struct efc_node *node,
+			struct efc_hw_sequence *seq,
+			enum fc_rctl r_ctl, u32 f_ctl,
+			u8 type, void *payload, u32 payload_len)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_hw *hw = &efct->hw;
+	enum efct_hw_rtn rc = 0;
+	struct fc_frame_header *req_hdr = seq->header->dma.virt;
+	struct fc_frame_header hdr;
+	struct efct_hw_send_frame_context *ctx;
+
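+	/*
+	 * Reuse the received sequence's payload buffer as a scratch heap:
+	 * the send frame context and the response payload are carved from
+	 * it, with each offset checked against the buffer size below.
+	 */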
+	u32 heap_size = seq->payload->dma.size;
+	uintptr_t heap_phys_base = seq->payload->dma.phys;
+	u8 *heap_virt_base = seq->payload->dma.virt;
+	u32 heap_offset = 0;
+
+	/* Build the FC header reusing the RQ header DMA buffer */
+	memset(&hdr, 0, sizeof(hdr));
+	hdr.fh_r_ctl = r_ctl;
+	/* send it back to whoever sent it to us */
+	memcpy(hdr.fh_d_id, req_hdr->fh_s_id, sizeof(hdr.fh_d_id));
+	memcpy(hdr.fh_s_id, req_hdr->fh_d_id, sizeof(hdr.fh_s_id));
+	hdr.fh_type = type;
+	hton24(hdr.fh_f_ctl, f_ctl);
+	hdr.fh_ox_id = req_hdr->fh_ox_id;
+	hdr.fh_rx_id = req_hdr->fh_rx_id;
+	hdr.fh_cs_ctl = 0;
+	hdr.fh_df_ctl = 0;
+	hdr.fh_seq_cnt = 0;
+	hdr.fh_parm_offset = 0;
+
+	/*
+	 * send_frame_seq_id is an atomic; let it increment and keep only the
+	 * low 8 bits for hdr->fh_seq_id. atomic_add_return() yields the
+	 * post-increment value, so back it off by one below.
+	 */
+	hdr.fh_seq_id = (u8)atomic_add_return(1, &hw->send_frame_seq_id);
+	hdr.fh_seq_id--;
+
+	/* Allocate and fill in the send frame request context */
+	ctx = (void *)(heap_virt_base + heap_offset);
+	heap_offset += sizeof(*ctx);
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return EFC_FAIL;
+	}
+
+	memset(ctx, 0, sizeof(*ctx));
+
+	/* Save sequence */
+	ctx->seq = seq;
+
+	/* Allocate a response payload DMA buffer from the heap */
+	ctx->payload.phys = heap_phys_base + heap_offset;
+	ctx->payload.virt = heap_virt_base + heap_offset;
+	ctx->payload.size = payload_len;
+	ctx->payload.len = payload_len;
+	heap_offset += payload_len;
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return EFC_FAIL;
+	}
+
+	/* Copy the payload in */
+	memcpy(ctx->payload.virt, payload, payload_len);
+
+	/* Send */
+	rc = efct_hw_send_frame(&efct->hw, (void *)&hdr, FC_SOF_N3,
+				FC_EOF_T, &ctx->payload, ctx,
+				efct_sframe_common_send_cb, ctx);
+	if (rc)
+		efc_log_test(efct, "efct_hw_send_frame failed: %d\n", rc);
+
+	return rc ? -1 : 0;
+}
+
+static int
+efct_sframe_send_fcp_rsp(struct efc_node *node,
+			 struct efc_hw_sequence *seq,
+			 void *rsp, u32 rsp_len)
+{
+	return efct_sframe_common_send(node, seq,
+				      FC_RCTL_DD_CMD_STATUS,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ |
+				      FC_FC_SEQ_INIT,
+				      FC_TYPE_FCP,
+				      rsp, rsp_len);
+}
+
+static int
+efct_sframe_send_task_set_full_or_busy(struct efc_node *node,
+				       struct efc_hw_sequence *seq)
+{
+	struct fcp_resp_with_ext fcprsp;
+	struct fcp_cmnd *fcpcmd = seq->payload->dma.virt;
+	int rc = 0;
+	unsigned long flags = 0;
+	struct efct *efct = node->efc->base;
+
+	/* construct task set full or busy response */
+	memset(&fcprsp, 0, sizeof(fcprsp));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	fcprsp.resp.fr_status = list_empty(&node->active_ios) ?
+			SAM_STAT_BUSY : SAM_STAT_TASK_SET_FULL;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	*((u32 *)&fcprsp.ext.fr_resid) = be32_to_cpu(fcpcmd->fc_dl);
+
+	/* send it using send_frame */
+	rc = efct_sframe_send_fcp_rsp(node, seq, &fcprsp, sizeof(fcprsp));
+	if (rc)
+		efc_log_test(efct,
+			      "efct_sframe_send_fcp_rsp failed: %d\n",
+			rc);
+
+	return rc;
+}
+
+int
+efct_dispatch_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
+{
+	struct efc *efc = node->efc;
+	struct efct *efct = efc->base;
+	struct fc_frame_header *fchdr = seq->header->dma.virt;
+	struct fcp_cmnd	*cmnd = NULL;
+	struct efct_io *io = NULL;
+	u32 lun = U32_MAX;
+	int rc = 0;
+
+	if (!seq->payload) {
+		efc_log_err(efct, "Sequence payload is NULL.\n");
+		return EFC_FAIL;
+	}
+
+	cmnd = seq->payload->dma.virt;
+
+	/* perform FCP_CMND validation check(s) */
+	if (efct_validate_fcp_cmd(efct, seq))
+		return EFC_FAIL;
+
+	lun = scsilun_to_int(&cmnd->fc_lun);
+	if (lun == U32_MAX)
+		return EFC_FAIL;
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (!io) {
+		/* Use SEND_FRAME to send task set full or busy */
+		rc = efct_sframe_send_task_set_full_or_busy(node, seq);
+		if (rc)
+			efc_log_err(efct, "Failed to send busy task: %d\n", rc);
+		return rc;
+	}
+
+	io->hw_priv = seq->hw_priv;
+
+	io->app_id = 0;
+
+	/* RQ pair, if we got here, SIT=1 */
+	efct_populate_io_fcp_cmd(io, cmnd, fchdr, true);
+
+	if (cmnd->fc_tm_flags) {
+		efct_dispatch_unsolicited_tmf(io,
+					      cmnd->fc_tm_flags,
+					      node, lun);
+	} else {
+		u32 flags = efct_get_flags_fcp_cmd(cmnd);
+
+		if (cmnd->fc_flags & FCP_CFL_LEN_MASK) {
+			efc_log_err(efct, "Additional CDB not supported\n");
+			return EFC_FAIL;
+		}
+		/*
+		 * Can return failure for things like task set full and UAs,
+		 * no need to treat as a dropped frame if rc != 0
+		 */
+		efct_scsi_recv_cmd(io, lun, cmnd->fc_cdb,
+				   sizeof(cmnd->fc_cdb), flags);
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_sframe_send_bls_acc(struct efc_node *node,
+			 struct efc_hw_sequence *seq)
+{
+	struct fc_frame_header *behdr = seq->header->dma.virt;
+	u16 ox_id = be16_to_cpu(behdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(behdr->fh_rx_id);
+	struct fc_ba_acc acc = {0};
+
+	acc.ba_ox_id = cpu_to_be16(ox_id);
+	acc.ba_rx_id = cpu_to_be16(rx_id);
+	acc.ba_low_seq_cnt = cpu_to_be16(U16_MAX);
+	acc.ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	return efct_sframe_common_send(node, seq,
+				      FC_RCTL_BA_ACC,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ,
+				      FC_TYPE_BLS,
+				      &acc, sizeof(acc));
+}
+
+void
+efct_node_io_cleanup(struct efc *efc, struct efc_node *node, bool force)
+{
+	struct efct_io *io;
+	struct efct_io *next;
+	unsigned long flags = 0;
+	struct efct *efct = efc->base;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(io, next, &node->active_ios, list_entry) {
+		list_del(&io->list_entry);
+		efct_io_pool_io_free(efct->xport->io_pool, io);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+void
+efct_node_els_cleanup(struct efc *efc, struct efc_node *node,
+		      bool force)
+{
+	struct efct_io *els;
+	struct efct_io *els_next;
+	struct efct_io *ls_acc_io;
+	unsigned long flags = 0;
+	struct efct *efct = efc->base;
+
+	/* first clean up ELS IOs that are pending (not yet active) */
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(els, els_next, &node->els_io_pend_list,
+				 list_entry) {
+		/*
+		 * skip the ELS IO for which a response
+		 * will be sent after shutdown
+		 */
+		if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE &&
+		    els == node->ls_acc_io) {
+			continue;
+		}
+		/*
+		 * can't call efct_els_io_free()
+		 * because lock is held; cleanup manually
+		 */
+		node_printf(node, "Freeing pending els %s\n",
+			    els->display_name);
+		list_del(&els->list_entry);
+
+		dma_free_coherent(&efct->pcidev->dev,
+				  els->els_rsp.size, els->els_rsp.virt,
+				  els->els_rsp.phys);
+		dma_free_coherent(&efct->pcidev->dev,
+				  els->els_req.size, els->els_req.virt,
+				  els->els_req.phys);
+		memset(&els->els_rsp, 0, sizeof(struct efc_dma));
+		memset(&els->els_req, 0, sizeof(struct efc_dma));
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	ls_acc_io = node->ls_acc_io;
+
+	if (node->ls_acc_io && ls_acc_io->hio) {
+		/*
+		 * if there's an IO that will result in an LS_ACC after
+		 * shutdown and its HW IO is non-NULL, it better be an
+		 * implicit logout in vanilla sequence coalescing. In this
+		 * case, force the LS_ACC to go out on another XRI (hio)
+		 * since the previous will have been aborted by the UNREG_RPI
+		 */
+		node_printf(node,
+			    "invalidating ls_acc_io due to implicit logo\n");
+
+		/*
+		 * No need to abort because the unreg_rpi
+		 * takes care of it, just free
+		 */
+		efct_hw_io_free(&efct->hw, ls_acc_io->hio);
+
+		/* NULL out hio to force the LS_ACC to grab a new XRI */
+		ls_acc_io->hio = NULL;
+	}
+}
+
+void
+efct_node_abort_all_els(struct efc *efc, struct efc_node *node)
+{
+	struct efct_io *els;
+	struct efct_io *els_next;
+	struct efc_node_cb cbdata;
+	struct efct *efct = efc->base;
+	unsigned long flags = 0;
+
+	memset(&cbdata, 0, sizeof(struct efc_node_cb));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry_safe(els, els_next, &node->els_io_active_list,
+				 list_entry) {
+		if (els->els_req_free)
+			continue;
+		efc_log_debug(efct, "[%s] initiate ELS abort %s\n",
+			       node->display_name, els->display_name);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		efct_els_abort(els, &cbdata);
+		spin_lock_irqsave(&node->active_ios_lock, flags);
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+static int
+efct_process_abts(struct efct_io *io, struct fc_frame_header *hdr)
+{
+	struct efc_node *node = io->node;
+	struct efct *efct = io->efct;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	struct efct_io *abortio;
+
+	/* Find IO and attempt to take a reference on it */
+	abortio = efct_io_find_tgt_io(efct, node, ox_id, rx_id);
+
+	if (abortio) {
+		/* Got a reference on the IO. Hold it until backend
+		 * is notified below
+		 */
+		node_printf(node, "Abort request: ox_id [%04x] rx_id [%04x]\n",
+			    ox_id, rx_id);
+
+		/*
+		 * Save the ox_id for the ABTS as the init_task_tag in our
+		 * manufactured TMF IO object
+		 */
+		io->display_name = "abts";
+		io->init_task_tag = ox_id;
+		/* don't set tgt_task_tag, don't want to confuse with XRI */
+
+		/*
+		 * Save the rx_id from the ABTS as it is
+		 * needed for the BLS response,
+		 * regardless of the IO context's rx_id
+		 */
+		io->abort_rx_id = rx_id;
+
+		/* Call target server command abort */
+		io->tmf_cmd = EFCT_SCSI_TMF_ABORT_TASK;
+		efct_scsi_recv_tmf(io, abortio->tgt_io.lun,
+				   EFCT_SCSI_TMF_ABORT_TASK, abortio, 0);
+
+		/*
+		 * Backend will have taken an additional
+		 * reference on the IO if needed;
+		 * done with current reference.
+		 */
+		kref_put(&abortio->ref, abortio->release);
+	} else {
+		/*
+		 * Either the IO was not found or it was freed between
+		 * finding it and attempting to get the reference.
+		 */
+		node_printf(node,
+			    "Abort request: ox_id [%04x], IO not found (exists=%d)\n",
+			    ox_id, (abortio != NULL));
+
+		/* Send a BA_RJT */
+		efct_bls_send_rjt_hdr(io, hdr);
+	}
+	return EFC_SUCCESS;
+}
+
+int
+efct_node_recv_abts_frame(struct efc *efc, struct efc_node *node,
+			  struct efc_hw_sequence *seq)
+{
+	struct efct *efct = efc->base;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efct_io *io = NULL;
+
+	node->abort_cnt++;
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (io) {
+		io->hw_priv = seq->hw_priv;
+		/* If we got this far, SIT=1 */
+		io->seq_init = 1;
+
+		/* fill out generic fields */
+		io->efct = efct;
+		io->node = node;
+		io->cmd_tgt = true;
+
+		efct_process_abts(io, seq->header->dma.virt);
+	} else {
+		node_printf(node,
+			    "SCSI IO allocation failed for ABTS received s_id %06x d_id %06x ox_id %04x rx_id %04x\n",
+			    ntoh24(hdr->fh_s_id), ntoh24(hdr->fh_d_id),
+			    be16_to_cpu(hdr->fh_ox_id),
+			    be16_to_cpu(hdr->fh_rx_id));
+	}
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_unsol.h b/drivers/scsi/elx/efct/efct_unsol.h
new file mode 100644
index 000000000000..615c83120a00
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_UNSOL_H__
+#define __EFCT_UNSOL_H__
+
+extern int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq);
+extern int
+efct_node_purge_pending(struct efc *efc, struct efc_node *node);
+extern void
+efct_process_node_pending(struct efc_node *node);
+extern void
+efct_domain_process_pending(struct efc_domain *domain);
+extern int
+efct_domain_purge_pending(struct efc_domain *domain);
+extern int
+efct_dispatch_unsolicited_bls(struct efc_node *node,
+			      struct efc_hw_sequence *seq);
+extern void
+efct_domain_hold_frames(struct efc *efc, struct efc_domain *domain);
+extern void
+efct_domain_accept_frames(struct efc *efc, struct efc_domain *domain);
+extern void
+efct_seq_coalesce_cleanup(struct efct_hw_io *io, u8 count);
+extern int
+efct_sframe_send_bls_acc(struct efc_node *node,
+			 struct efc_hw_sequence *seq);
+extern int
+efct_dispatch_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
+
+extern int
+efct_node_recv_abts_frame(struct efc *efc, struct efc_node *node,
+			  struct efc_hw_sequence *seq);
+extern void
+efct_node_els_cleanup(struct efc *efc, struct efc_node *node,
+		      bool force);
+
+extern void
+efct_node_io_cleanup(struct efc *efc, struct efc_node *node,
+		     bool force);
+
+void
+efct_node_abort_all_els(struct efc *efc, struct efc_node *node);
+
+#endif /* __EFCT_UNSOL_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 22/31] elx: efct: Extended Link Service IO handling
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (20 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 21/31] elx: efct: Unsolicited FC frame processing routines James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  7:58   ` Hannes Reinecke
  2020-04-16  9:49   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 23/31] elx: efct: SCSI IO handling routines James Smart
                   ` (8 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Functions to build and send ELS/CT/BLS commands and responses.
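
A rough usage sketch (illustrative only, not part of the patch; the
callback name and timeout value below are hypothetical): a caller in the
discovery engine issues an ELS request and is completed through
efct_els_req_cb() once the LS_ACC/LS_RJT arrives.

  static void my_plogi_done(struct efc_node *node,
                            struct efc_node_cb *cbdata, void *arg)
  {
          /* inspect cbdata->status and cbdata->els_rsp here */
  }

  io = efct_send_plogi(node, 2 /* timeout_sec */,
                       EFCT_FC_ELS_DEFAULT_RETRIES, my_plogi_done, NULL);
  if (!io)
          node_printf(node, "PLOGI send failed\n");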

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Unified log message using cmd_name
  Return early and drop else clauses, for better indentation and consistency.
  Changed assertion log messages.
---
 drivers/scsi/elx/efct/efct_els.c | 1928 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_els.h |  133 +++
 2 files changed, 2061 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_els.c
 create mode 100644 drivers/scsi/elx/efct/efct_els.h

diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
new file mode 100644
index 000000000000..8a2598a83445
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_els.c
@@ -0,0 +1,1928 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Functions to build and send ELS/CT/BLS commands and responses.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+
+#define ELS_IOFMT "[i:%04x t:%04x h:%04x]"
+
+#define EFCT_LOG_ENABLE_ELS_TRACE(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 1)) != 0) : 0)
+
+#define node_els_trace()  \
+	do { \
+		if (EFCT_LOG_ENABLE_ELS_TRACE(efct)) \
+			efc_log_info(efct, "[%s] %-20s\n", \
+				node->display_name, __func__); \
+	} while (0)
+
+#define els_io_printf(els, fmt, ...) \
+	efc_log_debug((struct efct *)(els)->node->efc->base, \
+		      "[%s]" ELS_IOFMT " %-8s " fmt, \
+		      (els)->node->display_name, \
+		      (els)->init_task_tag, (els)->tgt_task_tag, \
+		      (els)->hw_tag, (els)->display_name, ##__VA_ARGS__)
+
+#define EFCT_ELS_RSP_LEN		1024
+#define EFCT_ELS_GID_PT_RSP_LEN		8096
+
+static char *cmd_name[] = FC_ELS_CMDS_INIT;
+
+void *
+efct_els_req_send(struct efc *efc, struct efc_node *node, u32 cmd,
+		  u32 timeout_sec, u32 retries)
+{
+	struct efct *efct = efc->base;
+
+	efc_log_debug(efct, "send %s\n", cmd_name[cmd]);
+
+	switch (cmd) {
+	case ELS_PLOGI:
+		return efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
+	case ELS_FLOGI:
+		return efct_send_flogi(node, timeout_sec, retries, NULL, NULL);
+	case ELS_FDISC:
+		return efct_send_fdisc(node, timeout_sec, retries, NULL, NULL);
+	case ELS_LOGO:
+		return efct_send_logo(node, timeout_sec, retries, NULL, NULL);
+	case ELS_PRLI:
+		return efct_send_prli(node, timeout_sec, retries, NULL, NULL);
+	case ELS_ADISC:
+		return efct_send_adisc(node, timeout_sec, retries, NULL, NULL);
+	case ELS_SCR:
+		return efct_send_scr(node, timeout_sec, retries, NULL, NULL);
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+void *
+efct_els_resp_send(struct efc *efc, struct efc_node *node,
+		   u32 cmd, u16 ox_id)
+{
+	struct efct *efct = efc->base;
+
+	switch (cmd) {
+	case ELS_PLOGI:
+		efct_send_plogi_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_FLOGI:
+		efct_send_flogi_acc(node, ox_id, 0, NULL, NULL);
+		break;
+	case ELS_LOGO:
+		efct_send_logo_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PRLI:
+		efct_send_prli_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PRLO:
+		efct_send_prlo_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_ADISC:
+		efct_send_adisc_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_LS_ACC:
+		efct_send_ls_acc(node, ox_id, NULL, NULL);
+		break;
+	case ELS_PDISC:
+	case ELS_FDISC:
+	case ELS_RSCN:
+	case ELS_SCR:
+		efct_send_ls_rjt(efc, node, ox_id, ELS_RJT_UNAB,
+				 ELS_EXPL_NONE, 0);
+		break;
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+struct efct_io *
+efct_els_io_alloc(struct efc_node *node, u32 reqlen,
+		  enum efct_els_role role)
+{
+	return efct_els_io_alloc_size(node, reqlen, EFCT_ELS_RSP_LEN, role);
+}
+
+struct efct_io *
+efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
+		       u32 rsplen, enum efct_els_role role)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	struct efct_io *els;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+	if (!node->io_alloc_enabled) {
+		efc_log_debug(efct,
+			       "called with io_alloc_enabled = FALSE\n");
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	els = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!els) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* initialize refcount */
+	kref_init(&els->ref);
+	els->release = _efct_els_io_free;
+
+	switch (role) {
+	case EFCT_ELS_ROLE_ORIGINATOR:
+		els->cmd_ini = true;
+		els->cmd_tgt = false;
+		break;
+	case EFCT_ELS_ROLE_RESPONDER:
+		els->cmd_ini = false;
+		els->cmd_tgt = true;
+		break;
+	}
+
+	/* IO should not have an associated HW IO yet.
+	 * Assigned below.
+	 */
+	if (els->hio) {
+		efc_log_err(efct, "Error: HW io not null hio:%p\n", els->hio);
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* populate generic io fields */
+	els->efct = efct;
+	els->node = node;
+
+	/* set type and ELS-specific fields */
+	els->io_type = EFCT_IO_TYPE_ELS;
+	els->display_name = "pending";
+
+	/* now allocate DMA for request and response */
+	els->els_req.size = reqlen;
+	els->els_req.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					       els->els_req.size,
+					       &els->els_req.phys,
+					       GFP_DMA);
+	if (els->els_req.virt) {
+		els->els_rsp.size = rsplen;
+		els->els_rsp.virt = dma_alloc_coherent(&efct->pcidev->dev,
+						       els->els_rsp.size,
+						       &els->els_rsp.phys,
+						       GFP_DMA);
+		if (!els->els_rsp.virt) {
+			efc_log_err(efct, "dma_alloc rsp\n");
+			dma_free_coherent(&efct->pcidev->dev,
+					  els->els_req.size,
+				els->els_req.virt, els->els_req.phys);
+			memset(&els->els_req, 0, sizeof(struct efc_dma));
+			efct_io_pool_io_free(efct->xport->io_pool, els);
+			els = NULL;
+		}
+	} else {
+		efc_log_err(efct, "dma_alloc req\n");
+		efct_io_pool_io_free(efct->xport->io_pool, els);
+		els = NULL;
+	}
+
+	if (els) {
+		/* initialize fields */
+		els->els_retries_remaining =
+					EFCT_FC_ELS_DEFAULT_RETRIES;
+		els->els_pend = false;
+		els->els_active = false;
+
+		/* add els structure to ELS IO list */
+		INIT_LIST_HEAD(&els->list_entry);
+		list_add_tail(&els->list_entry,
+			      &node->els_io_pend_list);
+		els->els_pend = true;
+	}
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return els;
+}
+
+void
+efct_els_io_free(struct efct_io *els)
+{
+	kref_put(&els->ref, els->release);
+}
+
+void
+_efct_els_io_free(struct kref *arg)
+{
+	struct efct_io *els = container_of(arg, struct efct_io, ref);
+	struct efct *efct;
+	struct efc_node *node;
+	bool send_empty_event = false;
+	unsigned long flags = 0;
+
+	node = els->node;
+	efct = node->efc->base;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	if (els->els_active) {
+		/* if active, remove from active list and check empty */
+		list_del(&els->list_entry);
+		/* Send the list-empty event if the IO allocator is disabled
+		 * and the active list is now empty. If node->io_alloc_enabled
+		 * were not checked, the event would be posted continually.
+		 */
+		send_empty_event = (!node->io_alloc_enabled) &&
+			list_empty(&node->els_io_active_list);
+		els->els_active = false;
+	} else if (els->els_pend) {
+		/* if pending, remove from pending list; node shutdown isn't
+		 * gated off the pending list (only the active list), so no
+		 * need to check if the pending list is empty
+		 */
+		list_del(&els->list_entry);
+		els->els_pend = false;
+	} else {
+		efc_log_err(efct,
+			    "Error: els not in pending or active set\n");
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return;
+	}
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	/* free ELS request and response buffers */
+	dma_free_coherent(&efct->pcidev->dev, els->els_rsp.size,
+			  els->els_rsp.virt, els->els_rsp.phys);
+	dma_free_coherent(&efct->pcidev->dev, els->els_req.size,
+			  els->els_req.virt, els->els_req.phys);
+
+	memset(&els->els_rsp, 0, sizeof(struct efc_dma));
+	memset(&els->els_req, 0, sizeof(struct efc_dma));
+	efct_io_pool_io_free(efct->xport->io_pool, els);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+
+	efct_scsi_check_pending(efct);
+}
+
+static void
+efct_els_make_active(struct efct_io *els)
+{
+	struct efc_node *node = els->node;
+	unsigned long flags = 0;
+
+	/* move ELS from pending list to active list */
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	if (els->els_pend) {
+		if (els->els_active) {
+			efc_log_err(node->efc,
+				    "Error: els in both pend and active\n");
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return;
+		}
+		/* remove from pending list */
+		list_del(&els->list_entry);
+		els->els_pend = false;
+
+		/* add els structure to ELS IO list */
+		INIT_LIST_HEAD(&els->list_entry);
+		list_add_tail(&els->list_entry, &node->els_io_active_list);
+		els->els_active = true;
+	} else {
+		/* must be retrying; make sure it's already active */
+		if (!els->els_active)
+			efc_log_err(node->efc,
+				    "Error: els not in pending or active set\n");
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+static int efct_els_send(struct efct_io *els, u32 reqlen,
+			 u32 timeout_sec, efct_hw_srrs_cb_t cb)
+{
+	struct efc_node *node = els->node;
+
+	/* update ELS request counter */
+	node->els_req_cnt++;
+
+	/* move ELS from pending list to active list */
+	efct_els_make_active(els);
+
+	els->wire_len = reqlen;
+	return efct_scsi_io_dispatch(els, cb);
+}
+
+static void
+efct_els_retry(struct efct_io *els);
+
+static void
+efct_els_delay_timer_cb(struct timer_list *t)
+{
+	struct efct_io *els = from_timer(els, t, delay_timer);
+	struct efc_node *node = els->node;
+
+	/* The retry delay timer has expired; retry the ELS request and free
+	 * the HW IO so that a new OX_ID is used.
+	 */
+	if (els->state == EFCT_ELS_REQUEST_DELAY_ABORT) {
+		node->els_req_cnt++;
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, NULL);
+	} else {
+		efct_els_retry(els);
+	}
+}
+
+static void
+efct_els_abort_cleanup(struct efct_io *els)
+{
+	/* handle event for ABORT_WQE
+	 * whatever state the ELS happened to be in, propagate the aborted
+	 * event up to the node state machine in lieu of an
+	 * EFC_HW_SRRS_ELS_* event
+	 */
+	struct efc_node_cb cbdata;
+
+	cbdata.status = 0;
+	cbdata.ext_status = 0;
+	cbdata.els_rsp = els->els_rsp;
+	els_io_printf(els, "Request aborted\n");
+	efct_els_io_cleanup(els, EFC_HW_ELS_REQ_ABORTED, &cbdata);
+}
+
+static int
+efct_els_req_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		u32 length, int status, u32 ext_status, void *arg)
+{
+	struct efct_io *els;
+	struct efc_node *node;
+	struct efct *efct;
+	struct efc_node_cb cbdata;
+	u32 reason_code;
+
+	els = arg;
+	node = els->node;
+	efct = node->efc->base;
+
+	if (status != 0)
+		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
+
+	/* set the response len element of els->rsp */
+	els->els_rsp.len = length;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->els_rsp;
+
+	/* FW returns the number of bytes received on the link in
+	 * the WCQE, not the amount placed in the buffer; use this info to
+	 * check if there was an overrun.
+	 */
+	if (length > els->els_rsp.size) {
+		efc_log_warn(efct,
+			      "ELS response returned len=%d > buflen=%zu\n",
+			     length, els->els_rsp.size);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
+		return EFC_SUCCESS;
+	}
+
+	/* Post event to ELS IO object */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_OK, &cbdata);
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		reason_code = (ext_status >> 16) & 0xff;
+
+		/* delay and retry if reason code is Logical Busy */
+		switch (reason_code) {
+		case ELS_RJT_BUSY:
+			els->node->els_req_cnt--;
+			els_io_printf(els,
+				      "LS_RJT Logical Busy response, delay and retry\n");
+			timer_setup(&els->delay_timer,
+				    efct_els_delay_timer_cb, 0);
+			mod_timer(&els->delay_timer,
+				  jiffies + msecs_to_jiffies(5000));
+			els->state = EFCT_ELS_REQUEST_DELAYED;
+			break;
+		default:
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_RJT,
+					    &cbdata);
+			break;
+		}
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+		switch (ext_status) {
+		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
+			efct_els_retry(els);
+			break;
+
+		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+			if (els->state == EFCT_ELS_ABORT_IO_COMPL) {
+				/* completion for ELS that was aborted */
+				efct_els_abort_cleanup(els);
+			} else {
+				/* completion for ELS received first,
+				 * transition to wait for abort cmpl
+				 */
+				els->state = EFCT_ELS_REQ_ABORTED;
+			}
+
+			break;
+		default:
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    &cbdata);
+			break;
+		}
+		break;
+	default:	/* Other error */
+		efc_log_warn(efct,
+			     "els req failed status x%x, ext_status x%x\n",
+			     status, ext_status);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void efct_els_send_req(struct efc_node *node, struct efct_io *els)
+{
+	int rc = 0;
+	struct efct *efct;
+
+	efct = node->efc->base;
+	rc = efct_els_send(els, els->els_req.size,
+			   els->els_timeout_sec, efct_els_req_cb);
+
+	if (rc) {
+		struct efc_node_cb cbdata;
+
+		cbdata.status = INT_MAX;
+		cbdata.ext_status = INT_MAX;
+		cbdata.els_rsp = els->els_rsp;
+		efc_log_err(efct, "efct_els_send failed: %d\n", rc);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+				    &cbdata);
+	}
+}
+
+static void
+efct_els_retry(struct efct_io *els)
+{
+	struct efct *efct;
+	struct efc_node_cb cbdata;
+
+	efct = els->node->efc->base;
+	cbdata.status = INT_MAX;
+	cbdata.ext_status = INT_MAX;
+	cbdata.els_rsp = els->els_rsp;
+
+	if (!els->els_retries_remaining) {
+		efc_log_err(efct, "ELS retries exhausted\n");
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+				    &cbdata);
+		return;
+	}
+
+	els->els_retries_remaining--;
+	 /* Free the HW IO so that a new oxid is used.*/
+	if (els->hio) {
+		efct_hw_io_free(&efct->hw, els->hio);
+		els->hio = NULL;
+	}
+
+	efct_els_send_req(els->node, els);
+}
+
+static int
+efct_els_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		u32 length, int status, u32 ext_status, void *arg)
+{
+	struct efct_io *els;
+	struct efc_node *node;
+	struct efct *efct;
+	struct efc_node_cb cbdata;
+
+	els = arg;
+	node = els->node;
+	efct = node->efc->base;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->els_rsp;
+
+	/* Post node event */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_OK, &cbdata);
+		break;
+
+	default:	/* Other error */
+		efc_log_warn(efct,
+			     "[%s] %-8s failed status x%x, ext_status x%x\n",
+			     node->display_name, els->display_name,
+			     status, ext_status);
+		efc_log_warn(efct,
+			     "els acc complete: failed status x%x, ext_status x%x\n",
+			     status, ext_status);
+		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_FAIL, &cbdata);
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_els_send_rsp(struct efct_io *els, u32 rsplen)
+{
+	struct efc_node *node = els->node;
+
+	/* increment ELS completion counter */
+	node->els_cmpl_cnt++;
+
+	/* move ELS from pending list to active list */
+	efct_els_make_active(els);
+
+	els->wire_len = rsplen;
+	return efct_scsi_io_dispatch(els, efct_els_acc_cb);
+}
+
+struct efct_io *
+efct_send_plogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries,
+				int status, u32 ext_status, void *app)
+			 struct efc_node_cb *cbdata, void *arg), void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_flogi  *plogi;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "plogi";
+
+	/* Build PLOGI request */
+	plogi = els->els_req.virt;
+
+	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
+
+	plogi->fl_cmd = ELS_PLOGI;
+	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_flogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_flogi  *flogi;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "flogi";
+
+	/* Build FLOGI request */
+	flogi = els->els_req.virt;
+
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+	flogi->fl_cmd = ELS_FLOGI;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+				int status, u32 ext_status, void *app)
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_flogi *fdisc;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*fdisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "fdisc";
+
+	/* Build FDISC request */
+	fdisc = els->els_req.virt;
+
+	memcpy(fdisc, node->sport->service_params, sizeof(*fdisc));
+	fdisc->fl_cmd = ELS_FDISC;
+	memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_prli(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_io *els;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "prli";
+
+	/* Build PRLI request */
+	pp = els->els_req.virt;
+
+	memset(pp, 0, sizeof(*pp));
+
+	pp->prli.prli_cmd = ELS_PRLI;
+	pp->prli.prli_spp_len = 16;
+	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
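+	/* advertise initiator/target FCP functions per the port's roles */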
+	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+			       (node->sport->enable_ini ?
+			       FCP_SPPF_INIT_FCN : 0) |
+			       (node->sport->enable_tgt ?
+			       FCP_SPPF_TARG_FCN : 0));
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_prlo(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_io *els;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "prlo";
+
+	/* Build PRLO request */
+	pp = els->els_req.virt;
+
+	memset(pp, 0, sizeof(*pp));
+	pp->prlo.prlo_cmd = ELS_PRLO;
+	pp->prlo.prlo_obs = 0x10;
+	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_logo(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_logo *logo;
+	struct fc_els_flogi  *sparams;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->sport->service_params;
+
+	els = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "logo";
+
+	/* Build LOGO request */
+
+	logo = els->els_req.virt;
+
+	memset(logo, 0, sizeof(*logo));
+	logo->fl_cmd = ELS_LOGO;
+	hton24(logo->fl_n_port_id, node->rnode.sport->fc_id);
+	logo->fl_n_port_wwn = sparams->fl_wwpn;
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_adisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efc_sli_port *sport = node->sport;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->sport->service_params;
+
+	els = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "adisc";
+
+	/* Build ADISC request */
+
+	adisc = els->els_req.virt;
+
+	memset(adisc, 0, sizeof(*adisc));
+	adisc->adisc_cmd = ELS_ADISC;
+	hton24(adisc->adisc_hard_addr, sport->fc_id);
+	adisc->adisc_wwpn = sparams->fl_wwpn;
+	adisc->adisc_wwnn = sparams->fl_wwnn;
+	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_flogi  *pdisc;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*pdisc), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "pdisc";
+
+	pdisc = els->els_req.virt;
+
+	memcpy(pdisc, node->sport->service_params, sizeof(*pdisc));
+
+	pdisc->fl_cmd = ELS_PDISC;
+	memset(pdisc->_fl_resvd, 0, sizeof(pdisc->_fl_resvd));
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_scr(struct efc_node *node, u32 timeout_sec, u32 retries,
+	      els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_scr *req;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "scr";
+
+	req = els->els_req.virt;
+
+	memset(req, 0, sizeof(*req));
+	req->scr_cmd = ELS_SCR;
+	req->scr_reg_func = ELS_SCRF_FULL;
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_send_rscn(struct efc_node *node, u32 timeout_sec, u32 retries,
+	       void *port_ids, u32 port_ids_count, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_els_rscn *req;
+	struct fc_els_rscn_page *rscn_page;
+	u32 length = sizeof(*rscn_page) * port_ids_count;
+
+	length += sizeof(*req);
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, length, EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->els_timeout_sec = timeout_sec;
+	els->els_retries_remaining = retries;
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "rscn";
+
+	req = els->els_req.virt;
+
+	req->rscn_cmd = ELS_RSCN;
+	req->rscn_page_len = sizeof(struct fc_els_rscn_page);
+	req->rscn_plen = cpu_to_be16(length);
+
+	els->hio_type = EFCT_HW_ELS_REQ;
+	els->iparam.els.timeout = timeout_sec;
+
+	/* copy in the payload */
+	rscn_page = els->els_req.virt + sizeof(*req);
+	memcpy(rscn_page, port_ids,
+	       port_ids_count * sizeof(*rscn_page));
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+void *
+efct_send_ls_rjt(struct efc *efc, struct efc_node *node,
+		 u32 ox_id, u32 reason_code,
+		u32 reason_code_expl, u32 vendor_unique)
+{
+	struct efct_io *io = NULL;
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct fc_els_ls_rjt *rjt;
+
+	io = efct_els_io_alloc(node, sizeof(*rjt), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	node_els_trace();
+
+	io->els_callback = NULL;
+	io->els_callback_arg = NULL;
+	io->display_name = "ls_rjt";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	rjt = io->els_req.virt;
+	memset(rjt, 0, sizeof(*rjt));
+
+	rjt->er_cmd = ELS_LS_RJT;
+	rjt->er_reason = reason_code;
+	rjt->er_explan = reason_code_expl;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*rjt));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct fc_els_flogi  *plogi;
+	struct fc_els_flogi  *req = (struct fc_els_flogi *)node->service_params;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "plogi_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	plogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
+	plogi->fl_cmd = ELS_LS_ACC;
+	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+	/* Set Application header support bit if requested */
+	if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST))
+		plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST);
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*plogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+void *
+efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
+			u32 ox_id, u32 s_id)
+{
+	struct efct_io *io = NULL;
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = NULL;
+	io->els_callback_arg = NULL;
+	io->display_name = "flogi_p2p_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+	io->iparam.els.s_id = s_id;
+
+	flogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	io->hio_type = EFCT_HW_ELS_RSP_SID;
+	rc = efct_els_send_rsp(io, sizeof(*flogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_flogi_acc(struct efc_node *node, u32 ox_id, u32 is_fport,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "flogi_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+	io->iparam.els.s_id = io->node->sport->fc_id;
+
+	flogi = io->els_req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
+
+	/* Set F_port */
+	if (is_fport) {
+		/* Set F_PORT and Multiple N_PORT_ID Assignment */
+		flogi->fl_csp.sp_r_a_tov |= cpu_to_be32(3U << 28);
+	}
+
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	io->hio_type = EFCT_HW_ELS_RSP_SID;
+	rc = efct_els_send_rsp(io, sizeof(*flogi));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *efct_send_prli_acc(struct efc_node *node,
+				     u32 ox_id, els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "prli_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	pp = io->els_req.virt;
+	memset(pp, 0, sizeof(*pp));
+
+	pp->prli.prli_cmd = ELS_LS_ACC;
+	pp->prli.prli_spp_len = 0x10;
+	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK;
+
+	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+					(node->sport->enable_ini ?
+					 FCP_SPPF_INIT_FCN : 0) |
+					(node->sport->enable_tgt ?
+					 FCP_SPPF_TARG_FCN : 0));
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*pp));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "prlo_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	pp = io->els_req.virt;
+	memset(pp, 0, sizeof(*pp));
+	pp->prlo.prlo_cmd = ELS_LS_ACC;
+	pp->prlo.prlo_obs = 0x10;
+	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_RESP_ACK;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*pp));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_ls_acc(struct efc_node *node, u32 ox_id, els_cb_t cb,
+		 void *cbarg)
+{
+	int rc;
+	struct efct *efct = node->efc->base;
+	struct efct_io *io = NULL;
+	struct fc_els_ls_acc *acc;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*acc), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "ls_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	acc = io->els_req.virt;
+	memset(acc, 0, sizeof(*acc));
+
+	acc->la_cmd = ELS_LS_ACC;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*acc));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_logo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_io *io = NULL;
+	struct efct *efct = node->efc->base;
+	struct fc_els_ls_acc *logo;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "logo_acc";
+	io->init_task_tag = ox_id;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	logo = io->els_req.virt;
+	memset(logo, 0, sizeof(*logo));
+
+	logo->la_cmd = ELS_LS_ACC;
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*logo));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+struct efct_io *
+efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg)
+{
+	int rc;
+	struct efct_io *io = NULL;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efct *efct;
+
+	efct = node->efc->base;
+
+	node_els_trace();
+
+	io = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efct, "els IO alloc failed\n");
+		return io;
+	}
+
+	io->els_callback = cb;
+	io->els_callback_arg = cbarg;
+	io->display_name = "adisc_acc";
+	io->init_task_tag = ox_id;
+
+	/* Go ahead and send the ELS_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.els.ox_id = ox_id;
+
+	sparams = (struct fc_els_flogi  *)node->sport->service_params;
+	adisc = io->els_req.virt;
+	memset(adisc, 0, sizeof(*adisc));
+	adisc->adisc_cmd = ELS_LS_ACC;
+	adisc->adisc_wwpn = sparams->fl_wwpn;
+	adisc->adisc_wwnn = sparams->fl_wwnn;
+	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
+
+	io->hio_type = EFCT_HW_ELS_RSP;
+	rc = efct_els_send_rsp(io, sizeof(*adisc));
+	if (rc) {
+		efct_els_io_free(io);
+		io = NULL;
+	}
+
+	return io;
+}
+
+void *
+efct_els_send_ct(struct efc *efc, struct efc_node *node, u32 cmd,
+		 u32 timeout_sec, u32 retries)
+{
+	struct efct *efct = efc->base;
+
+	switch (cmd) {
+	case FC_RCTL_ELS_REQ:
+		efct_ns_send_rftid(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case FC_NS_RFF_ID:
+		efct_ns_send_rffid(node, timeout_sec, retries, NULL, NULL);
+		break;
+	case FC_NS_GID_PT:
+		efct_ns_send_gidpt(node, timeout_sec, retries, NULL, NULL);
+		break;
+	default:
+		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
+	}
+
+	return NULL;
+}
+
+static inline void fcct_build_req_header(struct fc_ct_hdr  *hdr,
+					 u16 cmd, u16 max_size)
+{
+	hdr->ct_rev = FC_CT_REV;
+	hdr->ct_fs_type = FC_FST_DIR;
+	hdr->ct_fs_subtype = FC_NS_SUBTYPE;
+	hdr->ct_options = 0;
+	hdr->ct_cmd = cpu_to_be16(cmd);
+	/* maximum/residual size, expressed in 32-bit words */
+	hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32)));
+	hdr->ct_reason = 0;
+	hdr->ct_explan = 0;
+	hdr->ct_vendor = 0;
+}
+
+struct efct_io *
+efct_ns_send_rftid(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_rft_id *rftid;
+
+	node_els_trace();
+
+	els = efct_els_io_alloc(node, sizeof(*ct) + sizeof(*rftid),
+				EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+
+	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->iparam.fc_ct.type = FC_TYPE_CT;
+	els->iparam.fc_ct.df_ctl = 0;
+	els->iparam.fc_ct.timeout = timeout_sec;
+
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "rftid";
+
+	ct = els->els_req.virt;
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(ct, FC_NS_RFT_ID, sizeof(*rftid));
+
+	rftid = els->els_req.virt + sizeof(*ct);
+	memset(rftid, 0, sizeof(*rftid));
+	hton24(rftid->fr_fid.fp_fid, node->rnode.sport->fc_id);
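+	/* set the FC-4 type bit for FCP in the name server registration */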
+	rftid->fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] =
+		cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW));
+
+	els->hio_type = EFCT_HW_FC_CT;
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_ns_send_rffid(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els;
+	struct efct *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_rff_id *rffid;
+	u32 size = 0;
+
+	node_els_trace();
+
+	size = sizeof(*ct) + sizeof(*rffid);
+
+	els = efct_els_io_alloc(node, size, EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->iparam.fc_ct.type = FC_TYPE_CT;
+	els->iparam.fc_ct.df_ctl = 0;
+	els->iparam.fc_ct.timeout = timeout_sec;
+
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "rffid";
+	ct = els->els_req.virt;
+
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(ct, FC_NS_RFF_ID, sizeof(*rffid));
+
+	rffid = els->els_req.virt + sizeof(*ct);
+	memset(rffid, 0, sizeof(*rffid));
+
+	hton24(rffid->fr_fid.fp_fid, node->rnode.sport->fc_id);
+	if (node->sport->enable_ini)
+		rffid->fr_feat |= FCP_FEAT_INIT;
+	if (node->sport->enable_tgt)
+		rffid->fr_feat |= FCP_FEAT_TARG;
+	rffid->fr_type = FC_TYPE_FCP;
+
+	els->hio_type = EFCT_HW_FC_CT;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+struct efct_io *
+efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg)
+{
+	struct efct_io *els = NULL;
+	struct efct *efct = node->efc->base;
+	struct fc_ct_hdr *ct;
+	struct fc_ns_gid_pt *gidpt;
+	u32 size = 0;
+
+	node_els_trace();
+
+	size = sizeof(*ct) + sizeof(*gidpt);
+	els = efct_els_io_alloc_size(node, size,
+				     EFCT_ELS_GID_PT_RSP_LEN,
+				   EFCT_ELS_ROLE_ORIGINATOR);
+	if (!els) {
+		efc_log_err(efct, "IO alloc failed\n");
+		return els;
+	}
+
+	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->iparam.fc_ct.type = FC_TYPE_CT;
+	els->iparam.fc_ct.df_ctl = 0;
+	els->iparam.fc_ct.timeout = timeout_sec;
+
+	els->els_callback = cb;
+	els->els_callback_arg = cbarg;
+	els->display_name = "gidpt";
+
+	ct = els->els_req.virt;
+
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(ct, FC_NS_GID_PT, sizeof(*gidpt));
+
+	gidpt = els->els_req.virt + sizeof(*ct);
+	memset(gidpt, 0, sizeof(*gidpt));
+	gidpt->fn_pt_type = FC_TYPE_FCP;
+
+	els->hio_type = EFCT_HW_FC_CT;
+
+	efct_els_send_req(node, els);
+
+	return els;
+}
+
+static int efct_bls_send_rjt_cb(struct efct_hw_io *hio,
+				struct efc_remote_node *rnode, u32 length,
+		int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+
+	efct_scsi_io_free(io);
+	return EFC_SUCCESS;
+}
+
+static struct efct_io *
+efct_bls_send_rjt(struct efct_io *io, u32 s_id,
+		  u16 ox_id, u16 rx_id)
+{
+	struct efc_node *node = io->node;
+	int rc;
+	struct fc_ba_rjt *acc;
+	struct efct *efct;
+
+	efct = node->efc->base;
+
+	if (node->rnode.sport->fc_id == s_id)
+		s_id = U32_MAX;
+
+	/* fill out generic fields */
+	io->efct = efct;
+	io->node = node;
+	io->cmd_tgt = true;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_rjt";
+	io->hio_type = EFCT_HW_BLS_RJT;
+	io->init_task_tag = ox_id;
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls.ox_id = ox_id;
+	io->iparam.bls.rx_id = rx_id;
+
+	acc = (void *)io->iparam.bls.payload;
+
+	memset(io->iparam.bls.payload, 0,
+	       sizeof(io->iparam.bls.payload));
+	acc->br_reason = ELS_RJT_UNAB;
+	acc->br_explan = ELS_EXPL_NONE;
+
+	rc = efct_scsi_io_dispatch(io, efct_bls_send_rjt_cb);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+struct efct_io *
+efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	u32 d_id = ntoh24(hdr->fh_d_id);
+
+	return efct_bls_send_rjt(io, d_id, ox_id, rx_id);
+}
+
+static int efct_bls_send_acc_cb(struct efct_hw_io *hio,
+				struct efc_remote_node *rnode, u32 length,
+		int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+
+	efct_scsi_io_free(io);
+	return EFC_SUCCESS;
+}
+
+static struct efct_io *
+efct_bls_send_acc(struct efct_io *io, u32 s_id,
+		  u16 ox_id, u16 rx_id)
+{
+	struct efc_node *node = io->node;
+	int rc;
+	struct fc_ba_acc *acc;
+	struct efct *efct;
+
+	efct = node->efc->base;
+
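+	/* no S_ID override is needed when it matches this port's FC ID */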
+	if (node->rnode.sport->fc_id == s_id)
+		s_id = U32_MAX;
+
+	/* fill out generic fields */
+	io->efct = efct;
+	io->node = node;
+	io->cmd_tgt = true;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_acc";
+	io->hio_type = EFCT_HW_BLS_ACC_SID;
+	io->init_task_tag = ox_id;
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls.s_id = s_id;
+	io->iparam.bls.ox_id = ox_id;
+	io->iparam.bls.rx_id = rx_id;
+
+	acc = (void *)io->iparam.bls.payload;
+
+	memset(io->iparam.bls.payload, 0,
+	       sizeof(io->iparam.bls.payload));
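+	/* high seq cnt of 0xffff: the entire exchange is being aborted */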
+	acc->ba_ox_id = cpu_to_be16(io->iparam.bls.ox_id);
+	acc->ba_rx_id = cpu_to_be16(io->iparam.bls.rx_id);
+	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	rc = efct_scsi_io_dispatch(io, efct_bls_send_acc_cb);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+void *
+efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
+		      struct fc_frame_header *hdr)
+{
+	struct efct_io *io = NULL;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	u32 d_id = ntoh24(hdr->fh_d_id);
+
+	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return io;
+	}
+
+	return efct_bls_send_acc(io, d_id, ox_id, rx_id);
+}
+
+static int
+efct_els_abort_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		  u32 length, int status, u32 ext_status,
+		 void *app)
+{
+	struct efct_io *els;
+	struct efct_io *abort_io = NULL; /* IO structure used to abort ELS */
+	struct efct *efct;
+
+	abort_io = app;
+	els = abort_io->io_to_abort;
+
+	if (!els || !els->node || !els->node->efc)
+		return EFC_FAIL;
+
+	efct = els->node->efc->base;
+
+	if (status != 0)
+		efc_log_warn(efct, "status x%x ext x%x\n", status, ext_status);
+
+	/* now free the abort IO */
+	efct_io_pool_io_free(efct->xport->io_pool, abort_io);
+
+	/* send completion event to indicate abort process is complete
+	 * Note: The ELS SM will already be receiving
+	 * ELS_REQ_OK/FAIL/RJT/ABORTED
+	 */
+	if (els->state == EFCT_ELS_REQ_ABORTED) {
+		/* completion for ELS that was aborted */
+		efct_els_abort_cleanup(els);
+	} else {
+		/* completion for abort was received first,
+		 * transition to wait for req cmpl
+		 */
+		els->state = EFCT_ELS_ABORT_IO_COMPL;
+	}
+
+	/* done with ELS IO to abort */
+	kref_put(&els->ref, els->release);
+	return EFC_SUCCESS;
+}
+
+static struct efct_io *
+efct_els_abort_io(struct efct_io *els, bool send_abts)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	int rc;
+	struct efct_io *abort_io = NULL;
+
+	efct = els->node->efc->base;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if ((kref_get_unless_zero(&els->ref) == 0)) {
+		/* command no longer active */
+		efc_log_debug(efct, "els no longer active\n");
+		return NULL;
+	}
+
+	/* allocate IO structure to send abort */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+	} else {
+		/* set generic fields */
+		abort_io->efct = efct;
+		abort_io->node = els->node;
+		abort_io->cmd_ini = true;
+
+		/* set type and ABORT-specific fields */
+		abort_io->io_type = EFCT_IO_TYPE_ABORT;
+		abort_io->display_name = "abort_els";
+		abort_io->io_to_abort = els;
+		abort_io->send_abts = send_abts;
+
+		/* now dispatch IO */
+		rc = efct_scsi_io_dispatch_abort(abort_io, efct_els_abort_cb);
+		if (rc) {
+			efc_log_err(efct,
+				     "efct_scsi_io_dispatch_abort failed: %d\n", rc);
+			efct_io_pool_io_free(efct->xport->io_pool, abort_io);
+			abort_io = NULL;
+		}
+	}
+
+	/* if something failed, put reference on ELS to abort */
+	if (!abort_io)
+		kref_put(&els->ref, els->release);
+	return abort_io;
+}
+
+void
+efct_els_abort(struct efct_io *els, struct efc_node_cb *arg)
+{
+	struct efct_io *io = NULL;
+	struct efc_node *node;
+	struct efct *efct;
+
+	node = els->node;
+	efct = node->efc->base;
+
+	/* request to abort this ELS without an ABTS */
+	els_io_printf(els, "ELS abort requested\n");
+	/* Set retries to zero, we are done */
+	els->els_retries_remaining = 0;
+	if (els->state == EFCT_ELS_REQUEST) {
+		els->state = EFCT_ELS_REQ_ABORT;
+		io = efct_els_abort_io(els, false);
+		if (!io) {
+			efc_log_err(efct, "efct_els_abort_io failed\n");
+			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
+					    arg);
+		}
+
+	} else if (els->state == EFCT_ELS_REQUEST_DELAYED) {
+		/* mod/resched the timer for a short duration */
+		mod_timer(&els->delay_timer,
+			  jiffies + msecs_to_jiffies(1));
+
+		els->state = EFCT_ELS_REQUEST_DELAY_ABORT;
+	}
+}
+
+void
+efct_els_io_cleanup(struct efct_io *els,
+		    enum efc_hw_node_els_event node_evt, void *arg)
+{
+	/* don't want further events that could come; e.g. abort requests
+	 * from the node state machine; thus, disable state machine
+	 */
+	els->els_req_free = true;
+	efc_node_post_els_resp(els->node, node_evt, arg);
+
+	/* If this IO has a callback, invoke it */
+	if (els->els_callback) {
+		(*els->els_callback)(els->node, arg,
+				    els->els_callback_arg);
+	}
+	efct_els_io_free(els);
+}
+
+int
+efct_els_io_list_empty(struct efc_node *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		empty = list_empty(list);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return empty;
+}
+
+static int
+efct_ct_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+	       u32 length, int status, u32 ext_status,
+	      void *arg)
+{
+	struct efct_io *io = arg;
+
+	efct_els_io_free(io);
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		 struct fc_ct_hdr  *ct_hdr, u32 cmd_rsp_code,
+		u32 reason_code, u32 reason_code_explanation)
+{
+	struct efct_io *io = NULL;
+	struct fc_ct_hdr  *rsp = NULL;
+
+	io = efct_els_io_alloc(node, 256, EFCT_ELS_ROLE_RESPONDER);
+	if (!io) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	rsp = io->els_rsp.virt;
+	io->io_type = EFCT_IO_TYPE_CT_RESP;
+
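+	/* start from the received CT header, then set the response fields */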
+	*rsp = *ct_hdr;
+
+	fcct_build_req_header(rsp, cmd_rsp_code, 0);
+	rsp->ct_reason = reason_code;
+	rsp->ct_explan = reason_code_explanation;
+
+	io->display_name = "ct_rsp";
+	io->init_task_tag = ox_id;
+	io->wire_len += sizeof(*rsp);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+
+	io->hio_type = EFCT_HW_FC_CT_RSP;
+	io->iparam.fc_ct.ox_id = ox_id;
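+	/* R_CTL 0x3: device data, solicited control (CT response) */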
+	io->iparam.fc_ct.r_ctl = 3;
+	io->iparam.fc_ct.type = FC_TYPE_CT;
+	io->iparam.fc_ct.df_ctl = 0;
+	io->iparam.fc_ct.timeout = 5;
+
+	if (efct_scsi_io_dispatch(io, efct_ct_acc_cb) < 0) {
+		efct_els_io_free(io);
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_els.h b/drivers/scsi/elx/efct/efct_els.h
new file mode 100644
index 000000000000..9b79783a39a3
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_els.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_ELS_H__)
+#define __EFCT_ELS_H__
+
+enum efct_els_role {
+	EFCT_ELS_ROLE_ORIGINATOR,
+	EFCT_ELS_ROLE_RESPONDER,
+};
+
+void _efct_els_io_free(struct kref *arg);
+extern struct efct_io *
+efct_els_io_alloc(struct efc_node *node, u32 reqlen,
+		  enum efct_els_role role);
+extern struct efct_io *
+efct_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen,
+		       enum efct_els_role role);
+void efct_els_io_free(struct efct_io *els);
+
+extern void *
+efct_els_req_send(struct efc *efc, struct efc_node *node,
+		  u32 cmd, u32 timeout_sec, u32 retries);
+extern void *
+efct_els_send_ct(struct efc *efc, struct efc_node *node,
+		 u32 cmd, u32 timeout_sec, u32 retries);
+extern void *
+efct_els_resp_send(struct efc *efc, struct efc_node *node,
+		   u32 cmd, u16 ox_id);
+void
+efct_els_abort(struct efct_io *els, struct efc_node_cb *arg);
+/* ELS command send */
+typedef void (*els_cb_t)(struct efc_node *node,
+			 struct efc_node_cb *cbdata, void *arg);
+extern struct efct_io *
+efct_send_plogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_flogi(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prli(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prlo(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_logo(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_adisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
+		u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_scr(struct efc_node *node, u32 timeout_sec,
+	      u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_ns_send_rftid(struct efc_node *node,
+		   u32 timeout_sec,
+		  u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_ns_send_rffid(struct efc_node *node,
+		   u32 timeout_sec,
+		  u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
+		   u32 retries, els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_rscn(struct efc_node *node, u32 timeout_sec,
+	       u32 retries, void *port_ids,
+	      u32 port_ids_count, els_cb_t cb, void *cbarg);
+extern void
+efct_els_io_cleanup(struct efct_io *els, enum efc_hw_node_els_event node_evt,
+		    void *arg);
+
+/* ELS acc send */
+extern struct efct_io *
+efct_send_ls_acc(struct efc_node *node, u32 ox_id,
+		 els_cb_t cb, void *cbarg);
+
+extern void *
+efct_send_ls_rjt(struct efc *efc, struct efc_node *node, u32 ox_id,
+		 u32 reason_code, u32 reason_code_expl,
+		u32 vendor_unique);
+extern void *
+efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
+			u32 ox_id, u32 s_id);
+extern struct efct_io *
+efct_send_flogi_acc(struct efc_node *node, u32 ox_id,
+		    u32 is_fport, els_cb_t cb,
+		   void *cbarg);
+extern struct efct_io *
+efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prli_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_logo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
+		   els_cb_t cb, void *cbarg);
+extern struct efct_io *
+efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
+		    els_cb_t cb, void *cbarg);
+
+extern void *
+efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
+		      struct fc_frame_header *hdr);
+extern struct efct_io *
+efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr);
+
+extern int
+efct_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+/* CT */
+extern int
+efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		 struct fc_ct_hdr *ct_hdr,
+		 u32 cmd_rsp_code, u32 reason_code,
+		 u32 reason_code_explanation);
+
+#endif /* __EFCT_ELS_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 23/31] elx: efct: SCSI IO handling routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (21 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 22/31] elx: efct: Extended link Service IO handling James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16 11:40   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 24/31] elx: efct: LIO backend interface routines James Smart
                   ` (7 subsequent siblings)
  30 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for SCSI transport IO allocation, build, and send.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>

---
v3:
  Removed DIF related code which is not used.
  Removed SCSI get property.
---
 drivers/scsi/elx/efct/efct_scsi.c | 1192 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_scsi.h |  235 ++++++++
 2 files changed, 1427 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h

diff --git a/drivers/scsi/elx/efct/efct_scsi.c b/drivers/scsi/elx/efct/efct_scsi.c
new file mode 100644
index 000000000000..c299eadbc492
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.c
@@ -0,0 +1,1192 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_els.h"
+#include "efct_hw.h"
+
+#define enable_tsend_auto_resp(efct)	1
+#define enable_treceive_auto_resp(efct)	0
+
+#define SCSI_IOFMT "[%04x][i:%04x t:%04x h:%04x]"
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, "[%s]" SCSI_IOFMT fmt, \
+		io->node->display_name, io->instance_index,\
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+#define EFCT_LOG_ENABLE_SCSI_TRACE(efct)                \
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 2)) != 0) : 0)
+
+#define scsi_io_trace(io, fmt, ...) \
+	do { \
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(io->efct)) \
+			scsi_io_printf(io, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+/* Enable the SCSI and Transport IO allocations */
+void
+efct_scsi_io_alloc_enable(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		node->io_alloc_enabled = true;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+/* Disable the SCSI and Transport IO allocations */
+void
+efct_scsi_io_alloc_disable(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		node->io_alloc_enabled = false;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+}
+
+struct efct_io *
+efct_scsi_io_alloc(struct efc_node *node, enum efct_scsi_io_role role)
+{
+	struct efct *efct;
+	struct efc *efcp;
+	struct efct_xport *xport;
+	struct efct_io *io;
+	unsigned long flags = 0;
+
+	efcp = node->efc;
+	efct = efcp->base;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+		if (!node->io_alloc_enabled) {
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		io = efct_io_pool_io_alloc(efct->xport->io_pool);
+		if (!io) {
+			atomic_add_return(1, &xport->io_alloc_failed_count);
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		/* initialize refcount */
+		kref_init(&io->ref);
+		io->release = _efct_scsi_io_free;
+
+		if (io->hio) {
+			efc_log_err(efct,
+				     "assertion failed: io->hio is not NULL\n");
+			spin_unlock_irqrestore(&node->active_ios_lock, flags);
+			return NULL;
+		}
+
+		/* set generic fields */
+		io->efct = efct;
+		io->node = node;
+
+		/* set type and name */
+		io->io_type = EFCT_IO_TYPE_IO;
+		io->display_name = "scsi_io";
+
+		switch (role) {
+		case EFCT_SCSI_IO_ROLE_ORIGINATOR:
+			io->cmd_ini = true;
+			io->cmd_tgt = false;
+			break;
+		case EFCT_SCSI_IO_ROLE_RESPONDER:
+			io->cmd_ini = false;
+			io->cmd_tgt = true;
+			break;
+		}
+
+		/* Add to node's active_ios list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &node->active_ios);
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	return io;
+}
+
+void
+_efct_scsi_io_free(struct kref *arg)
+{
+	struct efct_io *io = container_of(arg, struct efct_io, ref);
+	struct efct *efct = io->efct;
+	struct efc_node *node = io->node;
+	int send_empty_event;
+	unsigned long flags = 0;
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+
+	if (io->io_free) {
+		efc_log_err(efct, "IO already freed.\n");
+		return;
+	}
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+		list_del(&io->list_entry);
+		send_empty_event = (!node->io_alloc_enabled) &&
+					list_empty(&node->active_ios);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+
+	io->node = NULL;
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+}
+
+void
+efct_scsi_io_free(struct efct_io *io)
+{
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	WARN_ON(!refcount_read(&io->ref.refcount));
+	kref_put(&io->ref, io->release);
+}
+
+static void
+efct_target_io_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
+		  u32 length, int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status scsi_stat = EFCT_SCSI_STATUS_GOOD;
+
+	if (!io || !io->efct) {
+		pr_err("%s: IO can not be NULL\n", __func__);
+		return;
+	}
+
+	scsi_io_trace(io, "status x%x ext_status x%x\n", status, ext_status);
+
+	efct = io->efct;
+
+	io->transferred += length;
+
+	/* Call target server completion */
+	if (io->scsi_tgt_cb) {
+		efct_scsi_io_cb_t cb = io->scsi_tgt_cb;
+		u32 flags = 0;
+
+		/* Clear the callback before invoking the callback */
+		io->scsi_tgt_cb = NULL;
+
+		/* if status was good, and auto-good-response was set,
+		 * then callback target-server with IO_CMPL_RSP_SENT,
+		 * otherwise send IO_CMPL
+		 */
+		if (status == 0 && io->auto_resp)
+			flags |= EFCT_SCSI_IO_CMPL_RSP_SENT;
+		else
+			flags |= EFCT_SCSI_IO_CMPL;
+
+		switch (status) {
+		case SLI4_FC_WCQE_STATUS_SUCCESS:
+			scsi_stat = EFCT_SCSI_STATUS_GOOD;
+			break;
+		case SLI4_FC_WCQE_STATUS_DI_ERROR:
+			if (ext_status & SLI4_FC_DI_ERROR_GE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_GUARD_ERR;
+			else if (ext_status & SLI4_FC_DI_ERROR_AE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR;
+			else if (ext_status & SLI4_FC_DI_ERROR_RE)
+				scsi_stat = EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR;
+			else
+				scsi_stat = EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR;
+			break;
+		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+			switch (ext_status) {
+			case SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET:
+			case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+				scsi_stat = EFCT_SCSI_STATUS_ABORTED;
+				break;
+			case SLI4_FC_LOCAL_REJECT_INVALID_RPI:
+				scsi_stat = EFCT_SCSI_STATUS_NEXUS_LOST;
+				break;
+			case SLI4_FC_LOCAL_REJECT_NO_XRI:
+				scsi_stat = EFCT_SCSI_STATUS_NO_IO;
+				break;
+			default:
+				/* we have seen 0x0d (TX_DMA_FAILED err) */
+				scsi_stat = EFCT_SCSI_STATUS_ERROR;
+				break;
+			}
+			break;
+
+		case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT:
+			/* target IO timed out */
+			scsi_stat = EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED;
+			break;
+
+		case SLI4_FC_WCQE_STATUS_SHUTDOWN:
+			/* Target IO cancelled by HW */
+			scsi_stat = EFCT_SCSI_STATUS_SHUTDOWN;
+			break;
+
+		default:
+			scsi_stat = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+
+		cb(io, scsi_stat, flags, io->scsi_tgt_cb_arg);
+	}
+	efct_scsi_check_pending(efct);
+}
+
+static int
+efct_scsi_build_sgls(struct efct_hw *hw, struct efct_hw_io *hio,
+		struct efct_scsi_sgl *sgl, u32 sgl_count,
+		enum efct_hw_io_type type)
+{
+	int rc;
+	u32 i;
+	struct efct *efct = hw->os;
+
+	/* Initialize HW SGL */
+	rc = efct_hw_io_init_sges(hw, hio, type);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_io_init_sges failed: %d\n", rc);
+		return EFC_FAIL;
+	}
+
+	for (i = 0; i < sgl_count; i++) {
+		/* Add data SGE */
+		rc = efct_hw_io_add_sge(hw, hio,
+				sgl[i].addr, sgl[i].len);
+		if (rc) {
+			efc_log_err(efct,
+					"add sge failed cnt=%d rc=%d\n",
+					sgl_count, rc);
+			return rc;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void efc_log_sgl(struct efct_io *io)
+{
+	struct efct_hw_io *hio = io->hio;
+	struct sli4_sge *data = NULL;
+	u32 *dword = NULL;
+	u32 i;
+	u32 n_sge;
+
+	scsi_io_trace(io, "def_sgl at 0x%x 0x%08x\n",
+		      upper_32_bits(hio->def_sgl.phys),
+		      lower_32_bits(hio->def_sgl.phys));
+	n_sge = (hio->sgl == &hio->def_sgl ?
+			hio->n_sge : hio->def_sgl_count);
+	for (i = 0, data = hio->def_sgl.virt; i < n_sge; i++, data++) {
+		dword = (u32 *)data;
+
+		scsi_io_trace(io, "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
+			      i, dword[0], dword[1], dword[2], dword[3]);
+
+		if (dword[2] & (1U << 31))
+			break;
+	}
+}
+
+static int
+efct_scsi_check_pending_async_cb(struct efct_hw *hw, int status,
+				 u8 *mqe, void *arg)
+{
+	struct efct_io *io = arg;
+
+	if (io) {
+		if (io->hw_cb) {
+			efct_hw_done_t cb = io->hw_cb;
+
+			io->hw_cb = NULL;
+			(cb)(io->hio, NULL, 0,
+			 SLI4_FC_WCQE_STATUS_DISPATCH_ERROR, 0, io);
+		}
+	}
+	return EFC_SUCCESS;
+}
+
+static int
+efct_scsi_io_dispatch_hw_io(struct efct_io *io, struct efct_hw_io *hio)
+{
+	int rc = 0;
+	struct efct *efct = io->efct;
+
+	/* Got a HW IO;
+	 * update ini/tgt_task_tag with HW IO info and dispatch
+	 */
+	io->hio = hio;
+	if (io->cmd_tgt)
+		io->tgt_task_tag = hio->indicator;
+	else if (io->cmd_ini)
+		io->init_task_tag = hio->indicator;
+	io->hw_tag = hio->reqtag;
+
+	hio->eq = io->hw_priv;
+
+	/* Copy WQ steering */
+	switch (io->wq_steering) {
+	case EFCT_SCSI_WQ_STEERING_CLASS >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CLASS;
+		break;
+	case EFCT_SCSI_WQ_STEERING_REQUEST >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_REQUEST;
+		break;
+	case EFCT_SCSI_WQ_STEERING_CPU >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CPU;
+		break;
+	}
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_IO:
+		rc = efct_scsi_build_sgls(&efct->hw, io->hio,
+					  io->sgl, io->sgl_count, io->hio_type);
+		if (rc)
+			break;
+
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(efct))
+			efc_log_sgl(io);
+
+		if (io->app_id)
+			io->iparam.fcp_tgt.app_id = io->app_id;
+
+		rc = efct_hw_io_send(&io->efct->hw, io->hio_type, io->hio,
+				     io->wire_len, &io->iparam,
+				     &io->node->rnode, io->hw_cb, io);
+		break;
+	case EFCT_IO_TYPE_ELS:
+	case EFCT_IO_TYPE_CT:
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       &io->els_req, io->wire_len,
+			&io->els_rsp, &io->node->rnode, &io->iparam,
+			io->hw_cb, io);
+		break;
+	case EFCT_IO_TYPE_CT_RESP:
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       &io->els_rsp, io->wire_len,
+			NULL, &io->node->rnode, &io->iparam,
+			io->hw_cb, io);
+		break;
+	case EFCT_IO_TYPE_BLS_RESP:
+		/* no need to update tgt_task_tag for BLS response since
+		 * the RX_ID will be specified by the payload, not the XRI
+		 */
+		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
+				       NULL, 0, NULL, &io->node->rnode,
+			&io->iparam, io->hw_cb, io);
+		break;
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = -1;
+		break;
+	}
+	return rc;
+}
+
+static int
+efct_scsi_io_dispatch_no_hw_io(struct efct_io *io)
+{
+	int rc;
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_ABORT: {
+		struct efct_hw_io *hio_to_abort = NULL;
+
+		hio_to_abort = io->io_to_abort->hio;
+
+		if (!hio_to_abort) {
+			/*
+			 * If "IO to abort" does not have an
+			 * associated HW IO, immediately make callback with
+			 * success. The command must have been sent to
+			 * the backend, but the data phase has not yet
+			 * started, so we don't have a HW IO.
+			 *
+			 * Note: since the backend shims should be
+			 * taking a reference on io_to_abort, it should not
+			 * be possible to have been completed and freed by
+			 * the backend before the abort got here.
+			 */
+			scsi_io_printf(io, "IO: not active\n");
+			((efct_hw_done_t)io->hw_cb)(io->hio, NULL, 0,
+					SLI4_FC_WCQE_STATUS_SUCCESS, 0, io);
+			rc = 0;
+		} else {
+			/* HW IO is valid, abort it */
+			scsi_io_printf(io, "aborting\n");
+			rc = efct_hw_io_abort(&io->efct->hw, hio_to_abort,
+					      io->send_abts, io->hw_cb, io);
+			if (rc) {
+				int status = SLI4_FC_WCQE_STATUS_SUCCESS;
+
+				if (rc != EFCT_HW_RTN_IO_NOT_ACTIVE &&
+				    rc != EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+					status = -1;
+					scsi_io_printf(io,
+						       "Failed to abort IO: status=%d\n",
+						rc);
+				}
+				((efct_hw_done_t)io->hw_cb)(io->hio,
+						NULL, 0, status, 0, io);
+				rc = 0;
+			}
+		}
+
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = -1;
+		break;
+	}
+	return rc;
+}
+
+/**
+ * Check for pending IOs to dispatch.
+ *
+ * If there are IOs on the pending list, and a HW IO is available, then
+ * dispatch the IOs.
+ */
+void
+efct_scsi_check_pending(struct efct *efct)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io *io = NULL;
+	struct efct_hw_io *hio;
+	int status;
+	int count = 0;
+	int dispatch;
+	unsigned long flags = 0;
+
+	/* Guard against recursion */
+	if (atomic_add_return(1, &xport->io_pending_recursing) != 1) {
+		/* This function is already running.  Decrement and return. */
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	do {
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		status = 0;
+		hio = NULL;
+		if (!list_empty(&xport->io_pending_list)) {
+			io = list_first_entry(&xport->io_pending_list,
+					      struct efct_io,
+					      io_pending_link);
+		}
+		if (io) {
+			list_del(&io->io_pending_link);
+			if (io->io_type == EFCT_IO_TYPE_ABORT) {
+				hio = NULL;
+			} else {
+				hio = efct_hw_io_alloc(&efct->hw);
+				if (!hio) {
+					/*
+					 * No HW IO available. Put the IO back
+					 * on the front of the pending list.
+					 */
+					list_add(&io->io_pending_link,
+						 &xport->io_pending_list);
+					io = NULL;
+				} else {
+					hio->eq = io->hw_priv;
+				}
+			}
+		}
+		/* Must drop the lock before dispatching the IO */
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		if (io) {
+			count++;
+
+			/*
+			 * We pulled an IO off the pending list,
+			 * and either got an HW IO or don't need one
+			 */
+			atomic_sub_return(1, &xport->io_pending_count);
+			if (!hio)
+				status = efct_scsi_io_dispatch_no_hw_io(io);
+			else
+				status = efct_scsi_io_dispatch_hw_io(io, hio);
+			if (status) {
+				/*
+				 * Invoke the HW callback, but do so in a
+				 * separate execution context, provided by the
+				 * NOP mailbox completion processing context
+				 * by using efct_hw_async_call()
+				 */
+				if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb,
+					io)) {
+					efc_log_test(efct,
+						      "call hw async failed\n");
+				}
+			}
+		}
+	} while (io);
+
+	/*
+	 * If nothing was removed from the list,
+	 * we might be in a case where we need to abort an
+	 * active IO and the abort is on the pending list.
+	 * Look for an abort we can dispatch.
+	 */
+	if (count == 0) {
+		dispatch = 0;
+
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		list_for_each_entry(io, &xport->io_pending_list,
+				    io_pending_link) {
+			if (io->io_type == EFCT_IO_TYPE_ABORT) {
+				if (io->io_to_abort->hio) {
+					/* This IO has a HW IO, so it is
+					 * active.  Dispatch the abort.
+					 */
+					dispatch = 1;
+				} else {
+					/* Leave this abort on the pending
+					 * list and keep looking
+					 */
+					dispatch = 0;
+				}
+			}
+			if (dispatch) {
+				list_del(&io->io_pending_link);
+				atomic_sub_return(1, &xport->io_pending_count);
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		if (dispatch) {
+			status = efct_scsi_io_dispatch_no_hw_io(io);
+			if (status) {
+				if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb,
+					io)) {
+					efc_log_test(efct,
+						      "call to hw async failed\n");
+				}
+			}
+		}
+	}
+
+	atomic_sub_return(1, &xport->io_pending_recursing);
+}
+
+/**
+ * An IO is dispatched:
+ * - if the pending list is not empty, add IO to pending list
+ *   and call a function to process the pending list.
+ * - if pending list is empty, try to allocate a HW IO. If none
+ *   is available, place this IO at the tail of the pending IO
+ *   list.
+ * - if HW IO is available, attach this IO to the HW IO and
+ *   submit it.
+ */
+int
+efct_scsi_io_dispatch(struct efct_io *io, void *cb)
+{
+	struct efct_hw_io *hio;
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * If this IO already has a HW IO, then this is not the
+	 * first phase of the IO. Send it to the HW.
+	 */
+	if (io->hio)
+		return efct_scsi_io_dispatch_hw_io(io, io->hio);
+
+	/*
+	 * We don't already have a HW IO associated with the IO. First check
+	 * the pending list. If not empty, add IO to the tail and process the
+	 * pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+		if (!list_empty(&xport->io_pending_list)) {
+			/*
+			 * If this is a low latency request,
+			 * put it at the front of the IO pending
+			 * queue, otherwise put it at the end of the queue.
+			 */
+			if (io->low_latency) {
+				INIT_LIST_HEAD(&io->io_pending_link);
+				list_add(&io->io_pending_link,
+					 &xport->io_pending_list);
+			} else {
+				INIT_LIST_HEAD(&io->io_pending_link);
+				list_add_tail(&io->io_pending_link,
+					      &xport->io_pending_list);
+			}
+			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+			atomic_add_return(1, &xport->io_pending_count);
+			atomic_add_return(1, &xport->io_total_pending);
+
+			/* process pending list */
+			efct_scsi_check_pending(efct);
+			return EFC_SUCCESS;
+		}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/*
+	 * We don't have a HW IO associated with the IO and there's nothing
+	 * on the pending list. Attempt to allocate a HW IO and dispatch it.
+	 */
+	hio = efct_hw_io_alloc(&io->efct->hw);
+	if (!hio) {
+		/* Couldn't get a HW IO. Save this IO on the pending list */
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		INIT_LIST_HEAD(&io->io_pending_link);
+		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		atomic_add_return(1, &xport->io_total_pending);
+		atomic_add_return(1, &xport->io_pending_count);
+		return EFC_SUCCESS;
+	}
+
+	/* We successfully allocated a HW IO; dispatch to HW */
+	return efct_scsi_io_dispatch_hw_io(io, hio);
+}
+
+/**
+ * An Abort IO is dispatched:
+ * - if the pending list is not empty, add IO to pending list
+ *   and call a function to process the pending list.
+ * - if pending list is empty, send abort to the HW.
+ */
+
+int
+efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb)
+{
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * For aborts, we don't need a HW IO, but we still want
+	 * to pass through the pending list to preserve ordering.
+	 * Thus, if the pending list is not empty, add this abort
+	 * to the pending list and process the pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+		if (!list_empty(&xport->io_pending_list)) {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add_tail(&io->io_pending_link,
+				      &xport->io_pending_list);
+			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+			atomic_add_return(1, &xport->io_pending_count);
+			atomic_add_return(1, &xport->io_total_pending);
+
+			/* process pending list */
+			efct_scsi_check_pending(efct);
+			return EFC_SUCCESS;
+		}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/* nothing on pending list, dispatch abort */
+	return efct_scsi_io_dispatch_no_hw_io(io);
+}
+
+static inline int
+efct_scsi_xfer_data(struct efct_io *io, u32 flags,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 xwire_len,
+	enum efct_hw_io_type type, int enable_ar,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	size_t residual = 0;
+
+	io->sgl_count = sgl_count;
+
+	efct = io->efct;
+
+	scsi_io_trace(io, "%s wire_len %llu\n",
+		      (type == EFCT_HW_IO_TARGET_READ) ? "send" : "recv",
+		      xwire_len);
+
+	io->hio_type = type;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
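+	/* clamp the wire length to the remaining expected transfer;
+	 * any excess is treated as an overrun
+	 */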
+	residual = io->exp_xfer_len - io->transferred;
+	io->wire_len = (xwire_len < residual) ? xwire_len : residual;
+	residual = (xwire_len - io->wire_len);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = io->transferred;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* if this is the last data phase and there is no residual, enable
+	 * auto-good-response
+	 */
+	if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) &&
+	    residual == 0 &&
+		((io->transferred + io->wire_len) == io->exp_xfer_len) &&
+		(!(flags & EFCT_SCSI_NO_AUTO_RESPONSE))) {
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+		io->auto_resp = true;
+	} else {
+		io->auto_resp = false;
+	}
+
+	/* save this transfer length */
+	io->xfer_req = io->wire_len;
+
+	/* Adjust the transferred count to account for overrun
+	 * when the residual is calculated in efct_scsi_send_resp
+	 */
+	io->transferred += residual;
+
+	/* Adjust the SGL size if there is overrun */
+
+	if (residual) {
+		struct efct_scsi_sgl  *sgl_ptr = &io->sgl[sgl_count - 1];
+
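+		/* trim SGEs from the end until the overrun is consumed */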
+		while (residual) {
+			size_t len = sgl_ptr->len;
+
+			if (len > residual) {
+				sgl_ptr->len = len - residual;
+				residual = 0;
+			} else {
+				sgl_ptr->len = 0;
+				residual -= len;
+				io->sgl_count--;
+			}
+			sgl_ptr--;
+		}
+	}
+
+	/* Set latency and WQ steering */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (efct->xport) {
+		struct efct_xport *xport = efct->xport;
+
+		if (type == EFCT_HW_IO_TARGET_READ) {
+			xport->fcp_stats.input_requests++;
+			xport->fcp_stats.input_bytes += xwire_len;
+		} else if (type == EFCT_HW_IO_TARGET_WRITE) {
+			xport->fcp_stats.output_requests++;
+			xport->fcp_stats.output_bytes += xwire_len;
+		}
+	}
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
+
+int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, sgl, sgl_count,
+				 len, EFCT_HW_IO_TARGET_READ,
+				 enable_tsend_auto_resp(io->efct), cb, arg);
+}
+
+int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, sgl, sgl_count, len,
+				 EFCT_HW_IO_TARGET_WRITE,
+				 enable_treceive_auto_resp(io->efct), cb, arg);
+}
+
+int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+		    struct efct_scsi_cmd_resp *rsp,
+		   efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	int residual;
+	bool auto_resp = true;		/* Always try auto resp */
+	u8 scsi_status = 0;
+	u16 scsi_status_qualifier = 0;
+	u8 *sense_data = NULL;
+	u32 sense_data_length = 0;
+
+	efct = io->efct;
+
+	if (rsp) {
+		scsi_status = rsp->scsi_status;
+		scsi_status_qualifier = rsp->scsi_status_qualifier;
+		sense_data = rsp->sense_data;
+		sense_data_length = rsp->sense_data_length;
+		residual = rsp->residual;
+	} else {
+		residual = io->exp_xfer_len - io->transferred;
+	}
+
+	io->wire_len = 0;
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* Set low latency queueing request */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (scsi_status != 0 || residual || sense_data_length) {
+		struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
+		u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
+
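+		/* need an explicit FCP_RSP; cannot use auto-good-response */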
+		if (!fcprsp) {
+			efc_log_err(efct, "NULL response buffer\n");
+			return EFC_FAIL;
+		}
+
+		auto_resp = false;
+
+		memset(fcprsp, 0, sizeof(*fcprsp));
+
+		io->wire_len += sizeof(*fcprsp);
+
+		fcprsp->resp.fr_status = scsi_status;
+		fcprsp->resp.fr_retry_delay =
+			cpu_to_be16(scsi_status_qualifier);
+
+		/* set residual status if necessary */
+		if (residual != 0) {
+			/* FCP: if data transferred is less than the
+			 * amount expected, then this is an underflow.
+			 * If data transferred would have been greater
+			 * than the amount expected this is an overflow
+			 */
+			if (residual > 0) {
+				fcprsp->resp.fr_flags |= FCP_RESID_UNDER;
+				fcprsp->ext.fr_resid =	cpu_to_be32(residual);
+			} else {
+				fcprsp->resp.fr_flags |= FCP_RESID_OVER;
+				fcprsp->ext.fr_resid = cpu_to_be32(-residual);
+			}
+		}
+
+		if (EFCT_SCSI_SNS_BUF_VALID(sense_data) && sense_data_length) {
+			if (sense_data_length > SCSI_SENSE_BUFFERSIZE) {
+				efc_log_err(efct, "Sense exceeds max size.\n");
+				return EFC_FAIL;
+			}
+
+			fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
+			memcpy(sns_data, sense_data, sense_data_length);
+			fcprsp->ext.fr_sns_len = cpu_to_be32(sense_data_length);
+			io->wire_len += sense_data_length;
+		}
+
+		io->sgl[0].addr = io->rspbuf.phys;
+		io->sgl[0].len = io->wire_len;
+		io->sgl_count = 1;
+	}
+
+	if (auto_resp)
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
+
+static int
+efct_target_bls_resp_cb(struct efct_hw_io *hio,
+			struct efc_remote_node *rnode,
+	u32 length, int status, u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status bls_status;
+
+	efct = io->efct;
+
+	/* BLS isn't really a "SCSI" concept, but use SCSI status */
+	if (status) {
+		io_error_log(io, "s=%#x x=%#x\n", status, ext_status);
+		bls_status = EFCT_SCSI_STATUS_ERROR;
+	} else {
+		bls_status = EFCT_SCSI_STATUS_GOOD;
+	}
+
+	if (io->bls_cb) {
+		efct_scsi_io_cb_t bls_cb = io->bls_cb;
+		void *bls_cb_arg = io->bls_cb_arg;
+
+		io->bls_cb = NULL;
+		io->bls_cb_arg = NULL;
+
+		/* invoke callback */
+		bls_cb(io, bls_status, 0, bls_cb_arg);
+	}
+
+	efct_scsi_check_pending(efct);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_target_send_bls_resp(struct efct_io *io,
+			  efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc;
+	struct fc_ba_acc *acc;
+
+	/* fill out IO structure with everything needed to send BA_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.bls.ox_id = io->init_task_tag;
+	io->iparam.bls.rx_id = io->abort_rx_id;
+
+	acc = (void *)io->iparam.bls.payload;
+
+	memset(io->iparam.bls.payload, 0,
+	       sizeof(io->iparam.bls.payload));
+	acc->ba_ox_id = cpu_to_be16(io->iparam.bls.ox_id);
+	acc->ba_rx_id = cpu_to_be16(io->iparam.bls.rx_id);
+	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	/* generic io fields have already been populated */
+
+	/* set type and BLS-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "bls_rsp";
+	io->hio_type = EFCT_HW_BLS_ACC;
+	io->bls_cb = cb;
+	io->bls_cb_arg = arg;
+
+	/* dispatch IO */
+	rc = efct_scsi_io_dispatch(io, efct_target_bls_resp_cb);
+	return rc;
+}
+
+int
+efct_scsi_send_tmf_resp(struct efct_io *io,
+			enum efct_scsi_tmf_resp rspcode,
+			u8 addl_rsp_info[3],
+			efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc = -1;
+	struct fcp_resp_with_ext *fcprsp = NULL;
+	struct fcp_resp_rsp_info *rspinfo = NULL;
+	u8 fcp_rspcode;
+
+	io->wire_len = 0;
+
+	switch (rspcode) {
+	case EFCT_SCSI_TMF_FUNCTION_COMPLETE:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_SUCCEEDED:
+	case EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_REJECTED:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	case EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER:
+		fcp_rspcode = FCP_TMF_INVALID_LUN;
+		break;
+	case EFCT_SCSI_TMF_SERVICE_DELIVERY:
+		fcp_rspcode = FCP_TMF_FAILED;
+		break;
+	default:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	}
+
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
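+	/* an ABORT_TASK TMF is completed with a BLS BA_ACC, not an FCP_RSP */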
+	if (io->tmf_cmd == EFCT_SCSI_TMF_ABORT_TASK) {
+		rc = efct_target_send_bls_resp(io, cb, arg);
+		return rc;
+	}
+
+	/* populate the FCP TMF response */
+	fcprsp = io->rspbuf.virt;
+	memset(fcprsp, 0, sizeof(*fcprsp));
+
+	fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
+
+	rspinfo = io->rspbuf.virt + sizeof(*fcprsp);
+	if (addl_rsp_info) {
+		memcpy(rspinfo->_fr_resvd, addl_rsp_info,
+		       sizeof(rspinfo->_fr_resvd));
+	}
+	rspinfo->rsp_code = fcp_rspcode;
+
+	io->wire_len = sizeof(*fcprsp) + sizeof(*rspinfo);
+
+	fcprsp->ext.fr_rsp_len = cpu_to_be32(sizeof(*rspinfo));
+
+	io->sgl[0].addr = io->rspbuf.phys;
+	io->sgl[0].dif_addr = 0;
+	io->sgl[0].len = io->wire_len;
+	io->sgl_count = 1;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	rc = efct_scsi_io_dispatch(io, efct_target_io_cb);
+
+	return rc;
+}
+
+static int
+efct_target_abort_cb(struct efct_hw_io *hio,
+		     struct efc_remote_node *rnode,
+		     u32 length, int status,
+		     u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status scsi_status;
+
+	efct = io->efct;
+
+	if (io->abort_cb) {
+		efct_scsi_io_cb_t abort_cb = io->abort_cb;
+		void *abort_cb_arg = io->abort_cb_arg;
+
+		io->abort_cb = NULL;
+		io->abort_cb_arg = NULL;
+
+		switch (status) {
+		case SLI4_FC_WCQE_STATUS_SUCCESS:
+			scsi_status = EFCT_SCSI_STATUS_GOOD;
+			break;
+		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+			switch (ext_status) {
+			case SLI4_FC_LOCAL_REJECT_NO_XRI:
+				scsi_status = EFCT_SCSI_STATUS_NO_IO;
+				break;
+			case SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS:
+				scsi_status =
+					EFCT_SCSI_STATUS_ABORT_IN_PROGRESS;
+				break;
+			default:
+				/* we have seen 0x15 (abort in progress) */
+				scsi_status = EFCT_SCSI_STATUS_ERROR;
+				break;
+			}
+			break;
+		case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+			scsi_status = EFCT_SCSI_STATUS_CHECK_RESPONSE;
+			break;
+		default:
+			scsi_status = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+		/* invoke callback */
+		abort_cb(io->io_to_abort, scsi_status, 0, abort_cb_arg);
+	}
+
+	/* done with the IO to abort; drop the efct_scsi_tgt_abort_io() ref */
+	kref_put(&io->io_to_abort->ref, io->io_to_abort->release);
+
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+
+	efct_scsi_check_pending(efct);
+	return EFC_SUCCESS;
+}
+
+int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	int rc;
+	struct efct_io *abort_io = NULL;
+
+	efct = io->efct;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if ((kref_get_unless_zero(&io->ref) == 0)) {
+		/* command no longer active */
+		scsi_io_printf(io, "command no longer active\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * allocate a new IO to send the abort request. Use efct_io_alloc()
+	 * directly, as we need an IO object that will not fail allocation
+	 * due to allocations being disabled (in efct_scsi_io_alloc())
+	 */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		kref_put(&io->ref, io->release);
+		return EFC_FAIL;
+	}
+
+	/* Save the target server callback and argument */
+	/* set generic fields */
+	abort_io->cmd_tgt = true;
+	abort_io->node = io->node;
+
+	/* set type and abort-specific fields */
+	abort_io->io_type = EFCT_IO_TYPE_ABORT;
+	abort_io->display_name = "tgt_abort";
+	abort_io->io_to_abort = io;
+	abort_io->send_abts = false;
+	abort_io->abort_cb = cb;
+	abort_io->abort_cb_arg = arg;
+
+	/* now dispatch IO */
+	rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb);
+	if (rc)
+		kref_put(&io->ref, io->release);
+	return rc;
+}
+
+void
+efct_scsi_io_complete(struct efct_io *io)
+{
+	if (io->io_free) {
+		efc_log_test(io->efct,
+			      "Got completion for non-busy io with tag 0x%x\n",
+		    io->tag);
+		return;
+	}
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	kref_put(&io->ref, io->release);
+}
diff --git a/drivers/scsi/elx/efct/efct_scsi.h b/drivers/scsi/elx/efct/efct_scsi.h
new file mode 100644
index 000000000000..28204c5fde69
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.h
@@ -0,0 +1,235 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_SCSI_H__)
+#define __EFCT_SCSI_H__
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_fc.h>
+
+/* efct_scsi_rcv_cmd() efct_scsi_rcv_tmf() flags */
+#define EFCT_SCSI_CMD_DIR_IN		(1 << 0)
+#define EFCT_SCSI_CMD_DIR_OUT		(1 << 1)
+#define EFCT_SCSI_CMD_SIMPLE		(1 << 2)
+#define EFCT_SCSI_CMD_HEAD_OF_QUEUE	(1 << 3)
+#define EFCT_SCSI_CMD_ORDERED		(1 << 4)
+#define EFCT_SCSI_CMD_UNTAGGED		(1 << 5)
+#define EFCT_SCSI_CMD_ACA		(1 << 6)
+#define EFCT_SCSI_FIRST_BURST_ERR	(1 << 7)
+#define EFCT_SCSI_FIRST_BURST_ABORTED	(1 << 8)
+
+/* efct_scsi_send_rd_data/recv_wr_data/send_resp flags */
+#define EFCT_SCSI_LAST_DATAPHASE	(1 << 0)
+#define EFCT_SCSI_NO_AUTO_RESPONSE	(1 << 1)
+#define EFCT_SCSI_LOW_LATENCY		(1 << 2)
+
+#define EFCT_SCSI_SNS_BUF_VALID(sense)	((sense) && \
+			(0x70 == (((const u8 *)(sense))[0] & 0x70)))
+
+#define EFCT_SCSI_WQ_STEERING_SHIFT	16
+#define EFCT_SCSI_WQ_STEERING_MASK	(0xf << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CLASS	(0 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_REQUEST	(1 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CPU	(2 << EFCT_SCSI_WQ_STEERING_SHIFT)
+
+#define EFCT_SCSI_WQ_CLASS_SHIFT		(20)
+#define EFCT_SCSI_WQ_CLASS_MASK		(0xf << EFCT_SCSI_WQ_CLASS_SHIFT)
+#define EFCT_SCSI_WQ_CLASS(x)		(((x) << EFCT_SCSI_WQ_CLASS_SHIFT) & \
+						EFCT_SCSI_WQ_CLASS_MASK)
+
+#define EFCT_SCSI_WQ_CLASS_LOW_LATENCY	1
+
+struct efct_scsi_cmd_resp {
+	u8 scsi_status;			/* SCSI status */
+	u16 scsi_status_qualifier;	/* SCSI status qualifier */
+	/* pointer to response data buffer */
+	u8 *response_data;
+	/* length of response data buffer (bytes) */
+	u32 response_data_length;
+	u8 *sense_data;		/* pointer to sense data buffer */
+	/* length of sense data buffer (bytes) */
+	u32 sense_data_length;
+	/* command residual (not used for target), positive value
+	 * indicates an underflow, negative value indicates overflow
+	 */
+	int residual;
+	/* Command response length received in wcqe */
+	u32 response_wire_length;
+};
+
+struct efct_vport {
+	struct efct		*efct;
+	bool			is_vport;
+	struct fc_host_statistics fc_host_stats;
+	struct Scsi_Host	*shost;
+	struct fc_vport		*fc_vport;
+	u64			npiv_wwpn;
+	u64			npiv_wwnn;
+};
+
+/* Status values returned by IO callbacks */
+enum efct_scsi_io_status {
+	EFCT_SCSI_STATUS_GOOD = 0,
+	EFCT_SCSI_STATUS_ABORTED,
+	EFCT_SCSI_STATUS_ERROR,
+	EFCT_SCSI_STATUS_DIF_GUARD_ERR,
+	EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR,
+	EFCT_SCSI_STATUS_PROTOCOL_CRC_ERROR,
+	EFCT_SCSI_STATUS_NO_IO,
+	EFCT_SCSI_STATUS_ABORT_IN_PROGRESS,
+	EFCT_SCSI_STATUS_CHECK_RESPONSE,
+	EFCT_SCSI_STATUS_COMMAND_TIMEOUT,
+	EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED,
+	EFCT_SCSI_STATUS_SHUTDOWN,
+	EFCT_SCSI_STATUS_NEXUS_LOST,
+};
+
+struct efct_io;
+struct efc_node;
+struct efc_domain;
+struct efc_sli_port;
+
+/* Callback used by send_rd_data(), recv_wr_data(), send_resp() */
+typedef int (*efct_scsi_io_cb_t)(struct efct_io *io,
+				    enum efct_scsi_io_status status,
+				    u32 flags, void *arg);
+
+/* Callback used by send_rd_io(), send_wr_io() */
+typedef int (*efct_scsi_rsp_io_cb_t)(struct efct_io *io,
+			enum efct_scsi_io_status status,
+			struct efct_scsi_cmd_resp *rsp,
+			u32 flags, void *arg);
+
+/* efct_scsi_cb_t flags */
+#define EFCT_SCSI_IO_CMPL		(1 << 0)
+/* IO completed, response sent */
+#define EFCT_SCSI_IO_CMPL_RSP_SENT	(1 << 1)
+#define EFCT_SCSI_IO_ABORTED		(1 << 2)
+
+/* efct_scsi_recv_tmf() request values */
+enum efct_scsi_tmf_cmd {
+	EFCT_SCSI_TMF_ABORT_TASK = 1,
+	EFCT_SCSI_TMF_QUERY_TASK_SET,
+	EFCT_SCSI_TMF_ABORT_TASK_SET,
+	EFCT_SCSI_TMF_CLEAR_TASK_SET,
+	EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT,
+	EFCT_SCSI_TMF_LOGICAL_UNIT_RESET,
+	EFCT_SCSI_TMF_CLEAR_ACA,
+	EFCT_SCSI_TMF_TARGET_RESET,
+};
+
+/* efct_scsi_send_tmf_resp() response values */
+enum efct_scsi_tmf_resp {
+	EFCT_SCSI_TMF_FUNCTION_COMPLETE = 1,
+	EFCT_SCSI_TMF_FUNCTION_SUCCEEDED,
+	EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND,
+	EFCT_SCSI_TMF_FUNCTION_REJECTED,
+	EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER,
+	EFCT_SCSI_TMF_SERVICE_DELIVERY,
+};
+
+struct efct_scsi_sgl {
+	uintptr_t	addr;
+	uintptr_t	dif_addr;
+	size_t		len;
+};
+
+/* Return values for calls from base driver to libefc */
+#define EFCT_SCSI_CALL_COMPLETE	0 /* All work is done */
+#define EFCT_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
+
+enum efct_scsi_io_role {
+	EFCT_SCSI_IO_ROLE_ORIGINATOR,
+	EFCT_SCSI_IO_ROLE_RESPONDER,
+};
+
+void efct_scsi_io_alloc_enable(struct efc *efc, struct efc_node *node);
+void efct_scsi_io_alloc_disable(struct efc *efc, struct efc_node *node);
+extern struct efct_io *
+efct_scsi_io_alloc(struct efc_node *node, enum efct_scsi_io_role role);
+void efct_scsi_io_free(struct efct_io *io);
+struct efct_io *efct_io_get_instance(struct efct *efct, u32 index);
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+int efct_scsi_tgt_new_device(struct efct *efct);
+int efct_scsi_tgt_del_device(struct efct *efct);
+int
+efct_scsi_tgt_new_domain(struct efc *efc, struct efc_domain *domain);
+void
+efct_scsi_tgt_del_domain(struct efc *efc, struct efc_domain *domain);
+int
+efct_scsi_tgt_new_sport(struct efc *efc, struct efc_sli_port *sport);
+void
+efct_scsi_tgt_del_sport(struct efc *efc, struct efc_sli_port *sport);
+int
+efct_scsi_validate_initiator(struct efc *efc, struct efc_node *node);
+int
+efct_scsi_new_initiator(struct efc *efc, struct efc_node *node);
+
+enum efct_scsi_del_initiator_reason {
+	EFCT_SCSI_INITIATOR_DELETED,
+	EFCT_SCSI_INITIATOR_MISSING,
+};
+
+extern int
+efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
+			int reason);
+extern int
+efct_scsi_recv_cmd(struct efct_io *io, u64 lun, u8 *cdb,
+		   u32 cdb_len, u32 flags);
+extern int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
+		   enum efct_scsi_tmf_cmd cmd, struct efct_io *abortio,
+		  u32 flags);
+
+extern int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
+		      struct efct_scsi_sgl *sgl, u32 sgl_count,
+		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
+		      struct efct_scsi_sgl *sgl, u32 sgl_count,
+		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+		    struct efct_scsi_cmd_resp *rsp, efct_scsi_io_cb_t cb,
+		   void *arg);
+extern int
+efct_scsi_send_tmf_resp(struct efct_io *io,
+			enum efct_scsi_tmf_resp rspcode,
+		       u8 addl_rsp_info[3],
+		       efct_scsi_io_cb_t cb, void *arg);
+extern int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg);
+
+void efct_scsi_io_complete(struct efct_io *io);
+
+int efct_scsi_reg_fc_transport(void);
+int efct_scsi_release_fc_transport(void);
+int efct_scsi_new_device(struct efct *efct);
+int efct_scsi_del_device(struct efct *efct);
+void _efct_scsi_io_free(struct kref *arg);
+
+int efct_scsi_send_tmf(struct efc_node *node,
+		       struct efct_io *io,
+		       struct efct_io *io_to_abort, u32 lun,
+		       enum efct_scsi_tmf_cmd tmf,
+		       struct efct_scsi_sgl *sgl,
+		       u32 sgl_count, u32 len,
+		       efct_scsi_rsp_io_cb_t cb, void *arg);
+
+extern int
+efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost);
+extern struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev);
+
+int efct_scsi_io_dispatch(struct efct_io *io, void *cb);
+int efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb);
+void efct_scsi_check_pending(struct efct *efct);
+
+#endif /* __EFCT_SCSI_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (22 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 23/31] elx: efct: SCSI IO handling routines James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-12  4:57   ` Bart Van Assche
                     ` (2 more replies)
  2020-04-12  3:32 ` [PATCH v3 25/31] elx: efct: Hardware IO submission routines James Smart
                   ` (6 subsequent siblings)
  30 siblings, 3 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
LIO backend template registration and template functions.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Fixed as per the review comments.
  Removed the vport pending list. The pending list is now tracked via the
    sport assigned to the vport.
---
 drivers/scsi/elx/efct/efct_lio.c | 1840 ++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_lio.h |  178 ++++
 2 files changed, 2018 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h
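
Two notes that may help while reviewing efct_lio.c:

The per-command target state (struct efct_scsi_tgt_io) is embedded in
struct efct_io, and the LIO callbacks recover both from the se_cmd
pointer handed back by the core. A minimal sketch of that pattern
follows; the helper name is illustrative only and does not exist in
the patch:

	static struct efct_io *efct_io_from_se_cmd(struct se_cmd *se_cmd)
	{
		struct efct_scsi_tgt_io *ocp =
			container_of(se_cmd, struct efct_scsi_tgt_io, cmd);

		/* tgt_io is the efct_scsi_tgt_io embedded in efct_io */
		return container_of(ocp, struct efct_io, tgt_io);
	}

The configfs WWN directory names are parsed by efct_lio_parse_wwn()
and efct_lio_parse_npiv_wwn(): the physical port takes the
colon-separated form (e.g. 10:00:00:90:fa:00:00:01), while the NPIV
fabric takes "<physical wwpn>@<npiv wwpn>:<npiv wwnn>" with the two
NPIV values given as plain 16-digit hex strings. The WWPN value above
is an example only.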

diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
new file mode 100644
index 000000000000..c784ef9dbbee
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.c
@@ -0,0 +1,1840 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+#include "efct_driver.h"
+#include "efct_lio.h"
+
+static struct workqueue_struct *lio_wq;
+
+static int
+efct_format_wwn(char *str, size_t len, const char *pre, u64 wwn)
+{
+	u8 a[8];
+
+	put_unaligned_be64(wwn, a);
+	return snprintf(str, len, "%s%8phC", pre, a);
+}
+
+static int
+efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
+{
+	int num;
+	u8 b[8];
+
+	if (npiv) {
+		num = sscanf(name,
+			     "%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx",
+			     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6],
+			     &b[7]);
+	} else {
+		num = sscanf(name,
+		      "%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx",
+			     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6],
+			     &b[7]);
+	}
+
+	if (num != 8)
+		return -EINVAL;
+
+	*wwp = get_unaligned_be64(b);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn)
+{
+	unsigned int cnt = size;
+	int rc;
+
+	*wwpn = *wwnn = 0;
+	if (name[cnt - 1] == '\n' || name[cnt - 1] == 0)
+		cnt--;
+
+	/* validate we have enough characters for WWPN */
+	if ((cnt != (16 + 1 + 16)) || (name[16] != ':'))
+		return -EINVAL;
+
+	rc = efct_lio_parse_wwn(&name[0], wwpn, 1);
+	if (rc)
+		return rc;
+
+	rc = efct_lio_parse_wwn(&name[17], wwnn, 1);
+	if (rc)
+		return rc;
+
+	return EFC_SUCCESS;
+}
+
+static ssize_t
+efct_lio_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
+static ssize_t
+efct_lio_tpg_enable_store(struct config_item *item, const char *page,
+			  size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+	struct efct *efct;
+	struct efc *efc;
+	unsigned long op;
+	int ret;
+
+	if (!tpg->sport || !tpg->sport->efct) {
+		pr_err("%s: Unable to find EFCT device\n", __func__);
+		return -EINVAL;
+	}
+
+	efct = tpg->sport->efct;
+	efc = efct->efcport;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		ret = efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE);
+		if (ret) {
+			efct->tgt_efct.lio_sport = NULL;
+			efc_log_test(efct, "cannot bring port online\n");
+			return ret;
+		}
+	} else if (op == 0) {
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain && efc->domain->sport)
+			efct_scsi_tgt_del_sport(efc, efc->domain->sport);
+
+		atomic_set(&tpg->enabled, 0);
+	} else {
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+static ssize_t
+efct_lio_npiv_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
+static ssize_t
+efct_lio_npiv_tpg_enable_store(struct config_item *item, const char *page,
+			       size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+	struct efct_lio_vport *lio_vport = tpg->vport;
+	struct efct *efct;
+	struct efc *efc;
+	int ret;
+	unsigned long op;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (!lio_vport) {
+		pr_err("Unable to find vport\n");
+		return -EINVAL;
+	}
+
+	efct = lio_vport->efct;
+	efc = efct->efcport;
+
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain) {
+			ret = efc_sport_vport_new(efc->domain,
+						  lio_vport->npiv_wwpn,
+						  lio_vport->npiv_wwnn,
+						  U32_MAX, false, true,
+						  NULL, NULL);
+			if (ret != 0) {
+				efc_log_err(efct, "Failed to create Vport\n");
+				return ret;
+			}
+			return count;
+		}
+
+		if (!(efc_vport_create_spec(efc, lio_vport->npiv_wwnn,
+					    lio_vport->npiv_wwpn, U32_MAX,
+					    false, true, NULL, NULL)))
+			return -ENOMEM;
+
+	} else if (op == 0) {
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		atomic_set(&tpg->enabled, 0);
+		/* delete the NPIV vport that was created when this
+		 * portal group was enabled
+		 */
+		if (efc->domain) {
+			efc_sport_vport_del(efct->efcport, efc->domain,
+					    lio_vport->npiv_wwpn,
+					    lio_vport->npiv_wwnn);
+			return count;
+		}
+	} else {
+		return -EINVAL;
+	}
+	return count;
+}
+
+static bool efct_lio_node_is_initiator(struct efc_node *node)
+{
+	if (!node)
+		return false;
+
+	if (node->rnode.fc_id && node->rnode.fc_id != FC_FID_FLOGI &&
+	    node->rnode.fc_id != FC_FID_DIR_SERV &&
+	    node->rnode.fc_id != FC_FID_FCTRL) {
+		return true;
+	}
+
+	return false;
+}
+
+static int  efct_lio_tgt_session_data(struct efct *efct, u64 wwpn,
+				      char *buf, int size)
+{
+	struct efc_sli_port *sport = NULL;
+	struct efc_node *node = NULL;
+	struct efc *efc = efct->efcport;
+	u16 loop_id = 0;
+	int off = 0, rc = 0;
+
+	if (!efc->domain) {
+		efc_log_err(efct, "failed to find efct/domain\n");
+		return EFC_FAIL;
+	}
+
+	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
+		if (sport->wwpn != wwpn)
+			continue;
+		list_for_each_entry(node, &sport->node_list,
+				    list_entry) {
+			/* Dump only remote NPORT sessions */
+			if (!efct_lio_node_is_initiator(node))
+				continue;
+
+			rc = snprintf(buf + off, size - off,
+				"0x%016llx,0x%08x,0x%04x\n",
+				get_unaligned_be64(node->wwpn),
+				node->rnode.fc_id, loop_id);
+			if (rc < 0)
+				break;
+			off += rc;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int efct_debugfs_session_open(struct inode *inode, struct file *filp)
+{
+	struct efct_lio_sport *sport = inode->i_private;
+	int size = 17 * PAGE_SIZE; /* 34 bytes per session * 2048 sessions */
+
+	if (!(filp->f_mode & FMODE_READ)) {
+		filp->private_data = sport;
+		return EFC_SUCCESS;
+	}
+
+	filp->private_data = kmalloc(size, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
+	memset(filp->private_data, 0, size);
+	efct_lio_tgt_session_data(sport->efct, sport->wwpn, filp->private_data,
+				  size);
+	return EFC_SUCCESS;
+}
+
+static int efct_debugfs_session_close(struct inode *inode, struct file *filp)
+{
+	if (filp->f_mode & FMODE_READ)
+		kfree(filp->private_data);
+
+	return EFC_SUCCESS;
+}
+
+static ssize_t efct_debugfs_session_read(struct file *filp, char __user *buf,
+					 size_t count, loff_t *ppos)
+{
+	if (!(filp->f_mode & FMODE_READ))
+		return -EPERM;
+	return simple_read_from_buffer(buf, count, ppos, filp->private_data,
+				       strlen(filp->private_data));
+}
+
+static int efct_npiv_debugfs_session_open(struct inode *inode,
+					  struct file *filp)
+{
+	struct efct_lio_vport *sport = inode->i_private;
+	int size = 17 * PAGE_SIZE; /* 34 bytes per session * 2048 sessions */
+
+	if (!(filp->f_mode & FMODE_READ)) {
+		filp->private_data = sport;
+		return EFC_SUCCESS;
+	}
+
+	filp->private_data = kmalloc(size, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
+	memset(filp->private_data, 0, size);
+	efct_lio_tgt_session_data(sport->efct, sport->npiv_wwpn,
+				  filp->private_data, size);
+	return EFC_SUCCESS;
+}
+
+static const struct file_operations efct_debugfs_session_fops = {
+	.owner		= THIS_MODULE,
+	.open		= efct_debugfs_session_open,
+	.release	= efct_debugfs_session_close,
+	.read		= efct_debugfs_session_read,
+	.llseek		= default_llseek,
+};
+
+static const struct file_operations efct_npiv_debugfs_session_fops = {
+	.owner		= THIS_MODULE,
+	.open		= efct_npiv_debugfs_session_open,
+	.release	= efct_debugfs_session_close,
+	.read		= efct_debugfs_session_read,
+	.llseek		= default_llseek,
+};
+
+static char *efct_lio_get_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->sport->wwpn_str;
+}
+
+static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->vport->wwpn_str;
+}
+
+static u16 efct_lio_get_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static u16 efct_lio_get_npiv_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static int efct_lio_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int efct_lio_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	return EFC_SUCCESS;
+}
+
+static int efct_lio_check_stop_free(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_CHK_STOP_FREE);
+	return target_put_sess_cmd(se_cmd);
+}
+
+static int
+efct_lio_abort_tgt_cb(struct efct_io *io,
+		      enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_io_printf(io, "%s\n", __func__);
+	return EFC_SUCCESS;
+}
+
+/* command has been aborted, cleanup here */
+static void efct_lio_aborted_task(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_ABORTED_TASK);
+
+	if (!(se_cmd->transport_state & CMD_T_ABORTED) || ocp->rsp_sent)
+		return;
+
+	ocp->aborting = true;
+	ocp->err = EFCT_SCSI_STATUS_ABORTED;
+	/* terminate the exchange */
+	efct_scsi_tgt_abort_io(io, efct_lio_abort_tgt_cb, NULL);
+}
+
+/* Called when se_cmd's ref count goes to 0 */
+static void efct_lio_release_cmd(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct *efct = io->efct;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_RELEASE_CMD);
+	efct_scsi_io_complete(io);
+	atomic_sub_return(1, &efct->tgt_efct.ios_in_use);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
+}
+
+static void efct_lio_close_session(struct se_session *se_sess)
+{
+	struct efc_node *node = se_sess->fabric_sess_ptr;
+	struct efct *efct = NULL;
+	int rc;
+
+	pr_debug("se_sess=%p node=%p\n", se_sess, node);
+
+	if (!node) {
+		pr_debug("node is NULL\n");
+		return;
+	}
+
+	efct = node->efc->base;
+	rc = efct_xport_control(efct->xport,
+				EFCT_XPORT_POST_NODE_EVENT, node,
+		EFCT_XPORT_SHUTDOWN, NULL);
+	if (rc != 0) {
+		efc_log_test(efct,
+			      "Failed to shutdown session %p node %p\n",
+			     se_sess, node);
+		return;
+	}
+}
+
+static u32 efct_lio_sess_get_index(struct se_session *se_sess)
+{
+	return EFC_SUCCESS;
+}
+
+static void efct_lio_set_default_node_attrs(struct se_node_acl *nacl)
+{
+}
+
+static int efct_lio_get_cmd_state(struct se_cmd *se_cmd)
+{
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_sg_map(struct efct_io *io)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	ocp->seg_map_cnt = pci_map_sg(io->efct->pcidev, cmd->t_data_sg,
+				      cmd->t_data_nents, cmd->data_direction);
+	if (ocp->seg_map_cnt == 0)
+		return -EFAULT;
+	return EFC_SUCCESS;
+}
+
+static void
+efct_lio_sg_unmap(struct efct_io *io)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	if (WARN_ON(!ocp->seg_map_cnt || !cmd->t_data_sg))
+		return;
+
+	pci_unmap_sg(io->efct->pcidev, cmd->t_data_sg,
+		     ocp->seg_map_cnt, cmd->data_direction);
+	ocp->seg_map_cnt = 0;
+}
+
+static int
+efct_lio_status_done(struct efct_io *io,
+		     enum efct_scsi_io_status scsi_status,
+		     u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RSP_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	if (ocp->seg_map_cnt)
+		efct_lio_sg_unmap(io);
+
+	efct_lio_io_printf(io, "status=%d, err=%d flags=0x%x, dir=%d\n",
+				scsi_status, ocp->err, flags, ocp->ddir);
+
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_datamove_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		       u32 flags, void *arg);
+
+static int
+efct_lio_write_pending(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct_scsi_sgl *sgl = io->sgl;
+	struct scatterlist *sg;
+	u32 flags = 0, cnt, curcnt;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_WRITE_PENDING);
+	efct_lio_io_printf(io, "trans_state=0x%x se_cmd_flags=0x%x\n",
+			  cmd->transport_state, cmd->se_cmd_flags);
+
+	if (ocp->seg_cnt == 0) {
+		ocp->seg_cnt = cmd->t_data_nents;
+		ocp->cur_seg = 0;
+		if (efct_lio_sg_map(io)) {
+			efct_lio_io_printf(io, "efct_lio_sg_map failed\n");
+			return -EFAULT;
+		}
+	}
+	curcnt = (ocp->seg_map_cnt - ocp->cur_seg);
+	curcnt = (curcnt < io->sgl_allocated) ? curcnt : io->sgl_allocated;
+	/* find current sg */
+	for (cnt = 0, sg = cmd->t_data_sg; cnt < ocp->cur_seg; cnt++,
+	     sg = sg_next(sg))
+		;
+
+	for (cnt = 0; cnt < curcnt; cnt++, sg = sg_next(sg)) {
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		sgl[cnt].len = sg_dma_len(sg);
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+	}
+	if (ocp->cur_seg == ocp->seg_cnt)
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+	rc = efct_scsi_recv_wr_data(io, flags, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	return rc;
+}
+
+static int
+efct_lio_queue_data_in(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct_scsi_sgl *sgl = io->sgl;
+	struct scatterlist *sg = NULL;
+	uint flags = 0, cnt = 0, curcnt = 0;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_DATA_IN);
+
+	if (ocp->seg_cnt == 0) {
+		if (cmd->data_length) {
+			ocp->seg_cnt = cmd->t_data_nents;
+			ocp->cur_seg = 0;
+			if (efct_lio_sg_map(io)) {
+				efct_lio_io_printf(io,
+						   "efct_lio_sg_map failed\n");
+				return -EAGAIN;
+			}
+		} else {
+			/* If command length is 0, send the response status */
+			struct efct_scsi_cmd_resp rsp;
+
+			memset(&rsp, 0, sizeof(rsp));
+			efct_lio_io_printf(io,
+					   "cmd : %p length 0, send status\n",
+					   cmd);
+			return efct_scsi_send_resp(io, 0, &rsp,
+						  efct_lio_status_done, NULL);
+		}
+	}
+	curcnt = min(ocp->seg_map_cnt - ocp->cur_seg, io->sgl_allocated);
+
+	while (cnt < curcnt) {
+		sg = &cmd->t_data_sg[ocp->cur_seg];
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		if (ocp->transferred_len + sg_dma_len(sg) >= cmd->data_length)
+			sgl[cnt].len = cmd->data_length - ocp->transferred_len;
+		else
+			sgl[cnt].len = sg_dma_len(sg);
+
+		ocp->transferred_len += sgl[cnt].len;
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+		cnt++;
+		if (ocp->transferred_len == cmd->data_length)
+			break;
+	}
+
+	if (ocp->transferred_len == cmd->data_length) {
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+		ocp->seg_cnt = ocp->cur_seg;
+	}
+
+	/* If there is residual, disable Auto Good Response */
+	if (cmd->residual_count)
+		flags |= EFCT_SCSI_NO_AUTO_RESPONSE;
+
+	rc = efct_scsi_send_rd_data(io, flags, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RD_DATA);
+	return rc;
+}
+
+static int
+efct_lio_datamove_done(struct efct_io *io,
+		       enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &io->tgt_io.cmd;
+	int rc;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_DATA_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	efct_lio_io_printf(io, "seg_map_cnt=%d\n", ocp->seg_map_cnt);
+	if (ocp->seg_map_cnt) {
+		if (ocp->err == EFCT_SCSI_STATUS_GOOD &&
+		    ocp->cur_seg < ocp->seg_cnt) {
+			efct_lio_io_printf(io, "continuing cmd at segm=%d\n",
+					  ocp->cur_seg);
+			if (ocp->ddir == DMA_TO_DEVICE)
+				rc = efct_lio_write_pending(&ocp->cmd);
+			else
+				rc = efct_lio_queue_data_in(&ocp->cmd);
+			if (rc == 0)
+				return EFC_SUCCESS;
+			ocp->err = EFCT_SCSI_STATUS_ERROR;
+			efct_lio_io_printf(io, "could not continue command\n");
+		}
+		efct_lio_sg_unmap(io);
+	}
+
+	if (io->tgt_io.aborting) {
+		efct_lio_io_printf(io, "IO done aborted\n");
+		return EFC_SUCCESS;
+	}
+
+	if (ocp->ddir == DMA_TO_DEVICE) {
+		efct_lio_io_printf(io, "Write done, trans_state=0x%x\n",
+				  io->tgt_io.cmd.transport_state);
+		if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+			transport_generic_request_failure(&io->tgt_io.cmd,
+					TCM_CHECK_CONDITION_ABORT_CMD);
+			efct_set_lio_io_state(io,
+				EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE);
+		} else {
+			efct_set_lio_io_state(io,
+						EFCT_LIO_STATE_TGT_EXECUTE_CMD);
+			target_execute_cmd(&io->tgt_io.cmd);
+		}
+	} else {
+		if ((flags & EFCT_SCSI_IO_CMPL_RSP_SENT) == 0) {
+			struct efct_scsi_cmd_resp rsp;
+			/* send check condition if an error occurred */
+			memset(&rsp, 0, sizeof(rsp));
+			rsp.scsi_status = cmd->scsi_status;
+			rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+			rsp.sense_data_length = cmd->scsi_sense_length;
+
+			/* Check for residual underrun or overrun */
+			if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+				rsp.residual = -cmd->residual_count;
+			else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+				rsp.residual = cmd->residual_count;
+
+			rc = efct_scsi_send_resp(io, 0, &rsp,
+						 efct_lio_status_done, NULL);
+			efct_set_lio_io_state(io,
+						EFCT_LIO_STATE_SCSI_SEND_RSP);
+			if (rc != 0) {
+				efct_lio_io_printf(io,
+						   "Read done, failed to send rsp, rc=%d\n",
+				      rc);
+				transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+				efct_set_lio_io_state(io,
+					EFCT_LIO_STATE_TGT_GENERIC_FREE);
+			} else {
+				ocp->rsp_sent = true;
+			}
+		} else {
+			ocp->rsp_sent = true;
+			transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+			efct_set_lio_io_state(io,
+					EFCT_LIO_STATE_TGT_GENERIC_FREE);
+		}
+	}
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_tmf_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		  u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(io, "cmd=%p status=%d, flags=0x%x\n",
+			      &io->tgt_io.cmd, scsi_status, flags);
+
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_null_tmf_done(struct efct_io *tmfio,
+		       enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(tmfio, "cmd=%p status=%d, flags=0x%x\n",
+			      &tmfio->tgt_io.cmd, scsi_status, flags);
+
+	/* free struct efct_io only, no active se_cmd */
+	efct_scsi_io_complete(tmfio);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_queue_status(struct se_cmd *cmd)
+{
+	struct efct_scsi_cmd_resp rsp;
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_STATUS);
+	efct_lio_io_printf(io,
+		"status=0x%x trans_state=0x%x se_cmd_flags=0x%x sns_len=%d\n",
+		cmd->scsi_status, cmd->transport_state, cmd->se_cmd_flags,
+		cmd->scsi_sense_length);
+
+	memset(&rsp, 0, sizeof(rsp));
+	rsp.scsi_status = cmd->scsi_status;
+	rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+	rsp.sense_data_length = cmd->scsi_sense_length;
+
+	/* Check for residual underrun or overrun; mark overrun with a
+	 * negative value so the HW can recognize it
+	 */
+	if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+		rsp.residual = -cmd->residual_count;
+	else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+		rsp.residual = cmd->residual_count;
+
+	rc = efct_scsi_send_resp(io, 0, &rsp, efct_lio_status_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
+	if (rc == 0)
+		ocp->rsp_sent = true;
+	return rc;
+}
+
+static void efct_lio_queue_tm_rsp(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp = container_of(cmd,
+						     struct efct_scsi_tgt_io,
+						     cmd);
+	struct efct_io *tmfio = container_of(ocp, struct efct_io, tgt_io);
+	struct se_tmr_req *se_tmr = cmd->se_tmr_req;
+	u8 rspcode;
+
+	efct_lio_tmfio_printf(tmfio, "cmd=%p function=0x%x tmr->response=%d\n",
+			      cmd, se_tmr->function, se_tmr->response);
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_COMPLETE;
+		break;
+	case TMR_TASK_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND;
+		break;
+	case TMR_LUN_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER;
+		break;
+	case TMR_FUNCTION_REJECTED:
+	default:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_REJECTED;
+		break;
+	}
+	efct_scsi_send_tmf_resp(tmfio, rspcode, NULL, efct_lio_tmf_done, NULL);
+}
+
+static struct efct *efct_find_wwpn(u64 wwpn)
+{
+	int efctidx;
+	struct efct *efct;
+
+	/* Search for the HBA that has this WWPN */
+	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
+
+		efct = efct_devices[efctidx];
+		if (!efct)
+			continue;
+
+		if (wwpn == efct_get_wwpn(&efct->hw))
+			break;
+	}
+
+	if (efctidx == MAX_EFCT_DEVICES)
+		return NULL;
+
+	return efct_devices[efctidx];
+}
+
+static struct dentry *
+efct_create_dfs_session(struct efct *efct, void *data, u8 npiv)
+{
+	char name[16];
+
+	if (!efct->sess_debugfs_dir)
+		return NULL;
+
+	if (!npiv) {
+		snprintf(name, sizeof(name), "efct-sessions-%d",
+			 efct->instance_index);
+		return debugfs_create_file(name, 0644, efct->sess_debugfs_dir,
+					data, &efct_debugfs_session_fops);
+	}
+
+	snprintf(name, sizeof(name), "sessions-npiv-%d", efct->instance_index);
+
+	return debugfs_create_file(name, 0644, efct->sess_debugfs_dir, data,
+					&efct_npiv_debugfs_session_fops);
+}
+
+static struct se_wwn *
+efct_lio_make_sport(struct target_fabric_configfs *tf,
+		    struct config_group *group, const char *name)
+{
+	struct efct_lio_sport *lio_sport;
+	struct efct *efct;
+	int ret;
+	u64 wwpn;
+
+	ret = efct_lio_parse_wwn(name, &wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	efct = efct_find_wwpn(wwpn);
+	if (!efct) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+
+	lio_sport = kzalloc(sizeof(*lio_sport), GFP_KERNEL);
+	if (!lio_sport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_sport->efct = efct;
+	lio_sport->wwpn = wwpn;
+	efct_format_wwn(lio_sport->wwpn_str, sizeof(lio_sport->wwpn_str),
+			"naa.", wwpn);
+	efct->tgt_efct.lio_sport = lio_sport;
+
+	lio_sport->sessions = efct_create_dfs_session(efct, lio_sport, 0);
+	return &lio_sport->sport_wwn;
+}
+
+static struct se_wwn *
+efct_lio_npiv_make_sport(struct target_fabric_configfs *tf,
+			 struct config_group *group, const char *name)
+{
+	struct efct_lio_vport *lio_vport;
+	struct efct *efct;
+	int ret = -1;
+	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
+	char *p, *pbuf, tmp[128];
+	struct efct_lio_vport_list_t *vport_list;
+	struct fc_vport *new_fc_vport;
+	struct fc_vport_identifiers vport_id;
+	unsigned long flags = 0;
+
+	snprintf(tmp, sizeof(tmp), "%s", name);
+	pbuf = &tmp[0];
+
+	p = strsep(&pbuf, "@");
+
+	if (!p || !pbuf) {
+		pr_err("Unable to find separator operator(@)\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	ret = efct_lio_parse_wwn(p, &p_wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	ret = efct_lio_parse_npiv_wwn(pbuf, strlen(pbuf), &npiv_wwpn,
+				      &npiv_wwnn);
+	if (ret)
+		return ERR_PTR(ret);
+
+	efct = efct_find_wwpn(p_wwpn);
+	if (!efct) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+
+	lio_vport = kzalloc(sizeof(*lio_vport), GFP_KERNEL);
+	if (!lio_vport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_vport->efct = efct;
+	lio_vport->wwpn = p_wwpn;
+	lio_vport->npiv_wwpn = npiv_wwpn;
+	lio_vport->npiv_wwnn = npiv_wwnn;
+
+	efct_format_wwn(lio_vport->wwpn_str, sizeof(lio_vport->wwpn_str),
+			"naa.", npiv_wwpn);
+
+	vport_list = kmalloc(sizeof(*vport_list), GFP_KERNEL);
+	if (!vport_list) {
+		kfree(lio_vport);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	memset(vport_list, 0, sizeof(struct efct_lio_vport_list_t));
+	vport_list->lio_vport = lio_vport;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	INIT_LIST_HEAD(&vport_list->list_entry);
+	list_add_tail(&vport_list->list_entry, &efct->tgt_efct.vport_list);
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+
+	lio_vport->sessions = efct_create_dfs_session(efct, lio_vport, 1);
+
+	memset(&vport_id, 0, sizeof(vport_id));
+	vport_id.port_name = npiv_wwpn;
+	vport_id.node_name = npiv_wwnn;
+	vport_id.roles = FC_PORT_ROLE_FCP_INITIATOR;
+	vport_id.vport_type = FC_PORTTYPE_NPIV;
+	vport_id.disable = false;
+
+	new_fc_vport = fc_vport_create(efct->shost, 0, &vport_id);
+	if (!new_fc_vport) {
+		efc_log_err(efct, "fc_vport_create failed\n");
+		kfree(lio_vport);
+		kfree(vport_list);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	lio_vport->fc_vport = new_fc_vport;
+
+	return &lio_vport->vport_wwn;
+}
+
+static void
+efct_lio_drop_sport(struct se_wwn *wwn)
+{
+	struct efct_lio_sport *lio_sport = container_of(wwn,
+					    struct efct_lio_sport, sport_wwn);
+	struct efct *efct = lio_sport->efct;
+
+	/* only physical sport should exist, free lio_sport allocated
+	 * in efct_lio_make_sport.
+	 */
+
+	debugfs_remove(lio_sport->sessions);
+	lio_sport->sessions = NULL;
+
+	kfree(efct->tgt_efct.lio_sport);
+	efct->tgt_efct.lio_sport = NULL;
+}
+
+static void
+efct_lio_npiv_drop_sport(struct se_wwn *wwn)
+{
+	struct efct_lio_vport *lio_vport = container_of(wwn,
+			struct efct_lio_vport, vport_wwn);
+	struct efct_lio_vport_list_t *vport, *next_vport;
+	struct efct *efct = lio_vport->efct;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+
+	debugfs_remove(lio_vport->sessions);
+
+	if (lio_vport->fc_vport)
+		fc_vport_terminate(lio_vport->fc_vport);
+
+	lio_vport->sessions = NULL;
+
+	list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		if (vport->lio_vport == lio_vport) {
+			list_del(&vport->list_entry);
+			kfree(vport->lio_vport);
+			kfree(vport);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+}
+
+static struct se_portal_group *
+efct_lio_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_sport *lio_sport = container_of(wwn,
+					    struct efct_lio_sport, sport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct *efct;
+	unsigned long n;
+	int ret;
+
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->sport = lio_sport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	efct = lio_sport->efct;
+	efct->tgt_efct.tpg = tpg;
+	efc_log_debug(efct, "create portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->sport->efct, "drop portal group %d\n", tpg->tpgt);
+	tpg->sport->efct->tgt_efct.tpg = NULL;
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
+static struct se_portal_group *
+efct_lio_npiv_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_vport *lio_vport = container_of(wwn,
+			struct efct_lio_vport, vport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct *efct;
+	unsigned long n;
+	int ret;
+
+	efct = lio_vport->efct;
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	if (n != 1) {
+		efc_log_err(efct, "Invalid tpgt index: %ld provided\n", n);
+		return ERR_PTR(-EINVAL);
+	}
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->vport = lio_vport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	lio_vport->tpg = tpg;
+	efc_log_debug(efct, "create vport portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_npiv_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg = container_of(se_tpg,
+						struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->vport->efct, "drop npiv portal group %d\n",
+		       tpg->tpgt);
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
+static int
+efct_lio_init_nodeacl(struct se_node_acl *se_nacl, const char *name)
+{
+	struct efct_lio_nacl *nacl;
+	u64 wwnn;
+
+	if (efct_lio_parse_wwn(name, &wwnn, 0) < 0)
+		return -EINVAL;
+
+	nacl = container_of(se_nacl, struct efct_lio_nacl, se_node_acl);
+	nacl->nport_wwnn = wwnn;
+
+	efct_format_wwn(nacl->nport_name, sizeof(nacl->nport_name), "", wwnn);
+	return EFC_SUCCESS;
+}
+
+static int efct_lio_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static int
+efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static struct efct_lio_tpg *
+efct_get_vport_tpg(struct efc_node *node)
+{
+	struct efct *efct;
+	u64 wwpn = node->sport->wwpn;
+	struct efct_lio_vport_list_t *vport, *next;
+	struct efct_lio_vport *lio_vport = NULL;
+	struct efct_lio_tpg *tpg = NULL;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	list_for_each_entry_safe(vport, next, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		lio_vport = vport->lio_vport;
+		if (wwpn && lio_vport &&
+		    lio_vport->npiv_wwpn == wwpn) {
+			efc_log_test(efct, "found tpg on vport\n");
+			tpg = lio_vport->tpg;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	return tpg;
+}
+
+static int efct_session_cb(struct se_portal_group *se_tpg,
+			   struct se_session *se_sess, void *private)
+{
+	struct efc_node *node = private;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+
+	tgt_node = kzalloc(sizeof(*tgt_node), GFP_KERNEL);
+	if (!tgt_node)
+		return EFC_FAIL;
+
+	tgt_node->session = se_sess;
+	node->tgt_node = tgt_node;
+
+	return EFC_SUCCESS;
+}
+
+int efct_scsi_tgt_new_device(struct efct *efct)
+{
+	int rc = 0;
+	u32 total_ios;
+
+	/* Get the max settings */
+	efct->tgt_efct.max_sge = sli_get_max_sge(&efct->hw.sli);
+	efct->tgt_efct.max_sgl = sli_get_max_sgl(&efct->hw.sli);
+
+	/* initialize IO watermark fields */
+	atomic_set(&efct->tgt_efct.ios_in_use, 0);
+	total_ios = efct->hw.config.n_io;
+	efc_log_debug(efct, "total_ios=%d\n", total_ios);
+	efct->tgt_efct.watermark_min =
+			(total_ios * EFCT_WATERMARK_LOW_PCT) / 100;
+	efct->tgt_efct.watermark_max =
+			(total_ios * EFCT_WATERMARK_HIGH_PCT) / 100;
+	atomic_set(&efct->tgt_efct.io_high_watermark,
+		   efct->tgt_efct.watermark_max);
+	atomic_set(&efct->tgt_efct.watermark_hit, 0);
+	atomic_set(&efct->tgt_efct.initiator_count, 0);
+
+	lio_wq = create_singlethread_workqueue("efct_lio_worker");
+	if (!lio_wq) {
+		efc_log_err(efct, "workqueue create failed\n");
+		return -ENOMEM;
+	}
+
+	spin_lock_init(&efct->tgt_efct.efct_lio_lock);
+	INIT_LIST_HEAD(&efct->tgt_efct.vport_list);
+
+	return rc;
+}
+
+int efct_scsi_tgt_del_device(struct efct *efct)
+{
+	flush_workqueue(lio_wq);
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_scsi_tgt_new_domain(struct efc *efc, struct efc_domain *domain)
+{
+	return 0;
+}
+
+void
+efct_scsi_tgt_del_domain(struct efc *efc, struct efc_domain *domain)
+{
+}
+
+/* Called by the libefc when a new sli port (sport) is discovered */
+int
+efct_scsi_tgt_new_sport(struct efc *efc, struct efc_sli_port *sport)
+{
+	struct efct *efct = sport->efc->base;
+
+	efc_log_debug(efct, "New SPORT: %s bound to %s\n", sport->display_name,
+		       efct->tgt_efct.lio_sport->wwpn_str);
+
+	return EFC_SUCCESS;
+}
+
+/* Called by the libefc when a sport goes away. */
+void
+efct_scsi_tgt_del_sport(struct efc *efc, struct efc_sli_port *sport)
+{
+	efc_log_debug(efc, "Del SPORT: %s\n",
+		       sport->display_name);
+}
+/* Called by libefc to validate node. */
+int
+efct_scsi_validate_initiator(struct efc *efc, struct efc_node *node)
+{
+	return 1;
+}
+
+static void efct_lio_setup_session(struct work_struct *work)
+{
+	struct efct_lio_wq_data *wq_data = container_of(work,
+					   struct efct_lio_wq_data, work);
+	struct efct *efct = wq_data->efct;
+	struct efc_node *node = wq_data->ptr;
+	char wwpn[WWN_NAME_LEN];
+	struct efct_lio_tpg *tpg = NULL;
+	struct se_portal_group *se_tpg;
+	struct se_session *se_sess;
+	int watermark;
+	int initiator_count;
+
+	/* Check to see if the node belongs to a vport;
+	 * if not, use the physical port
+	 */
+	tpg = efct_get_vport_tpg(node);
+	if (tpg) {
+		se_tpg = &tpg->tpg;
+	} else if (efct->tgt_efct.tpg) {
+		tpg = efct->tgt_efct.tpg;
+		se_tpg = &tpg->tpg;
+	} else {
+		efc_log_err(efct, "failed to init session\n");
+		return;
+	}
+
+	/*
+	 * Format the FCP Initiator port_name into colon
+	 * separated values to match the format used by our explicit
+	 * ConfigFS NodeACLs.
+	 */
+	efct_format_wwn(wwpn, sizeof(wwpn), "",
+			efc_node_get_wwpn(node));
+
+	se_sess = target_setup_session(se_tpg, 0, 0,
+				       TARGET_PROT_NORMAL,
+				       wwpn, node,
+				       efct_session_cb);
+	if (IS_ERR(se_sess)) {
+		efc_log_err(efct, "failed to setup session\n");
+		return;
+	}
+
+	efc_log_debug(efct, "new initiator se_sess=%p node=%p\n",
+		      se_sess, node);
+
+	/* update IO watermark: increment initiator count */
+	initiator_count =
+	atomic_add_return(1, &efct->tgt_efct.initiator_count);
+	watermark = (efct->tgt_efct.watermark_max -
+	     initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+		efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark,
+		   watermark);
+
+	kfree(wq_data);
+}
+
+/* Called by the libefc when a new remote initiator is discovered */
+int efct_scsi_new_initiator(struct efc *efc, struct efc_node *node)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_lio_wq_data *wq_data;
+
+	/*
+	 * Since LIO only supports initiator validation at thread level,
+	 * we are open minded and accept all callers.
+	 */
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return -ENOMEM;
+
+	wq_data->ptr = node;
+	wq_data->efct = efct;
+	INIT_WORK(&wq_data->work, efct_lio_setup_session);
+	queue_work(lio_wq, &wq_data->work);
+	return EFC_SUCCESS;
+}
+
+static void efct_lio_remove_session(struct work_struct *work)
+{
+	struct efct_lio_wq_data *wq_data = container_of(work,
+					   struct efct_lio_wq_data, work);
+	struct efct *efct = wq_data->efct;
+	struct efc_node *node = wq_data->ptr;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+	struct se_session *se_sess;
+
+	tgt_node = node->tgt_node;
+	se_sess = tgt_node->session;
+
+	if (!se_sess) {
+		/* base driver has sent back-to-back requests
+		 * to unreg session with no intervening
+		 * register
+		 */
+		efc_log_test(efct,
+			      "unreg session for NULL session\n");
+		efc_scsi_del_initiator_complete(node->efc,
+						node);
+		return;
+	}
+
+	efc_log_debug(efct, "unreg session se_sess=%p node=%p\n",
+		       se_sess, node);
+
+	/* first flag all session commands to complete */
+	target_sess_cmd_list_set_waiting(se_sess);
+
+	/* now wait for session commands to complete */
+	target_wait_for_sess_cmds(se_sess);
+	target_remove_session(se_sess);
+
+	kfree(node->tgt_node);
+
+	node->tgt_node = NULL;
+	efc_scsi_del_initiator_complete(node->efc, node);
+
+	kfree(wq_data);
+}
+
+/* Called by the libefc when an initiator goes away. */
+int efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
+			int reason)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_lio_wq_data *wq_data;
+	int watermark;
+	int initiator_count;
+
+	if (reason == EFCT_SCSI_INITIATOR_MISSING)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	wq_data = kmalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	memset(wq_data, 0, sizeof(*wq_data));
+	wq_data->ptr = node;
+	wq_data->efct = efct;
+	INIT_WORK(&wq_data->work, efct_lio_remove_session);
+	queue_work(lio_wq, &wq_data->work);
+
+	/*
+	 * update IO watermark: decrement initiator count
+	 */
+	initiator_count =
+		atomic_sub_return(1, &efct->tgt_efct.initiator_count);
+	watermark = (efct->tgt_efct.watermark_max -
+			initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+			efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
+
+	return EFCT_SCSI_CALL_ASYNC;
+}
+
+int efct_scsi_recv_cmd(struct efct_io *io, u64 lun, u8 *cdb,
+		       u32 cdb_len, u32 flags)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct efct *efct = io->efct;
+	char *ddir;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+	struct se_session *se_sess;
+	int rc = 0;
+
+	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io));
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RECV_CMD);
+	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
+
+	/* set target timeout */
+	io->timeout = efct->target_io_timer_sec;
+
+	if (flags & EFCT_SCSI_CMD_SIMPLE)
+		ocp->task_attr = TCM_SIMPLE_TAG;
+	else if (flags & EFCT_SCSI_CMD_HEAD_OF_QUEUE)
+		ocp->task_attr = TCM_HEAD_TAG;
+	else if (flags & EFCT_SCSI_CMD_ORDERED)
+		ocp->task_attr = TCM_ORDERED_TAG;
+	else if (flags & EFCT_SCSI_CMD_ACA)
+		ocp->task_attr = TCM_ACA_TAG;
+
+	switch (flags & (EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT)) {
+	case EFCT_SCSI_CMD_DIR_IN:
+		ddir = "FROM_INITIATOR";
+		ocp->ddir = DMA_TO_DEVICE;
+		break;
+	case EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "TO_INITIATOR";
+		ocp->ddir = DMA_FROM_DEVICE;
+		break;
+	case EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "BIDIR";
+		ocp->ddir = DMA_BIDIRECTIONAL;
+		break;
+	default:
+		ddir = "NONE";
+		ocp->ddir = DMA_NONE;
+		break;
+	}
+
+	ocp->lun = lun;
+	efct_lio_io_printf(io, "new cmd=0x%x ddir=%s dl=%u\n",
+			  cdb[0], ddir, io->exp_xfer_len);
+
+	tgt_node = io->node->tgt_node;
+	se_sess = tgt_node->session;
+	if (se_sess) {
+		efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_SUBMIT_CMD);
+		rc = target_submit_cmd(&io->tgt_io.cmd, se_sess,
+				       cdb, &io->tgt_io.sense_buffer[0],
+				       ocp->lun, io->exp_xfer_len,
+				       ocp->task_attr, ocp->ddir,
+				       TARGET_SCF_ACK_KREF);
+		if (rc) {
+			efc_log_err(efct, "failed to submit cmd se_cmd: %p\n",
+				    &ocp->cmd);
+			efct_scsi_io_free(io);
+		}
+	}
+
+	return rc;
+}
+
+int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
+		   enum efct_scsi_tmf_cmd cmd,
+		  struct efct_io *io_to_abort, u32 flags)
+{
+	unsigned char tmr_func;
+	struct efct *efct = tmfio->efct;
+	struct efct_scsi_tgt_io *ocp = &tmfio->tgt_io;
+	struct efct_scsi_tgt_node *tgt_node = NULL;
+	struct se_session *se_sess;
+	int rc;
+
+	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io));
+	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_SCSI_RECV_TMF);
+	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
+	efct_lio_tmfio_printf(tmfio, "%s: new tmf %x lun=%u\n",
+			      tmfio->display_name, cmd, lun);
+
+	switch (cmd) {
+	case EFCT_SCSI_TMF_ABORT_TASK:
+		tmr_func = TMR_ABORT_TASK;
+		break;
+	case EFCT_SCSI_TMF_ABORT_TASK_SET:
+		tmr_func = TMR_ABORT_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_TASK_SET:
+		tmr_func = TMR_CLEAR_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_LOGICAL_UNIT_RESET:
+		tmr_func = TMR_LUN_RESET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_ACA:
+		tmr_func = TMR_CLEAR_ACA;
+		break;
+	case EFCT_SCSI_TMF_TARGET_RESET:
+		tmr_func = TMR_TARGET_WARM_RESET;
+		break;
+	case EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT:
+	case EFCT_SCSI_TMF_QUERY_TASK_SET:
+	default:
+		goto tmf_fail;
+	}
+
+	tmfio->tgt_io.tmf = tmr_func;
+	tmfio->tgt_io.lun = lun;
+	tmfio->tgt_io.io_to_abort = io_to_abort;
+
+	tgt_node = tmfio->node->tgt_node;
+
+	se_sess = tgt_node->session;
+	if (!se_sess)
+		return EFC_SUCCESS;
+
+	rc = target_submit_tmr(&ocp->cmd, se_sess, NULL, lun, ocp, tmr_func,
+			GFP_ATOMIC, tmfio->init_task_tag, TARGET_SCF_ACK_KREF);
+
+	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_TGT_SUBMIT_TMR);
+	if (rc)
+		goto tmf_fail;
+
+	return EFC_SUCCESS;
+
+tmf_fail:
+	efct_scsi_send_tmf_resp(tmfio, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+				NULL, efct_lio_null_tmf_done, NULL);
+	return EFC_SUCCESS;
+}
+
+/* Start items for efct_lio_tpg_attrib_cit */
+
+#define DEF_EFCT_TPG_ATTRIB(name)					  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_show(			  \
+		struct config_item *item, char *page)			  \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+			struct efct_lio_tpg, tpg);			  \
+									  \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		  \
+}									  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_store(			  \
+		struct config_item *item, const char *page, size_t count) \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+					struct efct_lio_tpg, tpg);	  \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		  \
+	unsigned long val;						  \
+	int ret;							  \
+									  \
+	ret = kstrtoul(page, 0, &val);					  \
+	if (ret < 0) {							  \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	  \
+		return ret;						  \
+	}								  \
+									  \
+	if (val != 0 && val != 1) {					  \
+		pr_err("Illegal boolean value %lu\n", val);		  \
+		return -EINVAL;						  \
+	}								  \
+									  \
+	a->name = val;							  \
+									  \
+	return count;							  \
+}									  \
+CONFIGFS_ATTR(efct_lio_tpg_attrib_, name)
+
+DEF_EFCT_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_tpg_attrib_attrs[] = {
+	&efct_lio_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
+
+#define DEF_EFCT_NPIV_TPG_ATTRIB(name)					   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_show(			   \
+		struct config_item *item, char *page)			   \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+									   \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		   \
+}									   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_store(			   \
+		struct config_item *item, const char *page, size_t count)  \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		   \
+	unsigned long val;						   \
+	int ret;							   \
+									   \
+	ret = kstrtoul(page, 0, &val);					   \
+	if (ret < 0) {							   \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	   \
+		return ret;						   \
+	}								   \
+									   \
+	if (val != 0 && val != 1) {					   \
+		pr_err("Illegal boolean value %lu\n", val);		   \
+		return -EINVAL;						   \
+	}								   \
+									   \
+	a->name = val;							   \
+									   \
+	return count;							   \
+}									   \
+CONFIGFS_ATTR(efct_lio_npiv_tpg_attrib_, name)
+
+DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_NPIV_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_npiv_tpg_attrib_attrs[] = {
+	&efct_lio_npiv_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_npiv_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_npiv_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
+
+CONFIGFS_ATTR(efct_lio_tpg_, enable);
+static struct configfs_attribute *efct_lio_tpg_attrs[] = {
+				&efct_lio_tpg_attr_enable, NULL };
+CONFIGFS_ATTR(efct_lio_npiv_tpg_, enable);
+static struct configfs_attribute *efct_lio_npiv_tpg_attrs[] = {
+				&efct_lio_npiv_tpg_attr_enable, NULL };
+
+static const struct target_core_fabric_ops efct_lio_ops = {
+	.module				= THIS_MODULE,
+	.fabric_name			= "efct",
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect = efct_lio_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect = efct_lio_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_make_sport,
+	.fabric_drop_wwn		= efct_lio_drop_sport,
+	.fabric_make_tpg		= efct_lio_make_tpg,
+	.fabric_drop_tpg		= efct_lio_drop_tpg,
+	.tpg_check_demo_mode_login_only = efct_lio_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_tpg_base_attrs		= efct_lio_tpg_attrs,
+	.tfc_tpg_attrib_attrs           = efct_lio_tpg_attrib_attrs,
+};
+
+static const struct target_core_fabric_ops efct_lio_npiv_ops = {
+	.module				= THIS_MODULE,
+	.fabric_name			= "efct_npiv",
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_npiv_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_npiv_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect =
+					efct_lio_npiv_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect =
+					efct_lio_npiv_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_npiv_make_sport,
+	.fabric_drop_wwn		= efct_lio_npiv_drop_sport,
+	.fabric_make_tpg		= efct_lio_npiv_make_tpg,
+	.fabric_drop_tpg		= efct_lio_npiv_drop_tpg,
+	.tpg_check_demo_mode_login_only =
+				efct_lio_npiv_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_tpg_base_attrs		= efct_lio_npiv_tpg_attrs,
+	.tfc_tpg_attrib_attrs		= efct_lio_npiv_tpg_attrib_attrs,
+};
+
+int efct_scsi_tgt_driver_init(void)
+{
+	int rc;
+
+	/* Register the top level struct config_item_type with TCM core */
+	rc = target_register_template(&efct_lio_ops);
+	if (rc < 0) {
+		pr_err("target_fabric_configfs_register failed with %d\n", rc);
+		return rc;
+	}
+	rc = target_register_template(&efct_lio_npiv_ops);
+	if (rc < 0) {
+		pr_err("target_fabric_configfs_register failed with %d\n", rc);
+		target_unregister_template(&efct_lio_ops);
+		return rc;
+	}
+	return EFC_SUCCESS;
+}
+
+int efct_scsi_tgt_driver_exit(void)
+{
+	target_unregister_template(&efct_lio_ops);
+	target_unregister_template(&efct_lio_npiv_ops);
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_lio.h b/drivers/scsi/elx/efct/efct_lio.h
new file mode 100644
index 000000000000..f884bcd3b240
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_LIO_H__
+#define __EFCT_LIO_H__
+
+#include "efct_scsi.h"
+#include <target/target_core_base.h>
+
+#define efct_lio_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, \
+		"[%s] [%04x][i:%04x t:%04x h:%04x][c:%02x]" fmt, \
+		io->node->display_name, io->instance_index, \
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, \
+		io->tgt_io.cmd.t_task_cdb[0], ##__VA_ARGS__)
+
+#define efct_lio_tmfio_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, \
+		"[%s] [%04x][i:%04x t:%04x h:%04x][f:%02x]" fmt, \
+		io->node->display_name, io->instance_index, \
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, \
+		io->tgt_io.tmf,  ##__VA_ARGS__)
+
+#define efct_set_lio_io_state(io, value) ((io)->tgt_io.state |= (value))
+
+struct efct_lio_wq_data {
+	struct efct		*efct;
+	void			*ptr;
+	struct work_struct	work;
+};
+
+/* Target private efct structure */
+struct efct_scsi_tgt {
+	u32			max_sge;
+	u32			max_sgl;
+
+	/*
+	 * Variables used to send task set full. We are using a high watermark
+	 * method to send task set full. We will reserve a fixed number of IOs
+	 * per initiator plus a fudge factor. Once we reach this number,
+	 * then the target will start sending task set full/busy responses.
+	 */
+	atomic_t		initiator_count;
+	atomic_t		ios_in_use;
+	atomic_t		io_high_watermark;
+
+	atomic_t		watermark_hit;
+	int			watermark_min;
+	int			watermark_max;
+
+	struct efct_lio_sport	*lio_sport;
+	struct efct_lio_tpg	*tpg;
+
+	struct list_head	vport_list;
+	/* Protects vport list*/
+	spinlock_t		efct_lio_lock;
+
+	u64			wwnn;
+};
+
+struct efct_scsi_tgt_sport {
+	struct efct_lio_sport	*lio_sport;
+};
+
+struct efct_scsi_tgt_node {
+	struct se_session	*session;
+};
+
+#define EFCT_LIO_STATE_SCSI_RECV_CMD		(1 << 0)
+#define EFCT_LIO_STATE_TGT_SUBMIT_CMD		(1 << 1)
+#define EFCT_LIO_STATE_TFO_QUEUE_DATA_IN	(1 << 2)
+#define EFCT_LIO_STATE_TFO_WRITE_PENDING	(1 << 3)
+#define EFCT_LIO_STATE_TGT_EXECUTE_CMD		(1 << 4)
+#define EFCT_LIO_STATE_SCSI_SEND_RD_DATA	(1 << 5)
+#define EFCT_LIO_STATE_TFO_CHK_STOP_FREE	(1 << 6)
+#define EFCT_LIO_STATE_SCSI_DATA_DONE		(1 << 7)
+#define EFCT_LIO_STATE_TFO_QUEUE_STATUS		(1 << 8)
+#define EFCT_LIO_STATE_SCSI_SEND_RSP		(1 << 9)
+#define EFCT_LIO_STATE_SCSI_RSP_DONE		(1 << 10)
+#define EFCT_LIO_STATE_TGT_GENERIC_FREE		(1 << 11)
+#define EFCT_LIO_STATE_SCSI_RECV_TMF		(1 << 12)
+#define EFCT_LIO_STATE_TGT_SUBMIT_TMR		(1 << 13)
+#define EFCT_LIO_STATE_TFO_WRITE_PEND_STATUS	(1 << 14)
+#define EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE  (1 << 15)
+
+#define EFCT_LIO_STATE_TFO_ABORTED_TASK		(1 << 29)
+#define EFCT_LIO_STATE_TFO_RELEASE_CMD		(1 << 30)
+#define EFCT_LIO_STATE_SCSI_CMPL_CMD		(1U << 31)
+
+struct efct_scsi_tgt_io {
+	struct se_cmd		cmd;
+	unsigned char		sense_buffer[TRANSPORT_SENSE_BUFFER];
+	enum dma_data_direction	ddir;
+	int			task_attr;
+	u64			lun;
+
+	u32			state;
+	u8			tmf;
+	struct efct_io		*io_to_abort;
+	u32			seg_map_cnt;
+	u32			seg_cnt;
+	u32			cur_seg;
+	enum efct_scsi_io_status err;
+	bool			aborting;
+	bool			rsp_sent;
+	u32			transferred_len;
+};
+
+/* Handler return codes */
+enum {
+	SCSI_HANDLER_DATAPHASE_STARTED = 1,
+	SCSI_HANDLER_RESP_STARTED,
+	SCSI_HANDLER_VALIDATED_DATAPHASE_STARTED,
+	SCSI_CMD_NOT_SUPPORTED,
+};
+
+#define WWN_NAME_LEN		32
+struct efct_lio_vport {
+	u64			wwpn;
+	u64			npiv_wwpn;
+	u64			npiv_wwnn;
+	unsigned char		wwpn_str[WWN_NAME_LEN];
+	struct se_wwn		vport_wwn;
+	struct efct_lio_tpg	*tpg;
+	struct efct		*efct;
+	struct dentry		*sessions;
+	struct Scsi_Host	*shost;
+	struct fc_vport		*fc_vport;
+	atomic_t		enable;
+};
+
+struct efct_lio_sport {
+	u64			wwpn;
+	unsigned char		wwpn_str[WWN_NAME_LEN];
+	struct se_wwn		sport_wwn;
+	struct efct_lio_tpg	*tpg;
+	struct efct		*efct;
+	struct dentry		*sessions;
+	atomic_t		enable;
+};
+
+struct efct_lio_tpg_attrib {
+	int			generate_node_acls;
+	int			cache_dynamic_acls;
+	int			demo_mode_write_protect;
+	int			prod_mode_write_protect;
+	int			demo_mode_login_only;
+	bool			session_deletion_wait;
+};
+
+struct efct_lio_tpg {
+	struct se_portal_group	tpg;
+	struct efct_lio_sport	*sport;
+	struct efct_lio_vport	*vport;
+	struct efct_lio_tpg_attrib tpg_attrib;
+	unsigned short		tpgt;
+	atomic_t		enabled;
+};
+
+struct efct_lio_nacl {
+	u64			nport_wwnn;
+	char			nport_name[WWN_NAME_LEN];
+	struct se_session	*session;
+	struct se_node_acl	se_node_acl;
+};
+
+struct efct_lio_vport_list_t {
+	struct list_head	list_entry;
+	struct efct_lio_vport	*lio_vport;
+};
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+
+#endif /* __EFCT_LIO_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 25/31] elx: efct: Hardware IO submission routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (23 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 24/31] elx: efct: LIO backend interface routines James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  8:10   ` Hannes Reinecke
  2020-04-16 12:44   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 26/31] elx: efct: link statistics and SFP data James Smart
                   ` (5 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines that write IOs to the work queue, and routines that send SRRS
(single request/response sequences) and raw frames.
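
As background for review: the submission model implemented by
efct_hw_wq_write() reduces to a submit-or-queue pattern. If the work
queue still has free entries and nothing older is pending, the WQE is
written immediately; otherwise it is parked on a per-queue pending
list, which is drained in submission order while free entries remain.
A simplified sketch of that pattern follows (illustrative only; the
struct and helper names, including example_hw_write(), are hypothetical
and not part of the patch):

	struct example_wqe {
		struct list_head list_entry;
	};

	struct example_wq {
		spinlock_t lock;
		struct list_head pending;
		u32 free_count;
	};

	static int example_submit_or_queue(struct example_wq *wq,
					   struct example_wqe *wqe)
	{
		unsigned long flags;
		int rc = 0;

		spin_lock_irqsave(&wq->lock, flags);
		if (list_empty(&wq->pending) && wq->free_count > 0) {
			/* room available and nothing older queued */
			rc = example_hw_write(wq, wqe);
		} else {
			/* no room, or older work pending: keep ordering */
			list_add_tail(&wqe->list_entry, &wq->pending);
		}
		spin_unlock_irqrestore(&wq->lock, flags);
		return rc;
	}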

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Reduced arguments for sli_fcp_tsend64_wqe(), sli_fcp_trsp64_wqe(),
  sli_fcp_treceive64_wqe() calls
---
 drivers/scsi/elx/efct/efct_hw.c | 519 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  19 ++
 2 files changed, 538 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index fd3c2dec3ef6..26dd9bd1eeef 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -2516,3 +2516,522 @@ efct_hw_flush(struct efct_hw *hw)
 
 	return EFC_SUCCESS;
 }
+
+int
+efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
+{
+	int rc = 0;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+	if (!list_empty(&wq->pending_list)) {
+		INIT_LIST_HEAD(&wqe->list_entry);
+		list_add_tail(&wqe->list_entry, &wq->pending_list);
+		wq->wq_pending_count++;
+		while (wq->free_count > 0 &&
+		       (wqe = list_first_entry_or_null(&wq->pending_list,
+					struct efct_hw_wqe, list_entry))) {
+			list_del(&wqe->list_entry);
+			rc = _efct_hw_wq_write(wq, wqe);
+			if (rc < 0)
+				break;
+			if (wqe->abort_wqe_submit_needed) {
+				wqe->abort_wqe_submit_needed = false;
+				sli_abort_wqe(&wq->hw->sli,
+					      wqe->wqebuf,
+					      wq->hw->sli.wqe_size,
+					      SLI_ABORT_XRI,
+					      wqe->send_abts, wqe->id,
+					      0, wqe->abort_reqtag,
+					      SLI4_CQ_DEFAULT);
+
+				INIT_LIST_HEAD(&wqe->list_entry);
+				list_add_tail(&wqe->list_entry,
+					      &wq->pending_list);
+				wq->wq_pending_count++;
+			}
+		}
+	} else {
+		if (wq->free_count > 0) {
+			rc = _efct_hw_wq_write(wq, wqe);
+		} else {
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+
+	return rc;
+}
+
+/**
+ * This routine supports communication sequences consisting of a single
+ * request and single response between two endpoints. Examples include:
+ *  - Sending an ELS request.
+ *  - Sending an ELS response - To send an ELS response, the caller must provide
+ * the OX_ID from the received request.
+ *  - Sending a FC Common Transport (FC-CT) request - To send a FC-CT request,
+ * the caller must provide the R_CTL, TYPE, and DF_CTL
+ * values to place in the FC frame header.
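+ *
+ * In each case the routine builds the corresponding SLI-4 WQE (ELS request
+ * or response, GEN_REQUEST64/XMIT_SEQUENCE64 for FC-CT, or XMIT_BLS_RSP64
+ * for BLS), marks the IO busy, and submits it through efct_hw_wq_write().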
+ */
+enum efct_hw_rtn
+efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		  struct efct_hw_io *io,
+		  struct efc_dma *send, u32 len,
+		  struct efc_dma *receive, struct efc_remote_node *rnode,
+		  union efct_hw_io_param_u *iparam,
+		  efct_hw_srrs_cb_t cb, void *arg)
+{
+	struct sli4_sge	*sge = NULL;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+	u16	local_flags = 0;
+	u32 sge0_flags;
+	u32 sge1_flags;
+
+	if (!io || !rnode || !iparam) {
+		pr_err("bad parm hw=%p io=%p s=%p r=%p rn=%p iparm=%p\n",
+			hw, io, send, receive, rnode, iparam);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_test(hw->os,
+			      "cannot send SRRS, HW state=%d\n", hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	io->rnode = rnode;
+	io->type  = type;
+	io->done = cb;
+	io->arg  = arg;
+
+	sge = io->sgl->virt;
+
+	/* clear both SGE */
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
+
+	sge0_flags = le32_to_cpu(sge[0].dw2_flags);
+	sge1_flags = le32_to_cpu(sge[1].dw2_flags);
+	if (send) {
+		sge[0].buffer_address_high =
+			cpu_to_le32(upper_32_bits(send->phys));
+		sge[0].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(send->phys));
+
+		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+
+		sge[0].buffer_length = cpu_to_le32(len);
+	}
+
+	if (type == EFCT_HW_ELS_REQ || type == EFCT_HW_FC_CT) {
+		sge[1].buffer_address_high =
+			cpu_to_le32(upper_32_bits(receive->phys));
+		sge[1].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(receive->phys));
+
+		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		sge1_flags |= SLI4_SGE_LAST;
+
+		sge[1].buffer_length = cpu_to_le32(receive->size);
+	} else {
+		sge0_flags |= SLI4_SGE_LAST;
+	}
+
+	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
+	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
+
+	switch (type) {
+	case EFCT_HW_ELS_REQ:
+		if (!send ||
+		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
+					  hw->sli.wqe_size, io->sgl,
+					*((u8 *)send->virt),
+					len, receive->size,
+					iparam->els.timeout,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, rnode->indicator,
+					rnode->sport->indicator,
+					rnode->attached, rnode->fc_id,
+					rnode->sport->fc_id)) {
+			efc_log_err(hw->os, "REQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_ELS_RSP:
+		if (!send ||
+		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, send, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, iparam->els.ox_id,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->attached, rnode->fc_id,
+					local_flags, U32_MAX)) {
+			efc_log_err(hw->os, "RSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_ELS_RSP_SID:
+		if (!send ||
+		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, send, len,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					iparam->els.ox_id,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->attached, rnode->fc_id,
+					local_flags, iparam->els.s_id)) {
+			efc_log_err(hw->os, "RSP (SID) WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_FC_CT:
+		if (!send ||
+		    sli_gen_request64_wqe(&hw->sli, io->wqe.wqebuf, io->sgl,
+					len, receive->size, io->indicator,
+					io->reqtag, SLI4_CQ_DEFAULT,
+					rnode->fc_id, rnode->indicator,
+					&iparam->fc_ct)) {
+			efc_log_err(hw->os, "GEN WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_FC_CT_RSP:
+		if (!send ||
+		    sli_xmit_sequence64_wqe(&hw->sli, io->wqe.wqebuf,
+					    io->sgl, len, io->indicator,
+					    io->reqtag, rnode->fc_id,
+					    rnode->indicator, &iparam->fc_ct)) {
+			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_RJT:
+	{
+		struct sli_bls_payload	bls;
+
+		if (type == EFCT_HW_BLS_ACC) {
+			bls.type = SLI4_SLI_BLS_ACC;
+			memcpy(&bls.u.acc, iparam->bls.payload,
+			       sizeof(bls.u.acc));
+		} else {
+			bls.type = SLI4_SLI_BLS_RJT;
+			memcpy(&bls.u.rjt, iparam->bls.payload,
+			       sizeof(bls.u.rjt));
+		}
+
+		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
+		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
+
+		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, &bls,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					rnode->attached,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->fc_id, rnode->sport->fc_id,
+					U32_MAX)) {
+			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_BLS_ACC_SID:
+	{
+		struct sli_bls_payload	bls;
+
+		bls.type = SLI4_SLI_BLS_ACC;
+		memcpy(&bls.u.acc, iparam->bls.payload,
+		       sizeof(bls.u.acc));
+
+		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
+		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
+
+		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
+					   hw->sli.wqe_size, &bls,
+					io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT,
+					rnode->attached,
+					rnode->indicator,
+					rnode->sport->indicator,
+					rnode->fc_id, rnode->sport->fc_id,
+					iparam->bls.s_id)) {
+			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE SID error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "bad SRRS type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		io->wq->use_count++;
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "efct_hw_wq_write failed: %d\n", rc);
+			io->xbusy = false;
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * Send a read, write, or response IO.
+ *
+ * This routine supports sending a higher-level IO (for example, FCP) between
+ * two endpoints as a target or initiator. Examples include:
+ *  - Sending read data and good response (target).
+ *  - Sending a response (target with no data or after receiving write data).
+ * This routine assumes all IOs use the SGL associated with the HW IO. Prior to
+ * calling this routine, the data should be loaded using efct_hw_io_add_sge().
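+ *
+ * For target writes the XFER_RDY payload is filled in here as well; in all
+ * cases the WQE is built from iparam->fcp_tgt and then submitted with
+ * efct_hw_wq_write().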
+ */
+enum efct_hw_rtn
+efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		struct efct_hw_io *io,
+		u32 len, union efct_hw_io_param_u *iparam,
+		struct efc_remote_node *rnode, void *cb, void *arg)
+{
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+	u32	rpi;
+
+	if (!io || !rnode || !iparam) {
+		pr_err("bad parm hw=%p io=%p iparam=%p rnode=%p\n",
+			hw, io, iparam, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rpi = rnode->indicator;
+
+	/*
+	 * Save state needed during later stages
+	 */
+	io->rnode = rnode;
+	io->type  = type;
+	io->done  = cb;
+	io->arg   = arg;
+
+	/*
+	 * Format the work queue entry used to send the IO
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE: {
+		u16 *flags = &iparam->fcp_tgt.flags;
+		struct fcp_txrdy *xfer = io->xfer_rdy.virt;
+
+		/*
+		 * Fill in the XFER_RDY for IF_TYPE 0 devices
+		 */
+		xfer->ft_data_ro = cpu_to_be32(iparam->fcp_tgt.offset);
+		xfer->ft_burst_len = cpu_to_be32(len);
+
+		if (io->xbusy)
+			*flags |= SLI4_IO_CONTINUATION;
+		else
+			*flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+
+		if (sli_fcp_treceive64_wqe(&hw->sli, io->wqe.wqebuf,
+					   &io->def_sgl, io->first_data_sge,
+					   len, io->indicator, io->reqtag,
+					   SLI4_CQ_DEFAULT, rpi, rnode->fc_id,
+					   0, 0, &iparam->fcp_tgt)) {
+			efc_log_err(hw->os, "TRECEIVE WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_READ: {
+		u16 *flags = &iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			*flags |= SLI4_IO_CONTINUATION;
+		else
+			*flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+
+		if (sli_fcp_tsend64_wqe(&hw->sli, io->wqe.wqebuf,
+					&io->def_sgl, io->first_data_sge,
+					len, io->indicator, io->reqtag,
+					SLI4_CQ_DEFAULT, rpi, rnode->fc_id,
+					0, 0, &iparam->fcp_tgt)) {
+			efc_log_err(hw->os, "TSEND WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_RSP: {
+		u16 *flags = &iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			*flags |= SLI4_IO_CONTINUATION;
+		else
+			*flags &= ~SLI4_IO_CONTINUATION;
+
+		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
+
+		if (sli_fcp_trsp64_wqe(&hw->sli, io->wqe.wqebuf,
+				       &io->def_sgl, len, io->indicator,
+				       io->reqtag, SLI4_CQ_DEFAULT, rpi,
+				       rnode->fc_id, 0, &iparam->fcp_tgt)) {
+			efc_log_err(hw->os, "TRSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		hw->tcmd_wq_submit[io->wq->instance]++;
+		io->wq->use_count++;
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "efct_hw_wq_write failed: %d\n", rc);
+			io->xbusy = false;
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * Send a raw frame
+ *
+ * Using the SEND_FRAME_WQE, a frame consisting of header and payload is sent.
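+ * A request tag is allocated for the completion callback, and the frame is
+ * posted on hardware WQ 0 using that queue's dedicated send_frame_io XRI.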
+ */
+enum efct_hw_rtn
+efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma *payload,
+		   struct efct_hw_send_frame_context *ctx,
+		   void (*callback)(void *arg, u8 *cqe, int status),
+		   void *arg)
+{
+	int rc;
+	struct efct_hw_wqe *wqe;
+	u32 xri;
+	struct hw_wq *wq;
+
+	wqe = &ctx->wqe;
+
+	/* populate the callback object */
+	ctx->hw = hw;
+
+	/* Fetch and populate request tag */
+	ctx->wqcb = efct_hw_reqtag_alloc(hw, callback, arg);
+	if (!ctx->wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+
+	wq = hw->hw_wq[0];
+
+	/* Set XRI and RX_ID in the header based on which WQ, and which
+	 * send_frame_io we are using
+	 */
+	xri = wq->send_frame_io->indicator;
+
+	/* Build the send frame WQE */
+	rc = sli_send_frame_wqe(&hw->sli, wqe->wqebuf,
+				hw->sli.wqe_size, sof, eof,
+				(u32 *)hdr, payload, payload->len,
+				EFCT_HW_SEND_FRAME_TIMEOUT, xri,
+				ctx->wqcb->instance_index);
+	if (rc) {
+		efc_log_err(hw->os, "sli_send_frame_wqe failed: %d\n",
+			     rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Write to WQ */
+	rc = efct_hw_wq_write(wq, wqe);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_wq_write failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	wq->use_count++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+u32
+efct_hw_io_get_count(struct efct_hw *hw,
+		     enum efct_hw_io_count_type io_count_type)
+{
+	struct efct_hw_io *io = NULL;
+	u32 count = 0;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+
+	switch (io_count_type) {
+	case EFCT_HW_IO_INUSE_COUNT:
+		list_for_each_entry(io, &hw->io_inuse, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_FREE_COUNT:
+		list_for_each_entry(io, &hw->io_free, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_WAIT_FREE_COUNT:
+		list_for_each_entry(io, &hw->io_wait_free, list_entry) {
+			count = count + 1;
+		}
+		break;
+	case EFCT_HW_IO_N_TOTAL_IO_COUNT:
+		count = hw->config.n_io;
+		break;
+	}
+
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return count;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index b427a4eda5a3..36a832f32616 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -714,4 +714,23 @@ efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
 extern int
 efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
 
+int efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe);
+enum efct_hw_rtn
+efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma *payload,
+		struct efct_hw_send_frame_context *ctx,
+		void (*callback)(void *arg, u8 *cqe, int status),
+		void *arg);
+typedef int(*efct_hw_srrs_cb_t)(struct efct_hw_io *io,
+				struct efc_remote_node *rnode, u32 length,
+				int status, u32 ext_status, void *arg);
+extern enum efct_hw_rtn
+efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		  struct efct_hw_io *io,
+		  struct efc_dma *send, u32 len,
+		  struct efc_dma *receive, struct efc_remote_node *rnode,
+		  union efct_hw_io_param_u *iparam,
+		  efct_hw_srrs_cb_t cb,
+		  void *arg);
+
 #endif /* __EFCT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 26/31] elx: efct: link statistics and SFP data
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (24 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 25/31] elx: efct: Hardware IO submission routines James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16 12:55   ` Daniel Wagner
  2020-04-12  3:32 ` [PATCH v3 27/31] elx: efct: xport and hardware teardown routines James Smart
                   ` (4 subsequent siblings)
  30 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to retrieve link stats and SFP transceiver data.
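
All of these helpers share one asynchronous mailbox pattern: allocate a
bounce mailbox buffer plus a small callback-argument structure, format
the SLI-4 command, and submit it with EFCT_CMD_NOWAIT. The completion
handler decodes the response, invokes the caller's callback, and frees
both allocations; the submit path frees them itself only when submission
fails, since no completion will arrive in that case. A condensed sketch
of that lifecycle (hypothetical names, not the driver's API):

	struct example_cb_arg {
		void (*cb)(int status, void *arg);
		void *arg;
	};

	/* completion side: report status, then release the buffers */
	static void example_done(int status, u8 *mqe, void *arg)
	{
		struct example_cb_arg *cb_arg = arg;

		if (cb_arg->cb)
			cb_arg->cb(status, cb_arg->arg);
		kfree(cb_arg);
		kfree(mqe);
	}

	/* submit side: clean up on failure because example_done never runs */
	static int example_issue(void (*cb)(int status, void *arg), void *arg)
	{
		u8 *mbx = kzalloc(EXAMPLE_BMBX_SIZE, GFP_KERNEL);
		struct example_cb_arg *cb_arg = kzalloc(sizeof(*cb_arg), GFP_KERNEL);
		int rc = -ENOMEM;

		if (mbx && cb_arg) {
			cb_arg->cb = cb;
			cb_arg->arg = arg;
			rc = example_submit_nowait(mbx, example_done, cb_arg);
		}
		if (rc) {
			kfree(cb_arg);
			kfree(mbx);
		}
		return rc;
	}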

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.c | 468 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  39 ++++
 2 files changed, 507 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 26dd9bd1eeef..ca2fd237c7d6 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -8,6 +8,40 @@
 #include "efct_hw.h"
 #include "efct_unsol.h"
 
+struct efct_hw_sfp_cb_arg {
+	void (*cb)(int status, u32 bytes_written,
+		   u8 *data, void *arg);
+	void *arg;
+	struct efc_dma payload;
+};
+
+struct efct_hw_temp_cb_arg {
+	void (*cb)(int status, u32 curr_temp,
+		   u32 crit_temp_thrshld,
+		   u32 warn_temp_thrshld,
+		   u32 norm_temp_thrshld,
+		   u32 fan_off_thrshld,
+		   u32 fan_on_thrshld,
+		   void *arg);
+	void *arg;
+};
+
+struct efct_hw_link_stat_cb_arg {
+	void (*cb)(int status,
+		   u32 num_counters,
+		struct efct_hw_link_stat_counts *counters,
+		void *arg);
+	void *arg;
+};
+
+struct efct_hw_host_stat_cb_arg {
+	void (*cb)(int status,
+		   u32 num_counters,
+		struct efct_hw_host_stat_counts *counters,
+		void *arg);
+	void *arg;
+};
+
 static enum efct_hw_rtn
 efct_hw_link_event_init(struct efct_hw *hw)
 {
@@ -3035,3 +3069,437 @@ efct_hw_io_get_count(struct efct_hw *hw,
 
 	return count;
 }
+
+static int
+efct_hw_cb_sfp(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
+{
+	struct efct_hw_sfp_cb_arg *cb_arg = arg;
+	struct sli4_rsp_cmn_read_transceiver_data *mbox_rsp;
+	struct efct *efct = hw->os;
+	u32 bytes_written;
+
+	if (cb_arg) {
+		mbox_rsp = cb_arg->payload.virt;
+		bytes_written = le32_to_cpu(mbox_rsp->hdr.response_length);
+		if (cb_arg->cb) {
+			if (!status && mbox_rsp->hdr.status)
+				status = mbox_rsp->hdr.status;
+			cb_arg->cb(status, bytes_written, mbox_rsp->page_data,
+				   cb_arg->arg);
+		}
+
+		dma_free_coherent(&efct->pcidev->dev,
+				  cb_arg->payload.size, cb_arg->payload.virt,
+				  cb_arg->payload.phys);
+		memset(&cb_arg->payload, 0, sizeof(struct efc_dma));
+		kfree(cb_arg);
+	}
+
+	kfree(mqe);
+	return EFC_SUCCESS;
+}
+
+/* Function to retrieve the SFP information */
+enum efct_hw_rtn
+efct_hw_get_sfp(struct efct_hw *hw, u16 page,
+		void (*cb)(int, u32, u8 *, void *), void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_sfp_cb_arg *cb_arg;
+	u8 *mbxdata;
+	struct efct *efct = hw->os;
+	struct efc_dma *dma;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+	memset(cb_arg, 0, sizeof(struct efct_hw_sfp_cb_arg));
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* payload holds the non-embedded portion */
+	dma = &cb_arg->payload;
+	dma->size = sizeof(struct sli4_rsp_cmn_read_transceiver_data);
+	dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
+				       dma->size, &dma->phys, GFP_DMA);
+	if (!dma->virt) {
+		kfree(cb_arg);
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/* Send the HW command */
+	if (!sli_cmd_common_read_transceiver_data(&hw->sli, mbxdata,
+						 SLI4_BMBX_SIZE, page,
+						 &cb_arg->payload))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_sfp, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os,
+			      "READ_TRANSCEIVER_DATA failed with status %d\n",
+			     rc);
+		dma_free_coherent(&efct->pcidev->dev,
+				  cb_arg->payload.size, cb_arg->payload.virt,
+				  cb_arg->payload.phys);
+		memset(&cb_arg->payload, 0, sizeof(struct efc_dma));
+		kfree(cb_arg);
+		kfree(mbxdata);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_temp(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_dump4 *mbox_rsp = (struct sli4_cmd_dump4 *)mqe;
+	struct efct_hw_temp_cb_arg *cb_arg = arg;
+	u32 curr_temp = le32_to_cpu(mbox_rsp->resp_data[0]); /* word 5 */
+	u32 crit_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[1]); /* word 6 */
+	u32 warn_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[2]); /* word 7 */
+	u32 norm_temp_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[3]); /* word 8 */
+	u32 fan_off_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[4]);   /* word 9 */
+	u32 fan_on_thrshld =
+			le32_to_cpu(mbox_rsp->resp_data[5]);    /* word 10 */
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status,
+				   curr_temp,
+				   crit_temp_thrshld,
+				   warn_temp_thrshld,
+				   norm_temp_thrshld,
+				   fan_off_thrshld,
+				   fan_on_thrshld,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return EFC_SUCCESS;
+}
+
+/* Function to retrieve the temperature information */
+enum efct_hw_rtn
+efct_hw_get_temperature(struct efct_hw *hw,
+			void (*cb)(int status,
+				   u32 curr_temp,
+				u32 crit_temp_thrshld,
+				u32 warn_temp_thrshld,
+				u32 norm_temp_thrshld,
+				u32 fan_off_thrshld,
+				u32 fan_on_thrshld,
+				void *arg),
+			void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_temp_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_dump_type4(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+			       SLI4_WKI_TAG_SAT_TEM))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_temp, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "DUMP_TYPE4 failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_link_stat(struct efct_hw *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_link_stats *mbox_rsp;
+	struct efct_hw_link_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_link_stat_counts counts[EFCT_HW_LINK_STAT_MAX];
+	u32 num_counters;
+	u32 mbox_rsp_flags = 0;
+
+	mbox_rsp = (struct sli4_cmd_read_link_stats *)mqe;
+	mbox_rsp_flags = le32_to_cpu(mbox_rsp->dw1_flags);
+	num_counters = (mbox_rsp_flags & SLI4_READ_LNKSTAT_GEC) ? 20 : 13;
+	memset(counts, 0, sizeof(struct efct_hw_link_stat_counts) *
+				 EFCT_HW_LINK_STAT_MAX);
+
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W02OF);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W03OF);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W04OF);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W05OF);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W06OF);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W07OF);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W08OF);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W09OF);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W10OF);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W11OF);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W12OF);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W13OF);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W14OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W15OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W16OF);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W17OF);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W18OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W19OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W20OF);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].overflow =
+		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W21OF);
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->linkfail_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssync_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssignal_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->primseq_errcnt);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->inval_txword_errcnt);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].counter =
+		le32_to_cpu(mbox_rsp->crc_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].counter =
+		le32_to_cpu(mbox_rsp->primseq_eventtimeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->elastic_bufoverrun_errcnt);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->arbit_fc_al_timeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_rx_buftor_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_rx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofa_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofdti_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofni_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_soff_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_aer_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_rpi_rescnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_xri_rescnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw,
+		       u8 req_ext_counters,
+		       u8 clear_overflow_flags,
+		       u8 clear_all_counters,
+		       void (*cb)(int status,
+				  u32 num_counters,
+			struct efct_hw_link_stat_counts *counters,
+			void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_link_stat_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_link_stats(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+				    req_ext_counters,
+				    clear_overflow_flags,
+				    clear_all_counters))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_link_stat, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_host_stat(struct efct_hw *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_status *mbox_rsp =
+					(struct sli4_cmd_read_status *)mqe;
+	struct efct_hw_host_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_host_stat_counts counts[EFCT_HW_HOST_STAT_MAX];
+	u32 num_counters = EFCT_HW_HOST_STAT_MAX;
+
+	memset(counts, 0, sizeof(struct efct_hw_host_stat_counts) *
+		   EFCT_HW_HOST_STAT_MAX);
+
+	counts[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_orig);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_resp);
+	counts[EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_p_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_F_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_f_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_rq_buf_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_rq_timeout_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_xri_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_xri_pool_cnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+	kfree(mqe);
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
+		       void (*cb)(int status,
+				  u32 num_counters,
+				  struct efct_hw_host_stat_counts *counters,
+				  void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_host_stat_cb_arg *cb_arg;
+	u8 *mbxdata;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command to get the host stats */
+	if (!sli_cmd_read_status(&hw->sli, mbxdata, SLI4_BMBX_SIZE, cc))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_host_stat, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "READ_HOST_STATS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 36a832f32616..0b6838c7f924 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -733,4 +733,43 @@ efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
 		  efct_hw_srrs_cb_t cb,
 		  void *arg);
 
+/* Function for retrieving SFP data */
+extern enum efct_hw_rtn
+efct_hw_get_sfp(struct efct_hw *hw, u16 page,
+		void (*cb)(int, u32, u8 *, void *), void *arg);
+
+/* Function for retrieving temperature data */
+extern enum efct_hw_rtn
+efct_hw_get_temperature(struct efct_hw *hw,
+			void (*efct_hw_temp_cb_t)(int status,
+						  u32 curr_temp,
+				u32 crit_temp_thrshld,
+				u32 warn_temp_thrshld,
+				u32 norm_temp_thrshld,
+				u32 fan_off_thrshld,
+				u32 fan_on_thrshld,
+				void *arg),
+			void *arg);
+
+/* Function for retrieving link statistics */
+extern enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw,
+		       u8 req_ext_counters,
+		u8 clear_overflow_flags,
+		u8 clear_all_counters,
+		void (*efct_hw_link_stat_cb_t)(int status,
+					       u32 num_counters,
+			struct efct_hw_link_stat_counts *counters,
+			void *arg),
+		void *arg);
+/* Function for retrieving host statistics */
+extern enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw,
+		       u8 cc,
+		void (*efct_hw_host_stat_cb_t)(int status,
+					       u32 num_counters,
+			struct efct_hw_host_stat_counts *counters,
+			void *arg),
+		void *arg);
+
 #endif /* __EFCT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 27/31] elx: efct: xport and hardware teardown routines
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (25 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 26/31] elx: efct: link statistics and SFP data James Smart
@ 2020-04-12  3:32 ` James Smart
  2020-04-16  9:45   ` Hannes Reinecke
  2020-04-16 13:01   ` Daniel Wagner
  2020-04-12  3:33 ` [PATCH v3 28/31] elx: efct: Firmware update, async link processing James Smart
                   ` (3 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:32 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to detach xport and hardware objects.
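
The orderly-shutdown path added below waits for the discovery domain to
be freed by registering a callback that completes a struct completion,
and falls back to a forced free when the shutdown request fails or the
wait times out. A minimal sketch of that wait-with-fallback pattern
(hypothetical names, not the driver's API):

	static void example_notify_done(void *arg)
	{
		complete(arg);
	}

	static void example_shutdown(struct example_dev *dev)
	{
		DECLARE_COMPLETION_ONSTACK(done);

		example_register_domain_free_cb(dev, example_notify_done, &done);

		if (example_port_shutdown(dev) ||
		    !wait_for_completion_timeout(&done,
				usecs_to_jiffies(EXAMPLE_SHUTDOWN_TIMEOUT_USEC)))
			/* orderly shutdown failed or timed out */
			example_force_free(dev);

		example_register_domain_free_cb(dev, NULL, NULL);
	}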

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
   Removed old patch 28 and merged with 27
---
 drivers/scsi/elx/efct/efct_hw.c    | 333 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |  31 ++++
 drivers/scsi/elx/efct/efct_xport.c | 291 ++++++++++++++++++++++++++++++++
 3 files changed, 655 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index ca2fd237c7d6..a007ca98895d 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -3503,3 +3503,336 @@ efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
 
 	return rc;
 }
+
+static int
+efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
+			void  *arg)
+{
+	kfree(mqe);
+	return EFC_SUCCESS;
+}
+
+/* Control a port (initialize, shutdown, or set link configuration) */
+enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		void (*cb)(int status, uintptr_t value, void *arg),
+		void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+
+	switch (ctrl) {
+	case EFCT_HW_PORT_INIT:
+	{
+		u8	*init_link;
+		u32 speed = 0;
+		u8 reset_alpa = 0;
+
+		u8	*cfg_link;
+
+		cfg_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!cfg_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		if (!sli_cmd_config_link(&hw->sli, cfg_link,
+					SLI4_BMBX_SIZE))
+			rc = efct_hw_command(hw, cfg_link,
+					     EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control,
+					     NULL);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(cfg_link);
+			efc_log_err(hw->os, "CONFIG_LINK failed\n");
+			break;
+		}
+		speed = hw->config.speed;
+		reset_alpa = (u8)(value & 0xff);
+
+		/* Allocate a new buffer for the init_link command */
+		init_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!init_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		rc = EFCT_HW_RTN_ERROR;
+		if (!sli_cmd_init_link(&hw->sli, init_link, SLI4_BMBX_SIZE,
+				      speed, reset_alpa))
+			rc = efct_hw_command(hw, init_link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(init_link);
+			efc_log_err(hw->os, "INIT_LINK failed\n");
+		}
+		break;
+	}
+	case EFCT_HW_PORT_SHUTDOWN:
+	{
+		u8	*down_link;
+
+		down_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!down_link)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		if (!sli_cmd_down_link(&hw->sli, down_link, SLI4_BMBX_SIZE))
+			rc = efct_hw_command(hw, down_link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			kfree(down_link);
+			efc_log_err(hw->os, "DOWN_LINK failed\n");
+		}
+		break;
+	}
+	default:
+		efc_log_test(hw->os, "unhandled control %#x\n", ctrl);
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_teardown(struct efct_hw *hw)
+{
+	u32	i = 0;
+	u32	iters = 10;
+	u32	max_rpi;
+	u32 destroy_queues;
+	u32 free_memory;
+	struct efc_dma *dma;
+	struct efct *efct = hw->os;
+
+	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+
+	/* Cancel Sliport Healthcheck */
+	if (hw->sliport_healthcheck) {
+		hw->sliport_healthcheck = 0;
+		efct_hw_config_sli_port_health_check(hw, 0, 0);
+	}
+
+	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+
+		efct_hw_flush(hw);
+
+		/*
+		 * If there are outstanding commands, wait for them to complete
+		 */
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some cmds still pending on MQ queue\n");
+
+		/* Cancel any remaining commands */
+		efct_hw_command_cancel(hw);
+	} else {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+	}
+
+	max_rpi = hw->sli.qinfo.max_qcount[SLI_RSRC_RPI];
+	if (hw->rpi_ref) {
+		for (i = 0; i < max_rpi; i++) {
+			u32 count;
+
+			count = atomic_read(&hw->rpi_ref[i].rpi_count);
+			if (count)
+				efc_log_debug(hw->os,
+					       "non-zero ref [%d]=%d\n",
+					       i, count);
+		}
+		kfree(hw->rpi_ref);
+		hw->rpi_ref = NULL;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  hw->rnode_mem.size, hw->rnode_mem.virt,
+			  hw->rnode_mem.phys);
+	memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
+
+	if (hw->io) {
+		for (i = 0; i < hw->config.n_io; i++) {
+			if (hw->io[i] && hw->io[i]->sgl &&
+			    hw->io[i]->sgl->virt) {
+				dma_free_coherent(&efct->pcidev->dev,
+						  hw->io[i]->sgl->size,
+						  hw->io[i]->sgl->virt,
+						  hw->io[i]->sgl->phys);
+				memset(hw->io[i]->sgl, 0,
+				       sizeof(struct efc_dma));
+			}
+			kfree(hw->io[i]);
+			hw->io[i] = NULL;
+		}
+		kfree(hw->io);
+		hw->io = NULL;
+		kfree(hw->wqe_buffs);
+		hw->wqe_buffs = NULL;
+	}
+
+	dma = &hw->xfer_rdy;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	dma = &hw->dump_sges;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	dma = &hw->loop_map;
+	dma_free_coherent(&efct->pcidev->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	for (i = 0; i < hw->wq_count; i++)
+		sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->rq_count; i++)
+		sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->mq_count; i++)
+		sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->cq_count; i++)
+		sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
+			       free_memory);
+
+	/* Free rq buffers */
+	efct_hw_rx_free(hw);
+
+	efct_hw_queue_teardown(hw);
+
+	if (sli_teardown(&hw->sli))
+		efc_log_err(hw->os, "SLI teardown failed\n");
+
+	/* record the fact that the queues are non-functional */
+	hw->state = EFCT_HW_STATE_UNINITIALIZED;
+
+	/* free sequence free pool */
+	kfree(hw->seq_pool);
+	hw->seq_pool = NULL;
+
+	/* free hw_wq_callback pool */
+	efct_hw_reqtag_pool_free(hw);
+
+	/* Mark HW setup as not having been called */
+	hw->hw_setup_called = false;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_sli_reset(struct efct_hw *hw, enum efct_hw_reset reset,
+		  enum efct_hw_state prev_state)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	switch (reset) {
+	case EFCT_HW_RESET_FUNCTION:
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_RESET_FIRMWARE:
+		efc_log_debug(hw->os, "issuing firmware reset\n");
+		if (sli_fw_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_soft_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		/*
+		 * Because the FW reset leaves the FW in a non-running state,
+		 * follow that with a regular reset.
+		 */
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	default:
+		efc_log_err(hw->os,
+			     "unknown reset type - no reset performed\n");
+		hw->state = prev_state;
+		rc = EFCT_HW_RTN_INVALID_ARG;
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32	iters;
+	enum efct_hw_state prev_state = hw->state;
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE)
+		efc_log_debug(hw->os,
+			      "HW state %d is not active\n", hw->state);
+
+	hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
+
+	/*
+	 * If the prev_state is already reset/teardown in progress,
+	 * don't continue further
+	 */
+	if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
+	    prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
+		return efct_hw_sli_reset(hw, reset, prev_state);
+
+	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+		efct_hw_flush(hw);
+
+		/*
+		 * If a mailbox command requiring a DMA is outstanding
+		 * (SFP/DDM), then the FW will UE when the reset is issued.
+		 * So attempt to complete all mailbox commands.
+		 */
+		iters = 10;
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some commands still pending on MQ queue\n");
+	}
+
+	/* Reset the chip */
+	rc = efct_hw_sli_reset(hw, reset, prev_state);
+	if (rc == EFCT_HW_RTN_INVALID_ARG)
+		return EFCT_HW_RTN_ERROR;
+
+	return rc;
+}
+
+int
+efct_hw_get_num_eq(struct efct_hw *hw)
+{
+	return hw->eq_count;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 0b6838c7f924..9c025a1709e3 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -772,4 +772,35 @@ efct_hw_get_host_stats(struct efct_hw *hw,
 			void *arg),
 		void *arg);
 
+struct hw_eq *efct_hw_new_eq(struct efct_hw *hw, u32 entry_count);
+struct hw_cq *efct_hw_new_cq(struct hw_eq *eq, u32 entry_count);
+extern u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count);
+struct hw_mq *efct_hw_new_mq(struct hw_cq *cq, u32 entry_count);
+extern struct hw_wq
+*efct_hw_new_wq(struct hw_cq *cq, u32 entry_count);
+extern struct hw_rq
+*efct_hw_new_rq(struct hw_cq *cq, u32 entry_count);
+extern u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count);
+void efct_hw_del_eq(struct hw_eq *eq);
+void efct_hw_del_cq(struct hw_cq *cq);
+void efct_hw_del_mq(struct hw_mq *mq);
+void efct_hw_del_wq(struct hw_wq *wq);
+void efct_hw_del_rq(struct hw_rq *rq);
+void efct_hw_queue_dump(struct efct_hw *hw);
+void efct_hw_queue_teardown(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_teardown(struct efct_hw *hw);
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset);
+int efct_hw_get_num_eq(struct efct_hw *hw);
+
+extern enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		void (*cb)(int status, uintptr_t value, void *arg),
+		void *arg);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index b683208d396f..fef7f427dbbf 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -521,3 +521,294 @@ efct_scsi_release_fc_transport(void)
 
 	return EFC_SUCCESS;
 }
+
+int
+efct_xport_detach(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+
+	/* free resources associated with target-server and initiator-client */
+	efct_scsi_tgt_del_device(efct);
+
+	efct_scsi_del_device(efct);
+
+	/* Shutdown FC Statistics timer */
+	if (timer_pending(&xport->stats_timer))
+		del_timer(&xport->stats_timer);
+
+	efct_hw_teardown(&efct->hw);
+
+	efct_xport_delete_debugfs(efct);
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_xport_domain_free_cb(struct efc *efc, void *arg)
+{
+	struct completion *done = arg;
+
+	complete(done);
+}
+
+static int
+efct_xport_post_node_event_cb(struct efct_hw *hw, int status,
+			      u8 *mqe, void *arg)
+{
+	struct efct_xport_post_node_event *payload = arg;
+
+	if (payload) {
+		efc_node_post_shutdown(payload->node, payload->evt,
+				       payload->context);
+		complete(&payload->done);
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+	}
+	return EFC_SUCCESS;
+}
+
+static void
+efct_xport_force_free(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	struct efc *efc = efct->efcport;
+
+	efc_log_debug(efct, "reset required, do force shutdown\n");
+
+	if (!efc->domain) {
+		efc_log_err(efct, "Domain is already freed\n");
+		return;
+	}
+
+	efc_domain_force_free(efc->domain);
+}
+
+int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...)
+{
+	u32 rc = 0;
+	struct efct *efct = NULL;
+	va_list argp;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_PORT_ONLINE: {
+		/* Bring the port on-line */
+		rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
+					  NULL, NULL);
+		if (rc)
+			efc_log_err(efct,
+				     "%s: Can't init port\n", efct->desc);
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+	case EFCT_XPORT_PORT_OFFLINE: {
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL))
+			efc_log_err(efct, "port shutdown failed\n");
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+
+	case EFCT_XPORT_SHUTDOWN: {
+		struct completion done;
+		bool reset_required;
+		unsigned long timeout;
+
+		/* if a PHYSDEV reset was performed (e.g. hw dump), will affect
+		 * all PCI functions; orderly shutdown won't work,
+		 * just force free
+		 */
+
+		reset_required = sli_reset_required(&efct->hw.sli);
+
+		if (reset_required) {
+			efc_log_debug(efct,
+				       "reset required, do force shutdown\n");
+			efct_xport_force_free(xport);
+			break;
+		}
+		init_completion(&done);
+
+		efc_register_domain_free_cb(efct->efcport,
+					efct_xport_domain_free_cb, &done);
+
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL)) {
+			efc_log_debug(efct,
+				       "port shutdown failed, do force shutdown\n");
+			efct_xport_force_free(xport);
+		} else {
+			efc_log_debug(efct,
+				       "Waiting %d seconds for domain shutdown.\n",
+			(EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC / 1000000));
+
+			timeout = usecs_to_jiffies(
+					EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC);
+			if (!wait_for_completion_timeout(&done, timeout)) {
+				efc_log_debug(efct,
+					       "Domain shutdown timed out!!\n");
+				efct_xport_force_free(xport);
+			}
+		}
+
+		efc_register_domain_free_cb(efct->efcport, NULL, NULL);
+
+		/* Free up any saved virtual ports */
+		efc_vport_del_all(efct->efcport);
+		break;
+	}
+
+	/*
+	 * POST_NODE_EVENT:  post an event to a node object
+	 *
+	 * This transport function is used to post an event to a node object.
+	 * It does this by submitting a NOP mailbox command to defer execution
+	 * to the interrupt context (thereby enforcing the serialized execution
+	 * of event posting to the node state machine instances)
+	 */
+	case EFCT_XPORT_POST_NODE_EVENT: {
+		struct efc_node *node;
+		u32	evt;
+		void *context;
+		struct efct_xport_post_node_event *payload = NULL;
+		struct efct *efct;
+		struct efct_hw *hw;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		node = va_arg(argp, struct efc_node *);
+		evt = va_arg(argp, u32);
+		context = va_arg(argp, void *);
+		va_end(argp);
+
+		payload = kmalloc(sizeof(*payload), GFP_KERNEL);
+		if (!payload)
+			return EFC_FAIL;
+
+		memset(payload, 0, sizeof(*payload));
+
+		efct = node->efc->base;
+		hw = &efct->hw;
+
+		/* if node's state machine is disabled,
+		 * don't bother continuing
+		 */
+		if (!node->sm.current_state) {
+			efc_log_test(efct, "node %p state machine disabled\n",
+				      node);
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Setup payload */
+		init_completion(&payload->done);
+
+		/* one for self and one for callback */
+		atomic_set(&payload->refcnt, 2);
+		payload->node = node;
+		payload->evt = evt;
+		payload->context = context;
+
+		if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
+				       payload)) {
+			efc_log_test(efct, "efct_hw_async_call failed\n");
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Wait for completion */
+		if (wait_for_completion_interruptible(&payload->done)) {
+			efc_log_test(efct,
+				      "POST_NODE_EVENT: completion failed\n");
+			rc = -1;
+		}
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+
+		break;
+	}
+	/*
+	 * Set wwnn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWNN_SET: {
+		u64 wwnn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwnn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWNN %016llx\n", wwnn);
+		xport->req_wwnn = wwnn;
+
+		break;
+	}
+	/*
+	 * Set wwpn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWPN_SET: {
+		u64 wwpn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwpn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWPN %016llx\n", wwpn);
+		xport->req_wwpn = wwpn;
+
+		break;
+	}
+
+	default:
+		break;
+	}
+	return rc;
+}
+
+void
+efct_xport_free(struct efct_xport *xport)
+{
+	if (xport) {
+		efct_io_pool_free(xport->io_pool);
+
+		kfree(xport);
+	}
+}
+
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template)
+{
+	if (transport_template) {
+		/* Releasing FC transport */
+		pr_debug("releasing transport layer\n");
+		fc_release_transport(transport_template);
+	}
+}
+
+static void
+efct_xport_remove_host(struct Scsi_Host *shost)
+{
+	fc_remove_host(shost);
+}
+
+int efct_scsi_del_device(struct efct *efct)
+{
+	if (efct->shost) {
+		efc_log_debug(efct, "Unregistering with Transport Layer\n");
+		efct_xport_remove_host(efct->shost);
+		efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
+		scsi_remove_host(efct->shost);
+		scsi_host_put(efct->shost);
+		efct->shost = NULL;
+	}
+	return EFC_SUCCESS;
+}
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 28/31] elx: efct: Firmware update, async link processing
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (26 preceding siblings ...)
  2020-04-12  3:32 ` [PATCH v3 27/31] elx: efct: xport and hardware teardown routines James Smart
@ 2020-04-12  3:33 ` James Smart
  2020-04-16 10:01   ` Hannes Reinecke
  2020-04-16 13:10   ` Daniel Wagner
  2020-04-12  3:33 ` [PATCH v3 29/31] elx: efct: scsi_transport_fc host interface support James Smart
                   ` (2 subsequent siblings)
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:33 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Handling of async link events.
Registrations for VFI, VPI and RPI.
Firmware update helper routines.
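
The async-call helper added below defers a callback into the mailbox
completion (event processing) context by posting a COMMON_NOP mailbox
command and invoking the saved callback when the NOP completes. A
minimal sketch of that deferral (hypothetical names; the patch's own
version is efct_hw_async_call()):

	struct example_async_ctx {
		void (*cb)(void *arg);
		void *arg;
		u8 cmd[EXAMPLE_BMBX_SIZE];
	};

	/* invoked from the mailbox completion (event processing) path */
	static void example_async_done(int status, u8 *mqe, void *arg)
	{
		struct example_async_ctx *ctx = arg;

		ctx->cb(ctx->arg);
		kfree(ctx);
	}

	static int example_async_call(void (*cb)(void *arg), void *arg)
	{
		struct example_async_ctx *ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);

		if (!ctx)
			return -ENOMEM;
		ctx->cb = cb;
		ctx->arg = arg;
		example_format_nop(ctx->cmd, sizeof(ctx->cmd));
		if (example_submit_nowait(ctx->cmd, example_async_done, ctx)) {
			kfree(ctx);	/* completion will never run */
			return -EIO;
		}
		return 0;
	}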

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Reworked efct_hw_port_attach_reg_vpi() and efct_hw_port_attach_reg_vfi()
  Return defined values
---
 drivers/scsi/elx/efct/efct_hw.c | 1509 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |   58 ++
 2 files changed, 1567 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index a007ca98895d..b3a1ec0f674b 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -42,6 +42,12 @@ struct efct_hw_host_stat_cb_arg {
 	void *arg;
 };
 
+struct efct_hw_fw_wr_cb_arg {
+	void (*cb)(int status, u32 bytes_written,
+		   u32 change_status, void *arg);
+	void *arg;
+};
+
 static enum efct_hw_rtn
 efct_hw_link_event_init(struct efct_hw *hw)
 {
@@ -3836,3 +3842,1506 @@ efct_hw_get_num_eq(struct efct_hw *hw)
 {
 	return hw->eq_count;
 }
+
+/* HW async call context structure */
+struct efct_hw_async_call_ctx {
+	efct_hw_async_cb_t callback;
+	void *arg;
+	u8 cmd[SLI4_BMBX_SIZE];
+};
+
+static void
+efct_hw_async_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+	struct efct_hw_async_call_ctx *ctx = arg;
+
+	if (ctx) {
+		if (ctx->callback)
+			(*ctx->callback)(hw, status, mqe, ctx->arg);
+
+		kfree(ctx);
+	}
+}
+
+/*
+ * Post a NOP mbox cmd; the callback with argument is invoked upon completion
+ * while in the event processing context.
+ */
+int
+efct_hw_async_call(struct efct_hw *hw,
+		   efct_hw_async_cb_t callback, void *arg)
+{
+	int rc = 0;
+	struct efct_hw_async_call_ctx *ctx;
+
+	/*
+	 * Allocate a callback context (which includes the mbox cmd buffer);
+	 * it must be persistent because the mbox cmd submission may be
+	 * queued and executed later.
+	 */
+	ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+	if (!ctx)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(ctx, 0, sizeof(*ctx));
+	ctx->callback = callback;
+	ctx->arg = arg;
+
+	/* Build and send a NOP mailbox command */
+	if (sli_cmd_common_nop(&hw->sli, ctx->cmd, sizeof(ctx->cmd), 0)) {
+		efc_log_err(hw->os, "COMMON_NOP format failure\n");
+		kfree(ctx);
+		return -1;
+	}
+
+	if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
+			    ctx)) {
+		efc_log_err(hw->os, "COMMON_NOP command failure\n");
+		kfree(ctx);
+		rc = -1;
+	}
+	return rc;
+}
+
+static void
+efct_hw_port_free_resources(struct efc_sli_port *sport, int evt, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	struct efct *efct = hw->os;
+
+	/* Clear the sport attached flag */
+	sport->attached = false;
+
+	/* Free the service parameters buffer */
+	if (sport->dma.virt) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  sport->dma.size, sport->dma.virt,
+				  sport->dma.phys);
+		memset(&sport->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the command buffer */
+	kfree(data);
+
+	/* Free the SLI resources */
+	sli_resource_free(&hw->sli, SLI_RSRC_VPI, sport->indicator);
+
+	efc_lport_cb(efct->efcport, evt, sport);
+}
+
+static int
+efct_hw_port_get_mbox_status(struct efc_sli_port *sport,
+			     u8 *mqe, int status)
+{
+	struct efct_hw *hw = sport->hw;
+	struct sli4_mbox_command_header *hdr =
+			(struct sli4_mbox_command_header *)mqe;
+	int rc = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status vpi=%#x st=%x hdr=%x\n",
+			       sport->indicator, status,
+			       le16_to_cpu(hdr->status));
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_port_free_unreg_vpi_cb(struct efct_hw *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	int evt = EFC_HW_PORT_FREE_OK;
+	int rc = 0;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		evt = EFC_HW_PORT_FREE_FAIL;
+		rc = -1;
+	}
+
+	efct_hw_port_free_resources(sport, evt, mqe);
+	return rc;
+}
+
+static void
+efct_hw_port_free_unreg_vpi(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	int rc;
+
+	/* Allocate memory and send unreg_vpi */
+	if (!data) {
+		data = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!data) {
+			efct_hw_port_free_resources(sport,
+						    EFC_HW_PORT_FREE_FAIL,
+						    data);
+			return;
+		}
+		memset(data, 0, SLI4_BMBX_SIZE);
+	}
+
+	rc = sli_cmd_unreg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			       sport->indicator, SLI4_UNREG_TYPE_PORT);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_FREE_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_free_unreg_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_FREE_FAIL, data);
+	}
+}
+
+static void
+efct_hw_port_send_evt(struct efc_sli_port *sport, int evt, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	struct efct *efct = hw->os;
+
+	/* Free the mbox buffer */
+	kfree(data);
+
+	/* Now inform the registered callbacks */
+	efc_lport_cb(efct->efcport, evt, sport);
+
+	/* Set the sport attached flag */
+	if (evt == EFC_HW_PORT_ATTACH_OK)
+		sport->attached = true;
+
+	/* If there is a pending free request, then handle it now */
+	if (sport->free_req_pending)
+		efct_hw_port_free_unreg_vpi(sport, NULL);
+}
+
+static int
+efct_hw_port_alloc_init_vpi_cb(struct efct_hw *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efct_hw_port_send_evt(sport, EFC_HW_PORT_ALLOC_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_hw_port_alloc_init_vpi(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	int rc;
+
+	/* If there is a pending free request, then handle it now */
+	if (sport->free_req_pending) {
+		efct_hw_port_free_resources(sport, EFC_HW_PORT_FREE_OK, data);
+		return;
+	}
+
+	rc = sli_cmd_init_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
+			      sport->indicator, sport->domain->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_alloc_init_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_port_alloc_read_sparm64_cb(struct efct_hw *hw,
+				   int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	u8 *payload = NULL;
+	struct efct *efct = hw->os;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	payload = sport->dma.virt;
+
+	memcpy(&sport->sli_wwpn,
+	       payload + SLI4_READ_SPARM64_WWPN_OFFSET,
+		sizeof(sport->sli_wwpn));
+	memcpy(&sport->sli_wwnn,
+	       payload + SLI4_READ_SPARM64_WWNN_OFFSET,
+		sizeof(sport->sli_wwnn));
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  sport->dma.size, sport->dma.virt, sport->dma.phys);
+	memset(&sport->dma, 0, sizeof(struct efc_dma));
+	efct_hw_port_alloc_init_vpi(sport, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_hw_port_alloc_read_sparm64(struct efc_sli_port *sport, void *data)
+{
+	struct efct_hw *hw = sport->hw;
+	struct efct *efct = hw->os;
+	int rc;
+
+	/* Allocate memory for the service parameters */
+	sport->dma.size = 112;
+	sport->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					     sport->dma.size, &sport->dma.phys,
+					     GFP_DMA);
+	if (!sport->dma.virt) {
+		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
+				  &sport->dma, sport->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_port_alloc_read_sparm64_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ALLOC_FAIL, data);
+	}
+}
+
+/*
+ * This function allocates a VPI object for the port and stores it in the
+ * indicator field of the port object.
+ */
+enum efct_hw_rtn
+efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
+		   struct efc_domain *domain, u8 *wwpn)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	u8	*cmd = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32 index;
+
+	sport->indicator = U32_MAX;
+	sport->hw = hw;
+	sport->free_req_pending = false;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (wwpn)
+		memcpy(&sport->sli_wwpn, wwpn, sizeof(sport->sli_wwpn));
+
+	if (sli_resource_alloc(&hw->sli, SLI_RSRC_VPI,
+			       &sport->indicator, &index)) {
+		efc_log_err(hw->os, "VPI allocation failure\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (domain) {
+		cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!cmd) {
+			rc = EFCT_HW_RTN_NO_MEMORY;
+			goto efct_hw_port_alloc_out;
+		}
+		memset(cmd, 0, SLI4_BMBX_SIZE);
+
+		/*
+		 * If the WWPN is NULL, fetch the default
+		 * WWPN and WWNN before initializing the VPI
+		 */
+		if (!wwpn)
+			efct_hw_port_alloc_read_sparm64(sport, cmd);
+		else
+			efct_hw_port_alloc_init_vpi(sport, cmd);
+	} else if (!wwpn) {
+		/* This is the convention for the HW, not SLI */
+		efc_log_test(hw->os, "need WWN for physical port\n");
+		rc = EFCT_HW_RTN_ERROR;
+	}
+	/* else: domain NULL and wwpn non-NULL, nothing to do */
+
+efct_hw_port_alloc_out:
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		kfree(cmd);
+
+		sli_resource_free(&hw->sli, SLI_RSRC_VPI,
+				  sport->indicator);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_port_attach_reg_vpi_cb(struct efct_hw *hw,
+			       int status, u8 *mqe, void *arg)
+{
+	struct efc_sli_port *sport = arg;
+	int rc;
+
+	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
+	if (rc) {
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efct_hw_port_send_evt(sport, EFC_HW_PORT_ATTACH_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+/*
+ * This function registers a previously-allocated VPI with the
+ * device.
+ */
+enum efct_hw_rtn
+efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
+		    u32 fc_id)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	u8	*buf = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!sport) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p sport=%p\n", hw,
+			sport);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	sport->fc_id = fc_id;
+
+	rc = sli_cmd_reg_vpi(&hw->sli, buf, SLI4_BMBX_SIZE, sport->fc_id,
+			    sport->sli_wwpn, sport->indicator,
+			    sport->domain->indicator, false);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VPI format failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, buf);
+		return rc;
+	}
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+			     efct_hw_port_attach_reg_vpi_cb, sport);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VPI command failure\n");
+		efct_hw_port_free_resources(sport,
+					    EFC_HW_PORT_ATTACH_FAIL, buf);
+	}
+
+	return rc;
+}
+
+/* Issue the UNREG_VPI command to free the assigned VPI context */
+enum efct_hw_rtn
+efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!sport) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p sport=%p\n", hw,
+			sport);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (sport->attached)
+		efct_hw_port_free_unreg_vpi(sport, NULL);
+	else
+		sport->free_req_pending = true;
+
+	return rc;
+}
+
+static int
+efct_hw_domain_get_mbox_status(struct efc_domain *domain,
+			       u8 *mqe, int status)
+{
+	struct efct_hw *hw = domain->hw;
+	struct sli4_mbox_command_header *hdr =
+			(struct sli4_mbox_command_header *)mqe;
+	int rc = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status vfi=%#x st=%x hdr=%x\n",
+			       domain->indicator, status,
+			       le16_to_cpu(hdr->status));
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static void
+efct_hw_domain_free_resources(struct efc_domain *domain,
+			      int evt, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	struct efct *efct = hw->os;
+
+	/* Free the service parameters buffer */
+	if (domain->dma.virt) {
+		dma_free_coherent(&efct->pcidev->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the command buffer */
+	kfree(data);
+
+	/* Free the SLI resources */
+	sli_resource_free(&hw->sli, SLI_RSRC_VFI, domain->indicator);
+
+	efc_domain_cb(efct->efcport, evt, domain);
+}
+
+static void
+efct_hw_domain_send_sport_evt(struct efc_domain *domain,
+			      int port_evt, int domain_evt, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	struct efct *efct = hw->os;
+
+	/* Free the mbox buffer */
+	kfree(data);
+
+	/* Send alloc/attach ok to the physical sport */
+	efct_hw_port_send_evt(domain->sport, port_evt, NULL);
+
+	/* Now inform the registered callbacks */
+	efc_domain_cb(efct->efcport, domain_evt, domain);
+}
+
+static int
+efct_hw_domain_alloc_read_sparm64_cb(struct efct_hw *hw,
+				     int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	hw->domain = domain;
+	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ALLOC_OK,
+				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_hw_domain_alloc_read_sparm64(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	int rc;
+
+	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
+				  &domain->dma, 0);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_alloc_read_sparm64_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efct_hw_domain_alloc_init_vfi_cb(struct efct_hw *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efct_hw_domain_alloc_read_sparm64(domain, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_hw_domain_alloc_init_vfi(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	struct efc_sli_port *sport = domain->sport;
+	int rc;
+
+	/*
+	 * For FC, the HW already registered an FCFI.
+	 * Copy FCF information into the domain and jump to INIT_VFI.
+	 */
+	domain->fcf_indicator = hw->fcf_indicator;
+	rc = sli_cmd_init_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			      domain->indicator, domain->fcf_indicator,
+			sport->indicator);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VFI format failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_alloc_init_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "INIT_VFI command failure\n");
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+/*
+ * This function starts a series of commands needed to connect to the domain,
+ * including
+ *   - REG_FCFI
+ *   - INIT_VFI
+ *   - READ_SPARMS
+ */
+enum efct_hw_rtn
+efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
+		     u32 fcf)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+	u8 *cmd = NULL;
+	u32 index;
+
+	if (!domain || !domain->sport) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p domain=%p sport=%p\n",
+			    hw, domain, domain ? domain->sport : NULL);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(efct,
+			     "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!cmd)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(cmd, 0, SLI4_BMBX_SIZE);
+
+	/* allocate memory for the service parameters */
+	domain->dma.size = 112;
+	domain->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
+					      domain->dma.size,
+					      &domain->dma.phys, GFP_DMA);
+	if (!domain->dma.virt) {
+		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
+		kfree(cmd);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	domain->hw = hw;
+	domain->fcf = fcf;
+	domain->fcf_indicator = U32_MAX;
+	domain->indicator = U32_MAX;
+
+	if (sli_resource_alloc(&hw->sli,
+			       SLI_RSRC_VFI, &domain->indicator,
+				    &index)) {
+		efc_log_err(hw->os, "VFI allocation failure\n");
+
+		kfree(cmd);
+		dma_free_coherent(&efct->pcidev->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_domain_alloc_init_vfi(domain, cmd);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static int
+efct_hw_domain_attach_reg_vfi_cb(struct efct_hw *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int rc;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		hw->domain = NULL;
+		efct_hw_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ATTACH_OK,
+				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_domain_attach(struct efc *efc,
+		      struct efc_domain *domain, u32 fc_id)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	u8	*buf = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p domain=%p\n",
+			hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	domain->sport->fc_id = fc_id;
+
+	rc = sli_cmd_reg_vfi(&hw->sli, buf, SLI4_BMBX_SIZE, domain->indicator,
+			    domain->fcf_indicator, domain->dma,
+			    domain->sport->indicator, domain->sport->sli_wwpn,
+			    domain->sport->fc_id);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_attach_reg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "REG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return rc;
+
+cleanup:
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_ATTACH_FAIL, buf);
+
+	return rc;
+}
+
+static int
+efct_hw_domain_free_unreg_vfi_cb(struct efct_hw *hw,
+				 int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int evt = EFC_HW_DOMAIN_FREE_OK;
+	int rc = 0;
+
+	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		evt = EFC_HW_DOMAIN_FREE_FAIL;
+		rc = -1;
+	}
+
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, evt, mqe);
+	return rc;
+}
+
+static void
+efct_hw_domain_free_unreg_vfi(struct efc_domain *domain, void *data)
+{
+	struct efct_hw *hw = domain->hw;
+	int rc;
+
+	if (!data) {
+		data = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!data)
+			goto cleanup;
+	}
+
+	rc = sli_cmd_unreg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
+			       domain->indicator, SLI4_UNREG_TYPE_DOMAIN);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
+			     efct_hw_domain_free_unreg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(hw->os, "UNREG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return;
+
+cleanup:
+	hw->domain = NULL;
+	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
+}
+
+enum efct_hw_rtn
+efct_hw_domain_free(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(hw->os,
+			     "bad parameter(s) hw=%p domain=%p\n",
+			hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_domain_free_unreg_vfi(domain, NULL);
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	if (!domain) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p domain=%p\n", hw, domain);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	dma_free_coherent(&efct->pcidev->dev,
+			  domain->dma.size, domain->dma.virt, domain->dma.phys);
+	memset(&domain->dma, 0, sizeof(struct efc_dma));
+	sli_resource_free(&hw->sli, SLI_RSRC_VFI,
+			  domain->indicator);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
+		   u32 fc_addr, struct efc_sli_port *sport)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	/* Check for invalid indicator */
+	if (rnode->indicator != U32_MAX) {
+		efc_log_err(hw->os,
+			     "RPI allocation failure addr=%#x rpi=%#x\n",
+			    fc_addr, rnode->indicator);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* NULL SLI port indicates an unallocated remote node */
+	rnode->sport = NULL;
+
+	if (sli_resource_alloc(&hw->sli, SLI_RSRC_RPI,
+			       &rnode->indicator, &rnode->index)) {
+		efc_log_err(hw->os, "RPI allocation failure addr=%#x\n",
+			     fc_addr);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	rnode->fc_id = fc_addr;
+	rnode->sport = sport;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static int
+efct_hw_cb_node_attach(struct efct_hw *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efc_remote_node *rnode = arg;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	enum efc_hw_remote_node_event	evt = 0;
+
+	struct efct   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+		atomic_sub_return(1, &hw->rpi_ref[rnode->index].rpi_count);
+		rnode->attached = false;
+		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 0);
+		evt = EFC_HW_NODE_ATTACH_FAIL;
+	} else {
+		rnode->attached = true;
+		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 1);
+		evt = EFC_HW_NODE_ATTACH_OK;
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, rnode);
+
+	kfree(mqe);
+
+	return EFC_SUCCESS;
+}
+
+/* Update a remote node object with the remote port's service parameters */
+enum efct_hw_rtn
+efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
+	u8		*buf = NULL;
+	u32	count = 0;
+
+	if (!hw || !rnode || !sparms) {
+		efc_log_err(efct,
+			     "bad parameter(s) hw=%p rnode=%p sparms=%p\n",
+			    hw, rnode, sparms);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+	/*
+	 * If the attach count is non-zero, this RPI has already been reg'd.
+	 * Otherwise, register the RPI
+	 */
+	if (rnode->index == U32_MAX) {
+		efc_log_err(efct, "bad parameter rnode->index invalid\n");
+		kfree(buf);
+		return EFCT_HW_RTN_ERROR;
+	}
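+	/* take a reference; count holds the pre-increment value */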
+	count = atomic_add_return(1, &hw->rpi_ref[rnode->index].rpi_count);
+	count--;
+	if (count) {
+		/*
+		 * Can't attach multiple FC_IDs to a node unless High Login
+		 * Mode is enabled
+		 */
+		if (!hw->sli.high_login_mode) {
+			efc_log_test(hw->os,
+				      "attach to attached node HLM=%d cnt=%d\n",
+				     hw->sli.high_login_mode, count);
+			rc = EFCT_HW_RTN_SUCCESS;
+		} else {
+			rnode->node_group = true;
+			rnode->attached =
+			 atomic_read(&hw->rpi_ref[rnode->index].rpi_attached);
+			rc = rnode->attached  ? EFCT_HW_RTN_SUCCESS_SYNC :
+							 EFCT_HW_RTN_SUCCESS;
+		}
+	} else {
+		rnode->node_group = false;
+
+		if (!sli_cmd_reg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
+				    rnode->fc_id,
+				    rnode->indicator, rnode->sport->indicator,
+				    sparms, 0, 0))
+			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_node_attach, rnode);
+	}
+
+	if (count || rc) {
+		if (rc < EFCT_HW_RTN_SUCCESS) {
+			atomic_sub_return(1,
+					  &hw->rpi_ref[rnode->index].rpi_count);
+			efc_log_err(hw->os,
+				     "%s error\n", count ? "HLM" : "REG_RPI");
+		}
+		kfree(buf);
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_node_free_resources(struct efc *efc,
+			    struct efc_remote_node *rnode)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!hw || !rnode) {
+		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
+			     hw, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (rnode->sport) {
+		if (rnode->attached) {
+			efc_log_err(hw->os, "Err: rnode is still attached\n");
+			return EFCT_HW_RTN_ERROR;
+		}
+		if (rnode->indicator != U32_MAX) {
+			if (sli_resource_free(&hw->sli, SLI_RSRC_RPI,
+					      rnode->indicator)) {
+				efc_log_err(hw->os,
+					     "RPI free fail RPI %d addr=%#x\n",
+					    rnode->indicator,
+					    rnode->fc_id);
+				rc = EFCT_HW_RTN_ERROR;
+			} else {
+				rnode->node_group = false;
+				rnode->indicator = U32_MAX;
+				rnode->index = U32_MAX;
+				rnode->free_group = false;
+			}
+		}
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_node_free(struct efct_hw *hw,
+		     int status, u8 *mqe, void *arg)
+{
+	struct efc_remote_node *rnode = arg;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
+	int		rc = 0;
+	struct efct   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+
+		/*
+		 * In certain cases, a non-zero MQE status is OK (all must be
+		 * true):
+		 *   - node is attached
+		 *   - if High Login Mode is enabled, node is part of a node
+		 * group
+		 *   - status is 0x1400
+		 */
+		if (!rnode->attached ||
+		    (hw->sli.high_login_mode && !rnode->node_group) ||
+				(le16_to_cpu(hdr->status) !=
+				 MBX_STATUS_RPI_NOT_REG))
+			rc = -1;
+	}
+
+	if (rc == 0) {
+		rnode->node_group = false;
+		rnode->attached = false;
+
+		if (atomic_read(&hw->rpi_ref[rnode->index].rpi_count) == 0)
+			atomic_set(&hw->rpi_ref[rnode->index].rpi_attached,
+				   0);
+		evt = EFC_HW_NODE_FREE_OK;
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, rnode);
+
+	kfree(mqe);
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw *hw = &efct->hw;
+	u8	*buf = NULL;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS_SYNC;
+	u32	index = U32_MAX;
+
+	if (!hw || !rnode) {
+		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
+			     hw, rnode);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	index = rnode->index;
+
+	if (rnode->sport) {
+		u32	count = 0;
+		u32	fc_id;
+
+		if (!rnode->attached)
+			return EFCT_HW_RTN_SUCCESS_SYNC;
+
+		buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+		if (!buf)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		memset(buf, 0, SLI4_BMBX_SIZE);
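+		/* drop our reference; count holds the pre-decrement value */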
+		count = atomic_sub_return(1, &hw->rpi_ref[index].rpi_count);
+		count++;
+		if (count <= 1) {
+			/*
+			 * There are no other references to this RPI so
+			 * unregister it
+			 */
+			fc_id = U32_MAX;
+			/* and free the resource */
+			rnode->node_group = false;
+			rnode->free_group = true;
+		} else {
+			if (!hw->sli.high_login_mode)
+				efc_log_test(hw->os,
+					      "Inval cnt with HLM off, cnt=%d\n",
+					     count);
+			fc_id = rnode->fc_id & 0x00ffffff;
+		}
+
+		rc = EFCT_HW_RTN_ERROR;
+
+		if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
+				      rnode->indicator,
+				      SLI_RSRC_RPI, fc_id))
+			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_node_free, rnode);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "UNREG_RPI failed\n");
+			kfree(buf);
+			rc = EFCT_HW_RTN_ERROR;
+		}
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_node_free_all(struct efct_hw *hw, int status, u8 *mqe,
+			 void *arg)
+{
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
+	int		rc = 0;
+	u32	i;
+	struct efct   *efct = hw->os;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+	} else {
+		evt = EFC_HW_NODE_FREE_ALL_OK;
+	}
+
+	if (evt == EFC_HW_NODE_FREE_ALL_OK) {
+		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size;
+		     i++)
+			atomic_set(&hw->rpi_ref[i].rpi_count, 0);
+
+		if (sli_resource_reset(&hw->sli, SLI_RSRC_RPI)) {
+			efc_log_test(hw->os, "RPI free all failure\n");
+			rc = -1;
+		}
+	}
+
+	efc_remote_node_cb(efct->efcport, evt, NULL);
+
+	kfree(mqe);
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_node_free_all(struct efct_hw *hw)
+{
+	u8	*buf = NULL;
+	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
+	if (!buf)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE, 0xffff,
+			      SLI_RSRC_FCFI, U32_MAX))
+		rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_node_free_all,
+				     NULL);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os, "UNREG_RPI failed\n");
+		kfree(buf);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+struct efct_hw_get_nvparms_cb_arg {
+	void (*cb)(int status,
+		   u8 *wwpn, u8 *wwnn,
+		u8 hard_alpa, u32 preferred_d_id,
+		void *arg);
+	void *arg;
+};
+
+static int
+efct_hw_get_nvparms_cb(struct efct_hw *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efct_hw_get_nvparms_cb_arg *cb_arg = arg;
+	struct sli4_cmd_read_nvparms *mbox_rsp =
+			(struct sli4_cmd_read_nvparms *)mqe;
+	u8 hard_alpa;
+	u32 preferred_d_id;
+
+	hard_alpa = le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
+				SLI4_READ_NVPARAMS_HARD_ALPA;
+	preferred_d_id = (le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
+			  SLI4_READ_NVPARAMS_PREFERRED_D_ID) >> 8;
+	if (cb_arg->cb)
+		cb_arg->cb(status, mbox_rsp->wwpn, mbox_rsp->wwnn,
+			   hard_alpa, preferred_d_id,
+			   cb_arg->arg);
+
+	kfree(mqe);
+	kfree(cb_arg);
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_hw_get_nvparms(struct efct_hw *hw,
+		    void (*cb)(int status, u8 *wwpn,
+			       u8 *wwnn, u8 hard_alpa,
+			       u32 preferred_d_id, void *arg),
+		    void *ul_arg)
+{
+	u8 *mbxdata;
+	struct efct_hw_get_nvparms_cb_arg *cb_arg;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = ul_arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_get_nvparms_cb, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "READ_NVPARMS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+struct efct_hw_set_nvparms_cb_arg {
+	void (*cb)(int status, void *arg);
+	void *arg;
+};
+
+static int
+efct_hw_set_nvparms_cb(struct efct_hw *hw, int status,
+		       u8 *mqe, void *arg)
+{
+	struct efct_hw_set_nvparms_cb_arg *cb_arg = arg;
+
+	if (cb_arg->cb)
+		cb_arg->cb(status, cb_arg->arg);
+
+	kfree(mqe);
+	kfree(cb_arg);
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_hw_set_nvparms(struct efct_hw *hw,
+		    void (*cb)(int status, void *arg),
+		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		u32 preferred_d_id,
+		void *ul_arg)
+{
+	u8 *mbxdata;
+	struct efct_hw_set_nvparms_cb_arg *cb_arg;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	/* mbxdata holds the header of the command */
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	/*
+	 * cb_arg holds the data that will be passed to the callback on
+	 * completion
+	 */
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	cb_arg->cb = cb;
+	cb_arg->arg = ul_arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_write_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE, wwpn,
+				  wwnn, hard_alpa, preferred_d_id))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_set_nvparms_cb, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "SET_NVPARMS failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_cb_fw_write(struct efct_hw *hw, int status,
+		    u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_sli_config *mbox_rsp =
+					(struct sli4_cmd_sli_config *)mqe;
+	struct sli4_rsp_cmn_write_object *wr_obj_rsp;
+	struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
+	u32 bytes_written;
+	u16 mbox_status;
+	u32 change_status;
+
+	wr_obj_rsp = (struct sli4_rsp_cmn_write_object *)
+		      &mbox_rsp->payload.embed;
+	bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
+	mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
+	change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
+			 RSP_CHANGE_STATUS);
+
+	kfree(mqe);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (!status && mbox_status)
+				status = mbox_status;
+			cb_arg->cb(status, bytes_written, change_status,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+
+	return EFC_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_firmware_write_sli4_intf_2(struct efct_hw *hw, struct efc_dma *dma,
+				   u32 size, u32 offset, int last,
+			      void (*cb)(int status, u32 bytes_written,
+					 u32 change_status, void *arg),
+				void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	u8 *mbxdata;
+	struct efct_hw_fw_wr_cb_arg *cb_arg;
+	int noc = 0;
+
+	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
+	if (!mbxdata)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(mbxdata, 0, SLI4_BMBX_SIZE);
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg) {
+		kfree(mbxdata);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+	memset(cb_arg, 0, sizeof(struct efct_hw_fw_wr_cb_arg));
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_common_write_object(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
+					noc, last, size, offset, "/prg/",
+					dma))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_fw_write, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_test(hw->os, "COMMON_WRITE_OBJECT failed\n");
+		kfree(mbxdata);
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+/* Write a portion of a firmware image to the device */
+enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
+		       u32 size, u32 offset, int last,
+			void (*cb)(int status, u32 bytes_written,
+				   u32 change_status, void *arg),
+			void *arg)
+{
+	return efct_hw_firmware_write_sli4_intf_2(hw, dma, size, offset,
+						     last, cb, arg);
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 9c025a1709e3..6bd1fde177cd 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -802,5 +802,63 @@ efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
 		     uintptr_t value,
 		void (*cb)(int status, uintptr_t value, void *arg),
 		void *arg);
+extern enum efct_hw_rtn
+efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
+		   struct efc_domain *domain, u8 *wwpn);
+extern enum efct_hw_rtn
+efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
+		    u32 fc_id);
+extern enum efct_hw_rtn
+efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport);
+extern enum efct_hw_rtn
+efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
+		     u32 fcf);
+extern enum efct_hw_rtn
+efct_hw_domain_attach(struct efc *efc,
+		      struct efc_domain *domain, u32 fc_id);
+extern enum efct_hw_rtn
+efct_hw_domain_free(struct efc *efc, struct efc_domain *domain);
+extern enum efct_hw_rtn
+efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain);
+extern enum efct_hw_rtn
+efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
+		   u32 fc_addr, struct efc_sli_port *sport);
+extern enum efct_hw_rtn
+efct_hw_node_free_all(struct efct_hw *hw);
+extern enum efct_hw_rtn
+efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms);
+extern enum efct_hw_rtn
+efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode);
+extern enum efct_hw_rtn
+efct_hw_node_free_resources(struct efc *efc,
+			    struct efc_remote_node *rnode);
+
+extern enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
+		       u32 size, u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				  u32 change_status, void *arg),
+		       void *arg);
+
+extern enum efct_hw_rtn
+efct_hw_get_nvparms(struct efct_hw *hw,
+		    void (*mgmt_cb)(int status, u8 *wwpn,
+				    u8 *wwnn, u8 hard_alpa,
+				    u32 preferred_d_id, void *arg),
+		    void *arg);
+extern enum efct_hw_rtn
+efct_hw_set_nvparms(struct efct_hw *hw,
+		    void (*mgmt_cb)(int status, void *arg),
+		    u8 *wwpn, u8 *wwnn, u8 hard_alpa,
+		    u32 preferred_d_id, void *arg);
+
+typedef int (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
+				  u8 *mqe, void *arg);
+extern int
+efct_hw_async_call(struct efct_hw *hw,
+		   efct_hw_async_cb_t callback, void *arg);
+enum efct_hw_rtn
+efct_hw_init_queues(struct efct_hw *hw);
 
 #endif /* __EFCT_H__ */
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 29/31] elx: efct: scsi_transport_fc host interface support
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (27 preceding siblings ...)
  2020-04-12  3:33 ` [PATCH v3 28/31] elx: efct: Firmware update, async link processing James Smart
@ 2020-04-12  3:33 ` James Smart
  2020-04-12  3:33 ` [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
  2020-04-12  3:33 ` [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
  30 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:33 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
Integration with the scsi_transport_fc host interfaces.

A short, illustrative sketch of how the fc_function_template definitions
in this patch would be attached follows; it is not part of the patch
itself.
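The sketch assumes the standard fc_attach_transport() /
fc_release_transport() API from <scsi/scsi_transport_fc.h>; the driver's
real registration path is elsewhere and the example_* names are
hypothetical stand-ins:

static struct scsi_transport_template *example_fc_tt;
static struct scsi_transport_template *example_vport_fc_tt;

static int example_attach_fc_templates(void)
{
	/* template for physical ports, including vport_create/vport_delete */
	example_fc_tt = fc_attach_transport(&efct_xport_functions);
	if (!example_fc_tt)
		return -ENOMEM;

	/* template assigned to the Scsi_Host of each NPIV vport */
	example_vport_fc_tt = fc_attach_transport(&efct_vport_functions);
	if (!example_vport_fc_tt) {
		fc_release_transport(example_fc_tt);
		example_fc_tt = NULL;
		return -ENOMEM;
	}

	return 0;
}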

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/scsi/elx/efct/efct_xport.c | 496 +++++++++++++++++++++++++++++++++++++
 1 file changed, 496 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index fef7f427dbbf..cf2824f5f6a0 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -812,3 +812,499 @@ int efct_scsi_del_device(struct efct *efct)
 	}
 	return EFC_SUCCESS;
 }
+
+static void
+efct_get_host_port_id(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	struct efc_sli_port *sport;
+
+	if (efc->domain && efc->domain->sport) {
+		sport = efc->domain->sport;
+		fc_host_port_id(shost) = sport->fc_id;
+	}
+}
+
+static void
+efct_get_host_port_type(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	struct efc_sli_port *sport;
+	int type = FC_PORTTYPE_UNKNOWN;
+
+	if (efc->domain && efc->domain->sport) {
+		if (efc->domain->is_loop) {
+			type = FC_PORTTYPE_LPORT;
+		} else {
+			sport = efc->domain->sport;
+			if (sport->is_vport)
+				type = FC_PORTTYPE_NPIV;
+			else if (sport->topology == EFC_SPORT_TOPOLOGY_P2P)
+				type = FC_PORTTYPE_PTP;
+			else if (sport->topology == EFC_SPORT_TOPOLOGY_UNKNOWN)
+				type = FC_PORTTYPE_UNKNOWN;
+			else
+				type = FC_PORTTYPE_NPORT;
+		}
+	}
+	fc_host_port_type(shost) = type;
+}
+
+static void
+efct_get_host_vport_type(struct Scsi_Host *shost)
+{
+	fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
+}
+
+static void
+efct_get_host_port_state(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+
+	if (efc->domain)
+		fc_host_port_state(shost) = FC_PORTSTATE_ONLINE;
+	else
+		fc_host_port_state(shost) = FC_PORTSTATE_OFFLINE;
+}
+
+static void
+efct_get_host_speed(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	union efct_xport_stats_u speed;
+	u32 fc_speed = FC_PORTSPEED_UNKNOWN;
+	int rc;
+
+	if (efc->domain && efc->domain->sport) {
+		rc = efct_xport_status(efct->xport,
+				       EFCT_XPORT_LINK_SPEED, &speed);
+		if (rc == 0) {
+			switch (speed.value) {
+			case 1000:
+				fc_speed = FC_PORTSPEED_1GBIT;
+				break;
+			case 2000:
+				fc_speed = FC_PORTSPEED_2GBIT;
+				break;
+			case 4000:
+				fc_speed = FC_PORTSPEED_4GBIT;
+				break;
+			case 8000:
+				fc_speed = FC_PORTSPEED_8GBIT;
+				break;
+			case 10000:
+				fc_speed = FC_PORTSPEED_10GBIT;
+				break;
+			case 16000:
+				fc_speed = FC_PORTSPEED_16GBIT;
+				break;
+			case 32000:
+				fc_speed = FC_PORTSPEED_32GBIT;
+				break;
+			}
+		}
+	}
+	fc_host_speed(shost) = fc_speed;
+}
+
+static void
+efct_get_host_fabric_name(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+
+	if (efc->domain) {
+		struct fc_els_flogi  *sp =
+			(struct fc_els_flogi  *)
+				efc->domain->flogi_service_params;
+
+		fc_host_fabric_name(shost) = be64_to_cpu(sp->fl_wwnn);
+	}
+}
+
+static struct fc_host_statistics *
+efct_get_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	union efct_xport_stats_u stats;
+	struct efct_xport *xport = efct->xport;
+	u32 rc = 1;
+
+	rc = efct_xport_status(xport, EFCT_XPORT_LINK_STATISTICS, &stats);
+	if (rc != 0) {
+		pr_err("efct_xport_status returned nonzero: %d\n", rc);
+		return NULL;
+	}
+
+	vport->fc_host_stats.loss_of_sync_count =
+		stats.stats.link_stats.loss_of_sync_error_count;
+	vport->fc_host_stats.link_failure_count =
+		stats.stats.link_stats.link_failure_error_count;
+	vport->fc_host_stats.prim_seq_protocol_err_count =
+		stats.stats.link_stats.primitive_sequence_error_count;
+	vport->fc_host_stats.invalid_tx_word_count =
+		stats.stats.link_stats.invalid_transmission_word_error_count;
+	vport->fc_host_stats.invalid_crc_count =
+		stats.stats.link_stats.crc_error_count;
+	/* mbox returns kbyte counts; convert to 4-byte words (x 256) */
+	vport->fc_host_stats.tx_words =
+		stats.stats.host_stats.transmit_kbyte_count * 256;
+	/* mbox returns kbyte counts; convert to 4-byte words (x 256) */
+	vport->fc_host_stats.rx_words =
+		stats.stats.host_stats.receive_kbyte_count * 256;
+	vport->fc_host_stats.tx_frames =
+		stats.stats.host_stats.transmit_frame_count;
+	vport->fc_host_stats.rx_frames =
+		stats.stats.host_stats.receive_frame_count;
+
+	vport->fc_host_stats.fcp_input_requests =
+			xport->fcp_stats.input_requests;
+	vport->fc_host_stats.fcp_output_requests =
+			xport->fcp_stats.output_requests;
+	vport->fc_host_stats.fcp_output_megabytes =
+			xport->fcp_stats.output_bytes >> 20;
+	vport->fc_host_stats.fcp_input_megabytes =
+			xport->fcp_stats.input_bytes >> 20;
+	vport->fc_host_stats.fcp_control_requests =
+			xport->fcp_stats.control_requests;
+
+	return &vport->fc_host_stats;
+}
+
+static void
+efct_reset_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	/* argument has no purpose for this action */
+	union efct_xport_stats_u dummy;
+	u32 rc = 0;
+
+	rc = efct_xport_status(efct->xport, EFCT_XPORT_LINK_STAT_RESET, &dummy);
+	if (rc != 0)
+		pr_err("efct_xport_status returned nonzero: %d\n", rc);
+}
+
+static void
+efct_get_starget_port_id(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_get_starget_node_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_get_starget_port_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_set_vport_symbolic_name(struct fc_vport *fc_vport)
+{
+	pr_err("%s\n", __func__);
+}
+
+/*
+ * Bring the link down gracefully and then bring it back up. The firmware
+ * re-initializes the Fibre Channel interface as required; no LIP is
+ * actually issued.
+ */
+static int
+efct_issue_lip(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport =
+			shost ? (struct efct_vport *)shost->hostdata : NULL;
+	struct efct *efct = vport ? vport->efct : NULL;
+
+	if (!shost || !vport || !efct) {
+		pr_err("%s: shost=%p vport=%p efct=%p\n", __func__,
+		       shost, vport, efct);
+		return -EPERM;
+	}
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_OFFLINE))
+		efc_log_test(efct, "EFCT_XPORT_PORT_OFFLINE failed\n");
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE))
+		efc_log_test(efct, "EFCT_XPORT_PORT_ONLINE failed\n");
+
+	return EFC_SUCCESS;
+}
+
+struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return NULL;
+	}
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport *)shost->hostdata;
+	vport->efct = efct;
+	vport->is_vport = true;
+
+	shost->can_queue = efct->hw.config.n_io;
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/* only accept as many SGEs (from the midlayer) as we pre-registered */
+	shost->sg_tablesize = sli_get_max_sgl(&efct->hw.sli);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_vport_fc_tt;
+	efc_log_debug(efct, "vport transport template=%p\n",
+		       efct_vport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, dev, &efct->pcidev->dev);
+	if (error) {
+		efc_log_test(efct, "failed scsi_add_host_with_dma\n");
+		scsi_host_put(shost);
+		return NULL;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		 "Emulex %s FV%s DV%s", efct->model, efct->hw.sli.fw_name[0],
+		 EFCT_DRIVER_VERSION);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+	vport->shost = shost;
+
+	return vport;
+}
+
+int efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost)
+{
+	if (shost) {
+		efc_log_debug(efct,
+			       "Unregistering vport with Transport Layer\n");
+		efct_xport_remove_host(shost);
+		efc_log_debug(efct, "Unregistering vport with SCSI Midlayer\n");
+		scsi_remove_host(shost);
+		scsi_host_put(shost);
+		return EFC_SUCCESS;
+	}
+	return EFC_FAIL;
+}
+
+static int
+efct_vport_create(struct fc_vport *fc_vport, bool disable)
+{
+	struct Scsi_Host *shost = fc_vport ? fc_vport->shost : NULL;
+	struct efct_vport *pport = shost ?
+					(struct efct_vport *)shost->hostdata :
+					NULL;
+	struct efct *efct = pport ? pport->efct : NULL;
+	struct efct_vport *vport = NULL;
+
+	if (!fc_vport || !shost || !efct)
+		goto fail;
+
+	vport = efct_scsi_new_vport(efct, &fc_vport->dev);
+	if (!vport) {
+		efc_log_err(efct, "failed to create vport\n");
+		goto fail;
+	}
+
+	vport->fc_vport = fc_vport;
+	vport->npiv_wwpn = fc_vport->port_name;
+	vport->npiv_wwnn = fc_vport->node_name;
+	fc_host_node_name(vport->shost) = vport->npiv_wwnn;
+	fc_host_port_name(vport->shost) = vport->npiv_wwpn;
+	*(struct efct_vport **)fc_vport->dd_data = vport;
+
+	return EFC_SUCCESS;
+
+fail:
+	return EFC_FAIL;
+}
+
+static int
+efct_vport_delete(struct fc_vport *fc_vport)
+{
+	struct efct_vport *vport = *(struct efct_vport **)fc_vport->dd_data;
+	struct Scsi_Host *shost = vport ? vport->shost : NULL;
+	struct efct *efct = vport ? vport->efct : NULL;
+	int rc = -1;
+
+	rc = efct_scsi_del_vport(efct, shost);
+
+	if (rc)
+		pr_err("%s: vport delete failed\n", __func__);
+
+	return rc;
+}
+
+static int
+efct_vport_disable(struct fc_vport *fc_vport, bool disable)
+{
+	return EFC_SUCCESS;
+}
+
+static struct fc_function_template efct_xport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_port_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+	.vport_disable = efct_vport_disable,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+	.vport_create = efct_vport_create,
+	.vport_delete = efct_vport_delete,
+};
+
+static struct fc_function_template efct_vport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_vport_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+};
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (28 preceding siblings ...)
  2020-04-12  3:33 ` [PATCH v3 29/31] elx: efct: scsi_transport_fc host interface support James Smart
@ 2020-04-12  3:33 ` James Smart
  2020-04-16 10:02   ` Hannes Reinecke
  2020-04-16 13:15   ` Daniel Wagner
  2020-04-12  3:33 ` [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
  30 siblings, 2 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:33 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This patch completes the efct driver population.

This patch adds the efct driver Kconfig and Makefile.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v3:
  Use SPDX license
  Remove utils.c from makefile
---
 drivers/scsi/elx/Kconfig  |  9 +++++++++
 drivers/scsi/elx/Makefile | 18 ++++++++++++++++++
 2 files changed, 27 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile

diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
new file mode 100644
index 000000000000..831daea7a951
--- /dev/null
+++ b/drivers/scsi/elx/Kconfig
@@ -0,0 +1,9 @@
+config SCSI_EFCT
+	tristate "Emulex Fibre Channel Target"
+	depends on PCI && SCSI
+	depends on TARGET_CORE
+	depends on SCSI_FC_ATTRS
+	select CRC_T10DIF
+	help
+	  The efct driver provides enhanced SCSI Target Mode
+	  support for specific SLI-4 adapters.
diff --git a/drivers/scsi/elx/Makefile b/drivers/scsi/elx/Makefile
new file mode 100644
index 000000000000..77f06b962403
--- /dev/null
+++ b/drivers/scsi/elx/Makefile
@@ -0,0 +1,18 @@
+#// SPDX-License-Identifier: GPL-2.0
+#/*
+# * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
+# * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+# */
+
+
+obj-$(CONFIG_SCSI_EFCT) := efct.o
+
+efct-objs := efct/efct_driver.o efct/efct_io.o efct/efct_scsi.o efct/efct_els.o \
+	     efct/efct_xport.o efct/efct_hw.o efct/efct_hw_queues.o \
+	     efct/efct_lio.o efct/efct_unsol.o
+
+efct-objs += libefc/efc_domain.o libefc/efc_fabric.o libefc/efc_node.o \
+	     libefc/efc_sport.o libefc/efc_device.o \
+	     libefc/efc_lib.o libefc/efc_sm.o
+
+efct-objs += libefc_sli/sli4.o
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process
  2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (29 preceding siblings ...)
  2020-04-12  3:33 ` [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2020-04-12  3:33 ` James Smart
  2020-04-12  6:16     ` kbuild test robot
                     ` (2 more replies)
  30 siblings, 3 replies; 124+ messages in thread
From: James Smart @ 2020-04-12  3:33 UTC (permalink / raw)
  To: linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, James Smart, Ram Vegesna

This final patch ties the efct driver into the kernel build, adding the
Kconfig and Makefile linkage in the drivers/scsi directory.

Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/scsi/Kconfig  | 2 ++
 drivers/scsi/Makefile | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index b5be6f43ec3f..e476eaad6f49 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1168,6 +1168,8 @@ config SCSI_LPFC_DEBUG_FS
 	  This makes debugging information from the lpfc driver
 	  available via the debugfs filesystem.
 
+source "drivers/scsi/elx/Kconfig"
+
 config SCSI_SIM710
 	tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
 	depends on EISA && SCSI
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c00e3dd57990..844db573283c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
 obj-$(CONFIG_SCSI_QLA_FC)	+= qla2xxx/
 obj-$(CONFIG_SCSI_QLA_ISCSI)	+= libiscsi.o qla4xxx/
 obj-$(CONFIG_SCSI_LPFC)		+= lpfc/
+obj-$(CONFIG_SCSI_EFCT)		+= elx/
 obj-$(CONFIG_SCSI_BFA_FC)	+= bfa/
 obj-$(CONFIG_SCSI_CHELSIO_FCOE)	+= csiostor/
 obj-$(CONFIG_SCSI_DMX3191D)	+= dmx3191d.o
-- 
2.16.4


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-12  3:32 ` [PATCH v3 24/31] elx: efct: LIO backend interface routines James Smart
@ 2020-04-12  4:57   ` Bart Van Assche
  2020-04-16 11:48     ` Daniel Wagner
  2020-04-22  4:20     ` James Smart
  2020-04-16  8:02   ` Hannes Reinecke
  2020-04-16 12:34   ` Daniel Wagner
  2 siblings, 2 replies; 124+ messages in thread
From: Bart Van Assche @ 2020-04-12  4:57 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, herbszt, natechancellor, rdunlap, hare, Ram Vegesna

On 2020-04-11 20:32, James Smart wrote:
> +	return EFC_SUCCESS;
> +}

Redefining 0 is unusual in the Linux kernel. I prefer to see "return 0;"
instead of "return ${DRIVER_NAME}_SUCCESS;".

> +static int  efct_lio_tgt_session_data(struct efct *efct, u64 wwpn,
> +				      char *buf, int size)
> +{
> +	struct efc_sli_port *sport = NULL;
> +	struct efc_node *node = NULL;
> +	struct efc *efc = efct->efcport;
> +	u16 loop_id = 0;
> +	int off = 0, rc = 0;
> +
> +	if (!efc->domain) {
> +		efc_log_err(efct, "failed to find efct/domain\n");
> +		return EFC_FAIL;
> +	}
> +
> +	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
> +		if (sport->wwpn != wwpn)
> +			continue;
> +		list_for_each_entry(node, &sport->node_list,
> +				    list_entry) {
> +			/* Dump only remote NPORT sessions */
> +			if (!efct_lio_node_is_initiator(node))
> +				continue;
> +
> +			rc = snprintf(buf + off, size - off,
> +				"0x%016llx,0x%08x,0x%04x\n",
> +				get_unaligned_be64(node->wwpn),
> +				node->rnode.fc_id, loop_id);
> +			if (rc < 0)
> +				break;
> +			off += rc;
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}

This is one of the most unfriendly debugfs data formats I have seen so
far: information about all sessions is dumped into one huge debugfs
attribute.

Is information about active sessions useful for other LIO target
drivers? Wasn't it promised that this functionality would be moved into
the LIO core instead of defining it for the efct driver only?

> +static int efct_debugfs_session_open(struct inode *inode, struct file *filp)
> +{
> +	struct efct_lio_sport *sport = inode->i_private;
> +	int size = 17 * PAGE_SIZE; /* 34 byte per session*2048 sessions */
> +
> +	if (!(filp->f_mode & FMODE_READ)) {
> +		filp->private_data = sport;
> +		return EFC_SUCCESS;
> +	}
> +
> +	filp->private_data = kmalloc(size, GFP_KERNEL);
> +	if (!filp->private_data)
> +		return -ENOMEM;
> +
> +	memset(filp->private_data, 0, size);
> +	efct_lio_tgt_session_data(sport->efct, sport->wwpn, filp->private_data,
> +				  size);
> +	return EFC_SUCCESS;
> +}

kmalloc() + memset() can be changed into kzalloc().

The above code allocates 68 KB of physically contiguous memory? Kernel
code should not rely on higher-order page allocations unless there is no
other choice.

Additionally, I see that the amount of memory allocated is independent
of the number of sessions. I think there are better approaches.
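
For instance, something seq_file based would size the output by the number
of sessions and avoid the large contiguous buffer altogether. Untested
sketch with made-up names (assumes <linux/seq_file.h> and <linux/debugfs.h>):

	static int efct_lio_sessions_show(struct seq_file *m, void *unused)
	{
		struct efct_lio_sport *sport = m->private;
		struct efc *efc = sport->efct->efcport;
		struct efc_sli_port *p;
		struct efc_node *node;

		if (!efc->domain)
			return 0;

		list_for_each_entry(p, &efc->domain->sport_list, list_entry) {
			if (p->wwpn != sport->wwpn)
				continue;
			list_for_each_entry(node, &p->node_list, list_entry) {
				if (!efct_lio_node_is_initiator(node))
					continue;
				seq_printf(m, "0x%016llx,0x%08x,0x%04x\n",
					   get_unaligned_be64(node->wwpn),
					   node->rnode.fc_id, 0 /* loop_id */);
			}
		}
		return 0;
	}
	DEFINE_SHOW_ATTRIBUTE(efct_lio_sessions);

	/* registration, e.g.:
	 *   debugfs_create_file("sessions", 0444, parent_dir, sport,
	 *			 &efct_lio_sessions_fops);
	 */

If the fixed buffer is kept instead, kzalloc() (or kvzalloc()/kvfree() to
address the higher-order allocation) would at least be the simpler form.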

> +
> +	lio_wq = create_singlethread_workqueue("efct_lio_worker");
> +	if (!lio_wq) {
> +		efc_log_err(efct, "workqueue create failed\n");
> +		return -ENOMEM;
> +	}

Is any work queued onto lio_wq that needs to be serialized against other
work queued onto the same queue? If not, can one of the system
workqueues be used, e.g. system_wq?
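
For example (untested; the work item and handler names are placeholders):

	INIT_WORK(&io->lio_work, efct_lio_work_fn);
	queue_work(system_wq, &io->lio_work);	/* or simply schedule_work() */

which would also drop the create_singlethread_workqueue() /
destroy_workqueue() pairing.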

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process
  2020-04-12  3:33 ` [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
@ 2020-04-12  6:16     ` kbuild test robot
  2020-04-12  7:56     ` kbuild test robot
  2020-04-16 13:15   ` Daniel Wagner
  2 siblings, 0 replies; 124+ messages in thread
From: kbuild test robot @ 2020-04-12  6:16 UTC (permalink / raw)
  To: James Smart
  Cc: kbuild-all, linux-scsi, dwagner, maier, bvanassche, herbszt,
	natechancellor, rdunlap, hare, James Smart, Ram Vegesna

[-- Attachment #1: Type: text/plain, Size: 5385 bytes --]

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on scsi/for-next linus/master v5.6 next-20200411]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20200412-114125
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
config: parisc-allyesconfig (attached as .config)
compiler: hppa-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=9.3.0 make.cross ARCH=parisc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from include/uapi/linux/ioctl.h:5,
                    from include/uapi/linux/fs.h:14,
                    from include/linux/fs.h:45,
                    from include/linux/huge_mm.h:8,
                    from include/linux/mm.h:567,
                    from include/linux/scatterlist.h:8,
                    from include/linux/dmapool.h:14,
                    from include/linux/pci.h:1414,
                    from drivers/scsi/elx/efct/efct_driver.h:23,
                    from drivers/scsi/elx/efct/efct_driver.c:7:
>> arch/parisc/include/uapi/asm/ioctl.h:40: warning: "_IOC_WRITE" redefined
      40 | #define _IOC_WRITE 2U
         | 
   In file included from include/asm-generic/ioctl.h:5,
                    from drivers/scsi/elx/efct/efct_driver.h:20,
                    from drivers/scsi/elx/efct/efct_driver.c:7:
   include/uapi/asm-generic/ioctl.h:62: note: this is the location of the previous definition
      62 | # define _IOC_WRITE 1U
         | 
   In file included from include/uapi/linux/ioctl.h:5,
                    from include/uapi/linux/fs.h:14,
                    from include/linux/fs.h:45,
                    from include/linux/huge_mm.h:8,
                    from include/linux/mm.h:567,
                    from include/linux/scatterlist.h:8,
                    from include/linux/dmapool.h:14,
                    from include/linux/pci.h:1414,
                    from drivers/scsi/elx/efct/efct_driver.h:23,
                    from drivers/scsi/elx/efct/efct_driver.c:7:
>> arch/parisc/include/uapi/asm/ioctl.h:41: warning: "_IOC_READ" redefined
      41 | #define _IOC_READ 1U
         | 
   In file included from include/asm-generic/ioctl.h:5,
                    from drivers/scsi/elx/efct/efct_driver.h:20,
                    from drivers/scsi/elx/efct/efct_driver.c:7:
   include/uapi/asm-generic/ioctl.h:66: note: this is the location of the previous definition
      66 | # define _IOC_READ 2U
         | 
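
The duplicate definitions come from drivers/scsi/elx/efct/efct_driver.h:20
including <asm-generic/ioctl.h> directly (see the include chain above),
which then clashes with the arch-specific <asm/ioctl.h> pulled in through
<linux/pci.h>. A likely fix, shown only as a sketch:

	/* drivers/scsi/elx/efct/efct_driver.h, line 20 */
	#include <linux/ioctl.h>	/* instead of <asm-generic/ioctl.h>;
					 * picks up the arch <asm/ioctl.h> */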

vim +/_IOC_WRITE +40 arch/parisc/include/uapi/asm/ioctl.h

^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  25  
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  26  /* ioctl command encoding: 32 bits total, command in lower 16 bits,
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  27   * size of the parameter structure in the lower 14 bits of the
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  28   * upper 16 bits.
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  29   * Encoding the size of the parameter structure in the ioctl request
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  30   * is useful for catching programs compiled with old versions
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  31   * and to avoid overwriting user space outside the user buffer area.
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  32   * The highest 2 bits are reserved for indicating the ``access mode''.
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  33   * NOTE: This limits the max parameter size to 16kB -1 !
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  34   */
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  35  
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  36  /*
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  37   * Direction bits.
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  38   */
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  39  #define _IOC_NONE	0U
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16 @40  #define _IOC_WRITE	2U
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16 @41  #define _IOC_READ	1U
^1da177e4c3f41 include/asm-parisc/ioctl.h Linus Torvalds 2005-04-16  42  

:::::: The code at line 40 was first introduced by commit
:::::: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 Linux-2.6.12-rc2

:::::: TO: Linus Torvalds <torvalds@ppc970.osdl.org>
:::::: CC: Linus Torvalds <torvalds@ppc970.osdl.org>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 60607 bytes --]

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process
  2020-04-12  3:33 ` [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
@ 2020-04-12  7:56     ` kbuild test robot
  2020-04-12  7:56     ` kbuild test robot
  2020-04-16 13:15   ` Daniel Wagner
  2 siblings, 0 replies; 124+ messages in thread
From: kbuild test robot @ 2020-04-12  7:56 UTC (permalink / raw)
  To: James Smart
  Cc: kbuild-all, linux-scsi, dwagner, maier, bvanassche, herbszt,
	natechancellor, rdunlap, hare, James Smart, Ram Vegesna

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on scsi/for-next linus/master v5.6 next-20200412]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20200412-114125
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>


cppcheck warnings: (new ones prefixed by >>)

>> drivers/scsi/elx/efct/efct_driver.c:829:5: warning: Redundant initialization for 'rc'. The initialized value is overwritten before it is read. [redundantInitialization]
    rc = efct_device_init();
       ^
   drivers/scsi/elx/efct/efct_driver.c:827:9: note: rc is initialized
    int rc = -ENODEV;
           ^
   drivers/scsi/elx/efct/efct_driver.c:829:5: note: rc is overwritten
    rc = efct_device_init();
       ^
   drivers/scsi/elx/efct/efct_driver.c:255:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = 0, i;
        ^
   drivers/scsi/elx/efct/efct_driver.c:255:14: warning: The scope of the variable 'i' can be reduced. [variableScope]
    int rc = 0, i;
                ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_driver.c:255:9: warning: Variable 'rc' is assigned a value that is never used. [unreadVariable]
    int rc = 0, i;
           ^
--
>> drivers/scsi/elx/efct/efct_scsi.c:869:34: warning: Either the condition '!fcprsp' is redundant or there is pointer arithmetic with NULL pointer. [nullPointerArithmeticRedundantCheck]
     u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
                                    ^
   drivers/scsi/elx/efct/efct_scsi.c:871:7: note: Assuming that condition '!fcprsp' is not redundant
     if (!fcprsp) {
         ^
   drivers/scsi/elx/efct/efct_scsi.c:868:48: note: Assignment from 'io->rspbuf.virt'
     struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
                                                  ^
   drivers/scsi/elx/efct/efct_scsi.c:869:34: note: Null pointer addition
     u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
                                    ^
   drivers/scsi/elx/efct/efct_scsi.c:467:21: warning: The scope of the variable 'hio' can be reduced. [variableScope]
    struct efct_hw_io *hio;
                       ^
   drivers/scsi/elx/efct/efct_scsi.c:470:6: warning: The scope of the variable 'dispatch' can be reduced. [variableScope]
    int dispatch;
        ^
>> drivers/scsi/elx/efct/efct_scsi.c:869:34: warning: 'io->rspbuf.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
     u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
                                    ^
   drivers/scsi/elx/efct/efct_scsi.c:1045:28: warning: 'io->rspbuf.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rspinfo = io->rspbuf.virt + sizeof(*fcprsp);
                              ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_scsi.c:482:10: warning: Variable 'status' is assigned a value that is never used. [unreadVariable]
     status = 0;
            ^
--
   drivers/scsi/elx/efct/efct_els.c:1767:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
>> drivers/scsi/elx/efct/efct_els.c:980:32: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rscn_page = els->els_req.virt + sizeof(*req);
                                  ^
   drivers/scsi/elx/efct/efct_els.c:1471:28: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rftid = els->els_req.virt + sizeof(*ct);
                              ^
   drivers/scsi/elx/efct/efct_els.c:1515:28: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rffid = els->els_req.virt + sizeof(*ct);
                              ^
   drivers/scsi/elx/efct/efct_els.c:1567:28: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    gidpt = els->els_req.virt + sizeof(*ct);
                              ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_els.c:1600:8: warning: Variable 's_id' is assigned a value that is never used. [unreadVariable]
     s_id = U32_MAX;
          ^
--
>> drivers/scsi/elx/efct/efct_xport.c:341:6: warning: Variable 'rc' is reassigned a value before the old one has been used. [redundantAssignment]
     rc = efct_hw_get_host_stats(&efct->hw, 1,
        ^
   drivers/scsi/elx/efct/efct_xport.c:329:6: note: rc is assigned
     rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
        ^
   drivers/scsi/elx/efct/efct_xport.c:341:6: note: rc is overwritten
     rc = efct_hw_get_host_stats(&efct->hw, 1,
        ^
   drivers/scsi/elx/efct/efct_xport.c:619:17: warning: The scope of the variable 'timeout' can be reduced. [variableScope]
     unsigned long timeout;
                   ^
   drivers/scsi/elx/efct/efct_xport.c:836:23: warning: The scope of the variable 'sport' can be reduced. [variableScope]
    struct efc_sli_port *sport;
                         ^
   drivers/scsi/elx/efct/efct_xport.c:884:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
>> drivers/scsi/elx/efct/efct_xport.c:678:16: warning: Local variable 'efct' shadows outer variable [shadowVariable]
     struct efct *efct;
                  ^
   drivers/scsi/elx/efct/efct_xport.c:590:15: note: Shadowed declaration
    struct efct *efct = NULL;
                 ^
   drivers/scsi/elx/efct/efct_xport.c:678:16: note: Shadow variable
     struct efct *efct;
                  ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
--
   drivers/scsi/elx/efct/efct_hw.c:871:16: warning: The scope of the variable 'rq' can be reduced. [variableScope]
    struct hw_rq *rq;
                  ^
   drivers/scsi/elx/efct/efct_hw.c:1224:16: warning: The scope of the variable 'rq' can be reduced. [variableScope]
    struct hw_rq *rq;
                  ^
   drivers/scsi/elx/efct/efct_hw.c:1325:16: warning: The scope of the variable 'rq' can be reduced. [variableScope]
    struct hw_rq *rq;
                  ^
   drivers/scsi/elx/efct/efct_hw.c:2367:7: warning: The scope of the variable 'status' can be reduced. [variableScope]
    int  status;
         ^
>> drivers/scsi/elx/efct/efct_hw.c:480:9: warning: Local variable 'arg' shadows outer argument [shadowArgument]
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:383:29: note: Shadowed declaration
   efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
                               ^
   drivers/scsi/elx/efct/efct_hw.c:480:9: note: Shadow variable
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1862:9: warning: Local variable 'arg' shadows outer argument [shadowArgument]
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1842:32: note: Shadowed declaration
   efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
                                  ^
   drivers/scsi/elx/efct/efct_hw.c:1862:9: note: Shadow variable
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1881:9: warning: Local variable 'arg' shadows outer argument [shadowArgument]
     void *arg = io->abort_arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1842:32: note: Shadowed declaration
   efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
                                  ^
   drivers/scsi/elx/efct/efct_hw.c:1881:9: note: Shadow variable
     void *arg = io->abort_arg;
           ^
>> drivers/scsi/elx/efct/efct_hw.c:1300:23: warning: 'hw->seq_pool' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
      seq = hw->seq_pool + idx * sizeof(*seq);
                         ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_hw.c:633:8: warning: Variable 'i' is assigned a value that is never used. [unreadVariable]
    u32 i = 0, io_index = 0;
          ^
   drivers/scsi/elx/efct/efct_hw.c:2056:8: warning: Variable 'i' is assigned a value that is never used. [unreadVariable]
    u32 i = 0;
          ^
--
   drivers/scsi/elx/libefc/efc_domain.c:392:23: warning: The scope of the variable 'sport' can be reduced. [variableScope]
    struct efc_sli_port *sport;
                         ^
   drivers/scsi/elx/libefc/efc_domain.c:499:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = 0;
        ^
   drivers/scsi/elx/libefc/efc_domain.c:546:7: warning: The scope of the variable 'rc' can be reduced. [variableScope]
     int rc;
         ^
   drivers/scsi/elx/libefc/efc_domain.c:699:7: warning: The scope of the variable 'rc' can be reduced. [variableScope]
     int rc;
         ^
   drivers/scsi/elx/libefc/efc_domain.c:866:7: warning: The scope of the variable 'rc' can be reduced. [variableScope]
     int rc;
         ^
>> drivers/scsi/elx/libefc/efc_domain.c:546:7: warning: Local variable 'rc' shadows outer variable [shadowVariable]
     int rc;
         ^
   drivers/scsi/elx/libefc/efc_domain.c:499:6: note: Shadowed declaration
    int rc = 0;
        ^
   drivers/scsi/elx/libefc/efc_domain.c:546:7: note: Shadow variable
     int rc;
         ^
--
>> drivers/scsi/elx/libefc/efc_fabric.c:1274:14: warning: Variable 'remote_wwpn' is reassigned a value before the old one has been used. [redundantAssignment]
    remote_wwpn = efc_get_wwpn(remote_sp);
                ^
   drivers/scsi/elx/libefc/efc_fabric.c:1270:14: note: remote_wwpn is assigned
    remote_wwpn = efc_get_wwpn(remote_sp);
                ^
   drivers/scsi/elx/libefc/efc_fabric.c:1274:14: note: remote_wwpn is overwritten
    remote_wwpn = efc_get_wwpn(remote_sp);
                ^
   drivers/scsi/elx/libefc/efc_fabric.c:26:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
   drivers/scsi/elx/libefc/efc_fabric.c:479:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
   drivers/scsi/elx/libefc/efc_fabric.c:1502:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
>> drivers/scsi/elx/libefc/efc_fabric.c:726:38: warning: 'data' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    struct fc_gid_pn_resp *gidpt = data + sizeof(*hdr);
                                        ^
--
>> drivers/scsi/elx/efct/efct_lio.c:1663:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(generate_node_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1664:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1665:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1666:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1667:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1719:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1720:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1721:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1722:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1723:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
   ^
   drivers/scsi/elx/efct/efct_lio.c:93:6: warning: The scope of the variable 'ret' can be reduced. [variableScope]
    int ret;
        ^
   drivers/scsi/elx/efct/efct_lio.c:150:6: warning: The scope of the variable 'ret' can be reduced. [variableScope]
    int ret;
        ^
>> drivers/scsi/elx/efct/efct_lio.c:480:22: warning: Passing NULL after the last typed argument to a variadic function leads to undefined behaviour. [varFuncNullUB]
     EFCT_XPORT_SHUTDOWN, NULL);
                        ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_lio.c:461:2: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
    ^
--
>> drivers/scsi/elx/libefc/efc_node.c:887:2: warning: %llX in format string (no. 1) requires 'unsigned long long' but the argument type is 'unsigned long'. [invalidPrintfArgType_uint]
    snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
    ^
--
   drivers/scsi/elx/efct/efct_unsol.c:28:16: warning: The scope of the variable 'flags' can be reduced. [variableScope]
    unsigned long flags = 0;
                  ^
   drivers/scsi/elx/efct/efct_unsol.c:210:26: warning: The scope of the variable 'frame' can be reduced. [variableScope]
    struct efc_hw_sequence *frame;
                            ^
   drivers/scsi/elx/efct/efct_unsol.c:533:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = 0;
        ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_unsol.c:533:9: warning: Variable 'rc' is assigned a value that is never used. [unreadVariable]
    int rc = 0;
           ^
--
>> drivers/scsi/elx/libefc/efc_sport.c:101:2: warning: %llX in format string (no. 1) requires 'unsigned long long' but the argument type is 'unsigned long'. [invalidPrintfArgType_uint]
    snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
    ^
   drivers/scsi/elx/libefc/efc_sport.c:422:20: warning: The scope of the variable 'fabric' can be reduced. [variableScope]
     struct efc_node *fabric;
                      ^
--
   drivers/scsi/elx/libefc/efc_device.c:485:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
   drivers/scsi/elx/libefc/efc_device.c:1486:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = EFC_SCSI_CALL_COMPLETE;
        ^
   drivers/scsi/elx/libefc/efc_device.c:1487:6: warning: The scope of the variable 'rc_2' can be reduced. [variableScope]
    int rc_2 = EFC_SCSI_CALL_COMPLETE;
        ^
>> drivers/scsi/elx/libefc/efc_device.c:372:31: warning: 'prli' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
                                 ^
>> drivers/scsi/elx/libefc/efc_device.c:1193:6: warning: 'cbdata->payload->dma.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
        + sizeof(struct fc_els_prli);
        ^
   drivers/scsi/elx/libefc/efc_device.c:1397:6: warning: 'cbdata->payload->dma.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
        + sizeof(struct fc_els_prli);
        ^
--
>> drivers/scsi/elx/libefc_sli/sli4.c:1372:6: warning: Condition 'valid' is always true [knownConditionTrueFalse]
    if (valid && clear) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1367:6: note: Assuming that condition '!valid' is not redundant
    if (!valid) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1372:6: note: Condition 'valid' is always true
    if (valid && clear) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1425:6: warning: Condition 'valid' is always true [knownConditionTrueFalse]
    if (valid && clear) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1420:6: note: Assuming that condition '!valid' is not redundant
    if (!valid) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1425:6: note: Condition 'valid' is always true
    if (valid && clear) {
        ^
>> drivers/scsi/elx/libefc_sli/sli4.c:4018:2: warning: Either the condition 'extent' is redundant or there is possible null pointer dereference: extent. [nullPointerRedundantCheck]
    extent->resource_type = cpu_to_le16(rtype);
    ^
   drivers/scsi/elx/libefc_sli/sli4.c:4011:6: note: Assuming that condition 'extent' is not redundant
    if (extent)
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:4018:2: note: Null pointer dereference
    extent->resource_type = cpu_to_le16(rtype);
    ^
   drivers/scsi/elx/libefc_sli/sli4.c:2175:19: warning: The scope of the variable 'bptr' can be reduced. [variableScope]
    struct sli4_bde *bptr;
                     ^
>> drivers/scsi/elx/libefc_sli/sli4.c:3068:35: warning: Local variable 'rcqe' shadows outer variable [shadowVariable]
     struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
                                     ^
   drivers/scsi/elx/libefc_sli/sli4.c:2995:29: note: Shadowed declaration
    struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
                               ^
   drivers/scsi/elx/libefc_sli/sli4.c:3068:35: note: Shadow variable
     struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
                                     ^
>> drivers/scsi/elx/libefc_sli/sli4.c:3069:7: warning: Local variable 'rq_element_index' shadows outer variable [shadowVariable]
     u16 rq_element_index =
         ^
   drivers/scsi/elx/libefc_sli/sli4.c:2999:6: note: Shadowed declaration
    u16 rq_element_index;
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:3069:7: note: Shadow variable
     u16 rq_element_index =
         ^
>> drivers/scsi/elx/libefc_sli/sli4.c:4487:8: warning: Local variable 'i' shadows outer variable [shadowVariable]
      u32 i = 0, size = 0;
          ^
   drivers/scsi/elx/libefc_sli/sli4.c:4470:7: note: Shadowed declaration
     u32 i;
         ^
   drivers/scsi/elx/libefc_sli/sli4.c:4487:8: note: Shadow variable
      u32 i = 0, size = 0;
          ^
>> drivers/scsi/elx/libefc_sli/sli4.c:71:7: warning: 'buf' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
     buf += offsetof(struct sli4_cmd_sli_config, payload.embed);
         ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
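
The repeated shiftTooManyBitsSigned warnings all follow the same pattern;
the conventional fix is an unsigned literal, for example (sketch only,
using one of the flagged enumerators):

	SLI4_PORT_CTRL_FDD = (1u << 31),

and the arithOperationsOnVoidPointer warnings go away once the void pointer
is cast before the arithmetic, e.g.:

	u8 *sns_data = (u8 *)io->rspbuf.virt + sizeof(*fcprsp);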

vim +/rc +829 drivers/scsi/elx/efct/efct_driver.c

3569c2ab8265ff James Smart 2020-04-11  823  
3569c2ab8265ff James Smart 2020-04-11  824  static
3569c2ab8265ff James Smart 2020-04-11  825  int __init efct_init(void)
3569c2ab8265ff James Smart 2020-04-11  826  {
3569c2ab8265ff James Smart 2020-04-11  827  	int	rc = -ENODEV;
3569c2ab8265ff James Smart 2020-04-11  828  
3569c2ab8265ff James Smart 2020-04-11 @829  	rc = efct_device_init();
3569c2ab8265ff James Smart 2020-04-11  830  	if (rc) {
3569c2ab8265ff James Smart 2020-04-11  831  		pr_err("efct_device_init failed rc=%d\n", rc);
3569c2ab8265ff James Smart 2020-04-11  832  		return -ENOMEM;
3569c2ab8265ff James Smart 2020-04-11  833  	}
3569c2ab8265ff James Smart 2020-04-11  834  
3569c2ab8265ff James Smart 2020-04-11  835  	rc = pci_register_driver(&efct_pci_driver);
3569c2ab8265ff James Smart 2020-04-11  836  	if (rc)
3569c2ab8265ff James Smart 2020-04-11  837  		goto l1;
3569c2ab8265ff James Smart 2020-04-11  838  
3569c2ab8265ff James Smart 2020-04-11  839  	return rc;
3569c2ab8265ff James Smart 2020-04-11  840  
3569c2ab8265ff James Smart 2020-04-11  841  l1:
3569c2ab8265ff James Smart 2020-04-11  842  	efct_device_shutdown();
3569c2ab8265ff James Smart 2020-04-11  843  	return rc;
3569c2ab8265ff James Smart 2020-04-11  844  }
3569c2ab8265ff James Smart 2020-04-11  845  
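
The redundant initialization flagged at line 829 would just become (sketch
only):

	int rc;

	rc = efct_device_init();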

:::::: The code at line 829 was first introduced by commit
:::::: 3569c2ab8265ffc43780ee31f4c4ef80e37cf0f7 elx: efct: Driver initialization routines

:::::: TO: James Smart <jsmart2021@gmail.com>
:::::: CC: 0day robot <lkp@intel.com>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process
@ 2020-04-12  7:56     ` kbuild test robot
  0 siblings, 0 replies; 124+ messages in thread
From: kbuild test robot @ 2020-04-12  7:56 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 38003 bytes --]

Hi James,

I love your patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on scsi/for-next linus/master v5.6 next-20200412]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/James-Smart/efct-Broadcom-Emulex-FC-Target-driver/20200412-114125
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>


cppcheck warnings: (new ones prefixed by >>)

>> drivers/scsi/elx/efct/efct_driver.c:829:5: warning: Redundant initialization for 'rc'. The initialized value is overwritten before it is read. [redundantInitialization]
    rc = efct_device_init();
       ^
   drivers/scsi/elx/efct/efct_driver.c:827:9: note: rc is initialized
    int rc = -ENODEV;
           ^
   drivers/scsi/elx/efct/efct_driver.c:829:5: note: rc is overwritten
    rc = efct_device_init();
       ^
   drivers/scsi/elx/efct/efct_driver.c:255:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = 0, i;
        ^
   drivers/scsi/elx/efct/efct_driver.c:255:14: warning: The scope of the variable 'i' can be reduced. [variableScope]
    int rc = 0, i;
                ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_driver.c:255:9: warning: Variable 'rc' is assigned a value that is never used. [unreadVariable]
    int rc = 0, i;
           ^
--
>> drivers/scsi/elx/efct/efct_scsi.c:869:34: warning: Either the condition '!fcprsp' is redundant or there is pointer arithmetic with NULL pointer. [nullPointerArithmeticRedundantCheck]
     u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
                                    ^
   drivers/scsi/elx/efct/efct_scsi.c:871:7: note: Assuming that condition '!fcprsp' is not redundant
     if (!fcprsp) {
         ^
   drivers/scsi/elx/efct/efct_scsi.c:868:48: note: Assignment from 'io->rspbuf.virt'
     struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
                                                  ^
   drivers/scsi/elx/efct/efct_scsi.c:869:34: note: Null pointer addition
     u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
                                    ^
   drivers/scsi/elx/efct/efct_scsi.c:467:21: warning: The scope of the variable 'hio' can be reduced. [variableScope]
    struct efct_hw_io *hio;
                       ^
   drivers/scsi/elx/efct/efct_scsi.c:470:6: warning: The scope of the variable 'dispatch' can be reduced. [variableScope]
    int dispatch;
        ^
>> drivers/scsi/elx/efct/efct_scsi.c:869:34: warning: 'io->rspbuf.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
     u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
                                    ^
   drivers/scsi/elx/efct/efct_scsi.c:1045:28: warning: 'io->rspbuf.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rspinfo = io->rspbuf.virt + sizeof(*fcprsp);
                              ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_scsi.c:482:10: warning: Variable 'status' is assigned a value that is never used. [unreadVariable]
     status = 0;
            ^
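
Both efct_scsi.c:869 findings above stem from doing the offset arithmetic
on the void * response buffer before the NULL check. A hedged sketch of
one way to restructure that spot (names taken from the warning context,
error handling is only a placeholder):

	struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
	u8 *sns_data;

	if (!fcprsp) {
		/* bail out before any arithmetic on the buffer */
		return -1;
	}

	/* compute the offset on u8 * (a complete type), not on void * */
	sns_data = (u8 *)io->rspbuf.virt + sizeof(*fcprsp);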
--
   drivers/scsi/elx/efct/efct_els.c:1767:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
>> drivers/scsi/elx/efct/efct_els.c:980:32: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rscn_page = els->els_req.virt + sizeof(*req);
                                  ^
   drivers/scsi/elx/efct/efct_els.c:1471:28: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rftid = els->els_req.virt + sizeof(*ct);
                              ^
   drivers/scsi/elx/efct/efct_els.c:1515:28: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    rffid = els->els_req.virt + sizeof(*ct);
                              ^
   drivers/scsi/elx/efct/efct_els.c:1567:28: warning: 'els->els_req.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    gidpt = els->els_req.virt + sizeof(*ct);
                              ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_els.c:1600:8: warning: Variable 's_id' is assigned a value that is never used. [unreadVariable]
     s_id = U32_MAX;
          ^
--
>> drivers/scsi/elx/efct/efct_xport.c:341:6: warning: Variable 'rc' is reassigned a value before the old one has been used. [redundantAssignment]
     rc = efct_hw_get_host_stats(&efct->hw, 1,
        ^
   drivers/scsi/elx/efct/efct_xport.c:329:6: note: rc is assigned
     rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
        ^
   drivers/scsi/elx/efct/efct_xport.c:341:6: note: rc is overwritten
     rc = efct_hw_get_host_stats(&efct->hw, 1,
        ^
   drivers/scsi/elx/efct/efct_xport.c:619:17: warning: The scope of the variable 'timeout' can be reduced. [variableScope]
     unsigned long timeout;
                   ^
   drivers/scsi/elx/efct/efct_xport.c:836:23: warning: The scope of the variable 'sport' can be reduced. [variableScope]
    struct efc_sli_port *sport;
                         ^
   drivers/scsi/elx/efct/efct_xport.c:884:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
>> drivers/scsi/elx/efct/efct_xport.c:678:16: warning: Local variable 'efct' shadows outer variable [shadowVariable]
     struct efct *efct;
                  ^
   drivers/scsi/elx/efct/efct_xport.c:590:15: note: Shadowed declaration
    struct efct *efct = NULL;
                 ^
   drivers/scsi/elx/efct/efct_xport.c:678:16: note: Shadow variable
     struct efct *efct;
                  ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
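
The redundantAssignment report above usually just means the first call's
return value is dropped on the floor. If that is not intentional, the fix
is simply to look at it before reusing rc; a generic sketch with
hypothetical stand-ins for the two efct_hw_get_*_stats() calls:

	/* hypothetical helpers standing in for the flagged stats calls */
	int get_link_stats(void);
	int get_host_stats(void);

	int gather_stats(void)
	{
		int rc;

		rc = get_link_stats();
		if (rc)
			return rc;	/* do not silently drop the first failure */

		return get_host_stats();
	}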
--
   drivers/scsi/elx/efct/efct_hw.c:871:16: warning: The scope of the variable 'rq' can be reduced. [variableScope]
    struct hw_rq *rq;
                  ^
   drivers/scsi/elx/efct/efct_hw.c:1224:16: warning: The scope of the variable 'rq' can be reduced. [variableScope]
    struct hw_rq *rq;
                  ^
   drivers/scsi/elx/efct/efct_hw.c:1325:16: warning: The scope of the variable 'rq' can be reduced. [variableScope]
    struct hw_rq *rq;
                  ^
   drivers/scsi/elx/efct/efct_hw.c:2367:7: warning: The scope of the variable 'status' can be reduced. [variableScope]
    int  status;
         ^
>> drivers/scsi/elx/efct/efct_hw.c:480:9: warning: Local variable 'arg' shadows outer argument [shadowArgument]
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:383:29: note: Shadowed declaration
   efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
                               ^
   drivers/scsi/elx/efct/efct_hw.c:480:9: note: Shadow variable
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1862:9: warning: Local variable 'arg' shadows outer argument [shadowArgument]
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1842:32: note: Shadowed declaration
   efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
                                  ^
   drivers/scsi/elx/efct/efct_hw.c:1862:9: note: Shadow variable
     void *arg = io->arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1881:9: warning: Local variable 'arg' shadows outer argument [shadowArgument]
     void *arg = io->abort_arg;
           ^
   drivers/scsi/elx/efct/efct_hw.c:1842:32: note: Shadowed declaration
   efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
                                  ^
   drivers/scsi/elx/efct/efct_hw.c:1881:9: note: Shadow variable
     void *arg = io->abort_arg;
           ^
>> drivers/scsi/elx/efct/efct_hw.c:1300:23: warning: 'hw->seq_pool' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
      seq = hw->seq_pool + idx * sizeof(*seq);
                         ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_hw.c:633:8: warning: Variable 'i' is assigned a value that is never used. [unreadVariable]
    u32 i = 0, io_index = 0;
          ^
   drivers/scsi/elx/efct/efct_hw.c:2056:8: warning: Variable 'i' is assigned a value that is never used. [unreadVariable]
    u32 i = 0;
          ^
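
The shadowArgument hits are normally resolved by renaming the inner local
so it no longer hides the callback parameter. A tiny generic sketch of
the pattern (hypothetical types, not the real completion handler):

	struct hw_io {
		void *arg;	/* per-I/O callback argument */
	};

	/* callback parameter stays "arg"; the per-I/O copy gets its own name */
	static void wq_process_io(void *arg, int status)
	{
		struct hw_io *io = arg;
		void *io_arg = io->arg;

		(void)io_arg;
		(void)status;
	}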
--
   drivers/scsi/elx/libefc/efc_domain.c:392:23: warning: The scope of the variable 'sport' can be reduced. [variableScope]
    struct efc_sli_port *sport;
                         ^
   drivers/scsi/elx/libefc/efc_domain.c:499:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = 0;
        ^
   drivers/scsi/elx/libefc/efc_domain.c:546:7: warning: The scope of the variable 'rc' can be reduced. [variableScope]
     int rc;
         ^
   drivers/scsi/elx/libefc/efc_domain.c:699:7: warning: The scope of the variable 'rc' can be reduced. [variableScope]
     int rc;
         ^
   drivers/scsi/elx/libefc/efc_domain.c:866:7: warning: The scope of the variable 'rc' can be reduced. [variableScope]
     int rc;
         ^
>> drivers/scsi/elx/libefc/efc_domain.c:546:7: warning: Local variable 'rc' shadows outer variable [shadowVariable]
     int rc;
         ^
   drivers/scsi/elx/libefc/efc_domain.c:499:6: note: Shadowed declaration
    int rc = 0;
        ^
   drivers/scsi/elx/libefc/efc_domain.c:546:7: note: Shadow variable
     int rc;
         ^
--
>> drivers/scsi/elx/libefc/efc_fabric.c:1274:14: warning: Variable 'remote_wwpn' is reassigned a value before the old one has been used. [redundantAssignment]
    remote_wwpn = efc_get_wwpn(remote_sp);
                ^
   drivers/scsi/elx/libefc/efc_fabric.c:1270:14: note: remote_wwpn is assigned
    remote_wwpn = efc_get_wwpn(remote_sp);
                ^
   drivers/scsi/elx/libefc/efc_fabric.c:1274:14: note: remote_wwpn is overwritten
    remote_wwpn = efc_get_wwpn(remote_sp);
                ^
   drivers/scsi/elx/libefc/efc_fabric.c:26:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
   drivers/scsi/elx/libefc/efc_fabric.c:479:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
   drivers/scsi/elx/libefc/efc_fabric.c:1502:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
>> drivers/scsi/elx/libefc/efc_fabric.c:726:38: warning: 'data' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    struct fc_gid_pn_resp *gidpt = data + sizeof(*hdr);
                                        ^
--
>> drivers/scsi/elx/efct/efct_lio.c:1663:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(generate_node_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1664:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1665:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1666:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1667:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1719:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1720:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1721:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1722:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
   ^
   drivers/scsi/elx/efct/efct_lio.c:1723:1: warning: %u in format string (no. 1) requires 'unsigned int' but the argument type is 'signed int'. [invalidPrintfArgType_uint]
   DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
   ^
   drivers/scsi/elx/efct/efct_lio.c:93:6: warning: The scope of the variable 'ret' can be reduced. [variableScope]
    int ret;
        ^
   drivers/scsi/elx/efct/efct_lio.c:150:6: warning: The scope of the variable 'ret' can be reduced. [variableScope]
    int ret;
        ^
>> drivers/scsi/elx/efct/efct_lio.c:480:22: warning: Passing NULL after the last typed argument to a variadic function leads to undefined behaviour. [varFuncNullUB]
     EFCT_XPORT_SHUTDOWN, NULL);
                        ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_lio.c:461:2: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
    ^
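
On the varFuncNullUB warning: cppcheck worries about NULL expanding to a
plain integer 0 in a variadic call. Passing an explicitly typed null
pointer as the terminator keeps both the tool and the C standard happy.
A sketch with a hypothetical variadic notifier, not the driver's real
call site:

	#include <stdarg.h>
	#include <stddef.h>

	/* hypothetical variadic event notifier, terminated by a null pointer */
	static void post_event(int event, ...)
	{
		va_list ap;
		void *p;

		va_start(ap, event);
		while ((p = va_arg(ap, void *)) != NULL)
			;	/* consume pointer arguments up to the terminator */
		va_end(ap);
	}

	/* call site: post_event(SOME_EVENT, (void *)NULL); */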
--
>> drivers/scsi/elx/libefc/efc_node.c:887:2: warning: %llX in format string (no. 1) requires 'unsigned long long' but the argument type is 'unsigned long'. [invalidPrintfArgType_uint]
    snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
    ^
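
The %llX complaints here are most likely cppcheck not seeing through the
kernel's 64-bit typedefs, but the belt-and-braces fix is the usual
explicit cast. Sketch only, not the driver's actual code:

	#include <stdio.h>
	#include <stdint.h>

	static void format_eui_name(char *buffer, size_t buffer_len,
				    uint64_t eui_name)
	{
		/* the cast makes %llX correct whatever uint64_t maps to */
		snprintf(buffer, buffer_len, "eui.%016llX",
			 (unsigned long long)eui_name);
	}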
--
   drivers/scsi/elx/efct/efct_unsol.c:28:16: warning: The scope of the variable 'flags' can be reduced. [variableScope]
    unsigned long flags = 0;
                  ^
   drivers/scsi/elx/efct/efct_unsol.c:210:26: warning: The scope of the variable 'frame' can be reduced. [variableScope]
    struct efc_hw_sequence *frame;
                            ^
   drivers/scsi/elx/efct/efct_unsol.c:533:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = 0;
        ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
>> drivers/scsi/elx/efct/efct_unsol.c:533:9: warning: Variable 'rc' is assigned a value that is never used. [unreadVariable]
    int rc = 0;
           ^
--
>> drivers/scsi/elx/libefc/efc_sport.c:101:2: warning: %llX in format string (no. 1) requires 'unsigned long long' but the argument type is 'unsigned long'. [invalidPrintfArgType_uint]
    snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
    ^
   drivers/scsi/elx/libefc/efc_sport.c:422:20: warning: The scope of the variable 'fabric' can be reduced. [variableScope]
     struct efc_node *fabric;
                      ^
--
   drivers/scsi/elx/libefc/efc_device.c:485:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc;
        ^
   drivers/scsi/elx/libefc/efc_device.c:1486:6: warning: The scope of the variable 'rc' can be reduced. [variableScope]
    int rc = EFC_SCSI_CALL_COMPLETE;
        ^
   drivers/scsi/elx/libefc/efc_device.c:1487:6: warning: The scope of the variable 'rc_2' can be reduced. [variableScope]
    int rc_2 = EFC_SCSI_CALL_COMPLETE;
        ^
>> drivers/scsi/elx/libefc/efc_device.c:372:31: warning: 'prli' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
    struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
                                 ^
>> drivers/scsi/elx/libefc/efc_device.c:1193:6: warning: 'cbdata->payload->dma.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
        + sizeof(struct fc_els_prli);
        ^
   drivers/scsi/elx/libefc/efc_device.c:1397:6: warning: 'cbdata->payload->dma.virt' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
        + sizeof(struct fc_els_prli);
        ^
--
>> drivers/scsi/elx/libefc_sli/sli4.c:1372:6: warning: Condition 'valid' is always true [knownConditionTrueFalse]
    if (valid && clear) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1367:6: note: Assuming that condition '!valid' is not redundant
    if (!valid) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1372:6: note: Condition 'valid' is always true
    if (valid && clear) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1425:6: warning: Condition 'valid' is always true [knownConditionTrueFalse]
    if (valid && clear) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1420:6: note: Assuming that condition '!valid' is not redundant
    if (!valid) {
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:1425:6: note: Condition 'valid' is always true
    if (valid && clear) {
        ^
>> drivers/scsi/elx/libefc_sli/sli4.c:4018:2: warning: Either the condition 'extent' is redundant or there is possible null pointer dereference: extent. [nullPointerRedundantCheck]
    extent->resource_type = cpu_to_le16(rtype);
    ^
   drivers/scsi/elx/libefc_sli/sli4.c:4011:6: note: Assuming that condition 'extent' is not redundant
    if (extent)
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:4018:2: note: Null pointer dereference
    extent->resource_type = cpu_to_le16(rtype);
    ^
   drivers/scsi/elx/libefc_sli/sli4.c:2175:19: warning: The scope of the variable 'bptr' can be reduced. [variableScope]
    struct sli4_bde *bptr;
                     ^
>> drivers/scsi/elx/libefc_sli/sli4.c:3068:35: warning: Local variable 'rcqe' shadows outer variable [shadowVariable]
     struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
                                     ^
   drivers/scsi/elx/libefc_sli/sli4.c:2995:29: note: Shadowed declaration
    struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
                               ^
   drivers/scsi/elx/libefc_sli/sli4.c:3068:35: note: Shadow variable
     struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
                                     ^
>> drivers/scsi/elx/libefc_sli/sli4.c:3069:7: warning: Local variable 'rq_element_index' shadows outer variable [shadowVariable]
     u16 rq_element_index =
         ^
   drivers/scsi/elx/libefc_sli/sli4.c:2999:6: note: Shadowed declaration
    u16 rq_element_index;
        ^
   drivers/scsi/elx/libefc_sli/sli4.c:3069:7: note: Shadow variable
     u16 rq_element_index =
         ^
>> drivers/scsi/elx/libefc_sli/sli4.c:4487:8: warning: Local variable 'i' shadows outer variable [shadowVariable]
      u32 i = 0, size = 0;
          ^
   drivers/scsi/elx/libefc_sli/sli4.c:4470:7: note: Shadowed declaration
     u32 i;
         ^
   drivers/scsi/elx/libefc_sli/sli4.c:4487:8: note: Shadow variable
      u32 i = 0, size = 0;
          ^
>> drivers/scsi/elx/libefc_sli/sli4.c:71:7: warning: 'buf' is of type 'void *'. When using void pointers in calculations, the behaviour is undefined. [arithOperationsOnVoidPointer]
     buf += offsetof(struct sli4_cmd_sli_config, payload.embed);
         ^
>> drivers/scsi/elx/libefc_sli/sli4.h:96:26: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_CTRL_FDD = (1 << 31),
                            ^
   drivers/scsi/elx/libefc_sli/sli4.h:223:29: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_PORT_STATUS_ERR  = (1 << 31),
                               ^
   drivers/scsi/elx/libefc_sli/sli4.h:315:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SGE_LAST   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:418:24: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_MCQE_VALID  = (1 << 31),
                          ^
   drivers/scsi/elx/libefc_sli/sli4.h:682:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_EQESZ   = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:685:23: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_EQ_ARM   = (1 << 31),
                         ^
   drivers/scsi/elx/libefc_sli/sli4.h:729:25: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    CREATE_MQEXT_VAL  = (1 << 31),
                           ^
   drivers/scsi/elx/libefc_sli/sli4.h:2665:30: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_READ_LNKSTAT_CLOF = (1 << 31),
                                ^
   drivers/scsi/elx/libefc_sli/sli4.h:3473:35: warning: Shifting signed 32-bit value by 31 bits is implementation-defined behaviour [shiftTooManyBitsSigned]
    SLI4_SET_RECONFIG_LINKID_FD = (1 << 31),
                                     ^
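
The nullPointerRedundantCheck on 'extent' is the classic "checked once,
dereferenced unconditionally a few lines later" pattern. A generic sketch
of the tightened shape (hypothetical types, not the real mailbox command
code):

	struct rsrc_extent {
		unsigned short resource_type;
	};

	static int fill_extent(struct rsrc_extent *extent, unsigned short rtype)
	{
		if (!extent)
			return -1;	/* refuse early rather than check and fall through */

		extent->resource_type = rtype;
		return 0;
	}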

vim +/rc +829 drivers/scsi/elx/efct/efct_driver.c

3569c2ab8265ff James Smart 2020-04-11  823  
3569c2ab8265ff James Smart 2020-04-11  824  static
3569c2ab8265ff James Smart 2020-04-11  825  int __init efct_init(void)
3569c2ab8265ff James Smart 2020-04-11  826  {
3569c2ab8265ff James Smart 2020-04-11  827  	int	rc = -ENODEV;
3569c2ab8265ff James Smart 2020-04-11  828  
3569c2ab8265ff James Smart 2020-04-11 @829  	rc = efct_device_init();
3569c2ab8265ff James Smart 2020-04-11  830  	if (rc) {
3569c2ab8265ff James Smart 2020-04-11  831  		pr_err("efct_device_init failed rc=%d\n", rc);
3569c2ab8265ff James Smart 2020-04-11  832  		return -ENOMEM;
3569c2ab8265ff James Smart 2020-04-11  833  	}
3569c2ab8265ff James Smart 2020-04-11  834  
3569c2ab8265ff James Smart 2020-04-11  835  	rc = pci_register_driver(&efct_pci_driver);
3569c2ab8265ff James Smart 2020-04-11  836  	if (rc)
3569c2ab8265ff James Smart 2020-04-11  837  		goto l1;
3569c2ab8265ff James Smart 2020-04-11  838  
3569c2ab8265ff James Smart 2020-04-11  839  	return rc;
3569c2ab8265ff James Smart 2020-04-11  840  
3569c2ab8265ff James Smart 2020-04-11  841  l1:
3569c2ab8265ff James Smart 2020-04-11  842  	efct_device_shutdown();
3569c2ab8265ff James Smart 2020-04-11  843  	return rc;
3569c2ab8265ff James Smart 2020-04-11  844  }
3569c2ab8265ff James Smart 2020-04-11  845  
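
Taking the two efct_driver.c findings together (the dead -ENODEV
initialisation, plus the fact that a failing efct_device_init() is
reported as -ENOMEM regardless of what it actually returned), a hedged
sketch of how the function above could be tightened; editorial
illustration only, not the author's fix:

	static int __init efct_init(void)
	{
		int rc;

		rc = efct_device_init();
		if (rc) {
			pr_err("efct_device_init failed rc=%d\n", rc);
			return rc;	/* propagate the real error, not -ENOMEM */
		}

		rc = pci_register_driver(&efct_pci_driver);
		if (rc)
			efct_device_shutdown();

		return rc;
	}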

:::::: The code at line 829 was first introduced by commit
:::::: 3569c2ab8265ffc43780ee31f4c4ef80e37cf0f7 elx: efct: Driver initialization routines

:::::: TO: James Smart <jsmart2021@gmail.com>
:::::: CC: 0day robot <lkp@intel.com>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions
  2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2020-04-14 15:23   ` Daniel Wagner
  2020-04-22  4:28     ` James Smart
  2020-04-15 12:06   ` Hannes Reinecke
  2020-04-23  1:52   ` Roman Bolshakov
  2 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-14 15:23 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

Hi James,

On Sat, Apr 11, 2020 at 08:32:33PM -0700, James Smart wrote:
> This is the initial patch for the new Emulex target mode SCSI
> driver sources.
> 
> This patch:
> - Creates the new Emulex source level directory drivers/scsi/elx
>   and adds the directory to the MAINTAINERS file.

I would drop this. It's clear from the diff stat.

> - Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
>   This library is a SLI-4 interface library.

Instead of saying what this patch does, it would be better to explain
why it is structured this way.

> - Starts the population of the libefc_sli library with definitions
>   of SLI-4 hardware register offsets and definitions.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed anonymous enums to named.
>   SLI defines to spell out _MASK value directly.
>   Changed multiple #defines to named enums for consistency.
>   SLI4_REG_MAX to SLI4_REG_UNKNOWN
> ---
>  MAINTAINERS                        |   8 ++
>  drivers/scsi/elx/libefc_sli/sli4.c |  26 ++++
>  drivers/scsi/elx/libefc_sli/sli4.h | 252 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 286 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
>  create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7bd5e23648b1..a7381c0088e4 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6223,6 +6223,14 @@ W:	http://www.broadcom.com
>  S:	Supported
>  F:	drivers/scsi/lpfc/
>  
> +EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
> +M:	James Smart <james.smart@broadcom.com>
> +M:	Ram Vegesna <ram.vegesna@broadcom.com>
> +L:	linux-scsi@vger.kernel.org
> +W:	http://www.broadcom.com
> +S:	Supported
> +F:	drivers/scsi/elx/
> +
>  ENE CB710 FLASH CARD READER DRIVER
>  M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
>  S:	Maintained
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> new file mode 100644
> index 000000000000..29d33becd334
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -0,0 +1,26 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term

2020? (and all later copyright headers)

> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/**
> + * All common (i.e. transport-independent) SLI-4 functions are implemented
> + * in this file.
> + */
> +#include "sli4.h"
> +
> +struct sli4_asic_entry_t {
> +	u32 rev_id;
> +	u32 family;
> +};
> +
> +static struct sli4_asic_entry_t sli4_asic_table[] = {
> +	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
> +	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
> +	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
> +};
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> new file mode 100644
> index 000000000000..1fad48643f94
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -0,0 +1,252 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + *
> + */
> +
> +/*
> + * All common SLI-4 structures and function prototypes.
> + */
> +
> +#ifndef _SLI4_H
> +#define _SLI4_H
> +
> +#include <linux/pci.h>
> +#include <linux/delay.h>
> +#include "scsi/fc/fc_els.h"
> +#include "scsi/fc/fc_fs.h"
> +#include "../include/efc_common.h"
> +
> +/*************************************************************************
> + * Common SLI-4 register offsets and field definitions
> + */
> +
> +/* SLI_INTF - SLI Interface Definition Register */
> +#define SLI4_INTF_REG			0x0058
> +enum sli4_intf {
> +	SLI4_INTF_REV_SHIFT		= 4,
> +	SLI4_INTF_REV_MASK		= 0xF0,
> +
> +	SLI4_INTF_REV_S3		= 0x30,
> +	SLI4_INTF_REV_S4		= 0x40,
> +
> +	SLI4_INTF_FAMILY_SHIFT		= 8,
> +	SLI4_INTF_FAMILY_MASK		= 0x0F00,
> +
> +	SLI4_FAMILY_CHECK_ASIC_TYPE	= 0x0F00,
> +
> +	SLI4_INTF_IF_TYPE_SHIFT		= 12,
> +	SLI4_INTF_IF_TYPE_MASK		= 0xF000,
> +
> +	SLI4_INTF_IF_TYPE_2		= 0x2000,
> +	SLI4_INTF_IF_TYPE_6		= 0x6000,
> +
> +	SLI4_INTF_VALID_SHIFT		= 29,
> +	SLI4_INTF_VALID_MASK		= 0xE0000000,
> +
> +	SLI4_INTF_VALID_VALUE		= 0xC0000000,
> +};
> +
> +/* ASIC_ID - SLI ASIC Type and Revision Register */
> +#define SLI4_ASIC_ID_REG	0x009c
> +enum sli4_asic {
> +	SLI4_ASIC_GEN_SHIFT	= 8,
> +	SLI4_ASIC_GEN_MASK	= 0xFF00,
> +	SLI4_ASIC_GEN_5		= 0x0B00,
> +	SLI4_ASIC_GEN_6		= 0x0C00,
> +	SLI4_ASIC_GEN_7		= 0x0D00,
> +};
> +
> +enum sli4_acic_revisions {
> +	SLI4_ASIC_REV_A0 = 0x00,
> +	SLI4_ASIC_REV_A1 = 0x01,
> +	SLI4_ASIC_REV_A2 = 0x02,
> +	SLI4_ASIC_REV_A3 = 0x03,
> +	SLI4_ASIC_REV_B0 = 0x10,
> +	SLI4_ASIC_REV_B1 = 0x11,
> +	SLI4_ASIC_REV_B2 = 0x12,
> +	SLI4_ASIC_REV_C0 = 0x20,
> +	SLI4_ASIC_REV_C1 = 0x21,
> +	SLI4_ASIC_REV_C2 = 0x22,
> +	SLI4_ASIC_REV_D0 = 0x30,
> +};
> +
> +/* BMBX - Bootstrap Mailbox Register */
> +#define SLI4_BMBX_REG		0x0160
> +enum sli4_bmbx {
> +	SLI4_BMBX_MASK_HI	= 0x3,
> +	SLI4_BMBX_MASK_LO	= 0xf,
> +	SLI4_BMBX_RDY		= (1 << 0),
> +	SLI4_BMBX_HI		= (1 << 1),

Are the brackets necessary for enum definitions? I would leave them out.

> +	SLI4_BMBX_SIZE		= 256,
> +};
> +
> +#define SLI4_BMBX_WRITE_HI(r) \
> +	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | SLI4_BMBX_HI)
> +#define SLI4_BMBX_WRITE_LO(r) \
> +	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) << 30) | \
> +	 (((r) & ~SLI4_BMBX_MASK_LO) >> 2))

I am wondering if these macros should be inline functions
instead. Certainly bike-shedding territory.

BTW, why is it called SLI4_BMBX_WRITE_HI and not just SLI4_BMBX_HI?
I ask because the doorbell registers below follow a different naming pattern.
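
For illustration, an inline-function variant could look roughly like the
sketch below (keeping the enum names from the patch and relying on
upper_32_bits() from <linux/kernel.h>; whether it reads better is exactly
the bike-shedding part):

	static inline u32 sli4_bmbx_write_hi(u64 addr)
	{
		return (upper_32_bits(addr) & ~SLI4_BMBX_MASK_HI) | SLI4_BMBX_HI;
	}

	static inline u32 sli4_bmbx_write_lo(u64 addr)
	{
		return ((upper_32_bits(addr) & SLI4_BMBX_MASK_HI) << 30) |
		       ((addr & ~(u64)SLI4_BMBX_MASK_LO) >> 2);
	}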

> +
> +/* SLIPORT_CONTROL - SLI Port Control Register */
> +#define SLI4_PORT_CTRL_REG	0x0408
> +enum sli4_port_ctrl {
> +	SLI4_PORT_CTRL_IP	= (1 << 27),
> +	SLI4_PORT_CTRL_IDIS	= (1 << 22),
> +	SLI4_PORT_CTRL_FDD	= (1 << 31),
> +};
> +
> +/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
> +#define SLI4_PORT_ERROR1	0x040c
> +#define SLI4_PORT_ERROR2	0x0410
> +
> +/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
> +#define SLI4_EQCQ_DB_REG	0x120
> +enum sli4_eqcq_e {
> +	SLI4_EQ_ID_LO_MASK	= 0x01FF,
> +
> +	SLI4_CQ_ID_LO_MASK	= 0x03FF,
> +
> +	SLI4_EQCQ_CI_EQ		= 0x0200,
> +
> +	SLI4_EQCQ_QT_EQ		= 0x00000400,
> +	SLI4_EQCQ_QT_CQ		= 0x00000000,
> +
> +	SLI4_EQCQ_ID_HI_SHIFT	= 11,
> +	SLI4_EQCQ_ID_HI_MASK	= 0xF800,
> +
> +	SLI4_EQCQ_NUM_SHIFT	= 16,
> +	SLI4_EQCQ_NUM_MASK	= 0x1FFF0000,
> +
> +	SLI4_EQCQ_ARM		= 0x20000000,
> +	SLI4_EQCQ_UNARM		= 0x00000000,
> +};
> +
> +#define SLI4_EQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ | \
> +	 ((((id) >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
> +	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | \
> +	 (a) | SLI4_EQCQ_CI_EQ)
> +
> +#define SLI4_CQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ | \
> +	 ((((id) >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
> +	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | (a))

Well, I find it a bit hard to digest. What do those two defines do?
Maybe it would be more readable as an inline function with a comment or
so. Just saying :)
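
To make that concrete, a possible shape (illustrative only; going by how
the masks read, 'n' is the number of entries being acknowledged, 'id' the
queue id and 'arm' the SLI4_EQCQ_ARM/SLI4_EQCQ_UNARM value):

	static inline u32 sli4_eq_doorbell(u32 n, u32 id, u32 arm)
	{
		u32 reg = 0;

		reg |= id & SLI4_EQ_ID_LO_MASK;		/* low part of the queue id */
		reg |= SLI4_EQCQ_QT_EQ | SLI4_EQCQ_CI_EQ;
		reg |= ((id >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK;
		reg |= (n << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK;
		reg |= arm;

		return reg;
	}
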

> +/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
> +#define SLI4_IF6_EQ_DB_REG	0x120
> +enum sli4_eq_e {
> +	SLI4_IF6_EQ_ID_MASK	= 0x0FFF,
> +
> +	SLI4_IF6_EQ_NUM_SHIFT	= 16,
> +	SLI4_IF6_EQ_NUM_MASK	= 0x1FFF0000,
> +};
> +
> +#define SLI4_IF6_EQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_IF6_EQ_ID_MASK) | \
> +	 (((n) << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK) | (a))
> +
> +/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6 */
> +#define SLI4_IF6_CQ_DB_REG	0xC0
> +enum sli4_cq_e {
> +	SLI4_IF6_CQ_ID_MASK	= 0xFFFF,
> +
> +	SLI4_IF6_CQ_NUM_SHIFT	= 16,
> +	SLI4_IF6_CQ_NUM_MASK	= 0x1FFF0000,
> +};
> +
> +#define SLI4_IF6_CQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_IF6_CQ_ID_MASK) | \
> +	 (((n) << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK) | (a))
> +
> +/* MQ_DOORBELL - MQ Doorbell Register */
> +#define SLI4_MQ_DB_REG		0x0140
> +#define SLI4_IF6_MQ_DB_REG	0x0160
> +enum sli4_mq_e {
> +	SLI4_MQ_ID_MASK		= 0xFFFF,
> +
> +	SLI4_MQ_NUM_SHIFT	= 16,
> +	SLI4_MQ_NUM_MASK	= 0x3FFF0000,
> +};
> +
> +#define SLI4_MQ_DOORBELL(n, i) \
> +	(((i) & SLI4_MQ_ID_MASK) | \
> +	 (((n) << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK))

s/i/id/

And just because I like patterns: why no 'a'?

> +/* RQ_DOORBELL - RQ Doorbell Register */
> +#define SLI4_RQ_DB_REG		0x0a0
> +#define SLI4_IF6_RQ_DB_REG	0x0080
> +enum sli4_rq_e {
> +	SLI4_RQ_DB_ID_MASK	= 0xFFFF,
> +
> +	SLI4_RQ_DB_NUM_SHIFT	= 16,
> +	SLI4_RQ_DB_NUM_MASK	= 0x3FFF0000,
> +};
> +
> +#define SLI4_RQ_DOORBELL(n, i) \
> +	(((i) & SLI4_RQ_DB_ID_MASK) | \
> +	 (((n) << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK))

s/i/id/

> +
> +/* WQ_DOORBELL - WQ Doorbell Register */
> +#define SLI4_IO_WQ_DB_REG	0x040
> +#define SLI4_IF6_WQ_DB_REG	0x040
> +enum sli4_wq_e {
> +	SLI4_WQ_ID_MASK		= 0xFFFF,
> +
> +	SLI4_WQ_IDX_SHIFT	= 16,
> +	SLI4_WQ_IDX_MASK	= 0xFF0000,
> +
> +	SLI4_WQ_NUM_SHIFT	= 24,
> +	SLI4_WQ_NUM_MASK	= 0x0FF00000,
> +};
> +
> +#define SLI4_WQ_DOORBELL(n, x, i) \
> +	(((i) & SLI4_WQ_ID_MASK) | \
> +	 (((x) << SLI4_WQ_IDX_SHIFT) & SLI4_WQ_IDX_MASK) | \
> +	 (((n) << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK))

s/i/id/

And why is id not the second argument? What is x?

> +
> +/* SLIPORT_SEMAPHORE - SLI Port Host and Port Status Register */
> +#define SLI4_PORT_SEMP_REG		0x0400
> +enum sli4_port_sem_e {
> +	SLI4_PORT_SEMP_ERR_MASK		= 0xF000,
> +	SLI4_PORT_SEMP_UNRECOV_ERR	= 0xF000,
> +};
> +
> +/* SLIPORT_STATUS - SLI Port Status Register */
> +#define SLI4_PORT_STATUS_REGOFF		0x0404
> +enum sli4_port_status {
> +	SLI4_PORT_STATUS_FDP		= (1 << 21),
> +	SLI4_PORT_STATUS_RDY		= (1 << 23),
> +	SLI4_PORT_STATUS_RN		= (1 << 24),
> +	SLI4_PORT_STATUS_DIP		= (1 << 25),
> +	SLI4_PORT_STATUS_OTI		= (1 << 29),
> +	SLI4_PORT_STATUS_ERR		= (1 << 31),
> +};
> +
> +#define SLI4_PHYDEV_CTRL_REG		0x0414
> +#define SLI4_PHYDEV_CTRL_FRST		(1 << 1)
> +#define SLI4_PHYDEV_CTRL_DD		(1 << 2)
> +
> +/* Register name enums */
> +enum sli4_regname_en {
> +	SLI4_REG_BMBX,
> +	SLI4_REG_EQ_DOORBELL,
> +	SLI4_REG_CQ_DOORBELL,
> +	SLI4_REG_RQ_DOORBELL,
> +	SLI4_REG_IO_WQ_DOORBELL,
> +	SLI4_REG_MQ_DOORBELL,
> +	SLI4_REG_PHYSDEV_CONTROL,
> +	SLI4_REG_PORT_CONTROL,
> +	SLI4_REG_PORT_ERROR1,
> +	SLI4_REG_PORT_ERROR2,
> +	SLI4_REG_PORT_SEMAPHORE,
> +	SLI4_REG_PORT_STATUS,
> +	SLI4_REG_UNKWOWN			/* must be last */
> +};
> +
> +struct sli4_reg {
> +	u32	rset;
> +	u32	off;
> +};
> +
> +#endif /* !_SLI4_H */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-04-12  3:32 ` [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
@ 2020-04-14 18:02   ` Daniel Wagner
  2020-04-22  4:41     ` James Smart
  2020-04-15 12:14   ` Hannes Reinecke
  1 sibling, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-14 18:02 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

Hi,

On Sat, Apr 11, 2020 at 08:32:34PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch add SLI-4 Data structures and defines for:
> - Buffer Descriptors (BDEs)
> - Scatter/Gather List elements (SGEs)
> - Queues and their Entry Descriptions for:
>    Event Queues (EQs), Completion Queues (CQs),
>    Receive Queues (RQs), and the Mailbox Queue (MQ).

There are a few definitions which are not used at all,
e.g. DISEED_SGE_OP_RX_VALUE, sli4_acqe_e, sli4_acqe_event_code,
etc. What are the plans for those?

> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed anonymous enums to named.
>   SLI defines to spell out _MASK value directly.
>   Change multiple defines to named Enums for consistency.
>   Single Enum to #define.
> ---
>  drivers/scsi/elx/include/efc_common.h |   25 +
>  drivers/scsi/elx/libefc_sli/sli4.h    | 1761 +++++++++++++++++++++++++++++++++
>  2 files changed, 1786 insertions(+)
>  create mode 100644 drivers/scsi/elx/include/efc_common.h
> 
> diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
> new file mode 100644
> index 000000000000..c427f75da4d5
> --- /dev/null
> +++ b/drivers/scsi/elx/include/efc_common.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFC_COMMON_H__
> +#define __EFC_COMMON_H__
> +
> +#include <linux/pci.h>
> +
> +#define EFC_SUCCESS	0
> +#define EFC_FAIL	1
> +
> +struct efc_dma {
> +	void		*virt;
> +	void            *alloc;
> +	dma_addr_t	phys;
> +
> +	size_t		size;
> +	size_t          len;
> +	struct pci_dev	*pdev;
> +};
> +
> +#endif /* __EFC_COMMON_H__ */
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index 1fad48643f94..07eef8df9690 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -249,4 +249,1765 @@ struct sli4_reg {
>  	u32	off;
>  };
>  
> +struct sli4_dmaaddr {
> +	__le32 low;
> +	__le32 high;
> +};
> +
> +/* a 3-word BDE with address 1st 2 words, length last word */

Maybe expand BDE here as it is the first time the acronym is
used. Please drop 'with address 1st 2 words, ...' since this is redundant.

> +struct sli4_bufptr {
> +	struct sli4_dmaaddr addr;
> +	__le32 length;
> +};
> +
> +/* a 3-word BDE with length as first word, address last 2 words */
> +struct sli4_bufptr_len1st {
> +	__le32 length0;
> +	struct sli4_dmaaddr addr;
> +};

This struct is unused. Remove it please.

> +
> +/* Buffer Descriptor Entry (BDE) */
> +enum sli4_bde_e {
> +	SLI4_BDE_MASK_BUFFER_LEN	= 0x00ffffff,
> +	SLI4_BDE_MASK_BDE_TYPE		= 0xff000000,
> +};
> +
> +struct sli4_bde {
> +	__le32		bde_type_buflen;
> +	union {
> +		struct sli4_dmaaddr data;
> +		struct {
> +			__le32	offset;
> +			__le32	rsvd2;
> +		} imm;
> +		struct sli4_dmaaddr blp;
> +	} u;
> +};
> +
> +/* Buffer Descriptors */
> +enum sli4_bde_type {
> +	BDE_TYPE_SHIFT		= 24,
> +	BDE_TYPE_BDE_64		= 0x00,	/* Generic 64-bit data */
> +	BDE_TYPE_BDE_IMM	= 0x01,	/* Immediate data */
> +	BDE_TYPE_BLP		= 0x40,	/* Buffer List Pointer */
> +};
> +
> +/* Scatter-Gather Entry (SGE) */
> +#define SLI4_SGE_MAX_RESERVED			3
> +
> +enum sli4_sge_type {
> +	/* DW2 */
> +	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,

Nitpick: some of the hex values use lowercase letters while others, as
here, use uppercase.

> +	/*DW2W1*/
> +	SLI4_SGE_TYPE_SHIFT		= 27,
> +	SLI4_SGE_TYPE_MASK		= 0x78000000,
> +	/*SGE Types*/
> +	SLI4_SGE_TYPE_DATA		= 0x00,
> +	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
> +	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
> +	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
> +	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
> +	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
> +	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
> +	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
> +	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
> +
> +	SLI4_SGE_LAST			= (1 << 31),
> +};
> +
> +struct sli4_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		buffer_length;
> +};
> +
> +/* T10 DIF Scatter-Gather Entry (SGE) */
> +struct sli4_dif_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		rsvd12;
> +};
> +
> +/* Data Integrity Seed (DISEED) SGE */
> +enum sli4_diseed_sge_flags {
> +	/* DW2W1 */
> +	DISEED_SGE_HS			= (1 << 2),
> +	DISEED_SGE_WS			= (1 << 3),
> +	DISEED_SGE_IC			= (1 << 4),
> +	DISEED_SGE_ICS			= (1 << 5),
> +	DISEED_SGE_ATRT			= (1 << 6),
> +	DISEED_SGE_AT			= (1 << 7),
> +	DISEED_SGE_FAT			= (1 << 8),
> +	DISEED_SGE_NA			= (1 << 9),
> +	DISEED_SGE_HI			= (1 << 10),
> +
> +	/* DW3W1 */
> +	DISEED_SGE_BS_MASK		= 0x0007,
> +	DISEED_SGE_AI			= (1 << 3),
> +	DISEED_SGE_ME			= (1 << 4),
> +	DISEED_SGE_RE			= (1 << 5),
> +	DISEED_SGE_CE			= (1 << 6),
> +	DISEED_SGE_NR			= (1 << 7),
> +
> +	DISEED_SGE_OP_RX_SHIFT		= 8,
> +	DISEED_SGE_OP_RX_MASK		= 0x0F00,
> +	DISEED_SGE_OP_TX_SHIFT		= 12,
> +	DISEED_SGE_OP_TX_MASK		= 0xF000,
> +};
> +
> +/* Opcode values */
> +enum sli4_diseed_sge_opcodes {
> +	DISEED_SGE_OP_IN_NODIF_OUT_CRC,
> +	DISEED_SGE_OP_IN_CRC_OUT_NODIF,
> +	DISEED_SGE_OP_IN_NODIF_OUT_CSUM,
> +	DISEED_SGE_OP_IN_CSUM_OUT_NODIF,
> +	DISEED_SGE_OP_IN_CRC_OUT_CRC,
> +	DISEED_SGE_OP_IN_CSUM_OUT_CSUM,
> +	DISEED_SGE_OP_IN_CRC_OUT_CSUM,
> +	DISEED_SGE_OP_IN_CSUM_OUT_CRC,
> +	DISEED_SGE_OP_IN_RAW_OUT_RAW,
> +};
> +
> +#define DISEED_SGE_OP_RX_VALUE(stype) \
> +	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_RX_SHIFT)
> +#define DISEED_SGE_OP_TX_VALUE(stype) \
> +	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_TX_SHIFT)

Empty line missing here.

> +struct sli4_diseed_sge {
> +	__le32		ref_tag_cmp;
> +	__le32		ref_tag_repl;
> +	__le16		app_tag_repl;
> +	__le16		dw2w1_flags;
> +	__le16		app_tag_cmp;
> +	__le16		dw3w1_flags;
> +};
> +
> +/* List Segment Pointer Scatter-Gather Entry (SGE) */
> +#define SLI4_LSP_SGE_SEGLEN	0x00ffffff
> +
> +struct sli4_lsp_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		dw3_seglen;
> +};
> +
> +enum sli4_eqe_e {
> +	SLI4_EQE_VALID	= 1,
> +	SLI4_EQE_MJCODE	= 0xe,
> +	SLI4_EQE_MNCODE	= 0xfff0,
> +};
> +
> +struct sli4_eqe {
> +	__le16		dw0w0_flags;
> +	__le16		resource_id;
> +};
> +
> +#define SLI4_MAJOR_CODE_STANDARD	0
> +#define SLI4_MAJOR_CODE_SENTINEL	1
> +
> +/* Sentinel EQE indicating the EQ is full */
> +#define SLI4_EQE_STATUS_EQ_FULL		2
> +
> +enum sli4_mcqe_e {
> +	SLI4_MCQE_CONSUMED	= (1 << 27),
> +	SLI4_MCQE_COMPLETED	= (1 << 28),
> +	SLI4_MCQE_AE		= (1 << 30),
> +	SLI4_MCQE_VALID		= (1 << 31),
> +};
> +
> +/* Entry was consumed but not completed */
> +#define SLI4_MCQE_STATUS_NOT_COMPLETED	-2
> +
> +struct sli4_mcqe {
> +	__le16		completion_status;
> +	__le16		extended_status;
> +	__le32		mqe_tag_low;
> +	__le32		mqe_tag_high;
> +	__le32		dw3_flags;
> +};
> +
> +enum sli4_acqe_e {
> +	SLI4_ACQE_AE	= (1 << 6), /* async event - this is an ACQE */
> +	SLI4_ACQE_VAL	= (1 << 7), /* valid - contents of CQE are valid */
> +};
> +
> +struct sli4_acqe {
> +	__le32		event_data[3];
> +	u8		rsvd12;
> +	u8		event_code;
> +	u8		event_type;
> +	u8		ae_val;
> +};
> +
> +enum sli4_acqe_event_code {
> +	SLI4_ACQE_EVENT_CODE_LINK_STATE		= 0x01,
> +	SLI4_ACQE_EVENT_CODE_FIP		= 0x02,
> +	SLI4_ACQE_EVENT_CODE_DCBX		= 0x03,
> +	SLI4_ACQE_EVENT_CODE_ISCSI		= 0x04,
> +	SLI4_ACQE_EVENT_CODE_GRP_5		= 0x05,
> +	SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT	= 0x10,
> +	SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT	= 0x11,
> +	SLI4_ACQE_EVENT_CODE_VF_EVENT		= 0x12,
> +	SLI4_ACQE_EVENT_CODE_MR_EVENT		= 0x13,
> +};
> +
> +enum sli4_qtype {
> +	SLI_QTYPE_EQ,
> +	SLI_QTYPE_CQ,
> +	SLI_QTYPE_MQ,
> +	SLI_QTYPE_WQ,
> +	SLI_QTYPE_RQ,
> +	SLI_QTYPE_MAX,			/* must be last */
> +};
> +
> +#define SLI_USER_MQ_COUNT	1
> +#define SLI_MAX_CQ_SET_COUNT	16
> +#define SLI_MAX_RQ_SET_COUNT	16
> +
> +enum sli4_qentry {
> +	SLI_QENTRY_ASYNC,
> +	SLI_QENTRY_MQ,
> +	SLI_QENTRY_RQ,
> +	SLI_QENTRY_WQ,
> +	SLI_QENTRY_WQ_RELEASE,
> +	SLI_QENTRY_OPT_WRITE_CMD,
> +	SLI_QENTRY_OPT_WRITE_DATA,
> +	SLI_QENTRY_XABT,
> +	SLI_QENTRY_MAX			/* must be last */
> +};
> +
> +enum sli4_queue_flags {
> +	/* CQ has MQ/Async completion */
> +	SLI4_QUEUE_FLAG_MQ	= (1 << 0),
> +
> +	/* RQ for packet headers */
> +	SLI4_QUEUE_FLAG_HDR	= (1 << 1),
> +
> +	/* RQ index increment by 8 */
> +	SLI4_QUEUE_FLAG_RQBATCH	= (1 << 2),
> +};

There are several styles of documentation and comments in this header. I
suggest streamlining them all and using kernel-doc style?
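For example, for the struct below a kernel-doc comment could look roughly
like this (just a sketch, descriptions reused from the existing per-field
comments):

/**
 * struct sli4_queue - SLI-4 queue bookkeeping, common to all queue types
 * @dma:	DMA memory backing the queue entries
 * @index:	current host entry index
 * @size:	entry size
 * @length:	number of entries
 * ...
 */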

> +struct sli4_queue {
> +	/* Common to all queue types */
> +	struct efc_dma	dma;
> +	spinlock_t	lock;	/* protect the queue operations */

I really would love to have some documentation on the locking concept of
the driver. From my very limited experience so far with the FC drivers,
this is one of the areas that is hard to review when there is no
documentation on the design and rules. For example, 'protect the queue
operations' is a pretty much useless comment.
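Even a short description in the header would already help. I am only
guessing at the actual rules here, but something along these lines:

	/*
	 * Serializes all queue state: the host index, n_posted and
	 * the doorbell write must only be updated with the lock held.
	 * (Example wording only - please describe the real rules,
	 * including which contexts may take the lock.)
	 */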

> +	u32	index;		/* current host entry index */
> +	u16	size;		/* entry size */
> +	u16	length;		/* number of entries */
> +	u16	n_posted;	/* number entries posted */
> +	u16	id;		/* Port assigned xQ_ID */
> +	u16	ulp;		/* ULP assigned to this queue */
> +	void __iomem    *db_regaddr;	/* register address for the doorbell */
> +	u8		type;		/* queue type ie EQ, CQ, ... */
> +	u32	proc_limit;	/* limit CQE processed per iteration */
> +	u32	posted_limit;	/* CQE/EQE processed before ringing doorbell */
> +	u32	max_num_processed;
> +	time_t		max_process_time;
> +	u16	phase;		/* For if_type = 6, this value toggle
                                       ^^^^^^^
Is this the type member? What does the value 6 mean?

> +				 * for each iteration of the queue,
> +				 * a queue entry is valid when a cqe
> +				 * valid bit matches this value
> +				 */
> +
> +	union {
> +		u32	r_idx;	/* "read" index (MQ only) */
> +		struct {
> +			u32	dword;
> +		} flag;

Is this struct really necessary?
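If it only exists to give the u32 a name, maybe a plain member would do,
e.g. (just a sketch, the users of u.flag.dword would need adjusting):

	union {
		u32	r_idx;	/* "read" index (MQ only) */
		u32	flag_dword;
	} u;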

> +	} u;
> +};
> +
> +/* Generic Command Request header */
> +enum sli4_cmd_version {
> +	CMD_V0,
> +	CMD_V1,
> +	CMD_V2,
> +};
> +
> +struct sli4_rqst_hdr {
> +	u8		opcode;
> +	u8		subsystem;
> +	__le16		rsvd2;
> +	__le32		timeout;
> +	__le32		request_length;
> +	__le32		dw3_version;
> +};
> +
> +/* Generic Command Response header */
> +struct sli4_rsp_hdr {
> +	u8		opcode;
> +	u8		subsystem;
> +	__le16		rsvd2;
> +	u8		status;
> +	u8		additional_status;
> +	__le16		rsvd6;
> +	__le32		response_length;
> +	__le32		actual_response_length;
> +};
> +
> +#define SLI4_QUEUE_RQ_BATCH	8
> +
> +#define CFG_RQST_CMDSZ(stype)	sizeof(struct sli4_rqst_##stype)
> +
> +#define CFG_RQST_PYLD_LEN(stype) \
> +		cpu_to_le32(sizeof(struct sli4_rqst_##stype) - \
> +			sizeof(struct sli4_rqst_hdr))
> +
> +#define CFG_RQST_PYLD_LEN_VAR(stype, varpyld) \
> +		cpu_to_le32((sizeof(struct sli4_rqst_##stype) + \
> +			varpyld) - sizeof(struct sli4_rqst_hdr))
> +
> +#define SZ_DMAADDR		sizeof(struct sli4_dmaaddr)
> +
> +#define SLI_CONFIG_PYLD_LENGTH(stype) \
> +		max(sizeof(struct sli4_rqst_##stype), \
> +		sizeof(struct sli4_rsp_##stype))
> +
> +enum sli4_create_cqv2_e {
> +	/* DW5_flags values*/
> +	CREATE_CQV2_CLSWM_MASK	= 0x00003000,
> +	CREATE_CQV2_NODELAY	= 0x00004000,
> +	CREATE_CQV2_AUTOVALID	= 0x00008000,
> +	CREATE_CQV2_CQECNT_MASK	= 0x18000000,
> +	CREATE_CQV2_VALID	= 0x20000000,
> +	CREATE_CQV2_EVT		= 0x80000000,
> +	/* DW6W1_flags values*/
> +	CREATE_CQV2_ARM		= 0x8000,
> +};
> +
> +struct sli4_rqst_cmn_create_cq_v2 {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	u8		page_size;
> +	u8		rsvd19;
> +	__le32		dw5_flags;
> +	__le16		eq_id;
> +	__le16		dw6w1_arm;
> +	__le16		cqe_count;
> +	__le16		rsvd30;
> +	__le32		rsvd32;
> +	struct sli4_dmaaddr page_phys_addr[0];
> +};
> +
> +enum sli4_create_cqset_e {
> +	/* DW5_flags values*/
> +	CREATE_CQSETV0_CLSWM_MASK  = 0x00003000,
> +	CREATE_CQSETV0_NODELAY	   = 0x00004000,
> +	CREATE_CQSETV0_AUTOVALID   = 0x00008000,
> +	CREATE_CQSETV0_CQECNT_MASK = 0x18000000,
> +	CREATE_CQSETV0_VALID	   = 0x20000000,
> +	CREATE_CQSETV0_EVT	   = 0x80000000,
> +	/* DW5W1_flags values */
> +	CREATE_CQSETV0_CQE_COUNT   = 0x7fff,
> +	CREATE_CQSETV0_ARM	   = 0x8000,
> +};
> +
> +struct sli4_rqst_cmn_create_cq_set_v0 {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	u8		page_size;
> +	u8		rsvd19;
> +	__le32		dw5_flags;
> +	__le16		num_cq_req;
> +	__le16		dw6w1_flags;
> +	__le16		eq_id[16];
> +	struct sli4_dmaaddr page_phys_addr[0];
> +};
> +
> +/* CQE count */
> +enum sli4_cq_cnt {
> +	CQ_CNT_256,
> +	CQ_CNT_512,
> +	CQ_CNT_1024,
> +	CQ_CNT_LARGE,
> +};
> +
> +#define CQ_CNT_SHIFT			27
> +#define CQ_CNT_VAL(type)		(CQ_CNT_##type << CQ_CNT_SHIFT)
> +
> +#define SLI4_CQE_BYTES			(4 * sizeof(u32))
> +
> +#define SLI4_CMN_CREATE_CQ_V2_MAX_PAGES	8
> +
> +/* Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion */
> +struct sli4_rsp_cmn_create_queue {
> +	struct sli4_rsp_hdr	hdr;
> +	__le16	q_id;
> +	u8	rsvd18;
> +	u8	ulp;
> +	__le32	db_offset;
> +	__le16	db_rs;
> +	__le16	db_fmt;
> +};
> +
> +struct sli4_rsp_cmn_create_queue_set {
> +	struct sli4_rsp_hdr	hdr;
> +	__le16	q_id;
> +	__le16	num_q_allocated;
> +};
> +
> +/* Common Destroy Queue */
> +struct sli4_rqst_cmn_destroy_q {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16	q_id;
> +	__le16	rsvd;
> +};
> +
> +struct sli4_rsp_cmn_destroy_q {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +/* Modify the delay multiplier for EQs */
> +struct sli4_rqst_cmn_modify_eq_delay {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32	num_eq;
> +	struct {
> +		__le32	eq_id;
> +		__le32	phase;
> +		__le32	delay_multiplier;
> +	} eq_delay_record[8];
> +};
> +
> +struct sli4_rsp_cmn_modify_eq_delay {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +enum sli4_create_cq_e {
> +	/* DW5 */
> +	CREATE_EQ_AUTOVALID		= (1 << 28),
> +	CREATE_EQ_VALID			= (1 << 29),
> +	CREATE_EQ_EQESZ			= (1 << 31),
> +	/* DW6 */
> +	CREATE_EQ_COUNT			= (7 << 26),
> +	CREATE_EQ_ARM			= (1 << 31),
> +	/* DW7 */
> +	CREATE_EQ_DELAYMULTI_SHIFT	= 13,
> +	CREATE_EQ_DELAYMULTI_MASK	= 0x007FE000,
> +	CREATE_EQ_DELAYMULTI		= 0x00040000,
> +};
> +
> +struct sli4_rqst_cmn_create_eq {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16	num_pages;
> +	__le16	rsvd18;
> +	__le32	dw5_flags;
> +	__le32	dw6_flags;
> +	__le32	dw7_delaymulti;
> +	__le32	rsvd32;
> +	struct sli4_dmaaddr page_address[8];
> +};
> +
> +struct sli4_rsp_cmn_create_eq {
> +	struct sli4_rsp_cmn_create_queue q_rsp;
> +};
> +
> +/* EQ count */
> +enum sli4_eq_cnt {
> +	EQ_CNT_256,
> +	EQ_CNT_512,
> +	EQ_CNT_1024,
> +	EQ_CNT_2048,
> +	EQ_CNT_4096 = 3,
> +};
> +
> +#define EQ_CNT_SHIFT			26
> +#define EQ_CNT_VAL(type)		(EQ_CNT_##type << EQ_CNT_SHIFT)
> +
> +#define SLI4_EQE_SIZE_4			0
> +#define SLI4_EQE_SIZE_16		1
> +
> +/* Create a Mailbox Queue; accommodate v0 and v1 forms. */
> +enum sli4_create_mq_flags {
> +	/* DW6W1 */
> +	CREATE_MQEXT_RINGSIZE		= 0xf,
> +	CREATE_MQEXT_CQID_SHIFT		= 6,
> +	CREATE_MQEXT_CQIDV0_MASK	= 0xffc0,
> +	/* DW7 */
> +	CREATE_MQEXT_VAL		= (1 << 31),
> +	/* DW8 */
> +	CREATE_MQEXT_ACQV		= (1 << 0),
> +	CREATE_MQEXT_ASYNC_CQIDV0	= 0x7fe,
> +};
> +
> +struct sli4_rqst_cmn_create_mq_ext {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	__le16		cq_id_v1;
> +	__le32		async_event_bitmap;
> +	__le16		async_cq_id_v1;
> +	__le16		dw6w1_flags;
> +	__le32		dw7_val;
> +	__le32		dw8_flags;
> +	__le32		rsvd36;

Nitpick: The code indentation for all these definitions is not consistent.
For example, struct sli4_rqst_cmn_create_eq above uses a different pattern.
Please pick one and stick to it.


> +	struct sli4_dmaaddr page_phys_addr[0];
> +};
> +
> +struct sli4_rsp_cmn_create_mq_ext {
> +	struct sli4_rsp_cmn_create_queue q_rsp;
> +};
> +
> +enum sli4_mqe_size {
> +	SLI4_MQE_SIZE_16 = 0x05,
> +	SLI4_MQE_SIZE_32,
> +	SLI4_MQE_SIZE_64,
> +	SLI4_MQE_SIZE_128,
> +};
> +
> +enum sli4_async_evt {
> +	SLI4_ASYNC_EVT_LINK_STATE	= (1 << 1),
> +	SLI4_ASYNC_EVT_FIP		= (1 << 2),
> +	SLI4_ASYNC_EVT_GRP5		= (1 << 5),
> +	SLI4_ASYNC_EVT_FC		= (1 << 16),
> +	SLI4_ASYNC_EVT_SLI_PORT		= (1 << 17),
> +};
> +
> +#define	SLI4_ASYNC_EVT_FC_ALL \
> +		(SLI4_ASYNC_EVT_LINK_STATE	| \
> +		 SLI4_ASYNC_EVT_FIP		| \
> +		 SLI4_ASYNC_EVT_GRP5		| \
> +		 SLI4_ASYNC_EVT_FC		| \
> +		 SLI4_ASYNC_EVT_SLI_PORT)
> +
> +/* Create a Completion Queue. */
> +struct sli4_rqst_cmn_create_cq_v0 {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	__le16		rsvd18;
> +	__le32		dw5_flags;
> +	__le32		dw6_flags;
> +	__le32		rsvd28;
> +	__le32		rsvd32;
> +	struct sli4_dmaaddr page_phys_addr[0];
> +};
> +
> +enum sli4_create_rq_e {
> +	SLI4_RQ_CREATE_DUA		= 0x1,
> +	SLI4_RQ_CREATE_BQU		= 0x2,
> +
> +	SLI4_RQE_SIZE			= 8,
> +	SLI4_RQE_SIZE_8			= 0x2,
> +	SLI4_RQE_SIZE_16		= 0x3,
> +	SLI4_RQE_SIZE_32		= 0x4,
> +	SLI4_RQE_SIZE_64		= 0x5,
> +	SLI4_RQE_SIZE_128		= 0x6,
> +
> +	SLI4_RQ_PAGE_SIZE_4096		= 0x1,
> +	SLI4_RQ_PAGE_SIZE_8192		= 0x2,
> +	SLI4_RQ_PAGE_SIZE_16384		= 0x4,
> +	SLI4_RQ_PAGE_SIZE_32768		= 0x8,
> +	SLI4_RQ_PAGE_SIZE_64536		= 0x10,
> +
> +	SLI4_RQ_CREATE_V0_MAX_PAGES	= 8,
> +	SLI4_RQ_CREATE_V0_MIN_BUF_SIZE	= 128,
> +	SLI4_RQ_CREATE_V0_MAX_BUF_SIZE	= 2048,
> +};
> +
> +struct sli4_rqst_rq_create {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	u8		dua_bqu_byte;
> +	u8		ulp;
> +	__le16		rsvd16;
> +	u8		rqe_count_byte;
> +	u8		rsvd19;
> +	__le32		rsvd20;
> +	__le16		buffer_size;
> +	__le16		cq_id;
> +	__le32		rsvd28;
> +	struct sli4_dmaaddr page_phys_addr[SLI4_RQ_CREATE_V0_MAX_PAGES];
> +};
> +
> +struct sli4_rsp_rq_create {
> +	struct sli4_rsp_cmn_create_queue rsp;
> +};
> +
> +enum sli4_create_rqv1_e {
> +	SLI4_RQ_CREATE_V1_DNB		= 0x80,
> +	SLI4_RQ_CREATE_V1_MAX_PAGES	= 8,
> +	SLI4_RQ_CREATE_V1_MIN_BUF_SIZE	= 64,
> +	SLI4_RQ_CREATE_V1_MAX_BUF_SIZE	= 2048,
> +};
> +
> +struct sli4_rqst_rq_create_v1 {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	u8		rsvd14;
> +	u8		dim_dfd_dnb;
> +	u8		page_size;
> +	u8		rqe_size_byte;
> +	__le16		rqe_count;
> +	__le32		rsvd20;
> +	__le16		rsvd24;
> +	__le16		cq_id;
> +	__le32		buffer_size;
> +	struct sli4_dmaaddr page_phys_addr[SLI4_RQ_CREATE_V1_MAX_PAGES];
> +};
> +
> +struct sli4_rsp_rq_create_v1 {
> +	struct sli4_rsp_cmn_create_queue rsp;
> +};
> +
> +#define	SLI4_RQCREATEV2_DNB	0x80
> +
> +struct sli4_rqst_rq_create_v2 {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	u8		rq_count;
> +	u8		dim_dfd_dnb;
> +	u8		page_size;
> +	u8		rqe_size_byte;
> +	__le16		rqe_count;
> +	__le16		hdr_buffer_size;
> +	__le16		payload_buffer_size;
> +	__le16		base_cq_id;
> +	__le16		rsvd26;
> +	__le32		rsvd42;
> +	struct sli4_dmaaddr page_phys_addr[0];
> +};
> +
> +struct sli4_rsp_rq_create_v2 {
> +	struct sli4_rsp_cmn_create_queue rsp;
> +};
> +
> +#define SLI4_CQE_CODE_OFFSET	14

Missing empty line here.

> +enum sli4_cqe_code {
> +	SLI4_CQE_CODE_WORK_REQUEST_COMPLETION = 0x01,
> +	SLI4_CQE_CODE_RELEASE_WQE,
> +	SLI4_CQE_CODE_RSVD,
> +	SLI4_CQE_CODE_RQ_ASYNC,
> +	SLI4_CQE_CODE_XRI_ABORTED,
> +	SLI4_CQE_CODE_RQ_COALESCING,
> +	SLI4_CQE_CODE_RQ_CONSUMPTION,
> +	SLI4_CQE_CODE_MEASUREMENT_REPORTING,
> +	SLI4_CQE_CODE_RQ_ASYNC_V1,
> +	SLI4_CQE_CODE_RQ_COALESCING_V1,
> +	SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD,
> +	SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA,
> +};
> +
> +#define SLI4_WQ_CREATE_MAX_PAGES		8
> +struct sli4_rqst_wq_create {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16		num_pages;
> +	__le16		cq_id;
> +	u8		page_size;
> +	u8		wqe_size_byte;
> +	__le16		wqe_count;
> +	__le32		rsvd;
> +	struct	sli4_dmaaddr
> +			page_phys_addr[SLI4_WQ_CREATE_MAX_PAGES];
> +};
> +
> +struct sli4_rsp_wq_create {
> +	struct sli4_rsp_cmn_create_queue rsp;
> +};
> +
> +enum sli4_link_attention_flags {
> +	LINK_ATTN_TYPE_LINK_UP		= 0x01,
> +	LINK_ATTN_TYPE_LINK_DOWN	= 0x02,
> +	LINK_ATTN_TYPE_NO_HARD_ALPA	= 0x03,
> +
> +	LINK_ATTN_P2P			= 0x01,
> +	LINK_ATTN_FC_AL			= 0x02,
> +	LINK_ATTN_INTERNAL_LOOPBACK	= 0x03,
> +	LINK_ATTN_SERDES_LOOPBACK	= 0x04,
> +
> +	LINK_ATTN_1G			= 0x01,
> +	LINK_ATTN_2G			= 0x02,
> +	LINK_ATTN_4G			= 0x04,
> +	LINK_ATTN_8G			= 0x08,
> +	LINK_ATTN_10G			= 0x0a,
> +	LINK_ATTN_16G			= 0x10,
> +};
> +
> +struct sli4_link_attention {
> +	u8		link_number;
> +	u8		attn_type;
> +	u8		topology;
> +	u8		port_speed;
> +	u8		port_fault;
> +	u8		shared_link_status;
> +	__le16		logical_link_speed;
> +	__le32		event_tag;
> +	u8		rsvd12;
> +	u8		event_code;
> +	u8		event_type;
> +	u8		flags;
> +};
> +
> +enum sli4_link_event_type {
> +	FC_EVENT_LINK_ATTENTION		= 0x01,
> +	FC_EVENT_SHARED_LINK_ATTENTION	= 0x02,
> +};
> +
> +enum sli4_wcqe_flags {
> +	SLI4_WCQE_XB = 0x10,
> +	SLI4_WCQE_QX = 0x80,
> +};
> +
> +struct sli4_fc_wcqe {
> +	u8		hw_status;
> +	u8		status;
> +	__le16		request_tag;
> +	__le32		wqe_specific_1;
> +	__le32		wqe_specific_2;
> +	u8		rsvd12;
> +	u8		qx_byte;
> +	u8		code;
> +	u8		flags;
> +};
> +
> +/* FC WQ consumed CQ queue entry */
> +struct sli4_fc_wqec {
> +	__le32		rsvd0;
> +	__le32		rsvd1;
> +	__le16		wqe_index;
> +	__le16		wq_id;
> +	__le16		rsvd12;
> +	u8		code;
> +	u8		vld_byte;
> +};
> +
> +/* FC Completion Status Codes. */
> +enum sli4_wcqe_status {
> +	SLI4_FC_WCQE_STATUS_SUCCESS,
> +	SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,
> +	SLI4_FC_WCQE_STATUS_REMOTE_STOP,
> +	SLI4_FC_WCQE_STATUS_LOCAL_REJECT,
> +	SLI4_FC_WCQE_STATUS_NPORT_RJT,
> +	SLI4_FC_WCQE_STATUS_FABRIC_RJT,
> +	SLI4_FC_WCQE_STATUS_NPORT_BSY,
> +	SLI4_FC_WCQE_STATUS_FABRIC_BSY,
> +	SLI4_FC_WCQE_STATUS_RSVD,
> +	SLI4_FC_WCQE_STATUS_LS_RJT,
> +	SLI4_FC_WCQE_STATUS_RX_BUF_OVERRUN,
> +	SLI4_FC_WCQE_STATUS_CMD_REJECT,
> +	SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,
> +	SLI4_FC_WCQE_STATUS_RSVD1,
> +	SLI4_FC_WCQE_STATUS_ELS_CMPLT_NO_AUTOREG,
> +	SLI4_FC_WCQE_STATUS_RSVD2,
> +	SLI4_FC_WCQE_STATUS_RQ_SUCCESS, //0x10
> +	SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED,
> +	SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
> +	SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC,
> +	SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,
> +	SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,
> +	SLI4_FC_WCQE_STATUS_DI_ERROR,
> +	SLI4_FC_WCQE_STATUS_BA_RJT,
> +	SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
> +	SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC,
> +	SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,
> +	SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,
> +
> +	/* driver generated status codes */
> +	SLI4_FC_WCQE_STATUS_DISPATCH_ERROR	= 0xfd,
> +	SLI4_FC_WCQE_STATUS_SHUTDOWN		= 0xfe,
> +	SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT	= 0xff,
> +};
> +
> +/* DI_ERROR Extended Status */
> +enum sli4_fc_di_error_status {
> +	SLI4_FC_DI_ERROR_GE			= (1 << 0),
> +	SLI4_FC_DI_ERROR_AE			= (1 << 1),
> +	SLI4_FC_DI_ERROR_RE			= (1 << 2),
> +	SLI4_FC_DI_ERROR_TDPV			= (1 << 3),
> +	SLI4_FC_DI_ERROR_UDB			= (1 << 4),
> +	SLI4_FC_DI_ERROR_EDIR			= (1 << 5),
> +};
> +
> +/* WQE DIF field contents */
> +enum sli4_dif_fields {
> +	SLI4_DIF_DISABLED,
> +	SLI4_DIF_PASS_THROUGH,
> +	SLI4_DIF_STRIP,
> +	SLI4_DIF_INSERT,
> +};
> +
> +/* Work Queue Entry (WQE) types */
> +enum sli4_wqe_types {
> +	SLI4_WQE_ABORT				= 0x0f,
> +	SLI4_WQE_ELS_REQUEST64			= 0x8a,
> +	SLI4_WQE_FCP_IBIDIR64			= 0xac,
> +	SLI4_WQE_FCP_IREAD64			= 0x9a,
> +	SLI4_WQE_FCP_IWRITE64			= 0x98,
> +	SLI4_WQE_FCP_ICMND64			= 0x9c,
> +	SLI4_WQE_FCP_TRECEIVE64			= 0xa1,
> +	SLI4_WQE_FCP_CONT_TRECEIVE64		= 0xe5,
> +	SLI4_WQE_FCP_TRSP64			= 0xa3,
> +	SLI4_WQE_FCP_TSEND64			= 0x9f,
> +	SLI4_WQE_GEN_REQUEST64			= 0xc2,
> +	SLI4_WQE_SEND_FRAME			= 0xe1,
> +	SLI4_WQE_XMIT_BCAST64			= 0x84,
> +	SLI4_WQE_XMIT_BLS_RSP			= 0x97,
> +	SLI4_WQE_ELS_RSP64			= 0x95,
> +	SLI4_WQE_XMIT_SEQUENCE64		= 0x82,
> +	SLI4_WQE_REQUEUE_XRI			= 0x93,
> +};
> +
> +/* WQE command types */
> +enum sli4_wqe_cmds {
> +	SLI4_CMD_FCP_IREAD64_WQE		= 0x00,
> +	SLI4_CMD_FCP_ICMND64_WQE		= 0x00,
> +	SLI4_CMD_FCP_IWRITE64_WQE		= 0x01,
> +	SLI4_CMD_FCP_TRECEIVE64_WQE		= 0x02,
> +	SLI4_CMD_FCP_TRSP64_WQE			= 0x03,
> +	SLI4_CMD_FCP_TSEND64_WQE		= 0x07,
> +	SLI4_CMD_GEN_REQUEST64_WQE		= 0x08,
> +	SLI4_CMD_XMIT_BCAST64_WQE		= 0x08,
> +	SLI4_CMD_XMIT_BLS_RSP64_WQE		= 0x08,
> +	SLI4_CMD_ABORT_WQE			= 0x08,
> +	SLI4_CMD_XMIT_SEQUENCE64_WQE		= 0x08,
> +	SLI4_CMD_REQUEUE_XRI_WQE		= 0x0A,
> +	SLI4_CMD_SEND_FRAME_WQE			= 0x0a,
> +};
> +
> +#define SLI4_WQE_SIZE				0x05
> +#define SLI4_WQE_EXT_SIZE			0x06
> +
> +#define SLI4_WQE_BYTES				(16 * sizeof(u32))
> +#define SLI4_WQE_EXT_BYTES			(32 * sizeof(u32))
> +
> +/* Mask for ccp (CS_CTL) */
> +#define SLI4_MASK_CCP				0xfe
> +
> +/* Generic WQE */
> +enum sli4_gen_wqe_flags {
> +	SLI4_GEN_WQE_EBDECNT	= (0xf << 0),
> +	SLI4_GEN_WQE_LEN_LOC	= (0x3 << 7),
> +	SLI4_GEN_WQE_QOSD	= (1 << 9),
> +	SLI4_GEN_WQE_XBL	= (1 << 11),
> +	SLI4_GEN_WQE_HLM	= (1 << 12),
> +	SLI4_GEN_WQE_IOD	= (1 << 13),
> +	SLI4_GEN_WQE_DBDE	= (1 << 14),
> +	SLI4_GEN_WQE_WQES	= (1 << 15),
> +
> +	SLI4_GEN_WQE_PRI	= (0x7),
> +	SLI4_GEN_WQE_PV		= (1 << 3),
> +	SLI4_GEN_WQE_EAT	= (1 << 4),
> +	SLI4_GEN_WQE_XC		= (1 << 5),
> +	SLI4_GEN_WQE_CCPE	= (1 << 7),
> +
> +	SLI4_GEN_WQE_CMDTYPE	= (0xf),
> +	SLI4_GEN_WQE_WQEC	= (1 << 7),
> +};
> +
> +struct sli4_generic_wqe {
> +	__le32		cmd_spec0_5[6];
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +	__le16		dw10w0_flags;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmdtype_wqec_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +};
> +
> +/* WQE used to abort exchanges. */
> +enum sli4_abort_wqe_flags {
> +	SLI4_ABRT_WQE_IR	= 0x02,
> +
> +	SLI4_ABRT_WQE_EBDECNT	= (0xf << 0),
> +	SLI4_ABRT_WQE_LEN_LOC	= (0x3 << 7),
> +	SLI4_ABRT_WQE_QOSD	= (1 << 9),
> +	SLI4_ABRT_WQE_XBL	= (1 << 11),
> +	SLI4_ABRT_WQE_IOD	= (1 << 13),
> +	SLI4_ABRT_WQE_DBDE	= (1 << 14),
> +	SLI4_ABRT_WQE_WQES	= (1 << 15),
> +
> +	SLI4_ABRT_WQE_PRI	= (0x7),
> +	SLI4_ABRT_WQE_PV	= (1 << 3),
> +	SLI4_ABRT_WQE_EAT	= (1 << 4),
> +	SLI4_ABRT_WQE_XC	= (1 << 5),
> +	SLI4_ABRT_WQE_CCPE	= (1 << 7),
> +
> +	SLI4_ABRT_WQE_CMDTYPE	= (0xf),
> +	SLI4_ABRT_WQE_WQEC	= (1 << 7),
> +};
> +
> +struct sli4_abort_wqe {
> +	__le32		rsvd0;
> +	__le32		rsvd4;
> +	__le32		ext_t_tag;
> +	u8		ia_ir_byte;
> +	u8		criteria;
> +	__le16		rsvd10;
> +	__le32		ext_t_mask;
> +	__le32		t_mask;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_byte;
> +	u8		timer;
> +	__le32		t_tag;
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +	__le16		dw10w0_flags;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmdtype_wqec_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +};
> +
> +enum sli4_abort_criteria {
> +	SLI4_ABORT_CRITERIA_XRI_TAG = 0x01,
> +	SLI4_ABORT_CRITERIA_ABORT_TAG,
> +	SLI4_ABORT_CRITERIA_REQUEST_TAG,
> +	SLI4_ABORT_CRITERIA_EXT_ABORT_TAG,
> +};
> +
> +enum sli4_abort_type {
> +	SLI_ABORT_XRI,
> +	SLI_ABORT_ABORT_ID,
> +	SLI_ABORT_REQUEST_ID,
> +	SLI_ABORT_MAX,		/* must be last */
> +};
> +
> +/* WQE used to create an ELS request. */
> +enum sli4_els_req_wqe_flags {
> +	SLI4_REQ_WQE_QOSD		= 0x2,
> +	SLI4_REQ_WQE_DBDE		= 0x40,
> +	SLI4_REQ_WQE_XBL		= 0x8,
> +	SLI4_REQ_WQE_XC			= 0x20,
> +	SLI4_REQ_WQE_IOD		= 0x20,
> +	SLI4_REQ_WQE_HLM		= 0x10,
> +	SLI4_REQ_WQE_CCPE		= 0x80,
> +	SLI4_REQ_WQE_EAT		= 0x10,
> +	SLI4_REQ_WQE_WQES		= 0x80,
> +	SLI4_REQ_WQE_PU_SHFT		= 4,
> +	SLI4_REQ_WQE_CT_SHFT		= 2,
> +	SLI4_REQ_WQE_CT			= 0xc,
> +	SLI4_REQ_WQE_ELSID_SHFT		= 4,
> +	SLI4_REQ_WQE_SP_SHFT		= 24,
> +	SLI4_REQ_WQE_LEN_LOC_BIT1	= 0x80,
> +	SLI4_REQ_WQE_LEN_LOC_BIT2	= 0x1,
> +};
> +
> +struct sli4_els_request64_wqe {
> +	struct sli4_bde	els_request_payload;
> +	__le32		els_request_payload_length;
> +	__le32		sid_sp_dword;
> +	__le32		remote_id_dword;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		temporary_rpi;
> +	u8		len_loc1_byte;
> +	u8		qosd_xbl_hlm_iod_dbde_wqes;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmdtype_elsid_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	struct sli4_bde	els_response_payload_bde;
> +	__le32		max_response_payload_length;
> +};
> +
> +/* WQE used to create an FCP initiator no data command. */
> +enum sli4_icmd_wqe_flags {
> +	SLI4_ICMD_WQE_DBDE		= 0x40,
> +	SLI4_ICMD_WQE_XBL		= 0x8,
> +	SLI4_ICMD_WQE_XC		= 0x20,
> +	SLI4_ICMD_WQE_IOD		= 0x20,
> +	SLI4_ICMD_WQE_HLM		= 0x10,
> +	SLI4_ICMD_WQE_CCPE		= 0x80,
> +	SLI4_ICMD_WQE_EAT		= 0x10,
> +	SLI4_ICMD_WQE_APPID		= 0x10,
> +	SLI4_ICMD_WQE_WQES		= 0x80,
> +	SLI4_ICMD_WQE_PU_SHFT		= 4,
> +	SLI4_ICMD_WQE_CT_SHFT		= 2,
> +	SLI4_ICMD_WQE_BS_SHFT		= 4,
> +	SLI4_ICMD_WQE_LEN_LOC_BIT1	= 0x80,
> +	SLI4_ICMD_WQE_LEN_LOC_BIT2	= 0x1,
> +};
> +
> +struct sli4_fcp_icmnd64_wqe {
> +	struct sli4_bde	bde;
> +	__le16		payload_offset_length;
> +	__le16		fcp_cmd_buffer_length;
> +	__le32		rsvd12;
> +	__le32		remote_n_port_id_dword;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		dif_ct_bs_byte;
> +	u8		command;
> +	u8		class_pu_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +	u8		len_loc1_byte;
> +	u8		qosd_xbl_hlm_iod_dbde_wqes;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	__le32		rsvd44;
> +	__le32		rsvd48;
> +	__le32		rsvd52;
> +	__le32		rsvd56;
> +};
> +
> +/* WQE used to create an FCP initiator read. */
> +enum sli4_ir_wqe_flags {
> +	SLI4_IR_WQE_DBDE		= 0x40,
> +	SLI4_IR_WQE_XBL			= 0x8,
> +	SLI4_IR_WQE_XC			= 0x20,
> +	SLI4_IR_WQE_IOD			= 0x20,
> +	SLI4_IR_WQE_HLM			= 0x10,
> +	SLI4_IR_WQE_CCPE		= 0x80,
> +	SLI4_IR_WQE_EAT			= 0x10,
> +	SLI4_IR_WQE_APPID		= 0x10,
> +	SLI4_IR_WQE_WQES		= 0x80,
> +	SLI4_IR_WQE_PU_SHFT		= 4,
> +	SLI4_IR_WQE_CT_SHFT		= 2,
> +	SLI4_IR_WQE_BS_SHFT		= 4,
> +	SLI4_IR_WQE_LEN_LOC_BIT1	= 0x80,
> +	SLI4_IR_WQE_LEN_LOC_BIT2	= 0x1,
> +};
> +
> +struct sli4_fcp_iread64_wqe {
> +	struct sli4_bde	bde;
> +	__le16		payload_offset_length;
> +	__le16		fcp_cmd_buffer_length;
> +
> +	__le32		total_transfer_length;
> +
> +	__le32		remote_n_port_id_dword;
> +
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +
> +	u8		dif_ct_bs_byte;
> +	u8		command;
> +	u8		class_pu_byte;
> +	u8		timer;
> +
> +	__le32		abort_tag;
> +
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +
> +	u8		len_loc1_byte;
> +	u8		qosd_xbl_hlm_iod_dbde_wqes;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +
> +	__le32		rsvd44;
> +	struct sli4_bde	first_data_bde;
> +};
> +
> +/* WQE used to create an FCP initiator write. */
> +enum sli4_iwr_wqe_flags {
> +	SLI4_IWR_WQE_DBDE		= 0x40,
> +	SLI4_IWR_WQE_XBL		= 0x8,
> +	SLI4_IWR_WQE_XC			= 0x20,
> +	SLI4_IWR_WQE_IOD		= 0x20,
> +	SLI4_IWR_WQE_HLM		= 0x10,
> +	SLI4_IWR_WQE_DNRX		= 0x10,
> +	SLI4_IWR_WQE_CCPE		= 0x80,
> +	SLI4_IWR_WQE_EAT		= 0x10,
> +	SLI4_IWR_WQE_APPID		= 0x10,
> +	SLI4_IWR_WQE_WQES		= 0x80,
> +	SLI4_IWR_WQE_PU_SHFT		= 4,
> +	SLI4_IWR_WQE_CT_SHFT		= 2,
> +	SLI4_IWR_WQE_BS_SHFT		= 4,
> +	SLI4_IWR_WQE_LEN_LOC_BIT1	= 0x80,
> +	SLI4_IWR_WQE_LEN_LOC_BIT2	= 0x1,
> +};
> +
> +struct sli4_fcp_iwrite64_wqe {
> +	struct sli4_bde	bde;
> +	__le16		payload_offset_length;
> +	__le16		fcp_cmd_buffer_length;
> +	__le16		total_transfer_length;
> +	__le16		initial_transfer_length;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		dif_ct_bs_byte;
> +	u8		command;
> +	u8		class_pu_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +	u8		len_loc1_byte;
> +	u8		qosd_xbl_hlm_iod_dbde_wqes;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	__le32		remote_n_port_id_dword;
> +	struct sli4_bde	first_data_bde;
> +};
> +
> +struct sli4_fcp_128byte_wqe {
> +	u32 dw[32];
> +};
> +
> +/* WQE used to create an FCP target receive */
> +enum sli4_trcv_wqe_flags {
> +	SLI4_TRCV_WQE_DBDE		= 0x40,
> +	SLI4_TRCV_WQE_XBL		= 0x8,
> +	SLI4_TRCV_WQE_AR		= 0x8,
> +	SLI4_TRCV_WQE_XC		= 0x20,
> +	SLI4_TRCV_WQE_IOD		= 0x20,
> +	SLI4_TRCV_WQE_HLM		= 0x10,
> +	SLI4_TRCV_WQE_DNRX		= 0x10,
> +	SLI4_TRCV_WQE_CCPE		= 0x80,
> +	SLI4_TRCV_WQE_EAT		= 0x10,
> +	SLI4_TRCV_WQE_APPID		= 0x10,
> +	SLI4_TRCV_WQE_WQES		= 0x80,
> +	SLI4_TRCV_WQE_PU_SHFT		= 4,
> +	SLI4_TRCV_WQE_CT_SHFT		= 2,
> +	SLI4_TRCV_WQE_BS_SHFT		= 4,
> +	SLI4_TRCV_WQE_LEN_LOC_BIT2	= 0x1,
> +};
> +
> +struct sli4_fcp_treceive64_wqe {
> +	struct sli4_bde	bde;
> +	__le32		payload_offset_length;
> +	__le32		relative_offset;
> +	union {
> +		__le16		sec_xri_tag;
> +		__le16		rsvd;
> +		__le32		dword;
> +	} dword5;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		dif_ct_bs_byte;
> +	u8		command;
> +	u8		class_ar_pu_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		remote_xid;
> +	u8		lloc1_appid;
> +	u8		qosd_xbl_hlm_iod_dbde_wqes;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	__le32		fcp_data_receive_length;
> +	struct sli4_bde	first_data_bde;
> +};
> +
> +/* WQE used to create an FCP target response */
> +enum sli4_trsp_wqe_flags {
> +	SLI4_TRSP_WQE_AG	= 0x8,
> +	SLI4_TRSP_WQE_DBDE	= 0x40,
> +	SLI4_TRSP_WQE_XBL	= 0x8,
> +	SLI4_TRSP_WQE_XC	= 0x20,
> +	SLI4_TRSP_WQE_HLM	= 0x10,
> +	SLI4_TRSP_WQE_DNRX	= 0x10,
> +	SLI4_TRSP_WQE_CCPE	= 0x80,
> +	SLI4_TRSP_WQE_EAT	= 0x10,
> +	SLI4_TRSP_WQE_APPID	= 0x10,
> +	SLI4_TRSP_WQE_WQES	= 0x80,
> +};
> +
> +struct sli4_fcp_trsp64_wqe {
> +	struct sli4_bde	bde;
> +	__le32		fcp_response_length;
> +	__le32		rsvd12;
> +	__le32		dword5;
> +	__le16		xri_tag;
> +	__le16		rpi;
> +	u8		ct_dnrx_byte;
> +	u8		command;
> +	u8		class_ag_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		remote_xid;
> +	u8		lloc1_appid;
> +	u8		qosd_xbl_hlm_dbde_wqes;
> +	u8		eat_xc_ccpe;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	__le32		rsvd44;
> +	__le32		rsvd48;
> +	__le32		rsvd52;
> +	__le32		rsvd56;
> +};
> +
> +/* WQE used to create an FCP target send (DATA IN). */
> +enum sli4_tsend_wqe_flags {
> +	SLI4_TSEND_WQE_XBL	= 0x8,
> +	SLI4_TSEND_WQE_DBDE	= 0x40,
> +	SLI4_TSEND_WQE_IOD	= 0x20,
> +	SLI4_TSEND_WQE_QOSD	= 0x2,
> +	SLI4_TSEND_WQE_HLM	= 0x10,
> +	SLI4_TSEND_WQE_PU_SHFT	= 4,
> +	SLI4_TSEND_WQE_AR	= 0x8,
> +	SLI4_TSEND_CT_SHFT	= 2,
> +	SLI4_TSEND_BS_SHFT	= 4,
> +	SLI4_TSEND_LEN_LOC_BIT2 = 0x1,
> +	SLI4_TSEND_CCPE		= 0x80,
> +	SLI4_TSEND_APPID_VALID	= 0x20,
> +	SLI4_TSEND_WQES		= 0x80,
> +	SLI4_TSEND_XC		= 0x20,
> +	SLI4_TSEND_EAT		= 0x10,
> +};
> +
> +struct sli4_fcp_tsend64_wqe {
> +	struct sli4_bde	bde;
> +	__le32		payload_offset_length;
> +	__le32		relative_offset;
> +	__le32		dword5;
> +	__le16		xri_tag;
> +	__le16		rpi;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_pu_ar_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		remote_xid;
> +	u8		dw10byte0;
> +	u8		ll_qd_xbl_hlm_iod_dbde;
> +	u8		dw10byte2;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd45;
> +	__le16		cq_id;
> +	__le32		fcp_data_transmit_length;
> +	struct sli4_bde	first_data_bde;
> +};
> +
> +/* WQE used to create a general request. */
> +enum sli4_gen_req_wqe_flags {
> +	SLI4_GEN_REQ64_WQE_XBL	= 0x8,
> +	SLI4_GEN_REQ64_WQE_DBDE	= 0x40,
> +	SLI4_GEN_REQ64_WQE_IOD	= 0x20,
> +	SLI4_GEN_REQ64_WQE_QOSD	= 0x2,
> +	SLI4_GEN_REQ64_WQE_HLM	= 0x10,
> +	SLI4_GEN_REQ64_CT_SHFT	= 2,
> +};
> +
> +struct sli4_gen_request64_wqe {
> +	struct sli4_bde	bde;
> +	__le32		request_payload_length;
> +	__le32		relative_offset;
> +	u8		rsvd17;
> +	u8		df_ctl;
> +	u8		type;
> +	u8		r_ctl;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +	u8		dw10flags0;
> +	u8		dw10flags1;
> +	u8		dw10flags2;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	__le32		remote_n_port_id_dword;
> +	__le32		rsvd48;
> +	__le32		rsvd52;
> +	__le32		max_response_payload_length;
> +};
> +
> +/* WQE used to create a send frame request */
> +enum sli4_sf_wqe_flags {
> +	SLI4_SF_WQE_DBDE	= 0x40,
> +	SLI4_SF_PU		= 0x30,
> +	SLI4_SF_CT		= 0xc,
> +	SLI4_SF_QOSD		= 0x2,
> +	SLI4_SF_LEN_LOC_BIT1	= 0x80,
> +	SLI4_SF_LEN_LOC_BIT2	= 0x1,
> +	SLI4_SF_XC		= 0x20,
> +	SLI4_SF_XBL		= 0x8,
> +};
> +
> +struct sli4_send_frame_wqe {
> +	struct sli4_bde	bde;
> +	__le32		frame_length;
> +	__le32		fc_header_0_1[2];
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		dw7flags0;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	u8		eof;
> +	u8		sof;
> +	u8		dw10flags0;
> +	u8		dw10flags1;
> +	u8		dw10flags2;
> +	u8		ccp;
> +	u8		cmd_type_byte;
> +	u8		rsvd41;
> +	__le16		cq_id;
> +	__le32		fc_header_2_5[4];
> +};
> +
> +/* WQE used to create a transmit sequence */
> +enum sli4_seq_wqe_flags {
> +	SLI4_SEQ_WQE_DBDE		= 0x4000,
> +	SLI4_SEQ_WQE_XBL		= 0x800,
> +	SLI4_SEQ_WQE_SI			= 0x4,
> +	SLI4_SEQ_WQE_FT			= 0x8,
> +	SLI4_SEQ_WQE_XO			= 0x40,
> +	SLI4_SEQ_WQE_LS			= 0x80,
> +	SLI4_SEQ_WQE_DIF		= 0x3,
> +	SLI4_SEQ_WQE_BS			= 0x70,
> +	SLI4_SEQ_WQE_PU			= 0x30,
> +	SLI4_SEQ_WQE_HLM		= 0x1000,
> +	SLI4_SEQ_WQE_IOD_SHIFT		= 13,
> +	SLI4_SEQ_WQE_CT_SHIFT		= 2,
> +	SLI4_SEQ_WQE_LEN_LOC_SHIFT	= 7,
> +};
> +
> +struct sli4_xmit_sequence64_wqe {
> +	struct sli4_bde	bde;
> +	__le32		remote_n_port_id_dword;
> +	__le32		relative_offset;
> +	u8		dw5flags0;
> +	u8		df_ctl;
> +	u8		type;
> +	u8		r_ctl;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		dw7flags0;
> +	u8		command;
> +	u8		dw7flags1;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		remote_xid;
> +	__le16		dw10w0;
> +	u8		dw10flags0;
> +	u8		ccp;
> +	u8		cmd_type_wqec_byte;
> +	u8		rsvd45;
> +	__le16		cq_id;
> +	__le32		sequence_payload_len;
> +	__le32		rsvd48;
> +	__le32		rsvd52;
> +	__le32		rsvd56;
> +};
> +
> +/*
> + * WQE used to unblock the specified XRI and to release
> + * it to the SLI Port's free pool.
> + */
> +enum sli4_requeue_wqe_flags {
> +	SLI4_REQU_XRI_WQE_XC	= 0x20,
> +	SLI4_REQU_XRI_WQE_QOSD	= 0x2,
> +};
> +
> +struct sli4_requeue_xri_wqe {
> +	__le32		rsvd0;
> +	__le32		rsvd4;
> +	__le32		rsvd8;
> +	__le32		rsvd12;
> +	__le32		rsvd16;
> +	__le32		rsvd20;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_byte;
> +	u8		timer;
> +	__le32		rsvd32;
> +	__le16		request_tag;
> +	__le16		rsvd34;
> +	__le16		flags0;
> +	__le16		flags1;
> +	__le16		flags2;
> +	u8		ccp;
> +	u8		cmd_type_wqec_byte;
> +	u8		rsvd42;
> +	__le16		cq_id;
> +	__le32		rsvd44;
> +	__le32		rsvd48;
> +	__le32		rsvd52;
> +	__le32		rsvd56;
> +};
> +
> +/* WQE used to create a BLS response */
> +enum sli4_bls_rsp_wqe_flags {
> +	SLI4_BLS_RSP_RID		= 0xffffff,
> +	SLI4_BLS_RSP_WQE_AR		= 0x40000000,
> +	SLI4_BLS_RSP_WQE_CT_SHFT	= 2,
> +	SLI4_BLS_RSP_WQE_QOSD		= 0x2,
> +	SLI4_BLS_RSP_WQE_HLM		= 0x10,
> +};
> +
> +struct sli4_xmit_bls_rsp_wqe {
> +	__le32		payload_word0;
> +	__le16		rx_id;
> +	__le16		ox_id;
> +	__le16		high_seq_cnt;
> +	__le16		low_seq_cnt;
> +	__le32		rsvd12;
> +	__le32		local_n_port_id_dword;
> +	__le32		remote_id_dword;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		dw8flags0;
> +	u8		command;
> +	u8		dw8flags1;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		rsvd38;
> +	u8		dw11flags0;
> +	u8		dw11flags1;
> +	u8		dw11flags2;
> +	u8		ccp;
> +	u8		dw12flags0;
> +	u8		rsvd45;
> +	__le16		cq_id;
> +	__le16		temporary_rpi;
> +	u8		rsvd50;
> +	u8		rsvd51;
> +	__le32		rsvd52;
> +	__le32		rsvd56;
> +	__le32		rsvd60;
> +};
> +
> +enum sli_bls_type {
> +	SLI4_SLI_BLS_ACC,
> +	SLI4_SLI_BLS_RJT,
> +	SLI4_SLI_BLS_MAX
> +};
> +
> +struct sli_bls_payload {
> +	enum sli_bls_type	type;
> +	__le16			ox_id;
> +	__le16			rx_id;
> +	union {
> +		struct {
> +			u8	seq_id_validity;
> +			u8	seq_id_last;
> +			u8	rsvd2;
> +			u8	rsvd3;
> +			u16	ox_id;
> +			u16	rx_id;
> +			__le16	low_seq_cnt;
> +			__le16	high_seq_cnt;
> +		} acc;
> +		struct {
> +			u8	vendor_unique;
> +			u8	reason_explanation;
> +			u8	reason_code;
> +			u8	rsvd3;
> +		} rjt;
> +	} u;
> +};
> +
> +/* WQE used to create an ELS response */
> +
> +enum sli4_els_rsp_flags {
> +	SLI4_ELS_SID		= 0xffffff,
> +	SLI4_ELS_RID		= 0xffffff,
> +	SLI4_ELS_DBDE		= 0x40,
> +	SLI4_ELS_XBL		= 0x8,
> +	SLI4_ELS_IOD		= 0x20,
> +	SLI4_ELS_QOSD		= 0x2,
> +	SLI4_ELS_XC		= 0x20,
> +	SLI4_ELS_CT_OFFSET	= 0X2,
> +	SLI4_ELS_SP		= 0X1000000,
> +	SLI4_ELS_HLM		= 0X10,
> +};
> +
> +struct sli4_xmit_els_rsp64_wqe {
> +	struct sli4_bde	els_response_payload;
> +	__le32		els_response_payload_length;
> +	__le32		sid_dw;
> +	__le32		rid_dw;
> +	__le16		xri_tag;
> +	__le16		context_tag;
> +	u8		ct_byte;
> +	u8		command;
> +	u8		class_byte;
> +	u8		timer;
> +	__le32		abort_tag;
> +	__le16		request_tag;
> +	__le16		ox_id;
> +	u8		flags1;
> +	u8		flags2;
> +	u8		flags3;
> +	u8		flags4;
> +	u8		cmd_type_wqec;
> +	u8		rsvd34;
> +	__le16		cq_id;
> +	__le16		temporary_rpi;
> +	__le16		rsvd38;
> +	u32		rsvd40;
> +	u32		rsvd44;
> +	u32		rsvd48;
> +};
> +
> +/* Parameters used to populate WQE */
> +struct sli_bls_params {
> +	u32		s_id;
> +	u16		ox_id;
> +	u16		rx_id;
> +	u8		payload[12];
> +};
> +
> +struct sli_els_params {
> +	u32		s_id;
> +	u16		ox_id;
> +	u8		timeout;
> +};
> +
> +struct sli_ct_params {
> +	u8		r_ctl;
> +	u8		type;
> +	u8		df_ctl;
> +	u8		timeout;
> +	u16		ox_id;
> +};
> +
> +struct sli_fcp_tgt_params {
> +	u32		offset;
> +	u16		ox_id;
> +	u16		flags;
> +	u8		cs_ctl;
> +	u8		timeout;
> +	u32		app_id;
> +};
> +
> +/* Local Reject Reason Codes */
> +enum sli4_fc_local_rej_codes {
> +	SLI4_FC_LOCAL_REJECT_UNKNOWN,
> +	SLI4_FC_LOCAL_REJECT_MISSING_CONTINUE,
> +	SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT,
> +	SLI4_FC_LOCAL_REJECT_INTERNAL_ERROR,
> +	SLI4_FC_LOCAL_REJECT_INVALID_RPI,
> +	SLI4_FC_LOCAL_REJECT_NO_XRI,
> +	SLI4_FC_LOCAL_REJECT_ILLEGAL_COMMAND,
> +	SLI4_FC_LOCAL_REJECT_XCHG_DROPPED,
> +	SLI4_FC_LOCAL_REJECT_ILLEGAL_FIELD,
> +	SLI4_FC_LOCAL_REJECT_RPI_SUSPENDED,
> +	SLI4_FC_LOCAL_REJECT_RSVD,
> +	SLI4_FC_LOCAL_REJECT_RSVD1,
> +	SLI4_FC_LOCAL_REJECT_NO_ABORT_MATCH,
> +	SLI4_FC_LOCAL_REJECT_TX_DMA_FAILED,
> +	SLI4_FC_LOCAL_REJECT_RX_DMA_FAILED,
> +	SLI4_FC_LOCAL_REJECT_ILLEGAL_FRAME,
> +	SLI4_FC_LOCAL_REJECT_RSVD2,
> +	SLI4_FC_LOCAL_REJECT_NO_RESOURCES, //0x11
> +	SLI4_FC_LOCAL_REJECT_FCP_CONF_FAILURE,
> +	SLI4_FC_LOCAL_REJECT_ILLEGAL_LENGTH,
> +	SLI4_FC_LOCAL_REJECT_UNSUPPORTED_FEATURE,
> +	SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS,
> +	SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED,
> +	SLI4_FC_LOCAL_REJECT_RCV_BUFFER_TIMEOUT,
> +	SLI4_FC_LOCAL_REJECT_LOOP_OPEN_FAILURE,
> +	SLI4_FC_LOCAL_REJECT_RSVD3,
> +	SLI4_FC_LOCAL_REJECT_LINK_DOWN,
> +	SLI4_FC_LOCAL_REJECT_CORRUPTED_DATA,
> +	SLI4_FC_LOCAL_REJECT_CORRUPTED_RPI,
> +	SLI4_FC_LOCAL_REJECT_OUTOFORDER_DATA,
> +	SLI4_FC_LOCAL_REJECT_OUTOFORDER_ACK,
> +	SLI4_FC_LOCAL_REJECT_DUP_FRAME,
> +	SLI4_FC_LOCAL_REJECT_LINK_CONTROL_FRAME, //0x20
> +	SLI4_FC_LOCAL_REJECT_BAD_HOST_ADDRESS,
> +	SLI4_FC_LOCAL_REJECT_RSVD4,
> +	SLI4_FC_LOCAL_REJECT_MISSING_HDR_BUFFER,
> +	SLI4_FC_LOCAL_REJECT_MSEQ_CHAIN_CORRUPTED,
> +	SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED,
> +	SLI4_FC_LOCAL_REJECT_BUFFER_SHORTAGE	= 0x28,
> +	SLI4_FC_LOCAL_REJECT_RCV_XRIBUF_WAITING,
> +	SLI4_FC_LOCAL_REJECT_INVALID_VPI	= 0x2e,
> +	SLI4_FC_LOCAL_REJECT_NO_FPORT_DETECTED,
> +	SLI4_FC_LOCAL_REJECT_MISSING_XRIBUF,
> +	SLI4_FC_LOCAL_REJECT_RSVD5,
> +	SLI4_FC_LOCAL_REJECT_INVALID_XRI,
> +	SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET	= 0x40,
> +	SLI4_FC_LOCAL_REJECT_MISSING_RELOFFSET,
> +	SLI4_FC_LOCAL_REJECT_INSUFF_BUFFERSPACE,
> +	SLI4_FC_LOCAL_REJECT_MISSING_SI,
> +	SLI4_FC_LOCAL_REJECT_MISSING_ES,
> +	SLI4_FC_LOCAL_REJECT_INCOMPLETE_XFER,
> +	SLI4_FC_LOCAL_REJECT_SLER_FAILURE,
> +	SLI4_FC_LOCAL_REJECT_SLER_CMD_RCV_FAILURE,
> +	SLI4_FC_LOCAL_REJECT_SLER_REC_RJT_ERR,
> +	SLI4_FC_LOCAL_REJECT_SLER_REC_SRR_RETRY_ERR,
> +	SLI4_FC_LOCAL_REJECT_SLER_SRR_RJT_ERR,
> +	SLI4_FC_LOCAL_REJECT_RSVD6,
> +	SLI4_FC_LOCAL_REJECT_SLER_RRQ_RJT_ERR,
> +	SLI4_FC_LOCAL_REJECT_SLER_RRQ_RETRY_ERR,
> +	SLI4_FC_LOCAL_REJECT_SLER_ABTS_ERR,
> +};
> +
> +enum sli4_async_rcqe_flags {
> +	SLI4_RACQE_RQ_EL_INDX	= 0xfff,
> +	SLI4_RACQE_FCFI		= 0x3f,
> +	SLI4_RACQE_HDPL		= 0x3f,
> +	SLI4_RACQE_RQ_ID	= 0xffc0,
> +};
> +
> +struct sli4_fc_async_rcqe {
> +	u8		rsvd0;
> +	u8		status;
> +	__le16		rq_elmt_indx_word;
> +	__le32		rsvd4;
> +	__le16		fcfi_rq_id_word;
> +	__le16		data_placement_length;
> +	u8		sof_byte;
> +	u8		eof_byte;
> +	u8		code;
> +	u8		hdpl_byte;
> +};
> +
> +struct sli4_fc_async_rcqe_v1 {
> +	u8		rsvd0;
> +	u8		status;
> +	__le16		rq_elmt_indx_word;
> +	u8		fcfi_byte;
> +	u8		rsvd5;
> +	__le16		rsvd6;
> +	__le16		rq_id;
> +	__le16		data_placement_length;
> +	u8		sof_byte;
> +	u8		eof_byte;
> +	u8		code;
> +	u8		hdpl_byte;
> +};
> +
> +enum sli4_fc_async_rq_status {
> +	SLI4_FC_ASYNC_RQ_SUCCESS = 0x10,
> +	SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED,
> +	SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED,
> +	SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC,
> +	SLI4_FC_ASYNC_RQ_DMA_FAILURE,
> +};
> +
> +#define SLI4_RCQE_RQ_EL_INDX	0xfff
> +
> +struct sli4_fc_coalescing_rcqe {
> +	u8		rsvd0;
> +	u8		status;
> +	__le16		rq_elmt_indx_word;
> +	__le32		rsvd4;
> +	__le16		rq_id;
> +	__le16		sequence_reporting_placement_length;
> +	__le16		rsvd14;
> +	u8		code;
> +	u8		vld_byte;
> +};
> +
> +#define SLI4_FC_COALESCE_RQ_SUCCESS		0x10
> +#define SLI4_FC_COALESCE_RQ_INSUFF_XRI_NEEDED	0x18

empty line missing

> +/*
> + * @SLI4_OCQE_RQ_EL_INDX: bits 0 to 15 in word1
> + * @SLI4_OCQE_FCFI: bits 0 to 6 in dw1
> + * @SLI4_OCQE_OOX: bit 15 in dw1
> + * @SLI4_OCQE_AGXR: bit 16 in dw1
> + */

Maybe make a proper kernel doc out of this?
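E.g. reusing the text that is already there (sketch only, SLI4_OCQE_HDPL
would still need a description):

/**
 * enum sli4_optimized_write_cmd_cqe_flags - optimized write command CQE fields
 * @SLI4_OCQE_RQ_EL_INDX: bits 0 to 15 in word1
 * @SLI4_OCQE_FCFI: bits 0 to 6 in dw1
 * @SLI4_OCQE_OOX: bit 15 in dw1
 * @SLI4_OCQE_AGXR: bit 16 in dw1
 */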

> +enum sli4_optimized_write_cmd_cqe_flags {
> +	SLI4_OCQE_RQ_EL_INDX = 0x7f,
> +	SLI4_OCQE_FCFI = 0x3f,
> +	SLI4_OCQE_OOX = (1 << 6),
> +	SLI4_OCQE_AGXR = (1 << 7),
> +	SLI4_OCQE_HDPL = 0x3f,

Code aligment is not consistent with the rest.

> +};
> +
> +struct sli4_fc_optimized_write_cmd_cqe {
> +	u8		rsvd0;
> +	u8		status;
> +	__le16		w1;
> +	u8		flags0;
> +	u8		flags1;
> +	__le16		xri;
> +	__le16		rq_id;
> +	__le16		data_placement_length;
> +	__le16		rpi;
> +	u8		code;
> +	u8		hdpl_vld;
> +};
> +
> +#define	SLI4_OCQE_XB		0x10
> +
> +struct sli4_fc_optimized_write_data_cqe {
> +	u8		hw_status;
> +	u8		status;
> +	__le16		xri;
> +	__le32		total_data_placed;
> +	__le32		extended_status;
> +	__le16		rsvd12;
> +	u8		code;
> +	u8		flags;
> +};
> +
> +struct sli4_fc_xri_aborted_cqe {
> +	u8		rsvd0;
> +	u8		status;
> +	__le16		rsvd2;
> +	__le32		extended_status;
> +	__le16		xri;
> +	__le16		remote_xid;
> +	__le16		rsvd12;
> +	u8		code;
> +	u8		flags;
> +};
> +
> +enum sli4_generic_ctx {
> +	SLI4_GENERIC_CONTEXT_RPI,
> +	SLI4_GENERIC_CONTEXT_VPI,
> +	SLI4_GENERIC_CONTEXT_VFI,
> +	SLI4_GENERIC_CONTEXT_FCFI,
> +};
> +
> +#define SLI4_GENERIC_CLASS_CLASS_2		0x1
> +#define SLI4_GENERIC_CLASS_CLASS_3		0x2
> +
> +#define SLI4_ELS_REQUEST64_DIR_WRITE		0x0
> +#define SLI4_ELS_REQUEST64_DIR_READ		0x1
> +
> +enum sli4_els_request {
> +	SLI4_ELS_REQUEST64_OTHER,
> +	SLI4_ELS_REQUEST64_LOGO,
> +	SLI4_ELS_REQUEST64_FDISC,
> +	SLI4_ELS_REQUEST64_FLOGIN,
> +	SLI4_ELS_REQUEST64_PLOGI,
> +};
> +
> +enum sli4_els_cmd_type {
> +	SLI4_ELS_REQUEST64_CMD_GEN		= 0x08,
> +	SLI4_ELS_REQUEST64_CMD_NON_FABRIC	= 0x0c,
> +	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
> +};
> +
>  #endif /* !_SLI4_H */
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands
  2020-04-12  3:32 ` [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2020-04-14 19:01   ` Daniel Wagner
  2020-04-15 12:22   ` Hannes Reinecke
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-14 19:01 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

Hi,

On Sat, Apr 11, 2020 at 08:32:35PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds definitions for SLI-4 mailbox commands
> and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed anonymous enums to named.
>   Split giant enums into multiple enums.
>   Single Enum to #define
>   Added Link Speed defines to accommodate up to 128G
>   SLI defines to spell out _MASK value directly.
>   Changed multiple defines to named Enums for consistency.
> ---
>  drivers/scsi/elx/libefc_sli/sli4.h | 1677 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 1677 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index 07eef8df9690..b360d809f144 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -2010,4 +2010,1681 @@ enum sli4_els_cmd_type {
>  	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
>  };
>  
> +#define SLI_PAGE_SIZE				(1 << 12)	/* 4096 */

Could you add a comment why you need this definition and why it's not
PAGE_SIZE? From staring at the code I think it has something to do with
SLI4_RQ_PAGE_SIZE_4096, but it's definitely not obvious to me how those
address calculations work.

> +#define SLI_SUB_PAGE_MASK			(SLI_PAGE_SIZE - 1)
> +#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)

Couldn't you use round_up() instead of defining SLI_SUB_PAGE_MASK and
SLI_ROUND_PAGE?
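I.e. just a sketch, assuming SLI_PAGE_SIZE stays a power of two:

	len = round_up(bytes, SLI_PAGE_SIZE);

instead of open-coding it via SLI_SUB_PAGE_MASK/SLI_ROUND_PAGE(bytes).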


> +#define SLI4_BMBX_TIMEOUT_MSEC			30000
> +#define SLI4_FW_READY_TIMEOUT_MSEC		30000
> +
> +#define SLI4_BMBX_DELAY_US			1000	/* 1 ms */
> +#define SLI4_INIT_PORT_DELAY_US			10000	/* 10 ms */
> +
> +static inline u32
> +sli_page_count(size_t bytes, u32 page_size)
> +{
> +	if (!page_size)
> +		return 0;
> +
> +	return (bytes + (page_size - 1)) >> __ffs(page_size);
> +}
> +
> +/*************************************************************************
> + * SLI-4 mailbox command formats and definitions
> + */
> +
> +struct sli4_mbox_command_header {
> +	u8	resvd0;
> +	u8	command;
> +	__le16	status;	/* Port writes to indicate success/fail */
> +};
> +
> +enum sli4_mbx_cmd_value {
> +	MBX_CMD_CONFIG_LINK	= 0x07,
> +	MBX_CMD_DUMP		= 0x17,
> +	MBX_CMD_DOWN_LINK	= 0x06,
> +	MBX_CMD_INIT_LINK	= 0x05,
> +	MBX_CMD_INIT_VFI	= 0xa3,
> +	MBX_CMD_INIT_VPI	= 0xa4,
> +	MBX_CMD_POST_XRI	= 0xa7,
> +	MBX_CMD_RELEASE_XRI	= 0xac,
> +	MBX_CMD_READ_CONFIG	= 0x0b,
> +	MBX_CMD_READ_STATUS	= 0x0e,
> +	MBX_CMD_READ_NVPARMS	= 0x02,
> +	MBX_CMD_READ_REV	= 0x11,
> +	MBX_CMD_READ_LNK_STAT	= 0x12,
> +	MBX_CMD_READ_SPARM64	= 0x8d,
> +	MBX_CMD_READ_TOPOLOGY	= 0x95,
> +	MBX_CMD_REG_FCFI	= 0xa0,
> +	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
> +	MBX_CMD_REG_RPI		= 0x93,
> +	MBX_CMD_REG_RX_RQ	= 0xa6,
> +	MBX_CMD_REG_VFI		= 0x9f,
> +	MBX_CMD_REG_VPI		= 0x96,
> +	MBX_CMD_RQST_FEATURES	= 0x9d,
> +	MBX_CMD_SLI_CONFIG	= 0x9b,
> +	MBX_CMD_UNREG_FCFI	= 0xa2,
> +	MBX_CMD_UNREG_RPI	= 0x14,
> +	MBX_CMD_UNREG_VFI	= 0xa1,
> +	MBX_CMD_UNREG_VPI	= 0x97,
> +	MBX_CMD_WRITE_NVPARMS	= 0x03,
> +	MBX_CMD_CFG_AUTO_XFER_RDY = 0xAD,

s/0xAD/0xad/

> +};
> +
> +enum sli4_mbx_status {
> +	MBX_STATUS_SUCCESS	= 0x0000,
> +	MBX_STATUS_FAILURE	= 0x0001,
> +	MBX_STATUS_RPI_NOT_REG	= 0x1400,
> +};
> +
> +/* CONFIG_LINK */

This is not a Kconfig CONFIG flag, correct? Maybe make the comment a
bit more verbose explaining what it means.
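E.g. something like this (only a guess from reading the struct below,
please correct the wording):

/*
 * CONFIG_LINK - mailbox command to configure FC link parameters
 * (ALPA, N_Port ID, timeout values, BB_SC_N)
 */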

> +enum sli4_cmd_config_link_flags {
> +	SLI4_CFG_LINK_BBSCN = 0xf00,
> +	SLI4_CFG_LINK_CSCN  = 0x1000,
> +};
> +
> +struct sli4_cmd_config_link {
> +	struct sli4_mbox_command_header	hdr;
> +	u8		maxbbc;
> +	u8		rsvd5;
> +	u8		rsvd6;
> +	u8		rsvd7;
> +	u8		alpa;
> +	__le16		n_port_id;
> +	u8		rsvd11;
> +	__le32		rsvd12;
> +	__le32		e_d_tov;
> +	__le32		lp_tov;
> +	__le32		r_a_tov;
> +	__le32		r_t_tov;
> +	__le32		al_tov;
> +	__le32		rsvd36;
> +	__le32		bbscn_dword;
> +};
> +
> +#define SLI4_DUMP4_TYPE		0xf
> +
> +#define SLI4_WKI_TAG_SAT_TEM	0x1040
> +
> +struct sli4_cmd_dump4 {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		type_dword;
> +	__le16		wki_selection;
> +	__le16		rsvd10;
> +	__le32		rsvd12;
> +	__le32		returned_byte_cnt;
> +	__le32		resp_data[59];
> +};
> +
> +/* INIT_LINK - initialize the link for a FC port */
> +enum sli4_init_link_flags {
> +	SLI4_INIT_LINK_F_LOOPBACK	= (1 << 0),
> +
> +	SLI4_INIT_LINK_F_P2P_ONLY	= (1 << 1),
> +	SLI4_INIT_LINK_F_FCAL_ONLY	= (2 << 1),
> +	SLI4_INIT_LINK_F_FCAL_FAIL_OVER	= (0 << 1),
> +	SLI4_INIT_LINK_F_P2P_FAIL_OVER	= (1 << 1),
> +
> +	SLI4_INIT_LINK_F_UNFAIR		= (1 << 6),
> +	SLI4_INIT_LINK_F_NO_LIRP	= (1 << 7),
> +	SLI4_INIT_LINK_F_LOOP_VALID_CHK	= (1 << 8),
> +	SLI4_INIT_LINK_F_NO_LISA	= (1 << 9),
> +	SLI4_INIT_LINK_F_FAIL_OVER	= (1 << 10),
> +	SLI4_INIT_LINK_F_FIXED_SPEED	= (1 << 11),
> +	SLI4_INIT_LINK_F_PICK_HI_ALPA	= (1 << 15),
> +
> +};
> +
> +enum sli4_fc_link_speed {
> +	FC_LINK_SPEED_1G = 1,
> +	FC_LINK_SPEED_2G,
> +	FC_LINK_SPEED_AUTO_1_2,
> +	FC_LINK_SPEED_4G,
> +	FC_LINK_SPEED_AUTO_4_1,
> +	FC_LINK_SPEED_AUTO_4_2,
> +	FC_LINK_SPEED_AUTO_4_2_1,
> +	FC_LINK_SPEED_8G,
> +	FC_LINK_SPEED_AUTO_8_1,
> +	FC_LINK_SPEED_AUTO_8_2,
> +	FC_LINK_SPEED_AUTO_8_2_1,
> +	FC_LINK_SPEED_AUTO_8_4,
> +	FC_LINK_SPEED_AUTO_8_4_1,
> +	FC_LINK_SPEED_AUTO_8_4_2,
> +	FC_LINK_SPEED_10G,
> +	FC_LINK_SPEED_16G,
> +	FC_LINK_SPEED_AUTO_16_8_4,
> +	FC_LINK_SPEED_AUTO_16_8,
> +	FC_LINK_SPEED_32G,
> +	FC_LINK_SPEED_AUTO_32_16_8,
> +	FC_LINK_SPEED_AUTO_32_16,
> +	FC_LINK_SPEED_64G,
> +	FC_LINK_SPEED_AUTO_64_32_16,
> +	FC_LINK_SPEED_AUTO_64_32,
> +	FC_LINK_SPEED_128G,
> +	FC_LINK_SPEED_AUTO_128_64_32,
> +	FC_LINK_SPEED_AUTO_128_64,
> +};
> +
> +struct sli4_cmd_init_link {
> +	struct sli4_mbox_command_header       hdr;
> +	__le32	sel_reset_al_pa_dword;
> +	__le32	flags0;
> +	__le32	link_speed_sel_code;
> +};
> +
> +/* INIT_VFI - initialize the VFI resource */
> +enum sli4_init_vfi_flags {
> +	SLI4_INIT_VFI_FLAG_VP	= 0x1000,
> +	SLI4_INIT_VFI_FLAG_VF	= 0x2000,
> +	SLI4_INIT_VFI_FLAG_VT	= 0x4000,
> +	SLI4_INIT_VFI_FLAG_VR	= 0x8000,
> +
> +	SLI4_INIT_VFI_VFID	= 0x1fff,
> +	SLI4_INIT_VFI_PRI	= 0xe000,
> +
> +	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,
> +};
> +
> +struct sli4_cmd_init_vfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		vfi;
> +	__le16		flags0_word;
> +	__le16		fcfi;
> +	__le16		vpi;
> +	__le32		vf_id_pri_dword;
> +	__le32		hop_cnt_dword;
> +};
> +
> +/* INIT_VPI - initialize the VPI resource */
> +struct sli4_cmd_init_vpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		vpi;
> +	__le16		vfi;
> +};
> +
> +/* POST_XRI - post XRI resources to the SLI Port */
> +enum sli4_post_xri_flags {
> +	SLI4_POST_XRI_COUNT	= 0xfff,
> +	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
> +	SLI4_POST_XRI_FLAG_DL	= 0x2000,
> +	SLI4_POST_XRI_FLAG_DI	= 0x4000,
> +	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
> +};
> +
> +struct sli4_cmd_post_xri {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		xri_base;
> +	__le16		xri_count_flags;
> +};
> +
> +/* RELEASE_XRI - Release XRI resources from the SLI Port */
> +enum sli4_release_xri_flags {
> +	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,
> +	SLI4_RELEASE_XRI_COUNT		= 0x1f,
> +};
> +
> +struct sli4_cmd_release_xri {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		rel_xri_count_word;
> +	__le16		xri_count_word;
> +
> +	struct {
> +		__le16	xri_tag0;
> +		__le16	xri_tag1;
> +	} xri_tbl[62];
> +};
> +
> +/* READ_CONFIG - read SLI port configuration parameters */
> +struct sli4_cmd_read_config {
> +	struct sli4_mbox_command_header	hdr;
> +};
> +
> +enum sli4_read_cfg_resp_flags {
> +	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
> +	SLI4_READ_CFG_RESP_TOPOLOGY	= 0xff000000,	/* DW2 */
> +};
> +
> +enum sli4_read_cfg_topo {
> +	SLI4_READ_CFG_TOPO_FC		= 0x1,	/* FC topology unknown */
> +	SLI4_READ_CFG_TOPO_FC_DA	= 0x2,	/* FC Direct Attach */
> +	SLI4_READ_CFG_TOPO_FC_AL	= 0x3,	/* FC-AL topology */
> +};
> +
> +struct sli4_rsp_read_config {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		ext_dword;
> +	__le32		topology_dword;
> +	__le32		resvd8;
> +	__le16		e_d_tov;
> +	__le16		resvd14;
> +	__le32		resvd16;
> +	__le16		r_a_tov;
> +	__le16		resvd22;
> +	__le32		resvd24;
> +	__le32		resvd28;
> +	__le16		lmt;
> +	__le16		resvd34;
> +	__le32		resvd36;
> +	__le32		resvd40;
> +	__le16		xri_base;
> +	__le16		xri_count;
> +	__le16		rpi_base;
> +	__le16		rpi_count;
> +	__le16		vpi_base;
> +	__le16		vpi_count;
> +	__le16		vfi_base;
> +	__le16		vfi_count;
> +	__le16		resvd60;
> +	__le16		fcfi_count;
> +	__le16		rq_count;
> +	__le16		eq_count;
> +	__le16		wq_count;
> +	__le16		cq_count;
> +	__le32		pad[45];
> +};
> +
> +/* READ_NVPARMS - read SLI port configuration parameters */
> +enum sli4_read_nvparms_flags {
> +	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
> +	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
> +};
> +
> +struct sli4_cmd_read_nvparms {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	__le32		resvd8;
> +	__le32		resvd12;
> +	u8		wwpn[8];
> +	u8		wwnn[8];
> +	__le32		hard_alpa_d_id;
> +};
> +
> +/* WRITE_NVPARMS - write SLI port configuration parameters */
> +struct sli4_cmd_write_nvparms {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	__le32		resvd8;
> +	__le32		resvd12;
> +	u8		wwpn[8];
> +	u8		wwnn[8];
> +	__le32		hard_alpa_d_id;
> +};
> +
> +/* READ_REV - read the Port revision levels */
> +enum {
> +	SLI4_READ_REV_FLAG_SLI_LEVEL	= 0xf,
> +	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
> +	SLI4_READ_REV_FLAG_CEEV		= 0x60,
> +	SLI4_READ_REV_FLAG_VPD		= 0x2000,
> +
> +	SLI4_READ_REV_AVAILABLE_LENGTH	= 0xffffff,
> +};
> +
> +struct sli4_cmd_read_rev {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16			resvd0;
> +	__le16			flags0_word;
> +	__le32			first_hw_rev;
> +	__le32			second_hw_rev;
> +	__le32			resvd12;
> +	__le32			third_hw_rev;
> +	u8			fc_ph_low;
> +	u8			fc_ph_high;
> +	u8			feature_level_low;
> +	u8			feature_level_high;
> +	__le32			resvd24;
> +	__le32			first_fw_id;
> +	u8			first_fw_name[16];
> +	__le32			second_fw_id;
> +	u8			second_fw_name[16];
> +	__le32			rsvd18[30];
> +	__le32			available_length_dword;
> +	struct sli4_dmaaddr	hostbuf;
> +	__le32			returned_vpd_length;
> +	__le32			actual_vpd_length;
> +};
> +
> +/* READ_SPARM64 - read the Port service parameters */
> +#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
> +#define SLI4_READ_SPARM64_WWNN_OFFSET	(6 * sizeof(u32))
> +
> +struct sli4_cmd_read_sparm64 {
> +	struct sli4_mbox_command_header hdr;
> +	__le32			resvd0;
> +	__le32			resvd4;
> +	struct sli4_bde	bde_64;

I find this kind of strange code indentation pattern unnecessary. I
assume bde_64 is not indented like resvd4 because it is a struct.

> +	__le16			vpi;
> +	__le16			resvd22;
> +	__le16			port_name_start;
> +	__le16			port_name_len;
> +	__le16			node_name_start;
> +	__le16			node_name_len;
> +};
> +
> +/* READ_TOPOLOGY - read the link event information */
> +enum sli4_read_topo_e {
> +	SLI4_READTOPO_ATTEN_TYPE	= 0xff,
> +	SLI4_READTOPO_FLAG_IL		= 0x100,
> +	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
> +
> +	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
> +	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
> +	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
> +	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
> +	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
> +	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
> +
> +	SLI4_READTOPO_SCN_BBSCN		= 0xf,
> +	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
> +
> +	SLI4_READTOPO_R_T_TOV		= 0x1ff,
> +	SLI4_READTOPO_AL_TOV		= 0xf000,
> +
> +	SLI4_READTOPO_PB_FLAG		= 0x80,
> +
> +	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
> +};
> +
> +#define SLI4_MIN_LOOP_MAP_BYTES	128
> +
> +struct sli4_cmd_read_topology {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32			event_tag;
> +	__le32			dw2_attentype;
> +	u8			topology;
> +	u8			lip_type;
> +	u8			lip_al_ps;
> +	u8			al_pa_granted;
> +	struct sli4_bde	bde_loop_map;

and here again

> +	__le32			linkdown_state;
> +	__le32			currlink_state;
> +	u8			max_bbc;
> +	u8			init_bbc;
> +	u8			scn_flags;
> +	u8			rsvd39;
> +	__le16			dw10w0_al_rt_tov;
> +	__le16			lp_tov;
> +	u8			acquired_al_pa;
> +	u8			pb_flags;
> +	__le16			specified_al_pa;
> +	__le32			dw12_init_n_port_id;
> +};
> +
> +enum sli4_read_topo_link {
> +	SLI4_READ_TOPOLOGY_LINK_UP	= 0x1,
> +	SLI4_READ_TOPOLOGY_LINK_DOWN,
> +	SLI4_READ_TOPOLOGY_LINK_NO_ALPA,
> +};
> +
> +enum sli4_read_topo {
> +	SLI4_READ_TOPOLOGY_UNKNOWN	= 0x0,
> +	SLI4_READ_TOPOLOGY_NPORT,
> +	SLI4_READ_TOPOLOGY_FC_AL,
> +};
> +
> +enum sli4_read_topo_speed {
> +	SLI4_READ_TOPOLOGY_SPEED_NONE	= 0x00,
> +	SLI4_READ_TOPOLOGY_SPEED_1G	= 0x04,
> +	SLI4_READ_TOPOLOGY_SPEED_2G	= 0x08,
> +	SLI4_READ_TOPOLOGY_SPEED_4G	= 0x10,
> +	SLI4_READ_TOPOLOGY_SPEED_8G	= 0x20,
> +	SLI4_READ_TOPOLOGY_SPEED_10G	= 0x40,
> +	SLI4_READ_TOPOLOGY_SPEED_16G	= 0x80,
> +	SLI4_READ_TOPOLOGY_SPEED_32G	= 0x90,
> +};
> +
> +/* REG_FCFI - activate a FC Forwarder */
> +struct sli4_cmd_reg_fcfi_rq_cfg {
> +	u8	r_ctl_mask;
> +	u8	r_ctl_match;
> +	u8	type_mask;
> +	u8	type_match;
> +};
> +
> +enum sli4_regfcfi_tag {
> +	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
> +	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
> +};
> +
> +#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
> +struct sli4_cmd_reg_fcfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		fcf_index;
> +	__le16		fcfi;
> +	__le16		rqid1;
> +	__le16		rqid0;
> +	__le16		rqid3;
> +	__le16		rqid2;
> +	struct sli4_cmd_reg_fcfi_rq_cfg
> +			rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];

and here rq_cfg is aligned.

> +	__le32		dw8_vlan;
> +};
> +
> +#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
> +#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
> +#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
> +#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
> +
> +enum sli4_reg_fcfi_mrq {
> +	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
> +	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
> +	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
> +
> +	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
> +	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
> +	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
> +};
> +
> +struct sli4_cmd_reg_fcfi_mrq {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		fcf_index;
> +	__le16		fcfi;
> +	__le16		rqid1;
> +	__le16		rqid0;
> +	__le16		rqid3;
> +	__le16		rqid2;
> +	struct sli4_cmd_reg_fcfi_rq_cfg
> +			rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
> +	__le32		dw8_vlan;
> +	__le32		dw9_mrqflags;
> +};
> +
> +struct sli4_cmd_rq_cfg {
> +	__le16	rq_id;
> +	u8	r_ctl_mask;
> +	u8	r_ctl_match;
> +	u8	type_mask;
> +	u8	type_match;
> +};
> +
> +/* REG_RPI - register a Remote Port Indicator */
> +enum sli4_reg_rpi {
> +	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
> +	SLI4_REGRPI_UPD			= 0x1000000,
> +	SLI4_REGRPI_ETOW		= 0x8000000,
> +	SLI4_REGRPI_TERP		= 0x20000000,
> +	SLI4_REGRPI_CI			= 0x80000000,
> +};
> +
> +struct sli4_cmd_reg_rpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16			rpi;
> +	__le16			rsvd2;
> +	__le32			dw2_rportid_flags;
> +	struct sli4_bde	bde_64;
> +	__le16			vpi;
> +	__le16			rsvd26;
> +};
> +
> +#define SLI4_REG_RPI_BUF_LEN		0x70
> +
> +/* REG_VFI - register a Virtual Fabric Indicator */
> +enum sli_reg_vfi {
> +	SLI4_REGVFI_VP			= 0x1000,	/* DW1 */
> +	SLI4_REGVFI_UPD			= 0x2000,
> +
> +	SLI4_REGVFI_LOCAL_N_PORTID	= 0xffffff,	/* DW10 */
> +};
> +
> +struct sli4_cmd_reg_vfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16			vfi;
> +	__le16			dw0w1_flags;
> +	__le16			fcfi;
> +	__le16			vpi;
> +	u8			wwpn[8];
> +	struct sli4_bde	sparm;
> +	__le32			e_d_tov;
> +	__le32			r_a_tov;
> +	__le32			dw10_lportid_flags;
> +};
> +
> +/* REG_VPI - register a Virtual Port Indicator */
> +enum sli4_reg_vpi {
> +	SLI4_REGVPI_LOCAL_N_PORTID	= 0xffffff,
> +	SLI4_REGVPI_UPD			= 0x1000000,
> +};
> +
> +struct sli4_cmd_reg_vpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le32		dw2_lportid_flags;
> +	u8		wwpn[8];
> +	__le32		rsvd12;
> +	__le16		vpi;
> +	__le16		vfi;
> +};
> +
> +/* REQUEST_FEATURES - request / query SLI features */
> +enum sli4_req_features_flags {
> +	SLI4_REQFEAT_QRY	= 0x1,		/* Dw1 */
> +
> +	SLI4_REQFEAT_IAAB	= (1 << 0),	/* DW2 & DW3 */
> +	SLI4_REQFEAT_NPIV	= (1 << 1),
> +	SLI4_REQFEAT_DIF	= (1 << 2),
> +	SLI4_REQFEAT_VF		= (1 << 3),
> +	SLI4_REQFEAT_FCPI	= (1 << 4),
> +	SLI4_REQFEAT_FCPT	= (1 << 5),
> +	SLI4_REQFEAT_FCPC	= (1 << 6),
> +	SLI4_REQFEAT_RSVD	= (1 << 7),
> +	SLI4_REQFEAT_RQD	= (1 << 8),
> +	SLI4_REQFEAT_IAAR	= (1 << 9),
> +	SLI4_REQFEAT_HLM	= (1 << 10),
> +	SLI4_REQFEAT_PERFH	= (1 << 11),
> +	SLI4_REQFEAT_RXSEQ	= (1 << 12),
> +	SLI4_REQFEAT_RXRI	= (1 << 13),
> +	SLI4_REQFEAT_DCL2	= (1 << 14),
> +	SLI4_REQFEAT_RSCO	= (1 << 15),
> +	SLI4_REQFEAT_MRQP	= (1 << 16),
> +};
> +
> +struct sli4_cmd_request_features {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		dw1_qry;
> +	__le32		cmd;
> +	__le32		resp;
> +};
> +
> +/*
> + * SLI_CONFIG - submit a configuration command to Port
> + *
> + * Command is either embedded as part of the payload (embed) or located
> + * in a separate memory buffer (mem)
> + */
> +enum sli4_sli_config {
> +	SLI4_SLICONF_EMB		= 0x1,		/* DW1 */
> +	SLI4_SLICONF_PMDCMD_SHIFT	= 3,
> +	SLI4_SLICONF_PMDCMD_MASK	= 0xF8,
> +	SLI4_SLICONF_PMDCMD_VAL_1	= 8,
> +	SLI4_SLICONF_PMDCNT		= 0xF8,
> +
> +	SLI4_SLICONFIG_PMD_LEN		= 0x00ffffff,
> +};
> +
> +struct sli4_cmd_sli_config {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		dw1_flags;
> +	__le32		payload_len;
> +	__le32		rsvd12[3];
> +	union {
> +		u8 embed[58 * sizeof(u32)];
> +		struct sli4_bufptr mem;
> +	} payload;
> +};
> +
> +/* READ_STATUS - read tx/rx status of a particular port */
> +#define SLI4_READSTATUS_CLEAR_COUNTERS	0x1
> +
> +struct sli4_cmd_read_status {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		dw1_flags;
> +	__le32		rsvd4;
> +	__le32		trans_kbyte_cnt;
> +	__le32		recv_kbyte_cnt;
> +	__le32		trans_frame_cnt;
> +	__le32		recv_frame_cnt;
> +	__le32		trans_seq_cnt;
> +	__le32		recv_seq_cnt;
> +	__le32		tot_exchanges_orig;
> +	__le32		tot_exchanges_resp;
> +	__le32		recv_p_bsy_cnt;
> +	__le32		recv_f_bsy_cnt;
> +	__le32		no_rq_buf_dropped_frames_cnt;
> +	__le32		empty_rq_timeout_cnt;
> +	__le32		no_xri_dropped_frames_cnt;
> +	__le32		empty_xri_pool_cnt;
> +};
> +
> +/* READ_LNK_STAT - read link status of a particular port */
> +enum sli4_read_link_stats_flags {
> +	SLI4_READ_LNKSTAT_REC	= (1 << 0),
> +	SLI4_READ_LNKSTAT_GEC	= (1 << 1),
> +	SLI4_READ_LNKSTAT_W02OF	= (1 << 2),
> +	SLI4_READ_LNKSTAT_W03OF	= (1 << 3),
> +	SLI4_READ_LNKSTAT_W04OF	= (1 << 4),
> +	SLI4_READ_LNKSTAT_W05OF	= (1 << 5),
> +	SLI4_READ_LNKSTAT_W06OF	= (1 << 6),
> +	SLI4_READ_LNKSTAT_W07OF	= (1 << 7),
> +	SLI4_READ_LNKSTAT_W08OF	= (1 << 8),
> +	SLI4_READ_LNKSTAT_W09OF	= (1 << 9),
> +	SLI4_READ_LNKSTAT_W10OF = (1 << 10),
> +	SLI4_READ_LNKSTAT_W11OF = (1 << 11),
> +	SLI4_READ_LNKSTAT_W12OF	= (1 << 12),
> +	SLI4_READ_LNKSTAT_W13OF	= (1 << 13),
> +	SLI4_READ_LNKSTAT_W14OF	= (1 << 14),
> +	SLI4_READ_LNKSTAT_W15OF	= (1 << 15),
> +	SLI4_READ_LNKSTAT_W16OF	= (1 << 16),
> +	SLI4_READ_LNKSTAT_W17OF	= (1 << 17),
> +	SLI4_READ_LNKSTAT_W18OF	= (1 << 18),
> +	SLI4_READ_LNKSTAT_W19OF	= (1 << 19),
> +	SLI4_READ_LNKSTAT_W20OF	= (1 << 20),
> +	SLI4_READ_LNKSTAT_W21OF	= (1 << 21),
> +	SLI4_READ_LNKSTAT_CLRC	= (1 << 30),
> +	SLI4_READ_LNKSTAT_CLOF	= (1 << 31),
> +};
> +
> +struct sli4_cmd_read_link_stats {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32	dw1_flags;
> +	__le32	linkfail_errcnt;
> +	__le32	losssync_errcnt;
> +	__le32	losssignal_errcnt;
> +	__le32	primseq_errcnt;
> +	__le32	inval_txword_errcnt;
> +	__le32	crc_errcnt;
> +	__le32	primseq_eventtimeout_cnt;
> +	__le32	elastic_bufoverrun_errcnt;
> +	__le32	arbit_fc_al_timeout_cnt;
> +	__le32	adv_rx_buftor_to_buf_credit;
> +	__le32	curr_rx_buf_to_buf_credit;
> +	__le32	adv_tx_buf_to_buf_credit;
> +	__le32	curr_tx_buf_to_buf_credit;
> +	__le32	rx_eofa_cnt;
> +	__le32	rx_eofdti_cnt;
> +	__le32	rx_eofni_cnt;
> +	__le32	rx_soff_cnt;
> +	__le32	rx_dropped_no_aer_cnt;
> +	__le32	rx_dropped_no_avail_rpi_rescnt;
> +	__le32	rx_dropped_no_avail_xri_rescnt;
> +};
> +
> +/* Format a WQE with WQ_ID Association performance hint */
> +static inline void
> +sli_set_wq_id_association(void *entry, u16 q_id)
> +{
> +	u32 *wqe = entry;
> +
> +	/*
> +	 * Set Word 10, bit 0 to zero
> +	 * Set Word 10, bits 15:1 to the WQ ID
> +	 */
> +	wqe[10] &= ~0xffff;
> +	wqe[10] |= q_id << 1;
> +}
> +
> +/* UNREG_FCFI - unregister a FCFI */
> +struct sli4_cmd_unreg_fcfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le16		fcfi;
> +	__le16		rsvd6;
> +};
> +
> +/* UNREG_RPI - unregister one or more RPI */
> +enum sli4_unreg_rpi {
> +	UNREG_RPI_DP		= 0x2000,
> +	UNREG_RPI_II_SHIFT	= 14,
> +	UNREG_RPI_II_MASK	= 0xC000,
> +	UNREG_RPI_II_RPI	= 0x0000,
> +	UNREG_RPI_II_VPI	= 0x4000,
> +	UNREG_RPI_II_VFI	= 0x8000,
> +	UNREG_RPI_II_FCFI	= 0xC000,
> +
> +	UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff,
> +};
> +
> +struct sli4_cmd_unreg_rpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		index;
> +	__le16		dw1w1_flags;
> +	__le32		dw2_dest_n_portid;
> +};
> +
> +/* UNREG_VFI - unregister one or more VFI */
> +enum sli4_unreg_vfi {
> +	UNREG_VFI_II_SHIFT	= 14,
> +	UNREG_VFI_II_MASK	= 0xC000,
> +	UNREG_VFI_II_VFI	= 0x0000,
> +	UNREG_VFI_II_FCFI	= 0xC000,
> +};
> +
> +struct sli4_cmd_unreg_vfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le16		index;
> +	__le16		dw2_flags;
> +};
> +
> +enum sli4_unreg_type {
> +	SLI4_UNREG_TYPE_PORT,
> +	SLI4_UNREG_TYPE_DOMAIN,
> +	SLI4_UNREG_TYPE_FCF,
> +	SLI4_UNREG_TYPE_ALL
> +};
> +
> +/* UNREG_VPI - unregister one or more VPI */
> +enum sli4_unreg_vpi {
> +	UNREG_VPI_II_SHIFT	= 14,
> +	UNREG_VPI_II_MASK	= 0xC000,
> +	UNREG_VPI_II_VPI	= 0x0000,
> +	UNREG_VPI_II_VFI	= 0x8000,
> +	UNREG_VPI_II_FCFI	= 0xC000,
> +};
> +
> +struct sli4_cmd_unreg_vpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le16		index;
> +	__le16		dw2w0_flags;
> +};
> +
> +/* AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature */
> +struct sli4_cmd_config_auto_xfer_rdy {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le32		max_burst_len;
> +};
> +
> +#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE	0xffff
> +
> +struct sli4_cmd_config_auto_xfer_rdy_hp {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le32		max_burst_len;
> +	__le32		dw3_esoc_flags;
> +	__le16		block_size;
> +	__le16		rsvd14;
> +};
> +
> +/*************************************************************************
> + * SLI-4 common configuration command formats and definitions
> + */
> +
> +/*
> + * Subsystem values.
> + */
> +enum sli4_subsystem {
> +	SLI4_SUBSYSTEM_COMMON	= 0x01,
> +	SLI4_SUBSYSTEM_LOWLEVEL	= 0x0B,
> +	SLI4_SUBSYSTEM_FC	= 0x0C,
> +	SLI4_SUBSYSTEM_DMTF	= 0x11,
> +};
> +
> +#define	SLI4_OPC_LOWLEVEL_SET_WATCHDOG		0X36
> +
> +/*
> + * Common opcode (OPC) values.
> + */
> +enum sli4_cmn_opcode {
> +	CMN_FUNCTION_RESET	= 0x3d,
> +	CMN_CREATE_CQ		= 0x0c,
> +	CMN_CREATE_CQ_SET	= 0x1d,
> +	CMN_DESTROY_CQ		= 0x36,
> +	CMN_MODIFY_EQ_DELAY	= 0x29,
> +	CMN_CREATE_EQ		= 0x0d,
> +	CMN_DESTROY_EQ		= 0x37,
> +	CMN_CREATE_MQ_EXT	= 0x5a,
> +	CMN_DESTROY_MQ		= 0x35,
> +	CMN_GET_CNTL_ATTRIBUTES	= 0x20,
> +	CMN_NOP			= 0x21,
> +	CMN_GET_RSC_EXTENT_INFO = 0x9a,
> +	CMN_GET_SLI4_PARAMS	= 0xb5,
> +	CMN_QUERY_FW_CONFIG	= 0x3a,
> +	CMN_GET_PORT_NAME	= 0x4d,
> +
> +	CMN_WRITE_FLASHROM	= 0x07,
> +	/* TRANSCEIVER Data */
> +	CMN_READ_TRANS_DATA	= 0x49,
> +	CMN_GET_CNTL_ADDL_ATTRS = 0x79,
> +	CMN_GET_FUNCTION_CFG	= 0xa0,
> +	CMN_GET_PROFILE_CFG	= 0xa4,
> +	CMN_SET_PROFILE_CFG	= 0xa5,
> +	CMN_GET_PROFILE_LIST	= 0xa6,
> +	CMN_GET_ACTIVE_PROFILE	= 0xa7,
> +	CMN_SET_ACTIVE_PROFILE	= 0xa8,
> +	CMN_READ_OBJECT		= 0xab,
> +	CMN_WRITE_OBJECT	= 0xac,
> +	CMN_DELETE_OBJECT	= 0xae,
> +	CMN_READ_OBJECT_LIST	= 0xad,
> +	CMN_SET_DUMP_LOCATION	= 0xb8,
> +	CMN_SET_FEATURES	= 0xbf,
> +	CMN_GET_RECFG_LINK_INFO = 0xc9,
> +	CMN_SET_RECNG_LINK_ID	= 0xca,
> +};
> +
> +/* DMTF opcode (OPC) values */
> +#define DMTF_EXEC_CLP_CMD 0x01
> +
> +/*
> + * COMMON_FUNCTION_RESET
> + *
> + * Resets the Port, returning it to a power-on state. This configuration
> + * command does not have a payload and should set/expect the lengths to
> + * be zero.
> + */
> +struct sli4_rqst_cmn_function_reset {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_function_reset {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +
> +/*
> + * COMMON_GET_CNTL_ATTRIBUTES
> + *
> + * Query for information about the SLI Port
> + */
> +enum sli4_cntrl_attr_flags {
> +	SLI4_CNTL_ATTR_PORTNUM	= 0x3f,
> +	SLI4_CNTL_ATTR_PORTTYPE	= 0xc0,
> +};
> +
> +struct sli4_rsp_cmn_get_cntl_attributes {
> +	struct sli4_rsp_hdr	hdr;
> +	u8			version_str[32];
> +	u8			manufacturer_name[32];
> +	__le32			supported_modes;
> +	u8			eprom_version_lo;
> +	u8			eprom_version_hi;
> +	__le16			rsvd17;
> +	__le32			mbx_ds_version;
> +	__le32			ep_fw_ds_version;
> +	u8			ncsi_version_str[12];
> +	__le32			def_extended_timeout;
> +	u8			model_number[32];
> +	u8			description[64];
> +	u8			serial_number[32];
> +	u8			ip_version_str[32];
> +	u8			fw_version_str[32];
> +	u8			bios_version_str[32];
> +	u8			redboot_version_str[32];
> +	u8			driver_version_str[32];
> +	u8			fw_on_flash_version_str[32];
> +	__le32			functionalities_supported;
> +	__le16			max_cdb_length;
> +	u8			asic_revision;
> +	u8			generational_guid0;
> +	__le32			generational_guid1_12[3];
> +	__le16			generational_guid13_14;
> +	u8			generational_guid15;
> +	u8			hba_port_count;
> +	__le16			default_link_down_timeout;
> +	u8			iscsi_version_min_max;
> +	u8			multifunctional_device;
> +	u8			cache_valid;
> +	u8			hba_status;
> +	u8			max_domains_supported;
> +	u8			port_num_type_flags;
> +	__le32			firmware_post_status;
> +	__le32			hba_mtu;
> +	u8			iscsi_features;
> +	u8			rsvd121[3];
> +	__le16			pci_vendor_id;
> +	__le16			pci_device_id;
> +	__le16			pci_sub_vendor_id;
> +	__le16			pci_sub_system_id;
> +	u8			pci_bus_number;
> +	u8			pci_device_number;
> +	u8			pci_function_number;
> +	u8			interface_type;
> +	__le64			unique_identifier;
> +	u8			number_of_netfilters;
> +	u8			rsvd122[3];
> +};
> +
> +/*
> + * COMMON_GET_CNTL_ATTRIBUTES
> + *
> + * This command queries the controller information from the Flash ROM.
> + */
> +struct sli4_rqst_cmn_get_cntl_addl_attributes {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_cntl_addl_attributes {
> +	struct sli4_rsp_hdr	hdr;
> +	__le16		ipl_file_number;
> +	u8		ipl_file_version;
> +	u8		rsvd4;
> +	u8		on_die_temperature;
> +	u8		rsvd5[3];
> +	__le32		driver_advanced_features_supported;
> +	__le32		rsvd7[4];
> +	char		universal_bios_version[32];
> +	char		x86_bios_version[32];
> +	char		efi_bios_version[32];
> +	char		fcode_version[32];
> +	char		uefi_bios_version[32];
> +	char		uefi_nic_version[32];
> +	char		uefi_fcode_version[32];
> +	char		uefi_iscsi_version[32];
> +	char		iscsi_x86_bios_version[32];
> +	char		pxe_x86_bios_version[32];
> +	u8		default_wwpn[8];
> +	u8		ext_phy_version[32];
> +	u8		fc_universal_bios_version[32];
> +	u8		fc_x86_bios_version[32];
> +	u8		fc_efi_bios_version[32];
> +	u8		fc_fcode_version[32];
> +	u8		ext_phy_crc_label[8];
> +	u8		ipl_file_name[16];
> +	u8		rsvd139[72];
> +};
> +
> +/*
> + * COMMON_NOP
> + *
> + * This command does not do anything; it only returns
> + * the payload in the completion.
> + */
> +struct sli4_rqst_cmn_nop {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			context[2];
> +};
> +
> +struct sli4_rsp_cmn_nop {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			context[2];
> +};
> +
> +struct sli4_rqst_cmn_get_resource_extent_info {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16	resource_type;
> +	__le16	rsvd16;
> +};
> +
> +enum sli4_rsc_type {
> +	SLI4_RSC_TYPE_VFI	= 0x20,
> +	SLI4_RSC_TYPE_VPI	= 0x21,
> +	SLI4_RSC_TYPE_RPI	= 0x22,
> +	SLI4_RSC_TYPE_XRI	= 0x23,
> +};
> +
> +struct sli4_rsp_cmn_get_resource_extent_info {
> +	struct sli4_rsp_hdr	hdr;
> +	__le16			resource_extent_count;
> +	__le16			resource_extent_size;
> +};
> +
> +#define SLI4_128BYTE_WQE_SUPPORT	0x02
> +
> +#define GET_Q_CNT_METHOD(m) \
> +	(((m) & RSP_GET_PARAM_Q_CNT_MTHD_MASK) >> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
> +#define GET_Q_CREATE_VERSION(v) \
> +	(((v) & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)
> +
> +enum sli4_rsp_get_params_e {
> +	/*GENERIC*/
> +	RSP_GET_PARAM_Q_CNT_MTHD_SHFT	= 24,
> +	RSP_GET_PARAM_Q_CNT_MTHD_MASK	= (0xF << 24),
> +	RSP_GET_PARAM_QV_SHIFT		= 14,
> +	RSP_GET_PARAM_QV_MASK		= (3 << 14),
> +
> +	/* DW4 */
> +	RSP_GET_PARAM_PROTO_TYPE_MASK	= 0xFF,
> +	/* DW5 */
> +	RSP_GET_PARAM_FT		= (1 << 0),
> +	RSP_GET_PARAM_SLI_REV_MASK	= (0xF << 4),
> +	RSP_GET_PARAM_SLI_FAM_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_IF_TYPE_MASK	= (0xF << 12),
> +	RSP_GET_PARAM_SLI_HINT1_MASK	= (0xFF << 16),
> +	RSP_GET_PARAM_SLI_HINT2_MASK	= (0x1F << 24),
> +	/* DW6 */
> +	RSP_GET_PARAM_EQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_EQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_EQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW8 */
> +	RSP_GET_PARAM_CQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_CQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_CQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW10 */
> +	RSP_GET_PARAM_MQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_MQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW12 */
> +	RSP_GET_PARAM_WQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_WQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_WQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW14 */
> +	RSP_GET_PARAM_RQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_RQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_RQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW15W1*/
> +	RSP_GET_PARAM_RQ_DB_WINDOW_MASK	= 0xF000,
> +	/* DW16 */
> +	RSP_GET_PARAM_FC		= (1 << 0),
> +	RSP_GET_PARAM_EXT		= (1 << 1),
> +	RSP_GET_PARAM_HDRR		= (1 << 2),
> +	RSP_GET_PARAM_SGLR		= (1 << 3),
> +	RSP_GET_PARAM_FBRR		= (1 << 4),
> +	RSP_GET_PARAM_AREG		= (1 << 5),
> +	RSP_GET_PARAM_TGT		= (1 << 6),
> +	RSP_GET_PARAM_TERP		= (1 << 7),
> +	RSP_GET_PARAM_ASSI		= (1 << 8),
> +	RSP_GET_PARAM_WCHN		= (1 << 9),
> +	RSP_GET_PARAM_TCCA		= (1 << 10),
> +	RSP_GET_PARAM_TRTY		= (1 << 11),
> +	RSP_GET_PARAM_TRIR		= (1 << 12),
> +	RSP_GET_PARAM_PHOFF		= (1 << 13),
> +	RSP_GET_PARAM_PHON		= (1 << 14),
> +	RSP_GET_PARAM_PHWQ		= (1 << 15),
> +	RSP_GET_PARAM_BOUND_4GA		= (1 << 16),
> +	RSP_GET_PARAM_RXC		= (1 << 17),
> +	RSP_GET_PARAM_HLM		= (1 << 18),
> +	RSP_GET_PARAM_IPR		= (1 << 19),
> +	RSP_GET_PARAM_RXRI		= (1 << 20),
> +	RSP_GET_PARAM_SGLC		= (1 << 21),
> +	RSP_GET_PARAM_TIMM		= (1 << 22),
> +	RSP_GET_PARAM_TSMM		= (1 << 23),
> +	RSP_GET_PARAM_OAS		= (1 << 25),
> +	RSP_GET_PARAM_LC		= (1 << 26),
> +	RSP_GET_PARAM_AGXF		= (1 << 27),
> +	RSP_GET_PARAM_LOOPBACK_MASK	= (0xF << 28),
> +	/* DW18 */
> +	RSP_GET_PARAM_SGL_PAGE_CNT_MASK = (0xF << 0),
> +	RSP_GET_PARAM_SGL_PAGE_SZS_MASK = (0xFF << 8),
> +	RSP_GET_PARAM_SGL_PP_ALIGN_MASK = (0xFF << 16),
> +};
> +
> +struct sli4_rqst_cmn_get_sli4_params {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_sli4_params {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32		dw4_protocol_type;
> +	__le32		dw5_sli;
> +	__le32		dw6_eq_page_cnt;
> +	__le16		eqe_count_mask;
> +	__le16		rsvd26;
> +	__le32		dw8_cq_page_cnt;
> +	__le16		cqe_count_mask;
> +	__le16		rsvd34;
> +	__le32		dw10_mq_page_cnt;
> +	__le16		mqe_count_mask;
> +	__le16		rsvd42;
> +	__le32		dw12_wq_page_cnt;
> +	__le16		wqe_count_mask;
> +	__le16		rsvd50;
> +	__le32		dw14_rq_page_cnt;
> +	__le16		rqe_count_mask;
> +	__le16		dw15w1_rq_db_window;
> +	__le32		dw16_loopback_scope;
> +	__le32		sge_supported_length;
> +	__le32		dw18_sgl_page_cnt;
> +	__le16		min_rq_buffer_size;
> +	__le16		rsvd75;
> +	__le32		max_rq_buffer_size;
> +	__le16		physical_xri_max;
> +	__le16		physical_rpi_max;
> +	__le16		physical_vpi_max;
> +	__le16		physical_vfi_max;
> +	__le32		rsvd88;
> +	__le16		frag_num_field_offset;
> +	__le16		frag_num_field_size;
> +	__le16		sgl_index_field_offset;
> +	__le16		sgl_index_field_size;
> +	__le32		chain_sge_initial_value_lo;
> +	__le32		chain_sge_initial_value_hi;
> +};
> +
> +/*
> + * COMMON_QUERY_FW_CONFIG
> + *
> + * This command retrieves firmware configuration parameters and adapter
> + * resources available to the driver.
> + */
> +struct sli4_rqst_cmn_query_fw_config {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_query_fw_config {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32		config_number;
> +	__le32		asic_rev;
> +	__le32		physical_port;
> +	__le32		function_mode;
> +	__le32		ulp0_mode;
> +	__le32		ulp0_nic_wqid_base;
> +	__le32		ulp0_nic_wq_total; /* DW10 */
> +	__le32		ulp0_toe_wqid_base;
> +	__le32		ulp0_toe_wq_total;
> +	__le32		ulp0_toe_rqid_base;
> +	__le32		ulp0_toe_rq_total;
> +	__le32		ulp0_toe_defrqid_base;
> +	__le32		ulp0_toe_defrq_total;
> +	__le32		ulp0_lro_rqid_base;
> +	__le32		ulp0_lro_rq_total;
> +	__le32		ulp0_iscsi_icd_base;
> +	__le32		ulp0_iscsi_icd_total; /* DW20 */
> +	__le32		ulp1_mode;
> +	__le32		ulp1_nic_wqid_base;
> +	__le32		ulp1_nic_wq_total;
> +	__le32		ulp1_toe_wqid_base;
> +	__le32		ulp1_toe_wq_total;
> +	__le32		ulp1_toe_rqid_base;
> +	__le32		ulp1_toe_rq_total;
> +	__le32		ulp1_toe_defrqid_base;
> +	__le32		ulp1_toe_defrq_total;
> +	__le32		ulp1_lro_rqid_base; /* DW30 */
> +	__le32		ulp1_lro_rq_total;
> +	__le32		ulp1_iscsi_icd_base;
> +	__le32		ulp1_iscsi_icd_total;
> +	__le32		function_capabilities;
> +	__le32		ulp0_cq_base;
> +	__le32		ulp0_cq_total;
> +	__le32		ulp0_eq_base;
> +	__le32		ulp0_eq_total;
> +	__le32		ulp0_iscsi_chain_icd_base;
> +	__le32		ulp0_iscsi_chain_icd_total; /* DW40 */
> +	__le32		ulp1_iscsi_chain_icd_base;
> +	__le32		ulp1_iscsi_chain_icd_total;
> +};
> +
> +/*Port Types*/
> +enum sli4_port_types {
> +	PORT_TYPE_ETH	= 0,
> +	PORT_TYPE_FC	= 1,
> +};
> +
> +struct sli4_rqst_cmn_get_port_name {
> +	struct sli4_rqst_hdr	hdr;
> +	u8      port_type;
> +	u8      rsvd4[3];
> +};
> +
> +struct sli4_rsp_cmn_get_port_name {
> +	struct sli4_rsp_hdr	hdr;
> +	char		port_name[4];
> +};
> +
> +struct sli4_rqst_cmn_write_flashrom {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32		flash_rom_access_opcode;
> +	__le32		flash_rom_access_operation_type;
> +	__le32		data_buffer_size;
> +	__le32		offset;
> +	u8		data_buffer[4];
> +};
> +
> +/*
> + * COMMON_READ_TRANSCEIVER_DATA
> + *
> + * This command reads SFF transceiver data(Format is defined
> + * by the SFF-8472 specification).
> + */
> +struct sli4_rqst_cmn_read_transceiver_data {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			page_number;
> +	__le32			port;
> +};
> +
> +struct sli4_rsp_cmn_read_transceiver_data {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			page_number;
> +	__le32			port;
> +	u8			page_data[128];
> +	u8			page_data_2[128];
> +};
> +
> +#define SLI4_REQ_DESIRE_READLEN		0xFFFFFF
> +
> +struct sli4_rqst_cmn_read_object {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			desired_read_length_dword;
> +	__le32			read_offset;
> +	u8			object_name[104];
> +	__le32			host_buffer_descriptor_count;
> +	struct sli4_bde	host_buffer_descriptor[0];
> +};
> +
> +#define RSP_COM_READ_OBJ_EOF		0x80000000
> +
> +struct sli4_rsp_cmn_read_object {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			actual_read_length;
> +	__le32			eof_dword;
> +};
> +
> +enum sli4_rqst_write_object_flags {
> +	SLI4_RQ_DES_WRITE_LEN		= 0xFFFFFF,
> +	SLI4_RQ_DES_WRITE_LEN_NOC	= 0x40000000,
> +	SLI4_RQ_DES_WRITE_LEN_EOF	= 0x80000000,
> +};
> +
> +struct sli4_rqst_cmn_write_object {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			desired_write_len_dword;
> +	__le32			write_offset;
> +	u8			object_name[104];
> +	__le32			host_buffer_descriptor_count;
> +	struct sli4_bde	host_buffer_descriptor[0];
> +};
> +
> +#define	RSP_CHANGE_STATUS		0xFF
> +
> +struct sli4_rsp_cmn_write_object {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			actual_write_length;
> +	__le32			change_status_dword;
> +};
> +
> +struct sli4_rqst_cmn_delete_object {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			rsvd4;
> +	__le32			rsvd5;
> +	u8			object_name[104];
> +};
> +
> +#define SLI4_RQ_OBJ_LIST_READ_LEN	0xFFFFFF
> +
> +struct sli4_rqst_cmn_read_object_list {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			desired_read_length_dword;
> +	__le32			read_offset;
> +	u8			object_name[104];
> +	__le32			host_buffer_descriptor_count;
> +	struct sli4_bde	host_buffer_descriptor[0];
> +};
> +
> +enum sli4_rqst_set_dump_flags {
> +	SLI4_CMN_SET_DUMP_BUFFER_LEN	= 0xFFFFFF,
> +	SLI4_CMN_SET_DUMP_FDB		= 0x20000000,
> +	SLI4_CMN_SET_DUMP_BLP		= 0x40000000,
> +	SLI4_CMN_SET_DUMP_QRY		= 0x80000000,
> +};
> +
> +struct sli4_rqst_cmn_set_dump_location {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			buffer_length_dword;
> +	__le32			buf_addr_low;
> +	__le32			buf_addr_high;
> +};
> +
> +struct sli4_rsp_cmn_set_dump_location {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			buffer_length_dword;
> +};
> +
> +enum sli4_dump_level {
> +	SLI4_DUMP_LEVEL_NONE,
> +	SLI4_CHIP_LEVEL_DUMP,
> +	SLI4_FUNC_DESC_DUMP,
> +};
> +
> +enum sli4_dump_state {
> +	SLI4_DUMP_STATE_NONE,
> +	SLI4_CHIP_DUMP_STATE_VALID,
> +	SLI4_FUNC_DUMP_STATE_VALID,
> +};
> +
> +enum sli4_dump_status {
> +	SLI4_DUMP_READY_STATUS_NOT_READY,
> +	SLI4_DUMP_READY_STATUS_DD_PRESENT,
> +	SLI4_DUMP_READY_STATUS_FDB_PRESENT,
> +	SLI4_DUMP_READY_STATUS_SKIP_DUMP,
> +	SLI4_DUMP_READY_STATUS_FAILED = -1,
> +};
> +
> +enum sli4_set_features {
> +	SLI4_SET_FEATURES_DIF_SEED			= 0x01,
> +	SLI4_SET_FEATURES_XRI_TIMER			= 0x03,
> +	SLI4_SET_FEATURES_MAX_PCIE_SPEED		= 0x04,
> +	SLI4_SET_FEATURES_FCTL_CHECK			= 0x05,
> +	SLI4_SET_FEATURES_FEC				= 0x06,
> +	SLI4_SET_FEATURES_PCIE_RECV_DETECT		= 0x07,
> +	SLI4_SET_FEATURES_DIF_MEMORY_MODE		= 0x08,
> +	SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	= 0x09,
> +	SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		= 0x0A,
> +	SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI		= 0x0C,
> +	SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	= 0x0D,
> +	SLI4_SET_FEATURES_SET_FTD_XFER_HINT		= 0x0F,
> +	SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		= 0x11,
> +};
> +
> +struct sli4_rqst_cmn_set_features {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			feature;
> +	__le32			param_len;
> +	__le32			params[8];
> +};
> +
> +struct sli4_rqst_cmn_set_features_dif_seed {
> +	__le16		seed;
> +	__le16		rsvd16;
> +};
> +
> +enum sli4_rqst_set_mrq_features {
> +	SLI4_RQ_MULTIRQ_ISR		 = 0x1,
> +	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
> +
> +	SLI4_RQ_MULTIRQ_NUM_RQS		 = 0xFF,
> +	SLI4_RQ_MULTIRQ_RQ_SELECT	 = 0xF00,
> +};
> +
> +struct sli4_rqst_cmn_set_features_multirq {
> +	__le32		auto_gen_xfer_dword;
> +	__le32		num_rqs_dword;
> +};
> +
> +enum sli4_rqst_health_check_flags {
> +	SLI4_RQ_HEALTH_CHECK_ENABLE	= 0x1,
> +	SLI4_RQ_HEALTH_CHECK_QUERY	= 0x2,
> +};
> +
> +struct sli4_rqst_cmn_set_features_health_check {
> +	__le32		health_check_dword;
> +};
> +
> +struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint {
> +	__le32		fdt_xfer_hint;
> +};
> +
> +struct sli4_rqst_dmtf_exec_clp_cmd {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			cmd_buf_length;
> +	__le32			resp_buf_length;
> +	__le32			cmd_buf_addr_low;
> +	__le32			cmd_buf_addr_high;
> +	__le32			resp_buf_addr_low;
> +	__le32			resp_buf_addr_high;
> +};
> +
> +struct sli4_rsp_dmtf_exec_clp_cmd {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			rsvd4;
> +	__le32			resp_length;
> +	__le32			rsvd6;
> +	__le32			rsvd7;
> +	__le32			rsvd8;
> +	__le32			rsvd9;
> +	__le32			clp_status;
> +	__le32			clp_detailed_status;
> +};
> +
> +#define SLI4_PROTOCOL_FC		0x10
> +#define SLI4_PROTOCOL_DEFAULT		0xff
> +
> +struct sli4_rspource_descriptor_v1 {
> +	u8		descriptor_type;
> +	u8		descriptor_length;
> +	__le16		rsvd16;
> +	__le32		type_specific[0];
> +};
> +
> +enum sli4_pcie_desc_flags {
> +	SLI4_PCIE_DESC_IMM		= 0x4000,
> +	SLI4_PCIE_DESC_NOSV		= 0x8000,
> +
> +	SLI4_PCIE_DESC_PF_NO		= 0x3FF0000,
> +
> +	SLI4_PCIE_DESC_MISSN_ROLE	= 0xFF,
> +	SLI4_PCIE_DESC_PCHG		= 0x8000000,
> +	SLI4_PCIE_DESC_SCHG		= 0x10000000,
> +	SLI4_PCIE_DESC_XCHG		= 0x20000000,
> +	SLI4_PCIE_DESC_XROM		= 0xC0000000
> +};
> +
> +struct sli4_pcie_resource_descriptor_v1 {
> +	u8		descriptor_type;
> +	u8		descriptor_length;
> +	__le16		imm_nosv_dword;
> +	__le32		pf_number_dword;
> +	__le32		rsvd3;
> +	u8		sriov_state;
> +	u8		pf_state;
> +	u8		pf_type;
> +	u8		rsvd4;
> +	__le16		number_of_vfs;
> +	__le16		rsvd5;
> +	__le32		mission_roles_dword;
> +	__le32		rsvd7[16];
> +};
> +
> +struct sli4_rqst_cmn_get_function_config {
> +	struct sli4_rqst_hdr  hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_function_config {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			desc_count;
> +	__le32			desc[54];
> +};
> +
> +/* Link Config Descriptor for link config functions */
> +struct sli4_link_config_descriptor {
> +	u8		link_config_id;
> +	u8		rsvd1[3];
> +	__le32		config_description[8];
> +};
> +
> +#define MAX_LINK_DES	10
> +
> +struct sli4_rqst_cmn_get_reconfig_link_info {
> +	struct sli4_rqst_hdr  hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_reconfig_link_info {
> +	struct sli4_rsp_hdr	hdr;
> +	u8			active_link_config_id;
> +	u8			rsvd17;
> +	u8			next_link_config_id;
> +	u8			rsvd19;
> +	__le32			link_configuration_descriptor_count;
> +	struct sli4_link_config_descriptor
> +				desc[MAX_LINK_DES];
> +};
> +
> +enum sli4_set_reconfig_link_flags {
> +	SLI4_SET_RECONFIG_LINKID_NEXT	= 0xff,
> +	SLI4_SET_RECONFIG_LINKID_FD	= (1 << 31),
> +};
> +
> +struct sli4_rqst_cmn_set_reconfig_link_id {
> +	struct sli4_rqst_hdr  hdr;
> +	__le32			dw4_flags;
> +};
> +
> +struct sli4_rsp_cmn_set_reconfig_link_id {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +struct sli4_rqst_lowlevel_set_watchdog {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16			watchdog_timeout;
> +	__le16			rsvd18;
> +};
> +
> +struct sli4_rsp_lowlevel_set_watchdog {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			rsvd;
> +};
> +
> +/* FC opcode (OPC) values */
> +enum sli4_fc_opcodes {
> +	SLI4_OPC_WQ_CREATE		= 0x1,
> +	SLI4_OPC_WQ_DESTROY		= 0x2,
> +	SLI4_OPC_POST_SGL_PAGES		= 0x3,
> +	SLI4_OPC_RQ_CREATE		= 0x5,
> +	SLI4_OPC_RQ_DESTROY		= 0x6,
> +	SLI4_OPC_READ_FCF_TABLE		= 0x8,
> +	SLI4_OPC_POST_HDR_TEMPLATES	= 0xb,
> +	SLI4_OPC_REDISCOVER_FCF		= 0x10,
> +};
> +
> +/* Use the default CQ associated with the WQ */
> +#define SLI4_CQ_DEFAULT 0xffff
> +
> +/*
> + * POST_SGL_PAGES
> + *
> + * Register the scatter gather list (SGL) memory and
> + * associate it with an XRI.
> + */
> +struct sli4_rqst_post_sgl_pages {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16			xri_start;
> +	__le16			xri_count;
> +	struct {
> +		__le32		page0_low;
> +		__le32		page0_high;
> +		__le32		page1_low;
> +		__le32		page1_high;
> +	} page_set[10];
> +};
> +
> +struct sli4_rsp_post_sgl_pages {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +struct sli4_rqst_post_hdr_templates {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16			rpi_offset;
> +	__le16			page_count;
> +	struct sli4_dmaaddr	page_descriptor[0];
> +};
> +
> +#define SLI4_HDR_TEMPLATE_SIZE		64
> +
> +enum sli4_io_flags {
> +/* The XRI associated with this IO is already active */
> +	SLI4_IO_CONTINUATION		= (1 << 0),
> +/* Automatically generate a good RSP frame */
> +	SLI4_IO_AUTO_GOOD_RESPONSE	= (1 << 1),
> +	SLI4_IO_NO_ABORT		= (1 << 2),
> +/* Set the DNRX bit because no auto xref rdy buffer is posted */
> +	SLI4_IO_DNRX			= (1 << 3),
> +};
> +
> +enum sli4_callback {
> +	SLI4_CB_LINK,
> +	SLI4_CB_MAX,
> +};
> +
> +enum sli4_link_status {
> +	SLI_LINK_STATUS_UP,
> +	SLI_LINK_STATUS_DOWN,
> +	SLI_LINK_STATUS_NO_ALPA,
> +	SLI_LINK_STATUS_MAX,
> +};
> +
> +enum sli4_link_topology {
> +	SLI_LINK_TOPO_NPORT = 1,
> +	SLI_LINK_TOPO_LOOP,
> +	SLI_LINK_TOPO_LOOPBACK_INTERNAL,
> +	SLI_LINK_TOPO_LOOPBACK_EXTERNAL,
> +	SLI_LINK_TOPO_NONE,
> +	SLI_LINK_TOPO_MAX,
> +};
> +
> +enum sli4_link_medium {
> +	SLI_LINK_MEDIUM_ETHERNET,
> +	SLI_LINK_MEDIUM_FC,
> +	SLI_LINK_MEDIUM_MAX,
> +};
> +
> +/*Driver specific structures*/
> +
> +struct sli4_link_event {
> +	enum sli4_link_status		status;
> +	enum sli4_link_topology	topology;

Code indentation.

> +	enum sli4_link_medium		medium;
> +	u32				speed;
> +	u8				*loop_map;
> +	u32				fc_id;
> +};
> +
> +enum sli4_resource {
> +	SLI_RSRC_VFI,
> +	SLI_RSRC_VPI,
> +	SLI_RSRC_RPI,
> +	SLI_RSRC_XRI,
> +	SLI_RSRC_FCFI,
> +	SLI_RSRC_MAX,
> +};
> +
> +struct sli4_extent {
> +	u32		number;
> +	u32		size;
> +	u32		n_alloc;
> +	u32		*base;
> +	unsigned long	*use_map;
> +	u32		map_size;
> +};
> +
> +struct sli4_queue_info {
> +	u16	max_qcount[SLI_QTYPE_MAX];
> +	u32	max_qentries[SLI_QTYPE_MAX];
> +	u16	count_mask[SLI_QTYPE_MAX];
> +	u16	count_method[SLI_QTYPE_MAX];
> +	u32	qpage_count[SLI_QTYPE_MAX];
> +};
> +
> +#define	SLI_PCI_MAX_REGS		6

 Could PCI_STD_NUM_BARS be used here instead?
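
E.g., assuming these six mapped regions correspond to the standard PCI
BARs (PCI_STD_NUM_BARS is also 6, so the array size would not change):

	void __iomem			*reg[PCI_STD_NUM_BARS];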

> +struct sli4 {
> +	void				*os;
> +	struct pci_dev			*pcidev;

s/pcidev/pci/ as this seems to be a more common pattern.

> +	void __iomem			*reg[SLI_PCI_MAX_REGS];
> +
> +	u32				sli_rev;
> +	u32				sli_family;
> +	u32				if_type;

Ah, this is the if_type member from the other comment.

> +
> +	u16				asic_type;
> +	u16				asic_rev;
> +
> +	u16				e_d_tov;
> +	u16				r_a_tov;
> +	struct sli4_queue_info	qinfo;

Please align it with the rest.

> +	u16				link_module_type;
> +	u8				rq_batch;
> +	u16				rq_min_buf_size;
> +	u32				rq_max_buf_size;
> +	u8				topology;
> +	u8				wwpn[8];
> +	u8				wwnn[8];
> +	u32				fw_rev[2];
> +	u8				fw_name[2][16];
> +	char				ipl_name[16];
> +	u32				hw_rev[3];
> +	u8				port_number;
> +	char				port_name[2];
> +	char				modeldesc[64];
> +	char				bios_version_string[32];
> +	/*
> +	 * Tracks the port resources using extents metaphor. For
> +	 * devices that don't implement extents (i.e.
> +	 * has_extents == FALSE), the code models each resource as
> +	 * a single large extent.
> +	 */
> +	struct sli4_extent		extent[SLI_RSRC_MAX];
> +	u32				features;
> +	u32				has_extents:1,
> +					auto_reg:1,
> +					auto_xfer_rdy:1,
> +					hdr_template_req:1,
> +					perf_hint:1,
> +					perf_wq_id_association:1,
> +					cq_create_version:2,
> +					mq_create_version:2,
> +					high_login_mode:1,
> +					sgl_pre_registered:1,
> +					sgl_pre_registration_required:1,
> +					t10_dif_inline_capable:1,
> +					t10_dif_separate_capable:1;

Why both the features field and the bitfields?

> +	u32				sge_supported_length;
> +	u32				sgl_page_sizes;
> +	u32				max_sgl_pages;
> +	u32				wqe_size;
> +
> +	/*
> +	 * Callback functions
> +	 */
> +	int				(*link)(void *ctx, void *event);
> +	void				*link_arg;
> +
> +	struct efc_dma		bmbx;
> +
> +	/* Save pointer to physical memory descriptor for non-embedded
> +	 * SLI_CONFIG commands for BMBX dumping purposes
> +	 */
> +	struct efc_dma		*bmbx_non_emb_pmd;
> +
> +	struct efc_dma		vpd_data;
> +	u32			vpd_length;
> +};
> +
>  #endif /* !_SLI4_H */
> -- 
> 2.16.4

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2020-04-12  3:32 ` [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2020-04-15 10:04   ` Daniel Wagner
  2020-04-22  5:05     ` James Smart
  2020-04-15 12:27   ` Hannes Reinecke
  1 sibling, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 10:04 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

Hi,

On Sat, Apr 11, 2020 at 08:32:36PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds service routines to create mailbox commands
> and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Removed efc_assert define. Replaced with WARN_ON.
>   Returned defined return values EFC_SUCCESS/FAIL
> ---
>  drivers/scsi/elx/include/efc_common.h |   18 +
>  drivers/scsi/elx/libefc_sli/sli4.c    | 1514 +++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc_sli/sli4.h    |    9 +
>  3 files changed, 1541 insertions(+)
> 
> diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
> index c427f75da4d5..4c7574dacb99 100644
> --- a/drivers/scsi/elx/include/efc_common.h
> +++ b/drivers/scsi/elx/include/efc_common.h
> @@ -22,4 +22,22 @@ struct efc_dma {
>  	struct pci_dev	*pdev;
>  };
>  
> +#define efc_log_crit(efc, fmt, args...) \
> +		dev_crit(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_err(efc, fmt, args...) \
> +		dev_err(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_warn(efc, fmt, args...) \
> +		dev_warn(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_info(efc, fmt, args...) \
> +		dev_info(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_test(efc, fmt, args...) \
> +		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
> +
> +#define efc_log_debug(efc, fmt, args...) \
> +		dev_dbg(&((efc)->pcidev)->dev, fmt, ##args)
> +
>  #endif /* __EFC_COMMON_H__ */
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 29d33becd334..224a06610c78 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -24,3 +24,1517 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
>  	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
>  	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
>  };
> +
> +/* Convert queue type enum (SLI_QTYPE_*) into a string */
> +static char *SLI_QNAME[] = {
> +	"Event Queue",
> +	"Completion Queue",
> +	"Mailbox Queue",
> +	"Work Queue",
> +	"Receive Queue",
> +	"Undefined"
> +};
> +
> +/*
> + * Write a SLI_CONFIG command to the provided buffer.
> + *
> + * @sli4 SLI context pointer.
> + * @buf Destination buffer for the command.
> + * @size size of the destination buffer(buf).
> + * @length Length in bytes of attached command.
> + * @dma DMA buffer for non-embedded commands.
> + *
> + */

Turn this into kerneldoc style.
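
Roughly like this, reusing the descriptions from the existing comment
(illustrative only):

/**
 * sli_config_cmd_init() - Write a SLI_CONFIG command to the provided buffer.
 * @sli4: SLI context pointer.
 * @buf: Destination buffer for the command.
 * @size: Size of the destination buffer (buf).
 * @length: Length in bytes of the attached command.
 * @dma: DMA buffer for non-embedded commands, NULL for embedded ones.
 *
 * Return: pointer to the command payload area, or NULL on error.
 */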

> +static void *
> +sli_config_cmd_init(struct sli4 *sli4, void *buf,
> +		    size_t size, u32 length,
> +		    struct efc_dma *dma)
> +{
> +	struct sli4_cmd_sli_config *config = NULL;

Not needed; config is assigned buf just below.

> +	u32 flags = 0;
> +
> +	if (length > sizeof(config->payload.embed) && !dma) {
> +		efc_log_err(sli4, "Too big for an embedded cmd with len(%d)\n",
> +			    length);
> +		return NULL;
> +	}
> +
> +	config = buf;
> +
> +	memset(buf, 0, size);

Maybe do the memset first and then the config = buf assignment.
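I.e. roughly (just reordering the two statements):

	memset(buf, 0, size);
	config = buf;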

> +
> +	config->hdr.command = MBX_CMD_SLI_CONFIG;
> +	if (!dma) {
> +		flags |= SLI4_SLICONF_EMB;
> +		config->dw1_flags = cpu_to_le32(flags);
> +		config->payload_len = cpu_to_le32(length);
> +		buf += offsetof(struct sli4_cmd_sli_config, payload.embed);
> +		return buf;
> +	}
> +
> +	flags = SLI4_SLICONF_PMDCMD_VAL_1;
> +	flags &= ~SLI4_SLICONF_EMB;
> +	config->dw1_flags = cpu_to_le32(flags);
> +
> +	config->payload.mem.addr.low = cpu_to_le32(lower_32_bits(dma->phys));
> +	config->payload.mem.addr.high =	cpu_to_le32(upper_32_bits(dma->phys));
> +	config->payload.mem.length =
> +			cpu_to_le32(dma->size & SLI4_SLICONFIG_PMD_LEN);
> +	config->payload_len = cpu_to_le32(dma->size);
> +	/* save pointer to DMA for BMBX dumping purposes */
> +	sli4->bmbx_non_emb_pmd = dma;
> +	return dma->virt;
> +}
> +
> +/*
> + * Write a COMMON_CREATE_CQ command.
> + *
> + * This creates a Version 2 message.
> + *
> + * Returns 0 on success, or non-zero otherwise.
> + */

Kerneldoc style here as well.

> +static int
> +sli_cmd_common_create_cq(struct sli4 *sli4, void *buf, size_t size,
> +			 struct efc_dma *qmem,
> +			 u16 eq_id)
> +{
> +	struct sli4_rqst_cmn_create_cq_v2 *cqv2 = NULL;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 page_bytes = 0;
> +	u32 num_pages = 0;
> +	size_t cmd_size = 0;
> +	u32 page_size = 0;
> +	u32 n_cqe = 0;
> +	u32 dw5_flags = 0;
> +	u16 dw6w1_arm = 0;
> +	__le32 len;
> +
> +	/* First calculate number of pages and the mailbox cmd length */
> +	n_cqe = qmem->size / SLI4_CQE_BYTES;
> +	switch (n_cqe) {
> +	case 256:
> +	case 512:
> +	case 1024:
> +	case 2048:
> +		page_size = 1;
> +		break;
> +	case 4096:
> +		page_size = 2;
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +	page_bytes = page_size * SLI_PAGE_SIZE;

page_size is a confusing variable name: multiplying page_size by
SLI_PAGE_SIZE gives bytes, so it is really a page count rather than a size.
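
Just as an illustration (the name is only a suggestion):

	u32 sli_page_cnt = 0;
	...
	page_bytes = sli_page_cnt * SLI_PAGE_SIZE;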

> +	num_pages = sli_page_count(qmem->size, page_bytes);
> +
> +	cmd_size = CFG_RQST_CMDSZ(cmn_create_cq_v2) + SZ_DMAADDR * num_pages;
> +
> +	cqv2 = sli_config_cmd_init(sli4, buf, size, cmd_size, NULL);
> +	if (!cqv2)
> +		return EFC_FAIL;
> +
> +	len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_v2,
> +					 SZ_DMAADDR * num_pages);
> +	sli_cmd_fill_hdr(&cqv2->hdr, CMN_CREATE_CQ, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V2, len);
> +	cqv2->page_size = page_size;
> +
> +	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
> +	cqv2->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages || num_pages > SLI4_CMN_CREATE_CQ_V2_MAX_PAGES)
> +		return EFC_FAIL;
> +
> +	switch (num_pages) {
> +	case 1:
> +		dw5_flags |= CQ_CNT_VAL(256);
> +		break;
> +	case 2:
> +		dw5_flags |= CQ_CNT_VAL(512);
> +		break;
> +	case 4:
> +		dw5_flags |= CQ_CNT_VAL(1024);
> +		break;
> +	case 8:
> +		dw5_flags |= CQ_CNT_VAL(LARGE);
> +		cqv2->cqe_count = cpu_to_le16(n_cqe);
> +		break;
> +	default:
> +		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_CQV2_AUTOVALID;
> +
> +	dw5_flags |= CREATE_CQV2_EVT;
> +	dw5_flags |= CREATE_CQV2_VALID;
> +
> +	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
> +	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
> +	cqv2->eq_id = cpu_to_le16(eq_id);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
> +		cqv2->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
> +		cqv2->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a COMMON_CREATE_EQ command */
> +static int
> +sli_cmd_common_create_eq(struct sli4 *sli4, void *buf, size_t size,
> +			 struct efc_dma *qmem)
> +{
> +	struct sli4_rqst_cmn_create_eq *eq;
> +	u32 p;
> +	uintptr_t addr;
> +	u16 num_pages;
> +	u32 dw5_flags = 0;
> +	u32 dw6_flags = 0, ver = CMD_V0;

Maybe initialize ver below, where SLI4_INTF_IF_TYPE_6 is tested. That
would keep the setting of ver in one place.
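
Something like this, keeping the assignment next to the if_type test
(untested):

	u32 dw6_flags = 0, ver;
	...
	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
		ver = CMD_V2;
	else
		ver = CMD_V0;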

> +
> +	eq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_create_eq), NULL);
> +	if (!eq)
> +		return EFC_FAIL;
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		ver = CMD_V2;
> +
> +	sli_cmd_fill_hdr(&eq->hdr, CMN_CREATE_EQ, SLI4_SUBSYSTEM_COMMON,
> +			 ver, CFG_RQST_PYLD_LEN(cmn_create_eq));
> +
> +	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
> +	num_pages = qmem->size / SLI_PAGE_SIZE;
> +	eq->num_pages = cpu_to_le16(num_pages);
> +
> +	switch (num_pages) {
> +	case 1:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(1024);
> +		break;
> +	case 2:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(2048);
> +		break;
> +	case 4:
> +		dw5_flags |= SLI4_EQE_SIZE_4;
> +		dw6_flags |= EQ_CNT_VAL(4096);
> +		break;
> +	default:
> +		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_EQ_AUTOVALID;
> +
> +	dw5_flags |= CREATE_EQ_VALID;
> +	dw6_flags &= (~CREATE_EQ_ARM);
> +	eq->dw5_flags = cpu_to_le32(dw5_flags);
> +	eq->dw6_flags = cpu_to_le32(dw6_flags);
> +	eq->dw7_delaymulti = cpu_to_le32(CREATE_EQ_DELAYMULTI);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {
> +		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
> +		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_common_create_mq_ext(struct sli4 *sli4, void *buf, size_t size,
> +			     struct efc_dma *qmem,
> +			     u16 cq_id)
> +{
> +	struct sli4_rqst_cmn_create_mq_ext *mq;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 num_pages;
> +	u16 dw6w1_flags = 0;
> +
> +	mq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(cmn_create_mq_ext),
> +				 NULL);
> +	if (!mq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&mq->hdr, CMN_CREATE_MQ_EXT, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_create_mq_ext));
> +
> +	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
> +	num_pages = qmem->size / SLI_PAGE_SIZE;
> +	mq->num_pages = cpu_to_le16(num_pages);
> +	switch (num_pages) {
> +	case 1:
> +		dw6w1_flags |= SLI4_MQE_SIZE_16;
> +		break;
> +	case 2:
> +		dw6w1_flags |= SLI4_MQE_SIZE_32;
> +		break;
> +	case 4:
> +		dw6w1_flags |= SLI4_MQE_SIZE_64;
> +		break;
> +	case 8:
> +		dw6w1_flags |= SLI4_MQE_SIZE_128;
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
> +
> +	if (sli4->mq_create_version) {
> +		mq->cq_id_v1 = cpu_to_le16(cq_id);
> +		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
> +	} else {
> +		dw6w1_flags |= (cq_id << CREATE_MQEXT_CQID_SHIFT);
> +	}
> +	mq->dw7_val = cpu_to_le32(CREATE_MQEXT_VAL);
> +
> +	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {
> +		mq->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
> +		mq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
> +		     struct efc_dma *qmem, u16 cq_id)
> +{
> +	struct sli4_rqst_wq_create *wq;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 page_size = 0;
> +	u32 page_bytes = 0;
> +	u32 n_wqe = 0;
> +	u16 num_pages;
> +
> +	wq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(wq_create), NULL);
> +	if (!wq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&wq->hdr, SLI4_OPC_WQ_CREATE, SLI4_SUBSYSTEM_FC,
> +			 CMD_V1, CFG_RQST_PYLD_LEN(wq_create));
> +	n_wqe = qmem->size / sli4->wqe_size;
> +
> +	switch (qmem->size) {
> +	case 4096:
> +	case 8192:
> +	case 16384:
> +	case 32768:
> +		page_size = 1;
> +		break;
> +	case 65536:
> +		page_size = 2;
> +		break;
> +	case 131072:
> +		page_size = 4;
> +		break;
> +	case 262144:
> +		page_size = 8;
> +		break;
> +	case 524288:
> +		page_size = 10;
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +	page_bytes = page_size * SLI_PAGE_SIZE;

Same comment as above about the page_size variable name.

> +
> +	/* valid values for number of pages(num_pages): 1-8 */
> +	num_pages = sli_page_count(qmem->size, page_bytes);
> +	wq->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages || num_pages > SLI4_WQ_CREATE_MAX_PAGES)
> +		return EFC_FAIL;
> +
> +	wq->cq_id = cpu_to_le16(cq_id);
> +
> +	wq->page_size = page_size;
> +
> +	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
> +		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
> +	else
> +		wq->wqe_size_byte |= SLI4_WQE_SIZE;
> +
> +	wq->wqe_count = cpu_to_le16(n_wqe);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_bytes) {
> +		wq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
> +		wq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
> +		  struct efc_dma *qmem,
> +		  u16 cq_id, u16 buffer_size)

I think sli_cmd_rq_create() should be called sli_cmd_rq_create_v0() to
match sli_cmd_rq_create_{v1|v2}() and make it more explicit in the API
which version this function is.

> +{
> +	struct sli4_rqst_rq_create *rq;
> +	u32 p;
> +	uintptr_t addr;
> +	u16 num_pages;
> +
> +	rq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(rq_create), NULL);
> +	if (!rq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(rq_create));
> +	/* valid values for number of pages: 1-8 (sec 4.5.6) */
> +	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
> +	rq->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages ||
> +	    num_pages > SLI4_RQ_CREATE_V0_MAX_PAGES) {
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * RQE count is the log base 2 of the total number of entries
> +	 */
> +	rq->rqe_count_byte |= 31 - __builtin_clz(qmem->size / SLI4_RQE_SIZE);
> +
> +	if (buffer_size < SLI4_RQ_CREATE_V0_MIN_BUF_SIZE ||
> +	    buffer_size > SLI4_RQ_CREATE_V0_MAX_BUF_SIZE) {
> +		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
> +		       buffer_size,
> +		       SLI4_RQ_CREATE_V0_MIN_BUF_SIZE,
> +		       SLI4_RQ_CREATE_V0_MAX_BUF_SIZE);
> +		return EFC_FAIL;
> +	}
> +	rq->buffer_size = cpu_to_le16(buffer_size);
> +
> +	rq->cq_id = cpu_to_le16(cq_id);
> +
> +	for (p = 0, addr = qmem->phys; p < num_pages;
> +	     p++, addr += SLI_PAGE_SIZE) {
> +		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
> +		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, size_t size,
> +		     struct efc_dma *qmem, u16 cq_id,
> +		     u16 buffer_size)
> +{
> +	struct sli4_rqst_rq_create_v1 *rq;
> +	u32 p;
> +	uintptr_t addr;
> +	u32 num_pages;
> +
> +	rq = sli_config_cmd_init(sli4, buf, size,
> +				 SLI_CONFIG_PYLD_LENGTH(rq_create_v1), NULL);
> +	if (!rq)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
> +			 CMD_V1, CFG_RQST_PYLD_LEN(rq_create_v1));
> +	/* Disable "no buffer warnings" to avoid Lancer bug */
> +	rq->dim_dfd_dnb |= SLI4_RQ_CREATE_V1_DNB;
> +
> +	/* valid values for number of pages: 1-8 (sec 4.5.6) */
> +	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
> +	rq->num_pages = cpu_to_le16(num_pages);
> +	if (!num_pages ||
> +	    num_pages > SLI4_RQ_CREATE_V1_MAX_PAGES) {
> +		efc_log_info(sli4, "num_pages %d not valid, max %d\n",
> +			num_pages, SLI4_RQ_CREATE_V1_MAX_PAGES);
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * RQE count is the total number of entries (note not lg2(# entries))
> +	 */
> +	rq->rqe_count = cpu_to_le16(qmem->size / SLI4_RQE_SIZE);
> +
> +	rq->rqe_size_byte |= SLI4_RQE_SIZE_8;
> +
> +	rq->page_size = SLI4_RQ_PAGE_SIZE_4096;
> +
> +	if (buffer_size < sli4->rq_min_buf_size ||
> +	    buffer_size > sli4->rq_max_buf_size) {
> +		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
> +		       buffer_size,
> +				sli4->rq_min_buf_size,
> +				sli4->rq_max_buf_size);
> +		return EFC_FAIL;
> +	}
> +	rq->buffer_size = cpu_to_le32(buffer_size);
> +
> +	rq->cq_id = cpu_to_le16(cq_id);
> +
> +	for (p = 0, addr = qmem->phys;
> +			p < num_pages;
> +			p++, addr += SLI_PAGE_SIZE) {
> +		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
> +		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_rq_create_v2(struct sli4 *sli4, u32 num_rqs,
> +		     struct sli4_queue *qs[], u32 base_cq_id,
> +		     u32 header_buffer_size,
> +		     u32 payload_buffer_size, struct efc_dma *dma)
> +{
> +	struct sli4_rqst_rq_create_v2 *req = NULL;
> +	u32 i, p, offset = 0;
> +	u32 payload_size, page_count;
> +	uintptr_t addr;
> +	u32 num_pages;
> +	__le32 req_len;
> +
> +	page_count =  sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE) * num_rqs;
> +
> +	/* Payload length must accommodate both request and response */
> +	payload_size = max(CFG_RQST_CMDSZ(rq_create_v2) +
> +			   SZ_DMAADDR * page_count,
> +			   sizeof(struct sli4_rsp_cmn_create_queue_set));
> +
> +	dma->size = payload_size;
> +	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
> +				      &dma->phys, GFP_DMA);
> +	if (!dma->virt)
> +		return EFC_FAIL;
> +
> +	memset(dma->virt, 0, payload_size);
> +
> +	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +			       payload_size, dma);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	req_len = CFG_RQST_PYLD_LEN_VAR(rq_create_v2, SZ_DMAADDR * page_count);
> +	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
> +			 CMD_V2, req_len);
> +	/* Fill Payload fields */
> +	req->dim_dfd_dnb  |= SLI4_RQCREATEV2_DNB;
> +	num_pages = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE);

Code alignment.

> +	req->num_pages	   = cpu_to_le16(num_pages);
> +	req->rqe_count     = cpu_to_le16(qs[0]->dma.size / SLI4_RQE_SIZE);
> +	req->rqe_size_byte |= SLI4_RQE_SIZE_8;
> +	req->page_size     = SLI4_RQ_PAGE_SIZE_4096;
> +	req->rq_count      = num_rqs;
> +	req->base_cq_id    = cpu_to_le16(base_cq_id);
> +	req->hdr_buffer_size     = cpu_to_le16(header_buffer_size);
> +	req->payload_buffer_size = cpu_to_le16(payload_buffer_size);
> +
> +	for (i = 0; i < num_rqs; i++) {
> +		for (p = 0, addr = qs[i]->dma.phys; p < num_pages;
> +		     p++, addr += SLI_PAGE_SIZE) {
> +			req->page_phys_addr[offset].low =
> +					cpu_to_le32(lower_32_bits(addr));
> +			req->page_phys_addr[offset].high =
> +					cpu_to_le32(upper_32_bits(addr));
> +			offset++;
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +__sli_queue_destroy(struct sli4 *sli4, struct sli4_queue *q)
> +{
> +	if (!q->dma.size)
> +		return;
> +
> +	dma_free_coherent(&sli4->pcidev->dev, q->dma.size,
> +			  q->dma.virt, q->dma.phys);
> +	memset(&q->dma, 0, sizeof(struct efc_dma));

Is this necessary to clear q->dma? Just asking if it's possible to
avoid the additional work.


> +}
> +
> +int
> +__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q,
> +		 u32 qtype, size_t size, u32 n_entries,
> +		      u32 align)
> +{
> +	if (!q->dma.virt || size != q->size ||
> +	    n_entries != q->length) {

I would put it all on one line; it should still fit within the column limit.

And couldn't you test the logical inverse and return early to avoid the
extra indentation? I would also add the queue type to the log message, so
it's not necessary to repeat the same log statement throughout the code
base. Something like:

	if (q->dma.virt && size == q->size && n_entries == q->length) {
		efc_log_err(sli4, "%s %s failed\n", __func__, SLI_QNAME[qtype]);
		return EFC_FAIL;
	}

Maybe also combine the error path here with the one below and use a
goto/label.
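
Untested, just to show the shape:

	if (q->dma.virt && size == q->size && n_entries == q->length)
		goto err;
	...
	q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev, q->dma.size,
					 &q->dma.phys, GFP_DMA);
	if (!q->dma.virt)
		goto err;
	...
	return EFC_SUCCESS;

err:
	efc_log_err(sli4, "%s %s failed\n", __func__, SLI_QNAME[qtype]);
	return EFC_FAIL;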

> +		if (q->dma.size)
> +			__sli_queue_destroy(sli4, q);
> +
> +		memset(q, 0, sizeof(struct sli4_queue));
> +
> +		q->dma.size = size * n_entries;
> +		q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev,
> +						 q->dma.size, &q->dma.phys,
> +						 GFP_DMA);
> +		if (!q->dma.virt) {
> +			memset(&q->dma, 0, sizeof(struct efc_dma));

So if __sli_queue_destroy() keeps clearing q->dma, then this one can
go away, since __sli_queue_destroy() will be called if __sli_queue_init()
fails.

> +			efc_log_err(sli4, "%s allocation failed\n",
> +			       SLI_QNAME[qtype]);
> +			return EFC_FAIL;
> +		}
> +
> +		memset(q->dma.virt, 0, size * n_entries);
> +
> +		spin_lock_init(&q->lock);
> +
> +		q->type = qtype;
> +		q->size = size;
> +		q->length = n_entries;
> +
> +		if (q->type == SLI_QTYPE_EQ || q->type == SLI_QTYPE_CQ) {
> +			/* For prism, phase will be flipped after
> +			 * a sweep through eq and cq
> +			 */
> +			q->phase = 1;
> +		}
> +
> +		/* Limit to hwf the queue size per interrupt */
> +		q->proc_limit = n_entries / 2;
> +
> +		switch (q->type) {
> +		case SLI_QTYPE_EQ:
> +			q->posted_limit = q->length / 2;
> +			break;
> +		default:
> +			q->posted_limit = 64;
> +			break;
> +		}
> +	} else {
> +		efc_log_err(sli4, "%s failed\n", __func__);
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q,
> +		u32 n_entries, u32 buffer_size,
> +		struct sli4_queue *cq, bool is_hdr)
> +{
> +	if (__sli_queue_init(sli4, q, SLI_QTYPE_RQ, SLI4_RQE_SIZE,
> +			     n_entries, SLI_PAGE_SIZE))
> +		return EFC_FAIL;
> +
> +	if (!sli_cmd_rq_create_v1(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +				  &q->dma, cq->id, buffer_size)) {
> +		if (__sli_create_queue(sli4, q)) {
> +			efc_log_info(sli4, "Create queue failed %d\n", q->id);
> +			goto error;
> +		}
> +		if (is_hdr && q->id & 1) {
> +			efc_log_info(sli4, "bad header RQ_ID %d\n", q->id);
> +			goto error;
> +		} else if (!is_hdr  && (q->id & 1) == 0) {
> +			efc_log_info(sli4, "bad data RQ_ID %d\n", q->id);
> +			goto error;
> +		}
> +	} else {
> +		goto error;
> +	}
> +	if (is_hdr)
> +		q->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
> +	else
> +		q->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
> +	return EFC_SUCCESS;
> +error:
> +	__sli_queue_destroy(sli4, q);
> +	return EFC_FAIL;
> +}
> +
> +int
> +sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs,
> +		    struct sli4_queue *qs[], u32 base_cq_id,
> +		    u32 n_entries, u32 header_buffer_size,
> +		    u32 payload_buffer_size)
> +{
> +	u32 i;
> +	struct efc_dma dma;
> +	struct sli4_rsp_cmn_create_queue_set *rsp = NULL;
> +	void __iomem *db_regaddr = NULL;
> +	u32 num_rqs = num_rq_pairs * 2;
> +
> +	for (i = 0; i < num_rqs; i++) {
> +		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_RQ,
> +				     SLI4_RQE_SIZE, n_entries,
> +				     SLI_PAGE_SIZE)) {
> +			goto error;
> +		}
> +	}
> +
> +	if (sli_cmd_rq_create_v2(sli4, num_rqs, qs, base_cq_id,
> +			       header_buffer_size, payload_buffer_size, &dma)) {
> +		goto error;
> +	}
> +
> +	if (sli_bmbx_command(sli4)) {
> +		efc_log_err(sli4, "bootstrap mailbox write failed RQSet\n");
> +		goto error;
> +	}
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
> +	else
> +		db_regaddr = sli4->reg[0] + SLI4_RQ_DB_REG;
> +
> +	rsp = dma.virt;
> +	if (rsp->hdr.status) {
> +		efc_log_err(sli4, "bad create RQSet status=%#x addl=%#x\n",
> +		       rsp->hdr.status, rsp->hdr.additional_status);
> +		goto error;
> +	} else {
> +		for (i = 0; i < num_rqs; i++) {
> +			qs[i]->id = i + le16_to_cpu(rsp->q_id);
> +			if ((qs[i]->id & 1) == 0)
> +				qs[i]->u.flag.dword |= SLI4_QUEUE_FLAG_HDR;
> +			else
> +				qs[i]->u.flag.dword &= ~SLI4_QUEUE_FLAG_HDR;
> +
> +			qs[i]->db_regaddr = db_regaddr;
> +		}
> +	}
> +
> +	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
> +
> +	return EFC_SUCCESS;
> +
> +error:
> +	for (i = 0; i < num_rqs; i++)
> +		__sli_queue_destroy(sli4, qs[i]);
> +
> +	if (dma.virt)
> +		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
> +				  dma.phys);
> +
> +	return EFC_FAIL;
> +}
> +
> +static int
> +sli_res_sli_config(struct sli4 *sli4, void *buf)
> +{
> +	struct sli4_cmd_sli_config *sli_config = buf;
> +
> +	/* sanity check */
> +	if (!buf || sli_config->hdr.command !=
> +		    MBX_CMD_SLI_CONFIG) {
> +		efc_log_err(sli4, "bad parameter buf=%p cmd=%#x\n", buf,
> +		       buf ? sli_config->hdr.command : -1);
> +		return EFC_FAIL;
> +	}
> +
> +	if (le16_to_cpu(sli_config->hdr.status))
> +		return le16_to_cpu(sli_config->hdr.status);
> +
> +	if (le32_to_cpu(sli_config->dw1_flags) & SLI4_SLICONF_EMB)
> +		return sli_config->payload.embed[4];
> +
> +	efc_log_info(sli4, "external buffers not supported\n");
> +	return EFC_FAIL;
> +}
> +
> +int
> +__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q)
> +{
> +	struct sli4_rsp_cmn_create_queue *res_q = NULL;
> +
> +	if (sli_bmbx_command(sli4)) {
> +		efc_log_crit(sli4, "bootstrap mailbox write fail %s\n",
> +			SLI_QNAME[q->type]);
> +		return EFC_FAIL;
> +	}
> +	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
> +		efc_log_err(sli4, "bad status create %s\n",
> +		       SLI_QNAME[q->type]);
> +		return EFC_FAIL;
> +	}
> +	res_q = (void *)((u8 *)sli4->bmbx.virt +
> +			offsetof(struct sli4_cmd_sli_config, payload));
> +
> +	if (res_q->hdr.status) {
> +		efc_log_err(sli4, "bad create %s status=%#x addl=%#x\n",
> +		       SLI_QNAME[q->type], res_q->hdr.status,
> +			    res_q->hdr.additional_status);
> +		return EFC_FAIL;
> +	}
> +	q->id = le16_to_cpu(res_q->q_id);
> +	switch (q->type) {
> +	case SLI_QTYPE_EQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			q->db_regaddr = sli4->reg[1] + SLI4_IF6_EQ_DB_REG;
> +		else
> +			q->db_regaddr =	sli4->reg[0] + SLI4_EQCQ_DB_REG;
> +		break;
> +	case SLI_QTYPE_CQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			q->db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
> +		else
> +			q->db_regaddr =	sli4->reg[0] + SLI4_EQCQ_DB_REG;
> +		break;
> +	case SLI_QTYPE_MQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			q->db_regaddr = sli4->reg[1] + SLI4_IF6_MQ_DB_REG;
> +		else
> +			q->db_regaddr =	sli4->reg[0] + SLI4_MQ_DB_REG;
> +		break;
> +	case SLI_QTYPE_RQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			q->db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
> +		else
> +			q->db_regaddr =	sli4->reg[0] + SLI4_RQ_DB_REG;
> +		break;
> +	case SLI_QTYPE_WQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			q->db_regaddr = sli4->reg[1] + SLI4_IF6_WQ_DB_REG;
> +		else
> +			q->db_regaddr =	sli4->reg[0] + SLI4_IO_WQ_DB_REG;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype)
> +{
> +	u32 size = 0;
> +
> +	switch (qtype) {
> +	case SLI_QTYPE_EQ:
> +		size = sizeof(u32);
> +		break;
> +	case SLI_QTYPE_CQ:
> +		size = 16;
> +		break;
> +	case SLI_QTYPE_MQ:
> +		size = 256;
> +		break;
> +	case SLI_QTYPE_WQ:
> +		size = sli4->wqe_size;
> +		break;
> +	case SLI_QTYPE_RQ:
> +		size = SLI4_RQE_SIZE;
> +		break;
> +	default:
> +		efc_log_info(sli4, "unknown queue type %d\n", qtype);
> +		return -1;
> +	}
> +	return size;
> +}
> +
> +int
> +sli_queue_alloc(struct sli4 *sli4, u32 qtype,
> +		struct sli4_queue *q, u32 n_entries,
> +		     struct sli4_queue *assoc)
> +{
> +	int size;
> +	u32 align = 0;
> +
> +	/* get queue size */
> +	size = sli_get_queue_entry_size(sli4, qtype);
> +	if (size < 0)
> +		return EFC_FAIL;
> +	align = SLI_PAGE_SIZE;
> +
> +	if (__sli_queue_init(sli4, q, qtype, size, n_entries, align)) {
> +		efc_log_err(sli4, "%s allocation failed\n",
> +		       SLI_QNAME[qtype]);

__sli_queue_init() already logs this information.

> +		return EFC_FAIL;
> +	}
> +
> +	switch (qtype) {
> +	case SLI_QTYPE_EQ:
> +		if (!sli_cmd_common_create_eq(sli4, sli4->bmbx.virt,
> +					     SLI4_BMBX_SIZE, &q->dma)) {
> +			if (__sli_create_queue(sli4, q)) {
> +				efc_log_err(sli4, "create %s failed\n",
> +					    SLI_QNAME[qtype]);

__sli_create_queue logs this already.

> +				goto error;
> +			}
> +		} else {
> +			efc_log_err(sli4, "cannot create %s\n",
> +				    SLI_QNAME[qtype]);
> +			goto error;
> +		}

So I think you could just do

		if (sli_cmd_common_create_eq(...) ||
		    __sli_create_queue(...))
			goto error;

or even do an early exit on success and avoid the 'goto error'

		if (!sli_cmd_common_create_eq(...) &&
		    !__sli_create_queue(...))
			return EFC_SUCCESS;
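
Concretely, for the EQ case that would be something like (untested
sketch, the other queue types would follow the same pattern):

	case SLI_QTYPE_EQ:
		if (sli_cmd_common_create_eq(sli4, sli4->bmbx.virt,
					     SLI4_BMBX_SIZE, &q->dma) ||
		    __sli_create_queue(sli4, q))
			goto error;
		break;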


> +
> +		break;
> +	case SLI_QTYPE_CQ:
> +		if (!sli_cmd_common_create_cq(sli4, sli4->bmbx.virt,
> +					     SLI4_BMBX_SIZE, &q->dma,
> +						assoc ? assoc->id : 0)) {
> +			if (__sli_create_queue(sli4, q)) {
> +				efc_log_err(sli4, "create %s failed\n",
> +					    SLI_QNAME[qtype]);
> +				goto error;
> +			}
> +		} else {
> +			efc_log_err(sli4, "cannot create %s\n",
> +				    SLI_QNAME[qtype]);
> +			goto error;
> +		}

Same as above

> +		break;
> +	case SLI_QTYPE_MQ:
> +		assoc->u.flag.dword |= SLI4_QUEUE_FLAG_MQ;
> +		if (!sli_cmd_common_create_mq_ext(sli4, sli4->bmbx.virt,
> +						  SLI4_BMBX_SIZE, &q->dma,
> +						  assoc->id)) {
> +			if (__sli_create_queue(sli4, q)) {
> +				efc_log_err(sli4, "create %s failed\n",
> +					    SLI_QNAME[qtype]);
> +				goto error;
> +			}
> +		} else {
> +			efc_log_err(sli4, "cannot create %s\n",
> +				    SLI_QNAME[qtype]);
> +			goto error;
> +		}

and here

> +
> +		break;
> +	case SLI_QTYPE_WQ:
> +		if (!sli_cmd_wq_create(sli4, sli4->bmbx.virt,
> +					 SLI4_BMBX_SIZE, &q->dma,
> +					assoc ? assoc->id : 0)) {
> +			if (__sli_create_queue(sli4, q)) {
> +				efc_log_err(sli4, "create %s failed\n",
> +					    SLI_QNAME[qtype]);
> +				goto error;
> +			}
> +		} else {
> +			efc_log_err(sli4, "cannot create %s\n",
> +				    SLI_QNAME[qtype]);
> +			goto error;
> +		}
> +		break;
> +	default:
> +		efc_log_info(sli4, "unknown queue type %d\n", qtype);
> +		goto error;
> +	}
> +
> +	return EFC_SUCCESS;
> +error:
> +	__sli_queue_destroy(sli4, q);
> +	return EFC_FAIL;
> +}
> +
> +static int sli_cmd_cq_set_create(struct sli4 *sli4,
> +				 struct sli4_queue *qs[], u32 num_cqs,
> +				 struct sli4_queue *eqs[],
> +				 struct efc_dma *dma)
> +{
> +	struct sli4_rqst_cmn_create_cq_set_v0 *req = NULL;
> +	uintptr_t addr;
> +	u32 i, offset = 0,  page_bytes = 0, payload_size;
> +	u32 p = 0, page_size = 0, n_cqe = 0, num_pages_cq;
> +	u32 dw5_flags = 0;
> +	u16 dw6w1_flags = 0;
> +	__le32 req_len;
> +
> +	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
> +	switch (n_cqe) {
> +	case 256:
> +	case 512:
> +	case 1024:
> +	case 2048:
> +		page_size = 1;
> +		break;
> +	case 4096:
> +		page_size = 2;
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +
> +	page_bytes = page_size * SLI_PAGE_SIZE;
> +	num_pages_cq = sli_page_count(qs[0]->dma.size, page_bytes);
> +	payload_size = max(CFG_RQST_CMDSZ(cmn_create_cq_set_v0) +
> +			   (SZ_DMAADDR * num_pages_cq * num_cqs),
> +			   sizeof(struct sli4_rsp_cmn_create_queue_set));
> +
> +	dma->size = payload_size;
> +	dma->virt = dma_alloc_coherent(&sli4->pcidev->dev, dma->size,
> +				      &dma->phys, GFP_DMA);
> +	if (!dma->virt)
> +		return EFC_FAIL;
> +
> +	memset(dma->virt, 0, payload_size);
> +
> +	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +				  payload_size, dma);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	req_len = CFG_RQST_PYLD_LEN_VAR(cmn_create_cq_set_v0,
> +					SZ_DMAADDR * num_pages_cq * num_cqs);
> +	sli_cmd_fill_hdr(&req->hdr, CMN_CREATE_CQ_SET, SLI4_SUBSYSTEM_FC,
> +			 CMD_V0, req_len);
> +	req->page_size = page_size;
> +
> +	req->num_pages = cpu_to_le16(num_pages_cq);
> +	switch (num_pages_cq) {
> +	case 1:
> +		dw5_flags |= CQ_CNT_VAL(256);
> +		break;
> +	case 2:
> +		dw5_flags |= CQ_CNT_VAL(512);
> +		break;
> +	case 4:
> +		dw5_flags |= CQ_CNT_VAL(1024);
> +		break;
> +	case 8:
> +		dw5_flags |= CQ_CNT_VAL(LARGE);
> +		dw6w1_flags |= (n_cqe & CREATE_CQSETV0_CQE_COUNT);
> +		break;
> +	default:
> +		efc_log_info(sli4, "num_pages %d not valid\n", num_pages_cq);
> +		return EFC_FAIL;
> +	}
> +
> +	dw5_flags |= CREATE_CQSETV0_EVT;
> +	dw5_flags |= CREATE_CQSETV0_VALID;
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		dw5_flags |= CREATE_CQSETV0_AUTOVALID;
> +
> +	dw6w1_flags &= (~CREATE_CQSETV0_ARM);

The brackets are not needed.

> +
> +	req->dw5_flags = cpu_to_le32(dw5_flags);
> +	req->dw6w1_flags = cpu_to_le16(dw6w1_flags);
> +
> +	req->num_cq_req = cpu_to_le16(num_cqs);
> +
> +	/* Fill page addresses of all the CQs. */
> +	for (i = 0; i < num_cqs; i++) {
> +		req->eq_id[i] = cpu_to_le16(eqs[i]->id);
> +		for (p = 0, addr = qs[i]->dma.phys; p < num_pages_cq;
> +		     p++, addr += page_bytes) {
> +			req->page_phys_addr[offset].low =
> +				cpu_to_le32(lower_32_bits(addr));
> +			req->page_phys_addr[offset].high =
> +				cpu_to_le32(upper_32_bits(addr));
> +			offset++;
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[],
> +		 u32 num_cqs, u32 n_entries, struct sli4_queue *eqs[])
> +{
> +	u32 i;
> +	struct efc_dma dma;
> +	struct sli4_rsp_cmn_create_queue_set *res = NULL;
> +	void __iomem *db_regaddr = NULL;

res and db_regaddr do not need to be pre-initialized here

> +
> +	/* Align the queue DMA memory */
> +	for (i = 0; i < num_cqs; i++) {
> +		if (__sli_queue_init(sli4, qs[i], SLI_QTYPE_CQ,
> +				     SLI4_CQE_BYTES,
> +					  n_entries, SLI_PAGE_SIZE)) {
> +			efc_log_err(sli4, "Queue init failed.\n");

__sli_queue_init() already logs this information.

> +			goto error;
> +		}
> +	}
> +
> +	if (sli_cmd_cq_set_create(sli4, qs, num_cqs, eqs, &dma))
> +		goto error;
> +
> +	if (sli_bmbx_command(sli4)) {
> +		efc_log_crit(sli4, "bootstrap mailbox write fail CQSet\n");
> +		goto error;
> +	}

Since sli_bmbx_command() already logs an error, is this necessary?

> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
> +	else
> +		db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
> +
> +	res = dma.virt;
> +	if (res->hdr.status) {
> +		efc_log_err(sli4, "bad create CQSet status=%#x addl=%#x\n",
> +		       res->hdr.status, res->hdr.additional_status);
> +		goto error;
> +	} else {
> +		/* Check if we got all requested CQs. */
> +		if (le16_to_cpu(res->num_q_allocated) != num_cqs) {
> +			efc_log_crit(sli4, "Requested count CQs doesn't match.\n");
> +			goto error;
> +		}
> +		/* Fill the resp cq ids. */
> +		for (i = 0; i < num_cqs; i++) {
> +			qs[i]->id = le16_to_cpu(res->q_id) + i;
> +			qs[i]->db_regaddr = db_regaddr;
> +		}
> +	}
> +
> +	dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt, dma.phys);
> +
> +	return EFC_SUCCESS;
> +
> +error:
> +	for (i = 0; i < num_cqs; i++)
> +		__sli_queue_destroy(sli4, qs[i]);
> +
> +	if (dma.virt)
> +		dma_free_coherent(&sli4->pcidev->dev, dma.size, dma.virt,
> +				  dma.phys);
> +
> +	return EFC_FAIL;
> +}
> +
> +static int
> +sli_cmd_common_destroy_q(struct sli4 *sli4, u8 opc, u8 subsystem, u16 q_id)
> +{
> +	struct sli4_rqst_cmn_destroy_q *req = NULL;

Not necessary to pre-initialize req here.

> +
> +	/* Payload length must accommodate both request and response */
> +	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +				  SLI_CONFIG_PYLD_LENGTH(cmn_destroy_q), NULL);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&req->hdr, opc, subsystem,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_destroy_q));
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_queue_free(struct sli4 *sli4, struct sli4_queue *q,
> +	       u32 destroy_queues, u32 free_memory)
> +{
> +	int rc = EFC_SUCCESS;
> +	u8 opcode, subsystem;
> +	struct sli4_rsp_hdr *res;
> +
> +	if (!q) {
> +		efc_log_err(sli4, "bad parameter sli4=%p q=%p\n", sli4, q);
> +		return EFC_FAIL;
> +	}
> +
> +	if (!destroy_queues)
> +		goto free_mem;
> +
> +	switch (q->type) {
> +	case SLI_QTYPE_EQ:
> +		opcode = CMN_DESTROY_EQ;
> +		subsystem = SLI4_SUBSYSTEM_COMMON;
> +		break;
> +	case SLI_QTYPE_CQ:
> +		opcode = CMN_DESTROY_CQ;
> +		subsystem = SLI4_SUBSYSTEM_COMMON;
> +		break;
> +	case SLI_QTYPE_MQ:
> +		opcode = CMN_DESTROY_MQ;
> +		subsystem = SLI4_SUBSYSTEM_COMMON;
> +		break;
> +	case SLI_QTYPE_WQ:
> +		opcode = SLI4_OPC_WQ_DESTROY;
> +		subsystem = SLI4_SUBSYSTEM_FC;
> +		break;
> +	case SLI_QTYPE_RQ:
> +		opcode = SLI4_OPC_RQ_DESTROY;
> +		subsystem = SLI4_SUBSYSTEM_FC;
> +		break;
> +	default:
> +		efc_log_info(sli4, "bad queue type %d\n", q->type);
> +		return EFC_FAIL;

		and why not 'goto free_mem'?

I think I would move the destroy queue code into a new function instead
of doing it all in this function...

> +	}
> +
> +	rc = sli_cmd_common_destroy_q(sli4, opcode, subsystem, q->id);
> +	if (!rc)
> +		goto free_mem;
> +
> +	if (sli_bmbx_command(sli4)) {
> +		efc_log_crit(sli4, "bootstrap mailbox fail destroy %s\n",
> +			     SLI_QNAME[q->type]);

...and if this is a separate function it would be possible to use an
early return and avoid this else-if dance; something like the sketch below.
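
Roughly like this, as an untested sketch (the helper name is made up;
I'm assuming sli_cmd_common_destroy_q() returning nonzero means the
command could not be built):

	/* hypothetical helper, name and placement are just for illustration */
	static int
	sli_cmd_destroy_q_exec(struct sli4 *sli4, struct sli4_queue *q,
			       u8 opcode, u8 subsystem)
	{
		struct sli4_rsp_hdr *res;

		if (sli_cmd_common_destroy_q(sli4, opcode, subsystem, q->id))
			return EFC_FAIL;

		if (sli_bmbx_command(sli4)) {
			efc_log_crit(sli4, "bootstrap mailbox fail destroy %s\n",
				     SLI_QNAME[q->type]);
			return EFC_FAIL;
		}

		if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
			efc_log_err(sli4, "bad status %s\n", SLI_QNAME[q->type]);
			return EFC_FAIL;
		}

		res = (void *)((u8 *)sli4->bmbx.virt +
			       offsetof(struct sli4_cmd_sli_config, payload));
		if (res->status) {
			efc_log_err(sli4, "destroy %s st=%#x addl=%#x\n",
				    SLI_QNAME[q->type], res->status,
				    res->additional_status);
			return EFC_FAIL;
		}

		return EFC_SUCCESS;
	}

sli_queue_free() would then just call this and fall through to the
free_mem handling regardless of the result.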

> +	} else if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
> +		efc_log_err(sli4, "bad status %s\n", SLI_QNAME[q->type]);
> +	} else {
> +		res = (void *)((u8 *)sli4->bmbx.virt +
> +				offsetof(struct sli4_cmd_sli_config, payload));
> +
> +		if (res->status) {
> +			efc_log_err(sli4, "destroy %s st=%#x addl=%#x\n",
> +				    SLI_QNAME[q->type],	res->status,
> +				    res->additional_status);
> +		} else {
> +			rc = EFC_SUCCESS;
> +		}
> +	}
> +
> +free_mem:
> +	if (free_memory)
> +		__sli_queue_destroy(sli4, q);
> +
> +	return rc;
> +}
> +
> +int
> +sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
> +{
> +	u32 val = 0;

No need to pre-initialize val;

> +	unsigned long flags = 0;
> +	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
> +	else
> +		val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);

After seeing SLI4_*_DOORBELL in action I would prefer these to be inline
functions with q and arm passed in. All the macro magic could happen
inside there. This would avoid the "a = arm ?..." here; something like
the sketch below.
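
A minimal sketch of what I mean (the helper name is made up, it just
wraps the existing macros):

	/* hypothetical helper, name is just a placeholder */
	static inline void
	sli_ring_eq_doorbell(struct sli4 *sli4, struct sli4_queue *q, bool arm)
	{
		u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
		u32 val;

		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
			val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
		else
			val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);

		writel(val, q->db_regaddr);
	}

sli_queue_eq_arm() would then just take the lock, call this helper and
reset n_posted.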

> +
> +	writel(val, q->db_regaddr);
> +	q->n_posted = 0;
> +	spin_unlock_irqrestore(&q->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
> +{
> +	u32 val = 0;
> +	unsigned long flags = 0;
> +	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +
> +	switch (q->type) {
> +	case SLI_QTYPE_EQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			val = SLI4_IF6_EQ_DOORBELL(q->n_posted, q->id, a);
> +		else
> +			val = SLI4_EQ_DOORBELL(q->n_posted, q->id, a);
> +
> +		writel(val, q->db_regaddr);
> +		q->n_posted = 0;
> +		break;
> +	case SLI_QTYPE_CQ:
> +		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +			val = SLI4_IF6_CQ_DOORBELL(q->n_posted, q->id, a);
> +		else
> +			val = SLI4_CQ_DOORBELL(q->n_posted, q->id, a);
> +
> +		writel(val, q->db_regaddr);
> +		q->n_posted = 0;
> +		break;
> +	default:
> +		efc_log_info(sli4, "should only be used for EQ/CQ, not %s\n",
> +			SLI_QNAME[q->type]);
> +	}
> +
> +	spin_unlock_irqrestore(&q->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_wq_write(struct sli4 *sli4, struct sli4_queue *q,
> +	     u8 *entry)

entry still fits on the line above.

> +{
> +	u8		*qe = q->dma.virt;
> +	u32	qindex;
> +	u32	val = 0;

code indentation is not consistent

> +
> +	qindex = q->index;
> +	qe += q->index * q->size;
> +
> +	if (sli4->perf_wq_id_association)
> +		sli_set_wq_id_association(entry, q->id);
> +
> +	memcpy(qe, entry, q->size);
> +	q->n_posted = 1;
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
> +		/* non-dpp write for iftype = 6 */
> +		val = SLI4_WQ_DOORBELL(q->n_posted, 0, q->id);
> +	else
> +		val = SLI4_WQ_DOORBELL(q->n_posted, q->index, q->id);
> +
> +	writel(val, q->db_regaddr);
> +	q->index = (q->index + q->n_posted) & (q->length - 1);
> +	q->n_posted = 0;

Why does this not need lock protection as in sli_queue_arm()?

> +
> +	return qindex;
> +}
> +
> +int
> +sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
> +	     u8 *entry)

entry also fits on the line above

> +{
> +	u8 *qe = q->dma.virt;
> +	u32 qindex;
> +	u32 val = 0;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +	qindex = q->index;
> +	qe += q->index * q->size;
> +
> +	memcpy(qe, entry, q->size);
> +	q->n_posted = 1;

Shouldn't this be q->n_posted++ ? Or is it guaranteed that n_posted is 0?

> +
> +	val = SLI4_MQ_DOORBELL(q->n_posted, q->id);
> +	writel(val, q->db_regaddr);
> +	q->index = (q->index + q->n_posted) & (q->length - 1);
> +	q->n_posted = 0;
> +	spin_unlock_irqrestore(&q->lock, flags);
> +
> +	return qindex;
> +}
> +
> +int
> +sli_rq_write(struct sli4 *sli4, struct sli4_queue *q,
> +	     u8 *entry)

entry fits on the line above

> +{
> +	u8 *qe = q->dma.virt;
> +	u32 qindex, n_posted;
> +	u32 val = 0;
> +
> +	qindex = q->index;
> +	qe += q->index * q->size;
> +
> +	memcpy(qe, entry, q->size);
> +	q->n_posted = 1;

Again why not q->n_posted++ ?

I am confused why no lock is used here and in the function above. A few
words on the locking design would be highly appreciated.

> +
> +	n_posted = q->n_posted;
> +
> +	/*
> +	 * In RQ-pair, an RQ either contains the FC header
> +	 * (i.e. is_hdr == TRUE) or the payload.
> +	 *
> +	 * Don't ring doorbell for payload RQ
> +	 */
> +	if (!(q->u.flag.dword & SLI4_QUEUE_FLAG_HDR))
> +		goto skip;
> +
> +	/*
> +	 * Some RQ cannot be incremented one entry at a time.
> +	 * Instead, the driver collects a number of entries
> +	 * and updates the RQ in batches.
> +	 */
> +	if (q->u.flag.dword & SLI4_QUEUE_FLAG_RQBATCH) {
> +		if (((q->index + q->n_posted) %
> +		    SLI4_QUEUE_RQ_BATCH)) {
> +			goto skip;
> +		}
> +		n_posted = SLI4_QUEUE_RQ_BATCH;
> +	}
> +
> +	val = SLI4_RQ_DOORBELL(n_posted, q->id);
> +	writel(val, q->db_regaddr);
> +skip:
> +	q->index = (q->index + q->n_posted) & (q->length - 1);
> +	q->n_posted = 0;
> +
> +	return qindex;
> +}
> +
> +int
> +sli_eq_read(struct sli4 *sli4,
> +	    struct sli4_queue *q, u8 *entry)

this fits on one line

> +{
> +	u8 *qe = q->dma.virt;
> +	u32 *qindex = NULL;
> +	unsigned long flags = 0;
> +	u8 clear = false, valid = false;
> +	u16 wflags = 0;
> +
> +	clear = (sli4->if_type == SLI4_INTF_IF_TYPE_6) ?  false : true;
> +
> +	qindex = &q->index;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +
> +	qe += *qindex * q->size;
> +
> +	/* Check if eqe is valid */
> +	wflags = le16_to_cpu(((struct sli4_eqe *)qe)->dw0w0_flags);
> +	valid = ((wflags & SLI4_EQE_VALID) == q->phase);
> +	if (!valid) {
> +		spin_unlock_irqrestore(&q->lock, flags);
> +		return EFC_FAIL;
> +	}
> +
> +	if (valid && clear) {
> +		wflags &= ~SLI4_EQE_VALID;
> +		((struct sli4_eqe *)qe)->dw0w0_flags =
> +						cpu_to_le16(wflags);
> +	}
> +
> +	memcpy(entry, qe, q->size);
> +	*qindex = (*qindex + 1) & (q->length - 1);
> +	q->n_posted++;
> +	/*
> +	 * For prism, the phase value will be used
> +	 * to check the validity of eq/cq entries.
> +	 * The value toggles after a complete sweep
> +	 * through the queue.
> +	 */
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
> +		q->phase ^= (u16)0x1;

		q->phase = !q->phase;

> +
> +	spin_unlock_irqrestore(&q->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cq_read(struct sli4 *sli4,
> +	    struct sli4_queue *q, u8 *entry)

fits on the same line

> +{
> +	u8 *qe = q->dma.virt;
> +	u32 *qindex = NULL;
> +	unsigned long	flags = 0;
> +	u8 clear = false;
> +	u32 dwflags = 0;
> +	bool valid = false, valid_bit_set = false;
> +
> +	clear = (sli4->if_type == SLI4_INTF_IF_TYPE_6) ?  false : true;
> +
> +	qindex = &q->index;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +
> +	qe += *qindex * q->size;
> +
> +	/* Check if cqe is valid */
> +	dwflags = le32_to_cpu(((struct sli4_mcqe *)qe)->dw3_flags);
> +	valid_bit_set = (dwflags & SLI4_MCQE_VALID) != 0;
> +
> +	valid = (valid_bit_set == q->phase);
> +	if (!valid) {

	if (valid_bit_set != q->phase)

> +		spin_unlock_irqrestore(&q->lock, flags);
> +		return EFC_FAIL;
> +	}
> +
> +	if (valid && clear) {

valid is always true at this point

> +		dwflags &= ~SLI4_MCQE_VALID;
> +		((struct sli4_mcqe *)qe)->dw3_flags =
> +					cpu_to_le32(dwflags);
> +	}
> +
> +	memcpy(entry, qe, q->size);
> +	*qindex = (*qindex + 1) & (q->length - 1);
> +	q->n_posted++;
> +	/*
> +	 * For prism, the phase value will be used
> +	 * to check the validity of eq/cq entries.
> +	 * The value toggles after a complete sweep
> +	 * through the queue.
> +	 */
> +
> +	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && *qindex == 0)
> +		q->phase ^= (u16)0x1;

		q->phase = !q->phase;

> +
> +	spin_unlock_irqrestore(&q->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_mq_read(struct sli4 *sli4,
> +	    struct sli4_queue *q, u8 *entry)

fits on one line

> +{
> +	u8 *qe = q->dma.virt;
> +	u32 *qindex = NULL;
> +	unsigned long flags = 0;
> +
> +	qindex = &q->u.r_idx;
> +
> +	spin_lock_irqsave(&q->lock, flags);
> +
> +	qe += *qindex * q->size;
> +
> +	/* Check if mqe is valid */
> +	if (q->index == q->u.r_idx) {
> +		spin_unlock_irqrestore(&q->lock, flags);
> +		return EFC_FAIL;
> +	}
> +
> +	memcpy(entry, qe, q->size);
> +	*qindex = (*qindex + 1) & (q->length - 1);
> +
> +	spin_unlock_irqrestore(&q->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id)
> +{
> +	struct sli4_eqe *eqe = (void *)buf;
> +	int rc = EFC_SUCCESS;
> +	u16 flags = 0;
> +	u16 majorcode;
> +	u16 minorcode;
> +
> +	if (!buf || !cq_id) {
> +		efc_log_err(sli4, "bad parameters sli4=%p buf=%p cq_id=%p\n",
> +		       sli4, buf, cq_id);
> +		return EFC_FAIL;
> +	}
> +
> +	flags = le16_to_cpu(eqe->dw0w0_flags);
> +	majorcode = (flags & SLI4_EQE_MJCODE) >> 1;
> +	minorcode = (flags & SLI4_EQE_MNCODE) >> 4;
> +	switch (majorcode) {
> +	case SLI4_MAJOR_CODE_STANDARD:
> +		*cq_id = le16_to_cpu(eqe->resource_id);
> +		break;
> +	case SLI4_MAJOR_CODE_SENTINEL:
> +		efc_log_info(sli4, "sentinel EQE\n");
> +		rc = SLI4_EQE_STATUS_EQ_FULL;
> +		break;
> +	default:
> +		efc_log_info(sli4, "Unsupported EQE: major %x minor %x\n",
> +			majorcode, minorcode);
> +		rc = EFC_FAIL;
> +	}
> +
> +	return rc;
> +}
> +
> +/* Parse a CQ entry to retrieve the event type and the associated queue */
> +int
> +sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
> +	     enum sli4_qentry *etype, u16 *q_id)
> +{
> +	int rc = EFC_SUCCESS;
> +
> +	if (!cq || !cqe || !etype) {
> +		efc_log_err(sli4, "bad params sli4=%p cq=%p cqe=%p etype=%p q_id=%p\n",
> +		       sli4, cq, cqe, etype, q_id);
> +		return -EINVAL;
> +	}
> +
> +	if (cq->u.flag.dword & SLI4_QUEUE_FLAG_MQ) {
> +		struct sli4_mcqe	*mcqe = (void *)cqe;

whitespace damage

> +
> +		if (le32_to_cpu(mcqe->dw3_flags) & SLI4_MCQE_AE) {
> +			*etype = SLI_QENTRY_ASYNC;
> +		} else {
> +			*etype = SLI_QENTRY_MQ;
> +			rc = sli_cqe_mq(sli4, mcqe);
> +		}
> +		*q_id = -1;
> +	} else {
> +		rc = sli_fc_cqe_parse(sli4, cq, cqe, etype, q_id);
> +	}
> +
> +	return rc;
> +}
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index b360d809f144..13f5a0d8d31c 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -3687,4 +3687,13 @@ struct sli4 {
>  	u32			vpd_length;
>  };
>  
> +static inline void
> +sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)

line is too long

> +{
> +	hdr->opcode = opc;
> +	hdr->subsystem = sub;
> +	hdr->dw3_version = cpu_to_le32(ver);
> +	hdr->request_length = len;
> +}
> +
>  #endif /* !_SLI4_H */
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions
  2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
  2020-04-14 15:23   ` Daniel Wagner
@ 2020-04-15 12:06   ` Hannes Reinecke
  2020-04-23  1:52   ` Roman Bolshakov
  2 siblings, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:06 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This is the initial patch for the new Emulex target mode SCSI
> driver sources.
> 
> This patch:
> - Creates the new Emulex source level directory drivers/scsi/elx
>    and adds the directory to the MAINTAINERS file.
> - Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
>    This library is a SLI-4 interface library.
> - Starts the population of the libefc_sli library with definitions
>    of SLI-4 hardware register offsets and definitions.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-04-12  3:32 ` [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
  2020-04-14 18:02   ` Daniel Wagner
@ 2020-04-15 12:14   ` Hannes Reinecke
  2020-04-15 17:43     ` James Bottomley
  2020-04-22  4:44     ` James Smart
  1 sibling, 2 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:14 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch add SLI-4 Data structures and defines for:
> - Buffer Descriptors (BDEs)
> - Scatter/Gather List elements (SGEs)
> - Queues and their Entry Descriptions for:
>     Event Queues (EQs), Completion Queues (CQs),
>     Receive Queues (RQs), and the Mailbox Queue (MQ).
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Changed anonymous enums to named.
>    SLI defines to spell out _MASK value directly.
>    Change multiple defines to named Enums for consistency.
>    Single Enum to #define.
> ---
>   drivers/scsi/elx/include/efc_common.h |   25 +
>   drivers/scsi/elx/libefc_sli/sli4.h    | 1761 +++++++++++++++++++++++++++++++++
>   2 files changed, 1786 insertions(+)
>   create mode 100644 drivers/scsi/elx/include/efc_common.h
> 
> diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
> new file mode 100644
> index 000000000000..c427f75da4d5
> --- /dev/null
> +++ b/drivers/scsi/elx/include/efc_common.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFC_COMMON_H__
> +#define __EFC_COMMON_H__
> +
> +#include <linux/pci.h>
> +
> +#define EFC_SUCCESS	0
> +#define EFC_FAIL	1
> +
> +struct efc_dma {
> +	void		*virt;
> +	void            *alloc;
> +	dma_addr_t	phys;
> +
> +	size_t		size;
> +	size_t          len;
> +	struct pci_dev	*pdev;
> +};
> +
> +#endif /* __EFC_COMMON_H__ */
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index 1fad48643f94..07eef8df9690 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -249,4 +249,1765 @@ struct sli4_reg {
>   	u32	off;
>   };
>   
> +struct sli4_dmaaddr {
> +	__le32 low;
> +	__le32 high;
> +};
> +
> +/* a 3-word BDE with address 1st 2 words, length last word */
> +struct sli4_bufptr {
> +	struct sli4_dmaaddr addr;
> +	__le32 length;
> +};
> +
> +/* a 3-word BDE with length as first word, address last 2 words */
> +struct sli4_bufptr_len1st {
> +	__le32 length0;
> +	struct sli4_dmaaddr addr;
> +};
> +
> +/* Buffer Descriptor Entry (BDE) */
> +enum sli4_bde_e {
> +	SLI4_BDE_MASK_BUFFER_LEN	= 0x00ffffff,
> +	SLI4_BDE_MASK_BDE_TYPE		= 0xff000000,
> +};
> +
> +struct sli4_bde {
> +	__le32		bde_type_buflen;
> +	union {
> +		struct sli4_dmaaddr data;
> +		struct {
> +			__le32	offset;
> +			__le32	rsvd2;
> +		} imm;
> +		struct sli4_dmaaddr blp;
> +	} u;
> +};
> +
> +/* Buffer Descriptors */
> +enum sli4_bde_type {
> +	BDE_TYPE_SHIFT		= 24,
> +	BDE_TYPE_BDE_64		= 0x00,	/* Generic 64-bit data */
> +	BDE_TYPE_BDE_IMM	= 0x01,	/* Immediate data */
> +	BDE_TYPE_BLP		= 0x40,	/* Buffer List Pointer */
> +};
> +
> +/* Scatter-Gather Entry (SGE) */
> +#define SLI4_SGE_MAX_RESERVED			3
> +
> +enum sli4_sge_type {
> +	/* DW2 */
> +	SLI4_SGE_DATA_OFFSET_MASK	= 0x07FFFFFF,
> +	/*DW2W1*/
> +	SLI4_SGE_TYPE_SHIFT		= 27,
> +	SLI4_SGE_TYPE_MASK		= 0x78000000,
> +	/*SGE Types*/
> +	SLI4_SGE_TYPE_DATA		= 0x00,
> +	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
> +	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
> +	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
> +	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
> +	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
> +	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
> +	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
> +	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
> +
> +	SLI4_SGE_LAST			= (1 << 31),
> +};
> +
> +struct sli4_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		buffer_length;
> +};
> +
> +/* T10 DIF Scatter-Gather Entry (SGE) */
> +struct sli4_dif_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		rsvd12;
> +};
> +
> +/* Data Integrity Seed (DISEED) SGE */
> +enum sli4_diseed_sge_flags {
> +	/* DW2W1 */
> +	DISEED_SGE_HS			= (1 << 2),
> +	DISEED_SGE_WS			= (1 << 3),
> +	DISEED_SGE_IC			= (1 << 4),
> +	DISEED_SGE_ICS			= (1 << 5),
> +	DISEED_SGE_ATRT			= (1 << 6),
> +	DISEED_SGE_AT			= (1 << 7),
> +	DISEED_SGE_FAT			= (1 << 8),
> +	DISEED_SGE_NA			= (1 << 9),
> +	DISEED_SGE_HI			= (1 << 10),
> +
> +	/* DW3W1 */
> +	DISEED_SGE_BS_MASK		= 0x0007,
> +	DISEED_SGE_AI			= (1 << 3),
> +	DISEED_SGE_ME			= (1 << 4),
> +	DISEED_SGE_RE			= (1 << 5),
> +	DISEED_SGE_CE			= (1 << 6),
> +	DISEED_SGE_NR			= (1 << 7),
> +
> +	DISEED_SGE_OP_RX_SHIFT		= 8,
> +	DISEED_SGE_OP_RX_MASK		= 0x0F00,
> +	DISEED_SGE_OP_TX_SHIFT		= 12,
> +	DISEED_SGE_OP_TX_MASK		= 0xF000,
> +};
> +
> +/* Opcode values */
> +enum sli4_diseed_sge_opcodes {
> +	DISEED_SGE_OP_IN_NODIF_OUT_CRC,
> +	DISEED_SGE_OP_IN_CRC_OUT_NODIF,
> +	DISEED_SGE_OP_IN_NODIF_OUT_CSUM,
> +	DISEED_SGE_OP_IN_CSUM_OUT_NODIF,
> +	DISEED_SGE_OP_IN_CRC_OUT_CRC,
> +	DISEED_SGE_OP_IN_CSUM_OUT_CSUM,
> +	DISEED_SGE_OP_IN_CRC_OUT_CSUM,
> +	DISEED_SGE_OP_IN_CSUM_OUT_CRC,
> +	DISEED_SGE_OP_IN_RAW_OUT_RAW,
> +};
> +
> +#define DISEED_SGE_OP_RX_VALUE(stype) \
> +	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_RX_SHIFT)
> +#define DISEED_SGE_OP_TX_VALUE(stype) \
> +	(DISEED_SGE_OP_##stype << DISEED_SGE_OP_TX_SHIFT)
> +
> +struct sli4_diseed_sge {
> +	__le32		ref_tag_cmp;
> +	__le32		ref_tag_repl;
> +	__le16		app_tag_repl;
> +	__le16		dw2w1_flags;
> +	__le16		app_tag_cmp;
> +	__le16		dw3w1_flags;
> +};
> +
> +/* List Segment Pointer Scatter-Gather Entry (SGE) */
> +#define SLI4_LSP_SGE_SEGLEN	0x00ffffff
> +
> +struct sli4_lsp_sge {
> +	__le32		buffer_address_high;
> +	__le32		buffer_address_low;
> +	__le32		dw2_flags;
> +	__le32		dw3_seglen;
> +};
> +
> +enum sli4_eqe_e {
> +	SLI4_EQE_VALID	= 1,
> +	SLI4_EQE_MJCODE	= 0xe,
> +	SLI4_EQE_MNCODE	= 0xfff0,
> +};
> +
> +struct sli4_eqe {
> +	__le16		dw0w0_flags;
> +	__le16		resource_id;
> +};
> +
> +#define SLI4_MAJOR_CODE_STANDARD	0
> +#define SLI4_MAJOR_CODE_SENTINEL	1
> +
> +/* Sentinel EQE indicating the EQ is full */
> +#define SLI4_EQE_STATUS_EQ_FULL		2
> +
> +enum sli4_mcqe_e {
> +	SLI4_MCQE_CONSUMED	= (1 << 27),
> +	SLI4_MCQE_COMPLETED	= (1 << 28),
> +	SLI4_MCQE_AE		= (1 << 30),
> +	SLI4_MCQE_VALID		= (1 << 31),
> +};
> +
> +/* Entry was consumed but not completed */
> +#define SLI4_MCQE_STATUS_NOT_COMPLETED	-2
> +
> +struct sli4_mcqe {
> +	__le16		completion_status;
> +	__le16		extended_status;
> +	__le32		mqe_tag_low;
> +	__le32		mqe_tag_high;
> +	__le32		dw3_flags;
> +};
> +
> +enum sli4_acqe_e {
> +	SLI4_ACQE_AE	= (1 << 6), /* async event - this is an ACQE */
> +	SLI4_ACQE_VAL	= (1 << 7), /* valid - contents of CQE are valid */
> +};
> +
> +struct sli4_acqe {
> +	__le32		event_data[3];
> +	u8		rsvd12;
> +	u8		event_code;
> +	u8		event_type;
> +	u8		ae_val;
> +};
> +
> +enum sli4_acqe_event_code {
> +	SLI4_ACQE_EVENT_CODE_LINK_STATE		= 0x01,
> +	SLI4_ACQE_EVENT_CODE_FIP		= 0x02,
> +	SLI4_ACQE_EVENT_CODE_DCBX		= 0x03,
> +	SLI4_ACQE_EVENT_CODE_ISCSI		= 0x04,
> +	SLI4_ACQE_EVENT_CODE_GRP_5		= 0x05,
> +	SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT	= 0x10,
> +	SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT	= 0x11,
> +	SLI4_ACQE_EVENT_CODE_VF_EVENT		= 0x12,
> +	SLI4_ACQE_EVENT_CODE_MR_EVENT		= 0x13,
> +};
> +
> +enum sli4_qtype {
> +	SLI_QTYPE_EQ,
> +	SLI_QTYPE_CQ,
> +	SLI_QTYPE_MQ,
> +	SLI_QTYPE_WQ,
> +	SLI_QTYPE_RQ,
> +	SLI_QTYPE_MAX,			/* must be last */
> +};
> +
> +#define SLI_USER_MQ_COUNT	1
> +#define SLI_MAX_CQ_SET_COUNT	16
> +#define SLI_MAX_RQ_SET_COUNT	16
> +
> +enum sli4_qentry {
> +	SLI_QENTRY_ASYNC,
> +	SLI_QENTRY_MQ,
> +	SLI_QENTRY_RQ,
> +	SLI_QENTRY_WQ,
> +	SLI_QENTRY_WQ_RELEASE,
> +	SLI_QENTRY_OPT_WRITE_CMD,
> +	SLI_QENTRY_OPT_WRITE_DATA,
> +	SLI_QENTRY_XABT,
> +	SLI_QENTRY_MAX			/* must be last */
> +};
> +
> +enum sli4_queue_flags {
> +	/* CQ has MQ/Async completion */
> +	SLI4_QUEUE_FLAG_MQ	= (1 << 0),
> +
> +	/* RQ for packet headers */
> +	SLI4_QUEUE_FLAG_HDR	= (1 << 1),
> +
> +	/* RQ index increment by 8 */
> +	SLI4_QUEUE_FLAG_RQBATCH	= (1 << 2),
> +};
> +
> +struct sli4_queue {
> +	/* Common to all queue types */
> +	struct efc_dma	dma;
> +	spinlock_t	lock;	/* protect the queue operations */
> +	u32	index;		/* current host entry index */
> +	u16	size;		/* entry size */
> +	u16	length;		/* number of entries */
> +	u16	n_posted;	/* number entries posted */
> +	u16	id;		/* Port assigned xQ_ID */
> +	u16	ulp;		/* ULP assigned to this queue */
> +	void __iomem    *db_regaddr;	/* register address for the doorbell */
> +	u8		type;		/* queue type ie EQ, CQ, ... */

Alignment?
Having a u8 following a pointer is a guaranteed misalignment for the
remaining entries (see the sketch below).
Also, consider using the above qtype enum for it.
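
E.g. something along these lines, purely to illustrate grouping the
sub-word members together (same fields, abbreviated comments):

	void __iomem	*db_regaddr;	/* register address for the doorbell */
	u32	proc_limit;	/* limit CQE processed per iteration */
	u32	posted_limit;	/* CQE/EQE processed before ringing doorbell */
	u32	max_num_processed;
	time_t	max_process_time;
	u16	phase;		/* if_type = 6 entry validity toggle */
	u8	type;		/* queue type ie EQ, CQ, ... */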

> +	u32	proc_limit;	/* limit CQE processed per iteration */
> +	u32	posted_limit;	/* CQE/EQE process before ring doorbel */
> +	u32	max_num_processed;
> +	time_t		max_process_time;
> +	u16	phase;		/* For if_type = 6, this value toggle
> +				 * for each iteration of the queue,
> +				 * a queue entry is valid when a cqe
> +				 * valid bit matches this value
> +				 */
> +
Same here. A u16 between u32 members is not making the cacheline happy.

> +	union {
> +		u32	r_idx;	/* "read" index (MQ only) */
> +		struct {
> +			u32	dword;
> +		} flag;
> +	} u;
> +};
> +
> +/* Generic Command Request header */

And the remainder seems to be all hardware-dependent structures.
Would it be possible to rearrange things so that hardware/SLI-related
structures are kept separate from the software/driver-dependent ones?
Just so that it is clear which structures can be changed and which must not.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands
  2020-04-12  3:32 ` [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
  2020-04-14 19:01   ` Daniel Wagner
@ 2020-04-15 12:22   ` Hannes Reinecke
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:22 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds definitions for SLI-4 mailbox commands
> and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Changed anonymous enums to named.
>    Split gaint enums into multiple enums.
>    Single Enum to #define
>    Added Link Speed defines to accommodate upto 128G
>    SLI defines to spell out _MASK value directly.
>    Changed multiple defines to named Enums for consistency.
> ---
>   drivers/scsi/elx/libefc_sli/sli4.h | 1677 ++++++++++++++++++++++++++++++++++++
>   1 file changed, 1677 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index 07eef8df9690..b360d809f144 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -2010,4 +2010,1681 @@ enum sli4_els_cmd_type {
>   	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
>   };
>   
> +#define SLI_PAGE_SIZE				(1 << 12)	/* 4096 */

SZ_4K?

> +#define SLI_SUB_PAGE_MASK			(SLI_PAGE_SIZE - 1)
> +#define SLI_ROUND_PAGE(b)	(((b) + SLI_SUB_PAGE_MASK) & ~SLI_SUB_PAGE_MASK)
> +
> +#define SLI4_BMBX_TIMEOUT_MSEC			30000
> +#define SLI4_FW_READY_TIMEOUT_MSEC		30000
> +
> +#define SLI4_BMBX_DELAY_US			1000	/* 1 ms */
> +#define SLI4_INIT_PORT_DELAY_US			10000	/* 10 ms */
> +
> +static inline u32
> +sli_page_count(size_t bytes, u32 page_size)
> +{
> +	if (!page_size)
> +		return 0;
> +
> +	return (bytes + (page_size - 1)) >> __ffs(page_size);
> +}
> +
> +/*************************************************************************
> + * SLI-4 mailbox command formats and definitions
> + */
> +
> +struct sli4_mbox_command_header {
> +	u8	resvd0;
> +	u8	command;
> +	__le16	status;	/* Port writes to indicate success/fail */
> +};
> +
> +enum sli4_mbx_cmd_value {
> +	MBX_CMD_CONFIG_LINK	= 0x07,
> +	MBX_CMD_DUMP		= 0x17,
> +	MBX_CMD_DOWN_LINK	= 0x06,
> +	MBX_CMD_INIT_LINK	= 0x05,
> +	MBX_CMD_INIT_VFI	= 0xa3,
> +	MBX_CMD_INIT_VPI	= 0xa4,
> +	MBX_CMD_POST_XRI	= 0xa7,
> +	MBX_CMD_RELEASE_XRI	= 0xac,
> +	MBX_CMD_READ_CONFIG	= 0x0b,
> +	MBX_CMD_READ_STATUS	= 0x0e,
> +	MBX_CMD_READ_NVPARMS	= 0x02,
> +	MBX_CMD_READ_REV	= 0x11,
> +	MBX_CMD_READ_LNK_STAT	= 0x12,
> +	MBX_CMD_READ_SPARM64	= 0x8d,
> +	MBX_CMD_READ_TOPOLOGY	= 0x95,
> +	MBX_CMD_REG_FCFI	= 0xa0,
> +	MBX_CMD_REG_FCFI_MRQ	= 0xaf,
> +	MBX_CMD_REG_RPI		= 0x93,
> +	MBX_CMD_REG_RX_RQ	= 0xa6,
> +	MBX_CMD_REG_VFI		= 0x9f,
> +	MBX_CMD_REG_VPI		= 0x96,
> +	MBX_CMD_RQST_FEATURES	= 0x9d,
> +	MBX_CMD_SLI_CONFIG	= 0x9b,
> +	MBX_CMD_UNREG_FCFI	= 0xa2,
> +	MBX_CMD_UNREG_RPI	= 0x14,
> +	MBX_CMD_UNREG_VFI	= 0xa1,
> +	MBX_CMD_UNREG_VPI	= 0x97,
> +	MBX_CMD_WRITE_NVPARMS	= 0x03,
> +	MBX_CMD_CFG_AUTO_XFER_RDY = 0xAD,
> +};
> +
> +enum sli4_mbx_status {
> +	MBX_STATUS_SUCCESS	= 0x0000,
> +	MBX_STATUS_FAILURE	= 0x0001,
> +	MBX_STATUS_RPI_NOT_REG	= 0x1400,
> +};
> +
> +/* CONFIG_LINK */
> +enum sli4_cmd_config_link_flags {
> +	SLI4_CFG_LINK_BBSCN = 0xf00,
> +	SLI4_CFG_LINK_CSCN  = 0x1000,
> +};
> +
> +struct sli4_cmd_config_link {
> +	struct sli4_mbox_command_header	hdr;
> +	u8		maxbbc;
> +	u8		rsvd5;
> +	u8		rsvd6;
> +	u8		rsvd7;
> +	u8		alpa;
> +	__le16		n_port_id;
> +	u8		rsvd11;
> +	__le32		rsvd12;
> +	__le32		e_d_tov;
> +	__le32		lp_tov;
> +	__le32		r_a_tov;
> +	__le32		r_t_tov;
> +	__le32		al_tov;
> +	__le32		rsvd36;
> +	__le32		bbscn_dword;
> +};
> +
> +#define SLI4_DUMP4_TYPE		0xf
> +
> +#define SLI4_WKI_TAG_SAT_TEM	0x1040
> +
> +struct sli4_cmd_dump4 {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		type_dword;
> +	__le16		wki_selection;
> +	__le16		rsvd10;
> +	__le32		rsvd12;
> +	__le32		returned_byte_cnt;
> +	__le32		resp_data[59];
> +};
> +
> +/* INIT_LINK - initialize the link for a FC port */
> +enum sli4_init_link_flags {
> +	SLI4_INIT_LINK_F_LOOPBACK	= (1 << 0),
> +
> +	SLI4_INIT_LINK_F_P2P_ONLY	= (1 << 1),
> +	SLI4_INIT_LINK_F_FCAL_ONLY	= (2 << 1),
> +	SLI4_INIT_LINK_F_FCAL_FAIL_OVER	= (0 << 1),
> +	SLI4_INIT_LINK_F_P2P_FAIL_OVER	= (1 << 1),
> +
> +	SLI4_INIT_LINK_F_UNFAIR		= (1 << 6),
> +	SLI4_INIT_LINK_F_NO_LIRP	= (1 << 7),
> +	SLI4_INIT_LINK_F_LOOP_VALID_CHK	= (1 << 8),
> +	SLI4_INIT_LINK_F_NO_LISA	= (1 << 9),
> +	SLI4_INIT_LINK_F_FAIL_OVER	= (1 << 10),
> +	SLI4_INIT_LINK_F_FIXED_SPEED	= (1 << 11),
> +	SLI4_INIT_LINK_F_PICK_HI_ALPA	= (1 << 15),
> +
> +};
> +
> +enum sli4_fc_link_speed {
> +	FC_LINK_SPEED_1G = 1,
> +	FC_LINK_SPEED_2G,
> +	FC_LINK_SPEED_AUTO_1_2,
> +	FC_LINK_SPEED_4G,
> +	FC_LINK_SPEED_AUTO_4_1,
> +	FC_LINK_SPEED_AUTO_4_2,
> +	FC_LINK_SPEED_AUTO_4_2_1,
> +	FC_LINK_SPEED_8G,
> +	FC_LINK_SPEED_AUTO_8_1,
> +	FC_LINK_SPEED_AUTO_8_2,
> +	FC_LINK_SPEED_AUTO_8_2_1,
> +	FC_LINK_SPEED_AUTO_8_4,
> +	FC_LINK_SPEED_AUTO_8_4_1,
> +	FC_LINK_SPEED_AUTO_8_4_2,
> +	FC_LINK_SPEED_10G,
> +	FC_LINK_SPEED_16G,
> +	FC_LINK_SPEED_AUTO_16_8_4,
> +	FC_LINK_SPEED_AUTO_16_8,
> +	FC_LINK_SPEED_32G,
> +	FC_LINK_SPEED_AUTO_32_16_8,
> +	FC_LINK_SPEED_AUTO_32_16,
> +	FC_LINK_SPEED_64G,
> +	FC_LINK_SPEED_AUTO_64_32_16,
> +	FC_LINK_SPEED_AUTO_64_32,
> +	FC_LINK_SPEED_128G,
> +	FC_LINK_SPEED_AUTO_128_64_32,
> +	FC_LINK_SPEED_AUTO_128_64,
> +};

Please don't use the generic 'FC_LINK_SPEED_' prefix here; chances are
that it might clash with a generic definition, e.g. from libfc.
SLI4_LINK_SPEED_?

> +
> +struct sli4_cmd_init_link {
> +	struct sli4_mbox_command_header       hdr;
> +	__le32	sel_reset_al_pa_dword;
> +	__le32	flags0;
> +	__le32	link_speed_sel_code;
> +};
> +
> +/* INIT_VFI - initialize the VFI resource */
> +enum sli4_init_vfi_flags {
> +	SLI4_INIT_VFI_FLAG_VP	= 0x1000,
> +	SLI4_INIT_VFI_FLAG_VF	= 0x2000,
> +	SLI4_INIT_VFI_FLAG_VT	= 0x4000,
> +	SLI4_INIT_VFI_FLAG_VR	= 0x8000,
> +
> +	SLI4_INIT_VFI_VFID	= 0x1fff,
> +	SLI4_INIT_VFI_PRI	= 0xe000,
> +
> +	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,
> +};
> +
> +struct sli4_cmd_init_vfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		vfi;
> +	__le16		flags0_word;
> +	__le16		fcfi;
> +	__le16		vpi;
> +	__le32		vf_id_pri_dword;
> +	__le32		hop_cnt_dword;
> +};
> +
> +/* INIT_VPI - initialize the VPI resource */
> +struct sli4_cmd_init_vpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		vpi;
> +	__le16		vfi;
> +};
> +
> +/* POST_XRI - post XRI resources to the SLI Port */
> +enum sli4_post_xri_flags {
> +	SLI4_POST_XRI_COUNT	= 0xfff,
> +	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
> +	SLI4_POST_XRI_FLAG_DL	= 0x2000,
> +	SLI4_POST_XRI_FLAG_DI	= 0x4000,
> +	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
> +};
> +
> +struct sli4_cmd_post_xri {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		xri_base;
> +	__le16		xri_count_flags;
> +};
> +
> +/* RELEASE_XRI - Release XRI resources from the SLI Port */
> +enum sli4_release_xri_flags {
> +	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,
> +	SLI4_RELEASE_XRI_COUNT		= 0x1f,
> +};
> +
> +struct sli4_cmd_release_xri {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		rel_xri_count_word;
> +	__le16		xri_count_word;
> +
> +	struct {
> +		__le16	xri_tag0;
> +		__le16	xri_tag1;
> +	} xri_tbl[62];
> +};
> +
> +/* READ_CONFIG - read SLI port configuration parameters */
> +struct sli4_cmd_read_config {
> +	struct sli4_mbox_command_header	hdr;
> +};
> +
> +enum sli4_read_cfg_resp_flags {
> +	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
> +	SLI4_READ_CFG_RESP_TOPOLOGY	= 0xff000000,	/* DW2 */
> +};
> +
> +enum sli4_read_cfg_topo {
> +	SLI4_READ_CFG_TOPO_FC		= 0x1,	/* FC topology unknown */
> +	SLI4_READ_CFG_TOPO_FC_DA	= 0x2,	/* FC Direct Attach */
> +	SLI4_READ_CFG_TOPO_FC_AL	= 0x3,	/* FC-AL topology */
> +};
> +
> +struct sli4_rsp_read_config {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		ext_dword;
> +	__le32		topology_dword;
> +	__le32		resvd8;
> +	__le16		e_d_tov;
> +	__le16		resvd14;
> +	__le32		resvd16;
> +	__le16		r_a_tov;
> +	__le16		resvd22;
> +	__le32		resvd24;
> +	__le32		resvd28;
> +	__le16		lmt;
> +	__le16		resvd34;
> +	__le32		resvd36;
> +	__le32		resvd40;
> +	__le16		xri_base;
> +	__le16		xri_count;
> +	__le16		rpi_base;
> +	__le16		rpi_count;
> +	__le16		vpi_base;
> +	__le16		vpi_count;
> +	__le16		vfi_base;
> +	__le16		vfi_count;
> +	__le16		resvd60;
> +	__le16		fcfi_count;
> +	__le16		rq_count;
> +	__le16		eq_count;
> +	__le16		wq_count;
> +	__le16		cq_count;
> +	__le32		pad[45];
> +};
> +
> +/* READ_NVPARMS - read SLI port configuration parameters */
> +enum sli4_read_nvparms_flags {
> +	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
> +	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
> +};
> +
> +struct sli4_cmd_read_nvparms {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	__le32		resvd8;
> +	__le32		resvd12;
> +	u8		wwpn[8];
> +	u8		wwnn[8];
> +	__le32		hard_alpa_d_id;
> +};
> +
> +/* WRITE_NVPARMS - write SLI port configuration parameters */
> +struct sli4_cmd_write_nvparms {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		resvd0;
> +	__le32		resvd4;
> +	__le32		resvd8;
> +	__le32		resvd12;
> +	u8		wwpn[8];
> +	u8		wwnn[8];
> +	__le32		hard_alpa_d_id;
> +};
> +
> +/* READ_REV - read the Port revision levels */
> +enum {
> +	SLI4_READ_REV_FLAG_SLI_LEVEL	= 0xf,
> +	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
> +	SLI4_READ_REV_FLAG_CEEV		= 0x60,
> +	SLI4_READ_REV_FLAG_VPD		= 0x2000,
> +
> +	SLI4_READ_REV_AVAILABLE_LENGTH	= 0xffffff,
> +};
> +
> +struct sli4_cmd_read_rev {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16			resvd0;
> +	__le16			flags0_word;
> +	__le32			first_hw_rev;
> +	__le32			second_hw_rev;
> +	__le32			resvd12;
> +	__le32			third_hw_rev;
> +	u8			fc_ph_low;
> +	u8			fc_ph_high;
> +	u8			feature_level_low;
> +	u8			feature_level_high;
> +	__le32			resvd24;
> +	__le32			first_fw_id;
> +	u8			first_fw_name[16];
> +	__le32			second_fw_id;
> +	u8			second_fw_name[16];
> +	__le32			rsvd18[30];
> +	__le32			available_length_dword;
> +	struct sli4_dmaaddr	hostbuf;
> +	__le32			returned_vpd_length;
> +	__le32			actual_vpd_length;
> +};
> +
> +/* READ_SPARM64 - read the Port service parameters */
> +#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
> +#define SLI4_READ_SPARM64_WWNN_OFFSET	(6 * sizeof(u32))
> +
> +struct sli4_cmd_read_sparm64 {
> +	struct sli4_mbox_command_header hdr;
> +	__le32			resvd0;
> +	__le32			resvd4;
> +	struct sli4_bde	bde_64;
> +	__le16			vpi;
> +	__le16			resvd22;
> +	__le16			port_name_start;
> +	__le16			port_name_len;
> +	__le16			node_name_start;
> +	__le16			node_name_len;
> +};
> +
> +/* READ_TOPOLOGY - read the link event information */
> +enum sli4_read_topo_e {
> +	SLI4_READTOPO_ATTEN_TYPE	= 0xff,
> +	SLI4_READTOPO_FLAG_IL		= 0x100,
> +	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
> +
> +	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
> +	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
> +	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
> +	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
> +	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
> +	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
> +
> +	SLI4_READTOPO_SCN_BBSCN		= 0xf,
> +	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
> +
> +	SLI4_READTOPO_R_T_TOV		= 0x1ff,
> +	SLI4_READTOPO_AL_TOV		= 0xf000,
> +
> +	SLI4_READTOPO_PB_FLAG		= 0x80,
> +
> +	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
> +};
> +
> +#define SLI4_MIN_LOOP_MAP_BYTES	128
> +
> +struct sli4_cmd_read_topology {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32			event_tag;
> +	__le32			dw2_attentype;
> +	u8			topology;
> +	u8			lip_type;
> +	u8			lip_al_ps;
> +	u8			al_pa_granted;
> +	struct sli4_bde	bde_loop_map;
> +	__le32			linkdown_state;
> +	__le32			currlink_state;
> +	u8			max_bbc;
> +	u8			init_bbc;
> +	u8			scn_flags;
> +	u8			rsvd39;
> +	__le16			dw10w0_al_rt_tov;
> +	__le16			lp_tov;
> +	u8			acquired_al_pa;
> +	u8			pb_flags;
> +	__le16			specified_al_pa;
> +	__le32			dw12_init_n_port_id;
> +};
> +
> +enum sli4_read_topo_link {
> +	SLI4_READ_TOPOLOGY_LINK_UP	= 0x1,
> +	SLI4_READ_TOPOLOGY_LINK_DOWN,
> +	SLI4_READ_TOPOLOGY_LINK_NO_ALPA,
> +};
> +
> +enum sli4_read_topo {
> +	SLI4_READ_TOPOLOGY_UNKNOWN	= 0x0,
> +	SLI4_READ_TOPOLOGY_NPORT,
> +	SLI4_READ_TOPOLOGY_FC_AL,
> +};
> +
> +enum sli4_read_topo_speed {
> +	SLI4_READ_TOPOLOGY_SPEED_NONE	= 0x00,
> +	SLI4_READ_TOPOLOGY_SPEED_1G	= 0x04,
> +	SLI4_READ_TOPOLOGY_SPEED_2G	= 0x08,
> +	SLI4_READ_TOPOLOGY_SPEED_4G	= 0x10,
> +	SLI4_READ_TOPOLOGY_SPEED_8G	= 0x20,
> +	SLI4_READ_TOPOLOGY_SPEED_10G	= 0x40,
> +	SLI4_READ_TOPOLOGY_SPEED_16G	= 0x80,
> +	SLI4_READ_TOPOLOGY_SPEED_32G	= 0x90,
> +};
> +
> +/* REG_FCFI - activate a FC Forwarder */
> +struct sli4_cmd_reg_fcfi_rq_cfg {
> +	u8	r_ctl_mask;
> +	u8	r_ctl_match;
> +	u8	type_mask;
> +	u8	type_match;
> +};
> +
> +enum sli4_regfcfi_tag {
> +	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
> +	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
> +};
> +
> +#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
> +struct sli4_cmd_reg_fcfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		fcf_index;
> +	__le16		fcfi;
> +	__le16		rqid1;
> +	__le16		rqid0;
> +	__le16		rqid3;
> +	__le16		rqid2;
> +	struct sli4_cmd_reg_fcfi_rq_cfg
> +			rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
> +	__le32		dw8_vlan;
> +};
> +
> +#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
> +#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
> +#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
> +#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
> +
> +enum sli4_reg_fcfi_mrq {
> +	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
> +	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
> +	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
> +
> +	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
> +	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
> +	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
> +};
> +
> +struct sli4_cmd_reg_fcfi_mrq {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		fcf_index;
> +	__le16		fcfi;
> +	__le16		rqid1;
> +	__le16		rqid0;
> +	__le16		rqid3;
> +	__le16		rqid2;
> +	struct sli4_cmd_reg_fcfi_rq_cfg
> +			rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
> +	__le32		dw8_vlan;
> +	__le32		dw9_mrqflags;
> +};
> +
> +struct sli4_cmd_rq_cfg {
> +	__le16	rq_id;
> +	u8	r_ctl_mask;
> +	u8	r_ctl_match;
> +	u8	type_mask;
> +	u8	type_match;
> +};
> +
> +/* REG_RPI - register a Remote Port Indicator */
> +enum sli4_reg_rpi {
> +	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
> +	SLI4_REGRPI_UPD			= 0x1000000,
> +	SLI4_REGRPI_ETOW		= 0x8000000,
> +	SLI4_REGRPI_TERP		= 0x20000000,
> +	SLI4_REGRPI_CI			= 0x80000000,
> +};
> +
> +struct sli4_cmd_reg_rpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16			rpi;
> +	__le16			rsvd2;
> +	__le32			dw2_rportid_flags;
> +	struct sli4_bde	bde_64;
> +	__le16			vpi;
> +	__le16			rsvd26;
> +};
> +
> +#define SLI4_REG_RPI_BUF_LEN		0x70
> +
> +/* REG_VFI - register a Virtual Fabric Indicator */
> +enum sli_reg_vfi {
> +	SLI4_REGVFI_VP			= 0x1000,	/* DW1 */
> +	SLI4_REGVFI_UPD			= 0x2000,
> +
> +	SLI4_REGVFI_LOCAL_N_PORTID	= 0xffffff,	/* DW10 */
> +};
> +
> +struct sli4_cmd_reg_vfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16			vfi;
> +	__le16			dw0w1_flags;
> +	__le16			fcfi;
> +	__le16			vpi;
> +	u8			wwpn[8];
> +	struct sli4_bde	sparm;
> +	__le32			e_d_tov;
> +	__le32			r_a_tov;
> +	__le32			dw10_lportid_flags;
> +};
> +
> +/* REG_VPI - register a Virtual Port Indicator */
> +enum sli4_reg_vpi {
> +	SLI4_REGVPI_LOCAL_N_PORTID	= 0xffffff,
> +	SLI4_REGVPI_UPD			= 0x1000000,
> +};
> +
> +struct sli4_cmd_reg_vpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le32		dw2_lportid_flags;
> +	u8		wwpn[8];
> +	__le32		rsvd12;
> +	__le16		vpi;
> +	__le16		vfi;
> +};
> +
> +/* REQUEST_FEATURES - request / query SLI features */
> +enum sli4_req_features_flags {
> +	SLI4_REQFEAT_QRY	= 0x1,		/* Dw1 */
> +
> +	SLI4_REQFEAT_IAAB	= (1 << 0),	/* DW2 & DW3 */
> +	SLI4_REQFEAT_NPIV	= (1 << 1),
> +	SLI4_REQFEAT_DIF	= (1 << 2),
> +	SLI4_REQFEAT_VF		= (1 << 3),
> +	SLI4_REQFEAT_FCPI	= (1 << 4),
> +	SLI4_REQFEAT_FCPT	= (1 << 5),
> +	SLI4_REQFEAT_FCPC	= (1 << 6),
> +	SLI4_REQFEAT_RSVD	= (1 << 7),
> +	SLI4_REQFEAT_RQD	= (1 << 8),
> +	SLI4_REQFEAT_IAAR	= (1 << 9),
> +	SLI4_REQFEAT_HLM	= (1 << 10),
> +	SLI4_REQFEAT_PERFH	= (1 << 11),
> +	SLI4_REQFEAT_RXSEQ	= (1 << 12),
> +	SLI4_REQFEAT_RXRI	= (1 << 13),
> +	SLI4_REQFEAT_DCL2	= (1 << 14),
> +	SLI4_REQFEAT_RSCO	= (1 << 15),
> +	SLI4_REQFEAT_MRQP	= (1 << 16),
> +};
> +
> +struct sli4_cmd_request_features {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		dw1_qry;
> +	__le32		cmd;
> +	__le32		resp;
> +};
> +
> +/*
> + * SLI_CONFIG - submit a configuration command to Port
> + *
> + * Command is either embedded as part of the payload (embed) or located
> + * in a separate memory buffer (mem)
> + */
> +enum sli4_sli_config {
> +	SLI4_SLICONF_EMB		= 0x1,		/* DW1 */
> +	SLI4_SLICONF_PMDCMD_SHIFT	= 3,
> +	SLI4_SLICONF_PMDCMD_MASK	= 0xF8,
> +	SLI4_SLICONF_PMDCMD_VAL_1	= 8,
> +	SLI4_SLICONF_PMDCNT		= 0xF8,
> +
> +	SLI4_SLICONFIG_PMD_LEN		= 0x00ffffff,
> +};
> +
> +struct sli4_cmd_sli_config {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		dw1_flags;
> +	__le32		payload_len;
> +	__le32		rsvd12[3];
> +	union {
> +		u8 embed[58 * sizeof(u32)];
> +		struct sli4_bufptr mem;
> +	} payload;
> +};
> +
> +/* READ_STATUS - read tx/rx status of a particular port */
> +#define SLI4_READSTATUS_CLEAR_COUNTERS	0x1
> +
> +struct sli4_cmd_read_status {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		dw1_flags;
> +	__le32		rsvd4;
> +	__le32		trans_kbyte_cnt;
> +	__le32		recv_kbyte_cnt;
> +	__le32		trans_frame_cnt;
> +	__le32		recv_frame_cnt;
> +	__le32		trans_seq_cnt;
> +	__le32		recv_seq_cnt;
> +	__le32		tot_exchanges_orig;
> +	__le32		tot_exchanges_resp;
> +	__le32		recv_p_bsy_cnt;
> +	__le32		recv_f_bsy_cnt;
> +	__le32		no_rq_buf_dropped_frames_cnt;
> +	__le32		empty_rq_timeout_cnt;
> +	__le32		no_xri_dropped_frames_cnt;
> +	__le32		empty_xri_pool_cnt;
> +};
> +
> +/* READ_LNK_STAT - read link status of a particular port */
> +enum sli4_read_link_stats_flags {
> +	SLI4_READ_LNKSTAT_REC	= (1 << 0),
> +	SLI4_READ_LNKSTAT_GEC	= (1 << 1),
> +	SLI4_READ_LNKSTAT_W02OF	= (1 << 2),
> +	SLI4_READ_LNKSTAT_W03OF	= (1 << 3),
> +	SLI4_READ_LNKSTAT_W04OF	= (1 << 4),
> +	SLI4_READ_LNKSTAT_W05OF	= (1 << 5),
> +	SLI4_READ_LNKSTAT_W06OF	= (1 << 6),
> +	SLI4_READ_LNKSTAT_W07OF	= (1 << 7),
> +	SLI4_READ_LNKSTAT_W08OF	= (1 << 8),
> +	SLI4_READ_LNKSTAT_W09OF	= (1 << 9),
> +	SLI4_READ_LNKSTAT_W10OF = (1 << 10),
> +	SLI4_READ_LNKSTAT_W11OF = (1 << 11),
> +	SLI4_READ_LNKSTAT_W12OF	= (1 << 12),
> +	SLI4_READ_LNKSTAT_W13OF	= (1 << 13),
> +	SLI4_READ_LNKSTAT_W14OF	= (1 << 14),
> +	SLI4_READ_LNKSTAT_W15OF	= (1 << 15),
> +	SLI4_READ_LNKSTAT_W16OF	= (1 << 16),
> +	SLI4_READ_LNKSTAT_W17OF	= (1 << 17),
> +	SLI4_READ_LNKSTAT_W18OF	= (1 << 18),
> +	SLI4_READ_LNKSTAT_W19OF	= (1 << 19),
> +	SLI4_READ_LNKSTAT_W20OF	= (1 << 20),
> +	SLI4_READ_LNKSTAT_W21OF	= (1 << 21),
> +	SLI4_READ_LNKSTAT_CLRC	= (1 << 30),
> +	SLI4_READ_LNKSTAT_CLOF	= (1 << 31),
> +};
> +
> +struct sli4_cmd_read_link_stats {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32	dw1_flags;
> +	__le32	linkfail_errcnt;
> +	__le32	losssync_errcnt;
> +	__le32	losssignal_errcnt;
> +	__le32	primseq_errcnt;
> +	__le32	inval_txword_errcnt;
> +	__le32	crc_errcnt;
> +	__le32	primseq_eventtimeout_cnt;
> +	__le32	elastic_bufoverrun_errcnt;
> +	__le32	arbit_fc_al_timeout_cnt;
> +	__le32	adv_rx_buftor_to_buf_credit;
> +	__le32	curr_rx_buf_to_buf_credit;
> +	__le32	adv_tx_buf_to_buf_credit;
> +	__le32	curr_tx_buf_to_buf_credit;
> +	__le32	rx_eofa_cnt;
> +	__le32	rx_eofdti_cnt;
> +	__le32	rx_eofni_cnt;
> +	__le32	rx_soff_cnt;
> +	__le32	rx_dropped_no_aer_cnt;
> +	__le32	rx_dropped_no_avail_rpi_rescnt;
> +	__le32	rx_dropped_no_avail_xri_rescnt;
> +};
> +
> +/* Format a WQE with WQ_ID Association performance hint */
> +static inline void
> +sli_set_wq_id_association(void *entry, u16 q_id)
> +{
> +	u32 *wqe = entry;
> +
> +	/*
> +	 * Set Word 10, bit 0 to zero
> +	 * Set Word 10, bits 15:1 to the WQ ID
> +	 */
> +	wqe[10] &= ~0xffff;
> +	wqe[10] |= q_id << 1;
> +}
> +
> +/* UNREG_FCFI - unregister a FCFI */
> +struct sli4_cmd_unreg_fcfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le16		fcfi;
> +	__le16		rsvd6;
> +};
> +
> +/* UNREG_RPI - unregister one or more RPI */
> +enum sli4_unreg_rpi {
> +	UNREG_RPI_DP		= 0x2000,
> +	UNREG_RPI_II_SHIFT	= 14,
> +	UNREG_RPI_II_MASK	= 0xC000,
> +	UNREG_RPI_II_RPI	= 0x0000,
> +	UNREG_RPI_II_VPI	= 0x4000,
> +	UNREG_RPI_II_VFI	= 0x8000,
> +	UNREG_RPI_II_FCFI	= 0xC000,
> +
> +	UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff,
> +};
> +
> +struct sli4_cmd_unreg_rpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le16		index;
> +	__le16		dw1w1_flags;
> +	__le32		dw2_dest_n_portid;
> +};
> +
> +/* UNREG_VFI - unregister one or more VFI */
> +enum sli4_unreg_vfi {
> +	UNREG_VFI_II_SHIFT	= 14,
> +	UNREG_VFI_II_MASK	= 0xC000,
> +	UNREG_VFI_II_VFI	= 0x0000,
> +	UNREG_VFI_II_FCFI	= 0xC000,
> +};
> +
> +struct sli4_cmd_unreg_vfi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le16		index;
> +	__le16		dw2_flags;
> +};
> +
> +enum sli4_unreg_type {
> +	SLI4_UNREG_TYPE_PORT,
> +	SLI4_UNREG_TYPE_DOMAIN,
> +	SLI4_UNREG_TYPE_FCF,
> +	SLI4_UNREG_TYPE_ALL
> +};
> +
> +/* UNREG_VPI - unregister one or more VPI */
> +enum sli4_unreg_vpi {
> +	UNREG_VPI_II_SHIFT	= 14,
> +	UNREG_VPI_II_MASK	= 0xC000,
> +	UNREG_VPI_II_VPI	= 0x0000,
> +	UNREG_VPI_II_VFI	= 0x8000,
> +	UNREG_VPI_II_FCFI	= 0xC000,
> +};
> +
> +struct sli4_cmd_unreg_vpi {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le16		index;
> +	__le16		dw2w0_flags;
> +};
> +
> +/* AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature */
> +struct sli4_cmd_config_auto_xfer_rdy {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le32		max_burst_len;
> +};
> +
> +#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE	0xffff
> +
> +struct sli4_cmd_config_auto_xfer_rdy_hp {
> +	struct sli4_mbox_command_header	hdr;
> +	__le32		rsvd0;
> +	__le32		max_burst_len;
> +	__le32		dw3_esoc_flags;
> +	__le16		block_size;
> +	__le16		rsvd14;
> +};
> +
> +/*************************************************************************
> + * SLI-4 common configuration command formats and definitions
> + */
> +
> +/*
> + * Subsystem values.
> + */
> +enum sli4_subsystem {
> +	SLI4_SUBSYSTEM_COMMON	= 0x01,
> +	SLI4_SUBSYSTEM_LOWLEVEL	= 0x0B,
> +	SLI4_SUBSYSTEM_FC	= 0x0C,
> +	SLI4_SUBSYSTEM_DMTF	= 0x11,
> +};
> +
> +#define	SLI4_OPC_LOWLEVEL_SET_WATCHDOG		0X36
> +
> +/*
> + * Common opcode (OPC) values.
> + */
> +enum sli4_cmn_opcode {
> +	CMN_FUNCTION_RESET	= 0x3d,
> +	CMN_CREATE_CQ		= 0x0c,
> +	CMN_CREATE_CQ_SET	= 0x1d,
> +	CMN_DESTROY_CQ		= 0x36,
> +	CMN_MODIFY_EQ_DELAY	= 0x29,
> +	CMN_CREATE_EQ		= 0x0d,
> +	CMN_DESTROY_EQ		= 0x37,
> +	CMN_CREATE_MQ_EXT	= 0x5a,
> +	CMN_DESTROY_MQ		= 0x35,
> +	CMN_GET_CNTL_ATTRIBUTES	= 0x20,
> +	CMN_NOP			= 0x21,
> +	CMN_GET_RSC_EXTENT_INFO = 0x9a,
> +	CMN_GET_SLI4_PARAMS	= 0xb5,
> +	CMN_QUERY_FW_CONFIG	= 0x3a,
> +	CMN_GET_PORT_NAME	= 0x4d,
> +
> +	CMN_WRITE_FLASHROM	= 0x07,
> +	/* TRANSCEIVER Data */
> +	CMN_READ_TRANS_DATA	= 0x49,
> +	CMN_GET_CNTL_ADDL_ATTRS = 0x79,
> +	CMN_GET_FUNCTION_CFG	= 0xa0,
> +	CMN_GET_PROFILE_CFG	= 0xa4,
> +	CMN_SET_PROFILE_CFG	= 0xa5,
> +	CMN_GET_PROFILE_LIST	= 0xa6,
> +	CMN_GET_ACTIVE_PROFILE	= 0xa7,
> +	CMN_SET_ACTIVE_PROFILE	= 0xa8,
> +	CMN_READ_OBJECT		= 0xab,
> +	CMN_WRITE_OBJECT	= 0xac,
> +	CMN_DELETE_OBJECT	= 0xae,
> +	CMN_READ_OBJECT_LIST	= 0xad,
> +	CMN_SET_DUMP_LOCATION	= 0xb8,
> +	CMN_SET_FEATURES	= 0xbf,
> +	CMN_GET_RECFG_LINK_INFO = 0xc9,
> +	CMN_SET_RECNG_LINK_ID	= 0xca,
> +};
> +

Same here; please prefix with SLI4_

> +/* DMTF opcode (OPC) values */
> +#define DMTF_EXEC_CLP_CMD 0x01
> +
> +/*
> + * COMMON_FUNCTION_RESET
> + *
> + * Resets the Port, returning it to a power-on state. This configuration
> + * command does not have a payload and should set/expect the lengths to
> + * be zero.
> + */
> +struct sli4_rqst_cmn_function_reset {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_function_reset {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +
> +/*
> + * COMMON_GET_CNTL_ATTRIBUTES
> + *
> + * Query for information about the SLI Port
> + */
> +enum sli4_cntrl_attr_flags {
> +	SLI4_CNTL_ATTR_PORTNUM	= 0x3f,
> +	SLI4_CNTL_ATTR_PORTTYPE	= 0xc0,
> +};
> +
> +struct sli4_rsp_cmn_get_cntl_attributes {
> +	struct sli4_rsp_hdr	hdr;
> +	u8			version_str[32];
> +	u8			manufacturer_name[32];
> +	__le32			supported_modes;
> +	u8			eprom_version_lo;
> +	u8			eprom_version_hi;
> +	__le16			rsvd17;
> +	__le32			mbx_ds_version;
> +	__le32			ep_fw_ds_version;
> +	u8			ncsi_version_str[12];
> +	__le32			def_extended_timeout;
> +	u8			model_number[32];
> +	u8			description[64];
> +	u8			serial_number[32];
> +	u8			ip_version_str[32];
> +	u8			fw_version_str[32];
> +	u8			bios_version_str[32];
> +	u8			redboot_version_str[32];
> +	u8			driver_version_str[32];
> +	u8			fw_on_flash_version_str[32];
> +	__le32			functionalities_supported;
> +	__le16			max_cdb_length;
> +	u8			asic_revision;
> +	u8			generational_guid0;
> +	__le32			generational_guid1_12[3];
> +	__le16			generational_guid13_14;
> +	u8			generational_guid15;
> +	u8			hba_port_count;
> +	__le16			default_link_down_timeout;
> +	u8			iscsi_version_min_max;
> +	u8			multifunctional_device;
> +	u8			cache_valid;
> +	u8			hba_status;
> +	u8			max_domains_supported;
> +	u8			port_num_type_flags;
> +	__le32			firmware_post_status;
> +	__le32			hba_mtu;
> +	u8			iscsi_features;
> +	u8			rsvd121[3];
> +	__le16			pci_vendor_id;
> +	__le16			pci_device_id;
> +	__le16			pci_sub_vendor_id;
> +	__le16			pci_sub_system_id;
> +	u8			pci_bus_number;
> +	u8			pci_device_number;
> +	u8			pci_function_number;
> +	u8			interface_type;
> +	__le64			unique_identifier;
> +	u8			number_of_netfilters;
> +	u8			rsvd122[3];
> +};
> +
> +/*
> + * COMMON_GET_CNTL_ADDL_ATTRIBUTES
> + *
> + * This command queries the controller information from the Flash ROM.
> + */
> +struct sli4_rqst_cmn_get_cntl_addl_attributes {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_cntl_addl_attributes {
> +	struct sli4_rsp_hdr	hdr;
> +	__le16		ipl_file_number;
> +	u8		ipl_file_version;
> +	u8		rsvd4;
> +	u8		on_die_temperature;
> +	u8		rsvd5[3];
> +	__le32		driver_advanced_features_supported;
> +	__le32		rsvd7[4];
> +	char		universal_bios_version[32];
> +	char		x86_bios_version[32];
> +	char		efi_bios_version[32];
> +	char		fcode_version[32];
> +	char		uefi_bios_version[32];
> +	char		uefi_nic_version[32];
> +	char		uefi_fcode_version[32];
> +	char		uefi_iscsi_version[32];
> +	char		iscsi_x86_bios_version[32];
> +	char		pxe_x86_bios_version[32];
> +	u8		default_wwpn[8];
> +	u8		ext_phy_version[32];
> +	u8		fc_universal_bios_version[32];
> +	u8		fc_x86_bios_version[32];
> +	u8		fc_efi_bios_version[32];
> +	u8		fc_fcode_version[32];
> +	u8		ext_phy_crc_label[8];
> +	u8		ipl_file_name[16];
> +	u8		rsvd139[72];
> +};
> +
> +/*
> + * COMMON_NOP
> + *
> + * This command does not do anything; it only returns
> + * the payload in the completion.
> + */
> +struct sli4_rqst_cmn_nop {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			context[2];
> +};
> +
> +struct sli4_rsp_cmn_nop {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			context[2];
> +};
> +
> +struct sli4_rqst_cmn_get_resource_extent_info {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16	resource_type;
> +	__le16	rsvd16;
> +};
> +
> +enum sli4_rsc_type {
> +	SLI4_RSC_TYPE_VFI	= 0x20,
> +	SLI4_RSC_TYPE_VPI	= 0x21,
> +	SLI4_RSC_TYPE_RPI	= 0x22,
> +	SLI4_RSC_TYPE_XRI	= 0x23,
> +};
> +
> +struct sli4_rsp_cmn_get_resource_extent_info {
> +	struct sli4_rsp_hdr	hdr;
> +	__le16			resource_extent_count;
> +	__le16			resource_extent_size;
> +};
> +
> +#define SLI4_128BYTE_WQE_SUPPORT	0x02
> +
> +#define GET_Q_CNT_METHOD(m) \
> +	(((m) & RSP_GET_PARAM_Q_CNT_MTHD_MASK) >> RSP_GET_PARAM_Q_CNT_MTHD_SHFT)
> +#define GET_Q_CREATE_VERSION(v) \
> +	(((v) & RSP_GET_PARAM_QV_MASK) >> RSP_GET_PARAM_QV_SHIFT)
> +
> +enum sli4_rsp_get_params_e {
> +	/*GENERIC*/
> +	RSP_GET_PARAM_Q_CNT_MTHD_SHFT	= 24,
> +	RSP_GET_PARAM_Q_CNT_MTHD_MASK	= (0xF << 24),
> +	RSP_GET_PARAM_QV_SHIFT		= 14,
> +	RSP_GET_PARAM_QV_MASK		= (3 << 14),
> +
> +	/* DW4 */
> +	RSP_GET_PARAM_PROTO_TYPE_MASK	= 0xFF,
> +	/* DW5 */
> +	RSP_GET_PARAM_FT		= (1 << 0),
> +	RSP_GET_PARAM_SLI_REV_MASK	= (0xF << 4),
> +	RSP_GET_PARAM_SLI_FAM_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_IF_TYPE_MASK	= (0xF << 12),
> +	RSP_GET_PARAM_SLI_HINT1_MASK	= (0xFF << 16),
> +	RSP_GET_PARAM_SLI_HINT2_MASK	= (0x1F << 24),
> +	/* DW6 */
> +	RSP_GET_PARAM_EQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_EQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_EQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW8 */
> +	RSP_GET_PARAM_CQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_CQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_CQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW10 */
> +	RSP_GET_PARAM_MQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_MQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW12 */
> +	RSP_GET_PARAM_WQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_WQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_WQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW14 */
> +	RSP_GET_PARAM_RQ_PAGE_CNT_MASK	= (0xF << 0),
> +	RSP_GET_PARAM_RQE_SZS_MASK	= (0xF << 8),
> +	RSP_GET_PARAM_RQ_PAGE_SZS_MASK	= (0xFF << 16),
> +	/* DW15W1*/
> +	RSP_GET_PARAM_RQ_DB_WINDOW_MASK	= 0xF000,
> +	/* DW16 */
> +	RSP_GET_PARAM_FC		= (1 << 0),
> +	RSP_GET_PARAM_EXT		= (1 << 1),
> +	RSP_GET_PARAM_HDRR		= (1 << 2),
> +	RSP_GET_PARAM_SGLR		= (1 << 3),
> +	RSP_GET_PARAM_FBRR		= (1 << 4),
> +	RSP_GET_PARAM_AREG		= (1 << 5),
> +	RSP_GET_PARAM_TGT		= (1 << 6),
> +	RSP_GET_PARAM_TERP		= (1 << 7),
> +	RSP_GET_PARAM_ASSI		= (1 << 8),
> +	RSP_GET_PARAM_WCHN		= (1 << 9),
> +	RSP_GET_PARAM_TCCA		= (1 << 10),
> +	RSP_GET_PARAM_TRTY		= (1 << 11),
> +	RSP_GET_PARAM_TRIR		= (1 << 12),
> +	RSP_GET_PARAM_PHOFF		= (1 << 13),
> +	RSP_GET_PARAM_PHON		= (1 << 14),
> +	RSP_GET_PARAM_PHWQ		= (1 << 15),
> +	RSP_GET_PARAM_BOUND_4GA		= (1 << 16),
> +	RSP_GET_PARAM_RXC		= (1 << 17),
> +	RSP_GET_PARAM_HLM		= (1 << 18),
> +	RSP_GET_PARAM_IPR		= (1 << 19),
> +	RSP_GET_PARAM_RXRI		= (1 << 20),
> +	RSP_GET_PARAM_SGLC		= (1 << 21),
> +	RSP_GET_PARAM_TIMM		= (1 << 22),
> +	RSP_GET_PARAM_TSMM		= (1 << 23),
> +	RSP_GET_PARAM_OAS		= (1 << 25),
> +	RSP_GET_PARAM_LC		= (1 << 26),
> +	RSP_GET_PARAM_AGXF		= (1 << 27),
> +	RSP_GET_PARAM_LOOPBACK_MASK	= (0xF << 28),
> +	/* DW18 */
> +	RSP_GET_PARAM_SGL_PAGE_CNT_MASK = (0xF << 0),
> +	RSP_GET_PARAM_SGL_PAGE_SZS_MASK = (0xFF << 8),
> +	RSP_GET_PARAM_SGL_PP_ALIGN_MASK = (0xFF << 16),
> +};
> +

Same here.
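
As context for how these masks get used, here is a minimal decode sketch
against DW6 of the COMMON_GET_SLI4_PARAMS response quoted below. The helper
name is made up for illustration; only the masks and the struct field come
from the patch:

	/* Sketch only: pull the EQ-related fields out of DW6 of the
	 * COMMON_GET_SLI4_PARAMS response.
	 */
	static void sketch_decode_eq_params(struct sli4_rsp_cmn_get_sli4_params *rsp)
	{
		u32 dw6 = le32_to_cpu(rsp->dw6_eq_page_cnt);
		u32 eq_page_cnt = dw6 & RSP_GET_PARAM_EQ_PAGE_CNT_MASK;	   /* bits 3:0 */
		u32 eqe_sizes = (dw6 & RSP_GET_PARAM_EQE_SZS_MASK) >> 8;	   /* bits 11:8 */
		u32 eq_page_sizes = (dw6 & RSP_GET_PARAM_EQ_PAGE_SZS_MASK) >> 16; /* bits 23:16 */
	}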

> +struct sli4_rqst_cmn_get_sli4_params {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_sli4_params {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32		dw4_protocol_type;
> +	__le32		dw5_sli;
> +	__le32		dw6_eq_page_cnt;
> +	__le16		eqe_count_mask;
> +	__le16		rsvd26;
> +	__le32		dw8_cq_page_cnt;
> +	__le16		cqe_count_mask;
> +	__le16		rsvd34;
> +	__le32		dw10_mq_page_cnt;
> +	__le16		mqe_count_mask;
> +	__le16		rsvd42;
> +	__le32		dw12_wq_page_cnt;
> +	__le16		wqe_count_mask;
> +	__le16		rsvd50;
> +	__le32		dw14_rq_page_cnt;
> +	__le16		rqe_count_mask;
> +	__le16		dw15w1_rq_db_window;
> +	__le32		dw16_loopback_scope;
> +	__le32		sge_supported_length;
> +	__le32		dw18_sgl_page_cnt;
> +	__le16		min_rq_buffer_size;
> +	__le16		rsvd75;
> +	__le32		max_rq_buffer_size;
> +	__le16		physical_xri_max;
> +	__le16		physical_rpi_max;
> +	__le16		physical_vpi_max;
> +	__le16		physical_vfi_max;
> +	__le32		rsvd88;
> +	__le16		frag_num_field_offset;
> +	__le16		frag_num_field_size;
> +	__le16		sgl_index_field_offset;
> +	__le16		sgl_index_field_size;
> +	__le32		chain_sge_initial_value_lo;
> +	__le32		chain_sge_initial_value_hi;
> +};
> +
> +/*
> + * COMMON_QUERY_FW_CONFIG
> + *
> + * This command retrieves firmware configuration parameters and adapter
> + * resources available to the driver.
> + */
> +struct sli4_rqst_cmn_query_fw_config {
> +	struct sli4_rqst_hdr	hdr;
> +};
> +
> +struct sli4_rsp_cmn_query_fw_config {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32		config_number;
> +	__le32		asic_rev;
> +	__le32		physical_port;
> +	__le32		function_mode;
> +	__le32		ulp0_mode;
> +	__le32		ulp0_nic_wqid_base;
> +	__le32		ulp0_nic_wq_total; /* DW10 */
> +	__le32		ulp0_toe_wqid_base;
> +	__le32		ulp0_toe_wq_total;
> +	__le32		ulp0_toe_rqid_base;
> +	__le32		ulp0_toe_rq_total;
> +	__le32		ulp0_toe_defrqid_base;
> +	__le32		ulp0_toe_defrq_total;
> +	__le32		ulp0_lro_rqid_base;
> +	__le32		ulp0_lro_rq_total;
> +	__le32		ulp0_iscsi_icd_base;
> +	__le32		ulp0_iscsi_icd_total; /* DW20 */
> +	__le32		ulp1_mode;
> +	__le32		ulp1_nic_wqid_base;
> +	__le32		ulp1_nic_wq_total;
> +	__le32		ulp1_toe_wqid_base;
> +	__le32		ulp1_toe_wq_total;
> +	__le32		ulp1_toe_rqid_base;
> +	__le32		ulp1_toe_rq_total;
> +	__le32		ulp1_toe_defrqid_base;
> +	__le32		ulp1_toe_defrq_total;
> +	__le32		ulp1_lro_rqid_base; /* DW30 */
> +	__le32		ulp1_lro_rq_total;
> +	__le32		ulp1_iscsi_icd_base;
> +	__le32		ulp1_iscsi_icd_total;
> +	__le32		function_capabilities;
> +	__le32		ulp0_cq_base;
> +	__le32		ulp0_cq_total;
> +	__le32		ulp0_eq_base;
> +	__le32		ulp0_eq_total;
> +	__le32		ulp0_iscsi_chain_icd_base;
> +	__le32		ulp0_iscsi_chain_icd_total; /* DW40 */
> +	__le32		ulp1_iscsi_chain_icd_base;
> +	__le32		ulp1_iscsi_chain_icd_total;
> +};
> +
> +/* Port Types */
> +enum sli4_port_types {
> +	PORT_TYPE_ETH	= 0,
> +	PORT_TYPE_FC	= 1,
> +};
> +

And here.

> +struct sli4_rqst_cmn_get_port_name {
> +	struct sli4_rqst_hdr	hdr;
> +	u8      port_type;
> +	u8      rsvd4[3];
> +};
> +
> +struct sli4_rsp_cmn_get_port_name {
> +	struct sli4_rsp_hdr	hdr;
> +	char		port_name[4];
> +};
> +
> +struct sli4_rqst_cmn_write_flashrom {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32		flash_rom_access_opcode;
> +	__le32		flash_rom_access_operation_type;
> +	__le32		data_buffer_size;
> +	__le32		offset;
> +	u8		data_buffer[4];
> +};
> +
> +/*
> + * COMMON_READ_TRANSCEIVER_DATA
> + *
> + * This command reads SFF transceiver data (format is defined
> + * by the SFF-8472 specification).
> + */
> +struct sli4_rqst_cmn_read_transceiver_data {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			page_number;
> +	__le32			port;
> +};
> +
> +struct sli4_rsp_cmn_read_transceiver_data {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			page_number;
> +	__le32			port;
> +	u8			page_data[128];
> +	u8			page_data_2[128];
> +};
> +
> +#define SLI4_REQ_DESIRE_READLEN		0xFFFFFF
> +
> +struct sli4_rqst_cmn_read_object {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			desired_read_length_dword;
> +	__le32			read_offset;
> +	u8			object_name[104];
> +	__le32			host_buffer_descriptor_count;
> +	struct sli4_bde	host_buffer_descriptor[0];
> +};
> +
> +#define RSP_COM_READ_OBJ_EOF		0x80000000
> +
> +struct sli4_rsp_cmn_read_object {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			actual_read_length;
> +	__le32			eof_dword;
> +};
> +
> +enum sli4_rqst_write_object_flags {
> +	SLI4_RQ_DES_WRITE_LEN		= 0xFFFFFF,
> +	SLI4_RQ_DES_WRITE_LEN_NOC	= 0x40000000,
> +	SLI4_RQ_DES_WRITE_LEN_EOF	= 0x80000000,
> +};
> +
> +struct sli4_rqst_cmn_write_object {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			desired_write_len_dword;
> +	__le32			write_offset;
> +	u8			object_name[104];
> +	__le32			host_buffer_descriptor_count;
> +	struct sli4_bde	host_buffer_descriptor[0];
> +};
> +
> +#define	RSP_CHANGE_STATUS		0xFF
> +
> +struct sli4_rsp_cmn_write_object {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			actual_write_length;
> +	__le32			change_status_dword;
> +};
> +
> +struct sli4_rqst_cmn_delete_object {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			rsvd4;
> +	__le32			rsvd5;
> +	u8			object_name[104];
> +};
> +
> +#define SLI4_RQ_OBJ_LIST_READ_LEN	0xFFFFFF
> +
> +struct sli4_rqst_cmn_read_object_list {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			desired_read_length_dword;
> +	__le32			read_offset;
> +	u8			object_name[104];
> +	__le32			host_buffer_descriptor_count;
> +	struct sli4_bde	host_buffer_descriptor[0];
> +};
> +
> +enum sli4_rqst_set_dump_flags {
> +	SLI4_CMN_SET_DUMP_BUFFER_LEN	= 0xFFFFFF,
> +	SLI4_CMN_SET_DUMP_FDB		= 0x20000000,
> +	SLI4_CMN_SET_DUMP_BLP		= 0x40000000,
> +	SLI4_CMN_SET_DUMP_QRY		= 0x80000000,
> +};
> +
> +struct sli4_rqst_cmn_set_dump_location {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			buffer_length_dword;
> +	__le32			buf_addr_low;
> +	__le32			buf_addr_high;
> +};
> +
> +struct sli4_rsp_cmn_set_dump_location {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			buffer_length_dword;
> +};
> +
> +enum sli4_dump_level {
> +	SLI4_DUMP_LEVEL_NONE,
> +	SLI4_CHIP_LEVEL_DUMP,
> +	SLI4_FUNC_DESC_DUMP,
> +};
> +
> +enum sli4_dump_state {
> +	SLI4_DUMP_STATE_NONE,
> +	SLI4_CHIP_DUMP_STATE_VALID,
> +	SLI4_FUNC_DUMP_STATE_VALID,
> +};
> +
> +enum sli4_dump_status {
> +	SLI4_DUMP_READY_STATUS_NOT_READY,
> +	SLI4_DUMP_READY_STATUS_DD_PRESENT,
> +	SLI4_DUMP_READY_STATUS_FDB_PRESENT,
> +	SLI4_DUMP_READY_STATUS_SKIP_DUMP,
> +	SLI4_DUMP_READY_STATUS_FAILED = -1,
> +};
> +
> +enum sli4_set_features {
> +	SLI4_SET_FEATURES_DIF_SEED			= 0x01,
> +	SLI4_SET_FEATURES_XRI_TIMER			= 0x03,
> +	SLI4_SET_FEATURES_MAX_PCIE_SPEED		= 0x04,
> +	SLI4_SET_FEATURES_FCTL_CHECK			= 0x05,
> +	SLI4_SET_FEATURES_FEC				= 0x06,
> +	SLI4_SET_FEATURES_PCIE_RECV_DETECT		= 0x07,
> +	SLI4_SET_FEATURES_DIF_MEMORY_MODE		= 0x08,
> +	SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	= 0x09,
> +	SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		= 0x0A,
> +	SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI		= 0x0C,
> +	SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	= 0x0D,
> +	SLI4_SET_FEATURES_SET_FTD_XFER_HINT		= 0x0F,
> +	SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		= 0x11,
> +};
> +
> +struct sli4_rqst_cmn_set_features {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			feature;
> +	__le32			param_len;
> +	__le32			params[8];
> +};
> +
> +struct sli4_rqst_cmn_set_features_dif_seed {
> +	__le16		seed;
> +	__le16		rsvd16;
> +};
> +
> +enum sli4_rqst_set_mrq_features {
> +	SLI4_RQ_MULTIRQ_ISR		 = 0x1,
> +	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
> +
> +	SLI4_RQ_MULTIRQ_NUM_RQS		 = 0xFF,
> +	SLI4_RQ_MULTIRQ_RQ_SELECT	 = 0xF00,
> +};
> +
> +struct sli4_rqst_cmn_set_features_multirq {
> +	__le32		auto_gen_xfer_dword;
> +	__le32		num_rqs_dword;
> +};
> +
> +enum sli4_rqst_health_check_flags {
> +	SLI4_RQ_HEALTH_CHECK_ENABLE	= 0x1,
> +	SLI4_RQ_HEALTH_CHECK_QUERY	= 0x2,
> +};
> +
> +struct sli4_rqst_cmn_set_features_health_check {
> +	__le32		health_check_dword;
> +};
> +
> +struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint {
> +	__le32		fdt_xfer_hint;
> +};
> +
> +struct sli4_rqst_dmtf_exec_clp_cmd {
> +	struct sli4_rqst_hdr	hdr;
> +	__le32			cmd_buf_length;
> +	__le32			resp_buf_length;
> +	__le32			cmd_buf_addr_low;
> +	__le32			cmd_buf_addr_high;
> +	__le32			resp_buf_addr_low;
> +	__le32			resp_buf_addr_high;
> +};
> +
> +struct sli4_rsp_dmtf_exec_clp_cmd {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			rsvd4;
> +	__le32			resp_length;
> +	__le32			rsvd6;
> +	__le32			rsvd7;
> +	__le32			rsvd8;
> +	__le32			rsvd9;
> +	__le32			clp_status;
> +	__le32			clp_detailed_status;
> +};
> +
> +#define SLI4_PROTOCOL_FC		0x10
> +#define SLI4_PROTOCOL_DEFAULT		0xff
> +
> +struct sli4_rspource_descriptor_v1 {
> +	u8		descriptor_type;
> +	u8		descriptor_length;
> +	__le16		rsvd16;
> +	__le32		type_specific[0];
> +};
> +
> +enum sli4_pcie_desc_flags {
> +	SLI4_PCIE_DESC_IMM		= 0x4000,
> +	SLI4_PCIE_DESC_NOSV		= 0x8000,
> +
> +	SLI4_PCIE_DESC_PF_NO		= 0x3FF0000,
> +
> +	SLI4_PCIE_DESC_MISSN_ROLE	= 0xFF,
> +	SLI4_PCIE_DESC_PCHG		= 0x8000000,
> +	SLI4_PCIE_DESC_SCHG		= 0x10000000,
> +	SLI4_PCIE_DESC_XCHG		= 0x20000000,
> +	SLI4_PCIE_DESC_XROM		= 0xC0000000
> +};
> +
> +struct sli4_pcie_resource_descriptor_v1 {
> +	u8		descriptor_type;
> +	u8		descriptor_length;
> +	__le16		imm_nosv_dword;
> +	__le32		pf_number_dword;
> +	__le32		rsvd3;
> +	u8		sriov_state;
> +	u8		pf_state;
> +	u8		pf_type;
> +	u8		rsvd4;
> +	__le16		number_of_vfs;
> +	__le16		rsvd5;
> +	__le32		mission_roles_dword;
> +	__le32		rsvd7[16];
> +};
> +
> +struct sli4_rqst_cmn_get_function_config {
> +	struct sli4_rqst_hdr  hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_function_config {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			desc_count;
> +	__le32			desc[54];
> +};
> +
> +/* Link Config Descriptor for link config functions */
> +struct sli4_link_config_descriptor {
> +	u8		link_config_id;
> +	u8		rsvd1[3];
> +	__le32		config_description[8];
> +};
> +
> +#define MAX_LINK_DES	10
> +
> +struct sli4_rqst_cmn_get_reconfig_link_info {
> +	struct sli4_rqst_hdr  hdr;
> +};
> +
> +struct sli4_rsp_cmn_get_reconfig_link_info {
> +	struct sli4_rsp_hdr	hdr;
> +	u8			active_link_config_id;
> +	u8			rsvd17;
> +	u8			next_link_config_id;
> +	u8			rsvd19;
> +	__le32			link_configuration_descriptor_count;
> +	struct sli4_link_config_descriptor
> +				desc[MAX_LINK_DES];
> +};
> +
> +enum sli4_set_reconfig_link_flags {
> +	SLI4_SET_RECONFIG_LINKID_NEXT	= 0xff,
> +	SLI4_SET_RECONFIG_LINKID_FD	= (1 << 31),
> +};
> +
> +struct sli4_rqst_cmn_set_reconfig_link_id {
> +	struct sli4_rqst_hdr  hdr;
> +	__le32			dw4_flags;
> +};
> +
> +struct sli4_rsp_cmn_set_reconfig_link_id {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +struct sli4_rqst_lowlevel_set_watchdog {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16			watchdog_timeout;
> +	__le16			rsvd18;
> +};
> +
> +struct sli4_rsp_lowlevel_set_watchdog {
> +	struct sli4_rsp_hdr	hdr;
> +	__le32			rsvd;
> +};
> +
> +/* FC opcode (OPC) values */
> +enum sli4_fc_opcodes {
> +	SLI4_OPC_WQ_CREATE		= 0x1,
> +	SLI4_OPC_WQ_DESTROY		= 0x2,
> +	SLI4_OPC_POST_SGL_PAGES		= 0x3,
> +	SLI4_OPC_RQ_CREATE		= 0x5,
> +	SLI4_OPC_RQ_DESTROY		= 0x6,
> +	SLI4_OPC_READ_FCF_TABLE		= 0x8,
> +	SLI4_OPC_POST_HDR_TEMPLATES	= 0xb,
> +	SLI4_OPC_REDISCOVER_FCF		= 0x10,
> +};
> +
> +/* Use the default CQ associated with the WQ */
> +#define SLI4_CQ_DEFAULT 0xffff
> +
> +/*
> + * POST_SGL_PAGES
> + *
> + * Register the scatter gather list (SGL) memory and
> + * associate it with an XRI.
> + */
> +struct sli4_rqst_post_sgl_pages {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16			xri_start;
> +	__le16			xri_count;
> +	struct {
> +		__le32		page0_low;
> +		__le32		page0_high;
> +		__le32		page1_low;
> +		__le32		page1_high;
> +	} page_set[10];
> +};
> +
> +struct sli4_rsp_post_sgl_pages {
> +	struct sli4_rsp_hdr	hdr;
> +};
> +
> +struct sli4_rqst_post_hdr_templates {
> +	struct sli4_rqst_hdr	hdr;
> +	__le16			rpi_offset;
> +	__le16			page_count;
> +	struct sli4_dmaaddr	page_descriptor[0];
> +};
> +
> +#define SLI4_HDR_TEMPLATE_SIZE		64
> +
> +enum sli4_io_flags {
> +/* The XRI associated with this IO is already active */
> +	SLI4_IO_CONTINUATION		= (1 << 0),
> +/* Automatically generate a good RSP frame */
> +	SLI4_IO_AUTO_GOOD_RESPONSE	= (1 << 1),
> +	SLI4_IO_NO_ABORT		= (1 << 2),
> +/* Set the DNRX bit because no auto xref rdy buffer is posted */
> +	SLI4_IO_DNRX			= (1 << 3),
> +};
> +
> +enum sli4_callback {
> +	SLI4_CB_LINK,
> +	SLI4_CB_MAX,
> +};
> +
> +enum sli4_link_status {
> +	SLI_LINK_STATUS_UP,
> +	SLI_LINK_STATUS_DOWN,
> +	SLI_LINK_STATUS_NO_ALPA,
> +	SLI_LINK_STATUS_MAX,
> +};
> +
> +enum sli4_link_topology {
> +	SLI_LINK_TOPO_NPORT = 1,
> +	SLI_LINK_TOPO_LOOP,
> +	SLI_LINK_TOPO_LOOPBACK_INTERNAL,
> +	SLI_LINK_TOPO_LOOPBACK_EXTERNAL,
> +	SLI_LINK_TOPO_NONE,
> +	SLI_LINK_TOPO_MAX,
> +};
> +
> +enum sli4_link_medium {
> +	SLI_LINK_MEDIUM_ETHERNET,
> +	SLI_LINK_MEDIUM_FC,
> +	SLI_LINK_MEDIUM_MAX,
> +};
> +

Why 'SLI_LINK_' and not 'SLI4_LINK_' like everywhere else?

> +/* Driver specific structures */
> +
> +struct sli4_link_event {
> +	enum sli4_link_status		status;
> +	enum sli4_link_topology	topology;
> +	enum sli4_link_medium		medium;
> +	u32				speed;
> +	u8				*loop_map;
> +	u32				fc_id;
> +};
> +
> +enum sli4_resource {
> +	SLI_RSRC_VFI,
> +	SLI_RSRC_VPI,
> +	SLI_RSRC_RPI,
> +	SLI_RSRC_XRI,
> +	SLI_RSRC_FCFI,
> +	SLI_RSRC_MAX,
> +};
> +

Same question here: why 'SLI_RSRC_' and not 'SLI4_RSRC_'?

> +struct sli4_extent {
> +	u32		number;
> +	u32		size;
> +	u32		n_alloc;
> +	u32		*base;
> +	unsigned long	*use_map;
> +	u32		map_size;
> +};
> +
> +struct sli4_queue_info {
> +	u16	max_qcount[SLI_QTYPE_MAX];
> +	u32	max_qentries[SLI_QTYPE_MAX];
> +	u16	count_mask[SLI_QTYPE_MAX];
> +	u16	count_method[SLI_QTYPE_MAX];
> +	u32	qpage_count[SLI_QTYPE_MAX];
> +};
> +
> +#define	SLI_PCI_MAX_REGS		6
> +struct sli4 {
> +	void				*os;
> +	struct pci_dev			*pcidev;
> +	void __iomem			*reg[SLI_PCI_MAX_REGS];
> +
> +	u32				sli_rev;
> +	u32				sli_family;
> +	u32				if_type;
> +
> +	u16				asic_type;
> +	u16				asic_rev;
> +
> +	u16				e_d_tov;
> +	u16				r_a_tov;
> +	struct sli4_queue_info	qinfo;
> +	u16				link_module_type;
> +	u8				rq_batch;
> +	u16				rq_min_buf_size;
> +	u32				rq_max_buf_size;
> +	u8				topology;
> +	u8				wwpn[8];
> +	u8				wwnn[8];
> +	u32				fw_rev[2];
> +	u8				fw_name[2][16];
> +	char				ipl_name[16];
> +	u32				hw_rev[3];
> +	u8				port_number;
> +	char				port_name[2];
> +	char				modeldesc[64];
> +	char				bios_version_string[32];
> +	/*
> +	 * Tracks the port resources using extents metaphor. For
> +	 * devices that don't implement extents (i.e.
> +	 * has_extents == FALSE), the code models each resource as
> +	 * a single large extent.
> +	 */
> +	struct sli4_extent		extent[SLI_RSRC_MAX];
> +	u32				features;
> +	u32				has_extents:1,
> +					auto_reg:1,
> +					auto_xfer_rdy:1,
> +					hdr_template_req:1,
> +					perf_hint:1,
> +					perf_wq_id_association:1,
> +					cq_create_version:2,
> +					mq_create_version:2,
> +					high_login_mode:1,
> +					sgl_pre_registered:1,
> +					sgl_pre_registration_required:1,
> +					t10_dif_inline_capable:1,
> +					t10_dif_separate_capable:1;
> +	u32				sge_supported_length;
> +	u32				sgl_page_sizes;
> +	u32				max_sgl_pages;
> +	u32				wqe_size;
> +
> +	/*
> +	 * Callback functions
> +	 */
> +	int				(*link)(void *ctx, void *event);
> +	void				*link_arg;
> +
> +	struct efc_dma		bmbx;
> +
> +	/* Save pointer to physical memory descriptor for non-embedded
> +	 * SLI_CONFIG commands for BMBX dumping purposes
> +	 */
> +	struct efc_dma		*bmbx_non_emb_pmd;
> +
> +	struct efc_dma		vpd_data;
> +	u32			vpd_length;
> +};
> +
>   #endif /* !_SLI4_H */
> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2020-04-12  3:32 ` [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines James Smart
  2020-04-15 10:04   ` Daniel Wagner
@ 2020-04-15 12:27   ` Hannes Reinecke
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:27 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds service routines to create mailbox commands
> and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Removed efc_assert define. Replaced with WARN_ON.
>    Returned defined return values EFC_SUCCESS/FAIL
> ---
>   drivers/scsi/elx/include/efc_common.h |   18 +
>   drivers/scsi/elx/libefc_sli/sli4.c    | 1514 +++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc_sli/sli4.h    |    9 +
>   3 files changed, 1541 insertions(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 08/31] elx: libefc: Generic state machine framework
  2020-04-12  3:32 ` [PATCH v3 08/31] elx: libefc: Generic state machine framework James Smart
@ 2020-04-15 12:37   ` Hannes Reinecke
  2020-04-15 17:20   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:37 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch starts the population of the libefc library.
> The library will contain common tasks usable by a target or initiator
> driver. The library will also contain a FC discovery state machine
> interface.
> 
> This patch creates the library directory and adds definitions
> for the discovery state machine interface.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Removed efc_sm_id array which is not used.
>    Added State Machine event name lookup array.
> ---
>   drivers/scsi/elx/libefc/efc_sm.c |  61 ++++++++++++
>   drivers/scsi/elx/libefc/efc_sm.h | 209 +++++++++++++++++++++++++++++++++++++++
>   2 files changed, 270 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
>   create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions
  2020-04-12  3:32 ` [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
@ 2020-04-15 12:41   ` Hannes Reinecke
  2020-04-15 17:32   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:41 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - SLI/Local FC port objects
> - efc_domain_s: FC domain (aka fabric) objects
> - efc_node_s: FC node (aka remote ports) objects
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Removed Sparse Vector APIs and structures.
> ---
>   drivers/scsi/elx/libefc/efc.h     |  72 +++++
>   drivers/scsi/elx/libefc/efc_lib.c |  41 +++
>   drivers/scsi/elx/libefc/efclib.h  | 640 ++++++++++++++++++++++++++++++++++++++
>   3 files changed, 753 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc.h
>   create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
>   create mode 100644 drivers/scsi/elx/libefc/efclib.h
> 
[ .. ]
> +#define EFC_LOG_LIB		0x01
> +#define EFC_LOG_NODE		0x02
> +#define EFC_LOG_PORT		0x04
> +#define EFC_LOG_DOMAIN		0x08
> +#define EFC_LOG_ELS		0x10
> +#define EFC_LOG_DOMAIN_SM	0x20
> +#define EFC_LOG_SM		0x40
> +
> +/* efc library port structure */
> +struct efc {
> +	void			*base;
> +	struct pci_dev		*pcidev;
> +	u64			req_wwpn;
> +	u64			req_wwnn;
> +
> +	u64			def_wwpn;
> +	u64			def_wwnn;
> +	u64			max_xfer_size;
> +	u32			nodes_count;
> +	mempool_t		*node_pool;
> +	struct dma_pool		*node_dma_pool;
> +
> +	u32			link_status;
> +

Maybe move 'nodes_count' after 'node_dma_pool' for better alignment?

But just a minor nit.
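
For context, the concern is that on 64-bit builds a u32 immediately followed
by a pointer leaves a 4-byte padding hole; the suggested reordering would
look roughly like this (sketch only, field names taken from the quoted
struct):

	struct efc {
		...
		mempool_t		*node_pool;
		struct dma_pool		*node_dma_pool;
		u32			nodes_count;	/* moved: now packs with link_status */
		u32			link_status;
		...
	};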

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library
  2020-04-12  3:32 ` [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
@ 2020-04-15 12:49   ` Hannes Reinecke
  2020-04-15 17:06   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:49 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds APIS to initialize the library, initialize
> the SLI Port, reset firmware, terminate the SLI Port, and
> terminate the library.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Changed some function types to bool.
>    Return defined return values EFC_SUCCESS/FAIL
>    Defined dump types SLI4_FUNC_DESC_DUMP, SLI4_CHIP_LEVEL_DUMP.
>    Defined dump status return values for sli_dump_is_ready().
>    Formatted function defines to use 80 character length.
> ---
>   drivers/scsi/elx/libefc_sli/sli4.c | 1202 ++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc_sli/sli4.h |  434 +++++++++++++
>   2 files changed, 1636 insertions(+)
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces James Smart
@ 2020-04-15 12:50   ` Hannes Reinecke
  2020-04-15 17:50   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 12:50 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - FC Domain registration, allocation and deallocation sequence
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Acquire efc->lock in efc_domain_cb to protect all the domain state
>      transitions.
>    Removed efc_assert and used WARN_ON.
>    Note: Re: Locking:
>      efc->lock is a global per port lock which is used to synchronize and
>      serialize all the state machine event processing. As there is a
>      single EQ all the events are serialized. This lock will protect the
>      sport list, sport, node list, node, and vport list. All the libefc
>      APIs called by the driver will take this lock internally.
>   Note: Re: "It would even simplify the code, as several cases can be
>        collapsed into one ..."
>      The Hardware events cannot be collapsed as each event is different
>      from the State Machine events. The code as written looks more
>      readable than a mapping array in this case.
> ---
>   drivers/scsi/elx/libefc/efc_domain.c | 1109 ++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc/efc_domain.h |   52 ++
>   2 files changed, 1161 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
>   create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
>

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs
  2020-04-12  3:32 ` [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs James Smart
@ 2020-04-15 14:34   ` Daniel Wagner
  2020-04-22  5:08     ` James Smart
  0 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 14:34 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:37PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds service routines to create different WQEs and adds
> APIs to issue iread, iwrite, treceive, tsend and other work queue
> entries.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Return defined return values EFC_SUCCESS/FAIL
>   Removed HLM argument for WQE calls.
>   Reduced args for sli_fcp_treceive64_wqe(),
>     sli_fcp_cont_treceive64_wqe(), sli_fcp_trsp64_wqe(). Defined new
>     structure sli_fcp_tgt_params.
>   Removed sli_fc_process_link_state function which was not used for FC.
> ---
>  drivers/scsi/elx/libefc_sli/sli4.c | 1565 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 1565 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 224a06610c78..0365d7943468 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -1538,3 +1538,1568 @@ sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
>  
>  	return rc;
>  }
> +
> +/* Write an ABORT_WQE work queue entry */
> +int
> +sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
> +	      enum sli4_abort_type type, bool send_abts, u32 ids,
> +	      u32 mask, u16 tag, u16 cq_id)
> +{
> +	struct sli4_abort_wqe	*abort = buf;

Why not just a space instead of a tab?

> +
> +	memset(buf, 0, size);

Is 'size' expected to be equal to the size of 'struct sli4_abort_wqe'?
Or could it be bigger? In case 'size' can be bigger than 'abort', do
you need to clear the complete buffer, or would it be enough to clear
only the size of 'abort'?
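
If clearing only the WQE itself turns out to be sufficient, the change hinted
at here would look roughly like the following. This is a sketch only, not a
claim about what the caller's buffer actually holds:

	struct sli4_abort_wqe *abort = buf;

	/* clear just the WQE rather than the whole caller-supplied buffer */
	memset(abort, 0, sizeof(*abort));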

> +
> +	switch (type) {
> +	case SLI_ABORT_XRI:
> +		abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
> +		if (mask) {
> +			efc_log_warn(sli4, "%#x aborting XRI %#x warning non-zero mask",
> +				mask, ids);
> +			mask = 0;
> +		}
> +		break;
> +	case SLI_ABORT_ABORT_ID:
> +		abort->criteria = SLI4_ABORT_CRITERIA_ABORT_TAG;
> +		break;
> +	case SLI_ABORT_REQUEST_ID:
> +		abort->criteria = SLI4_ABORT_CRITERIA_REQUEST_TAG;
> +		break;
> +	default:
> +		efc_log_info(sli4, "unsupported type %#x\n", type);
> +		return EFC_FAIL;
> +	}
> +
> +	abort->ia_ir_byte |= send_abts ? 0 : 1;
> +
> +	/* Suppress ABTS retries */
> +	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
> +
> +	abort->t_mask = cpu_to_le32(mask);
> +	abort->t_tag  = cpu_to_le32(ids);
> +	abort->command = SLI4_WQE_ABORT;
> +	abort->request_tag = cpu_to_le16(tag);
> +
> +	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
> +
> +	abort->cq_id = cpu_to_le16(cq_id);
> +	abort->cmdtype_wqec_byte |= SLI4_CMD_ABORT_WQE;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an ELS_REQUEST64_WQE work queue entry */
> +int
> +sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		      struct efc_dma *sgl,
> +		      u8 req_type, u32 req_len, u32 max_rsp_len,
> +		      u8 timeout, u16 xri, u16 tag,
> +		      u16 cq_id, u16 rnodeindicator, u16 sportindicator,
> +		      bool rnodeattached, u32 rnode_fcid, u32 sport_fcid)
> +{
> +	struct sli4_els_request64_wqe	*els = buf;

And here as well: this declaration uses a tab while the following one
uses a space.

> +	struct sli4_sge *sge = sgl->virt;
> +	bool is_fabric = false;
> +	struct sli4_bde *bptr;
> +
> +	memset(buf, 0, size);

The same question as above about the size.

> +
> +	bptr = &els->els_request_payload;
> +	if (sli4->sgl_pre_registered) {
> +		els->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_REQ_WQE_XBL;
> +
> +		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_DBDE;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +	} else {
> +		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_XBL;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +				    ((2 * sizeof(struct sli4_sge)) &
> +				     SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
> +		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
> +	}
> +
> +	els->els_request_payload_length = cpu_to_le32(req_len);
> +	els->max_response_payload_length = cpu_to_le32(max_rsp_len);
> +
> +	els->xri_tag = cpu_to_le16(xri);
> +	els->timer = timeout;
> +	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +
> +	els->command = SLI4_WQE_ELS_REQUEST64;
> +
> +	els->request_tag = cpu_to_le16(tag);
> +
> +	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_IOD;
> +
> +	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_QOSD;
> +
> +	/* figure out the ELS_ID value from the request buffer */
> +
> +	switch (req_type) {
> +	case ELS_LOGO:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_LOGO << SLI4_REQ_WQE_ELSID_SHFT;
> +		if (rnodeattached) {
> +			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
> +					 SLI4_REQ_WQE_CT_SHFT);

The brackets are not needed.

> +			els->context_tag = cpu_to_le16(rnodeindicator);
> +		} else {
> +			els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +			els->context_tag =
> +				cpu_to_le16(sportindicator);
> +		}
> +		if (rnode_fcid == FC_FID_FLOGI)
> +			is_fabric = true;
> +		break;
> +	case ELS_FDISC:
> +		if (rnode_fcid == FC_FID_FLOGI)
> +			is_fabric = true;
> +		if (sport_fcid == 0) {
> +			els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_FDISC << SLI4_REQ_WQE_ELSID_SHFT;
> +			is_fabric = true;
> +		} else {
> +			els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
> +		}
> +		els->ct_byte |= (SLI4_GENERIC_CONTEXT_VPI <<
> +				 SLI4_REQ_WQE_CT_SHFT);

Brackets are not needed.

> +		els->context_tag = cpu_to_le16(sportindicator);
> +		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
> +		break;
> +	case ELS_FLOGI:
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		/*
> +		 * Set SP here ... we haven't done a REG_VPI yet
> +		 * need to maybe not set this when we have
> +		 * completed VFI/VPI registrations ...
> +		 *
> +		 * Use the FC_ID of the SPORT if it has been allocated,
> +		 * otherwise use an S_ID of zero.
> +		 */
> +		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
> +		if (sport_fcid != U32_MAX)
> +			els->sid_sp_dword |= cpu_to_le32(sport_fcid);
> +		break;
> +	case ELS_PLOGI:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_PLOGI << SLI4_REQ_WQE_ELSID_SHFT;
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		break;
> +	case ELS_SCR:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		break;
> +	default:
> +		els->cmdtype_elsid_byte |=
> +			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
> +		if (rnodeattached) {
> +			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
> +					 SLI4_REQ_WQE_CT_SHFT);
> +			els->context_tag = cpu_to_le16(sportindicator);
> +		} else {
> +			els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
> +			els->context_tag =
> +				cpu_to_le16(sportindicator);
> +		}
> +		break;
> +	}
> +
> +	if (is_fabric)
> +		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_FABRIC;
> +	else
> +		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_NON_FABRIC;
> +
> +	els->cq_id = cpu_to_le16(cq_id);
> +
> +	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) !=
> +					SLI4_GENERIC_CONTEXT_RPI)
> +		els->remote_id_dword = cpu_to_le32(rnode_fcid);
> +
> +	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) ==
> +					SLI4_GENERIC_CONTEXT_VPI)
> +		els->temporary_rpi = cpu_to_le16(rnodeindicator);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an FCP_ICMND64_WQE work queue entry */
> +int
> +sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		    struct efc_dma *sgl, u16 xri, u16 tag,
> +		    u16 cq_id, u32 rpi, u32 rnode_fcid, u8 timeout)
> +{
> +	struct sli4_fcp_icmnd64_wqe *icmnd = buf;
> +	struct sli4_sge *sge = NULL;
> +	struct sli4_bde *bptr;
> +	u32 len;
> +
> +	memset(buf, 0, size);
> +
> +	if (!sgl || !sgl->virt) {
> +		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
> +		       sgl, sgl ? sgl->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +	sge = sgl->virt;
> +	bptr = &icmnd->bde;
> +	if (sli4->sgl_pre_registered) {
> +		icmnd->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_ICMD_WQE_XBL;
> +
> +		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_DBDE;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (le32_to_cpu(sge[0].buffer_length) &
> +				     SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +	} else {
> +		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_XBL;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
> +		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
> +	}
> +
> +	len = le32_to_cpu(sge[0].buffer_length) +
> +	      le32_to_cpu(sge[1].buffer_length);
> +	icmnd->payload_offset_length = cpu_to_le16(len);
> +	icmnd->xri_tag = cpu_to_le16(xri);
> +	icmnd->context_tag = cpu_to_le16(rpi);
> +	icmnd->timer = timeout;
> +
> +	/* WQE word 4 contains read transfer length */
> +	icmnd->class_pu_byte |= 2 << SLI4_ICMD_WQE_PU_SHFT;
> +	icmnd->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +	icmnd->command = SLI4_WQE_FCP_ICMND64;
> +	icmnd->dif_ct_bs_byte |=
> +		SLI4_GENERIC_CONTEXT_RPI << SLI4_ICMD_WQE_CT_SHFT;
> +
> +	icmnd->abort_tag = cpu_to_le32(xri);
> +
> +	icmnd->request_tag = cpu_to_le16(tag);
> +	icmnd->len_loc1_byte |= SLI4_ICMD_WQE_LEN_LOC_BIT1;
> +	icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_LEN_LOC_BIT2;
> +	icmnd->cmd_type_byte |= SLI4_CMD_FCP_ICMND64_WQE;
> +	icmnd->cq_id = cpu_to_le16(cq_id);
> +
> +	return  EFC_SUCCESS;
> +}
> +
> +/* Write an FCP_IREAD64_WQE work queue entry */
> +int
> +sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		    struct efc_dma *sgl, u32 first_data_sge,
> +		    u32 xfer_len, u16 xri, u16 tag,
> +		    u16 cq_id, u32 rpi, u32 rnode_fcid,
> +		    u8 dif, u8 bs, u8 timeout)
> +{
> +	struct sli4_fcp_iread64_wqe *iread = buf;
> +	struct sli4_sge *sge = NULL;
> +	struct sli4_bde *bptr;
> +	u32 sge_flags, len;
> +
> +	memset(buf, 0, size);
> +
> +	if (!sgl || !sgl->virt) {
> +		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
> +		       sgl, sgl ? sgl->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +
> +	sge = sgl->virt;
> +	bptr = &iread->bde;
> +	if (sli4->sgl_pre_registered) {
> +		iread->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IR_WQE_XBL;
> +
> +		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_DBDE;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (le32_to_cpu(sge[0].buffer_length) &
> +				     SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.blp.low  = sge[0].buffer_address_low;
> +		bptr->u.blp.high = sge[0].buffer_address_high;
> +	} else {
> +		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_XBL;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.blp.low  =
> +				cpu_to_le32(lower_32_bits(sgl->phys));
> +		bptr->u.blp.high =
> +				cpu_to_le32(upper_32_bits(sgl->phys));
> +
> +		/*
> +		 * fill out fcp_cmnd buffer len and change resp buffer to be of
> +		 * type "skip" (note: response will still be written to sge[1]
> +		 * if necessary)
> +		 */
> +		len = le32_to_cpu(sge[0].buffer_length);
> +		iread->fcp_cmd_buffer_length = cpu_to_le16(len);
> +
> +		sge_flags = le32_to_cpu(sge[1].dw2_flags);
> +		sge_flags &= (~SLI4_SGE_TYPE_MASK);
> +		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
> +		sge[1].dw2_flags = cpu_to_le32(sge_flags);
> +	}
> +
> +	len = le32_to_cpu(sge[0].buffer_length) +
> +	      le32_to_cpu(sge[1].buffer_length);
> +	iread->payload_offset_length = cpu_to_le16(len);
> +	iread->total_transfer_length = cpu_to_le32(xfer_len);
> +
> +	iread->xri_tag = cpu_to_le16(xri);
> +	iread->context_tag = cpu_to_le16(rpi);
> +
> +	iread->timer = timeout;
> +
> +	/* WQE word 4 contains read transfer length */
> +	iread->class_pu_byte |= 2 << SLI4_IR_WQE_PU_SHFT;
> +	iread->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +	iread->command = SLI4_WQE_FCP_IREAD64;
> +	iread->dif_ct_bs_byte |=
> +		SLI4_GENERIC_CONTEXT_RPI << SLI4_IR_WQE_CT_SHFT;
> +	iread->dif_ct_bs_byte |= dif;
> +	iread->dif_ct_bs_byte  |= bs << SLI4_IR_WQE_BS_SHFT;
> +
> +	iread->abort_tag = cpu_to_le32(xri);
> +
> +	iread->request_tag = cpu_to_le16(tag);
> +	iread->len_loc1_byte |= SLI4_IR_WQE_LEN_LOC_BIT1;
> +	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_LEN_LOC_BIT2;
> +	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_IOD;
> +	iread->cmd_type_byte |= SLI4_CMD_FCP_IREAD64_WQE;
> +	iread->cq_id = cpu_to_le16(cq_id);
> +
> +	if (sli4->perf_hint) {
> +		bptr = &iread->first_data_bde;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			  (le32_to_cpu(sge[first_data_sge].buffer_length) &
> +			     SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.data.low =
> +			sge[first_data_sge].buffer_address_low;
> +		bptr->u.data.high =
> +			sge[first_data_sge].buffer_address_high;
> +	}
> +
> +	return  EFC_SUCCESS;
> +}
> +
> +/* Write an FCP_IWRITE64_WQE work queue entry */
> +int
> +sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		     struct efc_dma *sgl,
> +		     u32 first_data_sge, u32 xfer_len,
> +		     u32 first_burst, u16 xri, u16 tag,
> +		     u16 cq_id, u32 rpi,
> +		     u32 rnode_fcid,
> +		     u8 dif, u8 bs, u8 timeout)
> +{
> +	struct sli4_fcp_iwrite64_wqe *iwrite = buf;
> +	struct sli4_sge *sge = NULL;
> +	struct sli4_bde *bptr;
> +	u32 sge_flags, min, len;
> +
> +	memset(buf, 0, size);
> +
> +	if (!sgl || !sgl->virt) {
> +		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
> +		       sgl, sgl ? sgl->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +	sge = sgl->virt;
> +	bptr = &iwrite->bde;
> +	if (sli4->sgl_pre_registered) {
> +		iwrite->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IWR_WQE_XBL;
> +
> +		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_DBDE;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				     (le32_to_cpu(sge[0].buffer_length) &
> +				      SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +	} else {
> +		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_XBL;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.blp.low  =
> +			cpu_to_le32(lower_32_bits(sgl->phys));
> +		bptr->u.blp.high =
> +			cpu_to_le32(upper_32_bits(sgl->phys));
> +
> +		/*
> +		 * fill out fcp_cmnd buffer len and change resp buffer to be of
> +		 * type "skip" (note: response will still be written to sge[1]
> +		 * if necessary)
> +		 */
> +		len = le32_to_cpu(sge[0].buffer_length);
> +		iwrite->fcp_cmd_buffer_length = cpu_to_le16(len);
> +		sge_flags = le32_to_cpu(sge[1].dw2_flags);
> +		sge_flags &= ~SLI4_SGE_TYPE_MASK;
> +		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
> +		sge[1].dw2_flags = cpu_to_le32(sge_flags);
> +	}
> +
> +	len = le32_to_cpu(sge[0].buffer_length) +
> +	      le32_to_cpu(sge[1].buffer_length);
> +	iwrite->payload_offset_length = cpu_to_le16(len);
> +	iwrite->total_transfer_length = cpu_to_le16(xfer_len);
> +	min = (xfer_len < first_burst) ? xfer_len : first_burst;
> +	iwrite->initial_transfer_length = cpu_to_le16(min);
> +
> +	iwrite->xri_tag = cpu_to_le16(xri);
> +	iwrite->context_tag = cpu_to_le16(rpi);
> +
> +	iwrite->timer = timeout;
> +	/* WQE word 4 contains read transfer length */
> +	iwrite->class_pu_byte |= 2 << SLI4_IWR_WQE_PU_SHFT;
> +	iwrite->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +	iwrite->command = SLI4_WQE_FCP_IWRITE64;
> +	iwrite->dif_ct_bs_byte |=
> +			SLI4_GENERIC_CONTEXT_RPI << SLI4_IWR_WQE_CT_SHFT;
> +	iwrite->dif_ct_bs_byte |= dif;
> +	iwrite->dif_ct_bs_byte |= bs << SLI4_IWR_WQE_BS_SHFT;
> +
> +	iwrite->abort_tag = cpu_to_le32(xri);
> +
> +	iwrite->request_tag = cpu_to_le16(tag);
> +	iwrite->len_loc1_byte |= SLI4_IWR_WQE_LEN_LOC_BIT1;
> +	iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_LEN_LOC_BIT2;
> +	iwrite->cmd_type_byte |= SLI4_CMD_FCP_IWRITE64_WQE;
> +	iwrite->cq_id = cpu_to_le16(cq_id);
> +
> +	if (sli4->perf_hint) {
> +		bptr = &iwrite->first_data_bde;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			 (le32_to_cpu(sge[first_data_sge].buffer_length) &
> +			     SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.data.low =
> +			sge[first_data_sge].buffer_address_low;
> +		bptr->u.data.high =
> +			sge[first_data_sge].buffer_address_high;
> +	}
> +
> +	return  EFC_SUCCESS;
> +}
> +
> +/* Write an FCP_TRECEIVE64_WQE work queue entry */
> +int
> +sli_fcp_treceive64_wqe(struct sli4 *sli, void *buf,
> +		       struct efc_dma *sgl,
> +		       u32 first_data_sge,
> +		       u32 xfer_len, u16 xri, u16 tag,
> +		       u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +		       struct sli_fcp_tgt_params *params)
> +{
> +	struct sli4_fcp_treceive64_wqe *trecv = buf;
> +	struct sli4_fcp_128byte_wqe *trecv_128 = buf;
> +	struct sli4_sge *sge = NULL;
> +	struct sli4_bde *bptr;
> +
> +	memset(buf, 0, sli->wqe_size);
> +
> +	if (!sgl || !sgl->virt) {
> +		efc_log_err(sli, "bad parameter sgl=%p virt=%p\n",
> +		       sgl, sgl ? sgl->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +	sge = sgl->virt;
> +	bptr = &trecv->bde;
> +	if (sli->sgl_pre_registered) {
> +		trecv->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_TRCV_WQE_XBL;
> +
> +		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_DBDE;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (le32_to_cpu(sge[0].buffer_length)
> +					& SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +
> +		trecv->payload_offset_length = sge[0].buffer_length;
> +	} else {
> +		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_XBL;
> +
> +		/* if data is a single physical address, use a BDE */
> +		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
> +			trecv->qosd_xbl_hlm_iod_dbde_wqes |=
> +							SLI4_TRCV_WQE_DBDE;
> +			bptr->bde_type_buflen =
> +			      cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +					  (le32_to_cpu(sge[2].buffer_length)
> +					  & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +			bptr->u.data.low =
> +				sge[2].buffer_address_low;
> +			bptr->u.data.high =
> +				sge[2].buffer_address_high;
> +		} else {
> +			bptr->bde_type_buflen =
> +				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +				(sgl->size & SLI4_BDE_MASK_BUFFER_LEN));
> +			bptr->u.blp.low =
> +				cpu_to_le32(lower_32_bits(sgl->phys));
> +			bptr->u.blp.high =
> +				cpu_to_le32(upper_32_bits(sgl->phys));
> +		}
> +	}
> +
> +	trecv->relative_offset = cpu_to_le32(params->offset);
> +
> +	if (params->flags & SLI4_IO_CONTINUATION)
> +		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_XC;
> +
> +	trecv->xri_tag = cpu_to_le16(xri);
> +
> +	trecv->context_tag = cpu_to_le16(rpi);
> +
> +	/* WQE uses relative offset */
> +	trecv->class_ar_pu_byte |= 1 << SLI4_TRCV_WQE_PU_SHFT;
> +
> +	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE)
> +		trecv->class_ar_pu_byte |= SLI4_TRCV_WQE_AR;
> +
> +	trecv->command = SLI4_WQE_FCP_TRECEIVE64;
> +	trecv->class_ar_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +	trecv->dif_ct_bs_byte |=
> +		SLI4_GENERIC_CONTEXT_RPI << SLI4_TRCV_WQE_CT_SHFT;
> +	trecv->dif_ct_bs_byte |= bs << SLI4_TRCV_WQE_BS_SHFT;
> +
> +	trecv->remote_xid = cpu_to_le16(params->ox_id);
> +
> +	trecv->request_tag = cpu_to_le16(tag);
> +
> +	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_IOD;
> +
> +	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_LEN_LOC_BIT2;
> +
> +	trecv->cmd_type_byte |= SLI4_CMD_FCP_TRECEIVE64_WQE;
> +
> +	trecv->cq_id = cpu_to_le16(cq_id);
> +
> +	trecv->fcp_data_receive_length = cpu_to_le32(xfer_len);
> +
> +	if (sli->perf_hint) {
> +		bptr = &trecv->first_data_bde;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
> +			     SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.data.low =
> +			sge[first_data_sge].buffer_address_low;
> +		bptr->u.data.high =
> +			sge[first_data_sge].buffer_address_high;
> +	}
> +
> +	/* The upper 7 bits of csctl is the priority */
> +	if (params->cs_ctl & SLI4_MASK_CCP) {
> +		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_CCPE;
> +		trecv->ccp = (params->cs_ctl & SLI4_MASK_CCP);
> +	}
> +
> +	if (params->app_id && sli->wqe_size == SLI4_WQE_EXT_BYTES &&
> +	    !(trecv->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
> +		trecv->lloc1_appid |= SLI4_TRCV_WQE_APPID;
> +		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_WQES;
> +		trecv_128->dw[31] = params->app_id;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an FCP_CONT_TRECEIVE64_WQE work queue entry */
> +int
> +sli_fcp_cont_treceive64_wqe(struct sli4 *sli, void *buf,
> +			    struct efc_dma *sgl, u32 first_data_sge,
> +			    u32 xfer_len, u16 xri, u16 sec_xri, u16 tag,
> +			    u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +			    struct sli_fcp_tgt_params *params)
> +{
> +	int rc;
> +
> +	rc = sli_fcp_treceive64_wqe(sli, buf, sgl, first_data_sge,
> +				    xfer_len, xri, tag, cq_id,
> +				    rpi, rnode_fcid, dif, bs, params);
> +	if (rc == 0) {

	if (rc == EFC_SUCCESS)

> +		struct sli4_fcp_treceive64_wqe *trecv = buf;
> +
> +		trecv->command = SLI4_WQE_FCP_CONT_TRECEIVE64;
> +		trecv->dword5.sec_xri_tag = cpu_to_le16(sec_xri);
> +	}
> +	return rc;
> +}
> +
> +/* Write an FCP_TRSP64_WQE work queue entry */
> +int
> +sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		   u32 rsp_len, u16 xri, u16 tag, u16 cq_id, u32 rpi,
> +		   u32 rnode_fcid, u8 port_owned,
> +		   struct sli_fcp_tgt_params *params)
> +{
> +	struct sli4_fcp_trsp64_wqe *trsp = buf;
> +	struct sli4_fcp_128byte_wqe *trsp_128 = buf;
> +	struct sli4_bde *bptr;
> +
> +	memset(buf, 0, sli4->wqe_size);
> +
> +	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE) {
> +		trsp->class_ag_byte |= SLI4_TRSP_WQE_AG;
> +	} else {
> +		struct sli4_sge	*sge = sgl->virt;
> +
> +		if (sli4->sgl_pre_registered || port_owned)
> +			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_DBDE;
> +		else
> +			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_XBL;
> +		bptr = &trsp->bde;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				     (le32_to_cpu(sge[0].buffer_length) &
> +				      SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +
> +		trsp->fcp_response_length = cpu_to_le32(rsp_len);
> +	}
> +
> +	if (params->flags & SLI4_IO_CONTINUATION)
> +		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_XC;
> +
> +	trsp->xri_tag = cpu_to_le16(xri);
> +	trsp->rpi = cpu_to_le16(rpi);
> +
> +	trsp->command = SLI4_WQE_FCP_TRSP64;
> +	trsp->class_ag_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +
> +	trsp->remote_xid = cpu_to_le16(params->ox_id);
> +	trsp->request_tag = cpu_to_le16(tag);
> +	if (params->flags & SLI4_IO_DNRX)
> +		trsp->ct_dnrx_byte |= SLI4_TRSP_WQE_DNRX;
> +	else
> +		trsp->ct_dnrx_byte &= ~SLI4_TRSP_WQE_DNRX;
> +
> +	trsp->lloc1_appid |= 0x1;
> +	trsp->cq_id = cpu_to_le16(cq_id);
> +	trsp->cmd_type_byte = SLI4_CMD_FCP_TRSP64_WQE;
> +
> +	/* The upper 7 bits of csctl is the priority */
> +	if (params->cs_ctl & SLI4_MASK_CCP) {
> +		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_CCPE;
> +		trsp->ccp = (params->cs_ctl & SLI4_MASK_CCP);
> +	}
> +
> +	if (params->app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
> +	    !(trsp->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
> +		trsp->lloc1_appid |= SLI4_TRSP_WQE_APPID;
> +		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_WQES;
> +		trsp_128->dw[31] = params->app_id;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an FCP_TSEND64_WQE work queue entry */
> +int
> +sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		    u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
> +		    u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +		    struct sli_fcp_tgt_params *params)
> +{
> +	struct sli4_fcp_tsend64_wqe *tsend = buf;
> +	struct sli4_fcp_128byte_wqe *tsend_128 = buf;
> +	struct sli4_sge *sge = NULL;
> +	struct sli4_bde *bptr;
> +
> +	memset(buf, 0, sli4->wqe_size);
> +
> +	if (!sgl || !sgl->virt) {
> +		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
> +		       sgl, sgl ? sgl->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +	sge = sgl->virt;
> +
> +	bptr = &tsend->bde;
> +	if (sli4->sgl_pre_registered) {
> +		tsend->ll_qd_xbl_hlm_iod_dbde &= ~SLI4_TSEND_WQE_XBL;
> +
> +		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				   (le32_to_cpu(sge[2].buffer_length) &
> +				    SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		/* TSEND64_WQE specifies first two SGE are skipped (3rd is
> +		 * valid)
> +		 */
> +		bptr->u.data.low  = sge[2].buffer_address_low;
> +		bptr->u.data.high = sge[2].buffer_address_high;
> +	} else {
> +		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_XBL;
> +
> +		/* if data is a single physical address, use a BDE */
> +		if (!dif && xfer_len <= le32_to_cpu(sge[2].buffer_length)) {
> +			tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
> +
> +			bptr->bde_type_buflen =
> +			    cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +					(le32_to_cpu(sge[2].buffer_length) &
> +					SLI4_BDE_MASK_BUFFER_LEN));
> +			/*
> +			 * TSEND64_WQE specifies first two SGE are skipped
> +			 * (i.e. 3rd is valid)
> +			 */
> +			bptr->u.data.low =
> +				sge[2].buffer_address_low;
> +			bptr->u.data.high =
> +				sge[2].buffer_address_high;
> +		} else {
> +			bptr->bde_type_buflen =
> +				cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +					    (sgl->size &
> +					     SLI4_BDE_MASK_BUFFER_LEN));
> +			bptr->u.blp.low =
> +				cpu_to_le32(lower_32_bits(sgl->phys));
> +			bptr->u.blp.high =
> +				cpu_to_le32(upper_32_bits(sgl->phys));
> +		}
> +	}
> +
> +	tsend->relative_offset = cpu_to_le32(params->offset);
> +
> +	if (params->flags & SLI4_IO_CONTINUATION)
> +		tsend->dw10byte2 |= SLI4_TSEND_XC;
> +
> +	tsend->xri_tag = cpu_to_le16(xri);
> +
> +	tsend->rpi = cpu_to_le16(rpi);
> +	/* WQE uses relative offset */
> +	tsend->class_pu_ar_byte |= 1 << SLI4_TSEND_WQE_PU_SHFT;
> +
> +	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE)
> +		tsend->class_pu_ar_byte |= SLI4_TSEND_WQE_AR;
> +
> +	tsend->command = SLI4_WQE_FCP_TSEND64;
> +	tsend->class_pu_ar_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +	tsend->ct_byte |= SLI4_GENERIC_CONTEXT_RPI << SLI4_TSEND_CT_SHFT;
> +	tsend->ct_byte |= dif;
> +	tsend->ct_byte |= bs << SLI4_TSEND_BS_SHFT;
> +
> +	tsend->remote_xid = cpu_to_le16(params->ox_id);
> +
> +	tsend->request_tag = cpu_to_le16(tag);
> +
> +	tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_LEN_LOC_BIT2;
> +
> +	tsend->cq_id = cpu_to_le16(cq_id);
> +
> +	tsend->cmd_type_byte |= SLI4_CMD_FCP_TSEND64_WQE;
> +
> +	tsend->fcp_data_transmit_length = cpu_to_le32(xfer_len);
> +
> +	if (sli4->perf_hint) {
> +		bptr = &tsend->first_data_bde;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
> +			     SLI4_BDE_MASK_BUFFER_LEN));
> +		bptr->u.data.low =
> +			sge[first_data_sge].buffer_address_low;
> +		bptr->u.data.high =
> +			sge[first_data_sge].buffer_address_high;
> +	}
> +
> +	/* The upper 7 bits of csctl is the priority */
> +	if (params->cs_ctl & SLI4_MASK_CCP) {
> +		tsend->dw10byte2 |= SLI4_TSEND_CCPE;
> +		tsend->ccp = (params->cs_ctl & SLI4_MASK_CCP);
> +	}
> +
> +	if (params->app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
> +	    !(tsend->dw10byte2 & SLI4_TSEND_EAT)) {
> +		tsend->dw10byte0 |= SLI4_TSEND_APPID_VALID;
> +		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQES;
> +		tsend_128->dw[31] = params->app_id;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a GEN_REQUEST64 work queue entry */
> +int
> +sli_gen_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		      u32 req_len, u32 max_rsp_len, u16 xri, u16 tag,
> +		      u16 cq_id, u32 rnode_fcid, u16 rnodeindicator,
> +		      struct sli_ct_params *params)
> +{
> +	struct sli4_gen_request64_wqe	*gen = buf;
> +	struct sli4_sge *sge = NULL;
> +	struct sli4_bde *bptr;
> +
> +	memset(buf, 0, sli4->wqe_size);
> +
> +	if (!sgl || !sgl->virt) {
> +		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
> +		       sgl, sgl ? sgl->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +	sge = sgl->virt;
> +	bptr = &gen->bde;
> +
> +	if (sli4->sgl_pre_registered) {
> +		gen->dw10flags1 &= ~SLI4_GEN_REQ64_WQE_XBL;
> +
> +		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_DBDE;
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (req_len & SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.data.low  = sge[0].buffer_address_low;
> +		bptr->u.data.high = sge[0].buffer_address_high;
> +	} else {
> +		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_XBL;
> +
> +		bptr->bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BLP << BDE_TYPE_SHIFT) |
> +				    ((2 * sizeof(struct sli4_sge)) &
> +				     SLI4_BDE_MASK_BUFFER_LEN));
> +
> +		bptr->u.blp.low =
> +			cpu_to_le32(lower_32_bits(sgl->phys));
> +		bptr->u.blp.high =
> +			cpu_to_le32(upper_32_bits(sgl->phys));
> +	}
> +
> +	gen->request_payload_length = cpu_to_le32(req_len);
> +	gen->max_response_payload_length = cpu_to_le32(max_rsp_len);
> +
> +	gen->df_ctl = params->df_ctl;
> +	gen->type = params->type;
> +	gen->r_ctl = params->r_ctl;
> +
> +	gen->xri_tag = cpu_to_le16(xri);
> +
> +	gen->ct_byte = SLI4_GENERIC_CONTEXT_RPI << SLI4_GEN_REQ64_CT_SHFT;
> +	gen->context_tag = cpu_to_le16(rnodeindicator);
> +
> +	gen->class_byte = SLI4_GENERIC_CLASS_CLASS_3;
> +
> +	gen->command = SLI4_WQE_GEN_REQUEST64;
> +
> +	gen->timer = params->timeout;
> +
> +	gen->request_tag = cpu_to_le16(tag);
> +
> +	gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_IOD;
> +
> +	gen->dw10flags0 |= SLI4_GEN_REQ64_WQE_QOSD;
> +
> +	gen->cmd_type_byte = SLI4_CMD_GEN_REQUEST64_WQE;
> +
> +	gen->cq_id = cpu_to_le16(cq_id);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a SEND_FRAME work queue entry */
> +int
> +sli_send_frame_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		   u8 sof, u8 eof, u32 *hdr,
> +			struct efc_dma *payload, u32 req_len,
> +			u8 timeout, u16 xri, u16 req_tag)
> +{
> +	struct sli4_send_frame_wqe *sf = buf;
> +
> +	memset(buf, 0, size);
> +
> +	sf->dw10flags1 |= SLI4_SF_WQE_DBDE;
> +	sf->bde.bde_type_buflen = cpu_to_le32(req_len &
> +					      SLI4_BDE_MASK_BUFFER_LEN);
> +	sf->bde.u.data.low =
> +		cpu_to_le32(lower_32_bits(payload->phys));
> +	sf->bde.u.data.high =
> +		cpu_to_le32(upper_32_bits(payload->phys));
> +
> +	/* Copy FC header */
> +	sf->fc_header_0_1[0] = cpu_to_le32(hdr[0]);
> +	sf->fc_header_0_1[1] = cpu_to_le32(hdr[1]);
> +	sf->fc_header_2_5[0] = cpu_to_le32(hdr[2]);
> +	sf->fc_header_2_5[1] = cpu_to_le32(hdr[3]);
> +	sf->fc_header_2_5[2] = cpu_to_le32(hdr[4]);
> +	sf->fc_header_2_5[3] = cpu_to_le32(hdr[5]);
> +
> +	sf->frame_length = cpu_to_le32(req_len);
> +
> +	sf->xri_tag = cpu_to_le16(xri);
> +	sf->dw7flags0 &= ~SLI4_SF_PU;
> +	sf->context_tag = 0;
> +
> +	sf->ct_byte &= ~SLI4_SF_CT;
> +	sf->command = SLI4_WQE_SEND_FRAME;
> +	sf->dw7flags0 |= SLI4_GENERIC_CLASS_CLASS_3;
> +	sf->timer = timeout;
> +
> +	sf->request_tag = cpu_to_le16(req_tag);
> +	sf->eof = eof;
> +	sf->sof = sof;
> +
> +	sf->dw10flags1 &= ~SLI4_SF_QOSD;
> +	sf->dw10flags0 |= SLI4_SF_LEN_LOC_BIT1;
> +	sf->dw10flags2 &= ~SLI4_SF_XC;
> +
> +	sf->dw10flags1 |= SLI4_SF_XBL;
> +
> +	sf->cmd_type_byte |= SLI4_CMD_SEND_FRAME_WQE;
> +	sf->cq_id = cpu_to_le16(0xffff);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an XMIT_BLS_RSP64_WQE work queue entry */
> +int
> +sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		       struct sli_bls_payload *payload, u16 xri,
> +		       u16 tag, u16 cq_id,
> +		       bool rnodeattached, u16 rnodeindicator,
> +		       u16 sportindicator, u32 rnode_fcid,
> +		       u32 sport_fcid, u32 s_id)
> +{
> +	struct sli4_xmit_bls_rsp_wqe *bls = buf;
> +	u32 dw_ridflags = 0;
> +
> +	/*
> +	 * Callers can either specify RPI or S_ID, but not both
> +	 */
> +	if (rnodeattached && s_id != U32_MAX) {
> +		efc_log_info(sli4, "S_ID specified for attached remote node %d\n",
> +			rnodeindicator);
> +		return EFC_FAIL;
> +	}
> +
> +	memset(buf, 0, size);
> +
> +	if (payload->type == SLI4_SLI_BLS_ACC) {
> +		bls->payload_word0 =
> +			cpu_to_le32((payload->u.acc.seq_id_last << 16) |
> +				    (payload->u.acc.seq_id_validity << 24));
> +		bls->high_seq_cnt = payload->u.acc.high_seq_cnt;
> +		bls->low_seq_cnt = payload->u.acc.low_seq_cnt;
> +	} else if (payload->type == SLI4_SLI_BLS_RJT) {
> +		bls->payload_word0 =
> +				cpu_to_le32(*((u32 *)&payload->u.rjt));
> +		dw_ridflags |= SLI4_BLS_RSP_WQE_AR;
> +	} else {
> +		efc_log_info(sli4, "bad BLS type %#x\n", payload->type);
> +		return EFC_FAIL;
> +	}
> +
> +	bls->ox_id = payload->ox_id;
> +	bls->rx_id = payload->rx_id;
> +
> +	if (rnodeattached) {
> +		bls->dw8flags0 |=
> +		SLI4_GENERIC_CONTEXT_RPI << SLI4_BLS_RSP_WQE_CT_SHFT;
> +		bls->context_tag = cpu_to_le16(rnodeindicator);
> +	} else {
> +		bls->dw8flags0 |=
> +		SLI4_GENERIC_CONTEXT_VPI << SLI4_BLS_RSP_WQE_CT_SHFT;
> +		bls->context_tag = cpu_to_le16(sportindicator);
> +
> +		if (s_id != U32_MAX)
> +			bls->local_n_port_id_dword |=
> +				cpu_to_le32(s_id & 0x00ffffff);
> +		else
> +			bls->local_n_port_id_dword |=
> +				cpu_to_le32(sport_fcid & 0x00ffffff);
> +
> +		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
> +			       (rnode_fcid & SLI4_BLS_RSP_RID);
> +
> +		bls->temporary_rpi = cpu_to_le16(rnodeindicator);
> +	}
> +
> +	bls->xri_tag = cpu_to_le16(xri);
> +
> +	bls->dw8flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
> +
> +	bls->command = SLI4_WQE_XMIT_BLS_RSP;
> +
> +	bls->request_tag = cpu_to_le16(tag);
> +
> +	bls->dw11flags1 |= SLI4_BLS_RSP_WQE_QOSD;
> +
> +	bls->remote_id_dword = cpu_to_le32(dw_ridflags);
> +	bls->cq_id = cpu_to_le16(cq_id);
> +
> +	bls->dw12flags0 |= SLI4_CMD_XMIT_BLS_RSP64_WQE;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a XMIT_ELS_RSP64_WQE work queue entry */
> +int
> +sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		       struct efc_dma *rsp, u32 rsp_len,
> +				u16 xri, u16 tag, u16 cq_id,
> +				u16 ox_id, u16 rnodeindicator,
> +				u16 sportindicator,
> +				bool rnodeattached, u32 rnode_fcid,
> +				u32 flags, u32 s_id)
> +{
> +	struct sli4_xmit_els_rsp64_wqe *els = buf;
> +
> +	memset(buf, 0, size);
> +
> +	if (sli4->sgl_pre_registered)
> +		els->flags2 |= SLI4_ELS_DBDE;
> +	else
> +		els->flags2 |= SLI4_ELS_XBL;
> +
> +	els->els_response_payload.bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (rsp_len & SLI4_BDE_MASK_BUFFER_LEN));
> +	els->els_response_payload.u.data.low =
> +		cpu_to_le32(lower_32_bits(rsp->phys));
> +	els->els_response_payload.u.data.high =
> +		cpu_to_le32(upper_32_bits(rsp->phys));
> +
> +	els->els_response_payload_length = cpu_to_le32(rsp_len);
> +
> +	els->xri_tag = cpu_to_le16(xri);
> +
> +	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
> +
> +	els->command = SLI4_WQE_ELS_RSP64;
> +
> +	els->request_tag = cpu_to_le16(tag);
> +
> +	els->ox_id = cpu_to_le16(ox_id);
> +
> +	els->flags2 |= (SLI4_ELS_IOD & SLI4_ELS_REQUEST64_DIR_WRITE);

Brackets not needed.
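i.e. this can simply be:

	els->flags2 |= SLI4_ELS_IOD & SLI4_ELS_REQUEST64_DIR_WRITE;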

> +
> +	els->flags2 |= SLI4_ELS_QOSD;
> +
> +	if (flags & SLI4_IO_CONTINUATION)
> +		els->flags3 |= SLI4_ELS_XC;
> +
> +	if (rnodeattached) {
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_RPI << SLI4_ELS_CT_OFFSET;
> +		els->context_tag = cpu_to_le16(rnodeindicator);
> +	} else {
> +		els->ct_byte |=
> +			SLI4_GENERIC_CONTEXT_VPI << SLI4_ELS_CT_OFFSET;
> +		els->context_tag = cpu_to_le16(sportindicator);
> +		els->rid_dw = cpu_to_le32(rnode_fcid & SLI4_ELS_RID);
> +		els->temporary_rpi = cpu_to_le16(rnodeindicator);
> +		if (s_id != U32_MAX) {
> +			els->sid_dw |= cpu_to_le32(SLI4_ELS_SP |
> +						   (s_id & SLI4_ELS_SID));
> +		}
> +	}
> +
> +	els->cmd_type_wqec = SLI4_ELS_REQUEST64_CMD_GEN;
> +
> +	els->cq_id = cpu_to_le16(cq_id);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a XMIT_SEQUENCE64 work queue entry */
> +int
> +sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf,
> +			struct efc_dma *payload, u32 payload_len,
> +			u16 xri, u16 tag, u32 rnode_fcid,
> +			u16 rnodeindicator, struct sli_ct_params *params)
> +{
> +	struct sli4_xmit_sequence64_wqe *xmit = buf;
> +
> +	memset(buf, 0, sli4->wqe_size);
> +
> +	if (!payload || !payload->virt) {
> +		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
> +		       payload, payload ? payload->virt : NULL);
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->sgl_pre_registered)
> +		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_DBDE);
> +	else
> +		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_XBL);
> +
> +	xmit->bde.bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			(payload_len & SLI4_BDE_MASK_BUFFER_LEN));
> +	xmit->bde.u.data.low  =
> +			cpu_to_le32(lower_32_bits(payload->phys));
> +	xmit->bde.u.data.high =
> +			cpu_to_le32(upper_32_bits(payload->phys));
> +	xmit->sequence_payload_len = cpu_to_le32(payload_len);
> +
> +	xmit->remote_n_port_id_dword |= cpu_to_le32(rnode_fcid & 0x00ffffff);
> +
> +	xmit->relative_offset = 0;
> +
> +	/* sequence initiative - this matches what is seen from
> +	 * FC switches in response to FCGS commands
> +	 */
> +	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_SI);
> +	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_FT);/* force transmit */
> +	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_XO);/* exchange responder */
> +	xmit->dw5flags0 |= SLI4_SEQ_WQE_LS;/* last in sequence */
> +	xmit->df_ctl = params->df_ctl;
> +	xmit->type = params->type;
> +	xmit->r_ctl = params->r_ctl;
> +
> +	xmit->xri_tag = cpu_to_le16(xri);
> +	xmit->context_tag = cpu_to_le16(rnodeindicator);
> +
> +	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_DIF);
> +	xmit->dw7flags0 |=
> +		SLI4_GENERIC_CONTEXT_RPI << SLI4_SEQ_WQE_CT_SHIFT;
> +	xmit->dw7flags0 &= (~SLI4_SEQ_WQE_BS);
> +
> +	xmit->command = SLI4_WQE_XMIT_SEQUENCE64;
> +	xmit->dw7flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
> +	xmit->dw7flags1 &= (~SLI4_SEQ_WQE_PU);
> +	xmit->timer = params->timeout;
> +
> +	xmit->abort_tag = 0;
> +	xmit->request_tag = cpu_to_le16(tag);
> +	xmit->remote_xid = cpu_to_le16(params->ox_id);
> +
> +	xmit->dw10w0 |=
> +	cpu_to_le16(SLI4_ELS_REQUEST64_DIR_READ << SLI4_SEQ_WQE_IOD_SHIFT);
> +
> +	xmit->cmd_type_wqec_byte |= SLI4_CMD_XMIT_SEQUENCE64_WQE;
> +
> +	xmit->dw10w0 |= cpu_to_le16(2 << SLI4_SEQ_WQE_LEN_LOC_SHIFT);
> +
> +	xmit->cq_id = cpu_to_le16(0xFFFF);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a REQUEUE_XRI_WQE work queue entry */
> +int
> +sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		    u16 xri, u16 tag, u16 cq_id)
> +{
> +	struct sli4_requeue_xri_wqe *requeue = buf;
> +
> +	memset(buf, 0, size);
> +
> +	requeue->command = SLI4_WQE_REQUEUE_XRI;
> +	requeue->xri_tag = cpu_to_le16(xri);
> +	requeue->request_tag = cpu_to_le16(tag);
> +	requeue->flags2 |= cpu_to_le16(SLI4_REQU_XRI_WQE_XC);
> +	requeue->flags1 |= cpu_to_le16(SLI4_REQU_XRI_WQE_QOSD);
> +	requeue->cq_id = cpu_to_le16(cq_id);
> +	requeue->cmd_type_wqec_byte = SLI4_CMD_REQUEUE_XRI_WQE;
> +	return EFC_SUCCESS;
> +}
> +
> +/* Process an asynchronous Link Attention event entry */
> +int
> +sli_fc_process_link_attention(struct sli4 *sli4, void *acqe)
> +{
> +	struct sli4_link_attention *link_attn = acqe;
> +	struct sli4_link_event event = { 0 };
> +
> +	efc_log_info(sli4, "link=%d attn_type=%#x top=%#x speed=%#x pfault=%#x\n",
> +		link_attn->link_number, link_attn->attn_type,
> +		      link_attn->topology, link_attn->port_speed,
> +		      link_attn->port_fault);
> +	efc_log_info(sli4, "shared_lnk_status=%#x logl_lnk_speed=%#x evttag=%#x\n",
> +		link_attn->shared_link_status,
> +		      le16_to_cpu(link_attn->logical_link_speed),
> +		      le32_to_cpu(link_attn->event_tag));
> +
> +	if (!sli4->link)
> +		return EFC_FAIL;
> +
> +	event.medium   = SLI_LINK_MEDIUM_FC;
> +
> +	switch (link_attn->attn_type) {
> +	case LINK_ATTN_TYPE_LINK_UP:
> +		event.status = SLI_LINK_STATUS_UP;
> +		break;
> +	case LINK_ATTN_TYPE_LINK_DOWN:
> +		event.status = SLI_LINK_STATUS_DOWN;
> +		break;
> +	case LINK_ATTN_TYPE_NO_HARD_ALPA:
> +		efc_log_info(sli4, "attn_type: no hard alpa\n");
> +		event.status = SLI_LINK_STATUS_NO_ALPA;
> +		break;
> +	default:
> +		efc_log_info(sli4, "attn_type: unknown\n");
> +		break;
> +	}
> +
> +	switch (link_attn->event_type) {
> +	case FC_EVENT_LINK_ATTENTION:
> +		break;
> +	case FC_EVENT_SHARED_LINK_ATTENTION:
> +		efc_log_info(sli4, "event_type: FC shared link event\n");
> +		break;
> +	default:
> +		efc_log_info(sli4, "event_type: unknown\n");
> +		break;
> +	}
> +
> +	switch (link_attn->topology) {
> +	case LINK_ATTN_P2P:
> +		event.topology = SLI_LINK_TOPO_NPORT;
> +		break;
> +	case LINK_ATTN_FC_AL:
> +		event.topology = SLI_LINK_TOPO_LOOP;
> +		break;
> +	case LINK_ATTN_INTERNAL_LOOPBACK:
> +		efc_log_info(sli4, "topology Internal loopback\n");
> +		event.topology = SLI_LINK_TOPO_LOOPBACK_INTERNAL;
> +		break;
> +	case LINK_ATTN_SERDES_LOOPBACK:
> +		efc_log_info(sli4, "topology serdes loopback\n");
> +		event.topology = SLI_LINK_TOPO_LOOPBACK_EXTERNAL;
> +		break;
> +	default:
> +		efc_log_info(sli4, "topology: unknown\n");
> +		break;
> +	}
> +
> +	event.speed = link_attn->port_speed * 1000;
> +
> +	sli4->link(sli4->link_arg, (void *)&event);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Parse an FC work queue CQ entry */
> +int
> +sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
> +		 u8 *cqe, enum sli4_qentry *etype, u16 *r_id)
> +{
> +	u8 code = cqe[SLI4_CQE_CODE_OFFSET];
> +	int rc;
> +
> +	switch (code) {
> +	case SLI4_CQE_CODE_WORK_REQUEST_COMPLETION:
> +	{
> +		struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_WQ;
> +		*r_id = le16_to_cpu(wcqe->request_tag);
> +		rc = wcqe->status;
> +
> +		/* Flag errors except for FCP_RSP_FAILURE */
> +		if (rc && rc != SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE) {
> +			efc_log_info(sli4, "WCQE: status=%#x hw_status=%#x tag=%#x\n",
> +				wcqe->status, wcqe->hw_status,
> +				le16_to_cpu(wcqe->request_tag));
> +			efc_log_info(sli4, "w1=%#x w2=%#x xb=%d\n",
> +				le32_to_cpu(wcqe->wqe_specific_1),
> +				     le32_to_cpu(wcqe->wqe_specific_2),
> +				     (wcqe->flags & SLI4_WCQE_XB));
> +			efc_log_info(sli4, "      %08X %08X %08X %08X\n",
> +				((u32 *)cqe)[0],
> +				     ((u32 *)cqe)[1],
> +				     ((u32 *)cqe)[2],
> +				     ((u32 *)cqe)[3]);
> +		}
> +
> +		break;
> +	}
> +	case SLI4_CQE_CODE_RQ_ASYNC:
> +	{
> +		struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_RQ;
> +		*r_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
> +		rc = rcqe->status;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_RQ_ASYNC_V1:
> +	{
> +		struct sli4_fc_async_rcqe_v1 *rcqe = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_RQ;
> +		*r_id = le16_to_cpu(rcqe->rq_id);
> +		rc = rcqe->status;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD:
> +	{
> +		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_OPT_WRITE_CMD;
> +		*r_id = le16_to_cpu(optcqe->rq_id);
> +		rc = optcqe->status;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA:
> +	{
> +		struct sli4_fc_optimized_write_data_cqe *dcqe = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_OPT_WRITE_DATA;
> +		*r_id = le16_to_cpu(dcqe->xri);
> +		rc = dcqe->status;
> +
> +		/* Flag errors */
> +		if (rc != SLI4_FC_WCQE_STATUS_SUCCESS) {
> +			efc_log_info(sli4, "Optimized DATA CQE: status=%#x\n",
> +				dcqe->status);
> +			efc_log_info(sli4, "hstat=%#x xri=%#x dpl=%#x w3=%#x xb=%d\n",
> +				dcqe->hw_status, le16_to_cpu(dcqe->xri),
> +				le32_to_cpu(dcqe->total_data_placed),
> +				((u32 *)cqe)[3],
> +				(dcqe->flags & SLI4_OCQE_XB));
> +		}
> +		break;
> +	}
> +	case SLI4_CQE_CODE_RQ_COALESCING:
> +	{
> +		struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_RQ;
> +		*r_id = le16_to_cpu(rcqe->rq_id);
> +		rc = rcqe->status;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_XRI_ABORTED:
> +	{
> +		struct sli4_fc_xri_aborted_cqe *xa = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_XABT;
> +		*r_id = le16_to_cpu(xa->xri);
> +		rc = EFC_SUCCESS;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_RELEASE_WQE: {
> +		struct sli4_fc_wqec *wqec = (void *)cqe;
> +
> +		*etype = SLI_QENTRY_WQ_RELEASE;
> +		*r_id = le16_to_cpu(wqec->wq_id);
> +		rc = EFC_SUCCESS;
> +		break;
> +	}
> +	default:
> +		efc_log_info(sli4, "CQE completion code %d not handled\n",
> +			code);
> +		*etype = SLI_QENTRY_MAX;
> +		*r_id = U16_MAX;
> +		rc = -EINVAL;
> +	}
> +
> +	return rc;
> +}
> +
> +u32
> +sli_fc_response_length(struct sli4 *sli4, u8 *cqe)
> +{
> +	struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +
> +	return le32_to_cpu(wcqe->wqe_specific_1);
> +}
> +
> +u32
> +sli_fc_io_length(struct sli4 *sli4, u8 *cqe)
> +{
> +	struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +
> +	return le32_to_cpu(wcqe->wqe_specific_1);
> +}
> +
> +int
> +sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id)
> +{
> +	struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +
> +	*d_id = 0;
> +
> +	if (wcqe->status)
> +		return EFC_FAIL;
> +	*d_id = le32_to_cpu(wcqe->wqe_specific_2) & 0x00ffffff;
> +	return EFC_SUCCESS;
> +}
> +
> +u32
> +sli_fc_ext_status(struct sli4 *sli4, u8 *cqe)
> +{
> +	struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +	u32	mask;
> +
> +	switch (wcqe->status) {
> +	case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
> +		mask = U32_MAX;
> +		break;
> +	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
> +	case SLI4_FC_WCQE_STATUS_CMD_REJECT:
> +		mask = 0xff;
> +		break;
> +	case SLI4_FC_WCQE_STATUS_NPORT_RJT:
> +	case SLI4_FC_WCQE_STATUS_FABRIC_RJT:
> +	case SLI4_FC_WCQE_STATUS_NPORT_BSY:
> +	case SLI4_FC_WCQE_STATUS_FABRIC_BSY:
> +	case SLI4_FC_WCQE_STATUS_LS_RJT:
> +		mask = U32_MAX;
> +		break;
> +	case SLI4_FC_WCQE_STATUS_DI_ERROR:
> +		mask = U32_MAX;
> +		break;
> +	default:
> +		mask = 0;
> +	}
> +
> +	return le32_to_cpu(wcqe->wqe_specific_2) & mask;
> +}
> +
> +/* Retrieve the RQ index from the completion */
> +int
> +sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
> +			  u16 *rq_id, u32 *index)
> +{
> +	struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
> +	struct sli4_fc_async_rcqe_v1 *rcqe_v1 = (void *)cqe;
> +	int rc = EFC_FAIL;
> +	u8 code = 0;
> +	u16 rq_element_index;
> +
> +	*rq_id = 0;
> +	*index = U32_MAX;
> +
> +	code = cqe[SLI4_CQE_CODE_OFFSET];
> +
> +	if (code == SLI4_CQE_CODE_RQ_ASYNC) {
> +		*rq_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
> +		rq_element_index =
> +		le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX;
> +		*index = rq_element_index;
> +		if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
> +			rc = EFC_SUCCESS;
> +		} else {
> +			rc = rcqe->status;
> +			efc_log_info(sli4, "status=%02x (%s) rq_id=%d\n",
> +				rcqe->status,
> +				sli_fc_get_status_string(rcqe->status),
> +				le16_to_cpu(rcqe->fcfi_rq_id_word) &
> +				SLI4_RACQE_RQ_ID);
> +
> +			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
> +				le16_to_cpu(rcqe->data_placement_length),
> +				rcqe->sof_byte, rcqe->eof_byte,
> +				rcqe->hdpl_byte & SLI4_RACQE_HDPL);
> +		}
> +	} else if (code == SLI4_CQE_CODE_RQ_ASYNC_V1) {
> +		*rq_id = le16_to_cpu(rcqe_v1->rq_id);
> +		rq_element_index =
> +			(le16_to_cpu(rcqe_v1->rq_elmt_indx_word) &
> +			 SLI4_RACQE_RQ_EL_INDX);
> +		*index = rq_element_index;
> +		if (rcqe_v1->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
> +			rc = EFC_SUCCESS;
> +		} else {
> +			rc = rcqe_v1->status;
> +			efc_log_info(sli4, "status=%02x (%s) rq_id=%d, index=%x\n",
> +				rcqe_v1->status,
> +				sli_fc_get_status_string(rcqe_v1->status),
> +				le16_to_cpu(rcqe_v1->rq_id), rq_element_index);
> +
> +			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
> +				le16_to_cpu(rcqe_v1->data_placement_length),
> +			rcqe_v1->sof_byte, rcqe_v1->eof_byte,
> +			rcqe_v1->hdpl_byte & SLI4_RACQE_HDPL);
> +		}
> +	} else if (code == SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD) {
> +		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
> +
> +		*rq_id = le16_to_cpu(optcqe->rq_id);
> +		*index = le16_to_cpu(optcqe->w1) & SLI4_OCQE_RQ_EL_INDX;
> +		if (optcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
> +			rc = EFC_SUCCESS;
> +		} else {
> +			rc = optcqe->status;
> +			efc_log_info(sli4, "stat=%02x (%s) rqid=%d, idx=%x pdpl=%x\n",
> +				optcqe->status,
> +				sli_fc_get_status_string(optcqe->status),
> +				le16_to_cpu(optcqe->rq_id), *index,
> +				le16_to_cpu(optcqe->data_placement_length));
> +
> +			efc_log_info(sli4, "hdpl=%x oox=%d agxr=%d xri=0x%x rpi=%x\n",
> +				(optcqe->hdpl_vld & SLI4_OCQE_HDPL),
> +				(optcqe->flags1 & SLI4_OCQE_OOX),
> +				(optcqe->flags1 & SLI4_OCQE_AGXR), optcqe->xri,
> +				le16_to_cpu(optcqe->rpi));
> +		}
> +	} else if (code == SLI4_CQE_CODE_RQ_COALESCING) {
> +		struct sli4_fc_coalescing_rcqe	*rcqe = (void *)cqe;
> +		u16 rq_element_index =
> +				(le16_to_cpu(rcqe->rq_elmt_indx_word) &
> +				 SLI4_RCQE_RQ_EL_INDX);
> +
> +		*rq_id = le16_to_cpu(rcqe->rq_id);
> +		if (rcqe->status == SLI4_FC_COALESCE_RQ_SUCCESS) {
> +			*index = rq_element_index;
> +			rc = EFC_SUCCESS;
> +		} else {
> +			*index = U32_MAX;
> +			rc = rcqe->status;
> +
> +			efc_log_info(sli4, "stat=%02x (%s) rq_id=%d, idx=%x\n",
> +				rcqe->status,
> +				sli_fc_get_status_string(rcqe->status),
> +				le16_to_cpu(rcqe->rq_id), rq_element_index);
> +			efc_log_info(sli4, "rq_id=%#x sdpl=%x\n",
> +				le16_to_cpu(rcqe->rq_id),
> +		    le16_to_cpu(rcqe->sequence_reporting_placement_length));
> +		}
> +	} else {
> +		*index = U32_MAX;
> +
> +		rc = rcqe->status;
> +
> +		efc_log_info(sli4, "status=%02x rq_id=%d, index=%x pdpl=%x\n",
> +			rcqe->status,
> +		le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID,
> +		(le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX),
> +		le16_to_cpu(rcqe->data_placement_length));
> +		efc_log_info(sli4, "sof=%02x eof=%02x hdpl=%x\n",
> +			rcqe->sof_byte, rcqe->eof_byte,
> +			rcqe->hdpl_byte & SLI4_RACQE_HDPL);
> +	}
> +
> +	return rc;
> +}
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 11/31] elx: libefc: SLI and FC PORT state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 11/31] elx: libefc: SLI and FC PORT " James Smart
@ 2020-04-15 15:38   ` Hannes Reinecke
  2020-04-22 23:12     ` James Smart
  2020-04-15 18:04   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 15:38 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - SLI and FC port (aka n_port_id) registration, allocation and
>    deallocation.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Acquire efc->lock in efc_lport_cb to protect all the port state
>      transitions.
>    Add vport_lock to protect vport_list access.
>    Fixed vport_sport allocation race.
>    Reworked vport code.
> ---
>   drivers/scsi/elx/libefc/efc_sport.c | 846 ++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc/efc_sport.h |  52 +++
>   2 files changed, 898 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
>   create mode 100644 drivers/scsi/elx/libefc/efc_sport.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_sport.c b/drivers/scsi/elx/libefc/efc_sport.c
> new file mode 100644
> index 000000000000..99f5213e0902
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sport.c
> @@ -0,0 +1,846 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Details SLI port (sport) functions.
> + */
> +
> +#include "efc.h"
> +
> +/* HW sport callback events from the user driver */
> +int
> +efc_lport_cb(void *arg, int event, void *data)
> +{
> +	struct efc *efc = arg;
> +	struct efc_sli_port *sport = data;
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	unsigned long flags = 0;
> +
> +	switch (event) {
> +	case EFC_HW_PORT_ALLOC_OK:
> +		sm_event = EFC_EVT_SPORT_ALLOC_OK;
> +		break;
> +	case EFC_HW_PORT_ALLOC_FAIL:
> +		sm_event = EFC_EVT_SPORT_ALLOC_FAIL;
> +		break;
> +	case EFC_HW_PORT_ATTACH_OK:
> +		sm_event = EFC_EVT_SPORT_ATTACH_OK;
> +		break;
> +	case EFC_HW_PORT_ATTACH_FAIL:
> +		sm_event = EFC_EVT_SPORT_ATTACH_FAIL;
> +		break;
> +	case EFC_HW_PORT_FREE_OK:
> +		sm_event = EFC_EVT_SPORT_FREE_OK;
> +		break;
> +	case EFC_HW_PORT_FREE_FAIL:
> +		sm_event = EFC_EVT_SPORT_FREE_FAIL;
> +		break;
> +	default:
> +		efc_log_err(efc, "unknown event %#x\n", event);
> +		return EFC_FAIL;
> +	}
> +
> +	efc_log_debug(efc, "sport event: %s\n", efc_sm_event_name(sm_event));
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_sm_post_event(&sport->sm, sm_event, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +struct efc_sli_port *
> +efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		u32 fc_id, bool enable_ini, bool enable_tgt)
> +{
> +	struct efc_sli_port *sport;
> +
> +	if (domain->efc->enable_ini)
> +		enable_ini = 0;
> +
> +	/* Return a failure if this sport has already been allocated */
> +	if (wwpn != 0) {
> +		sport = efc_sport_find_wwn(domain, wwnn, wwpn);
> +		if (sport) {
> +			efc_log_err(domain->efc,
> +				    "Failed: SPORT %016llX %016llX already allocated\n",
> +				    wwnn, wwpn);
> +			return NULL;
> +		}
> +	}
> +
> +	sport = kzalloc(sizeof(*sport), GFP_ATOMIC);
> +	if (!sport)
> +		return sport;
> +
> +	sport->efc = domain->efc;
> +	snprintf(sport->display_name, sizeof(sport->display_name), "------");
> +	sport->domain = domain;
> +	xa_init(&sport->lookup);
> +	sport->instance_index = domain->sport_instance_count++;
> +	INIT_LIST_HEAD(&sport->node_list);
> +	sport->sm.app = sport;
> +	sport->enable_ini = enable_ini;
> +	sport->enable_tgt = enable_tgt;
> +	sport->enable_rscn = (sport->enable_ini ||
> +			(sport->enable_tgt && enable_target_rscn(sport->efc)));
> +
> +	/* Copy service parameters from domain */
> +	memcpy(sport->service_params, domain->service_params,
> +		sizeof(struct fc_els_flogi));
> +
> +	/* Update requested fc_id */
> +	sport->fc_id = fc_id;
> +
> +	/* Update the sport's service parameters for the new wwn's */
> +	sport->wwpn = wwpn;
> +	sport->wwnn = wwnn;
> +	snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
> +
> +	/*
> +	 * if this is the "first" sport of the domain,
> +	 * then make it the "phys" sport
> +	 */
> +	if (list_empty(&domain->sport_list))
> +		domain->sport = sport;
> +
> +	INIT_LIST_HEAD(&sport->list_entry);
> +	list_add_tail(&sport->list_entry, &domain->sport_list);
> +
> +	efc_log_debug(domain->efc, "[%s] allocate sport\n",
> +		      sport->display_name);
> +
> +	return sport;
> +}

This function requires locking, so please add locking annotations to it.
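E.g. something along these lines (untested sketch; this assumes the
caller is expected to hold efc->lock, as the call sites later in the
series suggest):

struct efc_sli_port *
efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
		u32 fc_id, bool enable_ini, bool enable_tgt)
	__must_hold(&domain->efc->lock)
{
	/* complain at runtime (with lockdep) if a caller forgets the lock */
	lockdep_assert_held(&domain->efc->lock);

	/* ... rest of the function unchanged ... */
}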

> +
> +void
> +efc_sport_free(struct efc_sli_port *sport)
> +{
> +	struct efc_domain *domain;
> +
> +	if (!sport)
> +		return;
> +
> +	domain = sport->domain;
> +	efc_log_debug(domain->efc, "[%s] free sport\n", sport->display_name);
> +	list_del(&sport->list_entry);
> +	/*
> +	 * if this is the physical sport,
> +	 * then clear it out of the domain
> +	 */
> +	if (sport == domain->sport)
> +		domain->sport = NULL;
> +
> +	xa_destroy(&sport->lookup);
> +	xa_erase(&domain->lookup, sport->fc_id);
> +
> +	if (list_empty(&domain->sport_list))
> +		efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE,
> +				      NULL);
> +
> +	kfree(sport);
> +}
> +
I would have expected the ports to be reference counted, seeing that 
they are (probably) accessed by structures with vastly different 
lifetime rules.
It would also allow for more dynamic port deletion, as you wouldn't
need to hold the lock across the entire function, only while removing
the port from the list.

Have you considered that?
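
Roughly what I have in mind (untested sketch; the 'ref' member is made
up):

	/* in struct efc_sli_port */
	struct kref ref;

	/* in efc_sport_alloc(), right after the kzalloc() */
	kref_init(&sport->ref);

Lookups and other users would then take a reference for as long as they
hold on to the pointer.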

> +void
> +efc_sport_force_free(struct efc_sli_port *sport)
> +{
> +	struct efc_node *node;
> +	struct efc_node *next;
> +
> +	/* shutdown sm processing */
> +	efc_sm_disable(&sport->sm);
> +
> +	list_for_each_entry_safe(node, next, &sport->node_list, list_entry) {
> +		efc_node_force_free(node);
> +	}
> +
> +	efc_sport_free(sport);
> +}
> +

See? That's what I mean.
You have event processing for that port, and additional nodes attached 
to it. If all of them were properly reference counted, you could do
away with this function ...
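
The existing efc_sport_free() would then simply become the kref release
callback (sketch):

static void efc_sport_release(struct kref *ref)
{
	struct efc_sli_port *sport =
		container_of(ref, struct efc_sli_port, ref);

	efc_sport_free(sport);
}

static inline void efc_sport_put(struct efc_sli_port *sport)
{
	kref_put(&sport->ref, efc_sport_release);
}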

> +/* Find a SLI port object, given an FC_ID */
> +struct efc_sli_port *
> +efc_sport_find(struct efc_domain *domain, u32 d_id)
> +{
> +	return xa_load(&domain->lookup, d_id);
> +}
> +

Does it need to be locked? If so please add locking annotations.
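
If the lookup is meant to hand out a pointer the caller can keep using,
something like this would be needed (sketch, using the kref from above
and assuming domain->lookup is only modified with efc->lock held):

struct efc_sli_port *
efc_sport_find(struct efc_domain *domain, u32 d_id)
{
	struct efc_sli_port *sport;
	unsigned long flags;

	spin_lock_irqsave(&domain->efc->lock, flags);
	sport = xa_load(&domain->lookup, d_id);
	/* only hand out the sport if it is not already being torn down */
	if (sport && !kref_get_unless_zero(&sport->ref))
		sport = NULL;
	spin_unlock_irqrestore(&domain->efc->lock, flags);

	return sport;
}

The caller then owns a reference and drops it with efc_sport_put() when
done.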

> +/* Find a SLI port, given the WWNN and WWPN */
> +struct efc_sli_port *
> +efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
> +{
> +	struct efc_sli_port *sport = NULL;
> +
> +	list_for_each_entry(sport, &domain->sport_list, list_entry) {
> +		if (sport->wwnn == wwnn && sport->wwpn == wwpn)
> +			return sport;
> +	}
> +	return NULL;
> +}
> +

Same here, only this definitely needs locking.
And, actually, reference counting is definitely something I would 
consider here.
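
Same pattern as for efc_sport_find(), just with the list walk itself
under the lock (sketch, again assuming the kref from above):

struct efc_sli_port *
efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
{
	struct efc_sli_port *sport, *found = NULL;
	unsigned long flags;

	spin_lock_irqsave(&domain->efc->lock, flags);
	list_for_each_entry(sport, &domain->sport_list, list_entry) {
		if (sport->wwnn == wwnn && sport->wwpn == wwpn &&
		    kref_get_unless_zero(&sport->ref)) {
			found = sport;
			break;
		}
	}
	spin_unlock_irqrestore(&domain->efc->lock, flags);

	return found;
}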

> +/* External call to request an attach for a sport, given an FC_ID */
> +int
> +efc_sport_attach(struct efc_sli_port *sport, u32 fc_id)
> +{
> +	int rc;
> +	struct efc_node *node;
> +	struct efc *efc = sport->efc;
> +
> +	/* Set our lookup */
> +	rc = xa_err(xa_store(&sport->domain->lookup, fc_id, sport, GFP_ATOMIC));
> +	if (rc) {
> +		efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
> +		return rc;
> +	}
> +
> +	/* Update our display_name */
> +	efc_node_fcid_display(fc_id, sport->display_name,
> +			      sizeof(sport->display_name));
> +
> +	list_for_each_entry(node, &sport->node_list, list_entry) {
> +		efc_node_update_display_name(node);
> +	}
> +
> +	efc_log_debug(sport->efc, "[%s] attach sport: fc_id x%06x\n",
> +		      sport->display_name, fc_id);
> +
> +	rc = efc->tt.hw_port_attach(efc, sport, fc_id);
> +	if (rc != EFC_HW_RTN_SUCCESS) {
> +		efc_log_err(sport->efc,
> +			    "efc_hw_port_attach failed: %d\n", rc);
> +		return EFC_FAIL;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efc_sport_shutdown(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_node *node;
> +	struct efc_node *node_next;
> +
> +	list_for_each_entry_safe(node, node_next,
> +					&sport->node_list, list_entry) {
> +		if (!(node->rnode.fc_id == FC_FID_FLOGI && sport->is_vport)) {
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +			continue;
> +		}
> +
> +		/*
> +		 * If this is a vport, logout of the fabric
> +		 * controller so that it deletes the vport
> +		 * on the switch.
> +		 */
> +		/* if link is down, don't send logo */
> +		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		} else {
> +			efc_log_debug(efc,
> +				      "[%s] sport shutdown vport, sending logo to node\n",
> +				      node->display_name);
> +
> +			if (efc->tt.els_send(efc, node, ELS_LOGO,
> +					     EFC_FC_FLOGI_TIMEOUT_SEC,
> +					EFC_FC_ELS_DEFAULT_RETRIES)) {
> +				/* sent LOGO, wait for response */
> +				efc_node_transition(node,
> +						    __efc_d_wait_logo_rsp,
> +						     NULL);
> +				continue;
> +			}
> +
> +			/*
> +			 * failed to send LOGO,
> +			 * go ahead and cleanup node anyways
> +			 */
> +			node_printf(node, "Failed to send LOGO\n");
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +		}
> +	}
> +}
> +
> +/* Clear the sport reference in the vport specification */
> +static void
> +efc_vport_link_down(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_vport_spec *vport;
> +
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if (vport->sport == sport) {
> +			vport->sport = NULL;
> +			break;
> +		}
> +	}
> +}
> +
> +static void *
> +__efc_sport_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = sport->efc;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		break;
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +			efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN: {
> +		int node_list_empty;
> +
> +		/* Flag this sport as shutting down */
> +		sport->shutting_down = true;
> +
> +		if (sport->is_vport)
> +			efc_vport_link_down(sport);
> +
> +		node_list_empty = list_empty(&sport->node_list);
> +
> +		if (node_list_empty) {
> +			/* Remove the sport from the domain's lookup table */
> +			xa_erase(&domain->lookup, sport->fc_id);
> +			efc_sm_transition(ctx, __efc_sport_wait_port_free,
> +					  NULL);
> +			if (efc->tt.hw_port_free(efc, sport)) {
> +				efc_log_test(sport->efc,
> +					     "efc_hw_port_free failed\n");
> +				/* Not much we can do, free the sport anyways */
> +				efc_sport_free(sport);
> +			}
> +		} else {
> +			/* sm: node list is not empty / shutdown nodes */
> +			efc_sm_transition(ctx,
> +					  __efc_sport_wait_shutdown, NULL);
> +			efc_sport_shutdown(sport);
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_test(sport->efc, "[%s] %-20s %-20s not handled\n",
> +			     sport->display_name, funcname,
> +			     efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Physical sport allocated */
> +void *
> +__efc_sport_allocated(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	/* the physical sport is attached */
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		WARN_ON(sport != domain->sport);
> +		efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +		/* ignore */
> +		break;
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Handle initial virtual port events */
> +void *
> +__efc_sport_vport_init(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		__be64 be_wwpn = cpu_to_be64(sport->wwpn);
> +
> +		if (sport->wwpn == 0)
> +			efc_log_debug(efc, "vport: letting f/w select WWN\n");
> +
> +		if (sport->fc_id != U32_MAX) {
> +			efc_log_debug(efc, "vport: hard coding port id: %x\n",
> +				      sport->fc_id);
> +		}
> +
> +		efc_sm_transition(ctx, __efc_sport_vport_wait_alloc, NULL);
> +		/* If wwpn is zero, then we'll let the f/w */
> +		if (efc->tt.hw_port_alloc(efc, sport, sport->domain,
> +					  sport->wwpn == 0 ? NULL :
> +					  (uint8_t *)&be_wwpn)) {
> +			efc_log_err(efc, "Can't allocate port\n");
> +			break;
> +		}
> +
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * SLI port state machine:
> + * Wait for the HW SLI port allocation to complete
> + */
> +void *
> +__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ALLOC_OK: {
> +		struct fc_els_flogi *sp;
> +		struct efc_node *fabric;
> +
> +		sp = (struct fc_els_flogi *)sport->service_params;
> +		/*
> +		 * If we let f/w assign wwn's,
> +		 * then sport wwn's with those returned by hw
> +		 */
> +		if (sport->wwnn == 0) {
> +			sport->wwnn = be64_to_cpu(sport->sli_wwnn);
> +			sport->wwpn = be64_to_cpu(sport->sli_wwpn);
> +			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
> +				 "%016llX", sport->wwpn);
> +		}
> +
> +		/* Update the sport's service parameters */
> +		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
> +		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
> +
> +		/*
> +		 * if sport->fc_id is uninitialized,
> +		 * then request that the fabric node use FDISC
> +		 * to find an fc_id.
> +		 * Otherwise we're restoring vports, or we're in
> +		 * fabric emulation mode, so attach the fc_id
> +		 */
> +		if (sport->fc_id == U32_MAX) {
> +			fabric = efc_node_alloc(sport, FC_FID_FLOGI, false,
> +						false);
> +			if (!fabric) {
> +				efc_log_err(efc, "efc_node_alloc() failed\n");
> +				return NULL;
> +			}
> +			efc_node_transition(fabric, __efc_vport_fabric_init,
> +					    NULL);
> +		} else {
> +			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
> +				 "%016llX", sport->wwpn);
> +			efc_sport_attach(sport, sport->fc_id);
> +		}
> +		efc_sm_transition(ctx, __efc_sport_vport_allocated, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * SLI port state machine: virtual sport allocated.
> + *
> + * This state is entered after the sport is allocated;
> + * it then waits for a fabric node
> + * FDISC to complete, which requests a sport attach.
> + * The sport attach complete is handled in this state.
> + */
> +
> +void *
> +__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ATTACH_OK: {
> +		struct efc_node *node;
> +
> +		/* Find our fabric node, and forward this event */
> +		node = efc_node_find(sport, FC_FID_FLOGI);
> +		if (!node) {
> +			efc_log_test(efc, "can't find node %06x\n",
> +				     FC_FID_FLOGI);
> +			break;
> +		}
> +		/* sm: / forward sport attach to fabric node */
> +		efc_node_post_event(node, evt, NULL);
> +		efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +static void
> +efc_vport_update_spec(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_vport_spec *vport;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if (vport->sport == sport) {
> +			vport->wwnn = sport->wwnn;
> +			vport->wwpn = sport->wwpn;
> +			vport->tgt_data = sport->tgt_data;
> +			vport->ini_data = sport->ini_data;
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +}
> +
> +/* State entered after the sport attach has completed */
> +void *
> +__efc_sport_attached(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		struct efc_node *node;
> +
> +		efc_log_debug(efc,
> +			      "[%s] SPORT attached WWPN %016llX WWNN %016llX\n",
> +			      sport->display_name,
> +			      sport->wwpn, sport->wwnn);
> +
> +		list_for_each_entry(node, &sport->node_list, list_entry) {
> +			efc_node_update_display_name(node);
> +		}
> +
> +		sport->tgt_id = sport->fc_id;
> +
> +		efc->tt.new_sport(efc, sport);
> +
> +		/*
> +		 * Update the vport (if its not the physical sport)
> +		 * parameters
> +		 */
> +		if (sport->is_vport)
> +			efc_vport_update_spec(sport);
> +		break;
> +	}
> +
> +	case EFC_EVT_EXIT:
> +		efc_log_debug(efc,
> +			      "[%s] SPORT detached WWPN %016llX WWNN %016llX\n",
> +			      sport->display_name,
> +			      sport->wwpn, sport->wwnn);
> +
> +		efc->tt.del_sport(efc, sport);
> +		break;
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +
> +/* SLI port state machine: Wait for the node shutdowns to complete */
> +void *
> +__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +	case EFC_EVT_SPORT_ALLOC_FAIL:
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +	case EFC_EVT_SPORT_ATTACH_FAIL:
> +		/* ignore these events - just wait for the all free event */
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE: {
> +		/*
> +		 * Remove the sport from the domain's
> +		 * sparse vector lookup table
> +		 */
> +		xa_erase(&domain->lookup, sport->fc_id);
> +		efc_sm_transition(ctx, __efc_sport_wait_port_free, NULL);
> +		if (efc->tt.hw_port_free(efc, sport)) {
> +			efc_log_err(sport->efc, "efc_hw_port_free failed\n");
> +			/* Not much we can do, free the sport anyways */
> +			efc_sport_free(sport);
> +		}
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Wait for the HW's port free to complete */
> +void *
> +__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		/* Ignore as we are waiting for the free CB */
> +		break;
> +	case EFC_EVT_SPORT_FREE_OK: {
> +		/* All done, free myself */
> +		/* sm: / efc_sport_free */
> +		efc_sport_free(sport);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +static int
> +efc_vport_sport_alloc(struct efc_domain *domain, struct efc_vport_spec *vport)
> +{
> +	struct efc_sli_port *sport;
> +
> +	sport = efc_sport_alloc(domain, vport->wwpn,
> +				vport->wwnn, vport->fc_id,
> +				vport->enable_ini,
> +				vport->enable_tgt);
> +	vport->sport = sport;
> +	if (!sport)
> +		return EFC_FAIL;
> +
> +	sport->is_vport = true;
> +	sport->tgt_data = vport->tgt_data;
> +	sport->ini_data = vport->ini_data;
> +
> +	efc_sm_transition(&sport->sm, __efc_sport_vport_init, NULL);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Use the vport specification to find the associated vports and start them */
> +int
> +efc_vport_start(struct efc_domain *domain)
> +{
> +	struct efc *efc = domain->efc;
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	int rc = EFC_SUCCESS;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		if (!vport->sport) {
> +			if (efc_vport_sport_alloc(domain, vport))
> +				rc = EFC_FAIL;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +
> +	return rc;
> +}
> +
> +/* Allocate a new virtual SLI port */
> +int
> +efc_sport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
> +		    void *ini_data)
> +{
> +	struct efc *efc = domain->efc;
> +	struct efc_vport_spec *vport;
> +	int rc = EFC_SUCCESS;
> +	unsigned long flags = 0;
> +
> +	if (ini && domain->efc->enable_ini == 0) {
> +		efc_log_debug(efc,
> +			     "driver initiator functionality not enabled\n");
> +		return EFC_FAIL;
> +	}
> +
> +	if (tgt && domain->efc->enable_tgt == 0) {
> +		efc_log_debug(efc,
> +			     "driver target functionality not enabled\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * Create a vport spec if we need to recreate
> +	 * this vport after a link up event
> +	 */
> +	vport = efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id, ini, tgt,
> +					tgt_data, ini_data);
> +	if (!vport) {
> +		efc_log_err(efc, "failed to create vport object entry\n");
> +		return EFC_FAIL;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	rc = efc_vport_sport_alloc(domain, vport);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return rc;
> +}
> +
> +/* Remove a previously-allocated virtual port */
> +int
> +efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
> +		    u64 wwpn, uint64_t wwnn)
> +{
> +	struct efc_sli_port *sport;
> +	int found = 0;
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	/* walk the efc_vport_list and remove from there */
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
> +			list_del(&vport->list_entry);
> +			kfree(vport);
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +
> +	if (!domain) {
> +		/* No domain means no sport to look for */
> +		return EFC_SUCCESS;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	list_for_each_entry(sport, &domain->sport_list, list_entry) {
> +		if (sport->wwpn == wwpn && sport->wwnn == wwnn) {
> +			found = 1;
> +			break;
> +		}
> +	}
> +
> +	if (found) {
> +		/* Shutdown this SPORT */
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
> +	}
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Force free all saved vports */
> +void
> +efc_vport_del_all(struct efc *efc)
> +{
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		list_del(&vport->list_entry);
> +		kfree(vport);
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +}
> +
> +/**
> + * Create a saved vport entry.
> + *
> + * A saved vport entry is added to the vport list,
> + * which is restored following a link up.
> + * This function is used to allow vports to be created the first time
> + * the link comes up without having to go through the ioctl() API.
> + */
> +
> +struct efc_vport_spec *
> +efc_vport_create_spec(struct efc *efc, uint64_t wwnn, uint64_t wwpn,
> +		      u32 fc_id, bool enable_ini,
> +		      bool enable_tgt, void *tgt_data, void *ini_data)
> +{
> +	struct efc_vport_spec *vport;
> +	unsigned long flags = 0;
> +
> +	/*
> +	 * walk the efc_vport_list and return failure
> +	 * if a valid(vport with non zero WWPN and WWNN) vport entry
> +	 * is already created
> +	 */
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if ((wwpn && vport->wwpn == wwpn) &&
> +		    (wwnn && vport->wwnn == wwnn)) {
> +			efc_log_err(efc,
> +				"Failed: VPORT %016llX %016llX already allocated\n",
> +				wwnn, wwpn);
> +			spin_unlock_irqrestore(&efc->vport_lock, flags);
> +			return NULL;
> +		}
> +	}
> +
> +	vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
> +	if (!vport) {
> +		spin_unlock_irqrestore(&efc->vport_lock, flags);
> +		return NULL;
> +	}
> +
> +	vport->wwnn = wwnn;
> +	vport->wwpn = wwpn;
> +	vport->fc_id = fc_id;
> +	vport->enable_tgt = enable_tgt;
> +	vport->enable_ini = enable_ini;
> +	vport->tgt_data = tgt_data;
> +	vport->ini_data = ini_data;
> +
> +	INIT_LIST_HEAD(&vport->list_entry);
> +	list_add_tail(&vport->list_entry, &efc->vport_list);
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +	return vport;

As mentioned: please add locking annotations to make it clear which 
functions require locking.

And I still suspect that reference counting the ports would be the
better idea; I can't really see how you would otherwise ensure that a
port stays valid while it's being used by temporary structures like FC
commands.
The only _safe_ way would be to always access the ports under a lock, 
but that would lead to heavy contention.
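
With refcounted ports the users would only take the lock for the lookup
itself, e.g. (sketch; efc_handle_frame is a made-up caller):

static void efc_handle_frame(struct efc_domain *domain, u32 d_id)
{
	struct efc_sli_port *sport;

	sport = efc_sport_find(domain, d_id);	/* returns with a reference */
	if (!sport)
		return;

	/* the sport cannot go away underneath us, and no lock is held here */
	efc_log_debug(domain->efc, "[%s] handling frame\n",
		      sport->display_name);

	efc_sport_put(sport);
}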

But I'll check the remaining patches.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 12/31] elx: libefc: Remote node " James Smart
@ 2020-04-15 15:51   ` Hannes Reinecke
  2020-04-23  1:35     ` James Smart
  2020-04-15 18:19   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-15 15:51 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Remote node (aka remote port) allocation, initialization and
>    destroy routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Changed node pool creation. Use mempool for node structure and allocate
>      dma mem when required.
>    Added functions efc_node_handle_implicit_logo() and
>      efc_node_handle_explicit_logo() for better indentation.
>    Replace efc_assert with WARN_ON.
>    Use linux xarray api for lookup instead of sparse vectors.
>    Use defined return values.
> ---
>   drivers/scsi/elx/libefc/efc_node.c | 1196 ++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc/efc_node.h |  183 ++++++
>   2 files changed, 1379 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc_node.c
>   create mode 100644 drivers/scsi/elx/libefc/efc_node.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
> new file mode 100644
> index 000000000000..e8fd631f1793
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.c
> @@ -0,0 +1,1196 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efc.h"
> +
> +/* HW node callback events from the user driver */
> +int
> +efc_remote_node_cb(void *arg, int event,
> +		   void *data)
> +{
> +	struct efc *efc = arg;
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc_remote_node *rnode = data;
> +	struct efc_node *node = rnode->node;
> +	unsigned long flags = 0;
> +
> +	switch (event) {
> +	case EFC_HW_NODE_ATTACH_OK:
> +		sm_event = EFC_EVT_NODE_ATTACH_OK;
> +		break;
> +
> +	case EFC_HW_NODE_ATTACH_FAIL:
> +		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_OK:
> +		sm_event = EFC_EVT_NODE_FREE_OK;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_FAIL:
> +		sm_event = EFC_EVT_NODE_FREE_FAIL;
> +		break;
> +
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", event);
> +		return EFC_FAIL;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, sm_event, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Find an FC node structure given the FC port ID */
> +struct efc_node *
> +efc_node_find(struct efc_sli_port *sport, u32 port_id)
> +{
> +	return xa_load(&sport->lookup, port_id);
> +}
> +
> +struct efc_node *efc_node_alloc(struct efc_sli_port *sport,
> +				  u32 port_id, bool init, bool targ)
> +{
> +	int rc;
> +	struct efc_node *node = NULL;
> +	struct efc *efc = sport->efc;
> +	struct efc_dma *dma;
> +
> +	if (sport->shutting_down) {
> +		efc_log_debug(efc, "node allocation when shutting down %06x",
> +			      port_id);
> +		return NULL;
> +	}
> +
> +	node = mempool_alloc(efc->node_pool, GFP_ATOMIC);
> +	if (!node) {
> +		efc_log_err(efc, "node allocation failed %06x", port_id);
> +		return NULL;
> +	}
> +	memset(node, 0, sizeof(*node));
> +
> +	dma = &node->sparm_dma_buf;
> +	dma->size = NODE_SPARAMS_SIZE;
> +	dma->virt = dma_pool_zalloc(efc->node_dma_pool, GFP_ATOMIC, &dma->phys);
> +	if (!dma->virt) {
> +		efc_log_err(efc, "node dma alloc failed\n");
> +		goto dma_fail;
> +	}
> +	node->rnode.indicator = U32_MAX;
> +	node->sport = sport;
> +	INIT_LIST_HEAD(&node->list_entry);
> +	list_add_tail(&node->list_entry, &sport->node_list);
> +
> +	node->efc = efc;
> +	node->init = init;
> +	node->targ = targ;
> +
> +	spin_lock_init(&node->pend_frames_lock);
> +	INIT_LIST_HEAD(&node->pend_frames);
> +	spin_lock_init(&node->active_ios_lock);
> +	INIT_LIST_HEAD(&node->active_ios);
> +	INIT_LIST_HEAD(&node->els_io_pend_list);
> +	INIT_LIST_HEAD(&node->els_io_active_list);
> +	efc->tt.scsi_io_alloc_enable(efc, node);
> +
> +	rc = efc->tt.hw_node_alloc(efc, &node->rnode, port_id, sport);
> +	if (rc) {
> +		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
> +		goto hw_alloc_fail;
> +	}
> +
> +	node->rnode.node = node;
> +	node->sm.app = node;
> +	node->evtdepth = 0;
> +
> +	efc_node_update_display_name(node);
> +
> +	rc = xa_err(xa_store(&sport->lookup, port_id, node, GFP_ATOMIC));
> +	if (rc) {
> +		efc_log_err(efc, "Node lookup store failed: %d\n", rc);
> +		goto xa_fail;
> +	}
> +
> +	return node;
> +
> +xa_fail:
> +	efc->tt.hw_node_free_resources(efc, &node->rnode);
> +hw_alloc_fail:
> +	list_del(&node->list_entry);
> +	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
> +dma_fail:
> +	mempool_free(node, efc->node_pool);
> +	return NULL;
> +}
> +
> +void
> +efc_node_free(struct efc_node *node)
> +{
> +	struct efc_sli_port *sport;
> +	struct efc *efc;
> +	int rc = 0;
> +	struct efc_node *ns = NULL;
> +	struct efc_dma *dma;
> +
> +	sport = node->sport;
> +	efc = node->efc;
> +
> +	node_printf(node, "Free'd\n");
> +
> +	if (node->refound) {
> +		/*
> +		 * Save the name server node. We will send fake RSCN event at
> +		 * the end to handle ignored RSCN event during node deletion
> +		 */
> +		ns = efc_node_find(node->sport, FC_FID_DIR_SERV);
> +	}
> +
> +	list_del(&node->list_entry);
> +
> +	/* Free HW resources */
> +	rc = efc->tt.hw_node_free_resources(efc, &node->rnode);
> +	if (EFC_HW_RTN_IS_ERROR(rc))
> +		efc_log_test(efc, "efc_hw_node_free failed: %d\n", rc);
> +
> +	/* if the gidpt_delay_timer is still running, then delete it */
> +	if (timer_pending(&node->gidpt_delay_timer))
> +		del_timer(&node->gidpt_delay_timer);
> +
> +	xa_erase(&sport->lookup, node->rnode.fc_id);
> +
> +	/*
> +	 * If the node_list is empty,
> +	 * then post an ALL_CHILD_NODES_FREE event to the sport,
> +	 * after the lock is released.
> +	 * The sport may be free'd as a result of the event.
> +	 */
> +	if (list_empty(&sport->node_list))
> +		efc_sm_post_event(&sport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
> +				  NULL);
> +
> +	node->sport = NULL;
> +	node->sm.current_state = NULL;
> +
> +	dma = &node->sparm_dma_buf;
> +	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +	mempool_free(node, efc->node_pool);
> +
> +	if (ns) {
> +		/* sending fake RSCN event to name server node */
> +		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
> +	}
> +}
> +
> +void
> +efc_node_force_free(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	/* shutdown sm processing */
> +	efc_sm_disable(&node->sm);
> +
> +	strncpy(node->prev_state_name, node->current_state_name,
> +		sizeof(node->prev_state_name));
> +	strncpy(node->current_state_name, "disabled",
> +		sizeof(node->current_state_name));
> +
> +	efc->tt.node_io_cleanup(efc, node, true);
> +	efc->tt.node_els_cleanup(efc, node, true);
> +
> +	/* manually purge pending frames (if any) */
> +	efc->tt.node_purge_pending(efc, node);
> +
> +	efc_node_free(node);
> +}
> +
> +static void
> +efc_dma_copy_in(struct efc_dma *dma, void *buffer, u32 buffer_length)
> +{
> +	if (!dma || !buffer || !buffer_length)
> +		return;
> +
> +	if (buffer_length > dma->size)
> +		buffer_length = dma->size;
> +
> +	memcpy(dma->virt, buffer, buffer_length);
> +	dma->len = buffer_length;
> +}
> +
> +int
> +efc_node_attach(struct efc_node *node)
> +{
> +	int rc = 0;
> +	struct efc_sli_port *sport = node->sport;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = node->efc;
> +
> +	if (!domain->attached) {
> +		efc_log_err(efc,
> +			     "Warning: unattached domain\n");
> +		return EFC_FAIL;
> +	}
> +	/* Update node->wwpn/wwnn */
> +
> +	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
> +				efc_node_get_wwpn(node));
> +	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
> +				efc_node_get_wwnn(node));
> +
> +	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
> +			sizeof(node->service_params) - 4);
> +
> +	/* take lock to protect node->rnode.attached */
> +	rc = efc->tt.hw_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
> +	if (EFC_HW_RTN_IS_ERROR(rc))
> +		efc_log_test(efc, "efc_hw_node_attach failed: %d\n", rc);
> +
> +	return rc;
> +}
> +
> +void
> +efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
> +{
> +	switch (fc_id) {
> +	case FC_FID_FLOGI:
> +		snprintf(buffer, buffer_length, "fabric");
> +		break;
> +	case FC_FID_FCTRL:
> +		snprintf(buffer, buffer_length, "fabctl");
> +		break;
> +	case FC_FID_DIR_SERV:
> +		snprintf(buffer, buffer_length, "nserve");
> +		break;
> +	default:
> +		if (fc_id == FC_FID_DOM_MGR) {
> +			snprintf(buffer, buffer_length, "dctl%02x",
> +				 (fc_id & 0x0000ff));
> +		} else {
> +			snprintf(buffer, buffer_length, "%06x", fc_id);
> +		}
> +		break;
> +	}
> +}
> +
> +void
> +efc_node_update_display_name(struct efc_node *node)
> +{
> +	u32 port_id = node->rnode.fc_id;
> +	struct efc_sli_port *sport = node->sport;
> +	char portid_display[16];
> +
> +	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
> +
> +	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
> +		 sport->display_name, portid_display);
> +}
> +
> +void
> +efc_node_send_ls_io_cleanup(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +
> +	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
> +		efc_log_debug(efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
> +			      node->display_name, node->ls_acc_oxid);
> +
> +		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +		node->ls_acc_io = NULL;
> +	}
> +}
> +
> +/* currently, only case for implicit logo is PLOGI
> + * recvd. Thus, node's ELS IO pending list won't be
> + * empty (PLOGI will be on it)
> + */
> +static void efc_node_handle_implicit_logo(struct efc_node *node)
> +{
> +	int rc;
> +	struct efc *efc = node->efc;
> +
> +	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
> +	node_printf(node, "Reason: implicit logout, re-authenticate\n");
> +
> +	efc->tt.scsi_io_alloc_enable(efc, node);
> +
> +	/* Re-attach node with the same HW node resources */
> +	node->req_free = false;
> +	rc = efc_node_attach(node);
> +	efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +
> +	if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +		efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +}
> +
> +static void efc_node_handle_explicit_logo(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	s8 pend_frames_empty;
> +	struct list_head *list;
> +	unsigned long flags = 0;
> +
> +	/* cleanup any pending LS_ACC ELSs */
> +	efc_node_send_ls_io_cleanup(node);
> +	list = &node->els_io_pend_list;
> +	WARN_ON(!efc_els_io_list_empty(node, list));
> +
> +	spin_lock_irqsave(&node->pend_frames_lock, flags);
> +	pend_frames_empty = list_empty(&node->pend_frames);
> +	spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +	/*
> +	 * there are two scenarios where we want to keep
> +	 * this node alive:
> +	 * 1. there are pending frames that need to be
> +	 *    processed or
> +	 * 2. we're an initiator and the remote node is
> +	 *    a target and we need to re-authenticate
> +	 */
> +	node_printf(node, "Shutdown: explicit logo pend=%d ",
> +			  !pend_frames_empty);
> +	node_printf(node, "sport.ini=%d node.tgt=%d\n",
> +			  node->sport->enable_ini, node->targ);
> +	if (!pend_frames_empty || (node->sport->enable_ini && node->targ)) {
> +		u8 send_plogi = false;
> +
> +		if (node->sport->enable_ini && node->targ) {
> +			/*
> +			 * we're an initiator and
> +			 * node shutting down is a target;
> +			 * we'll need to re-authenticate in
> +			 * initial state
> +			 */
> +			send_plogi = true;
> +		}
> +
> +		/*
> +		 * transition to __efc_d_init
> +		 * (will retain HW node resources)
> +		 */
> +		efc->tt.scsi_io_alloc_enable(efc, node);
> +		node->req_free = false;
> +
> +		/*
> +		 * either pending frames exist,
> +		 * or we're re-authenticating with PLOGI
> +		 * (or both); in either case,
> +		 * return to initial state
> +		 */
> +		efc_node_init_device(node, send_plogi);
> +	}
> +	/* else: let node shutdown occur */
> +}
> +
> +void *
> +__efc_node_shutdown(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		efc_node_hold_frames(node);
> +		WARN_ON(!efc_node_active_ios_empty(node));
> +		WARN_ON(!efc_els_io_list_empty(node,
> +						&node->els_io_active_list));
> +		/* by default, we will be freeing node after we unwind */
> +		node->req_free = true;
> +
> +		switch (node->shutdown_reason) {
> +		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
> +			/* Node shutdown b/c of PLOGI received when node
> +			 * already logged in. We have PLOGI service
> +			 * parameters, so submit node attach; we won't be
> +			 * freeing this node
> +			 */
> +
> +			efc_node_handle_implicit_logo(node);
> +			break;
> +
> +		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO:
> +			efc_node_handle_explicit_logo(node);
> +			break;
> +
> +		case EFC_NODE_SHUTDOWN_DEFAULT:
> +		default: {
> +			struct list_head *list;
> +
> +			/*
> +			 * shutdown due to link down,
> +			 * node going away (xport event) or
> +			 * sport shutdown, purge pending and
> +			 * proceed to cleanup node
> +			 */
> +
> +			/* cleanup any pending LS_ACC ELSs */
> +			efc_node_send_ls_io_cleanup(node);
> +			list = &node->els_io_pend_list;
> +			WARN_ON(!efc_els_io_list_empty(node, list));
> +
> +			node_printf(node,
> +				    "Shutdown reason: default, purge pending\n");
> +			efc->tt.node_purge_pending(efc, node);
> +			break;
> +		}
> +		}
> +
> +		break;
> +	}
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static bool
> +efc_node_check_els_quiesced(struct efc_node *node)
> +{
> +	/* check to see if ELS requests, completions are quiesced */
> +	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
> +	    efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +		if (!node->attached) {
> +			/* hw node detach already completed, proceed */
> +			node_printf(node, "HW node not attached\n");
> +			efc_node_transition(node,
> +					    __efc_node_wait_ios_shutdown,
> +					     NULL);
> +		} else {
> +			/*
> +			 * hw node detach hasn't completed,
> +			 * transition and wait
> +			 */
> +			node_printf(node, "HW node still attached\n");
> +			efc_node_transition(node, __efc_node_wait_node_free,
> +					    NULL);
> +		}
> +		return true;
> +	}
> +	return false;
> +}
> +
> +void
> +efc_node_initiate_cleanup(struct efc_node *node)
> +{
> +	struct efc *efc;
> +
> +	efc = node->efc;
> +	efc->tt.node_els_cleanup(efc, node, false);
> +
> +	/*
> +	 * if ELS's have already been quiesced, will move to next state
> +	 * if ELS's have not been quiesced, abort them
> +	 */
> +	if (!efc_node_check_els_quiesced(node)) {
> +		/*
> +		 * Abort all ELS's since ELS's won't be aborted by HW
> +		 * node free.
> +		 */
> +		efc_node_hold_frames(node);
> +		efc->tt.node_abort_all_els(efc, node);
> +		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
> +	}
> +}
> +
> +/* Node state machine: Wait for all ELSs to complete */
> +void *
> +__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	bool check_quiesce = false;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			node_printf(node, "All ELS IOs complete\n");
> +			check_quiesce = true;
> +		}
> +		break;
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		if (WARN_ON(!node->els_req_cnt))
> +			break;
> +		node->els_req_cnt--;
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		if (WARN_ON(!node->els_cmpl_cnt))
> +			break;
> +		node->els_cmpl_cnt--;
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* all ELS IO's complete */
> +		node_printf(node, "All ELS IOs complete\n");
> +		WARN_ON(!efc_els_io_list_empty(node,
> +					 &node->els_io_active_list));
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	if (check_quiesce)
> +		efc_node_check_els_quiesced(node);
> +
> +	return NULL;
> +}
> +
> +/* Node state machine: Wait for a HW node free event to complete */
> +void *
> +__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_FREE_OK:
> +		/* node is officially no longer attached */
> +		node->attached = false;
> +		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		/* As IOs and ELS IOs complete, we expect to get these events */
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* Fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * State is entered when a node receives a shutdown event, and it's waiting
> + * for all the active IOs and ELS IOs associated with the node to complete.
> + */
> +void *
> +__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +
> +		/* first check to see if no ELS IOs are outstanding */
> +		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			/* If there are any active IOs, free them. */
> +			efc_node_transition(node, __efc_node_shutdown, NULL);
> +		}
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		if (efc_node_active_ios_empty(node) &&
> +		    efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			efc_node_transition(node, __efc_node_shutdown, NULL);
> +		}
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IOs complete */
> +		if (WARN_ON(!node->els_req_cnt))
> +			break;
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
> +			      efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +	struct efc *efc = NULL;
> +	struct efc_node_cb *cbdata = arg;
> +
> +	node = ctx->app;
> +	efc = node->efc;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
> +	case EFC_EVT_NODE_MISSING:
> +	case EFC_EVT_FCP_CMD_RCVD:
> +		break;
> +
> +	case EFC_EVT_NODE_REFOUND:
> +		node->refound = true;
> +		break;
> +
> +	/*
> +	 * node->attached must be set appropriately
> +	 * for all node attach/detach events
> +	 */
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		break;
> +
> +	case EFC_EVT_NODE_FREE_OK:
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		node->attached = false;
> +		break;
> +
> +	/*
> +	 * handle any ELS completions that
> +	 * other states either didn't care about
> +	 * or forgot about
> +	 */
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		if (WARN_ON(!node->els_cmpl_cnt))
> +			break;
> +		node->els_cmpl_cnt--;
> +		break;
> +
> +	/*
> +	 * handle any ELS request completions that
> +	 * other states either didn't care about
> +	 * or forgot about
> +	 */
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		if (WARN_ON(!node->els_req_cnt))
> +			break;
> +		node->els_req_cnt--;
> +		break;
> +
> +	case EFC_EVT_ELS_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/*
> +		 * Unsupported ELS was received,
> +		 * send LS_RJT, command not supported
> +		 */
> +		efc_log_debug(efc,
> +			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
> +			      node->display_name, funcname,
> +			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
> +
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
> +		break;
> +	}
> +
> +	case EFC_EVT_PLOGI_RCVD:
> +	case EFC_EVT_FLOGI_RCVD:
> +	case EFC_EVT_LOGO_RCVD:
> +	case EFC_EVT_PRLI_RCVD:
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_FDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD:
> +	case EFC_EVT_SCR_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/* sm: / send ELS_RJT */
> +		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
> +			      node->display_name, funcname,
> +			      efc_sm_event_name(evt));
> +		/* if we didn't catch this in a state, send generic LS_RJT */
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +						ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
> +
> +		break;
> +	}
> +	case EFC_EVT_ABTS_RCVD: {
> +		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
> +			      node->display_name, funcname,
> +			      efc_sm_event_name(evt));
> +
> +		/* sm: / send BA_ACC */
> +		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
> +		break;
> +	}
> +
> +	default:
> +		efc_log_test(node->efc, "[%s] %-20s %-20s not handled\n",
> +			     node->display_name, funcname,
> +			     efc_sm_event_name(evt));
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_save_sparms(struct efc_node *node, void *payload)
> +{
> +	memcpy(node->service_params, payload, sizeof(node->service_params));
> +}
> +
> +void
> +efc_node_post_event(struct efc_node *node,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	bool free_node = false;
> +
> +	node->evtdepth++;
> +
> +	efc_sm_post_event(&node->sm, evt, arg);
> +
> +	/* If our event call depth is one and
> +	 * we're not holding frames
> +	 * then we can dispatch any pending frames.
> +	 * We don't want to allow the efc_process_node_pending()
> +	 * call to recurse.
> +	 */
> +	if (!node->hold_frames && node->evtdepth == 1)
> +		efc_process_node_pending(node);
> +
> +	node->evtdepth--;
> +
> +	/*
> +	 * Free the node object if so requested,
> +	 * and we're at an event call depth of zero
> +	 */
> +	if (node->evtdepth == 0 && node->req_free)
> +		free_node = true;
> +
> +	if (free_node)
> +		efc_node_free(node);
> +}
> +
> +void
> +efc_node_transition(struct efc_node *node,
> +		    void *(*state)(struct efc_sm_ctx *,
> +				   enum efc_sm_event, void *), void *data)
> +{
> +	struct efc_sm_ctx *ctx = &node->sm;
> +
> +	if (ctx->current_state == state) {
> +		efc_node_post_event(node, EFC_EVT_REENTER, data);
> +	} else {
> +		efc_node_post_event(node, EFC_EVT_EXIT, data);
> +		ctx->current_state = state;
> +		efc_node_post_event(node, EFC_EVT_ENTER, data);
> +	}
> +}
> +
> +void
> +efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name)
> +{
> +	memset(buffer, 0, buffer_len);
> +
> +	snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
> +}
> +
> +uint64_t
> +efc_node_get_wwpn(struct efc_node *node)
> +{
> +	struct fc_els_flogi *sp =
> +			(struct fc_els_flogi *)node->service_params;
> +
> +	return be64_to_cpu(sp->fl_wwpn);
> +}
> +
> +uint64_t
> +efc_node_get_wwnn(struct efc_node *node)
> +{
> +	struct fc_els_flogi *sp =
> +			(struct fc_els_flogi *)node->service_params;
> +
> +	return be64_to_cpu(sp->fl_wwnn);
> +}
> +
> +int
> +efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
> +		uint8_t cmd, void *(*efc_node_common_func)(const char *,
> +				struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		const char *funcname)
> +{
> +	return 0;
> +}
> +
> +int
> +efc_node_check_ns_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
> +		uint16_t cmd, void *(*efc_node_common_func)(const char *,
> +				struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		const char *funcname)
> +{
> +	return 0;
> +}
> +
> +int
> +efc_node_active_ios_empty(struct efc_node *node)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	empty = list_empty(&node->active_ios);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +int
> +efc_els_io_list_empty(struct efc_node *node, struct list_head *list)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	empty = list_empty(list);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +void
> +efc_node_pause(struct efc_node *node,
> +	       void *(*state)(struct efc_sm_ctx *,
> +			      enum efc_sm_event, void *))
> +
> +{
> +	node->nodedb_state = state;
> +	efc_node_transition(node, __efc_node_paused, NULL);
> +}
> +
> +/**
> + * This state is entered when a node is "paused". When resumed, the node
> + * is transitioned to a previously saved state (node->nodedb_state).
> + */
> +void *
> +__efc_node_paused(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		node_printf(node, "Paused\n");
> +		break;
> +
> +	case EFC_EVT_RESUME: {
> +		void *(*pf)(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +
> +		pf = node->nodedb_state;
> +
> +		node->nodedb_state = NULL;
> +		efc_node_transition(node, pf, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		break;
> +
> +	case EFC_EVT_SHUTDOWN:
> +		node->req_free = true;
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_recv_els_frame(struct efc_node *node,
> +			struct efc_hw_sequence *seq)
> +{
> +	unsigned long flags = 0;
> +	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
> +	struct {
> +		u32 cmd;
> +		enum efc_sm_event evt;
> +		u32 payload_size;
> +	} els_cmd_list[] = {
> +		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
> +		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
> +		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
> +		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
> +		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
> +		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
> +		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
> +	};
> +	struct efc_node_cb cbdata;
> +	u8 *buf = seq->payload->dma.virt;
> +	enum efc_sm_event evt = EFC_EVT_ELS_RCVD;
> +	u32 i;
> +
> +	memset(&cbdata, 0, sizeof(cbdata));
> +	cbdata.header = seq->header;
> +	cbdata.payload = seq->payload;
> +
> +	/* find a matching event for the ELS command */
> +	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
> +		if (els_cmd_list[i].cmd == buf[0]) {
> +			evt = els_cmd_list[i].evt;
> +			break;
> +		}
> +	}
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, evt, &cbdata);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_node_recv_ct_frame(struct efc_node *node,
> +		       struct efc_hw_sequence *seq)
> +{
> +	struct fc_ct_hdr *iu = seq->payload->dma.virt;
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +	struct efc *efc = node->efc;
> +	u16 gscmd = be16_to_cpu(iu->ct_cmd);
> +
> +	efc_log_err(efc, "[%s] Received cmd :%x sending CT_REJECT\n",
> +		    node->display_name, gscmd);
> +	efc->tt.send_ct_rsp(efc, node, be16_to_cpu(hdr->fh_ox_id), iu,
> +			    FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
> +}
> +
> +void
> +efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
> +{
> +	struct efc_node_cb cbdata;
> +	unsigned long flags = 0;
> +
> +	memset(&cbdata, 0, sizeof(cbdata));
> +	cbdata.header = seq->header;
> +	cbdata.payload = seq->payload;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_process_node_pending(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	struct efc_hw_sequence *seq = NULL;
> +	u32 pend_frames_processed = 0;
> +	unsigned long flags = 0;
> +
> +	for (;;) {
> +		/* need to check for hold frames condition after each frame
> +		 * processed because any given frame could cause a transition
> +		 * to a state that holds frames
> +		 */
> +		if (node->hold_frames)
> +			break;
> +
> +		/* Get next frame/sequence */
> +		spin_lock_irqsave(&node->pend_frames_lock, flags);
> +
> +		if (!list_empty(&node->pend_frames)) {
> +			seq = list_first_entry(&node->pend_frames,
> +					struct efc_hw_sequence, list_entry);
> +			list_del(&seq->list_entry);
> +		}
> +		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +		if (!seq) {
> +			pend_frames_processed =	node->pend_frames_processed;
> +			node->pend_frames_processed = 0;
> +			break;
> +		}
> +		node->pend_frames_processed++;
> +
> +		/* now dispatch frame(s) to dispatch function */
> +		efc_node_dispatch_frame(node, seq);
> +	}
> +
> +	if (pend_frames_processed != 0)
> +		efc_log_debug(efc, "%u node frames held and processed\n",
> +			      pend_frames_processed);
> +}
> +
> +void
> +efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	/* Notify the node to resume */
> +	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	/* Notify the node to resume */
> +	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +}
> +
> +void
> +efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +}
> +
> +void efc_node_post_els_resp(struct efc_node *node,
> +			    enum efc_hw_node_els_event evt, void *arg)
> +{
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc *efc = node->efc;
> +	unsigned long flags = 0;
> +
> +	switch (evt) {
> +	case EFC_HW_SRRS_ELS_REQ_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_RJT:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
> +		break;
> +	case EFC_HW_ELS_REQ_ABORTED:
> +		sm_event = EFC_EVT_ELS_REQ_ABORTED;
> +		break;
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", evt);
> +		return;
> +	}
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, sm_event, arg);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void efc_node_post_shutdown(struct efc_node *node,
> +			    u32 evt, void *arg)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
> new file mode 100644
> index 000000000000..0608703cfd04
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.h
> @@ -0,0 +1,183 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFC_NODE_H__)
> +#define __EFC_NODE_H__
> +#include "scsi/fc/fc_ns.h"
> +
> +#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	(1 << 0)
> +#define EFC_NODEDB_PAUSE_NAMESERVER	(1 << 1)
> +#define EFC_NODEDB_PAUSE_NEW_NODES	(1 << 2)
> +
> +#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
> +
> +#define scsi_io_printf(io, fmt, ...) \
> +	efc_log_debug(io->efc, "[%s] [%04x][i:%04x t:%04x h:%04x]" fmt, \
> +	io->node->display_name, io->instance_index, io->init_task_tag, \
> +	io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
> +
> +static inline void
> +efc_node_evt_set(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		 const char *handler)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	if (evt == EFC_EVT_ENTER) {
> +		strncpy(node->current_state_name, handler,
> +			sizeof(node->current_state_name));
> +	} else if (evt == EFC_EVT_EXIT) {
> +		strncpy(node->prev_state_name, node->current_state_name,
> +			sizeof(node->prev_state_name));
> +		strncpy(node->current_state_name, "invalid",
> +			sizeof(node->current_state_name));
> +	}
> +	node->prev_evt = node->current_evt;
> +	node->current_evt = evt;
> +}
> +
> +/**
> + * hold frames in pending frame list
> + *
> + * Unsolicited receive frames are held on the node pending frame list,
> + * rather than being processed.
> + */
> +
> +static inline void
> +efc_node_hold_frames(struct efc_node *node)
> +{
> +	node->hold_frames = true;
> +}
> +
> +/**
> + * accept frames
> + *
> + * Unsolicited receive frames are processed rather than being held on the
> + * node pending frame list.
> + */
> +
> +static inline void
> +efc_node_accept_frames(struct efc_node *node)
> +{
> +	node->hold_frames = false;
> +}
> +
> +/*
> + * Node initiator/target enable defines
> + * All combinations of the SLI port (sport) initiator/target enable,
> + * and remote node initiator/target enable are enumerated.
> + * ex: EFC_NODE_ENABLE_T_TO_IT decodes to target mode is enabled on SLI port
> + * and I+T is enabled on remote node.
> + */
> +enum efc_node_enable {
> +	EFC_NODE_ENABLE_x_TO_x,
> +	EFC_NODE_ENABLE_x_TO_T,
> +	EFC_NODE_ENABLE_x_TO_I,
> +	EFC_NODE_ENABLE_x_TO_IT,
> +	EFC_NODE_ENABLE_T_TO_x,
> +	EFC_NODE_ENABLE_T_TO_T,
> +	EFC_NODE_ENABLE_T_TO_I,
> +	EFC_NODE_ENABLE_T_TO_IT,
> +	EFC_NODE_ENABLE_I_TO_x,
> +	EFC_NODE_ENABLE_I_TO_T,
> +	EFC_NODE_ENABLE_I_TO_I,
> +	EFC_NODE_ENABLE_I_TO_IT,
> +	EFC_NODE_ENABLE_IT_TO_x,
> +	EFC_NODE_ENABLE_IT_TO_T,
> +	EFC_NODE_ENABLE_IT_TO_I,
> +	EFC_NODE_ENABLE_IT_TO_IT,
> +};
> +
> +static inline enum efc_node_enable
> +efc_node_get_enable(struct efc_node *node)
> +{
> +	u32 retval = 0;
> +
> +	if (node->sport->enable_ini)
> +		retval |= (1U << 3);
> +	if (node->sport->enable_tgt)
> +		retval |= (1U << 2);
> +	if (node->init)
> +		retval |= (1U << 1);
> +	if (node->targ)
> +		retval |= (1U << 0);
> +	return (enum efc_node_enable)retval;
> +}
> +
> +extern int
> +efc_node_check_els_req(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg,
> +		       u8 cmd, void *(*efc_node_common_func)(const char *,
> +		       struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		       const char *funcname);
> +extern int
> +efc_node_check_ns_req(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg,
> +		  u16 cmd, void *(*efc_node_common_func)(const char *,
> +		  struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		  const char *funcname);
> +extern int
> +efc_node_attach(struct efc_node *node);
> +extern struct efc_node *
> +efc_node_alloc(struct efc_sli_port *sport, u32 port_id,
> +		bool init, bool targ);
> +extern void
> +efc_node_free(struct efc_node *node);
> +extern void
> +efc_node_force_free(struct efc_node *node);
> +extern void
> +efc_node_update_display_name(struct efc_node *node);
> +void efc_node_post_event(struct efc_node *node, enum efc_sm_event evt,
> +			 void *arg);
> +
> +extern void *
> +__efc_node_shutdown(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void
> +efc_node_save_sparms(struct efc_node *node, void *payload);
> +extern void
> +efc_node_transition(struct efc_node *node,
> +		    void *(*state)(struct efc_sm_ctx *,
> +		    enum efc_sm_event, void *), void *data);
> +extern void *
> +__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +
> +extern void
> +efc_node_initiate_cleanup(struct efc_node *node);
> +
> +extern void
> +efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name);
> +extern uint64_t
> +efc_node_get_wwpn(struct efc_node *node);
> +
> +extern void
> +efc_node_pause(struct efc_node *node,
> +	       void *(*state)(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg));
> +extern void *
> +__efc_node_paused(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern int
> +efc_node_active_ios_empty(struct efc_node *node);
> +extern void
> +efc_node_send_ls_io_cleanup(struct efc_node *node);
> +
> +extern int
> +efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
> +
> +extern void
> +efc_process_node_pending(struct efc_node *node);
> +
> +#endif /* __EFC_NODE_H__ */
> 
Relating to my previous comments: Can you elaborate on the lifetime 
rules for the node? It looks as if any access to the node is guarded 
by the main efc->lock.
Taken to extremes, that would mean one has to hold that lock for 
_any_ access to the node; that would include I/O processing etc., too.
I can't really see how that would scale to the IOPS you are pursuing, so 
clearly that will not happen.
So how _do_ you ensure the validity of a node if the lock is not held?
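
For reference, every external entry point in this patch appears to
follow the pattern

	spin_lock_irqsave(&efc->lock, flags);
	efc_node_post_event(node, sm_event, arg);
	spin_unlock_irqrestore(&efc->lock, flags);

so every state machine event, for every node, is serialized on the
single per-efc lock.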

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands
  2020-04-12  3:32 ` [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands James Smart
@ 2020-04-15 16:10   ` Daniel Wagner
  2020-04-22  5:12     ` James Smart
  0 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 16:10 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:38PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds routines to create mailbox commands used during
> adapter initialization and adds APIs to issue mailbox commands to the
> adapter through the bootstrap mailbox register.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Return defined return values EFC_SUCCESS/FAIL
>   Added 64G link speed support.
>   Defined return values for sli_cqe_mq.
> ---
>  drivers/scsi/elx/libefc_sli/sli4.c | 1216 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 1216 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 0365d7943468..6ecb0f1ad19b 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -3103,3 +3103,1219 @@ sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe,
>  
>  	return rc;
>  }
> +
> +/* Wait for the bootstrap mailbox to report "ready" */
> +static int
> +sli_bmbx_wait(struct sli4 *sli4, u32 msec)
> +{
> +	u32 val = 0;
> +
> +	do {
> +		mdelay(1);	/* 1 ms */

Will sli_bmbx_wait() be executed in atomic context? If not,
something like this is recommended:

        end = jiffies + msecs_to_jiffies(msec);
	do {
		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
		if (val & SLI4_BMBX_RDY)
			return EFC_SUCCESS;
		usleep_range(1000, 2000);
	} while (time_before(jiffies, end));

	return EFC_FAIL;
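
(with end being an unsigned long; time_before() then also handles
jiffies wrap-around.)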

> +		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
> +		msec--;
> +	} while (msec && !(val & SLI4_BMBX_RDY));
> +
> +	val = (!(val & SLI4_BMBX_RDY));
> +	return val;
> +}
> +
> +/* Write bootstrap mailbox */
> +static int
> +sli_bmbx_write(struct sli4 *sli4)
> +{
> +	u32 val = 0;

No need to pre-initialize val.

> +
> +	/* write buffer location to bootstrap mailbox register */
> +	val = SLI4_BMBX_WRITE_HI(sli4->bmbx.phys);
> +	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));

Is the access to the register serialized by a lock?

> +
> +	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
> +		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
> +		return EFC_FAIL;
> +	}
> +	val = SLI4_BMBX_WRITE_LO(sli4->bmbx.phys);
> +	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
> +
> +	/* wait for SLI Port to set ready bit */
> +	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
> +}
> +
> +/* Submit a command to the bootstrap mailbox and check the status */
> +int
> +sli_bmbx_command(struct sli4 *sli4)
> +{
> +	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
> +
> +	if (sli_fw_error_status(sli4) > 0) {
> +		efc_log_crit(sli4, "Chip is in an error state - Mailbox command rejected");
> +		efc_log_crit(sli4, " status=%#x error1=%#x error2=%#x\n",
> +			sli_reg_read_status(sli4),
> +			sli_reg_read_err1(sli4),
> +			sli_reg_read_err2(sli4));
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli_bmbx_write(sli4)) {
> +		efc_log_crit(sli4, "bootstrap mailbox write fail phys=%p reg=%#x\n",
> +			(void *)sli4->bmbx.phys,
> +			readl(sli4->reg[0] + SLI4_BMBX_REG));
> +		return EFC_FAIL;
> +	}
> +
> +	/* check completion queue entry status */
> +	if (le32_to_cpu(((struct sli4_mcqe *)cqe)->dw3_flags) &
> +	    SLI4_MCQE_VALID) {
> +		return sli_cqe_mq(sli4, cqe);
> +	}
> +	efc_log_crit(sli4, "invalid or wrong type\n");
> +	return EFC_FAIL;
> +}
> +
> +/* Write a CONFIG_LINK command to the provided buffer */
> +int
> +sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_cmd_config_link *config_link = buf;
> +
> +	memset(buf, 0, size);
> +
> +	config_link->hdr.command = MBX_CMD_CONFIG_LINK;
> +
> +	/* Port interprets zero in a field as "use default value" */
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a DOWN_LINK command to the provided buffer */
> +int
> +sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_mbox_command_header *hdr = buf;
> +
> +	memset(buf, 0, size);
> +
> +	hdr->command = MBX_CMD_DOWN_LINK;
> +
> +	/* Port interprets zero in a field as "use default value" */
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a DUMP Type 4 command to the provided buffer */
> +int
> +sli_cmd_dump_type4(struct sli4 *sli4, void *buf,
> +		   size_t size, u16 wki)
> +{
> +	struct sli4_cmd_dump4 *cmd = buf;
> +
> +	memset(buf, 0, size);
> +
> +	cmd->hdr.command = MBX_CMD_DUMP;
> +	cmd->type_dword = cpu_to_le32(0x4);
> +	cmd->wki_selection = cpu_to_le16(wki);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a COMMON_READ_TRANSCEIVER_DATA command */
> +int
> +sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf,
> +				     size_t size, u32 page_num,
> +				     struct efc_dma *dma)
> +{
> +	struct sli4_rqst_cmn_read_transceiver_data *req = NULL;
> +	u32 psize;
> +
> +	if (!dma)
> +		psize = SLI_CONFIG_PYLD_LENGTH(cmn_read_transceiver_data);
> +	else
> +		psize = dma->size;
> +
> +	req = sli_config_cmd_init(sli4, buf, size,
> +					    psize, dma);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&req->hdr, CMN_READ_TRANS_DATA, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_read_transceiver_data));
> +
> +	req->page_number = cpu_to_le32(page_num);
> +	req->port = cpu_to_le32(sli4->port_number);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a READ_LINK_STAT command to the provided buffer */
> +int
> +sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size,
> +			u8 req_ext_counters,
> +			u8 clear_overflow_flags,
> +			u8 clear_all_counters)
> +{
> +	struct sli4_cmd_read_link_stats *cmd = buf;
> +	u32 flags;
> +
> +	memset(buf, 0, size);
> +
> +	cmd->hdr.command = MBX_CMD_READ_LNK_STAT;
> +
> +	flags = 0;
> +	if (req_ext_counters)
> +		flags |= SLI4_READ_LNKSTAT_REC;
> +	if (clear_all_counters)
> +		flags |= SLI4_READ_LNKSTAT_CLRC;
> +	if (clear_overflow_flags)
> +		flags |= SLI4_READ_LNKSTAT_CLOF;
> +
> +	cmd->dw1_flags = cpu_to_le32(flags);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write a READ_STATUS command to the provided buffer */
> +int
> +sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size,
> +		    u8 clear_counters)
> +{
> +	struct sli4_cmd_read_status *cmd = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	cmd->hdr.command = MBX_CMD_READ_STATUS;
> +	if (clear_counters)
> +		flags |= SLI4_READSTATUS_CLEAR_COUNTERS;
> +	else
> +		flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS;
> +
> +	cmd->dw1_flags = cpu_to_le32(flags);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an INIT_LINK command to the provided buffer */
> +int
> +sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size,
> +		  u32 speed, u8 reset_alpa)
> +{
> +	struct sli4_cmd_init_link *init_link = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	init_link->hdr.command = MBX_CMD_INIT_LINK;
> +
> +	init_link->sel_reset_al_pa_dword =
> +				cpu_to_le32(reset_alpa);
> +	flags &= ~SLI4_INIT_LINK_F_LOOPBACK;
> +
> +	init_link->link_speed_sel_code = cpu_to_le32(speed);
> +	switch (speed) {
> +	case FC_LINK_SPEED_1G:
> +	case FC_LINK_SPEED_2G:
> +	case FC_LINK_SPEED_4G:
> +	case FC_LINK_SPEED_8G:
> +	case FC_LINK_SPEED_16G:
> +	case FC_LINK_SPEED_32G:
> +	case FC_LINK_SPEED_64G:
> +		flags |= SLI4_INIT_LINK_F_FIXED_SPEED;
> +		break;
> +	case FC_LINK_SPEED_10G:
> +		efc_log_info(sli4, "unsupported FC speed %d\n", speed);
> +		init_link->flags0 = cpu_to_le32(flags);
> +		return EFC_FAIL;
> +	}
> +
> +	switch (sli4->topology) {
> +	case SLI4_READ_CFG_TOPO_FC:
> +		/* Attempt P2P but failover to FC-AL */
> +		flags |= SLI4_INIT_LINK_F_FAIL_OVER;
> +		flags |= SLI4_INIT_LINK_F_P2P_FAIL_OVER;
> +		break;
> +	case SLI4_READ_CFG_TOPO_FC_AL:
> +		flags |= SLI4_INIT_LINK_F_FCAL_ONLY;
> +		if (speed == FC_LINK_SPEED_16G || speed == FC_LINK_SPEED_32G) {
> +			efc_log_info(sli4, "unsupported FC-AL speed %d\n",
> +				     speed);
> +			init_link->flags0 = cpu_to_le32(flags);
> +			return EFC_FAIL;
> +		}
> +		break;
> +	case SLI4_READ_CFG_TOPO_FC_DA:
> +		flags |= SLI4_INIT_LINK_F_P2P_ONLY;
> +		break;
> +	default:
> +
> +		efc_log_info(sli4, "unsupported topology %#x\n",
> +			sli4->topology);
> +
> +		init_link->flags0 = cpu_to_le32(flags);
> +		return EFC_FAIL;
> +	}
> +
> +	flags &= (~SLI4_INIT_LINK_F_UNFAIR);
> +	flags &= (~SLI4_INIT_LINK_F_NO_LIRP);
> +	flags &= (~SLI4_INIT_LINK_F_LOOP_VALID_CHK);
> +	flags &= (~SLI4_INIT_LINK_F_NO_LISA);
> +	flags &= (~SLI4_INIT_LINK_F_PICK_HI_ALPA);

The brackets are not needed.
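i.e. just

	flags &= ~SLI4_INIT_LINK_F_UNFAIR;
	flags &= ~SLI4_INIT_LINK_F_NO_LIRP;
	flags &= ~SLI4_INIT_LINK_F_LOOP_VALID_CHK;
	flags &= ~SLI4_INIT_LINK_F_NO_LISA;
	flags &= ~SLI4_INIT_LINK_F_PICK_HI_ALPA;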

> +	init_link->flags0 = cpu_to_le32(flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an INIT_VFI command to the provided buffer */
> +int
> +sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 vfi, u16 fcfi, u16 vpi)
> +{
> +	struct sli4_cmd_init_vfi *init_vfi = buf;
> +	u16 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	init_vfi->hdr.command = MBX_CMD_INIT_VFI;
> +
> +	init_vfi->vfi = cpu_to_le16(vfi);
> +	init_vfi->fcfi = cpu_to_le16(fcfi);
> +
> +	/*
> +	 * If the VPI is valid, initialize it at the same time as
> +	 * the VFI
> +	 */
> +	if (vpi != U16_MAX) {
> +		flags |= SLI4_INIT_VFI_FLAG_VP;
> +		init_vfi->flags0_word = cpu_to_le16(flags);
> +		init_vfi->vpi = cpu_to_le16(vpi);
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Write an INIT_VPI command to the provided buffer */
> +int
> +sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 vpi, u16 vfi)
> +{
> +	struct sli4_cmd_init_vpi *init_vpi = buf;
> +
> +	memset(buf, 0, size);
> +
> +	init_vpi->hdr.command = MBX_CMD_INIT_VPI;
> +	init_vpi->vpi = cpu_to_le16(vpi);
> +	init_vpi->vfi = cpu_to_le16(vfi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 xri_base, u16 xri_count)
> +{
> +	struct sli4_cmd_post_xri *post_xri = buf;
> +	u16 xri_count_flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	post_xri->hdr.command = MBX_CMD_POST_XRI;
> +	post_xri->xri_base = cpu_to_le16(xri_base);
> +	xri_count_flags = (xri_count & SLI4_POST_XRI_COUNT);

The brackets are not needed.
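i.e.

	xri_count_flags = xri_count & SLI4_POST_XRI_COUNT;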

> +	xri_count_flags |= SLI4_POST_XRI_FLAG_ENX;
> +	xri_count_flags |= SLI4_POST_XRI_FLAG_VAL;
> +	post_xri->xri_count_flags = cpu_to_le16(xri_count_flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size,
> +		    u8 num_xri)
> +{
> +	struct sli4_cmd_release_xri *release_xri = buf;
> +
> +	memset(buf, 0, size);

Just as a reminder, I think this memset() pattern should only operate
on the command size and not over the whole buffer, if the firmware can
handle it.
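
For example (untested sketch):

	memset(release_xri, 0, sizeof(*release_xri));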

> +
> +	release_xri->hdr.command = MBX_CMD_RELEASE_XRI;
> +	release_xri->xri_count_word = cpu_to_le16(num_xri &
> +					SLI4_RELEASE_XRI_COUNT);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_read_config(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_cmd_read_config *read_config = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_config->hdr.command = MBX_CMD_READ_CONFIG;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_cmd_read_nvparms *read_nvparms = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_nvparms->hdr.command = MBX_CMD_READ_NVPARMS;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size,
> +		      u8 *wwpn, u8 *wwnn, u8 hard_alpa,
> +		u32 preferred_d_id)
> +{
> +	struct sli4_cmd_write_nvparms *write_nvparms = buf;
> +
> +	memset(buf, 0, size);
> +
> +	write_nvparms->hdr.command = MBX_CMD_WRITE_NVPARMS;
> +	memcpy(write_nvparms->wwpn, wwpn, 8);
> +	memcpy(write_nvparms->wwnn, wwnn, 8);
> +
> +	write_nvparms->hard_alpa_d_id =
> +			cpu_to_le32((preferred_d_id << 8) | hard_alpa);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_read_rev(struct sli4 *sli4, void *buf, size_t size,
> +		 struct efc_dma *vpd)
> +{
> +	struct sli4_cmd_read_rev *read_rev = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_rev->hdr.command = MBX_CMD_READ_REV;
> +
> +	if (vpd && vpd->size) {
> +		read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD);
> +
> +		read_rev->available_length_dword =
> +			cpu_to_le32(vpd->size &
> +				    SLI4_READ_REV_AVAILABLE_LENGTH);
> +
> +		read_rev->hostbuf.low =
> +				cpu_to_le32(lower_32_bits(vpd->phys));
> +		read_rev->hostbuf.high =
> +				cpu_to_le32(upper_32_bits(vpd->phys));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
> +		     struct efc_dma *dma,
> +		     u16 vpi)
> +{
> +	struct sli4_cmd_read_sparm64 *read_sparm64 = buf;
> +
> +	memset(buf, 0, size);
> +
> +	if (vpi == U16_MAX) {
> +		efc_log_err(sli4, "special VPI not supported!!!\n");
> +		return EFC_FAIL;
> +	}
> +
> +	if (!dma || !dma->phys) {
> +		efc_log_err(sli4, "bad DMA buffer\n");
> +		return EFC_FAIL;
> +	}
> +
> +	read_sparm64->hdr.command = MBX_CMD_READ_SPARM64;
> +
> +	read_sparm64->bde_64.bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
> +	read_sparm64->bde_64.u.data.low =
> +			cpu_to_le32(lower_32_bits(dma->phys));
> +	read_sparm64->bde_64.u.data.high =
> +			cpu_to_le32(upper_32_bits(dma->phys));
> +
> +	read_sparm64->vpi = cpu_to_le16(vpi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
> +		      struct efc_dma *dma)
> +{
> +	struct sli4_cmd_read_topology *read_topo = buf;
> +
> +	memset(buf, 0, size);
> +
> +	read_topo->hdr.command = MBX_CMD_READ_TOPOLOGY;
> +
> +	if (dma && dma->size) {
> +		if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) {
> +			efc_log_err(sli4, "loop map buffer too small %zx\n",
> +				dma->size);
> +			return EFC_FAIL;
> +		}

Shouldn't this also fail when dma or dma->size is invalid?

	if (!dma || dma->size < SLI4_MIN_LOOP_MAP_BYTES)
		return EFC_FAIL;

And why is this test not before memset()?

> +
> +		memset(dma->virt, 0, dma->size);
> +
> +		read_topo->bde_loop_map.bde_type_buflen =
> +			cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +				    (dma->size & SLI4_BDE_MASK_BUFFER_LEN));
> +		read_topo->bde_loop_map.u.data.low  =
> +			cpu_to_le32(lower_32_bits(dma->phys));
> +		read_topo->bde_loop_map.u.data.high =
> +			cpu_to_le32(upper_32_bits(dma->phys));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size,
> +		 u16 index,
> +		 struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
> +{
> +	struct sli4_cmd_reg_fcfi *reg_fcfi = buf;
> +	u32 i;
> +
> +	memset(buf, 0, size);
> +
> +	reg_fcfi->hdr.command = MBX_CMD_REG_FCFI;
> +
> +	reg_fcfi->fcf_index = cpu_to_le16(index);
> +
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		switch (i) {
> +		case 0:
> +			reg_fcfi->rqid0 = rq_cfg[0].rq_id;
> +			break;
> +		case 1:
> +			reg_fcfi->rqid1 = rq_cfg[1].rq_id;
> +			break;
> +		case 2:
> +			reg_fcfi->rqid2 = rq_cfg[2].rq_id;
> +			break;
> +		case 3:
> +			reg_fcfi->rqid3 = rq_cfg[3].rq_id;
> +			break;
> +		}
> +		reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
> +		reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
> +		reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
> +		reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size,
> +		     u8 mode, u16 fcf_index,
> +		     u8 rq_selection_policy, u8 mrq_bit_mask,
> +		     u16 num_mrqs,
> +		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG])
> +{
> +	struct sli4_cmd_reg_fcfi_mrq *reg_fcfi_mrq = buf;
> +	u32 i;
> +	u32 mrq_flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	reg_fcfi_mrq->hdr.command = MBX_CMD_REG_FCFI_MRQ;
> +	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) {
> +		reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index);
> +		goto done;
> +	}
> +
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
> +		reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
> +		reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
> +		reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match;
> +
> +		switch (i) {
> +		case 3:
> +			reg_fcfi_mrq->rqid3 = rq_cfg[i].rq_id;
> +			break;
> +		case 2:
> +			reg_fcfi_mrq->rqid2 = rq_cfg[i].rq_id;
> +			break;
> +		case 1:
> +			reg_fcfi_mrq->rqid1 = rq_cfg[i].rq_id;
> +			break;
> +		case 0:
> +			reg_fcfi_mrq->rqid0 = rq_cfg[i].rq_id;
> +			break;
> +		}
> +	}
> +
> +	mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS;
> +	mrq_flags |= (mrq_bit_mask << 8);
> +	mrq_flags |= (rq_selection_policy << 12);
> +	reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags);
> +done:
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size,
> +		u32 nport_id, u16 rpi, u16 vpi,
> +		struct efc_dma *dma, u8 update,
> +		u8 enable_t10_pi)
> +{
> +	struct sli4_cmd_reg_rpi *reg_rpi = buf;
> +	u32 rportid_flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	reg_rpi->hdr.command = MBX_CMD_REG_RPI;
> +
> +	reg_rpi->rpi = cpu_to_le16(rpi);
> +
> +	rportid_flags = nport_id & SLI4_REGRPI_REMOTE_N_PORTID;
> +
> +	if (update)
> +		rportid_flags |= SLI4_REGRPI_UPD;
> +	else
> +		rportid_flags &= ~SLI4_REGRPI_UPD;
> +
> +	if (enable_t10_pi)
> +		rportid_flags |= SLI4_REGRPI_ETOW;
> +	else
> +		rportid_flags &= ~SLI4_REGRPI_ETOW;
> +
> +	reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags);
> +
> +	reg_rpi->bde_64.bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
> +	reg_rpi->bde_64.u.data.low  =
> +		cpu_to_le32(lower_32_bits(dma->phys));
> +	reg_rpi->bde_64.u.data.high =
> +		cpu_to_le32(upper_32_bits(dma->phys));
> +
> +	reg_rpi->vpi = cpu_to_le16(vpi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
> +		u16 vfi, u16 fcfi, struct efc_dma dma,
> +		u16 vpi, __be64 sli_wwpn, u32 fc_id)
> +{
> +	struct sli4_cmd_reg_vfi *reg_vfi = buf;
> +
> +	memset(buf, 0, size);
> +
> +	reg_vfi->hdr.command = MBX_CMD_REG_VFI;
> +
> +	reg_vfi->vfi = cpu_to_le16(vfi);
> +
> +	reg_vfi->fcfi = cpu_to_le16(fcfi);
> +
> +	reg_vfi->sparm.bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_MASK_BUFFER_LEN));
> +	reg_vfi->sparm.u.data.low  =
> +		cpu_to_le32(lower_32_bits(dma.phys));
> +	reg_vfi->sparm.u.data.high =
> +		cpu_to_le32(upper_32_bits(dma.phys));
> +
> +	reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov);
> +	reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov);
> +
> +	reg_vfi->dw0w1_flags |= cpu_to_le16(SLI4_REGVFI_VP);
> +	reg_vfi->vpi = cpu_to_le16(vpi);
> +	memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn));
> +	reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size,
> +		u32 fc_id, __be64 sli_wwpn, u16 vpi, u16 vfi,
> +		bool update)
> +{
> +	struct sli4_cmd_reg_vpi *reg_vpi = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	reg_vpi->hdr.command = MBX_CMD_REG_VPI;
> +
> +	flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID);
> +	if (update)
> +		flags |= SLI4_REGVPI_UPD;
> +	else
> +		flags &= ~SLI4_REGVPI_UPD;
> +
> +	reg_vpi->dw2_lportid_flags = cpu_to_le32(flags);
> +	memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn));
> +	reg_vpi->vpi = cpu_to_le16(vpi);
> +	reg_vpi->vfi = cpu_to_le16(vfi);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_request_features(struct sli4 *sli4, void *buf, size_t size,
> +			 u32 features_mask, bool query)
> +{
> +	struct sli4_cmd_request_features *req_features = buf;
> +
> +	memset(buf, 0, size);
> +
> +	req_features->hdr.command = MBX_CMD_RQST_FEATURES;
> +
> +	if (query)
> +		req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY);
> +
> +	req_features->cmd = cpu_to_le32(features_mask);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size,
> +		   u16 indicator)
> +{
> +	struct sli4_cmd_unreg_fcfi *unreg_fcfi = buf;
> +
> +	memset(buf, 0, size);
> +
> +	unreg_fcfi->hdr.command = MBX_CMD_UNREG_FCFI;
> +
> +	unreg_fcfi->fcfi = cpu_to_le16(indicator);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size,
> +		  u16 indicator,
> +		  enum sli4_resource which, u32 fc_id)
> +{
> +	struct sli4_cmd_unreg_rpi *unreg_rpi = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	unreg_rpi->hdr.command = MBX_CMD_UNREG_RPI;
> +
> +	switch (which) {
> +	case SLI_RSRC_RPI:
> +		flags |= UNREG_RPI_II_RPI;
> +		if (fc_id == U32_MAX)
> +			break;
> +
> +		flags |= UNREG_RPI_DP;
> +		unreg_rpi->dw2_dest_n_portid =
> +			cpu_to_le32(fc_id & UNREG_RPI_DEST_N_PORTID_MASK);
> +		break;
> +	case SLI_RSRC_VPI:
> +		flags |= UNREG_RPI_II_VPI;
> +		break;
> +	case SLI_RSRC_VFI:
> +		flags |= UNREG_RPI_II_VFI;
> +		break;
> +	case SLI_RSRC_FCFI:
> +		flags |= UNREG_RPI_II_FCFI;
> +		break;
> +	default:
> +		efc_log_info(sli4, "unknown type %#x\n", which);
> +		return EFC_FAIL;
> +	}
> +
> +	unreg_rpi->dw1w1_flags = cpu_to_le16(flags);
> +	unreg_rpi->index = cpu_to_le16(indicator);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size,
> +		  u16 index, u32 which)
> +{
> +	struct sli4_cmd_unreg_vfi *unreg_vfi = buf;
> +
> +	memset(buf, 0, size);
> +
> +	unreg_vfi->hdr.command = MBX_CMD_UNREG_VFI;
> +	switch (which) {
> +	case SLI4_UNREG_TYPE_DOMAIN:
> +		unreg_vfi->index = cpu_to_le16(index);
> +		break;
> +	case SLI4_UNREG_TYPE_FCF:
> +		unreg_vfi->index = cpu_to_le16(index);
> +		break;
> +	case SLI4_UNREG_TYPE_ALL:
> +		unreg_vfi->index = cpu_to_le16(U32_MAX);
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +
> +	if (which != SLI4_UNREG_TYPE_DOMAIN)
> +		unreg_vfi->dw2_flags =
> +			cpu_to_le16(UNREG_VFI_II_FCFI);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, size_t size,
> +		  u16 indicator, u32 which)
> +{
> +	struct sli4_cmd_unreg_vpi *unreg_vpi = buf;
> +	u32 flags = 0;
> +
> +	memset(buf, 0, size);
> +
> +	unreg_vpi->hdr.command = MBX_CMD_UNREG_VPI;
> +	unreg_vpi->index = cpu_to_le16(indicator);
> +	switch (which) {
> +	case SLI4_UNREG_TYPE_PORT:
> +		flags |= UNREG_VPI_II_VPI;
> +		break;
> +	case SLI4_UNREG_TYPE_DOMAIN:
> +		flags |= UNREG_VPI_II_VFI;
> +		break;
> +	case SLI4_UNREG_TYPE_FCF:
> +		flags |= UNREG_VPI_II_FCFI;
> +		break;
> +	case SLI4_UNREG_TYPE_ALL:
> +		/* override indicator */
> +		unreg_vpi->index = cpu_to_le16(U32_MAX);
> +		flags |= UNREG_VPI_II_FCFI;
> +		break;
> +	default:
> +		return EFC_FAIL;
> +	}
> +
> +	unreg_vpi->dw2w0_flags = cpu_to_le16(flags);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_common_modify_eq_delay(struct sli4 *sli4, void *buf, size_t size,
> +			       struct sli4_queue *q, int num_q, u32 shift,
> +			       u32 delay_mult)
> +{
> +	struct sli4_rqst_cmn_modify_eq_delay *req = NULL;
> +	int i;
> +
> +	req = sli_config_cmd_init(sli4, buf, size,
> +				SLI_CONFIG_PYLD_LENGTH(cmn_modify_eq_delay),
> +				NULL);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&req->hdr, CMN_MODIFY_EQ_DELAY, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_modify_eq_delay));
> +	req->num_eq = cpu_to_le32(num_q);
> +
> +	for (i = 0; i < num_q; i++) {
> +		req->eq_delay_record[i].eq_id = cpu_to_le32(q[i].id);
> +		req->eq_delay_record[i].phase = cpu_to_le32(shift);
> +		req->eq_delay_record[i].delay_multiplier =
> +			cpu_to_le32(delay_mult);
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +void
> +sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf,
> +			       size_t size, u16 timeout)
> +{
> +	struct sli4_rqst_lowlevel_set_watchdog *req = NULL;
> +
> +	req = sli_config_cmd_init(sli4, buf, size,
> +			SLI_CONFIG_PYLD_LENGTH(lowlevel_set_watchdog),
> +			NULL);
> +	if (!req)
> +		return;
> +
> +	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_LOWLEVEL_SET_WATCHDOG,
> +			 SLI4_SUBSYSTEM_LOWLEVEL, CMD_V0,
> +			 CFG_RQST_PYLD_LEN(lowlevel_set_watchdog));
> +	req->watchdog_timeout = cpu_to_le16(timeout);
> +}
> +
> +static int
> +sli_cmd_common_get_cntl_attributes(struct sli4 *sli4, void *buf, size_t size,
> +				   struct efc_dma *dma)
> +{
> +	struct sli4_rqst_hdr *hdr = NULL;
> +
> +	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
> +	if (!hdr)
> +		return EFC_FAIL;
> +
> +	hdr->opcode = CMN_GET_CNTL_ATTRIBUTES;
> +	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
> +	hdr->request_length = cpu_to_le32(dma->size);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_common_get_cntl_addl_attributes(struct sli4 *sli4, void *buf,
> +					size_t size, struct efc_dma *dma)
> +{
> +	struct sli4_rqst_hdr *hdr = NULL;
> +
> +	hdr = sli_config_cmd_init(sli4, buf, size, CFG_RQST_CMDSZ(hdr), dma);
> +	if (!hdr)
> +		return EFC_FAIL;
> +
> +	hdr->opcode = CMN_GET_CNTL_ADDL_ATTRS;
> +	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
> +	hdr->request_length = cpu_to_le32(dma->size);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_nop(struct sli4 *sli4, void *buf,
> +		   size_t size, uint64_t context)
> +{
> +	struct sli4_rqst_cmn_nop *nop = NULL;
> +
> +	nop = sli_config_cmd_init(sli4, buf, size,
> +				  SLI_CONFIG_PYLD_LENGTH(cmn_nop), NULL);
> +	if (!nop)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&nop->hdr, CMN_NOP, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_nop));
> +
> +	memcpy(&nop->context, &context, sizeof(context));
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
> +					size_t size, u16 rtype)
> +{
> +	struct sli4_rqst_cmn_get_resource_extent_info *extent = NULL;
> +
> +	extent = sli_config_cmd_init(sli4, buf, size,
> +			CFG_RQST_CMDSZ(cmn_get_resource_extent_info),
> +				     NULL);
> +	if (!extent)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&extent->hdr, CMN_GET_RSC_EXTENT_INFO,
> +			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
> +			 CFG_RQST_PYLD_LEN(cmn_get_resource_extent_info));
> +
> +	extent->resource_type = cpu_to_le16(rtype);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf,
> +				   size_t size)
> +{
> +	struct sli4_rqst_hdr *hdr = NULL;
> +
> +	hdr = sli_config_cmd_init(sli4, buf, size,
> +				  SLI_CONFIG_PYLD_LENGTH(cmn_get_sli4_params),
> +				  NULL);
> +	if (!hdr)
> +		return EFC_FAIL;
> +
> +	hdr->opcode = CMN_GET_SLI4_PARAMS;
> +	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
> +	hdr->request_length = CFG_RQST_PYLD_LEN(cmn_get_sli4_params);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +sli_cmd_common_get_port_name(struct sli4 *sli4, void *buf, size_t size)
> +{
> +	struct sli4_rqst_cmn_get_port_name *pname;
> +
> +	pname = sli_config_cmd_init(sli4, buf, size,
> +				    SLI_CONFIG_PYLD_LENGTH(cmn_get_port_name),
> +				    NULL);
> +	if (!pname)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&pname->hdr, CMN_GET_PORT_NAME, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V1, CFG_RQST_PYLD_LEN(cmn_get_port_name));
> +
> +	/* Set the port type value (ethernet=0, FC=1) for V1 commands */
> +	pname->port_type = PORT_TYPE_FC;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size,
> +			    u16 noc,
> +			    u16 eof, u32 desired_write_length,
> +			    u32 offset, char *object_name,
> +			    struct efc_dma *dma)
> +{
> +	struct sli4_rqst_cmn_write_object *wr_obj = NULL;
> +	struct sli4_bde *bde;
> +	u32 dwflags = 0;
> +
> +	wr_obj = sli_config_cmd_init(sli4, buf, size,
> +				     CFG_RQST_CMDSZ(cmn_write_object) +
> +				     sizeof(*bde), NULL);
> +	if (!wr_obj)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&wr_obj->hdr, CMN_WRITE_OBJECT, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0,
> +			 CFG_RQST_PYLD_LEN_VAR(cmn_write_object, sizeof(*bde)));
> +
> +	if (noc)
> +		dwflags |= SLI4_RQ_DES_WRITE_LEN_NOC;
> +	if (eof)
> +		dwflags |= SLI4_RQ_DES_WRITE_LEN_EOF;
> +	dwflags |= (desired_write_length & SLI4_RQ_DES_WRITE_LEN);
> +
> +	wr_obj->desired_write_len_dword = cpu_to_le32(dwflags);
> +
> +	wr_obj->write_offset = cpu_to_le32(offset);
> +	strncpy(wr_obj->object_name, object_name,
> +		sizeof(wr_obj->object_name));
> +	wr_obj->host_buffer_descriptor_count = cpu_to_le32(1);
> +
> +	bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor;
> +
> +	/* Setup to transfer xfer_size bytes to device */
> +	bde->bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (desired_write_length & SLI4_BDE_MASK_BUFFER_LEN));
> +	bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
> +	bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
> +			     char *object_name)
> +{
> +	struct sli4_rqst_cmn_delete_object *req = NULL;
> +
> +	req = sli_config_cmd_init(sli4, buf, size,
> +				  CFG_RQST_CMDSZ(cmn_delete_object), NULL);
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&req->hdr, CMN_DELETE_OBJECT, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_delete_object));
> +
> +	strncpy(req->object_name, object_name, sizeof(req->object_name));
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
> +			   u32 desired_read_length, u32 offset,
> +			   char *object_name, struct efc_dma *dma)
> +{
> +	struct sli4_rqst_cmn_read_object *rd_obj = NULL;
> +	struct sli4_bde *bde;
> +
> +	rd_obj = sli_config_cmd_init(sli4, buf, size,
> +				     CFG_RQST_CMDSZ(cmn_read_object) +
> +				     sizeof(*bde), NULL);
> +	if (!rd_obj)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&rd_obj->hdr, CMN_READ_OBJECT, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0,
> +			 CFG_RQST_PYLD_LEN_VAR(cmn_read_object, sizeof(*bde)));
> +	rd_obj->desired_read_length_dword =
> +		cpu_to_le32(desired_read_length & SLI4_REQ_DESIRE_READLEN);
> +
> +	rd_obj->read_offset = cpu_to_le32(offset);
> +	strncpy(rd_obj->object_name, object_name,
> +		sizeof(rd_obj->object_name));

Is the string guaranteed to get a NUL termination here?
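
If it must be, something like strscpy() would guarantee it (just a
sketch; if the firmware expects a fixed-width, NUL-padded field,
strncpy may be intentional here):

	strscpy(rd_obj->object_name, object_name,
		sizeof(rd_obj->object_name));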

> +	rd_obj->host_buffer_descriptor_count = cpu_to_le32(1);
> +
> +	bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor;
> +
> +	/* Setup to transfer xfer_size bytes to device */
> +	bde->bde_type_buflen =
> +		cpu_to_le32((BDE_TYPE_BDE_64 << BDE_TYPE_SHIFT) |
> +			    (desired_read_length & SLI4_BDE_MASK_BUFFER_LEN));
> +	if (dma) {
> +		bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
> +		bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
> +	} else {
> +		bde->u.data.low = 0;
> +		bde->u.data.high = 0;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
> +			  struct efc_dma *cmd,
> +			  struct efc_dma *resp)
> +{
> +	struct sli4_rqst_dmtf_exec_clp_cmd *clp_cmd = NULL;
> +
> +	clp_cmd = sli_config_cmd_init(sli4, buf, size,
> +				      CFG_RQST_CMDSZ(dmtf_exec_clp_cmd), NULL);
> +	if (!clp_cmd)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&clp_cmd->hdr, DMTF_EXEC_CLP_CMD, SLI4_SUBSYSTEM_DMTF,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(dmtf_exec_clp_cmd));
> +
> +	clp_cmd->cmd_buf_length = cpu_to_le32(cmd->size);
> +	clp_cmd->cmd_buf_addr_low =  cpu_to_le32(lower_32_bits(cmd->phys));
> +	clp_cmd->cmd_buf_addr_high =  cpu_to_le32(upper_32_bits(cmd->phys));
> +	clp_cmd->resp_buf_length = cpu_to_le32(resp->size);
> +	clp_cmd->resp_buf_addr_low =  cpu_to_le32(lower_32_bits(resp->phys));
> +	clp_cmd->resp_buf_addr_high =  cpu_to_le32(upper_32_bits(resp->phys));

Whitespace damage: double space after the '=' in the assignments above.

> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf,
> +				 size_t size, bool query,
> +				 bool is_buffer_list,
> +				 struct efc_dma *buffer, u8 fdb)
> +{
> +	struct sli4_rqst_cmn_set_dump_location *set_dump_loc = NULL;
> +	u32 buffer_length_flag = 0;
> +
> +	set_dump_loc = sli_config_cmd_init(sli4, buf, size,
> +					CFG_RQST_CMDSZ(cmn_set_dump_location),
> +					NULL);
> +	if (!set_dump_loc)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&set_dump_loc->hdr, CMN_SET_DUMP_LOCATION,
> +			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
> +			 CFG_RQST_PYLD_LEN(cmn_set_dump_location));
> +
> +	if (is_buffer_list)
> +		buffer_length_flag |= SLI4_CMN_SET_DUMP_BLP;
> +
> +	if (query)
> +		buffer_length_flag |= SLI4_CMN_SET_DUMP_QRY;
> +
> +	if (fdb)
> +		buffer_length_flag |= SLI4_CMN_SET_DUMP_FDB;
> +
> +	if (buffer) {
> +		set_dump_loc->buf_addr_low =
> +			cpu_to_le32(lower_32_bits(buffer->phys));
> +		set_dump_loc->buf_addr_high =
> +			cpu_to_le32(upper_32_bits(buffer->phys));
> +
> +		buffer_length_flag |= (buffer->len &
> +				       SLI4_CMN_SET_DUMP_BUFFER_LEN);
> +	} else {
> +		set_dump_loc->buf_addr_low = 0;
> +		set_dump_loc->buf_addr_high = 0;
> +		set_dump_loc->buffer_length_dword = 0;
> +	}
> +	set_dump_loc->buffer_length_dword = cpu_to_le32(buffer_length_flag);
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
> +			    u32 feature,
> +			    u32 param_len,
> +			    void *parameter)
> +{
> +	struct sli4_rqst_cmn_set_features *cmd = NULL;
> +
> +	cmd = sli_config_cmd_init(sli4, buf, size,
> +				  CFG_RQST_CMDSZ(cmn_set_features), NULL);
> +	if (!cmd)
> +		return EFC_FAIL;
> +
> +	sli_cmd_fill_hdr(&cmd->hdr, CMN_SET_FEATURES, SLI4_SUBSYSTEM_COMMON,
> +			 CMD_V0, CFG_RQST_PYLD_LEN(cmn_set_features));
> +
> +	cmd->feature = cpu_to_le32(feature);
> +	cmd->param_len = cpu_to_le32(param_len);
> +	memcpy(cmd->params, parameter, param_len);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cqe_mq(struct sli4 *sli4, void *buf)
> +{
> +	struct sli4_mcqe *mcqe = buf;
> +	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
> +	/*
> +	 * Firmware can split mbx completions into two MCQEs: first with only
> +	 * the "consumed" bit set and a second with the "complete" bit set.
> +	 * Thus, ignore MCQE unless "complete" is set.
> +	 */
> +	if (!(dwflags & SLI4_MCQE_COMPLETED))
> +		return SLI4_MCQE_STATUS_NOT_COMPLETED;
> +
> +	if (le16_to_cpu(mcqe->completion_status)) {
> +		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
> +			le16_to_cpu(mcqe->completion_status),
> +			      le16_to_cpu(mcqe->extended_status),
> +			      (dwflags & SLI4_MCQE_CONSUMED),
> +			      (dwflags & SLI4_MCQE_COMPLETED),
> +			      (dwflags & SLI4_MCQE_AE),
> +			      (dwflags & SLI4_MCQE_VALID));
> +	}
> +
> +	return le16_to_cpu(mcqe->completion_status);
> +}
> +
> +int
> +sli_cqe_async(struct sli4 *sli4, void *buf)
> +{
> +	struct sli4_acqe *acqe = buf;
> +	int rc = EFC_FAIL;
> +
> +	if (!buf) {
> +		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
> +		return EFC_FAIL;
> +	}
> +
> +	switch (acqe->event_code) {
> +	case SLI4_ACQE_EVENT_CODE_LINK_STATE:
> +		efc_log_info(sli4, "Unsupported by FC link, evt code:%#x\n",
> +			     acqe->event_code);
> +		break;
> +	case SLI4_ACQE_EVENT_CODE_GRP_5:
> +		efc_log_info(sli4, "ACQE GRP5\n");
> +		break;
> +	case SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT:
> +		efc_log_info(sli4, "ACQE SLI Port, type=0x%x, data1,2=0x%08x,0x%08x\n",
> +			acqe->event_type,
> +			le32_to_cpu(acqe->event_data[0]),
> +			le32_to_cpu(acqe->event_data[1]));
> +		break;
> +	case SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT:
> +		rc = sli_fc_process_link_attention(sli4, buf);
> +		break;
> +	default:
> +		efc_log_info(sli4, "ACQE unknown=%#x\n",
> +			acqe->event_code);
> +	}
> +
> +	return rc;
> +}
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library
  2020-04-12  3:32 ` [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
  2020-04-15 12:49   ` Hannes Reinecke
@ 2020-04-15 17:06   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 17:06 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:39PM -0700, James Smart wrote:
> This patch continues the libefc_sli SLI-4 library population.
> 
> This patch adds APIS to initialize the library, initialize
> the SLI Port, reset firmware, terminate the SLI Port, and
> terminate the library.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed some function types to bool.
>   Return defined return values EFC_SUCCESS/FAIL
>   Defined dump types SLI4_FUNC_DESC_DUMP, SLI4_CHIP_LEVEL_DUMP.
>   Defined dump status return values for sli_dump_is_ready().
>   Formatted function defines to use 80 character length.
> ---
>  drivers/scsi/elx/libefc_sli/sli4.c | 1202 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc_sli/sli4.h |  434 +++++++++++++
>  2 files changed, 1636 insertions(+)
> 
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> index 6ecb0f1ad19b..c45a3ac8962d 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.c
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -4319,3 +4319,1205 @@ sli_cqe_async(struct sli4 *sli4, void *buf)
>  
>  	return rc;
>  }
> +
> +/* Determine if the chip FW is in a ready state */
> +bool
> +sli_fw_ready(struct sli4 *sli4)
> +{
> +	u32 val;
> +	/*
> +	 * Is firmware ready for operation? Check needed depends on IF_TYPE
> +	 */

Okay, but where is the code for it?

> +	val = sli_reg_read_status(sli4);
> +	return (val & SLI4_PORT_STATUS_RDY) ? 1 : 0;
> +}
> +
> +static bool

Why not EFC_FAIL/SUCCESS?

> +sli_sliport_reset(struct sli4 *sli4)
> +{
> +	u32 iter, val;
> +	int rc = false;
> +
> +	val = SLI4_PORT_CTRL_IP;
> +	/* Initialize port, endian */
> +	writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
> +
> +	for (iter = 0; iter < 3000; iter++) {
> +		mdelay(10);	/* 10 ms */
> +		if (sli_fw_ready(sli4)) {
> +			rc = true;
> +			break;
> +		}
> +	}

This should probably call sli_wait_for_fw_ready() here.
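
Something along these lines (untested sketch; assumes
SLI4_FW_READY_TIMEOUT_MSEC is an acceptable timeout here as well):

	static bool
	sli_sliport_reset(struct sli4 *sli4)
	{
		bool ready;

		/* Initialize port, endian */
		writel(SLI4_PORT_CTRL_IP, sli4->reg[0] + SLI4_PORT_CTRL_REG);

		ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
		if (!ready)
			efc_log_crit(sli4, "port failed to become ready after initialization\n");

		return ready;
	}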

> +
> +	if (!rc)
> +		efc_log_crit(sli4, "port failed to become ready after initialization\n");
> +
> +	return rc;
> +}
> +
> +static bool
> +sli_wait_for_fw_ready(struct sli4 *sli4, u32 timeout_ms)
> +{
> +	u32 iter = timeout_ms / (SLI4_INIT_PORT_DELAY_US / 1000);
> +	bool ready = false;
> +
> +	do {
> +		iter--;
> +		mdelay(10);	/* 10 ms */
> +		if (sli_fw_ready(sli4) == 1)
> +			ready = true;
> +	} while (!ready && (iter > 0));

And this should be replaced with the usleep_range() approach.
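
E.g. (untested sketch, keeping the existing poll interval):

	static bool
	sli_wait_for_fw_ready(struct sli4 *sli4, u32 timeout_ms)
	{
		u32 iter = timeout_ms / (SLI4_INIT_PORT_DELAY_US / 1000);

		while (iter--) {
			if (sli_fw_ready(sli4))
				return true;
			/* sleep instead of busy-waiting in mdelay() */
			usleep_range(SLI4_INIT_PORT_DELAY_US,
				     2 * SLI4_INIT_PORT_DELAY_US);
		}

		return false;
	}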

> +
> +	return ready;
> +}
> +
> +static bool
> +sli_fw_init(struct sli4 *sli4)
> +{
> +	bool ready;
> +
> +	/*
> +	 * Is firmware ready for operation?
> +	 */
> +	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
> +	if (!ready) {

	if (!sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC))

and drop ready.

> +		efc_log_crit(sli4, "FW status is NOT ready\n");
> +		return false;
> +	}
> +
> +	/*
> +	 * Reset port to a known state
> +	 */
> +	if (!sli_sliport_reset(sli4))
> +		return false;
> +
> +	return true;

	return sli_sliport_reset(sli4);

> +}
> +
> +static int
> +sli_fw_term(struct sli4 *sli4)
> +{
> +	/* type 2 etc. use SLIPORT_CONTROL to initialize port */
> +	sli_sliport_reset(sli4);
> +	return EFC_SUCCESS;

	return sli_sliport_reset(sli4);

> +}
> +
> +static int
> +sli_request_features(struct sli4 *sli4, u32 *features, bool query)
> +{
> +	if (!sli_cmd_request_features(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +				     *features, query)) {

	if (sli_cmd....)
		return EFC_FAIL;

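i.e. roughly the whole function with early returns (untested sketch):

	static int
	sli_request_features(struct sli4 *sli4, u32 *features, bool query)
	{
		struct sli4_cmd_request_features *req_features = sli4->bmbx.virt;

		if (sli_cmd_request_features(sli4, sli4->bmbx.virt,
					     SLI4_BMBX_SIZE, *features, query)) {
			efc_log_err(sli4, "bad REQUEST_FEATURES write\n");
			return EFC_FAIL;
		}

		if (sli_bmbx_command(sli4)) {
			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
				     __func__);
			return EFC_FAIL;
		}

		if (le16_to_cpu(req_features->hdr.status)) {
			efc_log_err(sli4, "REQUEST_FEATURES bad status %#x\n",
				    le16_to_cpu(req_features->hdr.status));
			return EFC_FAIL;
		}

		*features = le32_to_cpu(req_features->resp);

		return EFC_SUCCESS;
	}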

> +		struct sli4_cmd_request_features *req_features =
> +							sli4->bmbx.virt;
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
> +				__func__);
> +			return EFC_FAIL;
> +		}
> +		if (le16_to_cpu(req_features->hdr.status)) {
> +			efc_log_err(sli4, "REQUEST_FEATURES bad status %#x\n",
> +			       le16_to_cpu(req_features->hdr.status));
> +			return EFC_FAIL;
> +		}
> +		*features = le32_to_cpu(req_features->resp);
> +	} else {
> +		efc_log_err(sli4, "bad REQUEST_FEATURES write\n");
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +void
> +sli_calc_max_qentries(struct sli4 *sli4)
> +{
> +	enum sli4_qtype q;
> +	u32 qentries;
> +
> +	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
> +		sli4->qinfo.max_qentries[q] =
> +			sli_convert_mask_to_count(sli4->qinfo.count_method[q],
> +						  sli4->qinfo.count_mask[q]);
> +	}
> +
> +	/* single, contiguous DMA allocations will be called for each queue
> +	 * of size (max_qentries * queue entry size); since these can be large,
> +	 * check against the OS max DMA allocation size
> +	 */
> +	for (q = SLI_QTYPE_EQ; q < SLI_QTYPE_MAX; q++) {
> +		qentries = sli4->qinfo.max_qentries[q];
> +
> +		efc_log_info(sli4, "[%s]: max_qentries from %d to %d\n",
> +			     SLI_QNAME[q],
> +			     sli4->qinfo.max_qentries[q], qentries);
> +		sli4->qinfo.max_qentries[q] = qentries;
> +	}
> +}
> +
> +static int
> +sli_get_config(struct sli4 *sli4)

This function is a bit too long. I suggest splitting it into smaller
functions, e.g. one for handling the extents, etc.
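
For example (sketch only; the helper names below are made up, and each
helper would contain the corresponding block of the current function):

	static int
	sli_get_config(struct sli4 *sli4)
	{
		if (sli_get_read_config(sli4))
			return EFC_FAIL;

		if (sli_get_sli4_parameters(sli4))
			return EFC_FAIL;

		if (sli_get_ctrl_attributes(sli4))
			return EFC_FAIL;

		if (sli_get_port_name(sli4))
			return EFC_FAIL;

		if (sli_get_read_rev_and_vpd(sli4))
			return EFC_FAIL;

		return sli_get_read_nvparms(sli4);
	}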

> +{
> +	struct efc_dma data;
> +	u32 psize;
> +
> +	/*
> +	 * Read the device configuration
> +	 */
> +	if (!sli_cmd_read_config(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {

	if (sli_cmd_...)
		return EFC_FAIL;

> +		struct sli4_rsp_read_config	*read_config = sli4->bmbx.virt;
> +		u32 i;
> +		u32 total, total_size;
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "bootstrap mailbox fail (READ_CONFIG)\n");
> +			return EFC_FAIL;
> +		}
> +		if (le16_to_cpu(read_config->hdr.status)) {
> +			efc_log_err(sli4, "READ_CONFIG bad status %#x\n",
> +			       le16_to_cpu(read_config->hdr.status));
> +			return EFC_FAIL;
> +		}
> +
> +		sli4->has_extents =
> +			le32_to_cpu(read_config->ext_dword) &
> +				    SLI4_READ_CFG_RESP_RESOURCE_EXT;
> +		if (!sli4->has_extents) {
> +			u32	i = 0, size = 0;
> +			u32	*base = sli4->extent[0].base;
> +
> +			if (!base) {
> +				size = SLI_RSRC_MAX * sizeof(u32);
> +				base = kzalloc(size, GFP_ATOMIC);
> +				if (!base)
> +					return EFC_FAIL;
> +
> +				memset(base, 0,
> +				       SLI_RSRC_MAX * sizeof(u32));
> +			}
> +
> +			for (i = 0; i < SLI_RSRC_MAX; i++) {
> +				sli4->extent[i].number = 1;
> +				sli4->extent[i].n_alloc = 0;
> +				sli4->extent[i].base = &base[i];
> +			}
> +
> +			sli4->extent[SLI_RSRC_VFI].base[0] =
> +				le16_to_cpu(read_config->vfi_base);
> +			sli4->extent[SLI_RSRC_VFI].size =
> +				le16_to_cpu(read_config->vfi_count);
> +
> +			sli4->extent[SLI_RSRC_VPI].base[0] =
> +				le16_to_cpu(read_config->vpi_base);
> +			sli4->extent[SLI_RSRC_VPI].size =
> +				le16_to_cpu(read_config->vpi_count);
> +
> +			sli4->extent[SLI_RSRC_RPI].base[0] =
> +				le16_to_cpu(read_config->rpi_base);
> +			sli4->extent[SLI_RSRC_RPI].size =
> +				le16_to_cpu(read_config->rpi_count);
> +
> +			sli4->extent[SLI_RSRC_XRI].base[0] =
> +				le16_to_cpu(read_config->xri_base);
> +			sli4->extent[SLI_RSRC_XRI].size =
> +				le16_to_cpu(read_config->xri_count);
> +
> +			sli4->extent[SLI_RSRC_FCFI].base[0] = 0;
> +			sli4->extent[SLI_RSRC_FCFI].size =
> +				le16_to_cpu(read_config->fcfi_count);
> +		}
> +
> +		for (i = 0; i < SLI_RSRC_MAX; i++) {
> +			total = sli4->extent[i].number *
> +				sli4->extent[i].size;
> +			total_size = BITS_TO_LONGS(total) * sizeof(long);
> +			sli4->extent[i].use_map =
> +				kzalloc(total_size, GFP_ATOMIC);
> +			if (!sli4->extent[i].use_map) {
> +				efc_log_err(sli4, "bitmap memory allocation failed %d\n",
> +				       i);
> +				return EFC_FAIL;
> +			}
> +			sli4->extent[i].map_size = total;
> +		}
> +
> +		sli4->topology =
> +				(le32_to_cpu(read_config->topology_dword) &
> +				 SLI4_READ_CFG_RESP_TOPOLOGY) >> 24;
> +		switch (sli4->topology) {
> +		case SLI4_READ_CFG_TOPO_FC:
> +			efc_log_info(sli4, "FC (unknown)\n");
> +			break;
> +		case SLI4_READ_CFG_TOPO_FC_DA:
> +			efc_log_info(sli4, "FC (direct attach)\n");
> +			break;
> +		case SLI4_READ_CFG_TOPO_FC_AL:
> +			efc_log_info(sli4, "FC (arbitrated loop)\n");
> +			break;
> +		default:
> +			efc_log_info(sli4, "bad topology %#x\n",
> +				sli4->topology);
> +		}
> +
> +		sli4->e_d_tov = le16_to_cpu(read_config->e_d_tov);
> +		sli4->r_a_tov = le16_to_cpu(read_config->r_a_tov);
> +
> +		sli4->link_module_type = le16_to_cpu(read_config->lmt);
> +
> +		sli4->qinfo.max_qcount[SLI_QTYPE_EQ] =
> +				le16_to_cpu(read_config->eq_count);
> +		sli4->qinfo.max_qcount[SLI_QTYPE_CQ] =
> +				le16_to_cpu(read_config->cq_count);
> +		sli4->qinfo.max_qcount[SLI_QTYPE_WQ] =
> +				le16_to_cpu(read_config->wq_count);
> +		sli4->qinfo.max_qcount[SLI_QTYPE_RQ] =
> +				le16_to_cpu(read_config->rq_count);
> +
> +		/*
> +		 * READ_CONFIG doesn't give the max number of MQ. Applications
> +		 * will typically want 1, but we may need another at some future
> +		 * date. Dummy up a "max" MQ count here.
> +		 */
> +		sli4->qinfo.max_qcount[SLI_QTYPE_MQ] = SLI_USER_MQ_COUNT;
> +	} else {
> +		efc_log_err(sli4, "bad READ_CONFIG write\n");
> +		return EFC_FAIL;
> +	}
> +
> +	if (!sli_cmd_common_get_sli4_parameters(sli4, sli4->bmbx.virt,
> +					       SLI4_BMBX_SIZE)) {
> +		struct sli4_rsp_cmn_get_sli4_params	*parms =
> +			(struct sli4_rsp_cmn_get_sli4_params *)
> +			(((u8 *)sli4->bmbx.virt) +
> +			offsetof(struct sli4_cmd_sli_config, payload.embed));
> +		u32 dwflags_loopback;
> +		u32 dwflags_eq_page_cnt;
> +		u32 dwflags_cq_page_cnt;
> +		u32 dwflags_mq_page_cnt;
> +		u32 dwflags_wq_page_cnt;
> +		u32 dwflags_rq_page_cnt;
> +		u32 dwflags_sgl_page_cnt;
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
> +				__func__);
> +			return EFC_FAIL;
> +		} else if (parms->hdr.status) {
> +			efc_log_err(sli4, "COMMON_GET_SLI4_PARAMETERS bad status %#x",
> +			       parms->hdr.status);
> +			efc_log_err(sli4, "additional status %#x\n",
> +			       parms->hdr.additional_status);
> +			return EFC_FAIL;
> +		}
> +
> +		dwflags_loopback = le32_to_cpu(parms->dw16_loopback_scope);
> +		dwflags_eq_page_cnt = le32_to_cpu(parms->dw6_eq_page_cnt);
> +		dwflags_cq_page_cnt = le32_to_cpu(parms->dw8_cq_page_cnt);
> +		dwflags_mq_page_cnt = le32_to_cpu(parms->dw10_mq_page_cnt);
> +		dwflags_wq_page_cnt = le32_to_cpu(parms->dw12_wq_page_cnt);
> +		dwflags_rq_page_cnt = le32_to_cpu(parms->dw14_rq_page_cnt);
> +
> +		sli4->auto_reg =
> +			(dwflags_loopback & RSP_GET_PARAM_AREG);
> +		sli4->auto_xfer_rdy =
> +			(dwflags_loopback & RSP_GET_PARAM_AGXF);
> +		sli4->hdr_template_req =
> +			(dwflags_loopback & RSP_GET_PARAM_HDRR);
> +		sli4->t10_dif_inline_capable =
> +			(dwflags_loopback & RSP_GET_PARAM_TIMM);
> +		sli4->t10_dif_separate_capable =
> +			(dwflags_loopback & RSP_GET_PARAM_TSMM);
> +
> +		sli4->mq_create_version =
> +				GET_Q_CREATE_VERSION(dwflags_mq_page_cnt);
> +		sli4->cq_create_version =
> +				GET_Q_CREATE_VERSION(dwflags_cq_page_cnt);
> +
> +		sli4->rq_min_buf_size =
> +			le16_to_cpu(parms->min_rq_buffer_size);
> +		sli4->rq_max_buf_size =
> +			le32_to_cpu(parms->max_rq_buffer_size);
> +
> +		sli4->qinfo.qpage_count[SLI_QTYPE_EQ] =
> +			(dwflags_eq_page_cnt & RSP_GET_PARAM_EQ_PAGE_CNT_MASK);
> +		sli4->qinfo.qpage_count[SLI_QTYPE_CQ] =
> +			(dwflags_cq_page_cnt & RSP_GET_PARAM_CQ_PAGE_CNT_MASK);
> +		sli4->qinfo.qpage_count[SLI_QTYPE_MQ] =
> +			(dwflags_mq_page_cnt & RSP_GET_PARAM_MQ_PAGE_CNT_MASK);
> +		sli4->qinfo.qpage_count[SLI_QTYPE_WQ] =
> +			(dwflags_wq_page_cnt & RSP_GET_PARAM_WQ_PAGE_CNT_MASK);
> +		sli4->qinfo.qpage_count[SLI_QTYPE_RQ] =
> +			(dwflags_rq_page_cnt & RSP_GET_PARAM_RQ_PAGE_CNT_MASK);
> +
> +		/* save count methods and masks for each queue type */
> +
> +		sli4->qinfo.count_mask[SLI_QTYPE_EQ] =
> +				le16_to_cpu(parms->eqe_count_mask);
> +		sli4->qinfo.count_method[SLI_QTYPE_EQ] =
> +				GET_Q_CNT_METHOD(dwflags_eq_page_cnt);
> +
> +		sli4->qinfo.count_mask[SLI_QTYPE_CQ] =
> +				le16_to_cpu(parms->cqe_count_mask);
> +		sli4->qinfo.count_method[SLI_QTYPE_CQ] =
> +				GET_Q_CNT_METHOD(dwflags_cq_page_cnt);
> +
> +		sli4->qinfo.count_mask[SLI_QTYPE_MQ] =
> +				le16_to_cpu(parms->mqe_count_mask);
> +		sli4->qinfo.count_method[SLI_QTYPE_MQ] =
> +				GET_Q_CNT_METHOD(dwflags_mq_page_cnt);
> +
> +		sli4->qinfo.count_mask[SLI_QTYPE_WQ] =
> +				le16_to_cpu(parms->wqe_count_mask);
> +		sli4->qinfo.count_method[SLI_QTYPE_WQ] =
> +				GET_Q_CNT_METHOD(dwflags_wq_page_cnt);
> +
> +		sli4->qinfo.count_mask[SLI_QTYPE_RQ] =
> +				le16_to_cpu(parms->rqe_count_mask);
> +		sli4->qinfo.count_method[SLI_QTYPE_RQ] =
> +				GET_Q_CNT_METHOD(dwflags_rq_page_cnt);
> +
> +		/* now calculate max queue entries */
> +		sli_calc_max_qentries(sli4);
> +
> +		dwflags_sgl_page_cnt = le32_to_cpu(parms->dw18_sgl_page_cnt);
> +
> +		/* max # of pages */
> +		sli4->max_sgl_pages =
> +				(dwflags_sgl_page_cnt &
> +				 RSP_GET_PARAM_SGL_PAGE_CNT_MASK);
> +
> +		/* bit map of available sizes */
> +		sli4->sgl_page_sizes =
> +				(dwflags_sgl_page_cnt &
> +				 RSP_GET_PARAM_SGL_PAGE_SZS_MASK) >> 8;
> +		/* ignore HLM here. Use value from REQUEST_FEATURES */
> +		sli4->sge_supported_length =
> +				le32_to_cpu(parms->sge_supported_length);
> +		sli4->sgl_pre_registration_required =
> +			(dwflags_loopback & RSP_GET_PARAM_SGLR);
> +		/* default to using pre-registered SGL's */
> +		sli4->sgl_pre_registered = true;
> +
> +		sli4->perf_hint =
> +			(dwflags_loopback & RSP_GET_PARAM_PHON);
> +		sli4->perf_wq_id_association =
> +			(dwflags_loopback & RSP_GET_PARAM_PHWQ);
> +
> +		sli4->rq_batch =
> +			(le16_to_cpu(parms->dw15w1_rq_db_window) &
> +			 RSP_GET_PARAM_RQ_DB_WINDOW_MASK) >> 12;
> +
> +		/* Use the highest available WQE size. */
> +		if (((dwflags_wq_page_cnt &
> +		    RSP_GET_PARAM_WQE_SZS_MASK) >> 8) &
> +		    SLI4_128BYTE_WQE_SUPPORT)
> +			sli4->wqe_size = SLI4_WQE_EXT_BYTES;
> +		else
> +			sli4->wqe_size = SLI4_WQE_BYTES;
> +	}
> +
> +	sli4->port_number = 0;
> +
> +	/*
> +	 * Issue COMMON_GET_CNTL_ATTRIBUTES to get port_number. Temporarily
> +	 * uses VPD DMA buffer as the response won't fit in the embedded
> +	 * buffer.
> +	 */
> +	memset(sli4->vpd_data.virt, 0, sli4->vpd_data.size);
> +	if (!sli_cmd_common_get_cntl_attributes(sli4, sli4->bmbx.virt,
> +					       SLI4_BMBX_SIZE,
> +					       &sli4->vpd_data)) {

	if (sli_cmd_...)
		return EFC_FAIL;

> +		struct sli4_rsp_cmn_get_cntl_attributes *attr =
> +			sli4->vpd_data.virt;
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
> +				__func__);
> +			return EFC_FAIL;
> +		} else if (attr->hdr.status) {
> +			efc_log_err(sli4, "COMMON_GET_CNTL_ATTRIBUTES bad status %#x",
> +			       attr->hdr.status);
> +			efc_log_err(sli4, "additional status %#x\n",
> +			       attr->hdr.additional_status);
> +			return EFC_FAIL;
> +		}
> +
> +		sli4->port_number = (attr->port_num_type_flags &
> +					    SLI4_CNTL_ATTR_PORTNUM);
> +
> +		memcpy(sli4->bios_version_string,
> +		       attr->bios_version_str,
> +		       sizeof(sli4->bios_version_string));
> +	} else {
> +		efc_log_err(sli4, "bad COMMON_GET_CNTL_ATTRIBUTES write\n");
> +		return EFC_FAIL;
> +	}
> +
> +	psize = sizeof(struct sli4_rsp_cmn_get_cntl_addl_attributes);
> +	data.size = psize;
> +	data.virt = dma_alloc_coherent(&sli4->pcidev->dev, data.size,
> +				       &data.phys, GFP_DMA);
> +	if (!data.virt) {
> +		memset(&data, 0, sizeof(struct efc_dma));
> +		efc_log_err(sli4, "Failed to allocate memory for GET_CNTL_ADDL_ATTR\n");

return EFC_FAIL ?

> +	} else {
> +		if (!sli_cmd_common_get_cntl_addl_attributes(sli4,
> +							    sli4->bmbx.virt,
> +							    SLI4_BMBX_SIZE,
> +							    &data)) {

	if (sli_cmd_...)
		return EFC_FAIL;

> +			struct sli4_rsp_cmn_get_cntl_addl_attributes *attr;
> +
> +			attr = data.virt;
> +			if (sli_bmbx_command(sli4)) {
> +				efc_log_crit(sli4, "mailbox fail (GET_CNTL_ADDL_ATTR)\n");
> +				dma_free_coherent(&sli4->pcidev->dev, data.size,
> +						  data.virt, data.phys);
> +				return EFC_FAIL;
> +			}
> +			if (attr->hdr.status) {
> +				efc_log_err(sli4, "GET_CNTL_ADDL_ATTR bad status %#x\n",
> +				       attr->hdr.status);
> +				dma_free_coherent(&sli4->pcidev->dev, data.size,
> +						  data.virt, data.phys);
> +				return EFC_FAIL;
> +			}
> +
> +			memcpy(sli4->ipl_name, attr->ipl_file_name,
> +			       sizeof(sli4->ipl_name));
> +
> +			efc_log_info(sli4, "IPL:%s\n",
> +				(char *)sli4->ipl_name);
> +		} else {
> +			efc_log_err(sli4, "bad GET_CNTL_ADDL_ATTR write\n");
> +			dma_free_coherent(&sli4->pcidev->dev, data.size,
> +					  data.virt, data.phys);
> +			return EFC_FAIL;
> +		}
> +
> +		dma_free_coherent(&sli4->pcidev->dev, data.size, data.virt,
> +				  data.phys);
> +		memset(&data, 0, sizeof(struct efc_dma));
> +	}
> +
> +	if (!sli_cmd_common_get_port_name(sli4, sli4->bmbx.virt,
> +					 SLI4_BMBX_SIZE)) {
> +		struct sli4_rsp_cmn_get_port_name	*port_name =
> +			(struct sli4_rsp_cmn_get_port_name *)
> +			(((u8 *)sli4->bmbx.virt) +
> +			offsetof(struct sli4_cmd_sli_config, payload.embed));
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
> +				__func__);
> +			return EFC_FAIL;
> +		}
> +
> +		sli4->port_name[0] =
> +			port_name->port_name[sli4->port_number];
> +	}
> +	sli4->port_name[1] = '\0';
> +
> +	if (!sli_cmd_read_rev(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +			     &sli4->vpd_data)) {

	if (sli_cmd_...)
		return EFC_FAIL;

> +		struct sli4_cmd_read_rev	*read_rev = sli4->bmbx.virt;
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "bootstrap mailbox write fail (READ_REV)\n");
> +			return EFC_FAIL;
> +		}
> +		if (le16_to_cpu(read_rev->hdr.status)) {
> +			efc_log_err(sli4, "READ_REV bad status %#x\n",
> +			       le16_to_cpu(read_rev->hdr.status));
> +			return EFC_FAIL;
> +		}
> +
> +		sli4->fw_rev[0] =
> +				le32_to_cpu(read_rev->first_fw_id);
> +		memcpy(sli4->fw_name[0], read_rev->first_fw_name,
> +		       sizeof(sli4->fw_name[0]));
> +
> +		sli4->fw_rev[1] =
> +				le32_to_cpu(read_rev->second_fw_id);
> +		memcpy(sli4->fw_name[1], read_rev->second_fw_name,
> +		       sizeof(sli4->fw_name[1]));
> +
> +		sli4->hw_rev[0] = le32_to_cpu(read_rev->first_hw_rev);
> +		sli4->hw_rev[1] = le32_to_cpu(read_rev->second_hw_rev);
> +		sli4->hw_rev[2] = le32_to_cpu(read_rev->third_hw_rev);
> +
> +		efc_log_info(sli4, "FW1:%s (%08x) / FW2:%s (%08x)\n",
> +			read_rev->first_fw_name,
> +			      le32_to_cpu(read_rev->first_fw_id),
> +			      read_rev->second_fw_name,
> +			      le32_to_cpu(read_rev->second_fw_id));
> +
> +		efc_log_info(sli4, "HW1: %08x / HW2: %08x\n",
> +			le32_to_cpu(read_rev->first_hw_rev),
> +			      le32_to_cpu(read_rev->second_hw_rev));
> +
> +		/* Check that all VPD data was returned */
> +		if (le32_to_cpu(read_rev->returned_vpd_length) !=
> +		    le32_to_cpu(read_rev->actual_vpd_length)) {
> +			efc_log_info(sli4, "VPD length: avail=%d returned=%d actual=%d\n",
> +				le32_to_cpu(read_rev->available_length_dword) &
> +					    SLI4_READ_REV_AVAILABLE_LENGTH,
> +				le32_to_cpu(read_rev->returned_vpd_length),
> +				le32_to_cpu(read_rev->actual_vpd_length));
> +		}
> +		sli4->vpd_length = le32_to_cpu(read_rev->returned_vpd_length);
> +	} else {
> +		efc_log_err(sli4, "bad READ_REV write\n");
> +		return EFC_FAIL;
> +	}
> +
> +	if (!sli_cmd_read_nvparms(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE)) {

early return here too

> +		struct sli4_cmd_read_nvparms *read_nvparms = sli4->bmbx.virt;
> +
> +		if (sli_bmbx_command(sli4)) {
> +			efc_log_crit(sli4, "bootstrap mailbox fail (READ_NVPARMS)\n");
> +			return EFC_FAIL;
> +		}
> +		if (le16_to_cpu(read_nvparms->hdr.status)) {
> +			efc_log_err(sli4, "READ_NVPARMS bad status %#x\n",
> +			       le16_to_cpu(read_nvparms->hdr.status));
> +			return EFC_FAIL;
> +		}
> +
> +		memcpy(sli4->wwpn, read_nvparms->wwpn,
> +		       sizeof(sli4->wwpn));
> +		memcpy(sli4->wwnn, read_nvparms->wwnn,
> +		       sizeof(sli4->wwnn));
> +
> +		efc_log_info(sli4, "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
> +			sli4->wwpn[0],
> +			      sli4->wwpn[1],
> +			      sli4->wwpn[2],
> +			      sli4->wwpn[3],
> +			      sli4->wwpn[4],
> +			      sli4->wwpn[5],
> +			      sli4->wwpn[6],
> +			      sli4->wwpn[7]);
> +		efc_log_info(sli4, "WWNN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
> +			sli4->wwnn[0],
> +			      sli4->wwnn[1],
> +			      sli4->wwnn[2],
> +			      sli4->wwnn[3],
> +			      sli4->wwnn[4],
> +			      sli4->wwnn[5],
> +			      sli4->wwnn[6],
> +			      sli4->wwnn[7]);
> +	} else {
> +		efc_log_err(sli4, "bad READ_NVPARMS write\n");
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_setup(struct sli4 *sli4, void *os, struct pci_dev  *pdev,
> +	  void __iomem *reg[])
> +{
> +	u32 intf = U32_MAX;
> +	u32 pci_class_rev = 0;
> +	u32 rev_id = 0;
> +	u32 family = 0;
> +	u32 asic_id = 0;
> +	u32 i;
> +	struct sli4_asic_entry_t *asic;
> +
> +	memset(sli4, 0, sizeof(struct sli4));
> +
> +	sli4->os = os;
> +	sli4->pcidev = pdev;
> +
> +	for (i = 0; i < 6; i++)
> +		sli4->reg[i] = reg[i];
> +	/*
> +	 * Read the SLI_INTF register to discover the register layout
> +	 * and other capability information
> +	 */
> +	pci_read_config_dword(pdev, SLI4_INTF_REG, &intf);

Probably checking the return value would be good, though most call
sites of this function don't do it.
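
E.g. (sketch):

	if (pci_read_config_dword(pdev, SLI4_INTF_REG, &intf))
		return EFC_FAIL;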

> +
> +	if ((intf & SLI4_INTF_VALID_MASK) != (u32)SLI4_INTF_VALID_VALUE) {
> +		efc_log_err(sli4, "SLI_INTF is not valid\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/* driver only support SLI-4 */
> +	if ((intf & SLI4_INTF_REV_MASK) != SLI4_INTF_REV_S4) {
> +		efc_log_err(sli4, "Unsupported SLI revision (intf=%#x)\n",
> +		       intf);
> +		return EFC_FAIL;
> +	}
> +
> +	sli4->sli_family = intf & SLI4_INTF_FAMILY_MASK;
> +
> +	sli4->if_type = intf & SLI4_INTF_IF_TYPE_MASK;
> +	efc_log_info(sli4, "status=%#x error1=%#x error2=%#x\n",
> +		sli_reg_read_status(sli4),
> +			sli_reg_read_err1(sli4),
> +			sli_reg_read_err2(sli4));
> +
> +	/*
> +	 * set the ASIC type and revision
> +	 */
> +	pci_read_config_dword(pdev, PCI_CLASS_REVISION, &pci_class_rev);
> +	rev_id = pci_class_rev & 0xff;
> +	family = sli4->sli_family;
> +	if (family == SLI4_FAMILY_CHECK_ASIC_TYPE) {
> +		pci_read_config_dword(pdev, SLI4_ASIC_ID_REG, &asic_id);
> +
> +		family = asic_id & SLI4_ASIC_GEN_MASK;
> +	}
> +
> +	for (i = 0, asic = sli4_asic_table; i < ARRAY_SIZE(sli4_asic_table);
> +	     i++, asic++) {
> +		if (rev_id == asic->rev_id && family == asic->family) {
> +			sli4->asic_type = family;
> +			sli4->asic_rev = rev_id;
> +			break;
> +		}
> +	}
> +	/* Fail if no matching asic type/rev was found */
> +	if (!sli4->asic_type || !sli4->asic_rev) {
> +		efc_log_err(sli4, "no matching asic family/rev found: %02x/%02x\n",
> +		       family, rev_id);
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * The bootstrap mailbox is equivalent to a MQ with a single 256 byte
> +	 * entry, a CQ with a single 16 byte entry, and no event queue.
> +	 * Alignment must be 16 bytes as the low order address bits in the
> +	 * address register are also control / status.
> +	 */
> +	sli4->bmbx.size = SLI4_BMBX_SIZE + sizeof(struct sli4_mcqe);
> +	sli4->bmbx.virt = dma_alloc_coherent(&pdev->dev, sli4->bmbx.size,
> +					     &sli4->bmbx.phys, GFP_DMA);
> +	if (!sli4->bmbx.virt) {
> +		memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
> +		efc_log_err(sli4, "bootstrap mailbox allocation failed\n");
> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->bmbx.phys & SLI4_BMBX_MASK_LO) {
> +		efc_log_err(sli4, "bad alignment for bootstrap mailbox\n");
> +		return EFC_FAIL;
> +	}
> +
> +	efc_log_info(sli4, "bmbx v=%p p=0x%x %08x s=%zd\n", sli4->bmbx.virt,
> +		upper_32_bits(sli4->bmbx.phys),
> +		      lower_32_bits(sli4->bmbx.phys), sli4->bmbx.size);
> +
> +	/* 4096 is arbitrary. What should this value actually be? */
> +	sli4->vpd_data.size = 4096;
> +	sli4->vpd_data.virt = dma_alloc_coherent(&pdev->dev,
> +						 sli4->vpd_data.size,
> +						 &sli4->vpd_data.phys,
> +						 GFP_DMA);
> +	if (!sli4->vpd_data.virt) {
> +		memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
> +		/* Note that failure isn't fatal in this specific case */
> +		efc_log_info(sli4, "VPD buffer allocation failed\n");
> +	}
> +
> +	if (!sli_fw_init(sli4)) {
> +		efc_log_err(sli4, "FW initialization failed\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * Set one of fcpi(initiator), fcpt(target), fcpc(combined) to true
> +	 * in addition to any other desired features
> +	 */
> +	sli4->features = (SLI4_REQFEAT_IAAB | SLI4_REQFEAT_NPIV |
> +				 SLI4_REQFEAT_DIF | SLI4_REQFEAT_VF |
> +				 SLI4_REQFEAT_FCPC | SLI4_REQFEAT_IAAR |
> +				 SLI4_REQFEAT_HLM | SLI4_REQFEAT_PERFH |
> +				 SLI4_REQFEAT_RXSEQ | SLI4_REQFEAT_RXRI |
> +				 SLI4_REQFEAT_MRQP);
> +
> +	/* use performance hints if available */
> +	if (sli4->perf_hint)
> +		sli4->features |= SLI4_REQFEAT_PERFH;
> +
> +	if (sli_request_features(sli4, &sli4->features, true))
> +		return EFC_FAIL;
> +
> +	if (sli_get_config(sli4))
> +		return EFC_FAIL;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_init(struct sli4 *sli4)
> +{
> +	if (sli4->has_extents) {
> +		efc_log_info(sli4, "extend allocation not implemented\n");

Is this a TODO?

> +		return EFC_FAIL;
> +	}
> +
> +	if (sli4->high_login_mode)
> +		sli4->features |= SLI4_REQFEAT_HLM;
> +	else
> +		sli4->features &= (~SLI4_REQFEAT_HLM);
> +	sli4->features &= (~SLI4_REQFEAT_RXSEQ);
> +	sli4->features &= (~SLI4_REQFEAT_RXRI);
> +
> +	if (sli_request_features(sli4, &sli4->features, false))
> +		return EFC_FAIL;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_reset(struct sli4 *sli4)
> +{
> +	u32	i;
> +
> +	if (!sli_fw_init(sli4)) {
> +		efc_log_crit(sli4, "FW initialization failed\n");
> +		return EFC_FAIL;
> +	}
> +
> +	kfree(sli4->extent[0].base);
> +	sli4->extent[0].base = NULL;
> +
> +	for (i = 0; i < SLI_RSRC_MAX; i++) {
> +		kfree(sli4->extent[i].use_map);
> +		sli4->extent[i].use_map = NULL;
> +		sli4->extent[i].base = NULL;
> +	}
> +
> +	if (sli_get_config(sli4))
> +		return EFC_FAIL;

	return sli_get_config(sli4);

> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_fw_reset(struct sli4 *sli4)
> +{
> +	u32 val;
> +	bool ready;
> +
> +	/*
> +	 * Firmware must be ready before issuing the reset.
> +	 */
> +	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
> +	if (!ready) {

	if (!sli_wait_...)

and drop the ready variable

> +		efc_log_crit(sli4, "FW status is NOT ready\n");
> +		return EFC_FAIL;
> +	}
> +	/* Lancer uses PHYDEV_CONTROL */
> +
> +	val = SLI4_PHYDEV_CTRL_FRST;
> +	writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
> +
> +	/* wait for the FW to become ready after the reset */
> +	ready = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
> +	if (!ready) {

	if (!sli_wait_...)

> +		efc_log_crit(sli4, "Failed to become ready after firmware reset\n");
> +		return EFC_FAIL;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_teardown(struct sli4 *sli4)
> +{
> +	u32 i;
> +
> +	kfree(sli4->extent[0].base);
> +	sli4->extent[0].base = NULL;
> +
> +	for (i = 0; i < SLI_RSRC_MAX; i++) {
> +		sli4->extent[i].base = NULL;
> +
> +		kfree(sli4->extent[i].use_map);
> +		sli4->extent[i].use_map = NULL;
> +	}
> +
> +	if (sli_fw_term(sli4))
> +		efc_log_err(sli4, "FW deinitialization failed\n");
> +
> +	dma_free_coherent(&sli4->pcidev->dev, sli4->vpd_data.size,
> +			  sli4->vpd_data.virt, sli4->vpd_data.phys);
> +	memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
> +
> +	dma_free_coherent(&sli4->pcidev->dev, sli4->bmbx.size,
> +			  sli4->bmbx.virt, sli4->bmbx.phys);
> +	memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
> +
> +	return EFC_SUCCESS;

This function always succeeds, so why return a status? What could the
caller do about it anyway?

> +}
> +
> +int
> +sli_callback(struct sli4 *sli4, enum sli4_callback which,
> +	     void *func, void *arg)
> +{
> +	if (!func) {
> +		efc_log_err(sli4, "bad parameter sli4=%p which=%#x func=%p\n",
> +		       sli4, which, func);
> +		return EFC_FAIL;
> +	}
> +
> +	switch (which) {
> +	case SLI4_CB_LINK:
> +		sli4->link = func;
> +		sli4->link_arg = arg;
> +		break;
> +	default:
> +		efc_log_info(sli4, "unknown callback %#x\n", which);
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq,
> +		    u32 num_eq, u32 shift, u32 delay_mult)
> +{
> +	sli_cmd_common_modify_eq_delay(sli4, sli4->bmbx.virt, SLI4_BMBX_SIZE,
> +				       eq, num_eq, shift, delay_mult);
> +
> +	if (sli_bmbx_command(sli4)) {
> +		efc_log_crit(sli4, "bootstrap mailbox write fail (MODIFY EQ DELAY)\n");
> +		return EFC_FAIL;
> +	}
> +	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
> +		efc_log_err(sli4, "bad status MODIFY EQ DELAY\n");
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
> +		   u32 *rid, u32 *index)
> +{
> +	int rc = EFC_SUCCESS;
> +	u32 size;
> +	u32 extent_idx;
> +	u32 item_idx;
> +	u32 position;
> +
> +	*rid = U32_MAX;
> +	*index = U32_MAX;
> +
> +	switch (rtype) {
> +	case SLI_RSRC_VFI:
> +	case SLI_RSRC_VPI:
> +	case SLI_RSRC_RPI:
> +	case SLI_RSRC_XRI:
> +		position =
> +		find_first_zero_bit(sli4->extent[rtype].use_map,
> +				    sli4->extent[rtype].map_size);
> +		if (position >= sli4->extent[rtype].map_size) {
> +			efc_log_err(sli4, "out of resource %d (alloc=%d)\n",
> +				    rtype, sli4->extent[rtype].n_alloc);
> +			rc = EFC_FAIL;
> +			break;
> +		}
> +		set_bit(position, sli4->extent[rtype].use_map);
> +		*index = position;
> +
> +		size = sli4->extent[rtype].size;
> +
> +		extent_idx = *index / size;
> +		item_idx   = *index % size;
> +
> +		*rid = sli4->extent[rtype].base[extent_idx] + item_idx;
> +
> +		sli4->extent[rtype].n_alloc++;
> +		break;
> +	default:
> +		rc = EFC_FAIL;
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +sli_resource_free(struct sli4 *sli4,
> +		  enum sli4_resource rtype, u32 rid)
> +{
> +	int rc = EFC_FAIL;
> +	u32 x;
> +	u32 size, *base;
> +
> +	switch (rtype) {
> +	case SLI_RSRC_VFI:
> +	case SLI_RSRC_VPI:
> +	case SLI_RSRC_RPI:
> +	case SLI_RSRC_XRI:
> +		/*
> +		 * Figure out which extent contains the resource ID. I.e. find
> +		 * the extent such that
> +		 *   extent->base <= resource ID < extent->base + extent->size
> +		 */
> +		base = sli4->extent[rtype].base;
> +		size = sli4->extent[rtype].size;
> +
> +		/*
> +		 * In the case of FW reset, this may be cleared
> +		 * but the force_free path will still attempt to
> +		 * free the resource. Prevent a NULL pointer access.
> +		 */
> +		if (base) {

		if (!base)
			break;

> +			for (x = 0; x < sli4->extent[rtype].number;
> +			     x++) {

the x++ should fit on the same line as the for.

> +				if (rid >= base[x] &&
> +				    (rid < (base[x] + size))) {

And by inverting the logic

			if (!(...))
				continue;


> +					rid -= base[x];
> +					clear_bit((x * size) + rid,
> +						  sli4->extent[rtype].use_map);
> +					rc = EFC_SUCCESS;
> +					break;

It's possible to reduce the level of indentation, yay!
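
The whole case body could then look roughly like this (untested sketch):

		base = sli4->extent[rtype].base;
		size = sli4->extent[rtype].size;

		/* base may already be cleared after a FW reset */
		if (!base)
			break;

		for (x = 0; x < sli4->extent[rtype].number; x++) {
			if (rid < base[x] || rid >= base[x] + size)
				continue;

			rid -= base[x];
			clear_bit((x * size) + rid,
				  sli4->extent[rtype].use_map);
			rc = EFC_SUCCESS;
			break;
		}
		break;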

> +				}
> +			}
> +		}
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype)
> +{
> +	int rc = EFC_FAIL;
> +	u32 i;
> +
> +	switch (rtype) {
> +	case SLI_RSRC_VFI:
> +	case SLI_RSRC_VPI:
> +	case SLI_RSRC_RPI:
> +	case SLI_RSRC_XRI:
> +		for (i = 0; i < sli4->extent[rtype].map_size; i++)
> +			clear_bit(i, sli4->extent[rtype].use_map);
> +		rc = EFC_SUCCESS;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +int sli_raise_ue(struct sli4 *sli4, u8 dump)
> +{
> +	u32 val = 0;
> +
> +	if (dump == SLI4_FUNC_DESC_DUMP) {
> +		val = SLI4_PORT_CTRL_FDD | SLI4_PORT_CTRL_IP;
> +		writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
> +	} else {
> +		val = SLI4_PHYDEV_CTRL_FRST;
> +
> +		if (dump == SLI4_CHIP_LEVEL_DUMP)
> +			val |= SLI4_PHYDEV_CTRL_DD;
> +		writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int sli_dump_is_ready(struct sli4 *sli4)
> +{
> +	int rc = SLI4_DUMP_READY_STATUS_NOT_READY;
> +	u32 port_val;
> +	u32 bmbx_val;
> +
> +	/*
> +	 * Ensure that the port is ready AND the mailbox is
> +	 * ready before signaling that the dump is ready to go.
> +	 */
> +	port_val = sli_reg_read_status(sli4);
> +	bmbx_val = readl(sli4->reg[0] + SLI4_BMBX_REG);
> +
> +	if ((bmbx_val & SLI4_BMBX_RDY) &&
> +	    (port_val & SLI4_PORT_STATUS_RDY)) {
> +		if (port_val & SLI4_PORT_STATUS_DIP)
> +			rc = SLI4_DUMP_READY_STATUS_DD_PRESENT;
> +		else if (port_val & SLI4_PORT_STATUS_FDP)
> +			rc = SLI4_DUMP_READY_STATUS_FDB_PRESENT;
> +	}
> +
> +	return rc;
> +}
> +
> +bool sli_reset_required(struct sli4 *sli4)
> +{
> +	u32 val;
> +
> +	val = sli_reg_read_status(sli4);
> +	return (val & SLI4_PORT_STATUS_RN);
> +}
> +
> +int
> +sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, size_t size,
> +		       u16 xri,
> +		       u32 xri_count, struct efc_dma *page0[],
> +		       struct efc_dma *page1[], struct efc_dma *dma)
> +{
> +	struct sli4_rqst_post_sgl_pages *post = NULL;
> +	u32 i;
> +	__le32 req_len;
> +
> +	post = sli_config_cmd_init(sli4, buf, size,
> +				   SLI_CONFIG_PYLD_LENGTH(post_sgl_pages),
> +				   dma);
> +	if (!post)
> +		return EFC_FAIL;
> +
> +	/* payload size calculation */
> +	/* 4 = xri_start + xri_count */
> +	/* xri_count = # of XRI's registered */
> +	/* sizeof(uint64_t) = physical address size */
> +	/* 2 = # of physical addresses per page set */
> +	req_len = cpu_to_le32(4 + (xri_count * (sizeof(uint64_t) * 2)));
> +	sli_cmd_fill_hdr(&post->hdr, SLI4_OPC_POST_SGL_PAGES, SLI4_SUBSYSTEM_FC,
> +			 CMD_V0, req_len);
> +	post->xri_start = cpu_to_le16(xri);
> +	post->xri_count = cpu_to_le16(xri_count);
> +
> +	for (i = 0; i < xri_count; i++) {
> +		post->page_set[i].page0_low  =
> +				cpu_to_le32(lower_32_bits(page0[i]->phys));
> +		post->page_set[i].page0_high =
> +				cpu_to_le32(upper_32_bits(page0[i]->phys));
> +	}
> +
> +	if (page1) {
> +		for (i = 0; i < xri_count; i++) {
> +			post->page_set[i].page1_low =
> +				cpu_to_le32(lower_32_bits(page1[i]->phys));
> +			post->page_set[i].page1_high =
> +				cpu_to_le32(upper_32_bits(page1[i]->phys));
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf,
> +			   size_t size, struct efc_dma *dma,
> +			   u16 rpi,
> +			   struct efc_dma *payload_dma)
> +{
> +	struct sli4_rqst_post_hdr_templates *req = NULL;
> +	uintptr_t phys = 0;
> +	u32 i = 0;
> +	u32 page_count, payload_size;
> +
> +	page_count = sli_page_count(dma->size, SLI_PAGE_SIZE);
> +
> +	payload_size = ((sizeof(struct sli4_rqst_post_hdr_templates) +
> +		(page_count * SZ_DMAADDR)) - sizeof(struct sli4_rqst_hdr));
> +
> +	if (page_count > 16) {
> +		/*
> +		 * We can't fit more than 16 descriptors into an embedded mbox
> +		 * command, it has to be non-embedded
> +		 */
> +		payload_dma->size = payload_size;
> +		payload_dma->virt = dma_alloc_coherent(&sli4->pcidev->dev,
> +						       payload_dma->size,
> +					     &payload_dma->phys, GFP_DMA);
> +		if (!payload_dma->virt) {
> +			memset(payload_dma, 0, sizeof(struct efc_dma));
> +			efc_log_err(sli4, "mbox payload memory allocation fail\n");
> +			return EFC_FAIL;
> +		}
> +		req = sli_config_cmd_init(sli4, buf, size,
> +					  payload_size, payload_dma);
> +	} else {
> +		req = sli_config_cmd_init(sli4, buf, size,
> +					  payload_size, NULL);
> +	}
> +
> +	if (!req)
> +		return EFC_FAIL;
> +
> +	if (rpi == U16_MAX)
> +		rpi = sli4->extent[SLI_RSRC_RPI].base[0];
> +
> +	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_POST_HDR_TEMPLATES,
> +			 SLI4_SUBSYSTEM_FC, CMD_V0,
> +			 CFG_RQST_PYLD_LEN(post_hdr_templates));
> +
> +	req->rpi_offset = cpu_to_le16(rpi);
> +	req->page_count = cpu_to_le16(page_count);
> +	phys = dma->phys;
> +	for (i = 0; i < page_count; i++) {
> +		req->page_descriptor[i].low  = cpu_to_le32(lower_32_bits(phys));
> +		req->page_descriptor[i].high = cpu_to_le32(upper_32_bits(phys));
> +
> +		phys += SLI_PAGE_SIZE;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +u32
> +sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi)
> +{
> +	u32 bytes = 0;
> +
> +	/* Check if header templates needed */
> +	if (sli4->hdr_template_req)
> +		/* round up to a page */
> +		bytes = SLI_ROUND_PAGE(n_rpi * SLI4_HDR_TEMPLATE_SIZE);
> +
> +	return bytes;
> +}
> +
> +const char *
> +sli_fc_get_status_string(u32 status)
> +{
> +	static struct {
> +		u32 code;
> +		const char *label;
> +	} lookup[] = {
> +		{SLI4_FC_WCQE_STATUS_SUCCESS,		"SUCCESS"},
> +		{SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,	"FCP_RSP_FAILURE"},
> +		{SLI4_FC_WCQE_STATUS_REMOTE_STOP,	"REMOTE_STOP"},
> +		{SLI4_FC_WCQE_STATUS_LOCAL_REJECT,	"LOCAL_REJECT"},
> +		{SLI4_FC_WCQE_STATUS_NPORT_RJT,		"NPORT_RJT"},
> +		{SLI4_FC_WCQE_STATUS_FABRIC_RJT,	"FABRIC_RJT"},
> +		{SLI4_FC_WCQE_STATUS_NPORT_BSY,		"NPORT_BSY"},
> +		{SLI4_FC_WCQE_STATUS_FABRIC_BSY,	"FABRIC_BSY"},
> +		{SLI4_FC_WCQE_STATUS_LS_RJT,		"LS_RJT"},
> +		{SLI4_FC_WCQE_STATUS_CMD_REJECT,	"CMD_REJECT"},
> +		{SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,	"FCP_TGT_LENCHECK"},
> +		{SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED, "BUF_LEN_EXCEEDED"},
> +		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
> +				"RQ_INSUFF_BUF_NEEDED"},
> +		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC, "RQ_INSUFF_FRM_DESC"},
> +		{SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,	"RQ_DMA_FAILURE"},
> +		{SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,	"FCP_RSP_TRUNCATE"},
> +		{SLI4_FC_WCQE_STATUS_DI_ERROR,		"DI_ERROR"},
> +		{SLI4_FC_WCQE_STATUS_BA_RJT,		"BA_RJT"},
> +		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
> +				"RQ_INSUFF_XRI_NEEDED"},
> +		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC, "INSUFF_XRI_DISC"},
> +		{SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,	"RX_ERROR_DETECT"},
> +		{SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,	"RX_ABORT_REQUEST"},
> +		};
> +	u32 i;
> +
> +	for (i = 0; i < ARRAY_SIZE(lookup); i++) {
> +		if (status == lookup[i].code)
> +			return lookup[i].label;
> +	}
> +	return "unknown";
> +}
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> index 13f5a0d8d31c..30a951b9593d 100644
> --- a/drivers/scsi/elx/libefc_sli/sli4.h
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -3696,4 +3696,438 @@ sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
>  	hdr->request_length = len;
>  }
>  
> +/**
> + * Get / set parameter functions
> + */
> +
> +static inline int
> +sli_set_sgl_preregister(struct sli4 *sli4, u32 value)
> +{
> +	if (value == 0 && sli4->sgl_pre_registration_required) {
> +		efc_log_err(sli4, "SGL pre-registration required\n");
> +		return -1;

EFC_FAIL

> +	}
> +
> +	sli4->sgl_pre_registered = value != 0 ? true : false;
> +
> +	return 0;

EFC_SUCCESS

> +}
> +
> +static inline u32
> +sli_get_max_sge(struct sli4 *sli4)
> +{
> +	return sli4->sge_supported_length;
> +}
> +
> +static inline u32
> +sli_get_max_sgl(struct sli4 *sli4)
> +{
> +	if (sli4->sgl_page_sizes != 1) {
> +		efc_log_err(sli4, "unsupported SGL page sizes %#x\n",
> +			sli4->sgl_page_sizes);
> +		return 0;
> +	}
> +
> +	return ((sli4->max_sgl_pages * SLI_PAGE_SIZE)
> +		/ sizeof(struct sli4_sge));

The outer brackets are not needed.

> +}
> +
> +static inline enum sli4_link_medium
> +sli_get_medium(struct sli4 *sli4)
> +{
> +	switch (sli4->topology) {
> +	case SLI4_READ_CFG_TOPO_FC:
> +	case SLI4_READ_CFG_TOPO_FC_DA:
> +	case SLI4_READ_CFG_TOPO_FC_AL:
> +		return SLI_LINK_MEDIUM_FC;
> +	default:
> +		return SLI_LINK_MEDIUM_MAX;
> +	}
> +}
> +
> +static inline int
> +sli_set_topology(struct sli4 *sli4, u32 value)
> +{
> +	int	rc = 0;
> +
> +	switch (value) {
> +	case SLI4_READ_CFG_TOPO_FC:
> +	case SLI4_READ_CFG_TOPO_FC_DA:
> +	case SLI4_READ_CFG_TOPO_FC_AL:
> +		sli4->topology = value;
> +		break;
> +	default:
> +		efc_log_err(sli4, "unsupported topology %#x\n", value);
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static inline u32
> +sli_convert_mask_to_count(u32 method, u32 mask)
> +{
> +	u32 count = 0;
> +
> +	if (method) {
> +		count = 1 << (31 - __builtin_clz(mask));
> +		count *= 16;
> +	} else {
> +		count = mask;
> +	}
> +
> +	return count;
> +}
> +
> +static inline u32
> +sli_reg_read_status(struct sli4 *sli)
> +{
> +	return readl(sli->reg[0] + SLI4_PORT_STATUS_REGOFF);
> +}
> +
> +static inline int
> +sli_fw_error_status(struct sli4 *sli4)
> +{
> +	return ((sli_reg_read_status(sli4) & SLI4_PORT_STATUS_ERR) ? 1 : 0);

The outer brackets are not needed.

> +}
> +
> +static inline u32
> +sli_reg_read_err1(struct sli4 *sli)
> +{
> +	return readl(sli->reg[0] + SLI4_PORT_ERROR1);
> +}
> +
> +static inline u32
> +sli_reg_read_err2(struct sli4 *sli)
> +{
> +	return readl(sli->reg[0] + SLI4_PORT_ERROR2);
> +}
> +
> +static inline int
> +sli_fc_rqe_length(struct sli4 *sli4, void *cqe, u32 *len_hdr,
> +		  u32 *len_data)
> +{
> +	struct sli4_fc_async_rcqe	*rcqe = cqe;
> +
> +	*len_hdr = *len_data = 0;
> +
> +	if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
> +		*len_hdr  = rcqe->hdpl_byte & SLI4_RACQE_HDPL;
> +		*len_data = le16_to_cpu(rcqe->data_placement_length);
> +		return 0;
> +	} else {
> +		return -1;
> +	}
> +}
> +
> +static inline u8
> +sli_fc_rqe_fcfi(struct sli4 *sli4, void *cqe)
> +{
> +	u8 code = ((u8 *)cqe)[SLI4_CQE_CODE_OFFSET];
> +	u8 fcfi = U8_MAX;
> +
> +	switch (code) {
> +	case SLI4_CQE_CODE_RQ_ASYNC: {
> +		struct sli4_fc_async_rcqe *rcqe = cqe;
> +
> +		fcfi = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_FCFI;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_RQ_ASYNC_V1: {
> +		struct sli4_fc_async_rcqe_v1 *rcqev1 = cqe;
> +
> +		fcfi = rcqev1->fcfi_byte & SLI4_RACQE_FCFI;
> +		break;
> +	}
> +	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD: {
> +		struct sli4_fc_optimized_write_cmd_cqe *opt_wr = cqe;
> +
> +		fcfi = opt_wr->flags0 & SLI4_OCQE_FCFI;
> +		break;
> +	}
> +	}
> +
> +	return fcfi;
> +}
> +
> +/****************************************************************************
> + * Function prototypes
> + */
> +extern int
> +sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size);

I don't think the extern keyword is needed.
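i.e. just:

	int
	sli_cmd_config_link(struct sli4 *sli4, void *buf, size_t size);

and likewise for the rest of the prototypes below.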

> +extern int
> +sli_cmd_down_link(struct sli4 *sli4, void *buf, size_t size);
> +extern int
> +sli_cmd_dump_type4(struct sli4 *sli4, void *buf, size_t size, u16 wki);
> +extern int
> +sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf, size_t size,
> +				     u32 page_num, struct efc_dma *dma);
> +extern int
> +sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, size_t size, u8 req_stats,
> +			u8 clear_overflow_flags, u8 clear_all_counters);
> +extern int
> +sli_cmd_read_status(struct sli4 *sli4, void *buf, size_t size, u8 clear);
> +extern int
> +sli_cmd_init_link(struct sli4 *sli4, void *buf, size_t size, u32 speed,
> +		  u8 reset_alpa);
> +extern int
> +sli_cmd_init_vfi(struct sli4 *sli4, void *buf, size_t size, u16 vfi, u16 fcfi,
> +		 u16 vpi);
> +extern int
> +sli_cmd_init_vpi(struct sli4 *sli4, void *buf, size_t size, u16 vpi, u16 vfi);
> +extern int
> +sli_cmd_post_xri(struct sli4 *sli4, void *buf, size_t size, u16 base, u16 cnt);
> +extern int
> +sli_cmd_release_xri(struct sli4 *sli4, void *buf, size_t size, u8 num_xri);
> +extern int
> +sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *dma, u16 vpi);
> +extern int
> +sli_cmd_read_topology(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *dma);
> +extern int
> +sli_cmd_read_nvparms(struct sli4 *sli4, void *buf, size_t size);
> +extern int
> +sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, size_t size, u8 *wwpn,
> +		u8 *wwnn, u8 hard_alpa, u32 preferred_d_id);
> +extern int
> +sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, size_t size, u16 index,
> +		struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
> +extern int
> +sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, size_t size, u8 mode,
> +	u16 index, u8 rq_selection_policy, u8 mrq_bit_mask, u16 num_mrqs,
> +	    struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]);
> +extern int
> +sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, size_t size, u32 nport_id,
> +	u16 rpi, u16 vpi, struct efc_dma *dma, u8 update, u8 enable_t10_pi);
> +extern int
> +sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, size_t size, u16 indicator);
> +extern int
> +sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, size_t size, u16 indicator,
> +		  enum sli4_resource which, u32 fc_id);
> +extern int
> +sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, size_t size, u32 fc_id,
> +		__be64 sli_wwpn, u16 vpi, u16 vfi, bool update);
> +extern int
> +sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size, u16 vfi, u16 fcfi,
> +		struct efc_dma dma, u16 vpi, __be64 sli_wwpn, u32 fc_id);
> +extern int
> +sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, size_t size, u16 id, u32 type);
> +extern int
> +sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, size_t size, u16 idx, u32 type);
> +extern int
> +sli_cmd_common_nop(struct sli4 *sli4, void *buf, size_t size, uint64_t context);
> +extern int
> +sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
> +					size_t size, u16 rtype);
> +extern int
> +sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf, size_t size);
> +extern int
> +sli_cmd_common_write_object(struct sli4 *sli4, void *buf, size_t size, u16 noc,
> +		u16 eof, u32 len, u32 offset, char *name, struct efc_dma *dma);
> +extern int
> +sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, size_t size,
> +		char *object_name);
> +extern int
> +sli_cmd_common_read_object(struct sli4 *sli4, void *buf, size_t size,
> +		u32 length, u32 offset, char *name, struct efc_dma *dma);
> +extern int
> +sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *cmd, struct efc_dma *resp);
> +extern int
> +sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf, size_t size,
> +		bool query, bool is_buffer_list, struct efc_dma *dma, u8 fdb);
> +extern int
> +sli_cmd_common_set_features(struct sli4 *sli4, void *buf, size_t size,
> +			    u32 feature, u32 param_len, void *parameter);
> +
> +extern int sli_cqe_mq(struct sli4 *sli4, void *buf);
> +extern int sli_cqe_async(struct sli4 *sli4, void *buf);
> +
> +extern int
> +sli_setup(struct sli4 *sli4, void *os, struct pci_dev *pdev, void __iomem *r[]);
> +extern void sli_calc_max_qentries(struct sli4 *sli4);
> +extern int sli_init(struct sli4 *sli4);
> +extern int sli_reset(struct sli4 *sli4);
> +extern int sli_fw_reset(struct sli4 *sli4);
> +extern int sli_teardown(struct sli4 *sli4);
> +extern int
> +sli_callback(struct sli4 *sli4, enum sli4_callback cb, void *func, void *arg);
> +extern int
> +sli_bmbx_command(struct sli4 *sli4);
> +extern int
> +__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q, u32 qtype,
> +		size_t size, u32 n_entries, u32 align);
> +extern int
> +__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q);
> +extern int
> +sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq, u32 num_eq,
> +		u32 shift, u32 delay_mult);
> +extern int
> +sli_queue_alloc(struct sli4 *sli4, u32 qtype, struct sli4_queue *q,
> +		u32 n_entries, struct sli4_queue *assoc);
> +extern int
> +sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[], u32 num_cqs,
> +		u32 n_entries, struct sli4_queue *eqs[]);
> +extern int
> +sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype);
> +extern int
> +sli_queue_free(struct sli4 *sli4, struct sli4_queue *q, u32 destroy_queues,
> +		u32 free_memory);
> +extern int
> +sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
> +extern int
> +sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
> +
> +extern int
> +sli_wq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
> +extern int
> +sli_mq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
> +extern int
> +sli_rq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
> +extern int
> +sli_eq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
> +extern int
> +sli_cq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
> +extern int
> +sli_mq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
> +extern int
> +sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype, u32 *rid,
> +		u32 *index);
> +extern int
> +sli_resource_free(struct sli4 *sli4, enum sli4_resource rtype, u32 rid);
> +extern int
> +sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype);
> +extern int
> +sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id);
> +extern int
> +sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
> +		enum sli4_qentry *etype, u16 *q_id);
> +
> +extern int sli_raise_ue(struct sli4 *sli4, u8 dump);
> +extern int sli_dump_is_ready(struct sli4 *sli4);
> +extern bool sli_reset_required(struct sli4 *sli4);
> +extern bool sli_fw_ready(struct sli4 *sli4);
> +
> +extern int
> +sli_fc_process_link_attention(struct sli4 *sli4, void *acqe);
> +extern int
> +sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
> +		 u8 *cqe, enum sli4_qentry *etype,
> +		 u16 *rid);
> +u32 sli_fc_response_length(struct sli4 *sli4, u8 *cqe);
> +u32 sli_fc_io_length(struct sli4 *sli4, u8 *cqe);
> +int sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id);
> +u32 sli_fc_ext_status(struct sli4 *sli4, u8 *cqe);
> +extern int
> +sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe, u16 *rq_id, u32 *index);
> +extern int
> +sli_cmd_wq_create(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *qmem, u16 cq_id);
> +int sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, size_t size, u16 xri,
> +		u32 xri_count, struct efc_dma *page0[], struct efc_dma *page1[],
> +		struct efc_dma *dma);
> +extern int
> +sli_cmd_rq_create(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *qmem, u16 cq_id, u16 buffer_size);
> +extern int
> +sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *qmem, u16 cq_id, u16 buffer_size);
> +extern int
> +sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *dma, u16 rpi, struct efc_dma *payload_dma);
> +extern int
> +sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q, u32 n_entries,
> +		u32 buffer_size, struct sli4_queue *cq, bool is_hdr);
> +extern int
> +sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs, struct sli4_queue *q[],
> +		u32 base_cq_id, u32 num, u32 hdr_buf_size, u32 data_buf_size);
> +u32 sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi);
> +extern int
> +sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		enum sli4_abort_type type, bool send_abts, u32 ids, u32 mask,
> +		u16 tag, u16 cq_id);
> +
> +extern int
> +sli_send_frame_wqe(struct sli4 *sli4, void *buf, size_t size, u8 sof, u8 eof,
> +		u32 *hdr, struct efc_dma *payload, u32 req_len, u8 timeout,
> +		u16 xri, u16 req_tag);
> +
> +extern int
> +sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *rsp, u32 rsp_len, u16 xri, u16 tag, u16 cq_id,
> +		u16 ox_id, u16 rnodeid, u16 sportid, bool rnodeattached,
> +		u32 rnode_fcid, u32 flags, u32 s_id);
> +
> +extern int
> +sli_els_request64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *sgl, u8 req_type, u32 req_len, u32 max_rsp_len,
> +		u8 timeout, u16 xri, u16 tag, u16 cq_id, u16 rnodeid,
> +		u16 sport, bool rnodeattached, u32 rnode_fcid, u32 sport_fcid);
> +
> +extern int
> +sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *sgl, u16 xri, u16 tag, u16 cq_id, u32 rpi,
> +		u32 rnode_fcid, u8 timeout);
> +
> +extern int
> +sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *sgl, u32 first_data_sge, u32 xfer_len, u16 xri,
> +		u16 tag, u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +		u8 timeout);
> +
> +extern int
> +sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		struct efc_dma *sgl, u32 first_data_sge, u32 xfer_len,
> +		u32 first_burst, u16 xri, u16 tag, u16 cq_id, u32 rpi,
> +		u32 rnode_fcid, u8 dif, u8 bs, u8 timeout);
> +
> +extern int
> +sli_fcp_treceive64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
> +		u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +		struct sli_fcp_tgt_params *params);
> +
> +extern int
> +sli_fcp_cont_treceive64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
> +		u32 first_data_sge, u32 xfer_len, u16 xri, u16 sec_xri, u16 tag,
> +		u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +		struct sli_fcp_tgt_params *params);
> +
> +extern int
> +sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		u32 rsp_len, u16 xri, u16 tag, u16 cq_id, u32 rpi,
> +		u32 rnode_fcid, u8 port_owned, struct sli_fcp_tgt_params *prms);
> +
> +extern int
> +sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
> +		u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
> +		struct sli_fcp_tgt_params *params);
> +
> +extern int
> +sli_gen_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
> +		u32 req_len, u32 max_rsp_len, u16 xri, u16 tag, u16 cq_id,
> +		u32 rnode_fcid, u16 rnodeid, struct sli_ct_params *params);
> +
> +extern int
> +sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf, size_t size,
> +		struct sli_bls_payload *payload, u16 xri, u16 tag, u16 cq_id,
> +		bool rnodeattached, u16 rnodeid, u16 sportid, u32 rnode_fcid,
> +		u32 sport_fcid, u32 s_id);
> +
> +extern int
> +sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *payload,
> +		u32 payload_len, u16 xri, u16 tag, u32 rnode_fcid, u16 rnodeid,
> +		struct sli_ct_params *params);
> +
> +extern int
> +sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, size_t size, u16 xri, u16 tag,
> +		u16 cq_id);
> +extern void
> +sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf, size_t size,
> +		u16 timeout);
> +
> +const char *sli_fc_get_status_string(u32 status);
> +
>  #endif /* !_SLI4_H */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 08/31] elx: libefc: Generic state machine framework
  2020-04-12  3:32 ` [PATCH v3 08/31] elx: libefc: Generic state machine framework James Smart
  2020-04-15 12:37   ` Hannes Reinecke
@ 2020-04-15 17:20   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 17:20 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:40PM -0700, James Smart wrote:
> This patch starts the population of the libefc library.
> The library will contain common tasks usable by a target or initiator
> driver. The library will also contain a FC discovery state machine
> interface.
> 
> This patch creates the library directory and adds definitions
> for the discovery state machine interface.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Removed efc_sm_id array which is not used.
>   Added State Machine event name lookup array.
> ---
>  drivers/scsi/elx/libefc/efc_sm.c |  61 ++++++++++++
>  drivers/scsi/elx/libefc/efc_sm.h | 209 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 270 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c
> new file mode 100644
> index 000000000000..aba9d542f22e
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sm.c
> @@ -0,0 +1,61 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Generic state machine framework.
> + */
> +#include "efc.h"
> +#include "efc_sm.h"
> +
> +/**
> + * efc_sm_post_event() - Post an event to a context.
> + *
> + * @ctx: State machine context
> + * @evt: Event to post
> + * @data: Event-specific data (if any)
> + */
> +int
> +efc_sm_post_event(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *data)
> +{
> +	if (ctx->current_state) {
> +		ctx->current_state(ctx, evt, data);
> +		return EFC_SUCCESS;
> +	} else {
> +		return EFC_FAIL;
> +	}

	if (!ctx->current_state)
		return EFC_FAIL;

	ctx->current_state(ctx, evt, data);
	return EFC_SUCCESS;


> +}
> +
> +void
> +efc_sm_transition(struct efc_sm_ctx *ctx,
> +		  void *(*state)(struct efc_sm_ctx *,
> +				 enum efc_sm_event, void *), void *data)
> +
> +{
> +	if (ctx->current_state == state) {
> +		efc_sm_post_event(ctx, EFC_EVT_REENTER, data);
> +	} else {
> +		efc_sm_post_event(ctx, EFC_EVT_EXIT, data);
> +		ctx->current_state = state;
> +		efc_sm_post_event(ctx, EFC_EVT_ENTER, data);
> +	}
> +}
> +
> +void
> +efc_sm_disable(struct efc_sm_ctx *ctx)
> +{
> +	ctx->current_state = NULL;
> +}
> +
> +static char *event_name[] = EFC_SM_EVENT_NAME;
> +
> +const char *efc_sm_event_name(enum efc_sm_event evt)
> +{
> +	if (evt > EFC_EVT_LAST)
> +		return "unknown";
> +
> +	return event_name[evt];
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h
> new file mode 100644
> index 000000000000..9cb534a1b86e
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sm.h
> @@ -0,0 +1,209 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + *
> + */
> +
> +/**
> + * Generic state machine framework declarations.
> + */
> +
> +#ifndef _EFC_SM_H
> +#define _EFC_SM_H
> +
> +/**
> + * State Machine (SM) IDs.
> + */
> +enum {
> +	EFC_SM_COMMON = 0,
> +	EFC_SM_DOMAIN,
> +	EFC_SM_PORT,
> +	EFC_SM_LOGIN,
> +	EFC_SM_LAST
> +};
> +
> +#define EFC_SM_EVENT_SHIFT		8
> +#define EFC_SM_EVENT_START(id)		((id) << EFC_SM_EVENT_SHIFT)
> +
> +struct efc_sm_ctx;
> +
> +/* State Machine events */
> +enum efc_sm_event {
> +	/* Common Events */
> +	EFC_EVT_ENTER = EFC_SM_EVENT_START(EFC_SM_COMMON),
> +	EFC_EVT_REENTER,
> +	EFC_EVT_EXIT,
> +	EFC_EVT_SHUTDOWN,
> +	EFC_EVT_ALL_CHILD_NODES_FREE,
> +	EFC_EVT_RESUME,
> +	EFC_EVT_TIMER_EXPIRED,
> +
> +	/* Domain Events */
> +	EFC_EVT_RESPONSE = EFC_SM_EVENT_START(EFC_SM_DOMAIN),
> +	EFC_EVT_ERROR,
> +
> +	EFC_EVT_DOMAIN_FOUND,
> +	EFC_EVT_DOMAIN_ALLOC_OK,
> +	EFC_EVT_DOMAIN_ALLOC_FAIL,
> +	EFC_EVT_DOMAIN_REQ_ATTACH,
> +	EFC_EVT_DOMAIN_ATTACH_OK,
> +	EFC_EVT_DOMAIN_ATTACH_FAIL,
> +	EFC_EVT_DOMAIN_LOST,
> +	EFC_EVT_DOMAIN_FREE_OK,
> +	EFC_EVT_DOMAIN_FREE_FAIL,
> +	EFC_EVT_HW_DOMAIN_REQ_ATTACH,
> +	EFC_EVT_HW_DOMAIN_REQ_FREE,
> +
> +	/* Sport Events */
> +	EFC_EVT_SPORT_ALLOC_OK = EFC_SM_EVENT_START(EFC_SM_PORT),
> +	EFC_EVT_SPORT_ALLOC_FAIL,
> +	EFC_EVT_SPORT_ATTACH_OK,
> +	EFC_EVT_SPORT_ATTACH_FAIL,
> +	EFC_EVT_SPORT_FREE_OK,
> +	EFC_EVT_SPORT_FREE_FAIL,
> +	EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
> +	EFC_EVT_HW_PORT_ALLOC_OK,
> +	EFC_EVT_HW_PORT_ALLOC_FAIL,
> +	EFC_EVT_HW_PORT_ATTACH_OK,
> +	EFC_EVT_HW_PORT_REQ_ATTACH,
> +	EFC_EVT_HW_PORT_REQ_FREE,
> +	EFC_EVT_HW_PORT_FREE_OK,
> +
> +	/* Login Events */
> +	EFC_EVT_SRRS_ELS_REQ_OK = EFC_SM_EVENT_START(EFC_SM_LOGIN),
> +	EFC_EVT_SRRS_ELS_CMPL_OK,
> +	EFC_EVT_SRRS_ELS_REQ_FAIL,
> +	EFC_EVT_SRRS_ELS_CMPL_FAIL,
> +	EFC_EVT_SRRS_ELS_REQ_RJT,
> +	EFC_EVT_NODE_ATTACH_OK,
> +	EFC_EVT_NODE_ATTACH_FAIL,
> +	EFC_EVT_NODE_FREE_OK,
> +	EFC_EVT_NODE_FREE_FAIL,
> +	EFC_EVT_ELS_FRAME,
> +	EFC_EVT_ELS_REQ_TIMEOUT,
> +	EFC_EVT_ELS_REQ_ABORTED,
> +	/* request an ELS IO be aborted */
> +	EFC_EVT_ABORT_ELS,
> +	/* ELS abort process complete */
> +	EFC_EVT_ELS_ABORT_CMPL,
> +
> +	EFC_EVT_ABTS_RCVD,
> +
> +	/* node is not in the GID_PT payload */
> +	EFC_EVT_NODE_MISSING,
> +	/* node is allocated and in the GID_PT payload */
> +	EFC_EVT_NODE_REFOUND,
> +	/* node shutting down due to PLOGI recvd (implicit logo) */
> +	EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
> +	/* node shutting down due to LOGO recvd/sent (explicit logo) */
> +	EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +
> +	EFC_EVT_PLOGI_RCVD,
> +	EFC_EVT_FLOGI_RCVD,
> +	EFC_EVT_LOGO_RCVD,
> +	EFC_EVT_PRLI_RCVD,
> +	EFC_EVT_PRLO_RCVD,
> +	EFC_EVT_PDISC_RCVD,
> +	EFC_EVT_FDISC_RCVD,
> +	EFC_EVT_ADISC_RCVD,
> +	EFC_EVT_RSCN_RCVD,
> +	EFC_EVT_SCR_RCVD,
> +	EFC_EVT_ELS_RCVD,
> +
> +	EFC_EVT_FCP_CMD_RCVD,
> +
> +	EFC_EVT_GIDPT_DELAY_EXPIRED,
> +
> +	/* SCSI Target Server events */
> +	EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY,
> +	EFC_EVT_NODE_DEL_INI_COMPLETE,
> +	EFC_EVT_NODE_DEL_TGT_COMPLETE,
> +
> +	/* Must be last */
> +	EFC_EVT_LAST
> +};
> +
> +/* State Machine event name lookup array */
> +#define EFC_SM_EVENT_NAME {						\
> +	[EFC_EVT_ENTER]			= "EFC_EVT_ENTER",		\
> +	[EFC_EVT_REENTER]		= "EFC_EVT_REENTER",		\
> +	[EFC_EVT_EXIT]			= "EFC_EVT_EXIT",		\
> +	[EFC_EVT_SHUTDOWN]		= "EFC_EVT_SHUTDOWN",		\
> +	[EFC_EVT_ALL_CHILD_NODES_FREE]	= "EFC_EVT_ALL_CHILD_NODES_FREE",\
> +	[EFC_EVT_RESUME]		= "EFC_EVT_RESUME",		\
> +	[EFC_EVT_TIMER_EXPIRED]		= "EFC_EVT_TIMER_EXPIRED",	\
> +	[EFC_EVT_RESPONSE]		= "EFC_EVT_RESPONSE",		\
> +	[EFC_EVT_ERROR]			= "EFC_EVT_ERROR",		\
> +	[EFC_EVT_DOMAIN_FOUND]		= "EFC_EVT_DOMAIN_FOUND",	\
> +	[EFC_EVT_DOMAIN_ALLOC_OK]	= "EFC_EVT_DOMAIN_ALLOC_OK",	\
> +	[EFC_EVT_DOMAIN_ALLOC_FAIL]	= "EFC_EVT_DOMAIN_ALLOC_FAIL",	\
> +	[EFC_EVT_DOMAIN_REQ_ATTACH]	= "EFC_EVT_DOMAIN_REQ_ATTACH",	\
> +	[EFC_EVT_DOMAIN_ATTACH_OK]	= "EFC_EVT_DOMAIN_ATTACH_OK",	\
> +	[EFC_EVT_DOMAIN_ATTACH_FAIL]	= "EFC_EVT_DOMAIN_ATTACH_FAIL",	\
> +	[EFC_EVT_DOMAIN_LOST]		= "EFC_EVT_DOMAIN_LOST",	\
> +	[EFC_EVT_DOMAIN_FREE_OK]	= "EFC_EVT_DOMAIN_FREE_OK",	\
> +	[EFC_EVT_DOMAIN_FREE_FAIL]	= "EFC_EVT_DOMAIN_FREE_FAIL",	\
> +	[EFC_EVT_HW_DOMAIN_REQ_ATTACH]	= "EFC_EVT_HW_DOMAIN_REQ_ATTACH",\
> +	[EFC_EVT_HW_DOMAIN_REQ_FREE]	= "EFC_EVT_HW_DOMAIN_REQ_FREE",	\
> +	[EFC_EVT_SPORT_ALLOC_OK]	= "EFC_EVT_SPORT_ALLOC_OK",	\
> +	[EFC_EVT_SPORT_ALLOC_FAIL]	= "EFC_EVT_SPORT_ALLOC_FAIL",	\
> +	[EFC_EVT_SPORT_ATTACH_OK]	= "EFC_EVT_SPORT_ATTACH_OK",	\
> +	[EFC_EVT_SPORT_ATTACH_FAIL]	= "EFC_EVT_SPORT_ATTACH_FAIL",	\
> +	[EFC_EVT_SPORT_FREE_OK]		= "EFC_EVT_SPORT_FREE_OK",	\
> +	[EFC_EVT_SPORT_FREE_FAIL]	= "EFC_EVT_SPORT_FREE_FAIL",	\
> +	[EFC_EVT_SPORT_TOPOLOGY_NOTIFY]	= "EFC_EVT_SPORT_TOPOLOGY_NOTIFY",\
> +	[EFC_EVT_HW_PORT_ALLOC_OK]	= "EFC_EVT_HW_PORT_ALLOC_OK",	\
> +	[EFC_EVT_HW_PORT_ALLOC_FAIL]	= "EFC_EVT_HW_PORT_ALLOC_FAIL",	\
> +	[EFC_EVT_HW_PORT_ATTACH_OK]	= "EFC_EVT_HW_PORT_ATTACH_OK",	\
> +	[EFC_EVT_HW_PORT_REQ_ATTACH]	= "EFC_EVT_HW_PORT_REQ_ATTACH",	\
> +	[EFC_EVT_HW_PORT_REQ_FREE]	= "EFC_EVT_HW_PORT_REQ_FREE",	\
> +	[EFC_EVT_HW_PORT_FREE_OK]	= "EFC_EVT_HW_PORT_FREE_OK",	\
> +	[EFC_EVT_SRRS_ELS_REQ_OK]	= "EFC_EVT_SRRS_ELS_REQ_OK",	\
> +	[EFC_EVT_SRRS_ELS_CMPL_OK]	= "EFC_EVT_SRRS_ELS_CMPL_OK",	\
> +	[EFC_EVT_SRRS_ELS_REQ_FAIL]	= "EFC_EVT_SRRS_ELS_REQ_FAIL",	\
> +	[EFC_EVT_SRRS_ELS_CMPL_FAIL]	= "EFC_EVT_SRRS_ELS_CMPL_FAIL",	\
> +	[EFC_EVT_SRRS_ELS_REQ_RJT]	= "EFC_EVT_SRRS_ELS_REQ_RJT",	\
> +	[EFC_EVT_NODE_ATTACH_OK]	= "EFC_EVT_NODE_ATTACH_OK",	\
> +	[EFC_EVT_NODE_ATTACH_FAIL]	= "EFC_EVT_NODE_ATTACH_FAIL",	\
> +	[EFC_EVT_NODE_FREE_OK]		= "EFC_EVT_NODE_FREE_OK",	\
> +	[EFC_EVT_NODE_FREE_FAIL]	= "EFC_EVT_NODE_FREE_FAIL",	\
> +	[EFC_EVT_ELS_FRAME]		= "EFC_EVT_ELS_FRAME",		\
> +	[EFC_EVT_ELS_REQ_TIMEOUT]	= "EFC_EVT_ELS_REQ_TIMEOUT",	\
> +	[EFC_EVT_ELS_REQ_ABORTED]	= "EFC_EVT_ELS_REQ_ABORTED",	\
> +	[EFC_EVT_ABORT_ELS]		= "EFC_EVT_ABORT_ELS",		\
> +	[EFC_EVT_ELS_ABORT_CMPL]	= "EFC_EVT_ELS_ABORT_CMPL",	\
> +	[EFC_EVT_ABTS_RCVD]		= "EFC_EVT_ABTS_RCVD",		\
> +	[EFC_EVT_NODE_MISSING]		= "EFC_EVT_NODE_MISSING",	\
> +	[EFC_EVT_NODE_REFOUND]		= "EFC_EVT_NODE_REFOUND",	\
> +	[EFC_EVT_SHUTDOWN_IMPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO",\
> +	[EFC_EVT_SHUTDOWN_EXPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO",\
> +	[EFC_EVT_PLOGI_RCVD]		= "EFC_EVT_PLOGI_RCVD",		\
> +	[EFC_EVT_FLOGI_RCVD]		= "EFC_EVT_FLOGI_RCVD",		\
> +	[EFC_EVT_LOGO_RCVD]		= "EFC_EVT_LOGO_RCVD",		\
> +	[EFC_EVT_PRLI_RCVD]		= "EFC_EVT_PRLI_RCVD",		\
> +	[EFC_EVT_PRLO_RCVD]		= "EFC_EVT_PRLO_RCVD",		\
> +	[EFC_EVT_PDISC_RCVD]		= "EFC_EVT_PDISC_RCVD",		\
> +	[EFC_EVT_FDISC_RCVD]		= "EFC_EVT_FDISC_RCVD",		\
> +	[EFC_EVT_ADISC_RCVD]		= "EFC_EVT_ADISC_RCVD",		\
> +	[EFC_EVT_RSCN_RCVD]		= "EFC_EVT_RSCN_RCVD",		\
> +	[EFC_EVT_SCR_RCVD]		= "EFC_EVT_SCR_RCVD",		\
> +	[EFC_EVT_ELS_RCVD]		= "EFC_EVT_ELS_RCVD",		\
> +	[EFC_EVT_FCP_CMD_RCVD]		= "EFC_EVT_FCP_CMD_RCVD",	\
> +	[EFC_EVT_NODE_DEL_INI_COMPLETE]	= "EFC_EVT_NODE_DEL_INI_COMPLETE",\
> +	[EFC_EVT_NODE_DEL_TGT_COMPLETE]	= "EFC_EVT_NODE_DEL_TGT_COMPLETE",\
> +	[EFC_EVT_LAST]			= "EFC_EVT_LAST",		\
> +}

The final array event_name has 803 entries and only 63 entries
are filled. Yep, I really checked this :)
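That follows from EFC_SM_EVENT_START(id) being (id) << 8: every SM id
gets its own block of 256 event values, so the Login block starts at
3 << 8 = 768 and EFC_EVT_LAST lands around 803, which is what sizes the
designated-initializer array while most of the slots in between stay
NULL. If the sparse table is kept, it may be worth guarding against
those NULL holes as well (just a sketch):

	const char *efc_sm_event_name(enum efc_sm_event evt)
	{
		if (evt > EFC_EVT_LAST || !event_name[evt])
			return "unknown";

		return event_name[evt];
	}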

> +int
> +efc_sm_post_event(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *data);
> +void
> +efc_sm_transition(struct efc_sm_ctx *ctx,
> +		  void *(*state)(struct efc_sm_ctx *ctx,
> +				 enum efc_sm_event evt, void *arg),
> +		  void *data);
> +void efc_sm_disable(struct efc_sm_ctx *ctx);
> +const char *efc_sm_event_name(enum efc_sm_event evt);
> +
> +#endif /* ! _EFC_SM_H */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions
  2020-04-12  3:32 ` [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
  2020-04-15 12:41   ` Hannes Reinecke
@ 2020-04-15 17:32   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 17:32 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:41PM -0700, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - SLI/Local FC port objects
> - efc_domain_s: FC domain (aka fabric) objects
> - efc_node_s: FC node (aka remote ports) objects
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Removed Sparse Vector APIs and structures.
> ---
>  drivers/scsi/elx/libefc/efc.h     |  72 +++++
>  drivers/scsi/elx/libefc/efc_lib.c |  41 +++
>  drivers/scsi/elx/libefc/efclib.h  | 640 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 753 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc.h
>  create mode 100644 drivers/scsi/elx/libefc/efc_lib.c
>  create mode 100644 drivers/scsi/elx/libefc/efclib.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc.h b/drivers/scsi/elx/libefc/efc.h
> new file mode 100644
> index 000000000000..c93c6d59b21a
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc.h
> @@ -0,0 +1,72 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFC_H__
> +#define __EFC_H__
> +
> +#include "../include/efc_common.h"
> +#include "efclib.h"
> +#include "efc_sm.h"
> +#include "efc_domain.h"
> +#include "efc_sport.h"
> +#include "efc_node.h"
> +#include "efc_fabric.h"
> +#include "efc_device.h"
> +
> +#define EFC_MAX_REMOTE_NODES			2048
> +#define NODE_SPARAMS_SIZE			256
> +
> +enum efc_hw_rtn {
> +	EFC_HW_RTN_SUCCESS = 0,
> +	EFC_HW_RTN_SUCCESS_SYNC = 1,
> +	EFC_HW_RTN_ERROR = -1,
> +	EFC_HW_RTN_NO_RESOURCES = -2,
> +	EFC_HW_RTN_NO_MEMORY = -3,
> +	EFC_HW_RTN_IO_NOT_ACTIVE = -4,
> +	EFC_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
> +	EFC_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
> +	EFC_HW_RTN_INVALID_ARG = -7,
> +};
> +
> +#define EFC_HW_RTN_IS_ERROR(e) ((e) < 0)
> +
> +enum efc_scsi_del_initiator_reason {
> +	EFC_SCSI_INITIATOR_DELETED,
> +	EFC_SCSI_INITIATOR_MISSING,
> +};
> +
> +enum efc_scsi_del_target_reason {
> +	EFC_SCSI_TARGET_DELETED,
> +	EFC_SCSI_TARGET_MISSING,
> +};
> +
> +#define EFC_SCSI_CALL_COMPLETE			0
> +#define EFC_SCSI_CALL_ASYNC			1
> +
> +#define EFC_FC_ELS_DEFAULT_RETRIES		3
> +
> +/* Timeouts */
> +#define EFC_FC_ELS_SEND_DEFAULT_TIMEOUT		0
> +#define EFC_FC_FLOGI_TIMEOUT_SEC		5
> +#define EFC_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC	30000000
> +
> +#define domain_sm_trace(domain) \
> +	efc_log_debug(domain->efc, "[domain:%s] %-20s %-20s\n", \
> +		      domain->display_name, __func__, efc_sm_event_name(evt)) \
> +
> +#define domain_trace(domain, fmt, ...) \
> +	efc_log_debug(domain->efc, \
> +		      "[%s]" fmt, domain->display_name, ##__VA_ARGS__) \
> +
> +#define node_sm_trace() \
> +	efc_log_debug(node->efc, \
> +		"[%s] %-20s\n", node->display_name, efc_sm_event_name(evt)) \
> +
> +#define sport_sm_trace(sport) \
> +	efc_log_debug(sport->efc, \
> +		"[%s] %-20s\n", sport->display_name, efc_sm_event_name(evt)) \
> +
> +#endif /* __EFC_H__ */
> diff --git a/drivers/scsi/elx/libefc/efc_lib.c b/drivers/scsi/elx/libefc/efc_lib.c
> new file mode 100644
> index 000000000000..894cce9a39f4
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_lib.c
> @@ -0,0 +1,41 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include "efc.h"
> +
> +int efcport_init(struct efc *efc)
> +{
> +	u32 rc = EFC_SUCCESS;
> +
> +	spin_lock_init(&efc->lock);
> +	INIT_LIST_HEAD(&efc->vport_list);
> +
> +	/* Create Node pool */
> +	efc->node_pool = mempool_create_kmalloc_pool(EFC_MAX_REMOTE_NODES,
> +						sizeof(struct efc_node));
> +	if (!efc->node_pool) {
> +		efc_log_err(efc, "Can't allocate node pool\n");
> +		return -ENOMEM;

EFC_FAIL?

> +	}
> +
> +	efc->node_dma_pool = dma_pool_create("node_dma_pool", &efc->pcidev->dev,
> +						NODE_SPARAMS_SIZE, 0, 0);
> +	if (!efc->node_dma_pool) {
> +		efc_log_err(efc, "Can't allocate node dma pool\n");
> +		mempool_destroy(efc->node_pool);
> +		return -ENOMEM;
> +	}
> +
> +	return rc;
> +}
> +
> +void efcport_destroy(struct efc *efc)
> +{
> +	mempool_destroy(efc->node_pool);
> +	dma_pool_destroy(efc->node_dma_pool);
> +}
> diff --git a/drivers/scsi/elx/libefc/efclib.h b/drivers/scsi/elx/libefc/efclib.h
> new file mode 100644
> index 000000000000..9ac52ca7ec83
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efclib.h
> @@ -0,0 +1,640 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFCLIB_H__
> +#define __EFCLIB_H__
> +
> +#include "scsi/fc/fc_els.h"
> +#include "scsi/fc/fc_fs.h"
> +#include "scsi/fc/fc_ns.h"
> +#include "scsi/fc/fc_gs.h"
> +#include "scsi/fc_frame.h"
> +#include "../include/efc_common.h"
> +
> +#define EFC_SERVICE_PARMS_LENGTH	0x74
> +#define EFC_NAME_LENGTH			32
> +#define EFC_DISPLAY_BUS_INFO_LENGTH	16
> +
> +#define EFC_WWN_LENGTH			32
> +
> +/* Local port topology */
> +enum efc_sport_topology {
> +	EFC_SPORT_TOPOLOGY_UNKNOWN = 0,
> +	EFC_SPORT_TOPOLOGY_FABRIC,
> +	EFC_SPORT_TOPOLOGY_P2P,
> +	EFC_SPORT_TOPOLOGY_LOOP,
> +};
> +
> +#define enable_target_rscn(efc)		1
> +
> +enum efc_node_shutd_rsn {
> +	EFC_NODE_SHUTDOWN_DEFAULT = 0,
> +	EFC_NODE_SHUTDOWN_EXPLICIT_LOGO,
> +	EFC_NODE_SHUTDOWN_IMPLICIT_LOGO,
> +};
> +
> +enum efc_node_send_ls_acc {
> +	EFC_NODE_SEND_LS_ACC_NONE = 0,
> +	EFC_NODE_SEND_LS_ACC_PLOGI,
> +	EFC_NODE_SEND_LS_ACC_PRLI,
> +};
> +
> +#define EFC_LINK_STATUS_UP		0
> +#define EFC_LINK_STATUS_DOWN		1
> +
> +/* State machine context header  */
> +struct efc_sm_ctx {
> +	void *(*current_state)(struct efc_sm_ctx *ctx,
> +			       u32 evt, void *arg);
> +
> +	const char	*description;
> +	void		*app;
> +};
> +
> +/* Description of discovered Fabric Domain */
> +struct efc_domain_record {
> +	u32		index;
> +	u32		priority;
> +	u8		address[6];
> +	u8		wwn[8];
> +	union {
> +		u8	vlan[512];
> +		u8	loop[128];
> +	} map;
> +	u32		speed;
> +	u32		fc_id;
> +	bool		is_loop;
> +	bool		is_nport;
> +};
> +
> +/* Fabric/Domain events */
> +enum efc_hw_domain_event {
> +	EFC_HW_DOMAIN_ALLOC_OK,
> +	EFC_HW_DOMAIN_ALLOC_FAIL,
> +	EFC_HW_DOMAIN_ATTACH_OK,
> +	EFC_HW_DOMAIN_ATTACH_FAIL,
> +	EFC_HW_DOMAIN_FREE_OK,
> +	EFC_HW_DOMAIN_FREE_FAIL,
> +	EFC_HW_DOMAIN_LOST,
> +	EFC_HW_DOMAIN_FOUND,
> +	EFC_HW_DOMAIN_CHANGED,
> +};
> +
> +enum efc_hw_port_event {
> +	EFC_HW_PORT_ALLOC_OK,
> +	EFC_HW_PORT_ALLOC_FAIL,
> +	EFC_HW_PORT_ATTACH_OK,
> +	EFC_HW_PORT_ATTACH_FAIL,
> +	EFC_HW_PORT_FREE_OK,
> +	EFC_HW_PORT_FREE_FAIL,
> +};
> +
> +enum efc_hw_remote_node_event {
> +	EFC_HW_NODE_ATTACH_OK,
> +	EFC_HW_NODE_ATTACH_FAIL,
> +	EFC_HW_NODE_FREE_OK,
> +	EFC_HW_NODE_FREE_FAIL,
> +	EFC_HW_NODE_FREE_ALL_OK,
> +	EFC_HW_NODE_FREE_ALL_FAIL,
> +};
> +
> +enum efc_hw_node_els_event {
> +	EFC_HW_SRRS_ELS_REQ_OK,
> +	EFC_HW_SRRS_ELS_CMPL_OK,
> +	EFC_HW_SRRS_ELS_REQ_FAIL,
> +	EFC_HW_SRRS_ELS_CMPL_FAIL,
> +	EFC_HW_SRRS_ELS_REQ_RJT,
> +	EFC_HW_ELS_REQ_ABORTED,
> +};
> +
> +struct efc_sli_port {
> +	struct list_head	list_entry;
> +	struct efc		*efc;
> +	u32			tgt_id;
> +	u32			index;
> +	u32			instance_index;
> +	char			display_name[EFC_NAME_LENGTH];
> +	struct efc_domain	*domain;
> +	bool			is_vport;
> +	u64			wwpn;
> +	u64			wwnn;
> +	struct list_head	node_list;
> +	void			*ini_sport;
> +	void			*tgt_sport;
> +	void			*tgt_data;
> +	void			*ini_data;
> +
> +	/* Members private to HW/SLI */
> +	void			*hw;
> +	u32			indicator;
> +	u32			fc_id;
> +	struct efc_dma		dma;
> +
> +	u8			wwnn_str[EFC_WWN_LENGTH];
> +	__be64			sli_wwpn;
> +	__be64			sli_wwnn;
> +	bool			free_req_pending;
> +	bool			attached;
> +
> +	struct efc_sm_ctx	sm;
> +	struct xarray		lookup;
> +	bool			enable_ini;
> +	bool			enable_tgt;
> +	bool			enable_rscn;
> +	bool			shutting_down;
> +	bool			p2p_winner;
> +	enum efc_sport_topology topology;
> +	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
> +	u32			p2p_remote_port_id;
> +	u32			p2p_port_id;
> +};
> +
> +/**
> + * Fibre Channel domain object
> + *
> + * This object is a container for the various SLI components needed
> + * to connect to the domain of a FC or FCoE switch
> + * @efc:		pointer back to efc
> + * @instance_index:	unique instance index value
> + * @display_name:	Node display name
> + * @sport_list:		linked list of SLI ports
> + * @ini_domain:		initiator backend private domain data
> + * @tgt_domain:		target backend private domain data
> + * @hw:			pointer to HW
> + * @sm:			state machine context
> + * @fcf:		FC Forwarder table index
> + * @fcf_indicator:	FCFI
> + * @indicator:		VFI
> + * @dma:		memory for Service Parameters
> + * @fcf_wwn:		WWN for FCF/switch
> + * @drvsm:		driver domain sm context
> + * @drvsm_lock:		driver domain sm lock
> + * @attached:		set true after attach completes
> + * @is_fc:		is FC
> + * @is_loop:		is loop topology
> + * @is_nlport:		is public loop
> + * @domain_found_pending:A domain found is pending, drec is updated
> + * @req_domain_free:	True if domain object should be free'd
> + * @req_accept_frames:	set in domain state machine to enable frames
> + * @domain_notify_pend:	Set in domain SM to avoid duplicate node event post
> + * @pending_drec:	Pending drec if a domain found is pending
> + * @service_params:	any sports service parameters
> + * @flogi_service_params:Fabric/P2p service parameters from FLOGI
> + * @lookup:		d_id to node lookup object
> + * @sport:		Pointer to first (physical) SLI port
> + */
> +struct efc_domain {
> +	struct efc		*efc;
> +	char			display_name[EFC_NAME_LENGTH];
> +	struct list_head	sport_list;
> +	void			*ini_domain;
> +	void			*tgt_domain;
> +
> +	/* Declarations private to HW/SLI */
> +	void			*hw;
> +	u32			fcf;
> +	u32			fcf_indicator;
> +	u32			indicator;
> +	struct efc_dma		dma;
> +
> +	/* Declarations private to FC transport */
> +	u64			fcf_wwn;
> +	struct efc_sm_ctx	drvsm;
> +	bool			attached;
> +	bool			is_fc;
> +	bool			is_loop;
> +	bool			is_nlport;
> +	bool			domain_found_pending;
> +	bool			req_domain_free;
> +	bool			req_accept_frames;
> +	bool			domain_notify_pend;
> +
> +	struct efc_domain_record pending_drec;
> +	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
> +	u8			flogi_service_params[EFC_SERVICE_PARMS_LENGTH];
> +
> +	struct xarray		lookup;
> +
> +	struct efc_sli_port	*sport;
> +	u32			sport_instance_count;
> +};
> +
> +/**
> + * Remote Node object
> + *
> + * This object represents a connection between the SLI port and another
> + * Nx_Port on the fabric. Note this can be either a well known port such
> + * as a F_Port (i.e. ff:ff:fe) or another N_Port.
> + * @indicator:		RPI
> + * @fc_id:		FC address
> + * @attached:		true if attached
> + * @node_group:		true if in node group
> + * @free_group:		true if the node group should be free'd
> + * @sport:		associated SLI port
> + * @node:		associated node
> + */
> +struct efc_remote_node {
> +	u32			indicator;
> +	u32			index;
> +	u32			fc_id;
> +
> +	bool			attached;
> +	bool			node_group;
> +	bool			free_group;
> +
> +	struct efc_sli_port	*sport;
> +	void			*node;
> +};
> +
> +/**
> + * FC Node object
> + * @efc:		pointer back to efc structure
> + * @display_name:	Node display name
> + * @hold_frames:	hold incoming frames if true
> + * @lock:		node wide lock

This documentation doesn't seem to be up to date; the lock seems to be
called active_ios_lock. I would also be interested in a few words on
the locking design, i.e. how to use it correctly.
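i.e. presumably something along the lines of:

 * @active_ios_lock:	lock protecting the @active_ios list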

> + * @active_ios:		active I/O's for this node
> + * @ini_node:		backend initiator private node data
> + * @tgt_node:		backend target private node data
> + * @rnode:		Remote node
> + * @sm:			state machine context
> + * @evtdepth:		current event posting nesting depth
> + * @req_free:		this node is to be free'd
> + * @attached:		node is attached (REGLOGIN complete)
> + * @fcp_enabled:	node is enabled to handle FCP
> + * @rscn_pending:	for name server node RSCN is pending
> + * @send_plogi:		send PLOGI accept, upon completion of node attach
> + * @send_plogi_acc:	TRUE if io_alloc() is enabled.
> + * @send_ls_acc:	type of LS acc to send
> + * @ls_acc_io:		SCSI IO for LS acc
> + * @ls_acc_oxid:	OX_ID for pending accept
> + * @ls_acc_did:		D_ID for pending accept
> + * @shutdown_reason:	reason for node shutdown
> + * @sparm_dma_buf:	service parameters buffer
> + * @service_params:	plogi/acc frame from remote device
> + * @pend_frames_lock:	lock for inbound pending frames list

The same question as above :)

> + * @pend_frames:	inbound pending frames list
> + * @pend_frames_processed:count of frames processed in hold frames interval
> + * @ox_id_in_use:	used to verify one-at-a-time use of ox_id
> + * @els_retries_remaining:for ELS, number of retries remaining
> + * @els_req_cnt:	number of outstanding ELS requests
> + * @els_cmpl_cnt:	number of outstanding ELS completions
> + * @abort_cnt:		Abort counter for debugging purposes
> + * @current_state_name:	current node state
> + * @prev_state_name:	previous node state
> + * @current_evt:	current event
> + * @prev_evt:		previous event
> + * @targ:		node is target capable
> + * @init:		node is init capable
> + * @refound:		Handle node refound case when node is being deleted
> + * @els_io_pend_list:	list of pending (not yet processed) ELS IOs
> + * @els_io_active_list:	list of active (processed) ELS IOs
> + * @nodedb_state:	Node debugging, saved state
> + * @gidpt_delay_timer:	GIDPT delay timer
> + * @time_last_gidpt_msec:Start time of last target RSCN GIDPT
> + * @wwnn:		remote port WWNN
> + * @wwpn:		remote port WWPN
> + * @chained_io_count:	Statistics : count of IOs with chained SGL's
> + */
> +struct efc_node {
> +	struct list_head	list_entry;
> +	struct efc		*efc;
> +	char			display_name[EFC_NAME_LENGTH];
> +	struct efc_sli_port	*sport;
> +	bool			hold_frames;
> +	spinlock_t		active_ios_lock;
> +	struct list_head	active_ios;
> +	void			*ini_node;
> +	void			*tgt_node;
> +
> +	struct efc_remote_node	rnode;
> +	/* Declarations private to FC transport */
> +	struct efc_sm_ctx	sm;
> +	u32			evtdepth;
> +
> +	bool			req_free;
> +	bool			attached;
> +	bool			fcp_enabled;
> +	bool			rscn_pending;
> +	bool			send_plogi;
> +	bool			send_plogi_acc;
> +	bool			io_alloc_enabled;
> +
> +	enum efc_node_send_ls_acc send_ls_acc;
> +	void			*ls_acc_io;
> +	u32			ls_acc_oxid;
> +	u32			ls_acc_did;
> +	enum efc_node_shutd_rsn	shutdown_reason;
> +	struct efc_dma		sparm_dma_buf;
> +	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
> +	spinlock_t		pend_frames_lock;
> +	struct list_head	pend_frames;
> +	u32			pend_frames_processed;
> +	u32			ox_id_in_use;
> +	u32			els_retries_remaining;
> +	u32			els_req_cnt;
> +	u32			els_cmpl_cnt;
> +	u32			abort_cnt;
> +
> +	char			current_state_name[EFC_NAME_LENGTH];
> +	char			prev_state_name[EFC_NAME_LENGTH];
> +	int			current_evt;
> +	int			prev_evt;
> +	bool			targ;
> +	bool			init;
> +	bool			refound;
> +	struct list_head	els_io_pend_list;
> +	struct list_head	els_io_active_list;
> +
> +	void *(*nodedb_state)(struct efc_sm_ctx *ctx,
> +			      u32 evt, void *arg);
> +	struct timer_list	gidpt_delay_timer;
> +	time_t			time_last_gidpt_msec;
> +
> +	char			wwnn[EFC_WWN_LENGTH];
> +	char			wwpn[EFC_WWN_LENGTH];
> +
> +	u32			chained_io_count;
> +};
> +
> +/**
> + * NPIV port
> + *
> + * Collection of the information required to restore a virtual port across
> + * link events
> + * @wwnn:		node name
> + * @wwpn:		port name
> + * @fc_id:		port id
> + * @tgt_data:		target backend pointer
> + * @ini_data:		initiator backend pointer
> + * @sport:		Used to match record after attaching for update
> + *
> + */
> +
> +struct efc_vport_spec {
> +	struct list_head	list_entry;
> +	u64			wwnn;
> +	u64			wwpn;
> +	u32			fc_id;
> +	bool			enable_tgt;
> +	bool			enable_ini;
> +	void			*tgt_data;
> +	void			*ini_data;
> +	struct efc_sli_port	*sport;
> +};
> +
> +#define node_printf(node, fmt, args...) \
> +	pr_info("[%s] " fmt, node->display_name, ##args)
> +
> +/* Node SM IO Context Callback structure */
> +struct efc_node_cb {
> +	int			status;
> +	int			ext_status;
> +	struct efc_hw_rq_buffer *header;
> +	struct efc_hw_rq_buffer *payload;
> +	struct efc_dma		els_rsp;
> +};
> +
> +/* HW unsolicited callback status */
> +enum efc_hw_unsol_status {
> +	EFC_HW_UNSOL_SUCCESS,
> +	EFC_HW_UNSOL_ERROR,
> +	EFC_HW_UNSOL_ABTS_RCVD,
> +	EFC_HW_UNSOL_MAX,	/**< must be last */
> +};
> +
> +enum efc_hw_rq_buffer_type {
> +	EFC_HW_RQ_BUFFER_TYPE_HDR,
> +	EFC_HW_RQ_BUFFER_TYPE_PAYLOAD,
> +	EFC_HW_RQ_BUFFER_TYPE_MAX,
> +};
> +
> +struct efc_hw_rq_buffer {
> +	u16			rqindex;
> +	struct efc_dma		dma;
> +};
> +
> +/*
> + * Defines a general FC sequence object,
> + * consisting of a header, payload buffers
> + * and a HW IO in the case of port owned XRI
> + */
> +struct efc_hw_sequence {
> +	struct list_head	list_entry;
> +	void			*hw;
> +	u8			fcfi;
> +	u8			auto_xrdy;
> +	u8			out_of_xris;
> +
> +	struct efc_hw_rq_buffer *header;
> +	struct efc_hw_rq_buffer *payload;
> +
> +	enum efc_hw_unsol_status status;
> +	struct efct_hw_io	*hio;
> +
> +	void			*hw_priv;
> +};
> +
> +/* Return value indicating the sequence cannot be freed */
> +#define EFC_HW_SEQ_HOLD		0
> +/* Return value indicating the sequence can be freed */
> +#define EFC_HW_SEQ_FREE		1
> +
> +struct libefc_function_template {
> +	/*Domain*/
> +	int (*hw_domain_alloc)(struct efc *efc, struct efc_domain *d, u32 fcf);
> +	int (*hw_domain_attach)(struct efc *efc, struct efc_domain *d, u32 id);
> +
> +	int (*hw_domain_free)(struct efc *hw, struct efc_domain *d);
> +	int (*hw_domain_force_free)(struct efc *efc, struct efc_domain *d);
> +
> +	int (*new_domain)(struct efc *efc, struct efc_domain *d);
> +	void (*del_domain)(struct efc *efc, struct efc_domain *d);
> +
> +	void (*domain_hold_frames)(struct efc *efc, struct efc_domain *d);
> +	void (*domain_accept_frames)(struct efc *efc, struct efc_domain *d);
> +
> +	/*Sport*/
> +	int (*hw_port_alloc)(struct efc *hw, struct efc_sli_port *sp,
> +			     struct efc_domain *d, u8 *val);
> +	int (*hw_port_attach)(struct efc *hw, struct efc_sli_port *sp,
> +			      u32 fc_id);
> +
> +	int (*hw_port_free)(struct efc *hw, struct efc_sli_port *sp);
> +
> +	int (*new_sport)(struct efc *efc, struct efc_sli_port *sp);
> +	void (*del_sport)(struct efc *efc, struct efc_sli_port *sp);
> +
> +	/*Node*/
> +	int (*hw_node_alloc)(struct efc *hw, struct efc_remote_node *n,
> +			     u32 fc_addr, struct efc_sli_port *sport);
> +
> +	int (*hw_node_attach)(struct efc *hw, struct efc_remote_node *n,
> +			      struct efc_dma *sparams);
> +
> +	int (*hw_node_detach)(struct efc *hw, struct efc_remote_node *r);
> +
> +	int (*hw_node_free_resources)(struct efc *efc,
> +				      struct efc_remote_node *node);
> +	int (*node_purge_pending)(struct efc *efc, struct efc_node *n);
> +
> +	void (*node_io_cleanup)(struct efc *efc, struct efc_node *n,
> +				bool force);
> +	void (*node_els_cleanup)(struct efc *efc, struct efc_node *n,
> +				bool force);
> +	void (*node_abort_all_els)(struct efc *efc, struct efc_node *n);
> +
> +	/*Scsi*/
> +	void (*scsi_io_alloc_disable)(struct efc *efc, struct efc_node *node);
> +	void (*scsi_io_alloc_enable)(struct efc *efc, struct efc_node *node);
> +
> +	int (*scsi_validate_node)(struct efc *efc, struct efc_node *n);
> +	int (*scsi_new_node)(struct efc *efc, struct efc_node *n);
> +
> +	int (*scsi_del_node)(struct efc *efc, struct efc_node *n, int reason);
> +
> +	/*Send ELS*/
> +	void *(*els_send)(struct efc *efc, struct efc_node *node,
> +			  u32 cmd, u32 timeout_sec, u32 retries);
> +
> +	void *(*els_send_ct)(struct efc *efc, struct efc_node *node,
> +			     u32 cmd, u32 timeout_sec, u32 retries);
> +
> +	void *(*els_send_resp)(struct efc *efc, struct efc_node *node,
> +			       u32 cmd, u16 ox_id);
> +
> +	void *(*bls_send_acc_hdr)(struct efc *efc, struct efc_node *n,
> +				  struct fc_frame_header *hdr);
> +	void *(*send_flogi_p2p_acc)(struct efc *efc, struct efc_node *n,
> +				    u32 ox_id, u32 s_id);
> +
> +	int (*send_ct_rsp)(struct efc *efc, struct efc_node *node,
> +			   u16 ox_id, struct fc_ct_hdr *hdr,
> +			   u32 rsp_code, u32 reason_code, u32 rsn_code_expl);
> +
> +	void *(*send_ls_rjt)(struct efc *efc, struct efc_node *node,
> +			     u32 ox, u32 rcode, u32 rcode_expl, u32 vendor);
> +
> +	int (*dispatch_fcp_cmd)(struct efc_node *node,
> +				struct efc_hw_sequence *seq);
> +
> +	int (*recv_abts_frame)(struct efc *efc, struct efc_node *node,
> +			       struct efc_hw_sequence *seq);
> +};
> +
> +#define EFC_LOG_LIB		0x01
> +#define EFC_LOG_NODE		0x02
> +#define EFC_LOG_PORT		0x04
> +#define EFC_LOG_DOMAIN		0x08
> +#define EFC_LOG_ELS		0x10
> +#define EFC_LOG_DOMAIN_SM	0x20
> +#define EFC_LOG_SM		0x40
> +
> +/* efc library port structure */
> +struct efc {
> +	void			*base;
> +	struct pci_dev		*pcidev;
> +	u64			req_wwpn;
> +	u64			req_wwnn;
> +
> +	u64			def_wwpn;
> +	u64			def_wwnn;
> +	u64			max_xfer_size;
> +	u32			nodes_count;
> +	mempool_t		*node_pool;
> +	struct dma_pool		*node_dma_pool;
> +
> +	u32			link_status;
> +
> +	/* vport */
> +	struct list_head	vport_list;
> +	/* lock to protect the vport list*/
> +	spinlock_t		vport_lock;
> +
> +	struct libefc_function_template tt;
> +	/* lock to protect the discovery library */
> +	spinlock_t		lock;

And here as well: how does this lock interact with vport_lock? What is
the locking order?

> +
> +	bool			enable_ini;
> +	bool			enable_tgt;
> +
> +	u32			log_level;
> +
> +	struct efc_domain	*domain;
> +	void (*domain_free_cb)(struct efc *efc, void *arg);
> +	void			*domain_free_cb_arg;
> +
> +	time_t			tgt_rscn_delay_msec;
> +	time_t			tgt_rscn_period_msec;
> +
> +	bool			external_loopback;
> +	u32			nodedb_mask;
> +};
> +
> +/*
> + * EFC library registration
> + * **********************************/
> +int efcport_init(struct efc *efc);
> +void efcport_destroy(struct efc *efc);
> +/*
> + * EFC Domain
> + * **********************************/
> +int efc_domain_cb(void *arg, int event, void *data);
> +void efc_domain_force_free(struct efc_domain *domain);
> +void
> +efc_register_domain_free_cb(struct efc *efc,
> +			    void (*callback)(struct efc *efc, void *arg),
> +			    void *arg);
> +
> +/*
> + * EFC Local port
> + * **********************************/
> +int efc_lport_cb(void *arg, int event, void *data);
> +struct efc_vport_spec *efc_vport_create_spec(struct efc *efc, u64 wwnn,
> +			u64 wwpn, u32 fc_id, bool enable_ini, bool enable_tgt,
> +			void *tgt_data, void *ini_data);
> +int efc_sport_vport_new(struct efc_domain *domain, u64 wwpn,
> +			u64 wwnn, u32 fc_id, bool ini, bool tgt,
> +			void *tgt_data, void *ini_data);
> +int efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
> +			u64 wwpn, u64 wwnn);
> +
> +void efc_vport_del_all(struct efc *efc);
> +
> +struct efc_sli_port *efc_sport_find(struct efc_domain *domain, u32 d_id);
> +
> +/*
> + * EFC Node
> + * **********************************/
> +int efc_remote_node_cb(void *arg, int event, void *data);
> +u64 efc_node_get_wwnn(struct efc_node *node);
> +u64 efc_node_get_wwpn(struct efc_node *node);
> +struct efc_node *efc_node_find(struct efc_sli_port *sport, u32 id);
> +void efc_node_fcid_display(u32 fc_id, char *buffer, u32 buf_len);
> +
> +void efc_node_post_els_resp(struct efc_node *node, u32 evt, void *arg);
> +void efc_node_post_shutdown(struct efc_node *node, u32 evt, void *arg);
> +/*
> + * EFC FCP/ELS/CT interface
> + * **********************************/
> +int efc_node_recv_abts_frame(struct efc *efc,
> +			     struct efc_node *node,
> +			     struct efc_hw_sequence *seq);
> +void efc_node_recv_els_frame(struct efc_node *node, struct efc_hw_sequence *s);
> +int efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
> +
> +void efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
> +
> +void efc_node_recv_ct_frame(struct efc_node *node, struct efc_hw_sequence *seq);
> +void efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
> +
> +/*
> + * EFC SCSI INTERACTION LAYER
> + * **********************************/
> +void efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node);
> +void efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node);
> +void efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node);
> +
> +#endif /* __EFCLIB_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-04-15 12:14   ` Hannes Reinecke
@ 2020-04-15 17:43     ` James Bottomley
  2020-04-22  4:44     ` James Smart
  1 sibling, 0 replies; 124+ messages in thread
From: James Bottomley @ 2020-04-15 17:43 UTC (permalink / raw)
  To: Hannes Reinecke, James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On Wed, 2020-04-15 at 14:14 +0200, Hannes Reinecke wrote:
> On 4/12/20 5:32 AM, James Smart wrote:
[...]
> > +struct sli4_queue {
> > +	/* Common to all queue types */
> > +	struct efc_dma	dma;
> > +	spinlock_t	lock;	/* protect the queue
> > operations */
> > +	u32	index;		/* current host entry
> > index */
> > +	u16	size;		/* entry size */
> > +	u16	length;		/* number of entries */
> > +	u16	n_posted;	/* number entries posted */
> > +	u16	id;		/* Port assigned xQ_ID */
> > +	u16	ulp;		/* ULP assigned to this
> > queue */
> > +	void __iomem    *db_regaddr;	/* register address
> > for the doorbell */
> > +	u8		type;		/* queue type ie
> > EQ, CQ, ... */
> 
> Alignment?
> Having an u8 following a pointer is a guaranteed misalignment for
> the remaining entries.

Only for a packed structure, which this isn't.

In an ordinary structure, everything is padded to the required
alignment, so you're going to waste 3 bytes here but it isn't going to
be misaligned.
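A tiny sketch (not the actual sli4_queue layout) of what the compiler
does with the non-packed declaration:

	struct pad_example {
		void __iomem	*db_regaddr;	/* 8 bytes, naturally aligned */
		u8		type;		/* 1 byte */
		u16		id;		/* compiler inserts 1 pad byte
						 * so this stays 2-byte aligned
						 */
	};

Only with __packed would the u16 end up at an odd offset, and that is
when misaligned accesses would start to matter.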

James


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces James Smart
  2020-04-15 12:50   ` Hannes Reinecke
@ 2020-04-15 17:50   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 17:50 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:42PM -0700, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - FC Domain registration, allocation and deallocation sequence
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Acquire efc->lock in efc_domain_cb to protect all the domain state
>     transitions.
>   Removed efc_assert and used WARN_ON.
>   Note: Re: Locking:
>     efc->lock is a global per port lock which is used to synchronize and
>     serialize all the state machine event processing. As there is a
>     single EQ all the events are serialized. This lock will protect the
>     sport list, sport, node list, node, and vport list. All the libefc
>     APIs called by the driver will take this lock internally.
>  Note: Re: "It would even simplify the code, as several cases can be
>       collapsed into one ..."
>     The Hardware events cannot be collapsed as each events is different
>     from State Machine events. The code present looks more readable than
>     a mapping array in this case.
> ---
>  drivers/scsi/elx/libefc/efc_domain.c | 1109 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_domain.h |   52 ++
>  2 files changed, 1161 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c
> new file mode 100644
> index 000000000000..4d16e9742e86
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_domain.c
> @@ -0,0 +1,1109 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * domain_sm Domain State Machine: States
> + */
> +
> +#include "efc.h"
> +
> +/* Accept domain callback events from the user driver */
> +int
> +efc_domain_cb(void *arg, int event, void *data)
> +{
> +	struct efc *efc = arg;
> +	struct efc_domain *domain = NULL;
> +	int rc = 0;
> +	unsigned long flags = 0;
> +
> +	if (event != EFC_HW_DOMAIN_FOUND)
> +		domain = data;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	switch (event) {
> +	case EFC_HW_DOMAIN_FOUND: {
> +		u64 fcf_wwn = 0;
> +		struct efc_domain_record *drec = data;
> +
> +		/* extract the fcf_wwn */
> +		fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));
> +
> +		efc_log_debug(efc, "Domain allocated: wwn %016llX\n", fcf_wwn);
> +		/*
> +		 * lookup domain, or allocate a new one
> +		 * if one doesn't exist already
> +		 */
> +		domain = efc->domain;
> +		if (!domain) {
> +			domain = efc_domain_alloc(efc, fcf_wwn);
> +			if (!domain) {
> +				efc_log_err(efc, "efc_domain_alloc() failed\n");
> +				rc = -1;
> +				break;
> +			}
> +			efc_sm_transition(&domain->drvsm, __efc_domain_init,
> +					  NULL);
> +		}
> +
> +		if (fcf_wwn != domain->fcf_wwn) {
> +			efc_log_err(efc, "evt: FOUND for existing domain\n");
> +			efc_log_err(efc, "wwn:%016llX domain wwn:%016llX\n",
> +				    fcf_wwn, domain->fcf_wwn);
> +		}
> +

Personally I am not a big fan of mixing the state machine code with
the work code, mainly because it makes the state machine harder to
follow (long functions) and the 'functional' code gets pushed right up
against the 80 character limit. If it's only a couple of function
calls, I don't see a need to introduce a new function, but the above is
already too long for my taste. Well, that's taste.
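E.g. something like this (rough sketch, the helper name is made up)
would keep the switch arms to one line each:

	static int
	efc_domain_handle_found(struct efc *efc, struct efc_domain_record *drec)
	{
		struct efc_domain *domain = efc->domain;
		u64 fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));

		efc_log_debug(efc, "Domain allocated: wwn %016llX\n", fcf_wwn);

		/* lookup domain, or allocate a new one if none exists yet */
		if (!domain) {
			domain = efc_domain_alloc(efc, fcf_wwn);
			if (!domain) {
				efc_log_err(efc, "efc_domain_alloc() failed\n");
				return -1;
			}
			efc_sm_transition(&domain->drvsm, __efc_domain_init,
					  NULL);
		}

		if (fcf_wwn != domain->fcf_wwn) {
			efc_log_err(efc, "evt: FOUND for existing domain\n");
			efc_log_err(efc, "wwn:%016llX domain wwn:%016llX\n",
				    fcf_wwn, domain->fcf_wwn);
		}

		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
		return 0;
	}

and then the EFC_HW_DOMAIN_FOUND case collapses to
"rc = efc_domain_handle_found(efc, data); break;".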

> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
> +		break;
> +	}
> +
> +	case EFC_HW_DOMAIN_LOST:
> +		domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n");
> +		efc->tt.domain_hold_frames(efc, domain);
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ALLOC_OK:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ALLOC_FAIL:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL,
> +				      NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ATTACH_OK:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_ATTACH_FAIL:
> +		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n");
> +		efc_domain_post_event(domain,
> +				      EFC_EVT_DOMAIN_ATTACH_FAIL, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_FREE_OK:
> +		domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL);
> +		break;
> +
> +	case EFC_HW_DOMAIN_FREE_FAIL:
> +		domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n");
> +		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL);
> +		break;
> +
> +	default:
> +		efc_log_warn(efc, "unsupported event %#x\n", event);
> +	}
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	if (efc->domain && domain->req_accept_frames) {
> +		domain->req_accept_frames = false;
> +		efc->tt.domain_accept_frames(efc, domain);
> +	}
> +
> +	return rc;
> +}
> +
> +struct efc_domain *
> +efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn)
> +{
> +	struct efc_domain *domain;
> +
> +	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
> +	if (domain) {
> +		domain->efc = efc;
> +		domain->drvsm.app = domain;
> +
> +		xa_init(&domain->lookup);
> +
> +		INIT_LIST_HEAD(&domain->sport_list);
> +		domain->fcf_wwn = fcf_wwn;
> +		efc_log_debug(efc, "Domain allocated: wwn %016llX\n",
> +			      domain->fcf_wwn);
> +		efc->domain = domain;
> +	} else {
> +		efc_log_err(efc, "domain allocation failed\n");
> +	}
> +
> +	return domain;
> +}
> +
> +void
> +efc_domain_free(struct efc_domain *domain)
> +{
> +	struct efc *efc;
> +
> +	efc = domain->efc;
> +
> +	/* Hold frames to clear the domain pointer from the xport lookup */
> +	efc->tt.domain_hold_frames(efc, domain);
> +
> +	efc_log_debug(efc, "Domain free: wwn %016llX\n",
> +		      domain->fcf_wwn);
> +
> +	xa_destroy(&domain->lookup);
> +	efc->domain = NULL;
> +
> +	if (efc->domain_free_cb)
> +		(*efc->domain_free_cb)(efc, efc->domain_free_cb_arg);
> +
> +	kfree(domain);
> +}
> +
> +/* Free memory resources of a domain object */
> +void
> +efc_domain_force_free(struct efc_domain *domain)
> +{
> +	struct efc_sli_port *sport;
> +	struct efc_sli_port *next;
> +	struct efc *efc = domain->efc;
> +
> +	/* Shutdown domain sm */
> +	efc_sm_disable(&domain->drvsm);
> +
> +	list_for_each_entry_safe(sport, next, &domain->sport_list, list_entry) {
> +		efc_sport_force_free(sport);
> +	}
> +
> +	efc->tt.hw_domain_force_free(efc, domain);
> +	efc_domain_free(domain);
> +}
> +
> +/* Register a callback to be called when the domain is freed */
> +void
> +efc_register_domain_free_cb(struct efc *efc,
> +			    void (*callback)(struct efc *efc, void *arg),
> +			    void *arg)
> +{
> +	efc->domain_free_cb = callback;
> +	efc->domain_free_cb_arg = arg;
> +	if (!efc->domain && callback)
> +		(*callback)(efc, arg);
> +}
> +
> +static void *
> +__efc_domain_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_domain *domain = ctx->app;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/*
> +		 * this can arise if an FLOGI fails on the SPORT,
> +		 * and the SPORT is shutdown
> +		 */
> +		break;
> +	default:
> +		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
> +			     funcname, efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +static void *
> +__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_domain *domain = ctx->app;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +		break;
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* save drec, mark domain_found_pending */
> +		memcpy(&domain->pending_drec, arg,
> +		       sizeof(domain->pending_drec));
> +		domain->domain_found_pending = true;
> +		break;
> +	case EFC_EVT_DOMAIN_LOST:
> +		/* unmark domain_found_pending */
> +		domain->domain_found_pending = false;
> +		break;
> +
> +	default:
> +		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
> +			     funcname, efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +#define std_domain_state_decl(...)\
> +	struct efc_domain *domain = NULL;\
> +	struct efc *efc = NULL;\
> +	\
> +	WARN_ON(!ctx || !ctx->app);\
> +	domain = ctx->app;\
> +	WARN_ON(!domain->efc);\
> +	efc = domain->efc
> +
> +void *
> +__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		  void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		domain->attached = false;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_FOUND: {
> +		u32	i;
> +		struct efc_domain_record *drec = arg;
> +		struct efc_sli_port *sport;
> +
> +		u64	my_wwnn = efc->req_wwnn;
> +		u64	my_wwpn = efc->req_wwpn;
> +		__be64		be_wwpn;
> +
> +		if (my_wwpn == 0 || my_wwnn == 0) {
> +			efc_log_debug(efc,
> +				"using default hardware WWN configuration\n");
> +			my_wwpn = efc->def_wwpn;
> +			my_wwnn = efc->def_wwnn;
> +		}
> +
> +		efc_log_debug(efc,
> +			"Creating base sport using WWPN %016llX WWNN %016llX\n",
> +			my_wwpn, my_wwnn);
> +
> +		/* Allocate a sport and transition to __efc_sport_allocated */
> +		sport = efc_sport_alloc(domain, my_wwpn, my_wwnn, U32_MAX,
> +					efc->enable_ini, efc->enable_tgt);
> +
> +		if (!sport) {
> +			efc_log_err(efc, "efc_sport_alloc() failed\n");
> +			break;
> +		}
> +		efc_sm_transition(&sport->sm, __efc_sport_allocated, NULL);
> +
> +		be_wwpn = cpu_to_be64(sport->wwpn);
> +
> +		/* allocate struct efc_sli_port object for local port
> +		 * Note: drec->fc_id is ALPA from read_topology only if loop
> +		 */
> +		if (efc->tt.hw_port_alloc(efc, sport, NULL,
> +					  (uint8_t *)&be_wwpn)) {
> +			efc_log_err(efc, "Can't allocate port\n");
> +			efc_sport_free(sport);
> +			break;
> +		}
> +
> +		domain->is_loop = drec->is_loop;
> +
> +		/*
> +		 * If the loop position map includes ALPA == 0,
> +		 * then we are in a public loop (NL_PORT)
> +		 * Note that the first element of the loopmap[]
> +		 * contains the count of elements, and if
> +		 * ALPA == 0 is present, it will occupy the first
> +		 * location after the count.
> +		 */
> +		domain->is_nlport = drec->map.loop[1] == 0x00;
> +
> +		if (!domain->is_loop) {
> +			/* Initiate HW domain alloc */
> +			if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
> +				efc_log_err(efc,
> +					    "Failed to initiate HW domain allocation\n");
> +				break;
> +			}
> +			efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
> +			break;
> +		}
> +
> +		efc_log_debug(efc, "%s fc_id=%#x speed=%d\n",
> +			      drec->is_loop ?
> +			      (domain->is_nlport ?
> +			      "public-loop" : "loop") : "other",
> +			      drec->fc_id, drec->speed);
> +
> +		sport->fc_id = drec->fc_id;
> +		sport->topology = EFC_SPORT_TOPOLOGY_LOOP;
> +		snprintf(sport->display_name, sizeof(sport->display_name),
> +			 "s%06x", drec->fc_id);
> +
> +		if (efc->enable_ini) {
> +			u32 count = drec->map.loop[0];
> +
> +			efc_log_debug(efc, "%d position map entries\n",
> +				      count);
> +			for (i = 1; i <= count; i++) {
> +				if (drec->map.loop[i] != drec->fc_id) {
> +					struct efc_node *node;
> +
> +					efc_log_debug(efc, "%#x -> %#x\n",
> +						      drec->fc_id,
> +						      drec->map.loop[i]);
> +					node = efc_node_alloc(sport,
> +							      drec->map.loop[i],
> +							      false, true);
> +					if (!node) {
> +						efc_log_err(efc,
> +							    "efc_node_alloc() failed\n");
> +						break;
> +					}
> +					efc_node_transition(node,
> +							    __efc_d_wait_loop,
> +							    NULL);
> +				}
> +			}
> +		}
> +
> +		/* Initiate HW domain alloc */
> +		if (efc->tt.hw_domain_alloc(efc, domain, drec->index)) {
> +			efc_log_err(efc,
> +				    "Failed to initiate HW domain allocation\n");
> +			break;
> +		}
> +		efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
> +		break;
> +	}
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for the domain allocation to complete */
> +void *
> +__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport;
> +
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ALLOC_OK: {
> +		struct fc_els_flogi  *sp;
> +
> +		sport = domain->sport;
> +		if (WARN_ON(!sport))
> +			return NULL;
> +
> +		sp = (struct fc_els_flogi  *)sport->service_params;
> +
> +		/* Save the domain service parameters */
> +		memcpy(domain->service_params + 4, domain->dma.virt,
> +		       sizeof(struct fc_els_flogi) - 4);
> +		memcpy(sport->service_params + 4, domain->dma.virt,
> +		       sizeof(struct fc_els_flogi) - 4);
> +
> +		/*
> +		 * Update the sport's service parameters,
> +		 * user might have specified non-default names
> +		 */
> +		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
> +		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
> +
> +		/*
> +		 * Take the loop topology path,
> +		 * unless we are an NL_PORT (public loop)
> +		 */
> +		if (domain->is_loop && !domain->is_nlport) {
> +			/*
> +			 * For loop, we already have our FC ID
> +			 * and don't need fabric login.
> +			 * Transition to the allocated state and
> +			 * post an event to attach to
> +			 * the domain. Note that this breaks the
> +			 * normal action/transition
> +			 * pattern here to avoid a race with the
> +			 * domain attach callback.
> +			 */
> +			/* sm: is_loop / domain_attach */
> +			efc_sm_transition(ctx, __efc_domain_allocated, NULL);
> +			__efc_domain_attach_internal(domain, sport->fc_id);
> +			break;
> +		}
> +		{
> +			struct efc_node *node;
> +
> +			/* alloc fabric node, send FLOGI */
> +			node = efc_node_find(sport, FC_FID_FLOGI);
> +			if (node) {
> +				efc_log_err(efc,
> +					    "Fabric Controller node already exists\n");
> +				break;
> +			}
> +			node = efc_node_alloc(sport, FC_FID_FLOGI,
> +					      false, false);
> +			if (!node) {
> +				efc_log_err(efc,
> +					    "Error: efc_node_alloc() failed\n");
> +			} else {
> +				efc_node_transition(node,
> +						    __efc_fabric_init, NULL);
> +			}
> +			/* Accept frames */
> +			domain->req_accept_frames = true;
> +		}
> +		/* sm: / start fabric logins */
> +		efc_sm_transition(ctx, __efc_domain_allocated, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ALLOC_FAIL:
> +		efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;",
> +			    efc_sm_event_name(evt));
> +		efc_log_err(efc, "shutting down domain\n");
> +		domain->req_domain_free = true;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		break;
> +
> +	case EFC_EVT_DOMAIN_LOST:
> +		efc_log_debug(efc,
> +			      "%s received while waiting for hw_domain_alloc()\n",
> +			efc_sm_event_name(evt));
> +		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
> +		break;
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for the domain attach request */
> +void *
> +__efc_domain_allocated(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	int rc = 0;
> +
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_REQ_ATTACH: {
> +		u32 fc_id;
> +
> +		if (WARN_ON(!arg))
> +			return NULL;
> +
> +		fc_id = *((u32 *)arg);
> +		efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n",
> +			      fc_id);
> +		/* Update sport lookup */
> +		rc = xa_err(xa_store(&domain->lookup, fc_id, domain->sport,
> +				     GFP_ATOMIC));
> +		if (rc) {
> +			efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
> +			return NULL;
> +		}
> +
> +		/* Update display name for the sport */
> +		efc_node_fcid_display(fc_id, domain->sport->display_name,
> +				      sizeof(domain->sport->display_name));
> +
> +		/* Issue domain attach call */
> +		rc = efc->tt.hw_domain_attach(efc, domain, fc_id);
> +		if (rc) {
> +			efc_log_err(efc, "efc_hw_domain_attach failed: %d\n",
> +				    rc);
> +			return NULL;
> +		}
> +		/* sm: / domain_attach */
> +		efc_sm_transition(ctx, __efc_domain_wait_attach, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		efc_log_err(efc, "%s: evt: %d should not happen\n",
> +			    __func__, evt);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_LOST: {
> +		int rc;
> +
> +		efc_log_debug(efc,
> +			      "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n",
> +			efc_sm_event_name(evt));
> +		if (!list_empty(&domain->sport_list)) {
> +			/*
> +			 * if there are sports, transition to
> +			 * wait state and send shutdown to each
> +			 * sport
> +			 */
> +			struct efc_sli_port	*sport = NULL;
> +			struct efc_sli_port	*sport_next = NULL;
> +
> +			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
> +					  NULL);
> +			list_for_each_entry_safe(sport, sport_next,
> +						 &domain->sport_list,
> +						 list_entry) {
> +				efc_sm_post_event(&sport->sm,
> +						  EFC_EVT_SHUTDOWN, NULL);
> +			}
> +		} else {
> +			/* no sports exist, free domain */
> +			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
> +					  NULL);
> +			rc = efc->tt.hw_domain_free(efc, domain);
> +			if (rc) {
> +				efc_log_err(efc,
> +					    "hw_domain_free failed: %d\n", rc);
> +			}
> +		}
> +
> +		break;
> +	}
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for the HW domain attach to complete */
> +void *
> +__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		struct efc_node *node = NULL;
> +		struct efc_node *next_node = NULL;
> +		struct efc_sli_port *sport;
> +		struct efc_sli_port *next_sport;
> +
> +		/*
> +		 * Set domain notify pending state to avoid
> +		 * duplicate domain event post
> +		 */
> +		domain->domain_notify_pend = true;
> +
> +		/* Mark as attached */
> +		domain->attached = true;
> +
> +		/* Register with SCSI API */
> +		efc->tt.new_domain(efc, domain);
> +
> +		/* Transition to ready */
> +		/* sm: / forward event to all sports and nodes */
> +		efc_sm_transition(ctx, __efc_domain_ready, NULL);
> +
> +		/* We have an FCFI, so we can accept frames */
> +		domain->req_accept_frames = true;
> +
> +		/*
> +		 * Notify all nodes that the domain attach request
> +		 * has completed
> +		 * Note: sport will have already received notification
> +		 * of sport attached as a result of the HW's port attach.
> +		 */
> +		list_for_each_entry_safe(sport, next_sport,
> +					 &domain->sport_list, list_entry) {
> +			list_for_each_entry_safe(node, next_node,
> +						 &sport->node_list,
> +						 list_entry) {
> +				efc_node_post_event(node,
> +						    EFC_EVT_DOMAIN_ATTACH_OK,
> +						    NULL);
> +			}
> +		}
> +		domain->domain_notify_pend = false;
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ATTACH_FAIL:
> +		efc_log_debug(efc,
> +			      "%s received while waiting for hw attach\n",
> +			      efc_sm_event_name(evt));
> +		break;
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		efc_log_err(efc, "%s: evt: %d should not happen\n",
> +			    __func__, evt);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_LOST:
> +		/*
> +		 * Domain lost while waiting for an attach to complete,
> +		 * go to a state that waits for  the domain attach to
> +		 * complete, then handle domain lost
> +		 */
> +		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_REQ_ATTACH:
> +		/*
> +		 * In P2P we can get an attach request from
> +		 * the other FLOGI path, so drop this one
> +		 */
> +		break;
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Ready state */
> +void *
> +__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		/* start any pending vports */
> +		if (efc_vport_start(domain)) {
> +			efc_log_debug(domain->efc,
> +				      "efc_vport_start didn't start vports\n");
> +		}
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_LOST: {
> +		int rc;
> +
> +		if (!list_empty(&domain->sport_list)) {
> +			/*
> +			 * if there are sports, transition to wait state
> +			 * and send shutdown to each sport
> +			 */
> +			struct efc_sli_port	*sport = NULL;
> +			struct efc_sli_port	*sport_next = NULL;
> +
> +			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
> +					  NULL);
> +			list_for_each_entry_safe(sport, sport_next,
> +						 &domain->sport_list,
> +						 list_entry) {
> +				efc_sm_post_event(&sport->sm,
> +						  EFC_EVT_SHUTDOWN, NULL);
> +			}
> +		} else {
> +			/* no sports exist, free domain */
> +			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
> +					  NULL);
> +			rc = efc->tt.hw_domain_free(efc, domain);
> +			if (rc) {
> +				efc_log_err(efc,
> +					    "hw_domain_free failed: %d\n", rc);
> +			}
> +		}
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_FOUND:
> +		/* Should not happen */
> +		efc_log_err(efc, "%s: evt: %d should not happen\n",
> +			    __func__, evt);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_REQ_ATTACH: {
> +		/* can happen during p2p */
> +		u32 fc_id;
> +
> +		fc_id = *((u32 *)arg);
> +
> +		/* Assume that the domain is attached */
> +		WARN_ON(!domain->attached);
> +
> +		/*
> +		 * Verify that the requested FC_ID
> +		 * is the same as the one we're working with
> +		 */
> +		WARN_ON(domain->sport->fc_id != fc_id);
> +		break;
> +	}
> +
> +	default:
> +		__efc_domain_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Domain state machine: Wait for nodes to free prior to the domain shutdown */
> +void *
> +__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +			      void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_ALL_CHILD_NODES_FREE: {
> +		int rc;
> +
> +		/* sm: / efc_hw_domain_free */
> +		efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL);
> +
> +		/* Request efc_hw_domain_free and wait for completion */
> +		rc = efc->tt.hw_domain_free(efc, domain);
> +		if (rc) {
> +			efc_log_err(efc, "efc_hw_domain_free() failed: %d\n",
> +				    rc);
> +		}
> +		break;
> +	}
> +	default:
> +		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> + /* Domain state machine: Complete the domain shutdown */
> +void *
> +__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_FREE_OK: {
> +		efc->tt.del_domain(efc, domain);
> +
> +		/* sm: / domain_free */
> +		if (domain->domain_found_pending) {
> +			/*
> +			 * save fcf_wwn and drec from this domain,
> +			 * free current domain and allocate
> +			 * a new one with the same fcf_wwn
> +			 * could use a SLI-4 "re-register VPI"
> +			 * operation here?
> +			 */
> +			u64 fcf_wwn = domain->fcf_wwn;
> +			struct efc_domain_record drec = domain->pending_drec;
> +
> +			efc_log_debug(efc, "Reallocating domain\n");
> +			domain->req_domain_free = true;
> +			domain = efc_domain_alloc(efc, fcf_wwn);
> +
> +			if (!domain) {
> +				efc_log_err(efc,
> +					    "efc_domain_alloc() failed\n");
> +				return NULL;
> +			}
> +			/*
> +			 * got a new domain; at this point,
> +			 * there are at least two domains
> +			 * once the req_domain_free flag is processed,
> +			 * the associated domain will be removed.
> +			 */
> +			efc_sm_transition(&domain->drvsm, __efc_domain_init,
> +					  NULL);
> +			efc_sm_post_event(&domain->drvsm,
> +					  EFC_EVT_DOMAIN_FOUND, &drec);
> +		} else {
> +			domain->req_domain_free = true;
> +		}
> +		break;
> +	}
> +
> +	default:
> +		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/*
> + * Domain state machine: Wait for the domain alloc/attach completion
> + * after receiving a domain lost.
> + */
> +void *
> +__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg)
> +{
> +	std_domain_state_decl();
> +
> +	domain_sm_trace(domain);
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ALLOC_OK:
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		int rc;
> +
> +		if (!list_empty(&domain->sport_list)) {
> +			/*
> +			 * if there are sports, transition to
> +			 * wait state and send shutdown to each sport
> +			 */
> +			struct efc_sli_port	*sport = NULL;
> +			struct efc_sli_port	*sport_next = NULL;
> +
> +			efc_sm_transition(ctx, __efc_domain_wait_sports_free,
> +					  NULL);
> +			list_for_each_entry_safe(sport, sport_next,
> +						 &domain->sport_list,
> +						 list_entry) {
> +				efc_sm_post_event(&sport->sm,
> +						  EFC_EVT_SHUTDOWN, NULL);
> +			}
> +		} else {
> +			/* no sports exist, free domain */
> +			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
> +					  NULL);
> +			rc = efc->tt.hw_domain_free(efc, domain);
> +			if (rc) {
> +				efc_log_err(efc,
> +					    "efc_hw_domain_free() failed: %d\n",
> +									rc);
> +			}
> +		}
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_ALLOC_FAIL:
> +	case EFC_EVT_DOMAIN_ATTACH_FAIL:
> +		efc_log_err(efc, "[domain] %-20s: failed\n",
> +			    efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void
> +__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id)
> +{
> +	memcpy(domain->dma.virt,
> +	       ((uint8_t *)domain->flogi_service_params) + 4,
> +		   sizeof(struct fc_els_flogi) - 4);
> +	(void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH,
> +				 &s_id);
> +}
> +
> +void
> +efc_domain_attach(struct efc_domain *domain, u32 s_id)
> +{
> +	__efc_domain_attach_internal(domain, s_id);
> +}
> +
> +int
> +efc_domain_post_event(struct efc_domain *domain,
> +		      enum efc_sm_event event, void *arg)
> +{
> +	int rc;
> +	bool req_domain_free;
> +
> +	rc = efc_sm_post_event(&domain->drvsm, event, arg);
> +
> +	req_domain_free = domain->req_domain_free;
> +	domain->req_domain_free = false;
> +
> +	if (req_domain_free)
> +		efc_domain_free(domain);
> +
> +	return rc;
> +}
> +
> +/* Dispatch unsolicited FC frame */
> +int
> +efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
> +{
> +	struct efc_domain *domain = (struct efc_domain *)arg;
> +	struct efc *efc = domain->efc;
> +	struct fc_frame_header *hdr;
> +	u32 s_id;
> +	u32 d_id;
> +	struct efc_node *node = NULL;
> +	struct efc_sli_port *sport = NULL;
> +	unsigned long flags = 0;
> +
> +	if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) {
> +		efc_log_err(efc, "Sequence header or payload is null\n");
> +		return EFC_HW_SEQ_FREE;
> +	}
> +
> +	hdr = seq->header->dma.virt;
> +
> +	/* extract the s_id and d_id */
> +	s_id = ntoh24(hdr->fh_s_id);
> +	d_id = ntoh24(hdr->fh_d_id);
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	sport = domain->sport;
> +	if (!sport) {
> +		efc_log_err(efc,
> +			    "Drop frame, sport for FC ID 0x%06x is NULL", d_id);
> +		spin_unlock_irqrestore(&efc->lock, flags);
> +		return EFC_HW_SEQ_FREE;
> +	}
> +
> +	if (sport->fc_id != d_id) {
> +		/* Not a physical port IO lookup sport associated with the
> +		 * npiv port
> +		 */
> +		/* Look up without lock */
> +		sport = efc_sport_find(domain, d_id);
> +		if (!sport) {
> +			if (hdr->fh_type == FC_TYPE_FCP) {
> +				/* Drop frame */
> +				efc_log_warn(efc,
> +					     "unsolicited FCP frame with invalid d_id x%x\n",
> +					d_id);
> +				spin_unlock_irqrestore(&efc->lock, flags);
> +				return EFC_HW_SEQ_FREE;
> +			}
> +			/* p2p will use this case */
> +			sport = domain->sport;
> +		}
> +	}
> +
> +	/* Lookup the node given the remote s_id */
> +	node = efc_node_find(sport, s_id);
> +
> +	/* If not found, then create a new node */
> +	if (!node) {
> +		/* If this is solicited data or control based on R_CTL and
> +		 * there is no node context,
> +		 * then we can drop the frame
> +		 */
> +		if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) ||
> +			(hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) {
> +			efc_log_debug(efc,
> +				      "solicited data/ctrl frame without node,drop\n");
> +			spin_unlock_irqrestore(&efc->lock, flags);
> +			return EFC_HW_SEQ_FREE;
> +		}
> +
> +		node = efc_node_alloc(sport, s_id, false, false);
> +		if (!node) {
> +			efc_log_err(efc, "efc_node_alloc() failed\n");
> +			spin_unlock_irqrestore(&efc->lock, flags);
> +			return EFC_HW_SEQ_FREE;
> +		}
> +		/* don't send PLOGI on efc_d_init entry */
> +		efc_node_init_device(node, false);
> +	}
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	if (node->hold_frames || !list_empty(&node->pend_frames)) {
> +
> +		/* add frame to node's pending list */
> +		spin_lock_irqsave(&node->pend_frames_lock, flags);
> +			INIT_LIST_HEAD(&seq->list_entry);
> +			list_add_tail(&seq->list_entry, &node->pend_frames);
> +		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +		return EFC_HW_SEQ_HOLD;
> +	}
> +
> +	/* now dispatch frame to the node frame handler */
> +	efc_node_dispatch_frame(node, seq);
> +	return EFC_HW_SEQ_FREE;
> +}
> +
> +void
> +efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
> +{
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +	u32 port_id;
> +	struct efc_node *node = (struct efc_node *)arg;
> +	struct efc *efc = node->efc;
> +
> +	port_id = ntoh24(hdr->fh_s_id);
> +
> +	if (WARN_ON(port_id != node->rnode.fc_id))
> +		return;
> +
> +	if ((!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) ||
> +	    !(ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)) {
> +		node_printf(node,
> +		    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
> +		    cpu_to_be32(((u32 *)hdr)[0]),
> +		    cpu_to_be32(((u32 *)hdr)[1]),
> +		    cpu_to_be32(((u32 *)hdr)[2]),
> +		    cpu_to_be32(((u32 *)hdr)[3]),
> +		    cpu_to_be32(((u32 *)hdr)[4]),
> +		    cpu_to_be32(((u32 *)hdr)[5]));
> +		return;
> +	}
> +
> +	switch (hdr->fh_r_ctl) {
> +	case FC_RCTL_ELS_REQ:
> +	case FC_RCTL_ELS_REP:
> +		efc_node_recv_els_frame(node, seq);
> +		break;
> +
> +	case FC_RCTL_BA_ABTS:
> +	case FC_RCTL_BA_ACC:
> +	case FC_RCTL_BA_RJT:
> +	case FC_RCTL_BA_NOP:
> +		efc->tt.recv_abts_frame(efc, node, seq);
> +		break;
> +
> +	case FC_RCTL_DD_UNSOL_CMD:
> +	case FC_RCTL_DD_UNSOL_CTL:
> +		switch (hdr->fh_type) {
> +		case FC_TYPE_FCP:
> +			if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) {
> +				if (!node->fcp_enabled) {
> +					efc_node_recv_fcp_cmd(node, seq);
> +					break;
> +				}
> +				/* Dispatch FCP command*/
> +				efc->tt.dispatch_fcp_cmd(node, seq);
> +			} else if ((hdr->fh_r_ctl & 0xf) ==
> +							FC_RCTL_DD_SOL_DATA) {
> +				node_printf(node,
> +				    "solicited data received.Dropping IO\n");
> +			}
> +			break;
> +
> +		case FC_TYPE_CT:
> +			efc_node_recv_ct_frame(node, seq);
> +			break;
> +		default:
> +			break;
> +		}
> +		break;
> +	default:
> +		efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl);
> +	}
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h
> new file mode 100644
> index 000000000000..d318dda5935c
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_domain.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Declare driver's domain handler exported interface
> + */
> +
> +#ifndef __EFCT_DOMAIN_H__
> +#define __EFCT_DOMAIN_H__
> +
> +extern struct efc_domain *
> +efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn);

I don't think the extern keyword is necessary.
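
Function declarations have external linkage by default, so a plain
prototype does the same thing, e.g.:

struct efc_domain *
efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn);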

> +extern void
> +efc_domain_free(struct efc_domain *domain);
> +
> +extern void *
> +__efc_domain_init(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_allocated(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_ready(struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_sports_free(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +
> +extern void
> +efc_domain_attach(struct efc_domain *domain, u32 s_id);
> +extern int
> +efc_domain_post_event(struct efc_domain *domain,
> +		      enum efc_sm_event event, void *arg);
> +extern void
> +__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id);
> +
> +#endif /* __EFCT_DOMAIN_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 11/31] elx: libefc: SLI and FC PORT state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 11/31] elx: libefc: SLI and FC PORT " James Smart
  2020-04-15 15:38   ` Hannes Reinecke
@ 2020-04-15 18:04   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 18:04 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:43PM -0700, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - SLI and FC port (aka n_port_id) registration, allocation and
>   deallocation.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Acquire efc->lock in efc_lport_cb to protect all the port state
>     transitions.
>   Add vport_lock to protect vport_list access.
>   Fixed vport_sport allocation race.
>   Reworked on vport code.
> ---
>  drivers/scsi/elx/libefc/efc_sport.c | 846 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_sport.h |  52 +++
>  2 files changed, 898 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_sport.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_sport.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_sport.c b/drivers/scsi/elx/libefc/efc_sport.c
> new file mode 100644
> index 000000000000..99f5213e0902
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sport.c
> @@ -0,0 +1,846 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Details SLI port (sport) functions.
> + */
> +
> +#include "efc.h"
> +
> +/* HW sport callback events from the user driver */
> +int
> +efc_lport_cb(void *arg, int event, void *data)
> +{
> +	struct efc *efc = arg;
> +	struct efc_sli_port *sport = data;
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	unsigned long flags = 0;
> +
> +	switch (event) {
> +	case EFC_HW_PORT_ALLOC_OK:
> +		sm_event = EFC_EVT_SPORT_ALLOC_OK;
> +		break;
> +	case EFC_HW_PORT_ALLOC_FAIL:
> +		sm_event = EFC_EVT_SPORT_ALLOC_FAIL;
> +		break;
> +	case EFC_HW_PORT_ATTACH_OK:
> +		sm_event = EFC_EVT_SPORT_ATTACH_OK;
> +		break;
> +	case EFC_HW_PORT_ATTACH_FAIL:
> +		sm_event = EFC_EVT_SPORT_ATTACH_FAIL;
> +		break;
> +	case EFC_HW_PORT_FREE_OK:
> +		sm_event = EFC_EVT_SPORT_FREE_OK;
> +		break;
> +	case EFC_HW_PORT_FREE_FAIL:
> +		sm_event = EFC_EVT_SPORT_FREE_FAIL;
> +		break;
> +	default:
> +		efc_log_err(efc, "unknown event %#x\n", event);
> +		return EFC_FAIL;
> +	}
> +
> +	efc_log_debug(efc, "sport event: %s\n", efc_sm_event_name(sm_event));
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_sm_post_event(&sport->sm, sm_event, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +struct efc_sli_port *
> +efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		u32 fc_id, bool enable_ini, bool enable_tgt)
> +{
> +	struct efc_sli_port *sport;
> +
> +	if (domain->efc->enable_ini)
> +		enable_ini = 0;
> +
> +	/* Return a failure if this sport has already been allocated */
> +	if (wwpn != 0) {
> +		sport = efc_sport_find_wwn(domain, wwnn, wwpn);
> +		if (sport) {
> +			efc_log_err(domain->efc,
> +				    "Failed: SPORT %016llX %016llX already allocated\n",
> +				    wwnn, wwpn);
> +			return NULL;
> +		}
> +	}
> +
> +	sport = kzalloc(sizeof(*sport), GFP_ATOMIC);

GFP_ATOMIC looks wrong.
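
If this path can run in a context that may sleep (I haven't chased
every caller, so treat this as a question rather than a fix), I'd
expect something like:

	sport = kzalloc(sizeof(*sport), GFP_KERNEL);

If it is always called under efc->lock, then GFP_ATOMIC is what it has
to be, but a comment saying so would help.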

> +	if (!sport)
> +		return sport;
> +
> +	sport->efc = domain->efc;
> +	snprintf(sport->display_name, sizeof(sport->display_name), "------");
> +	sport->domain = domain;
> +	xa_init(&sport->lookup);
> +	sport->instance_index = domain->sport_instance_count++;
> +	INIT_LIST_HEAD(&sport->node_list);
> +	sport->sm.app = sport;
> +	sport->enable_ini = enable_ini;
> +	sport->enable_tgt = enable_tgt;
> +	sport->enable_rscn = (sport->enable_ini ||
> +			(sport->enable_tgt && enable_target_rscn(sport->efc)));

The outer brackets are not necessary.
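
i.e. this reads the same without them:

	sport->enable_rscn = sport->enable_ini ||
			     (sport->enable_tgt && enable_target_rscn(sport->efc));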

> +
> +	/* Copy service parameters from domain */
> +	memcpy(sport->service_params, domain->service_params,
> +		sizeof(struct fc_els_flogi));
> +
> +	/* Update requested fc_id */
> +	sport->fc_id = fc_id;
> +
> +	/* Update the sport's service parameters for the new wwn's */
> +	sport->wwpn = wwpn;
> +	sport->wwnn = wwnn;
> +	snprintf(sport->wwnn_str, sizeof(sport->wwnn_str), "%016llX", wwnn);
> +
> +	/*
> +	 * if this is the "first" sport of the domain,
> +	 * then make it the "phys" sport
> +	 */
> +	if (list_empty(&domain->sport_list))
> +		domain->sport = sport;
> +
> +	INIT_LIST_HEAD(&sport->list_entry);
> +	list_add_tail(&sport->list_entry, &domain->sport_list);
> +
> +	efc_log_debug(domain->efc, "[%s] allocate sport\n",
> +		      sport->display_name);
> +
> +	return sport;
> +}
> +
> +void
> +efc_sport_free(struct efc_sli_port *sport)
> +{
> +	struct efc_domain *domain;
> +
> +	if (!sport)
> +		return;
> +
> +	domain = sport->domain;
> +	efc_log_debug(domain->efc, "[%s] free sport\n", sport->display_name);
> +	list_del(&sport->list_entry);
> +	/*
> +	 * if this is the physical sport,
> +	 * then clear it out of the domain
> +	 */
> +	if (sport == domain->sport)
> +		domain->sport = NULL;
> +
> +	xa_destroy(&sport->lookup);
> +	xa_erase(&domain->lookup, sport->fc_id);
> +
> +	if (list_empty(&domain->sport_list))
> +		efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE,
> +				      NULL);
> +
> +	kfree(sport);
> +}
> +
> +void
> +efc_sport_force_free(struct efc_sli_port *sport)
> +{
> +	struct efc_node *node;
> +	struct efc_node *next;
> +
> +	/* shutdown sm processing */
> +	efc_sm_disable(&sport->sm);
> +
> +	list_for_each_entry_safe(node, next, &sport->node_list, list_entry) {
> +		efc_node_force_free(node);
> +	}
> +
> +	efc_sport_free(sport);
> +}
> +
> +/* Find a SLI port object, given an FC_ID */
> +struct efc_sli_port *
> +efc_sport_find(struct efc_domain *domain, u32 d_id)
> +{
> +	return xa_load(&domain->lookup, d_id);
> +}
> +
> +/* Find a SLI port, given the WWNN and WWPN */
> +struct efc_sli_port *
> +efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
> +{
> +	struct efc_sli_port *sport = NULL;
> +
> +	list_for_each_entry(sport, &domain->sport_list, list_entry) {
> +		if (sport->wwnn == wwnn && sport->wwpn == wwpn)
> +			return sport;
> +	}
> +	return NULL;
> +}
> +
> +/* External call to request an attach for a sport, given an FC_ID */
> +int
> +efc_sport_attach(struct efc_sli_port *sport, u32 fc_id)
> +{
> +	int rc;
> +	struct efc_node *node;
> +	struct efc *efc = sport->efc;
> +
> +	/* Set our lookup */
> +	rc = xa_err(xa_store(&sport->domain->lookup, fc_id, sport, GFP_ATOMIC));
> +	if (rc) {
> +		efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
> +		return rc;

Should this return EFC_FAIL here, for consistency with the other error
paths in this file?
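
Untested, but I'd picture it as:

	if (rc) {
		efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
		return EFC_FAIL;
	}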

> +	}
> +
> +	/* Update our display_name */
> +	efc_node_fcid_display(fc_id, sport->display_name,
> +			      sizeof(sport->display_name));
> +
> +	list_for_each_entry(node, &sport->node_list, list_entry) {
> +		efc_node_update_display_name(node);
> +	}
> +
> +	efc_log_debug(sport->efc, "[%s] attach sport: fc_id x%06x\n",
> +		      sport->display_name, fc_id);
> +
> +	rc = efc->tt.hw_port_attach(efc, sport, fc_id);
> +	if (rc != EFC_HW_RTN_SUCCESS) {
> +		efc_log_err(sport->efc,
> +			    "efc_hw_port_attach failed: %d\n", rc);
> +		return EFC_FAIL;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efc_sport_shutdown(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_node *node;
> +	struct efc_node *node_next;
> +
> +	list_for_each_entry_safe(node, node_next,
> +					&sport->node_list, list_entry) {
> +		if (!(node->rnode.fc_id == FC_FID_FLOGI && sport->is_vport)) {
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +			continue;
> +		}
> +
> +		/*
> +		 * If this is a vport, logout of the fabric
> +		 * controller so that it deletes the vport
> +		 * on the switch.
> +		 */
> +		/* if link is down, don't send logo */
> +		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		} else {
> +			efc_log_debug(efc,
> +				      "[%s] sport shutdown vport, sending logo to node\n",
> +				      node->display_name);
> +
> +			if (efc->tt.els_send(efc, node, ELS_LOGO,
> +					     EFC_FC_FLOGI_TIMEOUT_SEC,
> +					EFC_FC_ELS_DEFAULT_RETRIES)) {
> +				/* sent LOGO, wait for response */
> +				efc_node_transition(node,
> +						    __efc_d_wait_logo_rsp,
> +						     NULL);
> +				continue;
> +			}
> +
> +			/*
> +			 * failed to send LOGO,
> +			 * go ahead and cleanup node anyways
> +			 */
> +			node_printf(node, "Failed to send LOGO\n");
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +		}
> +	}
> +}
> +
> +/* Clear the sport reference in the vport specification */
> +static void
> +efc_vport_link_down(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_vport_spec *vport;
> +
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if (vport->sport == sport) {
> +			vport->sport = NULL;
> +			break;
> +		}
> +	}
> +}
> +
> +static void *
> +__efc_sport_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = sport->efc;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		break;
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +			efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN: {
> +		int node_list_empty;
> +
> +		/* Flag this sport as shutting down */
> +		sport->shutting_down = true;
> +
> +		if (sport->is_vport)
> +			efc_vport_link_down(sport);
> +
> +		node_list_empty = list_empty(&sport->node_list);
> +
> +		if (node_list_empty) {
> +			/* Remove the sport from the domain's lookup table */
> +			xa_erase(&domain->lookup, sport->fc_id);
> +			efc_sm_transition(ctx, __efc_sport_wait_port_free,
> +					  NULL);
> +			if (efc->tt.hw_port_free(efc, sport)) {
> +				efc_log_test(sport->efc,
> +					     "efc_hw_port_free failed\n");
> +				/* Not much we can do, free the sport anyways */
> +				efc_sport_free(sport);
> +			}
> +		} else {
> +			/* sm: node list is not empty / shutdown nodes */
> +			efc_sm_transition(ctx,
> +					  __efc_sport_wait_shutdown, NULL);
> +			efc_sport_shutdown(sport);
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_test(sport->efc, "[%s] %-20s %-20s not handled\n",
> +			     sport->display_name, funcname,
> +			     efc_sm_event_name(evt));
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Physical sport allocated */
> +void *
> +__efc_sport_allocated(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	/* the physical sport is attached */
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		WARN_ON(sport != domain->sport);
> +		efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +		/* ignore */
> +		break;
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Handle initial virtual port events */
> +void *
> +__efc_sport_vport_init(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		__be64 be_wwpn = cpu_to_be64(sport->wwpn);
> +
> +		if (sport->wwpn == 0)
> +			efc_log_debug(efc, "vport: letting f/w select WWN\n");
> +
> +		if (sport->fc_id != U32_MAX) {
> +			efc_log_debug(efc, "vport: hard coding port id: %x\n",
> +				      sport->fc_id);
> +		}
> +
> +		efc_sm_transition(ctx, __efc_sport_vport_wait_alloc, NULL);
> +		/* If wwpn is zero, then we'll let the f/w */
> +		if (efc->tt.hw_port_alloc(efc, sport, sport->domain,
> +					  sport->wwpn == 0 ? NULL :
> +					  (uint8_t *)&be_wwpn)) {
> +			efc_log_err(efc, "Can't allocate port\n");
> +			break;
> +		}
> +
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * SLI port state machine:
> + * Wait for the HW SLI port allocation to complete
> + */
> +void *
> +__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ALLOC_OK: {
> +		struct fc_els_flogi *sp;
> +		struct efc_node *fabric;
> +
> +		sp = (struct fc_els_flogi *)sport->service_params;
> +		/*
> +		 * If we let f/w assign wwn's,
> +		 * then sport wwn's with those returned by hw
> +		 */
> +		if (sport->wwnn == 0) {
> +			sport->wwnn = be64_to_cpu(sport->sli_wwnn);
> +			sport->wwpn = be64_to_cpu(sport->sli_wwpn);
> +			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
> +				 "%016llX", sport->wwpn);
> +		}
> +
> +		/* Update the sport's service parameters */
> +		sp->fl_wwpn = cpu_to_be64(sport->wwpn);
> +		sp->fl_wwnn = cpu_to_be64(sport->wwnn);
> +
> +		/*
> +		 * if sport->fc_id is uninitialized,
> +		 * then request that the fabric node use FDISC
> +		 * to find an fc_id.
> +		 * Otherwise we're restoring vports, or we're in
> +		 * fabric emulation mode, so attach the fc_id
> +		 */
> +		if (sport->fc_id == U32_MAX) {
> +			fabric = efc_node_alloc(sport, FC_FID_FLOGI, false,
> +						false);
> +			if (!fabric) {
> +				efc_log_err(efc, "efc_node_alloc() failed\n");
> +				return NULL;
> +			}
> +			efc_node_transition(fabric, __efc_vport_fabric_init,
> +					    NULL);
> +		} else {
> +			snprintf(sport->wwnn_str, sizeof(sport->wwnn_str),
> +				 "%016llX", sport->wwpn);
> +			efc_sport_attach(sport, sport->fc_id);
> +		}
> +		efc_sm_transition(ctx, __efc_sport_vport_allocated, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * SLI port state machine: virtual sport allocated.
> + *
> + * This state is entered after the sport is allocated;
> + * it then waits for a fabric node
> + * FDISC to complete, which requests a sport attach.
> + * The sport attach complete is handled in this state.
> + */
> +
> +void *
> +__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ATTACH_OK: {
> +		struct efc_node *node;
> +
> +		/* Find our fabric node, and forward this event */
> +		node = efc_node_find(sport, FC_FID_FLOGI);
> +		if (!node) {
> +			efc_log_test(efc, "can't find node %06x\n",
> +				     FC_FID_FLOGI);
> +			break;
> +		}
> +		/* sm: / forward sport attach to fabric node */
> +		efc_node_post_event(node, evt, NULL);
> +		efc_sm_transition(ctx, __efc_sport_attached, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +static void
> +efc_vport_update_spec(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	struct efc_vport_spec *vport;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if (vport->sport == sport) {
> +			vport->wwnn = sport->wwnn;
> +			vport->wwpn = sport->wwpn;
> +			vport->tgt_data = sport->tgt_data;
> +			vport->ini_data = sport->ini_data;
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +}
> +
> +/* State entered after the sport attach has completed */
> +void *
> +__efc_sport_attached(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		struct efc_node *node;
> +
> +		efc_log_debug(efc,
> +			      "[%s] SPORT attached WWPN %016llX WWNN %016llX\n",
> +			      sport->display_name,
> +			      sport->wwpn, sport->wwnn);
> +
> +		list_for_each_entry(node, &sport->node_list, list_entry) {
> +			efc_node_update_display_name(node);
> +		}
> +
> +		sport->tgt_id = sport->fc_id;
> +
> +		efc->tt.new_sport(efc, sport);
> +
> +		/*
> +		 * Update the vport (if its not the physical sport)
> +		 * parameters
> +		 */
> +		if (sport->is_vport)
> +			efc_vport_update_spec(sport);
> +		break;
> +	}
> +
> +	case EFC_EVT_EXIT:
> +		efc_log_debug(efc,
> +			      "[%s] SPORT deattached WWPN %016llX WWNN %016llX\n",
> +			      sport->display_name,
> +			      sport->wwpn, sport->wwnn);
> +
> +		efc->tt.del_sport(efc, sport);
> +		break;
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +
> +/* SLI port state machine: Wait for the node shutdowns to complete */
> +void *
> +__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = sport->efc;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ALLOC_OK:
> +	case EFC_EVT_SPORT_ALLOC_FAIL:
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +	case EFC_EVT_SPORT_ATTACH_FAIL:
> +		/* ignore these events - just wait for the all free event */
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE: {
> +		/*
> +		 * Remove the sport from the domain's
> +		 * sparse vector lookup table
> +		 */
> +		xa_erase(&domain->lookup, sport->fc_id);
> +		efc_sm_transition(ctx, __efc_sport_wait_port_free, NULL);
> +		if (efc->tt.hw_port_free(efc, sport)) {
> +			efc_log_err(sport->efc, "efc_hw_port_free failed\n");
> +			/* Not much we can do, free the sport anyways */
> +			efc_sport_free(sport);
> +		}
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +/* SLI port state machine: Wait for the HW's port free to complete */
> +void *
> +__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_sli_port *sport = ctx->app;
> +
> +	sport_sm_trace(sport);
> +
> +	switch (evt) {
> +	case EFC_EVT_SPORT_ATTACH_OK:
> +		/* Ignore as we are waiting for the free CB */
> +		break;
> +	case EFC_EVT_SPORT_FREE_OK: {
> +		/* All done, free myself */
> +		/* sm: / efc_sport_free */
> +		efc_sport_free(sport);
> +		break;
> +	}
> +	default:
> +		__efc_sport_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +static int
> +efc_vport_sport_alloc(struct efc_domain *domain, struct efc_vport_spec *vport)
> +{
> +	struct efc_sli_port *sport;
> +
> +	sport = efc_sport_alloc(domain, vport->wwpn,
> +				vport->wwnn, vport->fc_id,
> +				vport->enable_ini,
> +				vport->enable_tgt);
> +	vport->sport = sport;
> +	if (!sport)
> +		return EFC_FAIL;
> +
> +	sport->is_vport = true;
> +	sport->tgt_data = vport->tgt_data;
> +	sport->ini_data = vport->ini_data;
> +
> +	efc_sm_transition(&sport->sm, __efc_sport_vport_init, NULL);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Use the vport specification to find the associated vports and start them */
> +int
> +efc_vport_start(struct efc_domain *domain)
> +{
> +	struct efc *efc = domain->efc;
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	int rc = EFC_SUCCESS;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		if (!vport->sport) {
> +			if (efc_vport_sport_alloc(domain, vport))
> +				rc = EFC_FAIL;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +
> +	return rc;
> +}
> +
> +/* Allocate a new virtual SLI port */
> +int
> +efc_sport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
> +		    void *ini_data)
> +{
> +	struct efc *efc = domain->efc;
> +	struct efc_vport_spec *vport;
> +	int rc = EFC_SUCCESS;
> +	unsigned long flags = 0;
> +
> +	if (ini && domain->efc->enable_ini == 0) {
> +		efc_log_debug(efc,
> +			     "driver initiator functionality not enabled\n");
> +		return EFC_FAIL;
> +	}
> +
> +	if (tgt && domain->efc->enable_tgt == 0) {
> +		efc_log_debug(efc,
> +			     "driver target functionality not enabled\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * Create a vport spec if we need to recreate
> +	 * this vport after a link up event
> +	 */
> +	vport = efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id, ini, tgt,
> +					tgt_data, ini_data);
> +	if (!vport) {
> +		efc_log_err(efc, "failed to create vport object entry\n");
> +		return EFC_FAIL;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	rc = efc_vport_sport_alloc(domain, vport);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return rc;
> +}
> +
> +/* Remove a previously-allocated virtual port */
> +int
> +efc_sport_vport_del(struct efc *efc, struct efc_domain *domain,
> +		    u64 wwpn, uint64_t wwnn)
> +{
> +	struct efc_sli_port *sport;
> +	int found = 0;
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	/* walk the efc_vport_list and remove from there */
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
> +			list_del(&vport->list_entry);
> +			kfree(vport);
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +
> +	if (!domain) {
> +		/* No domain means no sport to look for */
> +		return EFC_SUCCESS;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	list_for_each_entry(sport, &domain->sport_list, list_entry) {
> +		if (sport->wwpn == wwpn && sport->wwnn == wwnn) {
> +			found = 1;
> +			break;
> +		}
> +	}
> +
> +	if (found) {
> +		/* Shutdown this SPORT */
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
> +	}
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Force free all saved vports */
> +void
> +efc_vport_del_all(struct efc *efc)
> +{
> +	struct efc_vport_spec *vport;
> +	struct efc_vport_spec *next;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
> +		list_del(&vport->list_entry);
> +		kfree(vport);
> +	}
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +}
> +
> +/**
> + * Create a saved vport entry.
> + *
> + * A saved vport entry is added to the vport list,
> + * which is restored following a link up.
> + * This function is used to allow vports to be created the first time
> + * the link comes up without having to go through the ioctl() API.
> + */
> +
> +struct efc_vport_spec *
> +efc_vport_create_spec(struct efc *efc, uint64_t wwnn, uint64_t wwpn,
> +		      u32 fc_id, bool enable_ini,
> +		      bool enable_tgt, void *tgt_data, void *ini_data)
> +{
> +	struct efc_vport_spec *vport;
> +	unsigned long flags = 0;
> +
> +	/*
> +	 * walk the efc_vport_list and return failure
> +	 * if a valid(vport with non zero WWPN and WWNN) vport entry
> +	 * is already created
> +	 */
> +	spin_lock_irqsave(&efc->vport_lock, flags);
> +	list_for_each_entry(vport, &efc->vport_list, list_entry) {
> +		if ((wwpn && vport->wwpn == wwpn) &&
> +		    (wwnn && vport->wwnn == wwnn)) {
> +			efc_log_err(efc,
> +				"Failed: VPORT %016llX %016llX already allocated\n",
> +				wwnn, wwpn);
> +			spin_unlock_irqrestore(&efc->vport_lock, flags);
> +			return NULL;
> +		}
> +	}
> +
> +	vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
> +	if (!vport) {
> +		spin_unlock_irqrestore(&efc->vport_lock, flags);
> +		return NULL;
> +	}
> +
> +	vport->wwnn = wwnn;
> +	vport->wwpn = wwpn;
> +	vport->fc_id = fc_id;
> +	vport->enable_tgt = enable_tgt;
> +	vport->enable_ini = enable_ini;
> +	vport->tgt_data = tgt_data;
> +	vport->ini_data = ini_data;
> +
> +	INIT_LIST_HEAD(&vport->list_entry);
> +	list_add_tail(&vport->list_entry, &efc->vport_list);
> +	spin_unlock_irqrestore(&efc->vport_lock, flags);
> +	return vport;
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_sport.h b/drivers/scsi/elx/libefc/efc_sport.h
> new file mode 100644
> index 000000000000..3269e29c6f57
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_sport.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/**
> + * EFC FC SLI port (SPORT) exported declarations
> + *
> + */
> +
> +#ifndef __EFC_SPORT_H__
> +#define __EFC_SPORT_H__
> +
> +extern struct efc_sli_port *
> +efc_sport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
> +		u32 fc_id, bool enable_ini, bool enable_tgt);
> +extern void
> +efc_sport_free(struct efc_sli_port *sport);
> +extern void
> +efc_sport_force_free(struct efc_sli_port *sport);
> +extern struct efc_sli_port *
> +efc_sport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn);
> +extern int
> +efc_sport_attach(struct efc_sli_port *sport, u32 fc_id);
> +
> +extern void *
> +__efc_sport_allocated(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_sport_wait_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_sport_wait_port_free(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_sport_vport_init(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_sport_vport_wait_alloc(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_sport_vport_allocated(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_sport_attached(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg);
> +
> +extern int
> +efc_vport_start(struct efc_domain *domain);
> +
> +#endif /* __EFC_SPORT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 12/31] elx: libefc: Remote node " James Smart
  2020-04-15 15:51   ` Hannes Reinecke
@ 2020-04-15 18:19   ` Daniel Wagner
  2020-04-23  1:32     ` James Smart
  1 sibling, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 18:19 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:44PM -0700, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Remote node (aka remote port) allocation, initialization and
>   destroy routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed node pool creation. Use mempool for node structure and allocate
>     dma mem when required.
>   Added functions efc_node_handle_implicit_logo() and
>     efc_node_handle_explicit_logo() for better indentation.
>   Replace efc_assert with WARN_ON.
>   Use linux xarray api for lookup instead of sparse vectors.
>   Use defined return values.
> ---
>  drivers/scsi/elx/libefc/efc_node.c | 1196 ++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_node.h |  183 ++++++
>  2 files changed, 1379 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_node.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_node.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
> new file mode 100644
> index 000000000000..e8fd631f1793
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.c
> @@ -0,0 +1,1196 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efc.h"
> +
> +/* HW node callback events from the user driver */
> +int
> +efc_remote_node_cb(void *arg, int event,
> +		   void *data)
> +{
> +	struct efc *efc = arg;
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc_remote_node *rnode = data;
> +	struct efc_node *node = rnode->node;
> +	unsigned long flags = 0;
> +
> +	switch (event) {
> +	case EFC_HW_NODE_ATTACH_OK:
> +		sm_event = EFC_EVT_NODE_ATTACH_OK;
> +		break;
> +
> +	case EFC_HW_NODE_ATTACH_FAIL:
> +		sm_event = EFC_EVT_NODE_ATTACH_FAIL;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_OK:
> +		sm_event = EFC_EVT_NODE_FREE_OK;
> +		break;
> +
> +	case EFC_HW_NODE_FREE_FAIL:
> +		sm_event = EFC_EVT_NODE_FREE_FAIL;
> +		break;
> +
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", event);
> +		return EFC_FAIL;
> +	}
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, sm_event, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Find an FC node structure given the FC port ID */
> +struct efc_node *
> +efc_node_find(struct efc_sli_port *sport, u32 port_id)
> +{
> +	return xa_load(&sport->lookup, port_id);
> +}
> +
> +struct efc_node *efc_node_alloc(struct efc_sli_port *sport,
> +				  u32 port_id, bool init, bool targ)
> +{
> +	int rc;
> +	struct efc_node *node = NULL;

no need to pre-initialize node
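
It is assigned from mempool_alloc() before first use, so a plain
declaration would do:

	struct efc_node *node;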

> +	struct efc *efc = sport->efc;
> +	struct efc_dma *dma;
> +
> +	if (sport->shutting_down) {
> +		efc_log_debug(efc, "node allocation when shutting down %06x",
> +			      port_id);
> +		return NULL;
> +	}
> +
> +	node = mempool_alloc(efc->node_pool, GFP_ATOMIC);

GFP_ATOMIC looks wrong.
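
Unless this path can be called from atomic context, GFP_KERNEL would be
the safer default, e.g. (sketch only; whether sleeping is allowed here
depends on the callers, which I haven't checked):

	node = mempool_alloc(efc->node_pool, GFP_KERNEL);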

> +	if (!node) {
> +		efc_log_err(efc, "node allocation failed %06x", port_id);
> +		return NULL;
> +	}
> +	memset(node, 0, sizeof(*node));
> +
> +	dma = &node->sparm_dma_buf;
> +	dma->size = NODE_SPARAMS_SIZE;
> +	dma->virt = dma_pool_zalloc(efc->node_dma_pool, GFP_ATOMIC, &dma->phys);


and here too

> +	if (!dma->virt) {
> +		efc_log_err(efc, "node dma alloc failed\n");
> +		goto dma_fail;
> +	}
> +	node->rnode.indicator = U32_MAX;
> +	node->sport = sport;
> +	INIT_LIST_HEAD(&node->list_entry);
> +	list_add_tail(&node->list_entry, &sport->node_list);
> +
> +	node->efc = efc;
> +	node->init = init;
> +	node->targ = targ;
> +
> +	spin_lock_init(&node->pend_frames_lock);
> +	INIT_LIST_HEAD(&node->pend_frames);
> +	spin_lock_init(&node->active_ios_lock);
> +	INIT_LIST_HEAD(&node->active_ios);
> +	INIT_LIST_HEAD(&node->els_io_pend_list);
> +	INIT_LIST_HEAD(&node->els_io_active_list);
> +	efc->tt.scsi_io_alloc_enable(efc, node);
> +
> +	rc = efc->tt.hw_node_alloc(efc, &node->rnode, port_id, sport);
> +	if (rc) {
> +		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
> +		goto hw_alloc_fail;
> +	}
> +
> +	node->rnode.node = node;
> +	node->sm.app = node;
> +	node->evtdepth = 0;
> +
> +	efc_node_update_display_name(node);
> +
> +	rc = xa_err(xa_store(&sport->lookup, port_id, node, GFP_ATOMIC));

and here

> +	if (rc) {
> +		efc_log_err(efc, "Node lookup store failed: %d\n", rc);
> +		goto xa_fail;
> +	}
> +
> +	return node;
> +
> +xa_fail:
> +	efc->tt.hw_node_free_resources(efc, &node->rnode);
> +hw_alloc_fail:
> +	list_del(&node->list_entry);
> +	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
> +dma_fail:
> +	mempool_free(node, efc->node_pool);
> +	return NULL;
> +}
> +
> +void
> +efc_node_free(struct efc_node *node)
> +{
> +	struct efc_sli_port *sport;
> +	struct efc *efc;
> +	int rc = 0;
> +	struct efc_node *ns = NULL;
> +	struct efc_dma *dma;
> +
> +	sport = node->sport;
> +	efc = node->efc;
> +
> +	node_printf(node, "Free'd\n");
> +
> +	if (node->refound) {
> +		/*
> +		 * Save the name server node. We will send fake RSCN event at
> +		 * the end to handle ignored RSCN event during node deletion
> +		 */
> +		ns = efc_node_find(node->sport, FC_FID_DIR_SERV);
> +	}
> +
> +	list_del(&node->list_entry);
> +
> +	/* Free HW resources */
> +	rc = efc->tt.hw_node_free_resources(efc, &node->rnode);
> +	if (EFC_HW_RTN_IS_ERROR(rc))
> +		efc_log_test(efc, "efc_hw_node_free failed: %d\n", rc);
> +
> +	/* if the gidpt_delay_timer is still running, then delete it */
> +	if (timer_pending(&node->gidpt_delay_timer))
> +		del_timer(&node->gidpt_delay_timer);
> +
> +	xa_erase(&sport->lookup, node->rnode.fc_id);
> +
> +	/*
> +	 * If the node_list is empty,
> +	 * then post a ALL_CHILD_NODES_FREE event to the sport,
> +	 * after the lock is released.
> +	 * The sport may be free'd as a result of the event.
> +	 */
> +	if (list_empty(&sport->node_list))
> +		efc_sm_post_event(&sport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
> +				  NULL);
> +
> +	node->sport = NULL;
> +	node->sm.current_state = NULL;
> +
> +	dma = &node->sparm_dma_buf;
> +	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +	mempool_free(node, efc->node_pool);
> +
> +	if (ns) {
> +		/* sending fake RSCN event to name server node */
> +		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
> +	}
> +}
> +
> +void
> +efc_node_force_free(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	/* shutdown sm processing */
> +	efc_sm_disable(&node->sm);
> +
> +	strncpy(node->prev_state_name, node->current_state_name,
> +		sizeof(node->prev_state_name));
> +	strncpy(node->current_state_name, "disabled",
> +		sizeof(node->current_state_name));

Do these strings need to be NUL-terminated? Maybe use strscpy?
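
strscpy() always NUL-terminates the destination, e.g. (sketch, same
buffer sizes as above):

	strscpy(node->prev_state_name, node->current_state_name,
		sizeof(node->prev_state_name));
	strscpy(node->current_state_name, "disabled",
		sizeof(node->current_state_name));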

> +
> +	efc->tt.node_io_cleanup(efc, node, true);
> +	efc->tt.node_els_cleanup(efc, node, true);
> +
> +	/* manually purge pending frames (if any) */
> +	efc->tt.node_purge_pending(efc, node);
> +
> +	efc_node_free(node);
> +}
> +
> +static void
> +efc_dma_copy_in(struct efc_dma *dma, void *buffer, u32 buffer_length)
> +{
> +	if (!dma || !buffer || !buffer_length)
> +		return;
> +
> +	if (buffer_length > dma->size)
> +		buffer_length = dma->size;
> +
> +	memcpy(dma->virt, buffer, buffer_length);
> +	dma->len = buffer_length;
> +}
> +
> +int
> +efc_node_attach(struct efc_node *node)
> +{
> +	int rc = 0;
> +	struct efc_sli_port *sport = node->sport;
> +	struct efc_domain *domain = sport->domain;
> +	struct efc *efc = node->efc;
> +
> +	if (!domain->attached) {
> +		efc_log_err(efc,
> +			     "Warning: unattached domain\n");
> +		return EFC_FAIL;
> +	}
> +	/* Update node->wwpn/wwnn */
> +
> +	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
> +				efc_node_get_wwpn(node));
> +	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
> +				efc_node_get_wwnn(node));
> +
> +	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
> +			sizeof(node->service_params) - 4);
> +
> +	/* take lock to protect node->rnode.attached */
> +	rc = efc->tt.hw_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
> +	if (EFC_HW_RTN_IS_ERROR(rc))
> +		efc_log_test(efc, "efc_hw_node_attach failed: %d\n", rc);
> +
> +	return rc;
> +}
> +
> +void
> +efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
> +{
> +	switch (fc_id) {
> +	case FC_FID_FLOGI:
> +		snprintf(buffer, buffer_length, "fabric");
> +		break;
> +	case FC_FID_FCTRL:
> +		snprintf(buffer, buffer_length, "fabctl");
> +		break;
> +	case FC_FID_DIR_SERV:
> +		snprintf(buffer, buffer_length, "nserve");
> +		break;
> +	default:
> +		if (fc_id == FC_FID_DOM_MGR) {
> +			snprintf(buffer, buffer_length, "dctl%02x",
> +				 (fc_id & 0x0000ff));
> +		} else {
> +			snprintf(buffer, buffer_length, "%06x", fc_id);
> +		}
> +		break;
> +	}
> +}
> +
> +void
> +efc_node_update_display_name(struct efc_node *node)
> +{
> +	u32 port_id = node->rnode.fc_id;
> +	struct efc_sli_port *sport = node->sport;
> +	char portid_display[16];
> +
> +	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
> +
> +	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
> +		 sport->display_name, portid_display);
> +}
> +
> +void
> +efc_node_send_ls_io_cleanup(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +
> +	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
> +		efc_log_debug(efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
> +			      node->display_name, node->ls_acc_oxid);
> +
> +		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +		node->ls_acc_io = NULL;
> +	}
> +}
> +
> +/* currently, only case for implicit logo is PLOGI
> + * recvd. Thus, node's ELS IO pending list won't be
> + * empty (PLOGI will be on it)
> + */
> +static void efc_node_handle_implicit_logo(struct efc_node *node)
> +{
> +	int rc;
> +	struct efc *efc = node->efc;
> +
> +	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
> +	node_printf(node, "Reason: implicit logout, re-authenticate\n");
> +
> +	efc->tt.scsi_io_alloc_enable(efc, node);
> +
> +	/* Re-attach node with the same HW node resources */
> +	node->req_free = false;
> +	rc = efc_node_attach(node);
> +	efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +
> +	if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +		efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +}
> +
> +static void efc_node_handle_explicit_logo(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	s8 pend_frames_empty;
> +	struct list_head *list;
> +	unsigned long flags = 0;
> +
> +	/* cleanup any pending LS_ACC ELSs */
> +	efc_node_send_ls_io_cleanup(node);
> +	list = &node->els_io_pend_list;
> +	WARN_ON(!efc_els_io_list_empty(node, list));
> +
> +	spin_lock_irqsave(&node->pend_frames_lock, flags);
> +	pend_frames_empty = list_empty(&node->pend_frames);
> +	spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +	/*
> +	 * there are two scenarios where we want to keep
> +	 * this node alive:
> +	 * 1. there are pending frames that need to be
> +	 *    processed or
> +	 * 2. we're an initiator and the remote node is
> +	 *    a target and we need to re-authenticate
> +	 */
> +	node_printf(node, "Shutdown: explicit logo pend=%d ",
> +			  !pend_frames_empty);
> +	node_printf(node, "sport.ini=%d node.tgt=%d\n",
> +			  node->sport->enable_ini, node->targ);
> +	if (!pend_frames_empty || (node->sport->enable_ini && node->targ)) {
> +		u8 send_plogi = false;
> +
> +		if (node->sport->enable_ini && node->targ) {
> +			/*
> +			 * we're an initiator and
> +			 * node shutting down is a target;
> +			 * we'll need to re-authenticate in
> +			 * initial state
> +			 */
> +			send_plogi = true;
> +		}
> +
> +		/*
> +		 * transition to __efc_d_init
> +		 * (will retain HW node resources)
> +		 */
> +		efc->tt.scsi_io_alloc_enable(efc, node);
> +		node->req_free = false;
> +
> +		/*
> +		 * either pending frames exist,
> +		 * or we're re-authenticating with PLOGI
> +		 * (or both); in either case,
> +		 * return to initial state
> +		 */
> +		efc_node_init_device(node, send_plogi);
> +	}
> +	/* else: let node shutdown occur */
> +}
> +
> +void *
> +__efc_node_shutdown(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		efc_node_hold_frames(node);
> +		WARN_ON(!efc_node_active_ios_empty(node));
> +		WARN_ON(!efc_els_io_list_empty(node,
> +						&node->els_io_active_list));
> +		/* by default, we will be freeing node after we unwind */
> +		node->req_free = true;
> +
> +		switch (node->shutdown_reason) {
> +		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
> +			/* Node shutdown b/c of PLOGI received when node
> +			 * already logged in. We have PLOGI service
> +			 * parameters, so submit node attach; we won't be
> +			 * freeing this node
> +			 */
> +
> +			efc_node_handle_implicit_logo(node);
> +			break;
> +
> +		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO:
> +			efc_node_handle_explicit_logo(node);
> +			break;
> +
> +		case EFC_NODE_SHUTDOWN_DEFAULT:
> +		default: {
> +			struct list_head *list;
> +
> +			/*
> +			 * shutdown due to link down,
> +			 * node going away (xport event) or
> +			 * sport shutdown, purge pending and
> +			 * proceed to cleanup node
> +			 */
> +
> +			/* cleanup any pending LS_ACC ELSs */
> +			efc_node_send_ls_io_cleanup(node);
> +			list = &node->els_io_pend_list;
> +			WARN_ON(!efc_els_io_list_empty(node, list));
> +
> +			node_printf(node,
> +				    "Shutdown reason: default, purge pending\n");
> +			efc->tt.node_purge_pending(efc, node);
> +			break;
> +		}
> +		}
> +
> +		break;
> +	}
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static bool
> +efc_node_check_els_quiesced(struct efc_node *node)
> +{
> +	/* check to see if ELS requests, completions are quiesced */
> +	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
> +	    efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +		if (!node->attached) {
> +			/* hw node detach already completed, proceed */
> +			node_printf(node, "HW node not attached\n");
> +			efc_node_transition(node,
> +					    __efc_node_wait_ios_shutdown,
> +					     NULL);
> +		} else {
> +			/*
> +			 * hw node detach hasn't completed,
> +			 * transition and wait
> +			 */
> +			node_printf(node, "HW node still attached\n");
> +			efc_node_transition(node, __efc_node_wait_node_free,
> +					    NULL);
> +		}
> +		return true;
> +	}
> +	return false;
> +}
> +
> +void
> +efc_node_initiate_cleanup(struct efc_node *node)
> +{
> +	struct efc *efc;
> +
> +	efc = node->efc;
> +	efc->tt.node_els_cleanup(efc, node, false);
> +
> +	/*
> +	 * if ELS's have already been quiesced, will move to next state
> +	 * if ELS's have not been quiesced, abort them
> +	 */
> +	if (!efc_node_check_els_quiesced(node)) {
> +		/*
> +		 * Abort all ELS's since ELS's won't be aborted by HW
> +		 * node free.
> +		 */
> +		efc_node_hold_frames(node);
> +		efc->tt.node_abort_all_els(efc, node);
> +		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
> +	}
> +}
> +
> +/* Node state machine: Wait for all ELSs to complete */
> +void *
> +__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	bool check_quiesce = false;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			node_printf(node, "All ELS IOs complete\n");
> +			check_quiesce = true;
> +		}
> +		break;
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		if (WARN_ON(!node->els_req_cnt))
> +			break;
> +		node->els_req_cnt--;
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		if (WARN_ON(!node->els_cmpl_cnt))
> +			break;
> +		node->els_cmpl_cnt--;
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* all ELS IO's complete */
> +		node_printf(node, "All ELS IOs complete\n");
> +		WARN_ON(!efc_els_io_list_empty(node,
> +					 &node->els_io_active_list));
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		check_quiesce = true;
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	if (check_quiesce)
> +		efc_node_check_els_quiesced(node);
> +
> +	return NULL;
> +}
> +
> +/* Node state machine: Wait for a HW node free event to complete */
> +void *
> +__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_FREE_OK:
> +		/* node is officially no longer attached */
> +		node->attached = false;
> +		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		/* As IOs and ELS IO's complete we expect to get these events */
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* Fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * State is entered when a node receives a shutdown event, and it's waiting
> + * for all the active IOs and ELS IOs associated with the node to complete.
> + */
> +void *
> +__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +
> +		/* first check to see if no ELS IOs are outstanding */
> +		if (efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			/* If there are any active IOS, Free them. */
> +			efc_node_transition(node, __efc_node_shutdown, NULL);
> +		}
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		if (efc_node_active_ios_empty(node) &&
> +		    efc_els_io_list_empty(node, &node->els_io_active_list)) {
> +			efc_node_transition(node, __efc_node_shutdown, NULL);
> +		}
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IOs complete */
> +		if (WARN_ON(!node->els_req_cnt))
> +			break;
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
> +			      efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +	struct efc *efc = NULL;
> +	struct efc_node_cb *cbdata = arg;
> +
> +	node = ctx->app;
> +	efc = node->efc;
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +	case EFC_EVT_REENTER:
> +	case EFC_EVT_EXIT:
> +	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY:
> +	case EFC_EVT_NODE_MISSING:
> +	case EFC_EVT_FCP_CMD_RCVD:
> +		break;
> +
> +	case EFC_EVT_NODE_REFOUND:
> +		node->refound = true;
> +		break;
> +
> +	/*
> +	 * node->attached must be set appropriately
> +	 * for all node attach/detach events
> +	 */
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		break;
> +
> +	case EFC_EVT_NODE_FREE_OK:
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		node->attached = false;
> +		break;
> +
> +	/*
> +	 * handle any ELS completions that
> +	 * other states either didn't care about
> +	 * or forgot about
> +	 */
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		if (WARN_ON(!node->els_cmpl_cnt))
> +			break;
> +		node->els_cmpl_cnt--;
> +		break;
> +
> +	/*
> +	 * handle any ELS request completions that
> +	 * other states either didn't care about
> +	 * or forgot about
> +	 */
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +		if (WARN_ON(!node->els_req_cnt))
> +			break;
> +		node->els_req_cnt--;
> +		break;
> +
> +	case EFC_EVT_ELS_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/*
> +		 * Unsupported ELS was received,
> +		 * send LS_RJT, command not supported
> +		 */
> +		efc_log_debug(efc,
> +			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
> +			      node->display_name, funcname,
> +			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
> +
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
> +		break;
> +	}
> +
> +	case EFC_EVT_PLOGI_RCVD:
> +	case EFC_EVT_FLOGI_RCVD:
> +	case EFC_EVT_LOGO_RCVD:
> +	case EFC_EVT_PRLI_RCVD:
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_FDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD:
> +	case EFC_EVT_SCR_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/* sm: / send ELS_RJT */
> +		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
> +			      node->display_name, funcname,
> +			      efc_sm_event_name(evt));
> +		/* if we didn't catch this in a state, send generic LS_RJT */
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +						ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
> +
> +		break;
> +	}
> +	case EFC_EVT_ABTS_RCVD: {
> +		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
> +			      node->display_name, funcname,
> +			      efc_sm_event_name(evt));
> +
> +		/* sm: / send BA_ACC */
> +		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
> +		break;
> +	}
> +
> +	default:
> +		efc_log_test(node->efc, "[%s] %-20s %-20s not handled\n",
> +			     node->display_name, funcname,
> +			     efc_sm_event_name(evt));
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_save_sparms(struct efc_node *node, void *payload)
> +{
> +	memcpy(node->service_params, payload, sizeof(node->service_params));
> +}
> +
> +void
> +efc_node_post_event(struct efc_node *node,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	bool free_node = false;
> +
> +	node->evtdepth++;
> +
> +	efc_sm_post_event(&node->sm, evt, arg);
> +
> +	/* If our event call depth is one and
> +	 * we're not holding frames
> +	 * then we can dispatch any pending frames.
> +	 * We don't want to allow the efc_process_node_pending()
> +	 * call to recurse.
> +	 */
> +	if (!node->hold_frames && node->evtdepth == 1)
> +		efc_process_node_pending(node);
> +
> +	node->evtdepth--;
> +
> +	/*
> +	 * Free the node object if so requested,
> +	 * and we're at an event call depth of zero
> +	 */
> +	if (node->evtdepth == 0 && node->req_free)
> +		free_node = true;
> +
> +	if (free_node)
> +		efc_node_free(node);
> +}
> +
> +void
> +efc_node_transition(struct efc_node *node,
> +		    void *(*state)(struct efc_sm_ctx *,
> +				   enum efc_sm_event, void *), void *data)
> +{
> +	struct efc_sm_ctx *ctx = &node->sm;
> +
> +	if (ctx->current_state == state) {
> +		efc_node_post_event(node, EFC_EVT_REENTER, data);
> +	} else {
> +		efc_node_post_event(node, EFC_EVT_EXIT, data);
> +		ctx->current_state = state;
> +		efc_node_post_event(node, EFC_EVT_ENTER, data);
> +	}

Why does efc_node_transition() not need to take efc->lock, as
efc_remote_node_cb() does? How are the state machine transitions
serialized?
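
If the intent is that every caller of efc_node_transition() already
holds efc->lock, a lockdep annotation at the top of the function would
document and enforce that (sketch, assuming that really is the intended
locking rule):

	lockdep_assert_held(&node->efc->lock);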

> +}
> +
> +void
> +efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name)
> +{
> +	memset(buffer, 0, buffer_len);
> +
> +	snprintf(buffer, buffer_len, "eui.%016llX", eui_name);
> +}
> +
> +uint64_t
> +efc_node_get_wwpn(struct efc_node *node)
> +{
> +	struct fc_els_flogi *sp =
> +			(struct fc_els_flogi *)node->service_params;
> +
> +	return be64_to_cpu(sp->fl_wwpn);
> +}
> +
> +uint64_t
> +efc_node_get_wwnn(struct efc_node *node)
> +{
> +	struct fc_els_flogi *sp =
> +			(struct fc_els_flogi *)node->service_params;
> +
> +	return be64_to_cpu(sp->fl_wwnn);
> +}
> +
> +int
> +efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
> +		uint8_t cmd, void *(*efc_node_common_func)(const char *,
> +				struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		const char *funcname)
> +{
> +	return 0;
> +}
> +
> +int
> +efc_node_check_ns_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
> +		uint16_t cmd, void *(*efc_node_common_func)(const char *,
> +				struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		const char *funcname)
> +{
> +	return 0;
> +}
> +
> +int
> +efc_node_active_ios_empty(struct efc_node *node)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	empty = list_empty(&node->active_ios);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +int
> +efc_els_io_list_empty(struct efc_node *node, struct list_head *list)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	empty = list_empty(list);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +void
> +efc_node_pause(struct efc_node *node,
> +	       void *(*state)(struct efc_sm_ctx *,
> +			      enum efc_sm_event, void *))
> +
> +{
> +	node->nodedb_state = state;
> +	efc_node_transition(node, __efc_node_paused, NULL);
> +}
> +
> +/**
> + * This state is entered when a state is "paused". When resumed, the node
> + * is transitioned to a previously saved state (node->nodedb_state)
> + */
> +void *
> +__efc_node_paused(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		node_printf(node, "Paused\n");
> +		break;
> +
> +	case EFC_EVT_RESUME: {
> +		void *(*pf)(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +
> +		pf = node->nodedb_state;
> +
> +		node->nodedb_state = NULL;
> +		efc_node_transition(node, pf, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		break;
> +
> +	case EFC_EVT_SHUTDOWN:
> +		node->req_free = true;
> +		break;
> +
> +	default:
> +		__efc_node_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_recv_els_frame(struct efc_node *node,
> +			struct efc_hw_sequence *seq)
> +{
> +	unsigned long flags = 0;
> +	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
> +	struct {
> +		u32 cmd;
> +		enum efc_sm_event evt;
> +		u32 payload_size;
> +	} els_cmd_list[] = {
> +		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
> +		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
> +		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
> +		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
> +		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
> +		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
> +		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
> +		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
> +	};
> +	struct efc_node_cb cbdata;
> +	u8 *buf = seq->payload->dma.virt;
> +	enum efc_sm_event evt = EFC_EVT_ELS_RCVD;
> +	u32 i;
> +
> +	memset(&cbdata, 0, sizeof(cbdata));
> +	cbdata.header = seq->header;
> +	cbdata.payload = seq->payload;
> +
> +	/* find a matching event for the ELS command */
> +	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
> +		if (els_cmd_list[i].cmd == buf[0]) {
> +			evt = els_cmd_list[i].evt;
> +			break;
> +		}
> +	}
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, evt, &cbdata);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_node_recv_ct_frame(struct efc_node *node,
> +		       struct efc_hw_sequence *seq)
> +{
> +	struct fc_ct_hdr *iu = seq->payload->dma.virt;
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +	struct efc *efc = node->efc;
> +	u16 gscmd = be16_to_cpu(iu->ct_cmd);
> +
> +	efc_log_err(efc, "[%s] Received cmd :%x sending CT_REJECT\n",
> +		    node->display_name, gscmd);
> +	efc->tt.send_ct_rsp(efc, node, be16_to_cpu(hdr->fh_ox_id), iu,
> +			    FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
> +}
> +
> +void
> +efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
> +{
> +	struct efc_node_cb cbdata;
> +	unsigned long flags = 0;
> +
> +	memset(&cbdata, 0, sizeof(cbdata));
> +	cbdata.header = seq->header;
> +	cbdata.payload = seq->payload;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_process_node_pending(struct efc_node *node)
> +{
> +	struct efc *efc = node->efc;
> +	struct efc_hw_sequence *seq = NULL;
> +	u32 pend_frames_processed = 0;
> +	unsigned long flags = 0;
> +
> +	for (;;) {
> +		/* need to check for hold frames condition after each frame
> +		 * processed because any given frame could cause a transition
> +		 * to a state that holds frames
> +		 */
> +		if (node->hold_frames)
> +			break;
> +
> +		/* Get next frame/sequence */
> +		spin_lock_irqsave(&node->pend_frames_lock, flags);
> +
> +		if (!list_empty(&node->pend_frames)) {
> +			seq = list_first_entry(&node->pend_frames,
> +					struct efc_hw_sequence, list_entry);
> +			list_del(&seq->list_entry);
> +		}
> +		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +		if (!seq) {
> +			pend_frames_processed =	node->pend_frames_processed;
> +			node->pend_frames_processed = 0;
> +			break;
> +		}
> +		node->pend_frames_processed++;
> +
> +		/* now dispatch frame(s) to dispatch function */
> +		efc_node_dispatch_frame(node, seq);
> +	}
> +
> +	if (pend_frames_processed != 0)
> +		efc_log_debug(efc, "%u node frames held and processed\n",
> +			      pend_frames_processed);
> +}
> +
> +void
> +efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	/* Notify the node to resume */
> +	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void
> +efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	/* Notify the node to resume */
> +	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +}
> +
> +void
> +efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
> +	spin_unlock_irqrestore(&efc->lock, flags);
> +}
> +
> +void efc_node_post_els_resp(struct efc_node *node,
> +			    enum efc_hw_node_els_event evt, void *arg)
> +{
> +	enum efc_sm_event sm_event = EFC_EVT_LAST;
> +	struct efc *efc = node->efc;
> +	unsigned long flags = 0;
> +
> +	switch (evt) {
> +	case EFC_HW_SRRS_ELS_REQ_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_OK:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_OK;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_CMPL_FAIL:
> +		sm_event = EFC_EVT_SRRS_ELS_CMPL_FAIL;
> +		break;
> +	case EFC_HW_SRRS_ELS_REQ_RJT:
> +		sm_event = EFC_EVT_SRRS_ELS_REQ_RJT;
> +		break;
> +	case EFC_HW_ELS_REQ_ABORTED:
> +		sm_event = EFC_EVT_ELS_REQ_ABORTED;
> +		break;
> +	default:
> +		efc_log_test(efc, "unhandled event %#x\n", evt);
> +		return;
> +	}
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, sm_event, arg);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> +
> +void efc_node_post_shutdown(struct efc_node *node,
> +			    u32 evt, void *arg)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->efc->lock, flags);
> +	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
> +	spin_unlock_irqrestore(&node->efc->lock, flags);
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
> new file mode 100644
> index 000000000000..0608703cfd04
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_node.h
> @@ -0,0 +1,183 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFC_NODE_H__)
> +#define __EFC_NODE_H__
> +#include "scsi/fc/fc_ns.h"
> +
> +#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	(1 << 0)
> +#define EFC_NODEDB_PAUSE_NAMESERVER	(1 << 1)
> +#define EFC_NODEDB_PAUSE_NEW_NODES	(1 << 2)
> +
> +#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
> +
> +#define scsi_io_printf(io, fmt, ...) \
> +	efc_log_debug(io->efc, "[%s] [%04x][i:%04x t:%04x h:%04x]" fmt, \
> +	io->node->display_name, io->instance_index, io->init_task_tag, \
> +	io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
> +
> +static inline void
> +efc_node_evt_set(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		 const char *handler)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	if (evt == EFC_EVT_ENTER) {
> +		strncpy(node->current_state_name, handler,
> +			sizeof(node->current_state_name));
> +	} else if (evt == EFC_EVT_EXIT) {
> +		strncpy(node->prev_state_name, node->current_state_name,
> +			sizeof(node->prev_state_name));
> +		strncpy(node->current_state_name, "invalid",
> +			sizeof(node->current_state_name));
> +	}
> +	node->prev_evt = node->current_evt;
> +	node->current_evt = evt;
> +}
> +
> +/**
> + * hold frames in pending frame list
> + *
> + * Unsolicited receive frames are held on the node pending frame list,
> + * rather than being processed.
> + */
> +
> +static inline void
> +efc_node_hold_frames(struct efc_node *node)
> +{
> +	node->hold_frames = true;
> +}
> +
> +/**
> + * accept frames
> + *
> + * Unsolicited receive frames processed rather than being held on the node
> + * pending frame list.
> + */
> +
> +static inline void
> +efc_node_accept_frames(struct efc_node *node)
> +{
> +	node->hold_frames = false;
> +}
> +
> +/*
> + * Node initiator/target enable defines
> + * All combinations of the SLI port (sport) initiator/target enable,
> + * and remote node initiator/target enable are enumerated.
> + * ex: EFC_NODE_ENABLE_T_TO_IT decodes to target mode is enabled on SLI port
> + * and I+T is enabled on remote node.
> + */
> +enum efc_node_enable {
> +	EFC_NODE_ENABLE_x_TO_x,
> +	EFC_NODE_ENABLE_x_TO_T,
> +	EFC_NODE_ENABLE_x_TO_I,
> +	EFC_NODE_ENABLE_x_TO_IT,
> +	EFC_NODE_ENABLE_T_TO_x,
> +	EFC_NODE_ENABLE_T_TO_T,
> +	EFC_NODE_ENABLE_T_TO_I,
> +	EFC_NODE_ENABLE_T_TO_IT,
> +	EFC_NODE_ENABLE_I_TO_x,
> +	EFC_NODE_ENABLE_I_TO_T,
> +	EFC_NODE_ENABLE_I_TO_I,
> +	EFC_NODE_ENABLE_I_TO_IT,
> +	EFC_NODE_ENABLE_IT_TO_x,
> +	EFC_NODE_ENABLE_IT_TO_T,
> +	EFC_NODE_ENABLE_IT_TO_I,
> +	EFC_NODE_ENABLE_IT_TO_IT,
> +};
> +
> +static inline enum efc_node_enable
> +efc_node_get_enable(struct efc_node *node)
> +{
> +	u32 retval = 0;
> +
> +	if (node->sport->enable_ini)
> +		retval |= (1U << 3);
> +	if (node->sport->enable_tgt)
> +		retval |= (1U << 2);
> +	if (node->init)
> +		retval |= (1U << 1);
> +	if (node->targ)
> +		retval |= (1U << 0);
> +	return (enum efc_node_enable)retval;
> +}
> +
> +extern int
> +efc_node_check_els_req(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg,
> +		       u8 cmd, void *(*efc_node_common_func)(const char *,
> +		       struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		       const char *funcname);
> +extern int
> +efc_node_check_ns_req(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg,
> +		  u16 cmd, void *(*efc_node_common_func)(const char *,
> +		  struct efc_sm_ctx *, enum efc_sm_event, void *),
> +		  const char *funcname);
> +extern int
> +efc_node_attach(struct efc_node *node);
> +extern struct efc_node *
> +efc_node_alloc(struct efc_sli_port *sport, u32 port_id,
> +		bool init, bool targ);
> +extern void
> +efc_node_free(struct efc_node *efc);
> +extern void
> +efc_node_force_free(struct efc_node *efc);
> +extern void
> +efc_node_update_display_name(struct efc_node *node);
> +void efc_node_post_event(struct efc_node *node, enum efc_sm_event evt,
> +			 void *arg);
> +
> +extern void *
> +__efc_node_shutdown(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void
> +efc_node_save_sparms(struct efc_node *node, void *payload);
> +extern void
> +efc_node_transition(struct efc_node *node,
> +		    void *(*state)(struct efc_sm_ctx *,
> +		    enum efc_sm_event, void *), void *data);
> +extern void *
> +__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +
> +extern void
> +efc_node_initiate_cleanup(struct efc_node *node);
> +
> +extern void
> +efc_node_build_eui_name(char *buffer, u32 buffer_len, uint64_t eui_name);
> +extern uint64_t
> +efc_node_get_wwpn(struct efc_node *node);
> +
> +extern void
> +efc_node_pause(struct efc_node *node,
> +	       void *(*state)(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg));
> +extern void *
> +__efc_node_paused(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern int
> +efc_node_active_ios_empty(struct efc_node *node);
> +extern void
> +efc_node_send_ls_io_cleanup(struct efc_node *node);
> +
> +extern int
> +efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
> +
> +extern void
> +efc_process_node_pending(struct efc_node *domain);
> +
> +#endif /* __EFC_NODE_H__ */
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 13/31] elx: libefc: Fabric node state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 13/31] elx: libefc: Fabric " James Smart
@ 2020-04-15 18:51   ` Daniel Wagner
  2020-04-16  6:37   ` Hannes Reinecke
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 18:51 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:45PM -0700, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Fabric node initialization and logins.
> - Name/Directory Services node.
> - Fabric Controller node to process rscn events.
> 
> These are all interactions with remote ports that correspond
> to well-known fabric entities
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Replace efc_assert with WARN_ON
>   Return defined return values
> ---
>  drivers/scsi/elx/libefc/efc_fabric.c | 1759 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_fabric.h |  116 +++
>  2 files changed, 1875 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
> new file mode 100644
> index 000000000000..251f8702dbc5
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_fabric.c
> @@ -0,0 +1,1759 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * This file implements remote node state machines for:
> + * - Fabric logins.
> + * - Fabric controller events.
> + * - Name/directory services interaction.
> + * - Point-to-point logins.
> + */
> +
> +/*
> + * fabric_sm Node State Machine: Fabric States
> + * ns_sm Node State Machine: Name/Directory Services States
> + * p2p_sm Node State Machine: Point-to-Point Node States
> + */
> +
> +#include "efc.h"
> +
> +static void
> +efc_fabric_initiate_shutdown(struct efc_node *node)
> +{
> +	int rc;
> +	struct efc *efc = node->efc;
> +
> +	efc->tt.scsi_io_alloc_disable(efc, node);
> +
> +	if (node->attached) {
> +		/* issue hw node free; don't care if succeeds right away
> +		 * or sometime later, will check node->attached later in
> +		 * shutdown process
> +		 */
> +		rc = efc->tt.hw_node_detach(efc, &node->rnode);
> +		if (rc != EFC_HW_RTN_SUCCESS &&
> +		    rc != EFC_HW_RTN_SUCCESS_SYNC) {
> +			node_printf(node, "Failed freeing HW node, rc=%d\n",
> +				    rc);
> +		}
> +	}
> +	/*
> +	 * node has either been detached or is in the process of being detached,
> +	 * call common node's initiate cleanup function
> +	 */
> +	efc_node_initiate_cleanup(node);
> +}
> +
> +static void *
> +__efc_fabric_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +
> +	node = ctx->app;
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		break;
> +	case EFC_EVT_SHUTDOWN:
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	default:
> +		/* call default event handler common to all nodes */
> +		__efc_node_common(funcname, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		  void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_REENTER:	/* not sure why we're getting these ... */
> +		efc_log_debug(efc, ">>> reenter !!\n");
> +		/* fall through */

IIRC, the /* fall through */ comment has been replaced by the
fallthrough; pseudo-keyword.
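
i.e. something like this (assuming the base tree already has the
pseudo-keyword, which I believe predates 5.7):

	case EFC_EVT_REENTER:	/* not sure why we're getting these ... */
		efc_log_debug(efc, ">>> reenter !!\n");
		fallthrough;
	case EFC_EVT_ENTER: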

> +	case EFC_EVT_ENTER:
> +		/*  sm: / send FLOGI */
> +		efc->tt.els_send(efc, node, ELS_FLOGI,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +void
> +efc_fabric_set_topology(struct efc_node *node,
> +			enum efc_sport_topology topology)
> +{
> +	node->sport->topology = topology;
> +}
> +
> +void
> +efc_fabric_notify_topology(struct efc_node *node)
> +{
> +	struct efc_node *tmp_node;
> +	struct efc_node *next;
> +	enum efc_sport_topology topology = node->sport->topology;
> +
> +	/*
> +	 * now loop through the nodes in the sport
> +	 * and send topology notification
> +	 */
> +	list_for_each_entry_safe(tmp_node, next, &node->sport->node_list,
> +				 list_entry) {
> +		if (tmp_node != node) {
> +			efc_node_post_event(tmp_node,
> +					    EFC_EVT_SPORT_TOPOLOGY_NOTIFY,
> +					    (void *)topology);
> +		}
> +	}
> +}
> +
> +static bool efc_rnode_is_nport(struct fc_els_flogi *rsp)
> +{
> +	return !(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_FPORT);
> +}
> +
> +static bool efc_rnode_is_npiv_capable(struct fc_els_flogi *rsp)
> +{
> +	return !!(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_NPIV_ACC);
> +}
> +
> +void *
> +__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK: {
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +
> +		memcpy(node->sport->domain->flogi_service_params,
> +		       cbdata->els_rsp.virt,
> +		       sizeof(struct fc_els_flogi));
> +
> +		/* Check to see if the fabric is an F_PORT or and N_PORT */
> +		if (!efc_rnode_is_nport(cbdata->els_rsp.virt)) {
> +			/* sm: if not nport / efc_domain_attach */
> +			/* ext_status has the fc_id, attach domain */
> +			if (efc_rnode_is_npiv_capable(cbdata->els_rsp.virt)) {
> +				efc_log_debug(node->efc,
> +					      " NPIV is enabled at switch side\n");
> +				//node->efc->sw_feature_cap |= 1<<10;

Looks like a leftover.

> +			}
> +			efc_fabric_set_topology(node,
> +						EFC_SPORT_TOPOLOGY_FABRIC);
> +			efc_fabric_notify_topology(node);
> +			WARN_ON(node->sport->domain->attached);
> +			efc_domain_attach(node->sport->domain,
> +					  cbdata->ext_status);
> +			efc_node_transition(node,
> +					    __efc_fabric_wait_domain_attach,
> +					    NULL);
> +			break;
> +		}
> +
> +		/*  sm: if nport and p2p_winner / efc_domain_attach */
> +		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
> +		if (efc_p2p_setup(node->sport)) {
> +			node_printf(node,
> +				    "p2p setup failed, shutting down node\n");
> +			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +			efc_fabric_initiate_shutdown(node);
> +			break;
> +		}
> +
> +		if (node->sport->p2p_winner) {
> +			efc_node_transition(node,
> +					    __efc_p2p_wait_domain_attach,
> +					     NULL);
> +			if (node->sport->domain->attached &&
> +			    !node->sport->domain->domain_notify_pend) {
> +				/*
> +				 * already attached,
> +				 * just send ATTACH_OK
> +				 */
> +				node_printf(node,
> +					    "p2p winner, domain already attached\n");
> +				efc_node_post_event(node,
> +						    EFC_EVT_DOMAIN_ATTACH_OK,
> +						    NULL);
> +			}
> +		} else {
> +			/*
> +			 * peer is p2p winner;
> +			 * PLOGI will be received on the
> +			 * remote SID=1 node;
> +			 * this node has served its purpose
> +			 */
> +			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +			efc_fabric_initiate_shutdown(node);
> +		}
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_ELS_REQ_ABORTED:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
> +		struct efc_sli_port *sport = node->sport;
> +		/*
> +		 * with these errors, we have no recovery,
> +		 * so shutdown the sport, leave the link
> +		 * up and the domain ready
> +		 */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		node_printf(node,
> +			    "FLOGI failed evt=%s, shutting down sport [%s]\n",
> +			    efc_sm_event_name(evt), sport->display_name);
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_sm_post_event(&sport->sm, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/* sm: / send FDISC */
> +		efc->tt.els_send(efc, node, ELS_FDISC,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +
> +		efc_node_transition(node, __efc_fabric_fdisc_wait_rsp, NULL);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK: {
> +		/* fc_id is in ext_status */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / efc_sport_attach */
> +		efc_sport_attach(node->sport, cbdata->ext_status);
> +		efc_node_transition(node, __efc_fabric_wait_domain_attach,
> +				    NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_log_err(node->efc, "FDISC failed, shutting down sport\n");
> +		/* sm: / shutdown sport */
> +		efc_sm_post_event(&node->sport->sm, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int
> +efc_start_ns_node(struct efc_sli_port *sport)
> +{
> +	struct efc_node *ns;
> +
> +	/* Instantiate a name services node */
> +	ns = efc_node_find(sport, FC_FID_DIR_SERV);
> +	if (!ns) {
> +		ns = efc_node_alloc(sport, FC_FID_DIR_SERV, false, false);
> +		if (!ns)
> +			return EFC_FAIL;
> +	}
> +	/*
> +	 * for found ns, should we be transitioning from here?
> +	 * breaks transition only
> +	 *  1. from within state machine or
> +	 *  2. if after alloc
> +	 */
> +	if (ns->efc->nodedb_mask & EFC_NODEDB_PAUSE_NAMESERVER)
> +		efc_node_pause(ns, __efc_ns_init);
> +	else
> +		efc_node_transition(ns, __efc_ns_init, NULL);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efc_start_fabctl_node(struct efc_sli_port *sport)
> +{
> +	struct efc_node *fabctl;
> +
> +	fabctl = efc_node_find(sport, FC_FID_FCTRL);
> +	if (!fabctl) {
> +		fabctl = efc_node_alloc(sport, FC_FID_FCTRL,
> +					false, false);
> +		if (!fabctl)
> +			return EFC_FAIL;
> +	}
> +	/*
> +	 * for found ns, should we be transitioning from here?
> +	 * breaks transition only
> +	 *  1. from within state machine or
> +	 *  2. if after alloc
> +	 */
> +	efc_node_transition(fabctl, __efc_fabctl_init, NULL);
> +	return EFC_SUCCESS;
> +}
> +
> +void *
> +__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
> +				enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +	case EFC_EVT_SPORT_ATTACH_OK: {
> +		int rc;
> +
> +		rc = efc_start_ns_node(node->sport);
> +		if (rc)
> +			return NULL;
> +
> +		/* sm: if enable_ini / start fabctl node */
> +		/* Instantiate the fabric controller (sends SCR) */
> +		if (node->sport->enable_rscn) {
> +			rc = efc_start_fabctl_node(node->sport);
> +			if (rc)
> +				return NULL;
> +		}
> +		efc_node_transition(node, __efc_fabric_idle, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_fabric_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		  void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		break;
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/* sm: / send PLOGI */
> +		efc->tt.els_send(efc, node, ELS_PLOGI,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_ns_plogi_wait_rsp, NULL);
> +		break;
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK: {
> +		/* Save service parameters */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_ns_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +		break;
> +	}
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		/* sm: / send RFTID */
> +		efc->tt.els_send_ct(efc, node, FC_RCTL_ELS_REQ,
> +				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_ns_rftid_wait_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "Node attach failed\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	case EFC_EVT_SHUTDOWN:
> +		node_printf(node, "Shutdown event received\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node,
> +				    __efc_fabric_wait_attach_evt_shutdown,
> +				     NULL);
> +		break;
> +
> +	/*
> +	 * if receive RSCN just ignore,
> +	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
> +	 */
> +	case EFC_EVT_RSCN_RCVD:
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
> +				      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	/* wait for any of these attach events and then shutdown */
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
> +			    efc_sm_event_name(evt));
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		node->attached = false;
> +		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
> +			    efc_sm_event_name(evt));
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	/* ignore shutdown event as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		node_printf(node, "Shutdown event received\n");
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFT_ID,
> +					  __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / send RFFID */
> +		efc->tt.els_send_ct(efc, node, FC_NS_RFF_ID,
> +				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_ns_rffid_wait_rsp, NULL);
> +		break;
> +
> +	/*
> +	 * if receive RSCN just ignore,
> +	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
> +	 */
> +	case EFC_EVT_RSCN_RCVD:
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * Waits for an RFFID response event; if configured for an initiator operation,
> + * a GIDPT name services request is issued.
> + */
> +void *
> +__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	{
> +		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFF_ID,
> +					  __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		if (node->sport->enable_rscn) {
> +			/* sm: if enable_rscn / send GIDPT */
> +			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
> +					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +					EFC_FC_ELS_DEFAULT_RETRIES);
> +
> +			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
> +					    NULL);
> +		} else {
> +			/* if 'T' only, we're done, go to idle */
> +			efc_node_transition(node, __efc_ns_idle, NULL);
> +		}
> +		break;
> +	}
> +	/*
> +	 * if receive RSCN just ignore,
> +	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
> +	 */
> +	case EFC_EVT_RSCN_RCVD:
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int
> +efc_process_gidpt_payload(struct efc_node *node,
> +			  void *data, u32 gidpt_len)
> +{
> +	u32 i, j;
> +	struct efc_node *newnode;
> +	struct efc_sli_port *sport = node->sport;
> +	struct efc *efc = node->efc;
> +	u32 port_id = 0, port_count, portlist_count;
> +	struct efc_node *n;
> +	struct efc_node **active_nodes;
> +	int residual;
> +	struct fc_ct_hdr *hdr = data;
> +	struct fc_gid_pn_resp *gidpt = data + sizeof(*hdr);
> +
> +	residual = be16_to_cpu(hdr->ct_mr_size);
> +
> +	if (residual != 0)
> +		efc_log_debug(node->efc, "residual is %u words\n", residual);
> +
> +	if (be16_to_cpu(hdr->ct_cmd) == FC_FS_RJT) {
> +		node_printf(node,
> +			    "GIDPT request failed: rsn x%x rsn_expl x%x\n",
> +			    hdr->ct_reason, hdr->ct_explan);
> +		return EFC_FAIL;
> +	}
> +
> +	portlist_count = (gidpt_len - sizeof(*hdr)) / sizeof(*gidpt);
> +
> +	/* Count the number of nodes */
> +	port_count = 0;
> +	list_for_each_entry(n, &sport->node_list, list_entry) {
> +		port_count++;
> +	}
> +
> +	/* Allocate a buffer for all nodes */
> +	active_nodes = kzalloc(port_count * sizeof(*active_nodes), GFP_ATOMIC);
> +	if (!active_nodes) {
> +		node_printf(node, "efc_malloc failed\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/* Fill buffer with fc_id of active nodes */
> +	i = 0;
> +	list_for_each_entry(n, &sport->node_list, list_entry) {
> +		port_id = n->rnode.fc_id;
> +		switch (port_id) {
> +		case FC_FID_FLOGI:
> +		case FC_FID_FCTRL:
> +		case FC_FID_DIR_SERV:
> +			break;
> +		default:
> +			if (port_id != FC_FID_DOM_MGR)
> +				active_nodes[i++] = n;
> +			break;
> +		}
> +	}
> +
> +	/* update the active nodes buffer */
> +	for (i = 0; i < portlist_count; i++) {
> +		hton24(gidpt[i].fp_fid, port_id);
> +
> +		for (j = 0; j < port_count; j++) {
> +			if (active_nodes[j] &&
> +			    port_id == active_nodes[j]->rnode.fc_id) {
> +				active_nodes[j] = NULL;
> +			}
> +		}
> +
> +		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
> +			break;
> +	}
> +
> +	/* Those remaining in the active_nodes[] are now gone ! */
> +	for (i = 0; i < port_count; i++) {
> +		/*
> +		 * if we're an initiator and the remote node
> +		 * is a target, then post the node missing event.
> +		 * if we're target and we have enabled
> +		 * target RSCN, then post the node missing event.
> +		 */
> +		if (active_nodes[i]) {
> +			if ((node->sport->enable_ini &&
> +			     active_nodes[i]->targ) ||
> +			     (node->sport->enable_tgt &&
> +			     enable_target_rscn(efc))) {
> +				efc_node_post_event(active_nodes[i],
> +						    EFC_EVT_NODE_MISSING,
> +						     NULL);
> +			} else {
> +				node_printf(node,
> +					    "GID_PT: skipping non-tgt port_id x%06x\n",
> +					    active_nodes[i]->rnode.fc_id);
> +			}
> +		}
> +	}
> +	kfree(active_nodes);
> +
> +	for (i = 0; i < portlist_count; i++) {
> +		hton24(gidpt[i].fp_fid, port_id);
> +
> +		/* Don't create node for ourselves */
> +		if (port_id != node->rnode.sport->fc_id) {
> +			newnode = efc_node_find(sport, port_id);
> +			if (!newnode) {
> +				if (node->sport->enable_ini) {
> +					newnode = efc_node_alloc(sport,
> +								 port_id,
> +								  false,
> +								  false);
> +					if (!newnode) {
> +						efc_log_err(efc,
> +							    "efc_node_alloc() failed\n");
> +						return EFC_FAIL;
> +					}
> +					/*
> +					 * send PLOGI automatically
> +					 * if initiator
> +					 */
> +					efc_node_init_device(newnode, true);
> +				}
> +				continue;
> +			}
> +
> +			if (node->sport->enable_ini && newnode->targ) {
> +				efc_node_post_event(newnode,
> +						    EFC_EVT_NODE_REFOUND,
> +						    NULL);
> +			}
> +			/*
> +			 * original code sends ADISC,
> +			 * has notion of "refound"
> +			 */
> +		}

A helper function or something to make this heavy indentation go away,
please. This is hard to read.
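
Untested sketch of what I have in mind (name and error handling are just
placeholders), moving the per-entry work out of the loop:

	static int
	efc_gidpt_handle_port_id(struct efc_node *node, u32 port_id)
	{
		struct efc_sli_port *sport = node->sport;
		struct efc_node *newnode;

		/* Don't create a node for ourselves */
		if (port_id == node->rnode.sport->fc_id)
			return EFC_SUCCESS;

		newnode = efc_node_find(sport, port_id);
		if (newnode) {
			if (sport->enable_ini && newnode->targ)
				efc_node_post_event(newnode,
						    EFC_EVT_NODE_REFOUND, NULL);
			/* original code sends ADISC, has notion of "refound" */
			return EFC_SUCCESS;
		}

		if (!sport->enable_ini)
			return EFC_SUCCESS;

		newnode = efc_node_alloc(sport, port_id, false, false);
		if (!newnode) {
			efc_log_err(node->efc, "efc_node_alloc() failed\n");
			return EFC_FAIL;
		}

		/* send PLOGI automatically if initiator */
		efc_node_init_device(newnode, true);
		return EFC_SUCCESS;
	}

Then the loop body collapses to extracting port_id and a single call.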

> +
> +		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
> +			break;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * Wait for a GIDPT response from the name server. Process the FC_IDs that are
> + * reported by creating new remote ports, as needed.
> + */
> +void *
> +__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	{
> +		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_GID_PT,
> +					  __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / process GIDPT payload */
> +		efc_process_gidpt_payload(node, cbdata->els_rsp.virt,
> +					  cbdata->els_rsp.len);
> +		efc_node_transition(node, __efc_ns_idle, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:	{
> +		/* not much we can do; will retry with the next RSCN */
> +		node_printf(node, "GID_PT failed to complete\n");
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_transition(node, __efc_ns_idle, NULL);
> +		break;
> +	}
> +
> +	/* if receive RSCN here, queue up another discovery processing */
> +	case EFC_EVT_RSCN_RCVD: {
> +		node_printf(node, "RSCN received during GID_PT processing\n");
> +		node->rscn_pending = true;
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * Idle. Waiting for RSCN received events
> + * (posted from the fabric controller), and
> + * restarts the GIDPT name services query and processing.
> + */

Not a proper kerneldoc. There are more of those in this file.
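
For reference, something along these lines would be picked up by the
kernel-doc tooling (wording is just a sketch):

	/**
	 * __efc_ns_idle() - Name services node idle state
	 * @ctx: remote node state machine context
	 * @evt: event to process
	 * @arg: per event optional argument
	 *
	 * Wait for RSCN received events (posted from the fabric controller)
	 * and restart the GIDPT name services query and processing.
	 *
	 * Return: NULL
	 */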

> +void *
> +__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		if (!node->rscn_pending)
> +			break;
> +
> +		node_printf(node, "RSCN pending, restart discovery\n");
> +		node->rscn_pending = false;
> +
> +			/* fall through */
> +
> +	case EFC_EVT_RSCN_RCVD: {
> +		/* sm: / send GIDPT */
> +		/*
> +		 * If target RSCN processing is enabled,
> +		 * and this is target only (not initiator),
> +		 * and tgt_rscn_delay is non-zero,
> +		 * then we delay issuing the GID_PT
> +		 */
> +		if (efc->tgt_rscn_delay_msec != 0 &&
> +		    !node->sport->enable_ini && node->sport->enable_tgt &&
> +		    enable_target_rscn(efc)) {
> +			efc_node_transition(node, __efc_ns_gidpt_delay, NULL);
> +		} else {
> +			efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
> +					EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +					EFC_FC_ELS_DEFAULT_RETRIES);
> +			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
> +					    NULL);
> +		}
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * Handle GIDPT delay timer callback
> + * Post an EFC_EVT_GIDPT_DELAY_EXPIRED event to the passed-in node.
> + */
> +static void
> +gidpt_delay_timer_cb(struct timer_list *t)
> +{
> +	struct efc_node *node = from_timer(node, t, gidpt_delay_timer);
> +
> +	del_timer(&node->gidpt_delay_timer);
> +
> +	efc_node_post_event(node, EFC_EVT_GIDPT_DELAY_EXPIRED, NULL);
> +}
> +
> +/**
> + * Name services node state machine: Delayed GIDPT.
> + *
> + * Waiting for GIDPT delay to expire before submitting GIDPT to name server.
> + */
> +void *
> +__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		time_t delay_msec;
> +
> +		/*
> +		 * Compute the delay time.
> +		 * Set to tgt_rscn_delay, if the time since last GIDPT
> +		 * is less than tgt_rscn_period, then use tgt_rscn_period.
> +		 */
> +		delay_msec = efc->tgt_rscn_delay_msec;
> +		if ((jiffies_to_msecs(jiffies) - node->time_last_gidpt_msec)
> +		    < efc->tgt_rscn_period_msec) {

Maybe put the first part of the condition into a new variable to make the if readable.
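
E.g. something like (variable name is only a suggestion):

		u32 msec_since_last_gidpt;

		msec_since_last_gidpt = jiffies_to_msecs(jiffies) -
					node->time_last_gidpt_msec;

		delay_msec = efc->tgt_rscn_delay_msec;
		if (msec_since_last_gidpt < efc->tgt_rscn_period_msec)
			delay_msec = efc->tgt_rscn_period_msec;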

> +			delay_msec = efc->tgt_rscn_period_msec;
> +		}
> +		timer_setup(&node->gidpt_delay_timer, &gidpt_delay_timer_cb,
> +			    0);
> +		mod_timer(&node->gidpt_delay_timer,
> +			  jiffies + msecs_to_jiffies(delay_msec));
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_GIDPT_DELAY_EXPIRED:
> +		node->time_last_gidpt_msec = jiffies_to_msecs(jiffies);
> +
> +		efc->tt.els_send_ct(efc, node, FC_NS_GID_PT,
> +				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_ns_gidpt_wait_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_RSCN_RCVD: {
> +		efc_log_debug(efc,
> +			      "RSCN received while in GIDPT delay - no action\n");
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * Fabric controller node state machine: Initial state.
> + *
> + * Issue a PLOGI to a well-known fabric controller address.
> + */
> +void *
> +__efc_fabctl_init(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/* no need to login to fabric controller, just send SCR */
> +		efc->tt.els_send(efc, node, ELS_SCR,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * Fabric controller node state machine: Wait for a node attach request
> + * to complete.
> + *
> + * Wait for a node attach to complete. If successful, issue an SCR
> + * to the fabric controller, subscribing to all RSCN.
> + */
> +void *
> +__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		/* sm: / send SCR */
> +		efc->tt.els_send(efc, node, ELS_SCR,
> +				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "Node attach failed\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	case EFC_EVT_SHUTDOWN:
> +		node_printf(node, "Shutdown event received\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node,
> +				    __efc_fabric_wait_attach_evt_shutdown,
> +				     NULL);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * Fabric controller node state machine:
> + * Wait for an SCR response from the fabric controller.
> + */
> +void *
> +__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_SCR,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_transition(node, __efc_fabctl_ready, NULL);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static void
> +efc_process_rscn(struct efc_node *node, struct efc_node_cb *cbdata)
> +{
> +	struct efc *efc = node->efc;
> +	struct efc_sli_port *sport = node->sport;
> +	struct efc_node *ns;
> +
> +	/* Forward this event to the name-services node */
> +	ns = efc_node_find(sport, FC_FID_DIR_SERV);
> +	if (ns)
> +		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, cbdata);
> +	else
> +		efc_log_warn(efc, "can't find name server node\n");
> +}
> +
> +/* Fabric controller node state machine: Ready.
> + * In this state, the fabric controller sends a RSCN, which is received
> + * by this node and is forwarded to the name services node object; and
> + * the RSCN LS_ACC is sent.
> + */
> +void *
> +__efc_fabctl_ready(struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_RSCN_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		/*
> +		 * sm: / process RSCN (forward to name services node),
> +		 * send LS_ACC
> +		 */
> +		efc_process_rscn(node, cbdata);
> +		efc->tt.els_send_resp(efc, node, ELS_LS_ACC,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_fabctl_wait_ls_acc_cmpl,
> +				    NULL);
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		efc_node_transition(node, __efc_fabctl_ready, NULL);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static uint64_t
> +efc_get_wwpn(struct fc_els_flogi *sp)
> +{
> +	return be64_to_cpu(sp->fl_wwnn);
> +}
> +
> +static int
> +efc_rnode_is_winner(struct efc_sli_port *sport)
> +{
> +	struct fc_els_flogi *remote_sp;
> +	u64 remote_wwpn;
> +	u64 local_wwpn = sport->wwpn;
> +	u64 wwn_bump = 0;
> +
> +	remote_sp = (struct fc_els_flogi *)sport->domain->flogi_service_params;
> +	remote_wwpn = efc_get_wwpn(remote_sp);
> +
> +	local_wwpn ^= wwn_bump;
> +
> +	remote_wwpn = efc_get_wwpn(remote_sp);
> +
> +	efc_log_debug(sport->efc, "r: %llx\n",
> +		      be64_to_cpu(remote_sp->fl_wwpn));
> +	efc_log_debug(sport->efc, "l: %llx\n", local_wwpn);
> +
> +	if (remote_wwpn == local_wwpn) {
> +		efc_log_warn(sport->efc,
> +			     "WWPN of remote node [%08x %08x] matches local WWPN\n",
> +			     (u32)(local_wwpn >> 32ll),
> +			     (u32)local_wwpn);
> +		return -1;
> +	}
> +
> +	return (remote_wwpn > local_wwpn);
> +}
> +
> +void *
> +__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		struct efc_sli_port *sport = node->sport;
> +		struct efc_node *rnode;
> +
> +		/*
> +		 * this transient node (SID=0 (recv'd FLOGI)
> +		 * or DID=fabric (sent FLOGI))
> +		 * is the p2p winner, will use a separate node
> +		 * to send PLOGI to peer
> +		 */
> +		WARN_ON(!node->sport->p2p_winner);
> +
> +		rnode = efc_node_find(sport, node->sport->p2p_remote_port_id);
> +		if (rnode) {
> +			/*
> +			 * the "other" transient p2p node has
> +			 * already kicked off the
> +			 * new node from which PLOGI is sent
> +			 */
> +			node_printf(node,
> +				    "Node with fc_id x%x already exists\n",
> +				    rnode->rnode.fc_id);
> +		} else {
> +			/*
> +			 * create new node (SID=1, DID=2)
> +			 * from which to send PLOGI
> +			 */
> +			rnode = efc_node_alloc(sport,
> +					       sport->p2p_remote_port_id,
> +						false, false);
> +			if (!rnode) {
> +				efc_log_err(efc, "node alloc failed\n");
> +				return NULL;
> +			}
> +
> +			efc_fabric_notify_topology(node);
> +			/* sm: / allocate p2p remote node */
> +			efc_node_transition(rnode, __efc_p2p_rnode_init,
> +					    NULL);
> +		}
> +
> +		/*
> +		 * the transient node (SID=0 or DID=fabric)
> +		 * has served its purpose
> +		 */
> +		if (node->rnode.fc_id == 0) {
> +			/*
> +			 * if this is the SID=0 node,
> +			 * move to the init state in case peer
> +			 * has restarted FLOGI discovery and FLOGI is pending
> +			 */
> +			/* don't send PLOGI on efc_d_init entry */
> +			efc_node_init_device(node, false);
> +		} else {
> +			/*
> +			 * if this is the DID=fabric node
> +			 * (we initiated FLOGI), shut it down
> +			 */
> +			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +			efc_fabric_initiate_shutdown(node);
> +		}
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/* sm: / send PLOGI */
> +		efc->tt.els_send(efc, node, ELS_PLOGI,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_p2p_wait_plogi_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_ABTS_RCVD:
> +		/* sm: send BA_ACC */
> +		efc->tt.bls_send_acc_hdr(efc, node, cbdata->header->dma.virt);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +
> +		/* sm: if p2p_winner / domain_attach */
> +		if (node->sport->p2p_winner) {
> +			efc_node_transition(node,
> +					    __efc_p2p_wait_domain_attach,
> +					NULL);
> +			if (!node->sport->domain->attached) {
> +				node_printf(node, "Domain not attached\n");
> +				efc_domain_attach(node->sport->domain,
> +						  node->sport->p2p_port_id);
> +			} else {
> +				node_printf(node, "Domain already attached\n");
> +				efc_node_post_event(node,
> +						    EFC_EVT_DOMAIN_ATTACH_OK,
> +						    NULL);
> +			}
> +		} else {
> +			/* this node has served its purpose;
> +			 * we'll expect a PLOGI on a separate
> +			 * node (remote SID=0x1); return this node
> +			 * to init state in case peer
> +			 * restarts discovery -- it may already
> +			 * have (pending frames may exist).
> +			 */
> +			/* don't send PLOGI on efc_d_init entry */
> +			efc_node_init_device(node, false);
> +		}
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		/*
> +		 * LS_ACC failed, possibly due to link down;
> +		 * shutdown node and wait
> +		 * for FLOGI discovery to restart
> +		 */
> +		node_printf(node, "FLOGI LS_ACC failed, shutting down\n");
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	case EFC_EVT_ABTS_RCVD: {
> +		/* sm: / send BA_ACC */
> +		efc->tt.bls_send_acc_hdr(efc, node,
> +					 cbdata->header->dma.virt);
> +		break;
> +	}
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK: {
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +		break;
> +	}
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		node_printf(node, "PLOGI failed, shutting down\n");
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +	}
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		/* if we're in external loopback mode, just send LS_ACC */
> +		if (node->efc->external_loopback) {
> +			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
> +						be16_to_cpu(hdr->fh_ox_id));
> +		} else {
> +			/*
> +			 * if this isn't external loopback,
> +			 * pass to default handler
> +			 */
> +			__efc_fabric_common(__func__, ctx, evt, arg);
> +		}
> +		break;
> +	}
> +	case EFC_EVT_PRLI_RCVD:
> +		/* I, or I+T */
> +		/* sent PLOGI and before completion was seen, received the
> +		 * PRLI from the remote node (WCQEs and RCQEs come in on
> +		 * different queues and order of processing cannot be assumed)
> +		 * Save OXID so PRLI can be sent after the attach and continue
> +		 * to wait for PLOGI response
> +		 */
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +					     EFC_NODE_SEND_LS_ACC_PRLI);
> +		efc_node_transition(node, __efc_p2p_wait_plogi_rsp_recvd_prli,
> +				    NULL);
> +		break;
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
> +				    enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/*
> +		 * Since we've received a PRLI, we have a port login and will
> +		 * just need to wait for the PLOGI response to do the node
> +		 * attach and then we can send the LS_ACC for the PRLI. If,
> +		 * during this time, we receive FCP_CMNDs (which is possible
> +		 * since we've already sent a PRLI and our peer may have
> +		 * accepted it), they can simply be held.
> +		 * At this time, we are not waiting on any other unsolicited
> +		 * frames to continue with the login process. Thus, it will not
> +		 * hurt to hold frames here.
> +		 */
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
> +		/* Completion from PLOGI sent */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* PLOGI failed, shutdown the node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_fabric_common, __func__)) {
> +			return NULL;
> +		}
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		switch (node->send_ls_acc) {
> +		case EFC_NODE_SEND_LS_ACC_PRLI: {
> +			efc_d_send_prli_rsp(node->ls_acc_io,
> +					    node->ls_acc_oxid);
> +			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +			node->ls_acc_io = NULL;
> +			break;
> +		}
> +		case EFC_NODE_SEND_LS_ACC_PLOGI: /* Can't happen in P2P */
> +		case EFC_NODE_SEND_LS_ACC_NONE:
> +		default:
> +			/* Normal case for I */
> +			/* sm: send_plogi_acc is not set / send PLOGI acc */
> +			efc_node_transition(node, __efc_d_port_logged_in,
> +					    NULL);
> +			break;
> +		}
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "Node attach failed\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	case EFC_EVT_SHUTDOWN:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node,
> +				    __efc_fabric_wait_attach_evt_shutdown,
> +				     NULL);
> +		break;
> +	case EFC_EVT_PRLI_RCVD:
> +		node_printf(node, "%s: PRLI received before node is attached\n",
> +			    efc_sm_event_name(evt));
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PRLI);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +int
> +efc_p2p_setup(struct efc_sli_port *sport)
> +{
> +	struct efc *efc = sport->efc;
> +	int rnode_winner;
> +
> +	rnode_winner = efc_rnode_is_winner(sport);
> +
> +	/* set sport flags to indicate p2p "winner" */
> +	if (rnode_winner == 1) {
> +		sport->p2p_remote_port_id = 0;
> +		sport->p2p_port_id = 0;
> +		sport->p2p_winner = false;
> +	} else if (rnode_winner == 0) {
> +		sport->p2p_remote_port_id = 2;
> +		sport->p2p_port_id = 1;
> +		sport->p2p_winner = true;
> +	} else {
> +		/* no winner; only okay if external loopback enabled */
> +		if (sport->efc->external_loopback) {
> +			/*
> +			 * External loopback mode enabled;
> +			 * local sport and remote node
> +			 * will be registered with an NPortID = 1;
> +			 */
> +			efc_log_debug(efc,
> +				      "External loopback mode enabled\n");
> +			sport->p2p_remote_port_id = 1;
> +			sport->p2p_port_id = 1;
> +			sport->p2p_winner = true;
> +		} else {
> +			efc_log_warn(efc,
> +				     "failed to determine p2p winner\n");
> +			return rnode_winner;
> +		}
> +	}
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_fabric.h b/drivers/scsi/elx/libefc/efc_fabric.h
> new file mode 100644
> index 000000000000..9571b4b7b2ce
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_fabric.h
> @@ -0,0 +1,116 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Declarations for the interface exported by efc_fabric
> + */
> +
> +#ifndef __EFCT_FABRIC_H__
> +#define __EFCT_FABRIC_H__
> +#include "scsi/fc/fc_els.h"
> +#include "scsi/fc/fc_fs.h"
> +#include "scsi/fc/fc_ns.h"
> +
> +void *
> +__efc_fabric_init(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_domain_attach_wait(struct efc_sm_ctx *ctx,
> +				enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
> +				enum efc_sm_event evt, void *arg);
> +
> +void *
> +__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_wait_sport_attach(struct efc_sm_ctx *ctx,
> +			       enum efc_sm_event evt, void *arg);
> +
> +void *
> +__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
> +				      enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_logo_wait_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event, void *arg);
> +void *
> +__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
> +			enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
> +void *
> +__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabctl_init(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabctl_ready(struct efc_sm_ctx *ctx,
> +		   enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +void *
> +__efc_fabric_idle(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +
> +void *
> +__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg);
> +void *
> +__efc_p2p_domain_attach_wait(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +void *
> +__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
> +			      enum efc_sm_event evt, void *arg);
> +void *
> +__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg);
> +void *
> +__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
> +				    enum efc_sm_event evt, void *arg);
> +void *
> +__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +void *
> +__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +
> +int
> +efc_p2p_setup(struct efc_sli_port *sport);
> +void
> +efc_fabric_set_topology(struct efc_node *node,
> +			enum efc_sport_topology topology);
> +void efc_fabric_notify_topology(struct efc_node *node);
> +
> +#endif /* __EFCT_FABRIC_H__ */
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-12  3:32 ` [PATCH v3 14/31] elx: libefc: FC node ELS and state handling James Smart
@ 2020-04-15 18:56   ` Daniel Wagner
  2020-04-23  2:50     ` James Smart
  2020-04-16  6:47   ` Hannes Reinecke
  1 sibling, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-15 18:56 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:46PM -0700, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - FC node PRLI handling and state management
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Replace efc_assert with WARN_ON
>   Bug Fix: Send LS_RJT for non FCP PRLIs
> ---
>  drivers/scsi/elx/libefc/efc_device.c | 1672 ++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/libefc/efc_device.h |   72 ++
>  2 files changed, 1744 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc/efc_device.c
>  create mode 100644 drivers/scsi/elx/libefc/efc_device.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
> new file mode 100644
> index 000000000000..e279a6dd19fa
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_device.c
> @@ -0,0 +1,1672 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * device_sm Node State Machine: Remote Device States
> + */
> +
> +#include "efc.h"
> +#include "efc_device.h"
> +#include "efc_fabric.h"
> +
> +void
> +efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id)
> +{
> +	struct efc *efc = node->efc;
> +	/* If the back-end doesn't want to talk to this initiator,
> +	 * we send an LS_RJT
> +	 */
> +	if (node->sport->enable_tgt &&
> +	    (efc->tt.scsi_validate_node(efc, node) == 0)) {
> +		node_printf(node, "PRLI rejected by target-server\n");
> +
> +		efc->tt.send_ls_rjt(efc, node, ox_id,
> +				    ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +	} else {
> +		/*
> +		 * sm: / process PRLI payload, send PRLI acc
> +		 */
> +		efc->tt.els_send_resp(efc, node, ELS_PRLI, ox_id);
> +
> +		/* Immediately go to ready state to avoid window where we're
> +		 * waiting for the PRLI LS_ACC to complete while holding
> +		 * FCP_CMNDs
> +		 */
> +		efc_node_transition(node, __efc_d_device_ready, NULL);
> +	}
> +}
> +
> +static void *
> +__efc_d_common(const char *funcname, struct efc_sm_ctx *ctx,
> +	       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +	struct efc *efc = NULL;
> +
> +	node = ctx->app;
> +	efc = node->efc;
> +
> +	switch (evt) {
> +	/* Handle shutdown events */
> +	case EFC_EVT_SHUTDOWN:
> +		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
> +			      funcname, efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s %-20s\n",
> +			      node->display_name, funcname,
> +				efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
> +			      funcname, efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	default:
> +		/* call default event handler common to all nodes */
> +		__efc_node_common(funcname, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * State is entered when a node sends a delete initiator/target call to the
> + * target-server/initiator-client and needs to wait for that work to complete.
> + */
> +static void *
> +__efc_d_wait_del_node(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		/* Fall through */

		fallthrough;

> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* These are expected events. */
> +		break;
> +
> +	case EFC_EVT_NODE_DEL_INI_COMPLETE:
> +	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
> +		/*
> +		 * node has either been detached or is in the process
> +		 * of being detached,
> +		 * call common node's initiate cleanup function
> +		 */
> +		efc_node_initiate_cleanup(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IOs complete */
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */

		fallthrough;

> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +static void *
> +__efc_d_wait_del_ini_tgt(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		/* Fall through */
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* These are expected events. */
> +		break;
> +
> +	case EFC_EVT_NODE_DEL_INI_COMPLETE:
> +	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
> +		efc_node_transition(node, __efc_d_wait_del_node, NULL);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IOs complete */
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		/* assume no wait needed */
> +		int rc = EFC_SCSI_CALL_COMPLETE;
> +
> +		efc->tt.scsi_io_alloc_disable(efc, node);
> +
> +		/* make necessary delete upcall(s) */
> +		if (node->init && !node->targ) {
> +			efc_log_info(node->efc,
> +				     "[%s] delete (initiator) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			efc_node_transition(node,
> +					    __efc_d_wait_del_node,
> +					     NULL);
> +			if (node->sport->enable_tgt)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +					EFC_SCSI_INITIATOR_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +
> +		} else if (node->targ && !node->init) {
> +			efc_log_info(node->efc,
> +				     "[%s] delete (target) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			efc_node_transition(node,
> +					    __efc_d_wait_del_node,
> +					     NULL);
> +			if (node->sport->enable_ini)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +					EFC_SCSI_TARGET_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +
> +		} else if (node->init && node->targ) {
> +			efc_log_info(node->efc,
> +				     "[%s] delete (I+T) WWPN %s WWNN %s\n",
> +				node->display_name, node->wwpn, node->wwnn);
> +			efc_node_transition(node, __efc_d_wait_del_ini_tgt,
> +					    NULL);
> +			if (node->sport->enable_tgt)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +						EFC_SCSI_INITIATOR_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +			/* assume no wait needed */
> +			rc = EFC_SCSI_CALL_COMPLETE;
> +			if (node->sport->enable_ini)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +						EFC_SCSI_TARGET_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +		}
> +
> +		/* we've initiated the upcalls as needed, now kick off the node
> +		 * detach to precipitate the aborting of outstanding exchanges
> +		 * associated with said node
> +		 *
> +		 * Beware: if we've made upcall(s), we've already transitioned
> +		 * to a new state by the time we execute this.
> +		 * consider doing this before the upcalls?
> +		 */
> +		if (node->attached) {
> +			/* issue hw node free; don't care if succeeds right
> +			 * away or sometime later, will check node->attached
> +			 * later in shutdown process
> +			 */
> +			rc = efc->tt.hw_node_detach(efc, &node->rnode);
> +			if (rc != EFC_HW_RTN_SUCCESS &&
> +			    rc != EFC_HW_RTN_SUCCESS_SYNC)
> +				node_printf(node,
> +					    "Failed freeing HW node, rc=%d\n",
> +					rc);
> +		}
> +
> +		/* if neither initiator nor target, proceed to cleanup */
> +		if (!node->init && !node->targ) {
> +			/*
> +			 * node has either been detached or is in
> +			 * the process of being detached,
> +			 * call common node's initiate cleanup function
> +			 */
> +			efc_node_initiate_cleanup(node);
> +		}
> +		break;
> +	}
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* Ignore, this can happen if an ELS is
> +		 * aborted while in a delay/retry state
> +		 */
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_loop(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		/* send PLOGI automatically if initiator */
> +		efc_node_init_device(node, true);
> +		break;
> +	}
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Save the OX_ID for sending LS_ACC sometime later */
> +void
> +efc_send_ls_acc_after_attach(struct efc_node *node,
> +			     struct fc_frame_header *hdr,
> +			     enum efc_node_send_ls_acc ls)
> +{
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +
> +	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE);
> +
> +	node->ls_acc_oxid = ox_id;
> +	node->send_ls_acc = ls;
> +	node->ls_acc_did = ntoh24(hdr->fh_d_id);
> +}
> +
> +void
> +efc_process_prli_payload(struct efc_node *node, void *prli)
> +{
> +	struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
> +
> +	node->init = (sp->spp_flags & FCP_SPPF_INIT_FCN) != 0;
> +	node->targ = (sp->spp_flags & FCP_SPPF_TARG_FCN) != 0;
> +}
> +
> +void *
> +__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:	/* PLOGI ACC completions */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		efc_node_transition(node, __efc_d_port_logged_in, NULL);
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* LOGO response received, sent shutdown */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_LOGO,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		node_printf(node,
> +			    "LOGO sent (evt=%s), shutdown node\n",
> +			efc_sm_event_name(evt));
> +		/* sm: / post explicit logout */
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +				    NULL);
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_init_device(struct efc_node *node, bool send_plogi)
> +{
> +	node->send_plogi = send_plogi;
> +	if ((node->efc->nodedb_mask & EFC_NODEDB_PAUSE_NEW_NODES) &&
> +	    (node->rnode.fc_id != FC_FID_DOM_MGR)) {
> +		node->nodedb_state = __efc_d_init;
> +		efc_node_transition(node, __efc_node_paused, NULL);
> +	} else {
> +		efc_node_transition(node, __efc_d_init, NULL);
> +	}
> +}
> +
> +/**
> + * Device node state machine: Initial node state for an initiator or
> + * a target.
> + *
> + * This state is entered when a node is instantiated, either having been
> + * discovered from a name services query, or having received a PLOGI/FLOGI.
> + */
> +void *
> +__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		if (!node->send_plogi)
> +			break;
> +		/* only send if we have initiator capability,
> +		 * and domain is attached
> +		 */
> +		if (node->sport->enable_ini &&
> +		    node->sport->domain->attached) {
> +			efc->tt.els_send(efc, node, ELS_PLOGI,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +
> +			efc_node_transition(node, __efc_d_wait_plogi_rsp, NULL);
> +		} else {
> +			node_printf(node, "not sending plogi sport.ini=%d,",
> +				    node->sport->enable_ini);
> +			node_printf(node, "domain attached=%d\n",
> +				    node->sport->domain->attached);
> +		}
> +		break;
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* T, or I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +					     EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/* domain already attached */
> +		if (node->sport->domain->attached) {
> +			rc = efc_node_attach(node);
> +			efc_node_transition(node,
> +					    __efc_d_wait_node_attach, NULL);
> +			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
> +				efc_node_post_event(node,
> +						    EFC_EVT_NODE_ATTACH_OK,
> +						    NULL);
> +			}
> +			break;
> +		}
> +
> +		/* domain not attached; several possibilities: */
> +		switch (node->sport->topology) {

This substate machine should go into a new function.
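
An (untested) sketch of what I mean, just to give an idea (the function
name is made up):

	static void
	efc_d_handle_plogi_domain_not_attached(struct efc_node *node, u32 d_id)
	{
		switch (node->sport->topology) {
		case EFC_SPORT_TOPOLOGY_P2P:
			/* not attached and sport is p2p, need to attach */
			efc_domain_attach(node->sport->domain, d_id);
			efc_node_transition(node, __efc_d_wait_domain_attach,
					    NULL);
			break;
		case EFC_SPORT_TOPOLOGY_FABRIC:
			/*
			 * domain attach already requested by the fabric
			 * state machine, just wait for it
			 */
			efc_node_transition(node, __efc_d_wait_domain_attach,
					    NULL);
			break;
		case EFC_SPORT_TOPOLOGY_UNKNOWN:
			node_printf(node,
				    "received PLOGI, unknown topology did=0x%x\n",
				    d_id);
			efc_node_transition(node, __efc_d_wait_topology_notify,
					    NULL);
			break;
		default:
			node_printf(node,
				    "received PLOGI, with unexpected topology %d\n",
				    node->sport->topology);
			break;
		}
	}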

> +		case EFC_SPORT_TOPOLOGY_P2P:
> +			/* we're not attached and sport is p2p,
> +			 * need to attach
> +			 */
> +			efc_domain_attach(node->sport->domain, d_id);
> +			efc_node_transition(node,
> +					    __efc_d_wait_domain_attach,
> +					    NULL);
> +			break;
> +		case EFC_SPORT_TOPOLOGY_FABRIC:
> +			/* we're not attached and sport is fabric, domain
> +			 * attach should have already been requested as part
> +			 * of the fabric state machine, wait for it
> +			 */
> +			efc_node_transition(node, __efc_d_wait_domain_attach,
> +					    NULL);
> +			break;
> +		case EFC_SPORT_TOPOLOGY_UNKNOWN:
> +			/* Two possibilities:
> +			 * 1. received a PLOGI before our FLOGI has completed
> +			 *    (possible since completion comes in on another
> +			 *    CQ), thus we don't know what we're connected to
> +			 *    yet; transition to a state to wait for the
> +			 *    fabric node to tell us;
> +			 * 2. PLOGI received before link went down and we
> +			 * haven't performed domain attach yet.
> +			 * Note: we cannot distinguish between 1. and 2.
> +			 * so have to assume PLOGI
> +			 * was received after link back up.
> +			 */
> +			node_printf(node,
> +				    "received PLOGI, unknown topology did=0x%x\n",
> +				d_id);
> +			efc_node_transition(node,
> +					    __efc_d_wait_topology_notify,
> +					    NULL);
> +			break;
> +		default:
> +			node_printf(node,
> +				    "received PLOGI, with unexpected topology %d\n",
> +				node->sport->topology);
> +			break;
> +		}
> +		break;
> +	}
> +
> +	case EFC_EVT_FDISC_RCVD: {
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	case EFC_EVT_FLOGI_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +		/* sm: / save sparams, send FLOGI acc */
> +		memcpy(node->sport->domain->flogi_service_params,
> +		       cbdata->payload->dma.virt,
> +		       sizeof(struct fc_els_flogi));
> +
> +		/* send FC LS_ACC response, override s_id */
> +		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
> +		efc->tt.send_flogi_p2p_acc(efc, node,
> +				be16_to_cpu(hdr->fh_ox_id), d_id);
> +		if (efc_p2p_setup(node->sport)) {
> +			node_printf(node,
> +				    "p2p setup failed, shutting down node\n");
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		} else {
> +			efc_node_transition(node,
> +					    __efc_p2p_wait_flogi_acc_cmpl,
> +					    NULL);
> +		}
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		if (!node->sport->domain->attached) {
> +			/* most likely a frame left over from before a link
> +			 * down; drop and
> +			 * shut node down w/ "explicit logout" so pending
> +			 * frames are processed
> +			 */
> +			node_printf(node, "%s domain not attached, dropping\n",
> +				    efc_sm_event_name(evt));
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +			break;
> +		}
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD:
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		if (!node->sport->domain->attached) {
> +			/* most likely a frame left over from before a link
> +			 * down; drop and shut node down w/ "explicit logout"
> +			 * so pending frames are processed
> +			 */
> +			node_printf(node, "%s domain not attached, dropping\n",
> +				    efc_sm_event_name(evt));
> +
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +			break;
> +		}
> +		node_printf(node, "%s received, sending reject\n",
> +			    efc_sm_event_name(evt));
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		/* note: problem, we're now expecting an ELS REQ completion
> +		 * from both the LOGO and PLOGI
> +		 */
> +		if (!node->sport->domain->attached) {
> +			/* most likely a frame left over from before a
> +			 * link down; drop and
> +			 * shut node down w/ "explicit logout" so pending
> +			 * frames are processed
> +			 */
> +			node_printf(node, "%s domain not attached, dropping\n",
> +				    efc_sm_event_name(evt));
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +			break;
> +		}
> +
> +		/* Send LOGO */
> +		node_printf(node, "FCP_CMND received, send LOGO\n");
> +		if (efc->tt.els_send(efc, node, ELS_LOGO,
> +				     EFC_FC_FLOGI_TIMEOUT_SEC,
> +			EFC_FC_ELS_DEFAULT_RETRIES) == NULL) {
> +			/*
> +			 * failed to send LOGO, go ahead and cleanup node
> +			 * anyways
> +			 */
> +			node_printf(node, "Failed to send LOGO\n");
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +		} else {
> +			/* sent LOGO, wait for response */
> +			efc_node_transition(node,
> +					    __efc_d_wait_logo_rsp, NULL);
> +		}
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* T, or I+T */
> +		/* received PLOGI with svc parms, go ahead and attach node
> +		 * when PLOGI that was sent ultimately completes, it'll be a
> +		 * no-op
> +		 *
> +		 * If there is an outstanding PLOGI sent, can we set a flag
> +		 * to indicate that we don't want to retry it if it times out?
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +		/* sm: domain->attached / efc_node_attach */
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node,
> +					    EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD:
> +		/* I, or I+T */
> +		/* sent PLOGI and before completion was seen, received the
> +		 * PRLI from the remote node (WCQEs and RCQEs come in on
> +		 * different queues and order of processing cannot be assumed)
> +		 * Save OXID so PRLI can be sent after the attach and continue
> +		 * to wait for PLOGI response
> +		 */
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PRLI);
> +		efc_node_transition(node, __efc_d_wait_plogi_rsp_recvd_prli,
> +				    NULL);
> +		break;
> +
> +	case EFC_EVT_LOGO_RCVD: /* why don't we do a shutdown here?? */
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_FDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD:
> +	case EFC_EVT_SCR_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received, sending reject\n",
> +			    efc_sm_event_name(evt));
> +
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
> +		/* Completion from PLOGI sent */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node,
> +					    EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
> +		/* PLOGI failed, shutdown the node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* Our PLOGI was rejected, this is ok in some cases */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		/* not logged in yet and outstanding PLOGI so don't send LOGO,
> +		 * just drop
> +		 */
> +		node_printf(node, "FCP_CMND received, drop\n");
> +		break;
> +	}
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
> +				  enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/*
> +		 * Since we've received a PRLI, we have a port login and will
> +		 * just need to wait for the PLOGI response to do the node
> +		 * attach and then we can send the LS_ACC for the PRLI. If,
> +		 * during this time, we receive FCP_CMNDs (which is possible
> +		 * since we've already sent a PRLI and our peer may have
> +		 * accepted). At this time, we are not waiting on any other
> +		 * unsolicited frames to continue with the login process. Thus,
> +		 * it will not hurt to hold frames here.
> +		 */
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
> +		/* Completion from PLOGI sent */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* PLOGI failed, shutdown the node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		WARN_ON(!node->sport->domain->attached);
> +		/* sm: / efc_node_attach */
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY: {
> +		enum efc_sport_topology topology =
> +					(enum efc_sport_topology)arg;
> +
> +		WARN_ON(node->sport->domain->attached);
> +
> +		WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		node_printf(node, "topology notification, topology=%d\n",
> +			    topology);
> +
> +		/* At the time the PLOGI was received, the topology was unknown,
> +		 * so we didn't know which node would perform the domain attach:
> +		 * 1. The node from which the PLOGI was sent (p2p) or
> +		 * 2. The node to which the FLOGI was sent (fabric).
> +		 */
> +		if (topology == EFC_SPORT_TOPOLOGY_P2P) {
> +			/* if this is p2p, need to attach to the domain using
> +			 * the d_id from the PLOGI received
> +			 */
> +			efc_domain_attach(node->sport->domain,
> +					  node->ls_acc_did);
> +		}
> +		/* else, if this is fabric, the domain attach
> +		 * should be performed by the fabric node (node sending FLOGI);
> +		 * just wait for attach to complete
> +		 */
> +
> +		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		WARN_ON(!node->sport->domain->attached);
> +		node_printf(node, "domain attach ok\n");
> +		/* sm: / efc_node_attach */
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node,
> +					    EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		switch (node->send_ls_acc) {
> +		case EFC_NODE_SEND_LS_ACC_PLOGI: {
> +			/* sm: send_plogi_acc is set / send PLOGI acc */
> +			/* Normal case for T, or I+T */
> +			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
> +							node->ls_acc_oxid);
> +			efc_node_transition(node,
> +					    __efc_d_wait_plogi_acc_cmpl,
> +					     NULL);
> +			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +			node->ls_acc_io = NULL;
> +			break;
> +		}
> +		case EFC_NODE_SEND_LS_ACC_PRLI: {
> +			efc_d_send_prli_rsp(node,
> +					    node->ls_acc_oxid);
> +			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +			node->ls_acc_io = NULL;
> +			break;
> +		}
> +		case EFC_NODE_SEND_LS_ACC_NONE:
> +		default:
> +			/* Normal case for I */
> +			/* sm: send_plogi_acc is not set / send PLOGI acc */
> +			efc_node_transition(node,
> +					    __efc_d_port_logged_in, NULL);
> +			break;
> +		}
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "node attach failed\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	/* Handle shutdown events */
> +	case EFC_EVT_SHUTDOWN:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
> +				    NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
> +		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
> +				    NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
> +		efc_node_transition(node,
> +				    __efc_d_wait_attach_evt_shutdown, NULL);
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
> +				 enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	/* wait for any of these attach events and then shutdown */
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
> +			    efc_sm_event_name(evt));
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
> +			    efc_sm_event_name(evt));
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/* Normal case for I or I+T */
> +		if (node->sport->enable_ini &&
> +		    !(node->rnode.fc_id != FC_FID_DOM_MGR)) {
> +			/* sm: if enable_ini / send PRLI */
> +			efc->tt.els_send(efc, node, ELS_PRLI,
> +				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +			/* can now expect ELS_REQ_OK/FAIL/RJT */
> +		}
> +		break;
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD: {
> +		/* Normal case for T or I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		struct fc_els_spp *sp = cbdata->payload->dma.virt
> +					+ sizeof(struct fc_els_prli);
> +
> +		if (sp->spp_type != FC_TYPE_FCP) {
> +			/*Only FCP is supported*/
> +			efc->tt.send_ls_rjt(efc, node,
> +					be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
> +			break;
> +		}
> +
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +		efc_d_send_prli_rsp(node, be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PRLI response */
> +		/* Normal case for I or I+T */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / process PRLI payload */
> +		efc_process_prli_payload(node, cbdata->els_rsp.virt);
> +		efc_node_transition(node, __efc_d_device_ready, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL: {	/* PRLI response failed */
> +		/* I, I+T, assume some link failure, shutdown node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT: {
> +		/* PRLI rejected by remote
> +		 * Normal for I, I+T (connected to an I)
> +		 * Node doesn't want to be a target, stay here and wait for a
> +		 * PRLI from the remote node
> +		 * if it really wants to connect to us as target
> +		 */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK: {
> +		/* Normal T, I+T, target-server rejected the process login */
> +		/* This would be received only in the case where we sent
> +		 * LS_RJT for the PRLI, so
> +		 * do nothing.   (note: as T only we could shutdown the node)
> +		 */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		break;
> +	}
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/*sm: / save sparams, set send_plogi_acc,
> +		 *post implicit logout
> +		 * Save plogi parameters
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/* Restart node attach with new service parameters,
> +		 * and send ACC
> +		 */
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
> +				    NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		/* I, T, I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt),
> +					node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		/* sm: / post explicit logout */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		efc_node_post_event(node,
> +				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_device_ready(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	if (evt != EFC_EVT_FCP_CMD_RCVD)
> +		node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		node->fcp_enabled = true;
> +		if (node->init) {
> +			efc_log_info(efc,
> +				     "[%s] found (initiator) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			if (node->sport->enable_tgt)
> +				efc->tt.scsi_new_node(efc, node);
> +		}
> +		if (node->targ) {
> +			efc_log_info(efc,
> +				     "[%s] found (target) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			if (node->sport->enable_ini)
> +				efc->tt.scsi_new_node(efc, node);
> +		}
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		node->fcp_enabled = false;
> +		break;
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* sm: / save sparams, set send_plogi_acc, post implicit
> +		 * logout
> +		 * Save plogi parameters
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/*
> +		 * Restart node attach with new service parameters,
> +		 * and send ACC
> +		 */
> +		efc_node_post_event(node,
> +				    EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD: {
> +		/* T, I+T: remote initiator is slow to get started */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		struct fc_els_spp *sp = cbdata->payload->dma.virt
> +					+ sizeof(struct fc_els_prli);
> +
> +		if (sp->spp_type != FC_TYPE_FCP) {
> +			/*Only FCP is supported*/
> +			efc->tt.send_ls_rjt(efc, node,
> +					be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
> +			break;
> +		}
> +
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +
> +		efc->tt.els_send_resp(efc, node, ELS_PRLI,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		/* sm: / send PRLO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_PRLO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		/* need implicit logout? */
> +		break;
> +	}
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_ADISC_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		/* sm: / send ADISC acc */
> +		efc->tt.els_send_resp(efc, node, ELS_ADISC,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_ABTS_RCVD:
> +		/* sm: / process ABTS */
> +		efc_log_err(efc, "Unexpected event:%s\n",
> +					efc_sm_event_name(evt));
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		break;
> +
> +	case EFC_EVT_NODE_REFOUND:
> +		break;
> +
> +	case EFC_EVT_NODE_MISSING:
> +		if (node->sport->enable_rscn)
> +			efc_node_transition(node, __efc_d_device_gone, NULL);
> +
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +		/* T, or I+T, PRLI accept completed ok */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		/* T, or I+T, PRLI accept failed to complete */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		node_printf(node, "Failed to send PRLI LS_ACC\n");
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_device_gone(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	int rc = EFC_SCSI_CALL_COMPLETE;
> +	int rc_2 = EFC_SCSI_CALL_COMPLETE;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		static char const *labels[] = {"none", "initiator", "target",
> +							"initiator+target"};
> +
> +		efc_log_info(efc, "[%s] missing (%s)    WWPN %s WWNN %s\n",
> +			     node->display_name,
> +				labels[(node->targ << 1) | (node->init)],
> +						node->wwpn, node->wwnn);
> +
> +		switch (efc_node_get_enable(node)) {
> +		case EFC_NODE_ENABLE_T_TO_T:
> +		case EFC_NODE_ENABLE_I_TO_T:
> +		case EFC_NODE_ENABLE_IT_TO_T:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_TARGET_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_T_TO_I:
> +		case EFC_NODE_ENABLE_I_TO_I:
> +		case EFC_NODE_ENABLE_IT_TO_I:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_INITIATOR_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_T_TO_IT:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_INITIATOR_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_I_TO_IT:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +						  EFC_SCSI_TARGET_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_IT_TO_IT:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_INITIATOR_MISSING);
> +			rc_2 = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_TARGET_MISSING);
> +			break;
> +
> +		default:
> +			rc = EFC_SCSI_CALL_COMPLETE;
> +			break;
> +		}
> +
> +		if (rc == EFC_SCSI_CALL_COMPLETE &&
> +		    rc_2 == EFC_SCSI_CALL_COMPLETE)
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +
> +		break;
> +	}
> +	case EFC_EVT_NODE_REFOUND:
> +		/* two approaches, reauthenticate with PLOGI/PRLI, or ADISC */
> +
> +		/* reauthenticate with PLOGI/PRLI */
> +		/* efc_node_transition(node, __efc_d_discovered, NULL); */
> +
> +		/* reauthenticate with ADISC */
> +		/* sm: / send ADISC */
> +		efc->tt.els_send(efc, node, ELS_ADISC,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_d_wait_adisc_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* sm: / save sparams, set send_plogi_acc, post implicit
> +		 * logout
> +		 * Save plogi parameters
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/*
> +		 * Restart node attach with new service parameters, and send
> +		 * ACC
> +		 */
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
> +				    NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		/* most likely a stale frame (received prior to link down),
> +		 * if attempt to send LOGO, will probably timeout and eat
> +		 * up 20s; thus, drop FCP_CMND
> +		 */
> +		node_printf(node, "FCP_CMND received, drop\n");
> +		break;
> +	}
> +	case EFC_EVT_LOGO_RCVD: {
> +		/* I, T, I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_transition(node, __efc_d_device_ready, NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* received an LS_RJT, in this case, send shutdown
> +		 * (explicit logo) event which will unregister the node,
> +		 * and start over with PLOGI
> +		 */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / post explicit logout */
> +		efc_node_post_event(node,
> +				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +				     NULL);
> +		break;
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		/* In this case, we have the equivalent of an LS_RJT for
> +		 * the ADISC, so we need to abort the ADISC, and re-login
> +		 * with PLOGI
> +		 */
> +		/* sm: / request abort, send LOGO acc */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_device.h b/drivers/scsi/elx/libefc/efc_device.h
> new file mode 100644
> index 000000000000..513096b8f875
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_device.h
> @@ -0,0 +1,72 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Node state machine functions for remote device node sm
> + */
> +
> +#ifndef __EFCT_DEVICE_H__
> +#define __EFCT_DEVICE_H__
> +extern void
> +efc_node_init_device(struct efc_node *node, bool send_plogi);
> +extern void
> +efc_process_prli_payload(struct efc_node *node,
> +			 void *prli);
> +extern void
> +efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id);
> +extern void
> +efc_send_ls_acc_after_attach(struct efc_node *node,
> +			     struct fc_frame_header *hdr,
> +			     enum efc_node_send_ls_acc ls);
> +extern void *
> +__efc_d_wait_loop(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
> +				  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
> +				 enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_device_ready(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_device_gone(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg);
> +
> +#endif /* __EFCT_DEVICE_H__ */
> -- 
> 2.16.4
> 
> 

I've run out of steam, so I'll stop here...

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 13/31] elx: libefc: Fabric node state machine interfaces
  2020-04-12  3:32 ` [PATCH v3 13/31] elx: libefc: Fabric " James Smart
  2020-04-15 18:51   ` Daniel Wagner
@ 2020-04-16  6:37   ` Hannes Reinecke
  2020-04-23  1:38     ` James Smart
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  6:37 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - Fabric node initialization and logins.
> - Name/Directory Services node.
> - Fabric Controller node to process rscn events.
> 
> These are all interactions with remote ports that correspond
> to well-known fabric entities
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Replace efc_assert with WARN_ON
>    Return defined return values
> ---
>   drivers/scsi/elx/libefc/efc_fabric.c | 1759 ++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc/efc_fabric.h |  116 +++
>   2 files changed, 1875 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
>   create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
> new file mode 100644
> index 000000000000..251f8702dbc5
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_fabric.c
> @@ -0,0 +1,1759 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * This file implements remote node state machines for:
> + * - Fabric logins.
> + * - Fabric controller events.
> + * - Name/directory services interaction.
> + * - Point-to-point logins.
> + */
> +
> +/*
> + * fabric_sm Node State Machine: Fabric States
> + * ns_sm Node State Machine: Name/Directory Services States
> + * p2p_sm Node State Machine: Point-to-Point Node States
> + */
> +
> +#include "efc.h"
> +
> +static void
> +efc_fabric_initiate_shutdown(struct efc_node *node)
> +{
> +	int rc;
> +	struct efc *efc = node->efc;
> +
> +	efc->tt.scsi_io_alloc_disable(efc, node);
> +
> +	if (node->attached) {
> +		/* issue hw node free; don't care if succeeds right away
> +		 * or sometime later, will check node->attached later in
> +		 * shutdown process
> +		 */
> +		rc = efc->tt.hw_node_detach(efc, &node->rnode);
> +		if (rc != EFC_HW_RTN_SUCCESS &&
> +		    rc != EFC_HW_RTN_SUCCESS_SYNC) {
> +			node_printf(node, "Failed freeing HW node, rc=%d\n",
> +				    rc);
> +		}
> +	}
> +	/*
> +	 * node has either been detached or is in the process of being detached,
> +	 * call common node's initiate cleanup function
> +	 */
> +	efc_node_initiate_cleanup(node);
> +}
> +
> +static void *
> +__efc_fabric_common(const char *funcname, struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +
> +	node = ctx->app;
> +
> +	switch (evt) {
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		break;
> +	case EFC_EVT_SHUTDOWN:
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_fabric_initiate_shutdown(node);
> +		break;
> +
> +	default:
> +		/* call default event handler common to all nodes */
> +		__efc_node_common(funcname, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
> +		  void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_REENTER:	/* not sure why we're getting these ... */
> +		efc_log_debug(efc, ">>> reenter !!\n");
> +		/* fall through */
> +	case EFC_EVT_ENTER:
> +		/*  sm: / send FLOGI */
> +		efc->tt.els_send(efc, node, ELS_FLOGI,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
> +		break;
> +
> +	default:
> +		__efc_fabric_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
What is going on here?
Why do you declare all of these functions as 'void *' when they only ever
return a NULL pointer?

Is this some weird API requirement, or are these functions fleshed out in
later patches to return something other than NULL?

As it stands, I would recommend turning all of these into plain 'void'
functions, and updating the prototypes once they can actually return
something other than NULL.
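
For illustration only, a rough sketch of what that conversion could look
like for one of the handlers quoted above (this assumes the state-machine
framework's handler/transition pointer type is changed to 'void' as well,
which is not shown here):

void
__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
		  void *arg)
{
	struct efc_node *node = ctx->app;
	struct efc *efc = node->efc;

	efc_node_evt_set(ctx, evt, __func__);
	node_sm_trace();

	switch (evt) {
	case EFC_EVT_REENTER:
		efc_log_debug(efc, ">>> reenter !!\n");
		/* fall through */
	case EFC_EVT_ENTER:
		/* sm: / send FLOGI */
		efc->tt.els_send(efc, node, ELS_FLOGI,
				 EFC_FC_FLOGI_TIMEOUT_SEC,
				 EFC_FC_ELS_DEFAULT_RETRIES);
		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
		break;
	default:
		__efc_fabric_common(__func__, ctx, evt, arg);
		break;
	}
	/* nothing to return: no caller ever looked at the NULL anyway */
}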

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-12  3:32 ` [PATCH v3 14/31] elx: libefc: FC node ELS and state handling James Smart
  2020-04-15 18:56   ` Daniel Wagner
@ 2020-04-16  6:47   ` Hannes Reinecke
  2020-04-23  2:55     ` James Smart
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  6:47 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> - FC node PRLI handling and state management
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Replace efc_assert with WARN_ON
>    Bug Fix: Send LS_RJT for non FCP PRLIs
> ---
>   drivers/scsi/elx/libefc/efc_device.c | 1672 ++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/libefc/efc_device.h |   72 ++
>   2 files changed, 1744 insertions(+)
>   create mode 100644 drivers/scsi/elx/libefc/efc_device.c
>   create mode 100644 drivers/scsi/elx/libefc/efc_device.h
> 
> diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
> new file mode 100644
> index 000000000000..e279a6dd19fa
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_device.c
> @@ -0,0 +1,1672 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * device_sm Node State Machine: Remote Device States
> + */
> +
> +#include "efc.h"
> +#include "efc_device.h"
> +#include "efc_fabric.h"
> +
> +void
> +efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id)
> +{
> +	struct efc *efc = node->efc;
> +	/* If the back-end doesn't want to talk to this initiator,
> +	 * we send an LS_RJT
> +	 */
> +	if (node->sport->enable_tgt &&
> +	    (efc->tt.scsi_validate_node(efc, node) == 0)) {
> +		node_printf(node, "PRLI rejected by target-server\n");
> +
> +		efc->tt.send_ls_rjt(efc, node, ox_id,
> +				    ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +	} else {
> +		/*
> +		 * sm: / process PRLI payload, send PRLI acc
> +		 */
> +		efc->tt.els_send_resp(efc, node, ELS_PRLI, ox_id);
> +
> +		/* Immediately go to ready state to avoid window where we're
> +		 * waiting for the PRLI LS_ACC to complete while holding
> +		 * FCP_CMNDs
> +		 */
> +		efc_node_transition(node, __efc_d_device_ready, NULL);
> +	}
> +}
> +
> +static void *
> +__efc_d_common(const char *funcname, struct efc_sm_ctx *ctx,
> +	       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = NULL;
> +	struct efc *efc = NULL;
> +
> +	node = ctx->app;
> +	efc = node->efc;
> +
> +	switch (evt) {
> +	/* Handle shutdown events */
> +	case EFC_EVT_SHUTDOWN:
> +		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
> +			      funcname, efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s %-20s\n",
> +			      node->display_name, funcname,
> +				efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
> +			      funcname, efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	default:
> +		/* call default event handler common to all nodes */
> +		__efc_node_common(funcname, ctx, evt, arg);
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * State is entered when a node sends a delete initiator/target call to the
> + * target-server/initiator-client and needs to wait for that work to complete.
> + */
> +static void *
> +__efc_d_wait_del_node(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		/* Fall through */
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* These are expected events. */
> +		break;
> +
> +	case EFC_EVT_NODE_DEL_INI_COMPLETE:
> +	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
> +		/*
> +		 * node has either been detached or is in the process
> +		 * of being detached,
> +		 * call common node's initiate cleanup function
> +		 */
> +		efc_node_initiate_cleanup(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IO IO's complete */
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +

If you are going to return NULL anyway, why have a separate return 
statement in the default case?
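
Just to illustrate the nit: the default case could 'break' like the other
cases and share the single return at the bottom, e.g.:

	default:
		__efc_d_common(__func__, ctx, evt, arg);
		break;
	}

	return NULL;
}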

> +static void *
> +__efc_d_wait_del_ini_tgt(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		/* Fall through */
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* These are expected events. */
> +		break;
> +
> +	case EFC_EVT_NODE_DEL_INI_COMPLETE:
> +	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
> +		efc_node_transition(node, __efc_d_wait_del_node, NULL);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* Can happen as ELS IO IO's complete */
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		/* assume no wait needed */
> +		int rc = EFC_SCSI_CALL_COMPLETE;
> +
> +		efc->tt.scsi_io_alloc_disable(efc, node);
> +
> +		/* make necessary delete upcall(s) */
> +		if (node->init && !node->targ) {
> +			efc_log_info(node->efc,
> +				     "[%s] delete (initiator) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			efc_node_transition(node,
> +					    __efc_d_wait_del_node,
> +					     NULL);
> +			if (node->sport->enable_tgt)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +					EFC_SCSI_INITIATOR_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +
> +		} else if (node->targ && !node->init) {
> +			efc_log_info(node->efc,
> +				     "[%s] delete (target) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			efc_node_transition(node,
> +					    __efc_d_wait_del_node,
> +					     NULL);
> +			if (node->sport->enable_ini)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +					EFC_SCSI_TARGET_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +
> +		} else if (node->init && node->targ) {
> +			efc_log_info(node->efc,
> +				     "[%s] delete (I+T) WWPN %s WWNN %s\n",
> +				node->display_name, node->wwpn, node->wwnn);
> +			efc_node_transition(node, __efc_d_wait_del_ini_tgt,
> +					    NULL);
> +			if (node->sport->enable_tgt)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +						EFC_SCSI_INITIATOR_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
> +			/* assume no wait needed */
> +			rc = EFC_SCSI_CALL_COMPLETE;
> +			if (node->sport->enable_ini)
> +				rc = efc->tt.scsi_del_node(efc, node,
> +						EFC_SCSI_TARGET_DELETED);
> +
> +			if (rc == EFC_SCSI_CALL_COMPLETE)
> +				efc_node_post_event(node,
> +					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
> +		}
> +
> +		/* we've initiated the upcalls as needed, now kick off the node
> +		 * detach to precipitate the aborting of outstanding exchanges
> +		 * associated with said node
> +		 *
> +		 * Beware: if we've made upcall(s), we've already transitioned
> +		 * to a new state by the time we execute this.
> +		 * consider doing this before the upcalls?
> +		 */
> +		if (node->attached) {
> +			/* issue hw node free; don't care if succeeds right
> +			 * away or sometime later, will check node->attached
> +			 * later in shutdown process
> +			 */
> +			rc = efc->tt.hw_node_detach(efc, &node->rnode);
> +			if (rc != EFC_HW_RTN_SUCCESS &&
> +			    rc != EFC_HW_RTN_SUCCESS_SYNC)
> +				node_printf(node,
> +					    "Failed freeing HW node, rc=%d\n",
> +					rc);
> +		}
> +
> +		/* if neither initiator nor target, proceed to cleanup */
> +		if (!node->init && !node->targ) {
> +			/*
> +			 * node has either been detached or is in
> +			 * the process of being detached,
> +			 * call common node's initiate cleanup function
> +			 */
> +			efc_node_initiate_cleanup(node);
> +		}
> +		break;
> +	}
> +	case EFC_EVT_ALL_CHILD_NODES_FREE:
> +		/* Ignore, this can happen if an ELS is
> +		 * aborted while in a delay/retry state
> +		 */
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_loop(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK: {
> +		/* send PLOGI automatically if initiator */
> +		efc_node_init_device(node, true);
> +		break;
> +	}
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* Save the OX_ID for sending LS_ACC sometime later */
> +void
> +efc_send_ls_acc_after_attach(struct efc_node *node,
> +			     struct fc_frame_header *hdr,
> +			     enum efc_node_send_ls_acc ls)
> +{
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +
> +	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE);
> +
> +	node->ls_acc_oxid = ox_id;
> +	node->send_ls_acc = ls;
> +	node->ls_acc_did = ntoh24(hdr->fh_d_id);
> +}
> +
> +void
> +efc_process_prli_payload(struct efc_node *node, void *prli)
> +{
> +	struct fc_els_spp *sp = prli + sizeof(struct fc_els_prli);
> +
> +	node->init = (sp->spp_flags & FCP_SPPF_INIT_FCN) != 0;
> +	node->targ = (sp->spp_flags & FCP_SPPF_TARG_FCN) != 0;
> +}
> +
> +void *
> +__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:	/* PLOGI ACC completions */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		efc_node_transition(node, __efc_d_port_logged_in, NULL);
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:
> +		/* LOGO response received, sent shutdown */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_LOGO,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		node_printf(node,
> +			    "LOGO sent (evt=%s), shutdown node\n",
> +			efc_sm_event_name(evt));
> +		/* sm: / post explicit logout */
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +				    NULL);
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void
> +efc_node_init_device(struct efc_node *node, bool send_plogi)
> +{
> +	node->send_plogi = send_plogi;
> +	if ((node->efc->nodedb_mask & EFC_NODEDB_PAUSE_NEW_NODES) &&
> +	    (node->rnode.fc_id != FC_FID_DOM_MGR)) {
> +		node->nodedb_state = __efc_d_init;
> +		efc_node_transition(node, __efc_node_paused, NULL);
> +	} else {
> +		efc_node_transition(node, __efc_d_init, NULL);
> +	}
> +}
> +
> +/**
> + * Device node state machine: Initial node state for an initiator or
> + * a target.
> + *
> + * This state is entered when a node is instantiated, either having been
> + * discovered from a name services query, or having received a PLOGI/FLOGI.
> + */
> +void *
> +__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		if (!node->send_plogi)
> +			break;
> +		/* only send if we have initiator capability,
> +		 * and domain is attached
> +		 */
> +		if (node->sport->enable_ini &&
> +		    node->sport->domain->attached) {
> +			efc->tt.els_send(efc, node, ELS_PLOGI,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +
> +			efc_node_transition(node, __efc_d_wait_plogi_rsp, NULL);
> +		} else {
> +			node_printf(node, "not sending plogi sport.ini=%d,",
> +				    node->sport->enable_ini);
> +			node_printf(node, "domain attached=%d\n",
> +				    node->sport->domain->attached);
> +		}
> +		break;
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* T, or I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +					     EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/* domain already attached */
> +		if (node->sport->domain->attached) {
> +			rc = efc_node_attach(node);
> +			efc_node_transition(node,
> +					    __efc_d_wait_node_attach, NULL);
> +			if (rc == EFC_HW_RTN_SUCCESS_SYNC) {
> +				efc_node_post_event(node,
> +						    EFC_EVT_NODE_ATTACH_OK,
> +						    NULL);
> +			}
> +			break;
> +		}
> +
> +		/* domain not attached; several possibilities: */
> +		switch (node->sport->topology) {
> +		case EFC_SPORT_TOPOLOGY_P2P:
> +			/* we're not attached and sport is p2p,
> +			 * need to attach
> +			 */
> +			efc_domain_attach(node->sport->domain, d_id);
> +			efc_node_transition(node,
> +					    __efc_d_wait_domain_attach,
> +					    NULL);
> +			break;
> +		case EFC_SPORT_TOPOLOGY_FABRIC:
> +			/* we're not attached and sport is fabric, domain
> +			 * attach should have already been requested as part
> +			 * of the fabric state machine, wait for it
> +			 */
> +			efc_node_transition(node, __efc_d_wait_domain_attach,
> +					    NULL);
> +			break;
> +		case EFC_SPORT_TOPOLOGY_UNKNOWN:
> +			/* Two possibilities:
> +			 * 1. received a PLOGI before our FLOGI has completed
> +			 *    (possible since completion comes in on another
> +			 *    CQ), thus we don't know what we're connected to
> +			 *    yet; transition to a state to wait for the
> +			 *    fabric node to tell us;
> +			 * 2. PLOGI received before link went down and we
> +			 * haven't performed domain attach yet.
> +			 * Note: we cannot distinguish between 1. and 2.
> +			 * so have to assume PLOGI
> +			 * was received after link back up.
> +			 */
> +			node_printf(node,
> +				    "received PLOGI, unknown topology did=0x%x\n",
> +				d_id);
> +			efc_node_transition(node,
> +					    __efc_d_wait_topology_notify,
> +					    NULL);
> +			break;
> +		default:
> +			node_printf(node,
> +				    "received PLOGI, with unexpected topology %d\n",
> +				node->sport->topology);
> +			break;
> +		}
> +		break;
> +	}
> +
> +	case EFC_EVT_FDISC_RCVD: {
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		break;
> +	}
> +
> +	case EFC_EVT_FLOGI_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +		/* sm: / save sparams, send FLOGI acc */
> +		memcpy(node->sport->domain->flogi_service_params,
> +		       cbdata->payload->dma.virt,
> +		       sizeof(struct fc_els_flogi));
> +

Is the '->domain' pointer always present at this point?
Shouldn't we rather test for it before accessing it?
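
Something along these lines, purely as a sketch of the defensive check I
have in mind (names taken from the quoted code; whether simply dropping
the FLOGI is the right reaction here is up to you):

	case EFC_EVT_FLOGI_RCVD: {
		struct fc_frame_header *hdr = cbdata->header->dma.virt;
		u32 d_id = ntoh24(hdr->fh_d_id);

		/* bail out if the domain hasn't been created yet */
		if (!node->sport->domain) {
			node_printf(node, "FLOGI received, no domain, dropping\n");
			break;
		}

		/* sm: / save sparams, send FLOGI acc */
		memcpy(node->sport->domain->flogi_service_params,
		       cbdata->payload->dma.virt,
		       sizeof(struct fc_els_flogi));
		/* ... rest of the FLOGI handling unchanged ... */
		break;
	}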

> +		/* send FC LS_ACC response, override s_id */
> +		efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
> +		efc->tt.send_flogi_p2p_acc(efc, node,
> +				be16_to_cpu(hdr->fh_ox_id), d_id);
> +		if (efc_p2p_setup(node->sport)) {
> +			node_printf(node,
> +				    "p2p setup failed, shutting down node\n");
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		} else {
> +			efc_node_transition(node,
> +					    __efc_p2p_wait_flogi_acc_cmpl,
> +					    NULL);
> +		}
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		if (!node->sport->domain->attached) {
> +			/* most likely a frame left over from before a link
> +			 * down; drop and
> +			 * shut node down w/ "explicit logout" so pending
> +			 * frames are processed
> +			 */

Same here; I find it slightly weird to have an 'attached' field in the 
domain structure; attached to what?
Doesn't the existence of the ->domain pointer signal the same thing?
If it doesn't, why don't we test for the ->domain pointer before 
accessing it?

> +			node_printf(node, "%s domain not attached, dropping\n",
> +				    efc_sm_event_name(evt));
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +			break;
> +		}
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD:
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		if (!node->sport->domain->attached) {
> +			/* most likely a frame left over from before a link
> +			 * down; drop and shut node down w/ "explicit logout"
> +			 * so pending frames are processed
> +			 */

See above.

> +			node_printf(node, "%s domain not attached, dropping\n",
> +				    efc_sm_event_name(evt));
> +
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +			break;
> +		}
> +		node_printf(node, "%s received, sending reject\n",
> +			    efc_sm_event_name(evt));
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		/* note: problem, we're now expecting an ELS REQ completion
> +		 * from both the LOGO and PLOGI
> +		 */
> +		if (!node->sport->domain->attached) {
> +			/* most likely a frame left over from before a
> +			 * link down; drop and
> +			 * shut node down w/ "explicit logout" so pending
> +			 * frames are processed
> +			 */
> +			node_printf(node, "%s domain not attached, dropping\n",
> +				    efc_sm_event_name(evt));
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +			break;
> +		}
> +
> +		/* Send LOGO */
> +		node_printf(node, "FCP_CMND received, send LOGO\n");
> +		if (efc->tt.els_send(efc, node, ELS_LOGO,
> +				     EFC_FC_FLOGI_TIMEOUT_SEC,
> +			EFC_FC_ELS_DEFAULT_RETRIES) == NULL) {
> +			/*
> +			 * failed to send LOGO, go ahead and cleanup node
> +			 * anyways
> +			 */
> +			node_printf(node, "Failed to send LOGO\n");
> +			efc_node_post_event(node,
> +					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +					    NULL);
> +		} else {
> +			/* sent LOGO, wait for response */
> +			efc_node_transition(node,
> +					    __efc_d_wait_logo_rsp, NULL);
> +		}
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		/* don't care about domain_attach_ok */
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* T, or I+T */
> +		/* received PLOGI with svc parms, go ahead and attach node
> +		 * when PLOGI that was sent ultimately completes, it'll be a
> +		 * no-op
> +		 *
> +		 * If there is an outstanding PLOGI sent, can we set a flag
> +		 * to indicate that we don't want to retry it if it times out?
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +		/* sm: domain->attached / efc_node_attach */
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node,
> +					    EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD:
> +		/* I, or I+T */
> +		/* sent PLOGI and before completion was seen, received the
> +		 * PRLI from the remote node (WCQEs and RCQEs come in on
> +		 * different queues and order of processing cannot be assumed)
> +		 * Save OXID so PRLI can be sent after the attach and continue
> +		 * to wait for PLOGI response
> +		 */
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PRLI);
> +		efc_node_transition(node, __efc_d_wait_plogi_rsp_recvd_prli,
> +				    NULL);
> +		break;
> +
> +	case EFC_EVT_LOGO_RCVD: /* why don't we do a shutdown here?? */
> +	case EFC_EVT_PRLO_RCVD:
> +	case EFC_EVT_PDISC_RCVD:
> +	case EFC_EVT_FDISC_RCVD:
> +	case EFC_EVT_ADISC_RCVD:
> +	case EFC_EVT_RSCN_RCVD:
> +	case EFC_EVT_SCR_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received, sending reject\n",
> +			    efc_sm_event_name(evt));
> +
> +		efc->tt.send_ls_rjt(efc, node, be16_to_cpu(hdr->fh_ox_id),
> +				    ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
> +
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
> +		/* Completion from PLOGI sent */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node,
> +					    EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
> +		/* PLOGI failed, shutdown the node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* Our PLOGI was rejected, this is ok in some cases */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		/* not logged in yet and outstanding PLOGI so don't send LOGO,
> +		 * just drop
> +		 */
> +		node_printf(node, "FCP_CMND received, drop\n");
> +		break;
> +	}
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
> +				  enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/*
> +		 * Since we've received a PRLI, we have a port login and will
> +		 * just need to wait for the PLOGI response to do the node
> +		 * attach and then we can send the LS_ACC for the PRLI. If,
> +		 * during this time, we receive FCP_CMNDs (which is possible
> +		 * since we've already sent a PRLI and our peer may have
> +		 * accepted). At this time, we are not waiting on any other
> +		 * unsolicited frames to continue with the login process. Thus,
> +		 * it will not hurt to hold frames here.
> +		 */
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
> +		/* Completion from PLOGI sent */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / save sparams, efc_node_attach */
> +		efc_node_save_sparms(node, cbdata->els_rsp.virt);
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* PLOGI failed, shutdown the node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		WARN_ON(!node->sport->domain->attached);
> +		/* sm: / efc_node_attach */
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
> +					    NULL);
> +
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg)
> +{
> +	int rc;
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SPORT_TOPOLOGY_NOTIFY: {
> +		enum efc_sport_topology topology =
> +					(enum efc_sport_topology)arg;
> +
> +		WARN_ON(node->sport->domain->attached);
> +
> +		WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		node_printf(node, "topology notification, topology=%d\n",
> +			    topology);
> +
> +		/* At the time the PLOGI was received, the topology was unknown,
> +		 * so we didn't know which node would perform the domain attach:
> +		 * 1. The node from which the PLOGI was sent (p2p) or
> +		 * 2. The node to which the FLOGI was sent (fabric).
> +		 */
> +		if (topology == EFC_SPORT_TOPOLOGY_P2P) {
> +			/* if this is p2p, need to attach to the domain using
> +			 * the d_id from the PLOGI received
> +			 */
> +			efc_domain_attach(node->sport->domain,
> +					  node->ls_acc_did);
> +		}
> +		/* else, if this is fabric, the domain attach
> +		 * should be performed by the fabric node (node sending FLOGI);
> +		 * just wait for attach to complete
> +		 */
> +
> +		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
> +		break;
> +	}
> +	case EFC_EVT_DOMAIN_ATTACH_OK:
> +		WARN_ON(!node->sport->domain->attached);
> +		node_printf(node, "domain attach ok\n");
> +		/* sm: / efc_node_attach */
> +		rc = efc_node_attach(node);
> +		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
> +		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
> +			efc_node_post_event(node,
> +					    EFC_EVT_NODE_ATTACH_OK, NULL);
> +
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		switch (node->send_ls_acc) {
> +		case EFC_NODE_SEND_LS_ACC_PLOGI: {
> +			/* sm: send_plogi_acc is set / send PLOGI acc */
> +			/* Normal case for T, or I+T */
> +			efc->tt.els_send_resp(efc, node, ELS_PLOGI,
> +							node->ls_acc_oxid);
> +			efc_node_transition(node,
> +					    __efc_d_wait_plogi_acc_cmpl,
> +					     NULL);
> +			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +			node->ls_acc_io = NULL;
> +			break;
> +		}
> +		case EFC_NODE_SEND_LS_ACC_PRLI: {
> +			efc_d_send_prli_rsp(node,
> +					    node->ls_acc_oxid);
> +			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
> +			node->ls_acc_io = NULL;
> +			break;
> +		}
> +		case EFC_NODE_SEND_LS_ACC_NONE:
> +		default:
> +			/* Normal case for I */
> +			/* sm: send_plogi_acc is not set / send PLOGI acc */
> +			efc_node_transition(node,
> +					    __efc_d_port_logged_in, NULL);
> +			break;
> +		}
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "node attach failed\n");
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	/* Handle shutdown events */
> +	case EFC_EVT_SHUTDOWN:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
> +				    NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
> +		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
> +				    NULL);
> +		break;
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
> +		efc_node_transition(node,
> +				    __efc_d_wait_attach_evt_shutdown, NULL);
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
> +				 enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	/* wait for any of these attach events and then shutdown */
> +	case EFC_EVT_NODE_ATTACH_OK:
> +		node->attached = true;
> +		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
> +			    efc_sm_event_name(evt));
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	case EFC_EVT_NODE_ATTACH_FAIL:
> +		/* node attach failed, shutdown the node */
> +		node->attached = false;
> +		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
> +			    efc_sm_event_name(evt));
> +		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
> +		break;
> +
> +	/* ignore shutdown events as we're already in shutdown path */
> +	case EFC_EVT_SHUTDOWN:
> +		/* have default shutdown event take precedence */
> +		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
> +		/* fall through */
> +	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
> +	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
> +		node_printf(node, "%s received\n", efc_sm_event_name(evt));
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		/* Normal case for I or I+T */
> +		if (node->sport->enable_ini &&
> +		    !(node->rnode.fc_id != FC_FID_DOM_MGR)) {
> +			/* sm: if enable_ini / send PRLI */
> +			efc->tt.els_send(efc, node, ELS_PRLI,
> +				EFC_FC_ELS_SEND_DEFAULT_TIMEOUT,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +			/* can now expect ELS_REQ_OK/FAIL/RJT */
> +		}
> +		break;
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD: {
> +		/* Normal case for T or I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		struct fc_els_spp *sp = cbdata->payload->dma.virt
> +					+ sizeof(struct fc_els_prli);
> +
> +		if (sp->spp_type != FC_TYPE_FCP) {
> +			/*Only FCP is supported*/
> +			efc->tt.send_ls_rjt(efc, node,
> +					be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
> +			break;
> +		}
> +
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +		efc_d_send_prli_rsp(node, be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PRLI response */
> +		/* Normal case for I or I+T */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / process PRLI payload */
> +		efc_process_prli_payload(node, cbdata->els_rsp.virt);
> +		efc_node_transition(node, __efc_d_device_ready, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_FAIL: {	/* PRLI response failed */
> +		/* I, I+T, assume some link failure, shutdown node */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT: {
> +		/* PRLI rejected by remote
> +		 * Normal for I, I+T (connected to an I)
> +		 * Node doesn't want to be a target, stay here and wait for a
> +		 * PRLI from the remote node
> +		 * if it really wants to connect to us as target
> +		 */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		break;
> +	}
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK: {
> +		/* Normal T, I+T, target-server rejected the process login */
> +		/* This would be received only in the case where we sent
> +		 * LS_RJT for the PRLI, so
> +		 * do nothing.   (note: as T only we could shutdown the node)
> +		 */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		break;
> +	}
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/*sm: / save sparams, set send_plogi_acc,
> +		 *post implicit logout
> +		 * Save plogi parameters
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/* Restart node attach with new service parameters,
> +		 * and send ACC
> +		 */
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
> +				    NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		/* I, T, I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt),
> +					node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node *node = ctx->app;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		efc_node_hold_frames(node);
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		efc_node_accept_frames(node);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		/* sm: / post explicit logout */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		efc_node_post_event(node,
> +				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
> +		break;
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_device_ready(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	if (evt != EFC_EVT_FCP_CMD_RCVD)
> +		node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER:
> +		node->fcp_enabled = true;
> +		if (node->init) {
> +			efc_log_info(efc,
> +				     "[%s] found (initiator) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			if (node->sport->enable_tgt)
> +				efc->tt.scsi_new_node(efc, node);
> +		}
> +		if (node->targ) {
> +			efc_log_info(efc,
> +				     "[%s] found (target) WWPN %s WWNN %s\n",
> +				node->display_name,
> +				node->wwpn, node->wwnn);
> +			if (node->sport->enable_ini)
> +				efc->tt.scsi_new_node(efc, node);
> +		}
> +		break;
> +
> +	case EFC_EVT_EXIT:
> +		node->fcp_enabled = false;
> +		break;
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* sm: / save sparams, set send_plogi_acc, post implicit
> +		 * logout
> +		 * Save plogi parameters
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/*
> +		 * Restart node attach with new service parameters,
> +		 * and send ACC
> +		 */
> +		efc_node_post_event(node,
> +				    EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLI_RCVD: {
> +		/* T, I+T: remote initiator is slow to get started */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		struct fc_els_spp *sp = cbdata->payload->dma.virt
> +					+ sizeof(struct fc_els_prli);
> +
> +		if (sp->spp_type != FC_TYPE_FCP) {
> +			/*Only FCP is supported*/
> +			efc->tt.send_ls_rjt(efc, node,
> +					be16_to_cpu(hdr->fh_ox_id),
> +					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
> +			break;
> +		}
> +
> +		efc_process_prli_payload(node, cbdata->payload->dma.virt);
> +
> +		efc->tt.els_send_resp(efc, node, ELS_PRLI,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_PRLO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		/* sm: / send PRLO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_PRLO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		/* need implicit logout? */
> +		break;
> +	}
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_ADISC_RCVD: {
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +		/* sm: / send ADISC acc */
> +		efc->tt.els_send_resp(efc, node, ELS_ADISC,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		break;
> +	}
> +
> +	case EFC_EVT_ABTS_RCVD:
> +		/* sm: / process ABTS */
> +		efc_log_err(efc, "Unexpected event:%s\n",
> +					efc_sm_event_name(evt));
> +		break;
> +
> +	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
> +		break;
> +
> +	case EFC_EVT_NODE_REFOUND:
> +		break;
> +
> +	case EFC_EVT_NODE_MISSING:
> +		if (node->sport->enable_rscn)
> +			efc_node_transition(node, __efc_d_device_gone, NULL);
> +
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_OK:
> +		/* T, or I+T, PRLI accept completed ok */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
> +		/* T, or I+T, PRLI accept failed to complete */
> +		WARN_ON(!node->els_cmpl_cnt);
> +		node->els_cmpl_cnt--;
> +		node_printf(node, "Failed to send PRLI LS_ACC\n");
> +		break;
> +
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_device_gone(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg)
> +{
> +	int rc = EFC_SCSI_CALL_COMPLETE;
> +	int rc_2 = EFC_SCSI_CALL_COMPLETE;
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_ENTER: {
> +		static char const *labels[] = {"none", "initiator", "target",
> +							"initiator+target"};
> +
> +		efc_log_info(efc, "[%s] missing (%s)    WWPN %s WWNN %s\n",
> +			     node->display_name,
> +				labels[(node->targ << 1) | (node->init)],
> +						node->wwpn, node->wwnn);
> +
> +		switch (efc_node_get_enable(node)) {
> +		case EFC_NODE_ENABLE_T_TO_T:
> +		case EFC_NODE_ENABLE_I_TO_T:
> +		case EFC_NODE_ENABLE_IT_TO_T:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_TARGET_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_T_TO_I:
> +		case EFC_NODE_ENABLE_I_TO_I:
> +		case EFC_NODE_ENABLE_IT_TO_I:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_INITIATOR_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_T_TO_IT:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_INITIATOR_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_I_TO_IT:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +						  EFC_SCSI_TARGET_MISSING);
> +			break;
> +
> +		case EFC_NODE_ENABLE_IT_TO_IT:
> +			rc = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_INITIATOR_MISSING);
> +			rc_2 = efc->tt.scsi_del_node(efc, node,
> +				EFC_SCSI_TARGET_MISSING);
> +			break;
> +
> +		default:
> +			rc = EFC_SCSI_CALL_COMPLETE;
> +			break;
> +		}
> +
> +		if (rc == EFC_SCSI_CALL_COMPLETE &&
> +		    rc_2 == EFC_SCSI_CALL_COMPLETE)
> +			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
> +
> +		break;
> +	}
> +	case EFC_EVT_NODE_REFOUND:
> +		/* two approaches, reauthenticate with PLOGI/PRLI, or ADISC */
> +
> +		/* reauthenticate with PLOGI/PRLI */
> +		/* efc_node_transition(node, __efc_d_discovered, NULL); */
> +
> +		/* reauthenticate with ADISC */
> +		/* sm: / send ADISC */
> +		efc->tt.els_send(efc, node, ELS_ADISC,
> +				EFC_FC_FLOGI_TIMEOUT_SEC,
> +				EFC_FC_ELS_DEFAULT_RETRIES);
> +		efc_node_transition(node, __efc_d_wait_adisc_rsp, NULL);
> +		break;
> +
> +	case EFC_EVT_PLOGI_RCVD: {
> +		/* sm: / save sparams, set send_plogi_acc, post implicit
> +		 * logout
> +		 * Save plogi parameters
> +		 */
> +		efc_node_save_sparms(node, cbdata->payload->dma.virt);
> +		efc_send_ls_acc_after_attach(node,
> +					     cbdata->header->dma.virt,
> +				EFC_NODE_SEND_LS_ACC_PLOGI);
> +
> +		/*
> +		 * Restart node attach with new service parameters, and send
> +		 * ACC
> +		 */
> +		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
> +				    NULL);
> +		break;
> +	}
> +
> +	case EFC_EVT_FCP_CMD_RCVD: {
> +		/* most likely a stale frame (received prior to link down),
> +		 * if attempt to send LOGO, will probably timeout and eat
> +		 * up 20s; thus, drop FCP_CMND
> +		 */
> +		node_printf(node, "FCP_CMND received, drop\n");
> +		break;
> +	}
> +	case EFC_EVT_LOGO_RCVD: {
> +		/* I, T, I+T */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +		/* sm: / send LOGO acc */
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg)
> +{
> +	struct efc_node_cb *cbdata = arg;
> +	struct efc_node *node = ctx->app;
> +	struct efc *efc = node->efc;
> +
> +	efc_node_evt_set(ctx, evt, __func__);
> +
> +	node_sm_trace();
> +
> +	switch (evt) {
> +	case EFC_EVT_SRRS_ELS_REQ_OK:
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		efc_node_transition(node, __efc_d_device_ready, NULL);
> +		break;
> +
> +	case EFC_EVT_SRRS_ELS_REQ_RJT:
> +		/* received an LS_RJT, in this case, send shutdown
> +		 * (explicit logo) event which will unregister the node,
> +		 * and start over with PLOGI
> +		 */
> +		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
> +					   __efc_d_common, __func__))
> +			return NULL;
> +
> +		WARN_ON(!node->els_req_cnt);
> +		node->els_req_cnt--;
> +		/* sm: / post explicit logout */
> +		efc_node_post_event(node,
> +				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
> +				     NULL);
> +		break;
> +
> +	case EFC_EVT_LOGO_RCVD: {
> +		/* In this case, we have the equivalent of an LS_RJT for
> +		 * the ADISC, so we need to abort the ADISC, and re-login
> +		 * with PLOGI
> +		 */
> +		/* sm: / request abort, send LOGO acc */
> +		struct fc_frame_header *hdr = cbdata->header->dma.virt;
> +
> +		node_printf(node, "%s received attached=%d\n",
> +			    efc_sm_event_name(evt), node->attached);
> +
> +		efc->tt.els_send_resp(efc, node, ELS_LOGO,
> +					be16_to_cpu(hdr->fh_ox_id));
> +		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
> +		break;
> +	}
> +	default:
> +		__efc_d_common(__func__, ctx, evt, arg);
> +		return NULL;
> +	}
> +
> +	return NULL;
> +}
> diff --git a/drivers/scsi/elx/libefc/efc_device.h b/drivers/scsi/elx/libefc/efc_device.h
> new file mode 100644
> index 000000000000..513096b8f875
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc/efc_device.h
> @@ -0,0 +1,72 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Node state machine functions for remote device node sm
> + */
> +
> +#ifndef __EFCT_DEVICE_H__
> +#define __EFCT_DEVICE_H__
> +extern void
> +efc_node_init_device(struct efc_node *node, bool send_plogi);
> +extern void
> +efc_process_prli_payload(struct efc_node *node,
> +			 void *prli);
> +extern void
> +efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id);
> +extern void
> +efc_send_ls_acc_after_attach(struct efc_node *node,
> +			     struct fc_frame_header *hdr,
> +			     enum efc_node_send_ls_acc ls);
> +extern void *
> +__efc_d_wait_loop(struct efc_sm_ctx *ctx,
> +		  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
> +			    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
> +				  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
> +			     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
> +			 enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
> +				 enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
> +			  enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
> +			   enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_device_ready(struct efc_sm_ctx *ctx,
> +		     enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_device_gone(struct efc_sm_ctx *ctx,
> +		    enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
> +		       enum efc_sm_event evt, void *arg);
> +extern void *
> +__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
> +		      enum efc_sm_event evt, void *arg);
> +
> +#endif /* __EFCT_DEVICE_H__ */
> 
And, of course, the comment from the previous patch about the really 
weird logic of 'void *' functions always returning NULL applies 
here, too.
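Just to spell it out once more, the idea would be roughly this (sketch
only, not compiled against this series; the typedef name is made up):

	typedef void (*efc_sm_handler)(struct efc_sm_ctx *ctx,
				       enum efc_sm_event evt, void *arg);

	void
	__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
				   enum efc_sm_event evt, void *arg)
	{
		struct efc_node *node = ctx->app;

		efc_node_evt_set(ctx, evt, __func__);
		node_sm_trace();

		switch (evt) {
		case EFC_EVT_ENTER:
			efc_node_hold_frames(node);
			break;
		case EFC_EVT_EXIT:
			efc_node_accept_frames(node);
			break;
		default:
			__efc_d_common(__func__, ctx, evt, arg);
			break;
		}
	}

i.e. no return value to be ignored at every call site.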

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations
  2020-04-12  3:32 ` [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations James Smart
@ 2020-04-16  6:51   ` Hannes Reinecke
  2020-04-23  2:57     ` James Smart
  2020-04-16  7:22   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  6:51 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch starts the population of the efct target mode
> driver.  The driver is contained in the drivers/scsi/elx/efct
> subdirectory.
> 
> This patch creates the efct directory and starts population of
> the driver by adding SLI-4 configuration parameters, data structures
> for configuring SLI-4 queues, converting from os to SLI-4 IO requests,
> and handling async events.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Changed anonymous enums to named.
>    Removed some structures and defines which are not used.
>    Reworked on efct_hw_io_param struct which can be used for holding
>      params in WQE submission.
> ---
>   drivers/scsi/elx/efct/efct_hw.h | 617 ++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 617 insertions(+)
>   create mode 100644 drivers/scsi/elx/efct/efct_hw.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> new file mode 100644
> index 000000000000..b3d4d4bc8d8c
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -0,0 +1,617 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef _EFCT_HW_H
> +#define _EFCT_HW_H
> +
> +#include "../libefc_sli/sli4.h"
> +
> +/*
> + * EFCT PCI IDs
> + */
> +#define EFCT_VENDOR_ID			0x10df
> +/* LightPulse 16Gb x 4 FC (lancer-g6) */
> +#define EFCT_DEVICE_LANCER_G6		0xe307
> +/* LightPulse 32Gb x 4 FC (lancer-g7) */
> +#define EFCT_DEVICE_LANCER_G7		0xf407
> +
> +/*Default RQ entries len used by driver*/
> +#define EFCT_HW_RQ_ENTRIES_MIN		512
> +#define EFCT_HW_RQ_ENTRIES_DEF		1024
> +#define EFCT_HW_RQ_ENTRIES_MAX		4096
> +
> +/*Defines the size of the RQ buffers used for each RQ*/
> +#define EFCT_HW_RQ_SIZE_HDR             128
> +#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
> +
> +/*Define the maximum number of multi-receive queues*/
> +#define EFCT_HW_MAX_MRQS		8
> +
> +/*
> + * Define count of when to set the WQEC bit in a submitted
> + * WQE, causing a consumed/released completion to be posted.
> + */
> +#define EFCT_HW_WQEC_SET_COUNT		32
> +
> +/*Send frame timeout in seconds*/
> +#define EFCT_HW_SEND_FRAME_TIMEOUT	10
> +
> +/*
> + * FDT Transfer Hint value, reads greater than this value
> + * will be segmented to implement fairness. A value of zero disables
> + * the feature.
> + */
> +#define EFCT_HW_FDT_XFER_HINT		8192
> +
> +#define EFCT_HW_TIMECHECK_ITERATIONS	100
> +#define EFCT_HW_MAX_NUM_MQ		1
> +#define EFCT_HW_MAX_NUM_RQ		32
> +#define EFCT_HW_MAX_NUM_EQ		16
> +#define EFCT_HW_MAX_NUM_WQ		32
> +#define EFCT_HW_DEF_NUM_EQ		1
> +
> +#define OCE_HW_MAX_NUM_MRQ_PAIRS	16
> +
> +#define EFCT_HW_MQ_DEPTH		128
> +#define EFCT_HW_EQ_DEPTH		1024
> +
> +/*
> + * A CQ will be assigned to each WQ
> + * (CQ must have 2X entries of the WQ for abort
> + * processing), plus a separate one for each RQ PAIR and one for MQ
> + */
> +#define EFCT_HW_MAX_NUM_CQ \
> +	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
> +
> +#define EFCT_HW_Q_HASH_SIZE		128
> +#define EFCT_HW_RQ_HEADER_SIZE		128
> +#define EFCT_HW_RQ_HEADER_INDEX		0
> +
> +#define EFCT_HW_REQUE_XRI_REGTAG	65534
> +
> +/* Options for efct_hw_command() */
> +enum efct_cmd_opts {
> +	/* command executes synchronously and busy-waits for completion */
> +	EFCT_CMD_POLL,
> +	/* command executes asynchronously. Uses callback */
> +	EFCT_CMD_NOWAIT,
> +};
> +
> +enum efct_hw_rtn {
> +	EFCT_HW_RTN_SUCCESS = 0,
> +	EFCT_HW_RTN_SUCCESS_SYNC = 1,
> +	EFCT_HW_RTN_ERROR = -1,
> +	EFCT_HW_RTN_NO_RESOURCES = -2,
> +	EFCT_HW_RTN_NO_MEMORY = -3,
> +	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
> +	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
> +	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
> +	EFCT_HW_RTN_INVALID_ARG = -7,
> +};
> +
> +#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
> +
> +enum efct_hw_reset {
> +	EFCT_HW_RESET_FUNCTION,
> +	EFCT_HW_RESET_FIRMWARE,
> +	EFCT_HW_RESET_MAX
> +};
> +
> +enum efct_hw_topo {
> +	EFCT_HW_TOPOLOGY_AUTO,
> +	EFCT_HW_TOPOLOGY_NPORT,
> +	EFCT_HW_TOPOLOGY_LOOP,
> +	EFCT_HW_TOPOLOGY_NONE,
> +	EFCT_HW_TOPOLOGY_MAX
> +};
> +
> +/* pack fw revision values into a single uint64_t */
> +#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32) \
> +			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
> +
> +#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
> +
> +enum efct_hw_io_type {
> +	EFCT_HW_ELS_REQ,
> +	EFCT_HW_ELS_RSP,
> +	EFCT_HW_ELS_RSP_SID,
> +	EFCT_HW_FC_CT,
> +	EFCT_HW_FC_CT_RSP,
> +	EFCT_HW_BLS_ACC,
> +	EFCT_HW_BLS_ACC_SID,
> +	EFCT_HW_BLS_RJT,
> +	EFCT_HW_IO_TARGET_READ,
> +	EFCT_HW_IO_TARGET_WRITE,
> +	EFCT_HW_IO_TARGET_RSP,
> +	EFCT_HW_IO_DNRX_REQUEUE,
> +	EFCT_HW_IO_MAX,
> +};
> +
> +enum efct_hw_io_state {
> +	EFCT_HW_IO_STATE_FREE,
> +	EFCT_HW_IO_STATE_INUSE,
> +	EFCT_HW_IO_STATE_WAIT_FREE,
> +	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
> +};
> +
> +struct efct_hw;
> +
> +/**
> + * HW command context.
> + * Stores the state for the asynchronous commands sent to the hardware.
> + */
> +struct efct_command_ctx {
> +	struct list_head	list_entry;
> +	int (*cb)(struct efct_hw *hw, int status, u8 *mqe, void *arg);
> +	void			*arg;	/* Argument for callback */
> +	u8			*buf;	/* buffer holding command / results */
> +	void			*ctx;	/* upper layer context */
> +};
> +
> +struct efct_hw_sgl {
> +	uintptr_t		addr;
> +	size_t			len;
> +};
> +
> +union efct_hw_io_param_u {
> +	struct sli_bls_params bls;
> +	struct sli_els_params els;
> +	struct sli_ct_params fc_ct;
> +	struct sli_fcp_tgt_params fcp_tgt;
> +};
> +
> +/* WQ steering mode */
> +enum efct_hw_wq_steering {
> +	EFCT_HW_WQ_STEERING_CLASS,
> +	EFCT_HW_WQ_STEERING_REQUEST,
> +	EFCT_HW_WQ_STEERING_CPU,
> +};
> +
> +/* HW wqe object */
> +struct efct_hw_wqe {
> +	struct list_head	list_entry;
> +	bool			abort_wqe_submit_needed;
> +	bool			send_abts;
> +	u32			id;
> +	u32			abort_reqtag;
> +	u8			*wqebuf;
> +};
> +
> +/**
> + * HW IO object.
> + *
> + * Stores the per-IO information necessary
> + * for both the lower (SLI) and upper
> + * layers (efct).
> + */
> +struct efct_hw_io {
> +	/* Owned by HW */
> +
> +	/* reference counter and callback function */
> +	struct kref		ref;
> +	void (*release)(struct kref *arg);
> +	/* used for busy, wait_free, free lists */
> +	struct list_head	list_entry;
> +	/* used for timed_wqe list */
> +	struct list_head	wqe_link;
> +	/* used for io posted dnrx list */
> +	struct list_head	dnrx_link;
> +	/* state of IO: free, busy, wait_free */
> +	enum efct_hw_io_state	state;
> +	/* Work queue object, with link for pending */
> +	struct efct_hw_wqe	wqe;
> +	/* pointer back to hardware context */
> +	struct efct_hw		*hw;
> +	struct efc_remote_node	*rnode;
> +	struct efc_dma		xfer_rdy;
> +	u16	type;
> +	/* WQ assigned to the exchange */
> +	struct hw_wq		*wq;
> +	/* Exchange is active in FW */
> +	bool			xbusy;
> +	/* Function called on IO completion */
> +	int
> +	(*done)(struct efct_hw_io *hio,
> +		struct efc_remote_node *rnode,
> +			u32 len, int status,
> +			u32 ext, void *ul_arg);
> +	/* argument passed to "IO done" callback */
> +	void			*arg;
> +	/* Function called on abort completion */
> +	int
> +	(*abort_done)(struct efct_hw_io *hio,
> +		      struct efc_remote_node *rnode,
> +			u32 len, int status,
> +			u32 ext, void *ul_arg);
> +	/* argument passed to "abort done" callback */
> +	void			*abort_arg;
> +	/* needed for bug O127585: length of IO */
> +	size_t			length;
> +	/* timeout value for target WQEs */
> +	u8			tgt_wqe_timeout;
> +	/* timestamp when current WQE was submitted */
> +	u64			submit_ticks;
> +
> +	/* if TRUE, latched status shld be returned */
> +	bool			status_saved;
> +	/* if TRUE, abort is in progress */
> +	bool			abort_in_progress;
> +	u32			saved_status;
> +	u32			saved_len;
> +	u32			saved_ext;
> +
> +	/* EQ that this HIO came up on */
> +	struct hw_eq		*eq;
> +	/* WQ steering mode request */
> +	enum efct_hw_wq_steering wq_steering;
> +	/* WQ class if steering mode is Class */
> +	u8			wq_class;
> +
> +	/* request tag for this HW IO */
> +	u16			reqtag;
> +	/* request tag for an abort of this HW IO
> +	 * (note: this is a 32 bit value
> +	 * to allow us to use UINT32_MAX as an uninitialized value)
> +	 */
> +	u32			abort_reqtag;
> +	u32			indicator;	/* XRI */
> +	struct efc_dma		def_sgl;	/* default SGL*/
> +	/* Count of SGEs in default SGL */
> +	u32			def_sgl_count;
> +	/* pointer to current active SGL */
> +	struct efc_dma		*sgl;
> +	u32			sgl_count;	/* count of SGEs in io->sgl */
> +	u32			first_data_sge;	/* index of first data SGE */
> +	struct efc_dma		*ovfl_sgl;	/* overflow SGL */
> +	u32			ovfl_sgl_count;
> +	 /* pointer to overflow segment len */
> +	struct sli4_lsp_sge	*ovfl_lsp;
> +	u32			n_sge;		/* number of active SGEs */
> +	u32			sge_offset;
> +
> +	/* where upper layer can store ref to its IO */
> +	void			*ul_io;
> +};
> +

Please consider running 'pahole' on this structure and rearranging it.
Looks like it'll require quite some padding which could be avoided.
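For reference, something like

	pahole -C efct_hw_io drivers/scsi/elx/efct/efct_hw.o

(object path assumed) will show the holes; grouping the bool/u8/u16
members together should get rid of most of the padding.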

> +/* Typedef for HW "done" callback */
> +typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
> +			      u32 len, int status, u32 ext, void *ul_arg);
> +
> +enum efct_hw_port {
> +	EFCT_HW_PORT_INIT,
> +	EFCT_HW_PORT_SHUTDOWN,
> +};
> +
> +/* Node group rpi reference */
> +struct efct_hw_rpi_ref {
> +	atomic_t rpi_count;
> +	atomic_t rpi_attached;
> +};
> +
> +enum efct_hw_link_stat {
> +	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
> +	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
> +	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
> +	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
> +	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
> +	EFCT_HW_LINK_STAT_CRC_COUNT,
> +	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
> +	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
> +	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
> +	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
> +	EFCT_HW_LINK_STAT_MAX,
> +};
> +
> +enum efct_hw_host_stat {
> +	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
> +	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
> +	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
> +	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
> +	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
> +	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
> +	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
> +	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
> +	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
> +	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
> +	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
> +	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
> +	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
> +	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
> +	EFCT_HW_HOST_STAT_MAX,
> +};
> +
> +enum efct_hw_state {
> +	EFCT_HW_STATE_UNINITIALIZED,
> +	EFCT_HW_STATE_QUEUES_ALLOCATED,
> +	EFCT_HW_STATE_ACTIVE,
> +	EFCT_HW_STATE_RESET_IN_PROGRESS,
> +	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,
> +};
> +
> +struct efct_hw_link_stat_counts {
> +	u8		overflow;
> +	u32		counter;
> +};
> +
> +struct efct_hw_host_stat_counts {
> +	u32		counter;
> +};
> +
> +/* Structure used for the hash lookup of queue IDs */
> +struct efct_queue_hash {
> +	bool		in_use;
> +	u16		id;
> +	u16		index;
> +};
> +
> +/* WQ callback object */
> +struct hw_wq_callback {
> +	u16		instance_index;	/* use for request tag */
> +	void (*callback)(void *arg, u8 *cqe, int status);
> +	void		*arg;
> +	struct list_head list_entry;
> +};
> +
> +struct reqtag_pool {
> +	spinlock_t lock;	/* pool lock */
> +	struct hw_wq_callback *tags[U16_MAX];
> +	struct list_head freelist;
> +};
> +
> +struct efct_hw_config {
> +	u32		n_eq;
> +	u32		n_cq;
> +	u32		n_mq;
> +	u32		n_rq;
> +	u32		n_wq;
> +	u32		n_io;
> +	u32		n_sgl;
> +	u32		speed;
> +	u32		topology;
> +	/* size of the buffers for first burst */
> +	u32		rq_default_buffer_size;
> +	u8		esoc;
> +	/* MRQ RQ selection policy */
> +	u8		rq_selection_policy;
> +	/* RQ quanta if rq_selection_policy == 2 */
> +	u8		rr_quanta;
> +	u32		filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
> +};
> +
> +struct efct_hw {
> +	struct efct		*os;
> +	struct sli4		sli;
> +	u16			ulp_start;
> +	u16			ulp_max;
> +	u32			dump_size;
> +	enum efct_hw_state	state;
> +	bool			hw_setup_called;
> +	u8			sliport_healthcheck;
> +
> +	/* HW configuration */
> +	struct efct_hw_config	config;
> +
> +	/* calculated queue sizes for each type */
> +	u32			num_qentries[SLI_QTYPE_MAX];
> +
> +	/* Storage for SLI queue objects */
> +	struct sli4_queue	wq[EFCT_HW_MAX_NUM_WQ];
> +	struct sli4_queue	rq[EFCT_HW_MAX_NUM_RQ];
> +	u16			hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
> +	struct sli4_queue	mq[EFCT_HW_MAX_NUM_MQ];
> +	struct sli4_queue	cq[EFCT_HW_MAX_NUM_CQ];
> +	struct sli4_queue	eq[EFCT_HW_MAX_NUM_EQ];
> +
> +	/* HW queue */
> +	u32			eq_count;
> +	u32			cq_count;
> +	u32			mq_count;
> +	u32			wq_count;
> +	u32			rq_count;
> +	struct list_head	eq_list;
> +
> +	struct efct_queue_hash	cq_hash[EFCT_HW_Q_HASH_SIZE];
> +	struct efct_queue_hash	rq_hash[EFCT_HW_Q_HASH_SIZE];
> +	struct efct_queue_hash	wq_hash[EFCT_HW_Q_HASH_SIZE];
> +
> +	/* Storage for HW queue objects */
> +	struct hw_wq		*hw_wq[EFCT_HW_MAX_NUM_WQ];
> +	struct hw_rq		*hw_rq[EFCT_HW_MAX_NUM_RQ];
> +	struct hw_mq		*hw_mq[EFCT_HW_MAX_NUM_MQ];
> +	struct hw_cq		*hw_cq[EFCT_HW_MAX_NUM_CQ];
> +	struct hw_eq		*hw_eq[EFCT_HW_MAX_NUM_EQ];
> +	/* count of hw_rq[] entries */
> +	u32			hw_rq_count;
> +	/* count of multirq RQs */
> +	u32			hw_mrq_count;
> +
> +	/* Sequence objects used in incoming frame processing */
> +	void			*seq_pool;
> +
> +	/* Maintain an ordered, linked list of outstanding HW commands. */
> +	spinlock_t		cmd_lock;
> +	struct list_head	cmd_head;
> +	struct list_head	cmd_pending;
> +	u32			cmd_head_count;
> +
> +	struct sli4_link_event	link;
> +	struct efc_domain	*domain;
> +
> +	u16			fcf_indicator;
> +
> +	/* pointer array of IO objects */
> +	struct efct_hw_io	**io;
> +	/* array of WQE buffs mapped to IO objects */
> +	u8			*wqe_buffs;
> +
> +	/* IO lock to synchronize list access */
> +	spinlock_t		io_lock;
> +	/* IO lock to synchronize IO aborting */
> +	spinlock_t		io_abort_lock;
> +	/* List of IO objects in use */
> +	struct list_head	io_inuse;
> +	/* List of IO objects waiting to be freed */
> +	struct list_head	io_wait_free;
> +	/* List of IO objects available for allocation */
> +	struct list_head	io_free;
> +
> +	struct efc_dma		loop_map;
> +
> +	struct efc_dma		xfer_rdy;
> +
> +	struct efc_dma		dump_sges;
> +
> +	struct efc_dma		rnode_mem;
> +
> +	struct efct_hw_rpi_ref	*rpi_ref;
> +
> +	atomic_t		io_alloc_failed_count;
> +
> +	/* stat: wq submit count */
> +	u32			tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
> +	/* stat: wq complete count */
> +	u32			tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
> +
> +	struct reqtag_pool	*wq_reqtag_pool;
> +	atomic_t		send_frame_seq_id;
> +};
> +
> +enum efct_hw_io_count_type {
> +	EFCT_HW_IO_INUSE_COUNT,
> +	EFCT_HW_IO_FREE_COUNT,
> +	EFCT_HW_IO_WAIT_FREE_COUNT,
> +	EFCT_HW_IO_N_TOTAL_IO_COUNT,
> +};
> +
> +/* HW queue data structures */
> +struct hw_eq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct efct_hw		*hw;
> +	struct sli4_queue	*queue;
> +	struct list_head	cq_list;
> +	u32			use_count;
> +};
> +
> +struct hw_cq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_eq		*eq;
> +	struct sli4_queue	*queue;
> +	struct list_head	q_list;
> +	u32			use_count;
> +};
> +
> +struct hw_q {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +};
> +
> +struct hw_mq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_cq		*cq;
> +	struct sli4_queue	*queue;
> +
> +	u32			use_count;
> +};
> +
> +struct hw_wq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	struct efct_hw		*hw;
> +
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_cq		*cq;
> +	struct sli4_queue	*queue;
> +	u32			class;
> +
> +	/* WQ consumed */
> +	u32			wqec_set_count;
> +	u32			wqec_count;
> +	u32			free_count;
> +	u32			total_submit_count;
> +	struct list_head	pending_list;
> +
> +	/* HW IO allocated for use with Send Frame */
> +	struct efct_hw_io	*send_frame_io;
> +
> +	/* Stats */
> +	u32			use_count;
> +	u32			wq_pending_count;
> +};
> +
> +struct hw_rq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +
> +	u32			entry_count;
> +	u32			use_count;
> +	u32			hdr_entry_size;
> +	u32			first_burst_entry_size;
> +	u32			data_entry_size;
> +	bool			is_mrq;
> +	u32			base_mrq_id;
> +
> +	struct hw_cq		*cq;
> +
> +	u8			filter_mask;
> +	struct sli4_queue	*hdr;
> +	struct sli4_queue	*first_burst;
> +	struct sli4_queue	*data;
> +
> +	struct efc_hw_rq_buffer	*hdr_buf;
> +	struct efc_hw_rq_buffer	*fb_buf;
> +	struct efc_hw_rq_buffer	*payload_buf;
> +	/* RQ tracker for this RQ */
> +	struct efc_hw_sequence	**rq_tracker;
> +};
> +
> +struct efct_hw_send_frame_context {
> +	struct efct_hw		*hw;
> +	struct hw_wq_callback	*wqcb;
> +	struct efct_hw_wqe	wqe;
> +	void (*callback)(int status, void *arg);
> +	void			*arg;
> +
> +	/* General purpose elements */
> +	struct efc_hw_sequence	*seq;
> +	struct efc_dma		payload;
> +};
> +
> +struct efct_hw_grp_hdr {
> +	u32			size;
> +	__be32			magic_number;
> +	u32			word2;
> +	u8			rev_name[128];
> +	u8			date[12];
> +	u8			revision[32];
> +};
> +
> +#endif /* __EFCT_H__ */
> 
Other than that:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 16/31] elx: efct: Driver initialization routines
  2020-04-12  3:32 ` [PATCH v3 16/31] elx: efct: Driver initialization routines James Smart
@ 2020-04-16  7:11   ` Hannes Reinecke
  2020-04-23  3:09     ` James Smart
  2020-04-16  8:03   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  7:11 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Emulex FC Target driver init, attach and hardware setup routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Removed Queue topology string.
>    Used request_threaded_irq instead of a thread.
>    Use a static fuction to get the model.
>    Reworked efct_device_attach to use if statements and gotos.
>    Changed efct_fw_reset, removed accessing other functions.
>    Converted to use pci_alloc_irq_vectors api.
>    Removed proc interface.
>    Removed efct_hw_get and efct_hw_set functions. Driver implicitly
>      knows adapter configuration.
>    Many more small changes.
> ---
>   drivers/scsi/elx/efct/efct_driver.c |  856 +++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_driver.h |  142 +++++
>   drivers/scsi/elx/efct/efct_hw.c     | 1116 +++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h     |   15 +
>   drivers/scsi/elx/efct/efct_xport.c  |  523 ++++++++++++++++
>   drivers/scsi/elx/efct/efct_xport.h  |  201 +++++++
>   6 files changed, 2853 insertions(+)
>   create mode 100644 drivers/scsi/elx/efct/efct_driver.c
>   create mode 100644 drivers/scsi/elx/efct/efct_driver.h
>   create mode 100644 drivers/scsi/elx/efct/efct_hw.c
>   create mode 100644 drivers/scsi/elx/efct/efct_xport.c
>   create mode 100644 drivers/scsi/elx/efct/efct_xport.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
> new file mode 100644
> index 000000000000..ff488fb774f1
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_driver.c
> @@ -0,0 +1,856 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +
> +#include "efct_els.h"
> +#include "efct_hw.h"
> +#include "efct_unsol.h"
> +#include "efct_scsi.h"
> +
> +struct efct *efct_devices[MAX_EFCT_DEVICES];
> +
> +static int logmask;
> +module_param(logmask, int, 0444);
> +MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
> +
> +static struct libefc_function_template efct_libefc_templ = {
> +	.hw_domain_alloc = efct_hw_domain_alloc,
> +	.hw_domain_attach = efct_hw_domain_attach,
> +	.hw_domain_free = efct_hw_domain_free,
> +	.hw_domain_force_free = efct_hw_domain_force_free,
> +	.domain_hold_frames = efct_domain_hold_frames,
> +	.domain_accept_frames = efct_domain_accept_frames,
> +
> +	.hw_port_alloc = efct_hw_port_alloc,
> +	.hw_port_attach = efct_hw_port_attach,
> +	.hw_port_free = efct_hw_port_free,
> +
> +	.hw_node_alloc = efct_hw_node_alloc,
> +	.hw_node_attach = efct_hw_node_attach,
> +	.hw_node_detach = efct_hw_node_detach,
> +	.hw_node_free_resources = efct_hw_node_free_resources,
> +	.node_purge_pending = efct_node_purge_pending,
> +
> +	.scsi_io_alloc_disable = efct_scsi_io_alloc_disable,
> +	.scsi_io_alloc_enable = efct_scsi_io_alloc_enable,
> +	.scsi_validate_node = efct_scsi_validate_initiator,
> +	.new_domain = efct_scsi_tgt_new_domain,
> +	.del_domain = efct_scsi_tgt_del_domain,
> +	.new_sport = efct_scsi_tgt_new_sport,
> +	.del_sport = efct_scsi_tgt_del_sport,
> +	.scsi_new_node = efct_scsi_new_initiator,
> +	.scsi_del_node = efct_scsi_del_initiator,
> +
> +	.els_send = efct_els_req_send,
> +	.els_send_ct = efct_els_send_ct,
> +	.els_send_resp = efct_els_resp_send,
> +	.bls_send_acc_hdr = efct_bls_send_acc_hdr,
> +	.send_flogi_p2p_acc = efct_send_flogi_p2p_acc,
> +	.send_ct_rsp = efct_send_ct_rsp,
> +	.send_ls_rjt = efct_send_ls_rjt,
> +
> +	.node_io_cleanup = efct_node_io_cleanup,
> +	.node_els_cleanup = efct_node_els_cleanup,
> +	.node_abort_all_els = efct_node_abort_all_els,
> +
> +	.dispatch_fcp_cmd = efct_dispatch_fcp_cmd,
> +	.recv_abts_frame = efct_node_recv_abts_frame,
> +};
> +
> +static int
> +efct_device_init(void)
> +{
> +	int rc;
> +
> +	/* driver-wide init for target-server */
> +	rc = efct_scsi_tgt_driver_init();
> +	if (rc) {
> +		pr_err("efct_scsi_tgt_init failed rc=%d\n",
> +			     rc);
> +		return rc;
> +	}
> +
> +	rc = efct_scsi_reg_fc_transport();
> +	if (rc) {
> +		pr_err("failed to register to FC host\n");
> +		return rc;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_device_shutdown(void)
> +{
> +	efct_scsi_release_fc_transport();
> +
> +	efct_scsi_tgt_driver_exit();
> +}
> +
> +static void *
> +efct_device_alloc(u32 nid)
> +{
> +	struct efct *efct = NULL;
> +	u32 i;
> +
> +	efct = kmalloc_node(sizeof(*efct), GFP_ATOMIC, nid);
> +

kzalloc_node() ?
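i.e. roughly:

	efct = kzalloc_node(sizeof(*efct), GFP_ATOMIC, nid);
	if (!efct)
		return NULL;

which also lets you drop the memset() below.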

> +	if (efct) {
> +		memset(efct, 0, sizeof(*efct));
> +		for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
> +			if (!efct_devices[i]) {
> +				efct->instance_index = i;
> +				efct_devices[i] = efct;
> +				break;
> +			}
> +		}
> +
> +		if (i == ARRAY_SIZE(efct_devices)) {
> +			pr_err("Exceeded max supported devices.\n");
> +			kfree(efct);
> +			efct = NULL;
> +		} else {
> +			efct->attached = false;
> +		}

Errm. This is wrong.
When we exit the for() loop above, i _will_ equal the array size.
Surely you mean

if (i < ARRAY_SIZE())

right?
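
Purely for reference, a sketch of the inverted form being suggested (the
break in the loop above leaves i at the claimed slot index):

	if (i < ARRAY_SIZE(efct_devices)) {
		efct->attached = false;
	} else {
		pr_err("Exceeded max supported devices.\n");
		kfree(efct);
		efct = NULL;
	}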

> +	}
> +	return efct;
> +}
> +
> +static void
> +efct_teardown_msix(struct efct *efct)
> +{
> +	u32 i;
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		free_irq(pci_irq_vector(efct->pcidev, i),
> +			 &efct->intr_context[i]);
> +	}
> +
> +	pci_free_irq_vectors(efct->pcidev);
> +}
> +
> +static int
> +efct_efclib_config(struct efct *efct, struct libefc_function_template *tt)
> +{
> +	struct efc *efc;
> +	struct sli4 *sli;
> +	int rc = EFC_SUCCESS;
> +
> +	efc = kmalloc(sizeof(*efc), GFP_KERNEL);
> +	if (!efc)
> +		return EFC_FAIL;
> +
> +	memset(efc, 0, sizeof(struct efc));
> +	efct->efcport = efc;
> +
> +	memcpy(&efc->tt, tt, sizeof(*tt));
> +	efc->base = efct;
> +	efc->pcidev = efct->pcidev;
> +
> +	efc->def_wwnn = efct_get_wwnn(&efct->hw);
> +	efc->def_wwpn = efct_get_wwpn(&efct->hw);
> +	efc->enable_tgt = 1;
> +	efc->log_level = EFC_LOG_LIB;
> +
> +	sli = &efct->hw.sli;
> +	efc->max_xfer_size = sli->sge_supported_length *
> +			     sli_get_max_sgl(&efct->hw.sli);
> +
> +	rc = efcport_init(efc);
> +	if (rc)
> +		efc_log_err(efc, "efcport_init failed\n");
> +
> +	return rc;
> +}
> +
> +static int efct_request_firmware_update(struct efct *efct);
> +
> +static const char*
> +efct_pci_model(u16 device)
> +{
> +	switch (device) {
> +	case EFCT_DEVICE_LANCER_G6:	return "LPE31004";
> +	case EFCT_DEVICE_LANCER_G7:	return "LPE36000";
> +	default:			return "unknown";
> +	}
> +}
> +
> +static int
> +efct_device_attach(struct efct *efct)
> +{
> +	u32 rc = 0, i = 0;
> +
> +	if (efct->attached) {
> +		efc_log_err(efct, "Device is already attached\n");
> +		return EFC_FAIL;
> +	}
> +
> +	snprintf(efct->name, sizeof(efct->name), "[%s%d] ", "fc",
> +		 efct->instance_index);
> +
> +	efct->logmask = logmask;
> +	efct->filter_def = "0,0,0,0";
> +	efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
> +
> +	efct->model = efct_pci_model(efct->pcidev->device);
> +
> +	efct->efct_req_fw_upgrade = true;
> +
> +	/* Allocate transport object and bring online */
> +	efct->xport = efct_xport_alloc(efct);
> +	if (!efct->xport) {
> +		efc_log_err(efct, "failed to allocate transport object\n");
> +		rc = -ENOMEM;
> +		goto out;
> +	}
> +
> +	rc = efct_xport_attach(efct->xport);
> +	if (rc) {
> +		efc_log_err(efct, "failed to attach transport object\n");
> +		goto xport_out;
> +	}
> +
> +	rc = efct_xport_initialize(efct->xport);
> +	if (rc) {
> +		efc_log_err(efct, "failed to initialize transport object\n");
> +		goto xport_out;
> +	}
> +
> +	rc = efct_efclib_config(efct, &efct_libefc_templ);
> +	if (rc) {
> +		efc_log_err(efct, "failed to init efclib\n");
> +		goto efclib_out;
> +	}
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		efc_log_debug(efct, "irq %d enabled\n", i);
> +		enable_irq(pci_irq_vector(efct->pcidev, i));
> +	}
> +
> +	efct->attached = true;
> +
> +	if (efct->efct_req_fw_upgrade)
> +		efct_request_firmware_update(efct);
> +
> +	return rc;
> +
> +efclib_out:
> +	efct_xport_detach(efct->xport);
> +xport_out:
> +	if (efct->xport) {
> +		efct_xport_free(efct->xport);
> +		efct->xport = NULL;
> +	}
> +out:
> +	return rc;
> +}
> +
> +static int
> +efct_device_detach(struct efct *efct)
> +{
> +	int rc = 0, i;
> +
> +	if (efct) {
> +		if (!efct->attached) {
> +			efc_log_warn(efct, "Device is not attached\n");
> +			return EFC_FAIL;
> +		}
> +
> +		rc = efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN);
> +		if (rc)
> +			efc_log_err(efct, "Transport Shutdown timed out\n");
> +
> +		for (i = 0; i < efct->n_msix_vec; i++)
> +			disable_irq(pci_irq_vector(efct->pcidev, i));
> +
> +		if (efct_xport_detach(efct->xport) != 0)
> +			efc_log_err(efct, "Transport detach failed\n");
> +
> +		efct_xport_free(efct->xport);
> +		efct->xport = NULL;
> +
> +		efcport_destroy(efct->efcport);
> +		kfree(efct->efcport);
> +
> +		efct->attached = false;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_fw_write_cb(int status, u32 actual_write_length,
> +		 u32 change_status, void *arg)
> +{
> +	struct efct_fw_write_result *result = arg;
> +
> +	result->status = status;
> +	result->actual_xfer = actual_write_length;
> +	result->change_status = change_status;
> +
> +	complete(&result->done);
> +}
> +
> +static int
> +efct_firmware_write(struct efct *efct, const u8 *buf, size_t buf_len,
> +		    u8 *change_status)
> +{
> +	int rc = 0;
> +	u32 bytes_left;
> +	u32 xfer_size;
> +	u32 offset;
> +	struct efc_dma dma;
> +	int last = 0;
> +	struct efct_fw_write_result result;
> +
> +	init_completion(&result.done);
> +
> +	bytes_left = buf_len;
> +	offset = 0;
> +
> +	dma.size = FW_WRITE_BUFSIZE;
> +	dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +				      dma.size, &dma.phys, GFP_DMA);
> +	if (!dma.virt)
> +		return -ENOMEM;
> +
> +	while (bytes_left > 0) {
> +		if (bytes_left > FW_WRITE_BUFSIZE)
> +			xfer_size = FW_WRITE_BUFSIZE;
> +		else
> +			xfer_size = bytes_left;
> +
> +		memcpy(dma.virt, buf + offset, xfer_size);
> +
> +		if (bytes_left == xfer_size)
> +			last = 1;
> +
> +		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
> +				       last, efct_fw_write_cb, &result);
> +
> +		if (wait_for_completion_interruptible(&result.done) != 0) {
> +			rc = -ENXIO;
> +			break;
> +		}
> +
> +		if (result.actual_xfer == 0 || result.status != 0) {
> +			rc = -EFAULT;
> +			break;
> +		}
> +
> +		if (last)
> +			*change_status = result.change_status;
> +
> +		bytes_left -= result.actual_xfer;
> +		offset += result.actual_xfer;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev, dma.size, dma.virt, dma.phys);
> +	return rc;
> +}
> +
> +/*
> + * Firmware reset to activate the new firmware.
> + * Function 0 will update and load the new firmware
> + * during attach.
> + */
> +static int
> +efct_fw_reset(struct efct *efct)
> +{
> +	int rc = 0;
> +
> +	if (timer_pending(&efct->xport->stats_timer))
> +		del_timer(&efct->xport->stats_timer);
> +
> +	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
> +		efc_log_info(efct, "failed to reset firmware\n");
> +		rc = -1;
> +	} else {
> +		efc_log_info(efct,
> +			"successfully reset firmware. Now resetting port\n");
> +
> +		efct_device_detach(efct);
> +		rc = efct_device_attach(efct);
> +	}
> +	return rc;
> +}
> +
> +static int
> +efct_request_firmware_update(struct efct *efct)
> +{
> +	int rc = 0;
> +	u8 file_name[256], fw_change_status = 0;
> +	const struct firmware *fw;
> +	struct efct_hw_grp_hdr *fw_image;
> +
> +	snprintf(file_name, 256, "%s.grp", efct->model);
> +
> +	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
> +	if (rc) {
> +		efc_log_debug(efct, "Firmware file(%s) not found.\n",
> +				file_name);
> +		return rc;
> +	}
> +
> +	fw_image = (struct efct_hw_grp_hdr *)fw->data;
> +
> +	if (!strncmp(efct->hw.sli.fw_name[0], fw_image->revision,
> +		     strnlen(fw_image->revision, 16))) {
> +		efc_log_debug(efct,
> +			"Skipped update. Firmware is already up to date.\n");
> +		goto exit;
> +	}
> +
> +	efc_log_info(efct, "Firmware update is initiated. %s -> %s\n",
> +		     efct->hw.sli.fw_name[0], fw_image->revision);
> +
> +	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
> +	if (rc) {
> +		efc_log_err(efct,
> +			     "Firmware update failed. Return code = %d\n", rc);
> +		goto exit;
> +	}
> +
> +	efc_log_info(efct, "Firmware updated successfully\n");
> +	switch (fw_change_status) {
> +	case 0x00:
> +		efc_log_info(efct, "New firmware is active.\n");
> +		break;
> +	case 0x01:
> +		efc_log_info(efct,
> +			"System reboot needed to activate the new firmware\n");
> +		break;
> +	case 0x02:
> +	case 0x03:
> +		efc_log_info(efct,
> +			"firmware is resetting to activate the new firmware\n");
> +		efct_fw_reset(efct);
> +		break;
> +	default:
> +		efc_log_info(efct,
> +			"Unexpected value change_status:%d\n", fw_change_status);
> +		break;
> +	}
> +
> +exit:
> +	release_firmware(fw);
> +
> +	return rc;
> +}
> +
> +static void
> +efct_device_free(struct efct *efct)
> +{
> +	if (efct) {
> +		efct_devices[efct->instance_index] = NULL;
> +
> +		kfree(efct);
> +	}
> +}
> +
> +static int
> +efct_device_interrupts_required(struct efct *efct)
> +{
> +	if (efct_hw_setup(&efct->hw, efct, efct->pcidev)
> +				!= EFCT_HW_RTN_SUCCESS) {
> +		return -1;
> +	}
> +
> +	return efct->hw.config.n_eq;
> +}
> +
> +static irqreturn_t
> +efct_intr_thread(int irq, void *handle)
> +{
> +	struct efct_intr_context *intr_ctx = handle;
> +	struct efct *efct = intr_ctx->efct;
> +
> +	efct_hw_process(&efct->hw, intr_ctx->index, efct->max_isr_time_msec);
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t
> +efct_intr_msix(int irq, void *handle)
> +{
> +	return IRQ_WAKE_THREAD;
> +}
> +
> +static int
> +efct_setup_msix(struct efct *efct, u32 num_intrs)
> +{
> +	int	rc = 0, i;
> +

Drop the tab between 'int' and 'rc'.
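
i.e. just:

	int rc = 0, i;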

> +	if (!pci_find_capability(efct->pcidev, PCI_CAP_ID_MSIX)) {
> +		dev_err(&efct->pcidev->dev,
> +			"%s : MSI-X not available\n", __func__);
> +		return -EINVAL;
> +	}
> +
> +	efct->n_msix_vec = num_intrs;
> +
> +	rc = pci_alloc_irq_vectors(efct->pcidev, num_intrs, num_intrs,
> +				   PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
> +
> +	if (rc < 0) {
> +		dev_err(&efct->pcidev->dev, "Failed to alloc irq : %d\n", rc);
> +		return rc;
> +	}
> +
> +	for (i = 0; i < num_intrs; i++) {
> +		struct efct_intr_context *intr_ctx = NULL;
> +
> +		intr_ctx = &efct->intr_context[i];
> +		intr_ctx->efct = efct;
> +		intr_ctx->index = i;
> +
> +		rc = request_threaded_irq(pci_irq_vector(efct->pcidev, i),
> +					  efct_intr_msix, efct_intr_thread, 0,
> +					  EFCT_DRIVER_NAME, intr_ctx);
> +		if (rc) {
> +			dev_err(&efct->pcidev->dev,
> +				"Failed to register %d vector: %d\n", i, rc);
> +			goto out;
> +		}
> +	}
> +
> +	return rc;
> +
> +out:
> +	while (--i >= 0)
> +		free_irq(pci_irq_vector(efct->pcidev, i),
> +			 &efct->intr_context[i]);
> +
> +	pci_free_irq_vectors(efct->pcidev);
> +	return rc;
> +}
> +
> +static struct pci_device_id efct_pci_table[] = {
> +	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G6), 0},
> +	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G7), 0},
> +	{}	/* terminate list */
> +};
> +
> +static int
> +efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> +{
> +	struct efct *efct = NULL;
> +	int rc;
> +	u32 i, r;
> +	int num_interrupts = 0;
> +	int nid;
> +
> +	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
> +
> +	rc = pci_enable_device_mem(pdev);
> +	if (rc)
> +		return rc;
> +
> +	pci_set_master(pdev);
> +
> +	rc = pci_set_mwi(pdev);
> +	if (rc) {
> +		dev_info(&pdev->dev,
> +			 "pci_set_mwi returned %d\n", rc);
> +		goto mwi_out;
> +	}
> +
> +	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
> +	if (rc) {
> +		dev_err(&pdev->dev, "pci_request_regions failed %d\n", rc);
> +		goto req_regions_out;
> +	}
> +
> +	/* Fetch the Numa node id for this device */
> +	nid = dev_to_node(&pdev->dev);
> +	if (nid < 0) {
> +		dev_err(&pdev->dev,
> +			"Warning Numa node ID is %d\n", nid);
> +		nid = 0;
> +	}
> +
> +	/* Allocate efct */
> +	efct = efct_device_alloc(nid);
> +	if (!efct) {
> +		dev_err(&pdev->dev, "Failed to allocate efct\n");
> +		rc = -ENOMEM;
> +		goto alloc_out;
> +	}
> +
> +	efct->pcidev = pdev;
> +
> +	efct->numa_node = nid;
> +
> +	/* Map all memory BARs */
> +	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
> +			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
> +						  pci_resource_len(pdev, i));
> +			r++;
> +		}
> +
> +		/*
> +		 * If the 64-bit attribute is set, both this BAR and the
> +		 * next form the complete address. Skip processing the
> +		 * next BAR.
> +		 */
> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
> +			i++;
> +	}
> +
> +	pci_set_drvdata(pdev, efct);
> +
> +	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 ||
> +	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
> +		dev_warn(&pdev->dev,
> +			 "trying DMA_BIT_MASK(32)\n");
> +		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 ||
> +		    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
> +			dev_err(&pdev->dev,
> +				"setting DMA_BIT_MASK failed\n");
> +			rc = -1;
> +			goto dma_mask_out;
> +		}
> +	}
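
As an aside, the same 64-bit-with-32-bit-fallback could also be expressed
with the combined helper, roughly:

	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (rc)
		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
	if (rc) {
		dev_err(&pdev->dev, "setting DMA mask failed\n");
		goto dma_mask_out;
	}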
> +
> +	num_interrupts = efct_device_interrupts_required(efct);
> +	if (num_interrupts < 0) {
> +		efc_log_err(efct, "efct_device_interrupts_required failed\n");
> +		rc = -1;
> +		goto dma_mask_out;
> +	}
> +
> +	/*
> +	 * Initialize MSIX interrupts, note,
> +	 * efct_setup_msix() enables the interrupt
> +	 */
> +	rc = efct_setup_msix(efct, num_interrupts);
> +	if (rc) {
> +		dev_err(&pdev->dev, "Can't setup msix\n");
> +		goto dma_mask_out;
> +	}
> +	/* Disable interrupt for now */
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		efc_log_debug(efct, "irq %d disabled\n", i);
> +		disable_irq(pci_irq_vector(efct->pcidev, i));
> +	}
> +
> +	rc = efct_device_attach((struct efct *)efct);
> +	if (rc)
> +		goto attach_out;
> +
> +	return EFC_SUCCESS;
> +
> +attach_out:
> +	efct_teardown_msix(efct);
> +dma_mask_out:
> +	pci_set_drvdata(pdev, NULL);
> +
> +	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (efct->reg[i])
> +			iounmap(efct->reg[i]);
> +	}
> +	efct_device_free(efct);
> +alloc_out:
> +	pci_release_regions(pdev);
> +req_regions_out:
> +	pci_clear_mwi(pdev);
> +mwi_out:
> +	pci_disable_device(pdev);
> +	return rc;
> +}
> +
> +static void
> +efct_pci_remove(struct pci_dev *pdev)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +	u32	i;
> +
> +	if (!efct)
> +		return;
> +
> +	efct_device_detach(efct);
> +
> +	efct_teardown_msix(efct);
> +
> +	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (efct->reg[i])
> +			iounmap(efct->reg[i]);
> +	}
> +
> +	pci_set_drvdata(pdev, NULL);
> +
> +	efct_devices[efct->instance_index] = NULL;
> +
> +	efct_device_free(efct);
> +
> +	pci_release_regions(pdev);
> +
> +	pci_disable_device(pdev);
> +}
> +
> +static void
> +efct_device_prep_for_reset(struct efct *efct, struct pci_dev *pdev)
> +{
> +	if (efct) {
> +		efc_log_debug(efct,
> +			       "PCI channel disable preparing for reset\n");
> +		efct_device_detach(efct);
> +		/* Disable interrupt and pci device */
> +		efct_teardown_msix(efct);
> +	}
> +	pci_disable_device(pdev);
> +}
> +
> +static void
> +efct_device_prep_for_recover(struct efct *efct)
> +{
> +	if (efct) {
> +		efc_log_debug(efct, "PCI channel preparing for recovery\n");
> +		efct_hw_io_abort_all(&efct->hw);
> +	}
> +}
> +
> +/**
> + * efct_pci_io_error_detected - method for handling PCI I/O error
> + * @pdev: pointer to PCI device.
> + * @state: the current PCI connection state.
> + *
> + * This routine is registered to the PCI subsystem for error handling. This
> + * function is called by the PCI subsystem after a PCI bus error affecting
> + * this device has been detected. When this routine is invoked, it dispatches
> + * device error detected handling routine, which will perform the proper
> + * error detected operation.
> + *
> + * Return codes
> + * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
> + * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
> + */
> +static pci_ers_result_t
> +efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +	pci_ers_result_t rc;
> +
> +	switch (state) {
> +	case pci_channel_io_normal:
> +		efct_device_prep_for_recover(efct);
> +		rc = PCI_ERS_RESULT_CAN_RECOVER;
> +		break;
> +	case pci_channel_io_frozen:
> +		efct_device_prep_for_reset(efct, pdev);
> +		rc = PCI_ERS_RESULT_NEED_RESET;
> +		break;
> +	case pci_channel_io_perm_failure:
> +		efct_device_detach(efct);
> +		rc = PCI_ERS_RESULT_DISCONNECT;
> +		break;
> +	default:
> +		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
> +			       state);
> +		efct_device_prep_for_reset(efct, pdev);
> +		rc = PCI_ERS_RESULT_NEED_RESET;
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +static pci_ers_result_t
> +efct_pci_io_slot_reset(struct pci_dev *pdev)
> +{
> +	int rc;
> +	struct efct *efct = pci_get_drvdata(pdev);
> +
> +	rc = pci_enable_device_mem(pdev);
> +	if (rc) {
> +		efc_log_err(efct,
> +			     "failed to re-enable PCI device after reset.\n");
> +		return PCI_ERS_RESULT_DISCONNECT;
> +	}
> +
> +	/*
> +	 * The current pci_restore_state() behavior clears the device's
> +	 * saved_state flag, so the restored state needs to be saved again.
> +	 */
> +
> +	pci_save_state(pdev);
> +
> +	pci_set_master(pdev);
> +
> +	rc = efct_setup_msix(efct, efct->n_msix_vec);
> +	if (rc)
> +		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
> +			    rc);
> +
> +	/* Perform device reset */
> +	efct_device_detach(efct);
> +	/* Bring device back online */
> +	efct_device_attach(efct);
> +
> +	return PCI_ERS_RESULT_RECOVERED;
> +}
> +
> +static void
> +efct_pci_io_resume(struct pci_dev *pdev)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +
> +	/* Perform device reset */
> +	efct_device_detach(efct);
> +	/* Bring device back online */
> +	efct_device_attach(efct);
> +}
> +
> +MODULE_DEVICE_TABLE(pci, efct_pci_table);
> +
> +static struct pci_error_handlers efct_pci_err_handler = {
> +	.error_detected = efct_pci_io_error_detected,
> +	.slot_reset = efct_pci_io_slot_reset,
> +	.resume = efct_pci_io_resume,
> +};
> +
> +static struct pci_driver efct_pci_driver = {
> +	.name		= EFCT_DRIVER_NAME,
> +	.id_table	= efct_pci_table,
> +	.probe		= efct_pci_probe,
> +	.remove		= efct_pci_remove,
> +	.err_handler	= &efct_pci_err_handler,
> +};
> +
> +static
> +int __init efct_init(void)
> +{
> +	int	rc = -ENODEV;
> +
> +	rc = efct_device_init();
> +	if (rc) {
> +		pr_err("efct_device_init failed rc=%d\n", rc);
> +		return -ENOMEM;
> +	}
> +
> +	rc = pci_register_driver(&efct_pci_driver);
> +	if (rc)
> +		goto l1;
> +
> +	return rc;
> +
> +l1:
> +	efct_device_shutdown();
> +	return rc;
> +}
> +
> +static void __exit efct_exit(void)
> +{
> +	pci_unregister_driver(&efct_pci_driver);
> +	efct_device_shutdown();
> +}
> +
> +module_init(efct_init);
> +module_exit(efct_exit);
> +MODULE_VERSION(EFCT_DRIVER_VERSION);
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Broadcom");
> diff --git a/drivers/scsi/elx/efct/efct_driver.h b/drivers/scsi/elx/efct/efct_driver.h
> new file mode 100644
> index 000000000000..07ca0b182d90
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_driver.h
> @@ -0,0 +1,142 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_DRIVER_H__)
> +#define __EFCT_DRIVER_H__
> +
> +/***************************************************************************
> + * OS specific includes
> + */
> +#include <stdarg.h>
> +#include <linux/version.h>
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/interrupt.h>
> +#include <asm-generic/ioctl.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/pci.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/bitmap.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <asm/byteorder.h>
> +#include <linux/timer.h>
> +#include <linux/delay.h>
> +#include <linux/fs.h>
> +#include <linux/uaccess.h>
> +#include <linux/sched.h>
> +#include <asm/current.h>
> +#include <asm/cacheflush.h>
> +#include <linux/pagemap.h>
> +#include <linux/kthread.h>
> +#include <linux/proc_fs.h>
> +#include <linux/seq_file.h>
> +#include <linux/random.h>
> +#include <linux/sched.h>
> +#include <linux/jiffies.h>
> +#include <linux/ctype.h>
> +#include <linux/debugfs.h>
> +#include <linux/firmware.h>
> +#include <linux/sched/signal.h>
> +#include "../include/efc_common.h"
> +
> +#define EFCT_DRIVER_NAME			"efct"
> +#define EFCT_DRIVER_VERSION			"1.0.0.0"
> +
> +/* EFCT_OS_MAX_ISR_TIME_MSEC -
> + * maximum time driver code should spend in an interrupt
> + * or kernel thread context without yielding
> + */
> +#define EFCT_OS_MAX_ISR_TIME_MSEC		1000
> +
> +#define EFCT_FC_MAX_SGL				64
> +#define EFCT_FC_DIF_SEED			0
> +
> +/* Timeouts */
> +#define EFCT_FC_ELS_SEND_DEFAULT_TIMEOUT	0
> +#define EFCT_FC_ELS_DEFAULT_RETRIES		3
> +#define EFCT_FC_FLOGI_TIMEOUT_SEC		5
> +#define EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC    30000000 /* 30 seconds */
> +
> +/* Watermark */
> +#define EFCT_WATERMARK_HIGH_PCT			90
> +#define EFCT_WATERMARK_LOW_PCT			80
> +#define EFCT_IO_WATERMARK_PER_INITIATOR		8
> +
> +#include "../libefc/efclib.h"
> +#include "efct_hw.h"
> +#include "efct_io.h"
> +#include "efct_xport.h"
> +
> +#define EFCT_PCI_MAX_REGS			6
> +#define MAX_PCI_INTERRUPTS			16
> +
> +struct efct_intr_context {
> +	struct efct		*efct;
> +	u32			index;
> +};
> +
> +struct efct {
> +	struct pci_dev			*pcidev;
> +	void __iomem			*reg[EFCT_PCI_MAX_REGS];
> +
> +	u32				n_msix_vec;
> +	struct efct_intr_context	intr_context[MAX_PCI_INTERRUPTS];
> +	u32				numa_node;
> +
> +	char				name[EFC_NAME_LENGTH];
> +	bool				attached;
> +	struct efct_scsi_tgt		tgt_efct;
> +	struct efct_xport		*xport;
> +	struct efc			*efcport;
> +	struct Scsi_Host		*shost;
> +	int				logmask;
> +	u32				max_isr_time_msec;
> +
> +	const char			*desc;
> +	u32				instance_index;
> +
> +	const char			*model;
> +
> +	struct efct_hw			hw;
> +
> +	u32				num_vports;
> +	u32				rq_selection_policy;
> +	char				*filter_def;
> +
> +	bool				soft_wwn_enable;
> +
> +	/*
> +	 * Target IO timer value:
> +	 * Zero: target command timeout disabled.
> +	 * Non-zero: Timeout value, in seconds, for target commands
> +	 */
> +	u32				target_io_timer_sec;
> +
> +	int				speed;
> +	int				topology;
> +
> +	u8				efct_req_fw_upgrade;
> +	u16				sw_feature_cap;
> +	struct dentry			*sess_debugfs_dir;
> +};
> +
> +#define FW_WRITE_BUFSIZE		(64 * 1024)
> +
> +struct efct_fw_write_result {
> +	struct completion done;
> +	int status;
> +	u32 actual_xfer;
> +	u32 change_status;
> +};
> +
> +#define MAX_EFCT_DEVICES		64
> +extern struct efct			*efct_devices[MAX_EFCT_DEVICES];
> +
> +#endif /* __EFCT_DRIVER_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> new file mode 100644
> index 000000000000..21fcaf7b3d2b
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -0,0 +1,1116 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_hw.h"
> +
> +static enum efct_hw_rtn
> +efct_hw_link_event_init(struct efct_hw *hw)
> +{
> +	hw->link.status = SLI_LINK_STATUS_MAX;
> +	hw->link.topology = SLI_LINK_TOPO_NONE;
> +	hw->link.medium = SLI_LINK_MEDIUM_MAX;
> +	hw->link.speed = 0;
> +	hw->link.loop_map = NULL;
> +	hw->link.fc_id = U32_MAX;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_read_max_dump_size(struct efct_hw *hw)
> +{
> +	u8	buf[SLI4_BMBX_SIZE];
> +	struct efct *efct = hw->os;
> +	int	rc = EFCT_HW_RTN_SUCCESS;
> +	struct sli4_rsp_cmn_set_dump_location *rsp;
> +
> +	/* attempt to determine the dump size for function 0 only. */
> +	if (PCI_FUNC(efct->pcidev->devfn) != 0)
> +		return rc;
> +
> +	rc = sli_cmd_common_set_dump_location(&hw->sli, buf, SLI4_BMBX_SIZE, 1,
> +					     0,	NULL, 0);
> +	if (rc)
> +		return rc;
> +
> +	rsp = (struct sli4_rsp_cmn_set_dump_location *)
> +	      (buf + offsetof(struct sli4_cmd_sli_config, payload.embed));
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "set dump location cmd failed\n");
> +		return rc;
> +	}
> +	hw->dump_size =	(le32_to_cpu(rsp->buffer_length_dword) &
> +			 SLI4_CMN_SET_DUMP_BUFFER_LEN);
> +	efc_log_debug(hw->os, "Dump size %x\n",	hw->dump_size);
> +
> +	return rc;
> +}
> +
> +static int
> +__efct_read_topology_cb(struct efct_hw *hw, int status,
> +			u8 *mqe, void *arg)
> +{
> +	struct sli4_cmd_read_topology *read_topo =
> +				(struct sli4_cmd_read_topology *)mqe;
> +	u8 speed;
> +	struct efc_domain_record drec = {0};
> +	struct efct *efct = hw->os;
> +
> +	if (status || le16_to_cpu(read_topo->hdr.status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
> +			       status,
> +			       le16_to_cpu(read_topo->hdr.status));
> +		kfree(mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	switch (le32_to_cpu(read_topo->dw2_attentype) &
> +		SLI4_READTOPO_ATTEN_TYPE) {
> +	case SLI4_READ_TOPOLOGY_LINK_UP:
> +		hw->link.status = SLI_LINK_STATUS_UP;
> +		break;
> +	case SLI4_READ_TOPOLOGY_LINK_DOWN:
> +		hw->link.status = SLI_LINK_STATUS_DOWN;
> +		break;
> +	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
> +		hw->link.status = SLI_LINK_STATUS_NO_ALPA;
> +		break;
> +	default:
> +		hw->link.status = SLI_LINK_STATUS_MAX;
> +		break;
> +	}
> +
> +	switch (read_topo->topology) {
> +	case SLI4_READ_TOPOLOGY_NPORT:
> +		hw->link.topology = SLI_LINK_TOPO_NPORT;
> +		break;
> +	case SLI4_READ_TOPOLOGY_FC_AL:
> +		hw->link.topology = SLI_LINK_TOPO_LOOP;
> +		if (hw->link.status == SLI_LINK_STATUS_UP)
> +			hw->link.loop_map = hw->loop_map.virt;
> +		hw->link.fc_id = read_topo->acquired_al_pa;
> +		break;
> +	default:
> +		hw->link.topology = SLI_LINK_TOPO_MAX;
> +		break;
> +	}
> +
> +	hw->link.medium = SLI_LINK_MEDIUM_FC;
> +
> +	speed = (le32_to_cpu(read_topo->currlink_state) &
> +		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
> +	switch (speed) {
> +	case SLI4_READ_TOPOLOGY_SPEED_1G:
> +		hw->link.speed =  1 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_2G:
> +		hw->link.speed =  2 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_4G:
> +		hw->link.speed =  4 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_8G:
> +		hw->link.speed =  8 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_16G:
> +		hw->link.speed = 16 * 1000;
> +		hw->link.loop_map = NULL;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_32G:
> +		hw->link.speed = 32 * 1000;
> +		hw->link.loop_map = NULL;
> +		break;
> +	}
> +
> +	kfree(mqe);
> +
> +	drec.speed = hw->link.speed;
> +	drec.fc_id = hw->link.fc_id;
> +	drec.is_nport = true;
> +	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Callback function for the SLI link events */
> +static int
> +efct_hw_cb_link(void *ctx, void *e)
> +{
> +	struct efct_hw	*hw = ctx;
> +	struct sli4_link_event *event = e;
> +	struct efc_domain	*d = NULL;
> +	int		rc = EFCT_HW_RTN_ERROR;
> +	struct efct	*efct = hw->os;
> +	struct efc_dma *dma;
> +
> +	efct_hw_link_event_init(hw);
> +
> +	switch (event->status) {
> +	case SLI_LINK_STATUS_UP:
> +
> +		hw->link = *event;
> +		efct->efcport->link_status = EFC_LINK_STATUS_UP;
> +
> +		if (event->topology == SLI_LINK_TOPO_NPORT) {
> +			struct efc_domain_record drec = {0};
> +
> +			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
> +				      event->speed);
> +			drec.speed = event->speed;
> +			drec.fc_id = event->fc_id;
> +			drec.is_nport = true;
> +			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
> +				      &drec);
> +		} else if (event->topology == SLI_LINK_TOPO_LOOP) {
> +			u8	*buf = NULL;
> +
> +			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
> +				      event->speed);
> +			dma = &hw->loop_map;
> +			dma->size = SLI4_MIN_LOOP_MAP_BYTES;
> +			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       dma->size, &dma->phys,
> +						       GFP_DMA);
> +			if (!dma->virt)
> +				efc_log_err(hw->os, "efct_dma_alloc_fail\n");
> +
> +			buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +			if (!buf)
> +				break;
> +
> +			if (!sli_cmd_read_topology(&hw->sli, buf,
> +						  SLI4_BMBX_SIZE,
> +						       &hw->loop_map)) {
> +				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +						     __efct_read_topology_cb,
> +						     NULL);
> +			}
> +

Not sure if this is a good idea; we'll have to allocate extra memory
whenever the loop topology changes. That typically happens after a failure
somewhere, and chances are it will affect our root fs, making memory
allocation exactly the thing we need to be careful about.
Can't we pre-allocate a buffer somewhere in the global initialisation so
that we don't have to allocate it every time?
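
Something along these lines, as a rough sketch only (hanging it off
efct_hw_setup() is an assumption, any one-time init path that has the
pci_dev would do), so the link-up handler only has to issue READ_TOPOLOGY:

	/* hypothetical one-time allocation of the loop map buffer */
	hw->loop_map.size = SLI4_MIN_LOOP_MAP_BYTES;
	hw->loop_map.virt = dma_alloc_coherent(&pdev->dev, hw->loop_map.size,
					       &hw->loop_map.phys, GFP_KERNEL);
	if (!hw->loop_map.virt)
		return EFCT_HW_RTN_NO_MEMORY;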

> +			if (rc != EFCT_HW_RTN_SUCCESS) {
> +				efc_log_test(hw->os, "READ_TOPOLOGY failed\n");
> +				kfree(buf);
> +			}
> +		} else {
> +			efc_log_info(hw->os, "%s(%#x), speed is %d\n",
> +				      "Link Up, unsupported topology ",
> +				     event->topology, event->speed);
> +		}
> +		break;
> +	case SLI_LINK_STATUS_DOWN:
> +		efc_log_info(hw->os, "Link down\n");
> +
> +		hw->link.status = event->status;
> +		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
> +
> +		d = hw->domain;
> +		if (d)
> +			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
> +		break;
> +	default:
> +		efc_log_test(hw->os, "unhandled link status %#x\n",
> +			      event->status);
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
> +{
> +	u32 i, max_sgl;
> +
> +	if (hw->hw_setup_called)
> +		return EFCT_HW_RTN_SUCCESS;
> +
> +	/*
> +	 * efct_hw_init() relies on NULL pointers indicating that a structure
> +	 * needs allocation. If a structure is non-NULL, efct_hw_init() won't
> +	 * free/realloc that memory
> +	 */
> +	memset(hw, 0, sizeof(struct efct_hw));
> +
> +	hw->hw_setup_called = true;
> +
> +	hw->os = os;
> +
> +	spin_lock_init(&hw->cmd_lock);
> +	INIT_LIST_HEAD(&hw->cmd_head);
> +	INIT_LIST_HEAD(&hw->cmd_pending);
> +	hw->cmd_head_count = 0;
> +
> +	spin_lock_init(&hw->io_lock);
> +	spin_lock_init(&hw->io_abort_lock);
> +
> +	atomic_set(&hw->io_alloc_failed_count, 0);
> +
> +	hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
> +	if (sli_setup(&hw->sli, hw->os, pdev, ((struct efct *)os)->reg)) {
> +		efc_log_err(hw->os, "SLI setup failed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_link_event_init(hw);
> +
> +	sli_callback(&hw->sli, SLI4_CB_LINK, efct_hw_cb_link, hw);
> +
> +	/*
> +	 * Set all the queue sizes to the maximum allowed.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(hw->num_qentries); i++)
> +		hw->num_qentries[i] = hw->sli.qinfo.max_qentries[i];
> +	/*
> +	 * Adjust the size of the WQs so that the CQ is twice as
> +	 * big as the WQ to allow for 2 completions per IO. This allows us to
> +	 * handle multi-phase as well as aborts.
> +	 */
> +	hw->num_qentries[SLI_QTYPE_WQ] = hw->num_qentries[SLI_QTYPE_CQ] / 2;
> +
> +	/*
> +	 * The RQ assignment for RQ pair mode.
> +	 */
> +	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
> +	hw->config.n_io = hw->sli.extent[SLI_RSRC_XRI].size;
> +	hw->config.n_eq = EFCT_HW_DEF_NUM_EQ;
> +
> +	max_sgl = sli_get_max_sgl(&hw->sli) - SLI4_SGE_MAX_RESERVED;
> +	max_sgl = (max_sgl > EFCT_FC_MAX_SGL) ? EFCT_FC_MAX_SGL : max_sgl;
> +	hw->config.n_sgl = max_sgl;
> +
> +	(void)efct_hw_read_max_dump_size(hw);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static void
> +efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
> +{
> +	efc_log_info(hw->os,
> +		      "REG_FCFI: filter[%d] %08X -> RQ[%d] id=%d\n",
> +		     j, hw->config.filter_def[j], i, id);
> +}
> +
> +static inline void
> +efct_hw_init_free_io(struct efct_hw_io *io)
> +{
> +	/*
> +	 * Set io->done to NULL, to avoid any callbacks, should
> +	 * a completion be received for one of these IOs
> +	 */
> +	io->done = NULL;
> +	io->abort_done = NULL;
> +	io->status_saved = false;
> +	io->abort_in_progress = false;
> +	io->rnode = NULL;
> +	io->type = 0xFFFF;
> +	io->wq = NULL;
> +	io->ul_io = NULL;
> +	io->tgt_wqe_timeout = 0;
> +}
> +
> +static u8 efct_hw_iotype_is_originator(u16 io_type)
> +{
> +	switch (io_type) {
> +	case EFCT_HW_FC_CT:
> +	case EFCT_HW_ELS_REQ:
> +		return 1;
> +	default:
> +		return EFC_SUCCESS;
> +	}
> +}
> +
> +static void
> +efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* Restore the default */
> +	io->sgl = &io->def_sgl;
> +	io->sgl_count = io->def_sgl_count;
> +
> +	/* Clear the overflow SGL */
> +	io->ovfl_sgl = NULL;
> +	io->ovfl_sgl_count = 0;
> +	io->ovfl_lsp = NULL;
> +}
> +
> +static void
> +efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
> +{
> +	struct efct_hw_io *io = arg;
> +	struct efct_hw *hw = io->hw;
> +	struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +	u32	len = 0;
> +	u32 ext = 0;
> +
> +	/* clear xbusy flag if WCQE[XB] is clear */
> +	if (io->xbusy && (wcqe->flags & SLI4_WCQE_XB) == 0)
> +		io->xbusy = false;
> +
> +	/* get extended CQE status */
> +	switch (io->type) {
> +	case EFCT_HW_BLS_ACC:
> +	case EFCT_HW_BLS_ACC_SID:
> +		break;
> +	case EFCT_HW_ELS_REQ:
> +		sli_fc_els_did(&hw->sli, cqe, &ext);
> +		len = sli_fc_response_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_ELS_RSP:
> +	case EFCT_HW_ELS_RSP_SID:
> +	case EFCT_HW_FC_CT_RSP:
> +		break;
> +	case EFCT_HW_FC_CT:
> +		len = sli_fc_response_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_IO_TARGET_WRITE:
> +		len = sli_fc_io_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_IO_TARGET_READ:
> +		len = sli_fc_io_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_IO_TARGET_RSP:
> +		break;
> +	case EFCT_HW_IO_DNRX_REQUEUE:
> +		/* release the count for re-posting the buffer */
> +		/* efct_hw_io_free(hw, io); */
> +		break;
> +	default:
> +		efc_log_test(hw->os, "unhandled io type %#x for XRI 0x%x\n",
> +			      io->type, io->indicator);
> +		break;
> +	}
> +	if (status) {
> +		ext = sli_fc_ext_status(&hw->sli, cqe);
> +		/*
> +		 * If we're not an originator IO, and XB is set, then issue
> +		 * abort for the IO from within the HW
> +		 */
> +		if ((!efct_hw_iotype_is_originator(io->type)) &&
> +		    wcqe->flags & SLI4_WCQE_XB) {
> +			enum efct_hw_rtn rc;
> +
> +			efc_log_debug(hw->os, "aborting xri=%#x tag=%#x\n",
> +				       io->indicator, io->reqtag);
> +
> +			/*
> +			 * Because targets may send a response when the IO
> +			 * completes using the same XRI, we must wait for the
> +			 * XRI_ABORTED CQE to issue the IO callback
> +			 */
> +			rc = efct_hw_io_abort(hw, io, false, NULL, NULL);
> +			if (rc == EFCT_HW_RTN_SUCCESS) {
> +				/*
> +				 * latch status to return after abort is
> +				 * complete
> +				 */
> +				io->status_saved = true;
> +				io->saved_status = status;
> +				io->saved_ext = ext;
> +				io->saved_len = len;
> +				goto exit_efct_hw_wq_process_io;
> +			} else if (rc == EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
> +				/*
> +				 * Already being aborted by someone else (ABTS
> +				 * perhaps). Just fall thru and return original
> +				 * error.
> +				 */
> +				efc_log_debug(hw->os, "%s%#x tag=%#x\n",
> +					       "abort in progress xri=",
> +					      io->indicator, io->reqtag);
> +
> +			} else {
> +				/* Failed to abort for some other reason, log
> +				 * error
> +				 */
> +				efc_log_test(hw->os, "%s%#x tag=%#x rc=%d\n",
> +					      "Failed to abort xri=",
> +					     io->indicator, io->reqtag, rc);
> +			}
> +		}
> +	}
> +
> +	if (io->done) {
> +		efct_hw_done_t done = io->done;
> +		void *arg = io->arg;
> +
> +		io->done = NULL;
> +
> +		if (io->status_saved) {
> +			/* use latched status if exists */
> +			status = io->saved_status;
> +			len = io->saved_len;
> +			ext = io->saved_ext;
> +			io->status_saved = false;
> +		}
> +
> +		/* Restore default SGL */
> +		efct_hw_io_restore_sgl(hw, io);
> +		done(io, io->rnode, len, status, ext, arg);
> +	}
> +
> +exit_efct_hw_wq_process_io:
> +	return;
> +}
> +
> +/* Initialize the pool of HW IO objects */
> +static enum efct_hw_rtn
> +efct_hw_setup_io(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +	struct efct_hw_io	*io = NULL;
> +	uintptr_t	xfer_virt = 0;
> +	uintptr_t	xfer_phys = 0;
> +	u32	index;
> +	bool new_alloc = true;
> +	struct efc_dma *dma;
> +	struct efct *efct = hw->os;
> +
> +	if (!hw->io) {
> +		hw->io = kmalloc_array(hw->config.n_io, sizeof(io),
> +				 GFP_KERNEL);
> +
> +		if (!hw->io)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(hw->io, 0, hw->config.n_io * sizeof(io));
> +
> +		for (i = 0; i < hw->config.n_io; i++) {
> +			hw->io[i] = kmalloc(sizeof(*io), GFP_KERNEL);
> +			if (!hw->io[i])
> +				goto error;
> +
> +			memset(hw->io[i], 0, sizeof(struct efct_hw_io));
> +		}
> +
> +		/* Create WQE buffs for IO */
> +		hw->wqe_buffs = kmalloc((hw->config.n_io *
> +					     hw->sli.wqe_size),
> +					     GFP_ATOMIC);
> +		if (!hw->wqe_buffs) {
> +			kfree(hw->io);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +		memset(hw->wqe_buffs, 0, (hw->config.n_io *
> +					hw->sli.wqe_size));
> +
> +	} else {
> +		/* re-use existing IOs, including SGLs */
> +		new_alloc = false;
> +	}
> +
> +	if (new_alloc) {
> +		dma = &hw->xfer_rdy;
> +		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
> +		dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					       dma->size, &dma->phys, GFP_DMA);
> +		if (!dma->virt)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
> +	xfer_phys = hw->xfer_rdy.phys;
> +
> +	for (i = 0; i < hw->config.n_io; i++) {
> +		struct hw_wq_callback *wqcb;
> +
> +		io = hw->io[i];
> +
> +		/* initialize IO fields */
> +		io->hw = hw;
> +
> +		/* Assign a WQE buff */
> +		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
> +
> +		/* Allocate the request tag for this IO */
> +		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
> +		if (!wqcb) {
> +			efc_log_err(hw->os, "can't allocate request tag\n");
> +			return EFCT_HW_RTN_NO_RESOURCES;
> +		}
> +		io->reqtag = wqcb->instance_index;
> +
> +		/* Now for the fields that are initialized on each free */
> +		efct_hw_init_free_io(io);
> +
> +		/* The XB flag isn't cleared on IO free, so init to zero */
> +		io->xbusy = 0;
> +
> +		if (sli_resource_alloc(&hw->sli, SLI_RSRC_XRI,
> +				       &io->indicator, &index)) {
> +			efc_log_err(hw->os,
> +				     "sli_resource_alloc failed @ %d\n", i);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +		if (new_alloc) {
> +			dma = &io->def_sgl;
> +			dma->size = hw->config.n_sgl *
> +					sizeof(struct sli4_sge);
> +			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       dma->size, &dma->phys,
> +						       GFP_DMA);
> +			if (!dma->virt) {
> +				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
> +				memset(&io->def_sgl, 0,
> +				       sizeof(struct efc_dma));
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +		}
> +		io->def_sgl_count = hw->config.n_sgl;
> +		io->sgl = &io->def_sgl;
> +		io->sgl_count = io->def_sgl_count;
> +
> +		if (hw->xfer_rdy.size) {
> +			io->xfer_rdy.virt = (void *)xfer_virt;
> +			io->xfer_rdy.phys = xfer_phys;
> +			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
> +
> +			xfer_virt += sizeof(struct fcp_txrdy);
> +			xfer_phys += sizeof(struct fcp_txrdy);
> +		}
> +	}
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +error:
> +	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
> +		kfree(hw->io[i]);
> +		hw->io[i] = NULL;
> +	}
> +
> +	kfree(hw->io);
> +	hw->io = NULL;
> +
> +	return EFCT_HW_RTN_NO_MEMORY;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_init_io(struct efct_hw *hw)
> +{
> +	u32	i = 0, io_index = 0;
> +	bool prereg = false;
> +	struct efct_hw_io	*io = NULL;
> +	u8		cmd[SLI4_BMBX_SIZE];
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32	nremaining;
> +	u32	n = 0;
> +	u32	sgls_per_request = 256;
> +	struct efc_dma	**sgls = NULL;
> +	struct efc_dma	reqbuf;
> +	struct efct *efct = hw->os;
> +
> +	prereg = hw->sli.sgl_pre_registered;
> +
> +	memset(&reqbuf, 0, sizeof(struct efc_dma));
> +	if (prereg) {
> +		sgls = kmalloc_array(sgls_per_request, sizeof(*sgls),
> +				     GFP_ATOMIC);
> +		if (!sgls)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		reqbuf.size = 32 + sgls_per_request * 16;
> +		reqbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						 reqbuf.size, &reqbuf.phys,
> +						 GFP_DMA);
> +		if (!reqbuf.virt) {
> +			efc_log_err(hw->os, "dma_alloc reqbuf failed\n");
> +			kfree(sgls);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +	}
> +
> +	for (nremaining = hw->config.n_io; nremaining; nremaining -= n) {
> +		if (prereg) {
> +			/* Copy addresses of the SGLs into the local sgls[] array;
> +			 * break out if the XRIs are not contiguous.
> +			 */
> +			u32 min = (sgls_per_request < nremaining)
> +					? sgls_per_request : nremaining;
> +			for (n = 0; n < min; n++) {
> +				/* Check that we have contiguous xri values */
> +				if (n > 0) {
> +					if (hw->io[io_index + n]->indicator !=
> +					    hw->io[io_index + n - 1]->indicator
> +					    + 1)
> +						break;
> +				}
> +				sgls[n] = hw->io[io_index + n]->sgl;
> +			}
> +
> +			if (!sli_cmd_post_sgl_pages(&hw->sli, cmd,
> +						   sizeof(cmd),
> +						hw->io[io_index]->indicator,
> +						n, sgls, NULL, &reqbuf)) {
> +				if (efct_hw_command(hw, cmd, EFCT_CMD_POLL,
> +						    NULL, NULL)) {
> +					rc = EFCT_HW_RTN_ERROR;
> +					efc_log_err(hw->os,
> +						     "SGL post failed\n");
> +					break;
> +				}
> +			}
> +		} else {
> +			n = nremaining;
> +		}
> +
> +		/* Add to tail if successful */
> +		for (i = 0; i < n; i++, io_index++) {
> +			io = hw->io[io_index];
> +			io->state = EFCT_HW_IO_STATE_FREE;
> +			INIT_LIST_HEAD(&io->list_entry);
> +			list_add_tail(&io->list_entry, &hw->io_free);
> +		}
> +	}
> +
> +	if (prereg) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  reqbuf.size, reqbuf.virt, reqbuf.phys);
> +		memset(&reqbuf, 0, sizeof(struct efc_dma));
> +		kfree(sgls);
> +	}
> +
> +	return rc;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_config_set_fdt_xfer_hint(struct efct_hw *hw, u32 fdt_xfer_hint)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint param;
> +
> +	memset(&param, 0, sizeof(param));
> +	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
> +	/* build the set_features command */
> +	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    SLI4_SET_FEATURES_SET_FTD_XFER_HINT,
> +				    sizeof(param),
> +				    &param);
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc)
> +		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
> +			      fdt_xfer_hint, rc);
> +	else
> +		efc_log_info(hw->os, "Set FDT transfer hint to %d\n",
> +			      le32_to_cpu(param.fdt_xfer_hint));
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_config_rq(struct efct_hw *hw)
> +{
> +	u32 min_rq_count, i, rc;
> +	struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
> +	u8 buf[SLI4_BMBX_SIZE];
> +
> +	efc_log_info(hw->os, "using REG_FCFI standard\n");
> +
> +	/*
> +	 * Set the filter match/mask values from hw's
> +	 * filter_def values
> +	 */
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		rq_cfg[i].rq_id = cpu_to_le16(0xffff);
> +		rq_cfg[i].r_ctl_mask = (u8)hw->config.filter_def[i];
> +		rq_cfg[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
> +		rq_cfg[i].type_mask = (u8)(hw->config.filter_def[i] >> 16);
> +		rq_cfg[i].type_match = (u8)(hw->config.filter_def[i] >> 24);
> +	}
> +
> +	/*
> +	 * Update the rq_id's of the FCF configuration
> +	 * (don't update more than the number of rq_cfg
> +	 * elements)
> +	 */
> +	min_rq_count = (hw->hw_rq_count < SLI4_CMD_REG_FCFI_NUM_RQ_CFG)	?
> +			hw->hw_rq_count : SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
> +	for (i = 0; i < min_rq_count; i++) {
> +		struct hw_rq *rq = hw->hw_rq[i];
> +		u32 j;
> +
> +		for (j = 0; j < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; j++) {
> +			u32 mask = (rq->filter_mask != 0) ?
> +				rq->filter_mask : 1;
> +
> +			if (!(mask & (1U << j)))
> +				continue;
> +
> +			rq_cfg[j].rq_id = cpu_to_le16(rq->hdr->id);
> +			efct_logfcfi(hw, j, i, rq->hdr->id);
> +		}
> +	}
> +
> +	rc = EFCT_HW_RTN_ERROR;
> +	if (!sli_cmd_reg_fcfi(&hw->sli, buf,
> +				SLI4_BMBX_SIZE, 0,
> +				rq_cfg)) {
> +		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL,
> +				NULL, NULL);
> +	}
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_err(hw->os,
> +				"FCFI registration failed\n");
> +		return rc;
> +	}
> +	hw->fcf_indicator =
> +		le16_to_cpu(((struct sli4_cmd_reg_fcfi *)buf)->fcfi);
> +
> +	return rc;
> +}
> +
> +static void
> +efct_hw_queue_hash_add(struct efct_queue_hash *hash,
> +		       u16 id, u16 index)
> +{
> +	u32	hash_index = id & (EFCT_HW_Q_HASH_SIZE - 1);
> +
> +	/*
> +	 * Since the hash is always bigger than the number of queues, we
> +	 * never have to worry about an infinite loop.
> +	 */
> +	while (hash[hash_index].in_use)
> +		hash_index = (hash_index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
> +
> +	/* not used, claim the entry */
> +	hash[hash_index].id = id;
> +	hash[hash_index].in_use = true;
> +	hash[hash_index].index = index;
> +}
> +
> +/* enable sli port health check */
> +static enum efct_hw_rtn
> +efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query,
> +				     u8 enable)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_health_check param;
> +	u32	health_check_flag = 0;
> +
> +	memset(&param, 0, sizeof(param));
> +
> +	if (enable)
> +		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
> +
> +	if (query)
> +		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
> +
> +	param.health_check_dword = cpu_to_le32(health_check_flag);
> +
> +	/* build the set_features command */
> +	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK,
> +				    sizeof(param),
> +				    &param);
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc)
> +		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
> +	else
> +		efc_log_test(hw->os, "SLI Port Health Check is enabled\n");
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_init(struct efct_hw *hw)
> +{
> +	enum efct_hw_rtn rc;
> +	u32 i = 0;
> +	u32 max_rpi;
> +	int rem_count;
> +	unsigned long flags = 0;
> +	struct efct_hw_io *temp;
> +	struct sli4 *sli = &hw->sli;
> +	struct hw_rq *rq;
> +
> +	/*
> +	 * Make sure the command lists are empty. If this is start-of-day,
> +	 * they'll be empty since they were just initialized in efct_hw_setup.
> +	 * If we've just gone through a reset, the command and command pending
> +	 * lists should have been cleaned up as part of the reset
> +	 * (efct_hw_reset()).
> +	 */
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +	if (!list_empty(&hw->cmd_head)) {
> +		efc_log_err(hw->os, "command found on cmd list\n");
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	if (!list_empty(&hw->cmd_pending)) {
> +		efc_log_err(hw->os,
> +				"command found on pending list\n");
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	/* Free RQ buffers if previously allocated */
> +	efct_hw_rx_free(hw);
> +
> +	/*
> +	 * The IO queues must be initialized here for the reset case. The
> +	 * efct_hw_init_io() function will re-add the IOs to the free list.
> +	 * The cmd_head list should be OK since we free all entries in
> +	 * efct_hw_command_cancel() that is called in the efct_hw_reset().
> +	 */
> +
> +	/* If we are in this function due to a reset, there may be stale items
> +	 * on lists that need to be removed.  Clean them up.
> +	 */
> +	rem_count = 0;
> +	if (hw->io_wait_free.next) {
> +		while ((!list_empty(&hw->io_wait_free))) {
> +			rem_count++;
> +			temp = list_first_entry(&hw->io_wait_free,
> +						struct efct_hw_io,
> +						list_entry);
> +			list_del(&temp->list_entry);
> +		}
> +		if (rem_count > 0) {
> +			efc_log_debug(hw->os,
> +				       "rmvd %d items from io_wait_free list\n",
> +				rem_count);
> +		}
> +	}
> +	rem_count = 0;
> +	if (hw->io_inuse.next) {
> +		while ((!list_empty(&hw->io_inuse))) {
> +			rem_count++;
> +			temp = list_first_entry(&hw->io_inuse,
> +						struct efct_hw_io,
> +						list_entry);
> +			list_del(&temp->list_entry);
> +		}
> +		if (rem_count > 0)
> +			efc_log_debug(hw->os,
> +				       "rmvd %d items from io_inuse list\n",
> +				       rem_count);
> +	}
> +	rem_count = 0;
> +	if (hw->io_free.next) {
> +		while ((!list_empty(&hw->io_free))) {
> +			rem_count++;
> +			temp = list_first_entry(&hw->io_free,
> +						struct efct_hw_io,
> +						list_entry);
> +			list_del(&temp->list_entry);
> +		}
> +		if (rem_count > 0)
> +			efc_log_debug(hw->os,
> +				       "rmvd %d items from io_free list\n",
> +				       rem_count);
> +	}
> +
> +	INIT_LIST_HEAD(&hw->io_inuse);
> +	INIT_LIST_HEAD(&hw->io_free);
> +	INIT_LIST_HEAD(&hw->io_wait_free);
> +
> +	/* If MRQ is not required, make sure we don't request the feature. */
> +	hw->sli.features &= (~SLI4_REQFEAT_MRQP);
> +
> +	if (sli_init(&hw->sli)) {
> +		efc_log_err(hw->os, "SLI failed to initialize\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->sliport_healthcheck) {
> +		rc = efct_hw_config_sli_port_health_check(hw, 0, 1);
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			efc_log_err(hw->os, "Enable port Health check fail\n");
> +			return rc;
> +		}
> +	}
> +
> +	/*
> +	 * Set FDT transfer hint, only works on Lancer
> +	 */
> +	if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2) {
> +		/*
> +		 * Non-fatal error. In particular, we can disregard failure to
> +		 * set EFCT_HW_FDT_XFER_HINT on devices with legacy firmware
> +		 * that do not support EFCT_HW_FDT_XFER_HINT feature.
> +		 */
> +		efct_hw_config_set_fdt_xfer_hint(hw, EFCT_HW_FDT_XFER_HINT);
> +	}
> +
> +	/* zero the hashes */
> +	memset(hw->cq_hash, 0, sizeof(hw->cq_hash));
> +	efc_log_debug(hw->os, "Max CQs %d, hash size = %d\n",
> +		       EFCT_HW_MAX_NUM_CQ, EFCT_HW_Q_HASH_SIZE);
> +
> +	memset(hw->rq_hash, 0, sizeof(hw->rq_hash));
> +	efc_log_debug(hw->os, "Max RQs %d, hash size = %d\n",
> +		       EFCT_HW_MAX_NUM_RQ, EFCT_HW_Q_HASH_SIZE);
> +
> +	memset(hw->wq_hash, 0, sizeof(hw->wq_hash));
> +	efc_log_debug(hw->os, "Max WQs %d, hash size = %d\n",
> +		       EFCT_HW_MAX_NUM_WQ, EFCT_HW_Q_HASH_SIZE);
> +
> +	rc = efct_hw_init_queues(hw);
> +	if (rc != EFCT_HW_RTN_SUCCESS)
> +		return rc;
> +
> +	/* Allocate and post RQ buffers */
> +	rc = efct_hw_rx_allocate(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "rx_allocate failed\n");
> +		return rc;
> +	}
> +
> +	rc = efct_hw_rx_post(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "WARNING - error posting RQ buffers\n");
> +		return rc;
> +	}
> +
> +	max_rpi = sli->extent[SLI_RSRC_RPI].size;
> +	/* Allocate rpi_ref if not previously allocated */
> +	if (!hw->rpi_ref) {
> +		hw->rpi_ref = kmalloc_array(max_rpi, sizeof(*hw->rpi_ref),
> +				      GFP_KERNEL);
> +		if (!hw->rpi_ref)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(hw->rpi_ref, 0, max_rpi * sizeof(*hw->rpi_ref));
> +	}
> +
> +	for (i = 0; i < max_rpi; i++) {
> +		atomic_set(&hw->rpi_ref[i].rpi_count, 0);
> +		atomic_set(&hw->rpi_ref[i].rpi_attached, 0);
> +	}
> +
> +	rc = efct_hw_config_rq(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "efct_hw_config_rq failed %d\n", rc);
> +		return rc;
> +	}
> +
> +	/*
> +	 * Allocate the WQ request tag pool, if not previously allocated
> +	 * (the request tag value is 16 bits, thus the pool allocation size
> +	 * of 64k)
> +	 */
> +	hw->wq_reqtag_pool = efct_hw_reqtag_pool_alloc(hw);
> +	if (!hw->wq_reqtag_pool) {
> +		efc_log_err(hw->os, "efct_hw_reqtag_init failed %d\n", rc);
> +		return rc;
> +	}
> +
> +	rc = efct_hw_setup_io(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "IO allocation failure\n");
> +		return rc;
> +	}
> +
> +	rc = efct_hw_init_io(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "IO initialization failure\n");
> +		return rc;
> +	}
> +
> +	/*
> +	 * Arming the EQ allows (e.g.) interrupts when CQ completions write EQ
> +	 * entries
> +	 */
> +	for (i = 0; i < hw->eq_count; i++)
> +		sli_queue_arm(&hw->sli, &hw->eq[i], true);
> +
> +	/*
> +	 * Initialize RQ hash
> +	 */
> +	for (i = 0; i < hw->rq_count; i++)
> +		efct_hw_queue_hash_add(hw->rq_hash, hw->rq[i].id, i);
> +
> +	/*
> +	 * Initialize WQ hash
> +	 */
> +	for (i = 0; i < hw->wq_count; i++)
> +		efct_hw_queue_hash_add(hw->wq_hash, hw->wq[i].id, i);
> +
> +	/*
> +	 * Arming the CQ allows (e.g.) MQ completions to write CQ entries
> +	 */
> +	for (i = 0; i < hw->cq_count; i++) {
> +		efct_hw_queue_hash_add(hw->cq_hash, hw->cq[i].id, i);
> +		sli_queue_arm(&hw->sli, &hw->cq[i], true);
> +	}
> +
> +	/* Set RQ process limit*/
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +		hw->cq[rq->cq->instance].proc_limit = hw->config.n_io / 2;
> +	}
> +
> +	/* record the fact that the queues are functional */
> +	hw->state = EFCT_HW_STATE_ACTIVE;
> +	/*
> +	 * Allocate a HW IO for send frame.
> +	 */
> +	hw->hw_wq[0]->send_frame_io = efct_hw_io_alloc(hw);
> +	if (!hw->hw_wq[0]->send_frame_io)
> +		efc_log_err(hw->os, "alloc for send_frame_io failed\n");
> +
> +	/* Initialize send frame sequence id */
> +	atomic_set(&hw->send_frame_seq_id, 0);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_parse_filter(struct efct_hw *hw, void *value)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	char *p = NULL;
> +	char *token;
> +	u32 idx = 0;
> +
> +	for (idx = 0; idx < ARRAY_SIZE(hw->config.filter_def); idx++)
> +		hw->config.filter_def[idx] = 0;
> +
> +	p = kstrdup(value, GFP_KERNEL);
> +	if (!p || !*p) {
> +		efc_log_err(hw->os, "p is NULL\n");
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	idx = 0;
> +	while ((token = strsep(&p, ",")) && *token) {
> +		if (kstrtou32(token, 0, &hw->config.filter_def[idx++]))
> +			efc_log_err(hw->os, "kstrtoint failed\n");
> +
> +		if (!p || !*p)
> +			break;
> +
> +		if (idx == ARRAY_SIZE(hw->config.filter_def))
> +			break;
> +	}
> +	kfree(p);
> +
> +	return rc;
> +}
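
As an aside: with the default filter_def of "0,0,0,0" all four entries stay
zero.  Each comma-separated token is parsed with kstrtou32(..., 0, ...), so
hex values are accepted as well, and efct_hw_config_rq() above unpacks each
32-bit value byte-wise.  A hypothetical entry of 0x01020304 would yield:

	r_ctl_mask  = 0x04;	/* bits  7:0  */
	r_ctl_match = 0x03;	/* bits 15:8  */
	type_mask   = 0x02;	/* bits 23:16 */
	type_match  = 0x01;	/* bits 31:24 */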
> +
> +u64
> +efct_get_wwnn(struct efct_hw *hw)
> +{
> +	struct sli4 *sli = &hw->sli;
> +	u8 p[8];
> +
> +	memcpy(p, sli->wwnn, sizeof(p));
> +	return get_unaligned_be64(p);
> +}
> +
> +u64
> +efct_get_wwpn(struct efct_hw *hw)
> +{
> +	struct sli4 *sli = &hw->sli;
> +	u8 p[8];
> +
> +	memcpy(p, sli->wwpn, sizeof(p));
> +	return get_unaligned_be64(p);
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index b3d4d4bc8d8c..e5839254c730 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -614,4 +614,19 @@ struct efct_hw_grp_hdr {
>   	u8			revision[32];
>   };
>   
> +static inline int
> +efct_hw_get_link_speed(struct efct_hw *hw) {
> +	return hw->link.speed;
> +}
> +
> +extern enum efct_hw_rtn
> +efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev);
> +enum efct_hw_rtn efct_hw_init(struct efct_hw *hw);
> +extern enum efct_hw_rtn
> +efct_hw_parse_filter(struct efct_hw *hw, void *value);
> +extern uint64_t
> +efct_get_wwnn(struct efct_hw *hw);
> +extern uint64_t
> +efct_get_wwpn(struct efct_hw *hw);
> +
>   #endif /* __EFCT_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
> new file mode 100644
> index 000000000000..b683208d396f
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_xport.c
> @@ -0,0 +1,523 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_unsol.h"
> +
> +/* Post node event callback argument. */
> +struct efct_xport_post_node_event {
> +	struct completion done;
> +	atomic_t refcnt;
> +	struct efc_node *node;
> +	u32	evt;
> +	void *context;
> +};
> +
> +static struct dentry *efct_debugfs_root;
> +static atomic_t efct_debugfs_count;
> +
> +static struct scsi_host_template efct_template = {
> +	.module			= THIS_MODULE,
> +	.name			= EFCT_DRIVER_NAME,
> +	.supported_mode		= MODE_TARGET,
> +};
> +
> +/* globals */
> +static struct fc_function_template efct_xport_functions;
> +static struct fc_function_template efct_vport_functions;
> +
> +static struct scsi_transport_template *efct_xport_fc_tt;
> +static struct scsi_transport_template *efct_vport_fc_tt;
> +
> +/*
> + * transport object is allocated,
> + * and associated with a device instance
> + */
> +struct efct_xport *
> +efct_xport_alloc(struct efct *efct)
> +{
> +	struct efct_xport *xport;
> +
> +	xport = kmalloc(sizeof(*xport), GFP_KERNEL);
> +	if (!xport)
> +		return xport;
> +
> +	memset(xport, 0, sizeof(*xport));
> +	xport->efct = efct;
> +	return xport;
> +}
> +
> +static int
> +efct_xport_init_debugfs(struct efct *efct)
> +{
> +	/* Setup efct debugfs root directory */
> +	if (!efct_debugfs_root) {
> +		efct_debugfs_root = debugfs_create_dir("efct", NULL);
> +		atomic_set(&efct_debugfs_count, 0);
> +		if (!efct_debugfs_root) {
> +			efc_log_err(efct, "failed to create debugfs entry\n");
> +			goto debugfs_fail;
> +		}
> +	}
> +
> +	/* Create a directory for sessions in root */
> +	if (!efct->sess_debugfs_dir) {
> +		efct->sess_debugfs_dir = debugfs_create_dir("sessions", NULL);
> +		if (!efct->sess_debugfs_dir) {
> +			efc_log_err(efct,
> +				     "failed to create debugfs entry for sessions\n");
> +			goto debugfs_fail;
> +		}
> +		atomic_inc(&efct_debugfs_count);
> +	}
> +
> +	return EFC_SUCCESS;
> +
> +debugfs_fail:
> +	return EFC_FAIL;
> +}
> +
> +static void efct_xport_delete_debugfs(struct efct *efct)
> +{
> +	/* Remove session debugfs directory */
> +	debugfs_remove(efct->sess_debugfs_dir);
> +	efct->sess_debugfs_dir = NULL;
> +	atomic_dec(&efct_debugfs_count);
> +
> +	if (atomic_read(&efct_debugfs_count) == 0) {
> +		/* remove root debugfs directory */
> +		debugfs_remove(efct_debugfs_root);
> +		efct_debugfs_root = NULL;
> +	}
> +}
> +
> +int
> +efct_xport_attach(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +	int rc;
> +
> +	xport->fcfi.hold_frames = true;
> +	spin_lock_init(&xport->fcfi.pend_frames_lock);
> +	INIT_LIST_HEAD(&xport->fcfi.pend_frames);
> +
> +	rc = efct_hw_setup(&efct->hw, efct, efct->pcidev);
> +	if (rc) {
> +		efc_log_err(efct, "%s: Can't setup hardware\n", efct->desc);
> +		return rc;
> +	}
> +
> +	efct_hw_parse_filter(&efct->hw, (void *)efct->filter_def);
> +
> +	xport->io_pool = efct_io_pool_create(efct, efct->hw.config.n_sgl);
> +	if (!xport->io_pool) {
> +		efc_log_err(efct, "Can't allocate IO pool\n");
> +		return -ENOMEM;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_xport_link_stats_cb(int status, u32 num_counters,
> +			 struct efct_hw_link_stat_counts *counters, void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.link_stats.link_failure_error_count =
> +		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
> +	result->stats.link_stats.loss_of_sync_error_count =
> +		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
> +	result->stats.link_stats.primitive_sequence_error_count =
> +		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
> +	result->stats.link_stats.invalid_transmission_word_error_count =
> +		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
> +	result->stats.link_stats.crc_error_count =
> +		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
> +
> +	complete(&result->stats.done);
> +}
> +
> +static void
> +efct_xport_host_stats_cb(int status, u32 num_counters,
> +			 struct efct_hw_host_stat_counts *counters, void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.host_stats.transmit_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.receive_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.transmit_frame_count =
> +		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
> +	result->stats.host_stats.receive_frame_count =
> +		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
> +
> +	complete(&result->stats.done);
> +}
> +
> +static void
> +efct_xport_async_link_stats_cb(int status, u32 num_counters,
> +			       struct efct_hw_link_stat_counts *counters,
> +			       void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.link_stats.link_failure_error_count =
> +		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
> +	result->stats.link_stats.loss_of_sync_error_count =
> +		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
> +	result->stats.link_stats.primitive_sequence_error_count =
> +		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
> +	result->stats.link_stats.invalid_transmission_word_error_count =
> +		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
> +	result->stats.link_stats.crc_error_count =
> +		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
> +}
> +
> +static void
> +efct_xport_async_host_stats_cb(int status, u32 num_counters,
> +			       struct efct_hw_host_stat_counts *counters,
> +			       void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.host_stats.transmit_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.receive_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.transmit_frame_count =
> +		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
> +	result->stats.host_stats.receive_frame_count =
> +		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
> +}
> +
> +static void
> +efct_xport_config_stats_timer(struct efct *efct);
> +
> +static void
> +efct_xport_stats_timer_cb(struct timer_list *t)
> +{
> +	struct efct_xport *xport = from_timer(xport, t, stats_timer);
> +	struct efct *efct = xport->efct;
> +
> +	efct_xport_config_stats_timer(efct);
> +}
> +
> +static void
> +efct_xport_config_stats_timer(struct efct *efct)
> +{
> +	u32 timeout = 3 * 1000;
> +	struct efct_xport *xport = NULL;
> +
> +	if (!efct) {
> +		pr_err("%s: failed to locate EFCT device\n", __func__);
> +		return;
> +	}
> +
> +	xport = efct->xport;
> +	efct_hw_get_link_stats(&efct->hw, 0, 0, 0,
> +			       efct_xport_async_link_stats_cb,
> +			       &xport->fc_xport_stats);
> +	efct_hw_get_host_stats(&efct->hw, 0, efct_xport_async_host_stats_cb,
> +			       &xport->fc_xport_stats);
> +
> +	timer_setup(&xport->stats_timer,
> +		    &efct_xport_stats_timer_cb, 0);
> +	mod_timer(&xport->stats_timer,
> +		  jiffies + msecs_to_jiffies(timeout));
> +}
> +
> +int
> +efct_xport_initialize(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +	int rc = 0;
> +
> +	/* Initialize io lists */
> +	spin_lock_init(&xport->io_pending_lock);
> +	INIT_LIST_HEAD(&xport->io_pending_list);
> +	atomic_set(&xport->io_active_count, 0);
> +	atomic_set(&xport->io_pending_count, 0);
> +	atomic_set(&xport->io_total_free, 0);
> +	atomic_set(&xport->io_total_pending, 0);
> +	atomic_set(&xport->io_alloc_failed_count, 0);
> +	atomic_set(&xport->io_pending_recursing, 0);
> +	rc = efct_hw_init(&efct->hw);
> +	if (rc) {
> +		efc_log_err(efct, "efct_hw_init failure\n");
> +		goto out;
> +	}
> +
> +	rc = efct_scsi_tgt_new_device(efct);
> +	if (rc) {
> +		efc_log_err(efct, "failed to initialize target\n");
> +		goto hw_init_out;
> +	}
> +
> +	rc = efct_scsi_new_device(efct);
> +	if (rc) {
> +		efc_log_err(efct, "failed to initialize initiator\n");
> +		goto tgt_dev_out;
> +	}
> +
> +	/* Get FC link and host statistics periodically */
> +	efct_xport_config_stats_timer(efct);
> +
> +	efct_xport_init_debugfs(efct);
> +
> +	return rc;
> +
> +tgt_dev_out:
> +	efct_scsi_tgt_del_device(efct);
> +
> +hw_init_out:
> +	efct_hw_teardown(&efct->hw);
> +out:
> +	return rc;
> +}
> +
> +int
> +efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
> +		  union efct_xport_stats_u *result)
> +{
> +	u32 rc = 0;
> +	struct efct *efct = NULL;
> +	union efct_xport_stats_u value;
> +
> +	efct = xport->efct;
> +
> +	switch (cmd) {
> +	case EFCT_XPORT_CONFIG_PORT_STATUS:
> +		if (xport->configured_link_state == 0) {
> +			/*
> +			 * Initial state is offline. configured_link_state is
> +			 * set to online explicitly when port is brought online
> +			 */
> +			xport->configured_link_state = EFCT_XPORT_PORT_OFFLINE;
> +		}
> +		result->value = xport->configured_link_state;
> +		break;
> +
> +	case EFCT_XPORT_PORT_STATUS:
> +		/* Determine port status based on link speed. */
> +		value.value = efct_hw_get_link_speed(&efct->hw);
> +		if (value.value == 0)
> +			result->value = 0;
> +		else
> +			result->value = 1;
> +		rc = 0;
> +		break;
> +
> +	case EFCT_XPORT_LINK_SPEED: {
> +		result->value = efct_hw_get_link_speed(&efct->hw);
> +
> +		break;
> +	}
> +	case EFCT_XPORT_LINK_STATISTICS:
> +		memcpy((void *)result, &efct->xport->fc_xport_stats,
> +		       sizeof(union efct_xport_stats_u));
> +		break;
> +	case EFCT_XPORT_LINK_STAT_RESET: {
> +		/* Create a completion to synchronize the stat reset process. */
> +		init_completion(&result->stats.done);
> +
> +		/* First reset the link stats */
> +		rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
> +					    efct_xport_link_stats_cb, result);
> +
> +		/* Wait for completion to be signaled when the cmd completes */
> +		if (wait_for_completion_interruptible(&result->stats.done)) {
> +			/* Undefined failure */
> +			efc_log_test(efct, "sem wait failed\n");
> +			rc = -ENXIO;
> +			break;
> +		}
> +
> +		/* Next reset the host stats */
> +		rc = efct_hw_get_host_stats(&efct->hw, 1,
> +					    efct_xport_host_stats_cb, result);
> +
> +		/* Wait for completion to be signaled when the cmd completes */
> +		if (wait_for_completion_interruptible(&result->stats.done)) {
> +			/* Undefined failure */
> +			efc_log_test(efct, "sem wait failed\n");
> +			rc = -ENXIO;
> +			break;
> +		}
> +		break;
> +	}
> +	default:
> +		rc = -1;
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +efct_scsi_new_device(struct efct *efct)
> +{
> +	struct Scsi_Host *shost = NULL;
> +	int error = 0;
> +	struct efct_vport *vport = NULL;
> +	union efct_xport_stats_u speed;
> +	u32 supported_speeds = 0;
> +
> +	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
> +	if (!shost) {
> +		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/* save shost to initiator-client context */
> +	efct->shost = shost;
> +
> +	/* save efct information to shost LLD-specific space */
> +	vport = (struct efct_vport *)shost->hostdata;
> +	vport->efct = efct;
> +
> +	/*
> +	 * Set initial can_queue value to the max SCSI IOs. This is the maximum
> +	 * global queue depth (as opposed to the per-LUN queue depth --
> +	 * .cmd_per_lun). This may need to be adjusted for I+T mode.
> +	 */
> +	shost->can_queue = efct->hw.config.n_io;
> +	shost->max_cmd_len = 16; /* 16-byte CDBs */
> +	shost->max_id = 0xffff;
> +	shost->max_lun = 0xffffffff;
> +

What happened to multiqueue support?
The original lpfc driver did it, so we shouldn't regress with the new driver...
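
Something along these lines is what I'd expect (untested sketch; the
callback name is made up and the real mapping would have to follow the
EQ/MSI-X vector assignment):

/* needs linux/blk-mq-pci.h */
static int efct_map_queues(struct Scsi_Host *shost)
{
	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;

	/* spread the blk-mq hw queues over the MSI-X vectors */
	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
				     vport->efct->pcidev, 0);
}

plus a '.map_queues = efct_map_queues' entry in efct_template and
setting shost->nr_hw_queues to the number of EQs before scsi_add_host().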

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion
  2020-04-12  3:32 ` [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion James Smart
@ 2020-04-16  7:14   ` Hannes Reinecke
  2020-04-16  8:24   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  7:14 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for queue creation, deletion, and configuration.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Removed all Queue topology parsing code
>    Reworked queue creation code.
> ---
>   drivers/scsi/elx/efct/efct_hw_queues.c | 765 +++++++++++++++++++++++++++++++++
>   1 file changed, 765 insertions(+)
>   create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations
  2020-04-12  3:32 ` [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations James Smart
  2020-04-16  6:51   ` Hannes Reinecke
@ 2020-04-16  7:22   ` Daniel Wagner
  2020-04-23  2:59     ` James Smart
  1 sibling, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  7:22 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:47PM -0700, James Smart wrote:
> This patch starts the population of the efct target mode
> driver.  The driver is contained in the drivers/scsi/elx/efct
> subdirectory.
> 
> This patch creates the efct directory and starts population of
> the driver by adding SLI-4 configuration parameters, data structures
> for configuring SLI-4 queues, converting from os to SLI-4 IO requests,
> and handling async events.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed anonymous enums to named.
>   Removed some structures and defines which are not used.
>   Reworked on efct_hw_io_param struct which can be used for holding
>     params in WQE submission.
> ---
>  drivers/scsi/elx/efct/efct_hw.h | 617 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 617 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_hw.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> new file mode 100644
> index 000000000000..b3d4d4bc8d8c
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -0,0 +1,617 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef _EFCT_HW_H
> +#define _EFCT_HW_H
> +
> +#include "../libefc_sli/sli4.h"
> +
> +/*
> + * EFCT PCI IDs
> + */
> +#define EFCT_VENDOR_ID			0x10df
> +/* LightPulse 16Gb x 4 FC (lancer-g6) */
> +#define EFCT_DEVICE_LANCER_G6		0xe307
> +/* LightPulse 32Gb x 4 FC (lancer-g7) */
> +#define EFCT_DEVICE_LANCER_G7		0xf407
> +
> +/*Default RQ entries len used by driver*/
> +#define EFCT_HW_RQ_ENTRIES_MIN		512
> +#define EFCT_HW_RQ_ENTRIES_DEF		1024
> +#define EFCT_HW_RQ_ENTRIES_MAX		4096
> +
> +/*Defines the size of the RQ buffers used for each RQ*/
> +#define EFCT_HW_RQ_SIZE_HDR             128
> +#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
> +
> +/*Define the maximum number of multi-receive queues*/
> +#define EFCT_HW_MAX_MRQS		8
> +
> +/*
> + * Define count of when to set the WQEC bit in a submitted
> + * WQE, causing a consumed/released completion to be posted.
> + */
> +#define EFCT_HW_WQEC_SET_COUNT		32
> +
> +/*Send frame timeout in seconds*/
> +#define EFCT_HW_SEND_FRAME_TIMEOUT	10
> +
> +/*
> + * FDT Transfer Hint value, reads greater than this value
> + * will be segmented to implement fairness. A value of zero disables
> + * the feature.
> + */
> +#define EFCT_HW_FDT_XFER_HINT		8192
> +
> +#define EFCT_HW_TIMECHECK_ITERATIONS	100
> +#define EFCT_HW_MAX_NUM_MQ		1
> +#define EFCT_HW_MAX_NUM_RQ		32
> +#define EFCT_HW_MAX_NUM_EQ		16
> +#define EFCT_HW_MAX_NUM_WQ		32
> +#define EFCT_HW_DEF_NUM_EQ		1
> +
> +#define OCE_HW_MAX_NUM_MRQ_PAIRS	16
> +
> +#define EFCT_HW_MQ_DEPTH		128
> +#define EFCT_HW_EQ_DEPTH		1024
> +
> +/*
> + * A CQ will be assigned to each WQ
> + * (CQ must have 2X entries of the WQ for abort
> + * processing), plus a separate one for each RQ PAIR and one for MQ
> + */
> +#define EFCT_HW_MAX_NUM_CQ \
> +	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
> +
> +#define EFCT_HW_Q_HASH_SIZE		128
> +#define EFCT_HW_RQ_HEADER_SIZE		128
> +#define EFCT_HW_RQ_HEADER_INDEX		0
> +
> +#define EFCT_HW_REQUE_XRI_REGTAG	65534
> +
> +/* Options for efct_hw_command() */
> +enum efct_cmd_opts {
> +	/* command executes synchronously and busy-waits for completion */
> +	EFCT_CMD_POLL,
> +	/* command executes asynchronously. Uses callback */
> +	EFCT_CMD_NOWAIT,
> +};
> +
> +enum efct_hw_rtn {
> +	EFCT_HW_RTN_SUCCESS = 0,
> +	EFCT_HW_RTN_SUCCESS_SYNC = 1,
> +	EFCT_HW_RTN_ERROR = -1,
> +	EFCT_HW_RTN_NO_RESOURCES = -2,
> +	EFCT_HW_RTN_NO_MEMORY = -3,
> +	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
> +	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
> +	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
> +	EFCT_HW_RTN_INVALID_ARG = -7,
> +};
> +
> +#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
> +
> +enum efct_hw_reset {
> +	EFCT_HW_RESET_FUNCTION,
> +	EFCT_HW_RESET_FIRMWARE,
> +	EFCT_HW_RESET_MAX
> +};
> +
> +enum efct_hw_topo {
> +	EFCT_HW_TOPOLOGY_AUTO,
> +	EFCT_HW_TOPOLOGY_NPORT,
> +	EFCT_HW_TOPOLOGY_LOOP,
> +	EFCT_HW_TOPOLOGY_NONE,
> +	EFCT_HW_TOPOLOGY_MAX
> +};
> +
> +/* pack fw revision values into a single uint64_t */
> +#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32) \
> +			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
> +
> +#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
> +
> +enum efct_hw_io_type {
> +	EFCT_HW_ELS_REQ,
> +	EFCT_HW_ELS_RSP,
> +	EFCT_HW_ELS_RSP_SID,
> +	EFCT_HW_FC_CT,
> +	EFCT_HW_FC_CT_RSP,
> +	EFCT_HW_BLS_ACC,
> +	EFCT_HW_BLS_ACC_SID,
> +	EFCT_HW_BLS_RJT,
> +	EFCT_HW_IO_TARGET_READ,
> +	EFCT_HW_IO_TARGET_WRITE,
> +	EFCT_HW_IO_TARGET_RSP,
> +	EFCT_HW_IO_DNRX_REQUEUE,
> +	EFCT_HW_IO_MAX,
> +};
> +
> +enum efct_hw_io_state {
> +	EFCT_HW_IO_STATE_FREE,
> +	EFCT_HW_IO_STATE_INUSE,
> +	EFCT_HW_IO_STATE_WAIT_FREE,
> +	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
> +};
> +
> +struct efct_hw;
> +
> +/**
> + * HW command context.
> + * Stores the state for the asynchronous commands sent to the hardware.
> + */
> +struct efct_command_ctx {
> +	struct list_head	list_entry;
> +	int (*cb)(struct efct_hw *hw, int status, u8 *mqe, void *arg);
> +	void			*arg;	/* Argument for callback */
> +	u8			*buf;	/* buffer holding command / results */
> +	void			*ctx;	/* upper layer context */
> +};
> +
> +struct efct_hw_sgl {
> +	uintptr_t		addr;
> +	size_t			len;
> +};
> +
> +union efct_hw_io_param_u {
> +	struct sli_bls_params bls;
> +	struct sli_els_params els;
> +	struct sli_ct_params fc_ct;
> +	struct sli_fcp_tgt_params fcp_tgt;
> +};
> +
> +/* WQ steering mode */
> +enum efct_hw_wq_steering {
> +	EFCT_HW_WQ_STEERING_CLASS,
> +	EFCT_HW_WQ_STEERING_REQUEST,
> +	EFCT_HW_WQ_STEERING_CPU,
> +};
> +
> +/* HW wqe object */
> +struct efct_hw_wqe {
> +	struct list_head	list_entry;
> +	bool			abort_wqe_submit_needed;
> +	bool			send_abts;
> +	u32			id;
> +	u32			abort_reqtag;
> +	u8			*wqebuf;
> +};
> +
> +/**
> + * HW IO object.
> + *
> + * Stores the per-IO information necessary
> + * for both the lower (SLI) and upper
> + * layers (efct).
> + */
> +struct efct_hw_io {
> +	/* Owned by HW */

What are the rules? Lifetime properties?

> +
> +	/* reference counter and callback function */

Make kerneldoc out of it?
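
Something like this, for instance (sketch only, member list abridged):

/**
 * struct efct_hw_io - HW IO object
 * @ref:	reference count; one per outstanding user of the IO
 * @release:	invoked when @ref drops to zero
 * @list_entry:	linkage for the busy/wait_free/free lists
 * @state:	current IO state (free, busy, wait_free)
 * ...
 */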

> +	struct kref		ref;
> +	void (*release)(struct kref *arg);
> +	/* used for busy, wait_free, free lists */
> +	struct list_head	list_entry;
> +	/* used for timed_wqe list */
> +	struct list_head	wqe_link;
> +	/* used for io posted dnrx list */
> +	struct list_head	dnrx_link;
> +	/* state of IO: free, busy, wait_free */
> +	enum efct_hw_io_state	state;
> +	/* Work queue object, with link for pending */
> +	struct efct_hw_wqe	wqe;
> +	/* pointer back to hardware context */
> +	struct efct_hw		*hw;
> +	struct efc_remote_node	*rnode;
> +	struct efc_dma		xfer_rdy;
> +	u16	type;
> +	/* WQ assigned to the exchange */
> +	struct hw_wq		*wq;
> +	/* Exchange is active in FW */
> +	bool			xbusy;
> +	/* Function called on IO completion */
> +	int
> +	(*done)(struct efct_hw_io *hio,
> +		struct efc_remote_node *rnode,
> +			u32 len, int status,
> +			u32 ext, void *ul_arg);

There is still a bit left till 80 chars are hit.
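
e.g.

	int (*done)(struct efct_hw_io *hio, struct efc_remote_node *rnode,
		    u32 len, int status, u32 ext, void *ul_arg);

still fits.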

> +	/* argument passed to "IO done" callback */
> +	void			*arg;
> +	/* Function called on abort completion */
> +	int
> +	(*abort_done)(struct efct_hw_io *hio,
> +		      struct efc_remote_node *rnode,
> +			u32 len, int status,
> +			u32 ext, void *ul_arg);
> +	/* argument passed to "abort done" callback */
> +	void			*abort_arg;
> +	/* needed for bug O127585: length of IO */
> +	size_t			length;
> +	/* timeout value for target WQEs */
> +	u8			tgt_wqe_timeout;
> +	/* timestamp when current WQE was submitted */
> +	u64			submit_ticks;
> +
> +	/* if TRUE, latched status shld be returned */
> +	bool			status_saved;
> +	/* if TRUE, abort is in progress */
> +	bool			abort_in_progress;
> +	u32			saved_status;
> +	u32			saved_len;
> +	u32			saved_ext;
> +
> +	/* EQ that this HIO came up on */
> +	struct hw_eq		*eq;
> +	/* WQ steering mode request */
> +	enum efct_hw_wq_steering wq_steering;
> +	/* WQ class if steering mode is Class */
> +	u8			wq_class;
> +
> +	/* request tag for this HW IO */
> +	u16			reqtag;
> +	/* request tag for an abort of this HW IO
> +	 * (note: this is a 32 bit value
> +	 * to allow us to use UINT32_MAX as an uninitialized value)
> +	 */
> +	u32			abort_reqtag;
> +	u32			indicator;	/* XRI */
> +	struct efc_dma		def_sgl;	/* default SGL*/
> +	/* Count of SGEs in default SGL */
> +	u32			def_sgl_count;
> +	/* pointer to current active SGL */
> +	struct efc_dma		*sgl;
> +	u32			sgl_count;	/* count of SGEs in io->sgl */
> +	u32			first_data_sge;	/* index of first data SGE */
> +	struct efc_dma		*ovfl_sgl;	/* overflow SGL */
> +	u32			ovfl_sgl_count;
> +	 /* pointer to overflow segment len */
> +	struct sli4_lsp_sge	*ovfl_lsp;
> +	u32			n_sge;		/* number of active SGEs */
> +	u32			sge_offset;
> +
> +	/* where upper layer can store ref to its IO */
> +	void			*ul_io;
> +};
> +
> +/* Typedef for HW "done" callback */
> +typedef int (*efct_hw_done_t)(struct efct_hw_io *, struct efc_remote_node *,
> +			      u32 len, int status, u32 ext, void *ul_arg);
> +
> +enum efct_hw_port {
> +	EFCT_HW_PORT_INIT,
> +	EFCT_HW_PORT_SHUTDOWN,
> +};
> +
> +/* Node group rpi reference */
> +struct efct_hw_rpi_ref {
> +	atomic_t rpi_count;
> +	atomic_t rpi_attached;
> +};
> +
> +enum efct_hw_link_stat {
> +	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
> +	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
> +	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
> +	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
> +	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
> +	EFCT_HW_LINK_STAT_CRC_COUNT,
> +	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
> +	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
> +	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
> +	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
> +	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
> +	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
> +	EFCT_HW_LINK_STAT_MAX,
> +};
> +
> +enum efct_hw_host_stat {
> +	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
> +	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
> +	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
> +	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
> +	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
> +	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
> +	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
> +	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
> +	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
> +	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
> +	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
> +	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
> +	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
> +	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
> +	EFCT_HW_HOST_STAT_MAX,
> +};
> +
> +enum efct_hw_state {
> +	EFCT_HW_STATE_UNINITIALIZED,
> +	EFCT_HW_STATE_QUEUES_ALLOCATED,
> +	EFCT_HW_STATE_ACTIVE,
> +	EFCT_HW_STATE_RESET_IN_PROGRESS,
> +	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,
> +};
> +
> +struct efct_hw_link_stat_counts {
> +	u8		overflow;
> +	u32		counter;
> +};
> +
> +struct efct_hw_host_stat_counts {
> +	u32		counter;
> +};
> +
> +/* Structure used for the hash lookup of queue IDs */
> +struct efct_queue_hash {
> +	bool		in_use;
> +	u16		id;
> +	u16		index;
> +};
> +
> +/* WQ callback object */
> +struct hw_wq_callback {
> +	u16		instance_index;	/* use for request tag */
> +	void (*callback)(void *arg, u8 *cqe, int status);
> +	void		*arg;
> +	struct list_head list_entry;
> +};
> +
> +struct reqtag_pool {
> +	spinlock_t lock;	/* pool lock */
> +	struct hw_wq_callback *tags[U16_MAX];
> +	struct list_head freelist;
> +};
> +
> +struct efct_hw_config {
> +	u32		n_eq;
> +	u32		n_cq;
> +	u32		n_mq;
> +	u32		n_rq;
> +	u32		n_wq;
> +	u32		n_io;
> +	u32		n_sgl;
> +	u32		speed;
> +	u32		topology;
> +	/* size of the buffers for first burst */
> +	u32		rq_default_buffer_size;
> +	u8		esoc;
> +	/* MRQ RQ selection policy */
> +	u8		rq_selection_policy;
> +	/* RQ quanta if rq_selection_policy == 2 */
> +	u8		rr_quanta;
> +	u32		filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
> +};
> +
> +struct efct_hw {
> +	struct efct		*os;
> +	struct sli4		sli;
> +	u16			ulp_start;
> +	u16			ulp_max;
> +	u32			dump_size;
> +	enum efct_hw_state	state;
> +	bool			hw_setup_called;
> +	u8			sliport_healthcheck;
> +
> +	/* HW configuration */
> +	struct efct_hw_config	config;
> +
> +	/* calculated queue sizes for each type */
> +	u32			num_qentries[SLI_QTYPE_MAX];
> +
> +	/* Storage for SLI queue objects */
> +	struct sli4_queue	wq[EFCT_HW_MAX_NUM_WQ];
> +	struct sli4_queue	rq[EFCT_HW_MAX_NUM_RQ];
> +	u16			hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
> +	struct sli4_queue	mq[EFCT_HW_MAX_NUM_MQ];
> +	struct sli4_queue	cq[EFCT_HW_MAX_NUM_CQ];
> +	struct sli4_queue	eq[EFCT_HW_MAX_NUM_EQ];
> +
> +	/* HW queue */
> +	u32			eq_count;
> +	u32			cq_count;
> +	u32			mq_count;
> +	u32			wq_count;
> +	u32			rq_count;
> +	struct list_head	eq_list;
> +
> +	struct efct_queue_hash	cq_hash[EFCT_HW_Q_HASH_SIZE];
> +	struct efct_queue_hash	rq_hash[EFCT_HW_Q_HASH_SIZE];
> +	struct efct_queue_hash	wq_hash[EFCT_HW_Q_HASH_SIZE];
> +
> +	/* Storage for HW queue objects */
> +	struct hw_wq		*hw_wq[EFCT_HW_MAX_NUM_WQ];
> +	struct hw_rq		*hw_rq[EFCT_HW_MAX_NUM_RQ];
> +	struct hw_mq		*hw_mq[EFCT_HW_MAX_NUM_MQ];
> +	struct hw_cq		*hw_cq[EFCT_HW_MAX_NUM_CQ];
> +	struct hw_eq		*hw_eq[EFCT_HW_MAX_NUM_EQ];
> +	/* count of hw_rq[] entries */
> +	u32			hw_rq_count;
> +	/* count of multirq RQs */
> +	u32			hw_mrq_count;
> +
> +	/* Sequence objects used in incoming frame processing */
> +	void			*seq_pool;
> +
> +	/* Maintain an ordered, linked list of outstanding HW commands. */
> +	spinlock_t		cmd_lock;
> +	struct list_head	cmd_head;
> +	struct list_head	cmd_pending;
> +	u32			cmd_head_count;
> +
> +	struct sli4_link_event	link;
> +	struct efc_domain	*domain;
> +
> +	u16			fcf_indicator;
> +
> +	/* pointer array of IO objects */
> +	struct efct_hw_io	**io;
> +	/* array of WQE buffs mapped to IO objects */
> +	u8			*wqe_buffs;
> +
> +	/* IO lock to synchronize list access */
> +	spinlock_t		io_lock;
> +	/* IO lock to synchronize IO aborting */
> +	spinlock_t		io_abort_lock;
> +	/* List of IO objects in use */
> +	struct list_head	io_inuse;
> +	/* List of IO objects waiting to be freed */
> +	struct list_head	io_wait_free;
> +	/* List of IO objects available for allocation */
> +	struct list_head	io_free;
> +
> +	struct efc_dma		loop_map;
> +
> +	struct efc_dma		xfer_rdy;
> +
> +	struct efc_dma		dump_sges;
> +
> +	struct efc_dma		rnode_mem;
> +
> +	struct efct_hw_rpi_ref	*rpi_ref;
> +
> +	atomic_t		io_alloc_failed_count;
> +
> +	/* stat: wq submit count */
> +	u32			tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
> +	/* stat: wq complete count */
> +	u32			tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
> +
> +	struct reqtag_pool	*wq_reqtag_pool;
> +	atomic_t		send_frame_seq_id;
> +};
> +
> +enum efct_hw_io_count_type {
> +	EFCT_HW_IO_INUSE_COUNT,
> +	EFCT_HW_IO_FREE_COUNT,
> +	EFCT_HW_IO_WAIT_FREE_COUNT,
> +	EFCT_HW_IO_N_TOTAL_IO_COUNT,
> +};
> +
> +/* HW queue data structures */
> +struct hw_eq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct efct_hw		*hw;
> +	struct sli4_queue	*queue;
> +	struct list_head	cq_list;
> +	u32			use_count;
> +};
> +
> +struct hw_cq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_eq		*eq;
> +	struct sli4_queue	*queue;
> +	struct list_head	q_list;
> +	u32			use_count;
> +};
> +
> +struct hw_q {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +};
> +
> +struct hw_mq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_cq		*cq;
> +	struct sli4_queue	*queue;
> +
> +	u32			use_count;
> +};
> +
> +struct hw_wq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +	struct efct_hw		*hw;
> +
> +	u32			entry_count;
> +	u32			entry_size;
> +	struct hw_cq		*cq;
> +	struct sli4_queue	*queue;
> +	u32			class;
> +
> +	/* WQ consumed */
> +	u32			wqec_set_count;
> +	u32			wqec_count;
> +	u32			free_count;
> +	u32			total_submit_count;
> +	struct list_head	pending_list;
> +
> +	/* HW IO allocated for use with Send Frame */
> +	struct efct_hw_io	*send_frame_io;
> +
> +	/* Stats */
> +	u32			use_count;
> +	u32			wq_pending_count;
> +};
> +
> +struct hw_rq {
> +	struct list_head	list_entry;
> +	enum sli4_qtype		type;
> +	u32			instance;
> +
> +	u32			entry_count;
> +	u32			use_count;
> +	u32			hdr_entry_size;
> +	u32			first_burst_entry_size;
> +	u32			data_entry_size;
> +	bool			is_mrq;
> +	u32			base_mrq_id;
> +
> +	struct hw_cq		*cq;
> +
> +	u8			filter_mask;
> +	struct sli4_queue	*hdr;
> +	struct sli4_queue	*first_burst;
> +	struct sli4_queue	*data;
> +
> +	struct efc_hw_rq_buffer	*hdr_buf;
> +	struct efc_hw_rq_buffer	*fb_buf;
> +	struct efc_hw_rq_buffer	*payload_buf;
> +	/* RQ tracker for this RQ */
> +	struct efc_hw_sequence	**rq_tracker;
> +};
> +
> +struct efct_hw_send_frame_context {
> +	struct efct_hw		*hw;
> +	struct hw_wq_callback	*wqcb;
> +	struct efct_hw_wqe	wqe;
> +	void (*callback)(int status, void *arg);
> +	void			*arg;
> +
> +	/* General purpose elements */
> +	struct efc_hw_sequence	*seq;
> +	struct efc_dma		payload;
> +};
> +
> +struct efct_hw_grp_hdr {
> +	u32			size;
> +	__be32			magic_number;
> +	u32			word2;
> +	u8			rev_name[128];
> +	u8			date[12];
> +	u8			revision[32];
> +};
> +
> +#endif /* __EFCT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2020-04-12  3:32 ` [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2020-04-16  7:24   ` Hannes Reinecke
  2020-04-23  3:16     ` James Smart
  2020-04-16  8:41   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  7:24 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> RQ data buffer allocation and deallocate.
> Memory pool allocation and deallocation APIs.
> Mailbox command submission and completion routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    efct_utils.c file is removed. Replaced efct_pool, efct_varray and
>    efct_array with other alternatives.
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 375 ++++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |   7 +
>   2 files changed, 382 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 21fcaf7b3d2b..3e9906749da2 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -1114,3 +1114,378 @@ efct_get_wwpn(struct efct_hw *hw)
>   	memcpy(p, sli->wwpn, sizeof(p));
>   	return get_unaligned_be64(p);
>   }
> +
> +/*
> + * An efct_hw_rx_buffer_t array is allocated,
> + * along with the required DMA mem
> + */
> +static struct efc_hw_rq_buffer *
> +efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
> +			u32 size)
> +{
> +	struct efct *efct = hw->os;
> +	struct efc_hw_rq_buffer *rq_buf = NULL;
> +	struct efc_hw_rq_buffer *prq;
> +	u32 i;
> +
> +	if (count != 0) {
> +		rq_buf = kmalloc_array(count, sizeof(*rq_buf), GFP_ATOMIC);
> +		if (!rq_buf)
> +			return NULL;
> +		memset(rq_buf, 0, sizeof(*rq_buf) * count);
> +
> +		for (i = 0, prq = rq_buf; i < count; i ++, prq++) {
> +			prq->rqindex = rqindex;
> +			prq->dma.size = size;
> +			prq->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +							   prq->dma.size,
> +							   &prq->dma.phys,
> +							   GFP_DMA);
> +			if (!prq->dma.virt) {
> +				efc_log_err(hw->os, "DMA allocation failed\n");
> +				kfree(rq_buf);
> +				rq_buf = NULL;
> +				break;
> +			}
> +		}
> +	}
> +	return rq_buf;
> +}
> +
> +static void
> +efct_hw_rx_buffer_free(struct efct_hw *hw,
> +		       struct efc_hw_rq_buffer *rq_buf,
> +			u32 count)
> +{
> +	struct efct *efct = hw->os;
> +	u32 i;
> +	struct efc_hw_rq_buffer *prq;
> +
> +	if (rq_buf) {
> +		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  prq->dma.size, prq->dma.virt,
> +					  prq->dma.phys);
> +			memset(&prq->dma, 0, sizeof(struct efc_dma));
> +		}
> +
> +		kfree(rq_buf);
> +	}
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_rx_allocate(struct efct_hw *hw)
> +{
> +	struct efct *efct = hw->os;
> +	u32 i;
> +	int rc = EFCT_HW_RTN_SUCCESS;
> +	u32 rqindex = 0;
> +	struct hw_rq *rq;
> +	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
> +	u32 payload_size = hw->config.rq_default_buffer_size;
> +
> +	rqindex = 0;
> +
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +
> +		/* Allocate header buffers */
> +		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
> +						      rq->entry_count,
> +						      hdr_size);
> +		if (!rq->hdr_buf) {
> +			efc_log_err(efct,
> +				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +			break;
> +		}
> +
> +		efc_log_debug(hw->os,
> +			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
> +			      i, rq->hdr->id, rq->entry_count, hdr_size);
> +
> +		rqindex++;
> +
> +		/* Allocate payload buffers */
> +		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
> +							  rq->entry_count,
> +							  payload_size);
> +		if (!rq->payload_buf) {
> +			efc_log_err(efct,
> +				     "efct_hw_rx_buffer_alloc payload_buf failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +			break;
> +		}
> +		efc_log_debug(hw->os,
> +			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
> +			      i, rq->data->id, rq->entry_count, payload_size);
> +		rqindex++;
> +	}
> +
> +	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
> +}
> +
> +/* Post the RQ data buffers to the chip */
> +enum efct_hw_rtn
> +efct_hw_rx_post(struct efct_hw *hw)
> +{
> +	u32 i;
> +	u32 idx;
> +	u32 rq_idx;
> +	int rc = 0;
> +
> +	if (!hw->seq_pool) {
> +		u32 count = 0;
> +
> +		for (i = 0; i < hw->hw_rq_count; i++)
> +			count += hw->hw_rq[i]->entry_count;
> +
> +		hw->seq_pool = kmalloc_array(count,
> +				sizeof(struct efc_hw_sequence),	GFP_KERNEL);
> +		if (!hw->seq_pool)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	/*
> +	 * In RQ pair mode, we MUST post the header and payload buffer at the
> +	 * same time.
> +	 */
> +	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
> +		struct hw_rq *rq = hw->hw_rq[rq_idx];
> +
> +		for (i = 0; i < rq->entry_count - 1; i++) {
> +			struct efc_hw_sequence *seq;
> +
> +			seq = hw->seq_pool + idx * sizeof(*seq);
> +			if (!seq) {
> +				rc = -1;
> +				break;
> +			}
> +			idx++;
> +			seq->header = &rq->hdr_buf[i];
> +			seq->payload = &rq->payload_buf[i];
> +			rc = efct_hw_sequence_free(hw, seq);
> +			if (rc)
> +				break;
> +		}
> +		if (rc)
> +			break;
> +	}
> +
> +	if (rc && hw->seq_pool)
> +		kfree(hw->seq_pool);
> +
> +	return rc;
> +}
> +
> +void
> +efct_hw_rx_free(struct efct_hw *hw)
> +{
> +	struct hw_rq *rq;
> +	u32 i;
> +
> +	/* Free hw_rq buffers */
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +		if (rq) {
> +			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
> +					       rq->entry_count);
> +			rq->hdr_buf = NULL;
> +			efct_hw_rx_buffer_free(hw, rq->payload_buf,
> +					       rq->entry_count);
> +			rq->payload_buf = NULL;
> +		}
> +	}
> +}
> +
> +static int
> +efct_hw_cmd_submit_pending(struct efct_hw *hw)
> +{
> +	struct efct_command_ctx *ctx = NULL;
> +	int rc = 0;
> +
> +	/* Assumes lock held */
> +

So make it a regular lockdep annotation then.
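
i.e. something like

	lockdep_assert_held(&hw->cmd_lock);

at the top of the function instead of the comment.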

> +	/* Only submit MQE if there's room */
> +	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
> +	       !list_empty(&hw->cmd_pending)) {
> +		ctx = list_first_entry(&hw->cmd_pending,
> +				       struct efct_command_ctx, list_entry);
> +		if (!ctx)
> +			break;
> +

Seeing that you always check for !ctx you might as well drop the 
'!list_empty' condition from the while() statement.

> +		list_del(&ctx->list_entry);
> +

And it might even be better to use 'list_for_each_entry_safe()' here

> +		INIT_LIST_HEAD(&ctx->list_entry);
> +		list_add_tail(&ctx->list_entry, &hw->cmd_head);
> +		hw->cmd_head_count++;
> +		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
> +			efc_log_test(hw->os,
> +				      "sli_queue_write failed: %d\n", rc);
> +			rc = -1;
> +			break;
> +		}

and break here for the 'cmd_head_count' condition.
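
Putting the three suggestions above together, roughly (untested):

	struct efct_command_ctx *ctx, *next;
	int rc = 0;

	lockdep_assert_held(&hw->cmd_lock);

	list_for_each_entry_safe(ctx, next, &hw->cmd_pending, list_entry) {
		if (hw->cmd_head_count >= EFCT_HW_MQ_DEPTH - 1)
			break;

		list_move_tail(&ctx->list_entry, &hw->cmd_head);
		hw->cmd_head_count++;

		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
			efc_log_test(hw->os, "sli_queue_write failed\n");
			rc = -1;
			break;
		}
	}
	return rc;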

> +	}
> +	return rc;
> +}
> +
> +/*
> + * Send a mailbox command to the hardware, and either wait for a completion
> + * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
> + */
> +enum efct_hw_rtn
> +efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb, void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	unsigned long flags = 0;
> +	void *bmbx = NULL;
> +
> +	/*
> +	 * If the chip is in an error state (UE'd) then reject this mailbox
> +	 *  command.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		efc_log_crit(hw->os,
> +			      "status=%#x error1=%#x error2=%#x\n",
> +			sli_reg_read_status(&hw->sli),
> +			sli_reg_read_err1(&hw->sli),
> +			sli_reg_read_err2(&hw->sli));
> +
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (opts == EFCT_CMD_POLL) {
> +		spin_lock_irqsave(&hw->cmd_lock, flags);
> +		bmbx = hw->sli.bmbx.virt;
> +
> +		memset(bmbx, 0, SLI4_BMBX_SIZE);
> +		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
> +
> +		if (sli_bmbx_command(&hw->sli) == 0) {
> +			rc = EFCT_HW_RTN_SUCCESS;
> +			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
> +		}
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);

See? You even _have_ a preallocated mailbox.
So you could use that to facilitate the loop topology detection I've 
commented on in an earlier patch.

> +	} else if (opts == EFCT_CMD_NOWAIT) {
> +		struct efct_command_ctx	*ctx = NULL;
> +
> +		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
> +		if (!ctx)
> +			return EFCT_HW_RTN_NO_RESOURCES;
> +
> +		memset(ctx, 0, sizeof(struct efct_command_ctx));
> +

But now you go and spoil it all again.
At the very least use a mempool here; not sure how frequent these calls 
are, but we're talking to the hardware here, so I assume it'll happen 
more than just once.
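
Roughly (untested; 'cmd_ctx_pool' is a made-up member name):

	/* once, at hw setup time */
	hw->cmd_ctx_pool = mempool_create_kmalloc_pool(EFCT_HW_MQ_DEPTH,
					sizeof(struct efct_command_ctx));

	/* here */
	ctx = mempool_alloc(hw->cmd_ctx_pool, GFP_ATOMIC);

with a matching mempool_free() in efct_hw_command_process().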

> +		if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +			efc_log_err(hw->os,
> +				     "Can't send command, HW state=%d\n",
> +				    hw->state);
> +			kfree(ctx);
> +			return EFCT_HW_RTN_ERROR;
> +		}
> +
> +		if (cb) {
> +			ctx->cb = cb;
> +			ctx->arg = arg;
> +		}
> +		ctx->buf = cmd;
> +		ctx->ctx = hw;
> +
> +		spin_lock_irqsave(&hw->cmd_lock, flags);
> +
> +			/* Add to pending list */
> +			INIT_LIST_HEAD(&ctx->list_entry);
> +			list_add_tail(&ctx->list_entry, &hw->cmd_pending);
> +
> +			/* Submit as much of the pending list as we can */
> +			if (efct_hw_cmd_submit_pending(hw) == 0)
> +				rc = EFCT_HW_RTN_SUCCESS;
> +

Indentation?

> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_command_process(struct efct_hw *hw, int status, u8 *mqe,
> +			size_t size)
> +{
> +	struct efct_command_ctx *ctx = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +	if (!list_empty(&hw->cmd_head)) {
> +		ctx = list_first_entry(&hw->cmd_head,
> +				       struct efct_command_ctx, list_entry);
> +		list_del(&ctx->list_entry);
> +	}
> +	if (!ctx) {
> +		efc_log_err(hw->os, "no command context?!?\n");
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		return EFC_FAIL;
> +	}
> +
> +	hw->cmd_head_count--;
> +
> +	/* Post any pending requests */
> +	efct_hw_cmd_submit_pending(hw);
> +
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	if (ctx->cb) {
> +		if (ctx->buf)
> +			memcpy(ctx->buf, mqe, size);
> +
> +		ctx->cb(hw, status, ctx->buf, ctx->arg);
> +	}
> +
> +	memset(ctx, 0, sizeof(struct efct_command_ctx));
> +	kfree(ctx);
> +
memset() before kfree() is pointless.
Use KASAN et al if you suspect memory issues.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization
  2020-04-12  3:32 ` [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization James Smart
@ 2020-04-16  7:32   ` Hannes Reinecke
  2020-04-16  8:47   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  7:32 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to create IO interfaces (wqs, etc), SGL initialization,
> and configure hardware features.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Request tag pool (reqtag_pool) handling functions.
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 657 ++++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |  42 +++
>   2 files changed, 699 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 3e9906749da2..892493a3a35e 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
[ .. ]
> +enum efct_hw_rtn
> +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
> +		 bool send_abts, void *cb, void *arg)
> +{
> +	enum sli4_abort_type atype = SLI_ABORT_MAX;
> +	u32 id = 0, mask = 0;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	struct hw_wq_callback *wqcb;
> +	unsigned long flags = 0;
> +
> +	if (!io_to_abort) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p\n",
> +			    hw, io_to_abort);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
> +			     hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* take a reference on IO being aborted */
> +	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
> +		/* command no longer active */
> +		efc_log_test(hw->os,
> +			      "io not active xri=0x%x tag=0x%x\n",
> +			     io_to_abort->indicator, io_to_abort->reqtag);
> +		return EFCT_HW_RTN_IO_NOT_ACTIVE;
> +	}
> +
> +	/* Must have a valid WQ reference */
> +	if (!io_to_abort->wq) {
> +		efc_log_test(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
> +			      io_to_abort->indicator);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		return EFCT_HW_RTN_IO_NOT_ACTIVE;
> +	}
> +
> +	/*
> +	 * Validation checks complete; now check to see if already being
> +	 * aborted
> +	 */
> +	spin_lock_irqsave(&hw->io_abort_lock, flags);
> +	if (io_to_abort->abort_in_progress) {
> +		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		efc_log_debug(hw->os,
> +			       "io already being aborted xri=0x%x tag=0x%x\n",
> +			      io_to_abort->indicator, io_to_abort->reqtag);
> +		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
> +	}
> +
> +	/*
> +	 * This IO is not already being aborted. Set flag so we won't try to
> +	 * abort it again. After all, we only have one abort_done callback.
> +	 */
> +	io_to_abort->abort_in_progress = true;
> +	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +
> +	/*
> +	 * If we got here, the possibilities are:
> +	 * - host owned xri
> +	 *	- io_to_abort->wq_index != U32_MAX
> +	 *		- submit ABORT_WQE to same WQ
> +	 * - port owned xri:
> +	 *	- rxri: io_to_abort->wq_index == U32_MAX
> +	 *		- submit ABORT_WQE to any WQ
> +	 *	- non-rxri
> +	 *		- io_to_abort->index != U32_MAX
> +	 *			- submit ABORT_WQE to same WQ
> +	 *		- io_to_abort->index == U32_MAX
> +	 *			- submit ABORT_WQE to any WQ
> +	 */
> +	io_to_abort->abort_done = cb;
> +	io_to_abort->abort_arg  = arg;
> +
> +	atype = SLI_ABORT_XRI;
> +	id = io_to_abort->indicator;
> +
> +	/* Allocate a request tag for the abort portion of this IO */
> +	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
> +	if (!wqcb) {
> +		efc_log_err(hw->os, "can't allocate request tag\n");
> +		return EFCT_HW_RTN_NO_RESOURCES;
> +	}
> +	io_to_abort->abort_reqtag = wqcb->instance_index;
> +
> +	/*
> +	 * If the wqe is on the pending list, then set this wqe to be
> +	 * aborted when the IO's wqe is removed from the list.
> +	 */
> +	if (io_to_abort->wq) {
> +		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
> +		if (io_to_abort->wqe.list_entry.next) {
> +			io_to_abort->wqe.abort_wqe_submit_needed = true;
> +			io_to_abort->wqe.send_abts = send_abts;
> +			io_to_abort->wqe.id = id;
> +			io_to_abort->wqe.abort_reqtag =
> +						 io_to_abort->abort_reqtag;
> +			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
> +					       flags);
> +			return EFC_SUCCESS;
> +		}
> +		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
> +	}
> +
> +	if (sli_abort_wqe(&hw->sli, io_to_abort->wqe.wqebuf,
> +			  hw->sli.wqe_size, atype, send_abts, id, mask,
> +			  io_to_abort->abort_reqtag, SLI4_CQ_DEFAULT)) {

Wouldn't it be better if we were to pass the 'wqe' directly here?
That would cut down on the number of arguments required, and make the 
function more readable.
Not to mention that we'll be more efficient if we manage to cut down the 
number of arguments to '4' or even less, as then we wouldn't need to 
pass arguments on the stack.
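
Sketch of what I mean (it does imply sli4.h learns about the hw wqe
struct, or we add a small shared abort-params struct):

int sli_abort_wqe(struct sli4 *sli, struct efct_hw_wqe *wqe,
		  enum sli4_abort_type type, u16 cq_id);

with send_abts, id and abort_reqtag taken from the wqe itself.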


Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 20/31] elx: efct: Hardware queues processing
  2020-04-12  3:32 ` [PATCH v3 20/31] elx: efct: Hardware queues processing James Smart
@ 2020-04-16  7:37   ` Hannes Reinecke
  2020-04-16  9:17   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  7:37 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for EQ, CQ, WQ and RQ processing.
> Routines for IO object pool allocation and deallocation.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Return defined values
>    Changed IO pool allocation logic to avoid using efct_pool.
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 369 ++++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |  36 ++++
>   drivers/scsi/elx/efct/efct_io.c | 198 +++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_io.h | 191 +++++++++++++++++++++
>   4 files changed, 794 insertions(+)
>   create mode 100644 drivers/scsi/elx/efct/efct_io.c
>   create mode 100644 drivers/scsi/elx/efct/efct_io.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 892493a3a35e..6cdc7e27b148 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -2146,3 +2146,372 @@ efct_hw_reqtag_reset(struct efct_hw *hw)
>   		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
>   	}
>   }
> +
> +int
> +efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id)
> +{
> +	int	rc = -1;
> +	int	index = id & (EFCT_HW_Q_HASH_SIZE - 1);
> +
> +	/*
> +	 * Since the hash is always bigger than the maximum number of Qs, then
> +	 * we never have to worry about an infinite loop. We will always find
> +	 * an unused entry.
> +	 */
> +	do {
> +		if (hash[index].in_use &&
> +		    hash[index].id == id)
> +			rc = hash[index].index;
> +		else
> +			index = (index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
> +	} while (rc == -1 && hash[index].in_use);
> +
> +	return rc;
> +}
> +
> +int
> +efct_hw_process(struct efct_hw *hw, u32 vector,
> +		u32 max_isr_time_msec)
> +{
> +	struct hw_eq *eq;
> +	int rc = 0;
> +
> +	/*
> +	 * The caller should disable interrupts if they wish to prevent us
> +	 * from processing during a shutdown. The following states are defined:
> +	 *   EFCT_HW_STATE_UNINITIALIZED - No queues allocated
> +	 *   EFCT_HW_STATE_QUEUES_ALLOCATED - The state after a chip reset,
> +	 *                                    queues are cleared.
> +	 *   EFCT_HW_STATE_ACTIVE - Chip and queues are operational
> +	 *   EFCT_HW_STATE_RESET_IN_PROGRESS - reset, we still want completions
> +	 *   EFCT_HW_STATE_TEARDOWN_IN_PROGRESS - We still want mailbox
> +	 *                                        completions.
> +	 */
> +	if (hw->state == EFCT_HW_STATE_UNINITIALIZED)
> +		return EFC_SUCCESS;
> +
> +	/* Get pointer to struct hw_eq */
> +	eq = hw->hw_eq[vector];
> +	if (!eq)
> +		return EFC_SUCCESS;
> +
> +	eq->use_count++;
> +
> +	rc = efct_hw_eq_process(hw, eq, max_isr_time_msec);
> +
> +	return rc;
> +}
> +
> +int
> +efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
> +		   u32 max_isr_time_msec)
> +{
> +	u8		eqe[sizeof(struct sli4_eqe)] = { 0 };
> +	u32	tcheck_count;
> +	time_t		tstart;
> +	time_t		telapsed;
> +	bool		done = false;
> +
> +	tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
> +	tstart = jiffies_to_msecs(jiffies);
> +
> +	while (!done && !sli_eq_read(&hw->sli, eq->queue, eqe)) {
> +		u16	cq_id = 0;
> +		int		rc;
> +
> +		rc = sli_eq_parse(&hw->sli, eqe, &cq_id);
> +		if (unlikely(rc)) {
> +			if (rc == SLI4_EQE_STATUS_EQ_FULL) {
> +				u32 i;
> +
> +				/*
> +				 * Received a sentinel EQE indicating the
> +				 * EQ is full. Process all CQs
> +				 */
> +				for (i = 0; i < hw->cq_count; i++)
> +					efct_hw_cq_process(hw, hw->hw_cq[i]);
> +				continue;
> +			} else {
> +				return rc;
> +			}
> +		} else {
> +			int index;
> +
> +			index  = efct_hw_queue_hash_find(hw->cq_hash, cq_id);
> +
> +			if (likely(index >= 0))
> +				efct_hw_cq_process(hw, hw->hw_cq[index]);
> +			else
> +				efc_log_err(hw->os, "bad CQ_ID %#06x\n",
> +					     cq_id);
> +		}
> +
> +		if (eq->queue->n_posted > eq->queue->posted_limit)
> +			sli_queue_arm(&hw->sli, eq->queue, false);
> +
> +		if (tcheck_count && (--tcheck_count == 0)) {
> +			tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
> +			telapsed = jiffies_to_msecs(jiffies) - tstart;
> +			if (telapsed >= max_isr_time_msec)
> +				done = true;
> +		}
> +	}
> +	sli_queue_eq_arm(&hw->sli, eq->queue, true);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +_efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
> +{
> +	int queue_rc;
> +
> +	/* Every so often, set the wqec bit to generate consumed completions */
> +	if (wq->wqec_count)
> +		wq->wqec_count--;
> +
> +	if (wq->wqec_count == 0) {
> +		struct sli4_generic_wqe *genwqe = (void *)wqe->wqebuf;
> +
> +		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
> +		wq->wqec_count = wq->wqec_set_count;
> +	}
> +
> +	/* Decrement WQ free count */
> +	wq->free_count--;
> +
> +	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
> +
> +	return (queue_rc < 0) ? -1 : 0;
> +}
> +
> +static void
> +hw_wq_submit_pending(struct hw_wq *wq, u32 update_free_count)
> +{
> +	struct efct_hw_wqe *wqe;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&wq->queue->lock, flags);
> +
> +	/* Update free count with value passed in */
> +	wq->free_count += update_free_count;
> +
> +	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
> +		wqe = list_first_entry(&wq->pending_list,
> +				       struct efct_hw_wqe, list_entry);
> +		list_del(&wqe->list_entry);
> +		_efct_hw_wq_write(wq, wqe);
> +
> +		if (wqe->abort_wqe_submit_needed) {
> +			wqe->abort_wqe_submit_needed = false;
> +			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
> +				      wq->hw->sli.wqe_size,
> +				      SLI_ABORT_XRI, wqe->send_abts, wqe->id,
> +				      0, wqe->abort_reqtag, SLI4_CQ_DEFAULT);
> +					  INIT_LIST_HEAD(&wqe->list_entry);

See? We could drop at least 4 arguments by passing in the wqe 
directly ...

> +			list_add_tail(&wqe->list_entry, &wq->pending_list);
> +			wq->wq_pending_count++;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&wq->queue->lock, flags);
> +}
> +
> +void
> +efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
> +{
> +	u8		cqe[sizeof(struct sli4_mcqe)];
> +	u16	rid = U16_MAX;
> +	enum sli4_qentry	ctype;		/* completion type */
> +	int		status;
> +	u32	n_processed = 0;
> +	u32	tstart, telapsed;
> +
> +	tstart = jiffies_to_msecs(jiffies);
> +
> +	while (!sli_cq_read(&hw->sli, cq->queue, cqe)) {
> +		status = sli_cq_parse(&hw->sli, cq->queue,
> +				      cqe, &ctype, &rid);
> +		/*
> +		 * The sign of status is significant. If status is:
> +		 * == 0 : call completed correctly and
> +		 * the CQE indicated success
> +		 * > 0 : call completed correctly and
> +		 * the CQE indicated an error
> +		 * < 0 : call failed and no information is available about the
> +		 * CQE
> +		 */
> +		if (status < 0) {
> +			if (status == SLI4_MCQE_STATUS_NOT_COMPLETED)
> +				/*
> +				 * Notification that an entry was consumed,
> +				 * but not completed
> +				 */
> +				continue;
> +
> +			break;
> +		}
> +
> +		switch (ctype) {
> +		case SLI_QENTRY_ASYNC:
> +			sli_cqe_async(&hw->sli, cqe);
> +			break;
> +		case SLI_QENTRY_MQ:
> +			/*
> +			 * Process MQ entry. Note there is no way to determine
> +			 * the MQ_ID from the completion entry.
> +			 */
> +			efct_hw_mq_process(hw, status, hw->mq);
> +			break;
> +		case SLI_QENTRY_WQ:
> +			efct_hw_wq_process(hw, cq, cqe, status, rid);
> +			break;
> +		case SLI_QENTRY_WQ_RELEASE: {
> +			u32 wq_id = rid;
> +			int index;
> +			struct hw_wq *wq = NULL;
> +
> +			index = efct_hw_queue_hash_find(hw->wq_hash, wq_id);
> +
> +			if (likely(index >= 0)) {
> +				wq = hw->hw_wq[index];
> +			} else {
> +				efc_log_err(hw->os, "bad WQ_ID %#06x\n", wq_id);
> +				break;
> +			}
> +			/* Submit any HW IOs that are on the WQ pending list */
> +			hw_wq_submit_pending(wq, wq->wqec_set_count);
> +
> +			break;
> +		}
> +
> +		case SLI_QENTRY_RQ:
> +			efct_hw_rqpair_process_rq(hw, cq, cqe);
> +			break;
> +		case SLI_QENTRY_XABT: {
> +			efct_hw_xabt_process(hw, cq, cqe, rid);
> +			break;
> +		}
> +		default:
> +			efc_log_test(hw->os,
> +				      "unhandled ctype=%#x rid=%#x\n",
> +				     ctype, rid);
> +			break;
> +		}
> +
> +		n_processed++;
> +		if (n_processed == cq->queue->proc_limit)
> +			break;
> +
> +		if (cq->queue->n_posted >= cq->queue->posted_limit)
> +			sli_queue_arm(&hw->sli, cq->queue, false);
> +	}
> +
> +	sli_queue_arm(&hw->sli, cq->queue, true);
> +
> +	if (n_processed > cq->queue->max_num_processed)
> +		cq->queue->max_num_processed = n_processed;
> +	telapsed = jiffies_to_msecs(jiffies) - tstart;
> +	if (telapsed > cq->queue->max_process_time)
> +		cq->queue->max_process_time = telapsed;
> +}
> +
> +void
> +efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
> +		   u8 *cqe, int status, u16 rid)
> +{
> +	struct hw_wq_callback *wqcb;
> +
> +	if (rid == EFCT_HW_REQUE_XRI_REGTAG) {
> +		if (status)
> +			efc_log_err(hw->os, "reque xri failed, status = %d\n",
> +				     status);
> +		return;
> +	}
> +
> +	wqcb = efct_hw_reqtag_get_instance(hw, rid);
> +	if (!wqcb) {
> +		efc_log_err(hw->os, "invalid request tag: x%x\n", rid);
> +		return;
> +	}
> +
> +	if (!wqcb->callback) {
> +		efc_log_err(hw->os, "wqcb callback is NULL\n");
> +		return;
> +	}
> +
> +	(*wqcb->callback)(wqcb->arg, cqe, status);
> +}
> +
> +void
> +efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
> +		     u8 *cqe, u16 rid)
> +{
> +	/* search IOs wait free list */
> +	struct efct_hw_io *io = NULL;
> +	unsigned long flags = 0;
> +
> +	io = efct_hw_io_lookup(hw, rid);
> +	if (!io) {
> +		/* IO lookup failure should never happen */
> +		efc_log_err(hw->os,
> +			     "Error: xabt io lookup failed rid=%#x\n", rid);
> +		return;
> +	}
> +
> +	if (!io->xbusy)
> +		efc_log_debug(hw->os, "xabt io not busy rid=%#x\n", rid);
> +	else
> +		/* mark IO as no longer busy */
> +		io->xbusy = false;
> +
> +	/*
> +	 * For IOs that were aborted internally, we need to issue any pending
> +	 * callback here.
> +	 */
> +	if (io->done) {
> +		efct_hw_done_t done = io->done;
> +		void		*arg = io->arg;
> +
> +		/*
> +		 * Use latched status as this is always saved for an internal
> +		 * abort
> +		 */
> +		int status = io->saved_status;
> +		u32 len = io->saved_len;
> +		u32 ext = io->saved_ext;
> +
> +		io->done = NULL;
> +		io->status_saved = false;
> +
> +		done(io, io->rnode, len, status, ext, arg);
> +	}
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	if (io->state == EFCT_HW_IO_STATE_INUSE ||
> +	    io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
> +		/* if on wait_free list, caller has already freed IO;
> +		 * remove from wait_free list and add to free list.
> +		 * if on in-use list, already marked as no longer busy;
> +		 * just leave there and wait for caller to free.
> +		 */
> +		if (io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
> +			io->state = EFCT_HW_IO_STATE_FREE;
> +			list_del(&io->list_entry);
> +			efct_hw_io_free_move_correct_list(hw, io);
> +		}
> +	}
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +}
> +
> +static int
> +efct_hw_flush(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +
> +	/* Process any remaining completions */
> +	for (i = 0; i < hw->eq_count; i++)
> +		efct_hw_process(hw, i, ~0);
> +
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 86736d5295ec..b427a4eda5a3 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -678,4 +678,40 @@ extern struct hw_wq_callback
>   *efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
>   void efct_hw_reqtag_reset(struct efct_hw *hw);
>   
> +/* RQ completion handlers for RQ pair mode */
> +extern int
> +efct_hw_rqpair_process_rq(struct efct_hw *hw,
> +			  struct hw_cq *cq, u8 *cqe);
> +extern
> +enum efct_hw_rtn efct_hw_rqpair_sequence_free(struct efct_hw *hw,
> +						struct efc_hw_sequence *seq);
> +static inline void
> +efct_hw_sequence_copy(struct efc_hw_sequence *dst,
> +		      struct efc_hw_sequence *src)
> +{
> +	/* Copy src to dst, then zero out the linked list link */
> +	*dst = *src;
> +}
> +
> +static inline enum efct_hw_rtn
> +efct_hw_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq)
> +{
> +	/* Only RQ pair mode is supported */
> +	return efct_hw_rqpair_sequence_free(hw, seq);
> +}
> +extern int
> +efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
> +		   u32 max_isr_time_msec);
> +void efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq);
> +extern void
> +efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
> +		   u8 *cqe, int status, u16 rid);
> +extern void
> +efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
> +		     u8 *cqe, u16 rid);
> +extern int
> +efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
> +extern int
> +efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
> +
>   #endif /* __EFCT_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_io.c b/drivers/scsi/elx/efct/efct_io.c
> new file mode 100644
> index 000000000000..8ea05b59c892
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_io.c
> @@ -0,0 +1,198 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_hw.h"
> +#include "efct_io.h"
> +
> +struct efct_io_pool {
> +	struct efct *efct;
> +	spinlock_t lock;	/* IO pool lock */
> +	u32 io_num_ios;		/* Total IOs allocated */
> +	struct efct_io *ios[EFCT_NUM_SCSI_IOS];
> +	struct list_head freelist;
> +
> +};
> +
> +struct efct_io_pool *
> +efct_io_pool_create(struct efct *efct, u32 num_sgl)
> +{
> +	u32 i = 0;
> +	struct efct_io_pool *io_pool;
> +	struct efct_io *io;
> +
> +	/* Allocate the IO pool */
> +	io_pool = kzalloc(sizeof(*io_pool), GFP_KERNEL);
> +	if (!io_pool)
> +		return NULL;
> +
> +	io_pool->efct = efct;
> +	INIT_LIST_HEAD(&io_pool->freelist);
> +	/* initialize IO pool lock */
> +	spin_lock_init(&io_pool->lock);
> +
> +	for (i = 0; i < EFCT_NUM_SCSI_IOS; i++) {
> +		io = kmalloc(sizeof(*io), GFP_KERNEL);
> +		if (!io)
> +			break;
> +
> +		io_pool->io_num_ios++;
> +		io_pool->ios[i] = io;
> +		io->tag = i;
> +		io->instance_index = i;
> +
> +		/* Allocate a response buffer */
> +		io->rspbuf.size = SCSI_RSP_BUF_LENGTH;
> +		io->rspbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						     io->rspbuf.size,
> +						     &io->rspbuf.phys, GFP_DMA);
> +		if (!io->rspbuf.virt) {
> +			efc_log_err(efct, "dma_alloc rspbuf failed\n");
> +			efct_io_pool_free(io_pool);
> +			return NULL;
> +		}
> +
> +		/* Allocate SGL */
> +		io->sgl = kzalloc(sizeof(*io->sgl) * num_sgl, GFP_ATOMIC);
> +		if (!io->sgl) {
> +			efct_io_pool_free(io_pool);
> +			return NULL;
> +		}
> +
> +		memset(io->sgl, 0, sizeof(*io->sgl) * num_sgl);
> +		io->sgl_allocated = num_sgl;
> +		io->sgl_count = 0;
> +
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &io_pool->freelist);
> +	}
> +
> +	return io_pool;
> +}
> +
> +int
> +efct_io_pool_free(struct efct_io_pool *io_pool)
> +{
> +	struct efct *efct;
> +	u32 i;
> +	struct efct_io *io;
> +
> +	if (io_pool) {
> +		efct = io_pool->efct;
> +
> +		for (i = 0; i < io_pool->io_num_ios; i++) {
> +			io = io_pool->ios[i];
> +			if (!io)
> +				continue;
> +
> +			kfree(io->sgl);
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  io->rspbuf.size, io->rspbuf.virt,
> +					  io->rspbuf.phys);
> +			memset(&io->rspbuf, 0, sizeof(struct efc_dma));
> +		}
> +
> +		kfree(io_pool);
> +		efct->xport->io_pool = NULL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +u32 efct_io_pool_allocated(struct efct_io_pool *io_pool)
> +{
> +	return io_pool->io_num_ios;
> +}
> +
> +struct efct_io *
> +efct_io_pool_io_alloc(struct efct_io_pool *io_pool)
> +{
> +	struct efct_io *io = NULL;
> +	struct efct *efct;
> +	unsigned long flags = 0;
> +
> +	efct = io_pool->efct;
> +
> +	spin_lock_irqsave(&io_pool->lock, flags);
> +
> +	if (!list_empty(&io_pool->freelist)) {
> +		io = list_first_entry(&io_pool->freelist, struct efct_io,
> +				     list_entry);
> +	}
> +
> +	if (io) {
> +		list_del(&io->list_entry);
> +		spin_unlock_irqrestore(&io_pool->lock, flags);
> +
> +		io->io_type = EFCT_IO_TYPE_MAX;
> +		io->hio_type = EFCT_HW_IO_MAX;
> +		io->hio = NULL;
> +		io->transferred = 0;
> +		io->efct = efct;
> +		io->timeout = 0;
> +		io->sgl_count = 0;
> +		io->tgt_task_tag = 0;
> +		io->init_task_tag = 0;
> +		io->hw_tag = 0;
> +		io->display_name = "pending";
> +		io->seq_init = 0;
> +		io->els_req_free = false;
> +		io->io_free = 0;
> +		io->release = NULL;
> +		atomic_add_return(1, &efct->xport->io_active_count);
> +		atomic_add_return(1, &efct->xport->io_total_alloc);
> +	} else {
> +		spin_unlock_irqrestore(&io_pool->lock, flags);
> +	}
> +	return io;
> +}
> +
> +/* Free an object used to track an IO */
> +void
> +efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io)
> +{
> +	struct efct *efct;
> +	struct efct_hw_io *hio = NULL;
> +	unsigned long flags = 0;
> +
> +	efct = io_pool->efct;
> +
> +	spin_lock_irqsave(&io_pool->lock, flags);
> +	hio = io->hio;
> +	io->hio = NULL;
> +	io->io_free = 1;
> +	INIT_LIST_HEAD(&io->list_entry);
> +	list_add(&io->list_entry, &io_pool->freelist);
> +	spin_unlock_irqrestore(&io_pool->lock, flags);
> +
> +	if (hio)
> +		efct_hw_io_free(&efct->hw, hio);
> +
> +	atomic_sub_return(1, &efct->xport->io_active_count);
> +	atomic_add_return(1, &efct->xport->io_total_free);
> +}
> +
> +/* Find an I/O given its node and ox_id */
> +struct efct_io *
> +efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
> +		    u16 ox_id, u16 rx_id)
> +{
> +	struct efct_io	*io = NULL;
> +	unsigned long flags = 0;
> +	u8 found = false;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	list_for_each_entry(io, &node->active_ios, list_entry) {
> +		if ((io->cmd_tgt && io->init_task_tag == ox_id) &&
> +		    (rx_id == 0xffff || io->tgt_task_tag == rx_id)) {
> +			if (kref_get_unless_zero(&io->ref))
> +				found = true;
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return found ? io : NULL;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_io.h b/drivers/scsi/elx/efct/efct_io.h
> new file mode 100644
> index 000000000000..e28d8ed2f7ff
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_io.h
> @@ -0,0 +1,191 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_IO_H__)
> +#define __EFCT_IO_H__
> +
> +#include "efct_lio.h"
> +
> +#define EFCT_LOG_ENABLE_IO_ERRORS(efct)		\
> +		(((efct) != NULL) ? (((efct)->logmask & (1U << 6)) != 0) : 0)
> +
> +#define io_error_log(io, fmt, ...)  \
> +	do { \
> +		if (EFCT_LOG_ENABLE_IO_ERRORS(io->efct)) \
> +			efc_log_warn(io->efct, fmt, ##__VA_ARGS__); \
> +	} while (0)
> +
> +#define SCSI_CMD_BUF_LENGTH	48
> +#define SCSI_RSP_BUF_LENGTH	(FCP_RESP_WITH_EXT + SCSI_SENSE_BUFFERSIZE)
> +#define EFCT_NUM_SCSI_IOS	8192
> +
> +enum efct_io_type {
> +	EFCT_IO_TYPE_IO = 0,
> +	EFCT_IO_TYPE_ELS,
> +	EFCT_IO_TYPE_CT,
> +	EFCT_IO_TYPE_CT_RESP,
> +	EFCT_IO_TYPE_BLS_RESP,
> +	EFCT_IO_TYPE_ABORT,
> +
> +	EFCT_IO_TYPE_MAX,
> +};
> +
> +enum efct_els_state {
> +	EFCT_ELS_REQUEST = 0,
> +	EFCT_ELS_REQUEST_DELAYED,
> +	EFCT_ELS_REQUEST_DELAY_ABORT,
> +	EFCT_ELS_REQ_ABORT,
> +	EFCT_ELS_REQ_ABORTED,
> +	EFCT_ELS_ABORT_IO_COMPL,
> +};
> +
> +struct efct_io {
> +	struct list_head	list_entry;
> +	struct list_head	io_pending_link;
> +	/* reference counter and callback function */
> +	struct kref		ref;
> +	void (*release)(struct kref *arg);
> +	/* pointer back to efct */
> +	struct efct		*efct;
> +	/* unique instance index value */
> +	u32			instance_index;
> +	/* display name */
> +	const char		*display_name;
> +	/* pointer to node */
> +	struct efc_node		*node;
> +	/* (io_pool->io_free_list) free list link */
> +	/* initiator task tag (OX_ID) for back-end and SCSI logging */
> +	u32			init_task_tag;
> +	/* target task tag (RX_ID) - for back-end and SCSI logging */
> +	u32			tgt_task_tag;
> +	/* HW layer unique IO id - for back-end and SCSI logging */
> +	u32			hw_tag;
> +	/* unique IO identifier */
> +	u32			tag;
> +	/* SGL */
> +	struct efct_scsi_sgl	*sgl;
> +	/* Number of allocated SGEs */
> +	u32			sgl_allocated;
> +	/* Number of SGEs in this SGL */
> +	u32			sgl_count;
> +	/* backend target private IO data */
> +	struct efct_scsi_tgt_io tgt_io;
> +	/* expected data transfer length, based on FC header */
> +	u32			exp_xfer_len;
> +
> +	/* Declarations private to HW/SLI */
> +	void			*hw_priv;
> +
> +	/* indicates what this struct efct_io structure is used for */
> +	enum efct_io_type	io_type;
> +	struct efct_hw_io	*hio;
> +	size_t			transferred;
> +
> +	/* set if auto_trsp was set */
> +	bool			auto_resp;
> +	/* set if low latency request */
> +	bool			low_latency;
> +	/* selected WQ steering request */
> +	u8			wq_steering;
> +	/* selected WQ class if steering is class */
> +	u8			wq_class;
> +	/* transfer size for current request */
> +	u64			xfer_req;
> +	/* target callback function */
> +	efct_scsi_io_cb_t	scsi_tgt_cb;
> +	/* target callback function argument */
> +	void			*scsi_tgt_cb_arg;
> +	/* abort callback function */
> +	efct_scsi_io_cb_t	abort_cb;
> +	/* abort callback function argument */
> +	void			*abort_cb_arg;
> +	/* BLS callback function */
> +	efct_scsi_io_cb_t	bls_cb;
> +	/* BLS callback function argument */
> +	void			*bls_cb_arg;
> +	/* TMF command being processed */
> +	enum efct_scsi_tmf_cmd	tmf_cmd;
> +	/* rx_id from the ABTS that initiated the command abort */
> +	u16			abort_rx_id;
> +
> +	/* True if this is a Target command */
> +	bool			cmd_tgt;
> +	/* when aborting, indicates ABTS is to be sent */
> +	bool			send_abts;
> +	/* True if this is an Initiator command */
> +	bool			cmd_ini;
> +	/* True if local node has sequence initiative */
> +	bool			seq_init;
> +	/* iparams for hw io send call */
> +	union efct_hw_io_param_u iparam;
> +	/* HW IO type */
> +	enum efct_hw_io_type	hio_type;
> +	/* wire length */
> +	u64			wire_len;
> +	/* saved HW callback */
> +	void			*hw_cb;
> +
> +	/* for ELS requests/responses */
> +	/* True if ELS is pending */
> +	bool			els_pend;
> +	/* True if ELS is active */
> +	bool			els_active;
> +	/* ELS request payload buffer */
> +	struct efc_dma		els_req;
> +	/* ELS response payload buffer */
> +	struct efc_dma		els_rsp;
> +	bool			els_req_free;
> +	/* Retries remaining */
> +	u32			els_retries_remaining;
> +	void (*els_callback)(struct efc_node *node,
> +			     struct efc_node_cb *cbdata, void *cbarg);
> +	void			*els_callback_arg;
> +	/* timeout */
> +	u32			els_timeout_sec;
> +
> +	/* delay timer */
> +	struct timer_list	delay_timer;
> +
> +	/* for abort handling */
> +	/* pointer to IO to abort */
> +	struct efct_io		*io_to_abort;
> +
> +	enum efct_els_state	state;
> +	/* Protects els cmds */
> +	spinlock_t		els_lock;
> +
> +	/* SCSI Response buffer */
> +	struct efc_dma		rspbuf;
> +	/* Timeout value in seconds for this IO */
> +	u32			timeout;
> +	/* CS_CTL priority for this IO */
> +	u8			cs_ctl;
> +	/* Is io object in freelist? */
> +	u8			io_free;
> +	u32			app_id;
> +};
> +
> +struct efct_io_cb_arg {
> +	int status;		/* completion status */
> +	int ext_status;		/* extended completion status */
> +	void *app;		/* application argument */
> +};
> +
> +struct efct_io_pool *
> +efct_io_pool_create(struct efct *efct, u32 num_sgl);
> +extern int
> +efct_io_pool_free(struct efct_io_pool *io_pool);
> +extern u32
> +efct_io_pool_allocated(struct efct_io_pool *io_pool);
> +
> +extern struct efct_io *
> +efct_io_pool_io_alloc(struct efct_io_pool *io_pool);
> +extern void
> +efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io);
> +extern struct efct_io *
> +efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
> +		    u16 ox_id, u16 rx_id);
> +#endif /* __EFCT_IO_H__ */
> 
Other than that:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 22/31] elx: efct: Extended link Service IO handling
  2020-04-12  3:32 ` [PATCH v3 22/31] elx: efct: Extended link Service IO handling James Smart
@ 2020-04-16  7:58   ` Hannes Reinecke
  2020-04-23  3:30     ` James Smart
  2020-04-16  9:49   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  7:58 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Functions to build and send ELS/CT/BLS commands and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Unified log message using cmd_name
>    Return and drop else, for better indentation and consistency.
>    Changed assertion log messages.
> ---
>   drivers/scsi/elx/efct/efct_els.c | 1928 ++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_els.h |  133 +++
>   2 files changed, 2061 insertions(+)
>   create mode 100644 drivers/scsi/elx/efct/efct_els.c
>   create mode 100644 drivers/scsi/elx/efct/efct_els.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
> new file mode 100644
> index 000000000000..8a2598a83445
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_els.c
> @@ -0,0 +1,1928 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Functions to build and send ELS/CT/BLS commands and responses.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_els.h"
> +
> +#define ELS_IOFMT "[i:%04x t:%04x h:%04x]"
> +
> +#define EFCT_LOG_ENABLE_ELS_TRACE(efct)		\
> +		(((efct) != NULL) ? (((efct)->logmask & (1U << 1)) != 0) : 0)
> +
> +#define node_els_trace()  \
> +	do { \
> +		if (EFCT_LOG_ENABLE_ELS_TRACE(efct)) \
> +			efc_log_info(efct, "[%s] %-20s\n", \
> +				node->display_name, __func__); \
> +	} while (0)
> +
> +#define els_io_printf(els, fmt, ...) \
> +	efc_log_debug((struct efct *)els->node->efc->base,\
> +		      "[%s]" ELS_IOFMT " %-8s " fmt, \
> +		      els->node->display_name,\
> +		      els->init_task_tag, els->tgt_task_tag, els->hw_tag,\
> +		      els->display_name, ##__VA_ARGS__)
> +

Why not simply use dyndebug here?
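
E.g. something along these lines (untested sketch, just to illustrate the
idea; 'efct' and 'node' are the same locals the current macro relies on):

	#define node_els_trace() \
		dev_dbg(&efct->pcidev->dev, "[%s] %-20s\n", \
			node->display_name, __func__)

That way the trace can be enabled per call-site at runtime via
/sys/kernel/debug/dynamic_debug/control instead of a module-wide logmask.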

> +#define EFCT_ELS_RSP_LEN		1024
> +#define EFCT_ELS_GID_PT_RSP_LEN		8096
> +
> +static char *cmd_name[] = FC_ELS_CMDS_INIT;
> +
> +void *
> +efct_els_req_send(struct efc *efc, struct efc_node *node, u32 cmd,
> +		  u32 timeout_sec, u32 retries)
> +{
> +	struct efct *efct = efc->base;
> +
> +	efc_log_debug(efct, "send %s\n", cmd_name[cmd]);
> +
> +	switch (cmd) {
> +	case ELS_PLOGI:
> +		return efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_FLOGI:
> +		return efct_send_flogi(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_FDISC:
> +		return efct_send_fdisc(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_LOGO:
> +		return efct_send_logo(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_PRLI:
> +		return efct_send_prli(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_ADISC:
> +		return efct_send_adisc(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_SCR:
> +		return efct_send_scr(node, timeout_sec, retries, NULL, NULL);
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +

You and your 'void *' functions always returning NULL ...
Please make them normal void functions.
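
I.e. roughly this (sketch only, not compile-tested; the matching
libefc_function_template callback prototypes would need the same change):

	void
	efct_els_req_send(struct efc *efc, struct efc_node *node, u32 cmd,
			  u32 timeout_sec, u32 retries)
	{
		struct efct *efct = efc->base;

		efc_log_debug(efct, "send %s\n", cmd_name[cmd]);

		switch (cmd) {
		case ELS_PLOGI:
			efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
			break;
		/* ... remaining cases unchanged ... */
		default:
			efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
		}
	}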

> +void *
> +efct_els_resp_send(struct efc *efc, struct efc_node *node,
> +		   u32 cmd, u16 ox_id)
> +{
> +	struct efct *efct = efc->base;
> +
> +	switch (cmd) {
> +	case ELS_PLOGI:
> +		efct_send_plogi_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_FLOGI:
> +		efct_send_flogi_acc(node, ox_id, 0, NULL, NULL);
> +		break;
> +	case ELS_LOGO:
> +		efct_send_logo_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PRLI:
> +		efct_send_prli_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PRLO:
> +		efct_send_prlo_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_ADISC:
> +		efct_send_adisc_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_LS_ACC:
> +		efct_send_ls_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PDISC:
> +	case ELS_FDISC:
> +	case ELS_RSCN:
> +	case ELS_SCR:
> +		efct_send_ls_rjt(efc, node, ox_id, ELS_RJT_UNAB,
> +				 ELS_EXPL_NONE, 0);
> +		break;
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +

Same here.

> +struct efct_io *
> +efct_els_io_alloc(struct efc_node *node, u32 reqlen,
> +		  enum efct_els_role role)
> +{
> +	return efct_els_io_alloc_size(node, reqlen, EFCT_ELS_RSP_LEN, role);
> +}
> +
> +struct efct_io *
> +efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
> +		       u32 rsplen, enum efct_els_role role)
> +{
> +	struct efct *efct;
> +	struct efct_xport *xport;
> +	struct efct_io *els;
> +	unsigned long flags = 0;
> +
> +	efct = node->efc->base;
> +
> +	xport = efct->xport;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +
> +	if (!node->io_alloc_enabled) {
> +		efc_log_debug(efct,
> +			       "called with io_alloc_enabled = FALSE\n");
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	els = efct_io_pool_io_alloc(efct->xport->io_pool);
> +	if (!els) {
> +		atomic_add_return(1, &xport->io_alloc_failed_count);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	/* initialize refcount */
> +	kref_init(&els->ref);
> +	els->release = _efct_els_io_free;
> +
> +	switch (role) {
> +	case EFCT_ELS_ROLE_ORIGINATOR:
> +		els->cmd_ini = true;
> +		els->cmd_tgt = false;
> +		break;
> +	case EFCT_ELS_ROLE_RESPONDER:
> +		els->cmd_ini = false;
> +		els->cmd_tgt = true;
> +		break;
> +	}
> +
> +	/* IO should not have an associated HW IO yet.
> +	 * Assigned below.
> +	 */
> +	if (els->hio) {
> +		efc_log_err(efct, "Error: HW io not null hio:%p\n", els->hio);
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	/* populate generic io fields */
> +	els->efct = efct;
> +	els->node = node;
> +
> +	/* set type and ELS-specific fields */
> +	els->io_type = EFCT_IO_TYPE_ELS;
> +	els->display_name = "pending";
> +
> +	/* now allocate DMA for request and response */
> +	els->els_req.size = reqlen;
> +	els->els_req.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					       els->els_req.size,
> +					       &els->els_req.phys,
> +					       GFP_DMA);
> +	if (els->els_req.virt) {
> +		els->els_rsp.size = rsplen;
> +		els->els_rsp.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       els->els_rsp.size,
> +						       &els->els_rsp.phys,
> +						       GFP_DMA);
> +		if (!els->els_rsp.virt) {
> +			efc_log_err(efct, "dma_alloc rsp\n");
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  els->els_req.size,
> +				els->els_req.virt, els->els_req.phys);
> +			memset(&els->els_req, 0, sizeof(struct efc_dma));
> +			efct_io_pool_io_free(efct->xport->io_pool, els);
> +			els = NULL;
> +		}
> +	} else {
> +		efc_log_err(efct, "dma_alloc req\n");
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +		els = NULL;
> +	}
> +
> +	if (els) {
> +		/* initialize fields */
> +		els->els_retries_remaining =
> +					EFCT_FC_ELS_DEFAULT_RETRIES;
> +		els->els_pend = false;
> +		els->els_active = false;
> +
> +		/* add els structure to ELS IO list */
> +		INIT_LIST_HEAD(&els->list_entry);
> +		list_add_tail(&els->list_entry,
> +			      &node->els_io_pend_list);
> +		els->els_pend = true;
> +	}
> +
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return els;
> +}
> +
> +void
> +efct_els_io_free(struct efct_io *els)
> +{
> +	kref_put(&els->ref, els->release);
> +}
> +
> +void
> +_efct_els_io_free(struct kref *arg)
> +{
> +	struct efct_io *els = container_of(arg, struct efct_io, ref);
> +	struct efct *efct;
> +	struct efc_node *node;
> +	int send_empty_event = false;
> +	unsigned long flags = 0;
> +
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		if (els->els_active) {
> +			/* if active, remove from active list and check empty */
> +			list_del(&els->list_entry);
> +			/* Send list empty event if the IO allocator
> +			 * is disabled, and the list is empty
> +			 * If node->io_alloc_enabled was not checked,
> +			 * the event would be posted continually
> +			 */
> +			send_empty_event = (!node->io_alloc_enabled) &&
> +				list_empty(&node->els_io_active_list);
> +			els->els_active = false;
> +		} else if (els->els_pend) {
> +			/* if pending, remove from pending list;
> +			 * node shutdown isn't gated off the
> +			 * pending list (only the active list),
> +			 * so no need to check if pending list is empty
> +			 */
> +			list_del(&els->list_entry);
> +			els->els_pend = 0;
> +		} else {
> +			efc_log_err(efct,
> +				"Error: els not in pending or active set\n");
> +			spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +			return;
> +		}
> +
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +
> +	/* free ELS request and response buffers */
> +	dma_free_coherent(&efct->pcidev->dev, els->els_rsp.size,
> +			  els->els_rsp.virt, els->els_rsp.phys);
> +	dma_free_coherent(&efct->pcidev->dev, els->els_req.size,
> +			  els->els_req.virt, els->els_req.phys);
> +
> +	memset(&els->els_rsp, 0, sizeof(struct efc_dma));
> +	memset(&els->els_req, 0, sizeof(struct efc_dma));
> +	efct_io_pool_io_free(efct->xport->io_pool, els);
> +
> +	if (send_empty_event)
> +		efc_scsi_io_list_empty(node->efc, node);
> +
> +	efct_scsi_check_pending(efct);
> +}
> +
> +static void
> +efct_els_make_active(struct efct_io *els)
> +{
> +	struct efc_node *node = els->node;
> +	unsigned long flags = 0;
> +
> +	/* move ELS from pending list to active list */
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		if (els->els_pend) {
> +			if (els->els_active) {
> +				efc_log_err(node->efc,
> +					"Error: els in both pend and active\n");
> +				spin_unlock_irqrestore(&node->active_ios_lock,
> +						       flags);
> +				return;
> +			}
> +			/* remove from pending list */
> +			list_del(&els->list_entry);
> +			els->els_pend = false;
> +
> +			/* add els structure to ELS IO list */
> +			INIT_LIST_HEAD(&els->list_entry);
> +			list_add_tail(&els->list_entry,
> +				      &node->els_io_active_list);
> +			els->els_active = true;
> +		} else {
> +			/* must be retrying; make sure it's already active */
> +			if (!els->els_active) {
> +				efc_log_err(node->efc,
> +					"Error: els not in pending or active set\n");
> +			}
> +		}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +static int efct_els_send(struct efct_io *els, u32 reqlen,
> +			 u32 timeout_sec, efct_hw_srrs_cb_t cb)
> +{
> +	struct efc_node *node = els->node;
> +
> +	/* update ELS request counter */
> +	node->els_req_cnt++;
> +
> +	/* move ELS from pending list to active list */
> +	efct_els_make_active(els);
> +
> +	els->wire_len = reqlen;
> +	return efct_scsi_io_dispatch(els, cb);
> +}
> +
> +static void
> +efct_els_retry(struct efct_io *els);
> +
> +static void
> +efct_els_delay_timer_cb(struct timer_list *t)
> +{
> +	struct efct_io *els = from_timer(els, t, delay_timer);
> +	struct efc_node *node = els->node;
> +
> +	/* Retry delay timer expired, retry the ELS request,
> +	 * Free the HW IO so that a new oxid is used.
> +	 */
> +	if (els->state == EFCT_ELS_REQUEST_DELAY_ABORT) {
> +		node->els_req_cnt++;
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    NULL);
> +	} else {
> +		efct_els_retry(els);
> +	}
> +}
> +
> +static void
> +efct_els_abort_cleanup(struct efct_io *els)
> +{
> +	/* handle event for ABORT_WQE
> +	 * whatever state ELS happened to be in, propagate aborted even
> +	 * up to node state machine in lieu of EFC_HW_SRRS_ELS_* event
> +	 */
> +	struct efc_node_cb cbdata;
> +
> +	cbdata.status = 0;
> +	cbdata.ext_status = 0;
> +	cbdata.els_rsp = els->els_rsp;
> +	els_io_printf(els, "Request aborted\n");
> +	efct_els_io_cleanup(els, EFC_HW_ELS_REQ_ABORTED, &cbdata);
> +}
> +
> +static int
> +efct_els_req_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		u32 length, int status, u32 ext_status, void *arg)
> +{
> +	struct efct_io *els;
> +	struct efc_node *node;
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +	u32 reason_code;
> +
> +	els = arg;
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	if (status != 0)
> +		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
> +
> +	/* set the response len element of els->rsp */
> +	els->els_rsp.len = length;
> +
> +	cbdata.status = status;
> +	cbdata.ext_status = ext_status;
> +	cbdata.header = NULL;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	/* FW returns the number of bytes received on the link in
> +	 * the WCQE, not the amount placed in the buffer; use this info to
> +	 * check if there was an overrun.
> +	 */
> +	if (length > els->els_rsp.size) {
> +		efc_log_warn(efct,
> +			      "ELS response returned len=%d > buflen=%zu\n",
> +			     length, els->els_rsp.size);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
> +		return EFC_SUCCESS;
> +	}
> +
> +	/* Post event to ELS IO object */
> +	switch (status) {
> +	case SLI4_FC_WCQE_STATUS_SUCCESS:
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_OK, &cbdata);
> +		break;
> +
> +	case SLI4_FC_WCQE_STATUS_LS_RJT:
> +		reason_code = (ext_status >> 16) & 0xff;
> +
> +		/* delay and retry if reason code is Logical Busy */
> +		switch (reason_code) {
> +		case ELS_RJT_BUSY:
> +			els->node->els_req_cnt--;
> +			els_io_printf(els,
> +				      "LS_RJT Logical Busy response,delay and retry\n");
> +			timer_setup(&els->delay_timer,
> +				    efct_els_delay_timer_cb, 0);
> +			mod_timer(&els->delay_timer,
> +				  jiffies + msecs_to_jiffies(5000));
> +			els->state = EFCT_ELS_REQUEST_DELAYED;
> +			break;
> +		default:
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_RJT,
> +					    &cbdata);
> +			break;
> +		}
> +		break;
> +
> +	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
> +		switch (ext_status) {
> +		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
> +			efct_els_retry(els);
> +			break;
> +
> +		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
> +			if (els->state == EFCT_ELS_ABORT_IO_COMPL) {
> +				/* completion for ELS that was aborted */
> +				efct_els_abort_cleanup(els);
> +			} else {
> +				/* completion for ELS received first,
> +				 * transition to wait for abort cmpl
> +				 */
> +				els->state = EFCT_ELS_REQ_ABORTED;
> +			}
> +
> +			break;
> +		default:
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    &cbdata);
> +			break;
> +		}
> +		break;
> +	default:	/* Other error */
> +		efc_log_warn(efct,
> +			      "els req failed status x%x, ext_status, x%x\n",
> +					status, ext_status);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void efct_els_send_req(struct efc_node *node, struct efct_io *els)
> +{
> +	int rc = 0;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +	rc = efct_els_send(els, els->els_req.size,
> +			   els->els_timeout_sec, efct_els_req_cb);
> +
> +	if (rc) {
> +		struct efc_node_cb cbdata;
> +
> +		cbdata.status = INT_MAX;
> +		cbdata.ext_status = INT_MAX;
> +		cbdata.els_rsp = els->els_rsp;
> +		efc_log_err(efct, "efct_els_send failed: %d\n", rc);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +				    &cbdata);
> +	}
> +}
> +
> +static void
> +efct_els_retry(struct efct_io *els)
> +{
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +
> +	efct = els->node->efc->base;
> +	cbdata.status = INT_MAX;
> +	cbdata.ext_status = INT_MAX;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	if (!els->els_retries_remaining) {
> +		efc_log_err(efct, "ELS retries exhausted\n");
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +				    &cbdata);
> +		return;
> +	}
> +
> +	els->els_retries_remaining--;
> +	 /* Free the HW IO so that a new oxid is used.*/
> +	if (els->hio) {
> +		efct_hw_io_free(&efct->hw, els->hio);
> +		els->hio = NULL;
> +	}
> +
> +	efct_els_send_req(els->node, els);
> +}
> +
> +static int
> +efct_els_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		u32 length, int status, u32 ext_status, void *arg)
> +{
> +	struct efct_io *els;
> +	struct efc_node *node;
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +
> +	els = arg;
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	cbdata.status = status;
> +	cbdata.ext_status = ext_status;
> +	cbdata.header = NULL;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	/* Post node event */
> +	switch (status) {
> +	case SLI4_FC_WCQE_STATUS_SUCCESS:
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_OK, &cbdata);
> +		break;
> +
> +	default:	/* Other error */
> +		efc_log_warn(efct,
> +			      "[%s] %-8s failed status x%x, ext_status x%x\n",
> +			    node->display_name, els->display_name,
> +			    status, ext_status);
> +		efc_log_warn(efct,
> +			      "els acc complete: failed status x%x, ext_status, x%x\n",
> +		     status, ext_status);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_FAIL, &cbdata);
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_els_send_rsp(struct efct_io *els, u32 rsplen)
> +{
> +	struct efc_node *node = els->node;
> +
> +	/* increment ELS completion counter */
> +	node->els_cmpl_cnt++;
> +
> +	/* move ELS from pending list to active list */
> +	efct_els_make_active(els);
> +
> +	els->wire_len = rsplen;
> +	return efct_scsi_io_dispatch(els, efct_els_acc_cb);
> +}
> +
> +struct efct_io *
> +efct_send_plogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries,
> +	      void (*cb)(struct efc_node *node,
> +			 struct efc_node_cb *cbdata, void *arg), void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *plogi;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "plogi";
> +
> +	/* Build PLOGI request */
> +	plogi = els->els_req.virt;
> +
> +	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
> +
> +	plogi->fl_cmd = ELS_PLOGI;
> +	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_flogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_flogi  *flogi;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "flogi";
> +
> +	/* Build FLOGI request */
> +	flogi = els->els_req.virt;
> +
> +	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +	flogi->fl_cmd = ELS_FLOGI;
> +	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_flogi *fdisc;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*fdisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "fdisc";
> +
> +	/* Build FDISC request */
> +	fdisc = els->els_req.virt;
> +
> +	memcpy(fdisc, node->sport->service_params, sizeof(*fdisc));
> +	fdisc->fl_cmd = ELS_FDISC;
> +	memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_prli(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *els;
> +	struct {
> +		struct fc_els_prli prli;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "prli";
> +
> +	/* Build PRLI request */
> +	pp = els->els_req.virt;
> +
> +	memset(pp, 0, sizeof(*pp));
> +
> +	pp->prli.prli_cmd = ELS_PRLI;
> +	pp->prli.prli_spp_len = 16;
> +	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
> +	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
> +			       (node->sport->enable_ini ?
> +			       FCP_SPPF_INIT_FCN : 0) |
> +			       (node->sport->enable_tgt ?
> +			       FCP_SPPF_TARG_FCN : 0));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_prlo(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *els;
> +	struct {
> +		struct fc_els_prlo prlo;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "prlo";
> +
> +	/* Build PRLO request */
> +	pp = els->els_req.virt;
> +
> +	memset(pp, 0, sizeof(*pp));
> +	pp->prlo.prlo_cmd = ELS_PRLO;
> +	pp->prlo.prlo_obs = 0x10;
> +	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
> +
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_logo(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_logo *logo;
> +	struct fc_els_flogi  *sparams;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	sparams = (struct fc_els_flogi *)node->sport->service_params;
> +
> +	els = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "logo";
> +
> +	/* Build LOGO request */
> +
> +	logo = els->els_req.virt;
> +
> +	memset(logo, 0, sizeof(*logo));
> +	logo->fl_cmd = ELS_LOGO;
> +	hton24(logo->fl_n_port_id, node->rnode.sport->fc_id);
> +	logo->fl_n_port_wwn = sparams->fl_wwpn;
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_adisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_adisc *adisc;
> +	struct fc_els_flogi  *sparams;
> +	struct efc_sli_port *sport = node->sport;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	sparams = (struct fc_els_flogi *)node->sport->service_params;
> +
> +	els = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "adisc";
> +
> +	/* Build ADISC request */
> +
> +	adisc = els->els_req.virt;
> +
> +	memset(adisc, 0, sizeof(*adisc));
> +	adisc->adisc_cmd = ELS_ADISC;
> +	hton24(adisc->adisc_hard_addr, sport->fc_id);
> +	adisc->adisc_wwpn = sparams->fl_wwpn;
> +	adisc->adisc_wwnn = sparams->fl_wwnn;
> +	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *pdisc;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pdisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "pdisc";
> +
> +	pdisc = els->els_req.virt;
> +
> +	memcpy(pdisc, node->sport->service_params, sizeof(*pdisc));
> +
> +	pdisc->fl_cmd = ELS_PDISC;
> +	memset(pdisc->_fl_resvd, 0, sizeof(pdisc->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_scr(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	      els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_scr *req;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "scr";
> +
> +	req = els->els_req.virt;
> +
> +	memset(req, 0, sizeof(*req));
> +	req->scr_cmd = ELS_SCR;
> +	req->scr_reg_func = ELS_SCRF_FULL;
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_rscn(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       void *port_ids, u32 port_ids_count, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_rscn *req;
> +	struct fc_els_rscn_page *rscn_page;
> +	u32 length = sizeof(*rscn_page) * port_ids_count;
> +
> +	length += sizeof(*req);
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, length, EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "rscn";
> +
> +	req = els->els_req.virt;
> +
> +	req->rscn_cmd = ELS_RSCN;
> +	req->rscn_page_len = sizeof(struct fc_els_rscn_page);
> +	req->rscn_plen = cpu_to_be16(length);
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	/* copy in the payload */
> +	rscn_page = els->els_req.virt + sizeof(*req);
> +	memcpy(rscn_page, port_ids,
> +	       port_ids_count * sizeof(*rscn_page));
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +void *
> +efct_send_ls_rjt(struct efc *efc, struct efc_node *node,
> +		 u32 ox_id, u32 reason_code,
> +		u32 reason_code_expl, u32 vendor_unique)
> +{
> +	struct efct_io *io = NULL;
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_ls_rjt *rjt;
> +
> +	io = efct_els_io_alloc(node, sizeof(*rjt), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	node_els_trace();
> +
> +	io->els_callback = NULL;
> +	io->els_callback_arg = NULL;
> +	io->display_name = "ls_rjt";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	rjt = io->els_req.virt;
> +	memset(rjt, 0, sizeof(*rjt));
> +
> +	rjt->er_cmd = ELS_LS_RJT;
> +	rjt->er_reason = reason_code;
> +	rjt->er_explan = reason_code_expl;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*rjt));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
That is a bit strange.
Sending a response can fail, but (apparently) sending a request cannot; 
at the very least you don't have (or check) the return value from the 
send request.

Some intricate scheme I've missed?
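
If the intent is that request failures are always funnelled through
efct_els_io_cleanup() inside efct_els_send_req(), a comment there would
help. Otherwise, propagating the status would look roughly like this
(sketch, not tested):

	static int efct_els_send_req(struct efc_node *node, struct efct_io *els)
	{
		struct efct *efct = node->efc->base;
		int rc;

		rc = efct_els_send(els, els->els_req.size,
				   els->els_timeout_sec, efct_els_req_cb);
		if (rc) {
			struct efc_node_cb cbdata;

			cbdata.status = INT_MAX;
			cbdata.ext_status = INT_MAX;
			cbdata.els_rsp = els->els_rsp;
			efc_log_err(efct, "efct_els_send failed: %d\n", rc);
			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
					    &cbdata);
		}
		return rc;
	}

so that callers such as efct_send_plogi() can do

	if (efct_els_send_req(node, els))
		return NULL;

	return els;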


Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-12  3:32 ` [PATCH v3 24/31] elx: efct: LIO backend interface routines James Smart
  2020-04-12  4:57   ` Bart Van Assche
@ 2020-04-16  8:02   ` Hannes Reinecke
  2020-04-16 12:34   ` Daniel Wagner
  2 siblings, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  8:02 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> LIO backend template registration and template functions.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Fixed as per the review comments.
>    Removed vport pend list. Pending list is tracked based on the sport
>      assigned to vport.
> ---
>   drivers/scsi/elx/efct/efct_lio.c | 1840 ++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_lio.h |  178 ++++
>   2 files changed, 2018 insertions(+)
>   create mode 100644 drivers/scsi/elx/efct/efct_lio.c
>   create mode 100644 drivers/scsi/elx/efct/efct_lio.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 16/31] elx: efct: Driver initialization routines
  2020-04-12  3:32 ` [PATCH v3 16/31] elx: efct: Driver initialization routines James Smart
  2020-04-16  7:11   ` Hannes Reinecke
@ 2020-04-16  8:03   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  8:03 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:48PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Emulex FC Target driver init, attach and hardware setup routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Removed Queue topology string.
>   Used request_threaded_irq instead of a thread.
>   Use a static fuction to get the model.
>   Reworked efct_device_attach to use if statements and gotos.
>   Changed efct_fw_reset, removed accessing other functions.
>   Converted to use pci_alloc_irq_vectors api.
>   Removed proc interface.
>   Removed efct_hw_get and efct_hw_set functions. Driver implicitly
>     knows adapter configuration.
>   Many more small changes.
> ---
>  drivers/scsi/elx/efct/efct_driver.c |  856 +++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_driver.h |  142 +++++
>  drivers/scsi/elx/efct/efct_hw.c     | 1116 +++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h     |   15 +
>  drivers/scsi/elx/efct/efct_xport.c  |  523 ++++++++++++++++
>  drivers/scsi/elx/efct/efct_xport.h  |  201 +++++++
>  6 files changed, 2853 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_driver.c
>  create mode 100644 drivers/scsi/elx/efct/efct_driver.h
>  create mode 100644 drivers/scsi/elx/efct/efct_hw.c
>  create mode 100644 drivers/scsi/elx/efct/efct_xport.c
>  create mode 100644 drivers/scsi/elx/efct/efct_xport.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
> new file mode 100644
> index 000000000000..ff488fb774f1
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_driver.c
> @@ -0,0 +1,856 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +
> +#include "efct_els.h"
> +#include "efct_hw.h"
> +#include "efct_unsol.h"
> +#include "efct_scsi.h"
> +
> +struct efct *efct_devices[MAX_EFCT_DEVICES];
> +
> +static int logmask;
> +module_param(logmask, int, 0444);
> +MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
> +
> +static struct libefc_function_template efct_libefc_templ = {
> +	.hw_domain_alloc = efct_hw_domain_alloc,
> +	.hw_domain_attach = efct_hw_domain_attach,
> +	.hw_domain_free = efct_hw_domain_free,
> +	.hw_domain_force_free = efct_hw_domain_force_free,
> +	.domain_hold_frames = efct_domain_hold_frames,
> +	.domain_accept_frames = efct_domain_accept_frames,
> +
> +	.hw_port_alloc = efct_hw_port_alloc,
> +	.hw_port_attach = efct_hw_port_attach,
> +	.hw_port_free = efct_hw_port_free,
> +
> +	.hw_node_alloc = efct_hw_node_alloc,
> +	.hw_node_attach = efct_hw_node_attach,
> +	.hw_node_detach = efct_hw_node_detach,
> +	.hw_node_free_resources = efct_hw_node_free_resources,
> +	.node_purge_pending = efct_node_purge_pending,
> +
> +	.scsi_io_alloc_disable = efct_scsi_io_alloc_disable,
> +	.scsi_io_alloc_enable = efct_scsi_io_alloc_enable,
> +	.scsi_validate_node = efct_scsi_validate_initiator,
> +	.new_domain = efct_scsi_tgt_new_domain,
> +	.del_domain = efct_scsi_tgt_del_domain,
> +	.new_sport = efct_scsi_tgt_new_sport,
> +	.del_sport = efct_scsi_tgt_del_sport,
> +	.scsi_new_node = efct_scsi_new_initiator,
> +	.scsi_del_node = efct_scsi_del_initiator,
> +
> +	.els_send = efct_els_req_send,
> +	.els_send_ct = efct_els_send_ct,
> +	.els_send_resp = efct_els_resp_send,
> +	.bls_send_acc_hdr = efct_bls_send_acc_hdr,
> +	.send_flogi_p2p_acc = efct_send_flogi_p2p_acc,
> +	.send_ct_rsp = efct_send_ct_rsp,
> +	.send_ls_rjt = efct_send_ls_rjt,
> +
> +	.node_io_cleanup = efct_node_io_cleanup,
> +	.node_els_cleanup = efct_node_els_cleanup,
> +	.node_abort_all_els = efct_node_abort_all_els,
> +
> +	.dispatch_fcp_cmd = efct_dispatch_fcp_cmd,
> +	.recv_abts_frame = efct_node_recv_abts_frame,
> +};
> +
> +static int
> +efct_device_init(void)
> +{
> +	int rc;
> +
> +	/* driver-wide init for target-server */
> +	rc = efct_scsi_tgt_driver_init();
> +	if (rc) {
> +		pr_err("efct_scsi_tgt_init failed rc=%d\n",
> +			     rc);
> +		return rc;
> +	}
> +
> +	rc = efct_scsi_reg_fc_transport();
> +	if (rc) {
> +		pr_err("failed to register to FC host\n");
> +		return rc;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_device_shutdown(void)
> +{
> +	efct_scsi_release_fc_transport();
> +
> +	efct_scsi_tgt_driver_exit();
> +}
> +
> +static void *
> +efct_device_alloc(u32 nid)
> +{
> +	struct efct *efct = NULL;
> +	u32 i;
> +
> +	efct = kmalloc_node(sizeof(*efct), GFP_ATOMIC, nid);

GFP_KERNEL

> +
> +	if (efct) {

	if (!efct)
		return NULL;
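
... and with kzalloc_node() the explicit memset() below goes away as well
(untested):

	efct = kzalloc_node(sizeof(*efct), GFP_KERNEL, nid);
	if (!efct)
		return NULL;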


> +		memset(efct, 0, sizeof(*efct));
> +		for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
> +			if (!efct_devices[i]) {
> +				efct->instance_index = i;
> +				efct_devices[i] = efct;
> +				break;
> +			}
> +		}
> +
> +		if (i == ARRAY_SIZE(efct_devices)) {
> +			pr_err("Exceeded max supported devices.\n");
> +			kfree(efct);
> +			efct = NULL;
> +		} else {
> +			efct->attached = false;

Isn't this done by memset already?

> +		}
> +	}
> +	return efct;
> +}
> +
> +static void
> +efct_teardown_msix(struct efct *efct)
> +{
> +	u32 i;
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		free_irq(pci_irq_vector(efct->pcidev, i),
> +			 &efct->intr_context[i]);
> +	}
> +
> +	pci_free_irq_vectors(efct->pcidev);
> +}
> +
> +static int
> +efct_efclib_config(struct efct *efct, struct libefc_function_template *tt)
> +{
> +	struct efc *efc;
> +	struct sli4 *sli;
> +	int rc = EFC_SUCCESS;
> +
> +	efc = kmalloc(sizeof(*efc), GFP_KERNEL);
> +	if (!efc)
> +		return EFC_FAIL;
> +
> +	memset(efc, 0, sizeof(struct efc));
> +	efct->efcport = efc;
> +
> +	memcpy(&efc->tt, tt, sizeof(*tt));
> +	efc->base = efct;
> +	efc->pcidev = efct->pcidev;
> +
> +	efc->def_wwnn = efct_get_wwnn(&efct->hw);
> +	efc->def_wwpn = efct_get_wwpn(&efct->hw);
> +	efc->enable_tgt = 1;
> +	efc->log_level = EFC_LOG_LIB;
> +
> +	sli = &efct->hw.sli;
> +	efc->max_xfer_size = sli->sge_supported_length *
> +			     sli_get_max_sgl(&efct->hw.sli);
> +
> +	rc = efcport_init(efc);
> +	if (rc)
> +		efc_log_err(efc, "efcport_init failed\n");
> +
> +	return rc;
> +}
> +
> +static int efct_request_firmware_update(struct efct *efct);
> +
> +static const char*
> +efct_pci_model(u16 device)
> +{
> +	switch (device) {
> +	case EFCT_DEVICE_LANCER_G6:	return "LPE31004";
> +	case EFCT_DEVICE_LANCER_G7:	return "LPE36000";
> +	default:			return "unknown";
> +	}
> +}
> +
> +static int
> +efct_device_attach(struct efct *efct)
> +{
> +	u32 rc = 0, i = 0;
> +
> +	if (efct->attached) {
> +		efc_log_err(efct, "Device is already attached\n");
> +		return EFC_FAIL;
> +	}
> +
> +	snprintf(efct->name, sizeof(efct->name), "[%s%d] ", "fc",
> +		 efct->instance_index);
> +
> +	efct->logmask = logmask;
> +	efct->filter_def = "0,0,0,0";
> +	efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
> +
> +	efct->model = efct_pci_model(efct->pcidev->device);
> +
> +	efct->efct_req_fw_upgrade = true;
> +
> +	/* Allocate transport object and bring online */
> +	efct->xport = efct_xport_alloc(efct);
> +	if (!efct->xport) {
> +		efc_log_err(efct, "failed to allocate transport object\n");
> +		rc = -ENOMEM;
> +		goto out;
> +	}
> +
> +	rc = efct_xport_attach(efct->xport);
> +	if (rc) {
> +		efc_log_err(efct, "failed to attach transport object\n");
> +		goto xport_out;
> +	}
> +
> +	rc = efct_xport_initialize(efct->xport);
> +	if (rc) {
> +		efc_log_err(efct, "failed to initialize transport object\n");
> +		goto xport_out;
> +	}
> +
> +	rc = efct_efclib_config(efct, &efct_libefc_templ);
> +	if (rc) {
> +		efc_log_err(efct, "failed to init efclib\n");
> +		goto efclib_out;
> +	}
> +
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		efc_log_debug(efct, "irq %d enabled\n", i);
> +		enable_irq(pci_irq_vector(efct->pcidev, i));
> +	}
> +
> +	efct->attached = true;
> +
> +	if (efct->efct_req_fw_upgrade)
> +		efct_request_firmware_update(efct);
> +
> +	return rc;
> +
> +efclib_out:
> +	efct_xport_detach(efct->xport);
> +xport_out:
> +	if (efct->xport) {
> +		efct_xport_free(efct->xport);
> +		efct->xport = NULL;
> +	}
> +out:
> +	return rc;
> +}
> +
> +static int
> +efct_device_detach(struct efct *efct)
> +{
> +	int rc = 0, i;
> +
> +	if (efct) {

	if (!efct)
		return EFC_SUCCESS;


> +		if (!efct->attached) {
> +			efc_log_warn(efct, "Device is not attached\n");
> +			return EFC_FAIL;
> +		}
> +
> +		rc = efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN);
> +		if (rc)
> +			efc_log_err(efct, "Transport Shutdown timed out\n");
> +
> +		for (i = 0; i < efct->n_msix_vec; i++)
> +			disable_irq(pci_irq_vector(efct->pcidev, i));
> +
> +		if (efct_xport_detach(efct->xport) != 0)
> +			efc_log_err(efct, "Transport detach failed\n");
> +
> +		efct_xport_free(efct->xport);
> +		efct->xport = NULL;
> +
> +		efcport_destroy(efct->efcport);
> +		kfree(efct->efcport);
> +
> +		efct->attached = false;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_fw_write_cb(int status, u32 actual_write_length,
> +		 u32 change_status, void *arg)
> +{
> +	struct efct_fw_write_result *result = arg;
> +
> +	result->status = status;
> +	result->actual_xfer = actual_write_length;
> +	result->change_status = change_status;
> +
> +	complete(&result->done);
> +}
> +
> +static int
> +efct_firmware_write(struct efct *efct, const u8 *buf, size_t buf_len,
> +		    u8 *change_status)
> +{
> +	int rc = 0;
> +	u32 bytes_left;
> +	u32 xfer_size;
> +	u32 offset;
> +	struct efc_dma dma;
> +	int last = 0;
> +	struct efct_fw_write_result result;
> +
> +	init_completion(&result.done);
> +
> +	bytes_left = buf_len;
> +	offset = 0;
> +
> +	dma.size = FW_WRITE_BUFSIZE;
> +	dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +				      dma.size, &dma.phys, GFP_DMA);
> +	if (!dma.virt)
> +		return -ENOMEM;
> +
> +	while (bytes_left > 0) {
> +		if (bytes_left > FW_WRITE_BUFSIZE)
> +			xfer_size = FW_WRITE_BUFSIZE;
> +		else
> +			xfer_size = bytes_left;
> +
> +		memcpy(dma.virt, buf + offset, xfer_size);
> +
> +		if (bytes_left == xfer_size)
> +			last = 1;
> +
> +		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
> +				       last, efct_fw_write_cb, &result);
> +
> +		if (wait_for_completion_interruptible(&result.done) != 0) {
> +			rc = -ENXIO;
> +			break;
> +		}
> +
> +		if (result.actual_xfer == 0 || result.status != 0) {
> +			rc = -EFAULT;
> +			break;
> +		}
> +
> +		if (last)
> +			*change_status = result.change_status;
> +
> +		bytes_left -= result.actual_xfer;
> +		offset += result.actual_xfer;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev, dma.size, dma.virt, dma.phys);
> +	return rc;
> +}
> +
> +/*
> + * Firmware reset to activate the new firmware.
> + * Function 0 will update and load the new firmware
> + * during attach.
> + */
> +static int
> +efct_fw_reset(struct efct *efct)
> +{
> +	int rc = 0;
> +
> +	if (timer_pending(&efct->xport->stats_timer))
> +		del_timer(&efct->xport->stats_timer);
> +
> +	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
> +		efc_log_info(efct, "failed to reset firmware\n");
> +		rc = -1;
> +	} else {
> +		efc_log_info(efct,
> +			"successfully reset firmware. Now resetting port\n");
> +
> +		efct_device_detach(efct);
> +		rc = efct_device_attach(efct);
> +	}
> +	return rc;
> +}
> +
> +static int
> +efct_request_firmware_update(struct efct *efct)
> +{
> +	int rc = 0;
> +	u8 file_name[256], fw_change_status = 0;
> +	const struct firmware *fw;
> +	struct efct_hw_grp_hdr *fw_image;
> +
> +	snprintf(file_name, 256, "%s.grp", efct->model);
> +
> +	rc = request_firmware(&fw, file_name, &efct->pcidev->dev);
> +	if (rc) {
> +		efc_log_debug(efct, "Firmware file(%s) not found.\n",
> +				file_name);
> +		return rc;
> +	}
> +
> +	fw_image = (struct efct_hw_grp_hdr *)fw->data;
> +
> +	if (!strncmp(efct->hw.sli.fw_name[0], fw_image->revision,
> +		     strnlen(fw_image->revision, 16))) {
> +		efc_log_debug(efct,
> +			"Skipped update. Firmware is already up to date.\n");
> +		goto exit;
> +	}
> +
> +	efc_log_info(efct, "Firmware update is initiated. %s -> %s\n",
> +		     efct->hw.sli.fw_name[0], fw_image->revision);
> +
> +	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
> +	if (rc) {
> +		efc_log_err(efct,
> +			     "Firmware update failed. Return code = %d\n", rc);
> +		goto exit;
> +	}
> +
> +	efc_log_info(efct, "Firmware updated successfully\n");
> +	switch (fw_change_status) {
> +	case 0x00:
> +		efc_log_info(efct, "New firmware is active.\n");
> +		break;
> +	case 0x01:
> +		efc_log_info(efct,
> +			"System reboot needed to activate the new firmware\n");
> +		break;
> +	case 0x02:
> +	case 0x03:
> +		efc_log_info(efct,
> +			"firmware is resetting to activate the new firmware\n");
> +		efct_fw_reset(efct);
> +		break;
> +	default:
> +		efc_log_info(efct,
> +			"Unexpected value change_status:%d\n", fw_change_status);
> +		break;
> +	}
> +
> +exit:
> +	release_firmware(fw);
> +
> +	return rc;
> +}
> +
> +static void
> +efct_device_free(struct efct *efct)
> +{
> +	if (efct) {
> +		efct_devices[efct->instance_index] = NULL;
> +
> +		kfree(efct);
> +	}
> +}
> +
> +static int
> +efct_device_interrupts_required(struct efct *efct)
> +{
> +	if (efct_hw_setup(&efct->hw, efct, efct->pcidev)
> +				!= EFCT_HW_RTN_SUCCESS) {
> +		return -1;
> +	}
> +
> +	return efct->hw.config.n_eq;
> +}
> +
> +static irqreturn_t
> +efct_intr_thread(int irq, void *handle)
> +{
> +	struct efct_intr_context *intr_ctx = handle;
> +	struct efct *efct = intr_ctx->efct;
> +
> +	efct_hw_process(&efct->hw, intr_ctx->index, efct->max_isr_time_msec);
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t
> +efct_intr_msix(int irq, void *handle)
> +{
> +	return IRQ_WAKE_THREAD;
> +}
> +
> +static int
> +efct_setup_msix(struct efct *efct, u32 num_intrs)
> +{
> +	int	rc = 0, i;
> +
> +	if (!pci_find_capability(efct->pcidev, PCI_CAP_ID_MSIX)) {
> +		dev_err(&efct->pcidev->dev,
> +			"%s : MSI-X not available\n", __func__);
> +		return -EINVAL;
> +	}
> +
> +	efct->n_msix_vec = num_intrs;
> +
> +	rc = pci_alloc_irq_vectors(efct->pcidev, num_intrs, num_intrs,
> +				   PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
> +
> +	if (rc < 0) {
> +		dev_err(&efct->pcidev->dev, "Failed to alloc irq : %d\n", rc);
> +		return rc;
> +	}
> +
> +	for (i = 0; i < num_intrs; i++) {
> +		struct efct_intr_context *intr_ctx = NULL;
> +
> +		intr_ctx = &efct->intr_context[i];
> +		intr_ctx->efct = efct;
> +		intr_ctx->index = i;
> +
> +		rc = request_threaded_irq(pci_irq_vector(efct->pcidev, i),
> +					  efct_intr_msix, efct_intr_thread, 0,
> +					  EFCT_DRIVER_NAME, intr_ctx);
> +		if (rc) {
> +			dev_err(&efct->pcidev->dev,
> +				"Failed to register %d vector: %d\n", i, rc);
> +			goto out;
> +		}
> +	}
> +
> +	return rc;
> +
> +out:
> +	while (--i >= 0)
> +		free_irq(pci_irq_vector(efct->pcidev, i),
> +			 &efct->intr_context[i]);
> +
> +	pci_free_irq_vectors(efct->pcidev);
> +	return rc;
> +}
> +
> +static struct pci_device_id efct_pci_table[] = {
> +	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G6), 0},
> +	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G7), 0},
> +	{}	/* terminate list */
> +};
> +
> +static int
> +efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> +{
> +	struct efct *efct = NULL;
> +	int rc;
> +	u32 i, r;
> +	int num_interrupts = 0;
> +	int nid;
> +
> +	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
> +
> +	rc = pci_enable_device_mem(pdev);
> +	if (rc)
> +		return rc;
> +
> +	pci_set_master(pdev);
> +
> +	rc = pci_set_mwi(pdev);
> +	if (rc) {
> +		dev_info(&pdev->dev,
> +			 "pci_set_mwi returned %d\n", rc);
> +		goto mwi_out;
> +	}
> +
> +	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
> +	if (rc) {
> +		dev_err(&pdev->dev, "pci_request_regions failed %d\n", rc);
> +		goto req_regions_out;
> +	}
> +
> +	/* Fetch the Numa node id for this device */
> +	nid = dev_to_node(&pdev->dev);
> +	if (nid < 0) {
> +		dev_err(&pdev->dev,
> +			"Warning Numa node ID is %d\n", nid);
> +		nid = 0;
> +	}
> +
> +	/* Allocate efct */
> +	efct = efct_device_alloc(nid);
> +	if (!efct) {
> +		dev_err(&pdev->dev, "Failed to allocate efct\n");
> +		rc = -ENOMEM;
> +		goto alloc_out;
> +	}
> +
> +	efct->pcidev = pdev;
> +
> +	efct->numa_node = nid;
> +
> +	/* Map all memory BARs */
> +	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
> +			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
> +						  pci_resource_len(pdev, i));
> +			r++;
> +		}
> +
> +		/*
> +		 * If the 64-bit attribute is set, both this BAR and the
> +		 * next form the complete address. Skip processing the
> +		 * next BAR.
> +		 */
> +		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
> +			i++;
> +	}
> +
> +	pci_set_drvdata(pdev, efct);
> +
> +	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 ||
> +	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
> +		dev_warn(&pdev->dev,
> +			 "trying DMA_BIT_MASK(32)\n");
> +		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 ||
> +		    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
> +			dev_err(&pdev->dev,
> +				"setting DMA_BIT_MASK failed\n");
> +			rc = -1;
> +			goto dma_mask_out;
> +		}
> +	}
> +
> +	num_interrupts = efct_device_interrupts_required(efct);
> +	if (num_interrupts < 0) {
> +		efc_log_err(efct, "efct_device_interrupts_required failed\n");
> +		rc = -1;
> +		goto dma_mask_out;
> +	}
> +
> +	/*
> +	 * Initialize MSIX interrupts, note,
> +	 * efct_setup_msix() enables the interrupt
> +	 */
> +	rc = efct_setup_msix(efct, num_interrupts);
> +	if (rc) {
> +		dev_err(&pdev->dev, "Can't setup msix\n");
> +		goto dma_mask_out;
> +	}
> +	/* Disable interrupt for now */
> +	for (i = 0; i < efct->n_msix_vec; i++) {
> +		efc_log_debug(efct, "irq %d disabled\n", i);
> +		disable_irq(pci_irq_vector(efct->pcidev, i));
> +	}
> +
> +	rc = efct_device_attach((struct efct *)efct);
> +	if (rc)
> +		goto attach_out;
> +
> +	return EFC_SUCCESS;
> +
> +attach_out:
> +	efct_teardown_msix(efct);
> +dma_mask_out:
> +	pci_set_drvdata(pdev, NULL);
> +
> +	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (efct->reg[i])
> +			iounmap(efct->reg[i]);
> +	}
> +	efct_device_free(efct);
> +alloc_out:
> +	pci_release_regions(pdev);
> +req_regions_out:
> +	pci_clear_mwi(pdev);
> +mwi_out:
> +	pci_disable_device(pdev);
> +	return rc;
> +}
> +
> +static void
> +efct_pci_remove(struct pci_dev *pdev)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +	u32	i;
> +
> +	if (!efct)
> +		return;
> +
> +	efct_device_detach(efct);
> +
> +	efct_teardown_msix(efct);
> +
> +	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
> +		if (efct->reg[i])
> +			iounmap(efct->reg[i]);
> +	}
> +
> +	pci_set_drvdata(pdev, NULL);
> +
> +	efct_devices[efct->instance_index] = NULL;
> +
> +	efct_device_free(efct);
> +
> +	pci_release_regions(pdev);
> +
> +	pci_disable_device(pdev);
> +}
> +
> +static void
> +efct_device_prep_for_reset(struct efct *efct, struct pci_dev *pdev)
> +{
> +	if (efct) {
> +		efc_log_debug(efct,
> +			       "PCI channel disable preparing for reset\n");
> +		efct_device_detach(efct);
> +		/* Disable interrupt and pci device */
> +		efct_teardown_msix(efct);
> +	}
> +	pci_disable_device(pdev);
> +}
> +
> +static void
> +efct_device_prep_for_recover(struct efct *efct)
> +{
> +	if (efct) {
> +		efc_log_debug(efct, "PCI channel preparing for recovery\n");
> +		efct_hw_io_abort_all(&efct->hw);
> +	}
> +}
> +
> +/**
> + * efct_pci_io_error_detected - method for handling PCI I/O error
> + * @pdev: pointer to PCI device.
> + * @state: the current PCI connection state.
> + *
> + * This routine is registered to the PCI subsystem for error handling. This
> + * function is called by the PCI subsystem after a PCI bus error affecting
> + * this device has been detected. When this routine is invoked, it dispatches
> + * device error detected handling routine, which will perform the proper
> + * error detected operation.
> + *
> + * Return codes
> + * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
> + * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
> + */
> +static pci_ers_result_t
> +efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +	pci_ers_result_t rc;
> +
> +	switch (state) {
> +	case pci_channel_io_normal:
> +		efct_device_prep_for_recover(efct);
> +		rc = PCI_ERS_RESULT_CAN_RECOVER;
> +		break;
> +	case pci_channel_io_frozen:
> +		efct_device_prep_for_reset(efct, pdev);
> +		rc = PCI_ERS_RESULT_NEED_RESET;
> +		break;
> +	case pci_channel_io_perm_failure:
> +		efct_device_detach(efct);
> +		rc = PCI_ERS_RESULT_DISCONNECT;
> +		break;
> +	default:
> +		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
> +			       state);
> +		efct_device_prep_for_reset(efct, pdev);
> +		rc = PCI_ERS_RESULT_NEED_RESET;
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +static pci_ers_result_t
> +efct_pci_io_slot_reset(struct pci_dev *pdev)
> +{
> +	int rc;
> +	struct efct *efct = pci_get_drvdata(pdev);
> +
> +	rc = pci_enable_device_mem(pdev);
> +	if (rc) {
> +		efc_log_err(efct,
> +			     "failed to re-enable PCI device after reset.\n");
> +		return PCI_ERS_RESULT_DISCONNECT;
> +	}
> +
> +	/*
> +	 * Since pci_restore_state() now clears the device's saved_state flag,
> +	 * the restored state needs to be saved again.
> +	 */
> +
> +	pci_save_state(pdev);
> +
> +	pci_set_master(pdev);
> +
> +	rc = efct_setup_msix(efct, efct->n_msix_vec);
> +	if (rc)
> +		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
> +			    rc);
> +
> +	/* Perform device reset */
> +	efct_device_detach(efct);
> +	/* Bring device back online */
> +	efct_device_attach(efct);
> +
> +	return PCI_ERS_RESULT_RECOVERED;
> +}
> +
> +static void
> +efct_pci_io_resume(struct pci_dev *pdev)
> +{
> +	struct efct *efct = pci_get_drvdata(pdev);
> +
> +	/* Perform device reset */
> +	efct_device_detach(efct);
> +	/* Bring device back online */
> +	efct_device_attach(efct);
> +}
> +
> +MODULE_DEVICE_TABLE(pci, efct_pci_table);
> +
> +static struct pci_error_handlers efct_pci_err_handler = {
> +	.error_detected = efct_pci_io_error_detected,
> +	.slot_reset = efct_pci_io_slot_reset,
> +	.resume = efct_pci_io_resume,
> +};
> +
> +static struct pci_driver efct_pci_driver = {
> +	.name		= EFCT_DRIVER_NAME,
> +	.id_table	= efct_pci_table,
> +	.probe		= efct_pci_probe,
> +	.remove		= efct_pci_remove,
> +	.err_handler	= &efct_pci_err_handler,
> +};
> +
> +static
> +int __init efct_init(void)
> +{
> +	int	rc = -ENODEV;
> +
> +	rc = efct_device_init();
> +	if (rc) {
> +		pr_err("efct_device_init failed rc=%d\n", rc);
> +		return -ENOMEM;
> +	}
> +
> +	rc = pci_register_driver(&efct_pci_driver);
> +	if (rc)
> +		goto l1;
> +
> +	return rc;
> +
> +l1:
> +	efct_device_shutdown();
> +	return rc;
> +}
> +
> +static void __exit efct_exit(void)
> +{
> +	pci_unregister_driver(&efct_pci_driver);
> +	efct_device_shutdown();
> +}
> +
> +module_init(efct_init);
> +module_exit(efct_exit);
> +MODULE_VERSION(EFCT_DRIVER_VERSION);
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Broadcom");
> diff --git a/drivers/scsi/elx/efct/efct_driver.h b/drivers/scsi/elx/efct/efct_driver.h
> new file mode 100644
> index 000000000000..07ca0b182d90
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_driver.h
> @@ -0,0 +1,142 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_DRIVER_H__)
> +#define __EFCT_DRIVER_H__
> +
> +/***************************************************************************
> + * OS specific includes
> + */
> +#include <stdarg.h>
> +#include <linux/version.h>
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/interrupt.h>
> +#include <asm-generic/ioctl.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/pci.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/bitmap.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <asm/byteorder.h>
> +#include <linux/timer.h>
> +#include <linux/delay.h>
> +#include <linux/fs.h>
> +#include <linux/uaccess.h>
> +#include <linux/sched.h>
> +#include <asm/current.h>
> +#include <asm/cacheflush.h>
> +#include <linux/pagemap.h>
> +#include <linux/kthread.h>
> +#include <linux/proc_fs.h>
> +#include <linux/seq_file.h>
> +#include <linux/random.h>
> +#include <linux/sched.h>
> +#include <linux/jiffies.h>
> +#include <linux/ctype.h>
> +#include <linux/debugfs.h>
> +#include <linux/firmware.h>
> +#include <linux/sched/signal.h>
> +#include "../include/efc_common.h"
> +
> +#define EFCT_DRIVER_NAME			"efct"
> +#define EFCT_DRIVER_VERSION			"1.0.0.0"
> +
> +/* EFCT_OS_MAX_ISR_TIME_MSEC -
> + * maximum time driver code should spend in an interrupt
> + * or kernel thread context without yielding
> + */
> +#define EFCT_OS_MAX_ISR_TIME_MSEC		1000
> +
> +#define EFCT_FC_MAX_SGL				64
> +#define EFCT_FC_DIF_SEED			0
> +
> +/* Timeouts */
> +#define EFCT_FC_ELS_SEND_DEFAULT_TIMEOUT	0
> +#define EFCT_FC_ELS_DEFAULT_RETRIES		3
> +#define EFCT_FC_FLOGI_TIMEOUT_SEC		5
> +#define EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC    30000000 /* 30 seconds */
> +
> +/* Watermark */
> +#define EFCT_WATERMARK_HIGH_PCT			90
> +#define EFCT_WATERMARK_LOW_PCT			80
> +#define EFCT_IO_WATERMARK_PER_INITIATOR		8
> +
> +#include "../libefc/efclib.h"
> +#include "efct_hw.h"
> +#include "efct_io.h"
> +#include "efct_xport.h"
> +
> +#define EFCT_PCI_MAX_REGS			6
> +#define MAX_PCI_INTERRUPTS			16
> +
> +struct efct_intr_context {
> +	struct efct		*efct;
> +	u32			index;
> +};
> +
> +struct efct {
> +	struct pci_dev			*pcidev;
> +	void __iomem			*reg[EFCT_PCI_MAX_REGS];
> +
> +	u32				n_msix_vec;
> +	struct efct_intr_context	intr_context[MAX_PCI_INTERRUPTS];
> +	u32				numa_node;
> +
> +	char				name[EFC_NAME_LENGTH];
> +	bool				attached;
> +	struct efct_scsi_tgt		tgt_efct;
> +	struct efct_xport		*xport;
> +	struct efc			*efcport;
> +	struct Scsi_Host		*shost;
> +	int				logmask;
> +	u32				max_isr_time_msec;
> +
> +	const char			*desc;
> +	u32				instance_index;
> +
> +	const char			*model;
> +
> +	struct efct_hw			hw;
> +
> +	u32				num_vports;
> +	u32				rq_selection_policy;
> +	char				*filter_def;
> +
> +	bool				soft_wwn_enable;
> +
> +	/*
> +	 * Target IO timer value:
> +	 * Zero: target command timeout disabled.
> +	 * Non-zero: Timeout value, in seconds, for target commands
> +	 */
> +	u32				target_io_timer_sec;
> +
> +	int				speed;
> +	int				topology;
> +
> +	u8				efct_req_fw_upgrade;
> +	u16				sw_feature_cap;
> +	struct dentry			*sess_debugfs_dir;
> +};
> +
> +#define FW_WRITE_BUFSIZE		(64 * 1024)
> +
> +struct efct_fw_write_result {
> +	struct completion done;
> +	int status;
> +	u32 actual_xfer;
> +	u32 change_status;
> +};
> +
> +#define MAX_EFCT_DEVICES		64
> +extern struct efct			*efct_devices[MAX_EFCT_DEVICES];
> +
> +#endif /* __EFCT_DRIVER_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> new file mode 100644
> index 000000000000..21fcaf7b3d2b
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -0,0 +1,1116 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_hw.h"
> +
> +static enum efct_hw_rtn
> +efct_hw_link_event_init(struct efct_hw *hw)
> +{
> +	hw->link.status = SLI_LINK_STATUS_MAX;
> +	hw->link.topology = SLI_LINK_TOPO_NONE;
> +	hw->link.medium = SLI_LINK_MEDIUM_MAX;
> +	hw->link.speed = 0;
> +	hw->link.loop_map = NULL;
> +	hw->link.fc_id = U32_MAX;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_read_max_dump_size(struct efct_hw *hw)
> +{
> +	u8	buf[SLI4_BMBX_SIZE];
> +	struct efct *efct = hw->os;
> +	int	rc = EFCT_HW_RTN_SUCCESS;
> +	struct sli4_rsp_cmn_set_dump_location *rsp;
> +
> +	/* attempt to determine the dump size for function 0 only. */
> +	if (PCI_FUNC(efct->pcidev->devfn) != 0)
> +		return rc;
> +
> +	rc = sli_cmd_common_set_dump_location(&hw->sli, buf, SLI4_BMBX_SIZE, 1,
> +					     0,	NULL, 0);
> +	if (rc)
> +		return rc;
> +
> +	rsp = (struct sli4_rsp_cmn_set_dump_location *)
> +	      (buf + offsetof(struct sli4_cmd_sli_config, payload.embed));
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "set dump location cmd failed\n");
> +		return rc;
> +	}
> +	hw->dump_size =	(le32_to_cpu(rsp->buffer_length_dword) &
> +			 SLI4_CMN_SET_DUMP_BUFFER_LEN);
> +	efc_log_debug(hw->os, "Dump size %x\n",	hw->dump_size);
> +
> +	return rc;
> +}
> +
> +static int
> +__efct_read_topology_cb(struct efct_hw *hw, int status,
> +			u8 *mqe, void *arg)
> +{
> +	struct sli4_cmd_read_topology *read_topo =
> +				(struct sli4_cmd_read_topology *)mqe;
> +	u8 speed;
> +	struct efc_domain_record drec = {0};
> +	struct efct *efct = hw->os;
> +
> +	if (status || le16_to_cpu(read_topo->hdr.status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
> +			       status,
> +			       le16_to_cpu(read_topo->hdr.status));
> +		kfree(mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	switch (le32_to_cpu(read_topo->dw2_attentype) &
> +		SLI4_READTOPO_ATTEN_TYPE) {
> +	case SLI4_READ_TOPOLOGY_LINK_UP:
> +		hw->link.status = SLI_LINK_STATUS_UP;
> +		break;
> +	case SLI4_READ_TOPOLOGY_LINK_DOWN:
> +		hw->link.status = SLI_LINK_STATUS_DOWN;
> +		break;
> +	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
> +		hw->link.status = SLI_LINK_STATUS_NO_ALPA;
> +		break;
> +	default:
> +		hw->link.status = SLI_LINK_STATUS_MAX;
> +		break;
> +	}
> +
> +	switch (read_topo->topology) {
> +	case SLI4_READ_TOPOLOGY_NPORT:
> +		hw->link.topology = SLI_LINK_TOPO_NPORT;
> +		break;
> +	case SLI4_READ_TOPOLOGY_FC_AL:
> +		hw->link.topology = SLI_LINK_TOPO_LOOP;
> +		if (hw->link.status == SLI_LINK_STATUS_UP)
> +			hw->link.loop_map = hw->loop_map.virt;
> +		hw->link.fc_id = read_topo->acquired_al_pa;
> +		break;
> +	default:
> +		hw->link.topology = SLI_LINK_TOPO_MAX;
> +		break;
> +	}
> +
> +	hw->link.medium = SLI_LINK_MEDIUM_FC;
> +
> +	speed = (le32_to_cpu(read_topo->currlink_state) &
> +		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
> +	switch (speed) {
> +	case SLI4_READ_TOPOLOGY_SPEED_1G:
> +		hw->link.speed =  1 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_2G:
> +		hw->link.speed =  2 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_4G:
> +		hw->link.speed =  4 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_8G:
> +		hw->link.speed =  8 * 1000;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_16G:
> +		hw->link.speed = 16 * 1000;
> +		hw->link.loop_map = NULL;
> +		break;
> +	case SLI4_READ_TOPOLOGY_SPEED_32G:
> +		hw->link.speed = 32 * 1000;
> +		hw->link.loop_map = NULL;
> +		break;
> +	}
> +
> +	kfree(mqe);
> +
> +	drec.speed = hw->link.speed;
> +	drec.fc_id = hw->link.fc_id;
> +	drec.is_nport = true;
> +	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Callback function for the SLI link events */
> +static int
> +efct_hw_cb_link(void *ctx, void *e)
> +{
> +	struct efct_hw	*hw = ctx;
> +	struct sli4_link_event *event = e;
> +	struct efc_domain	*d = NULL;
> +	int		rc = EFCT_HW_RTN_ERROR;
> +	struct efct	*efct = hw->os;
> +	struct efc_dma *dma;
> +
> +	efct_hw_link_event_init(hw);
> +
> +	switch (event->status) {
> +	case SLI_LINK_STATUS_UP:
> +
> +		hw->link = *event;
> +		efct->efcport->link_status = EFC_LINK_STATUS_UP;
> +
> +		if (event->topology == SLI_LINK_TOPO_NPORT) {
> +			struct efc_domain_record drec = {0};
> +
> +			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
> +				      event->speed);
> +			drec.speed = event->speed;
> +			drec.fc_id = event->fc_id;
> +			drec.is_nport = true;
> +			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
> +				      &drec);
> +		} else if (event->topology == SLI_LINK_TOPO_LOOP) {
> +			u8	*buf = NULL;
> +
> +			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
> +				      event->speed);
> +			dma = &hw->loop_map;
> +			dma->size = SLI4_MIN_LOOP_MAP_BYTES;
> +			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       dma->size, &dma->phys,
> +						       GFP_DMA);
> +			if (!dma->virt)
> +				efc_log_err(hw->os, "efct_dma_alloc_fail\n");
> +
> +			buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +			if (!buf)
> +				break;
> +
> +			if (!sli_cmd_read_topology(&hw->sli, buf,
> +						  SLI4_BMBX_SIZE,
> +						       &hw->loop_map)) {
> +				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +						     __efct_read_topology_cb,
> +						     NULL);
> +			}
> +
> +			if (rc != EFCT_HW_RTN_SUCCESS) {
> +				efc_log_test(hw->os, "READ_TOPOLOGY failed\n");
> +				kfree(buf);
> +			}
> +		} else {
> +			efc_log_info(hw->os, "%s(%#x), speed is %d\n",
> +				      "Link Up, unsupported topology ",
> +				     event->topology, event->speed);
> +		}
> +		break;
> +	case SLI_LINK_STATUS_DOWN:
> +		efc_log_info(hw->os, "Link down\n");
> +
> +		hw->link.status = event->status;
> +		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
> +
> +		d = hw->domain;
> +		if (d)
> +			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
> +		break;
> +	default:
> +		efc_log_test(hw->os, "unhandled link status %#x\n",
> +			      event->status);
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
> +{
> +	u32 i, max_sgl;
> +
> +	if (hw->hw_setup_called)
> +		return EFCT_HW_RTN_SUCCESS;
> +
> +	/*
> +	 * efct_hw_init() relies on NULL pointers indicating that a structure
> +	 * needs allocation. If a structure is non-NULL, efct_hw_init() won't
> +	 * free/realloc that memory
> +	 */
> +	memset(hw, 0, sizeof(struct efct_hw));
> +
> +	hw->hw_setup_called = true;
> +
> +	hw->os = os;
> +
> +	spin_lock_init(&hw->cmd_lock);
> +	INIT_LIST_HEAD(&hw->cmd_head);
> +	INIT_LIST_HEAD(&hw->cmd_pending);
> +	hw->cmd_head_count = 0;
> +
> +	spin_lock_init(&hw->io_lock);
> +	spin_lock_init(&hw->io_abort_lock);
> +
> +	atomic_set(&hw->io_alloc_failed_count, 0);
> +
> +	hw->config.speed = FC_LINK_SPEED_AUTO_16_8_4;
> +	if (sli_setup(&hw->sli, hw->os, pdev, ((struct efct *)os)->reg)) {
> +		efc_log_err(hw->os, "SLI setup failed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_link_event_init(hw);
> +
> +	sli_callback(&hw->sli, SLI4_CB_LINK, efct_hw_cb_link, hw);
> +
> +	/*
> +	 * Set all the queue sizes to the maximum allowed.
> +	 */
> +	for (i = 0; i < ARRAY_SIZE(hw->num_qentries); i++)
> +		hw->num_qentries[i] = hw->sli.qinfo.max_qentries[i];
> +	/*
> +	 * Adjust the size of the WQs so that the CQ is twice as
> +	 * big as the WQ to allow for 2 completions per IO. This allows us to
> +	 * handle multi-phase as well as aborts.
> +	 */
> +	hw->num_qentries[SLI_QTYPE_WQ] = hw->num_qentries[SLI_QTYPE_CQ] / 2;
> +
> +	/*
> +	 * The RQ assignment for RQ pair mode.
> +	 */
> +	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
> +	hw->config.n_io = hw->sli.extent[SLI_RSRC_XRI].size;
> +	hw->config.n_eq = EFCT_HW_DEF_NUM_EQ;
> +
> +	max_sgl = sli_get_max_sgl(&hw->sli) - SLI4_SGE_MAX_RESERVED;
> +	max_sgl = (max_sgl > EFCT_FC_MAX_SGL) ? EFCT_FC_MAX_SGL : max_sgl;
> +	hw->config.n_sgl = max_sgl;
> +
> +	(void)efct_hw_read_max_dump_size(hw);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static void
> +efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
> +{
> +	efc_log_info(hw->os,
> +		      "REG_FCFI: filter[%d] %08X -> RQ[%d] id=%d\n",
> +		     j, hw->config.filter_def[j], i, id);
> +}
> +
> +static inline void
> +efct_hw_init_free_io(struct efct_hw_io *io)
> +{
> +	/*
> +	 * Set io->done to NULL, to avoid any callbacks, should
> +	 * a completion be received for one of these IOs
> +	 */
> +	io->done = NULL;
> +	io->abort_done = NULL;
> +	io->status_saved = false;
> +	io->abort_in_progress = false;
> +	io->rnode = NULL;
> +	io->type = 0xFFFF;
> +	io->wq = NULL;
> +	io->ul_io = NULL;
> +	io->tgt_wqe_timeout = 0;
> +}
> +
> +static u8 efct_hw_iotype_is_originator(u16 io_type)
> +{
> +	switch (io_type) {
> +	case EFCT_HW_FC_CT:
> +	case EFCT_HW_ELS_REQ:
> +		return 1;
> +	default:
> +		return EFC_SUCCESS;
> +	}
> +}
> +
> +static void
> +efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* Restore the default */
> +	io->sgl = &io->def_sgl;
> +	io->sgl_count = io->def_sgl_count;
> +
> +	/* Clear the overflow SGL */
> +	io->ovfl_sgl = NULL;
> +	io->ovfl_sgl_count = 0;
> +	io->ovfl_lsp = NULL;
> +}
> +
> +static void
> +efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
> +{
> +	struct efct_hw_io *io = arg;
> +	struct efct_hw *hw = io->hw;
> +	struct sli4_fc_wcqe *wcqe = (void *)cqe;
> +	u32	len = 0;
> +	u32 ext = 0;
> +
> +	/* clear xbusy flag if WCQE[XB] is clear */
> +	if (io->xbusy && (wcqe->flags & SLI4_WCQE_XB) == 0)
> +		io->xbusy = false;
> +
> +	/* get extended CQE status */
> +	switch (io->type) {
> +	case EFCT_HW_BLS_ACC:
> +	case EFCT_HW_BLS_ACC_SID:
> +		break;
> +	case EFCT_HW_ELS_REQ:
> +		sli_fc_els_did(&hw->sli, cqe, &ext);
> +		len = sli_fc_response_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_ELS_RSP:
> +	case EFCT_HW_ELS_RSP_SID:
> +	case EFCT_HW_FC_CT_RSP:
> +		break;
> +	case EFCT_HW_FC_CT:
> +		len = sli_fc_response_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_IO_TARGET_WRITE:
> +		len = sli_fc_io_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_IO_TARGET_READ:
> +		len = sli_fc_io_length(&hw->sli, cqe);
> +		break;
> +	case EFCT_HW_IO_TARGET_RSP:
> +		break;
> +	case EFCT_HW_IO_DNRX_REQUEUE:
> +		/* release the count for re-posting the buffer */
> +		/* efct_hw_io_free(hw, io); */
> +		break;
> +	default:
> +		efc_log_test(hw->os, "unhandled io type %#x for XRI 0x%x\n",
> +			      io->type, io->indicator);
> +		break;
> +	}
> +	if (status) {
> +		ext = sli_fc_ext_status(&hw->sli, cqe);
> +		/*
> +		 * If we're not an originator IO, and XB is set, then issue
> +		 * abort for the IO from within the HW
> +		 */
> +		if ((!efct_hw_iotype_is_originator(io->type)) &&
> +		    wcqe->flags & SLI4_WCQE_XB) {
> +			enum efct_hw_rtn rc;
> +
> +			efc_log_debug(hw->os, "aborting xri=%#x tag=%#x\n",
> +				       io->indicator, io->reqtag);
> +
> +			/*
> +			 * Because targets may send a response when the IO
> +			 * completes using the same XRI, we must wait for the
> +			 * XRI_ABORTED CQE to issue the IO callback
> +			 */
> +			rc = efct_hw_io_abort(hw, io, false, NULL, NULL);
> +			if (rc == EFCT_HW_RTN_SUCCESS) {
> +				/*
> +				 * latch status to return after abort is
> +				 * complete
> +				 */
> +				io->status_saved = true;
> +				io->saved_status = status;
> +				io->saved_ext = ext;
> +				io->saved_len = len;
> +				goto exit_efct_hw_wq_process_io;
> +			} else if (rc == EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
> +				/*
> +				 * Already being aborted by someone else (ABTS
> +				 * perhaps). Just fall thru and return original
> +				 * error.
> +				 */
> +				efc_log_debug(hw->os, "%s%#x tag=%#x\n",
> +					       "abort in progress xri=",
> +					      io->indicator, io->reqtag);
> +
> +			} else {
> +				/* Failed to abort for some other reason, log
> +				 * error
> +				 */
> +				efc_log_test(hw->os, "%s%#x tag=%#x rc=%d\n",
> +					      "Failed to abort xri=",
> +					     io->indicator, io->reqtag, rc);
> +			}
> +		}
> +	}
> +
> +	if (io->done) {
> +		efct_hw_done_t done = io->done;
> +		void *arg = io->arg;
> +
> +		io->done = NULL;
> +
> +		if (io->status_saved) {
> +			/* use latched status if exists */
> +			status = io->saved_status;
> +			len = io->saved_len;
> +			ext = io->saved_ext;
> +			io->status_saved = false;
> +		}
> +
> +		/* Restore default SGL */
> +		efct_hw_io_restore_sgl(hw, io);
> +		done(io, io->rnode, len, status, ext, arg);
> +	}
> +
> +exit_efct_hw_wq_process_io:
> +	return;
> +}
> +
> +/* Initialize the pool of HW IO objects */
> +static enum efct_hw_rtn
> +efct_hw_setup_io(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +	struct efct_hw_io	*io = NULL;
> +	uintptr_t	xfer_virt = 0;
> +	uintptr_t	xfer_phys = 0;
> +	u32	index;
> +	bool new_alloc = true;
> +	struct efc_dma *dma;
> +	struct efct *efct = hw->os;
> +
> +	if (!hw->io) {
> +		hw->io = kmalloc_array(hw->config.n_io, sizeof(io),
> +				 GFP_KERNEL);
> +
> +		if (!hw->io)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(hw->io, 0, hw->config.n_io * sizeof(io));
> +
> +		for (i = 0; i < hw->config.n_io; i++) {
> +			hw->io[i] = kmalloc(sizeof(*io), GFP_KERNEL);
> +			if (!hw->io[i])
> +				goto error;
> +
> +			memset(hw->io[i], 0, sizeof(struct efct_hw_io));
> +		}
> +
> +		/* Create WQE buffs for IO */
> +		hw->wqe_buffs = kmalloc((hw->config.n_io *
> +					     hw->sli.wqe_size),
> +					     GFP_ATOMIC);
> +		if (!hw->wqe_buffs) {
> +			kfree(hw->io);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +		memset(hw->wqe_buffs, 0, (hw->config.n_io *
> +					hw->sli.wqe_size));
> +
> +	} else {
> +		/* re-use existing IOs, including SGLs */
> +		new_alloc = false;
> +	}
> +
> +	if (new_alloc) {
> +		dma = &hw->xfer_rdy;
> +		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
> +		dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					       dma->size, &dma->phys, GFP_DMA);
> +		if (!dma->virt)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
> +	xfer_phys = hw->xfer_rdy.phys;
> +
> +	for (i = 0; i < hw->config.n_io; i++) {
> +		struct hw_wq_callback *wqcb;
> +
> +		io = hw->io[i];
> +
> +		/* initialize IO fields */
> +		io->hw = hw;
> +
> +		/* Assign a WQE buff */
> +		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
> +
> +		/* Allocate the request tag for this IO */
> +		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
> +		if (!wqcb) {
> +			efc_log_err(hw->os, "can't allocate request tag\n");
> +			return EFCT_HW_RTN_NO_RESOURCES;
> +		}
> +		io->reqtag = wqcb->instance_index;
> +
> +		/* Now for the fields that are initialized on each free */
> +		efct_hw_init_free_io(io);
> +
> +		/* The XB flag isn't cleared on IO free, so init to zero */
> +		io->xbusy = 0;
> +
> +		if (sli_resource_alloc(&hw->sli, SLI_RSRC_XRI,
> +				       &io->indicator, &index)) {
> +			efc_log_err(hw->os,
> +				     "sli_resource_alloc failed @ %d\n", i);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +		if (new_alloc) {
> +			dma = &io->def_sgl;
> +			dma->size = hw->config.n_sgl *
> +					sizeof(struct sli4_sge);
> +			dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       dma->size, &dma->phys,
> +						       GFP_DMA);
> +			if (!dma->virt) {
> +				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
> +				memset(&io->def_sgl, 0,
> +				       sizeof(struct efc_dma));
> +				return EFCT_HW_RTN_NO_MEMORY;
> +			}
> +		}
> +		io->def_sgl_count = hw->config.n_sgl;
> +		io->sgl = &io->def_sgl;
> +		io->sgl_count = io->def_sgl_count;
> +
> +		if (hw->xfer_rdy.size) {
> +			io->xfer_rdy.virt = (void *)xfer_virt;
> +			io->xfer_rdy.phys = xfer_phys;
> +			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
> +
> +			xfer_virt += sizeof(struct fcp_txrdy);
> +			xfer_phys += sizeof(struct fcp_txrdy);
> +		}
> +	}
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +error:
> +	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
> +		kfree(hw->io[i]);
> +		hw->io[i] = NULL;
> +	}
> +
> +	kfree(hw->io);
> +	hw->io = NULL;
> +
> +	return EFCT_HW_RTN_NO_MEMORY;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_init_io(struct efct_hw *hw)
> +{
> +	u32	i = 0, io_index = 0;
> +	bool prereg = false;
> +	struct efct_hw_io	*io = NULL;
> +	u8		cmd[SLI4_BMBX_SIZE];
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32	nremaining;
> +	u32	n = 0;
> +	u32	sgls_per_request = 256;
> +	struct efc_dma	**sgls = NULL;
> +	struct efc_dma	reqbuf;
> +	struct efct *efct = hw->os;
> +
> +	prereg = hw->sli.sgl_pre_registered;
> +
> +	memset(&reqbuf, 0, sizeof(struct efc_dma));
> +	if (prereg) {
> +		sgls = kmalloc_array(sgls_per_request, sizeof(*sgls),
> +				     GFP_ATOMIC);
> +		if (!sgls)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		reqbuf.size = 32 + sgls_per_request * 16;
> +		reqbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						 reqbuf.size, &reqbuf.phys,
> +						 GFP_DMA);
> +		if (!reqbuf.virt) {
> +			efc_log_err(hw->os, "dma_alloc reqbuf failed\n");
> +			kfree(sgls);
> +			return EFCT_HW_RTN_NO_MEMORY;
> +		}
> +	}
> +
> +	for (nremaining = hw->config.n_io; nremaining; nremaining -= n) {
> +		if (prereg) {
> +			/* Copy addresses of SGLs into the local sgls[] array;
> +			 * break out if the xri values are not contiguous.
> +			 */
> +			u32 min = (sgls_per_request < nremaining)
> +					? sgls_per_request : nremaining;
> +			for (n = 0; n < min; n++) {
> +				/* Check that we have contiguous xri values */
> +				if (n > 0) {
> +					if (hw->io[io_index + n]->indicator !=
> +					    hw->io[io_index + n - 1]->indicator
> +					    + 1)
> +						break;
> +				}
> +				sgls[n] = hw->io[io_index + n]->sgl;
> +			}
> +
> +			if (!sli_cmd_post_sgl_pages(&hw->sli, cmd,
> +						   sizeof(cmd),
> +						hw->io[io_index]->indicator,
> +						n, sgls, NULL, &reqbuf)) {
> +				if (efct_hw_command(hw, cmd, EFCT_CMD_POLL,
> +						    NULL, NULL)) {
> +					rc = EFCT_HW_RTN_ERROR;
> +					efc_log_err(hw->os,
> +						     "SGL post failed\n");
> +					break;
> +				}
> +			}

Maybe move this into a helper function to avoid pushing everything so far to
the right.
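
Something along these lines would keep the loop body flat (untested sketch,
the helper name is made up; it preserves the current behaviour of silently
skipping the post when the mailbox command cannot be built):

	static enum efct_hw_rtn
	efct_hw_post_prereg_sgls(struct efct_hw *hw, u32 io_index, u32 n,
				 struct efc_dma **sgls, struct efc_dma *reqbuf)
	{
		u8 cmd[SLI4_BMBX_SIZE];

		/* build the POST_SGL_PAGES mailbox command for this chunk */
		if (sli_cmd_post_sgl_pages(&hw->sli, cmd, sizeof(cmd),
					   hw->io[io_index]->indicator,
					   n, sgls, NULL, reqbuf))
			return EFCT_HW_RTN_SUCCESS;

		/* issue it synchronously, as the current code does */
		if (efct_hw_command(hw, cmd, EFCT_CMD_POLL, NULL, NULL)) {
			efc_log_err(hw->os, "SGL post failed\n");
			return EFCT_HW_RTN_ERROR;
		}

		return EFCT_HW_RTN_SUCCESS;
	}

and the caller then becomes:

	rc = efct_hw_post_prereg_sgls(hw, io_index, n, sgls, &reqbuf);
	if (rc != EFCT_HW_RTN_SUCCESS)
		break;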

> +		} else {
> +			n = nremaining;
> +		}
> +
> +		/* Add to tail if successful */
> +		for (i = 0; i < n; i++, io_index++) {
> +			io = hw->io[io_index];
> +			io->state = EFCT_HW_IO_STATE_FREE;
> +			INIT_LIST_HEAD(&io->list_entry);
> +			list_add_tail(&io->list_entry, &hw->io_free);
> +		}
> +	}
> +
> +	if (prereg) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  reqbuf.size, reqbuf.virt, reqbuf.phys);
> +		memset(&reqbuf, 0, sizeof(struct efc_dma));
> +		kfree(sgls);
> +	}
> +
> +	return rc;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_config_set_fdt_xfer_hint(struct efct_hw *hw, u32 fdt_xfer_hint)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint param;
> +
> +	memset(&param, 0, sizeof(param));
> +	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
> +	/* build the set_features command */
> +	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    SLI4_SET_FEATURES_SET_FTD_XFER_HINT,
> +				    sizeof(param),
> +				    &param);
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc)
> +		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
> +			      fdt_xfer_hint, rc);
> +	else
> +		efc_log_info(hw->os, "Set FTD transfer hint to %d\n",
> +			      le32_to_cpu(param.fdt_xfer_hint));
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_config_rq(struct efct_hw *hw)
> +{
> +	u32 min_rq_count, i, rc;
> +	struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
> +	u8 buf[SLI4_BMBX_SIZE];
> +
> +	efc_log_info(hw->os, "using REG_FCFI standard\n");
> +
> +	/*
> +	 * Set the filter match/mask values from hw's
> +	 * filter_def values
> +	 */
> +	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
> +		rq_cfg[i].rq_id = cpu_to_le16(0xffff);
> +		rq_cfg[i].r_ctl_mask = (u8)hw->config.filter_def[i];
> +		rq_cfg[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
> +		rq_cfg[i].type_mask = (u8)(hw->config.filter_def[i] >> 16);
> +		rq_cfg[i].type_match = (u8)(hw->config.filter_def[i] >> 24);
> +	}
> +
> +	/*
> +	 * Update the rq_id's of the FCF configuration
> +	 * (don't update more than the number of rq_cfg
> +	 * elements)
> +	 */
> +	min_rq_count = (hw->hw_rq_count < SLI4_CMD_REG_FCFI_NUM_RQ_CFG)	?
> +			hw->hw_rq_count : SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
> +	for (i = 0; i < min_rq_count; i++) {
> +		struct hw_rq *rq = hw->hw_rq[i];
> +		u32 j;
> +
> +		for (j = 0; j < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; j++) {
> +			u32 mask = (rq->filter_mask != 0) ?
> +				rq->filter_mask : 1;
> +
> +			if (!(mask & (1U << j)))
> +				continue;
> +
> +			rq_cfg[j].rq_id = cpu_to_le16(rq->hdr->id);
> +			efct_logfcfi(hw, j, i, rq->hdr->id);
> +		}
> +	}
> +
> +	rc = EFCT_HW_RTN_ERROR;
> +	if (!sli_cmd_reg_fcfi(&hw->sli, buf,
> +				SLI4_BMBX_SIZE, 0,
> +				rq_cfg)) {
> +		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL,
> +				NULL, NULL);
> +	}
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_err(hw->os,
> +				"FCFI registration failed\n");
> +		return rc;
> +	}
> +	hw->fcf_indicator =
> +		le16_to_cpu(((struct sli4_cmd_reg_fcfi *)buf)->fcfi);
> +
> +	return rc;
> +}
> +
> +static void
> +efct_hw_queue_hash_add(struct efct_queue_hash *hash,
> +		       u16 id, u16 index)
> +{
> +	u32	hash_index = id & (EFCT_HW_Q_HASH_SIZE - 1);
> +
> +	/*
> +	 * Since the hash is always bigger than the number of queues, we
> +	 * never have to worry about an infinite loop.
> +	 */
> +	while (hash[hash_index].in_use)
> +		hash_index = (hash_index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
> +
> +	/* not used, claim the entry */
> +	hash[hash_index].id = id;
> +	hash[hash_index].in_use = true;
> +	hash[hash_index].index = index;
> +}
> +
> +/* enable sli port health check */
> +static enum efct_hw_rtn
> +efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query,
> +				     u8 enable)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u8 buf[SLI4_BMBX_SIZE];
> +	struct sli4_rqst_cmn_set_features_health_check param;
> +	u32	health_check_flag = 0;
> +
> +	memset(&param, 0, sizeof(param));
> +
> +	if (enable)
> +		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
> +
> +	if (query)
> +		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
> +
> +	param.health_check_dword = cpu_to_le32(health_check_flag);
> +
> +	/* build the set_features command */
> +	sli_cmd_common_set_features(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK,
> +				    sizeof(param),
> +				    &param);
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
> +	if (rc)
> +		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
> +	else
> +		efc_log_test(hw->os, "SLI Port Health Check is enabled\n");
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_init(struct efct_hw *hw)
> +{
> +	enum efct_hw_rtn rc;
> +	u32 i = 0;
> +	u32 max_rpi;
> +	int rem_count;
> +	unsigned long flags = 0;
> +	struct efct_hw_io *temp;
> +	struct sli4 *sli = &hw->sli;
> +	struct hw_rq *rq;
> +
> +	/*
> +	 * Make sure the command lists are empty. If this is start-of-day,
> +	 * they'll be empty since they were just initialized in efct_hw_setup.
> +	 * If we've just gone through a reset, the command and command pending
> +	 * lists should have been cleaned up as part of the reset
> +	 * (efct_hw_reset()).
> +	 */
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +	if (!list_empty(&hw->cmd_head)) {
> +		efc_log_err(hw->os, "command found on cmd list\n");
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);

Move the error logging out of the locking section.

> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	if (!list_empty(&hw->cmd_pending)) {
> +		efc_log_err(hw->os,
> +				"command found on pending list\n");

here too
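
i.e. something like this (sketch) covering both checks:

	spin_lock_irqsave(&hw->cmd_lock, flags);
	if (!list_empty(&hw->cmd_head)) {
		spin_unlock_irqrestore(&hw->cmd_lock, flags);
		efc_log_err(hw->os, "command found on cmd list\n");
		return EFCT_HW_RTN_ERROR;
	}
	if (!list_empty(&hw->cmd_pending)) {
		spin_unlock_irqrestore(&hw->cmd_lock, flags);
		efc_log_err(hw->os, "command found on pending list\n");
		return EFCT_HW_RTN_ERROR;
	}
	spin_unlock_irqrestore(&hw->cmd_lock, flags);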

> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	/* Free RQ buffers if previously allocated */
> +	efct_hw_rx_free(hw);
> +
> +	/*
> +	 * The IO queues must be initialized here for the reset case. The
> +	 * efct_hw_init_io() function will re-add the IOs to the free list.
> +	 * The cmd_head list should be OK since we free all entries in
> +	 * efct_hw_command_cancel() that is called in the efct_hw_reset().
> +	 */
> +
> +	/* If we are in this function due to a reset, there may be stale items
> +	 * on lists that need to be removed.  Clean them up.
> +	 */
> +	rem_count = 0;
> +	if (hw->io_wait_free.next) {
> +		while ((!list_empty(&hw->io_wait_free))) {
> +			rem_count++;
> +			temp = list_first_entry(&hw->io_wait_free,
> +						struct efct_hw_io,
> +						list_entry);
> +			list_del(&temp->list_entry);
> +		}
> +		if (rem_count > 0) {
> +			efc_log_debug(hw->os,
> +				       "rmvd %d items from io_wait_free list\n",
> +				rem_count);
> +		}
> +	}
> +	rem_count = 0;
> +	if (hw->io_inuse.next) {
> +		while ((!list_empty(&hw->io_inuse))) {
> +			rem_count++;
> +			temp = list_first_entry(&hw->io_inuse,
> +						struct efct_hw_io,
> +						list_entry);
> +			list_del(&temp->list_entry);
> +		}
> +		if (rem_count > 0)
> +			efc_log_debug(hw->os,
> +				       "rmvd %d items from io_inuse list\n",
> +				       rem_count);
> +	}
> +	rem_count = 0;
> +	if (hw->io_free.next) {
> +		while ((!list_empty(&hw->io_free))) {
> +			rem_count++;
> +			temp = list_first_entry(&hw->io_free,
> +						struct efct_hw_io,
> +						list_entry);
> +			list_del(&temp->list_entry);
> +		}
> +		if (rem_count > 0)
> +			efc_log_debug(hw->os,
> +				       "rmvd %d items from io_free list\n",
> +				       rem_count);
> +	}
> +
> +	INIT_LIST_HEAD(&hw->io_inuse);
> +	INIT_LIST_HEAD(&hw->io_free);
> +	INIT_LIST_HEAD(&hw->io_wait_free);
> +
> +	/* If MRQ is not required, make sure we don't request the feature. */
> +	hw->sli.features &= (~SLI4_REQFEAT_MRQP);
> +
> +	if (sli_init(&hw->sli)) {
> +		efc_log_err(hw->os, "SLI failed to initialize\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->sliport_healthcheck) {
> +		rc = efct_hw_config_sli_port_health_check(hw, 0, 1);
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			efc_log_err(hw->os, "Enable port Health check fail\n");
> +			return rc;
> +		}
> +	}
> +
> +	/*
> +	 * Set FDT transfer hint, only works on Lancer
> +	 */
> +	if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2) {
> +		/*
> +		 * Non-fatal error. In particular, we can disregard failure to
> +		 * set EFCT_HW_FDT_XFER_HINT on devices with legacy firmware
> +		 * that do not support EFCT_HW_FDT_XFER_HINT feature.
> +		 */
> +		efct_hw_config_set_fdt_xfer_hint(hw, EFCT_HW_FDT_XFER_HINT);
> +	}
> +
> +	/* zero the hashes */
> +	memset(hw->cq_hash, 0, sizeof(hw->cq_hash));
> +	efc_log_debug(hw->os, "Max CQs %d, hash size = %d\n",
> +		       EFCT_HW_MAX_NUM_CQ, EFCT_HW_Q_HASH_SIZE);
> +
> +	memset(hw->rq_hash, 0, sizeof(hw->rq_hash));
> +	efc_log_debug(hw->os, "Max RQs %d, hash size = %d\n",
> +		       EFCT_HW_MAX_NUM_RQ, EFCT_HW_Q_HASH_SIZE);
> +
> +	memset(hw->wq_hash, 0, sizeof(hw->wq_hash));
> +	efc_log_debug(hw->os, "Max WQs %d, hash size = %d\n",
> +		       EFCT_HW_MAX_NUM_WQ, EFCT_HW_Q_HASH_SIZE);
> +
> +	rc = efct_hw_init_queues(hw);
> +	if (rc != EFCT_HW_RTN_SUCCESS)
> +		return rc;
> +
> +	/* Allocate and post RQ buffers */
> +	rc = efct_hw_rx_allocate(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "rx_allocate failed\n");
> +		return rc;
> +	}
> +
> +	rc = efct_hw_rx_post(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "WARNING - error posting RQ buffers\n");
> +		return rc;
> +	}
> +
> +	max_rpi = sli->extent[SLI_RSRC_RPI].size;
> +	/* Allocate rpi_ref if not previously allocated */
> +	if (!hw->rpi_ref) {
> +		hw->rpi_ref = kmalloc_array(max_rpi, sizeof(*hw->rpi_ref),
> +				      GFP_KERNEL);
> +		if (!hw->rpi_ref)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(hw->rpi_ref, 0, max_rpi * sizeof(*hw->rpi_ref));
> +	}
> +
> +	for (i = 0; i < max_rpi; i++) {
> +		atomic_set(&hw->rpi_ref[i].rpi_count, 0);
> +		atomic_set(&hw->rpi_ref[i].rpi_attached, 0);
> +	}
> +
> +	rc = efct_hw_config_rq(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "efct_hw_config_rq failed %d\n", rc);
> +		return rc;
> +	}
> +
> +	/*
> +	 * Allocate the WQ request tag pool, if not previously allocated
> +	 * (the request tag value is 16 bits, thus the pool allocation size
> +	 * of 64k)
> +	 */
> +	hw->wq_reqtag_pool = efct_hw_reqtag_pool_alloc(hw);
> +	if (!hw->wq_reqtag_pool) {
> +		efc_log_err(hw->os, "efct_hw_reqtag_init failed %d\n", rc);
> +		return rc;
> +	}
> +
> +	rc = efct_hw_setup_io(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "IO allocation failure\n");
> +		return rc;
> +	}
> +
> +	rc = efct_hw_init_io(hw);
> +	if (rc) {
> +		efc_log_err(hw->os, "IO initialization failure\n");
> +		return rc;
> +	}
> +
> +	/*
> +	 * Arming the EQ allows (e.g.) interrupts when CQ completions write EQ
> +	 * entries
> +	 */
> +	for (i = 0; i < hw->eq_count; i++)
> +		sli_queue_arm(&hw->sli, &hw->eq[i], true);
> +
> +	/*
> +	 * Initialize RQ hash
> +	 */
> +	for (i = 0; i < hw->rq_count; i++)
> +		efct_hw_queue_hash_add(hw->rq_hash, hw->rq[i].id, i);
> +
> +	/*
> +	 * Initialize WQ hash
> +	 */
> +	for (i = 0; i < hw->wq_count; i++)
> +		efct_hw_queue_hash_add(hw->wq_hash, hw->wq[i].id, i);
> +
> +	/*
> +	 * Arming the CQ allows (e.g.) MQ completions to write CQ entries
> +	 */
> +	for (i = 0; i < hw->cq_count; i++) {
> +		efct_hw_queue_hash_add(hw->cq_hash, hw->cq[i].id, i);
> +		sli_queue_arm(&hw->sli, &hw->cq[i], true);
> +	}
> +
> +	/* Set RQ process limit*/
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +		hw->cq[rq->cq->instance].proc_limit = hw->config.n_io / 2;
> +	}
> +
> +	/* record the fact that the queues are functional */
> +	hw->state = EFCT_HW_STATE_ACTIVE;
> +	/*
> +	 * Allocate a HW IO for send frame.
> +	 */
> +	hw->hw_wq[0]->send_frame_io = efct_hw_io_alloc(hw);
> +	if (!hw->hw_wq[0]->send_frame_io)
> +		efc_log_err(hw->os, "alloc for send_frame_io failed\n");
> +
> +	/* Initialize send frame sequence id */
> +	atomic_set(&hw->send_frame_seq_id, 0);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_parse_filter(struct efct_hw *hw, void *value)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	char *p = NULL;
> +	char *token;
> +	u32 idx = 0;
> +
> +	for (idx = 0; idx < ARRAY_SIZE(hw->config.filter_def); idx++)
> +		hw->config.filter_def[idx] = 0;
> +
> +	p = kstrdup(value, GFP_KERNEL);
> +	if (!p || !*p) {
> +		efc_log_err(hw->os, "p is NULL\n");
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	idx = 0;
> +	while ((token = strsep(&p, ",")) && *token) {
> +		if (kstrtou32(token, 0, &hw->config.filter_def[idx++]))
> +			efc_log_err(hw->os, "kstrtou32 failed\n");
> +
> +		if (!p || !*p)
> +			break;
> +
> +		if (idx == ARRAY_SIZE(hw->config.filter_def))
> +			break;
> +	}
> +	kfree(p);
> +
> +	return rc;
> +}
> +
> +u64
> +efct_get_wwnn(struct efct_hw *hw)
> +{
> +	struct sli4 *sli = &hw->sli;
> +	u8 p[8];
> +
> +	memcpy(p, sli->wwnn, sizeof(p));
> +	return get_unaligned_be64(p);
> +}
> +
> +u64
> +efct_get_wwpn(struct efct_hw *hw)
> +{
> +	struct sli4 *sli = &hw->sli;
> +	u8 p[8];
> +
> +	memcpy(p, sli->wwpn, sizeof(p));
> +	return get_unaligned_be64(p);
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index b3d4d4bc8d8c..e5839254c730 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -614,4 +614,19 @@ struct efct_hw_grp_hdr {
>  	u8			revision[32];
>  };
>  
> +static inline int
> +efct_hw_get_link_speed(struct efct_hw *hw) {
> +	return hw->link.speed;
> +}
> +
> +extern enum efct_hw_rtn
> +efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev);
> +enum efct_hw_rtn efct_hw_init(struct efct_hw *hw);
> +extern enum efct_hw_rtn
> +efct_hw_parse_filter(struct efct_hw *hw, void *value);
> +extern uint64_t
> +efct_get_wwnn(struct efct_hw *hw);
> +extern uint64_t
> +efct_get_wwpn(struct efct_hw *hw);
> +
>  #endif /* __EFCT_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
> new file mode 100644
> index 000000000000..b683208d396f
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_xport.c
> @@ -0,0 +1,523 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_unsol.h"
> +
> +/* Post node event callback argument. */
> +struct efct_xport_post_node_event {
> +	struct completion done;
> +	atomic_t refcnt;
> +	struct efc_node *node;
> +	u32	evt;
> +	void *context;
> +};
> +
> +static struct dentry *efct_debugfs_root;
> +static atomic_t efct_debugfs_count;
> +
> +static struct scsi_host_template efct_template = {
> +	.module			= THIS_MODULE,
> +	.name			= EFCT_DRIVER_NAME,
> +	.supported_mode		= MODE_TARGET,
> +};
> +
> +/* globals */
> +static struct fc_function_template efct_xport_functions;
> +static struct fc_function_template efct_vport_functions;
> +
> +static struct scsi_transport_template *efct_xport_fc_tt;
> +static struct scsi_transport_template *efct_vport_fc_tt;
> +
> +/*
> + * transport object is allocated,
> + * and associated with a device instance
> + */
> +struct efct_xport *
> +efct_xport_alloc(struct efct *efct)
> +{
> +	struct efct_xport *xport;
> +
> +	xport = kmalloc(sizeof(*xport), GFP_KERNEL);
> +	if (!xport)
> +		return xport;
> +
> +	memset(xport, 0, sizeof(*xport));
> +	xport->efct = efct;
> +	return xport;
> +}
> +
> +static int
> +efct_xport_init_debugfs(struct efct *efct)
> +{
> +	/* Setup efct debugfs root directory */
> +	if (!efct_debugfs_root) {
> +		efct_debugfs_root = debugfs_create_dir("efct", NULL);
> +		atomic_set(&efct_debugfs_count, 0);
> +		if (!efct_debugfs_root) {
> +			efc_log_err(efct, "failed to create debugfs entry\n");
> +			goto debugfs_fail;
> +		}
> +	}
> +
> +	/* Create a directory for sessions in root */
> +	if (!efct->sess_debugfs_dir) {
> +		efct->sess_debugfs_dir = debugfs_create_dir("sessions", NULL);
> +		if (!efct->sess_debugfs_dir) {
> +			efc_log_err(efct,
> +				     "failed to create debugfs entry for sessions\n");
> +			goto debugfs_fail;
> +		}
> +		atomic_inc(&efct_debugfs_count);
> +	}
> +
> +	return EFC_SUCCESS;
> +
> +debugfs_fail:
> +	return EFC_FAIL;
> +}
> +
> +static void efct_xport_delete_debugfs(struct efct *efct)
> +{
> +	/* Remove session debugfs directory */
> +	debugfs_remove(efct->sess_debugfs_dir);
> +	efct->sess_debugfs_dir = NULL;
> +	atomic_dec(&efct_debugfs_count);
> +
> +	if (atomic_read(&efct_debugfs_count) == 0) {
> +		/* remove root debugfs directory */
> +		debugfs_remove(efct_debugfs_root);
> +		efct_debugfs_root = NULL;
> +	}
> +}
> +
> +int
> +efct_xport_attach(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +	int rc;
> +
> +	xport->fcfi.hold_frames = true;
> +	spin_lock_init(&xport->fcfi.pend_frames_lock);
> +	INIT_LIST_HEAD(&xport->fcfi.pend_frames);
> +
> +	rc = efct_hw_setup(&efct->hw, efct, efct->pcidev);
> +	if (rc) {
> +		efc_log_err(efct, "%s: Can't setup hardware\n", efct->desc);
> +		return rc;
> +	}
> +
> +	efct_hw_parse_filter(&efct->hw, (void *)efct->filter_def);
> +
> +	xport->io_pool = efct_io_pool_create(efct, efct->hw.config.n_sgl);
> +	if (!xport->io_pool) {
> +		efc_log_err(efct, "Can't allocate IO pool\n");
> +		return -ENOMEM;

I am still confused about when plain errno values are used and when
EFC_FAIL/EFC_SUCCESS is used.
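
For example, in efct_xport_attach() above:

	if (!xport->io_pool) {
		efc_log_err(efct, "Can't allocate IO pool\n");
		return -ENOMEM;
	}

	return EFC_SUCCESS;

while efct_device_detach() returns EFC_FAIL/EFC_SUCCESS and efct_fw_reset()
returns -1. Picking one convention would make the call sites easier to audit.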

> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_xport_link_stats_cb(int status, u32 num_counters,
> +			 struct efct_hw_link_stat_counts *counters, void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.link_stats.link_failure_error_count =
> +		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
> +	result->stats.link_stats.loss_of_sync_error_count =
> +		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
> +	result->stats.link_stats.primitive_sequence_error_count =
> +		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
> +	result->stats.link_stats.invalid_transmission_word_error_count =
> +		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
> +	result->stats.link_stats.crc_error_count =
> +		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
> +
> +	complete(&result->stats.done);
> +}
> +
> +static void
> +efct_xport_host_stats_cb(int status, u32 num_counters,
> +			 struct efct_hw_host_stat_counts *counters, void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.host_stats.transmit_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.receive_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.transmit_frame_count =
> +		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
> +	result->stats.host_stats.receive_frame_count =
> +		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
> +
> +	complete(&result->stats.done);
> +}
> +
> +static void
> +efct_xport_async_link_stats_cb(int status, u32 num_counters,
> +			       struct efct_hw_link_stat_counts *counters,
> +			       void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.link_stats.link_failure_error_count =
> +		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
> +	result->stats.link_stats.loss_of_sync_error_count =
> +		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
> +	result->stats.link_stats.primitive_sequence_error_count =
> +		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
> +	result->stats.link_stats.invalid_transmission_word_error_count =
> +		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
> +	result->stats.link_stats.crc_error_count =
> +		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
> +}
> +
> +static void
> +efct_xport_async_host_stats_cb(int status, u32 num_counters,
> +			       struct efct_hw_host_stat_counts *counters,
> +			       void *arg)
> +{
> +	union efct_xport_stats_u *result = arg;
> +
> +	result->stats.host_stats.transmit_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.receive_kbyte_count =
> +		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
> +	result->stats.host_stats.transmit_frame_count =
> +		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
> +	result->stats.host_stats.receive_frame_count =
> +		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
> +}
> +
> +static void
> +efct_xport_config_stats_timer(struct efct *efct);
> +
> +static void
> +efct_xport_stats_timer_cb(struct timer_list *t)
> +{
> +	struct efct_xport *xport = from_timer(xport, t, stats_timer);
> +	struct efct *efct = xport->efct;
> +
> +	efct_xport_config_stats_timer(efct);
> +}
> +
> +static void
> +efct_xport_config_stats_timer(struct efct *efct)
> +{
> +	u32 timeout = 3 * 1000;
> +	struct efct_xport *xport = NULL;
> +
> +	if (!efct) {
> +		pr_err("%s: failed to locate EFCT device\n", __func__);
> +		return;
> +	}
> +
> +	xport = efct->xport;
> +	efct_hw_get_link_stats(&efct->hw, 0, 0, 0,
> +			       efct_xport_async_link_stats_cb,
> +			       &xport->fc_xport_stats);
> +	efct_hw_get_host_stats(&efct->hw, 0, efct_xport_async_host_stats_cb,
> +			       &xport->fc_xport_stats);
> +
> +	timer_setup(&xport->stats_timer,
> +		    &efct_xport_stats_timer_cb, 0);
> +	mod_timer(&xport->stats_timer,
> +		  jiffies + msecs_to_jiffies(timeout));
> +}
> +
> +int
> +efct_xport_initialize(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +	int rc = 0;
> +
> +	/* Initialize io lists */
> +	spin_lock_init(&xport->io_pending_lock);
> +	INIT_LIST_HEAD(&xport->io_pending_list);
> +	atomic_set(&xport->io_active_count, 0);
> +	atomic_set(&xport->io_pending_count, 0);
> +	atomic_set(&xport->io_total_free, 0);
> +	atomic_set(&xport->io_total_pending, 0);
> +	atomic_set(&xport->io_alloc_failed_count, 0);
> +	atomic_set(&xport->io_pending_recursing, 0);
> +	rc = efct_hw_init(&efct->hw);
> +	if (rc) {
> +		efc_log_err(efct, "efct_hw_init failure\n");
> +		goto out;
> +	}
> +
> +	rc = efct_scsi_tgt_new_device(efct);
> +	if (rc) {
> +		efc_log_err(efct, "failed to initialize target\n");
> +		goto hw_init_out;
> +	}
> +
> +	rc = efct_scsi_new_device(efct);
> +	if (rc) {
> +		efc_log_err(efct, "failed to initialize initiator\n");
> +		goto tgt_dev_out;
> +	}
> +
> +	/* Get FC link and host statistics perodically*/
> +	efct_xport_config_stats_timer(efct);
> +
> +	efct_xport_init_debugfs(efct);
> +
> +	return rc;
> +
> +tgt_dev_out:
> +	efct_scsi_tgt_del_device(efct);
> +
> +hw_init_out:
> +	efct_hw_teardown(&efct->hw);
> +out:
> +	return rc;
> +}
> +
> +int
> +efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
> +		  union efct_xport_stats_u *result)
> +{
> +	u32 rc = 0;
> +	struct efct *efct = NULL;
> +	union efct_xport_stats_u value;
> +
> +	efct = xport->efct;
> +
> +	switch (cmd) {
> +	case EFCT_XPORT_CONFIG_PORT_STATUS:
> +		if (xport->configured_link_state == 0) {
> +			/*
> +			 * Initial state is offline. configured_link_state is
> +			 * set to online explicitly when port is brought online
> +			 */
> +			xport->configured_link_state = EFCT_XPORT_PORT_OFFLINE;
> +		}
> +		result->value = xport->configured_link_state;
> +		break;
> +
> +	case EFCT_XPORT_PORT_STATUS:
> +		/* Determine port status based on link speed. */
> +		value.value = efct_hw_get_link_speed(&efct->hw);
> +		if (value.value == 0)
> +			result->value = 0;
> +		else
> +			result->value = 1;
> +		rc = 0;
> +		break;
> +
> +	case EFCT_XPORT_LINK_SPEED: {
> +		result->value = efct_hw_get_link_speed(&efct->hw);
> +
> +		break;
> +	}
> +	case EFCT_XPORT_LINK_STATISTICS:
> +		memcpy((void *)result, &efct->xport->fc_xport_stats,
> +		       sizeof(union efct_xport_stats_u));
> +		break;
> +	case EFCT_XPORT_LINK_STAT_RESET: {
> +		/* Create a completion to synchronize the stat reset process. */
> +		init_completion(&result->stats.done);
> +
> +		/* First reset the link stats */
> +		rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
> +					    efct_xport_link_stats_cb, result);
> +
> +		/* Wait for completion to be signaled when the cmd completes */
> +		if (wait_for_completion_interruptible(&result->stats.done)) {
> +			/* Undefined failure */
> +			efc_log_test(efct, "sem wait failed\n");
> +			rc = -ENXIO;
> +			break;
> +		}
> +
> +		/* Next reset the host stats */
> +		rc = efct_hw_get_host_stats(&efct->hw, 1,
> +					    efct_xport_host_stats_cb, result);
> +
> +		/* Wait for completion to be signaled when the cmd completes */
> +		if (wait_for_completion_interruptible(&result->stats.done)) {
> +			/* Undefined failure */
> +			efc_log_test(efct, "sem wait failed\n");
> +			rc = -ENXIO;
> +			break;
> +		}
> +		break;
> +	}
> +	default:
> +		rc = -1;

Here again, rc is suddenly -1 while above it's -ENXIO.
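
Maybe something like this for the default case (just a sketch, to keep
the errno style consistent within the function):

	default:
		rc = -EINVAL;
		break;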

> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +efct_scsi_new_device(struct efct *efct)
> +{
> +	struct Scsi_Host *shost = NULL;
> +	int error = 0;
> +	struct efct_vport *vport = NULL;
> +	union efct_xport_stats_u speed;
> +	u32 supported_speeds = 0;
> +
> +	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
> +	if (!shost) {
> +		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/* save shost to initiator-client context */
> +	efct->shost = shost;
> +
> +	/* save efct information to shost LLD-specific space */
> +	vport = (struct efct_vport *)shost->hostdata;
> +	vport->efct = efct;
> +
> +	/*
> +	 * Set initial can_queue value to the max SCSI IOs. This is the maximum
> +	 * global queue depth (as opposed to the per-LUN queue depth --
> +	 * .cmd_per_lun This may need to be adjusted for I+T mode.
> +	 */
> +	shost->can_queue = efct->hw.config.n_io;
> +	shost->max_cmd_len = 16; /* 16-byte CDBs */
> +	shost->max_id = 0xffff;
> +	shost->max_lun = 0xffffffff;
> +
> +	/*
> +	 * can only accept (from mid-layer) as many SGEs as we've
> +	 * pre-registered
> +	 */
> +	shost->sg_tablesize = sli_get_max_sgl(&efct->hw.sli);
> +
> +	/* attach FC Transport template to shost */
> +	shost->transportt = efct_xport_fc_tt;
> +	efc_log_debug(efct, "transport template=%p\n", efct_xport_fc_tt);
> +
> +	/* get pci_dev structure and add host to SCSI ML */
> +	error = scsi_add_host_with_dma(shost, &efct->pcidev->dev,
> +				       &efct->pcidev->dev);
> +	if (error) {
> +		efc_log_test(efct, "failed scsi_add_host_with_dma\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/* Set symbolic name for host port */
> +	snprintf(fc_host_symbolic_name(shost),
> +		 sizeof(fc_host_symbolic_name(shost)),
> +		     "Emulex %s FV%s DV%s", efct->model,
> +		     efct->hw.sli.fw_name[0], EFCT_DRIVER_VERSION);
> +
> +	/* Set host port supported classes */
> +	fc_host_supported_classes(shost) = FC_COS_CLASS3;
> +
> +	speed.value = 1000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_1GBIT;
> +	}
> +	speed.value = 2000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_2GBIT;
> +	}
> +	speed.value = 4000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_4GBIT;
> +	}
> +	speed.value = 8000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_8GBIT;
> +	}
> +	speed.value = 10000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_10GBIT;
> +	}
> +	speed.value = 16000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_16GBIT;
> +	}
> +	speed.value = 32000;
> +	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +			      &speed)) {
> +		supported_speeds |= FC_PORTSPEED_32GBIT;
> +	}
> +
> +	fc_host_supported_speeds(shost) = supported_speeds;
> +
> +	fc_host_node_name(shost) = efct_get_wwnn(&efct->hw);
> +	fc_host_port_name(shost) = efct_get_wwpn(&efct->hw);
> +	fc_host_max_npiv_vports(shost) = 128;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +struct scsi_transport_template *
> +efct_attach_fc_transport(void)
> +{
> +	struct scsi_transport_template *efct_fc_template = NULL;
> +
> +	efct_fc_template = fc_attach_transport(&efct_xport_functions);
> +
> +	if (!efct_fc_template)
> +		pr_err("failed to attach EFCT with fc transport\n");
> +
> +	return efct_fc_template;
> +}
> +
> +struct scsi_transport_template *
> +efct_attach_vport_fc_transport(void)
> +{
> +	struct scsi_transport_template *efct_fc_template = NULL;
> +
> +	efct_fc_template = fc_attach_transport(&efct_vport_functions);
> +
> +	if (!efct_fc_template)
> +		pr_err("failed to attach EFCT with fc transport\n");
> +
> +	return efct_fc_template;
> +}
> +
> +int
> +efct_scsi_reg_fc_transport(void)
> +{
> +	/* attach to appropriate scsi_tranport_* module */
> +	efct_xport_fc_tt = efct_attach_fc_transport();
> +	if (!efct_xport_fc_tt) {
> +		pr_err("%s: failed to attach to scsi_transport_*", __func__);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_vport_fc_tt = efct_attach_vport_fc_transport();
> +	if (!efct_vport_fc_tt) {
> +		pr_err("%s: failed to attach to scsi_transport_*", __func__);
> +		efct_release_fc_transport(efct_xport_fc_tt);
> +		efct_xport_fc_tt = NULL;
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_scsi_release_fc_transport(void)
> +{
> +	/* detach from scsi_transport_* */
> +	efct_release_fc_transport(efct_xport_fc_tt);
> +	efct_xport_fc_tt = NULL;
> +	if (efct_vport_fc_tt)
> +		efct_release_fc_transport(efct_vport_fc_tt);
> +	efct_vport_fc_tt = NULL;
> +
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_xport.h b/drivers/scsi/elx/efct/efct_xport.h
> new file mode 100644
> index 000000000000..0866edc55c54
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_xport.h
> @@ -0,0 +1,201 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_XPORT_H__)
> +#define __EFCT_XPORT_H__
> +
> +/* FCFI lookup/pending frames */
> +struct efct_xport_fcfi {
> +	/* lock to protect pending frames access*/

Does it protect only pend_frames or the whole struct?

> +	spinlock_t	pend_frames_lock;
> +	struct list_head	pend_frames;
> +	/* hold pending frames */
> +	bool hold_frames;
> +	/* count of pending frames that were processed */
> +	u32	pend_frames_processed;

These members are not aligned.

> +};
> +
> +enum efct_xport_ctrl {
> +	EFCT_XPORT_PORT_ONLINE = 1,
> +	EFCT_XPORT_PORT_OFFLINE,
> +	EFCT_XPORT_SHUTDOWN,
> +	EFCT_XPORT_POST_NODE_EVENT,
> +	EFCT_XPORT_WWNN_SET,
> +	EFCT_XPORT_WWPN_SET,
> +};
> +
> +enum efct_xport_status {
> +	EFCT_XPORT_PORT_STATUS,
> +	EFCT_XPORT_CONFIG_PORT_STATUS,
> +	EFCT_XPORT_LINK_SPEED,
> +	EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
> +	EFCT_XPORT_LINK_STATISTICS,
> +	EFCT_XPORT_LINK_STAT_RESET,
> +	EFCT_XPORT_IS_QUIESCED
> +};
> +
> +struct efct_xport_link_stats {
> +	bool		rec;
> +	bool		gec;
> +	bool		w02of;
> +	bool		w03of;
> +	bool		w04of;
> +	bool		w05of;
> +	bool		w06of;
> +	bool		w07of;
> +	bool		w08of;
> +	bool		w09of;
> +	bool		w10of;
> +	bool		w11of;
> +	bool		w12of;
> +	bool		w13of;
> +	bool		w14of;
> +	bool		w15of;
> +	bool		w16of;
> +	bool		w17of;
> +	bool		w18of;
> +	bool		w19of;
> +	bool		w20of;
> +	bool		w21of;
> +	bool		clrc;
> +	bool		clof1;
> +	u32		link_failure_error_count;
> +	u32		loss_of_sync_error_count;
> +	u32		loss_of_signal_error_count;
> +	u32		primitive_sequence_error_count;
> +	u32		invalid_transmission_word_error_count;
> +	u32		crc_error_count;
> +	u32		primitive_sequence_event_timeout_count;
> +	u32		elastic_buffer_overrun_error_count;
> +	u32		arbitration_fc_al_timeout_count;
> +	u32		advertised_receive_bufftor_to_buffer_credit;
> +	u32		current_receive_buffer_to_buffer_credit;
> +	u32		advertised_transmit_buffer_to_buffer_credit;
> +	u32		current_transmit_buffer_to_buffer_credit;
> +	u32		received_eofa_count;
> +	u32		received_eofdti_count;
> +	u32		received_eofni_count;
> +	u32		received_soff_count;
> +	u32		received_dropped_no_aer_count;
> +	u32		received_dropped_no_available_rpi_resources_count;
> +	u32		received_dropped_no_available_xri_resources_count;
> +};
> +
> +struct efct_xport_host_stats {
> +	bool		cc;
> +	u32		transmit_kbyte_count;
> +	u32		receive_kbyte_count;
> +	u32		transmit_frame_count;
> +	u32		receive_frame_count;
> +	u32		transmit_sequence_count;
> +	u32		receive_sequence_count;
> +	u32		total_exchanges_originator;
> +	u32		total_exchanges_responder;
> +	u32		receive_p_bsy_count;
> +	u32		receive_f_bsy_count;
> +	u32		dropped_frames_due_to_no_rq_buffer_count;
> +	u32		empty_rq_timeout_count;
> +	u32		dropped_frames_due_to_no_xri_count;
> +	u32		empty_xri_pool_count;
> +};
> +
> +struct efct_xport_host_statistics {
> +	struct completion		done;
> +	struct efct_xport_link_stats	link_stats;
> +	struct efct_xport_host_stats	host_stats;
> +};
> +
> +union efct_xport_stats_u {
> +	u32	value;
> +	struct efct_xport_host_statistics stats;
> +};
> +
> +struct efct_xport_fcp_stats {
> +	u64		input_bytes;
> +	u64		output_bytes;
> +	u64		input_requests;
> +	u64		output_requests;
> +	u64		control_requests;
> +};
> +
> +struct efct_xport {
> +	struct efct		*efct;
> +	/* wwpn requested by user for primary sport */

kerneldoc?
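
Something along these lines would do (sketch only, descriptions taken
from the existing comments):

/**
 * struct efct_xport - FC transport object
 * @efct:     owning efct instance
 * @req_wwpn: wwpn requested by user for primary sport
 * @req_wwnn: wwnn requested by user for primary sport
 * ...
 */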

> +	u64			req_wwpn;
> +	/* wwnn requested by user for primary sport */
> +	u64			req_wwnn;
> +
> +	struct efct_xport_fcfi	fcfi;
> +
> +	/* Nodes */
> +	/* number of allocated nodes */
> +	u32			nodes_count;
> +	/* array of pointers to nodes */
> +	struct efc_node		**nodes;
> +	/* linked list of free nodes */
> +	struct list_head	nodes_free_list;
> +
> +	/* Io pool and counts */
> +	/* pointer to IO pool */
> +	struct efct_io_pool	*io_pool;
> +	/* used to track how often IO pool is empty */
> +	atomic_t		io_alloc_failed_count;
> +	/* lock for io_pending_list */
> +	spinlock_t		io_pending_lock;
> +	/* list of IOs waiting for HW resources
> +	 *  lock: xport->io_pending_lock
> +	 *  link: efct_io_s->io_pending_link
> +	 */
> +	struct list_head	io_pending_list;
> +	/* count of totals IOS allocated */
> +	atomic_t		io_total_alloc;
> +	/* count of totals IOS free'd */
> +	atomic_t		io_total_free;
> +	/* count of totals IOS that were pended */
> +	atomic_t		io_total_pending;
> +	/* count of active IOS */
> +	atomic_t		io_active_count;
> +	/* count of pending IOS */
> +	atomic_t		io_pending_count;
> +	/* non-zero if efct_scsi_check_pending is executing */
> +	atomic_t		io_pending_recursing;
> +
> +	/* Port */
> +	/* requested link state */
> +	u32			configured_link_state;
> +
> +	/* Timer for Statistics */
> +	struct timer_list	stats_timer;
> +	union efct_xport_stats_u fc_xport_stats;
> +	struct efct_xport_fcp_stats fcp_stats;
> +};
> +
> +struct efct_rport_data {
> +	struct efc_node		*node;
> +};
> +
> +extern struct efct_xport *
> +efct_xport_alloc(struct efct *efct);
> +extern int
> +efct_xport_attach(struct efct_xport *xport);
> +extern int
> +efct_xport_initialize(struct efct_xport *xport);
> +extern int
> +efct_xport_detach(struct efct_xport *xport);
> +extern int
> +efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...);
> +extern int
> +efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
> +		  union efct_xport_stats_u *result);
> +extern void
> +efct_xport_free(struct efct_xport *xport);
> +
> +struct scsi_transport_template *efct_attach_fc_transport(void);
> +struct scsi_transport_template *efct_attach_vport_fc_transport(void);
> +void
> +efct_release_fc_transport(struct scsi_transport_template *transport_template);
> +
> +#endif /* __EFCT_XPORT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 25/31] elx: efct: Hardware IO submission routines
  2020-04-12  3:32 ` [PATCH v3 25/31] elx: efct: Hardware IO submission routines James Smart
@ 2020-04-16  8:10   ` Hannes Reinecke
  2020-04-16 12:45     ` Daniel Wagner
  2020-04-16 12:44   ` Daniel Wagner
  1 sibling, 1 reply; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  8:10 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines that write IO to Work queue, send SRRs and raw frames.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Reduced arguments for sli_fcp_tsend64_wqe(), sli_fcp_trsp64_wqe(),
>    sli_fcp_treceive64_wqe() calls
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 519 ++++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |  19 ++
>   2 files changed, 538 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index fd3c2dec3ef6..26dd9bd1eeef 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -2516,3 +2516,522 @@ efct_hw_flush(struct efct_hw *hw)
>   
>   	return EFC_SUCCESS;
>   }
> +
> +int
> +efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
> +{
> +	int rc = 0;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&wq->queue->lock, flags);
> +	if (!list_empty(&wq->pending_list)) {
> +		INIT_LIST_HEAD(&wqe->list_entry);
> +		list_add_tail(&wqe->list_entry, &wq->pending_list);
> +		wq->wq_pending_count++;
> +		while ((wq->free_count > 0) &&
> +		       ((wqe = list_first_entry(&wq->pending_list,
> +					struct efct_hw_wqe, list_entry))
> +			 != NULL)) {
> +			list_del(&wqe->list_entry);
> +			rc = _efct_hw_wq_write(wq, wqe);
> +			if (rc < 0)
> +				break;
> +			if (wqe->abort_wqe_submit_needed) {
> +				wqe->abort_wqe_submit_needed = false;
> +				sli_abort_wqe(&wq->hw->sli,
> +					      wqe->wqebuf,
> +					      wq->hw->sli.wqe_size,
> +					      SLI_ABORT_XRI,
> +					      wqe->send_abts, wqe->id,
> +					      0, wqe->abort_reqtag,
> +					      SLI4_CQ_DEFAULT);
> +
> +				INIT_LIST_HEAD(&wqe->list_entry);
> +				list_add_tail(&wqe->list_entry,
> +					      &wq->pending_list);
> +				wq->wq_pending_count++;
> +			}
> +		}
> +	} else {
> +		if (wq->free_count > 0) {
> +			rc = _efct_hw_wq_write(wq, wqe);
> +		} else {
> +			INIT_LIST_HEAD(&wqe->list_entry);
> +			list_add_tail(&wqe->list_entry, &wq->pending_list);
> +			wq->wq_pending_count++;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&wq->queue->lock, flags);
> +
> +	return rc;
> +}
> +
> +/**
> + * This routine supports communication sequences consisting of a single
> + * request and single response between two endpoints. Examples include:
> + *  - Sending an ELS request.
> + *  - Sending an ELS response - To send an ELS response, the caller must provide
> + * the OX_ID from the received request.
> + *  - Sending a FC Common Transport (FC-CT) request - To send a FC-CT request,
> + * the caller must provide the R_CTL, TYPE, and DF_CTL
> + * values to place in the FC frame header.
> + */
> +enum efct_hw_rtn
> +efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		  struct efct_hw_io *io,
> +		  struct efc_dma *send, u32 len,
> +		  struct efc_dma *receive, struct efc_remote_node *rnode,
> +		  union efct_hw_io_param_u *iparam,
> +		  efct_hw_srrs_cb_t cb, void *arg)
> +{
> +	struct sli4_sge	*sge = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +	u16	local_flags = 0;
> +	u32 sge0_flags;
> +	u32 sge1_flags;
> +
> +	if (!io || !rnode || !iparam) {
> +		pr_err("bad parm hw=%p io=%p s=%p r=%p rn=%p iparm=%p\n",
> +			hw, io, send, receive, rnode, iparam);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_test(hw->os,
> +			      "cannot send SRRS, HW state=%d\n", hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	io->rnode = rnode;
> +	io->type  = type;
> +	io->done = cb;
> +	io->arg  = arg;
> +
> +	sge = io->sgl->virt;
> +
> +	/* clear both SGE */
> +	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
> +
> +	sge0_flags = le32_to_cpu(sge[0].dw2_flags);
> +	sge1_flags = le32_to_cpu(sge[1].dw2_flags);
> +	if (send) {
> +		sge[0].buffer_address_high =
> +			cpu_to_le32(upper_32_bits(send->phys));
> +		sge[0].buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(send->phys));
> +
> +		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +
> +		sge[0].buffer_length = cpu_to_le32(len);
> +	}
> +
> +	if (type == EFCT_HW_ELS_REQ || type == EFCT_HW_FC_CT) {
> +		sge[1].buffer_address_high =
> +			cpu_to_le32(upper_32_bits(receive->phys));
> +		sge[1].buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(receive->phys));
> +
> +		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +		sge1_flags |= SLI4_SGE_LAST;
> +
> +		sge[1].buffer_length = cpu_to_le32(receive->size);
> +	} else {
> +		sge0_flags |= SLI4_SGE_LAST;
> +	}
> +
> +	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
> +	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
> +
> +	switch (type) {
> +	case EFCT_HW_ELS_REQ:
> +		if (!send ||
> +		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
> +					  hw->sli.wqe_size, io->sgl,
> +					*((u8 *)send->virt),
> +					len, receive->size,
> +					iparam->els.timeout,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->attached, rnode->fc_id,
> +					rnode->sport->fc_id)) {
> +			efc_log_err(hw->os, "REQ WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;

I did mention several times that I'm not a big fan of overly long 
argument lists.
Can't you pass in 'io' and 'rnode' directly and cut down on the number 
of arguments?
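
E.g. something along these lines (rough sketch only, the reduced
prototype is made up):

	sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf, hw->sli.wqe_size,
			      io->sgl, io, rnode, iparam);

and let the sli routine pull indicator/reqtag/fc_id and the sport
fields out of 'io' and 'rnode' itself.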

> +	case EFCT_HW_ELS_RSP:
> +		if (!send ||
> +		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, send, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, iparam->els.ox_id,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->attached, rnode->fc_id,
> +					local_flags, U32_MAX)) {

Same here.

> +			efc_log_err(hw->os, "RSP WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_ELS_RSP_SID:
> +		if (!send ||
> +		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, send, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					iparam->els.ox_id,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->attached, rnode->fc_id,
> +					local_flags, iparam->els.s_id)) {

And here.

> +			efc_log_err(hw->os, "RSP (SID) WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_FC_CT:
> +		if (!send ||
> +		    sli_gen_request64_wqe(&hw->sli, io->wqe.wqebuf, io->sgl,
> +					len, receive->size, io->indicator,
> +					io->reqtag, SLI4_CQ_DEFAULT,
> +					rnode->fc_id, rnode->indicator,
> +					&iparam->fc_ct)) {

And here.

> +			efc_log_err(hw->os, "GEN WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_FC_CT_RSP:
> +		if (!send ||
> +		    sli_xmit_sequence64_wqe(&hw->sli, io->wqe.wqebuf,
> +					    io->sgl, len, io->indicator,
> +					    io->reqtag, rnode->fc_id,
> +					    rnode->indicator, &iparam->fc_ct)) {

And here.

> +			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_BLS_ACC:
> +	case EFCT_HW_BLS_RJT:
> +	{
> +		struct sli_bls_payload	bls;
> +
> +		if (type == EFCT_HW_BLS_ACC) {
> +			bls.type = SLI4_SLI_BLS_ACC;
> +			memcpy(&bls.u.acc, iparam->bls.payload,
> +			       sizeof(bls.u.acc));
> +		} else {
> +			bls.type = SLI4_SLI_BLS_RJT;
> +			memcpy(&bls.u.rjt, iparam->bls.payload,
> +			       sizeof(bls.u.rjt));
> +		}
> +
> +		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
> +		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
> +
> +		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, &bls,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					rnode->attached,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->fc_id, rnode->sport->fc_id,
> +					U32_MAX)) {

This simply cries out for doing so ...

> +			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_BLS_ACC_SID:
> +	{
> +		struct sli_bls_payload	bls;
> +
> +		bls.type = SLI4_SLI_BLS_ACC;
> +		memcpy(&bls.u.acc, iparam->bls.payload,
> +		       sizeof(bls.u.acc));
> +
> +		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
> +		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
> +
> +		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, &bls,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					rnode->attached,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->fc_id, rnode->sport->fc_id,
> +					iparam->bls.s_id)) {

...


Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion
  2020-04-12  3:32 ` [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion James Smart
  2020-04-16  7:14   ` Hannes Reinecke
@ 2020-04-16  8:24   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  8:24 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:49PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for queue creation, deletion, and configuration.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Removed all Queue topology parsing code
>   Reworked queue creation code.
> ---
>  drivers/scsi/elx/efct/efct_hw_queues.c | 765 +++++++++++++++++++++++++++++++++
>  1 file changed, 765 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
> new file mode 100644
> index 000000000000..c343e7c5b20d
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_hw_queues.c
> @@ -0,0 +1,765 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_hw.h"
> +#include "efct_unsol.h"
> +
> +/**
> + * SLI queues are created and initialized
> + */
> +enum efct_hw_rtn
> +efct_hw_init_queues(struct efct_hw *hw)
> +{
> +	struct hw_eq *eq = NULL;
> +	struct hw_cq *cq = NULL;
> +	struct hw_wq *wq = NULL;
> +	struct hw_rq *rq = NULL;
> +	struct hw_mq *mq = NULL;
> +
> +	hw->eq_count = 0;
> +	hw->cq_count = 0;
> +	hw->mq_count = 0;
> +	hw->wq_count = 0;
> +	hw->rq_count = 0;
> +	hw->hw_rq_count = 0;
> +	INIT_LIST_HEAD(&hw->eq_list);
> +
> +	/* Create EQ */
> +	eq = efct_hw_new_eq(hw, EFCT_HW_EQ_DEPTH);
> +	if (!eq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;

Not sure if it's worth introducing EFCT_HW_RTN_NO_MEMORY; ENOMEM is a
pretty good match and is already used in other places.
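
i.e. just (sketch, assuming the return type is switched to a plain int):

		efct_hw_queue_teardown(hw);
		return -ENOMEM;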

> +	}
> +
> +	/* Create RQ*/
> +	cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
> +	if (!cq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	rq = efct_hw_new_rq(cq, EFCT_HW_RQ_ENTRIES_DEF);
> +	if (!rq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	/* Create MQ*/
> +	cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
> +	if (!cq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	mq = efct_hw_new_mq(cq, EFCT_HW_MQ_DEPTH);
> +	if (!mq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	/* Create WQ */
> +	cq = efct_hw_new_cq(eq, hw->num_qentries[SLI_QTYPE_CQ]);
> +	if (!cq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	wq = efct_hw_new_wq(cq, hw->num_qentries[SLI_QTYPE_WQ]);
> +	if (!wq) {
> +		efct_hw_queue_teardown(hw);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +/* Allocate a new EQ object */
> +struct hw_eq *
> +efct_hw_new_eq(struct efct_hw *hw, u32 entry_count)
> +{
> +	struct hw_eq *eq = kmalloc(sizeof(*eq), GFP_KERNEL);
> +
> +	if (eq) {

	if (!eq)
		return NULL;

> +		memset(eq, 0, sizeof(*eq));

kzalloc instead of kmalloc + memset
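
i.e. (sketch):

	struct hw_eq *eq = kzalloc(sizeof(*eq), GFP_KERNEL);

	if (!eq)
		return NULL;

which also drops one level of indentation for the rest of the function.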

> +		eq->type = SLI_QTYPE_EQ;
> +		eq->hw = hw;
> +		eq->entry_count = entry_count;
> +		eq->instance = hw->eq_count++;
> +		eq->queue = &hw->eq[eq->instance];
> +		INIT_LIST_HEAD(&eq->cq_list);
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_EQ,
> +					eq->queue,
> +					entry_count, NULL)) {
> +			efc_log_err(hw->os,
> +					"EQ[%d] allocation failure\n",
> +					eq->instance);
> +			kfree(eq);
> +			eq = NULL;
			return NULL;
> +		} else {
> +			sli_eq_modify_delay(&hw->sli, eq->queue,
> +					1, 0, 8);
> +			hw->hw_eq[eq->instance] = eq;
> +			INIT_LIST_HEAD(&eq->list_entry);
> +			list_add_tail(&eq->list_entry, &hw->eq_list);
> +			efc_log_debug(hw->os,
> +					"create eq[%2d] id %3d len %4d\n",
> +					eq->instance, eq->queue->id,
> +					eq->entry_count);
> +		}
> +	}
> +	return eq;
> +}
> +
> +/* Allocate a new CQ object */
> +struct hw_cq *
> +efct_hw_new_cq(struct hw_eq *eq, u32 entry_count)
> +{
> +	struct efct_hw *hw = eq->hw;
> +	struct hw_cq *cq = kmalloc(sizeof(*cq), GFP_KERNEL);
> +
> +	if (cq) {

	if (!cq)
		return NULL;

> +		memset(cq, 0, sizeof(*cq));

kzalloc instead of kmalloc + memset


> +		cq->eq = eq;
> +		cq->type = SLI_QTYPE_CQ;
> +		cq->instance = eq->hw->cq_count++;
> +		cq->entry_count = entry_count;
> +		cq->queue = &hw->cq[cq->instance];
> +
> +		INIT_LIST_HEAD(&cq->q_list);
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_CQ, cq->queue,
> +				    cq->entry_count, eq->queue)) {
> +			efc_log_err(hw->os,
> +				     "CQ[%d] allocation failure len=%d\n",
> +				    eq->instance,
> +				    eq->entry_count);
> +			kfree(cq);
> +			cq = NULL;

			return NULL;

> +		} else {
> +			hw->hw_cq[cq->instance] = cq;
> +			INIT_LIST_HEAD(&cq->list_entry);
> +			list_add_tail(&cq->list_entry, &eq->cq_list);
> +			efc_log_debug(hw->os,
> +				       "create cq[%2d] id %3d len %4d\n",
> +				      cq->instance, cq->queue->id,
> +				      cq->entry_count);
> +		}
> +	}
> +	return cq;
> +}
> +
> +/* Allocate a new CQ Set of objects */
> +u32
> +efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
> +		   u32 num_cqs, u32 entry_count)
> +{
> +	u32 i;
> +	struct efct_hw *hw = eqs[0]->hw;
> +	struct sli4 *sli4 = &hw->sli;
> +	struct hw_cq *cq = NULL;
> +	struct sli4_queue *qs[SLI_MAX_CQ_SET_COUNT];
> +	struct sli4_queue *assefct[SLI_MAX_CQ_SET_COUNT];
> +
> +	/* Initialise CQS pointers to NULL */
> +	for (i = 0; i < num_cqs; i++)
> +		cqs[i] = NULL;
> +
> +	for (i = 0; i < num_cqs; i++) {
> +		cq = kmalloc(sizeof(*cq), GFP_KERNEL);
> +		if (!cq)
> +			goto error;
> +
> +		memset(cq, 0, sizeof(*cq));

kzalloc()

> +		cqs[i]          = cq;
> +		cq->eq          = eqs[i];
> +		cq->type        = SLI_QTYPE_CQ;
> +		cq->instance    = hw->cq_count++;
> +		cq->entry_count = entry_count;
> +		cq->queue       = &hw->cq[cq->instance];
> +		qs[i]           = cq->queue;
> +		assefct[i]       = eqs[i]->queue;
> +		INIT_LIST_HEAD(&cq->q_list);
> +	}
> +
> +	if (!sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assefct)) {
> +		efc_log_err(hw->os, "Failed to create CQ Set.\n");
> +		goto error;
> +	}
> +
> +	for (i = 0; i < num_cqs; i++) {
> +		hw->hw_cq[cqs[i]->instance] = cqs[i];
> +		INIT_LIST_HEAD(&cqs[i]->list_entry);
> +		list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list);
> +	}
> +
> +	return EFC_SUCCESS;
> +
> +error:
> +	for (i = 0; i < num_cqs; i++) {
> +		kfree(cqs[i]);

		if (cqs[i])
			kfree(cqs[i]);

> +		cqs[i] = NULL;
> +	}
> +	return EFC_FAIL;
> +}
> +
> +/* Allocate a new MQ object */
> +struct hw_mq *
> +efct_hw_new_mq(struct hw_cq *cq, u32 entry_count)
> +{
> +	struct efct_hw *hw = cq->eq->hw;
> +	struct hw_mq *mq = kmalloc(sizeof(*mq), GFP_KERNEL);
> +
> +	if (mq) {

if (!mq)
	return NULL;

> +		memset(mq, 0, sizeof(*mq));

kzalloc

> +		mq->cq = cq;
> +		mq->type = SLI_QTYPE_MQ;
> +		mq->instance = cq->eq->hw->mq_count++;
> +		mq->entry_count = entry_count;
> +		mq->entry_size = EFCT_HW_MQ_DEPTH;
> +		mq->queue = &hw->mq[mq->instance];
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_MQ,
> +				    mq->queue,
> +				    mq->entry_size,
> +				    cq->queue)) {
> +			efc_log_err(hw->os, "MQ allocation failure\n");
> +			kfree(mq);
> +			mq = NULL;
> +		} else {
> +			hw->hw_mq[mq->instance] = mq;
> +			INIT_LIST_HEAD(&mq->list_entry);
> +			list_add_tail(&mq->list_entry, &cq->q_list);
> +			efc_log_debug(hw->os,
> +				       "create mq[%2d] id %3d len %4d\n",
> +				      mq->instance, mq->queue->id,
> +				      mq->entry_count);
> +		}
> +	}
> +	return mq;
> +}
> +
> +/* Allocate a new WQ object */
> +struct hw_wq *
> +efct_hw_new_wq(struct hw_cq *cq, u32 entry_count)
> +{
> +	struct efct_hw *hw = cq->eq->hw;
> +	struct hw_wq *wq = kmalloc(sizeof(*wq), GFP_KERNEL);
> +
> +	if (wq) {
> +		memset(wq, 0, sizeof(*wq));

same as above


> +		wq->hw = cq->eq->hw;
> +		wq->cq = cq;
> +		wq->type = SLI_QTYPE_WQ;
> +		wq->instance = cq->eq->hw->wq_count++;
> +		wq->entry_count = entry_count;
> +		wq->queue = &hw->wq[wq->instance];
> +		wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT;
> +		wq->wqec_count = wq->wqec_set_count;
> +		wq->free_count = wq->entry_count - 1;
> +		INIT_LIST_HEAD(&wq->pending_list);
> +
> +		if (sli_queue_alloc(&hw->sli, SLI_QTYPE_WQ, wq->queue,
> +				    wq->entry_count, cq->queue)) {
> +			efc_log_err(hw->os, "WQ allocation failure\n");
> +			kfree(wq);
> +			wq = NULL;

return NULL;

> +		} else {
> +			hw->hw_wq[wq->instance] = wq;
> +			INIT_LIST_HEAD(&wq->list_entry);
> +			list_add_tail(&wq->list_entry, &cq->q_list);
> +			efc_log_debug(hw->os,
> +				       "create wq[%2d] id %3d len %4d cls %d\n",
> +				wq->instance, wq->queue->id,
> +				wq->entry_count, wq->class);
> +		}
> +	}
> +	return wq;
> +}
> +
> +/* Allocate an RQ object, which encapsulates 2 SLI queues (for rq pair) */
> +struct hw_rq *
> +efct_hw_new_rq(struct hw_cq *cq, u32 entry_count)
> +{
> +	struct efct_hw *hw = cq->eq->hw;
> +	struct hw_rq *rq = kmalloc(sizeof(*rq), GFP_KERNEL);
> +
> +	if (rq) {
> +		memset(rq, 0, sizeof(*rq));

and again :)

> +		rq->instance = hw->hw_rq_count++;
> +		rq->cq = cq;
> +		rq->type = SLI_QTYPE_RQ;
> +		rq->entry_count = entry_count;
> +
> +		/* Create the header RQ */
> +		rq->hdr = &hw->rq[hw->rq_count];
> +		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
> +
> +		if (sli_fc_rq_alloc(&hw->sli, rq->hdr,
> +				    rq->entry_count,
> +				    rq->hdr_entry_size,
> +				    cq->queue,
> +				    true)) {
> +			efc_log_err(hw->os,
> +				     "RQ allocation failure - header\n");
> +			kfree(rq);
> +			return NULL;
> +		}
> +		/* Update hw_rq_lookup[] */
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		efc_log_debug(hw->os,
> +			      "create rq[%2d] id %3d len %4d hdr  size %4d\n",
> +			      rq->instance, rq->hdr->id, rq->entry_count,
> +			      rq->hdr_entry_size);
> +
> +		/* Create the default data RQ */
> +		rq->data = &hw->rq[hw->rq_count];
> +		rq->data_entry_size = hw->config.rq_default_buffer_size;
> +
> +		if (sli_fc_rq_alloc(&hw->sli, rq->data,
> +				    rq->entry_count,
> +				    rq->data_entry_size,
> +				    cq->queue,
> +				    false)) {
> +			efc_log_err(hw->os,
> +				     "RQ allocation failure - first burst\n");
> +			kfree(rq);
> +			return NULL;
> +		}
> +		/* Update hw_rq_lookup[] */
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		efc_log_debug(hw->os,
> +			       "create rq[%2d] id %3d len %4d data size %4d\n",
> +			 rq->instance, rq->data->id, rq->entry_count,
> +			 rq->data_entry_size);
> +
> +		hw->hw_rq[rq->instance] = rq;
> +		INIT_LIST_HEAD(&rq->list_entry);
> +		list_add_tail(&rq->list_entry, &cq->q_list);
> +
> +		rq->rq_tracker = kmalloc_array(rq->entry_count,
> +					sizeof(struct efc_hw_sequence *),
> +					GFP_ATOMIC);
> +		if (!rq->rq_tracker)
> +			return NULL;
> +
> +		memset(rq->rq_tracker, 0,
> +		       rq->entry_count * sizeof(struct efc_hw_sequence *));
> +	}
> +	return rq;
> +}
> +
> +/**
> + * Allocate an RQ object SET, where each element in set
> + * encapsulates 2 SLI queues (for rq pair)
> + */
> +u32
> +efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
> +		   u32 num_rq_pairs, u32 entry_count)
> +{
> +	struct efct_hw *hw = cqs[0]->eq->hw;
> +	struct hw_rq *rq = NULL;
> +	struct sli4_queue *qs[SLI_MAX_RQ_SET_COUNT * 2] = { NULL };
> +	u32 i, q_count, size;
> +
> +	/* Initialise RQS pointers */
> +	for (i = 0; i < num_rq_pairs; i++)
> +		rqs[i] = NULL;
> +
> +	for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) {
> +		rq = kmalloc(sizeof(*rq), GFP_KERNEL);
> +		if (!rq)
> +			goto error;
> +
> +		memset(rq, 0, sizeof(*rq));

kzalloc

> +		rqs[i] = rq;
> +		rq->instance = hw->hw_rq_count++;
> +		rq->cq = cqs[i];
> +		rq->type = SLI_QTYPE_RQ;
> +		rq->entry_count = entry_count;
> +
> +		/* Header RQ */
> +		rq->hdr = &hw->rq[hw->rq_count];
> +		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		qs[q_count] = rq->hdr;
> +
> +		/* Data RQ */
> +		rq->data = &hw->rq[hw->rq_count];
> +		rq->data_entry_size = hw->config.rq_default_buffer_size;
> +		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
> +		hw->rq_count++;
> +		qs[q_count + 1] = rq->data;
> +
> +		rq->rq_tracker = NULL;
> +	}
> +
> +	if (!sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs,
> +				cqs[0]->queue->id,
> +			    rqs[0]->entry_count,
> +			    rqs[0]->hdr_entry_size,
> +			    rqs[0]->data_entry_size)) {
> +		efc_log_err(hw->os,
> +			     "RQ Set allocation failure for base CQ=%d\n",
> +			    cqs[0]->queue->id);
> +		goto error;
> +	}
> +
> +	for (i = 0; i < num_rq_pairs; i++) {
> +		hw->hw_rq[rqs[i]->instance] = rqs[i];
> +		INIT_LIST_HEAD(&rqs[i]->list_entry);
> +		list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list);
> +		size = sizeof(struct efc_hw_sequence *) * rqs[i]->entry_count;
> +		rqs[i]->rq_tracker = kmalloc(size, GFP_KERNEL);
> +		if (!rqs[i]->rq_tracker)
> +			goto error;
> +	}
> +
> +	return EFC_SUCCESS;
> +
> +error:
> +	for (i = 0; i < num_rq_pairs; i++) {
> +		if (rqs[i]) {
> +			kfree(rqs[i]->rq_tracker);

Is rq_tracker always a valid pointer?

> +			kfree(rqs[i]);
> +		}
> +	}
> +
> +	return EFC_FAIL;
> +}
> +
> +void
> +efct_hw_del_eq(struct hw_eq *eq)
> +{
> +	if (eq) {
> +		struct hw_cq *cq;
> +		struct hw_cq *cq_next;
> +
> +		list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry)
> +			efct_hw_del_cq(cq);
> +		list_del(&eq->list_entry);
> +		eq->hw->hw_eq[eq->instance] = NULL;
> +		kfree(eq);
> +	}
> +}
> +
> +void
> +efct_hw_del_cq(struct hw_cq *cq)
> +{
> +	if (cq) {

if (!cq)
	return;

> +		struct hw_q *q;
> +		struct hw_q *q_next;
> +
> +		list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) {
> +			switch (q->type) {
> +			case SLI_QTYPE_MQ:
> +				efct_hw_del_mq((struct hw_mq *)q);
> +				break;
> +			case SLI_QTYPE_WQ:
> +				efct_hw_del_wq((struct hw_wq *)q);
> +				break;
> +			case SLI_QTYPE_RQ:
> +				efct_hw_del_rq((struct hw_rq *)q);
> +				break;
> +			default:
> +				break;
> +			}
> +		}
> +		list_del(&cq->list_entry);
> +		cq->eq->hw->hw_cq[cq->instance] = NULL;
> +		kfree(cq);
> +	}
> +}
> +
> +void
> +efct_hw_del_mq(struct hw_mq *mq)
> +{
> +	if (mq) {

if (!mq)
	return;

> +		list_del(&mq->list_entry);
> +		mq->cq->eq->hw->hw_mq[mq->instance] = NULL;
> +		kfree(mq);
> +	}
> +}
> +
> +void
> +efct_hw_del_wq(struct hw_wq *wq)
> +{
> +	if (wq) {

if (!wq)
	return;

> +		list_del(&wq->list_entry);
> +		wq->cq->eq->hw->hw_wq[wq->instance] = NULL;
> +		kfree(wq);
> +	}
> +}
> +
> +void
> +efct_hw_del_rq(struct hw_rq *rq)
> +{
> +	struct efct_hw *hw = NULL;
> +
> +	if (rq) {

if (!rq)
	return;

> +		/* Free RQ tracker */
> +		kfree(rq->rq_tracker);
> +		rq->rq_tracker = NULL;
> +		list_del(&rq->list_entry);
> +		hw = rq->cq->eq->hw;
> +		hw->hw_rq[rq->instance] = NULL;
> +		kfree(rq);
> +	}
> +}
> +
> +void
> +efct_hw_queue_dump(struct efct_hw *hw)
> +{
> +	struct hw_eq *eq;
> +	struct hw_cq *cq;
> +	struct hw_q *q;
> +	struct hw_mq *mq;
> +	struct hw_wq *wq;
> +	struct hw_rq *rq;
> +
> +	list_for_each_entry(eq, &hw->eq_list, list_entry) {
> +		efc_log_debug(hw->os, "eq[%d] id %2d\n",
> +			       eq->instance, eq->queue->id);
> +		list_for_each_entry(cq, &eq->cq_list, list_entry) {
> +			efc_log_debug(hw->os, "cq[%d] id %2d current\n",
> +				       cq->instance, cq->queue->id);
> +			list_for_each_entry(q, &cq->q_list, list_entry) {
> +				switch (q->type) {
> +				case SLI_QTYPE_MQ:
> +					mq = (struct hw_mq *)q;
> +					efc_log_debug(hw->os,
> +						       "    mq[%d] id %2d\n",
> +					       mq->instance, mq->queue->id);
> +					break;
> +				case SLI_QTYPE_WQ:
> +					wq = (struct hw_wq *)q;
> +					efc_log_debug(hw->os,
> +						       "    wq[%d] id %2d\n",
> +						wq->instance, wq->queue->id);
> +					break;
> +				case SLI_QTYPE_RQ:
> +					rq = (struct hw_rq *)q;
> +					efc_log_debug(hw->os,
> +						       "    rq[%d] hdr id %2d\n",
> +					       rq->instance, rq->hdr->id);
> +					break;
> +				default:
> +					break;
> +				}
> +			}
> +		}

Maybe move the inner loop into a helper function.
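
Something like (sketch, helper name made up):

	static void
	efct_hw_dump_cq_queues(struct efct_hw *hw, struct hw_cq *cq)
	{
		struct hw_q *q;

		list_for_each_entry(q, &cq->q_list, list_entry) {
			/* switch on q->type and log mq/wq/rq as above */
		}
	}

would keep efct_hw_queue_dump() itself down to two levels of nesting.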

> +	}
> +}
> +
> +void
> +efct_hw_queue_teardown(struct efct_hw *hw)
> +{
> +	struct hw_eq *eq;
> +	struct hw_eq *eq_next;
> +
> +	if (hw->eq_list.next) {

	if (!hw->eq_list.next)
		return;



> +		list_for_each_entry_safe(eq, eq_next, &hw->eq_list,
> +					 list_entry) {
> +			efct_hw_del_eq(eq);
> +		}
> +	}
> +}
> +
> +static inline int
> +efct_hw_rqpair_find(struct efct_hw *hw, u16 rq_id)
> +{
> +	return efct_hw_queue_hash_find(hw->rq_hash, rq_id);
> +}
> +
> +static struct efc_hw_sequence *
> +efct_hw_rqpair_get(struct efct_hw *hw, u16 rqindex, u16 bufindex)
> +{
> +	struct sli4_queue *rq_hdr = &hw->rq[rqindex];
> +	struct efc_hw_sequence *seq = NULL;
> +	struct hw_rq *rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
> +	unsigned long flags = 0;
> +
> +	if (bufindex >= rq_hdr->length) {
> +		efc_log_err(hw->os,
> +				"RQidx %d bufidx %d exceed ring len %d for id %d\n",
> +				rqindex, bufindex, rq_hdr->length, rq_hdr->id);
> +		return NULL;
> +	}
> +
> +	/* rq_hdr lock also covers rqindex+1 queue */
> +	spin_lock_irqsave(&rq_hdr->lock, flags);
> +
> +	seq = rq->rq_tracker[bufindex];
> +	rq->rq_tracker[bufindex] = NULL;
> +
> +	if (!seq) {
> +		efc_log_err(hw->os,
> +			     "RQbuf NULL, rqidx %d, bufidx %d, cur q idx = %d\n",
> +			     rqindex, bufindex, rq_hdr->index);
> +	}
> +
> +	spin_unlock_irqrestore(&rq_hdr->lock, flags);
> +	return seq;
> +}
> +
> +int
> +efct_hw_rqpair_process_rq(struct efct_hw *hw, struct hw_cq *cq,
> +			  u8 *cqe)
> +{
> +	u16 rq_id;
> +	u32 index;
> +	int rqindex;
> +	int	 rq_status;
> +	u32 h_len;
> +	u32 p_len;
> +	struct efc_hw_sequence *seq;
> +	struct hw_rq *rq;

the alignment of the variables is inconsistent

> +
> +	rq_status = sli_fc_rqe_rqid_and_index(&hw->sli, cqe,
> +					      &rq_id, &index);
> +	if (rq_status != 0) {
> +		switch (rq_status) {
> +		case SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED:
> +		case SLI4_FC_ASYNC_RQ_DMA_FAILURE:
> +			/* just get RQ buffer then return to chip */
> +			rqindex = efct_hw_rqpair_find(hw, rq_id);
> +			if (rqindex < 0) {
> +				efc_log_test(hw->os,
> +					      "status=%#x: lookup fail id=%#x\n",
> +					     rq_status, rq_id);
> +				break;
> +			}
> +
> +			/* get RQ buffer */
> +			seq = efct_hw_rqpair_get(hw, rqindex, index);
> +
> +			/* return to chip */
> +			if (efct_hw_rqpair_sequence_free(hw, seq)) {
> +				efc_log_test(hw->os,
> +					      "status=%#x,fail rtrn buf to RQ\n",
> +					     rq_status);
> +				break;
> +			}
> +			break;
> +		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED:
> +		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC:
> +			/*
> +			 * since RQ buffers were not consumed, cannot return
> +			 * them to chip
> +			 * fall through
> +			 */
> +			efc_log_debug(hw->os, "Warning: RCQE status=%#x,\n",
> +				       rq_status);
> +		default:
> +			break;
> +		}
> +		return EFC_FAIL;
> +	}
> +
> +	rqindex = efct_hw_rqpair_find(hw, rq_id);
> +	if (rqindex < 0) {
> +		efc_log_test(hw->os, "Error: rq_id lookup failed for id=%#x\n",
> +			      rq_id);
> +		return EFC_FAIL;
> +	}
> +
> +	rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
> +	rq->use_count++;
> +
> +	seq = efct_hw_rqpair_get(hw, rqindex, index);
> +	if (WARN_ON(!seq))
> +		return EFC_FAIL;
> +
> +	seq->hw = hw;
> +	seq->auto_xrdy = 0;
> +	seq->out_of_xris = 0;
> +	seq->hio = NULL;
> +
> +	sli_fc_rqe_length(&hw->sli, cqe, &h_len, &p_len);
> +	seq->header->dma.len = h_len;
> +	seq->payload->dma.len = p_len;
> +	seq->fcfi = sli_fc_rqe_fcfi(&hw->sli, cqe);
> +	seq->hw_priv = cq->eq;
> +
> +	efct_unsolicited_cb(hw->os, seq);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_hw_rqpair_put(struct efct_hw *hw, struct efc_hw_sequence *seq)
> +{
> +	struct sli4_queue *rq_hdr = &hw->rq[seq->header->rqindex];
> +	struct sli4_queue *rq_payload = &hw->rq[seq->payload->rqindex];
> +	u32 hw_rq_index = hw->hw_rq_lookup[seq->header->rqindex];
> +	struct hw_rq *rq = hw->hw_rq[hw_rq_index];
> +	u32     phys_hdr[2];
> +	u32     phys_payload[2];
> +	int      qindex_hdr;
> +	int      qindex_payload;
> +	unsigned long flags = 0;

the alignment of the variables is inconsistent

> +
> +	/* Update the RQ verification lookup tables */
> +	phys_hdr[0] = upper_32_bits(seq->header->dma.phys);
> +	phys_hdr[1] = lower_32_bits(seq->header->dma.phys);
> +	phys_payload[0] = upper_32_bits(seq->payload->dma.phys);
> +	phys_payload[1] = lower_32_bits(seq->payload->dma.phys);
> +
> +	/* rq_hdr lock also covers payload / header->rqindex+1 queue */
> +	spin_lock_irqsave(&rq_hdr->lock, flags);
> +
> +	/*
> +	 * Note: The header must be posted last for buffer pair mode because
> +	 *       posting on the header queue posts the payload queue as well.
> +	 *       We do not ring the payload queue independently in RQ pair mode.
> +	 */
> +	qindex_payload = sli_rq_write(&hw->sli, rq_payload,
> +				      (void *)phys_payload);
> +	qindex_hdr = sli_rq_write(&hw->sli, rq_hdr, (void *)phys_hdr);
> +	if (qindex_hdr < 0 ||
> +	    qindex_payload < 0) {
> +		efc_log_err(hw->os, "RQ_ID=%#x write failed\n", rq_hdr->id);
> +		spin_unlock_irqrestore(&rq_hdr->lock, flags);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* ensure the indexes are the same */
> +	WARN_ON(qindex_hdr != qindex_payload);
> +
> +	/* Update the lookup table */
> +	if (!rq->rq_tracker[qindex_hdr]) {
> +		rq->rq_tracker[qindex_hdr] = seq;
> +	} else {
> +		efc_log_test(hw->os,
> +			      "expected rq_tracker[%d][%d] buffer to be NULL\n",
> +			     hw_rq_index, qindex_hdr);
> +	}
> +
> +	spin_unlock_irqrestore(&rq_hdr->lock, flags);
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_rqpair_sequence_free(struct efct_hw *hw,
> +			     struct efc_hw_sequence *seq)
> +{
> +	enum efct_hw_rtn   rc = EFCT_HW_RTN_SUCCESS;
> +
> +	/*
> +	 * Post the data buffer first. Because in RQ pair mode, ringing the
> +	 * doorbell of the header ring will post the data buffer as well.
> +	 */
> +	if (efct_hw_rqpair_put(hw, seq)) {
> +		efc_log_err(hw->os, "error writing buffers\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	return rc;
> +}
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2020-04-12  3:32 ` [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
  2020-04-16  7:24   ` Hannes Reinecke
@ 2020-04-16  8:41   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  8:41 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:50PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> RQ data buffer allocation and deallocate.
> Memory pool allocation and deallocation APIs.
> Mailbox command submission and completion routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   efct_utils.c file is removed. Replaced efct_pool, efct_varray and
>   efct_array with other alternatives.
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 375 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |   7 +
>  2 files changed, 382 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 21fcaf7b3d2b..3e9906749da2 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -1114,3 +1114,378 @@ efct_get_wwpn(struct efct_hw *hw)
>  	memcpy(p, sli->wwpn, sizeof(p));
>  	return get_unaligned_be64(p);
>  }
> +
> +/*
> + * An efct_hw_rx_buffer_t array is allocated,
> + * along with the required DMA mem
> + */
> +static struct efc_hw_rq_buffer *
> +efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
> +			u32 size)
> +{
> +	struct efct *efct = hw->os;
> +	struct efc_hw_rq_buffer *rq_buf = NULL;
> +	struct efc_hw_rq_buffer *prq;
> +	u32 i;
> +
> +	if (count != 0) {

if (count == 0)
	return NULL;

and then there is no need to pre-initialize rq_buf

> +		rq_buf = kmalloc_array(count, sizeof(*rq_buf), GFP_ATOMIC);
> +		if (!rq_buf)
> +			return NULL;
> +		memset(rq_buf, 0, sizeof(*rq_buf) * count);

kzalloc

> +
> +		for (i = 0, prq = rq_buf; i < count; i ++, prq++) {
> +			prq->rqindex = rqindex;
> +			prq->dma.size = size;
> +			prq->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +							   prq->dma.size,
> +							   &prq->dma.phys,
> +							   GFP_DMA);

GFP_KERNEL

> +			if (!prq->dma.virt) {
> +				efc_log_err(hw->os, "DMA allocation failed\n");
> +				kfree(rq_buf);
> +				rq_buf = NULL;
> +				break;

return NULL;

> +			}
> +		}
> +	}
> +	return rq_buf;
> +}
> +
> +static void
> +efct_hw_rx_buffer_free(struct efct_hw *hw,
> +		       struct efc_hw_rq_buffer *rq_buf,
> +			u32 count)
> +{
> +	struct efct *efct = hw->os;
> +	u32 i;
> +	struct efc_hw_rq_buffer *prq;
> +
> +	if (rq_buf) {

if (!rq_buf)
	return;

> +		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  prq->dma.size, prq->dma.virt,
> +					  prq->dma.phys);
> +			memset(&prq->dma, 0, sizeof(struct efc_dma));
> +		}
> +
> +		kfree(rq_buf);
> +	}
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_rx_allocate(struct efct_hw *hw)
> +{
> +	struct efct *efct = hw->os;
> +	u32 i;
> +	int rc = EFCT_HW_RTN_SUCCESS;
> +	u32 rqindex = 0;
> +	struct hw_rq *rq;
> +	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
> +	u32 payload_size = hw->config.rq_default_buffer_size;
> +
> +	rqindex = 0;
> +
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +
> +		/* Allocate header buffers */
> +		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
> +						      rq->entry_count,
> +						      hdr_size);
> +		if (!rq->hdr_buf) {
> +			efc_log_err(efct,
> +				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +			break;
> +		}
> +
> +		efc_log_debug(hw->os,
> +			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
> +			      i, rq->hdr->id, rq->entry_count, hdr_size);
> +
> +		rqindex++;
> +
> +		/* Allocate payload buffers */
> +		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
> +							  rq->entry_count,
> +							  payload_size);
> +		if (!rq->payload_buf) {
> +			efc_log_err(efct,
> +				     "efct_hw_rx_buffer_alloc fb_buf failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +			break;
> +		}
> +		efc_log_debug(hw->os,
> +			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
> +			      i, rq->data->id, rq->entry_count, payload_size);
> +		rqindex++;
> +	}
> +
> +	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
> +}
> +
> +/* Post the RQ data buffers to the chip */
> +enum efct_hw_rtn
> +efct_hw_rx_post(struct efct_hw *hw)
> +{
> +	u32 i;
> +	u32 idx;
> +	u32 rq_idx;
> +	int rc = 0;
> +
> +	if (!hw->seq_pool) {
> +		u32 count = 0;
> +
> +		for (i = 0; i < hw->hw_rq_count; i++)
> +			count += hw->hw_rq[i]->entry_count;
> +
> +		hw->seq_pool = kmalloc_array(count,
> +				sizeof(struct efc_hw_sequence),	GFP_KERNEL);
> +		if (!hw->seq_pool)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	/*
> +	 * In RQ pair mode, we MUST post the header and payload buffer at the
> +	 * same time.
> +	 */
> +	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
> +		struct hw_rq *rq = hw->hw_rq[rq_idx];
> +
> +		for (i = 0; i < rq->entry_count - 1; i++) {
> +			struct efc_hw_sequence *seq;
> +
> +			seq = hw->seq_pool + idx * sizeof(*seq);
> +			if (!seq) {
> +				rc = -1;
> +				break;
> +			}
> +			idx++;
> +			seq->header = &rq->hdr_buf[i];
> +			seq->payload = &rq->payload_buf[i];
> +			rc = efct_hw_sequence_free(hw, seq);
> +			if (rc)
> +				break;
> +		}
> +		if (rc)
> +			break;
> +	}
> +
> +	if (rc && hw->seq_pool)
> +		kfree(hw->seq_pool);
> +
> +	return rc;
> +}
> +
> +void
> +efct_hw_rx_free(struct efct_hw *hw)
> +{
> +	struct hw_rq *rq;
> +	u32 i;
> +
> +	/* Free hw_rq buffers */
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		rq = hw->hw_rq[i];
> +		if (rq) {
> +			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
> +					       rq->entry_count);
> +			rq->hdr_buf = NULL;
> +			efct_hw_rx_buffer_free(hw, rq->payload_buf,
> +					       rq->entry_count);
> +			rq->payload_buf = NULL;
> +		}
> +	}
> +}
> +
> +static int
> +efct_hw_cmd_submit_pending(struct efct_hw *hw)
> +{
> +	struct efct_command_ctx *ctx = NULL;
> +	int rc = 0;

back to plain numbers for the return code.

> +
> +	/* Assumes lock held */
> +
> +	/* Only submit MQE if there's room */
> +	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
> +	       !list_empty(&hw->cmd_pending)) {
> +		ctx = list_first_entry(&hw->cmd_pending,
> +				       struct efct_command_ctx, list_entry);
> +		if (!ctx)
> +			break;
> +
> +		list_del(&ctx->list_entry);
> +
> +		INIT_LIST_HEAD(&ctx->list_entry);
> +		list_add_tail(&ctx->list_entry, &hw->cmd_head);
> +		hw->cmd_head_count++;
> +		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
> +			efc_log_test(hw->os,
> +				      "sli_queue_write failed: %d\n", rc);
> +			rc = -1;
> +			break;
> +		}
> +	}
> +	return rc;
> +}
> +
> +/*
> + * Send a mailbox command to the hardware, and either wait for a completion
> + * (EFCT_CMD_POLL) or get an optional asynchronous completion (EFCT_CMD_NOWAIT).
> + */
> +enum efct_hw_rtn
> +efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb, void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	unsigned long flags = 0;
> +	void *bmbx = NULL;
> +
> +	/*
> +	 * If the chip is in an error state (UE'd) then reject this mailbox
> +	 *  command.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		efc_log_crit(hw->os,
> +			      "status=%#x error1=%#x error2=%#x\n",
> +			sli_reg_read_status(&hw->sli),
> +			sli_reg_read_err1(&hw->sli),
> +			sli_reg_read_err2(&hw->sli));
> +
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (opts == EFCT_CMD_POLL) {
> +		spin_lock_irqsave(&hw->cmd_lock, flags);
> +		bmbx = hw->sli.bmbx.virt;
> +
> +		memset(bmbx, 0, SLI4_BMBX_SIZE);
> +		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
> +
> +		if (sli_bmbx_command(&hw->sli) == 0) {
> +			rc = EFCT_HW_RTN_SUCCESS;
> +			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
> +		}
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +	} else if (opts == EFCT_CMD_NOWAIT) {
> +		struct efct_command_ctx	*ctx = NULL;
> +
> +		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);

GFP_KERNEL

> +		if (!ctx)
> +			return EFCT_HW_RTN_NO_RESOURCES;
> +
> +		memset(ctx, 0, sizeof(struct efct_command_ctx));

kzalloc

> +
> +		if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +			efc_log_err(hw->os,
> +				     "Can't send command, HW state=%d\n",
> +				    hw->state);
> +			kfree(ctx);
> +			return EFCT_HW_RTN_ERROR;
> +		}
> +
> +		if (cb) {
> +			ctx->cb = cb;
> +			ctx->arg = arg;
> +		}
> +		ctx->buf = cmd;
> +		ctx->ctx = hw;
> +
> +		spin_lock_irqsave(&hw->cmd_lock, flags);
> +
> +			/* Add to pending list */
> +			INIT_LIST_HEAD(&ctx->list_entry);
> +			list_add_tail(&ctx->list_entry, &hw->cmd_pending);
> +
> +			/* Submit as much of the pending list as we can */
> +			if (efct_hw_cmd_submit_pending(hw) == 0)
> +				rc = EFCT_HW_RTN_SUCCESS;
> +
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_command_process(struct efct_hw *hw, int status, u8 *mqe,
> +			size_t size)
> +{
> +	struct efct_command_ctx *ctx = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +	if (!list_empty(&hw->cmd_head)) {
> +		ctx = list_first_entry(&hw->cmd_head,
> +				       struct efct_command_ctx, list_entry);
> +		list_del(&ctx->list_entry);
> +	}
> +	if (!ctx) {
> +		efc_log_err(hw->os, "no command context?!?\n");
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		return EFC_FAIL;
> +	}
> +
> +	hw->cmd_head_count--;
> +
> +	/* Post any pending requests */
> +	efct_hw_cmd_submit_pending(hw);
> +
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	if (ctx->cb) {
> +		if (ctx->buf)
> +			memcpy(ctx->buf, mqe, size);
> +
> +		ctx->cb(hw, status, ctx->buf, ctx->arg);
> +	}
> +
> +	memset(ctx, 0, sizeof(struct efct_command_ctx));

memset is not needed before kfree()

> +	kfree(ctx);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_hw_mq_process(struct efct_hw *hw,
> +		   int status, struct sli4_queue *mq)
> +{
> +	u8		mqe[SLI4_BMBX_SIZE];
> +
> +	if (!sli_mq_read(&hw->sli, mq, mqe))
> +		efct_hw_command_process(hw, status, mqe, mq->size);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_hw_command_cancel(struct efct_hw *hw)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +
> +	/*
> +	 * Manually clean up remaining commands. Note: since this calls
> +	 * efct_hw_command_process(), we'll also process the cmd_pending
> +	 * list, so no need to manually clean that out.
> +	 */
> +	while (!list_empty(&hw->cmd_head)) {
> +		u8		mqe[SLI4_BMBX_SIZE] = { 0 };
> +		struct efct_command_ctx *ctx =
> +	list_first_entry(&hw->cmd_head, struct efct_command_ctx, list_entry);

		struct efct_command_ctx *ctx;

		ctx = list_first_entry(&hw->cmd_head,
				       struct efct_command_ctx,
				       list_entry);


> +
> +		efc_log_test(hw->os, "hung command %08x\n",
> +			      !ctx ? U32_MAX :
> +			      (!ctx->buf ? U32_MAX :
> +			       *((u32 *)ctx->buf)));
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		efct_hw_command_process(hw, -1, mqe, SLI4_BMBX_SIZE);
> +		spin_lock_irqsave(&hw->cmd_lock, flags);
> +	}
> +
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index e5839254c730..1b67e0721936 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -629,4 +629,11 @@ efct_get_wwnn(struct efct_hw *hw);
>  extern uint64_t
>  efct_get_wwpn(struct efct_hw *hw);
>  
> +enum efct_hw_rtn efct_hw_rx_allocate(struct efct_hw *hw);
> +enum efct_hw_rtn efct_hw_rx_post(struct efct_hw *hw);
> +void efct_hw_rx_free(struct efct_hw *hw);
> +extern enum efct_hw_rtn
> +efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
> +		void *arg);
> +
>  #endif /* __EFCT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization
  2020-04-12  3:32 ` [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization James Smart
  2020-04-16  7:32   ` Hannes Reinecke
@ 2020-04-16  8:47   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  8:47 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:51PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to create IO interfaces (wqs, etc), SGL initialization,
> and configure hardware features.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Request tag pool (reqtag_pool) handling functions.
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 657 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |  42 +++
>  2 files changed, 699 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 3e9906749da2..892493a3a35e 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -1489,3 +1489,660 @@ efct_hw_command_cancel(struct efct_hw *hw)
>  
>  	return EFC_SUCCESS;
>  }
> +
> +static inline struct efct_hw_io *
> +_efct_hw_io_alloc(struct efct_hw *hw)
> +{
> +	struct efct_hw_io	*io = NULL;
> +
> +	if (!list_empty(&hw->io_free)) {
> +		io = list_first_entry(&hw->io_free, struct efct_hw_io,
> +				      list_entry);
> +		list_del(&io->list_entry);
> +	}
> +	if (io) {
> +		INIT_LIST_HEAD(&io->list_entry);
> +		INIT_LIST_HEAD(&io->wqe_link);
> +		INIT_LIST_HEAD(&io->dnrx_link);
> +		list_add_tail(&io->list_entry, &hw->io_inuse);
> +		io->state = EFCT_HW_IO_STATE_INUSE;
> +		io->abort_reqtag = U32_MAX;
> +		io->wq = hw->hw_wq[0];
> +		kref_init(&io->ref);
> +		io->release = efct_hw_io_free_internal;
> +	} else {
> +		atomic_add_return(1, &hw->io_alloc_failed_count);
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_hw_io *
> +efct_hw_io_alloc(struct efct_hw *hw)
> +{
> +	struct efct_hw_io	*io = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	io = _efct_hw_io_alloc(hw);
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +	return io;
> +}
> +
> +/*
> + * When an IO is freed, depending on the exchange busy flag, and other
> + * workarounds, move it to the correct list.
> + */

kerneldoc
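
Something like (just a sketch of the format; wording taken from the
existing comment):

/**
 * efct_hw_io_free_move_correct_list() - move a freed HW IO to the right list
 * @hw: hardware context
 * @io: HW IO being freed
 *
 * Depending on the exchange busy flag, move the IO to either the
 * wait_free or the free list.
 */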

> +static void
> +efct_hw_io_free_move_correct_list(struct efct_hw *hw,
> +				  struct efct_hw_io *io)
> +{
> +	if (io->xbusy) {
> +		/*
> +		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
> +		 * up
> +		 */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_wait_free);
> +		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
> +	} else {
> +		/* IO not busy, add to free list */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_free);
> +		io->state = EFCT_HW_IO_STATE_FREE;
> +	}
> +}
> +
> +static inline void
> +efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* initialize IO fields */
> +	efct_hw_init_free_io(io);
> +
> +	/* Restore default SGL */
> +	efct_hw_io_restore_sgl(hw, io);
> +}
> +
> +/**
> + * Free a previously-allocated HW IO object. Called when
> + * IO refcount goes to zero (host-owned IOs only).
> + */
> +void
> +efct_hw_io_free_internal(struct kref *arg)
> +{
> +	unsigned long flags = 0;
> +	struct efct_hw_io *io =
> +			container_of(arg, struct efct_hw_io, ref);

This fits on one line.
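
i.e. simply:

	struct efct_hw_io *io = container_of(arg, struct efct_hw_io, ref);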

> +	struct efct_hw *hw = io->hw;
> +
> +	/* perform common cleanup */
> +	efct_hw_io_free_common(hw, io);
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +		/* remove from in-use list */
> +		if (io->list_entry.next &&
> +		    !list_empty(&hw->io_inuse)) {
> +			list_del(&io->list_entry);
> +			efct_hw_io_free_move_correct_list(hw, io);
> +		}

This doesn't need to be extra indented and the if condition fits on one
line.
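
i.e. something like:

	spin_lock_irqsave(&hw->io_lock, flags);
	/* remove from in-use list */
	if (io->list_entry.next && !list_empty(&hw->io_inuse)) {
		list_del(&io->list_entry);
		efct_hw_io_free_move_correct_list(hw, io);
	}
	spin_unlock_irqrestore(&hw->io_lock, flags);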

> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +}
> +
> +int
> +efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	return kref_put(&io->ref, io->release);
> +}
> +
> +u8
> +efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	return (refcount_read(&io->ref.refcount) > 0);
> +}
> +
> +struct efct_hw_io *
> +efct_hw_io_lookup(struct efct_hw *hw, u32 xri)
> +{
> +	u32 ioindex;
> +
> +	ioindex = xri - hw->sli.extent[SLI_RSRC_XRI].base[0];
> +	return hw->io[ioindex];
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
> +			struct efc_dma *sgl,
> +			u32 sgl_count)
> +{
> +	if (hw->sli.sgl_pre_registered) {
> +		efc_log_err(hw->os,
> +			     "can't use temp SGL with pre-registered SGLs\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	io->ovfl_sgl = sgl;
> +	io->ovfl_sgl_count = sgl_count;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_init_sges(struct efct_hw *hw, struct efct_hw_io *io,
> +		     enum efct_hw_io_type type)
> +{
> +	struct sli4_sge	*data = NULL;
> +	u32 i = 0;
> +	u32 skips = 0;
> +	u32 sge_flags = 0;
> +
> +	if (!io) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p\n", hw, io);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* Clear / reset the scatter-gather list */
> +	io->sgl = &io->def_sgl;
> +	io->sgl_count = io->def_sgl_count;
> +	io->first_data_sge = 0;
> +
> +	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
> +	io->n_sge = 0;
> +	io->sge_offset = 0;
> +
> +	io->type = type;
> +
> +	data = io->sgl->virt;
> +
> +	/*
> +	 * Some IO types have underlying hardware requirements on the order
> +	 * of SGEs. Process all special entries here.
> +	 */
> +	switch (type) {
> +	case EFCT_HW_IO_TARGET_WRITE:
> +#define EFCT_TARGET_WRITE_SKIPS	2
> +		skips = EFCT_TARGET_WRITE_SKIPS;

Move the defines out of the function.
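
E.g. at file scope in efct_hw.c, something along these lines (the
comment is only a suggestion):

/* special leading SGE counts, see efct_hw_io_init_sges() */
#define EFCT_TARGET_WRITE_SKIPS	2
#define EFCT_TARGET_READ_SKIPS	2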


> +
> +		/* populate host resident XFER_RDY buffer */
> +		sge_flags = le32_to_cpu(data->dw2_flags);
> +		sge_flags &= (~SLI4_SGE_TYPE_MASK);
> +		sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +		data->buffer_address_high =
> +			cpu_to_le32(upper_32_bits(io->xfer_rdy.phys));
> +		data->buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(io->xfer_rdy.phys));
> +		data->buffer_length = cpu_to_le32(io->xfer_rdy.size);
> +		data->dw2_flags = cpu_to_le32(sge_flags);
> +		data++;
> +
> +		skips--;
> +
> +		io->n_sge = 1;
> +		break;
> +	case EFCT_HW_IO_TARGET_READ:
> +		/*
> +		 * For FCP_TSEND64, the first 2 entries are SKIP SGE's
> +		 */
> +#define EFCT_TARGET_READ_SKIPS	2
> +		skips = EFCT_TARGET_READ_SKIPS;
> +		break;
> +	case EFCT_HW_IO_TARGET_RSP:
> +		/*
> +		 * No skips, etc. for FCP_TRSP64
> +		 */
> +		break;
> +	default:
> +		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Write skip entries
> +	 */
> +	for (i = 0; i < skips; i++) {
> +		sge_flags = le32_to_cpu(data->dw2_flags);
> +		sge_flags &= (~SLI4_SGE_TYPE_MASK);
> +		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
> +		data->dw2_flags = cpu_to_le32(sge_flags);
> +		data++;
> +	}
> +
> +	io->n_sge += skips;
> +
> +	/*
> +	 * Set last
> +	 */
> +	sge_flags = le32_to_cpu(data->dw2_flags);
> +	sge_flags |= SLI4_SGE_LAST;
> +	data->dw2_flags = cpu_to_le32(sge_flags);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
> +		   uintptr_t addr, u32 length)
> +{
> +	struct sli4_sge	*data = NULL;
> +	u32 sge_flags = 0;
> +
> +	if (!io || !addr || !length) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p addr=%lx length=%u\n",
> +			    hw, io, addr, length);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (length > hw->sli.sge_supported_length) {
> +		efc_log_err(hw->os,
> +			     "length of SGE %d bigger than allowed %d\n",
> +			    length, hw->sli.sge_supported_length);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	data = io->sgl->virt;
> +	data += io->n_sge;
> +
> +	sge_flags = le32_to_cpu(data->dw2_flags);
> +	sge_flags &= ~SLI4_SGE_TYPE_MASK;
> +	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
> +	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
> +	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
> +
> +	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
> +	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
> +	data->buffer_length = cpu_to_le32(length);
> +
> +	/*
> +	 * Always assume this is the last entry and mark as such.
> +	 * If this is not the first entry unset the "last SGE"
> +	 * indication for the previous entry
> +	 */
> +	sge_flags |= SLI4_SGE_LAST;
> +	data->dw2_flags = cpu_to_le32(sge_flags);
> +
> +	if (io->n_sge) {
> +		sge_flags = le32_to_cpu(data[-1].dw2_flags);
> +		sge_flags &= ~SLI4_SGE_LAST;
> +		data[-1].dw2_flags = cpu_to_le32(sge_flags);
> +	}
> +
> +	/* Set first_data_bde if not previously set */
> +	if (io->first_data_sge == 0)
> +		io->first_data_sge = io->n_sge;
> +
> +	io->sge_offset += length;
> +	io->n_sge++;
> +
> +	/* Update the linked segment length (only executed after overflow has
> +	 * begun)
> +	 */
> +	if (io->ovfl_lsp)
> +		io->ovfl_lsp->dw3_seglen =
> +			cpu_to_le32(io->n_sge * sizeof(struct sli4_sge) &
> +				    SLI4_LSP_SGE_SEGLEN);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +void
> +efct_hw_io_abort_all(struct efct_hw *hw)
> +{
> +	struct efct_hw_io *io_to_abort	= NULL;
> +	struct efct_hw_io *next_io = NULL;
> +
> +	list_for_each_entry_safe(io_to_abort, next_io,
> +				 &hw->io_inuse, list_entry) {
> +		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
> +	}
> +}
> +
> +static void
> +efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
> +{
> +	struct efct_hw_io *io = arg;
> +	struct efct_hw *hw = io->hw;
> +	u32 ext = 0;
> +	u32 len = 0;
> +	struct hw_wq_callback *wqcb;
> +	unsigned long flags = 0;
> +
> +	/*
> +	 * For IOs that were aborted internally, we may need to issue the
> +	 * callback here depending on whether an XRI_ABORTED CQE is expected or
> +	 * not. If the status is Local Reject/No XRI, then
> +	 * issue the callback now.
> +	 */
> +	ext = sli_fc_ext_status(&hw->sli, cqe);
> +	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
> +	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI &&
> +		io->done) {
> +		efct_hw_done_t done = io->done;
> +		void *arg = io->arg;
> +
> +		io->done = NULL;
> +
> +		/*
> +		 * Use latched status as this is always saved for an internal
> +		 * abort Note: We wont have both a done and abort_done
> +		 * function, so don't worry about
> +		 *       clobbering the len, status and ext fields.
> +		 */
> +		status = io->saved_status;
> +		len = io->saved_len;
> +		ext = io->saved_ext;
> +		io->status_saved = false;
> +		done(io, io->rnode, len, status, ext, arg);
> +	}
> +
> +	if (io->abort_done) {
> +		efct_hw_done_t done = io->abort_done;
> +		void *arg = io->abort_arg;
> +
> +		io->abort_done = NULL;
> +
> +		done(io, io->rnode, len, status, ext, arg);
> +	}
> +	spin_lock_irqsave(&hw->io_abort_lock, flags);
> +	/* clear abort bit to indicate abort is complete */
> +	io->abort_in_progress = false;
> +	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +
> +	/* Free the WQ callback */
> +	if (io->abort_reqtag == U32_MAX) {
> +		efc_log_err(hw->os, "HW IO already freed\n");
> +		return;
> +	}
> +
> +	wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag);
> +	efct_hw_reqtag_free(hw, wqcb);
> +
> +	/*
> +	 * Call efct_hw_io_free() because this releases the WQ reservation as
> +	 * well as doing the refcount put. Don't duplicate the code here.
> +	 */
> +	(void)efct_hw_io_free(hw, io);
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
> +		 bool send_abts, void *cb, void *arg)
> +{
> +	enum sli4_abort_type atype = SLI_ABORT_MAX;
> +	u32 id = 0, mask = 0;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	struct hw_wq_callback *wqcb;
> +	unsigned long flags = 0;
> +
> +	if (!io_to_abort) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p\n",
> +			    hw, io_to_abort);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
> +			     hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* take a reference on IO being aborted */
> +	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
> +		/* command no longer active */
> +		efc_log_test(hw->os,
> +			      "io not active xri=0x%x tag=0x%x\n",
> +			     io_to_abort->indicator, io_to_abort->reqtag);
> +		return EFCT_HW_RTN_IO_NOT_ACTIVE;
> +	}
> +
> +	/* Must have a valid WQ reference */
> +	if (!io_to_abort->wq) {
> +		efc_log_test(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
> +			      io_to_abort->indicator);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		return EFCT_HW_RTN_IO_NOT_ACTIVE;
> +	}
> +
> +	/*
> +	 * Validation checks complete; now check to see if already being
> +	 * aborted
> +	 */
> +	spin_lock_irqsave(&hw->io_abort_lock, flags);
> +	if (io_to_abort->abort_in_progress) {
> +		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		efc_log_debug(hw->os,
> +			       "io already being aborted xri=0x%x tag=0x%x\n",
> +			      io_to_abort->indicator, io_to_abort->reqtag);
> +		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
> +	}
> +
> +	/*
> +	 * This IO is not already being aborted. Set flag so we won't try to
> +	 * abort it again. After all, we only have one abort_done callback.
> +	 */
> +	io_to_abort->abort_in_progress = true;
> +	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +
> +	/*
> +	 * If we got here, the possibilities are:
> +	 * - host owned xri
> +	 *	- io_to_abort->wq_index != U32_MAX
> +	 *		- submit ABORT_WQE to same WQ
> +	 * - port owned xri:
> +	 *	- rxri: io_to_abort->wq_index == U32_MAX
> +	 *		- submit ABORT_WQE to any WQ
> +	 *	- non-rxri
> +	 *		- io_to_abort->index != U32_MAX
> +	 *			- submit ABORT_WQE to same WQ
> +	 *		- io_to_abort->index == U32_MAX
> +	 *			- submit ABORT_WQE to any WQ
> +	 */
> +	io_to_abort->abort_done = cb;
> +	io_to_abort->abort_arg  = arg;
> +
> +	atype = SLI_ABORT_XRI;
> +	id = io_to_abort->indicator;
> +
> +	/* Allocate a request tag for the abort portion of this IO */
> +	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
> +	if (!wqcb) {
> +		efc_log_err(hw->os, "can't allocate request tag\n");
> +		return EFCT_HW_RTN_NO_RESOURCES;
> +	}
> +	io_to_abort->abort_reqtag = wqcb->instance_index;
> +
> +	/*
> +	 * If the wqe is on the pending list, then set this wqe to be
> +	 * aborted when the IO's wqe is removed from the list.
> +	 */
> +	if (io_to_abort->wq) {
> +		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
> +		if (io_to_abort->wqe.list_entry.next) {
> +			io_to_abort->wqe.abort_wqe_submit_needed = true;
> +			io_to_abort->wqe.send_abts = send_abts;
> +			io_to_abort->wqe.id = id;
> +			io_to_abort->wqe.abort_reqtag =
> +						 io_to_abort->abort_reqtag;
> +			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
> +					       flags);
> +			return EFC_SUCCESS;
> +		}
> +		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
> +	}
> +
> +	if (sli_abort_wqe(&hw->sli, io_to_abort->wqe.wqebuf,
> +			  hw->sli.wqe_size, atype, send_abts, id, mask,
> +			  io_to_abort->abort_reqtag, SLI4_CQ_DEFAULT)) {
> +		efc_log_err(hw->os, "ABORT WQE error\n");
> +		io_to_abort->abort_reqtag = U32_MAX;
> +		efct_hw_reqtag_free(hw, wqcb);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (rc == EFCT_HW_RTN_SUCCESS) {
> +
> +		/* ABORT_WQE does not actually utilize an XRI on the Port,
> +		 * therefore, keep xbusy as-is to track the exchange's state,
> +		 * not the ABORT_WQE's state
> +		 */
> +		rc = efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe);
> +		if (rc > 0)
> +			/* non-negative return is success */
> +			rc = 0;
> +			/*
> +			 * can't abort an abort so skip adding to timed wqe
> +			 * list
> +			 */
> +	}
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		spin_lock_irqsave(&hw->io_abort_lock, flags);
> +		io_to_abort->abort_in_progress = false;
> +		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +	}
> +	return rc;
> +}
> +
> +void
> +efct_hw_reqtag_pool_free(struct efct_hw *hw)
> +{
> +	u32 i = 0;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +	struct hw_wq_callback *wqcb = NULL;
> +
> +	if (reqtag_pool) {
> +		for (i = 0; i < U16_MAX; i++) {
> +			wqcb = reqtag_pool->tags[i];
> +			if (!wqcb)
> +				continue;
> +
> +			kfree(wqcb);
> +		}
> +		kfree(reqtag_pool);
> +		hw->wq_reqtag_pool = NULL;
> +	}
> +}
> +
> +struct reqtag_pool *
> +efct_hw_reqtag_pool_alloc(struct efct_hw *hw)
> +{
> +	u32 i = 0;
> +	struct reqtag_pool *reqtag_pool;
> +	struct hw_wq_callback *wqcb;
> +
> +	reqtag_pool = kzalloc(sizeof(*reqtag_pool), GFP_KERNEL);
> +	if (!reqtag_pool)
> +		return NULL;
> +
> +	INIT_LIST_HEAD(&reqtag_pool->freelist);
> +	/* initialize reqtag pool lock */
> +	spin_lock_init(&reqtag_pool->lock);
> +	for (i = 0; i < U16_MAX; i++) {
> +		wqcb = kmalloc(sizeof(*wqcb), GFP_KERNEL);
> +		if (!wqcb)
> +			break;
> +
> +		reqtag_pool->tags[i] = wqcb;
> +		wqcb->instance_index = i;
> +		wqcb->callback = NULL;
> +		wqcb->arg = NULL;
> +		INIT_LIST_HEAD(&wqcb->list_entry);
> +		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
> +	}
> +
> +	return reqtag_pool;
> +}
> +
> +struct hw_wq_callback *
> +efct_hw_reqtag_alloc(struct efct_hw *hw,
> +		     void (*callback)(void *arg, u8 *cqe, int status),
> +		     void *arg)
> +{
> +	struct hw_wq_callback *wqcb = NULL;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +	unsigned long flags = 0;
> +
> +	if (!callback)
> +		return wqcb;
> +
> +	spin_lock_irqsave(&reqtag_pool->lock, flags);
> +
> +	if (!list_empty(&reqtag_pool->freelist)) {
> +		wqcb = list_first_entry(&reqtag_pool->freelist,
> +				      struct hw_wq_callback, list_entry);
> +	}
> +
> +	if (wqcb) {
> +		list_del(&wqcb->list_entry);
> +		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
> +		wqcb->callback = callback;
> +		wqcb->arg = arg;
> +	} else {
> +		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
> +	}
> +
> +	return wqcb;
> +}
> +
> +void
> +efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb)
> +{
> +	unsigned long flags = 0;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +
> +	if (!wqcb->callback)
> +		efc_log_err(hw->os, "WQCB is already freed\n");
> +
> +	spin_lock_irqsave(&reqtag_pool->lock, flags);
> +	wqcb->callback = NULL;
> +	wqcb->arg = NULL;
> +	INIT_LIST_HEAD(&wqcb->list_entry);
> +	list_add(&wqcb->list_entry, &hw->wq_reqtag_pool->freelist);
> +	spin_unlock_irqrestore(&reqtag_pool->lock, flags);
> +}
> +
> +struct hw_wq_callback *
> +efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index)
> +{
> +	struct hw_wq_callback *wqcb;
> +
> +	wqcb = hw->wq_reqtag_pool->tags[instance_index];
> +	if (!wqcb)
> +		efc_log_err(hw->os, "wqcb for instance %d is null\n",
> +			     instance_index);
> +
> +	return wqcb;
> +}
> +
> +void
> +efct_hw_reqtag_reset(struct efct_hw *hw)
> +{
> +	struct hw_wq_callback *wqcb;
> +	u32 i;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +	struct list_head *p, *n;
> +
> +	/* Remove all from freelist */
> +	list_for_each_safe(p, n, &reqtag_pool->freelist) {
> +		wqcb = list_entry(p, struct hw_wq_callback, list_entry);
> +
> +		if (wqcb)
> +			list_del(&wqcb->list_entry);
> +	}
> +
> +	/* Put them all back */
> +	for (i = 0; i < ARRAY_SIZE(reqtag_pool->tags); i++) {
> +		wqcb = reqtag_pool->tags[i];
> +		wqcb->instance_index = i;
> +		wqcb->callback = NULL;
> +		wqcb->arg = NULL;
> +		INIT_LIST_HEAD(&wqcb->list_entry);
> +		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
> +	}
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 1b67e0721936..86736d5295ec 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -635,5 +635,47 @@ void efct_hw_rx_free(struct efct_hw *hw);
>  extern enum efct_hw_rtn
>  efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
>  		void *arg);
> +struct efct_hw_io *efct_hw_io_alloc(struct efct_hw *hw);
> +int efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io);
> +u8 efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io);
> +extern enum efct_hw_rtn
> +efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		struct efct_hw_io *io, u32 len,
> +		union efct_hw_io_param_u *iparam,
> +		struct efc_remote_node *rnode, void *cb, void *arg);
> +extern enum efct_hw_rtn
> +efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
> +			struct efc_dma *sgl,
> +			u32 sgl_count);
> +extern enum efct_hw_rtn
> +efct_hw_io_init_sges(struct efct_hw *hw,
> +		     struct efct_hw_io *io, enum efct_hw_io_type type);
> +
> +extern enum efct_hw_rtn
> +efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
> +		   uintptr_t addr, u32 length);
> +extern enum efct_hw_rtn
> +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
> +		 bool send_abts, void *cb, void *arg);
> +extern u32
> +efct_hw_io_get_count(struct efct_hw *hw,
> +		     enum efct_hw_io_count_type io_count_type);
> +extern struct efct_hw_io
> +*efct_hw_io_lookup(struct efct_hw *hw, u32 indicator);
> +void efct_hw_io_abort_all(struct efct_hw *hw);
> +void efct_hw_io_free_internal(struct kref *arg);
> +
> +/* HW WQ request tag API */
> +struct reqtag_pool *efct_hw_reqtag_pool_alloc(struct efct_hw *hw);
> +void efct_hw_reqtag_pool_free(struct efct_hw *hw);
> +extern struct hw_wq_callback
> +*efct_hw_reqtag_alloc(struct efct_hw *hw,
> +			void (*callback)(void *arg, u8 *cqe,
> +					 int status), void *arg);
> +extern void
> +efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb);
> +extern struct hw_wq_callback
> +*efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
> +void efct_hw_reqtag_reset(struct efct_hw *hw);
>  
>  #endif /* __EFCT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 20/31] elx: efct: Hardware queues processing
  2020-04-12  3:32 ` [PATCH v3 20/31] elx: efct: Hardware queues processing James Smart
  2020-04-16  7:37   ` Hannes Reinecke
@ 2020-04-16  9:17   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  9:17 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:52PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for EQ, CQ, WQ and RQ processing.
> Routines for IO object pool allocation and deallocation.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Return defined values
>   Changed IO pool allocation logic to avoid using efct_pool.
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 369 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |  36 ++++
>  drivers/scsi/elx/efct/efct_io.c | 198 +++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_io.h | 191 +++++++++++++++++++++
>  4 files changed, 794 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_io.c
>  create mode 100644 drivers/scsi/elx/efct/efct_io.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 892493a3a35e..6cdc7e27b148 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -2146,3 +2146,372 @@ efct_hw_reqtag_reset(struct efct_hw *hw)
>  		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
>  	}
>  }
> +
> +int
> +efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id)
> +{
> +	int	rc = -1;

it's not a return code, it's an index that is returned
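
E.g. rename it so the intent is clearer (just a sketch, also folding the
compare onto one line as noted below):

	int q_index = -1;
	int index = id & (EFCT_HW_Q_HASH_SIZE - 1);

	do {
		if (hash[index].in_use && hash[index].id == id)
			q_index = hash[index].index;
		else
			index = (index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
	} while (q_index == -1 && hash[index].in_use);

	return q_index;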

> +	int	index = id & (EFCT_HW_Q_HASH_SIZE - 1);
> +
> +	/*
> +	 * Since the hash is always bigger than the maximum number of Qs, then
> +	 * we never have to worry about an infinite loop. We will always find
> +	 * an unused entry.
> +	 */
> +	do {
> +		if (hash[index].in_use &&
> +		    hash[index].id == id)

fits on one line

> +			rc = hash[index].index;
> +		else
> +			index = (index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
> +	} while (rc == -1 && hash[index].in_use);
> +
> +	return rc;
> +}
> +
> +int
> +efct_hw_process(struct efct_hw *hw, u32 vector,
> +		u32 max_isr_time_msec)
> +{
> +	struct hw_eq *eq;
> +	int rc = 0;
> +
> +	/*
> +	 * The caller should disable interrupts if they wish to prevent us
> +	 * from processing during a shutdown. The following states are defined:
> +	 *   EFCT_HW_STATE_UNINITIALIZED - No queues allocated
> +	 *   EFCT_HW_STATE_QUEUES_ALLOCATED - The state after a chip reset,
> +	 *                                    queues are cleared.
> +	 *   EFCT_HW_STATE_ACTIVE - Chip and queues are operational
> +	 *   EFCT_HW_STATE_RESET_IN_PROGRESS - reset, we still want completions
> +	 *   EFCT_HW_STATE_TEARDOWN_IN_PROGRESS - We still want mailbox
> +	 *                                        completions.
> +	 */
> +	if (hw->state == EFCT_HW_STATE_UNINITIALIZED)
> +		return EFC_SUCCESS;
> +
> +	/* Get pointer to struct hw_eq */
> +	eq = hw->hw_eq[vector];
> +	if (!eq)
> +		return EFC_SUCCESS;
> +
> +	eq->use_count++;
> +
> +	rc = efct_hw_eq_process(hw, eq, max_isr_time_msec);
> +
> +	return rc;
> +}
> +
> +int
> +efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
> +		   u32 max_isr_time_msec)
> +{
> +	u8		eqe[sizeof(struct sli4_eqe)] = { 0 };
> +	u32	tcheck_count;
> +	time_t		tstart;
> +	time_t		telapsed;
> +	bool		done = false;

variables are not consistently aligned.
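
E.g. just drop the extra tab padding:

	u8 eqe[sizeof(struct sli4_eqe)] = { 0 };
	u32 tcheck_count;
	time_t tstart;
	time_t telapsed;
	bool done = false;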

> +
> +	tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
> +	tstart = jiffies_to_msecs(jiffies);
> +
> +	while (!done && !sli_eq_read(&hw->sli, eq->queue, eqe)) {
> +		u16	cq_id = 0;
> +		int		rc;

same here

> +
> +		rc = sli_eq_parse(&hw->sli, eqe, &cq_id);
> +		if (unlikely(rc)) {
> +			if (rc == SLI4_EQE_STATUS_EQ_FULL) {
> +				u32 i;
> +
> +				/*
> +				 * Received a sentinel EQE indicating the
> +				 * EQ is full. Process all CQs
> +				 */
> +				for (i = 0; i < hw->cq_count; i++)
> +					efct_hw_cq_process(hw, hw->hw_cq[i]);
> +				continue;
> +			} else {
> +				return rc;
> +			}
> +		} else {
> +			int index;
> +
> +			index  = efct_hw_queue_hash_find(hw->cq_hash, cq_id);
> +
> +			if (likely(index >= 0))
> +				efct_hw_cq_process(hw, hw->hw_cq[index]);
> +			else
> +				efc_log_err(hw->os, "bad CQ_ID %#06x\n",
> +					     cq_id);
> +		}
> +
> +		if (eq->queue->n_posted > eq->queue->posted_limit)
> +			sli_queue_arm(&hw->sli, eq->queue, false);
> +
> +		if (tcheck_count && (--tcheck_count == 0)) {
> +			tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
> +			telapsed = jiffies_to_msecs(jiffies) - tstart;
> +			if (telapsed >= max_isr_time_msec)
> +				done = true;
> +		}
> +	}
> +	sli_queue_eq_arm(&hw->sli, eq->queue, true);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +_efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
> +{
> +	int queue_rc;
> +
> +	/* Every so often, set the wqec bit to generate consumed completions */
> +	if (wq->wqec_count)
> +		wq->wqec_count--;
> +
> +	if (wq->wqec_count == 0) {
> +		struct sli4_generic_wqe *genwqe = (void *)wqe->wqebuf;
> +
> +		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
> +		wq->wqec_count = wq->wqec_set_count;
> +	}
> +
> +	/* Decrement WQ free count */
> +	wq->free_count--;
> +
> +	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
> +
> +	return (queue_rc < 0) ? -1 : 0;
> +}
> +
> +static void
> +hw_wq_submit_pending(struct hw_wq *wq, u32 update_free_count)
> +{
> +	struct efct_hw_wqe *wqe;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&wq->queue->lock, flags);
> +
> +	/* Update free count with value passed in */
> +	wq->free_count += update_free_count;
> +
> +	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
> +		wqe = list_first_entry(&wq->pending_list,
> +				       struct efct_hw_wqe, list_entry);
> +		list_del(&wqe->list_entry);
> +		_efct_hw_wq_write(wq, wqe);
> +
> +		if (wqe->abort_wqe_submit_needed) {
> +			wqe->abort_wqe_submit_needed = false;
> +			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
> +				      wq->hw->sli.wqe_size,
> +				      SLI_ABORT_XRI, wqe->send_abts, wqe->id,
> +				      0, wqe->abort_reqtag, SLI4_CQ_DEFAULT);
> +					  INIT_LIST_HEAD(&wqe->list_entry);
> +			list_add_tail(&wqe->list_entry, &wq->pending_list);
> +			wq->wq_pending_count++;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&wq->queue->lock, flags);
> +}
> +
> +void
> +efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
> +{
> +	u8		cqe[sizeof(struct sli4_mcqe)];
> +	u16	rid = U16_MAX;
> +	enum sli4_qentry	ctype;		/* completion type */
> +	int		status;
> +	u32	n_processed = 0;
> +	u32	tstart, telapsed;

Please make the alignment of the variable declarations consistent in all
functions.

> +
> +	tstart = jiffies_to_msecs(jiffies);
> +
> +	while (!sli_cq_read(&hw->sli, cq->queue, cqe)) {
> +		status = sli_cq_parse(&hw->sli, cq->queue,
> +				      cqe, &ctype, &rid);

fits on one line
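
i.e.

		status = sli_cq_parse(&hw->sli, cq->queue, cqe, &ctype, &rid);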

> +		/*
> +		 * The sign of status is significant. If status is:
> +		 * == 0 : call completed correctly and
> +		 * the CQE indicated success
> +		 * > 0 : call completed correctly and
> +		 * the CQE indicated an error
> +		 * < 0 : call failed and no information is available about the
> +		 * CQE
> +		 */
> +		if (status < 0) {
> +			if (status == SLI4_MCQE_STATUS_NOT_COMPLETED)
> +				/*
> +				 * Notification that an entry was consumed,
> +				 * but not completed
> +				 */
> +				continue;
> +
> +			break;
> +		}
> +
> +		switch (ctype) {
> +		case SLI_QENTRY_ASYNC:
> +			sli_cqe_async(&hw->sli, cqe);
> +			break;
> +		case SLI_QENTRY_MQ:
> +			/*
> +			 * Process MQ entry. Note there is no way to determine
> +			 * the MQ_ID from the completion entry.
> +			 */
> +			efct_hw_mq_process(hw, status, hw->mq);
> +			break;
> +		case SLI_QENTRY_WQ:
> +			efct_hw_wq_process(hw, cq, cqe, status, rid);
> +			break;
> +		case SLI_QENTRY_WQ_RELEASE: {
> +			u32 wq_id = rid;
> +			int index;
> +			struct hw_wq *wq = NULL;
> +
> +			index = efct_hw_queue_hash_find(hw->wq_hash, wq_id);
> +
> +			if (likely(index >= 0)) {
> +				wq = hw->hw_wq[index];
> +			} else {
> +				efc_log_err(hw->os, "bad WQ_ID %#06x\n", wq_id);
> +				break;
> +			}
> +			/* Submit any HW IOs that are on the WQ pending list */
> +			hw_wq_submit_pending(wq, wq->wqec_set_count);
> +
> +			break;
> +		}
> +
> +		case SLI_QENTRY_RQ:
> +			efct_hw_rqpair_process_rq(hw, cq, cqe);
> +			break;
> +		case SLI_QENTRY_XABT: {
> +			efct_hw_xabt_process(hw, cq, cqe, rid);
> +			break;
> +		}
> +		default:
> +			efc_log_test(hw->os,
> +				      "unhandled ctype=%#x rid=%#x\n",
> +				     ctype, rid);
> +			break;
> +		}
> +
> +		n_processed++;
> +		if (n_processed == cq->queue->proc_limit)
> +			break;
> +
> +		if (cq->queue->n_posted >= cq->queue->posted_limit)
> +			sli_queue_arm(&hw->sli, cq->queue, false);
> +	}
> +
> +	sli_queue_arm(&hw->sli, cq->queue, true);
> +
> +	if (n_processed > cq->queue->max_num_processed)
> +		cq->queue->max_num_processed = n_processed;
> +	telapsed = jiffies_to_msecs(jiffies) - tstart;
> +	if (telapsed > cq->queue->max_process_time)
> +		cq->queue->max_process_time = telapsed;
> +}
> +
> +void
> +efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
> +		   u8 *cqe, int status, u16 rid)
> +{
> +	struct hw_wq_callback *wqcb;
> +
> +	if (rid == EFCT_HW_REQUE_XRI_REGTAG) {
> +		if (status)
> +			efc_log_err(hw->os, "reque xri failed, status = %d\n",
> +				     status);
> +		return;
> +	}
> +
> +	wqcb = efct_hw_reqtag_get_instance(hw, rid);
> +	if (!wqcb) {
> +		efc_log_err(hw->os, "invalid request tag: x%x\n", rid);
> +		return;
> +	}
> +
> +	if (!wqcb->callback) {
> +		efc_log_err(hw->os, "wqcb callback is NULL\n");
> +		return;
> +	}
> +
> +	(*wqcb->callback)(wqcb->arg, cqe, status);
> +}
> +
> +void
> +efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
> +		     u8 *cqe, u16 rid)
> +{
> +	/* search IOs wait free list */
> +	struct efct_hw_io *io = NULL;
> +	unsigned long flags = 0;
> +
> +	io = efct_hw_io_lookup(hw, rid);
> +	if (!io) {
> +		/* IO lookup failure should never happen */
> +		efc_log_err(hw->os,
> +			     "Error: xabt io lookup failed rid=%#x\n", rid);
> +		return;
> +	}
> +
> +	if (!io->xbusy)
> +		efc_log_debug(hw->os, "xabt io not busy rid=%#x\n", rid);
> +	else
> +		/* mark IO as no longer busy */
> +		io->xbusy = false;
> +
> +	/*
> +	 * For IOs that were aborted internally, we need to issue any pending
> +	 * callback here.
> +	 */
> +	if (io->done) {
> +		efct_hw_done_t done = io->done;
> +		void		*arg = io->arg;
> +
> +		/*
> +		 * Use latched status as this is always saved for an internal
> +		 * abort
> +		 */
> +		int status = io->saved_status;
> +		u32 len = io->saved_len;
> +		u32 ext = io->saved_ext;
> +
> +		io->done = NULL;
> +		io->status_saved = false;
> +
> +		done(io, io->rnode, len, status, ext, arg);
> +	}
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	if (io->state == EFCT_HW_IO_STATE_INUSE ||
> +	    io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
> +		/* if on wait_free list, caller has already freed IO;
> +		 * remove from wait_free list and add to free list.
> +		 * if on in-use list, already marked as no longer busy;
> +		 * just leave there and wait for caller to free.
> +		 */
> +		if (io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
> +			io->state = EFCT_HW_IO_STATE_FREE;
> +			list_del(&io->list_entry);
> +			efct_hw_io_free_move_correct_list(hw, io);
> +		}
> +	}
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +}
> +
> +static int
> +efct_hw_flush(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +
> +	/* Process any remaining completions */
> +	for (i = 0; i < hw->eq_count; i++)
> +		efct_hw_process(hw, i, ~0);
> +
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 86736d5295ec..b427a4eda5a3 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -678,4 +678,40 @@ extern struct hw_wq_callback
>  *efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
>  void efct_hw_reqtag_reset(struct efct_hw *hw);
>  
> +/* RQ completion handlers for RQ pair mode */
> +extern int
> +efct_hw_rqpair_process_rq(struct efct_hw *hw,
> +			  struct hw_cq *cq, u8 *cqe);
> +extern
> +enum efct_hw_rtn efct_hw_rqpair_sequence_free(struct efct_hw *hw,
> +						struct efc_hw_sequence *seq);
> +static inline void
> +efct_hw_sequence_copy(struct efc_hw_sequence *dst,
> +		      struct efc_hw_sequence *src)
> +{
> +	/* Copy src to dst, then zero out the linked list link */
> +	*dst = *src;
> +}
> +
> +static inline enum efct_hw_rtn
> +efct_hw_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq)
> +{
> +	/* Only RQ pair mode is supported */
> +	return efct_hw_rqpair_sequence_free(hw, seq);
> +}
> +extern int
> +efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
> +		   u32 max_isr_time_msec);
> +void efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq);
> +extern void
> +efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
> +		   u8 *cqe, int status, u16 rid);
> +extern void
> +efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
> +		     u8 *cqe, u16 rid);
> +extern int
> +efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
> +extern int
> +efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
> +
>  #endif /* __EFCT_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_io.c b/drivers/scsi/elx/efct/efct_io.c
> new file mode 100644
> index 000000000000..8ea05b59c892
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_io.c
> @@ -0,0 +1,198 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_hw.h"
> +#include "efct_io.h"
> +
> +struct efct_io_pool {
> +	struct efct *efct;
> +	spinlock_t lock;	/* IO pool lock */
> +	u32 io_num_ios;		/* Total IOs allocated */
> +	struct efct_io *ios[EFCT_NUM_SCSI_IOS];
> +	struct list_head freelist;
> +
> +};
> +
> +struct efct_io_pool *
> +efct_io_pool_create(struct efct *efct, u32 num_sgl)
> +{
> +	u32 i = 0;
> +	struct efct_io_pool *io_pool;
> +	struct efct_io *io;
> +
> +	/* Allocate the IO pool */
> +	io_pool = kzalloc(sizeof(*io_pool), GFP_KERNEL);
> +	if (!io_pool)
> +		return NULL;
> +
> +	io_pool->efct = efct;
> +	INIT_LIST_HEAD(&io_pool->freelist);
> +	/* initialize IO pool lock */
> +	spin_lock_init(&io_pool->lock);
> +
> +	for (i = 0; i < EFCT_NUM_SCSI_IOS; i++) {
> +		io = kmalloc(sizeof(*io), GFP_KERNEL);
> +		if (!io)
> +			break;
> +
> +		io_pool->io_num_ios++;
> +		io_pool->ios[i] = io;
> +		io->tag = i;
> +		io->instance_index = i;
> +
> +		/* Allocate a response buffer */
> +		io->rspbuf.size = SCSI_RSP_BUF_LENGTH;
> +		io->rspbuf.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						     io->rspbuf.size,
> +						     &io->rspbuf.phys, GFP_DMA);
> +		if (!io->rspbuf.virt) {
> +			efc_log_err(efct, "dma_alloc rspbuf failed\n");
> +			efct_io_pool_free(io_pool);
> +			return NULL;
> +		}
> +
> +		/* Allocate SGL */
> +		io->sgl = kzalloc(sizeof(*io->sgl) * num_sgl, GFP_ATOMIC);

GFP_KERNEL
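
This call site already sleeps (see the kmalloc just above), so e.g.:

		io->sgl = kzalloc(sizeof(*io->sgl) * num_sgl, GFP_KERNEL);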

> +		if (!io->sgl) {
> +			efct_io_pool_free(io_pool);
> +			return NULL;
> +		}
> +
> +		memset(io->sgl, 0, sizeof(*io->sgl) * num_sgl);
> +		io->sgl_allocated = num_sgl;
> +		io->sgl_count = 0;
> +
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &io_pool->freelist);
> +	}
> +
> +	return io_pool;
> +}
> +
> +int
> +efct_io_pool_free(struct efct_io_pool *io_pool)
> +{
> +	struct efct *efct;
> +	u32 i;
> +	struct efct_io *io;
> +
> +	if (io_pool) {
> +		efct = io_pool->efct;
> +
> +		for (i = 0; i < io_pool->io_num_ios; i++) {
> +			io = io_pool->ios[i];
> +			if (!io)
> +				continue;
> +
> +			kfree(io->sgl);
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  io->rspbuf.size, io->rspbuf.virt,
> +					  io->rspbuf.phys);
> +			memset(&io->rspbuf, 0, sizeof(struct efc_dma));
> +		}
> +
> +		kfree(io_pool);
> +		efct->xport->io_pool = NULL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +u32 efct_io_pool_allocated(struct efct_io_pool *io_pool)
> +{
> +	return io_pool->io_num_ios;
> +}
> +
> +struct efct_io *
> +efct_io_pool_io_alloc(struct efct_io_pool *io_pool)
> +{
> +	struct efct_io *io = NULL;
> +	struct efct *efct;
> +	unsigned long flags = 0;
> +
> +	efct = io_pool->efct;
> +
> +	spin_lock_irqsave(&io_pool->lock, flags);
> +
> +	if (!list_empty(&io_pool->freelist)) {
> +		io = list_first_entry(&io_pool->freelist, struct efct_io,
> +				     list_entry);
> +	}
> +
> +	if (io) {
> +		list_del(&io->list_entry);

Couldn't list_del() go directly after list_first_entry(), with
spin_unlock_irqrestore() then directly following the if body?
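
Roughly:

	spin_lock_irqsave(&io_pool->lock, flags);
	if (!list_empty(&io_pool->freelist)) {
		io = list_first_entry(&io_pool->freelist, struct efct_io,
				      list_entry);
		list_del(&io->list_entry);
	}
	spin_unlock_irqrestore(&io_pool->lock, flags);

	if (!io)
		return NULL;

	io->io_type = EFCT_IO_TYPE_MAX;
	/* ... rest of the field initialization unchanged ... */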

> +		spin_unlock_irqrestore(&io_pool->lock, flags);
> +
> +		io->io_type = EFCT_IO_TYPE_MAX;
> +		io->hio_type = EFCT_HW_IO_MAX;
> +		io->hio = NULL;
> +		io->transferred = 0;
> +		io->efct = efct;
> +		io->timeout = 0;
> +		io->sgl_count = 0;
> +		io->tgt_task_tag = 0;
> +		io->init_task_tag = 0;
> +		io->hw_tag = 0;
> +		io->display_name = "pending";
> +		io->seq_init = 0;
> +		io->els_req_free = false;
> +		io->io_free = 0;
> +		io->release = NULL;
> +		atomic_add_return(1, &efct->xport->io_active_count);
> +		atomic_add_return(1, &efct->xport->io_total_alloc);
> +	} else {
> +		spin_unlock_irqrestore(&io_pool->lock, flags);
> +	}
> +	return io;
> +}
> +
> +/* Free an object used to track an IO */
> +void
> +efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io)
> +{
> +	struct efct *efct;
> +	struct efct_hw_io *hio = NULL;
> +	unsigned long flags = 0;
> +
> +	efct = io_pool->efct;
> +
> +	spin_lock_irqsave(&io_pool->lock, flags);
> +	hio = io->hio;
> +	io->hio = NULL;
> +	io->io_free = 1;
> +	INIT_LIST_HEAD(&io->list_entry);
> +	list_add(&io->list_entry, &io_pool->freelist);
> +	spin_unlock_irqrestore(&io_pool->lock, flags);
> +
> +	if (hio)
> +		efct_hw_io_free(&efct->hw, hio);
> +
> +	atomic_sub_return(1, &efct->xport->io_active_count);
> +	atomic_add_return(1, &efct->xport->io_total_free);
> +}
> +
> +/* Find an I/O given its node and ox_id */
> +struct efct_io *
> +efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
> +		    u16 ox_id, u16 rx_id)
> +{
> +	struct efct_io	*io = NULL;
> +	unsigned long flags = 0;
> +	u8 found = false;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	list_for_each_entry(io, &node->active_ios, list_entry) {
> +		if ((io->cmd_tgt && io->init_task_tag == ox_id) &&
> +		    (rx_id == 0xffff || io->tgt_task_tag == rx_id)) {
> +			if (kref_get_unless_zero(&io->ref))
> +				found = true;
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return found ? io : NULL;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_io.h b/drivers/scsi/elx/efct/efct_io.h
> new file mode 100644
> index 000000000000..e28d8ed2f7ff
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_io.h
> @@ -0,0 +1,191 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_IO_H__)
> +#define __EFCT_IO_H__
> +
> +#include "efct_lio.h"
> +
> +#define EFCT_LOG_ENABLE_IO_ERRORS(efct)		\
> +		(((efct) != NULL) ? (((efct)->logmask & (1U << 6)) != 0) : 0)
> +
> +#define io_error_log(io, fmt, ...)  \
> +	do { \
> +		if (EFCT_LOG_ENABLE_IO_ERRORS(io->efct)) \
> +			efc_log_warn(io->efct, fmt, ##__VA_ARGS__); \
> +	} while (0)
> +
> +#define SCSI_CMD_BUF_LENGTH	48
> +#define SCSI_RSP_BUF_LENGTH	(FCP_RESP_WITH_EXT + SCSI_SENSE_BUFFERSIZE)
> +#define EFCT_NUM_SCSI_IOS	8192
> +
> +enum efct_io_type {
> +	EFCT_IO_TYPE_IO = 0,
> +	EFCT_IO_TYPE_ELS,
> +	EFCT_IO_TYPE_CT,
> +	EFCT_IO_TYPE_CT_RESP,
> +	EFCT_IO_TYPE_BLS_RESP,
> +	EFCT_IO_TYPE_ABORT,
> +
> +	EFCT_IO_TYPE_MAX,
> +};
> +
> +enum efct_els_state {
> +	EFCT_ELS_REQUEST = 0,
> +	EFCT_ELS_REQUEST_DELAYED,
> +	EFCT_ELS_REQUEST_DELAY_ABORT,
> +	EFCT_ELS_REQ_ABORT,
> +	EFCT_ELS_REQ_ABORTED,
> +	EFCT_ELS_ABORT_IO_COMPL,
> +};
> +
> +struct efct_io {
> +	struct list_head	list_entry;
> +	struct list_head	io_pending_link;
> +	/* reference counter and callback function */

kerneldoc?
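
If you go with kernel-doc here, the per-field comments could be folded
into one block, e.g. (sketch, first few fields only; descriptions taken
from the existing comments):

/**
 * struct efct_io - SCSI IO context
 * @list_entry:      free list / active list linkage
 * @io_pending_link: pending list linkage
 * @ref:             reference count
 * @release:         called when @ref drops to zero
 * @efct:            pointer back to efct
 * ...
 */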

> +	struct kref		ref;
> +	void (*release)(struct kref *arg);
> +	/* pointer back to efct */
> +	struct efct		*efct;
> +	/* unique instance index value */
> +	u32			instance_index;
> +	/* display name */
> +	const char		*display_name;
> +	/* pointer to node */
> +	struct efc_node		*node;
> +	/* (io_pool->io_free_list) free list link */
> +	/* initiator task tag (OX_ID) for back-end and SCSI logging */
> +	u32			init_task_tag;
> +	/* target task tag (RX_ID) - for back-end and SCSI logging */
> +	u32			tgt_task_tag;
> +	/* HW layer unique IO id - for back-end and SCSI logging */
> +	u32			hw_tag;
> +	/* unique IO identifier */
> +	u32			tag;
> +	/* SGL */
> +	struct efct_scsi_sgl	*sgl;
> +	/* Number of allocated SGEs */
> +	u32			sgl_allocated;
> +	/* Number of SGEs in this SGL */
> +	u32			sgl_count;
> +	/* backend target private IO data */
> +	struct efct_scsi_tgt_io tgt_io;
> +	/* expected data transfer length, based on FC header */
> +	u32			exp_xfer_len;
> +
> +	/* Declarations private to HW/SLI */
> +	void			*hw_priv;
> +
> +	/* indicates what this struct efct_io structure is used for */
> +	enum efct_io_type	io_type;
> +	struct efct_hw_io	*hio;
> +	size_t			transferred;
> +
> +	/* set if auto_trsp was set */
> +	bool			auto_resp;
> +	/* set if low latency request */
> +	bool			low_latency;
> +	/* selected WQ steering request */
> +	u8			wq_steering;
> +	/* selected WQ class if steering is class */
> +	u8			wq_class;
> +	/* transfer size for current request */
> +	u64			xfer_req;
> +	/* target callback function */
> +	efct_scsi_io_cb_t	scsi_tgt_cb;
> +	/* target callback function argument */
> +	void			*scsi_tgt_cb_arg;
> +	/* abort callback function */
> +	efct_scsi_io_cb_t	abort_cb;
> +	/* abort callback function argument */
> +	void			*abort_cb_arg;
> +	/* BLS callback function */
> +	efct_scsi_io_cb_t	bls_cb;
> +	/* BLS callback function argument */
> +	void			*bls_cb_arg;
> +	/* TMF command being processed */
> +	enum efct_scsi_tmf_cmd	tmf_cmd;
> +	/* rx_id from the ABTS that initiated the command abort */
> +	u16			abort_rx_id;
> +
> +	/* True if this is a Target command */
> +	bool			cmd_tgt;
> +	/* when aborting, indicates ABTS is to be sent */
> +	bool			send_abts;
> +	/* True if this is an Initiator command */
> +	bool			cmd_ini;
> +	/* True if local node has sequence initiative */
> +	bool			seq_init;
> +	/* iparams for hw io send call */
> +	union efct_hw_io_param_u iparam;
> +	/* HW IO type */
> +	enum efct_hw_io_type	hio_type;
> +	/* wire length */
> +	u64			wire_len;
> +	/* saved HW callback */
> +	void			*hw_cb;
> +
> +	/* for ELS requests/responses */
> +	/* True if ELS is pending */
> +	bool			els_pend;
> +	/* True if ELS is active */
> +	bool			els_active;
> +	/* ELS request payload buffer */
> +	struct efc_dma		els_req;
> +	/* ELS response payload buffer */
> +	struct efc_dma		els_rsp;
> +	bool			els_req_free;
> +	/* Retries remaining */
> +	u32			els_retries_remaining;
> +	void (*els_callback)(struct efc_node *node,
> +			     struct efc_node_cb *cbdata, void *cbarg);
> +	void			*els_callback_arg;
> +	/* timeout */
> +	u32			els_timeout_sec;
> +
> +	/* delay timer */
> +	struct timer_list	delay_timer;
> +
> +	/* for abort handling */
> +	/* pointer to IO to abort */
> +	struct efct_io		*io_to_abort;
> +
> +	enum efct_els_state	state;
> +	/* Protects els cmds */
> +	spinlock_t		els_lock;
> +
> +	/* SCSI Response buffer */
> +	struct efc_dma		rspbuf;
> +	/* Timeout value in seconds for this IO */
> +	u32			timeout;
> +	/* CS_CTL priority for this IO */
> +	u8			cs_ctl;
> +	/* Is io object in freelist > */
> +	u8			io_free;
> +	u32			app_id;
> +};
> +
> +struct efct_io_cb_arg {
> +	int status;		/* completion status */
> +	int ext_status;		/* extended completion status */
> +	void *app;		/* application argument */
> +};
> +
> +struct efct_io_pool *
> +efct_io_pool_create(struct efct *efct, u32 num_sgl);
> +extern int
> +efct_io_pool_free(struct efct_io_pool *io_pool);
> +extern u32
> +efct_io_pool_allocated(struct efct_io_pool *io_pool);
> +
> +extern struct efct_io *
> +efct_io_pool_io_alloc(struct efct_io_pool *io_pool);
> +extern void
> +efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io);
> +extern struct efct_io *
> +efct_io_find_tgt_io(struct efct *efct, struct efc_node *node,
> +		    u16 ox_id, u16 rx_id);
> +#endif /* __EFCT_IO_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 21/31] elx: efct: Unsolicited FC frame processing routines
  2020-04-12  3:32 ` [PATCH v3 21/31] elx: efct: Unsolicited FC frame processing routines James Smart
@ 2020-04-16  9:36   ` Daniel Wagner
  0 siblings, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  9:36 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:53PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to handle unsolicited FC frames.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> 
> ---
> v3:
>   Return defined values
> ---
>  drivers/scsi/elx/efct/efct_hw.c    |   1 +
>  drivers/scsi/elx/efct/efct_unsol.c | 813 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_unsol.h |  49 +++
>  3 files changed, 863 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
>  create mode 100644 drivers/scsi/elx/efct/efct_unsol.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 6cdc7e27b148..fd3c2dec3ef6 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -6,6 +6,7 @@
>  
>  #include "efct_driver.h"
>  #include "efct_hw.h"
> +#include "efct_unsol.h"
>  
>  static enum efct_hw_rtn
>  efct_hw_link_event_init(struct efct_hw *hw)
> diff --git a/drivers/scsi/elx/efct/efct_unsol.c b/drivers/scsi/elx/efct/efct_unsol.c
> new file mode 100644
> index 000000000000..e8611524e2cd
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_unsol.c
> @@ -0,0 +1,813 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_els.h"
> +#include "efct_unsol.h"
> +
> +#define frame_printf(efct, hdr, fmt, ...) \
> +	do { \
> +		char s_id_text[16]; \
> +		efc_node_fcid_display(ntoh24((hdr)->fh_s_id), \
> +			s_id_text, sizeof(s_id_text)); \
> +		efc_log_debug(efct, "[%06x.%s] %02x/%04x/%04x: " fmt, \
> +			ntoh24((hdr)->fh_d_id), s_id_text, \
> +			(hdr)->fh_r_ctl, be16_to_cpu((hdr)->fh_ox_id), \
> +			be16_to_cpu((hdr)->fh_rx_id), ##__VA_ARGS__); \
> +	} while (0)
> +
> +static int
> +efct_unsol_process(struct efct *efct, struct efc_hw_sequence *seq)
> +{
> +	struct efct_xport_fcfi *xport_fcfi = NULL;
> +	struct efc_domain *domain;
> +	struct efct_hw *hw = &efct->hw;
> +	unsigned long flags = 0;
> +
> +	xport_fcfi = &efct->xport->fcfi;
> +
> +	/* If the transport FCFI entry is NULL, then drop the frame */
> +	if (!xport_fcfi) {
> +		efc_log_test(efct,
> +			      "FCFI %d is not valid, dropping frame\n",
> +			seq->fcfi);
> +
> +		efct_hw_sequence_free(&efct->hw, seq);
> +		return EFC_SUCCESS;
> +	}
> +
> +	domain = hw->domain;
> +
> +	/*
> +	 * If we are holding frames or the domain is not yet registered or
> +	 * there's already frames on the pending list,
> +	 * then add the new frame to pending list
> +	 */
> +	if (!domain ||
> +	    xport_fcfi->hold_frames ||
> +	    !list_empty(&xport_fcfi->pend_frames)) {
> +		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
> +		INIT_LIST_HEAD(&seq->list_entry);
> +		list_add_tail(&seq->list_entry, &xport_fcfi->pend_frames);
> +		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
> +
> +		if (domain) {
> +			/* immediately process pending frames */
> +			efct_domain_process_pending(domain);
> +		}
> +	} else {
> +		/*
> +		 * We are not holding frames and pending list is empty,
> +		 * just process frame. A non-zero return means the frame
> +		 * was not handled - so cleanup
> +		 */
> +		if (efc_domain_dispatch_frame(domain, seq))
> +			efct_hw_sequence_free(&efct->hw, seq);
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq)
> +{
> +	struct efct *efct = arg;
> +	int rc;
> +
> +	rc = efct_unsol_process(efct, seq);
> +	if (rc)
> +		efct_hw_sequence_free(&efct->hw, seq);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +void
> +efct_process_node_pending(struct efc_node *node)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efc_hw_sequence *seq = NULL;
> +	u32 pend_frames_processed = 0;
> +	unsigned long flags = 0;
> +
> +	for (;;) {
> +		/* need to check for hold frames condition after each frame
> +		 * processed because any given frame could cause a transition
> +		 * to a state that holds frames
> +		 */
> +		if (node->hold_frames)
> +			break;
> +
> +		/* Get next frame/sequence */
> +		spin_lock_irqsave(&node->pend_frames_lock, flags);
> +		if (!list_empty(&node->pend_frames)) {
> +			seq = list_first_entry(&node->pend_frames,
> +					struct efc_hw_sequence, list_entry);
> +			list_del(&seq->list_entry);
> +		}
> +		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
> +
> +		if (!seq) {
> +			pend_frames_processed =	node->pend_frames_processed;
> +			node->pend_frames_processed = 0;
> +			break;
> +		}
> +		node->pend_frames_processed++;
> +
> +		/* now dispatch frame(s) to dispatch function */
> +		efc_node_dispatch_frame(node, seq);
> +		efct_hw_sequence_free(&efct->hw, seq);
> +	}
> +
> +	if (pend_frames_processed != 0)
> +		efc_log_debug(efct, "%u node frames held and processed\n",
> +			       pend_frames_processed);
> +}
> +
> +static bool efct_domain_frames_held(void *arg)
> +{
> +	struct efc_domain *domain = (struct efc_domain *)arg;
> +	struct efct *efct = domain->efc->base;
> +	struct efct_xport_fcfi *xport_fcfi;
> +
> +	xport_fcfi = &efct->xport->fcfi;
> +	return xport_fcfi->hold_frames;
> +}
> +
> +void
> +efct_domain_process_pending(struct efc_domain *domain)
> +{
> +	struct efct *efct = domain->efc->base;
> +	struct efct_xport_fcfi *xport_fcfi;
> +	struct efc_hw_sequence *seq = NULL;
> +	u32 pend_frames_processed = 0;
> +	unsigned long flags = 0;
> +
> +	xport_fcfi = &efct->xport->fcfi;
> +
> +	for (;;) {
> +		/* need to check for hold frames condition after each frame
> +		 * processed because any given frame could cause a transition
> +		 * to a state that holds frames
> +		 */
> +		if (efct_domain_frames_held(domain))
> +			break;
> +
> +		/* Get next frame/sequence */
> +		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
> +			if (!list_empty(&xport_fcfi->pend_frames)) {
> +				seq = list_first_entry(&xport_fcfi->pend_frames,
> +						       struct efc_hw_sequence,
> +						       list_entry);
> +				list_del(&seq->list_entry);
> +			}
> +			if (!seq) {
> +				pend_frames_processed =
> +					xport_fcfi->pend_frames_processed;
> +				xport_fcfi->pend_frames_processed = 0;
> +				spin_unlock_irqrestore(&
> +						xport_fcfi->pend_frames_lock,
> +						flags);
> +				break;
> +			}
> +			xport_fcfi->pend_frames_processed++;

no need to indent.
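
i.e.:

		spin_lock_irqsave(&xport_fcfi->pend_frames_lock, flags);
		if (!list_empty(&xport_fcfi->pend_frames)) {
			seq = list_first_entry(&xport_fcfi->pend_frames,
					       struct efc_hw_sequence,
					       list_entry);
			list_del(&seq->list_entry);
		}
		if (!seq) {
			pend_frames_processed =
				xport_fcfi->pend_frames_processed;
			xport_fcfi->pend_frames_processed = 0;
			spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock,
					       flags);
			break;
		}
		xport_fcfi->pend_frames_processed++;
		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);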

> +		spin_unlock_irqrestore(&xport_fcfi->pend_frames_lock, flags);
> +
> +		/* now dispatch frame(s) to dispatch function */
> +		if (efc_domain_dispatch_frame(domain, seq))
> +			efct_hw_sequence_free(&efct->hw, seq);
> +
> +		seq = NULL;
> +	}
> +	if (pend_frames_processed != 0)
> +		efc_log_debug(efct, "%u domain frames held and processed\n",
> +			       pend_frames_processed);
> +}
> +
> +static struct efc_hw_sequence *
> +efct_frame_next(struct list_head *pend_list, spinlock_t *list_lock)
> +{
> +	struct efc_hw_sequence *frame = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(list_lock, flags);
> +
> +	if (!list_empty(pend_list)) {
> +		frame = list_first_entry(pend_list,
> +					 struct efc_hw_sequence, list_entry);
> +		list_del(&frame->list_entry);
> +	}
> +
> +	spin_unlock_irqrestore(list_lock, flags);
> +	return frame;
> +}
> +
> +static int
> +efct_purge_pending(struct efct *efct, struct list_head *pend_list,
> +		   spinlock_t *list_lock)
> +{
> +	struct efc_hw_sequence *frame;
> +
> +	for (;;) {
> +		frame = efct_frame_next(pend_list, list_lock);
> +		if (!frame)
> +			break;
> +
> +		frame_printf(efct,
> +			     (struct fc_frame_header *)frame->header->dma.virt,
> +			     "Discarding held frame\n");
> +		efct_hw_sequence_free(&efct->hw, frame);
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_node_purge_pending(struct efc *efc, struct efc_node *node)
> +{
> +	struct efct *efct = efc->base;
> +
> +	return efct_purge_pending(efct, &node->pend_frames,
> +				&node->pend_frames_lock);
> +}
> +
> +int
> +efct_domain_purge_pending(struct efc_domain *domain)
> +{
> +	struct efct *efct = domain->efc->base;
> +	struct efct_xport_fcfi *xport_fcfi;
> +
> +	xport_fcfi = &efct->xport->fcfi;
> +	return efct_purge_pending(efct,
> +				 &xport_fcfi->pend_frames,
> +				 &xport_fcfi->pend_frames_lock);
> +}
> +
> +void
> +efct_domain_hold_frames(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = domain->efc->base;
> +	struct efct_xport_fcfi *xport_fcfi;
> +
> +	xport_fcfi = &efct->xport->fcfi;
> +	if (!xport_fcfi->hold_frames) {
> +		efc_log_debug(efct, "hold frames set for FCFI %d\n",
> +			       domain->fcf_indicator);
> +		xport_fcfi->hold_frames = true;
> +	}
> +}
> +
> +void
> +efct_domain_accept_frames(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = domain->efc->base;
> +	struct efct_xport_fcfi *xport_fcfi;
> +
> +	xport_fcfi = &efct->xport->fcfi;
> +	if (xport_fcfi->hold_frames) {
> +		efc_log_debug(efct, "hold frames cleared for FCFI %d\n",
> +			       domain->fcf_indicator);
> +	}
> +	xport_fcfi->hold_frames = false;
> +	efct_domain_process_pending(domain);
> +}
> +
> +static int
> +efct_fc_tmf_rejected_cb(struct efct_io *io,
> +			enum efct_scsi_io_status scsi_status,
> +		       u32 flags, void *arg)
> +{
> +	efct_scsi_io_free(io);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_dispatch_unsolicited_tmf(struct efct_io *io,
> +			      u8 task_management_flags,
> +			      struct efc_node *node, u32 lun)
> +{
> +	u32 i;
> +	struct {
> +		u32 mask;
> +		enum efct_scsi_tmf_cmd cmd;
> +	} tmflist[] = {
> +	{FCP_TMF_ABT_TASK_SET, EFCT_SCSI_TMF_ABORT_TASK_SET},
> +	{FCP_TMF_CLR_TASK_SET, EFCT_SCSI_TMF_CLEAR_TASK_SET},
> +	{FCP_TMF_LUN_RESET, EFCT_SCSI_TMF_LOGICAL_UNIT_RESET},
> +	{FCP_TMF_TGT_RESET, EFCT_SCSI_TMF_TARGET_RESET},
> +	{FCP_TMF_CLR_ACA, EFCT_SCSI_TMF_CLEAR_ACA} };
> +
> +	io->exp_xfer_len = 0;
> +
> +	for (i = 0; i < ARRAY_SIZE(tmflist); i++) {
> +		if (tmflist[i].mask & task_management_flags) {
> +			io->tmf_cmd = tmflist[i].cmd;
> +			efct_scsi_recv_tmf(io, lun, tmflist[i].cmd, NULL, 0);
> +			break;
> +		}
> +	}
> +	if (i == ARRAY_SIZE(tmflist)) {
> +		/* Not handled */
> +		node_printf(node, "TMF x%x rejected\n", task_management_flags);
> +		efct_scsi_send_tmf_resp(io, EFCT_SCSI_TMF_FUNCTION_REJECTED,
> +					NULL, efct_fc_tmf_rejected_cb, NULL);
> +	}
> +}
> +
> +static int
> +efct_validate_fcp_cmd(struct efct *efct, struct efc_hw_sequence *seq)
> +{
> +	/*
> +	 * If we received less than FCP_CMND_IU bytes, assume that the frame is
> +	 * corrupted in some way and drop it.
> +	 * This was seen when jamming the FCTL
> +	 * fill bytes field.
> +	 */
> +	if (seq->payload->dma.len < sizeof(struct fcp_cmnd)) {
> +		struct fc_frame_header	*fchdr = seq->header->dma.virt;
> +
> +		efc_log_debug(efct,
> +			"drop ox_id %04x with payload (%zd) less than (%zd)\n",
> +				    be16_to_cpu(fchdr->fh_ox_id),
> +				    seq->payload->dma.len,
> +				    sizeof(struct fcp_cmnd));
> +		return EFC_FAIL;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_populate_io_fcp_cmd(struct efct_io *io, struct fcp_cmnd *cmnd,
> +			 struct fc_frame_header *fchdr, bool sit)
> +{
> +	io->init_task_tag = be16_to_cpu(fchdr->fh_ox_id);
> +	/* note, tgt_task_tag, hw_tag  set when HW io is allocated */
> +	io->exp_xfer_len = be32_to_cpu(cmnd->fc_dl);
> +	io->transferred = 0;
> +
> +	/* The upper 7 bits of CS_CTL is the frame priority thru the SAN.
> +	 * Our assertion here is, the priority given to a frame containing
> +	 * the FCP cmd should be the priority given to ALL frames contained
> +	 * in that IO. Thus we need to save the incoming CS_CTL here.
> +	 */
> +	if (ntoh24(fchdr->fh_f_ctl) & FC_FC_RES_B17)
> +		io->cs_ctl = fchdr->fh_cs_ctl;
> +	else
> +		io->cs_ctl = 0;
> +
> +	io->seq_init = sit;
> +}
> +
> +static u32
> +efct_get_flags_fcp_cmd(struct fcp_cmnd *cmnd)
> +{
> +	u32 flags = 0;
> +
> +	switch (cmnd->fc_pri_ta & FCP_PTA_MASK) {
> +	case FCP_PTA_SIMPLE:
> +		flags |= EFCT_SCSI_CMD_SIMPLE;
> +		break;
> +	case FCP_PTA_HEADQ:
> +		flags |= EFCT_SCSI_CMD_HEAD_OF_QUEUE;
> +		break;
> +	case FCP_PTA_ORDERED:
> +		flags |= EFCT_SCSI_CMD_ORDERED;
> +		break;
> +	case FCP_PTA_ACA:
> +		flags |= EFCT_SCSI_CMD_ACA;
> +		break;
> +	}
> +	if (cmnd->fc_flags & FCP_CFL_WRDATA)
> +		flags |= EFCT_SCSI_CMD_DIR_IN;
> +	if (cmnd->fc_flags & FCP_CFL_RDDATA)
> +		flags |= EFCT_SCSI_CMD_DIR_OUT;
> +
> +	return flags;
> +}
> +
> +static void
> +efct_sframe_common_send_cb(void *arg, u8 *cqe, int status)
> +{
> +	struct efct_hw_send_frame_context *ctx = arg;
> +	struct efct_hw *hw = ctx->hw;
> +
> +	/* Free WQ completion callback */
> +	efct_hw_reqtag_free(hw, ctx->wqcb);
> +
> +	/* Free sequence */
> +	efct_hw_sequence_free(hw, ctx->seq);
> +}
> +
> +static int
> +efct_sframe_common_send(struct efc_node *node,
> +			struct efc_hw_sequence *seq,
> +			enum fc_rctl r_ctl, u32 f_ctl,
> +			u8 type, void *payload, u32 payload_len)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	enum efct_hw_rtn rc = 0;
> +	struct fc_frame_header *req_hdr = seq->header->dma.virt;
> +	struct fc_frame_header hdr;
> +	struct efct_hw_send_frame_context *ctx;
> +
> +	u32 heap_size = seq->payload->dma.size;
> +	uintptr_t heap_phys_base = seq->payload->dma.phys;
> +	u8 *heap_virt_base = seq->payload->dma.virt;
> +	u32 heap_offset = 0;
> +
> +	/* Build the FC header reusing the RQ header DMA buffer */
> +	memset(&hdr, 0, sizeof(hdr));
> +	hdr.fh_r_ctl = r_ctl;
> +	/* send it back to whomever sent it to us */
> +	memcpy(hdr.fh_d_id, req_hdr->fh_s_id, sizeof(hdr.fh_d_id));
> +	memcpy(hdr.fh_s_id, req_hdr->fh_d_id, sizeof(hdr.fh_s_id));
> +	hdr.fh_type = type;
> +	hton24(hdr.fh_f_ctl, f_ctl);
> +	hdr.fh_ox_id = req_hdr->fh_ox_id;
> +	hdr.fh_rx_id = req_hdr->fh_rx_id;
> +	hdr.fh_cs_ctl = 0;
> +	hdr.fh_df_ctl = 0;
> +	hdr.fh_seq_cnt = 0;
> +	hdr.fh_parm_offset = 0;
> +
> +	/*
> +	 * send_frame_seq_id is an atomic, we just let it increment,
> +	 * while storing only the low 8 bits to hdr->seq_id
> +	 */
> +	hdr.fh_seq_id = (u8)atomic_add_return(1, &hw->send_frame_seq_id);
> +	hdr.fh_seq_id--;
> +
> +	/* Allocate and fill in the send frame request context */
> +	ctx = (void *)(heap_virt_base + heap_offset);
> +	heap_offset += sizeof(*ctx);
> +	if (heap_offset > heap_size) {
> +		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
> +				heap_offset, heap_size);
> +		return EFC_FAIL;
> +	}
> +
> +	memset(ctx, 0, sizeof(*ctx));
> +
> +	/* Save sequence */
> +	ctx->seq = seq;
> +
> +	/* Allocate a response payload DMA buffer from the heap */
> +	ctx->payload.phys = heap_phys_base + heap_offset;
> +	ctx->payload.virt = heap_virt_base + heap_offset;
> +	ctx->payload.size = payload_len;
> +	ctx->payload.len = payload_len;
> +	heap_offset += payload_len;
> +	if (heap_offset > heap_size) {
> +		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
> +				heap_offset, heap_size);
> +		return EFC_FAIL;
> +	}
> +
> +	/* Copy the payload in */
> +	memcpy(ctx->payload.virt, payload, payload_len);
> +
> +	/* Send */
> +	rc = efct_hw_send_frame(&efct->hw, (void *)&hdr, FC_SOF_N3,
> +				FC_EOF_T, &ctx->payload, ctx,
> +				efct_sframe_common_send_cb, ctx);
> +	if (rc)
> +		efc_log_test(efct, "efct_hw_send_frame failed: %d\n", rc);
> +
> +	return rc ? -1 : 0;

return code: the rest of this function returns EFC_FAIL/EFC_SUCCESS, so the
raw -1/0 here is inconsistent.
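Something like

	return rc ? EFC_FAIL : EFC_SUCCESS;

would match the other exit paths.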

> +}
> +
> +static int
> +efct_sframe_send_fcp_rsp(struct efc_node *node,
> +			 struct efc_hw_sequence *seq,
> +			 void *rsp, u32 rsp_len)
> +{
> +	return efct_sframe_common_send(node, seq,
> +				      FC_RCTL_DD_CMD_STATUS,
> +				      FC_FC_EX_CTX |
> +				      FC_FC_LAST_SEQ |
> +				      FC_FC_END_SEQ |
> +				      FC_FC_SEQ_INIT,
> +				      FC_TYPE_FCP,
> +				      rsp, rsp_len);
> +}
> +
> +static int
> +efct_sframe_send_task_set_full_or_busy(struct efc_node *node,
> +				       struct efc_hw_sequence *seq)
> +{
> +	struct fcp_resp_with_ext fcprsp;
> +	struct fcp_cmnd *fcpcmd = seq->payload->dma.virt;
> +	int rc = 0;
> +	unsigned long flags = 0;
> +	struct efct *efct = node->efc->base;
> +
> +	/* construct task set full or busy response */
> +	memset(&fcprsp, 0, sizeof(fcprsp));
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		fcprsp.resp.fr_status = list_empty(&node->active_ios) ?
> +				SAM_STAT_BUSY : SAM_STAT_TASK_SET_FULL;

no need to indent
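i.e. something like:

	spin_lock_irqsave(&node->active_ios_lock, flags);
	fcprsp.resp.fr_status = list_empty(&node->active_ios) ?
				SAM_STAT_BUSY : SAM_STAT_TASK_SET_FULL;
	spin_unlock_irqrestore(&node->active_ios_lock, flags);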

> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	*((u32 *)&fcprsp.ext.fr_resid) = be32_to_cpu(fcpcmd->fc_dl);
> +
> +	/* send it using send_frame */
> +	rc = efct_sframe_send_fcp_rsp(node, seq, &fcprsp, sizeof(fcprsp));
> +	if (rc)
> +		efc_log_test(efct,
> +			      "efct_sframe_send_fcp_rsp failed: %d\n",
> +			rc);

alignment: the continuation lines don't line up with the open parenthesis.
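e.g.:

		efc_log_test(efct,
			     "efct_sframe_send_fcp_rsp failed: %d\n", rc);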

> +
> +	return rc;
> +}
> +
> +int
> +efct_dispatch_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
> +{
> +	struct efc *efc = node->efc;
> +	struct efct *efct = efc->base;
> +	struct fc_frame_header *fchdr = seq->header->dma.virt;
> +	struct fcp_cmnd	*cmnd = NULL;
> +	struct efct_io *io = NULL;
> +	u32 lun = U32_MAX;
> +	int rc = 0;
> +
> +	if (!seq->payload) {
> +		efc_log_err(efct, "Sequence payload is NULL.\n");
> +		return EFC_FAIL;
> +	}
> +
> +	cmnd = seq->payload->dma.virt;
> +
> +	/* perform FCP_CMND validation check(s) */
> +	if (efct_validate_fcp_cmd(efct, seq))
> +		return EFC_FAIL;
> +
> +	lun = scsilun_to_int(&cmnd->fc_lun);
> +	if (lun == U32_MAX)
> +		return EFC_FAIL;
> +
> +	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
> +	if (!io) {
> +		/* Use SEND_FRAME to send task set full or busy */
> +		rc = efct_sframe_send_task_set_full_or_busy(node, seq);
> +		if (rc)
> +			efc_log_err(efct, "Failed to send busy task: %d\n", rc);
> +		return rc;
> +	}
> +
> +	io->hw_priv = seq->hw_priv;
> +
> +	io->app_id = 0;
> +
> +	/* RQ pair, if we got here, SIT=1 */
> +	efct_populate_io_fcp_cmd(io, cmnd, fchdr, true);
> +
> +	if (cmnd->fc_tm_flags) {
> +		efct_dispatch_unsolicited_tmf(io,
> +					      cmnd->fc_tm_flags,
> +					      node, lun);
> +	} else {
> +		u32 flags = efct_get_flags_fcp_cmd(cmnd);
> +
> +		if (cmnd->fc_flags & FCP_CFL_LEN_MASK) {
> +			efc_log_err(efct, "Additional CDB not supported\n");
> +			return EFC_FAIL;
> +		}
> +		/*
> +		 * Can return failure for things like task set full and UAs,
> +		 * no need to treat as a dropped frame if rc != 0
> +		 */
> +		efct_scsi_recv_cmd(io, lun, cmnd->fc_cdb,
> +				   sizeof(cmnd->fc_cdb), flags);
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_sframe_send_bls_acc(struct efc_node *node,
> +			 struct efc_hw_sequence *seq)
> +{
> +	struct fc_frame_header *behdr = seq->header->dma.virt;
> +	u16 ox_id = be16_to_cpu(behdr->fh_ox_id);
> +	u16 rx_id = be16_to_cpu(behdr->fh_rx_id);
> +	struct fc_ba_acc acc = {0};
> +
> +	acc.ba_ox_id = cpu_to_be16(ox_id);
> +	acc.ba_rx_id = cpu_to_be16(rx_id);
> +	acc.ba_low_seq_cnt = cpu_to_be16(U16_MAX);
> +	acc.ba_high_seq_cnt = cpu_to_be16(U16_MAX);
> +
> +	return efct_sframe_common_send(node, seq,
> +				      FC_RCTL_BA_ACC,
> +				      FC_FC_EX_CTX |
> +				      FC_FC_LAST_SEQ |
> +				      FC_FC_END_SEQ,
> +				      FC_TYPE_BLS,
> +				      &acc, sizeof(acc));
> +}
> +
> +void
> +efct_node_io_cleanup(struct efc *efc, struct efc_node *node, bool force)
> +{
> +	struct efct_io *io;
> +	struct efct_io *next;
> +	unsigned long flags = 0;
> +	struct efct *efct = efc->base;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	list_for_each_entry_safe(io, next, &node->active_ios, list_entry) {
> +		list_del(&io->list_entry);
> +		efct_io_pool_io_free(efct->xport->io_pool, io);
> +	}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +void
> +efct_node_els_cleanup(struct efc *efc, struct efc_node *node,
> +		      bool force)
> +{
> +	struct efct_io *els;
> +	struct efct_io *els_next;
> +	struct efct_io *ls_acc_io;
> +	unsigned long flags = 0;
> +	struct efct *efct = efc->base;
> +
> +	/* first cleanup ELS's that are pending (not yet active) */
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	list_for_each_entry_safe(els, els_next, &node->els_io_pend_list,
> +				 list_entry) {
> +		/*
> +		 * skip the ELS IO for which a response
> +		 * will be sent after shutdown
> +		 */
> +		if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE &&
> +		    els == node->ls_acc_io) {
> +			continue;
> +		}
> +		/*
> +		 * can't call efct_els_io_free()
> +		 * because lock is held; cleanup manually
> +		 */
> +		node_printf(node, "Freeing pending els %s\n",
> +			    els->display_name);
> +		list_del(&els->list_entry);
> +
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  els->els_rsp.size, els->els_rsp.virt,
> +				  els->els_rsp.phys);
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  els->els_req.size, els->els_req.virt,
> +				  els->els_req.phys);
> +		memset(&els->els_rsp, 0, sizeof(struct efc_dma));
> +		memset(&els->els_req, 0, sizeof(struct efc_dma));
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +	}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +
> +	ls_acc_io = node->ls_acc_io;
> +
> +	if (node->ls_acc_io && ls_acc_io->hio) {
> +		/*
> +		 * if there's an IO that will result in an LS_ACC after
> +		 * shutdown and its HW IO is non-NULL, it better be an
> +		 * implicit logout in vanilla sequence coalescing. In this
> +		 * case, force the LS_ACC to go out on another XRI (hio)
> +		 * since the previous will have been aborted by the UNREG_RPI
> +		 */
> +		node_printf(node,
> +			    "invalidating ls_acc_io due to implicit logo\n");
> +
> +		/*
> +		 * No need to abort because the unreg_rpi
> +		 * takes care of it, just free
> +		 */
> +		efct_hw_io_free(&efct->hw, ls_acc_io->hio);
> +
> +		/* NULL out hio to force the LS_ACC to grab a new XRI */
> +		ls_acc_io->hio = NULL;
> +	}
> +}
> +
> +void
> +efct_node_abort_all_els(struct efc *efc, struct efc_node *node)
> +{
> +	struct efct_io *els;
> +	struct efct_io *els_next;
> +	struct efc_node_cb cbdata;
> +	struct efct *efct = efc->base;
> +	unsigned long flags = 0;
> +
> +	memset(&cbdata, 0, sizeof(struct efc_node_cb));
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +	list_for_each_entry_safe(els, els_next, &node->els_io_active_list,
> +				 list_entry) {
> +		if (els->els_req_free)
> +			continue;
> +		efc_log_debug(efct, "[%s] initiate ELS abort %s\n",
> +			       node->display_name, els->display_name);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);

move the debug log out of the locked section.
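e.g.:

		spin_unlock_irqrestore(&node->active_ios_lock, flags);
		efc_log_debug(efct, "[%s] initiate ELS abort %s\n",
			      node->display_name, els->display_name);
		efct_els_abort(els, &cbdata);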

> +		efct_els_abort(els, &cbdata);
> +		spin_lock_irqsave(&node->active_ios_lock, flags);
> +	}
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +static int
> +efct_process_abts(struct efct_io *io, struct fc_frame_header *hdr)
> +{
> +	struct efc_node *node = io->node;
> +	struct efct *efct = io->efct;
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
> +	struct efct_io *abortio;
> +
> +	/* Find IO and attempt to take a reference on it */
> +	abortio = efct_io_find_tgt_io(efct, node, ox_id, rx_id);
> +
> +	if (abortio) {
> +		/* Got a reference on the IO. Hold it until backend
> +		 * is notified below
> +		 */
> +		node_printf(node, "Abort request: ox_id [%04x] rx_id [%04x]\n",
> +			    ox_id, rx_id);
> +
> +		/*
> +		 * Save the ox_id for the ABTS as the init_task_tag in our
> +		 * manufactured
> +		 * TMF IO object
> +		 */
> +		io->display_name = "abts";
> +		io->init_task_tag = ox_id;
> +		/* don't set tgt_task_tag, don't want to confuse with XRI */
> +
> +		/*
> +		 * Save the rx_id from the ABTS as it is
> +		 * needed for the BLS response,
> +		 * regardless of the IO context's rx_id
> +		 */
> +		io->abort_rx_id = rx_id;
> +
> +		/* Call target server command abort */
> +		io->tmf_cmd = EFCT_SCSI_TMF_ABORT_TASK;
> +		efct_scsi_recv_tmf(io, abortio->tgt_io.lun,
> +				   EFCT_SCSI_TMF_ABORT_TASK, abortio, 0);
> +
> +		/*
> +		 * Backend will have taken an additional
> +		 * reference on the IO if needed;
> +		 * done with current reference.
> +		 */
> +		kref_put(&abortio->ref, abortio->release);
> +	} else {
> +		/*
> +		 * Either IO was not found or it has been
> +		 * freed between finding it
> +		 * and attempting to get the reference,
> +		 */
> +		node_printf(node,
> +			    "Abort request: ox_id [%04x], IO not found (exists=%d)\n",
> +			    ox_id, (abortio != NULL));
> +
> +		/* Send a BA_RJT */
> +		efct_bls_send_rjt_hdr(io, hdr);
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_node_recv_abts_frame(struct efc *efc, struct efc_node *node,
> +			  struct efc_hw_sequence *seq)
> +{
> +	struct efct *efct = efc->base;
> +	struct fc_frame_header *hdr = seq->header->dma.virt;
> +	struct efct_io *io = NULL;
> +
> +	node->abort_cnt++;
> +
> +	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
> +	if (io) {
> +		io->hw_priv = seq->hw_priv;
> +		/* If we got this far, SIT=1 */
> +		io->seq_init = 1;
> +
> +		/* fill out generic fields */
> +		io->efct = efct;
> +		io->node = node;
> +		io->cmd_tgt = true;
> +
> +		efct_process_abts(io, seq->header->dma.virt);
> +	} else {
> +		node_printf(node,
> +			    "SCSI IO allocation failed for ABTS received ");
> +		node_printf(node,
> +			    "s_id %06x d_id %06x ox_id %04x rx_id %04x\n",
> +			ntoh24(hdr->fh_s_id),
> +			ntoh24(hdr->fh_d_id),
> +			be16_to_cpu(hdr->fh_ox_id),
> +			be16_to_cpu(hdr->fh_rx_id));
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_unsol.h b/drivers/scsi/elx/efct/efct_unsol.h
> new file mode 100644
> index 000000000000..615c83120a00
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_unsol.h
> @@ -0,0 +1,49 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__OSC_UNSOL_H__)
> +#define __OSC_UNSOL_H__
> +
> +extern int
> +efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq);

extern is not needed
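Plain declarations are enough:

int efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq);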

> +extern int
> +efct_node_purge_pending(struct efc *efc, struct efc_node *node);
> +extern void
> +efct_process_node_pending(struct efc_node *domain);
> +extern void
> +efct_domain_process_pending(struct efc_domain *domain);
> +extern int
> +efct_domain_purge_pending(struct efc_domain *domain);
> +extern int
> +efct_dispatch_unsolicited_bls(struct efc_node *node,
> +			      struct efc_hw_sequence *seq);
> +extern void
> +efct_domain_hold_frames(struct efc *efc, struct efc_domain *domain);
> +extern void
> +efct_domain_accept_frames(struct efc *efc, struct efc_domain *domain);
> +extern void
> +efct_seq_coalesce_cleanup(struct efct_hw_io *io, u8 count);
> +extern int
> +efct_sframe_send_bls_acc(struct efc_node *node,
> +			 struct efc_hw_sequence *seq);
> +extern int
> +efct_dispatch_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
> +
> +extern int
> +efct_node_recv_abts_frame(struct efc *efc, struct efc_node *node,
> +			  struct efc_hw_sequence *seq);
> +extern void
> +efct_node_els_cleanup(struct efc *efc, struct efc_node *node,
> +		      bool force);
> +
> +extern void
> +efct_node_io_cleanup(struct efc *efc, struct efc_node *node,
> +		     bool force);
> +
> +void
> +efct_node_abort_all_els(struct efc *efc, struct efc_node *node);
> +
> +#endif /* __OSC_UNSOL_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 27/31] elx: efct: xport and hardware teardown routines
  2020-04-12  3:32 ` [PATCH v3 27/31] elx: efct: xport and hardware teardown routines James Smart
@ 2020-04-16  9:45   ` Hannes Reinecke
  2020-04-16 13:01   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16  9:45 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:32 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to detach xport and hardware objects.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>     Removed old patch 28 and merged with 27
> ---
>   drivers/scsi/elx/efct/efct_hw.c    | 333 +++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h    |  31 ++++
>   drivers/scsi/elx/efct/efct_xport.c | 291 ++++++++++++++++++++++++++++++++
>   3 files changed, 655 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index ca2fd237c7d6..a007ca98895d 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -3503,3 +3503,336 @@ efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
>   
>   	return rc;
>   }
> +
> +static int
> +efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
> +			void  *arg)
> +{
> +	kfree(mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Control a port (initialize, shutdown, or set link configuration) */
> +enum efct_hw_rtn
> +efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
> +		     uintptr_t value,
> +		void (*cb)(int status, uintptr_t value, void *arg),
> +		void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +
> +	switch (ctrl) {
> +	case EFCT_HW_PORT_INIT:
> +	{
> +		u8	*init_link;
> +		u32 speed = 0;
> +		u8 reset_alpa = 0;
> +
> +		u8	*cfg_link;
> +
> +		cfg_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!cfg_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +

Use the predefined mailbox buffer?
Or a memory pool?
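For the pool variant, a rough sketch (bmbx_pool would be a new,
hypothetical field, created once during HW setup):

	hw->bmbx_pool = mempool_create_kmalloc_pool(4, SLI4_BMBX_SIZE);
	...
	cfg_link = mempool_alloc(hw->bmbx_pool, GFP_ATOMIC);
	...
	mempool_free(cfg_link, hw->bmbx_pool);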

> +		if (!sli_cmd_config_link(&hw->sli, cfg_link,
> +					SLI4_BMBX_SIZE))
> +			rc = efct_hw_command(hw, cfg_link,
> +					     EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control,
> +					     NULL);
> +
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(cfg_link);
> +			efc_log_err(hw->os, "CONFIG_LINK failed\n");
> +			break;
> +		}
> +		speed = hw->config.speed;
> +		reset_alpa = (u8)(value & 0xff);
> +
> +		/* Allocate a new buffer for the init_link command */
> +		init_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!init_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		rc = EFCT_HW_RTN_ERROR;
> +		if (!sli_cmd_init_link(&hw->sli, init_link, SLI4_BMBX_SIZE,
> +				      speed, reset_alpa))
> +			rc = efct_hw_command(hw, init_link, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control, NULL);
> +		/* Free buffer on error, since no callback is coming */
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(init_link);
> +			efc_log_err(hw->os, "INIT_LINK failed\n");
> +		}
> +		break;
> +	}
> +	case EFCT_HW_PORT_SHUTDOWN:
> +	{
> +		u8	*down_link;
> +
> +		down_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!down_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +

Same here.

> +		if (!sli_cmd_down_link(&hw->sli, down_link, SLI4_BMBX_SIZE))
> +			rc = efct_hw_command(hw, down_link, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control, NULL);
> +		/* Free buffer on error, since no callback is coming */
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(down_link);
> +			efc_log_err(hw->os, "DOWN_LINK failed\n");
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_test(hw->os, "unhandled control %#x\n", ctrl);
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_teardown(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +	u32	iters = 10;
> +	u32	max_rpi;
> +	u32 destroy_queues;
> +	u32 free_memory;
> +	struct efc_dma *dma;
> +	struct efct *efct = hw->os;
> +
> +	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
> +	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
> +
> +	/* Cancel Sliport Healthcheck */
> +	if (hw->sliport_healthcheck) {
> +		hw->sliport_healthcheck = 0;
> +		efct_hw_config_sli_port_health_check(hw, 0, 0);
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
> +		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
> +
> +		efct_hw_flush(hw);
> +
> +		/*
> +		 * If there are outstanding commands, wait for them to complete
> +		 */
> +		while (!list_empty(&hw->cmd_head) && iters) {
> +			mdelay(10);
> +			efct_hw_flush(hw);
> +			iters--;
> +		}
> +
> +		if (list_empty(&hw->cmd_head))
> +			efc_log_debug(hw->os,
> +				       "All commands completed on MQ queue\n");
> +		else
> +			efc_log_debug(hw->os,
> +				       "Some cmds still pending on MQ queue\n");
> +
> +		/* Cancel any remaining commands */
> +		efct_hw_command_cancel(hw);
> +	} else {
> +		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
> +	}
> +
> +	max_rpi = hw->sli.qinfo.max_qcount[SLI_RSRC_RPI];
> +	if (hw->rpi_ref) {
> +		for (i = 0; i < max_rpi; i++) {
> +			u32 count;
> +
> +			count = atomic_read(&hw->rpi_ref[i].rpi_count);
> +			if (count)
> +				efc_log_debug(hw->os,
> +					       "non-zero ref [%d]=%d\n",
> +					       i, count);

Ho-hum. So you have a non-zero refcount and _still_ free the structure?
That smells fishy ...

> +		}
> +		kfree(hw->rpi_ref);
> +		hw->rpi_ref = NULL;
> +	}
> +

Please use proper refcounting here.
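A minimal sketch of the usual kref pattern (all names here are made up,
just to illustrate):

	struct efct_hw_rpi_ref {
		struct kref kref;
	};

	static void efct_hw_rpi_release(struct kref *kref)
	{
		struct efct_hw_rpi_ref *ref =
			container_of(kref, struct efct_hw_rpi_ref, kref);

		/* free/recycle the RPI only once the last user is gone */
	}

Users would then pair kref_get(&ref->kref) with
kref_put(&ref->kref, efct_hw_rpi_release) instead of an open-coded
atomic counter, so teardown can never free an entry that is still
referenced.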

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 22/31] elx: efct: Extended link Service IO handling
  2020-04-12  3:32 ` [PATCH v3 22/31] elx: efct: Extended link Service IO handling James Smart
  2020-04-16  7:58   ` Hannes Reinecke
@ 2020-04-16  9:49   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16  9:49 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:54PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Functions to build and send ELS/CT/BLS commands and responses.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Unified log message using cmd_name
>   Return and drop else, for better indentation and consistency.
>   Changed assertion log messages.
> ---
>  drivers/scsi/elx/efct/efct_els.c | 1928 ++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_els.h |  133 +++
>  2 files changed, 2061 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_els.c
>  create mode 100644 drivers/scsi/elx/efct/efct_els.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_els.c b/drivers/scsi/elx/efct/efct_els.c
> new file mode 100644
> index 000000000000..8a2598a83445
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_els.c
> @@ -0,0 +1,1928 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/*
> + * Functions to build and send ELS/CT/BLS commands and responses.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_els.h"
> +
> +#define ELS_IOFMT "[i:%04x t:%04x h:%04x]"
> +
> +#define EFCT_LOG_ENABLE_ELS_TRACE(efct)		\
> +		(((efct) != NULL) ? (((efct)->logmask & (1U << 1)) != 0) : 0)
> +
> +#define node_els_trace()  \
> +	do { \
> +		if (EFCT_LOG_ENABLE_ELS_TRACE(efct)) \
> +			efc_log_info(efct, "[%s] %-20s\n", \
> +				node->display_name, __func__); \
> +	} while (0)
> +
> +#define els_io_printf(els, fmt, ...) \
> +	efc_log_debug((struct efct *)els->node->efc->base,\
> +		      "[%s]" ELS_IOFMT " %-8s " fmt, \
> +		      els->node->display_name,\
> +		      els->init_task_tag, els->tgt_task_tag, els->hw_tag,\
> +		      els->display_name, ##__VA_ARGS__)
> +
> +#define EFCT_ELS_RSP_LEN		1024
> +#define EFCT_ELS_GID_PT_RSP_LEN		8096
> +
> +static char *cmd_name[] = FC_ELS_CMDS_INIT;
> +
> +void *
> +efct_els_req_send(struct efc *efc, struct efc_node *node, u32 cmd,
> +		  u32 timeout_sec, u32 retries)
> +{
> +	struct efct *efct = efc->base;
> +
> +	efc_log_debug(efct, "send %s\n", cmd_name[cmd]);
> +
> +	switch (cmd) {
> +	case ELS_PLOGI:
> +		return efct_send_plogi(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_FLOGI:
> +		return efct_send_flogi(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_FDISC:
> +		return efct_send_fdisc(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_LOGO:
> +		return efct_send_logo(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_PRLI:
> +		return efct_send_prli(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_ADISC:
> +		return efct_send_adisc(node, timeout_sec, retries, NULL, NULL);
> +	case ELS_SCR:
> +		return efct_send_scr(node, timeout_sec, retries, NULL, NULL);
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +
> +void *
> +efct_els_resp_send(struct efc *efc, struct efc_node *node,
> +		   u32 cmd, u16 ox_id)
> +{
> +	struct efct *efct = efc->base;
> +
> +	switch (cmd) {
> +	case ELS_PLOGI:
> +		efct_send_plogi_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_FLOGI:
> +		efct_send_flogi_acc(node, ox_id, 0, NULL, NULL);
> +		break;
> +	case ELS_LOGO:
> +		efct_send_logo_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PRLI:
> +		efct_send_prli_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PRLO:
> +		efct_send_prlo_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_ADISC:
> +		efct_send_adisc_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_LS_ACC:
> +		efct_send_ls_acc(node, ox_id, NULL, NULL);
> +		break;
> +	case ELS_PDISC:
> +	case ELS_FDISC:
> +	case ELS_RSCN:
> +	case ELS_SCR:
> +		efct_send_ls_rjt(efc, node, ox_id, ELS_RJT_UNAB,
> +				 ELS_EXPL_NONE, 0);
> +		break;
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +
> +struct efct_io *
> +efct_els_io_alloc(struct efc_node *node, u32 reqlen,
> +		  enum efct_els_role role)
> +{
> +	return efct_els_io_alloc_size(node, reqlen, EFCT_ELS_RSP_LEN, role);
> +}
> +
> +struct efct_io *
> +efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
> +		       u32 rsplen, enum efct_els_role role)
> +{
> +	struct efct *efct;
> +	struct efct_xport *xport;
> +	struct efct_io *els;
> +	unsigned long flags = 0;
> +
> +	efct = node->efc->base;
> +
> +	xport = efct->xport;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +
> +	if (!node->io_alloc_enabled) {
> +		efc_log_debug(efct,
> +			       "called with io_alloc_enabled = FALSE\n");
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	els = efct_io_pool_io_alloc(efct->xport->io_pool);
> +	if (!els) {
> +		atomic_add_return(1, &xport->io_alloc_failed_count);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	/* initialize refcount */
> +	kref_init(&els->ref);
> +	els->release = _efct_els_io_free;
> +
> +	switch (role) {
> +	case EFCT_ELS_ROLE_ORIGINATOR:
> +		els->cmd_ini = true;
> +		els->cmd_tgt = false;
> +		break;
> +	case EFCT_ELS_ROLE_RESPONDER:
> +		els->cmd_ini = false;
> +		els->cmd_tgt = true;
> +		break;
> +	}
> +
> +	/* IO should not have an associated HW IO yet.
> +	 * Assigned below.
> +	 */
> +	if (els->hio) {
> +		efc_log_err(efct, "Error: HW io not null hio:%p\n", els->hio);
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +		spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +		return NULL;
> +	}
> +
> +	/* populate generic io fields */
> +	els->efct = efct;
> +	els->node = node;
> +
> +	/* set type and ELS-specific fields */
> +	els->io_type = EFCT_IO_TYPE_ELS;
> +	els->display_name = "pending";
> +
> +	/* now allocate DMA for request and response */
> +	els->els_req.size = reqlen;
> +	els->els_req.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					       els->els_req.size,
> +					       &els->els_req.phys,
> +					       GFP_DMA);

GFP_KERNEL

> +	if (els->els_req.virt) {
> +		els->els_rsp.size = rsplen;
> +		els->els_rsp.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +						       els->els_rsp.size,
> +						       &els->els_rsp.phys,
> +						       GFP_DMA);

GFP_KERNEL

> +		if (!els->els_rsp.virt) {
> +			efc_log_err(efct, "dma_alloc rsp\n");
> +			dma_free_coherent(&efct->pcidev->dev,
> +					  els->els_req.size,
> +				els->els_req.virt, els->els_req.phys);
> +			memset(&els->els_req, 0, sizeof(struct efc_dma));
> +			efct_io_pool_io_free(efct->xport->io_pool, els);
> +			els = NULL;
> +		}
> +	} else {
> +		efc_log_err(efct, "dma_alloc req\n");
> +		efct_io_pool_io_free(efct->xport->io_pool, els);
> +		els = NULL;
> +	}
> +
> +	if (els) {
> +		/* initialize fields */
> +		els->els_retries_remaining =
> +					EFCT_FC_ELS_DEFAULT_RETRIES;
> +		els->els_pend = false;
> +		els->els_active = false;
> +
> +		/* add els structure to ELS IO list */
> +		INIT_LIST_HEAD(&els->list_entry);
> +		list_add_tail(&els->list_entry,
> +			      &node->els_io_pend_list);
> +		els->els_pend = true;
> +	}
> +
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return els;
> +}
> +
> +void
> +efct_els_io_free(struct efct_io *els)
> +{
> +	kref_put(&els->ref, els->release);
> +}
> +
> +void
> +_efct_els_io_free(struct kref *arg)
> +{
> +	struct efct_io *els = container_of(arg, struct efct_io, ref);
> +	struct efct *efct;
> +	struct efc_node *node;
> +	int send_empty_event = false;
> +	unsigned long flags = 0;
> +
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		if (els->els_active) {
> +			/* if active, remove from active list and check empty */
> +			list_del(&els->list_entry);
> +			/* Send list empty event if the IO allocator
> +			 * is disabled, and the list is empty
> +			 * If node->io_alloc_enabled was not checked,
> +			 * the event would be posted continually
> +			 */
> +			send_empty_event = (!node->io_alloc_enabled) &&
> +				list_empty(&node->els_io_active_list);
> +			els->els_active = false;
> +		} else if (els->els_pend) {
> +			/* if pending, remove from pending list;
> +			 * node shutdown isn't gated off the
> +			 * pending list (only the active list),
> +			 * so no need to check if pending list is empty
> +			 */
> +			list_del(&els->list_entry);
> +			els->els_pend = 0;
> +		} else {
> +			efc_log_err(efct,
> +				"Error: els not in pending or active set\n");
> +			spin_unlock_irqrestore(&node->active_ios_lock, flags);

unlock first then log
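i.e.:

		spin_unlock_irqrestore(&node->active_ios_lock, flags);
		efc_log_err(efct, "Error: els not in pending or active set\n");
		return;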

> +			return;
> +		}

no need to indent

> +
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +
> +	/* free ELS request and response buffers */
> +	dma_free_coherent(&efct->pcidev->dev, els->els_rsp.size,
> +			  els->els_rsp.virt, els->els_rsp.phys);
> +	dma_free_coherent(&efct->pcidev->dev, els->els_req.size,
> +			  els->els_req.virt, els->els_req.phys);
> +
> +	memset(&els->els_rsp, 0, sizeof(struct efc_dma));
> +	memset(&els->els_req, 0, sizeof(struct efc_dma));
> +	efct_io_pool_io_free(efct->xport->io_pool, els);
> +
> +	if (send_empty_event)
> +		efc_scsi_io_list_empty(node->efc, node);
> +
> +	efct_scsi_check_pending(efct);
> +}
> +
> +static void
> +efct_els_make_active(struct efct_io *els)
> +{
> +	struct efc_node *node = els->node;
> +	unsigned long flags = 0;
> +
> +	/* move ELS from pending list to active list */
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		if (els->els_pend) {
> +			if (els->els_active) {
> +				efc_log_err(node->efc,
> +					"Error: els in both pend and active\n");
> +				spin_unlock_irqrestore(&node->active_ios_lock,
> +						       flags);

unlock then log
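i.e.:

			spin_unlock_irqrestore(&node->active_ios_lock, flags);
			efc_log_err(node->efc,
				    "Error: els in both pend and active\n");
			return;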

> +				return;
> +			}
> +			/* remove from pending list */
> +			list_del(&els->list_entry);
> +			els->els_pend = false;
> +
> +			/* add els structure to ELS IO list */
> +			INIT_LIST_HEAD(&els->list_entry);
> +			list_add_tail(&els->list_entry,
> +				      &node->els_io_active_list);
> +			els->els_active = true;
> +		} else {
> +			/* must be retrying; make sure it's already active */
> +			if (!els->els_active) {

		} else if (!els->els_active) {

> +				efc_log_err(node->efc,
> +					"Error: els not in pending or active set\n");
> +			}
> +		}

no need to indent

> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +static int efct_els_send(struct efct_io *els, u32 reqlen,
> +			 u32 timeout_sec, efct_hw_srrs_cb_t cb)
> +{
> +	struct efc_node *node = els->node;
> +
> +	/* update ELS request counter */
> +	node->els_req_cnt++;
> +
> +	/* move ELS from pending list to active list */
> +	efct_els_make_active(els);
> +
> +	els->wire_len = reqlen;
> +	return efct_scsi_io_dispatch(els, cb);
> +}
> +
> +static void
> +efct_els_retry(struct efct_io *els);
> +
> +static void
> +efct_els_delay_timer_cb(struct timer_list *t)
> +{
> +	struct efct_io *els = from_timer(els, t, delay_timer);
> +	struct efc_node *node = els->node;
> +
> +	/* Retry delay timer expired, retry the ELS request,
> +	 * Free the HW IO so that a new oxid is used.
> +	 */
> +	if (els->state == EFCT_ELS_REQUEST_DELAY_ABORT) {
> +		node->els_req_cnt++;
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    NULL);

fits on one line
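i.e.:

		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, NULL);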

> +	} else {
> +		efct_els_retry(els);
> +	}
> +}
> +
> +static void
> +efct_els_abort_cleanup(struct efct_io *els)
> +{
> +	/* handle event for ABORT_WQE
> +	 * whatever state ELS happened to be in, propagate aborted even
> +	 * up to node state machine in lieu of EFC_HW_SRRS_ELS_* event
> +	 */
> +	struct efc_node_cb cbdata;
> +
> +	cbdata.status = 0;
> +	cbdata.ext_status = 0;
> +	cbdata.els_rsp = els->els_rsp;
> +	els_io_printf(els, "Request aborted\n");
> +	efct_els_io_cleanup(els, EFC_HW_ELS_REQ_ABORTED, &cbdata);
> +}
> +
> +static int
> +efct_els_req_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		u32 length, int status, u32 ext_status, void *arg)
> +{
> +	struct efct_io *els;
> +	struct efc_node *node;
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +	u32 reason_code;
> +
> +	els = arg;
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	if (status != 0)
> +		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
> +
> +	/* set the response len element of els->rsp */
> +	els->els_rsp.len = length;
> +
> +	cbdata.status = status;
> +	cbdata.ext_status = ext_status;
> +	cbdata.header = NULL;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	/* FW returns the number of bytes received on the link in
> +	 * the WCQE, not the amount placed in the buffer; use this info to
> +	 * check if there was an overrun.
> +	 */
> +	if (length > els->els_rsp.size) {
> +		efc_log_warn(efct,
> +			      "ELS response returned len=%d > buflen=%zu\n",
> +			     length, els->els_rsp.size);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
> +		return EFC_SUCCESS;
> +	}
> +
> +	/* Post event to ELS IO object */
> +	switch (status) {
> +	case SLI4_FC_WCQE_STATUS_SUCCESS:
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_OK, &cbdata);
> +		break;
> +
> +	case SLI4_FC_WCQE_STATUS_LS_RJT:
> +		reason_code = (ext_status >> 16) & 0xff;
> +
> +		/* delay and retry if reason code is Logical Busy */
> +		switch (reason_code) {
> +		case ELS_RJT_BUSY:
> +			els->node->els_req_cnt--;
> +			els_io_printf(els,
> +				      "LS_RJT Logical Busy response,delay and retry\n");
> +			timer_setup(&els->delay_timer,
> +				    efct_els_delay_timer_cb, 0);
> +			mod_timer(&els->delay_timer,
> +				  jiffies + msecs_to_jiffies(5000));
> +			els->state = EFCT_ELS_REQUEST_DELAYED;
> +			break;
> +		default:
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_RJT,
> +					    &cbdata);
> +			break;
> +		}
> +		break;
> +
> +	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
> +		switch (ext_status) {
> +		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
> +			efct_els_retry(els);
> +			break;
> +
> +		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
> +			if (els->state == EFCT_ELS_ABORT_IO_COMPL) {
> +				/* completion for ELS that was aborted */
> +				efct_els_abort_cleanup(els);
> +			} else {
> +				/* completion for ELS received first,
> +				 * transition to wait for abort cmpl
> +				 */
> +				els->state = EFCT_ELS_REQ_ABORTED;
> +			}
> +
> +			break;
> +		default:
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    &cbdata);
> +			break;
> +		}
> +		break;
> +	default:	/* Other error */
> +		efc_log_warn(efct,
> +			      "els req failed status x%x, ext_status, x%x\n",
> +					status, ext_status);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL, &cbdata);
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void efct_els_send_req(struct efc_node *node, struct efct_io *els)
> +{
> +	int rc = 0;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +	rc = efct_els_send(els, els->els_req.size,
> +			   els->els_timeout_sec, efct_els_req_cb);
> +
> +	if (rc) {

	if (!rc)
		return;

> +		struct efc_node_cb cbdata;
> +
> +		cbdata.status = INT_MAX;
> +		cbdata.ext_status = INT_MAX;
> +		cbdata.els_rsp = els->els_rsp;
> +		efc_log_err(efct, "efct_els_send failed: %d\n", rc);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +				    &cbdata);
> +	}
> +}
> +
> +static void
> +efct_els_retry(struct efct_io *els)
> +{
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +
> +	efct = els->node->efc->base;
> +	cbdata.status = INT_MAX;
> +	cbdata.ext_status = INT_MAX;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	if (!els->els_retries_remaining) {
> +		efc_log_err(efct, "ELS retries exhausted\n");
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +				    &cbdata);
> +		return;
> +	}
> +
> +	els->els_retries_remaining--;
> +	 /* Free the HW IO so that a new oxid is used.*/
> +	if (els->hio) {
> +		efct_hw_io_free(&efct->hw, els->hio);
> +		els->hio = NULL;
> +	}
> +
> +	efct_els_send_req(els->node, els);
> +}
> +
> +static int
> +efct_els_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		u32 length, int status, u32 ext_status, void *arg)
> +{
> +	struct efct_io *els;
> +	struct efc_node *node;
> +	struct efct *efct;
> +	struct efc_node_cb cbdata;
> +
> +	els = arg;
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	cbdata.status = status;
> +	cbdata.ext_status = ext_status;
> +	cbdata.header = NULL;
> +	cbdata.els_rsp = els->els_rsp;
> +
> +	/* Post node event */
> +	switch (status) {
> +	case SLI4_FC_WCQE_STATUS_SUCCESS:
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_OK, &cbdata);
> +		break;
> +
> +	default:	/* Other error */
> +		efc_log_warn(efct,
> +			      "[%s] %-8s failed status x%x, ext_status x%x\n",
> +			    node->display_name, els->display_name,
> +			    status, ext_status);
> +		efc_log_warn(efct,
> +			      "els acc complete: failed status x%x, ext_status, x%x\n",
> +		     status, ext_status);
> +		efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_CMPL_FAIL, &cbdata);
> +		break;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_els_send_rsp(struct efct_io *els, u32 rsplen)
> +{
> +	struct efc_node *node = els->node;
> +
> +	/* increment ELS completion counter */
> +	node->els_cmpl_cnt++;
> +
> +	/* move ELS from pending list to active list */
> +	efct_els_make_active(els);
> +
> +	els->wire_len = rsplen;
> +	return efct_scsi_io_dispatch(els, efct_els_acc_cb);
> +}
> +
> +struct efct_io *
> +efct_send_plogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries,
> +	      void (*cb)(struct efc_node *node,
> +			 struct efc_node_cb *cbdata, void *arg), void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *plogi;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "plogi";
> +
> +	/* Build PLOGI request */
> +	plogi = els->els_req.virt;
> +
> +	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
> +
> +	plogi->fl_cmd = ELS_PLOGI;
> +	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_flogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_flogi  *flogi;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "flogi";
> +
> +	/* Build FLOGI request */
> +	flogi = els->els_req.virt;
> +
> +	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +	flogi->fl_cmd = ELS_FLOGI;
> +	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_flogi *fdisc;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*fdisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "fdisc";
> +
> +	/* Build FDISC request */
> +	fdisc = els->els_req.virt;
> +
> +	memcpy(fdisc, node->sport->service_params, sizeof(*fdisc));
> +	fdisc->fl_cmd = ELS_FDISC;
> +	memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_prli(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *els;
> +	struct {
> +		struct fc_els_prli prli;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "prli";
> +
> +	/* Build PRLI request */
> +	pp = els->els_req.virt;
> +
> +	memset(pp, 0, sizeof(*pp));
> +
> +	pp->prli.prli_cmd = ELS_PRLI;
> +	pp->prli.prli_spp_len = 16;
> +	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
> +	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
> +			       (node->sport->enable_ini ?
> +			       FCP_SPPF_INIT_FCN : 0) |
> +			       (node->sport->enable_tgt ?
> +			       FCP_SPPF_TARG_FCN : 0));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_prlo(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *els;
> +	struct {
> +		struct fc_els_prlo prlo;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "prlo";
> +
> +	/* Build PRLO request */
> +	pp = els->els_req.virt;
> +
> +	memset(pp, 0, sizeof(*pp));
> +	pp->prlo.prlo_cmd = ELS_PRLO;
> +	pp->prlo.prlo_obs = 0x10;
> +	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
> +
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_logo(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_logo *logo;
> +	struct fc_els_flogi  *sparams;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	sparams = (struct fc_els_flogi *)node->sport->service_params;
> +
> +	els = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "logo";
> +
> +	/* Build LOGO request */
> +
> +	logo = els->els_req.virt;
> +
> +	memset(logo, 0, sizeof(*logo));
> +	logo->fl_cmd = ELS_LOGO;
> +	hton24(logo->fl_n_port_id, node->rnode.sport->fc_id);
> +	logo->fl_n_port_wwn = sparams->fl_wwpn;
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_adisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct;
> +	struct fc_els_adisc *adisc;
> +	struct fc_els_flogi  *sparams;
> +	struct efc_sli_port *sport = node->sport;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	sparams = (struct fc_els_flogi *)node->sport->service_params;
> +
> +	els = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "adisc";
> +
> +	/* Build ADISC request */
> +
> +	adisc = els->els_req.virt;
> +
> +	memset(adisc, 0, sizeof(*adisc));
> +	adisc->adisc_cmd = ELS_ADISC;
> +	hton24(adisc->adisc_hard_addr, sport->fc_id);
> +	adisc->adisc_wwpn = sparams->fl_wwpn;
> +	adisc->adisc_wwnn = sparams->fl_wwnn;
> +	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *pdisc;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*pdisc), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "pdisc";
> +
> +	pdisc = els->els_req.virt;
> +
> +	memcpy(pdisc, node->sport->service_params, sizeof(*pdisc));
> +
> +	pdisc->fl_cmd = ELS_PDISC;
> +	memset(pdisc->_fl_resvd, 0, sizeof(pdisc->_fl_resvd));
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_scr(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	      els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_scr *req;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*req), EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "scr";
> +
> +	req = els->els_req.virt;
> +
> +	memset(req, 0, sizeof(*req));
> +	req->scr_cmd = ELS_SCR;
> +	req->scr_reg_func = ELS_SCRF_FULL;
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_send_rscn(struct efc_node *node, u32 timeout_sec, u32 retries,
> +	       void *port_ids, u32 port_ids_count, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_rscn *req;
> +	struct fc_els_rscn_page *rscn_page;
> +	u32 length = sizeof(*rscn_page) * port_ids_count;
> +
> +	length += sizeof(*req);
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, length, EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->els_timeout_sec = timeout_sec;
> +	els->els_retries_remaining = retries;
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "rscn";
> +
> +	req = els->els_req.virt;
> +
> +	req->rscn_cmd = ELS_RSCN;
> +	req->rscn_page_len = sizeof(struct fc_els_rscn_page);
> +	req->rscn_plen = cpu_to_be16(length);
> +
> +	els->hio_type = EFCT_HW_ELS_REQ;
> +	els->iparam.els.timeout = timeout_sec;
> +
> +	/* copy in the payload */
> +	rscn_page = els->els_req.virt + sizeof(*req);
> +	memcpy(rscn_page, port_ids,
> +	       port_ids_count * sizeof(*rscn_page));
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +void *
> +efct_send_ls_rjt(struct efc *efc, struct efc_node *node,
> +		 u32 ox_id, u32 reason_code,
> +		u32 reason_code_expl, u32 vendor_unique)
> +{
> +	struct efct_io *io = NULL;
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_ls_rjt *rjt;
> +
> +	io = efct_els_io_alloc(node, sizeof(*rjt), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	node_els_trace();
> +
> +	io->els_callback = NULL;
> +	io->els_callback_arg = NULL;
> +	io->display_name = "ls_rjt";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	rjt = io->els_req.virt;
> +	memset(rjt, 0, sizeof(*rjt));
> +
> +	rjt->er_cmd = ELS_LS_RJT;
> +	rjt->er_reason = reason_code;
> +	rjt->er_explan = reason_code_expl;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*rjt));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
> +		    els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct fc_els_flogi  *plogi;
> +	struct fc_els_flogi  *req = (struct fc_els_flogi *)node->service_params;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*plogi), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "plogi_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	plogi = io->els_req.virt;
> +
> +	/* copy our port's service parameters to payload */
> +	memcpy(plogi, node->sport->service_params, sizeof(*plogi));
> +	plogi->fl_cmd = ELS_LS_ACC;
> +	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
> +
> +	/* Set Application header support bit if requested */
> +	if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST))
> +		plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST);
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*plogi));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +	return io;
> +}
> +
> +void *
> +efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
> +			u32 ox_id, u32 s_id)
> +{
> +	struct efct_io *io = NULL;
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_flogi  *flogi;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = NULL;
> +	io->els_callback_arg = NULL;
> +	io->display_name = "flogi_p2p_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +	io->iparam.els.s_id = s_id;
> +
> +	flogi = io->els_req.virt;
> +
> +	/* copy our port's service parameters to payload */
> +	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +	flogi->fl_cmd = ELS_LS_ACC;
> +	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
> +
> +	io->hio_type = EFCT_HW_ELS_RSP_SID;
> +	rc = efct_els_send_rsp(io, sizeof(*flogi));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_flogi_acc(struct efc_node *node, u32 ox_id, u32 is_fport,
> +		    els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct fc_els_flogi  *flogi;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*flogi), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "flogi_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +	io->iparam.els.s_id = io->node->sport->fc_id;
> +
> +	flogi = io->els_req.virt;
> +
> +	/* copy our port's service parameters to payload */
> +	memcpy(flogi, node->sport->service_params, sizeof(*flogi));
> +
> +	/* Set F_port */
> +	if (is_fport) {
> +		/* Set F_PORT and Multiple N_PORT_ID Assignment */
> +		flogi->fl_csp.sp_r_a_tov |=  cpu_to_be32(3U << 28);
> +	}
> +
> +	flogi->fl_cmd = ELS_LS_ACC;
> +	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
> +
> +	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
> +
> +	io->hio_type = EFCT_HW_ELS_RSP_SID;
> +	rc = efct_els_send_rsp(io, sizeof(*flogi));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *efct_send_prli_acc(struct efc_node *node,
> +				     u32 ox_id, els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct {
> +		struct fc_els_prli prli;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "prli_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	pp = io->els_req.virt;
> +	memset(pp, 0, sizeof(*pp));
> +
> +	pp->prli.prli_cmd = ELS_LS_ACC;
> +	pp->prli.prli_spp_len = 0x10;
> +	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK;
> +
> +	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
> +					(node->sport->enable_ini ?
> +					 FCP_SPPF_INIT_FCN : 0) |
> +					(node->sport->enable_tgt ?
> +					 FCP_SPPF_TARG_FCN : 0));
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*pp));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct {
> +		struct fc_els_prlo prlo;
> +		struct fc_els_spp spp;
> +	} *pp;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*pp), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "prlo_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	pp = io->els_req.virt;
> +	memset(pp, 0, sizeof(*pp));
> +	pp->prlo.prlo_cmd = ELS_LS_ACC;
> +	pp->prlo.prlo_obs = 0x10;
> +	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
> +
> +	pp->spp.spp_type = FC_TYPE_FCP;
> +	pp->spp.spp_type_ext = 0;
> +	pp->spp.spp_flags = FC_SPP_RESP_ACK;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*pp));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_ls_acc(struct efc_node *node, u32 ox_id, els_cb_t cb,
> +		 void *cbarg)
> +{
> +	int rc;
> +	struct efct *efct = node->efc->base;
> +	struct efct_io *io = NULL;
> +	struct fc_els_ls_acc *acc;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*acc), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "ls_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	acc = io->els_req.virt;
> +	memset(acc, 0, sizeof(*acc));
> +
> +	acc->la_cmd = ELS_LS_ACC;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*acc));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_logo_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct_io *io = NULL;
> +	struct efct *efct = node->efc->base;
> +	struct fc_els_ls_acc *logo;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*logo), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "logo_acc";
> +	io->init_task_tag = ox_id;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	logo = io->els_req.virt;
> +	memset(logo, 0, sizeof(*logo));
> +
> +	logo->la_cmd = ELS_LS_ACC;
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*logo));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
> +		    els_cb_t cb, void *cbarg)
> +{
> +	int rc;
> +	struct efct_io *io = NULL;
> +	struct fc_els_adisc *adisc;
> +	struct fc_els_flogi  *sparams;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +
> +	node_els_trace();
> +
> +	io = efct_els_io_alloc(node, sizeof(*adisc), EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efct, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	io->els_callback = cb;
> +	io->els_callback_arg = cbarg;
> +	io->display_name = "adisc_acc";
> +	io->init_task_tag = ox_id;
> +
> +	/* Go ahead and send the ELS_ACC */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.els.ox_id = ox_id;
> +
> +	sparams = (struct fc_els_flogi  *)node->sport->service_params;
> +	adisc = io->els_req.virt;
> +	memset(adisc, 0, sizeof(*adisc));
> +	adisc->adisc_cmd = ELS_LS_ACC;
> +	adisc->adisc_wwpn = sparams->fl_wwpn;
> +	adisc->adisc_wwnn = sparams->fl_wwnn;
> +	hton24(adisc->adisc_port_id, node->rnode.sport->fc_id);
> +
> +	io->hio_type = EFCT_HW_ELS_RSP;
> +	rc = efct_els_send_rsp(io, sizeof(*adisc));
> +	if (rc) {
> +		efct_els_io_free(io);
> +		io = NULL;
> +	}
> +
> +	return io;
> +}
> +
> +void *
> +efct_els_send_ct(struct efc *efc, struct efc_node *node, u32 cmd,
> +		 u32 timeout_sec, u32 retries)
> +{
> +	struct efct *efct = efc->base;
> +
> +	switch (cmd) {
> +	case FC_RCTL_ELS_REQ:
> +		efct_ns_send_rftid(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case FC_NS_RFF_ID:
> +		efct_ns_send_rffid(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	case FC_NS_GID_PT:
> +		efct_ns_send_gidpt(node, timeout_sec, retries, NULL, NULL);
> +		break;
> +	default:
> +		efc_log_err(efct, "Unhandled command cmd: %x\n", cmd);
> +	}
> +
> +	return NULL;
> +}
> +
> +static inline void fcct_build_req_header(struct fc_ct_hdr  *hdr,
> +					 u16 cmd, u16 max_size)
> +{
> +	hdr->ct_rev = FC_CT_REV;
> +	hdr->ct_fs_type = FC_FST_DIR;
> +	hdr->ct_fs_subtype = FC_NS_SUBTYPE;
> +	hdr->ct_options = 0;
> +	hdr->ct_cmd = cpu_to_be16(cmd);
> +	/* words */
> +	hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32)));
> +	hdr->ct_reason = 0;
> +	hdr->ct_explan = 0;
> +	hdr->ct_vendor = 0;
> +}
> +
> +struct efct_io *
> +efct_ns_send_rftid(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_ct_hdr *ct;
> +	struct fc_ns_rft_id *rftid;
> +
> +	node_els_trace();
> +
> +	els = efct_els_io_alloc(node, sizeof(*ct) + sizeof(*rftid),
> +				EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +
> +	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
> +	els->iparam.fc_ct.type = FC_TYPE_CT;
> +	els->iparam.fc_ct.df_ctl = 0;
> +	els->iparam.fc_ct.timeout = timeout_sec;
> +
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "rftid";
> +
> +	ct = els->els_req.virt;
> +	memset(ct, 0, sizeof(*ct));
> +	fcct_build_req_header(ct, FC_NS_RFT_ID, sizeof(*rftid));
> +
> +	rftid = els->els_req.virt + sizeof(*ct);
> +	memset(rftid, 0, sizeof(*rftid));
> +	hton24(rftid->fr_fid.fp_fid, node->rnode.sport->fc_id);
> +	rftid->fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] =
> +		cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW));
> +
> +	els->hio_type = EFCT_HW_FC_CT;
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_ns_send_rffid(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els;
> +	struct efct *efct = node->efc->base;
> +	struct fc_ct_hdr *ct;
> +	struct fc_ns_rff_id *rffid;
> +	u32 size = 0;
> +
> +	node_els_trace();
> +
> +	size = sizeof(*ct) + sizeof(*rffid);
> +
> +	els = efct_els_io_alloc(node, size, EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
> +	els->iparam.fc_ct.type = FC_TYPE_CT;
> +	els->iparam.fc_ct.df_ctl = 0;
> +	els->iparam.fc_ct.timeout = timeout_sec;
> +
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "rffid";
> +	ct = els->els_req.virt;
> +
> +	memset(ct, 0, sizeof(*ct));
> +	fcct_build_req_header(ct, FC_NS_RFF_ID, sizeof(*rffid));
> +
> +	rffid = els->els_req.virt + sizeof(*ct);
> +	memset(rffid, 0, sizeof(*rffid));
> +
> +	hton24(rffid->fr_fid.fp_fid, node->rnode.sport->fc_id);
> +	if (node->sport->enable_ini)
> +		rffid->fr_feat |= FCP_FEAT_INIT;
> +	if (node->sport->enable_tgt)
> +		rffid->fr_feat |= FCP_FEAT_TARG;
> +	rffid->fr_type = FC_TYPE_FCP;
> +
> +	els->hio_type = EFCT_HW_FC_CT;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +struct efct_io *
> +efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg)
> +{
> +	struct efct_io *els = NULL;
> +	struct efct *efct = node->efc->base;
> +	struct fc_ct_hdr *ct;
> +	struct fc_ns_gid_pt *gidpt;
> +	u32 size = 0;
> +
> +	node_els_trace();
> +
> +	size = sizeof(*ct) + sizeof(*gidpt);
> +	els = efct_els_io_alloc_size(node, size,
> +				     EFCT_ELS_GID_PT_RSP_LEN,
> +				   EFCT_ELS_ROLE_ORIGINATOR);
> +	if (!els) {
> +		efc_log_err(efct, "IO alloc failed\n");
> +		return els;
> +	}
> +
> +	els->iparam.fc_ct.r_ctl = FC_RCTL_ELS_REQ;
> +	els->iparam.fc_ct.type = FC_TYPE_CT;
> +	els->iparam.fc_ct.df_ctl = 0;
> +	els->iparam.fc_ct.timeout = timeout_sec;
> +
> +	els->els_callback = cb;
> +	els->els_callback_arg = cbarg;
> +	els->display_name = "gidpt";
> +
> +	ct = els->els_req.virt;
> +
> +	memset(ct, 0, sizeof(*ct));
> +	fcct_build_req_header(ct, FC_NS_GID_PT, sizeof(*gidpt));
> +
> +	gidpt = els->els_req.virt + sizeof(*ct);
> +	memset(gidpt, 0, sizeof(*gidpt));
> +	gidpt->fn_pt_type = FC_TYPE_FCP;
> +
> +	els->hio_type = EFCT_HW_FC_CT;
> +
> +	efct_els_send_req(node, els);
> +
> +	return els;
> +}
> +
> +static int efct_bls_send_rjt_cb(struct efct_hw_io *hio,
> +				struct efc_remote_node *rnode, u32 length,
> +		int status, u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +
> +	efct_scsi_io_free(io);
> +	return EFC_SUCCESS;
> +}
> +
> +static struct efct_io *
> +efct_bls_send_rjt(struct efct_io *io, u32 s_id,
> +		  u16 ox_id, u16 rx_id)
> +{
> +	struct efc_node *node = io->node;
> +	int rc;
> +	struct fc_ba_rjt *acc;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +
> +	if (node->rnode.sport->fc_id == s_id)
> +		s_id = U32_MAX;
> +
> +	/* fill out generic fields */
> +	io->efct = efct;
> +	io->node = node;
> +	io->cmd_tgt = true;
> +
> +	/* fill out BLS Response-specific fields */
> +	io->io_type = EFCT_IO_TYPE_BLS_RESP;
> +	io->display_name = "ba_rjt";
> +	io->hio_type = EFCT_HW_BLS_RJT;
> +	io->init_task_tag = ox_id;
> +
> +	/* fill out iparam fields */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.bls.ox_id = ox_id;
> +	io->iparam.bls.rx_id = rx_id;
> +
> +	acc = (void *)io->iparam.bls.payload;
> +
> +	memset(io->iparam.bls.payload, 0,
> +	       sizeof(io->iparam.bls.payload));
> +	acc->br_reason = ELS_RJT_UNAB;
> +	acc->br_explan = ELS_EXPL_NONE;
> +
> +	rc = efct_scsi_io_dispatch(io, efct_bls_send_rjt_cb);
> +	if (rc) {
> +		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
> +		efct_scsi_io_free(io);
> +		io = NULL;
> +	}
> +	return io;
> +}
> +
> +struct efct_io *
> +efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr)
> +{
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
> +	u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +	return efct_bls_send_rjt(io, d_id, ox_id, rx_id);
> +}
> +
> +static int efct_bls_send_acc_cb(struct efct_hw_io *hio,
> +				struct efc_remote_node *rnode, u32 length,
> +		int status, u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +
> +	efct_scsi_io_free(io);
> +	return EFC_SUCCESS;
> +}
> +
> +static struct efct_io *
> +efct_bls_send_acc(struct efct_io *io, u32 s_id,
> +		  u16 ox_id, u16 rx_id)
> +{
> +	struct efc_node *node = io->node;
> +	int rc;
> +	struct fc_ba_acc *acc;
> +	struct efct *efct;
> +
> +	efct = node->efc->base;
> +
> +	if (node->rnode.sport->fc_id == s_id)
> +		s_id = U32_MAX;
> +
> +	/* fill out generic fields */
> +	io->efct = efct;
> +	io->node = node;
> +	io->cmd_tgt = true;
> +
> +	/* fill out BLS Response-specific fields */
> +	io->io_type = EFCT_IO_TYPE_BLS_RESP;
> +	io->display_name = "ba_acc";
> +	io->hio_type = EFCT_HW_BLS_ACC_SID;
> +	io->init_task_tag = ox_id;
> +
> +	/* fill out iparam fields */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.bls.s_id = s_id;
> +	io->iparam.bls.ox_id = ox_id;
> +	io->iparam.bls.rx_id = rx_id;
> +
> +	acc = (void *)io->iparam.bls.payload;
> +
> +	memset(io->iparam.bls.payload, 0,
> +	       sizeof(io->iparam.bls.payload));
> +	acc->ba_ox_id = cpu_to_be16(io->iparam.bls.ox_id);
> +	acc->ba_rx_id = cpu_to_be16(io->iparam.bls.rx_id);
> +	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
> +
> +	rc = efct_scsi_io_dispatch(io, efct_bls_send_acc_cb);
> +	if (rc) {
> +		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
> +		efct_scsi_io_free(io);
> +		io = NULL;
> +	}
> +	return io;
> +}
> +
> +void *
> +efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
> +		      struct fc_frame_header *hdr)
> +{
> +	struct efct_io *io = NULL;
> +	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
> +	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
> +	u32 d_id = ntoh24(hdr->fh_d_id);
> +
> +	io = efct_scsi_io_alloc(node, EFCT_SCSI_IO_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efc, "els IO alloc failed\n");
> +		return io;
> +	}
> +
> +	return efct_bls_send_acc(io, d_id, ox_id, rx_id);
> +}
> +
> +static int
> +efct_els_abort_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		  u32 length, int status, u32 ext_status,
> +		 void *app)
> +{
> +	struct efct_io *els;
> +	struct efct_io *abort_io = NULL; /* IO structure used to abort ELS */
> +	struct efct *efct;
> +
> +	abort_io = app;
> +	els = abort_io->io_to_abort;
> +
> +	if (!els || !els->node || !els->node->efc)
> +		return EFC_FAIL;
> +
> +	efct = els->node->efc->base;
> +
> +	if (status != 0)
> +		efc_log_warn(efct, "status x%x ext x%x\n", status, ext_status);
> +
> +	/* now free the abort IO */
> +	efct_io_pool_io_free(efct->xport->io_pool, abort_io);
> +
> +	/* send completion event to indicate abort process is complete
> +	 * Note: The ELS SM will already be receiving
> +	 * ELS_REQ_OK/FAIL/RJT/ABORTED
> +	 */
> +	if (els->state == EFCT_ELS_REQ_ABORTED) {
> +		/* completion for ELS that was aborted */
> +		efct_els_abort_cleanup(els);
> +	} else {
> +		/* completion for abort was received first,
> +		 * transition to wait for req cmpl
> +		 */
> +		els->state = EFCT_ELS_ABORT_IO_COMPL;
> +	}
> +
> +	/* done with ELS IO to abort */
> +	kref_put(&els->ref, els->release);
> +	return EFC_SUCCESS;
> +}
> +
> +static struct efct_io *
> +efct_els_abort_io(struct efct_io *els, bool send_abts)
> +{
> +	struct efct *efct;
> +	struct efct_xport *xport;
> +	int rc;
> +	struct efct_io *abort_io = NULL;
> +
> +	efct = els->node->efc->base;
> +	xport = efct->xport;
> +
> +	/* take a reference on IO being aborted */
> +	if ((kref_get_unless_zero(&els->ref) == 0)) {
> +		/* command no longer active */
> +		efc_log_debug(efct, "els no longer active\n");
> +		return NULL;
> +	}
> +
> +	/* allocate IO structure to send abort */
> +	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
> +	if (!abort_io) {
> +		atomic_add_return(1, &xport->io_alloc_failed_count);
> +	} else {
> +		/* set generic fields */
> +		abort_io->efct = efct;
> +		abort_io->node = els->node;
> +		abort_io->cmd_ini = true;
> +
> +		/* set type and ABORT-specific fields */
> +		abort_io->io_type = EFCT_IO_TYPE_ABORT;
> +		abort_io->display_name = "abort_els";
> +		abort_io->io_to_abort = els;
> +		abort_io->send_abts = send_abts;
> +
> +		/* now dispatch IO */
> +		rc = efct_scsi_io_dispatch_abort(abort_io, efct_els_abort_cb);
> +		if (rc) {
> +			efc_log_err(efct,
> +				     "efct_scsi_io_dispatch failed: %d\n", rc);
> +			efct_io_pool_io_free(efct->xport->io_pool, abort_io);
> +			abort_io = NULL;
> +		}
> +	}
> +
> +	/* if something failed, put reference on ELS to abort */
> +	if (!abort_io)
> +		kref_put(&els->ref, els->release);
> +	return abort_io;
> +}
> +
> +void
> +efct_els_abort(struct efct_io *els, struct efc_node_cb *arg)
> +{
> +	struct efct_io *io = NULL;
> +	struct efc_node *node;
> +	struct efct *efct;
> +
> +	node = els->node;
> +	efct = node->efc->base;
> +
> +	/* request to abort this ELS without an ABTS */
> +	els_io_printf(els, "ELS abort requested\n");
> +	/* Set retries to zero,we are done */
> +	els->els_retries_remaining = 0;
> +	if (els->state == EFCT_ELS_REQUEST) {
> +		els->state = EFCT_ELS_REQ_ABORT;
> +		io = efct_els_abort_io(els, false);
> +		if (!io) {
> +			efc_log_err(efct, "efct_els_abort_io failed\n");
> +			efct_els_io_cleanup(els, EFC_HW_SRRS_ELS_REQ_FAIL,
> +					    arg);
> +		}
> +
> +	} else if (els->state == EFCT_ELS_REQUEST_DELAYED) {
> +		/* mod/resched the timer for a short duration */
> +		mod_timer(&els->delay_timer,
> +			  jiffies + msecs_to_jiffies(1));
> +
> +		els->state = EFCT_ELS_REQUEST_DELAY_ABORT;
> +	}
> +}
> +
> +void
> +efct_els_io_cleanup(struct efct_io *els,
> +		    enum efc_hw_node_els_event node_evt, void *arg)
> +{
> +	/* don't want further events that could come; e.g. abort requests
> +	 * from the node state machine; thus, disable state machine
> +	 */
> +	els->els_req_free = true;
> +	efc_node_post_els_resp(els->node, node_evt, arg);
> +
> +	/* If this IO has a callback, invoke it */
> +	if (els->els_callback) {
> +		(*els->els_callback)(els->node, arg,
> +				    els->els_callback_arg);
> +	}
> +	efct_els_io_free(els);
> +}
> +
> +int
> +efct_els_io_list_empty(struct efc_node *node, struct list_head *list)
> +{
> +	int empty;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		empty = list_empty(list);
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +	return empty;
> +}
> +
> +static int
> +efct_ct_acc_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +	       u32 length, int status, u32 ext_status,
> +	      void *arg)
> +{
> +	struct efct_io *io = arg;
> +
> +	efct_els_io_free(io);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
> +		 struct fc_ct_hdr  *ct_hdr, u32 cmd_rsp_code,
> +		u32 reason_code, u32 reason_code_explanation)
> +{
> +	struct efct_io *io = NULL;
> +	struct fc_ct_hdr  *rsp = NULL;
> +
> +	io = efct_els_io_alloc(node, 256, EFCT_ELS_ROLE_RESPONDER);
> +	if (!io) {
> +		efc_log_err(efc, "IO alloc failed\n");
> +		return EFC_FAIL;
> +	}
> +
> +	rsp = io->els_rsp.virt;
> +	io->io_type = EFCT_IO_TYPE_CT_RESP;
> +
> +	*rsp = *ct_hdr;
> +
> +	fcct_build_req_header(rsp, cmd_rsp_code, 0);
> +	rsp->ct_reason = reason_code;
> +	rsp->ct_explan = reason_code_explanation;
> +
> +	io->display_name = "ct_rsp";
> +	io->init_task_tag = ox_id;
> +	io->wire_len += sizeof(*rsp);
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +
> +	io->io_type = EFCT_IO_TYPE_CT_RESP;
> +	io->hio_type = EFCT_HW_FC_CT_RSP;
> +	io->iparam.fc_ct.ox_id = ox_id;
> +	io->iparam.fc_ct.r_ctl = 3;
> +	io->iparam.fc_ct.type = FC_TYPE_CT;
> +	io->iparam.fc_ct.df_ctl = 0;
> +	io->iparam.fc_ct.timeout = 5;
> +
> +	if (efct_scsi_io_dispatch(io, efct_ct_acc_cb) < 0) {
> +		efct_els_io_free(io);
> +		return EFC_FAIL;
> +	}
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_els.h b/drivers/scsi/elx/efct/efct_els.h
> new file mode 100644
> index 000000000000..9b79783a39a3
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_els.h
> @@ -0,0 +1,133 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_ELS_H__)
> +#define __EFCT_ELS_H__
> +
> +enum efct_els_role {
> +	EFCT_ELS_ROLE_ORIGINATOR,
> +	EFCT_ELS_ROLE_RESPONDER,
> +};
> +
> +void _efct_els_io_free(struct kref *arg);
> +extern struct efct_io *
> +efct_els_io_alloc(struct efc_node *node, u32 reqlen,
> +		  enum efct_els_role role);
> +extern struct efct_io *
> +efct_els_io_alloc_size(struct efc_node *node, u32 reqlen,
> +		       u32 rsplen,
> +				       enum efct_els_role role);
> +void efct_els_io_free(struct efct_io *els);
> +
> +extern void *
> +efct_els_req_send(struct efc *efc, struct efc_node *node,
> +		  u32 cmd, u32 timeout_sec, u32 retries);
> +extern void *
> +efct_els_send_ct(struct efc *efc, struct efc_node *node,
> +		 u32 cmd, u32 timeout_sec, u32 retries);
> +extern void *
> +efct_els_resp_send(struct efc *efc, struct efc_node *node,
> +		   u32 cmd, u16 ox_id);
> +void
> +efct_els_abort(struct efct_io *els, struct efc_node_cb *arg);
> +/* ELS command send */
> +typedef void (*els_cb_t)(struct efc_node *node,
> +			 struct efc_node_cb *cbdata, void *arg);
> +extern struct efct_io *
> +efct_send_plogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_flogi(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_fdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_prli(struct efc_node *node, u32 timeout_sec,
> +	       u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_prlo(struct efc_node *node, u32 timeout_sec,
> +	       u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_logo(struct efc_node *node, u32 timeout_sec,
> +	       u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_adisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_pdisc(struct efc_node *node, u32 timeout_sec,
> +		u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_scr(struct efc_node *node, u32 timeout_sec,
> +	      u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_ns_send_rftid(struct efc_node *node,
> +		   u32 timeout_sec,
> +		  u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_ns_send_rffid(struct efc_node *node,
> +		   u32 timeout_sec,
> +		  u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_ns_send_gidpt(struct efc_node *node, u32 timeout_sec,
> +		   u32 retries, els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_rscn(struct efc_node *node, u32 timeout_sec,
> +	       u32 retries, void *port_ids,
> +	      u32 port_ids_count, els_cb_t cb, void *cbarg);
> +extern void
> +efct_els_io_cleanup(struct efct_io *els, enum efc_hw_node_els_event,
> +		    void *arg);
> +
> +/* ELS acc send */
> +extern struct efct_io *
> +efct_send_ls_acc(struct efc_node *node, u32 ox_id,
> +		 els_cb_t cb, void *cbarg);
> +
> +extern void *
> +efct_send_ls_rjt(struct efc *efc, struct efc_node *node, u32 ox_id,
> +		 u32 reason_code, u32 reason_code_expl,
> +		u32 vendor_unique);
> +extern void *
> +efct_send_flogi_p2p_acc(struct efc *efc, struct efc_node *node,
> +			u32 ox_id, u32 s_id);
> +extern struct efct_io *
> +efct_send_flogi_acc(struct efc_node *node, u32 ox_id,
> +		    u32 is_fport, els_cb_t cb,
> +		   void *cbarg);
> +extern struct efct_io *
> +efct_send_plogi_acc(struct efc_node *node, u32 ox_id,
> +		    els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_prli_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_logo_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_prlo_acc(struct efc_node *node, u32 ox_id,
> +		   els_cb_t cb, void *cbarg);
> +extern struct efct_io *
> +efct_send_adisc_acc(struct efc_node *node, u32 ox_id,
> +		    els_cb_t cb, void *cbarg);
> +
> +extern void *
> +efct_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
> +		      struct fc_frame_header *hdr);
> +extern struct efct_io *
> +efct_bls_send_rjt_hdr(struct efct_io *io, struct fc_frame_header *hdr);
> +
> +extern int
> +efct_els_io_list_empty(struct efc_node *node, struct list_head *list);
> +
> +/* CT */
> +extern int
> +efct_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
> +		 struct fc_ct_hdr *ct_hdr,
> +		 u32 cmd_rsp_code, u32 reason_code,
> +		 u32 reason_code_explanation);
> +
> +#endif /* __EFCT_ELS_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 28/31] elx: efct: Firmware update, async link processing
  2020-04-12  3:33 ` [PATCH v3 28/31] elx: efct: Firmware update, async link processing James Smart
@ 2020-04-16 10:01   ` Hannes Reinecke
  2020-04-16 13:10   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16 10:01 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:33 AM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Handling of async link event.
> Registrations for VFI, VPI and RPI.
> Add Firmware update helper routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Reworked efct_hw_port_attach_reg_vpi() and efct_hw_port_attach_reg_vfi()
>    Return defined values
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 1509 +++++++++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |   58 ++
>   2 files changed, 1567 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index a007ca98895d..b3a1ec0f674b 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -42,6 +42,12 @@ struct efct_hw_host_stat_cb_arg {
>   	void *arg;
>   };
>   
> +struct efct_hw_fw_wr_cb_arg {
> +	void (*cb)(int status, u32 bytes_written,
> +		   u32 change_status, void *arg);
> +	void *arg;
> +};
> +
>   static enum efct_hw_rtn
>   efct_hw_link_event_init(struct efct_hw *hw)
>   {
> @@ -3836,3 +3842,1506 @@ efct_hw_get_num_eq(struct efct_hw *hw)
>   {
>   	return hw->eq_count;
>   }
> +
> +/* HW async call context structure */
> +struct efct_hw_async_call_ctx {
> +	efct_hw_async_cb_t callback;
> +	void *arg;
> +	u8 cmd[SLI4_BMBX_SIZE];
> +};
> +
> +static void
> +efct_hw_async_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
> +{
> +	struct efct_hw_async_call_ctx *ctx = arg;
> +
> +	if (ctx) {
> +		if (ctx->callback)
> +			(*ctx->callback)(hw, status, mqe, ctx->arg);
> +
> +		kfree(ctx);
> +	}
> +}
> +
> +/*
> + * Post a NOP mbox cmd; the callback with argument is invoked upon completion
> + * while in the event processing context.
> + */
> +int
> +efct_hw_async_call(struct efct_hw *hw,
> +		   efct_hw_async_cb_t callback, void *arg)
> +{
> +	int rc = 0;
> +	struct efct_hw_async_call_ctx *ctx;
> +
> +	/*
> +	 * Allocate a callback context (which includes the mbox cmd buffer),
> +	 * we need this to be persistent as the mbox cmd submission may be
> +	 * queued and executed later.
> +	 */
> +	ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
> +	if (!ctx)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(ctx, 0, sizeof(*ctx));
> +	ctx->callback = callback;
> +	ctx->arg = arg;
> +

Please use memory pools.
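
A minimal sketch of what that could look like, assuming a pool created at
HW setup time (the async_ctx_pool field name and the minimum element count
below are made up here):

	/* at hw init (hypothetical field): */
	hw->async_ctx_pool = mempool_create_kmalloc_pool(4,
				sizeof(struct efct_hw_async_call_ctx));
	if (!hw->async_ctx_pool)
		return EFCT_HW_RTN_NO_MEMORY;

	/* in efct_hw_async_call(): */
	ctx = mempool_alloc(hw->async_ctx_pool, GFP_ATOMIC);
	if (!ctx)
		return EFCT_HW_RTN_NO_MEMORY;
	memset(ctx, 0, sizeof(*ctx));

	/* in efct_hw_async_cb(), instead of kfree(ctx): */
	mempool_free(ctx, hw->async_ctx_pool);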

> +	/* Build and send a NOP mailbox command */
> +	if (sli_cmd_common_nop(&hw->sli, ctx->cmd, sizeof(ctx->cmd), 0)) {
> +		efc_log_err(hw->os, "COMMON_NOP format failure\n");
> +		kfree(ctx);
> +		rc = -1;
> +	}
> +
> +	if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
> +			    ctx)) {
> +		efc_log_err(hw->os, "COMMON_NOP command failure\n");
> +		kfree(ctx);
> +		rc = -1;
> +	}
> +	return rc;
> +}
> +
> +static void
> +efct_hw_port_free_resources(struct efc_sli_port *sport, int evt, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Clear the sport attached flag */
> +	sport->attached = false;
> +
> +	/* Free the service parameters buffer */
> +	if (sport->dma.virt) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  sport->dma.size, sport->dma.virt,
> +				  sport->dma.phys);
> +		memset(&sport->dma, 0, sizeof(struct efc_dma));
> +	}
> +
> +	/* Free the command buffer */
> +	kfree(data);
> +
> +	/* Free the SLI resources */
> +	sli_resource_free(&hw->sli, SLI_RSRC_VPI, sport->indicator);
> +
> +	efc_lport_cb(efct->efcport, evt, sport);
> +}
> +
> +static int
> +efct_hw_port_get_mbox_status(struct efc_sli_port *sport,
> +			     u8 *mqe, int status)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct sli4_mbox_command_header *hdr =
> +			(struct sli4_mbox_command_header *)mqe;
> +	int rc = 0;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status vpi=%#x st=%x hdr=%x\n",
> +			       sport->indicator, status,
> +			       le16_to_cpu(hdr->status));
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_port_free_unreg_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int evt = EFC_HW_PORT_FREE_OK;
> +	int rc = 0;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		evt = EFC_HW_PORT_FREE_FAIL;
> +		rc = -1;
> +	}
> +
> +	efct_hw_port_free_resources(sport, evt, mqe);
> +	return rc;
> +}
> +
> +static void
> +efct_hw_port_free_unreg_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	/* Allocate memory and send unreg_vpi */
> +	if (!data) {
> +		data = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!data) {
> +			efct_hw_port_free_resources(sport,
> +						    EFC_HW_PORT_FREE_FAIL,
> +						    data);
> +			return;
> +		}
> +		memset(data, 0, SLI4_BMBX_SIZE);
> +	}
> +
> +	rc = sli_cmd_unreg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			       sport->indicator, SLI4_UNREG_TYPE_PORT);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_FREE_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_free_unreg_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_FREE_FAIL, data);
> +	}
> +}
> +
> +static void
> +efct_hw_port_send_evt(struct efc_sli_port *sport, int evt, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the mbox buffer */
> +	kfree(data);
> +
> +	/* Now inform the registered callbacks */
> +	efc_lport_cb(efct->efcport, evt, sport);
> +
> +	/* Set the sport attached flag */
> +	if (evt == EFC_HW_PORT_ATTACH_OK)
> +		sport->attached = true;
> +
> +	/* If there is a pending free request, then handle it now */
> +	if (sport->free_req_pending)
> +		efct_hw_port_free_unreg_vpi(sport, NULL);
> +}
> +
> +static int
> +efct_hw_port_alloc_init_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_port_send_evt(sport, EFC_HW_PORT_ALLOC_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_port_alloc_init_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	/* If there is a pending free request, then handle it now */
> +	if (sport->free_req_pending) {
> +		efct_hw_port_free_resources(sport, EFC_HW_PORT_FREE_OK, data);
> +		return;
> +	}
> +
> +	rc = sli_cmd_init_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			      sport->indicator, sport->domain->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_alloc_init_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +	}
> +}
> +
> +static int
> +efct_hw_port_alloc_read_sparm64_cb(struct efct_hw *hw,
> +				   int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	u8 *payload = NULL;
> +	struct efct *efct = hw->os;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	payload = sport->dma.virt;
> +
> +	memcpy(&sport->sli_wwpn,
> +	       payload + SLI4_READ_SPARM64_WWPN_OFFSET,
> +		sizeof(sport->sli_wwpn));
> +	memcpy(&sport->sli_wwnn,
> +	       payload + SLI4_READ_SPARM64_WWNN_OFFSET,
> +		sizeof(sport->sli_wwnn));
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  sport->dma.size, sport->dma.virt, sport->dma.phys);
> +	memset(&sport->dma, 0, sizeof(struct efc_dma));
> +	efct_hw_port_alloc_init_vpi(sport, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_port_alloc_read_sparm64(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +	int rc;
> +
> +	/* Allocate memory for the service parameters */
> +	sport->dma.size = 112;
> +	sport->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					     sport->dma.size, &sport->dma.phys,
> +					     GFP_DMA);
> +	if (!sport->dma.virt) {
> +		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
> +				  &sport->dma, sport->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_alloc_read_sparm64_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +	}
> +}
> +
> +/*
> + * This function allocates a VPI object for the port and stores it in the
> + * indicator field of the port object.
> + */
> +enum efct_hw_rtn
> +efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
> +		   struct efc_domain *domain, u8 *wwpn)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*cmd = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32 index;
> +
> +	sport->indicator = U32_MAX;
> +	sport->hw = hw;
> +	sport->free_req_pending = false;
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (wwpn)
> +		memcpy(&sport->sli_wwpn, wwpn, sizeof(sport->sli_wwpn));
> +
> +	if (sli_resource_alloc(&hw->sli, SLI_RSRC_VPI,
> +			       &sport->indicator, &index)) {
> +		efc_log_err(hw->os, "VPI allocation failure\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (domain) {
> +		cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!cmd) {
> +			rc = EFCT_HW_RTN_NO_MEMORY;
> +			goto efct_hw_port_alloc_out;
> +		}
> +		memset(cmd, 0, SLI4_BMBX_SIZE);
> +
> +		/*
> +		 * If the WWPN is NULL, fetch the default
> +		 * WWPN and WWNN before initializing the VPI
> +		 */
> +		if (!wwpn)
> +			efct_hw_port_alloc_read_sparm64(sport, cmd);
> +		else
> +			efct_hw_port_alloc_init_vpi(sport, cmd);
> +	} else if (!wwpn) {
> +		/* This is the convention for the HW, not SLI */
> +		efc_log_test(hw->os, "need WWN for physical port\n");
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +	/* domain NULL and wwpn non-NULL */
> +	// no-op;
> +

Please avoid C++ style comments.

> +efct_hw_port_alloc_out:
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		kfree(cmd);
> +
> +		sli_resource_free(&hw->sli, SLI_RSRC_VPI,
> +				  sport->indicator);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_port_attach_reg_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_port_send_evt(sport, EFC_HW_PORT_ATTACH_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * This function registers a previously-allocated VPI with the
> + * device.
> + */
> +enum efct_hw_rtn
> +efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
> +		    u32 fc_id)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!sport) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p sport=%p\n", hw,
> +			sport);

Please fix up the indentation.

> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	sport->fc_id = fc_id;
> +

Pre-allocated buffer or memory pools, please.

> +	rc = sli_cmd_reg_vpi(&hw->sli, buf, SLI4_BMBX_SIZE, sport->fc_id,
> +			    sport->sli_wwpn, sport->indicator,
> +			    sport->domain->indicator, false);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, buf);
> +		return rc;
> +	}
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_attach_reg_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, buf);
> +	}
> +
> +	return rc;
> +}
> +
> +/* Issue the UNREG_VPI command to free the assigned VPI context */
> +enum efct_hw_rtn
> +efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!sport) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p sport=%p\n", hw,
> +			sport);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (sport->attached)
> +		efct_hw_port_free_unreg_vpi(sport, NULL);
> +	else
> +		sport->free_req_pending = true;
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_domain_get_mbox_status(struct efc_domain *domain,
> +			       u8 *mqe, int status)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct sli4_mbox_command_header *hdr =
> +			(struct sli4_mbox_command_header *)mqe;
> +	int rc = 0;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status vfi=%#x st=%x hdr=%x\n",
> +			       domain->indicator, status,
> +			       le16_to_cpu(hdr->status));
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static void
> +efct_hw_domain_free_resources(struct efc_domain *domain,
> +			      int evt, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the service parameters buffer */
> +	if (domain->dma.virt) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  domain->dma.size, domain->dma.virt,
> +				  domain->dma.phys);
> +		memset(&domain->dma, 0, sizeof(struct efc_dma));
> +	}
> +
> +	/* Free the command buffer */
> +	kfree(data);
> +
> +	/* Free the SLI resources */
> +	sli_resource_free(&hw->sli, SLI_RSRC_VFI, domain->indicator);
> +
> +	efc_domain_cb(efct->efcport, evt, domain);
> +}
> +
> +static void
> +efct_hw_domain_send_sport_evt(struct efc_domain *domain,
> +			      int port_evt, int domain_evt, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the mbox buffer */
> +	kfree(data);
> +
> +	/* Send alloc/attach ok to the physical sport */
> +	efct_hw_port_send_evt(domain->sport, port_evt, NULL);
> +
> +	/* Now inform the registered callbacks */
> +	efc_domain_cb(efct->efcport, domain_evt, domain);
> +}
> +
> +static int
> +efct_hw_domain_alloc_read_sparm64_cb(struct efct_hw *hw,
> +				     int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	hw->domain = domain;
> +	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ALLOC_OK,
> +				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_domain_alloc_read_sparm64(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
> +				  &domain->dma, 0);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_alloc_read_sparm64_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +	}
> +}
> +
> +static int
> +efct_hw_domain_alloc_init_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_domain_alloc_read_sparm64(domain, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_domain_alloc_init_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efc_sli_port *sport = domain->sport;
> +	int rc;
> +
> +	/*
> +	 * For FC, the HW already registered an FCFI.
> +	 * Copy FCF information into the domain and jump to INIT_VFI.
> +	 */
> +	domain->fcf_indicator = hw->fcf_indicator;
> +	rc = sli_cmd_init_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			      domain->indicator, domain->fcf_indicator,
> +			sport->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VFI format failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_alloc_init_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VFI command failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +	}
> +}
> +
> +/**
> + * This function starts a series of commands needed to connect to the domain,
> + * including
> + *   - REG_FCFI
> + *   - INIT_VFI
> + *   - READ_SPARMS
> + */
> +enum efct_hw_rtn
> +efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
> +		     u32 fcf)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	u8 *cmd = NULL;
> +	u32 index;
> +
> +	if (!domain || !domain->sport) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p domain=%p sport=%p\n",
> +			    hw, domain, domain ? domain->sport : NULL);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(efct,
> +			     "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!cmd)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(cmd, 0, SLI4_BMBX_SIZE);
> +

Same here.

> +	/* allocate memory for the service parameters */
> +	domain->dma.size = 112;
> +	domain->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					      domain->dma.size,
> +					      &domain->dma.phys, GFP_DMA);
> +	if (!domain->dma.virt) {
> +		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
> +		kfree(cmd);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	domain->hw = hw;
> +	domain->fcf = fcf;
> +	domain->fcf_indicator = U32_MAX;
> +	domain->indicator = U32_MAX;
> +
> +	if (sli_resource_alloc(&hw->sli,
> +			       SLI_RSRC_VFI, &domain->indicator,
> +				    &index)) {
> +		efc_log_err(hw->os, "VFI allocation failure\n");
> +
> +		kfree(cmd);
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  domain->dma.size, domain->dma.virt,
> +				  domain->dma.phys);
> +		memset(&domain->dma, 0, sizeof(struct efc_dma));
> +
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_domain_alloc_init_vfi(domain, cmd);
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static int
> +efct_hw_domain_attach_reg_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		hw->domain = NULL;
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ATTACH_OK,
> +				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_attach(struct efc *efc,
> +		      struct efc_domain *domain, u32 fc_id)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +

Unnecessary newline.

> +	u8	*buf = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!domain) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p domain=%p\n",
> +			hw, domain);

Indentation.

> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	domain->sport->fc_id = fc_id;
> +
> +	rc = sli_cmd_reg_vfi(&hw->sli, buf, SLI4_BMBX_SIZE, domain->indicator,
> +			    domain->fcf_indicator, domain->dma,
> +			    domain->sport->indicator, domain->sport->sli_wwpn,
> +			    domain->sport->fc_id);

Why not pass 'domain' as parameter and reduce the number of arguments?

> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VFI format failure\n");
> +		goto cleanup;
> +	}
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_attach_reg_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VFI command failure\n");
> +		goto cleanup;
> +	}
> +
> +	return rc;
> +
> +cleanup:
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_ATTACH_FAIL, buf);
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_domain_free_unreg_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int evt = EFC_HW_DOMAIN_FREE_OK;
> +	int rc = 0;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		evt = EFC_HW_DOMAIN_FREE_FAIL;
> +		rc = -1;
> +	}
> +
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, evt, mqe);
> +	return rc;
> +}
> +
> +static void
> +efct_hw_domain_free_unreg_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	if (!data) {
> +		data = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!data)
> +			goto cleanup;
> +	}
> +

Mempool?

> +	rc = sli_cmd_unreg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			       domain->indicator, SLI4_UNREG_TYPE_DOMAIN);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VFI format failure\n");
> +		goto cleanup;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_free_unreg_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VFI command failure\n");
> +		goto cleanup;
> +	}
> +
> +	return;
> +
> +cleanup:
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_free(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!domain) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p domain=%p\n",
> +			hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_domain_free_unreg_vfi(domain, NULL);
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	if (!domain) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p domain=%p\n", hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  domain->dma.size, domain->dma.virt, domain->dma.phys);
> +	memset(&domain->dma, 0, sizeof(struct efc_dma));
> +	sli_resource_free(&hw->sli, SLI_RSRC_VFI,
> +			  domain->indicator);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
> +		   u32 fc_addr, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	/* Check for invalid indicator */
> +	if (rnode->indicator != U32_MAX) {
> +		efc_log_err(hw->os,
> +			     "RPI allocation failure addr=%#x rpi=%#x\n",
> +			    fc_addr, rnode->indicator);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* NULL SLI port indicates an unallocated remote node */
> +	rnode->sport = NULL;
> +
> +	if (sli_resource_alloc(&hw->sli, SLI_RSRC_RPI,
> +			       &rnode->indicator, &rnode->index)) {
> +		efc_log_err(hw->os, "RPI allocation failure addr=%#x\n",
> +			     fc_addr);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	rnode->fc_id = fc_addr;
> +	rnode->sport = sport;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +

Locking?
Please add a lockdep annotation so that one knows which locks should've 
been taken.
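
Something along these lines at the top of the functions that touch the
rpi_ref[] state (sketch only; the lock name below is invented, whatever
actually serializes these paths is what should be asserted):

	lockdep_assert_held(&hw->node_lock);	/* hypothetical lock name */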

> +static int
> +efct_hw_cb_node_attach(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efc_remote_node *rnode = arg;
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event	evt = 0;
> +
Unnecessary newline.

> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +		atomic_sub_return(1, &hw->rpi_ref[rnode->index].rpi_count);
> +		rnode->attached = false;
> +		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 0);
> +		evt = EFC_HW_NODE_ATTACH_FAIL;
> +	} else {
> +		rnode->attached = true;
> +		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 1);
> +		evt = EFC_HW_NODE_ATTACH_OK;
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, rnode);
> +
> +	kfree(mqe);

Personally, I find it bad style to call 'kfree' on a passed-in
parameter. It is never obvious what happens to the parameter if the
function fails, _and_ the caller's pointer is left untouched, so the
caller has no way of checking whether the argument has actually been freed.
Please move the kfree() call to the calling function.
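Purely as a sketch (the dispatcher name below is made up), i.e. give
the buffer a single, obvious owner:

	static void
	efct_hw_mbox_complete(struct efct_hw *hw, int status, u8 *mqe,
			      int (*cb)(struct efct_hw *, int, u8 *, void *),
			      void *arg)
	{
		if (cb)
			cb(hw, status, mqe, arg);	/* cb never frees mqe */

		kfree(mqe);	/* freed in exactly one place */
	}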

> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Update a remote node object with the remote port's service parameters */
> +enum efct_hw_rtn
> +efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
> +		    struct efc_dma *sparms)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
> +	u8		*buf = NULL;
> +	u32	count = 0;
> +
> +	if (!hw || !rnode || !sparms) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p rnode=%p sparms=%p\n",
> +			    hw, rnode, sparms);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);

Mempool ...
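I.e. allocate the bounce buffer from a mempool set up at init time;
roughly (hw->mbox_pool is a made-up field here):

	/* at hw setup:
	 *   hw->mbox_pool = mempool_create_kmalloc_pool(4, SLI4_BMBX_SIZE);
	 */
	buf = mempool_alloc(hw->mbox_pool, GFP_ATOMIC);
	if (!buf)
		return EFCT_HW_RTN_NO_MEMORY;
	memset(buf, 0, SLI4_BMBX_SIZE);

	/* ... and mempool_free(buf, hw->mbox_pool) instead of kfree() */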

> +	/*
> +	 * If the attach count is non-zero, this RPI has already been reg'd.
> +	 * Otherwise, register the RPI
> +	 */
> +	if (rnode->index == U32_MAX) {
> +		efc_log_err(efct, "bad parameter rnode->index invalid\n");
> +		kfree(buf);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	count = atomic_add_return(1, &hw->rpi_ref[rnode->index].rpi_count);
> +	count--;
> +	if (count) {

What on earth ...
Care to explain what you are attempting here?
Increasing a value just to decrease it again?
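If the intent is to get the value _before_ the increment,
atomic_fetch_add() already returns exactly that:

	/* returns the old rpi_count, then increments it */
	count = atomic_fetch_add(1, &hw->rpi_ref[rnode->index].rpi_count);
	if (count) {
		/* RPI already registered */
		...
	}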

> +		/*
> +		 * Can't attach multiple FC_ID's to a node unless High Login
> +		 * Mode is enabled
> +		 */
> +		if (!hw->sli.high_login_mode) {
> +			efc_log_test(hw->os,
> +				      "attach to attached node HLM=%d cnt=%d\n",
> +				     hw->sli.high_login_mode, count);
> +			rc = EFCT_HW_RTN_SUCCESS;
> +		} else {
> +			rnode->node_group = true;
> +			rnode->attached =
> +			 atomic_read(&hw->rpi_ref[rnode->index].rpi_attached);
> +			rc = rnode->attached  ? EFCT_HW_RTN_SUCCESS_SYNC :
> +							 EFCT_HW_RTN_SUCCESS;
> +		}
> +	} else {
> +		rnode->node_group = false;
> +
> +		if (!sli_cmd_reg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    rnode->fc_id,
> +				    rnode->indicator, rnode->sport->indicator,
> +				    sparms, 0, 0))
> +			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_node_attach, rnode);
> +	}
> +
> +	if (count || rc) {
> +		if (rc < EFCT_HW_RTN_SUCCESS) {
> +			atomic_sub_return(1,
> +					  &hw->rpi_ref[rnode->index].rpi_count);
> +			efc_log_err(hw->os,
> +				     "%s error\n", count ? "HLM" : "REG_RPI");
> +		}
> +		kfree(buf);
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_free_resources(struct efc *efc,
> +			    struct efc_remote_node *rnode)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!hw || !rnode) {
> +		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
> +			     hw, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (rnode->sport) {
> +		if (rnode->attached) {
> +			efc_log_err(hw->os, "Err: rnode is still attached\n");
> +			return EFCT_HW_RTN_ERROR;
> +		}
> +		if (rnode->indicator != U32_MAX) {
> +			if (sli_resource_free(&hw->sli, SLI_RSRC_RPI,
> +					      rnode->indicator)) {
> +				efc_log_err(hw->os,
> +					     "RPI free fail RPI %d addr=%#x\n",
> +					    rnode->indicator,
> +					    rnode->fc_id);
> +				rc = EFCT_HW_RTN_ERROR;
> +			} else {
> +				rnode->node_group = false;
> +				rnode->indicator = U32_MAX;
> +				rnode->index = U32_MAX;
> +				rnode->free_group = false;
> +			}
> +		}
> +	}
> +
> +	return rc;
> +}
> +

Locking?

> +static int
> +efct_hw_cb_node_free(struct efct_hw *hw,
> +		     int status, u8 *mqe, void *arg)
> +{
> +	struct efc_remote_node *rnode = arg;
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
> +	int		rc = 0;
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +
> +		/*
> +		 * In certain cases, a non-zero MQE status is OK (all must be
> +		 * true):
> +		 *   - node is attached
> +		 *   - if High Login Mode is enabled, node is part of a node
> +		 * group
> +		 *   - status is 0x1400
> +		 */
> +		if (!rnode->attached ||
> +		    (hw->sli.high_login_mode && !rnode->node_group) ||
> +				(le16_to_cpu(hdr->status) !=
> +				 MBX_STATUS_RPI_NOT_REG))
> +			rc = -1;
> +	}
> +
> +	if (rc == 0) {
> +		rnode->node_group = false;
> +		rnode->attached = false;
> +
> +		if (atomic_read(&hw->rpi_ref[rnode->index].rpi_count) == 0)
> +			atomic_set(&hw->rpi_ref[rnode->index].rpi_attached,
> +				   0);
> +		 evt = EFC_HW_NODE_FREE_OK;
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, rnode);
> +
> +	kfree(mqe);
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS_SYNC;
> +	u32	index = U32_MAX;
> +
> +	if (!hw || !rnode) {
> +		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
> +			     hw, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	index = rnode->index;
> +
> +	if (rnode->sport) {
> +		u32	count = 0;
> +		u32	fc_id;
> +
> +		if (!rnode->attached)
> +			return EFCT_HW_RTN_SUCCESS_SYNC;
> +
> +		buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!buf)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(buf, 0, SLI4_BMBX_SIZE);

Mempools..

> +		count = atomic_sub_return(1, &hw->rpi_ref[index].rpi_count);
> +		count++;
> +		if (count <= 1) {

Come now. This doesn't make sense; check for 'count == 0' and drop
the ++ ...
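I.e. roughly:

	if (atomic_sub_return(1, &hw->rpi_ref[index].rpi_count) == 0) {
		/* no other references to this RPI, unregister it */
		fc_id = U32_MAX;
		rnode->node_group = false;
		rnode->free_group = true;
	} else {
		...
	}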

> +			/*
> +			 * There are no other references to this RPI so
> +			 * unregister it
> +			 */
> +			fc_id = U32_MAX;
> +			/* and free the resource */
> +			rnode->node_group = false;
> +			rnode->free_group = true;
> +		} else {
> +			if (!hw->sli.high_login_mode)
> +				efc_log_test(hw->os,
> +					      "Inval cnt with HLM off, cnt=%d\n",
> +					     count);
> +			fc_id = rnode->fc_id & 0x00ffffff;
> +		}
> +
> +		rc = EFCT_HW_RTN_ERROR;
> +
> +		if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				      rnode->indicator,
> +				      SLI_RSRC_RPI, fc_id))
> +			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_node_free, rnode);
> +
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			efc_log_err(hw->os, "UNREG_RPI failed\n");
> +			kfree(buf);
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_node_free_all(struct efct_hw *hw, int status, u8 *mqe,
> +			 void *arg)
> +{
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
> +	int		rc = 0;
> +	u32	i;
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +	} else {
> +		evt = EFC_HW_NODE_FREE_ALL_OK;
> +	}
> +
> +	if (evt == EFC_HW_NODE_FREE_ALL_OK) {
> +		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size;
> +		     i++)
> +			atomic_set(&hw->rpi_ref[i].rpi_count, 0);
> +
> +		if (sli_resource_reset(&hw->sli, SLI_RSRC_RPI)) {
> +			efc_log_test(hw->os, "RPI free all failure\n");
> +			rc = -1;
> +		}
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, NULL);
> +
> +	kfree(mqe);

Same argument for calling 'kfree' on a function parameter.

> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_free_all(struct efct_hw *hw)
> +{
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +

Mempools...

... and there are some more functions following which should be modified
to use mempools for mailbox commands and to not call kfree() on a
function parameter.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver
  2020-04-12  3:33 ` [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2020-04-16 10:02   ` Hannes Reinecke
  2020-04-16 13:15   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Hannes Reinecke @ 2020-04-16 10:02 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/12/20 5:33 AM, James Smart wrote:
> This patch completes the efct driver population.
> 
> This patch adds driver definitions for:
> Adds the efct driver Kconfig and Makefiles
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Use SPDX license
>    Remove utils.c from makefile
> ---
>   drivers/scsi/elx/Kconfig  |  9 +++++++++
>   drivers/scsi/elx/Makefile | 18 ++++++++++++++++++
>   2 files changed, 27 insertions(+)
>   create mode 100644 drivers/scsi/elx/Kconfig
>   create mode 100644 drivers/scsi/elx/Makefile
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 23/31] elx: efct: SCSI IO handling routines
  2020-04-12  3:32 ` [PATCH v3 23/31] elx: efct: SCSI IO handling routines James Smart
@ 2020-04-16 11:40   ` Daniel Wagner
  0 siblings, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 11:40 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:55PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines for SCSI transport IO alloc, build and send IO.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> 
> ---
> v3:
>   Removed DIF related code which is not used.
>   Removed SCSI get property.
> ---
>  drivers/scsi/elx/efct/efct_scsi.c | 1192 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_scsi.h |  235 ++++++++
>  2 files changed, 1427 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
>  create mode 100644 drivers/scsi/elx/efct/efct_scsi.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_scsi.c b/drivers/scsi/elx/efct/efct_scsi.c
> new file mode 100644
> index 000000000000..c299eadbc492
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_scsi.c
> @@ -0,0 +1,1192 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include "efct_driver.h"
> +#include "efct_els.h"
> +#include "efct_hw.h"
> +
> +#define enable_tsend_auto_resp(efct)	1
> +#define enable_treceive_auto_resp(efct)	0
> +
> +#define SCSI_IOFMT "[%04x][i:%04x t:%04x h:%04x]"
> +
> +#define scsi_io_printf(io, fmt, ...) \
> +	efc_log_debug(io->efct, "[%s]" SCSI_IOFMT fmt, \
> +		io->node->display_name, io->instance_index,\
> +		io->init_task_tag, io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
> +
> +#define EFCT_LOG_ENABLE_SCSI_TRACE(efct)                \
> +		(((efct) != NULL) ? (((efct)->logmask & (1U << 2)) != 0) : 0)
> +
> +#define scsi_io_trace(io, fmt, ...) \
> +	do { \
> +		if (EFCT_LOG_ENABLE_SCSI_TRACE(io->efct)) \
> +			scsi_io_printf(io, fmt, ##__VA_ARGS__); \
> +	} while (0)
> +
> +/* Enable the SCSI and Transport IO allocations */
> +void
> +efct_scsi_io_alloc_enable(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		node->io_alloc_enabled = true;

no need to indent

> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +/* Disable the SCSI and Transport IO allocations */
> +void
> +efct_scsi_io_alloc_disable(struct efc *efc, struct efc_node *node)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		node->io_alloc_enabled = false;

no need to indent

> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +}
> +
> +struct efct_io *
> +efct_scsi_io_alloc(struct efc_node *node, enum efct_scsi_io_role role)
> +{
> +	struct efct *efct;
> +	struct efc *efcp;
> +	struct efct_xport *xport;
> +	struct efct_io *io;
> +	unsigned long flags = 0;
> +
> +	efcp = node->efc;
> +	efct = efcp->base;
> +
> +	xport = efct->xport;
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +
> +		if (!node->io_alloc_enabled) {
> +			spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +			return NULL;
> +		}
> +
> +		io = efct_io_pool_io_alloc(efct->xport->io_pool);
> +		if (!io) {
> +			atomic_add_return(1, &xport->io_alloc_failed_count);
> +			spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +			return NULL;
> +		}
> +
> +		/* initialize refcount */
> +		kref_init(&io->ref);
> +		io->release = _efct_scsi_io_free;
> +
> +		if (io->hio) {
> +			efc_log_err(efct,
> +				     "assertion failed: io->hio is not NULL\n");
> +			spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +			return NULL;
> +		}
> +
> +		/* set generic fields */
> +		io->efct = efct;
> +		io->node = node;
> +
> +		/* set type and name */
> +		io->io_type = EFCT_IO_TYPE_IO;
> +		io->display_name = "scsi_io";
> +
> +		switch (role) {
> +		case EFCT_SCSI_IO_ROLE_ORIGINATOR:
> +			io->cmd_ini = true;
> +			io->cmd_tgt = false;
> +			break;
> +		case EFCT_SCSI_IO_ROLE_RESPONDER:
> +			io->cmd_ini = false;
> +			io->cmd_tgt = true;
> +			break;
> +		}
> +
> +		/* Add to node's active_ios list */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &node->active_ios);

no need to indent

> +
> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +
> +	return io;
> +}
> +
> +void
> +_efct_scsi_io_free(struct kref *arg)
> +{
> +	struct efct_io *io = container_of(arg, struct efct_io, ref);
> +	struct efct *efct = io->efct;
> +	struct efc_node *node = io->node;
> +	int send_empty_event;
> +	unsigned long flags = 0;
> +
> +	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
> +
> +	if (io->io_free) {
> +		efc_log_err(efct, "IO already freed.\n");
> +		return;
> +	}
> +
> +	spin_lock_irqsave(&node->active_ios_lock, flags);
> +		list_del(&io->list_entry);
> +		send_empty_event = (!node->io_alloc_enabled) &&
> +					list_empty(&node->active_ios);

no need to indent

> +	spin_unlock_irqrestore(&node->active_ios_lock, flags);
> +
> +	if (send_empty_event)
> +		efc_scsi_io_list_empty(node->efc, node);
> +
> +	io->node = NULL;
> +	efct_io_pool_io_free(efct->xport->io_pool, io);
> +}
> +
> +void
> +efct_scsi_io_free(struct efct_io *io)
> +{
> +	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
> +	WARN_ON(refcount_read(&io->ref.refcount) != 0);
> +	kref_put(&io->ref, io->release);
> +}
> +
> +static void
> +efct_target_io_cb(struct efct_hw_io *hio, struct efc_remote_node *rnode,
> +		  u32 length, int status, u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +	struct efct *efct;
> +	enum efct_scsi_io_status scsi_stat = EFCT_SCSI_STATUS_GOOD;
> +
> +	if (!io || !io->efct) {
> +		pr_err("%s: IO can not be NULL\n", __func__);
> +		return;
> +	}
> +
> +	scsi_io_trace(io, "status x%x ext_status x%x\n", status, ext_status);
> +
> +	efct = io->efct;
> +
> +	io->transferred += length;
> +
> +	/* Call target server completion */
> +	if (io->scsi_tgt_cb) {

move the following part into a function
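e.g. a small translation helper along these lines (the name is made up
and only a few of the cases are shown):

	static enum efct_scsi_io_status
	efct_wcqe_to_scsi_status(int status, u32 ext_status)
	{
		switch (status) {
		case SLI4_FC_WCQE_STATUS_SUCCESS:
			return EFCT_SCSI_STATUS_GOOD;
		case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT:
			return EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED;
		case SLI4_FC_WCQE_STATUS_SHUTDOWN:
			return EFCT_SCSI_STATUS_SHUTDOWN;
		default:
			/* DI_ERROR / LOCAL_REJECT would inspect ext_status here */
			return EFCT_SCSI_STATUS_ERROR;
		}
	}

efct_target_io_cb() would then only compute the completion flags and
invoke the callback.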

> +		efct_scsi_io_cb_t cb = io->scsi_tgt_cb;
> +		u32 flags = 0;
> +
> +		/* Clear the callback before invoking the callback */
> +		io->scsi_tgt_cb = NULL;
> +
> +		/* if status was good, and auto-good-response was set,
> +		 * then callback target-server with IO_CMPL_RSP_SENT,
> +		 * otherwise send IO_CMPL
> +		 */
> +		if (status == 0 && io->auto_resp)
> +			flags |= EFCT_SCSI_IO_CMPL_RSP_SENT;
> +		else
> +			flags |= EFCT_SCSI_IO_CMPL;
> +
> +		switch (status) {
> +		case SLI4_FC_WCQE_STATUS_SUCCESS:
> +			scsi_stat = EFCT_SCSI_STATUS_GOOD;
> +			break;
> +		case SLI4_FC_WCQE_STATUS_DI_ERROR:
> +			if (ext_status & SLI4_FC_DI_ERROR_GE)
> +				scsi_stat = EFCT_SCSI_STATUS_DIF_GUARD_ERR;
> +			else if (ext_status & SLI4_FC_DI_ERROR_AE)
> +				scsi_stat = EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR;
> +			else if (ext_status & SLI4_FC_DI_ERROR_RE)
> +				scsi_stat = EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR;
> +			else
> +				scsi_stat = EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR;
> +			break;
> +		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
> +			switch (ext_status) {
> +			case SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET:
> +			case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
> +				scsi_stat = EFCT_SCSI_STATUS_ABORTED;
> +				break;
> +			case SLI4_FC_LOCAL_REJECT_INVALID_RPI:
> +				scsi_stat = EFCT_SCSI_STATUS_NEXUS_LOST;
> +				break;
> +			case SLI4_FC_LOCAL_REJECT_NO_XRI:
> +				scsi_stat = EFCT_SCSI_STATUS_NO_IO;
> +				break;
> +			default:
> +				/*we have seen 0x0d(TX_DMA_FAILED err)*/
> +				scsi_stat = EFCT_SCSI_STATUS_ERROR;
> +				break;
> +			}
> +			break;
> +
> +		case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT:
> +			/* target IO timed out */
> +			scsi_stat = EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED;
> +			break;
> +
> +		case SLI4_FC_WCQE_STATUS_SHUTDOWN:
> +			/* Target IO cancelled by HW */
> +			scsi_stat = EFCT_SCSI_STATUS_SHUTDOWN;
> +			break;
> +
> +		default:
> +			scsi_stat = EFCT_SCSI_STATUS_ERROR;
> +			break;
> +		}
> +
> +		cb(io, scsi_stat, flags, io->scsi_tgt_cb_arg);
> +	}
> +	efct_scsi_check_pending(efct);
> +}
> +
> +static int
> +efct_scsi_build_sgls(struct efct_hw *hw, struct efct_hw_io *hio,
> +		struct efct_scsi_sgl *sgl, u32 sgl_count,
> +		enum efct_hw_io_type type)
> +{
> +	int rc;
> +	u32 i;
> +	struct efct *efct = hw->os;
> +
> +	/* Initialize HW SGL */
> +	rc = efct_hw_io_init_sges(hw, hio, type);
> +	if (rc) {
> +		efc_log_err(efct, "efct_hw_io_init_sges failed: %d\n", rc);
> +		return EFC_FAIL;
> +	}
> +
> +	for (i = 0; i < sgl_count; i++) {
> +
> +		/* Add data SGE */
> +		rc = efct_hw_io_add_sge(hw, hio,
> +				sgl[i].addr, sgl[i].len);

fits on one line

> +		if (rc) {
> +			efc_log_err(efct,
> +					"add sge failed cnt=%d rc=%d\n",
> +					sgl_count, rc);
> +			return rc;
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void efc_log_sgl(struct efct_io *io)
> +{
> +	struct efct_hw_io *hio = io->hio;
> +	struct sli4_sge *data = NULL;
> +	u32 *dword = NULL;
> +	u32 i;
> +	u32 n_sge;
> +
> +	scsi_io_trace(io, "def_sgl at 0x%x 0x%08x\n",
> +		      upper_32_bits(hio->def_sgl.phys),
> +		      lower_32_bits(hio->def_sgl.phys));
> +	n_sge = (hio->sgl == &hio->def_sgl ?
> +			hio->n_sge : hio->def_sgl_count);

fits on one line

> +	for (i = 0, data = hio->def_sgl.virt; i < n_sge; i++, data++) {
> +		dword = (u32 *)data;
> +
> +		scsi_io_trace(io, "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
> +			      i, dword[0], dword[1], dword[2], dword[3]);
> +
> +		if (dword[2] & (1U << 31))
> +			break;
> +	}
> +
> +}
> +
> +static int
> +efct_scsi_check_pending_async_cb(struct efct_hw *hw, int status,
> +				 u8 *mqe, void *arg)
> +{
> +	struct efct_io *io = arg;
> +
> +	if (io) {
> +		if (io->hw_cb) {
> +			efct_hw_done_t cb = io->hw_cb;
> +
> +			io->hw_cb = NULL;
> +			(cb)(io->hio, NULL, 0,
> +			 SLI4_FC_WCQE_STATUS_DISPATCH_ERROR, 0, io);
> +		}
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_scsi_io_dispatch_hw_io(struct efct_io *io, struct efct_hw_io *hio)
> +{
> +	int rc = 0;

EFC_SUCCESS ?

> +	struct efct *efct = io->efct;
> +
> +	/* Got a HW IO;
> +	 * update ini/tgt_task_tag with HW IO info and dispatch
> +	 */
> +	io->hio = hio;
> +	if (io->cmd_tgt)
> +		io->tgt_task_tag = hio->indicator;
> +	else if (io->cmd_ini)
> +		io->init_task_tag = hio->indicator;
> +	io->hw_tag = hio->reqtag;
> +
> +	hio->eq = io->hw_priv;
> +
> +	/* Copy WQ steering */
> +	switch (io->wq_steering) {
> +	case EFCT_SCSI_WQ_STEERING_CLASS >> EFCT_SCSI_WQ_STEERING_SHIFT:
> +		hio->wq_steering = EFCT_HW_WQ_STEERING_CLASS;
> +		break;
> +	case EFCT_SCSI_WQ_STEERING_REQUEST >> EFCT_SCSI_WQ_STEERING_SHIFT:
> +		hio->wq_steering = EFCT_HW_WQ_STEERING_REQUEST;
> +		break;
> +	case EFCT_SCSI_WQ_STEERING_CPU >> EFCT_SCSI_WQ_STEERING_SHIFT:
> +		hio->wq_steering = EFCT_HW_WQ_STEERING_CPU;
> +		break;
> +	}
> +
> +	switch (io->io_type) {
> +	case EFCT_IO_TYPE_IO:
> +		rc = efct_scsi_build_sgls(&efct->hw, io->hio,
> +					  io->sgl, io->sgl_count, io->hio_type);
> +		if (rc)
> +			break;
> +
> +		if (EFCT_LOG_ENABLE_SCSI_TRACE(efct))
> +			efc_log_sgl(io);
> +
> +		if (io->app_id)
> +			io->iparam.fcp_tgt.app_id = io->app_id;
> +
> +		rc = efct_hw_io_send(&io->efct->hw, io->hio_type, io->hio,
> +				     io->wire_len, &io->iparam,
> +				     &io->node->rnode, io->hw_cb, io);
> +		break;
> +	case EFCT_IO_TYPE_ELS:
> +	case EFCT_IO_TYPE_CT:
> +		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
> +				       &io->els_req, io->wire_len,
> +			&io->els_rsp, &io->node->rnode, &io->iparam,
> +			io->hw_cb, io);
> +		break;
> +	case EFCT_IO_TYPE_CT_RESP:
> +		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
> +				       &io->els_rsp, io->wire_len,
> +			NULL, &io->node->rnode, &io->iparam,
> +			io->hw_cb, io);
> +		break;
> +	case EFCT_IO_TYPE_BLS_RESP:
> +		/* no need to update tgt_task_tag for BLS response since
> +		 * the RX_ID will be specified by the payload, not the XRI
> +		 */
> +		rc = efct_hw_srrs_send(&efct->hw, io->hio_type, io->hio,
> +				       NULL, 0, NULL, &io->node->rnode,
> +			&io->iparam, io->hw_cb, io);
> +		break;
> +	default:
> +		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
> +		rc = -1;

error code

> +		break;
> +	}
> +	return rc;
> +}
> +
> +static int
> +efct_scsi_io_dispatch_no_hw_io(struct efct_io *io)
> +{
> +	int rc;
> +
> +	switch (io->io_type) {
> +	case EFCT_IO_TYPE_ABORT: {
> +		struct efct_hw_io *hio_to_abort = NULL;
> +
> +		hio_to_abort = io->io_to_abort->hio;
> +
> +		if (!hio_to_abort) {
> +			/*
> +			 * If "IO to abort" does not have an
> +			 * associated HW IO, immediately make callback with
> +			 * success. The command must have been sent to
> +			 * the backend, but the data phase has not yet
> +			 * started, so we don't have a HW IO.
> +			 *
> +			 * Note: since the backend shims should be
> +			 * taking a reference on io_to_abort, it should not
> +			 * be possible to have been completed and freed by
> +			 * the backend before the abort got here.
> +			 */
> +			scsi_io_printf(io, "IO: not active\n");
> +			((efct_hw_done_t)io->hw_cb)(io->hio, NULL, 0,
> +					SLI4_FC_WCQE_STATUS_SUCCESS, 0, io);
> +			rc = 0;

Same as in the function above.


> +		} else {
> +			/* HW IO is valid, abort it */
> +			scsi_io_printf(io, "aborting\n");
> +			rc = efct_hw_io_abort(&io->efct->hw, hio_to_abort,
> +					      io->send_abts, io->hw_cb, io);
> +			if (rc) {
> +				int status = SLI4_FC_WCQE_STATUS_SUCCESS;
> +
> +				if (rc != EFCT_HW_RTN_IO_NOT_ACTIVE &&
> +				    rc != EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
> +					status = -1;
> +					scsi_io_printf(io,
> +						       "Failed to abort IO: status=%d\n",
> +						rc);
> +				}
> +				((efct_hw_done_t)io->hw_cb)(io->hio,
> +						NULL, 0, status, 0, io);
> +				rc = 0;
> +			}
> +		}
> +
> +		break;
> +	}
> +	default:
> +		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
> +		rc = -1;
> +		break;
> +	}
> +	return rc;
> +}
> +
> +/**
> + * Check for pending IOs to dispatch.
> + *
> + * If there are IOs on the pending list, and a HW IO is available, then
> + * dispatch the IOs.
> + */

proper kerneldoc style?
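For reference, a kerneldoc header for this would look roughly like:

	/**
	 * efct_scsi_check_pending() - dispatch pending IOs
	 * @efct: pointer to the efct device
	 *
	 * If there are IOs on the pending list, and a HW IO is available,
	 * then dispatch the IOs.
	 */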

> +void
> +efct_scsi_check_pending(struct efct *efct)
> +{
> +	struct efct_xport *xport = efct->xport;
> +	struct efct_io *io = NULL;
> +	struct efct_hw_io *hio;
> +	int status;
> +	int count = 0;
> +	int dispatch;
> +	unsigned long flags = 0;
> +
> +	/* Guard against recursion */
> +	if (atomic_add_return(1, &xport->io_pending_recursing)) {
> +		/* This function is already running.  Decrement and return. */
> +		atomic_sub_return(1, &xport->io_pending_recursing);
> +		return;
> +	}
> +
> +	do {
> +		spin_lock_irqsave(&xport->io_pending_lock, flags);
> +		status = 0;
> +		hio = NULL;
> +		if (!list_empty(&xport->io_pending_list)) {
> +			io = list_first_entry(&xport->io_pending_list,
> +					      struct efct_io,
> +					      io_pending_link);
> +		}
> +		if (io) {

Couldn't this be part of the above if body? Could io be NULL when
popped from the list?

> +			list_del(&io->io_pending_link);

This says no

> +			if (io->io_type == EFCT_IO_TYPE_ABORT) {
> +				hio = NULL;
> +			} else {
> +				hio = efct_hw_io_alloc(&efct->hw);
> +				if (!hio) {
> +					/*
> +					 * No HW IO available.Put IO back on
> +					 * the front of pending list
> +					 */
> +					list_add(&xport->io_pending_list,
> +						 &io->io_pending_link);
> +					io = NULL;
> +				} else {
> +					hio->eq = io->hw_priv;
> +				}
> +			}
> +		}
> +		/* Must drop the lock before dispatching the IO */
> +		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +
> +		if (io) {
> +			count++;
> +
> +			/*
> +			 * We pulled an IO off the pending list,
> +			 * and either got an HW IO or don't need one
> +			 */
> +			atomic_sub_return(1, &xport->io_pending_count);
> +			if (!hio)
> +				status = efct_scsi_io_dispatch_no_hw_io(io);
> +			else
> +				status = efct_scsi_io_dispatch_hw_io(io, hio);
> +			if (status) {
> +				/*
> +				 * Invoke the HW callback, but do so in the
> +				 * separate execution context,provided by the
> +				 * NOP mailbox completion processing context
> +				 * by using efct_hw_async_call()
> +				 */
> +				if (efct_hw_async_call(&efct->hw,
> +					       efct_scsi_check_pending_async_cb,
> +					io)) {
> +					efc_log_test(efct,
> +						      "call hw async failed\n");
> +				}
> +			}
> +		}
> +	} while (io);

This function is a bit long; I think it would be good to split it up.

> +
> +	/*
> +	 * If nothing was removed from the list,
> +	 * we might be in a case where we need to abort an
> +	 * active IO and the abort is on the pending list.
> +	 * Look for an abort we can dispatch.
> +	 */
> +	if (count == 0) {
> +		dispatch = 0;
> +
> +		spin_lock_irqsave(&xport->io_pending_lock, flags);
> +		list_for_each_entry(io, &xport->io_pending_list,
> +				    io_pending_link) {
> +			if (io->io_type == EFCT_IO_TYPE_ABORT) {
> +				if (io->io_to_abort->hio) {
> +					/* This IO has a HW IO, so it is
> +					 * active.  Dispatch the abort.
> +					 */
> +					dispatch = 1;
> +				} else {
> +					/* Leave this abort on the pending
> +					 * list and keep looking
> +					 */
> +					dispatch = 0;
> +				}
> +			}
> +			if (dispatch) {
> +				list_del(&io->io_pending_link);
> +				atomic_sub_return(1, &xport->io_pending_count);
> +				break;
> +			}
> +		}
> +		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +
> +		if (dispatch) {
> +			status = efct_scsi_io_dispatch_no_hw_io(io);
> +			if (status) {
> +				if (efct_hw_async_call(&efct->hw,
> +					       efct_scsi_check_pending_async_cb,
> +					io)) {
> +					efc_log_test(efct,
> +						      "call to hw async failed\n");
> +				}
> +			}
> +		}
> +	}
> +
> +	atomic_sub_return(1, &xport->io_pending_recursing);
> +}
> +
> +/**
> + * An IO is dispatched:
> + * - if the pending list is not empty, add IO to pending list
> + *   and call a function to process the pending list.
> + * - if pending list is empty, try to allocate a HW IO. If none
> + *   is available, place this IO at the tail of the pending IO
> + *   list.
> + * - if HW IO is available, attach this IO to the HW IO and
> + *   submit it.
> + */

proper kerneldoc style?

> +int
> +efct_scsi_io_dispatch(struct efct_io *io, void *cb)
> +{
> +	struct efct_hw_io *hio;
> +	struct efct *efct = io->efct;
> +	struct efct_xport *xport = efct->xport;
> +	unsigned long flags = 0;
> +
> +	io->hw_cb = cb;
> +
> +	/*
> +	 * if this IO already has a HW IO, then this is either
> +	 * not the first phase of the IO. Send it to the HW.
> +	 */
> +	if (io->hio)
> +		return efct_scsi_io_dispatch_hw_io(io, io->hio);
> +
> +	/*
> +	 * We don't already have a HW IO associated with the IO. First check
> +	 * the pending list. If not empty, add IO to the tail and process the
> +	 * pending list.
> +	 */
> +	spin_lock_irqsave(&xport->io_pending_lock, flags);
> +		if (!list_empty(&xport->io_pending_list)) {
> +			/*
> +			 * If this is a low latency request,
> +			 * the put at the front of the IO pending
> +			 * queue, otherwise put it at the end of the queue.
> +			 */
> +			if (io->low_latency) {
> +				INIT_LIST_HEAD(&io->io_pending_link);
> +				list_add(&xport->io_pending_list,
> +					 &io->io_pending_link);
> +			} else {
> +				INIT_LIST_HEAD(&io->io_pending_link);
> +				list_add_tail(&io->io_pending_link,
> +					      &xport->io_pending_list);
> +			}
> +			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +			atomic_add_return(1, &xport->io_pending_count);
> +			atomic_add_return(1, &xport->io_total_pending);
> +
> +			/* process pending list */
> +			efct_scsi_check_pending(efct);
> +			return EFC_SUCCESS;
> +		}

no need to indent

> +	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +
> +	/*
> +	 * We don't have a HW IO associated with the IO and there's nothing
> +	 * on the pending list. Attempt to allocate a HW IO and dispatch it.
> +	 */
> +	hio = efct_hw_io_alloc(&io->efct->hw);
> +	if (!hio) {
> +		/* Couldn't get a HW IO. Save this IO on the pending list */
> +		spin_lock_irqsave(&xport->io_pending_lock, flags);
> +		INIT_LIST_HEAD(&io->io_pending_link);
> +		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
> +		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +
> +		atomic_add_return(1, &xport->io_total_pending);
> +		atomic_add_return(1, &xport->io_pending_count);
> +		return EFC_SUCCESS;
> +	}
> +
> +	/* We successfully allocated a HW IO; dispatch to HW */
> +	return efct_scsi_io_dispatch_hw_io(io, hio);
> +}
> +
> +/**
> + * An Abort IO is dispatched:
> + * - if the pending list is not empty, add IO to pending list
> + *   and call a function to process the pending list.
> + * - if pending list is empty, send abort to the HW.
> + */

not proper kerneldoc style

> +
> +int
> +efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb)
> +{
> +	struct efct *efct = io->efct;
> +	struct efct_xport *xport = efct->xport;
> +	unsigned long flags = 0;
> +
> +	io->hw_cb = cb;
> +
> +	/*
> +	 * For aborts, we don't need a HW IO, but we still want
> +	 * to pass through the pending list to preserve ordering.
> +	 * Thus, if the pending list is not empty, add this abort
> +	 * to the pending list and process the pending list.
> +	 */
> +	spin_lock_irqsave(&xport->io_pending_lock, flags);
> +		if (!list_empty(&xport->io_pending_list)) {
> +			INIT_LIST_HEAD(&io->io_pending_link);
> +			list_add_tail(&io->io_pending_link,
> +				      &xport->io_pending_list);
> +			spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +			atomic_add_return(1, &xport->io_pending_count);
> +			atomic_add_return(1, &xport->io_total_pending);
> +
> +			/* process pending list */
> +			efct_scsi_check_pending(efct);
> +			return EFC_SUCCESS;
> +		}

no need to indent

> +	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
> +
> +	/* nothing on pending list, dispatch abort */
> +	return efct_scsi_io_dispatch_no_hw_io(io);
> +}
> +
> +static inline int
> +efct_scsi_xfer_data(struct efct_io *io, u32 flags,
> +	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 xwire_len,
> +	enum efct_hw_io_type type, int enable_ar,
> +	efct_scsi_io_cb_t cb, void *arg)
> +{
> +	struct efct *efct;
> +	size_t residual = 0;
> +
> +	io->sgl_count = sgl_count;
> +
> +	efct = io->efct;
> +
> +	scsi_io_trace(io, "%s wire_len %llu\n",
> +		      (type == EFCT_HW_IO_TARGET_READ) ? "send" : "recv",
> +		      xwire_len);
> +
> +	io->hio_type = type;
> +
> +	io->scsi_tgt_cb = cb;
> +	io->scsi_tgt_cb_arg = arg;
> +
> +	residual = io->exp_xfer_len - io->transferred;
> +	io->wire_len = (xwire_len < residual) ? xwire_len : residual;
> +	residual = (xwire_len - io->wire_len);
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
> +	io->iparam.fcp_tgt.offset = io->transferred;
> +	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
> +	io->iparam.fcp_tgt.timeout = io->timeout;
> +
> +	/* if this is the last data phase and there is no residual, enable
> +	 * auto-good-response
> +	 */
> +	if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) &&
> +	    residual == 0 &&
> +		((io->transferred + io->wire_len) == io->exp_xfer_len) &&
> +		(!(flags & EFCT_SCSI_NO_AUTO_RESPONSE))) {

This should be reformatted properly.
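e.g. with the continuation lines aligned to the opening parenthesis:

	if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) &&
	    residual == 0 &&
	    (io->transferred + io->wire_len) == io->exp_xfer_len &&
	    !(flags & EFCT_SCSI_NO_AUTO_RESPONSE)) {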

> +		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
> +		io->auto_resp = true;
> +	} else {
> +		io->auto_resp = false;
> +	}
> +
> +	/* save this transfer length */
> +	io->xfer_req = io->wire_len;
> +
> +	/* Adjust the transferred count to account for overrun
> +	 * when the residual is calculated in efct_scsi_send_resp
> +	 */
> +	io->transferred += residual;
> +
> +	/* Adjust the SGL size if there is overrun */
> +
> +	if (residual) {
> +		struct efct_scsi_sgl  *sgl_ptr = &io->sgl[sgl_count - 1];
> +
> +		while (residual) {
> +			size_t len = sgl_ptr->len;
> +
> +			if (len > residual) {
> +				sgl_ptr->len = len - residual;
> +				residual = 0;
> +			} else {
> +				sgl_ptr->len = 0;
> +				residual -= len;
> +				io->sgl_count--;
> +			}
> +			sgl_ptr--;
> +		}
> +	}
> +
> +	/* Set latency and WQ steering */
> +	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
> +	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
> +				EFCT_SCSI_WQ_STEERING_SHIFT;
> +	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
> +				EFCT_SCSI_WQ_CLASS_SHIFT;
> +
> +	if (efct->xport) {
> +		struct efct_xport *xport = efct->xport;
> +
> +		if (type == EFCT_HW_IO_TARGET_READ) {
> +			xport->fcp_stats.input_requests++;
> +			xport->fcp_stats.input_bytes += xwire_len;
> +		} else if (type == EFCT_HW_IO_TARGET_WRITE) {
> +			xport->fcp_stats.output_requests++;
> +			xport->fcp_stats.output_bytes += xwire_len;
> +		}
> +	}
> +	return efct_scsi_io_dispatch(io, efct_target_io_cb);
> +}
> +
> +int
> +efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
> +	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
> +	efct_scsi_io_cb_t cb, void *arg)
> +{
> +	return efct_scsi_xfer_data(io, flags, sgl, sgl_count,
> +				 len, EFCT_HW_IO_TARGET_READ,
> +				 enable_tsend_auto_resp(io->efct), cb, arg);
> +}
> +
> +int
> +efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
> +	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
> +	efct_scsi_io_cb_t cb, void *arg)
> +{
> +	return efct_scsi_xfer_data(io, flags, sgl, sgl_count, len,
> +				 EFCT_HW_IO_TARGET_WRITE,
> +				 enable_treceive_auto_resp(io->efct), cb, arg);
> +}
> +
> +int
> +efct_scsi_send_resp(struct efct_io *io, u32 flags,
> +		    struct efct_scsi_cmd_resp *rsp,
> +		   efct_scsi_io_cb_t cb, void *arg)
> +{
> +	struct efct *efct;
> +	int residual;
> +	bool auto_resp = true;		/* Always try auto resp */
> +	u8 scsi_status = 0;
> +	u16 scsi_status_qualifier = 0;
> +	u8 *sense_data = NULL;
> +	u32 sense_data_length = 0;
> +
> +	efct = io->efct;
> +
> +	if (rsp) {
> +		scsi_status = rsp->scsi_status;
> +		scsi_status_qualifier = rsp->scsi_status_qualifier;
> +		sense_data = rsp->sense_data;
> +		sense_data_length = rsp->sense_data_length;
> +		residual = rsp->residual;
> +	} else {
> +		residual = io->exp_xfer_len - io->transferred;
> +	}
> +
> +	io->wire_len = 0;
> +	io->hio_type = EFCT_HW_IO_TARGET_RSP;
> +
> +	io->scsi_tgt_cb = cb;
> +	io->scsi_tgt_cb_arg = arg;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
> +	io->iparam.fcp_tgt.offset = 0;
> +	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
> +	io->iparam.fcp_tgt.timeout = io->timeout;
> +
> +	/* Set low latency queueing request */
> +	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
> +	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
> +				EFCT_SCSI_WQ_STEERING_SHIFT;
> +	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
> +				EFCT_SCSI_WQ_CLASS_SHIFT;
> +
> +	if (scsi_status != 0 || residual || sense_data_length) {
> +		struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
> +		u8 *sns_data = io->rspbuf.virt + sizeof(*fcprsp);
> +
> +		if (!fcprsp) {
> +			efc_log_err(efct, "NULL response buffer\n");
> +			return EFC_FAIL;
> +		}
> +
> +		auto_resp = false;
> +
> +		memset(fcprsp, 0, sizeof(*fcprsp));
> +
> +		io->wire_len += sizeof(*fcprsp);
> +
> +		fcprsp->resp.fr_status = scsi_status;
> +		fcprsp->resp.fr_retry_delay =
> +			cpu_to_be16(scsi_status_qualifier);
> +
> +		/* set residual status if necessary */
> +		if (residual != 0) {
> +			/* FCP: if data transferred is less than the
> +			 * amount expected, then this is an underflow.
> +			 * If data transferred would have been greater
> +			 * than the amount expected this is an overflow
> +			 */
> +			if (residual > 0) {
> +				fcprsp->resp.fr_flags |= FCP_RESID_UNDER;
> +				fcprsp->ext.fr_resid =	cpu_to_be32(residual);
> +			} else {
> +				fcprsp->resp.fr_flags |= FCP_RESID_OVER;
> +				fcprsp->ext.fr_resid = cpu_to_be32(-residual);
> +			}
> +		}
> +
> +		if (EFCT_SCSI_SNS_BUF_VALID(sense_data) && sense_data_length) {
> +			if (sense_data_length > SCSI_SENSE_BUFFERSIZE) {
> +				efc_log_err(efct, "Sense exceeds max size.\n");
> +				return EFC_FAIL;
> +			}
> +
> +			fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
> +			memcpy(sns_data, sense_data, sense_data_length);
> +			fcprsp->ext.fr_sns_len = cpu_to_be32(sense_data_length);
> +			io->wire_len += sense_data_length;
> +		}
> +
> +		io->sgl[0].addr = io->rspbuf.phys;
> +		//io->sgl[0].dif_addr = 0;

Debug leftover.

> +		io->sgl[0].len = io->wire_len;
> +		io->sgl_count = 1;
> +	}
> +
> +	if (auto_resp)
> +		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
> +
> +	return efct_scsi_io_dispatch(io, efct_target_io_cb);
> +}
> +
> +static int
> +efct_target_bls_resp_cb(struct efct_hw_io *hio,
> +			struct efc_remote_node *rnode,
> +	u32 length, int status, u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +	struct efct *efct;
> +	enum efct_scsi_io_status bls_status;
> +
> +	efct = io->efct;
> +
> +	/* BLS isn't really a "SCSI" concept, but use SCSI status */
> +	if (status) {
> +		io_error_log(io, "s=%#x x=%#x\n", status, ext_status);
> +		bls_status = EFCT_SCSI_STATUS_ERROR;
> +	} else {
> +		bls_status = EFCT_SCSI_STATUS_GOOD;
> +	}
> +
> +	if (io->bls_cb) {
> +		efct_scsi_io_cb_t bls_cb = io->bls_cb;
> +		void *bls_cb_arg = io->bls_cb_arg;
> +
> +		io->bls_cb = NULL;
> +		io->bls_cb_arg = NULL;
> +
> +		/* invoke callback */
> +		bls_cb(io, bls_status, 0, bls_cb_arg);
> +	}
> +
> +	efct_scsi_check_pending(efct);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_target_send_bls_resp(struct efct_io *io,
> +			  efct_scsi_io_cb_t cb, void *arg)
> +{
> +	int rc;
> +	struct fc_ba_acc *acc;
> +
> +	/* fill out IO structure with everything needed to send BA_ACC */
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.bls.ox_id = io->init_task_tag;
> +	io->iparam.bls.rx_id = io->abort_rx_id;
> +
> +	acc = (void *)io->iparam.bls.payload;
> +
> +	memset(io->iparam.bls.payload, 0,
> +	       sizeof(io->iparam.bls.payload));
> +	acc->ba_ox_id = cpu_to_be16(io->iparam.bls.ox_id);
> +	acc->ba_rx_id = cpu_to_be16(io->iparam.bls.rx_id);
> +	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
> +
> +	/* generic io fields have already been populated */
> +
> +	/* set type and BLS-specific fields */
> +	io->io_type = EFCT_IO_TYPE_BLS_RESP;
> +	io->display_name = "bls_rsp";
> +	io->hio_type = EFCT_HW_BLS_ACC;
> +	io->bls_cb = cb;
> +	io->bls_cb_arg = arg;
> +
> +	/* dispatch IO */
> +	rc = efct_scsi_io_dispatch(io, efct_target_bls_resp_cb);
> +	return rc;
> +}
> +
> +int
> +efct_scsi_send_tmf_resp(struct efct_io *io,
> +			enum efct_scsi_tmf_resp rspcode,
> +			u8 addl_rsp_info[3],
> +			efct_scsi_io_cb_t cb, void *arg)
> +{
> +	int rc = -1;

Again the error code topic.

> +	struct fcp_resp_with_ext *fcprsp = NULL;
> +	struct fcp_resp_rsp_info *rspinfo = NULL;
> +	u8 fcp_rspcode;
> +
> +	io->wire_len = 0;
> +
> +	switch (rspcode) {
> +	case EFCT_SCSI_TMF_FUNCTION_COMPLETE:
> +		fcp_rspcode = FCP_TMF_CMPL;
> +		break;
> +	case EFCT_SCSI_TMF_FUNCTION_SUCCEEDED:
> +	case EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND:
> +		fcp_rspcode = FCP_TMF_CMPL;
> +		break;
> +	case EFCT_SCSI_TMF_FUNCTION_REJECTED:
> +		fcp_rspcode = FCP_TMF_REJECTED;
> +		break;
> +	case EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER:
> +		fcp_rspcode = FCP_TMF_INVALID_LUN;
> +		break;
> +	case EFCT_SCSI_TMF_SERVICE_DELIVERY:
> +		fcp_rspcode = FCP_TMF_FAILED;
> +		break;
> +	default:
> +		fcp_rspcode = FCP_TMF_REJECTED;
> +		break;
> +	}
> +
> +	io->hio_type = EFCT_HW_IO_TARGET_RSP;
> +
> +	io->scsi_tgt_cb = cb;
> +	io->scsi_tgt_cb_arg = arg;
> +
> +	if (io->tmf_cmd == EFCT_SCSI_TMF_ABORT_TASK) {
> +		rc = efct_target_send_bls_resp(io, cb, arg);
> +		return rc;
> +	}
> +
> +	/* populate the FCP TMF response */
> +	fcprsp = io->rspbuf.virt;
> +	memset(fcprsp, 0, sizeof(*fcprsp));
> +
> +	fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
> +
> +	rspinfo = io->rspbuf.virt + sizeof(*fcprsp);
> +	if (addl_rsp_info) {
> +		memcpy(rspinfo->_fr_resvd, addl_rsp_info,
> +		       sizeof(rspinfo->_fr_resvd));
> +	}
> +	rspinfo->rsp_code = fcp_rspcode;
> +
> +	io->wire_len = sizeof(*fcprsp) + sizeof(*rspinfo);
> +
> +	fcprsp->ext.fr_rsp_len = cpu_to_be32(sizeof(*rspinfo));
> +
> +	io->sgl[0].addr = io->rspbuf.phys;
> +	io->sgl[0].dif_addr = 0;
> +	io->sgl[0].len = io->wire_len;
> +	io->sgl_count = 1;
> +
> +	memset(&io->iparam, 0, sizeof(io->iparam));
> +	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
> +	io->iparam.fcp_tgt.offset = 0;
> +	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
> +	io->iparam.fcp_tgt.timeout = io->timeout;
> +
> +	rc = efct_scsi_io_dispatch(io, efct_target_io_cb);
> +
> +	return rc;
> +}
> +
> +static int
> +efct_target_abort_cb(struct efct_hw_io *hio,
> +		     struct efc_remote_node *rnode,
> +		     u32 length, int status,
> +		     u32 ext_status, void *app)
> +{
> +	struct efct_io *io = app;
> +	struct efct *efct;
> +	enum efct_scsi_io_status scsi_status;
> +
> +	efct = io->efct;
> +
> +	if (io->abort_cb) {

I would again move the following part into a new function.

> +		efct_scsi_io_cb_t abort_cb = io->abort_cb;
> +		void *abort_cb_arg = io->abort_cb_arg;
> +
> +		io->abort_cb = NULL;
> +		io->abort_cb_arg = NULL;
> +
> +		switch (status) {
> +		case SLI4_FC_WCQE_STATUS_SUCCESS:
> +			scsi_status = EFCT_SCSI_STATUS_GOOD;
> +			break;
> +		case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
> +			switch (ext_status) {
> +			case SLI4_FC_LOCAL_REJECT_NO_XRI:
> +				scsi_status = EFCT_SCSI_STATUS_NO_IO;
> +				break;
> +			case SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS:
> +				scsi_status =
> +					EFCT_SCSI_STATUS_ABORT_IN_PROGRESS;
> +				break;
> +			default:
> +				/*we have seen 0x15 (abort in progress)*/
> +				scsi_status = EFCT_SCSI_STATUS_ERROR;
> +				break;
> +			}
> +			break;
> +		case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
> +			scsi_status = EFCT_SCSI_STATUS_CHECK_RESPONSE;
> +			break;
> +		default:
> +			scsi_status = EFCT_SCSI_STATUS_ERROR;
> +			break;
> +		}
> +		/* invoke callback */
> +		abort_cb(io->io_to_abort, scsi_status, 0, abort_cb_arg);
> +	}
> +
> +	/* done with IO to abort,efct_ref_get(): efct_scsi_tgt_abort_io() */
> +	kref_put(&io->io_to_abort->ref, io->io_to_abort->release);
> +
> +	efct_io_pool_io_free(efct->xport->io_pool, io);
> +
> +	efct_scsi_check_pending(efct);
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg)
> +{
> +	struct efct *efct;
> +	struct efct_xport *xport;
> +	int rc;
> +	struct efct_io *abort_io = NULL;
> +
> +	efct = io->efct;
> +	xport = efct->xport;
> +
> +	/* take a reference on IO being aborted */
> +	if ((kref_get_unless_zero(&io->ref) == 0)) {

The double parentheses are not needed.

> +		/* command no longer active */
> +		scsi_io_printf(io, "command no longer active\n");
> +		return EFC_FAIL;
> +	}
> +
> +	/*
> +	 * allocate a new IO to send the abort request. Use efct_io_alloc()
> +	 * directly, as we need an IO object that will not fail allocation
> +	 * due to allocations being disabled (in efct_scsi_io_alloc())
> +	 */
> +	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
> +	if (!abort_io) {
> +		atomic_add_return(1, &xport->io_alloc_failed_count);
> +		kref_put(&io->ref, io->release);
> +		return EFC_FAIL;
> +	}
> +
> +	/* Save the target server callback and argument */
> +	/* set generic fields */
> +	abort_io->cmd_tgt = true;
> +	abort_io->node = io->node;
> +
> +	/* set type and abort-specific fields */
> +	abort_io->io_type = EFCT_IO_TYPE_ABORT;
> +	abort_io->display_name = "tgt_abort";
> +	abort_io->io_to_abort = io;
> +	abort_io->send_abts = false;
> +	abort_io->abort_cb = cb;
> +	abort_io->abort_cb_arg = arg;
> +
> +	/* now dispatch IO */
> +	rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb);
> +	if (rc)
> +		kref_put(&io->ref, io->release);
> +	return rc;
> +}
> +
> +void
> +efct_scsi_io_complete(struct efct_io *io)
> +{
> +	if (io->io_free) {
> +		efc_log_test(io->efct,
> +			      "Got completion for non-busy io with tag 0x%x\n",
> +		    io->tag);

code alignment

> +		return;
> +	}
> +
> +	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
> +	kref_put(&io->ref, io->release);
> +}
> diff --git a/drivers/scsi/elx/efct/efct_scsi.h b/drivers/scsi/elx/efct/efct_scsi.h
> new file mode 100644
> index 000000000000..28204c5fde69
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_scsi.h
> @@ -0,0 +1,235 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#if !defined(__EFCT_SCSI_H__)
> +#define __EFCT_SCSI_H__
> +#include <scsi/scsi_host.h>
> +#include <scsi/scsi_transport_fc.h>
> +
> +/* efct_scsi_rcv_cmd() efct_scsi_rcv_tmf() flags */
> +#define EFCT_SCSI_CMD_DIR_IN		(1 << 0)
> +#define EFCT_SCSI_CMD_DIR_OUT		(1 << 1)
> +#define EFCT_SCSI_CMD_SIMPLE		(1 << 2)
> +#define EFCT_SCSI_CMD_HEAD_OF_QUEUE	(1 << 3)
> +#define EFCT_SCSI_CMD_ORDERED		(1 << 4)
> +#define EFCT_SCSI_CMD_UNTAGGED		(1 << 5)
> +#define EFCT_SCSI_CMD_ACA		(1 << 6)
> +#define EFCT_SCSI_FIRST_BURST_ERR	(1 << 7)
> +#define EFCT_SCSI_FIRST_BURST_ABORTED	(1 << 8)
> +
> +/* efct_scsi_send_rd_data/recv_wr_data/send_resp flags */
> +#define EFCT_SCSI_LAST_DATAPHASE	(1 << 0)
> +#define EFCT_SCSI_NO_AUTO_RESPONSE	(1 << 1)
> +#define EFCT_SCSI_LOW_LATENCY		(1 << 2)
> +
> +#define EFCT_SCSI_SNS_BUF_VALID(sense)	((sense) && \
> +			(0x70 == (((const u8 *)(sense))[0] & 0x70)))
> +
> +#define EFCT_SCSI_WQ_STEERING_SHIFT	16
> +#define EFCT_SCSI_WQ_STEERING_MASK	(0xf << EFCT_SCSI_WQ_STEERING_SHIFT)
> +#define EFCT_SCSI_WQ_STEERING_CLASS	(0 << EFCT_SCSI_WQ_STEERING_SHIFT)
> +#define EFCT_SCSI_WQ_STEERING_REQUEST	(1 << EFCT_SCSI_WQ_STEERING_SHIFT)
> +#define EFCT_SCSI_WQ_STEERING_CPU	(2 << EFCT_SCSI_WQ_STEERING_SHIFT)
> +
> +#define EFCT_SCSI_WQ_CLASS_SHIFT		(20)
> +#define EFCT_SCSI_WQ_CLASS_MASK		(0xf << EFCT_SCSI_WQ_CLASS_SHIFT)
> +#define EFCT_SCSI_WQ_CLASS(x)		((x & EFCT_SCSI_WQ_CLASS_MASK) << \
> +						EFCT_SCSI_WQ_CLASS_SHIFT)
> +
> +#define EFCT_SCSI_WQ_CLASS_LOW_LATENCY	1
> +
> +struct efct_scsi_cmd_resp {
> +	u8 scsi_status;			/* SCSI status */

kerneldoc

> +	u16 scsi_status_qualifier;	/* SCSI status qualifier */
> +	/* pointer to response data buffer */
> +	u8 *response_data;
> +	/* length of response data buffer (bytes) */
> +	u32 response_data_length;
> +	u8 *sense_data;		/* pointer to sense data buffer */
> +	/* length of sense data buffer (bytes) */
> +	u32 sense_data_length;
> +	/* command residual (not used for target), positive value
> +	 * indicates an underflow, negative value indicates overflow
> +	 */
> +	int residual;
> +	/* Command response length received in wcqe */
> +	u32 response_wire_length;
> +};
> +
> +struct efct_vport {
> +	struct efct		*efct;
> +	bool			is_vport;
> +	struct fc_host_statistics fc_host_stats;
> +	struct Scsi_Host	*shost;
> +	struct fc_vport		*fc_vport;
> +	u64			npiv_wwpn;
> +	u64			npiv_wwnn;
> +};
> +
> +/* Status values returned by IO callbacks */
> +enum efct_scsi_io_status {
> +	EFCT_SCSI_STATUS_GOOD = 0,
> +	EFCT_SCSI_STATUS_ABORTED,
> +	EFCT_SCSI_STATUS_ERROR,
> +	EFCT_SCSI_STATUS_DIF_GUARD_ERR,
> +	EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR,
> +	EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR,
> +	EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR,
> +	EFCT_SCSI_STATUS_PROTOCOL_CRC_ERROR,
> +	EFCT_SCSI_STATUS_NO_IO,
> +	EFCT_SCSI_STATUS_ABORT_IN_PROGRESS,
> +	EFCT_SCSI_STATUS_CHECK_RESPONSE,
> +	EFCT_SCSI_STATUS_COMMAND_TIMEOUT,
> +	EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED,
> +	EFCT_SCSI_STATUS_SHUTDOWN,
> +	EFCT_SCSI_STATUS_NEXUS_LOST,
> +};
> +
> +struct efct_io;
> +struct efc_node;
> +struct efc_domain;
> +struct efc_sli_port;
> +
> +/* Callback used by send_rd_data(), recv_wr_data(), send_resp() */
> +typedef int (*efct_scsi_io_cb_t)(struct efct_io *io,
> +				    enum efct_scsi_io_status status,
> +				    u32 flags, void *arg);
> +
> +/* Callback used by send_rd_io(), send_wr_io() */
> +typedef int (*efct_scsi_rsp_io_cb_t)(struct efct_io *io,
> +			enum efct_scsi_io_status status,
> +			struct efct_scsi_cmd_resp *rsp,
> +			u32 flags, void *arg);
> +
> +/* efct_scsi_cb_t flags */
> +#define EFCT_SCSI_IO_CMPL		(1 << 0)
> +/* IO completed, response sent */
> +#define EFCT_SCSI_IO_CMPL_RSP_SENT	(1 << 1)
> +#define EFCT_SCSI_IO_ABORTED		(1 << 2)
> +
> +/* efct_scsi_recv_tmf() request values */
> +enum efct_scsi_tmf_cmd {
> +	EFCT_SCSI_TMF_ABORT_TASK = 1,
> +	EFCT_SCSI_TMF_QUERY_TASK_SET,
> +	EFCT_SCSI_TMF_ABORT_TASK_SET,
> +	EFCT_SCSI_TMF_CLEAR_TASK_SET,
> +	EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT,
> +	EFCT_SCSI_TMF_LOGICAL_UNIT_RESET,
> +	EFCT_SCSI_TMF_CLEAR_ACA,
> +	EFCT_SCSI_TMF_TARGET_RESET,
> +};
> +
> +/* efct_scsi_send_tmf_resp() response values */
> +enum efct_scsi_tmf_resp {
> +	EFCT_SCSI_TMF_FUNCTION_COMPLETE = 1,
> +	EFCT_SCSI_TMF_FUNCTION_SUCCEEDED,
> +	EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND,
> +	EFCT_SCSI_TMF_FUNCTION_REJECTED,
> +	EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER,
> +	EFCT_SCSI_TMF_SERVICE_DELIVERY,
> +};
> +
> +struct efct_scsi_sgl {
> +	uintptr_t	addr;
> +	uintptr_t	dif_addr;
> +	size_t		len;
> +};
> +
> +/* Return values for calls from base driver to libefc */
> +#define EFCT_SCSI_CALL_COMPLETE	0 /* All work is done */
> +#define EFCT_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
> +
> +enum efct_scsi_io_role {
> +	EFCT_SCSI_IO_ROLE_ORIGINATOR,
> +	EFCT_SCSI_IO_ROLE_RESPONDER,
> +};
> +
> +void efct_scsi_io_alloc_enable(struct efc *efc, struct efc_node *node);
> +void efct_scsi_io_alloc_disable(struct efc *efc, struct efc_node *node);
> +extern struct efct_io *
> +efct_scsi_io_alloc(struct efc_node *node, enum efct_scsi_io_role);
> +void efct_scsi_io_free(struct efct_io *io);
> +struct efct_io *efct_io_get_instance(struct efct *efct, u32 index);
> +
> +int efct_scsi_tgt_driver_init(void);
> +int efct_scsi_tgt_driver_exit(void);
> +int efct_scsi_tgt_new_device(struct efct *efct);
> +int efct_scsi_tgt_del_device(struct efct *efct);
> +int
> +efct_scsi_tgt_new_domain(struct efc *efc, struct efc_domain *domain);
> +void
> +efct_scsi_tgt_del_domain(struct efc *efc, struct efc_domain *domain);
> +int
> +efct_scsi_tgt_new_sport(struct efc *efc, struct efc_sli_port *sport);
> +void
> +efct_scsi_tgt_del_sport(struct efc *efc, struct efc_sli_port *sport);
> +int
> +efct_scsi_validate_initiator(struct efc *efc, struct efc_node *node);
> +int
> +efct_scsi_new_initiator(struct efc *efc, struct efc_node *node);
> +
> +enum efct_scsi_del_initiator_reason {
> +	EFCT_SCSI_INITIATOR_DELETED,
> +	EFCT_SCSI_INITIATOR_MISSING,
> +};
> +
> +extern int
> +efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
> +			int reason);

extern not needed

> +extern int
> +efct_scsi_recv_cmd(struct efct_io *io, uint64_t lun, u8 *cdb,
> +		   u32 cdb_len, u32 flags);
> +extern int
> +efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
> +		   enum efct_scsi_tmf_cmd cmd, struct efct_io *abortio,
> +		  u32 flags);
> +
> +extern int
> +efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
> +		      struct efct_scsi_sgl *sgl, u32 sgl_count,
> +		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
> +extern int
> +efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
> +		      struct efct_scsi_sgl *sgl, u32 sgl_count,
> +		      u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
> +extern int
> +efct_scsi_send_resp(struct efct_io *io, u32 flags,
> +		    struct efct_scsi_cmd_resp *rsp, efct_scsi_io_cb_t cb,
> +		   void *arg);
> +extern int
> +efct_scsi_send_tmf_resp(struct efct_io *io,
> +			enum efct_scsi_tmf_resp rspcode,
> +		       u8 addl_rsp_info[3],
> +		       efct_scsi_io_cb_t cb, void *arg);
> +extern int
> +efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg);
> +
> +void efct_scsi_io_complete(struct efct_io *io);
> +
> +int efct_scsi_reg_fc_transport(void);
> +int efct_scsi_release_fc_transport(void);
> +int efct_scsi_new_device(struct efct *efct);
> +int efct_scsi_del_device(struct efct *efct);
> +void _efct_scsi_io_free(struct kref *arg);
> +
> +int efct_scsi_send_tmf(struct efc_node *node,
> +		       struct efct_io *io,
> +		       struct efct_io *io_to_abort, u32 lun,
> +		       enum efct_scsi_tmf_cmd tmf,
> +		       struct efct_scsi_sgl *sgl,
> +		       u32 sgl_count, u32 len,
> +		       efct_scsi_rsp_io_cb_t cb, void *arg);
> +
> +extern int
> +efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost);
> +extern struct efct_vport *
> +efct_scsi_new_vport(struct efct *efct, struct device *dev);
> +
> +int efct_scsi_io_dispatch(struct efct_io *io, void *cb);
> +int efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb);
> +void efct_scsi_check_pending(struct efct *efct);
> +
> +#endif /* __EFCT_SCSI_H__ */
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-12  4:57   ` Bart Van Assche
@ 2020-04-16 11:48     ` Daniel Wagner
  2020-04-22  4:20     ` James Smart
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 11:48 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: James Smart, linux-scsi, maier, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 09:57:45PM -0700, Bart Van Assche wrote:
> On 2020-04-11 20:32, James Smart wrote:
> > +	return EFC_SUCCESS;
> > +}
> 
> Redefining 0 is unusual in the Linux kernel. I prefer to see "return 0;"
> instead of "return ${DRIVER_NAME}_SUCCESS;".

BTW, I agree with Bart. I think we all know how to interpret 0 and
-ENOMEM etc. Adding this syntactic sugar distracts, in my opinion, more
than it helps. And considering that the elx driver is using both
variants in an inconsistent way, I suggest using the usual Linux kernel
style.


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-12  3:32 ` [PATCH v3 24/31] elx: efct: LIO backend interface routines James Smart
  2020-04-12  4:57   ` Bart Van Assche
  2020-04-16  8:02   ` Hannes Reinecke
@ 2020-04-16 12:34   ` Daniel Wagner
  2020-04-22  4:20     ` James Smart
  2 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 12:34 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:56PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> LIO backend template registration and template functions.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Fixed as per the review comments.
>   Removed vport pend list. Pending list is tracked based on the sport
>     assigned to vport.
> ---
>  drivers/scsi/elx/efct/efct_lio.c | 1840 ++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_lio.h |  178 ++++
>  2 files changed, 2018 insertions(+)
>  create mode 100644 drivers/scsi/elx/efct/efct_lio.c
>  create mode 100644 drivers/scsi/elx/efct/efct_lio.h
> 
> diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
> new file mode 100644
> index 000000000000..c784ef9dbbee
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_lio.c
> @@ -0,0 +1,1840 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#include <target/target_core_base.h>
> +#include <target/target_core_fabric.h>
> +#include "efct_driver.h"
> +#include "efct_lio.h"
> +
> +static struct workqueue_struct *lio_wq;
> +
> +static int
> +efct_format_wwn(char *str, size_t len, const char *pre, u64 wwn)
> +{
> +	u8 a[8];
> +
> +	put_unaligned_be64(wwn, a);
> +	return snprintf(str, len, "%s%8phC", pre, a);
> +}
> +
> +static int
> +efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
> +{
> +	int num;
> +	u8 b[8];
> +
> +	if (npiv) {
> +		num = sscanf(name,
> +			     "%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx",
> +			     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6],
> +			     &b[7]);
> +	} else {
> +		num = sscanf(name,
> +		      "%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx",
> +			     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6],
> +			     &b[7]);
> +	}
> +
> +	if (num != 8)
> +		return -EINVAL;
> +
> +	*wwp = get_unaligned_be64(b);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn)
> +{
> +	unsigned int cnt = size;
> +	int rc;
> +
> +	*wwpn = *wwnn = 0;
> +	if (name[cnt - 1] == '\n' || name[cnt - 1] == 0)
> +		cnt--;
> +
> +	/* validate we have enough characters for WWPN */
> +	if ((cnt != (16 + 1 + 16)) || (name[16] != ':'))
> +		return -EINVAL;
> +
> +	rc = efct_lio_parse_wwn(&name[0], wwpn, 1);
> +	if (rc)
> +		return rc;
> +
> +	rc = efct_lio_parse_wwn(&name[17], wwnn, 1);
> +	if (rc)
> +		return rc;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static ssize_t
> +efct_lio_tpg_enable_show(struct config_item *item, char *page)
> +{
> +	struct se_portal_group *se_tpg = to_tpg(item);
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
> +}
> +
> +static ssize_t
> +efct_lio_tpg_enable_store(struct config_item *item, const char *page,
> +			  size_t count)
> +{
> +	struct se_portal_group *se_tpg = to_tpg(item);
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +	struct efct *efct;
> +	struct efc *efc;
> +	unsigned long op;
> +	int ret;
> +
> +	if (!tpg->sport || !tpg->sport->efct) {
> +		pr_err("%s: Unable to find EFCT device\n", __func__);
> +		return -EINVAL;
> +	}
> +
> +	efct = tpg->sport->efct;
> +	efc = efct->efcport;
> +
> +	if (kstrtoul(page, 0, &op) < 0)
> +		return -EINVAL;
> +
> +	if (op == 1) {
> +		atomic_set(&tpg->enabled, 1);
> +		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
> +
> +		ret = efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE);
> +		if (ret) {
> +			efct->tgt_efct.lio_sport = NULL;
> +			efc_log_test(efct, "cannot bring port online\n");
> +			return ret;
> +		}
> +	} else if (op == 0) {
> +		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
> +
> +		if (efc->domain && efc->domain->sport)
> +			efct_scsi_tgt_del_sport(efc, efc->domain->sport);
> +
> +		atomic_set(&tpg->enabled, 0);
> +	} else {
> +		return -EINVAL;
> +	}
> +
> +	return count;
> +}
> +
> +static ssize_t
> +efct_lio_npiv_tpg_enable_show(struct config_item *item, char *page)
> +{
> +	struct se_portal_group *se_tpg = to_tpg(item);
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
> +}
> +
> +static ssize_t
> +efct_lio_npiv_tpg_enable_store(struct config_item *item, const char *page,
> +			       size_t count)
> +{
> +	struct se_portal_group *se_tpg = to_tpg(item);
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +	struct efct_lio_vport *lio_vport = tpg->vport;
> +	struct efct *efct;
> +	struct efc *efc;
> +	int ret;
> +	unsigned long op;
> +
> +	if (kstrtoul(page, 0, &op) < 0)
> +		return -EINVAL;
> +
> +	if (!lio_vport) {
> +		pr_err("Unable to find vport\n");
> +		return -EINVAL;
> +	}
> +
> +	efct = lio_vport->efct;
> +	efc = efct->efcport;
> +
> +	if (op == 1) {
> +		atomic_set(&tpg->enabled, 1);
> +		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
> +
> +		if (efc->domain) {
> +			ret = efc_sport_vport_new(efc->domain,
> +						  lio_vport->npiv_wwpn,
> +						  lio_vport->npiv_wwnn,
> +						  U32_MAX, false, true,
> +						  NULL, NULL);
> +			if (ret != 0) {
> +				efc_log_err(efct, "Failed to create Vport\n");
> +				return ret;
> +			}
> +			return count;
> +		}
> +
> +		if (!(efc_vport_create_spec(efc, lio_vport->npiv_wwnn,
> +					    lio_vport->npiv_wwpn, U32_MAX,
> +					    false, true, NULL, NULL)))
> +			return -ENOMEM;
> +
> +	} else if (op == 0) {
> +		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
> +
> +		atomic_set(&tpg->enabled, 0);
> +		/* only physical sport should exist, free lio_sport
> +		 * allocated in efct_lio_make_sport
> +		 */
> +		if (efc->domain) {
> +			efc_sport_vport_del(efct->efcport, efc->domain,
> +					    lio_vport->npiv_wwpn,
> +					    lio_vport->npiv_wwnn);
> +			return count;
> +		}
> +	} else {
> +		return -EINVAL;
> +	}
> +	return count;
> +}
> +
> +static bool efct_lio_node_is_initiator(struct efc_node *node)

Why is the function name not on a new line?
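
I.e., to match the rest of the file:

	static bool
	efct_lio_node_is_initiator(struct efc_node *node)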

> +{
> +	if (!node)
> +		return false;
> +
> +	if (node->rnode.fc_id && node->rnode.fc_id != FC_FID_FLOGI &&
> +	    node->rnode.fc_id != FC_FID_DIR_SERV &&
> +	    node->rnode.fc_id != FC_FID_FCTRL) {
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
> +static int  efct_lio_tgt_session_data(struct efct *efct, u64 wwpn,
> +				      char *buf, int size)

same here

> +{
> +	struct efc_sli_port *sport = NULL;
> +	struct efc_node *node = NULL;
> +	struct efc *efc = efct->efcport;
> +	u16 loop_id = 0;
> +	int off = 0, rc = 0;
> +
> +	if (!efc->domain) {
> +		efc_log_err(efct, "failed to find efct/domain\n");
> +		return EFC_FAIL;
> +	}
> +
> +	list_for_each_entry(sport, &efc->domain->sport_list, list_entry) {
> +		if (sport->wwpn != wwpn)
> +			continue;
> +		list_for_each_entry(node, &sport->node_list,
> +				    list_entry) {
> +			/* Dump only remote NPORT sessions */
> +			if (!efct_lio_node_is_initiator(node))
> +				continue;
> +
> +			rc = snprintf(buf + off, size - off,
> +				"0x%016llx,0x%08x,0x%04x\n",
> +				get_unaligned_be64(node->wwpn),
> +				node->rnode.fc_id, loop_id);
> +			if (rc < 0)
> +				break;
> +			off += rc;
> +		}
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int efct_debugfs_session_open(struct inode *inode, struct file *filp)

Function name on a new line as for the rest of the file, or the other
way around. Just a bit more consistency please.

> +{
> +	struct efct_lio_sport *sport = inode->i_private;
> +	int size = 17 * PAGE_SIZE; /* 34 byte per session*2048 sessions */

	34 * SZ_2K

> +
> +	if (!(filp->f_mode & FMODE_READ)) {
> +		filp->private_data = sport;
> +		return EFC_SUCCESS;
> +	}
> +
> +	filp->private_data = kmalloc(size, GFP_KERNEL);
> +	if (!filp->private_data)
> +		return -ENOMEM;
> +
> +	memset(filp->private_data, 0, size);
> +	efct_lio_tgt_session_data(sport->efct, sport->wwpn, filp->private_data,
> +				  size);
> +	return EFC_SUCCESS;
> +}
> +
> +static int efct_debugfs_session_close(struct inode *inode, struct file *filp)
> +{
> +	if (filp->f_mode & FMODE_READ)
> +		kfree(filp->private_data);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static ssize_t efct_debugfs_session_read(struct file *filp, char __user *buf,
> +					 size_t count, loff_t *ppos)
> +{
> +	if (!(filp->f_mode & FMODE_READ))
> +		return -EPERM;

New line here as in the function above, or no newline above. Whatever
the preferred style is (this holds for the whole file).

> +	return simple_read_from_buffer(buf, count, ppos, filp->private_data,
> +				       strlen(filp->private_data));
> +}
> +
> +static int efct_npiv_debugfs_session_open(struct inode *inode,
> +					  struct file *filp)
> +{
> +	struct efct_lio_vport *sport = inode->i_private;
> +	int size = 17 * PAGE_SIZE; /* 34 byte per session*2048 sessions */

34 * SZ_2K

> +
> +	if (!(filp->f_mode & FMODE_READ)) {
> +		filp->private_data = sport;
> +		return EFC_SUCCESS;
> +	}
> +
> +	filp->private_data = kmalloc(size, GFP_KERNEL);
> +	if (!filp->private_data)
> +		return -ENOMEM;
> +
> +	memset(filp->private_data, 0, size);
> +	efct_lio_tgt_session_data(sport->efct, sport->npiv_wwpn,
> +				  filp->private_data, size);
> +	return EFC_SUCCESS;
> +}
> +
> +static const struct file_operations efct_debugfs_session_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= efct_debugfs_session_open,
> +	.release	= efct_debugfs_session_close,
> +	.read		= efct_debugfs_session_read,
> +	.llseek		= default_llseek,
> +};
> +
> +static const struct file_operations efct_npiv_debugfs_session_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= efct_npiv_debugfs_session_open,
> +	.release	= efct_debugfs_session_close,
> +	.read		= efct_debugfs_session_read,
> +	.llseek		= default_llseek,
> +};
> +
> +static char *efct_lio_get_fabric_wwn(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->sport->wwpn_str;
> +}
> +
> +static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->vport->wwpn_str;
> +}
> +
> +static u16 efct_lio_get_tag(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpgt;
> +}
> +
> +static u16 efct_lio_get_npiv_tag(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpgt;
> +}
> +
> +static int efct_lio_check_demo_mode(struct se_portal_group *se_tpg)
> +{
> +	return 1;
> +}
> +
> +static int efct_lio_check_demo_mode_cache(struct se_portal_group *se_tpg)
> +{
> +	return 1;
> +}
> +
> +static int efct_lio_check_demo_write_protect(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpg_attrib.demo_mode_write_protect;
> +}
> +
> +static int
> +efct_lio_npiv_check_demo_write_protect(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpg_attrib.demo_mode_write_protect;
> +}
> +
> +static int efct_lio_check_prod_write_protect(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpg_attrib.prod_mode_write_protect;
> +}
> +
> +static int
> +efct_lio_npiv_check_prod_write_protect(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpg_attrib.prod_mode_write_protect;
> +}
> +
> +static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
> +{
> +	return EFC_SUCCESS;
> +}
> +
> +static int efct_lio_check_stop_free(struct se_cmd *se_cmd)
> +{
> +	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);
> +	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_CHK_STOP_FREE);
> +	return target_put_sess_cmd(se_cmd);
> +}
> +
> +static int
> +efct_lio_abort_tgt_cb(struct efct_io *io,
> +		      enum efct_scsi_io_status scsi_status,
> +		      u32 flags, void *arg)
> +{
> +	efct_lio_io_printf(io, "%s\n", __func__);
> +	return EFC_SUCCESS;
> +}
> +
> +/* command has been aborted, cleanup here */
> +static void efct_lio_aborted_task(struct se_cmd *se_cmd)
> +{
> +	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);

	struct efct_scsi_tgt_io *ocp =
		container_of(se_cmd, struct efct_scsi_tgt_io, cmd);

IMO this is easier to read

> +	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_ABORTED_TASK);
> +
> +	if (!(se_cmd->transport_state & CMD_T_ABORTED) || ocp->rsp_sent)
> +		return;
> +
> +	ocp->aborting = true;
> +	ocp->err = EFCT_SCSI_STATUS_ABORTED;
> +	/* terminate the exchange */
> +	efct_scsi_tgt_abort_io(io, efct_lio_abort_tgt_cb, NULL);
> +}
> +
> +/* Called when se_cmd's ref count goes to 0 */
> +static void efct_lio_release_cmd(struct se_cmd *se_cmd)
> +{
> +	struct efct_scsi_tgt_io *ocp = container_of(se_cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);

same as above

> +	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
> +	struct efct *efct = io->efct;
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_RELEASE_CMD);
> +	efct_scsi_io_complete(io);
> +	atomic_sub_return(1, &efct->tgt_efct.ios_in_use);
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
> +}
> +
> +static void efct_lio_close_session(struct se_session *se_sess)
> +{
> +	struct efc_node *node = se_sess->fabric_sess_ptr;
> +	struct efct *efct = NULL;
> +	int rc;
> +
> +	pr_debug("se_sess=%p node=%p", se_sess, node);
> +
> +	if (!node) {
> +		pr_debug("node is NULL");
> +		return;
> +	}
> +
> +	efct = node->efc->base;
> +	rc = efct_xport_control(efct->xport,
> +				EFCT_XPORT_POST_NODE_EVENT, node,
> +		EFCT_XPORT_SHUTDOWN, NULL);

alignment of "EFCT_XPORT_SHUTDOWN, NULL);" is a bit off
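
E.g.:

	rc = efct_xport_control(efct->xport,
				EFCT_XPORT_POST_NODE_EVENT, node,
				EFCT_XPORT_SHUTDOWN, NULL);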


> +	if (rc != 0) {
> +		efc_log_test(efct,
> +			      "Failed to shutdown session %p node %p\n",
> +			     se_sess, node);
> +		return;
> +	}
> +}
> +
> +static u32 efct_lio_sess_get_index(struct se_session *se_sess)
> +{
> +	return EFC_SUCCESS;
> +}
> +
> +static void efct_lio_set_default_node_attrs(struct se_node_acl *nacl)
> +{
> +}
> +
> +static int efct_lio_get_cmd_state(struct se_cmd *se_cmd)
> +{
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_lio_sg_map(struct efct_io *io)
> +{
> +	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
> +	struct se_cmd *cmd = &ocp->cmd;
> +
> +	ocp->seg_map_cnt = pci_map_sg(io->efct->pcidev, cmd->t_data_sg,
> +				      cmd->t_data_nents, cmd->data_direction);
> +	if (ocp->seg_map_cnt == 0)
> +		return -EFAULT;
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_lio_sg_unmap(struct efct_io *io)
> +{
> +	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
> +	struct se_cmd *cmd = &ocp->cmd;
> +
> +	if (WARN_ON(!ocp->seg_map_cnt || !cmd->t_data_sg))
> +		return;
> +
> +	pci_unmap_sg(io->efct->pcidev, cmd->t_data_sg,
> +		     ocp->seg_map_cnt, cmd->data_direction);
> +	ocp->seg_map_cnt = 0;
> +}
> +
> +static int
> +efct_lio_status_done(struct efct_io *io,
> +		     enum efct_scsi_io_status scsi_status,
> +		     u32 flags, void *arg)
> +{
> +	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RSP_DONE);
> +	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
> +		efct_lio_io_printf(io, "callback completed with error=%d\n",
> +				   scsi_status);
> +		ocp->err = scsi_status;
> +	}
> +	if (ocp->seg_map_cnt)
> +		efct_lio_sg_unmap(io);
> +
> +	efct_lio_io_printf(io, "status=%d, err=%d flags=0x%x, dir=%d\n",
> +				scsi_status, ocp->err, flags, ocp->ddir);
> +
> +	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_lio_datamove_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
> +		       u32 flags, void *arg);
> +
> +static int
> +efct_lio_write_pending(struct se_cmd *cmd)
> +{
> +	struct efct_scsi_tgt_io *ocp = container_of(cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);

as mentioned above, the container_of could start a new line

> +	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
> +	struct efct_scsi_sgl *sgl = io->sgl;
> +	struct scatterlist *sg;
> +	u32 flags = 0, cnt, curcnt;
> +	u64 length = 0;
> +	int rc = 0;
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_WRITE_PENDING);
> +	efct_lio_io_printf(io, "trans_state=0x%x se_cmd_flags=0x%x\n",
> +			  cmd->transport_state, cmd->se_cmd_flags);
> +
> +	if (ocp->seg_cnt == 0) {
> +		ocp->seg_cnt = cmd->t_data_nents;
> +		ocp->cur_seg = 0;
> +		if (efct_lio_sg_map(io)) {
> +			efct_lio_io_printf(io, "efct_lio_sg_map failed\n");
> +			return -EFAULT;
> +		}
> +	}
> +	curcnt = (ocp->seg_map_cnt - ocp->cur_seg);
> +	curcnt = (curcnt < io->sgl_allocated) ? curcnt : io->sgl_allocated;
> +	/* find current sg */
> +	for (cnt = 0, sg = cmd->t_data_sg; cnt < ocp->cur_seg; cnt++,
> +	     sg = sg_next(sg))
> +		;

Please add a comment here, a single ';' is a bit 'thin', e.g.

	for (...)
		/* do nothing */ ;


> +
> +	for (cnt = 0; cnt < curcnt; cnt++, sg = sg_next(sg)) {
> +		sgl[cnt].addr = sg_dma_address(sg);
> +		sgl[cnt].dif_addr = 0;
> +		sgl[cnt].len = sg_dma_len(sg);
> +		length += sgl[cnt].len;
> +		ocp->cur_seg++;
> +	}
> +	if (ocp->cur_seg == ocp->seg_cnt)
> +		flags = EFCT_SCSI_LAST_DATAPHASE;
> +	rc = efct_scsi_recv_wr_data(io, flags, sgl, curcnt, length,
> +				    efct_lio_datamove_done, NULL);
> +	return rc;
> +}
> +
> +static int
> +efct_lio_queue_data_in(struct se_cmd *cmd)
> +{
> +	struct efct_scsi_tgt_io *ocp = container_of(cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);
> +	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
> +	struct efct_scsi_sgl *sgl = io->sgl;
> +	struct scatterlist *sg = NULL;
> +	uint flags = 0, cnt = 0, curcnt = 0;
> +	u64 length = 0;
> +	int rc = 0;
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_DATA_IN);
> +
> +	if (ocp->seg_cnt == 0) {
> +		if (cmd->data_length) {
> +			ocp->seg_cnt = cmd->t_data_nents;
> +			ocp->cur_seg = 0;
> +			if (efct_lio_sg_map(io)) {
> +				efct_lio_io_printf(io,
> +						   "efct_lio_sg_map failed\n");
> +				return -EAGAIN;
> +			}
> +		} else {
> +			/* If command length is 0, send the response status */
> +			struct efct_scsi_cmd_resp rsp;
> +
> +			memset(&rsp, 0, sizeof(rsp));
> +			efct_lio_io_printf(io,
> +					   "cmd : %p length 0, send status\n",
> +					   cmd);
> +			return efct_scsi_send_resp(io, 0, &rsp,
> +						  efct_lio_status_done, NULL);
> +		}
> +	}
> +	curcnt = min(ocp->seg_map_cnt - ocp->cur_seg, io->sgl_allocated);
> +
> +	while (cnt < curcnt) {
> +		sg = &cmd->t_data_sg[ocp->cur_seg];
> +		sgl[cnt].addr = sg_dma_address(sg);
> +		sgl[cnt].dif_addr = 0;
> +		if (ocp->transferred_len + sg_dma_len(sg) >= cmd->data_length)
> +			sgl[cnt].len = cmd->data_length - ocp->transferred_len;
> +		else
> +			sgl[cnt].len = sg_dma_len(sg);
> +
> +		ocp->transferred_len += sgl[cnt].len;
> +		length += sgl[cnt].len;
> +		ocp->cur_seg++;
> +		cnt++;
> +		if (ocp->transferred_len == cmd->data_length)
> +			break;
> +	}
> +
> +	if (ocp->transferred_len == cmd->data_length) {
> +		flags = EFCT_SCSI_LAST_DATAPHASE;
> +		ocp->seg_cnt = ocp->cur_seg;
> +	}
> +
> +	/* If there is residual, disable Auto Good Response */
> +	if (cmd->residual_count)
> +		flags |= EFCT_SCSI_NO_AUTO_RESPONSE;
> +
> +	rc = efct_scsi_send_rd_data(io, flags, sgl, curcnt, length,
> +				    efct_lio_datamove_done, NULL);
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RD_DATA);
> +	return rc;
> +}
> +
> +static int
> +efct_lio_datamove_done(struct efct_io *io,
> +		       enum efct_scsi_io_status scsi_status,
> +		      u32 flags, void *arg)
> +{
> +	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
> +	struct se_cmd *cmd = &io->tgt_io.cmd;
> +	int rc;
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_DATA_DONE);
> +	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
> +		efct_lio_io_printf(io, "callback completed with error=%d\n",
> +				   scsi_status);
> +		ocp->err = scsi_status;
> +	}
> +	efct_lio_io_printf(io, "seg_map_cnt=%d\n", ocp->seg_map_cnt);
> +	if (ocp->seg_map_cnt) {
> +		if (ocp->err == EFCT_SCSI_STATUS_GOOD &&
> +		    ocp->cur_seg < ocp->seg_cnt) {
> +			efct_lio_io_printf(io, "continuing cmd at segm=%d\n",
> +					  ocp->cur_seg);
> +			if (ocp->ddir == DMA_TO_DEVICE)
> +				rc = efct_lio_write_pending(&ocp->cmd);
> +			else
> +				rc = efct_lio_queue_data_in(&ocp->cmd);
> +			if (rc == 0)
> +				return EFC_SUCCESS;
> +			ocp->err = EFCT_SCSI_STATUS_ERROR;
> +			efct_lio_io_printf(io, "could not continue command\n");
> +		}
> +		efct_lio_sg_unmap(io);
> +	}
> +
> +	if (io->tgt_io.aborting) {
> +		efct_lio_io_printf(io, "IO done aborted\n");
> +		return EFC_SUCCESS;
> +	}
> +
> +	if (ocp->ddir == DMA_TO_DEVICE) {
> +		efct_lio_io_printf(io, "Write done, trans_state=0x%x\n",
> +				  io->tgt_io.cmd.transport_state);
> +		if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
> +			transport_generic_request_failure(&io->tgt_io.cmd,
> +					TCM_CHECK_CONDITION_ABORT_CMD);
> +			efct_set_lio_io_state(io,
> +				EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE);
> +		} else {
> +			efct_set_lio_io_state(io,
> +						EFCT_LIO_STATE_TGT_EXECUTE_CMD);
> +			target_execute_cmd(&io->tgt_io.cmd);
> +		}
> +	} else {
> +		if ((flags & EFCT_SCSI_IO_CMPL_RSP_SENT) == 0) {
> +			struct efct_scsi_cmd_resp rsp;
> +			/* send check condition if an error occurred */
> +			memset(&rsp, 0, sizeof(rsp));
> +			rsp.scsi_status = cmd->scsi_status;
> +			rsp.sense_data = (uint8_t *)io->tgt_io.sense_buffer;
> +			rsp.sense_data_length = cmd->scsi_sense_length;
> +
> +			/* Check for residual underrun or overrun */
> +			if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
> +				rsp.residual = -cmd->residual_count;
> +			else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
> +				rsp.residual = cmd->residual_count;
> +
> +			rc = efct_scsi_send_resp(io, 0, &rsp,
> +						 efct_lio_status_done, NULL);
> +			efct_set_lio_io_state(io,
> +						EFCT_LIO_STATE_SCSI_SEND_RSP);
> +			if (rc != 0) {
> +				efct_lio_io_printf(io,
> +						   "Read done, failed to send rsp, rc=%d\n",
> +				      rc);
> +				transport_generic_free_cmd(&io->tgt_io.cmd, 0);
> +				efct_set_lio_io_state(io,
> +					EFCT_LIO_STATE_TGT_GENERIC_FREE);
> +			} else {
> +				ocp->rsp_sent = true;
> +			}
> +		} else {
> +			ocp->rsp_sent = true;
> +			transport_generic_free_cmd(&io->tgt_io.cmd, 0);
> +			efct_set_lio_io_state(io,
> +					EFCT_LIO_STATE_TGT_GENERIC_FREE);
> +		}
> +	}
> +	return EFC_SUCCESS;
> +}

The last two functions could be split up into a few smaller functions,
which would help the readability quite a bit IMO. Many of these
functions tend to be right-side heavy, which leads to many funky new
lines.
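
Something along these lines (untested, helper name just for
illustration) would already help, e.g. pulling the "send response or
free the command" path out of efct_lio_datamove_done():

	static void
	efct_lio_send_datamove_rsp(struct efct_io *io, struct se_cmd *cmd)
	{
		struct efct_scsi_cmd_resp rsp;
		int rc;

		/* send check condition if an error occurred */
		memset(&rsp, 0, sizeof(rsp));
		rsp.scsi_status = cmd->scsi_status;
		rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
		rsp.sense_data_length = cmd->scsi_sense_length;

		/* Check for residual underrun or overrun */
		if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
			rsp.residual = -cmd->residual_count;
		else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
			rsp.residual = cmd->residual_count;

		rc = efct_scsi_send_resp(io, 0, &rsp,
					 efct_lio_status_done, NULL);
		efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
		if (rc) {
			efct_lio_io_printf(io,
					   "failed to send rsp, rc=%d\n", rc);
			transport_generic_free_cmd(&io->tgt_io.cmd, 0);
			efct_set_lio_io_state(io,
					      EFCT_LIO_STATE_TGT_GENERIC_FREE);
			return;
		}
		io->tgt_io.rsp_sent = true;
	}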

> +
> +static int
> +efct_lio_tmf_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
> +		  u32 flags, void *arg)
> +{
> +	efct_lio_tmfio_printf(io, "cmd=%p status=%d, flags=0x%x\n",
> +			      &io->tgt_io.cmd, scsi_status, flags);
> +
> +	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_lio_null_tmf_done(struct efct_io *tmfio,
> +		       enum efct_scsi_io_status scsi_status,
> +		      u32 flags, void *arg)
> +{
> +	efct_lio_tmfio_printf(tmfio, "cmd=%p status=%d, flags=0x%x\n",
> +			      &tmfio->tgt_io.cmd, scsi_status, flags);
> +
> +	/* free struct efct_io only, no active se_cmd */
> +	efct_scsi_io_complete(tmfio);
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_lio_queue_status(struct se_cmd *cmd)
> +{
> +	struct efct_scsi_cmd_resp rsp;
> +	struct efct_scsi_tgt_io *ocp = container_of(cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);
> +	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
> +	int rc = 0;
> +
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_STATUS);
> +	efct_lio_io_printf(io,
> +		"status=0x%x trans_state=0x%x se_cmd_flags=0x%x sns_len=%d\n",
> +		cmd->scsi_status, cmd->transport_state, cmd->se_cmd_flags,
> +		cmd->scsi_sense_length);
> +
> +	memset(&rsp, 0, sizeof(rsp));
> +	rsp.scsi_status = cmd->scsi_status;
> +	rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
> +	rsp.sense_data_length = cmd->scsi_sense_length;
> +
> +	/* Check for residual underrun or overrun, mark negitive value for
> +	 * underrun to recognize in HW
> +	 */
> +	if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
> +		rsp.residual = -cmd->residual_count;
> +	else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
> +		rsp.residual = cmd->residual_count;
> +
> +	rc = efct_scsi_send_resp(io, 0, &rsp, efct_lio_status_done, NULL);
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
> +	if (rc == 0)
> +		ocp->rsp_sent = true;
> +	return rc;
> +}
> +
> +static void efct_lio_queue_tm_rsp(struct se_cmd *cmd)
> +{
> +	struct efct_scsi_tgt_io *ocp = container_of(cmd,
> +						     struct efct_scsi_tgt_io,
> +						     cmd);
> +	struct efct_io *tmfio = container_of(ocp, struct efct_io, tgt_io);
> +	struct se_tmr_req *se_tmr = cmd->se_tmr_req;
> +	u8 rspcode;
> +
> +	efct_lio_tmfio_printf(tmfio, "cmd=%p function=0x%x tmr->response=%d\n",
> +			      cmd, se_tmr->function, se_tmr->response);
> +	switch (se_tmr->response) {
> +	case TMR_FUNCTION_COMPLETE:
> +		rspcode = EFCT_SCSI_TMF_FUNCTION_COMPLETE;
> +		break;
> +	case TMR_TASK_DOES_NOT_EXIST:
> +		rspcode = EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND;
> +		break;
> +	case TMR_LUN_DOES_NOT_EXIST:
> +		rspcode = EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER;
> +		break;
> +	case TMR_FUNCTION_REJECTED:
> +	default:
> +		rspcode = EFCT_SCSI_TMF_FUNCTION_REJECTED;
> +		break;
> +	}
> +	efct_scsi_send_tmf_resp(tmfio, rspcode, NULL, efct_lio_tmf_done, NULL);
> +}
> +
> +static struct efct *efct_find_wwpn(u64 wwpn)
> +{
> +	int efctidx;
> +	struct efct *efct;
> +
> +	 /* Search for the HBA that has this WWPN */
> +	for (efctidx = 0; efctidx < MAX_EFCT_DEVICES; efctidx++) {
> +
> +		efct = efct_devices[efctidx];
> +		if (!efct)
> +			continue;
> +
> +		if (wwpn == efct_get_wwpn(&efct->hw))
> +			break;
> +	}
> +
> +	if (efctidx == MAX_EFCT_DEVICES)
> +		return NULL;
> +
> +	return efct_devices[efctidx];
> +}
> +
> +static struct dentry *
> +efct_create_dfs_session(struct efct *efct, void *data, u8 npiv)
> +{
> +	char name[16];
> +
> +	if (!efct->sess_debugfs_dir)
> +		return NULL;
> +
> +	if (!npiv) {
> +		snprintf(name, sizeof(name), "efct-sessions-%d",
> +			 efct->instance_index);
> +		return debugfs_create_file(name, 0644, efct->sess_debugfs_dir,
> +					data, &efct_debugfs_session_fops);
> +	}
> +
> +	snprintf(name, sizeof(name), "sessions-npiv-%d", efct->instance_index);
> +
> +	return debugfs_create_file(name, 0644, efct->sess_debugfs_dir, data,
> +					&efct_npiv_debugfs_session_fops);
> +}
> +
> +static struct se_wwn *
> +efct_lio_make_sport(struct target_fabric_configfs *tf,
> +		    struct config_group *group, const char *name)
> +{
> +	struct efct_lio_sport *lio_sport;
> +	struct efct *efct;
> +	int ret;
> +	u64 wwpn;
> +
> +	ret = efct_lio_parse_wwn(name, &wwpn, 0);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	efct = efct_find_wwpn(wwpn);
> +	if (!efct) {
> +		pr_err("cannot find EFCT for base wwpn %s\n", name);
> +		return ERR_PTR(-ENXIO);
> +	}
> +
> +	lio_sport = kzalloc(sizeof(*lio_sport), GFP_KERNEL);
> +	if (!lio_sport)
> +		return ERR_PTR(-ENOMEM);
> +
> +	lio_sport->efct = efct;
> +	lio_sport->wwpn = wwpn;
> +	efct_format_wwn(lio_sport->wwpn_str, sizeof(lio_sport->wwpn_str),
> +			"naa.", wwpn);
> +	efct->tgt_efct.lio_sport = lio_sport;
> +
> +	lio_sport->sessions = efct_create_dfs_session(efct, lio_sport, 0);
> +	return &lio_sport->sport_wwn;
> +}
> +
> +static struct se_wwn *
> +efct_lio_npiv_make_sport(struct target_fabric_configfs *tf,
> +			 struct config_group *group, const char *name)
> +{
> +	struct efct_lio_vport *lio_vport;
> +	struct efct *efct;
> +	int ret = -1;
> +	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
> +	char *p, *pbuf, tmp[128];
> +	struct efct_lio_vport_list_t *vport_list;
> +	struct fc_vport *new_fc_vport;
> +	struct fc_vport_identifiers vport_id;
> +	unsigned long flags = 0;
> +
> +	snprintf(tmp, sizeof(tmp), "%s", name);
> +	pbuf = &tmp[0];
> +
> +	p = strsep(&pbuf, "@");
> +
> +	if (!p || !pbuf) {
> +		pr_err("Unable to find separator operator(@)\n");
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	ret = efct_lio_parse_wwn(p, &p_wwpn, 0);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	ret = efct_lio_parse_npiv_wwn(pbuf, strlen(pbuf), &npiv_wwpn,
> +				      &npiv_wwnn);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	efct = efct_find_wwpn(p_wwpn);
> +	if (!efct) {
> +		pr_err("cannot find EFCT for base wwpn %s\n", name);
> +		return ERR_PTR(-ENXIO);
> +	}
> +
> +	lio_vport = kzalloc(sizeof(*lio_vport), GFP_KERNEL);
> +	if (!lio_vport)
> +		return ERR_PTR(-ENOMEM);
> +
> +	lio_vport->efct = efct;
> +	lio_vport->wwpn = p_wwpn;
> +	lio_vport->npiv_wwpn = npiv_wwpn;
> +	lio_vport->npiv_wwnn = npiv_wwnn;
> +
> +	efct_format_wwn(lio_vport->wwpn_str, sizeof(lio_vport->wwpn_str),
> +			"naa.", npiv_wwpn);
> +
> +	vport_list = kmalloc(sizeof(*vport_list), GFP_KERNEL);
> +	if (!vport_list) {
> +		kfree(lio_vport);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	memset(vport_list, 0, sizeof(struct efct_lio_vport_list_t));
> +	vport_list->lio_vport = lio_vport;
> +	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +	INIT_LIST_HEAD(&vport_list->list_entry);
> +	list_add_tail(&vport_list->list_entry, &efct->tgt_efct.vport_list);
> +	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +
> +	lio_vport->sessions = efct_create_dfs_session(efct, lio_vport, 1);
> +
> +	memset(&vport_id, 0, sizeof(vport_id));
> +	vport_id.port_name = npiv_wwpn;
> +	vport_id.node_name = npiv_wwnn;
> +	vport_id.roles = FC_PORT_ROLE_FCP_INITIATOR;
> +	vport_id.vport_type = FC_PORTTYPE_NPIV;
> +	vport_id.disable = false;
> +
> +	new_fc_vport = fc_vport_create(efct->shost, 0, &vport_id);
> +	if (!new_fc_vport) {
> +		efc_log_err(efct, "fc_vport_create failed\n");
> +		kfree(lio_vport);
> +		kfree(vport_list);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	lio_vport->fc_vport = new_fc_vport;
> +
> +	return &lio_vport->vport_wwn;
> +}
> +
> +static void
> +efct_lio_drop_sport(struct se_wwn *wwn)
> +{
> +	struct efct_lio_sport *lio_sport = container_of(wwn,
> +					    struct efct_lio_sport, sport_wwn);
> +	struct efct *efct = lio_sport->efct;
> +
> +	/* only physical sport should exist, free lio_sport allocated
> +	 * in efct_lio_make_sport.
> +	 */
> +
> +	debugfs_remove(lio_sport->sessions);
> +	lio_sport->sessions = NULL;
> +
> +	kfree(efct->tgt_efct.lio_sport);
> +	efct->tgt_efct.lio_sport = NULL;
> +}
> +
> +static void
> +efct_lio_npiv_drop_sport(struct se_wwn *wwn)
> +{
> +	struct efct_lio_vport *lio_vport = container_of(wwn,
> +			struct efct_lio_vport, vport_wwn);
> +	struct efct_lio_vport_list_t *vport, *next_vport;
> +	struct efct *efct = lio_vport->efct;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +
> +	debugfs_remove(lio_vport->sessions);
> +
> +	if (lio_vport->fc_vport)
> +		fc_vport_terminate(lio_vport->fc_vport);
> +
> +	lio_vport->sessions = NULL;
> +
> +	list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
> +				 list_entry) {
> +		if (vport->lio_vport == lio_vport) {
> +			list_del(&vport->list_entry);
> +			kfree(vport->lio_vport);
> +			kfree(vport);
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +}
> +
> +static struct se_portal_group *
> +efct_lio_make_tpg(struct se_wwn *wwn, const char *name)
> +{
> +	struct efct_lio_sport *lio_sport = container_of(wwn,
> +					    struct efct_lio_sport, sport_wwn);
> +	struct efct_lio_tpg *tpg;
> +	struct efct *efct;
> +	unsigned long n;
> +	int ret;
> +
> +	if (strstr(name, "tpgt_") != name)
> +		return ERR_PTR(-EINVAL);
> +	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
> +		return ERR_PTR(-EINVAL);
> +
> +	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
> +	if (!tpg)
> +		return ERR_PTR(-ENOMEM);
> +
> +	tpg->sport = lio_sport;
> +	tpg->tpgt = n;
> +	atomic_set(&tpg->enabled, 0);
> +
> +	tpg->tpg_attrib.generate_node_acls = 1;
> +	tpg->tpg_attrib.demo_mode_write_protect = 1;
> +	tpg->tpg_attrib.cache_dynamic_acls = 1;
> +	tpg->tpg_attrib.demo_mode_login_only = 1;
> +	tpg->tpg_attrib.session_deletion_wait = 1;
> +
> +	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
> +	if (ret < 0) {
> +		kfree(tpg);
> +		return NULL;
> +	}
> +	efct = lio_sport->efct;
> +	efct->tgt_efct.tpg = tpg;
> +	efc_log_debug(efct, "create portal group %d\n", tpg->tpgt);
> +
> +	return &tpg->tpg;
> +}
> +
> +static void
> +efct_lio_drop_tpg(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	efc_log_debug(tpg->sport->efct, "drop portal group %d\n", tpg->tpgt);
> +	tpg->sport->efct->tgt_efct.tpg = NULL;
> +	core_tpg_deregister(se_tpg);
> +	kfree(tpg);
> +}
> +
> +static struct se_portal_group *
> +efct_lio_npiv_make_tpg(struct se_wwn *wwn, const char *name)
> +{
> +	struct efct_lio_vport *lio_vport = container_of(wwn,
> +			struct efct_lio_vport, vport_wwn);
> +	struct efct_lio_tpg *tpg;
> +	struct efct *efct;
> +	unsigned long n;
> +	int ret;
> +
> +	efct = lio_vport->efct;
> +	if (strstr(name, "tpgt_") != name)
> +		return ERR_PTR(-EINVAL);
> +	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
> +		return ERR_PTR(-EINVAL);
> +
> +	if (n != 1) {
> +		efc_log_err(efct, "Invalid tpgt index: %ld provided\n", n);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
> +	if (!tpg)
> +		return ERR_PTR(-ENOMEM);
> +
> +	tpg->vport = lio_vport;
> +	tpg->tpgt = n;
> +	atomic_set(&tpg->enabled, 0);
> +
> +	tpg->tpg_attrib.generate_node_acls = 1;
> +	tpg->tpg_attrib.demo_mode_write_protect = 1;
> +	tpg->tpg_attrib.cache_dynamic_acls = 1;
> +	tpg->tpg_attrib.demo_mode_login_only = 1;
> +	tpg->tpg_attrib.session_deletion_wait = 1;
> +
> +	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
> +
> +	if (ret < 0) {
> +		kfree(tpg);
> +		return NULL;
> +	}
> +	lio_vport->tpg = tpg;
> +	efc_log_debug(efct, "create vport portal group %d\n", tpg->tpgt);
> +
> +	return &tpg->tpg;
> +}
> +
> +static void
> +efct_lio_npiv_drop_tpg(struct se_portal_group *se_tpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,
> +						struct efct_lio_tpg, tpg);
> +
> +	efc_log_debug(tpg->vport->efct, "drop npiv portal group %d\n",
> +		       tpg->tpgt);
> +	core_tpg_deregister(se_tpg);
> +	kfree(tpg);
> +}
> +
> +static int
> +efct_lio_init_nodeacl(struct se_node_acl *se_nacl, const char *name)
> +{
> +	struct efct_lio_nacl *nacl;
> +	u64 wwnn;
> +
> +	if (efct_lio_parse_wwn(name, &wwnn, 0) < 0)
> +		return -EINVAL;
> +
> +	nacl = container_of(se_nacl, struct efct_lio_nacl, se_node_acl);
> +	nacl->nport_wwnn = wwnn;
> +
> +	efct_format_wwn(nacl->nport_name, sizeof(nacl->nport_name), "", wwnn);
> +	return EFC_SUCCESS;
> +}
> +
> +static int efct_lio_check_demo_mode_login_only(struct se_portal_group *stpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpg_attrib.demo_mode_login_only;
> +}
> +
> +static int
> +efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *stpg)
> +{
> +	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
> +
> +	return tpg->tpg_attrib.demo_mode_login_only;
> +}
> +
> +static struct efct_lio_tpg *
> +efct_get_vport_tpg(struct efc_node *node)
> +{
> +	struct efct *efct;
> +	u64 wwpn = node->sport->wwpn;
> +	struct efct_lio_vport_list_t *vport, *next;
> +	struct efct_lio_vport *lio_vport = NULL;
> +	struct efct_lio_tpg *tpg = NULL;
> +	unsigned long flags = 0;
> +
> +	efct = node->efc->base;
> +	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
> +	list_for_each_entry_safe(vport, next, &efct->tgt_efct.vport_list,
> +				 list_entry) {
> +		lio_vport = vport->lio_vport;
> +		if (wwpn && lio_vport &&
> +		    lio_vport->npiv_wwpn == wwpn) {

this fits on one line
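
I.e.:

		if (wwpn && lio_vport && lio_vport->npiv_wwpn == wwpn) {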

> +			efc_log_test(efct, "found tpg on vport\n");
> +			tpg = lio_vport->tpg;
> +			break;
> +		}
> +	}
> +	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
> +	return tpg;
> +}
> +
> +static int efct_session_cb(struct se_portal_group *se_tpg,
> +			   struct se_session *se_sess, void *private)
> +{
> +	struct efc_node *node = private;
> +	struct efct_scsi_tgt_node *tgt_node = NULL;
> +
> +	tgt_node = kzalloc(sizeof(*tgt_node), GFP_KERNEL);
> +	if (!tgt_node)
> +		return EFC_FAIL;
> +
> +	tgt_node->session = se_sess;
> +	node->tgt_node = tgt_node;
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int efct_scsi_tgt_new_device(struct efct *efct)
> +{
> +	int rc = 0;
> +	u32 total_ios;
> +
> +	/* Get the max settings */
> +	efct->tgt_efct.max_sge = sli_get_max_sge(&efct->hw.sli);
> +	efct->tgt_efct.max_sgl = sli_get_max_sgl(&efct->hw.sli);
> +
> +	/* initialize IO watermark fields */
> +	atomic_set(&efct->tgt_efct.ios_in_use, 0);
> +	total_ios = efct->hw.config.n_io;
> +	efc_log_debug(efct, "total_ios=%d\n", total_ios);
> +	efct->tgt_efct.watermark_min =
> +			(total_ios * EFCT_WATERMARK_LOW_PCT) / 100;
> +	efct->tgt_efct.watermark_max =
> +			(total_ios * EFCT_WATERMARK_HIGH_PCT) / 100;
> +	atomic_set(&efct->tgt_efct.io_high_watermark,
> +		   efct->tgt_efct.watermark_max);
> +	atomic_set(&efct->tgt_efct.watermark_hit, 0);
> +	atomic_set(&efct->tgt_efct.initiator_count, 0);
> +
> +	lio_wq = create_singlethread_workqueue("efct_lio_worker");
> +	if (!lio_wq) {
> +		efc_log_err(efct, "workqueue create failed\n");
> +		return -ENOMEM;
> +	}
> +
> +	spin_lock_init(&efct->tgt_efct.efct_lio_lock);
> +	INIT_LIST_HEAD(&efct->tgt_efct.vport_list);
> +
> +	return rc;
> +}
> +
> +int efct_scsi_tgt_del_device(struct efct *efct)
> +{
> +	flush_workqueue(lio_wq);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_scsi_tgt_new_domain(struct efc *efc, struct efc_domain *domain)
> +{
> +	return 0;
> +}
> +
> +void
> +efct_scsi_tgt_del_domain(struct efc *efc, struct efc_domain *domain)
> +{
> +}
> +
> +/* Called by the libefc when new sli port (sport) is discovered */
> +int
> +efct_scsi_tgt_new_sport(struct efc *efc, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = sport->efc->base;
> +
> +	efc_log_debug(efct, "New SPORT: %s bound to %s\n", sport->display_name,
> +		       efct->tgt_efct.lio_sport->wwpn_str);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Called by the libefc when a sport goes away. */
> +void
> +efct_scsi_tgt_del_sport(struct efc *efc, struct efc_sli_port *sport)
> +{
> +	efc_log_debug(efc, "Del SPORT: %s\n",
> +		       sport->display_name);

fits on one line
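
I.e.:

	efc_log_debug(efc, "Del SPORT: %s\n", sport->display_name);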

> +}
> +/* Called by libefc to validate node. */
> +int
> +efct_scsi_validate_initiator(struct efc *efc, struct efc_node *node)
> +{
> +	return 1;
> +}
> +
> +static void efct_lio_setup_session(struct work_struct *work)
> +{
> +	struct efct_lio_wq_data *wq_data = container_of(work,
> +					   struct efct_lio_wq_data, work);
> +	struct efct *efct = wq_data->efct;
> +	struct efc_node *node = wq_data->ptr;
> +	char wwpn[WWN_NAME_LEN];
> +	struct efct_lio_tpg *tpg = NULL;
> +	struct se_portal_group *se_tpg;
> +	struct se_session *se_sess;
> +	int watermark;
> +	int initiator_count;
> +
> +	/* Check to see if it's belongs to vport,
> +	 * if not get physical port
> +	 */
> +	tpg = efct_get_vport_tpg(node);
> +	if (tpg) {
> +		se_tpg = &tpg->tpg;
> +	} else if (efct->tgt_efct.tpg) {
> +		tpg = efct->tgt_efct.tpg;
> +		se_tpg = &tpg->tpg;
> +	} else {
> +		efc_log_err(efct, "failed to init session\n");
> +		return;
> +	}
> +
> +	/*
> +	 * Format the FCP Initiator port_name into colon
> +	 * separated values to match the format by our explicit
> +	 * ConfigFS NodeACLs.
> +	 */
> +	efct_format_wwn(wwpn, sizeof(wwpn), "",
> +			efc_node_get_wwpn(node));

fits on one line
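
I.e.:

	efct_format_wwn(wwpn, sizeof(wwpn), "", efc_node_get_wwpn(node));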

> +
> +	se_sess = target_setup_session(se_tpg, 0, 0,
> +				       TARGET_PROT_NORMAL,
> +				       wwpn, node,
> +				       efct_session_cb);
> +	if (IS_ERR(se_sess)) {
> +		efc_log_err(efct, "failed to setup session\n");
> +		return;
> +	}
> +
> +	efc_log_debug(efct, "new initiator se_sess=%p node=%p\n",
> +		      se_sess, node);
> +
> +	/* update IO watermark: increment initiator count */
> +	initiator_count =
> +	atomic_add_return(1, &efct->tgt_efct.initiator_count);
> +	watermark = (efct->tgt_efct.watermark_max -
> +	     initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
> +	watermark = (efct->tgt_efct.watermark_min > watermark) ?
> +		efct->tgt_efct.watermark_min : watermark;

too many brackets in the last four lines.
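
E.g. something like (untested):

	initiator_count = atomic_add_return(1,
					    &efct->tgt_efct.initiator_count);
	watermark = efct->tgt_efct.watermark_max -
		    initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR;
	watermark = max(efct->tgt_efct.watermark_min, watermark);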

> +	atomic_set(&efct->tgt_efct.io_high_watermark,
> +		   watermark);
> +
> +	kfree(wq_data);
> +}
> +
> +/* Called by the libefc when new a new remote initiator is discovered */
> +int efct_scsi_new_initiator(struct efc *efc, struct efc_node *node)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_lio_wq_data *wq_data;
> +
> +	/*
> +	 * Since LIO only supports initiator validation at thread level,
> +	 * we are open minded and accept all callers.
> +	 */
> +	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
> +	if (!wq_data)
> +		return -ENOMEM;
> +
> +	wq_data->ptr = node;
> +	wq_data->efct = efct;
> +	INIT_WORK(&wq_data->work, efct_lio_setup_session);
> +	queue_work(lio_wq, &wq_data->work);
> +	return EFC_SUCCESS;
> +}
> +
> +static void efct_lio_remove_session(struct work_struct *work)
> +{
> +	struct efct_lio_wq_data *wq_data = container_of(work,
> +					   struct efct_lio_wq_data, work);
> +	struct efct *efct = wq_data->efct;
> +	struct efc_node *node = wq_data->ptr;
> +	struct efct_scsi_tgt_node *tgt_node = NULL;
> +	struct se_session *se_sess;
> +
> +	tgt_node = node->tgt_node;
> +	se_sess = tgt_node->session;
> +
> +	if (!se_sess) {
> +		/* base driver has sent back-to-back requests
> +		 * to unreg session with no intervening
> +		 * register
> +		 */
> +		efc_log_test(efct,
> +			      "unreg session for NULL session\n");
> +		efc_scsi_del_initiator_complete(node->efc,
> +						node);
> +		return;
> +	}
> +
> +	efc_log_debug(efct, "unreg session se_sess=%p node=%p\n",
> +		       se_sess, node);
> +
> +	/* first flag all session commands to complete */
> +	target_sess_cmd_list_set_waiting(se_sess);
> +
> +	/* now wait for session commands to complete */
> +	target_wait_for_sess_cmds(se_sess);
> +	target_remove_session(se_sess);
> +
> +	kfree(node->tgt_node);
> +
> +	node->tgt_node = NULL;
> +	efc_scsi_del_initiator_complete(node->efc, node);
> +
> +	kfree(wq_data);
> +}
> +
> +/* Called by the libefc when an initiator goes away. */
> +int efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,
> +			int reason)
> +{
> +	struct efct *efct = node->efc->base;
> +	struct efct_lio_wq_data *wq_data;
> +	int watermark;
> +	int initiator_count;
> +
> +	if (reason == EFCT_SCSI_INITIATOR_MISSING)
> +		return EFCT_SCSI_CALL_COMPLETE;
> +
> +	wq_data = kmalloc(sizeof(*wq_data), GFP_ATOMIC);
> +	if (!wq_data)
> +		return EFCT_SCSI_CALL_COMPLETE;
> +
> +	memset(wq_data, 0, sizeof(*wq_data));
> +	wq_data->ptr = node;
> +	wq_data->efct = efct;
> +	INIT_WORK(&wq_data->work, efct_lio_remove_session);
> +	queue_work(lio_wq, &wq_data->work);
> +
> +	/*
> +	 * update IO watermark: decrement initiator count
> +	 */
> +	initiator_count =
> +		atomic_sub_return(1, &efct->tgt_efct.initiator_count);
> +	watermark = (efct->tgt_efct.watermark_max -
> +			initiator_count * EFCT_IO_WATERMARK_PER_INITIATOR);
> +	watermark = (efct->tgt_efct.watermark_min > watermark) ?
> +			efct->tgt_efct.watermark_min : watermark;

too many brackets in the last four lines.
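
The max() suggestion from above would apply here as well.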

> +	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
> +
> +	return EFCT_SCSI_CALL_ASYNC;
> +}
> +
> +int efct_scsi_recv_cmd(struct efct_io *io, uint64_t lun, u8 *cdb,
> +		       u32 cdb_len, u32 flags)
> +{
> +	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
> +	struct efct *efct = io->efct;
> +	char *ddir;
> +	struct efct_scsi_tgt_node *tgt_node = NULL;
> +	struct se_session *se_sess;
> +	int rc = 0;
> +
> +	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io));
> +	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RECV_CMD);
> +	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
> +
> +	/* set target timeout */
> +	io->timeout = efct->target_io_timer_sec;
> +
> +	if (flags & EFCT_SCSI_CMD_SIMPLE)
> +		ocp->task_attr = TCM_SIMPLE_TAG;
> +	else if (flags & EFCT_SCSI_CMD_HEAD_OF_QUEUE)
> +		ocp->task_attr = TCM_HEAD_TAG;
> +	else if (flags & EFCT_SCSI_CMD_ORDERED)
> +		ocp->task_attr = TCM_ORDERED_TAG;
> +	else if (flags & EFCT_SCSI_CMD_ACA)
> +		ocp->task_attr = TCM_ACA_TAG;
> +
> +	switch (flags & (EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT)) {
> +	case EFCT_SCSI_CMD_DIR_IN:
> +		ddir = "FROM_INITIATOR";
> +		ocp->ddir = DMA_TO_DEVICE;
> +		break;
> +	case EFCT_SCSI_CMD_DIR_OUT:
> +		ddir = "TO_INITIATOR";
> +		ocp->ddir = DMA_FROM_DEVICE;
> +		break;
> +	case EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT:
> +		ddir = "BIDIR";
> +		ocp->ddir = DMA_BIDIRECTIONAL;
> +		break;
> +	default:
> +		ddir = "NONE";
> +		ocp->ddir = DMA_NONE;
> +		break;
> +	}
> +
> +	ocp->lun = lun;
> +	efct_lio_io_printf(io, "new cmd=0x%x ddir=%s dl=%u\n",
> +			  cdb[0], ddir, io->exp_xfer_len);
> +
> +	tgt_node = io->node->tgt_node;
> +	se_sess = tgt_node->session;
> +	if (se_sess) {
> +		efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_SUBMIT_CMD);
> +		rc = target_submit_cmd(&io->tgt_io.cmd, se_sess,
> +				       cdb, &io->tgt_io.sense_buffer[0],
> +				       ocp->lun, io->exp_xfer_len,
> +				       ocp->task_attr, ocp->ddir,
> +				       TARGET_SCF_ACK_KREF);
> +		if (rc) {
> +			efc_log_err(efct, "failed to submit cmd se_cmd: %p\n",
> +				    &ocp->cmd);
> +			efct_scsi_io_free(io);
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +int
> +efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
> +		   enum efct_scsi_tmf_cmd cmd,
> +		  struct efct_io *io_to_abort, u32 flags)

alignment
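
I.e.:

	efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun,
			   enum efct_scsi_tmf_cmd cmd,
			   struct efct_io *io_to_abort, u32 flags)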

> +{
> +	unsigned char tmr_func;
> +	struct efct *efct = tmfio->efct;
> +	struct efct_scsi_tgt_io *ocp = &tmfio->tgt_io;
> +	struct efct_scsi_tgt_node *tgt_node = NULL;
> +	struct se_session *se_sess;
> +	int rc;
> +
> +	memset(ocp, 0, sizeof(struct efct_scsi_tgt_io));
> +	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_SCSI_RECV_TMF);
> +	atomic_add_return(1, &efct->tgt_efct.ios_in_use);
> +	efct_lio_tmfio_printf(tmfio, "%s: new tmf %x lun=%u\n",
> +			      tmfio->display_name, cmd, lun);
> +
> +	switch (cmd) {
> +	case EFCT_SCSI_TMF_ABORT_TASK:
> +		tmr_func = TMR_ABORT_TASK;
> +		break;
> +	case EFCT_SCSI_TMF_ABORT_TASK_SET:
> +		tmr_func = TMR_ABORT_TASK_SET;
> +		break;
> +	case EFCT_SCSI_TMF_CLEAR_TASK_SET:
> +		tmr_func = TMR_CLEAR_TASK_SET;
> +		break;
> +	case EFCT_SCSI_TMF_LOGICAL_UNIT_RESET:
> +		tmr_func = TMR_LUN_RESET;
> +		break;
> +	case EFCT_SCSI_TMF_CLEAR_ACA:
> +		tmr_func = TMR_CLEAR_ACA;
> +		break;
> +	case EFCT_SCSI_TMF_TARGET_RESET:
> +		tmr_func = TMR_TARGET_WARM_RESET;
> +		break;
> +	case EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT:
> +	case EFCT_SCSI_TMF_QUERY_TASK_SET:
> +	default:
> +		goto tmf_fail;
> +	}
> +
> +	tmfio->tgt_io.tmf = tmr_func;
> +	tmfio->tgt_io.lun = lun;
> +	tmfio->tgt_io.io_to_abort = io_to_abort;
> +
> +	tgt_node = tmfio->node->tgt_node;
> +
> +	se_sess = tgt_node->session;
> +	if (!se_sess)
> +		return EFC_SUCCESS;
> +
> +	rc = target_submit_tmr(&ocp->cmd, se_sess, NULL, lun, ocp, tmr_func,
> +			GFP_ATOMIC, tmfio->init_task_tag, TARGET_SCF_ACK_KREF);

Is GFP_ATOMIC really necessary? Are allocations done in atomic context?

> +
> +	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_TGT_SUBMIT_TMR);
> +	if (rc)
> +		goto tmf_fail;
> +
> +	return EFC_SUCCESS;
> +
> +tmf_fail:
> +	efct_scsi_send_tmf_resp(tmfio, EFCT_SCSI_TMF_FUNCTION_REJECTED,
> +				NULL, efct_lio_null_tmf_done, NULL);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Start items for efct_lio_tpg_attrib_cit */
> +
> +#define DEF_EFCT_TPG_ATTRIB(name)					  \
> +									  \
> +static ssize_t efct_lio_tpg_attrib_##name##_show(			  \
> +		struct config_item *item, char *page)			  \
> +{									  \
> +	struct se_portal_group *se_tpg = to_tpg(item);			  \
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
> +			struct efct_lio_tpg, tpg);			  \
> +									  \
> +	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		  \
> +}									  \
> +									  \
> +static ssize_t efct_lio_tpg_attrib_##name##_store(			  \
> +		struct config_item *item, const char *page, size_t count) \
> +{									  \
> +	struct se_portal_group *se_tpg = to_tpg(item);			  \
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
> +					struct efct_lio_tpg, tpg);	  \
> +	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		  \
> +	unsigned long val;						  \
> +	int ret;							  \
> +									  \
> +	ret = kstrtoul(page, 0, &val);					  \
> +	if (ret < 0) {							  \
> +		pr_err("kstrtoul() failed with ret: %d\n", ret);	  \
> +		return ret;						  \
> +	}								  \
> +									  \
> +	if (val != 0 && val != 1) {					  \
> +		pr_err("Illegal boolean value %lu\n", val);		  \
> +		return -EINVAL;						  \
> +	}								  \
> +									  \
> +	a->name = val;							  \
> +									  \
> +	return count;							  \
> +}									  \
> +CONFIGFS_ATTR(efct_lio_tpg_attrib_, name)
> +
> +DEF_EFCT_TPG_ATTRIB(generate_node_acls);
> +DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
> +DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
> +DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
> +DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
> +DEF_EFCT_TPG_ATTRIB(session_deletion_wait);
> +
> +static struct configfs_attribute *efct_lio_tpg_attrib_attrs[] = {
> +	&efct_lio_tpg_attrib_attr_generate_node_acls,
> +	&efct_lio_tpg_attrib_attr_cache_dynamic_acls,
> +	&efct_lio_tpg_attrib_attr_demo_mode_write_protect,
> +	&efct_lio_tpg_attrib_attr_prod_mode_write_protect,
> +	&efct_lio_tpg_attrib_attr_demo_mode_login_only,
> +	&efct_lio_tpg_attrib_attr_session_deletion_wait,
> +	NULL,
> +};
> +
> +#define DEF_EFCT_NPIV_TPG_ATTRIB(name)					   \
> +									   \
> +static ssize_t efct_lio_npiv_tpg_attrib_##name##_show(			   \
> +		struct config_item *item, char *page)			   \
> +{									   \
> +	struct se_portal_group *se_tpg = to_tpg(item);			   \
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
> +			struct efct_lio_tpg, tpg);			   \
> +									   \
> +	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		   \
> +}									   \
> +									   \
> +static ssize_t efct_lio_npiv_tpg_attrib_##name##_store(			   \
> +		struct config_item *item, const char *page, size_t count)  \
> +{									   \
> +	struct se_portal_group *se_tpg = to_tpg(item);			   \
> +	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
> +			struct efct_lio_tpg, tpg);			   \
> +	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		   \
> +	unsigned long val;						   \
> +	int ret;							   \
> +									   \
> +	ret = kstrtoul(page, 0, &val);					   \
> +	if (ret < 0) {							   \
> +		pr_err("kstrtoul() failed with ret: %d\n", ret);	   \
> +		return ret;						   \
> +	}								   \
> +									   \
> +	if (val != 0 && val != 1) {					   \
> +		pr_err("Illegal boolean value %lu\n", val);		   \
> +		return -EINVAL;						   \
> +	}								   \
> +									   \
> +	a->name = val;							   \
> +									   \
> +	return count;							   \
> +}									   \
> +CONFIGFS_ATTR(efct_lio_npiv_tpg_attrib_, name)
> +
> +DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
> +DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
> +DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
> +DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
> +DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
> +DEF_EFCT_NPIV_TPG_ATTRIB(session_deletion_wait);
> +
> +static struct configfs_attribute *efct_lio_npiv_tpg_attrib_attrs[] = {
> +	&efct_lio_npiv_tpg_attrib_attr_generate_node_acls,
> +	&efct_lio_npiv_tpg_attrib_attr_cache_dynamic_acls,
> +	&efct_lio_npiv_tpg_attrib_attr_demo_mode_write_protect,
> +	&efct_lio_npiv_tpg_attrib_attr_prod_mode_write_protect,
> +	&efct_lio_npiv_tpg_attrib_attr_demo_mode_login_only,
> +	&efct_lio_npiv_tpg_attrib_attr_session_deletion_wait,
> +	NULL,
> +};
> +
> +CONFIGFS_ATTR(efct_lio_tpg_, enable);
> +static struct configfs_attribute *efct_lio_tpg_attrs[] = {
> +				&efct_lio_tpg_attr_enable, NULL };
> +CONFIGFS_ATTR(efct_lio_npiv_tpg_, enable);
> +static struct configfs_attribute *efct_lio_npiv_tpg_attrs[] = {
> +				&efct_lio_npiv_tpg_attr_enable, NULL };
> +
> +static const struct target_core_fabric_ops efct_lio_ops = {
> +	.module				= THIS_MODULE,
> +	.fabric_name			= "efct",
> +	.node_acl_size			= sizeof(struct efct_lio_nacl),
> +	.max_data_sg_nents		= 65535,
> +	.tpg_get_wwn			= efct_lio_get_fabric_wwn,
> +	.tpg_get_tag			= efct_lio_get_tag,
> +	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
> +	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
> +	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
> +	.tpg_check_demo_mode_write_protect = efct_lio_check_demo_write_protect,
> +	.tpg_check_prod_mode_write_protect = efct_lio_check_prod_write_protect,
> +	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
> +	.check_stop_free		= efct_lio_check_stop_free,
> +	.aborted_task			= efct_lio_aborted_task,
> +	.release_cmd			= efct_lio_release_cmd,
> +	.close_session			= efct_lio_close_session,
> +	.sess_get_index			= efct_lio_sess_get_index,
> +	.write_pending			= efct_lio_write_pending,
> +	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
> +	.get_cmd_state			= efct_lio_get_cmd_state,
> +	.queue_data_in			= efct_lio_queue_data_in,
> +	.queue_status			= efct_lio_queue_status,
> +	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
> +	.fabric_make_wwn		= efct_lio_make_sport,
> +	.fabric_drop_wwn		= efct_lio_drop_sport,
> +	.fabric_make_tpg		= efct_lio_make_tpg,
> +	.fabric_drop_tpg		= efct_lio_drop_tpg,
> +	.tpg_check_demo_mode_login_only = efct_lio_check_demo_mode_login_only,
> +	.tpg_check_prot_fabric_only	= NULL,
> +	.sess_get_initiator_sid		= NULL,
> +	.tfc_tpg_base_attrs		= efct_lio_tpg_attrs,
> +	.tfc_tpg_attrib_attrs           = efct_lio_tpg_attrib_attrs,
> +};
> +
> +static const struct target_core_fabric_ops efct_lio_npiv_ops = {
> +	.module				= THIS_MODULE,
> +	.fabric_name			= "efct_npiv",
> +	.node_acl_size			= sizeof(struct efct_lio_nacl),
> +	.max_data_sg_nents		= 65535,
> +	.tpg_get_wwn			= efct_lio_get_npiv_fabric_wwn,
> +	.tpg_get_tag			= efct_lio_get_npiv_tag,
> +	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
> +	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
> +	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
> +	.tpg_check_demo_mode_write_protect =
> +					efct_lio_npiv_check_demo_write_protect,
> +	.tpg_check_prod_mode_write_protect =
> +					efct_lio_npiv_check_prod_write_protect,
> +	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
> +	.check_stop_free		= efct_lio_check_stop_free,
> +	.aborted_task			= efct_lio_aborted_task,
> +	.release_cmd			= efct_lio_release_cmd,
> +	.close_session			= efct_lio_close_session,
> +	.sess_get_index			= efct_lio_sess_get_index,
> +	.write_pending			= efct_lio_write_pending,
> +	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
> +	.get_cmd_state			= efct_lio_get_cmd_state,
> +	.queue_data_in			= efct_lio_queue_data_in,
> +	.queue_status			= efct_lio_queue_status,
> +	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
> +	.fabric_make_wwn		= efct_lio_npiv_make_sport,
> +	.fabric_drop_wwn		= efct_lio_npiv_drop_sport,
> +	.fabric_make_tpg		= efct_lio_npiv_make_tpg,
> +	.fabric_drop_tpg		= efct_lio_npiv_drop_tpg,
> +	.tpg_check_demo_mode_login_only =
> +				efct_lio_npiv_check_demo_mode_login_only,
> +	.tpg_check_prot_fabric_only	= NULL,
> +	.sess_get_initiator_sid		= NULL,
> +	.tfc_tpg_base_attrs		= efct_lio_npiv_tpg_attrs,
> +	.tfc_tpg_attrib_attrs		= efct_lio_npiv_tpg_attrib_attrs,
> +};
> +
> +int efct_scsi_tgt_driver_init(void)
> +{
> +	int rc;
> +
> +	/* Register the top level struct config_item_type with TCM core */
> +	rc = target_register_template(&efct_lio_ops);
> +	if (rc < 0) {
> +		pr_err("target_fabric_configfs_register failed with %d\n", rc);
> +		return rc;
> +	}
> +	rc = target_register_template(&efct_lio_npiv_ops);
> +	if (rc < 0) {
> +		pr_err("target_fabric_configfs_register failed with %d\n", rc);
> +		target_unregister_template(&efct_lio_ops);
> +		return rc;
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +int efct_scsi_tgt_driver_exit(void)
> +{
> +	target_unregister_template(&efct_lio_ops);
> +	target_unregister_template(&efct_lio_npiv_ops);
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_lio.h b/drivers/scsi/elx/efct/efct_lio.h
> new file mode 100644
> index 000000000000..f884bcd3b240
> --- /dev/null
> +++ b/drivers/scsi/elx/efct/efct_lio.h
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __EFCT_LIO_H__
> +#define __EFCT_LIO_H__
> +
> +#include "efct_scsi.h"
> +#include <target/target_core_base.h>
> +
> +#define efct_lio_io_printf(io, fmt, ...) \
> +	efc_log_debug(io->efct, \
> +		"[%s] [%04x][i:%04x t:%04x h:%04x][c:%02x]" fmt, \
> +		io->node->display_name, io->instance_index, \
> +		io->init_task_tag, io->tgt_task_tag, io->hw_tag, \
> +		io->tgt_io.cmd.t_task_cdb[0], ##__VA_ARGS__)
> +
> +#define efct_lio_tmfio_printf(io, fmt, ...) \
> +	efc_log_debug(io->efct, \
> +		"[%s] [%04x][i:%04x t:%04x h:%04x][f:%02x]" fmt, \
> +		io->node->display_name, io->instance_index, \
> +		io->init_task_tag, io->tgt_task_tag, io->hw_tag, \
> +		io->tgt_io.tmf,  ##__VA_ARGS__)

In the last file, the \ line continuations were all right-aligned; here they are not.

> +
> +#define efct_set_lio_io_state(io, value) (io->tgt_io.state |= value)
> +
> +struct efct_lio_wq_data {
> +	struct efct		*efct;
> +	void			*ptr;
> +	struct work_struct	work;
> +};
> +
> +/* Target private efct structure */
> +struct efct_scsi_tgt {
> +	u32			max_sge;
> +	u32			max_sgl;
> +
> +	/*
> +	 * Variables used to send task set full. We are using a high watermark
> +	 * method to send task set full. We will reserve a fixed number of IOs
> +	 * per initiator plus a fudge factor. Once we reach this number,
> +	 * then the target will start sending task set full/busy responses.
> +	 */
> +	atomic_t		initiator_count;
> +	atomic_t		ios_in_use;
> +	atomic_t		io_high_watermark;
> +
> +	atomic_t		watermark_hit;
> +	int			watermark_min;
> +	int			watermark_max;
> +
> +	struct efct_lio_sport	*lio_sport;
> +	struct efct_lio_tpg	*tpg;
> +
> +	struct list_head	vport_list;
> +	/* Protects vport list*/
> +	spinlock_t		efct_lio_lock;
> +
> +	u64			wwnn;
> +};
> +
> +struct efct_scsi_tgt_sport {
> +	struct efct_lio_sport	*lio_sport;
> +};
> +
> +struct efct_scsi_tgt_node {
> +	struct se_session	*session;
> +};
> +
> +#define EFCT_LIO_STATE_SCSI_RECV_CMD		(1 << 0)
> +#define EFCT_LIO_STATE_TGT_SUBMIT_CMD		(1 << 1)
> +#define EFCT_LIO_STATE_TFO_QUEUE_DATA_IN	(1 << 2)
> +#define EFCT_LIO_STATE_TFO_WRITE_PENDING	(1 << 3)
> +#define EFCT_LIO_STATE_TGT_EXECUTE_CMD		(1 << 4)
> +#define EFCT_LIO_STATE_SCSI_SEND_RD_DATA	(1 << 5)
> +#define EFCT_LIO_STATE_TFO_CHK_STOP_FREE	(1 << 6)
> +#define EFCT_LIO_STATE_SCSI_DATA_DONE		(1 << 7)
> +#define EFCT_LIO_STATE_TFO_QUEUE_STATUS		(1 << 8)
> +#define EFCT_LIO_STATE_SCSI_SEND_RSP		(1 << 9)
> +#define EFCT_LIO_STATE_SCSI_RSP_DONE		(1 << 10)
> +#define EFCT_LIO_STATE_TGT_GENERIC_FREE		(1 << 11)
> +#define EFCT_LIO_STATE_SCSI_RECV_TMF		(1 << 12)
> +#define EFCT_LIO_STATE_TGT_SUBMIT_TMR		(1 << 13)
> +#define EFCT_LIO_STATE_TFO_WRITE_PEND_STATUS	(1 << 14)
> +#define EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE  (1 << 15)
> +
> +#define EFCT_LIO_STATE_TFO_ABORTED_TASK		(1 << 29)
> +#define EFCT_LIO_STATE_TFO_RELEASE_CMD		(1 << 30)
> +#define EFCT_LIO_STATE_SCSI_CMPL_CMD		(1 << 31)
> +
> +struct efct_scsi_tgt_io {
> +	struct se_cmd		cmd;
> +	unsigned char		sense_buffer[TRANSPORT_SENSE_BUFFER];
> +	enum dma_data_direction	ddir;
> +	int			task_attr;
> +	u64			lun;
> +
> +	u32			state;
> +	u8			tmf;
> +	struct efct_io		*io_to_abort;
> +	u32			seg_map_cnt;
> +	u32			seg_cnt;
> +	u32			cur_seg;
> +	enum efct_scsi_io_status err;
> +	bool			aborting;
> +	bool			rsp_sent;
> +	uint32_t		transferred_len;
> +};
> +
> +/* Handler return codes */
> +enum {
> +	SCSI_HANDLER_DATAPHASE_STARTED = 1,
> +	SCSI_HANDLER_RESP_STARTED,
> +	SCSI_HANDLER_VALIDATED_DATAPHASE_STARTED,
> +	SCSI_CMD_NOT_SUPPORTED,
> +};
> +
> +#define WWN_NAME_LEN		32
> +struct efct_lio_vport {
> +	u64			wwpn;
> +	u64			npiv_wwpn;
> +	u64			npiv_wwnn;
> +	unsigned char		wwpn_str[WWN_NAME_LEN];
> +	struct se_wwn		vport_wwn;
> +	struct efct_lio_tpg	*tpg;
> +	struct efct		*efct;
> +	struct dentry		*sessions;
> +	struct Scsi_Host	*shost;
> +	struct fc_vport		*fc_vport;
> +	atomic_t		enable;
> +};
> +
> +struct efct_lio_sport {
> +	u64			wwpn;
> +	unsigned char		wwpn_str[WWN_NAME_LEN];
> +	struct se_wwn		sport_wwn;
> +	struct efct_lio_tpg	*tpg;
> +	struct efct		*efct;
> +	struct dentry		*sessions;
> +	atomic_t		enable;
> +};
> +
> +struct efct_lio_tpg_attrib {
> +	int			generate_node_acls;
> +	int			cache_dynamic_acls;
> +	int			demo_mode_write_protect;
> +	int			prod_mode_write_protect;
> +	int			demo_mode_login_only;
> +	bool			session_deletion_wait;
> +};
> +
> +struct efct_lio_tpg {
> +	struct se_portal_group	tpg;
> +	struct efct_lio_sport	*sport;
> +	struct efct_lio_vport	*vport;
> +	struct efct_lio_tpg_attrib tpg_attrib;
> +	unsigned short		tpgt;
> +	atomic_t		enabled;
> +};
> +
> +struct efct_lio_nacl {
> +	u64			nport_wwnn;
> +	char			nport_name[WWN_NAME_LEN];
> +	struct se_session	*session;
> +	struct se_node_acl	se_node_acl;
> +};
> +
> +struct efct_lio_vport_list_t {
> +	struct list_head	list_entry;
> +	struct efct_lio_vport	*lio_vport;
> +};
> +
> +int efct_scsi_tgt_driver_init(void);
> +int efct_scsi_tgt_driver_exit(void);
> +
> +#endif /*__EFCT_LIO_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 25/31] elx: efct: Hardware IO submission routines
  2020-04-12  3:32 ` [PATCH v3 25/31] elx: efct: Hardware IO submission routines James Smart
  2020-04-16  8:10   ` Hannes Reinecke
@ 2020-04-16 12:44   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 12:44 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:57PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines that write IO to Work queue, send SRRs and raw frames.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Reduced arguments for sli_fcp_tsend64_wqe(), sli_fcp_trsp64_wqe(),
>   sli_fcp_treceive64_wqe() calls
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 519 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |  19 ++
>  2 files changed, 538 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index fd3c2dec3ef6..26dd9bd1eeef 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -2516,3 +2516,522 @@ efct_hw_flush(struct efct_hw *hw)
>  
>  	return EFC_SUCCESS;
>  }
> +
> +int
> +efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
> +{
> +	int rc = 0;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&wq->queue->lock, flags);
> +	if (!list_empty(&wq->pending_list)) {
> +		INIT_LIST_HEAD(&wqe->list_entry);
> +		list_add_tail(&wqe->list_entry, &wq->pending_list);
> +		wq->wq_pending_count++;
> +		while ((wq->free_count > 0) &&
> +		       ((wqe = list_first_entry(&wq->pending_list,
> +					struct efct_hw_wqe, list_entry))
> +			 != NULL)) {

The condition is hard to read. It would be good to restructure it.

And maybe move the loop body into a new function, so the code is not
crawling down the right margin.
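
Something like this, as a rough untested sketch (the helper name is made
up, and it assumes the caller still holds wq->queue->lock):

static int efct_hw_wq_drain_pending(struct hw_wq *wq)
{
	struct efct_hw_wqe *wqe;
	int rc = 0;

	while (wq->free_count > 0 && !list_empty(&wq->pending_list)) {
		wqe = list_first_entry(&wq->pending_list,
				       struct efct_hw_wqe, list_entry);
		list_del(&wqe->list_entry);

		rc = _efct_hw_wq_write(wq, wqe);
		if (rc < 0)
			break;

		if (wqe->abort_wqe_submit_needed) {
			wqe->abort_wqe_submit_needed = false;
			sli_abort_wqe(&wq->hw->sli, wqe->wqebuf,
				      wq->hw->sli.wqe_size, SLI_ABORT_XRI,
				      wqe->send_abts, wqe->id, 0,
				      wqe->abort_reqtag, SLI4_CQ_DEFAULT);
			/* re-queue so the abort is submitted once a slot frees up */
			list_add_tail(&wqe->list_entry, &wq->pending_list);
			wq->wq_pending_count++;
		}
	}

	return rc;
}

Then efct_hw_wq_write() only has to add the new wqe to the pending list
(or write it directly) and call the helper.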

> +			list_del(&wqe->list_entry);
> +			rc = _efct_hw_wq_write(wq, wqe);
> +			if (rc < 0)
> +				break;
> +			if (wqe->abort_wqe_submit_needed) {
> +				wqe->abort_wqe_submit_needed = false;
> +				sli_abort_wqe(&wq->hw->sli,
> +					      wqe->wqebuf,
> +					      wq->hw->sli.wqe_size,
> +					      SLI_ABORT_XRI,
> +					      wqe->send_abts, wqe->id,
> +					      0, wqe->abort_reqtag,
> +					      SLI4_CQ_DEFAULT);
> +
> +				INIT_LIST_HEAD(&wqe->list_entry);
> +				list_add_tail(&wqe->list_entry,
> +					      &wq->pending_list);
> +				wq->wq_pending_count++;
> +			}
> +		}
> +	} else {
> +		if (wq->free_count > 0) {
> +			rc = _efct_hw_wq_write(wq, wqe);
> +		} else {
> +			INIT_LIST_HEAD(&wqe->list_entry);
> +			list_add_tail(&wqe->list_entry, &wq->pending_list);
> +			wq->wq_pending_count++;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&wq->queue->lock, flags);
> +
> +	return rc;
> +}
> +
> +/**
> + * This routine supports communication sequences consisting of a single
> + * request and single response between two endpoints. Examples include:
> + *  - Sending an ELS request.
> + *  - Sending an ELS response - To send an ELS response, the caller must provide
> + * the OX_ID from the received request.
> + *  - Sending a FC Common Transport (FC-CT) request - To send a FC-CT request,
> + * the caller must provide the R_CTL, TYPE, and DF_CTL
> + * values to place in the FC frame header.
> + */

This is not proper kerneldoc style
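
For reference, kernel-doc for a function looks roughly like this (the
parameter list below is abridged and the wording is only illustrative):

/**
 * efct_hw_srrs_send() - Send a single request/response sequence.
 * @hw: Hardware context.
 * @type: Type of sequence (ELS request/response, FC-CT, BLS, ...).
 * @io: Previously-allocated HW IO object to use.
 * @send: DMA memory holding the request payload.
 * @len: Length of the request payload, in bytes.
 * @cb: Callback invoked when the sequence completes.
 * @arg: Argument passed to @cb.
 *
 * Supports communication sequences consisting of a single request and a
 * single response between two endpoints (ELS, FC-CT, BLS).
 *
 * Return: EFCT_HW_RTN_SUCCESS on success, an error code otherwise.
 */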

> +enum efct_hw_rtn
> +efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		  struct efct_hw_io *io,
> +		  struct efc_dma *send, u32 len,
> +		  struct efc_dma *receive, struct efc_remote_node *rnode,
> +		  union efct_hw_io_param_u *iparam,
> +		  efct_hw_srrs_cb_t cb, void *arg)
> +{
> +	struct sli4_sge	*sge = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +	u16	local_flags = 0;
> +	u32 sge0_flags;
> +	u32 sge1_flags;
> +
> +	if (!io || !rnode || !iparam) {
> +		pr_err("bad parm hw=%p io=%p s=%p r=%p rn=%p iparm=%p\n",
> +			hw, io, send, receive, rnode, iparam);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_test(hw->os,
> +			      "cannot send SRRS, HW state=%d\n", hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	io->rnode = rnode;
> +	io->type  = type;
> +	io->done = cb;
> +	io->arg  = arg;
> +
> +	sge = io->sgl->virt;
> +
> +	/* clear both SGE */
> +	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
> +
> +	sge0_flags = le32_to_cpu(sge[0].dw2_flags);
> +	sge1_flags = le32_to_cpu(sge[1].dw2_flags);
> +	if (send) {
> +		sge[0].buffer_address_high =
> +			cpu_to_le32(upper_32_bits(send->phys));
> +		sge[0].buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(send->phys));
> +
> +		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +
> +		sge[0].buffer_length = cpu_to_le32(len);
> +	}
> +
> +	if (type == EFCT_HW_ELS_REQ || type == EFCT_HW_FC_CT) {
> +		sge[1].buffer_address_high =
> +			cpu_to_le32(upper_32_bits(receive->phys));
> +		sge[1].buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(receive->phys));
> +
> +		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +		sge1_flags |= SLI4_SGE_LAST;
> +
> +		sge[1].buffer_length = cpu_to_le32(receive->size);
> +	} else {
> +		sge0_flags |= SLI4_SGE_LAST;
> +	}
> +
> +	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
> +	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
> +
> +	switch (type) {
> +	case EFCT_HW_ELS_REQ:
> +		if (!send ||

Move the switch into a new function and just call it under 'if (send)'.
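
I.e. roughly (the helper name is made up, and note the BLS cases in this
hunk do not check 'send' today, so they would need their own path):

	if (send) {
		rc = efct_hw_srrs_build_send_wqe(hw, type, io, send, len,
						 receive, rnode, iparam,
						 local_flags);
	} else {
		efc_log_err(hw->os, "missing send buffer, type %#x\n", type);
		rc = EFCT_HW_RTN_ERROR;
	}

with the whole switch moved into efct_hw_srrs_build_send_wqe().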


> +		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
> +					  hw->sli.wqe_size, io->sgl,
> +					*((u8 *)send->virt),
> +					len, receive->size,
> +					iparam->els.timeout,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->attached, rnode->fc_id,
> +					rnode->sport->fc_id)) {
> +			efc_log_err(hw->os, "REQ WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_ELS_RSP:
> +		if (!send ||
> +		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, send, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, iparam->els.ox_id,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->attached, rnode->fc_id,
> +					local_flags, U32_MAX)) {
> +			efc_log_err(hw->os, "RSP WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_ELS_RSP_SID:
> +		if (!send ||
> +		    sli_xmit_els_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, send, len,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					iparam->els.ox_id,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->attached, rnode->fc_id,
> +					local_flags, iparam->els.s_id)) {
> +			efc_log_err(hw->os, "RSP (SID) WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_FC_CT:
> +		if (!send ||
> +		    sli_gen_request64_wqe(&hw->sli, io->wqe.wqebuf, io->sgl,
> +					len, receive->size, io->indicator,
> +					io->reqtag, SLI4_CQ_DEFAULT,
> +					rnode->fc_id, rnode->indicator,
> +					&iparam->fc_ct)) {
> +			efc_log_err(hw->os, "GEN WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_FC_CT_RSP:
> +		if (!send ||
> +		    sli_xmit_sequence64_wqe(&hw->sli, io->wqe.wqebuf,
> +					    io->sgl, len, io->indicator,
> +					    io->reqtag, rnode->fc_id,
> +					    rnode->indicator, &iparam->fc_ct)) {
> +			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_BLS_ACC:
> +	case EFCT_HW_BLS_RJT:
> +	{
> +		struct sli_bls_payload	bls;
> +
> +		if (type == EFCT_HW_BLS_ACC) {
> +			bls.type = SLI4_SLI_BLS_ACC;
> +			memcpy(&bls.u.acc, iparam->bls.payload,
> +			       sizeof(bls.u.acc));
> +		} else {
> +			bls.type = SLI4_SLI_BLS_RJT;
> +			memcpy(&bls.u.rjt, iparam->bls.payload,
> +			       sizeof(bls.u.rjt));
> +		}
> +
> +		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
> +		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
> +
> +		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, &bls,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					rnode->attached,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->fc_id, rnode->sport->fc_id,
> +					U32_MAX)) {
> +			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_BLS_ACC_SID:
> +	{
> +		struct sli_bls_payload	bls;
> +
> +		bls.type = SLI4_SLI_BLS_ACC;
> +		memcpy(&bls.u.acc, iparam->bls.payload,
> +		       sizeof(bls.u.acc));
> +
> +		bls.ox_id = cpu_to_le16(iparam->bls.ox_id);
> +		bls.rx_id = cpu_to_le16(iparam->bls.rx_id);
> +
> +		if (sli_xmit_bls_rsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   hw->sli.wqe_size, &bls,
> +					io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT,
> +					rnode->attached,
> +					rnode->indicator,
> +					rnode->sport->indicator,
> +					rnode->fc_id, rnode->sport->fc_id,
> +					iparam->bls.s_id)) {
> +			efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE SID error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_err(hw->os, "bad SRRS type %#x\n", type);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (rc == EFCT_HW_RTN_SUCCESS) {
> +
> +		io->xbusy = true;
> +
> +		/*
> +		 * Add IO to active io wqe list before submitting, in case the
> +		 * wcqe processing preempts this thread.
> +		 */
> +		io->wq->use_count++;
> +		rc = efct_hw_wq_write(io->wq, &io->wqe);
> +		if (rc >= 0) {
> +			/* non-negative return is success */
> +			rc = 0;
> +		} else {
> +			/* failed to write wqe, remove from active wqe list */
> +			efc_log_err(hw->os,
> +				     "sli_queue_write failed: %d\n", rc);
> +			io->xbusy = false;
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +/**
> + * Send a read, write, or response IO.
> + *
> + * This routine supports sending a higher-level IO (for example, FCP) between
> + * two endpoints as a target or initiator. Examples include:
> + *  - Sending read data and good response (target).
> + *  - Sending a response (target with no data or after receiving write data).
> + *  .
> + * This routine assumes all IOs use the SGL associated with the HW IO. Prior to
> + * calling this routine, the data should be loaded using efct_hw_io_add_sge().
> + */

Not proper kerneldoc style

> +enum efct_hw_rtn
> +efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		struct efct_hw_io *io,
> +		u32 len, union efct_hw_io_param_u *iparam,
> +		struct efc_remote_node *rnode, void *cb, void *arg)
> +{
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +	u32	rpi;
> +	bool send_wqe = true;
> +
> +	if (!io || !rnode || !iparam) {
> +		pr_err("bad parm hw=%p io=%p iparam=%p rnode=%p\n",
> +			hw, io, iparam, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_err(hw->os, "cannot send IO, HW state=%d\n",
> +			     hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	rpi = rnode->indicator;
> +
> +	/*
> +	 * Save state needed during later stages
> +	 */
> +	io->rnode = rnode;
> +	io->type  = type;
> +	io->done  = cb;
> +	io->arg   = arg;
> +
> +	/*
> +	 * Format the work queue entry used to send the IO
> +	 */
> +	switch (type) {
> +	case EFCT_HW_IO_TARGET_WRITE: {
> +		u16 flags = iparam->fcp_tgt.flags;
> +		struct fcp_txrdy *xfer = io->xfer_rdy.virt;
> +
> +		/*
> +		 * Fill in the XFER_RDY for IF_TYPE 0 devices
> +		 */
> +		xfer->ft_data_ro = cpu_to_be32(iparam->fcp_tgt.offset);
> +		xfer->ft_burst_len = cpu_to_be32(len);
> +
> +		if (io->xbusy)
> +			flags |= SLI4_IO_CONTINUATION;
> +		else
> +			flags &= ~SLI4_IO_CONTINUATION;
> +
> +		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
> +
> +		if (sli_fcp_treceive64_wqe(&hw->sli, io->wqe.wqebuf,
> +					   &io->def_sgl, io->first_data_sge,
> +					   len, io->indicator, io->reqtag,
> +					   SLI4_CQ_DEFAULT, rpi, rnode->fc_id,
> +					   0, 0, &iparam->fcp_tgt)) {
> +			efc_log_err(hw->os, "TRECEIVE WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_IO_TARGET_READ: {
> +		u16 flags = iparam->fcp_tgt.flags;
> +
> +		if (io->xbusy)
> +			flags |= SLI4_IO_CONTINUATION;
> +		else
> +			flags &= ~SLI4_IO_CONTINUATION;
> +
> +		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
> +
> +		if (sli_fcp_tsend64_wqe(&hw->sli, io->wqe.wqebuf,
> +					&io->def_sgl, io->first_data_sge,
> +					len, io->indicator, io->reqtag,
> +					SLI4_CQ_DEFAULT, rpi, rnode->fc_id,
> +					0, 0, &iparam->fcp_tgt)) {
> +			efc_log_err(hw->os, "TSEND WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	}
> +	case EFCT_HW_IO_TARGET_RSP: {
> +		u16 flags = iparam->fcp_tgt.flags;
> +
> +		if (io->xbusy)
> +			flags |= SLI4_IO_CONTINUATION;
> +		else
> +			flags &= ~SLI4_IO_CONTINUATION;
> +
> +		io->tgt_wqe_timeout = iparam->fcp_tgt.timeout;
> +
> +		if (sli_fcp_trsp64_wqe(&hw->sli, io->wqe.wqebuf,
> +				       &io->def_sgl, len, io->indicator,
> +				       io->reqtag, SLI4_CQ_DEFAULT, rpi,
> +				       rnode->fc_id, 0, &iparam->fcp_tgt)) {
> +			efc_log_err(hw->os, "TRSP WQE error\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +
> +		break;
> +	}
> +	default:
> +		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (send_wqe && rc == EFCT_HW_RTN_SUCCESS) {
> +
> +		io->xbusy = true;
> +
> +		/*
> +		 * Add IO to active io wqe list before submitting, in case the
> +		 * wcqe processing preempts this thread.
> +		 */
> +		hw->tcmd_wq_submit[io->wq->instance]++;
> +		io->wq->use_count++;
> +		rc = efct_hw_wq_write(io->wq, &io->wqe);
> +		if (rc >= 0) {
> +			/* non-negative return is success */
> +			rc = 0;
> +		} else {
> +			/* failed to write wqe, remove from active wqe list */
> +			efc_log_err(hw->os,
> +				     "sli_queue_write failed: %d\n", rc);
> +			io->xbusy = false;
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +/**
> + * Send a raw frame
> + *
> + * Using the SEND_FRAME_WQE, a frame consisting of header and payload is sent.
> + */

kerneldoc

> +enum efct_hw_rtn
> +efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
> +		   u8 sof, u8 eof, struct efc_dma *payload,
> +		   struct efct_hw_send_frame_context *ctx,
> +		   void (*callback)(void *arg, u8 *cqe, int status),
> +		   void *arg)
> +{
> +	int rc;
> +	struct efct_hw_wqe *wqe;
> +	u32 xri;
> +	struct hw_wq *wq;
> +
> +	wqe = &ctx->wqe;
> +
> +	/* populate the callback object */
> +	ctx->hw = hw;
> +
> +	/* Fetch and populate request tag */
> +	ctx->wqcb = efct_hw_reqtag_alloc(hw, callback, arg);
> +	if (!ctx->wqcb) {
> +		efc_log_err(hw->os, "can't allocate request tag\n");
> +		return EFCT_HW_RTN_NO_RESOURCES;
> +	}
> +
> +	wq = hw->hw_wq[0];
> +
> +	/* Set XRI and RX_ID in the header based on which WQ, and which
> +	 * send_frame_io we are using
> +	 */
> +	xri = wq->send_frame_io->indicator;
> +
> +	/* Build the send frame WQE */
> +	rc = sli_send_frame_wqe(&hw->sli, wqe->wqebuf,
> +				hw->sli.wqe_size, sof, eof,
> +				(u32 *)hdr, payload, payload->len,
> +				EFCT_HW_SEND_FRAME_TIMEOUT, xri,
> +				ctx->wqcb->instance_index);
> +	if (rc) {
> +		efc_log_err(hw->os, "sli_send_frame_wqe failed: %d\n",
> +			     rc);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* Write to WQ */
> +	rc = efct_hw_wq_write(wq, wqe);
> +	if (rc) {
> +		efc_log_err(hw->os, "efct_hw_wq_write failed: %d\n", rc);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	wq->use_count++;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +u32
> +efct_hw_io_get_count(struct efct_hw *hw,
> +		     enum efct_hw_io_count_type io_count_type)
> +{
> +	struct efct_hw_io *io = NULL;
> +	u32 count = 0;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +
> +	switch (io_count_type) {
> +	case EFCT_HW_IO_INUSE_COUNT:
> +		list_for_each_entry(io, &hw->io_inuse, list_entry) {
> +			count = count + 1;
> +		}
> +		break;
> +	case EFCT_HW_IO_FREE_COUNT:
> +		list_for_each_entry(io, &hw->io_free, list_entry) {
> +			count = count + 1;
> +		}
> +		break;
> +	case EFCT_HW_IO_WAIT_FREE_COUNT:
> +		list_for_each_entry(io, &hw->io_wait_free, list_entry) {
> +			count = count + 1;
> +		}
> +		break;
> +	case EFCT_HW_IO_N_TOTAL_IO_COUNT:
> +		count = hw->config.n_io;
> +		break;
> +	}
> +
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +	return count;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index b427a4eda5a3..36a832f32616 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -714,4 +714,23 @@ efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
>  extern int
>  efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
>  
> +int efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe);
> +enum efct_hw_rtn
> +efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
> +		   u8 sof, u8 eof, struct efc_dma *payload,
> +		struct efct_hw_send_frame_context *ctx,
> +		void (*callback)(void *arg, u8 *cqe, int status),
> +		void *arg);
> +typedef int(*efct_hw_srrs_cb_t)(struct efct_hw_io *io,
> +				struct efc_remote_node *rnode, u32 length,
> +				int status, u32 ext_status, void *arg);
> +extern enum efct_hw_rtn
> +efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		  struct efct_hw_io *io,
> +		  struct efc_dma *send, u32 len,
> +		  struct efc_dma *receive, struct efc_remote_node *rnode,
> +		  union efct_hw_io_param_u *iparam,
> +		  efct_hw_srrs_cb_t cb,
> +		  void *arg);
> +
>  #endif /* __EFCT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 25/31] elx: efct: Hardware IO submission routines
  2020-04-16  8:10   ` Hannes Reinecke
@ 2020-04-16 12:45     ` Daniel Wagner
  2020-04-23  3:37       ` James Smart
  0 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 12:45 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: James Smart, linux-scsi, maier, bvanassche, herbszt,
	natechancellor, rdunlap, Ram Vegesna

On Thu, Apr 16, 2020 at 10:10:18AM +0200, Hannes Reinecke wrote:
> > +	switch (type) {
> > +	case EFCT_HW_ELS_REQ:
> > +		if (!send ||
> > +		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
> > +					  hw->sli.wqe_size, io->sgl,
> > +					*((u8 *)send->virt),
> > +					len, receive->size,
> > +					iparam->els.timeout,
> > +					io->indicator, io->reqtag,
> > +					SLI4_CQ_DEFAULT, rnode->indicator,
> > +					rnode->sport->indicator,
> > +					rnode->attached, rnode->fc_id,
> > +					rnode->sport->fc_id)) {
> > +			efc_log_err(hw->os, "REQ WQE error\n");
> > +			rc = EFCT_HW_RTN_ERROR;
> > +		}
> > +		break;
> 
> I did mention several times that I'm not a big fan of overly long argument
> lists.
> Can't you pass in 'io' and 'rnode' directly and cut down on the number of
> arguments?

Yes, please!
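
And if passing 'io'/'rnode' straight into the SLI layer turns out to be
awkward, even just bundling the per-WQE parameters into a struct would
already help. Purely as an illustration (struct name, fields and the
prototype are made up):

	struct sli_els_params {
		struct efc_dma	*sgl;
		u8		els_cmd;
		u32		req_len;
		u32		rsp_len;
		u8		timeout;
		u16		xri;
		u16		tag;
		u16		cq_id;
		u16		rnode_indicator;
		u16		sport_indicator;
		bool		rnode_attached;
		u32		rnode_fc_id;
		u32		sport_fc_id;
	};

	int sli_els_request64_wqe(struct sli4 *sli, void *buf, size_t size,
				  struct sli_els_params *params);

That would cut every call site down to a handful of arguments.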

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 26/31] elx: efct: link statistics and SFP data
  2020-04-12  3:32 ` [PATCH v3 26/31] elx: efct: link statistics and SFP data James Smart
@ 2020-04-16 12:55   ` Daniel Wagner
  0 siblings, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 12:55 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:58PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to retrieve link stats and SFP transceiver data.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 468 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |  39 ++++
>  2 files changed, 507 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 26dd9bd1eeef..ca2fd237c7d6 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -8,6 +8,40 @@
>  #include "efct_hw.h"
>  #include "efct_unsol.h"
>  
> +struct efct_hw_sfp_cb_arg {
> +	void (*cb)(int status, u32 bytes_written,
> +		   u8 *data, void *arg);
> +	void *arg;
> +	struct efc_dma payload;
> +};
> +
> +struct efct_hw_temp_cb_arg {
> +	void (*cb)(int status, u32 curr_temp,
> +		   u32 crit_temp_thrshld,
> +		   u32 warn_temp_thrshld,
> +		   u32 norm_temp_thrshld,
> +		   u32 fan_off_thrshld,
> +		   u32 fan_on_thrshld,
> +		   void *arg);
> +	void *arg;
> +};
> +
> +struct efct_hw_link_stat_cb_arg {
> +	void (*cb)(int status,
> +		   u32 num_counters,
> +		struct efct_hw_link_stat_counts *counters,
> +		void *arg);
> +	void *arg;
> +};
> +
> +struct efct_hw_host_stat_cb_arg {
> +	void (*cb)(int status,
> +		   u32 num_counters,
> +		struct efct_hw_host_stat_counts *counters,
> +		void *arg);

alignment

> +	void *arg;
> +};
> +
>  static enum efct_hw_rtn
>  efct_hw_link_event_init(struct efct_hw *hw)
>  {
> @@ -3035,3 +3069,437 @@ efct_hw_io_get_count(struct efct_hw *hw,
>  
>  	return count;
>  }
> +
> +static int
> +efct_hw_cb_sfp(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
> +{
> +	struct efct_hw_sfp_cb_arg *cb_arg = arg;
> +	struct efc_dma *payload = &cb_arg->payload;
> +	struct sli4_rsp_cmn_read_transceiver_data *mbox_rsp;
> +	struct efct *efct = hw->os;
> +	u32 bytes_written;
> +
> +	mbox_rsp =
> +	(struct sli4_rsp_cmn_read_transceiver_data *)payload->virt;

Hmm, maybe the type name is just too long. The first part already uses
abbreviations, so why not for the rest?

> +	bytes_written = le32_to_cpu(mbox_rsp->hdr.response_length);
> +	if (cb_arg) {
> +		if (cb_arg->cb) {
> +			if (!status && mbox_rsp->hdr.status)
> +				status = mbox_rsp->hdr.status;
> +			cb_arg->cb(status, bytes_written, mbox_rsp->page_data,
> +				   cb_arg->arg);
> +		}
> +
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  cb_arg->payload.size, cb_arg->payload.virt,
> +				  cb_arg->payload.phys);
> +		memset(&cb_arg->payload, 0, sizeof(struct efc_dma));
> +		kfree(cb_arg);
> +	}
> +
> +	kfree(mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Function to retrieve the SFP information */
> +enum efct_hw_rtn
> +efct_hw_get_sfp(struct efct_hw *hw, u16 page,
> +		void (*cb)(int, u32, u8 *, void *), void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	struct efct_hw_sfp_cb_arg *cb_arg;
> +	u8 *mbxdata;
> +	struct efct *efct = hw->os;
> +	struct efc_dma *dma;
> +
> +	/* mbxdata holds the header of the command */
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);

kzalloc
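
i.e. (same behavior, one call instead of kmalloc() plus memset()):

	mbxdata = kzalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
	if (!mbxdata)
		return EFCT_HW_RTN_NO_MEMORY;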

> +	/*
> +	 * cb_arg holds the data that will be passed to the callback on
> +	 * completion
> +	 */
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +	memset(cb_arg, 0, sizeof(struct efct_hw_sfp_cb_arg));

kzalloc

> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = arg;
> +
> +	/* payload holds the non-embedded portion */
> +	dma = &cb_arg->payload;
> +	dma->size = sizeof(struct sli4_rsp_cmn_read_transceiver_data);
> +	dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
> +				       dma->size, &dma->phys, GFP_DMA);
> +	if (!dma->virt) {
> +		kfree(cb_arg);
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_common_read_transceiver_data(&hw->sli, mbxdata,
> +						 SLI4_BMBX_SIZE, page,
> +						 &cb_arg->payload))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_sfp, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os,
> +			      "READ_TRANSCEIVER_DATA failed with status %d\n",
> +			     rc);
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  cb_arg->payload.size, cb_arg->payload.virt,
> +				  cb_arg->payload.phys);
> +		memset(&cb_arg->payload, 0, sizeof(struct efc_dma));
> +		kfree(cb_arg);
> +		kfree(mbxdata);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_temp(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
> +{
> +	struct sli4_cmd_dump4 *mbox_rsp = (struct sli4_cmd_dump4 *)mqe;
> +	struct efct_hw_temp_cb_arg *cb_arg = arg;
> +	u32 curr_temp = le32_to_cpu(mbox_rsp->resp_data[0]); /* word 5 */
> +	u32 crit_temp_thrshld =
> +			le32_to_cpu(mbox_rsp->resp_data[1]); /* word 6 */
> +	u32 warn_temp_thrshld =
> +			le32_to_cpu(mbox_rsp->resp_data[2]); /* word 7 */
> +	u32 norm_temp_thrshld =
> +			le32_to_cpu(mbox_rsp->resp_data[3]); /* word 8 */
> +	u32 fan_off_thrshld =
> +			le32_to_cpu(mbox_rsp->resp_data[4]);   /* word 9 */
> +	u32 fan_on_thrshld =
> +			le32_to_cpu(mbox_rsp->resp_data[5]);    /* word 10 */
                                                            ^^^
						two spaces too many

> +
> +	if (cb_arg) {
> +		if (cb_arg->cb) {
> +			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
> +				status = le16_to_cpu(mbox_rsp->hdr.status);
> +			cb_arg->cb(status,
> +				   curr_temp,
> +				   crit_temp_thrshld,
> +				   warn_temp_thrshld,
> +				   norm_temp_thrshld,
> +				   fan_off_thrshld,
> +				   fan_on_thrshld,

A helper struct instead of so many arguments?
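
Something like (hypothetical names):

	struct efct_hw_temp_data {
		u32	curr_temp;
		u32	crit_temp_thrshld;
		u32	warn_temp_thrshld;
		u32	norm_temp_thrshld;
		u32	fan_off_thrshld;
		u32	fan_on_thrshld;
	};

and then the callback shrinks to

	void (*cb)(int status, struct efct_hw_temp_data *temp, void *arg);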

> +				   cb_arg->arg);
> +		}
> +
> +		kfree(cb_arg);
> +	}
> +	kfree(mqe);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Function to retrieve the temperature information */
> +enum efct_hw_rtn
> +efct_hw_get_temperature(struct efct_hw *hw,
> +			void (*cb)(int status,
> +				   u32 curr_temp,
> +				u32 crit_temp_thrshld,
> +				u32 warn_temp_thrshld,
> +				u32 norm_temp_thrshld,
> +				u32 fan_off_thrshld,
> +				u32 fan_on_thrshld,
> +				void *arg),
> +			void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	struct efct_hw_temp_cb_arg *cb_arg;
> +	u8 *mbxdata;
> +
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);

kzalloc

> +
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_dump_type4(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
> +			       SLI4_WKI_TAG_SAT_TEM))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_temp, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "DUMP_TYPE4 failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_link_stat(struct efct_hw *hw, int status,
> +		     u8 *mqe, void  *arg)
> +{
> +	struct sli4_cmd_read_link_stats *mbox_rsp;
> +	struct efct_hw_link_stat_cb_arg *cb_arg = arg;
> +	struct efct_hw_link_stat_counts counts[EFCT_HW_LINK_STAT_MAX];
> +	u32 num_counters;
> +	u32 mbox_rsp_flags = 0;
> +
> +	mbox_rsp = (struct sli4_cmd_read_link_stats *)mqe;
> +	mbox_rsp_flags = le32_to_cpu(mbox_rsp->dw1_flags);
> +	num_counters = (mbox_rsp_flags & SLI4_READ_LNKSTAT_GEC) ? 20 : 13;
> +	memset(counts, 0, sizeof(struct efct_hw_link_stat_counts) *
> +				 EFCT_HW_LINK_STAT_MAX);
> +
> +	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W02OF);
> +	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W03OF);
> +	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W04OF);
> +	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W05OF);
> +	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W06OF);
> +	counts[EFCT_HW_LINK_STAT_CRC_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W07OF);
> +	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W08OF);
> +	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W09OF);
> +	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W10OF);
> +	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W11OF);
> +	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W12OF);
> +	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W13OF);
> +	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W14OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W15OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W16OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W17OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W18OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W19OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W20OF);
> +	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].overflow =
> +		(mbox_rsp_flags & SLI4_READ_LNKSTAT_W21OF);
> +	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->linkfail_errcnt);
> +	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->losssync_errcnt);
> +	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->losssignal_errcnt);
> +	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->primseq_errcnt);
> +	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->inval_txword_errcnt);
> +	counts[EFCT_HW_LINK_STAT_CRC_COUNT].counter =
> +		le32_to_cpu(mbox_rsp->crc_errcnt);
> +	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].counter =
> +		le32_to_cpu(mbox_rsp->primseq_eventtimeout_cnt);
> +	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->elastic_bufoverrun_errcnt);
> +	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->arbit_fc_al_timeout_cnt);
> +	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].counter =
> +		 le32_to_cpu(mbox_rsp->adv_rx_buftor_to_buf_credit);
> +	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].counter =
> +		 le32_to_cpu(mbox_rsp->curr_rx_buf_to_buf_credit);
> +	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].counter =
> +		 le32_to_cpu(mbox_rsp->adv_tx_buf_to_buf_credit);
> +	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].counter =
> +		 le32_to_cpu(mbox_rsp->curr_tx_buf_to_buf_credit);
> +	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_eofa_cnt);
> +	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_eofdti_cnt);
> +	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_eofni_cnt);
> +	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_soff_cnt);
> +	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_dropped_no_aer_cnt);
> +	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_rpi_rescnt);
> +	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_xri_rescnt);

Is there not a better way to do this? This is just a wall of characters.
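
One option would be a small lookup table, roughly like this (sketch only;
field names are copied from the hunk above and the table is truncated):

	static const struct {
		u32	of_flag;	/* SLI4_READ_LNKSTAT_WxxOF bit */
		size_t	cnt_off;	/* offset of the __le32 counter */
	} lnkstat_map[] = {
		[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT] = {
			SLI4_READ_LNKSTAT_W02OF,
			offsetof(struct sli4_cmd_read_link_stats,
				 linkfail_errcnt),
		},
		[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT] = {
			SLI4_READ_LNKSTAT_W03OF,
			offsetof(struct sli4_cmd_read_link_stats,
				 losssync_errcnt),
		},
		/* ... one entry per EFCT_HW_LINK_STAT_* ... */
	};
	u32 i;

	for (i = 0; i < ARRAY_SIZE(lnkstat_map); i++) {
		__le32 *cnt = (void *)mbox_rsp + lnkstat_map[i].cnt_off;

		counts[i].overflow = mbox_rsp_flags & lnkstat_map[i].of_flag;
		counts[i].counter = le32_to_cpu(*cnt);
	}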

> +
> +	if (cb_arg) {
> +		if (cb_arg->cb) {
> +			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
> +				status = le16_to_cpu(mbox_rsp->hdr.status);
> +			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
> +		}
> +
> +		kfree(cb_arg);
> +	}
> +	kfree(mqe);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_get_link_stats(struct efct_hw *hw,
> +		       u8 req_ext_counters,
> +		       u8 clear_overflow_flags,
> +		       u8 clear_all_counters,
> +		       void (*cb)(int status,
> +				  u32 num_counters,
> +			struct efct_hw_link_stat_counts *counters,
> +			void *arg),
> +		       void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	struct efct_hw_link_stat_cb_arg *cb_arg;
> +	u8 *mbxdata;
> +
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);

kzalloc

> +
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_read_link_stats(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
> +				    req_ext_counters,
> +				    clear_overflow_flags,
> +				    clear_all_counters))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_link_stat, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_host_stat(struct efct_hw *hw, int status,
> +		     u8 *mqe, void  *arg)
> +{
> +	struct sli4_cmd_read_status *mbox_rsp =
> +					(struct sli4_cmd_read_status *)mqe;
> +	struct efct_hw_host_stat_cb_arg *cb_arg = arg;
> +	struct efct_hw_host_stat_counts counts[EFCT_HW_HOST_STAT_MAX];
> +	u32 num_counters = EFCT_HW_HOST_STAT_MAX;
> +
> +	memset(counts, 0, sizeof(struct efct_hw_host_stat_counts) *
> +		   EFCT_HW_HOST_STAT_MAX);
> +
> +	counts[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->trans_kbyte_cnt);
> +	counts[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->recv_kbyte_cnt);
> +	counts[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->trans_frame_cnt);
> +	counts[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->recv_frame_cnt);
> +	counts[EFCT_HW_HOST_STAT_TX_SEQ_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->trans_seq_cnt);
> +	counts[EFCT_HW_HOST_STAT_RX_SEQ_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->recv_seq_cnt);
> +	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG].counter =
> +		 le32_to_cpu(mbox_rsp->tot_exchanges_orig);
> +	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP].counter =
> +		 le32_to_cpu(mbox_rsp->tot_exchanges_resp);
> +	counts[EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->recv_p_bsy_cnt);
> +	counts[EFCT_HW_HOST_STAT_RX_F_BSY_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->recv_f_bsy_cnt);
> +	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->no_rq_buf_dropped_frames_cnt);
> +	counts[EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->empty_rq_timeout_cnt);
> +	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->no_xri_dropped_frames_cnt);
> +	counts[EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT].counter =
> +		 le32_to_cpu(mbox_rsp->empty_xri_pool_cnt);
> +
> +	if (cb_arg) {
> +		if (cb_arg->cb) {
> +			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
> +				status = le16_to_cpu(mbox_rsp->hdr.status);
> +			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
> +		}
> +
> +		kfree(cb_arg);
> +	}
> +	kfree(mqe);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
> +		       void (*cb)(int status,
> +				  u32 num_counters,
> +				  struct efct_hw_host_stat_counts *counters,
> +				  void *arg),
> +		       void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	struct efct_hw_host_stat_cb_arg *cb_arg;
> +	u8 *mbxdata;
> +
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);
> +
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	 cb_arg->cb = cb;
> +	 cb_arg->arg = arg;
> +
> +	 /* Send the HW command to get the host stats */
> +	if (!sli_cmd_read_status(&hw->sli, mbxdata, SLI4_BMBX_SIZE, cc))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_host_stat, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "READ_HOST_STATS failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 36a832f32616..0b6838c7f924 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -733,4 +733,43 @@ efct_hw_srrs_send(struct efct_hw *hw, enum efct_hw_io_type type,
>  		  efct_hw_srrs_cb_t cb,
>  		  void *arg);
>  
> +/* Function for retrieving SFP data */
> +extern enum efct_hw_rtn
> +efct_hw_get_sfp(struct efct_hw *hw, u16 page,
> +		void (*cb)(int, u32, u8 *, void *), void *arg);
> +
> +/* Function for retrieving temperature data */
> +extern enum efct_hw_rtn
> +efct_hw_get_temperature(struct efct_hw *hw,
> +			void (*efct_hw_temp_cb_t)(int status,
> +						  u32 curr_temp,
> +				u32 crit_temp_thrshld,
> +				u32 warn_temp_thrshld,
> +				u32 norm_temp_thrshld,
> +				u32 fan_off_thrshld,
> +				u32 fan_on_thrshld,
> +				void *arg),
> +			void *arg);
> +
> +/* Function for retrieving link statistics */
> +extern enum efct_hw_rtn
> +efct_hw_get_link_stats(struct efct_hw *hw,
> +		       u8 req_ext_counters,
> +		u8 clear_overflow_flags,
> +		u8 clear_all_counters,
> +		void (*efct_hw_link_stat_cb_t)(int status,
> +					       u32 num_counters,
> +			struct efct_hw_link_stat_counts *counters,
> +			void *arg),
> +		void *arg);
> +/* Function for retrieving host statistics */
> +extern enum efct_hw_rtn
> +efct_hw_get_host_stats(struct efct_hw *hw,
> +		       u8 cc,
> +		void (*efct_hw_host_stat_cb_t)(int status,
> +					       u32 num_counters,
> +			struct efct_hw_host_stat_counts *counters,
> +			void *arg),
> +		void *arg);
> +
>  #endif /* __EFCT_H__ */
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 27/31] elx: efct: xport and hardware teardown routines
  2020-04-12  3:32 ` [PATCH v3 27/31] elx: efct: xport and hardware teardown routines James Smart
  2020-04-16  9:45   ` Hannes Reinecke
@ 2020-04-16 13:01   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 13:01 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:59PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to detach xport and hardware objects.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>    Removed old patch 28 and merged with 27
> ---
>  drivers/scsi/elx/efct/efct_hw.c    | 333 +++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h    |  31 ++++
>  drivers/scsi/elx/efct/efct_xport.c | 291 ++++++++++++++++++++++++++++++++
>  3 files changed, 655 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index ca2fd237c7d6..a007ca98895d 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -3503,3 +3503,336 @@ efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
>  
>  	return rc;
>  }
> +
> +static int
> +efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
> +			void  *arg)
> +{
> +	kfree(mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +/* Control a port (initialize, shutdown, or set link configuration) */
> +enum efct_hw_rtn
> +efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
> +		     uintptr_t value,
> +		void (*cb)(int status, uintptr_t value, void *arg),
> +		void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +
> +	switch (ctrl) {
> +	case EFCT_HW_PORT_INIT:
> +	{
> +		u8	*init_link;
> +		u32 speed = 0;
> +		u8 reset_alpa = 0;
> +
> +		u8	*cfg_link;
> +
> +		cfg_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);

GFP_KERNEL

> +		if (!cfg_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		if (!sli_cmd_config_link(&hw->sli, cfg_link,
> +					SLI4_BMBX_SIZE))
> +			rc = efct_hw_command(hw, cfg_link,
> +					     EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control,
> +					     NULL);
> +
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(cfg_link);
> +			efc_log_err(hw->os, "CONFIG_LINK failed\n");
> +			break;
> +		}
> +		speed = hw->config.speed;
> +		reset_alpa = (u8)(value & 0xff);
> +
> +		/* Allocate a new buffer for the init_link command */
> +		init_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);

GFP_KERNEL

> +		if (!init_link)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		rc = EFCT_HW_RTN_ERROR;
> +		if (!sli_cmd_init_link(&hw->sli, init_link, SLI4_BMBX_SIZE,
> +				      speed, reset_alpa))
> +			rc = efct_hw_command(hw, init_link, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control, NULL);
> +		/* Free buffer on error, since no callback is coming */
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(init_link);
> +			efc_log_err(hw->os, "INIT_LINK failed\n");
> +		}
> +		break;
> +	}
> +	case EFCT_HW_PORT_SHUTDOWN:
> +	{
> +		u8	*down_link;
> +
> +		down_link = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!down_link)
> +			return EFCT_HW_RTN_NO_MEMORY;

GFP_KERNEL

> +
> +		if (!sli_cmd_down_link(&hw->sli, down_link, SLI4_BMBX_SIZE))
> +			rc = efct_hw_command(hw, down_link, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_port_control, NULL);
> +		/* Free buffer on error, since no callback is coming */
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			kfree(down_link);
> +			efc_log_err(hw->os, "DOWN_LINK failed\n");
> +		}
> +		break;
> +	}
> +	default:
> +		efc_log_test(hw->os, "unhandled control %#x\n", ctrl);
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_teardown(struct efct_hw *hw)
> +{
> +	u32	i = 0;
> +	u32	iters = 10;
> +	u32	max_rpi;
> +	u32 destroy_queues;
> +	u32 free_memory;
> +	struct efc_dma *dma;
> +	struct efct *efct = hw->os;
> +
> +	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
> +	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
> +
> +	/* Cancel Sliport Healthcheck */
> +	if (hw->sliport_healthcheck) {
> +		hw->sliport_healthcheck = 0;
> +		efct_hw_config_sli_port_health_check(hw, 0, 0);
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
> +		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
> +
> +		efct_hw_flush(hw);
> +
> +		/*
> +		 * If there are outstanding commands, wait for them to complete
> +		 */
> +		while (!list_empty(&hw->cmd_head) && iters) {
> +			mdelay(10);
> +			efct_hw_flush(hw);
> +			iters--;
> +		}
> +
> +		if (list_empty(&hw->cmd_head))
> +			efc_log_debug(hw->os,
> +				       "All commands completed on MQ queue\n");
> +		else
> +			efc_log_debug(hw->os,
> +				       "Some cmds still pending on MQ queue\n");
> +
> +		/* Cancel any remaining commands */
> +		efct_hw_command_cancel(hw);
> +	} else {
> +		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
> +	}
> +
> +	max_rpi = hw->sli.qinfo.max_qcount[SLI_RSRC_RPI];
> +	if (hw->rpi_ref) {
> +		for (i = 0; i < max_rpi; i++) {
> +			u32 count;
> +
> +			count = atomic_read(&hw->rpi_ref[i].rpi_count);
> +			if (count)
> +				efc_log_debug(hw->os,
> +					       "non-zero ref [%d]=%d\n",
> +					       i, count);
> +		}
> +		kfree(hw->rpi_ref);
> +		hw->rpi_ref = NULL;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  hw->rnode_mem.size, hw->rnode_mem.virt,
> +			  hw->rnode_mem.phys);
> +	memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
> +
> +	if (hw->io) {
> +		for (i = 0; i < hw->config.n_io; i++) {
> +			if (hw->io[i] && hw->io[i]->sgl &&
> +			    hw->io[i]->sgl->virt) {
> +				dma_free_coherent(&efct->pcidev->dev,
> +						  hw->io[i]->sgl->size,
> +						  hw->io[i]->sgl->virt,
> +						  hw->io[i]->sgl->phys);
> +				memset(&hw->io[i]->sgl, 0,
> +				       sizeof(struct efc_dma));
> +			}
> +			kfree(hw->io[i]);
> +			hw->io[i] = NULL;
> +		}
> +		kfree(hw->io);
> +		hw->io = NULL;
> +		kfree(hw->wqe_buffs);
> +		hw->wqe_buffs = NULL;
> +	}
> +
> +	dma = &hw->xfer_rdy;
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  dma->size, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +
> +	dma = &hw->dump_sges;
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  dma->size, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +
> +	dma = &hw->loop_map;
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  dma->size, dma->virt, dma->phys);
> +	memset(dma, 0, sizeof(struct efc_dma));
> +
> +	for (i = 0; i < hw->wq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->rq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->mq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->cq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
> +			       free_memory);
> +
> +	for (i = 0; i < hw->eq_count; i++)
> +		sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
> +			       free_memory);
> +
> +	/* Free rq buffers */
> +	efct_hw_rx_free(hw);
> +
> +	efct_hw_queue_teardown(hw);
> +
> +	if (sli_teardown(&hw->sli))
> +		efc_log_err(hw->os, "SLI teardown failed\n");
> +
> +	/* record the fact that the queues are non-functional */
> +	hw->state = EFCT_HW_STATE_UNINITIALIZED;
> +
> +	/* free sequence free pool */
> +	kfree(hw->seq_pool);
> +	hw->seq_pool = NULL;
> +
> +	/* free hw_wq_callback pool */
> +	efct_hw_reqtag_pool_free(hw);
> +
> +	/* Mark HW setup as not having been called */
> +	hw->hw_setup_called = false;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_sli_reset(struct efct_hw *hw, enum efct_hw_reset reset,
> +		  enum efct_hw_state prev_state)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	switch (reset) {
> +	case EFCT_HW_RESET_FUNCTION:
> +		efc_log_debug(hw->os, "issuing function level reset\n");
> +		if (sli_reset(&hw->sli)) {
> +			efc_log_err(hw->os, "sli_reset failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	case EFCT_HW_RESET_FIRMWARE:
> +		efc_log_debug(hw->os, "issuing firmware reset\n");
> +		if (sli_fw_reset(&hw->sli)) {
> +			efc_log_err(hw->os, "sli_soft_reset failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		/*
> +		 * Because the FW reset leaves the FW in a non-running state,
> +		 * follow that with a regular reset.
> +		 */
> +		efc_log_debug(hw->os, "issuing function level reset\n");
> +		if (sli_reset(&hw->sli)) {
> +			efc_log_err(hw->os, "sli_reset failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +		break;
> +	default:
> +		efc_log_err(hw->os,
> +			     "unknown reset type - no reset performed\n");
> +		hw->state = prev_state;
> +		rc = EFCT_HW_RTN_INVALID_ARG;
> +		break;
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32	iters;
> +	enum efct_hw_state prev_state = hw->state;
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE)
> +		efc_log_debug(hw->os,
> +			      "HW state %d is not active\n", hw->state);
> +
> +	hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
> +
> +	/*
> +	 * If the prev_state is already reset/teardown in progress,
> +	 * don't continue further
> +	 */
> +	if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
> +	    prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
> +		return efct_hw_sli_reset(hw, reset, prev_state);
> +
> +	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
> +		efct_hw_flush(hw);
> +
> +		/*
> +		 * If an mailbox command requiring a DMA is outstanding
> +		 * (SFP/DDM), then the FW will UE when the reset is issued.
> +		 * So attempt to complete all mailbox commands.
> +		 */
> +		iters = 10;
> +		while (!list_empty(&hw->cmd_head) && iters) {
> +			mdelay(10);
> +			efct_hw_flush(hw);
> +			iters--;
> +		}
> +
> +		if (list_empty(&hw->cmd_head))
> +			efc_log_debug(hw->os,
> +				       "All commands completed on MQ queue\n");
> +		else
> +			efc_log_debug(hw->os,
> +				       "Some commands still pending on MQ queue\n");
> +	}
> +
> +	/* Reset the chip */
> +	rc = efct_hw_sli_reset(hw, reset, prev_state);
> +	if (rc == EFCT_HW_RTN_INVALID_ARG)
> +		return EFCT_HW_RTN_ERROR;
> +
> +	return rc;
> +}
> +
> +int
> +efct_hw_get_num_eq(struct efct_hw *hw)
> +{
> +	return hw->eq_count;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 0b6838c7f924..9c025a1709e3 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -772,4 +772,35 @@ efct_hw_get_host_stats(struct efct_hw *hw,
>  			void *arg),
>  		void *arg);
>  
> +struct hw_eq *efct_hw_new_eq(struct efct_hw *hw, u32 entry_count);
> +struct hw_cq *efct_hw_new_cq(struct hw_eq *eq, u32 entry_count);
> +extern u32
> +efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
> +		   u32 num_cqs, u32 entry_count);
> +struct hw_mq *efct_hw_new_mq(struct hw_cq *cq, u32 entry_count);
> +extern struct hw_wq
> +*efct_hw_new_wq(struct hw_cq *cq, u32 entry_count);
> +extern struct hw_rq
> +*efct_hw_new_rq(struct hw_cq *cq, u32 entry_count);
> +extern u32
> +efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
> +		   u32 num_rq_pairs, u32 entry_count);
> +void efct_hw_del_eq(struct hw_eq *eq);
> +void efct_hw_del_cq(struct hw_cq *cq);
> +void efct_hw_del_mq(struct hw_mq *mq);
> +void efct_hw_del_wq(struct hw_wq *wq);
> +void efct_hw_del_rq(struct hw_rq *rq);
> +void efct_hw_queue_dump(struct efct_hw *hw);
> +void efct_hw_queue_teardown(struct efct_hw *hw);
> +enum efct_hw_rtn efct_hw_teardown(struct efct_hw *hw);
> +enum efct_hw_rtn
> +efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset);
> +int efct_hw_get_num_eq(struct efct_hw *hw);
> +
> +extern enum efct_hw_rtn
> +efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
> +		     uintptr_t value,
> +		void (*cb)(int status, uintptr_t value, void *arg),
> +		void *arg);
> +
>  #endif /* __EFCT_H__ */
> diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
> index b683208d396f..fef7f427dbbf 100644
> --- a/drivers/scsi/elx/efct/efct_xport.c
> +++ b/drivers/scsi/elx/efct/efct_xport.c
> @@ -521,3 +521,294 @@ efct_scsi_release_fc_transport(void)
>  
>  	return EFC_SUCCESS;
>  }
> +
> +int
> +efct_xport_detach(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +
> +	/* free resources associated with target-server and initiator-client */
> +	efct_scsi_tgt_del_device(efct);
> +
> +	efct_scsi_del_device(efct);
> +
> +	/*Shutdown FC Statistics timer*/
> +	if (timer_pending(&xport->stats_timer))
> +		del_timer(&xport->stats_timer);
> +
> +	efct_hw_teardown(&efct->hw);
> +
> +	efct_xport_delete_debugfs(efct);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_xport_domain_free_cb(struct efc *efc, void *arg)
> +{
> +	struct completion *done = arg;
> +
> +	complete(done);
> +}
> +
> +static int
> +efct_xport_post_node_event_cb(struct efct_hw *hw, int status,
> +			      u8 *mqe, void *arg)
> +{
> +	struct efct_xport_post_node_event *payload = arg;
> +
> +	if (payload) {
> +		efc_node_post_shutdown(payload->node, payload->evt,
> +				       payload->context);
> +		complete(&payload->done);
> +		if (atomic_sub_and_test(1, &payload->refcnt))
> +			kfree(payload);
> +	}
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_xport_force_free(struct efct_xport *xport)
> +{
> +	struct efct *efct = xport->efct;
> +	struct efc *efc = efct->efcport;
> +
> +	efc_log_debug(efct, "reset required, do force shutdown\n");
> +
> +	if (!efc->domain) {
> +		efc_log_err(efct, "Domain is already freed\n");
> +		return;
> +	}
> +
> +	efc_domain_force_free(efc->domain);
> +}
> +
> +int
> +efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...)
> +{
> +	u32 rc = 0;
> +	struct efct *efct = NULL;
> +	va_list argp;
> +
> +	efct = xport->efct;
> +
> +	switch (cmd) {
> +	case EFCT_XPORT_PORT_ONLINE: {
> +		/* Bring the port on-line */
> +		rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
> +					  NULL, NULL);
> +		if (rc)
> +			efc_log_err(efct,
> +				     "%s: Can't init port\n", efct->desc);
> +		else
> +			xport->configured_link_state = cmd;
> +		break;
> +	}
> +	case EFCT_XPORT_PORT_OFFLINE: {
> +		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
> +					 NULL, NULL))
> +			efc_log_err(efct, "port shutdown failed\n");
> +		else
> +			xport->configured_link_state = cmd;
> +		break;
> +	}
> +
> +	case EFCT_XPORT_SHUTDOWN: {
> +		struct completion done;
> +		bool reset_required;
> +		unsigned long timeout;
> +
> +		/* if a PHYSDEV reset was performed (e.g. hw dump), will affect
> +		 * all PCI functions; orderly shutdown won't work,
> +		 * just force free
> +		 */
> +
> +		reset_required = sli_reset_required(&efct->hw.sli);
> +
> +		if (reset_required) {
> +			efc_log_debug(efct,
> +				       "reset required, do force shutdown\n");
> +			efct_xport_force_free(xport);
> +			break;
> +		}
> +		init_completion(&done);
> +
> +		efc_register_domain_free_cb(efct->efcport,
> +					efct_xport_domain_free_cb, &done);
> +
> +		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
> +					 NULL, NULL)) {
> +			efc_log_debug(efct,
> +				       "port shutdown failed, do force shutdown\n");
> +			efct_xport_force_free(xport);
> +		} else {
> +			efc_log_debug(efct,
> +				       "Waiting %d seconds for domain shutdown.\n",
> +			(EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC / 1000000));
> +
> +			timeout = usecs_to_jiffies(
> +					EFCT_FC_DOMAIN_SHUTDOWN_TIMEOUT_USEC);
> +			if (!wait_for_completion_timeout(&done, timeout)) {
> +				efc_log_debug(efct,
> +					       "Domain shutdown timed out!!\n");
> +				efct_xport_force_free(xport);
> +			}
> +		}
> +
> +		efc_register_domain_free_cb(efct->efcport, NULL, NULL);
> +
> +		/* Free up any saved virtual ports */
> +		efc_vport_del_all(efct->efcport);
> +		break;
> +	}
> +
> +	/*
> +	 * POST_NODE_EVENT:  post an event to a node object
> +	 *
> +	 * This transport function is used to post an event to a node object.
> +	 * It does this by submitting a NOP mailbox command to defer execution
> +	 * to the interrupt context (thereby enforcing the serialized execution
> +	 * of event posting to the node state machine instances)
> +	 */
> +	case EFCT_XPORT_POST_NODE_EVENT: {
> +		struct efc_node *node;
> +		u32	evt;
> +		void *context;
> +		struct efct_xport_post_node_event *payload = NULL;
> +		struct efct *efct;
> +		struct efct_hw *hw;
> +
> +		/* Retrieve arguments */
> +		va_start(argp, cmd);
> +		node = va_arg(argp, struct efc_node *);
> +		evt = va_arg(argp, u32);
> +		context = va_arg(argp, void *);
> +		va_end(argp);
> +
> +		payload = kmalloc(sizeof(*payload), GFP_KERNEL);
> +		if (!payload)
> +			return EFC_FAIL;
> +
> +		memset(payload, 0, sizeof(*payload));
> +
> +		efct = node->efc->base;
> +		hw = &efct->hw;
> +
> +		/* if node's state machine is disabled,
> +		 * don't bother continuing
> +		 */
> +		if (!node->sm.current_state) {
> +			efc_log_test(efct, "node %p state machine disabled\n",
> +				      node);
> +			kfree(payload);
> +			rc = -1;
> +			break;
> +		}
> +
> +		/* Setup payload */
> +		init_completion(&payload->done);
> +
> +		/* one for self and one for callback */
> +		atomic_set(&payload->refcnt, 2);
> +		payload->node = node;
> +		payload->evt = evt;
> +		payload->context = context;
> +
> +		if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
> +				       payload)) {
> +			efc_log_test(efct, "efct_hw_async_call failed\n");
> +			kfree(payload);
> +			rc = -1;
> +			break;
> +		}
> +
> +		/* Wait for completion */
> +		if (wait_for_completion_interruptible(&payload->done)) {
> +			efc_log_test(efct,
> +				      "POST_NODE_EVENT: completion failed\n");
> +			rc = -1;
> +		}
> +		if (atomic_sub_and_test(1, &payload->refcnt))
> +			kfree(payload);
> +
> +		break;
> +	}
> +	/*
> +	 * Set wwnn for the port. This will be used instead of the default
> +	 * provided by FW.
> +	 */
> +	case EFCT_XPORT_WWNN_SET: {
> +		u64 wwnn;
> +
> +		/* Retrieve arguments */
> +		va_start(argp, cmd);
> +		wwnn = va_arg(argp, uint64_t);
> +		va_end(argp);
> +
> +		efc_log_debug(efct, " WWNN %016llx\n", wwnn);
> +		xport->req_wwnn = wwnn;
> +
> +		break;
> +	}
> +	/*
> +	 * Set wwpn for the port. This will be used instead of the default
> +	 * provided by FW.
> +	 */
> +	case EFCT_XPORT_WWPN_SET: {
> +		u64 wwpn;
> +
> +		/* Retrieve arguments */
> +		va_start(argp, cmd);
> +		wwpn = va_arg(argp, uint64_t);
> +		va_end(argp);
> +
> +		efc_log_debug(efct, " WWPN %016llx\n", wwpn);
> +		xport->req_wwpn = wwpn;
> +
> +		break;
> +	}
> +
> +	default:
> +		break;
> +	}
> +	return rc;
> +}
> +
> +void
> +efct_xport_free(struct efct_xport *xport)
> +{
> +	if (xport) {
> +		efct_io_pool_free(xport->io_pool);
> +
> +		kfree(xport);
> +	}
> +}
> +
> +void
> +efct_release_fc_transport(struct scsi_transport_template *transport_template)
> +{
> +	if (transport_template)
> +		pr_err("releasing transport layer\n");
> +
> +	/* Releasing FC transport */
> +	fc_release_transport(transport_template);
> +}
> +
> +static void
> +efct_xport_remove_host(struct Scsi_Host *shost)
> +{
> +	fc_remove_host(shost);
> +}
> +
> +int efct_scsi_del_device(struct efct *efct)
> +{
> +	if (efct->shost) {
> +		efc_log_debug(efct, "Unregistering with Transport Layer\n");
> +		efct_xport_remove_host(efct->shost);
> +		efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
> +		scsi_remove_host(efct->shost);
> +		scsi_host_put(efct->shost);
> +		efct->shost = NULL;
> +	}
> +	return EFC_SUCCESS;
> +}
> -- 
> 2.16.4
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 28/31] elx: efct: Firmware update, async link processing
  2020-04-12  3:33 ` [PATCH v3 28/31] elx: efct: Firmware update, async link processing James Smart
  2020-04-16 10:01   ` Hannes Reinecke
@ 2020-04-16 13:10   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 13:10 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:33:00PM -0700, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Handling of async link event.
> Registrations for VFI, VPI and RPI.
> Add Firmware update helper routines.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Reworked efct_hw_port_attach_reg_vpi() and efct_hw_port_attach_reg_vfi()
>   Return defined values
> ---
>  drivers/scsi/elx/efct/efct_hw.c | 1509 +++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/elx/efct/efct_hw.h |   58 ++
>  2 files changed, 1567 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index a007ca98895d..b3a1ec0f674b 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -42,6 +42,12 @@ struct efct_hw_host_stat_cb_arg {
>  	void *arg;
>  };
>  
> +struct efct_hw_fw_wr_cb_arg {
> +	void (*cb)(int status, u32 bytes_written,
> +		   u32 change_status, void *arg);
> +	void *arg;
> +};
> +
>  static enum efct_hw_rtn
>  efct_hw_link_event_init(struct efct_hw *hw)
>  {
> @@ -3836,3 +3842,1506 @@ efct_hw_get_num_eq(struct efct_hw *hw)
>  {
>  	return hw->eq_count;
>  }
> +
> +/* HW async call context structure */
> +struct efct_hw_async_call_ctx {
> +	efct_hw_async_cb_t callback;
> +	void *arg;
> +	u8 cmd[SLI4_BMBX_SIZE];
> +};
> +
> +static void
> +efct_hw_async_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
> +{
> +	struct efct_hw_async_call_ctx *ctx = arg;
> +
> +	if (ctx) {
> +		if (ctx->callback)
> +			(*ctx->callback)(hw, status, mqe, ctx->arg);
> +
> +		kfree(ctx);
> +	}
> +}
> +
> +/*
> + * Post a NOP mbox cmd; the callback with argument is invoked upon completion
> + * while in the event processing context.
> + */
> +int
> +efct_hw_async_call(struct efct_hw *hw,
> +		   efct_hw_async_cb_t callback, void *arg)
> +{
> +	int rc = 0;
> +	struct efct_hw_async_call_ctx *ctx;
> +
> +	/*
> +	 * Allocate a callback context (which includes the mbox cmd buffer);
> +	 * we need this to be persistent as the mbox cmd submission may be
> +	 * queued and executed later.
> +	 */
> +	ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);

Ah, maybe I got that wrong in the other places where I asked for
GFP_ATOMIC. If this gets called while holding spinlocks, GFP_ATOMIC is
needed.
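
E.g., as a throwaway sketch (the function and lock names below are invented,
not from the patch):

	static struct efct_hw_async_call_ctx *
	example_alloc_under_lock(spinlock_t *lock)
	{
		struct efct_hw_async_call_ctx *ctx;
		unsigned long flags;

		spin_lock_irqsave(lock, flags);
		/* GFP_KERNEL may sleep, which is forbidden with the lock held */
		ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
		spin_unlock_irqrestore(lock, flags);

		return ctx;
	}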

> +	if (!ctx)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(ctx, 0, sizeof(*ctx));

kzalloc
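
i.e. roughly (a sketch of the suggested change, keeping the error handling
from the patch):

	ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
	if (!ctx)
		return EFCT_HW_RTN_NO_MEMORY;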

> +	ctx->callback = callback;
> +	ctx->arg = arg;
> +
> +	/* Build and send a NOP mailbox command */
> +	if (sli_cmd_common_nop(&hw->sli, ctx->cmd, sizeof(ctx->cmd), 0)) {
> +		efc_log_err(hw->os, "COMMON_NOP format failure\n");
> +		kfree(ctx);
> +		rc = -1;
> +	}
> +
> +	if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
> +			    ctx)) {
> +		efc_log_err(hw->os, "COMMON_NOP command failure\n");
> +		kfree(ctx);
> +		rc = -1;
> +	}
> +	return rc;
> +}
> +
> +static void
> +efct_hw_port_free_resources(struct efc_sli_port *sport, int evt, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Clear the sport attached flag */
> +	sport->attached = false;
> +
> +	/* Free the service parameters buffer */
> +	if (sport->dma.virt) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  sport->dma.size, sport->dma.virt,
> +				  sport->dma.phys);
> +		memset(&sport->dma, 0, sizeof(struct efc_dma));
> +	}
> +
> +	/* Free the command buffer */
> +	kfree(data);
> +
> +	/* Free the SLI resources */
> +	sli_resource_free(&hw->sli, SLI_RSRC_VPI, sport->indicator);
> +
> +	efc_lport_cb(efct->efcport, evt, sport);
> +}
> +
> +static int
> +efct_hw_port_get_mbox_status(struct efc_sli_port *sport,
> +			     u8 *mqe, int status)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct sli4_mbox_command_header *hdr =
> +			(struct sli4_mbox_command_header *)mqe;
> +	int rc = 0;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status vpi=%#x st=%x hdr=%x\n",
> +			       sport->indicator, status,
> +			       le16_to_cpu(hdr->status));
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_port_free_unreg_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int evt = EFC_HW_PORT_FREE_OK;
> +	int rc = 0;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		evt = EFC_HW_PORT_FREE_FAIL;
> +		rc = -1;
> +	}
> +
> +	efct_hw_port_free_resources(sport, evt, mqe);
> +	return rc;
> +}
> +
> +static void
> +efct_hw_port_free_unreg_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	/* Allocate memory and send unreg_vpi */
> +	if (!data) {
> +		data = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!data) {
> +			efct_hw_port_free_resources(sport,
> +						    EFC_HW_PORT_FREE_FAIL,
> +						    data);
> +			return;
> +		}
> +		memset(data, 0, SLI4_BMBX_SIZE);

kzalloc

> +	}
> +
> +	rc = sli_cmd_unreg_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			       sport->indicator, SLI4_UNREG_TYPE_PORT);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_FREE_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_free_unreg_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_FREE_FAIL, data);
> +	}
> +}
> +
> +static void
> +efct_hw_port_send_evt(struct efc_sli_port *sport, int evt, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the mbox buffer */
> +	kfree(data);
> +
> +	/* Now inform the registered callbacks */
> +	efc_lport_cb(efct->efcport, evt, sport);
> +
> +	/* Set the sport attached flag */
> +	if (evt == EFC_HW_PORT_ATTACH_OK)
> +		sport->attached = true;
> +
> +	/* If there is a pending free request, then handle it now */
> +	if (sport->free_req_pending)
> +		efct_hw_port_free_unreg_vpi(sport, NULL);
> +}
> +
> +static int
> +efct_hw_port_alloc_init_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_port_send_evt(sport, EFC_HW_PORT_ALLOC_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_port_alloc_init_vpi(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	int rc;
> +
> +	/* If there is a pending free request, then handle it now */
> +	if (sport->free_req_pending) {
> +		efct_hw_port_free_resources(sport, EFC_HW_PORT_FREE_OK, data);
> +		return;
> +	}
> +
> +	rc = sli_cmd_init_vpi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			      sport->indicator, sport->domain->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_alloc_init_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +	}
> +}
> +
> +static int
> +efct_hw_port_alloc_read_sparm64_cb(struct efct_hw *hw,
> +				   int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	u8 *payload = NULL;
> +	struct efct *efct = hw->os;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	payload = sport->dma.virt;
> +
> +	memcpy(&sport->sli_wwpn,
> +	       payload + SLI4_READ_SPARM64_WWPN_OFFSET,
> +		sizeof(sport->sli_wwpn));
> +	memcpy(&sport->sli_wwnn,
> +	       payload + SLI4_READ_SPARM64_WWNN_OFFSET,
> +		sizeof(sport->sli_wwnn));
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  sport->dma.size, sport->dma.virt, sport->dma.phys);
> +	memset(&sport->dma, 0, sizeof(struct efc_dma));
> +	efct_hw_port_alloc_init_vpi(sport, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_port_alloc_read_sparm64(struct efc_sli_port *sport, void *data)
> +{
> +	struct efct_hw *hw = sport->hw;
> +	struct efct *efct = hw->os;
> +	int rc;
> +
> +	/* Allocate memory for the service parameters */
> +	sport->dma.size = 112;
> +	sport->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					     sport->dma.size, &sport->dma.phys,
> +					     GFP_DMA);
> +	if (!sport->dma.virt) {
> +		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
> +				  &sport->dma, sport->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_alloc_read_sparm64_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ALLOC_FAIL, data);
> +	}
> +}
> +
> +/*
> + * This function allocates a VPI object for the port and stores it in the
> + * indicator field of the port object.
> + */
> +enum efct_hw_rtn
> +efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
> +		   struct efc_domain *domain, u8 *wwpn)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*cmd = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32 index;
> +
> +	sport->indicator = U32_MAX;
> +	sport->hw = hw;
> +	sport->free_req_pending = false;
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (wwpn)
> +		memcpy(&sport->sli_wwpn, wwpn, sizeof(sport->sli_wwpn));
> +
> +	if (sli_resource_alloc(&hw->sli, SLI_RSRC_VPI,
> +			       &sport->indicator, &index)) {
> +		efc_log_err(hw->os, "VPI allocation failure\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (domain) {
> +		cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!cmd) {
> +			rc = EFCT_HW_RTN_NO_MEMORY;
> +			goto efct_hw_port_alloc_out;
> +		}
> +		memset(cmd, 0, SLI4_BMBX_SIZE);
> +
> +		/*
> +		 * If the WWPN is NULL, fetch the default
> +		 * WWPN and WWNN before initializing the VPI
> +		 */
> +		if (!wwpn)
> +			efct_hw_port_alloc_read_sparm64(sport, cmd);
> +		else
> +			efct_hw_port_alloc_init_vpi(sport, cmd);
> +	} else if (!wwpn) {
> +		/* This is the convention for the HW, not SLI */
> +		efc_log_test(hw->os, "need WWN for physical port\n");
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +	/* domain NULL and wwpn non-NULL */
> +	// no-op;

left over?

> +
> +efct_hw_port_alloc_out:
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		kfree(cmd);
> +
> +		sli_resource_free(&hw->sli, SLI_RSRC_VPI,
> +				  sport->indicator);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_port_attach_reg_vpi_cb(struct efct_hw *hw,
> +			       int status, u8 *mqe, void *arg)
> +{
> +	struct efc_sli_port *sport = arg;
> +	int rc;
> +
> +	rc = efct_hw_port_get_mbox_status(sport, mqe, status);
> +	if (rc) {
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_port_send_evt(sport, EFC_HW_PORT_ATTACH_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +/**
> + * This function registers a previously-allocated VPI with the
> + * device.
> + */
> +enum efct_hw_rtn
> +efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
> +		    u32 fc_id)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!sport) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p sport=%p\n", hw,
> +			sport);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	sport->fc_id = fc_id;
> +
> +	rc = sli_cmd_reg_vpi(&hw->sli, buf, SLI4_BMBX_SIZE, sport->fc_id,
> +			    sport->sli_wwpn, sport->indicator,
> +			    sport->domain->indicator, false);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VPI format failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, buf);
> +		return rc;
> +	}
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +			     efct_hw_port_attach_reg_vpi_cb, sport);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VPI command failure\n");
> +		efct_hw_port_free_resources(sport,
> +					    EFC_HW_PORT_ATTACH_FAIL, buf);
> +	}
> +
> +	return rc;
> +}
> +
> +/* Issue the UNREG_VPI command to free the assigned VPI context */
> +enum efct_hw_rtn
> +efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!sport) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p sport=%p\n", hw,
> +			sport);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (sport->attached)
> +		efct_hw_port_free_unreg_vpi(sport, NULL);
> +	else
> +		sport->free_req_pending = true;
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_domain_get_mbox_status(struct efc_domain *domain,
> +			       u8 *mqe, int status)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct sli4_mbox_command_header *hdr =
> +			(struct sli4_mbox_command_header *)mqe;
> +	int rc = 0;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status vfi=%#x st=%x hdr=%x\n",
> +			       domain->indicator, status,
> +			       le16_to_cpu(hdr->status));
> +		rc = -1;
> +	}
> +
> +	return rc;
> +}
> +
> +static void
> +efct_hw_domain_free_resources(struct efc_domain *domain,
> +			      int evt, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the service parameters buffer */
> +	if (domain->dma.virt) {
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  domain->dma.size, domain->dma.virt,
> +				  domain->dma.phys);
> +		memset(&domain->dma, 0, sizeof(struct efc_dma));
> +	}
> +
> +	/* Free the command buffer */
> +	kfree(data);
> +
> +	/* Free the SLI resources */
> +	sli_resource_free(&hw->sli, SLI_RSRC_VFI, domain->indicator);
> +
> +	efc_domain_cb(efct->efcport, evt, domain);
> +}
> +
> +static void
> +efct_hw_domain_send_sport_evt(struct efc_domain *domain,
> +			      int port_evt, int domain_evt, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efct *efct = hw->os;
> +
> +	/* Free the mbox buffer */
> +	kfree(data);
> +
> +	/* Send alloc/attach ok to the physical sport */
> +	efct_hw_port_send_evt(domain->sport, port_evt, NULL);
> +
> +	/* Now inform the registered callbacks */
> +	efc_domain_cb(efct->efcport, domain_evt, domain);
> +}
> +
> +static int
> +efct_hw_domain_alloc_read_sparm64_cb(struct efct_hw *hw,
> +				     int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	hw->domain = domain;
> +	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ALLOC_OK,
> +				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_domain_alloc_read_sparm64(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	rc = sli_cmd_read_sparm64(&hw->sli, data, SLI4_BMBX_SIZE,
> +				  &domain->dma, 0);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 format failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_alloc_read_sparm64_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "READ_SPARM64 command failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +	}
> +}
> +
> +static int
> +efct_hw_domain_alloc_init_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_domain_alloc_read_sparm64(domain, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_hw_domain_alloc_init_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	struct efc_sli_port *sport = domain->sport;
> +	int rc;
> +
> +	/*
> +	 * For FC, the HW already registered an FCFI.
> +	 * Copy FCF information into the domain and jump to INIT_VFI.
> +	 */
> +	domain->fcf_indicator = hw->fcf_indicator;
> +	rc = sli_cmd_init_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			      domain->indicator, domain->fcf_indicator,
> +			sport->indicator);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VFI format failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +		return;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_alloc_init_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "INIT_VFI command failure\n");
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
> +	}
> +}
> +
> +/**
> + * This function starts a series of commands needed to connect to the domain,
> + * including
> + *   - REG_FCFI
> + *   - INIT_VFI
> + *   - READ_SPARMS
> + */
> +enum efct_hw_rtn
> +efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
> +		     u32 fcf)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	u8 *cmd = NULL;
> +	u32 index;
> +
> +	if (!domain || !domain->sport) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p domain=%p sport=%p\n",
> +			    hw, domain, domain ? domain->sport : NULL);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(efct,
> +			     "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	cmd = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!cmd)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(cmd, 0, SLI4_BMBX_SIZE);
> +
> +	/* allocate memory for the service parameters */
> +	domain->dma.size = 112;
> +	domain->dma.virt = dma_alloc_coherent(&efct->pcidev->dev,
> +					      domain->dma.size,
> +					      &domain->dma.phys, GFP_DMA);
> +	if (!domain->dma.virt) {
> +		efc_log_err(hw->os, "Failed to allocate DMA memory\n");
> +		kfree(cmd);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	domain->hw = hw;
> +	domain->fcf = fcf;
> +	domain->fcf_indicator = U32_MAX;
> +	domain->indicator = U32_MAX;
> +
> +	if (sli_resource_alloc(&hw->sli,
> +			       SLI_RSRC_VFI, &domain->indicator,
> +				    &index)) {
> +		efc_log_err(hw->os, "VFI allocation failure\n");
> +
> +		kfree(cmd);
> +		dma_free_coherent(&efct->pcidev->dev,
> +				  domain->dma.size, domain->dma.virt,
> +				  domain->dma.phys);
> +		memset(&domain->dma, 0, sizeof(struct efc_dma));
> +
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_domain_alloc_init_vfi(domain, cmd);
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static int
> +efct_hw_domain_attach_reg_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int rc;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		hw->domain = NULL;
> +		efct_hw_domain_free_resources(domain,
> +					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
> +		return EFC_FAIL;
> +	}
> +
> +	efct_hw_domain_send_sport_evt(domain, EFC_HW_PORT_ATTACH_OK,
> +				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
> +	return EFC_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_attach(struct efc *efc,
> +		      struct efc_domain *domain, u32 fc_id)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!domain) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p domain=%p\n",
> +			hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	domain->sport->fc_id = fc_id;
> +
> +	rc = sli_cmd_reg_vfi(&hw->sli, buf, SLI4_BMBX_SIZE, domain->indicator,
> +			    domain->fcf_indicator, domain->dma,
> +			    domain->sport->indicator, domain->sport->sli_wwpn,
> +			    domain->sport->fc_id);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VFI format failure\n");
> +		goto cleanup;
> +	}
> +
> +	rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_attach_reg_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "REG_VFI command failure\n");
> +		goto cleanup;
> +	}
> +
> +	return rc;
> +
> +cleanup:
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_ATTACH_FAIL, buf);
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_domain_free_unreg_vfi_cb(struct efct_hw *hw,
> +				 int status, u8 *mqe, void *arg)
> +{
> +	struct efc_domain *domain = arg;
> +	int evt = EFC_HW_DOMAIN_FREE_OK;
> +	int rc = 0;
> +
> +	rc = efct_hw_domain_get_mbox_status(domain, mqe, status);
> +	if (rc) {
> +		evt = EFC_HW_DOMAIN_FREE_FAIL;
> +		rc = -1;
> +	}
> +
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, evt, mqe);
> +	return rc;
> +}
> +
> +static void
> +efct_hw_domain_free_unreg_vfi(struct efc_domain *domain, void *data)
> +{
> +	struct efct_hw *hw = domain->hw;
> +	int rc;
> +
> +	if (!data) {
> +		data = kzalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!data)
> +			goto cleanup;
> +	}
> +
> +	rc = sli_cmd_unreg_vfi(&hw->sli, data, SLI4_BMBX_SIZE,
> +			       domain->indicator, SLI4_UNREG_TYPE_DOMAIN);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VFI format failure\n");
> +		goto cleanup;
> +	}
> +
> +	rc = efct_hw_command(hw, data, EFCT_CMD_NOWAIT,
> +			     efct_hw_domain_free_unreg_vfi_cb, domain);
> +	if (rc) {
> +		efc_log_err(hw->os, "UNREG_VFI command failure\n");
> +		goto cleanup;
> +	}
> +
> +	return;
> +
> +cleanup:
> +	hw->domain = NULL;
> +	efct_hw_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_free(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!domain) {
> +		efc_log_err(hw->os,
> +			     "bad parameter(s) hw=%p domain=%p\n",
> +			hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	efct_hw_domain_free_unreg_vfi(domain, NULL);
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	if (!domain) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p domain=%p\n", hw, domain);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	dma_free_coherent(&efct->pcidev->dev,
> +			  domain->dma.size, domain->dma.virt, domain->dma.phys);
> +	memset(&domain->dma, 0, sizeof(struct efc_dma));
> +	sli_resource_free(&hw->sli, SLI_RSRC_VFI,
> +			  domain->indicator);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
> +		   u32 fc_addr, struct efc_sli_port *sport)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	/* Check for invalid indicator */
> +	if (rnode->indicator != U32_MAX) {
> +		efc_log_err(hw->os,
> +			     "RPI allocation failure addr=%#x rpi=%#x\n",
> +			    fc_addr, rnode->indicator);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* NULL SLI port indicates an unallocated remote node */
> +	rnode->sport = NULL;
> +
> +	if (sli_resource_alloc(&hw->sli, SLI_RSRC_RPI,
> +			       &rnode->indicator, &rnode->index)) {
> +		efc_log_err(hw->os, "RPI allocation failure addr=%#x\n",
> +			     fc_addr);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	rnode->fc_id = fc_addr;
> +	rnode->sport = sport;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +static int
> +efct_hw_cb_node_attach(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efc_remote_node *rnode = arg;
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event	evt = 0;
> +
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +		atomic_sub_return(1, &hw->rpi_ref[rnode->index].rpi_count);
> +		rnode->attached = false;
> +		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 0);
> +		evt = EFC_HW_NODE_ATTACH_FAIL;
> +	} else {
> +		rnode->attached = true;
> +		atomic_set(&hw->rpi_ref[rnode->index].rpi_attached, 1);
> +		evt = EFC_HW_NODE_ATTACH_OK;
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, rnode);
> +
> +	kfree(mqe);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +/* Update a remote node object with the remote port's service parameters */
> +enum efct_hw_rtn
> +efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
> +		    struct efc_dma *sparms)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;
> +	u8		*buf = NULL;
> +	u32	count = 0;

alignment

> +
> +	if (!hw || !rnode || !sparms) {
> +		efc_log_err(efct,
> +			     "bad parameter(s) hw=%p rnode=%p sparms=%p\n",
> +			    hw, rnode, sparms);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);
> +	/*
> +	 * If the attach count is non-zero, this RPI has already been reg'd.
> +	 * Otherwise, register the RPI
> +	 */
> +	if (rnode->index == U32_MAX) {
> +		efc_log_err(efct, "bad parameter rnode->index invalid\n");
> +		kfree(buf);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +	count = atomic_add_return(1, &hw->rpi_ref[rnode->index].rpi_count);
> +	count--;
> +	if (count) {
> +		/*
> +		 * Can't attach multiple FC_ID's to a node unless High Login
> +		 * Mode is enabled
> +		 */
> +		if (!hw->sli.high_login_mode) {
> +			efc_log_test(hw->os,
> +				      "attach to attached node HLM=%d cnt=%d\n",
> +				     hw->sli.high_login_mode, count);
> +			rc = EFCT_HW_RTN_SUCCESS;
> +		} else {
> +			rnode->node_group = true;
> +			rnode->attached =
> +			 atomic_read(&hw->rpi_ref[rnode->index].rpi_attached);
> +			rc = rnode->attached  ? EFCT_HW_RTN_SUCCESS_SYNC :
> +							 EFCT_HW_RTN_SUCCESS;
> +		}
> +	} else {
> +		rnode->node_group = false;
> +
> +		if (!sli_cmd_reg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				    rnode->fc_id,
> +				    rnode->indicator, rnode->sport->indicator,
> +				    sparms, 0, 0))
> +			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_node_attach, rnode);
> +	}
> +
> +	if (count || rc) {
> +		if (rc < EFCT_HW_RTN_SUCCESS) {
> +			atomic_sub_return(1,
> +					  &hw->rpi_ref[rnode->index].rpi_count);
> +			efc_log_err(hw->os,
> +				     "%s error\n", count ? "HLM" : "REG_RPI");
> +		}
> +		kfree(buf);
> +	}
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_free_resources(struct efc *efc,
> +			    struct efc_remote_node *rnode)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS;

alignment

> +
> +	if (!hw || !rnode) {
> +		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
> +			     hw, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (rnode->sport) {
> +		if (rnode->attached) {
> +			efc_log_err(hw->os, "Err: rnode is still attached\n");
> +			return EFCT_HW_RTN_ERROR;
> +		}
> +		if (rnode->indicator != U32_MAX) {
> +			if (sli_resource_free(&hw->sli, SLI_RSRC_RPI,
> +					      rnode->indicator)) {
> +				efc_log_err(hw->os,
> +					     "RPI free fail RPI %d addr=%#x\n",
> +					    rnode->indicator,
> +					    rnode->fc_id);
> +				rc = EFCT_HW_RTN_ERROR;
> +			} else {
> +				rnode->node_group = false;
> +				rnode->indicator = U32_MAX;
> +				rnode->index = U32_MAX;
> +				rnode->free_group = false;
> +			}
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_node_free(struct efct_hw *hw,
> +		     int status, u8 *mqe, void *arg)
> +{
> +	struct efc_remote_node *rnode = arg;
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
> +	int		rc = 0;
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +
> +		/*
> +		 * In certain cases, a non-zero MQE status is OK (all must be
> +		 * true):
> +		 *   - node is attached
> +		 *   - if High Login Mode is enabled, node is part of a node
> +		 * group
> +		 *   - status is 0x1400
> +		 */
> +		if (!rnode->attached ||
> +		    (hw->sli.high_login_mode && !rnode->node_group) ||
> +				(le16_to_cpu(hdr->status) !=
> +				 MBX_STATUS_RPI_NOT_REG))

		if (!rnode->attached ||
		    (hw->sli.high_login_mode && !rnode->node_group) ||
		    (le16_to_cpu(hdr->status) != MBX_STATUS_RPI_NOT_REG))



> +			rc = -1;
> +	}
> +
> +	if (rc == 0) {
> +		rnode->node_group = false;
> +		rnode->attached = false;
> +
> +		if (atomic_read(&hw->rpi_ref[rnode->index].rpi_count) == 0)
> +			atomic_set(&hw->rpi_ref[rnode->index].rpi_attached,
> +				   0);
> +		 evt = EFC_HW_NODE_FREE_OK;
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, rnode);
> +
> +	kfree(mqe);
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode)
> +{
> +	struct efct *efct = efc->base;
> +	struct efct_hw *hw = &efct->hw;
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_SUCCESS_SYNC;
> +	u32	index = U32_MAX;
> +
> +	if (!hw || !rnode) {
> +		efc_log_err(efct, "bad parameter(s) hw=%p rnode=%p\n",
> +			     hw, rnode);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	index = rnode->index;
> +
> +	if (rnode->sport) {
> +		u32	count = 0;
> +		u32	fc_id;
> +
> +		if (!rnode->attached)
> +			return EFCT_HW_RTN_SUCCESS_SYNC;
> +
> +		buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +		if (!buf)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +
> +		memset(buf, 0, SLI4_BMBX_SIZE);
> +		count = atomic_sub_return(1, &hw->rpi_ref[index].rpi_count);
> +		count++;
> +		if (count <= 1) {
> +			/*
> +			 * There are no other references to this RPI so
> +			 * unregister it
> +			 */
> +			fc_id = U32_MAX;
> +			/* and free the resource */
> +			rnode->node_group = false;
> +			rnode->free_group = true;
> +		} else {
> +			if (!hw->sli.high_login_mode)
> +				efc_log_test(hw->os,
> +					      "Inval cnt with HLM off, cnt=%d\n",
> +					     count);
> +			fc_id = rnode->fc_id & 0x00ffffff;
> +		}
> +
> +		rc = EFCT_HW_RTN_ERROR;
> +
> +		if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE,
> +				      rnode->indicator,
> +				      SLI_RSRC_RPI, fc_id))
> +			rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +					     efct_hw_cb_node_free, rnode);
> +
> +		if (rc != EFCT_HW_RTN_SUCCESS) {
> +			efc_log_err(hw->os, "UNREG_RPI failed\n");
> +			kfree(buf);
> +			rc = EFCT_HW_RTN_ERROR;
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_node_free_all(struct efct_hw *hw, int status, u8 *mqe,
> +			 void *arg)
> +{
> +	struct sli4_mbox_command_header *hdr =
> +				(struct sli4_mbox_command_header *)mqe;
> +	enum efc_hw_remote_node_event evt = EFC_HW_NODE_FREE_FAIL;
> +	int		rc = 0;
> +	u32	i;
> +	struct efct   *efct = hw->os;
> +
> +	if (status || le16_to_cpu(hdr->status)) {
> +		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n", status,
> +			       le16_to_cpu(hdr->status));
> +	} else {
> +		evt = EFC_HW_NODE_FREE_ALL_OK;
> +	}
> +
> +	if (evt == EFC_HW_NODE_FREE_ALL_OK) {
> +		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size;
> +		     i++)

this fits on one line
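
e.g.:

		for (i = 0; i < hw->sli.extent[SLI_RSRC_RPI].size; i++)
			atomic_set(&hw->rpi_ref[i].rpi_count, 0);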

> +			atomic_set(&hw->rpi_ref[i].rpi_count, 0);
> +
> +		if (sli_resource_reset(&hw->sli, SLI_RSRC_RPI)) {
> +			efc_log_test(hw->os, "RPI free all failure\n");
> +			rc = -1;
> +		}
> +	}
> +
> +	efc_remote_node_cb(efct->efcport, evt, NULL);
> +
> +	kfree(mqe);
> +
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_node_free_all(struct efct_hw *hw)
> +{
> +	u8	*buf = NULL;
> +	enum efct_hw_rtn	rc = EFCT_HW_RTN_ERROR;

alignment

> +
> +	/*
> +	 * Check if the chip is in an error state (UE'd) before proceeding.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
> +	if (!buf)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(buf, 0, SLI4_BMBX_SIZE);

kzalloc
> +
> +	if (!sli_cmd_unreg_rpi(&hw->sli, buf, SLI4_BMBX_SIZE, 0xffff,
> +			      SLI_RSRC_FCFI, U32_MAX))
> +		rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_node_free_all,
> +				     NULL);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_err(hw->os, "UNREG_RPI failed\n");
> +		kfree(buf);
> +		rc = EFCT_HW_RTN_ERROR;
> +	}
> +
> +	return rc;
> +}
> +
> +struct efct_hw_get_nvparms_cb_arg {
> +	void (*cb)(int status,
> +		   u8 *wwpn, u8 *wwnn,
> +		u8 hard_alpa, u32 preferred_d_id,
> +		void *arg);
> +	void *arg;
> +};
> +
> +static int
> +efct_hw_get_nvparms_cb(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efct_hw_get_nvparms_cb_arg *cb_arg = arg;
> +	struct sli4_cmd_read_nvparms *mbox_rsp =
> +			(struct sli4_cmd_read_nvparms *)mqe;
> +	u8 hard_alpa;
> +	u32 preferred_d_id;
> +
> +	hard_alpa = le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
> +				SLI4_READ_NVPARAMS_HARD_ALPA;
> +	preferred_d_id = (le32_to_cpu(mbox_rsp->hard_alpa_d_id) &
> +			  SLI4_READ_NVPARAMS_PREFERRED_D_ID) >> 8;
> +	if (cb_arg->cb)
> +		cb_arg->cb(status, mbox_rsp->wwpn, mbox_rsp->wwnn,
> +			   hard_alpa, preferred_d_id,
> +			   cb_arg->arg);
> +
> +	kfree(mqe);
> +	kfree(cb_arg);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_hw_get_nvparms(struct efct_hw *hw,
> +		    void (*cb)(int status, u8 *wwpn,
> +			       u8 *wwnn, u8 hard_alpa,
> +			       u32 preferred_d_id, void *arg),
> +		    void *ul_arg)
> +{
> +	u8 *mbxdata;
> +	struct efct_hw_get_nvparms_cb_arg *cb_arg;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	/* mbxdata holds the header of the command */
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);
> +
> +	/*
> +	 * cb_arg holds the data that will be passed to the callback on
> +	 * completion
> +	 */
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = ul_arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_read_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_get_nvparms_cb, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "READ_NVPARMS failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +struct efct_hw_set_nvparms_cb_arg {
> +	void (*cb)(int status, void *arg);
> +	void *arg;
> +};
> +
> +static int
> +efct_hw_set_nvparms_cb(struct efct_hw *hw, int status,
> +		       u8 *mqe, void *arg)
> +{
> +	struct efct_hw_set_nvparms_cb_arg *cb_arg = arg;
> +
> +	if (cb_arg->cb)
> +		cb_arg->cb(status, cb_arg->arg);
> +
> +	kfree(mqe);
> +	kfree(cb_arg);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +int
> +efct_hw_set_nvparms(struct efct_hw *hw,
> +		    void (*cb)(int status, void *arg),
> +		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
> +		u32 preferred_d_id,
> +		void *ul_arg)
> +{
> +	u8 *mbxdata;
> +	struct efct_hw_set_nvparms_cb_arg *cb_arg;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	/* mbxdata holds the header of the command */
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	/*
> +	 * cb_arg holds the data that will be passed to the callback on
> +	 * completion
> +	 */
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	cb_arg->cb = cb;
> +	cb_arg->arg = ul_arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_write_nvparms(&hw->sli, mbxdata, SLI4_BMBX_SIZE, wwpn,
> +				  wwnn, hard_alpa, preferred_d_id))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_set_nvparms_cb, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "SET_NVPARMS failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_cb_fw_write(struct efct_hw *hw, int status,
> +		    u8 *mqe, void  *arg)
> +{
> +	struct sli4_cmd_sli_config *mbox_rsp =
> +					(struct sli4_cmd_sli_config *)mqe;
> +	struct sli4_rsp_cmn_write_object *wr_obj_rsp;
> +	struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
> +	u32 bytes_written;
> +	u16 mbox_status;
> +	u32 change_status;
> +
> +	wr_obj_rsp = (struct sli4_rsp_cmn_write_object *)
> +		      &mbox_rsp->payload.embed;
> +	bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
> +	mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
> +	change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
> +			 RSP_CHANGE_STATUS);
> +
> +	kfree(mqe);
> +
> +	if (cb_arg) {
> +		if (cb_arg->cb) {
> +			if (!status && mbox_status)
> +				status = mbox_status;
> +			cb_arg->cb(status, bytes_written, change_status,
> +				   cb_arg->arg);
> +		}
> +
> +		kfree(cb_arg);
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static enum efct_hw_rtn
> +efct_hw_firmware_write_sli4_intf_2(struct efct_hw *hw, struct efc_dma *dma,
> +				   u32 size, u32 offset, int last,
> +			      void (*cb)(int status, u32 bytes_written,
> +					 u32 change_status, void *arg),
> +				void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	u8 *mbxdata;
> +	struct efct_hw_fw_wr_cb_arg *cb_arg;
> +	int noc = 0;
> +
> +	mbxdata = kmalloc(SLI4_BMBX_SIZE, GFP_KERNEL);
> +	if (!mbxdata)
> +		return EFCT_HW_RTN_NO_MEMORY;
> +
> +	memset(mbxdata, 0, SLI4_BMBX_SIZE);
> +
> +	cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
> +	if (!cb_arg) {
> +		kfree(mbxdata);
> +		return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +	memset(cb_arg, 0, sizeof(struct efct_hw_fw_wr_cb_arg));
> +	cb_arg->cb = cb;
> +	cb_arg->arg = arg;
> +
> +	/* Send the HW command */
> +	if (!sli_cmd_common_write_object(&hw->sli, mbxdata, SLI4_BMBX_SIZE,
> +					noc, last, size, offset, "/prg/",
> +					dma))
> +		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
> +				     efct_hw_cb_fw_write, cb_arg);
> +
> +	if (rc != EFCT_HW_RTN_SUCCESS) {
> +		efc_log_test(hw->os, "COMMON_WRITE_OBJECT failed\n");
> +		kfree(mbxdata);
> +		kfree(cb_arg);
> +	}
> +
> +	return rc;
> +}
> +
> +/* Write a portion of a firmware image to the device */
> +enum efct_hw_rtn
> +efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
> +		       u32 size, u32 offset, int last,
> +			void (*cb)(int status, u32 bytes_written,
> +				   u32 change_status, void *arg),
> +			void *arg)
> +{
> +	return efct_hw_firmware_write_sli4_intf_2(hw, dma, size, offset,
> +						     last, cb, arg);
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 9c025a1709e3..6bd1fde177cd 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -802,5 +802,63 @@ efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
>  		     uintptr_t value,
>  		void (*cb)(int status, uintptr_t value, void *arg),
>  		void *arg);
> +extern enum efct_hw_rtn

extern is not needed
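
i.e. the prototypes in the header can simply start with the return type,
for example:

	enum efct_hw_rtn
	efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
			   struct efc_domain *domain, u8 *wwpn);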

> +efct_hw_port_alloc(struct efc *efc, struct efc_sli_port *sport,
> +		   struct efc_domain *domain, u8 *wwpn);
> +extern enum efct_hw_rtn
> +efct_hw_port_attach(struct efc *efc, struct efc_sli_port *sport,
> +		    u32 fc_id);
> +extern enum efct_hw_rtn
> +efct_hw_port_free(struct efc *efc, struct efc_sli_port *sport);
> +extern enum efct_hw_rtn
> +efct_hw_domain_alloc(struct efc *efc, struct efc_domain *domain,
> +		     u32 fcf);
> +extern enum efct_hw_rtn
> +efct_hw_domain_attach(struct efc *efc,
> +		      struct efc_domain *domain, u32 fc_id);
> +extern enum efct_hw_rtn
> +efct_hw_domain_free(struct efc *efc, struct efc_domain *domain);
> +extern enum efct_hw_rtn
> +efct_hw_domain_force_free(struct efc *efc, struct efc_domain *domain);
> +extern enum efct_hw_rtn
> +efct_hw_node_alloc(struct efc *efc, struct efc_remote_node *rnode,
> +		   u32 fc_addr, struct efc_sli_port *sport);
> +extern enum efct_hw_rtn
> +efct_hw_node_free_all(struct efct_hw *hw);
> +extern enum efct_hw_rtn
> +efct_hw_node_attach(struct efc *efc, struct efc_remote_node *rnode,
> +		    struct efc_dma *sparms);
> +extern enum efct_hw_rtn
> +efct_hw_node_detach(struct efc *efc, struct efc_remote_node *rnode);
> +extern enum efct_hw_rtn
> +efct_hw_node_free_resources(struct efc *efc,
> +			    struct efc_remote_node *rnode);
> +
> +extern enum efct_hw_rtn
> +efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
> +		       u32 size, u32 offset, int last,
> +		       void (*cb)(int status, u32 bytes_written,
> +				  u32 change_status, void *arg),
> +		       void *arg);
> +
> +extern enum efct_hw_rtn
> +efct_hw_get_nvparms(struct efct_hw *hw,
> +		    void (*mgmt_cb)(int status, u8 *wwpn,
> +				    u8 *wwnn, u8 hard_alpa,
> +				    u32 preferred_d_id, void *arg),
> +		    void *arg);
> +extern
> +enum efct_hw_rtn efct_hw_set_nvparms(struct efct_hw *hw,
> +				       void (*mgmt_cb)(int status, void *arg),
> +		u8 *wwpn, u8 *wwnn, u8 hard_alpa,
> +		u32 preferred_d_id, void *arg);
> +
> +typedef int (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
> +				  u8 *mqe, void *arg);
> +extern int
> +efct_hw_async_call(struct efct_hw *hw,
> +		   efct_hw_async_cb_t callback, void *arg);
> +enum efct_hw_rtn
> +efct_hw_init_queues(struct efct_hw *hw);
>  
>  #endif /* __EFCT_H__ */
> -- 
> 2.16.4
> 
> 

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver
  2020-04-12  3:33 ` [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
  2020-04-16 10:02   ` Hannes Reinecke
@ 2020-04-16 13:15   ` Daniel Wagner
  1 sibling, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 13:15 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:33:02PM -0700, James Smart wrote:
> This patch completes the efct driver population.
> 
> This patch adds driver definitions for:
> Adds the efct driver Kconfig and Makefiles
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Use SPDX license
>   Remove utils.c from makefile
> ---
>  drivers/scsi/elx/Kconfig  |  9 +++++++++
>  drivers/scsi/elx/Makefile | 18 ++++++++++++++++++
>  2 files changed, 27 insertions(+)
>  create mode 100644 drivers/scsi/elx/Kconfig
>  create mode 100644 drivers/scsi/elx/Makefile
> 
> diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
> new file mode 100644
> index 000000000000..831daea7a951
> --- /dev/null
> +++ b/drivers/scsi/elx/Kconfig
> @@ -0,0 +1,9 @@
> +config SCSI_EFCT
> +	tristate "Emulex Fibre Channel Target"
> +	depends on PCI && SCSI
> +	depends on TARGET_CORE
> +	depends on SCSI_FC_ATTRS
> +	select CRC_T10DIF
> +	help
> +	  The efct driver provides enhanced SCSI Target Mode
> +	  support for specific SLI-4 adapters.

The help text could be more verbose. The rest looks good.
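
For example, something along these lines (the wording is only a suggestion
and the module name is assumed from the Makefile in this patch):

	help
	  The efct driver provides enhanced SCSI target mode support for
	  Emulex (Broadcom) SLI-4 Fibre Channel adapters, attaching to the
	  LIO target core.

	  To compile this driver as a module, choose M here: the module
	  will be called efct.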

Reviewed-by: Daniel Wagner <dwagner@suse.de>

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process
  2020-04-12  3:33 ` [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
  2020-04-12  6:16     ` kbuild test robot
  2020-04-12  7:56     ` kbuild test robot
@ 2020-04-16 13:15   ` Daniel Wagner
  2 siblings, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-16 13:15 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:33:03PM -0700, James Smart wrote:
> This final patch ties the efct driver into the kernel Kconfig
> and build linkages in the drivers/scsi directory.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Reviewed-by: Daniel Wagner <dwagner@suse.de>

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-12  4:57   ` Bart Van Assche
  2020-04-16 11:48     ` Daniel Wagner
@ 2020-04-22  4:20     ` James Smart
  2020-04-22  5:09       ` Bart Van Assche
  1 sibling, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-22  4:20 UTC (permalink / raw)
  To: Bart Van Assche, linux-scsi
  Cc: dwagner, maier, herbszt, natechancellor, rdunlap, hare, Ram Vegesna

On 4/11/2020 9:57 PM, Bart Van Assche wrote:
> On 2020-04-11 20:32, James Smart wrote:
>> +	return EFC_SUCCESS;
>> +}
> 
> Redefining 0 is unusual in the Linux kernel. I prefer to see "return 0;"
> instead of "return ${DRIVER_NAME}_SUCCESS;".

Well, seems we have two different opinions. Prior v2 reviews wanted a 
consistent EFC_XXX set of enums for return codes.

I think it's very odd to return 0 in some places and EFC_xxx elsewhere. The
issue I have with "we all know how to interpret 0" is that I don't agree: 0
doesn't always mean success. Treating 0 as success is an implicit assumption.

I'd like to stay with EFC_SUCCESS for consistency.

> 
...
> This is one of the most unfriendly debugfs data formats I have seen so
> far: information about all sessions is dumped into one huge debugfs
> attribute.
> 
> Is information about active sessions useful for other LIO target
> drivers? Wasn't it promised that this functionality would be moved into
> the LIO core instead of defining it for the efct driver only?

We will remove the debugfs code. We will look to utilize Mike's patches 
for session info.

> 
>> +static int efct_debugfs_session_open(struct inode *inode, struct file *filp)
>> +{
>> +	struct efct_lio_sport *sport = inode->i_private;
>> +	int size = 17 * PAGE_SIZE; /* 34 byte per session*2048 sessions */
>> +
>> +	if (!(filp->f_mode & FMODE_READ)) {
>> +		filp->private_data = sport;
>> +		return EFC_SUCCESS;
>> +	}
>> +
>> +	filp->private_data = kmalloc(size, GFP_KERNEL);
>> +	if (!filp->private_data)
>> +		return -ENOMEM;
>> +
>> +	memset(filp->private_data, 0, size);
>> +	efct_lio_tgt_session_data(sport->efct, sport->wwpn, filp->private_data,
>> +				  size);
>> +	return EFC_SUCCESS;
>> +}
> 
> kmalloc() + memset() can be changed into kzalloc().
> 
> The above code allocates 68 KB physically contiguous memory? Kernel code
> should not rely on higher order page allocations unless there is no
> other choice.
> 
> Additionally, I see that the amount of memory allocated is independent
> of the number of sessions. I think there are better approaches.

Yes. Will address it.
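
For example, something along these lines (just a sketch of the direction,
not final code) would fold both points together - kvzalloc() zeroes the
buffer and falls back to vmalloc() for larger sizes, so no high-order
contiguous allocation is needed (the release path would then use
kvfree()):

static int efct_debugfs_session_open(struct inode *inode, struct file *filp)
{
	struct efct_lio_sport *sport = inode->i_private;
	int size = 17 * PAGE_SIZE;
	void *buf;

	if (!(filp->f_mode & FMODE_READ)) {
		filp->private_data = sport;
		return EFC_SUCCESS;
	}

	/* zeroed, and not required to be physically contiguous */
	buf = kvzalloc(size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	efct_lio_tgt_session_data(sport->efct, sport->wwpn, buf, size);
	filp->private_data = buf;
	return EFC_SUCCESS;
}

Sizing the buffer from the actual session count (or moving to seq_file)
would address the last point, though that becomes moot once the debugfs
code is dropped.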


> 
>> +
>> +	lio_wq = create_singlethread_workqueue("efct_lio_worker");
>> +	if (!lio_wq) {
>> +		efc_log_err(efct, "workqueue create failed\n");
>> +		return -ENOMEM;
>> +	}
> 
> Is any work queued onto lio_wq that needs to be serialized against other
> work queued onto the same queue? If not, can one of the system
> workqueues be used, e.g. system_wq?
> 

We are using "lio_wq" here to call into the LIO backend during creation 
or deletion of sessions. The LIO APIs target_remove_session()/
target_setup_session() are blocking calls, so we don't want to process 
them in the interrupt thread. "lio_wq" is used for these two events only, 
and because we create a single-threaded workqueue it also serializes 
session management.
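
For example (sketch only - the work struct and the
efct_lio_new_session()/efct_lio_remove_session() helpers are hypothetical
names, just to show the flow): a create/delete request coming from the EQ
path is packaged into a work item, and the blocking LIO calls run from the
single-threaded workqueue:

struct efct_lio_session_work {
	struct work_struct work;
	struct efct_node *node;
	bool create;
};

static void efct_lio_session_work_fn(struct work_struct *work)
{
	struct efct_lio_session_work *sw =
		container_of(work, struct efct_lio_session_work, work);

	/* target_setup_session()/target_remove_session() may block,
	 * so they run here rather than in the interrupt thread */
	if (sw->create)
		efct_lio_new_session(sw->node);
	else
		efct_lio_remove_session(sw->node);
	kfree(sw);
}

static int efct_lio_queue_session_event(struct efct_node *node, bool create)
{
	struct efct_lio_session_work *sw;

	sw = kzalloc(sizeof(*sw), GFP_ATOMIC);
	if (!sw)
		return -ENOMEM;

	sw->node = node;
	sw->create = create;
	INIT_WORK(&sw->work, efct_lio_session_work_fn);
	/* lio_wq is single threaded, so setup/teardown are serialized */
	queue_work(lio_wq, &sw->work);
	return EFC_SUCCESS;
}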

-- James

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-16 12:34   ` Daniel Wagner
@ 2020-04-22  4:20     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22  4:20 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/16/2020 5:34 AM, Daniel Wagner wrote:
> On Sat, Apr 11, 2020 at 08:32:56PM -0700, James Smart wrote:
>> This patch continues the efct driver population.
>>
>> This patch adds driver definitions for:
>> LIO backend template registration and template functions.
>>
>> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
>> Signed-off-by: James Smart <jsmart2021@gmail.com>
>>
>> ---
>> v3:
>>    Fixed as per the review comments.
>>    Removed vport pend list. Pending list is tracked based on the sport
>>      assigned to vport.
>> ---

Agree with your comments and will revise.

Thanks

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions
  2020-04-14 15:23   ` Daniel Wagner
@ 2020-04-22  4:28     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22  4:28 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/14/2020 8:23 AM, Daniel Wagner wrote:
> Hi James,
> 
> On Sat, Apr 11, 2020 at 08:32:33PM -0700, James Smart wrote:
>> This is the initial patch for the new Emulex target mode SCSI
>> driver sources.
>>
>> This patch:
>> - Creates the new Emulex source level directory drivers/scsi/elx
>>    and adds the directory to the MAINTAINERS file.
> 
> I would drop this. It's clear from the diff stat.
> 
>> - Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
>>    This library is a SLI-4 interface library.
> 
> Instead saying what this patch does, it would be better the explain
> why it structured this way.

Ok. I will see what I can come up with.

> 
...
> 
> BTW, why is it called SLI4_BMBX_WRITE_HI and not just SLI4_BMBX_HI?
> Because below with the doorbell registers there is a different pattern.

Because the semantics of the BMBX register involve writing it (as a u32) 
once for the high address bits and a second time for the low address 
bits. The format is slightly different on each write.

I'll see how to comment this and clarify; inlines will help.
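
Something like this as an inline would probably make it obvious (sketch
only; the helper name is made up, and the ready-bit polling between the
two writes is omitted):

/*
 * Program the bootstrap mailbox address: two writes to the same
 * register, high address bits first, then low address bits, each
 * with its own layout.
 */
static inline void sli_bmbx_write_addr(struct sli4 *sli4)
{
	writel(SLI4_BMBX_WRITE_HI(sli4->bmbx.phys),
	       sli4->reg[0] + SLI4_BMBX_REG);

	writel(SLI4_BMBX_WRITE_LO(sli4->bmbx.phys),
	       sli4->reg[0] + SLI4_BMBX_REG);
}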

Agree with the rest of the comments and will clarify

-- james



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-04-14 18:02   ` Daniel Wagner
@ 2020-04-22  4:41     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22  4:41 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/14/2020 11:02 AM, Daniel Wagner wrote:
> Hi,
> 
> On Sat, Apr 11, 2020 at 08:32:34PM -0700, James Smart wrote:
>> This patch continues the libefc_sli SLI-4 library population.
>>
>> This patch add SLI-4 Data structures and defines for:
>> - Buffer Descriptors (BDEs)
>> - Scatter/Gather List elements (SGEs)
>> - Queues and their Entry Descriptions for:
>>     Event Queues (EQs), Completion Queues (CQs),
>>     Receive Queues (RQs), and the Mailbox Queue (MQ).
> 
> There are definitions which are not used at all,
> e.g. DISEED_SGE_OP_RX_VALUE, sli4_acqe_e, sli4_acqe_event_code,
> etc. What are the plans with those?
> 

When defining adapter interfaces, the interface is usually defined in 
its entirety, whether fully used or not. Not only is it not always 
clear what will be used when first added, but it seems cleaner to review 
for correctness (comparing 1:1 against the hw spec rather than checking 
a partial subset against it). In the case of several of the items, there 
will be future features (DIF support) folded in that will utilize most 
of the definitions.

I know it's not as clean as "only define what is used", but I would 
prefer to handle hw this way.

Agree with the rest of the comments and will address.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2020-04-15 12:14   ` Hannes Reinecke
  2020-04-15 17:43     ` James Bottomley
@ 2020-04-22  4:44     ` James Smart
  1 sibling, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22  4:44 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna


> 
> And the remainder seems to be all hardware-dependent structures.
> Would it be possible to rearrange stuff so that hardware/SLI related 
> structures are being kept separate from the software/driver dependent ones?
> Just so that one is clear which structures can or must mot be changed.
> 
> Cheers,
> 
> Hannes

Agree with the comments and will address. Will look at the header split 
as well.

-- james

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2020-04-15 10:04   ` Daniel Wagner
@ 2020-04-22  5:05     ` James Smart
  2020-04-24  7:29       ` Daniel Wagner
  0 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-22  5:05 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/15/2020 3:04 AM, Daniel Wagner wrote:
...
>> +static void
>> +__sli_queue_destroy(struct sli4 *sli4, struct sli4_queue *q)
>> +{
>> +	if (!q->dma.size)
>> +		return;
>> +
>> +	dma_free_coherent(&sli4->pcidev->dev, q->dma.size,
>> +			  q->dma.virt, q->dma.phys);
>> +	memset(&q->dma, 0, sizeof(struct efc_dma));
> 
> Is this necessary to clear q->dma? Just asking if it's possible to
> avoid the additional work.

unfortunately, yes - at least q->dma.size must be cleared. It's used to 
detect validity (must be non-0).

...
>> +		q->dma.size = size * n_entries;
>> +		q->dma.virt = dma_alloc_coherent(&sli4->pcidev->dev,
>> +						 q->dma.size, &q->dma.phys,
>> +						 GFP_DMA);
>> +		if (!q->dma.virt) {
>> +			memset(&q->dma, 0, sizeof(struct efc_dma));
> 
> So if __sli_queue_destroy() keeps clearing q->dma, than this one can
> go away, since if __sli_queue_init() fails __sli_queue_destroy() will
> be called.

Well, this is the same thing - with q->dma.size being set to 0 so the 
dma_free_coherent() is avoided in the destroy call.

...
>> +sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
>> +	     u8 *entry)
>> +{
>> +	u8 *qe = q->dma.virt;
>> +	u32 qindex;
>> +	u32 val = 0;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&q->lock, flags);
>> +	qindex = q->index;
>> +	qe += q->index * q->size;
>> +
>> +	memcpy(qe, entry, q->size);
>> +	q->n_posted = 1;
> 
> Shouldn't this be q->n_posted++ ? Or is it guaranteed n_posted is 0?

yes - we post 1 at a time.


...
>> +sli_rq_write(struct sli4 *sli4, struct sli4_queue *q,
>> +	     u8 *entry)
>> +{
>> +	u8 *qe = q->dma.virt;
>> +	u32 qindex, n_posted;
>> +	u32 val = 0;
>> +
>> +	qindex = q->index;
>> +	qe += q->index * q->size;
>> +
>> +	memcpy(qe, entry, q->size);
>> +	q->n_posted = 1;
> 
> Again why not q->n_posted++ ?

Same thing.

> 
> I am confused why no lock is used here and in the function above. A few
> words on the locking design would be highly appreciated.

Q lock is held while calling this function. There were comments in the 
caller. Will add one here.
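
A lockdep assertion at the top of the function may be even better than a
comment (sketch of just the opening):

static int sli_rq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
{
	/* caller is expected to hold q->lock */
	lockdep_assert_held(&q->lock);

	/* ... copy the entry and ring the RQ doorbell as before ... */
	return 0;
}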

Agree with the rest of the comments and will address.

-- james




^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs
  2020-04-15 14:34   ` Daniel Wagner
@ 2020-04-22  5:08     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22  5:08 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/15/2020 7:34 AM, Daniel Wagner wrote:
...
>> +/* Write an ABORT_WQE work queue entry */
>> +int
>> +sli_abort_wqe(struct sli4 *sli4, void *buf, size_t size,
>> +	      enum sli4_abort_type type, bool send_abts, u32 ids,
>> +	      u32 mask, u16 tag, u16 cq_id)
>> +{
>> +	struct sli4_abort_wqe	*abort = buf;
>> +
>> +	memset(buf, 0, size);
> 
> Is 'size' expected to be equal to the size of 'struct sli4_abort_wqe'?
> Or could it be bigger? In case 'size' can be bigger than 'abort', do
> you need to clear the complete buffer, or would it be enough to clear
> only the size of 'abort'?

'buf' represents the abort WQE, and size will be the WQE size. Yeah, this 
is unclear. Will fix up the code to make the actions explicit and clear.

Agree with other comments and will address them.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-22  4:20     ` James Smart
@ 2020-04-22  5:09       ` Bart Van Assche
  2020-04-23  1:39         ` James Smart
  0 siblings, 1 reply; 124+ messages in thread
From: Bart Van Assche @ 2020-04-22  5:09 UTC (permalink / raw)
  To: James Smart, linux-scsi
  Cc: dwagner, maier, herbszt, natechancellor, rdunlap, hare, Ram Vegesna

On 4/21/20 9:20 PM, James Smart wrote:
> On 4/11/2020 9:57 PM, Bart Van Assche wrote:
>> On 2020-04-11 20:32, James Smart wrote:
>>> +
>>> +    lio_wq = create_singlethread_workqueue("efct_lio_worker");
>>> +    if (!lio_wq) {
>>> +        efc_log_err(efct, "workqueue create failed\n");
>>> +        return -ENOMEM;
>>> +    }
>>
>> Is any work queued onto lio_wq that needs to be serialized against other
>> work queued onto the same queue? If not, can one of the system
>> workqueues be used, e.g. system_wq?
>>
> 
> We are using "lio_wq" here to call the LIO backend during creation or 
> deletion of sessions. LIO api's target_remove_session/ 
> target_setup_session are blocking calls so we don't want to process them 
> in the interrupt thread. "lio_wq" is used for these two events only and 
> this brings the serialization to session management as we create single 
> threaded work queue.

Hi James,

I think the above is very useful information. How about adding that 
information as a comment above the definition of "lio_wq"?

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands
  2020-04-15 16:10   ` Daniel Wagner
@ 2020-04-22  5:12     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22  5:12 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/15/2020 9:10 AM, Daniel Wagner wrote:
...
>> +	/* write buffer location to bootstrap mailbox register */
>> +	val = SLI4_BMBX_WRITE_HI(sli4->bmbx.phys);
>> +	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
> 
> Is the access to the register serialized by a lock?

The BMBX is used only at the start of adapter initialization. The 
initialization sequence serializes access.

Agree with rest of comments and will address.

-- james

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 11/31] elx: libefc: SLI and FC PORT state machine interfaces
  2020-04-15 15:38   ` Hannes Reinecke
@ 2020-04-22 23:12     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-22 23:12 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/15/2020 8:38 AM, Hannes Reinecke wrote:
...
> I would have expected the ports to be reference counted, seeing that 
> they are (probably) accessed by structures with vastly different 
> lifetime rules.
> It also would allow for a more dynamic port deletion, as you wouldn't 
> need to lock the entire function, only when removing it from the list.
> 
> Have you considered that?
...
> See? That's what I mean.
> You have event processing for that port, and additional nodes attached 
> to it. If all of them would be properly reference counted you could do 
> away with this function ...
...
> As mentioned: please add locking annotations to make it clear which 
> functions require locking.
> 
> And I'm not sure if reference counting the ports wouldn't be a better 
> idea; I can't really see how you would ensure that the port is valid if 
> it's being used by temporary structures like FC commands.
> The only _safe_ way would be to always access the ports under a lock, 
> but that would lead to heavy contention.
> 
> But I'll check the remaining patches.

Yes, we probably should have refcounting. Agree with your comments and 
will address them.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-15 18:19   ` Daniel Wagner
@ 2020-04-23  1:32     ` James Smart
  2020-04-23  7:49       ` Daniel Wagner
  0 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-23  1:32 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/15/2020 11:19 AM, Daniel Wagner wrote:
>> +void
>> +efc_node_transition(struct efc_node *node,
>> +		    void *(*state)(struct efc_sm_ctx *,
>> +				   enum efc_sm_event, void *), void *data)
>> +{
>> +	struct efc_sm_ctx *ctx = &node->sm;
>> +
>> +	if (ctx->current_state == state) {
>> +		efc_node_post_event(node, EFC_EVT_REENTER, data);
>> +	} else {
>> +		efc_node_post_event(node, EFC_EVT_EXIT, data);
>> +		ctx->current_state = state;
>> +		efc_node_post_event(node, EFC_EVT_ENTER, data);
>> +	}
> 
> Why does efc_node_transition not need to take the efc->lock as in
> efc_remote_node_cb? How are the state machine state transitions
> serialized?

efc_remote_node_cb is a callback called from outside the statemachine, 
so it needs to take the lock.   efc_node_transition is called from 
within the statemachine, after the lock is taken. In general the lock is 
taken upon entering the statemachine and released before exiting. There 
isn't granular locking within the statemachine.
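
In code form, the convention is roughly (simplified sketch, assuming
efc->lock is the spinlock referred to above):

/* entry point: called from outside the state machine */
static void efc_sm_dispatch_example(struct efc *efc, struct efc_node *node,
				    enum efc_sm_event evt, void *data)
{
	unsigned long flags;

	spin_lock_irqsave(&efc->lock, flags);
	/*
	 * Everything below runs with efc->lock held, so internal helpers
	 * such as efc_node_transition() never take it themselves.
	 */
	efc_node_post_event(node, evt, data);
	spin_unlock_irqrestore(&efc->lock, flags);
}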

For more background, see the reply that will be sent to Hannes shortly.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-15 15:51   ` Hannes Reinecke
@ 2020-04-23  1:35     ` James Smart
  2020-04-23  8:02       ` Daniel Wagner
  0 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-23  1:35 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/15/2020 8:51 AM, Hannes Reinecke wrote:
> Relating to my previous comments: Can you elaborate about the lifetime 
> rules to the node? It looks as if any access to the node is being 
> guarded by the main efc->lock.
> Which, taken to extremes, would meant that one has to hold that lock for 
> _any_ access to the node; that would include I/O processing etc, too.
> I can't really see how that would scale to the IOPS you are pursuing, so 
> clearly that will not happen.
> So how _do_ you ensure the validity of a node if the lock is not held?
> 
> Cheers,
> 
> Hannes

This lock is defined for a single EQ/RQ. For multi EQ/RQ, which we will 
add in the future, we will have more granular locks and performance 
improvements.
All the State Machine Events are triggered from the EQ processing thread 
except vport creation and deletion.

Locking:
   Global lock per efc port "efc->lock"

Domain:
     efc_domain_cb --> (All the events are executed under efc lock)

     Async link up notification(EFC_HW_DOMAIN_FOUND)
          Alloc Domain, Start discovery
          Alloc Sport
     Async link down notification (EFC_HW_DOMAIN_LOST)
           Hold frames. (New frames will be moved to pending)
           Post Sport Shutdown.
           Post All Nodes Shutdown.

Sport:
     efc_lport_cb --> (All the events are executed under efc lock)

       HW sport registration (Mbox command responses) to complete sport 
allocation.

Vport :
     efc_sport_vport_new() --> New Vport

      efc_vport_sport_alloc() (Protected by efc lock)
      Alloc Sport, and start Sport SM

    efc_sport_vport_del()  --> Delete Vport

      Post shutdown event to the Sport. (Protected by efc lock)

Node:
    efc_remote_node_cb() --> (All the events are executed under efc lock)

        HW node registration (Mbox command responses) to complete node 
allocation.

Node lifeCycle:
    efc_domain_dispatch_frame:-

     EFC lock held
     efc_node_find() IF not found, Create one.
     when PLOGI is received, Register with hardware
     Upon PRLI completion, setup a new session with LIO.

    IO path:

      Find sport and node under EFC lock.
      If node can process commands, start IO processing.
      Each IO is added to the node->active_ios.

   Node deletion: (RSCN notification or LOGO received ..)

      EFC lock held
      Disable receiving fcp commands. (node->fcp_enabled)
      Disable IO allocation for this node.
      Remove LIO session.
      Deregister node with HW
      Waits for all active_ios to be completed/freed.
      Purge pending IOs.
      Free the Node.

In the IO path, the EFC lock is acquired to find the sport and node; the 
lock is then released and IO allocation and processing continue without 
it. Note: there is still an unsafe area where we check "node->hold_frames" 
without the lock. The node is assumed to be kept alive until all the IOs 
under it are freed. Adding refcounting will remove this assumption.
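
As a rough sketch of that IO-path pattern (helper and field names are
illustrative only, and the refcounting is not yet shown):

static struct efc_node *efct_io_find_node(struct efc *efc, u32 d_id, u32 s_id)
{
	struct efc_sport *sport;
	struct efc_node *node = NULL;
	unsigned long flags;

	spin_lock_irqsave(&efc->lock, flags);
	sport = efc_sport_find(efc->domain, d_id);
	if (sport)
		node = efc_node_find(sport, s_id);
	spin_unlock_irqrestore(&efc->lock, flags);

	/*
	 * IO allocation and processing continue without efc->lock; the
	 * node is assumed to stay valid until all of its active_ios are
	 * freed.
	 */
	return node;
}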

-- james

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 13/31] elx: libefc: Fabric node state machine interfaces
  2020-04-16  6:37   ` Hannes Reinecke
@ 2020-04-23  1:38     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  1:38 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/15/2020 11:37 PM, Hannes Reinecke wrote:
> What is going on here?
> Why do you declare all functions as 'void *', but then continue to return 
> only NULL pointer?
> 
> Is this some weird API logic or are these functions being fleshed out in 
> later patches to return something else but NULL?
> 
> But as it stands I would recommend to move all of these functions to 
> simple 'void' functions, and updating the function prototypes once they 
> can return anything else than 'NULL'.
> 
> Cheers,
> 
> Hannes

We will convert to void functions. Agree with your comments.

-- james

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 24/31] elx: efct: LIO backend interface routines
  2020-04-22  5:09       ` Bart Van Assche
@ 2020-04-23  1:39         ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  1:39 UTC (permalink / raw)
  To: Bart Van Assche, linux-scsi
  Cc: dwagner, maier, herbszt, natechancellor, rdunlap, hare, Ram Vegesna

On 4/21/2020 10:09 PM, Bart Van Assche wrote:
> On 4/21/20 9:20 PM, James Smart wrote:
>> On 4/11/2020 9:57 PM, Bart Van Assche wrote:
>>> On 2020-04-11 20:32, James Smart wrote:
>>>> +
>>>> +    lio_wq = create_singlethread_workqueue("efct_lio_worker");
>>>> +    if (!lio_wq) {
>>>> +        efc_log_err(efct, "workqueue create failed\n");
>>>> +        return -ENOMEM;
>>>> +    }
>>>
>>> Is any work queued onto lio_wq that needs to be serialized against other
>>> work queued onto the same queue? If not, can one of the system
>>> workqueues be used, e.g. system_wq?
>>>
>>
>> We are using "lio_wq" here to call the LIO backend during creation or 
>> deletion of sessions. LIO api's target_remove_session/ 
>> target_setup_session are blocking calls so we don't want to process 
>> them in the interrupt thread. "lio_wq" is used for these two events 
>> only and this brings the serialization to session management as we 
>> create single threaded work queue.
> 
> Hi James,
> 
> I think the above is very useful information. How about adding that 
> information as a comment above the definition of "lio_wq"?
> 
> Thanks,
> 
> Bart.
> 

yep - we will. Thanks

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions
  2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
  2020-04-14 15:23   ` Daniel Wagner
  2020-04-15 12:06   ` Hannes Reinecke
@ 2020-04-23  1:52   ` Roman Bolshakov
  2 siblings, 0 replies; 124+ messages in thread
From: Roman Bolshakov @ 2020-04-23  1:52 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, dwagner, maier, bvanassche, herbszt, natechancellor,
	rdunlap, hare, Ram Vegesna

On Sat, Apr 11, 2020 at 08:32:33PM -0700, James Smart wrote:
> This is the initial patch for the new Emulex target mode SCSI
> driver sources.
> 
> This patch:
> - Creates the new Emulex source level directory drivers/scsi/elx
>   and adds the directory to the MAINTAINERS file.
> - Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
>   This library is a SLI-4 interface library.
> - Starts the population of the libefc_sli library with definitions
>   of SLI-4 hardware register offsets and definitions.
> 
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v3:
>   Changed anonymous enums to named.
>   SLI defines to spell out _MASK value directly.
>   Changed multiple #defines to named enums for consistency.
>   SLI4_REG_MAX to SLI4_REG_UNKNOWN
> ---
>  MAINTAINERS                        |   8 ++
>  drivers/scsi/elx/libefc_sli/sli4.c |  26 ++++
>  drivers/scsi/elx/libefc_sli/sli4.h | 252 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 286 insertions(+)
>  create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
>  create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7bd5e23648b1..a7381c0088e4 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6223,6 +6223,14 @@ W:	http://www.broadcom.com
>  S:	Supported
>  F:	drivers/scsi/lpfc/
>  
> +EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
> +M:	James Smart <james.smart@broadcom.com>
> +M:	Ram Vegesna <ram.vegesna@broadcom.com>
> +L:	linux-scsi@vger.kernel.org

Hi James,

Perhaps target-devel@vger.kernel.org can be added too

Thanks,
Roman

> +W:	http://www.broadcom.com
> +S:	Supported
> +F:	drivers/scsi/elx/
> +
>  ENE CB710 FLASH CARD READER DRIVER
>  M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
>  S:	Maintained
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
> new file mode 100644
> index 000000000000..29d33becd334
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc_sli/sli4.c
> @@ -0,0 +1,26 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + */
> +
> +/**
> + * All common (i.e. transport-independent) SLI-4 functions are implemented
> + * in this file.
> + */
> +#include "sli4.h"
> +
> +struct sli4_asic_entry_t {
> +	u32 rev_id;
> +	u32 family;
> +};
> +
> +static struct sli4_asic_entry_t sli4_asic_table[] = {
> +	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
> +	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
> +	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
> +	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
> +};
> diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
> new file mode 100644
> index 000000000000..1fad48643f94
> --- /dev/null
> +++ b/drivers/scsi/elx/libefc_sli/sli4.h
> @@ -0,0 +1,252 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2019 Broadcom. All Rights Reserved. The term
> + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
> + *
> + */
> +
> +/*
> + * All common SLI-4 structures and function prototypes.
> + */
> +
> +#ifndef _SLI4_H
> +#define _SLI4_H
> +
> +#include <linux/pci.h>
> +#include <linux/delay.h>
> +#include "scsi/fc/fc_els.h"
> +#include "scsi/fc/fc_fs.h"
> +#include "../include/efc_common.h"
> +
> +/*************************************************************************
> + * Common SLI-4 register offsets and field definitions
> + */
> +
> +/* SLI_INTF - SLI Interface Definition Register */
> +#define SLI4_INTF_REG			0x0058
> +enum sli4_intf {
> +	SLI4_INTF_REV_SHIFT		= 4,
> +	SLI4_INTF_REV_MASK		= 0xF0,
> +
> +	SLI4_INTF_REV_S3		= 0x30,
> +	SLI4_INTF_REV_S4		= 0x40,
> +
> +	SLI4_INTF_FAMILY_SHIFT		= 8,
> +	SLI4_INTF_FAMILY_MASK		= 0x0F00,
> +
> +	SLI4_FAMILY_CHECK_ASIC_TYPE	= 0x0F00,
> +
> +	SLI4_INTF_IF_TYPE_SHIFT		= 12,
> +	SLI4_INTF_IF_TYPE_MASK		= 0xF000,
> +
> +	SLI4_INTF_IF_TYPE_2		= 0x2000,
> +	SLI4_INTF_IF_TYPE_6		= 0x6000,
> +
> +	SLI4_INTF_VALID_SHIFT		= 29,
> +	SLI4_INTF_VALID_MASK		= 0xE0000000,
> +
> +	SLI4_INTF_VALID_VALUE		= 0xC0000000,
> +};
> +
> +/* ASIC_ID - SLI ASIC Type and Revision Register */
> +#define SLI4_ASIC_ID_REG	0x009c
> +enum sli4_asic {
> +	SLI4_ASIC_GEN_SHIFT	= 8,
> +	SLI4_ASIC_GEN_MASK	= 0xFF00,
> +	SLI4_ASIC_GEN_5		= 0x0B00,
> +	SLI4_ASIC_GEN_6		= 0x0C00,
> +	SLI4_ASIC_GEN_7		= 0x0D00,
> +};
> +
> +enum sli4_acic_revisions {
> +	SLI4_ASIC_REV_A0 = 0x00,
> +	SLI4_ASIC_REV_A1 = 0x01,
> +	SLI4_ASIC_REV_A2 = 0x02,
> +	SLI4_ASIC_REV_A3 = 0x03,
> +	SLI4_ASIC_REV_B0 = 0x10,
> +	SLI4_ASIC_REV_B1 = 0x11,
> +	SLI4_ASIC_REV_B2 = 0x12,
> +	SLI4_ASIC_REV_C0 = 0x20,
> +	SLI4_ASIC_REV_C1 = 0x21,
> +	SLI4_ASIC_REV_C2 = 0x22,
> +	SLI4_ASIC_REV_D0 = 0x30,
> +};
> +
> +/* BMBX - Bootstrap Mailbox Register */
> +#define SLI4_BMBX_REG		0x0160
> +enum sli4_bmbx {
> +	SLI4_BMBX_MASK_HI	= 0x3,
> +	SLI4_BMBX_MASK_LO	= 0xf,
> +	SLI4_BMBX_RDY		= (1 << 0),
> +	SLI4_BMBX_HI		= (1 << 1),
> +	SLI4_BMBX_SIZE		= 256,
> +};
> +
> +#define SLI4_BMBX_WRITE_HI(r) \
> +	((upper_32_bits(r) & ~SLI4_BMBX_MASK_HI) | SLI4_BMBX_HI)
> +#define SLI4_BMBX_WRITE_LO(r) \
> +	(((upper_32_bits(r) & SLI4_BMBX_MASK_HI) << 30) | \
> +	 (((r) & ~SLI4_BMBX_MASK_LO) >> 2))
> +
> +/* SLIPORT_CONTROL - SLI Port Control Register */
> +#define SLI4_PORT_CTRL_REG	0x0408
> +enum sli4_port_ctrl {
> +	SLI4_PORT_CTRL_IP	= (1 << 27),
> +	SLI4_PORT_CTRL_IDIS	= (1 << 22),
> +	SLI4_PORT_CTRL_FDD	= (1 << 31),
> +};
> +
> +/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
> +#define SLI4_PORT_ERROR1	0x040c
> +#define SLI4_PORT_ERROR2	0x0410
> +
> +/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
> +#define SLI4_EQCQ_DB_REG	0x120
> +enum sli4_eqcq_e {
> +	SLI4_EQ_ID_LO_MASK	= 0x01FF,
> +
> +	SLI4_CQ_ID_LO_MASK	= 0x03FF,
> +
> +	SLI4_EQCQ_CI_EQ		= 0x0200,
> +
> +	SLI4_EQCQ_QT_EQ		= 0x00000400,
> +	SLI4_EQCQ_QT_CQ		= 0x00000000,
> +
> +	SLI4_EQCQ_ID_HI_SHIFT	= 11,
> +	SLI4_EQCQ_ID_HI_MASK	= 0xF800,
> +
> +	SLI4_EQCQ_NUM_SHIFT	= 16,
> +	SLI4_EQCQ_NUM_MASK	= 0x1FFF0000,
> +
> +	SLI4_EQCQ_ARM		= 0x20000000,
> +	SLI4_EQCQ_UNARM		= 0x00000000,
> +};
> +
> +#define SLI4_EQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ | \
> +	 ((((id) >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
> +	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | \
> +	 (a) | SLI4_EQCQ_CI_EQ)
> +
> +#define SLI4_CQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ | \
> +	 ((((id) >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK) | \
> +	 (((n) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK) | (a))
> +
> +/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
> +#define SLI4_IF6_EQ_DB_REG	0x120
> +enum sli4_eq_e {
> +	SLI4_IF6_EQ_ID_MASK	= 0x0FFF,
> +
> +	SLI4_IF6_EQ_NUM_SHIFT	= 16,
> +	SLI4_IF6_EQ_NUM_MASK	= 0x1FFF0000,
> +};
> +
> +#define SLI4_IF6_EQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_IF6_EQ_ID_MASK) | \
> +	 (((n) << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK) | (a))
> +
> +/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6 */
> +#define SLI4_IF6_CQ_DB_REG	0xC0
> +enum sli4_cq_e {
> +	SLI4_IF6_CQ_ID_MASK	= 0xFFFF,
> +
> +	SLI4_IF6_CQ_NUM_SHIFT	= 16,
> +	SLI4_IF6_CQ_NUM_MASK	= 0x1FFF0000,
> +};
> +
> +#define SLI4_IF6_CQ_DOORBELL(n, id, a) \
> +	(((id) & SLI4_IF6_CQ_ID_MASK) | \
> +	 (((n) << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK) | (a))
> +
> +/* MQ_DOORBELL - MQ Doorbell Register */
> +#define SLI4_MQ_DB_REG		0x0140
> +#define SLI4_IF6_MQ_DB_REG	0x0160
> +enum sli4_mq_e {
> +	SLI4_MQ_ID_MASK		= 0xFFFF,
> +
> +	SLI4_MQ_NUM_SHIFT	= 16,
> +	SLI4_MQ_NUM_MASK	= 0x3FFF0000,
> +};
> +
> +#define SLI4_MQ_DOORBELL(n, i) \
> +	(((i) & SLI4_MQ_ID_MASK) | \
> +	 (((n) << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK))
> +
> +/* RQ_DOORBELL - RQ Doorbell Register */
> +#define SLI4_RQ_DB_REG		0x0a0
> +#define SLI4_IF6_RQ_DB_REG	0x0080
> +enum sli4_rq_e {
> +	SLI4_RQ_DB_ID_MASK	= 0xFFFF,
> +
> +	SLI4_RQ_DB_NUM_SHIFT	= 16,
> +	SLI4_RQ_DB_NUM_MASK	= 0x3FFF0000,
> +};
> +
> +#define SLI4_RQ_DOORBELL(n, i) \
> +	(((i) & SLI4_RQ_DB_ID_MASK) | \
> +	 (((n) << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK))
> +
> +/* WQ_DOORBELL - WQ Doorbell Register */
> +#define SLI4_IO_WQ_DB_REG	0x040
> +#define SLI4_IF6_WQ_DB_REG	0x040
> +enum sli4_wq_e {
> +	SLI4_WQ_ID_MASK		= 0xFFFF,
> +
> +	SLI4_WQ_IDX_SHIFT	= 16,
> +	SLI4_WQ_IDX_MASK	= 0xFF0000,
> +
> +	SLI4_WQ_NUM_SHIFT	= 24,
> +	SLI4_WQ_NUM_MASK	= 0x0FF00000,
> +};
> +
> +#define SLI4_WQ_DOORBELL(n, x, i) \
> +	(((i) & SLI4_WQ_ID_MASK) | \
> +	 (((x) << SLI4_WQ_IDX_SHIFT) & SLI4_WQ_IDX_MASK) | \
> +	 (((n) << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK))
> +
> +/* SLIPORT_SEMAPHORE - SLI Port Host and Port Status Register */
> +#define SLI4_PORT_SEMP_REG		0x0400
> +enum sli4_port_sem_e {
> +	SLI4_PORT_SEMP_ERR_MASK		= 0xF000,
> +	SLI4_PORT_SEMP_UNRECOV_ERR	= 0xF000,
> +};
> +
> +/* SLIPORT_STATUS - SLI Port Status Register */
> +#define SLI4_PORT_STATUS_REGOFF		0x0404
> +enum sli4_port_status {
> +	SLI4_PORT_STATUS_FDP		= (1 << 21),
> +	SLI4_PORT_STATUS_RDY		= (1 << 23),
> +	SLI4_PORT_STATUS_RN		= (1 << 24),
> +	SLI4_PORT_STATUS_DIP		= (1 << 25),
> +	SLI4_PORT_STATUS_OTI		= (1 << 29),
> +	SLI4_PORT_STATUS_ERR		= (1 << 31),
> +};
> +
> +#define SLI4_PHYDEV_CTRL_REG		0x0414
> +#define SLI4_PHYDEV_CTRL_FRST		(1 << 1)
> +#define SLI4_PHYDEV_CTRL_DD		(1 << 2)
> +
> +/* Register name enums */
> +enum sli4_regname_en {
> +	SLI4_REG_BMBX,
> +	SLI4_REG_EQ_DOORBELL,
> +	SLI4_REG_CQ_DOORBELL,
> +	SLI4_REG_RQ_DOORBELL,
> +	SLI4_REG_IO_WQ_DOORBELL,
> +	SLI4_REG_MQ_DOORBELL,
> +	SLI4_REG_PHYSDEV_CONTROL,
> +	SLI4_REG_PORT_CONTROL,
> +	SLI4_REG_PORT_ERROR1,
> +	SLI4_REG_PORT_ERROR2,
> +	SLI4_REG_PORT_SEMAPHORE,
> +	SLI4_REG_PORT_STATUS,
> +	SLI4_REG_UNKWOWN			/* must be last */
> +};
> +
> +struct sli4_reg {
> +	u32	rset;
> +	u32	off;
> +};
> +
> +#endif /* !_SLI4_H */
> -- 
> 2.16.4
> 

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-15 18:56   ` Daniel Wagner
@ 2020-04-23  2:50     ` James Smart
  2020-04-23  8:05       ` Daniel Wagner
  0 siblings, 1 reply; 124+ messages in thread
From: James Smart @ 2020-04-23  2:50 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/15/2020 11:56 AM, Daniel Wagner wrote:
...
>> +	switch (evt) {
>> +	case EFC_EVT_ENTER:
>> +		efc_node_hold_frames(node);
>> +		/* Fall through */
> 
> 		fallthrough;
> 

Actually the patches that went in for -Wimplicit-fallthrough want
/* fall through */


Other comments are fine and we'll address them.
BTW: Same goes for comments on patches 3, 7, 8, 9, 10, and 11.

> 
> I run out of steam and thus stop here...
> 
> Thanks,
> Daniel
> 

I understand. Regardless, thank you for the time and effort on this.

-- james

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-16  6:47   ` Hannes Reinecke
@ 2020-04-23  2:55     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  2:55 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/15/2020 11:47 PM, Hannes Reinecke wrote:
...
>> +    case EFC_EVT_FLOGI_RCVD: {
>> +        struct fc_frame_header *hdr = cbdata->header->dma.virt;
>> +        u32 d_id = ntoh24(hdr->fh_d_id);
>> +
>> +        /* sm: / save sparams, send FLOGI acc */
>> +        memcpy(node->sport->domain->flogi_service_params,
>> +               cbdata->payload->dma.virt,
>> +               sizeof(struct fc_els_flogi));
>> +
> 
> Is the '->domain' pointer always present at this point?
> Shouldn't we rather test for it before accessing?

Always - if there's a node ans sport.

> 
>> +        /* send FC LS_ACC response, override s_id */
>> +        efc_fabric_set_topology(node, EFC_SPORT_TOPOLOGY_P2P);
>> +        efc->tt.send_flogi_p2p_acc(efc, node,
>> +                be16_to_cpu(hdr->fh_ox_id), d_id);
>> +        if (efc_p2p_setup(node->sport)) {
>> +            node_printf(node,
>> +                    "p2p setup failed, shutting down node\n");
>> +            efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
>> +        } else {
>> +            efc_node_transition(node,
>> +                        __efc_p2p_wait_flogi_acc_cmpl,
>> +                        NULL);
>> +        }
>> +
>> +        break;
>> +    }
>> +
>> +    case EFC_EVT_LOGO_RCVD: {
>> +        struct fc_frame_header *hdr = cbdata->header->dma.virt;
>> +
>> +        if (!node->sport->domain->attached) {
>> +            /* most likely a frame left over from before a link
>> +             * down; drop and
>> +             * shut node down w/ "explicit logout" so pending
>> +             * frames are processed
>> +             */
> 
> Same here; I find it slightly weird to have an 'attached' field in the 
> domain structure; attached to what?
> Doesn't the existence of the ->domain pointer signal the same thing?
> If it doesn't, why don't we test for the ->domain pointer before 
> accessing it?

Attached means something different here. Not only does the structure 
exist, but there have been VFI/VPI resources registered with the 
hardware for it.


Agree with your other comments and will address them.
And the same agreement for comments on patches 3, 9 & 10.

-- james



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations
  2020-04-16  6:51   ` Hannes Reinecke
@ 2020-04-23  2:57     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  2:57 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/15/2020 11:51 PM, Hannes Reinecke wrote:
...
> 
> Please consider running 'pahole' on this structure and rearrange it.
> Looks like it'll required quite some padding which could be avoided.

we will

Thanks.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations
  2020-04-16  7:22   ` Daniel Wagner
@ 2020-04-23  2:59     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  2:59 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/16/2020 12:22 AM, Daniel Wagner wrote:
...
>> +/**
>> + * HW IO object.
>> + *
>> + * Stores the per-IO information necessary
>> + * for both the lower (SLI) and upper
>> + * layers (efct).
>> + */
>> +struct efct_hw_io {
>> +	/* Owned by HW */
> 
> What are the rules? Live time properties?
> 
>> +
>> +	/* reference counter and callback function */
> 
> Make kerneldoc out of it?
> 

I'll see if there's something we can write about it.

Agree with rest of comments.

-- james



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 16/31] elx: efct: Driver initialization routines
  2020-04-16  7:11   ` Hannes Reinecke
@ 2020-04-23  3:09     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  3:09 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/16/2020 12:11 AM, Hannes Reinecke wrote:
...
>> +    if (efct) {
>> +        memset(efct, 0, sizeof(*efct));
>> +        for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
>> +            if (!efct_devices[i]) {
>> +                efct->instance_index = i;
>> +                efct_devices[i] = efct;
>> +                break;
>> +            }
>> +        }
>> +
>> +        if (i == ARRAY_SIZE(efct_devices)) {
>> +            pr_err("Exceeded max supported devices.\n");
>> +            kfree(efct);
>> +            efct = NULL;
>> +        } else {
>> +            efct->attached = false;
>> +        }
> 
> Errm. This is wrong.
> When we exit the for() loop above, i _will_ equal the array size.
> Surely you mean
> 
> if (i < ARRAY_SIZE())
> 
> right?
> 

No. We want to free the structure if we went through the array and 
didn't find an empty slot. It only breaks from the loop if it found an 
empty slot and used it.
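
Pulling just that logic out as a sketch:

/* returns the claimed index, or -1 when the table is full */
static int efct_claim_instance_slot(struct efct *efct)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(efct_devices); i++) {
		if (!efct_devices[i]) {
			efct_devices[i] = efct;	/* claimed a slot */
			break;			/* i < ARRAY_SIZE() here */
		}
	}

	/* i == ARRAY_SIZE() only when the loop never hit "break" */
	if (i == ARRAY_SIZE(efct_devices))
		return -1;

	return i;
}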



>> +        } else if (event->topology == SLI_LINK_TOPO_LOOP) {
>> +            u8    *buf = NULL;
>> +
>> +            efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
>> +                      event->speed);
>> +            dma = &hw->loop_map;
>> +            dma->size = SLI4_MIN_LOOP_MAP_BYTES;
>> +            dma->virt = dma_alloc_coherent(&efct->pcidev->dev,
>> +                               dma->size, &dma->phys,
>> +                               GFP_DMA);
>> +            if (!dma->virt)
>> +                efc_log_err(hw->os, "efct_dma_alloc_fail\n");
>> +
>> +            buf = kmalloc(SLI4_BMBX_SIZE, GFP_ATOMIC);
>> +            if (!buf)
>> +                break;
>> +
>> +            if (!sli_cmd_read_topology(&hw->sli, buf,
>> +                          SLI4_BMBX_SIZE,
>> +                               &hw->loop_map)) {
>> +                rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
>> +                             __efct_read_topology_cb,
>> +                             NULL);
>> +            }
>> +
> 
> Not sure if this is a good idea; we'll have to allocate extra memory 
> whenever the loop topology changes.
> Which typically happens when there had been a failure somewhere, and 
> chances are that it'll affect our root fs, making memory allocation 
> something we really need to look at.
> Can't we pre-allocate a buffer here somewhere in the global 
> initialisation so that we don't have to allocate it every time?

Agree. will look into it.


...
> 
> What happened to multiqueue support?
> The original lpfc driver did it, so we shouldn't regress with the new driver...

For a target driver?

When something like this is in the tgt infrastructure, we'll do something 
with it.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2020-04-16  7:24   ` Hannes Reinecke
@ 2020-04-23  3:16     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  3:16 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/16/2020 12:24 AM, Hannes Reinecke wrote:
...
> But now you go and spoil it all again.
> At the very least use a mempool here; not sure how frequent these calls 
> are, but we're talking to the hardware here, so I assume it'll happen 
> more than just once.

we will move to a mempool

Agree with the rest of the comments and will address them.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 22/31] elx: efct: Extended link Service IO handling
  2020-04-16  7:58   ` Hannes Reinecke
@ 2020-04-23  3:30     ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  3:30 UTC (permalink / raw)
  To: Hannes Reinecke, linux-scsi
  Cc: dwagner, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/16/2020 12:58 AM, Hannes Reinecke wrote:
...
> That is a bit strange.
> Sending a response can fail, but (apparently) sending a request cannot; 
> at the very least you don't have (or check) the return value from the 
> send request.
> 
> Some intricate scheme I've missed?
> 

The send_request path fully handles the failure case: it ends up calling 
the request completion callback with an error status before send_request 
returns to the caller. It's not handled that way for the response; the 
response routine handles the error.
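
In pattern form (hypothetical names, just to show the contract):

static int efct_els_submit_example(struct efct_io *els)
{
	int rc;

	rc = efct_els_send_req(els);	/* hypothetical submit helper */
	if (rc) {
		/*
		 * The send path has already invoked the request
		 * completion callback with an error status, so there is
		 * nothing further for the caller to unwind.
		 */
	}
	return rc;
}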

Agree with your other comments and will address them.
Same thing for comments on patches 19 and 20.

-- james

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 25/31] elx: efct: Hardware IO submission routines
  2020-04-16 12:45     ` Daniel Wagner
@ 2020-04-23  3:37       ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23  3:37 UTC (permalink / raw)
  To: Daniel Wagner, Hannes Reinecke
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	Ram Vegesna

On 4/16/2020 5:45 AM, Daniel Wagner wrote:
> On Thu, Apr 16, 2020 at 10:10:18AM +0200, Hannes Reinecke wrote:
>>> +	switch (type) {
>>> +	case EFCT_HW_ELS_REQ:
>>> +		if (!send ||
>>> +		    sli_els_request64_wqe(&hw->sli, io->wqe.wqebuf,
>>> +					  hw->sli.wqe_size, io->sgl,
>>> +					*((u8 *)send->virt),
>>> +					len, receive->size,
>>> +					iparam->els.timeout,
>>> +					io->indicator, io->reqtag,
>>> +					SLI4_CQ_DEFAULT, rnode->indicator,
>>> +					rnode->sport->indicator,
>>> +					rnode->attached, rnode->fc_id,
>>> +					rnode->sport->fc_id)) {
>>> +			efc_log_err(hw->os, "REQ WQE error\n");
>>> +			rc = EFCT_HW_RTN_ERROR;
>>> +		}
>>> +		break;
>>
>> I did mention several times that I'm not a big fan of overly long argument
>> lists.
>> Can't you pass in 'io' and 'rnode' directly and cut down on the number of
>> arguments?
> 
> Yes, please!
> 

Agree, and agree with both Hannes' and your comments on this patch.

Daniel - also agree with your comments on patches 16 through 23.

Thanks

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-23  1:32     ` James Smart
@ 2020-04-23  7:49       ` Daniel Wagner
  0 siblings, 0 replies; 124+ messages in thread
From: Daniel Wagner @ 2020-04-23  7:49 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

Hi James,

On Wed, Apr 22, 2020 at 06:32:09PM -0700, James Smart wrote:
> On 4/15/2020 11:19 AM, Daniel Wagner wrote:
> > > +void
> > > +efc_node_transition(struct efc_node *node,
> > > +		    void *(*state)(struct efc_sm_ctx *,
> > > +				   enum efc_sm_event, void *), void *data)
> > > +{
> > > +	struct efc_sm_ctx *ctx = &node->sm;
> > > +
> > > +	if (ctx->current_state == state) {
> > > +		efc_node_post_event(node, EFC_EVT_REENTER, data);
> > > +	} else {
> > > +		efc_node_post_event(node, EFC_EVT_EXIT, data);
> > > +		ctx->current_state = state;
> > > +		efc_node_post_event(node, EFC_EVT_ENTER, data);
> > > +	}
> > 
> > Why does efc_node_transition not need to take the efc->lock as in
> > efc_remote_node_cb? How are the state machine state transitions
> > serialized?
> 
> efc_remote_node_cb is a callback called from outside the statemachine, so it
> needs to take the lock.   efc_node_transition is called from within the
> statemachine, after the lock is taken. In general the lock is taken upon
> entering the statemachine and released before exiting. There isn't granular
> locking within the statemachine.

Thanks for explaining. I find such a short explanation extremely helpful
when looking at code. It might be obvious, but not for everyone :)

Could you add this as a comment somewhere?

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-23  1:35     ` James Smart
@ 2020-04-23  8:02       ` Daniel Wagner
  2020-04-23 18:24         ` James Smart
  0 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-23  8:02 UTC (permalink / raw)
  To: James Smart
  Cc: Hannes Reinecke, linux-scsi, maier, bvanassche, herbszt,
	natechancellor, rdunlap, Ram Vegesna

Hi James,

On Wed, Apr 22, 2020 at 06:35:24PM -0700, James Smart wrote:
> On 4/15/2020 8:51 AM, Hannes Reinecke wrote:
> > Relating to my previous comments: Can you elaborate about the lifetime
> > rules to the node? It looks as if any access to the node is being
> > guarded by the main efc->lock.
> > Which, taken to extremes, would meant that one has to hold that lock for
> > _any_ access to the node; that would include I/O processing etc, too.
> > I can't really see how that would scale to the IOPS you are pursuing, so
> > clearly that will not happen.
> > So how _do_ you ensure the validity of a node if the lock is not held?
> > 
> > Cheers,
> > 
> > Hannes
> 
> This lock is defined for a single EQ/RQ. For multi EQ/RQ, which we will add
> in the future, we will have more granular locks and performance improvements.
> All the State Machine Events are triggered from the EQ processing thread
> except vport creation and deletion.
> 
> Locking:
>   Global lock per efc port "efc->lock"
> 
> Domain:
>     efc_domain_cb --> (All the events are executed under efc lock)
> 
>     Async link up notification(EFC_HW_DOMAIN_FOUND)
>          Alloc Domain, Start discovery
>          Alloc Sport
>     Async link down notification (EFC_HW_DOMAIN_LOST)
>           Hold frames. (New frames will be moved to pending)
>           Post Sport Shutdown.
>           Post All Nodes Shutdown.
> 
> Sport:
>     efc_lport_cb --> (All the events are executed under efc lock)
> 
>       HW sport registration (Mbox command responses) to complete sport
> allocation.
> 
> Vport :
>     efc_sport_vport_new() --> New Vport
> 
>      efc_vport_sport_alloc() (Protected by efc lock)
>      Alloc Sport, and start Sport SM
> 
>    efc_sport_vport_del()  --> Delete Vport
> 
>      Post shutdown event to the Sport. (Protected by efc lock)
> 
> Node:
>    efc_remote_node_cb() --> (All the events are executed under efc lock)
> 
>        HW node registration (Mbox command responses) to complete node
> allocation.
> 
> Node lifeCycle:
>    efc_domain_dispatch_frame:-
> 
>     EFC lock held
>     efc_node_find() IF not found, Create one.
>     when PLOGI is received, Register with hardware
>     Upon PRLI completion, setup a new session with LIO.
> 
>    IO path:
> 
>      Find sport and node under EFC lock.
>      If node can process commands, start IO processing.
>      Each IO is added to the node->active_ios.
> 
>   Node deletion: (RSCN notification or LOGO received ..)
> 
>      EFC lock held
>      Disable receiving fcp commands. (node->fcp_enabled)
>      Disable IO allocation for this node.
>      Remove LIO session.
>      Deregister node with HW
>      Waits for all active_ios to be completed/freed.
>      Purge pending IOs.
>      Free the Node.


Thanks a lot for this. IMO, this should be added as documentation.

> In the IO path, the EFC lock is acquired to find the sport and node; the lock
> is then released and IO allocation and processing continue without it. Note:
> there is still an unsafe area where we check "node->hold_frames" without the lock.

Is this in the fast path? Would RCU help to avoid taking the lock at all?
The usage pattern sounds like it would be a candidate for RCU.

> Node is
> assumed to be kept alive until all the IOs under the node are freed.  Adding
> the refcounting will remove this assumption.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-23  2:50     ` James Smart
@ 2020-04-23  8:05       ` Daniel Wagner
  2020-04-23  8:12         ` Nathan Chancellor
  0 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-23  8:05 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

Hi James,

On Wed, Apr 22, 2020 at 07:50:06PM -0700, James Smart wrote:
> On 4/15/2020 11:56 AM, Daniel Wagner wrote:
> ...
> > > +	switch (evt) {
> > > +	case EFC_EVT_ENTER:
> > > +		efc_node_hold_frames(node);
> > > +		/* Fall through */
> > 
> > 		fallthrough;
> > 
> 
> Actually the patches that went in for -Wimplicit-fallthrough wants
> /* fall through */

Ah okay, I thought the fall through rules were active. Anyway, I am sure
someone will run a script to report what to change.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 14/31] elx: libefc: FC node ELS and state handling
  2020-04-23  8:05       ` Daniel Wagner
@ 2020-04-23  8:12         ` Nathan Chancellor
  0 siblings, 0 replies; 124+ messages in thread
From: Nathan Chancellor @ 2020-04-23  8:12 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: James Smart, linux-scsi, maier, bvanassche, herbszt, rdunlap,
	hare, Ram Vegesna

On Thu, Apr 23, 2020 at 10:05:08AM +0200, Daniel Wagner wrote:
> Hi James,
> 
> On Wed, Apr 22, 2020 at 07:50:06PM -0700, James Smart wrote:
> > On 4/15/2020 11:56 AM, Daniel Wagner wrote:
> > ...
> > > > +	switch (evt) {
> > > > +	case EFC_EVT_ENTER:
> > > > +		efc_node_hold_frames(node);
> > > > +		/* Fall through */
> > > 
> > > 		fallthrough;
> > > 
> > 
> > Actually the patches that went in for -Wimplicit-fallthrough wants
> > /* fall through */
> 
> Ah okay, I thought the fall through rules were active. Anyway, I am sure
> someone will run a script to report what to change.
> 
> Thanks,
> Daniel

Indeed, fallthrough; is preferred now, see commit f36d3eb89a43 ("checkpatch:
prefer fallthrough; over fallthrough comments") and the thread that is
linked in that commit.
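
i.e. the hunk above would become (the next case label is illustrative):

	switch (evt) {
	case EFC_EVT_ENTER:
		efc_node_hold_frames(node);
		fallthrough;
	case EFC_EVT_REENTER:
		/* ... */
		break;
	}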

Cheers,
Nathan

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 12/31] elx: libefc: Remote node state machine interfaces
  2020-04-23  8:02       ` Daniel Wagner
@ 2020-04-23 18:24         ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-23 18:24 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: Hannes Reinecke, linux-scsi, maier, bvanassche, herbszt,
	natechancellor, rdunlap, Ram Vegesna

On 4/23/2020 1:02 AM, Daniel Wagner wrote:
> 
> Thanks a lot for this. IMO, this should be added as documentation.
> 
>> In IO path, EFC lock is acquired to find the sport and node, release the EFC
>> lock and continue with IO allocation and processing. Note: There is still an
>> unsafe area where we check for 'node->hold_frames" without the lock.
> 
> Is this is the fast path? Would RCU help to avoid taking the lock at all?
> The usage pattern sounds like it would be a candidate for RCU.
> 
>> Node is
>> assumed to be kept alive until all the IOs under the node are freed.  Adding
>> the refcounting will remove this assumption.

I'll see what we can do, and will look at RCU.

BTW: also agreed with your comments in all the remaining patches, so 
will address those as well.

Same for Hannes's comments.

-- james


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2020-04-22  5:05     ` James Smart
@ 2020-04-24  7:29       ` Daniel Wagner
  2020-04-24 15:21         ` James Smart
  0 siblings, 1 reply; 124+ messages in thread
From: Daniel Wagner @ 2020-04-24  7:29 UTC (permalink / raw)
  To: James Smart
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On Tue, Apr 21, 2020 at 10:05:26PM -0700, James Smart wrote:
> On 4/15/2020 3:04 AM, Daniel Wagner wrote:
> ...
> > > +static void
> > > +__sli_queue_destroy(struct sli4 *sli4, struct sli4_queue *q)
> > > +{
> > > +	if (!q->dma.size)
> > > +		return;
> > > +
> > > +	dma_free_coherent(&sli4->pcidev->dev, q->dma.size,
> > > +			  q->dma.virt, q->dma.phys);
> > > +	memset(&q->dma, 0, sizeof(struct efc_dma));
> > 
> > Is this necessary to clear q->dma? Just asking if it's possible to
> > avoid the additional work.
> 
> unfortunately, yes - at least q->dma.size must be cleared. It's used to
> detect validity (must be non-0).

I see.

> > > +sli_mq_write(struct sli4 *sli4, struct sli4_queue *q,
> > > +	     u8 *entry)
> > > +{
> > > +	u8 *qe = q->dma.virt;
> > > +	u32 qindex;
> > > +	u32 val = 0;
> > > +	unsigned long flags;
> > > +
> > > +	spin_lock_irqsave(&q->lock, flags);
> > > +	qindex = q->index;
> > > +	qe += q->index * q->size;
> > > +
> > > +	memcpy(qe, entry, q->size);
> > > +	q->n_posted = 1;
> > 
> > Shouldn't this be q->n_posted++ ? Or is it guaranteed n_posted is 0?
> 
> yes - we post 1 at a time.

Sorry to ask again, but I'm worried that sli_mq_write() is called while there
is work pending. Maybe it could at least be documented with something like
WARN_ON_ONCE(n_posted != 0)

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2020-04-24  7:29       ` Daniel Wagner
@ 2020-04-24 15:21         ` James Smart
  0 siblings, 0 replies; 124+ messages in thread
From: James Smart @ 2020-04-24 15:21 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-scsi, maier, bvanassche, herbszt, natechancellor, rdunlap,
	hare, Ram Vegesna

On 4/24/2020 12:29 AM, Daniel Wagner wrote:
>>> Shouldn't this be q->n_posted++ ? Or is it guaranteed n_posted is 0?
>>
>> yes - we post 1 at a time.
> 
> Sorry to ask again, but I worried that sli_mq_write() is called while there is
> work pending. Maybe it could at least documented with something like
> WARN_ON_ONCE(n_posted != 0)

Yes - I'm having us take another look at the q->n_posted field. n_posted is
meant to be an accumulator used when consuming or posting multiple elements
from/to a queue. As RQ and MQ never post multiples, it makes no sense to set
the field to one. So the DB writes will just set the DB value explicitly and
ignore the field. The only code then setting n_posted will be the code that
actually uses it as an accumulator. This should address your concerns. We'll
also look at whether the queue structure is defined well - meaning perhaps it
should contain common definitions (used by all queues) and a union of
per-queue-type structs, so it's very clear which fields each queue type
accesses. Not sure what will look the cleanest in the end, but that's the
thinking.
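
For MQ, roughly this shape is what we have in mind - a sketch only, not the
final code. The doorbell helper and field names (sli_mq_doorbell_val(),
q->db_regaddr, q->id, q->length) are placeholders here, and it folds in the
WARN_ON_ONCE() you suggested:

static int
sli_mq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
{
	u8 *qe = q->dma.virt;
	u32 val;
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);

	/* MQ posts exactly one entry at a time; n_posted is left untouched
	 * and remains an accumulator for the queue types that batch. */
	WARN_ON_ONCE(q->n_posted != 0);

	qe += q->index * q->size;
	memcpy(qe, entry, q->size);

	/* Ring the doorbell directly: queue id plus one entry posted.
	 * (placeholder helper/field names - register layout elided) */
	val = sli_mq_doorbell_val(q->id, 1);
	writel(val, q->db_regaddr);

	q->index = (q->index + 1) % q->length;

	spin_unlock_irqrestore(&q->lock, flags);

	return 0;
}

/*
 * Separately, the rough split being considered for the queue struct
 * (illustrative only):
 *
 *	struct sli4_queue {
 *		... fields common to all queue types ...
 *		union {
 *			struct { ... } eq;
 *			struct { ... } cq;
 *			struct { ... } mq;
 *			struct { ... } wq;
 *			struct { ... } rq;
 *		} u;
 *	};
 */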

-- james



^ permalink raw reply	[flat|nested] 124+ messages in thread

end of thread, other threads:[~2020-04-24 15:21 UTC | newest]

Thread overview: 124+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-12  3:32 [PATCH v3 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
2020-04-12  3:32 ` [PATCH v3 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
2020-04-14 15:23   ` Daniel Wagner
2020-04-22  4:28     ` James Smart
2020-04-15 12:06   ` Hannes Reinecke
2020-04-23  1:52   ` Roman Bolshakov
2020-04-12  3:32 ` [PATCH v3 02/31] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
2020-04-14 18:02   ` Daniel Wagner
2020-04-22  4:41     ` James Smart
2020-04-15 12:14   ` Hannes Reinecke
2020-04-15 17:43     ` James Bottomley
2020-04-22  4:44     ` James Smart
2020-04-12  3:32 ` [PATCH v3 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
2020-04-14 19:01   ` Daniel Wagner
2020-04-15 12:22   ` Hannes Reinecke
2020-04-12  3:32 ` [PATCH v3 04/31] elx: libefc_sli: queue create/destroy/parse routines James Smart
2020-04-15 10:04   ` Daniel Wagner
2020-04-22  5:05     ` James Smart
2020-04-24  7:29       ` Daniel Wagner
2020-04-24 15:21         ` James Smart
2020-04-15 12:27   ` Hannes Reinecke
2020-04-12  3:32 ` [PATCH v3 05/31] elx: libefc_sli: Populate and post different WQEs James Smart
2020-04-15 14:34   ` Daniel Wagner
2020-04-22  5:08     ` James Smart
2020-04-12  3:32 ` [PATCH v3 06/31] elx: libefc_sli: bmbx routines and SLI config commands James Smart
2020-04-15 16:10   ` Daniel Wagner
2020-04-22  5:12     ` James Smart
2020-04-12  3:32 ` [PATCH v3 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
2020-04-15 12:49   ` Hannes Reinecke
2020-04-15 17:06   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 08/31] elx: libefc: Generic state machine framework James Smart
2020-04-15 12:37   ` Hannes Reinecke
2020-04-15 17:20   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
2020-04-15 12:41   ` Hannes Reinecke
2020-04-15 17:32   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 10/31] elx: libefc: FC Domain state machine interfaces James Smart
2020-04-15 12:50   ` Hannes Reinecke
2020-04-15 17:50   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 11/31] elx: libefc: SLI and FC PORT " James Smart
2020-04-15 15:38   ` Hannes Reinecke
2020-04-22 23:12     ` James Smart
2020-04-15 18:04   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 12/31] elx: libefc: Remote node " James Smart
2020-04-15 15:51   ` Hannes Reinecke
2020-04-23  1:35     ` James Smart
2020-04-23  8:02       ` Daniel Wagner
2020-04-23 18:24         ` James Smart
2020-04-15 18:19   ` Daniel Wagner
2020-04-23  1:32     ` James Smart
2020-04-23  7:49       ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 13/31] elx: libefc: Fabric " James Smart
2020-04-15 18:51   ` Daniel Wagner
2020-04-16  6:37   ` Hannes Reinecke
2020-04-23  1:38     ` James Smart
2020-04-12  3:32 ` [PATCH v3 14/31] elx: libefc: FC node ELS and state handling James Smart
2020-04-15 18:56   ` Daniel Wagner
2020-04-23  2:50     ` James Smart
2020-04-23  8:05       ` Daniel Wagner
2020-04-23  8:12         ` Nathan Chancellor
2020-04-16  6:47   ` Hannes Reinecke
2020-04-23  2:55     ` James Smart
2020-04-12  3:32 ` [PATCH v3 15/31] elx: efct: Data structures and defines for hw operations James Smart
2020-04-16  6:51   ` Hannes Reinecke
2020-04-23  2:57     ` James Smart
2020-04-16  7:22   ` Daniel Wagner
2020-04-23  2:59     ` James Smart
2020-04-12  3:32 ` [PATCH v3 16/31] elx: efct: Driver initialization routines James Smart
2020-04-16  7:11   ` Hannes Reinecke
2020-04-23  3:09     ` James Smart
2020-04-16  8:03   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 17/31] elx: efct: Hardware queues creation and deletion James Smart
2020-04-16  7:14   ` Hannes Reinecke
2020-04-16  8:24   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 18/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
2020-04-16  7:24   ` Hannes Reinecke
2020-04-23  3:16     ` James Smart
2020-04-16  8:41   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 19/31] elx: efct: Hardware IO and SGL initialization James Smart
2020-04-16  7:32   ` Hannes Reinecke
2020-04-16  8:47   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 20/31] elx: efct: Hardware queues processing James Smart
2020-04-16  7:37   ` Hannes Reinecke
2020-04-16  9:17   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 21/31] elx: efct: Unsolicited FC frame processing routines James Smart
2020-04-16  9:36   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 22/31] elx: efct: Extended link Service IO handling James Smart
2020-04-16  7:58   ` Hannes Reinecke
2020-04-23  3:30     ` James Smart
2020-04-16  9:49   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 23/31] elx: efct: SCSI IO handling routines James Smart
2020-04-16 11:40   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 24/31] elx: efct: LIO backend interface routines James Smart
2020-04-12  4:57   ` Bart Van Assche
2020-04-16 11:48     ` Daniel Wagner
2020-04-22  4:20     ` James Smart
2020-04-22  5:09       ` Bart Van Assche
2020-04-23  1:39         ` James Smart
2020-04-16  8:02   ` Hannes Reinecke
2020-04-16 12:34   ` Daniel Wagner
2020-04-22  4:20     ` James Smart
2020-04-12  3:32 ` [PATCH v3 25/31] elx: efct: Hardware IO submission routines James Smart
2020-04-16  8:10   ` Hannes Reinecke
2020-04-16 12:45     ` Daniel Wagner
2020-04-23  3:37       ` James Smart
2020-04-16 12:44   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 26/31] elx: efct: link statistics and SFP data James Smart
2020-04-16 12:55   ` Daniel Wagner
2020-04-12  3:32 ` [PATCH v3 27/31] elx: efct: xport and hardware teardown routines James Smart
2020-04-16  9:45   ` Hannes Reinecke
2020-04-16 13:01   ` Daniel Wagner
2020-04-12  3:33 ` [PATCH v3 28/31] elx: efct: Firmware update, async link processing James Smart
2020-04-16 10:01   ` Hannes Reinecke
2020-04-16 13:10   ` Daniel Wagner
2020-04-12  3:33 ` [PATCH v3 29/31] elx: efct: scsi_transport_fc host interface support James Smart
2020-04-12  3:33 ` [PATCH v3 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
2020-04-16 10:02   ` Hannes Reinecke
2020-04-16 13:15   ` Daniel Wagner
2020-04-12  3:33 ` [PATCH v3 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
2020-04-12  6:16   ` kbuild test robot
2020-04-12  6:16     ` kbuild test robot
2020-04-12  7:56   ` kbuild test robot
2020-04-12  7:56     ` kbuild test robot
2020-04-16 13:15   ` Daniel Wagner
