* [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver
@ 2021-01-07 22:58 James Smart
  2021-01-07 22:58 ` [PATCH v7 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
                   ` (30 more replies)
  0 siblings, 31 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart

This patch set is a request to incorporate the new Broadcom
(Emulex) FC target driver, efct, into the kernel source tree.

The driver source has been announced a couple of times, most recently
on 1/06/2021. The driver has been hosted on GitLab for review and has
received contributions from the community.
  gitlab (git@gitlab.com:jsmart/efct-Emulex_FC_Target.git)

The driver integrates into the source tree at the (new) drivers/scsi/elx
subdirectory.

The driver consists of the following components:
- A libefc_sli subdirectory: This subdirectory contains a library that
  encapsulates common definitions and routines for an Emulex SLI-4
  adapter.
- A libefc subdirectory: This subdirectory contains a library of
  common routines. Most importantly, it contains the routines that
  implement an FC Discovery engine for target mode.
- An efct subdirectory: This subdirectory contains the efct target
  mode device driver. The driver utilizes the above libraries and
  plugs into the SCSI LIO interfaces. The driver is SCSI only at
  this time.

The patches populate the libraries and device driver and can only
be compiled as a complete set.

This driver is completely independent from the lpfc device driver,
and there is no overlap in PCI IDs.

The patches have been cut against the 5.11/scsi-queue branch.

Thank you to those that have contributed to the driver in the past.

Review comments welcome!

-- james

Note: cpu offline/online support will be added at a later time

V7 modifications:
 Change els functions to return status from efc_els_send_rsp()
 Fix return codes in efct_hw_config_mrq()
 Ensure an EFC_XXX status value is returned (not numerical constants)
 Style: break declaration assignment into declaration and later value
   assignment.

V6 modifications:
  Refactor for better indentation and fewer else clauses
  Refactor to use structure elements directly and avoid local vars
  Slight update to node_sm_trace macro
  convert efc_nport_cb() to not return a value
  moved libefc/efc_lib.c to libefc/efclib.c
  Reword comment in efc_node_handle_explicit_logo()
  efc_disc_io_complete(): update WARN_ON to WARN_ON_ONCE
  Change els send functions to return status from efc_els_send_req()
  Ensure an EFC_XXX status value is returned (not numerical constants)
  Dynamically allocate wq_cpu_array rather than fixed size array
  Fix return variables to have same type as routine
  Ensure an EFCT_HW_XXX status value is returned (not numerical constants)
  Kernel doc format changes for efct_io structure.
  Cleanup of meaningless comments in efct_scsi_cmd_resp declaration
  in hw rtns: Replace EFC_FAIL status with EFCT_HW_XXX status values
  add kfree to efct_hw_teardown()

V5 modifications:
  Copyright for 2021 year change.
  Fixed kernel test robot reported warnings.
  Changed BMBX_WRITE defines to static inline functions.
  Common naming for topology defines.
  Remove efc_log_test.
  Topology naming changes.
  Indentation change.
  Changed argument list for sli_cmd_reg_fcfi/ sli_cmd_reg_fcfi_mrq 
  Added Mempool for ELS ios.
  Remove EFC_HW_NODE_XXX and EFC_HW_NPORT_XXX events.
  Use EFC_EVT_XXX events directly for port and node callbacks.
  Remove node_list from nport and use xarray lookup.
  Use WRITE_ONCE/READ_ONCE for els_io_enabled flag.
  Replace nport_list with xarray lookup related changes.
  Topology naming changes.
  ELS IO mempool changes.
  EFC_EVT_XXX name changes.
  Replace global efct_devices array with linked list.
  Add support for FC speed 64G and 128G
  Use list_del_init() when needed.

V4 modifications:
  Copyright year change.
  Fixed cppcheck tool reported warnings.
  Ran pahole tool, fixed padding for most of the structures. 
  Support for Multiple RQs and EQs.
  New user driver node structure to avoid lock contention in IO path.
  libefc_sli:
    Add target-devel@vger.kernel.org in MAINTAINERS
    Changed doorbell defines to inline functions
    Code cleanup: Indentation, code simplification, unnecessary brackets
    Added SLI4_ prefix to missing defines
    Moved driver specific structures to the end of the file.
    Page size calculation changes
    Reduce function parameters for WQE filling functions
    Changed wait functions to use usleep_range and time_before apis
    Removed size input for all the cmds as it is fixed to SLI4_BMBX_SIZE.
    Changed all SLI mbox routines to memset using SLI_BMBX_SIZE.
    All WQE helper routines to use sli->wqe_size for memset.
  libefc:
    Remove the "EFC_SM_EVENT_START" id and define SM events serially.
    Moved hold frames and pending frames handling to library.
    Added efc locking notes.
    Added kref to domain, nport and node objects.
    Renamed efc_sli_port object to efc_nport
    Reworked APIs and libefc_function_template.
    Moved ELS handling to Library
    Moved registering discovery objects to library.
    Added issue_mbox_rqst, send_els, send_bls template functions.
    Changes to use new function template
    Get Domain Ref count when a new nport is added
    Release domain ref when nport is freed.
    Renamed efc_sport.[ch] to efc_nport.[ch]
    Add kref support for Nport structure.
    Upon received PRLI, wait for LIO session registration before sending
      the PRLI response.
    Changes to call new els functions.
    Changed all state machine functions' return type from void * to void.
    Replace fall through comment with fallthrough statement.
    New patch added ELS handling to libefc library.
    New els_io_req for discovery io object.
    New patch cmds to register VFI, VPI and RPI.
    Removed all hw related template calls and added issue_mbox_rqst
       template call.
    Consolidate all the discovery related mbox related cmds in discovery
       library.
    Replaced allocation of command buffer for mailbox with stack variable.
  efct driver:
    New library Template registration and associated functions
    Multi-RQ support, configure MRQ.
    Configure filter to send all ELS traffic to one RQ and remaining IOs
       to all RQs.
    CPU based WQ scaling.
    Allocate loop map during initialization
    Remove redundant includes in efc_driver.h
    Remove els entries from scsi io object as discovery object handles it.
    Changed unsol frame handling. 
    Lookup for efct target node based on d_id and s_id.
    Rework IO path to not look up discovery object every time.
    Changes to reduce function params for efct_hw_io_send
    Changes to create efct_node(user driver specific node) and store
       lookup id.
    Removed debugfs interface.
    Reduced the argument list for WQE filling routines.


V3 modifications:
  Changed anonymous enums to named enums
  Split giant enums into multiple enums
  Use defines to spell out _MASK values directly
  Changed multiple #defines to named enums and a few vice versa cases
    for consistency
  Added Link Speed support for up to 128G
  Removed efc_assert define. Replaced with WARN_ON.
  Returned defined return values EFC_SUCCESS & EFC_FAIL
  Added return values for routines returning more than those 2 values
  Reduction of calling arguments in various routines.
  Expanded dump type and status handling
  Fixed locking in discovery handling routines.
  Fixed line formatting length and indentation issues.
  Removed code that was not used.
  Removed Sparse Vector APIs and structures. Use xarray api instead.
  Changed node pool creation. Use mempool and allocate dma memory when
    required
  Bug Fix: Send LS_RJT for non FCP PRLIs
  Removed Queue topology string and parsing routines and rework queue
    creation. Adapter configuration is implicitly known by the driver
  Used request_threaded_irq instead of our own thread
  Reworked efct_device_attach function to use if statements and gotos
  Changed efct_fw_reset, removed accessing other port
  Converted to use the pci_alloc_irq_vectors API
  Removed proc interface.
  Changed assertion log messages.
  Unified log message using cmd_name
  Removed DIF related code which is not used
  Removed SCSI get property
  Incorporated LIO interface review comments
  Reworked xxx_reg_vpi/vfi routines
  Use SPDX license in elx/Makefile
  Many more small changes.
  
V2 modifications:
 Contains the following modifications based on prior review comments:
  Indentation/Alignment/Spacing changes
  Comments: format cleanup; removed obvious or unnecessary comments;
    Added comments for clarity.
  Headers use #ifndef guards to protect against double inclusion
  Cleanup structure names (remove _s suffix)
  Encapsulate use of macro arguments
  Refactor to remove static function declarations for static local routines
  Removed unused variables
  Fix SLI4_INTF_VALID_MASK for 32bits
  Ensure no BIT() use
  Use __ffs() in page count macro
  Reorg to move field defines out of structure definition
  Commonize command building routines to reduce duplication
  LIO interface:
    Removed scsi initiator includes
    Cleaned up interface defines
    Removed lio WWN version attribute.
    Expanded macros within logging macros
    Cleaned up lio state setting macro
    Remove __force use
    Modularized session debugfs code so it can be easily replaced.
    Cleaned up abort task handling. Return after initiating.
    Modularized where possible to reduce duplication
    Convert from kthread to workqueue use
    Remove unused macros
  Add missing TARGET_CORE build attribute
  Fix kbuild test robot warnings

  Comments not addressed:
    Use of __packed: not believed necessary
    Session debugfs code remains. There is not yet a common LIO
      mechanism to replace it with.



James Smart (31):
  elx: libefc_sli: SLI-4 register offsets and field definitions
  elx: libefc_sli: SLI Descriptors and Queue entries
  elx: libefc_sli: Data structures and defines for mbox commands
  elx: libefc_sli: queue create/destroy/parse routines
  elx: libefc_sli: Populate and post different WQEs
  elx: libefc_sli: bmbx routines and SLI config commands
  elx: libefc_sli: APIs to setup SLI library
  elx: libefc: Generic state machine framework
  elx: libefc: Emulex FC discovery library APIs and definitions
  elx: libefc: FC Domain state machine interfaces
  elx: libefc: SLI and FC PORT state machine interfaces
  elx: libefc: Remote node state machine interfaces
  elx: libefc: Fabric node state machine interfaces
  elx: libefc: FC node ELS and state handling
  elx: libefc: Extended link Service IO handling
  elx: libefc: Register discovery objects with hardware
  elx: efct: Data structures and defines for hw operations
  elx: efct: Driver initialization routines
  elx: efct: Hardware queues creation and deletion
  elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  elx: efct: Hardware IO and SGL initialization
  elx: efct: Hardware queues processing
  elx: efct: Unsolicited FC frame processing routines
  elx: efct: SCSI IO handling routines
  elx: efct: LIO backend interface routines
  elx: efct: Hardware IO submission routines
  elx: efct: link and host statistics
  elx: efct: xport and hardware teardown routines
  elx: efct: scsi_transport_fc host interface support
  elx: efct: Add Makefile and Kconfig for efct driver
  elx: efct: Tie into kernel Kconfig and build process

 MAINTAINERS                            |    9 +
 drivers/scsi/Kconfig                   |    2 +
 drivers/scsi/Makefile                  |    1 +
 drivers/scsi/elx/Kconfig               |    9 +
 drivers/scsi/elx/Makefile              |   18 +
 drivers/scsi/elx/efct/efct_driver.c    |  789 ++++
 drivers/scsi/elx/efct/efct_driver.h    |  109 +
 drivers/scsi/elx/efct/efct_hw.c        | 3637 +++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h        |  782 ++++
 drivers/scsi/elx/efct/efct_hw_queues.c |  679 ++++
 drivers/scsi/elx/efct/efct_io.c        |  191 +
 drivers/scsi/elx/efct/efct_io.h        |  174 +
 drivers/scsi/elx/efct/efct_lio.c       | 1689 ++++++++
 drivers/scsi/elx/efct/efct_lio.h       |  189 +
 drivers/scsi/elx/efct/efct_scsi.c      | 1164 ++++++
 drivers/scsi/elx/efct/efct_scsi.h      |  207 +
 drivers/scsi/elx/efct/efct_unsol.c     |  493 +++
 drivers/scsi/elx/efct/efct_unsol.h     |   17 +
 drivers/scsi/elx/efct/efct_xport.c     | 1280 ++++++
 drivers/scsi/elx/efct/efct_xport.h     |  186 +
 drivers/scsi/elx/include/efc_common.h  |   41 +
 drivers/scsi/elx/libefc/efc.h          |   69 +
 drivers/scsi/elx/libefc/efc_cmds.c     |  855 ++++
 drivers/scsi/elx/libefc/efc_cmds.h     |   35 +
 drivers/scsi/elx/libefc/efc_device.c   | 1600 ++++++++
 drivers/scsi/elx/libefc/efc_device.h   |   72 +
 drivers/scsi/elx/libefc/efc_domain.c   | 1093 +++++
 drivers/scsi/elx/libefc/efc_domain.h   |   54 +
 drivers/scsi/elx/libefc/efc_els.c      | 1099 +++++
 drivers/scsi/elx/libefc/efc_els.h      |  107 +
 drivers/scsi/elx/libefc/efc_fabric.c   | 1565 +++++++
 drivers/scsi/elx/libefc/efc_fabric.h   |  116 +
 drivers/scsi/elx/libefc/efc_node.c     | 1102 +++++
 drivers/scsi/elx/libefc/efc_node.h     |  191 +
 drivers/scsi/elx/libefc/efc_nport.c    |  792 ++++
 drivers/scsi/elx/libefc/efc_nport.h    |   50 +
 drivers/scsi/elx/libefc/efc_sm.c       |   54 +
 drivers/scsi/elx/libefc/efc_sm.h       |  197 +
 drivers/scsi/elx/libefc/efclib.c       |   81 +
 drivers/scsi/elx/libefc/efclib.h       |  601 +++
 drivers/scsi/elx/libefc_sli/sli4.c     | 5169 ++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h     | 4115 +++++++++++++++++++
 42 files changed, 30683 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h
 create mode 100644 drivers/scsi/elx/include/efc_common.h
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efc_cmds.c
 create mode 100644 drivers/scsi/elx/libefc/efc_cmds.h
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h
 create mode 100644 drivers/scsi/elx/libefc/efc_els.c
 create mode 100644 drivers/scsi/elx/libefc/efc_els.h
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h
 create mode 100644 drivers/scsi/elx/libefc/efc_nport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_nport.h
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h
 create mode 100644 drivers/scsi/elx/libefc/efclib.c
 create mode 100644 drivers/scsi/elx/libefc/efclib.h
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

-- 
2.26.2



* [PATCH v7 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 02/31] elx: libefc_sli: SLI Descriptors and Queue entries James Smart
                   ` (29 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This is the initial patch for the new Emulex target mode SCSI
driver sources.

This patch:
- Creates the new Emulex source level directory drivers/scsi/elx
  and adds the directory to the MAINTAINERS file.
- Creates the first library subdirectory drivers/scsi/elx/libefc_sli.
  This library is a SLI-4 interface library.
- Starts the population of the libefc_sli library with definitions
  of SLI-4 hardware register offsets and definitions.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 MAINTAINERS                        |   9 +
 drivers/scsi/elx/libefc_sli/sli4.c |  21 ++
 drivers/scsi/elx/libefc_sli/sli4.h | 308 +++++++++++++++++++++++++++++
 3 files changed, 338 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c021a3704857..c0d56b0ed018 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6509,6 +6509,15 @@ S:	Supported
 W:	http://www.broadcom.com
 F:	drivers/scsi/lpfc/
 
+EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
+M:	James Smart <james.smart@broadcom.com>
+M:	Ram Vegesna <ram.vegesna@broadcom.com>
+L:	linux-scsi@vger.kernel.org
+L:	target-devel@vger.kernel.org
+S:	Supported
+W:	http://www.broadcom.com
+F:	drivers/scsi/elx/
+
 ENE CB710 FLASH CARD READER DRIVER
 M:	Michał Mirosław <mirq-linux@rere.qmqm.pl>
 S:	Maintained
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
new file mode 100644
index 000000000000..5396e14e7bf8
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * All common (i.e. transport-independent) SLI-4 functions are implemented
+ * in this file.
+ */
+#include "sli4.h"
+
+static struct sli4_asic_entry_t sli4_asic_table[] = {
+	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
+};
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
new file mode 100644
index 000000000000..f8e4b2355721
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -0,0 +1,308 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/*
+ * All common SLI-4 structures and function prototypes.
+ */
+
+#ifndef _SLI4_H
+#define _SLI4_H
+
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "../include/efc_common.h"
+
+/*************************************************************************
+ * Common SLI-4 register offsets and field definitions
+ */
+
+/* SLI_INTF - SLI Interface Definition Register */
+#define SLI4_INTF_REG			0x0058
+enum sli4_intf {
+	SLI4_INTF_REV_SHIFT		= 4,
+	SLI4_INTF_REV_MASK		= 0xf0,
+
+	SLI4_INTF_REV_S3		= 0x30,
+	SLI4_INTF_REV_S4		= 0x40,
+
+	SLI4_INTF_FAMILY_SHIFT		= 8,
+	SLI4_INTF_FAMILY_MASK		= 0x0f00,
+
+	SLI4_FAMILY_CHECK_ASIC_TYPE	= 0x0f00,
+
+	SLI4_INTF_IF_TYPE_SHIFT		= 12,
+	SLI4_INTF_IF_TYPE_MASK		= 0xf000,
+
+	SLI4_INTF_IF_TYPE_2		= 0x2000,
+	SLI4_INTF_IF_TYPE_6		= 0x6000,
+
+	SLI4_INTF_VALID_SHIFT		= 29,
+	SLI4_INTF_VALID_MASK		= 0xe0000000,
+
+	SLI4_INTF_VALID_VALUE		= 0xc0000000,
+};
+
+/* ASIC_ID - SLI ASIC Type and Revision Register */
+#define SLI4_ASIC_ID_REG	0x009c
+enum sli4_asic {
+	SLI4_ASIC_GEN_SHIFT	= 8,
+	SLI4_ASIC_GEN_MASK	= 0xff00,
+	SLI4_ASIC_GEN_5		= 0x0b00,
+	SLI4_ASIC_GEN_6		= 0x0c00,
+	SLI4_ASIC_GEN_7		= 0x0d00,
+};
+
+enum sli4_acic_revisions {
+	SLI4_ASIC_REV_A0	= 0x00,
+	SLI4_ASIC_REV_A1	= 0x01,
+	SLI4_ASIC_REV_A2	= 0x02,
+	SLI4_ASIC_REV_A3	= 0x03,
+	SLI4_ASIC_REV_B0	= 0x10,
+	SLI4_ASIC_REV_B1	= 0x11,
+	SLI4_ASIC_REV_B2	= 0x12,
+	SLI4_ASIC_REV_C0	= 0x20,
+	SLI4_ASIC_REV_C1	= 0x21,
+	SLI4_ASIC_REV_C2	= 0x22,
+	SLI4_ASIC_REV_D0	= 0x30,
+};
+
+struct sli4_asic_entry_t {
+	u32 rev_id;
+	u32 family;
+};
+
+/* BMBX - Bootstrap Mailbox Register */
+#define SLI4_BMBX_REG		0x0160
+enum sli4_bmbx {
+	SLI4_BMBX_MASK_HI	= 0x3,
+	SLI4_BMBX_MASK_LO	= 0xf,
+	SLI4_BMBX_RDY		= 1 << 0,
+	SLI4_BMBX_HI		= 1 << 1,
+	SLI4_BMBX_SIZE		= 256,
+};
+
+static inline u32
+sli_bmbx_write_hi(u64 addr) {
+	u32 val;
+
+	val = upper_32_bits(addr) & ~SLI4_BMBX_MASK_HI;
+	val |= SLI4_BMBX_HI;
+
+	return val;
+}
+
+static inline u32
+sli_bmbx_write_lo(u64 addr) {
+	u32 val;
+
+	val = (upper_32_bits(addr) & SLI4_BMBX_MASK_HI) << 30;
+	val |= ((addr) & ~SLI4_BMBX_MASK_LO) >> 2;
+
+	return val;
+}
+
+/* SLIPORT_CONTROL - SLI Port Control Register */
+#define SLI4_PORT_CTRL_REG	0x0408
+enum sli4_port_ctrl {
+	SLI4_PORT_CTRL_IP	= 1u << 27,
+	SLI4_PORT_CTRL_IDIS	= 1u << 22,
+	SLI4_PORT_CTRL_FDD	= 1u << 31,
+};
+
+/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
+#define SLI4_PORT_ERROR1	0x040c
+#define SLI4_PORT_ERROR2	0x0410
+
+/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
+#define SLI4_EQCQ_DB_REG	0x120
+enum sli4_eqcq_e {
+	SLI4_EQ_ID_LO_MASK	= 0x01ff,
+
+	SLI4_CQ_ID_LO_MASK	= 0x03ff,
+
+	SLI4_EQCQ_CI_EQ		= 0x0200,
+
+	SLI4_EQCQ_QT_EQ		= 0x00000400,
+	SLI4_EQCQ_QT_CQ		= 0x00000000,
+
+	SLI4_EQCQ_ID_HI_SHIFT	= 11,
+	SLI4_EQCQ_ID_HI_MASK	= 0xf800,
+
+	SLI4_EQCQ_NUM_SHIFT	= 16,
+	SLI4_EQCQ_NUM_MASK	= 0x1fff0000,
+
+	SLI4_EQCQ_ARM		= 0x20000000,
+	SLI4_EQCQ_UNARM		= 0x00000000,
+};
+
+static inline u32
+sli_format_eq_db_data(u16 num_popped, u16 id, u32 arm) {
+	u32 reg;
+
+	reg = (id & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ;
+	reg |= (((id) >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK;
+	reg |= ((num_popped) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK;
+	reg |= arm | SLI4_EQCQ_CI_EQ;
+
+	return reg;
+}
+
+static inline u32
+sli_format_cq_db_data(u16 num_popped, u16 id, u32 arm) {
+	u32 reg;
+
+	reg = ((id) & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ;
+	reg |= (((id) >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK;
+	reg |= ((num_popped) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK;
+	reg |= arm;
+
+	return reg;
+}
+
+/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6*/
+#define SLI4_IF6_EQ_DB_REG	0x120
+enum sli4_eq_e {
+	SLI4_IF6_EQ_ID_MASK	= 0x0fff,
+
+	SLI4_IF6_EQ_NUM_SHIFT	= 16,
+	SLI4_IF6_EQ_NUM_MASK	= 0x1fff0000,
+};
+
+static inline u32
+sli_format_if6_eq_db_data(u16 num_popped, u16 id, u32 arm) {
+	u32 reg;
+
+	reg = id & SLI4_IF6_EQ_ID_MASK;
+	reg |= (num_popped << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK;
+	reg |= arm;
+
+	return reg;
+}
+
+/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6 */
+#define SLI4_IF6_CQ_DB_REG	0xc0
+enum sli4_cq_e {
+	SLI4_IF6_CQ_ID_MASK	= 0xffff,
+
+	SLI4_IF6_CQ_NUM_SHIFT	= 16,
+	SLI4_IF6_CQ_NUM_MASK	= 0x1fff0000,
+};
+
+static inline u32
+sli_format_if6_cq_db_data(u16 num_popped, u16 id, u32 arm) {
+	u32 reg;
+
+	reg = id & SLI4_IF6_CQ_ID_MASK;
+	reg |= ((num_popped) << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK;
+	reg |= arm;
+
+	return reg;
+}
+
+/* MQ_DOORBELL - MQ Doorbell Register */
+#define SLI4_MQ_DB_REG		0x0140
+#define SLI4_IF6_MQ_DB_REG	0x0160
+enum sli4_mq_e {
+	SLI4_MQ_ID_MASK		= 0xffff,
+
+	SLI4_MQ_NUM_SHIFT	= 16,
+	SLI4_MQ_NUM_MASK	= 0x3fff0000,
+};
+
+static inline u32
+sli_format_mq_db_data(u16 id) {
+	u32 reg;
+
+	reg = id & SLI4_MQ_ID_MASK;
+	reg |= (1 << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK;
+
+	return reg;
+}
+
+/* RQ_DOORBELL - RQ Doorbell Register */
+#define SLI4_RQ_DB_REG		0x0a0
+#define SLI4_IF6_RQ_DB_REG	0x0080
+enum sli4_rq_e {
+	SLI4_RQ_DB_ID_MASK	= 0xffff,
+
+	SLI4_RQ_DB_NUM_SHIFT	= 16,
+	SLI4_RQ_DB_NUM_MASK	= 0x3fff0000,
+};
+
+static inline u32
+sli_format_rq_db_data(u16 id) {
+	u32 reg;
+
+	reg = id & SLI4_RQ_DB_ID_MASK;
+	reg |= (1 << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK;
+
+	return reg;
+}
+
+/* WQ_DOORBELL - WQ Doorbell Register */
+#define SLI4_IO_WQ_DB_REG	0x040
+#define SLI4_IF6_WQ_DB_REG	0x040
+enum sli4_wq_e {
+	SLI4_WQ_ID_MASK		= 0xffff,
+
+	SLI4_WQ_IDX_SHIFT	= 16,
+	SLI4_WQ_IDX_MASK	= 0xff0000,
+
+	SLI4_WQ_NUM_SHIFT	= 24,
+	SLI4_WQ_NUM_MASK	= 0x0ff00000,
+};
+
+static inline u32
+sli_format_wq_db_data(u16 id) {
+	u32 reg;
+
+	reg = id & SLI4_WQ_ID_MASK;
+	reg |= (1 << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK;
+
+	return reg;
+}
+
+/* SLIPORT_STATUS - SLI Port Status Register */
+#define SLI4_PORT_STATUS_REGOFF	0x0404
+enum sli4_port_status {
+	SLI4_PORT_STATUS_FDP	= 1u << 21,
+	SLI4_PORT_STATUS_RDY	= 1u << 23,
+	SLI4_PORT_STATUS_RN	= 1u << 24,
+	SLI4_PORT_STATUS_DIP	= 1u << 25,
+	SLI4_PORT_STATUS_OTI	= 1u << 29,
+	SLI4_PORT_STATUS_ERR	= 1u << 31,
+};
+
+#define SLI4_PHYDEV_CTRL_REG	0x0414
+#define SLI4_PHYDEV_CTRL_FRST	(1 << 1)
+#define SLI4_PHYDEV_CTRL_DD	(1 << 2)
+
+/* Register name enums */
+enum sli4_regname_en {
+	SLI4_REG_BMBX,
+	SLI4_REG_EQ_DOORBELL,
+	SLI4_REG_CQ_DOORBELL,
+	SLI4_REG_RQ_DOORBELL,
+	SLI4_REG_IO_WQ_DOORBELL,
+	SLI4_REG_MQ_DOORBELL,
+	SLI4_REG_PHYSDEV_CONTROL,
+	SLI4_REG_PORT_CONTROL,
+	SLI4_REG_PORT_ERROR1,
+	SLI4_REG_PORT_ERROR2,
+	SLI4_REG_PORT_SEMAPHORE,
+	SLI4_REG_PORT_STATUS,
+	SLI4_REG_UNKWOWN			/* must be last */
+};
+
+struct sli4_reg {
+	u32	rset;
+	u32	off;
+};
+
+#endif /* !_SLI4_H */
-- 
2.26.2



* [PATCH v7 02/31] elx: libefc_sli: SLI Descriptors and Queue entries
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2021-01-07 22:58 ` [PATCH v7 01/31] elx: libefc_sli: SLI-4 register offsets and field definitions James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
                   ` (28 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc_sli SLI-4 library population.

This patch adds SLI-4 data structures and defines for:
- Buffer Descriptors (BDEs)
- Scatter/Gather List elements (SGEs)
- Queues and their Entry Descriptions for:
   Event Queues (EQs), Completion Queues (CQs),
   Receive Queues (RQs), and the Mailbox Queue (MQ).

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/include/efc_common.h |   25 +
 drivers/scsi/elx/libefc_sli/sli4.h    | 1776 +++++++++++++++++++++++++
 2 files changed, 1801 insertions(+)
 create mode 100644 drivers/scsi/elx/include/efc_common.h

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
new file mode 100644
index 000000000000..e0dbecbc4518
--- /dev/null
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_COMMON_H__
+#define __EFC_COMMON_H__
+
+#include <linux/pci.h>
+
+#define EFC_SUCCESS	0
+#define EFC_FAIL	1
+
+struct efc_dma {
+	void		*virt;
+	void            *alloc;
+	dma_addr_t	phys;
+
+	size_t		size;
+	size_t          len;
+	struct pci_dev	*pdev;
+};
+
+#endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index f8e4b2355721..d194b566b588 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -305,4 +305,1780 @@ struct sli4_reg {
 	u32	off;
 };
 
+struct sli4_dmaaddr {
+	__le32 low;
+	__le32 high;
+};
+
+/*
+ * a 3-word Buffer Descriptor Entry with
+ * address 1st 2 words, length last word
+ */
+struct sli4_bufptr {
+	struct sli4_dmaaddr addr;
+	__le32 length;
+};
+
+/* Buffer Descriptor Entry (BDE) */
+enum sli4_bde_e {
+	SLI4_BDE_LEN_MASK	= 0x00ffffff,
+	SLI4_BDE_TYPE_MASK	= 0xff000000,
+};
+
+struct sli4_bde {
+	__le32		bde_type_buflen;
+	union {
+		struct sli4_dmaaddr data;
+		struct {
+			__le32	offset;
+			__le32	rsvd2;
+		} imm;
+		struct sli4_dmaaddr blp;
+	} u;
+};
+
+/* Buffer Descriptors */
+enum sli4_bde_type {
+	SLI4_BDE_TYPE_SHIFT	= 24,
+	SLI4_BDE_TYPE_64	= 0x00,	/* Generic 64-bit data */
+	SLI4_BDE_TYPE_IMM	= 0x01,	/* Immediate data */
+	SLI4_BDE_TYPE_BLP	= 0x40,	/* Buffer List Pointer */
+};
+
+#define SLI4_BDE_TYPE_VAL(type) \
+	(SLI4_BDE_TYPE_##type << SLI4_BDE_TYPE_SHIFT)
+
+/* Scatter-Gather Entry (SGE) */
+#define SLI4_SGE_MAX_RESERVED		3
+
+enum sli4_sge_type {
+	/* DW2 */
+	SLI4_SGE_DATA_OFFSET_MASK	= 0x07ffffff,
+	/*DW2W1*/
+	SLI4_SGE_TYPE_SHIFT		= 27,
+	SLI4_SGE_TYPE_MASK		= 0x78000000,
+	/*SGE Types*/
+	SLI4_SGE_TYPE_DATA		= 0x00,
+	SLI4_SGE_TYPE_DIF		= 0x04,	/* Data Integrity Field */
+	SLI4_SGE_TYPE_LSP		= 0x05,	/* List Segment Pointer */
+	SLI4_SGE_TYPE_PEDIF		= 0x06,	/* Post Encryption Engine DIF */
+	SLI4_SGE_TYPE_PESEED		= 0x07,	/* Post Encryption DIF Seed */
+	SLI4_SGE_TYPE_DISEED		= 0x08,	/* DIF Seed */
+	SLI4_SGE_TYPE_ENC		= 0x09,	/* Encryption */
+	SLI4_SGE_TYPE_ATM		= 0x0a,	/* DIF Application Tag Mask */
+	SLI4_SGE_TYPE_SKIP		= 0x0c,	/* SKIP */
+
+	SLI4_SGE_LAST			= 1u << 31,
+};
+
+struct sli4_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		buffer_length;
+};
+
+/* T10 DIF Scatter-Gather Entry (SGE) */
+struct sli4_dif_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		rsvd12;
+};
+
+/* Data Integrity Seed (DISEED) SGE */
+enum sli4_diseed_sge_flags {
+	/* DW2W1 */
+	SLI4_DISEED_SGE_HS		= 1 << 2,
+	SLI4_DISEED_SGE_WS		= 1 << 3,
+	SLI4_DISEED_SGE_IC		= 1 << 4,
+	SLI4_DISEED_SGE_ICS		= 1 << 5,
+	SLI4_DISEED_SGE_ATRT		= 1 << 6,
+	SLI4_DISEED_SGE_AT		= 1 << 7,
+	SLI4_DISEED_SGE_FAT		= 1 << 8,
+	SLI4_DISEED_SGE_NA		= 1 << 9,
+	SLI4_DISEED_SGE_HI		= 1 << 10,
+
+	/* DW3W1 */
+	SLI4_DISEED_SGE_BS_MASK		= 0x0007,
+	SLI4_DISEED_SGE_AI		= 1 << 3,
+	SLI4_DISEED_SGE_ME		= 1 << 4,
+	SLI4_DISEED_SGE_RE		= 1 << 5,
+	SLI4_DISEED_SGE_CE		= 1 << 6,
+	SLI4_DISEED_SGE_NR		= 1 << 7,
+
+	SLI4_DISEED_SGE_OP_RX_SHIFT	= 8,
+	SLI4_DISEED_SGE_OP_RX_MASK	= 0x0f00,
+	SLI4_DISEED_SGE_OP_TX_SHIFT	= 12,
+	SLI4_DISEED_SGE_OP_TX_MASK	= 0xf000,
+};
+
+/* Opcode values */
+enum sli4_diseed_sge_opcodes {
+	SLI4_DISEED_SGE_OP_IN_NODIF_OUT_CRC,
+	SLI4_DISEED_SGE_OP_IN_CRC_OUT_NODIF,
+	SLI4_DISEED_SGE_OP_IN_NODIF_OUT_CSUM,
+	SLI4_DISEED_SGE_OP_IN_CSUM_OUT_NODIF,
+	SLI4_DISEED_SGE_OP_IN_CRC_OUT_CRC,
+	SLI4_DISEED_SGE_OP_IN_CSUM_OUT_CSUM,
+	SLI4_DISEED_SGE_OP_IN_CRC_OUT_CSUM,
+	SLI4_DISEED_SGE_OP_IN_CSUM_OUT_CRC,
+	SLI4_DISEED_SGE_OP_IN_RAW_OUT_RAW,
+};
+
+#define SLI4_DISEED_SGE_OP_RX_VALUE(stype) \
+	(SLI4_DISEED_SGE_OP_##stype << SLI4_DISEED_SGE_OP_RX_SHIFT)
+#define SLI4_DISEED_SGE_OP_TX_VALUE(stype) \
+	(SLI4_DISEED_SGE_OP_##stype << SLI4_DISEED_SGE_OP_TX_SHIFT)
+
+struct sli4_diseed_sge {
+	__le32		ref_tag_cmp;
+	__le32		ref_tag_repl;
+	__le16		app_tag_repl;
+	__le16		dw2w1_flags;
+	__le16		app_tag_cmp;
+	__le16		dw3w1_flags;
+};
+
+/* List Segment Pointer Scatter-Gather Entry (SGE) */
+#define SLI4_LSP_SGE_SEGLEN	0x00ffffff
+
+struct sli4_lsp_sge {
+	__le32		buffer_address_high;
+	__le32		buffer_address_low;
+	__le32		dw2_flags;
+	__le32		dw3_seglen;
+};
+
+enum sli4_eqe_e {
+	SLI4_EQE_VALID	= 1,
+	SLI4_EQE_MJCODE	= 0xe,
+	SLI4_EQE_MNCODE	= 0xfff0,
+};
+
+struct sli4_eqe {
+	__le16		dw0w0_flags;
+	__le16		resource_id;
+};
+
+#define SLI4_MAJOR_CODE_STANDARD	0
+#define SLI4_MAJOR_CODE_SENTINEL	1
+
+/* Sentinel EQE indicating the EQ is full */
+#define SLI4_EQE_STATUS_EQ_FULL		2
+
+enum sli4_mcqe_e {
+	SLI4_MCQE_CONSUMED	= 1u << 27,
+	SLI4_MCQE_COMPLETED	= 1u << 28,
+	SLI4_MCQE_AE		= 1u << 30,
+	SLI4_MCQE_VALID		= 1u << 31,
+};
+
+/* Entry was consumed but not completed */
+#define SLI4_MCQE_STATUS_NOT_COMPLETED	-2
+
+struct sli4_mcqe {
+	__le16		completion_status;
+	__le16		extended_status;
+	__le32		mqe_tag_low;
+	__le32		mqe_tag_high;
+	__le32		dw3_flags;
+};
+
+enum sli4_acqe_e {
+	SLI4_ACQE_AE	= 1 << 6, /* async event - this is an ACQE */
+	SLI4_ACQE_VAL	= 1 << 7, /* valid - contents of CQE are valid */
+};
+
+struct sli4_acqe {
+	__le32		event_data[3];
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		ae_val;
+};
+
+enum sli4_acqe_event_code {
+	SLI4_ACQE_EVENT_CODE_LINK_STATE		= 0x01,
+	SLI4_ACQE_EVENT_CODE_FIP		= 0x02,
+	SLI4_ACQE_EVENT_CODE_DCBX		= 0x03,
+	SLI4_ACQE_EVENT_CODE_ISCSI		= 0x04,
+	SLI4_ACQE_EVENT_CODE_GRP_5		= 0x05,
+	SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT	= 0x10,
+	SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT	= 0x11,
+	SLI4_ACQE_EVENT_CODE_VF_EVENT		= 0x12,
+	SLI4_ACQE_EVENT_CODE_MR_EVENT		= 0x13,
+};
+
+enum sli4_qtype {
+	SLI4_QTYPE_EQ,
+	SLI4_QTYPE_CQ,
+	SLI4_QTYPE_MQ,
+	SLI4_QTYPE_WQ,
+	SLI4_QTYPE_RQ,
+	SLI4_QTYPE_MAX,			/* must be last */
+};
+
+#define SLI4_USER_MQ_COUNT	1
+#define SLI4_MAX_CQ_SET_COUNT	16
+#define SLI4_MAX_RQ_SET_COUNT	16
+
+enum sli4_qentry {
+	SLI4_QENTRY_ASYNC,
+	SLI4_QENTRY_MQ,
+	SLI4_QENTRY_RQ,
+	SLI4_QENTRY_WQ,
+	SLI4_QENTRY_WQ_RELEASE,
+	SLI4_QENTRY_OPT_WRITE_CMD,
+	SLI4_QENTRY_OPT_WRITE_DATA,
+	SLI4_QENTRY_XABT,
+	SLI4_QENTRY_MAX			/* must be last */
+};
+
+enum sli4_queue_flags {
+	SLI4_QUEUE_FLAG_MQ	= 1 << 0,	/* CQ has MQ/Async completion */
+	SLI4_QUEUE_FLAG_HDR	= 1 << 1,	/* RQ for packet headers */
+	SLI4_QUEUE_FLAG_RQBATCH	= 1 << 2,	/* RQ index increment by 8 */
+};
+
+/* Generic Command Request header */
+enum sli4_cmd_version {
+	CMD_V0,
+	CMD_V1,
+	CMD_V2,
+};
+
+struct sli4_rqst_hdr {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	__le32		timeout;
+	__le32		request_length;
+	__le32		dw3_version;
+};
+
+/* Generic Command Response header */
+struct sli4_rsp_hdr {
+	u8		opcode;
+	u8		subsystem;
+	__le16		rsvd2;
+	u8		status;
+	u8		additional_status;
+	__le16		rsvd6;
+	__le32		response_length;
+	__le32		actual_response_length;
+};
+
+#define SLI4_QUEUE_RQ_BATCH	8
+
+#define SZ_DMAADDR		sizeof(struct sli4_dmaaddr)
+#define SLI4_RQST_CMDSZ(stype)	sizeof(struct sli4_rqst_##stype)
+
+#define SLI4_RQST_PYLD_LEN(stype) \
+		cpu_to_le32(sizeof(struct sli4_rqst_##stype) - \
+			sizeof(struct sli4_rqst_hdr))
+
+#define SLI4_RQST_PYLD_LEN_VAR(stype, varpyld) \
+		cpu_to_le32((sizeof(struct sli4_rqst_##stype) + \
+			varpyld) - sizeof(struct sli4_rqst_hdr))
+
+#define SLI4_CFG_PYLD_LENGTH(stype) \
+		max(sizeof(struct sli4_rqst_##stype), \
+		sizeof(struct sli4_rsp_##stype))
+
+enum sli4_create_cqv2_e {
+	/* DW5_flags values */
+	SLI4_CREATE_CQV2_CLSWM_MASK	= 0x00003000,
+	SLI4_CREATE_CQV2_NODELAY	= 0x00004000,
+	SLI4_CREATE_CQV2_AUTOVALID	= 0x00008000,
+	SLI4_CREATE_CQV2_CQECNT_MASK	= 0x18000000,
+	SLI4_CREATE_CQV2_VALID		= 0x20000000,
+	SLI4_CREATE_CQV2_EVT		= 0x80000000,
+	/* DW6W1_flags values */
+	SLI4_CREATE_CQV2_ARM		= 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_v2 {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	u8			page_size;
+	u8			rsvd19;
+	__le32			dw5_flags;
+	__le16			eq_id;
+	__le16			dw6w1_arm;
+	__le16			cqe_count;
+	__le16			rsvd30;
+	__le32			rsvd32;
+	struct sli4_dmaaddr	page_phys_addr[];
+};
+
+enum sli4_create_cqset_e {
+	/* DW5_flags values */
+	SLI4_CREATE_CQSETV0_CLSWM_MASK	= 0x00003000,
+	SLI4_CREATE_CQSETV0_NODELAY	= 0x00004000,
+	SLI4_CREATE_CQSETV0_AUTOVALID	= 0x00008000,
+	SLI4_CREATE_CQSETV0_CQECNT_MASK	= 0x18000000,
+	SLI4_CREATE_CQSETV0_VALID	= 0x20000000,
+	SLI4_CREATE_CQSETV0_EVT		= 0x80000000,
+	/* DW6W1_flags values */
+	SLI4_CREATE_CQSETV0_CQE_COUNT	= 0x7fff,
+	SLI4_CREATE_CQSETV0_ARM		= 0x8000,
+};
+
+struct sli4_rqst_cmn_create_cq_set_v0 {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	u8			page_size;
+	u8			rsvd19;
+	__le32			dw5_flags;
+	__le16			num_cq_req;
+	__le16			dw6w1_flags;
+	__le16			eq_id[16];
+	struct sli4_dmaaddr	page_phys_addr[];
+};
+
+/* CQE count */
+enum sli4_cq_cnt {
+	SLI4_CQ_CNT_256,
+	SLI4_CQ_CNT_512,
+	SLI4_CQ_CNT_1024,
+	SLI4_CQ_CNT_LARGE,
+};
+
+#define SLI4_CQ_CNT_SHIFT	27
+#define SLI4_CQ_CNT_VAL(type)	(SLI4_CQ_CNT_##type << SLI4_CQ_CNT_SHIFT)
+
+#define SLI4_CQE_BYTES		(4 * sizeof(u32))
+
+#define SLI4_CREATE_CQV2_MAX_PAGES	8
+
+/* Generic Common Create EQ/CQ/MQ/WQ/RQ Queue completion */
+struct sli4_rsp_cmn_create_queue {
+	struct sli4_rsp_hdr	hdr;
+	__le16	q_id;
+	u8	rsvd18;
+	u8	ulp;
+	__le32	db_offset;
+	__le16	db_rs;
+	__le16	db_fmt;
+};
+
+struct sli4_rsp_cmn_create_queue_set {
+	struct sli4_rsp_hdr	hdr;
+	__le16	q_id;
+	__le16	num_q_allocated;
+};
+
+/* Common Destroy Queue */
+struct sli4_rqst_cmn_destroy_q {
+	struct sli4_rqst_hdr	hdr;
+	__le16	q_id;
+	__le16	rsvd;
+};
+
+struct sli4_rsp_cmn_destroy_q {
+	struct sli4_rsp_hdr	hdr;
+};
+
+/* Modify the delay multiplier for EQs */
+struct sli4_eqdelay_rec {
+	__le32  eq_id;
+	__le32  phase;
+	__le32  delay_multiplier;
+};
+
+struct sli4_rqst_cmn_modify_eq_delay {
+	struct sli4_rqst_hdr	hdr;
+	__le32			num_eq;
+	struct sli4_eqdelay_rec eq_delay_record[8];
+};
+
+struct sli4_rsp_cmn_modify_eq_delay {
+	struct sli4_rsp_hdr	hdr;
+};
+
+enum sli4_create_eq_e {
+	/* DW5 */
+	SLI4_CREATE_EQ_AUTOVALID		= 1u << 28,
+	SLI4_CREATE_EQ_VALID			= 1u << 29,
+	SLI4_CREATE_EQ_EQESZ			= 1u << 31,
+	/* DW6 */
+	SLI4_CREATE_EQ_COUNT			= 7 << 26,
+	SLI4_CREATE_EQ_ARM			= 1u << 31,
+	/* DW7 */
+	SLI4_CREATE_EQ_DELAYMULTI_SHIFT		= 13,
+	SLI4_CREATE_EQ_DELAYMULTI_MASK		= 0x007fe000,
+	SLI4_CREATE_EQ_DELAYMULTI		= 0x00040000,
+};
+
+struct sli4_rqst_cmn_create_eq {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	__le16			rsvd18;
+	__le32			dw5_flags;
+	__le32			dw6_flags;
+	__le32			dw7_delaymulti;
+	__le32			rsvd32;
+	struct sli4_dmaaddr	page_address[8];
+};
+
+struct sli4_rsp_cmn_create_eq {
+	struct sli4_rsp_cmn_create_queue q_rsp;
+};
+
+/* EQ count */
+enum sli4_eq_cnt {
+	SLI4_EQ_CNT_256,
+	SLI4_EQ_CNT_512,
+	SLI4_EQ_CNT_1024,
+	SLI4_EQ_CNT_2048,
+	SLI4_EQ_CNT_4096 = 3,
+};
+
+#define SLI4_EQ_CNT_SHIFT	26
+#define SLI4_EQ_CNT_VAL(type)	(SLI4_EQ_CNT_##type << SLI4_EQ_CNT_SHIFT)
+
+#define SLI4_EQE_SIZE_4		0
+#define SLI4_EQE_SIZE_16	1
+
+/* Create a Mailbox Queue; accommodate v0 and v1 forms. */
+enum sli4_create_mq_flags {
+	/* DW6W1 */
+	SLI4_CREATE_MQEXT_RINGSIZE	= 0xf,
+	SLI4_CREATE_MQEXT_CQID_SHIFT	= 6,
+	SLI4_CREATE_MQEXT_CQIDV0_MASK	= 0xffc0,
+	/* DW7 */
+	SLI4_CREATE_MQEXT_VAL		= 1u << 31,
+	/* DW8 */
+	SLI4_CREATE_MQEXT_ACQV		= 1u << 0,
+	SLI4_CREATE_MQEXT_ASYNC_CQIDV0	= 0x7fe,
+};
+
+struct sli4_rqst_cmn_create_mq_ext {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	__le16			cq_id_v1;
+	__le32			async_event_bitmap;
+	__le16			async_cq_id_v1;
+	__le16			dw6w1_flags;
+	__le32			dw7_val;
+	__le32			dw8_flags;
+	__le32			rsvd36;
+	struct sli4_dmaaddr	page_phys_addr[];
+};
+
+struct sli4_rsp_cmn_create_mq_ext {
+	struct sli4_rsp_cmn_create_queue q_rsp;
+};
+
+enum sli4_mqe_size {
+	SLI4_MQE_SIZE_16 = 0x05,
+	SLI4_MQE_SIZE_32,
+	SLI4_MQE_SIZE_64,
+	SLI4_MQE_SIZE_128,
+};
+
+enum sli4_async_evt {
+	SLI4_ASYNC_EVT_LINK_STATE	= 1 << 1,
+	SLI4_ASYNC_EVT_FIP		= 1 << 2,
+	SLI4_ASYNC_EVT_GRP5		= 1 << 5,
+	SLI4_ASYNC_EVT_FC		= 1 << 16,
+	SLI4_ASYNC_EVT_SLI_PORT		= 1 << 17,
+};
+
+#define	SLI4_ASYNC_EVT_FC_ALL \
+		(SLI4_ASYNC_EVT_LINK_STATE	| \
+		 SLI4_ASYNC_EVT_FIP		| \
+		 SLI4_ASYNC_EVT_GRP5		| \
+		 SLI4_ASYNC_EVT_FC		| \
+		 SLI4_ASYNC_EVT_SLI_PORT)
+
+/* Create a Completion Queue. */
+struct sli4_rqst_cmn_create_cq_v0 {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	__le16			rsvd18;
+	__le32			dw5_flags;
+	__le32			dw6_flags;
+	__le32			rsvd28;
+	__le32			rsvd32;
+	struct sli4_dmaaddr	page_phys_addr[];
+};
+
+enum sli4_create_rq_e {
+	SLI4_RQ_CREATE_DUA		= 0x1,
+	SLI4_RQ_CREATE_BQU		= 0x2,
+
+	SLI4_RQE_SIZE			= 8,
+	SLI4_RQE_SIZE_8			= 0x2,
+	SLI4_RQE_SIZE_16		= 0x3,
+	SLI4_RQE_SIZE_32		= 0x4,
+	SLI4_RQE_SIZE_64		= 0x5,
+	SLI4_RQE_SIZE_128		= 0x6,
+
+	SLI4_RQ_PAGE_SIZE_4096		= 0x1,
+	SLI4_RQ_PAGE_SIZE_8192		= 0x2,
+	SLI4_RQ_PAGE_SIZE_16384		= 0x4,
+	SLI4_RQ_PAGE_SIZE_32768		= 0x8,
+	SLI4_RQ_PAGE_SIZE_65536		= 0x10,
+
+	SLI4_RQ_CREATE_V0_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V0_MIN_BUF_SIZE	= 128,
+	SLI4_RQ_CREATE_V0_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	u8			dua_bqu_byte;
+	u8			ulp;
+	__le16			rsvd16;
+	u8			rqe_count_byte;
+	u8			rsvd19;
+	__le32			rsvd20;
+	__le16			buffer_size;
+	__le16			cq_id;
+	__le32			rsvd28;
+	struct sli4_dmaaddr	page_phys_addr[SLI4_RQ_CREATE_V0_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum sli4_create_rqv1_e {
+	SLI4_RQ_CREATE_V1_DNB		= 0x80,
+	SLI4_RQ_CREATE_V1_MAX_PAGES	= 8,
+	SLI4_RQ_CREATE_V1_MIN_BUF_SIZE	= 64,
+	SLI4_RQ_CREATE_V1_MAX_BUF_SIZE	= 2048,
+};
+
+struct sli4_rqst_rq_create_v1 {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	u8			rsvd14;
+	u8			dim_dfd_dnb;
+	u8			page_size;
+	u8			rqe_size_byte;
+	__le16			rqe_count;
+	__le32			rsvd20;
+	__le16			rsvd24;
+	__le16			cq_id;
+	__le32			buffer_size;
+	struct sli4_dmaaddr	page_phys_addr[SLI4_RQ_CREATE_V1_MAX_PAGES];
+};
+
+struct sli4_rsp_rq_create_v1 {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+#define	SLI4_RQCREATEV2_DNB	0x80
+
+struct sli4_rqst_rq_create_v2 {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	u8			rq_count;
+	u8			dim_dfd_dnb;
+	u8			page_size;
+	u8			rqe_size_byte;
+	__le16			rqe_count;
+	__le16			hdr_buffer_size;
+	__le16			payload_buffer_size;
+	__le16			base_cq_id;
+	__le16			rsvd26;
+	__le32			rsvd42;
+	struct sli4_dmaaddr	page_phys_addr[];
+};
+
+struct sli4_rsp_rq_create_v2 {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+#define SLI4_CQE_CODE_OFFSET	14
+
+enum sli4_cqe_code {
+	SLI4_CQE_CODE_WORK_REQUEST_COMPLETION = 0x01,
+	SLI4_CQE_CODE_RELEASE_WQE,
+	SLI4_CQE_CODE_RSVD,
+	SLI4_CQE_CODE_RQ_ASYNC,
+	SLI4_CQE_CODE_XRI_ABORTED,
+	SLI4_CQE_CODE_RQ_COALESCING,
+	SLI4_CQE_CODE_RQ_CONSUMPTION,
+	SLI4_CQE_CODE_MEASUREMENT_REPORTING,
+	SLI4_CQE_CODE_RQ_ASYNC_V1,
+	SLI4_CQE_CODE_RQ_COALESCING_V1,
+	SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD,
+	SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA,
+};
+
+#define SLI4_WQ_CREATE_MAX_PAGES		8
+
+struct sli4_rqst_wq_create {
+	struct sli4_rqst_hdr	hdr;
+	__le16			num_pages;
+	__le16			cq_id;
+	u8			page_size;
+	u8			wqe_size_byte;
+	__le16			wqe_count;
+	__le32			rsvd;
+	struct	sli4_dmaaddr	page_phys_addr[SLI4_WQ_CREATE_MAX_PAGES];
+};
+
+struct sli4_rsp_wq_create {
+	struct sli4_rsp_cmn_create_queue rsp;
+};
+
+enum sli4_link_attention_flags {
+	SLI4_LNK_ATTN_TYPE_LINK_UP		= 0x01,
+	SLI4_LNK_ATTN_TYPE_LINK_DOWN		= 0x02,
+	SLI4_LNK_ATTN_TYPE_NO_HARD_ALPA		= 0x03,
+
+	SLI4_LNK_ATTN_P2P			= 0x01,
+	SLI4_LNK_ATTN_FC_AL			= 0x02,
+	SLI4_LNK_ATTN_INTERNAL_LOOPBACK		= 0x03,
+	SLI4_LNK_ATTN_SERDES_LOOPBACK		= 0x04,
+};
+
+struct sli4_link_attention {
+	u8		link_number;
+	u8		attn_type;
+	u8		topology;
+	u8		port_speed;
+	u8		port_fault;
+	u8		shared_link_status;
+	__le16		logical_link_speed;
+	__le32		event_tag;
+	u8		rsvd12;
+	u8		event_code;
+	u8		event_type;
+	u8		flags;
+};
+
+enum sli4_link_event_type {
+	SLI4_EVENT_LINK_ATTENTION		= 0x01,
+	SLI4_EVENT_SHARED_LINK_ATTENTION	= 0x02,
+};
+
+enum sli4_wcqe_flags {
+	SLI4_WCQE_XB = 0x10,
+	SLI4_WCQE_QX = 0x80,
+};
+
+struct sli4_fc_wcqe {
+	u8		hw_status;
+	u8		status;
+	__le16		request_tag;
+	__le32		wqe_specific_1;
+	__le32		wqe_specific_2;
+	u8		rsvd12;
+	u8		qx_byte;
+	u8		code;
+	u8		flags;
+};
+
+/* FC WQ consumed CQ queue entry */
+struct sli4_fc_wqec {
+	__le32		rsvd0;
+	__le32		rsvd1;
+	__le16		wqe_index;
+	__le16		wq_id;
+	__le16		rsvd12;
+	u8		code;
+	u8		vld_byte;
+};
+
+/* FC Completion Status Codes. */
+enum sli4_wcqe_status {
+	SLI4_FC_WCQE_STATUS_SUCCESS,
+	SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,
+	SLI4_FC_WCQE_STATUS_REMOTE_STOP,
+	SLI4_FC_WCQE_STATUS_LOCAL_REJECT,
+	SLI4_FC_WCQE_STATUS_NPORT_RJT,
+	SLI4_FC_WCQE_STATUS_FABRIC_RJT,
+	SLI4_FC_WCQE_STATUS_NPORT_BSY,
+	SLI4_FC_WCQE_STATUS_FABRIC_BSY,
+	SLI4_FC_WCQE_STATUS_RSVD,
+	SLI4_FC_WCQE_STATUS_LS_RJT,
+	SLI4_FC_WCQE_STATUS_RX_BUF_OVERRUN,
+	SLI4_FC_WCQE_STATUS_CMD_REJECT,
+	SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,
+	SLI4_FC_WCQE_STATUS_RSVD1,
+	SLI4_FC_WCQE_STATUS_ELS_CMPLT_NO_AUTOREG,
+	SLI4_FC_WCQE_STATUS_RSVD2,
+	SLI4_FC_WCQE_STATUS_RQ_SUCCESS,
+	SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC,
+	SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,
+	SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,
+	SLI4_FC_WCQE_STATUS_DI_ERROR,
+	SLI4_FC_WCQE_STATUS_BA_RJT,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
+	SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC,
+	SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,
+	SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,
+
+	/* driver generated status codes */
+	SLI4_FC_WCQE_STATUS_DISPATCH_ERROR	= 0xfd,
+	SLI4_FC_WCQE_STATUS_SHUTDOWN		= 0xfe,
+	SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT	= 0xff,
+};
+
+/* DI_ERROR Extended Status */
+enum sli4_fc_di_error_status {
+	SLI4_FC_DI_ERROR_GE			= 1 << 0,
+	SLI4_FC_DI_ERROR_AE			= 1 << 1,
+	SLI4_FC_DI_ERROR_RE			= 1 << 2,
+	SLI4_FC_DI_ERROR_TDPV			= 1 << 3,
+	SLI4_FC_DI_ERROR_UDB			= 1 << 4,
+	SLI4_FC_DI_ERROR_EDIR			= 1 << 5,
+};
+
+/* WQE DIF field contents */
+enum sli4_dif_fields {
+	SLI4_DIF_DISABLED,
+	SLI4_DIF_PASS_THROUGH,
+	SLI4_DIF_STRIP,
+	SLI4_DIF_INSERT,
+};
+
+/* Work Queue Entry (WQE) types */
+enum sli4_wqe_types {
+	SLI4_WQE_ABORT				= 0x0f,
+	SLI4_WQE_ELS_REQUEST64			= 0x8a,
+	SLI4_WQE_FCP_IBIDIR64			= 0xac,
+	SLI4_WQE_FCP_IREAD64			= 0x9a,
+	SLI4_WQE_FCP_IWRITE64			= 0x98,
+	SLI4_WQE_FCP_ICMND64			= 0x9c,
+	SLI4_WQE_FCP_TRECEIVE64			= 0xa1,
+	SLI4_WQE_FCP_CONT_TRECEIVE64		= 0xe5,
+	SLI4_WQE_FCP_TRSP64			= 0xa3,
+	SLI4_WQE_FCP_TSEND64			= 0x9f,
+	SLI4_WQE_GEN_REQUEST64			= 0xc2,
+	SLI4_WQE_SEND_FRAME			= 0xe1,
+	SLI4_WQE_XMIT_BCAST64			= 0x84,
+	SLI4_WQE_XMIT_BLS_RSP			= 0x97,
+	SLI4_WQE_ELS_RSP64			= 0x95,
+	SLI4_WQE_XMIT_SEQUENCE64		= 0x82,
+	SLI4_WQE_REQUEUE_XRI			= 0x93,
+};
+
+/* WQE command types */
+enum sli4_wqe_cmds {
+	SLI4_CMD_FCP_IREAD64_WQE		= 0x00,
+	SLI4_CMD_FCP_ICMND64_WQE		= 0x00,
+	SLI4_CMD_FCP_IWRITE64_WQE		= 0x01,
+	SLI4_CMD_FCP_TRECEIVE64_WQE		= 0x02,
+	SLI4_CMD_FCP_TRSP64_WQE			= 0x03,
+	SLI4_CMD_FCP_TSEND64_WQE		= 0x07,
+	SLI4_CMD_GEN_REQUEST64_WQE		= 0x08,
+	SLI4_CMD_XMIT_BCAST64_WQE		= 0x08,
+	SLI4_CMD_XMIT_BLS_RSP64_WQE		= 0x08,
+	SLI4_CMD_ABORT_WQE			= 0x08,
+	SLI4_CMD_XMIT_SEQUENCE64_WQE		= 0x08,
+	SLI4_CMD_REQUEUE_XRI_WQE		= 0x0a,
+	SLI4_CMD_SEND_FRAME_WQE			= 0x0a,
+};
+
+#define SLI4_WQE_SIZE		0x05
+#define SLI4_WQE_EXT_SIZE	0x06
+
+#define SLI4_WQE_BYTES		(16 * sizeof(u32))
+#define SLI4_WQE_EXT_BYTES	(32 * sizeof(u32))
+
+/* Mask for ccp (CS_CTL) */
+#define SLI4_MASK_CCP		0xfe
+
+/* Generic WQE */
+enum sli4_gen_wqe_flags {
+	SLI4_GEN_WQE_EBDECNT	= 0xf,
+	SLI4_GEN_WQE_LEN_LOC	= 0x3 << 7,
+	SLI4_GEN_WQE_QOSD	= 1 << 9,
+	SLI4_GEN_WQE_XBL	= 1 << 11,
+	SLI4_GEN_WQE_HLM	= 1 << 12,
+	SLI4_GEN_WQE_IOD	= 1 << 13,
+	SLI4_GEN_WQE_DBDE	= 1 << 14,
+	SLI4_GEN_WQE_WQES	= 1 << 15,
+
+	SLI4_GEN_WQE_PRI	= 0x7,
+	SLI4_GEN_WQE_PV		= 1 << 3,
+	SLI4_GEN_WQE_EAT	= 1 << 4,
+	SLI4_GEN_WQE_XC		= 1 << 5,
+	SLI4_GEN_WQE_CCPE	= 1 << 7,
+
+	SLI4_GEN_WQE_CMDTYPE	= 0xf,
+	SLI4_GEN_WQE_WQEC	= 1 << 7,
+};
+
+struct sli4_generic_wqe {
+	__le32		cmd_spec0_5[6];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+/* WQE used to abort exchanges. */
+enum sli4_abort_wqe_flags {
+	SLI4_ABRT_WQE_IR	= 0x02,
+
+	SLI4_ABRT_WQE_EBDECNT	= 0xf,
+	SLI4_ABRT_WQE_LEN_LOC	= 0x3 << 7,
+	SLI4_ABRT_WQE_QOSD	= 1 << 9,
+	SLI4_ABRT_WQE_XBL	= 1 << 11,
+	SLI4_ABRT_WQE_IOD	= 1 << 13,
+	SLI4_ABRT_WQE_DBDE	= 1 << 14,
+	SLI4_ABRT_WQE_WQES	= 1 << 15,
+
+	SLI4_ABRT_WQE_PRI	= 0x7,
+	SLI4_ABRT_WQE_PV	= 1 << 3,
+	SLI4_ABRT_WQE_EAT	= 1 << 4,
+	SLI4_ABRT_WQE_XC	= 1 << 5,
+	SLI4_ABRT_WQE_CCPE	= 1 << 7,
+
+	SLI4_ABRT_WQE_CMDTYPE	= 0xf,
+	SLI4_ABRT_WQE_WQEC	= 1 << 7,
+};
+
+struct sli4_abort_wqe {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		ext_t_tag;
+	u8		ia_ir_byte;
+	u8		criteria;
+	__le16		rsvd10;
+	__le32		ext_t_mask;
+	__le32		t_mask;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		t_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		dw10w0_flags;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_wqec_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+};
+
+enum sli4_abort_criteria {
+	SLI4_ABORT_CRITERIA_XRI_TAG = 0x01,
+	SLI4_ABORT_CRITERIA_ABORT_TAG,
+	SLI4_ABORT_CRITERIA_REQUEST_TAG,
+	SLI4_ABORT_CRITERIA_EXT_ABORT_TAG,
+};
+
+enum sli4_abort_type {
+	SLI4_ABORT_XRI,
+	SLI4_ABORT_ABORT_ID,
+	SLI4_ABORT_REQUEST_ID,
+	SLI4_ABORT_MAX,		/* must be last */
+};
+
+/* WQE used to create an ELS request. */
+enum sli4_els_req_wqe_flags {
+	SLI4_REQ_WQE_QOSD		= 0x2,
+	SLI4_REQ_WQE_DBDE		= 0x40,
+	SLI4_REQ_WQE_XBL		= 0x8,
+	SLI4_REQ_WQE_XC			= 0x20,
+	SLI4_REQ_WQE_IOD		= 0x20,
+	SLI4_REQ_WQE_HLM		= 0x10,
+	SLI4_REQ_WQE_CCPE		= 0x80,
+	SLI4_REQ_WQE_EAT		= 0x10,
+	SLI4_REQ_WQE_WQES		= 0x80,
+	SLI4_REQ_WQE_PU_SHFT		= 4,
+	SLI4_REQ_WQE_CT_SHFT		= 2,
+	SLI4_REQ_WQE_CT			= 0xc,
+	SLI4_REQ_WQE_ELSID_SHFT		= 4,
+	SLI4_REQ_WQE_SP_SHFT		= 24,
+	SLI4_REQ_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_REQ_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_els_request64_wqe {
+	struct sli4_bde	els_request_payload;
+	__le32		els_request_payload_length;
+	__le32		sid_sp_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		temporary_rpi;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmdtype_elsid_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	struct sli4_bde	els_response_payload_bde;
+	__le32		max_response_payload_length;
+};
+
+/* WQE used to create an FCP initiator no data command. */
+enum sli4_icmd_wqe_flags {
+	SLI4_ICMD_WQE_DBDE		= 0x40,
+	SLI4_ICMD_WQE_XBL		= 0x8,
+	SLI4_ICMD_WQE_XC		= 0x20,
+	SLI4_ICMD_WQE_IOD		= 0x20,
+	SLI4_ICMD_WQE_HLM		= 0x10,
+	SLI4_ICMD_WQE_CCPE		= 0x80,
+	SLI4_ICMD_WQE_EAT		= 0x10,
+	SLI4_ICMD_WQE_APPID		= 0x10,
+	SLI4_ICMD_WQE_WQES		= 0x80,
+	SLI4_ICMD_WQE_PU_SHFT		= 4,
+	SLI4_ICMD_WQE_CT_SHFT		= 2,
+	SLI4_ICMD_WQE_BS_SHFT		= 4,
+	SLI4_ICMD_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_ICMD_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_icmnd64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le32		rsvd12;
+	__le32		remote_n_port_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create an FCP initiator read. */
+enum sli4_ir_wqe_flags {
+	SLI4_IR_WQE_DBDE		= 0x40,
+	SLI4_IR_WQE_XBL			= 0x8,
+	SLI4_IR_WQE_XC			= 0x20,
+	SLI4_IR_WQE_IOD			= 0x20,
+	SLI4_IR_WQE_HLM			= 0x10,
+	SLI4_IR_WQE_CCPE		= 0x80,
+	SLI4_IR_WQE_EAT			= 0x10,
+	SLI4_IR_WQE_APPID		= 0x10,
+	SLI4_IR_WQE_WQES		= 0x80,
+	SLI4_IR_WQE_PU_SHFT		= 4,
+	SLI4_IR_WQE_CT_SHFT		= 2,
+	SLI4_IR_WQE_BS_SHFT		= 4,
+	SLI4_IR_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_IR_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_iread64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+
+	__le32		total_transfer_length;
+
+	__le32		remote_n_port_id_dword;
+
+	__le16		xri_tag;
+	__le16		context_tag;
+
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+
+	__le32		abort_tag;
+
+	__le16		request_tag;
+	__le16		rsvd34;
+
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+
+	__le32		rsvd44;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create an FCP initiator write. */
+enum sli4_iwr_wqe_flags {
+	SLI4_IWR_WQE_DBDE		= 0x40,
+	SLI4_IWR_WQE_XBL		= 0x8,
+	SLI4_IWR_WQE_XC			= 0x20,
+	SLI4_IWR_WQE_IOD		= 0x20,
+	SLI4_IWR_WQE_HLM		= 0x10,
+	SLI4_IWR_WQE_DNRX		= 0x10,
+	SLI4_IWR_WQE_CCPE		= 0x80,
+	SLI4_IWR_WQE_EAT		= 0x10,
+	SLI4_IWR_WQE_APPID		= 0x10,
+	SLI4_IWR_WQE_WQES		= 0x80,
+	SLI4_IWR_WQE_PU_SHFT		= 4,
+	SLI4_IWR_WQE_CT_SHFT		= 2,
+	SLI4_IWR_WQE_BS_SHFT		= 4,
+	SLI4_IWR_WQE_LEN_LOC_BIT1	= 0x80,
+	SLI4_IWR_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_iwrite64_wqe {
+	struct sli4_bde	bde;
+	__le16		payload_offset_length;
+	__le16		fcp_cmd_buffer_length;
+	__le16		total_transfer_length;
+	__le16		initial_transfer_length;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		len_loc1_byte;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	struct sli4_bde	first_data_bde;
+};
+
+struct sli4_fcp_128byte_wqe {
+	u32 dw[32];
+};
+
+/* WQE used to create an FCP target receive */
+enum sli4_trcv_wqe_flags {
+	SLI4_TRCV_WQE_DBDE		= 0x40,
+	SLI4_TRCV_WQE_XBL		= 0x8,
+	SLI4_TRCV_WQE_AR		= 0x8,
+	SLI4_TRCV_WQE_XC		= 0x20,
+	SLI4_TRCV_WQE_IOD		= 0x20,
+	SLI4_TRCV_WQE_HLM		= 0x10,
+	SLI4_TRCV_WQE_DNRX		= 0x10,
+	SLI4_TRCV_WQE_CCPE		= 0x80,
+	SLI4_TRCV_WQE_EAT		= 0x10,
+	SLI4_TRCV_WQE_APPID		= 0x10,
+	SLI4_TRCV_WQE_WQES		= 0x80,
+	SLI4_TRCV_WQE_PU_SHFT		= 4,
+	SLI4_TRCV_WQE_CT_SHFT		= 2,
+	SLI4_TRCV_WQE_BS_SHFT		= 4,
+	SLI4_TRCV_WQE_LEN_LOC_BIT2	= 0x1,
+};
+
+struct sli4_fcp_treceive64_wqe {
+	struct sli4_bde	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	union {
+		__le16	sec_xri_tag;
+		__le16	rsvd;
+		__le32	dword;
+	} dword5;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dif_ct_bs_byte;
+	u8		command;
+	u8		class_ar_pu_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_iod_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fcp_data_receive_length;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create an FCP target response */
+enum sli4_trsp_wqe_flags {
+	SLI4_TRSP_WQE_AG	= 0x8,
+	SLI4_TRSP_WQE_DBDE	= 0x40,
+	SLI4_TRSP_WQE_XBL	= 0x8,
+	SLI4_TRSP_WQE_XC	= 0x20,
+	SLI4_TRSP_WQE_HLM	= 0x10,
+	SLI4_TRSP_WQE_DNRX	= 0x10,
+	SLI4_TRSP_WQE_CCPE	= 0x80,
+	SLI4_TRSP_WQE_EAT	= 0x10,
+	SLI4_TRSP_WQE_APPID	= 0x10,
+	SLI4_TRSP_WQE_WQES	= 0x80,
+};
+
+struct sli4_fcp_trsp64_wqe {
+	struct sli4_bde	bde;
+	__le32		fcp_response_length;
+	__le32		rsvd12;
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_dnrx_byte;
+	u8		command;
+	u8		class_ag_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		lloc1_appid;
+	u8		qosd_xbl_hlm_dbde_wqes;
+	u8		eat_xc_ccpe;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create an FCP target send (DATA IN). */
+enum sli4_tsend_wqe_flags {
+	SLI4_TSEND_WQE_XBL	= 0x8,
+	SLI4_TSEND_WQE_DBDE	= 0x40,
+	SLI4_TSEND_WQE_IOD	= 0x20,
+	SLI4_TSEND_WQE_QOSD	= 0x2,
+	SLI4_TSEND_WQE_HLM	= 0x10,
+	SLI4_TSEND_WQE_PU_SHFT	= 4,
+	SLI4_TSEND_WQE_AR	= 0x8,
+	SLI4_TSEND_CT_SHFT	= 2,
+	SLI4_TSEND_BS_SHFT	= 4,
+	SLI4_TSEND_LEN_LOC_BIT2 = 0x1,
+	SLI4_TSEND_CCPE		= 0x80,
+	SLI4_TSEND_APPID_VALID	= 0x20,
+	SLI4_TSEND_WQES		= 0x80,
+	SLI4_TSEND_XC		= 0x20,
+	SLI4_TSEND_EAT		= 0x10,
+};
+
+struct sli4_fcp_tsend64_wqe {
+	struct sli4_bde	bde;
+	__le32		payload_offset_length;
+	__le32		relative_offset;
+	__le32		dword5;
+	__le16		xri_tag;
+	__le16		rpi;
+	u8		ct_byte;
+	u8		command;
+	u8		class_pu_ar_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	u8		dw10byte0;
+	u8		ll_qd_xbl_hlm_iod_dbde;
+	u8		dw10byte2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		fcp_data_transmit_length;
+	struct sli4_bde	first_data_bde;
+};
+
+/* WQE used to create a general request. */
+enum sli4_gen_req_wqe_flags {
+	SLI4_GEN_REQ64_WQE_XBL	= 0x8,
+	SLI4_GEN_REQ64_WQE_DBDE	= 0x40,
+	SLI4_GEN_REQ64_WQE_IOD	= 0x20,
+	SLI4_GEN_REQ64_WQE_QOSD	= 0x2,
+	SLI4_GEN_REQ64_WQE_HLM	= 0x10,
+	SLI4_GEN_REQ64_CT_SHFT	= 2,
+};
+
+struct sli4_gen_request64_wqe {
+	struct sli4_bde	bde;
+	__le32		request_payload_length;
+	__le32		relative_offset;
+	u8		rsvd17;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd34;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		remote_n_port_id_dword;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		max_response_payload_length;
+};
+
+/* WQE used to create a send frame request */
+enum sli4_sf_wqe_flags {
+	SLI4_SF_WQE_DBDE	= 0x40,
+	SLI4_SF_PU		= 0x30,
+	SLI4_SF_CT		= 0xc,
+	SLI4_SF_QOSD		= 0x2,
+	SLI4_SF_LEN_LOC_BIT1	= 0x80,
+	SLI4_SF_LEN_LOC_BIT2	= 0x1,
+	SLI4_SF_XC		= 0x20,
+	SLI4_SF_XBL		= 0x8,
+};
+
+struct sli4_send_frame_wqe {
+	struct sli4_bde	bde;
+	__le32		frame_length;
+	__le32		fc_header_0_1[2];
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		dw7flags0;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	u8		eof;
+	u8		sof;
+	u8		dw10flags0;
+	u8		dw10flags1;
+	u8		dw10flags2;
+	u8		ccp;
+	u8		cmd_type_byte;
+	u8		rsvd41;
+	__le16		cq_id;
+	__le32		fc_header_2_5[4];
+};
+
+/* WQE used to create a transmit sequence */
+enum sli4_seq_wqe_flags {
+	SLI4_SEQ_WQE_DBDE		= 0x4000,
+	SLI4_SEQ_WQE_XBL		= 0x800,
+	SLI4_SEQ_WQE_SI			= 0x4,
+	SLI4_SEQ_WQE_FT			= 0x8,
+	SLI4_SEQ_WQE_XO			= 0x40,
+	SLI4_SEQ_WQE_LS			= 0x80,
+	SLI4_SEQ_WQE_DIF		= 0x3,
+	SLI4_SEQ_WQE_BS			= 0x70,
+	SLI4_SEQ_WQE_PU			= 0x30,
+	SLI4_SEQ_WQE_HLM		= 0x1000,
+	SLI4_SEQ_WQE_IOD_SHIFT		= 13,
+	SLI4_SEQ_WQE_CT_SHIFT		= 2,
+	SLI4_SEQ_WQE_LEN_LOC_SHIFT	= 7,
+};
+
+struct sli4_xmit_sequence64_wqe {
+	struct sli4_bde	bde;
+	__le32		remote_n_port_id_dword;
+	__le32		relative_offset;
+	u8		dw5flags0;
+	u8		df_ctl;
+	u8		type;
+	u8		r_ctl;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw7flags0;
+	u8		command;
+	u8		dw7flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		remote_xid;
+	__le16		dw10w0;
+	u8		dw10flags0;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le32		sequence_payload_len;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/*
+ * WQE used to unblock the specified XRI and release
+ * it to the SLI Port's free pool.
+ */
+enum sli4_requeue_wqe_flags {
+	SLI4_REQU_XRI_WQE_XC	= 0x20,
+	SLI4_REQU_XRI_WQE_QOSD	= 0x2,
+};
+
+struct sli4_requeue_xri_wqe {
+	__le32		rsvd0;
+	__le32		rsvd4;
+	__le32		rsvd8;
+	__le32		rsvd12;
+	__le32		rsvd16;
+	__le32		rsvd20;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		rsvd32;
+	__le16		request_tag;
+	__le16		rsvd34;
+	__le16		flags0;
+	__le16		flags1;
+	__le16		flags2;
+	u8		ccp;
+	u8		cmd_type_wqec_byte;
+	u8		rsvd42;
+	__le16		cq_id;
+	__le32		rsvd44;
+	__le32		rsvd48;
+	__le32		rsvd52;
+	__le32		rsvd56;
+};
+
+/* WQE used to create a BLS response */
+enum sli4_bls_rsp_wqe_flags {
+	SLI4_BLS_RSP_RID		= 0xffffff,
+	SLI4_BLS_RSP_WQE_AR		= 0x40000000,
+	SLI4_BLS_RSP_WQE_CT_SHFT	= 2,
+	SLI4_BLS_RSP_WQE_QOSD		= 0x2,
+	SLI4_BLS_RSP_WQE_HLM		= 0x10,
+};
+
+struct sli4_xmit_bls_rsp_wqe {
+	__le32		payload_word0;
+	__le16		rx_id;
+	__le16		ox_id;
+	__le16		high_seq_cnt;
+	__le16		low_seq_cnt;
+	__le32		rsvd12;
+	__le32		local_n_port_id_dword;
+	__le32		remote_id_dword;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		dw8flags0;
+	u8		command;
+	u8		dw8flags1;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		rsvd38;
+	u8		dw11flags0;
+	u8		dw11flags1;
+	u8		dw11flags2;
+	u8		ccp;
+	u8		dw12flags0;
+	u8		rsvd45;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	u8		rsvd50;
+	u8		rsvd51;
+	__le32		rsvd52;
+	__le32		rsvd56;
+	__le32		rsvd60;
+};
+
+enum sli_bls_type {
+	SLI4_SLI_BLS_ACC,
+	SLI4_SLI_BLS_RJT,
+	SLI4_SLI_BLS_MAX
+};
+
+struct sli_bls_payload {
+	enum sli_bls_type	type;
+	__le16			ox_id;
+	__le16			rx_id;
+	union {
+		struct {
+			u8	seq_id_validity;
+			u8	seq_id_last;
+			u8	rsvd2;
+			u8	rsvd3;
+			__le16	ox_id;
+			__le16	rx_id;
+			__le16	low_seq_cnt;
+			__le16	high_seq_cnt;
+		} acc;
+		struct {
+			u8	vendor_unique;
+			u8	reason_explanation;
+			u8	reason_code;
+			u8	rsvd3;
+		} rjt;
+	} u;
+};
+
+/* WQE used to create an ELS response */
+
+enum sli4_els_rsp_flags {
+	SLI4_ELS_SID		= 0xffffff,
+	SLI4_ELS_RID		= 0xffffff,
+	SLI4_ELS_DBDE		= 0x40,
+	SLI4_ELS_XBL		= 0x8,
+	SLI4_ELS_IOD		= 0x20,
+	SLI4_ELS_QOSD		= 0x2,
+	SLI4_ELS_XC		= 0x20,
+	SLI4_ELS_CT_OFFSET	= 0x2,
+	SLI4_ELS_SP		= 0x1000000,
+	SLI4_ELS_HLM		= 0x10,
+};
+
+struct sli4_xmit_els_rsp64_wqe {
+	struct sli4_bde	els_response_payload;
+	__le32		els_response_payload_length;
+	__le32		sid_dw;
+	__le32		rid_dw;
+	__le16		xri_tag;
+	__le16		context_tag;
+	u8		ct_byte;
+	u8		command;
+	u8		class_byte;
+	u8		timer;
+	__le32		abort_tag;
+	__le16		request_tag;
+	__le16		ox_id;
+	u8		flags1;
+	u8		flags2;
+	u8		flags3;
+	u8		flags4;
+	u8		cmd_type_wqec;
+	u8		rsvd34;
+	__le16		cq_id;
+	__le16		temporary_rpi;
+	__le16		rsvd38;
+	u32		rsvd40;
+	u32		rsvd44;
+	u32		rsvd48;
+};
+
+/* Local Reject Reason Codes */
+enum sli4_fc_local_rej_codes {
+	SLI4_FC_LOCAL_REJECT_UNKNOWN,
+	SLI4_FC_LOCAL_REJECT_MISSING_CONTINUE,
+	SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT,
+	SLI4_FC_LOCAL_REJECT_INTERNAL_ERROR,
+	SLI4_FC_LOCAL_REJECT_INVALID_RPI,
+	SLI4_FC_LOCAL_REJECT_NO_XRI,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_COMMAND,
+	SLI4_FC_LOCAL_REJECT_XCHG_DROPPED,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_FIELD,
+	SLI4_FC_LOCAL_REJECT_RPI_SUSPENDED,
+	SLI4_FC_LOCAL_REJECT_RSVD,
+	SLI4_FC_LOCAL_REJECT_RSVD1,
+	SLI4_FC_LOCAL_REJECT_NO_ABORT_MATCH,
+	SLI4_FC_LOCAL_REJECT_TX_DMA_FAILED,
+	SLI4_FC_LOCAL_REJECT_RX_DMA_FAILED,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_FRAME,
+	SLI4_FC_LOCAL_REJECT_RSVD2,
+	SLI4_FC_LOCAL_REJECT_NO_RESOURCES,	/* 0x11 */
+	SLI4_FC_LOCAL_REJECT_FCP_CONF_FAILURE,
+	SLI4_FC_LOCAL_REJECT_ILLEGAL_LENGTH,
+	SLI4_FC_LOCAL_REJECT_UNSUPPORTED_FEATURE,
+	SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS,
+	SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED,
+	SLI4_FC_LOCAL_REJECT_RCV_BUFFER_TIMEOUT,
+	SLI4_FC_LOCAL_REJECT_LOOP_OPEN_FAILURE,
+	SLI4_FC_LOCAL_REJECT_RSVD3,
+	SLI4_FC_LOCAL_REJECT_LINK_DOWN,
+	SLI4_FC_LOCAL_REJECT_CORRUPTED_DATA,
+	SLI4_FC_LOCAL_REJECT_CORRUPTED_RPI,
+	SLI4_FC_LOCAL_REJECT_OUTOFORDER_DATA,
+	SLI4_FC_LOCAL_REJECT_OUTOFORDER_ACK,
+	SLI4_FC_LOCAL_REJECT_DUP_FRAME,
+	SLI4_FC_LOCAL_REJECT_LINK_CONTROL_FRAME,	/* 0x20 */
+	SLI4_FC_LOCAL_REJECT_BAD_HOST_ADDRESS,
+	SLI4_FC_LOCAL_REJECT_RSVD4,
+	SLI4_FC_LOCAL_REJECT_MISSING_HDR_BUFFER,
+	SLI4_FC_LOCAL_REJECT_MSEQ_CHAIN_CORRUPTED,
+	SLI4_FC_LOCAL_REJECT_ABORTMULT_REQUESTED,
+	SLI4_FC_LOCAL_REJECT_BUFFER_SHORTAGE	= 0x28,
+	SLI4_FC_LOCAL_REJECT_RCV_XRIBUF_WAITING,
+	SLI4_FC_LOCAL_REJECT_INVALID_VPI	= 0x2e,
+	SLI4_FC_LOCAL_REJECT_NO_FPORT_DETECTED,
+	SLI4_FC_LOCAL_REJECT_MISSING_XRIBUF,
+	SLI4_FC_LOCAL_REJECT_RSVD5,
+	SLI4_FC_LOCAL_REJECT_INVALID_XRI,
+	SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET	= 0x40,
+	SLI4_FC_LOCAL_REJECT_MISSING_RELOFFSET,
+	SLI4_FC_LOCAL_REJECT_INSUFF_BUFFERSPACE,
+	SLI4_FC_LOCAL_REJECT_MISSING_SI,
+	SLI4_FC_LOCAL_REJECT_MISSING_ES,
+	SLI4_FC_LOCAL_REJECT_INCOMPLETE_XFER,
+	SLI4_FC_LOCAL_REJECT_SLER_FAILURE,
+	SLI4_FC_LOCAL_REJECT_SLER_CMD_RCV_FAILURE,
+	SLI4_FC_LOCAL_REJECT_SLER_REC_RJT_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_REC_SRR_RETRY_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_SRR_RJT_ERR,
+	SLI4_FC_LOCAL_REJECT_RSVD6,
+	SLI4_FC_LOCAL_REJECT_SLER_RRQ_RJT_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_RRQ_RETRY_ERR,
+	SLI4_FC_LOCAL_REJECT_SLER_ABTS_ERR,
+};
+
+enum sli4_async_rcqe_flags {
+	SLI4_RACQE_RQ_EL_INDX	= 0xfff,
+	SLI4_RACQE_FCFI		= 0x3f,
+	SLI4_RACQE_HDPL		= 0x3f,
+	SLI4_RACQE_RQ_ID	= 0xffc0,
+};
+
+struct sli4_fc_async_rcqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		fcfi_rq_id_word;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+struct sli4_fc_async_rcqe_v1 {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	u8		fcfi_byte;
+	u8		rsvd5;
+	__le16		rsvd6;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	u8		sof_byte;
+	u8		eof_byte;
+	u8		code;
+	u8		hdpl_byte;
+};
+
+enum sli4_fc_async_rq_status {
+	SLI4_FC_ASYNC_RQ_SUCCESS = 0x10,
+	SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED,
+	SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED,
+	SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC,
+	SLI4_FC_ASYNC_RQ_DMA_FAILURE,
+};
+
+#define SLI4_RCQE_RQ_EL_INDX	0xfff
+
+struct sli4_fc_coalescing_rcqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rq_elmt_indx_word;
+	__le32		rsvd4;
+	__le16		rq_id;
+	__le16		sequence_reporting_placement_length;
+	__le16		rsvd14;
+	u8		code;
+	u8		vld_byte;
+};
+
+#define SLI4_FC_COALESCE_RQ_SUCCESS		0x10
+#define SLI4_FC_COALESCE_RQ_INSUFF_XRI_NEEDED	0x18
+
+enum sli4_optimized_write_cmd_cqe_flags {
+	SLI4_OCQE_RQ_EL_INDX	= 0x7f,		/* DW0 bits 16:30 */
+	SLI4_OCQE_FCFI		= 0x3f,		/* DW1 bits 0:6 */
+	SLI4_OCQE_OOX		= 1 << 6,	/* DW1 bit 15 */
+	SLI4_OCQE_AGXR		= 1 << 7,	/* DW1 bit 16 */
+	SLI4_OCQE_HDPL		= 0x3f,		/* DW3 bits 24:29 */
+};
+
+struct sli4_fc_optimized_write_cmd_cqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		w1;
+	u8		flags0;
+	u8		flags1;
+	__le16		xri;
+	__le16		rq_id;
+	__le16		data_placement_length;
+	__le16		rpi;
+	u8		code;
+	u8		hdpl_vld;
+};
+
+#define	SLI4_OCQE_XB		0x10
+
+struct sli4_fc_optimized_write_data_cqe {
+	u8		hw_status;
+	u8		status;
+	__le16		xri;
+	__le32		total_data_placed;
+	__le32		extended_status;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+struct sli4_fc_xri_aborted_cqe {
+	u8		rsvd0;
+	u8		status;
+	__le16		rsvd2;
+	__le32		extended_status;
+	__le16		xri;
+	__le16		remote_xid;
+	__le16		rsvd12;
+	u8		code;
+	u8		flags;
+};
+
+enum sli4_generic_ctx {
+	SLI4_GENERIC_CONTEXT_RPI,
+	SLI4_GENERIC_CONTEXT_VPI,
+	SLI4_GENERIC_CONTEXT_VFI,
+	SLI4_GENERIC_CONTEXT_FCFI,
+};
+
+#define SLI4_GENERIC_CLASS_CLASS_2		0x1
+#define SLI4_GENERIC_CLASS_CLASS_3		0x2
+
+#define SLI4_ELS_REQUEST64_DIR_WRITE		0x0
+#define SLI4_ELS_REQUEST64_DIR_READ		0x1
+
+enum sli4_els_request {
+	SLI4_ELS_REQUEST64_OTHER,
+	SLI4_ELS_REQUEST64_LOGO,
+	SLI4_ELS_REQUEST64_FDISC,
+	SLI4_ELS_REQUEST64_FLOGIN,
+	SLI4_ELS_REQUEST64_PLOGI,
+};
+
+enum sli4_els_cmd_type {
+	SLI4_ELS_REQUEST64_CMD_GEN		= 0x08,
+	SLI4_ELS_REQUEST64_CMD_NON_FABRIC	= 0x0c,
+	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
+};
+
+/******Driver specific structures******/
+
+struct sli4_queue {
+	/* Common to all queue types */
+	struct efc_dma	dma;
+	spinlock_t	lock;		/* Lock to protect the doorbell register
+					 * writes and queue reads
+					 */
+	u32		index;		/* current host entry index */
+	u16		size;		/* entry size */
+	u16		length;		/* number of entries */
+	u16		n_posted;	/* number entries posted for CQ, EQ */
+	u16		id;		/* Port assigned xQ_ID */
+	u8		type;		/* queue type, e.g. EQ, CQ, ... */
+	void __iomem    *db_regaddr;	/* register address for the doorbell */
+	u16		phase;		/* For if_type = 6, this value toggles
+					 * for each iteration of the queue;
+					 * a queue entry is valid when the cqe
+					 * valid bit matches this value
+					 */
+	u32		proc_limit;	/* limit CQE processed per iteration */
+	u32		posted_limit;	/* CQE/EQE process before ring db */
+	u32		max_num_processed;
+	u64		max_process_time;
+	union {
+		u32	r_idx;		/* "read" index (MQ only) */
+		u32	flag;
+	} u;
+};
+
+/* Parameters used to populate WQEs */
+struct sli_bls_params {
+	u32		s_id;
+	u32		d_id;
+	u16		ox_id;
+	u16		rx_id;
+	u32		rpi;
+	u32		vpi;
+	bool		rpi_registered;
+	u8		payload[12];
+	u16		xri;
+	u16		tag;
+};
+
+struct sli_els_params {
+	u32		s_id;
+	u32		d_id;
+	u16		ox_id;
+	u32		rpi;
+	u32		vpi;
+	bool		rpi_registered;
+	u32		xmit_len;
+	u32		rsp_len;
+	u8		timeout;
+	u8		cmd;
+	u16		xri;
+	u16		tag;
+};
+
+struct sli_ct_params {
+	u8		r_ctl;
+	u8		type;
+	u8		df_ctl;
+	u8		timeout;
+	u16		ox_id;
+	u32		d_id;
+	u32		rpi;
+	u32		vpi;
+	bool		rpi_registered;
+	u32		xmit_len;
+	u32		rsp_len;
+	u16		xri;
+	u16		tag;
+};
+
+struct sli_fcp_tgt_params {
+	u32		s_id;
+	u32		d_id;
+	u32		rpi;
+	u32		vpi;
+	u32		offset;
+	u16		ox_id;
+	u16		flags;
+	u8		cs_ctl;
+	u8		timeout;
+	u32		app_id;
+	u32		xmit_len;
+	u16		xri;
+	u16		tag;
+};
+
 #endif /* !_SLI4_H */
-- 
2.26.2



* [PATCH v7 03/31] elx: libefc_sli: Data structures and defines for mbox commands
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc_sli SLI-4 library population.

It adds definitions for the SLI-4 mailbox commands and responses.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc_sli/sli4.h | 1628 +++++++++++++++++++++++++++-
 1 file changed, 1627 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index d194b566b588..39b6fceed85e 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -1990,6 +1990,1520 @@ enum sli4_els_cmd_type {
 	SLI4_ELS_REQUEST64_CMD_FABRIC		= 0x0d,
 };
 
+#define SLI_PAGE_SIZE				SZ_4K
+
+#define SLI4_BMBX_TIMEOUT_MSEC			30000
+#define SLI4_FW_READY_TIMEOUT_MSEC		30000
+
+#define SLI4_BMBX_DELAY_US			1000	/* 1 ms */
+#define SLI4_INIT_PORT_DELAY_US			10000	/* 10 ms */
+
+static inline u32
+sli_page_count(size_t bytes, u32 page_size)
+{
+	if (!page_size)
+		return 0;
+
+	return (bytes + (page_size - 1)) >> __ffs(page_size);
+}
+
+/*************************************************************************
+ * SLI-4 mailbox command formats and definitions
+ */
+
+struct sli4_mbox_command_header {
+	u8	resvd0;
+	u8	command;
+	__le16	status;	/* Port writes to indicate success/fail */
+};
+
+enum sli4_mbx_cmd_value {
+	SLI4_MBX_CMD_CONFIG_LINK	= 0x07,
+	SLI4_MBX_CMD_DUMP		= 0x17,
+	SLI4_MBX_CMD_DOWN_LINK		= 0x06,
+	SLI4_MBX_CMD_INIT_LINK		= 0x05,
+	SLI4_MBX_CMD_INIT_VFI		= 0xa3,
+	SLI4_MBX_CMD_INIT_VPI		= 0xa4,
+	SLI4_MBX_CMD_POST_XRI		= 0xa7,
+	SLI4_MBX_CMD_RELEASE_XRI	= 0xac,
+	SLI4_MBX_CMD_READ_CONFIG	= 0x0b,
+	SLI4_MBX_CMD_READ_STATUS	= 0x0e,
+	SLI4_MBX_CMD_READ_NVPARMS	= 0x02,
+	SLI4_MBX_CMD_READ_REV		= 0x11,
+	SLI4_MBX_CMD_READ_LNK_STAT	= 0x12,
+	SLI4_MBX_CMD_READ_SPARM64	= 0x8d,
+	SLI4_MBX_CMD_READ_TOPOLOGY	= 0x95,
+	SLI4_MBX_CMD_REG_FCFI		= 0xa0,
+	SLI4_MBX_CMD_REG_FCFI_MRQ	= 0xaf,
+	SLI4_MBX_CMD_REG_RPI		= 0x93,
+	SLI4_MBX_CMD_REG_RX_RQ		= 0xa6,
+	SLI4_MBX_CMD_REG_VFI		= 0x9f,
+	SLI4_MBX_CMD_REG_VPI		= 0x96,
+	SLI4_MBX_CMD_RQST_FEATURES	= 0x9d,
+	SLI4_MBX_CMD_SLI_CONFIG		= 0x9b,
+	SLI4_MBX_CMD_UNREG_FCFI		= 0xa2,
+	SLI4_MBX_CMD_UNREG_RPI		= 0x14,
+	SLI4_MBX_CMD_UNREG_VFI		= 0xa1,
+	SLI4_MBX_CMD_UNREG_VPI		= 0x97,
+	SLI4_MBX_CMD_WRITE_NVPARMS	= 0x03,
+	SLI4_MBX_CMD_CFG_AUTO_XFER_RDY	= 0xad,
+};
+
+enum sli4_mbx_status {
+	SLI4_MBX_STATUS_SUCCESS		= 0x0000,
+	SLI4_MBX_STATUS_FAILURE		= 0x0001,
+	SLI4_MBX_STATUS_RPI_NOT_REG	= 0x1400,
+};
+
+/* CONFIG_LINK - configure link-oriented parameters,
+ * such as default N_Port_ID address and various timers
+ */
+enum sli4_cmd_config_link_flags {
+	SLI4_CFG_LINK_BBSCN = 0xf00,
+	SLI4_CFG_LINK_CSCN  = 0x1000,
+};
+
+struct sli4_cmd_config_link {
+	struct sli4_mbox_command_header	hdr;
+	u8		maxbbc;
+	u8		rsvd5;
+	u8		rsvd6;
+	u8		rsvd7;
+	u8		alpa;
+	__le16		n_port_id;
+	u8		rsvd11;
+	__le32		rsvd12;
+	__le32		e_d_tov;
+	__le32		lp_tov;
+	__le32		r_a_tov;
+	__le32		r_t_tov;
+	__le32		al_tov;
+	__le32		rsvd36;
+	__le32		bbscn_dword;
+};
+
+#define SLI4_DUMP4_TYPE		0xf
+
+#define SLI4_WKI_TAG_SAT_TEM	0x1040
+
+struct sli4_cmd_dump4 {
+	struct sli4_mbox_command_header	hdr;
+	__le32		type_dword;
+	__le16		wki_selection;
+	__le16		rsvd10;
+	__le32		rsvd12;
+	__le32		returned_byte_cnt;
+	__le32		resp_data[59];
+};
+
+/* INIT_LINK - initialize the link for a FC port */
+enum sli4_init_link_flags {
+	SLI4_INIT_LINK_F_LOOPBACK	= 1 << 0,
+
+	SLI4_INIT_LINK_F_P2P_ONLY	= 1 << 1,
+	SLI4_INIT_LINK_F_FCAL_ONLY	= 2 << 1,
+	SLI4_INIT_LINK_F_FCAL_FAIL_OVER	= 0 << 1,
+	SLI4_INIT_LINK_F_P2P_FAIL_OVER	= 1 << 1,
+
+	SLI4_INIT_LINK_F_UNFAIR		= 1 << 6,
+	SLI4_INIT_LINK_F_NO_LIRP	= 1 << 7,
+	SLI4_INIT_LINK_F_LOOP_VALID_CHK	= 1 << 8,
+	SLI4_INIT_LINK_F_NO_LISA	= 1 << 9,
+	SLI4_INIT_LINK_F_FAIL_OVER	= 1 << 10,
+	SLI4_INIT_LINK_F_FIXED_SPEED	= 1 << 11,
+	SLI4_INIT_LINK_F_PICK_HI_ALPA	= 1 << 15,
+
+};
+
+enum sli4_fc_link_speed {
+	SLI4_LINK_SPEED_1G = 1,
+	SLI4_LINK_SPEED_2G,
+	SLI4_LINK_SPEED_AUTO_1_2,
+	SLI4_LINK_SPEED_4G,
+	SLI4_LINK_SPEED_AUTO_4_1,
+	SLI4_LINK_SPEED_AUTO_4_2,
+	SLI4_LINK_SPEED_AUTO_4_2_1,
+	SLI4_LINK_SPEED_8G,
+	SLI4_LINK_SPEED_AUTO_8_1,
+	SLI4_LINK_SPEED_AUTO_8_2,
+	SLI4_LINK_SPEED_AUTO_8_2_1,
+	SLI4_LINK_SPEED_AUTO_8_4,
+	SLI4_LINK_SPEED_AUTO_8_4_1,
+	SLI4_LINK_SPEED_AUTO_8_4_2,
+	SLI4_LINK_SPEED_10G,
+	SLI4_LINK_SPEED_16G,
+	SLI4_LINK_SPEED_AUTO_16_8_4,
+	SLI4_LINK_SPEED_AUTO_16_8,
+	SLI4_LINK_SPEED_32G,
+	SLI4_LINK_SPEED_AUTO_32_16_8,
+	SLI4_LINK_SPEED_AUTO_32_16,
+	SLI4_LINK_SPEED_64G,
+	SLI4_LINK_SPEED_AUTO_64_32_16,
+	SLI4_LINK_SPEED_AUTO_64_32,
+	SLI4_LINK_SPEED_128G,
+	SLI4_LINK_SPEED_AUTO_128_64_32,
+	SLI4_LINK_SPEED_AUTO_128_64,
+};
+
+struct sli4_cmd_init_link {
+	struct sli4_mbox_command_header       hdr;
+	__le32	sel_reset_al_pa_dword;
+	__le32	flags0;
+	__le32	link_speed_sel_code;
+};
+
+/* INIT_VFI - initialize the VFI resource */
+enum sli4_init_vfi_flags {
+	SLI4_INIT_VFI_FLAG_VP	= 0x1000,
+	SLI4_INIT_VFI_FLAG_VF	= 0x2000,
+	SLI4_INIT_VFI_FLAG_VT	= 0x4000,
+	SLI4_INIT_VFI_FLAG_VR	= 0x8000,
+
+	SLI4_INIT_VFI_VFID	= 0x1fff,
+	SLI4_INIT_VFI_PRI	= 0xe000,
+
+	SLI4_INIT_VFI_HOP_COUNT = 0xff000000,
+};
+
+struct sli4_cmd_init_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		vfi;
+	__le16		flags0_word;
+	__le16		fcfi;
+	__le16		vpi;
+	__le32		vf_id_pri_dword;
+	__le32		hop_cnt_dword;
+};
+
+/* INIT_VPI - initialize the VPI resource */
+struct sli4_cmd_init_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/* POST_XRI - post XRI resources to the SLI Port */
+enum sli4_post_xri_flags {
+	SLI4_POST_XRI_COUNT	= 0xfff,
+	SLI4_POST_XRI_FLAG_ENX	= 0x1000,
+	SLI4_POST_XRI_FLAG_DL	= 0x2000,
+	SLI4_POST_XRI_FLAG_DI	= 0x4000,
+	SLI4_POST_XRI_FLAG_VAL	= 0x8000,
+};
+
+struct sli4_cmd_post_xri {
+	struct sli4_mbox_command_header	hdr;
+	__le16		xri_base;
+	__le16		xri_count_flags;
+};
+
+/* RELEASE_XRI - Release XRI resources from the SLI Port */
+enum sli4_release_xri_flags {
+	SLI4_RELEASE_XRI_REL_XRI_CNT	= 0x1f,
+	SLI4_RELEASE_XRI_COUNT		= 0x1f,
+};
+
+struct sli4_cmd_release_xri {
+	struct sli4_mbox_command_header	hdr;
+	__le16		rel_xri_count_word;
+	__le16		xri_count_word;
+
+	struct {
+		__le16	xri_tag0;
+		__le16	xri_tag1;
+	} xri_tbl[62];
+};
+
+/* READ_CONFIG - read SLI port configuration parameters */
+struct sli4_cmd_read_config {
+	struct sli4_mbox_command_header	hdr;
+};
+
+enum sli4_read_cfg_resp_flags {
+	SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000,	/* DW1 */
+	SLI4_READ_CFG_RESP_TOPOLOGY	= 0xff000000,	/* DW2 */
+};
+
+enum sli4_read_cfg_topo {
+	SLI4_READ_CFG_TOPO_FC		= 0x1,	/* FC topology unknown */
+	SLI4_READ_CFG_TOPO_NON_FC_AL	= 0x2,	/* FC point-to-point or fabric */
+	SLI4_READ_CFG_TOPO_FC_AL	= 0x3,	/* FC-AL topology */
+};
+
+struct sli4_rsp_read_config {
+	struct sli4_mbox_command_header	hdr;
+	__le32		ext_dword;
+	__le32		topology_dword;
+	__le32		resvd8;
+	__le16		e_d_tov;
+	__le16		resvd14;
+	__le32		resvd16;
+	__le16		r_a_tov;
+	__le16		resvd22;
+	__le32		resvd24;
+	__le32		resvd28;
+	__le16		lmt;
+	__le16		resvd34;
+	__le32		resvd36;
+	__le32		resvd40;
+	__le16		xri_base;
+	__le16		xri_count;
+	__le16		rpi_base;
+	__le16		rpi_count;
+	__le16		vpi_base;
+	__le16		vpi_count;
+	__le16		vfi_base;
+	__le16		vfi_count;
+	__le16		resvd60;
+	__le16		fcfi_count;
+	__le16		rq_count;
+	__le16		eq_count;
+	__le16		wq_count;
+	__le16		cq_count;
+	__le32		pad[45];
+};
+
+/* READ_NVPARMS - read SLI port configuration parameters */
+enum sli4_read_nvparms_flags {
+	SLI4_READ_NVPARAMS_HARD_ALPA	  = 0xff,
+	SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00,
+};
+
+struct sli4_cmd_read_nvparms {
+	struct sli4_mbox_command_header	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/* WRITE_NVPARMS - write SLI port configuration parameters */
+struct sli4_cmd_write_nvparms {
+	struct sli4_mbox_command_header	hdr;
+	__le32		resvd0;
+	__le32		resvd4;
+	__le32		resvd8;
+	__le32		resvd12;
+	u8		wwpn[8];
+	u8		wwnn[8];
+	__le32		hard_alpa_d_id;
+};
+
+/* READ_REV - read the Port revision levels */
+enum {
+	SLI4_READ_REV_FLAG_SLI_LEVEL	= 0xf,
+	SLI4_READ_REV_FLAG_FCOEM	= 0x10,
+	SLI4_READ_REV_FLAG_CEEV		= 0x60,
+	SLI4_READ_REV_FLAG_VPD		= 0x2000,
+
+	SLI4_READ_REV_AVAILABLE_LENGTH	= 0xffffff,
+};
+
+struct sli4_cmd_read_rev {
+	struct sli4_mbox_command_header	hdr;
+	__le16			resvd0;
+	__le16			flags0_word;
+	__le32			first_hw_rev;
+	__le32			second_hw_rev;
+	__le32			resvd12;
+	__le32			third_hw_rev;
+	u8			fc_ph_low;
+	u8			fc_ph_high;
+	u8			feature_level_low;
+	u8			feature_level_high;
+	__le32			resvd24;
+	__le32			first_fw_id;
+	u8			first_fw_name[16];
+	__le32			second_fw_id;
+	u8			second_fw_name[16];
+	__le32			rsvd18[30];
+	__le32			available_length_dword;
+	struct sli4_dmaaddr	hostbuf;
+	__le32			returned_vpd_length;
+	__le32			actual_vpd_length;
+};
+
+/* READ_SPARM64 - read the Port service parameters */
+#define SLI4_READ_SPARM64_WWPN_OFFSET	(4 * sizeof(u32))
+#define SLI4_READ_SPARM64_WWNN_OFFSET	(6 * sizeof(u32))
+
+struct sli4_cmd_read_sparm64 {
+	struct sli4_mbox_command_header hdr;
+	__le32			resvd0;
+	__le32			resvd4;
+	struct sli4_bde		bde_64;
+	__le16			vpi;
+	__le16			resvd22;
+	__le16			port_name_start;
+	__le16			port_name_len;
+	__le16			node_name_start;
+	__le16			node_name_len;
+};
+
+/* READ_TOPOLOGY - read the link event information */
+enum sli4_read_topo_e {
+	SLI4_READTOPO_ATTEN_TYPE	= 0xff,
+	SLI4_READTOPO_FLAG_IL		= 0x100,
+	SLI4_READTOPO_FLAG_PB_RECVD	= 0x200,
+
+	SLI4_READTOPO_LINKSTATE_RECV	= 0x3,
+	SLI4_READTOPO_LINKSTATE_TRANS	= 0xc,
+	SLI4_READTOPO_LINKSTATE_MACHINE	= 0xf0,
+	SLI4_READTOPO_LINKSTATE_SPEED	= 0xff00,
+	SLI4_READTOPO_LINKSTATE_TF	= 0x40000000,
+	SLI4_READTOPO_LINKSTATE_LU	= 0x80000000,
+
+	SLI4_READTOPO_SCN_BBSCN		= 0xf,
+	SLI4_READTOPO_SCN_CBBSCN	= 0xf0,
+
+	SLI4_READTOPO_R_T_TOV		= 0x1ff,
+	SLI4_READTOPO_AL_TOV		= 0xf000,
+
+	SLI4_READTOPO_PB_FLAG		= 0x80,
+
+	SLI4_READTOPO_INIT_N_PORTID	= 0xffffff,
+};
+
+#define SLI4_MIN_LOOP_MAP_BYTES	128
+
+struct sli4_cmd_read_topology {
+	struct sli4_mbox_command_header	hdr;
+	__le32			event_tag;
+	__le32			dw2_attentype;
+	u8			topology;
+	u8			lip_type;
+	u8			lip_al_ps;
+	u8			al_pa_granted;
+	struct sli4_bde		bde_loop_map;
+	__le32			linkdown_state;
+	__le32			currlink_state;
+	u8			max_bbc;
+	u8			init_bbc;
+	u8			scn_flags;
+	u8			rsvd39;
+	__le16			dw10w0_al_rt_tov;
+	__le16			lp_tov;
+	u8			acquired_al_pa;
+	u8			pb_flags;
+	__le16			specified_al_pa;
+	__le32			dw12_init_n_port_id;
+};
+
+enum sli4_read_topo_link {
+	SLI4_READ_TOPOLOGY_LINK_UP	= 0x1,
+	SLI4_READ_TOPOLOGY_LINK_DOWN,
+	SLI4_READ_TOPOLOGY_LINK_NO_ALPA,
+};
+
+enum sli4_read_topo {
+	SLI4_READ_TOPO_UNKNOWN		= 0x0,
+	SLI4_READ_TOPO_NON_FC_AL,
+	SLI4_READ_TOPO_FC_AL,
+};
+
+enum sli4_read_topo_speed {
+	SLI4_READ_TOPOLOGY_SPEED_NONE	= 0x00,
+	SLI4_READ_TOPOLOGY_SPEED_1G	= 0x04,
+	SLI4_READ_TOPOLOGY_SPEED_2G	= 0x08,
+	SLI4_READ_TOPOLOGY_SPEED_4G	= 0x10,
+	SLI4_READ_TOPOLOGY_SPEED_8G	= 0x20,
+	SLI4_READ_TOPOLOGY_SPEED_10G	= 0x40,
+	SLI4_READ_TOPOLOGY_SPEED_16G	= 0x80,
+	SLI4_READ_TOPOLOGY_SPEED_32G	= 0x90,
+	SLI4_READ_TOPOLOGY_SPEED_64G	= 0xa0,
+	SLI4_READ_TOPOLOGY_SPEED_128G	= 0xb0,
+};
+
+/* REG_FCFI - activate a FC Forwarder */
+struct sli4_cmd_reg_fcfi_rq_cfg {
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+enum sli4_regfcfi_tag {
+	SLI4_REGFCFI_VLAN_TAG		= 0xfff,
+	SLI4_REGFCFI_VLANTAG_VALID	= 0x1000,
+};
+
+#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG	4
+struct sli4_cmd_reg_fcfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+			rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+};
+
+#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG	4
+#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ	32
+#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE		0
+#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE		1
+
+enum sli4_reg_fcfi_mrq {
+	SLI4_REGFCFI_MRQ_VLAN_TAG	= 0xfff,
+	SLI4_REGFCFI_MRQ_VLANTAG_VALID	= 0x1000,
+	SLI4_REGFCFI_MRQ_MODE		= 0x2000,
+
+	SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS	= 0xff,
+	SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00,
+	SLI4_REGFCFI_MRQ_RQ_SEL_POLICY	= 0xf000,
+};
+
+struct sli4_cmd_reg_fcfi_mrq {
+	struct sli4_mbox_command_header	hdr;
+	__le16		fcf_index;
+	__le16		fcfi;
+	__le16		rqid1;
+	__le16		rqid0;
+	__le16		rqid3;
+	__le16		rqid2;
+	struct sli4_cmd_reg_fcfi_rq_cfg
+			rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	__le32		dw8_vlan;
+	__le32		dw9_mrqflags;
+};
+
+struct sli4_cmd_rq_cfg {
+	__le16	rq_id;
+	u8	r_ctl_mask;
+	u8	r_ctl_match;
+	u8	type_mask;
+	u8	type_match;
+};
+
+/* REG_RPI - register a Remote Port Indicator */
+enum sli4_reg_rpi {
+	SLI4_REGRPI_REMOTE_N_PORTID	= 0xffffff,	/* DW2 */
+	SLI4_REGRPI_UPD			= 0x1000000,
+	SLI4_REGRPI_ETOW		= 0x8000000,
+	SLI4_REGRPI_TERP		= 0x20000000,
+	SLI4_REGRPI_CI			= 0x80000000,
+};
+
+struct sli4_cmd_reg_rpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16			rpi;
+	__le16			rsvd2;
+	__le32			dw2_rportid_flags;
+	struct sli4_bde		bde_64;
+	__le16			vpi;
+	__le16			rsvd26;
+};
+
+#define SLI4_REG_RPI_BUF_LEN		0x70
+
+/* REG_VFI - register a Virtual Fabric Indicator */
+enum sli_reg_vfi {
+	SLI4_REGVFI_VP			= 0x1000,	/* DW1 */
+	SLI4_REGVFI_UPD			= 0x2000,
+
+	SLI4_REGVFI_LOCAL_N_PORTID	= 0xffffff,	/* DW10 */
+};
+
+struct sli4_cmd_reg_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le16			vfi;
+	__le16			dw0w1_flags;
+	__le16			fcfi;
+	__le16			vpi;
+	u8			wwpn[8];
+	struct sli4_bde		sparm;
+	__le32			e_d_tov;
+	__le32			r_a_tov;
+	__le32			dw10_lportid_flags;
+};
+
+/* REG_VPI - register a Virtual Port Indicator */
+enum sli4_reg_vpi {
+	SLI4_REGVPI_LOCAL_N_PORTID	= 0xffffff,
+	SLI4_REGVPI_UPD			= 0x1000000,
+};
+
+struct sli4_cmd_reg_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		dw2_lportid_flags;
+	u8		wwpn[8];
+	__le32		rsvd12;
+	__le16		vpi;
+	__le16		vfi;
+};
+
+/* REQUEST_FEATURES - request / query SLI features */
+enum sli4_req_features_flags {
+	SLI4_REQFEAT_QRY	= 0x1,		/* Dw1 */
+
+	SLI4_REQFEAT_IAAB	= 1 << 0,	/* DW2 & DW3 */
+	SLI4_REQFEAT_NPIV	= 1 << 1,
+	SLI4_REQFEAT_DIF	= 1 << 2,
+	SLI4_REQFEAT_VF		= 1 << 3,
+	SLI4_REQFEAT_FCPI	= 1 << 4,
+	SLI4_REQFEAT_FCPT	= 1 << 5,
+	SLI4_REQFEAT_FCPC	= 1 << 6,
+	SLI4_REQFEAT_RSVD	= 1 << 7,
+	SLI4_REQFEAT_RQD	= 1 << 8,
+	SLI4_REQFEAT_IAAR	= 1 << 9,
+	SLI4_REQFEAT_HLM	= 1 << 10,
+	SLI4_REQFEAT_PERFH	= 1 << 11,
+	SLI4_REQFEAT_RXSEQ	= 1 << 12,
+	SLI4_REQFEAT_RXRI	= 1 << 13,
+	SLI4_REQFEAT_DCL2	= 1 << 14,
+	SLI4_REQFEAT_RSCO	= 1 << 15,
+	SLI4_REQFEAT_MRQP	= 1 << 16,
+};
+
+struct sli4_cmd_request_features {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_qry;
+	__le32		cmd;
+	__le32		resp;
+};
+
+/*
+ * SLI_CONFIG - submit a configuration command to Port
+ *
+ * Command is either embedded as part of the payload (embed) or located
+ * in a separate memory buffer (mem)
+ */
+enum sli4_sli_config {
+	SLI4_SLICONF_EMB		= 0x1,		/* DW1 */
+	SLI4_SLICONF_PMDCMD_SHIFT	= 3,
+	SLI4_SLICONF_PMDCMD_MASK	= 0xf8,
+	SLI4_SLICONF_PMDCMD_VAL_1	= 8,
+	SLI4_SLICONF_PMDCNT		= 0xf8,
+
+	SLI4_SLICONF_PMD_LEN		= 0x00ffffff,
+};
+
+struct sli4_cmd_sli_config {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_flags;
+	__le32		payload_len;
+	__le32		rsvd12[3];
+	union {
+		u8 embed[58 * sizeof(u32)];
+		struct sli4_bufptr mem;
+	} payload;
+};
+
+/* READ_STATUS - read tx/rx status of a particular port */
+#define SLI4_READSTATUS_CLEAR_COUNTERS	0x1
+
+struct sli4_cmd_read_status {
+	struct sli4_mbox_command_header	hdr;
+	__le32		dw1_flags;
+	__le32		rsvd4;
+	__le32		trans_kbyte_cnt;
+	__le32		recv_kbyte_cnt;
+	__le32		trans_frame_cnt;
+	__le32		recv_frame_cnt;
+	__le32		trans_seq_cnt;
+	__le32		recv_seq_cnt;
+	__le32		tot_exchanges_orig;
+	__le32		tot_exchanges_resp;
+	__le32		recv_p_bsy_cnt;
+	__le32		recv_f_bsy_cnt;
+	__le32		no_rq_buf_dropped_frames_cnt;
+	__le32		empty_rq_timeout_cnt;
+	__le32		no_xri_dropped_frames_cnt;
+	__le32		empty_xri_pool_cnt;
+};
+
+/* READ_LNK_STAT - read link status of a particular port */
+enum sli4_read_link_stats_flags {
+	SLI4_READ_LNKSTAT_REC	= 1u << 0,
+	SLI4_READ_LNKSTAT_GEC	= 1u << 1,
+	SLI4_READ_LNKSTAT_W02OF	= 1u << 2,
+	SLI4_READ_LNKSTAT_W03OF	= 1u << 3,
+	SLI4_READ_LNKSTAT_W04OF	= 1u << 4,
+	SLI4_READ_LNKSTAT_W05OF	= 1u << 5,
+	SLI4_READ_LNKSTAT_W06OF	= 1u << 6,
+	SLI4_READ_LNKSTAT_W07OF	= 1u << 7,
+	SLI4_READ_LNKSTAT_W08OF	= 1u << 8,
+	SLI4_READ_LNKSTAT_W09OF	= 1u << 9,
+	SLI4_READ_LNKSTAT_W10OF = 1u << 10,
+	SLI4_READ_LNKSTAT_W11OF = 1u << 11,
+	SLI4_READ_LNKSTAT_W12OF	= 1u << 12,
+	SLI4_READ_LNKSTAT_W13OF	= 1u << 13,
+	SLI4_READ_LNKSTAT_W14OF	= 1u << 14,
+	SLI4_READ_LNKSTAT_W15OF	= 1u << 15,
+	SLI4_READ_LNKSTAT_W16OF	= 1u << 16,
+	SLI4_READ_LNKSTAT_W17OF	= 1u << 17,
+	SLI4_READ_LNKSTAT_W18OF	= 1u << 18,
+	SLI4_READ_LNKSTAT_W19OF	= 1u << 19,
+	SLI4_READ_LNKSTAT_W20OF	= 1u << 20,
+	SLI4_READ_LNKSTAT_W21OF	= 1u << 21,
+	SLI4_READ_LNKSTAT_CLRC	= 1u << 30,
+	SLI4_READ_LNKSTAT_CLOF	= 1u << 31,
+};
+
+struct sli4_cmd_read_link_stats {
+	struct sli4_mbox_command_header	hdr;
+	__le32	dw1_flags;
+	__le32	linkfail_errcnt;
+	__le32	losssync_errcnt;
+	__le32	losssignal_errcnt;
+	__le32	primseq_errcnt;
+	__le32	inval_txword_errcnt;
+	__le32	crc_errcnt;
+	__le32	primseq_eventtimeout_cnt;
+	__le32	elastic_bufoverrun_errcnt;
+	__le32	arbit_fc_al_timeout_cnt;
+	__le32	adv_rx_buftor_to_buf_credit;
+	__le32	curr_rx_buf_to_buf_credit;
+	__le32	adv_tx_buf_to_buf_credit;
+	__le32	curr_tx_buf_to_buf_credit;
+	__le32	rx_eofa_cnt;
+	__le32	rx_eofdti_cnt;
+	__le32	rx_eofni_cnt;
+	__le32	rx_soff_cnt;
+	__le32	rx_dropped_no_aer_cnt;
+	__le32	rx_dropped_no_avail_rpi_rescnt;
+	__le32	rx_dropped_no_avail_xri_rescnt;
+};
+
+/* Format a WQE with WQ_ID Association performance hint */
+static inline void
+sli_set_wq_id_association(void *entry, u16 q_id)
+{
+	u32 *wqe = entry;
+
+	/*
+	 * Set Word 10, bit 0 to zero
+	 * Set Word 10, bits 15:1 to the WQ ID
+	 */
+	wqe[10] &= ~0xffff;
+	wqe[10] |= q_id << 1;
+}
+
+/* UNREG_FCFI - unregister a FCFI */
+struct sli4_cmd_unreg_fcfi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		fcfi;
+	__le16		rsvd6;
+};
+
+/* UNREG_RPI - unregister one or more RPI */
+enum sli4_unreg_rpi {
+	SLI4_UNREG_RPI_DP	= 0x2000,
+	SLI4_UNREG_RPI_II_SHIFT	= 14,
+	SLI4_UNREG_RPI_II_MASK	= 0xc000,
+	SLI4_UNREG_RPI_II_RPI	= 0x0000,
+	SLI4_UNREG_RPI_II_VPI	= 0x4000,
+	SLI4_UNREG_RPI_II_VFI	= 0x8000,
+	SLI4_UNREG_RPI_II_FCFI	= 0xc000,
+
+	SLI4_UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff,
+};
+
+struct sli4_cmd_unreg_rpi {
+	struct sli4_mbox_command_header	hdr;
+	__le16		index;
+	__le16		dw1w1_flags;
+	__le32		dw2_dest_n_portid;
+};
+
+/* UNREG_VFI - unregister one or more VFI */
+enum sli4_unreg_vfi {
+	SLI4_UNREG_VFI_II_SHIFT	= 14,
+	SLI4_UNREG_VFI_II_MASK	= 0xc000,
+	SLI4_UNREG_VFI_II_VFI	= 0x0000,
+	SLI4_UNREG_VFI_II_FCFI	= 0xc000,
+};
+
+struct sli4_cmd_unreg_vfi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2_flags;
+};
+
+enum sli4_unreg_type {
+	SLI4_UNREG_TYPE_PORT,
+	SLI4_UNREG_TYPE_DOMAIN,
+	SLI4_UNREG_TYPE_FCF,
+	SLI4_UNREG_TYPE_ALL
+};
+
+/* UNREG_VPI - unregister one or more VPI */
+enum sli4_unreg_vpi {
+	SLI4_UNREG_VPI_II_SHIFT	= 14,
+	SLI4_UNREG_VPI_II_MASK	= 0xc000,
+	SLI4_UNREG_VPI_II_VPI	= 0x0000,
+	SLI4_UNREG_VPI_II_VFI	= 0x8000,
+	SLI4_UNREG_VPI_II_FCFI	= 0xc000,
+};
+
+struct sli4_cmd_unreg_vpi {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le16		index;
+	__le16		dw2w0_flags;
+};
+
+/* AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature */
+struct sli4_cmd_config_auto_xfer_rdy {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+};
+
+#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE	0xffff
+
+struct sli4_cmd_config_auto_xfer_rdy_hp {
+	struct sli4_mbox_command_header	hdr;
+	__le32		rsvd0;
+	__le32		max_burst_len;
+	__le32		dw3_esoc_flags;
+	__le16		block_size;
+	__le16		rsvd14;
+};
+
+/*************************************************************************
+ * SLI-4 common configuration command formats and definitions
+ */
+
+/*
+ * Subsystem values.
+ */
+enum sli4_subsystem {
+	SLI4_SUBSYSTEM_COMMON	= 0x01,
+	SLI4_SUBSYSTEM_LOWLEVEL	= 0x0b,
+	SLI4_SUBSYSTEM_FC	= 0x0c,
+	SLI4_SUBSYSTEM_DMTF	= 0x11,
+};
+
+#define	SLI4_OPC_LOWLEVEL_SET_WATCHDOG		0x36
+
+/*
+ * Common opcode (OPC) values.
+ */
+enum sli4_cmn_opcode {
+	SLI4_CMN_FUNCTION_RESET		= 0x3d,
+	SLI4_CMN_CREATE_CQ		= 0x0c,
+	SLI4_CMN_CREATE_CQ_SET		= 0x1d,
+	SLI4_CMN_DESTROY_CQ		= 0x36,
+	SLI4_CMN_MODIFY_EQ_DELAY	= 0x29,
+	SLI4_CMN_CREATE_EQ		= 0x0d,
+	SLI4_CMN_DESTROY_EQ		= 0x37,
+	SLI4_CMN_CREATE_MQ_EXT		= 0x5a,
+	SLI4_CMN_DESTROY_MQ		= 0x35,
+	SLI4_CMN_GET_CNTL_ATTRIBUTES	= 0x20,
+	SLI4_CMN_NOP			= 0x21,
+	SLI4_CMN_GET_RSC_EXTENT_INFO	= 0x9a,
+	SLI4_CMN_GET_SLI4_PARAMS	= 0xb5,
+	SLI4_CMN_QUERY_FW_CONFIG	= 0x3a,
+	SLI4_CMN_GET_PORT_NAME		= 0x4d,
+
+	SLI4_CMN_WRITE_FLASHROM		= 0x07,
+	/* TRANSCEIVER Data */
+	SLI4_CMN_READ_TRANS_DATA	= 0x49,
+	SLI4_CMN_GET_CNTL_ADDL_ATTRS	= 0x79,
+	SLI4_CMN_GET_FUNCTION_CFG	= 0xa0,
+	SLI4_CMN_GET_PROFILE_CFG	= 0xa4,
+	SLI4_CMN_SET_PROFILE_CFG	= 0xa5,
+	SLI4_CMN_GET_PROFILE_LIST	= 0xa6,
+	SLI4_CMN_GET_ACTIVE_PROFILE	= 0xa7,
+	SLI4_CMN_SET_ACTIVE_PROFILE	= 0xa8,
+	SLI4_CMN_READ_OBJECT		= 0xab,
+	SLI4_CMN_WRITE_OBJECT		= 0xac,
+	SLI4_CMN_DELETE_OBJECT		= 0xae,
+	SLI4_CMN_READ_OBJECT_LIST	= 0xad,
+	SLI4_CMN_SET_DUMP_LOCATION	= 0xb8,
+	SLI4_CMN_SET_FEATURES		= 0xbf,
+	SLI4_CMN_GET_RECFG_LINK_INFO	= 0xc9,
+	SLI4_CMN_SET_RECNG_LINK_ID	= 0xca,
+};
+
+/* DMTF opcode (OPC) values */
+#define DMTF_EXEC_CLP_CMD 0x01
+
+/*
+ * COMMON_FUNCTION_RESET
+ *
+ * Resets the Port, returning it to a power-on state. This configuration
+ * command does not have a payload and should set/expect the lengths to
+ * be zero.
+ */
+struct sli4_rqst_cmn_function_reset {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_function_reset {
+	struct sli4_rsp_hdr	hdr;
+};
+
+
+/*
+ * COMMON_GET_CNTL_ATTRIBUTES
+ *
+ * Query for information about the SLI Port
+ */
+enum sli4_cntrl_attr_flags {
+	SLI4_CNTL_ATTR_PORTNUM	= 0x3f,
+	SLI4_CNTL_ATTR_PORTTYPE	= 0xc0,
+};
+
+struct sli4_rsp_cmn_get_cntl_attributes {
+	struct sli4_rsp_hdr	hdr;
+	u8		version_str[32];
+	u8		manufacturer_name[32];
+	__le32		supported_modes;
+	u8		eprom_version_lo;
+	u8		eprom_version_hi;
+	__le16		rsvd17;
+	__le32		mbx_ds_version;
+	__le32		ep_fw_ds_version;
+	u8		ncsi_version_str[12];
+	__le32		def_extended_timeout;
+	u8		model_number[32];
+	u8		description[64];
+	u8		serial_number[32];
+	u8		ip_version_str[32];
+	u8		fw_version_str[32];
+	u8		bios_version_str[32];
+	u8		redboot_version_str[32];
+	u8		driver_version_str[32];
+	u8		fw_on_flash_version_str[32];
+	__le32		functionalities_supported;
+	__le16		max_cdb_length;
+	u8		asic_revision;
+	u8		generational_guid0;
+	__le32		generational_guid1_12[3];
+	__le16		generational_guid13_14;
+	u8		generational_guid15;
+	u8		hba_port_count;
+	__le16		default_link_down_timeout;
+	u8		iscsi_version_min_max;
+	u8		multifunctional_device;
+	u8		cache_valid;
+	u8		hba_status;
+	u8		max_domains_supported;
+	u8		port_num_type_flags;
+	__le32		firmware_post_status;
+	__le32		hba_mtu;
+	u8		iscsi_features;
+	u8		rsvd121[3];
+	__le16		pci_vendor_id;
+	__le16		pci_device_id;
+	__le16		pci_sub_vendor_id;
+	__le16		pci_sub_system_id;
+	u8		pci_bus_number;
+	u8		pci_device_number;
+	u8		pci_function_number;
+	u8		interface_type;
+	__le64		unique_identifier;
+	u8		number_of_netfilters;
+	u8		rsvd122[3];
+};
+
+/*
+ * COMMON_GET_CNTL_ADDL_ATTRIBUTES
+ *
+ * This command queries the controller information from the Flash ROM.
+ */
+struct sli4_rqst_cmn_get_cntl_addl_attributes {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_get_cntl_addl_attributes {
+	struct sli4_rsp_hdr	hdr;
+	__le16		ipl_file_number;
+	u8		ipl_file_version;
+	u8		rsvd4;
+	u8		on_die_temperature;
+	u8		rsvd5[3];
+	__le32		driver_advanced_features_supported;
+	__le32		rsvd7[4];
+	char		universal_bios_version[32];
+	char		x86_bios_version[32];
+	char		efi_bios_version[32];
+	char		fcode_version[32];
+	char		uefi_bios_version[32];
+	char		uefi_nic_version[32];
+	char		uefi_fcode_version[32];
+	char		uefi_iscsi_version[32];
+	char		iscsi_x86_bios_version[32];
+	char		pxe_x86_bios_version[32];
+	u8		default_wwpn[8];
+	u8		ext_phy_version[32];
+	u8		fc_universal_bios_version[32];
+	u8		fc_x86_bios_version[32];
+	u8		fc_efi_bios_version[32];
+	u8		fc_fcode_version[32];
+	u8		ext_phy_crc_label[8];
+	u8		ipl_file_name[16];
+	u8		rsvd139[72];
+};
+
+/*
+ * COMMON_NOP
+ *
+ * This command does not do anything; it only returns
+ * the payload in the completion.
+ */
+struct sli4_rqst_cmn_nop {
+	struct sli4_rqst_hdr	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rsp_cmn_nop {
+	struct sli4_rsp_hdr	hdr;
+	__le32			context[2];
+};
+
+struct sli4_rqst_cmn_get_resource_extent_info {
+	struct sli4_rqst_hdr	hdr;
+	__le16	resource_type;
+	__le16	rsvd16;
+};
+
+enum sli4_rsc_type {
+	SLI4_RSC_TYPE_VFI	= 0x20,
+	SLI4_RSC_TYPE_VPI	= 0x21,
+	SLI4_RSC_TYPE_RPI	= 0x22,
+	SLI4_RSC_TYPE_XRI	= 0x23,
+};
+
+struct sli4_rsp_cmn_get_resource_extent_info {
+	struct sli4_rsp_hdr	hdr;
+	__le16		resource_extent_count;
+	__le16		resource_extent_size;
+};
+
+#define SLI4_128BYTE_WQE_SUPPORT	0x02
+
+#define GET_Q_CNT_METHOD(m) \
+	(((m) & SLI4_PARAM_Q_CNT_MTHD_MASK) >> SLI4_PARAM_Q_CNT_MTHD_SHFT)
+#define GET_Q_CREATE_VERSION(v) \
+	(((v) & SLI4_PARAM_QV_MASK) >> SLI4_PARAM_QV_SHIFT)
+
+enum sli4_rsp_get_params_e {
+	/* GENERIC */
+	SLI4_PARAM_Q_CNT_MTHD_SHFT	= 24,
+	SLI4_PARAM_Q_CNT_MTHD_MASK	= 0xf << 24,
+	SLI4_PARAM_QV_SHIFT		= 14,
+	SLI4_PARAM_QV_MASK		= 3 << 14,
+
+	/* DW4 */
+	SLI4_PARAM_PROTO_TYPE_MASK	= 0xff,
+	/* DW5 */
+	SLI4_PARAM_FT			= 1 << 0,
+	SLI4_PARAM_SLI_REV_MASK		= 0xf << 4,
+	SLI4_PARAM_SLI_FAM_MASK		= 0xf << 8,
+	SLI4_PARAM_IF_TYPE_MASK		= 0xf << 12,
+	SLI4_PARAM_SLI_HINT1_MASK	= 0xff << 16,
+	SLI4_PARAM_SLI_HINT2_MASK	= 0x1f << 24,
+	/* DW6 */
+	SLI4_PARAM_EQ_PAGE_CNT_MASK	= 0xf << 0,
+	SLI4_PARAM_EQE_SZS_MASK		= 0xf << 8,
+	SLI4_PARAM_EQ_PAGE_SZS_MASK	= 0xff << 16,
+	/* DW8 */
+	SLI4_PARAM_CQ_PAGE_CNT_MASK	= 0xf << 0,
+	SLI4_PARAM_CQE_SZS_MASK		= 0xf << 8,
+	SLI4_PARAM_CQ_PAGE_SZS_MASK	= 0xff << 16,
+	/* DW10 */
+	SLI4_PARAM_MQ_PAGE_CNT_MASK	= 0xf << 0,
+	SLI4_PARAM_MQ_PAGE_SZS_MASK	= 0xff << 16,
+	/* DW12 */
+	SLI4_PARAM_WQ_PAGE_CNT_MASK	= 0xf << 0,
+	SLI4_PARAM_WQE_SZS_MASK		= 0xf << 8,
+	SLI4_PARAM_WQ_PAGE_SZS_MASK	= 0xff << 16,
+	/* DW14 */
+	SLI4_PARAM_RQ_PAGE_CNT_MASK	= 0xf << 0,
+	SLI4_PARAM_RQE_SZS_MASK		= 0xf << 8,
+	SLI4_PARAM_RQ_PAGE_SZS_MASK	= 0xff << 16,
+	/* DW15W1*/
+	SLI4_PARAM_RQ_DB_WINDOW_MASK	= 0xf000,
+	/* DW16 */
+	SLI4_PARAM_FC			= 1 << 0,
+	SLI4_PARAM_EXT			= 1 << 1,
+	SLI4_PARAM_HDRR			= 1 << 2,
+	SLI4_PARAM_SGLR			= 1 << 3,
+	SLI4_PARAM_FBRR			= 1 << 4,
+	SLI4_PARAM_AREG			= 1 << 5,
+	SLI4_PARAM_TGT			= 1 << 6,
+	SLI4_PARAM_TERP			= 1 << 7,
+	SLI4_PARAM_ASSI			= 1 << 8,
+	SLI4_PARAM_WCHN			= 1 << 9,
+	SLI4_PARAM_TCCA			= 1 << 10,
+	SLI4_PARAM_TRTY			= 1 << 11,
+	SLI4_PARAM_TRIR			= 1 << 12,
+	SLI4_PARAM_PHOFF		= 1 << 13,
+	SLI4_PARAM_PHON			= 1 << 14,
+	SLI4_PARAM_PHWQ			= 1 << 15,
+	SLI4_PARAM_BOUND_4GA		= 1 << 16,
+	SLI4_PARAM_RXC			= 1 << 17,
+	SLI4_PARAM_HLM			= 1 << 18,
+	SLI4_PARAM_IPR			= 1 << 19,
+	SLI4_PARAM_RXRI			= 1 << 20,
+	SLI4_PARAM_SGLC			= 1 << 21,
+	SLI4_PARAM_TIMM			= 1 << 22,
+	SLI4_PARAM_TSMM			= 1 << 23,
+	SLI4_PARAM_OAS			= 1 << 25,
+	SLI4_PARAM_LC			= 1 << 26,
+	SLI4_PARAM_AGXF			= 1 << 27,
+	SLI4_PARAM_LOOPBACK_MASK	= 0xf << 28,
+	/* DW18 */
+	SLI4_PARAM_SGL_PAGE_CNT_MASK	= 0xf << 0,
+	SLI4_PARAM_SGL_PAGE_SZS_MASK	= 0xff << 8,
+	SLI4_PARAM_SGL_PP_ALIGN_MASK	= 0xff << 16,
+};
+
+struct sli4_rqst_cmn_get_sli4_params {
+	struct sli4_rqst_hdr	hdr;
+};
+
+struct sli4_rsp_cmn_get_sli4_params {
+	struct sli4_rsp_hdr	hdr;
+	__le32		dw4_protocol_type;
+	__le32		dw5_sli;
+	__le32		dw6_eq_page_cnt;
+	__le16		eqe_count_mask;
+	__le16		rsvd26;
+	__le32		dw8_cq_page_cnt;
+	__le16		cqe_count_mask;
+	__le16		rsvd34;
+	__le32		dw10_mq_page_cnt;
+	__le16		mqe_count_mask;
+	__le16		rsvd42;
+	__le32		dw12_wq_page_cnt;
+	__le16		wqe_count_mask;
+	__le16		rsvd50;
+	__le32		dw14_rq_page_cnt;
+	__le16		rqe_count_mask;
+	__le16		dw15w1_rq_db_window;
+	__le32		dw16_loopback_scope;
+	__le32		sge_supported_length;
+	__le32		dw18_sgl_page_cnt;
+	__le16		min_rq_buffer_size;
+	__le16		rsvd75;
+	__le32		max_rq_buffer_size;
+	__le16		physical_xri_max;
+	__le16		physical_rpi_max;
+	__le16		physical_vpi_max;
+	__le16		physical_vfi_max;
+	__le32		rsvd88;
+	__le16		frag_num_field_offset;
+	__le16		frag_num_field_size;
+	__le16		sgl_index_field_offset;
+	__le16		sgl_index_field_size;
+	__le32		chain_sge_initial_value_lo;
+	__le32		chain_sge_initial_value_hi;
+};
+
+/* Port Types */
+enum sli4_port_types {
+	SLI4_PORT_TYPE_ETH	= 0,
+	SLI4_PORT_TYPE_FC	= 1,
+};
+
+struct sli4_rqst_cmn_get_port_name {
+	struct sli4_rqst_hdr	hdr;
+	u8	port_type;
+	u8	rsvd4[3];
+};
+
+struct sli4_rsp_cmn_get_port_name {
+	struct sli4_rsp_hdr	hdr;
+	char	port_name[4];
+};
+
+struct sli4_rqst_cmn_write_flashrom {
+	struct sli4_rqst_hdr	hdr;
+	__le32		flash_rom_access_opcode;
+	__le32		flash_rom_access_operation_type;
+	__le32		data_buffer_size;
+	__le32		offset;
+	u8		data_buffer[4];
+};
+
+/*
+ * COMMON_READ_TRANSCEIVER_DATA
+ *
+ * This command reads SFF transceiver data (format defined by the
+ * SFF-8472 specification).
+ */
+struct sli4_rqst_cmn_read_transceiver_data {
+	struct sli4_rqst_hdr	hdr;
+	__le32			page_number;
+	__le32			port;
+};
+
+struct sli4_rsp_cmn_read_transceiver_data {
+	struct sli4_rsp_hdr	hdr;
+	__le32			page_number;
+	__le32			port;
+	u8			page_data[128];
+	u8			page_data_2[128];
+};
+
+#define SLI4_REQ_DESIRE_READLEN		0xffffff
+
+struct sli4_rqst_cmn_read_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_read_length_dword;
+	__le32			read_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde		host_buffer_descriptor[0];
+};
+
+#define RSP_COM_READ_OBJ_EOF		0x80000000
+
+struct sli4_rsp_cmn_read_object {
+	struct sli4_rsp_hdr	hdr;
+	__le32			actual_read_length;
+	__le32			eof_dword;
+};
+
+enum sli4_rqst_write_object_flags {
+	SLI4_RQ_DES_WRITE_LEN		= 0xffffff,
+	SLI4_RQ_DES_WRITE_LEN_NOC	= 0x40000000,
+	SLI4_RQ_DES_WRITE_LEN_EOF	= 0x80000000,
+};
+
+struct sli4_rqst_cmn_write_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_write_len_dword;
+	__le32			write_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde		host_buffer_descriptor[0];
+};
+
+#define	RSP_CHANGE_STATUS		0xff
+
+struct sli4_rsp_cmn_write_object {
+	struct sli4_rsp_hdr	hdr;
+	__le32			actual_write_length;
+	__le32			change_status_dword;
+};
+
+struct sli4_rqst_cmn_delete_object {
+	struct sli4_rqst_hdr	hdr;
+	__le32			rsvd4;
+	__le32			rsvd5;
+	u8			object_name[104];
+};
+
+#define SLI4_RQ_OBJ_LIST_READ_LEN	0xffffff
+
+struct sli4_rqst_cmn_read_object_list {
+	struct sli4_rqst_hdr	hdr;
+	__le32			desired_read_length_dword;
+	__le32			read_offset;
+	u8			object_name[104];
+	__le32			host_buffer_descriptor_count;
+	struct sli4_bde		host_buffer_descriptor[0];
+};
+
+enum sli4_rqst_set_dump_flags {
+	SLI4_CMN_SET_DUMP_BUFFER_LEN	= 0xffffff,
+	SLI4_CMN_SET_DUMP_FDB		= 0x20000000,
+	SLI4_CMN_SET_DUMP_BLP		= 0x40000000,
+	SLI4_CMN_SET_DUMP_QRY		= 0x80000000,
+};
+
+struct sli4_rqst_cmn_set_dump_location {
+	struct sli4_rqst_hdr	hdr;
+	__le32			buffer_length_dword;
+	__le32			buf_addr_low;
+	__le32			buf_addr_high;
+};
+
+struct sli4_rsp_cmn_set_dump_location {
+	struct sli4_rsp_hdr	hdr;
+	__le32			buffer_length_dword;
+};
+
+enum sli4_dump_level {
+	SLI4_DUMP_LEVEL_NONE,
+	SLI4_CHIP_LEVEL_DUMP,
+	SLI4_FUNC_DESC_DUMP,
+};
+
+enum sli4_dump_state {
+	SLI4_DUMP_STATE_NONE,
+	SLI4_CHIP_DUMP_STATE_VALID,
+	SLI4_FUNC_DUMP_STATE_VALID,
+};
+
+enum sli4_dump_status {
+	SLI4_DUMP_READY_STATUS_NOT_READY,
+	SLI4_DUMP_READY_STATUS_DD_PRESENT,
+	SLI4_DUMP_READY_STATUS_FDB_PRESENT,
+	SLI4_DUMP_READY_STATUS_SKIP_DUMP,
+	SLI4_DUMP_READY_STATUS_FAILED = -1,
+};
+
+enum sli4_set_features {
+	SLI4_SET_FEATURES_DIF_SEED			= 0x01,
+	SLI4_SET_FEATURES_XRI_TIMER			= 0x03,
+	SLI4_SET_FEATURES_MAX_PCIE_SPEED		= 0x04,
+	SLI4_SET_FEATURES_FCTL_CHECK			= 0x05,
+	SLI4_SET_FEATURES_FEC				= 0x06,
+	SLI4_SET_FEATURES_PCIE_RECV_DETECT		= 0x07,
+	SLI4_SET_FEATURES_DIF_MEMORY_MODE		= 0x08,
+	SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE	= 0x09,
+	SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS		= 0x0a,
+	SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI		= 0x0c,
+	SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE	= 0x0d,
+	SLI4_SET_FEATURES_SET_FTD_XFER_HINT		= 0x0f,
+	SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK		= 0x11,
+};
+
+struct sli4_rqst_cmn_set_features {
+	struct sli4_rqst_hdr	hdr;
+	__le32			feature;
+	__le32			param_len;
+	__le32			params[8];
+};
+
+struct sli4_rqst_cmn_set_features_dif_seed {
+	__le16		seed;
+	__le16		rsvd16;
+};
+
+enum sli4_rqst_set_mrq_features {
+	SLI4_RQ_MULTIRQ_ISR		 = 0x1,
+	SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2,
+
+	SLI4_RQ_MULTIRQ_NUM_RQS		 = 0xff,
+	SLI4_RQ_MULTIRQ_RQ_SELECT	 = 0xf00,
+};
+
+struct sli4_rqst_cmn_set_features_multirq {
+	__le32		auto_gen_xfer_dword;
+	__le32		num_rqs_dword;
+};
+
+enum sli4_rqst_health_check_flags {
+	SLI4_RQ_HEALTH_CHECK_ENABLE	= 0x1,
+	SLI4_RQ_HEALTH_CHECK_QUERY	= 0x2,
+};
+
+struct sli4_rqst_cmn_set_features_health_check {
+	__le32		health_check_dword;
+};
+
+struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint {
+	__le32		fdt_xfer_hint;
+};
+
+struct sli4_rqst_dmtf_exec_clp_cmd {
+	struct sli4_rqst_hdr	hdr;
+	__le32			cmd_buf_length;
+	__le32			resp_buf_length;
+	__le32			cmd_buf_addr_low;
+	__le32			cmd_buf_addr_high;
+	__le32			resp_buf_addr_low;
+	__le32			resp_buf_addr_high;
+};
+
+struct sli4_rsp_dmtf_exec_clp_cmd {
+	struct sli4_rsp_hdr	hdr;
+	__le32			rsvd4;
+	__le32			resp_length;
+	__le32			rsvd6;
+	__le32			rsvd7;
+	__le32			rsvd8;
+	__le32			rsvd9;
+	__le32			clp_status;
+	__le32			clp_detailed_status;
+};
+
+#define SLI4_PROTOCOL_FC		0x10
+#define SLI4_PROTOCOL_DEFAULT		0xff
+
+struct sli4_rspource_descriptor_v1 {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		rsvd16;
+	__le32		type_specific[0];
+};
+
+enum sli4_pcie_desc_flags {
+	SLI4_PCIE_DESC_IMM		= 0x4000,
+	SLI4_PCIE_DESC_NOSV		= 0x8000,
+
+	SLI4_PCIE_DESC_PF_NO		= 0x3ff0000,
+
+	SLI4_PCIE_DESC_MISSN_ROLE	= 0xff,
+	SLI4_PCIE_DESC_PCHG		= 0x8000000,
+	SLI4_PCIE_DESC_SCHG		= 0x10000000,
+	SLI4_PCIE_DESC_XCHG		= 0x20000000,
+	SLI4_PCIE_DESC_XROM		= 0xc0000000
+};
+
+struct sli4_pcie_resource_descriptor_v1 {
+	u8		descriptor_type;
+	u8		descriptor_length;
+	__le16		imm_nosv_dword;
+	__le32		pf_number_dword;
+	__le32		rsvd3;
+	u8		sriov_state;
+	u8		pf_state;
+	u8		pf_type;
+	u8		rsvd4;
+	__le16		number_of_vfs;
+	__le16		rsvd5;
+	__le32		mission_roles_dword;
+	__le32		rsvd7[16];
+};
+
+struct sli4_rqst_cmn_get_function_config {
+	struct sli4_rqst_hdr  hdr;
+};
+
+struct sli4_rsp_cmn_get_function_config {
+	struct sli4_rsp_hdr	hdr;
+	__le32			desc_count;
+	__le32			desc[54];
+};
+
+/* Link Config Descriptor for link config functions */
+struct sli4_link_config_descriptor {
+	u8		link_config_id;
+	u8		rsvd1[3];
+	__le32		config_description[8];
+};
+
+#define MAX_LINK_DES	10
+
+struct sli4_rqst_cmn_get_reconfig_link_info {
+	struct sli4_rqst_hdr  hdr;
+};
+
+struct sli4_rsp_cmn_get_reconfig_link_info {
+	struct sli4_rsp_hdr	hdr;
+	u8			active_link_config_id;
+	u8			rsvd17;
+	u8			next_link_config_id;
+	u8			rsvd19;
+	__le32			link_configuration_descriptor_count;
+	struct sli4_link_config_descriptor
+				desc[MAX_LINK_DES];
+};
+
+enum sli4_set_reconfig_link_flags {
+	SLI4_SET_RECONFIG_LINKID_NEXT	= 0xff,
+	SLI4_SET_RECONFIG_LINKID_FD	= 1u << 31,
+};
+
+struct sli4_rqst_cmn_set_reconfig_link_id {
+	struct sli4_rqst_hdr  hdr;
+	__le32			dw4_flags;
+};
+
+struct sli4_rsp_cmn_set_reconfig_link_id {
+	struct sli4_rsp_hdr	hdr;
+};
+
+struct sli4_rqst_lowlevel_set_watchdog {
+	struct sli4_rqst_hdr	hdr;
+	__le16			watchdog_timeout;
+	__le16			rsvd18;
+};
+
+struct sli4_rsp_lowlevel_set_watchdog {
+	struct sli4_rsp_hdr	hdr;
+	__le32			rsvd;
+};
+
+/* FC opcode (OPC) values */
+enum sli4_fc_opcodes {
+	SLI4_OPC_WQ_CREATE		= 0x1,
+	SLI4_OPC_WQ_DESTROY		= 0x2,
+	SLI4_OPC_POST_SGL_PAGES		= 0x3,
+	SLI4_OPC_RQ_CREATE		= 0x5,
+	SLI4_OPC_RQ_DESTROY		= 0x6,
+	SLI4_OPC_READ_FCF_TABLE		= 0x8,
+	SLI4_OPC_POST_HDR_TEMPLATES	= 0xb,
+	SLI4_OPC_REDISCOVER_FCF		= 0x10,
+};
+
+/* Use the default CQ associated with the WQ */
+#define SLI4_CQ_DEFAULT 0xffff
+
+/*
+ * POST_SGL_PAGES
+ *
+ * Register the scatter gather list (SGL) memory and
+ * associate it with an XRI.
+ */
+struct sli4_rqst_post_sgl_pages {
+	struct sli4_rqst_hdr	hdr;
+	__le16			xri_start;
+	__le16			xri_count;
+	struct {
+		__le32		page0_low;
+		__le32		page0_high;
+		__le32		page1_low;
+		__le32		page1_high;
+	} page_set[10];
+};
+
+struct sli4_rsp_post_sgl_pages {
+	struct sli4_rsp_hdr	hdr;
+};
+
+struct sli4_rqst_post_hdr_templates {
+	struct sli4_rqst_hdr	hdr;
+	__le16			rpi_offset;
+	__le16			page_count;
+	struct sli4_dmaaddr	page_descriptor[0];
+};
+
+#define SLI4_HDR_TEMPLATE_SIZE		64
+
+enum sli4_io_flags {
+/* The XRI associated with this IO is already active */
+	SLI4_IO_CONTINUATION		= 1 << 0,
+/* Automatically generate a good RSP frame */
+	SLI4_IO_AUTO_GOOD_RESPONSE	= 1 << 1,
+	SLI4_IO_NO_ABORT		= 1 << 2,
+/* Set the DNRX bit because no auto xfer rdy buffer is posted */
+	SLI4_IO_DNRX			= 1 << 3,
+};
+
+enum sli4_callback {
+	SLI4_CB_LINK,
+	SLI4_CB_MAX,
+};
+
+enum sli4_link_status {
+	SLI4_LINK_STATUS_UP,
+	SLI4_LINK_STATUS_DOWN,
+	SLI4_LINK_STATUS_NO_ALPA,
+	SLI4_LINK_STATUS_MAX,
+};
+
+enum sli4_link_topology {
+	SLI4_LINK_TOPO_NON_FC_AL = 1,
+	SLI4_LINK_TOPO_FC_AL,
+	SLI4_LINK_TOPO_LOOPBACK_INTERNAL,
+	SLI4_LINK_TOPO_LOOPBACK_EXTERNAL,
+	SLI4_LINK_TOPO_NONE,
+	SLI4_LINK_TOPO_MAX,
+};
+
+enum sli4_link_medium {
+	SLI4_LINK_MEDIUM_ETHERNET,
+	SLI4_LINK_MEDIUM_FC,
+	SLI4_LINK_MEDIUM_MAX,
+};
 /******Driver specific structures******/
 
 struct sli4_queue {
@@ -2020,7 +3534,7 @@ struct sli4_queue {
 	} u;
 };
 
-/* Prameters used to populate WQE*/
+/* Parameters used to populate WQE*/
 struct sli_bls_params {
 	u32		s_id;
 	u32		d_id;
@@ -2081,4 +3595,116 @@ struct sli_fcp_tgt_params {
 	u16		tag;
 };
 
+struct sli4_link_event {
+	enum sli4_link_status	status;
+	enum sli4_link_topology	topology;
+	enum sli4_link_medium	medium;
+	u32			speed;
+	u8			*loop_map;
+	u32			fc_id;
+};
+
+enum sli4_resource {
+	SLI4_RSRC_VFI,
+	SLI4_RSRC_VPI,
+	SLI4_RSRC_RPI,
+	SLI4_RSRC_XRI,
+	SLI4_RSRC_FCFI,
+	SLI4_RSRC_MAX,
+};
+
+struct sli4_extent {
+	u32		number;
+	u32		size;
+	u32		n_alloc;
+	u32		*base;
+	unsigned long	*use_map;
+	u32		map_size;
+};
+
+struct sli4_queue_info {
+	u16	max_qcount[SLI4_QTYPE_MAX];
+	u32	max_qentries[SLI4_QTYPE_MAX];
+	u16	count_mask[SLI4_QTYPE_MAX];
+	u16	count_method[SLI4_QTYPE_MAX];
+	u32	qpage_count[SLI4_QTYPE_MAX];
+};
+
+struct sli4_params {
+	u8	has_extents;
+	u8	auto_reg;
+	u8	auto_xfer_rdy;
+	u8	hdr_template_req;
+	u8	perf_hint;
+	u8	perf_wq_id_association;
+	u8	cq_create_version;
+	u8	mq_create_version;
+	u8	high_login_mode;
+	u8	sgl_pre_registered;
+	u8	sgl_pre_reg_required;
+	u8	t10_dif_inline_capable;
+	u8	t10_dif_separate_capable;
+};
+
+struct sli4 {
+	void			*os;
+	struct pci_dev		*pci;
+	void __iomem		*reg[PCI_STD_NUM_BARS];
+
+	u32			sli_rev;
+	u32			sli_family;
+	u32			if_type;
+
+	u16			asic_type;
+	u16			asic_rev;
+
+	u16			e_d_tov;
+	u16			r_a_tov;
+	struct sli4_queue_info	qinfo;
+	u16			link_module_type;
+	u8			rq_batch;
+	u8			port_number;
+	char			port_name[2];
+	u16			rq_min_buf_size;
+	u32			rq_max_buf_size;
+	u8			topology;
+	u8			wwpn[8];
+	u8			wwnn[8];
+	u32			fw_rev[2];
+	u8			fw_name[2][16];
+	char			ipl_name[16];
+	u32			hw_rev[3];
+	char			modeldesc[64];
+	char			bios_version_string[32];
+	u32			wqe_size;
+	u32			vpd_length;
+	/*
+	 * Tracks the port resources using the extents metaphor. For
+	 * devices that don't implement extents (i.e.
+	 * has_extents == FALSE), the code models each resource as
+	 * a single large extent.
+	 */
+	struct sli4_extent	ext[SLI4_RSRC_MAX];
+	u32			features;
+	struct sli4_params	params;
+	u32			sge_supported_length;
+	u32			sgl_page_sizes;
+	u32			max_sgl_pages;
+
+	/*
+	 * Callback functions
+	 */
+	int			(*link)(void *ctx, void *event);
+	void			*link_arg;
+
+	struct efc_dma		bmbx;
+
+	/* Save pointer to physical memory descriptor for non-embedded
+	 * SLI_CONFIG commands for BMBX dumping purposes
+	 */
+	struct efc_dma		*bmbx_non_emb_pmd;
+
+	struct efc_dma		vpd_data;
+};
+
 #endif /* !_SLI4_H */
-- 
2.26.2



* [PATCH v7 04/31] elx: libefc_sli: queue create/destroy/parse routines
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (2 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 03/31] elx: libefc_sli: Data structures and defines for mbox commands James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 05/31] elx: libefc_sli: Populate and post different WQEs James Smart
                   ` (26 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines to create mailbox commands
and adds APIs to create/destroy/parse SLI-4 EQ, CQ, RQ and MQ queues.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/include/efc_common.h |   16 +
 drivers/scsi/elx/libefc_sli/sli4.c    | 1347 +++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h    |    9 +
 3 files changed, 1372 insertions(+)

diff --git a/drivers/scsi/elx/include/efc_common.h b/drivers/scsi/elx/include/efc_common.h
index e0dbecbc4518..b6849d168bff 100644
--- a/drivers/scsi/elx/include/efc_common.h
+++ b/drivers/scsi/elx/include/efc_common.h
@@ -22,4 +22,20 @@ struct efc_dma {
 	struct pci_dev	*pdev;
 };
 
+#define efc_log_crit(efc, fmt, args...) \
+		dev_crit(&((efc)->pci)->dev, fmt, ##args)
+
+#define efc_log_err(efc, fmt, args...) \
+		dev_err(&((efc)->pci)->dev, fmt, ##args)
+
+#define efc_log_warn(efc, fmt, args...) \
+		dev_warn(&((efc)->pci)->dev, fmt, ##args)
+
+#define efc_log_info(efc, fmt, args...) \
+		dev_info(&((efc)->pci)->dev, fmt, ##args)
+
+#define efc_log_debug(efc, fmt, args...) \
+		dev_dbg(&((efc)->pci)->dev, fmt, ##args)
+
+
 #endif /* __EFC_COMMON_H__ */
diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 5396e14e7bf8..47e6200a20e6 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -19,3 +19,1350 @@ static struct sli4_asic_entry_t sli4_asic_table[] = {
 	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
 	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
 };
+
+/* Convert queue type enum (SLI4_QTYPE_*) into a string */
+static char *SLI4_QNAME[] = {
+	"Event Queue",
+	"Completion Queue",
+	"Mailbox Queue",
+	"Work Queue",
+	"Receive Queue",
+	"Undefined"
+};
+
+/**
+ * sli_config_cmd_init() - Write a SLI_CONFIG command to the provided buffer.
+ *
+ * @sli4: SLI context pointer.
+ * @buf: Destination buffer for the command.
+ * @length: Length in bytes of attached command.
+ * @dma: DMA buffer for non-embedded commands.
+ * Return: Command payload buffer, or NULL on failure.
+ */
+static void *
+sli_config_cmd_init(struct sli4 *sli4, void *buf, u32 length,
+		    struct efc_dma *dma)
+{
+	struct sli4_cmd_sli_config *config;
+	u32 flags;
+
+	if (length > sizeof(config->payload.embed) && !dma) {
+		efc_log_err(sli4, "Too big for an embedded cmd with len(%d)\n",
+			    length);
+		return NULL;
+	}
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	config = buf;
+
+	config->hdr.command = SLI4_MBX_CMD_SLI_CONFIG;
+	if (!dma) {
+		flags = SLI4_SLICONF_EMB;
+		config->dw1_flags = cpu_to_le32(flags);
+		config->payload_len = cpu_to_le32(length);
+		return config->payload.embed;
+	}
+
+	flags = SLI4_SLICONF_PMDCMD_VAL_1;
+	flags &= ~SLI4_SLICONF_EMB;
+	config->dw1_flags = cpu_to_le32(flags);
+
+	config->payload.mem.addr.low = cpu_to_le32(lower_32_bits(dma->phys));
+	config->payload.mem.addr.high =	cpu_to_le32(upper_32_bits(dma->phys));
+	config->payload.mem.length =
+				cpu_to_le32(dma->size & SLI4_SLICONF_PMD_LEN);
+	config->payload_len = cpu_to_le32(dma->size);
+	/* save pointer to DMA for BMBX dumping purposes */
+	sli4->bmbx_non_emb_pmd = dma;
+	return dma->virt;
+}
+
+/**
+ * sli_cmd_common_create_cq() - Write a COMMON_CREATE_CQ V2 command.
+ *
+ * @sli4: SLI context pointer.
+ * @buf: Destination buffer for the command.
+ * @qmem: DMA memory for queue.
+ * @eq_id: EQ ID associated with this CQ.
+ * Return: EFC_SUCCESS on success, EFC_FAIL on failure.
+ */
+static int
+sli_cmd_common_create_cq(struct sli4 *sli4, void *buf, struct efc_dma *qmem,
+			 u16 eq_id)
+{
+	struct sli4_rqst_cmn_create_cq_v2 *cqv2 = NULL;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages = 0;
+	size_t cmd_size = 0;
+	u32 page_size = 0;
+	u32 n_cqe = 0;
+	u32 dw5_flags = 0;
+	u16 dw6w1_arm = 0;
+	__le32 len;
+
+	/* First calculate number of pages and the mailbox cmd length */
+	n_cqe = qmem->size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = SZ_4K;
+		break;
+	case 4096:
+		page_size = SZ_8K;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+	num_pages = sli_page_count(qmem->size, page_size);
+
+	cmd_size = SLI4_RQST_CMDSZ(cmn_create_cq_v2)
+		   + SZ_DMAADDR * num_pages;
+
+	cqv2 = sli_config_cmd_init(sli4, buf, cmd_size, NULL);
+	if (!cqv2)
+		return EFC_FAIL;
+
+	len = SLI4_RQST_PYLD_LEN_VAR(cmn_create_cq_v2, SZ_DMAADDR * num_pages);
+	sli_cmd_fill_hdr(&cqv2->hdr, SLI4_CMN_CREATE_CQ, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V2, len);
+	cqv2->page_size = page_size / SLI_PAGE_SIZE;
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.3) */
+	cqv2->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_CREATE_CQV2_MAX_PAGES)
+		return EFC_FAIL;
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= SLI4_CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= SLI4_CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= SLI4_CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= SLI4_CQ_CNT_VAL(LARGE);
+		cqv2->cqe_count = cpu_to_le16(n_cqe);
+		break;
+	default:
+		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= SLI4_CREATE_CQV2_AUTOVALID;
+
+	dw5_flags |= SLI4_CREATE_CQV2_EVT;
+	dw5_flags |= SLI4_CREATE_CQV2_VALID;
+
+	cqv2->dw5_flags = cpu_to_le32(dw5_flags);
+	cqv2->dw6w1_arm = cpu_to_le16(dw6w1_arm);
+	cqv2->eq_id = cpu_to_le16(eq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_size) {
+		cqv2->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
+		cqv2->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_create_eq(struct sli4 *sli4, void *buf, struct efc_dma *qmem)
+{
+	struct sli4_rqst_cmn_create_eq *eq;
+	u32 p;
+	uintptr_t addr;
+	u16 num_pages;
+	u32 dw5_flags = 0;
+	u32 dw6_flags = 0, ver;
+
+	eq = sli_config_cmd_init(sli4, buf, SLI4_CFG_PYLD_LENGTH(cmn_create_eq),
+				 NULL);
+	if (!eq)
+		return EFC_FAIL;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		ver = CMD_V2;
+	else
+		ver = CMD_V0;
+
+	sli_cmd_fill_hdr(&eq->hdr, SLI4_CMN_CREATE_EQ, SLI4_SUBSYSTEM_COMMON,
+			 ver, SLI4_RQST_PYLD_LEN(cmn_create_eq));
+
+	/* valid values for number of pages: 1, 2, 4 (sec 4.4.3) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	eq->num_pages = cpu_to_le16(num_pages);
+
+	switch (num_pages) {
+	case 1:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= SLI4_EQ_CNT_VAL(1024);
+		break;
+	case 2:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= SLI4_EQ_CNT_VAL(2048);
+		break;
+	case 4:
+		dw5_flags |= SLI4_EQE_SIZE_4;
+		dw6_flags |= SLI4_EQ_CNT_VAL(4096);
+		break;
+	default:
+		efc_log_err(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= SLI4_CREATE_EQ_AUTOVALID;
+
+	dw5_flags |= SLI4_CREATE_EQ_VALID;
+	dw6_flags &= (~SLI4_CREATE_EQ_ARM);
+	eq->dw5_flags = cpu_to_le32(dw5_flags);
+	eq->dw6_flags = cpu_to_le32(dw6_flags);
+	eq->dw7_delaymulti = cpu_to_le32(SLI4_CREATE_EQ_DELAYMULTI);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		eq->page_address[p].low = cpu_to_le32(lower_32_bits(addr));
+		eq->page_address[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_create_mq_ext(struct sli4 *sli4, void *buf, struct efc_dma *qmem,
+			     u16 cq_id)
+{
+	struct sli4_rqst_cmn_create_mq_ext *mq;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+	u16 dw6w1_flags = 0;
+
+	mq = sli_config_cmd_init(sli4, buf,
+				 SLI4_CFG_PYLD_LENGTH(cmn_create_mq_ext), NULL);
+	if (!mq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&mq->hdr, SLI4_CMN_CREATE_MQ_EXT,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_create_mq_ext));
+
+	/* valid values for number of pages: 1, 2, 4, 8 (sec 4.4.12) */
+	num_pages = qmem->size / SLI_PAGE_SIZE;
+	mq->num_pages = cpu_to_le16(num_pages);
+	switch (num_pages) {
+	case 1:
+		dw6w1_flags |= SLI4_MQE_SIZE_16;
+		break;
+	case 2:
+		dw6w1_flags |= SLI4_MQE_SIZE_32;
+		break;
+	case 4:
+		dw6w1_flags |= SLI4_MQE_SIZE_64;
+		break;
+	case 8:
+		dw6w1_flags |= SLI4_MQE_SIZE_128;
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages);
+		return EFC_FAIL;
+	}
+
+	mq->async_event_bitmap = cpu_to_le32(SLI4_ASYNC_EVT_FC_ALL);
+
+	if (sli4->params.mq_create_version) {
+		mq->cq_id_v1 = cpu_to_le16(cq_id);
+		mq->hdr.dw3_version = cpu_to_le32(CMD_V1);
+	} else {
+		dw6w1_flags |= (cq_id << SLI4_CREATE_MQEXT_CQID_SHIFT);
+	}
+	mq->dw7_val = cpu_to_le32(SLI4_CREATE_MQEXT_VAL);
+
+	mq->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		mq->page_phys_addr[p].low = cpu_to_le32(lower_32_bits(addr));
+		mq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_wq_create(struct sli4 *sli4, void *buf, struct efc_dma *qmem, u16 cq_id)
+{
+	struct sli4_rqst_wq_create *wq;
+	u32 p;
+	uintptr_t addr;
+	u32 page_size = 0;
+	u32 n_wqe = 0;
+	u16 num_pages;
+
+	wq = sli_config_cmd_init(sli4, buf, SLI4_CFG_PYLD_LENGTH(wq_create),
+				 NULL);
+	if (!wq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&wq->hdr, SLI4_OPC_WQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V1, SLI4_RQST_PYLD_LEN(wq_create));
+	n_wqe = qmem->size / sli4->wqe_size;
+
+	switch (qmem->size) {
+	case 4096:
+	case 8192:
+	case 16384:
+	case 32768:
+		page_size = SZ_4K;
+		break;
+	case 65536:
+		page_size = SZ_8K;
+		break;
+	case 131072:
+		page_size = SZ_16K;
+		break;
+	case 262144:
+		page_size = SZ_32K;
+		break;
+	case 524288:
+		page_size = SZ_64K;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	/* valid values for number of pages(num_pages): 1-8 */
+	num_pages = sli_page_count(qmem->size, page_size);
+	wq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages || num_pages > SLI4_WQ_CREATE_MAX_PAGES)
+		return EFC_FAIL;
+
+	wq->cq_id = cpu_to_le16(cq_id);
+
+	wq->page_size = page_size / SLI_PAGE_SIZE;
+
+	if (sli4->wqe_size == SLI4_WQE_EXT_BYTES)
+		wq->wqe_size_byte |= SLI4_WQE_EXT_SIZE;
+	else
+		wq->wqe_size_byte |= SLI4_WQE_SIZE;
+
+	wq->wqe_count = cpu_to_le16(n_wqe);
+
+	for (p = 0, addr = qmem->phys; p < num_pages; p++, addr += page_size) {
+		wq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		wq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_rq_create_v1(struct sli4 *sli4, void *buf, struct efc_dma *qmem,
+		     u16 cq_id, u16 buffer_size)
+{
+	struct sli4_rqst_rq_create_v1 *rq;
+	u32 p;
+	uintptr_t addr;
+	u32 num_pages;
+
+	rq = sli_config_cmd_init(sli4, buf, SLI4_CFG_PYLD_LENGTH(rq_create_v1),
+				 NULL);
+	if (!rq)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rq->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V1, SLI4_RQST_PYLD_LEN(rq_create_v1));
+	/* Disable "no buffer warnings" to avoid Lancer bug */
+	rq->dim_dfd_dnb |= SLI4_RQ_CREATE_V1_DNB;
+
+	/* valid values for number of pages: 1-8 (sec 4.5.6) */
+	num_pages = sli_page_count(qmem->size, SLI_PAGE_SIZE);
+	rq->num_pages = cpu_to_le16(num_pages);
+	if (!num_pages ||
+	    num_pages > SLI4_RQ_CREATE_V1_MAX_PAGES) {
+		efc_log_info(sli4, "num_pages %d not valid, max %d\n",
+			num_pages, SLI4_RQ_CREATE_V1_MAX_PAGES);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * RQE count is the total number of entries (note not lg2(# entries))
+	 */
+	rq->rqe_count = cpu_to_le16(qmem->size / SLI4_RQE_SIZE);
+
+	rq->rqe_size_byte |= SLI4_RQE_SIZE_8;
+
+	rq->page_size = SLI4_RQ_PAGE_SIZE_4096;
+
+	if (buffer_size < sli4->rq_min_buf_size ||
+	    buffer_size > sli4->rq_max_buf_size) {
+		efc_log_err(sli4, "buffer_size %d out of range (%d-%d)\n",
+			    buffer_size, sli4->rq_min_buf_size,
+			    sli4->rq_max_buf_size);
+		return EFC_FAIL;
+	}
+	rq->buffer_size = cpu_to_le32(buffer_size);
+
+	rq->cq_id = cpu_to_le16(cq_id);
+
+	for (p = 0, addr = qmem->phys; p < num_pages;
+	     p++, addr += SLI_PAGE_SIZE) {
+		rq->page_phys_addr[p].low  = cpu_to_le32(lower_32_bits(addr));
+		rq->page_phys_addr[p].high = cpu_to_le32(upper_32_bits(addr));
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_rq_create_v2(struct sli4 *sli4, u32 num_rqs,
+		     struct sli4_queue *qs[], u32 base_cq_id,
+		     u32 header_buffer_size,
+		     u32 payload_buffer_size, struct efc_dma *dma)
+{
+	struct sli4_rqst_rq_create_v2 *req = NULL;
+	u32 i, p, offset = 0;
+	u32 payload_size, page_count;
+	uintptr_t addr;
+	u32 num_pages;
+	__le32 len;
+
+	page_count =  sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE) * num_rqs;
+
+	/* Payload length must accommodate both request and response */
+	payload_size = max(SLI4_RQST_CMDSZ(rq_create_v2) +
+			   SZ_DMAADDR * page_count,
+			   sizeof(struct sli4_rsp_cmn_create_queue_set));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pci->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	len =  SLI4_RQST_PYLD_LEN_VAR(rq_create_v2, SZ_DMAADDR * page_count);
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_RQ_CREATE, SLI4_SUBSYSTEM_FC,
+			 CMD_V2, len);
+	/* Fill Payload fields */
+	req->dim_dfd_dnb |= SLI4_RQCREATEV2_DNB;
+	num_pages = sli_page_count(qs[0]->dma.size, SLI_PAGE_SIZE);
+	req->num_pages = cpu_to_le16(num_pages);
+	req->rqe_count = cpu_to_le16(qs[0]->dma.size / SLI4_RQE_SIZE);
+	req->rqe_size_byte |= SLI4_RQE_SIZE_8;
+	req->page_size = SLI4_RQ_PAGE_SIZE_4096;
+	req->rq_count = num_rqs;
+	req->base_cq_id = cpu_to_le16(base_cq_id);
+	req->hdr_buffer_size = cpu_to_le16(header_buffer_size);
+	req->payload_buffer_size = cpu_to_le16(payload_buffer_size);
+
+	for (i = 0; i < num_rqs; i++) {
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages;
+		     p++, addr += SLI_PAGE_SIZE) {
+			req->page_phys_addr[offset].low =
+					cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+					cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+__sli_queue_destroy(struct sli4 *sli4, struct sli4_queue *q)
+{
+	if (!q->dma.size)
+		return;
+
+	dma_free_coherent(&sli4->pci->dev, q->dma.size,
+			  q->dma.virt, q->dma.phys);
+	memset(&q->dma, 0, sizeof(struct efc_dma));
+}
+
+int
+__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q, u32 qtype,
+		 size_t size, u32 n_entries, u32 align)
+{
+	if (q->dma.virt) {
+		efc_log_err(sli4, "%s failed\n", __func__);
+		return EFC_FAIL;
+	}
+
+	memset(q, 0, sizeof(struct sli4_queue));
+
+	q->dma.size = size * n_entries;
+	q->dma.virt = dma_alloc_coherent(&sli4->pci->dev, q->dma.size,
+					 &q->dma.phys, GFP_DMA);
+	if (!q->dma.virt) {
+		memset(&q->dma, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "%s allocation failed\n", SLI4_QNAME[qtype]);
+		return EFC_FAIL;
+	}
+
+	memset(q->dma.virt, 0, size * n_entries);
+
+	spin_lock_init(&q->lock);
+
+	q->type = qtype;
+	q->size = size;
+	q->length = n_entries;
+
+	if (q->type == SLI4_QTYPE_EQ || q->type == SLI4_QTYPE_CQ) {
+		/* For prism, phase will be flipped after
+		 * a sweep through eq and cq
+		 */
+		q->phase = 1;
+	}
+
+	/* Limit to half the queue size per interrupt */
+	q->proc_limit = n_entries / 2;
+
+	if (q->type == SLI4_QTYPE_EQ)
+		q->posted_limit = q->length / 2;
+	else
+		q->posted_limit = 64;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q,
+		u32 n_entries, u32 buffer_size,
+		struct sli4_queue *cq, bool is_hdr)
+{
+	if (__sli_queue_init(sli4, q, SLI4_QTYPE_RQ, SLI4_RQE_SIZE,
+			     n_entries, SLI_PAGE_SIZE))
+		return EFC_FAIL;
+
+	if (sli_cmd_rq_create_v1(sli4, sli4->bmbx.virt, &q->dma, cq->id,
+				 buffer_size))
+		goto error;
+
+	if (__sli_create_queue(sli4, q))
+		goto error;
+
+	if (is_hdr && q->id & 1) {
+		efc_log_info(sli4, "bad header RQ_ID %d\n", q->id);
+		goto error;
+	} else if (!is_hdr && (q->id & 1) == 0) {
+		efc_log_info(sli4, "bad data RQ_ID %d\n", q->id);
+		goto error;
+	}
+
+	if (is_hdr)
+		q->u.flag |= SLI4_QUEUE_FLAG_HDR;
+	else
+		q->u.flag &= ~SLI4_QUEUE_FLAG_HDR;
+
+	return EFC_SUCCESS;
+
+error:
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+int
+sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs,
+		    struct sli4_queue *qs[], u32 base_cq_id,
+		    u32 n_entries, u32 header_buffer_size,
+		    u32 payload_buffer_size)
+{
+	u32 i;
+	struct efc_dma dma = {0};
+	struct sli4_rsp_cmn_create_queue_set *rsp = NULL;
+	void __iomem *db_regaddr = NULL;
+	u32 num_rqs = num_rq_pairs * 2;
+
+	for (i = 0; i < num_rqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI4_QTYPE_RQ,
+				     SLI4_RQE_SIZE, n_entries,
+				     SLI_PAGE_SIZE)) {
+			goto error;
+		}
+	}
+
+	if (sli_cmd_rq_create_v2(sli4, num_rqs, qs, base_cq_id,
+			       header_buffer_size, payload_buffer_size, &dma)) {
+		goto error;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_err(sli4, "bootstrap mailbox write failed RQSet\n");
+		goto error;
+	}
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_RQ_DB_REG;
+
+	rsp = dma.virt;
+	if (rsp->hdr.status) {
+		efc_log_err(sli4, "bad create RQSet status=%#x addl=%#x\n",
+		       rsp->hdr.status, rsp->hdr.additional_status);
+		goto error;
+	}
+
+	for (i = 0; i < num_rqs; i++) {
+		qs[i]->id = i + le16_to_cpu(rsp->q_id);
+		if ((qs[i]->id & 1) == 0)
+			qs[i]->u.flag |= SLI4_QUEUE_FLAG_HDR;
+		else
+			qs[i]->u.flag &= ~SLI4_QUEUE_FLAG_HDR;
+
+		qs[i]->db_regaddr = db_regaddr;
+	}
+
+	dma_free_coherent(&sli4->pci->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_rqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pci->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+static int
+sli_res_sli_config(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_sli_config *sli_config = buf;
+
+	/* sanity check */
+	if (!buf || sli_config->hdr.command !=
+		    SLI4_MBX_CMD_SLI_CONFIG) {
+		efc_log_err(sli4, "bad parameter buf=%p cmd=%#x\n", buf,
+		       buf ? sli_config->hdr.command : -1);
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(sli_config->hdr.status))
+		return le16_to_cpu(sli_config->hdr.status);
+
+	if (le32_to_cpu(sli_config->dw1_flags) & SLI4_SLICONF_EMB)
+		return sli_config->payload.embed[4];
+
+	efc_log_info(sli4, "external buffers not supported\n");
+	return EFC_FAIL;
+}
+
+int
+__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q)
+{
+	struct sli4_rsp_cmn_create_queue *res_q = NULL;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail %s\n",
+			SLI4_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status create %s\n",
+		       SLI4_QNAME[q->type]);
+		return EFC_FAIL;
+	}
+	res_q = (void *)((u8 *)sli4->bmbx.virt +
+			offsetof(struct sli4_cmd_sli_config, payload));
+
+	if (res_q->hdr.status) {
+		efc_log_err(sli4, "bad create %s status=%#x addl=%#x\n",
+		       SLI4_QNAME[q->type], res_q->hdr.status,
+			    res_q->hdr.additional_status);
+		return EFC_FAIL;
+	}
+	q->id = le16_to_cpu(res_q->q_id);
+	switch (q->type) {
+	case SLI4_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_EQ_DB_REG;
+		else
+			q->db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
+		break;
+	case SLI4_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+		else
+			q->db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
+		break;
+	case SLI4_QTYPE_MQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_MQ_DB_REG;
+		else
+			q->db_regaddr = sli4->reg[0] + SLI4_MQ_DB_REG;
+		break;
+	case SLI4_QTYPE_RQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_RQ_DB_REG;
+		else
+			q->db_regaddr = sli4->reg[0] + SLI4_RQ_DB_REG;
+		break;
+	case SLI4_QTYPE_WQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			q->db_regaddr = sli4->reg[1] + SLI4_IF6_WQ_DB_REG;
+		else
+			q->db_regaddr = sli4->reg[0] + SLI4_IO_WQ_DB_REG;
+		break;
+	default:
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype)
+{
+	u32 size = 0;
+
+	switch (qtype) {
+	case SLI4_QTYPE_EQ:
+		size = sizeof(u32);
+		break;
+	case SLI4_QTYPE_CQ:
+		size = 16;
+		break;
+	case SLI4_QTYPE_MQ:
+		size = 256;
+		break;
+	case SLI4_QTYPE_WQ:
+		size = sli4->wqe_size;
+		break;
+	case SLI4_QTYPE_RQ:
+		size = SLI4_RQE_SIZE;
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+		return -1;
+	}
+	return size;
+}
+
+int
+sli_queue_alloc(struct sli4 *sli4, u32 qtype,
+		struct sli4_queue *q, u32 n_entries,
+		     struct sli4_queue *assoc)
+{
+	int size;
+	u32 align = 0;
+
+	/* get queue size */
+	size = sli_get_queue_entry_size(sli4, qtype);
+	if (size < 0)
+		return EFC_FAIL;
+	align = SLI_PAGE_SIZE;
+
+	if (__sli_queue_init(sli4, q, qtype, size, n_entries, align))
+		return EFC_FAIL;
+
+	switch (qtype) {
+	case SLI4_QTYPE_EQ:
+		if (!sli_cmd_common_create_eq(sli4, sli4->bmbx.virt, &q->dma) &&
+		    !__sli_create_queue(sli4, q))
+			return EFC_SUCCESS;
+
+		break;
+	case SLI4_QTYPE_CQ:
+		if (!sli_cmd_common_create_cq(sli4, sli4->bmbx.virt, &q->dma,
+						assoc ? assoc->id : 0) &&
+		    !__sli_create_queue(sli4, q))
+			return EFC_SUCCESS;
+
+		break;
+	case SLI4_QTYPE_MQ:
+		assoc->u.flag |= SLI4_QUEUE_FLAG_MQ;
+		if (!sli_cmd_common_create_mq_ext(sli4, sli4->bmbx.virt,
+						  &q->dma, assoc->id) &&
+		    !__sli_create_queue(sli4, q))
+			return EFC_SUCCESS;
+
+		break;
+	case SLI4_QTYPE_WQ:
+		if (!sli_cmd_wq_create(sli4, sli4->bmbx.virt, &q->dma,
+						assoc ? assoc->id : 0) &&
+		    !__sli_create_queue(sli4, q))
+			return EFC_SUCCESS;
+
+		break;
+	default:
+		efc_log_info(sli4, "unknown queue type %d\n", qtype);
+	}
+
+	__sli_queue_destroy(sli4, q);
+	return EFC_FAIL;
+}
+
+static int sli_cmd_cq_set_create(struct sli4 *sli4,
+				 struct sli4_queue *qs[], u32 num_cqs,
+				 struct sli4_queue *eqs[],
+				 struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_create_cq_set_v0 *req = NULL;
+	uintptr_t addr;
+	u32 i, offset = 0, page_bytes = 0, payload_size;
+	u32 p = 0, page_size = 0, n_cqe = 0, num_pages_cq;
+	u32 dw5_flags = 0;
+	u16 dw6w1_flags = 0;
+	__le32 req_len;
+
+	n_cqe = qs[0]->dma.size / SLI4_CQE_BYTES;
+	switch (n_cqe) {
+	case 256:
+	case 512:
+	case 1024:
+	case 2048:
+		page_size = 1;
+		break;
+	case 4096:
+		page_size = 2;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	page_bytes = page_size * SLI_PAGE_SIZE;
+	num_pages_cq = sli_page_count(qs[0]->dma.size, page_bytes);
+	payload_size = max(SLI4_RQST_CMDSZ(cmn_create_cq_set_v0) +
+			   (SZ_DMAADDR * num_pages_cq * num_cqs),
+			   sizeof(struct sli4_rsp_cmn_create_queue_set));
+
+	dma->size = payload_size;
+	dma->virt = dma_alloc_coherent(&sli4->pci->dev, dma->size,
+				      &dma->phys, GFP_DMA);
+	if (!dma->virt)
+		return EFC_FAIL;
+
+	memset(dma->virt, 0, payload_size);
+
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt, payload_size, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	req_len = SLI4_RQST_PYLD_LEN_VAR(cmn_create_cq_set_v0,
+					SZ_DMAADDR * num_pages_cq * num_cqs);
+	sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_CREATE_CQ_SET, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, req_len);
+	req->page_size = page_size;
+
+	req->num_pages = cpu_to_le16(num_pages_cq);
+	switch (num_pages_cq) {
+	case 1:
+		dw5_flags |= SLI4_CQ_CNT_VAL(256);
+		break;
+	case 2:
+		dw5_flags |= SLI4_CQ_CNT_VAL(512);
+		break;
+	case 4:
+		dw5_flags |= SLI4_CQ_CNT_VAL(1024);
+		break;
+	case 8:
+		dw5_flags |= SLI4_CQ_CNT_VAL(LARGE);
+		dw6w1_flags |= (n_cqe & SLI4_CREATE_CQSETV0_CQE_COUNT);
+		break;
+	default:
+		efc_log_info(sli4, "num_pages %d not valid\n", num_pages_cq);
+		return EFC_FAIL;
+	}
+
+	dw5_flags |= SLI4_CREATE_CQSETV0_EVT;
+	dw5_flags |= SLI4_CREATE_CQSETV0_VALID;
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		dw5_flags |= SLI4_CREATE_CQSETV0_AUTOVALID;
+
+	dw6w1_flags &= ~SLI4_CREATE_CQSETV0_ARM;
+
+	req->dw5_flags = cpu_to_le32(dw5_flags);
+	req->dw6w1_flags = cpu_to_le16(dw6w1_flags);
+
+	req->num_cq_req = cpu_to_le16(num_cqs);
+
+	/* Fill page addresses of all the CQs. */
+	for (i = 0; i < num_cqs; i++) {
+		req->eq_id[i] = cpu_to_le16(eqs[i]->id);
+		for (p = 0, addr = qs[i]->dma.phys; p < num_pages_cq;
+		     p++, addr += page_bytes) {
+			req->page_phys_addr[offset].low =
+				cpu_to_le32(lower_32_bits(addr));
+			req->page_phys_addr[offset].high =
+				cpu_to_le32(upper_32_bits(addr));
+			offset++;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[],
+		 u32 num_cqs, u32 n_entries, struct sli4_queue *eqs[])
+{
+	u32 i;
+	struct efc_dma dma = {0};
+	struct sli4_rsp_cmn_create_queue_set *res;
+	void __iomem *db_regaddr;
+
+	/* Align the queue DMA memory */
+	for (i = 0; i < num_cqs; i++) {
+		if (__sli_queue_init(sli4, qs[i], SLI4_QTYPE_CQ,
+				SLI4_CQE_BYTES, n_entries, SLI_PAGE_SIZE))
+			goto error;
+	}
+
+	if (sli_cmd_cq_set_create(sli4, qs, num_cqs, eqs, &dma))
+		goto error;
+
+	if (sli_bmbx_command(sli4))
+		goto error;
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		db_regaddr = sli4->reg[1] + SLI4_IF6_CQ_DB_REG;
+	else
+		db_regaddr = sli4->reg[0] + SLI4_EQCQ_DB_REG;
+
+	res = dma.virt;
+	if (res->hdr.status) {
+		efc_log_err(sli4, "bad create CQSet status=%#x addl=%#x\n",
+		       res->hdr.status, res->hdr.additional_status);
+		goto error;
+	}
+
+	/* Check if we got all requested CQs. */
+	if (le16_to_cpu(res->num_q_allocated) != num_cqs) {
+		efc_log_crit(sli4, "Requested number of CQs not allocated\n");
+		goto error;
+	}
+	/* Fill the resp cq ids. */
+	for (i = 0; i < num_cqs; i++) {
+		qs[i]->id = le16_to_cpu(res->q_id) + i;
+		qs[i]->db_regaddr = db_regaddr;
+	}
+
+	dma_free_coherent(&sli4->pci->dev, dma.size, dma.virt, dma.phys);
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_cqs; i++)
+		__sli_queue_destroy(sli4, qs[i]);
+
+	if (dma.virt)
+		dma_free_coherent(&sli4->pci->dev, dma.size, dma.virt,
+				  dma.phys);
+
+	return EFC_FAIL;
+}
+
+static int
+sli_cmd_common_destroy_q(struct sli4 *sli4, u8 opc, u8 subsystem, u16 q_id)
+{
+	struct sli4_rqst_cmn_destroy_q *req;
+
+	/* Payload length must accommodate both request and response */
+	req = sli_config_cmd_init(sli4, sli4->bmbx.virt,
+				  SLI4_CFG_PYLD_LENGTH(cmn_destroy_q), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, opc, subsystem,
+			 CMD_V0, SLI4_RQST_PYLD_LEN(cmn_destroy_q));
+	req->q_id = cpu_to_le16(q_id);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_free(struct sli4 *sli4, struct sli4_queue *q,
+	       u32 destroy_queues, u32 free_memory)
+{
+	int rc = EFC_SUCCESS;
+	u8 opcode, subsystem;
+	struct sli4_rsp_hdr *res;
+
+	if (!q) {
+		efc_log_err(sli4, "bad parameter sli4=%p q=%p\n", sli4, q);
+		return EFC_FAIL;
+	}
+
+	if (!destroy_queues)
+		goto free_mem;
+
+	switch (q->type) {
+	case SLI4_QTYPE_EQ:
+		opcode = SLI4_CMN_DESTROY_EQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI4_QTYPE_CQ:
+		opcode = SLI4_CMN_DESTROY_CQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI4_QTYPE_MQ:
+		opcode = SLI4_CMN_DESTROY_MQ;
+		subsystem = SLI4_SUBSYSTEM_COMMON;
+		break;
+	case SLI4_QTYPE_WQ:
+		opcode = SLI4_OPC_WQ_DESTROY;
+		subsystem = SLI4_SUBSYSTEM_FC;
+		break;
+	case SLI4_QTYPE_RQ:
+		opcode = SLI4_OPC_RQ_DESTROY;
+		subsystem = SLI4_SUBSYSTEM_FC;
+		break;
+	default:
+		efc_log_info(sli4, "bad queue type %d\n", q->type);
+		rc = EFC_FAIL;
+		goto free_mem;
+	}
+
+	rc = sli_cmd_common_destroy_q(sli4, opcode, subsystem, q->id);
+	if (rc)
+		goto free_mem;
+
+	rc = sli_bmbx_command(sli4);
+	if (rc)
+		goto free_mem;
+
+	rc = sli_res_sli_config(sli4, sli4->bmbx.virt);
+	if (rc)
+		goto free_mem;
+
+	res = (void *)((u8 *)sli4->bmbx.virt +
+			     offsetof(struct sli4_cmd_sli_config, payload));
+	if (res->status) {
+		efc_log_err(sli4, "destroy %s st=%#x addl=%#x\n",
+				   SLI4_QNAME[q->type], res->status,
+				   res->additional_status);
+		rc = EFC_FAIL;
+		goto free_mem;
+	}
+
+free_mem:
+	if (free_memory)
+		__sli_queue_destroy(sli4, q);
+
+	return rc;
+}
+
+int
+sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
+{
+	u32 val;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+		val = sli_format_if6_eq_db_data(q->n_posted, q->id, a);
+	else
+		val = sli_format_eq_db_data(q->n_posted, q->id, a);
+
+	writel(val, q->db_regaddr);
+	q->n_posted = 0;
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm)
+{
+	u32 val = 0;
+	unsigned long flags = 0;
+	u32 a = arm ? SLI4_EQCQ_ARM : SLI4_EQCQ_UNARM;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	switch (q->type) {
+	case SLI4_QTYPE_EQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = sli_format_if6_eq_db_data(q->n_posted, q->id, a);
+		else
+			val = sli_format_eq_db_data(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	case SLI4_QTYPE_CQ:
+		if (sli4->if_type == SLI4_INTF_IF_TYPE_6)
+			val = sli_format_if6_cq_db_data(q->n_posted, q->id, a);
+		else
+			val = sli_format_cq_db_data(q->n_posted, q->id, a);
+
+		writel(val, q->db_regaddr);
+		q->n_posted = 0;
+		break;
+	default:
+		efc_log_info(sli4, "should only be used for EQ/CQ, not %s\n",
+			SLI4_QNAME[q->type]);
+	}
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_wq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	if (sli4->params.perf_wq_id_association)
+		sli_set_wq_id_association(entry, q->id);
+
+	memcpy(qe, entry, q->size);
+	val = sli_format_wq_db_data(q->id);
+
+	writel(val, q->db_regaddr);
+	q->index = (q->index + 1) & (q->length - 1);
+
+	return qindex;
+}
+
+int
+sli_mq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&q->lock, flags);
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+	val = sli_format_mq_db_data(q->id);
+	writel(val, q->db_regaddr);
+	q->index = (q->index + 1) & (q->length - 1);
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return qindex;
+}
+
+int
+sli_rq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	u32 qindex;
+	u32 val = 0;
+
+	qindex = q->index;
+	qe += q->index * q->size;
+
+	memcpy(qe, entry, q->size);
+
+	/*
+	 * In RQ-pair, an RQ either contains the FC header
+	 * (i.e. is_hdr == TRUE) or the payload.
+	 *
+	 * Don't ring doorbell for payload RQ
+	 */
+	if (!(q->u.flag & SLI4_QUEUE_FLAG_HDR))
+		goto skip;
+
+	val = sli_format_rq_db_data(q->id);
+	writel(val, q->db_regaddr);
+skip:
+	q->index = (q->index + 1) & (q->length - 1);
+
+	return qindex;
+}
+
+int
+sli_eq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	unsigned long flags = 0;
+	u16 wflags = 0;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += q->index * q->size;
+
+	/* Check if eqe is valid */
+	wflags = le16_to_cpu(((struct sli4_eqe *)qe)->dw0w0_flags);
+
+	if ((wflags & SLI4_EQE_VALID) != q->phase) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type != SLI4_INTF_IF_TYPE_6) {
+		wflags &= ~SLI4_EQE_VALID;
+		((struct sli4_eqe *)qe)->dw0w0_flags = cpu_to_le16(wflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	q->index = (q->index + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && q->index == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	unsigned long flags = 0;
+	u32 dwflags = 0;
+	bool valid_bit_set;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += q->index * q->size;
+
+	/* Check if cqe is valid */
+	dwflags = le32_to_cpu(((struct sli4_mcqe *)qe)->dw3_flags);
+	valid_bit_set = (dwflags & SLI4_MCQE_VALID) != 0;
+
+	if (valid_bit_set != q->phase) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	if (sli4->if_type != SLI4_INTF_IF_TYPE_6) {
+		dwflags &= ~SLI4_MCQE_VALID;
+		((struct sli4_mcqe *)qe)->dw3_flags = cpu_to_le32(dwflags);
+	}
+
+	memcpy(entry, qe, q->size);
+	q->index = (q->index + 1) & (q->length - 1);
+	q->n_posted++;
+	/*
+	 * For prism, the phase value will be used
+	 * to check the validity of eq/cq entries.
+	 * The value toggles after a complete sweep
+	 * through the queue.
+	 */
+
+	if (sli4->if_type == SLI4_INTF_IF_TYPE_6 && q->index == 0)
+		q->phase ^= (u16)0x1;
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_mq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry)
+{
+	u8 *qe = q->dma.virt;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&q->lock, flags);
+
+	qe += q->u.r_idx * q->size;
+
+	/* Check if mqe is valid */
+	if (q->index == q->u.r_idx) {
+		spin_unlock_irqrestore(&q->lock, flags);
+		return EFC_FAIL;
+	}
+
+	memcpy(entry, qe, q->size);
+	q->u.r_idx = (q->u.r_idx + 1) & (q->length - 1);
+
+	spin_unlock_irqrestore(&q->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id)
+{
+	struct sli4_eqe *eqe = (void *)buf;
+	int rc = EFC_SUCCESS;
+	u16 flags = 0;
+	u16 majorcode;
+	u16 minorcode;
+
+	if (!buf || !cq_id) {
+		efc_log_err(sli4, "bad parameters sli4=%p buf=%p cq_id=%p\n",
+		       sli4, buf, cq_id);
+		return EFC_FAIL;
+	}
+
+	flags = le16_to_cpu(eqe->dw0w0_flags);
+	majorcode = (flags & SLI4_EQE_MJCODE) >> 1;
+	minorcode = (flags & SLI4_EQE_MNCODE) >> 4;
+	switch (majorcode) {
+	case SLI4_MAJOR_CODE_STANDARD:
+		*cq_id = le16_to_cpu(eqe->resource_id);
+		break;
+	case SLI4_MAJOR_CODE_SENTINEL:
+		efc_log_info(sli4, "sentinel EQE\n");
+		rc = SLI4_EQE_STATUS_EQ_FULL;
+		break;
+	default:
+		efc_log_info(sli4, "Unsupported EQE: major %x minor %x\n",
+			majorcode, minorcode);
+		rc = EFC_FAIL;
+	}
+
+	return rc;
+}
+
+int
+sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
+	     enum sli4_qentry *etype, u16 *q_id)
+{
+	int rc = EFC_SUCCESS;
+
+	if (!cq || !cqe || !etype) {
+		efc_log_err(sli4, "bad params sli4=%p cq=%p cqe=%p etype=%p q_id=%p\n",
+		       sli4, cq, cqe, etype, q_id);
+		return -EINVAL;
+	}
+
+	/* Parse a CQ entry to retrieve the event type and the queue id */
+	if (cq->u.flag & SLI4_QUEUE_FLAG_MQ) {
+		struct sli4_mcqe *mcqe = (void *)cqe;
+
+		if (le32_to_cpu(mcqe->dw3_flags) & SLI4_MCQE_AE) {
+			*etype = SLI4_QENTRY_ASYNC;
+		} else {
+			*etype = SLI4_QENTRY_MQ;
+			rc = sli_cqe_mq(sli4, mcqe);
+		}
+		*q_id = -1;
+	} else {
+		rc = sli_fc_cqe_parse(sli4, cq, cqe, etype, q_id);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 39b6fceed85e..5bc9e8b8ffd1 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -3707,4 +3707,13 @@ struct sli4 {
 	struct efc_dma		vpd_data;
 };
 
+static inline void
+sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
+{
+	hdr->opcode = opc;
+	hdr->subsystem = sub;
+	hdr->dw3_version = cpu_to_le32(ver);
+	hdr->request_length = len;
+}
+
 #endif /* !_SLI4_H */
-- 
2.26.2



* [PATCH v7 05/31] elx: libefc_sli: Populate and post different WQEs
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (3 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 04/31] elx: libefc_sli: queue create/destroy/parse routines James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 06/31] elx: libefc_sli: bmbx routines and SLI config commands James Smart
                   ` (25 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the libefc_sli SLI-4 library population.

This patch adds service routines to create different WQEs and adds
APIs to issue iread, iwrite, treceive, tsend and other work queue
entries.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1500 ++++++++++++++++++++++++++++
 1 file changed, 1500 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 47e6200a20e6..6b71779274c4 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -1366,3 +1366,1503 @@ sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
 
 	return rc;
 }
+
+int
+sli_abort_wqe(struct sli4 *sli, void *buf, enum sli4_abort_type type,
+	      bool send_abts, u32 ids, u32 mask, u16 tag, u16 cq_id)
+{
+	struct sli4_abort_wqe *abort = buf;
+
+	memset(buf, 0, sli->wqe_size);
+
+	switch (type) {
+	case SLI4_ABORT_XRI:
+		abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
+		if (mask) {
+			efc_log_warn(sli, "aborting XRI %#x with non-zero mask %#x\n",
+				ids, mask);
+			mask = 0;
+		}
+		break;
+	case SLI4_ABORT_ABORT_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_ABORT_TAG;
+		break;
+	case SLI4_ABORT_REQUEST_ID:
+		abort->criteria = SLI4_ABORT_CRITERIA_REQUEST_TAG;
+		break;
+	default:
+		efc_log_info(sli, "unsupported type %#x\n", type);
+		return EFC_FAIL;
+	}
+
+	abort->ia_ir_byte |= send_abts ? 0 : 1;
+
+	/* Suppress ABTS retries */
+	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
+
+	abort->t_mask = cpu_to_le32(mask);
+	abort->t_tag  = cpu_to_le32(ids);
+	abort->command = SLI4_WQE_ABORT;
+	abort->request_tag = cpu_to_le16(tag);
+
+	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
+
+	abort->cq_id = cpu_to_le16(cq_id);
+	abort->cmdtype_wqec_byte |= SLI4_CMD_ABORT_WQE;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_els_request64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+		      struct sli_els_params *params)
+{
+	struct sli4_els_request64_wqe *els = buf;
+	struct sli4_sge *sge = sgl->virt;
+	bool is_fabric = false;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli->wqe_size);
+
+	bptr = &els->els_request_payload;
+	if (sli->params.sgl_pre_registered) {
+		els->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_REQ_WQE_XBL;
+
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				    (params->xmit_len & SLI4_BDE_LEN_MASK));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(BLP)) |
+				    ((2 * sizeof(struct sli4_sge)) &
+				     SLI4_BDE_LEN_MASK));
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	els->els_request_payload_length = cpu_to_le32(params->xmit_len);
+	els->max_response_payload_length = cpu_to_le32(params->rsp_len);
+
+	els->xri_tag = cpu_to_le16(params->xri);
+	els->timer = params->timeout;
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_REQUEST64;
+
+	els->request_tag = cpu_to_le16(params->tag);
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_IOD;
+
+	els->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_REQ_WQE_QOSD;
+
+	/* figure out the ELS_ID value from the request buffer */
+
+	switch (params->cmd) {
+	case ELS_LOGO:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_LOGO << SLI4_REQ_WQE_ELSID_SHFT;
+		if (params->rpi_registered) {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag = cpu_to_le16(params->rpi);
+		} else {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag = cpu_to_le16(params->vpi);
+		}
+		if (params->d_id == FC_FID_FLOGI)
+			is_fabric = true;
+		break;
+	case ELS_FDISC:
+		if (params->d_id == FC_FID_FLOGI)
+			is_fabric = true;
+		if (params->s_id == 0) {
+			els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_FDISC << SLI4_REQ_WQE_ELSID_SHFT;
+			is_fabric = true;
+		} else {
+			els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		}
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(params->vpi);
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		break;
+	case ELS_FLOGI:
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(params->vpi);
+		/*
+		 * Set SP here because a REG_VPI has not been done yet;
+		 * this may need to be cleared once the VFI/VPI
+		 * registrations have completed.
+		 *
+		 * Use the FC_ID of the SPORT if it has been allocated,
+		 * otherwise use an S_ID of zero.
+		 */
+		els->sid_sp_dword |= cpu_to_le32(1 << SLI4_REQ_WQE_SP_SHFT);
+		if (params->s_id != U32_MAX)
+			els->sid_sp_dword |= cpu_to_le32(params->s_id);
+		break;
+	case ELS_PLOGI:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_PLOGI << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(params->vpi);
+		break;
+	case ELS_SCR:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+		els->context_tag = cpu_to_le16(params->vpi);
+		break;
+	default:
+		els->cmdtype_elsid_byte |=
+			SLI4_ELS_REQUEST64_OTHER << SLI4_REQ_WQE_ELSID_SHFT;
+		if (params->rpi_registered) {
+			els->ct_byte |= (SLI4_GENERIC_CONTEXT_RPI <<
+					 SLI4_REQ_WQE_CT_SHFT);
+			els->context_tag = cpu_to_le16(params->vpi);
+		} else {
+			els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_VPI << SLI4_REQ_WQE_CT_SHFT;
+			els->context_tag = cpu_to_le16(params->vpi);
+		}
+		break;
+	}
+
+	if (is_fabric)
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_FABRIC;
+	else
+		els->cmdtype_elsid_byte |= SLI4_ELS_REQUEST64_CMD_NON_FABRIC;
+
+	els->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) !=
+					SLI4_GENERIC_CONTEXT_RPI)
+		els->remote_id_dword = cpu_to_le32(params->d_id);
+
+	if (((els->ct_byte & SLI4_REQ_WQE_CT) >> SLI4_REQ_WQE_CT_SHFT) ==
+					SLI4_GENERIC_CONTEXT_VPI)
+		els->temporary_rpi = cpu_to_le16(params->rpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fcp_icmnd64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl, u16 xri,
+		    u16 tag, u16 cq_id, u32 rpi, u32 rnode_fcid, u8 timeout)
+{
+	struct sli4_fcp_icmnd64_wqe *icmnd = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 len;
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &icmnd->bde;
+	if (sli->params.sgl_pre_registered) {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_ICMD_WQE_XBL;
+
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_LEN_MASK));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(BLP)) |
+				    (sgl->size & SLI4_BDE_LEN_MASK));
+
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	icmnd->payload_offset_length = cpu_to_le16(len);
+	icmnd->xri_tag = cpu_to_le16(xri);
+	icmnd->context_tag = cpu_to_le16(rpi);
+	icmnd->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	icmnd->class_pu_byte |= 2 << SLI4_ICMD_WQE_PU_SHFT;
+	icmnd->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	icmnd->command = SLI4_WQE_FCP_ICMND64;
+	icmnd->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_ICMD_WQE_CT_SHFT;
+
+	icmnd->abort_tag = cpu_to_le32(xri);
+
+	icmnd->request_tag = cpu_to_le16(tag);
+	icmnd->len_loc1_byte |= SLI4_ICMD_WQE_LEN_LOC_BIT1;
+	icmnd->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_ICMD_WQE_LEN_LOC_BIT2;
+	icmnd->cmd_type_byte |= SLI4_CMD_FCP_ICMND64_WQE;
+	icmnd->cq_id = cpu_to_le16(cq_id);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fcp_iread64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+		    u32 first_data_sge, u32 xfer_len, u16 xri, u16 tag,
+		    u16 cq_id, u32 rpi, u32 rnode_fcid,
+		    u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iread64_wqe *iread = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 sge_flags, len;
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+
+	sge = sgl->virt;
+	bptr = &iread->bde;
+	if (sli->params.sgl_pre_registered) {
+		iread->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IR_WQE_XBL;
+
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				    (le32_to_cpu(sge[0].buffer_length) &
+				     SLI4_BDE_LEN_MASK));
+
+		bptr->u.blp.low  = sge[0].buffer_address_low;
+		bptr->u.blp.high = sge[0].buffer_address_high;
+	} else {
+		iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(BLP)) |
+				    (sgl->size & SLI4_BDE_LEN_MASK));
+
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		len = le32_to_cpu(sge[0].buffer_length);
+		iread->fcp_cmd_buffer_length = cpu_to_le16(len);
+
+		sge_flags = le32_to_cpu(sge[1].dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	iread->payload_offset_length = cpu_to_le16(len);
+	iread->total_transfer_length = cpu_to_le32(xfer_len);
+
+	iread->xri_tag = cpu_to_le16(xri);
+	iread->context_tag = cpu_to_le16(rpi);
+
+	iread->timer = timeout;
+
+	/* WQE word 4 contains read transfer length */
+	iread->class_pu_byte |= 2 << SLI4_IR_WQE_PU_SHFT;
+	iread->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iread->command = SLI4_WQE_FCP_IREAD64;
+	iread->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_IR_WQE_CT_SHFT;
+	iread->dif_ct_bs_byte |= dif;
+	iread->dif_ct_bs_byte  |= bs << SLI4_IR_WQE_BS_SHFT;
+
+	iread->abort_tag = cpu_to_le32(xri);
+
+	iread->request_tag = cpu_to_le16(tag);
+	iread->len_loc1_byte |= SLI4_IR_WQE_LEN_LOC_BIT1;
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_LEN_LOC_BIT2;
+	iread->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IR_WQE_IOD;
+	iread->cmd_type_byte |= SLI4_CMD_FCP_IREAD64_WQE;
+	iread->cq_id = cpu_to_le16(cq_id);
+
+	if (sli->params.perf_hint) {
+		bptr = &iread->first_data_bde;
+		bptr->bde_type_buflen =	cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			  (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_LEN_MASK));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fcp_iwrite64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+		     u32 first_data_sge, u32 xfer_len,
+		     u32 first_burst, u16 xri, u16 tag,
+		     u16 cq_id, u32 rpi,
+		     u32 rnode_fcid,
+		     u8 dif, u8 bs, u8 timeout)
+{
+	struct sli4_fcp_iwrite64_wqe *iwrite = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+	u32 sge_flags, min, len;
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &iwrite->bde;
+	if (sli->params.sgl_pre_registered) {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_IWR_WQE_XBL;
+
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_DBDE;
+		bptr->bde_type_buflen = cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+		       (le32_to_cpu(sge[0].buffer_length) & SLI4_BDE_LEN_MASK));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_XBL;
+
+		bptr->bde_type_buflen =	cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+					(sgl->size & SLI4_BDE_LEN_MASK));
+
+		bptr->u.blp.low  = cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high = cpu_to_le32(upper_32_bits(sgl->phys));
+
+		/*
+		 * fill out fcp_cmnd buffer len and change resp buffer to be of
+		 * type "skip" (note: response will still be written to sge[1]
+		 * if necessary)
+		 */
+		len = le32_to_cpu(sge[0].buffer_length);
+		iwrite->fcp_cmd_buffer_length = cpu_to_le16(len);
+		sge_flags = le32_to_cpu(sge[1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_TYPE_MASK;
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		sge[1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	len = le32_to_cpu(sge[0].buffer_length) +
+	      le32_to_cpu(sge[1].buffer_length);
+	iwrite->payload_offset_length = cpu_to_le16(len);
+	iwrite->total_transfer_length = cpu_to_le16(xfer_len);
+	min = (xfer_len < first_burst) ? xfer_len : first_burst;
+	iwrite->initial_transfer_length = cpu_to_le16(min);
+
+	iwrite->xri_tag = cpu_to_le16(xri);
+	iwrite->context_tag = cpu_to_le16(rpi);
+
+	iwrite->timer = timeout;
+	/* WQE word 4 contains read transfer length */
+	iwrite->class_pu_byte |= 2 << SLI4_IWR_WQE_PU_SHFT;
+	iwrite->class_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	iwrite->command = SLI4_WQE_FCP_IWRITE64;
+	iwrite->dif_ct_bs_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_IWR_WQE_CT_SHFT;
+	iwrite->dif_ct_bs_byte |= dif;
+	iwrite->dif_ct_bs_byte |= bs << SLI4_IWR_WQE_BS_SHFT;
+
+	iwrite->abort_tag = cpu_to_le32(xri);
+
+	iwrite->request_tag = cpu_to_le16(tag);
+	iwrite->len_loc1_byte |= SLI4_IWR_WQE_LEN_LOC_BIT1;
+	iwrite->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_IWR_WQE_LEN_LOC_BIT2;
+	iwrite->cmd_type_byte |= SLI4_CMD_FCP_IWRITE64_WQE;
+	iwrite->cq_id = cpu_to_le16(cq_id);
+
+	if (sli->params.perf_hint) {
+		bptr = &iwrite->first_data_bde;
+
+		bptr->bde_type_buflen =	cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			 (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_LEN_MASK));
+
+		bptr->u.data.low = sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high = sge[first_data_sge].buffer_address_high;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fcp_treceive64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+		       u32 first_data_sge, u16 cq_id, u8 dif, u8 bs,
+		       struct sli_fcp_tgt_params *params)
+{
+	struct sli4_fcp_treceive64_wqe *trecv = buf;
+	struct sli4_fcp_128byte_wqe *trecv_128 = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &trecv->bde;
+	if (sli->params.sgl_pre_registered) {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes &= ~SLI4_TRCV_WQE_XBL;
+
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				    (le32_to_cpu(sge[0].buffer_length)
+					& SLI4_BDE_LEN_MASK));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trecv->payload_offset_length = sge[0].buffer_length;
+	} else {
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif &&
+			params->xmit_len <= le32_to_cpu(sge[2].buffer_length)) {
+			trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_DBDE;
+			bptr->bde_type_buflen =
+			      cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+					  (le32_to_cpu(sge[2].buffer_length)
+					  & SLI4_BDE_LEN_MASK));
+
+			bptr->u.data.low = sge[2].buffer_address_low;
+			bptr->u.data.high = sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((SLI4_BDE_TYPE_VAL(BLP)) |
+				(sgl->size & SLI4_BDE_LEN_MASK));
+			bptr->u.blp.low = cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	trecv->relative_offset = cpu_to_le32(params->offset);
+
+	if (params->flags & SLI4_IO_CONTINUATION)
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_XC;
+
+	trecv->xri_tag = cpu_to_le16(params->xri);
+
+	trecv->context_tag = cpu_to_le16(params->rpi);
+
+	/* WQE uses relative offset */
+	trecv->class_ar_pu_byte |= 1 << SLI4_TRCV_WQE_PU_SHFT;
+
+	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		trecv->class_ar_pu_byte |= SLI4_TRCV_WQE_AR;
+
+	trecv->command = SLI4_WQE_FCP_TRECEIVE64;
+	trecv->class_ar_pu_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	trecv->dif_ct_bs_byte |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_TRCV_WQE_CT_SHFT;
+	trecv->dif_ct_bs_byte |= bs << SLI4_TRCV_WQE_BS_SHFT;
+
+	trecv->remote_xid = cpu_to_le16(params->ox_id);
+
+	trecv->request_tag = cpu_to_le16(params->tag);
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_IOD;
+
+	trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_LEN_LOC_BIT2;
+
+	trecv->cmd_type_byte |= SLI4_CMD_FCP_TRECEIVE64_WQE;
+
+	trecv->cq_id = cpu_to_le16(cq_id);
+
+	trecv->fcp_data_receive_length = cpu_to_le32(params->xmit_len);
+
+	if (sli->params.perf_hint) {
+		bptr = &trecv->first_data_bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_LEN_MASK));
+		bptr->u.data.low = sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high = sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (params->cs_ctl & SLI4_MASK_CCP) {
+		trecv->eat_xc_ccpe |= SLI4_TRCV_WQE_CCPE;
+		trecv->ccp = (params->cs_ctl & SLI4_MASK_CCP);
+	}
+
+	if (params->app_id && sli->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trecv->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trecv->lloc1_appid |= SLI4_TRCV_WQE_APPID;
+		trecv->qosd_xbl_hlm_iod_dbde_wqes |= SLI4_TRCV_WQE_WQES;
+		trecv_128->dw[31] = params->app_id;
+	}
+	return EFC_SUCCESS;
+}
+
+int
+sli_fcp_cont_treceive64_wqe(struct sli4 *sli, void *buf,
+			    struct efc_dma *sgl, u32 first_data_sge,
+			    u16 sec_xri, u16 cq_id, u8 dif, u8 bs,
+			    struct sli_fcp_tgt_params *params)
+{
+	int rc;
+
+	rc = sli_fcp_treceive64_wqe(sli, buf, sgl, first_data_sge,
+				    cq_id, dif, bs, params);
+	if (!rc) {
+		struct sli4_fcp_treceive64_wqe *trecv = buf;
+
+		trecv->command = SLI4_WQE_FCP_CONT_TRECEIVE64;
+		trecv->dword5.sec_xri_tag = cpu_to_le16(sec_xri);
+	}
+	return rc;
+}
+
+int
+sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		   u16 cq_id, u8 port_owned, struct sli_fcp_tgt_params *params)
+{
+	struct sli4_fcp_trsp64_wqe *trsp = buf;
+	struct sli4_fcp_128byte_wqe *trsp_128 = buf;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE) {
+		trsp->class_ag_byte |= SLI4_TRSP_WQE_AG;
+	} else {
+		struct sli4_sge	*sge = sgl->virt;
+		struct sli4_bde *bptr;
+
+		if (sli4->params.sgl_pre_registered || port_owned)
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_DBDE;
+		else
+			trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_XBL;
+		bptr = &trsp->bde;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				     (le32_to_cpu(sge[0].buffer_length) &
+				      SLI4_BDE_LEN_MASK));
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+
+		trsp->fcp_response_length = cpu_to_le32(params->xmit_len);
+	}
+
+	if (params->flags & SLI4_IO_CONTINUATION)
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_XC;
+
+	trsp->xri_tag = cpu_to_le16(params->xri);
+	trsp->rpi = cpu_to_le16(params->rpi);
+
+	trsp->command = SLI4_WQE_FCP_TRSP64;
+	trsp->class_ag_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	trsp->remote_xid = cpu_to_le16(params->ox_id);
+	trsp->request_tag = cpu_to_le16(params->tag);
+	if (params->flags & SLI4_IO_DNRX)
+		trsp->ct_dnrx_byte |= SLI4_TRSP_WQE_DNRX;
+	else
+		trsp->ct_dnrx_byte &= ~SLI4_TRSP_WQE_DNRX;
+
+	trsp->lloc1_appid |= 0x1;
+	trsp->cq_id = cpu_to_le16(cq_id);
+	trsp->cmd_type_byte = SLI4_CMD_FCP_TRSP64_WQE;
+
+	/* The upper 7 bits of csctl are the priority */
+	if (params->cs_ctl & SLI4_MASK_CCP) {
+		trsp->eat_xc_ccpe |= SLI4_TRSP_WQE_CCPE;
+		trsp->ccp = (params->cs_ctl & SLI4_MASK_CCP);
+	}
+
+	if (params->app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(trsp->eat_xc_ccpe & SLI4_TRSP_WQE_EAT)) {
+		trsp->lloc1_appid |= SLI4_TRSP_WQE_APPID;
+		trsp->qosd_xbl_hlm_dbde_wqes |= SLI4_TRSP_WQE_WQES;
+		trsp_128->dw[31] = params->app_id;
+	}
+	return EFC_SUCCESS;
+}
+
+int
+sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		    u32 first_data_sge, u16 cq_id, u8 dif, u8 bs,
+		    struct sli_fcp_tgt_params *params)
+{
+	struct sli4_fcp_tsend64_wqe *tsend = buf;
+	struct sli4_fcp_128byte_wqe *tsend_128 = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+
+	bptr = &tsend->bde;
+	if (sli4->params.sgl_pre_registered) {
+		tsend->ll_qd_xbl_hlm_iod_dbde &= ~SLI4_TSEND_WQE_XBL;
+
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				   (le32_to_cpu(sge[2].buffer_length) &
+				    SLI4_BDE_LEN_MASK));
+
+		/* TSEND64_WQE specifies that the first two SGEs are skipped
+		 * (the 3rd is valid)
+		 */
+		bptr->u.data.low  = sge[2].buffer_address_low;
+		bptr->u.data.high = sge[2].buffer_address_high;
+	} else {
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_XBL;
+
+		/* if data is a single physical address, use a BDE */
+		if (!dif &&
+			params->xmit_len <= le32_to_cpu(sge[2].buffer_length)) {
+			tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQE_DBDE;
+
+			bptr->bde_type_buflen =
+			    cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+					(le32_to_cpu(sge[2].buffer_length) &
+					SLI4_BDE_LEN_MASK));
+			/*
+			 * TSEND64_WQE specifies first two SGE are skipped
+			 * (i.e. 3rd is valid)
+			 */
+			bptr->u.data.low =
+				sge[2].buffer_address_low;
+			bptr->u.data.high =
+				sge[2].buffer_address_high;
+		} else {
+			bptr->bde_type_buflen =
+				cpu_to_le32((SLI4_BDE_TYPE_VAL(BLP)) |
+					    (sgl->size &
+					     SLI4_BDE_LEN_MASK));
+			bptr->u.blp.low =
+				cpu_to_le32(lower_32_bits(sgl->phys));
+			bptr->u.blp.high =
+				cpu_to_le32(upper_32_bits(sgl->phys));
+		}
+	}
+
+	tsend->relative_offset = cpu_to_le32(params->offset);
+
+	if (params->flags & SLI4_IO_CONTINUATION)
+		tsend->dw10byte2 |= SLI4_TSEND_XC;
+
+	tsend->xri_tag = cpu_to_le16(params->xri);
+
+	tsend->rpi = cpu_to_le16(params->rpi);
+	/* WQE uses relative offset */
+	tsend->class_pu_ar_byte |= 1 << SLI4_TSEND_WQE_PU_SHFT;
+
+	if (params->flags & SLI4_IO_AUTO_GOOD_RESPONSE)
+		tsend->class_pu_ar_byte |= SLI4_TSEND_WQE_AR;
+
+	tsend->command = SLI4_WQE_FCP_TSEND64;
+	tsend->class_pu_ar_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+	tsend->ct_byte |= SLI4_GENERIC_CONTEXT_RPI << SLI4_TSEND_CT_SHFT;
+	tsend->ct_byte |= dif;
+	tsend->ct_byte |= bs << SLI4_TSEND_BS_SHFT;
+
+	tsend->remote_xid = cpu_to_le16(params->ox_id);
+
+	tsend->request_tag = cpu_to_le16(params->tag);
+
+	tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_LEN_LOC_BIT2;
+
+	tsend->cq_id = cpu_to_le16(cq_id);
+
+	tsend->cmd_type_byte |= SLI4_CMD_FCP_TSEND64_WQE;
+
+	tsend->fcp_data_transmit_length = cpu_to_le32(params->xmit_len);
+
+	if (sli4->params.perf_hint) {
+		bptr = &tsend->first_data_bde;
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (le32_to_cpu(sge[first_data_sge].buffer_length) &
+			     SLI4_BDE_LEN_MASK));
+		bptr->u.data.low =
+			sge[first_data_sge].buffer_address_low;
+		bptr->u.data.high =
+			sge[first_data_sge].buffer_address_high;
+	}
+
+	/* The upper 7 bits of csctl are the priority */
+	if (params->cs_ctl & SLI4_MASK_CCP) {
+		tsend->dw10byte2 |= SLI4_TSEND_CCPE;
+		tsend->ccp = (params->cs_ctl & SLI4_MASK_CCP);
+	}
+
+	if (params->app_id && sli4->wqe_size == SLI4_WQE_EXT_BYTES &&
+	    !(tsend->dw10byte2 & SLI4_TSEND_EAT)) {
+		tsend->dw10byte0 |= SLI4_TSEND_APPID_VALID;
+		tsend->ll_qd_xbl_hlm_iod_dbde |= SLI4_TSEND_WQES;
+		tsend_128->dw[31] = params->app_id;
+	}
+	return EFC_SUCCESS;
+}
+
+int
+sli_gen_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		      struct sli_ct_params *params)
+{
+	struct sli4_gen_request64_wqe *gen = buf;
+	struct sli4_sge *sge = NULL;
+	struct sli4_bde *bptr;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (!sgl || !sgl->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       sgl, sgl ? sgl->virt : NULL);
+		return EFC_FAIL;
+	}
+	sge = sgl->virt;
+	bptr = &gen->bde;
+
+	if (sli4->params.sgl_pre_registered) {
+		gen->dw10flags1 &= ~SLI4_GEN_REQ64_WQE_XBL;
+
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_DBDE;
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				    (params->xmit_len & SLI4_BDE_LEN_MASK));
+
+		bptr->u.data.low  = sge[0].buffer_address_low;
+		bptr->u.data.high = sge[0].buffer_address_high;
+	} else {
+		gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_XBL;
+
+		bptr->bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(BLP)) |
+				    ((2 * sizeof(struct sli4_sge)) &
+				     SLI4_BDE_LEN_MASK));
+
+		bptr->u.blp.low =
+			cpu_to_le32(lower_32_bits(sgl->phys));
+		bptr->u.blp.high =
+			cpu_to_le32(upper_32_bits(sgl->phys));
+	}
+
+	gen->request_payload_length = cpu_to_le32(params->xmit_len);
+	gen->max_response_payload_length = cpu_to_le32(params->rsp_len);
+
+	gen->df_ctl = params->df_ctl;
+	gen->type = params->type;
+	gen->r_ctl = params->r_ctl;
+
+	gen->xri_tag = cpu_to_le16(params->xri);
+
+	gen->ct_byte = SLI4_GENERIC_CONTEXT_RPI << SLI4_GEN_REQ64_CT_SHFT;
+	gen->context_tag = cpu_to_le16(params->rpi);
+
+	gen->class_byte = SLI4_GENERIC_CLASS_CLASS_3;
+
+	gen->command = SLI4_WQE_GEN_REQUEST64;
+
+	gen->timer = params->timeout;
+
+	gen->request_tag = cpu_to_le16(params->tag);
+
+	gen->dw10flags1 |= SLI4_GEN_REQ64_WQE_IOD;
+
+	gen->dw10flags0 |= SLI4_GEN_REQ64_WQE_QOSD;
+
+	gen->cmd_type_byte = SLI4_CMD_GEN_REQUEST64_WQE;
+
+	gen->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_send_frame_wqe(struct sli4 *sli, void *buf, u8 sof, u8 eof, u32 *hdr,
+			struct efc_dma *payload, u32 req_len,
+			u8 timeout, u16 xri, u16 req_tag)
+{
+	struct sli4_send_frame_wqe *sf = buf;
+
+	memset(buf, 0, sli->wqe_size);
+
+	sf->dw10flags1 |= SLI4_SF_WQE_DBDE;
+	sf->bde.bde_type_buflen = cpu_to_le32(req_len &
+					      SLI4_BDE_LEN_MASK);
+	sf->bde.u.data.low = cpu_to_le32(lower_32_bits(payload->phys));
+	sf->bde.u.data.high = cpu_to_le32(upper_32_bits(payload->phys));
+
+	/* Copy FC header */
+	sf->fc_header_0_1[0] = cpu_to_le32(hdr[0]);
+	sf->fc_header_0_1[1] = cpu_to_le32(hdr[1]);
+	sf->fc_header_2_5[0] = cpu_to_le32(hdr[2]);
+	sf->fc_header_2_5[1] = cpu_to_le32(hdr[3]);
+	sf->fc_header_2_5[2] = cpu_to_le32(hdr[4]);
+	sf->fc_header_2_5[3] = cpu_to_le32(hdr[5]);
+
+	sf->frame_length = cpu_to_le32(req_len);
+
+	sf->xri_tag = cpu_to_le16(xri);
+	sf->dw7flags0 &= ~SLI4_SF_PU;
+	sf->context_tag = 0;
+
+	sf->ct_byte &= ~SLI4_SF_CT;
+	sf->command = SLI4_WQE_SEND_FRAME;
+	sf->dw7flags0 |= SLI4_GENERIC_CLASS_CLASS_3;
+	sf->timer = timeout;
+
+	sf->request_tag = cpu_to_le16(req_tag);
+	sf->eof = eof;
+	sf->sof = sof;
+
+	sf->dw10flags1 &= ~SLI4_SF_QOSD;
+	sf->dw10flags0 |= SLI4_SF_LEN_LOC_BIT1;
+	sf->dw10flags2 &= ~SLI4_SF_XC;
+
+	sf->dw10flags1 |= SLI4_SF_XBL;
+
+	sf->cmd_type_byte |= SLI4_CMD_SEND_FRAME_WQE;
+	sf->cq_id = cpu_to_le16(0xffff);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_xmit_bls_rsp64_wqe(struct sli4 *sli, void *buf,
+		struct sli_bls_payload *payload, struct sli_bls_params *params)
+{
+	struct sli4_xmit_bls_rsp_wqe *bls = buf;
+	u32 dw_ridflags = 0;
+
+	/*
+	 * Callers can either specify RPI or S_ID, but not both
+	 */
+	if (params->rpi_registered && params->s_id != U32_MAX) {
+		efc_log_info(sli, "S_ID specified for attached remote node %d\n",
+			params->rpi);
+		return EFC_FAIL;
+	}
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (payload->type == SLI4_SLI_BLS_ACC) {
+		bls->payload_word0 =
+			cpu_to_le32((payload->u.acc.seq_id_last << 16) |
+				    (payload->u.acc.seq_id_validity << 24));
+		bls->high_seq_cnt = payload->u.acc.high_seq_cnt;
+		bls->low_seq_cnt = payload->u.acc.low_seq_cnt;
+	} else if (payload->type == SLI4_SLI_BLS_RJT) {
+		bls->payload_word0 =
+				cpu_to_le32(*((u32 *)&payload->u.rjt));
+		dw_ridflags |= SLI4_BLS_RSP_WQE_AR;
+	} else {
+		efc_log_info(sli, "bad BLS type %#x\n", payload->type);
+		return EFC_FAIL;
+	}
+
+	bls->ox_id = payload->ox_id;
+	bls->rx_id = payload->rx_id;
+
+	if (params->rpi_registered) {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(params->rpi);
+	} else {
+		bls->dw8flags0 |=
+		SLI4_GENERIC_CONTEXT_VPI << SLI4_BLS_RSP_WQE_CT_SHFT;
+		bls->context_tag = cpu_to_le16(params->vpi);
+
+		if (params->s_id != U32_MAX)
+			bls->local_n_port_id_dword |=
+				cpu_to_le32(params->s_id & 0x00ffffff);
+
+		dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
+			       (params->d_id & SLI4_BLS_RSP_RID);
+
+		bls->temporary_rpi = cpu_to_le16(params->rpi);
+	}
+
+	bls->xri_tag = cpu_to_le16(params->xri);
+
+	bls->dw8flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	bls->command = SLI4_WQE_XMIT_BLS_RSP;
+
+	bls->request_tag = cpu_to_le16(params->tag);
+
+	bls->dw11flags1 |= SLI4_BLS_RSP_WQE_QOSD;
+
+	bls->remote_id_dword = cpu_to_le32(dw_ridflags);
+	bls->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT);
+
+	bls->dw12flags0 |= SLI4_CMD_XMIT_BLS_RSP64_WQE;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_xmit_els_rsp64_wqe(struct sli4 *sli, void *buf, struct efc_dma *rsp,
+		       struct sli_els_params *params)
+{
+	struct sli4_xmit_els_rsp64_wqe *els = buf;
+
+	memset(buf, 0, sli->wqe_size);
+
+	if (sli->params.sgl_pre_registered)
+		els->flags2 |= SLI4_ELS_DBDE;
+	else
+		els->flags2 |= SLI4_ELS_XBL;
+
+	els->els_response_payload.bde_type_buflen =
+		cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (params->rsp_len & SLI4_BDE_LEN_MASK));
+	els->els_response_payload.u.data.low =
+		cpu_to_le32(lower_32_bits(rsp->phys));
+	els->els_response_payload.u.data.high =
+		cpu_to_le32(upper_32_bits(rsp->phys));
+
+	els->els_response_payload_length = cpu_to_le32(params->rsp_len);
+
+	els->xri_tag = cpu_to_le16(params->xri);
+
+	els->class_byte |= SLI4_GENERIC_CLASS_CLASS_3;
+
+	els->command = SLI4_WQE_ELS_RSP64;
+
+	els->request_tag = cpu_to_le16(params->tag);
+
+	els->ox_id = cpu_to_le16(params->ox_id);
+
+	els->flags2 |= SLI4_ELS_IOD & SLI4_ELS_REQUEST64_DIR_WRITE;
+
+	els->flags2 |= SLI4_ELS_QOSD;
+
+	els->cmd_type_wqec = SLI4_ELS_REQUEST64_CMD_GEN;
+
+	els->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT);
+
+	if (params->rpi_registered) {
+		els->ct_byte |=
+			SLI4_GENERIC_CONTEXT_RPI << SLI4_ELS_CT_OFFSET;
+		els->context_tag = cpu_to_le16(params->rpi);
+		return EFC_SUCCESS;
+	}
+
+	els->ct_byte |= SLI4_GENERIC_CONTEXT_VPI << SLI4_ELS_CT_OFFSET;
+	els->context_tag = cpu_to_le16(params->vpi);
+	els->rid_dw = cpu_to_le32(params->d_id & SLI4_ELS_RID);
+	els->temporary_rpi = cpu_to_le16(params->rpi);
+	if (params->s_id != U32_MAX) {
+		els->sid_dw |=
+		      cpu_to_le32(SLI4_ELS_SP | (params->s_id & SLI4_ELS_SID));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *payload,
+		struct sli_ct_params *params)
+{
+	struct sli4_xmit_sequence64_wqe *xmit = buf;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	if (!payload || !payload->virt) {
+		efc_log_err(sli4, "bad parameter sgl=%p virt=%p\n",
+		       payload, payload ? payload->virt : NULL);
+		return EFC_FAIL;
+	}
+
+	if (sli4->params.sgl_pre_registered)
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_DBDE);
+	else
+		xmit->dw10w0 |= cpu_to_le16(SLI4_SEQ_WQE_XBL);
+
+	xmit->bde.bde_type_buflen =
+		cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			(params->rsp_len & SLI4_BDE_LEN_MASK));
+	xmit->bde.u.data.low  =
+			cpu_to_le32(lower_32_bits(payload->phys));
+	xmit->bde.u.data.high =
+			cpu_to_le32(upper_32_bits(payload->phys));
+	xmit->sequence_payload_len = cpu_to_le32(params->rsp_len);
+
+	xmit->remote_n_port_id_dword |= cpu_to_le32(params->d_id & 0x00ffffff);
+
+	xmit->relative_offset = 0;
+
+	/* sequence initiative - this matches what is seen from
+	 * FC switches in response to FCGS commands
+	 */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_SI);
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_FT); /* force transmit */
+	xmit->dw5flags0 &= (~SLI4_SEQ_WQE_XO); /* exchange responder */
+	xmit->dw5flags0 |= SLI4_SEQ_WQE_LS; /* last in sequence */
+	xmit->df_ctl = params->df_ctl;
+	xmit->type = params->type;
+	xmit->r_ctl = params->r_ctl;
+
+	xmit->xri_tag = cpu_to_le16(params->xri);
+	xmit->context_tag = cpu_to_le16(params->rpi);
+
+	xmit->dw7flags0 &= ~SLI4_SEQ_WQE_DIF;
+	xmit->dw7flags0 |=
+		SLI4_GENERIC_CONTEXT_RPI << SLI4_SEQ_WQE_CT_SHIFT;
+	xmit->dw7flags0 &= ~SLI4_SEQ_WQE_BS;
+
+	xmit->command = SLI4_WQE_XMIT_SEQUENCE64;
+	xmit->dw7flags1 |= SLI4_GENERIC_CLASS_CLASS_3;
+	xmit->dw7flags1 &= ~SLI4_SEQ_WQE_PU;
+	xmit->timer = params->timeout;
+
+	xmit->abort_tag = 0;
+	xmit->request_tag = cpu_to_le16(params->tag);
+	xmit->remote_xid = cpu_to_le16(params->ox_id);
+
+	xmit->dw10w0 |=
+	cpu_to_le16(SLI4_ELS_REQUEST64_DIR_READ << SLI4_SEQ_WQE_IOD_SHIFT);
+
+	xmit->cmd_type_wqec_byte |= SLI4_CMD_XMIT_SEQUENCE64_WQE;
+
+	xmit->dw10w0 |= cpu_to_le16(2 << SLI4_SEQ_WQE_LEN_LOC_SHIFT);
+
+	xmit->cq_id = cpu_to_le16(0xFFFF);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, u16 xri, u16 tag, u16 cq_id)
+{
+	struct sli4_requeue_xri_wqe *requeue = buf;
+
+	memset(buf, 0, sli4->wqe_size);
+
+	requeue->command = SLI4_WQE_REQUEUE_XRI;
+	requeue->xri_tag = cpu_to_le16(xri);
+	requeue->request_tag = cpu_to_le16(tag);
+	requeue->flags2 |= cpu_to_le16(SLI4_REQU_XRI_WQE_XC);
+	requeue->flags1 |= cpu_to_le16(SLI4_REQU_XRI_WQE_QOSD);
+	requeue->cq_id = cpu_to_le16(cq_id);
+	requeue->cmd_type_wqec_byte = SLI4_CMD_REQUEUE_XRI_WQE;
+	return EFC_SUCCESS;
+}
+
+int
+sli_fc_process_link_attention(struct sli4 *sli4, void *acqe)
+{
+	struct sli4_link_attention *link_attn = acqe;
+	struct sli4_link_event event = { 0 };
+
+	efc_log_info(sli4, "link=%d attn_type=%#x top=%#x speed=%#x pfault=%#x\n",
+		link_attn->link_number, link_attn->attn_type,
+		      link_attn->topology, link_attn->port_speed,
+		      link_attn->port_fault);
+	efc_log_info(sli4, "shared_lnk_status=%#x logl_lnk_speed=%#x evttag=%#x\n",
+		link_attn->shared_link_status,
+		      le16_to_cpu(link_attn->logical_link_speed),
+		      le32_to_cpu(link_attn->event_tag));
+
+	if (!sli4->link)
+		return EFC_FAIL;
+
+	event.medium   = SLI4_LINK_MEDIUM_FC;
+
+	switch (link_attn->attn_type) {
+	case SLI4_LNK_ATTN_TYPE_LINK_UP:
+		event.status = SLI4_LINK_STATUS_UP;
+		break;
+	case SLI4_LNK_ATTN_TYPE_LINK_DOWN:
+		event.status = SLI4_LINK_STATUS_DOWN;
+		break;
+	case SLI4_LNK_ATTN_TYPE_NO_HARD_ALPA:
+		efc_log_info(sli4, "attn_type: no hard alpa\n");
+		event.status = SLI4_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		efc_log_info(sli4, "attn_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->event_type) {
+	case SLI4_EVENT_LINK_ATTENTION:
+		break;
+	case SLI4_EVENT_SHARED_LINK_ATTENTION:
+		efc_log_info(sli4, "event_type: FC shared link event\n");
+		break;
+	default:
+		efc_log_info(sli4, "event_type: unknown\n");
+		break;
+	}
+
+	switch (link_attn->topology) {
+	case SLI4_LNK_ATTN_P2P:
+		event.topology = SLI4_LINK_TOPO_NON_FC_AL;
+		break;
+	case SLI4_LNK_ATTN_FC_AL:
+		event.topology = SLI4_LINK_TOPO_FC_AL;
+		break;
+	case SLI4_LNK_ATTN_INTERNAL_LOOPBACK:
+		efc_log_info(sli4, "topology Internal loopback\n");
+		event.topology = SLI4_LINK_TOPO_LOOPBACK_INTERNAL;
+		break;
+	case SLI4_LNK_ATTN_SERDES_LOOPBACK:
+		efc_log_info(sli4, "topology serdes loopback\n");
+		event.topology = SLI4_LINK_TOPO_LOOPBACK_EXTERNAL;
+		break;
+	default:
+		efc_log_info(sli4, "topology: unknown\n");
+		break;
+	}
+
+	event.speed = link_attn->port_speed * 1000;
+
+	sli4->link(sli4->link_arg, (void *)&event);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
+		 u8 *cqe, enum sli4_qentry *etype, u16 *r_id)
+{
+	u8 code = cqe[SLI4_CQE_CODE_OFFSET];
+	int rc;
+
+	switch (code) {
+	case SLI4_CQE_CODE_WORK_REQUEST_COMPLETION:
+	{
+		struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+		*etype = SLI4_QENTRY_WQ;
+		*r_id = le16_to_cpu(wcqe->request_tag);
+		rc = wcqe->status;
+
+		/* Flag errors except for FCP_RSP_FAILURE */
+		if (rc && rc != SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE) {
+			efc_log_info(sli4, "WCQE: status=%#x hw_status=%#x tag=%#x\n",
+				wcqe->status, wcqe->hw_status,
+				le16_to_cpu(wcqe->request_tag));
+			efc_log_info(sli4, "w1=%#x w2=%#x xb=%d\n",
+				le32_to_cpu(wcqe->wqe_specific_1),
+				     le32_to_cpu(wcqe->wqe_specific_2),
+				     (wcqe->flags & SLI4_WCQE_XB));
+			efc_log_info(sli4, "      %08X %08X %08X %08X\n",
+				((u32 *)cqe)[0],
+				     ((u32 *)cqe)[1],
+				     ((u32 *)cqe)[2],
+				     ((u32 *)cqe)[3]);
+		}
+
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC:
+	{
+		struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+
+		*etype = SLI4_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1:
+	{
+		struct sli4_fc_async_rcqe_v1 *rcqe = (void *)cqe;
+
+		*etype = SLI4_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD:
+	{
+		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
+
+		*etype = SLI4_QENTRY_OPT_WRITE_CMD;
+		*r_id = le16_to_cpu(optcqe->rq_id);
+		rc = optcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_DATA:
+	{
+		struct sli4_fc_optimized_write_data_cqe *dcqe = (void *)cqe;
+
+		*etype = SLI4_QENTRY_OPT_WRITE_DATA;
+		*r_id = le16_to_cpu(dcqe->xri);
+		rc = dcqe->status;
+
+		/* Flag errors */
+		if (rc != SLI4_FC_WCQE_STATUS_SUCCESS) {
+			efc_log_info(sli4, "Optimized DATA CQE: status=%#x\n",
+				dcqe->status);
+			efc_log_info(sli4, "hstat=%#x xri=%#x dpl=%#x w3=%#x xb=%d\n",
+				dcqe->hw_status, le16_to_cpu(dcqe->xri),
+				le32_to_cpu(dcqe->total_data_placed),
+				((u32 *)cqe)[3],
+				(dcqe->flags & SLI4_OCQE_XB));
+		}
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_COALESCING:
+	{
+		struct sli4_fc_coalescing_rcqe *rcqe = (void *)cqe;
+
+		*etype = SLI4_QENTRY_RQ;
+		*r_id = le16_to_cpu(rcqe->rq_id);
+		rc = rcqe->status;
+		break;
+	}
+	case SLI4_CQE_CODE_XRI_ABORTED:
+	{
+		struct sli4_fc_xri_aborted_cqe *xa = (void *)cqe;
+
+		*etype = SLI4_QENTRY_XABT;
+		*r_id = le16_to_cpu(xa->xri);
+		rc = EFC_SUCCESS;
+		break;
+	}
+	case SLI4_CQE_CODE_RELEASE_WQE:
+	{
+		struct sli4_fc_wqec *wqec = (void *)cqe;
+
+		*etype = SLI4_QENTRY_WQ_RELEASE;
+		*r_id = le16_to_cpu(wqec->wq_id);
+		rc = EFC_SUCCESS;
+		break;
+	}
+	default:
+		efc_log_info(sli4, "CQE completion code %d not handled\n",
+			code);
+		*etype = SLI4_QENTRY_MAX;
+		*r_id = U16_MAX;
+		rc = -EINVAL;
+	}
+
+	return rc;
+}
+
+u32
+sli_fc_response_length(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+u32
+sli_fc_io_length(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	return le32_to_cpu(wcqe->wqe_specific_1);
+}
+
+int
+sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+
+	*d_id = 0;
+
+	if (wcqe->status)
+		return EFC_FAIL;
+	*d_id = le32_to_cpu(wcqe->wqe_specific_2) & 0x00ffffff;
+	return EFC_SUCCESS;
+}
+
+u32
+sli_fc_ext_status(struct sli4 *sli4, u8 *cqe)
+{
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+	u32	mask;
+
+	switch (wcqe->status) {
+	case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+	case SLI4_FC_WCQE_STATUS_CMD_REJECT:
+		mask = 0xff;
+		break;
+	case SLI4_FC_WCQE_STATUS_NPORT_RJT:
+	case SLI4_FC_WCQE_STATUS_FABRIC_RJT:
+	case SLI4_FC_WCQE_STATUS_NPORT_BSY:
+	case SLI4_FC_WCQE_STATUS_FABRIC_BSY:
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		mask = U32_MAX;
+		break;
+	case SLI4_FC_WCQE_STATUS_DI_ERROR:
+		mask = U32_MAX;
+		break;
+	default:
+		mask = 0;
+	}
+
+	return le32_to_cpu(wcqe->wqe_specific_2) & mask;
+}
+
+int
+sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe, u16 *rq_id, u32 *index)
+{
+	int rc = EFC_FAIL;
+	u8 code = 0;
+	u16 rq_element_index;
+
+	*rq_id = 0;
+	*index = U32_MAX;
+
+	code = cqe[SLI4_CQE_CODE_OFFSET];
+
+	/* Retrieve the RQ index from the completion */
+	if (code == SLI4_CQE_CODE_RQ_ASYNC) {
+		struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+
+		*rq_id = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID;
+		rq_element_index =
+		le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX;
+		*index = rq_element_index;
+		if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = EFC_SUCCESS;
+		} else {
+			rc = rcqe->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->fcfi_rq_id_word) &
+				SLI4_RACQE_RQ_ID);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe->data_placement_length),
+				rcqe->sof_byte, rcqe->eof_byte,
+				rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_ASYNC_V1) {
+		struct sli4_fc_async_rcqe_v1 *rcqe_v1 = (void *)cqe;
+
+		*rq_id = le16_to_cpu(rcqe_v1->rq_id);
+		rq_element_index =
+			(le16_to_cpu(rcqe_v1->rq_elmt_indx_word) &
+			 SLI4_RACQE_RQ_EL_INDX);
+		*index = rq_element_index;
+		if (rcqe_v1->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = EFC_SUCCESS;
+		} else {
+			rc = rcqe_v1->status;
+			efc_log_info(sli4, "status=%02x (%s) rq_id=%d, index=%x\n",
+				rcqe_v1->status,
+				sli_fc_get_status_string(rcqe_v1->status),
+				le16_to_cpu(rcqe_v1->rq_id), rq_element_index);
+
+			efc_log_info(sli4, "pdpl=%x sof=%02x eof=%02x hdpl=%x\n",
+				le16_to_cpu(rcqe_v1->data_placement_length),
+			rcqe_v1->sof_byte, rcqe_v1->eof_byte,
+			rcqe_v1->hdpl_byte & SLI4_RACQE_HDPL);
+		}
+	} else if (code == SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD) {
+		struct sli4_fc_optimized_write_cmd_cqe *optcqe = (void *)cqe;
+
+		*rq_id = le16_to_cpu(optcqe->rq_id);
+		*index = le16_to_cpu(optcqe->w1) & SLI4_OCQE_RQ_EL_INDX;
+		if (optcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+			rc = EFC_SUCCESS;
+		} else {
+			rc = optcqe->status;
+			efc_log_info(sli4, "stat=%02x (%s) rqid=%d, idx=%x pdpl=%x\n",
+				optcqe->status,
+				sli_fc_get_status_string(optcqe->status),
+				le16_to_cpu(optcqe->rq_id), *index,
+				le16_to_cpu(optcqe->data_placement_length));
+
+			efc_log_info(sli4, "hdpl=%x oox=%d agxr=%d xri=0x%x rpi=%x\n",
+				(optcqe->hdpl_vld & SLI4_OCQE_HDPL),
+				(optcqe->flags1 & SLI4_OCQE_OOX),
+				(optcqe->flags1 & SLI4_OCQE_AGXR), optcqe->xri,
+				le16_to_cpu(optcqe->rpi));
+		}
+	} else if (code == SLI4_CQE_CODE_RQ_COALESCING) {
+		struct sli4_fc_coalescing_rcqe  *rcqe = (void *)cqe;
+
+		rq_element_index = (le16_to_cpu(rcqe->rq_elmt_indx_word) &
+				    SLI4_RCQE_RQ_EL_INDX);
+
+		*rq_id = le16_to_cpu(rcqe->rq_id);
+		if (rcqe->status == SLI4_FC_COALESCE_RQ_SUCCESS) {
+			*index = rq_element_index;
+			rc = EFC_SUCCESS;
+		} else {
+			*index = U32_MAX;
+			rc = rcqe->status;
+
+			efc_log_info(sli4, "stat=%02x (%s) rq_id=%d, idx=%x\n",
+				rcqe->status,
+				sli_fc_get_status_string(rcqe->status),
+				le16_to_cpu(rcqe->rq_id), rq_element_index);
+			efc_log_info(sli4, "rq_id=%#x sdpl=%x\n",
+				le16_to_cpu(rcqe->rq_id),
+				le16_to_cpu(rcqe->sequence_reporting_placement_length));
+		}
+	} else {
+		struct sli4_fc_async_rcqe *rcqe = (void *)cqe;
+
+		*index = U32_MAX;
+		rc = rcqe->status;
+
+		efc_log_info(sli4, "status=%02x rq_id=%d, index=%x pdpl=%x\n",
+			rcqe->status,
+			le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_RQ_ID,
+			le16_to_cpu(rcqe->rq_elmt_indx_word) & SLI4_RACQE_RQ_EL_INDX,
+			le16_to_cpu(rcqe->data_placement_length));
+		efc_log_info(sli4, "sof=%02x eof=%02x hdpl=%x\n",
+			rcqe->sof_byte, rcqe->eof_byte,
+			rcqe->hdpl_byte & SLI4_RACQE_HDPL);
+	}
+
+	return rc;
+}
-- 
2.26.2



* [PATCH v7 06/31] elx: libefc_sli: bmbx routines and SLI config commands
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (4 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 05/31] elx: libefc_sli: Populate and post different WQEs James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
                   ` (24 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the libefc_sli SLI-4 library population.

This patch adds routines that build the mailbox commands used during
adapter initialization, and APIs that issue those commands to the
adapter through the bootstrap mailbox register.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1166 ++++++++++++++++++++++++++++
 1 file changed, 1166 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 6b71779274c4..e41070fb21d6 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -2866,3 +2866,1169 @@ sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe, u16 *rq_id, u32 *index)
 
 	return rc;
 }
+
+static int
+sli_bmbx_wait(struct sli4 *sli4, u32 msec)
+{
+	u32 val;
+	unsigned long end;
+
+	/* Wait for the bootstrap mailbox to report "ready" */
+	end = jiffies + msecs_to_jiffies(msec);
+	do {
+		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+		if (val & SLI4_BMBX_RDY)
+			return EFC_SUCCESS;
+
+		usleep_range(1000, 2000);
+	} while (time_before(jiffies, end));
+
+	return EFC_FAIL;
+}
+
+static int
+sli_bmbx_write(struct sli4 *sli4)
+{
+	u32 val;
+
+	/* write buffer location to bootstrap mailbox register */
+	val = sli_bmbx_write_hi(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
+		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
+		return EFC_FAIL;
+	}
+	val = sli_bmbx_write_lo(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	/* wait for SLI Port to set ready bit */
+	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
+}
+
+int
+sli_bmbx_command(struct sli4 *sli4)
+{
+	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
+
+	if (sli_fw_error_status(sli4) > 0) {
+		efc_log_crit(sli4, "Chip is in an error state - Mailbox command rejected\n");
+		efc_log_crit(sli4, "status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(sli4),
+			sli_reg_read_err1(sli4),
+			sli_reg_read_err2(sli4));
+		return EFC_FAIL;
+	}
+
+	/* Submit a command to the bootstrap mailbox and check the status */
+	if (sli_bmbx_write(sli4)) {
+		efc_log_crit(sli4, "bmbx write fail phys=%pad reg=%#x\n",
+			&sli4->bmbx.phys, readl(sli4->reg[0] + SLI4_BMBX_REG));
+		return EFC_FAIL;
+	}
+
+	/* check completion queue entry status */
+	if (le32_to_cpu(((struct sli4_mcqe *)cqe)->dw3_flags) &
+	    SLI4_MCQE_VALID) {
+		return sli_cqe_mq(sli4, cqe);
+	}
+	efc_log_crit(sli4, "invalid or wrong type\n");
+	return EFC_FAIL;
+}
+
+int
+sli_cmd_config_link(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_config_link *config_link = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	config_link->hdr.command = SLI4_MBX_CMD_CONFIG_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_down_link(struct sli4 *sli4, void *buf)
+{
+	struct sli4_mbox_command_header *hdr = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	hdr->command = SLI4_MBX_CMD_DOWN_LINK;
+
+	/* Port interprets zero in a field as "use default value" */
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_dump_type4(struct sli4 *sli4, void *buf, u16 wki)
+{
+	struct sli4_cmd_dump4 *cmd = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	cmd->hdr.command = SLI4_MBX_CMD_DUMP;
+	cmd->type_dword = cpu_to_le32(0x4);
+	cmd->wki_selection = cpu_to_le16(wki);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf, u32 page_num,
+				     struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_read_transceiver_data *req = NULL;
+	u32 psize;
+
+	if (!dma)
+		psize = SLI4_CFG_PYLD_LENGTH(cmn_read_transceiver_data);
+	else
+		psize = dma->size;
+
+	req = sli_config_cmd_init(sli4, buf, psize, dma);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_READ_TRANS_DATA,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_read_transceiver_data));
+
+	req->page_number = cpu_to_le32(page_num);
+	req->port = cpu_to_le32(sli4->port_number);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, u8 req_ext_counters,
+			u8 clear_overflow_flags,
+			u8 clear_all_counters)
+{
+	struct sli4_cmd_read_link_stats *cmd = buf;
+	u32 flags;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	cmd->hdr.command = SLI4_MBX_CMD_READ_LNK_STAT;
+
+	flags = 0;
+	if (req_ext_counters)
+		flags |= SLI4_READ_LNKSTAT_REC;
+	if (clear_all_counters)
+		flags |= SLI4_READ_LNKSTAT_CLRC;
+	if (clear_overflow_flags)
+		flags |= SLI4_READ_LNKSTAT_CLOF;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_status(struct sli4 *sli4, void *buf, u8 clear_counters)
+{
+	struct sli4_cmd_read_status *cmd = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	cmd->hdr.command = SLI4_MBX_CMD_READ_STATUS;
+	if (clear_counters)
+		flags |= SLI4_READSTATUS_CLEAR_COUNTERS;
+	else
+		flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS;
+
+	cmd->dw1_flags = cpu_to_le32(flags);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_init_link(struct sli4 *sli4, void *buf, u32 speed, u8 reset_alpa)
+{
+	struct sli4_cmd_init_link *init_link = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	init_link->hdr.command = SLI4_MBX_CMD_INIT_LINK;
+
+	init_link->sel_reset_al_pa_dword =
+				cpu_to_le32(reset_alpa);
+	flags &= ~SLI4_INIT_LINK_F_LOOPBACK;
+
+	init_link->link_speed_sel_code = cpu_to_le32(speed);
+	switch (speed) {
+	case SLI4_LINK_SPEED_1G:
+	case SLI4_LINK_SPEED_2G:
+	case SLI4_LINK_SPEED_4G:
+	case SLI4_LINK_SPEED_8G:
+	case SLI4_LINK_SPEED_16G:
+	case SLI4_LINK_SPEED_32G:
+	case SLI4_LINK_SPEED_64G:
+		flags |= SLI4_INIT_LINK_F_FIXED_SPEED;
+		break;
+	case SLI4_LINK_SPEED_10G:
+		efc_log_info(sli4, "unsupported FC speed %d\n", speed);
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+		/* Attempt P2P but failover to FC-AL */
+		flags |= SLI4_INIT_LINK_F_FAIL_OVER;
+		flags |= SLI4_INIT_LINK_F_P2P_FAIL_OVER;
+		break;
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		flags |= SLI4_INIT_LINK_F_FCAL_ONLY;
+		if (speed == SLI4_LINK_SPEED_16G ||
+		    speed == SLI4_LINK_SPEED_32G) {
+			efc_log_info(sli4, "unsupported FC-AL speed %d\n",
+				     speed);
+			init_link->flags0 = cpu_to_le32(flags);
+			return EFC_FAIL;
+		}
+		break;
+	case SLI4_READ_CFG_TOPO_NON_FC_AL:
+		flags |= SLI4_INIT_LINK_F_P2P_ONLY;
+		break;
+	default:
+
+		efc_log_info(sli4, "unsupported topology %#x\n",
+			sli4->topology);
+
+		init_link->flags0 = cpu_to_le32(flags);
+		return EFC_FAIL;
+	}
+
+	flags &= ~SLI4_INIT_LINK_F_UNFAIR;
+	flags &= ~SLI4_INIT_LINK_F_NO_LIRP;
+	flags &= ~SLI4_INIT_LINK_F_LOOP_VALID_CHK;
+	flags &= ~SLI4_INIT_LINK_F_NO_LISA;
+	flags &= ~SLI4_INIT_LINK_F_PICK_HI_ALPA;
+	init_link->flags0 = cpu_to_le32(flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_init_vfi(struct sli4 *sli4, void *buf, u16 vfi, u16 fcfi, u16 vpi)
+{
+	struct sli4_cmd_init_vfi *init_vfi = buf;
+	u16 flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	init_vfi->hdr.command = SLI4_MBX_CMD_INIT_VFI;
+	init_vfi->vfi = cpu_to_le16(vfi);
+	init_vfi->fcfi = cpu_to_le16(fcfi);
+
+	/*
+	 * If the VPI is valid, initialize it at the same time as
+	 * the VFI
+	 */
+	if (vpi != U16_MAX) {
+		flags |= SLI4_INIT_VFI_FLAG_VP;
+		init_vfi->flags0_word = cpu_to_le16(flags);
+		init_vfi->vpi = cpu_to_le16(vpi);
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_init_vpi(struct sli4 *sli4, void *buf, u16 vpi, u16 vfi)
+{
+	struct sli4_cmd_init_vpi *init_vpi = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	init_vpi->hdr.command = SLI4_MBX_CMD_INIT_VPI;
+	init_vpi->vpi = cpu_to_le16(vpi);
+	init_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_post_xri(struct sli4 *sli4, void *buf, u16 xri_base, u16 xri_count)
+{
+	struct sli4_cmd_post_xri *post_xri = buf;
+	u16 xri_count_flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	post_xri->hdr.command = SLI4_MBX_CMD_POST_XRI;
+	post_xri->xri_base = cpu_to_le16(xri_base);
+	xri_count_flags = xri_count & SLI4_POST_XRI_COUNT;
+	xri_count_flags |= SLI4_POST_XRI_FLAG_ENX;
+	xri_count_flags |= SLI4_POST_XRI_FLAG_VAL;
+	post_xri->xri_count_flags = cpu_to_le16(xri_count_flags);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_release_xri(struct sli4 *sli4, void *buf, u8 num_xri)
+{
+	struct sli4_cmd_release_xri *release_xri = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	release_xri->hdr.command = SLI4_MBX_CMD_RELEASE_XRI;
+	release_xri->xri_count_word = cpu_to_le16(num_xri &
+					SLI4_RELEASE_XRI_COUNT);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_read_config(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_read_config *read_config = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	read_config->hdr.command = SLI4_MBX_CMD_READ_CONFIG;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_nvparms(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_read_nvparms *read_nvparms = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	read_nvparms->hdr.command = SLI4_MBX_CMD_READ_NVPARMS;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, u8 *wwpn, u8 *wwnn,
+			u8 hard_alpa, u32 preferred_d_id)
+{
+	struct sli4_cmd_write_nvparms *write_nvparms = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	write_nvparms->hdr.command = SLI4_MBX_CMD_WRITE_NVPARMS;
+	memcpy(write_nvparms->wwpn, wwpn, 8);
+	memcpy(write_nvparms->wwnn, wwnn, 8);
+
+	write_nvparms->hard_alpa_d_id =
+			cpu_to_le32((preferred_d_id << 8) | hard_alpa);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_read_rev(struct sli4 *sli4, void *buf, struct efc_dma *vpd)
+{
+	struct sli4_cmd_read_rev *read_rev = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	read_rev->hdr.command = SLI4_MBX_CMD_READ_REV;
+
+	if (vpd && vpd->size) {
+		read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD);
+
+		read_rev->available_length_dword =
+			cpu_to_le32(vpd->size &
+				    SLI4_READ_REV_AVAILABLE_LENGTH);
+
+		read_rev->hostbuf.low =
+				cpu_to_le32(lower_32_bits(vpd->phys));
+		read_rev->hostbuf.high =
+				cpu_to_le32(upper_32_bits(vpd->phys));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, struct efc_dma *dma, u16 vpi)
+{
+	struct sli4_cmd_read_sparm64 *read_sparm64 = buf;
+
+	if (vpi == U16_MAX) {
+		efc_log_err(sli4, "special VPI not supported\n");
+		return EFC_FAIL;
+	}
+
+	if (!dma || !dma->phys) {
+		efc_log_err(sli4, "bad DMA buffer\n");
+		return EFC_FAIL;
+	}
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	read_sparm64->hdr.command = SLI4_MBX_CMD_READ_SPARM64;
+
+	read_sparm64->bde_64.bde_type_buflen =
+			cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+				    (dma->size & SLI4_BDE_LEN_MASK));
+	read_sparm64->bde_64.u.data.low =
+			cpu_to_le32(lower_32_bits(dma->phys));
+	read_sparm64->bde_64.u.data.high =
+			cpu_to_le32(upper_32_bits(dma->phys));
+
+	read_sparm64->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_read_topology(struct sli4 *sli4, void *buf, struct efc_dma *dma)
+{
+	struct sli4_cmd_read_topology *read_topo = buf;
+
+	if (!dma || !dma->size)
+		return EFC_FAIL;
+
+	if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) {
+		efc_log_err(sli4, "loop map buffer too small %zx\n", dma->size);
+		return EFC_FAIL;
+	}
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	read_topo->hdr.command = SLI4_MBX_CMD_READ_TOPOLOGY;
+
+	memset(dma->virt, 0, dma->size);
+
+	read_topo->bde_loop_map.bde_type_buflen =
+					cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+					(dma->size & SLI4_BDE_LEN_MASK));
+	read_topo->bde_loop_map.u.data.low  =
+				cpu_to_le32(lower_32_bits(dma->phys));
+	read_topo->bde_loop_map.u.data.high =
+				cpu_to_le32(upper_32_bits(dma->phys));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, u16 index,
+		 struct sli4_cmd_rq_cfg *rq_cfg)
+{
+	struct sli4_cmd_reg_fcfi *reg_fcfi = buf;
+	u32 i;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	reg_fcfi->hdr.command = SLI4_MBX_CMD_REG_FCFI;
+
+	reg_fcfi->fcf_index = cpu_to_le16(index);
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		switch (i) {
+		case 0:
+			reg_fcfi->rqid0 = rq_cfg[0].rq_id;
+			break;
+		case 1:
+			reg_fcfi->rqid1 = rq_cfg[1].rq_id;
+			break;
+		case 2:
+			reg_fcfi->rqid2 = rq_cfg[2].rq_id;
+			break;
+		case 3:
+			reg_fcfi->rqid3 = rq_cfg[3].rq_id;
+			break;
+		}
+		reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, u8 mode, u16 fcf_index,
+		     u8 rq_selection_policy, u8 mrq_bit_mask, u16 num_mrqs,
+		     struct sli4_cmd_rq_cfg *rq_cfg)
+{
+	struct sli4_cmd_reg_fcfi_mrq *reg_fcfi_mrq = buf;
+	u32 i;
+	u32 mrq_flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	reg_fcfi_mrq->hdr.command = SLI4_MBX_CMD_REG_FCFI_MRQ;
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) {
+		reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index);
+		goto done;
+	}
+
+	reg_fcfi_mrq->dw8_vlan = cpu_to_le32(SLI4_REGFCFI_MRQ_MODE);
+
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask;
+		reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match;
+		reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask;
+		reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match;
+
+		switch (i) {
+		case 3:
+			reg_fcfi_mrq->rqid3 = rq_cfg[i].rq_id;
+			break;
+		case 2:
+			reg_fcfi_mrq->rqid2 = rq_cfg[i].rq_id;
+			break;
+		case 1:
+			reg_fcfi_mrq->rqid1 = rq_cfg[i].rq_id;
+			break;
+		case 0:
+			reg_fcfi_mrq->rqid0 = rq_cfg[i].rq_id;
+			break;
+		}
+	}
+
+	mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS;
+	mrq_flags |= (mrq_bit_mask << 8);
+	mrq_flags |= (rq_selection_policy << 12);
+	reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags);
+done:
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, u32 rpi, u32 vpi, u32 fc_id,
+		struct efc_dma *dma, u8 update, u8 enable_t10_pi)
+{
+	struct sli4_cmd_reg_rpi *reg_rpi = buf;
+	u32 rportid_flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	reg_rpi->hdr.command = SLI4_MBX_CMD_REG_RPI;
+
+	reg_rpi->rpi = cpu_to_le16(rpi);
+
+	rportid_flags = fc_id & SLI4_REGRPI_REMOTE_N_PORTID;
+
+	if (update)
+		rportid_flags |= SLI4_REGRPI_UPD;
+	else
+		rportid_flags &= ~SLI4_REGRPI_UPD;
+
+	if (enable_t10_pi)
+		rportid_flags |= SLI4_REGRPI_ETOW;
+	else
+		rportid_flags &= ~SLI4_REGRPI_ETOW;
+
+	reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags);
+
+	reg_rpi->bde_64.bde_type_buflen =
+		cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_LEN_MASK));
+	reg_rpi->bde_64.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma->phys));
+	reg_rpi->bde_64.u.data.high =
+		cpu_to_le32(upper_32_bits(dma->phys));
+
+	reg_rpi->vpi = cpu_to_le16(vpi);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id)
+{
+	struct sli4_cmd_reg_vfi *reg_vfi = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	reg_vfi->hdr.command = SLI4_MBX_CMD_REG_VFI;
+
+	reg_vfi->vfi = cpu_to_le16(vfi);
+
+	reg_vfi->fcfi = cpu_to_le16(fcfi);
+
+	reg_vfi->sparm.bde_type_buflen =
+		cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_LEN_MASK));
+	reg_vfi->sparm.u.data.low  =
+		cpu_to_le32(lower_32_bits(dma.phys));
+	reg_vfi->sparm.u.data.high =
+		cpu_to_le32(upper_32_bits(dma.phys));
+
+	reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov);
+	reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov);
+
+	reg_vfi->dw0w1_flags |= cpu_to_le16(SLI4_REGVFI_VP);
+	reg_vfi->vpi = cpu_to_le16(vpi);
+	memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn));
+	reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, u32 fc_id, __be64 sli_wwpn,
+		u16 vpi, u16 vfi, bool update)
+{
+	struct sli4_cmd_reg_vpi *reg_vpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	reg_vpi->hdr.command = SLI4_MBX_CMD_REG_VPI;
+
+	flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID);
+	if (update)
+		flags |= SLI4_REGVPI_UPD;
+	else
+		flags &= ~SLI4_REGVPI_UPD;
+
+	reg_vpi->dw2_lportid_flags = cpu_to_le32(flags);
+	memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn));
+	reg_vpi->vpi = cpu_to_le16(vpi);
+	reg_vpi->vfi = cpu_to_le16(vfi);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_request_features(struct sli4 *sli4, void *buf, u32 features_mask,
+			 bool query)
+{
+	struct sli4_cmd_request_features *req_features = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	req_features->hdr.command = SLI4_MBX_CMD_RQST_FEATURES;
+
+	if (query)
+		req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY);
+
+	req_features->cmd = cpu_to_le32(features_mask);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, u16 indicator)
+{
+	struct sli4_cmd_unreg_fcfi *unreg_fcfi = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	unreg_fcfi->hdr.command = SLI4_MBX_CMD_UNREG_FCFI;
+	unreg_fcfi->fcfi = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, u16 indicator,
+		  enum sli4_resource which, u32 fc_id)
+{
+	struct sli4_cmd_unreg_rpi *unreg_rpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	unreg_rpi->hdr.command = SLI4_MBX_CMD_UNREG_RPI;
+	switch (which) {
+	case SLI4_RSRC_RPI:
+		flags |= SLI4_UNREG_RPI_II_RPI;
+		if (fc_id == U32_MAX)
+			break;
+
+		flags |= SLI4_UNREG_RPI_DP;
+		unreg_rpi->dw2_dest_n_portid =
+			cpu_to_le32(fc_id & SLI4_UNREG_RPI_DEST_N_PORTID_MASK);
+		break;
+	case SLI4_RSRC_VPI:
+		flags |= SLI4_UNREG_RPI_II_VPI;
+		break;
+	case SLI4_RSRC_VFI:
+		flags |= SLI4_UNREG_RPI_II_VFI;
+		break;
+	case SLI4_RSRC_FCFI:
+		flags |= SLI4_UNREG_RPI_II_FCFI;
+		break;
+	default:
+		efc_log_info(sli4, "unknown type %#x\n", which);
+		return EFC_FAIL;
+	}
+
+	unreg_rpi->dw1w1_flags = cpu_to_le16(flags);
+	unreg_rpi->index = cpu_to_le16(indicator);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, u16 index, u32 which)
+{
+	struct sli4_cmd_unreg_vfi *unreg_vfi = buf;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	unreg_vfi->hdr.command = SLI4_MBX_CMD_UNREG_VFI;
+	switch (which) {
+	case SLI4_UNREG_TYPE_DOMAIN:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		unreg_vfi->index = cpu_to_le16(index);
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		unreg_vfi->index = cpu_to_le16(U16_MAX);
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	if (which != SLI4_UNREG_TYPE_DOMAIN)
+		unreg_vfi->dw2_flags = cpu_to_le16(SLI4_UNREG_VFI_II_FCFI);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, u16 indicator, u32 which)
+{
+	struct sli4_cmd_unreg_vpi *unreg_vpi = buf;
+	u32 flags = 0;
+
+	memset(buf, 0, SLI4_BMBX_SIZE);
+
+	unreg_vpi->hdr.command = SLI4_MBX_CMD_UNREG_VPI;
+	unreg_vpi->index = cpu_to_le16(indicator);
+	switch (which) {
+	case SLI4_UNREG_TYPE_PORT:
+		flags |= SLI4_UNREG_VPI_II_VPI;
+		break;
+	case SLI4_UNREG_TYPE_DOMAIN:
+		flags |= SLI4_UNREG_VPI_II_VFI;
+		break;
+	case SLI4_UNREG_TYPE_FCF:
+		flags |= SLI4_UNREG_VPI_II_FCFI;
+		break;
+	case SLI4_UNREG_TYPE_ALL:
+		/* override indicator */
+		unreg_vpi->index = cpu_to_le16(U16_MAX);
+		flags |= SLI4_UNREG_VPI_II_FCFI;
+		break;
+	default:
+		return EFC_FAIL;
+	}
+
+	unreg_vpi->dw2w0_flags = cpu_to_le16(flags);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_modify_eq_delay(struct sli4 *sli4, void *buf,
+			       struct sli4_queue *q, int num_q, u32 shift,
+			       u32 delay_mult)
+{
+	struct sli4_rqst_cmn_modify_eq_delay *req = NULL;
+	int i;
+
+	req = sli_config_cmd_init(sli4, buf,
+			SLI4_CFG_PYLD_LENGTH(cmn_modify_eq_delay), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_MODIFY_EQ_DELAY,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_modify_eq_delay));
+	req->num_eq = cpu_to_le32(num_q);
+
+	for (i = 0; i < num_q; i++) {
+		req->eq_delay_record[i].eq_id = cpu_to_le32(q[i].id);
+		req->eq_delay_record[i].phase = cpu_to_le32(shift);
+		req->eq_delay_record[i].delay_multiplier =
+			cpu_to_le32(delay_mult);
+	}
+
+	return EFC_SUCCESS;
+}
+
+void
+sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf,
+			       size_t size, u16 timeout)
+{
+	struct sli4_rqst_lowlevel_set_watchdog *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf,
+			SLI4_CFG_PYLD_LENGTH(lowlevel_set_watchdog), NULL);
+	if (!req)
+		return;
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_LOWLEVEL_SET_WATCHDOG,
+			 SLI4_SUBSYSTEM_LOWLEVEL, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(lowlevel_set_watchdog));
+	req->watchdog_timeout = cpu_to_le16(timeout);
+}
+
+static int
+sli_cmd_common_get_cntl_attributes(struct sli4 *sli4, void *buf,
+				   struct efc_dma *dma)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, SLI4_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = SLI4_CMN_GET_CNTL_ATTRIBUTES;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_get_cntl_addl_attributes(struct sli4 *sli4, void *buf,
+					struct efc_dma *dma)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf, SLI4_RQST_CMDSZ(hdr), dma);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = SLI4_CMN_GET_CNTL_ADDL_ATTRS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = cpu_to_le32(dma->size);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_nop(struct sli4 *sli4, void *buf, uint64_t context)
+{
+	struct sli4_rqst_cmn_nop *nop = NULL;
+
+	nop = sli_config_cmd_init(sli4, buf, SLI4_CFG_PYLD_LENGTH(cmn_nop),
+				  NULL);
+	if (!nop)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&nop->hdr, SLI4_CMN_NOP, SLI4_SUBSYSTEM_COMMON,
+			 CMD_V0, SLI4_RQST_PYLD_LEN(cmn_nop));
+
+	memcpy(&nop->context, &context, sizeof(context));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf, u16 rtype)
+{
+	struct sli4_rqst_cmn_get_resource_extent_info *ext = NULL;
+
+	ext = sli_config_cmd_init(sli4, buf,
+			SLI4_RQST_CMDSZ(cmn_get_resource_extent_info), NULL);
+	if (!ext)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&ext->hdr, SLI4_CMN_GET_RSC_EXTENT_INFO,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_get_resource_extent_info));
+
+	ext->resource_type = cpu_to_le16(rtype);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf)
+{
+	struct sli4_rqst_hdr *hdr = NULL;
+
+	hdr = sli_config_cmd_init(sli4, buf,
+			SLI4_CFG_PYLD_LENGTH(cmn_get_sli4_params), NULL);
+	if (!hdr)
+		return EFC_FAIL;
+
+	hdr->opcode = SLI4_CMN_GET_SLI4_PARAMS;
+	hdr->subsystem = SLI4_SUBSYSTEM_COMMON;
+	hdr->request_length = SLI4_RQST_PYLD_LEN(cmn_get_sli4_params);
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_cmd_common_get_port_name(struct sli4 *sli4, void *buf)
+{
+	struct sli4_rqst_cmn_get_port_name *pname;
+
+	pname = sli_config_cmd_init(sli4, buf,
+			SLI4_CFG_PYLD_LENGTH(cmn_get_port_name), NULL);
+	if (!pname)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&pname->hdr, SLI4_CMN_GET_PORT_NAME,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V1,
+			 SLI4_RQST_PYLD_LEN(cmn_get_port_name));
+
+	/* Set the port type value (ethernet=0, FC=1) for V1 commands */
+	pname->port_type = SLI4_PORT_TYPE_FC;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_write_object(struct sli4 *sli4, void *buf, u16 noc,
+			    u16 eof, u32 desired_write_length,
+			    u32 offset, char *obj_name,
+			    struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_write_object *wr_obj = NULL;
+	struct sli4_bde *bde;
+	u32 dwflags = 0;
+
+	wr_obj = sli_config_cmd_init(sli4, buf,
+			SLI4_RQST_CMDSZ(cmn_write_object) + sizeof(*bde), NULL);
+	if (!wr_obj)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&wr_obj->hdr, SLI4_CMN_WRITE_OBJECT,
+		SLI4_SUBSYSTEM_COMMON, CMD_V0,
+		SLI4_RQST_PYLD_LEN_VAR(cmn_write_object, sizeof(*bde)));
+
+	if (noc)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_NOC;
+	if (eof)
+		dwflags |= SLI4_RQ_DES_WRITE_LEN_EOF;
+	dwflags |= (desired_write_length & SLI4_RQ_DES_WRITE_LEN);
+
+	wr_obj->desired_write_len_dword = cpu_to_le32(dwflags);
+
+	wr_obj->write_offset = cpu_to_le32(offset);
+	strncpy(wr_obj->object_name, obj_name, sizeof(wr_obj->object_name) - 1);
+	wr_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	bde->bde_type_buflen =
+		cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (desired_write_length & SLI4_BDE_LEN_MASK));
+	bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
+	bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, char *obj_name)
+{
+	struct sli4_rqst_cmn_delete_object *req = NULL;
+
+	req = sli_config_cmd_init(sli4, buf,
+				  SLI4_RQST_CMDSZ(cmn_delete_object), NULL);
+	if (!req)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_DELETE_OBJECT,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_delete_object));
+
+	strncpy(req->object_name, obj_name, sizeof(req->object_name) - 1);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_read_object(struct sli4 *sli4, void *buf, u32 desired_read_len,
+			   u32 offset, char *obj_name, struct efc_dma *dma)
+{
+	struct sli4_rqst_cmn_read_object *rd_obj = NULL;
+	struct sli4_bde *bde;
+
+	rd_obj = sli_config_cmd_init(sli4, buf,
+			SLI4_RQST_CMDSZ(cmn_read_object) + sizeof(*bde), NULL);
+	if (!rd_obj)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&rd_obj->hdr, SLI4_CMN_READ_OBJECT,
+		SLI4_SUBSYSTEM_COMMON, CMD_V0,
+		SLI4_RQST_PYLD_LEN_VAR(cmn_read_object, sizeof(*bde)));
+	rd_obj->desired_read_length_dword =
+		cpu_to_le32(desired_read_len & SLI4_REQ_DESIRE_READLEN);
+
+	rd_obj->read_offset = cpu_to_le32(offset);
+	strncpy(rd_obj->object_name, obj_name, sizeof(rd_obj->object_name) - 1);
+	rd_obj->host_buffer_descriptor_count = cpu_to_le32(1);
+
+	bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor;
+
+	/* Setup to transfer xfer_size bytes to device */
+	bde->bde_type_buflen =
+		cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) |
+			    (desired_read_len & SLI4_BDE_LEN_MASK));
+	if (dma) {
+		bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys));
+		bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys));
+	} else {
+		bde->u.data.low = 0;
+		bde->u.data.high = 0;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, struct efc_dma *cmd,
+			  struct efc_dma *resp)
+{
+	struct sli4_rqst_dmtf_exec_clp_cmd *clp_cmd = NULL;
+
+	clp_cmd = sli_config_cmd_init(sli4, buf,
+				SLI4_RQST_CMDSZ(dmtf_exec_clp_cmd), NULL);
+	if (!clp_cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&clp_cmd->hdr, DMTF_EXEC_CLP_CMD, SLI4_SUBSYSTEM_DMTF,
+			 CMD_V0, SLI4_RQST_PYLD_LEN(dmtf_exec_clp_cmd));
+
+	clp_cmd->cmd_buf_length = cpu_to_le32(cmd->size);
+	clp_cmd->cmd_buf_addr_low =  cpu_to_le32(lower_32_bits(cmd->phys));
+	clp_cmd->cmd_buf_addr_high =  cpu_to_le32(upper_32_bits(cmd->phys));
+	clp_cmd->resp_buf_length = cpu_to_le32(resp->size);
+	clp_cmd->resp_buf_addr_low =  cpu_to_le32(lower_32_bits(resp->phys));
+	clp_cmd->resp_buf_addr_high =  cpu_to_le32(upper_32_bits(resp->phys));
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf, bool query,
+				 bool is_buffer_list,
+				 struct efc_dma *buffer, u8 fdb)
+{
+	struct sli4_rqst_cmn_set_dump_location *set_dump_loc = NULL;
+	u32 buffer_length_flag = 0;
+
+	set_dump_loc = sli_config_cmd_init(sli4, buf,
+				SLI4_RQST_CMDSZ(cmn_set_dump_location), NULL);
+	if (!set_dump_loc)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&set_dump_loc->hdr, SLI4_CMN_SET_DUMP_LOCATION,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_set_dump_location));
+
+	if (is_buffer_list)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_BLP;
+
+	if (query)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_QRY;
+
+	if (fdb)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_FDB;
+
+	if (buffer) {
+		set_dump_loc->buf_addr_low =
+			cpu_to_le32(lower_32_bits(buffer->phys));
+		set_dump_loc->buf_addr_high =
+			cpu_to_le32(upper_32_bits(buffer->phys));
+
+		buffer_length_flag |=
+			buffer->len & SLI4_CMN_SET_DUMP_BUFFER_LEN;
+	} else {
+		set_dump_loc->buf_addr_low = 0;
+		set_dump_loc->buf_addr_high = 0;
+		set_dump_loc->buffer_length_dword = 0;
+	}
+	set_dump_loc->buffer_length_dword = cpu_to_le32(buffer_length_flag);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf, u32 feature,
+			    u32 param_len, void *parameter)
+{
+	struct sli4_rqst_cmn_set_features *cmd = NULL;
+
+	cmd = sli_config_cmd_init(sli4, buf,
+				  SLI4_RQST_CMDSZ(cmn_set_features), NULL);
+	if (!cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&cmd->hdr, SLI4_CMN_SET_FEATURES,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_set_features));
+
+	cmd->feature = cpu_to_le32(feature);
+	cmd->param_len = cpu_to_le32(param_len);
+	memcpy(cmd->params, parameter, param_len);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cqe_mq(struct sli4 *sli4, void *buf)
+{
+	struct sli4_mcqe *mcqe = buf;
+	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
+	/*
+	 * Firmware can split mbx completions into two MCQEs: first with only
+	 * the "consumed" bit set and a second with the "complete" bit set.
+	 * Thus, ignore MCQE unless "complete" is set.
+	 */
+	if (!(dwflags & SLI4_MCQE_COMPLETED))
+		return SLI4_MCQE_STATUS_NOT_COMPLETED;
+
+	if (le16_to_cpu(mcqe->completion_status)) {
+		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
+			     le16_to_cpu(mcqe->completion_status),
+			     le16_to_cpu(mcqe->extended_status),
+			     !!(dwflags & SLI4_MCQE_CONSUMED),
+			     !!(dwflags & SLI4_MCQE_COMPLETED),
+			     !!(dwflags & SLI4_MCQE_AE),
+			     !!(dwflags & SLI4_MCQE_VALID));
+	}
+
+	return le16_to_cpu(mcqe->completion_status);
+}
+
+int
+sli_cqe_async(struct sli4 *sli4, void *buf)
+{
+	struct sli4_acqe *acqe = buf;
+	int rc = EFC_FAIL;
+
+	if (!buf) {
+		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
+		return EFC_FAIL;
+	}
+
+	switch (acqe->event_code) {
+	case SLI4_ACQE_EVENT_CODE_LINK_STATE:
+		efc_log_info(sli4, "Unsupported by FC link, evt code:%#x\n",
+			     acqe->event_code);
+		break;
+	case SLI4_ACQE_EVENT_CODE_GRP_5:
+		efc_log_info(sli4, "ACQE GRP5\n");
+		break;
+	case SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT:
+		efc_log_info(sli4, "ACQE SLI Port, type=0x%x, data1,2=0x%08x,0x%08x\n",
+			acqe->event_type,
+			le32_to_cpu(acqe->event_data[0]),
+			le32_to_cpu(acqe->event_data[1]));
+		break;
+	case SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT:
+		rc = sli_fc_process_link_attention(sli4, buf);
+		break;
+	default:
+		efc_log_info(sli4, "ACQE unknown=%#x\n",
+			acqe->event_code);
+	}
+
+	return rc;
+}
-- 
2.26.2



* [PATCH v7 07/31] elx: libefc_sli: APIs to setup SLI library
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (5 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 06/31] elx: libefc_sli: bmbx routines and SLI config commands James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 08/31] elx: libefc: Generic state machine framework James Smart
                   ` (23 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the libefc_sli SLI-4 library population.

This patch adds APIs to initialize the library, initialize
the SLI Port, reset the firmware, terminate the SLI Port, and
terminate the library.
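Among the APIs added here, sli_resource_alloc()/sli_resource_free() hand out
VFI/VPI/RPI/XRI identifiers from a per-extent bitmap: the first clear bit
gives the index, and the resource ID is the extent base plus the offset
within the extent. A minimal userspace sketch of that scheme (illustrative
names, a single fixed-size extent, and a plain byte array standing in for
the kernel bitmap helpers, not the driver's actual structures):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define EXT_SIZE 64U	/* illustrative; the driver reads sizes from READ_CONFIG */

struct ext {
	uint32_t base;              /* first resource ID in the extent */
	uint8_t  use_map[EXT_SIZE]; /* 1 = allocated; kernel code uses a bitmap */
	uint32_t n_alloc;
};

/* Allocate: find the first free slot, mark it, derive the resource ID. */
static int ext_alloc(struct ext *e, uint32_t *rid, uint32_t *index)
{
	uint32_t pos;

	for (pos = 0; pos < EXT_SIZE; pos++)
		if (!e->use_map[pos])
			break;
	if (pos >= EXT_SIZE)
		return -1;		/* out of resources */

	e->use_map[pos] = 1;
	*index = pos;
	/* resource ID = extent base + offset within the extent */
	*rid = e->base + pos;
	e->n_alloc++;
	return 0;
}

/* Free: map the resource ID back to its bitmap position and clear it. */
static int ext_free(struct ext *e, uint32_t rid)
{
	if (rid < e->base || rid >= e->base + EXT_SIZE)
		return -1;		/* ID not covered by this extent */
	e->use_map[rid - e->base] = 0;
	e->n_alloc--;
	return 0;
}
```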

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1135 ++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc_sli/sli4.h |  396 ++++++++++
 2 files changed, 1531 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index e41070fb21d6..9c22c85031be 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -4032,3 +4032,1138 @@ sli_cqe_async(struct sli4 *sli4, void *buf)
 
 	return rc;
 }
+
+bool
+sli_fw_ready(struct sli4 *sli4)
+{
+	u32 val;
+
+	/* Determine if the chip FW is in a ready state */
+	val = sli_reg_read_status(sli4);
+	return (val & SLI4_PORT_STATUS_RDY) ? 1 : 0;
+}
+
+static bool
+sli_wait_for_fw_ready(struct sli4 *sli4, u32 timeout_ms)
+{
+	unsigned long end;
+
+	end = jiffies + msecs_to_jiffies(timeout_ms);
+
+	do {
+		if (sli_fw_ready(sli4))
+			return true;
+
+		usleep_range(1000, 2000);
+	} while (time_before(jiffies, end));
+
+	return false;
+}
+
+static bool
+sli_sliport_reset(struct sli4 *sli4)
+{
+	bool rc;
+	u32 val;
+
+	val = SLI4_PORT_CTRL_IP;
+	/* Initialize port, endian */
+	writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+
+	rc = sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC);
+	if (!rc)
+		efc_log_crit(sli4, "port failed to become ready after initialization\n");
+
+	return rc;
+}
+
+static bool
+sli_fw_init(struct sli4 *sli4)
+{
+	/*
+	 * Is firmware ready for operation?
+	 */
+	if (!sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC)) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return false;
+	}
+
+	/*
+	 * Reset port to a known state
+	 */
+	return sli_sliport_reset(sli4);
+}
+
+static int
+sli_request_features(struct sli4 *sli4, u32 *features, bool query)
+{
+	struct sli4_cmd_request_features *req_features = sli4->bmbx.virt;
+
+	if (sli_cmd_request_features(sli4, sli4->bmbx.virt, *features, query)) {
+		efc_log_err(sli4, "bad REQUEST_FEATURES write\n");
+		return EFC_FAIL;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+			__func__);
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(req_features->hdr.status)) {
+		efc_log_err(sli4, "REQUEST_FEATURES bad status %#x\n",
+		       le16_to_cpu(req_features->hdr.status));
+		return EFC_FAIL;
+	}
+
+	*features = le32_to_cpu(req_features->resp);
+	return EFC_SUCCESS;
+}
+
+void
+sli_calc_max_qentries(struct sli4 *sli4)
+{
+	enum sli4_qtype q;
+	u32 qentries;
+
+	for (q = SLI4_QTYPE_EQ; q < SLI4_QTYPE_MAX; q++) {
+		sli4->qinfo.max_qentries[q] =
+			sli_convert_mask_to_count(sli4->qinfo.count_method[q],
+						  sli4->qinfo.count_mask[q]);
+	}
+
+	/* single, contiguous DMA allocations will be called for each queue
+	 * of size (max_qentries * queue entry size); since these can be large,
+	 * check against the OS max DMA allocation size
+	 */
+	for (q = SLI4_QTYPE_EQ; q < SLI4_QTYPE_MAX; q++) {
+		qentries = sli4->qinfo.max_qentries[q];
+
+		efc_log_info(sli4, "[%s]: max_qentries %d\n",
+			     SLI4_QNAME[q], qentries);
+	}
+}
+
+static int
+sli_get_read_config(struct sli4 *sli4)
+{
+	struct sli4_rsp_read_config *conf = sli4->bmbx.virt;
+	u32 i, total, total_size;
+	u32 *base;
+
+	if (sli_cmd_read_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad READ_CONFIG write\n");
+		return EFC_FAIL;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox fail (READ_CONFIG)\n");
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(conf->hdr.status)) {
+		efc_log_err(sli4, "READ_CONFIG bad status %#x\n",
+		       le16_to_cpu(conf->hdr.status));
+		return EFC_FAIL;
+	}
+
+	sli4->params.has_extents =
+	  le32_to_cpu(conf->ext_dword) & SLI4_READ_CFG_RESP_RESOURCE_EXT;
+	if (sli4->params.has_extents) {
+		efc_log_err(sli4, "extents not supported\n");
+		return EFC_FAIL;
+	}
+
+	base = sli4->ext[0].base;
+	if (!base) {
+		int size = SLI4_RSRC_MAX * sizeof(u32);
+
+		base = kzalloc(size, GFP_KERNEL);
+		if (!base)
+			return EFC_FAIL;
+	}
+
+	for (i = 0; i < SLI4_RSRC_MAX; i++) {
+		sli4->ext[i].number = 1;
+		sli4->ext[i].n_alloc = 0;
+		sli4->ext[i].base = &base[i];
+	}
+
+	sli4->ext[SLI4_RSRC_VFI].base[0] = le16_to_cpu(conf->vfi_base);
+	sli4->ext[SLI4_RSRC_VFI].size = le16_to_cpu(conf->vfi_count);
+
+	sli4->ext[SLI4_RSRC_VPI].base[0] = le16_to_cpu(conf->vpi_base);
+	sli4->ext[SLI4_RSRC_VPI].size = le16_to_cpu(conf->vpi_count);
+
+	sli4->ext[SLI4_RSRC_RPI].base[0] = le16_to_cpu(conf->rpi_base);
+	sli4->ext[SLI4_RSRC_RPI].size = le16_to_cpu(conf->rpi_count);
+
+	sli4->ext[SLI4_RSRC_XRI].base[0] = le16_to_cpu(conf->xri_base);
+	sli4->ext[SLI4_RSRC_XRI].size = le16_to_cpu(conf->xri_count);
+
+	sli4->ext[SLI4_RSRC_FCFI].base[0] = 0;
+	sli4->ext[SLI4_RSRC_FCFI].size = le16_to_cpu(conf->fcfi_count);
+
+	for (i = 0; i < SLI4_RSRC_MAX; i++) {
+		total = sli4->ext[i].number * sli4->ext[i].size;
+		total_size = BITS_TO_LONGS(total) * sizeof(long);
+		sli4->ext[i].use_map = kzalloc(total_size, GFP_KERNEL);
+		if (!sli4->ext[i].use_map) {
+			efc_log_err(sli4, "bitmap memory allocation failed %d\n",
+			       i);
+			return EFC_FAIL;
+		}
+		sli4->ext[i].map_size = total;
+	}
+
+	sli4->topology = (le32_to_cpu(conf->topology_dword) &
+			  SLI4_READ_CFG_RESP_TOPOLOGY) >> 24;
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+		efc_log_info(sli4, "FC (unknown)\n");
+		break;
+	case SLI4_READ_CFG_TOPO_NON_FC_AL:
+		efc_log_info(sli4, "FC (direct attach)\n");
+		break;
+	case SLI4_READ_CFG_TOPO_FC_AL:
+		efc_log_info(sli4, "FC (arbitrated loop)\n");
+		break;
+	default:
+		efc_log_info(sli4, "bad topology %#x\n",
+			sli4->topology);
+	}
+
+	sli4->e_d_tov = le16_to_cpu(conf->e_d_tov);
+	sli4->r_a_tov = le16_to_cpu(conf->r_a_tov);
+
+	sli4->link_module_type = le16_to_cpu(conf->lmt);
+
+	sli4->qinfo.max_qcount[SLI4_QTYPE_EQ] =	le16_to_cpu(conf->eq_count);
+	sli4->qinfo.max_qcount[SLI4_QTYPE_CQ] =	le16_to_cpu(conf->cq_count);
+	sli4->qinfo.max_qcount[SLI4_QTYPE_WQ] =	le16_to_cpu(conf->wq_count);
+	sli4->qinfo.max_qcount[SLI4_QTYPE_RQ] =	le16_to_cpu(conf->rq_count);
+
+	/*
+	 * READ_CONFIG doesn't give the max number of MQ. Applications
+	 * will typically want 1, but we may need another at some future
+	 * date. Dummy up a "max" MQ count here.
+	 */
+	sli4->qinfo.max_qcount[SLI4_QTYPE_MQ] = SLI4_USER_MQ_COUNT;
+	return EFC_SUCCESS;
+}
+
+static int
+sli_get_sli4_parameters(struct sli4 *sli4)
+{
+	struct sli4_rsp_cmn_get_sli4_params *parms;
+	u32 dw_loopback;
+	u32 dw_eq_pg_cnt;
+	u32 dw_cq_pg_cnt;
+	u32 dw_mq_pg_cnt;
+	u32 dw_wq_pg_cnt;
+	u32 dw_rq_pg_cnt;
+	u32 dw_sgl_pg_cnt;
+
+	if (sli_cmd_common_get_sli4_parameters(sli4, sli4->bmbx.virt))
+		return EFC_FAIL;
+
+	parms = (struct sli4_rsp_cmn_get_sli4_params *)
+		 (((u8 *)sli4->bmbx.virt) +
+		  offsetof(struct sli4_cmd_sli_config, payload.embed));
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+			__func__);
+		return EFC_FAIL;
+	}
+
+	if (parms->hdr.status) {
+		efc_log_err(sli4, "COMMON_GET_SLI4_PARAMETERS bad status %#x",
+		       parms->hdr.status);
+		efc_log_err(sli4, "additional status %#x\n",
+		       parms->hdr.additional_status);
+		return EFC_FAIL;
+	}
+
+	dw_loopback = le32_to_cpu(parms->dw16_loopback_scope);
+	dw_eq_pg_cnt = le32_to_cpu(parms->dw6_eq_page_cnt);
+	dw_cq_pg_cnt = le32_to_cpu(parms->dw8_cq_page_cnt);
+	dw_mq_pg_cnt = le32_to_cpu(parms->dw10_mq_page_cnt);
+	dw_wq_pg_cnt = le32_to_cpu(parms->dw12_wq_page_cnt);
+	dw_rq_pg_cnt = le32_to_cpu(parms->dw14_rq_page_cnt);
+
+	sli4->params.auto_reg =	(dw_loopback & SLI4_PARAM_AREG);
+	sli4->params.auto_xfer_rdy = (dw_loopback & SLI4_PARAM_AGXF);
+	sli4->params.hdr_template_req =	(dw_loopback & SLI4_PARAM_HDRR);
+	sli4->params.t10_dif_inline_capable = (dw_loopback & SLI4_PARAM_TIMM);
+	sli4->params.t10_dif_separate_capable =	(dw_loopback & SLI4_PARAM_TSMM);
+
+	sli4->params.mq_create_version = GET_Q_CREATE_VERSION(dw_mq_pg_cnt);
+	sli4->params.cq_create_version = GET_Q_CREATE_VERSION(dw_cq_pg_cnt);
+
+	sli4->rq_min_buf_size =	le16_to_cpu(parms->min_rq_buffer_size);
+	sli4->rq_max_buf_size = le32_to_cpu(parms->max_rq_buffer_size);
+
+	sli4->qinfo.qpage_count[SLI4_QTYPE_EQ] =
+		(dw_eq_pg_cnt & SLI4_PARAM_EQ_PAGE_CNT_MASK);
+	sli4->qinfo.qpage_count[SLI4_QTYPE_CQ] =
+		(dw_cq_pg_cnt & SLI4_PARAM_CQ_PAGE_CNT_MASK);
+	sli4->qinfo.qpage_count[SLI4_QTYPE_MQ] =
+		(dw_mq_pg_cnt & SLI4_PARAM_MQ_PAGE_CNT_MASK);
+	sli4->qinfo.qpage_count[SLI4_QTYPE_WQ] =
+		(dw_wq_pg_cnt & SLI4_PARAM_WQ_PAGE_CNT_MASK);
+	sli4->qinfo.qpage_count[SLI4_QTYPE_RQ] =
+		(dw_rq_pg_cnt & SLI4_PARAM_RQ_PAGE_CNT_MASK);
+
+	/* save count methods and masks for each queue type */
+
+	sli4->qinfo.count_mask[SLI4_QTYPE_EQ] =
+			le16_to_cpu(parms->eqe_count_mask);
+	sli4->qinfo.count_method[SLI4_QTYPE_EQ] =
+			GET_Q_CNT_METHOD(dw_eq_pg_cnt);
+
+	sli4->qinfo.count_mask[SLI4_QTYPE_CQ] =
+			le16_to_cpu(parms->cqe_count_mask);
+	sli4->qinfo.count_method[SLI4_QTYPE_CQ] =
+			GET_Q_CNT_METHOD(dw_cq_pg_cnt);
+
+	sli4->qinfo.count_mask[SLI4_QTYPE_MQ] =
+			le16_to_cpu(parms->mqe_count_mask);
+	sli4->qinfo.count_method[SLI4_QTYPE_MQ] =
+			GET_Q_CNT_METHOD(dw_mq_pg_cnt);
+
+	sli4->qinfo.count_mask[SLI4_QTYPE_WQ] =
+			le16_to_cpu(parms->wqe_count_mask);
+	sli4->qinfo.count_method[SLI4_QTYPE_WQ] =
+			GET_Q_CNT_METHOD(dw_wq_pg_cnt);
+
+	sli4->qinfo.count_mask[SLI4_QTYPE_RQ] =
+			le16_to_cpu(parms->rqe_count_mask);
+	sli4->qinfo.count_method[SLI4_QTYPE_RQ] =
+			GET_Q_CNT_METHOD(dw_rq_pg_cnt);
+
+	/* now calculate max queue entries */
+	sli_calc_max_qentries(sli4);
+
+	dw_sgl_pg_cnt = le32_to_cpu(parms->dw18_sgl_page_cnt);
+
+	/* max # of pages */
+	sli4->max_sgl_pages = (dw_sgl_pg_cnt & SLI4_PARAM_SGL_PAGE_CNT_MASK);
+
+	/* bit map of available sizes */
+	sli4->sgl_page_sizes = (dw_sgl_pg_cnt &
+				SLI4_PARAM_SGL_PAGE_SZS_MASK) >> 8;
+	/* ignore HLM here. Use value from REQUEST_FEATURES */
+	sli4->sge_supported_length = le32_to_cpu(parms->sge_supported_length);
+	sli4->params.sgl_pre_reg_required = (dw_loopback & SLI4_PARAM_SGLR);
+	/* default to using pre-registered SGL's */
+	sli4->params.sgl_pre_registered = true;
+
+	sli4->params.perf_hint = dw_loopback & SLI4_PARAM_PHON;
+	sli4->params.perf_wq_id_association = (dw_loopback & SLI4_PARAM_PHWQ);
+
+	sli4->rq_batch = (le16_to_cpu(parms->dw15w1_rq_db_window) &
+			  SLI4_PARAM_RQ_DB_WINDOW_MASK) >> 12;
+
+	/* Use the highest available WQE size. */
+	if (((dw_wq_pg_cnt & SLI4_PARAM_WQE_SZS_MASK) >> 8) &
+	    SLI4_128BYTE_WQE_SUPPORT)
+		sli4->wqe_size = SLI4_WQE_EXT_BYTES;
+	else
+		sli4->wqe_size = SLI4_WQE_BYTES;
+
+	return EFC_SUCCESS;
+}
+
+static int
+sli_get_ctrl_attributes(struct sli4 *sli4)
+{
+	struct sli4_rsp_cmn_get_cntl_attributes *attr;
+	struct sli4_rsp_cmn_get_cntl_addl_attributes *add_attr;
+	struct efc_dma data;
+	u32 psize;
+
+	/*
+	 * Issue COMMON_GET_CNTL_ATTRIBUTES to get port_number. Temporarily
+	 * uses VPD DMA buffer as the response won't fit in the embedded
+	 * buffer.
+	 */
+	memset(sli4->vpd_data.virt, 0, sli4->vpd_data.size);
+	if (sli_cmd_common_get_cntl_attributes(sli4, sli4->bmbx.virt,
+				&sli4->vpd_data)) {
+		efc_log_err(sli4, "bad COMMON_GET_CNTL_ATTRIBUTES write\n");
+		return EFC_FAIL;
+	}
+
+	attr = sli4->vpd_data.virt;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "%s: bootstrap mailbox write fail\n",
+				__func__);
+		return EFC_FAIL;
+	}
+
+	if (attr->hdr.status) {
+		efc_log_err(sli4, "COMMON_GET_CNTL_ATTRIBUTES bad status %#x",
+		       attr->hdr.status);
+		efc_log_err(sli4, "additional status %#x\n",
+		       attr->hdr.additional_status);
+		return EFC_FAIL;
+	}
+
+	sli4->port_number = attr->port_num_type_flags & SLI4_CNTL_ATTR_PORTNUM;
+
+	memcpy(sli4->bios_version_string, attr->bios_version_str,
+	       sizeof(sli4->bios_version_string));
+
+	/* get additional attributes */
+	psize = sizeof(struct sli4_rsp_cmn_get_cntl_addl_attributes);
+	data.size = psize;
+	data.virt = dma_alloc_coherent(&sli4->pci->dev, data.size,
+				       &data.phys, GFP_DMA);
+	if (!data.virt) {
+		memset(&data, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "Failed to allocate memory for GET_CNTL_ADDL_ATTR\n");
+		return EFC_FAIL;
+	}
+
+	if (sli_cmd_common_get_cntl_addl_attributes(sli4, sli4->bmbx.virt,
+						&data)) {
+		efc_log_err(sli4, "bad GET_CNTL_ADDL_ATTR write\n");
+		dma_free_coherent(&sli4->pci->dev, data.size,
+				  data.virt, data.phys);
+		return EFC_FAIL;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "mailbox fail (GET_CNTL_ADDL_ATTR)\n");
+		dma_free_coherent(&sli4->pci->dev, data.size,
+				  data.virt, data.phys);
+		return EFC_FAIL;
+	}
+
+	add_attr = data.virt;
+	if (add_attr->hdr.status) {
+		efc_log_err(sli4, "GET_CNTL_ADDL_ATTR bad status %#x\n",
+		       add_attr->hdr.status);
+		dma_free_coherent(&sli4->pci->dev, data.size,
+				  data.virt, data.phys);
+		return EFC_FAIL;
+	}
+
+	memcpy(sli4->ipl_name, add_attr->ipl_file_name, sizeof(sli4->ipl_name));
+
+	efc_log_info(sli4, "IPL:%s\n", (char *)sli4->ipl_name);
+
+	dma_free_coherent(&sli4->pci->dev, data.size, data.virt,
+			  data.phys);
+	memset(&data, 0, sizeof(struct efc_dma));
+	return EFC_SUCCESS;
+}
+
+static int
+sli_get_fw_rev(struct sli4 *sli4)
+{
+	struct sli4_cmd_read_rev	*read_rev = sli4->bmbx.virt;
+
+	if (sli_cmd_read_rev(sli4, sli4->bmbx.virt, &sli4->vpd_data))
+		return EFC_FAIL;
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail (READ_REV)\n");
+		return EFC_FAIL;
+	}
+
+	if (le16_to_cpu(read_rev->hdr.status)) {
+		efc_log_err(sli4, "READ_REV bad status %#x\n",
+		       le16_to_cpu(read_rev->hdr.status));
+		return EFC_FAIL;
+	}
+
+	sli4->fw_rev[0] = le32_to_cpu(read_rev->first_fw_id);
+	memcpy(sli4->fw_name[0], read_rev->first_fw_name,
+		sizeof(sli4->fw_name[0]));
+
+	sli4->fw_rev[1] = le32_to_cpu(read_rev->second_fw_id);
+	memcpy(sli4->fw_name[1], read_rev->second_fw_name,
+	       sizeof(sli4->fw_name[1]));
+
+	sli4->hw_rev[0] = le32_to_cpu(read_rev->first_hw_rev);
+	sli4->hw_rev[1] = le32_to_cpu(read_rev->second_hw_rev);
+	sli4->hw_rev[2] = le32_to_cpu(read_rev->third_hw_rev);
+
+	efc_log_info(sli4, "FW1:%s (%08x) / FW2:%s (%08x)\n",
+		read_rev->first_fw_name, le32_to_cpu(read_rev->first_fw_id),
+		read_rev->second_fw_name, le32_to_cpu(read_rev->second_fw_id));
+
+	efc_log_info(sli4, "HW1: %08x / HW2: %08x\n",
+			le32_to_cpu(read_rev->first_hw_rev),
+			le32_to_cpu(read_rev->second_hw_rev));
+
+	/* Check that all VPD data was returned */
+	if (le32_to_cpu(read_rev->returned_vpd_length) !=
+	    le32_to_cpu(read_rev->actual_vpd_length)) {
+		efc_log_info(sli4, "VPD length: avail=%d return=%d actual=%d\n",
+			le32_to_cpu(read_rev->available_length_dword) &
+				    SLI4_READ_REV_AVAILABLE_LENGTH,
+			le32_to_cpu(read_rev->returned_vpd_length),
+			le32_to_cpu(read_rev->actual_vpd_length));
+	}
+	sli4->vpd_length = le32_to_cpu(read_rev->returned_vpd_length);
+	return EFC_SUCCESS;
+}
+
+static int
+sli_get_config(struct sli4 *sli4)
+{
+	struct sli4_rsp_cmn_get_port_name *port_name;
+	struct sli4_cmd_read_nvparms *read_nvparms;
+
+	/*
+	 * Read the device configuration
+	 */
+	if (sli_get_read_config(sli4))
+		return EFC_FAIL;
+
+	if (sli_get_sli4_parameters(sli4))
+		return EFC_FAIL;
+
+	if (sli_get_ctrl_attributes(sli4))
+		return EFC_FAIL;
+
+	if (sli_cmd_common_get_port_name(sli4, sli4->bmbx.virt))
+		return EFC_FAIL;
+
+	port_name = (struct sli4_rsp_cmn_get_port_name *)
+		    (((u8 *)sli4->bmbx.virt) +
+		    offsetof(struct sli4_cmd_sli_config, payload.embed));
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox fail (GET_PORT_NAME)\n");
+		return EFC_FAIL;
+	}
+
+	sli4->port_name[0] = port_name->port_name[sli4->port_number];
+	sli4->port_name[1] = '\0';
+
+	if (sli_get_fw_rev(sli4))
+		return EFC_FAIL;
+
+	if (sli_cmd_read_nvparms(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad READ_NVPARMS write\n");
+		return EFC_FAIL;
+	}
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox fail (READ_NVPARMS)\n");
+		return EFC_FAIL;
+	}
+
+	read_nvparms = sli4->bmbx.virt;
+	if (le16_to_cpu(read_nvparms->hdr.status)) {
+		efc_log_err(sli4, "READ_NVPARMS bad status %#x\n",
+		       le16_to_cpu(read_nvparms->hdr.status));
+		return EFC_FAIL;
+	}
+
+	memcpy(sli4->wwpn, read_nvparms->wwpn, sizeof(sli4->wwpn));
+	memcpy(sli4->wwnn, read_nvparms->wwnn, sizeof(sli4->wwnn));
+
+	efc_log_info(sli4, "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+		sli4->wwpn[0], sli4->wwpn[1], sli4->wwpn[2], sli4->wwpn[3],
+		sli4->wwpn[4], sli4->wwpn[5], sli4->wwpn[6], sli4->wwpn[7]);
+	efc_log_info(sli4, "WWNN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n",
+		sli4->wwnn[0], sli4->wwnn[1], sli4->wwnn[2], sli4->wwnn[3],
+		sli4->wwnn[4], sli4->wwnn[5], sli4->wwnn[6], sli4->wwnn[7]);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_setup(struct sli4 *sli4, void *os, struct pci_dev *pdev,
+	  void __iomem *reg[])
+{
+	u32 intf = U32_MAX;
+	u32 pci_class_rev = 0;
+	u32 rev_id = 0;
+	u32 family = 0;
+	u32 asic_id = 0;
+	u32 i;
+	struct sli4_asic_entry_t *asic;
+
+	memset(sli4, 0, sizeof(struct sli4));
+
+	sli4->os = os;
+	sli4->pci = pdev;
+
+	for (i = 0; i < 6; i++)
+		sli4->reg[i] = reg[i];
+	/*
+	 * Read the SLI_INTF register to discover the register layout
+	 * and other capability information
+	 */
+	if (pci_read_config_dword(pdev, SLI4_INTF_REG, &intf))
+		return EFC_FAIL;
+
+	if ((intf & SLI4_INTF_VALID_MASK) != (u32)SLI4_INTF_VALID_VALUE) {
+		efc_log_err(sli4, "SLI_INTF is not valid\n");
+		return EFC_FAIL;
+	}
+
+	/* driver only supports SLI-4 */
+	if ((intf & SLI4_INTF_REV_MASK) != SLI4_INTF_REV_S4) {
+		efc_log_err(sli4, "Unsupported SLI revision (intf=%#x)\n",
+		       intf);
+		return EFC_FAIL;
+	}
+
+	sli4->sli_family = intf & SLI4_INTF_FAMILY_MASK;
+
+	sli4->if_type = intf & SLI4_INTF_IF_TYPE_MASK;
+	efc_log_info(sli4, "status=%#x error1=%#x error2=%#x\n",
+		     sli_reg_read_status(sli4), sli_reg_read_err1(sli4),
+		     sli_reg_read_err2(sli4));
+
+	/*
+	 * set the ASIC type and revision
+	 */
+	if (pci_read_config_dword(pdev, PCI_CLASS_REVISION, &pci_class_rev))
+		return EFC_FAIL;
+
+	rev_id = pci_class_rev & 0xff;
+	family = sli4->sli_family;
+	if (family == SLI4_FAMILY_CHECK_ASIC_TYPE) {
+		if (!pci_read_config_dword(pdev, SLI4_ASIC_ID_REG, &asic_id))
+			family = asic_id & SLI4_ASIC_GEN_MASK;
+	}
+
+	for (i = 0, asic = sli4_asic_table; i < ARRAY_SIZE(sli4_asic_table);
+	     i++, asic++) {
+		if (rev_id == asic->rev_id && family == asic->family) {
+			sli4->asic_type = family;
+			sli4->asic_rev = rev_id;
+			break;
+		}
+	}
+	/* Fail if no matching asic type/rev was found */
+	if (!sli4->asic_type || !sli4->asic_rev) {
+		efc_log_err(sli4, "no matching asic family/rev found: %02x/%02x\n",
+		       family, rev_id);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * The bootstrap mailbox is equivalent to a MQ with a single 256 byte
+	 * entry, a CQ with a single 16 byte entry, and no event queue.
+	 * Alignment must be 16 bytes as the low order address bits in the
+	 * address register are also control / status.
+	 */
+	sli4->bmbx.size = SLI4_BMBX_SIZE + sizeof(struct sli4_mcqe);
+	sli4->bmbx.virt = dma_alloc_coherent(&pdev->dev, sli4->bmbx.size,
+					     &sli4->bmbx.phys, GFP_DMA);
+	if (!sli4->bmbx.virt) {
+		memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
+		efc_log_err(sli4, "bootstrap mailbox allocation failed\n");
+		return EFC_FAIL;
+	}
+
+	if (sli4->bmbx.phys & SLI4_BMBX_MASK_LO) {
+		efc_log_err(sli4, "bad alignment for bootstrap mailbox\n");
+		return EFC_FAIL;
+	}
+
+	efc_log_info(sli4, "bmbx v=%p p=0x%x %08x s=%zd\n", sli4->bmbx.virt,
+		     upper_32_bits(sli4->bmbx.phys),
+		     lower_32_bits(sli4->bmbx.phys), sli4->bmbx.size);
+
+	/* 4096 is arbitrary. What should this value actually be? */
+	sli4->vpd_data.size = 4096;
+	sli4->vpd_data.virt = dma_alloc_coherent(&pdev->dev,
+						 sli4->vpd_data.size,
+						 &sli4->vpd_data.phys,
+						 GFP_DMA);
+	if (!sli4->vpd_data.virt) {
+		memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
+		/* Note that failure isn't fatal in this specific case */
+		efc_log_info(sli4, "VPD buffer allocation failed\n");
+	}
+
+	if (!sli_fw_init(sli4)) {
+		efc_log_err(sli4, "FW initialization failed\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Set one of fcpi(initiator), fcpt(target), fcpc(combined) to true
+	 * in addition to any other desired features
+	 */
+	sli4->features = (SLI4_REQFEAT_IAAB | SLI4_REQFEAT_NPIV |
+				 SLI4_REQFEAT_DIF | SLI4_REQFEAT_VF |
+				 SLI4_REQFEAT_FCPC | SLI4_REQFEAT_IAAR |
+				 SLI4_REQFEAT_HLM | SLI4_REQFEAT_PERFH |
+				 SLI4_REQFEAT_RXSEQ | SLI4_REQFEAT_RXRI |
+				 SLI4_REQFEAT_MRQP);
+
+	/* use performance hints if available */
+	if (sli4->params.perf_hint)
+		sli4->features |= SLI4_REQFEAT_PERFH;
+
+	if (sli_request_features(sli4, &sli4->features, true))
+		return EFC_FAIL;
+
+	if (sli_get_config(sli4))
+		return EFC_FAIL;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_init(struct sli4 *sli4)
+{
+	if (sli4->params.has_extents) {
+		efc_log_info(sli4, "extent allocation not supported\n");
+		return EFC_FAIL;
+	}
+
+	sli4->features &= (~SLI4_REQFEAT_HLM);
+	sli4->features &= (~SLI4_REQFEAT_RXSEQ);
+	sli4->features &= (~SLI4_REQFEAT_RXRI);
+
+	if (sli_request_features(sli4, &sli4->features, false))
+		return EFC_FAIL;
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_reset(struct sli4 *sli4)
+{
+	u32	i;
+
+	if (!sli_fw_init(sli4)) {
+		efc_log_crit(sli4, "FW initialization failed\n");
+		return EFC_FAIL;
+	}
+
+	kfree(sli4->ext[0].base);
+	sli4->ext[0].base = NULL;
+
+	for (i = 0; i < SLI4_RSRC_MAX; i++) {
+		kfree(sli4->ext[i].use_map);
+		sli4->ext[i].use_map = NULL;
+		sli4->ext[i].base = NULL;
+	}
+
+	return sli_get_config(sli4);
+}
+
+int
+sli_fw_reset(struct sli4 *sli4)
+{
+	/*
+	 * Firmware must be ready before issuing the reset.
+	 */
+	if (!sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC)) {
+		efc_log_crit(sli4, "FW status is NOT ready\n");
+		return EFC_FAIL;
+	}
+
+	/* Lancer uses PHYDEV_CONTROL */
+	writel(SLI4_PHYDEV_CTRL_FRST, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+
+	/* wait for the FW to become ready after the reset */
+	if (!sli_wait_for_fw_ready(sli4, SLI4_FW_READY_TIMEOUT_MSEC)) {
+		efc_log_crit(sli4, "Failed to be ready after firmware reset\n");
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+void
+sli_teardown(struct sli4 *sli4)
+{
+	u32 i;
+
+	kfree(sli4->ext[0].base);
+	sli4->ext[0].base = NULL;
+
+	for (i = 0; i < SLI4_RSRC_MAX; i++) {
+		sli4->ext[i].base = NULL;
+
+		kfree(sli4->ext[i].use_map);
+		sli4->ext[i].use_map = NULL;
+	}
+
+	if (!sli_sliport_reset(sli4))
+		efc_log_err(sli4, "FW deinitialization failed\n");
+
+	dma_free_coherent(&sli4->pci->dev, sli4->vpd_data.size,
+			  sli4->vpd_data.virt, sli4->vpd_data.phys);
+	memset(&sli4->vpd_data, 0, sizeof(struct efc_dma));
+
+	dma_free_coherent(&sli4->pci->dev, sli4->bmbx.size,
+			  sli4->bmbx.virt, sli4->bmbx.phys);
+	memset(&sli4->bmbx, 0, sizeof(struct efc_dma));
+}
+
+int
+sli_callback(struct sli4 *sli4, enum sli4_callback which,
+	     void *func, void *arg)
+{
+	if (!func) {
+		efc_log_err(sli4, "bad parameter sli4=%p which=%#x func=%p\n",
+		       sli4, which, func);
+		return EFC_FAIL;
+	}
+
+	switch (which) {
+	case SLI4_CB_LINK:
+		sli4->link = func;
+		sli4->link_arg = arg;
+		break;
+	default:
+		efc_log_info(sli4, "unknown callback %#x\n", which);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq,
+		    u32 num_eq, u32 shift, u32 delay_mult)
+{
+	sli_cmd_common_modify_eq_delay(sli4, sli4->bmbx.virt, eq, num_eq,
+				       shift, delay_mult);
+
+	if (sli_bmbx_command(sli4)) {
+		efc_log_crit(sli4, "bootstrap mailbox write fail (MODIFY EQ DELAY)\n");
+		return EFC_FAIL;
+	}
+	if (sli_res_sli_config(sli4, sli4->bmbx.virt)) {
+		efc_log_err(sli4, "bad status MODIFY EQ DELAY\n");
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype,
+		   u32 *rid, u32 *index)
+{
+	int rc = EFC_SUCCESS;
+	u32 size;
+	u32 ext_idx;
+	u32 item_idx;
+	u32 position;
+
+	*rid = U32_MAX;
+	*index = U32_MAX;
+
+	switch (rtype) {
+	case SLI4_RSRC_VFI:
+	case SLI4_RSRC_VPI:
+	case SLI4_RSRC_RPI:
+	case SLI4_RSRC_XRI:
+		position = find_first_zero_bit(sli4->ext[rtype].use_map,
+					       sli4->ext[rtype].map_size);
+		if (position >= sli4->ext[rtype].map_size) {
+			efc_log_err(sli4, "out of resource %d (alloc=%d)\n",
+				    rtype, sli4->ext[rtype].n_alloc);
+			rc = EFC_FAIL;
+			break;
+		}
+		set_bit(position, sli4->ext[rtype].use_map);
+		*index = position;
+
+		size = sli4->ext[rtype].size;
+
+		ext_idx = *index / size;
+		item_idx = *index % size;
+
+		*rid = sli4->ext[rtype].base[ext_idx] + item_idx;
+
+		sli4->ext[rtype].n_alloc++;
+		break;
+	default:
+		rc = EFC_FAIL;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_free(struct sli4 *sli4, enum sli4_resource rtype, u32 rid)
+{
+	int rc = EFC_FAIL;
+	u32 x;
+	u32 size, *base;
+
+	switch (rtype) {
+	case SLI4_RSRC_VFI:
+	case SLI4_RSRC_VPI:
+	case SLI4_RSRC_RPI:
+	case SLI4_RSRC_XRI:
+		/*
+		 * Figure out which extent contains the resource ID. I.e. find
+		 * the extent such that
+		 *   extent->base <= resource ID < extent->base + extent->size
+		 */
+		base = sli4->ext[rtype].base;
+		size = sli4->ext[rtype].size;
+
+		/*
+		 * In the case of FW reset, this may be cleared
+		 * but the force_free path will still attempt to
+		 * free the resource. Prevent a NULL pointer access.
+		 */
+		if (!base)
+			break;
+
+		for (x = 0; x < sli4->ext[rtype].number; x++) {
+			if (rid < base[x] || rid >= (base[x] + size))
+				continue;
+
+			rid -= base[x];
+			clear_bit((x * size) + rid, sli4->ext[rtype].use_map);
+			rc = EFC_SUCCESS;
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int
+sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype)
+{
+	int rc = EFC_FAIL;
+	u32 i;
+
+	switch (rtype) {
+	case SLI4_RSRC_VFI:
+	case SLI4_RSRC_VPI:
+	case SLI4_RSRC_RPI:
+	case SLI4_RSRC_XRI:
+		for (i = 0; i < sli4->ext[rtype].map_size; i++)
+			clear_bit(i, sli4->ext[rtype].use_map);
+		rc = EFC_SUCCESS;
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int sli_raise_ue(struct sli4 *sli4, u8 dump)
+{
+	u32 val = 0;
+
+	if (dump == SLI4_FUNC_DESC_DUMP) {
+		val = SLI4_PORT_CTRL_FDD | SLI4_PORT_CTRL_IP;
+		writel(val, (sli4->reg[0] + SLI4_PORT_CTRL_REG));
+	} else {
+		val = SLI4_PHYDEV_CTRL_FRST;
+
+		if (dump == SLI4_CHIP_LEVEL_DUMP)
+			val |= SLI4_PHYDEV_CTRL_DD;
+		writel(val, (sli4->reg[0] + SLI4_PHYDEV_CTRL_REG));
+	}
+
+	return EFC_SUCCESS;
+}
+
+int sli_dump_is_ready(struct sli4 *sli4)
+{
+	int rc = SLI4_DUMP_READY_STATUS_NOT_READY;
+	u32 port_val;
+	u32 bmbx_val;
+
+	/*
+	 * Ensure that the port is ready AND the mailbox is
+	 * ready before signaling that the dump is ready to go.
+	 */
+	port_val = sli_reg_read_status(sli4);
+	bmbx_val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+
+	if ((bmbx_val & SLI4_BMBX_RDY) &&
+	    (port_val & SLI4_PORT_STATUS_RDY)) {
+		if (port_val & SLI4_PORT_STATUS_DIP)
+			rc = SLI4_DUMP_READY_STATUS_DD_PRESENT;
+		else if (port_val & SLI4_PORT_STATUS_FDP)
+			rc = SLI4_DUMP_READY_STATUS_FDB_PRESENT;
+	}
+
+	return rc;
+}
+
+bool sli_reset_required(struct sli4 *sli4)
+{
+	u32 val;
+
+	val = sli_reg_read_status(sli4);
+	return (val & SLI4_PORT_STATUS_RN);
+}
+
+int
+sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, u16 xri,
+		       u32 xri_count, struct efc_dma *page0[],
+		       struct efc_dma *page1[], struct efc_dma *dma)
+{
+	struct sli4_rqst_post_sgl_pages *post = NULL;
+	u32 i;
+	__le32 req_len;
+
+	post = sli_config_cmd_init(sli4, buf,
+				   SLI4_CFG_PYLD_LENGTH(post_sgl_pages), dma);
+	if (!post)
+		return EFC_FAIL;
+
+	/* payload size calculation */
+	/* 4 = xri_start + xri_count */
+	/* xri_count = # of XRI's registered */
+	/* sizeof(uint64_t) = physical address size */
+	/* 2 = # of physical addresses per page set */
+	req_len = cpu_to_le32(4 + (xri_count * (sizeof(uint64_t) * 2)));
+	sli_cmd_fill_hdr(&post->hdr, SLI4_OPC_POST_SGL_PAGES, SLI4_SUBSYSTEM_FC,
+			 CMD_V0, req_len);
+	post->xri_start = cpu_to_le16(xri);
+	post->xri_count = cpu_to_le16(xri_count);
+
+	for (i = 0; i < xri_count; i++) {
+		post->page_set[i].page0_low  =
+				cpu_to_le32(lower_32_bits(page0[i]->phys));
+		post->page_set[i].page0_high =
+				cpu_to_le32(upper_32_bits(page0[i]->phys));
+	}
+
+	if (page1) {
+		for (i = 0; i < xri_count; i++) {
+			post->page_set[i].page1_low =
+				cpu_to_le32(lower_32_bits(page1[i]->phys));
+			post->page_set[i].page1_high =
+				cpu_to_le32(upper_32_bits(page1[i]->phys));
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf, struct efc_dma *dma,
+			   u16 rpi, struct efc_dma *payload_dma)
+{
+	struct sli4_rqst_post_hdr_templates *req = NULL;
+	uintptr_t phys = 0;
+	u32 i = 0;
+	u32 page_count, payload_size;
+
+	page_count = sli_page_count(dma->size, SLI_PAGE_SIZE);
+
+	payload_size = ((sizeof(struct sli4_rqst_post_hdr_templates) +
+		(page_count * SZ_DMAADDR)) - sizeof(struct sli4_rqst_hdr));
+
+	if (page_count > 16) {
+		/*
+		 * We can't fit more than 16 descriptors into an embedded mbox
+		 * command, it has to be non-embedded
+		 */
+		payload_dma->size = payload_size;
+		payload_dma->virt = dma_alloc_coherent(&sli4->pci->dev,
+						       payload_dma->size,
+					     &payload_dma->phys, GFP_DMA);
+		if (!payload_dma->virt) {
+			memset(payload_dma, 0, sizeof(struct efc_dma));
+			efc_log_err(sli4, "mbox payload memory allocation fail\n");
+			return EFC_FAIL;
+		}
+		req = sli_config_cmd_init(sli4, buf, payload_size, payload_dma);
+	} else {
+		req = sli_config_cmd_init(sli4, buf, payload_size, NULL);
+	}
+
+	if (!req)
+		return EFC_FAIL;
+
+	if (rpi == U16_MAX)
+		rpi = sli4->ext[SLI4_RSRC_RPI].base[0];
+
+	sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_POST_HDR_TEMPLATES,
+			 SLI4_SUBSYSTEM_FC, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(post_hdr_templates));
+
+	req->rpi_offset = cpu_to_le16(rpi);
+	req->page_count = cpu_to_le16(page_count);
+	phys = dma->phys;
+	for (i = 0; i < page_count; i++) {
+		req->page_descriptor[i].low  = cpu_to_le32(lower_32_bits(phys));
+		req->page_descriptor[i].high = cpu_to_le32(upper_32_bits(phys));
+
+		phys += SLI_PAGE_SIZE;
+	}
+
+	return EFC_SUCCESS;
+}
+
+u32
+sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi)
+{
+	u32 bytes = 0;
+
+	/* Check if header templates needed */
+	if (sli4->params.hdr_template_req)
+		/* round up to a page */
+		bytes = round_up(n_rpi * SLI4_HDR_TEMPLATE_SIZE, SLI_PAGE_SIZE);
+
+	return bytes;
+}
+
+const char *
+sli_fc_get_status_string(u32 status)
+{
+	static struct {
+		u32 code;
+		const char *label;
+	} lookup[] = {
+		{SLI4_FC_WCQE_STATUS_SUCCESS,		"SUCCESS"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE,	"FCP_RSP_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_REMOTE_STOP,	"REMOTE_STOP"},
+		{SLI4_FC_WCQE_STATUS_LOCAL_REJECT,	"LOCAL_REJECT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_RJT,		"NPORT_RJT"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_RJT,	"FABRIC_RJT"},
+		{SLI4_FC_WCQE_STATUS_NPORT_BSY,		"NPORT_BSY"},
+		{SLI4_FC_WCQE_STATUS_FABRIC_BSY,	"FABRIC_BSY"},
+		{SLI4_FC_WCQE_STATUS_LS_RJT,		"LS_RJT"},
+		{SLI4_FC_WCQE_STATUS_CMD_REJECT,	"CMD_REJECT"},
+		{SLI4_FC_WCQE_STATUS_FCP_TGT_LENCHECK,	"FCP_TGT_LENCHECK"},
+		{SLI4_FC_WCQE_STATUS_RQ_BUF_LEN_EXCEEDED, "BUF_LEN_EXCEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_BUF_NEEDED,
+				"RQ_INSUFF_BUF_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_FRM_DISC, "RQ_INSUFF_FRM_DESC"},
+		{SLI4_FC_WCQE_STATUS_RQ_DMA_FAILURE,	"RQ_DMA_FAILURE"},
+		{SLI4_FC_WCQE_STATUS_FCP_RSP_TRUNCATE,	"FCP_RSP_TRUNCATE"},
+		{SLI4_FC_WCQE_STATUS_DI_ERROR,		"DI_ERROR"},
+		{SLI4_FC_WCQE_STATUS_BA_RJT,		"BA_RJT"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_NEEDED,
+				"RQ_INSUFF_XRI_NEEDED"},
+		{SLI4_FC_WCQE_STATUS_RQ_INSUFF_XRI_DISC, "INSUFF_XRI_DISC"},
+		{SLI4_FC_WCQE_STATUS_RX_ERROR_DETECT,	"RX_ERROR_DETECT"},
+		{SLI4_FC_WCQE_STATUS_RX_ABORT_REQUEST,	"RX_ABORT_REQUEST"},
+		};
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(lookup); i++) {
+		if (status == lookup[i].code)
+			return lookup[i].label;
+	}
+	return "unknown";
+}
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
index 5bc9e8b8ffd1..94f1dc05ecd1 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.h
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -3716,4 +3716,400 @@ sli_cmd_fill_hdr(struct sli4_rqst_hdr *hdr, u8 opc, u8 sub, u32 ver, __le32 len)
 	hdr->request_length = len;
 }
 
+/*
+ * Get / set parameter functions
+ */
+
+static inline u32
+sli_get_max_sge(struct sli4 *sli4)
+{
+	return sli4->sge_supported_length;
+}
+
+static inline u32
+sli_get_max_sgl(struct sli4 *sli4)
+{
+	if (sli4->sgl_page_sizes != 1) {
+		efc_log_err(sli4, "unsupported SGL page sizes %#x\n",
+			sli4->sgl_page_sizes);
+		return 0;
+	}
+
+	return (sli4->max_sgl_pages * SLI_PAGE_SIZE) / sizeof(struct sli4_sge);
+}
+
+static inline enum sli4_link_medium
+sli_get_medium(struct sli4 *sli4)
+{
+	switch (sli4->topology) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+	case SLI4_READ_CFG_TOPO_NON_FC_AL:
+		return SLI4_LINK_MEDIUM_FC;
+	default:
+		return SLI4_LINK_MEDIUM_MAX;
+	}
+}
+
+static inline int
+sli_set_topology(struct sli4 *sli4, u32 value)
+{
+	int	rc = 0;
+
+	switch (value) {
+	case SLI4_READ_CFG_TOPO_FC:
+	case SLI4_READ_CFG_TOPO_FC_AL:
+	case SLI4_READ_CFG_TOPO_NON_FC_AL:
+		sli4->topology = value;
+		break;
+	default:
+		efc_log_err(sli4, "unsupported topology %#x\n", value);
+		rc = -1;
+	}
+
+	return rc;
+}
+
+static inline u32
+sli_convert_mask_to_count(u32 method, u32 mask)
+{
+	u32 count = 0;
+
+	if (method) {
+		count = 1 << (31 - __builtin_clz(mask));
+		count *= 16;
+	} else {
+		count = mask;
+	}
+
+	return count;
+}
+
+static inline u32
+sli_reg_read_status(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_STATUS_REGOFF);
+}
+
+static inline int
+sli_fw_error_status(struct sli4 *sli4)
+{
+	return (sli_reg_read_status(sli4) & SLI4_PORT_STATUS_ERR) ? 1 : 0;
+}
+
+static inline u32
+sli_reg_read_err1(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR1);
+}
+
+static inline u32
+sli_reg_read_err2(struct sli4 *sli)
+{
+	return readl(sli->reg[0] + SLI4_PORT_ERROR2);
+}
+
+static inline int
+sli_fc_rqe_length(struct sli4 *sli4, void *cqe, u32 *len_hdr,
+		  u32 *len_data)
+{
+	struct sli4_fc_async_rcqe	*rcqe = cqe;
+
+	*len_hdr = *len_data = 0;
+
+	if (rcqe->status == SLI4_FC_ASYNC_RQ_SUCCESS) {
+		*len_hdr  = rcqe->hdpl_byte & SLI4_RACQE_HDPL;
+		*len_data = le16_to_cpu(rcqe->data_placement_length);
+		return 0;
+	} else {
+		return -1;
+	}
+}
+
+static inline u8
+sli_fc_rqe_fcfi(struct sli4 *sli4, void *cqe)
+{
+	u8 code = ((u8 *)cqe)[SLI4_CQE_CODE_OFFSET];
+	u8 fcfi = U8_MAX;
+
+	switch (code) {
+	case SLI4_CQE_CODE_RQ_ASYNC: {
+		struct sli4_fc_async_rcqe *rcqe = cqe;
+
+		fcfi = le16_to_cpu(rcqe->fcfi_rq_id_word) & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_RQ_ASYNC_V1: {
+		struct sli4_fc_async_rcqe_v1 *rcqev1 = cqe;
+
+		fcfi = rcqev1->fcfi_byte & SLI4_RACQE_FCFI;
+		break;
+	}
+	case SLI4_CQE_CODE_OPTIMIZED_WRITE_CMD: {
+		struct sli4_fc_optimized_write_cmd_cqe *opt_wr = cqe;
+
+		fcfi = opt_wr->flags0 & SLI4_OCQE_FCFI;
+		break;
+	}
+	}
+
+	return fcfi;
+}
+
+/****************************************************************************
+ * Function prototypes
+ */
+int
+sli_cmd_config_link(struct sli4 *sli4, void *buf);
+int
+sli_cmd_down_link(struct sli4 *sli4, void *buf);
+int
+sli_cmd_dump_type4(struct sli4 *sli4, void *buf, u16 wki);
+int
+sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf,
+				     u32 page_num, struct efc_dma *dma);
+int
+sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, u8 req_stats,
+			u8 clear_overflow_flags, u8 clear_all_counters);
+int
+sli_cmd_read_status(struct sli4 *sli4, void *buf, u8 clear);
+int
+sli_cmd_init_link(struct sli4 *sli4, void *buf, u32 speed,
+		  u8 reset_alpa);
+int
+sli_cmd_init_vfi(struct sli4 *sli4, void *buf, u16 vfi, u16 fcfi,
+		 u16 vpi);
+int
+sli_cmd_init_vpi(struct sli4 *sli4, void *buf, u16 vpi, u16 vfi);
+int
+sli_cmd_post_xri(struct sli4 *sli4, void *buf, u16 base, u16 cnt);
+int
+sli_cmd_release_xri(struct sli4 *sli4, void *buf, u8 num_xri);
+int
+sli_cmd_read_sparm64(struct sli4 *sli4, void *buf,
+		struct efc_dma *dma, u16 vpi);
+int
+sli_cmd_read_topology(struct sli4 *sli4, void *buf, struct efc_dma *dma);
+int
+sli_cmd_read_nvparms(struct sli4 *sli4, void *buf);
+int
+sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, u8 *wwpn,
+		u8 *wwnn, u8 hard_alpa, u32 preferred_d_id);
+int
+sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, u16 index,
+		 struct sli4_cmd_rq_cfg *rq_cfg);
+int
+sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, u8 mode, u16 index,
+		     u8 rq_selection_policy, u8 mrq_bit_mask, u16 num_mrqs,
+		     struct sli4_cmd_rq_cfg *rq_cfg);
+int
+sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, u32 rpi, u32 vpi, u32 fc_id,
+		struct efc_dma *dma, u8 update, u8 enable_t10_pi);
+int
+sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, u16 indicator);
+int
+sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, u16 indicator,
+		  enum sli4_resource which, u32 fc_id);
+int
+sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, u32 fc_id,
+		__be64 sli_wwpn, u16 vpi, u16 vfi, bool update);
+int
+sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size,
+		u16 vfi, u16 fcfi, struct efc_dma dma,
+		u16 vpi, __be64 sli_wwpn, u32 fc_id);
+int
+sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, u16 id, u32 type);
+int
+sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, u16 idx, u32 type);
+int
+sli_cmd_common_nop(struct sli4 *sli4, void *buf, uint64_t context);
+int
+sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf,
+					u16 rtype);
+int
+sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf);
+int
+sli_cmd_common_write_object(struct sli4 *sli4, void *buf, u16 noc,
+		u16 eof, u32 len, u32 offset, char *name, struct efc_dma *dma);
+int
+sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, char *object_name);
+int
+sli_cmd_common_read_object(struct sli4 *sli4, void *buf,
+		u32 length, u32 offset, char *name, struct efc_dma *dma);
+int
+sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf,
+		struct efc_dma *cmd, struct efc_dma *resp);
+int
+sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf,
+		bool query, bool is_buffer_list, struct efc_dma *dma, u8 fdb);
+int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf,
+			    u32 feature, u32 param_len, void *parameter);
+
+int sli_cqe_mq(struct sli4 *sli4, void *buf);
+int sli_cqe_async(struct sli4 *sli4, void *buf);
+
+int
+sli_setup(struct sli4 *sli4, void *os, struct pci_dev *pdev, void __iomem *r[]);
+void sli_calc_max_qentries(struct sli4 *sli4);
+int sli_init(struct sli4 *sli4);
+int sli_reset(struct sli4 *sli4);
+int sli_fw_reset(struct sli4 *sli4);
+void sli_teardown(struct sli4 *sli4);
+int
+sli_callback(struct sli4 *sli4, enum sli4_callback cb, void *func, void *arg);
+int
+sli_bmbx_command(struct sli4 *sli4);
+int
+__sli_queue_init(struct sli4 *sli4, struct sli4_queue *q, u32 qtype,
+		size_t size, u32 n_entries, u32 align);
+int
+__sli_create_queue(struct sli4 *sli4, struct sli4_queue *q);
+int
+sli_eq_modify_delay(struct sli4 *sli4, struct sli4_queue *eq, u32 num_eq,
+		u32 shift, u32 delay_mult);
+int
+sli_queue_alloc(struct sli4 *sli4, u32 qtype, struct sli4_queue *q,
+		u32 n_entries, struct sli4_queue *assoc);
+int
+sli_cq_alloc_set(struct sli4 *sli4, struct sli4_queue *qs[], u32 num_cqs,
+		u32 n_entries, struct sli4_queue *eqs[]);
+int
+sli_get_queue_entry_size(struct sli4 *sli4, u32 qtype);
+int
+sli_queue_free(struct sli4 *sli4, struct sli4_queue *q, u32 destroy_queues,
+		u32 free_memory);
+int
+sli_queue_eq_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
+int
+sli_queue_arm(struct sli4 *sli4, struct sli4_queue *q, bool arm);
+
+int
+sli_wq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+int
+sli_mq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+int
+sli_rq_write(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+int
+sli_eq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+int
+sli_cq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+int
+sli_mq_read(struct sli4 *sli4, struct sli4_queue *q, u8 *entry);
+int
+sli_resource_alloc(struct sli4 *sli4, enum sli4_resource rtype, u32 *rid,
+		u32 *index);
+int
+sli_resource_free(struct sli4 *sli4, enum sli4_resource rtype, u32 rid);
+int
+sli_resource_reset(struct sli4 *sli4, enum sli4_resource rtype);
+int
+sli_eq_parse(struct sli4 *sli4, u8 *buf, u16 *cq_id);
+int
+sli_cq_parse(struct sli4 *sli4, struct sli4_queue *cq, u8 *cqe,
+		enum sli4_qentry *etype, u16 *q_id);
+
+int sli_raise_ue(struct sli4 *sli4, u8 dump);
+int sli_dump_is_ready(struct sli4 *sli4);
+bool sli_reset_required(struct sli4 *sli4);
+bool sli_fw_ready(struct sli4 *sli4);
+
+int
+sli_fc_process_link_attention(struct sli4 *sli4, void *acqe);
+int
+sli_fc_cqe_parse(struct sli4 *sli4, struct sli4_queue *cq,
+		 u8 *cqe, enum sli4_qentry *etype,
+		 u16 *rid);
+u32 sli_fc_response_length(struct sli4 *sli4, u8 *cqe);
+u32 sli_fc_io_length(struct sli4 *sli4, u8 *cqe);
+int sli_fc_els_did(struct sli4 *sli4, u8 *cqe, u32 *d_id);
+u32 sli_fc_ext_status(struct sli4 *sli4, u8 *cqe);
+int
+sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe, u16 *rq_id, u32 *index);
+int
+sli_cmd_wq_create(struct sli4 *sli4, void *buf,
+		struct efc_dma *qmem, u16 cq_id);
+int sli_cmd_post_sgl_pages(struct sli4 *sli4, void *buf, u16 xri,
+		u32 xri_count, struct efc_dma *page0[], struct efc_dma *page1[],
+		struct efc_dma *dma);
+int
+sli_cmd_post_hdr_templates(struct sli4 *sli4, void *buf,
+		struct efc_dma *dma, u16 rpi, struct efc_dma *payload_dma);
+int
+sli_fc_rq_alloc(struct sli4 *sli4, struct sli4_queue *q, u32 n_entries,
+		u32 buffer_size, struct sli4_queue *cq, bool is_hdr);
+int
+sli_fc_rq_set_alloc(struct sli4 *sli4, u32 num_rq_pairs, struct sli4_queue *q[],
+		u32 base_cq_id, u32 num, u32 hdr_buf_size, u32 data_buf_size);
+u32 sli_fc_get_rpi_requirements(struct sli4 *sli4, u32 n_rpi);
+int
+sli_abort_wqe(struct sli4 *sli4, void *buf, enum sli4_abort_type type,
+	      bool send_abts, u32 ids, u32 mask, u16 tag, u16 cq_id);
+
+int
+sli_send_frame_wqe(struct sli4 *sli4, void *buf, u8 sof, u8 eof,
+		u32 *hdr, struct efc_dma *payload, u32 req_len, u8 timeout,
+		u16 xri, u16 req_tag);
+
+int
+sli_xmit_els_rsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *rsp,
+		struct sli_els_params *params);
+
+int
+sli_els_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		struct sli_els_params *params);
+
+int
+sli_fcp_icmnd64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl, u16 xri,
+		u16 tag, u16 cq_id, u32 rpi, u32 rnode_fcid, u8 timeout);
+
+int
+sli_fcp_iread64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		u32 first_data_sge, u32 xfer_len, u16 xri,
+		u16 tag, u16 cq_id, u32 rpi, u32 rnode_fcid, u8 dif, u8 bs,
+		u8 timeout);
+
+int
+sli_fcp_iwrite64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		u32 first_data_sge, u32 xfer_len,
+		u32 first_burst, u16 xri, u16 tag, u16 cq_id, u32 rpi,
+		u32 rnode_fcid, u8 dif, u8 bs, u8 timeout);
+
+int
+sli_fcp_treceive64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+		       u32 first_data_sge, u16 cq_id, u8 dif, u8 bs,
+		       struct sli_fcp_tgt_params *params);
+int
+sli_fcp_cont_treceive64_wqe(struct sli4 *sli, void *buf, struct efc_dma *sgl,
+			u32 first_data_sge, u16 sec_xri, u16 cq_id, u8 dif,
+			u8 bs, struct sli_fcp_tgt_params *params);
+
+int
+sli_fcp_trsp64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		   u16 cq_id, u8 port_owned, struct sli_fcp_tgt_params *params);
+
+int
+sli_fcp_tsend64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		    u32 first_data_sge, u16 cq_id, u8 dif, u8 bs,
+		    struct sli_fcp_tgt_params *params);
+int
+sli_gen_request64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *sgl,
+		struct sli_ct_params *params);
+
+int
+sli_xmit_bls_rsp64_wqe(struct sli4 *sli4, void *buf,
+		struct sli_bls_payload *payload, struct sli_bls_params *params);
+
+int
+sli_xmit_sequence64_wqe(struct sli4 *sli4, void *buf, struct efc_dma *payload,
+		struct sli_ct_params *params);
+
+int
+sli_requeue_xri_wqe(struct sli4 *sli4, void *buf, u16 xri, u16 tag, u16 cq_id);
+void
+sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf, size_t size,
+		u16 timeout);
+
+const char *sli_fc_get_status_string(u32 status);
+
 #endif /* !_SLI4_H */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 08/31] elx: libefc: Generic state machine framework
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (6 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 07/31] elx: libefc_sli: APIs to setup SLI library James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
                   ` (22 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch starts the population of the libefc library.
The library contains common routines usable by either a target or an
initiator driver, along with an FC discovery state machine
interface.

This patch creates the library directory and adds definitions
for the discovery state machine interface.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_sm.c |  54 +++++++++
 drivers/scsi/elx/libefc/efc_sm.h | 197 +++++++++++++++++++++++++++++++
 2 files changed, 251 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.c
 create mode 100644 drivers/scsi/elx/libefc/efc_sm.h

diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c
new file mode 100644
index 000000000000..252399c19293
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Generic state machine framework.
+ */
+#include "efc.h"
+#include "efc_sm.h"
+
+/**
+ * efc_sm_post_event() - Post an event to a context.
+ *
+ * @ctx: State machine context
+ * @evt: Event to post
+ * @data: Event-specific data (if any)
+ */
+int
+efc_sm_post_event(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *data)
+{
+	if (!ctx->current_state)
+		return EFC_FAIL;
+
+	ctx->current_state(ctx, evt, data);
+	return EFC_SUCCESS;
+}
+
+void
+efc_sm_transition(struct efc_sm_ctx *ctx,
+		  void (*state)(struct efc_sm_ctx *,
+				enum efc_sm_event, void *), void *data)
+{
+	if (ctx->current_state == state) {
+		efc_sm_post_event(ctx, EFC_EVT_REENTER, data);
+	} else {
+		efc_sm_post_event(ctx, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_sm_post_event(ctx, EFC_EVT_ENTER, data);
+	}
+}
+
+static const char *event_name[] = EFC_SM_EVENT_NAME;
+
+const char *efc_sm_event_name(enum efc_sm_event evt)
+{
+	if (evt > EFC_EVT_LAST)
+		return "unknown";
+
+	return event_name[evt];
+}
diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h
new file mode 100644
index 000000000000..e28835e7717d
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_sm.h
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/*
+ * Generic state machine framework declarations.
+ */
+
+#ifndef _EFC_SM_H
+#define _EFC_SM_H
+
+struct efc_sm_ctx;
+
+/* State Machine events */
+enum efc_sm_event {
+	/* Common Events */
+	EFC_EVT_ENTER,
+	EFC_EVT_REENTER,
+	EFC_EVT_EXIT,
+	EFC_EVT_SHUTDOWN,
+	EFC_EVT_ALL_CHILD_NODES_FREE,
+	EFC_EVT_RESUME,
+	EFC_EVT_TIMER_EXPIRED,
+
+	/* Domain Events */
+	EFC_EVT_RESPONSE,
+	EFC_EVT_ERROR,
+
+	EFC_EVT_DOMAIN_FOUND,
+	EFC_EVT_DOMAIN_ALLOC_OK,
+	EFC_EVT_DOMAIN_ALLOC_FAIL,
+	EFC_EVT_DOMAIN_REQ_ATTACH,
+	EFC_EVT_DOMAIN_ATTACH_OK,
+	EFC_EVT_DOMAIN_ATTACH_FAIL,
+	EFC_EVT_DOMAIN_LOST,
+	EFC_EVT_DOMAIN_FREE_OK,
+	EFC_EVT_DOMAIN_FREE_FAIL,
+	EFC_EVT_HW_DOMAIN_REQ_ATTACH,
+	EFC_EVT_HW_DOMAIN_REQ_FREE,
+
+	/* Sport Events */
+	EFC_EVT_NPORT_ALLOC_OK,
+	EFC_EVT_NPORT_ALLOC_FAIL,
+	EFC_EVT_NPORT_ATTACH_OK,
+	EFC_EVT_NPORT_ATTACH_FAIL,
+	EFC_EVT_NPORT_FREE_OK,
+	EFC_EVT_NPORT_FREE_FAIL,
+	EFC_EVT_NPORT_TOPOLOGY_NOTIFY,
+	EFC_EVT_HW_PORT_ALLOC_OK,
+	EFC_EVT_HW_PORT_ALLOC_FAIL,
+	EFC_EVT_HW_PORT_ATTACH_OK,
+	EFC_EVT_HW_PORT_REQ_ATTACH,
+	EFC_EVT_HW_PORT_REQ_FREE,
+	EFC_EVT_HW_PORT_FREE_OK,
+
+	/* Login Events */
+	EFC_EVT_SRRS_ELS_REQ_OK,
+	EFC_EVT_SRRS_ELS_CMPL_OK,
+	EFC_EVT_SRRS_ELS_REQ_FAIL,
+	EFC_EVT_SRRS_ELS_CMPL_FAIL,
+	EFC_EVT_SRRS_ELS_REQ_RJT,
+	EFC_EVT_NODE_ATTACH_OK,
+	EFC_EVT_NODE_ATTACH_FAIL,
+	EFC_EVT_NODE_FREE_OK,
+	EFC_EVT_NODE_FREE_FAIL,
+	EFC_EVT_ELS_FRAME,
+	EFC_EVT_ELS_REQ_TIMEOUT,
+	EFC_EVT_ELS_REQ_ABORTED,
+	/* request an ELS IO be aborted */
+	EFC_EVT_ABORT_ELS,
+	/* ELS abort process complete */
+	EFC_EVT_ELS_ABORT_CMPL,
+
+	EFC_EVT_ABTS_RCVD,
+
+	/* node is not in the GID_PT payload */
+	EFC_EVT_NODE_MISSING,
+	/* node is allocated and in the GID_PT payload */
+	EFC_EVT_NODE_REFOUND,
+	/* node shutting down due to PLOGI recvd (implicit logo) */
+	EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+	/* node shutting down due to LOGO recvd/sent (explicit logo) */
+	EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+
+	EFC_EVT_PLOGI_RCVD,
+	EFC_EVT_FLOGI_RCVD,
+	EFC_EVT_LOGO_RCVD,
+	EFC_EVT_PRLI_RCVD,
+	EFC_EVT_PRLO_RCVD,
+	EFC_EVT_PDISC_RCVD,
+	EFC_EVT_FDISC_RCVD,
+	EFC_EVT_ADISC_RCVD,
+	EFC_EVT_RSCN_RCVD,
+	EFC_EVT_SCR_RCVD,
+	EFC_EVT_ELS_RCVD,
+
+	EFC_EVT_FCP_CMD_RCVD,
+
+	EFC_EVT_GIDPT_DELAY_EXPIRED,
+
+	/* SCSI Target Server events */
+	EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY,
+	EFC_EVT_NODE_DEL_INI_COMPLETE,
+	EFC_EVT_NODE_DEL_TGT_COMPLETE,
+	EFC_EVT_NODE_SESS_REG_OK,
+	EFC_EVT_NODE_SESS_REG_FAIL,
+
+	/* Must be last */
+	EFC_EVT_LAST
+};
+
+/* State Machine event name lookup array */
+#define EFC_SM_EVENT_NAME {						\
+	[EFC_EVT_ENTER]			= "EFC_EVT_ENTER",		\
+	[EFC_EVT_REENTER]		= "EFC_EVT_REENTER",		\
+	[EFC_EVT_EXIT]			= "EFC_EVT_EXIT",		\
+	[EFC_EVT_SHUTDOWN]		= "EFC_EVT_SHUTDOWN",		\
+	[EFC_EVT_ALL_CHILD_NODES_FREE]	= "EFC_EVT_ALL_CHILD_NODES_FREE",\
+	[EFC_EVT_RESUME]		= "EFC_EVT_RESUME",		\
+	[EFC_EVT_TIMER_EXPIRED]		= "EFC_EVT_TIMER_EXPIRED",	\
+	[EFC_EVT_RESPONSE]		= "EFC_EVT_RESPONSE",		\
+	[EFC_EVT_ERROR]			= "EFC_EVT_ERROR",		\
+	[EFC_EVT_DOMAIN_FOUND]		= "EFC_EVT_DOMAIN_FOUND",	\
+	[EFC_EVT_DOMAIN_ALLOC_OK]	= "EFC_EVT_DOMAIN_ALLOC_OK",	\
+	[EFC_EVT_DOMAIN_ALLOC_FAIL]	= "EFC_EVT_DOMAIN_ALLOC_FAIL",	\
+	[EFC_EVT_DOMAIN_REQ_ATTACH]	= "EFC_EVT_DOMAIN_REQ_ATTACH",	\
+	[EFC_EVT_DOMAIN_ATTACH_OK]	= "EFC_EVT_DOMAIN_ATTACH_OK",	\
+	[EFC_EVT_DOMAIN_ATTACH_FAIL]	= "EFC_EVT_DOMAIN_ATTACH_FAIL",	\
+	[EFC_EVT_DOMAIN_LOST]		= "EFC_EVT_DOMAIN_LOST",	\
+	[EFC_EVT_DOMAIN_FREE_OK]	= "EFC_EVT_DOMAIN_FREE_OK",	\
+	[EFC_EVT_DOMAIN_FREE_FAIL]	= "EFC_EVT_DOMAIN_FREE_FAIL",	\
+	[EFC_EVT_HW_DOMAIN_REQ_ATTACH]	= "EFC_EVT_HW_DOMAIN_REQ_ATTACH",\
+	[EFC_EVT_HW_DOMAIN_REQ_FREE]	= "EFC_EVT_HW_DOMAIN_REQ_FREE",	\
+	[EFC_EVT_NPORT_ALLOC_OK]	= "EFC_EVT_NPORT_ALLOC_OK",	\
+	[EFC_EVT_NPORT_ALLOC_FAIL]	= "EFC_EVT_NPORT_ALLOC_FAIL",	\
+	[EFC_EVT_NPORT_ATTACH_OK]	= "EFC_EVT_NPORT_ATTACH_OK",	\
+	[EFC_EVT_NPORT_ATTACH_FAIL]	= "EFC_EVT_NPORT_ATTACH_FAIL",	\
+	[EFC_EVT_NPORT_FREE_OK]		= "EFC_EVT_NPORT_FREE_OK",	\
+	[EFC_EVT_NPORT_FREE_FAIL]	= "EFC_EVT_NPORT_FREE_FAIL",	\
+	[EFC_EVT_NPORT_TOPOLOGY_NOTIFY]	= "EFC_EVT_NPORT_TOPOLOGY_NOTIFY",\
+	[EFC_EVT_HW_PORT_ALLOC_OK]	= "EFC_EVT_HW_PORT_ALLOC_OK",	\
+	[EFC_EVT_HW_PORT_ALLOC_FAIL]	= "EFC_EVT_HW_PORT_ALLOC_FAIL",	\
+	[EFC_EVT_HW_PORT_ATTACH_OK]	= "EFC_EVT_HW_PORT_ATTACH_OK",	\
+	[EFC_EVT_HW_PORT_REQ_ATTACH]	= "EFC_EVT_HW_PORT_REQ_ATTACH",	\
+	[EFC_EVT_HW_PORT_REQ_FREE]	= "EFC_EVT_HW_PORT_REQ_FREE",	\
+	[EFC_EVT_HW_PORT_FREE_OK]	= "EFC_EVT_HW_PORT_FREE_OK",	\
+	[EFC_EVT_SRRS_ELS_REQ_OK]	= "EFC_EVT_SRRS_ELS_REQ_OK",	\
+	[EFC_EVT_SRRS_ELS_CMPL_OK]	= "EFC_EVT_SRRS_ELS_CMPL_OK",	\
+	[EFC_EVT_SRRS_ELS_REQ_FAIL]	= "EFC_EVT_SRRS_ELS_REQ_FAIL",	\
+	[EFC_EVT_SRRS_ELS_CMPL_FAIL]	= "EFC_EVT_SRRS_ELS_CMPL_FAIL",	\
+	[EFC_EVT_SRRS_ELS_REQ_RJT]	= "EFC_EVT_SRRS_ELS_REQ_RJT",	\
+	[EFC_EVT_NODE_ATTACH_OK]	= "EFC_EVT_NODE_ATTACH_OK",	\
+	[EFC_EVT_NODE_ATTACH_FAIL]	= "EFC_EVT_NODE_ATTACH_FAIL",	\
+	[EFC_EVT_NODE_FREE_OK]		= "EFC_EVT_NODE_FREE_OK",	\
+	[EFC_EVT_NODE_FREE_FAIL]	= "EFC_EVT_NODE_FREE_FAIL",	\
+	[EFC_EVT_ELS_FRAME]		= "EFC_EVT_ELS_FRAME",		\
+	[EFC_EVT_ELS_REQ_TIMEOUT]	= "EFC_EVT_ELS_REQ_TIMEOUT",	\
+	[EFC_EVT_ELS_REQ_ABORTED]	= "EFC_EVT_ELS_REQ_ABORTED",	\
+	[EFC_EVT_ABORT_ELS]		= "EFC_EVT_ABORT_ELS",		\
+	[EFC_EVT_ELS_ABORT_CMPL]	= "EFC_EVT_ELS_ABORT_CMPL",	\
+	[EFC_EVT_ABTS_RCVD]		= "EFC_EVT_ABTS_RCVD",		\
+	[EFC_EVT_NODE_MISSING]		= "EFC_EVT_NODE_MISSING",	\
+	[EFC_EVT_NODE_REFOUND]		= "EFC_EVT_NODE_REFOUND",	\
+	[EFC_EVT_SHUTDOWN_IMPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO",\
+	[EFC_EVT_SHUTDOWN_EXPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO",\
+	[EFC_EVT_PLOGI_RCVD]		= "EFC_EVT_PLOGI_RCVD",		\
+	[EFC_EVT_FLOGI_RCVD]		= "EFC_EVT_FLOGI_RCVD",		\
+	[EFC_EVT_LOGO_RCVD]		= "EFC_EVT_LOGO_RCVD",		\
+	[EFC_EVT_PRLI_RCVD]		= "EFC_EVT_PRLI_RCVD",		\
+	[EFC_EVT_PRLO_RCVD]		= "EFC_EVT_PRLO_RCVD",		\
+	[EFC_EVT_PDISC_RCVD]		= "EFC_EVT_PDISC_RCVD",		\
+	[EFC_EVT_FDISC_RCVD]		= "EFC_EVT_FDISC_RCVD",		\
+	[EFC_EVT_ADISC_RCVD]		= "EFC_EVT_ADISC_RCVD",		\
+	[EFC_EVT_RSCN_RCVD]		= "EFC_EVT_RSCN_RCVD",		\
+	[EFC_EVT_SCR_RCVD]		= "EFC_EVT_SCR_RCVD",		\
+	[EFC_EVT_ELS_RCVD]		= "EFC_EVT_ELS_RCVD",		\
+	[EFC_EVT_FCP_CMD_RCVD]		= "EFC_EVT_FCP_CMD_RCVD",	\
+	[EFC_EVT_GIDPT_DELAY_EXPIRED]	= "EFC_EVT_GIDPT_DELAY_EXPIRED",\
+	[EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY] =				\
+				"EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY",	\
+	[EFC_EVT_NODE_DEL_INI_COMPLETE]	= "EFC_EVT_NODE_DEL_INI_COMPLETE",\
+	[EFC_EVT_NODE_DEL_TGT_COMPLETE]	= "EFC_EVT_NODE_DEL_TGT_COMPLETE",\
+	[EFC_EVT_NODE_SESS_REG_OK]	= "EFC_EVT_NODE_SESS_REG_OK",	\
+	[EFC_EVT_NODE_SESS_REG_FAIL]	= "EFC_EVT_NODE_SESS_REG_FAIL",	\
+	[EFC_EVT_LAST]			= "EFC_EVT_LAST",		\
+}
+
+int
+efc_sm_post_event(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *data);
+void
+efc_sm_transition(struct efc_sm_ctx *ctx,
+		  void (*state)(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg),
+		  void *data);
+void efc_sm_disable(struct efc_sm_ctx *ctx);
+const char *efc_sm_event_name(enum efc_sm_event evt);
+
+#endif /* ! _EFC_SM_H */
-- 
2.26.2



* [PATCH v7 09/31] elx: libefc: Emulex FC discovery library APIs and definitions
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (7 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 08/31] elx: libefc: Generic state machine framework James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 10/31] elx: libefc: FC Domain state machine interfaces James Smart
                   ` (21 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- efc_nport: SLI/local FC port objects
- efc_domain: FC domain (aka fabric) objects
- efc_node: FC node (aka remote port) objects

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc.h    |  69 ++++
 drivers/scsi/elx/libefc/efclib.c |  81 +++++
 drivers/scsi/elx/libefc/efclib.h | 601 +++++++++++++++++++++++++++++++
 3 files changed, 751 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc.h
 create mode 100644 drivers/scsi/elx/libefc/efclib.c
 create mode 100644 drivers/scsi/elx/libefc/efclib.h

diff --git a/drivers/scsi/elx/libefc/efc.h b/drivers/scsi/elx/libefc/efc.h
new file mode 100644
index 000000000000..19b98c2153f6
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_H__
+#define __EFC_H__
+
+#include "../include/efc_common.h"
+#include "efclib.h"
+#include "efc_sm.h"
+#include "efc_cmds.h"
+#include "efc_domain.h"
+#include "efc_nport.h"
+#include "efc_node.h"
+#include "efc_fabric.h"
+#include "efc_device.h"
+#include "efc_els.h"
+
+#define EFC_MAX_REMOTE_NODES			2048
+#define NODE_SPARAMS_SIZE			256
+
+enum efc_hw_rtn {
+	EFC_HW_RTN_SUCCESS = 0,
+	EFC_HW_RTN_SUCCESS_SYNC = 1,
+	EFC_HW_RTN_ERROR = -1,
+	EFC_HW_RTN_NO_RESOURCES = -2,
+	EFC_HW_RTN_NO_MEMORY = -3,
+	EFC_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFC_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFC_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFC_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFC_HW_RTN_IS_ERROR(e) ((e) < 0)
+
+enum efc_scsi_del_initiator_reason {
+	EFC_SCSI_INITIATOR_DELETED,
+	EFC_SCSI_INITIATOR_MISSING,
+};
+
+enum efc_scsi_del_target_reason {
+	EFC_SCSI_TARGET_DELETED,
+	EFC_SCSI_TARGET_MISSING,
+};
+
+#define EFC_SCSI_CALL_COMPLETE			0
+#define EFC_SCSI_CALL_ASYNC			1
+
+#define EFC_FC_ELS_DEFAULT_RETRIES		3
+
+#define domain_sm_trace(domain) \
+	efc_log_debug(domain->efc, "[domain:%s] %-20s %-20s\n", \
+		      domain->display_name, __func__, efc_sm_event_name(evt)) \
+
+#define domain_trace(domain, fmt, ...) \
+	efc_log_debug(domain->efc, \
+		      "[%s]" fmt, domain->display_name, ##__VA_ARGS__) \
+
+#define node_sm_trace() \
+	efc_log_debug(node->efc, "[%s] %-20s %-20s\n", \
+		      node->display_name, __func__, efc_sm_event_name(evt)) \
+
+#define nport_sm_trace(nport) \
+	efc_log_debug(nport->efc, \
+		"[%s] %-20s\n", nport->display_name, efc_sm_event_name(evt)) \
+
+#endif /* __EFC_H__ */
diff --git a/drivers/scsi/elx/libefc/efclib.c b/drivers/scsi/elx/libefc/efclib.c
new file mode 100644
index 000000000000..786266a5aeaf
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efclib.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * LIBEFC LOCKING
+ *
+ * The critical sections protected by the efc's spinlock are quite broad
+ * and may be narrowed in the future. The libefc code and its locking do
+ * not influence the I/O path, so the broad locking does not impact I/O
+ * performance.
+ *
+ * The strategy is to take the lock whenever a request from the user
+ * driver is processed. This means that the entry points into the libefc
+ * library are protected by the efc lock, so all the state machine
+ * transitions are protected.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include "efc.h"
+
+int efcport_init(struct efc *efc)
+{
+	u32 rc = EFC_SUCCESS;
+
+	spin_lock_init(&efc->lock);
+	INIT_LIST_HEAD(&efc->vport_list);
+	efc->hold_frames = false;
+	spin_lock_init(&efc->pend_frames_lock);
+	INIT_LIST_HEAD(&efc->pend_frames);
+
+	/* Create Node pool */
+	efc->node_pool = mempool_create_kmalloc_pool(EFC_MAX_REMOTE_NODES,
+						sizeof(struct efc_node));
+	if (!efc->node_pool) {
+		efc_log_err(efc, "Can't allocate node pool\n");
+		return EFC_FAIL;
+	}
+
+	efc->node_dma_pool = dma_pool_create("node_dma_pool", &efc->pci->dev,
+						NODE_SPARAMS_SIZE, 0, 0);
+	if (!efc->node_dma_pool) {
+		efc_log_err(efc, "Can't allocate node dma pool\n");
+		mempool_destroy(efc->node_pool);
+		return EFC_FAIL;
+	}
+
+	efc->els_io_pool = mempool_create_kmalloc_pool(EFC_ELS_IO_POOL_SZ,
+						sizeof(struct efc_els_io_req));
+	if (!efc->els_io_pool) {
+		efc_log_err(efc, "Can't allocate els io pool\n");
+		dma_pool_destroy(efc->node_dma_pool);
+		mempool_destroy(efc->node_pool);
+		return EFC_FAIL;
+	}
+
+	return rc;
+}
+
+static void
+efc_purge_pending(struct efc *efc)
+{
+	struct efc_hw_sequence *frame, *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->pend_frames_lock, flags);
+
+	list_for_each_entry_safe(frame, next, &efc->pend_frames, list_entry) {
+		list_del(&frame->list_entry);
+		efc->tt.hw_seq_free(efc, frame);
+	}
+
+	spin_unlock_irqrestore(&efc->pend_frames_lock, flags);
+}
+
+void efcport_destroy(struct efc *efc)
+{
+	efc_purge_pending(efc);
+	mempool_destroy(efc->els_io_pool);
+	mempool_destroy(efc->node_pool);
+	dma_pool_destroy(efc->node_dma_pool);
+}
diff --git a/drivers/scsi/elx/libefc/efclib.h b/drivers/scsi/elx/libefc/efclib.h
new file mode 100644
index 000000000000..5c40a91457d5
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efclib.h
@@ -0,0 +1,601 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCLIB_H__
+#define __EFCLIB_H__
+
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+#include "scsi/fc/fc_gs.h"
+#include "scsi/fc_frame.h"
+#include "../include/efc_common.h"
+#include "../libefc_sli/sli4.h"
+
+#define EFC_SERVICE_PARMS_LENGTH	120
+#define EFC_NAME_LENGTH			32
+#define EFC_SM_NAME_LENGTH		64
+#define EFC_DISPLAY_BUS_INFO_LENGTH	16
+
+#define EFC_WWN_LENGTH			32
+
+#define EFC_FC_ELS_DEFAULT_RETRIES	3
+
+/* Timeouts */
+#define EFC_FC_ELS_SEND_DEFAULT_TIMEOUT	0
+#define EFC_FC_FLOGI_TIMEOUT_SEC	5
+#define EFC_SHUTDOWN_TIMEOUT_USEC	30000000
+
+/* Local port topology */
+enum efc_nport_topology {
+	EFC_NPORT_TOPO_UNKNOWN = 0,
+	EFC_NPORT_TOPO_FABRIC,
+	EFC_NPORT_TOPO_P2P,
+	EFC_NPORT_TOPO_FC_AL,
+};
+
+#define enable_target_rscn(efc)		1
+
+enum efc_node_shutd_rsn {
+	EFC_NODE_SHUTDOWN_DEFAULT = 0,
+	EFC_NODE_SHUTDOWN_EXPLICIT_LOGO,
+	EFC_NODE_SHUTDOWN_IMPLICIT_LOGO,
+};
+
+enum efc_node_send_ls_acc {
+	EFC_NODE_SEND_LS_ACC_NONE = 0,
+	EFC_NODE_SEND_LS_ACC_PLOGI,
+	EFC_NODE_SEND_LS_ACC_PRLI,
+};
+
+#define EFC_LINK_STATUS_UP		0
+#define EFC_LINK_STATUS_DOWN		1
+
+/* State machine context header  */
+struct efc_sm_ctx {
+	void (*current_state)(struct efc_sm_ctx *ctx,
+			       u32 evt, void *arg);
+
+	const char	*description;
+	void		*app;
+};
+
+/* Description of discovered Fabric Domain */
+struct efc_domain_record {
+	u32		index;
+	u32		priority;
+	u8		address[6];
+	u8		wwn[8];
+	union {
+		u8	vlan[512];
+		u8	loop[128];
+	} map;
+	u32		speed;
+	u32		fc_id;
+	bool		is_loop;
+	bool		is_nport;
+};
+
+/* Domain events */
+enum efc_hw_domain_event {
+	EFC_HW_DOMAIN_ALLOC_OK,
+	EFC_HW_DOMAIN_ALLOC_FAIL,
+	EFC_HW_DOMAIN_ATTACH_OK,
+	EFC_HW_DOMAIN_ATTACH_FAIL,
+	EFC_HW_DOMAIN_FREE_OK,
+	EFC_HW_DOMAIN_FREE_FAIL,
+	EFC_HW_DOMAIN_LOST,
+	EFC_HW_DOMAIN_FOUND,
+	EFC_HW_DOMAIN_CHANGED,
+};
+
+struct efc_nport {
+	struct list_head	list_entry;
+	struct kref		ref;
+	void			(*release)(struct kref *arg);
+	struct efc		*efc;
+	u32			tgt_id;
+	u32			index;
+	u32			instance_index;
+	char			display_name[EFC_NAME_LENGTH];
+	bool			is_vport;
+	bool			free_req_pending;
+	bool			attached;
+	bool			p2p_winner;
+	struct efc_domain	*domain;
+	u64			wwpn;
+	u64			wwnn;
+	void			*ini_nport;
+	void			*tgt_nport;
+	void			*tgt_data;
+	void			*ini_data;
+
+	/* Members private to HW/SLI */
+	void			*hw;
+	u32			indicator;
+	u32			fc_id;
+	struct efc_dma		dma;
+
+	u8			wwnn_str[EFC_WWN_LENGTH];
+	__be64			sli_wwpn;
+	__be64			sli_wwnn;
+
+	struct efc_sm_ctx	sm;
+	struct xarray		lookup;
+	bool			enable_ini;
+	bool			enable_tgt;
+	bool			enable_rscn;
+	bool			shutting_down;
+	u32			p2p_port_id;
+	enum efc_nport_topology topology;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	u32			p2p_remote_port_id;
+};
+
+/**
+ * Fibre Channel domain object
+ *
+ * This object is a container for the various SLI components needed
+ * to connect to the domain of a FC or FCoE switch
+ * @efc:		pointer back to efc
+ * @instance_index:	unique instance index value
+ * @display_name:	Node display name
+ * @nport_list:		linked list of nports associated with this domain
+ * @ref:		Reference count, each nport takes a reference
+ * @release:		Function to free domain object
+ * @ini_domain:		initiator backend private domain data
+ * @tgt_domain:		target backend private domain data
+ * @hw:			pointer to HW
+ * @sm:			state machine context
+ * @fcf:		FC Forwarder table index
+ * @fcf_indicator:	FCFI
+ * @indicator:		VFI
+ * @nport_count:	Number of nports allocated
+ * @dma:		memory for Service Parameters
+ * @fcf_wwn:		WWN for FCF/switch
+ * @drvsm:		driver domain sm context
+ * @attached:		set true after attach completes
+ * @is_fc:		is FC
+ * @is_loop:		is loop topology
+ * @is_nlport:		is public loop
+ * @domain_found_pending: A domain-found event is pending; drec is updated
+ * @req_domain_free:	True if domain object should be freed
+ * @req_accept_frames:	set in domain state machine to enable frames
+ * @domain_notify_pend:	Set in domain SM to avoid duplicate node event post
+ * @pending_drec:	Pending drec if a domain found is pending
+ * @service_params:	any nport's service parameters
+ * @flogi_service_params: Fabric/P2P service parameters from FLOGI
+ * @lookup:		d_id to node lookup object
+ * @nport:		Pointer to first (physical) SLI port
+ */
+struct efc_domain {
+	struct efc		*efc;
+	char			display_name[EFC_NAME_LENGTH];
+	struct list_head	nport_list;
+	struct kref		ref;
+	void			(*release)(struct kref *arg);
+	void			*ini_domain;
+	void			*tgt_domain;
+
+	/* Declarations private to HW/SLI */
+	void			*hw;
+	u32			fcf;
+	u32			fcf_indicator;
+	u32			indicator;
+	u32			nport_count;
+	struct efc_dma		dma;
+
+	/* Declarations private to FC transport */
+	u64			fcf_wwn;
+	struct efc_sm_ctx	drvsm;
+	bool			attached;
+	bool			is_fc;
+	bool			is_loop;
+	bool			is_nlport;
+	bool			domain_found_pending;
+	bool			req_domain_free;
+	bool			req_accept_frames;
+	bool			domain_notify_pend;
+
+	struct efc_domain_record pending_drec;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	u8			flogi_service_params[EFC_SERVICE_PARMS_LENGTH];
+
+	struct xarray		lookup;
+
+	struct efc_nport	*nport;
+};
+
+/**
+ * Remote Node object
+ *
+ * This object represents a connection between the SLI port and another
+ * Nx_Port on the fabric. Note this can be either a well-known port such
+ * as an F_Port (i.e. ff:ff:fe) or another N_Port.
+ * @indicator:		RPI
+ * @fc_id:		FC address
+ * @attached:		true if attached
+ * @nport:		associated SLI port
+ * @node:		associated node
+ */
+struct efc_remote_node {
+	u32			indicator;
+	u32			index;
+	u32			fc_id;
+
+	bool			attached;
+
+	struct efc_nport	*nport;
+	void			*node;
+};
+
+/**
+ * FC Node object
+ * @efc:		pointer back to efc structure
+ * @display_name:	Node display name
+ * @nport:		Associated nport pointer.
+ * @hold_frames:	hold incoming frames if true
+ * @els_io_enabled:	Enable allocating els ios for this node
+ * @els_ios_lock:	lock to protect the els ios list
+ * @els_ios_list:	ELS I/O's for this node
+ * @ini_node:		backend initiator private node data
+ * @tgt_node:		backend target private node data
+ * @rnode:		Remote node
+ * @sm:			state machine context
+ * @evtdepth:		current event posting nesting depth
+ * @req_free:		this node is to be freed
+ * @attached:		node is attached (REGLOGIN complete)
+ * @fcp_enabled:	node is enabled to handle FCP
+ * @rscn_pending:	for name server node RSCN is pending
+ * @send_plogi:		send PLOGI accept, upon completion of node attach
+ * @send_plogi_acc:	TRUE if io_alloc() is enabled.
+ * @send_ls_acc:	type of LS acc to send
+ * @ls_acc_io:		SCSI IO for LS acc
+ * @ls_acc_oxid:	OX_ID for pending accept
+ * @ls_acc_did:		D_ID for pending accept
+ * @shutdown_reason:	reason for node shutdown
+ * @sparm_dma_buf:	service parameters buffer
+ * @service_params:	plogi/acc frame from remote device
+ * @pend_frames_lock:	lock for inbound pending frames list
+ * @pend_frames:	inbound pending frames list
+ * @pend_frames_processed: count of frames processed in hold frames interval
+ * @ox_id_in_use:	used to verify one-at-a-time use of ox_id
+ * @els_retries_remaining: for ELS, number of retries remaining
+ * @els_req_cnt:	number of outstanding ELS requests
+ * @els_cmpl_cnt:	number of outstanding ELS completions
+ * @abort_cnt:		Abort counter for debugging purposes
+ * @current_state_name:	current node state
+ * @prev_state_name:	previous node state
+ * @current_evt:	current event
+ * @prev_evt:		previous event
+ * @targ:		node is target capable
+ * @init:		node is init capable
+ * @refound:		Handle node refound case when node is being deleted
+ * @els_io_pend_list:	list of pending (not yet processed) ELS IOs
+ * @els_io_active_list:	list of active (processed) ELS IOs
+ * @nodedb_state:	Node debugging, saved state
+ * @gidpt_delay_timer:	GIDPT delay timer
+ * @time_last_gidpt_msec: Start time of last target RSCN GIDPT
+ * @wwnn:		remote port WWNN
+ * @wwpn:		remote port WWPN
+ */
+struct efc_node {
+	struct efc		*efc;
+	char			display_name[EFC_NAME_LENGTH];
+	struct efc_nport	*nport;
+	struct kref		ref;
+	void			(*release)(struct kref *arg);
+	bool			hold_frames;
+	bool			els_io_enabled;
+	bool			send_plogi_acc;
+	bool			send_plogi;
+	bool			rscn_pending;
+	bool			fcp_enabled;
+	bool			attached;
+	bool			req_free;
+
+	spinlock_t		els_ios_lock;
+	struct list_head	els_ios_list;
+	void			*ini_node;
+	void			*tgt_node;
+
+	struct efc_remote_node	rnode;
+	/* Declarations private to FC transport */
+	struct efc_sm_ctx	sm;
+	u32			evtdepth;
+
+	enum efc_node_send_ls_acc send_ls_acc;
+	void			*ls_acc_io;
+	u32			ls_acc_oxid;
+	u32			ls_acc_did;
+	enum efc_node_shutd_rsn	shutdown_reason;
+	bool			targ;
+	bool			init;
+	bool			refound;
+	struct efc_dma		sparm_dma_buf;
+	u8			service_params[EFC_SERVICE_PARMS_LENGTH];
+	spinlock_t		pend_frames_lock;
+	struct list_head	pend_frames;
+	u32			pend_frames_processed;
+	u32			ox_id_in_use;
+	u32			els_retries_remaining;
+	u32			els_req_cnt;
+	u32			els_cmpl_cnt;
+	u32			abort_cnt;
+
+	char			current_state_name[EFC_SM_NAME_LENGTH];
+	char			prev_state_name[EFC_SM_NAME_LENGTH];
+	int			current_evt;
+	int			prev_evt;
+
+	void (*nodedb_state)(struct efc_sm_ctx *ctx,
+			      u32 evt, void *arg);
+	struct timer_list	gidpt_delay_timer;
+	u64			time_last_gidpt_msec;
+
+	char			wwnn[EFC_WWN_LENGTH];
+	char			wwpn[EFC_WWN_LENGTH];
+};
+
+/**
+ * NPIV port
+ *
+ * Collection of the information required to restore a virtual port across
+ * link events
+ * @wwnn:		node name
+ * @wwpn:		port name
+ * @fc_id:		port id
+ * @tgt_data:		target backend pointer
+ * @ini_data:		initiator backend pointer
+ * @nport:		Used to match record after attaching for update
+ */
+struct efc_vport_spec {
+	struct list_head	list_entry;
+	u64			wwnn;
+	u64			wwpn;
+	u32			fc_id;
+	bool			enable_tgt;
+	bool			enable_ini;
+	void			*tgt_data;
+	void			*ini_data;
+	struct efc_nport	*nport;
+};
+
+#define node_printf(node, fmt, args...) \
+	efc_log_info(node->efc, "[%s] " fmt, node->display_name, ##args)
+
+/* Node SM IO Context Callback structure */
+struct efc_node_cb {
+	int			status;
+	int			ext_status;
+	struct efc_hw_rq_buffer *header;
+	struct efc_hw_rq_buffer *payload;
+	struct efc_dma		els_rsp;
+
+	/* Actual length of data received */
+	int			rsp_len;
+};
+
+/* HW unsolicited callback status */
+enum efc_hw_unsol_status {
+	EFC_HW_UNSOL_SUCCESS,
+	EFC_HW_UNSOL_ERROR,
+	EFC_HW_UNSOL_ABTS_RCVD,
+	EFC_HW_UNSOL_MAX,	/* must be last */
+};
+
+enum efc_hw_rq_buffer_type {
+	EFC_HW_RQ_BUFFER_TYPE_HDR,
+	EFC_HW_RQ_BUFFER_TYPE_PAYLOAD,
+	EFC_HW_RQ_BUFFER_TYPE_MAX,
+};
+
+struct efc_hw_rq_buffer {
+	u16			rqindex;
+	struct efc_dma		dma;
+};
+
+/*
+ * Defines a general FC sequence object,
+ * consisting of a header, payload buffers
+ * and a HW IO in the case of port owned XRI
+ */
+struct efc_hw_sequence {
+	struct list_head	list_entry;
+	void			*hw;
+	u8			fcfi;
+	u8			auto_xrdy;
+	u8			out_of_xris;
+	enum efc_hw_unsol_status status;
+
+	struct efc_hw_rq_buffer *header;
+	struct efc_hw_rq_buffer *payload;
+
+	void			*hw_priv;
+};
+
+enum efc_disc_io_type {
+	EFC_DISC_IO_ELS_REQ,
+	EFC_DISC_IO_ELS_RESP,
+	EFC_DISC_IO_CT_REQ,
+	EFC_DISC_IO_CT_RESP
+};
+
+struct efc_io_els_params {
+	u32             s_id;
+	u16             ox_id;
+	u8              timeout;
+};
+
+struct efc_io_ct_params {
+	u8              r_ctl;
+	u8              type;
+	u8              df_ctl;
+	u8              timeout;
+	u16             ox_id;
+};
+
+union efc_disc_io_param {
+	struct efc_io_els_params els;
+	struct efc_io_ct_params ct;
+};
+
+struct efc_disc_io {
+	struct efc_dma		req;         /* send buffer */
+	struct efc_dma		rsp;         /* receive buffer */
+	enum efc_disc_io_type	io_type;     /* enum efc_disc_io_type */
+	u16			xmit_len;    /* Length of ELS request */
+	u16			rsp_len;     /* Max length of rsps to be rcvd */
+	u32			rpi;         /* Registered RPI */
+	u32			vpi;         /* VPI for this nport */
+	u32			s_id;
+	u32			d_id;
+	bool			rpi_registered; /* if false, use tmp RPI */
+	union efc_disc_io_param iparam;
+};
+
+/* Return value indicating the sequence cannot be freed */
+#define EFC_HW_SEQ_HOLD		0
+/* Return value indicating the sequence can be freed */
+#define EFC_HW_SEQ_FREE		1
+
+struct libefc_function_template {
+	/* nport */
+	int (*new_nport)(struct efc *efc, struct efc_nport *sp);
+	void (*del_nport)(struct efc *efc, struct efc_nport *sp);
+
+	/* SCSI node */
+	int (*scsi_new_node)(struct efc *efc, struct efc_node *n);
+	int (*scsi_del_node)(struct efc *efc, struct efc_node *n, int reason);
+
+	int (*issue_mbox_rqst)(void *efct, void *buf, void *cb, void *arg);
+	/* Send ELS IO */
+	int (*send_els)(struct efc *efc, struct efc_disc_io *io);
+	/* Send BLS IO */
+	int (*send_bls)(struct efc *efc, u32 type, struct sli_bls_params *bls);
+	/* Free HW frame */
+	int (*hw_seq_free)(struct efc *efc, struct efc_hw_sequence *seq);
+};
+
+#define EFC_LOG_LIB		0x01
+#define EFC_LOG_NODE		0x02
+#define EFC_LOG_PORT		0x04
+#define EFC_LOG_DOMAIN		0x08
+#define EFC_LOG_ELS		0x10
+#define EFC_LOG_DOMAIN_SM	0x20
+#define EFC_LOG_SM		0x40
+
+/* efc library port structure */
+struct efc {
+	void			*base;
+	struct pci_dev		*pci;
+	struct sli4		*sli;
+	u32			fcfi;
+	u64			req_wwpn;
+	u64			req_wwnn;
+
+	u64			def_wwpn;
+	u64			def_wwnn;
+	u64			max_xfer_size;
+	mempool_t		*node_pool;
+	struct dma_pool		*node_dma_pool;
+	u32			nodes_count;
+
+	u32			link_status;
+
+	struct list_head	vport_list;
+	/* lock to protect the vport list */
+	spinlock_t		vport_lock;
+
+	struct libefc_function_template tt;
+	/* lock to protect the discovery library.
+	 * Refer to efclib.c for more details.
+	 */
+	spinlock_t		lock;
+
+	bool			enable_ini;
+	bool			enable_tgt;
+
+	u32			log_level;
+
+	struct efc_domain	*domain;
+	void (*domain_free_cb)(struct efc *efc, void *arg);
+	void			*domain_free_cb_arg;
+
+	u64			tgt_rscn_delay_msec;
+	u64			tgt_rscn_period_msec;
+
+	bool			external_loopback;
+	u32			nodedb_mask;
+	u32			logmask;
+	mempool_t		*els_io_pool;
+	atomic_t		els_io_alloc_failed_count;
+
+	/* hold pending frames */
+	bool			hold_frames;
+	/* lock to protect pending frames list access */
+	spinlock_t		pend_frames_lock;
+	struct list_head	pend_frames;
+	/* count of pending frames that were processed */
+	u32			pend_frames_processed;
+
+};
+
+/*
+ * EFC library registration
+ * **********************************/
+int efcport_init(struct efc *efc);
+void efcport_destroy(struct efc *efc);
+/*
+ * EFC Domain
+ * **********************************/
+int efc_domain_cb(void *arg, int event, void *data);
+void
+efc_register_domain_free_cb(struct efc *efc,
+			void (*callback)(struct efc *efc, void *arg),
+			void *arg);
+
+/*
+ * EFC nport
+ * **********************************/
+void efc_nport_cb(void *arg, int event, void *data);
+struct efc_vport_spec *efc_vport_create_spec(struct efc *efc, u64 wwnn,
+			u64 wwpn, u32 fc_id, bool enable_ini, bool enable_tgt,
+			void *tgt_data, void *ini_data);
+int efc_nport_vport_new(struct efc_domain *domain, u64 wwpn,
+			u64 wwnn, u32 fc_id, bool ini, bool tgt,
+			void *tgt_data, void *ini_data);
+int efc_nport_vport_del(struct efc *efc, struct efc_domain *domain,
+			u64 wwpn, u64 wwnn);
+
+void efc_vport_del_all(struct efc *efc);
+
+/*
+ * EFC Node
+ * **********************************/
+int efc_remote_node_cb(void *arg, int event, void *data);
+void efc_node_fcid_display(u32 fc_id, char *buffer, u32 buf_len);
+void efc_node_post_shutdown(struct efc_node *node, u32 evt, void *arg);
+u64 efc_node_get_wwpn(struct efc_node *node);
+
+/*
+ * EFC FCP/ELS/CT interface
+ * **********************************/
+void efc_dispatch_frame(struct efc *efc, struct efc_hw_sequence *seq);
+void efc_disc_io_complete(struct efc_disc_io *io, u32 len, u32 status,
+			  u32 ext_status);
+
+/*
+ * EFC SCSI INTERACTION LAYER
+ * **********************************/
+void efc_scsi_sess_reg_complete(struct efc_node *node, u32 status);
+void efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node);
+void efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node);
+void efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node);
+
+#endif /* __EFCLIB_H__ */
-- 
2.26.2



* [PATCH v7 10/31] elx: libefc: FC Domain state machine interfaces
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (8 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 09/31] elx: libefc: Emulex FC discovery library APIs and definitions James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 11/31] elx: libefc: SLI and FC PORT " James Smart
                   ` (20 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC Domain registration, allocation and deallocation sequence

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_domain.c | 1093 ++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_domain.h |   54 ++
 2 files changed, 1147 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.c
 create mode 100644 drivers/scsi/elx/libefc/efc_domain.h

diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c
new file mode 100644
index 000000000000..eddb98a601ad
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.c
@@ -0,0 +1,1093 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * domain_sm Domain State Machine: States
+ */
+
+#include "efc.h"
+
+int
+efc_domain_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_domain *domain = NULL;
+	int rc = 0;
+	unsigned long flags = 0;
+
+	if (event != EFC_HW_DOMAIN_FOUND)
+		domain = data;
+
+	/* Accept domain callback events from the user driver */
+	spin_lock_irqsave(&efc->lock, flags);
+	switch (event) {
+	case EFC_HW_DOMAIN_FOUND: {
+		u64 fcf_wwn = 0;
+		struct efc_domain_record *drec = data;
+
+		/* extract the fcf_wwn */
+		fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn));
+
+		efc_log_debug(efc, "Domain found: wwn %016llX\n", fcf_wwn);
+
+		/* lookup domain, or allocate a new one */
+		domain = efc->domain;
+		if (!domain) {
+			domain = efc_domain_alloc(efc, fcf_wwn);
+			if (!domain) {
+				efc_log_err(efc, "efc_domain_alloc() failed\n");
+				rc = -1;
+				break;
+			}
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+		}
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec);
+		break;
+	}
+
+	case EFC_HW_DOMAIN_LOST:
+		domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n");
+		efc->hold_frames = true;
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ALLOC_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL,
+				      NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_ATTACH_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n");
+		efc_domain_post_event(domain,
+				      EFC_EVT_DOMAIN_ATTACH_FAIL, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_OK:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL);
+		break;
+
+	case EFC_HW_DOMAIN_FREE_FAIL:
+		domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n");
+		efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL);
+		break;
+
+	default:
+		efc_log_warn(efc, "unsupported event %#x\n", event);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	if (efc->domain && domain->req_accept_frames) {
+		domain->req_accept_frames = false;
+		efc->hold_frames = false;
+	}
+
+	return rc;
+}
+
+static void
+_efc_domain_free(struct kref *arg)
+{
+	struct efc_domain *domain = container_of(arg, struct efc_domain, ref);
+	struct efc *efc = domain->efc;
+
+	if (efc->domain_free_cb)
+		(*efc->domain_free_cb)(efc, efc->domain_free_cb_arg);
+
+	kfree(domain);
+}
+
+void
+efc_domain_free(struct efc_domain *domain)
+{
+	struct efc *efc;
+
+	efc = domain->efc;
+
+	/* Hold frames to clear the domain pointer from the xport lookup */
+	efc->hold_frames = false;
+
+	efc_log_debug(efc, "Domain free: wwn %016llX\n", domain->fcf_wwn);
+
+	xa_destroy(&domain->lookup);
+	efc->domain = NULL;
+	kref_put(&domain->ref, domain->release);
+}
+
+struct efc_domain *
+efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn)
+{
+	struct efc_domain *domain;
+
+	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
+	if (!domain)
+		return NULL;
+
+	domain->efc = efc;
+	domain->drvsm.app = domain;
+
+	/* initialize refcount */
+	kref_init(&domain->ref);
+	domain->release = _efc_domain_free;
+
+	xa_init(&domain->lookup);
+
+	INIT_LIST_HEAD(&domain->nport_list);
+	efc->domain = domain;
+	domain->fcf_wwn = fcf_wwn;
+	efc_log_debug(efc, "Domain allocated: wwn %016llX\n", domain->fcf_wwn);
+
+	return domain;
+}
+
+void
+efc_register_domain_free_cb(struct efc *efc,
+			    void (*callback)(struct efc *efc, void *arg),
+			    void *arg)
+{
+	/* Register a callback to be called when the domain is freed */
+	efc->domain_free_cb = callback;
+	efc->domain_free_cb_arg = arg;
+	if (!efc->domain && callback)
+		(*callback)(efc, arg);
+}
+
+static void
+__efc_domain_common(const char *funcname, struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_domain *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/*
+		 * this can arise if an FLOGI fails on the NPORT,
+		 * and the NPORT is shutdown
+		 */
+		break;
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+	}
+}
+
+static void
+__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_domain *domain = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+		break;
+	case EFC_EVT_DOMAIN_FOUND:
+		/* save drec, mark domain_found_pending */
+		memcpy(&domain->pending_drec, arg,
+		       sizeof(domain->pending_drec));
+		domain->domain_found_pending = true;
+		break;
+	case EFC_EVT_DOMAIN_LOST:
+		/* unmark domain_found_pending */
+		domain->domain_found_pending = false;
+		break;
+
+	default:
+		efc_log_warn(domain->efc, "%-20s %-20s not handled\n",
+			     funcname, efc_sm_event_name(evt));
+	}
+}
+
+#define std_domain_state_decl(...)\
+	struct efc_domain *domain = NULL;\
+	struct efc *efc = NULL;\
+	\
+	WARN_ON(!ctx || !ctx->app);\
+	domain = ctx->app;\
+	WARN_ON(!domain->efc);\
+	efc = domain->efc
+
+void
+__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		domain->attached = false;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND: {
+		u32	i;
+		struct efc_domain_record *drec = arg;
+		struct efc_nport *nport;
+
+		u64 my_wwnn = efc->req_wwnn;
+		u64 my_wwpn = efc->req_wwpn;
+		__be64 bewwpn;
+
+		if (my_wwpn == 0 || my_wwnn == 0) {
+			efc_log_debug(efc,
+				"using default hardware WWN configuration\n");
+			my_wwpn = efc->def_wwpn;
+			my_wwnn = efc->def_wwnn;
+		}
+
+		efc_log_debug(efc,
+			"Creating base nport using WWPN %016llX WWNN %016llX\n",
+			my_wwpn, my_wwnn);
+
+		/* Allocate a nport and transition to __efc_nport_allocated */
+		nport = efc_nport_alloc(domain, my_wwpn, my_wwnn, U32_MAX,
+					efc->enable_ini, efc->enable_tgt);
+
+		if (!nport) {
+			efc_log_err(efc, "efc_nport_alloc() failed\n");
+			break;
+		}
+		efc_sm_transition(&nport->sm, __efc_nport_allocated, NULL);
+
+		bewwpn = cpu_to_be64(nport->wwpn);
+
+		/* allocate struct efc_nport object for local port
+		 * Note: drec->fc_id is ALPA from read_topology only if loop
+		 */
+		if (efc_cmd_nport_alloc(efc, nport, NULL, (uint8_t *)&bewwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			efc_nport_free(nport);
+			break;
+		}
+
+		domain->is_loop = drec->is_loop;
+
+		/*
+		 * If the loop position map includes ALPA == 0,
+		 * then we are in a public loop (NL_PORT)
+		 * Note that the first element of the loopmap[]
+		 * contains the count of elements, and if
+		 * ALPA == 0 is present, it will occupy the first
+		 * location after the count.
+		 */
+		domain->is_nlport = drec->map.loop[1] == 0x00;
+
+		if (!domain->is_loop) {
+			/* Initiate HW domain alloc */
+			if (efc_cmd_domain_alloc(efc, domain, drec->index)) {
+				efc_log_err(efc,
+					    "Failed to initiate HW domain allocation\n");
+				break;
+			}
+			efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+			break;
+		}
+
+		efc_log_debug(efc, "%s fc_id=%#x speed=%d\n",
+			      drec->is_loop ?
+			      (domain->is_nlport ?
+			      "public-loop" : "loop") : "other",
+			      drec->fc_id, drec->speed);
+
+		nport->fc_id = drec->fc_id;
+		nport->topology = EFC_NPORT_TOPO_FC_AL;
+		snprintf(nport->display_name, sizeof(nport->display_name),
+			 "s%06x", drec->fc_id);
+
+		if (efc->enable_ini) {
+			u32 count = drec->map.loop[0];
+
+			efc_log_debug(efc, "%d position map entries\n",
+				      count);
+			for (i = 1; i <= count; i++) {
+				if (drec->map.loop[i] != drec->fc_id) {
+					struct efc_node *node;
+
+					efc_log_debug(efc, "%#x -> %#x\n",
+						      drec->fc_id,
+						      drec->map.loop[i]);
+					node = efc_node_alloc(nport,
+							      drec->map.loop[i],
+							      false, true);
+					if (!node) {
+						efc_log_err(efc,
+							    "efc_node_alloc() failed\n");
+						break;
+					}
+					efc_node_transition(node,
+							    __efc_d_wait_loop,
+							    NULL);
+				}
+			}
+		}
+
+		/* Initiate HW domain alloc */
+		if (efc_cmd_domain_alloc(efc, domain, drec->index)) {
+			efc_log_err(efc,
+				    "Failed to initiate HW domain allocation\n");
+			break;
+		}
+		efc_sm_transition(ctx, __efc_domain_wait_alloc, arg);
+		break;
+	}
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_wait_alloc(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK: {
+		struct fc_els_flogi  *sp;
+		struct efc_nport *nport;
+
+		nport = domain->nport;
+		if (WARN_ON(!nport))
+			return;
+
+		sp = (struct fc_els_flogi  *)nport->service_params;
+
+		/* Save the domain service parameters */
+		memcpy(domain->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+		memcpy(nport->service_params + 4, domain->dma.virt,
+		       sizeof(struct fc_els_flogi) - 4);
+
+		/*
+		 * Update the nport's service parameters,
+		 * user might have specified non-default names
+		 */
+		sp->fl_wwpn = cpu_to_be64(nport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(nport->wwnn);
+
+		/*
+		 * Take the loop topology path,
+		 * unless we are an NL_PORT (public loop)
+		 */
+		if (domain->is_loop && !domain->is_nlport) {
+			/*
+			 * For loop, we already have our FC ID
+			 * and don't need fabric login.
+			 * Transition to the allocated state and
+			 * post an event to attach to
+			 * the domain. Note that this breaks the
+			 * normal action/transition
+			 * pattern here to avoid a race with the
+			 * domain attach callback.
+			 */
+			/* sm: is_loop / domain_attach */
+			efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+			__efc_domain_attach_internal(domain, nport->fc_id);
+			break;
+		}
+		{
+			struct efc_node *node;
+
+			/* alloc fabric node, send FLOGI */
+			node = efc_node_find(nport, FC_FID_FLOGI);
+			if (node) {
+				efc_log_err(efc,
+					    "Fabric Controller node already exists\n");
+				break;
+			}
+			node = efc_node_alloc(nport, FC_FID_FLOGI,
+					      false, false);
+			if (!node) {
+				efc_log_err(efc,
+					    "Error: efc_node_alloc() failed\n");
+			} else {
+				efc_node_transition(node,
+						    __efc_fabric_init, NULL);
+			}
+			/* Accept frames */
+			domain->req_accept_frames = true;
+		}
+		/* sm: / start fabric logins */
+		efc_sm_transition(ctx, __efc_domain_allocated, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+		efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;",
+			    efc_sm_event_name(evt));
+		efc_log_err(efc, "shutting down domain\n");
+		domain->req_domain_free = true;
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw_domain_alloc()\n",
+			efc_sm_event_name(evt));
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_allocated(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		int rc = 0;
+		u32 fc_id;
+
+		if (WARN_ON(!arg))
+			return;
+
+		fc_id = *((u32 *)arg);
+		efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n",
+			      fc_id);
+		/* Update nport lookup */
+		rc = xa_err(xa_store(&domain->lookup, fc_id, domain->nport,
+				     GFP_ATOMIC));
+		if (rc) {
+			efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
+			return;
+		}
+
+		/* Update display name for the nport */
+		efc_node_fcid_display(fc_id, domain->nport->display_name,
+				      sizeof(domain->nport->display_name));
+
+		/* Issue domain attach call */
+		rc = efc_cmd_domain_attach(efc, domain, fc_id);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_attach failed: %d\n",
+				    rc);
+			return;
+		}
+		/* sm: / domain_attach */
+		efc_sm_transition(ctx, __efc_domain_wait_attach, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST: {
+		efc_log_debug(efc,
+			      "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n",
+			efc_sm_event_name(evt));
+		if (!list_empty(&domain->nport_list)) {
+			/*
+			 * if there are nports, transition to
+			 * wait state and send shutdown to each
+			 * nport
+			 */
+			struct efc_nport *nport = NULL, *nport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_nports_free,
+					  NULL);
+			list_for_each_entry_safe(nport, nport_next,
+						 &domain->nport_list,
+						 list_entry) {
+				efc_sm_post_event(&nport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no nports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			if (efc_cmd_domain_free(efc, domain))
+				efc_log_err(efc, "hw_domain_free failed\n");
+		}
+
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_wait_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_node *node = NULL;
+		struct efc_nport *nport, *next_nport;
+		unsigned long index;
+
+		/*
+		 * Set domain notify pending state to avoid
+		 * duplicate domain event post
+		 */
+		domain->domain_notify_pend = true;
+
+		/* Mark as attached */
+		domain->attached = true;
+
+		/* Transition to ready */
+		/* sm: / forward event to all nports and nodes */
+		efc_sm_transition(ctx, __efc_domain_ready, NULL);
+
+		/* We have an FCFI, so we can accept frames */
+		domain->req_accept_frames = true;
+
+		/*
+		 * Notify all nodes that the domain attach request
+		 * has completed
+		 * Note: nport will have already received notification
+		 * of nport attached as a result of the HW's port attach.
+		 */
+		list_for_each_entry_safe(nport, next_nport,
+					 &domain->nport_list, list_entry) {
+			xa_for_each(&nport->lookup, index, node) {
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		}
+		domain->domain_notify_pend = false;
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_debug(efc,
+			      "%s received while waiting for hw attach\n",
+			      efc_sm_event_name(evt));
+		break;
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_LOST:
+		/*
+		 * Domain lost while waiting for an attach to complete,
+		 * go to a state that waits for the domain attach to
+		 * complete, then handle domain lost
+		 */
+		efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH:
+		/*
+		 * In P2P we can get an attach request from
+		 * the other FLOGI path, so drop this one
+		 */
+		break;
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		/* start any pending vports */
+		if (efc_vport_start(domain)) {
+			efc_log_debug(domain->efc,
+				      "efc_vport_start didn't start vports\n");
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_LOST: {
+		if (!list_empty(&domain->nport_list)) {
+			/*
+			 * if there are nports, transition to wait state
+			 * and send shutdown to each nport
+			 */
+			struct efc_nport *nport = NULL, *nport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_nports_free,
+					  NULL);
+			list_for_each_entry_safe(nport, nport_next,
+						 &domain->nport_list,
+						 list_entry) {
+				efc_sm_post_event(&nport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no nports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			if (efc_cmd_domain_free(efc, domain))
+				efc_log_err(efc, "hw_domain_free failed\n");
+		}
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_FOUND:
+		/* Should not happen */
+		efc_log_err(efc, "%s: evt: %d should not happen\n",
+			    __func__, evt);
+		break;
+
+	case EFC_EVT_DOMAIN_REQ_ATTACH: {
+		/* can happen during p2p */
+		u32 fc_id;
+
+		fc_id = *((u32 *)arg);
+
+		/* Assume that the domain is attached */
+		WARN_ON(!domain->attached);
+
+		/*
+		 * Verify that the requested FC_ID
+		 * is the same as the one we're working with
+		 */
+		WARN_ON(domain->nport->fc_id != fc_id);
+		break;
+	}
+
+	default:
+		__efc_domain_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_wait_nports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			      void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	/* Wait for nodes to free prior to the domain shutdown */
+	switch (evt) {
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		int rc;
+
+		/* sm: / efc_hw_domain_free */
+		efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL);
+
+		/* Request efc_hw_domain_free and wait for completion */
+		rc = efc_cmd_domain_free(efc, domain);
+		if (rc) {
+			efc_log_err(efc, "efc_hw_domain_free() failed: %d\n",
+				    rc);
+		}
+		break;
+	}
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_FREE_OK:
+		/* sm: / domain_free */
+		if (domain->domain_found_pending) {
+			/*
+			 * save fcf_wwn and drec from this domain,
+			 * free current domain and allocate
+			 * a new one with the same fcf_wwn
+			 * could use a SLI-4 "re-register VPI"
+			 * operation here?
+			 */
+			u64 fcf_wwn = domain->fcf_wwn;
+			struct efc_domain_record drec = domain->pending_drec;
+
+			efc_log_debug(efc, "Reallocating domain\n");
+			domain->req_domain_free = true;
+			domain = efc_domain_alloc(efc, fcf_wwn);
+
+			if (!domain) {
+				efc_log_err(efc,
+					    "efc_domain_alloc() failed\n");
+				return;
+			}
+			/*
+			 * got a new domain; at this point,
+			 * there are at least two domains
+			 * once the req_domain_free flag is processed,
+			 * the associated domain will be removed.
+			 */
+			efc_sm_transition(&domain->drvsm, __efc_domain_init,
+					  NULL);
+			efc_sm_post_event(&domain->drvsm,
+					  EFC_EVT_DOMAIN_FOUND, &drec);
+		} else {
+			domain->req_domain_free = true;
+		}
+		break;
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	std_domain_state_decl();
+
+	domain_sm_trace(domain);
+
+	/*
+	 * Wait for the domain alloc/attach completion
+	 * after receiving a domain lost.
+	 */
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ALLOC_OK:
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		if (!list_empty(&domain->nport_list)) {
+			/*
+			 * if there are nports, transition to
+			 * wait state and send shutdown to each nport
+			 */
+			struct efc_nport *nport = NULL, *nport_next = NULL;
+
+			efc_sm_transition(ctx, __efc_domain_wait_nports_free,
+					  NULL);
+			list_for_each_entry_safe(nport, nport_next,
+						 &domain->nport_list,
+						 list_entry) {
+				efc_sm_post_event(&nport->sm,
+						  EFC_EVT_SHUTDOWN, NULL);
+			}
+		} else {
+			/* no nports exist, free domain */
+			efc_sm_transition(ctx, __efc_domain_wait_shutdown,
+					  NULL);
+			if (efc_cmd_domain_free(efc, domain))
+				efc_log_err(efc, "hw_domain_free() failed\n");
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ALLOC_FAIL:
+	case EFC_EVT_DOMAIN_ATTACH_FAIL:
+		efc_log_err(efc, "[domain] %-20s: failed\n",
+			    efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_domain_common_shutdown(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id)
+{
+	memcpy(domain->dma.virt,
+	       ((uint8_t *)domain->flogi_service_params) + 4,
+	       sizeof(struct fc_els_flogi) - 4);
+	(void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH,
+				 &s_id);
+}
+
+void
+efc_domain_attach(struct efc_domain *domain, u32 s_id)
+{
+	__efc_domain_attach_internal(domain, s_id);
+}
+
+int
+efc_domain_post_event(struct efc_domain *domain,
+		      enum efc_sm_event event, void *arg)
+{
+	int rc;
+	bool req_domain_free;
+
+	rc = efc_sm_post_event(&domain->drvsm, event, arg);
+
+	req_domain_free = domain->req_domain_free;
+	domain->req_domain_free = false;
+
+	if (req_domain_free)
+		efc_domain_free(domain);
+
+	return rc;
+}
+
+static void
+efct_domain_process_pending(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	struct efc_hw_sequence *seq = NULL;
+	u32 processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (efc->hold_frames)
+			break;
+
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&efc->pend_frames_lock, flags);
+
+		if (!list_empty(&efc->pend_frames)) {
+			seq = list_first_entry(&efc->pend_frames,
+					struct efc_hw_sequence, list_entry);
+			list_del(&seq->list_entry);
+		}
+
+		if (!seq) {
+			processed = efc->pend_frames_processed;
+			efc->pend_frames_processed = 0;
+			spin_unlock_irqrestore(&efc->pend_frames_lock, flags);
+			break;
+		}
+		efc->pend_frames_processed++;
+
+		spin_unlock_irqrestore(&efc->pend_frames_lock, flags);
+
+		/* now dispatch frame(s) to dispatch function */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efc->tt.hw_seq_free(efc, seq);
+
+		seq = NULL;
+	}
+
+	if (processed != 0)
+		efc_log_debug(efc, "%u domain frames held and processed\n",
+					processed);
+}
+
+void
+efc_dispatch_frame(struct efc *efc, struct efc_hw_sequence *seq)
+{
+	struct efc_domain *domain = efc->domain;
+
+	/*
+	 * If we are holding frames or the domain is not yet registered or
+	 * there's already frames on the pending list,
+	 * then add the new frame to pending list
+	 */
+	if (!domain || efc->hold_frames || !list_empty(&efc->pend_frames)) {
+		unsigned long flags = 0;
+
+		spin_lock_irqsave(&efc->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &efc->pend_frames);
+		spin_unlock_irqrestore(&efc->pend_frames_lock, flags);
+
+		if (domain) {
+			/* immediately process pending frames */
+			efct_domain_process_pending(domain);
+		}
+	} else {
+		/*
+		 * We are not holding frames and the pending list is empty,
+		 * so just process the frame. A non-zero return means the
+		 * frame was not handled - so clean it up
+		 */
+		if (efc_domain_dispatch_frame(domain, seq))
+			efc->tt.hw_seq_free(efc, seq);
+	}
+}
+
+int
+efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
+{
+	struct efc_domain *domain = (struct efc_domain *)arg;
+	struct efc *efc = domain->efc;
+	struct fc_frame_header *hdr;
+	struct efc_node *node = NULL;
+	struct efc_nport *nport = NULL;
+	unsigned long flags = 0;
+	u32 s_id, d_id, rc = EFC_HW_SEQ_FREE;
+
+	if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) {
+		efc_log_err(efc, "Sequence header or payload is null\n");
+		return rc;
+	}
+
+	hdr = seq->header->dma.virt;
+
+	/* extract the s_id and d_id */
+	s_id = ntoh24(hdr->fh_s_id);
+	d_id = ntoh24(hdr->fh_d_id);
+
+	spin_lock_irqsave(&efc->lock, flags);
+
+	nport = efc_nport_find(domain, d_id);
+	if (!nport) {
+		if (hdr->fh_type == FC_TYPE_FCP) {
+			/* Drop frame */
+			efc_log_warn(efc, "FCP frame with invalid d_id x%x\n",
+				     d_id);
+			goto out;
+		}
+
+		/* p2p will use this case */
+		nport = domain->nport;
+		if (!nport || !kref_get_unless_zero(&nport->ref)) {
+			efc_log_err(efc, "Physical nport is NULL\n");
+			goto out;
+		}
+	}
+
+	/* Lookup the node given the remote s_id */
+	node = efc_node_find(nport, s_id);
+
+	/* If not found, then create a new node */
+	if (!node) {
+		/*
+		 * If this is solicited data or control based on R_CTL and
+		 * there is no node context, then we can drop the frame
+		 */
+		if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) ||
+			(hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) {
+			efc_log_debug(efc,
+				"solicited data/ctrl frame without node, drop\n");
+			goto out_release;
+		}
+
+		node = efc_node_alloc(nport, s_id, false, false);
+		if (!node) {
+			efc_log_err(efc, "efc_node_alloc() failed\n");
+			goto out_release;
+		}
+		/* don't send PLOGI on efc_d_init entry */
+		efc_node_init_device(node, false);
+	}
+
+	if (node->hold_frames || !list_empty(&node->pend_frames)) {
+		/* add frame to node's pending list */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+		INIT_LIST_HEAD(&seq->list_entry);
+		list_add_tail(&seq->list_entry, &node->pend_frames);
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+		rc = EFC_HW_SEQ_HOLD;
+		goto out_release;
+	}
+
+	/* now dispatch frame to the node frame handler */
+	efc_node_dispatch_frame(node, seq);
+
+out_release:
+	kref_put(&nport->ref, nport->release);
+out:
+	spin_unlock_irqrestore(&efc->lock, flags);
+	return rc;
+}
+
+void
+efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq)
+{
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	u32 port_id;
+	struct efc_node *node = (struct efc_node *)arg;
+	struct efc *efc = node->efc;
+
+	port_id = ntoh24(hdr->fh_s_id);
+
+	if (WARN_ON(port_id != node->rnode.fc_id))
+		return;
+
+	if ((!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) ||
+	    !(ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)) {
+		node_printf(node,
+		    "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n",
+		    cpu_to_be32(((u32 *)hdr)[0]),
+		    cpu_to_be32(((u32 *)hdr)[1]),
+		    cpu_to_be32(((u32 *)hdr)[2]),
+		    cpu_to_be32(((u32 *)hdr)[3]),
+		    cpu_to_be32(((u32 *)hdr)[4]),
+		    cpu_to_be32(((u32 *)hdr)[5]));
+		return;
+	}
+
+	switch (hdr->fh_r_ctl) {
+	case FC_RCTL_ELS_REQ:
+	case FC_RCTL_ELS_REP:
+		efc_node_recv_els_frame(node, seq);
+		break;
+
+	case FC_RCTL_BA_ABTS:
+	case FC_RCTL_BA_ACC:
+	case FC_RCTL_BA_RJT:
+	case FC_RCTL_BA_NOP:
+		efc_log_err(efc, "Received ABTS:\n");
+		break;
+
+	case FC_RCTL_DD_UNSOL_CMD:
+	case FC_RCTL_DD_UNSOL_CTL:
+		switch (hdr->fh_type) {
+		case FC_TYPE_FCP:
+			if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) {
+				if (!node->fcp_enabled) {
+					efc_node_recv_fcp_cmd(node, seq);
+					break;
+				}
+				efc_log_err(efc,
+					"Received FCP CMD. Dropping IO\n");
+			} else if ((hdr->fh_r_ctl & 0xf) ==
+							FC_RCTL_DD_SOL_DATA) {
+				node_printf(node,
+				    "solicited data received. Dropping IO\n");
+			}
+			break;
+
+		case FC_TYPE_CT:
+			efc_node_recv_ct_frame(node, seq);
+			break;
+		default:
+			break;
+		}
+		break;
+	default:
+		efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl);
+	}
+}
diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h
new file mode 100644
index 000000000000..5468ea7ab19b
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_domain.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declare driver's domain handler exported interface
+ */
+
+#ifndef __EFCT_DOMAIN_H__
+#define __EFCT_DOMAIN_H__
+
+struct efc_domain *
+efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn);
+void
+efc_domain_free(struct efc_domain *domain);
+
+void
+__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void
+__efc_domain_wait_alloc(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			void *arg);
+void
+__efc_domain_allocated(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		       void *arg);
+void
+__efc_domain_wait_attach(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			 void *arg);
+void
+__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void
+__efc_domain_wait_nports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			      void *arg);
+void
+__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			   void *arg);
+void
+__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+			      void *arg);
+void
+efc_domain_attach(struct efc_domain *domain, u32 s_id);
+int
+efc_domain_post_event(struct efc_domain *domain, enum efc_sm_event event,
+		      void *arg);
+void
+__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id);
+
+int
+efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
+void
+efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq);
+
+#endif /* __EFCT_DOMAIN_H__ */
-- 
2.26.2



* [PATCH v7 11/31] elx: libefc: SLI and FC PORT state machine interfaces
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- SLI and FC port (aka n_port_id) registration, allocation and
  deallocation.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_nport.c | 792 ++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_nport.h |  50 ++
 2 files changed, 842 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_nport.c
 create mode 100644 drivers/scsi/elx/libefc/efc_nport.h

diff --git a/drivers/scsi/elx/libefc/efc_nport.c b/drivers/scsi/elx/libefc/efc_nport.c
new file mode 100644
index 000000000000..ec84198fb209
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_nport.c
@@ -0,0 +1,792 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * NPORT
+ *
+ * Port object for physical port and NPIV ports.
+ */
+
+/*
+ * NPORT REFERENCE COUNTING
+ *
+ * An nport reference should be taken when:
+ * - an nport is allocated
+ * - a vport populates the associated nport
+ * - a remote node is allocated
+ * - an unsolicited frame is processed
+ * The reference should be dropped when:
+ * - the unsolicited frame processing is done
+ * - the remote node is removed
+ * - the vport is removed
+ * - the nport is removed
+ */
+
+#include "efc.h"
+
+void
+efc_nport_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_nport *nport = data;
+	unsigned long flags = 0;
+
+	efc_log_debug(efc, "nport event: %s\n", efc_sm_event_name(event));
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_sm_post_event(&nport->sm, event, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+static struct efc_nport *
+efc_nport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn)
+{
+	struct efc_nport *nport = NULL;
+
+	/* Find a nport, given the WWNN and WWPN */
+	list_for_each_entry(nport, &domain->nport_list, list_entry) {
+		if (nport->wwnn == wwnn && nport->wwpn == wwpn)
+			return nport;
+	}
+	return NULL;
+}
+
+static void
+_efc_nport_free(struct kref *arg)
+{
+	struct efc_nport *nport = container_of(arg, struct efc_nport, ref);
+
+	kfree(nport);
+}
+
+struct efc_nport *
+efc_nport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt)
+{
+	struct efc_nport *nport;
+
+	if (domain->efc->enable_ini)
+		enable_ini = 0;
+
+	/* Return a failure if this nport has already been allocated */
+	if ((wwpn != 0) || (wwnn != 0)) {
+		nport = efc_nport_find_wwn(domain, wwnn, wwpn);
+		if (nport) {
+			efc_log_err(domain->efc,
+				"Err: NPORT %016llX %016llX already allocated\n",
+				wwnn, wwpn);
+			return NULL;
+		}
+	}
+
+	nport = kzalloc(sizeof(*nport), GFP_ATOMIC);
+	if (!nport)
+		return nport;
+
+	/* initialize refcount */
+	kref_init(&nport->ref);
+	nport->release = _efc_nport_free;
+
+	nport->efc = domain->efc;
+	snprintf(nport->display_name, sizeof(nport->display_name), "------");
+	nport->domain = domain;
+	xa_init(&nport->lookup);
+	nport->instance_index = domain->nport_count++;
+	nport->sm.app = nport;
+	nport->enable_ini = enable_ini;
+	nport->enable_tgt = enable_tgt;
+	nport->enable_rscn = (nport->enable_ini ||
+			(nport->enable_tgt && enable_target_rscn(nport->efc)));
+
+	/* Copy service parameters from domain */
+	memcpy(nport->service_params, domain->service_params,
+		sizeof(struct fc_els_flogi));
+
+	/* Update requested fc_id */
+	nport->fc_id = fc_id;
+
+	/* Update the nport's service parameters for the new wwn's */
+	nport->wwpn = wwpn;
+	nport->wwnn = wwnn;
+	snprintf(nport->wwnn_str, sizeof(nport->wwnn_str), "%016llX",
+			(unsigned long long)wwnn);
+
+	/*
+	 * if this is the "first" nport of the domain,
+	 * then make it the "phys" nport
+	 */
+	if (list_empty(&domain->nport_list))
+		domain->nport = nport;
+
+	INIT_LIST_HEAD(&nport->list_entry);
+	list_add_tail(&nport->list_entry, &domain->nport_list);
+
+	kref_get(&domain->ref);
+
+	efc_log_debug(domain->efc, "New Nport [%s]\n", nport->display_name);
+
+	return nport;
+}
+
+void
+efc_nport_free(struct efc_nport *nport)
+{
+	struct efc_domain *domain;
+
+	if (!nport)
+		return;
+
+	domain = nport->domain;
+	efc_log_debug(domain->efc, "[%s] free nport\n", nport->display_name);
+	list_del(&nport->list_entry);
+	/*
+	 * if this is the physical nport,
+	 * then clear it out of the domain
+	 */
+	if (nport == domain->nport)
+		domain->nport = NULL;
+
+	xa_destroy(&nport->lookup);
+	xa_erase(&domain->lookup, nport->fc_id);
+
+	if (list_empty(&domain->nport_list))
+		efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE,
+				      NULL);
+
+	kref_put(&domain->ref, domain->release);
+	kref_put(&nport->ref, nport->release);
+}
+
+struct efc_nport *
+efc_nport_find(struct efc_domain *domain, u32 d_id)
+{
+	struct efc_nport *nport;
+
+	/* Find a nport object, given an FC_ID */
+	nport = xa_load(&domain->lookup, d_id);
+	if (!nport || !kref_get_unless_zero(&nport->ref))
+		return NULL;
+
+	return nport;
+}
+
+int
+efc_nport_attach(struct efc_nport *nport, u32 fc_id)
+{
+	int rc;
+	struct efc_node *node;
+	struct efc *efc = nport->efc;
+	unsigned long index;
+
+	/* Set our lookup */
+	rc = xa_err(xa_store(&nport->domain->lookup, fc_id, nport, GFP_ATOMIC));
+	if (rc) {
+		efc_log_err(efc, "Sport lookup store failed: %d\n", rc);
+		return rc;
+	}
+
+	/* Update our display_name */
+	efc_node_fcid_display(fc_id, nport->display_name,
+			      sizeof(nport->display_name));
+
+	xa_for_each(&nport->lookup, index, node) {
+		efc_node_update_display_name(node);
+	}
+
+	efc_log_debug(nport->efc, "[%s] attach nport: fc_id x%06x\n",
+		      nport->display_name, fc_id);
+
+	/* Register a nport, given an FC_ID */
+	rc = efc_cmd_nport_attach(efc, nport, fc_id);
+	if (rc != EFC_HW_RTN_SUCCESS) {
+		efc_log_err(nport->efc,
+			    "efc_hw_port_attach failed: %d\n", rc);
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+static void
+efc_nport_shutdown(struct efc_nport *nport)
+{
+	struct efc *efc = nport->efc;
+	struct efc_node *node;
+	unsigned long index;
+
+	xa_for_each(&nport->lookup, index, node) {
+		if (!(node->rnode.fc_id == FC_FID_FLOGI && nport->is_vport)) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+			continue;
+		}
+
+		/*
+		 * If this is a vport, logout of the fabric
+		 * controller so that it deletes the vport
+		 * on the switch.
+		 */
+		/* if link is down, don't send logo */
+		if (efc->link_status == EFC_LINK_STATUS_DOWN) {
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+			continue;
+		}
+
+		efc_log_debug(efc, "[%s] nport shutdown vport, send logo\n",
+					node->display_name);
+
+		if (!efc_send_logo(node)) {
+			/* sent LOGO, wait for response */
+			efc_node_transition(node, __efc_d_wait_logo_rsp, NULL);
+			continue;
+		}
+
+		/*
+		 * failed to send LOGO,
+		 * go ahead and cleanup node anyways
+		 */
+		node_printf(node, "Failed to send LOGO\n");
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
+	}
+}
+
+static void
+efc_vport_link_down(struct efc_nport *nport)
+{
+	struct efc *efc = nport->efc;
+	struct efc_vport_spec *vport;
+
+	/* Clear the nport reference in the vport specification */
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->nport == nport) {
+			kref_put(&nport->ref, nport->release);
+			vport->nport = NULL;
+			break;
+		}
+	}
+}
+
+static void
+__efc_nport_common(const char *funcname, struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc_domain *domain = nport->domain;
+	struct efc *efc = nport->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		break;
+	case EFC_EVT_NPORT_ATTACH_OK:
+		efc_sm_transition(ctx, __efc_nport_attached, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN:
+		/* Flag this nport as shutting down */
+		nport->shutting_down = true;
+
+		if (nport->is_vport)
+			efc_vport_link_down(nport);
+
+		if (xa_empty(&nport->lookup)) {
+			/* Remove the nport from the domain's lookup table */
+			xa_erase(&domain->lookup, nport->fc_id);
+			efc_sm_transition(ctx, __efc_nport_wait_port_free,
+					  NULL);
+			if (efc_cmd_nport_free(efc, nport)) {
+				efc_log_debug(nport->efc,
+					     "efc_hw_port_free failed\n");
+				/* Not much we can do, free the nport anyways */
+				efc_nport_free(nport);
+			}
+		} else {
+			/* sm: node list is not empty / shutdown nodes */
+			efc_sm_transition(ctx,
+					  __efc_nport_wait_shutdown, NULL);
+			efc_nport_shutdown(nport);
+		}
+		break;
+	default:
+		efc_log_debug(nport->efc, "[%s] %-20s %-20s not handled\n",
+			     nport->display_name, funcname,
+			     efc_sm_event_name(evt));
+	}
+}
+
+void
+__efc_nport_allocated(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc_domain *domain = nport->domain;
+
+	nport_sm_trace(nport);
+
+	switch (evt) {
+	/* the physical nport is attached */
+	case EFC_EVT_NPORT_ATTACH_OK:
+		WARN_ON(nport != domain->nport);
+		efc_sm_transition(ctx, __efc_nport_attached, NULL);
+		break;
+
+	case EFC_EVT_NPORT_ALLOC_OK:
+		/* ignore */
+		break;
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_nport_vport_init(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc *efc = nport->efc;
+
+	nport_sm_trace(nport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		__be64 be_wwpn = cpu_to_be64(nport->wwpn);
+
+		if (nport->wwpn == 0)
+			efc_log_debug(efc, "vport: letting f/w select WWN\n");
+
+		if (nport->fc_id != U32_MAX) {
+			efc_log_debug(efc, "vport: hard coding port id: %x\n",
+				      nport->fc_id);
+		}
+
+		efc_sm_transition(ctx, __efc_nport_vport_wait_alloc, NULL);
+		/* If wwpn is zero, then let the f/w select the WWPN */
+		if (efc_cmd_nport_alloc(efc, nport, nport->domain,
+					  nport->wwpn == 0 ? NULL :
+					  (uint8_t *)&be_wwpn)) {
+			efc_log_err(efc, "Can't allocate port\n");
+			break;
+		}
+
+		break;
+	}
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_nport_vport_wait_alloc(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc *efc = nport->efc;
+
+	nport_sm_trace(nport);
+
+	switch (evt) {
+	case EFC_EVT_NPORT_ALLOC_OK: {
+		struct fc_els_flogi *sp;
+
+		sp = (struct fc_els_flogi *)nport->service_params;
+		/*
+		 * If we let f/w assign wwn's, then update nport
+		 * wwn's with those returned by hw
+		 */
+		if (nport->wwnn == 0) {
+			nport->wwnn = be64_to_cpu(nport->sli_wwnn);
+			nport->wwpn = be64_to_cpu(nport->sli_wwpn);
+			snprintf(nport->wwnn_str, sizeof(nport->wwnn_str),
+				 "%016llX", nport->wwnn);
+		}
+
+		/* Update the nport's service parameters */
+		sp->fl_wwpn = cpu_to_be64(nport->wwpn);
+		sp->fl_wwnn = cpu_to_be64(nport->wwnn);
+
+		/*
+		 * if nport->fc_id is uninitialized,
+		 * then request that the fabric node use FDISC
+		 * to find an fc_id.
+		 * Otherwise we're restoring vports, or we're in
+		 * fabric emulation mode, so attach the fc_id
+		 */
+		if (nport->fc_id == U32_MAX) {
+			struct efc_node *fabric;
+
+			fabric = efc_node_alloc(nport, FC_FID_FLOGI, false,
+						false);
+			if (!fabric) {
+				efc_log_err(efc, "efc_node_alloc() failed\n");
+				return;
+			}
+			efc_node_transition(fabric, __efc_vport_fabric_init,
+					    NULL);
+		} else {
+			snprintf(nport->wwnn_str, sizeof(nport->wwnn_str),
+				 "%016llX", nport->wwnn);
+			efc_nport_attach(nport, nport->fc_id);
+		}
+		efc_sm_transition(ctx, __efc_nport_vport_allocated, NULL);
+		break;
+	}
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_nport_vport_allocated(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc *efc = nport->efc;
+
+	nport_sm_trace(nport);
+
+	/*
+	 * This state is entered after the nport is allocated;
+	 * it then waits for a fabric node
+	 * FDISC to complete, which requests a nport attach.
+	 * The nport attach complete is handled in this state.
+	 */
+	switch (evt) {
+	case EFC_EVT_NPORT_ATTACH_OK: {
+		struct efc_node *node;
+
+		/* Find our fabric node, and forward this event */
+		node = efc_node_find(nport, FC_FID_FLOGI);
+		if (!node) {
+			efc_log_debug(efc, "can't find node %06x\n",
+				     FC_FID_FLOGI);
+			break;
+		}
+		/* sm: / forward nport attach to fabric node */
+		efc_node_post_event(node, evt, NULL);
+		efc_sm_transition(ctx, __efc_nport_attached, NULL);
+		break;
+	}
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+static void
+efc_vport_update_spec(struct efc_nport *nport)
+{
+	struct efc *efc = nport->efc;
+	struct efc_vport_spec *vport;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if (vport->nport == nport) {
+			vport->wwnn = nport->wwnn;
+			vport->wwpn = nport->wwpn;
+			vport->tgt_data = nport->tgt_data;
+			vport->ini_data = nport->ini_data;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+}
+
+void
+__efc_nport_attached(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc *efc = nport->efc;
+
+	nport_sm_trace(nport);
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		struct efc_node *node;
+		unsigned long index;
+
+		efc_log_debug(efc,
+			      "[%s] NPORT attached WWPN %016llX WWNN %016llX\n",
+			      nport->display_name,
+			      nport->wwpn, nport->wwnn);
+
+		xa_for_each(&nport->lookup, index, node) {
+			efc_node_update_display_name(node);
+		}
+
+		nport->tgt_id = nport->fc_id;
+
+		efc->tt.new_nport(efc, nport);
+
+		/*
+		 * Update the vport (if it's not the physical nport)
+		 * parameters
+		 */
+		if (nport->is_vport)
+			efc_vport_update_spec(nport);
+		break;
+	}
+
+	case EFC_EVT_EXIT:
+		efc_log_debug(efc,
+			      "[%s] NPORT detached WWPN %016llX WWNN %016llX\n",
+			      nport->display_name,
+			      nport->wwpn, nport->wwnn);
+
+		efc->tt.del_nport(efc, nport);
+		break;
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_nport_wait_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+	struct efc_domain *domain = nport->domain;
+	struct efc *efc = nport->efc;
+
+	nport_sm_trace(nport);
+
+	switch (evt) {
+	case EFC_EVT_NPORT_ALLOC_OK:
+	case EFC_EVT_NPORT_ALLOC_FAIL:
+	case EFC_EVT_NPORT_ATTACH_OK:
+	case EFC_EVT_NPORT_ATTACH_FAIL:
+		/* ignore these events - just wait for the all free event */
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE: {
+		/*
+		 * Remove the nport from the domain's
+		 * sparse vector lookup table
+		 */
+		xa_erase(&domain->lookup, nport->fc_id);
+		efc_sm_transition(ctx, __efc_nport_wait_port_free, NULL);
+		if (efc_cmd_nport_free(efc, nport)) {
+			efc_log_err(nport->efc, "efc_hw_port_free failed\n");
+			/* Not much we can do, free the nport anyways */
+			efc_nport_free(nport);
+		}
+		break;
+	}
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_nport_wait_port_free(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_nport *nport = ctx->app;
+
+	nport_sm_trace(nport);
+
+	switch (evt) {
+	case EFC_EVT_NPORT_ATTACH_OK:
+		/* Ignore as we are waiting for the free CB */
+		break;
+	case EFC_EVT_NPORT_FREE_OK: {
+		/* All done, free myself */
+		efc_nport_free(nport);
+		break;
+	}
+	default:
+		__efc_nport_common(__func__, ctx, evt, arg);
+	}
+}
+
+static int
+efc_vport_nport_alloc(struct efc_domain *domain, struct efc_vport_spec *vport)
+{
+	struct efc_nport *nport;
+
+	lockdep_assert_held(&domain->efc->lock);
+
+	nport = efc_nport_alloc(domain, vport->wwpn, vport->wwnn, vport->fc_id,
+				vport->enable_ini, vport->enable_tgt);
+	vport->nport = nport;
+	if (!nport)
+		return EFC_FAIL;
+
+	kref_get(&nport->ref);
+	nport->is_vport = true;
+	nport->tgt_data = vport->tgt_data;
+	nport->ini_data = vport->ini_data;
+
+	efc_sm_transition(&nport->sm, __efc_nport_vport_init, NULL);
+
+	return EFC_SUCCESS;
+}
+
+int
+efc_vport_start(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	int rc = EFC_SUCCESS;
+	unsigned long flags = 0;
+
+	/* Use the vport spec to find the associated vports and start them */
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (!vport->nport) {
+			if (efc_vport_nport_alloc(domain, vport))
+				rc = EFC_FAIL;
+		}
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+
+	return rc;
+}
+
+int
+efc_nport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		    u32 fc_id, bool ini, bool tgt, void *tgt_data,
+		    void *ini_data)
+{
+	struct efc *efc = domain->efc;
+	struct efc_vport_spec *vport;
+	int rc = EFC_SUCCESS;
+	unsigned long flags = 0;
+
+	if (ini && domain->efc->enable_ini == 0) {
+		efc_log_debug(efc,
+			     "driver initiator functionality not enabled\n");
+		return EFC_FAIL;
+	}
+
+	if (tgt && domain->efc->enable_tgt == 0) {
+		efc_log_debug(efc,
+			     "driver target functionality not enabled\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Create a vport spec if we need to recreate
+	 * this vport after a link up event
+	 */
+	vport = efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id, ini, tgt,
+					tgt_data, ini_data);
+	if (!vport) {
+		efc_log_err(efc, "failed to create vport object entry\n");
+		return EFC_FAIL;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	rc = efc_vport_nport_alloc(domain, vport);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return rc;
+}
+
+int
+efc_nport_vport_del(struct efc *efc, struct efc_domain *domain,
+		    u64 wwpn, uint64_t wwnn)
+{
+	struct efc_nport *nport;
+	int found = 0;
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	/* walk the efc_vport_list and remove from there */
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		if (vport->wwpn == wwpn && vport->wwnn == wwnn) {
+			list_del(&vport->list_entry);
+			kfree(vport);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+
+	if (!domain) {
+		/* No domain means no nport to look for */
+		return EFC_SUCCESS;
+	}
+
+	spin_lock_irqsave(&efc->lock, flags);
+	list_for_each_entry(nport, &domain->nport_list, list_entry) {
+		if (nport->wwpn == wwpn && nport->wwnn == wwnn) {
+			found = 1;
+			break;
+		}
+	}
+
+	if (found) {
+		kref_put(&nport->ref, nport->release);
+		/* Shutdown this NPORT */
+		efc_sm_post_event(&nport->sm, EFC_EVT_SHUTDOWN, NULL);
+	}
+	spin_unlock_irqrestore(&efc->lock, flags);
+	return EFC_SUCCESS;
+}
+
+void
+efc_vport_del_all(struct efc *efc)
+{
+	struct efc_vport_spec *vport;
+	struct efc_vport_spec *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) {
+		list_del(&vport->list_entry);
+		kfree(vport);
+	}
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+}
+
+struct efc_vport_spec *
+efc_vport_create_spec(struct efc *efc, uint64_t wwnn, uint64_t wwpn,
+		      u32 fc_id, bool enable_ini,
+		      bool enable_tgt, void *tgt_data, void *ini_data)
+{
+	struct efc_vport_spec *vport;
+	unsigned long flags = 0;
+
+	/*
+	 * Walk the efc_vport_list and return failure if a valid vport
+	 * entry (one with a non-zero WWPN and WWNN) has already been
+	 * created.
+	 */
+	spin_lock_irqsave(&efc->vport_lock, flags);
+	list_for_each_entry(vport, &efc->vport_list, list_entry) {
+		if ((wwpn && vport->wwpn == wwpn) &&
+		    (wwnn && vport->wwnn == wwnn)) {
+			efc_log_err(efc,
+				"Failed: VPORT %016llX %016llX already allocated\n",
+				wwnn, wwpn);
+			spin_unlock_irqrestore(&efc->vport_lock, flags);
+			return NULL;
+		}
+	}
+
+	vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
+	if (!vport) {
+		spin_unlock_irqrestore(&efc->vport_lock, flags);
+		return NULL;
+	}
+
+	vport->wwnn = wwnn;
+	vport->wwpn = wwpn;
+	vport->fc_id = fc_id;
+	vport->enable_tgt = enable_tgt;
+	vport->enable_ini = enable_ini;
+	vport->tgt_data = tgt_data;
+	vport->ini_data = ini_data;
+
+	INIT_LIST_HEAD(&vport->list_entry);
+	list_add_tail(&vport->list_entry, &efc->vport_list);
+	spin_unlock_irqrestore(&efc->vport_lock, flags);
+	return vport;
+}
diff --git a/drivers/scsi/elx/libefc/efc_nport.h b/drivers/scsi/elx/libefc/efc_nport.h
new file mode 100644
index 000000000000..b575ea205bbf
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_nport.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * EFC FC port (NPORT) exported declarations
+ */
+
+#ifndef __EFC_NPORT_H__
+#define __EFC_NPORT_H__
+
+struct efc_nport *
+efc_nport_find(struct efc_domain *domain, u32 d_id);
+struct efc_nport *
+efc_nport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn,
+		u32 fc_id, bool enable_ini, bool enable_tgt);
+void
+efc_nport_free(struct efc_nport *nport);
+int
+efc_nport_attach(struct efc_nport *nport, u32 fc_id);
+
+void
+__efc_nport_allocated(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg);
+void
+__efc_nport_wait_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void
+__efc_nport_wait_port_free(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+void
+__efc_nport_vport_init(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+void
+__efc_nport_vport_wait_alloc(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void
+__efc_nport_vport_allocated(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void
+__efc_nport_attached(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+
+int
+efc_vport_start(struct efc_domain *domain);
+
+#endif /* __EFC_NPORT_H__ */
-- 
2.26.2



* [PATCH v7 12/31] elx: libefc: Remote node state machine interfaces
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Remote node (aka remote port) allocation, initialization and
  destroy routines.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_node.c | 1102 ++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_node.h |  191 +++++
 2 files changed, 1293 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_node.c
 create mode 100644 drivers/scsi/elx/libefc/efc_node.h

diff --git a/drivers/scsi/elx/libefc/efc_node.c b/drivers/scsi/elx/libefc/efc_node.c
new file mode 100644
index 000000000000..abae70f6f7d4
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.c
@@ -0,0 +1,1102 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efc.h"
+
+int
+efc_remote_node_cb(void *arg, int event, void *data)
+{
+	struct efc *efc = arg;
+	struct efc_remote_node *rnode = data;
+	struct efc_node *node = rnode->node;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, event, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+struct efc_node *
+efc_node_find(struct efc_nport *nport, u32 port_id)
+{
+	/* Find an FC node structure given the FC port ID */
+	return xa_load(&nport->lookup, port_id);
+}
+
+static void
+_efc_node_free(struct kref *arg)
+{
+	struct efc_node *node = container_of(arg, struct efc_node, ref);
+	struct efc *efc = node->efc;
+	struct efc_dma *dma;
+
+	dma = &node->sparm_dma_buf;
+	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+	mempool_free(node, efc->node_pool);
+}
+
+struct efc_node *efc_node_alloc(struct efc_nport *nport,
+				  u32 port_id, bool init, bool targ)
+{
+	int rc;
+	struct efc_node *node = NULL;
+	struct efc *efc = nport->efc;
+	struct efc_dma *dma;
+
+	if (nport->shutting_down) {
+		efc_log_debug(efc, "node allocation when shutting down %06x\n",
+			      port_id);
+		return NULL;
+	}
+
+	node = mempool_alloc(efc->node_pool, GFP_ATOMIC);
+	if (!node) {
+		efc_log_err(efc, "node allocation failed %06x\n", port_id);
+		return NULL;
+	}
+	memset(node, 0, sizeof(*node));
+
+	dma = &node->sparm_dma_buf;
+	dma->size = NODE_SPARAMS_SIZE;
+	dma->virt = dma_pool_zalloc(efc->node_dma_pool, GFP_ATOMIC, &dma->phys);
+	if (!dma->virt) {
+		efc_log_err(efc, "node dma alloc failed\n");
+		goto dma_fail;
+	}
+	node->rnode.indicator = U32_MAX;
+	node->nport = nport;
+
+	node->efc = efc;
+	node->init = init;
+	node->targ = targ;
+
+	spin_lock_init(&node->pend_frames_lock);
+	INIT_LIST_HEAD(&node->pend_frames);
+	spin_lock_init(&node->els_ios_lock);
+	INIT_LIST_HEAD(&node->els_ios_list);
+	WRITE_ONCE(node->els_io_enabled, true);
+
+	rc = efc_cmd_node_alloc(efc, &node->rnode, port_id, nport);
+	if (rc) {
+		efc_log_err(efc, "efc_hw_node_alloc failed: %d\n", rc);
+		goto hw_alloc_fail;
+	}
+
+	node->rnode.node = node;
+	node->sm.app = node;
+	node->evtdepth = 0;
+
+	efc_node_update_display_name(node);
+
+	rc = xa_err(xa_store(&nport->lookup, port_id, node, GFP_ATOMIC));
+	if (rc) {
+		efc_log_err(efc, "Node lookup store failed: %d\n", rc);
+		goto xa_fail;
+	}
+
+	/* initialize refcount */
+	kref_init(&node->ref);
+	node->release = _efc_node_free;
+	kref_get(&nport->ref);
+
+	return node;
+
+xa_fail:
+	efc_node_free_resources(efc, &node->rnode);
+hw_alloc_fail:
+	dma_pool_free(efc->node_dma_pool, dma->virt, dma->phys);
+dma_fail:
+	mempool_free(node, efc->node_pool);
+	return NULL;
+}
+
+void
+efc_node_free(struct efc_node *node)
+{
+	struct efc_nport *nport;
+	struct efc *efc;
+	int rc = 0;
+	struct efc_node *ns = NULL;
+
+	nport = node->nport;
+	efc = node->efc;
+
+	node_printf(node, "Free'd\n");
+
+	if (node->refound) {
+		/*
+		 * Save the name server node. We will send fake RSCN event at
+		 * the end to handle ignored RSCN event during node deletion
+		 */
+		ns = efc_node_find(node->nport, FC_FID_DIR_SERV);
+	}
+
+	if (!node->nport) {
+		efc_log_err(efc, "Node already freed\n");
+		return;
+	}
+
+	/* Free HW resources */
+	rc = efc_node_free_resources(efc, &node->rnode);
+	if (EFC_HW_RTN_IS_ERROR(rc))
+		efc_log_err(efc, "efc_hw_node_free failed: %d\n", rc);
+
+	/* if the gidpt_delay_timer is still running, then delete it */
+	if (timer_pending(&node->gidpt_delay_timer))
+		del_timer(&node->gidpt_delay_timer);
+
+	xa_erase(&nport->lookup, node->rnode.fc_id);
+
+	/*
+	 * If the node_list is empty,
+	 * then post a ALL_CHILD_NODES_FREE event to the nport,
+	 * after the lock is released.
+	 * The nport may be free'd as a result of the event.
+	 */
+	if (xa_empty(&nport->lookup))
+		efc_sm_post_event(&nport->sm, EFC_EVT_ALL_CHILD_NODES_FREE,
+				  NULL);
+
+	node->nport = NULL;
+	node->sm.current_state = NULL;
+
+	kref_put(&nport->ref, nport->release);
+	kref_put(&node->ref, node->release);
+
+	if (ns) {
+		/* sending fake RSCN event to name server node */
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, NULL);
+	}
+}
+
+static void
+efc_dma_copy_in(struct efc_dma *dma, void *buffer, u32 buffer_length)
+{
+	if (!dma || !buffer || !buffer_length)
+		return;
+
+	if (buffer_length > dma->size)
+		buffer_length = dma->size;
+
+	memcpy(dma->virt, buffer, buffer_length);
+	dma->len = buffer_length;
+}
+
+int
+efc_node_attach(struct efc_node *node)
+{
+	int rc = 0;
+	struct efc_nport *nport = node->nport;
+	struct efc_domain *domain = nport->domain;
+	struct efc *efc = node->efc;
+
+	if (!domain->attached) {
+		efc_log_err(efc, "Warning: unattached domain\n");
+		return EFC_FAIL;
+	}
+	/* Update node->wwpn/wwnn */
+
+	efc_node_build_eui_name(node->wwpn, sizeof(node->wwpn),
+				efc_node_get_wwpn(node));
+	efc_node_build_eui_name(node->wwnn, sizeof(node->wwnn),
+				efc_node_get_wwnn(node));
+
+	efc_dma_copy_in(&node->sparm_dma_buf, node->service_params + 4,
+			sizeof(node->service_params) - 4);
+
+	/* take lock to protect node->rnode.attached */
+	rc = efc_cmd_node_attach(efc, &node->rnode, &node->sparm_dma_buf);
+	if (EFC_HW_RTN_IS_ERROR(rc))
+		efc_log_debug(efc, "efc_hw_node_attach failed: %d\n", rc);
+
+	return rc;
+}
+
+void
+efc_node_fcid_display(u32 fc_id, char *buffer, u32 buffer_length)
+{
+	switch (fc_id) {
+	case FC_FID_FLOGI:
+		snprintf(buffer, buffer_length, "fabric");
+		break;
+	case FC_FID_FCTRL:
+		snprintf(buffer, buffer_length, "fabctl");
+		break;
+	case FC_FID_DIR_SERV:
+		snprintf(buffer, buffer_length, "nserve");
+		break;
+	default:
+		if (fc_id == FC_FID_DOM_MGR) {
+			snprintf(buffer, buffer_length, "dctl%02x",
+				 (fc_id & 0x0000ff));
+		} else {
+			snprintf(buffer, buffer_length, "%06x", fc_id);
+		}
+		break;
+	}
+}
+
+void
+efc_node_update_display_name(struct efc_node *node)
+{
+	u32 port_id = node->rnode.fc_id;
+	struct efc_nport *nport = node->nport;
+	char portid_display[16];
+
+	efc_node_fcid_display(port_id, portid_display, sizeof(portid_display));
+
+	snprintf(node->display_name, sizeof(node->display_name), "%s.%s",
+		 nport->display_name, portid_display);
+}
+
+void
+efc_node_send_ls_io_cleanup(struct efc_node *node)
+{
+	if (node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE) {
+		efc_log_debug(node->efc, "[%s] cleaning up LS_ACC oxid=0x%x\n",
+			      node->display_name, node->ls_acc_oxid);
+
+		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+		node->ls_acc_io = NULL;
+	}
+}
+
+static void efc_node_handle_implicit_logo(struct efc_node *node)
+{
+	int rc;
+
+	/*
+	 * currently, only case for implicit logo is PLOGI
+	 * recvd. Thus, node's ELS IO pending list won't be
+	 * empty (PLOGI will be on it)
+	 */
+	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
+	node_printf(node, "Reason: implicit logout, re-authenticate\n");
+
+	/* Re-attach node with the same HW node resources */
+	node->req_free = false;
+	rc = efc_node_attach(node);
+	efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+
+	if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+		efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, NULL);
+}
+
+static void efc_node_handle_explicit_logo(struct efc_node *node)
+{
+	s8 pend_frames_empty;
+	unsigned long flags = 0;
+
+	/* cleanup any pending LS_ACC ELSs */
+	efc_node_send_ls_io_cleanup(node);
+
+	spin_lock_irqsave(&node->pend_frames_lock, flags);
+	pend_frames_empty = list_empty(&node->pend_frames);
+	spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+	/*
+	 * there are two scenarios where we want to keep
+	 * this node alive:
+	 * 1. there are pending frames that need to be
+	 *    processed or
+	 * 2. we're an initiator and the remote node is
+	 *    a target and we need to re-authenticate
+	 */
+	node_printf(node, "Shutdown: explicit logo pend=%d ",
+			  !pend_frames_empty);
+	node_printf(node, "nport.ini=%d node.tgt=%d\n",
+			  node->nport->enable_ini, node->targ);
+	if (!pend_frames_empty || (node->nport->enable_ini && node->targ)) {
+		u8 send_plogi = false;
+
+		if (node->nport->enable_ini && node->targ) {
+			/*
+			 * we're an initiator and
+			 * node shutting down is a target;
+			 * we'll need to re-authenticate in
+			 * initial state
+			 */
+			send_plogi = true;
+		}
+
+		/*
+		 * transition to __efc_d_init
+		 * (will retain HW node resources)
+		 */
+		node->req_free = false;
+
+		/*
+		 * either pending frames exist or we are re-authenticating
+		 * with PLOGI (or both); in either case, return to initial
+		 * state
+		 */
+		efc_node_init_device(node, send_plogi);
+	}
+	/* else: let node shutdown occur */
+}
+
+static void
+efc_node_purge_pending(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_hw_sequence *frame, *next;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->pend_frames_lock, flags);
+
+	list_for_each_entry_safe(frame, next, &node->pend_frames, list_entry) {
+		list_del(&frame->list_entry);
+		efc->tt.hw_seq_free(efc, frame);
+	}
+
+	spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+}
+
+void
+__efc_node_shutdown(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		efc_node_hold_frames(node);
+		WARN_ON(!efc_els_io_list_empty(node, &node->els_ios_list));
+		/* by default, we will be freeing node after we unwind */
+		node->req_free = true;
+
+		switch (node->shutdown_reason) {
+		case EFC_NODE_SHUTDOWN_IMPLICIT_LOGO:
+			/* Node shutdown b/c of PLOGI received when node
+			 * already logged in. We have PLOGI service
+			 * parameters, so submit node attach; we won't be
+			 * freeing this node
+			 */
+
+			efc_node_handle_implicit_logo(node);
+			break;
+
+		case EFC_NODE_SHUTDOWN_EXPLICIT_LOGO:
+			efc_node_handle_explicit_logo(node);
+			break;
+
+		case EFC_NODE_SHUTDOWN_DEFAULT:
+		default: {
+			/*
+			 * shutdown due to link down,
+			 * node going away (xport event) or
+			 * nport shutdown, purge pending and
+			 * proceed to cleanup node
+			 */
+
+			/* cleanup any pending LS_ACC ELSs */
+			efc_node_send_ls_io_cleanup(node);
+
+			node_printf(node,
+				    "Shutdown reason: default, purge pending\n");
+			efc_node_purge_pending(node);
+			break;
+		}
+		}
+
+		break;
+	}
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+	}
+}
+
+static bool
+efc_node_check_els_quiesced(struct efc_node *node)
+{
+	/* check to see if ELS requests, completions are quiesced */
+	if (node->els_req_cnt == 0 && node->els_cmpl_cnt == 0 &&
+	    efc_els_io_list_empty(node, &node->els_ios_list)) {
+		if (!node->attached) {
+			/* hw node detach already completed, proceed */
+			node_printf(node, "HW node not attached\n");
+			efc_node_transition(node,
+					    __efc_node_wait_ios_shutdown,
+					    NULL);
+		} else {
+			/*
+			 * hw node detach hasn't completed,
+			 * transition and wait
+			 */
+			node_printf(node, "HW node still attached\n");
+			efc_node_transition(node, __efc_node_wait_node_free,
+					    NULL);
+		}
+		return true;
+	}
+	return false;
+}
+
+void
+efc_node_initiate_cleanup(struct efc_node *node)
+{
+	/*
+	 * if ELS's have already been quiesced, will move to next state
+	 * if ELS's have not been quiesced, abort them
+	 */
+	if (!efc_node_check_els_quiesced(node)) {
+		efc_node_hold_frames(node);
+		efc_node_transition(node, __efc_node_wait_els_shutdown, NULL);
+	}
+}
+
+void
+__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	bool check_quiesce = false;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+	/* Node state machine: Wait for all ELSs to complete */
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		if (efc_els_io_list_empty(node, &node->els_ios_list)) {
+			node_printf(node, "All ELS IOs complete\n");
+			check_quiesce = true;
+		}
+		break;
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		if (WARN_ON(!node->els_req_cnt))
+			break;
+		node->els_req_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		if (WARN_ON(!node->els_cmpl_cnt))
+			break;
+		node->els_cmpl_cnt--;
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* all ELS IO's complete */
+		node_printf(node, "All ELS IOs complete\n");
+		WARN_ON(!efc_els_io_list_empty(node, &node->els_ios_list));
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		check_quiesce = true;
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		fallthrough;
+
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+	}
+
+	if (check_quiesce)
+		efc_node_check_els_quiesced(node);
+}
+
+void
+__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+		/* node is officially no longer attached */
+		node->attached = false;
+		efc_node_transition(node, __efc_node_wait_ios_shutdown, NULL);
+		break;
+
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		/* As IOs and ELS IOs complete, we expect to get these events */
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		fallthrough;
+
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+
+		/* first check to see if no ELS IOs are outstanding */
+		if (efc_els_io_list_empty(node, &node->els_ios_list))
+			/* no outstanding ELS IOs, proceed to shutdown */
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		if (efc_els_io_list_empty(node, &node->els_ios_list))
+			efc_node_transition(node, __efc_node_shutdown, NULL);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		if (WARN_ON(!node->els_req_cnt))
+			break;
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		fallthrough;
+
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s\n", node->display_name,
+			      efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+	struct efc *efc = NULL;
+	struct efc_node_cb *cbdata = arg;
+
+	node = ctx->app;
+	efc = node->efc;
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+	case EFC_EVT_REENTER:
+	case EFC_EVT_EXIT:
+	case EFC_EVT_NPORT_TOPOLOGY_NOTIFY:
+	case EFC_EVT_NODE_MISSING:
+	case EFC_EVT_FCP_CMD_RCVD:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		node->refound = true;
+		break;
+
+	/*
+	 * node->attached must be set appropriately
+	 * for all node attach/detach events
+	 */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	case EFC_EVT_NODE_FREE_OK:
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		break;
+
+	/*
+	 * handle any ELS completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		if (WARN_ON(!node->els_cmpl_cnt))
+			break;
+		node->els_cmpl_cnt--;
+		break;
+
+	/*
+	 * handle any ELS request completions that
+	 * other states either didn't care about
+	 * or forgot about
+	 */
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_ELS_REQ_ABORTED:
+		if (WARN_ON(!node->els_req_cnt))
+			break;
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_ELS_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * Unsupported ELS was received,
+		 * send LS_RJT, command not supported
+		 */
+		efc_log_debug(efc,
+			      "[%s] (%s) ELS x%02x, LS_RJT not supported\n",
+			      node->display_name, funcname,
+			      ((uint8_t *)cbdata->payload->dma.virt)[0]);
+
+		efc_send_ls_rjt(node, be16_to_cpu(hdr->fh_ox_id),
+				ELS_RJT_UNSUP, ELS_EXPL_NONE, 0);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD:
+	case EFC_EVT_FLOGI_RCVD:
+	case EFC_EVT_LOGO_RCVD:
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/* sm: / send ELS_RJT */
+		efc_log_debug(efc, "[%s] (%s) %s sending ELS_RJT\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+		/* if we didn't catch this in a state, send generic LS_RJT */
+		efc_send_ls_rjt(node, be16_to_cpu(hdr->fh_ox_id),
+				ELS_RJT_UNAB, ELS_EXPL_NONE, 0);
+		break;
+	}
+	case EFC_EVT_ABTS_RCVD: {
+		efc_log_debug(efc, "[%s] (%s) %s sending BA_ACC\n",
+			      node->display_name, funcname,
+			      efc_sm_event_name(evt));
+
+		/* sm: / send BA_ACC */
+		efc_send_bls_acc(node, cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		efc_log_debug(node->efc, "[%s] %-20s %-20s not handled\n",
+			     node->display_name, funcname,
+			     efc_sm_event_name(evt));
+	}
+}
+
+void
+efc_node_save_sparms(struct efc_node *node, void *payload)
+{
+	memcpy(node->service_params, payload, sizeof(node->service_params));
+}
+
+void
+efc_node_post_event(struct efc_node *node,
+		    enum efc_sm_event evt, void *arg)
+{
+	bool free_node = false;
+
+	node->evtdepth++;
+
+	efc_sm_post_event(&node->sm, evt, arg);
+
+	/* If our event call depth is one and
+	 * we're not holding frames
+	 * then we can dispatch any pending frames.
+	 * We don't want to allow the efc_process_node_pending()
+	 * call to recurse.
+	 */
+	if (!node->hold_frames && node->evtdepth == 1)
+		efc_process_node_pending(node);
+
+	node->evtdepth--;
+
+	/*
+	 * Free the node object if so requested,
+	 * and we're at an event call depth of zero
+	 */
+	if (node->evtdepth == 0 && node->req_free)
+		free_node = true;
+
+	if (free_node)
+		efc_node_free(node);
+}
+
+void
+efc_node_transition(struct efc_node *node,
+		    void (*state)(struct efc_sm_ctx *,
+				   enum efc_sm_event, void *), void *data)
+{
+	struct efc_sm_ctx *ctx = &node->sm;
+
+	if (ctx->current_state == state) {
+		efc_node_post_event(node, EFC_EVT_REENTER, data);
+	} else {
+		efc_node_post_event(node, EFC_EVT_EXIT, data);
+		ctx->current_state = state;
+		efc_node_post_event(node, EFC_EVT_ENTER, data);
+	}
+}
+
+void
+efc_node_build_eui_name(char *buf, u32 buf_len, uint64_t eui_name)
+{
+	memset(buf, 0, buf_len);
+
+	snprintf(buf, buf_len, "eui.%016llX", (unsigned long long)eui_name);
+}
+
+u64
+efc_node_get_wwpn(struct efc_node *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+u64
+efc_node_get_wwnn(struct efc_node *node)
+{
+	struct fc_els_flogi *sp =
+			(struct fc_els_flogi *)node->service_params;
+
+	return be64_to_cpu(sp->fl_wwnn);
+}
+
+int
+efc_node_check_els_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
+		uint8_t cmd, void (*efc_node_common_func)(const char *,
+				struct efc_sm_ctx *, enum efc_sm_event, void *),
+		const char *funcname)
+{
+	return 0;
+}
+
+int
+efc_node_check_ns_req(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg,
+		uint16_t cmd, void (*efc_node_common_func)(const char *,
+				struct efc_sm_ctx *, enum efc_sm_event, void *),
+		const char *funcname)
+{
+	return 0;
+}
+
+int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list)
+{
+	int empty;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&node->els_ios_lock, flags);
+	empty = list_empty(list);
+	spin_unlock_irqrestore(&node->els_ios_lock, flags);
+	return empty;
+}
+
+void
+efc_node_pause(struct efc_node *node,
+	       void (*state)(struct efc_sm_ctx *,
+			      enum efc_sm_event, void *))
+
+{
+	node->nodedb_state = state;
+	efc_node_transition(node, __efc_node_paused, NULL);
+}
+
+void
+__efc_node_paused(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	/*
+	 * This state is entered when a state is "paused". When resumed, the
+	 * node is transitioned to a previously saved state (node->nodedb_state).
+	 */
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node_printf(node, "Paused\n");
+		break;
+
+	case EFC_EVT_RESUME: {
+		void (*pf)(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+
+		pf = node->nodedb_state;
+
+		node->nodedb_state = NULL;
+		efc_node_transition(node, pf, NULL);
+		break;
+	}
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node->req_free = true;
+		break;
+
+	default:
+		__efc_node_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+efc_node_recv_els_frame(struct efc_node *node,
+			struct efc_hw_sequence *seq)
+{
+	u32 prli_size = sizeof(struct fc_els_prli) + sizeof(struct fc_els_spp);
+	struct {
+		u32 cmd;
+		enum efc_sm_event evt;
+		u32 payload_size;
+	} els_cmd_list[] = {
+		{ELS_PLOGI, EFC_EVT_PLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_FLOGI, EFC_EVT_FLOGI_RCVD,	sizeof(struct fc_els_flogi)},
+		{ELS_LOGO, EFC_EVT_LOGO_RCVD, sizeof(struct fc_els_ls_acc)},
+		{ELS_PRLI, EFC_EVT_PRLI_RCVD, prli_size},
+		{ELS_PRLO, EFC_EVT_PRLO_RCVD, prli_size},
+		{ELS_PDISC, EFC_EVT_PDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_FDISC, EFC_EVT_FDISC_RCVD,	MAX_ACC_REJECT_PAYLOAD},
+		{ELS_ADISC, EFC_EVT_ADISC_RCVD,	sizeof(struct fc_els_adisc)},
+		{ELS_RSCN, EFC_EVT_RSCN_RCVD, MAX_ACC_REJECT_PAYLOAD},
+		{ELS_SCR, EFC_EVT_SCR_RCVD, MAX_ACC_REJECT_PAYLOAD},
+	};
+	struct efc_node_cb cbdata;
+	u8 *buf = seq->payload->dma.virt;
+	enum efc_sm_event evt = EFC_EVT_ELS_RCVD;
+	u32 i;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	/* find a matching event for the ELS command */
+	for (i = 0; i < ARRAY_SIZE(els_cmd_list); i++) {
+		if (els_cmd_list[i].cmd == buf[0]) {
+			evt = els_cmd_list[i].evt;
+			break;
+		}
+	}
+
+	efc_node_post_event(node, evt, &cbdata);
+}
+
+void
+efc_node_recv_ct_frame(struct efc_node *node,
+		       struct efc_hw_sequence *seq)
+{
+	struct fc_ct_hdr *iu = seq->payload->dma.virt;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efc *efc = node->efc;
+	u16 gscmd = be16_to_cpu(iu->ct_cmd);
+
+	efc_log_err(efc, "[%s] Received cmd :%x sending CT_REJECT\n",
+		    node->display_name, gscmd);
+	efc_send_ct_rsp(efc, node, be16_to_cpu(hdr->fh_ox_id), iu,
+			FC_FS_RJT, FC_FS_RJT_UNSUP, 0);
+}
+
+void
+efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq)
+{
+	struct efc_node_cb cbdata;
+
+	memset(&cbdata, 0, sizeof(cbdata));
+	cbdata.header = seq->header;
+	cbdata.payload = seq->payload;
+
+	efc_node_post_event(node, EFC_EVT_FCP_CMD_RCVD, &cbdata);
+}
+
+void
+efc_process_node_pending(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_hw_sequence *seq = NULL;
+	u32 pend_frames_processed = 0;
+	unsigned long flags = 0;
+
+	for (;;) {
+		/* need to check for hold frames condition after each frame
+		 * processed because any given frame could cause a transition
+		 * to a state that holds frames
+		 */
+		if (node->hold_frames)
+			break;
+
+		seq = NULL;
+		/* Get next frame/sequence */
+		spin_lock_irqsave(&node->pend_frames_lock, flags);
+
+		if (!list_empty(&node->pend_frames)) {
+			seq = list_first_entry(&node->pend_frames,
+					struct efc_hw_sequence, list_entry);
+			list_del(&seq->list_entry);
+		}
+		spin_unlock_irqrestore(&node->pend_frames_lock, flags);
+
+		if (!seq) {
+			pend_frames_processed =	node->pend_frames_processed;
+			node->pend_frames_processed = 0;
+			break;
+		}
+		node->pend_frames_processed++;
+
+		/* now dispatch frame(s) to dispatch function */
+		efc_node_dispatch_frame(node, seq);
+		efc->tt.hw_seq_free(efc, seq);
+	}
+
+	if (pend_frames_processed != 0)
+		efc_log_debug(efc, "%u node frames held and processed\n",
+			      pend_frames_processed);
+}
+
+void
+efc_scsi_sess_reg_complete(struct efc_node *node, u32 status)
+{
+	unsigned long flags = 0;
+	enum efc_sm_event evt = EFC_EVT_NODE_SESS_REG_OK;
+	struct efc *efc = node->efc;
+
+	if (status)
+		evt = EFC_EVT_NODE_SESS_REG_FAIL;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, evt, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void
+efc_scsi_del_initiator_complete(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void
+efc_scsi_del_target_complete(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	/* Notify the node to resume */
+	efc_node_post_event(node, EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void
+efc_scsi_io_list_empty(struct efc *efc, struct efc_node *node)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, NULL);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void efc_node_post_els_resp(struct efc_node *node, u32 evt, void *arg)
+{
+	struct efc *efc = node->efc;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, evt, arg);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
+
+void efc_node_post_shutdown(struct efc_node *node, u32 evt, void *arg)
+{
+	unsigned long flags = 0;
+	struct efc *efc = node->efc;
+
+	spin_lock_irqsave(&efc->lock, flags);
+	efc_node_post_event(node, EFC_EVT_SHUTDOWN, arg);
+	spin_unlock_irqrestore(&efc->lock, flags);
+}
diff --git a/drivers/scsi/elx/libefc/efc_node.h b/drivers/scsi/elx/libefc/efc_node.h
new file mode 100644
index 000000000000..7ff6a0adb55e
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_node.h
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFC_NODE_H__)
+#define __EFC_NODE_H__
+#include "scsi/fc/fc_ns.h"
+
+#define EFC_NODEDB_PAUSE_FABRIC_LOGIN	(1 << 0)
+#define EFC_NODEDB_PAUSE_NAMESERVER	(1 << 1)
+#define EFC_NODEDB_PAUSE_NEW_NODES	(1 << 2)
+
+#define MAX_ACC_REJECT_PAYLOAD	sizeof(struct fc_els_ls_rjt)
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efc, "[%s] [%04x][i:%04x t:%04x h:%04x]" fmt, \
+	io->node->display_name, io->instance_index, io->init_task_tag, \
+	io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+static inline void
+efc_node_evt_set(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		 const char *handler)
+{
+	struct efc_node *node = ctx->app;
+
+	if (evt == EFC_EVT_ENTER) {
+		strncpy(node->current_state_name, handler,
+			sizeof(node->current_state_name));
+	} else if (evt == EFC_EVT_EXIT) {
+		strncpy(node->prev_state_name, node->current_state_name,
+			sizeof(node->prev_state_name));
+		strncpy(node->current_state_name, "invalid",
+			sizeof(node->current_state_name));
+	}
+	node->prev_evt = node->current_evt;
+	node->current_evt = evt;
+}
+
+/**
+ * hold frames in pending frame list
+ *
+ * Unsolicited receive frames are held on the node pending frame list,
+ * rather than being processed.
+ */
+
+static inline void
+efc_node_hold_frames(struct efc_node *node)
+{
+	node->hold_frames = true;
+}
+
+/**
+ * accept frames
+ *
+ * Unsolicited receive frames are processed rather than being held on the
+ * node pending frame list.
+ */
+
+static inline void
+efc_node_accept_frames(struct efc_node *node)
+{
+	node->hold_frames = false;
+}
+
+/*
+ * Node initiator/target enable defines
+ * All combinations of the SLI port (nport) initiator/target enable,
+ * and remote node initiator/target enable are enumerated.
+ * ex: EFC_NODE_ENABLE_T_TO_IT decodes to: target mode enabled on the SLI
+ * port, and I+T enabled on the remote node.
+ */
+enum efc_node_enable {
+	EFC_NODE_ENABLE_x_TO_x,
+	EFC_NODE_ENABLE_x_TO_T,
+	EFC_NODE_ENABLE_x_TO_I,
+	EFC_NODE_ENABLE_x_TO_IT,
+	EFC_NODE_ENABLE_T_TO_x,
+	EFC_NODE_ENABLE_T_TO_T,
+	EFC_NODE_ENABLE_T_TO_I,
+	EFC_NODE_ENABLE_T_TO_IT,
+	EFC_NODE_ENABLE_I_TO_x,
+	EFC_NODE_ENABLE_I_TO_T,
+	EFC_NODE_ENABLE_I_TO_I,
+	EFC_NODE_ENABLE_I_TO_IT,
+	EFC_NODE_ENABLE_IT_TO_x,
+	EFC_NODE_ENABLE_IT_TO_T,
+	EFC_NODE_ENABLE_IT_TO_I,
+	EFC_NODE_ENABLE_IT_TO_IT,
+};
+
+static inline enum efc_node_enable
+efc_node_get_enable(struct efc_node *node)
+{
+	u32 retval = 0;
+
+	if (node->nport->enable_ini)
+		retval |= (1U << 3);
+	if (node->nport->enable_tgt)
+		retval |= (1U << 2);
+	if (node->init)
+		retval |= (1U << 1);
+	if (node->targ)
+		retval |= (1U << 0);
+	return (enum efc_node_enable)retval;
+}
+
+int
+efc_node_check_els_req(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg,
+		       u8 cmd, void (*efc_node_common_func)(const char *,
+		       struct efc_sm_ctx *, enum efc_sm_event, void *),
+		       const char *funcname);
+int
+efc_node_check_ns_req(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg,
+		      u16 cmd, void (*efc_node_common_func)(const char *,
+		      struct efc_sm_ctx *, enum efc_sm_event, void *),
+		      const char *funcname);
+int
+efc_node_attach(struct efc_node *node);
+struct efc_node *
+efc_node_alloc(struct efc_nport *nport, u32 port_id,
+		bool init, bool targ);
+void
+efc_node_free(struct efc_node *efc);
+void
+efc_node_update_display_name(struct efc_node *node);
+void efc_node_post_event(struct efc_node *node, enum efc_sm_event evt,
+			 void *arg);
+
+void
+__efc_node_shutdown(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg);
+void
+__efc_node_wait_node_free(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void
+__efc_node_wait_els_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void
+__efc_node_wait_ios_shutdown(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void
+efc_node_save_sparms(struct efc_node *node, void *payload);
+void
+efc_node_transition(struct efc_node *node,
+		    void (*state)(struct efc_sm_ctx *,
+		    enum efc_sm_event, void *), void *data);
+void
+__efc_node_common(const char *funcname, struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+
+void
+efc_node_initiate_cleanup(struct efc_node *node);
+
+void
+efc_node_build_eui_name(char *buf, u32 buf_len, uint64_t eui_name);
+
+void
+efc_node_pause(struct efc_node *node,
+	       void (*state)(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg));
+void
+__efc_node_paused(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+int
+efc_node_active_ios_empty(struct efc_node *node);
+void
+efc_node_send_ls_io_cleanup(struct efc_node *node);
+
+int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+void
+efc_process_node_pending(struct efc_node *node);
+
+u64 efc_node_get_wwnn(struct efc_node *node);
+struct efc_node *
+efc_node_find(struct efc_nport *nport, u32 id);
+void
+efc_node_post_els_resp(struct efc_node *node, u32 evt, void *arg);
+void
+efc_node_recv_els_frame(struct efc_node *node, struct efc_hw_sequence *s);
+void
+efc_node_recv_ct_frame(struct efc_node *node, struct efc_hw_sequence *seq);
+void
+efc_node_recv_fcp_cmd(struct efc_node *node, struct efc_hw_sequence *seq);
+
+#endif /* __EFC_NODE_H__ */
-- 
2.26.2



* [PATCH v7 13/31] elx: libefc: Fabric node state machine interfaces
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Fabric node initialization and logins.
- Name/Directory Services node.
- Fabric Controller node to process RSCN events.

These are all interactions with remote ports that correspond
to well-known fabric entities.
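All of the fabric states in this patch follow one pattern: a state is a
handler function, every transition delivers an EXIT event to the old state
and an ENTER event to the new one, and unhandled events fall through to a
common handler. A minimal, self-contained sketch of that pattern is below;
the type, state, and event names are simplified stand-ins for illustration,
not the driver's actual API:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the libefc node state machine: each state is a
 * function pointer, and node_transition() mirrors efc_node_transition()
 * by bracketing the swap with EXIT/ENTER events.
 */
enum evt { EVT_ENTER, EVT_EXIT, EVT_FLOGI_RSP_OK, EVT_SHUTDOWN };

struct node;
typedef void (*state_fn)(struct node *n, enum evt e);

struct node {
	state_fn state;		/* current state handler */
	int attached;
	int freed;
};

static void node_transition(struct node *n, state_fn next)
{
	if (n->state)
		n->state(n, EVT_EXIT);	/* leave the old state */
	n->state = next;
	n->state(n, EVT_ENTER);		/* enter the new state */
}

static void st_idle(struct node *n, enum evt e)
{
	if (e == EVT_SHUTDOWN)
		n->freed = 1;		/* request node free */
}

static void st_flogi_wait_rsp(struct node *n, enum evt e)
{
	if (e == EVT_FLOGI_RSP_OK) {
		n->attached = 1;	/* fabric login complete */
		node_transition(n, st_idle);
	}
}

static void st_init(struct node *n, enum evt e)
{
	if (e == EVT_ENTER)		/* "send FLOGI", then wait */
		node_transition(n, st_flogi_wait_rsp);
}
```

Driving it looks like the real code's efc_node_post_event(): zero-init the
node, transition into the initial state, then deliver events to whatever
handler is current.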

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_fabric.c | 1565 ++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_fabric.h |  116 ++
 2 files changed, 1681 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c
 create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h

diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
new file mode 100644
index 000000000000..37ae8b685f1a
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.c
@@ -0,0 +1,1565 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * This file implements remote node state machines for:
+ * - Fabric logins.
+ * - Fabric controller events.
+ * - Name/directory services interaction.
+ * - Point-to-point logins.
+ */
+
+/*
+ * fabric_sm Node State Machine: Fabric States
+ * ns_sm Node State Machine: Name/Directory Services States
+ * p2p_sm Node State Machine: Point-to-Point Node States
+ */
+
+#include "efc.h"
+
+static void
+efc_fabric_initiate_shutdown(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+
+	WRITE_ONCE(node->els_io_enabled, false);
+
+	if (node->attached) {
+		int rc;
+
+		/* issue hw node free; don't care if it succeeds right away
+		 * or sometime later, will check node->attached later in
+		 * the shutdown process
+		 */
+		rc = efc_cmd_node_detach(efc, &node->rnode);
+		if (rc != EFC_HW_RTN_SUCCESS &&
+		    rc != EFC_HW_RTN_SUCCESS_SYNC) {
+			node_printf(node, "Failed freeing HW node, rc=%d\n",
+				    rc);
+		}
+	}
+	/*
+	 * node has either been detached or is in the process of being detached,
+	 * call common node's initiate cleanup function
+	 */
+	efc_node_initiate_cleanup(node);
+}
+
+static void
+__efc_fabric_common(const char *funcname, struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+
+	node = ctx->app;
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	case EFC_EVT_SHUTDOWN:
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_REENTER:
+		efc_log_debug(efc, ">>> reenter !!\n");
+		fallthrough;
+
+	case EFC_EVT_ENTER:
+		/* send FLOGI */
+		efc_send_flogi(node);
+		efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+efc_fabric_set_topology(struct efc_node *node,
+			enum efc_nport_topology topology)
+{
+	node->nport->topology = topology;
+}
+
+void
+efc_fabric_notify_topology(struct efc_node *node)
+{
+	struct efc_node *tmp_node;
+	enum efc_nport_topology topology = node->nport->topology;
+	unsigned long index;
+
+	/*
+	 * now loop through the nodes in the nport
+	 * and send topology notification
+	 */
+	xa_for_each(&node->nport->lookup, index, tmp_node) {
+		if (tmp_node != node) {
+			efc_node_post_event(tmp_node,
+					    EFC_EVT_NPORT_TOPOLOGY_NOTIFY,
+					    (void *)topology);
+		}
+	}
+}
+
+static bool efc_rnode_is_nport(struct fc_els_flogi *rsp)
+{
+	return !(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_FPORT);
+}
+
+void
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+
+		memcpy(node->nport->domain->flogi_service_params,
+		       cbdata->els_rsp.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* Check to see if the fabric is an F_PORT or an N_PORT */
+		if (!efc_rnode_is_nport(cbdata->els_rsp.virt)) {
+			/* sm: if not nport / efc_domain_attach */
+			/* ext_status has the fc_id, attach domain */
+			efc_fabric_set_topology(node, EFC_NPORT_TOPO_FABRIC);
+			efc_fabric_notify_topology(node);
+			WARN_ON(node->nport->domain->attached);
+			efc_domain_attach(node->nport->domain,
+					  cbdata->ext_status);
+			efc_node_transition(node,
+					    __efc_fabric_wait_domain_attach,
+					    NULL);
+			break;
+		}
+
+		/*  sm: if nport and p2p_winner / efc_domain_attach */
+		efc_fabric_set_topology(node, EFC_NPORT_TOPO_P2P);
+		if (efc_p2p_setup(node->nport)) {
+			node_printf(node,
+				    "p2p setup failed, shutting down node\n");
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+			break;
+		}
+
+		if (node->nport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					     NULL);
+			if (node->nport->domain->attached &&
+			    !node->nport->domain->domain_notify_pend) {
+				/*
+				 * already attached,
+				 * just send ATTACH_OK
+				 */
+				node_printf(node,
+					    "p2p winner, domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/*
+			 * peer is p2p winner;
+			 * PLOGI will be received on the
+			 * remote SID=1 node;
+			 * this node has served its purpose
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+
+		break;
+	}
+
+	case EFC_EVT_ELS_REQ_ABORTED:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		struct efc_nport *nport = node->nport;
+		/*
+		 * with these errors, we have no recovery,
+		 * so shutdown the nport, leave the link
+		 * up and the domain ready
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		node_printf(node,
+			    "FLOGI failed evt=%s, shutting down nport [%s]\n",
+			    efc_sm_event_name(evt), nport->display_name);
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_sm_post_event(&nport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send FDISC */
+		efc_send_fdisc(node);
+		efc_node_transition(node, __efc_fabric_fdisc_wait_rsp, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		/* fc_id is in ext_status */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / efc_nport_attach */
+		efc_nport_attach(node->nport, cbdata->ext_status);
+		efc_node_transition(node, __efc_fabric_wait_domain_attach,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_log_err(node->efc, "FDISC failed, shutting down nport\n");
+		/* sm: / shutdown nport */
+		efc_sm_post_event(&node->nport->sm, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+static int
+efc_start_ns_node(struct efc_nport *nport)
+{
+	struct efc_node *ns;
+
+	/* Instantiate a name services node */
+	ns = efc_node_find(nport, FC_FID_DIR_SERV);
+	if (!ns) {
+		ns = efc_node_alloc(nport, FC_FID_DIR_SERV, false, false);
+		if (!ns)
+			return EFC_FAIL;
+	}
+	/*
+	 * for found ns, should we be transitioning from here?
+	 * breaks transition only
+	 *  1. from within state machine or
+	 *  2. if after alloc
+	 */
+	if (ns->efc->nodedb_mask & EFC_NODEDB_PAUSE_NAMESERVER)
+		efc_node_pause(ns, __efc_ns_init);
+	else
+		efc_node_transition(ns, __efc_ns_init, NULL);
+	return EFC_SUCCESS;
+}
+
+static int
+efc_start_fabctl_node(struct efc_nport *nport)
+{
+	struct efc_node *fabctl;
+
+	fabctl = efc_node_find(nport, FC_FID_FCTRL);
+	if (!fabctl) {
+		fabctl = efc_node_alloc(nport, FC_FID_FCTRL,
+					false, false);
+		if (!fabctl)
+			return EFC_FAIL;
+	}
+	/*
+	 * for found ns, should we be transitioning from here?
+	 * breaks transition only
+	 *  1. from within state machine or
+	 *  2. if after alloc
+	 */
+	efc_node_transition(fabctl, __efc_fabctl_init, NULL);
+	return EFC_SUCCESS;
+}
+
+void
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+	case EFC_EVT_NPORT_ATTACH_OK: {
+		int rc;
+
+		rc = efc_start_ns_node(node->nport);
+		if (rc)
+			return;
+
+		/* sm: if enable_ini / start fabctl node */
+		/* Instantiate the fabric controller (sends SCR) */
+		if (node->nport->enable_rscn) {
+			rc = efc_start_fabctl_node(node->nport);
+			if (rc)
+				return;
+		}
+		efc_node_transition(node, __efc_fabric_idle, NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabric_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt,
+		  void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc_send_plogi(node);
+		efc_node_transition(node, __efc_ns_plogi_wait_rsp, NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		int rc;
+
+		/* Save service parameters */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_ns_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		/* sm: / send RFTID */
+		efc_ns_send_rftid(node);
+		efc_node_transition(node, __efc_ns_rftid_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	/* ignore shutdown event as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "Shutdown event received\n");
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFT_ID,
+					  __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / send RFFID */
+		efc_ns_send_rffid(node);
+		efc_node_transition(node, __efc_ns_rffid_wait_rsp, NULL);
+		break;
+
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	/*
+	 * Waits for an RFFID response event;
+	 * if rscn enabled, a GIDPT name services request is issued.
+	 */
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFF_ID,
+					  __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		if (node->nport->enable_rscn) {
+			/* sm: if enable_rscn / send GIDPT */
+			efc_ns_send_gidpt(node);
+
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		} else {
+			/* if 'T' only, we're done, go to idle */
+			efc_node_transition(node, __efc_ns_idle, NULL);
+		}
+		break;
+	}
+	/*
+	 * if receive RSCN just ignore,
+	 * we haven't sent GID_PT yet (ACC sent by fabctl node)
+	 */
+	case EFC_EVT_RSCN_RCVD:
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+static int
+efc_process_gidpt_payload(struct efc_node *node,
+			  void *data, u32 gidpt_len)
+{
+	u32 i, j;
+	struct efc_node *newnode;
+	struct efc_nport *nport = node->nport;
+	struct efc *efc = node->efc;
+	u32 port_id = 0, port_count, plist_count;
+	struct efc_node *n;
+	struct efc_node **active_nodes;
+	int residual;
+	struct {
+		struct fc_ct_hdr hdr;
+		struct fc_gid_pn_resp pn_rsp;
+	} *rsp;
+	struct fc_gid_pn_resp *gidpt;
+	unsigned long index;
+
+	rsp = data;
+	gidpt = &rsp->pn_rsp;
+	residual = be16_to_cpu(rsp->hdr.ct_mr_size);
+
+	if (residual != 0)
+		efc_log_debug(node->efc, "residual is %u words\n", residual);
+
+	if (be16_to_cpu(rsp->hdr.ct_cmd) == FC_FS_RJT) {
+		node_printf(node,
+			    "GIDPT request failed: rsn x%x rsn_expl x%x\n",
+			    rsp->hdr.ct_reason, rsp->hdr.ct_explan);
+		return EFC_FAIL;
+	}
+
+	plist_count = (gidpt_len - sizeof(struct fc_ct_hdr)) / sizeof(*gidpt);
+
+	/* Count the number of nodes */
+	port_count = 0;
+	xa_for_each(&nport->lookup, index, n) {
+		port_count++;
+	}
+
+	/* Allocate a buffer for all nodes */
+	active_nodes = kcalloc(port_count, sizeof(*active_nodes), GFP_ATOMIC);
+	if (!active_nodes) {
+		node_printf(node, "active_nodes allocation failed\n");
+		return EFC_FAIL;
+	}
+
+	/* Fill buffer with fc_id of active nodes */
+	i = 0;
+	xa_for_each(&nport->lookup, index, n) {
+		port_id = n->rnode.fc_id;
+		switch (port_id) {
+		case FC_FID_FLOGI:
+		case FC_FID_FCTRL:
+		case FC_FID_DIR_SERV:
+			break;
+		default:
+			if (port_id != FC_FID_DOM_MGR)
+				active_nodes[i++] = n;
+			break;
+		}
+	}
+
+	/* update the active nodes buffer */
+	for (i = 0; i < plist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		for (j = 0; j < port_count; j++) {
+			if (active_nodes[j] &&
+			    port_id == active_nodes[j]->rnode.fc_id) {
+				active_nodes[j] = NULL;
+			}
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+
+	/* Those remaining in the active_nodes[] are now gone ! */
+	for (i = 0; i < port_count; i++) {
+		/*
+		 * if we're an initiator and the remote node
+		 * is a target, then post the node missing event.
+		 * if we're target and we have enabled
+		 * target RSCN, then post the node missing event.
+		 */
+		if (!active_nodes[i])
+			continue;
+
+		if ((node->nport->enable_ini && active_nodes[i]->targ) ||
+		    (node->nport->enable_tgt && enable_target_rscn(efc))) {
+			efc_node_post_event(active_nodes[i],
+					    EFC_EVT_NODE_MISSING, NULL);
+		} else {
+			node_printf(node,
+				    "GID_PT: skipping non-tgt port_id x%06x\n",
+				    active_nodes[i]->rnode.fc_id);
+		}
+	}
+	kfree(active_nodes);
+
+	for (i = 0; i < plist_count; i++) {
+		port_id = ntoh24(gidpt[i].fp_fid);
+
+		/* Don't create node for ourselves */
+		if (port_id == node->rnode.nport->fc_id) {
+			if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+				break;
+			continue;
+		}
+
+		newnode = efc_node_find(nport, port_id);
+		if (!newnode) {
+			if (!node->nport->enable_ini)
+				continue;
+
+			newnode = efc_node_alloc(nport, port_id, false, false);
+			if (!newnode) {
+				efc_log_err(efc, "efc_node_alloc() failed\n");
+				return EFC_FAIL;
+			}
+			/*
+			 * send PLOGI automatically
+			 * if initiator
+			 */
+			efc_node_init_device(newnode, true);
+		}
+
+		if (node->nport->enable_ini && newnode->targ) {
+			efc_node_post_event(newnode, EFC_EVT_NODE_REFOUND,
+					    NULL);
+		}
+
+		if (gidpt[i].fp_resvd & FC_NS_FID_LAST)
+			break;
+	}
+	return EFC_SUCCESS;
+}
+
+void
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+	/*
+	 * Wait for a GIDPT response from the name server. Process the FC_IDs
+	 * that are reported by creating new remote ports, as needed.
+	 */
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:	{
+		if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_GID_PT,
+					  __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / process GIDPT payload */
+		efc_process_gidpt_payload(node, cbdata->els_rsp.virt,
+					  cbdata->els_rsp.len);
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	{
+		/* not much we can do; will retry with the next RSCN */
+		node_printf(node, "GID_PT failed to complete\n");
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_ns_idle, NULL);
+		break;
+	}
+
+	/* if receive RSCN here, queue up another discovery processing */
+	case EFC_EVT_RSCN_RCVD: {
+		node_printf(node, "RSCN received during GID_PT processing\n");
+		node->rscn_pending = true;
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	/*
+	 * Wait for RSCN received events (posted from the fabric controller)
+	 * and restart the GIDPT name services query and processing.
+	 */
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->rscn_pending)
+			break;
+
+		node_printf(node, "RSCN pending, restart discovery\n");
+		node->rscn_pending = false;
+		fallthrough;
+
+	case EFC_EVT_RSCN_RCVD: {
+		/* sm: / send GIDPT */
+		/*
+		 * If target RSCN processing is enabled,
+		 * and this is target only (not initiator),
+		 * and tgt_rscn_delay is non-zero,
+		 * then we delay issuing the GID_PT
+		 */
+		if (efc->tgt_rscn_delay_msec != 0 &&
+		    !node->nport->enable_ini && node->nport->enable_tgt &&
+		    enable_target_rscn(efc)) {
+			efc_node_transition(node, __efc_ns_gidpt_delay, NULL);
+		} else {
+			efc_ns_send_gidpt(node);
+			efc_node_transition(node, __efc_ns_gidpt_wait_rsp,
+					    NULL);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+static void
+gidpt_delay_timer_cb(struct timer_list *t)
+{
+	struct efc_node *node = from_timer(node, t, gidpt_delay_timer);
+
+	del_timer(&node->gidpt_delay_timer);
+
+	efc_node_post_event(node, EFC_EVT_GIDPT_DELAY_EXPIRED, NULL);
+}
+
+void
+__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		u64 delay_msec, tmp;
+
+		/*
+		 * Compute the delay time.
+		 * Use tgt_rscn_delay; if the time since the last GIDPT
+		 * is less than tgt_rscn_period, use tgt_rscn_period instead.
+		 */
+		delay_msec = efc->tgt_rscn_delay_msec;
+		tmp = jiffies_to_msecs(jiffies) - node->time_last_gidpt_msec;
+		if (tmp < efc->tgt_rscn_period_msec)
+			delay_msec = efc->tgt_rscn_period_msec;
+
+		timer_setup(&node->gidpt_delay_timer, &gidpt_delay_timer_cb,
+			    0);
+		mod_timer(&node->gidpt_delay_timer,
+			  jiffies + msecs_to_jiffies(delay_msec));
+
+		break;
+	}
+
+	case EFC_EVT_GIDPT_DELAY_EXPIRED:
+		node->time_last_gidpt_msec = jiffies_to_msecs(jiffies);
+
+		efc_ns_send_gidpt(node);
+		efc_node_transition(node, __efc_ns_gidpt_wait_rsp, NULL);
+		break;
+
+	case EFC_EVT_RSCN_RCVD: {
+		efc_log_debug(efc,
+			      "RSCN received while in GIDPT delay - no action\n");
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabctl_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* no need to login to fabric controller, just send SCR */
+		efc_send_scr(node);
+		efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	/*
+	 * Fabric controller node state machine:
+	 * Wait for an SCR response from the fabric controller.
+	 */
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_SCR,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+static void
+efc_process_rscn(struct efc_node *node, struct efc_node_cb *cbdata)
+{
+	struct efc *efc = node->efc;
+	struct efc_nport *nport = node->nport;
+	struct efc_node *ns;
+
+	/* Forward this event to the name-services node */
+	ns = efc_node_find(nport, FC_FID_DIR_SERV);
+	if (ns)
+		efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, cbdata);
+	else
+		efc_log_warn(efc, "can't find name server node\n");
+}
+
+void
+__efc_fabctl_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	/*
+	 * Fabric controller node state machine: Ready.
+	 * In this state, the fabric controller sends an RSCN, which is received
+	 * by this node and is forwarded to the name services node object; and
+	 * the RSCN LS_ACC is sent.
+	 */
+	switch (evt) {
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		/*
+		 * sm: / process RSCN (forward to name services node),
+		 * send LS_ACC
+		 */
+		efc_process_rscn(node, cbdata);
+		efc_send_ls_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_fabctl_wait_ls_acc_cmpl,
+				    NULL);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_fabctl_ready, NULL);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+static uint64_t
+efc_get_wwpn(struct fc_els_flogi *sp)
+{
+	return be64_to_cpu(sp->fl_wwpn);
+}
+
+static int
+efc_rnode_is_winner(struct efc_nport *nport)
+{
+	struct fc_els_flogi *remote_sp;
+	u64 remote_wwpn;
+	u64 local_wwpn = nport->wwpn;
+	u64 wwn_bump = 0;
+
+	remote_sp = (struct fc_els_flogi *)nport->domain->flogi_service_params;
+	remote_wwpn = efc_get_wwpn(remote_sp);
+
+	local_wwpn ^= wwn_bump;
+
+	efc_log_debug(nport->efc, "r: %llx\n",
+		      be64_to_cpu(remote_sp->fl_wwpn));
+	efc_log_debug(nport->efc, "l: %llx\n", local_wwpn);
+
+	if (remote_wwpn == local_wwpn) {
+		efc_log_warn(nport->efc,
+			     "WWPN of remote node [%08x %08x] matches local WWPN\n",
+			     (u32)(local_wwpn >> 32ll),
+			     (u32)local_wwpn);
+		return -1;
+	}
+
+	return (remote_wwpn > local_wwpn);
+}
+
+void
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		struct efc_nport *nport = node->nport;
+		struct efc_node *rnode;
+
+		/*
+		 * this transient node (SID=0 (recv'd FLOGI)
+		 * or DID=fabric (sent FLOGI))
+		 * is the p2p winner, will use a separate node
+		 * to send PLOGI to peer
+		 */
+		WARN_ON(!node->nport->p2p_winner);
+
+		rnode = efc_node_find(nport, node->nport->p2p_remote_port_id);
+		if (rnode) {
+			/*
+			 * the "other" transient p2p node has
+			 * already kicked off the
+			 * new node from which PLOGI is sent
+			 */
+			node_printf(node,
+				    "Node with fc_id x%x already exists\n",
+				    rnode->rnode.fc_id);
+		} else {
+			/*
+			 * create new node (SID=1, DID=2)
+			 * from which to send PLOGI
+			 */
+			rnode = efc_node_alloc(nport,
+					       nport->p2p_remote_port_id,
+						false, false);
+			if (!rnode) {
+				efc_log_err(efc, "node alloc failed\n");
+				return;
+			}
+
+			efc_fabric_notify_topology(node);
+			/* sm: / allocate p2p remote node */
+			efc_node_transition(rnode, __efc_p2p_rnode_init,
+					    NULL);
+		}
+
+		/*
+		 * the transient node (SID=0 or DID=fabric)
+		 * has served its purpose
+		 */
+		if (node->rnode.fc_id == 0) {
+			/*
+			 * if this is the SID=0 node,
+			 * move to the init state in case peer
+			 * has restarted FLOGI discovery and FLOGI is pending
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		} else {
+			/*
+			 * if this is the DID=fabric node
+			 * (we initiated FLOGI), shut it down
+			 */
+			node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+			efc_fabric_initiate_shutdown(node);
+		}
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* sm: / send PLOGI */
+		efc_send_plogi(node);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp, NULL);
+		break;
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: send BA_ACC */
+		efc_send_bls_acc(node, cbdata->header->dma.virt);
+
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+
+		/* sm: if p2p_winner / domain_attach */
+		if (node->nport->p2p_winner) {
+			efc_node_transition(node,
+					    __efc_p2p_wait_domain_attach,
+					NULL);
+			if (!node->nport->domain->attached) {
+				node_printf(node, "Domain not attached\n");
+				efc_domain_attach(node->nport->domain,
+						  node->nport->p2p_port_id);
+			} else {
+				node_printf(node, "Domain already attached\n");
+				efc_node_post_event(node,
+						    EFC_EVT_DOMAIN_ATTACH_OK,
+						    NULL);
+			}
+		} else {
+			/* this node has served its purpose;
+			 * we'll expect a PLOGI on a separate
+			 * node (remote SID=0x1); return this node
+			 * to init state in case peer
+			 * restarts discovery -- it may already
+			 * have (pending frames may exist).
+			 */
+			/* don't send PLOGI on efc_d_init entry */
+			efc_node_init_device(node, false);
+		}
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/*
+		 * LS_ACC failed, possibly due to link down;
+		 * shutdown node and wait
+		 * for FLOGI discovery to restart
+		 */
+		node_printf(node, "FLOGI LS_ACC failed, shutting down\n");
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_ABTS_RCVD: {
+		/* sm: / send BA_ACC */
+		efc_send_bls_acc(node, cbdata->header->dma.virt);
+		break;
+	}
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK: {
+		int rc;
+
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		node_printf(node, "PLOGI failed, shutting down\n");
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* if we're in external loopback mode, just send LS_ACC */
+		if (node->efc->external_loopback) {
+			efc_send_plogi_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		} else {
+			/*
+			 * if this isn't external loopback,
+			 * pass to default handler
+			 */
+			__efc_fabric_common(__func__, ctx, evt, arg);
+		}
+		break;
+	}
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* We sent a PLOGI and, before its completion was seen, received
+		 * a PRLI from the remote node (WCQEs and RCQEs come in on
+		 * different queues and the order of processing cannot be
+		 * assumed). Save the OX_ID so the PRLI response can be sent
+		 * after the attach, and continue to wait for the PLOGI response
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_p2p_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login; we just
+		 * need to wait for the PLOGI response to do the node attach,
+		 * and then we can send the LS_ACC for the PRLI. During this
+		 * time we may receive FCP_CMNDs (possible since we've already
+		 * sent a PRLI and our peer may have accepted it).
+		 * We are not waiting on any other unsolicited frames to
+		 * continue the login process, so it does not hurt to hold
+		 * frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PLOGI response received */
+		int rc;
+
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_p2p_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+		break;
+	}
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_fabric_common, __func__)) {
+			return;
+		}
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node->ls_acc_io,
+					    node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PLOGI: /* Can't happen in P2P */
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node, __efc_d_port_logged_in,
+					    NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_fabric_initiate_shutdown(node);
+		break;
+
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node,
+				    __efc_fabric_wait_attach_evt_shutdown,
+				     NULL);
+		break;
+	case EFC_EVT_PRLI_RCVD:
+		node_printf(node, "%s: PRLI received before node is attached\n",
+			    efc_sm_event_name(evt));
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+				EFC_NODE_SEND_LS_ACC_PRLI);
+		break;
+
+	default:
+		__efc_fabric_common(__func__, ctx, evt, arg);
+	}
+}
+
+int
+efc_p2p_setup(struct efc_nport *nport)
+{
+	struct efc *efc = nport->efc;
+	int rnode_winner;
+
+	rnode_winner = efc_rnode_is_winner(nport);
+
+	/* set nport flags to indicate p2p "winner" */
+	if (rnode_winner == 1) {
+		nport->p2p_remote_port_id = 0;
+		nport->p2p_port_id = 0;
+		nport->p2p_winner = false;
+	} else if (rnode_winner == 0) {
+		nport->p2p_remote_port_id = 2;
+		nport->p2p_port_id = 1;
+		nport->p2p_winner = true;
+	} else {
+		/* no winner; only okay if external loopback enabled */
+		if (nport->efc->external_loopback) {
+			/*
+			 * External loopback mode enabled;
+			 * local nport and remote node
+			 * will be registered with an NPortID = 1;
+			 */
+			efc_log_debug(efc,
+				      "External loopback mode enabled\n");
+			nport->p2p_remote_port_id = 1;
+			nport->p2p_port_id = 1;
+			nport->p2p_winner = true;
+		} else {
+			efc_log_warn(efc,
+				     "failed to determine p2p winner\n");
+			return rnode_winner;
+		}
+	}
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/libefc/efc_fabric.h b/drivers/scsi/elx/libefc/efc_fabric.h
new file mode 100644
index 000000000000..b0947ae6fdca
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_fabric.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Declarations for the interface exported by efc_fabric
+ */
+
+#ifndef __EFCT_FABRIC_H__
+#define __EFCT_FABRIC_H__
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "scsi/fc/fc_ns.h"
+
+void
+__efc_fabric_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_domain_attach_wait(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx,
+				enum efc_sm_event evt, void *arg);
+
+void
+__efc_vport_fabric_init(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_wait_nport_attach(struct efc_sm_ctx *ctx,
+			       enum efc_sm_event evt, void *arg);
+
+void
+__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void
+__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void
+__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void
+__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void
+__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				      enum efc_sm_event evt, void *arg);
+void
+__efc_ns_logo_wait_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+void
+__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx,
+			enum efc_sm_event evt, void *arg);
+void
+__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void
+__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void
+__efc_fabctl_init(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void
+__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void
+__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void
+__efc_fabctl_ready(struct efc_sm_ctx *ctx,
+		   enum efc_sm_event evt, void *arg);
+void
+__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void
+__efc_fabric_idle(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+
+void
+__efc_p2p_rnode_init(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void
+__efc_p2p_domain_attach_wait(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void
+__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			      enum efc_sm_event evt, void *arg);
+void
+__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+void
+__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				    enum efc_sm_event evt, void *arg);
+void
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void
+__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+
+int
+efc_p2p_setup(struct efc_nport *nport);
+void
+efc_fabric_set_topology(struct efc_node *node,
+			enum efc_nport_topology topology);
+void efc_fabric_notify_topology(struct efc_node *node);
+
+#endif /* __EFCT_FABRIC_H__ */
-- 
2.26.2



* [PATCH v7 14/31] elx: libefc: FC node ELS and state handling
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
  2021-01-07 22:58 ` [PATCH v7 13/31] elx: libefc: Fabric " James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 15/31] elx: libefc: Extended link Service IO handling James Smart
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- FC node PRLI handling and state management

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_device.c | 1600 ++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_device.h |   72 ++
 2 files changed, 1672 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_device.c
 create mode 100644 drivers/scsi/elx/libefc/efc_device.h

diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
new file mode 100644
index 000000000000..26de8ad79067
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.c
@@ -0,0 +1,1600 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * device_sm Node State Machine: Remote Device States
+ */
+
+#include "efc.h"
+#include "efc_device.h"
+#include "efc_fabric.h"
+
+void
+efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id)
+{
+	u32 rc = EFC_SCSI_CALL_COMPLETE;
+	struct efc *efc = node->efc;
+
+	node->ls_acc_oxid = ox_id;
+	node->send_ls_acc = EFC_NODE_SEND_LS_ACC_PRLI;
+
+	/*
+	 * Wait for backend session registration
+	 * to complete before sending PRLI resp
+	 */
+
+	if (node->init) {
+		efc_log_info(efc, "[%s] found(initiator) WWPN:%s WWNN:%s\n",
+			node->display_name, node->wwpn, node->wwnn);
+		if (node->nport->enable_tgt)
+			rc = efc->tt.scsi_new_node(efc, node);
+	}
+
+	if (rc == EFC_SCSI_CALL_COMPLETE)
+		efc_node_post_event(node, EFC_EVT_NODE_SESS_REG_OK, NULL);
+}
+
+static void
+__efc_d_common(const char *funcname, struct efc_sm_ctx *ctx,
+	       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = NULL;
+	struct efc *efc = NULL;
+
+	node = ctx->app;
+	efc = node->efc;
+
+	switch (evt) {
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n",
+			      node->display_name, funcname,
+				efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		efc_log_debug(efc, "[%s] %-20s %-20s\n", node->display_name,
+			      funcname, efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	default:
+		/* call default event handler common to all nodes */
+		__efc_node_common(funcname, ctx, evt, arg);
+	}
+}
+
+static void
+__efc_d_wait_del_node(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	/*
+	 * State is entered when a node sends a delete initiator/target call
+	 * to the target-server/initiator-client and needs to wait for that
+	 * work to complete.
+	 */
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		fallthrough;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		/*
+		 * node has either been detached or is in the process
+		 * of being detached,
+		 * call common node's initiate cleanup function
+		 */
+		efc_node_initiate_cleanup(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		fallthrough;
+
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+static void
+__efc_d_wait_del_ini_tgt(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		fallthrough;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* These are expected events. */
+		break;
+
+	case EFC_EVT_NODE_DEL_INI_COMPLETE:
+	case EFC_EVT_NODE_DEL_TGT_COMPLETE:
+		efc_node_transition(node, __efc_d_wait_del_node, NULL);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* Can happen as ELS IOs complete */
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		fallthrough;
+
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		int rc = EFC_SCSI_CALL_COMPLETE;
+
+		/* assume no wait needed */
+		WRITE_ONCE(node->els_io_enabled, false);
+
+		/* make necessary delete upcall(s) */
+		if (node->init && !node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (initiator) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->nport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+
+		} else if (node->targ && !node->init) {
+			efc_log_info(node->efc,
+				     "[%s] delete (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			efc_node_transition(node,
+					    __efc_d_wait_del_node,
+					     NULL);
+			if (node->nport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+					EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+
+		} else if (node->init && node->targ) {
+			efc_log_info(node->efc,
+				     "[%s] delete (I+T) WWPN %s WWNN %s\n",
+				node->display_name, node->wwpn, node->wwnn);
+			efc_node_transition(node, __efc_d_wait_del_ini_tgt,
+					    NULL);
+			if (node->nport->enable_tgt)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_INITIATOR_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_INI_COMPLETE, NULL);
+			/* assume no wait needed */
+			rc = EFC_SCSI_CALL_COMPLETE;
+			if (node->nport->enable_ini)
+				rc = efc->tt.scsi_del_node(efc, node,
+						EFC_SCSI_TARGET_DELETED);
+
+			if (rc == EFC_SCSI_CALL_COMPLETE)
+				efc_node_post_event(node,
+					EFC_EVT_NODE_DEL_TGT_COMPLETE, NULL);
+		}
+
+		/* we've initiated the upcalls as needed, now kick off the node
+		 * detach to precipitate the aborting of outstanding exchanges
+		 * associated with said node
+		 *
+		 * Beware: if we've made upcall(s), we've already transitioned
+		 * to a new state by the time we execute this.
+		 * Consider doing this before the upcalls?
+		 */
+		if (node->attached) {
+			/* issue hw node free; don't care if succeeds right
+			 * away or sometime later, will check node->attached
+			 * later in shutdown process
+			 */
+			rc = efc_cmd_node_detach(efc, &node->rnode);
+			if (rc != EFC_HW_RTN_SUCCESS &&
+			    rc != EFC_HW_RTN_SUCCESS_SYNC)
+				node_printf(node,
+					    "Failed freeing HW node, rc=%d\n",
+					rc);
+		}
+
+		/* if neither initiator nor target, proceed to cleanup */
+		if (!node->init && !node->targ) {
+			/*
+			 * node has either been detached or is in
+			 * the process of being detached,
+			 * call common node's initiate cleanup function
+			 */
+			efc_node_initiate_cleanup(node);
+		}
+		break;
+	}
+	case EFC_EVT_ALL_CHILD_NODES_FREE:
+		/* Ignore, this can happen if an ELS is
+		 * aborted while in a delay/retry state
+		 */
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_loop(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK: {
+		/* send PLOGI automatically if initiator */
+		efc_node_init_device(node, true);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+efc_send_ls_acc_after_attach(struct efc_node *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc ls)
+{
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+
+	/* Save the OX_ID for sending LS_ACC sometime later */
+	WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_NONE);
+
+	node->ls_acc_oxid = ox_id;
+	node->send_ls_acc = ls;
+	node->ls_acc_did = ntoh24(hdr->fh_d_id);
+}
+
+void
+efc_process_prli_payload(struct efc_node *node, void *prli)
+{
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp sp;
+	} *pp;
+
+	pp = prli;
+	node->init = (pp->sp.spp_flags & FCP_SPPF_INIT_FCN) != 0;
+	node->targ = (pp->sp.spp_flags & FCP_SPPF_TARG_FCN) != 0;
+}
+
+void
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:	/* PLOGI ACC completions */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		efc_node_transition(node, __efc_d_port_logged_in, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:
+		/* LOGO response received, send shutdown */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_LOGO,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		node_printf(node, "LOGO sent (evt=%s), shutdown node\n",
+			    efc_sm_event_name(evt));
+		/* sm: / post explicit logout */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				    NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+efc_node_init_device(struct efc_node *node, bool send_plogi)
+{
+	node->send_plogi = send_plogi;
+	if ((node->efc->nodedb_mask & EFC_NODEDB_PAUSE_NEW_NODES) &&
+	    (node->rnode.fc_id != FC_FID_DOM_MGR)) {
+		node->nodedb_state = __efc_d_init;
+		efc_node_transition(node, __efc_node_paused, NULL);
+	} else {
+		efc_node_transition(node, __efc_d_init, NULL);
+	}
+}
+
+static void
+efc_d_check_plogi_topology(struct efc_node *node, u32 d_id)
+{
+	switch (node->nport->topology) {
+	case EFC_NPORT_TOPO_P2P:
+		/* we're not attached and nport is p2p,
+		 * need to attach
+		 */
+		efc_domain_attach(node->nport->domain, d_id);
+		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
+		break;
+	case EFC_NPORT_TOPO_FABRIC:
+		/* we're not attached and nport is fabric, domain
+		 * attach should have already been requested as part
+		 * of the fabric state machine, wait for it
+		 */
+		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
+		break;
+	case EFC_NPORT_TOPO_UNKNOWN:
+		/* Two possibilities:
+		 * 1. received a PLOGI before our FLOGI has completed
+		 *    (possible since completion comes in on another
+		 *    CQ), thus we don't know what we're connected to
+		 *    yet; transition to a state to wait for the
+		 *    fabric node to tell us;
+		 * 2. PLOGI received before link went down and we
+		 * haven't performed domain attach yet.
+		 * Note: we cannot distinguish between 1. and 2.
+		 * so have to assume PLOGI
+		 * was received after link back up.
+		 */
+		node_printf(node, "received PLOGI, unknown topology did=0x%x\n",
+			    d_id);
+		efc_node_transition(node, __efc_d_wait_topology_notify, NULL);
+		break;
+	default:
+		node_printf(node, "received PLOGI, unexpected topology %d\n",
+			    node->nport->topology);
+	}
+}
+
+void
+__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	/*
+	 * This state is entered when a node is instantiated,
+	 * either having been discovered from a name services query,
+	 * or having received a PLOGI/FLOGI.
+	 */
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		if (!node->send_plogi)
+			break;
+		/* only send if we have initiator capability,
+		 * and domain is attached
+		 */
+		if (node->nport->enable_ini &&
+		    node->nport->domain->attached) {
+			efc_send_plogi(node);
+
+			efc_node_transition(node, __efc_d_wait_plogi_rsp, NULL);
+		} else {
+			node_printf(node,
+				    "not sending plogi nport.ini=%d, domain attached=%d\n",
+				    node->nport->enable_ini,
+				    node->nport->domain->attached);
+		}
+		break;
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		int rc;
+
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* domain not attached; several possibilities: */
+		if (!node->nport->domain->attached) {
+			efc_d_check_plogi_topology(node, ntoh24(hdr->fh_d_id));
+			break;
+		}
+
+		/* domain already attached */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+	}
+
+	case EFC_EVT_FDISC_RCVD: {
+		__efc_d_common(__func__, ctx, evt, arg);
+		break;
+	}
+
+	case EFC_EVT_FLOGI_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		u32 d_id = ntoh24(hdr->fh_d_id);
+
+		/* sm: / save sparams, send FLOGI acc */
+		memcpy(node->nport->domain->flogi_service_params,
+		       cbdata->payload->dma.virt,
+		       sizeof(struct fc_els_flogi));
+
+		/* send FC LS_ACC response, override s_id */
+		efc_fabric_set_topology(node, EFC_NPORT_TOPO_P2P);
+
+		efc_send_flogi_p2p_acc(node, be16_to_cpu(hdr->fh_ox_id), d_id);
+
+		if (efc_p2p_setup(node->nport)) {
+			node_printf(node, "p2p failed, shutting down node\n");
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+			break;
+		}
+
+		efc_node_transition(node, __efc_p2p_wait_flogi_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->nport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+
+		efc_send_logo_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		if (!node->nport->domain->attached) {
+			/* most likely a frame left over from before a link
+			 * down; drop and shut node down w/ "explicit logout"
+			 * so pending frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+
+		efc_send_ls_rjt(node, be16_to_cpu(hdr->fh_ox_id),
+				ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* note: problem, we're now expecting an ELS REQ completion
+		 * from both the LOGO and PLOGI
+		 */
+		if (!node->nport->domain->attached) {
+			/* most likely a frame left over from before a
+			 * link down; drop and
+			 * shut node down w/ "explicit logout" so pending
+			 * frames are processed
+			 */
+			node_printf(node, "%s domain not attached, dropping\n",
+				    efc_sm_event_name(evt));
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+			break;
+		}
+
+		/* Send LOGO */
+		node_printf(node, "FCP_CMND received, send LOGO\n");
+		if (efc_send_logo(node)) {
+			/*
+			 * failed to send LOGO, go ahead and cleanup node
+			 * anyways
+			 */
+			node_printf(node, "Failed to send LOGO\n");
+			efc_node_post_event(node,
+					    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+					    NULL);
+		} else {
+			/* sent LOGO, wait for response */
+			efc_node_transition(node,
+					    __efc_d_wait_logo_rsp, NULL);
+		}
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		/* don't care about domain_attach_ok */
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_PLOGI_RCVD: {
+		/* T, or I+T */
+		/* received PLOGI with svc parms, go ahead and attach node
+		 * when PLOGI that was sent ultimately completes, it'll be a
+		 * no-op
+		 *
+		 * If there is an outstanding PLOGI sent, can we set a flag
+		 * to indicate that we don't want to retry it if it times out?
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+		/* sm: domain->attached / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD:
+		/* I, or I+T */
+		/* sent PLOGI and before completion was seen, received the
+		 * PRLI from the remote node (WCQEs and RCQEs come in on
+		 * different queues and order of processing cannot be assumed)
+		 * Save OXID so PRLI can be sent after the attach and continue
+		 * to wait for PLOGI response
+		 */
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PRLI);
+		efc_node_transition(node, __efc_d_wait_plogi_rsp_recvd_prli,
+				    NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: /* why don't we do a shutdown here?? */
+	case EFC_EVT_PRLO_RCVD:
+	case EFC_EVT_PDISC_RCVD:
+	case EFC_EVT_FDISC_RCVD:
+	case EFC_EVT_ADISC_RCVD:
+	case EFC_EVT_RSCN_RCVD:
+	case EFC_EVT_SCR_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received, sending reject\n",
+			    efc_sm_event_name(evt));
+
+		efc_send_ls_rjt(node, be16_to_cpu(hdr->fh_ox_id),
+				ELS_RJT_UNAB, ELS_EXPL_PLOGI_REQD, 0);
+
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* Our PLOGI was rejected, this is ok in some cases */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* not logged in yet and outstanding PLOGI so don't send LOGO,
+		 * just drop
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				  enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/*
+		 * Since we've received a PRLI, we have a port login and will
+		 * just need to wait for the PLOGI response to do the node
+		 * attach; then we can send the LS_ACC for the PRLI. During
+		 * this time we may receive FCP_CMNDs (possible since we've
+		 * already sent a PRLI and our peer may have accepted), but
+		 * we are not waiting on any other unsolicited frames to
+		 * continue with the login process, so it will not hurt to
+		 * hold frames here.
+		 */
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK:	/* PLOGI response received */
+		/* Completion from PLOGI sent */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / save sparams, efc_node_attach */
+		efc_node_save_sparms(node, cbdata->els_rsp.virt);
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL:	/* PLOGI response received */
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* PLOGI failed, shutdown the node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		WARN_ON(!node->nport->domain->attached);
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK,
+					    NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg)
+{
+	int rc;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NPORT_TOPOLOGY_NOTIFY: {
+		enum efc_nport_topology topology =
+				(enum efc_nport_topology)(unsigned long)arg;
+
+		WARN_ON(node->nport->domain->attached);
+
+		WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		node_printf(node, "topology notification, topology=%d\n",
+			    topology);
+
+		/* At the time the PLOGI was received, the topology was unknown,
+		 * so we didn't know which node would perform the domain attach:
+		 * 1. The node from which the PLOGI was sent (p2p) or
+		 * 2. The node to which the FLOGI was sent (fabric).
+		 */
+		if (topology == EFC_NPORT_TOPO_P2P) {
+			/* if this is p2p, need to attach to the domain using
+			 * the d_id from the PLOGI received
+			 */
+			efc_domain_attach(node->nport->domain,
+					  node->ls_acc_did);
+		}
+		/* else, if this is fabric, the domain attach
+		 * should be performed by the fabric node (node sending FLOGI);
+		 * just wait for attach to complete
+		 */
+
+		efc_node_transition(node, __efc_d_wait_domain_attach, NULL);
+		break;
+	}
+	case EFC_EVT_DOMAIN_ATTACH_OK:
+		WARN_ON(!node->nport->domain->attached);
+		node_printf(node, "domain attach ok\n");
+		/* sm: / efc_node_attach */
+		rc = efc_node_attach(node);
+		efc_node_transition(node, __efc_d_wait_node_attach, NULL);
+		if (rc == EFC_HW_RTN_SUCCESS_SYNC)
+			efc_node_post_event(node,
+					    EFC_EVT_NODE_ATTACH_OK, NULL);
+
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		switch (node->send_ls_acc) {
+		case EFC_NODE_SEND_LS_ACC_PLOGI: {
+			/* sm: send_plogi_acc is set / send PLOGI acc */
+			/* Normal case for T, or I+T */
+			efc_send_plogi_acc(node, node->ls_acc_oxid);
+			efc_node_transition(node, __efc_d_wait_plogi_acc_cmpl,
+					    NULL);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_PRLI: {
+			efc_d_send_prli_rsp(node, node->ls_acc_oxid);
+			node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+			node->ls_acc_io = NULL;
+			break;
+		}
+		case EFC_NODE_SEND_LS_ACC_NONE:
+		default:
+			/* Normal case for I */
+			/* sm: send_plogi_acc is not set / send PLOGI acc */
+			efc_node_transition(node,
+					    __efc_d_port_logged_in, NULL);
+			break;
+		}
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "node attach failed\n");
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* Handle shutdown events */
+	case EFC_EVT_SHUTDOWN:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_EXPLICIT_LOGO;
+		efc_node_transition(node, __efc_d_wait_attach_evt_shutdown,
+				    NULL);
+		break;
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_IMPLICIT_LOGO;
+		efc_node_transition(node,
+				    __efc_d_wait_attach_evt_shutdown, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	/* wait for any of these attach events and then shutdown */
+	case EFC_EVT_NODE_ATTACH_OK:
+		node->attached = true;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	case EFC_EVT_NODE_ATTACH_FAIL:
+		/* node attach failed, shutdown the node */
+		node->attached = false;
+		node_printf(node, "Attach evt=%s, proceed to shutdown\n",
+			    efc_sm_event_name(evt));
+		efc_node_transition(node, __efc_d_initiate_shutdown, NULL);
+		break;
+
+	/* ignore shutdown events as we're already in shutdown path */
+	case EFC_EVT_SHUTDOWN:
+		/* have default shutdown event take precedence */
+		node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT;
+		fallthrough;
+
+	case EFC_EVT_SHUTDOWN_EXPLICIT_LOGO:
+	case EFC_EVT_SHUTDOWN_IMPLICIT_LOGO:
+		node_printf(node, "%s received\n", efc_sm_event_name(evt));
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		/* Normal case for I or I+T */
+		if (node->nport->enable_ini &&
+		    node->rnode.fc_id == FC_FID_DOM_MGR) {
+			/* sm: if enable_ini / send PRLI */
+			efc_send_prli(node);
+			/* can now expect ELS_REQ_OK/FAIL/RJT */
+		}
+		break;
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		/* Normal case for T or I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		struct {
+			struct fc_els_prli prli;
+			struct fc_els_spp sp;
+		} *pp;
+
+		pp = cbdata->payload->dma.virt;
+		if (pp->sp.spp_type != FC_TYPE_FCP) {
+			/* Only FCP is supported */
+			efc_send_ls_rjt(node, be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
+			break;
+		}
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_d_send_prli_rsp(node, be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_NODE_SESS_REG_OK:
+		if (node->send_ls_acc == EFC_NODE_SEND_LS_ACC_PRLI)
+			efc_send_prli_acc(node, node->ls_acc_oxid);
+
+		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+
+	case EFC_EVT_NODE_SESS_REG_FAIL:
+		efc_send_ls_rjt(node, node->ls_acc_oxid, ELS_RJT_UNAB,
+				ELS_EXPL_UNSUPR, 0);
+		node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE;
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_OK: {	/* PRLI response */
+		/* Normal case for I or I+T */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / process PRLI payload */
+		efc_process_prli_payload(node, cbdata->els_rsp.virt);
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_FAIL: {	/* PRLI response failed */
+		/* I, I+T, assume some link failure, shutdown node */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT: {
+		/* PRLI rejected by remote
+		 * Normal for I, I+T (connected to an I)
+		 * Node doesn't want to be a target, stay here and wait for a
+		 * PRLI from the remote node
+		 * if it really wants to connect to us as target
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_PRLI,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		break;
+	}
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK: {
+		/* Normal T, I+T, target-server rejected the process login */
+		/* This would be received only in the case where we sent
+		 * LS_RJT for the PRLI, so do nothing.
+		 * (Note: as T-only we could shut down the node.)
+		 */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		break;
+	}
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc,
+		 * post implicit logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/* Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc_send_logo_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg)
+{
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		efc_node_hold_frames(node);
+		break;
+
+	case EFC_EVT_EXIT:
+		efc_node_accept_frames(node);
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* sm: / post explicit logout */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL);
+		break;
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_device_ready(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	if (evt != EFC_EVT_FCP_CMD_RCVD)
+		node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER:
+		node->fcp_enabled = true;
+		if (node->targ) {
+			efc_log_info(efc,
+				     "[%s] found (target) WWPN %s WWNN %s\n",
+				node->display_name,
+				node->wwpn, node->wwnn);
+			if (node->nport->enable_ini)
+				efc->tt.scsi_new_node(efc, node);
+		}
+		break;
+
+	case EFC_EVT_EXIT:
+		node->fcp_enabled = false;
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters,
+		 * and send ACC
+		 */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, NULL);
+		break;
+	}
+
+	case EFC_EVT_PRLI_RCVD: {
+		/* T, I+T: remote initiator is slow to get started */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		struct {
+			struct fc_els_prli prli;
+			struct fc_els_spp sp;
+		} *pp;
+
+		pp = cbdata->payload->dma.virt;
+		if (pp->sp.spp_type != FC_TYPE_FCP) {
+			/* Only FCP is supported */
+			efc_send_ls_rjt(node, be16_to_cpu(hdr->fh_ox_id),
+					ELS_RJT_UNAB, ELS_EXPL_UNSUPR, 0);
+			break;
+		}
+
+		efc_process_prli_payload(node, cbdata->payload->dma.virt);
+		efc_send_prli_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_PRLO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send PRLO acc */
+		efc_send_prlo_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		/* need implicit logout? */
+		break;
+	}
+
+	case EFC_EVT_LOGO_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc_send_logo_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+
+	case EFC_EVT_ADISC_RCVD: {
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+		/* sm: / send ADISC acc */
+		efc_send_adisc_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		break;
+	}
+
+	case EFC_EVT_ABTS_RCVD:
+		/* sm: / process ABTS */
+		efc_log_err(efc, "Unexpected event: %s\n",
+			    efc_sm_event_name(evt));
+		break;
+
+	case EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY:
+		break;
+
+	case EFC_EVT_NODE_REFOUND:
+		break;
+
+	case EFC_EVT_NODE_MISSING:
+		if (node->nport->enable_rscn)
+			efc_node_transition(node, __efc_d_device_gone, NULL);
+
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_OK:
+		/* T, or I+T, PRLI accept completed ok */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		break;
+
+	case EFC_EVT_SRRS_ELS_CMPL_FAIL:
+		/* T, or I+T, PRLI accept failed to complete */
+		WARN_ON(!node->els_cmpl_cnt);
+		node->els_cmpl_cnt--;
+		node_printf(node, "Failed to send PRLI LS_ACC\n");
+		break;
+
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_device_gone(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+	struct efc *efc = node->efc;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_ENTER: {
+		int rc = EFC_SCSI_CALL_COMPLETE;
+		int rc_2 = EFC_SCSI_CALL_COMPLETE;
+		static char const *labels[] = {"none", "initiator", "target",
+							"initiator+target"};
+
+		efc_log_info(efc, "[%s] missing (%s)    WWPN %s WWNN %s\n",
+			     node->display_name,
+			     labels[(node->targ << 1) | (node->init)],
+			     node->wwpn, node->wwnn);
+
+		switch (efc_node_get_enable(node)) {
+		case EFC_NODE_ENABLE_T_TO_T:
+		case EFC_NODE_ENABLE_I_TO_T:
+		case EFC_NODE_ENABLE_IT_TO_T:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_I:
+		case EFC_NODE_ENABLE_I_TO_I:
+		case EFC_NODE_ENABLE_IT_TO_I:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_T_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_I_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+						  EFC_SCSI_TARGET_MISSING);
+			break;
+
+		case EFC_NODE_ENABLE_IT_TO_IT:
+			rc = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_INITIATOR_MISSING);
+			rc_2 = efc->tt.scsi_del_node(efc, node,
+				EFC_SCSI_TARGET_MISSING);
+			break;
+
+		default:
+			rc = EFC_SCSI_CALL_COMPLETE;
+			break;
+		}
+
+		if (rc == EFC_SCSI_CALL_COMPLETE &&
+		    rc_2 == EFC_SCSI_CALL_COMPLETE)
+			efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL);
+
+		break;
+	}
+	case EFC_EVT_NODE_REFOUND:
+		/* two approaches, reauthenticate with PLOGI/PRLI, or ADISC */
+
+		/* reauthenticate with PLOGI/PRLI */
+		/* efc_node_transition(node, __efc_d_discovered, NULL); */
+
+		/* reauthenticate with ADISC */
+		/* sm: / send ADISC */
+		efc_send_adisc(node);
+		efc_node_transition(node, __efc_d_wait_adisc_rsp, NULL);
+		break;
+
+	case EFC_EVT_PLOGI_RCVD: {
+		/* sm: / save sparams, set send_plogi_acc, post implicit
+		 * logout
+		 * Save plogi parameters
+		 */
+		efc_node_save_sparms(node, cbdata->payload->dma.virt);
+		efc_send_ls_acc_after_attach(node,
+					     cbdata->header->dma.virt,
+					     EFC_NODE_SEND_LS_ACC_PLOGI);
+
+		/*
+		 * Restart node attach with new service parameters, and send
+		 * ACC
+		 */
+		efc_node_post_event(node, EFC_EVT_SHUTDOWN_IMPLICIT_LOGO,
+				    NULL);
+		break;
+	}
+
+	case EFC_EVT_FCP_CMD_RCVD: {
+		/* most likely a stale frame (received prior to link down),
+		 * if attempt to send LOGO, will probably timeout and eat
+		 * up 20s; thus, drop FCP_CMND
+		 */
+		node_printf(node, "FCP_CMND received, drop\n");
+		break;
+	}
+	case EFC_EVT_LOGO_RCVD: {
+		/* I, T, I+T */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+		/* sm: / send LOGO acc */
+		efc_send_logo_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
+
+void
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg)
+{
+	struct efc_node_cb *cbdata = arg;
+	struct efc_node *node = ctx->app;
+
+	efc_node_evt_set(ctx, evt, __func__);
+
+	node_sm_trace();
+
+	switch (evt) {
+	case EFC_EVT_SRRS_ELS_REQ_OK:
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		efc_node_transition(node, __efc_d_device_ready, NULL);
+		break;
+
+	case EFC_EVT_SRRS_ELS_REQ_RJT:
+		/* Received an LS_RJT; in this case, send the shutdown
+		 * (explicit logo) event, which will unregister the node
+		 * and start over with PLOGI.
+		 */
+		if (efc_node_check_els_req(ctx, evt, arg, ELS_ADISC,
+					   __efc_d_common, __func__))
+			return;
+
+		WARN_ON(!node->els_req_cnt);
+		node->els_req_cnt--;
+		/* sm: / post explicit logout */
+		efc_node_post_event(node,
+				    EFC_EVT_SHUTDOWN_EXPLICIT_LOGO,
+				     NULL);
+		break;
+
+	case EFC_EVT_LOGO_RCVD: {
+		/* In this case, we have the equivalent of an LS_RJT for
+		 * the ADISC, so we need to abort the ADISC, and re-login
+		 * with PLOGI
+		 */
+		/* sm: / request abort, send LOGO acc */
+		struct fc_frame_header *hdr = cbdata->header->dma.virt;
+
+		node_printf(node, "%s received attached=%d\n",
+			    efc_sm_event_name(evt), node->attached);
+
+		efc_send_logo_acc(node, be16_to_cpu(hdr->fh_ox_id));
+		efc_node_transition(node, __efc_d_wait_logo_acc_cmpl, NULL);
+		break;
+	}
+	default:
+		__efc_d_common(__func__, ctx, evt, arg);
+	}
+}
diff --git a/drivers/scsi/elx/libefc/efc_device.h b/drivers/scsi/elx/libefc/efc_device.h
new file mode 100644
index 000000000000..3cf1d8c6698f
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_device.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Node state machine functions for remote device node sm
+ */
+
+#ifndef __EFCT_DEVICE_H__
+#define __EFCT_DEVICE_H__
+void
+efc_node_init_device(struct efc_node *node, bool send_plogi);
+void
+efc_process_prli_payload(struct efc_node *node,
+			 void *prli);
+void
+efc_d_send_prli_rsp(struct efc_node *node, uint16_t ox_id);
+void
+efc_send_ls_acc_after_attach(struct efc_node *node,
+			     struct fc_frame_header *hdr,
+			     enum efc_node_send_ls_acc ls);
+void
+__efc_d_wait_loop(struct efc_sm_ctx *ctx,
+		  enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_plogi_acc_cmpl(struct efc_sm_ctx *ctx,
+			    enum efc_sm_event evt, void *arg);
+void
+__efc_d_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_plogi_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx,
+				  enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_domain_attach(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+			     enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_node_attach(struct efc_sm_ctx *ctx,
+			 enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx,
+				 enum efc_sm_event evt, void *arg);
+void
+__efc_d_initiate_shutdown(struct efc_sm_ctx *ctx,
+			  enum efc_sm_event evt, void *arg);
+void
+__efc_d_port_logged_in(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_logo_acc_cmpl(struct efc_sm_ctx *ctx,
+			   enum efc_sm_event evt, void *arg);
+void
+__efc_d_device_ready(struct efc_sm_ctx *ctx,
+		     enum efc_sm_event evt, void *arg);
+void
+__efc_d_device_gone(struct efc_sm_ctx *ctx,
+		    enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_adisc_rsp(struct efc_sm_ctx *ctx,
+		       enum efc_sm_event evt, void *arg);
+void
+__efc_d_wait_logo_rsp(struct efc_sm_ctx *ctx,
+		      enum efc_sm_event evt, void *arg);
+
+#endif /* __EFCT_DEVICE_H__ */
-- 
2.26.2



* [PATCH v7 15/31] elx: libefc: Extended Link Service IO handling
From: James Smart @ 2021-01-07 22:58 UTC
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the libefc library population.

It adds the library interfaces used to build and send ELS/CT/BLS
commands and responses.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v7:
 Change els functions to return status from efc_els_send_rsp()
---
 drivers/scsi/elx/libefc/efc_els.c | 1099 +++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_els.h |  107 +++
 2 files changed, 1206 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_els.c
 create mode 100644 drivers/scsi/elx/libefc/efc_els.h

diff --git a/drivers/scsi/elx/libefc/efc_els.c b/drivers/scsi/elx/libefc/efc_els.c
new file mode 100644
index 000000000000..c8a8124d2c94
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_els.c
@@ -0,0 +1,1099 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/*
+ * Functions to build and send ELS/CT/BLS commands and responses.
+ */
+
+#include "efc.h"
+#include "efc_els.h"
+#include "../libefc_sli/sli4.h"
+
+#define EFC_LOG_ENABLE_ELS_TRACE(efc)		\
+		(((efc) != NULL) ? (((efc)->logmask & (1U << 1)) != 0) : 0)
+
+#define node_els_trace()  \
+	do { \
+		if (EFC_LOG_ENABLE_ELS_TRACE(efc)) \
+			efc_log_info(efc, "[%s] %-20s\n", \
+				node->display_name, __func__); \
+	} while (0)
+
+#define els_io_printf(els, fmt, ...) \
+	efc_log_err((struct efc *)els->node->efc,\
+		      "[%s] %-8s " fmt, \
+		      els->node->display_name,\
+		      els->display_name, ##__VA_ARGS__)
+
+#define EFC_ELS_RSP_LEN			1024
+#define EFC_ELS_GID_PT_RSP_LEN		8096
+
+struct efc_els_io_req *
+efc_els_io_alloc(struct efc_node *node, u32 reqlen)
+{
+	return efc_els_io_alloc_size(node, reqlen, EFC_ELS_RSP_LEN);
+}
+
+struct efc_els_io_req *
+efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen)
+{
+	struct efc *efc;
+	struct efc_els_io_req *els;
+	unsigned long flags = 0;
+
+	efc = node->efc;
+
+	spin_lock_irqsave(&node->els_ios_lock, flags);
+
+	if (!READ_ONCE(node->els_io_enabled)) {
+		efc_log_err(efc, "els io alloc disabled\n");
+		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+		return NULL;
+	}
+
+	els = mempool_alloc(efc->els_io_pool, GFP_ATOMIC);
+	if (!els) {
+		atomic_add_return(1, &efc->els_io_alloc_failed_count);
+		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+		return NULL;
+	}
+
+	/* initialize refcount */
+	kref_init(&els->ref);
+	els->release = _efc_els_io_free;
+
+	/* populate generic io fields */
+	els->node = node;
+
+	/* now allocate DMA for request and response */
+	els->io.req.size = reqlen;
+	els->io.req.virt = dma_alloc_coherent(&efc->pci->dev, els->io.req.size,
+					       &els->io.req.phys, GFP_DMA);
+	if (!els->io.req.virt) {
+		mempool_free(els, efc->els_io_pool);
+		spin_unlock_irqrestore(&node->els_ios_lock, flags);
+		return NULL;
+	}
+
+	els->io.rsp.size = rsplen;
+	els->io.rsp.virt = dma_alloc_coherent(&efc->pci->dev, els->io.rsp.size,
+						&els->io.rsp.phys, GFP_DMA);
+	if (!els->io.rsp.virt) {
+		dma_free_coherent(&efc->pci->dev, els->io.req.size,
+					els->io.req.virt, els->io.req.phys);
+		mempool_free(els, efc->els_io_pool);
+		els = NULL;
+	}
+
+	if (els) {
+		/* initialize fields */
+		els->els_retries_remaining = EFC_FC_ELS_DEFAULT_RETRIES;
+
+		/* add els structure to ELS IO list */
+		INIT_LIST_HEAD(&els->list_entry);
+		list_add_tail(&els->list_entry, &node->els_ios_list);
+	}
+
+	spin_unlock_irqrestore(&node->els_ios_lock, flags);
+	return els;
+}
+
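`efc_els_io_alloc_size()` allocates two coherent DMA buffers (request and response) and must unwind the first if the second fails, so the caller never sees a half-built IO. A hedged userspace sketch of that allocate-both-or-neither pattern, with plain `malloc()` standing in for `dma_alloc_coherent()` and all names illustrative:

```c
#include <assert.h>
#include <stdlib.h>

struct buf_pair {
	void *req;
	void *rsp;
};

/* Allocate both buffers or neither: on a second-allocation failure,
 * unwind the first so the caller never sees a partial object.
 */
static int buf_pair_alloc(struct buf_pair *p, size_t reqlen, size_t rsplen)
{
	p->req = malloc(reqlen);
	if (!p->req)
		return -1;

	p->rsp = malloc(rsplen);
	if (!p->rsp) {
		free(p->req);		/* unwind the earlier allocation */
		p->req = NULL;
		return -1;
	}
	return 0;
}

static void buf_pair_free(struct buf_pair *p)
{
	free(p->rsp);
	free(p->req);
	p->req = NULL;
	p->rsp = NULL;
}
```

In the driver the same unwind additionally returns the `els` structure to its mempool and drops the node lock before returning NULL.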
+void
+efc_els_io_free(struct efc_els_io_req *els)
+{
+	kref_put(&els->ref, els->release);
+}
+
+void
+_efc_els_io_free(struct kref *arg)
+{
+	struct efc_els_io_req *els =
+				container_of(arg, struct efc_els_io_req, ref);
+	struct efc *efc;
+	struct efc_node *node;
+	int send_empty_event = false;
+	unsigned long flags = 0;
+
+	node = els->node;
+	efc = node->efc;
+
+	spin_lock_irqsave(&node->els_ios_lock, flags);
+
+	list_del(&els->list_entry);
+	/* Send the list-empty event only if the IO allocator
+	 * is disabled and the list is empty.
+	 * If node->els_io_enabled were not checked,
+	 * the event would be posted continually.
+	 */
+	send_empty_event = (!READ_ONCE(node->els_io_enabled)) &&
+			   list_empty(&node->els_ios_list);
+
+	spin_unlock_irqrestore(&node->els_ios_lock, flags);
+
+	/* free ELS request and response buffers */
+	dma_free_coherent(&efc->pci->dev, els->io.rsp.size,
+			  els->io.rsp.virt, els->io.rsp.phys);
+	dma_free_coherent(&efc->pci->dev, els->io.req.size,
+			  els->io.req.virt, els->io.req.phys);
+
+	mempool_free(els, efc->els_io_pool);
+
+	if (send_empty_event)
+		efc_scsi_io_list_empty(node->efc, node);
+}
+
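`_efc_els_io_free()` is a kref release callback: each `efc_els_io_free()` is a `kref_put()`, and the final put drops the count to zero and invokes the registered release function. A minimal userspace sketch of this refcount-with-release-callback idiom (a toy counter, not the kernel kref API; not thread-safe):

```c
#include <assert.h>

/* Toy refcount mirroring how els->ref / els->release are used:
 * the final put runs the release function exactly once.
 */
struct ref {
	int count;
	void (*release)(struct ref *r);
};

static void ref_init(struct ref *r, void (*release)(struct ref *r))
{
	r->count = 1;		/* like kref_init(): starts at 1 */
	r->release = release;
}

static void ref_get(struct ref *r)
{
	r->count++;
}

static void ref_put(struct ref *r)
{
	if (--r->count == 0)
		r->release(r);	/* last reference: run the destructor */
}

static int released;

static void on_release(struct ref *r)
{
	(void)r;
	released = 1;
}
```

The real kref uses atomic operations so gets and puts from multiple contexts are safe; this sketch only shows the ownership shape.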
+static void
+efc_els_retry(struct efc_els_io_req *els);
+
+static void
+efc_els_delay_timer_cb(struct timer_list *t)
+{
+	struct efc_els_io_req *els = from_timer(els, t, delay_timer);
+
+	/* Retry delay timer expired, retry the ELS request */
+	efc_els_retry(els);
+}
+
+static int
+efc_els_req_cb(void *arg, u32 length, int status, u32 ext_status)
+{
+	struct efc_els_io_req *els;
+	struct efc_node *node;
+	struct efc *efc;
+	struct efc_node_cb cbdata;
+	u32 reason_code;
+
+	els = arg;
+	node = els->node;
+	efc = node->efc;
+
+	if (status)
+		els_io_printf(els, "status x%x ext x%x\n", status, ext_status);
+
+	/* set the response len element of els->rsp */
+	els->io.rsp.len = length;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->io.rsp;
+
+	/* mirror the response length in the callback data */
+	cbdata.rsp_len = length;
+
+	/* FW returns the number of bytes received on the link in
+	 * the WCQE, not the amount placed in the buffer; use this info to
+	 * check if there was an overrun.
+	 */
+	if (length > els->io.rsp.size) {
+		efc_log_warn(efc,
+			     "ELS response returned len=%d > buflen=%zu\n",
+			     length, els->io.rsp.size);
+		efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata);
+		return EFC_SUCCESS;
+	}
+
+	/* Post event to ELS IO object */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_OK, &cbdata);
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LS_RJT:
+		reason_code = (ext_status >> 16) & 0xff;
+
+		/* delay and retry if reason code is Logical Busy */
+		switch (reason_code) {
+		case ELS_RJT_BUSY:
+			els->node->els_req_cnt--;
+			els_io_printf(els,
+				"LS_RJT Logical Busy, delay and retry\n");
+			timer_setup(&els->delay_timer,
+				    efc_els_delay_timer_cb, 0);
+			mod_timer(&els->delay_timer,
+				  jiffies + msecs_to_jiffies(5000));
+			break;
+		default:
+			efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_RJT,
+					    &cbdata);
+			break;
+		}
+		break;
+
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+		switch (ext_status) {
+		case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT:
+			efc_els_retry(els);
+			break;
+		default:
+			efc_log_err(efc, "LOCAL_REJECT with ext status:%x\n",
+				    ext_status);
+			efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL,
+					    &cbdata);
+			break;
+		}
+		break;
+	default:	/* Other error */
+		efc_log_warn(efc, "els req failed status x%x, ext_status x%x\n",
+			     status, ext_status);
+		efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata);
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
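For an LS_RJT completion, the callback above pulls the FC reason code out of bits 23:16 of the WCQE extended status with `(ext_status >> 16) & 0xff` and retries on Logical Busy. A small standalone sketch of that bit-field extraction; the `0x05` Logical Busy value mirrors `ELS_RJT_BUSY` but is hard-coded here for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define RJT_BUSY_EXAMPLE 0x05	/* illustrative: FC "logical busy" code */

/* Extract the LS_RJT reason code carried in bits 23:16 of the
 * extended status word, as efc_els_req_cb() does.
 */
static uint8_t rjt_reason(uint32_t ext_status)
{
	return (ext_status >> 16) & 0xff;
}

static int is_logical_busy(uint32_t ext_status)
{
	return rjt_reason(ext_status) == RJT_BUSY_EXAMPLE;
}
```

A busy reject is the one rejection worth waiting out, which is why the driver arms a 5-second delay timer instead of failing the IO.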
+void efc_disc_io_complete(struct efc_disc_io *io, u32 len, u32 status,
+			  u32 ext_status)
+{
+	struct efc_els_io_req *els =
+				container_of(io, struct efc_els_io_req, io);
+
+	WARN_ON_ONCE(!els->cb);
+
+	((efc_hw_srrs_cb_t) els->cb) (els, len, status, ext_status);
+}
+
+static int efc_els_send_req(struct efc_node *node, struct efc_els_io_req *els,
+			     enum efc_disc_io_type io_type)
+{
+	int rc = EFC_SUCCESS;
+	struct efc *efc = node->efc;
+	struct efc_node_cb cbdata;
+
+	/* update ELS request counter */
+	els->node->els_req_cnt++;
+
+	/* Prepare the IO request details */
+	els->io.io_type = io_type;
+	els->io.xmit_len = els->io.req.size;
+	els->io.rsp_len = els->io.rsp.size;
+	els->io.rpi = node->rnode.indicator;
+	els->io.vpi = node->nport->indicator;
+	els->io.s_id = node->nport->fc_id;
+	els->io.d_id = node->rnode.fc_id;
+
+	if (node->rnode.attached)
+		els->io.rpi_registered = true;
+
+	els->cb = efc_els_req_cb;
+
+	rc = efc->tt.send_els(efc, &els->io);
+	if (!rc)
+		return rc;
+
+	cbdata.status = EFC_STATUS_INVALID;
+	cbdata.ext_status = EFC_STATUS_INVALID;
+	cbdata.els_rsp = els->io.rsp;
+	efc_log_err(efc, "efc_els_send failed: %d\n", rc);
+	efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata);
+
+	return rc;
+}
+
+static void
+efc_els_retry(struct efc_els_io_req *els)
+{
+	struct efc *efc;
+	struct efc_node_cb cbdata;
+	u32 rc;
+
+	efc = els->node->efc;
+	cbdata.status = EFC_STATUS_INVALID;
+	cbdata.ext_status = EFC_STATUS_INVALID;
+	cbdata.els_rsp = els->io.rsp;
+
+	if (els->els_retries_remaining) {
+		els->els_retries_remaining--;
+		rc = efc->tt.send_els(efc, &els->io);
+	} else {
+		rc = EFC_FAIL;
+	}
+
+	if (rc) {
+		efc_log_err(efc, "ELS retries exhausted\n");
+		efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata);
+	}
+}
+
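`efc_els_retry()` resends while `els_retries_remaining` is non-zero and fails the IO once the budget is exhausted. A sketch of that bounded-retry shape with an injectable send function; all names are illustrative and the driver's version is event-driven rather than a loop:

```c
#include <assert.h>

/* Bounded retry: attempt the send while retries remain; report
 * failure once the budget (EFC_FC_ELS_DEFAULT_RETRIES-style) is
 * used up.
 */
static int send_with_retries(int (*send)(void *arg), void *arg,
			     unsigned int retries)
{
	for (;;) {
		if (send(arg) == 0)
			return 0;	/* sent successfully */
		if (retries == 0)
			return -1;	/* retries exhausted */
		retries--;
	}
}

static int fail_twice(void *arg)
{
	int *calls = arg;

	return ++(*calls) <= 2 ? -1 : 0;	/* fail first two attempts */
}
```

In the driver, exhaustion additionally posts `EFC_EVT_SRRS_ELS_REQ_FAIL` through `efc_els_io_cleanup()` so the node state machine sees the failure.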
+static int
+efc_els_acc_cb(void *arg, u32 length, int status, u32 ext_status)
+{
+	struct efc_els_io_req *els;
+	struct efc_node *node;
+	struct efc *efc;
+	struct efc_node_cb cbdata;
+
+	els = arg;
+	node = els->node;
+	efc = node->efc;
+
+	cbdata.status = status;
+	cbdata.ext_status = ext_status;
+	cbdata.header = NULL;
+	cbdata.els_rsp = els->io.rsp;
+
+	/* Post node event */
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_CMPL_OK, &cbdata);
+		break;
+
+	default:	/* Other error */
+		efc_log_warn(efc, "[%s] %-8s failed status x%x, ext x%x\n",
+			     node->display_name, els->display_name,
+			     status, ext_status);
+		efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_CMPL_FAIL, &cbdata);
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_els_send_rsp(struct efc_els_io_req *els, u32 rsplen)
+{
+	int rc = EFC_SUCCESS;
+	struct efc_node_cb cbdata;
+	struct efc_node *node = els->node;
+	struct efc *efc = node->efc;
+
+	/* increment ELS completion counter */
+	node->els_cmpl_cnt++;
+
+	els->io.io_type = EFC_DISC_IO_ELS_RESP;
+	els->cb = efc_els_acc_cb;
+
+	/* Prepare the IO request details */
+	els->io.xmit_len = rsplen;
+	els->io.rsp_len = els->io.rsp.size;
+	els->io.rpi = node->rnode.indicator;
+	els->io.vpi = node->nport->indicator;
+	if (node->nport->fc_id != U32_MAX)
+		els->io.s_id = node->nport->fc_id;
+	else
+		els->io.s_id = els->io.iparam.els.s_id;
+	els->io.d_id = node->rnode.fc_id;
+
+	if (node->attached)
+		els->io.rpi_registered = true;
+
+	rc = efc->tt.send_els(efc, &els->io);
+	if (!rc)
+		return rc;
+
+	cbdata.status = EFC_STATUS_INVALID;
+	cbdata.ext_status = EFC_STATUS_INVALID;
+	cbdata.els_rsp = els->io.rsp;
+	efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_CMPL_FAIL, &cbdata);
+
+	return rc;
+}
+
+int
+efc_send_plogi(struct efc_node *node)
+{
+	struct efc_els_io_req *els;
+	struct efc *efc = node->efc;
+	struct fc_els_flogi  *plogi;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*plogi));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+	els->display_name = "plogi";
+
+	/* Build PLOGI request */
+	plogi = els->io.req.virt;
+
+	memcpy(plogi, node->nport->service_params, sizeof(*plogi));
+
+	plogi->fl_cmd = ELS_PLOGI;
+	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
+int
+efc_send_flogi(struct efc_node *node)
+{
+	struct efc_els_io_req *els;
+	struct efc *efc;
+	struct fc_els_flogi  *flogi;
+
+	efc = node->efc;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*flogi));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "flogi";
+
+	/* Build FLOGI request */
+	flogi = els->io.req.virt;
+
+	memcpy(flogi, node->nport->service_params, sizeof(*flogi));
+	flogi->fl_cmd = ELS_FLOGI;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
+int
+efc_send_fdisc(struct efc_node *node)
+{
+	struct efc_els_io_req *els;
+	struct efc *efc;
+	struct fc_els_flogi *fdisc;
+
+	efc = node->efc;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*fdisc));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "fdisc";
+
+	/* Build FDISC request */
+	fdisc = els->io.req.virt;
+
+	memcpy(fdisc, node->nport->service_params, sizeof(*fdisc));
+	fdisc->fl_cmd = ELS_FDISC;
+	memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd));
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
+int
+efc_send_prli(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*pp));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "prli";
+
+	/* Build PRLI request */
+	pp = els->io.req.virt;
+
+	memset(pp, 0, sizeof(*pp));
+
+	pp->prli.prli_cmd = ELS_PRLI;
+	pp->prli.prli_spp_len = 16;
+	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR;
+	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+			       (node->nport->enable_ini ?
+			       FCP_SPPF_INIT_FCN : 0) |
+			       (node->nport->enable_tgt ?
+			       FCP_SPPF_TARG_FCN : 0));
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
+int
+efc_send_logo(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els;
+	struct fc_els_logo *logo;
+	struct fc_els_flogi  *sparams;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->nport->service_params;
+
+	els = efc_els_io_alloc(node, sizeof(*logo));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "logo";
+
+	/* Build LOGO request */
+
+	logo = els->io.req.virt;
+
+	memset(logo, 0, sizeof(*logo));
+	logo->fl_cmd = ELS_LOGO;
+	hton24(logo->fl_n_port_id, node->rnode.nport->fc_id);
+	logo->fl_n_port_wwn = sparams->fl_wwpn;
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
+int
+efc_send_adisc(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+	struct efc_nport *nport = node->nport;
+
+	node_els_trace();
+
+	sparams = (struct fc_els_flogi *)node->nport->service_params;
+
+	els = efc_els_io_alloc(node, sizeof(*adisc));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "adisc";
+
+	/* Build ADISC request */
+
+	adisc = els->io.req.virt;
+
+	memset(adisc, 0, sizeof(*adisc));
+	adisc->adisc_cmd = ELS_ADISC;
+	hton24(adisc->adisc_hard_addr, nport->fc_id);
+	adisc->adisc_wwpn = sparams->fl_wwpn;
+	adisc->adisc_wwnn = sparams->fl_wwnn;
+	hton24(adisc->adisc_port_id, node->rnode.nport->fc_id);
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
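Several builders above store 24-bit FC port IDs big-endian with `hton24()` (e.g. `fl_n_port_id`, `adisc_port_id`). A userspace sketch of packing and unpacking such a 3-byte network-order field; `put_hton24`/`get_ntoh24` are local re-implementations for illustration, not the libfc helpers themselves:

```c
#include <assert.h>
#include <stdint.h>

/* Store a 24-bit FC address big-endian into a 3-byte field, and
 * read it back; mirrors how hton24()/ntoh24() are used on fields
 * such as fl_n_port_id.
 */
static void put_hton24(uint8_t p[3], uint32_t v)
{
	p[0] = (v >> 16) & 0xff;	/* most significant byte first */
	p[1] = (v >> 8) & 0xff;
	p[2] = v & 0xff;
}

static uint32_t get_ntoh24(const uint8_t p[3])
{
	return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | p[2];
}
```

Keeping the address in a byte array sidesteps host-endianness entirely: the bytes land on the wire in the order FC expects regardless of CPU.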
+int
+efc_send_scr(struct efc_node *node)
+{
+	struct efc_els_io_req *els;
+	struct efc *efc = node->efc;
+	struct fc_els_scr *req;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*req));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "scr";
+
+	req = els->io.req.virt;
+
+	memset(req, 0, sizeof(*req));
+	req->scr_cmd = ELS_SCR;
+	req->scr_reg_func = ELS_SCRF_FULL;
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ);
+}
+
+int
+efc_send_ls_rjt(struct efc_node *node, u32 ox_id, u32 reason_code,
+		u32 reason_code_expl, u32 vendor_unique)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct fc_els_ls_rjt *rjt;
+
+	els = efc_els_io_alloc(node, sizeof(*rjt));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	node_els_trace();
+
+	els->display_name = "ls_rjt";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	rjt = els->io.req.virt;
+	memset(rjt, 0, sizeof(*rjt));
+
+	rjt->er_cmd = ELS_LS_RJT;
+	rjt->er_reason = reason_code;
+	rjt->er_explan = reason_code_expl;
+
+	return efc_els_send_rsp(els, sizeof(*rjt));
+}
+
+int
+efc_send_plogi_acc(struct efc_node *node, u32 ox_id)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct fc_els_flogi  *plogi;
+	struct fc_els_flogi  *req = (struct fc_els_flogi *)node->service_params;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*plogi));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "plogi_acc";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	plogi = els->io.req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(plogi, node->nport->service_params, sizeof(*plogi));
+	plogi->fl_cmd = ELS_LS_ACC;
+	memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd));
+
+	/* Set Application header support bit if requested */
+	if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST))
+		plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST);
+
+	return efc_els_send_rsp(els, sizeof(*plogi));
+}
+
+int
+efc_send_flogi_p2p_acc(struct efc_node *node, u32 ox_id, u32 s_id)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct fc_els_flogi  *flogi;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*flogi));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "flogi_p2p_acc";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+	els->io.iparam.els.s_id = s_id;
+
+	flogi = els->io.req.virt;
+
+	/* copy our port's service parameters to payload */
+	memcpy(flogi, node->nport->service_params, sizeof(*flogi));
+	flogi->fl_cmd = ELS_LS_ACC;
+	memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd));
+
+	memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp));
+
+	return efc_els_send_rsp(els, sizeof(*flogi));
+}
+
+int
+efc_send_prli_acc(struct efc_node *node, u32 ox_id)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct {
+		struct fc_els_prli prli;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*pp));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "prli_acc";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	pp = els->io.req.virt;
+	memset(pp, 0, sizeof(*pp));
+
+	pp->prli.prli_cmd = ELS_LS_ACC;
+	pp->prli.prli_spp_len = 0x10;
+	pp->prli.prli_len = cpu_to_be16(sizeof(*pp));
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK;
+
+	pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS |
+					(node->nport->enable_ini ?
+					 FCP_SPPF_INIT_FCN : 0) |
+					(node->nport->enable_tgt ?
+					 FCP_SPPF_TARG_FCN : 0));
+
+	return efc_els_send_rsp(els, sizeof(*pp));
+}
+
+int
+efc_send_prlo_acc(struct efc_node *node, u32 ox_id)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct {
+		struct fc_els_prlo prlo;
+		struct fc_els_spp spp;
+	} *pp;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*pp));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "prlo_acc";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	pp = els->io.req.virt;
+	memset(pp, 0, sizeof(*pp));
+	pp->prlo.prlo_cmd = ELS_LS_ACC;
+	pp->prlo.prlo_obs = 0x10;
+	pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp));
+
+	pp->spp.spp_type = FC_TYPE_FCP;
+	pp->spp.spp_type_ext = 0;
+	pp->spp.spp_flags = FC_SPP_RESP_ACK;
+
+	return efc_els_send_rsp(els, sizeof(*pp));
+}
+
+int
+efc_send_ls_acc(struct efc_node *node, u32 ox_id)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct fc_els_ls_acc *acc;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*acc));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "ls_acc";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	acc = els->io.req.virt;
+	memset(acc, 0, sizeof(*acc));
+
+	acc->la_cmd = ELS_LS_ACC;
+
+	return efc_els_send_rsp(els, sizeof(*acc));
+}
+
+int
+efc_send_logo_acc(struct efc_node *node, u32 ox_id)
+{
+	struct efc_els_io_req *els = NULL;
+	struct efc *efc = node->efc;
+	struct fc_els_ls_acc *logo;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*logo));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "logo_acc";
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	logo = els->io.req.virt;
+	memset(logo, 0, sizeof(*logo));
+
+	logo->la_cmd = ELS_LS_ACC;
+
+	return efc_els_send_rsp(els, sizeof(*logo));
+}
+
+int
+efc_send_adisc_acc(struct efc_node *node, u32 ox_id)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els = NULL;
+	struct fc_els_adisc *adisc;
+	struct fc_els_flogi  *sparams;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*adisc));
+	if (!els) {
+		efc_log_err(efc, "els IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->display_name = "adisc_acc";
+
+	/* Go ahead and send the ELS_ACC */
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+	els->io.iparam.els.ox_id = ox_id;
+
+	sparams = (struct fc_els_flogi  *)node->nport->service_params;
+	adisc = els->io.req.virt;
+	memset(adisc, 0, sizeof(*adisc));
+	adisc->adisc_cmd = ELS_LS_ACC;
+	adisc->adisc_wwpn = sparams->fl_wwpn;
+	adisc->adisc_wwnn = sparams->fl_wwnn;
+	hton24(adisc->adisc_port_id, node->rnode.nport->fc_id);
+
+	return efc_els_send_rsp(els, sizeof(*adisc));
+}
+
+static inline void
+fcct_build_req_header(struct fc_ct_hdr  *hdr, u16 cmd, u16 max_size)
+{
+	hdr->ct_rev = FC_CT_REV;
+	hdr->ct_fs_type = FC_FST_DIR;
+	hdr->ct_fs_subtype = FC_NS_SUBTYPE;
+	hdr->ct_options = 0;
+	hdr->ct_cmd = cpu_to_be16(cmd);
+	/* words */
+	hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32)));
+	hdr->ct_reason = 0;
+	hdr->ct_explan = 0;
+	hdr->ct_vendor = 0;
+}
+
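`fcct_build_req_header()` encodes the maximum/residual size field in 4-byte words rather than bytes (`max_size / sizeof(u32)`). A tiny sketch of that bytes-to-words conversion, under the assumption the byte count is word-aligned as in the callers above:

```c
#include <assert.h>
#include <stdint.h>

/* CT headers express ct_mr_size in 32-bit words, so a byte count
 * is divided by sizeof(u32) before being stored, as
 * fcct_build_req_header() does.
 */
static uint16_t ct_size_words(uint32_t bytes)
{
	return bytes / sizeof(uint32_t);
}
```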
+int
+efc_ns_send_rftid(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els;
+	struct {
+		struct fc_ct_hdr hdr;
+		struct fc_ns_rft_id rftid;
+	} *ct;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*ct));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->io.iparam.ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->io.iparam.ct.type = FC_TYPE_CT;
+	els->io.iparam.ct.df_ctl = 0;
+	els->io.iparam.ct.timeout = EFC_FC_ELS_SEND_DEFAULT_TIMEOUT;
+
+	els->display_name = "rftid";
+
+	ct = els->io.req.virt;
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(&ct->hdr, FC_NS_RFT_ID,
+				sizeof(struct fc_ns_rft_id));
+
+	hton24(ct->rftid.fr_fid.fp_fid, node->rnode.nport->fc_id);
+	ct->rftid.fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] =
+		cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW));
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_CT_REQ);
+}
+
+int
+efc_ns_send_rffid(struct efc_node *node)
+{
+	struct efc *efc = node->efc;
+	struct efc_els_io_req *els;
+	struct {
+		struct fc_ct_hdr hdr;
+		struct fc_ns_rff_id rffid;
+	} *ct;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc(node, sizeof(*ct));
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->io.iparam.ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->io.iparam.ct.type = FC_TYPE_CT;
+	els->io.iparam.ct.df_ctl = 0;
+	els->io.iparam.ct.timeout = EFC_FC_ELS_SEND_DEFAULT_TIMEOUT;
+
+	els->display_name = "rffid";
+	ct = els->io.req.virt;
+
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(&ct->hdr, FC_NS_RFF_ID,
+			      sizeof(struct fc_ns_rff_id));
+
+	hton24(ct->rffid.fr_fid.fp_fid, node->rnode.nport->fc_id);
+	if (node->nport->enable_ini)
+		ct->rffid.fr_feat |= FCP_FEAT_INIT;
+	if (node->nport->enable_tgt)
+		ct->rffid.fr_feat |= FCP_FEAT_TARG;
+	ct->rffid.fr_type = FC_TYPE_FCP;
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_CT_REQ);
+}
+
+int
+efc_ns_send_gidpt(struct efc_node *node)
+{
+	struct efc_els_io_req *els = NULL;
+	struct efc *efc = node->efc;
+	struct {
+		struct fc_ct_hdr hdr;
+		struct fc_ns_gid_pt gidpt;
+	} *ct;
+
+	node_els_trace();
+
+	els = efc_els_io_alloc_size(node, sizeof(*ct), EFC_ELS_GID_PT_RSP_LEN);
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	els->io.iparam.ct.r_ctl = FC_RCTL_ELS_REQ;
+	els->io.iparam.ct.type = FC_TYPE_CT;
+	els->io.iparam.ct.df_ctl = 0;
+	els->io.iparam.ct.timeout = EFC_FC_ELS_SEND_DEFAULT_TIMEOUT;
+
+	els->display_name = "gidpt";
+
+	ct = els->io.req.virt;
+
+	memset(ct, 0, sizeof(*ct));
+	fcct_build_req_header(&ct->hdr, FC_NS_GID_PT,
+			      sizeof(struct fc_ns_gid_pt));
+
+	ct->gidpt.fn_pt_type = FC_TYPE_FCP;
+
+	return efc_els_send_req(node, els, EFC_DISC_IO_CT_REQ);
+}
+
+void
+efc_els_io_cleanup(struct efc_els_io_req *els, int evt, void *arg)
+{
+	/* don't want further events that could come; e.g. abort requests
+	 * from the node state machine; thus, disable state machine
+	 */
+	els->els_req_free = true;
+	efc_node_post_els_resp(els->node, evt, arg);
+
+	efc_els_io_free(els);
+}
+
+static int
+efc_ct_acc_cb(void *arg, u32 length, int status, u32 ext_status)
+{
+	struct efc_els_io_req *els = arg;
+
+	efc_els_io_free(els);
+
+	return EFC_SUCCESS;
+}
+
+int
+efc_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		 struct fc_ct_hdr *ct_hdr, u32 cmd_rsp_code,
+		 u32 reason_code, u32 reason_code_explanation)
+{
+	struct efc_els_io_req *els = NULL;
+	struct fc_ct_hdr  *rsp = NULL;
+
+	els = efc_els_io_alloc(node, 256);
+	if (!els) {
+		efc_log_err(efc, "IO alloc failed\n");
+		return EFC_FAIL;
+	}
+
+	rsp = els->io.rsp.virt;
+
+	*rsp = *ct_hdr;
+
+	fcct_build_req_header(rsp, cmd_rsp_code, 0);
+	rsp->ct_reason = reason_code;
+	rsp->ct_explan = reason_code_explanation;
+
+	els->display_name = "ct_rsp";
+	els->cb = efc_ct_acc_cb;
+
+	/* Prepare the IO request details */
+	els->io.io_type = EFC_DISC_IO_CT_RESP;
+	els->io.xmit_len = sizeof(*rsp);
+
+	els->io.rpi = node->rnode.indicator;
+	els->io.d_id = node->rnode.fc_id;
+
+	memset(&els->io.iparam, 0, sizeof(els->io.iparam));
+
+	els->io.iparam.ct.ox_id = ox_id;
+	els->io.iparam.ct.r_ctl = 3;
+	els->io.iparam.ct.type = FC_TYPE_CT;
+	els->io.iparam.ct.df_ctl = 0;
+	els->io.iparam.ct.timeout = 5;
+
+	if (efc->tt.send_els(efc, &els->io)) {
+		efc_els_io_free(els);
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+int
+efc_send_bls_acc(struct efc_node *node, struct fc_frame_header *hdr)
+{
+	struct sli_bls_params bls;
+	struct fc_ba_acc *acc;
+	struct efc *efc = node->efc;
+
+	memset(&bls, 0, sizeof(bls));
+	bls.ox_id = be16_to_cpu(hdr->fh_ox_id);
+	bls.rx_id = be16_to_cpu(hdr->fh_rx_id);
+	bls.s_id = ntoh24(hdr->fh_d_id);
+	bls.d_id = node->rnode.fc_id;
+	bls.rpi = node->rnode.indicator;
+	bls.vpi = node->nport->indicator;
+
+	acc = (void *)bls.payload;
+	acc->ba_ox_id = cpu_to_be16(bls.ox_id);
+	acc->ba_rx_id = cpu_to_be16(bls.rx_id);
+	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	return efc->tt.send_bls(efc, FC_RCTL_BA_ACC, &bls);
+}
diff --git a/drivers/scsi/elx/libefc/efc_els.h b/drivers/scsi/elx/libefc/efc_els.h
new file mode 100644
index 000000000000..29926e690ec4
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_els.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_ELS_H__
+#define __EFC_ELS_H__
+
+#define EFC_STATUS_INVALID	INT_MAX
+#define EFC_ELS_IO_POOL_SZ	1024
+
+struct efc_els_io_req {
+	struct list_head	list_entry;
+	struct kref		ref;
+	void			(*release)(struct kref *arg);
+	struct efc_node		*node;
+	void			*cb;
+	u32			els_retries_remaining;
+	bool			els_req_free;
+	struct timer_list       delay_timer;
+
+	const char		*display_name;
+
+	struct efc_disc_io	io;
+};
+
+typedef int(*efc_hw_srrs_cb_t)(void *arg, u32 length, int status,
+			       u32 ext_status);
+
+void _efc_els_io_free(struct kref *arg);
+struct efc_els_io_req *
+efc_els_io_alloc(struct efc_node *node, u32 reqlen);
+struct efc_els_io_req *
+efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen);
+void efc_els_io_free(struct efc_els_io_req *els);
+
+/* ELS command send */
+typedef void (*els_cb_t)(struct efc_node *node,
+			 struct efc_node_cb *cbdata, void *arg);
+int
+efc_send_plogi(struct efc_node *node);
+int
+efc_send_flogi(struct efc_node *node);
+int
+efc_send_fdisc(struct efc_node *node);
+int
+efc_send_prli(struct efc_node *node);
+int
+efc_send_prlo(struct efc_node *node);
+int
+efc_send_logo(struct efc_node *node);
+int
+efc_send_adisc(struct efc_node *node);
+int
+efc_send_pdisc(struct efc_node *node);
+int
+efc_send_scr(struct efc_node *node);
+int
+efc_ns_send_rftid(struct efc_node *node);
+int
+efc_ns_send_rffid(struct efc_node *node);
+int
+efc_ns_send_gidpt(struct efc_node *node);
+void
+efc_els_io_cleanup(struct efc_els_io_req *els, int evt, void *arg);
+
+/* ELS acc send */
+int
+efc_send_ls_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_ls_rjt(struct efc_node *node, u32 ox_id, u32 reason_code,
+		u32 reason_code_expl, u32 vendor_unique);
+int
+efc_send_flogi_p2p_acc(struct efc_node *node, u32 ox_id, u32 s_id);
+int
+efc_send_flogi_acc(struct efc_node *node, u32 ox_id, u32 is_fport);
+int
+efc_send_plogi_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_prli_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_logo_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_prlo_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_adisc_acc(struct efc_node *node, u32 ox_id);
+
+int
+efc_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
+		     struct fc_frame_header *hdr);
+int
+efc_bls_send_rjt_hdr(struct efc_els_io_req *io, struct fc_frame_header *hdr);
+
+int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+/* CT */
+int
+efc_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		 struct fc_ct_hdr *ct_hdr, u32 cmd_rsp_code, u32 reason_code,
+		 u32 reason_code_explanation);
+
+int
+efc_send_bls_acc(struct efc_node *node, struct fc_frame_header *hdr);
+
+#endif /* __EFC_ELS_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 16/31] elx: libefc: Register discovery objects with hardware
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (14 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 15/31] elx: libefc: Extended link Service IO handling James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 17/31] elx: efct: Data structures and defines for hw operations James Smart
                   ` (14 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Registrations for VFI, VPI and RPI.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/libefc/efc_cmds.c | 855 +++++++++++++++++++++++++++++
 drivers/scsi/elx/libefc/efc_cmds.h |  35 ++
 2 files changed, 890 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc/efc_cmds.c
 create mode 100644 drivers/scsi/elx/libefc/efc_cmds.h

diff --git a/drivers/scsi/elx/libefc/efc_cmds.c b/drivers/scsi/elx/libefc/efc_cmds.c
new file mode 100644
index 000000000000..d2850d231a7a
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_cmds.c
@@ -0,0 +1,855 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efclib.h"
+#include "../libefc_sli/sli4.h"
+#include "efc_cmds.h"
+#include "efc_sm.h"
+
+static void
+efc_nport_free_resources(struct efc_nport *nport, int evt, void *data)
+{
+	struct efc *efc = nport->efc;
+
+	/* Clear the nport attached flag */
+	nport->attached = false;
+
+	/* Free the service parameters buffer */
+	if (nport->dma.virt) {
+		dma_free_coherent(&efc->pci->dev, nport->dma.size,
+				  nport->dma.virt, nport->dma.phys);
+		memset(&nport->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the SLI resources */
+	sli_resource_free(efc->sli, SLI4_RSRC_VPI, nport->indicator);
+
+	efc_nport_cb(efc, evt, nport);
+}
+
+static int
+efc_nport_get_mbox_status(struct efc_nport *nport, u8 *mqe, int status)
+{
+	struct efc *efc = nport->efc;
+	struct sli4_mbox_command_header *hdr =
+			(struct sli4_mbox_command_header *)mqe;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(efc, "bad status vpi=%#x st=%x hdr=%x\n",
+			nport->indicator, status, le16_to_cpu(hdr->status));
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_nport_free_unreg_vpi_cb(struct efc *efc, int status, u8 *mqe, void *arg)
+{
+	struct efc_nport *nport = arg;
+	int evt = EFC_EVT_NPORT_FREE_OK;
+	int rc;
+
+	rc = efc_nport_get_mbox_status(nport, mqe, status);
+	if (rc)
+		evt = EFC_EVT_NPORT_FREE_FAIL;
+
+	efc_nport_free_resources(nport, evt, mqe);
+	return rc;
+}
+
+static void
+efc_nport_free_unreg_vpi(struct efc_nport *nport)
+{
+	struct efc *efc = nport->efc;
+	int rc;
+	u8 data[SLI4_BMBX_SIZE];
+
+	rc = sli_cmd_unreg_vpi(efc->sli, data, nport->indicator,
+			       SLI4_UNREG_TYPE_PORT);
+	if (rc) {
+		efc_log_err(efc, "UNREG_VPI format failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_FREE_FAIL, data);
+		return;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+				     efc_nport_free_unreg_vpi_cb, nport);
+	if (rc) {
+		efc_log_err(efc, "UNREG_VPI command failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_FREE_FAIL, data);
+	}
+}
+
+static void
+efc_nport_send_evt(struct efc_nport *nport, int evt, void *data)
+{
+	struct efc *efc = nport->efc;
+
+	/* Now inform the registered callbacks */
+	efc_nport_cb(efc, evt, nport);
+
+	/* Set the nport attached flag */
+	if (evt == EFC_EVT_NPORT_ATTACH_OK)
+		nport->attached = true;
+
+	/* If there is a pending free request, then handle it now */
+	if (nport->free_req_pending)
+		efc_nport_free_unreg_vpi(nport);
+}
+
+static int
+efc_nport_alloc_init_vpi_cb(struct efc *efc, int status, u8 *mqe, void *arg)
+{
+	struct efc_nport *nport = arg;
+
+	if (efc_nport_get_mbox_status(nport, mqe, status)) {
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_nport_send_evt(nport, EFC_EVT_NPORT_ALLOC_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efc_nport_alloc_init_vpi(struct efc_nport *nport)
+{
+	struct efc *efc = nport->efc;
+	u8 data[SLI4_BMBX_SIZE];
+	int rc;
+
+	/* If there is a pending free request, then handle it now */
+	if (nport->free_req_pending) {
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_FREE_OK, data);
+		return;
+	}
+
+	rc = sli_cmd_init_vpi(efc->sli, data,
+			      nport->indicator, nport->domain->indicator);
+	if (rc) {
+		efc_log_err(efc, "INIT_VPI format failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+			efc_nport_alloc_init_vpi_cb, nport);
+	if (rc) {
+		efc_log_err(efc, "INIT_VPI command failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efc_nport_alloc_read_sparm64_cb(struct efc *efc, int status, u8 *mqe, void *arg)
+{
+	struct efc_nport *nport = arg;
+	u8 *payload = NULL;
+
+	if (efc_nport_get_mbox_status(nport, mqe, status)) {
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	payload = nport->dma.virt;
+
+	memcpy(&nport->sli_wwpn, payload + SLI4_READ_SPARM64_WWPN_OFFSET,
+		sizeof(nport->sli_wwpn));
+	memcpy(&nport->sli_wwnn, payload + SLI4_READ_SPARM64_WWNN_OFFSET,
+		sizeof(nport->sli_wwnn));
+
+	dma_free_coherent(&efc->pci->dev, nport->dma.size, nport->dma.virt,
+			  nport->dma.phys);
+	memset(&nport->dma, 0, sizeof(struct efc_dma));
+	efc_nport_alloc_init_vpi(nport);
+	return EFC_SUCCESS;
+}
+
+static void
+efc_nport_alloc_read_sparm64(struct efc *efc, struct efc_nport *nport)
+{
+	u8 data[SLI4_BMBX_SIZE];
+	int rc;
+
+	/* Allocate memory for the service parameters */
+	nport->dma.size = EFC_SPARAM_DMA_SZ;
+	nport->dma.virt = dma_alloc_coherent(&efc->pci->dev,
+					     nport->dma.size, &nport->dma.phys,
+					     GFP_DMA);
+	if (!nport->dma.virt) {
+		efc_log_err(efc, "Failed to allocate DMA memory\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = sli_cmd_read_sparm64(efc->sli, data,
+				  &nport->dma, nport->indicator);
+	if (rc) {
+		efc_log_err(efc, "READ_SPARM64 format failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+				     efc_nport_alloc_read_sparm64_cb, nport);
+	if (rc) {
+		efc_log_err(efc, "READ_SPARM64 command failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data);
+	}
+}
+
+int
+efc_cmd_nport_alloc(struct efc *efc, struct efc_nport *nport,
+		    struct efc_domain *domain, u8 *wwpn)
+{
+	u32 index;
+
+	nport->indicator = U32_MAX;
+	nport->free_req_pending = false;
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc, "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	if (wwpn)
+		memcpy(&nport->sli_wwpn, wwpn, sizeof(nport->sli_wwpn));
+
+	/*
+	 * Allocate a VPI object for the port and store it in the
+	 * indicator field of the port object.
+	 */
+	if (sli_resource_alloc(efc->sli, SLI4_RSRC_VPI,
+			       &nport->indicator, &index)) {
+		efc_log_err(efc, "VPI allocation failure\n");
+		return EFC_FAIL;
+	}
+
+	if (domain) {
+		/*
+		 * If the WWPN is NULL, fetch the default
+		 * WWPN and WWNN before initializing the VPI
+		 */
+		if (!wwpn)
+			efc_nport_alloc_read_sparm64(efc, nport);
+		else
+			efc_nport_alloc_init_vpi(nport);
+	} else if (!wwpn) {
+		/* domain NULL and wwpn NULL */
+		efc_log_err(efc, "need WWN for physical port\n");
+		sli_resource_free(efc->sli, SLI4_RSRC_VPI, nport->indicator);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_nport_attach_reg_vpi_cb(struct efc *efc, int status, u8 *mqe,
+			       void *arg)
+{
+	struct efc_nport *nport = arg;
+
+	if (efc_nport_get_mbox_status(nport, mqe, status)) {
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ATTACH_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_nport_send_evt(nport, EFC_EVT_NPORT_ATTACH_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+int
+efc_cmd_nport_attach(struct efc *efc, struct efc_nport *nport, u32 fc_id)
+{
+	u8 buf[SLI4_BMBX_SIZE];
+	int rc = EFC_SUCCESS;
+
+	if (!nport) {
+		efc_log_err(efc, "bad param(s) nport=%p\n", nport);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			      "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	nport->fc_id = fc_id;
+
+	/* register previously-allocated VPI with the device */
+	rc = sli_cmd_reg_vpi(efc->sli, buf, nport->fc_id,
+			    nport->sli_wwpn, nport->indicator,
+			    nport->domain->indicator, false);
+	if (rc) {
+		efc_log_err(efc, "REG_VPI format failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ATTACH_FAIL, buf);
+		return rc;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, buf,
+				     efc_nport_attach_reg_vpi_cb, nport);
+	if (rc) {
+		efc_log_err(efc, "REG_VPI command failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ATTACH_FAIL, buf);
+	}
+
+	return rc;
+}
+
+int
+efc_cmd_nport_free(struct efc *efc, struct efc_nport *nport)
+{
+	if (!nport) {
+		efc_log_err(efc, "bad parameter(s) nport=%p\n",	nport);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			      "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	/* Issue the UNREG_VPI command to free the assigned VPI context */
+	if (nport->attached)
+		efc_nport_free_unreg_vpi(nport);
+	else
+		nport->free_req_pending = true;
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_domain_get_mbox_status(struct efc_domain *domain, u8 *mqe, int status)
+{
+	struct efc *efc = domain->efc;
+	struct sli4_mbox_command_header *hdr =
+			(struct sli4_mbox_command_header *)mqe;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(efc, "bad status vfi=%#x st=%x hdr=%x\n",
+			       domain->indicator, status,
+			       le16_to_cpu(hdr->status));
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efc_domain_free_resources(struct efc_domain *domain, int evt, void *data)
+{
+	struct efc *efc = domain->efc;
+
+	/* Free the service parameters buffer */
+	if (domain->dma.virt) {
+		dma_free_coherent(&efc->pci->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the SLI resources */
+	sli_resource_free(efc->sli, SLI4_RSRC_VFI, domain->indicator);
+
+	efc_domain_cb(efc, evt, domain);
+}
+
+static void
+efc_domain_send_nport_evt(struct efc_domain *domain,
+			      int port_evt, int domain_evt, void *data)
+{
+	struct efc *efc = domain->efc;
+
+	/* Send alloc/attach ok to the physical nport */
+	efc_nport_send_evt(domain->nport, port_evt, NULL);
+
+	/* Now inform the registered callbacks */
+	efc_domain_cb(efc, domain_evt, domain);
+}
+
+static int
+efc_domain_alloc_read_sparm64_cb(struct efc *efc, int status, u8 *mqe,
+				 void *arg)
+{
+	struct efc_domain *domain = arg;
+
+	if (efc_domain_get_mbox_status(domain, mqe, status)) {
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_domain_send_nport_evt(domain, EFC_EVT_NPORT_ALLOC_OK,
+				      EFC_HW_DOMAIN_ALLOC_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efc_domain_alloc_read_sparm64(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	u8 data[SLI4_BMBX_SIZE];
+	int rc;
+
+	rc = sli_cmd_read_sparm64(efc->sli, data, &domain->dma, 0);
+	if (rc) {
+		efc_log_err(efc, "READ_SPARM64 format failure\n");
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+				     efc_domain_alloc_read_sparm64_cb, domain);
+	if (rc) {
+		efc_log_err(efc, "READ_SPARM64 command failure\n");
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efc_domain_alloc_init_vfi_cb(struct efc *efc, int status, u8 *mqe,
+				 void *arg)
+{
+	struct efc_domain *domain = arg;
+
+	if (efc_domain_get_mbox_status(domain, mqe, status)) {
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_domain_alloc_read_sparm64(domain);
+	return EFC_SUCCESS;
+}
+
+static void
+efc_domain_alloc_init_vfi(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	struct efc_nport *nport = domain->nport;
+	u8 data[SLI4_BMBX_SIZE];
+	int rc;
+
+	/*
+	 * For FC, the HW already registered an FCFI.
+	 * Copy FCF information into the domain and jump to INIT_VFI.
+	 */
+	domain->fcf_indicator = efc->fcfi;
+	rc = sli_cmd_init_vfi(efc->sli, data, domain->indicator,
+			      domain->fcf_indicator, nport->indicator);
+	if (rc) {
+		efc_log_err(efc, "INIT_VFI format failure\n");
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	efc_log_err(efc, "%s issue mbox\n", __func__);
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+				     efc_domain_alloc_init_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(efc, "INIT_VFI command failure\n");
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+int
+efc_cmd_domain_alloc(struct efc *efc, struct efc_domain *domain, u32 fcf)
+{
+	u32 index;
+
+	if (!domain || !domain->nport) {
+		efc_log_err(efc, "bad parameter(s) domain=%p nport=%p\n",
+			    domain, domain ? domain->nport : NULL);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			     "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	/* allocate memory for the service parameters */
+	domain->dma.size = EFC_SPARAM_DMA_SZ;
+	domain->dma.virt = dma_alloc_coherent(&efc->pci->dev,
+					      domain->dma.size,
+					      &domain->dma.phys, GFP_DMA);
+	if (!domain->dma.virt) {
+		efc_log_err(efc, "Failed to allocate DMA memory\n");
+		return EFC_FAIL;
+	}
+
+	domain->fcf = fcf;
+	domain->fcf_indicator = U32_MAX;
+	domain->indicator = U32_MAX;
+
+	if (sli_resource_alloc(efc->sli, SLI4_RSRC_VFI, &domain->indicator,
+			       &index)) {
+		efc_log_err(efc, "VFI allocation failure\n");
+
+		dma_free_coherent(&efc->pci->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+
+		return EFC_FAIL;
+	}
+
+	efc_domain_alloc_init_vfi(domain);
+	return EFC_SUCCESS;
+}
+
+static int
+efc_domain_attach_reg_vfi_cb(struct efc *efc, int status, u8 *mqe,
+				 void *arg)
+{
+	struct efc_domain *domain = arg;
+
+	if (efc_domain_get_mbox_status(domain, mqe, status)) {
+		efc_domain_free_resources(domain,
+					      EFC_HW_DOMAIN_ATTACH_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_domain_send_nport_evt(domain, EFC_EVT_NPORT_ATTACH_OK,
+				      EFC_HW_DOMAIN_ATTACH_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+int
+efc_cmd_domain_attach(struct efc *efc, struct efc_domain *domain, u32 fc_id)
+{
+	u8 buf[SLI4_BMBX_SIZE];
+	int rc = EFC_SUCCESS;
+
+	if (!domain) {
+		efc_log_err(efc, "bad param(s) domain=%p\n", domain);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			      "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	domain->nport->fc_id = fc_id;
+
+	rc = sli_cmd_reg_vfi(efc->sli, buf, SLI4_BMBX_SIZE, domain->indicator,
+			     domain->fcf_indicator, domain->dma,
+			     domain->nport->indicator, domain->nport->sli_wwpn,
+			     domain->nport->fc_id);
+	if (rc) {
+		efc_log_err(efc, "REG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, buf,
+				     efc_domain_attach_reg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(efc, "REG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return rc;
+
+cleanup:
+	efc_domain_free_resources(domain, EFC_HW_DOMAIN_ATTACH_FAIL, buf);
+
+	return rc;
+}
+
+static int
+efc_domain_free_unreg_vfi_cb(struct efc *efc, int status, u8 *mqe, void *arg)
+{
+	struct efc_domain *domain = arg;
+	int evt = EFC_HW_DOMAIN_FREE_OK;
+	int rc;
+
+	rc = efc_domain_get_mbox_status(domain, mqe, status);
+	if (rc) {
+		evt = EFC_HW_DOMAIN_FREE_FAIL;
+		rc = EFC_FAIL;
+	}
+
+	efc_domain_free_resources(domain, evt, mqe);
+	return rc;
+}
+
+static void
+efc_domain_free_unreg_vfi(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	int rc;
+	u8 data[SLI4_BMBX_SIZE];
+
+	rc = sli_cmd_unreg_vfi(efc->sli, data, domain->indicator,
+			       SLI4_UNREG_TYPE_DOMAIN);
+	if (rc) {
+		efc_log_err(efc, "UNREG_VFI format failure\n");
+		goto cleanup;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+				     efc_domain_free_unreg_vfi_cb, domain);
+	if (rc) {
+		efc_log_err(efc, "UNREG_VFI command failure\n");
+		goto cleanup;
+	}
+
+	return;
+
+cleanup:
+	efc_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data);
+}
+
+int
+efc_cmd_domain_free(struct efc *efc, struct efc_domain *domain)
+{
+	if (!domain) {
+		efc_log_err(efc, "bad parameter(s) domain=%p\n", domain);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			      "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	efc_domain_free_unreg_vfi(domain);
+	return EFC_SUCCESS;
+}
+
+int
+efc_cmd_node_alloc(struct efc *efc, struct efc_remote_node *rnode, u32 fc_addr,
+		   struct efc_nport *nport)
+{
+	/* Check for invalid indicator */
+	if (rnode->indicator != U32_MAX) {
+		efc_log_err(efc,
+			     "RPI allocation failure addr=%#x rpi=%#x\n",
+			    fc_addr, rnode->indicator);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			      "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	/* NULL SLI port indicates an unallocated remote node */
+	rnode->nport = NULL;
+
+	if (sli_resource_alloc(efc->sli, SLI4_RSRC_RPI,
+			       &rnode->indicator, &rnode->index)) {
+		efc_log_err(efc, "RPI allocation failure addr=%#x\n",
+			     fc_addr);
+		return EFC_FAIL;
+	}
+
+	rnode->fc_id = fc_addr;
+	rnode->nport = nport;
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_cmd_node_attach_cb(struct efc *efc, int status, u8 *mqe, void *arg)
+{
+	struct efc_remote_node *rnode = arg;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	int evt = 0;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(efc, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+		rnode->attached = false;
+		evt = EFC_EVT_NODE_ATTACH_FAIL;
+	} else {
+		rnode->attached = true;
+		evt = EFC_EVT_NODE_ATTACH_OK;
+	}
+
+	efc_remote_node_cb(efc, evt, rnode);
+
+	return EFC_SUCCESS;
+}
+
+int
+efc_cmd_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms)
+{
+	int rc = EFC_FAIL;
+	u8 buf[SLI4_BMBX_SIZE];
+
+	if (!rnode || !sparms) {
+		efc_log_err(efc, "bad parameter(s) rnode=%p sparms=%p\n",
+			    rnode, sparms);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc, "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Validate the remote node index before registering the RPI;
+	 * an index of U32_MAX means the node was never allocated.
+	 */
+	if (rnode->index == U32_MAX) {
+		efc_log_err(efc, "bad parameter rnode->index invalid\n");
+		return EFC_FAIL;
+	}
+
+	/* Update a remote node object with the remote port's service params */
+	if (!sli_cmd_reg_rpi(efc->sli, buf, rnode->indicator,
+			rnode->nport->indicator, rnode->fc_id, sparms, 0, 0))
+		rc = efc->tt.issue_mbox_rqst(efc->base, buf,
+					     efc_cmd_node_attach_cb, rnode);
+
+	return rc;
+}
+
+int
+efc_node_free_resources(struct efc *efc, struct efc_remote_node *rnode)
+{
+	int rc = EFC_SUCCESS;
+
+	if (!rnode) {
+		efc_log_err(efc, "bad parameter rnode=%p\n", rnode);
+		return EFC_FAIL;
+	}
+
+	if (rnode->nport) {
+		if (rnode->attached) {
+			efc_log_err(efc, "rnode is still attached\n");
+			return EFC_FAIL;
+		}
+		if (rnode->indicator != U32_MAX) {
+			if (sli_resource_free(efc->sli, SLI4_RSRC_RPI,
+					      rnode->indicator)) {
+				efc_log_err(efc,
+					    "RPI free fail RPI %d addr=%#x\n",
+					    rnode->indicator, rnode->fc_id);
+				rc = EFC_FAIL;
+			} else {
+				rnode->indicator = U32_MAX;
+				rnode->index = U32_MAX;
+			}
+		}
+	}
+
+	return rc;
+}
+
+static int
+efc_cmd_node_free_cb(struct efc *efc, int status, u8 *mqe, void *arg)
+{
+	struct efc_remote_node *rnode = arg;
+	struct sli4_mbox_command_header *hdr =
+				(struct sli4_mbox_command_header *)mqe;
+	int evt = EFC_EVT_NODE_FREE_FAIL;
+	int rc = EFC_SUCCESS;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(efc, "bad status cqe=%#x mqe=%#x\n", status,
+			       le16_to_cpu(hdr->status));
+
+		/*
+		 * In certain cases, a non-zero MQE status is OK (all must be
+		 * true):
+		 *   - node is attached
+		 *   - status is SLI4_MBX_STATUS_RPI_NOT_REG (0x1400)
+		 */
+		if (!rnode->attached ||
+		    (le16_to_cpu(hdr->status) != SLI4_MBX_STATUS_RPI_NOT_REG))
+			rc = EFC_FAIL;
+	}
+
+	if (!rc) {
+		rnode->attached = false;
+		evt = EFC_EVT_NODE_FREE_OK;
+	}
+
+	efc_remote_node_cb(efc, evt, rnode);
+
+	return rc;
+}
+
+int
+efc_cmd_node_detach(struct efc *efc, struct efc_remote_node *rnode)
+{
+	u8 buf[SLI4_BMBX_SIZE];
+	int rc = EFC_FAIL;
+
+	if (!rnode) {
+		efc_log_err(efc, "bad parameter rnode=%p\n", rnode);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc, "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	if (rnode->nport) {
+		if (!rnode->attached)
+			return EFC_FAIL;
+
+		rc = EFC_FAIL;
+
+		if (!sli_cmd_unreg_rpi(efc->sli, buf, rnode->indicator,
+				      SLI4_RSRC_RPI, U32_MAX))
+			rc = efc->tt.issue_mbox_rqst(efc->base, buf,
+					efc_cmd_node_free_cb, rnode);
+
+		if (rc != EFC_SUCCESS) {
+			efc_log_err(efc, "UNREG_RPI failed\n");
+			rc = EFC_FAIL;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc/efc_cmds.h b/drivers/scsi/elx/libefc/efc_cmds.h
new file mode 100644
index 000000000000..9c0287799e9e
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_cmds.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_CMDS_H__
+#define __EFC_CMDS_H__
+
+#define EFC_SPARAM_DMA_SZ	112
+
+int
+efc_cmd_nport_alloc(struct efc *efc, struct efc_nport *nport,
+		   struct efc_domain *domain, u8 *wwpn);
+int
+efc_cmd_nport_attach(struct efc *efc, struct efc_nport *nport, u32 fc_id);
+int
+efc_cmd_nport_free(struct efc *efc, struct efc_nport *nport);
+int
+efc_cmd_domain_alloc(struct efc *efc, struct efc_domain *domain, u32 fcf);
+int
+efc_cmd_domain_attach(struct efc *efc, struct efc_domain *domain, u32 fc_id);
+int
+efc_cmd_domain_free(struct efc *efc, struct efc_domain *domain);
+int
+efc_cmd_node_detach(struct efc *efc, struct efc_remote_node *rnode);
+int
+efc_node_free_resources(struct efc *efc, struct efc_remote_node *rnode);
+int
+efc_cmd_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms);
+int
+efc_cmd_node_alloc(struct efc *efc, struct efc_remote_node *rnode, u32 fc_addr,
+		   struct efc_nport *nport);
+
+#endif /* __EFC_CMDS_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 17/31] elx: efct: Data structures and defines for hw operations
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (15 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 16/31] elx: libefc: Register discovery objects with hardware James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 18/31] elx: efct: Driver initialization routines James Smart
                   ` (13 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch starts the population of the efct target mode
driver.  The driver is contained in the drivers/scsi/elx/efct
subdirectory.

This patch creates the efct directory and starts population of
the driver by adding SLI-4 configuration parameters, data structures
for configuring SLI-4 queues, converting from OS to SLI-4 IO requests,
and handling async events.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.h | 603 ++++++++++++++++++++++++++++++++
 1 file changed, 603 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw.h

diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
new file mode 100644
index 000000000000..e4df0df5fe23
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -0,0 +1,603 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef _EFCT_HW_H
+#define _EFCT_HW_H
+
+#include "../libefc_sli/sli4.h"
+
+/*
+ * EFCT PCI IDs
+ */
+#define EFCT_VENDOR_ID			0x10df
+/* LightPulse 16Gb x 4 FC (lancer-g6) */
+#define EFCT_DEVICE_LANCER_G6		0xe307
+/* LightPulse 32Gb x 4 FC (lancer-g7) */
+#define EFCT_DEVICE_LANCER_G7		0xf407
+
+/* Default RQ entry counts used by the driver */
+#define EFCT_HW_RQ_ENTRIES_MIN		512
+#define EFCT_HW_RQ_ENTRIES_DEF		1024
+#define EFCT_HW_RQ_ENTRIES_MAX		4096
+
+/* Size of the RQ buffers used for each RQ */
+#define EFCT_HW_RQ_SIZE_HDR             128
+#define EFCT_HW_RQ_SIZE_PAYLOAD         1024
+
+/* Maximum number of multi-receive queues */
+#define EFCT_HW_MAX_MRQS		8
+
+/*
+ * Count at which to set the WQEC bit in a submitted
+ * WQE, causing a consumed/released completion to be posted.
+ */
+#define EFCT_HW_WQEC_SET_COUNT		32
+
+/* Send frame timeout in seconds */
+#define EFCT_HW_SEND_FRAME_TIMEOUT	10
+
+/*
+ * FDT Transfer Hint value, reads greater than this value
+ * will be segmented to implement fairness. A value of zero disables
+ * the feature.
+ */
+#define EFCT_HW_FDT_XFER_HINT		8192
+
+#define EFCT_HW_TIMECHECK_ITERATIONS	100
+#define EFCT_HW_MAX_NUM_MQ		1
+#define EFCT_HW_MAX_NUM_RQ		32
+#define EFCT_HW_MAX_NUM_EQ		16
+#define EFCT_HW_MAX_NUM_WQ		32
+#define EFCT_HW_DEF_NUM_EQ		1
+
+#define OCE_HW_MAX_NUM_MRQ_PAIRS	16
+
+#define EFCT_HW_MQ_DEPTH		128
+#define EFCT_HW_EQ_DEPTH		1024
+
+/*
+ * A CQ will be assigned to each WQ
+ * (the CQ must have 2X the entries of the WQ for abort
+ * processing), plus a separate one for each RQ pair and one for the MQ.
+ */
+#define EFCT_HW_MAX_NUM_CQ \
+	((EFCT_HW_MAX_NUM_WQ * 2) + 1 + (OCE_HW_MAX_NUM_MRQ_PAIRS * 2))
+
+#define EFCT_HW_Q_HASH_SIZE		128
+#define EFCT_HW_RQ_HEADER_SIZE		128
+#define EFCT_HW_RQ_HEADER_INDEX		0
+
+#define EFCT_HW_REQUE_XRI_REGTAG	65534
+
+/* Options for efct_hw_command() */
+enum efct_cmd_opts {
+	/* command executes synchronously and busy-waits for completion */
+	EFCT_CMD_POLL,
+	/* command executes asynchronously. Uses callback */
+	EFCT_CMD_NOWAIT,
+};
+
+enum efct_hw_rtn {
+	EFCT_HW_RTN_SUCCESS = 0,
+	EFCT_HW_RTN_SUCCESS_SYNC = 1,
+	EFCT_HW_RTN_ERROR = -1,
+	EFCT_HW_RTN_NO_RESOURCES = -2,
+	EFCT_HW_RTN_NO_MEMORY = -3,
+	EFCT_HW_RTN_IO_NOT_ACTIVE = -4,
+	EFCT_HW_RTN_IO_ABORT_IN_PROGRESS = -5,
+	EFCT_HW_RTN_IO_PORT_OWNED_ALREADY_ABORTED = -6,
+	EFCT_HW_RTN_INVALID_ARG = -7,
+};
+
+#define EFCT_HW_RTN_IS_ERROR(e)	((e) < 0)
+
+enum efct_hw_reset {
+	EFCT_HW_RESET_FUNCTION,
+	EFCT_HW_RESET_FIRMWARE,
+	EFCT_HW_RESET_MAX
+};
+
+enum efct_hw_topo {
+	EFCT_HW_TOPOLOGY_AUTO,
+	EFCT_HW_TOPOLOGY_NPORT,
+	EFCT_HW_TOPOLOGY_LOOP,
+	EFCT_HW_TOPOLOGY_NONE,
+	EFCT_HW_TOPOLOGY_MAX
+};
+
+/* pack fw revision values into a single uint64_t */
+#define HW_FWREV(a, b, c, d) (((uint64_t)(a) << 48) | ((uint64_t)(b) << 32) \
+			| ((uint64_t)(c) << 16) | ((uint64_t)(d)))
+
+#define EFCT_FW_VER_STR(a, b, c, d) (#a "." #b "." #c "." #d)
+
+enum efct_hw_io_type {
+	EFCT_HW_ELS_REQ,
+	EFCT_HW_ELS_RSP,
+	EFCT_HW_FC_CT,
+	EFCT_HW_FC_CT_RSP,
+	EFCT_HW_BLS_ACC,
+	EFCT_HW_BLS_RJT,
+	EFCT_HW_IO_TARGET_READ,
+	EFCT_HW_IO_TARGET_WRITE,
+	EFCT_HW_IO_TARGET_RSP,
+	EFCT_HW_IO_DNRX_REQUEUE,
+	EFCT_HW_IO_MAX,
+};
+
+enum efct_hw_io_state {
+	EFCT_HW_IO_STATE_FREE,
+	EFCT_HW_IO_STATE_INUSE,
+	EFCT_HW_IO_STATE_WAIT_FREE,
+	EFCT_HW_IO_STATE_WAIT_SEC_HIO,
+};
+
+#define EFCT_TARGET_WRITE_SKIPS	1
+#define EFCT_TARGET_READ_SKIPS	2
+
+struct efct_hw;
+struct efct_io;
+
+#define EFCT_CMD_CTX_POOL_SZ	32
+/**
+ * struct efct_command_ctx - HW command context
+ *
+ * Stores the state for asynchronous commands sent to the hardware.
+ * @list_entry:	link for the outstanding/pending command lists
+ * @cb:		completion callback
+ * @arg:	argument for the callback
+ * @buf:	buffer holding the command / results
+ * @ctx:	upper layer context
+ */
+struct efct_command_ctx {
+	struct list_head	list_entry;
+	int (*cb)(struct efct_hw *hw, int status, u8 *mqe, void *arg);
+	void			*arg;	/* Argument for callback */
+	/* buffer holding command / results */
+	u8			buf[SLI4_BMBX_SIZE];
+	void			*ctx;	/* upper layer context */
+};
+
+struct efct_hw_sgl {
+	uintptr_t		addr;
+	size_t			len;
+};
+
+union efct_hw_io_param_u {
+	struct sli_bls_params bls;
+	struct sli_els_params els;
+	struct sli_ct_params fc_ct;
+	struct sli_fcp_tgt_params fcp_tgt;
+};
+
+/* WQ steering mode */
+enum efct_hw_wq_steering {
+	EFCT_HW_WQ_STEERING_CLASS,
+	EFCT_HW_WQ_STEERING_REQUEST,
+	EFCT_HW_WQ_STEERING_CPU,
+};
+
+/* HW wqe object */
+struct efct_hw_wqe {
+	struct list_head	list_entry;
+	bool			abort_wqe_submit_needed;
+	bool			send_abts;
+	u32			id;
+	u32			abort_reqtag;
+	u8			*wqebuf;
+};
+
+struct efct_hw_io;
+/* Typedef for HW "done" callback */
+typedef int (*efct_hw_done_t)(struct efct_hw_io *, u32 len, int status,
+			      u32 ext, void *ul_arg);
+
+/**
+ * struct efct_hw_io - HW IO object
+ *
+ * Stores the per-IO information necessary for both SLI and efct.
+ * @ref:		reference counter for the hw io object
+ * @state:		state of the IO: free, busy, wait_free
+ * @release:		function called when the last reference is dropped
+ * @list_entry:		used for busy, wait_free, free lists
+ * @wqe:		work queue object, with link for pending
+ * @hw:			pointer back to hardware context
+ * @xfer_rdy:		transfer ready data
+ * @type:		IO type
+ * @xbusy:		exchange is active in FW
+ * @abort_in_progress:	if true, abort is in progress
+ * @status_saved:	if true, latched status should be returned
+ * @wq_class:		WQ class if steering mode is Class
+ * @reqtag:		request tag for this HW IO
+ * @wq:			WQ assigned to the exchange
+ * @done:		function called on IO completion
+ * @arg:		argument passed to the IO done callback
+ * @abort_done:		function called on abort completion
+ * @abort_arg:		argument passed to the abort done callback
+ * @wq_steering:	WQ steering mode request
+ * @saved_status:	saved status
+ * @saved_len:		saved length
+ * @saved_ext:		saved extended status
+ * @eq:			EQ on which this HIO came up
+ * @sge_offset:		SGE data offset
+ * @def_sgl_count:	count of SGEs in the default SGL
+ * @abort_reqtag:	request tag for an abort of this HW IO
+ * @indicator:		exchange indicator
+ * @def_sgl:		default SGL
+ * @sgl:		pointer to the current active SGL
+ * @sgl_count:		count of SGEs in io->sgl
+ * @first_data_sge:	index of the first data SGE
+ * @n_sge:		number of active SGEs
+ */
+struct efct_hw_io {
+	struct kref		ref;
+	enum efct_hw_io_state	state;
+	void			(*release)(struct kref *arg);
+	struct list_head	list_entry;
+	struct efct_hw_wqe	wqe;
+
+	struct efct_hw		*hw;
+	struct efc_dma		xfer_rdy;
+	u16			type;
+	bool			xbusy;
+	bool			abort_in_progress;
+	bool			status_saved;
+	u8			wq_class;
+	u16			reqtag;
+
+	struct hw_wq		*wq;
+	efct_hw_done_t		done;
+	void			*arg;
+	efct_hw_done_t		abort_done;
+	void			*abort_arg;
+
+	enum efct_hw_wq_steering wq_steering;
+
+	u32			saved_status;
+	u32			saved_len;
+	u32			saved_ext;
+
+	struct hw_eq		*eq;
+	u32			sge_offset;
+	u32			def_sgl_count;
+	u32			abort_reqtag;
+	u32			indicator;
+	struct efc_dma		def_sgl;
+	struct efc_dma		*sgl;
+	u32			sgl_count;
+	u32			first_data_sge;
+	u32			n_sge;
+};
+
+enum efct_hw_port {
+	EFCT_HW_PORT_INIT,
+	EFCT_HW_PORT_SHUTDOWN,
+};
+
+/* Node group rpi reference */
+struct efct_hw_rpi_ref {
+	atomic_t rpi_count;
+	atomic_t rpi_attached;
+};
+
+enum efct_hw_link_stat {
+	EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT,
+	EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT,
+	EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT,
+	EFCT_HW_LINK_STAT_CRC_COUNT,
+	EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT,
+	EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT,
+	EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT,
+	EFCT_HW_LINK_STAT_RCV_EOFA_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_SOFF_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT,
+	EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT,
+	EFCT_HW_LINK_STAT_MAX,
+};
+
+enum efct_hw_host_stat {
+	EFCT_HW_HOST_STAT_TX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_RX_KBYTE_COUNT,
+	EFCT_HW_HOST_STAT_TX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_RX_FRAME_COUNT,
+	EFCT_HW_HOST_STAT_TX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_RX_SEQ_COUNT,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG,
+	EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP,
+	EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT,
+	EFCT_HW_HOST_STAT_RX_F_BSY_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT,
+	EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT,
+	EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT,
+	EFCT_HW_HOST_STAT_MAX,
+};
+
+enum efct_hw_state {
+	EFCT_HW_STATE_UNINITIALIZED,
+	EFCT_HW_STATE_QUEUES_ALLOCATED,
+	EFCT_HW_STATE_ACTIVE,
+	EFCT_HW_STATE_RESET_IN_PROGRESS,
+	EFCT_HW_STATE_TEARDOWN_IN_PROGRESS,
+};
+
+struct efct_hw_link_stat_counts {
+	u8		overflow;
+	u32		counter;
+};
+
+struct efct_hw_host_stat_counts {
+	u32		counter;
+};
+
+/* Structure used for the hash lookup of queue IDs */
+struct efct_queue_hash {
+	bool		in_use;
+	u16		id;
+	u16		index;
+};
+
+/* WQ callback object */
+struct hw_wq_callback {
+	u16		instance_index;	/* use for request tag */
+	void (*callback)(void *arg, u8 *cqe, int status);
+	void		*arg;
+	struct list_head list_entry;
+};
+
+struct reqtag_pool {
+	spinlock_t lock;	/* pool lock */
+	struct hw_wq_callback *tags[U16_MAX];
+	struct list_head freelist;
+};
+
+struct efct_hw_config {
+	u32		n_eq;
+	u32		n_cq;
+	u32		n_mq;
+	u32		n_rq;
+	u32		n_wq;
+	u32		n_io;
+	u32		n_sgl;
+	u32		speed;
+	u32		topology;
+	/* size of the buffers for first burst */
+	u32		rq_default_buffer_size;
+	u8		esoc;
+	/* MRQ RQ selection policy */
+	u8		rq_selection_policy;
+	/* RQ quanta if rq_selection_policy == 2 */
+	u8		rr_quanta;
+	u32		filter_def[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+};
+
+struct efct_hw {
+	struct efct		*os;
+	struct sli4		sli;
+	u16			ulp_start;
+	u16			ulp_max;
+	u32			dump_size;
+	enum efct_hw_state	state;
+	bool			hw_setup_called;
+	u8			sliport_healthcheck;
+	u16			fcf_indicator;
+
+	/* HW configuration */
+	struct efct_hw_config	config;
+
+	/* calculated queue sizes for each type */
+	u32			num_qentries[SLI4_QTYPE_MAX];
+
+	/* Storage for SLI queue objects */
+	struct sli4_queue	wq[EFCT_HW_MAX_NUM_WQ];
+	struct sli4_queue	rq[EFCT_HW_MAX_NUM_RQ];
+	u16			hw_rq_lookup[EFCT_HW_MAX_NUM_RQ];
+	struct sli4_queue	mq[EFCT_HW_MAX_NUM_MQ];
+	struct sli4_queue	cq[EFCT_HW_MAX_NUM_CQ];
+	struct sli4_queue	eq[EFCT_HW_MAX_NUM_EQ];
+
+	/* HW queue */
+	u32			eq_count;
+	u32			cq_count;
+	u32			mq_count;
+	u32			wq_count;
+	u32			rq_count;
+	u32			cmd_head_count;
+	struct list_head	eq_list;
+
+	struct efct_queue_hash	cq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash	rq_hash[EFCT_HW_Q_HASH_SIZE];
+	struct efct_queue_hash	wq_hash[EFCT_HW_Q_HASH_SIZE];
+
+	/* Storage for HW queue objects */
+	struct hw_wq		*hw_wq[EFCT_HW_MAX_NUM_WQ];
+	struct hw_rq		*hw_rq[EFCT_HW_MAX_NUM_RQ];
+	struct hw_mq		*hw_mq[EFCT_HW_MAX_NUM_MQ];
+	struct hw_cq		*hw_cq[EFCT_HW_MAX_NUM_CQ];
+	struct hw_eq		*hw_eq[EFCT_HW_MAX_NUM_EQ];
+	/* count of hw_rq[] entries */
+	u32			hw_rq_count;
+	/* count of multirq RQs */
+	u32			hw_mrq_count;
+
+	struct hw_wq		**wq_cpu_array;
+
+	/* Sequence objects used in incoming frame processing */
+	struct efc_hw_sequence	*seq_pool;
+
+	/* Maintain an ordered, linked list of outstanding HW commands. */
+	struct mutex            bmbx_lock;
+	spinlock_t		cmd_lock;
+	struct list_head	cmd_head;
+	struct list_head	cmd_pending;
+	mempool_t		*cmd_ctx_pool;
+	mempool_t		*mbox_rqst_pool;
+
+	struct sli4_link_event	link;
+
+	/* pointer array of IO objects */
+	struct efct_hw_io	**io;
+	/* array of WQE buffs mapped to IO objects */
+	u8			*wqe_buffs;
+
+	/* IO lock to synchronize list access */
+	spinlock_t		io_lock;
+	/* IO lock to synchronize IO aborting */
+	spinlock_t		io_abort_lock;
+	/* List of IO objects in use */
+	struct list_head	io_inuse;
+	/* List of IO objects waiting to be freed */
+	struct list_head	io_wait_free;
+	/* List of IO objects available for allocation */
+	struct list_head	io_free;
+
+	struct efc_dma		loop_map;
+
+	struct efc_dma		xfer_rdy;
+
+	struct efc_dma		rnode_mem;
+
+	atomic_t		io_alloc_failed_count;
+
+	/* stat: wq submit count */
+	u32			tcmd_wq_submit[EFCT_HW_MAX_NUM_WQ];
+	/* stat: wq complete count */
+	u32			tcmd_wq_complete[EFCT_HW_MAX_NUM_WQ];
+
+	atomic_t		send_frame_seq_id;
+	struct reqtag_pool	*wq_reqtag_pool;
+};
+
+enum efct_hw_io_count_type {
+	EFCT_HW_IO_INUSE_COUNT,
+	EFCT_HW_IO_FREE_COUNT,
+	EFCT_HW_IO_WAIT_FREE_COUNT,
+	EFCT_HW_IO_N_TOTAL_IO_COUNT,
+};
+
+/* HW queue data structures */
+struct hw_eq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	u32			entry_count;
+	u32			entry_size;
+	struct efct_hw		*hw;
+	struct sli4_queue	*queue;
+	struct list_head	cq_list;
+	u32			use_count;
+};
+
+struct hw_cq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_eq		*eq;
+	struct sli4_queue	*queue;
+	struct list_head	q_list;
+	u32			use_count;
+};
+
+struct hw_q {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+};
+
+struct hw_mq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_cq		*cq;
+	struct sli4_queue	*queue;
+
+	u32			use_count;
+};
+
+struct hw_wq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+	struct efct_hw		*hw;
+
+	u32			entry_count;
+	u32			entry_size;
+	struct hw_cq		*cq;
+	struct sli4_queue	*queue;
+	u32			class;
+
+	/* WQ consumed */
+	u32			wqec_set_count;
+	u32			wqec_count;
+	u32			free_count;
+	u32			total_submit_count;
+	struct list_head	pending_list;
+
+	/* HW IO allocated for use with Send Frame */
+	struct efct_hw_io	*send_frame_io;
+
+	/* Stats */
+	u32			use_count;
+	u32			wq_pending_count;
+};
+
+struct hw_rq {
+	struct list_head	list_entry;
+	enum sli4_qtype		type;
+	u32			instance;
+
+	u32			entry_count;
+	u32			use_count;
+	u32			hdr_entry_size;
+	u32			first_burst_entry_size;
+	u32			data_entry_size;
+	bool			is_mrq;
+	u32			base_mrq_id;
+
+	struct hw_cq		*cq;
+
+	u8			filter_mask;
+	struct sli4_queue	*hdr;
+	struct sli4_queue	*first_burst;
+	struct sli4_queue	*data;
+
+	struct efc_hw_rq_buffer	*hdr_buf;
+	struct efc_hw_rq_buffer	*fb_buf;
+	struct efc_hw_rq_buffer	*payload_buf;
+	/* RQ tracker for this RQ */
+	struct efc_hw_sequence	**rq_tracker;
+};
+
+struct efct_hw_send_frame_context {
+	struct efct_hw		*hw;
+	struct hw_wq_callback	*wqcb;
+	struct efct_hw_wqe	wqe;
+	void (*callback)(int status, void *arg);
+	void			*arg;
+
+	/* General purpose elements */
+	struct efc_hw_sequence	*seq;
+	struct efc_dma		payload;
+};
+
+struct efct_hw_grp_hdr {
+	u32			size;
+	__be32			magic_number;
+	u32			word2;
+	u8			rev_name[128];
+	u8			date[12];
+	u8			revision[32];
+};
+
+#endif /* __EFCT_H__ */
-- 
2.26.2



* [PATCH v7 18/31] elx: efct: Driver initialization routines
@ 2021-01-07 22:58 ` James Smart
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Emulex FC Target driver init, attach and hardware setup routines.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>

v7:
 Fix return codes in efct_hw_config_mrq()
---
 drivers/scsi/elx/efct/efct_driver.c |  789 ++++++++++++++++++
 drivers/scsi/elx/efct/efct_driver.h |  109 +++
 drivers/scsi/elx/efct/efct_hw.c     | 1161 +++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h     |   15 +
 drivers/scsi/elx/efct/efct_xport.c  |  519 ++++++++++++
 drivers/scsi/elx/efct/efct_xport.h  |  186 +++++
 6 files changed, 2779 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_driver.c
 create mode 100644 drivers/scsi/elx/efct/efct_driver.h
 create mode 100644 drivers/scsi/elx/efct/efct_hw.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.c
 create mode 100644 drivers/scsi/elx/efct/efct_xport.h

diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
new file mode 100644
index 000000000000..c06c53b4c6cf
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.c
@@ -0,0 +1,789 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+
+#include "efct_hw.h"
+#include "efct_unsol.h"
+#include "efct_scsi.h"
+
+LIST_HEAD(efct_devices);
+
+static int logmask;
+module_param(logmask, int, 0444);
+MODULE_PARM_DESC(logmask, "logging bitmask (default 0)");
+
+static struct libefc_function_template efct_libefc_templ = {
+	.issue_mbox_rqst = efct_issue_mbox_rqst,
+	.send_els = efct_els_hw_srrs_send,
+	.send_bls = efct_efc_bls_send,
+
+	.new_nport = efct_scsi_tgt_new_nport,
+	.del_nport = efct_scsi_tgt_del_nport,
+	.scsi_new_node = efct_scsi_new_initiator,
+	.scsi_del_node = efct_scsi_del_initiator,
+	.hw_seq_free = efct_efc_hw_sequence_free,
+};
+
+static int
+efct_device_init(void)
+{
+	int rc;
+
+	/* driver-wide init for target-server */
+	rc = efct_scsi_tgt_driver_init();
+	if (rc) {
+		pr_err("efct_scsi_tgt_driver_init failed rc=%d\n", rc);
+		return rc;
+	}
+
+	rc = efct_scsi_reg_fc_transport();
+	if (rc) {
+		pr_err("failed to register to FC host\n");
+		return rc;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_device_shutdown(void)
+{
+	efct_scsi_release_fc_transport();
+
+	efct_scsi_tgt_driver_exit();
+}
+
+static void *
+efct_device_alloc(u32 nid)
+{
+	struct efct *efct = NULL;
+
+	efct = kzalloc_node(sizeof(*efct), GFP_KERNEL, nid);
+	if (!efct)
+		return NULL;
+
+	INIT_LIST_HEAD(&efct->list_entry);
+	list_add_tail(&efct->list_entry, &efct_devices);
+
+	return efct;
+}
+
+static void
+efct_teardown_msix(struct efct *efct)
+{
+	u32 i;
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		free_irq(pci_irq_vector(efct->pci, i),
+			 &efct->intr_context[i]);
+	}
+
+	pci_free_irq_vectors(efct->pci);
+}
+
+static int
+efct_efclib_config(struct efct *efct, struct libefc_function_template *tt)
+{
+	struct efc *efc;
+	struct sli4 *sli;
+	int rc = EFC_SUCCESS;
+
+	efc = kzalloc(sizeof(*efc), GFP_KERNEL);
+	if (!efc)
+		return EFC_FAIL;
+
+	efct->efcport = efc;
+
+	memcpy(&efc->tt, tt, sizeof(*tt));
+	efc->base = efct;
+	efc->pci = efct->pci;
+
+	efc->def_wwnn = efct_get_wwnn(&efct->hw);
+	efc->def_wwpn = efct_get_wwpn(&efct->hw);
+	efc->enable_tgt = 1;
+	efc->log_level = EFC_LOG_LIB;
+
+	sli = &efct->hw.sli;
+	efc->max_xfer_size = sli->sge_supported_length *
+			     sli_get_max_sgl(&efct->hw.sli);
+	efc->sli = sli;
+	efc->fcfi = efct->hw.fcf_indicator;
+
+	rc = efcport_init(efc);
+	if (rc)
+		efc_log_err(efc, "efcport_init failed\n");
+
+	return rc;
+}
+
+static int efct_request_firmware_update(struct efct *efct);
+
+static const char *
+efct_pci_model(u16 device)
+{
+	switch (device) {
+	case EFCT_DEVICE_LANCER_G6:	return "LPE31004";
+	case EFCT_DEVICE_LANCER_G7:	return "LPE36000";
+	default:			return "unknown";
+	}
+}
+
+static int
+efct_device_attach(struct efct *efct)
+{
+	u32 rc = EFC_SUCCESS, i = 0;
+
+	if (efct->attached) {
+		efc_log_err(efct, "Device is already attached\n");
+		return EFC_FAIL;
+	}
+
+	snprintf(efct->name, sizeof(efct->name), "[%s%d] ", "fc",
+		 efct->instance_index);
+
+	efct->logmask = logmask;
+	efct->filter_def = EFCT_DEFAULT_FILTER;
+	efct->max_isr_time_msec = EFCT_OS_MAX_ISR_TIME_MSEC;
+
+	efct->model = efct_pci_model(efct->pci->device);
+
+	efct->efct_req_fw_upgrade = true;
+
+	/* Allocate transport object and bring online */
+	efct->xport = efct_xport_alloc(efct);
+	if (!efct->xport) {
+		efc_log_err(efct, "failed to allocate transport object\n");
+		rc = EFC_FAIL;
+		goto out;
+	}
+
+	rc = efct_xport_attach(efct->xport);
+	if (rc) {
+		efc_log_err(efct, "failed to attach transport object\n");
+		goto xport_out;
+	}
+
+	rc = efct_xport_initialize(efct->xport);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize transport object\n");
+		goto xport_out;
+	}
+
+	rc = efct_efclib_config(efct, &efct_libefc_templ);
+	if (rc) {
+		efc_log_err(efct, "failed to init efclib\n");
+		goto efclib_out;
+	}
+
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		efc_log_debug(efct, "irq %d enabled\n", i);
+		enable_irq(pci_irq_vector(efct->pci, i));
+	}
+
+	efct->attached = true;
+
+	if (efct->efct_req_fw_upgrade)
+		efct_request_firmware_update(efct);
+
+	return rc;
+
+efclib_out:
+	efct_xport_detach(efct->xport);
+xport_out:
+	efct_xport_free(efct->xport);
+	efct->xport = NULL;
+out:
+	return rc;
+}
+
+static int
+efct_device_detach(struct efct *efct)
+{
+	int i;
+
+	if (!efct || !efct->attached) {
+		pr_err("Device is not attached\n");
+		return EFC_FAIL;
+	}
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_SHUTDOWN))
+		efc_log_err(efct, "Transport Shutdown timed out\n");
+
+	for (i = 0; i < efct->n_msix_vec; i++)
+		disable_irq(pci_irq_vector(efct->pci, i));
+
+	if (efct_xport_detach(efct->xport) != 0)
+		efc_log_err(efct, "Transport detach failed\n");
+
+	efct_xport_free(efct->xport);
+	efct->xport = NULL;
+
+	efcport_destroy(efct->efcport);
+	kfree(efct->efcport);
+
+	efct->attached = false;
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_fw_write_cb(int status, u32 actual_write_length,
+		 u32 change_status, void *arg)
+{
+	struct efct_fw_write_result *result = arg;
+
+	result->status = status;
+	result->actual_xfer = actual_write_length;
+	result->change_status = change_status;
+
+	complete(&result->done);
+}
+
+static int
+efct_firmware_write(struct efct *efct, const u8 *buf, size_t buf_len,
+		    u8 *change_status)
+{
+	int rc = EFC_SUCCESS;
+	u32 bytes_left;
+	u32 xfer_size;
+	u32 offset;
+	struct efc_dma dma;
+	int last = 0;
+	struct efct_fw_write_result result;
+
+	init_completion(&result.done);
+
+	bytes_left = buf_len;
+	offset = 0;
+
+	dma.size = FW_WRITE_BUFSIZE;
+	dma.virt = dma_alloc_coherent(&efct->pci->dev,
+				      dma.size, &dma.phys, GFP_DMA);
+	if (!dma.virt)
+		return -ENOMEM;
+
+	while (bytes_left > 0) {
+		if (bytes_left > FW_WRITE_BUFSIZE)
+			xfer_size = FW_WRITE_BUFSIZE;
+		else
+			xfer_size = bytes_left;
+
+		memcpy(dma.virt, buf + offset, xfer_size);
+
+		if (bytes_left == xfer_size)
+			last = 1;
+
+		efct_hw_firmware_write(&efct->hw, &dma, xfer_size, offset,
+				       last, efct_fw_write_cb, &result);
+
+		if (wait_for_completion_interruptible(&result.done) != 0) {
+			rc = -ENXIO;
+			break;
+		}
+
+		if (result.actual_xfer == 0 || result.status != 0) {
+			rc = -EFAULT;
+			break;
+		}
+
+		if (last)
+			*change_status = result.change_status;
+
+		bytes_left -= result.actual_xfer;
+		offset += result.actual_xfer;
+	}
+
+	dma_free_coherent(&efct->pci->dev, dma.size, dma.virt, dma.phys);
+	return rc;
+}
+
+static int
+efct_fw_reset(struct efct *efct)
+{
+	/*
+	 * Firmware reset to activate the new firmware.
+	 * Function 0 will update and load the new firmware
+	 * during attach.
+	 */
+	if (timer_pending(&efct->xport->stats_timer))
+		del_timer(&efct->xport->stats_timer);
+
+	if (efct_hw_reset(&efct->hw, EFCT_HW_RESET_FIRMWARE)) {
+		efc_log_info(efct, "failed to reset firmware\n");
+		return EFC_FAIL;
+	}
+
+	efc_log_info(efct, "successfully reset firmware. Now resetting port\n");
+
+	efct_device_detach(efct);
+	return efct_device_attach(efct);
+}
+
+static int
+efct_request_firmware_update(struct efct *efct)
+{
+	int rc = EFC_SUCCESS;
+	u8 fw_change_status = 0;
+	char file_name[256];
+	const struct firmware *fw;
+	struct efct_hw_grp_hdr *fw_image;
+
+	snprintf(file_name, 256, "%s.grp", efct->model);
+
+	rc = request_firmware(&fw, file_name, &efct->pci->dev);
+	if (rc) {
+		efc_log_debug(efct, "Firmware file (%s) not found\n",
+			      file_name);
+		return rc;
+	}
+
+	fw_image = (struct efct_hw_grp_hdr *)fw->data;
+
+	if (!strncmp(efct->hw.sli.fw_name[0], fw_image->revision,
+		     strnlen(fw_image->revision, 16))) {
+		efc_log_debug(efct,
+			"Skipped update. Firmware is already up to date.\n");
+		goto exit;
+	}
+
+	efc_log_info(efct, "Firmware update is initiated. %s -> %s\n",
+		     efct->hw.sli.fw_name[0], fw_image->revision);
+
+	rc = efct_firmware_write(efct, fw->data, fw->size, &fw_change_status);
+	if (rc) {
+		efc_log_err(efct,
+			     "Firmware update failed. Return code = %d\n", rc);
+		goto exit;
+	}
+
+	efc_log_info(efct, "Firmware updated successfully\n");
+	switch (fw_change_status) {
+	case 0x00:
+		efc_log_info(efct, "New firmware is active.\n");
+		break;
+	case 0x01:
+		efc_log_info(efct,
+			"System reboot needed to activate the new firmware\n");
+		break;
+	case 0x02:
+	case 0x03:
+		efc_log_info(efct,
+			"firmware is resetting to activate the new firmware\n");
+		efct_fw_reset(efct);
+		break;
+	default:
+		efc_log_info(efct, "Unexpected value change_status: %d\n",
+			     fw_change_status);
+		break;
+	}
+
+exit:
+	release_firmware(fw);
+
+	return rc;
+}
+
+static void
+efct_device_free(struct efct *efct)
+{
+	if (efct) {
+		list_del(&efct->list_entry);
+		kfree(efct);
+	}
+}
+
+static int
+efct_device_interrupts_required(struct efct *efct)
+{
+	if (efct_hw_setup(&efct->hw, efct, efct->pci) != EFCT_HW_RTN_SUCCESS)
+		return EFCT_HW_RTN_ERROR;
+
+	return efct->hw.config.n_eq;
+}
+
+static irqreturn_t
+efct_intr_thread(int irq, void *handle)
+{
+	struct efct_intr_context *intr_ctx = handle;
+	struct efct *efct = intr_ctx->efct;
+
+	efct_hw_process(&efct->hw, intr_ctx->index, efct->max_isr_time_msec);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t
+efct_intr_msix(int irq, void *handle)
+{
+	return IRQ_WAKE_THREAD;
+}
+
+static int
+efct_setup_msix(struct efct *efct, u32 num_intrs)
+{
+	int rc = EFC_SUCCESS, i;
+
+	if (!pci_find_capability(efct->pci, PCI_CAP_ID_MSIX)) {
+		dev_err(&efct->pci->dev,
+			"%s : MSI-X not available\n", __func__);
+		return EFC_FAIL;
+	}
+
+	efct->n_msix_vec = num_intrs;
+
+	rc = pci_alloc_irq_vectors(efct->pci, num_intrs, num_intrs,
+				   PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
+
+	if (rc < 0) {
+		dev_err(&efct->pci->dev, "Failed to alloc irq : %d\n", rc);
+		return rc;
+	}
+
+	for (i = 0; i < num_intrs; i++) {
+		struct efct_intr_context *intr_ctx = NULL;
+
+		intr_ctx = &efct->intr_context[i];
+		intr_ctx->efct = efct;
+		intr_ctx->index = i;
+
+		rc = request_threaded_irq(pci_irq_vector(efct->pci, i),
+					  efct_intr_msix, efct_intr_thread, 0,
+					  EFCT_DRIVER_NAME, intr_ctx);
+		if (rc) {
+			dev_err(&efct->pci->dev,
+				"Failed to register %d vector: %d\n", i, rc);
+			goto out;
+		}
+	}
+
+	return rc;
+
+out:
+	while (--i >= 0)
+		free_irq(pci_irq_vector(efct->pci, i),
+			 &efct->intr_context[i]);
+
+	pci_free_irq_vectors(efct->pci);
+	return rc;
+}
+
+static const struct pci_device_id efct_pci_table[] = {
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G6), 0},
+	{PCI_DEVICE(EFCT_VENDOR_ID, EFCT_DEVICE_LANCER_G7), 0},
+	{}	/* terminate list */
+};
+
+static int
+efct_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+	struct efct *efct = NULL;
+	int rc;
+	u32 i, r;
+	int num_interrupts = 0;
+	int nid;
+
+	dev_info(&pdev->dev, "%s\n", EFCT_DRIVER_NAME);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc)
+		return rc;
+
+	pci_set_master(pdev);
+
+	rc = pci_set_mwi(pdev);
+	if (rc) {
+		dev_info(&pdev->dev, "pci_set_mwi returned %d\n", rc);
+		goto mwi_out;
+	}
+
+	rc = pci_request_regions(pdev, EFCT_DRIVER_NAME);
+	if (rc) {
+		dev_err(&pdev->dev, "pci_request_regions failed %d\n", rc);
+		goto req_regions_out;
+	}
+
+	/* Fetch the NUMA node id for this device */
+	nid = dev_to_node(&pdev->dev);
+	if (nid < 0) {
+		dev_err(&pdev->dev, "Warning: NUMA node ID is %d\n", nid);
+		nid = 0;
+	}
+
+	/* Allocate efct */
+	efct = efct_device_alloc(nid);
+	if (!efct) {
+		dev_err(&pdev->dev, "Failed to allocate efct\n");
+		rc = -ENOMEM;
+		goto alloc_out;
+	}
+
+	efct->pci = pdev;
+	efct->numa_node = nid;
+
+	/* Map all memory BARs */
+	for (i = 0, r = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
+			efct->reg[r] = ioremap(pci_resource_start(pdev, i),
+						  pci_resource_len(pdev, i));
+			r++;
+		}
+
+		/*
+		 * If the 64-bit attribute is set, both this BAR and the
+		 * next form the complete address. Skip processing the
+		 * next BAR.
+		 */
+		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM_64)
+			i++;
+	}
+
+	pci_set_drvdata(pdev, efct);
+
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (rc) {
+		dev_warn(&pdev->dev, "trying DMA_BIT_MASK(32)\n");
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (rc) {
+			dev_err(&pdev->dev, "setting DMA_BIT_MASK failed\n");
+			goto dma_mask_out;
+		}
+	}
+
+	num_interrupts = efct_device_interrupts_required(efct);
+	if (num_interrupts < 0) {
+		efc_log_err(efct, "efct_device_interrupts_required failed\n");
+		rc = -1;
+		goto dma_mask_out;
+	}
+
+	/*
+	 * Initialize MSIX interrupts, note,
+	 * efct_setup_msix() enables the interrupt
+	 */
+	rc = efct_setup_msix(efct, num_interrupts);
+	if (rc) {
+		dev_err(&pdev->dev, "Can't setup msix\n");
+		goto dma_mask_out;
+	}
+	/* Disable interrupt for now */
+	for (i = 0; i < efct->n_msix_vec; i++) {
+		efc_log_debug(efct, "irq %d disabled\n", i);
+		disable_irq(pci_irq_vector(efct->pci, i));
+	}
+
+	rc = efct_device_attach(efct);
+	if (rc)
+		goto attach_out;
+
+	return EFC_SUCCESS;
+
+attach_out:
+	efct_teardown_msix(efct);
+dma_mask_out:
+	pci_set_drvdata(pdev, NULL);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+	efct_device_free(efct);
+alloc_out:
+	pci_release_regions(pdev);
+req_regions_out:
+	pci_clear_mwi(pdev);
+mwi_out:
+	pci_disable_device(pdev);
+	return rc;
+}
+
+static void
+efct_pci_remove(struct pci_dev *pdev)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+	u32 i;
+
+	if (!efct)
+		return;
+
+	efct_device_detach(efct);
+
+	efct_teardown_msix(efct);
+
+	for (i = 0; i < EFCT_PCI_MAX_REGS; i++) {
+		if (efct->reg[i])
+			iounmap(efct->reg[i]);
+	}
+
+	pci_set_drvdata(pdev, NULL);
+
+	efct_device_free(efct);
+
+	pci_release_regions(pdev);
+
+	pci_disable_device(pdev);
+}
+
+static void
+efct_device_prep_for_reset(struct efct *efct, struct pci_dev *pdev)
+{
+	if (efct) {
+		efc_log_debug(efct,
+			       "PCI channel disable preparing for reset\n");
+		efct_device_detach(efct);
+		/* Disable interrupt and pci device */
+		efct_teardown_msix(efct);
+	}
+	pci_disable_device(pdev);
+}
+
+static void
+efct_device_prep_for_recover(struct efct *efct)
+{
+	if (efct) {
+		efc_log_debug(efct, "PCI channel preparing for recovery\n");
+		efct_hw_io_abort_all(&efct->hw);
+	}
+}
+
+/**
+ * efct_pci_io_error_detected - method for handling PCI I/O error
+ * @pdev: pointer to PCI device.
+ * @state: the current PCI connection state.
+ *
+ * This routine is registered to the PCI subsystem for error handling. This
+ * function is called by the PCI subsystem after a PCI bus error affecting
+ * this device has been detected. When this routine is invoked, it dispatches
+ * device error detected handling routine, which will perform the proper
+ * error detected operation.
+ *
+ * Return codes
+ * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
+ * PCI_ERS_RESULT_DISCONNECT - device could not be recovered
+ */
+static pci_ers_result_t
+efct_pci_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+	pci_ers_result_t rc;
+
+	switch (state) {
+	case pci_channel_io_normal:
+		efct_device_prep_for_recover(efct);
+		rc = PCI_ERS_RESULT_CAN_RECOVER;
+		break;
+	case pci_channel_io_frozen:
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	case pci_channel_io_perm_failure:
+		efct_device_detach(efct);
+		rc = PCI_ERS_RESULT_DISCONNECT;
+		break;
+	default:
+		efc_log_debug(efct, "Unknown PCI error state:0x%x\n",
+			       state);
+		efct_device_prep_for_reset(efct, pdev);
+		rc = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	}
+
+	return rc;
+}
+
+static pci_ers_result_t
+efct_pci_io_slot_reset(struct pci_dev *pdev)
+{
+	int rc;
+	struct efct *efct = pci_get_drvdata(pdev);
+
+	rc = pci_enable_device_mem(pdev);
+	if (rc) {
+		efc_log_err(efct,
+			     "failed to re-enable PCI device after reset.\n");
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	/*
+	 * pci_restore_state() clears the device saved_state flag, so
+	 * the restored state must be saved again.
+	 */
+
+	pci_save_state(pdev);
+
+	pci_set_master(pdev);
+
+	rc = efct_setup_msix(efct, efct->n_msix_vec);
+	if (rc)
+		efc_log_err(efct, "rc %d returned, IRQ allocation failed\n",
+			    rc);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void
+efct_pci_io_resume(struct pci_dev *pdev)
+{
+	struct efct *efct = pci_get_drvdata(pdev);
+
+	/* Perform device reset */
+	efct_device_detach(efct);
+	/* Bring device to online*/
+	efct_device_attach(efct);
+}
+
+MODULE_DEVICE_TABLE(pci, efct_pci_table);
+
+static const struct pci_error_handlers efct_pci_err_handler = {
+	.error_detected = efct_pci_io_error_detected,
+	.slot_reset = efct_pci_io_slot_reset,
+	.resume = efct_pci_io_resume,
+};
+
+static struct pci_driver efct_pci_driver = {
+	.name		= EFCT_DRIVER_NAME,
+	.id_table	= efct_pci_table,
+	.probe		= efct_pci_probe,
+	.remove		= efct_pci_remove,
+	.err_handler	= &efct_pci_err_handler,
+};
+
+static int __init efct_init(void)
+{
+	int rc;
+
+	rc = efct_device_init();
+	if (rc) {
+		pr_err("efct_device_init failed rc=%d\n", rc);
+		return rc;
+	}
+
+	rc = pci_register_driver(&efct_pci_driver);
+	if (rc) {
+		pr_err("pci_register_driver failed rc=%d\n", rc);
+		efct_device_shutdown();
+	}
+
+	return rc;
+}
+
+static void __exit efct_exit(void)
+{
+	pci_unregister_driver(&efct_pci_driver);
+	efct_device_shutdown();
+}
+
+module_init(efct_init);
+module_exit(efct_exit);
+MODULE_VERSION(EFCT_DRIVER_VERSION);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Broadcom");
diff --git a/drivers/scsi/elx/efct/efct_driver.h b/drivers/scsi/elx/efct/efct_driver.h
new file mode 100644
index 000000000000..dab8eac4f243
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_driver.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_DRIVER_H__)
+#define __EFCT_DRIVER_H__
+
+/***************************************************************************
+ * OS specific includes
+ */
+#include <stdarg.h>
+#include <linux/module.h>
+#include <linux/debugfs.h>
+#include <linux/firmware.h>
+#include "../include/efc_common.h"
+#include "../libefc/efclib.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+#include "efct_xport.h"
+
+#define EFCT_DRIVER_NAME			"efct"
+#define EFCT_DRIVER_VERSION			"1.0.0.0"
+
+/* EFCT_DEFAULT_FILTER -
+ * MRQ filter to segregate the IO flow.
+ */
+#define EFCT_DEFAULT_FILTER			"0x01ff22ff,0,0,0"
+
+/* EFCT_OS_MAX_ISR_TIME_MSEC -
+ * maximum time driver code should spend in an interrupt
+ * or kernel thread context without yielding
+ */
+#define EFCT_OS_MAX_ISR_TIME_MSEC		1000
+
+#define EFCT_FC_MAX_SGL				64
+#define EFCT_FC_DIF_SEED			0
+
+/* Watermark */
+#define EFCT_WATERMARK_HIGH_PCT			90
+#define EFCT_WATERMARK_LOW_PCT			80
+#define EFCT_IO_WATERMARK_PER_INITIATOR		8
+
+#define EFCT_PCI_MAX_REGS			6
+#define MAX_PCI_INTERRUPTS			16
+
+struct efct_intr_context {
+	struct efct		*efct;
+	u32			index;
+};
+
+struct efct {
+	struct pci_dev			*pci;
+	void __iomem			*reg[EFCT_PCI_MAX_REGS];
+
+	u32				n_msix_vec;
+	bool				attached;
+	bool				soft_wwn_enable;
+	u8				efct_req_fw_upgrade;
+	struct efct_intr_context	intr_context[MAX_PCI_INTERRUPTS];
+	u32				numa_node;
+
+	char				name[EFC_NAME_LENGTH];
+	u32				instance_index;
+	struct list_head		list_entry;
+	struct efct_scsi_tgt		tgt_efct;
+	struct efct_xport		*xport;
+	struct efc			*efcport;
+	struct Scsi_Host		*shost;
+	int				logmask;
+	u32				max_isr_time_msec;
+
+	const char			*desc;
+
+	const char			*model;
+
+	struct efct_hw			hw;
+
+	u32				rq_selection_policy;
+	char				*filter_def;
+	int				topology;
+
+	/* Look up for target node */
+	struct xarray			lookup;
+
+	/*
+	 * Target IO timer value:
+	 * Zero: target command timeout disabled.
+	 * Non-zero: Timeout value, in seconds, for target commands
+	 */
+	u32				target_io_timer_sec;
+
+	int				speed;
+	struct dentry			*sess_debugfs_dir;
+};
+
+#define FW_WRITE_BUFSIZE		(64 * 1024)
+
+struct efct_fw_write_result {
+	struct completion done;
+	int status;
+	u32 actual_xfer;
+	u32 change_status;
+};
+
+extern struct list_head			efct_devices;
+
+#endif /* __EFCT_DRIVER_H__ */
diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
new file mode 100644
index 000000000000..2c15468bae4b
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -0,0 +1,1161 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_unsol.h"
+
+struct efct_mbox_rqst_ctx {
+	int (*callback)(struct efc *efc, int status, u8 *mqe, void *arg);
+	void *arg;
+};
+
+static enum efct_hw_rtn
+efct_hw_link_event_init(struct efct_hw *hw)
+{
+	hw->link.status = SLI4_LINK_STATUS_MAX;
+	hw->link.topology = SLI4_LINK_TOPO_NONE;
+	hw->link.medium = SLI4_LINK_MEDIUM_MAX;
+	hw->link.speed = 0;
+	hw->link.loop_map = NULL;
+	hw->link.fc_id = U32_MAX;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_read_max_dump_size(struct efct_hw *hw)
+{
+	u8 buf[SLI4_BMBX_SIZE];
+	struct efct *efct = hw->os;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	struct sli4_rsp_cmn_set_dump_location *rsp;
+
+	/* attempt to determine the dump size for function 0 only. */
+	if (PCI_FUNC(efct->pci->devfn) != 0)
+		return rc;
+
+	if (sli_cmd_common_set_dump_location(&hw->sli, buf, 1, 0, NULL, 0))
+		return EFC_FAIL;
+
+	rsp = (struct sli4_rsp_cmn_set_dump_location *)
+	      (buf + offsetof(struct sli4_cmd_sli_config, payload.embed));
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_debug(hw->os, "set dump location cmd failed\n");
+		return rc;
+	}
+
+	hw->dump_size =
+	  le32_to_cpu(rsp->buffer_length_dword) & SLI4_CMN_SET_DUMP_BUFFER_LEN;
+
+	efc_log_debug(hw->os, "Dump size %x\n", hw->dump_size);
+
+	return rc;
+}
+
+static int
+__efct_read_topology_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+	struct sli4_cmd_read_topology *read_topo =
+				(struct sli4_cmd_read_topology *)mqe;
+	u8 speed;
+	struct efc_domain_record drec = {0};
+	struct efct *efct = hw->os;
+
+	if (status || le16_to_cpu(read_topo->hdr.status)) {
+		efc_log_debug(hw->os, "bad status cqe=%#x mqe=%#x\n",
+			       status,
+			       le16_to_cpu(read_topo->hdr.status));
+		return EFC_FAIL;
+	}
+
+	switch (le32_to_cpu(read_topo->dw2_attentype) &
+		SLI4_READTOPO_ATTEN_TYPE) {
+	case SLI4_READ_TOPOLOGY_LINK_UP:
+		hw->link.status = SLI4_LINK_STATUS_UP;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_DOWN:
+		hw->link.status = SLI4_LINK_STATUS_DOWN;
+		break;
+	case SLI4_READ_TOPOLOGY_LINK_NO_ALPA:
+		hw->link.status = SLI4_LINK_STATUS_NO_ALPA;
+		break;
+	default:
+		hw->link.status = SLI4_LINK_STATUS_MAX;
+		break;
+	}
+
+	switch (read_topo->topology) {
+	case SLI4_READ_TOPO_NON_FC_AL:
+		hw->link.topology = SLI4_LINK_TOPO_NON_FC_AL;
+		break;
+	case SLI4_READ_TOPO_FC_AL:
+		hw->link.topology = SLI4_LINK_TOPO_FC_AL;
+		if (hw->link.status == SLI4_LINK_STATUS_UP)
+			hw->link.loop_map = hw->loop_map.virt;
+		hw->link.fc_id = read_topo->acquired_al_pa;
+		break;
+	default:
+		hw->link.topology = SLI4_LINK_TOPO_MAX;
+		break;
+	}
+
+	hw->link.medium = SLI4_LINK_MEDIUM_FC;
+
+	speed = (le32_to_cpu(read_topo->currlink_state) &
+		 SLI4_READTOPO_LINKSTATE_SPEED) >> 8;
+	switch (speed) {
+	case SLI4_READ_TOPOLOGY_SPEED_1G:
+		hw->link.speed =  1 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_2G:
+		hw->link.speed =  2 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_4G:
+		hw->link.speed =  4 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_8G:
+		hw->link.speed =  8 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_16G:
+		hw->link.speed = 16 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_32G:
+		hw->link.speed = 32 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_64G:
+		hw->link.speed = 64 * 1000;
+		break;
+	case SLI4_READ_TOPOLOGY_SPEED_128G:
+		hw->link.speed = 128 * 1000;
+		break;
+	}
+
+	drec.speed = hw->link.speed;
+	drec.fc_id = hw->link.fc_id;
+	drec.is_nport = true;
+	efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND, &drec);
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_cb_link(void *ctx, void *e)
+{
+	struct efct_hw *hw = ctx;
+	struct sli4_link_event *event = e;
+	struct efc_domain *d = NULL;
+	int rc = EFC_SUCCESS;
+	struct efct *efct = hw->os;
+
+	efct_hw_link_event_init(hw);
+
+	switch (event->status) {
+	case SLI4_LINK_STATUS_UP:
+
+		hw->link = *event;
+		efct->efcport->link_status = EFC_LINK_STATUS_UP;
+
+		if (event->topology == SLI4_LINK_TOPO_NON_FC_AL) {
+			struct efc_domain_record drec = {0};
+
+			efc_log_info(hw->os, "Link Up, NPORT, speed is %d\n",
+				      event->speed);
+			drec.speed = event->speed;
+			drec.fc_id = event->fc_id;
+			drec.is_nport = true;
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_FOUND,
+				      &drec);
+		} else if (event->topology == SLI4_LINK_TOPO_FC_AL) {
+			u8 buf[SLI4_BMBX_SIZE];
+
+			efc_log_info(hw->os, "Link Up, LOOP, speed is %d\n",
+				      event->speed);
+
+			if (!sli_cmd_read_topology(&hw->sli, buf,
+						   &hw->loop_map)) {
+				rc = efct_hw_command(hw, buf, EFCT_CMD_NOWAIT,
+						__efct_read_topology_cb, NULL);
+			}
+
+			if (rc)
+				efc_log_debug(hw->os, "READ_TOPOLOGY failed\n");
+		} else {
+			efc_log_info(hw->os,
+				     "Link Up, unsupported topology (%#x), speed is %d\n",
+				     event->topology, event->speed);
+		}
+		break;
+	case SLI4_LINK_STATUS_DOWN:
+		efc_log_info(hw->os, "Link down\n");
+
+		hw->link.status = event->status;
+		efct->efcport->link_status = EFC_LINK_STATUS_DOWN;
+
+		d = efct->efcport->domain;
+		if (d)
+			efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST, d);
+		break;
+	default:
+		efc_log_debug(hw->os, "unhandled link status %#x\n",
+			      event->status);
+		break;
+	}
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev)
+{
+	u32 i, max_sgl, cpus;
+
+	if (hw->hw_setup_called)
+		return EFCT_HW_RTN_SUCCESS;
+
+	/*
+	 * efct_hw_init() relies on NULL pointers indicating that a structure
+	 * needs allocation. If a structure is non-NULL, efct_hw_init() won't
+	 * free/realloc that memory
+	 */
+	memset(hw, 0, sizeof(struct efct_hw));
+
+	hw->hw_setup_called = true;
+
+	hw->os = os;
+
+	mutex_init(&hw->bmbx_lock);
+	spin_lock_init(&hw->cmd_lock);
+	INIT_LIST_HEAD(&hw->cmd_head);
+	INIT_LIST_HEAD(&hw->cmd_pending);
+	hw->cmd_head_count = 0;
+
+	/* Create mailbox command ctx pool */
+	hw->cmd_ctx_pool = mempool_create_kmalloc_pool(EFCT_CMD_CTX_POOL_SZ,
+					sizeof(struct efct_command_ctx));
+	if (!hw->cmd_ctx_pool) {
+		efc_log_err(hw->os, "failed to allocate mailbox command ctx pool\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Create mailbox request ctx pool for library callback */
+	hw->mbox_rqst_pool = mempool_create_kmalloc_pool(EFCT_CMD_CTX_POOL_SZ,
+					sizeof(struct efct_mbox_rqst_ctx));
+	if (!hw->mbox_rqst_pool) {
+		efc_log_err(hw->os, "failed to allocate mbox request pool\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	spin_lock_init(&hw->io_lock);
+	spin_lock_init(&hw->io_abort_lock);
+	INIT_LIST_HEAD(&hw->io_inuse);
+	INIT_LIST_HEAD(&hw->io_free);
+	INIT_LIST_HEAD(&hw->io_wait_free);
+
+	atomic_set(&hw->io_alloc_failed_count, 0);
+
+	hw->config.speed = SLI4_LINK_SPEED_AUTO_16_8_4;
+	if (sli_setup(&hw->sli, hw->os, pdev, ((struct efct *)os)->reg)) {
+		efc_log_err(hw->os, "SLI setup failed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	efct_hw_link_event_init(hw);
+
+	sli_callback(&hw->sli, SLI4_CB_LINK, efct_hw_cb_link, hw);
+
+	/*
+	 * Set all the queue sizes to the maximum allowed.
+	 */
+	for (i = 0; i < ARRAY_SIZE(hw->num_qentries); i++)
+		hw->num_qentries[i] = hw->sli.qinfo.max_qentries[i];
+	/*
+	 * Adjust the size of the WQs so that the CQ is twice as big as
+	 * the WQ to allow for 2 completions per IO. This allows us to
+	 * handle multi-phase as well as aborts.
+	 */
+	hw->num_qentries[SLI4_QTYPE_WQ] = hw->num_qentries[SLI4_QTYPE_CQ] / 2;
+
+	/*
+	 * The RQ assignment for RQ pair mode.
+	 */
+
+	hw->config.rq_default_buffer_size = EFCT_HW_RQ_SIZE_PAYLOAD;
+	hw->config.n_io = hw->sli.ext[SLI4_RSRC_XRI].size;
+
+	cpus = num_possible_cpus();
+	hw->config.n_eq = cpus > EFCT_HW_MAX_NUM_EQ ? EFCT_HW_MAX_NUM_EQ : cpus;
+
+	max_sgl = sli_get_max_sgl(&hw->sli) - SLI4_SGE_MAX_RESERVED;
+	max_sgl = (max_sgl > EFCT_FC_MAX_SGL) ? EFCT_FC_MAX_SGL : max_sgl;
+	hw->config.n_sgl = max_sgl;
+
+	(void)efct_hw_read_max_dump_size(hw);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static void
+efct_logfcfi(struct efct_hw *hw, u32 j, u32 i, u32 id)
+{
+	efc_log_info(hw->os,
+		      "REG_FCFI: filter[%d] %08X -> RQ[%d] id=%d\n",
+		     j, hw->config.filter_def[j], i, id);
+}
+
+static inline void
+efct_hw_init_free_io(struct efct_hw_io *io)
+{
+	/*
+	 * Set io->done to NULL, to avoid any callbacks, should
+	 * a completion be received for one of these IOs
+	 */
+	io->done = NULL;
+	io->abort_done = NULL;
+	io->status_saved = false;
+	io->abort_in_progress = false;
+	io->type = 0xFFFF;
+	io->wq = NULL;
+}
+
+static bool efct_hw_iotype_is_originator(u16 io_type)
+{
+	switch (io_type) {
+	case EFCT_HW_FC_CT:
+	case EFCT_HW_ELS_REQ:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static void
+efct_hw_io_restore_sgl(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* Restore the default */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+}
+
+static void
+efct_hw_wq_process_io(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	struct sli4_fc_wcqe *wcqe = (void *)cqe;
+	u32	len = 0;
+	u32 ext = 0;
+
+	/* clear xbusy flag if WCQE[XB] is clear */
+	if (io->xbusy && (wcqe->flags & SLI4_WCQE_XB) == 0)
+		io->xbusy = false;
+
+	/* get extended CQE status */
+	switch (io->type) {
+	case EFCT_HW_BLS_ACC:
+	case EFCT_HW_BLS_RJT:
+		break;
+	case EFCT_HW_ELS_REQ:
+		sli_fc_els_did(&hw->sli, cqe, &ext);
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_ELS_RSP:
+	case EFCT_HW_FC_CT_RSP:
+		break;
+	case EFCT_HW_FC_CT:
+		len = sli_fc_response_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_WRITE:
+	case EFCT_HW_IO_TARGET_READ:
+		len = sli_fc_io_length(&hw->sli, cqe);
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		break;
+	case EFCT_HW_IO_DNRX_REQUEUE:
+		/* release the count for re-posting the buffer */
+		/* efct_hw_io_free(hw, io); */
+		break;
+	default:
+		efc_log_err(hw->os, "unhandled io type %#x for XRI 0x%x\n",
+			      io->type, io->indicator);
+		break;
+	}
+	if (status) {
+		ext = sli_fc_ext_status(&hw->sli, cqe);
+		/*
+		 * If we're not an originator IO, and XB is set, then issue
+		 * abort for the IO from within the HW
+		 */
+		if ((!efct_hw_iotype_is_originator(io->type)) &&
+		    wcqe->flags & SLI4_WCQE_XB) {
+			enum efct_hw_rtn rc;
+
+			efc_log_debug(hw->os, "aborting xri=%#x tag=%#x\n",
+				       io->indicator, io->reqtag);
+
+			/*
+			 * Because targets may send a response when the IO
+			 * completes using the same XRI, we must wait for the
+			 * XRI_ABORTED CQE to issue the IO callback
+			 */
+			rc = efct_hw_io_abort(hw, io, false, NULL, NULL);
+			if (rc == EFCT_HW_RTN_SUCCESS) {
+				/*
+				 * latch status to return after abort is
+				 * complete
+				 */
+				io->status_saved = true;
+				io->saved_status = status;
+				io->saved_ext = ext;
+				io->saved_len = len;
+				goto exit_efct_hw_wq_process_io;
+			} else if (rc == EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+				/*
+				 * Already being aborted by someone else (ABTS
+				 * perhaps). Just return the original error.
+				 */
+				efc_log_debug(hw->os,
+					      "abort in progress xri=%#x tag=%#x\n",
+					      io->indicator, io->reqtag);
+			} else {
+				/*
+				 * Failed to abort for some other reason; log
+				 * the error.
+				 */
+				efc_log_debug(hw->os,
+					      "Failed to abort xri=%#x tag=%#x rc=%d\n",
+					      io->indicator, io->reqtag, rc);
+			}
+		}
+	}
+
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+
+		io->done = NULL;
+
+		if (io->status_saved) {
+			/* use latched status if exists */
+			status = io->saved_status;
+			len = io->saved_len;
+			ext = io->saved_ext;
+			io->status_saved = false;
+		}
+
+		/* Restore default SGL */
+		efct_hw_io_restore_sgl(hw, io);
+		done(io, len, status, ext, io->arg);
+	}
+
+exit_efct_hw_wq_process_io:
+	return;
+}
+
+static enum efct_hw_rtn
+efct_hw_setup_io(struct efct_hw *hw)
+{
+	u32	i = 0;
+	struct efct_hw_io	*io = NULL;
+	uintptr_t	xfer_virt = 0;
+	uintptr_t	xfer_phys = 0;
+	u32	index;
+	bool new_alloc = true;
+	struct efc_dma *dma;
+	struct efct *efct = hw->os;
+
+	if (!hw->io) {
+		hw->io = kcalloc(hw->config.n_io, sizeof(io), GFP_KERNEL);
+		if (!hw->io)
+			return EFCT_HW_RTN_NO_MEMORY;
+
+		for (i = 0; i < hw->config.n_io; i++) {
+			hw->io[i] = kzalloc(sizeof(*io), GFP_KERNEL);
+			if (!hw->io[i])
+				goto error;
+		}
+
+		/* Create WQE buffs for IO */
+		hw->wqe_buffs = kzalloc((hw->config.n_io * hw->sli.wqe_size),
+					 GFP_KERNEL);
+		if (!hw->wqe_buffs)
+			goto error;
+
+	} else {
+		/* re-use existing IOs, including SGLs */
+		new_alloc = false;
+	}
+
+	if (new_alloc) {
+		dma = &hw->xfer_rdy;
+		dma->size = sizeof(struct fcp_txrdy) * hw->config.n_io;
+		dma->virt = dma_alloc_coherent(&efct->pci->dev,
+					       dma->size, &dma->phys, GFP_DMA);
+		if (!dma->virt)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+	xfer_virt = (uintptr_t)hw->xfer_rdy.virt;
+	xfer_phys = hw->xfer_rdy.phys;
+
+	/* Initialize the pool of HW IO objects */
+	for (i = 0; i < hw->config.n_io; i++) {
+		struct hw_wq_callback *wqcb;
+
+		io = hw->io[i];
+
+		/* initialize IO fields */
+		io->hw = hw;
+
+		/* Assign a WQE buff */
+		io->wqe.wqebuf = &hw->wqe_buffs[i * hw->sli.wqe_size];
+
+		/* Allocate the request tag for this IO */
+		wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_io, io);
+		if (!wqcb) {
+			efc_log_err(hw->os, "can't allocate request tag\n");
+			return EFCT_HW_RTN_NO_RESOURCES;
+		}
+		io->reqtag = wqcb->instance_index;
+
+		/* Now for the fields that are initialized on each free */
+		efct_hw_init_free_io(io);
+
+		/* The XB flag isn't cleared on IO free, so init to zero */
+		io->xbusy = 0;
+
+		if (sli_resource_alloc(&hw->sli, SLI4_RSRC_XRI,
+				       &io->indicator, &index)) {
+			efc_log_err(hw->os,
+				     "sli_resource_alloc failed @ %d\n", i);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+		if (new_alloc) {
+			dma = &io->def_sgl;
+			dma->size = hw->config.n_sgl *
+					sizeof(struct sli4_sge);
+			dma->virt = dma_alloc_coherent(&efct->pci->dev,
+						       dma->size, &dma->phys,
+						       GFP_DMA);
+			if (!dma->virt) {
+				efc_log_err(hw->os, "dma_alloc fail %d\n", i);
+				memset(&io->def_sgl, 0,
+				       sizeof(struct efc_dma));
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+		}
+		io->def_sgl_count = hw->config.n_sgl;
+		io->sgl = &io->def_sgl;
+		io->sgl_count = io->def_sgl_count;
+
+		if (hw->xfer_rdy.size) {
+			io->xfer_rdy.virt = (void *)xfer_virt;
+			io->xfer_rdy.phys = xfer_phys;
+			io->xfer_rdy.size = sizeof(struct fcp_txrdy);
+
+			xfer_virt += sizeof(struct fcp_txrdy);
+			xfer_phys += sizeof(struct fcp_txrdy);
+		}
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+error:
+	for (i = 0; i < hw->config.n_io && hw->io[i]; i++) {
+		kfree(hw->io[i]);
+		hw->io[i] = NULL;
+	}
+
+	kfree(hw->io);
+	hw->io = NULL;
+
+	return EFCT_HW_RTN_NO_MEMORY;
+}
+
+static enum efct_hw_rtn
+efct_hw_init_prereg_io(struct efct_hw *hw)
+{
+	u32 i, idx = 0;
+	struct efct_hw_io *io = NULL;
+	u8 cmd[SLI4_BMBX_SIZE];
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32 n_rem;
+	u32 n = 0;
+	u32 sgls_per_request = 256;
+	struct efc_dma **sgls = NULL;
+	struct efc_dma req;
+	struct efct *efct = hw->os;
+
+	sgls = kmalloc_array(sgls_per_request, sizeof(*sgls), GFP_KERNEL);
+	if (!sgls)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	memset(&req, 0, sizeof(struct efc_dma));
+	req.size = 32 + sgls_per_request * 16;
+	req.virt = dma_alloc_coherent(&efct->pci->dev, req.size, &req.phys,
+					 GFP_DMA);
+	if (!req.virt) {
+		kfree(sgls);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	for (n_rem = hw->config.n_io; n_rem; n_rem -= n) {
+		/* Copy address of SGL's into local sgls[] array, break
+		 * out if the xri is not contiguous.
+		 */
+		u32 n_batch = min_t(u32, sgls_per_request, n_rem);
+
+		for (n = 0; n < n_batch; n++) {
+			/* Check that we have contiguous xri values */
+			if (n > 0) {
+				if (hw->io[idx + n]->indicator !=
+				    hw->io[idx + n - 1]->indicator + 1)
+					break;
+			}
+
+			sgls[n] = hw->io[idx + n]->sgl;
+		}
+
+		if (sli_cmd_post_sgl_pages(&hw->sli, cmd,
+				hw->io[idx]->indicator,	n, sgls, NULL, &req)) {
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+
+		if (efct_hw_command(hw, cmd, EFCT_CMD_POLL, NULL, NULL)) {
+			rc = EFCT_HW_RTN_ERROR;
+			efc_log_err(hw->os, "SGL post failed\n");
+			break;
+		}
+
+		/* Add to tail if successful */
+		for (i = 0; i < n; i++, idx++) {
+			io = hw->io[idx];
+			io->state = EFCT_HW_IO_STATE_FREE;
+			INIT_LIST_HEAD(&io->list_entry);
+			list_add_tail(&io->list_entry, &hw->io_free);
+		}
+	}
+
+	dma_free_coherent(&efct->pci->dev, req.size, req.virt, req.phys);
+	kfree(sgls);
+
+	return rc;
+}
+
+static enum efct_hw_rtn
+efct_hw_init_io(struct efct_hw *hw)
+{
+	u32 i, idx = 0;
+	bool prereg = false;
+	struct efct_hw_io *io = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	prereg = hw->sli.params.sgl_pre_registered;
+
+	if (prereg)
+		return efct_hw_init_prereg_io(hw);
+
+	for (i = 0; i < hw->config.n_io; i++, idx++) {
+		io = hw->io[idx];
+		io->state = EFCT_HW_IO_STATE_FREE;
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_free);
+	}
+
+	return rc;
+}
+
+static enum efct_hw_rtn
+efct_hw_config_set_fdt_xfer_hint(struct efct_hw *hw, u32 fdt_xfer_hint)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint param;
+
+	memset(&param, 0, sizeof(param));
+	param.fdt_xfer_hint = cpu_to_le32(fdt_xfer_hint);
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf,
+		SLI4_SET_FEATURES_SET_FTD_XFER_HINT, sizeof(param), &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_warn(hw->os, "set FDT hint %d failed: %d\n",
+			      fdt_xfer_hint, rc);
+	else
+		efc_log_info(hw->os, "Set FDT transfer hint to %d\n",
+			     le32_to_cpu(param.fdt_xfer_hint));
+
+	return rc;
+}
+
+static int
+efct_hw_config_rq(struct efct_hw *hw)
+{
+	u32 min_rq_count, i, rc;
+	struct sli4_cmd_rq_cfg rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG];
+	u8 buf[SLI4_BMBX_SIZE];
+
+	efc_log_info(hw->os, "using REG_FCFI standard\n");
+
+	/*
+	 * Set the filter match/mask values from hw's
+	 * filter_def values
+	 */
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		rq_cfg[i].rq_id = cpu_to_le16(0xffff);
+		rq_cfg[i].r_ctl_mask = (u8)hw->config.filter_def[i];
+		rq_cfg[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 8);
+		rq_cfg[i].type_mask = (u8)(hw->config.filter_def[i] >> 16);
+		rq_cfg[i].type_match = (u8)(hw->config.filter_def[i] >> 24);
+	}
+
+	/*
+	 * Update the rq_id's of the FCF configuration
+	 * (don't update more than the number of rq_cfg
+	 * elements)
+	 */
+	min_rq_count = (hw->hw_rq_count < SLI4_CMD_REG_FCFI_NUM_RQ_CFG)	?
+			hw->hw_rq_count : SLI4_CMD_REG_FCFI_NUM_RQ_CFG;
+	for (i = 0; i < min_rq_count; i++) {
+		struct hw_rq *rq = hw->hw_rq[i];
+		u32 j;
+
+		for (j = 0; j < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; j++) {
+			u32 mask = (rq->filter_mask != 0) ?
+				rq->filter_mask : 1;
+
+			if (!(mask & (1U << j)))
+				continue;
+
+			rq_cfg[i].rq_id = cpu_to_le16(rq->hdr->id);
+			efct_logfcfi(hw, j, i, rq->hdr->id);
+		}
+	}
+
+	rc = EFCT_HW_RTN_ERROR;
+	if (!sli_cmd_reg_fcfi(&hw->sli, buf, 0, rq_cfg))
+		rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_err(hw->os, "FCFI registration failed\n");
+		return rc;
+	}
+	hw->fcf_indicator =
+		le16_to_cpu(((struct sli4_cmd_reg_fcfi *)buf)->fcfi);
+
+	return rc;
+}
+
+static int
+efct_hw_config_mrq(struct efct_hw *hw, u8 mode, u16 fcf_index)
+{
+	u8 buf[SLI4_BMBX_SIZE], mrq_bitmask = 0;
+	struct hw_rq *rq;
+	struct sli4_cmd_reg_fcfi_mrq *rsp = NULL;
+	struct sli4_cmd_rq_cfg rq_filter[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG];
+	u32 rc, i;
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		goto issue_cmd;
+
+	/* Set the filter match/mask values from hw's filter_def values */
+	for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) {
+		rq_filter[i].rq_id = cpu_to_le16(0xffff);
+		rq_filter[i].type_mask = (u8)hw->config.filter_def[i];
+		rq_filter[i].type_match = (u8)(hw->config.filter_def[i] >> 8);
+		rq_filter[i].r_ctl_mask = (u8)(hw->config.filter_def[i] >> 16);
+		rq_filter[i].r_ctl_match = (u8)(hw->config.filter_def[i] >> 24);
+	}
+
+	rq = hw->hw_rq[0];
+	rq_filter[0].rq_id = cpu_to_le16(rq->hdr->id);
+	rq_filter[1].rq_id = cpu_to_le16(rq->hdr->id);
+
+	mrq_bitmask = 0x2;
+issue_cmd:
+	efc_log_debug(hw->os, "Issue reg_fcfi_mrq count:%d policy:%d mode:%d\n",
+			hw->hw_rq_count, hw->config.rq_selection_policy, mode);
+	/* Invoke REG_FCFI_MRQ */
+	rc = sli_cmd_reg_fcfi_mrq(&hw->sli, buf, mode, fcf_index,
+				 hw->config.rq_selection_policy, mrq_bitmask,
+				 hw->hw_mrq_count, rq_filter);
+	if (rc) {
+		efc_log_err(hw->os, "sli_cmd_reg_fcfi_mrq() failed\n");
+		return EFC_FAIL;
+	}
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+
+	rsp = (struct sli4_cmd_reg_fcfi_mrq *)buf;
+
+	if (rc || le16_to_cpu(rsp->hdr.status)) {
+		efc_log_err(hw->os, "FCFI MRQ reg failed. cmd=%x status=%x\n",
+			    rsp->hdr.command, le16_to_cpu(rsp->hdr.status));
+		return EFC_FAIL;
+	}
+
+	if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE)
+		hw->fcf_indicator = le16_to_cpu(rsp->fcfi);
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_hw_queue_hash_add(struct efct_queue_hash *hash,
+		       u16 id, u16 index)
+{
+	u32 hash_index = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the number of queues, then we
+	 * never have to worry about an infinite loop.
+	 */
+	while (hash[hash_index].in_use)
+		hash_index = (hash_index + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/* not used, claim the entry */
+	hash[hash_index].id = id;
+	hash[hash_index].in_use = true;
+	hash[hash_index].index = index;
+}
+
+static enum efct_hw_rtn
+efct_hw_config_sli_port_health_check(struct efct_hw *hw, u8 query, u8 enable)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u8 buf[SLI4_BMBX_SIZE];
+	struct sli4_rqst_cmn_set_features_health_check param;
+	u32 health_check_flag = 0;
+
+	memset(&param, 0, sizeof(param));
+
+	if (enable)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_ENABLE;
+
+	if (query)
+		health_check_flag |= SLI4_RQ_HEALTH_CHECK_QUERY;
+
+	param.health_check_dword = cpu_to_le32(health_check_flag);
+
+	/* build the set_features command */
+	sli_cmd_common_set_features(&hw->sli, buf,
+		SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK, sizeof(param), &param);
+
+	rc = efct_hw_command(hw, buf, EFCT_CMD_POLL, NULL, NULL);
+	if (rc)
+		efc_log_err(hw->os, "efct_hw_command returns %d\n", rc);
+	else
+		efc_log_debug(hw->os, "SLI Port Health Check is enabled\n");
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_init(struct efct_hw *hw)
+{
+	enum efct_hw_rtn rc;
+	u32 i = 0;
+	int rem_count;
+	unsigned long flags = 0;
+	struct efct_hw_io *temp;
+	struct efc_dma *dma;
+
+	/*
+	 * Make sure the command lists are empty. If this is start-of-day,
+	 * they'll be empty since they were just initialized in efct_hw_setup.
+	 * If we've just gone through a reset, the command and command pending
+	 * lists should have been cleaned up as part of the reset
+	 * (efct_hw_reset()).
+	 */
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		efc_log_err(hw->os, "command found on cmd list\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	if (!list_empty(&hw->cmd_pending)) {
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		efc_log_err(hw->os, "command found on pending list\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	/* Free RQ buffers if previously allocated */
+	efct_hw_rx_free(hw);
+
+	/*
+	 * The IO queues must be initialized here for the reset case. The
+	 * efct_hw_init_io() function will re-add the IOs to the free list.
+	 * The cmd_head list should be OK since we free all entries in
+	 * efct_hw_command_cancel() that is called in the efct_hw_reset().
+	 */
+
+	/* If we are in this function due to a reset, there may be stale items
+	 * on lists that need to be removed.  Clean them up.
+	 */
+	rem_count = 0;
+	while ((!list_empty(&hw->io_wait_free))) {
+		rem_count++;
+		temp = list_first_entry(&hw->io_wait_free, struct efct_hw_io,
+					list_entry);
+		list_del_init(&temp->list_entry);
+	}
+	if (rem_count > 0)
+		efc_log_debug(hw->os, "rmvd %d items from io_wait_free list\n",
+			      rem_count);
+
+	rem_count = 0;
+	while ((!list_empty(&hw->io_inuse))) {
+		rem_count++;
+		temp = list_first_entry(&hw->io_inuse, struct efct_hw_io,
+					list_entry);
+		list_del_init(&temp->list_entry);
+	}
+	if (rem_count > 0)
+		efc_log_debug(hw->os, "rmvd %d items from io_inuse list\n",
+			      rem_count);
+
+	rem_count = 0;
+	while ((!list_empty(&hw->io_free))) {
+		rem_count++;
+		temp = list_first_entry(&hw->io_free, struct efct_hw_io,
+					list_entry);
+		list_del_init(&temp->list_entry);
+	}
+	if (rem_count > 0)
+		efc_log_debug(hw->os, "rmvd %d items from io_free list\n",
+			      rem_count);
+
+	/* If MRQ is not required, make sure we don't request the feature. */
+	if (hw->config.n_rq == 1)
+		hw->sli.features &= (~SLI4_REQFEAT_MRQP);
+
+	if (sli_init(&hw->sli)) {
+		efc_log_err(hw->os, "SLI failed to initialize\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->sliport_healthcheck) {
+		rc = efct_hw_config_sli_port_health_check(hw, 0, 1);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "Enable port Health check fail\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * Set FDT transfer hint, only works on Lancer
+	 */
+	if (hw->sli.if_type == SLI4_INTF_IF_TYPE_2) {
+		/*
+		 * Non-fatal error. In particular, we can disregard failure to
+		 * set EFCT_HW_FDT_XFER_HINT on devices with legacy firmware
+		 * that do not support EFCT_HW_FDT_XFER_HINT feature.
+		 */
+		efct_hw_config_set_fdt_xfer_hint(hw, EFCT_HW_FDT_XFER_HINT);
+	}
+
+	/* zero the hashes */
+	memset(hw->cq_hash, 0, sizeof(hw->cq_hash));
+	efc_log_debug(hw->os, "Max CQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_CQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->rq_hash, 0, sizeof(hw->rq_hash));
+	efc_log_debug(hw->os, "Max RQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_RQ, EFCT_HW_Q_HASH_SIZE);
+
+	memset(hw->wq_hash, 0, sizeof(hw->wq_hash));
+	efc_log_debug(hw->os, "Max WQs %d, hash size = %d\n",
+		       EFCT_HW_MAX_NUM_WQ, EFCT_HW_Q_HASH_SIZE);
+
+	rc = efct_hw_init_queues(hw);
+	if (rc)
+		return rc;
+
+	rc = efct_hw_map_wq_cpu(hw);
+	if (rc)
+		return rc;
+
+	/* Allocate and post RQ buffers */
+	rc = efct_hw_rx_allocate(hw);
+	if (rc) {
+		efc_log_err(hw->os, "rx_allocate failed\n");
+		return rc;
+	}
+
+	rc = efct_hw_rx_post(hw);
+	if (rc) {
+		efc_log_err(hw->os, "WARNING - error posting RQ buffers\n");
+		return rc;
+	}
+
+	if (hw->config.n_eq == 1) {
+		rc = efct_hw_config_rq(hw);
+		if (rc) {
+			efc_log_err(hw->os, "config rq failed %d\n", rc);
+			return rc;
+		}
+	} else {
+		rc = efct_hw_config_mrq(hw, SLI4_CMD_REG_FCFI_SET_FCFI_MODE, 0);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "REG_FCFI_MRQ FCFI reg failed\n");
+			return rc;
+		}
+
+		rc = efct_hw_config_mrq(hw, SLI4_CMD_REG_FCFI_SET_MRQ_MODE, 0);
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "REG_FCFI_MRQ MRQ reg failed\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * Allocate the WQ request tag pool, if not previously allocated
+	 * (the request tag value is 16 bits, thus the pool allocation size
+	 * of 64k)
+	 */
+	hw->wq_reqtag_pool = efct_hw_reqtag_pool_alloc(hw);
+	if (!hw->wq_reqtag_pool) {
+		efc_log_err(hw->os, "efct_hw_reqtag_pool_alloc failed\n");
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	rc = efct_hw_setup_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO allocation failure\n");
+		return rc;
+	}
+
+	rc = efct_hw_init_io(hw);
+	if (rc) {
+		efc_log_err(hw->os, "IO initialization failure\n");
+		return rc;
+	}
+
+	dma = &hw->loop_map;
+	dma->size = SLI4_MIN_LOOP_MAP_BYTES;
+	dma->virt = dma_alloc_coherent(&hw->os->pci->dev, dma->size, &dma->phys,
+					GFP_DMA);
+	if (!dma->virt)
+		return EFCT_HW_RTN_ERROR;
+
+	/*
+	 * Arming the EQ allows (e.g.) interrupts when CQ completions write EQ
+	 * entries
+	 */
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_arm(&hw->sli, &hw->eq[i], true);
+
+	/*
+	 * Initialize RQ hash
+	 */
+	for (i = 0; i < hw->rq_count; i++)
+		efct_hw_queue_hash_add(hw->rq_hash, hw->rq[i].id, i);
+
+	/*
+	 * Initialize WQ hash
+	 */
+	for (i = 0; i < hw->wq_count; i++)
+		efct_hw_queue_hash_add(hw->wq_hash, hw->wq[i].id, i);
+
+	/*
+	 * Arming the CQ allows (e.g.) MQ completions to write CQ entries
+	 */
+	for (i = 0; i < hw->cq_count; i++) {
+		efct_hw_queue_hash_add(hw->cq_hash, hw->cq[i].id, i);
+		sli_queue_arm(&hw->sli, &hw->cq[i], true);
+	}
+
+	/* Set RQ process limit */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		struct hw_rq *rq = hw->hw_rq[i];
+
+		hw->cq[rq->cq->instance].proc_limit = hw->config.n_io / 2;
+	}
+
+	/* record the fact that the queues are functional */
+	hw->state = EFCT_HW_STATE_ACTIVE;
+	/*
+	 * Allocate a HW IO for send frame.
+	 */
+	hw->hw_wq[0]->send_frame_io = efct_hw_io_alloc(hw);
+	if (!hw->hw_wq[0]->send_frame_io)
+		efc_log_err(hw->os, "alloc for send_frame_io failed\n");
+
+	/* Initialize send frame sequence id */
+	atomic_set(&hw->send_frame_seq_id, 0);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_parse_filter(struct efct_hw *hw, void *value)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	char *p = NULL;
+	char *token;
+	u32 idx = 0;
+
+	for (idx = 0; idx < ARRAY_SIZE(hw->config.filter_def); idx++)
+		hw->config.filter_def[idx] = 0;
+
+	p = kstrdup(value, GFP_KERNEL);
+	if (!p || !*p) {
+		efc_log_err(hw->os, "filter string is empty or alloc failed\n");
+		kfree(p);
+		return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	idx = 0;
+	while ((token = strsep(&p, ",")) && *token) {
+		if (kstrtou32(token, 0, &hw->config.filter_def[idx++]))
+			efc_log_err(hw->os, "kstrtou32 failed\n");
+
+		if (!p || !*p)
+			break;
+
+		if (idx == ARRAY_SIZE(hw->config.filter_def))
+			break;
+	}
+	kfree(p);
+
+	return rc;
+}
+
+u64
+efct_get_wwnn(struct efct_hw *hw)
+{
+	struct sli4 *sli = &hw->sli;
+	u8 p[8];
+
+	memcpy(p, sli->wwnn, sizeof(p));
+	return get_unaligned_be64(p);
+}
+
+u64
+efct_get_wwpn(struct efct_hw *hw)
+{
+	struct sli4 *sli = &hw->sli;
+	u8 p[8];
+
+	memcpy(p, sli->wwpn, sizeof(p));
+	return get_unaligned_be64(p);
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index e4df0df5fe23..44f248356d42 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -600,4 +600,19 @@ struct efct_hw_grp_hdr {
 	u8			revision[32];
 };
 
+static inline int
+efct_hw_get_link_speed(struct efct_hw *hw) {
+	return hw->link.speed;
+}
+
+enum efct_hw_rtn
+efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev);
+enum efct_hw_rtn efct_hw_init(struct efct_hw *hw);
+enum efct_hw_rtn
+efct_hw_parse_filter(struct efct_hw *hw, void *value);
+uint64_t
+efct_get_wwnn(struct efct_hw *hw);
+uint64_t
+efct_get_wwpn(struct efct_hw *hw);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
new file mode 100644
index 000000000000..b19304d6febb
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -0,0 +1,519 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_unsol.h"
+
+struct efct_xport_post_node_event {
+	struct completion done;
+	atomic_t refcnt;
+	struct efc_node *node;
+	u32	evt;
+	void *context;
+};
+
+static struct dentry *efct_debugfs_root;
+static atomic_t efct_debugfs_count;
+
+static struct scsi_host_template efct_template = {
+	.module			= THIS_MODULE,
+	.name			= EFCT_DRIVER_NAME,
+	.supported_mode		= MODE_TARGET,
+};
+
+/* globals */
+static struct fc_function_template efct_xport_functions;
+static struct fc_function_template efct_vport_functions;
+
+static struct scsi_transport_template *efct_xport_fc_tt;
+static struct scsi_transport_template *efct_vport_fc_tt;
+
+struct efct_xport *
+efct_xport_alloc(struct efct *efct)
+{
+	struct efct_xport *xport;
+
+	xport = kzalloc(sizeof(*xport), GFP_KERNEL);
+	if (!xport)
+		return xport;
+
+	xport->efct = efct;
+	return xport;
+}
+
+static int
+efct_xport_init_debugfs(struct efct *efct)
+{
+	/* Setup efct debugfs root directory */
+	if (!efct_debugfs_root) {
+		efct_debugfs_root = debugfs_create_dir("efct", NULL);
+		atomic_set(&efct_debugfs_count, 0);
+		if (!efct_debugfs_root) {
+			efc_log_err(efct, "failed to create debugfs entry\n");
+			goto debugfs_fail;
+		}
+	}
+
+	/* Create a directory for sessions in root */
+	if (!efct->sess_debugfs_dir) {
+		efct->sess_debugfs_dir = debugfs_create_dir("sessions", efct_debugfs_root);
+		if (!efct->sess_debugfs_dir) {
+			efc_log_err(efct,
+				     "failed to create debugfs entry for sessions\n");
+			goto debugfs_fail;
+		}
+		atomic_inc(&efct_debugfs_count);
+	}
+
+	return EFC_SUCCESS;
+
+debugfs_fail:
+	return EFC_FAIL;
+}
+
+static void efct_xport_delete_debugfs(struct efct *efct)
+{
+	/* Remove session debugfs directory */
+	debugfs_remove(efct->sess_debugfs_dir);
+	efct->sess_debugfs_dir = NULL;
+	atomic_dec(&efct_debugfs_count);
+
+	if (atomic_read(&efct_debugfs_count) == 0) {
+		/* remove root debugfs directory */
+		debugfs_remove(efct_debugfs_root);
+		efct_debugfs_root = NULL;
+	}
+}
+
+int
+efct_xport_attach(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	int rc;
+
+	rc = efct_hw_setup(&efct->hw, efct, efct->pci);
+	if (rc) {
+		efc_log_err(efct, "%s: Can't setup hardware\n", efct->desc);
+		return rc;
+	}
+
+	efct_hw_parse_filter(&efct->hw, (void *)efct->filter_def);
+
+	xport->io_pool = efct_io_pool_create(efct, efct->hw.config.n_sgl);
+	if (!xport->io_pool) {
+		efc_log_err(efct, "Can't allocate IO pool\n");
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_xport_link_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_link_stat_counts *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_host_stats_cb(int status, u32 num_counters,
+			 struct efct_hw_host_stat_counts *counters, void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+
+	complete(&result->stats.done);
+}
+
+static void
+efct_xport_async_link_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_link_stat_counts *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.link_stats.link_failure_error_count =
+		counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
+	result->stats.link_stats.loss_of_sync_error_count =
+		counters[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter;
+	result->stats.link_stats.primitive_sequence_error_count =
+		counters[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter;
+	result->stats.link_stats.invalid_transmission_word_error_count =
+		counters[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter;
+	result->stats.link_stats.crc_error_count =
+		counters[EFCT_HW_LINK_STAT_CRC_COUNT].counter;
+}
+
+static void
+efct_xport_async_host_stats_cb(int status, u32 num_counters,
+			       struct efct_hw_host_stat_counts *counters,
+			       void *arg)
+{
+	union efct_xport_stats_u *result = arg;
+
+	result->stats.host_stats.transmit_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter;
+	result->stats.host_stats.receive_kbyte_count =
+		counters[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter;
+	result->stats.host_stats.transmit_frame_count =
+		counters[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter;
+	result->stats.host_stats.receive_frame_count =
+		counters[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter;
+}
+
+static void
+efct_xport_config_stats_timer(struct efct *efct);
+
+static void
+efct_xport_stats_timer_cb(struct timer_list *t)
+{
+	struct efct_xport *xport = from_timer(xport, t, stats_timer);
+	struct efct *efct = xport->efct;
+
+	efct_xport_config_stats_timer(efct);
+}
+
+static void
+efct_xport_config_stats_timer(struct efct *efct)
+{
+	u32 timeout = 3 * 1000;
+	struct efct_xport *xport = NULL;
+
+	if (!efct) {
+		pr_err("%s: failed to locate EFCT device\n", __func__);
+		return;
+	}
+
+	xport = efct->xport;
+	efct_hw_get_link_stats(&efct->hw, 0, 0, 0,
+			       efct_xport_async_link_stats_cb,
+			       &xport->fc_xport_stats);
+	efct_hw_get_host_stats(&efct->hw, 0, efct_xport_async_host_stats_cb,
+			       &xport->fc_xport_stats);
+
+	timer_setup(&xport->stats_timer,
+		    &efct_xport_stats_timer_cb, 0);
+	mod_timer(&xport->stats_timer,
+		  jiffies + msecs_to_jiffies(timeout));
+}
+
+int
+efct_xport_initialize(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+	int rc = 0;
+
+	/* Initialize io lists */
+	spin_lock_init(&xport->io_pending_lock);
+	INIT_LIST_HEAD(&xport->io_pending_list);
+	atomic_set(&xport->io_active_count, 0);
+	atomic_set(&xport->io_pending_count, 0);
+	atomic_set(&xport->io_total_free, 0);
+	atomic_set(&xport->io_total_pending, 0);
+	atomic_set(&xport->io_alloc_failed_count, 0);
+	atomic_set(&xport->io_pending_recursing, 0);
+
+	rc = efct_hw_init(&efct->hw);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_init failure\n");
+		goto out;
+	}
+
+	rc = efct_scsi_tgt_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize target\n");
+		goto hw_init_out;
+	}
+
+	rc = efct_scsi_new_device(efct);
+	if (rc) {
+		efc_log_err(efct, "failed to initialize initiator\n");
+		goto tgt_dev_out;
+	}
+
+	/* Get FC link and host statistics periodically */
+	efct_xport_config_stats_timer(efct);
+
+	efct_xport_init_debugfs(efct);
+
+	return rc;
+
+tgt_dev_out:
+	efct_scsi_tgt_del_device(efct);
+
+hw_init_out:
+	efct_hw_teardown(&efct->hw);
+out:
+	return rc;
+}
+
+int
+efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
+		  union efct_xport_stats_u *result)
+{
+	int rc = 0;
+	struct efct *efct = NULL;
+	union efct_xport_stats_u value;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_CONFIG_PORT_STATUS:
+		if (xport->configured_link_state == 0) {
+			/*
+			 * Initial state is offline. configured_link_state is
+			 * set to online explicitly when port is brought online
+			 */
+			xport->configured_link_state = EFCT_XPORT_PORT_OFFLINE;
+		}
+		result->value = xport->configured_link_state;
+		break;
+
+	case EFCT_XPORT_PORT_STATUS:
+		/* Determine port status based on link speed. */
+		value.value = efct_hw_get_link_speed(&efct->hw);
+		if (value.value == 0)
+			result->value = 0;
+		else
+			result->value = 1;
+		rc = 0;
+		break;
+
+	case EFCT_XPORT_LINK_SPEED: {
+		result->value = efct_hw_get_link_speed(&efct->hw);
+
+		break;
+	}
+	case EFCT_XPORT_LINK_STATISTICS:
+		memcpy((void *)result, &efct->xport->fc_xport_stats,
+		       sizeof(union efct_xport_stats_u));
+		break;
+	case EFCT_XPORT_LINK_STAT_RESET: {
+		/* Create a completion to synchronize the stat reset process */
+		init_completion(&result->stats.done);
+
+		/* First reset the link stats */
+		rc = efct_hw_get_link_stats(&efct->hw, 0, 1, 1,
+					    efct_xport_link_stats_cb, result);
+		if (rc)
+			break;
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_debug(efct, "sem wait failed\n");
+			rc = EFC_FAIL;
+			break;
+		}
+
+		/* Next reset the host stats */
+		rc = efct_hw_get_host_stats(&efct->hw, 1,
+					    efct_xport_host_stats_cb, result);
+
+		if (rc)
+			break;
+
+		/* Wait for completion to be signaled when the cmd completes */
+		if (wait_for_completion_interruptible(&result->stats.done)) {
+			/* Undefined failure */
+			efc_log_debug(efct, "sem wait failed\n");
+			rc = EFC_FAIL;
+			break;
+		}
+		break;
+	}
+	default:
+		rc = EFC_FAIL;
+		break;
+	}
+
+	return rc;
+}
+
+int
+efct_scsi_new_device(struct efct *efct)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return EFC_FAIL;
+	}
+
+	/* save shost to initiator-client context */
+	efct->shost = shost;
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport *)shost->hostdata;
+	vport->efct = efct;
+
+	/*
+	 * Set initial can_queue value to the max SCSI IOs. This is the maximum
+	 * global queue depth (as opposed to the per-LUN queue depth, set via
+	 * .cmd_per_lun). This may need to be adjusted for I+T mode.
+	 */
+	shost->can_queue = efct->hw.config.n_io;
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/*
+	 * can only accept (from mid-layer) as many SGEs as we've
+	 * pre-registered
+	 */
+	shost->sg_tablesize = sli_get_max_sgl(&efct->hw.sli);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_xport_fc_tt;
+	efc_log_debug(efct, "transport template=%p\n", efct_xport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, &efct->pci->dev,
+				       &efct->pci->dev);
+	if (error) {
+		efc_log_debug(efct, "failed scsi_add_host_with_dma\n");
+		scsi_host_put(shost);
+		return EFC_FAIL;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		     "Emulex %s FV%s DV%s", efct->model,
+		     efct->hw.sli.fw_name[0], EFCT_DRIVER_VERSION);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+
+	fc_host_node_name(shost) = efct_get_wwnn(&efct->hw);
+	fc_host_port_name(shost) = efct_get_wwpn(&efct->hw);
+	fc_host_max_npiv_vports(shost) = 128;
+
+	return EFC_SUCCESS;
+}
+
+struct scsi_transport_template *
+efct_attach_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_xport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+struct scsi_transport_template *
+efct_attach_vport_fc_transport(void)
+{
+	struct scsi_transport_template *efct_fc_template = NULL;
+
+	efct_fc_template = fc_attach_transport(&efct_vport_functions);
+
+	if (!efct_fc_template)
+		pr_err("failed to attach EFCT with fc transport\n");
+
+	return efct_fc_template;
+}
+
+int
+efct_scsi_reg_fc_transport(void)
+{
+	/* attach to appropriate scsi_transport_* module */
+	efct_xport_fc_tt = efct_attach_fc_transport();
+	if (!efct_xport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		return EFC_FAIL;
+	}
+
+	efct_vport_fc_tt = efct_attach_vport_fc_transport();
+	if (!efct_vport_fc_tt) {
+		pr_err("%s: failed to attach to scsi_transport_*\n", __func__);
+		efct_release_fc_transport(efct_xport_fc_tt);
+		efct_xport_fc_tt = NULL;
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_scsi_release_fc_transport(void)
+{
+	/* detach from scsi_transport_* */
+	efct_release_fc_transport(efct_xport_fc_tt);
+	efct_xport_fc_tt = NULL;
+	if (efct_vport_fc_tt)
+		efct_release_fc_transport(efct_vport_fc_tt);
+	efct_vport_fc_tt = NULL;
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_xport.h b/drivers/scsi/elx/efct/efct_xport.h
new file mode 100644
index 000000000000..cee5dae8ff4a
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_xport.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_XPORT_H__)
+#define __EFCT_XPORT_H__
+
+enum efct_xport_ctrl {
+	EFCT_XPORT_PORT_ONLINE = 1,
+	EFCT_XPORT_PORT_OFFLINE,
+	EFCT_XPORT_SHUTDOWN,
+	EFCT_XPORT_POST_NODE_EVENT,
+	EFCT_XPORT_WWNN_SET,
+	EFCT_XPORT_WWPN_SET,
+};
+
+enum efct_xport_status {
+	EFCT_XPORT_PORT_STATUS,
+	EFCT_XPORT_CONFIG_PORT_STATUS,
+	EFCT_XPORT_LINK_SPEED,
+	EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+	EFCT_XPORT_LINK_STATISTICS,
+	EFCT_XPORT_LINK_STAT_RESET,
+	EFCT_XPORT_IS_QUIESCED
+};
+
+struct efct_xport_link_stats {
+	bool		rec;
+	bool		gec;
+	bool		w02of;
+	bool		w03of;
+	bool		w04of;
+	bool		w05of;
+	bool		w06of;
+	bool		w07of;
+	bool		w08of;
+	bool		w09of;
+	bool		w10of;
+	bool		w11of;
+	bool		w12of;
+	bool		w13of;
+	bool		w14of;
+	bool		w15of;
+	bool		w16of;
+	bool		w17of;
+	bool		w18of;
+	bool		w19of;
+	bool		w20of;
+	bool		w21of;
+	bool		clrc;
+	bool		clof1;
+	u32		link_failure_error_count;
+	u32		loss_of_sync_error_count;
+	u32		loss_of_signal_error_count;
+	u32		primitive_sequence_error_count;
+	u32		invalid_transmission_word_error_count;
+	u32		crc_error_count;
+	u32		primitive_sequence_event_timeout_count;
+	u32		elastic_buffer_overrun_error_count;
+	u32		arbitration_fc_al_timeout_count;
+	u32		advertised_receive_bufftor_to_buffer_credit;
+	u32		current_receive_buffer_to_buffer_credit;
+	u32		advertised_transmit_buffer_to_buffer_credit;
+	u32		current_transmit_buffer_to_buffer_credit;
+	u32		received_eofa_count;
+	u32		received_eofdti_count;
+	u32		received_eofni_count;
+	u32		received_soff_count;
+	u32		received_dropped_no_aer_count;
+	u32		received_dropped_no_available_rpi_resources_count;
+	u32		received_dropped_no_available_xri_resources_count;
+};
+
+struct efct_xport_host_stats {
+	bool		cc;
+	u32		transmit_kbyte_count;
+	u32		receive_kbyte_count;
+	u32		transmit_frame_count;
+	u32		receive_frame_count;
+	u32		transmit_sequence_count;
+	u32		receive_sequence_count;
+	u32		total_exchanges_originator;
+	u32		total_exchanges_responder;
+	u32		receive_p_bsy_count;
+	u32		receive_f_bsy_count;
+	u32		dropped_frames_due_to_no_rq_buffer_count;
+	u32		empty_rq_timeout_count;
+	u32		dropped_frames_due_to_no_xri_count;
+	u32		empty_xri_pool_count;
+};
+
+struct efct_xport_host_statistics {
+	struct completion		done;
+	struct efct_xport_link_stats	link_stats;
+	struct efct_xport_host_stats	host_stats;
+};
+
+union efct_xport_stats_u {
+	u32	value;
+	struct efct_xport_host_statistics stats;
+};
+
+struct efct_xport_fcp_stats {
+	u64		input_bytes;
+	u64		output_bytes;
+	u64		input_requests;
+	u64		output_requests;
+	u64		control_requests;
+};
+
+struct efct_xport {
+	struct efct		*efct;
+	/* wwpn requested by user for primary nport */
+	u64			req_wwpn;
+	/* wwnn requested by user for primary nport */
+	u64			req_wwnn;
+
+	/* Nodes */
+	/* number of allocated nodes */
+	u32			nodes_count;
+	/* used to track how often IO pool is empty */
+	atomic_t		io_alloc_failed_count;
+	/* array of pointers to nodes */
+	struct efc_node		**nodes;
+
+	/* Io pool and counts */
+	/* pointer to IO pool */
+	struct efct_io_pool	*io_pool;
+	/* lock for io_pending_list */
+	spinlock_t		io_pending_lock;
+	/* list of IOs waiting for HW resources
+	 *  lock: xport->io_pending_lock
+	 *  link: efct_io_s->io_pending_link
+	 */
+	struct list_head	io_pending_list;
+	/* count of total IOs allocated */
+	atomic_t		io_total_alloc;
+	/* count of total IOs freed */
+	atomic_t		io_total_free;
+	/* count of total IOs that were pended */
+	atomic_t		io_total_pending;
+	/* count of active IOs */
+	atomic_t		io_active_count;
+	/* count of pending IOs */
+	atomic_t		io_pending_count;
+	/* non-zero if efct_scsi_check_pending is executing */
+	atomic_t		io_pending_recursing;
+
+	/* Port */
+	/* requested link state */
+	u32			configured_link_state;
+
+	/* Timer for Statistics */
+	struct timer_list	stats_timer;
+	union efct_xport_stats_u fc_xport_stats;
+	struct efct_xport_fcp_stats fcp_stats;
+};
+
+struct efct_rport_data {
+	struct efc_node		*node;
+};
+
+struct efct_xport *
+efct_xport_alloc(struct efct *efct);
+int
+efct_xport_attach(struct efct_xport *xport);
+int
+efct_xport_initialize(struct efct_xport *xport);
+int
+efct_xport_detach(struct efct_xport *xport);
+int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...);
+int
+efct_xport_status(struct efct_xport *xport, enum efct_xport_status cmd,
+		  union efct_xport_stats_u *result);
+void
+efct_xport_free(struct efct_xport *xport);
+
+struct scsi_transport_template *efct_attach_fc_transport(void);
+struct scsi_transport_template *efct_attach_vport_fc_transport(void);
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template);
+
+#endif /* __EFCT_XPORT_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 19/31] elx: efct: Hardware queues creation and deletion
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (17 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 18/31] elx: efct: Driver initialization routines James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
                   ` (11 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for queue creation, deletion, and configuration.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.h        |   4 +
 drivers/scsi/elx/efct/efct_hw_queues.c | 679 +++++++++++++++++++++++++
 2 files changed, 683 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c

diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 44f248356d42..39cfe6658783 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -610,6 +610,10 @@ efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev);
 enum efct_hw_rtn efct_hw_init(struct efct_hw *hw);
 enum efct_hw_rtn
 efct_hw_parse_filter(struct efct_hw *hw, void *value);
+enum efct_hw_rtn
+efct_hw_init_queues(struct efct_hw *hw);
+int
+efct_hw_map_wq_cpu(struct efct_hw *hw);
 uint64_t
 efct_get_wwnn(struct efct_hw *hw);
 uint64_t
diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c
new file mode 100644
index 000000000000..cefe56ba4a56
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_hw_queues.c
@@ -0,0 +1,679 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_unsol.h"
+
+enum efct_hw_rtn
+efct_hw_init_queues(struct efct_hw *hw)
+{
+	struct hw_eq *eq = NULL;
+	struct hw_cq *cq = NULL;
+	struct hw_wq *wq = NULL;
+	struct hw_mq *mq = NULL;
+
+	struct hw_eq *eqs[EFCT_HW_MAX_NUM_EQ];
+	struct hw_cq *cqs[EFCT_HW_MAX_NUM_EQ];
+	struct hw_rq *rqs[EFCT_HW_MAX_NUM_EQ];
+	u32 i = 0, j;
+
+	hw->eq_count = 0;
+	hw->cq_count = 0;
+	hw->mq_count = 0;
+	hw->wq_count = 0;
+	hw->rq_count = 0;
+	hw->hw_rq_count = 0;
+	INIT_LIST_HEAD(&hw->eq_list);
+
+	for (i = 0; i < hw->config.n_eq; i++) {
+		/* Create EQ */
+		eq = efct_hw_new_eq(hw, EFCT_HW_EQ_DEPTH);
+		if (!eq) {
+			efct_hw_queue_teardown(hw);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+
+		eqs[i] = eq;
+
+		/* Create one MQ */
+		if (!i) {
+			cq = efct_hw_new_cq(eq,
+					    hw->num_qentries[SLI4_QTYPE_CQ]);
+			if (!cq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+
+			mq = efct_hw_new_mq(cq, EFCT_HW_MQ_DEPTH);
+			if (!mq) {
+				efct_hw_queue_teardown(hw);
+				return EFCT_HW_RTN_NO_MEMORY;
+			}
+		}
+
+		/* Create WQ */
+		cq = efct_hw_new_cq(eq, hw->num_qentries[SLI4_QTYPE_CQ]);
+		if (!cq) {
+			efct_hw_queue_teardown(hw);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+
+		wq = efct_hw_new_wq(cq, hw->num_qentries[SLI4_QTYPE_WQ]);
+		if (!wq) {
+			efct_hw_queue_teardown(hw);
+			return EFCT_HW_RTN_NO_MEMORY;
+		}
+	}
+
+	/* Create CQ set */
+	if (efct_hw_new_cq_set(eqs, cqs, i, hw->num_qentries[SLI4_QTYPE_CQ])) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Create RQ set */
+	if (efct_hw_new_rq_set(cqs, rqs, i, EFCT_HW_RQ_ENTRIES_DEF)) {
+		efct_hw_queue_teardown(hw);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	for (j = 0; j < i; j++) {
+		rqs[j]->filter_mask = 0;
+		rqs[j]->is_mrq = true;
+		rqs[j]->base_mrq_id = rqs[0]->hdr->id;
+	}
+
+	hw->hw_mrq_count = i;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+int
+efct_hw_map_wq_cpu(struct efct_hw *hw)
+{
+	struct efct *efct = hw->os;
+	u32 cpu = 0, i;
+
+	/* Init cpu_map array */
+	hw->wq_cpu_array = kcalloc(num_possible_cpus(), sizeof(void *),
+				   GFP_KERNEL);
+	if (!hw->wq_cpu_array)
+		return EFC_FAIL;
+
+	for (i = 0; i < hw->config.n_eq; i++) {
+		const struct cpumask *maskp;
+
+		/* Get a CPU mask for all CPUs affinitized to this vector */
+		maskp = pci_irq_get_affinity(efct->pci, i);
+		if (!maskp) {
+			efc_log_debug(efct, "maskp null for vector:%d\n", i);
+			continue;
+		}
+
+		/* Loop through all CPUs associated with vector idx */
+		for_each_cpu_and(cpu, maskp, cpu_present_mask) {
+			efc_log_debug(efct, "CPU:%d irq vector:%d\n", cpu, i);
+			hw->wq_cpu_array[cpu] = hw->hw_wq[i];
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+struct hw_eq *
+efct_hw_new_eq(struct efct_hw *hw, u32 entry_count)
+{
+	struct hw_eq *eq = kzalloc(sizeof(*eq), GFP_KERNEL);
+
+	if (!eq)
+		return NULL;
+
+	eq->type = SLI4_QTYPE_EQ;
+	eq->hw = hw;
+	eq->entry_count = entry_count;
+	eq->instance = hw->eq_count++;
+	eq->queue = &hw->eq[eq->instance];
+	INIT_LIST_HEAD(&eq->cq_list);
+
+	if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_EQ, eq->queue, entry_count,
+			    NULL)) {
+		efc_log_err(hw->os, "EQ[%d] alloc failure\n", eq->instance);
+		kfree(eq);
+		return NULL;
+	}
+
+	sli_eq_modify_delay(&hw->sli, eq->queue, 1, 0, 8);
+	hw->hw_eq[eq->instance] = eq;
+	INIT_LIST_HEAD(&eq->list_entry);
+	list_add_tail(&eq->list_entry, &hw->eq_list);
+	efc_log_debug(hw->os, "create eq[%2d] id %3d len %4d\n", eq->instance,
+				eq->queue->id, eq->entry_count);
+	return eq;
+}
+
+struct hw_cq *
+efct_hw_new_cq(struct hw_eq *eq, u32 entry_count)
+{
+	struct efct_hw *hw = eq->hw;
+	struct hw_cq *cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+
+	if (!cq)
+		return NULL;
+
+	cq->eq = eq;
+	cq->type = SLI4_QTYPE_CQ;
+	cq->instance = eq->hw->cq_count++;
+	cq->entry_count = entry_count;
+	cq->queue = &hw->cq[cq->instance];
+
+	INIT_LIST_HEAD(&cq->q_list);
+
+	if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_CQ, cq->queue,
+				cq->entry_count, eq->queue)) {
+		efc_log_err(hw->os, "CQ[%d] allocation failure len=%d\n",
+				    cq->instance, cq->entry_count);
+		kfree(cq);
+		return NULL;
+	}
+
+	hw->hw_cq[cq->instance] = cq;
+	INIT_LIST_HEAD(&cq->list_entry);
+	list_add_tail(&cq->list_entry, &eq->cq_list);
+	efc_log_debug(hw->os, "create cq[%2d] id %3d len %4d\n", cq->instance,
+				cq->queue->id, cq->entry_count);
+	return cq;
+}
+
+u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count)
+{
+	u32 i;
+	struct efct_hw *hw = eqs[0]->hw;
+	struct sli4 *sli4 = &hw->sli;
+	struct hw_cq *cq = NULL;
+	struct sli4_queue *qs[SLI4_MAX_CQ_SET_COUNT];
+	struct sli4_queue *assoc_eqs[SLI4_MAX_CQ_SET_COUNT];
+
+	/* Initialise CQS pointers to NULL */
+	for (i = 0; i < num_cqs; i++)
+		cqs[i] = NULL;
+
+	for (i = 0; i < num_cqs; i++) {
+		cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+		if (!cq)
+			goto error;
+
+		cqs[i]          = cq;
+		cq->eq          = eqs[i];
+		cq->type        = SLI4_QTYPE_CQ;
+		cq->instance    = hw->cq_count++;
+		cq->entry_count = entry_count;
+		cq->queue       = &hw->cq[cq->instance];
+		qs[i]           = cq->queue;
+		assoc_eqs[i]    = eqs[i]->queue;
+		INIT_LIST_HEAD(&cq->q_list);
+	}
+
+	if (sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assoc_eqs)) {
+		efc_log_err(hw->os, "Failed to create CQ Set.\n");
+		goto error;
+	}
+
+	for (i = 0; i < num_cqs; i++) {
+		hw->hw_cq[cqs[i]->instance] = cqs[i];
+		INIT_LIST_HEAD(&cqs[i]->list_entry);
+		list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list);
+	}
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_cqs; i++) {
+		kfree(cqs[i]);
+		cqs[i] = NULL;
+	}
+	return EFC_FAIL;
+}
+
+struct hw_mq *
+efct_hw_new_mq(struct hw_cq *cq, u32 entry_count)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_mq *mq = kzalloc(sizeof(*mq), GFP_KERNEL);
+
+	if (!mq)
+		return NULL;
+
+	mq->cq = cq;
+	mq->type = SLI4_QTYPE_MQ;
+	mq->instance = cq->eq->hw->mq_count++;
+	mq->entry_count = entry_count;
+	mq->entry_size = EFCT_HW_MQ_DEPTH;
+	mq->queue = &hw->mq[mq->instance];
+
+	if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_MQ, mq->queue, mq->entry_size,
+			    cq->queue)) {
+		efc_log_err(hw->os, "MQ allocation failure\n");
+		kfree(mq);
+		return NULL;
+	}
+
+	hw->hw_mq[mq->instance] = mq;
+	INIT_LIST_HEAD(&mq->list_entry);
+	list_add_tail(&mq->list_entry, &cq->q_list);
+	efc_log_debug(hw->os, "create mq[%2d] id %3d len %4d\n", mq->instance,
+				mq->queue->id, mq->entry_count);
+	return mq;
+}
+
+struct hw_wq *
+efct_hw_new_wq(struct hw_cq *cq, u32 entry_count)
+{
+	struct efct_hw *hw = cq->eq->hw;
+	struct hw_wq *wq = kzalloc(sizeof(*wq), GFP_KERNEL);
+
+	if (!wq)
+		return NULL;
+
+	wq->hw = cq->eq->hw;
+	wq->cq = cq;
+	wq->type = SLI4_QTYPE_WQ;
+	wq->instance = cq->eq->hw->wq_count++;
+	wq->entry_count = entry_count;
+	wq->queue = &hw->wq[wq->instance];
+	wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT;
+	wq->wqec_count = wq->wqec_set_count;
+	wq->free_count = wq->entry_count - 1;
+	INIT_LIST_HEAD(&wq->pending_list);
+
+	if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_WQ, wq->queue,
+				    wq->entry_count, cq->queue)) {
+		efc_log_err(hw->os, "WQ allocation failure\n");
+		kfree(wq);
+		return NULL;
+	}
+
+	hw->hw_wq[wq->instance] = wq;
+	INIT_LIST_HEAD(&wq->list_entry);
+	list_add_tail(&wq->list_entry, &cq->q_list);
+	efc_log_debug(hw->os, "create wq[%2d] id %3d len %4d cls %d\n",
+		wq->instance, wq->queue->id, wq->entry_count, wq->class);
+	return wq;
+}
+
+u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count)
+{
+	struct efct_hw *hw = cqs[0]->eq->hw;
+	struct hw_rq *rq = NULL;
+	struct sli4_queue *qs[SLI4_MAX_RQ_SET_COUNT * 2] = { NULL };
+	u32 i, q_count, size;
+
+	/* Initialise RQS pointers */
+	for (i = 0; i < num_rq_pairs; i++)
+		rqs[i] = NULL;
+
+	/*
+	 * Allocate an RQ object set, where each element in the set
+	 * encapsulates two SLI queues (an RQ pair).
+	 */
+	for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) {
+		rq = kzalloc(sizeof(*rq), GFP_KERNEL);
+		if (!rq)
+			goto error;
+
+		rqs[i] = rq;
+		rq->instance = hw->hw_rq_count++;
+		rq->cq = cqs[i];
+		rq->type = SLI4_QTYPE_RQ;
+		rq->entry_count = entry_count;
+
+		/* Header RQ */
+		rq->hdr = &hw->rq[hw->rq_count];
+		rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count] = rq->hdr;
+
+		/* Data RQ */
+		rq->data = &hw->rq[hw->rq_count];
+		rq->data_entry_size = hw->config.rq_default_buffer_size;
+		hw->hw_rq_lookup[hw->rq_count] = rq->instance;
+		hw->rq_count++;
+		qs[q_count + 1] = rq->data;
+
+		rq->rq_tracker = NULL;
+	}
+
+	if (sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs,
+				cqs[0]->queue->id,
+				rqs[0]->entry_count,
+				rqs[0]->hdr_entry_size,
+				rqs[0]->data_entry_size)) {
+		efc_log_err(hw->os, "RQ Set alloc failure for base CQ=%d\n",
+			    cqs[0]->queue->id);
+		goto error;
+	}
+
+	for (i = 0; i < num_rq_pairs; i++) {
+		hw->hw_rq[rqs[i]->instance] = rqs[i];
+		INIT_LIST_HEAD(&rqs[i]->list_entry);
+		list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list);
+		size = sizeof(struct efc_hw_sequence *) * rqs[i]->entry_count;
+		rqs[i]->rq_tracker = kzalloc(size, GFP_KERNEL);
+		if (!rqs[i]->rq_tracker)
+			goto error;
+	}
+
+	return EFC_SUCCESS;
+
+error:
+	for (i = 0; i < num_rq_pairs; i++) {
+		if (rqs[i]) {
+			kfree(rqs[i]->rq_tracker);
+			kfree(rqs[i]);
+		}
+	}
+
+	return EFC_FAIL;
+}
+
+void
+efct_hw_del_eq(struct hw_eq *eq)
+{
+	struct hw_cq *cq;
+	struct hw_cq *cq_next;
+
+	if (!eq)
+		return;
+
+	list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry)
+		efct_hw_del_cq(cq);
+	list_del(&eq->list_entry);
+	eq->hw->hw_eq[eq->instance] = NULL;
+	kfree(eq);
+}
+
+void
+efct_hw_del_cq(struct hw_cq *cq)
+{
+	struct hw_q *q;
+	struct hw_q *q_next;
+
+	if (!cq)
+		return;
+
+	list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) {
+		switch (q->type) {
+		case SLI4_QTYPE_MQ:
+			efct_hw_del_mq((struct hw_mq *)q);
+			break;
+		case SLI4_QTYPE_WQ:
+			efct_hw_del_wq((struct hw_wq *)q);
+			break;
+		case SLI4_QTYPE_RQ:
+			efct_hw_del_rq((struct hw_rq *)q);
+			break;
+		default:
+			break;
+		}
+	}
+	list_del(&cq->list_entry);
+	cq->eq->hw->hw_cq[cq->instance] = NULL;
+	kfree(cq);
+}
+
+void
+efct_hw_del_mq(struct hw_mq *mq)
+{
+	if (!mq)
+		return;
+
+	list_del(&mq->list_entry);
+	mq->cq->eq->hw->hw_mq[mq->instance] = NULL;
+	kfree(mq);
+}
+
+void
+efct_hw_del_wq(struct hw_wq *wq)
+{
+	if (!wq)
+		return;
+
+	list_del(&wq->list_entry);
+	wq->cq->eq->hw->hw_wq[wq->instance] = NULL;
+	kfree(wq);
+}
+
+void
+efct_hw_del_rq(struct hw_rq *rq)
+{
+	struct efct_hw *hw = NULL;
+
+	if (!rq)
+		return;
+	/* Free RQ tracker */
+	kfree(rq->rq_tracker);
+	rq->rq_tracker = NULL;
+	list_del(&rq->list_entry);
+	hw = rq->cq->eq->hw;
+	hw->hw_rq[rq->instance] = NULL;
+	kfree(rq);
+}
+
+void
+efct_hw_queue_teardown(struct efct_hw *hw)
+{
+	struct hw_eq *eq;
+	struct hw_eq *eq_next;
+
+	if (!hw->eq_list.next)
+		return;
+
+	list_for_each_entry_safe(eq, eq_next, &hw->eq_list, list_entry)
+		efct_hw_del_eq(eq);
+}
+
+static inline int
+efct_hw_rqpair_find(struct efct_hw *hw, u16 rq_id)
+{
+	return efct_hw_queue_hash_find(hw->rq_hash, rq_id);
+}
+
+static struct efc_hw_sequence *
+efct_hw_rqpair_get(struct efct_hw *hw, u16 rqindex, u16 bufindex)
+{
+	struct sli4_queue *rq_hdr = &hw->rq[rqindex];
+	struct efc_hw_sequence *seq = NULL;
+	struct hw_rq *rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	unsigned long flags = 0;
+
+	if (bufindex >= rq_hdr->length) {
+		efc_log_err(hw->os,
+				"RQidx %d bufidx %d exceeds ring len %d for id %d\n",
+				rqindex, bufindex, rq_hdr->length, rq_hdr->id);
+		return NULL;
+	}
+
+	/* rq_hdr lock also covers rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	seq = rq->rq_tracker[bufindex];
+	rq->rq_tracker[bufindex] = NULL;
+
+	if (!seq)
+		efc_log_err(hw->os,
+			    "RQbuf NULL, rqidx %d, bufidx %d, cur q idx = %d\n",
+			    rqindex, bufindex, rq_hdr->index);
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return seq;
+}
+
+int
+efct_hw_rqpair_process_rq(struct efct_hw *hw, struct hw_cq *cq,
+			  u8 *cqe)
+{
+	u16 rq_id;
+	u32 index;
+	int rqindex;
+	int rq_status;
+	u32 h_len;
+	u32 p_len;
+	struct efc_hw_sequence *seq;
+	struct hw_rq *rq;
+
+	rq_status = sli_fc_rqe_rqid_and_index(&hw->sli, cqe,
+					      &rq_id, &index);
+	if (rq_status != 0) {
+		switch (rq_status) {
+		case SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED:
+		case SLI4_FC_ASYNC_RQ_DMA_FAILURE:
+			/* just get RQ buffer then return to chip */
+			rqindex = efct_hw_rqpair_find(hw, rq_id);
+			if (rqindex < 0) {
+				efc_log_debug(hw->os,
+					      "status=%#x: lookup fail id=%#x\n",
+					     rq_status, rq_id);
+				break;
+			}
+
+			/* get RQ buffer */
+			seq = efct_hw_rqpair_get(hw, rqindex, index);
+
+			/* return to chip */
+			if (efct_hw_rqpair_sequence_free(hw, seq)) {
+				efc_log_debug(hw->os,
+					      "status=%#x, failed to return buf to RQ\n",
+					      rq_status);
+				break;
+			}
+			break;
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED:
+		case SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC:
+			/*
+			 * since RQ buffers were not consumed, cannot return
+			 * them to chip
+			 */
+			efc_log_debug(hw->os, "Warning: RCQE status=%#x\n",
+				      rq_status);
+			fallthrough;
+		default:
+			break;
+		}
+		return EFC_FAIL;
+	}
+
+	rqindex = efct_hw_rqpair_find(hw, rq_id);
+	if (rqindex < 0) {
+		efc_log_debug(hw->os, "Error: rq_id lookup failed for id=%#x\n",
+			      rq_id);
+		return EFC_FAIL;
+	}
+
+	rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]];
+	rq->use_count++;
+
+	seq = efct_hw_rqpair_get(hw, rqindex, index);
+	if (WARN_ON(!seq))
+		return EFC_FAIL;
+
+	seq->hw = hw;
+	seq->auto_xrdy = 0;
+	seq->out_of_xris = 0;
+
+	sli_fc_rqe_length(&hw->sli, cqe, &h_len, &p_len);
+	seq->header->dma.len = h_len;
+	seq->payload->dma.len = p_len;
+	seq->fcfi = sli_fc_rqe_fcfi(&hw->sli, cqe);
+	seq->hw_priv = cq->eq;
+
+	efct_unsolicited_cb(hw->os, seq);
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_rqpair_put(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	struct sli4_queue *rq_hdr = &hw->rq[seq->header->rqindex];
+	struct sli4_queue *rq_payload = &hw->rq[seq->payload->rqindex];
+	u32 hw_rq_index = hw->hw_rq_lookup[seq->header->rqindex];
+	struct hw_rq *rq = hw->hw_rq[hw_rq_index];
+	u32 phys_hdr[2];
+	u32 phys_payload[2];
+	int qindex_hdr;
+	int qindex_payload;
+	unsigned long flags = 0;
+
+	/* Update the RQ verification lookup tables */
+	phys_hdr[0] = upper_32_bits(seq->header->dma.phys);
+	phys_hdr[1] = lower_32_bits(seq->header->dma.phys);
+	phys_payload[0] = upper_32_bits(seq->payload->dma.phys);
+	phys_payload[1] = lower_32_bits(seq->payload->dma.phys);
+
+	/* rq_hdr lock also covers payload / header->rqindex+1 queue */
+	spin_lock_irqsave(&rq_hdr->lock, flags);
+
+	/*
+	 * Note: The header must be posted last for buffer pair mode because
+	 *       posting on the header queue posts the payload queue as well.
+	 *       We do not ring the payload queue independently in RQ pair mode.
+	 */
+	qindex_payload = sli_rq_write(&hw->sli, rq_payload,
+				      (void *)phys_payload);
+	qindex_hdr = sli_rq_write(&hw->sli, rq_hdr, (void *)phys_hdr);
+	if (qindex_hdr < 0 || qindex_payload < 0) {
+		efc_log_err(hw->os, "RQ_ID=%#x write failed\n", rq_hdr->id);
+		spin_unlock_irqrestore(&rq_hdr->lock, flags);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* ensure the indexes are the same */
+	WARN_ON(qindex_hdr != qindex_payload);
+
+	/* Update the lookup table */
+	if (!rq->rq_tracker[qindex_hdr]) {
+		rq->rq_tracker[qindex_hdr] = seq;
+	} else {
+		efc_log_debug(hw->os,
+			      "expected rq_tracker[%d][%d] buffer to be NULL\n",
+			      hw_rq_index, qindex_hdr);
+	}
+
+	spin_unlock_irqrestore(&rq_hdr->lock, flags);
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_rqpair_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	/*
+	 * A single put posts both buffers: in RQ pair mode, ringing the
+	 * doorbell of the header ring posts the data buffer as well.
+	 */
+	if (efct_hw_rqpair_put(hw, seq)) {
+		efc_log_err(hw->os, "error writing buffers\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+int
+efct_efc_hw_sequence_free(struct efc *efc, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = efc->base;
+
+	return efct_hw_rqpair_sequence_free(&efct->hw, seq);
+}
-- 
2.26.2



* [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (18 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 19/31] elx: efct: Hardware queues creation and deletion James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-08  9:11   ` Daniel Wagner
  2021-03-04 18:02   ` Hannes Reinecke
  2021-01-07 22:58 ` [PATCH v7 21/31] elx: efct: Hardware IO and SGL initialization James Smart
                   ` (10 subsequent siblings)
  30 siblings, 2 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna

This patch continues the efct driver population.

This patch adds driver definitions for:
RQ data buffer allocation and deallocation.
Memory pool allocation and deallocation APIs.
Mailbox command submission and completion routines.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v7:
 Ensure an EFC_XXX status value is returned (not numerical constants)
 Style: break declaration assignment into declaration and later value
   assignment.
---
 drivers/scsi/elx/efct/efct_hw.c | 406 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |   9 +
 2 files changed, 415 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 2c15468bae4b..451a5729bff4 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -1159,3 +1159,409 @@ efct_get_wwpn(struct efct_hw *hw)
 	memcpy(p, sli->wwpn, sizeof(p));
 	return get_unaligned_be64(p);
 }
+
+static struct efc_hw_rq_buffer *
+efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
+			u32 size)
+{
+	struct efct *efct = hw->os;
+	struct efc_hw_rq_buffer *rq_buf = NULL;
+	struct efc_hw_rq_buffer *prq;
+	u32 i;
+
+	if (!count)
+		return NULL;
+
+	rq_buf = kcalloc(count, sizeof(*rq_buf), GFP_KERNEL);
+	if (!rq_buf)
+		return NULL;
+
+	for (i = 0, prq = rq_buf; i < count; i++, prq++) {
+		prq->rqindex = rqindex;
+		prq->dma.size = size;
+		prq->dma.virt = dma_alloc_coherent(&efct->pci->dev,
+						   prq->dma.size,
+						   &prq->dma.phys,
+						   GFP_DMA);
+		if (!prq->dma.virt) {
+			efc_log_err(hw->os, "DMA allocation failed\n");
+			kfree(rq_buf);
+			return NULL;
+		}
+	}
+	return rq_buf;
+}
+
+static void
+efct_hw_rx_buffer_free(struct efct_hw *hw,
+		       struct efc_hw_rq_buffer *rq_buf,
+			u32 count)
+{
+	struct efct *efct = hw->os;
+	u32 i;
+	struct efc_hw_rq_buffer *prq;
+
+	if (!rq_buf)
+		return;
+
+	for (i = 0, prq = rq_buf; i < count; i++, prq++) {
+		dma_free_coherent(&efct->pci->dev,
+				  prq->dma.size, prq->dma.virt,
+				  prq->dma.phys);
+		memset(&prq->dma, 0, sizeof(struct efc_dma));
+	}
+
+	kfree(rq_buf);
+}
+
+enum efct_hw_rtn
+efct_hw_rx_allocate(struct efct_hw *hw)
+{
+	struct efct *efct = hw->os;
+	u32 i;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32 rqindex = 0;
+	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
+	u32 payload_size = hw->config.rq_default_buffer_size;
+
+	rqindex = 0;
+
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		struct hw_rq *rq = hw->hw_rq[i];
+
+		/* Allocate header buffers */
+		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+						      rq->entry_count,
+						      hdr_size);
+		if (!rq->hdr_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
+			      i, rq->hdr->id, rq->entry_count, hdr_size);
+
+		rqindex++;
+
+		/* Allocate payload buffers */
+		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
+							  rq->entry_count,
+							  payload_size);
+		if (!rq->payload_buf) {
+			efc_log_err(efct,
+				     "efct_hw_rx_buffer_alloc payload_buf failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+			break;
+		}
+		efc_log_debug(hw->os,
+			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
+			      i, rq->data->id, rq->entry_count, payload_size);
+		rqindex++;
+	}
+
+	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_rx_post(struct efct_hw *hw)
+{
+	u32 i;
+	u32 idx;
+	u32 rq_idx;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	if (!hw->seq_pool) {
+		u32 count = 0;
+
+		for (i = 0; i < hw->hw_rq_count; i++)
+			count += hw->hw_rq[i]->entry_count;
+
+		hw->seq_pool = kmalloc_array(count,
+				sizeof(struct efc_hw_sequence),	GFP_KERNEL);
+		if (!hw->seq_pool)
+			return EFCT_HW_RTN_NO_MEMORY;
+	}
+
+	/*
+	 * In RQ pair mode, we MUST post the header and payload buffer at the
+	 * same time.
+	 */
+	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
+		struct hw_rq *rq = hw->hw_rq[rq_idx];
+
+		for (i = 0; i < rq->entry_count - 1; i++) {
+			struct efc_hw_sequence *seq;
+
+			seq = hw->seq_pool + idx;
+			idx++;
+			seq->header = &rq->hdr_buf[i];
+			seq->payload = &rq->payload_buf[i];
+			rc = efct_hw_sequence_free(hw, seq);
+			if (rc)
+				break;
+		}
+		if (rc)
+			break;
+	}
+
+	if (rc) {
+		kfree(hw->seq_pool);
+		hw->seq_pool = NULL;
+	}
+
+	return rc;
+}
+
+void
+efct_hw_rx_free(struct efct_hw *hw)
+{
+	u32 i;
+
+	/* Free hw_rq buffers */
+	for (i = 0; i < hw->hw_rq_count; i++) {
+		struct hw_rq *rq = hw->hw_rq[i];
+
+		if (rq) {
+			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
+					       rq->entry_count);
+			rq->hdr_buf = NULL;
+			efct_hw_rx_buffer_free(hw, rq->payload_buf,
+					       rq->entry_count);
+			rq->payload_buf = NULL;
+		}
+	}
+}
+
+static int
+efct_hw_cmd_submit_pending(struct efct_hw *hw)
+{
+	int rc = EFC_SUCCESS;
+
+	/* Assumes lock held */
+
+	/* Only submit MQE if there's room */
+	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
+	       !list_empty(&hw->cmd_pending)) {
+		struct efct_command_ctx *ctx;
+
+		ctx = list_first_entry(&hw->cmd_pending,
+				       struct efct_command_ctx, list_entry);
+		if (!ctx)
+			break;
+
+		list_del_init(&ctx->list_entry);
+
+		list_add_tail(&ctx->list_entry, &hw->cmd_head);
+		hw->cmd_head_count++;
+		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
+			efc_log_debug(hw->os, "sli_mq_write failed\n");
+			rc = EFC_FAIL;
+			break;
+		}
+	}
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb, void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	unsigned long flags = 0;
+	void *bmbx = NULL;
+
+	/*
+	 * If the chip is in an error state (UE'd) then reject this mailbox
+	 * command.
+	 */
+	if (sli_fw_error_status(&hw->sli) > 0) {
+		efc_log_crit(hw->os,
+			      "Chip is in an error state - reset needed\n");
+		efc_log_crit(hw->os,
+			      "status=%#x error1=%#x error2=%#x\n",
+			sli_reg_read_status(&hw->sli),
+			sli_reg_read_err1(&hw->sli),
+			sli_reg_read_err2(&hw->sli));
+
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Send a mailbox command to the hardware, and either wait for
+	 * a completion (EFCT_CMD_POLL) or get an optional asynchronous
+	 * completion (EFCT_CMD_NOWAIT).
+	 */
+
+	if (opts == EFCT_CMD_POLL) {
+		mutex_lock(&hw->bmbx_lock);
+		bmbx = hw->sli.bmbx.virt;
+
+		memset(bmbx, 0, SLI4_BMBX_SIZE);
+		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
+
+		if (sli_bmbx_command(&hw->sli) == 0) {
+			rc = EFCT_HW_RTN_SUCCESS;
+			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
+		}
+		mutex_unlock(&hw->bmbx_lock);
+	} else if (opts == EFCT_CMD_NOWAIT) {
+		struct efct_command_ctx	*ctx = NULL;
+
+		if (hw->state != EFCT_HW_STATE_ACTIVE) {
+			efc_log_err(hw->os,
+				     "Can't send command, HW state=%d\n",
+				    hw->state);
+			return EFCT_HW_RTN_ERROR;
+		}
+
+		ctx = mempool_alloc(hw->cmd_ctx_pool, GFP_ATOMIC);
+		if (!ctx)
+			return EFCT_HW_RTN_NO_RESOURCES;
+
+		memset(ctx, 0, sizeof(struct efct_command_ctx));
+
+		if (cb) {
+			ctx->cb = cb;
+			ctx->arg = arg;
+		}
+
+		memcpy(ctx->buf, cmd, SLI4_BMBX_SIZE);
+		ctx->ctx = hw;
+
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+
+		/* Add to pending list */
+		INIT_LIST_HEAD(&ctx->list_entry);
+		list_add_tail(&ctx->list_entry, &hw->cmd_pending);
+
+		/* Submit as much of the pending list as we can */
+		rc = efct_hw_cmd_submit_pending(hw);
+
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+	}
+
+	return rc;
+}
+
+static int
+efct_hw_command_process(struct efct_hw *hw, int status, u8 *mqe,
+			size_t size)
+{
+	struct efct_command_ctx *ctx = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+	if (!list_empty(&hw->cmd_head)) {
+		ctx = list_first_entry(&hw->cmd_head,
+				       struct efct_command_ctx, list_entry);
+		list_del_init(&ctx->list_entry);
+	}
+	if (!ctx) {
+		efc_log_err(hw->os, "no command context?!?\n");
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		return EFC_FAIL;
+	}
+
+	hw->cmd_head_count--;
+
+	/* Post any pending requests */
+	efct_hw_cmd_submit_pending(hw);
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	if (ctx->cb) {
+		memcpy(ctx->buf, mqe, size);
+		ctx->cb(hw, status, ctx->buf, ctx->arg);
+	}
+
+	mempool_free(ctx, hw->cmd_ctx_pool);
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_mq_process(struct efct_hw *hw,
+		   int status, struct sli4_queue *mq)
+{
+	u8 mqe[SLI4_BMBX_SIZE];
+
+	if (!sli_mq_read(&hw->sli, mq, mqe))
+		efct_hw_command_process(hw, status, mqe, mq->size);
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_command_cancel(struct efct_hw *hw)
+{
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->cmd_lock, flags);
+
+	/*
+	 * Manually clean up remaining commands. Note: since this calls
+	 * efct_hw_command_process(), we'll also process the cmd_pending
+	 * list, so no need to manually clean that out.
+	 */
+	while (!list_empty(&hw->cmd_head)) {
+		u8 mqe[SLI4_BMBX_SIZE] = { 0 };
+		struct efct_command_ctx *ctx;
+
+		ctx = list_first_entry(&hw->cmd_head,
+				       struct efct_command_ctx, list_entry);
+
+		efc_log_debug(hw->os, "hung command %08x\n",
+			      !ctx->buf ? U32_MAX : *((u32 *)ctx->buf));
+		spin_unlock_irqrestore(&hw->cmd_lock, flags);
+		efct_hw_command_process(hw, -1, mqe, SLI4_BMBX_SIZE);
+		spin_lock_irqsave(&hw->cmd_lock, flags);
+	}
+
+	spin_unlock_irqrestore(&hw->cmd_lock, flags);
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_mbox_rsp_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+	struct efct_mbox_rqst_ctx *ctx = arg;
+
+	if (!ctx)
+		return;
+
+	if (ctx->callback)
+		(*ctx->callback)(hw->os->efcport, status, mqe, ctx->arg);
+
+	mempool_free(ctx, hw->mbox_rqst_pool);
+}
+
+int
+efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg)
+{
+	struct efct_mbox_rqst_ctx *ctx;
+	struct efct *efct = base;
+	struct efct_hw *hw = &efct->hw;
+
+	/*
+	 * Allocate a callback context (which includes the mbox cmd buffer);
+	 * it must be persistent, as the mbox cmd submission may be queued
+	 * and executed later.
+	 */
+	ctx = mempool_alloc(hw->mbox_rqst_pool, GFP_ATOMIC);
+	if (!ctx)
+		return EFC_FAIL;
+
+	ctx->callback = cb;
+	ctx->arg = arg;
+
+	if (efct_hw_command(hw, cmd, EFCT_CMD_NOWAIT, efct_mbox_rsp_cb, ctx)) {
+		efc_log_err(efct, "issue mbox rqst failure\n");
+		mempool_free(ctx, hw->mbox_rqst_pool);
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 39cfe6658783..76de5decfbea 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -619,4 +619,13 @@ efct_get_wwnn(struct efct_hw *hw);
 uint64_t
 efct_get_wwpn(struct efct_hw *hw);
 
+enum efct_hw_rtn efct_hw_rx_allocate(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_rx_post(struct efct_hw *hw);
+void efct_hw_rx_free(struct efct_hw *hw);
+enum efct_hw_rtn
+efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
+		void *arg);
+int
+efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg);
+
 #endif /* __EFCT_H__ */
-- 
2.26.2



* [PATCH v7 21/31] elx: efct: Hardware IO and SGL initialization
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (19 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-03-04 18:05   ` Hannes Reinecke
  2021-01-07 22:58 ` [PATCH v7 22/31] elx: efct: Hardware queues processing James Smart
                   ` (9 subsequent siblings)
  30 siblings, 1 reply; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to create IO interfaces (WQs, etc.), initialize SGLs,
and configure hardware features.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.c | 590 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  41 +++
 2 files changed, 631 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 451a5729bff4..da4e53407b94 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -1565,3 +1565,593 @@ efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg)
 	}
 	return EFC_SUCCESS;
 }
+
+static inline struct efct_hw_io *
+_efct_hw_io_alloc(struct efct_hw *hw)
+{
+	struct efct_hw_io *io = NULL;
+
+	if (!list_empty(&hw->io_free)) {
+		io = list_first_entry(&hw->io_free, struct efct_hw_io,
+				      list_entry);
+		list_del(&io->list_entry);
+	}
+	if (io) {
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_inuse);
+		io->state = EFCT_HW_IO_STATE_INUSE;
+		io->abort_reqtag = U32_MAX;
+		io->wq = hw->wq_cpu_array[raw_smp_processor_id()];
+		if (!io->wq) {
+			efc_log_err(hw->os, "WQ not assigned for cpu:%d\n",
+					raw_smp_processor_id());
+			io->wq = hw->hw_wq[0];
+		}
+		kref_init(&io->ref);
+		io->release = efct_hw_io_free_internal;
+	} else {
+		atomic_inc(&hw->io_alloc_failed_count);
+	}
+
+	return io;
+}
+
+struct efct_hw_io *
+efct_hw_io_alloc(struct efct_hw *hw)
+{
+	struct efct_hw_io *io = NULL;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	io = _efct_hw_io_alloc(hw);
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+
+	return io;
+}
+
+static void
+efct_hw_io_free_move_correct_list(struct efct_hw *hw,
+				  struct efct_hw_io *io)
+{
+	/*
+	 * When an IO is freed, depending on the exchange busy flag,
+	 * move it to the correct list.
+	 */
+	if (io->xbusy) {
+		/*
+		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
+		 * up
+		 */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_wait_free);
+		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
+	} else {
+		/* IO not busy, add to free list */
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &hw->io_free);
+		io->state = EFCT_HW_IO_STATE_FREE;
+	}
+}
+
+static inline void
+efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	/* initialize IO fields */
+	efct_hw_init_free_io(io);
+
+	/* Restore default SGL */
+	efct_hw_io_restore_sgl(hw, io);
+}
+
+void
+efct_hw_io_free_internal(struct kref *arg)
+{
+	unsigned long flags = 0;
+	struct efct_hw_io *io =	container_of(arg, struct efct_hw_io, ref);
+	struct efct_hw *hw = io->hw;
+
+	/* perform common cleanup */
+	efct_hw_io_free_common(hw, io);
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	/* remove from in-use list */
+	if (!list_empty(&io->list_entry) && !list_empty(&hw->io_inuse)) {
+		list_del_init(&io->list_entry);
+		efct_hw_io_free_move_correct_list(hw, io);
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+int
+efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io)
+{
+	return kref_put(&io->ref, io->release);
+}
+
+struct efct_hw_io *
+efct_hw_io_lookup(struct efct_hw *hw, u32 xri)
+{
+	u32 ioindex;
+
+	ioindex = xri - hw->sli.ext[SLI4_RSRC_XRI].base[0];
+	return hw->io[ioindex];
+}
+
+enum efct_hw_rtn
+efct_hw_io_init_sges(struct efct_hw *hw, struct efct_hw_io *io,
+		     enum efct_hw_io_type type)
+{
+	struct sli4_sge	*data = NULL;
+	u32 i = 0;
+	u32 skips = 0;
+	u32 sge_flags = 0;
+
+	if (!io) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n", hw, io);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Clear / reset the scatter-gather list */
+	io->sgl = &io->def_sgl;
+	io->sgl_count = io->def_sgl_count;
+	io->first_data_sge = 0;
+
+	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
+	io->n_sge = 0;
+	io->sge_offset = 0;
+
+	io->type = type;
+
+	data = io->sgl->virt;
+
+	/*
+	 * Some IO types have underlying hardware requirements on the order
+	 * of SGEs. Process all special entries here.
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE:
+
+		/* populate host resident XFER_RDY buffer */
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		data->buffer_address_high =
+			cpu_to_le32(upper_32_bits(io->xfer_rdy.phys));
+		data->buffer_address_low  =
+			cpu_to_le32(lower_32_bits(io->xfer_rdy.phys));
+		data->buffer_length = cpu_to_le32(io->xfer_rdy.size);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+
+		skips = EFCT_TARGET_WRITE_SKIPS;
+
+		io->n_sge = 1;
+		break;
+	case EFCT_HW_IO_TARGET_READ:
+		/* For FCP_TSEND64, the first 2 entries are SKIP SGEs */
+		skips = EFCT_TARGET_READ_SKIPS;
+		break;
+	case EFCT_HW_IO_TARGET_RSP:
+		/*
+		 * No skips, etc. for FCP_TRSP64
+		 */
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/*
+	 * Write skip entries
+	 */
+	for (i = 0; i < skips; i++) {
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+	}
+
+	io->n_sge += skips;
+
+	/*
+	 * Set last
+	 */
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length)
+{
+	struct sli4_sge	*data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr || !length) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p addr=%lx length=%u\n",
+			    hw, io, addr, length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (length > hw->sli.sge_supported_length) {
+		efc_log_err(hw->os,
+			     "length of SGE %d bigger than allowed %d\n",
+			    length, hw->sli.sge_supported_length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
+	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
+	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
+	data->buffer_length = cpu_to_le32(length);
+
+	/*
+	 * Always assume this is the last entry and mark it as such. If
+	 * this is not the first entry, unset the "last SGE" indication
+	 * on the previous entry.
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	if (io->n_sge) {
+		sge_flags = le32_to_cpu(data[-1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	/* Set first_data_bde if not previously set */
+	if (io->first_data_sge == 0)
+		io->first_data_sge = io->n_sge;
+
+	io->sge_offset += length;
+	io->n_sge++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+void
+efct_hw_io_abort_all(struct efct_hw *hw)
+{
+	struct efct_hw_io *io_to_abort	= NULL;
+	struct efct_hw_io *next_io = NULL;
+
+	list_for_each_entry_safe(io_to_abort, next_io,
+				 &hw->io_inuse, list_entry) {
+		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
+	}
+}
+
+static void
+efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	u32 ext = 0;
+	u32 len = 0;
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	/*
+	 * For IOs that were aborted internally, we may need to issue the
+	 * callback here, depending on whether an XRI_ABORTED CQE is
+	 * expected or not. If the status is Local Reject/No XRI, then
+	 * issue the callback now.
+	 */
+	ext = sli_fc_ext_status(&hw->sli, cqe);
+	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
+	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI && io->done) {
+		efct_hw_done_t done = io->done;
+
+		io->done = NULL;
+
+		/*
+		 * Use the latched status, as it is always saved for an
+		 * internal abort. Note: we won't have both a done and an
+		 * abort_done function, so don't worry about clobbering
+		 * the len, status and ext fields.
+		 */
+		status = io->saved_status;
+		len = io->saved_len;
+		ext = io->saved_ext;
+		io->status_saved = false;
+		done(io, len, status, ext, io->arg);
+	}
+
+	if (io->abort_done) {
+		efct_hw_done_t done = io->abort_done;
+
+		io->abort_done = NULL;
+		done(io, len, status, ext, io->abort_arg);
+	}
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	/* clear abort bit to indicate abort is complete */
+	io->abort_in_progress = false;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/* Free the WQ callback */
+	if (io->abort_reqtag == U32_MAX) {
+		efc_log_err(hw->os, "HW IO already freed\n");
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag);
+	efct_hw_reqtag_free(hw, wqcb);
+
+	/*
+	 * Call efct_hw_io_free() because this releases the WQ reservation as
+	 * well as doing the refcount put. Don't duplicate the code here.
+	 */
+	(void)efct_hw_io_free(hw, io);
+}
+
+static void
+efct_hw_fill_abort_wqe(struct efct_hw *hw, struct efct_hw_wqe *wqe)
+{
+	struct sli4_abort_wqe *abort = (void *)wqe->wqebuf;
+
+	memset(abort, 0, hw->sli.wqe_size);
+
+	abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
+	abort->ia_ir_byte |= wqe->send_abts ? 0 : 1;
+
+	/* Suppress ABTS retries */
+	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
+
+	abort->t_tag  = cpu_to_le32(wqe->id);
+	abort->command = SLI4_WQE_ABORT;
+	abort->request_tag = cpu_to_le16(wqe->abort_reqtag);
+
+	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
+
+	abort->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT);
+}
+
+enum efct_hw_rtn
+efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
+		 bool send_abts, void *cb, void *arg)
+{
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	if (!io_to_abort) {
+		efc_log_err(hw->os,
+			     "bad parameter hw=%p io=%p\n",
+			    hw, io_to_abort);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
+			     hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* take a reference on IO being aborted */
+	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
+		/* command no longer active */
+		efc_log_debug(hw->os,
+			      "io not active xri=0x%x tag=0x%x\n",
+			     io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/* Must have a valid WQ reference */
+	if (!io_to_abort->wq) {
+		efc_log_debug(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
+			      io_to_abort->indicator);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		return EFCT_HW_RTN_IO_NOT_ACTIVE;
+	}
+
+	/*
+	 * Validation checks complete; now check to see if already being
+	 * aborted
+	 */
+	spin_lock_irqsave(&hw->io_abort_lock, flags);
+	if (io_to_abort->abort_in_progress) {
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		efc_log_debug(hw->os,
+			       "io already being aborted xri=0x%x tag=0x%x\n",
+			      io_to_abort->indicator, io_to_abort->reqtag);
+		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
+	}
+
+	/*
+	 * This IO is not already being aborted. Set flag so we won't try to
+	 * abort it again. After all, we only have one abort_done callback.
+	 */
+	io_to_abort->abort_in_progress = true;
+	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+
+	/*
+	 * If we got here, the possibilities are:
+	 * - host owned xri
+	 *	- io_to_abort->wq_index != U32_MAX
+	 *		- submit ABORT_WQE to same WQ
+	 * - port owned xri:
+	 *	- rxri: io_to_abort->wq_index == U32_MAX
+	 *		- submit ABORT_WQE to any WQ
+	 *	- non-rxri
+	 *		- io_to_abort->index != U32_MAX
+	 *			- submit ABORT_WQE to same WQ
+	 *		- io_to_abort->index == U32_MAX
+	 *			- submit ABORT_WQE to any WQ
+	 */
+	io_to_abort->abort_done = cb;
+	io_to_abort->abort_arg  = arg;
+
+	/* Allocate a request tag for the abort portion of this IO */
+	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
+	if (!wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+
+	io_to_abort->abort_reqtag = wqcb->instance_index;
+	io_to_abort->wqe.send_abts = send_abts;
+	io_to_abort->wqe.id = io_to_abort->indicator;
+	io_to_abort->wqe.abort_reqtag = io_to_abort->abort_reqtag;
+
+	/*
+	 * If the wqe is on the pending list, then set this wqe to be
+	 * aborted when the IO's wqe is removed from the list.
+	 */
+	if (io_to_abort->wq) {
+		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
+		if (io_to_abort->wqe.list_entry.next) {
+			io_to_abort->wqe.abort_wqe_submit_needed = true;
+			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
+					       flags);
+			return EFC_SUCCESS;
+		}
+		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
+	}
+
+	efct_hw_fill_abort_wqe(hw, &io_to_abort->wqe);
+
+	/* An ABORT_WQE does not actually utilize an XRI on the port;
+	 * therefore, keep xbusy as-is to track the exchange's state,
+	 * not the ABORT_WQE's state.
+	 */
+	if (efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe)) {
+		spin_lock_irqsave(&hw->io_abort_lock, flags);
+		io_to_abort->abort_in_progress = false;
+		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
+		/* efct_ref_get(): same function */
+		kref_put(&io_to_abort->ref, io_to_abort->release);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+void
+efct_hw_reqtag_pool_free(struct efct_hw *hw)
+{
+	u32 i;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+	struct hw_wq_callback *wqcb = NULL;
+
+	if (reqtag_pool) {
+		for (i = 0; i < U16_MAX; i++) {
+			wqcb = reqtag_pool->tags[i];
+			if (!wqcb)
+				continue;
+
+			kfree(wqcb);
+		}
+		kfree(reqtag_pool);
+		hw->wq_reqtag_pool = NULL;
+	}
+}
+
+struct reqtag_pool *
+efct_hw_reqtag_pool_alloc(struct efct_hw *hw)
+{
+	u32 i = 0;
+	struct reqtag_pool *reqtag_pool;
+	struct hw_wq_callback *wqcb;
+
+	reqtag_pool = kzalloc(sizeof(*reqtag_pool), GFP_KERNEL);
+	if (!reqtag_pool)
+		return NULL;
+
+	INIT_LIST_HEAD(&reqtag_pool->freelist);
+	/* initialize reqtag pool lock */
+	spin_lock_init(&reqtag_pool->lock);
+	for (i = 0; i < U16_MAX; i++) {
+		wqcb = kmalloc(sizeof(*wqcb), GFP_KERNEL);
+		if (!wqcb)
+			break;
+
+		reqtag_pool->tags[i] = wqcb;
+		wqcb->instance_index = i;
+		wqcb->callback = NULL;
+		wqcb->arg = NULL;
+		INIT_LIST_HEAD(&wqcb->list_entry);
+		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
+	}
+
+	return reqtag_pool;
+}
+
+struct hw_wq_callback *
+efct_hw_reqtag_alloc(struct efct_hw *hw,
+		     void (*callback)(void *arg, u8 *cqe, int status),
+		     void *arg)
+{
+	struct hw_wq_callback *wqcb = NULL;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+	unsigned long flags = 0;
+
+	if (!callback)
+		return wqcb;
+
+	spin_lock_irqsave(&reqtag_pool->lock, flags);
+
+	if (!list_empty(&reqtag_pool->freelist)) {
+		wqcb = list_first_entry(&reqtag_pool->freelist,
+				      struct hw_wq_callback, list_entry);
+	}
+
+	if (wqcb) {
+		list_del_init(&wqcb->list_entry);
+		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
+		wqcb->callback = callback;
+		wqcb->arg = arg;
+	} else {
+		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
+	}
+
+	return wqcb;
+}
+
+void
+efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb)
+{
+	unsigned long flags = 0;
+	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
+
+	if (!wqcb->callback)
+		efc_log_err(hw->os, "WQCB is already freed\n");
+
+	spin_lock_irqsave(&reqtag_pool->lock, flags);
+	wqcb->callback = NULL;
+	wqcb->arg = NULL;
+	INIT_LIST_HEAD(&wqcb->list_entry);
+	list_add(&wqcb->list_entry, &hw->wq_reqtag_pool->freelist);
+	spin_unlock_irqrestore(&reqtag_pool->lock, flags);
+}
+
+struct hw_wq_callback *
+efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index)
+{
+	struct hw_wq_callback *wqcb;
+
+	wqcb = hw->wq_reqtag_pool->tags[instance_index];
+	if (!wqcb)
+		efc_log_err(hw->os, "wqcb for instance %d is null\n",
+			     instance_index);
+
+	return wqcb;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 76de5decfbea..dd00d4df6ada 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -628,4 +628,45 @@ efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
 int
 efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg);
 
+struct efct_hw_io *efct_hw_io_alloc(struct efct_hw *hw);
+int efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io);
+u8 efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io);
+enum efct_hw_rtn
+efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		struct efct_hw_io *io, union efct_hw_io_param_u *iparam,
+		void *cb, void *arg);
+enum efct_hw_rtn
+efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
+			struct efc_dma *sgl,
+			u32 sgl_count);
+enum efct_hw_rtn
+efct_hw_io_init_sges(struct efct_hw *hw,
+		     struct efct_hw_io *io, enum efct_hw_io_type type);
+
+enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length);
+enum efct_hw_rtn
+efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
+		 bool send_abts, void *cb, void *arg);
+u32
+efct_hw_io_get_count(struct efct_hw *hw,
+		     enum efct_hw_io_count_type io_count_type);
+struct efct_hw_io
+*efct_hw_io_lookup(struct efct_hw *hw, u32 indicator);
+void efct_hw_io_abort_all(struct efct_hw *hw);
+void efct_hw_io_free_internal(struct kref *arg);
+
+/* HW WQ request tag API */
+struct reqtag_pool *efct_hw_reqtag_pool_alloc(struct efct_hw *hw);
+void efct_hw_reqtag_pool_free(struct efct_hw *hw);
+struct hw_wq_callback
+*efct_hw_reqtag_alloc(struct efct_hw *hw,
+			void (*callback)(void *arg, u8 *cqe,
+					 int status), void *arg);
+void
+efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb);
+struct hw_wq_callback
+*efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
+
 #endif /* __EFCT_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 22/31] elx: efct: Hardware queues processing
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (20 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 21/31] elx: efct: Hardware IO and SGL initialization James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 23/31] elx: efct: Unsolicited FC frame processing routines James Smart
                   ` (8 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for EQ, CQ, WQ and RQ processing.
Routines for IO object pool allocation and deallocation.

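The queue-hash lookup in this patch (efct_hw_queue_hash_find) resolves a hardware queue ID to a driver array index by linear probing over a power-of-two table that is guaranteed larger than the queue count. The same probing scheme can be sketched standalone in userspace C (table size and names here are illustrative, not the driver's actual values):

```c
#include <stdbool.h>

#define Q_HASH_SIZE 128	/* power of two, larger than the max queue count */

struct q_hash_entry {
	bool in_use;
	unsigned short id;	/* hardware queue id */
	int index;		/* driver-side array index */
};

/*
 * Linear-probe lookup mirroring the logic of efct_hw_queue_hash_find().
 * Because the table has more slots than queues, probing always reaches
 * an unused slot when the id is absent, so the loop terminates.
 */
static int q_hash_find(const struct q_hash_entry *hash, unsigned short id)
{
	int index = -1;
	int i = id & (Q_HASH_SIZE - 1);

	do {
		if (hash[i].in_use && hash[i].id == id)
			index = hash[i].index;
		else
			i = (i + 1) & (Q_HASH_SIZE - 1);
	} while (index == -1 && hash[i].in_use);

	return index;
}
```

A colliding entry (two IDs hashing to the same initial slot) simply lands in the next free slot and is still found by the probe loop.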
Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.c | 363 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  38 ++++
 drivers/scsi/elx/efct/efct_io.c | 191 +++++++++++++++++
 drivers/scsi/elx/efct/efct_io.h | 174 +++++++++++++++
 4 files changed, 766 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_io.c
 create mode 100644 drivers/scsi/elx/efct/efct_io.h
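The IO object pool in efct_io.c pre-allocates a fixed array of IO structures and threads the free ones onto an intrusive free list; alloc pops the head and free pushes back. A minimal userspace sketch of that pattern (the driver uses list_head plus a spinlock; locking is omitted here and the names are illustrative):

```c
#include <stddef.h>

#define POOL_SIZE 4

struct io {
	struct io *next;	/* free-list link (the driver uses list_head) */
	int tag;
	int in_use;
};

struct io_pool {
	struct io ios[POOL_SIZE];
	struct io *freelist;
};

/* Put every IO on the free list, tagged with its array index. */
static void pool_init(struct io_pool *p)
{
	int i;

	p->freelist = NULL;
	for (i = POOL_SIZE - 1; i >= 0; i--) {
		p->ios[i].tag = i;
		p->ios[i].in_use = 0;
		p->ios[i].next = p->freelist;
		p->freelist = &p->ios[i];
	}
}

/* Pop the first free IO, as efct_io_pool_io_alloc() does under its lock. */
static struct io *pool_alloc(struct io_pool *p)
{
	struct io *io = p->freelist;

	if (!io)
		return NULL;	/* pool exhausted */
	p->freelist = io->next;
	io->in_use = 1;
	return io;
}

/* Push the IO back onto the free list, as efct_io_pool_io_free() does. */
static void pool_free(struct io_pool *p, struct io *io)
{
	io->in_use = 0;
	io->next = p->freelist;
	p->freelist = io;
}
```

Because the pool is fixed at init time, alloc/free never touches the allocator in the IO fast path; exhaustion is reported by a NULL return rather than by blocking.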

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index da4e53407b94..95522f0226d9 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -2155,3 +2155,366 @@ efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index)
 
 	return wqcb;
 }
+
+int
+efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id)
+{
+	int index = -1;
+	int i = id & (EFCT_HW_Q_HASH_SIZE - 1);
+
+	/*
+	 * Since the hash is always bigger than the maximum number of Qs,
+	 * we never have to worry about an infinite loop. We will always find
+	 * an unused entry.
+	 */
+	do {
+		if (hash[i].in_use && hash[i].id == id)
+			index = hash[i].index;
+		else
+			i = (i + 1) & (EFCT_HW_Q_HASH_SIZE - 1);
+	} while (index == -1 && hash[i].in_use);
+
+	return index;
+}
+
+int
+efct_hw_process(struct efct_hw *hw, u32 vector,
+		u32 max_isr_time_msec)
+{
+	struct hw_eq *eq;
+
+	/*
+	 * The caller should disable interrupts if they wish to prevent us
+	 * from processing during a shutdown. The following states are defined:
+	 *   EFCT_HW_STATE_UNINITIALIZED - No queues allocated
+	 *   EFCT_HW_STATE_QUEUES_ALLOCATED - The state after a chip reset,
+	 *                                    queues are cleared.
+	 *   EFCT_HW_STATE_ACTIVE - Chip and queues are operational
+	 *   EFCT_HW_STATE_RESET_IN_PROGRESS - reset, we still want completions
+	 *   EFCT_HW_STATE_TEARDOWN_IN_PROGRESS - We still want mailbox
+	 *                                        completions.
+	 */
+	if (hw->state == EFCT_HW_STATE_UNINITIALIZED)
+		return EFC_SUCCESS;
+
+	/* Get pointer to struct hw_eq */
+	eq = hw->hw_eq[vector];
+	if (!eq)
+		return EFC_SUCCESS;
+
+	eq->use_count++;
+
+	return efct_hw_eq_process(hw, eq, max_isr_time_msec);
+}
+
+int
+efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
+		   u32 max_isr_time_msec)
+{
+	u8 eqe[sizeof(struct sli4_eqe)] = { 0 };
+	u32 tcheck_count;
+	u64 tstart;
+	u64 telapsed;
+	bool done = false;
+
+	tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!done && !sli_eq_read(&hw->sli, eq->queue, eqe)) {
+		u16 cq_id = 0;
+		int rc;
+
+		rc = sli_eq_parse(&hw->sli, eqe, &cq_id);
+		if (unlikely(rc)) {
+			if (rc == SLI4_EQE_STATUS_EQ_FULL) {
+				u32 i;
+
+				/*
+				 * Received a sentinel EQE indicating the
+				 * EQ is full. Process all CQs
+				 */
+				for (i = 0; i < hw->cq_count; i++)
+					efct_hw_cq_process(hw, hw->hw_cq[i]);
+				continue;
+			} else {
+				return rc;
+			}
+		} else {
+			int index;
+
+			index  = efct_hw_queue_hash_find(hw->cq_hash, cq_id);
+
+			if (likely(index >= 0))
+				efct_hw_cq_process(hw, hw->hw_cq[index]);
+			else
+				efc_log_err(hw->os, "bad CQ_ID %#06x\n",
+					     cq_id);
+		}
+
+		if (eq->queue->n_posted > eq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, eq->queue, false);
+
+		if (tcheck_count && (--tcheck_count == 0)) {
+			tcheck_count = EFCT_HW_TIMECHECK_ITERATIONS;
+			telapsed = jiffies_to_msecs(jiffies) - tstart;
+			if (telapsed >= max_isr_time_msec)
+				done = true;
+		}
+	}
+	sli_queue_eq_arm(&hw->sli, eq->queue, true);
+
+	return EFC_SUCCESS;
+}
+
+static int
+_efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
+{
+	int queue_rc;
+
+	/* Every so often, set the wqec bit to generate consumed completions */
+	if (wq->wqec_count)
+		wq->wqec_count--;
+
+	if (wq->wqec_count == 0) {
+		struct sli4_generic_wqe *genwqe = (void *)wqe->wqebuf;
+
+		genwqe->cmdtype_wqec_byte |= SLI4_GEN_WQE_WQEC;
+		wq->wqec_count = wq->wqec_set_count;
+	}
+
+	/* Decrement WQ free count */
+	wq->free_count--;
+
+	queue_rc = sli_wq_write(&wq->hw->sli, wq->queue, wqe->wqebuf);
+
+	return (queue_rc < 0) ? EFC_FAIL : EFC_SUCCESS;
+}
+
+static void
+hw_wq_submit_pending(struct hw_wq *wq, u32 update_free_count)
+{
+	struct efct_hw_wqe *wqe;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+
+	/* Update free count with value passed in */
+	wq->free_count += update_free_count;
+
+	while ((wq->free_count > 0) && (!list_empty(&wq->pending_list))) {
+		wqe = list_first_entry(&wq->pending_list,
+				       struct efct_hw_wqe, list_entry);
+		list_del_init(&wqe->list_entry);
+		_efct_hw_wq_write(wq, wqe);
+
+		if (wqe->abort_wqe_submit_needed) {
+			wqe->abort_wqe_submit_needed = false;
+			efct_hw_fill_abort_wqe(wq->hw, wqe);
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+}
+
+void
+efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq)
+{
+	u8 cqe[sizeof(struct sli4_mcqe)];
+	u16 rid = U16_MAX;
+	/* completion type */
+	enum sli4_qentry ctype;
+	u32 n_processed = 0;
+	u32 tstart, telapsed;
+
+	tstart = jiffies_to_msecs(jiffies);
+
+	while (!sli_cq_read(&hw->sli, cq->queue, cqe)) {
+		int status;
+
+		status = sli_cq_parse(&hw->sli, cq->queue, cqe, &ctype, &rid);
+		/*
+		 * The sign of status is significant. If status is:
+		 * == 0 : call completed correctly and
+		 * the CQE indicated success
+		 * > 0 : call completed correctly and
+		 * the CQE indicated an error
+		 * < 0 : call failed and no information is available about the
+		 * CQE
+		 */
+		if (status < 0) {
+			if (status == SLI4_MCQE_STATUS_NOT_COMPLETED)
+				/*
+				 * Notification that an entry was consumed,
+				 * but not completed
+				 */
+				continue;
+
+			break;
+		}
+
+		switch (ctype) {
+		case SLI4_QENTRY_ASYNC:
+			sli_cqe_async(&hw->sli, cqe);
+			break;
+		case SLI4_QENTRY_MQ:
+			/*
+			 * Process MQ entry. Note there is no way to determine
+			 * the MQ_ID from the completion entry.
+			 */
+			efct_hw_mq_process(hw, status, hw->mq);
+			break;
+		case SLI4_QENTRY_WQ:
+			efct_hw_wq_process(hw, cq, cqe, status, rid);
+			break;
+		case SLI4_QENTRY_WQ_RELEASE: {
+			u32 wq_id = rid;
+			int index;
+			struct hw_wq *wq = NULL;
+
+			index = efct_hw_queue_hash_find(hw->wq_hash, wq_id);
+
+			if (likely(index >= 0)) {
+				wq = hw->hw_wq[index];
+			} else {
+				efc_log_err(hw->os, "bad WQ_ID %#06x\n", wq_id);
+				break;
+			}
+			/* Submit any HW IOs that are on the WQ pending list */
+			hw_wq_submit_pending(wq, wq->wqec_set_count);
+
+			break;
+		}
+
+		case SLI4_QENTRY_RQ:
+			efct_hw_rqpair_process_rq(hw, cq, cqe);
+			break;
+		case SLI4_QENTRY_XABT: {
+			efct_hw_xabt_process(hw, cq, cqe, rid);
+			break;
+		}
+		default:
+			efc_log_debug(hw->os,
+				      "unhandled ctype=%#x rid=%#x\n",
+				     ctype, rid);
+			break;
+		}
+
+		n_processed++;
+		if (n_processed == cq->queue->proc_limit)
+			break;
+
+		if (cq->queue->n_posted >= cq->queue->posted_limit)
+			sli_queue_arm(&hw->sli, cq->queue, false);
+	}
+
+	sli_queue_arm(&hw->sli, cq->queue, true);
+
+	if (n_processed > cq->queue->max_num_processed)
+		cq->queue->max_num_processed = n_processed;
+	telapsed = jiffies_to_msecs(jiffies) - tstart;
+	if (telapsed > cq->queue->max_process_time)
+		cq->queue->max_process_time = telapsed;
+}
+
+void
+efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
+		   u8 *cqe, int status, u16 rid)
+{
+	struct hw_wq_callback *wqcb;
+
+	if (rid == EFCT_HW_REQUE_XRI_REGTAG) {
+		if (status)
+			efc_log_err(hw->os, "reque xri failed, status = %d\n",
+				     status);
+		return;
+	}
+
+	wqcb = efct_hw_reqtag_get_instance(hw, rid);
+	if (!wqcb) {
+		efc_log_err(hw->os, "invalid request tag: x%x\n", rid);
+		return;
+	}
+
+	if (!wqcb->callback) {
+		efc_log_err(hw->os, "wqcb callback is NULL\n");
+		return;
+	}
+
+	(*wqcb->callback)(wqcb->arg, cqe, status);
+}
+
+void
+efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
+		     u8 *cqe, u16 rid)
+{
+	/* search IOs wait free list */
+	struct efct_hw_io *io = NULL;
+	unsigned long flags = 0;
+
+	io = efct_hw_io_lookup(hw, rid);
+	if (!io) {
+		/* IO lookup failure should never happen */
+		efc_log_err(hw->os,
+			     "Error: xabt io lookup failed rid=%#x\n", rid);
+		return;
+	}
+
+	if (!io->xbusy)
+		efc_log_debug(hw->os, "xabt io not busy rid=%#x\n", rid);
+	else
+		/* mark IO as no longer busy */
+		io->xbusy = false;
+
+	/*
+	 * For IOs that were aborted internally, we need to issue any pending
+	 * callback here.
+	 */
+	if (io->done) {
+		efct_hw_done_t done = io->done;
+		void		*arg = io->arg;
+
+		/*
+		 * Use latched status as this is always saved for an internal
+		 * abort
+		 */
+		int status = io->saved_status;
+		u32 len = io->saved_len;
+		u32 ext = io->saved_ext;
+
+		io->done = NULL;
+		io->status_saved = false;
+
+		done(io, len, status, ext, arg);
+	}
+
+	spin_lock_irqsave(&hw->io_lock, flags);
+	if (io->state == EFCT_HW_IO_STATE_INUSE ||
+	    io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+		/* if on wait_free list, caller has already freed IO;
+		 * remove from wait_free list and add to free list.
+		 * if on in-use list, already marked as no longer busy;
+		 * just leave there and wait for caller to free.
+		 */
+		if (io->state == EFCT_HW_IO_STATE_WAIT_FREE) {
+			io->state = EFCT_HW_IO_STATE_FREE;
+			list_del_init(&io->list_entry);
+			efct_hw_io_free_move_correct_list(hw, io);
+		}
+	}
+	spin_unlock_irqrestore(&hw->io_lock, flags);
+}
+
+static int
+efct_hw_flush(struct efct_hw *hw)
+{
+	u32 i = 0;
+
+	/* Process any remaining completions */
+	for (i = 0; i < hw->eq_count; i++)
+		efct_hw_process(hw, i, ~0);
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index dd00d4df6ada..bcc8e0443460 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -669,4 +669,42 @@ efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb);
 struct hw_wq_callback
 *efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
 
+/* RQ completion handlers for RQ pair mode */
+int
+efct_hw_rqpair_process_rq(struct efct_hw *hw,
+			  struct hw_cq *cq, u8 *cqe);
+enum efct_hw_rtn
+efct_hw_rqpair_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq);
+static inline void
+efct_hw_sequence_copy(struct efc_hw_sequence *dst,
+		      struct efc_hw_sequence *src)
+{
+	/* Copy src to dst, then zero out the linked list link */
+	*dst = *src;
+}
+
+int
+efct_efc_hw_sequence_free(struct efc *efc, struct efc_hw_sequence *seq);
+
+static inline enum efct_hw_rtn
+efct_hw_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq)
+{
+	/* Only RQ pair mode is supported */
+	return efct_hw_rqpair_sequence_free(hw, seq);
+}
+int
+efct_hw_eq_process(struct efct_hw *hw, struct hw_eq *eq,
+		   u32 max_isr_time_msec);
+void efct_hw_cq_process(struct efct_hw *hw, struct hw_cq *cq);
+void
+efct_hw_wq_process(struct efct_hw *hw, struct hw_cq *cq,
+		   u8 *cqe, int status, u16 rid);
+void
+efct_hw_xabt_process(struct efct_hw *hw, struct hw_cq *cq,
+		     u8 *cqe, u16 rid);
+int
+efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
+int
+efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_io.c b/drivers/scsi/elx/efct/efct_io.c
new file mode 100644
index 000000000000..669ed12a47a9
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+#include "efct_io.h"
+
+struct efct_io_pool {
+	struct efct *efct;
+	spinlock_t lock;	/* IO pool lock */
+	u32 io_num_ios;		/* Total IOs allocated */
+	struct efct_io *ios[EFCT_NUM_SCSI_IOS];
+	struct list_head freelist;
+
+};
+
+struct efct_io_pool *
+efct_io_pool_create(struct efct *efct, u32 num_sgl)
+{
+	u32 i = 0;
+	struct efct_io_pool *io_pool;
+	struct efct_io *io;
+
+	/* Allocate the IO pool */
+	io_pool = kzalloc(sizeof(*io_pool), GFP_KERNEL);
+	if (!io_pool)
+		return NULL;
+
+	io_pool->efct = efct;
+	INIT_LIST_HEAD(&io_pool->freelist);
+	/* initialize IO pool lock */
+	spin_lock_init(&io_pool->lock);
+
+	for (i = 0; i < EFCT_NUM_SCSI_IOS; i++) {
+		io = kzalloc(sizeof(*io), GFP_KERNEL);
+		if (!io)
+			break;
+
+		io_pool->io_num_ios++;
+		io_pool->ios[i] = io;
+		io->tag = i;
+		io->instance_index = i;
+
+		/* Allocate a response buffer */
+		io->rspbuf.size = SCSI_RSP_BUF_LENGTH;
+		io->rspbuf.virt = dma_alloc_coherent(&efct->pci->dev,
+						     io->rspbuf.size,
+						     &io->rspbuf.phys, GFP_DMA);
+		if (!io->rspbuf.virt) {
+			efc_log_err(efct, "dma_alloc rspbuf failed\n");
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		/* Allocate SGL */
+		io->sgl = kzalloc(sizeof(*io->sgl) * num_sgl, GFP_KERNEL);
+		if (!io->sgl) {
+			efct_io_pool_free(io_pool);
+			return NULL;
+		}
+
+		memset(io->sgl, 0, sizeof(*io->sgl) * num_sgl);
+		io->sgl_allocated = num_sgl;
+		io->sgl_count = 0;
+
+		INIT_LIST_HEAD(&io->list_entry);
+		list_add_tail(&io->list_entry, &io_pool->freelist);
+	}
+
+	return io_pool;
+}
+
+int
+efct_io_pool_free(struct efct_io_pool *io_pool)
+{
+	struct efct *efct;
+	u32 i;
+	struct efct_io *io;
+
+	if (io_pool) {
+		efct = io_pool->efct;
+
+		for (i = 0; i < io_pool->io_num_ios; i++) {
+			io = io_pool->ios[i];
+			if (!io)
+				continue;
+
+			kfree(io->sgl);
+			dma_free_coherent(&efct->pci->dev,
+					  io->rspbuf.size, io->rspbuf.virt,
+					  io->rspbuf.phys);
+			memset(&io->rspbuf, 0, sizeof(struct efc_dma));
+		}
+
+		kfree(io_pool);
+		efct->xport->io_pool = NULL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+struct efct_io *
+efct_io_pool_io_alloc(struct efct_io_pool *io_pool)
+{
+	struct efct_io *io = NULL;
+	struct efct *efct;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+
+	if (!list_empty(&io_pool->freelist)) {
+		io = list_first_entry(&io_pool->freelist, struct efct_io,
+				     list_entry);
+		list_del_init(&io->list_entry);
+	}
+
+	spin_unlock_irqrestore(&io_pool->lock, flags);
+
+	if (!io)
+		return NULL;
+
+	io->io_type = EFCT_IO_TYPE_MAX;
+	io->hio_type = EFCT_HW_IO_MAX;
+	io->hio = NULL;
+	io->transferred = 0;
+	io->efct = efct;
+	io->timeout = 0;
+	io->sgl_count = 0;
+	io->tgt_task_tag = 0;
+	io->init_task_tag = 0;
+	io->hw_tag = 0;
+	io->display_name = "pending";
+	io->seq_init = 0;
+	io->io_free = 0;
+	io->release = NULL;
+	atomic_add_return(1, &efct->xport->io_active_count);
+	atomic_add_return(1, &efct->xport->io_total_alloc);
+	return io;
+}
+
+/* Free an object used to track an IO */
+void
+efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io)
+{
+	struct efct *efct;
+	struct efct_hw_io *hio = NULL;
+	unsigned long flags = 0;
+
+	efct = io_pool->efct;
+
+	spin_lock_irqsave(&io_pool->lock, flags);
+	hio = io->hio;
+	io->hio = NULL;
+	io->io_free = 1;
+	INIT_LIST_HEAD(&io->list_entry);
+	list_add(&io->list_entry, &io_pool->freelist);
+	spin_unlock_irqrestore(&io_pool->lock, flags);
+
+	if (hio)
+		efct_hw_io_free(&efct->hw, hio);
+
+	atomic_sub_return(1, &efct->xport->io_active_count);
+	atomic_add_return(1, &efct->xport->io_total_free);
+}
+
+/* Find an I/O given its node and ox_id */
+struct efct_io *
+efct_io_find_tgt_io(struct efct *efct, struct efct_node *node,
+		    u16 ox_id, u16 rx_id)
+{
+	struct efct_io	*io = NULL;
+	unsigned long flags = 0;
+	u8 found = false;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_for_each_entry(io, &node->active_ios, list_entry) {
+		if ((io->cmd_tgt && io->init_task_tag == ox_id) &&
+		    (rx_id == 0xffff || io->tgt_task_tag == rx_id)) {
+			if (kref_get_unless_zero(&io->ref))
+				found = true;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	return found ? io : NULL;
+}
diff --git a/drivers/scsi/elx/efct/efct_io.h b/drivers/scsi/elx/efct/efct_io.h
new file mode 100644
index 000000000000..bb0f51811a7c
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_io.h
@@ -0,0 +1,174 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_IO_H__)
+#define __EFCT_IO_H__
+
+#include "efct_lio.h"
+
+#define EFCT_LOG_ENABLE_IO_ERRORS(efct)		\
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 6)) != 0) : 0)
+
+#define io_error_log(io, fmt, ...)  \
+	do { \
+		if (EFCT_LOG_ENABLE_IO_ERRORS(io->efct)) \
+			efc_log_warn(io->efct, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define SCSI_CMD_BUF_LENGTH	48
+#define SCSI_RSP_BUF_LENGTH	(FCP_RESP_WITH_EXT + SCSI_SENSE_BUFFERSIZE)
+#define EFCT_NUM_SCSI_IOS	8192
+
+enum efct_io_type {
+	EFCT_IO_TYPE_IO = 0,
+	EFCT_IO_TYPE_ELS,
+	EFCT_IO_TYPE_CT,
+	EFCT_IO_TYPE_CT_RESP,
+	EFCT_IO_TYPE_BLS_RESP,
+	EFCT_IO_TYPE_ABORT,
+
+	EFCT_IO_TYPE_MAX,
+};
+
+enum efct_els_state {
+	EFCT_ELS_REQUEST = 0,
+	EFCT_ELS_REQUEST_DELAYED,
+	EFCT_ELS_REQUEST_DELAY_ABORT,
+	EFCT_ELS_REQ_ABORT,
+	EFCT_ELS_REQ_ABORTED,
+	EFCT_ELS_ABORT_IO_COMPL,
+};
+
+/**
+ * Scsi target IO object
+ * @efct:		pointer back to efct
+ * @instance_index:	unique instance index value
+ * @io:			IO display name
+ * @node:		pointer to node
+ * @list_entry:		io list entry
+ * @io_pending_link:	io pending list entry
+ * @ref:		reference counter
+ * @release:		release callback function
+ * @init_task_tag:	initiator task tag (OX_ID) for back-end and SCSI logging
+ * @tgt_task_tag:	target task tag (RX_ID) for back-end and SCSI logging
+ * @hw_tag:		HW layer unique IO id
+ * @tag:		unique IO identifier
+ * @sgl:		SGL
+ * @sgl_allocated:	Number of allocated SGEs
+ * @sgl_count:		Number of SGEs in this SGL
+ * @tgt_io:		backend target private IO data
+ * @exp_xfer_len:	expected data transfer length, based on FC header
+ * @hw_priv:		Declarations private to HW/SLI
+ * @io_type:		indicates what this struct efct_io structure is used for
+ * @hio:		hw io object
+ * @transferred:	Number of bytes transferred
+ * @auto_resp:		set if auto_trsp was set
+ * @low_latency:	set if low latency request
+ * @wq_steering:	selected WQ steering request
+ * @wq_class:		selected WQ class if steering is class
+ * @xfer_req:		transfer size for current request
+ * @scsi_tgt_cb:	target callback function
+ * @scsi_tgt_cb_arg:	target callback function argument
+ * @abort_cb:		abort callback function
+ * @abort_cb_arg:	abort callback function argument
+ * @bls_cb:		BLS callback function
+ * @bls_cb_arg:		BLS callback function argument
+ * @tmf_cmd:		TMF command being processed
+ * @abort_rx_id:	rx_id from the ABTS that initiated the command abort
+ * @cmd_tgt:		True if this is a Target command
+ * @send_abts:		when aborting, indicates ABTS is to be sent
+ * @cmd_ini:		True if this is an Initiator command
+ * @seq_init:		True if local node has sequence initiative
+ * @iparam:		iparams for hw io send call
+ * @hio_type:		HW IO type
+ * @wire_len:		wire length
+ * @hw_cb:		saved HW callback
+ * @io_to_abort:	for abort handling, pointer to IO to abort
+ * @rspbuf:		SCSI Response buffer
+ * @timeout:		Timeout value in seconds for this IO
+ * @cs_ctl:		CS_CTL priority for this IO
+ * @io_free:		Is io object in freelist
+ * @app_id:		application id
+ */
+struct efct_io {
+	struct efct		*efct;
+	u32			instance_index;
+	const char		*display_name;
+	struct efct_node	*node;
+
+	struct list_head	list_entry;
+	struct list_head	io_pending_link;
+	struct kref		ref;
+	void (*release)(struct kref *arg);
+	u32			init_task_tag;
+	u32			tgt_task_tag;
+	u32			hw_tag;
+	u32			tag;
+	struct efct_scsi_sgl	*sgl;
+	u32			sgl_allocated;
+	u32			sgl_count;
+	struct efct_scsi_tgt_io tgt_io;
+	u32			exp_xfer_len;
+
+	void			*hw_priv;
+
+	enum efct_io_type	io_type;
+	struct efct_hw_io	*hio;
+	size_t			transferred;
+
+	bool			auto_resp;
+	bool			low_latency;
+	u8			wq_steering;
+	u8			wq_class;
+	u64			xfer_req;
+	efct_scsi_io_cb_t	scsi_tgt_cb;
+	void			*scsi_tgt_cb_arg;
+	efct_scsi_io_cb_t	abort_cb;
+	void			*abort_cb_arg;
+	efct_scsi_io_cb_t	bls_cb;
+	void			*bls_cb_arg;
+	enum efct_scsi_tmf_cmd	tmf_cmd;
+	u16			abort_rx_id;
+
+	bool			cmd_tgt;
+	bool			send_abts;
+	bool			cmd_ini;
+	bool			seq_init;
+	union efct_hw_io_param_u iparam;
+	enum efct_hw_io_type	hio_type;
+	u64			wire_len;
+	void			*hw_cb;
+
+	struct efct_io		*io_to_abort;
+
+	struct efc_dma		rspbuf;
+	u32			timeout;
+	u8			cs_ctl;
+	u8			io_free;
+	u32			app_id;
+};
+
+struct efct_io_cb_arg {
+	int status;
+	int ext_status;
+	void *app;
+};
+
+struct efct_io_pool *
+efct_io_pool_create(struct efct *efct, u32 num_sgl);
+int
+efct_io_pool_free(struct efct_io_pool *io_pool);
+u32
+efct_io_pool_allocated(struct efct_io_pool *io_pool);
+
+struct efct_io *
+efct_io_pool_io_alloc(struct efct_io_pool *io_pool);
+void
+efct_io_pool_io_free(struct efct_io_pool *io_pool, struct efct_io *io);
+struct efct_io *
+efct_io_find_tgt_io(struct efct *efct, struct efct_node *node,
+		    u16 ox_id, u16 rx_id);
+#endif /* __EFCT_IO_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 23/31] elx: efct: Unsolicited FC frame processing routines
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (21 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 22/31] elx: efct: Hardware queues processing James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 24/31] elx: efct: SCSI IO handling routines James Smart
                   ` (7 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to handle unsolicited FC frames.

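The node lookup in this patch (efct_node_find) packs the 24-bit FC destination and source IDs into a single 64-bit xarray key: `(u64)port_id << 32 | node_id`. The packing can be sketched standalone; the helper names below are illustrative (the driver builds the key inline):

```c
#include <stdint.h>

/*
 * Pack a (port_id, node_id) pair into one 64-bit lookup key, as
 * efct_node_find() does before calling xa_load(). FC IDs are 24-bit,
 * so the two halves never overlap in the combined key.
 */
static uint64_t efct_node_key(uint32_t port_id, uint32_t node_id)
{
	return (uint64_t)port_id << 32 | node_id;
}

static uint32_t key_port_id(uint64_t key)
{
	return (uint32_t)(key >> 32);
}

static uint32_t key_node_id(uint64_t key)
{
	return (uint32_t)(key & 0xffffffffu);
}
```

The cast on port_id matters: without it the shift would be performed in 32-bit arithmetic and the upper half of the key would be lost.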
Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_unsol.c | 493 +++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_unsol.h |  17 +
 2 files changed, 510 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.c
 create mode 100644 drivers/scsi/elx/efct/efct_unsol.h

diff --git a/drivers/scsi/elx/efct/efct_unsol.c b/drivers/scsi/elx/efct/efct_unsol.c
new file mode 100644
index 000000000000..a11a6a0e238b
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.c
@@ -0,0 +1,493 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_unsol.h"
+
+#define frame_printf(efct, hdr, fmt, ...) \
+	do { \
+		char s_id_text[16]; \
+		efc_node_fcid_display(ntoh24((hdr)->fh_s_id), \
+			s_id_text, sizeof(s_id_text)); \
+		efc_log_debug(efct, "[%06x.%s] %02x/%04x/%04x: " fmt, \
+			ntoh24((hdr)->fh_d_id), s_id_text, \
+			(hdr)->fh_r_ctl, be16_to_cpu((hdr)->fh_ox_id), \
+			be16_to_cpu((hdr)->fh_rx_id), ##__VA_ARGS__); \
+	} while (0)
+
+static struct efct_node *
+efct_node_find(struct efct *efct, u32 port_id, u32 node_id)
+{
+	struct efct_node *node;
+	u64 id = (u64) port_id << 32 | node_id;
+
+	/*
+	 * During node shutdown, the lookup entry is removed before the
+	 * backend is notified, so no new IOs will be allowed.
+	 */
+	/* Find a target node, given s_id and d_id */
+	node = xa_load(&efct->lookup, id);
+	if (node)
+		kref_get(&node->ref);
+
+	return node;
+}
+
+static int
+efct_dispatch_frame(struct efct *efct, struct efc_hw_sequence *seq)
+{
+	struct efct_node *node;
+	struct fc_frame_header *hdr;
+	u32 s_id, d_id;
+
+	hdr = seq->header->dma.virt;
+
+	/* extract the s_id and d_id */
+	s_id = ntoh24(hdr->fh_s_id);
+	d_id = ntoh24(hdr->fh_d_id);
+
+	if (!(hdr->fh_type == FC_TYPE_FCP || hdr->fh_type == FC_TYPE_BLS))
+		return EFC_FAIL;
+
+	if (hdr->fh_type == FC_TYPE_FCP) {
+		node = efct_node_find(efct, d_id, s_id);
+		if (!node) {
+			efc_log_err(efct,
+				    "Node not found, drop cmd d_id:%x s_id:%x\n",
+				    d_id, s_id);
+			efct_hw_sequence_free(&efct->hw, seq);
+			return EFC_SUCCESS;
+		}
+
+		efct_dispatch_fcp_cmd(node, seq);
+	} else {
+		node = efct_node_find(efct, d_id, s_id);
+		if (!node) {
+			efc_log_err(efct, "Node not found, d_id:%x s_id:%x\n",
+				    d_id, s_id);
+			return EFC_FAIL;
+		}
+
+		efc_log_err(efct, "Received ABTS for Node:%p\n", node);
+		efct_node_recv_abts_frame(node, seq);
+	}
+
+	kref_put(&node->ref, node->release);
+	efct_hw_sequence_free(&efct->hw, seq);
+	return EFC_SUCCESS;
+}
+
+int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = arg;
+
+	/* Process FCP command */
+	if (!efct_dispatch_frame(efct, seq))
+		return EFC_SUCCESS;
+
+	/* Forward frame to discovery lib */
+	efc_dispatch_frame(efct->efcport, seq);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_fc_tmf_rejected_cb(struct efct_io *io,
+			enum efct_scsi_io_status scsi_status,
+			u32 flags, void *arg)
+{
+	efct_scsi_io_free(io);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_dispatch_unsol_tmf(struct efct_io *io, u8 tm_flags, u32 lun)
+{
+	u32 i;
+	struct {
+		u32 mask;
+		enum efct_scsi_tmf_cmd cmd;
+	} tmflist[] = {
+	{FCP_TMF_ABT_TASK_SET, EFCT_SCSI_TMF_ABORT_TASK_SET},
+	{FCP_TMF_CLR_TASK_SET, EFCT_SCSI_TMF_CLEAR_TASK_SET},
+	{FCP_TMF_LUN_RESET, EFCT_SCSI_TMF_LOGICAL_UNIT_RESET},
+	{FCP_TMF_TGT_RESET, EFCT_SCSI_TMF_TARGET_RESET},
+	{FCP_TMF_CLR_ACA, EFCT_SCSI_TMF_CLEAR_ACA} };
+
+	io->exp_xfer_len = 0;
+
+	for (i = 0; i < ARRAY_SIZE(tmflist); i++) {
+		if (tmflist[i].mask & tm_flags) {
+			io->tmf_cmd = tmflist[i].cmd;
+			efct_scsi_recv_tmf(io, lun, tmflist[i].cmd, NULL, 0);
+			break;
+		}
+	}
+	if (i == ARRAY_SIZE(tmflist)) {
+		/* Not handled */
+		efc_log_err(io->node->efct, "TMF x%x rejected\n", tm_flags);
+		efct_scsi_send_tmf_resp(io, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+					NULL, efct_fc_tmf_rejected_cb, NULL);
+	}
+}
+
+static int
+efct_validate_fcp_cmd(struct efct *efct, struct efc_hw_sequence *seq)
+{
+	/*
+	 * If we received fewer than FCP_CMND_IU bytes, assume the frame
+	 * is corrupted in some way and drop it.  This was seen when
+	 * jamming the FCTL fill bytes field.
+	 */
+	if (seq->payload->dma.len < sizeof(struct fcp_cmnd)) {
+		struct fc_frame_header	*fchdr = seq->header->dma.virt;
+
+		efc_log_debug(efct,
+			"drop ox_id %04x with payload (%zd) less than (%zd)\n",
+				    be16_to_cpu(fchdr->fh_ox_id),
+				    seq->payload->dma.len,
+				    sizeof(struct fcp_cmnd));
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+static void
+efct_populate_io_fcp_cmd(struct efct_io *io, struct fcp_cmnd *cmnd,
+			 struct fc_frame_header *fchdr, bool sit)
+{
+	io->init_task_tag = be16_to_cpu(fchdr->fh_ox_id);
+	/* note, tgt_task_tag, hw_tag  set when HW io is allocated */
+	io->exp_xfer_len = be32_to_cpu(cmnd->fc_dl);
+	io->transferred = 0;
+
+	/* The upper 7 bits of CS_CTL are the frame priority through the
+	 * SAN. Our assertion here is: the priority given to a frame
+	 * containing the FCP cmd should be the priority given to ALL
+	 * frames contained in that IO. Thus we need to save the incoming
+	 * CS_CTL here.
+	 */
+	if (ntoh24(fchdr->fh_f_ctl) & FC_FC_RES_B17)
+		io->cs_ctl = fchdr->fh_cs_ctl;
+	else
+		io->cs_ctl = 0;
+
+	io->seq_init = sit;
+}
+
+static u32
+efct_get_flags_fcp_cmd(struct fcp_cmnd *cmnd)
+{
+	u32 flags = 0;
+
+	switch (cmnd->fc_pri_ta & FCP_PTA_MASK) {
+	case FCP_PTA_SIMPLE:
+		flags |= EFCT_SCSI_CMD_SIMPLE;
+		break;
+	case FCP_PTA_HEADQ:
+		flags |= EFCT_SCSI_CMD_HEAD_OF_QUEUE;
+		break;
+	case FCP_PTA_ORDERED:
+		flags |= EFCT_SCSI_CMD_ORDERED;
+		break;
+	case FCP_PTA_ACA:
+		flags |= EFCT_SCSI_CMD_ACA;
+		break;
+	}
+	if (cmnd->fc_flags & FCP_CFL_WRDATA)
+		flags |= EFCT_SCSI_CMD_DIR_IN;
+	if (cmnd->fc_flags & FCP_CFL_RDDATA)
+		flags |= EFCT_SCSI_CMD_DIR_OUT;
+
+	return flags;
+}
+
+static void
+efct_sframe_common_send_cb(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_send_frame_context *ctx = arg;
+	struct efct_hw *hw = ctx->hw;
+
+	/* Free WQ completion callback */
+	efct_hw_reqtag_free(hw, ctx->wqcb);
+
+	/* Free sequence */
+	efct_hw_sequence_free(hw, ctx->seq);
+}
+
+static int
+efct_sframe_common_send(struct efct_node *node,
+			struct efc_hw_sequence *seq,
+			enum fc_rctl r_ctl, u32 f_ctl,
+			u8 type, void *payload, u32 payload_len)
+{
+	struct efct *efct = node->efct;
+	struct efct_hw *hw = &efct->hw;
+	enum efct_hw_rtn rc = 0;
+	struct fc_frame_header *req_hdr = seq->header->dma.virt;
+	struct fc_frame_header hdr;
+	struct efct_hw_send_frame_context *ctx;
+
+	u32 heap_size = seq->payload->dma.size;
+	uintptr_t heap_phys_base = seq->payload->dma.phys;
+	u8 *heap_virt_base = seq->payload->dma.virt;
+	u32 heap_offset = 0;
+
+	/* Build the FC header reusing the RQ header DMA buffer */
+	memset(&hdr, 0, sizeof(hdr));
+	hdr.fh_r_ctl = r_ctl;
+	/* send it back to whoever sent it to us */
+	memcpy(hdr.fh_d_id, req_hdr->fh_s_id, sizeof(hdr.fh_d_id));
+	memcpy(hdr.fh_s_id, req_hdr->fh_d_id, sizeof(hdr.fh_s_id));
+	hdr.fh_type = type;
+	hton24(hdr.fh_f_ctl, f_ctl);
+	hdr.fh_ox_id = req_hdr->fh_ox_id;
+	hdr.fh_rx_id = req_hdr->fh_rx_id;
+	hdr.fh_cs_ctl = 0;
+	hdr.fh_df_ctl = 0;
+	hdr.fh_seq_cnt = 0;
+	hdr.fh_parm_offset = 0;
+
+	/*
+	 * send_frame_seq_id is an atomic; we just let it increment,
+	 * storing only the low 8 bits in hdr.fh_seq_id.
+	 */
+	hdr.fh_seq_id = (u8)atomic_add_return(1, &hw->send_frame_seq_id);
+	hdr.fh_seq_id--;
+
+	/* Allocate and fill in the send frame request context */
+	ctx = (void *)(heap_virt_base + heap_offset);
+	heap_offset += sizeof(*ctx);
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return EFC_FAIL;
+	}
+
+	memset(ctx, 0, sizeof(*ctx));
+
+	/* Save sequence */
+	ctx->seq = seq;
+
+	/* Allocate a response payload DMA buffer from the heap */
+	ctx->payload.phys = heap_phys_base + heap_offset;
+	ctx->payload.virt = heap_virt_base + heap_offset;
+	ctx->payload.size = payload_len;
+	ctx->payload.len = payload_len;
+	heap_offset += payload_len;
+	if (heap_offset > heap_size) {
+		efc_log_err(efct, "Fill send frame failed offset %d size %d\n",
+				heap_offset, heap_size);
+		return EFC_FAIL;
+	}
+
+	/* Copy the payload in */
+	memcpy(ctx->payload.virt, payload, payload_len);
+
+	/* Send */
+	rc = efct_hw_send_frame(&efct->hw, (void *)&hdr, FC_SOF_N3,
+				FC_EOF_T, &ctx->payload, ctx,
+				efct_sframe_common_send_cb, ctx);
+	if (rc)
+		efc_log_debug(efct, "efct_hw_send_frame failed: %d\n", rc);
+
+	return rc;
+}
+
+static int
+efct_sframe_send_fcp_rsp(struct efct_node *node, struct efc_hw_sequence *seq,
+			 void *rsp, u32 rsp_len)
+{
+	return efct_sframe_common_send(node, seq, FC_RCTL_DD_CMD_STATUS,
+				      FC_FC_EX_CTX |
+				      FC_FC_LAST_SEQ |
+				      FC_FC_END_SEQ |
+				      FC_FC_SEQ_INIT,
+				      FC_TYPE_FCP,
+				      rsp, rsp_len);
+}
+
+static int
+efct_sframe_send_task_set_full_or_busy(struct efct_node *node,
+				       struct efc_hw_sequence *seq)
+{
+	struct fcp_resp_with_ext fcprsp;
+	struct fcp_cmnd *fcpcmd = seq->payload->dma.virt;
+	int rc = 0;
+	unsigned long flags = 0;
+	struct efct *efct = node->efct;
+
+	/* construct task set full or busy response */
+	memset(&fcprsp, 0, sizeof(fcprsp));
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	fcprsp.resp.fr_status = list_empty(&node->active_ios) ?
+				SAM_STAT_BUSY : SAM_STAT_TASK_SET_FULL;
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+	*((u32 *)&fcprsp.ext.fr_resid) = be32_to_cpu(fcpcmd->fc_dl);
+
+	/* send it using send_frame */
+	rc = efct_sframe_send_fcp_rsp(node, seq, &fcprsp, sizeof(fcprsp));
+	if (rc)
+		efc_log_debug(efct, "efct_sframe_send_fcp_rsp failed %d\n", rc);
+
+	return rc;
+}
+
+int
+efct_dispatch_fcp_cmd(struct efct_node *node, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = node->efct;
+	struct fc_frame_header *fchdr = seq->header->dma.virt;
+	struct fcp_cmnd	*cmnd = NULL;
+	struct efct_io *io = NULL;
+	u32 lun = U32_MAX;
+
+	if (!seq->payload) {
+		efc_log_err(efct, "Sequence payload is NULL.\n");
+		return EFC_FAIL;
+	}
+
+	cmnd = seq->payload->dma.virt;
+
+	/* perform FCP_CMND validation check(s) */
+	if (efct_validate_fcp_cmd(efct, seq))
+		return EFC_FAIL;
+
+	lun = scsilun_to_int(&cmnd->fc_lun);
+	if (lun == U32_MAX)
+		return EFC_FAIL;
+
+	io = efct_scsi_io_alloc(node);
+	if (!io) {
+		int rc;
+
+		/* Use SEND_FRAME to send task set full or busy */
+		rc = efct_sframe_send_task_set_full_or_busy(node, seq);
+		if (rc)
+			efc_log_err(efct, "Failed to send busy task: %d\n", rc);
+
+		return rc;
+	}
+
+	io->hw_priv = seq->hw_priv;
+
+	io->app_id = 0;
+
+	/* RQ pair, if we got here, SIT=1 */
+	efct_populate_io_fcp_cmd(io, cmnd, fchdr, true);
+
+	if (cmnd->fc_tm_flags) {
+		efct_dispatch_unsol_tmf(io, cmnd->fc_tm_flags, lun);
+	} else {
+		u32 flags = efct_get_flags_fcp_cmd(cmnd);
+
+		if (cmnd->fc_flags & FCP_CFL_LEN_MASK) {
+			efc_log_err(efct, "Additional CDB not supported\n");
+			return EFC_FAIL;
+		}
+		/*
+		 * Can return failure for things like task set full and
+		 * UAs; no need to treat as a dropped frame if rc != 0.
+		 */
+		efct_scsi_recv_cmd(io, lun, cmnd->fc_cdb,
+				   sizeof(cmnd->fc_cdb), flags);
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+efct_process_abts(struct efct_io *io, struct fc_frame_header *hdr)
+{
+	struct efct_node *node = io->node;
+	struct efct *efct = io->efct;
+	u16 ox_id = be16_to_cpu(hdr->fh_ox_id);
+	u16 rx_id = be16_to_cpu(hdr->fh_rx_id);
+	struct efct_io *abortio;
+
+	/* Find IO and attempt to take a reference on it */
+	abortio = efct_io_find_tgt_io(efct, node, ox_id, rx_id);
+
+	if (abortio) {
+		/* Got a reference on the IO. Hold it until backend
+		 * is notified below
+		 */
+		efc_log_info(node->efct, "Abort ox_id [%04x] rx_id [%04x]\n",
+			     ox_id, rx_id);
+
+		/*
+		 * Save the ox_id from the ABTS as the init_task_tag in
+		 * our manufactured TMF IO object.
+		 */
+		io->display_name = "abts";
+		io->init_task_tag = ox_id;
+		/* don't set tgt_task_tag, don't want to confuse with XRI */
+
+		/*
+		 * Save the rx_id from the ABTS, as it is needed for the
+		 * BLS response regardless of the IO context's rx_id.
+		 */
+		io->abort_rx_id = rx_id;
+
+		/* Call target server command abort */
+		io->tmf_cmd = EFCT_SCSI_TMF_ABORT_TASK;
+		efct_scsi_recv_tmf(io, abortio->tgt_io.lun,
+				   EFCT_SCSI_TMF_ABORT_TASK, abortio, 0);
+
+		/*
+		 * Backend will have taken an additional
+		 * reference on the IO if needed;
+		 * done with current reference.
+		 */
+		kref_put(&abortio->ref, abortio->release);
+	} else {
+		/*
+		 * Either the IO was not found or it was freed between
+		 * finding it and attempting to take the reference.
+		 */
+		efc_log_info(node->efct, "Abort: ox_id [%04x], IO not found\n",
+			     ox_id);
+
+		/* Send a BA_RJT */
+		efct_bls_send_rjt(io, hdr);
+	}
+	return EFC_SUCCESS;
+}
+
+int
+efct_node_recv_abts_frame(struct efct_node *node, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = node->efct;
+	struct fc_frame_header *hdr = seq->header->dma.virt;
+	struct efct_io *io = NULL;
+
+	node->abort_cnt++;
+	io = efct_scsi_io_alloc(node);
+	if (io) {
+		io->hw_priv = seq->hw_priv;
+		/* If we got this far, SIT=1 */
+		io->seq_init = 1;
+
+		/* fill out generic fields */
+		io->efct = efct;
+		io->node = node;
+		io->cmd_tgt = true;
+
+		efct_process_abts(io, seq->header->dma.virt);
+	} else {
+		efc_log_err(efct,
+			    "SCSI IO allocation failed for ABTS received ");
+		efc_log_err(efct, "s_id %06x d_id %06x ox_id %04x rx_id %04x\n",
+			    ntoh24(hdr->fh_s_id), ntoh24(hdr->fh_d_id),
+			    be16_to_cpu(hdr->fh_ox_id),
+			    be16_to_cpu(hdr->fh_rx_id));
+	}
+
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_unsol.h b/drivers/scsi/elx/efct/efct_unsol.h
new file mode 100644
index 000000000000..16d1e3ba1833
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_unsol.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_UNSOL_H__)
+#define __EFCT_UNSOL_H__
+
+int
+efct_unsolicited_cb(void *arg, struct efc_hw_sequence *seq);
+int
+efct_dispatch_fcp_cmd(struct efct_node *node, struct efc_hw_sequence *seq);
+int
+efct_node_recv_abts_frame(struct efct_node *node, struct efc_hw_sequence *seq);
+
+#endif /* __EFCT_UNSOL_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread
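The TMF dispatch in efct_dispatch_unsol_tmf() above is a table walk: the first set bit in the FCP_CMND task-management flags selects the SCSI TMF, and an unrecognized flag is rejected. The userspace sketch below models just that lookup; the mask values mirror the usual FCP_TMF_* bit layout but are illustrative here, and the enum names are stand-ins, not the driver's `EFCT_SCSI_TMF_*` definitions.

```c
/*
 * Illustrative sketch of the TMF flag-to-command table walk.
 * Masks and enum names are stand-ins, not kernel definitions.
 */
#include <assert.h>
#include <stddef.h>

enum tmf_cmd {
	TMF_NONE,		/* no match: caller sends FUNCTION_REJECTED */
	TMF_ABORT_TASK_SET,
	TMF_CLEAR_TASK_SET,
	TMF_LUN_RESET,
	TMF_TARGET_RESET,
	TMF_CLEAR_ACA,
};

static const struct {
	unsigned int mask;
	enum tmf_cmd cmd;
} tmflist[] = {
	{ 0x02, TMF_ABORT_TASK_SET },
	{ 0x04, TMF_CLEAR_TASK_SET },
	{ 0x10, TMF_LUN_RESET },
	{ 0x20, TMF_TARGET_RESET },
	{ 0x40, TMF_CLEAR_ACA },
};

/* Return the TMF selected by tm_flags; first matching bit wins */
static enum tmf_cmd tmf_lookup(unsigned char tm_flags)
{
	size_t i;

	for (i = 0; i < sizeof(tmflist) / sizeof(tmflist[0]); i++)
		if (tmflist[i].mask & tm_flags)
			return tmflist[i].cmd;
	return TMF_NONE;
}
```

Because the loop stops at the first matching bit, a frame that (illegally) sets multiple TMF flags is handled by exactly one command, which is the same behavior the driver's loop-and-break gives.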

* [PATCH v7 24/31] elx: efct: SCSI IO handling routines
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (22 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 23/31] elx: efct: Unsolicited FC frame processing routines James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:58 ` [PATCH v7 25/31] elx: efct: LIO backend interface routines James Smart
                   ` (6 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for SCSI transport IO allocation, build, and send.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_scsi.c | 1164 +++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_scsi.h |  207 +++++
 2 files changed, 1371 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.c
 create mode 100644 drivers/scsi/elx/efct/efct_scsi.h

diff --git a/drivers/scsi/elx/efct/efct_scsi.c b/drivers/scsi/elx/efct/efct_scsi.c
new file mode 100644
index 000000000000..20aed62a1bff
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.c
@@ -0,0 +1,1164 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include "efct_driver.h"
+#include "efct_hw.h"
+
+#define enable_tsend_auto_resp(efct)	1
+#define enable_treceive_auto_resp(efct)	0
+
+#define SCSI_IOFMT "[%04x][i:%04x t:%04x h:%04x]"
+
+#define scsi_io_printf(io, fmt, ...) \
+	efc_log_debug(io->efct, "[%s]" SCSI_IOFMT fmt, \
+		io->node->display_name, io->instance_index,\
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__)
+
+#define EFCT_LOG_ENABLE_SCSI_TRACE(efct)                \
+		(((efct) != NULL) ? (((efct)->logmask & (1U << 2)) != 0) : 0)
+
+#define scsi_io_trace(io, fmt, ...) \
+	do { \
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(io->efct)) \
+			scsi_io_printf(io, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+struct efct_io *
+efct_scsi_io_alloc(struct efct_node *node)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	struct efct_io *io;
+	unsigned long flags = 0;
+
+	efct = node->efct;
+
+	xport = efct->xport;
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+
+	io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!io) {
+		efc_log_err(efct, "IO alloc Failed\n");
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		spin_unlock_irqrestore(&node->active_ios_lock, flags);
+		return NULL;
+	}
+
+	/* initialize refcount */
+	kref_init(&io->ref);
+	io->release = _efct_scsi_io_free;
+
+	/* set generic fields */
+	io->efct = efct;
+	io->node = node;
+	kref_get(&node->ref);
+
+	/* set type and name */
+	io->io_type = EFCT_IO_TYPE_IO;
+	io->display_name = "scsi_io";
+
+	io->cmd_ini = false;
+	io->cmd_tgt = true;
+
+	/* Add to node's active_ios list */
+	INIT_LIST_HEAD(&io->list_entry);
+	list_add(&io->list_entry, &node->active_ios);
+
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	return io;
+}
+
+void
+_efct_scsi_io_free(struct kref *arg)
+{
+	struct efct_io *io = container_of(arg, struct efct_io, ref);
+	struct efct *efct = io->efct;
+	struct efct_node *node = io->node;
+	unsigned long flags = 0;
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+
+	if (io->io_free) {
+		efc_log_err(efct, "IO already freed.\n");
+		return;
+	}
+
+	spin_lock_irqsave(&node->active_ios_lock, flags);
+	list_del_init(&io->list_entry);
+	spin_unlock_irqrestore(&node->active_ios_lock, flags);
+
+	kref_put(&node->ref, node->release);
+	io->node = NULL;
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+}
+
+void
+efct_scsi_io_free(struct efct_io *io)
+{
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	WARN_ON(!refcount_read(&io->ref.refcount));
+	kref_put(&io->ref, io->release);
+}
+
+static void
+efct_target_io_cb(struct efct_hw_io *hio, u32 length, int status,
+		  u32 ext_status, void *app)
+{
+	u32 flags = 0;
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status scsi_stat = EFCT_SCSI_STATUS_GOOD;
+	efct_scsi_io_cb_t cb;
+
+	if (!io || !io->efct) {
+		pr_err("%s: IO cannot be NULL\n", __func__);
+		return;
+	}
+
+	scsi_io_trace(io, "status x%x ext_status x%x\n", status, ext_status);
+
+	efct = io->efct;
+
+	io->transferred += length;
+
+	if (!io->scsi_tgt_cb) {
+		efct_scsi_check_pending(efct);
+		return;
+	}
+
+	/* Call target server completion */
+	cb = io->scsi_tgt_cb;
+
+	/* Clear the callback before invoking the callback */
+	io->scsi_tgt_cb = NULL;
+
+	/* If status was good and auto-good-response was set, then call
+	 * back the target server with IO_CMPL_RSP_SENT; otherwise send
+	 * IO_CMPL.
+	 */
+	if (status == 0 && io->auto_resp)
+		flags |= EFCT_SCSI_IO_CMPL_RSP_SENT;
+	else
+		flags |= EFCT_SCSI_IO_CMPL;
+
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		scsi_stat = EFCT_SCSI_STATUS_GOOD;
+		break;
+	case SLI4_FC_WCQE_STATUS_DI_ERROR:
+		if (ext_status & SLI4_FC_DI_ERROR_GE)
+			scsi_stat = EFCT_SCSI_STATUS_DIF_GUARD_ERR;
+		else if (ext_status & SLI4_FC_DI_ERROR_AE)
+			scsi_stat = EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR;
+		else if (ext_status & SLI4_FC_DI_ERROR_RE)
+			scsi_stat = EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR;
+		else
+			scsi_stat = EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR;
+		break;
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+		switch (ext_status) {
+		case SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET:
+		case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED:
+			scsi_stat = EFCT_SCSI_STATUS_ABORTED;
+			break;
+		case SLI4_FC_LOCAL_REJECT_INVALID_RPI:
+			scsi_stat = EFCT_SCSI_STATUS_NEXUS_LOST;
+			break;
+		case SLI4_FC_LOCAL_REJECT_NO_XRI:
+			scsi_stat = EFCT_SCSI_STATUS_NO_IO;
+			break;
+		default:
+			/* we have seen 0x0d (TX_DMA_FAILED err) */
+			scsi_stat = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+		break;
+
+	case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT:
+		/* target IO timed out */
+		scsi_stat = EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED;
+		break;
+
+	case SLI4_FC_WCQE_STATUS_SHUTDOWN:
+		/* Target IO cancelled by HW */
+		scsi_stat = EFCT_SCSI_STATUS_SHUTDOWN;
+		break;
+
+	default:
+		scsi_stat = EFCT_SCSI_STATUS_ERROR;
+		break;
+	}
+
+	cb(io, scsi_stat, flags, io->scsi_tgt_cb_arg);
+
+	efct_scsi_check_pending(efct);
+}
+
+static int
+efct_scsi_build_sgls(struct efct_hw *hw, struct efct_hw_io *hio,
+		struct efct_scsi_sgl *sgl, u32 sgl_count,
+		enum efct_hw_io_type type)
+{
+	int rc;
+	u32 i;
+	struct efct *efct = hw->os;
+
+	/* Initialize HW SGL */
+	rc = efct_hw_io_init_sges(hw, hio, type);
+	if (rc) {
+		efc_log_err(efct, "efct_hw_io_init_sges failed: %d\n", rc);
+		return EFC_FAIL;
+	}
+
+	for (i = 0; i < sgl_count; i++) {
+		/* Add data SGE */
+		rc = efct_hw_io_add_sge(hw, hio, sgl[i].addr, sgl[i].len);
+		if (rc) {
+			efc_log_err(efct, "add sge failed cnt=%d rc=%d\n",
+				    sgl_count, rc);
+			return rc;
+		}
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void efc_log_sgl(struct efct_io *io)
+{
+	struct efct_hw_io *hio = io->hio;
+	struct sli4_sge *data = NULL;
+	u32 *dword = NULL;
+	u32 i;
+	u32 n_sge;
+
+	scsi_io_trace(io, "def_sgl at 0x%x 0x%08x\n",
+			upper_32_bits(hio->def_sgl.phys),
+			lower_32_bits(hio->def_sgl.phys));
+	n_sge = (hio->sgl == &hio->def_sgl) ? hio->n_sge : hio->def_sgl_count;
+	for (i = 0, data = hio->def_sgl.virt; i < n_sge; i++, data++) {
+		dword = (u32 *)data;
+
+		scsi_io_trace(io, "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n",
+			      i, dword[0], dword[1], dword[2], dword[3]);
+
+		if (dword[2] & (1U << 31))
+			break;
+	}
+}
+
+static void
+efct_scsi_check_pending_async_cb(struct efct_hw *hw, int status,
+				 u8 *mqe, void *arg)
+{
+	struct efct_io *io = arg;
+
+	if (io) {
+		efct_hw_done_t cb = io->hw_cb;
+
+		if (!io->hw_cb)
+			return;
+
+		io->hw_cb = NULL;
+		(cb)(io->hio, 0, SLI4_FC_WCQE_STATUS_DISPATCH_ERROR, 0, io);
+	}
+}
+
+static int
+efct_scsi_io_dispatch_hw_io(struct efct_io *io, struct efct_hw_io *hio)
+{
+	int rc = EFC_SUCCESS;
+	struct efct *efct = io->efct;
+
+	/* Got a HW IO;
+	 * update ini/tgt_task_tag with HW IO info and dispatch
+	 */
+	io->hio = hio;
+	if (io->cmd_tgt)
+		io->tgt_task_tag = hio->indicator;
+	else if (io->cmd_ini)
+		io->init_task_tag = hio->indicator;
+	io->hw_tag = hio->reqtag;
+
+	hio->eq = io->hw_priv;
+
+	/* Copy WQ steering */
+	switch (io->wq_steering) {
+	case EFCT_SCSI_WQ_STEERING_CLASS >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CLASS;
+		break;
+	case EFCT_SCSI_WQ_STEERING_REQUEST >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_REQUEST;
+		break;
+	case EFCT_SCSI_WQ_STEERING_CPU >> EFCT_SCSI_WQ_STEERING_SHIFT:
+		hio->wq_steering = EFCT_HW_WQ_STEERING_CPU;
+		break;
+	}
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_IO:
+		rc = efct_scsi_build_sgls(&efct->hw, io->hio,
+					  io->sgl, io->sgl_count, io->hio_type);
+		if (rc)
+			break;
+
+		if (EFCT_LOG_ENABLE_SCSI_TRACE(efct))
+			efc_log_sgl(io);
+
+		if (io->app_id)
+			io->iparam.fcp_tgt.app_id = io->app_id;
+
+		io->iparam.fcp_tgt.vpi = io->node->vpi;
+		io->iparam.fcp_tgt.rpi = io->node->rpi;
+		io->iparam.fcp_tgt.s_id = io->node->port_fc_id;
+		io->iparam.fcp_tgt.d_id = io->node->node_fc_id;
+		io->iparam.fcp_tgt.xmit_len = io->wire_len;
+
+		rc = efct_hw_io_send(&io->efct->hw, io->hio_type, io->hio,
+				     &io->iparam, io->hw_cb, io);
+		break;
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = EFC_FAIL;
+		break;
+	}
+	return rc;
+}
+
+static int
+efct_scsi_io_dispatch_no_hw_io(struct efct_io *io)
+{
+	int rc;
+
+	switch (io->io_type) {
+	case EFCT_IO_TYPE_ABORT: {
+		struct efct_hw_io *hio_to_abort = NULL;
+
+		hio_to_abort = io->io_to_abort->hio;
+
+		if (!hio_to_abort) {
+			/*
+			 * If "IO to abort" does not have an
+			 * associated HW IO, immediately make callback with
+			 * success. The command must have been sent to
+			 * the backend, but the data phase has not yet
+			 * started, so we don't have a HW IO.
+			 *
+			 * Note: since the backend shims should be
+			 * taking a reference on io_to_abort, it should not
+			 * be possible to have been completed and freed by
+			 * the backend before the abort got here.
+			 */
+			scsi_io_printf(io, "IO: not active\n");
+			((efct_hw_done_t)io->hw_cb)(io->hio, 0,
+					SLI4_FC_WCQE_STATUS_SUCCESS, 0, io);
+			rc = EFC_SUCCESS;
+			break;
+		}
+
+		/* HW IO is valid, abort it */
+		scsi_io_printf(io, "aborting\n");
+		rc = efct_hw_io_abort(&io->efct->hw, hio_to_abort,
+					      io->send_abts, io->hw_cb, io);
+		if (rc) {
+			int status = SLI4_FC_WCQE_STATUS_SUCCESS;
+			efct_hw_done_t cb = io->hw_cb;
+
+			if (rc != EFCT_HW_RTN_IO_NOT_ACTIVE &&
+			    rc != EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+				status = -1;
+				scsi_io_printf(io,
+					"Failed to abort IO rc=%d\n", rc);
+			}
+			cb(io->hio, 0, status, 0, io);
+			rc = EFC_SUCCESS;
+		}
+
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = EFC_FAIL;
+		break;
+	}
+	return rc;
+}
+
+static struct efct_io *
+efct_scsi_dispatch_pending(struct efct *efct)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io *io = NULL;
+	struct efct_hw_io *hio;
+	unsigned long flags = 0;
+	int status;
+
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+
+	if (!list_empty(&xport->io_pending_list)) {
+		io = list_first_entry(&xport->io_pending_list, struct efct_io,
+					io_pending_link);
+		list_del_init(&io->io_pending_link);
+	}
+
+	if (!io) {
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+		return NULL;
+	}
+
+	if (io->io_type == EFCT_IO_TYPE_ABORT) {
+		hio = NULL;
+	} else {
+		hio = efct_hw_io_alloc(&efct->hw);
+		if (!hio) {
+			/*
+			 * No HW IO available; put the IO back on the
+			 * front of the pending list.
+			 */
+			list_add(&io->io_pending_link,
+				 &xport->io_pending_list);
+			io = NULL;
+		} else {
+			hio->eq = io->hw_priv;
+		}
+	}
+
+	/* Must drop the lock before dispatching the IO */
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	if (!io)
+		return NULL;
+
+	/*
+	 * We pulled an IO off the pending list, and either got a HW IO
+	 * or don't need one.
+	 */
+	atomic_sub_return(1, &xport->io_pending_count);
+	if (!hio)
+		status = efct_scsi_io_dispatch_no_hw_io(io);
+	else
+		status = efct_scsi_io_dispatch_hw_io(io, hio);
+	if (status) {
+		/*
+		 * Invoke the HW callback, but do so in a separate
+		 * execution context provided by the NOP mailbox
+		 * completion processing, using efct_hw_async_call().
+		 */
+		if (efct_hw_async_call(&efct->hw,
+				efct_scsi_check_pending_async_cb, io)) {
+			efc_log_debug(efct, "call hw async failed\n");
+		}
+	}
+
+	return io;
+}
+
+void
+efct_scsi_check_pending(struct efct *efct)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io *io = NULL;
+	int count = 0;
+	unsigned long flags = 0;
+	int dispatch = 0;
+
+	/* Guard against recursion: only the first caller sees the count hit 1 */
+	if (atomic_add_return(1, &xport->io_pending_recursing) != 1) {
+		/* This function is already running.  Decrement and return. */
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	while (efct_scsi_dispatch_pending(efct))
+		count++;
+
+	if (count) {
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	/*
+	 * If nothing was removed from the list,
+	 * we might be in a case where we need to abort an
+	 * active IO and the abort is on the pending list.
+	 * Look for an abort we can dispatch.
+	 */
+
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+
+	list_for_each_entry(io, &xport->io_pending_list, io_pending_link) {
+		if (io->io_type == EFCT_IO_TYPE_ABORT && io->io_to_abort->hio) {
+			/* This IO has a HW IO, so it is
+			 * active.  Dispatch the abort.
+			 */
+			dispatch = 1;
+			list_del_init(&io->io_pending_link);
+			atomic_sub_return(1, &xport->io_pending_count);
+			break;
+		}
+	}
+
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	if (dispatch) {
+		if (efct_scsi_io_dispatch_no_hw_io(io)) {
+			if (efct_hw_async_call(&efct->hw,
+				efct_scsi_check_pending_async_cb, io)) {
+				efc_log_debug(efct, "hw async failed\n");
+			}
+		}
+	}
+
+	atomic_sub_return(1, &xport->io_pending_recursing);
+}
+
+int
+efct_scsi_io_dispatch(struct efct_io *io, void *cb)
+{
+	struct efct_hw_io *hio;
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * If this IO already has a HW IO, then this is not the first
+	 * phase of the IO. Send it to the HW.
+	 */
+	if (io->hio)
+		return efct_scsi_io_dispatch_hw_io(io, io->hio);
+
+	/*
+	 * We don't already have a HW IO associated with the IO. First check
+	 * the pending list. If not empty, add IO to the tail and process the
+	 * pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+	if (!list_empty(&xport->io_pending_list)) {
+		/*
+		 * If this is a low-latency request, put it at the front
+		 * of the IO pending queue; otherwise put it at the end.
+		 */
+		if (io->low_latency) {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add(&io->io_pending_link,
+				 &xport->io_pending_list);
+		} else {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add_tail(&io->io_pending_link,
+					&xport->io_pending_list);
+		}
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+		atomic_add_return(1, &xport->io_pending_count);
+		atomic_add_return(1, &xport->io_total_pending);
+
+		/* process pending list */
+		efct_scsi_check_pending(efct);
+		return EFC_SUCCESS;
+	}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/*
+	 * We don't have a HW IO associated with the IO and there's nothing
+	 * on the pending list. Attempt to allocate a HW IO and dispatch it.
+	 */
+	hio = efct_hw_io_alloc(&io->efct->hw);
+	if (!hio) {
+		/* Couldn't get a HW IO. Save this IO on the pending list */
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		INIT_LIST_HEAD(&io->io_pending_link);
+		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		atomic_add_return(1, &xport->io_total_pending);
+		atomic_add_return(1, &xport->io_pending_count);
+		return EFC_SUCCESS;
+	}
+
+	/* We successfully allocated a HW IO; dispatch to HW */
+	return efct_scsi_io_dispatch_hw_io(io, hio);
+}
+
+int
+efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb)
+{
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * For aborts, we don't need a HW IO, but we still want
+	 * to pass through the pending list to preserve ordering.
+	 * Thus, if the pending list is not empty, add this abort
+	 * to the pending list and process the pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+	if (!list_empty(&xport->io_pending_list)) {
+		INIT_LIST_HEAD(&io->io_pending_link);
+		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+		atomic_add_return(1, &xport->io_pending_count);
+		atomic_add_return(1, &xport->io_total_pending);
+
+		/* process pending list */
+		efct_scsi_check_pending(efct);
+		return EFC_SUCCESS;
+	}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/* nothing on pending list, dispatch abort */
+	return efct_scsi_io_dispatch_no_hw_io(io);
+}
+
+static inline int
+efct_scsi_xfer_data(struct efct_io *io, u32 flags,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 xwire_len,
+	enum efct_hw_io_type type, int enable_ar,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	size_t residual = 0;
+
+	io->sgl_count = sgl_count;
+
+	efct = io->efct;
+
+	scsi_io_trace(io, "%s wire_len %llu\n",
+		      (type == EFCT_HW_IO_TARGET_READ) ? "send" : "recv",
+		      xwire_len);
+
+	io->hio_type = type;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	residual = io->exp_xfer_len - io->transferred;
+	io->wire_len = (xwire_len < residual) ? xwire_len : residual;
+	residual = (xwire_len - io->wire_len);
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = io->transferred;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* if this is the last data phase and there is no residual, enable
+	 * auto-good-response
+	 */
+	if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) && residual == 0 &&
+		((io->transferred + io->wire_len) == io->exp_xfer_len) &&
+		(!(flags & EFCT_SCSI_NO_AUTO_RESPONSE))) {
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+		io->auto_resp = true;
+	} else {
+		io->auto_resp = false;
+	}
+
+	/* save this transfer length */
+	io->xfer_req = io->wire_len;
+
+	/* Adjust the transferred count to account for overrun
+	 * when the residual is calculated in efct_scsi_send_resp
+	 */
+	io->transferred += residual;
+
+	/* Adjust the SGL size if there is overrun */
+
+	if (residual) {
+		struct efct_scsi_sgl  *sgl_ptr = &io->sgl[sgl_count - 1];
+
+		while (residual) {
+			size_t len = sgl_ptr->len;
+
+			if (len > residual) {
+				sgl_ptr->len = len - residual;
+				residual = 0;
+			} else {
+				sgl_ptr->len = 0;
+				residual -= len;
+				io->sgl_count--;
+			}
+			sgl_ptr--;
+		}
+	}
+
+	/* Set latency and WQ steering */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (efct->xport) {
+		struct efct_xport *xport = efct->xport;
+
+		if (type == EFCT_HW_IO_TARGET_READ) {
+			xport->fcp_stats.input_requests++;
+			xport->fcp_stats.input_bytes += xwire_len;
+		} else if (type == EFCT_HW_IO_TARGET_WRITE) {
+			xport->fcp_stats.output_requests++;
+			xport->fcp_stats.output_bytes += xwire_len;
+		}
+	}
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
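The tail-first SGL adjustment in efct_scsi_xfer_data() can be exercised in isolation. The sketch below uses a hypothetical `sgl_ent` stand-in for `struct efct_scsi_sgl` (only the length field matters for this loop) and mirrors the overrun logic: walk backwards from the last entry, shortening or dropping entries until the residual is consumed.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct efct_scsi_sgl; only len is needed here. */
struct sgl_ent {
	size_t len;
};

/*
 * Trim 'residual' bytes off the tail of an SGL, as in efct_scsi_xfer_data():
 * the last entry is shortened if it is larger than the remaining residual,
 * otherwise it is zeroed and dropped from the count. Returns the new count.
 */
static size_t sgl_trim_residual(struct sgl_ent *sgl, size_t count,
				size_t residual)
{
	struct sgl_ent *p = &sgl[count - 1];

	while (residual) {
		size_t len = p->len;

		if (len > residual) {
			p->len = len - residual;
			residual = 0;
		} else {
			p->len = 0;
			residual -= len;
			count--;
		}
		p--;
	}
	return count;
}
```

A 150-byte overrun against three 100-byte entries drops the last entry entirely and shortens the middle one to 50 bytes.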
+
+int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, sgl, sgl_count,
+				 len, EFCT_HW_IO_TARGET_READ,
+				 enable_tsend_auto_resp(io->efct), cb, arg);
+}
+
+int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags,
+	struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len,
+	efct_scsi_io_cb_t cb, void *arg)
+{
+	return efct_scsi_xfer_data(io, flags, sgl, sgl_count, len,
+				 EFCT_HW_IO_TARGET_WRITE,
+				 enable_treceive_auto_resp(io->efct), cb, arg);
+}
+
+int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+		    struct efct_scsi_cmd_resp *rsp,
+		    efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	int residual;
+	/* Always try auto resp */
+	bool auto_resp = true;
+	u8 scsi_status = 0;
+	u16 scsi_status_qualifier = 0;
+	u8 *sense_data = NULL;
+	u32 sense_data_length = 0;
+
+	efct = io->efct;
+
+	if (rsp) {
+		scsi_status = rsp->scsi_status;
+		scsi_status_qualifier = rsp->scsi_status_qualifier;
+		sense_data = rsp->sense_data;
+		sense_data_length = rsp->sense_data_length;
+		residual = rsp->residual;
+	} else {
+		residual = io->exp_xfer_len - io->transferred;
+	}
+
+	io->wire_len = 0;
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	/* Set low latency queueing request */
+	io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0;
+	io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >>
+				EFCT_SCSI_WQ_STEERING_SHIFT;
+	io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >>
+				EFCT_SCSI_WQ_CLASS_SHIFT;
+
+	if (scsi_status != 0 || residual || sense_data_length) {
+		struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt;
+		u8 *sns_data;
+
+		if (!fcprsp) {
+			efc_log_err(efct, "NULL response buffer\n");
+			return EFC_FAIL;
+		}
+
+		sns_data = (u8 *)io->rspbuf.virt + sizeof(*fcprsp);
+
+		auto_resp = false;
+
+		memset(fcprsp, 0, sizeof(*fcprsp));
+
+		io->wire_len += sizeof(*fcprsp);
+
+		fcprsp->resp.fr_status = scsi_status;
+		fcprsp->resp.fr_retry_delay =
+			cpu_to_be16(scsi_status_qualifier);
+
+		/* set residual status if necessary */
+		if (residual != 0) {
+			/* FCP: if data transferred is less than the
+			 * amount expected, then this is an underflow.
+			 * If data transferred would have been greater
+			 * than the amount expected this is an overflow
+			 */
+			if (residual > 0) {
+				fcprsp->resp.fr_flags |= FCP_RESID_UNDER;
+				fcprsp->ext.fr_resid = cpu_to_be32(residual);
+			} else {
+				fcprsp->resp.fr_flags |= FCP_RESID_OVER;
+				fcprsp->ext.fr_resid = cpu_to_be32(-residual);
+			}
+		}
+
+		if (EFCT_SCSI_SNS_BUF_VALID(sense_data) && sense_data_length) {
+			if (sense_data_length > SCSI_SENSE_BUFFERSIZE) {
+				efc_log_err(efct, "Sense exceeds max size.\n");
+				return EFC_FAIL;
+			}
+
+			fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL;
+			memcpy(sns_data, sense_data, sense_data_length);
+			fcprsp->ext.fr_sns_len = cpu_to_be32(sense_data_length);
+			io->wire_len += sense_data_length;
+		}
+
+		io->sgl[0].addr = io->rspbuf.phys;
+		io->sgl[0].dif_addr = 0;
+		io->sgl[0].len = io->wire_len;
+		io->sgl_count = 1;
+	}
+
+	if (auto_resp)
+		io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE;
+
+	return efct_scsi_io_dispatch(io, efct_target_io_cb);
+}
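The residual handling in efct_scsi_send_resp() follows the FCP rule that a positive residual (less data moved than expected) is an underflow and a negative one an overflow, with the response always carrying the magnitude. A standalone sketch of that encoding (flag values here are illustrative stand-ins, not the real `fc_fcp.h` constants):

```c
#include <assert.h>
#include <stdint.h>

#define RESID_UNDER 0x02	/* stand-in for FCP_RESID_UNDER */
#define RESID_OVER  0x04	/* stand-in for FCP_RESID_OVER */

/*
 * Encode a signed residual the way efct_scsi_send_resp() fills the FCP
 * response: set the under/over flag and store the absolute byte count.
 */
static void encode_residual(int residual, uint8_t *flags, uint32_t *resid)
{
	*flags = 0;
	*resid = 0;
	if (residual > 0) {
		*flags |= RESID_UNDER;
		*resid = (uint32_t)residual;
	} else if (residual < 0) {
		*flags |= RESID_OVER;
		*resid = (uint32_t)(-residual);
	}
}
```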
+
+static int
+efct_target_bls_resp_cb(struct efct_hw_io *hio,	u32 length, int status,
+			u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status bls_status;
+
+	efct = io->efct;
+
+	/* BLS isn't really a "SCSI" concept, but use SCSI status */
+	if (status) {
+		io_error_log(io, "s=%#x x=%#x\n", status, ext_status);
+		bls_status = EFCT_SCSI_STATUS_ERROR;
+	} else {
+		bls_status = EFCT_SCSI_STATUS_GOOD;
+	}
+
+	if (io->bls_cb) {
+		efct_scsi_io_cb_t bls_cb = io->bls_cb;
+		void *bls_cb_arg = io->bls_cb_arg;
+
+		io->bls_cb = NULL;
+		io->bls_cb_arg = NULL;
+
+		/* invoke callback */
+		bls_cb(io, bls_status, 0, bls_cb_arg);
+	}
+
+	efct_scsi_check_pending(efct);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_target_send_bls_resp(struct efct_io *io,
+			  efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct_node *node = io->node;
+	struct sli_bls_params *bls = &io->iparam.bls;
+	struct efct *efct = node->efct;
+	struct fc_ba_acc *acc;
+	int rc;
+
+	/* fill out IO structure with everything needed to send BA_ACC */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	bls->ox_id = io->init_task_tag;
+	bls->rx_id = io->abort_rx_id;
+	bls->vpi = io->node->vpi;
+	bls->rpi = io->node->rpi;
+	bls->s_id = U32_MAX;
+	bls->d_id = io->node->node_fc_id;
+	bls->rpi_registered = true;
+
+	acc = (void *)bls->payload;
+	acc->ba_ox_id = cpu_to_be16(bls->ox_id);
+	acc->ba_rx_id = cpu_to_be16(bls->rx_id);
+	acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX);
+
+	/* generic io fields have already been populated */
+
+	/* set type and BLS-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "bls_rsp";
+	io->hio_type = EFCT_HW_BLS_ACC;
+	io->bls_cb = cb;
+	io->bls_cb_arg = arg;
+
+	/* dispatch IO */
+	rc = efct_hw_bls_send(efct, FC_RCTL_BA_ACC, bls,
+			      efct_target_bls_resp_cb, io);
+	return rc;
+}
+
+static int efct_bls_send_rjt_cb(struct efct_hw_io *hio, u32 length, int status,
+				u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+
+	efct_scsi_io_free(io);
+	return 0;
+}
+
+struct efct_io *
+efct_bls_send_rjt(struct efct_io *io, struct fc_frame_header *hdr)
+{
+	struct efct_node *node = io->node;
+	struct sli_bls_params *bls = &io->iparam.bls;
+	struct efct *efct = node->efct;
+	struct fc_ba_rjt *acc;
+	int rc;
+
+	/* fill out BLS Response-specific fields */
+	io->io_type = EFCT_IO_TYPE_BLS_RESP;
+	io->display_name = "ba_rjt";
+	io->hio_type = EFCT_HW_BLS_RJT;
+	io->init_task_tag = be16_to_cpu(hdr->fh_ox_id);
+
+	/* fill out iparam fields */
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	bls->ox_id = be16_to_cpu(hdr->fh_ox_id);
+	bls->rx_id = be16_to_cpu(hdr->fh_rx_id);
+	bls->vpi = io->node->vpi;
+	bls->rpi = io->node->rpi;
+	bls->s_id = U32_MAX;
+	bls->d_id = io->node->node_fc_id;
+	bls->rpi_registered = true;
+
+	acc = (void *)bls->payload;
+	acc->br_reason = ELS_RJT_UNAB;
+	acc->br_explan = ELS_EXPL_NONE;
+
+	rc = efct_hw_bls_send(efct, FC_RCTL_BA_RJT, bls, efct_bls_send_rjt_cb,
+			      io);
+	if (rc) {
+		efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc);
+		efct_scsi_io_free(io);
+		io = NULL;
+	}
+	return io;
+}
+
+int
+efct_scsi_send_tmf_resp(struct efct_io *io,
+			enum efct_scsi_tmf_resp rspcode,
+			u8 addl_rsp_info[3],
+			efct_scsi_io_cb_t cb, void *arg)
+{
+	int rc;
+	struct {
+		struct fcp_resp_with_ext rsp_ext;
+		struct fcp_resp_rsp_info info;
+	} *fcprsp;
+	u8 fcp_rspcode;
+
+	io->wire_len = 0;
+
+	switch (rspcode) {
+	case EFCT_SCSI_TMF_FUNCTION_COMPLETE:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_SUCCEEDED:
+	case EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND:
+		fcp_rspcode = FCP_TMF_CMPL;
+		break;
+	case EFCT_SCSI_TMF_FUNCTION_REJECTED:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	case EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER:
+		fcp_rspcode = FCP_TMF_INVALID_LUN;
+		break;
+	case EFCT_SCSI_TMF_SERVICE_DELIVERY:
+		fcp_rspcode = FCP_TMF_FAILED;
+		break;
+	default:
+		fcp_rspcode = FCP_TMF_REJECTED;
+		break;
+	}
+
+	io->hio_type = EFCT_HW_IO_TARGET_RSP;
+
+	io->scsi_tgt_cb = cb;
+	io->scsi_tgt_cb_arg = arg;
+
+	if (io->tmf_cmd == EFCT_SCSI_TMF_ABORT_TASK) {
+		rc = efct_target_send_bls_resp(io, cb, arg);
+		return rc;
+	}
+
+	/* populate the FCP TMF response */
+	fcprsp = io->rspbuf.virt;
+	memset(fcprsp, 0, sizeof(*fcprsp));
+
+	fcprsp->rsp_ext.resp.fr_flags |= FCP_SNS_LEN_VAL;
+
+	if (addl_rsp_info) {
+		memcpy(fcprsp->info._fr_resvd, addl_rsp_info,
+		       sizeof(fcprsp->info._fr_resvd));
+	}
+	fcprsp->info.rsp_code = fcp_rspcode;
+
+	io->wire_len = sizeof(*fcprsp);
+
+	fcprsp->rsp_ext.ext.fr_rsp_len =
+			cpu_to_be32(sizeof(struct fcp_resp_rsp_info));
+
+	io->sgl[0].addr = io->rspbuf.phys;
+	io->sgl[0].dif_addr = 0;
+	io->sgl[0].len = io->wire_len;
+	io->sgl_count = 1;
+
+	memset(&io->iparam, 0, sizeof(io->iparam));
+	io->iparam.fcp_tgt.ox_id = io->init_task_tag;
+	io->iparam.fcp_tgt.offset = 0;
+	io->iparam.fcp_tgt.cs_ctl = io->cs_ctl;
+	io->iparam.fcp_tgt.timeout = io->timeout;
+
+	rc = efct_scsi_io_dispatch(io, efct_target_io_cb);
+
+	return rc;
+}
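The rspcode switch in efct_scsi_send_tmf_resp() collapses the driver's TMF result values onto the smaller set of FCP response codes, with unknown values falling back to "rejected". The mapping can be sketched standalone (the enum values below are illustrative stand-ins, not the real driver or `fc_fcp.h` constants):

```c
#include <assert.h>

/* Stand-ins for the FCP TMF response codes. */
enum { TMF_CMPL = 0, TMF_REJECTED = 4, TMF_FAILED = 5, TMF_INVALID_LUN = 9 };

/* Stand-ins for the driver-side enum efct_scsi_tmf_resp values. */
enum {
	TMFR_COMPLETE = 1, TMFR_SUCCEEDED, TMFR_IO_NOT_FOUND,
	TMFR_REJECTED, TMFR_BAD_LUN, TMFR_SERVICE_DELIVERY,
};

/* Mirrors the rspcode switch: several success-like results map to CMPL,
 * and anything unrecognized maps to REJECTED. */
static int tmf_to_fcp_rspcode(int rspcode)
{
	switch (rspcode) {
	case TMFR_COMPLETE:
	case TMFR_SUCCEEDED:
	case TMFR_IO_NOT_FOUND:
		return TMF_CMPL;
	case TMFR_REJECTED:
		return TMF_REJECTED;
	case TMFR_BAD_LUN:
		return TMF_INVALID_LUN;
	case TMFR_SERVICE_DELIVERY:
		return TMF_FAILED;
	default:
		return TMF_REJECTED;
	}
}
```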
+
+static int
+efct_target_abort_cb(struct efct_hw_io *hio, u32 length, int status,
+		     u32 ext_status, void *app)
+{
+	struct efct_io *io = app;
+	struct efct *efct;
+	enum efct_scsi_io_status scsi_status;
+	efct_scsi_io_cb_t abort_cb;
+	void *abort_cb_arg;
+
+	efct = io->efct;
+
+	if (!io->abort_cb)
+		goto done;
+
+	abort_cb = io->abort_cb;
+	abort_cb_arg = io->abort_cb_arg;
+
+	io->abort_cb = NULL;
+	io->abort_cb_arg = NULL;
+
+	switch (status) {
+	case SLI4_FC_WCQE_STATUS_SUCCESS:
+		scsi_status = EFCT_SCSI_STATUS_GOOD;
+		break;
+	case SLI4_FC_WCQE_STATUS_LOCAL_REJECT:
+		switch (ext_status) {
+		case SLI4_FC_LOCAL_REJECT_NO_XRI:
+			scsi_status = EFCT_SCSI_STATUS_NO_IO;
+			break;
+		case SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS:
+			scsi_status = EFCT_SCSI_STATUS_ABORT_IN_PROGRESS;
+			break;
+		default:
+			/* we have seen 0x15 (abort in progress) */
+			scsi_status = EFCT_SCSI_STATUS_ERROR;
+			break;
+		}
+		break;
+	case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE:
+		scsi_status = EFCT_SCSI_STATUS_CHECK_RESPONSE;
+		break;
+	default:
+		scsi_status = EFCT_SCSI_STATUS_ERROR;
+		break;
+	}
+	/* invoke callback */
+	abort_cb(io->io_to_abort, scsi_status, 0, abort_cb_arg);
+
+done:
+	/* done with IO to abort; efct_ref_get(): efct_scsi_tgt_abort_io() */
+	kref_put(&io->io_to_abort->ref, io->io_to_abort->release);
+
+	efct_io_pool_io_free(efct->xport->io_pool, io);
+
+	efct_scsi_check_pending(efct);
+	return EFC_SUCCESS;
+}
+
+int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg)
+{
+	struct efct *efct;
+	struct efct_xport *xport;
+	int rc;
+	struct efct_io *abort_io = NULL;
+
+	efct = io->efct;
+	xport = efct->xport;
+
+	/* take a reference on IO being aborted */
+	if (kref_get_unless_zero(&io->ref) == 0) {
+		/* command no longer active */
+		scsi_io_printf(io, "command no longer active\n");
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Allocate a new IO to send the abort request. Use
+	 * efct_io_pool_io_alloc() directly, as we need an IO object that
+	 * will not fail allocation due to allocations being disabled
+	 * (as they can be in efct_scsi_io_alloc())
+	 */
+	abort_io = efct_io_pool_io_alloc(efct->xport->io_pool);
+	if (!abort_io) {
+		atomic_add_return(1, &xport->io_alloc_failed_count);
+		kref_put(&io->ref, io->release);
+		return EFC_FAIL;
+	}
+
+	/* Save the target server callback and argument */
+	/* set generic fields */
+	abort_io->cmd_tgt = true;
+	abort_io->node = io->node;
+
+	/* set type and abort-specific fields */
+	abort_io->io_type = EFCT_IO_TYPE_ABORT;
+	abort_io->display_name = "tgt_abort";
+	abort_io->io_to_abort = io;
+	abort_io->send_abts = false;
+	abort_io->abort_cb = cb;
+	abort_io->abort_cb_arg = arg;
+
+	/* now dispatch IO */
+	rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb);
+	if (rc)
+		kref_put(&io->ref, io->release);
+	return rc;
+}
+
+void
+efct_scsi_io_complete(struct efct_io *io)
+{
+	if (io->io_free) {
+		efc_log_debug(io->efct, "completion for non-busy io tag 0x%x\n",
+					io->tag);
+		return;
+	}
+
+	scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name);
+	kref_put(&io->ref, io->release);
+}
diff --git a/drivers/scsi/elx/efct/efct_scsi.h b/drivers/scsi/elx/efct/efct_scsi.h
new file mode 100644
index 000000000000..7c4afa1e8c95
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_scsi.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#if !defined(__EFCT_SCSI_H__)
+#define __EFCT_SCSI_H__
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_fc.h>
+
+/* efct_scsi_rcv_cmd() efct_scsi_rcv_tmf() flags */
+#define EFCT_SCSI_CMD_DIR_IN		(1 << 0)
+#define EFCT_SCSI_CMD_DIR_OUT		(1 << 1)
+#define EFCT_SCSI_CMD_SIMPLE		(1 << 2)
+#define EFCT_SCSI_CMD_HEAD_OF_QUEUE	(1 << 3)
+#define EFCT_SCSI_CMD_ORDERED		(1 << 4)
+#define EFCT_SCSI_CMD_UNTAGGED		(1 << 5)
+#define EFCT_SCSI_CMD_ACA		(1 << 6)
+#define EFCT_SCSI_FIRST_BURST_ERR	(1 << 7)
+#define EFCT_SCSI_FIRST_BURST_ABORTED	(1 << 8)
+
+/* efct_scsi_send_rd_data/recv_wr_data/send_resp flags */
+#define EFCT_SCSI_LAST_DATAPHASE	(1 << 0)
+#define EFCT_SCSI_NO_AUTO_RESPONSE	(1 << 1)
+#define EFCT_SCSI_LOW_LATENCY		(1 << 2)
+
+#define EFCT_SCSI_SNS_BUF_VALID(sense)	((sense) && \
+			(0x70 == (((const u8 *)(sense))[0] & 0x70)))
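The EFCT_SCSI_SNS_BUF_VALID() check accepts any SCSI sense response code with bits 6:4 set to 0x70, which covers both fixed-format (0x70/0x71) and descriptor-format (0x72/0x73) sense data. A standalone equivalent, exercisable outside the driver:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Equivalent of EFCT_SCSI_SNS_BUF_VALID(): a non-NULL buffer whose first
 * byte masks to 0x70 (fixed 0x70/0x71 and descriptor 0x72/0x73 formats
 * all pass; a zeroed or garbage buffer does not).
 */
static bool sns_buf_valid(const unsigned char *sense)
{
	return sense && (sense[0] & 0x70) == 0x70;
}
```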
+
+#define EFCT_SCSI_WQ_STEERING_SHIFT	16
+#define EFCT_SCSI_WQ_STEERING_MASK	(0xf << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CLASS	(0 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_REQUEST	(1 << EFCT_SCSI_WQ_STEERING_SHIFT)
+#define EFCT_SCSI_WQ_STEERING_CPU	(2 << EFCT_SCSI_WQ_STEERING_SHIFT)
+
+#define EFCT_SCSI_WQ_CLASS_SHIFT		(20)
+#define EFCT_SCSI_WQ_CLASS_MASK		(0xf << EFCT_SCSI_WQ_CLASS_SHIFT)
+#define EFCT_SCSI_WQ_CLASS(x)		((x & EFCT_SCSI_WQ_CLASS_MASK) << \
+						EFCT_SCSI_WQ_CLASS_SHIFT)
+
+#define EFCT_SCSI_WQ_CLASS_LOW_LATENCY	1
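The WQ steering and class values share one flags word: a 4-bit steering field at bits 16..19 and a 4-bit class field at bits 20..23, as consumed by efct_scsi_xfer_data() and efct_scsi_send_resp(). The extraction can be sketched with local copies of the shift/mask layout:

```c
#include <assert.h>

/* Local copies of the EFCT_SCSI_WQ_* field layout above. */
#define WQ_STEERING_SHIFT 16
#define WQ_STEERING_MASK  (0xf << WQ_STEERING_SHIFT)
#define WQ_CLASS_SHIFT    20
#define WQ_CLASS_MASK     (0xf << WQ_CLASS_SHIFT)

/* Extract the steering value, as the dispatch paths do. */
static unsigned int wq_steering(unsigned int flags)
{
	return (flags & WQ_STEERING_MASK) >> WQ_STEERING_SHIFT;
}

/* Extract the class value. */
static unsigned int wq_class(unsigned int flags)
{
	return (flags & WQ_CLASS_MASK) >> WQ_CLASS_SHIFT;
}
```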
+
+struct efct_scsi_cmd_resp {
+	u8 scsi_status;
+	u16 scsi_status_qualifier;
+	u8 *response_data;
+	u32 response_data_length;
+	u8 *sense_data;
+	u32 sense_data_length;
+	int residual;
+	u32 response_wire_length;
+};
+
+struct efct_vport {
+	struct efct		*efct;
+	bool			is_vport;
+	struct fc_host_statistics fc_host_stats;
+	struct Scsi_Host	*shost;
+	struct fc_vport		*fc_vport;
+	u64			npiv_wwpn;
+	u64			npiv_wwnn;
+};
+
+/* Status values returned by IO callbacks */
+enum efct_scsi_io_status {
+	EFCT_SCSI_STATUS_GOOD = 0,
+	EFCT_SCSI_STATUS_ABORTED,
+	EFCT_SCSI_STATUS_ERROR,
+	EFCT_SCSI_STATUS_DIF_GUARD_ERR,
+	EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR,
+	EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR,
+	EFCT_SCSI_STATUS_PROTOCOL_CRC_ERROR,
+	EFCT_SCSI_STATUS_NO_IO,
+	EFCT_SCSI_STATUS_ABORT_IN_PROGRESS,
+	EFCT_SCSI_STATUS_CHECK_RESPONSE,
+	EFCT_SCSI_STATUS_COMMAND_TIMEOUT,
+	EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED,
+	EFCT_SCSI_STATUS_SHUTDOWN,
+	EFCT_SCSI_STATUS_NEXUS_LOST,
+};
+
+struct efct_node;
+struct efct_io;
+struct efc_node;
+struct efc_nport;
+
+/* Callback used by send_rd_data(), recv_wr_data(), send_resp() */
+typedef int (*efct_scsi_io_cb_t)(struct efct_io *io,
+				    enum efct_scsi_io_status status,
+				    u32 flags, void *arg);
+
+/* Callback used by send_rd_io(), send_wr_io() */
+typedef int (*efct_scsi_rsp_io_cb_t)(struct efct_io *io,
+			enum efct_scsi_io_status status,
+			struct efct_scsi_cmd_resp *rsp,
+			u32 flags, void *arg);
+
+/* efct_scsi_cb_t flags */
+#define EFCT_SCSI_IO_CMPL		(1 << 0)
+/* IO completed, response sent */
+#define EFCT_SCSI_IO_CMPL_RSP_SENT	(1 << 1)
+#define EFCT_SCSI_IO_ABORTED		(1 << 2)
+
+/* efct_scsi_recv_tmf() request values */
+enum efct_scsi_tmf_cmd {
+	EFCT_SCSI_TMF_ABORT_TASK = 1,
+	EFCT_SCSI_TMF_QUERY_TASK_SET,
+	EFCT_SCSI_TMF_ABORT_TASK_SET,
+	EFCT_SCSI_TMF_CLEAR_TASK_SET,
+	EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT,
+	EFCT_SCSI_TMF_LOGICAL_UNIT_RESET,
+	EFCT_SCSI_TMF_CLEAR_ACA,
+	EFCT_SCSI_TMF_TARGET_RESET,
+};
+
+/* efct_scsi_send_tmf_resp() response values */
+enum efct_scsi_tmf_resp {
+	EFCT_SCSI_TMF_FUNCTION_COMPLETE = 1,
+	EFCT_SCSI_TMF_FUNCTION_SUCCEEDED,
+	EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND,
+	EFCT_SCSI_TMF_FUNCTION_REJECTED,
+	EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER,
+	EFCT_SCSI_TMF_SERVICE_DELIVERY,
+};
+
+struct efct_scsi_sgl {
+	uintptr_t	addr;
+	uintptr_t	dif_addr;
+	size_t		len;
+};
+
+/* Return values for calls from base driver to libefc */
+#define EFCT_SCSI_CALL_COMPLETE	0 /* All work is done */
+#define EFCT_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
+
+enum efct_scsi_io_role {
+	EFCT_SCSI_IO_ROLE_ORIGINATOR,
+	EFCT_SCSI_IO_ROLE_RESPONDER,
+};
+
+struct efct_io *
+efct_scsi_io_alloc(struct efct_node *node);
+void efct_scsi_io_free(struct efct_io *io);
+struct efct_io *efct_io_get_instance(struct efct *efct, u32 index);
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+int efct_scsi_tgt_new_device(struct efct *efct);
+int efct_scsi_tgt_del_device(struct efct *efct);
+int
+efct_scsi_tgt_new_nport(struct efc *efc, struct efc_nport *nport);
+void
+efct_scsi_tgt_del_nport(struct efc *efc, struct efc_nport *nport);
+
+int
+efct_scsi_new_initiator(struct efc *efc, struct efc_node *node);
+
+enum efct_scsi_del_initiator_reason {
+	EFCT_SCSI_INITIATOR_DELETED,
+	EFCT_SCSI_INITIATOR_MISSING,
+};
+
+int
+efct_scsi_del_initiator(struct efc *efc, struct efc_node *node,	int reason);
+int
+efct_scsi_recv_cmd(struct efct_io *io, uint64_t lun, u8 *cdb, u32 cdb_len,
+			u32 flags);
+int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun, enum efct_scsi_tmf_cmd cmd,
+			struct efct_io *abortio, u32 flags);
+int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags, struct efct_scsi_sgl *sgl,
+		u32 sgl_count, u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags, struct efct_scsi_sgl *sgl,
+		u32 sgl_count, u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+	struct efct_scsi_cmd_resp *rsp, efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_send_tmf_resp(struct efct_io *io, enum efct_scsi_tmf_resp rspcode,
+		       u8 addl_rsp_info[3], efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg);
+
+void efct_scsi_io_complete(struct efct_io *io);
+
+int efct_scsi_reg_fc_transport(void);
+int efct_scsi_release_fc_transport(void);
+int efct_scsi_new_device(struct efct *efct);
+int efct_scsi_del_device(struct efct *efct);
+void _efct_scsi_io_free(struct kref *arg);
+
+int
+efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost);
+struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev);
+
+int efct_scsi_io_dispatch(struct efct_io *io, void *cb);
+int efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb);
+void efct_scsi_check_pending(struct efct *efct);
+struct efct_io *
+efct_bls_send_rjt(struct efct_io *io, struct fc_frame_header *hdr);
+
+#endif /* __EFCT_SCSI_H__ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 25/31] elx: efct: LIO backend interface routines
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (23 preceding siblings ...)
  2021-01-07 22:58 ` [PATCH v7 24/31] elx: efct: SCSI IO handling routines James Smart
@ 2021-01-07 22:58 ` James Smart
  2021-01-07 22:59 ` [PATCH v7 26/31] elx: efct: Hardware IO submission routines James Smart
                   ` (5 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:58 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
LIO backend template registration and template functions.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_lio.c | 1689 ++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_lio.h |  189 ++++
 2 files changed, 1878 insertions(+)
 create mode 100644 drivers/scsi/elx/efct/efct_lio.c
 create mode 100644 drivers/scsi/elx/efct/efct_lio.h

diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
new file mode 100644
index 000000000000..9a5aeef3ecab
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.c
@@ -0,0 +1,1689 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+#include "efct_driver.h"
+#include "efct_lio.h"
+
+/*
+ * lio_wq is used to call into the LIO backend during creation or deletion of
+ * sessions. Using a single-threaded work queue serializes session management.
+ */
+static struct workqueue_struct *lio_wq;
+
+static int
+efct_format_wwn(char *str, size_t len, const char *pre, u64 wwn)
+{
+	u8 a[8];
+
+	put_unaligned_be64(wwn, a);
+	return snprintf(str, len, "%s%8phC", pre, a);
+}
+
+static int
+efct_lio_parse_wwn(const char *name, u64 *wwp, u8 npiv)
+{
+	int num;
+	u8 b[8];
+
+	if (npiv) {
+		num = sscanf(name,
+			     "%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx",
+			     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6],
+			     &b[7]);
+	} else {
+		num = sscanf(name,
+		      "%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx",
+			     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6],
+			     &b[7]);
+	}
+
+	if (num != 8)
+		return -EINVAL;
+
+	*wwp = get_unaligned_be64(b);
+	return EFC_SUCCESS;
+}
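The colon-separated branch of efct_lio_parse_wwn() can be reproduced in userspace C for a quick check; the sketch below packs the eight parsed bytes big-endian by hand instead of using the kernel's get_unaligned_be64(), and the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Parse a colon-separated WWN ("10:00:00:00:c9:00:00:01") into a u64,
 * mirroring the non-NPIV path of efct_lio_parse_wwn(). Returns 0 on
 * success, -1 if fewer than eight hex bytes were matched.
 */
static int parse_wwn_colon(const char *name, uint64_t *wwn)
{
	unsigned int b[8];
	int i, num;

	num = sscanf(name, "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
		     &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &b[6], &b[7]);
	if (num != 8)
		return -1;

	/* big-endian pack, equivalent to get_unaligned_be64() */
	*wwn = 0;
	for (i = 0; i < 8; i++)
		*wwn = (*wwn << 8) | (uint64_t)(b[i] & 0xff);
	return 0;
}
```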
+
+static int
+efct_lio_parse_npiv_wwn(const char *name, size_t size, u64 *wwpn, u64 *wwnn)
+{
+	unsigned int cnt = size;
+	int rc;
+
+	*wwpn = *wwnn = 0;
+	if (name[cnt - 1] == '\n' || name[cnt - 1] == 0)
+		cnt--;
+
+	/* validate we have enough characters for WWPN */
+	if ((cnt != (16 + 1 + 16)) || (name[16] != ':'))
+		return -EINVAL;
+
+	rc = efct_lio_parse_wwn(&name[0], wwpn, 1);
+	if (rc)
+		return rc;
+
+	rc = efct_lio_parse_wwn(&name[17], wwnn, 1);
+	if (rc)
+		return rc;
+
+	return EFC_SUCCESS;
+}
+
+static ssize_t
+efct_lio_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
+static ssize_t
+efct_lio_tpg_enable_store(struct config_item *item, const char *page,
+			  size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+	struct efct *efct;
+	struct efc *efc;
+	unsigned long op;
+
+	if (!tpg->nport || !tpg->nport->efct) {
+		pr_err("%s: Unable to find EFCT device\n", __func__);
+		return -EINVAL;
+	}
+
+	efct = tpg->nport->efct;
+	efc = efct->efcport;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (op == 1) {
+		int ret;
+
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		ret = efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE);
+		if (ret) {
+			efct->tgt_efct.lio_nport = NULL;
+			efc_log_debug(efct, "cannot bring port online\n");
+			return ret;
+		}
+	} else if (op == 0) {
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain && efc->domain->nport)
+			efct_scsi_tgt_del_nport(efc, efc->domain->nport);
+
+		atomic_set(&tpg->enabled, 0);
+	} else {
+		return -EINVAL;
+	}
+
+	return count;
+}
+
+static ssize_t
+efct_lio_npiv_tpg_enable_show(struct config_item *item, char *page)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", atomic_read(&tpg->enabled));
+}
+
+static ssize_t
+efct_lio_npiv_tpg_enable_store(struct config_item *item, const char *page,
+			       size_t count)
+{
+	struct se_portal_group *se_tpg = to_tpg(item);
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+	struct efct_lio_vport *lio_vport = tpg->vport;
+	struct efct *efct;
+	struct efc *efc;
+	unsigned long op;
+
+	if (kstrtoul(page, 0, &op) < 0)
+		return -EINVAL;
+
+	if (!lio_vport) {
+		pr_err("Unable to find vport\n");
+		return -EINVAL;
+	}
+
+	efct = lio_vport->efct;
+	efc = efct->efcport;
+
+	if (op == 1) {
+		atomic_set(&tpg->enabled, 1);
+		efc_log_debug(efct, "enable portal group %d\n", tpg->tpgt);
+
+		if (efc->domain) {
+			int ret;
+
+			ret = efc_nport_vport_new(efc->domain,
+						  lio_vport->npiv_wwpn,
+						  lio_vport->npiv_wwnn,
+						  U32_MAX, false, true,
+						  NULL, NULL);
+			if (ret != 0) {
+				efc_log_err(efct, "Failed to create Vport\n");
+				return ret;
+			}
+			return count;
+		}
+
+		if (!(efc_vport_create_spec(efc, lio_vport->npiv_wwnn,
+					    lio_vport->npiv_wwpn, U32_MAX,
+					    false, true, NULL, NULL)))
+			return -ENOMEM;
+
+	} else if (op == 0) {
+		efc_log_debug(efct, "disable portal group %d\n", tpg->tpgt);
+
+		atomic_set(&tpg->enabled, 0);
+		/* only the physical nport should remain; free the lio_nport
+		 * allocated in efct_lio_make_nport
+		 */
+		if (efc->domain) {
+			efc_nport_vport_del(efct->efcport, efc->domain,
+					    lio_vport->npiv_wwpn,
+					    lio_vport->npiv_wwnn);
+			return count;
+		}
+	} else {
+		return -EINVAL;
+	}
+	return count;
+}
+
+static char *efct_lio_get_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->nport->wwpn_str;
+}
+
+static char *efct_lio_get_npiv_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->vport->wwpn_str;
+}
+
+static u16 efct_lio_get_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static u16 efct_lio_get_npiv_tag(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpgt;
+}
+
+static int efct_lio_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+static int efct_lio_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_write_protect;
+}
+
+static int efct_lio_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static int
+efct_lio_npiv_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.prod_mode_write_protect;
+}
+
+static u32 efct_lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	return EFC_SUCCESS;
+}
+
+static int efct_lio_check_stop_free(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp =
+		container_of(se_cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_CHK_STOP_FREE);
+	return target_put_sess_cmd(se_cmd);
+}
+
+static int
+efct_lio_abort_tgt_cb(struct efct_io *io,
+		      enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_io_printf(io, "%s\n", __func__);
+	return EFC_SUCCESS;
+}
+
+static void
+efct_lio_aborted_task(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp =
+		container_of(se_cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_ABORTED_TASK);
+
+	if (!(se_cmd->transport_state & CMD_T_ABORTED) || ocp->rsp_sent)
+		return;
+
+	/* command has been aborted, cleanup here */
+	ocp->aborting = true;
+	ocp->err = EFCT_SCSI_STATUS_ABORTED;
+	/* terminate the exchange */
+	efct_scsi_tgt_abort_io(io, efct_lio_abort_tgt_cb, NULL);
+}
+
+static void efct_lio_release_cmd(struct se_cmd *se_cmd)
+{
+	struct efct_scsi_tgt_io *ocp =
+		container_of(se_cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct *efct = io->efct;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_RELEASE_CMD);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_CMPL_CMD);
+	efct_scsi_io_complete(io);
+	atomic_sub_return(1, &efct->tgt_efct.ios_in_use);
+}
+
+static void efct_lio_close_session(struct se_session *se_sess)
+{
+	struct efc_node *node = se_sess->fabric_sess_ptr;
+	struct efct *efct = NULL;
+	int rc;
+
+	pr_debug("se_sess=%p node=%p", se_sess, node);
+
+	if (!node) {
+		pr_debug("node is NULL");
+		return;
+	}
+
+	efct = node->efc->base;
+	rc = efct_xport_control(efct->xport, EFCT_XPORT_POST_NODE_EVENT, node,
+				EFCT_XPORT_SHUTDOWN);
+	if (rc != 0) {
+		efc_log_debug(efct,
+			      "Failed to shutdown session %p node %p\n",
+			     se_sess, node);
+		return;
+	}
+}
+
+static u32 efct_lio_sess_get_index(struct se_session *se_sess)
+{
+	return EFC_SUCCESS;
+}
+
+static void efct_lio_set_default_node_attrs(struct se_node_acl *nacl)
+{
+}
+
+static int efct_lio_get_cmd_state(struct se_cmd *se_cmd)
+{
+	return 0;
+}
+
+static int
+efct_lio_sg_map(struct efct_io *io)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	ocp->seg_map_cnt = pci_map_sg(io->efct->pci, cmd->t_data_sg,
+				      cmd->t_data_nents, cmd->data_direction);
+	if (ocp->seg_map_cnt == 0)
+		return -EFAULT;
+	return EFC_SUCCESS;
+}
+
+static void
+efct_lio_sg_unmap(struct efct_io *io)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &ocp->cmd;
+
+	if (WARN_ON(!ocp->seg_map_cnt || !cmd->t_data_sg))
+		return;
+
+	pci_unmap_sg(io->efct->pci, cmd->t_data_sg,
+		     ocp->seg_map_cnt, cmd->data_direction);
+	ocp->seg_map_cnt = 0;
+}
+
+static int
+efct_lio_status_done(struct efct_io *io,
+		     enum efct_scsi_io_status scsi_status,
+		     u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RSP_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	if (ocp->seg_map_cnt)
+		efct_lio_sg_unmap(io);
+
+	efct_lio_io_printf(io, "status=%d, err=%d flags=0x%x, dir=%d\n",
+				scsi_status, ocp->err, flags, ocp->ddir);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_datamove_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		       u32 flags, void *arg);
+
+static int
+efct_lio_write_pending(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp =
+		container_of(cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct_scsi_sgl *sgl = io->sgl;
+	struct scatterlist *sg;
+	u32 flags = 0, cnt, curcnt;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_WRITE_PENDING);
+	efct_lio_io_printf(io, "trans_state=0x%x se_cmd_flags=0x%x\n",
+			  cmd->transport_state, cmd->se_cmd_flags);
+
+	if (ocp->seg_cnt == 0) {
+		ocp->seg_cnt = cmd->t_data_nents;
+		ocp->cur_seg = 0;
+		if (efct_lio_sg_map(io)) {
+			efct_lio_io_printf(io, "efct_lio_sg_map failed\n");
+			return -EFAULT;
+		}
+	}
+	curcnt = min(ocp->seg_map_cnt - ocp->cur_seg, io->sgl_allocated);
+	/* find current sg */
+	for (cnt = 0, sg = cmd->t_data_sg; cnt < ocp->cur_seg; cnt++,
+	     sg = sg_next(sg))
+		;/* do nothing */
+
+	for (cnt = 0; cnt < curcnt; cnt++, sg = sg_next(sg)) {
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		sgl[cnt].len = sg_dma_len(sg);
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+	}
+	if (ocp->cur_seg == ocp->seg_cnt)
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+	rc = efct_scsi_recv_wr_data(io, flags, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	return rc;
+}
+
+static int
+efct_lio_queue_data_in(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp =
+		container_of(cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	struct efct_scsi_sgl *sgl = io->sgl;
+	struct scatterlist *sg = NULL;
+	uint flags = 0, cnt = 0, curcnt = 0;
+	u64 length = 0;
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_DATA_IN);
+
+	if (ocp->seg_cnt == 0) {
+		if (cmd->data_length) {
+			ocp->seg_cnt = cmd->t_data_nents;
+			ocp->cur_seg = 0;
+			if (efct_lio_sg_map(io)) {
+				efct_lio_io_printf(io,
+						   "efct_lio_sg_map failed\n");
+				return -EAGAIN;
+			}
+		} else {
+			/* If command length is 0, send the response status */
+			struct efct_scsi_cmd_resp rsp;
+
+			memset(&rsp, 0, sizeof(rsp));
+			efct_lio_io_printf(io,
+					   "cmd : %p length 0, send status\n",
+					   cmd);
+			return efct_scsi_send_resp(io, 0, &rsp,
+						  efct_lio_status_done, NULL);
+		}
+	}
+	curcnt = min(ocp->seg_map_cnt - ocp->cur_seg, io->sgl_allocated);
+
+	while (cnt < curcnt) {
+		sg = &cmd->t_data_sg[ocp->cur_seg];
+		sgl[cnt].addr = sg_dma_address(sg);
+		sgl[cnt].dif_addr = 0;
+		if (ocp->transferred_len + sg_dma_len(sg) >= cmd->data_length)
+			sgl[cnt].len = cmd->data_length - ocp->transferred_len;
+		else
+			sgl[cnt].len = sg_dma_len(sg);
+
+		ocp->transferred_len += sgl[cnt].len;
+		length += sgl[cnt].len;
+		ocp->cur_seg++;
+		cnt++;
+		if (ocp->transferred_len == cmd->data_length)
+			break;
+	}
+
+	if (ocp->transferred_len == cmd->data_length) {
+		flags = EFCT_SCSI_LAST_DATAPHASE;
+		ocp->seg_cnt = ocp->cur_seg;
+	}
+
+	/* If there is residual, disable Auto Good Response */
+	if (cmd->residual_count)
+		flags |= EFCT_SCSI_NO_AUTO_RESPONSE;
+
+	rc = efct_scsi_send_rd_data(io, flags, sgl, curcnt, length,
+				    efct_lio_datamove_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RD_DATA);
+	return rc;
+}
+
+static void
+efct_lio_send_resp(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		   u32 flags)
+{
+	struct efct_scsi_cmd_resp rsp;
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct se_cmd *cmd = &io->tgt_io.cmd;
+	int rc;
+
+	if (flags & EFCT_SCSI_IO_CMPL_RSP_SENT) {
+		ocp->rsp_sent = true;
+		efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+		transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+		return;
+	}
+
+	/* send check condition if an error occurred */
+	memset(&rsp, 0, sizeof(rsp));
+	rsp.scsi_status = cmd->scsi_status;
+	rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+	rsp.sense_data_length = cmd->scsi_sense_length;
+
+	/* Check for residual underrun or overrun */
+	if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+		rsp.residual = -cmd->residual_count;
+	else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+		rsp.residual = cmd->residual_count;
+
+	rc = efct_scsi_send_resp(io, 0, &rsp, efct_lio_status_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
+	if (rc != 0) {
+		efct_lio_io_printf(io, "Read done, send rsp failed %d\n", rc);
+		efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+		transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	} else {
+		ocp->rsp_sent = true;
+	}
+}
+
+static int
+efct_lio_datamove_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		       u32 flags, void *arg)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_DATA_DONE);
+	if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+		efct_lio_io_printf(io, "callback completed with error=%d\n",
+				   scsi_status);
+		ocp->err = scsi_status;
+	}
+	efct_lio_io_printf(io, "seg_map_cnt=%d\n", ocp->seg_map_cnt);
+	if (ocp->seg_map_cnt) {
+		if (ocp->err == EFCT_SCSI_STATUS_GOOD &&
+		    ocp->cur_seg < ocp->seg_cnt) {
+			int rc;
+
+			efct_lio_io_printf(io, "continuing cmd at segm=%d\n",
+					  ocp->cur_seg);
+			if (ocp->ddir == DMA_TO_DEVICE)
+				rc = efct_lio_write_pending(&ocp->cmd);
+			else
+				rc = efct_lio_queue_data_in(&ocp->cmd);
+			if (!rc)
+				return EFC_SUCCESS;
+
+			ocp->err = EFCT_SCSI_STATUS_ERROR;
+			efct_lio_io_printf(io, "could not continue command\n");
+		}
+		efct_lio_sg_unmap(io);
+	}
+
+	if (io->tgt_io.aborting) {
+		efct_lio_io_printf(io, "IO done aborted\n");
+		return EFC_SUCCESS;
+	}
+
+	if (ocp->ddir == DMA_TO_DEVICE) {
+		efct_lio_io_printf(io, "Write done, trans_state=0x%x\n",
+				  io->tgt_io.cmd.transport_state);
+		if (scsi_status != EFCT_SCSI_STATUS_GOOD) {
+			transport_generic_request_failure(&io->tgt_io.cmd,
+					TCM_CHECK_CONDITION_ABORT_CMD);
+			efct_set_lio_io_state(io,
+				EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE);
+		} else {
+			efct_set_lio_io_state(io,
+						EFCT_LIO_STATE_TGT_EXECUTE_CMD);
+			target_execute_cmd(&io->tgt_io.cmd);
+		}
+	} else {
+		efct_lio_send_resp(io, scsi_status, flags);
+	}
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_tmf_done(struct efct_io *io, enum efct_scsi_io_status scsi_status,
+		  u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(io, "cmd=%p status=%d, flags=0x%x\n",
+			      &io->tgt_io.cmd, scsi_status, flags);
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_GENERIC_FREE);
+	transport_generic_free_cmd(&io->tgt_io.cmd, 0);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_null_tmf_done(struct efct_io *tmfio,
+		       enum efct_scsi_io_status scsi_status,
+		      u32 flags, void *arg)
+{
+	efct_lio_tmfio_printf(tmfio, "cmd=%p status=%d, flags=0x%x\n",
+			      &tmfio->tgt_io.cmd, scsi_status, flags);
+
+	/* free struct efct_io only, no active se_cmd */
+	efct_scsi_io_complete(tmfio);
+	return EFC_SUCCESS;
+}
+
+static int
+efct_lio_queue_status(struct se_cmd *cmd)
+{
+	struct efct_scsi_cmd_resp rsp;
+	struct efct_scsi_tgt_io *ocp =
+		container_of(cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *io = container_of(ocp, struct efct_io, tgt_io);
+	int rc = 0;
+
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_TFO_QUEUE_STATUS);
+	efct_lio_io_printf(io,
+		"status=0x%x trans_state=0x%x se_cmd_flags=0x%x sns_len=%d\n",
+		cmd->scsi_status, cmd->transport_state, cmd->se_cmd_flags,
+		cmd->scsi_sense_length);
+
+	memset(&rsp, 0, sizeof(rsp));
+	rsp.scsi_status = cmd->scsi_status;
+	rsp.sense_data = (u8 *)io->tgt_io.sense_buffer;
+	rsp.sense_data_length = cmd->scsi_sense_length;
+
+	/* Check for residual underrun or overrun; mark overrun with a
+	 * negative value so it can be recognized in HW
+	 */
+	if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT)
+		rsp.residual = -cmd->residual_count;
+	else if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT)
+		rsp.residual = cmd->residual_count;
+
+	rc = efct_scsi_send_resp(io, 0, &rsp, efct_lio_status_done, NULL);
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_SEND_RSP);
+	if (rc == 0)
+		ocp->rsp_sent = true;
+	return rc;
+}
+
+static void efct_lio_queue_tm_rsp(struct se_cmd *cmd)
+{
+	struct efct_scsi_tgt_io *ocp =
+		container_of(cmd, struct efct_scsi_tgt_io, cmd);
+	struct efct_io *tmfio = container_of(ocp, struct efct_io, tgt_io);
+	struct se_tmr_req *se_tmr = cmd->se_tmr_req;
+	u8 rspcode;
+
+	efct_lio_tmfio_printf(tmfio, "cmd=%p function=0x%x tmr->response=%d\n",
+			      cmd, se_tmr->function, se_tmr->response);
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_COMPLETE;
+		break;
+	case TMR_TASK_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND;
+		break;
+	case TMR_LUN_DOES_NOT_EXIST:
+		rspcode = EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER;
+		break;
+	case TMR_FUNCTION_REJECTED:
+	default:
+		rspcode = EFCT_SCSI_TMF_FUNCTION_REJECTED;
+		break;
+	}
+	efct_scsi_send_tmf_resp(tmfio, rspcode, NULL, efct_lio_tmf_done, NULL);
+}
+
+static struct efct *efct_find_wwpn(u64 wwpn)
+{
+	struct efct *efct;
+
+	/* Search for the HBA that has this WWPN */
+	list_for_each_entry(efct, &efct_devices, list_entry) {
+		if (wwpn == efct_get_wwpn(&efct->hw))
+			return efct;
+	}
+
+	return NULL;
+}
+
+static struct se_wwn *
+efct_lio_make_nport(struct target_fabric_configfs *tf,
+		    struct config_group *group, const char *name)
+{
+	struct efct_lio_nport *lio_nport;
+	struct efct *efct;
+	int ret;
+	u64 wwpn;
+
+	ret = efct_lio_parse_wwn(name, &wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	efct = efct_find_wwpn(wwpn);
+	if (!efct) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+
+	lio_nport = kzalloc(sizeof(*lio_nport), GFP_KERNEL);
+	if (!lio_nport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_nport->efct = efct;
+	lio_nport->wwpn = wwpn;
+	efct_format_wwn(lio_nport->wwpn_str, sizeof(lio_nport->wwpn_str),
+			"naa.", wwpn);
+	efct->tgt_efct.lio_nport = lio_nport;
+
+	return &lio_nport->nport_wwn;
+}
+
+static struct se_wwn *
+efct_lio_npiv_make_nport(struct target_fabric_configfs *tf,
+			 struct config_group *group, const char *name)
+{
+	struct efct_lio_vport *lio_vport;
+	struct efct *efct;
+	int ret;
+	u64 p_wwpn, npiv_wwpn, npiv_wwnn;
+	char *p, *pbuf, tmp[128];
+	struct efct_lio_vport_list_t *vport_list;
+	struct fc_vport *new_fc_vport;
+	struct fc_vport_identifiers vport_id;
+	unsigned long flags = 0;
+
+	snprintf(tmp, sizeof(tmp), "%s", name);
+	pbuf = &tmp[0];
+
+	p = strsep(&pbuf, "@");
+
+	if (!p || !pbuf) {
+		pr_err("Unable to find separator '@'\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	ret = efct_lio_parse_wwn(p, &p_wwpn, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	ret = efct_lio_parse_npiv_wwn(pbuf, strlen(pbuf), &npiv_wwpn,
+				      &npiv_wwnn);
+	if (ret)
+		return ERR_PTR(ret);
+
+	efct = efct_find_wwpn(p_wwpn);
+	if (!efct) {
+		pr_err("cannot find EFCT for base wwpn %s\n", name);
+		return ERR_PTR(-ENXIO);
+	}
+
+	lio_vport = kzalloc(sizeof(*lio_vport), GFP_KERNEL);
+	if (!lio_vport)
+		return ERR_PTR(-ENOMEM);
+
+	lio_vport->efct = efct;
+	lio_vport->wwpn = p_wwpn;
+	lio_vport->npiv_wwpn = npiv_wwpn;
+	lio_vport->npiv_wwnn = npiv_wwnn;
+
+	efct_format_wwn(lio_vport->wwpn_str, sizeof(lio_vport->wwpn_str),
+			"naa.", npiv_wwpn);
+
+	vport_list = kzalloc(sizeof(*vport_list), GFP_KERNEL);
+	if (!vport_list) {
+		kfree(lio_vport);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	vport_list->lio_vport = lio_vport;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	INIT_LIST_HEAD(&vport_list->list_entry);
+	list_add_tail(&vport_list->list_entry, &efct->tgt_efct.vport_list);
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+
+	memset(&vport_id, 0, sizeof(vport_id));
+	vport_id.port_name = npiv_wwpn;
+	vport_id.node_name = npiv_wwnn;
+	vport_id.roles = FC_PORT_ROLE_FCP_INITIATOR;
+	vport_id.vport_type = FC_PORTTYPE_NPIV;
+	vport_id.disable = false;
+
+	new_fc_vport = fc_vport_create(efct->shost, 0, &vport_id);
+	if (!new_fc_vport) {
+		efc_log_err(efct, "fc_vport_create failed\n");
+		kfree(lio_vport);
+		kfree(vport_list);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	lio_vport->fc_vport = new_fc_vport;
+
+	return &lio_vport->vport_wwn;
+}
+
+static void
+efct_lio_drop_nport(struct se_wwn *wwn)
+{
+	struct efct_lio_nport *lio_nport =
+		container_of(wwn, struct efct_lio_nport, nport_wwn);
+	struct efct *efct = lio_nport->efct;
+
+	/* only physical nport should exist, free lio_nport allocated
+	 * in efct_lio_make_nport.
+	 */
+	kfree(efct->tgt_efct.lio_nport);
+	efct->tgt_efct.lio_nport = NULL;
+}
+
+static void
+efct_lio_npiv_drop_nport(struct se_wwn *wwn)
+{
+	struct efct_lio_vport *lio_vport =
+		container_of(wwn, struct efct_lio_vport, vport_wwn);
+	struct efct_lio_vport_list_t *vport, *next_vport;
+	struct efct *efct = lio_vport->efct;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+
+	if (lio_vport->fc_vport)
+		fc_vport_terminate(lio_vport->fc_vport);
+
+	list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		if (vport->lio_vport == lio_vport) {
+			list_del(&vport->list_entry);
+			kfree(vport->lio_vport);
+			kfree(vport);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+}
+
+static struct se_portal_group *
+efct_lio_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_nport *lio_nport =
+		container_of(wwn, struct efct_lio_nport, nport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct *efct;
+	unsigned long n;
+	int ret;
+
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->nport = lio_nport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	efct = lio_nport->efct;
+	efct->tgt_efct.tpg = tpg;
+	efc_log_debug(efct, "create portal group %d\n", tpg->tpgt);
+
+	xa_init(&efct->lookup);
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	struct efct *efct = tpg->nport->efct;
+
+	efc_log_debug(efct, "drop portal group %d\n", tpg->tpgt);
+	tpg->nport->efct->tgt_efct.tpg = NULL;
+	core_tpg_deregister(se_tpg);
+	xa_destroy(&efct->lookup);
+	kfree(tpg);
+}
+
+static struct se_portal_group *
+efct_lio_npiv_make_tpg(struct se_wwn *wwn, const char *name)
+{
+	struct efct_lio_vport *lio_vport =
+		container_of(wwn, struct efct_lio_vport, vport_wwn);
+	struct efct_lio_tpg *tpg;
+	struct efct *efct;
+	unsigned long n;
+	int ret;
+
+	efct = lio_vport->efct;
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (kstrtoul(name + 5, 10, &n) || n > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	if (n != 1) {
+		efc_log_err(efct, "Invalid tpgt index: %lu provided\n", n);
+		return ERR_PTR(-EINVAL);
+	}
+
+	tpg = kzalloc(sizeof(*tpg), GFP_KERNEL);
+	if (!tpg)
+		return ERR_PTR(-ENOMEM);
+
+	tpg->vport = lio_vport;
+	tpg->tpgt = n;
+	atomic_set(&tpg->enabled, 0);
+
+	tpg->tpg_attrib.generate_node_acls = 1;
+	tpg->tpg_attrib.demo_mode_write_protect = 1;
+	tpg->tpg_attrib.cache_dynamic_acls = 1;
+	tpg->tpg_attrib.demo_mode_login_only = 1;
+	tpg->tpg_attrib.session_deletion_wait = 1;
+
+	ret = core_tpg_register(wwn, &tpg->tpg, SCSI_PROTOCOL_FCP);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	lio_vport->tpg = tpg;
+	efc_log_debug(efct, "create vport portal group %d\n", tpg->tpgt);
+
+	return &tpg->tpg;
+}
+
+static void
+efct_lio_npiv_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct efct_lio_tpg *tpg =
+		container_of(se_tpg, struct efct_lio_tpg, tpg);
+
+	efc_log_debug(tpg->vport->efct, "drop npiv portal group %d\n",
+		       tpg->tpgt);
+	core_tpg_deregister(se_tpg);
+	kfree(tpg);
+}
+
+static int
+efct_lio_init_nodeacl(struct se_node_acl *se_nacl, const char *name)
+{
+	struct efct_lio_nacl *nacl;
+	u64 wwnn;
+
+	if (efct_lio_parse_wwn(name, &wwnn, 0) < 0)
+		return -EINVAL;
+
+	nacl = container_of(se_nacl, struct efct_lio_nacl, se_node_acl);
+	nacl->nport_wwnn = wwnn;
+
+	efct_format_wwn(nacl->nport_name, sizeof(nacl->nport_name), "", wwnn);
+	return EFC_SUCCESS;
+}
+
+static int efct_lio_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static int
+efct_lio_npiv_check_demo_mode_login_only(struct se_portal_group *stpg)
+{
+	struct efct_lio_tpg *tpg = container_of(stpg, struct efct_lio_tpg, tpg);
+
+	return tpg->tpg_attrib.demo_mode_login_only;
+}
+
+static struct efct_lio_tpg *
+efct_get_vport_tpg(struct efc_node *node)
+{
+	struct efct *efct;
+	u64 wwpn = node->nport->wwpn;
+	struct efct_lio_vport_list_t *vport, *next;
+	struct efct_lio_vport *lio_vport = NULL;
+	struct efct_lio_tpg *tpg = NULL;
+	unsigned long flags = 0;
+
+	efct = node->efc->base;
+	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+	list_for_each_entry_safe(vport, next, &efct->tgt_efct.vport_list,
+				 list_entry) {
+		lio_vport = vport->lio_vport;
+		if (wwpn && lio_vport && lio_vport->npiv_wwpn == wwpn) {
+			efc_log_debug(efct, "found tpg on vport\n");
+			tpg = lio_vport->tpg;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&efct->tgt_efct.efct_lio_lock, flags);
+	return tpg;
+}
+
+static void
+_efct_tgt_node_free(struct kref *arg)
+{
+	struct efct_node *tgt_node = container_of(arg, struct efct_node, ref);
+	struct efc_node *node = tgt_node->node;
+
+	efc_scsi_del_initiator_complete(node->efc, node);
+	kfree(tgt_node);
+}
+
+static int efct_session_cb(struct se_portal_group *se_tpg,
+			   struct se_session *se_sess, void *private)
+{
+	struct efc_node *node = private;
+	struct efct_node *tgt_node = NULL;
+	struct efct *efct = node->efc->base;
+	u64 id;
+	int rc;
+
+	tgt_node = kzalloc(sizeof(*tgt_node), GFP_KERNEL);
+	if (!tgt_node)
+		return EFC_FAIL;
+
+	/* initialize refcount */
+	kref_init(&tgt_node->ref);
+	tgt_node->release = _efct_tgt_node_free;
+
+	tgt_node->session = se_sess;
+	node->tgt_node = tgt_node;
+	tgt_node->efct = efct;
+
+	tgt_node->node = node;
+
+	tgt_node->node_fc_id = node->rnode.fc_id;
+	tgt_node->port_fc_id = node->nport->fc_id;
+	tgt_node->vpi = node->nport->indicator;
+	tgt_node->rpi = node->rnode.indicator;
+
+	spin_lock_init(&tgt_node->active_ios_lock);
+	INIT_LIST_HEAD(&tgt_node->active_ios);
+
+	id = (u64)tgt_node->port_fc_id << 32 | tgt_node->node_fc_id;
+	efc_log_debug(efct, "New node id: %llx node: %p\n", id, tgt_node);
+	rc = xa_err(xa_store(&efct->lookup, id, tgt_node, GFP_KERNEL));
+	if (rc)
+		efc_log_err(efct, "Node lookup store failed: %d\n", rc);
+
+	efc_scsi_sess_reg_complete(node, EFC_SUCCESS);
+	return EFC_SUCCESS;
+}
+
+int efct_scsi_tgt_new_device(struct efct *efct)
+{
+	int rc = 0;
+	u32 total_ios;
+
+	/* Get the max settings */
+	efct->tgt_efct.max_sge = sli_get_max_sge(&efct->hw.sli);
+	efct->tgt_efct.max_sgl = sli_get_max_sgl(&efct->hw.sli);
+
+	/* initialize IO watermark fields */
+	atomic_set(&efct->tgt_efct.ios_in_use, 0);
+	total_ios = efct->hw.config.n_io;
+	efc_log_debug(efct, "total_ios=%d\n", total_ios);
+	efct->tgt_efct.watermark_min =
+			(total_ios * EFCT_WATERMARK_LOW_PCT) / 100;
+	efct->tgt_efct.watermark_max =
+			(total_ios * EFCT_WATERMARK_HIGH_PCT) / 100;
+	atomic_set(&efct->tgt_efct.io_high_watermark,
+		   efct->tgt_efct.watermark_max);
+	atomic_set(&efct->tgt_efct.watermark_hit, 0);
+	atomic_set(&efct->tgt_efct.initiator_count, 0);
+
+	lio_wq = create_singlethread_workqueue("efct_lio_worker");
+	if (!lio_wq) {
+		efc_log_err(efct, "workqueue create failed\n");
+		return -ENOMEM;
+	}
+
+	spin_lock_init(&efct->tgt_efct.efct_lio_lock);
+	INIT_LIST_HEAD(&efct->tgt_efct.vport_list);
+
+	return rc;
+}
+
+int efct_scsi_tgt_del_device(struct efct *efct)
+{
+	flush_workqueue(lio_wq);
+
+	return EFC_SUCCESS;
+}
+
+int
+efct_scsi_tgt_new_nport(struct efc *efc, struct efc_nport *nport)
+{
+	struct efct *efct = nport->efc->base;
+
+	efc_log_debug(efct, "New nport: %s bound to %s\n", nport->display_name,
+		      efct->tgt_efct.lio_nport->wwpn_str);
+
+	return EFC_SUCCESS;
+}
+
+void
+efct_scsi_tgt_del_nport(struct efc *efc, struct efc_nport *nport)
+{
+	efc_log_debug(efc, "Del nport: %s\n", nport->display_name);
+}
+
+static void efct_lio_setup_session(struct work_struct *work)
+{
+	struct efct_lio_wq_data *wq_data =
+		container_of(work, struct efct_lio_wq_data, work);
+	struct efct *efct = wq_data->efct;
+	struct efc_node *node = wq_data->ptr;
+	char wwpn[WWN_NAME_LEN];
+	struct efct_lio_tpg *tpg = NULL;
+	struct se_portal_group *se_tpg;
+	struct se_session *se_sess;
+	int watermark;
+	int ini_count;
+
+	/* Check to see if it belongs to a vport;
+	 * if not, use the physical port
+	 */
+	tpg = efct_get_vport_tpg(node);
+	if (tpg) {
+		se_tpg = &tpg->tpg;
+	} else if (efct->tgt_efct.tpg) {
+		tpg = efct->tgt_efct.tpg;
+		se_tpg = &tpg->tpg;
+	} else {
+		efc_log_err(efct, "failed to init session\n");
+		return;
+	}
+
+	/*
+	 * Format the FCP Initiator port_name into colon
+	 * separated values to match the format by our explicit
+	 * ConfigFS NodeACLs.
+	 */
+	efct_format_wwn(wwpn, sizeof(wwpn), "",	efc_node_get_wwpn(node));
+
+	se_sess = target_setup_session(se_tpg, 0, 0, TARGET_PROT_NORMAL, wwpn,
+				       node, efct_session_cb);
+	if (IS_ERR(se_sess)) {
+		efc_log_err(efct, "failed to setup session\n");
+		kfree(wq_data);
+		efc_scsi_sess_reg_complete(node, EFC_FAIL);
+		return;
+	}
+
+	efc_log_debug(efct, "new initiator sess=%p node=%p\n", se_sess, node);
+
+	/* update IO watermark: increment initiator count */
+	ini_count = atomic_add_return(1, &efct->tgt_efct.initiator_count);
+	watermark = efct->tgt_efct.watermark_max -
+		    ini_count * EFCT_IO_WATERMARK_PER_INITIATOR;
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+			efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
+
+	kfree(wq_data);
+}
+
+int efct_scsi_new_initiator(struct efc *efc, struct efc_node *node)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_lio_wq_data *wq_data;
+
+	/*
+	 * Since LIO only supports initiator validation at thread level,
+	 * we accept all callers here.
+	 */
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return -ENOMEM;
+
+	wq_data->ptr = node;
+	wq_data->efct = efct;
+	INIT_WORK(&wq_data->work, efct_lio_setup_session);
+	queue_work(lio_wq, &wq_data->work);
+	return EFCT_SCSI_CALL_ASYNC;
+}
+
+static void efct_lio_remove_session(struct work_struct *work)
+{
+	struct efct_lio_wq_data *wq_data =
+		container_of(work, struct efct_lio_wq_data, work);
+	struct efct *efct = wq_data->efct;
+	struct efc_node *node = wq_data->ptr;
+	struct efct_node *tgt_node = NULL;
+	struct se_session *se_sess;
+
+	tgt_node = node->tgt_node;
+	se_sess = tgt_node->session;
+
+	if (!se_sess) {
+		/* base driver has sent back-to-back requests
+		 * to unreg session with no intervening
+		 * register
+		 */
+		efc_log_err(efct, "unreg session for NULL session\n");
+		efc_scsi_del_initiator_complete(node->efc, node);
+		return;
+	}
+
+	efc_log_debug(efct, "unreg session se_sess=%p node=%p\n",
+		       se_sess, node);
+
+	/* first flag all session commands to complete */
+	target_stop_session(se_sess);
+
+	/* now wait for session commands to complete */
+	target_wait_for_sess_cmds(se_sess);
+	target_remove_session(se_sess);
+
+	kref_put(&tgt_node->ref, tgt_node->release);
+
+	kfree(wq_data);
+}
+
+int efct_scsi_del_initiator(struct efc *efc, struct efc_node *node, int reason)
+{
+	struct efct *efct = node->efc->base;
+	struct efct_node *tgt_node = node->tgt_node;
+	struct efct_lio_wq_data *wq_data;
+	int watermark;
+	int ini_count;
+	u64 id;
+
+	if (reason == EFCT_SCSI_INITIATOR_MISSING)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
+	if (!wq_data)
+		return EFCT_SCSI_CALL_COMPLETE;
+
+	id = (u64)tgt_node->port_fc_id << 32 | tgt_node->node_fc_id;
+	xa_erase(&efct->lookup, id);
+
+	wq_data->ptr = node;
+	wq_data->efct = efct;
+	INIT_WORK(&wq_data->work, efct_lio_remove_session);
+	queue_work(lio_wq, &wq_data->work);
+
+	/*
+	 * update IO watermark: decrement initiator count
+	 */
+	ini_count = atomic_sub_return(1, &efct->tgt_efct.initiator_count);
+
+	watermark = efct->tgt_efct.watermark_max -
+		    ini_count * EFCT_IO_WATERMARK_PER_INITIATOR;
+	watermark = (efct->tgt_efct.watermark_min > watermark) ?
+			efct->tgt_efct.watermark_min : watermark;
+	atomic_set(&efct->tgt_efct.io_high_watermark, watermark);
+
+	return EFCT_SCSI_CALL_ASYNC;
+}
+
+int efct_scsi_recv_cmd(struct efct_io *io, uint64_t lun, u8 *cdb,
+		       u32 cdb_len, u32 flags)
+{
+	struct efct_scsi_tgt_io *ocp = &io->tgt_io;
+	struct efct *efct = io->efct;
+	char *ddir;
+	struct efct_node *tgt_node = NULL;
+	struct se_session *se_sess;
+	int rc = 0;
+
+	memset(ocp, 0, sizeof(*ocp));
+	efct_set_lio_io_state(io, EFCT_LIO_STATE_SCSI_RECV_CMD);
+	atomic_inc(&efct->tgt_efct.ios_in_use);
+
+	/* set target timeout */
+	io->timeout = efct->target_io_timer_sec;
+
+	if (flags & EFCT_SCSI_CMD_SIMPLE)
+		ocp->task_attr = TCM_SIMPLE_TAG;
+	else if (flags & EFCT_SCSI_CMD_HEAD_OF_QUEUE)
+		ocp->task_attr = TCM_HEAD_TAG;
+	else if (flags & EFCT_SCSI_CMD_ORDERED)
+		ocp->task_attr = TCM_ORDERED_TAG;
+	else if (flags & EFCT_SCSI_CMD_ACA)
+		ocp->task_attr = TCM_ACA_TAG;
+
+	switch (flags & (EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT)) {
+	case EFCT_SCSI_CMD_DIR_IN:
+		ddir = "FROM_INITIATOR";
+		ocp->ddir = DMA_TO_DEVICE;
+		break;
+	case EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "TO_INITIATOR";
+		ocp->ddir = DMA_FROM_DEVICE;
+		break;
+	case EFCT_SCSI_CMD_DIR_IN | EFCT_SCSI_CMD_DIR_OUT:
+		ddir = "BIDIR";
+		ocp->ddir = DMA_BIDIRECTIONAL;
+		break;
+	default:
+		ddir = "NONE";
+		ocp->ddir = DMA_NONE;
+		break;
+	}
+
+	ocp->lun = lun;
+	efct_lio_io_printf(io, "new cmd=0x%x ddir=%s dl=%u\n",
+			  cdb[0], ddir, io->exp_xfer_len);
+
+	tgt_node = io->node;
+	se_sess = tgt_node->session;
+	if (se_sess) {
+		efct_set_lio_io_state(io, EFCT_LIO_STATE_TGT_SUBMIT_CMD);
+		rc = target_submit_cmd(&io->tgt_io.cmd, se_sess,
+				       cdb, &io->tgt_io.sense_buffer[0],
+				       ocp->lun, io->exp_xfer_len,
+				       ocp->task_attr, ocp->ddir,
+				       TARGET_SCF_ACK_KREF);
+		if (rc) {
+			efc_log_err(efct, "failed to submit cmd se_cmd: %p\n",
+				    &ocp->cmd);
+			efct_scsi_io_free(io);
+		}
+	}
+
+	return rc;
+}
+
+int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun, enum efct_scsi_tmf_cmd cmd,
+		   struct efct_io *io_to_abort, u32 flags)
+{
+	unsigned char tmr_func;
+	struct efct *efct = tmfio->efct;
+	struct efct_scsi_tgt_io *ocp = &tmfio->tgt_io;
+	struct efct_node *tgt_node = NULL;
+	struct se_session *se_sess;
+	int rc;
+
+	memset(ocp, 0, sizeof(*ocp));
+	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_SCSI_RECV_TMF);
+	atomic_inc(&efct->tgt_efct.ios_in_use);
+	efct_lio_tmfio_printf(tmfio, "%s: new tmf %x lun=%u\n",
+			      tmfio->display_name, cmd, lun);
+
+	switch (cmd) {
+	case EFCT_SCSI_TMF_ABORT_TASK:
+		tmr_func = TMR_ABORT_TASK;
+		break;
+	case EFCT_SCSI_TMF_ABORT_TASK_SET:
+		tmr_func = TMR_ABORT_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_TASK_SET:
+		tmr_func = TMR_CLEAR_TASK_SET;
+		break;
+	case EFCT_SCSI_TMF_LOGICAL_UNIT_RESET:
+		tmr_func = TMR_LUN_RESET;
+		break;
+	case EFCT_SCSI_TMF_CLEAR_ACA:
+		tmr_func = TMR_CLEAR_ACA;
+		break;
+	case EFCT_SCSI_TMF_TARGET_RESET:
+		tmr_func = TMR_TARGET_WARM_RESET;
+		break;
+	case EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT:
+	case EFCT_SCSI_TMF_QUERY_TASK_SET:
+	default:
+		goto tmf_fail;
+	}
+
+	tmfio->tgt_io.tmf = tmr_func;
+	tmfio->tgt_io.lun = lun;
+	tmfio->tgt_io.io_to_abort = io_to_abort;
+
+	tgt_node = tmfio->node;
+
+	se_sess = tgt_node->session;
+	if (!se_sess)
+		return EFC_SUCCESS;
+
+	rc = target_submit_tmr(&ocp->cmd, se_sess, NULL, lun, ocp, tmr_func,
+			GFP_ATOMIC, tmfio->init_task_tag, TARGET_SCF_ACK_KREF);
+
+	efct_set_lio_io_state(tmfio, EFCT_LIO_STATE_TGT_SUBMIT_TMR);
+	if (rc)
+		goto tmf_fail;
+
+	return EFC_SUCCESS;
+
+tmf_fail:
+	efct_scsi_send_tmf_resp(tmfio, EFCT_SCSI_TMF_FUNCTION_REJECTED,
+				NULL, efct_lio_null_tmf_done, NULL);
+	return EFC_SUCCESS;
+}
+
+/* Start items for efct_lio_tpg_attrib_cit */
+
+#define DEF_EFCT_TPG_ATTRIB(name)					  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_show(			  \
+		struct config_item *item, char *page)			  \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+			struct efct_lio_tpg, tpg);			  \
+									  \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		  \
+}									  \
+									  \
+static ssize_t efct_lio_tpg_attrib_##name##_store(			  \
+		struct config_item *item, const char *page, size_t count) \
+{									  \
+	struct se_portal_group *se_tpg = to_tpg(item);			  \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			  \
+					struct efct_lio_tpg, tpg);	  \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		  \
+	unsigned long val;						  \
+	int ret;							  \
+									  \
+	ret = kstrtoul(page, 0, &val);					  \
+	if (ret < 0) {							  \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	  \
+		return ret;						  \
+	}								  \
+									  \
+	if (val != 0 && val != 1) {					  \
+		pr_err("Illegal boolean value %lu\n", val);		  \
+		return -EINVAL;						  \
+	}								  \
+									  \
+	a->name = val;							  \
+									  \
+	return count;							  \
+}									  \
+CONFIGFS_ATTR(efct_lio_tpg_attrib_, name)
+
+DEF_EFCT_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_tpg_attrib_attrs[] = {
+	&efct_lio_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
+
+#define DEF_EFCT_NPIV_TPG_ATTRIB(name)					   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_show(			   \
+		struct config_item *item, char *page)			   \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+									   \
+	return sprintf(page, "%u\n", tpg->tpg_attrib.name);		   \
+}									   \
+									   \
+static ssize_t efct_lio_npiv_tpg_attrib_##name##_store(			   \
+		struct config_item *item, const char *page, size_t count)  \
+{									   \
+	struct se_portal_group *se_tpg = to_tpg(item);			   \
+	struct efct_lio_tpg *tpg = container_of(se_tpg,			   \
+			struct efct_lio_tpg, tpg);			   \
+	struct efct_lio_tpg_attrib *a = &tpg->tpg_attrib;		   \
+	unsigned long val;						   \
+	int ret;							   \
+									   \
+	ret = kstrtoul(page, 0, &val);					   \
+	if (ret < 0) {							   \
+		pr_err("kstrtoul() failed with ret: %d\n", ret);	   \
+		return ret;						   \
+	}								   \
+									   \
+	if (val != 0 && val != 1) {					   \
+		pr_err("Illegal boolean value %lu\n", val);		   \
+		return -EINVAL;						   \
+	}								   \
+									   \
+	a->name = val;							   \
+									   \
+	return count;							   \
+}									   \
+CONFIGFS_ATTR(efct_lio_npiv_tpg_attrib_, name)
+
+DEF_EFCT_NPIV_TPG_ATTRIB(generate_node_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(cache_dynamic_acls);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(prod_mode_write_protect);
+DEF_EFCT_NPIV_TPG_ATTRIB(demo_mode_login_only);
+DEF_EFCT_NPIV_TPG_ATTRIB(session_deletion_wait);
+
+static struct configfs_attribute *efct_lio_npiv_tpg_attrib_attrs[] = {
+	&efct_lio_npiv_tpg_attrib_attr_generate_node_acls,
+	&efct_lio_npiv_tpg_attrib_attr_cache_dynamic_acls,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_prod_mode_write_protect,
+	&efct_lio_npiv_tpg_attrib_attr_demo_mode_login_only,
+	&efct_lio_npiv_tpg_attrib_attr_session_deletion_wait,
+	NULL,
+};
+
+CONFIGFS_ATTR(efct_lio_tpg_, enable);
+static struct configfs_attribute *efct_lio_tpg_attrs[] = {
+				&efct_lio_tpg_attr_enable, NULL };
+CONFIGFS_ATTR(efct_lio_npiv_tpg_, enable);
+static struct configfs_attribute *efct_lio_npiv_tpg_attrs[] = {
+				&efct_lio_npiv_tpg_attr_enable, NULL };
+
+static const struct target_core_fabric_ops efct_lio_ops = {
+	.module				= THIS_MODULE,
+	.fabric_name			= "efct",
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect = efct_lio_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect = efct_lio_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_make_nport,
+	.fabric_drop_wwn		= efct_lio_drop_nport,
+	.fabric_make_tpg		= efct_lio_make_tpg,
+	.fabric_drop_tpg		= efct_lio_drop_tpg,
+	.tpg_check_demo_mode_login_only = efct_lio_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_tpg_base_attrs		= efct_lio_tpg_attrs,
+	.tfc_tpg_attrib_attrs           = efct_lio_tpg_attrib_attrs,
+};
+
+static const struct target_core_fabric_ops efct_lio_npiv_ops = {
+	.module				= THIS_MODULE,
+	.fabric_name			= "efct_npiv",
+	.node_acl_size			= sizeof(struct efct_lio_nacl),
+	.max_data_sg_nents		= 65535,
+	.tpg_get_wwn			= efct_lio_get_npiv_fabric_wwn,
+	.tpg_get_tag			= efct_lio_get_npiv_tag,
+	.fabric_init_nodeacl		= efct_lio_init_nodeacl,
+	.tpg_check_demo_mode		= efct_lio_check_demo_mode,
+	.tpg_check_demo_mode_cache      = efct_lio_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect =
+					efct_lio_npiv_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect =
+					efct_lio_npiv_check_prod_write_protect,
+	.tpg_get_inst_index		= efct_lio_tpg_get_inst_index,
+	.check_stop_free		= efct_lio_check_stop_free,
+	.aborted_task			= efct_lio_aborted_task,
+	.release_cmd			= efct_lio_release_cmd,
+	.close_session			= efct_lio_close_session,
+	.sess_get_index			= efct_lio_sess_get_index,
+	.write_pending			= efct_lio_write_pending,
+	.set_default_node_attributes	= efct_lio_set_default_node_attrs,
+	.get_cmd_state			= efct_lio_get_cmd_state,
+	.queue_data_in			= efct_lio_queue_data_in,
+	.queue_status			= efct_lio_queue_status,
+	.queue_tm_rsp			= efct_lio_queue_tm_rsp,
+	.fabric_make_wwn		= efct_lio_npiv_make_nport,
+	.fabric_drop_wwn		= efct_lio_npiv_drop_nport,
+	.fabric_make_tpg		= efct_lio_npiv_make_tpg,
+	.fabric_drop_tpg		= efct_lio_npiv_drop_tpg,
+	.tpg_check_demo_mode_login_only =
+				efct_lio_npiv_check_demo_mode_login_only,
+	.tpg_check_prot_fabric_only	= NULL,
+	.sess_get_initiator_sid		= NULL,
+	.tfc_tpg_base_attrs		= efct_lio_npiv_tpg_attrs,
+	.tfc_tpg_attrib_attrs		= efct_lio_npiv_tpg_attrib_attrs,
+};
+
+int efct_scsi_tgt_driver_init(void)
+{
+	int rc;
+
+	/* Register the top level struct config_item_type with TCM core */
+	rc = target_register_template(&efct_lio_ops);
+	if (rc < 0) {
+		pr_err("target_fabric_configfs_register failed with %d\n", rc);
+		return rc;
+	}
+	rc = target_register_template(&efct_lio_npiv_ops);
+	if (rc < 0) {
+		pr_err("target_fabric_configfs_register failed with %d\n", rc);
+		target_unregister_template(&efct_lio_ops);
+		return rc;
+	}
+	return EFC_SUCCESS;
+}
+
+int efct_scsi_tgt_driver_exit(void)
+{
+	target_unregister_template(&efct_lio_ops);
+	target_unregister_template(&efct_lio_npiv_ops);
+	return EFC_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_lio.h b/drivers/scsi/elx/efct/efct_lio.h
new file mode 100644
index 000000000000..86337f2417b0
--- /dev/null
+++ b/drivers/scsi/elx/efct/efct_lio.h
@@ -0,0 +1,189 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFCT_LIO_H__
+#define __EFCT_LIO_H__
+
+#include "efct_scsi.h"
+#include <target/target_core_base.h>
+
+#define efct_lio_io_printf(io, fmt, ...)			\
+	efc_log_debug(io->efct,					\
+		"[%s] [%04x][i:%04x t:%04x h:%04x]" fmt,\
+		io->node->display_name, io->instance_index,	\
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag,\
+		##__VA_ARGS__)
+
+#define efct_lio_tmfio_printf(io, fmt, ...)			\
+	efc_log_debug(io->efct,					\
+		"[%s] [%04x][i:%04x t:%04x h:%04x][f:%02x]" fmt,\
+		io->node->display_name, io->instance_index,	\
+		io->init_task_tag, io->tgt_task_tag, io->hw_tag,\
+		io->tgt_io.tmf, ##__VA_ARGS__)
+
+#define efct_set_lio_io_state(io, value) (io->tgt_io.state |= value)
+
+struct efct_lio_wq_data {
+	struct efct		*efct;
+	void			*ptr;
+	struct work_struct	work;
+};
+
+/* Target private efct structure */
+struct efct_scsi_tgt {
+	u32			max_sge;
+	u32			max_sgl;
+
+	/*
+	 * Variables used to send task set full. We are using a high watermark
+	 * method to send task set full. We will reserve a fixed number of IOs
+	 * per initiator plus a fudge factor. Once we reach this number,
+	 * then the target will start sending task set full/busy responses.
+	 */
+	atomic_t		initiator_count;
+	atomic_t		ios_in_use;
+	atomic_t		io_high_watermark;
+
+	atomic_t		watermark_hit;
+	int			watermark_min;
+	int			watermark_max;
+
+	struct efct_lio_nport	*lio_nport;
+	struct efct_lio_tpg	*tpg;
+
+	struct list_head	vport_list;
+	/* Protects vport list*/
+	spinlock_t		efct_lio_lock;
+
+	u64			wwnn;
+};
+
+struct efct_scsi_tgt_nport {
+	struct efct_lio_nport	*lio_nport;
+};
+
+struct efct_node {
+	struct list_head	list_entry;
+	struct kref		ref;
+	void			(*release)(struct kref *arg);
+	struct efct		*efct;
+	struct efc_node		*node;
+	struct se_session	*session;
+	spinlock_t		active_ios_lock;
+	struct list_head	active_ios;
+	char			display_name[EFC_NAME_LENGTH];
+	u32			port_fc_id;
+	u32			node_fc_id;
+	u32			vpi;
+	u32			rpi;
+	u32			abort_cnt;
+};
+
+#define EFCT_LIO_STATE_SCSI_RECV_CMD		(1 << 0)
+#define EFCT_LIO_STATE_TGT_SUBMIT_CMD		(1 << 1)
+#define EFCT_LIO_STATE_TFO_QUEUE_DATA_IN	(1 << 2)
+#define EFCT_LIO_STATE_TFO_WRITE_PENDING	(1 << 3)
+#define EFCT_LIO_STATE_TGT_EXECUTE_CMD		(1 << 4)
+#define EFCT_LIO_STATE_SCSI_SEND_RD_DATA	(1 << 5)
+#define EFCT_LIO_STATE_TFO_CHK_STOP_FREE	(1 << 6)
+#define EFCT_LIO_STATE_SCSI_DATA_DONE		(1 << 7)
+#define EFCT_LIO_STATE_TFO_QUEUE_STATUS		(1 << 8)
+#define EFCT_LIO_STATE_SCSI_SEND_RSP		(1 << 9)
+#define EFCT_LIO_STATE_SCSI_RSP_DONE		(1 << 10)
+#define EFCT_LIO_STATE_TGT_GENERIC_FREE		(1 << 11)
+#define EFCT_LIO_STATE_SCSI_RECV_TMF		(1 << 12)
+#define EFCT_LIO_STATE_TGT_SUBMIT_TMR		(1 << 13)
+#define EFCT_LIO_STATE_TFO_WRITE_PEND_STATUS	(1 << 14)
+#define EFCT_LIO_STATE_TGT_GENERIC_REQ_FAILURE  (1 << 15)
+
+#define EFCT_LIO_STATE_TFO_ABORTED_TASK		(1 << 29)
+#define EFCT_LIO_STATE_TFO_RELEASE_CMD		(1 << 30)
+#define EFCT_LIO_STATE_SCSI_CMPL_CMD		(1u << 31)
+
+struct efct_scsi_tgt_io {
+	struct se_cmd		cmd;
+	unsigned char		sense_buffer[TRANSPORT_SENSE_BUFFER];
+	enum dma_data_direction	ddir;
+	int			task_attr;
+	u64			lun;
+
+	u32			state;
+	u8			tmf;
+	struct efct_io		*io_to_abort;
+	u32			seg_map_cnt;
+	u32			seg_cnt;
+	u32			cur_seg;
+	enum efct_scsi_io_status err;
+	bool			aborting;
+	bool			rsp_sent;
+	u32			transferred_len;
+};
+
+/* Handler return codes */
+enum {
+	SCSI_HANDLER_DATAPHASE_STARTED = 1,
+	SCSI_HANDLER_RESP_STARTED,
+	SCSI_HANDLER_VALIDATED_DATAPHASE_STARTED,
+	SCSI_CMD_NOT_SUPPORTED,
+};
+
+#define WWN_NAME_LEN		32
+struct efct_lio_vport {
+	u64			wwpn;
+	u64			npiv_wwpn;
+	u64			npiv_wwnn;
+	unsigned char		wwpn_str[WWN_NAME_LEN];
+	struct se_wwn		vport_wwn;
+	struct efct_lio_tpg	*tpg;
+	struct efct		*efct;
+	struct Scsi_Host	*shost;
+	struct fc_vport		*fc_vport;
+	atomic_t		enable;
+};
+
+struct efct_lio_nport {
+	u64			wwpn;
+	unsigned char		wwpn_str[WWN_NAME_LEN];
+	struct se_wwn		nport_wwn;
+	struct efct_lio_tpg	*tpg;
+	struct efct		*efct;
+	atomic_t		enable;
+};
+
+struct efct_lio_tpg_attrib {
+	u32			generate_node_acls;
+	u32			cache_dynamic_acls;
+	u32			demo_mode_write_protect;
+	u32			prod_mode_write_protect;
+	u32			demo_mode_login_only;
+	bool			session_deletion_wait;
+};
+
+struct efct_lio_tpg {
+	struct se_portal_group	tpg;
+	struct efct_lio_nport	*nport;
+	struct efct_lio_vport	*vport;
+	struct efct_lio_tpg_attrib tpg_attrib;
+	unsigned short		tpgt;
+	atomic_t		enabled;
+};
+
+struct efct_lio_nacl {
+	u64			nport_wwnn;
+	char			nport_name[WWN_NAME_LEN];
+	struct se_session	*session;
+	struct se_node_acl	se_node_acl;
+};
+
+struct efct_lio_vport_list_t {
+	struct list_head	list_entry;
+	struct efct_lio_vport	*lio_vport;
+};
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+
+#endif /*__EFCT_LIO_H__ */
-- 
2.26.2



* [PATCH v7 26/31] elx: efct: Hardware IO submission routines
@ 2021-01-07 22:59 ` James Smart
From: James Smart @ 2021-01-07 22:59 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines that write an IO to the work queue, and that send single
request-response sequences (SRRS) and raw frames.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.c | 515 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  14 +
 2 files changed, 529 insertions(+)

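A stand-alone sketch of the pending-list policy that efct_hw_wq_write()
implements below (post a WQE directly while free WQ slots remain,
otherwise queue it in FIFO order and drain the backlog as slots return).
All names here are hypothetical and the driver's locking and abort
handling are omitted; this is an illustrative model, not driver code.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of the hw_wq pending-list policy. */
struct sketch_wqe {
	struct sketch_wqe *next;
};

struct sketch_wq {
	struct sketch_wqe *pending_head;
	struct sketch_wqe *pending_tail;
	int free_count;		/* WQ slots currently available */
	int submitted;		/* entries handed to the "hardware" */
};

/* Stand-in for _efct_hw_wq_write(): consumes one free slot. */
static void sketch_submit(struct sketch_wq *wq, struct sketch_wqe *wqe)
{
	(void)wqe;
	wq->free_count--;
	wq->submitted++;
}

static void sketch_wq_write(struct sketch_wq *wq, struct sketch_wqe *wqe)
{
	/* Fast path: nothing pending and a slot is free. */
	if (!wq->pending_head && wq->free_count > 0) {
		sketch_submit(wq, wqe);
		return;
	}

	/* Queue at the tail to preserve submission order. */
	wqe->next = NULL;
	if (wq->pending_tail)
		wq->pending_tail->next = wqe;
	else
		wq->pending_head = wqe;
	wq->pending_tail = wqe;

	/* Drain the backlog in FIFO order while slots remain. */
	while (wq->free_count > 0 && wq->pending_head) {
		struct sketch_wqe *head = wq->pending_head;

		wq->pending_head = head->next;
		if (!wq->pending_head)
			wq->pending_tail = NULL;
		sketch_submit(wq, head);
	}
}
```

Under this policy a WQE never bypasses an earlier pending one, which is
what the real routine guarantees by appending to wq->pending_list before
draining it.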
diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 95522f0226d9..87d18dd50970 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -2518,3 +2518,518 @@ efct_hw_flush(struct efct_hw *hw)
 
 	return EFC_SUCCESS;
 }
+
+int
+efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe)
+{
+	int rc = EFC_SUCCESS;
+	unsigned long flags = 0;
+
+	spin_lock_irqsave(&wq->queue->lock, flags);
+	if (list_empty(&wq->pending_list)) {
+		if (wq->free_count > 0) {
+			rc = _efct_hw_wq_write(wq, wqe);
+		} else {
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+
+		spin_unlock_irqrestore(&wq->queue->lock, flags);
+		return rc;
+	}
+
+	INIT_LIST_HEAD(&wqe->list_entry);
+	list_add_tail(&wqe->list_entry, &wq->pending_list);
+	wq->wq_pending_count++;
+	while (wq->free_count > 0) {
+		if (list_empty(&wq->pending_list))
+			break;
+		wqe = list_first_entry(&wq->pending_list, struct efct_hw_wqe,
+				      list_entry);
+
+		list_del_init(&wqe->list_entry);
+		rc = _efct_hw_wq_write(wq, wqe);
+		if (rc)
+			break;
+
+		if (wqe->abort_wqe_submit_needed) {
+			wqe->abort_wqe_submit_needed = false;
+			efct_hw_fill_abort_wqe(wq->hw, wqe);
+
+			INIT_LIST_HEAD(&wqe->list_entry);
+			list_add_tail(&wqe->list_entry, &wq->pending_list);
+			wq->wq_pending_count++;
+		}
+	}
+
+	spin_unlock_irqrestore(&wq->queue->lock, flags);
+
+	return rc;
+}
+
+int
+efct_efc_bls_send(struct efc *efc, u32 type, struct sli_bls_params *bls)
+{
+	struct efct *efct = efc->base;
+
+	return efct_hw_bls_send(efct, type, bls, NULL, NULL);
+}
+
+int
+efct_hw_bls_send(struct efct *efct, u32 type, struct sli_bls_params *bls_params,
+		 void *cb, void *arg)
+{
+	struct efct_hw *hw = &efct->hw;
+	struct efct_hw_io *hio;
+	struct sli_bls_payload bls;
+	int rc;
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os,
+			      "cannot send BLS, HW state=%d\n", hw->state);
+		return EFC_FAIL;
+	}
+
+	hio = efct_hw_io_alloc(hw);
+	if (!hio) {
+		efc_log_err(hw->os, "HIO allocation failed\n");
+		return EFC_FAIL;
+	}
+
+	hio->done = cb;
+	hio->arg  = arg;
+
+	bls_params->xri = hio->indicator;
+	bls_params->tag = hio->reqtag;
+
+	if (type == FC_RCTL_BA_ACC) {
+		hio->type = EFCT_HW_BLS_ACC;
+		bls.type = SLI4_SLI_BLS_ACC;
+		memcpy(&bls.u.acc, bls_params->payload, sizeof(bls.u.acc));
+	} else {
+		hio->type = EFCT_HW_BLS_RJT;
+		bls.type = SLI4_SLI_BLS_RJT;
+		memcpy(&bls.u.rjt, bls_params->payload, sizeof(bls.u.rjt));
+	}
+
+	bls.ox_id = cpu_to_le16(bls_params->ox_id);
+	bls.rx_id = cpu_to_le16(bls_params->rx_id);
+
+	if (sli_xmit_bls_rsp64_wqe(&hw->sli, hio->wqe.wqebuf,
+				   &bls, bls_params)) {
+		efc_log_err(hw->os, "XMIT_BLS_RSP64 WQE error\n");
+		return EFC_FAIL;
+	}
+
+	hio->xbusy = true;
+
+	/*
+	 * Add IO to active io wqe list before submitting, in case the
+	 * wcqe processing preempts this thread.
+	 */
+	hio->wq->use_count++;
+	rc = efct_hw_wq_write(hio->wq, &hio->wqe);
+	if (rc >= 0) {
+		/* non-negative return is success */
+		rc = EFC_SUCCESS;
+	} else {
+		/* failed to write wqe, remove from active wqe list */
+		efc_log_err(hw->os,
+			     "sli_queue_write failed: %d\n", rc);
+		hio->xbusy = false;
+	}
+
+	return rc;
+}
+
+static int
+efct_els_ssrs_send_cb(struct efct_hw_io *hio, u32 length, int status,
+		      u32 ext_status, void *arg)
+{
+	struct efc_disc_io *io = arg;
+
+	efc_disc_io_complete(io, length, status, ext_status);
+	return EFC_SUCCESS;
+}
+
+static inline void
+efct_fill_els_params(struct efc_disc_io *io, struct sli_els_params *params)
+{
+	u8 *cmd = io->req.virt;
+
+	params->cmd = *cmd;
+	params->s_id = io->s_id;
+	params->d_id = io->d_id;
+	params->ox_id = io->iparam.els.ox_id;
+	params->rpi = io->rpi;
+	params->vpi = io->vpi;
+	params->rpi_registered = io->rpi_registered;
+	params->xmit_len = io->xmit_len;
+	params->rsp_len = io->rsp_len;
+	params->timeout = io->iparam.els.timeout;
+}
+
+static inline void
+efct_fill_ct_params(struct efc_disc_io *io, struct sli_ct_params *params)
+{
+	params->r_ctl = io->iparam.ct.r_ctl;
+	params->type = io->iparam.ct.type;
+	params->df_ctl =  io->iparam.ct.df_ctl;
+	params->d_id = io->d_id;
+	params->ox_id = io->iparam.ct.ox_id;
+	params->rpi = io->rpi;
+	params->vpi = io->vpi;
+	params->rpi_registered = io->rpi_registered;
+	params->xmit_len = io->xmit_len;
+	params->rsp_len = io->rsp_len;
+	params->timeout = io->iparam.ct.timeout;
+}
+
+/**
+ * efct_els_hw_srrs_send() - Send a single request and response cmd.
+ * @efc: efc library structure
+ * @io: Discovery IO used to hold els and ct cmd context.
+ *
+ * This routine supports communication sequences consisting of a single
+ * request and single response between two endpoints. Examples include:
+ *  - Sending an ELS request.
+ *  - Sending an ELS response - To send an ELS response, the caller must provide
+ * the OX_ID from the received request.
+ *  - Sending an FC Common Transport (FC-CT) request - To send an FC-CT request,
+ * the caller must provide the R_CTL, TYPE, and DF_CTL
+ * values to place in the FC frame header.
+ *
+ * Return: Status of the request.
+ */
+int
+efct_els_hw_srrs_send(struct efc *efc, struct efc_disc_io *io)
+{
+	struct efct *efct = efc->base;
+	struct efct_hw_io *hio;
+	struct efct_hw *hw = &efct->hw;
+	struct efc_dma *send = &io->req;
+	struct efc_dma *receive = &io->rsp;
+	struct sli4_sge	*sge = NULL;
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32 len = io->xmit_len;
+	u32 sge0_flags;
+	u32 sge1_flags;
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_debug(hw->os,
+			      "cannot send SRRS, HW state=%d\n", hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	hio = efct_hw_io_alloc(hw);
+	if (!hio) {
+		pr_err("HIO alloc failed\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	hio->done = efct_els_ssrs_send_cb;
+	hio->arg  = io;
+
+	sge = hio->sgl->virt;
+
+	/* clear both SGE */
+	memset(hio->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
+
+	sge0_flags = le32_to_cpu(sge[0].dw2_flags);
+	sge1_flags = le32_to_cpu(sge[1].dw2_flags);
+	if (send->size) {
+		sge[0].buffer_address_high =
+			cpu_to_le32(upper_32_bits(send->phys));
+		sge[0].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(send->phys));
+
+		sge0_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+
+		sge[0].buffer_length = cpu_to_le32(len);
+	}
+
+	if (io->io_type == EFC_DISC_IO_ELS_REQ ||
+		io->io_type == EFC_DISC_IO_CT_REQ) {
+		sge[1].buffer_address_high =
+			cpu_to_le32(upper_32_bits(receive->phys));
+		sge[1].buffer_address_low  =
+			cpu_to_le32(lower_32_bits(receive->phys));
+
+		sge1_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
+		sge1_flags |= SLI4_SGE_LAST;
+
+		sge[1].buffer_length = cpu_to_le32(receive->size);
+	} else {
+		sge0_flags |= SLI4_SGE_LAST;
+	}
+
+	sge[0].dw2_flags = cpu_to_le32(sge0_flags);
+	sge[1].dw2_flags = cpu_to_le32(sge1_flags);
+
+	switch (io->io_type) {
+	case EFC_DISC_IO_ELS_REQ: {
+		struct sli_els_params els_params;
+
+		hio->type = EFCT_HW_ELS_REQ;
+		efct_fill_els_params(io, &els_params);
+		els_params.xri = hio->indicator;
+		els_params.tag = hio->reqtag;
+
+		if (sli_els_request64_wqe(&hw->sli, hio->wqe.wqebuf, hio->sgl,
+					  &els_params)) {
+			efc_log_err(hw->os, "REQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFC_DISC_IO_ELS_RESP: {
+		struct sli_els_params els_params;
+
+		hio->type = EFCT_HW_ELS_RSP;
+		efct_fill_els_params(io, &els_params);
+		els_params.xri = hio->indicator;
+		els_params.tag = hio->reqtag;
+		if (sli_xmit_els_rsp64_wqe(&hw->sli, hio->wqe.wqebuf, send,
+					   &els_params)){
+			efc_log_err(hw->os, "RSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFC_DISC_IO_CT_REQ: {
+		struct sli_ct_params ct_params;
+
+		hio->type = EFCT_HW_FC_CT;
+		efct_fill_ct_params(io, &ct_params);
+		ct_params.xri = hio->indicator;
+		ct_params.tag = hio->reqtag;
+		if (sli_gen_request64_wqe(&hw->sli, hio->wqe.wqebuf, hio->sgl,
+					  &ct_params)){
+			efc_log_err(hw->os, "GEN WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFC_DISC_IO_CT_RESP: {
+		struct sli_ct_params ct_params;
+
+		hio->type = EFCT_HW_FC_CT_RSP;
+		efct_fill_ct_params(io, &ct_params);
+		ct_params.xri = hio->indicator;
+		ct_params.tag = hio->reqtag;
+		if (sli_xmit_sequence64_wqe(&hw->sli, hio->wqe.wqebuf, hio->sgl,
+					    &ct_params)){
+			efc_log_err(hw->os, "XMIT SEQ WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "bad SRRS type %#x\n", io->io_type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (rc == EFCT_HW_RTN_SUCCESS) {
+
+		hio->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		hio->wq->use_count++;
+		rc = efct_hw_wq_write(hio->wq, &hio->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "sli_queue_write failed: %d\n", rc);
+			hio->xbusy = false;
+		}
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
+		struct efct_hw_io *io, union efct_hw_io_param_u *iparam,
+		void *cb, void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	bool send_wqe = true;
+
+	if (!io) {
+		pr_err("bad parm hw=%p io=%p\n", hw, io);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE) {
+		efc_log_err(hw->os, "cannot send IO, HW state=%d\n", hw->state);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+
+	/*
+	 * Save state needed during later stages
+	 */
+	io->type  = type;
+	io->done  = cb;
+	io->arg   = arg;
+
+	/*
+	 * Format the work queue entry used to send the IO
+	 */
+	switch (type) {
+	case EFCT_HW_IO_TARGET_WRITE: {
+		u16 flags = iparam->fcp_tgt.flags;
+		struct fcp_txrdy *xfer = io->xfer_rdy.virt;
+
+		/*
+		 * Fill in the XFER_RDY for IF_TYPE 0 devices
+		 */
+		xfer->ft_data_ro = cpu_to_be32(iparam->fcp_tgt.offset);
+		xfer->ft_burst_len = cpu_to_be32(iparam->fcp_tgt.xmit_len);
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+		iparam->fcp_tgt.xri = io->indicator;
+		iparam->fcp_tgt.tag = io->reqtag;
+
+		if (sli_fcp_treceive64_wqe(&hw->sli, io->wqe.wqebuf,
+					   &io->def_sgl, io->first_data_sge,
+					   SLI4_CQ_DEFAULT,
+					   0, 0, &iparam->fcp_tgt)) {
+			efc_log_err(hw->os, "TRECEIVE WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_READ: {
+		u16 flags = iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		iparam->fcp_tgt.xri = io->indicator;
+		iparam->fcp_tgt.tag = io->reqtag;
+
+		if (sli_fcp_tsend64_wqe(&hw->sli, io->wqe.wqebuf,
+					&io->def_sgl, io->first_data_sge,
+					SLI4_CQ_DEFAULT,
+					0, 0, &iparam->fcp_tgt)) {
+			efc_log_err(hw->os, "TSEND WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	}
+	case EFCT_HW_IO_TARGET_RSP: {
+		u16 flags = iparam->fcp_tgt.flags;
+
+		if (io->xbusy)
+			flags |= SLI4_IO_CONTINUATION;
+		else
+			flags &= ~SLI4_IO_CONTINUATION;
+
+		iparam->fcp_tgt.xri = io->indicator;
+		iparam->fcp_tgt.tag = io->reqtag;
+
+		if (sli_fcp_trsp64_wqe(&hw->sli, io->wqe.wqebuf,
+				       &io->def_sgl, SLI4_CQ_DEFAULT,
+				       0, &iparam->fcp_tgt)) {
+			efc_log_err(hw->os, "TRSP WQE error\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+
+		break;
+	}
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		rc = EFCT_HW_RTN_ERROR;
+	}
+
+	if (send_wqe && rc == EFCT_HW_RTN_SUCCESS) {
+
+		io->xbusy = true;
+
+		/*
+		 * Add IO to active io wqe list before submitting, in case the
+		 * wcqe processing preempts this thread.
+		 */
+		hw->tcmd_wq_submit[io->wq->instance]++;
+		io->wq->use_count++;
+		rc = efct_hw_wq_write(io->wq, &io->wqe);
+		if (rc >= 0) {
+			/* non-negative return is success */
+			rc = 0;
+		} else {
+			/* failed to write wqe, remove from active wqe list */
+			efc_log_err(hw->os,
+				     "sli_queue_write failed: %d\n", rc);
+			io->xbusy = false;
+		}
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma *payload,
+		   struct efct_hw_send_frame_context *ctx,
+		   void (*callback)(void *arg, u8 *cqe, int status),
+		   void *arg)
+{
+	enum efct_hw_rtn rc;
+	struct efct_hw_wqe *wqe;
+	u32 xri;
+	struct hw_wq *wq;
+
+	wqe = &ctx->wqe;
+
+	/* populate the callback object */
+	ctx->hw = hw;
+
+	/* Fetch and populate request tag */
+	ctx->wqcb = efct_hw_reqtag_alloc(hw, callback, arg);
+	if (!ctx->wqcb) {
+		efc_log_err(hw->os, "can't allocate request tag\n");
+		return EFCT_HW_RTN_NO_RESOURCES;
+	}
+
+	wq = hw->hw_wq[0];
+
+	/* Set XRI and RX_ID in the header based on which WQ, and which
+	 * send_frame_io we are using
+	 */
+	xri = wq->send_frame_io->indicator;
+
+	/* Build the send frame WQE */
+	rc = sli_send_frame_wqe(&hw->sli, wqe->wqebuf,
+				sof, eof, (u32 *)hdr, payload, payload->len,
+				EFCT_HW_SEND_FRAME_TIMEOUT, xri,
+				ctx->wqcb->instance_index);
+	if (rc) {
+		efc_log_err(hw->os, "sli_send_frame_wqe failed: %d\n",
+			     rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Write to WQ */
+	rc = efct_hw_wq_write(wq, wqe);
+	if (rc) {
+		efc_log_err(hw->os, "efct_hw_wq_write failed: %d\n", rc);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	wq->use_count++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index bcc8e0443460..a030fc5ac719 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -706,5 +706,19 @@ int
 efct_hw_process(struct efct_hw *hw, u32 vector, u32 max_isr_time_msec);
 int
 efct_hw_queue_hash_find(struct efct_queue_hash *hash, u16 id);
+int efct_hw_wq_write(struct hw_wq *wq, struct efct_hw_wqe *wqe);
+enum efct_hw_rtn
+efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
+		   u8 sof, u8 eof, struct efc_dma *payload,
+		struct efct_hw_send_frame_context *ctx,
+		void (*callback)(void *arg, u8 *cqe, int status),
+		void *arg);
+int
+efct_els_hw_srrs_send(struct efc *efc, struct efc_disc_io *io);
+int
+efct_efc_bls_send(struct efc *efc, u32 type, struct sli_bls_params *bls);
+int
+efct_hw_bls_send(struct efct *efct, u32 type, struct sli_bls_params *bls_params,
+		 void *cb, void *arg);
 
 #endif /* __EFCT_H__ */
-- 
2.26.2



* [PATCH v7 27/31] elx: efct: link and host statistics
@ 2021-01-07 22:59 ` James Smart
From: James Smart @ 2021-01-07 22:59 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to retrieve link statistics and host statistics.
Firmware update helper routines.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.c | 325 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  31 +++
 2 files changed, 356 insertions(+)

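The statistics commands added here share one asynchronous pattern:
efct_hw_get_link_stats() allocates a small efct_hw_link_stat_cb_arg, the
mailbox completion (efct_hw_cb_link_stat()) invokes the stored callback
and frees it, and the request path frees it itself when the command
cannot be issued. A stand-alone sketch of that ownership rule follows;
all names are hypothetical and this is illustrative, not driver code.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical per-request context carried through the async command. */
struct sketch_cb_arg {
	void (*cb)(int status, void *arg);
	void *arg;
};

/* Completion side, mirroring efct_hw_cb_link_stat(): call, then free. */
static void sketch_cmd_complete(int status, struct sketch_cb_arg *cb_arg)
{
	if (!cb_arg)
		return;
	if (cb_arg->cb)
		cb_arg->cb(status, cb_arg->arg);
	free(cb_arg);
}

/* Request side, mirroring efct_hw_get_link_stats(). */
static int sketch_get_stats(void (*cb)(int, void *), void *arg,
			    int (*issue)(struct sketch_cb_arg *))
{
	struct sketch_cb_arg *cb_arg = calloc(1, sizeof(*cb_arg));

	if (!cb_arg)
		return -1;
	cb_arg->cb = cb;
	cb_arg->arg = arg;

	if (issue(cb_arg)) {	/* submit failed: completion never runs */
		free(cb_arg);
		return -1;
	}
	return 0;		/* ownership passed to the completion */
}

/* Tiny harness: "hardware" completes the command synchronously. */
static int sketch_status_seen = -100;

static void sketch_record(int status, void *arg)
{
	(void)arg;
	sketch_status_seen = status;
}

static int sketch_issue_ok(struct sketch_cb_arg *cb_arg)
{
	sketch_cmd_complete(0, cb_arg);
	return 0;
}

static int sketch_issue_fail(struct sketch_cb_arg *cb_arg)
{
	(void)cb_arg;
	return 1;
}
```

The invariant is that exactly one side frees the context: the completion
handler on a submitted command, the submitter on an early failure.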
diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 87d18dd50970..fca9bff358e3 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -8,6 +8,23 @@
 #include "efct_hw.h"
 #include "efct_unsol.h"
 
+struct efct_hw_link_stat_cb_arg {
+	void (*cb)(int status, u32 num_counters,
+		   struct efct_hw_link_stat_counts *counters, void *arg);
+	void *arg;
+};
+
+struct efct_hw_host_stat_cb_arg {
+	void (*cb)(int status, u32 num_counters,
+		struct efct_hw_host_stat_counts *counters, void *arg);
+	void *arg;
+};
+
+struct efct_hw_fw_wr_cb_arg {
+	void (*cb)(int status, u32 bytes_written, u32 change_status, void *arg);
+	void *arg;
+};
+
 struct efct_mbox_rqst_ctx {
 	int (*callback)(struct efc *efc, int status, u8 *mqe, void *arg);
 	void *arg;
@@ -3033,3 +3050,311 @@ efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
 
 	return EFCT_HW_RTN_SUCCESS;
 }
+
+static int
+efct_hw_cb_link_stat(struct efct_hw *hw, int status,
+		     u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_link_stats *mbox_rsp;
+	struct efct_hw_link_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_link_stat_counts counts[EFCT_HW_LINK_STAT_MAX];
+	u32 num_counters, i;
+	u32 mbox_rsp_flags = 0;
+
+	mbox_rsp = (struct sli4_cmd_read_link_stats *)mqe;
+	mbox_rsp_flags = le32_to_cpu(mbox_rsp->dw1_flags);
+	num_counters = (mbox_rsp_flags & SLI4_READ_LNKSTAT_GEC) ? 20 : 13;
+	memset(counts, 0, sizeof(struct efct_hw_link_stat_counts) *
+				 EFCT_HW_LINK_STAT_MAX);
+
+	/* Fill overflow counts, mask starts from SLI4_READ_LNKSTAT_W02OF*/
+	for (i = 0; i < EFCT_HW_LINK_STAT_MAX; i++)
+		counts[i].overflow = (mbox_rsp_flags & (1 << (i + 2)));
+
+	counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->linkfail_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssync_errcnt);
+	counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->losssignal_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->primseq_errcnt);
+	counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->inval_txword_errcnt);
+	counts[EFCT_HW_LINK_STAT_CRC_COUNT].counter =
+		le32_to_cpu(mbox_rsp->crc_errcnt);
+	counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].counter =
+		le32_to_cpu(mbox_rsp->primseq_eventtimeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->elastic_bufoverrun_errcnt);
+	counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->arbit_fc_al_timeout_cnt);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_rx_buftor_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_rx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->adv_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].counter =
+		 le32_to_cpu(mbox_rsp->curr_tx_buf_to_buf_credit);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofa_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofdti_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_eofni_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_soff_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_aer_cnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_rpi_rescnt);
+	counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->rx_dropped_no_avail_xri_rescnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw, u8 req_ext_counters,
+		       u8 clear_overflow_flags, u8 clear_all_counters,
+		       void (*cb)(int status, u32 num_counters,
+				  struct efct_hw_link_stat_counts *counters,
+				  void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_link_stat_cb_arg *cb_arg;
+	u8 mbxdata[SLI4_BMBX_SIZE];
+
+	cb_arg = kzalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command */
+	if (!sli_cmd_read_link_stats(&hw->sli, mbxdata, req_ext_counters,
+				    clear_overflow_flags, clear_all_counters))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_link_stat, cb_arg);
+
+	if (rc)
+		kfree(cb_arg);
+
+	return rc;
+}
+
+static int
+efct_hw_cb_host_stat(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_read_status *mbox_rsp =
+					(struct sli4_cmd_read_status *)mqe;
+	struct efct_hw_host_stat_cb_arg *cb_arg = arg;
+	struct efct_hw_host_stat_counts counts[EFCT_HW_HOST_STAT_MAX];
+	u32 num_counters = EFCT_HW_HOST_STAT_MAX;
+
+	memset(counts, 0, sizeof(struct efct_hw_host_stat_counts) *
+		   EFCT_HW_HOST_STAT_MAX);
+
+	counts[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_kbyte_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_frame_cnt);
+	counts[EFCT_HW_HOST_STAT_TX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->trans_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_SEQ_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_seq_cnt);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_orig);
+	counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP].counter =
+		 le32_to_cpu(mbox_rsp->tot_exchanges_resp);
+	counts[EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_p_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_RX_F_BSY_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->recv_f_bsy_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_rq_buf_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_rq_timeout_cnt);
+	counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->no_xri_dropped_frames_cnt);
+	counts[EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT].counter =
+		 le32_to_cpu(mbox_rsp->empty_xri_pool_cnt);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+				status = le16_to_cpu(mbox_rsp->hdr.status);
+			cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
+		       void (*cb)(int status, u32 num_counters,
+				  struct efct_hw_host_stat_counts *counters,
+				  void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	struct efct_hw_host_stat_cb_arg *cb_arg;
+	u8 mbxdata[SLI4_BMBX_SIZE];
+
+	cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+	if (!cb_arg)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Send the HW command to get the host stats */
+	if (!sli_cmd_read_status(&hw->sli, mbxdata, cc))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_host_stat, cb_arg);
+
+	if (rc) {
+		efc_log_debug(hw->os, "READ_HOST_STATS failed\n");
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
+
+struct efct_hw_async_call_ctx {
+	efct_hw_async_cb_t callback;
+	void *arg;
+	u8 cmd[SLI4_BMBX_SIZE];
+};
+
+static void
+efct_hw_async_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+	struct efct_hw_async_call_ctx *ctx = arg;
+
+	if (ctx) {
+		if (ctx->callback)
+			(*ctx->callback)(hw, status, mqe, ctx->arg);
+
+		kfree(ctx);
+	}
+}
+
+int
+efct_hw_async_call(struct efct_hw *hw, efct_hw_async_cb_t callback, void *arg)
+{
+	struct efct_hw_async_call_ctx *ctx;
+
+	/*
+	 * Allocate a callback context (which includes the mbox cmd buffer);
+	 * it must persist because the mbox cmd submission may be
+	 * queued and executed later.
+	 */
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	ctx->callback = callback;
+	ctx->arg = arg;
+
+	/* Build and send a NOP mailbox command */
+	if (sli_cmd_common_nop(&hw->sli, ctx->cmd, 0)) {
+		efc_log_err(hw->os, "COMMON_NOP format failure\n");
+		kfree(ctx);
+		return EFC_FAIL;
+	}
+
+	if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
+			    ctx)) {
+		efc_log_err(hw->os, "COMMON_NOP command failure\n");
+		kfree(ctx);
+		return EFC_FAIL;
+	}
+	return EFC_SUCCESS;
+}
+
+static int
+efct_hw_cb_fw_write(struct efct_hw *hw, int status, u8 *mqe, void  *arg)
+{
+	struct sli4_cmd_sli_config *mbox_rsp =
+					(struct sli4_cmd_sli_config *)mqe;
+	struct sli4_rsp_cmn_write_object *wr_obj_rsp;
+	struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
+	u32 bytes_written;
+	u16 mbox_status;
+	u32 change_status;
+
+	wr_obj_rsp = (struct sli4_rsp_cmn_write_object *)
+		      &mbox_rsp->payload.embed;
+	bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
+	mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
+	change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
+			 RSP_CHANGE_STATUS);
+
+	if (cb_arg) {
+		if (cb_arg->cb) {
+			if (!status && mbox_status)
+				status = mbox_status;
+			cb_arg->cb(status, bytes_written, change_status,
+				   cb_arg->arg);
+		}
+
+		kfree(cb_arg);
+	}
+
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma, u32 size,
+		       u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				   u32 change_status, void *arg),
+		       void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	u8 mbxdata[SLI4_BMBX_SIZE];
+	struct efct_hw_fw_wr_cb_arg *cb_arg;
+	int noc = 0;
+
+	cb_arg = kzalloc(sizeof(*cb_arg), GFP_KERNEL);
+	if (!cb_arg)
+		return EFCT_HW_RTN_NO_MEMORY;
+
+	cb_arg->cb = cb;
+	cb_arg->arg = arg;
+
+	/* Write a portion of a firmware image to the device */
+	if (!sli_cmd_common_write_object(&hw->sli, mbxdata,
+					noc, last, size, offset, "/prg/",
+					dma))
+		rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+				     efct_hw_cb_fw_write, cb_arg);
+
+	if (rc != EFCT_HW_RTN_SUCCESS) {
+		efc_log_debug(hw->os, "COMMON_WRITE_OBJECT failed\n");
+		kfree(cb_arg);
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index a030fc5ac719..0dc7e210c86a 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -721,4 +721,35 @@ int
 efct_hw_bls_send(struct efct *efct, u32 type, struct sli_bls_params *bls_params,
 		 void *cb, void *arg);
 
+/* Function for retrieving link statistics */
+enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw,
+		       u8 req_ext_counters,
+		       u8 clear_overflow_flags,
+		       u8 clear_all_counters,
+		       void (*cb)(int status,
+				  u32 num_counters,
+				  struct efct_hw_link_stat_counts *counters,
+				  void *arg),
+		       void *arg);
+/* Function for retrieving host statistics */
+enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw,
+		       u8 cc,
+		       void (*cb)(int status,
+				  u32 num_counters,
+				  struct efct_hw_host_stat_counts *counters,
+				  void *arg),
+		       void *arg);
+enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
+		       u32 size, u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				  u32 change_status, void *arg),
+		       void *arg);
+typedef void (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
+				  u8 *mqe, void *arg);
+int
+efct_hw_async_call(struct efct_hw *hw, efct_hw_async_cb_t callback, void *arg);
+
 #endif /* __EFCT_H__ */
-- 
2.26.2
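As an aside, the overflow handling in efct_hw_cb_link_stat() above is easy to check in isolation. The sketch below is a hedged userspace reconstruction: `EFCT_HW_LINK_STAT_MAX` is hardcoded to the 20 extended counters, the result is normalized to 0/1 (the driver stores the raw masked value), and a plain array stands in for the `counts[]` structs.

```c
#include <assert.h>
#include <stdint.h>

#define EFCT_HW_LINK_STAT_MAX 20	/* assumption: extended-counter count */

/* Word 1 of the READ_LINK_STATS response carries one overflow bit per
 * counter, starting at bit 2 (SLI4_READ_LNKSTAT_W02OF in the driver). */
static void fill_overflow(uint32_t dw1_flags,
			  uint8_t overflow[EFCT_HW_LINK_STAT_MAX])
{
	int i;

	for (i = 0; i < EFCT_HW_LINK_STAT_MAX; i++)
		overflow[i] = (dw1_flags >> (i + 2)) & 1;
}
```

With bits 2 and 5 set in the flags word, counters 0 and 3 report overflow and the rest do not, matching the `1 << (i + 2)` masking in the callback.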


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 28/31] elx: efct: xport and hardware teardown routines
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (26 preceding siblings ...)
  2021-01-07 22:59 ` [PATCH v7 27/31] elx: efct: link and host statistics James Smart
@ 2021-01-07 22:59 ` James Smart
  2021-01-07 22:59 ` [PATCH v7 29/31] elx: efct: scsi_transport_fc host interface support James Smart
                   ` (2 subsequent siblings)
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:59 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines to detach xport and hardware objects.
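One lifetime detail in the efct_xport_control() code below is worth calling out: the POST_NODE_EVENT payload starts with a refcount of two (one for the waiter, one for the mailbox callback), and whichever side drops the last reference frees it. The following is a small userspace sketch of that pattern, with C11 `atomic_fetch_sub` standing in for the kernel's `atomic_sub_and_test()` and a `freed` flag standing in for `kfree()`; names here are illustrative, not driver API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct payload {
	atomic_int refcnt;
	bool freed;		/* stands in for kfree() */
};

/* Drop one reference; return true when the count hits zero, i.e. when
 * this caller is responsible for freeing the payload. */
static bool payload_put(struct payload *p)
{
	if (atomic_fetch_sub(&p->refcnt, 1) == 1) {
		p->freed = true;
		return true;
	}
	return false;
}

static int demo(void)
{
	struct payload p = { .freed = false };

	atomic_init(&p.refcnt, 2);	/* one for the waiter, one for the callback */

	bool cb_freed = payload_put(&p);	/* mailbox callback drops its ref */
	bool waiter_freed = payload_put(&p);	/* waiter drops its ref afterwards */

	assert(!cb_freed);			/* first put must not free... */
	assert(waiter_freed && p.freed);	/* ...only the last one does */
	return 0;
}
```

This is why the ordering of the puts does not matter: whether the callback or the waiter finishes first, exactly one of them frees the payload.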

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_hw.c    | 277 +++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |  27 +++
 drivers/scsi/elx/efct/efct_xport.c | 261 +++++++++++++++++++++++++++
 3 files changed, 565 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index fca9bff358e3..8486dfaf346f 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -3358,3 +3358,280 @@ efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma, u32 size,
 
 	return rc;
 }
+
+static int
+efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
+			void  *arg)
+{
+	return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		     void (*cb)(int status, uintptr_t value, void *arg),
+		     void *arg)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+	u8 link[SLI4_BMBX_SIZE];
+	u32 speed = 0;
+	u8 reset_alpa = 0;
+
+	switch (ctrl) {
+	case EFCT_HW_PORT_INIT:
+		if (!sli_cmd_config_link(&hw->sli, link))
+			rc = efct_hw_command(hw, link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+
+		if (rc != EFCT_HW_RTN_SUCCESS) {
+			efc_log_err(hw->os, "CONFIG_LINK failed\n");
+			break;
+		}
+		speed = hw->config.speed;
+		reset_alpa = (u8)(value & 0xff);
+
+		rc = EFCT_HW_RTN_ERROR;
+		if (!sli_cmd_init_link(&hw->sli, link, speed, reset_alpa))
+			rc = efct_hw_command(hw, link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc)
+			efc_log_err(hw->os, "INIT_LINK failed\n");
+		break;
+
+	case EFCT_HW_PORT_SHUTDOWN:
+		if (!sli_cmd_down_link(&hw->sli, link))
+			rc = efct_hw_command(hw, link, EFCT_CMD_NOWAIT,
+					     efct_hw_cb_port_control, NULL);
+		/* Free buffer on error, since no callback is coming */
+		if (rc)
+			efc_log_err(hw->os, "DOWN_LINK failed\n");
+		break;
+
+	default:
+		efc_log_debug(hw->os, "unhandled control %#x\n", ctrl);
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_teardown(struct efct_hw *hw)
+{
+	u32	i = 0;
+	u32	iters = 10;
+	u32 destroy_queues;
+	u32 free_memory;
+	struct efc_dma *dma;
+	struct efct *efct = hw->os;
+
+	destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+	free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+
+	/* Cancel Sliport Healthcheck */
+	if (hw->sliport_healthcheck) {
+		hw->sliport_healthcheck = 0;
+		efct_hw_config_sli_port_health_check(hw, 0, 0);
+	}
+
+	if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+
+		efct_hw_flush(hw);
+
+		/*
+		 * If there are outstanding commands, wait for them to complete
+		 */
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some cmds still pending on MQ queue\n");
+
+		/* Cancel any remaining commands */
+		efct_hw_command_cancel(hw);
+	} else {
+		hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+	}
+
+	dma_free_coherent(&efct->pci->dev,
+			  hw->rnode_mem.size, hw->rnode_mem.virt,
+			  hw->rnode_mem.phys);
+	memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
+
+	if (hw->io) {
+		for (i = 0; i < hw->config.n_io; i++) {
+			if (hw->io[i] && hw->io[i]->sgl &&
+			    hw->io[i]->sgl->virt) {
+				dma_free_coherent(&efct->pci->dev,
+						  hw->io[i]->sgl->size,
+						  hw->io[i]->sgl->virt,
+						  hw->io[i]->sgl->phys);
+			}
+			kfree(hw->io[i]);
+			hw->io[i] = NULL;
+		}
+		kfree(hw->io);
+		hw->io = NULL;
+		kfree(hw->wqe_buffs);
+		hw->wqe_buffs = NULL;
+	}
+
+	dma = &hw->xfer_rdy;
+	dma_free_coherent(&efct->pci->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	dma = &hw->loop_map;
+	dma_free_coherent(&efct->pci->dev,
+			  dma->size, dma->virt, dma->phys);
+	memset(dma, 0, sizeof(struct efc_dma));
+
+	for (i = 0; i < hw->wq_count; i++)
+		sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->rq_count; i++)
+		sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->mq_count; i++)
+		sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->cq_count; i++)
+		sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
+			       free_memory);
+
+	for (i = 0; i < hw->eq_count; i++)
+		sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
+			       free_memory);
+
+	/* Free rq buffers */
+	efct_hw_rx_free(hw);
+
+	efct_hw_queue_teardown(hw);
+
+	kfree(hw->wq_cpu_array);
+
+	sli_teardown(&hw->sli);
+
+	/* record the fact that the queues are non-functional */
+	hw->state = EFCT_HW_STATE_UNINITIALIZED;
+
+	/* free sequence free pool */
+	kfree(hw->seq_pool);
+	hw->seq_pool = NULL;
+
+	/* free hw_wq_callback pool */
+	efct_hw_reqtag_pool_free(hw);
+
+	mempool_destroy(hw->cmd_ctx_pool);
+	mempool_destroy(hw->mbox_rqst_pool);
+
+	/* Mark HW setup as not having been called */
+	hw->hw_setup_called = false;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_sli_reset(struct efct_hw *hw, enum efct_hw_reset reset,
+		  enum efct_hw_state prev_state)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+	switch (reset) {
+	case EFCT_HW_RESET_FUNCTION:
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	case EFCT_HW_RESET_FIRMWARE:
+		efc_log_debug(hw->os, "issuing firmware reset\n");
+		if (sli_fw_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_soft_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		/*
+		 * Because the FW reset leaves the FW in a non-running state,
+		 * follow that with a regular reset.
+		 */
+		efc_log_debug(hw->os, "issuing function level reset\n");
+		if (sli_reset(&hw->sli)) {
+			efc_log_err(hw->os, "sli_reset failed\n");
+			rc = EFCT_HW_RTN_ERROR;
+		}
+		break;
+	default:
+		efc_log_err(hw->os,
+			     "unknown reset type - no reset performed\n");
+		hw->state = prev_state;
+		rc = EFCT_HW_RTN_INVALID_ARG;
+		break;
+	}
+
+	return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset)
+{
+	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+	u32 iters;
+	enum efct_hw_state prev_state = hw->state;
+
+	if (hw->state != EFCT_HW_STATE_ACTIVE)
+		efc_log_debug(hw->os,
+			      "HW state %d is not active\n", hw->state);
+
+	hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
+
+	/*
+	 * If the prev_state is already reset/teardown in progress,
+	 * don't continue further
+	 */
+	if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
+	    prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
+		return efct_hw_sli_reset(hw, reset, prev_state);
+
+	if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+		efct_hw_flush(hw);
+
+		/*
+		 * If a mailbox command requiring a DMA is outstanding
+		 * (SFP/DDM), then the FW will UE when the reset is issued.
+		 * So attempt to complete all mailbox commands.
+		 */
+		iters = 10;
+		while (!list_empty(&hw->cmd_head) && iters) {
+			mdelay(10);
+			efct_hw_flush(hw);
+			iters--;
+		}
+
+		if (list_empty(&hw->cmd_head))
+			efc_log_debug(hw->os,
+				       "All commands completed on MQ queue\n");
+		else
+			efc_log_debug(hw->os,
+				       "Some commands still pending on MQ queue\n");
+	}
+
+	/* Reset the chip */
+	rc = efct_hw_sli_reset(hw, reset, prev_state);
+	if (rc == EFCT_HW_RTN_INVALID_ARG)
+		return EFCT_HW_RTN_ERROR;
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 0dc7e210c86a..a87c78d01e18 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -752,4 +752,31 @@ typedef void (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
 int
 efct_hw_async_call(struct efct_hw *hw, efct_hw_async_cb_t callback, void *arg);
 
+struct hw_eq *efct_hw_new_eq(struct efct_hw *hw, u32 entry_count);
+struct hw_cq *efct_hw_new_cq(struct hw_eq *eq, u32 entry_count);
+u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count);
+struct hw_mq *efct_hw_new_mq(struct hw_cq *cq, u32 entry_count);
+struct hw_wq *
+efct_hw_new_wq(struct hw_cq *cq, u32 entry_count);
+u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count);
+void efct_hw_del_eq(struct hw_eq *eq);
+void efct_hw_del_cq(struct hw_cq *cq);
+void efct_hw_del_mq(struct hw_mq *mq);
+void efct_hw_del_wq(struct hw_wq *wq);
+void efct_hw_del_rq(struct hw_rq *rq);
+void efct_hw_queue_teardown(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_teardown(struct efct_hw *hw);
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset);
+
+enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		     void (*cb)(int status, uintptr_t value, void *arg),
+		     void *arg);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index b19304d6febb..0cf558c03547 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -517,3 +517,264 @@ efct_scsi_release_fc_transport(void)
 
 	return EFC_SUCCESS;
 }
+
+int
+efct_xport_detach(struct efct_xport *xport)
+{
+	struct efct *efct = xport->efct;
+
+	/* free resources associated with target-server and initiator-client */
+	efct_scsi_tgt_del_device(efct);
+
+	efct_scsi_del_device(efct);
+
+	/* Shutdown FC statistics timer */
+	if (timer_pending(&xport->stats_timer))
+		del_timer(&xport->stats_timer);
+
+	efct_hw_teardown(&efct->hw);
+
+	efct_xport_delete_debugfs(efct);
+
+	return EFC_SUCCESS;
+}
+
+static void
+efct_xport_domain_free_cb(struct efc *efc, void *arg)
+{
+	struct completion *done = arg;
+
+	complete(done);
+}
+
+static void
+efct_xport_post_node_event_cb(struct efct_hw *hw, int status,
+			      u8 *mqe, void *arg)
+{
+	struct efct_xport_post_node_event *payload = arg;
+
+	if (payload) {
+		efc_node_post_shutdown(payload->node, payload->evt,
+				       payload->context);
+		complete(&payload->done);
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+	}
+}
+
+int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...)
+{
+	u32 rc = 0;
+	struct efct *efct = NULL;
+	va_list argp;
+
+	efct = xport->efct;
+
+	switch (cmd) {
+	case EFCT_XPORT_PORT_ONLINE: {
+		/* Bring the port on-line */
+		rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
+					  NULL, NULL);
+		if (rc)
+			efc_log_err(efct,
+				     "%s: Can't init port\n", efct->desc);
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+	case EFCT_XPORT_PORT_OFFLINE: {
+		if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+					 NULL, NULL))
+			efc_log_err(efct, "port shutdown failed\n");
+		else
+			xport->configured_link_state = cmd;
+		break;
+	}
+
+	case EFCT_XPORT_SHUTDOWN: {
+		struct completion done;
+		unsigned long timeout;
+
+		/* if a PHYSDEV reset was performed (e.g. hw dump), it affects
+		 * all PCI functions; an orderly shutdown won't work, so
+		 * just force-free
+		 */
+		if (sli_reset_required(&efct->hw.sli)) {
+			struct efc_domain *domain = efct->efcport->domain;
+
+			if (domain)
+				efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST,
+					      domain);
+		} else {
+			efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN,
+					     0, NULL, NULL);
+		}
+
+		init_completion(&done);
+
+		efc_register_domain_free_cb(efct->efcport,
+					efct_xport_domain_free_cb, &done);
+
+		efc_log_debug(efct, "Waiting %d seconds for domain shutdown\n",
+				(EFC_SHUTDOWN_TIMEOUT_USEC / 1000000));
+
+		timeout = usecs_to_jiffies(EFC_SHUTDOWN_TIMEOUT_USEC);
+		if (!wait_for_completion_timeout(&done, timeout)) {
+			efc_log_err(efct, "Domain shutdown timed out!!\n");
+			WARN_ON(1);
+		}
+
+		efc_register_domain_free_cb(efct->efcport, NULL, NULL);
+
+		/* Free up any saved virtual ports */
+		efc_vport_del_all(efct->efcport);
+		break;
+	}
+
+	/*
+	 * POST_NODE_EVENT:  post an event to a node object
+	 *
+	 * This transport function is used to post an event to a node object.
+	 * It does this by submitting a NOP mailbox command to defer execution
+	 * to the interrupt context (thereby enforcing the serialized execution
+	 * of event posting to the node state machine instances)
+	 */
+	case EFCT_XPORT_POST_NODE_EVENT: {
+		struct efc_node *node;
+		u32	evt;
+		void *context;
+		struct efct_xport_post_node_event *payload = NULL;
+		struct efct_hw *hw;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		node = va_arg(argp, struct efc_node *);
+		evt = va_arg(argp, u32);
+		context = va_arg(argp, void *);
+		va_end(argp);
+
+		payload = kzalloc(sizeof(*payload), GFP_KERNEL);
+		if (!payload)
+			return EFC_FAIL;
+
+		hw = &efct->hw;
+
+		/* if node's state machine is disabled,
+		 * don't bother continuing
+		 */
+		if (!node->sm.current_state) {
+			efc_log_debug(efct, "node %p state machine disabled\n",
+				      node);
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Setup payload */
+		init_completion(&payload->done);
+
+		/* one for self and one for callback */
+		atomic_set(&payload->refcnt, 2);
+		payload->node = node;
+		payload->evt = evt;
+		payload->context = context;
+
+		if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
+				       payload)) {
+			efc_log_debug(efct, "efct_hw_async_call failed\n");
+			kfree(payload);
+			rc = -1;
+			break;
+		}
+
+		/* Wait for completion */
+		if (wait_for_completion_interruptible(&payload->done)) {
+			efc_log_debug(efct,
+				      "POST_NODE_EVENT: completion failed\n");
+			rc = -1;
+		}
+		if (atomic_sub_and_test(1, &payload->refcnt))
+			kfree(payload);
+
+		break;
+	}
+	/*
+	 * Set wwnn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWNN_SET: {
+		u64 wwnn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwnn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWNN %016llx\n", wwnn);
+		xport->req_wwnn = wwnn;
+
+		break;
+	}
+	/*
+	 * Set wwpn for the port. This will be used instead of the default
+	 * provided by FW.
+	 */
+	case EFCT_XPORT_WWPN_SET: {
+		u64 wwpn;
+
+		/* Retrieve arguments */
+		va_start(argp, cmd);
+		wwpn = va_arg(argp, uint64_t);
+		va_end(argp);
+
+		efc_log_debug(efct, " WWPN %016llx\n", wwpn);
+		xport->req_wwpn = wwpn;
+
+		break;
+	}
+
+	default:
+		break;
+	}
+	return rc;
+}
+
+void
+efct_xport_free(struct efct_xport *xport)
+{
+	if (xport) {
+		efct_io_pool_free(xport->io_pool);
+
+		kfree(xport);
+	}
+}
+
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template)
+{
+	if (transport_template)
+		pr_err("releasing transport layer\n");
+
+	/* Releasing FC transport */
+	fc_release_transport(transport_template);
+}
+
+static void
+efct_xport_remove_host(struct Scsi_Host *shost)
+{
+	fc_remove_host(shost);
+}
+
+int efct_scsi_del_device(struct efct *efct)
+{
+	if (efct->shost) {
+		efc_log_debug(efct, "Unregistering with Transport Layer\n");
+		efct_xport_remove_host(efct->shost);
+		efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
+		scsi_remove_host(efct->shost);
+		scsi_host_put(efct->shost);
+		efct->shost = NULL;
+	}
+	return EFC_SUCCESS;
+}
-- 
2.26.2



* [PATCH v7 29/31] elx: efct: scsi_transport_fc host interface support
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (27 preceding siblings ...)
  2021-01-07 22:59 ` [PATCH v7 28/31] elx: efct: xport and hardware teardown routines James Smart
@ 2021-01-07 22:59 ` James Smart
  2021-01-07 22:59 ` [PATCH v7 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
  2021-01-07 22:59 ` [PATCH v7 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:59 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Daniel Wagner

This patch continues the efct driver population.

This patch adds driver definitions for:
Integration with the scsi_fc_transport host interfaces
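One of the callbacks this patch wires into the fc_function_template, efct_get_host_speed(), maps the raw link rate in Mb/s to an FC transport port-speed code. The sketch below reproduces that mapping in isolation; the enum values here are local stand-ins for the kernel's FC_PORTSPEED_* flags (which are bit masks in scsi/scsi_transport_fc.h), so the names are illustrative only.

```c
#include <assert.h>

/* Local stand-ins for the kernel's FC_PORTSPEED_* flags. */
enum port_speed {
	PORTSPEED_UNKNOWN = 0,
	PORTSPEED_1GBIT,
	PORTSPEED_2GBIT,
	PORTSPEED_4GBIT,
	PORTSPEED_8GBIT,
	PORTSPEED_10GBIT,
	PORTSPEED_16GBIT,
	PORTSPEED_32GBIT,
};

/* Map a raw link speed in Mb/s (as reported via EFCT_XPORT_LINK_SPEED)
 * to a port-speed code; unrecognized rates remain UNKNOWN. */
static enum port_speed map_link_speed(unsigned int mbps)
{
	switch (mbps) {
	case 1000:  return PORTSPEED_1GBIT;
	case 2000:  return PORTSPEED_2GBIT;
	case 4000:  return PORTSPEED_4GBIT;
	case 8000:  return PORTSPEED_8GBIT;
	case 10000: return PORTSPEED_10GBIT;
	case 16000: return PORTSPEED_16GBIT;
	case 32000: return PORTSPEED_32GBIT;
	default:    return PORTSPEED_UNKNOWN;
	}
}
```

Defaulting to UNKNOWN mirrors the driver's behavior of initializing fc_speed to FC_PORTSPEED_UNKNOWN and only overwriting it for rates the switch recognizes.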

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/efct/efct_xport.c | 500 +++++++++++++++++++++++++++++
 1 file changed, 500 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index 0cf558c03547..bee13337afbc 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -778,3 +778,503 @@ int efct_scsi_del_device(struct efct *efct)
 	}
 	return EFC_SUCCESS;
 }
+
+static void
+efct_get_host_port_id(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	struct efc_nport *nport;
+
+	if (efc->domain && efc->domain->nport) {
+		nport = efc->domain->nport;
+		fc_host_port_id(shost) = nport->fc_id;
+	}
+}
+
+static void
+efct_get_host_port_type(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	int type = FC_PORTTYPE_UNKNOWN;
+
+	if (efc->domain && efc->domain->nport) {
+		if (efc->domain->is_loop) {
+			type = FC_PORTTYPE_LPORT;
+		} else {
+			struct efc_nport *nport = efc->domain->nport;
+
+			if (nport->is_vport)
+				type = FC_PORTTYPE_NPIV;
+			else if (nport->topology == EFC_NPORT_TOPO_P2P)
+				type = FC_PORTTYPE_PTP;
+			else if (nport->topology == EFC_NPORT_TOPO_UNKNOWN)
+				type = FC_PORTTYPE_UNKNOWN;
+			else
+				type = FC_PORTTYPE_NPORT;
+		}
+	}
+	fc_host_port_type(shost) = type;
+}
+
+static void
+efct_get_host_vport_type(struct Scsi_Host *shost)
+{
+	fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
+}
+
+static void
+efct_get_host_port_state(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+
+	if (efc->domain)
+		fc_host_port_state(shost) = FC_PORTSTATE_ONLINE;
+	else
+		fc_host_port_state(shost) = FC_PORTSTATE_OFFLINE;
+}
+
+static void
+efct_get_host_speed(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+	union efct_xport_stats_u speed;
+	u32 fc_speed = FC_PORTSPEED_UNKNOWN;
+	int rc;
+
+	if (!efc->domain || !efc->domain->nport) {
+		fc_host_speed(shost) = fc_speed;
+		return;
+	}
+
+	rc = efct_xport_status(efct->xport, EFCT_XPORT_LINK_SPEED, &speed);
+	if (rc == 0) {
+		switch (speed.value) {
+		case 1000:
+			fc_speed = FC_PORTSPEED_1GBIT;
+			break;
+		case 2000:
+			fc_speed = FC_PORTSPEED_2GBIT;
+			break;
+		case 4000:
+			fc_speed = FC_PORTSPEED_4GBIT;
+			break;
+		case 8000:
+			fc_speed = FC_PORTSPEED_8GBIT;
+			break;
+		case 10000:
+			fc_speed = FC_PORTSPEED_10GBIT;
+			break;
+		case 16000:
+			fc_speed = FC_PORTSPEED_16GBIT;
+			break;
+		case 32000:
+			fc_speed = FC_PORTSPEED_32GBIT;
+			break;
+		}
+	}
+
+	fc_host_speed(shost) = fc_speed;
+}
+
+static void
+efct_get_host_fabric_name(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	struct efc *efc = efct->efcport;
+
+	if (efc->domain) {
+		struct fc_els_flogi  *sp =
+			(struct fc_els_flogi  *)
+				efc->domain->flogi_service_params;
+
+		fc_host_fabric_name(shost) = be64_to_cpu(sp->fl_wwnn);
+	}
+}
+
+static struct fc_host_statistics *
+efct_get_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	union efct_xport_stats_u stats;
+	struct efct_xport *xport = efct->xport;
+	u32 rc = 1;
+
+	rc = efct_xport_status(xport, EFCT_XPORT_LINK_STATISTICS, &stats);
+	if (rc != 0) {
+		pr_err("efct_xport_status returned non 0 - %d\n", rc);
+		return NULL;
+	}
+
+	vport->fc_host_stats.loss_of_sync_count =
+		stats.stats.link_stats.loss_of_sync_error_count;
+	vport->fc_host_stats.link_failure_count =
+		stats.stats.link_stats.link_failure_error_count;
+	vport->fc_host_stats.prim_seq_protocol_err_count =
+		stats.stats.link_stats.primitive_sequence_error_count;
+	vport->fc_host_stats.invalid_tx_word_count =
+		stats.stats.link_stats.invalid_transmission_word_error_count;
+	vport->fc_host_stats.invalid_crc_count =
+		stats.stats.link_stats.crc_error_count;
+	/* mbox returns kbyte count so we need to convert to words */
+	vport->fc_host_stats.tx_words =
+		stats.stats.host_stats.transmit_kbyte_count * 256;
+	/* mbox returns kbyte count so we need to convert to words */
+	vport->fc_host_stats.rx_words =
+		stats.stats.host_stats.receive_kbyte_count * 256;
+	vport->fc_host_stats.tx_frames =
+		stats.stats.host_stats.transmit_frame_count;
+	vport->fc_host_stats.rx_frames =
+		stats.stats.host_stats.receive_frame_count;
+
+	vport->fc_host_stats.fcp_input_requests =
+			xport->fcp_stats.input_requests;
+	vport->fc_host_stats.fcp_output_requests =
+			xport->fcp_stats.output_requests;
+	vport->fc_host_stats.fcp_output_megabytes =
+			xport->fcp_stats.output_bytes >> 20;
+	vport->fc_host_stats.fcp_input_megabytes =
+			xport->fcp_stats.input_bytes >> 20;
+	vport->fc_host_stats.fcp_control_requests =
+			xport->fcp_stats.control_requests;
+
+	return &vport->fc_host_stats;
+}
+
+static void
+efct_reset_stats(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport = (struct efct_vport *)shost->hostdata;
+	struct efct *efct = vport->efct;
+	/* argument has no purpose for this action */
+	union efct_xport_stats_u dummy;
+	u32 rc = 0;
+
+	rc = efct_xport_status(efct->xport, EFCT_XPORT_LINK_STAT_RESET, &dummy);
+	if (rc != 0)
+		pr_err("efct_xport_status returned non-zero - %d\n", rc);
+}
+
+static void
+efct_get_starget_port_id(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_get_starget_node_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_get_starget_port_name(struct scsi_target *starget)
+{
+	pr_err("%s\n", __func__);
+}
+
+static void
+efct_set_vport_symbolic_name(struct fc_vport *fc_vport)
+{
+	pr_err("%s\n", __func__);
+}
+
+static int
+efct_issue_lip(struct Scsi_Host *shost)
+{
+	struct efct_vport *vport =
+			shost ? (struct efct_vport *)shost->hostdata : NULL;
+	struct efct *efct = vport ? vport->efct : NULL;
+
+	if (!shost || !vport || !efct) {
+		pr_err("%s: shost=%p vport=%p efct=%p\n", __func__,
+		       shost, vport, efct);
+		return -EPERM;
+	}
+
+	/*
+	 * Bring the link down gracefully then re-init the link.
+	 * The firmware will re-initialize the Fibre Channel interface as
+	 * required. It does not issue a LIP.
+	 */
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_OFFLINE))
+		efc_log_debug(efct, "EFCT_XPORT_PORT_OFFLINE failed\n");
+
+	if (efct_xport_control(efct->xport, EFCT_XPORT_PORT_ONLINE))
+		efc_log_debug(efct, "EFCT_XPORT_PORT_ONLINE failed\n");
+
+	return EFC_SUCCESS;
+}
+
+struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev)
+{
+	struct Scsi_Host *shost = NULL;
+	int error = 0;
+	struct efct_vport *vport = NULL;
+	union efct_xport_stats_u speed;
+	u32 supported_speeds = 0;
+
+	shost = scsi_host_alloc(&efct_template, sizeof(*vport));
+	if (!shost) {
+		efc_log_err(efct, "failed to allocate Scsi_Host struct\n");
+		return NULL;
+	}
+
+	/* save efct information to shost LLD-specific space */
+	vport = (struct efct_vport *)shost->hostdata;
+	vport->efct = efct;
+	vport->is_vport = true;
+
+	shost->can_queue = efct->hw.config.n_io;
+	shost->max_cmd_len = 16; /* 16-byte CDBs */
+	shost->max_id = 0xffff;
+	shost->max_lun = 0xffffffff;
+
+	/* can only accept (from mid-layer) as many SGEs as we pre-registered */
+	shost->sg_tablesize = sli_get_max_sgl(&efct->hw.sli);
+
+	/* attach FC Transport template to shost */
+	shost->transportt = efct_vport_fc_tt;
+	efc_log_debug(efct, "vport transport template=%p\n",
+		       efct_vport_fc_tt);
+
+	/* get pci_dev structure and add host to SCSI ML */
+	error = scsi_add_host_with_dma(shost, dev, &efct->pci->dev);
+	if (error) {
+		efc_log_debug(efct, "failed scsi_add_host_with_dma\n");
+		return NULL;
+	}
+
+	/* Set symbolic name for host port */
+	snprintf(fc_host_symbolic_name(shost),
+		 sizeof(fc_host_symbolic_name(shost)),
+		 "Emulex %s FV%s DV%s", efct->model, efct->hw.sli.fw_name[0],
+		 EFCT_DRIVER_VERSION);
+
+	/* Set host port supported classes */
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	speed.value = 1000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_1GBIT;
+	}
+	speed.value = 2000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_2GBIT;
+	}
+	speed.value = 4000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_4GBIT;
+	}
+	speed.value = 8000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_8GBIT;
+	}
+	speed.value = 10000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_10GBIT;
+	}
+	speed.value = 16000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_16GBIT;
+	}
+	speed.value = 32000;
+	if (efct_xport_status(efct->xport, EFCT_XPORT_IS_SUPPORTED_LINK_SPEED,
+			      &speed)) {
+		supported_speeds |= FC_PORTSPEED_32GBIT;
+	}
+
+	fc_host_supported_speeds(shost) = supported_speeds;
+	vport->shost = shost;
+
+	return vport;
+}
+
+int efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost)
+{
+	if (shost) {
+		efc_log_debug(efct,
+				"Unregistering vport with Transport Layer\n");
+		efct_xport_remove_host(shost);
+		efc_log_debug(efct, "Unregistering vport with SCSI Midlayer\n");
+		scsi_remove_host(shost);
+		scsi_host_put(shost);
+		return EFC_SUCCESS;
+	}
+	return EFC_FAIL;
+}
+
+static int
+efct_vport_create(struct fc_vport *fc_vport, bool disable)
+{
+	struct Scsi_Host *shost = fc_vport ? fc_vport->shost : NULL;
+	struct efct_vport *pport = shost ?
+					(struct efct_vport *)shost->hostdata :
+					NULL;
+	struct efct *efct = pport ? pport->efct : NULL;
+	struct efct_vport *vport = NULL;
+
+	if (!fc_vport || !shost || !efct)
+		goto fail;
+
+	vport = efct_scsi_new_vport(efct, &fc_vport->dev);
+	if (!vport) {
+		efc_log_err(efct, "failed to create vport\n");
+		goto fail;
+	}
+
+	vport->fc_vport = fc_vport;
+	vport->npiv_wwpn = fc_vport->port_name;
+	vport->npiv_wwnn = fc_vport->node_name;
+	fc_host_node_name(vport->shost) = vport->npiv_wwnn;
+	fc_host_port_name(vport->shost) = vport->npiv_wwpn;
+	*(struct efct_vport **)fc_vport->dd_data = vport;
+
+	return EFC_SUCCESS;
+
+fail:
+	return EFC_FAIL;
+}
+
+static int
+efct_vport_delete(struct fc_vport *fc_vport)
+{
+	struct efct_vport *vport = *(struct efct_vport **)fc_vport->dd_data;
+	struct Scsi_Host *shost = vport ? vport->shost : NULL;
+	struct efct *efct = vport ? vport->efct : NULL;
+	int rc = -1;
+
+	rc = efct_scsi_del_vport(efct, shost);
+
+	if (rc)
+		pr_err("%s: vport delete failed\n", __func__);
+
+	return rc;
+}
+
+static int
+efct_vport_disable(struct fc_vport *fc_vport, bool disable)
+{
+	return EFC_SUCCESS;
+}
+
+static struct fc_function_template efct_xport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_port_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+	.vport_disable = efct_vport_disable,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+	.vport_create = efct_vport_create,
+	.vport_delete = efct_vport_delete,
+};
+
+static struct fc_function_template efct_vport_functions = {
+	.get_starget_node_name = efct_get_starget_node_name,
+	.get_starget_port_name = efct_get_starget_port_name,
+	.get_starget_port_id  = efct_get_starget_port_id,
+
+	.get_host_port_id = efct_get_host_port_id,
+	.get_host_port_type = efct_get_host_vport_type,
+	.get_host_port_state = efct_get_host_port_state,
+	.get_host_speed = efct_get_host_speed,
+	.get_host_fabric_name = efct_get_host_fabric_name,
+
+	.get_fc_host_stats = efct_get_stats,
+	.reset_fc_host_stats = efct_reset_stats,
+
+	.issue_fc_host_lip = efct_issue_lip,
+	.set_vport_symbolic_name = efct_set_vport_symbolic_name,
+
+	/* allocation lengths for host-specific data */
+	.dd_fcrport_size = sizeof(struct efct_rport_data),
+	.dd_fcvport_size = 128, /* should be sizeof(...) */
+
+	/* remote port fixed attributes */
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+	.show_rport_dev_loss_tmo = 1,
+
+	/* target dynamic attributes */
+	.show_starget_node_name = 1,
+	.show_starget_port_name = 1,
+	.show_starget_port_id = 1,
+
+	/* host fixed attributes */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* host dynamic attributes */
+	.show_host_port_id = 1,
+	.show_host_port_type = 1,
+	.show_host_port_state = 1,
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+	.show_host_speed = 1,
+	.show_host_fabric_name = 1,
+	.show_host_symbolic_name = 1,
+};
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v7 30/31] elx: efct: Add Makefile and Kconfig for efct driver
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (28 preceding siblings ...)
  2021-01-07 22:59 ` [PATCH v7 29/31] elx: efct: scsi_transport_fc host interface support James Smart
@ 2021-01-07 22:59 ` James Smart
  2021-01-07 22:59 ` [PATCH v7 31/31] elx: efct: Tie into kernel Kconfig and build process James Smart
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:59 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This patch completes the efct driver population.

This patch adds driver definitions for:
The efct driver Kconfig and Makefile.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/elx/Kconfig  |  9 +++++++++
 drivers/scsi/elx/Makefile | 18 ++++++++++++++++++
 2 files changed, 27 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile

diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
new file mode 100644
index 000000000000..831daea7a951
--- /dev/null
+++ b/drivers/scsi/elx/Kconfig
@@ -0,0 +1,9 @@
+config SCSI_EFCT
+	tristate "Emulex Fibre Channel Target"
+	depends on PCI && SCSI
+	depends on TARGET_CORE
+	depends on SCSI_FC_ATTRS
+	select CRC_T10DIF
+	help
+	  The efct driver provides enhanced SCSI Target Mode
+	  support for specific SLI-4 adapters.
diff --git a/drivers/scsi/elx/Makefile b/drivers/scsi/elx/Makefile
new file mode 100644
index 000000000000..a8537d7a2a6e
--- /dev/null
+++ b/drivers/scsi/elx/Makefile
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: GPL-2.0
+#/*
+# * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+# * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+# */
+
+
+obj-$(CONFIG_SCSI_EFCT) := efct.o
+
+efct-objs := efct/efct_driver.o efct/efct_io.o efct/efct_scsi.o \
+	     efct/efct_xport.o efct/efct_hw.o efct/efct_hw_queues.o \
+	     efct/efct_lio.o efct/efct_unsol.o
+
+efct-objs += libefc/efc_cmds.o libefc/efc_domain.o libefc/efc_fabric.o \
+	     libefc/efc_node.o libefc/efc_nport.o libefc/efc_device.o \
+	     libefc/efclib.o libefc/efc_sm.o libefc/efc_els.o
+
+efct-objs += libefc_sli/sli4.o
-- 
2.26.2



* [PATCH v7 31/31] elx: efct: Tie into kernel Kconfig and build process
  2021-01-07 22:58 [PATCH v7 00/31] [NEW] efct: Broadcom (Emulex) FC Target driver James Smart
                   ` (29 preceding siblings ...)
  2021-01-07 22:59 ` [PATCH v7 30/31] elx: efct: Add Makefile and Kconfig for efct driver James Smart
@ 2021-01-07 22:59 ` James Smart
  30 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2021-01-07 22:59 UTC (permalink / raw)
  To: linux-scsi; +Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner

This final patch ties the efct driver into the kernel Kconfig
and build linkages in the drivers/scsi directory.

Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/scsi/Kconfig  | 2 ++
 drivers/scsi/Makefile | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 701b61ec76ee..f2d47bf55f97 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1170,6 +1170,8 @@ config SCSI_LPFC_DEBUG_FS
 	  This makes debugging information from the lpfc driver
 	  available via the debugfs filesystem.
 
+source "drivers/scsi/elx/Kconfig"
+
 config SCSI_SIM710
 	tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
 	depends on EISA && SCSI
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c00e3dd57990..844db573283c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
 obj-$(CONFIG_SCSI_QLA_FC)	+= qla2xxx/
 obj-$(CONFIG_SCSI_QLA_ISCSI)	+= libiscsi.o qla4xxx/
 obj-$(CONFIG_SCSI_LPFC)		+= lpfc/
+obj-$(CONFIG_SCSI_EFCT)		+= elx/
 obj-$(CONFIG_SCSI_BFA_FC)	+= bfa/
 obj-$(CONFIG_SCSI_CHELSIO_FCOE)	+= csiostor/
 obj-$(CONFIG_SCSI_DMX3191D)	+= dmx3191d.o
-- 
2.26.2



* Re: [PATCH v7 15/31] elx: libefc: Extended link Service IO handling
  2021-01-07 22:58 ` [PATCH v7 15/31] elx: libefc: Extended link Service IO handling James Smart
@ 2021-01-08  9:07   ` Daniel Wagner
  0 siblings, 0 replies; 36+ messages in thread
From: Daniel Wagner @ 2021-01-08  9:07 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

On Thu, Jan 07, 2021 at 02:58:49PM -0800, James Smart wrote:
> This patch continues the libefc library population.
> 
> This patch adds library interface definitions for:
> Functions to build and send ELS/CT/BLS commands and responses.
> 
> Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>

Reviewed-by: Daniel Wagner <dwagner@suse.de>


* Re: [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2021-01-07 22:58 ` [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
@ 2021-01-08  9:11   ` Daniel Wagner
  2021-03-04 18:02   ` Hannes Reinecke
  1 sibling, 0 replies; 36+ messages in thread
From: Daniel Wagner @ 2021-01-08  9:11 UTC (permalink / raw)
  To: James Smart; +Cc: linux-scsi, Ram Vegesna

On Thu, Jan 07, 2021 at 02:58:54PM -0800, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> RQ data buffer allocation and deallocate.
> Memory pool allocation and deallocation APIs.
> Mailbox command submission and completion routines.
> 
> Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>

Reviewed-by: Daniel Wagner <dwagner@suse.de>



* Re: [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs
  2021-01-07 22:58 ` [PATCH v7 20/31] elx: efct: RQ buffer, memory pool allocation and deallocation APIs James Smart
  2021-01-08  9:11   ` Daniel Wagner
@ 2021-03-04 18:02   ` Hannes Reinecke
  1 sibling, 0 replies; 36+ messages in thread
From: Hannes Reinecke @ 2021-03-04 18:02 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Ram Vegesna

On 1/7/21 11:58 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> RQ data buffer allocation and deallocate.
> Memory pool allocation and deallocation APIs.
> Mailbox command submission and completion routines.
> 
> Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> 
> ---
> v7:
>   Ensure an EFC_XXX status value is returned (not numerical constants)
>   Style: break declaration assignment into declaration and later value
>     assignment.
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 406 ++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |   9 +
>   2 files changed, 415 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 2c15468bae4b..451a5729bff4 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -1159,3 +1159,409 @@ efct_get_wwpn(struct efct_hw *hw)
>   	memcpy(p, sli->wwpn, sizeof(p));
>   	return get_unaligned_be64(p);
>   }
> +
> +static struct efc_hw_rq_buffer *
> +efct_hw_rx_buffer_alloc(struct efct_hw *hw, u32 rqindex, u32 count,
> +			u32 size)
> +{
> +	struct efct *efct = hw->os;
> +	struct efc_hw_rq_buffer *rq_buf = NULL;
> +	struct efc_hw_rq_buffer *prq;
> +	u32 i;
> +
> +	if (!count)
> +		return NULL;
> +
> +	rq_buf = kmalloc_array(count, sizeof(*rq_buf), GFP_KERNEL);
> +	if (!rq_buf)
> +		return NULL;
> +	memset(rq_buf, 0, sizeof(*rq_buf) * count);
> +
> +	for (i = 0, prq = rq_buf; i < count; i++, prq++) {
> +		prq->rqindex = rqindex;
> +		prq->dma.size = size;
> +		prq->dma.virt = dma_alloc_coherent(&efct->pci->dev,
> +						   prq->dma.size,
> +						   &prq->dma.phys,
> +						   GFP_DMA);
> +		if (!prq->dma.virt) {
> +			efc_log_err(hw->os, "DMA allocation failed\n");
> +			kfree(rq_buf);
> +			return NULL;
> +		}
> +	}
> +	return rq_buf;
> +}
> +
> +static void
> +efct_hw_rx_buffer_free(struct efct_hw *hw,
> +		       struct efc_hw_rq_buffer *rq_buf,
> +			u32 count)
> +{
> +	struct efct *efct = hw->os;
> +	u32 i;
> +	struct efc_hw_rq_buffer *prq;
> +
> +	if (rq_buf) {
> +		for (i = 0, prq = rq_buf; i < count; i++, prq++) {
> +			dma_free_coherent(&efct->pci->dev,
> +					  prq->dma.size, prq->dma.virt,
> +					  prq->dma.phys);
> +			memset(&prq->dma, 0, sizeof(struct efc_dma));
> +		}
> +
> +		kfree(rq_buf);
> +	}
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_rx_allocate(struct efct_hw *hw)
> +{
> +	struct efct *efct = hw->os;
> +	u32 i;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +	u32 rqindex = 0;
> +	u32 hdr_size = EFCT_HW_RQ_SIZE_HDR;
> +	u32 payload_size = hw->config.rq_default_buffer_size;
> +
> +	rqindex = 0;
> +
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		struct hw_rq *rq = hw->hw_rq[i];
> +
> +		/* Allocate header buffers */
> +		rq->hdr_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
> +						      rq->entry_count,
> +						      hdr_size);
> +		if (!rq->hdr_buf) {
> +			efc_log_err(efct,
> +				     "efct_hw_rx_buffer_alloc hdr_buf failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +			break;
> +		}
> +
> +		efc_log_debug(hw->os,
> +			       "rq[%2d] rq_id %02d header  %4d by %4d bytes\n",
> +			      i, rq->hdr->id, rq->entry_count, hdr_size);
> +
> +		rqindex++;
> +
> +		/* Allocate payload buffers */
> +		rq->payload_buf = efct_hw_rx_buffer_alloc(hw, rqindex,
> +							  rq->entry_count,
> +							  payload_size);
> +		if (!rq->payload_buf) {
> +			efc_log_err(efct,
> +				     "efct_hw_rx_buffer_alloc fb_buf failed\n");
> +			rc = EFCT_HW_RTN_ERROR;
> +			break;
> +		}
> +		efc_log_debug(hw->os,
> +			       "rq[%2d] rq_id %02d default %4d by %4d bytes\n",
> +			      i, rq->data->id, rq->entry_count, payload_size);
> +		rqindex++;
> +	}
> +
> +	return rc ? EFCT_HW_RTN_ERROR : EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_rx_post(struct efct_hw *hw)
> +{
> +	u32 i;
> +	u32 idx;
> +	u32 rq_idx;
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
> +
> +	if (!hw->seq_pool) {
> +		u32 count = 0;
> +
> +		for (i = 0; i < hw->hw_rq_count; i++)
> +			count += hw->hw_rq[i]->entry_count;
> +
> +		hw->seq_pool = kmalloc_array(count,
> +				sizeof(struct efc_hw_sequence),	GFP_KERNEL);
> +		if (!hw->seq_pool)
> +			return EFCT_HW_RTN_NO_MEMORY;
> +	}
> +
> +	/*
> +	 * In RQ pair mode, we MUST post the header and payload buffer at the
> +	 * same time.
> +	 */
> +	for (rq_idx = 0, idx = 0; rq_idx < hw->hw_rq_count; rq_idx++) {
> +		struct hw_rq *rq = hw->hw_rq[rq_idx];
> +
> +		for (i = 0; i < rq->entry_count - 1; i++) {
> +			struct efc_hw_sequence *seq;
> +
> +			seq = hw->seq_pool + idx;
> +			idx++;
> +			seq->header = &rq->hdr_buf[i];
> +			seq->payload = &rq->payload_buf[i];
> +			rc = efct_hw_sequence_free(hw, seq);
> +			if (rc)
> +				break;
> +		}
> +		if (rc)
> +			break;
> +	}
> +
> +	if (rc && hw->seq_pool)
> +		kfree(hw->seq_pool);
> +
> +	return rc;
> +}
> +
> +void
> +efct_hw_rx_free(struct efct_hw *hw)
> +{
> +	u32 i;
> +
> +	/* Free hw_rq buffers */
> +	for (i = 0; i < hw->hw_rq_count; i++) {
> +		struct hw_rq *rq = hw->hw_rq[i];
> +
> +		if (rq) {
> +			efct_hw_rx_buffer_free(hw, rq->hdr_buf,
> +					       rq->entry_count);
> +			rq->hdr_buf = NULL;
> +			efct_hw_rx_buffer_free(hw, rq->payload_buf,
> +					       rq->entry_count);
> +			rq->payload_buf = NULL;
> +		}
> +	}
> +}
> +
> +static int
> +efct_hw_cmd_submit_pending(struct efct_hw *hw)
> +{
> +	int rc = EFC_SUCCESS;
> +
> +	/* Assumes lock held */
> +
> +	/* Only submit MQE if there's room */
> +	while (hw->cmd_head_count < (EFCT_HW_MQ_DEPTH - 1) &&
> +	       !list_empty(&hw->cmd_pending)) {
> +		struct efct_command_ctx *ctx;
> +
> +		ctx = list_first_entry(&hw->cmd_pending,
> +				       struct efct_command_ctx, list_entry);
> +		if (!ctx)
> +			break;
> +
> +		list_del_init(&ctx->list_entry);
> +
> +		list_add_tail(&ctx->list_entry, &hw->cmd_head);
> +		hw->cmd_head_count++;
> +		if (sli_mq_write(&hw->sli, hw->mq, ctx->buf) < 0) {
> +			efc_log_debug(hw->os,
> +				      "sli_queue_write failed: %d\n", rc);
> +			rc = EFC_FAIL;
> +			break;
> +		}
> +	}
> +	return rc;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb, void *arg)
> +{
> +	enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
> +	unsigned long flags = 0;
> +	void *bmbx = NULL;
> +
> +	/*
> +	 * If the chip is in an error state (UE'd) then reject this mailbox
> +	 * command.
> +	 */
> +	if (sli_fw_error_status(&hw->sli) > 0) {
> +		efc_log_crit(hw->os,
> +			      "Chip is in an error state - reset needed\n");
> +		efc_log_crit(hw->os,
> +			      "status=%#x error1=%#x error2=%#x\n",
> +			sli_reg_read_status(&hw->sli),
> +			sli_reg_read_err1(&hw->sli),
> +			sli_reg_read_err2(&hw->sli));
> +
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +

If a reset is needed, why don't you do so?

> +	/*
> +	 * Send a mailbox command to the hardware, and either wait for
> +	 * a completion (EFCT_CMD_POLL) or get an optional asynchronous
> +	 * completion (EFCT_CMD_NOWAIT).
> +	 */
> +
> +	if (opts == EFCT_CMD_POLL) {
> +		mutex_lock(&hw->bmbx_lock);
> +		bmbx = hw->sli.bmbx.virt;
> +
> +		memset(bmbx, 0, SLI4_BMBX_SIZE);
> +		memcpy(bmbx, cmd, SLI4_BMBX_SIZE);
> +
> +		if (sli_bmbx_command(&hw->sli) == 0) {
> +			rc = EFCT_HW_RTN_SUCCESS;
> +			memcpy(cmd, bmbx, SLI4_BMBX_SIZE);
> +		}
> +		mutex_unlock(&hw->bmbx_lock);
> +	} else if (opts == EFCT_CMD_NOWAIT) {
> +		struct efct_command_ctx	*ctx = NULL;
> +
> +		if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +			efc_log_err(hw->os,
> +				     "Can't send command, HW state=%d\n",
> +				    hw->state);
> +			return EFCT_HW_RTN_ERROR;
> +		}
> +
> +		ctx = mempool_alloc(hw->cmd_ctx_pool, GFP_ATOMIC);
> +		if (!ctx)
> +			return EFCT_HW_RTN_NO_RESOURCES;
> +
> +		memset(ctx, 0, sizeof(struct efct_command_ctx));
> +
> +		if (cb) {
> +			ctx->cb = cb;
> +			ctx->arg = arg;
> +		}
> +
> +		memcpy(ctx->buf, cmd, SLI4_BMBX_SIZE);
> +		ctx->ctx = hw;
> +
> +		spin_lock_irqsave(&hw->cmd_lock, flags);
> +
> +		/* Add to pending list */
> +		INIT_LIST_HEAD(&ctx->list_entry);
> +		list_add_tail(&ctx->list_entry, &hw->cmd_pending);
> +
> +		/* Submit as much of the pending list as we can */
> +		rc = efct_hw_cmd_submit_pending(hw);
> +
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +efct_hw_command_process(struct efct_hw *hw, int status, u8 *mqe,
> +			size_t size)
> +{
> +	struct efct_command_ctx *ctx = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +	if (!list_empty(&hw->cmd_head)) {
> +		ctx = list_first_entry(&hw->cmd_head,
> +				       struct efct_command_ctx, list_entry);
> +		list_del_init(&ctx->list_entry);
> +	}
> +	if (!ctx) {
> +		efc_log_err(hw->os, "no command context?!?\n");
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		return EFC_FAIL;
> +	}
> +
> +	hw->cmd_head_count--;
> +
> +	/* Post any pending requests */
> +	efct_hw_cmd_submit_pending(hw);
> +
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	if (ctx->cb) {
> +		memcpy(ctx->buf, mqe, size);
> +		ctx->cb(hw, status, ctx->buf, ctx->arg);
> +	}
> +
> +	mempool_free(ctx, hw->cmd_ctx_pool);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_hw_mq_process(struct efct_hw *hw,
> +		   int status, struct sli4_queue *mq)
> +{
> +	u8 mqe[SLI4_BMBX_SIZE];
> +
> +	if (!sli_mq_read(&hw->sli, mq, mqe))
> +		efct_hw_command_process(hw, status, mqe, mq->size);
> +

efct_hw_command_process() returns an error status; shouldn't you be 
evaluating that?
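
Something along these lines, perhaps (untested sketch, just to
illustrate propagating the status up):

```c
/* untested: let the MQ handler surface the processing status */
static int
efct_hw_mq_process(struct efct_hw *hw,
		   int status, struct sli4_queue *mq)
{
	u8 mqe[SLI4_BMBX_SIZE];
	int rc = EFC_SUCCESS;

	if (!sli_mq_read(&hw->sli, mq, mqe))
		rc = efct_hw_command_process(hw, status, mqe, mq->size);

	return rc;
}
```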

> +	return EFC_SUCCESS;
> +}
> +
> +static int
> +efct_hw_command_cancel(struct efct_hw *hw)
> +{
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->cmd_lock, flags);
> +
> +	/*
> +	 * Manually clean up remaining commands. Note: since this calls
> +	 * efct_hw_command_process(), we'll also process the cmd_pending
> +	 * list, so no need to manually clean that out.
> +	 */
> +	while (!list_empty(&hw->cmd_head)) {
> +		u8		mqe[SLI4_BMBX_SIZE] = { 0 };
> +		struct efct_command_ctx *ctx;
> +
> +		ctx = list_first_entry(&hw->cmd_head,
> +				       struct efct_command_ctx, list_entry);
> +
> +		efc_log_debug(hw->os, "hung command %08x\n",
> +			      !ctx ? U32_MAX :
> +			      (!ctx->buf ? U32_MAX :
> +			       *((u32 *)ctx->buf)));
> +		spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +		efct_hw_command_process(hw, -1, mqe, SLI4_BMBX_SIZE);
> +		spin_lock_irqsave(&hw->cmd_lock, flags);

Same here.

> +	}
> +
> +	spin_unlock_irqrestore(&hw->cmd_lock, flags);
> +
> +	return EFC_SUCCESS;
> +}
> +
> +static void
> +efct_mbox_rsp_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
> +{
> +	struct efct_mbox_rqst_ctx *ctx = arg;
> +
> +	if (ctx) {
> +		if (ctx->callback)
> +			(*ctx->callback)(hw->os->efcport, status, mqe,
> +					 ctx->arg);
> +
> +		mempool_free(ctx, hw->mbox_rqst_pool);
> +	}
> +}
> +
> +int
> +efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg)
> +{
> +	struct efct_mbox_rqst_ctx *ctx;
> +	struct efct *efct = base;
> +	struct efct_hw *hw = &efct->hw;
> +
> +	/*
> +	 * Allocate a callback context (which includes the mbox cmd buffer),
> +	 * we need this to be persistent as the mbox cmd submission may be
> +	 * queued and executed later.
> +	 */
> +	ctx = mempool_alloc(hw->mbox_rqst_pool, GFP_ATOMIC);
> +	if (!ctx)
> +		return EFC_FAIL;
> +
> +	ctx->callback = cb;
> +	ctx->arg = arg;
> +
> +	if (efct_hw_command(hw, cmd, EFCT_CMD_NOWAIT, efct_mbox_rsp_cb, ctx)) {
> +		efc_log_err(efct, "issue mbox rqst failure\n");
> +		mempool_free(ctx, hw->mbox_rqst_pool);
> +		return EFC_FAIL;

You don't believe in return values, do you?
I think efct_hw_command() can return _different_ values than just 
FAIL/SUCCESS (see my above comment about reset), so at the very least 
you should be _looking_ at them.
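
I.e. something like this (untested; how you map the individual
efct_hw_rtn values onto the function's return codes is up to you, but
at least the distinct status ends up in the log):

```c
	enum efct_hw_rtn rc;

	/* untested: capture and report the actual efct_hw_command() status */
	rc = efct_hw_command(hw, cmd, EFCT_CMD_NOWAIT, efct_mbox_rsp_cb, ctx);
	if (rc != EFCT_HW_RTN_SUCCESS) {
		efc_log_err(efct, "issue mbox rqst failure: %d\n", rc);
		mempool_free(ctx, hw->mbox_rqst_pool);
		return EFC_FAIL;
	}
	return EFC_SUCCESS;
```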

> +	}
> +	return EFC_SUCCESS;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 39cfe6658783..76de5decfbea 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -619,4 +619,13 @@ efct_get_wwnn(struct efct_hw *hw);
>   uint64_t
>   efct_get_wwpn(struct efct_hw *hw);
>   
> +enum efct_hw_rtn efct_hw_rx_allocate(struct efct_hw *hw);
> +enum efct_hw_rtn efct_hw_rx_post(struct efct_hw *hw);
> +void efct_hw_rx_free(struct efct_hw *hw);
> +enum efct_hw_rtn
> +efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
> +		void *arg);
> +int
> +efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg);
> +
>   #endif /* __EFCT_H__ */
> 
Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


* Re: [PATCH v7 21/31] elx: efct: Hardware IO and SGL initialization
  2021-01-07 22:58 ` [PATCH v7 21/31] elx: efct: Hardware IO and SGL initialization James Smart
@ 2021-03-04 18:05   ` Hannes Reinecke
  0 siblings, 0 replies; 36+ messages in thread
From: Hannes Reinecke @ 2021-03-04 18:05 UTC (permalink / raw)
  To: James Smart, linux-scsi; +Cc: Ram Vegesna, Daniel Wagner

On 1/7/21 11:58 PM, James Smart wrote:
> This patch continues the efct driver population.
> 
> This patch adds driver definitions for:
> Routines to create IO interfaces (wqs, etc), SGL initialization,
> and configure hardware features.
> 
> Co-developed-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: Ram Vegesna <ram.vegesna@broadcom.com>
> Signed-off-by: James Smart <jsmart2021@gmail.com>
> Reviewed-by: Daniel Wagner <dwagner@suse.de>
> ---
>   drivers/scsi/elx/efct/efct_hw.c | 590 ++++++++++++++++++++++++++++++++
>   drivers/scsi/elx/efct/efct_hw.h |  41 +++
>   2 files changed, 631 insertions(+)
> 
> diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
> index 451a5729bff4..da4e53407b94 100644
> --- a/drivers/scsi/elx/efct/efct_hw.c
> +++ b/drivers/scsi/elx/efct/efct_hw.c
> @@ -1565,3 +1565,593 @@ efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg)
>   	}
>   	return EFC_SUCCESS;
>   }
> +
> +static inline struct efct_hw_io *
> +_efct_hw_io_alloc(struct efct_hw *hw)
> +{
> +	struct efct_hw_io *io = NULL;
> +
> +	if (!list_empty(&hw->io_free)) {
> +		io = list_first_entry(&hw->io_free, struct efct_hw_io,
> +				      list_entry);
> +		list_del(&io->list_entry);
> +	}
> +	if (io) {
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_inuse);
> +		io->state = EFCT_HW_IO_STATE_INUSE;
> +		io->abort_reqtag = U32_MAX;
> +		io->wq = hw->wq_cpu_array[raw_smp_processor_id()];
> +		if (!io->wq) {
> +			efc_log_err(hw->os, "WQ not assigned for cpu:%d\n",
> +					raw_smp_processor_id());
> +			io->wq = hw->hw_wq[0];
> +		}
> +		kref_init(&io->ref);
> +		io->release = efct_hw_io_free_internal;
> +	} else {
> +		atomic_add_return(1, &hw->io_alloc_failed_count);
> +	}
> +
> +	return io;
> +}
> +
> +struct efct_hw_io *
> +efct_hw_io_alloc(struct efct_hw *hw)
> +{
> +	struct efct_hw_io *io = NULL;
> +	unsigned long flags = 0;
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	io = _efct_hw_io_alloc(hw);
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +
> +	return io;
> +}
> +
> +static void
> +efct_hw_io_free_move_correct_list(struct efct_hw *hw,
> +				  struct efct_hw_io *io)
> +{
> +
> +	/*
> +	 * When an IO is freed, depending on the exchange busy flag,
> +	 * move it to the correct list.
> +	 */
> +	if (io->xbusy) {
> +		/*
> +		 * add to wait_free list and wait for XRI_ABORTED CQEs to clean
> +		 * up
> +		 */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_wait_free);
> +		io->state = EFCT_HW_IO_STATE_WAIT_FREE;
> +	} else {
> +		/* IO not busy, add to free list */
> +		INIT_LIST_HEAD(&io->list_entry);
> +		list_add_tail(&io->list_entry, &hw->io_free);
> +		io->state = EFCT_HW_IO_STATE_FREE;
> +	}
> +}
> +
> +static inline void
> +efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	/* initialize IO fields */
> +	efct_hw_init_free_io(io);
> +
> +	/* Restore default SGL */
> +	efct_hw_io_restore_sgl(hw, io);
> +}
> +
> +void
> +efct_hw_io_free_internal(struct kref *arg)
> +{
> +	unsigned long flags = 0;
> +	struct efct_hw_io *io =	container_of(arg, struct efct_hw_io, ref);
> +	struct efct_hw *hw = io->hw;
> +
> +	/* perform common cleanup */
> +	efct_hw_io_free_common(hw, io);
> +
> +	spin_lock_irqsave(&hw->io_lock, flags);
> +	/* remove from in-use list */
> +	if (!list_empty(&io->list_entry) && !list_empty(&hw->io_inuse)) {
> +		list_del_init(&io->list_entry);
> +		efct_hw_io_free_move_correct_list(hw, io);
> +	}
> +	spin_unlock_irqrestore(&hw->io_lock, flags);
> +}
> +
> +int
> +efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io)
> +{
> +	return kref_put(&io->ref, io->release);
> +}
> +
> +struct efct_hw_io *
> +efct_hw_io_lookup(struct efct_hw *hw, u32 xri)
> +{
> +	u32 ioindex;
> +
> +	ioindex = xri - hw->sli.ext[SLI4_RSRC_XRI].base[0];
> +	return hw->io[ioindex];
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_init_sges(struct efct_hw *hw, struct efct_hw_io *io,
> +		     enum efct_hw_io_type type)
> +{
> +	struct sli4_sge	*data = NULL;
> +	u32 i = 0;
> +	u32 skips = 0;
> +	u32 sge_flags = 0;
> +
> +	if (!io) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p\n", hw, io);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* Clear / reset the scatter-gather list */
> +	io->sgl = &io->def_sgl;
> +	io->sgl_count = io->def_sgl_count;
> +	io->first_data_sge = 0;
> +
> +	memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge));
> +	io->n_sge = 0;
> +	io->sge_offset = 0;
> +
> +	io->type = type;
> +
> +	data = io->sgl->virt;
> +
> +	/*
> +	 * Some IO types have underlying hardware requirements on the order
> +	 * of SGEs. Process all special entries here.
> +	 */
> +	switch (type) {
> +	case EFCT_HW_IO_TARGET_WRITE:
> +
> +		/* populate host resident XFER_RDY buffer */
> +		sge_flags = le32_to_cpu(data->dw2_flags);
> +		sge_flags &= (~SLI4_SGE_TYPE_MASK);
> +		sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT);
> +		data->buffer_address_high =
> +			cpu_to_le32(upper_32_bits(io->xfer_rdy.phys));
> +		data->buffer_address_low  =
> +			cpu_to_le32(lower_32_bits(io->xfer_rdy.phys));
> +		data->buffer_length = cpu_to_le32(io->xfer_rdy.size);
> +		data->dw2_flags = cpu_to_le32(sge_flags);
> +		data++;
> +
> +		skips = EFCT_TARGET_WRITE_SKIPS;
> +
> +		io->n_sge = 1;
> +		break;
> +	case EFCT_HW_IO_TARGET_READ:
> +		/*
> +		 * For FCP_TSEND64, the first 2 entries are SKIP SGEs
> +		 */
> +		skips = EFCT_TARGET_READ_SKIPS;
> +		break;
> +	case EFCT_HW_IO_TARGET_RSP:
> +		/*
> +		 * No skips, etc. for FCP_TRSP64
> +		 */
> +		break;
> +	default:
> +		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/*
> +	 * Write skip entries
> +	 */
> +	for (i = 0; i < skips; i++) {
> +		sge_flags = le32_to_cpu(data->dw2_flags);
> +		sge_flags &= (~SLI4_SGE_TYPE_MASK);
> +		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
> +		data->dw2_flags = cpu_to_le32(sge_flags);
> +		data++;
> +	}
> +
> +	io->n_sge += skips;
> +
> +	/*
> +	 * Set last
> +	 */
> +	sge_flags = le32_to_cpu(data->dw2_flags);
> +	sge_flags |= SLI4_SGE_LAST;
> +	data->dw2_flags = cpu_to_le32(sge_flags);
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
> +		   uintptr_t addr, u32 length)
> +{
> +	struct sli4_sge	*data = NULL;
> +	u32 sge_flags = 0;
> +
> +	if (!io || !addr || !length) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p addr=%lx length=%u\n",
> +			    hw, io, addr, length);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (length > hw->sli.sge_supported_length) {
> +		efc_log_err(hw->os,
> +			     "length of SGE %d bigger than allowed %d\n",
> +			    length, hw->sli.sge_supported_length);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	data = io->sgl->virt;
> +	data += io->n_sge;
> +
> +	sge_flags = le32_to_cpu(data->dw2_flags);
> +	sge_flags &= ~SLI4_SGE_TYPE_MASK;
> +	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
> +	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
> +	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
> +
> +	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
> +	data->buffer_address_low  = cpu_to_le32(lower_32_bits(addr));
> +	data->buffer_length = cpu_to_le32(length);
> +
> +	/*
> +	 * Always assume this is the last entry and mark as such.
> +	 * If this is not the first entry unset the "last SGE"
> +	 * indication for the previous entry
> +	 */
> +	sge_flags |= SLI4_SGE_LAST;
> +	data->dw2_flags = cpu_to_le32(sge_flags);
> +
> +	if (io->n_sge) {
> +		sge_flags = le32_to_cpu(data[-1].dw2_flags);
> +		sge_flags &= ~SLI4_SGE_LAST;
> +		data[-1].dw2_flags = cpu_to_le32(sge_flags);
> +	}
> +
> +	/* Set first_data_sge if not previously set */
> +	if (io->first_data_sge == 0)
> +		io->first_data_sge = io->n_sge;
> +
> +	io->sge_offset += length;
> +	io->n_sge++;
> +
> +	return EFCT_HW_RTN_SUCCESS;
> +}
> +
> +void
> +efct_hw_io_abort_all(struct efct_hw *hw)
> +{
> +	struct efct_hw_io *io_to_abort	= NULL;
> +	struct efct_hw_io *next_io = NULL;
> +
> +	list_for_each_entry_safe(io_to_abort, next_io,
> +				 &hw->io_inuse, list_entry) {
> +		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
> +	}
> +}
> +
> +static void
> +efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
> +{
> +	struct efct_hw_io *io = arg;
> +	struct efct_hw *hw = io->hw;
> +	u32 ext = 0;
> +	u32 len = 0;
> +	struct hw_wq_callback *wqcb;
> +	unsigned long flags = 0;
> +
> +	/*
> +	 * For IOs that were aborted internally, we may need to issue the
> +	 * callback here, depending on whether an XRI_ABORTED CQE is
> +	 * expected or not. If the status is Local Reject/No XRI, then
> +	 * issue the callback now.
> +	 */
> +	ext = sli_fc_ext_status(&hw->sli, cqe);
> +	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
> +	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI && io->done) {
> +		efct_hw_done_t done = io->done;
> +
> +		io->done = NULL;
> +
> +		/*
> +		 * Use the latched status, as it is always saved for an
> +		 * internal abort. Note: we won't have both a done and an
> +		 * abort_done function, so don't worry about clobbering
> +		 * the len, status and ext fields.
> +		 */
> +		status = io->saved_status;
> +		len = io->saved_len;
> +		ext = io->saved_ext;
> +		io->status_saved = false;
> +		done(io, len, status, ext, io->arg);
> +	}
> +
> +	if (io->abort_done) {
> +		efct_hw_done_t done = io->abort_done;
> +
> +		io->abort_done = NULL;
> +		done(io, len, status, ext, io->abort_arg);
> +	}
> +	spin_lock_irqsave(&hw->io_abort_lock, flags);
> +	/* clear abort bit to indicate abort is complete */
> +	io->abort_in_progress = false;
> +	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +
> +	/* Free the WQ callback */
> +	if (io->abort_reqtag == U32_MAX) {
> +		efc_log_err(hw->os, "HW IO already freed\n");
> +		return;
> +	}
> +
> +	wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag);
> +	efct_hw_reqtag_free(hw, wqcb);
> +
> +	/*
> +	 * Call efct_hw_io_free() because this releases the WQ reservation as
> +	 * well as doing the refcount put. Don't duplicate the code here.
> +	 */
> +	(void)efct_hw_io_free(hw, io);
> +}
> +
> +static void
> +efct_hw_fill_abort_wqe(struct efct_hw *hw, struct efct_hw_wqe *wqe)
> +{
> +	struct sli4_abort_wqe *abort = (void *)wqe->wqebuf;
> +
> +	memset(abort, 0, hw->sli.wqe_size);
> +
> +	abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG;
> +	abort->ia_ir_byte |= wqe->send_abts ? 0 : 1;
> +
> +	/* Suppress ABTS retries */
> +	abort->ia_ir_byte |= SLI4_ABRT_WQE_IR;
> +
> +	abort->t_tag  = cpu_to_le32(wqe->id);
> +	abort->command = SLI4_WQE_ABORT;
> +	abort->request_tag = cpu_to_le16(wqe->abort_reqtag);
> +
> +	abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD);
> +
> +	abort->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT);
> +}
> +
> +enum efct_hw_rtn
> +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
> +		 bool send_abts, void *cb, void *arg)
> +{
> +	struct hw_wq_callback *wqcb;
> +	unsigned long flags = 0;
> +
> +	if (!io_to_abort) {
> +		efc_log_err(hw->os,
> +			     "bad parameter hw=%p io=%p\n",
> +			    hw, io_to_abort);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	if (hw->state != EFCT_HW_STATE_ACTIVE) {
> +		efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n",
> +			     hw->state);
> +		return EFCT_HW_RTN_ERROR;
> +	}
> +
> +	/* take a reference on IO being aborted */
> +	if (kref_get_unless_zero(&io_to_abort->ref) == 0) {
> +		/* command no longer active */
> +		efc_log_debug(hw->os,
> +			      "io not active xri=0x%x tag=0x%x\n",
> +			     io_to_abort->indicator, io_to_abort->reqtag);
> +		return EFCT_HW_RTN_IO_NOT_ACTIVE;
> +	}
> +
> +	/* Must have a valid WQ reference */
> +	if (!io_to_abort->wq) {
> +		efc_log_debug(hw->os, "io_to_abort xri=0x%x not active on WQ\n",
> +			      io_to_abort->indicator);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		return EFCT_HW_RTN_IO_NOT_ACTIVE;
> +	}
> +
> +	/*
> +	 * Validation checks complete; now check to see if already being
> +	 * aborted
> +	 */
> +	spin_lock_irqsave(&hw->io_abort_lock, flags);
> +	if (io_to_abort->abort_in_progress) {

Could abort_in_progress become a bit flag manipulated with test_and_set_bit(), so the spinlock can be dropped?

> +		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		efc_log_debug(hw->os,
> +			       "io already being aborted xri=0x%x tag=0x%x\n",
> +			      io_to_abort->indicator, io_to_abort->reqtag);
> +		return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS;
> +	}
> +
> +	/*
> +	 * This IO is not already being aborted. Set flag so we won't try to
> +	 * abort it again. After all, we only have one abort_done callback.
> +	 */
> +	io_to_abort->abort_in_progress = true;
> +	spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +
> +	/*
> +	 * If we got here, the possibilities are:
> +	 * - host owned xri
> +	 *	- io_to_abort->wq_index != U32_MAX
> +	 *		- submit ABORT_WQE to same WQ
> +	 * - port owned xri:
> +	 *	- rxri: io_to_abort->wq_index == U32_MAX
> +	 *		- submit ABORT_WQE to any WQ
> +	 *	- non-rxri
> +	 *		- io_to_abort->index != U32_MAX
> +	 *			- submit ABORT_WQE to same WQ
> +	 *		- io_to_abort->index == U32_MAX
> +	 *			- submit ABORT_WQE to any WQ
> +	 */
> +	io_to_abort->abort_done = cb;
> +	io_to_abort->abort_arg  = arg;
> +
> +	/* Allocate a request tag for the abort portion of this IO */
> +	wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort);
> +	if (!wqcb) {
> +		efc_log_err(hw->os, "can't allocate request tag\n");
> +		return EFCT_HW_RTN_NO_RESOURCES;
> +	}
> +
> +	io_to_abort->abort_reqtag = wqcb->instance_index;
> +	io_to_abort->wqe.send_abts = send_abts;
> +	io_to_abort->wqe.id = io_to_abort->indicator;
> +	io_to_abort->wqe.abort_reqtag = io_to_abort->abort_reqtag;
> +
> +	/*
> +	 * If the wqe is on the pending list, then set this wqe to be
> +	 * aborted when the IO's wqe is removed from the list.
> +	 */
> +	if (io_to_abort->wq) {
> +		spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags);
> +		if (io_to_abort->wqe.list_entry.next) {
> +			io_to_abort->wqe.abort_wqe_submit_needed = true;
> +			spin_unlock_irqrestore(&io_to_abort->wq->queue->lock,
> +					       flags);
> +			return EFC_SUCCESS;
> +		}
> +		spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags);
> +	}
> +
> +	efct_hw_fill_abort_wqe(hw, &io_to_abort->wqe);
> +
> +	/* ABORT_WQE does not actually utilize an XRI on the Port,
> +	 * therefore, keep xbusy as-is to track the exchange's state,
> +	 * not the ABORT_WQE's state
> +	 */
> +	if (efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe)) {
> +		spin_lock_irqsave(&hw->io_abort_lock, flags);
> +		io_to_abort->abort_in_progress = false;
> +		spin_unlock_irqrestore(&hw->io_abort_lock, flags);
> +		/* efct_ref_get(): same function */
> +		kref_put(&io_to_abort->ref, io_to_abort->release);
> +		return EFC_FAIL;
> +	}
> +
> +	return EFC_SUCCESS;
> +}
> +
> +void
> +efct_hw_reqtag_pool_free(struct efct_hw *hw)
> +{
> +	u32 i;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +	struct hw_wq_callback *wqcb = NULL;
> +
> +	if (reqtag_pool) {
> +		for (i = 0; i < U16_MAX; i++) {
> +			wqcb = reqtag_pool->tags[i];
> +			if (!wqcb)
> +				continue;
> +
> +			kfree(wqcb);
> +		}
> +		kfree(reqtag_pool);
> +		hw->wq_reqtag_pool = NULL;
> +	}
> +}
> +
> +struct reqtag_pool *
> +efct_hw_reqtag_pool_alloc(struct efct_hw *hw)
> +{
> +	u32 i = 0;
> +	struct reqtag_pool *reqtag_pool;
> +	struct hw_wq_callback *wqcb;
> +
> +	reqtag_pool = kzalloc(sizeof(*reqtag_pool), GFP_KERNEL);
> +	if (!reqtag_pool)
> +		return NULL;
> +
> +	INIT_LIST_HEAD(&reqtag_pool->freelist);
> +	/* initialize reqtag pool lock */
> +	spin_lock_init(&reqtag_pool->lock);
> +	for (i = 0; i < U16_MAX; i++) {
> +		wqcb = kmalloc(sizeof(*wqcb), GFP_KERNEL);
> +		if (!wqcb)
> +			break;
> +
> +		reqtag_pool->tags[i] = wqcb;
> +		wqcb->instance_index = i;
> +		wqcb->callback = NULL;
> +		wqcb->arg = NULL;
> +		INIT_LIST_HEAD(&wqcb->list_entry);
> +		list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist);
> +	}
> +
> +	return reqtag_pool;
> +}
> +
> +struct hw_wq_callback *
> +efct_hw_reqtag_alloc(struct efct_hw *hw,
> +		     void (*callback)(void *arg, u8 *cqe, int status),
> +		     void *arg)
> +{
> +	struct hw_wq_callback *wqcb = NULL;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +	unsigned long flags = 0;
> +
> +	if (!callback)
> +		return wqcb;
> +
> +	spin_lock_irqsave(&reqtag_pool->lock, flags);
> +
> +	if (!list_empty(&reqtag_pool->freelist)) {
> +		wqcb = list_first_entry(&reqtag_pool->freelist,
> +				      struct hw_wq_callback, list_entry);
> +	}
> +
> +	if (wqcb) {
> +		list_del_init(&wqcb->list_entry);
> +		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
> +		wqcb->callback = callback;
> +		wqcb->arg = arg;
> +	} else {
> +		spin_unlock_irqrestore(&reqtag_pool->lock, flags);
> +	}
> +
> +	return wqcb;
> +}
> +
> +void
> +efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb)
> +{
> +	unsigned long flags = 0;
> +	struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool;
> +
> +	if (!wqcb->callback)
> +		efc_log_err(hw->os, "WQCB is already freed\n");
> +
> +	spin_lock_irqsave(&reqtag_pool->lock, flags);
> +	wqcb->callback = NULL;
> +	wqcb->arg = NULL;
> +	INIT_LIST_HEAD(&wqcb->list_entry);
> +	list_add(&wqcb->list_entry, &hw->wq_reqtag_pool->freelist);
> +	spin_unlock_irqrestore(&reqtag_pool->lock, flags);
> +}
> +
> +struct hw_wq_callback *
> +efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index)
> +{
> +	struct hw_wq_callback *wqcb;
> +
> +	wqcb = hw->wq_reqtag_pool->tags[instance_index];
> +	if (!wqcb)
> +		efc_log_err(hw->os, "wqcb for instance %d is null\n",
> +			     instance_index);
> +
> +	return wqcb;
> +}
> diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
> index 76de5decfbea..dd00d4df6ada 100644
> --- a/drivers/scsi/elx/efct/efct_hw.h
> +++ b/drivers/scsi/elx/efct/efct_hw.h
> @@ -628,4 +628,45 @@ efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb,
>   int
>   efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg);
>   
> +struct efct_hw_io *efct_hw_io_alloc(struct efct_hw *hw);
> +int efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io);
> +u8 efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io);
> +enum efct_hw_rtn
> +efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type,
> +		struct efct_hw_io *io, union efct_hw_io_param_u *iparam,
> +		void *cb, void *arg);
> +enum efct_hw_rtn
> +efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io,
> +			struct efc_dma *sgl,
> +			u32 sgl_count);
> +enum efct_hw_rtn
> +efct_hw_io_init_sges(struct efct_hw *hw,
> +		     struct efct_hw_io *io, enum efct_hw_io_type type);
> +
> +enum efct_hw_rtn
> +efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
> +		   uintptr_t addr, u32 length);
> +enum efct_hw_rtn
> +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort,
> +		 bool send_abts, void *cb, void *arg);
> +u32
> +efct_hw_io_get_count(struct efct_hw *hw,
> +		     enum efct_hw_io_count_type io_count_type);
> +struct efct_hw_io
> +*efct_hw_io_lookup(struct efct_hw *hw, u32 indicator);
> +void efct_hw_io_abort_all(struct efct_hw *hw);
> +void efct_hw_io_free_internal(struct kref *arg);
> +
> +/* HW WQ request tag API */
> +struct reqtag_pool *efct_hw_reqtag_pool_alloc(struct efct_hw *hw);
> +void efct_hw_reqtag_pool_free(struct efct_hw *hw);
> +struct hw_wq_callback
> +*efct_hw_reqtag_alloc(struct efct_hw *hw,
> +			void (*callback)(void *arg, u8 *cqe,
> +					 int status), void *arg);
> +void
> +efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb);
> +struct hw_wq_callback
> +*efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index);
> +
>   #endif /* __EFCT_H__ */
> 

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
