* [RFC 0/6] Add QLogic FastLinQ iSCSI (qedi) driver.
@ 2016-10-19  5:01 ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar

From: Manish Rangankar <manish.rangankar@cavium.com>

This series introduces a hardware offload iSCSI initiator driver for the
41000 Series Converged Network Adapters (579xx chip) by QLogic. The overall
driver design comprises a common module ('qed') and protocol-specific
modules that depend on it ('qedi' for iSCSI).

This is an open-iscsi based driver; modifications to the open-iscsi user
components 'iscsid', 'iscsiuio', etc. are required for the solution to
work. The user-space changes are also in the process of being submitted:

    https://groups.google.com/forum/#!forum/open-iscsi

The 'qed' common module, under drivers/net/ethernet/qlogic/qed/, is
enhanced with functionality required for the iSCSI support. This series
is based on:

    net-next: 1b830996c1603225a96e233c3b09bf2b12607d78

The qedi patches are split logically to ease review; individual patches
do not compile on their own.

We really appreciate any review comments you may have on the patch series.
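For anyone wanting to build-test the series, the relevant Kconfig options
can be enabled roughly as follows (a sketch based on the Kconfig hunks in
this series; QED_LL2 and QED_ISCSI are selected automatically by QEDI, so
only the tristate options need to be set explicitly):

```
CONFIG_QED=m
CONFIG_QED_LL2=y
CONFIG_QED_ISCSI=y
CONFIG_QEDI=m
```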

Manish Rangankar (4):
  qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  qedi: Add LL2 iSCSI interface for offload iSCSI.
  qedi: Add support for iSCSI session management.
  qedi: Add support for data path.

Yuval Mintz (2):
  qed: Add support for hardware offloaded iSCSI.
  qed: Add iSCSI out of order packet handling.

 MAINTAINERS                                    |    6 +
 drivers/net/ethernet/qlogic/Kconfig            |    3 +
 drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
 drivers/net/ethernet/qlogic/qed/qed.h          |    9 +-
 drivers/net/ethernet/qlogic/qed/qed_dev.c      |   25 +
 drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
 drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 +++++++++++++
 drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
 drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
 drivers/net/ethernet/qlogic/qed/qed_ll2.c      |  594 +++++-
 drivers/net/ethernet/qlogic/qed/qed_ll2.h      |    9 +
 drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
 drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
 drivers/net/ethernet/qlogic/qed/qed_ooo.c      |  510 +++++
 drivers/net/ethernet/qlogic/qed/qed_ooo.h      |  116 ++
 drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
 drivers/net/ethernet/qlogic/qed/qed_roce.c     |    1 +
 drivers/net/ethernet/qlogic/qed/qed_spq.c      |   24 +
 drivers/scsi/Kconfig                           |    1 +
 drivers/scsi/Makefile                          |    1 +
 drivers/scsi/qedi/Kconfig                      |   10 +
 drivers/scsi/qedi/Makefile                     |    5 +
 drivers/scsi/qedi/qedi.h                       |  359 ++++
 drivers/scsi/qedi/qedi_dbg.c                   |  143 ++
 drivers/scsi/qedi/qedi_dbg.h                   |  144 ++
 drivers/scsi/qedi/qedi_debugfs.c               |  244 +++
 drivers/scsi/qedi/qedi_fw.c                    | 2405 ++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_gbl.h                   |   73 +
 drivers/scsi/qedi/qedi_hsi.h                   |   52 +
 drivers/scsi/qedi/qedi_iscsi.c                 | 1610 ++++++++++++++++
 drivers/scsi/qedi/qedi_iscsi.h                 |  228 +++
 drivers/scsi/qedi/qedi_main.c                  | 2075 ++++++++++++++++++++
 drivers/scsi/qedi/qedi_sysfs.c                 |   52 +
 drivers/scsi/qedi/qedi_version.h               |   14 +
 include/linux/qed/qed_if.h                     |    2 +
 include/linux/qed/qed_iscsi_if.h               |  249 +++
 36 files changed, 10294 insertions(+), 45 deletions(-)
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.c
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.h
 create mode 100644 drivers/scsi/qedi/Kconfig
 create mode 100644 drivers/scsi/qedi/Makefile
 create mode 100644 drivers/scsi/qedi/qedi.h
 create mode 100644 drivers/scsi/qedi/qedi_dbg.c
 create mode 100644 drivers/scsi/qedi/qedi_dbg.h
 create mode 100644 drivers/scsi/qedi/qedi_debugfs.c
 create mode 100644 drivers/scsi/qedi/qedi_fw.c
 create mode 100644 drivers/scsi/qedi/qedi_gbl.h
 create mode 100644 drivers/scsi/qedi/qedi_hsi.h
 create mode 100644 drivers/scsi/qedi/qedi_iscsi.c
 create mode 100644 drivers/scsi/qedi/qedi_iscsi.h
 create mode 100644 drivers/scsi/qedi/qedi_main.c
 create mode 100644 drivers/scsi/qedi/qedi_sysfs.c
 create mode 100644 drivers/scsi/qedi/qedi_version.h
 create mode 100644 include/linux/qed/qed_iscsi_if.h

-- 
1.8.3.1

* [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
  2016-10-19  5:01 ` manish.rangankar
@ 2016-10-19  5:01   ` manish.rangankar
  -1 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Yuval Mintz, Arun Easi, Yuval Mintz

From: Yuval Mintz <Yuval.Mintz@qlogic.com>

This adds the backbone required for the various HW initializations
necessary for the iSCSI driver (qedi) for the QLogic FastLinQ
4xxxx line of adapters - FW notification, resource initializations, etc.

Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
---
 drivers/net/ethernet/qlogic/Kconfig            |   15 +
 drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
 drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
 drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
 drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
 drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
 drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
 drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
 drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
 drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
 drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
 drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
 drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
 include/linux/qed/qed_if.h                     |    2 +
 include/linux/qed/qed_iscsi_if.h               |  249 +++++
 15 files changed, 1692 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
 create mode 100644 include/linux/qed/qed_iscsi_if.h

diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
index 0df1391f9..bad4fae 100644
--- a/drivers/net/ethernet/qlogic/Kconfig
+++ b/drivers/net/ethernet/qlogic/Kconfig
@@ -118,4 +118,19 @@ config INFINIBAND_QEDR
 	  for QLogic QED. This would be replaced by the 'real' option
 	  once the QEDR driver is added [+relocated].
 
+config QED_ISCSI
+	bool
+
+config QEDI
+	tristate "QLogic QED 25/40/100Gb iSCSI driver"
+	depends on QED
+	select QED_LL2
+	select QED_ISCSI
+	default n
+	---help---
+	  This provides a temporary node that allows the compilation
+	  and logical testing of the hardware offload iSCSI support
+	  for QLogic QED. This would be replaced by the 'real' option
+	  once the QEDI driver is added [+relocated].
+
 endif # NET_VENDOR_QLOGIC
diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index cda0af7..b76669c 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -6,3 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
 qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
 qed-$(CONFIG_QED_LL2) += qed_ll2.o
 qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
+qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
index 653bb57..a61b1c0 100644
--- a/drivers/net/ethernet/qlogic/qed/qed.h
+++ b/drivers/net/ethernet/qlogic/qed/qed.h
@@ -35,6 +35,7 @@
 
 #define QED_WFQ_UNIT	100
 
+#define ISCSI_BDQ_ID(_port_id) (_port_id)
 #define QED_WID_SIZE            (1024)
 #define QED_PF_DEMS_SIZE        (4)
 
@@ -167,6 +168,7 @@ enum QED_RESOURCES {
 	QED_ILT,
 	QED_LL2_QUEUE,
 	QED_RDMA_STATS_QUEUE,
+	QED_CMDQS_CQS,
 	QED_MAX_RESC,
 };
 
@@ -379,6 +381,7 @@ struct qed_hwfn {
 	bool				using_ll2;
 	struct qed_ll2_info		*p_ll2_info;
 	struct qed_rdma_info		*p_rdma_info;
+	struct qed_iscsi_info		*p_iscsi_info;
 	struct qed_pf_params		pf_params;
 
 	bool b_rdma_enabled_in_prs;
@@ -578,6 +581,8 @@ struct qed_dev {
 	/* Linux specific here */
 	struct  qede_dev		*edev;
 	struct  pci_dev			*pdev;
+	u32 flags;
+#define QED_FLAG_STORAGE_STARTED	(BIT(0))
 	int				msg_enable;
 
 	struct pci_params		pci_params;
@@ -591,6 +596,7 @@ struct qed_dev {
 	union {
 		struct qed_common_cb_ops	*common;
 		struct qed_eth_cb_ops		*eth;
+		struct qed_iscsi_cb_ops		*iscsi;
 	} protocol_ops;
 	void				*ops_cookie;
 
@@ -600,7 +606,7 @@ struct qed_dev {
 	struct qed_cb_ll2_info		*ll2;
 	u8				ll2_mac_address[ETH_ALEN];
 #endif
-
+	DECLARE_HASHTABLE(connections, 10);
 	const struct firmware		*firmware;
 
 	u32 rdma_max_sge;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index 754f6a9..a4234c0 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -29,6 +29,7 @@
 #include "qed_hw.h"
 #include "qed_init_ops.h"
 #include "qed_int.h"
+#include "qed_iscsi.h"
 #include "qed_ll2.h"
 #include "qed_mcp.h"
 #include "qed_reg_addr.h"
@@ -155,6 +156,9 @@ void qed_resc_free(struct qed_dev *cdev)
 #ifdef CONFIG_QED_LL2
 		qed_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
 #endif
+		if (IS_ENABLED(CONFIG_QEDI) &&
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
 		qed_iov_free(p_hwfn);
 		qed_dmae_info_free(p_hwfn);
 		qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -411,6 +415,7 @@ int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 
 int qed_resc_alloc(struct qed_dev *cdev)
 {
+	struct qed_iscsi_info *p_iscsi_info;
 #ifdef CONFIG_QED_LL2
 	struct qed_ll2_info *p_ll2_info;
 #endif
@@ -532,6 +537,13 @@ int qed_resc_alloc(struct qed_dev *cdev)
 			p_hwfn->p_ll2_info = p_ll2_info;
 		}
 #endif
+		if (IS_ENABLED(CONFIG_QEDI) &&
+			p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
+			p_iscsi_info = qed_iscsi_alloc(p_hwfn);
+			if (!p_iscsi_info)
+				goto alloc_no_mem;
+			p_hwfn->p_iscsi_info = p_iscsi_info;
+		}
 
 		/* DMA info initialization */
 		rc = qed_dmae_info_alloc(p_hwfn);
@@ -585,6 +597,9 @@ void qed_resc_setup(struct qed_dev *cdev)
 		if (p_hwfn->using_ll2)
 			qed_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
 #endif
+		if (IS_ENABLED(CONFIG_QEDI) &&
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+			qed_iscsi_setup(p_hwfn, p_hwfn->p_iscsi_info);
 	}
 }
 
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h
index 0948be6..cc28066 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_int.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_int.h
@@ -218,7 +218,6 @@ struct qed_igu_info {
 	u16			free_blks;
 };
 
-/* TODO Names of function may change... */
 void qed_int_igu_init_pure_rt(struct qed_hwfn *p_hwfn,
 			      struct qed_ptt *p_ptt,
 			      bool b_set,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
new file mode 100644
index 0000000..cb22dad
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
@@ -0,0 +1,1310 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <asm/param.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/etherdevice.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+#include <linux/version.h>
+#include <linux/workqueue.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/qed/qed_iscsi_if.h>
+#include "qed.h"
+#include "qed_cxt.h"
+#include "qed_dev_api.h"
+#include "qed_hsi.h"
+#include "qed_hw.h"
+#include "qed_int.h"
+#include "qed_iscsi.h"
+#include "qed_ll2.h"
+#include "qed_mcp.h"
+#include "qed_sp.h"
+#include "qed_sriov.h"
+#include "qed_reg_addr.h"
+
+struct qed_iscsi_conn {
+	struct list_head list_entry;
+	bool free_on_delete;
+
+	u16 conn_id;
+	u32 icid;
+	u32 fw_cid;
+
+	u8 layer_code;
+	u8 offl_flags;
+	u8 connect_mode;
+	u32 initial_ack;
+	dma_addr_t sq_pbl_addr;
+	struct qed_chain r2tq;
+	struct qed_chain xhq;
+	struct qed_chain uhq;
+
+	struct tcp_upload_params *tcp_upload_params_virt_addr;
+	dma_addr_t tcp_upload_params_phys_addr;
+	struct scsi_terminate_extra_params *queue_cnts_virt_addr;
+	dma_addr_t queue_cnts_phys_addr;
+	dma_addr_t syn_phy_addr;
+
+	u16 syn_ip_payload_length;
+	u8 local_mac[6];
+	u8 remote_mac[6];
+	u16 vlan_id;
+	u8 tcp_flags;
+	u8 ip_version;
+	u32 remote_ip[4];
+	u32 local_ip[4];
+	u8 ka_max_probe_cnt;
+	u8 dup_ack_theshold;
+	u32 rcv_next;
+	u32 snd_una;
+	u32 snd_next;
+	u32 snd_max;
+	u32 snd_wnd;
+	u32 rcv_wnd;
+	u32 snd_wl1;
+	u32 cwnd;
+	u32 ss_thresh;
+	u16 srtt;
+	u16 rtt_var;
+	u32 ts_time;
+	u32 ts_recent;
+	u32 ts_recent_age;
+	u32 total_rt;
+	u32 ka_timeout_delta;
+	u32 rt_timeout_delta;
+	u8 dup_ack_cnt;
+	u8 snd_wnd_probe_cnt;
+	u8 ka_probe_cnt;
+	u8 rt_cnt;
+	u32 flow_label;
+	u32 ka_timeout;
+	u32 ka_interval;
+	u32 max_rt_time;
+	u32 initial_rcv_wnd;
+	u8 ttl;
+	u8 tos_or_tc;
+	u16 remote_port;
+	u16 local_port;
+	u16 mss;
+	u8 snd_wnd_scale;
+	u8 rcv_wnd_scale;
+	u32 ts_ticks_per_second;
+	u16 da_timeout_value;
+	u8 ack_frequency;
+
+	u8 update_flag;
+	u8 default_cq;
+	u32 max_seq_size;
+	u32 max_recv_pdu_length;
+	u32 max_send_pdu_length;
+	u32 first_seq_length;
+	u32 exp_stat_sn;
+	u32 stat_sn;
+	u16 physical_q0;
+	u16 physical_q1;
+	u8 abortive_dsconnect;
+};
+
+static int
+qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
+			enum spq_mode comp_mode,
+			struct qed_spq_comp_cb *p_comp_addr,
+			void *event_context, iscsi_event_cb_t async_event_cb)
+{
+	struct iscsi_init_ramrod_params *p_ramrod = NULL;
+	struct scsi_init_func_queues *p_queue = NULL;
+	struct qed_iscsi_pf_params *p_params = NULL;
+	struct iscsi_spe_func_init *p_init = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = 0;
+	u32 dval;
+	u16 val;
+	u8 i;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = qed_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_INIT_FUNC,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_init;
+	p_init = &p_ramrod->iscsi_init_spe;
+	p_params = &p_hwfn->pf_params.iscsi_pf_params;
+	p_queue = &p_init->q_params;
+
+	SET_FIELD(p_init->hdr.flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, ISCSI_SLOW_PATH_LAYER_CODE);
+	p_init->hdr.op_code = ISCSI_RAMROD_CMD_ID_INIT_FUNC;
+
+	val = p_params->half_way_close_timeout;
+	p_init->half_way_close_timeout = cpu_to_le16(val);
+	p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring;
+	p_init->num_r2tq_pages_in_ring = p_params->num_r2tq_pages_in_ring;
+	p_init->num_uhq_pages_in_ring = p_params->num_uhq_pages_in_ring;
+	p_init->func_params.log_page_size = p_params->log_page_size;
+	val = p_params->num_tasks;
+	p_init->func_params.num_tasks = cpu_to_le16(val);
+	p_init->debug_mode.flags = p_params->debug_mode;
+
+	DMA_REGPAIR_LE(p_queue->glbl_q_params_addr,
+		       p_params->glbl_q_params_addr);
+
+	val = p_params->cq_num_entries;
+	p_queue->cq_num_entries = cpu_to_le16(val);
+	val = p_params->cmdq_num_entries;
+	p_queue->cmdq_num_entries = cpu_to_le16(val);
+	p_queue->num_queues = p_params->num_queues;
+	dval = (u8)p_hwfn->hw_info.resc_start[QED_CMDQS_CQS];
+	p_queue->queue_relative_offset = (u8)dval;
+	p_queue->cq_sb_pi = p_params->gl_rq_pi;
+	p_queue->cmdq_sb_pi = p_params->gl_cmd_pi;
+
+	for (i = 0; i < p_params->num_queues; i++) {
+		val = p_hwfn->sbs_info[i]->igu_sb_id;
+		p_queue->cq_cmdq_sb_num_arr[i] = cpu_to_le16(val);
+	}
+
+	p_queue->bdq_resource_id = ISCSI_BDQ_ID(p_hwfn->port_id);
+
+	DMA_REGPAIR_LE(p_queue->bdq_pbl_base_address[BDQ_ID_RQ],
+		       p_params->bdq_pbl_base_addr[BDQ_ID_RQ]);
+	p_queue->bdq_pbl_num_entries[BDQ_ID_RQ] =
+	    p_params->bdq_pbl_num_entries[BDQ_ID_RQ];
+	val = p_params->bdq_xoff_threshold[BDQ_ID_RQ];
+	p_queue->bdq_xoff_threshold[BDQ_ID_RQ] = cpu_to_le16(val);
+	val = p_params->bdq_xon_threshold[BDQ_ID_RQ];
+	p_queue->bdq_xon_threshold[BDQ_ID_RQ] = cpu_to_le16(val);
+
+	DMA_REGPAIR_LE(p_queue->bdq_pbl_base_address[BDQ_ID_IMM_DATA],
+		       p_params->bdq_pbl_base_addr[BDQ_ID_IMM_DATA]);
+	p_queue->bdq_pbl_num_entries[BDQ_ID_IMM_DATA] =
+	    p_params->bdq_pbl_num_entries[BDQ_ID_IMM_DATA];
+	val = p_params->bdq_xoff_threshold[BDQ_ID_IMM_DATA];
+	p_queue->bdq_xoff_threshold[BDQ_ID_IMM_DATA] = cpu_to_le16(val);
+	val = p_params->bdq_xon_threshold[BDQ_ID_IMM_DATA];
+	p_queue->bdq_xon_threshold[BDQ_ID_IMM_DATA] = cpu_to_le16(val);
+	val = p_params->rq_buffer_size;
+	p_queue->rq_buffer_size = cpu_to_le16(val);
+	if (p_params->is_target) {
+		SET_FIELD(p_queue->q_validity,
+			  SCSI_INIT_FUNC_QUEUES_RQ_VALID, 1);
+		if (p_queue->bdq_pbl_num_entries[BDQ_ID_IMM_DATA])
+			SET_FIELD(p_queue->q_validity,
+				  SCSI_INIT_FUNC_QUEUES_IMM_DATA_VALID, 1);
+		SET_FIELD(p_queue->q_validity,
+			  SCSI_INIT_FUNC_QUEUES_CMD_VALID, 1);
+	} else {
+		SET_FIELD(p_queue->q_validity,
+			  SCSI_INIT_FUNC_QUEUES_RQ_VALID, 1);
+	}
+	p_ramrod->tcp_init.two_msl_timer = cpu_to_le32(p_params->two_msl_timer);
+	val = p_params->tx_sws_timer;
+	p_ramrod->tcp_init.tx_sws_timer = cpu_to_le16(val);
+	p_ramrod->tcp_init.maxfinrt = p_params->max_fin_rt;
+
+	p_hwfn->p_iscsi_info->event_context = event_context;
+	p_hwfn->p_iscsi_info->event_cb = async_event_cb;
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
+				     struct qed_iscsi_conn *p_conn,
+				     enum spq_mode comp_mode,
+				     struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_spe_conn_offload *p_ramrod = NULL;
+	struct tcp_offload_params_opt2 *p_tcp2 = NULL;
+	struct tcp_offload_params *p_tcp = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	union qed_qm_pq_params pq_params;
+	u16 pq0_id = 0, pq1_id = 0;
+	dma_addr_t r2tq_pbl_addr;
+	dma_addr_t xhq_pbl_addr;
+	dma_addr_t uhq_pbl_addr;
+	int rc = 0;
+	u32 dval;
+	u16 wval;
+	u8 ucval;
+	u8 i;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_conn_offload;
+
+	/* Transmission PQ is the first of the PF */
+	memset(&pq_params, 0, sizeof(pq_params));
+	pq0_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
+	p_conn->physical_q0 = cpu_to_le16(pq0_id);
+	p_ramrod->iscsi.physical_q0 = cpu_to_le16(pq0_id);
+
+	/* iSCSI Pure-ACK PQ */
+	pq_params.iscsi.q_idx = 1;
+	pq1_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
+	p_conn->physical_q1 = cpu_to_le16(pq1_id);
+	p_ramrod->iscsi.physical_q1 = cpu_to_le16(pq1_id);
+
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN;
+	SET_FIELD(p_ramrod->hdr.flags, ISCSI_SLOW_PATH_HDR_LAYER_CODE,
+		  p_conn->layer_code);
+
+	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
+	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
+
+	DMA_REGPAIR_LE(p_ramrod->iscsi.sq_pbl_addr, p_conn->sq_pbl_addr);
+
+	r2tq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->r2tq);
+	DMA_REGPAIR_LE(p_ramrod->iscsi.r2tq_pbl_addr, r2tq_pbl_addr);
+
+	xhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->xhq);
+	DMA_REGPAIR_LE(p_ramrod->iscsi.xhq_pbl_addr, xhq_pbl_addr);
+
+	uhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->uhq);
+	DMA_REGPAIR_LE(p_ramrod->iscsi.uhq_pbl_addr, uhq_pbl_addr);
+
+	p_ramrod->iscsi.initial_ack = cpu_to_le32(p_conn->initial_ack);
+	p_ramrod->iscsi.flags = p_conn->offl_flags;
+	p_ramrod->iscsi.default_cq = p_conn->default_cq;
+	p_ramrod->iscsi.stat_sn = cpu_to_le32(p_conn->stat_sn);
+
+	if (!GET_FIELD(p_ramrod->iscsi.flags,
+		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
+		p_tcp = &p_ramrod->tcp;
+		ucval = p_conn->local_mac[1];
+		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->local_mac[0];
+		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->local_mac[3];
+		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->local_mac[2];
+		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->local_mac[5];
+		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->local_mac[4];
+		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
+		ucval = p_conn->remote_mac[1];
+		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->remote_mac[0];
+		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->remote_mac[3];
+		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->remote_mac[2];
+		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->remote_mac[5];
+		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->remote_mac[4];
+		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
+
+		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
+
+		p_tcp->flags = p_conn->tcp_flags;
+		p_tcp->ip_version = p_conn->ip_version;
+		for (i = 0; i < 4; i++) {
+			dval = p_conn->remote_ip[i];
+			p_tcp->remote_ip[i] = cpu_to_le32(dval);
+			dval = p_conn->local_ip[i];
+			p_tcp->local_ip[i] = cpu_to_le32(dval);
+		}
+		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
+		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
+
+		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
+		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
+		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
+		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
+		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
+		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
+		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
+		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
+		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
+		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
+		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
+		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
+		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
+		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
+		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
+		dval = p_conn->ka_timeout_delta;
+		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
+		dval = p_conn->rt_timeout_delta;
+		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
+		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
+		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
+		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
+		p_tcp->rt_cnt = p_conn->rt_cnt;
+		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
+		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
+		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
+		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
+		dval = p_conn->initial_rcv_wnd;
+		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
+		p_tcp->ttl = p_conn->ttl;
+		p_tcp->tos_or_tc = p_conn->tos_or_tc;
+		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
+		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
+		p_tcp->mss = cpu_to_le16(p_conn->mss);
+		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
+		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
+		dval = p_conn->ts_ticks_per_second;
+		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
+		wval = p_conn->da_timeout_value;
+		p_tcp->da_timeout_value = cpu_to_le16(wval);
+		p_tcp->ack_frequency = p_conn->ack_frequency;
+		p_tcp->connect_mode = p_conn->connect_mode;
+	} else {
+		p_tcp2 =
+		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
+		ucval = p_conn->local_mac[1];
+		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->local_mac[0];
+		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->local_mac[3];
+		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->local_mac[2];
+		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->local_mac[5];
+		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->local_mac[4];
+		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
+
+		ucval = p_conn->remote_mac[1];
+		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->remote_mac[0];
+		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->remote_mac[3];
+		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->remote_mac[2];
+		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->remote_mac[5];
+		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->remote_mac[4];
+		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
+
+		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);
+		p_tcp2->flags = p_conn->tcp_flags;
+
+		p_tcp2->ip_version = p_conn->ip_version;
+		for (i = 0; i < 4; i++) {
+			dval = p_conn->remote_ip[i];
+			p_tcp2->remote_ip[i] = cpu_to_le32(dval);
+			dval = p_conn->local_ip[i];
+			p_tcp2->local_ip[i] = cpu_to_le32(dval);
+		}
+
+		p_tcp2->flow_label = cpu_to_le32(p_conn->flow_label);
+		p_tcp2->ttl = p_conn->ttl;
+		p_tcp2->tos_or_tc = p_conn->tos_or_tc;
+		p_tcp2->remote_port = cpu_to_le16(p_conn->remote_port);
+		p_tcp2->local_port = cpu_to_le16(p_conn->local_port);
+		p_tcp2->mss = cpu_to_le16(p_conn->mss);
+		p_tcp2->rcv_wnd_scale = p_conn->rcv_wnd_scale;
+		p_tcp2->connect_mode = p_conn->connect_mode;
+		wval = p_conn->syn_ip_payload_length;
+		p_tcp2->syn_ip_payload_length = cpu_to_le16(wval);
+		p_tcp2->syn_phy_addr_lo = DMA_LO_LE(p_conn->syn_phy_addr);
+		p_tcp2->syn_phy_addr_hi = DMA_HI_LE(p_conn->syn_phy_addr);
+	}
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_update(struct qed_hwfn *p_hwfn,
+				    struct qed_iscsi_conn *p_conn,
+				    enum spq_mode comp_mode,
+				    struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_conn_update_ramrod_params *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = -EINVAL;
+	u32 dval;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_UPDATE_CONN,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_conn_update;
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_UPDATE_CONN;
+	SET_FIELD(p_ramrod->hdr.flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
+
+	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
+	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
+	p_ramrod->flags = p_conn->update_flag;
+	p_ramrod->max_seq_size = cpu_to_le32(p_conn->max_seq_size);
+	dval = p_conn->max_recv_pdu_length;
+	p_ramrod->max_recv_pdu_length = cpu_to_le32(dval);
+	dval = p_conn->max_send_pdu_length;
+	p_ramrod->max_send_pdu_length = cpu_to_le32(dval);
+	dval = p_conn->first_seq_length;
+	p_ramrod->first_seq_length = cpu_to_le32(dval);
+	p_ramrod->exp_stat_sn = cpu_to_le32(p_conn->exp_stat_sn);
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_terminate(struct qed_hwfn *p_hwfn,
+				       struct qed_iscsi_conn *p_conn,
+				       enum spq_mode comp_mode,
+				       struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_spe_conn_termination *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = -EINVAL;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_TERMINATION_CONN,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_conn_terminate;
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_TERMINATION_CONN;
+	SET_FIELD(p_ramrod->hdr.flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
+
+	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
+	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
+	p_ramrod->abortive = p_conn->abortive_dsconnect;
+
+	DMA_REGPAIR_LE(p_ramrod->query_params_addr,
+		       p_conn->tcp_upload_params_phys_addr);
+	DMA_REGPAIR_LE(p_ramrod->queue_cnts_addr, p_conn->queue_cnts_phys_addr);
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn,
+				      struct qed_iscsi_conn *p_conn,
+				      enum spq_mode comp_mode,
+				      struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_slow_path_hdr *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = -EINVAL;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_CLEAR_SQ,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_empty;
+	p_ramrod->op_code = ISCSI_RAMROD_CMD_ID_CLEAR_SQ;
+	SET_FIELD(p_ramrod->flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn,
+				  enum spq_mode comp_mode,
+				  struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_spe_func_dstry *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = 0;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = qed_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_DESTROY_FUNC,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_destroy;
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_DESTROY_FUNC;
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static void __iomem *qed_iscsi_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
+{
+	return (u8 __iomem *)p_hwfn->doorbells +
+			     qed_db_addr(cid, DQ_DEMS_LEGACY);
+}
+
+static void __iomem *qed_iscsi_get_primary_bdq_prod(struct qed_hwfn *p_hwfn,
+						    u8 bdq_id)
+{
+	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
+
+	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_MSDM_RAM +
+			     MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
+							     bdq_id);
+}
+
+static void __iomem *qed_iscsi_get_secondary_bdq_prod(struct qed_hwfn *p_hwfn,
+						      u8 bdq_id)
+{
+	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
+
+	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
+			     TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
+							     bdq_id);
+}
+
+static int qed_iscsi_setup_connection(struct qed_hwfn *p_hwfn,
+				      struct qed_iscsi_conn *p_conn)
+{
+	if (!p_conn->queue_cnts_virt_addr)
+		return -ENOMEM;
+	memset(p_conn->queue_cnts_virt_addr, 0,
+	       sizeof(*p_conn->queue_cnts_virt_addr));
+
+	if (!p_conn->tcp_upload_params_virt_addr)
+		return -ENOMEM;
+	memset(p_conn->tcp_upload_params_virt_addr, 0,
+	       sizeof(*p_conn->tcp_upload_params_virt_addr));
+
+	if (!p_conn->r2tq.p_virt_addr)
+		return -ENOMEM;
+	qed_chain_pbl_zero_mem(&p_conn->r2tq);
+
+	if (!p_conn->uhq.p_virt_addr)
+		return -ENOMEM;
+	qed_chain_pbl_zero_mem(&p_conn->uhq);
+
+	if (!p_conn->xhq.p_virt_addr)
+		return -ENOMEM;
+	qed_chain_pbl_zero_mem(&p_conn->xhq);
+
+	return 0;
+}
+
+static int qed_iscsi_allocate_connection(struct qed_hwfn *p_hwfn,
+					 struct qed_iscsi_conn **p_out_conn)
+{
+	u16 uhq_num_elements = 0, xhq_num_elements = 0, r2tq_num_elements = 0;
+	struct scsi_terminate_extra_params *p_q_cnts = NULL;
+	struct qed_iscsi_pf_params *p_params = NULL;
+	struct tcp_upload_params *p_tcp = NULL;
+	struct qed_iscsi_conn *p_conn = NULL;
+	int rc = 0;
+
+	/* Try finding a free connection that can be used */
+	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+	if (!list_empty(&p_hwfn->p_iscsi_info->free_list))
+		p_conn = list_first_entry(&p_hwfn->p_iscsi_info->free_list,
+					  struct qed_iscsi_conn, list_entry);
+	if (p_conn) {
+		list_del(&p_conn->list_entry);
+		spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+		*p_out_conn = p_conn;
+		return 0;
+	}
+	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+
+	/* Need to allocate a new connection */
+	p_params = &p_hwfn->pf_params.iscsi_pf_params;
+
+	p_conn = kzalloc(sizeof(*p_conn), GFP_KERNEL);
+	if (!p_conn)
+		return -ENOMEM;
+
+	p_q_cnts = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+				      sizeof(*p_q_cnts),
+				      &p_conn->queue_cnts_phys_addr,
+				      GFP_KERNEL);
+	if (!p_q_cnts)
+		goto nomem_queue_cnts_param;
+	p_conn->queue_cnts_virt_addr = p_q_cnts;
+
+	p_tcp = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+				   sizeof(*p_tcp),
+				   &p_conn->tcp_upload_params_phys_addr,
+				   GFP_KERNEL);
+	if (!p_tcp)
+		goto nomem_upload_param;
+	p_conn->tcp_upload_params_virt_addr = p_tcp;
+
+	/* Each R2TQ element is 0x80 bytes */
+	r2tq_num_elements = p_params->num_r2tq_pages_in_ring *
+			    QED_CHAIN_PAGE_SIZE / 0x80;
+	rc = qed_chain_alloc(p_hwfn->cdev,
+			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
+			     QED_CHAIN_MODE_PBL,
+			     QED_CHAIN_CNT_TYPE_U16,
+			     r2tq_num_elements, 0x80, &p_conn->r2tq);
+	if (rc)
+		goto nomem_r2tq;
+
+	uhq_num_elements = p_params->num_uhq_pages_in_ring *
+			   QED_CHAIN_PAGE_SIZE / sizeof(struct iscsi_uhqe);
+	rc = qed_chain_alloc(p_hwfn->cdev,
+			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
+			     QED_CHAIN_MODE_PBL,
+			     QED_CHAIN_CNT_TYPE_U16,
+			     uhq_num_elements,
+			     sizeof(struct iscsi_uhqe), &p_conn->uhq);
+	if (rc)
+		goto nomem_uhq;
+
+	xhq_num_elements = uhq_num_elements;
+	rc = qed_chain_alloc(p_hwfn->cdev,
+			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
+			     QED_CHAIN_MODE_PBL,
+			     QED_CHAIN_CNT_TYPE_U16,
+			     xhq_num_elements,
+			     sizeof(struct iscsi_xhqe), &p_conn->xhq);
+	if (rc)
+		goto nomem;
+
+	p_conn->free_on_delete = true;
+	*p_out_conn = p_conn;
+	return 0;
+
+nomem:
+	qed_chain_free(p_hwfn->cdev, &p_conn->uhq);
+nomem_uhq:
+	qed_chain_free(p_hwfn->cdev, &p_conn->r2tq);
+nomem_r2tq:
+	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+			  sizeof(struct tcp_upload_params),
+			  p_conn->tcp_upload_params_virt_addr,
+			  p_conn->tcp_upload_params_phys_addr);
+nomem_upload_param:
+	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+			  sizeof(struct scsi_terminate_extra_params),
+			  p_conn->queue_cnts_virt_addr,
+			  p_conn->queue_cnts_phys_addr);
+nomem_queue_cnts_param:
+	kfree(p_conn);
+
+	return -ENOMEM;
+}
+
+static int qed_iscsi_acquire_connection(struct qed_hwfn *p_hwfn,
+					struct qed_iscsi_conn *p_in_conn,
+					struct qed_iscsi_conn **p_out_conn)
+{
+	struct qed_iscsi_conn *p_conn = NULL;
+	int rc = 0;
+	u32 icid;
+
+	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+	rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_ISCSI, &icid);
+	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+	if (rc)
+		return rc;
+
+	/* Use input connection or allocate a new one */
+	if (p_in_conn)
+		p_conn = p_in_conn;
+	else
+		rc = qed_iscsi_allocate_connection(p_hwfn, &p_conn);
+
+	if (!rc)
+		rc = qed_iscsi_setup_connection(p_hwfn, p_conn);
+
+	if (rc) {
+		spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+		qed_cxt_release_cid(p_hwfn, icid);
+		spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+		return rc;
+	}
+
+	p_conn->icid = icid;
+	p_conn->conn_id = (u16)icid;
+	p_conn->fw_cid = (p_hwfn->hw_info.opaque_fid << 16) | icid;
+
+	*p_out_conn = p_conn;
+
+	return rc;
+}
+
+static void qed_iscsi_release_connection(struct qed_hwfn *p_hwfn,
+					 struct qed_iscsi_conn *p_conn)
+{
+	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+	list_add_tail(&p_conn->list_entry, &p_hwfn->p_iscsi_info->free_list);
+	qed_cxt_release_cid(p_hwfn, p_conn->icid);
+	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+}
+
+struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn)
+{
+	struct qed_iscsi_info *p_iscsi_info;
+
+	p_iscsi_info = kzalloc(sizeof(*p_iscsi_info), GFP_KERNEL);
+	if (!p_iscsi_info) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_iscsi_info\n");
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&p_iscsi_info->free_list);
+	return p_iscsi_info;
+}
+
+void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
+		     struct qed_iscsi_info *p_iscsi_info)
+{
+	spin_lock_init(&p_iscsi_info->lock);
+}
+
+void qed_iscsi_free(struct qed_hwfn *p_hwfn,
+		    struct qed_iscsi_info *p_iscsi_info)
+{
+	kfree(p_iscsi_info);
+}
+
+static void _qed_iscsi_get_tstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct tstorm_iscsi_stats_drv tstats;
+	u32 tstats_addr;
+
+	memset(&tstats, 0, sizeof(tstats));
+	tstats_addr = BAR0_MAP_REG_TSDM_RAM +
+		      TSTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, sizeof(tstats));
+
+	p_stats->iscsi_rx_bytes_cnt =
+	    HILO_64_REGPAIR(tstats.iscsi_rx_bytes_cnt);
+	p_stats->iscsi_rx_packet_cnt =
+	    HILO_64_REGPAIR(tstats.iscsi_rx_packet_cnt);
+	p_stats->iscsi_cmdq_threshold_cnt =
+	    le32_to_cpu(tstats.iscsi_cmdq_threshold_cnt);
+	p_stats->iscsi_rq_threshold_cnt =
+	    le32_to_cpu(tstats.iscsi_rq_threshold_cnt);
+	p_stats->iscsi_immq_threshold_cnt =
+	    le32_to_cpu(tstats.iscsi_immq_threshold_cnt);
+}
+
+static void _qed_iscsi_get_mstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct mstorm_iscsi_stats_drv mstats;
+	u32 mstats_addr;
+
+	memset(&mstats, 0, sizeof(mstats));
+	mstats_addr = BAR0_MAP_REG_MSDM_RAM +
+		      MSTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, sizeof(mstats));
+
+	p_stats->iscsi_rx_dropped_pdus_task_not_valid =
+	    HILO_64_REGPAIR(mstats.iscsi_rx_dropped_pdus_task_not_valid);
+}
+
+static void _qed_iscsi_get_ustats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct ustorm_iscsi_stats_drv ustats;
+	u32 ustats_addr;
+
+	memset(&ustats, 0, sizeof(ustats));
+	ustats_addr = BAR0_MAP_REG_USDM_RAM +
+		      USTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, sizeof(ustats));
+
+	p_stats->iscsi_rx_data_pdu_cnt =
+	    HILO_64_REGPAIR(ustats.iscsi_rx_data_pdu_cnt);
+	p_stats->iscsi_rx_r2t_pdu_cnt =
+	    HILO_64_REGPAIR(ustats.iscsi_rx_r2t_pdu_cnt);
+	p_stats->iscsi_rx_total_pdu_cnt =
+	    HILO_64_REGPAIR(ustats.iscsi_rx_total_pdu_cnt);
+}
+
+static void _qed_iscsi_get_xstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct xstorm_iscsi_stats_drv xstats;
+	u32 xstats_addr;
+
+	memset(&xstats, 0, sizeof(xstats));
+	xstats_addr = BAR0_MAP_REG_XSDM_RAM +
+		      XSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &xstats, xstats_addr, sizeof(xstats));
+
+	p_stats->iscsi_tx_go_to_slow_start_event_cnt =
+	    HILO_64_REGPAIR(xstats.iscsi_tx_go_to_slow_start_event_cnt);
+	p_stats->iscsi_tx_fast_retransmit_event_cnt =
+	    HILO_64_REGPAIR(xstats.iscsi_tx_fast_retransmit_event_cnt);
+}
+
+static void _qed_iscsi_get_ystats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct ystorm_iscsi_stats_drv ystats;
+	u32 ystats_addr;
+
+	memset(&ystats, 0, sizeof(ystats));
+	ystats_addr = BAR0_MAP_REG_YSDM_RAM +
+		      YSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &ystats, ystats_addr, sizeof(ystats));
+
+	p_stats->iscsi_tx_data_pdu_cnt =
+	    HILO_64_REGPAIR(ystats.iscsi_tx_data_pdu_cnt);
+	p_stats->iscsi_tx_r2t_pdu_cnt =
+	    HILO_64_REGPAIR(ystats.iscsi_tx_r2t_pdu_cnt);
+	p_stats->iscsi_tx_total_pdu_cnt =
+	    HILO_64_REGPAIR(ystats.iscsi_tx_total_pdu_cnt);
+}
+
+static void _qed_iscsi_get_pstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct pstorm_iscsi_stats_drv pstats;
+	u32 pstats_addr;
+
+	memset(&pstats, 0, sizeof(pstats));
+	pstats_addr = BAR0_MAP_REG_PSDM_RAM +
+		      PSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, sizeof(pstats));
+
+	p_stats->iscsi_tx_bytes_cnt =
+	    HILO_64_REGPAIR(pstats.iscsi_tx_bytes_cnt);
+	p_stats->iscsi_tx_packet_cnt =
+	    HILO_64_REGPAIR(pstats.iscsi_tx_packet_cnt);
+}
+
+static int qed_iscsi_get_stats(struct qed_hwfn *p_hwfn,
+			       struct qed_iscsi_stats *stats)
+{
+	struct qed_ptt *p_ptt;
+
+	memset(stats, 0, sizeof(*stats));
+
+	p_ptt = qed_ptt_acquire(p_hwfn);
+	if (!p_ptt) {
+		DP_ERR(p_hwfn, "Failed to acquire ptt\n");
+		return -EAGAIN;
+	}
+
+	_qed_iscsi_get_tstats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_mstats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_ustats(p_hwfn, p_ptt, stats);
+
+	_qed_iscsi_get_xstats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_ystats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_pstats(p_hwfn, p_ptt, stats);
+
+	qed_ptt_release(p_hwfn, p_ptt);
+
+	return 0;
+}
+
+struct qed_hash_iscsi_con {
+	struct hlist_node node;
+	struct qed_iscsi_conn *con;
+};
+
+static int qed_fill_iscsi_dev_info(struct qed_dev *cdev,
+				   struct qed_dev_iscsi_info *info)
+{
+	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+	int rc;
+
+	memset(info, 0, sizeof(*info));
+	rc = qed_fill_dev_info(cdev, &info->common);
+
+	info->primary_dbq_rq_addr =
+	    qed_iscsi_get_primary_bdq_prod(hwfn, BDQ_ID_RQ);
+	info->secondary_bdq_rq_addr =
+	    qed_iscsi_get_secondary_bdq_prod(hwfn, BDQ_ID_RQ);
+
+	return rc;
+}
+
+static void qed_register_iscsi_ops(struct qed_dev *cdev,
+				   struct qed_iscsi_cb_ops *ops, void *cookie)
+{
+	cdev->protocol_ops.iscsi = ops;
+	cdev->ops_cookie = cookie;
+}
+
+static struct qed_hash_iscsi_con *qed_iscsi_get_hash(struct qed_dev *cdev,
+						     u32 handle)
+{
+	struct qed_hash_iscsi_con *hash_con = NULL;
+
+	if (!(cdev->flags & QED_FLAG_STORAGE_STARTED))
+		return NULL;
+
+	hash_for_each_possible(cdev->connections, hash_con, node, handle) {
+		if (hash_con->con->icid == handle)
+			break;
+	}
+
+	if (!hash_con || (hash_con->con->icid != handle))
+		return NULL;
+
+	return hash_con;
+}
+
+static int qed_iscsi_stop(struct qed_dev *cdev)
+{
+	int rc;
+
+	if (!(cdev->flags & QED_FLAG_STORAGE_STARTED)) {
+		DP_NOTICE(cdev, "iscsi already stopped\n");
+		return 0;
+	}
+
+	if (!hash_empty(cdev->connections)) {
+		DP_NOTICE(cdev,
+			  "Can't stop iscsi - not all connections were returned\n");
+		return -EINVAL;
+	}
+
+	/* Stop the iSCSI function */
+	rc = qed_sp_iscsi_func_stop(QED_LEADING_HWFN(cdev),
+				    QED_SPQ_MODE_EBLOCK, NULL);
+	cdev->flags &= ~QED_FLAG_STORAGE_STARTED;
+
+	return rc;
+}
+
+static int qed_iscsi_start(struct qed_dev *cdev,
+			   struct qed_iscsi_tid *tasks,
+			   void *event_context,
+			   iscsi_event_cb_t async_event_cb)
+{
+	int rc;
+
+	if (cdev->flags & QED_FLAG_STORAGE_STARTED) {
+		DP_NOTICE(cdev, "iscsi already started\n");
+		return 0;
+	}
+
+	rc = qed_sp_iscsi_func_start(QED_LEADING_HWFN(cdev),
+				     QED_SPQ_MODE_EBLOCK, NULL, event_context,
+				     async_event_cb);
+	if (rc) {
+		DP_NOTICE(cdev, "Failed to start iscsi\n");
+		return rc;
+	}
+
+	cdev->flags |= QED_FLAG_STORAGE_STARTED;
+	hash_init(cdev->connections);
+
+	if (tasks) {
+		struct qed_tid_mem *tid_info = kzalloc(sizeof(*tid_info),
+						       GFP_KERNEL);
+
+		if (!tid_info) {
+			DP_NOTICE(cdev,
+				  "Failed to allocate tasks information\n");
+			qed_iscsi_stop(cdev);
+			return -ENOMEM;
+		}
+
+		rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
+					      tid_info);
+		if (rc) {
+			DP_NOTICE(cdev, "Failed to gather task information\n");
+			qed_iscsi_stop(cdev);
+			kfree(tid_info);
+			return rc;
+		}
+
+		/* Fill task information */
+		tasks->size = tid_info->tid_size;
+		tasks->num_tids_per_block = tid_info->num_tids_per_block;
+		memcpy(tasks->blocks, tid_info->blocks,
+		       MAX_TID_BLOCKS * sizeof(tasks->blocks[0]));
+
+		kfree(tid_info);
+	}
+
+	return 0;
+}
+
+static int qed_iscsi_acquire_conn(struct qed_dev *cdev,
+				  u32 *handle,
+				  u32 *fw_cid, void __iomem **p_doorbell)
+{
+	struct qed_hash_iscsi_con *hash_con;
+	int rc;
+
+	/* Allocate a hashed connection */
+	hash_con = kzalloc(sizeof(*hash_con), GFP_ATOMIC);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to allocate hashed connection\n");
+		return -ENOMEM;
+	}
+
+	/* Acquire the connection */
+	rc = qed_iscsi_acquire_connection(QED_LEADING_HWFN(cdev), NULL,
+					  &hash_con->con);
+	if (rc) {
+		DP_NOTICE(cdev, "Failed to acquire Connection\n");
+		kfree(hash_con);
+		return rc;
+	}
+
+	/* Add the connection to the hash table */
+	*handle = hash_con->con->icid;
+	*fw_cid = hash_con->con->fw_cid;
+	hash_add(cdev->connections, &hash_con->node, *handle);
+
+	if (p_doorbell)
+		*p_doorbell = qed_iscsi_get_db_addr(QED_LEADING_HWFN(cdev),
+						    *handle);
+
+	return 0;
+}
+
+static int qed_iscsi_release_conn(struct qed_dev *cdev, u32 handle)
+{
+	struct qed_hash_iscsi_con *hash_con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	hlist_del(&hash_con->node);
+	qed_iscsi_release_connection(QED_LEADING_HWFN(cdev), hash_con->con);
+	kfree(hash_con);
+
+	return 0;
+}
+
+static int qed_iscsi_offload_conn(struct qed_dev *cdev,
+				  u32 handle,
+				  struct qed_iscsi_params_offload *conn_info)
+{
+	struct qed_hash_iscsi_con *hash_con;
+	struct qed_iscsi_conn *con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	/* Update the connection with information from the params */
+	con = hash_con->con;
+
+	ether_addr_copy(con->local_mac, conn_info->src.mac);
+	ether_addr_copy(con->remote_mac, conn_info->dst.mac);
+	memcpy(con->local_ip, conn_info->src.ip, sizeof(con->local_ip));
+	memcpy(con->remote_ip, conn_info->dst.ip, sizeof(con->remote_ip));
+	con->local_port = conn_info->src.port;
+	con->remote_port = conn_info->dst.port;
+
+	con->layer_code = conn_info->layer_code;
+	con->sq_pbl_addr = conn_info->sq_pbl_addr;
+	con->initial_ack = conn_info->initial_ack;
+	con->vlan_id = conn_info->vlan_id;
+	con->tcp_flags = conn_info->tcp_flags;
+	con->ip_version = conn_info->ip_version;
+	con->default_cq = conn_info->default_cq;
+	con->ka_max_probe_cnt = conn_info->ka_max_probe_cnt;
+	con->dup_ack_theshold = conn_info->dup_ack_theshold;
+	con->rcv_next = conn_info->rcv_next;
+	con->snd_una = conn_info->snd_una;
+	con->snd_next = conn_info->snd_next;
+	con->snd_max = conn_info->snd_max;
+	con->snd_wnd = conn_info->snd_wnd;
+	con->rcv_wnd = conn_info->rcv_wnd;
+	con->snd_wl1 = conn_info->snd_wl1;
+	con->cwnd = conn_info->cwnd;
+	con->ss_thresh = conn_info->ss_thresh;
+	con->srtt = conn_info->srtt;
+	con->rtt_var = conn_info->rtt_var;
+	con->ts_time = conn_info->ts_time;
+	con->ts_recent = conn_info->ts_recent;
+	con->ts_recent_age = conn_info->ts_recent_age;
+	con->total_rt = conn_info->total_rt;
+	con->ka_timeout_delta = conn_info->ka_timeout_delta;
+	con->rt_timeout_delta = conn_info->rt_timeout_delta;
+	con->dup_ack_cnt = conn_info->dup_ack_cnt;
+	con->snd_wnd_probe_cnt = conn_info->snd_wnd_probe_cnt;
+	con->ka_probe_cnt = conn_info->ka_probe_cnt;
+	con->rt_cnt = conn_info->rt_cnt;
+	con->flow_label = conn_info->flow_label;
+	con->ka_timeout = conn_info->ka_timeout;
+	con->ka_interval = conn_info->ka_interval;
+	con->max_rt_time = conn_info->max_rt_time;
+	con->initial_rcv_wnd = conn_info->initial_rcv_wnd;
+	con->ttl = conn_info->ttl;
+	con->tos_or_tc = conn_info->tos_or_tc;
+	con->remote_port = conn_info->remote_port;
+	con->local_port = conn_info->local_port;
+	con->mss = conn_info->mss;
+	con->snd_wnd_scale = conn_info->snd_wnd_scale;
+	con->rcv_wnd_scale = conn_info->rcv_wnd_scale;
+	con->ts_ticks_per_second = conn_info->ts_ticks_per_second;
+	con->da_timeout_value = conn_info->da_timeout_value;
+	con->ack_frequency = conn_info->ack_frequency;
+
+	/* Set default values on other connection fields */
+	con->offl_flags = 0x1;
+
+	return qed_sp_iscsi_conn_offload(QED_LEADING_HWFN(cdev), con,
+					 QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_update_conn(struct qed_dev *cdev,
+				 u32 handle,
+				 struct qed_iscsi_params_update *conn_info)
+{
+	struct qed_hash_iscsi_con *hash_con;
+	struct qed_iscsi_conn *con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	/* Update the connection with information from the params */
+	con = hash_con->con;
+	con->update_flag = conn_info->update_flag;
+	con->max_seq_size = conn_info->max_seq_size;
+	con->max_recv_pdu_length = conn_info->max_recv_pdu_length;
+	con->max_send_pdu_length = conn_info->max_send_pdu_length;
+	con->first_seq_length = conn_info->first_seq_length;
+	con->exp_stat_sn = conn_info->exp_stat_sn;
+
+	return qed_sp_iscsi_conn_update(QED_LEADING_HWFN(cdev), con,
+					QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_clear_conn_sq(struct qed_dev *cdev, u32 handle)
+{
+	struct qed_hash_iscsi_con *hash_con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	return qed_sp_iscsi_conn_clear_sq(QED_LEADING_HWFN(cdev),
+					  hash_con->con,
+					  QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_destroy_conn(struct qed_dev *cdev,
+				  u32 handle, u8 abrt_conn)
+{
+	struct qed_hash_iscsi_con *hash_con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	hash_con->con->abortive_dsconnect = abrt_conn;
+
+	return qed_sp_iscsi_conn_terminate(QED_LEADING_HWFN(cdev),
+					   hash_con->con,
+					   QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_stats(struct qed_dev *cdev, struct qed_iscsi_stats *stats)
+{
+	return qed_iscsi_get_stats(QED_LEADING_HWFN(cdev), stats);
+}
+
+static const struct qed_iscsi_ops qed_iscsi_ops_pass = {
+	.common = &qed_common_ops_pass,
+	.ll2 = &qed_ll2_ops_pass,
+	.fill_dev_info = &qed_fill_iscsi_dev_info,
+	.register_ops = &qed_register_iscsi_ops,
+	.start = &qed_iscsi_start,
+	.stop = &qed_iscsi_stop,
+	.acquire_conn = &qed_iscsi_acquire_conn,
+	.release_conn = &qed_iscsi_release_conn,
+	.offload_conn = &qed_iscsi_offload_conn,
+	.update_conn = &qed_iscsi_update_conn,
+	.destroy_conn = &qed_iscsi_destroy_conn,
+	.clear_sq = &qed_iscsi_clear_conn_sq,
+	.get_stats = &qed_iscsi_stats,
+};
+
+const struct qed_iscsi_ops *qed_get_iscsi_ops(void)
+{
+	return &qed_iscsi_ops_pass;
+}
+EXPORT_SYMBOL(qed_get_iscsi_ops);
+
+void qed_put_iscsi_ops(void)
+{
+}
+EXPORT_SYMBOL(qed_put_iscsi_ops);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.h b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
new file mode 100644
index 0000000..269848c
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
@@ -0,0 +1,52 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_ISCSI_H
+#define _QED_ISCSI_H
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/qed/tcp_common.h>
+#include <linux/qed/qed_iscsi_if.h>
+#include <linux/qed/qed_chain.h>
+#include "qed.h"
+#include "qed_hsi.h"
+#include "qed_mcp.h"
+#include "qed_sp.h"
+
+struct qed_iscsi_info {
+	spinlock_t lock;
+	struct list_head free_list;
+	u16 max_num_outstanding_tasks;
+	void *event_context;
+	iscsi_event_cb_t event_cb;
+};
+
+#ifdef CONFIG_QED_LL2
+extern const struct qed_ll2_ops qed_ll2_ops_pass;
+#endif
+
+#if IS_ENABLED(CONFIG_QEDI)
+struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn);
+
+void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
+		     struct qed_iscsi_info *p_iscsi_info);
+
+void qed_iscsi_free(struct qed_hwfn *p_hwfn,
+		    struct qed_iscsi_info *p_iscsi_info);
+#else /* IS_ENABLED(CONFIG_QEDI) */
+static inline struct qed_iscsi_info *qed_iscsi_alloc(
+		struct qed_hwfn *p_hwfn) { return NULL; }
+static inline void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
+		struct qed_iscsi_info *p_iscsi_info) {}
+static inline void qed_iscsi_free(struct qed_hwfn *p_hwfn,
+		struct qed_iscsi_info *p_iscsi_info) {}
+#endif /* IS_ENABLED(CONFIG_QEDI) */
+
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
index ddd410a..07e2f77 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
@@ -2187,6 +2187,5 @@ const struct qed_eth_ops *qed_get_eth_ops(void)
 
 void qed_put_eth_ops(void)
 {
-	/* TODO - reference count for module? */
 }
 EXPORT_SYMBOL(qed_put_eth_ops);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
index a6db107..e67f3c9 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
@@ -299,6 +299,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		p_tx->cur_completing_bd_idx = 1;
 		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
 		tx_frag = p_pkt->bds_set[0].tx_frag;
+#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
 		if (p_ll2_conn->gsi_enable)
 			qed_ll2b_release_tx_gsi_packet(p_hwfn,
 						       p_ll2_conn->my_id,
@@ -307,6 +308,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 						       b_last_frag,
 						       b_last_packet);
 		else
+#endif
 			qed_ll2b_complete_tx_packet(p_hwfn,
 						    p_ll2_conn->my_id,
 						    p_pkt->cookie,
@@ -367,6 +369,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 
 		spin_unlock_irqrestore(&p_tx->lock, flags);
 		tx_frag = p_pkt->bds_set[0].tx_frag;
+#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
 		if (p_ll2_conn->gsi_enable)
 			qed_ll2b_complete_tx_gsi_packet(p_hwfn,
 							p_ll2_conn->my_id,
@@ -374,6 +377,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 							tx_frag,
 							b_last_frag, !num_bds);
 		else
+#endif
 			qed_ll2b_complete_tx_packet(p_hwfn,
 						    p_ll2_conn->my_id,
 						    p_pkt->cookie,
@@ -421,6 +425,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 			  "Mismatch between active_descq and the LL2 Rx chain\n");
 	list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
 
+#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
 	spin_unlock_irqrestore(&p_rx->lock, lock_flags);
 	qed_ll2b_complete_rx_gsi_packet(p_hwfn,
 					p_ll2_info->my_id,
@@ -433,6 +438,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 					src_mac_addrhi,
 					src_mac_addrlo, b_last_cqe);
 	spin_lock_irqsave(&p_rx->lock, lock_flags);
+#endif
 
 	return 0;
 }
@@ -1516,11 +1522,12 @@ static void qed_ll2_register_cb_ops(struct qed_dev *cdev,
 
 static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 {
-	struct qed_ll2_info ll2_info;
+	struct qed_ll2_info *ll2_info;
 	struct qed_ll2_buffer *buffer;
 	enum qed_ll2_conn_type conn_type;
 	struct qed_ptt *p_ptt;
 	int rc, i;
+	u8 gsi_enable = 1;
 
 	/* Initialize LL2 locks & lists */
 	INIT_LIST_HEAD(&cdev->ll2->list);
@@ -1552,6 +1559,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 	switch (QED_LEADING_HWFN(cdev)->hw_info.personality) {
 	case QED_PCI_ISCSI:
 		conn_type = QED_LL2_TYPE_ISCSI;
+		gsi_enable = 0;
 		break;
 	case QED_PCI_ETH_ROCE:
 		conn_type = QED_LL2_TYPE_ROCE;
@@ -1561,18 +1569,23 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 	}
 
 	/* Prepare the temporary ll2 information */
-	memset(&ll2_info, 0, sizeof(ll2_info));
-	ll2_info.conn_type = conn_type;
-	ll2_info.mtu = params->mtu;
-	ll2_info.rx_drop_ttl0_flg = params->drop_ttl0_packets;
-	ll2_info.rx_vlan_removal_en = params->rx_vlan_stripping;
-	ll2_info.tx_tc = 0;
-	ll2_info.tx_dest = CORE_TX_DEST_NW;
-	ll2_info.gsi_enable = 1;
-
-	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), &ll2_info,
+	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
+	if (!ll2_info) {
+		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
+		rc = -ENOMEM;
+		goto fail;
+	}
+	ll2_info->conn_type = conn_type;
+	ll2_info->mtu = params->mtu;
+	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
+	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
+	ll2_info->tx_tc = 0;
+	ll2_info->tx_dest = CORE_TX_DEST_NW;
+	ll2_info->gsi_enable = gsi_enable;
+
+	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), ll2_info,
 					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
 					&cdev->ll2->handle);
+	kfree(ll2_info);
 	if (rc) {
 		DP_INFO(cdev, "Failed to acquire LL2 connection\n");
 		goto fail;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
index 4ee3151..a01ad9d 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
@@ -1239,7 +1239,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
 	if (link.link_up)
 		if_link->link_up = true;
 
-	/* TODO - at the moment assume supported and advertised speed equal */
 	if_link->supported_caps = QED_LM_FIBRE_BIT;
 	if (params.speed.autoneg)
 		if_link->supported_caps |= QED_LM_Autoneg_BIT;
@@ -1294,7 +1293,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
 	if (link.link_up)
 		if_link->speed = link.speed;
 
-	/* TODO - fill duplex properly */
 	if_link->duplex = DUPLEX_FULL;
 	qed_mcp_get_media_type(hwfn->cdev, &media_type);
 	if_link->port = qed_get_port_type(media_type);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
index dff520e..2e5f51b 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
@@ -314,9 +314,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
  * same pf_num may be used by two different hwfn
- * TODO - this shouldn't really be in .h file, but until all fields
- * required during hw-init will be placed in their correct place in shmem
- * we need it in qed_dev.c [for readin the nvram reflection in shmem].
  */
 #define MCP_PF_ID_BY_REL(p_hwfn, rel_pfid) (QED_IS_BB((p_hwfn)->cdev) ?	       \
 					    ((rel_pfid) |		       \
@@ -324,9 +321,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
 					    rel_pfid)
 #define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
 
-/* TODO - this is only correct as long as only BB is supported, and
- * no port-swapping is implemented; Afterwards we'll need to fix it.
- */
 #define MFW_PORT(_p_hwfn)       ((_p_hwfn)->abs_pf_id %	\
 				 ((_p_hwfn)->cdev->num_ports_in_engines * 2))
 struct qed_mcp_info {
diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
index b414a05..9754420 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
@@ -82,6 +82,8 @@
 	0x1c80000UL
 #define BAR0_MAP_REG_XSDM_RAM \
 	0x1e00000UL
+#define BAR0_MAP_REG_YSDM_RAM \
+	0x1e80000UL
 #define  NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF \
 	0x5011f4UL
 #define  PRS_REG_SEARCH_TCP \
diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
index caff415..d3fa578 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
@@ -24,6 +24,7 @@
 #include "qed_hsi.h"
 #include "qed_hw.h"
 #include "qed_int.h"
+#include "qed_iscsi.h"
 #include "qed_mcp.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
@@ -249,6 +250,20 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
 		return qed_sriov_eqe_event(p_hwfn,
 					   p_eqe->opcode,
 					   p_eqe->echo, &p_eqe->data);
+	case PROTOCOLID_ISCSI:
+		if (!IS_ENABLED(CONFIG_QEDI))
+			return -EINVAL;
+
+		if (p_hwfn->p_iscsi_info->event_cb) {
+			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
+
+			return p_iscsi->event_cb(p_iscsi->event_context,
+						 p_eqe->opcode, &p_eqe->data);
+		} else {
+			DP_NOTICE(p_hwfn,
+				  "iSCSI async completion is not set\n");
+			return -EINVAL;
+		}
 	default:
 		DP_NOTICE(p_hwfn,
 			  "Unknown Async completion for protocol: %d\n",
diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
index f9ae903..c0c9fa8 100644
--- a/include/linux/qed/qed_if.h
+++ b/include/linux/qed/qed_if.h
@@ -165,6 +165,7 @@ struct qed_iscsi_pf_params {
 	u32 max_cwnd;
 	u16 cq_num_entries;
 	u16 cmdq_num_entries;
+	u32 two_msl_timer;
 	u16 dup_ack_threshold;
 	u16 tx_sws_timer;
 	u16 min_rto;
@@ -271,6 +272,7 @@ struct qed_dev_info {
 enum qed_sb_type {
 	QED_SB_TYPE_L2_QUEUE,
 	QED_SB_TYPE_CNQ,
+	QED_SB_TYPE_STORAGE,
 };
 
 enum qed_protocol {
diff --git a/include/linux/qed/qed_iscsi_if.h b/include/linux/qed/qed_iscsi_if.h
new file mode 100644
index 0000000..6735ee5
--- /dev/null
+++ b/include/linux/qed/qed_iscsi_if.h
@@ -0,0 +1,249 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_ISCSI_IF_H
+#define _QED_ISCSI_IF_H
+#include <linux/types.h>
+#include <linux/qed/qed_if.h>
+
+typedef int (*iscsi_event_cb_t) (void *context,
+				 u8 fw_event_code, void *fw_handle);
+struct qed_iscsi_stats {
+	u64 iscsi_rx_bytes_cnt;
+	u64 iscsi_rx_packet_cnt;
+	u64 iscsi_rx_new_ooo_isle_events_cnt;
+	u32 iscsi_cmdq_threshold_cnt;
+	u32 iscsi_rq_threshold_cnt;
+	u32 iscsi_immq_threshold_cnt;
+
+	u64 iscsi_rx_dropped_pdus_task_not_valid;
+
+	u64 iscsi_rx_data_pdu_cnt;
+	u64 iscsi_rx_r2t_pdu_cnt;
+	u64 iscsi_rx_total_pdu_cnt;
+
+	u64 iscsi_tx_go_to_slow_start_event_cnt;
+	u64 iscsi_tx_fast_retransmit_event_cnt;
+
+	u64 iscsi_tx_data_pdu_cnt;
+	u64 iscsi_tx_r2t_pdu_cnt;
+	u64 iscsi_tx_total_pdu_cnt;
+
+	u64 iscsi_tx_bytes_cnt;
+	u64 iscsi_tx_packet_cnt;
+};
+
+struct qed_dev_iscsi_info {
+	struct qed_dev_info common;
+
+	void __iomem *primary_bdq_rq_addr;
+	void __iomem *secondary_bdq_rq_addr;
+};
+
+struct qed_iscsi_id_params {
+	u8 mac[ETH_ALEN];
+	u32 ip[4];
+	u16 port;
+};
+
+struct qed_iscsi_params_offload {
+	u8 layer_code;
+	dma_addr_t sq_pbl_addr;
+	u32 initial_ack;
+
+	struct qed_iscsi_id_params src;
+	struct qed_iscsi_id_params dst;
+	u16 vlan_id;
+	u8 tcp_flags;
+	u8 ip_version;
+	u8 default_cq;
+
+	u8 ka_max_probe_cnt;
+	u8 dup_ack_theshold;
+	u32 rcv_next;
+	u32 snd_una;
+	u32 snd_next;
+	u32 snd_max;
+	u32 snd_wnd;
+	u32 rcv_wnd;
+	u32 snd_wl1;
+	u32 cwnd;
+	u32 ss_thresh;
+	u16 srtt;
+	u16 rtt_var;
+	u32 ts_time;
+	u32 ts_recent;
+	u32 ts_recent_age;
+	u32 total_rt;
+	u32 ka_timeout_delta;
+	u32 rt_timeout_delta;
+	u8 dup_ack_cnt;
+	u8 snd_wnd_probe_cnt;
+	u8 ka_probe_cnt;
+	u8 rt_cnt;
+	u32 flow_label;
+	u32 ka_timeout;
+	u32 ka_interval;
+	u32 max_rt_time;
+	u32 initial_rcv_wnd;
+	u8 ttl;
+	u8 tos_or_tc;
+	u16 remote_port;
+	u16 local_port;
+	u16 mss;
+	u8 snd_wnd_scale;
+	u8 rcv_wnd_scale;
+	u32 ts_ticks_per_second;
+	u16 da_timeout_value;
+	u8 ack_frequency;
+};
+
+struct qed_iscsi_params_update {
+	u8 update_flag;
+#define QED_ISCSI_CONN_HD_EN            BIT(0)
+#define QED_ISCSI_CONN_DD_EN            BIT(1)
+#define QED_ISCSI_CONN_INITIAL_R2T      BIT(2)
+#define QED_ISCSI_CONN_IMMEDIATE_DATA   BIT(3)
+
+	u32 max_seq_size;
+	u32 max_recv_pdu_length;
+	u32 max_send_pdu_length;
+	u32 first_seq_length;
+	u32 exp_stat_sn;
+};
+
+#define MAX_TID_BLOCKS_ISCSI (512)
+struct qed_iscsi_tid {
+	u32 size;		/* In bytes per task */
+	u32 num_tids_per_block;
+	u8 *blocks[MAX_TID_BLOCKS_ISCSI];
+};
+
+struct qed_iscsi_cb_ops {
+	struct qed_common_cb_ops common;
+
+	/* TODO - need to add handler for async events */
+};
+
+struct qed_iscsi_ops {
+	const struct qed_common_ops *common;
+
+	const struct qed_ll2_ops *ll2;
+
+	int (*fill_dev_info)(struct qed_dev *cdev,
+			     struct qed_dev_iscsi_info *info);
+
+	void (*register_ops)(struct qed_dev *cdev,
+			     struct qed_iscsi_cb_ops *ops, void *cookie);
+
+/**
+ * @brief start iscsi in FW
+ *
+ * @param cdev
+ * @param tasks - qed will fill information about tasks
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*start)(struct qed_dev *cdev,
+		     struct qed_iscsi_tid *tasks,
+		     void *event_context, iscsi_event_cb_t async_event_cb);
+
+/**
+ * @brief stops iscsi in FW
+ *
+ * @param cdev
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*stop)(struct qed_dev *cdev);
+
+/**
+ * @brief acquire_conn - acquire a new iscsi connection
+ *
+ * @param cdev
+ * @param handle - qed will fill handle that should be used
+ *                 henceforth as identifier of the connection.
+ * @param p_doorbell - qed will fill the address of the doorbell.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*acquire_conn)(struct qed_dev *cdev,
+			    u32 *handle,
+			    u32 *fw_cid, void __iomem **p_doorbell);
+
+/**
+ * @brief release_conn - release a previously acquired iscsi connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*release_conn)(struct qed_dev *cdev, u32 handle);
+
+/**
+ * @brief offload_conn - configures an offloaded connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ * @param conn_info - the configuration to use for the offload.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*offload_conn)(struct qed_dev *cdev,
+			    u32 handle,
+			    struct qed_iscsi_params_offload *conn_info);
+
+/**
+ * @brief update_conn - updates an offloaded connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ * @param conn_info - the configuration to use for the offload.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*update_conn)(struct qed_dev *cdev,
+			   u32 handle,
+			   struct qed_iscsi_params_update *conn_info);
+
+/**
+ * @brief destroy_conn - stops an offloaded connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*destroy_conn)(struct qed_dev *cdev, u32 handle, u8 abrt_conn);
+
+/**
+ * @brief clear_sq - clear all tasks in the sq
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*clear_sq)(struct qed_dev *cdev, u32 handle);
+
+/**
+ * @brief get iSCSI related statistics
+ *
+ * @param cdev
+ * @param stats - pointer to struct that will be filled with stats
+ *
+ * @return 0 on success, error otherwise.
+ */
+	int (*get_stats)(struct qed_dev *cdev,
+			 struct qed_iscsi_stats *stats);
+};
+
+const struct qed_iscsi_ops *qed_get_iscsi_ops(void);
+void qed_put_iscsi_ops(void);
+#endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Yuval Mintz, Arun Easi, Yuval Mintz

From: Yuval Mintz <Yuval.Mintz@qlogic.com>

This adds the backbone required for the various HW initializations
which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
4xxxx line of adapters - FW notification, resource initializations, etc.

Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
---
 drivers/net/ethernet/qlogic/Kconfig            |   15 +
 drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
 drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
 drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
 drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
 drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
 drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
 drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
 drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
 drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
 drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
 drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
 drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
 include/linux/qed/qed_if.h                     |    2 +
 include/linux/qed/qed_iscsi_if.h               |  249 +++++
 15 files changed, 1692 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
 create mode 100644 include/linux/qed/qed_iscsi_if.h

diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
index 0df1391f9..bad4fae 100644
--- a/drivers/net/ethernet/qlogic/Kconfig
+++ b/drivers/net/ethernet/qlogic/Kconfig
@@ -118,4 +118,19 @@ config INFINIBAND_QEDR
 	  for QLogic QED. This would be replaced by the 'real' option
 	  once the QEDR driver is added [+relocated].
 
+config QED_ISCSI
+	bool
+
+config QEDI
+	tristate "QLogic QED 25/40/100Gb iSCSI driver"
+	depends on QED
+	select QED_LL2
+	select QED_ISCSI
+	default n
+	---help---
+	  This provides a temporary node that allows the compilation
+	  and logical testing of the hardware offload iSCSI support
+	  for QLogic QED. This would be replaced by the 'real' option
+	  once the QEDI driver is added [+relocated].
+
 endif # NET_VENDOR_QLOGIC
diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index cda0af7..b76669c 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -6,3 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
 qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
 qed-$(CONFIG_QED_LL2) += qed_ll2.o
 qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
+qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
index 653bb57..a61b1c0 100644
--- a/drivers/net/ethernet/qlogic/qed/qed.h
+++ b/drivers/net/ethernet/qlogic/qed/qed.h
@@ -35,6 +35,7 @@
 
 #define QED_WFQ_UNIT	100
 
+#define ISCSI_BDQ_ID(_port_id) (_port_id)
 #define QED_WID_SIZE            (1024)
 #define QED_PF_DEMS_SIZE        (4)
 
@@ -167,6 +168,7 @@ enum QED_RESOURCES {
 	QED_ILT,
 	QED_LL2_QUEUE,
 	QED_RDMA_STATS_QUEUE,
+	QED_CMDQS_CQS,
 	QED_MAX_RESC,
 };
 
@@ -379,6 +381,7 @@ struct qed_hwfn {
 	bool				using_ll2;
 	struct qed_ll2_info		*p_ll2_info;
 	struct qed_rdma_info		*p_rdma_info;
+	struct qed_iscsi_info		*p_iscsi_info;
 	struct qed_pf_params		pf_params;
 
 	bool b_rdma_enabled_in_prs;
@@ -578,6 +581,8 @@ struct qed_dev {
 	/* Linux specific here */
 	struct  qede_dev		*edev;
 	struct  pci_dev			*pdev;
+	u32 flags;
+#define QED_FLAG_STORAGE_STARTED	(BIT(0))
 	int				msg_enable;
 
 	struct pci_params		pci_params;
@@ -591,6 +596,7 @@ struct qed_dev {
 	union {
 		struct qed_common_cb_ops	*common;
 		struct qed_eth_cb_ops		*eth;
+		struct qed_iscsi_cb_ops		*iscsi;
 	} protocol_ops;
 	void				*ops_cookie;
 
@@ -600,7 +606,7 @@ struct qed_dev {
 	struct qed_cb_ll2_info		*ll2;
 	u8				ll2_mac_address[ETH_ALEN];
 #endif
-
+	DECLARE_HASHTABLE(connections, 10);
 	const struct firmware		*firmware;
 
 	u32 rdma_max_sge;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index 754f6a9..a4234c0 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -29,6 +29,7 @@
 #include "qed_hw.h"
 #include "qed_init_ops.h"
 #include "qed_int.h"
+#include "qed_iscsi.h"
 #include "qed_ll2.h"
 #include "qed_mcp.h"
 #include "qed_reg_addr.h"
@@ -155,6 +156,9 @@ void qed_resc_free(struct qed_dev *cdev)
 #ifdef CONFIG_QED_LL2
 		qed_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
 #endif
+		if (IS_ENABLED(CONFIG_QEDI) &&
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
 		qed_iov_free(p_hwfn);
 		qed_dmae_info_free(p_hwfn);
 		qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -411,6 +415,7 @@ int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 
 int qed_resc_alloc(struct qed_dev *cdev)
 {
+	struct qed_iscsi_info *p_iscsi_info;
 #ifdef CONFIG_QED_LL2
 	struct qed_ll2_info *p_ll2_info;
 #endif
@@ -532,6 +537,13 @@ int qed_resc_alloc(struct qed_dev *cdev)
 			p_hwfn->p_ll2_info = p_ll2_info;
 		}
 #endif
+		if (IS_ENABLED(CONFIG_QEDI) &&
+			p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
+			p_iscsi_info = qed_iscsi_alloc(p_hwfn);
+			if (!p_iscsi_info)
+				goto alloc_no_mem;
+			p_hwfn->p_iscsi_info = p_iscsi_info;
+		}
 
 		/* DMA info initialization */
 		rc = qed_dmae_info_alloc(p_hwfn);
@@ -585,6 +597,9 @@ void qed_resc_setup(struct qed_dev *cdev)
 		if (p_hwfn->using_ll2)
 			qed_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
 #endif
+		if (IS_ENABLED(CONFIG_QEDI) &&
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+			qed_iscsi_setup(p_hwfn, p_hwfn->p_iscsi_info);
 	}
 }
 
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h
index 0948be6..cc28066 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_int.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_int.h
@@ -218,7 +218,6 @@ struct qed_igu_info {
 	u16			free_blks;
 };
 
-/* TODO Names of function may change... */
 void qed_int_igu_init_pure_rt(struct qed_hwfn *p_hwfn,
 			      struct qed_ptt *p_ptt,
 			      bool b_set,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
new file mode 100644
index 0000000..cb22dad
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
@@ -0,0 +1,1310 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <asm/param.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/etherdevice.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+#include <linux/version.h>
+#include <linux/workqueue.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/qed/qed_iscsi_if.h>
+#include "qed.h"
+#include "qed_cxt.h"
+#include "qed_dev_api.h"
+#include "qed_hsi.h"
+#include "qed_hw.h"
+#include "qed_int.h"
+#include "qed_iscsi.h"
+#include "qed_ll2.h"
+#include "qed_mcp.h"
+#include "qed_sp.h"
+#include "qed_sriov.h"
+#include "qed_reg_addr.h"
+
+struct qed_iscsi_conn {
+	struct list_head list_entry;
+	bool free_on_delete;
+
+	u16 conn_id;
+	u32 icid;
+	u32 fw_cid;
+
+	u8 layer_code;
+	u8 offl_flags;
+	u8 connect_mode;
+	u32 initial_ack;
+	dma_addr_t sq_pbl_addr;
+	struct qed_chain r2tq;
+	struct qed_chain xhq;
+	struct qed_chain uhq;
+
+	struct tcp_upload_params *tcp_upload_params_virt_addr;
+	dma_addr_t tcp_upload_params_phys_addr;
+	struct scsi_terminate_extra_params *queue_cnts_virt_addr;
+	dma_addr_t queue_cnts_phys_addr;
+	dma_addr_t syn_phy_addr;
+
+	u16 syn_ip_payload_length;
+	u8 local_mac[6];
+	u8 remote_mac[6];
+	u16 vlan_id;
+	u8 tcp_flags;
+	u8 ip_version;
+	u32 remote_ip[4];
+	u32 local_ip[4];
+	u8 ka_max_probe_cnt;
+	u8 dup_ack_theshold;
+	u32 rcv_next;
+	u32 snd_una;
+	u32 snd_next;
+	u32 snd_max;
+	u32 snd_wnd;
+	u32 rcv_wnd;
+	u32 snd_wl1;
+	u32 cwnd;
+	u32 ss_thresh;
+	u16 srtt;
+	u16 rtt_var;
+	u32 ts_time;
+	u32 ts_recent;
+	u32 ts_recent_age;
+	u32 total_rt;
+	u32 ka_timeout_delta;
+	u32 rt_timeout_delta;
+	u8 dup_ack_cnt;
+	u8 snd_wnd_probe_cnt;
+	u8 ka_probe_cnt;
+	u8 rt_cnt;
+	u32 flow_label;
+	u32 ka_timeout;
+	u32 ka_interval;
+	u32 max_rt_time;
+	u32 initial_rcv_wnd;
+	u8 ttl;
+	u8 tos_or_tc;
+	u16 remote_port;
+	u16 local_port;
+	u16 mss;
+	u8 snd_wnd_scale;
+	u8 rcv_wnd_scale;
+	u32 ts_ticks_per_second;
+	u16 da_timeout_value;
+	u8 ack_frequency;
+
+	u8 update_flag;
+	u8 default_cq;
+	u32 max_seq_size;
+	u32 max_recv_pdu_length;
+	u32 max_send_pdu_length;
+	u32 first_seq_length;
+	u32 exp_stat_sn;
+	u32 stat_sn;
+	u16 physical_q0;
+	u16 physical_q1;
+	u8 abortive_dsconnect;
+};
+
+static int
+qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
+			enum spq_mode comp_mode,
+			struct qed_spq_comp_cb *p_comp_addr,
+			void *event_context, iscsi_event_cb_t async_event_cb)
+{
+	struct iscsi_init_ramrod_params *p_ramrod = NULL;
+	struct scsi_init_func_queues *p_queue = NULL;
+	struct qed_iscsi_pf_params *p_params = NULL;
+	struct iscsi_spe_func_init *p_init = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = 0;
+	u32 dval;
+	u16 val;
+	u8 i;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = qed_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_INIT_FUNC,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_init;
+	p_init = &p_ramrod->iscsi_init_spe;
+	p_params = &p_hwfn->pf_params.iscsi_pf_params;
+	p_queue = &p_init->q_params;
+
+	SET_FIELD(p_init->hdr.flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, ISCSI_SLOW_PATH_LAYER_CODE);
+	p_init->hdr.op_code = ISCSI_RAMROD_CMD_ID_INIT_FUNC;
+
+	val = p_params->half_way_close_timeout;
+	p_init->half_way_close_timeout = cpu_to_le16(val);
+	p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring;
+	p_init->num_r2tq_pages_in_ring = p_params->num_r2tq_pages_in_ring;
+	p_init->num_uhq_pages_in_ring = p_params->num_uhq_pages_in_ring;
+	p_init->func_params.log_page_size = p_params->log_page_size;
+	val = p_params->num_tasks;
+	p_init->func_params.num_tasks = cpu_to_le16(val);
+	p_init->debug_mode.flags = p_params->debug_mode;
+
+	DMA_REGPAIR_LE(p_queue->glbl_q_params_addr,
+		       p_params->glbl_q_params_addr);
+
+	val = p_params->cq_num_entries;
+	p_queue->cq_num_entries = cpu_to_le16(val);
+	val = p_params->cmdq_num_entries;
+	p_queue->cmdq_num_entries = cpu_to_le16(val);
+	p_queue->num_queues = p_params->num_queues;
+	dval = (u8)p_hwfn->hw_info.resc_start[QED_CMDQS_CQS];
+	p_queue->queue_relative_offset = (u8)dval;
+	p_queue->cq_sb_pi = p_params->gl_rq_pi;
+	p_queue->cmdq_sb_pi = p_params->gl_cmd_pi;
+
+	for (i = 0; i < p_params->num_queues; i++) {
+		val = p_hwfn->sbs_info[i]->igu_sb_id;
+		p_queue->cq_cmdq_sb_num_arr[i] = cpu_to_le16(val);
+	}
+
+	p_queue->bdq_resource_id = ISCSI_BDQ_ID(p_hwfn->port_id);
+
+	DMA_REGPAIR_LE(p_queue->bdq_pbl_base_address[BDQ_ID_RQ],
+		       p_params->bdq_pbl_base_addr[BDQ_ID_RQ]);
+	p_queue->bdq_pbl_num_entries[BDQ_ID_RQ] =
+	    p_params->bdq_pbl_num_entries[BDQ_ID_RQ];
+	val = p_params->bdq_xoff_threshold[BDQ_ID_RQ];
+	p_queue->bdq_xoff_threshold[BDQ_ID_RQ] = cpu_to_le16(val);
+	val = p_params->bdq_xon_threshold[BDQ_ID_RQ];
+	p_queue->bdq_xon_threshold[BDQ_ID_RQ] = cpu_to_le16(val);
+
+	DMA_REGPAIR_LE(p_queue->bdq_pbl_base_address[BDQ_ID_IMM_DATA],
+		       p_params->bdq_pbl_base_addr[BDQ_ID_IMM_DATA]);
+	p_queue->bdq_pbl_num_entries[BDQ_ID_IMM_DATA] =
+	    p_params->bdq_pbl_num_entries[BDQ_ID_IMM_DATA];
+	val = p_params->bdq_xoff_threshold[BDQ_ID_IMM_DATA];
+	p_queue->bdq_xoff_threshold[BDQ_ID_IMM_DATA] = cpu_to_le16(val);
+	val = p_params->bdq_xon_threshold[BDQ_ID_IMM_DATA];
+	p_queue->bdq_xon_threshold[BDQ_ID_IMM_DATA] = cpu_to_le16(val);
+	val = p_params->rq_buffer_size;
+	p_queue->rq_buffer_size = cpu_to_le16(val);
+	if (p_params->is_target) {
+		SET_FIELD(p_queue->q_validity,
+			  SCSI_INIT_FUNC_QUEUES_RQ_VALID, 1);
+		if (p_queue->bdq_pbl_num_entries[BDQ_ID_IMM_DATA])
+			SET_FIELD(p_queue->q_validity,
+				  SCSI_INIT_FUNC_QUEUES_IMM_DATA_VALID, 1);
+		SET_FIELD(p_queue->q_validity,
+			  SCSI_INIT_FUNC_QUEUES_CMD_VALID, 1);
+	} else {
+		SET_FIELD(p_queue->q_validity,
+			  SCSI_INIT_FUNC_QUEUES_RQ_VALID, 1);
+	}
+	p_ramrod->tcp_init.two_msl_timer = cpu_to_le32(p_params->two_msl_timer);
+	val = p_params->tx_sws_timer;
+	p_ramrod->tcp_init.tx_sws_timer = cpu_to_le16(val);
+	p_ramrod->tcp_init.maxfinrt = p_params->max_fin_rt;
+
+	p_hwfn->p_iscsi_info->event_context = event_context;
+	p_hwfn->p_iscsi_info->event_cb = async_event_cb;
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
+				     struct qed_iscsi_conn *p_conn,
+				     enum spq_mode comp_mode,
+				     struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_spe_conn_offload *p_ramrod = NULL;
+	struct tcp_offload_params_opt2 *p_tcp2 = NULL;
+	struct tcp_offload_params *p_tcp = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	union qed_qm_pq_params pq_params;
+	u16 pq0_id = 0, pq1_id = 0;
+	dma_addr_t r2tq_pbl_addr;
+	dma_addr_t xhq_pbl_addr;
+	dma_addr_t uhq_pbl_addr;
+	int rc = 0;
+	u32 dval;
+	u16 wval;
+	u8 ucval;
+	u8 i;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_conn_offload;
+
+	/* Transmission PQ is the first of the PF */
+	memset(&pq_params, 0, sizeof(pq_params));
+	pq0_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
+	p_conn->physical_q0 = cpu_to_le16(pq0_id);
+	p_ramrod->iscsi.physical_q0 = cpu_to_le16(pq0_id);
+
+	/* iSCSI Pure-ACK PQ */
+	pq_params.iscsi.q_idx = 1;
+	pq1_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
+	p_conn->physical_q1 = cpu_to_le16(pq1_id);
+	p_ramrod->iscsi.physical_q1 = cpu_to_le16(pq1_id);
+
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN;
+	SET_FIELD(p_ramrod->hdr.flags, ISCSI_SLOW_PATH_HDR_LAYER_CODE,
+		  p_conn->layer_code);
+
+	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
+	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
+
+	DMA_REGPAIR_LE(p_ramrod->iscsi.sq_pbl_addr, p_conn->sq_pbl_addr);
+
+	r2tq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->r2tq);
+	DMA_REGPAIR_LE(p_ramrod->iscsi.r2tq_pbl_addr, r2tq_pbl_addr);
+
+	xhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->xhq);
+	DMA_REGPAIR_LE(p_ramrod->iscsi.xhq_pbl_addr, xhq_pbl_addr);
+
+	uhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->uhq);
+	DMA_REGPAIR_LE(p_ramrod->iscsi.uhq_pbl_addr, uhq_pbl_addr);
+
+	p_ramrod->iscsi.initial_ack = cpu_to_le32(p_conn->initial_ack);
+	p_ramrod->iscsi.flags = p_conn->offl_flags;
+	p_ramrod->iscsi.default_cq = p_conn->default_cq;
+	p_ramrod->iscsi.stat_sn = cpu_to_le32(p_conn->stat_sn);
+
+	if (!GET_FIELD(p_ramrod->iscsi.flags,
+		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
+		p_tcp = &p_ramrod->tcp;
+		ucval = p_conn->local_mac[1];
+		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->local_mac[0];
+		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->local_mac[3];
+		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->local_mac[2];
+		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->local_mac[5];
+		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->local_mac[4];
+		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
+		ucval = p_conn->remote_mac[1];
+		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->remote_mac[0];
+		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->remote_mac[3];
+		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->remote_mac[2];
+		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->remote_mac[5];
+		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->remote_mac[4];
+		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
+
+		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
+
+		p_tcp->flags = p_conn->tcp_flags;
+		p_tcp->ip_version = p_conn->ip_version;
+		for (i = 0; i < 4; i++) {
+			dval = p_conn->remote_ip[i];
+			p_tcp->remote_ip[i] = cpu_to_le32(dval);
+			dval = p_conn->local_ip[i];
+			p_tcp->local_ip[i] = cpu_to_le32(dval);
+		}
+		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
+		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
+
+		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
+		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
+		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
+		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
+		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
+		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
+		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
+		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
+		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
+		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
+		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
+		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
+		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
+		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
+		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
+		dval = p_conn->ka_timeout_delta;
+		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
+		dval = p_conn->rt_timeout_delta;
+		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
+		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
+		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
+		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
+		p_tcp->rt_cnt = p_conn->rt_cnt;
+		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
+		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
+		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
+		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
+		dval = p_conn->initial_rcv_wnd;
+		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
+		p_tcp->ttl = p_conn->ttl;
+		p_tcp->tos_or_tc = p_conn->tos_or_tc;
+		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
+		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
+		p_tcp->mss = cpu_to_le16(p_conn->mss);
+		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
+		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
+		dval = p_conn->ts_ticks_per_second;
+		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
+		wval = p_conn->da_timeout_value;
+		p_tcp->da_timeout_value = cpu_to_le16(wval);
+		p_tcp->ack_frequency = p_conn->ack_frequency;
+		p_tcp->connect_mode = p_conn->connect_mode;
+	} else {
+		p_tcp2 =
+		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
+		ucval = p_conn->local_mac[1];
+		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->local_mac[0];
+		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->local_mac[3];
+		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->local_mac[2];
+		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->local_mac[5];
+		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->local_mac[4];
+		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
+
+		ucval = p_conn->remote_mac[1];
+		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
+		ucval = p_conn->remote_mac[0];
+		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
+		ucval = p_conn->remote_mac[3];
+		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
+		ucval = p_conn->remote_mac[2];
+		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
+		ucval = p_conn->remote_mac[5];
+		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
+		ucval = p_conn->remote_mac[4];
+		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
+
+		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);
+		p_tcp2->flags = p_conn->tcp_flags;
+
+		p_tcp2->ip_version = p_conn->ip_version;
+		for (i = 0; i < 4; i++) {
+			dval = p_conn->remote_ip[i];
+			p_tcp2->remote_ip[i] = cpu_to_le32(dval);
+			dval = p_conn->local_ip[i];
+			p_tcp2->local_ip[i] = cpu_to_le32(dval);
+		}
+
+		p_tcp2->flow_label = cpu_to_le32(p_conn->flow_label);
+		p_tcp2->ttl = p_conn->ttl;
+		p_tcp2->tos_or_tc = p_conn->tos_or_tc;
+		p_tcp2->remote_port = cpu_to_le16(p_conn->remote_port);
+		p_tcp2->local_port = cpu_to_le16(p_conn->local_port);
+		p_tcp2->mss = cpu_to_le16(p_conn->mss);
+		p_tcp2->rcv_wnd_scale = p_conn->rcv_wnd_scale;
+		p_tcp2->connect_mode = p_conn->connect_mode;
+		wval = p_conn->syn_ip_payload_length;
+		p_tcp2->syn_ip_payload_length = cpu_to_le16(wval);
+		p_tcp2->syn_phy_addr_lo = DMA_LO_LE(p_conn->syn_phy_addr);
+		p_tcp2->syn_phy_addr_hi = DMA_HI_LE(p_conn->syn_phy_addr);
+	}
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_update(struct qed_hwfn *p_hwfn,
+				    struct qed_iscsi_conn *p_conn,
+				    enum spq_mode comp_mode,
+				    struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_conn_update_ramrod_params *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = -EINVAL;
+	u32 dval;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_UPDATE_CONN,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_conn_update;
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_UPDATE_CONN;
+	SET_FIELD(p_ramrod->hdr.flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
+
+	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
+	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
+	p_ramrod->flags = p_conn->update_flag;
+	p_ramrod->max_seq_size = cpu_to_le32(p_conn->max_seq_size);
+	dval = p_conn->max_recv_pdu_length;
+	p_ramrod->max_recv_pdu_length = cpu_to_le32(dval);
+	dval = p_conn->max_send_pdu_length;
+	p_ramrod->max_send_pdu_length = cpu_to_le32(dval);
+	dval = p_conn->first_seq_length;
+	p_ramrod->first_seq_length = cpu_to_le32(dval);
+	p_ramrod->exp_stat_sn = cpu_to_le32(p_conn->exp_stat_sn);
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_terminate(struct qed_hwfn *p_hwfn,
+				       struct qed_iscsi_conn *p_conn,
+				       enum spq_mode comp_mode,
+				       struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_spe_conn_termination *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = -EINVAL;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_TERMINATION_CONN,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_conn_terminate;
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_TERMINATION_CONN;
+	SET_FIELD(p_ramrod->hdr.flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
+
+	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
+	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
+	p_ramrod->abortive = p_conn->abortive_dsconnect;
+
+	DMA_REGPAIR_LE(p_ramrod->query_params_addr,
+		       p_conn->tcp_upload_params_phys_addr);
+	DMA_REGPAIR_LE(p_ramrod->queue_cnts_addr, p_conn->queue_cnts_phys_addr);
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn,
+				      struct qed_iscsi_conn *p_conn,
+				      enum spq_mode comp_mode,
+				      struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_slow_path_hdr *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = -EINVAL;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = p_conn->icid;
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_CLEAR_SQ,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_empty;
+	p_ramrod->op_code = ISCSI_RAMROD_CMD_ID_CLEAR_SQ;
+	SET_FIELD(p_ramrod->flags,
+		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn,
+				  enum spq_mode comp_mode,
+				  struct qed_spq_comp_cb *p_comp_addr)
+{
+	struct iscsi_spe_func_dstry *p_ramrod = NULL;
+	struct qed_spq_entry *p_ent = NULL;
+	struct qed_sp_init_data init_data;
+	int rc = 0;
+
+	/* Get SPQ entry */
+	memset(&init_data, 0, sizeof(init_data));
+	init_data.cid = qed_spq_get_cid(p_hwfn);
+	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_addr;
+
+	rc = qed_sp_init_request(p_hwfn, &p_ent,
+				 ISCSI_RAMROD_CMD_ID_DESTROY_FUNC,
+				 PROTOCOLID_ISCSI, &init_data);
+	if (rc)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.iscsi_destroy;
+	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_DESTROY_FUNC;
+
+	return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static void __iomem *qed_iscsi_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
+{
+	return (u8 __iomem *)p_hwfn->doorbells +
+			     qed_db_addr(cid, DQ_DEMS_LEGACY);
+}
+
+static void __iomem *qed_iscsi_get_primary_bdq_prod(struct qed_hwfn *p_hwfn,
+						    u8 bdq_id)
+{
+	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
+
+	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_MSDM_RAM +
+			     MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
+							     bdq_id);
+}
+
+static void __iomem *qed_iscsi_get_secondary_bdq_prod(struct qed_hwfn *p_hwfn,
+						      u8 bdq_id)
+{
+	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
+
+	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
+			     TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
+							     bdq_id);
+}
+
+static int qed_iscsi_setup_connection(struct qed_hwfn *p_hwfn,
+				      struct qed_iscsi_conn *p_conn)
+{
+	if (!p_conn->queue_cnts_virt_addr)
+		goto nomem;
+	memset(p_conn->queue_cnts_virt_addr, 0,
+	       sizeof(*p_conn->queue_cnts_virt_addr));
+
+	if (!p_conn->tcp_upload_params_virt_addr)
+		goto nomem;
+	memset(p_conn->tcp_upload_params_virt_addr, 0,
+	       sizeof(*p_conn->tcp_upload_params_virt_addr));
+
+	if (!p_conn->r2tq.p_virt_addr)
+		goto nomem;
+	qed_chain_pbl_zero_mem(&p_conn->r2tq);
+
+	if (!p_conn->uhq.p_virt_addr)
+		goto nomem;
+	qed_chain_pbl_zero_mem(&p_conn->uhq);
+
+	if (!p_conn->xhq.p_virt_addr)
+		goto nomem;
+	qed_chain_pbl_zero_mem(&p_conn->xhq);
+
+	return 0;
+nomem:
+	return -ENOMEM;
+}
+
+static int qed_iscsi_allocate_connection(struct qed_hwfn *p_hwfn,
+					 struct qed_iscsi_conn **p_out_conn)
+{
+	u16 uhq_num_elements = 0, xhq_num_elements = 0, r2tq_num_elements = 0;
+	struct scsi_terminate_extra_params *p_q_cnts = NULL;
+	struct qed_iscsi_pf_params *p_params = NULL;
+	struct tcp_upload_params *p_tcp = NULL;
+	struct qed_iscsi_conn *p_conn = NULL;
+	int rc = 0;
+
+	/* Try finding a free connection that can be used */
+	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+	if (!list_empty(&p_hwfn->p_iscsi_info->free_list))
+		p_conn = list_first_entry(&p_hwfn->p_iscsi_info->free_list,
+					  struct qed_iscsi_conn, list_entry);
+	if (p_conn) {
+		list_del(&p_conn->list_entry);
+		spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+		*p_out_conn = p_conn;
+		return 0;
+	}
+	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+
+	/* Need to allocate a new connection */
+	p_params = &p_hwfn->pf_params.iscsi_pf_params;
+
+	p_conn = kzalloc(sizeof(*p_conn), GFP_KERNEL);
+	if (!p_conn)
+		return -ENOMEM;
+
+	p_q_cnts = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+				      sizeof(*p_q_cnts),
+				      &p_conn->queue_cnts_phys_addr,
+				      GFP_KERNEL);
+	if (!p_q_cnts)
+		goto nomem_queue_cnts_param;
+	p_conn->queue_cnts_virt_addr = p_q_cnts;
+
+	p_tcp = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+				   sizeof(*p_tcp),
+				   &p_conn->tcp_upload_params_phys_addr,
+				   GFP_KERNEL);
+	if (!p_tcp)
+		goto nomem_upload_param;
+	p_conn->tcp_upload_params_virt_addr = p_tcp;
+
+	r2tq_num_elements = p_params->num_r2tq_pages_in_ring *
+			    QED_CHAIN_PAGE_SIZE / 0x80;
+	rc = qed_chain_alloc(p_hwfn->cdev,
+			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
+			     QED_CHAIN_MODE_PBL,
+			     QED_CHAIN_CNT_TYPE_U16,
+			     r2tq_num_elements, 0x80, &p_conn->r2tq);
+	if (rc)
+		goto nomem_r2tq;
+
+	uhq_num_elements = p_params->num_uhq_pages_in_ring *
+			   QED_CHAIN_PAGE_SIZE / sizeof(struct iscsi_uhqe);
+	rc = qed_chain_alloc(p_hwfn->cdev,
+			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
+			     QED_CHAIN_MODE_PBL,
+			     QED_CHAIN_CNT_TYPE_U16,
+			     uhq_num_elements,
+			     sizeof(struct iscsi_uhqe), &p_conn->uhq);
+	if (rc)
+		goto nomem_uhq;
+
+	xhq_num_elements = uhq_num_elements;
+	rc = qed_chain_alloc(p_hwfn->cdev,
+			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
+			     QED_CHAIN_MODE_PBL,
+			     QED_CHAIN_CNT_TYPE_U16,
+			     xhq_num_elements,
+			     sizeof(struct iscsi_xhqe), &p_conn->xhq);
+	if (rc)
+		goto nomem;
+
+	p_conn->free_on_delete = true;
+	*p_out_conn = p_conn;
+	return 0;
+
+nomem:
+	qed_chain_free(p_hwfn->cdev, &p_conn->uhq);
+nomem_uhq:
+	qed_chain_free(p_hwfn->cdev, &p_conn->r2tq);
+nomem_r2tq:
+	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+			  sizeof(struct tcp_upload_params),
+			  p_conn->tcp_upload_params_virt_addr,
+			  p_conn->tcp_upload_params_phys_addr);
+nomem_upload_param:
+	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+			  sizeof(struct scsi_terminate_extra_params),
+			  p_conn->queue_cnts_virt_addr,
+			  p_conn->queue_cnts_phys_addr);
+nomem_queue_cnts_param:
+	kfree(p_conn);
+
+	return -ENOMEM;
+}
+
+static int qed_iscsi_acquire_connection(struct qed_hwfn *p_hwfn,
+					struct qed_iscsi_conn *p_in_conn,
+					struct qed_iscsi_conn **p_out_conn)
+{
+	struct qed_iscsi_conn *p_conn = NULL;
+	int rc = 0;
+	u32 icid;
+
+	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+	rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_ISCSI, &icid);
+	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+	if (rc)
+		return rc;
+
+	/* Use input connection or allocate a new one */
+	if (p_in_conn)
+		p_conn = p_in_conn;
+	else
+		rc = qed_iscsi_allocate_connection(p_hwfn, &p_conn);
+
+	if (!rc)
+		rc = qed_iscsi_setup_connection(p_hwfn, p_conn);
+
+	if (rc) {
+		spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+		qed_cxt_release_cid(p_hwfn, icid);
+		spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+		return rc;
+	}
+
+	p_conn->icid = icid;
+	p_conn->conn_id = (u16)icid;
+	p_conn->fw_cid = (p_hwfn->hw_info.opaque_fid << 16) | icid;
+
+	*p_out_conn = p_conn;
+
+	return rc;
+}
+
+static void qed_iscsi_release_connection(struct qed_hwfn *p_hwfn,
+					 struct qed_iscsi_conn *p_conn)
+{
+	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
+	list_add_tail(&p_conn->list_entry, &p_hwfn->p_iscsi_info->free_list);
+	qed_cxt_release_cid(p_hwfn, p_conn->icid);
+	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
+}
+
+struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn)
+{
+	struct qed_iscsi_info *p_iscsi_info;
+
+	p_iscsi_info = kzalloc(sizeof(*p_iscsi_info), GFP_KERNEL);
+	if (!p_iscsi_info) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_iscsi_info\n");
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&p_iscsi_info->free_list);
+	return p_iscsi_info;
+}
+
+void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
+		     struct qed_iscsi_info *p_iscsi_info)
+{
+	spin_lock_init(&p_iscsi_info->lock);
+}
+
+void qed_iscsi_free(struct qed_hwfn *p_hwfn,
+		    struct qed_iscsi_info *p_iscsi_info)
+{
+	kfree(p_iscsi_info);
+}
+
+static void _qed_iscsi_get_tstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct tstorm_iscsi_stats_drv tstats;
+	u32 tstats_addr;
+
+	memset(&tstats, 0, sizeof(tstats));
+	tstats_addr = BAR0_MAP_REG_TSDM_RAM +
+		      TSTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, sizeof(tstats));
+
+	p_stats->iscsi_rx_bytes_cnt =
+	    HILO_64_REGPAIR(tstats.iscsi_rx_bytes_cnt);
+	p_stats->iscsi_rx_packet_cnt =
+	    HILO_64_REGPAIR(tstats.iscsi_rx_packet_cnt);
+	p_stats->iscsi_cmdq_threshold_cnt =
+	    le32_to_cpu(tstats.iscsi_cmdq_threshold_cnt);
+	p_stats->iscsi_rq_threshold_cnt =
+	    le32_to_cpu(tstats.iscsi_rq_threshold_cnt);
+	p_stats->iscsi_immq_threshold_cnt =
+	    le32_to_cpu(tstats.iscsi_immq_threshold_cnt);
+}
+
+static void _qed_iscsi_get_mstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct mstorm_iscsi_stats_drv mstats;
+	u32 mstats_addr;
+
+	memset(&mstats, 0, sizeof(mstats));
+	mstats_addr = BAR0_MAP_REG_MSDM_RAM +
+		      MSTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, sizeof(mstats));
+
+	p_stats->iscsi_rx_dropped_pdus_task_not_valid =
+	    HILO_64_REGPAIR(mstats.iscsi_rx_dropped_pdus_task_not_valid);
+}
+
+static void _qed_iscsi_get_ustats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct ustorm_iscsi_stats_drv ustats;
+	u32 ustats_addr;
+
+	memset(&ustats, 0, sizeof(ustats));
+	ustats_addr = BAR0_MAP_REG_USDM_RAM +
+		      USTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, sizeof(ustats));
+
+	p_stats->iscsi_rx_data_pdu_cnt =
+	    HILO_64_REGPAIR(ustats.iscsi_rx_data_pdu_cnt);
+	p_stats->iscsi_rx_r2t_pdu_cnt =
+	    HILO_64_REGPAIR(ustats.iscsi_rx_r2t_pdu_cnt);
+	p_stats->iscsi_rx_total_pdu_cnt =
+	    HILO_64_REGPAIR(ustats.iscsi_rx_total_pdu_cnt);
+}
+
+static void _qed_iscsi_get_xstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct xstorm_iscsi_stats_drv xstats;
+	u32 xstats_addr;
+
+	memset(&xstats, 0, sizeof(xstats));
+	xstats_addr = BAR0_MAP_REG_XSDM_RAM +
+		      XSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &xstats, xstats_addr, sizeof(xstats));
+
+	p_stats->iscsi_tx_go_to_slow_start_event_cnt =
+	    HILO_64_REGPAIR(xstats.iscsi_tx_go_to_slow_start_event_cnt);
+	p_stats->iscsi_tx_fast_retransmit_event_cnt =
+	    HILO_64_REGPAIR(xstats.iscsi_tx_fast_retransmit_event_cnt);
+}
+
+static void _qed_iscsi_get_ystats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct ystorm_iscsi_stats_drv ystats;
+	u32 ystats_addr;
+
+	memset(&ystats, 0, sizeof(ystats));
+	ystats_addr = BAR0_MAP_REG_YSDM_RAM +
+		      YSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &ystats, ystats_addr, sizeof(ystats));
+
+	p_stats->iscsi_tx_data_pdu_cnt =
+	    HILO_64_REGPAIR(ystats.iscsi_tx_data_pdu_cnt);
+	p_stats->iscsi_tx_r2t_pdu_cnt =
+	    HILO_64_REGPAIR(ystats.iscsi_tx_r2t_pdu_cnt);
+	p_stats->iscsi_tx_total_pdu_cnt =
+	    HILO_64_REGPAIR(ystats.iscsi_tx_total_pdu_cnt);
+}
+
+static void _qed_iscsi_get_pstats(struct qed_hwfn *p_hwfn,
+				  struct qed_ptt *p_ptt,
+				  struct qed_iscsi_stats *p_stats)
+{
+	struct pstorm_iscsi_stats_drv pstats;
+	u32 pstats_addr;
+
+	memset(&pstats, 0, sizeof(pstats));
+	pstats_addr = BAR0_MAP_REG_PSDM_RAM +
+		      PSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
+	qed_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, sizeof(pstats));
+
+	p_stats->iscsi_tx_bytes_cnt =
+	    HILO_64_REGPAIR(pstats.iscsi_tx_bytes_cnt);
+	p_stats->iscsi_tx_packet_cnt =
+	    HILO_64_REGPAIR(pstats.iscsi_tx_packet_cnt);
+}
+
+static int qed_iscsi_get_stats(struct qed_hwfn *p_hwfn,
+			       struct qed_iscsi_stats *stats)
+{
+	struct qed_ptt *p_ptt;
+
+	memset(stats, 0, sizeof(*stats));
+
+	p_ptt = qed_ptt_acquire(p_hwfn);
+	if (!p_ptt) {
+		DP_ERR(p_hwfn, "Failed to acquire ptt\n");
+		return -EAGAIN;
+	}
+
+	_qed_iscsi_get_tstats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_mstats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_ustats(p_hwfn, p_ptt, stats);
+
+	_qed_iscsi_get_xstats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_ystats(p_hwfn, p_ptt, stats);
+	_qed_iscsi_get_pstats(p_hwfn, p_ptt, stats);
+
+	qed_ptt_release(p_hwfn, p_ptt);
+
+	return 0;
+}
+
+struct qed_hash_iscsi_con {
+	struct hlist_node node;
+	struct qed_iscsi_conn *con;
+};
+
+static int qed_fill_iscsi_dev_info(struct qed_dev *cdev,
+				   struct qed_dev_iscsi_info *info)
+{
+	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+
+	int rc;
+
+	memset(info, 0, sizeof(*info));
+	rc = qed_fill_dev_info(cdev, &info->common);
+
+	info->primary_dbq_rq_addr =
+	    qed_iscsi_get_primary_bdq_prod(hwfn, BDQ_ID_RQ);
+	info->secondary_bdq_rq_addr =
+	    qed_iscsi_get_secondary_bdq_prod(hwfn, BDQ_ID_RQ);
+
+	return rc;
+}
+
+static void qed_register_iscsi_ops(struct qed_dev *cdev,
+				   struct qed_iscsi_cb_ops *ops, void *cookie)
+{
+	cdev->protocol_ops.iscsi = ops;
+	cdev->ops_cookie = cookie;
+}
+
+static struct qed_hash_iscsi_con *qed_iscsi_get_hash(struct qed_dev *cdev,
+						     u32 handle)
+{
+	struct qed_hash_iscsi_con *hash_con = NULL;
+
+	if (!(cdev->flags & QED_FLAG_STORAGE_STARTED))
+		return NULL;
+
+	hash_for_each_possible(cdev->connections, hash_con, node, handle) {
+		if (hash_con->con->icid == handle)
+			break;
+	}
+
+	if (!hash_con || (hash_con->con->icid != handle))
+		return NULL;
+
+	return hash_con;
+}
+
+static int qed_iscsi_stop(struct qed_dev *cdev)
+{
+	int rc;
+
+	if (!(cdev->flags & QED_FLAG_STORAGE_STARTED)) {
+		DP_NOTICE(cdev, "iscsi already stopped\n");
+		return 0;
+	}
+
+	if (!hash_empty(cdev->connections)) {
+		DP_NOTICE(cdev,
+			  "Can't stop iscsi - not all connections were returned\n");
+		return -EINVAL;
+	}
+
+	/* Stop the iSCSI function */
+	rc = qed_sp_iscsi_func_stop(QED_LEADING_HWFN(cdev),
+				    QED_SPQ_MODE_EBLOCK, NULL);
+	cdev->flags &= ~QED_FLAG_STORAGE_STARTED;
+
+	return rc;
+}
+
+static int qed_iscsi_start(struct qed_dev *cdev,
+			   struct qed_iscsi_tid *tasks,
+			   void *event_context,
+			   iscsi_event_cb_t async_event_cb)
+{
+	int rc;
+
+	if (cdev->flags & QED_FLAG_STORAGE_STARTED) {
+		DP_NOTICE(cdev, "iscsi already started\n");
+		return 0;
+	}
+
+	rc = qed_sp_iscsi_func_start(QED_LEADING_HWFN(cdev),
+				     QED_SPQ_MODE_EBLOCK, NULL, event_context,
+				     async_event_cb);
+	if (rc) {
+		DP_NOTICE(cdev, "Failed to start iscsi\n");
+		return rc;
+	}
+
+	cdev->flags |= QED_FLAG_STORAGE_STARTED;
+	hash_init(cdev->connections);
+
+	if (tasks) {
+		struct qed_tid_mem *tid_info = kzalloc(sizeof(*tid_info),
+						       GFP_KERNEL);
+
+		if (!tid_info) {
+			DP_NOTICE(cdev,
+				  "Failed to allocate tasks information\n");
+			qed_iscsi_stop(cdev);
+			return -ENOMEM;
+		}
+
+		rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
+					      tid_info);
+		if (rc) {
+			DP_NOTICE(cdev, "Failed to gather task information\n");
+			qed_iscsi_stop(cdev);
+			kfree(tid_info);
+			return rc;
+		}
+
+		/* Fill task information */
+		tasks->size = tid_info->tid_size;
+		tasks->num_tids_per_block = tid_info->num_tids_per_block;
+		memcpy(tasks->blocks, tid_info->blocks, sizeof(tid_info->blocks));
+
+		kfree(tid_info);
+	}
+
+	return 0;
+}
+
+static int qed_iscsi_acquire_conn(struct qed_dev *cdev,
+				  u32 *handle,
+				  u32 *fw_cid, void __iomem **p_doorbell)
+{
+	struct qed_hash_iscsi_con *hash_con;
+	int rc;
+
+	/* Allocate a hashed connection */
+	hash_con = kzalloc(sizeof(*hash_con), GFP_ATOMIC);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to allocate hashed connection\n");
+		return -ENOMEM;
+	}
+
+	/* Acquire the connection */
+	rc = qed_iscsi_acquire_connection(QED_LEADING_HWFN(cdev), NULL,
+					  &hash_con->con);
+	if (rc) {
+		DP_NOTICE(cdev, "Failed to acquire connection\n");
+		kfree(hash_con);
+		return rc;
+	}
+
+	/* Add the connection to the hash table */
+	*handle = hash_con->con->icid;
+	*fw_cid = hash_con->con->fw_cid;
+	hash_add(cdev->connections, &hash_con->node, *handle);
+
+	if (p_doorbell)
+		*p_doorbell = qed_iscsi_get_db_addr(QED_LEADING_HWFN(cdev),
+						    *handle);
+
+	return 0;
+}
+
+static int qed_iscsi_release_conn(struct qed_dev *cdev, u32 handle)
+{
+	struct qed_hash_iscsi_con *hash_con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %u\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	hlist_del(&hash_con->node);
+	qed_iscsi_release_connection(QED_LEADING_HWFN(cdev), hash_con->con);
+	kfree(hash_con);
+
+	return 0;
+}
+
+static int qed_iscsi_offload_conn(struct qed_dev *cdev,
+				  u32 handle,
+				  struct qed_iscsi_params_offload *conn_info)
+{
+	struct qed_hash_iscsi_con *hash_con;
+	struct qed_iscsi_conn *con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %u\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	/* Update the connection with information from the params */
+	con = hash_con->con;
+
+	ether_addr_copy(con->local_mac, conn_info->src.mac);
+	ether_addr_copy(con->remote_mac, conn_info->dst.mac);
+	memcpy(con->local_ip, conn_info->src.ip, sizeof(con->local_ip));
+	memcpy(con->remote_ip, conn_info->dst.ip, sizeof(con->remote_ip));
+	con->local_port = conn_info->src.port;
+	con->remote_port = conn_info->dst.port;
+
+	con->layer_code = conn_info->layer_code;
+	con->sq_pbl_addr = conn_info->sq_pbl_addr;
+	con->initial_ack = conn_info->initial_ack;
+	con->vlan_id = conn_info->vlan_id;
+	con->tcp_flags = conn_info->tcp_flags;
+	con->ip_version = conn_info->ip_version;
+	con->default_cq = conn_info->default_cq;
+	con->ka_max_probe_cnt = conn_info->ka_max_probe_cnt;
+	con->dup_ack_theshold = conn_info->dup_ack_theshold;
+	con->rcv_next = conn_info->rcv_next;
+	con->snd_una = conn_info->snd_una;
+	con->snd_next = conn_info->snd_next;
+	con->snd_max = conn_info->snd_max;
+	con->snd_wnd = conn_info->snd_wnd;
+	con->rcv_wnd = conn_info->rcv_wnd;
+	con->snd_wl1 = conn_info->snd_wl1;
+	con->cwnd = conn_info->cwnd;
+	con->ss_thresh = conn_info->ss_thresh;
+	con->srtt = conn_info->srtt;
+	con->rtt_var = conn_info->rtt_var;
+	con->ts_time = conn_info->ts_time;
+	con->ts_recent = conn_info->ts_recent;
+	con->ts_recent_age = conn_info->ts_recent_age;
+	con->total_rt = conn_info->total_rt;
+	con->ka_timeout_delta = conn_info->ka_timeout_delta;
+	con->rt_timeout_delta = conn_info->rt_timeout_delta;
+	con->dup_ack_cnt = conn_info->dup_ack_cnt;
+	con->snd_wnd_probe_cnt = conn_info->snd_wnd_probe_cnt;
+	con->ka_probe_cnt = conn_info->ka_probe_cnt;
+	con->rt_cnt = conn_info->rt_cnt;
+	con->flow_label = conn_info->flow_label;
+	con->ka_timeout = conn_info->ka_timeout;
+	con->ka_interval = conn_info->ka_interval;
+	con->max_rt_time = conn_info->max_rt_time;
+	con->initial_rcv_wnd = conn_info->initial_rcv_wnd;
+	con->ttl = conn_info->ttl;
+	con->tos_or_tc = conn_info->tos_or_tc;
+	con->remote_port = conn_info->remote_port;
+	con->local_port = conn_info->local_port;
+	con->mss = conn_info->mss;
+	con->snd_wnd_scale = conn_info->snd_wnd_scale;
+	con->rcv_wnd_scale = conn_info->rcv_wnd_scale;
+	con->ts_ticks_per_second = conn_info->ts_ticks_per_second;
+	con->da_timeout_value = conn_info->da_timeout_value;
+	con->ack_frequency = conn_info->ack_frequency;
+
+	/* Set default values on other connection fields */
+	con->offl_flags = 0x1;
+
+	return qed_sp_iscsi_conn_offload(QED_LEADING_HWFN(cdev), con,
+					 QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_update_conn(struct qed_dev *cdev,
+				 u32 handle,
+				 struct qed_iscsi_params_update *conn_info)
+{
+	struct qed_hash_iscsi_con *hash_con;
+	struct qed_iscsi_conn *con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %u\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	/* Update the connection with information from the params */
+	con = hash_con->con;
+	con->update_flag = conn_info->update_flag;
+	con->max_seq_size = conn_info->max_seq_size;
+	con->max_recv_pdu_length = conn_info->max_recv_pdu_length;
+	con->max_send_pdu_length = conn_info->max_send_pdu_length;
+	con->first_seq_length = conn_info->first_seq_length;
+	con->exp_stat_sn = conn_info->exp_stat_sn;
+
+	return qed_sp_iscsi_conn_update(QED_LEADING_HWFN(cdev), con,
+					QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_clear_conn_sq(struct qed_dev *cdev, u32 handle)
+{
+	struct qed_hash_iscsi_con *hash_con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %u\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	return qed_sp_iscsi_conn_clear_sq(QED_LEADING_HWFN(cdev),
+					  hash_con->con,
+					  QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_destroy_conn(struct qed_dev *cdev,
+				  u32 handle, u8 abrt_conn)
+{
+	struct qed_hash_iscsi_con *hash_con;
+
+	hash_con = qed_iscsi_get_hash(cdev, handle);
+	if (!hash_con) {
+		DP_NOTICE(cdev, "Failed to find connection for handle %u\n",
+			  handle);
+		return -EINVAL;
+	}
+
+	hash_con->con->abortive_dsconnect = abrt_conn;
+
+	return qed_sp_iscsi_conn_terminate(QED_LEADING_HWFN(cdev),
+					   hash_con->con,
+					   QED_SPQ_MODE_EBLOCK, NULL);
+}
+
+static int qed_iscsi_stats(struct qed_dev *cdev, struct qed_iscsi_stats *stats)
+{
+	return qed_iscsi_get_stats(QED_LEADING_HWFN(cdev), stats);
+}
+
+static const struct qed_iscsi_ops qed_iscsi_ops_pass = {
+	.common = &qed_common_ops_pass,
+	.ll2 = &qed_ll2_ops_pass,
+	.fill_dev_info = &qed_fill_iscsi_dev_info,
+	.register_ops = &qed_register_iscsi_ops,
+	.start = &qed_iscsi_start,
+	.stop = &qed_iscsi_stop,
+	.acquire_conn = &qed_iscsi_acquire_conn,
+	.release_conn = &qed_iscsi_release_conn,
+	.offload_conn = &qed_iscsi_offload_conn,
+	.update_conn = &qed_iscsi_update_conn,
+	.destroy_conn = &qed_iscsi_destroy_conn,
+	.clear_sq = &qed_iscsi_clear_conn_sq,
+	.get_stats = &qed_iscsi_stats,
+};
+
+const struct qed_iscsi_ops *qed_get_iscsi_ops(void)
+{
+	return &qed_iscsi_ops_pass;
+}
+EXPORT_SYMBOL(qed_get_iscsi_ops);
+
+void qed_put_iscsi_ops(void)
+{
+}
+EXPORT_SYMBOL(qed_put_iscsi_ops);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.h b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
new file mode 100644
index 0000000..269848c
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
@@ -0,0 +1,52 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_ISCSI_H
+#define _QED_ISCSI_H
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/qed/tcp_common.h>
+#include <linux/qed/qed_iscsi_if.h>
+#include <linux/qed/qed_chain.h>
+#include "qed.h"
+#include "qed_hsi.h"
+#include "qed_mcp.h"
+#include "qed_sp.h"
+
+struct qed_iscsi_info {
+	spinlock_t lock;
+	struct list_head free_list;
+	u16 max_num_outstanding_tasks;
+	void *event_context;
+	iscsi_event_cb_t event_cb;
+};
+
+#ifdef CONFIG_QED_LL2
+extern const struct qed_ll2_ops qed_ll2_ops_pass;
+#endif
+
+#if IS_ENABLED(CONFIG_QEDI)
+struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn);
+
+void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
+		     struct qed_iscsi_info *p_iscsi_info);
+
+void qed_iscsi_free(struct qed_hwfn *p_hwfn,
+		    struct qed_iscsi_info *p_iscsi_info);
+#else /* IS_ENABLED(CONFIG_QEDI) */
+static inline struct qed_iscsi_info *qed_iscsi_alloc(
+		struct qed_hwfn *p_hwfn) { return NULL; }
+static inline void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
+		struct qed_iscsi_info *p_iscsi_info) {}
+static inline void qed_iscsi_free(struct qed_hwfn *p_hwfn,
+		struct qed_iscsi_info *p_iscsi_info) {}
+#endif /* IS_ENABLED(CONFIG_QEDI) */
+
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
index ddd410a..07e2f77 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
@@ -2187,6 +2187,5 @@ const struct qed_eth_ops *qed_get_eth_ops(void)
 
 void qed_put_eth_ops(void)
 {
-	/* TODO - reference count for module? */
 }
 EXPORT_SYMBOL(qed_put_eth_ops);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
index a6db107..e67f3c9 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
@@ -299,6 +299,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		p_tx->cur_completing_bd_idx = 1;
 		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
 		tx_frag = p_pkt->bds_set[0].tx_frag;
+#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
 		if (p_ll2_conn->gsi_enable)
 			qed_ll2b_release_tx_gsi_packet(p_hwfn,
 						       p_ll2_conn->my_id,
@@ -307,6 +308,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 						       b_last_frag,
 						       b_last_packet);
 		else
+#endif
 			qed_ll2b_complete_tx_packet(p_hwfn,
 						    p_ll2_conn->my_id,
 						    p_pkt->cookie,
@@ -367,6 +369,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 
 		spin_unlock_irqrestore(&p_tx->lock, flags);
 		tx_frag = p_pkt->bds_set[0].tx_frag;
+#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
 		if (p_ll2_conn->gsi_enable)
 			qed_ll2b_complete_tx_gsi_packet(p_hwfn,
 							p_ll2_conn->my_id,
@@ -374,6 +377,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 							tx_frag,
 							b_last_frag, !num_bds);
 		else
+#endif
 			qed_ll2b_complete_tx_packet(p_hwfn,
 						    p_ll2_conn->my_id,
 						    p_pkt->cookie,
@@ -421,6 +425,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 			  "Mismatch between active_descq and the LL2 Rx chain\n");
 	list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
 
+#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
 	spin_unlock_irqrestore(&p_rx->lock, lock_flags);
 	qed_ll2b_complete_rx_gsi_packet(p_hwfn,
 					p_ll2_info->my_id,
@@ -433,6 +438,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
 					src_mac_addrhi,
 					src_mac_addrlo, b_last_cqe);
 	spin_lock_irqsave(&p_rx->lock, lock_flags);
+#endif
 
 	return 0;
 }
@@ -1516,11 +1522,12 @@ static void qed_ll2_register_cb_ops(struct qed_dev *cdev,
 
 static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 {
-	struct qed_ll2_info ll2_info;
+	struct qed_ll2_info *ll2_info;
 	struct qed_ll2_buffer *buffer;
 	enum qed_ll2_conn_type conn_type;
 	struct qed_ptt *p_ptt;
 	int rc, i;
+	u8 gsi_enable = 1;
 
 	/* Initialize LL2 locks & lists */
 	INIT_LIST_HEAD(&cdev->ll2->list);
@@ -1552,6 +1559,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 	switch (QED_LEADING_HWFN(cdev)->hw_info.personality) {
 	case QED_PCI_ISCSI:
 		conn_type = QED_LL2_TYPE_ISCSI;
+		gsi_enable = 0;
 		break;
 	case QED_PCI_ETH_ROCE:
 		conn_type = QED_LL2_TYPE_ROCE;
@@ -1561,18 +1569,23 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 	}
 
 	/* Prepare the temporary ll2 information */
-	memset(&ll2_info, 0, sizeof(ll2_info));
-	ll2_info.conn_type = conn_type;
-	ll2_info.mtu = params->mtu;
-	ll2_info.rx_drop_ttl0_flg = params->drop_ttl0_packets;
-	ll2_info.rx_vlan_removal_en = params->rx_vlan_stripping;
-	ll2_info.tx_tc = 0;
-	ll2_info.tx_dest = CORE_TX_DEST_NW;
-	ll2_info.gsi_enable = 1;
-
-	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), &ll2_info,
+	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
+	if (!ll2_info) {
+		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
+		goto fail;
+	}
+	ll2_info->conn_type = conn_type;
+	ll2_info->mtu = params->mtu;
+	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
+	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
+	ll2_info->tx_tc = 0;
+	ll2_info->tx_dest = CORE_TX_DEST_NW;
+	ll2_info->gsi_enable = gsi_enable;
+
+	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), ll2_info,
 					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
 					&cdev->ll2->handle);
+	kfree(ll2_info);
 	if (rc) {
 		DP_INFO(cdev, "Failed to acquire LL2 connection\n");
 		goto fail;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
index 4ee3151..a01ad9d 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
@@ -1239,7 +1239,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
 	if (link.link_up)
 		if_link->link_up = true;
 
-	/* TODO - at the moment assume supported and advertised speed equal */
 	if_link->supported_caps = QED_LM_FIBRE_BIT;
 	if (params.speed.autoneg)
 		if_link->supported_caps |= QED_LM_Autoneg_BIT;
@@ -1294,7 +1293,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
 	if (link.link_up)
 		if_link->speed = link.speed;
 
-	/* TODO - fill duplex properly */
 	if_link->duplex = DUPLEX_FULL;
 	qed_mcp_get_media_type(hwfn->cdev, &media_type);
 	if_link->port = qed_get_port_type(media_type);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
index dff520e..2e5f51b 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
@@ -314,9 +314,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
  * same pf_num may be used by two different hwfn
- * TODO - this shouldn't really be in .h file, but until all fields
- * required during hw-init will be placed in their correct place in shmem
- * we need it in qed_dev.c [for readin the nvram reflection in shmem].
  */
 #define MCP_PF_ID_BY_REL(p_hwfn, rel_pfid) (QED_IS_BB((p_hwfn)->cdev) ?	       \
 					    ((rel_pfid) |		       \
@@ -324,9 +321,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
 					    rel_pfid)
 #define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
 
-/* TODO - this is only correct as long as only BB is supported, and
- * no port-swapping is implemented; Afterwards we'll need to fix it.
- */
 #define MFW_PORT(_p_hwfn)       ((_p_hwfn)->abs_pf_id %	\
 				 ((_p_hwfn)->cdev->num_ports_in_engines * 2))
 struct qed_mcp_info {
diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
index b414a05..9754420 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
@@ -82,6 +82,8 @@
 	0x1c80000UL
 #define BAR0_MAP_REG_XSDM_RAM \
 	0x1e00000UL
+#define BAR0_MAP_REG_YSDM_RAM \
+	0x1e80000UL
 #define  NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF \
 	0x5011f4UL
 #define  PRS_REG_SEARCH_TCP \
diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
index caff415..d3fa578 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
@@ -24,6 +24,7 @@
 #include "qed_hsi.h"
 #include "qed_hw.h"
 #include "qed_int.h"
+#include "qed_iscsi.h"
 #include "qed_mcp.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
@@ -249,6 +250,20 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
 		return qed_sriov_eqe_event(p_hwfn,
 					   p_eqe->opcode,
 					   p_eqe->echo, &p_eqe->data);
+	case PROTOCOLID_ISCSI:
+		if (!IS_ENABLED(CONFIG_QEDI))
+			return -EINVAL;
+
+		if (p_hwfn->p_iscsi_info->event_cb) {
+			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
+
+			return p_iscsi->event_cb(p_iscsi->event_context,
+						 p_eqe->opcode, &p_eqe->data);
+		} else {
+			DP_NOTICE(p_hwfn,
+				  "iSCSI async completion is not set\n");
+			return -EINVAL;
+		}
 	default:
 		DP_NOTICE(p_hwfn,
 			  "Unknown Async completion for protocol: %d\n",
diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
index f9ae903..c0c9fa8 100644
--- a/include/linux/qed/qed_if.h
+++ b/include/linux/qed/qed_if.h
@@ -165,6 +165,7 @@ struct qed_iscsi_pf_params {
 	u32 max_cwnd;
 	u16 cq_num_entries;
 	u16 cmdq_num_entries;
+	u32 two_msl_timer;
 	u16 dup_ack_threshold;
 	u16 tx_sws_timer;
 	u16 min_rto;
@@ -271,6 +272,7 @@ struct qed_dev_info {
 enum qed_sb_type {
 	QED_SB_TYPE_L2_QUEUE,
 	QED_SB_TYPE_CNQ,
+	QED_SB_TYPE_STORAGE,
 };
 
 enum qed_protocol {
diff --git a/include/linux/qed/qed_iscsi_if.h b/include/linux/qed/qed_iscsi_if.h
new file mode 100644
index 0000000..6735ee5
--- /dev/null
+++ b/include/linux/qed/qed_iscsi_if.h
@@ -0,0 +1,249 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_ISCSI_IF_H
+#define _QED_ISCSI_IF_H
+#include <linux/types.h>
+#include <linux/qed/qed_if.h>
+
+typedef int (*iscsi_event_cb_t) (void *context,
+				 u8 fw_event_code, void *fw_handle);
+struct qed_iscsi_stats {
+	u64 iscsi_rx_bytes_cnt;
+	u64 iscsi_rx_packet_cnt;
+	u64 iscsi_rx_new_ooo_isle_events_cnt;
+	u32 iscsi_cmdq_threshold_cnt;
+	u32 iscsi_rq_threshold_cnt;
+	u32 iscsi_immq_threshold_cnt;
+
+	u64 iscsi_rx_dropped_pdus_task_not_valid;
+
+	u64 iscsi_rx_data_pdu_cnt;
+	u64 iscsi_rx_r2t_pdu_cnt;
+	u64 iscsi_rx_total_pdu_cnt;
+
+	u64 iscsi_tx_go_to_slow_start_event_cnt;
+	u64 iscsi_tx_fast_retransmit_event_cnt;
+
+	u64 iscsi_tx_data_pdu_cnt;
+	u64 iscsi_tx_r2t_pdu_cnt;
+	u64 iscsi_tx_total_pdu_cnt;
+
+	u64 iscsi_tx_bytes_cnt;
+	u64 iscsi_tx_packet_cnt;
+};
+
+struct qed_dev_iscsi_info {
+	struct qed_dev_info common;
+
+	void __iomem *primary_dbq_rq_addr;
+	void __iomem *secondary_bdq_rq_addr;
+};
+
+struct qed_iscsi_id_params {
+	u8 mac[ETH_ALEN];
+	u32 ip[4];
+	u16 port;
+};
+
+struct qed_iscsi_params_offload {
+	u8 layer_code;
+	dma_addr_t sq_pbl_addr;
+	u32 initial_ack;
+
+	struct qed_iscsi_id_params src;
+	struct qed_iscsi_id_params dst;
+	u16 vlan_id;
+	u8 tcp_flags;
+	u8 ip_version;
+	u8 default_cq;
+
+	u8 ka_max_probe_cnt;
+	u8 dup_ack_theshold;
+	u32 rcv_next;
+	u32 snd_una;
+	u32 snd_next;
+	u32 snd_max;
+	u32 snd_wnd;
+	u32 rcv_wnd;
+	u32 snd_wl1;
+	u32 cwnd;
+	u32 ss_thresh;
+	u16 srtt;
+	u16 rtt_var;
+	u32 ts_time;
+	u32 ts_recent;
+	u32 ts_recent_age;
+	u32 total_rt;
+	u32 ka_timeout_delta;
+	u32 rt_timeout_delta;
+	u8 dup_ack_cnt;
+	u8 snd_wnd_probe_cnt;
+	u8 ka_probe_cnt;
+	u8 rt_cnt;
+	u32 flow_label;
+	u32 ka_timeout;
+	u32 ka_interval;
+	u32 max_rt_time;
+	u32 initial_rcv_wnd;
+	u8 ttl;
+	u8 tos_or_tc;
+	u16 remote_port;
+	u16 local_port;
+	u16 mss;
+	u8 snd_wnd_scale;
+	u8 rcv_wnd_scale;
+	u32 ts_ticks_per_second;
+	u16 da_timeout_value;
+	u8 ack_frequency;
+};
+
+struct qed_iscsi_params_update {
+	u8 update_flag;
+#define QED_ISCSI_CONN_HD_EN            BIT(0)
+#define QED_ISCSI_CONN_DD_EN            BIT(1)
+#define QED_ISCSI_CONN_INITIAL_R2T      BIT(2)
+#define QED_ISCSI_CONN_IMMEDIATE_DATA   BIT(3)
+
+	u32 max_seq_size;
+	u32 max_recv_pdu_length;
+	u32 max_send_pdu_length;
+	u32 first_seq_length;
+	u32 exp_stat_sn;
+};
+
+#define MAX_TID_BLOCKS_ISCSI (512)
+struct qed_iscsi_tid {
+	u32 size;		/* In bytes per task */
+	u32 num_tids_per_block;
+	u8 *blocks[MAX_TID_BLOCKS_ISCSI];
+};
+
+struct qed_iscsi_cb_ops {
+	struct qed_common_cb_ops common;
+
+	/* TODO - need to add a handler for async events */
+};
+
+struct qed_iscsi_ops {
+	const struct qed_common_ops *common;
+
+	const struct qed_ll2_ops *ll2;
+
+	int (*fill_dev_info)(struct qed_dev *cdev,
+			     struct qed_dev_iscsi_info *info);
+
+	void (*register_ops)(struct qed_dev *cdev,
+			     struct qed_iscsi_cb_ops *ops, void *cookie);
+
+/**
+ * @brief start iscsi in FW
+ *
+ * @param cdev
+ * @param tasks - qed will fill information about tasks
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*start)(struct qed_dev *cdev,
+		     struct qed_iscsi_tid *tasks,
+		     void *event_context, iscsi_event_cb_t async_event_cb);
+
+/**
+ * @brief stops iscsi in FW
+ *
+ * @param cdev
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*stop)(struct qed_dev *cdev);
+
+/**
+ * @brief acquire_conn - acquire a new iscsi connection
+ *
+ * @param cdev
+ * @param handle - qed will fill handle that should be used
+ *                 henceforth as identifier of the connection.
+ * @param p_doorbell - qed will fill the address of the doorbell.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*acquire_conn)(struct qed_dev *cdev,
+			    u32 *handle,
+			    u32 *fw_cid, void __iomem **p_doorbell);
+
+/**
+ * @brief release_conn - release a previously acquired iscsi connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*release_conn)(struct qed_dev *cdev, u32 handle);
+
+/**
+ * @brief offload_conn - configures an offloaded connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ * @param conn_info - the configuration to use for the offload.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*offload_conn)(struct qed_dev *cdev,
+			    u32 handle,
+			    struct qed_iscsi_params_offload *conn_info);
+
+/**
+ * @brief update_conn - updates an offloaded connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ * @param conn_info - the updated configuration to apply.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*update_conn)(struct qed_dev *cdev,
+			   u32 handle,
+			   struct qed_iscsi_params_update *conn_info);
+
+/**
+ * @brief destroy_conn - stops an offloaded connection
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*destroy_conn)(struct qed_dev *cdev, u32 handle, u8 abrt_conn);
+
+/**
+ * @brief clear_sq - clear all tasks in the SQ
+ *
+ * @param cdev
+ * @param handle - the connection handle.
+ *
+ * @return 0 on success, otherwise error value.
+ */
+	int (*clear_sq)(struct qed_dev *cdev, u32 handle);
+
+/**
+ * @brief get iSCSI related statistics
+ *
+ * @param cdev
+ * @param stats - pointer to struct that will be filled with stats
+ *
+ * @return 0 on success, error otherwise.
+ */
+	int (*get_stats)(struct qed_dev *cdev,
+			 struct qed_iscsi_stats *stats);
+};
+
+const struct qed_iscsi_ops *qed_get_iscsi_ops(void);
+void qed_put_iscsi_ops(void);
+#endif
-- 
1.8.3.1



* [RFC 2/6] qed: Add iSCSI out of order packet handling.
  2016-10-19  5:01 ` manish.rangankar
@ 2016-10-19  5:01   ` manish.rangankar
  -1 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Yuval Mintz, Arun Easi, Yuval Mintz

From: Yuval Mintz <Yuval.Mintz@qlogic.com>

This patch adds out of order packet handling for hardware offloaded
iSCSI. Out of order packet handling requires driver buffer allocation
and assistance.
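
The isle bookkeeping this patch introduces can be sketched roughly as follows. This is a simplified, hypothetical model for illustration only: an "isle" is one contiguous run of out-of-order data, and the firmware's TCP_EVENT_* completions tell the driver whether a segment opens a new isle, extends an existing isle on the left or right, or joins two adjacent isles. The struct and helper names below are invented for this sketch and are not the actual qed_ooo.c API, which tracks buffer lists per isle rather than byte ranges.

```c
#include <string.h>

/* Simplified model: one isle == one contiguous [start, end) byte range
 * of received-but-not-yet-in-order data for a connection. */
#define MAX_ISLES 8

struct isle {
	unsigned int start;
	unsigned int end;
};

struct ooo_conn {
	struct isle isles[MAX_ISLES];
	int num_isles;
};

/* TCP_EVENT_ADD_NEW_ISLE: a segment that touches no existing isle
 * opens a new isle at position idx (isles stay sorted by sequence). */
void ooo_add_new_isle(struct ooo_conn *c, int idx,
		      unsigned int start, unsigned int end)
{
	memmove(&c->isles[idx + 1], &c->isles[idx],
		(c->num_isles - idx) * sizeof(struct isle));
	c->isles[idx].start = start;
	c->isles[idx].end = end;
	c->num_isles++;
}

/* TCP_EVENT_ADD_ISLE_RIGHT: segment extends isle idx at its right edge. */
void ooo_add_right(struct ooo_conn *c, int idx, unsigned int len)
{
	c->isles[idx].end += len;
}

/* TCP_EVENT_ADD_ISLE_LEFT: segment extends isle idx at its left edge. */
void ooo_add_left(struct ooo_conn *c, int idx, unsigned int len)
{
	c->isles[idx].start -= len;
}

/* TCP_EVENT_JOIN: a segment closed the gap between isle idx and
 * isle idx + 1; merge them into one and compact the array. */
void ooo_join(struct ooo_conn *c, int idx)
{
	c->isles[idx].end = c->isles[idx + 1].end;
	memmove(&c->isles[idx + 1], &c->isles[idx + 2],
		(c->num_isles - idx - 2) * sizeof(struct isle));
	c->num_isles--;
}
```

Once an isle becomes contiguous with the in-order stream, its buffers drain to the "ready" list and are transmitted back through the loopback LL2 queue, which is what the qed_ll2_lb_rxq_completion/qed_ll2_lb_txq_completion handlers in this patch do with the real buffer lists.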

Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
---
 drivers/net/ethernet/qlogic/qed/Makefile   |   2 +-
 drivers/net/ethernet/qlogic/qed/qed.h      |   1 +
 drivers/net/ethernet/qlogic/qed/qed_dev.c  |  14 +-
 drivers/net/ethernet/qlogic/qed/qed_ll2.c  | 559 +++++++++++++++++++++++++++--
 drivers/net/ethernet/qlogic/qed/qed_ll2.h  |   9 +
 drivers/net/ethernet/qlogic/qed/qed_ooo.c  | 510 ++++++++++++++++++++++++++
 drivers/net/ethernet/qlogic/qed/qed_ooo.h  | 116 ++++++
 drivers/net/ethernet/qlogic/qed/qed_roce.c |   1 +
 drivers/net/ethernet/qlogic/qed/qed_spq.c  |   9 +
 9 files changed, 1195 insertions(+), 26 deletions(-)
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.c
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.h

diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index b76669c..9121bf0 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -6,4 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
 qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
 qed-$(CONFIG_QED_LL2) += qed_ll2.o
 qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
-qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
+qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o qed_ooo.o
diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
index a61b1c0..e5626ae 100644
--- a/drivers/net/ethernet/qlogic/qed/qed.h
+++ b/drivers/net/ethernet/qlogic/qed/qed.h
@@ -380,6 +380,7 @@ struct qed_hwfn {
 	/* Protocol related */
 	bool				using_ll2;
 	struct qed_ll2_info		*p_ll2_info;
+	struct qed_ooo_info		*p_ooo_info;
 	struct qed_rdma_info		*p_rdma_info;
 	struct qed_iscsi_info		*p_iscsi_info;
 	struct qed_pf_params		pf_params;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index a4234c0..060e9a4 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -32,6 +32,7 @@
 #include "qed_iscsi.h"
 #include "qed_ll2.h"
 #include "qed_mcp.h"
+#include "qed_ooo.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
 #include "qed_sriov.h"
@@ -157,8 +158,10 @@ void qed_resc_free(struct qed_dev *cdev)
 		qed_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
 #endif
 		if (IS_ENABLED(CONFIG_QEDI) &&
-				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
 			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
+			qed_ooo_free(p_hwfn, p_hwfn->p_ooo_info);
+		}
 		qed_iov_free(p_hwfn);
 		qed_dmae_info_free(p_hwfn);
 		qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -416,6 +419,7 @@ int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 int qed_resc_alloc(struct qed_dev *cdev)
 {
 	struct qed_iscsi_info *p_iscsi_info;
+	struct qed_ooo_info *p_ooo_info;
 #ifdef CONFIG_QED_LL2
 	struct qed_ll2_info *p_ll2_info;
 #endif
@@ -543,6 +547,10 @@ int qed_resc_alloc(struct qed_dev *cdev)
 			if (!p_iscsi_info)
 				goto alloc_no_mem;
 			p_hwfn->p_iscsi_info = p_iscsi_info;
+			p_ooo_info = qed_ooo_alloc(p_hwfn);
+			if (!p_ooo_info)
+				goto alloc_no_mem;
+			p_hwfn->p_ooo_info = p_ooo_info;
 		}
 
 		/* DMA info initialization */
@@ -598,8 +606,10 @@ void qed_resc_setup(struct qed_dev *cdev)
 			qed_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
 #endif
 		if (IS_ENABLED(CONFIG_QEDI) &&
-				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
 			qed_iscsi_setup(p_hwfn, p_hwfn->p_iscsi_info);
+			qed_ooo_setup(p_hwfn, p_hwfn->p_ooo_info);
+		}
 	}
 }
 
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
index e67f3c9..4ce12e9 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
@@ -36,6 +36,7 @@
 #include "qed_int.h"
 #include "qed_ll2.h"
 #include "qed_mcp.h"
+#include "qed_ooo.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
 
@@ -295,27 +296,36 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		list_del(&p_pkt->list_entry);
 		b_last_packet = list_empty(&p_tx->active_descq);
 		list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
-		p_tx->cur_completing_packet = *p_pkt;
-		p_tx->cur_completing_bd_idx = 1;
-		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
-		tx_frag = p_pkt->bds_set[0].tx_frag;
+		if (IS_ENABLED(CONFIG_QEDI) &&
+			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
+			struct qed_ooo_buffer *p_buffer;
+
+			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+		} else {
+			p_tx->cur_completing_packet = *p_pkt;
+			p_tx->cur_completing_bd_idx = 1;
+			b_last_frag = p_tx->cur_completing_bd_idx ==
+				p_pkt->bd_used;
+			tx_frag = p_pkt->bds_set[0].tx_frag;
 #if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
-		if (p_ll2_conn->gsi_enable)
-			qed_ll2b_release_tx_gsi_packet(p_hwfn,
-						       p_ll2_conn->my_id,
-						       p_pkt->cookie,
-						       tx_frag,
-						       b_last_frag,
-						       b_last_packet);
-		else
+			if (p_ll2_conn->gsi_enable)
+				qed_ll2b_release_tx_gsi_packet(p_hwfn,
+					       p_ll2_conn->my_id,
+					       p_pkt->cookie,
+					       tx_frag,
+					       b_last_frag,
+					       b_last_packet);
+			else
 #endif
-			qed_ll2b_complete_tx_packet(p_hwfn,
+				qed_ll2b_complete_tx_packet(p_hwfn,
 						    p_ll2_conn->my_id,
 						    p_pkt->cookie,
 						    tx_frag,
 						    b_last_frag,
 						    b_last_packet);
-
+		}
 	}
 }
 
@@ -546,13 +556,466 @@ void qed_ll2_rxq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		list_del(&p_pkt->list_entry);
 		list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
 
-		rx_buf_addr = p_pkt->rx_buf_addr;
-		cookie = p_pkt->cookie;
+		if (IS_ENABLED(CONFIG_QEDI) &&
+			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
+			struct qed_ooo_buffer *p_buffer;
+
+			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+		} else {
+			rx_buf_addr = p_pkt->rx_buf_addr;
+			cookie = p_pkt->cookie;
+
+			b_last = list_empty(&p_rx->active_descq);
+		}
+	}
+}
+
+#if IS_ENABLED(CONFIG_QEDI)
+static u8 qed_ll2_convert_rx_parse_to_tx_flags(u16 parse_flags)
+{
+	u8 bd_flags = 0;
+
+	if (GET_FIELD(parse_flags, PARSING_AND_ERR_FLAGS_TAG8021QEXIST))
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_VLAN_INSERTION, 1);
+
+	return bd_flags;
+}
+
+static int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+{
+	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
+	struct qed_ll2_rx_queue *p_rx = &p_ll2_conn->rx_queue;
+	u16 packet_length = 0, parse_flags = 0, vlan = 0;
+	struct qed_ll2_rx_packet *p_pkt = NULL;
+	u32 num_ooo_add_to_peninsula = 0, cid;
+	union core_rx_cqe_union *cqe = NULL;
+	u16 cq_new_idx = 0, cq_old_idx = 0;
+	struct qed_ooo_buffer *p_buffer;
+	struct ooo_opaque *iscsi_ooo;
+	u8 placement_offset = 0;
+	u8 cqe_type;
+	int rc;
+
+	cq_new_idx = le16_to_cpu(*p_rx->p_fw_cons);
+	cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
+	if (cq_new_idx == cq_old_idx)
+		return 0;
+
+	while (cq_new_idx != cq_old_idx) {
+		struct core_rx_fast_path_cqe *p_cqe_fp;
+
+		cqe = qed_chain_consume(&p_rx->rcq_chain);
+		cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
+		cqe_type = cqe->rx_cqe_sp.type;
+
+		if (cqe_type != CORE_RX_CQE_TYPE_REGULAR) {
+			DP_NOTICE(p_hwfn,
+				  "Got a non-regular LB LL2 completion [type 0x%02x]\n",
+				  cqe_type);
+			return -EINVAL;
+		}
+		p_cqe_fp = &cqe->rx_cqe_fp;
+
+		placement_offset = p_cqe_fp->placement_offset;
+		parse_flags = le16_to_cpu(p_cqe_fp->parse_flags.flags);
+		packet_length = le16_to_cpu(p_cqe_fp->packet_length);
+		vlan = le16_to_cpu(p_cqe_fp->vlan);
+		iscsi_ooo = (struct ooo_opaque *)&p_cqe_fp->opaque_data;
+		qed_ooo_save_history_entry(p_hwfn, p_hwfn->p_ooo_info,
+					   iscsi_ooo);
+		cid = le32_to_cpu(iscsi_ooo->cid);
+
+		/* Process delete isle first */
+		if (iscsi_ooo->drop_size)
+			qed_ooo_delete_isles(p_hwfn, p_hwfn->p_ooo_info, cid,
+					     iscsi_ooo->drop_isle,
+					     iscsi_ooo->drop_size);
+
+		if (iscsi_ooo->ooo_opcode == TCP_EVENT_NOP)
+			continue;
+
+		/* Now process create/add/join isles */
+		if (list_empty(&p_rx->active_descq)) {
+			DP_NOTICE(p_hwfn,
+				  "LL2 OOO RX chain has no submitted buffers\n");
+			return -EIO;
+		}
+
+		p_pkt = list_first_entry(&p_rx->active_descq,
+					 struct qed_ll2_rx_packet, list_entry);
+
+		if ((iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_NEW_ISLE) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_RIGHT) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_LEFT) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_PEN) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_JOIN)) {
+			if (!p_pkt) {
+				DP_NOTICE(p_hwfn,
+					  "LL2 OOO RX packet is not valid\n");
+				return -EIO;
+			}
+			list_del(&p_pkt->list_entry);
+			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+			p_buffer->packet_length = packet_length;
+			p_buffer->parse_flags = parse_flags;
+			p_buffer->vlan = vlan;
+			p_buffer->placement_offset = placement_offset;
+			qed_chain_consume(&p_rx->rxq_chain);
+			list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
+
+			switch (iscsi_ooo->ooo_opcode) {
+			case TCP_EVENT_ADD_NEW_ISLE:
+				qed_ooo_add_new_isle(p_hwfn,
+						     p_hwfn->p_ooo_info,
+						     cid,
+						     iscsi_ooo->ooo_isle,
+						     p_buffer);
+				break;
+			case TCP_EVENT_ADD_ISLE_RIGHT:
+				qed_ooo_add_new_buffer(p_hwfn,
+						       p_hwfn->p_ooo_info,
+						       cid,
+						       iscsi_ooo->ooo_isle,
+						       p_buffer,
+						       QED_OOO_RIGHT_BUF);
+				break;
+			case TCP_EVENT_ADD_ISLE_LEFT:
+				qed_ooo_add_new_buffer(p_hwfn,
+						       p_hwfn->p_ooo_info,
+						       cid,
+						       iscsi_ooo->ooo_isle,
+						       p_buffer,
+						       QED_OOO_LEFT_BUF);
+				break;
+			case TCP_EVENT_JOIN:
+				qed_ooo_add_new_buffer(p_hwfn,
+						       p_hwfn->p_ooo_info,
+						       cid,
+						       iscsi_ooo->ooo_isle +
+						       1,
+						       p_buffer,
+						       QED_OOO_LEFT_BUF);
+				qed_ooo_join_isles(p_hwfn,
+						   p_hwfn->p_ooo_info,
+						   cid, iscsi_ooo->ooo_isle);
+				break;
+			case TCP_EVENT_ADD_PEN:
+				num_ooo_add_to_peninsula++;
+				qed_ooo_put_ready_buffer(p_hwfn,
+							 p_hwfn->p_ooo_info,
+							 p_buffer, true);
+				break;
+			}
+		} else {
+			DP_NOTICE(p_hwfn,
+				  "Unexpected event (%d) TX OOO completion\n",
+				  iscsi_ooo->ooo_opcode);
+		}
+	}
 
-		b_last = list_empty(&p_rx->active_descq);
+	/* Submit RX buffer here */
+	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
+						   p_hwfn->p_ooo_info))) {
+		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
+					    p_buffer->rx_buffer_phys_addr,
+					    0, p_buffer, true);
+		if (rc) {
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+			break;
+		}
 	}
+
+	/* Submit Tx buffers here */
+	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
+						    p_hwfn->p_ooo_info))) {
+		u16 l4_hdr_offset_w = 0;
+		dma_addr_t first_frag;
+		u8 bd_flags = 0;
+
+		first_frag = p_buffer->rx_buffer_phys_addr +
+			     p_buffer->placement_offset;
+		parse_flags = p_buffer->parse_flags;
+		bd_flags = qed_ll2_convert_rx_parse_to_tx_flags(parse_flags);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_FORCE_VLAN_MODE, 1);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_L4_PROTOCOL, 1);
+
+		rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
+					       p_buffer->vlan, bd_flags,
+					       l4_hdr_offset_w,
+					       p_ll2_conn->tx_dest, 0,
+					       first_frag,
+					       p_buffer->packet_length,
+					       p_buffer, true);
+		if (rc) {
+			qed_ooo_put_ready_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						 p_buffer, false);
+			break;
+		}
+	}
+
+	return 0;
 }
 
+static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+{
+	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
+	struct qed_ll2_tx_queue *p_tx = &p_ll2_conn->tx_queue;
+	struct qed_ll2_tx_packet *p_pkt = NULL;
+	struct qed_ooo_buffer *p_buffer;
+	bool b_dont_submit_rx = false;
+	u16 new_idx = 0, num_bds = 0;
+	int rc;
+
+	new_idx = le16_to_cpu(*p_tx->p_fw_cons);
+	num_bds = ((s16)new_idx - (s16)p_tx->bds_idx);
+
+	if (!num_bds)
+		return 0;
+
+	while (num_bds) {
+		if (list_empty(&p_tx->active_descq))
+			return -EINVAL;
+
+		p_pkt = list_first_entry(&p_tx->active_descq,
+					 struct qed_ll2_tx_packet, list_entry);
+		if (!p_pkt)
+			return -EINVAL;
+
+		if (p_pkt->bd_used != 1) {
+			DP_NOTICE(p_hwfn,
+				  "Unexpectedly many BDs(%d) in TX OOO completion\n",
+				  p_pkt->bd_used);
+			return -EINVAL;
+		}
+
+		list_del(&p_pkt->list_entry);
+
+		num_bds--;
+		p_tx->bds_idx++;
+		qed_chain_consume(&p_tx->txq_chain);
+
+		p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+		list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
+
+		if (b_dont_submit_rx) {
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+			continue;
+		}
+
+		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
+					    p_buffer->rx_buffer_phys_addr, 0,
+					    p_buffer, true);
+		if (rc != 0) {
+			qed_ooo_put_free_buffer(p_hwfn,
+						p_hwfn->p_ooo_info, p_buffer);
+			b_dont_submit_rx = true;
+		}
+	}
+
+	/* Submit Tx buffers here */
+	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
+						    p_hwfn->p_ooo_info))) {
+		u16 l4_hdr_offset_w = 0, parse_flags = p_buffer->parse_flags;
+		dma_addr_t first_frag;
+		u8 bd_flags = 0;
+
+		first_frag = p_buffer->rx_buffer_phys_addr +
+		    p_buffer->placement_offset;
+		bd_flags = qed_ll2_convert_rx_parse_to_tx_flags(parse_flags);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_FORCE_VLAN_MODE, 1);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_L4_PROTOCOL, 1);
+		rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
+					       p_buffer->vlan, bd_flags,
+					       l4_hdr_offset_w,
+					       p_ll2_conn->tx_dest, 0,
+					       first_frag,
+					       p_buffer->packet_length,
+					       p_buffer, true);
+		if (rc != 0) {
+			qed_ooo_put_ready_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						 p_buffer, false);
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+qed_ll2_acquire_connection_ooo(struct qed_hwfn *p_hwfn,
+			       struct qed_ll2_info *p_ll2_info,
+			       u16 rx_num_ooo_buffers, u16 mtu)
+{
+	struct qed_ooo_buffer *p_buf = NULL;
+	void *p_virt;
+	u16 buf_idx;
+	int rc = 0;
+
+	if (p_ll2_info->conn_type != QED_LL2_TYPE_ISCSI_OOO)
+		return rc;
+
+	if (!rx_num_ooo_buffers)
+		return -EINVAL;
+
+	for (buf_idx = 0; buf_idx < rx_num_ooo_buffers; buf_idx++) {
+		p_buf = kzalloc(sizeof(*p_buf), GFP_KERNEL);
+		if (!p_buf) {
+			DP_NOTICE(p_hwfn,
+				  "Failed to allocate ooo descriptor\n");
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		p_buf->rx_buffer_size = mtu + 26 + ETH_CACHE_LINE_SIZE;
+		p_buf->rx_buffer_size = (p_buf->rx_buffer_size +
+					 ETH_CACHE_LINE_SIZE - 1) &
+					~(ETH_CACHE_LINE_SIZE - 1);
+		p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+					    p_buf->rx_buffer_size,
+					    &p_buf->rx_buffer_phys_addr,
+					    GFP_KERNEL);
+		if (!p_virt) {
+			DP_NOTICE(p_hwfn, "Failed to allocate ooo buffer\n");
+			kfree(p_buf);
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		p_buf->rx_buffer_virt_addr = p_virt;
+		qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info, p_buf);
+	}
+
+	DP_VERBOSE(p_hwfn, QED_MSG_LL2,
+		   "Allocated [%04x] LL2 OOO buffers [each of size 0x%08x]\n",
+		   rx_num_ooo_buffers, p_buf->rx_buffer_size);
+
+out:
+	return rc;
+}
+
+static void
+qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
+				 struct qed_ll2_info *p_ll2_conn)
+{
+	struct qed_ooo_buffer *p_buffer;
+	int rc;
+
+	if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
+		return;
+
+	qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
+	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
+						   p_hwfn->p_ooo_info))) {
+		rc = qed_ll2_post_rx_buffer(p_hwfn,
+					    p_ll2_conn->my_id,
+					    p_buffer->rx_buffer_phys_addr,
+					    0, p_buffer, true);
+		if (rc) {
+			qed_ooo_put_free_buffer(p_hwfn,
+						p_hwfn->p_ooo_info, p_buffer);
+			break;
+		}
+	}
+}
+
+static void qed_ll2_release_connection_ooo(struct qed_hwfn *p_hwfn,
+					   struct qed_ll2_info *p_ll2_conn)
+{
+	struct qed_ooo_buffer *p_buffer;
+
+	if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
+		return;
+
+	qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
+	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
+						   p_hwfn->p_ooo_info))) {
+		dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+				  p_buffer->rx_buffer_size,
+				  p_buffer->rx_buffer_virt_addr,
+				  p_buffer->rx_buffer_phys_addr);
+		kfree(p_buffer);
+	}
+}
+
+static void qed_ll2_stop_ooo(struct qed_dev *cdev)
+{
+	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+	u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
+
+	DP_VERBOSE(cdev, QED_MSG_STORAGE, "Stopping LL2 OOO queue [%02x]\n",
+		   *handle);
+
+	qed_ll2_terminate_connection(hwfn, *handle);
+	qed_ll2_release_connection(hwfn, *handle);
+	*handle = QED_LL2_UNUSED_HANDLE;
+}
+
+static int qed_ll2_start_ooo(struct qed_dev *cdev,
+			     struct qed_ll2_params *params)
+{
+	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+	u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
+	struct qed_ll2_info *ll2_info;
+	int rc;
+
+	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
+	if (!ll2_info) {
+		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
+		return -ENOMEM;
+	}
+	ll2_info->conn_type = QED_LL2_TYPE_ISCSI_OOO;
+	ll2_info->mtu = params->mtu;
+	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
+	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
+	ll2_info->tx_tc = OOO_LB_TC;
+	ll2_info->tx_dest = CORE_TX_DEST_LB;
+
+	rc = qed_ll2_acquire_connection(hwfn, ll2_info,
+					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
+					handle);
+	kfree(ll2_info);
+	if (rc) {
+		DP_INFO(cdev, "Failed to acquire LL2 OOO connection\n");
+		goto out;
+	}
+
+	rc = qed_ll2_establish_connection(hwfn, *handle);
+	if (rc) {
+		DP_INFO(cdev, "Failed to establish LL2 OOO connection\n");
+		goto fail;
+	}
+
+	return 0;
+
+fail:
+	qed_ll2_release_connection(hwfn, *handle);
+out:
+	*handle = QED_LL2_UNUSED_HANDLE;
+	return rc;
+}
+#else /* IS_ENABLED(CONFIG_QEDI) */
+static inline int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn,
+		void *p_cookie) { return -EINVAL; }
+static inline int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn,
+		void *p_cookie) { return -EINVAL; }
+static inline int
+qed_ll2_acquire_connection_ooo(struct qed_hwfn *p_hwfn,
+			struct qed_ll2_info *p_ll2_info,
+			u16 rx_num_ooo_buffers, u16 mtu) { return -EINVAL; }
+static inline void
+qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
+			struct qed_ll2_info *p_ll2_conn) { return; }
+static inline void qed_ll2_release_connection_ooo(struct qed_hwfn *p_hwfn,
+			struct qed_ll2_info *p_ll2_conn) { return; }
+static inline void qed_ll2_stop_ooo(struct qed_dev *cdev) { return; }
+static inline int qed_ll2_start_ooo(struct qed_dev *cdev,
+			struct qed_ll2_params *params) { return -EINVAL; }
+#endif /* IS_ENABLED(CONFIG_QEDI) */
+
 static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
 				     struct qed_ll2_info *p_ll2_conn,
 				     u8 action_on_error)
@@ -594,7 +1057,8 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
 	p_ramrod->drop_ttl0_flg = p_ll2_conn->rx_drop_ttl0_flg;
 	p_ramrod->inner_vlan_removal_en = p_ll2_conn->rx_vlan_removal_en;
 	p_ramrod->queue_id = p_ll2_conn->queue_id;
-	p_ramrod->main_func_queue = 1;
+	p_ramrod->main_func_queue = (conn_type == QED_LL2_TYPE_ISCSI_OOO) ? 0
+									  : 1;
 
 	if ((IS_MF_DEFAULT(p_hwfn) || IS_MF_SI(p_hwfn)) &&
 	    p_ramrod->main_func_queue && (conn_type != QED_LL2_TYPE_ROCE)) {
@@ -625,6 +1089,11 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
 	if (!QED_LL2_TX_REGISTERED(p_ll2_conn))
 		return 0;
 
+	if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
+		p_ll2_conn->tx_stats_en = 0;
+	else
+		p_ll2_conn->tx_stats_en = 1;
+
 	/* Get SPQ entry */
 	memset(&init_data, 0, sizeof(init_data));
 	init_data.cid = p_ll2_conn->cid;
@@ -642,7 +1111,6 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
 	p_ramrod->sb_id = cpu_to_le16(qed_int_get_sp_sb_id(p_hwfn));
 	p_ramrod->sb_index = p_tx->tx_sb_index;
 	p_ramrod->mtu = cpu_to_le16(p_ll2_conn->mtu);
-	p_ll2_conn->tx_stats_en = 1;
 	p_ramrod->stats_en = p_ll2_conn->tx_stats_en;
 	p_ramrod->stats_id = p_ll2_conn->tx_stats_id;
 
@@ -866,9 +1334,22 @@ int qed_ll2_acquire_connection(struct qed_hwfn *p_hwfn,
 	if (rc)
 		goto q_allocate_fail;
 
+	if (IS_ENABLED(CONFIG_QEDI)) {
+		rc = qed_ll2_acquire_connection_ooo(p_hwfn, p_ll2_info,
+					    rx_num_desc * 2, p_params->mtu);
+		if (rc)
+			goto q_allocate_fail;
+	}
+
 	/* Register callbacks for the Rx/Tx queues */
-	comp_rx_cb = qed_ll2_rxq_completion;
-	comp_tx_cb = qed_ll2_txq_completion;
+	if (IS_ENABLED(CONFIG_QEDI) &&
+			p_params->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
+		comp_rx_cb = qed_ll2_lb_rxq_completion;
+		comp_tx_cb = qed_ll2_lb_txq_completion;
+	} else {
+		comp_rx_cb = qed_ll2_rxq_completion;
+		comp_tx_cb = qed_ll2_txq_completion;
+	}
 
 	if (rx_num_desc) {
 		qed_int_register_cb(p_hwfn, comp_rx_cb,
@@ -981,6 +1462,9 @@ int qed_ll2_establish_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
 	if (p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)
 		qed_wr(p_hwfn, p_hwfn->p_main_ptt, PRS_REG_USE_LIGHT_L2, 1);
 
+	if (IS_ENABLED(CONFIG_QEDI))
+		qed_ll2_establish_connection_ooo(p_hwfn, p_ll2_conn);
+
 	return rc;
 }
 
@@ -1223,6 +1707,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 			      u16 vlan,
 			      u8 bd_flags,
 			      u16 l4_hdr_offset_w,
+			      enum qed_ll2_tx_dest e_tx_dest,
 			      enum qed_ll2_roce_flavor_type qed_roce_flavor,
 			      dma_addr_t first_frag,
 			      u16 first_frag_len, void *cookie, u8 notify_fw)
@@ -1232,6 +1717,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 	enum core_roce_flavor_type roce_flavor;
 	struct qed_ll2_tx_queue *p_tx;
 	struct qed_chain *p_tx_chain;
+	enum core_tx_dest tx_dest;
 	unsigned long flags;
 	int rc = 0;
 
@@ -1262,6 +1748,8 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 		goto out;
 	}
 
+	tx_dest = e_tx_dest == QED_LL2_TX_DEST_NW ? CORE_TX_DEST_NW :
+						    CORE_TX_DEST_LB;
 	if (qed_roce_flavor == QED_LL2_ROCE) {
 		roce_flavor = CORE_ROCE;
 	} else if (qed_roce_flavor == QED_LL2_RROCE) {
@@ -1276,7 +1764,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 				      num_of_bds, first_frag,
 				      first_frag_len, cookie, notify_fw);
 	qed_ll2_prepare_tx_packet_set_bd(p_hwfn, p_ll2_conn, p_curp,
-					 num_of_bds, CORE_TX_DEST_NW,
+					 num_of_bds, tx_dest,
 					 vlan, bd_flags, l4_hdr_offset_w,
 					 roce_flavor,
 					 first_frag, first_frag_len);
@@ -1351,6 +1839,10 @@ int qed_ll2_terminate_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		qed_ll2_rxq_flush(p_hwfn, connection_handle);
 	}
 
+	if (IS_ENABLED(CONFIG_QEDI) &&
+			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
+		qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
+
 	return rc;
 }
 
@@ -1381,6 +1873,9 @@ void qed_ll2_release_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
 
 	qed_cxt_release_cid(p_hwfn, p_ll2_conn->cid);
 
+	if (IS_ENABLED(CONFIG_QEDI))
+		qed_ll2_release_connection_ooo(p_hwfn, p_ll2_conn);
+
 	mutex_lock(&p_ll2_conn->mutex);
 	p_ll2_conn->b_active = false;
 	mutex_unlock(&p_ll2_conn->mutex);
@@ -1628,6 +2123,18 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 		goto release_terminate;
 	}
 
+	if (IS_ENABLED(CONFIG_QEDI) &&
+		(cdev->hwfns[0].hw_info.personality == QED_PCI_ISCSI) &&
+		cdev->hwfns[0].pf_params.iscsi_pf_params.ooo_enable) {
+		DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
+		rc = qed_ll2_start_ooo(cdev, params);
+		if (rc) {
+			DP_INFO(cdev,
+				"Failed to initialize the OOO LL2 queue\n");
+			goto release_terminate;
+		}
+	}
+
 	p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
 	if (!p_ptt) {
 		DP_INFO(cdev, "Failed to acquire PTT\n");
@@ -1677,6 +2184,11 @@ static int qed_ll2_stop(struct qed_dev *cdev)
 	qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt);
 	eth_zero_addr(cdev->ll2_mac_address);
 
+	if (IS_ENABLED(CONFIG_QEDI) &&
+		(cdev->hwfns[0].hw_info.personality == QED_PCI_ISCSI) &&
+		cdev->hwfns[0].pf_params.iscsi_pf_params.ooo_enable)
+		qed_ll2_stop_ooo(cdev);
+
 	rc = qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev),
 					  cdev->ll2->handle);
 	if (rc)
@@ -1731,7 +2243,8 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb)
 	rc = qed_ll2_prepare_tx_packet(QED_LEADING_HWFN(cdev),
 				       cdev->ll2->handle,
 				       1 + skb_shinfo(skb)->nr_frags,
-				       vlan, flags, 0, 0 /* RoCE FLAVOR */,
+				       vlan, flags, 0, QED_LL2_TX_DEST_NW,
+				       0 /* RoCE FLAVOR */,
 				       mapping, skb->len, skb, 1);
 	if (rc)
 		goto err;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.h b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
index 80a5dc2..2b31d30 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
@@ -41,6 +41,12 @@ enum qed_ll2_conn_type {
 	MAX_QED_LL2_RX_CONN_TYPE
 };
 
+enum qed_ll2_tx_dest {
+	QED_LL2_TX_DEST_NW, /* Light L2 TX Destination to the Network */
+	QED_LL2_TX_DEST_LB, /* Light L2 TX Destination to the Loopback */
+	QED_LL2_TX_DEST_MAX
+};
+
 struct qed_ll2_rx_packet {
 	struct list_head list_entry;
 	struct core_rx_bd_with_buff_len *rxq_bd;
@@ -192,6 +198,8 @@ int qed_ll2_post_rx_buffer(struct qed_hwfn *p_hwfn,
  * @param l4_hdr_offset_w	L4 Header Offset from start of packet
  *				(in words). This is needed if both l4_csum
  *				and ipv6_ext are set
+ * @param e_tx_dest             indicates if the packet is to be transmitted via
+ *                              loopback or to the network
  * @param first_frag
  * @param first_frag_len
  * @param cookie
@@ -206,6 +214,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 			      u16 vlan,
 			      u8 bd_flags,
 			      u16 l4_hdr_offset_w,
+			      enum qed_ll2_tx_dest e_tx_dest,
 			      enum qed_ll2_roce_flavor_type qed_roce_flavor,
 			      dma_addr_t first_frag,
 			      u16 first_frag_len, void *cookie, u8 notify_fw);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ooo.c b/drivers/net/ethernet/qlogic/qed/qed_ooo.c
new file mode 100644
index 0000000..a037a6f
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_ooo.c
@@ -0,0 +1,510 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include "qed.h"
+#include "qed_iscsi.h"
+#include "qed_ll2.h"
+#include "qed_ooo.h"
+
+static struct qed_ooo_archipelago
+*qed_ooo_seek_archipelago(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info
+			  *p_ooo_info,
+			  u32 cid)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+
+	list_for_each_entry(p_archipelago,
+			    &p_ooo_info->archipelagos_list, list_entry) {
+		if (p_archipelago->cid == cid)
+			return p_archipelago;
+	}
+
+	return NULL;
+}
+
+static struct qed_ooo_isle *qed_ooo_seek_isle(struct qed_hwfn *p_hwfn,
+					      struct qed_ooo_info *p_ooo_info,
+					      u32 cid, u8 isle)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_isle = NULL;
+	u8 the_num_of_isle = 1;
+
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	if (!p_archipelago) {
+		DP_NOTICE(p_hwfn,
+			  "Connection %d is not found in OOO list\n", cid);
+		return NULL;
+	}
+
+	list_for_each_entry(p_isle, &p_archipelago->isles_list, list_entry) {
+		if (the_num_of_isle == isle)
+			return p_isle;
+		the_num_of_isle++;
+	}
+
+	return NULL;
+}
+
+void qed_ooo_save_history_entry(struct qed_hwfn *p_hwfn,
+				struct qed_ooo_info *p_ooo_info,
+				struct ooo_opaque *p_cqe)
+{
+	struct qed_ooo_history *p_history = &p_ooo_info->ooo_history;
+
+	if (p_history->head_idx == p_history->num_of_cqes)
+		p_history->head_idx = 0;
+	p_history->p_cqes[p_history->head_idx] = *p_cqe;
+	p_history->head_idx++;
+}
+
+struct qed_ooo_info *qed_ooo_alloc(struct qed_hwfn *p_hwfn)
+{
+	struct qed_ooo_info *p_ooo_info;
+	u16 max_num_archipelagos = 0;
+	u16 max_num_isles = 0;
+	u32 i;
+
+	if (p_hwfn->hw_info.personality != QED_PCI_ISCSI) {
+		DP_NOTICE(p_hwfn,
+			  "Failed to allocate qed_ooo_info: unknown personality\n");
+		return NULL;
+	}
+
+	max_num_archipelagos = p_hwfn->pf_params.iscsi_pf_params.num_cons;
+	max_num_isles = QED_MAX_NUM_ISLES + max_num_archipelagos;
+
+	if (!max_num_archipelagos) {
+		DP_NOTICE(p_hwfn,
+			  "Failed to allocate qed_ooo_info: unknown amount of connections\n");
+		return NULL;
+	}
+
+	p_ooo_info = kzalloc(sizeof(*p_ooo_info), GFP_KERNEL);
+	if (!p_ooo_info) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info\n");
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&p_ooo_info->free_buffers_list);
+	INIT_LIST_HEAD(&p_ooo_info->ready_buffers_list);
+	INIT_LIST_HEAD(&p_ooo_info->free_isles_list);
+	INIT_LIST_HEAD(&p_ooo_info->free_archipelagos_list);
+	INIT_LIST_HEAD(&p_ooo_info->archipelagos_list);
+
+	p_ooo_info->p_isles_mem = kcalloc(max_num_isles,
+					  sizeof(struct qed_ooo_isle),
+					  GFP_KERNEL);
+	if (!p_ooo_info->p_isles_mem) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info(isles)\n");
+		goto no_isles_mem;
+	}
+
+	for (i = 0; i < max_num_isles; i++) {
+		INIT_LIST_HEAD(&p_ooo_info->p_isles_mem[i].buffers_list);
+		list_add_tail(&p_ooo_info->p_isles_mem[i].list_entry,
+			      &p_ooo_info->free_isles_list);
+	}
+
+	p_ooo_info->p_archipelagos_mem =
+				kcalloc(max_num_archipelagos,
+					sizeof(struct qed_ooo_archipelago),
+					GFP_KERNEL);
+	if (!p_ooo_info->p_archipelagos_mem) {
+		DP_NOTICE(p_hwfn,
+			  "Failed to allocate qed_ooo_info(archipelagos)\n");
+		goto no_archipelagos_mem;
+	}
+
+	for (i = 0; i < max_num_archipelagos; i++) {
+		INIT_LIST_HEAD(&p_ooo_info->p_archipelagos_mem[i].isles_list);
+		list_add_tail(&p_ooo_info->p_archipelagos_mem[i].list_entry,
+			      &p_ooo_info->free_archipelagos_list);
+	}
+
+	p_ooo_info->ooo_history.p_cqes =
+				kcalloc(QED_MAX_NUM_OOO_HISTORY_ENTRIES,
+					sizeof(struct ooo_opaque),
+					GFP_KERNEL);
+	if (!p_ooo_info->ooo_history.p_cqes) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info(history)\n");
+		goto no_history_mem;
+	}
+
+	return p_ooo_info;
+
+no_history_mem:
+	kfree(p_ooo_info->p_archipelagos_mem);
+no_archipelagos_mem:
+	kfree(p_ooo_info->p_isles_mem);
+no_isles_mem:
+	kfree(p_ooo_info);
+	return NULL;
+}
+
+void qed_ooo_release_connection_isles(struct qed_hwfn *p_hwfn,
+				      struct qed_ooo_info *p_ooo_info, u32 cid)
+{
+	struct qed_ooo_archipelago *p_archipelago;
+	struct qed_ooo_buffer *p_buffer;
+	struct qed_ooo_isle *p_isle;
+	bool b_found = false;
+
+	if (list_empty(&p_ooo_info->archipelagos_list))
+		return;
+
+	list_for_each_entry(p_archipelago,
+			    &p_ooo_info->archipelagos_list, list_entry) {
+		if (p_archipelago->cid == cid) {
+			list_del(&p_archipelago->list_entry);
+			b_found = true;
+			break;
+		}
+	}
+
+	if (!b_found)
+		return;
+
+	while (!list_empty(&p_archipelago->isles_list)) {
+		p_isle = list_first_entry(&p_archipelago->isles_list,
+					  struct qed_ooo_isle, list_entry);
+
+		list_del(&p_isle->list_entry);
+
+		while (!list_empty(&p_isle->buffers_list)) {
+			p_buffer = list_first_entry(&p_isle->buffers_list,
+						    struct qed_ooo_buffer,
+						    list_entry);
+
+			if (!p_buffer)
+				break;
+
+			list_del(&p_buffer->list_entry);
+			list_add_tail(&p_buffer->list_entry,
+				      &p_ooo_info->free_buffers_list);
+		}
+		list_add_tail(&p_isle->list_entry,
+			      &p_ooo_info->free_isles_list);
+	}
+
+	list_add_tail(&p_archipelago->list_entry,
+		      &p_ooo_info->free_archipelagos_list);
+}
+
+void qed_ooo_release_all_isles(struct qed_hwfn *p_hwfn,
+			       struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_archipelago *p_arch;
+	struct qed_ooo_buffer *p_buffer;
+	struct qed_ooo_isle *p_isle;
+
+	while (!list_empty(&p_ooo_info->archipelagos_list)) {
+		p_arch = list_first_entry(&p_ooo_info->archipelagos_list,
+					  struct qed_ooo_archipelago,
+					  list_entry);
+
+		list_del(&p_arch->list_entry);
+
+		while (!list_empty(&p_arch->isles_list)) {
+			p_isle = list_first_entry(&p_arch->isles_list,
+						  struct qed_ooo_isle,
+						  list_entry);
+
+			list_del(&p_isle->list_entry);
+
+			while (!list_empty(&p_isle->buffers_list)) {
+				p_buffer =
+				    list_first_entry(&p_isle->buffers_list,
+						     struct qed_ooo_buffer,
+						     list_entry);
+
+				if (!p_buffer)
+					break;
+
+				list_del(&p_buffer->list_entry);
+				list_add_tail(&p_buffer->list_entry,
+					      &p_ooo_info->free_buffers_list);
+			}
+			list_add_tail(&p_isle->list_entry,
+				      &p_ooo_info->free_isles_list);
+		}
+		list_add_tail(&p_arch->list_entry,
+			      &p_ooo_info->free_archipelagos_list);
+	}
+	if (!list_empty(&p_ooo_info->ready_buffers_list))
+		list_splice_tail_init(&p_ooo_info->ready_buffers_list,
+				      &p_ooo_info->free_buffers_list);
+}
+
+void qed_ooo_setup(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info)
+{
+	qed_ooo_release_all_isles(p_hwfn, p_ooo_info);
+	memset(p_ooo_info->ooo_history.p_cqes, 0,
+	       p_ooo_info->ooo_history.num_of_cqes *
+	       sizeof(struct ooo_opaque));
+	p_ooo_info->ooo_history.head_idx = 0;
+}
+
+void qed_ooo_free(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_buffer *p_buffer;
+
+	qed_ooo_release_all_isles(p_hwfn, p_ooo_info);
+	while (!list_empty(&p_ooo_info->free_buffers_list)) {
+		p_buffer = list_first_entry(&p_ooo_info->free_buffers_list,
+					    struct qed_ooo_buffer, list_entry);
+
+		if (!p_buffer)
+			break;
+
+		list_del(&p_buffer->list_entry);
+		dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+				  p_buffer->rx_buffer_size,
+				  p_buffer->rx_buffer_virt_addr,
+				  p_buffer->rx_buffer_phys_addr);
+		kfree(p_buffer);
+	}
+
+	kfree(p_ooo_info->p_isles_mem);
+	kfree(p_ooo_info->p_archipelagos_mem);
+	kfree(p_ooo_info->ooo_history.p_cqes);
+	kfree(p_ooo_info);
+}
+
+void qed_ooo_put_free_buffer(struct qed_hwfn *p_hwfn,
+			     struct qed_ooo_info *p_ooo_info,
+			     struct qed_ooo_buffer *p_buffer)
+{
+	list_add_tail(&p_buffer->list_entry, &p_ooo_info->free_buffers_list);
+}
+
+struct qed_ooo_buffer *qed_ooo_get_free_buffer(struct qed_hwfn *p_hwfn,
+					       struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_buffer *p_buffer = NULL;
+
+	if (!list_empty(&p_ooo_info->free_buffers_list)) {
+		p_buffer = list_first_entry(&p_ooo_info->free_buffers_list,
+					    struct qed_ooo_buffer, list_entry);
+
+		list_del(&p_buffer->list_entry);
+	}
+
+	return p_buffer;
+}
+
+void qed_ooo_put_ready_buffer(struct qed_hwfn *p_hwfn,
+			      struct qed_ooo_info *p_ooo_info,
+			      struct qed_ooo_buffer *p_buffer, u8 on_tail)
+{
+	if (on_tail)
+		list_add_tail(&p_buffer->list_entry,
+			      &p_ooo_info->ready_buffers_list);
+	else
+		list_add(&p_buffer->list_entry,
+			 &p_ooo_info->ready_buffers_list);
+}
+
+struct qed_ooo_buffer *qed_ooo_get_ready_buffer(struct qed_hwfn *p_hwfn,
+						struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_buffer *p_buffer = NULL;
+
+	if (!list_empty(&p_ooo_info->ready_buffers_list)) {
+		p_buffer = list_first_entry(&p_ooo_info->ready_buffers_list,
+					    struct qed_ooo_buffer, list_entry);
+
+		list_del(&p_buffer->list_entry);
+	}
+
+	return p_buffer;
+}
+
+void qed_ooo_delete_isles(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid, u8 drop_isle, u8 drop_size)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_isle = NULL;
+	u8 isle_idx;
+
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	for (isle_idx = 0; isle_idx < drop_size; isle_idx++) {
+		p_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid, drop_isle);
+		if (!p_isle) {
+			DP_NOTICE(p_hwfn,
+				  "Isle %d is not found (cid %d)\n",
+				  drop_isle, cid);
+			return;
+		}
+		if (list_empty(&p_isle->buffers_list))
+			DP_NOTICE(p_hwfn,
+				  "Isle %d is empty (cid %d)\n", drop_isle, cid);
+		else
+			list_splice_tail_init(&p_isle->buffers_list,
+					      &p_ooo_info->free_buffers_list);
+
+		list_del(&p_isle->list_entry);
+		p_ooo_info->cur_isles_number--;
+		list_add(&p_isle->list_entry, &p_ooo_info->free_isles_list);
+	}
+
+	if (list_empty(&p_archipelago->isles_list)) {
+		list_del(&p_archipelago->list_entry);
+		list_add(&p_archipelago->list_entry,
+			 &p_ooo_info->free_archipelagos_list);
+	}
+}
+
+void qed_ooo_add_new_isle(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid, u8 ooo_isle,
+			  struct qed_ooo_buffer *p_buffer)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_prev_isle = NULL;
+	struct qed_ooo_isle *p_isle = NULL;
+
+	if (ooo_isle > 1) {
+		p_prev_isle = qed_ooo_seek_isle(p_hwfn,
+						p_ooo_info, cid, ooo_isle - 1);
+		if (!p_prev_isle) {
+			DP_NOTICE(p_hwfn,
+				  "Isle %d is not found (cid %d)\n",
+				  ooo_isle - 1, cid);
+			return;
+		}
+	}
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	if (!p_archipelago && (ooo_isle != 1)) {
+		DP_NOTICE(p_hwfn,
+			  "Connection %d is not found in OOO list\n", cid);
+		return;
+	}
+
+	if (!list_empty(&p_ooo_info->free_isles_list)) {
+		p_isle = list_first_entry(&p_ooo_info->free_isles_list,
+					  struct qed_ooo_isle, list_entry);
+
+		list_del(&p_isle->list_entry);
+		if (!list_empty(&p_isle->buffers_list)) {
+			DP_NOTICE(p_hwfn, "Free isle is not empty\n");
+			INIT_LIST_HEAD(&p_isle->buffers_list);
+		}
+	} else {
+		DP_NOTICE(p_hwfn, "No more free isles\n");
+		return;
+	}
+
+	if (!p_archipelago &&
+	    !list_empty(&p_ooo_info->free_archipelagos_list)) {
+		p_archipelago =
+		    list_first_entry(&p_ooo_info->free_archipelagos_list,
+				     struct qed_ooo_archipelago, list_entry);
+
+		list_del(&p_archipelago->list_entry);
+		if (!list_empty(&p_archipelago->isles_list)) {
+			DP_NOTICE(p_hwfn,
+				  "Free OOO connection is not empty\n");
+			INIT_LIST_HEAD(&p_archipelago->isles_list);
+		}
+		p_archipelago->cid = cid;
+		list_add(&p_archipelago->list_entry,
+			 &p_ooo_info->archipelagos_list);
+	} else if (!p_archipelago) {
+		DP_NOTICE(p_hwfn, "No more free OOO connections\n");
+		list_add(&p_isle->list_entry,
+			 &p_ooo_info->free_isles_list);
+		list_add(&p_buffer->list_entry,
+			 &p_ooo_info->free_buffers_list);
+		return;
+	}
+
+	list_add(&p_buffer->list_entry, &p_isle->buffers_list);
+	p_ooo_info->cur_isles_number++;
+	p_ooo_info->gen_isles_number++;
+
+	if (p_ooo_info->cur_isles_number > p_ooo_info->max_isles_number)
+		p_ooo_info->max_isles_number = p_ooo_info->cur_isles_number;
+
+	if (!p_prev_isle)
+		list_add(&p_isle->list_entry, &p_archipelago->isles_list);
+	else
+		list_add(&p_isle->list_entry, &p_prev_isle->list_entry);
+}
+
+void qed_ooo_add_new_buffer(struct qed_hwfn *p_hwfn,
+			    struct qed_ooo_info *p_ooo_info,
+			    u32 cid,
+			    u8 ooo_isle,
+			    struct qed_ooo_buffer *p_buffer, u8 buffer_side)
+{
+	struct qed_ooo_isle *p_isle = NULL;
+
+	p_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid, ooo_isle);
+	if (!p_isle) {
+		DP_NOTICE(p_hwfn,
+			  "Isle %d is not found (cid %d)\n", ooo_isle, cid);
+		return;
+	}
+
+	if (buffer_side == QED_OOO_LEFT_BUF)
+		list_add(&p_buffer->list_entry, &p_isle->buffers_list);
+	else
+		list_add_tail(&p_buffer->list_entry, &p_isle->buffers_list);
+}
+
+void qed_ooo_join_isles(struct qed_hwfn *p_hwfn,
+			struct qed_ooo_info *p_ooo_info, u32 cid, u8 left_isle)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_right_isle = NULL;
+	struct qed_ooo_isle *p_left_isle = NULL;
+
+	p_right_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid,
+					 left_isle + 1);
+	if (!p_right_isle) {
+		DP_NOTICE(p_hwfn,
+			  "Right isle %d is not found (cid %d)\n",
+			  left_isle + 1, cid);
+		return;
+	}
+
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	list_del(&p_right_isle->list_entry);
+	p_ooo_info->cur_isles_number--;
+	if (left_isle) {
+		p_left_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid,
+						left_isle);
+		if (!p_left_isle) {
+			DP_NOTICE(p_hwfn,
+				  "Left isle %d is not found (cid %d)\n",
+				  left_isle, cid);
+			return;
+		}
+		list_splice_tail_init(&p_right_isle->buffers_list,
+				      &p_left_isle->buffers_list);
+	} else {
+		list_splice_tail_init(&p_right_isle->buffers_list,
+				      &p_ooo_info->ready_buffers_list);
+		if (list_empty(&p_archipelago->isles_list)) {
+			list_del(&p_archipelago->list_entry);
+			list_add(&p_archipelago->list_entry,
+				 &p_ooo_info->free_archipelagos_list);
+		}
+	}
+	list_add_tail(&p_right_isle->list_entry, &p_ooo_info->free_isles_list);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ooo.h b/drivers/net/ethernet/qlogic/qed/qed_ooo.h
new file mode 100644
index 0000000..75c6e48
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_ooo.h
@@ -0,0 +1,116 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_OOO_H
+#define _QED_OOO_H
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include "qed.h"
+
+#define QED_MAX_NUM_ISLES	256
+#define QED_MAX_NUM_OOO_HISTORY_ENTRIES	512
+
+#define QED_OOO_LEFT_BUF	0
+#define QED_OOO_RIGHT_BUF	1
+
+struct qed_ooo_buffer {
+	struct list_head list_entry;
+	void *rx_buffer_virt_addr;
+	dma_addr_t rx_buffer_phys_addr;
+	u32 rx_buffer_size;
+	u16 packet_length;
+	u16 parse_flags;
+	u16 vlan;
+	u8 placement_offset;
+};
+
+struct qed_ooo_isle {
+	struct list_head list_entry;
+	struct list_head buffers_list;
+};
+
+struct qed_ooo_archipelago {
+	struct list_head list_entry;
+	struct list_head isles_list;
+	u32 cid;
+};
+
+struct qed_ooo_history {
+	struct ooo_opaque *p_cqes;
+	u32 head_idx;
+	u32 num_of_cqes;
+};
+
+struct qed_ooo_info {
+	struct list_head free_buffers_list;
+	struct list_head ready_buffers_list;
+	struct list_head free_isles_list;
+	struct list_head free_archipelagos_list;
+	struct list_head archipelagos_list;
+	struct qed_ooo_archipelago *p_archipelagos_mem;
+	struct qed_ooo_isle *p_isles_mem;
+	struct qed_ooo_history ooo_history;
+	u32 cur_isles_number;
+	u32 max_isles_number;
+	u32 gen_isles_number;
+};
+
+void qed_ooo_save_history_entry(struct qed_hwfn *p_hwfn,
+				struct qed_ooo_info *p_ooo_info,
+				struct ooo_opaque *p_cqe);
+
+struct qed_ooo_info *qed_ooo_alloc(struct qed_hwfn *p_hwfn);
+
+void qed_ooo_release_connection_isles(struct qed_hwfn *p_hwfn,
+				      struct qed_ooo_info *p_ooo_info,
+				      u32 cid);
+
+void qed_ooo_release_all_isles(struct qed_hwfn *p_hwfn,
+			       struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_setup(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_free(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_put_free_buffer(struct qed_hwfn *p_hwfn,
+			     struct qed_ooo_info *p_ooo_info,
+			     struct qed_ooo_buffer *p_buffer);
+
+struct qed_ooo_buffer *
+qed_ooo_get_free_buffer(struct qed_hwfn *p_hwfn,
+			struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_put_ready_buffer(struct qed_hwfn *p_hwfn,
+			      struct qed_ooo_info *p_ooo_info,
+			      struct qed_ooo_buffer *p_buffer, u8 on_tail);
+
+struct qed_ooo_buffer *
+qed_ooo_get_ready_buffer(struct qed_hwfn *p_hwfn,
+			 struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_delete_isles(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid, u8 drop_isle, u8 drop_size);
+
+void qed_ooo_add_new_isle(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid,
+			  u8 ooo_isle, struct qed_ooo_buffer *p_buffer);
+
+void qed_ooo_add_new_buffer(struct qed_hwfn *p_hwfn,
+			    struct qed_ooo_info *p_ooo_info,
+			    u32 cid,
+			    u8 ooo_isle,
+			    struct qed_ooo_buffer *p_buffer, u8 buffer_side);
+
+void qed_ooo_join_isles(struct qed_hwfn *p_hwfn,
+			struct qed_ooo_info *p_ooo_info, u32 cid,
+			u8 left_isle);
+
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
index 2343005..1768cdb 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
@@ -2866,6 +2866,7 @@ static int qed_roce_ll2_tx(struct qed_dev *cdev,
 	/* Tx header */
 	rc = qed_ll2_prepare_tx_packet(QED_LEADING_HWFN(cdev), roce_ll2->handle,
 				       1 + pkt->n_seg, 0, flags, 0,
+				       QED_LL2_TX_DEST_NW,
 				       qed_roce_flavor, pkt->header.baddr,
 				       pkt->header.len, pkt, 1);
 	if (rc) {
diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
index d3fa578..b44fd4c 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
@@ -26,6 +26,7 @@
 #include "qed_int.h"
 #include "qed_iscsi.h"
 #include "qed_mcp.h"
+#include "qed_ooo.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
 #include "qed_sriov.h"
@@ -253,6 +254,14 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
 	case PROTOCOLID_ISCSI:
 		if (!IS_ENABLED(CONFIG_QEDI))
 			return -EINVAL;
+		if (p_eqe->opcode == ISCSI_EVENT_TYPE_ASYN_DELETE_OOO_ISLES) {
+			u32 cid = le32_to_cpu(p_eqe->data.iscsi_info.cid);
+
+			qed_ooo_release_connection_isles(p_hwfn,
+							 p_hwfn->p_ooo_info,
+							 cid);
+			return 0;
+		}
 
 		if (p_hwfn->p_iscsi_info->event_cb) {
 			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
-- 
1.8.3.1


* [RFC 2/6] qed: Add iSCSI out of order packet handling.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Yuval Mintz, Arun Easi, Yuval Mintz

From: Yuval Mintz <Yuval.Mintz@qlogic.com>

This patch adds out-of-order packet handling for hardware-offloaded
iSCSI. Handling out-of-order packets requires the driver to allocate
the buffers and to assist the firmware in tracking them.

Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
---
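Note for reviewers unfamiliar with the terminology used throughout
qed_ooo.c: an "isle" is a contiguous run of buffered out-of-order
segments belonging to one connection (the "archipelago"), and
TCP_EVENT_JOIN merges two adjacent isles once the gap between them is
filled. A minimal userspace sketch of that join step follows; the
struct and function names here are invented for illustration and do
not exist in the driver:

```c
#include <assert.h>

/* Illustrative miniature of the join bookkeeping: each isle records a
 * contiguous run of buffered segments.  When the segment bridging
 * isle[left] and isle[left + 1] arrives, the right isle's buffers are
 * absorbed into the left isle and the right isle slot is freed.
 */
struct mini_isle {
	int first_seq;	/* sequence number of the first buffered segment */
	int count;	/* number of buffered segments in this run */
};

/* Merge isles[left + 1] into isles[left]; returns the new isle count. */
static int mini_join_isles(struct mini_isle *isles, int n, int left)
{
	int right = left + 1, i;

	/* Left isle now covers both runs; its first_seq is unchanged. */
	isles[left].count += isles[right].count;

	/* Compact the array over the freed right slot. */
	for (i = right; i < n - 1; i++)
		isles[i] = isles[i + 1];

	return n - 1;
}
```

The driver's real implementation keeps the buffers on linked lists
(list_splice_tail_init() in qed_ooo_join_isles()) rather than in an
array, but the accounting is the same.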
 drivers/net/ethernet/qlogic/qed/Makefile   |   2 +-
 drivers/net/ethernet/qlogic/qed/qed.h      |   1 +
 drivers/net/ethernet/qlogic/qed/qed_dev.c  |  14 +-
 drivers/net/ethernet/qlogic/qed/qed_ll2.c  | 559 +++++++++++++++++++++++++++--
 drivers/net/ethernet/qlogic/qed/qed_ll2.h  |   9 +
 drivers/net/ethernet/qlogic/qed/qed_ooo.c  | 510 ++++++++++++++++++++++++++
 drivers/net/ethernet/qlogic/qed/qed_ooo.h  | 116 ++++++
 drivers/net/ethernet/qlogic/qed/qed_roce.c |   1 +
 drivers/net/ethernet/qlogic/qed/qed_spq.c  |   9 +
 9 files changed, 1195 insertions(+), 26 deletions(-)
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.c
 create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.h

diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index b76669c..9121bf0 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -6,4 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
 qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
 qed-$(CONFIG_QED_LL2) += qed_ll2.o
 qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
-qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
+qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o qed_ooo.o
diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
index a61b1c0..e5626ae 100644
--- a/drivers/net/ethernet/qlogic/qed/qed.h
+++ b/drivers/net/ethernet/qlogic/qed/qed.h
@@ -380,6 +380,7 @@ struct qed_hwfn {
 	/* Protocol related */
 	bool				using_ll2;
 	struct qed_ll2_info		*p_ll2_info;
+	struct qed_ooo_info		*p_ooo_info;
 	struct qed_rdma_info		*p_rdma_info;
 	struct qed_iscsi_info		*p_iscsi_info;
 	struct qed_pf_params		pf_params;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index a4234c0..060e9a4 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -32,6 +32,7 @@
 #include "qed_iscsi.h"
 #include "qed_ll2.h"
 #include "qed_mcp.h"
+#include "qed_ooo.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
 #include "qed_sriov.h"
@@ -157,8 +158,10 @@ void qed_resc_free(struct qed_dev *cdev)
 		qed_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
 #endif
 		if (IS_ENABLED(CONFIG_QEDI) &&
-				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
 			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
+			qed_ooo_free(p_hwfn, p_hwfn->p_ooo_info);
+		}
 		qed_iov_free(p_hwfn);
 		qed_dmae_info_free(p_hwfn);
 		qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -416,6 +419,7 @@ int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 int qed_resc_alloc(struct qed_dev *cdev)
 {
 	struct qed_iscsi_info *p_iscsi_info;
+	struct qed_ooo_info *p_ooo_info;
 #ifdef CONFIG_QED_LL2
 	struct qed_ll2_info *p_ll2_info;
 #endif
@@ -543,6 +547,10 @@ int qed_resc_alloc(struct qed_dev *cdev)
 			if (!p_iscsi_info)
 				goto alloc_no_mem;
 			p_hwfn->p_iscsi_info = p_iscsi_info;
+			p_ooo_info = qed_ooo_alloc(p_hwfn);
+			if (!p_ooo_info)
+				goto alloc_no_mem;
+			p_hwfn->p_ooo_info = p_ooo_info;
 		}
 
 		/* DMA info initialization */
@@ -598,8 +606,10 @@ void qed_resc_setup(struct qed_dev *cdev)
 			qed_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
 #endif
 		if (IS_ENABLED(CONFIG_QEDI) &&
-				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+				p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
 			qed_iscsi_setup(p_hwfn, p_hwfn->p_iscsi_info);
+			qed_ooo_setup(p_hwfn, p_hwfn->p_ooo_info);
+		}
 	}
 }
 
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
index e67f3c9..4ce12e9 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
@@ -36,6 +36,7 @@
 #include "qed_int.h"
 #include "qed_ll2.h"
 #include "qed_mcp.h"
+#include "qed_ooo.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
 
@@ -295,27 +296,36 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		list_del(&p_pkt->list_entry);
 		b_last_packet = list_empty(&p_tx->active_descq);
 		list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
-		p_tx->cur_completing_packet = *p_pkt;
-		p_tx->cur_completing_bd_idx = 1;
-		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
-		tx_frag = p_pkt->bds_set[0].tx_frag;
+		if (IS_ENABLED(CONFIG_QEDI) &&
+			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
+			struct qed_ooo_buffer *p_buffer;
+
+			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+		} else {
+			p_tx->cur_completing_packet = *p_pkt;
+			p_tx->cur_completing_bd_idx = 1;
+			b_last_frag = p_tx->cur_completing_bd_idx ==
+				p_pkt->bd_used;
+			tx_frag = p_pkt->bds_set[0].tx_frag;
 #if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
-		if (p_ll2_conn->gsi_enable)
-			qed_ll2b_release_tx_gsi_packet(p_hwfn,
-						       p_ll2_conn->my_id,
-						       p_pkt->cookie,
-						       tx_frag,
-						       b_last_frag,
-						       b_last_packet);
-		else
+			if (p_ll2_conn->gsi_enable)
+				qed_ll2b_release_tx_gsi_packet(p_hwfn,
+					       p_ll2_conn->my_id,
+					       p_pkt->cookie,
+					       tx_frag,
+					       b_last_frag,
+					       b_last_packet);
+			else
 #endif
-			qed_ll2b_complete_tx_packet(p_hwfn,
+				qed_ll2b_complete_tx_packet(p_hwfn,
 						    p_ll2_conn->my_id,
 						    p_pkt->cookie,
 						    tx_frag,
 						    b_last_frag,
 						    b_last_packet);
-
+		}
 	}
 }
 
@@ -546,13 +556,466 @@ void qed_ll2_rxq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		list_del(&p_pkt->list_entry);
 		list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
 
-		rx_buf_addr = p_pkt->rx_buf_addr;
-		cookie = p_pkt->cookie;
+		if (IS_ENABLED(CONFIG_QEDI) &&
+			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
+			struct qed_ooo_buffer *p_buffer;
+
+			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+		} else {
+			rx_buf_addr = p_pkt->rx_buf_addr;
+			cookie = p_pkt->cookie;
+
+			b_last = list_empty(&p_rx->active_descq);
+		}
+	}
+}
+
+#if IS_ENABLED(CONFIG_QEDI)
+static u8 qed_ll2_convert_rx_parse_to_tx_flags(u16 parse_flags)
+{
+	u8 bd_flags = 0;
+
+	if (GET_FIELD(parse_flags, PARSING_AND_ERR_FLAGS_TAG8021QEXIST))
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_VLAN_INSERTION, 1);
+
+	return bd_flags;
+}
+
+static int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+{
+	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
+	struct qed_ll2_rx_queue *p_rx = &p_ll2_conn->rx_queue;
+	u16 packet_length = 0, parse_flags = 0, vlan = 0;
+	struct qed_ll2_rx_packet *p_pkt = NULL;
+	u32 num_ooo_add_to_peninsula = 0, cid;
+	union core_rx_cqe_union *cqe = NULL;
+	u16 cq_new_idx = 0, cq_old_idx = 0;
+	struct qed_ooo_buffer *p_buffer;
+	struct ooo_opaque *iscsi_ooo;
+	u8 placement_offset = 0;
+	u8 cqe_type;
+	int rc;
+
+	cq_new_idx = le16_to_cpu(*p_rx->p_fw_cons);
+	cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
+	if (cq_new_idx == cq_old_idx)
+		return 0;
+
+	while (cq_new_idx != cq_old_idx) {
+		struct core_rx_fast_path_cqe *p_cqe_fp;
+
+		cqe = qed_chain_consume(&p_rx->rcq_chain);
+		cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
+		cqe_type = cqe->rx_cqe_sp.type;
+
+		if (cqe_type != CORE_RX_CQE_TYPE_REGULAR) {
+			DP_NOTICE(p_hwfn,
+				  "Got a non-regular LB LL2 completion [type 0x%02x]\n",
+				  cqe_type);
+			return -EINVAL;
+		}
+		p_cqe_fp = &cqe->rx_cqe_fp;
+
+		placement_offset = p_cqe_fp->placement_offset;
+		parse_flags = le16_to_cpu(p_cqe_fp->parse_flags.flags);
+		packet_length = le16_to_cpu(p_cqe_fp->packet_length);
+		vlan = le16_to_cpu(p_cqe_fp->vlan);
+		iscsi_ooo = (struct ooo_opaque *)&p_cqe_fp->opaque_data;
+		qed_ooo_save_history_entry(p_hwfn, p_hwfn->p_ooo_info,
+					   iscsi_ooo);
+		cid = le32_to_cpu(iscsi_ooo->cid);
+
+		/* Process delete isle first */
+		if (iscsi_ooo->drop_size)
+			qed_ooo_delete_isles(p_hwfn, p_hwfn->p_ooo_info, cid,
+					     iscsi_ooo->drop_isle,
+					     iscsi_ooo->drop_size);
+
+		if (iscsi_ooo->ooo_opcode == TCP_EVENT_NOP)
+			continue;
+
+		/* Now process create/add/join isles */
+		if (list_empty(&p_rx->active_descq)) {
+			DP_NOTICE(p_hwfn,
+				  "LL2 OOO RX chain has no submitted buffers\n");
+			return -EIO;
+		}
+
+		p_pkt = list_first_entry(&p_rx->active_descq,
+					 struct qed_ll2_rx_packet, list_entry);
+
+		if ((iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_NEW_ISLE) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_RIGHT) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_LEFT) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_PEN) ||
+		    (iscsi_ooo->ooo_opcode == TCP_EVENT_JOIN)) {
+			if (!p_pkt) {
+				DP_NOTICE(p_hwfn,
+					  "LL2 OOO RX packet is not valid\n");
+				return -EIO;
+			}
+			list_del(&p_pkt->list_entry);
+			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+			p_buffer->packet_length = packet_length;
+			p_buffer->parse_flags = parse_flags;
+			p_buffer->vlan = vlan;
+			p_buffer->placement_offset = placement_offset;
+			qed_chain_consume(&p_rx->rxq_chain);
+			list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
+
+			switch (iscsi_ooo->ooo_opcode) {
+			case TCP_EVENT_ADD_NEW_ISLE:
+				qed_ooo_add_new_isle(p_hwfn,
+						     p_hwfn->p_ooo_info,
+						     cid,
+						     iscsi_ooo->ooo_isle,
+						     p_buffer);
+				break;
+			case TCP_EVENT_ADD_ISLE_RIGHT:
+				qed_ooo_add_new_buffer(p_hwfn,
+						       p_hwfn->p_ooo_info,
+						       cid,
+						       iscsi_ooo->ooo_isle,
+						       p_buffer,
+						       QED_OOO_RIGHT_BUF);
+				break;
+			case TCP_EVENT_ADD_ISLE_LEFT:
+				qed_ooo_add_new_buffer(p_hwfn,
+						       p_hwfn->p_ooo_info,
+						       cid,
+						       iscsi_ooo->ooo_isle,
+						       p_buffer,
+						       QED_OOO_LEFT_BUF);
+				break;
+			case TCP_EVENT_JOIN:
+				qed_ooo_add_new_buffer(p_hwfn,
+						       p_hwfn->p_ooo_info,
+						       cid,
+						       iscsi_ooo->ooo_isle + 1,
+						       p_buffer,
+						       QED_OOO_LEFT_BUF);
+				qed_ooo_join_isles(p_hwfn,
+						   p_hwfn->p_ooo_info,
+						   cid, iscsi_ooo->ooo_isle);
+				break;
+			case TCP_EVENT_ADD_PEN:
+				num_ooo_add_to_peninsula++;
+				qed_ooo_put_ready_buffer(p_hwfn,
+							 p_hwfn->p_ooo_info,
+							 p_buffer, true);
+				break;
+			}
+		} else {
+			DP_NOTICE(p_hwfn,
+				  "Unexpected event (%d) in TX OOO completion\n",
+				  iscsi_ooo->ooo_opcode);
+		}
+	}
 
-		b_last = list_empty(&p_rx->active_descq);
+	/* Submit RX buffer here */
+	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
+						   p_hwfn->p_ooo_info))) {
+		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
+					    p_buffer->rx_buffer_phys_addr,
+					    0, p_buffer, true);
+		if (rc) {
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+			break;
+		}
 	}
+
+	/* Submit Tx buffers here */
+	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
+						    p_hwfn->p_ooo_info))) {
+		u16 l4_hdr_offset_w = 0;
+		dma_addr_t first_frag;
+		u8 bd_flags = 0;
+
+		first_frag = p_buffer->rx_buffer_phys_addr +
+			     p_buffer->placement_offset;
+		parse_flags = p_buffer->parse_flags;
+		bd_flags = qed_ll2_convert_rx_parse_to_tx_flags(parse_flags);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_FORCE_VLAN_MODE, 1);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_L4_PROTOCOL, 1);
+
+		rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
+					       p_buffer->vlan, bd_flags,
+					       l4_hdr_offset_w,
+					       p_ll2_conn->tx_dest, 0,
+					       first_frag,
+					       p_buffer->packet_length,
+					       p_buffer, true);
+		if (rc) {
+			qed_ooo_put_ready_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						 p_buffer, false);
+			break;
+		}
+	}
+
+	return 0;
 }
 
+static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+{
+	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
+	struct qed_ll2_tx_queue *p_tx = &p_ll2_conn->tx_queue;
+	struct qed_ll2_tx_packet *p_pkt = NULL;
+	struct qed_ooo_buffer *p_buffer;
+	bool b_dont_submit_rx = false;
+	u16 new_idx = 0, num_bds = 0;
+	int rc;
+
+	new_idx = le16_to_cpu(*p_tx->p_fw_cons);
+	num_bds = ((s16)new_idx - (s16)p_tx->bds_idx);
+
+	if (!num_bds)
+		return 0;
+
+	while (num_bds) {
+		if (list_empty(&p_tx->active_descq))
+			return -EINVAL;
+
+		p_pkt = list_first_entry(&p_tx->active_descq,
+					 struct qed_ll2_tx_packet, list_entry);
+		if (!p_pkt)
+			return -EINVAL;
+
+		if (p_pkt->bd_used != 1) {
+			DP_NOTICE(p_hwfn,
+				  "Unexpected number of BDs (%d) in TX OOO completion\n",
+				  p_pkt->bd_used);
+			return -EINVAL;
+		}
+
+		list_del(&p_pkt->list_entry);
+
+		num_bds--;
+		p_tx->bds_idx++;
+		qed_chain_consume(&p_tx->txq_chain);
+
+		p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
+		list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
+
+		if (b_dont_submit_rx) {
+			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						p_buffer);
+			continue;
+		}
+
+		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
+					    p_buffer->rx_buffer_phys_addr, 0,
+					    p_buffer, true);
+		if (rc) {
+			qed_ooo_put_free_buffer(p_hwfn,
+						p_hwfn->p_ooo_info, p_buffer);
+			b_dont_submit_rx = true;
+		}
+	}
+
+	/* Submit Tx buffers here */
+	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
+						    p_hwfn->p_ooo_info))) {
+		u16 l4_hdr_offset_w = 0, parse_flags = p_buffer->parse_flags;
+		dma_addr_t first_frag;
+		u8 bd_flags = 0;
+
+		first_frag = p_buffer->rx_buffer_phys_addr +
+		    p_buffer->placement_offset;
+		bd_flags = qed_ll2_convert_rx_parse_to_tx_flags(parse_flags);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_FORCE_VLAN_MODE, 1);
+		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_L4_PROTOCOL, 1);
+		rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
+					       p_buffer->vlan, bd_flags,
+					       l4_hdr_offset_w,
+					       p_ll2_conn->tx_dest, 0,
+					       first_frag,
+					       p_buffer->packet_length,
+					       p_buffer, true);
+		if (rc) {
+			qed_ooo_put_ready_buffer(p_hwfn, p_hwfn->p_ooo_info,
+						 p_buffer, false);
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+qed_ll2_acquire_connection_ooo(struct qed_hwfn *p_hwfn,
+			       struct qed_ll2_info *p_ll2_info,
+			       u16 rx_num_ooo_buffers, u16 mtu)
+{
+	struct qed_ooo_buffer *p_buf = NULL;
+	void *p_virt;
+	u16 buf_idx;
+	int rc = 0;
+
+	if (p_ll2_info->conn_type != QED_LL2_TYPE_ISCSI_OOO)
+		return rc;
+
+	if (!rx_num_ooo_buffers)
+		return -EINVAL;
+
+	for (buf_idx = 0; buf_idx < rx_num_ooo_buffers; buf_idx++) {
+		p_buf = kzalloc(sizeof(*p_buf), GFP_KERNEL);
+		if (!p_buf) {
+			DP_NOTICE(p_hwfn,
+				  "Failed to allocate ooo descriptor\n");
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		p_buf->rx_buffer_size = mtu + 26 + ETH_CACHE_LINE_SIZE;
+		p_buf->rx_buffer_size = (p_buf->rx_buffer_size +
+					 ETH_CACHE_LINE_SIZE - 1) &
+					~(ETH_CACHE_LINE_SIZE - 1);
+		p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+					    p_buf->rx_buffer_size,
+					    &p_buf->rx_buffer_phys_addr,
+					    GFP_KERNEL);
+		if (!p_virt) {
+			DP_NOTICE(p_hwfn, "Failed to allocate ooo buffer\n");
+			kfree(p_buf);
+			rc = -ENOMEM;
+			goto out;
+		}
+
+		p_buf->rx_buffer_virt_addr = p_virt;
+		qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info, p_buf);
+	}
+
+	DP_VERBOSE(p_hwfn, QED_MSG_LL2,
+		   "Allocated [%04x] LL2 OOO buffers [each of size 0x%08x]\n",
+		   rx_num_ooo_buffers, p_buf->rx_buffer_size);
+
+out:
+	return rc;
+}
+
+static void
+qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
+				 struct qed_ll2_info *p_ll2_conn)
+{
+	struct qed_ooo_buffer *p_buffer;
+	int rc;
+
+	if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
+		return;
+
+	qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
+	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
+						   p_hwfn->p_ooo_info))) {
+		rc = qed_ll2_post_rx_buffer(p_hwfn,
+					    p_ll2_conn->my_id,
+					    p_buffer->rx_buffer_phys_addr,
+					    0, p_buffer, true);
+		if (rc) {
+			qed_ooo_put_free_buffer(p_hwfn,
+						p_hwfn->p_ooo_info, p_buffer);
+			break;
+		}
+	}
+}
+
+static void qed_ll2_release_connection_ooo(struct qed_hwfn *p_hwfn,
+					   struct qed_ll2_info *p_ll2_conn)
+{
+	struct qed_ooo_buffer *p_buffer;
+
+	if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
+		return;
+
+	qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
+	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
+						   p_hwfn->p_ooo_info))) {
+		dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+				  p_buffer->rx_buffer_size,
+				  p_buffer->rx_buffer_virt_addr,
+				  p_buffer->rx_buffer_phys_addr);
+		kfree(p_buffer);
+	}
+}
+
+static void qed_ll2_stop_ooo(struct qed_dev *cdev)
+{
+	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+	u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
+
+	DP_VERBOSE(cdev, QED_MSG_STORAGE, "Stopping LL2 OOO queue [%02x]\n",
+		   *handle);
+
+	qed_ll2_terminate_connection(hwfn, *handle);
+	qed_ll2_release_connection(hwfn, *handle);
+	*handle = QED_LL2_UNUSED_HANDLE;
+}
+
+static int qed_ll2_start_ooo(struct qed_dev *cdev,
+			     struct qed_ll2_params *params)
+{
+	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+	u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
+	struct qed_ll2_info *ll2_info;
+	int rc;
+
+	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
+	if (!ll2_info) {
+		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
+		return -ENOMEM;
+	}
+	ll2_info->conn_type = QED_LL2_TYPE_ISCSI_OOO;
+	ll2_info->mtu = params->mtu;
+	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
+	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
+	ll2_info->tx_tc = OOO_LB_TC;
+	ll2_info->tx_dest = CORE_TX_DEST_LB;
+
+	rc = qed_ll2_acquire_connection(hwfn, ll2_info,
+					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
+					handle);
+	kfree(ll2_info);
+	if (rc) {
+		DP_INFO(cdev, "Failed to acquire LL2 OOO connection\n");
+		goto out;
+	}
+
+	rc = qed_ll2_establish_connection(hwfn, *handle);
+	if (rc) {
+		DP_INFO(cdev, "Failed to establish LL2 OOO connection\n");
+		goto fail;
+	}
+
+	return 0;
+
+fail:
+	qed_ll2_release_connection(hwfn, *handle);
+out:
+	*handle = QED_LL2_UNUSED_HANDLE;
+	return rc;
+}
+#else /* IS_ENABLED(CONFIG_QEDI) */
+static inline int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn,
+		void *p_cookie) { return -EINVAL; }
+static inline int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn,
+		void *p_cookie) { return -EINVAL; }
+static inline int
+qed_ll2_acquire_connection_ooo(struct qed_hwfn *p_hwfn,
+			struct qed_ll2_info *p_ll2_info,
+			u16 rx_num_ooo_buffers, u16 mtu) { return -EINVAL; }
+static inline void
+qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
+			struct qed_ll2_info *p_ll2_conn) { return; }
+static inline void qed_ll2_release_connection_ooo(struct qed_hwfn *p_hwfn,
+			struct qed_ll2_info *p_ll2_conn) { return; }
+static inline void qed_ll2_stop_ooo(struct qed_dev *cdev) { return; }
+static inline int qed_ll2_start_ooo(struct qed_dev *cdev,
+			struct qed_ll2_params *params) { return -EINVAL; }
+#endif /* IS_ENABLED(CONFIG_QEDI) */
+
 static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
 				     struct qed_ll2_info *p_ll2_conn,
 				     u8 action_on_error)
@@ -594,7 +1057,8 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
 	p_ramrod->drop_ttl0_flg = p_ll2_conn->rx_drop_ttl0_flg;
 	p_ramrod->inner_vlan_removal_en = p_ll2_conn->rx_vlan_removal_en;
 	p_ramrod->queue_id = p_ll2_conn->queue_id;
-	p_ramrod->main_func_queue = 1;
+	p_ramrod->main_func_queue = (conn_type == QED_LL2_TYPE_ISCSI_OOO) ? 0
+									  : 1;
 
 	if ((IS_MF_DEFAULT(p_hwfn) || IS_MF_SI(p_hwfn)) &&
 	    p_ramrod->main_func_queue && (conn_type != QED_LL2_TYPE_ROCE)) {
@@ -625,6 +1089,11 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
 	if (!QED_LL2_TX_REGISTERED(p_ll2_conn))
 		return 0;
 
+	if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
+		p_ll2_conn->tx_stats_en = 0;
+	else
+		p_ll2_conn->tx_stats_en = 1;
+
 	/* Get SPQ entry */
 	memset(&init_data, 0, sizeof(init_data));
 	init_data.cid = p_ll2_conn->cid;
@@ -642,7 +1111,6 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
 	p_ramrod->sb_id = cpu_to_le16(qed_int_get_sp_sb_id(p_hwfn));
 	p_ramrod->sb_index = p_tx->tx_sb_index;
 	p_ramrod->mtu = cpu_to_le16(p_ll2_conn->mtu);
-	p_ll2_conn->tx_stats_en = 1;
 	p_ramrod->stats_en = p_ll2_conn->tx_stats_en;
 	p_ramrod->stats_id = p_ll2_conn->tx_stats_id;
 
@@ -866,9 +1334,22 @@ int qed_ll2_acquire_connection(struct qed_hwfn *p_hwfn,
 	if (rc)
 		goto q_allocate_fail;
 
+	if (IS_ENABLED(CONFIG_QEDI)) {
+		rc = qed_ll2_acquire_connection_ooo(p_hwfn, p_ll2_info,
+					    rx_num_desc * 2, p_params->mtu);
+		if (rc)
+			goto q_allocate_fail;
+	}
+
 	/* Register callbacks for the Rx/Tx queues */
-	comp_rx_cb = qed_ll2_rxq_completion;
-	comp_tx_cb = qed_ll2_txq_completion;
+	if (IS_ENABLED(CONFIG_QEDI) &&
+			p_params->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
+		comp_rx_cb = qed_ll2_lb_rxq_completion;
+		comp_tx_cb = qed_ll2_lb_txq_completion;
+	} else {
+		comp_rx_cb = qed_ll2_rxq_completion;
+		comp_tx_cb = qed_ll2_txq_completion;
+	}
 
 	if (rx_num_desc) {
 		qed_int_register_cb(p_hwfn, comp_rx_cb,
@@ -981,6 +1462,9 @@ int qed_ll2_establish_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
 	if (p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)
 		qed_wr(p_hwfn, p_hwfn->p_main_ptt, PRS_REG_USE_LIGHT_L2, 1);
 
+	if (IS_ENABLED(CONFIG_QEDI))
+		qed_ll2_establish_connection_ooo(p_hwfn, p_ll2_conn);
+
 	return rc;
 }
 
@@ -1223,6 +1707,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 			      u16 vlan,
 			      u8 bd_flags,
 			      u16 l4_hdr_offset_w,
+			      enum qed_ll2_tx_dest e_tx_dest,
 			      enum qed_ll2_roce_flavor_type qed_roce_flavor,
 			      dma_addr_t first_frag,
 			      u16 first_frag_len, void *cookie, u8 notify_fw)
@@ -1232,6 +1717,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 	enum core_roce_flavor_type roce_flavor;
 	struct qed_ll2_tx_queue *p_tx;
 	struct qed_chain *p_tx_chain;
+	enum core_tx_dest tx_dest;
 	unsigned long flags;
 	int rc = 0;
 
@@ -1262,6 +1748,8 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 		goto out;
 	}
 
+	tx_dest = e_tx_dest == QED_LL2_TX_DEST_NW ? CORE_TX_DEST_NW :
+						    CORE_TX_DEST_LB;
 	if (qed_roce_flavor == QED_LL2_ROCE) {
 		roce_flavor = CORE_ROCE;
 	} else if (qed_roce_flavor == QED_LL2_RROCE) {
@@ -1276,7 +1764,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 				      num_of_bds, first_frag,
 				      first_frag_len, cookie, notify_fw);
 	qed_ll2_prepare_tx_packet_set_bd(p_hwfn, p_ll2_conn, p_curp,
-					 num_of_bds, CORE_TX_DEST_NW,
+					 num_of_bds, tx_dest,
 					 vlan, bd_flags, l4_hdr_offset_w,
 					 roce_flavor,
 					 first_frag, first_frag_len);
@@ -1351,6 +1839,10 @@ int qed_ll2_terminate_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
 		qed_ll2_rxq_flush(p_hwfn, connection_handle);
 	}
 
+	if (IS_ENABLED(CONFIG_QEDI) &&
+			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
+		qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
+
 	return rc;
 }
 
@@ -1381,6 +1873,9 @@ void qed_ll2_release_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
 
 	qed_cxt_release_cid(p_hwfn, p_ll2_conn->cid);
 
+	if (IS_ENABLED(CONFIG_QEDI))
+		qed_ll2_release_connection_ooo(p_hwfn, p_ll2_conn);
+
 	mutex_lock(&p_ll2_conn->mutex);
 	p_ll2_conn->b_active = false;
 	mutex_unlock(&p_ll2_conn->mutex);
@@ -1628,6 +2123,18 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 		goto release_terminate;
 	}
 
+	if (IS_ENABLED(CONFIG_QEDI) &&
+		(cdev->hwfns[0].hw_info.personality == QED_PCI_ISCSI) &&
+		cdev->hwfns[0].pf_params.iscsi_pf_params.ooo_enable) {
+		DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
+		rc = qed_ll2_start_ooo(cdev, params);
+		if (rc) {
+			DP_INFO(cdev,
+				"Failed to initialize the OOO LL2 queue\n");
+			goto release_terminate;
+		}
+	}
+
 	p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
 	if (!p_ptt) {
 		DP_INFO(cdev, "Failed to acquire PTT\n");
@@ -1677,6 +2184,11 @@ static int qed_ll2_stop(struct qed_dev *cdev)
 	qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt);
 	eth_zero_addr(cdev->ll2_mac_address);
 
+	if (IS_ENABLED(CONFIG_QEDI) &&
+		(cdev->hwfns[0].hw_info.personality == QED_PCI_ISCSI) &&
+		cdev->hwfns[0].pf_params.iscsi_pf_params.ooo_enable)
+		qed_ll2_stop_ooo(cdev);
+
 	rc = qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev),
 					  cdev->ll2->handle);
 	if (rc)
@@ -1731,7 +2243,8 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb)
 	rc = qed_ll2_prepare_tx_packet(QED_LEADING_HWFN(cdev),
 				       cdev->ll2->handle,
 				       1 + skb_shinfo(skb)->nr_frags,
-				       vlan, flags, 0, 0 /* RoCE FLAVOR */,
+				       vlan, flags, 0, QED_LL2_TX_DEST_NW,
+				       0 /* RoCE FLAVOR */,
 				       mapping, skb->len, skb, 1);
 	if (rc)
 		goto err;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.h b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
index 80a5dc2..2b31d30 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
@@ -41,6 +41,12 @@ enum qed_ll2_conn_type {
 	MAX_QED_LL2_RX_CONN_TYPE
 };
 
+enum qed_ll2_tx_dest {
+	QED_LL2_TX_DEST_NW, /* Light L2 TX Destination to the Network */
+	QED_LL2_TX_DEST_LB, /* Light L2 TX Destination to the Loopback */
+	QED_LL2_TX_DEST_MAX
+};
+
 struct qed_ll2_rx_packet {
 	struct list_head list_entry;
 	struct core_rx_bd_with_buff_len *rxq_bd;
@@ -192,6 +198,8 @@ int qed_ll2_post_rx_buffer(struct qed_hwfn *p_hwfn,
  * @param l4_hdr_offset_w	L4 Header Offset from start of packet
  *				(in words). This is needed if both l4_csum
  *				and ipv6_ext are set
+ * @param e_tx_dest             indicates if the packet is to be transmitted via
+ *                              loopback or to the network
  * @param first_frag
  * @param first_frag_len
  * @param cookie
@@ -206,6 +214,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
 			      u16 vlan,
 			      u8 bd_flags,
 			      u16 l4_hdr_offset_w,
+			      enum qed_ll2_tx_dest e_tx_dest,
 			      enum qed_ll2_roce_flavor_type qed_roce_flavor,
 			      dma_addr_t first_frag,
 			      u16 first_frag_len, void *cookie, u8 notify_fw);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ooo.c b/drivers/net/ethernet/qlogic/qed/qed_ooo.c
new file mode 100644
index 0000000..a037a6f
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_ooo.c
@@ -0,0 +1,510 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include "qed.h"
+#include "qed_iscsi.h"
+#include "qed_ll2.h"
+#include "qed_ooo.h"
+
+static struct qed_ooo_archipelago
+*qed_ooo_seek_archipelago(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info
+			  *p_ooo_info,
+			  u32 cid)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+
+	list_for_each_entry(p_archipelago,
+			    &p_ooo_info->archipelagos_list, list_entry) {
+		if (p_archipelago->cid == cid)
+			return p_archipelago;
+	}
+
+	return NULL;
+}
+
+static struct qed_ooo_isle *qed_ooo_seek_isle(struct qed_hwfn *p_hwfn,
+					      struct qed_ooo_info *p_ooo_info,
+					      u32 cid, u8 isle)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_isle = NULL;
+	u8 the_num_of_isle = 1;
+
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	if (!p_archipelago) {
+		DP_NOTICE(p_hwfn,
+			  "Connection %d is not found in OOO list\n", cid);
+		return NULL;
+	}
+
+	list_for_each_entry(p_isle, &p_archipelago->isles_list, list_entry) {
+		if (the_num_of_isle == isle)
+			return p_isle;
+		the_num_of_isle++;
+	}
+
+	return NULL;
+}
+
+void qed_ooo_save_history_entry(struct qed_hwfn *p_hwfn,
+				struct qed_ooo_info *p_ooo_info,
+				struct ooo_opaque *p_cqe)
+{
+	struct qed_ooo_history *p_history = &p_ooo_info->ooo_history;
+
+	if (p_history->head_idx == p_history->num_of_cqes)
+		p_history->head_idx = 0;
+	p_history->p_cqes[p_history->head_idx] = *p_cqe;
+	p_history->head_idx++;
+}
+
+struct qed_ooo_info *qed_ooo_alloc(struct qed_hwfn *p_hwfn)
+{
+	struct qed_ooo_info *p_ooo_info;
+	u16 max_num_archipelagos = 0;
+	u16 max_num_isles = 0;
+	u32 i;
+
+	if (p_hwfn->hw_info.personality != QED_PCI_ISCSI) {
+		DP_NOTICE(p_hwfn,
+			  "Failed to allocate qed_ooo_info: unknown personality\n");
+		return NULL;
+	}
+
+	max_num_archipelagos = p_hwfn->pf_params.iscsi_pf_params.num_cons;
+	max_num_isles = QED_MAX_NUM_ISLES + max_num_archipelagos;
+
+	if (!max_num_archipelagos) {
+		DP_NOTICE(p_hwfn,
+			  "Failed to allocate qed_ooo_info: unknown amount of connections\n");
+		return NULL;
+	}
+
+	p_ooo_info = kzalloc(sizeof(*p_ooo_info), GFP_KERNEL);
+	if (!p_ooo_info) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info\n");
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&p_ooo_info->free_buffers_list);
+	INIT_LIST_HEAD(&p_ooo_info->ready_buffers_list);
+	INIT_LIST_HEAD(&p_ooo_info->free_isles_list);
+	INIT_LIST_HEAD(&p_ooo_info->free_archipelagos_list);
+	INIT_LIST_HEAD(&p_ooo_info->archipelagos_list);
+
+	p_ooo_info->p_isles_mem = kcalloc(max_num_isles,
+					  sizeof(struct qed_ooo_isle),
+					  GFP_KERNEL);
+	if (!p_ooo_info->p_isles_mem) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info(isles)\n");
+		goto no_isles_mem;
+	}
+
+	for (i = 0; i < max_num_isles; i++) {
+		INIT_LIST_HEAD(&p_ooo_info->p_isles_mem[i].buffers_list);
+		list_add_tail(&p_ooo_info->p_isles_mem[i].list_entry,
+			      &p_ooo_info->free_isles_list);
+	}
+
+	p_ooo_info->p_archipelagos_mem =
+				kcalloc(max_num_archipelagos,
+					sizeof(struct qed_ooo_archipelago),
+					GFP_KERNEL);
+	if (!p_ooo_info->p_archipelagos_mem) {
+		DP_NOTICE(p_hwfn,
+			  "Failed to allocate qed_ooo_info(archipelagos)\n");
+		goto no_archipelagos_mem;
+	}
+
+	for (i = 0; i < max_num_archipelagos; i++) {
+		INIT_LIST_HEAD(&p_ooo_info->p_archipelagos_mem[i].isles_list);
+		list_add_tail(&p_ooo_info->p_archipelagos_mem[i].list_entry,
+			      &p_ooo_info->free_archipelagos_list);
+	}
+
+	p_ooo_info->ooo_history.p_cqes =
+				kcalloc(QED_MAX_NUM_OOO_HISTORY_ENTRIES,
+					sizeof(struct ooo_opaque),
+					GFP_KERNEL);
+	if (!p_ooo_info->ooo_history.p_cqes) {
+		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info(history)\n");
+		goto no_history_mem;
+	}
+
+	return p_ooo_info;
+
+no_history_mem:
+	kfree(p_ooo_info->p_archipelagos_mem);
+no_archipelagos_mem:
+	kfree(p_ooo_info->p_isles_mem);
+no_isles_mem:
+	kfree(p_ooo_info);
+	return NULL;
+}
+
+void qed_ooo_release_connection_isles(struct qed_hwfn *p_hwfn,
+				      struct qed_ooo_info *p_ooo_info, u32 cid)
+{
+	struct qed_ooo_archipelago *p_archipelago;
+	struct qed_ooo_buffer *p_buffer;
+	struct qed_ooo_isle *p_isle;
+	bool b_found = false;
+
+	if (list_empty(&p_ooo_info->archipelagos_list))
+		return;
+
+	list_for_each_entry(p_archipelago,
+			    &p_ooo_info->archipelagos_list, list_entry) {
+		if (p_archipelago->cid == cid) {
+			list_del(&p_archipelago->list_entry);
+			b_found = true;
+			break;
+		}
+	}
+
+	if (!b_found)
+		return;
+
+	while (!list_empty(&p_archipelago->isles_list)) {
+		p_isle = list_first_entry(&p_archipelago->isles_list,
+					  struct qed_ooo_isle, list_entry);
+
+		list_del(&p_isle->list_entry);
+
+		while (!list_empty(&p_isle->buffers_list)) {
+			p_buffer = list_first_entry(&p_isle->buffers_list,
+						    struct qed_ooo_buffer,
+						    list_entry);
+
+			if (!p_buffer)
+				break;
+
+			list_del(&p_buffer->list_entry);
+			list_add_tail(&p_buffer->list_entry,
+				      &p_ooo_info->free_buffers_list);
+		}
+		list_add_tail(&p_isle->list_entry,
+			      &p_ooo_info->free_isles_list);
+	}
+
+	list_add_tail(&p_archipelago->list_entry,
+		      &p_ooo_info->free_archipelagos_list);
+}
+
+void qed_ooo_release_all_isles(struct qed_hwfn *p_hwfn,
+			       struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_archipelago *p_arch;
+	struct qed_ooo_buffer *p_buffer;
+	struct qed_ooo_isle *p_isle;
+
+	while (!list_empty(&p_ooo_info->archipelagos_list)) {
+		p_arch = list_first_entry(&p_ooo_info->archipelagos_list,
+					  struct qed_ooo_archipelago,
+					  list_entry);
+
+		list_del(&p_arch->list_entry);
+
+		while (!list_empty(&p_arch->isles_list)) {
+			p_isle = list_first_entry(&p_arch->isles_list,
+						  struct qed_ooo_isle,
+						  list_entry);
+
+			list_del(&p_isle->list_entry);
+
+			while (!list_empty(&p_isle->buffers_list)) {
+				p_buffer =
+				    list_first_entry(&p_isle->buffers_list,
+						     struct qed_ooo_buffer,
+						     list_entry);
+
+				if (!p_buffer)
+					break;
+
+				list_del(&p_buffer->list_entry);
+				list_add_tail(&p_buffer->list_entry,
+					      &p_ooo_info->free_buffers_list);
+			}
+			list_add_tail(&p_isle->list_entry,
+				      &p_ooo_info->free_isles_list);
+		}
+		list_add_tail(&p_arch->list_entry,
+			      &p_ooo_info->free_archipelagos_list);
+	}
+	if (!list_empty(&p_ooo_info->ready_buffers_list))
+		list_splice_tail_init(&p_ooo_info->ready_buffers_list,
+				      &p_ooo_info->free_buffers_list);
+}
+
+void qed_ooo_setup(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info)
+{
+	qed_ooo_release_all_isles(p_hwfn, p_ooo_info);
+	memset(p_ooo_info->ooo_history.p_cqes, 0,
+	       p_ooo_info->ooo_history.num_of_cqes *
+	       sizeof(struct ooo_opaque));
+	p_ooo_info->ooo_history.head_idx = 0;
+}
+
+void qed_ooo_free(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_buffer *p_buffer;
+
+	qed_ooo_release_all_isles(p_hwfn, p_ooo_info);
+	while (!list_empty(&p_ooo_info->free_buffers_list)) {
+		p_buffer = list_first_entry(&p_ooo_info->free_buffers_list,
+					    struct qed_ooo_buffer, list_entry);
+
+		if (!p_buffer)
+			break;
+
+		list_del(&p_buffer->list_entry);
+		dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+				  p_buffer->rx_buffer_size,
+				  p_buffer->rx_buffer_virt_addr,
+				  p_buffer->rx_buffer_phys_addr);
+		kfree(p_buffer);
+	}
+
+	kfree(p_ooo_info->p_isles_mem);
+	kfree(p_ooo_info->p_archipelagos_mem);
+	kfree(p_ooo_info->ooo_history.p_cqes);
+	kfree(p_ooo_info);
+}
+
+void qed_ooo_put_free_buffer(struct qed_hwfn *p_hwfn,
+			     struct qed_ooo_info *p_ooo_info,
+			     struct qed_ooo_buffer *p_buffer)
+{
+	list_add_tail(&p_buffer->list_entry, &p_ooo_info->free_buffers_list);
+}
+
+struct qed_ooo_buffer *qed_ooo_get_free_buffer(struct qed_hwfn *p_hwfn,
+					       struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_buffer *p_buffer = NULL;
+
+	if (!list_empty(&p_ooo_info->free_buffers_list)) {
+		p_buffer = list_first_entry(&p_ooo_info->free_buffers_list,
+					    struct qed_ooo_buffer, list_entry);
+
+		list_del(&p_buffer->list_entry);
+	}
+
+	return p_buffer;
+}
+
+void qed_ooo_put_ready_buffer(struct qed_hwfn *p_hwfn,
+			      struct qed_ooo_info *p_ooo_info,
+			      struct qed_ooo_buffer *p_buffer, u8 on_tail)
+{
+	if (on_tail)
+		list_add_tail(&p_buffer->list_entry,
+			      &p_ooo_info->ready_buffers_list);
+	else
+		list_add(&p_buffer->list_entry,
+			 &p_ooo_info->ready_buffers_list);
+}
+
+struct qed_ooo_buffer *qed_ooo_get_ready_buffer(struct qed_hwfn *p_hwfn,
+						struct qed_ooo_info *p_ooo_info)
+{
+	struct qed_ooo_buffer *p_buffer = NULL;
+
+	if (!list_empty(&p_ooo_info->ready_buffers_list)) {
+		p_buffer = list_first_entry(&p_ooo_info->ready_buffers_list,
+					    struct qed_ooo_buffer, list_entry);
+
+		list_del(&p_buffer->list_entry);
+	}
+
+	return p_buffer;
+}
+
+void qed_ooo_delete_isles(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid, u8 drop_isle, u8 drop_size)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_isle = NULL;
+	u8 isle_idx;
+
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	for (isle_idx = 0; isle_idx < drop_size; isle_idx++) {
+		p_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid, drop_isle);
+		if (!p_isle) {
+			DP_NOTICE(p_hwfn,
+				  "Isle %d is not found (cid %d)\n",
+				  drop_isle, cid);
+			return;
+		}
+		if (list_empty(&p_isle->buffers_list))
+			DP_NOTICE(p_hwfn,
+				  "Isle %d is empty (cid %d)\n", drop_isle, cid);
+		else
+			list_splice_tail_init(&p_isle->buffers_list,
+					      &p_ooo_info->free_buffers_list);
+
+		list_del(&p_isle->list_entry);
+		p_ooo_info->cur_isles_number--;
+		list_add(&p_isle->list_entry, &p_ooo_info->free_isles_list);
+	}
+
+	if (list_empty(&p_archipelago->isles_list)) {
+		list_del(&p_archipelago->list_entry);
+		list_add(&p_archipelago->list_entry,
+			 &p_ooo_info->free_archipelagos_list);
+	}
+}
+
+void qed_ooo_add_new_isle(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid, u8 ooo_isle,
+			  struct qed_ooo_buffer *p_buffer)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_prev_isle = NULL;
+	struct qed_ooo_isle *p_isle = NULL;
+
+	if (ooo_isle > 1) {
+		p_prev_isle = qed_ooo_seek_isle(p_hwfn,
+						p_ooo_info, cid, ooo_isle - 1);
+		if (!p_prev_isle) {
+			DP_NOTICE(p_hwfn,
+				  "Isle %d is not found (cid %d)\n",
+				  ooo_isle - 1, cid);
+			return;
+		}
+	}
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	if (!p_archipelago && (ooo_isle != 1)) {
+		DP_NOTICE(p_hwfn,
+			  "Connection %d is not found in OOO list\n", cid);
+		return;
+	}
+
+	if (!list_empty(&p_ooo_info->free_isles_list)) {
+		p_isle = list_first_entry(&p_ooo_info->free_isles_list,
+					  struct qed_ooo_isle, list_entry);
+
+		list_del(&p_isle->list_entry);
+		if (!list_empty(&p_isle->buffers_list)) {
+			DP_NOTICE(p_hwfn, "Free isle is not empty\n");
+			INIT_LIST_HEAD(&p_isle->buffers_list);
+		}
+	} else {
+		DP_NOTICE(p_hwfn, "No more free isles\n");
+		return;
+	}
+
+	if (!p_archipelago &&
+	    !list_empty(&p_ooo_info->free_archipelagos_list)) {
+		p_archipelago =
+		    list_first_entry(&p_ooo_info->free_archipelagos_list,
+				     struct qed_ooo_archipelago, list_entry);
+
+		list_del(&p_archipelago->list_entry);
+		if (!list_empty(&p_archipelago->isles_list)) {
+			DP_NOTICE(p_hwfn,
+				  "Free OOO connection is not empty\n");
+			INIT_LIST_HEAD(&p_archipelago->isles_list);
+		}
+		p_archipelago->cid = cid;
+		list_add(&p_archipelago->list_entry,
+			 &p_ooo_info->archipelagos_list);
+	} else if (!p_archipelago) {
+		DP_NOTICE(p_hwfn, "No more free OOO connections\n");
+		list_add(&p_isle->list_entry,
+			 &p_ooo_info->free_isles_list);
+		list_add(&p_buffer->list_entry,
+			 &p_ooo_info->free_buffers_list);
+		return;
+	}
+
+	list_add(&p_buffer->list_entry, &p_isle->buffers_list);
+	p_ooo_info->cur_isles_number++;
+	p_ooo_info->gen_isles_number++;
+
+	if (p_ooo_info->cur_isles_number > p_ooo_info->max_isles_number)
+		p_ooo_info->max_isles_number = p_ooo_info->cur_isles_number;
+
+	if (!p_prev_isle)
+		list_add(&p_isle->list_entry, &p_archipelago->isles_list);
+	else
+		list_add(&p_isle->list_entry, &p_prev_isle->list_entry);
+}
+
+void qed_ooo_add_new_buffer(struct qed_hwfn *p_hwfn,
+			    struct qed_ooo_info *p_ooo_info,
+			    u32 cid,
+			    u8 ooo_isle,
+			    struct qed_ooo_buffer *p_buffer, u8 buffer_side)
+{
+	struct qed_ooo_isle *p_isle = NULL;
+
+	p_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid, ooo_isle);
+	if (!p_isle) {
+		DP_NOTICE(p_hwfn,
+			  "Isle %d is not found (cid %d)\n", ooo_isle, cid);
+		return;
+	}
+
+	if (buffer_side == QED_OOO_LEFT_BUF)
+		list_add(&p_buffer->list_entry, &p_isle->buffers_list);
+	else
+		list_add_tail(&p_buffer->list_entry, &p_isle->buffers_list);
+}
+
+void qed_ooo_join_isles(struct qed_hwfn *p_hwfn,
+			struct qed_ooo_info *p_ooo_info, u32 cid, u8 left_isle)
+{
+	struct qed_ooo_archipelago *p_archipelago = NULL;
+	struct qed_ooo_isle *p_right_isle = NULL;
+	struct qed_ooo_isle *p_left_isle = NULL;
+
+	p_right_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid,
+					 left_isle + 1);
+	if (!p_right_isle) {
+		DP_NOTICE(p_hwfn,
+			  "Right isle %d is not found (cid %d)\n",
+			  left_isle + 1, cid);
+		return;
+	}
+
+	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
+	list_del(&p_right_isle->list_entry);
+	p_ooo_info->cur_isles_number--;
+	if (left_isle) {
+		p_left_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid,
+						left_isle);
+		if (!p_left_isle) {
+			DP_NOTICE(p_hwfn,
+				  "Left isle %d is not found (cid %d)\n",
+				  left_isle, cid);
+			return;
+		}
+		list_splice_tail_init(&p_right_isle->buffers_list,
+				      &p_left_isle->buffers_list);
+	} else {
+		list_splice_tail_init(&p_right_isle->buffers_list,
+				      &p_ooo_info->ready_buffers_list);
+		if (list_empty(&p_archipelago->isles_list)) {
+			list_del(&p_archipelago->list_entry);
+			list_add(&p_archipelago->list_entry,
+				 &p_ooo_info->free_archipelagos_list);
+		}
+	}
+	list_add_tail(&p_right_isle->list_entry, &p_ooo_info->free_isles_list);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ooo.h b/drivers/net/ethernet/qlogic/qed/qed_ooo.h
new file mode 100644
index 0000000..75c6e48
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_ooo.h
@@ -0,0 +1,116 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_OOO_H
+#define _QED_OOO_H
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include "qed.h"
+
+#define QED_MAX_NUM_ISLES	256
+#define QED_MAX_NUM_OOO_HISTORY_ENTRIES	512
+
+#define QED_OOO_LEFT_BUF	0
+#define QED_OOO_RIGHT_BUF	1
+
+struct qed_ooo_buffer {
+	struct list_head list_entry;
+	void *rx_buffer_virt_addr;
+	dma_addr_t rx_buffer_phys_addr;
+	u32 rx_buffer_size;
+	u16 packet_length;
+	u16 parse_flags;
+	u16 vlan;
+	u8 placement_offset;
+};
+
+struct qed_ooo_isle {
+	struct list_head list_entry;
+	struct list_head buffers_list;
+};
+
+struct qed_ooo_archipelago {
+	struct list_head list_entry;
+	struct list_head isles_list;
+	u32 cid;
+};
+
+struct qed_ooo_history {
+	struct ooo_opaque *p_cqes;
+	u32 head_idx;
+	u32 num_of_cqes;
+};
+
+struct qed_ooo_info {
+	struct list_head free_buffers_list;
+	struct list_head ready_buffers_list;
+	struct list_head free_isles_list;
+	struct list_head free_archipelagos_list;
+	struct list_head archipelagos_list;
+	struct qed_ooo_archipelago *p_archipelagos_mem;
+	struct qed_ooo_isle *p_isles_mem;
+	struct qed_ooo_history ooo_history;
+	u32 cur_isles_number;
+	u32 max_isles_number;
+	u32 gen_isles_number;
+};
+
+void qed_ooo_save_history_entry(struct qed_hwfn *p_hwfn,
+				struct qed_ooo_info *p_ooo_info,
+				struct ooo_opaque *p_cqe);
+
+struct qed_ooo_info *qed_ooo_alloc(struct qed_hwfn *p_hwfn);
+
+void qed_ooo_release_connection_isles(struct qed_hwfn *p_hwfn,
+				      struct qed_ooo_info *p_ooo_info,
+				      u32 cid);
+
+void qed_ooo_release_all_isles(struct qed_hwfn *p_hwfn,
+			       struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_setup(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_free(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_put_free_buffer(struct qed_hwfn *p_hwfn,
+			     struct qed_ooo_info *p_ooo_info,
+			     struct qed_ooo_buffer *p_buffer);
+
+struct qed_ooo_buffer *
+qed_ooo_get_free_buffer(struct qed_hwfn *p_hwfn,
+			struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_put_ready_buffer(struct qed_hwfn *p_hwfn,
+			      struct qed_ooo_info *p_ooo_info,
+			      struct qed_ooo_buffer *p_buffer, u8 on_tail);
+
+struct qed_ooo_buffer *
+qed_ooo_get_ready_buffer(struct qed_hwfn *p_hwfn,
+			 struct qed_ooo_info *p_ooo_info);
+
+void qed_ooo_delete_isles(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid, u8 drop_isle, u8 drop_size);
+
+void qed_ooo_add_new_isle(struct qed_hwfn *p_hwfn,
+			  struct qed_ooo_info *p_ooo_info,
+			  u32 cid,
+			  u8 ooo_isle, struct qed_ooo_buffer *p_buffer);
+
+void qed_ooo_add_new_buffer(struct qed_hwfn *p_hwfn,
+			    struct qed_ooo_info *p_ooo_info,
+			    u32 cid,
+			    u8 ooo_isle,
+			    struct qed_ooo_buffer *p_buffer, u8 buffer_side);
+
+void qed_ooo_join_isles(struct qed_hwfn *p_hwfn,
+			struct qed_ooo_info *p_ooo_info, u32 cid,
+			u8 left_isle);
+
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
index 2343005..1768cdb 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
@@ -2866,6 +2866,7 @@ static int qed_roce_ll2_tx(struct qed_dev *cdev,
 	/* Tx header */
 	rc = qed_ll2_prepare_tx_packet(QED_LEADING_HWFN(cdev), roce_ll2->handle,
 				       1 + pkt->n_seg, 0, flags, 0,
+				       QED_LL2_TX_DEST_NW,
 				       qed_roce_flavor, pkt->header.baddr,
 				       pkt->header.len, pkt, 1);
 	if (rc) {
diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
index d3fa578..b44fd4c 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
@@ -26,6 +26,7 @@
 #include "qed_int.h"
 #include "qed_iscsi.h"
 #include "qed_mcp.h"
+#include "qed_ooo.h"
 #include "qed_reg_addr.h"
 #include "qed_sp.h"
 #include "qed_sriov.h"
@@ -253,6 +254,14 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
 	case PROTOCOLID_ISCSI:
 		if (!IS_ENABLED(CONFIG_QEDI))
 			return -EINVAL;
+		if (p_eqe->opcode == ISCSI_EVENT_TYPE_ASYN_DELETE_OOO_ISLES) {
+			u32 cid = le32_to_cpu(p_eqe->data.iscsi_info.cid);
+
+			qed_ooo_release_connection_isles(p_hwfn,
+							 p_hwfn->p_ooo_info,
+							 cid);
+			return 0;
+		}
 
 		if (p_hwfn->p_iscsi_info->event_cb) {
 			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  2016-10-19  5:01 ` manish.rangankar
@ 2016-10-19  5:01   ` manish.rangankar
  -1 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI-specific module
for the 41000 Series Converged Network Adapters by QLogic.

This patch consists of the following changes:
  - MAINTAINERS, Makefile, and Kconfig changes for qedi,
  - PCI driver registration,
  - iSCSI host-level initialization,
  - debugfs and log-level infrastructure.
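
The log-level infrastructure gates each message class behind a bit in a
module-wide 'debug' mask (see the QEDI_LOG_* definitions added in
qedi_dbg.h below). As a review aid, here is a minimal user-space sketch of
that gating logic; qedi_would_log() is a hypothetical helper name of ours,
the driver itself open-codes the "(debug & level)" test in qedi_dbg_info()
and friends:

```c
#include <stdbool.h>
#include <stdint.h>

/* Mask values mirror a few of the QEDI_LOG_* definitions in qedi_dbg.h. */
#define QEDI_LOG_INFO	0x2
#define QEDI_LOG_CONN	0x10
#define QEDI_LOG_WARN	0x80000000u

/* A message is emitted only when its class bit is set in the mask. */
static bool qedi_would_log(uint32_t debug_mask, uint32_t level)
{
	return (debug_mask & level) != 0;
}
```

For example, with debug set to QEDI_LOG_INFO | QEDI_LOG_CONN, connection
logs print, but warnings stay silent until QEDI_LOG_WARN is also set.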

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 MAINTAINERS                         |    6 +
 drivers/net/ethernet/qlogic/Kconfig |   12 -
 drivers/scsi/Kconfig                |    1 +
 drivers/scsi/Makefile               |    1 +
 drivers/scsi/qedi/Kconfig           |   10 +
 drivers/scsi/qedi/Makefile          |    5 +
 drivers/scsi/qedi/qedi.h            |  286 +++++++
 drivers/scsi/qedi/qedi_dbg.c        |  143 ++++
 drivers/scsi/qedi/qedi_dbg.h        |  144 ++++
 drivers/scsi/qedi/qedi_debugfs.c    |  244 ++++++
 drivers/scsi/qedi/qedi_hsi.h        |   52 ++
 drivers/scsi/qedi/qedi_main.c       | 1550 +++++++++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_sysfs.c      |   52 ++
 drivers/scsi/qedi/qedi_version.h    |   14 +
 14 files changed, 2508 insertions(+), 12 deletions(-)
 create mode 100644 drivers/scsi/qedi/Kconfig
 create mode 100644 drivers/scsi/qedi/Makefile
 create mode 100644 drivers/scsi/qedi/qedi.h
 create mode 100644 drivers/scsi/qedi/qedi_dbg.c
 create mode 100644 drivers/scsi/qedi/qedi_dbg.h
 create mode 100644 drivers/scsi/qedi/qedi_debugfs.c
 create mode 100644 drivers/scsi/qedi/qedi_hsi.h
 create mode 100644 drivers/scsi/qedi/qedi_main.c
 create mode 100644 drivers/scsi/qedi/qedi_sysfs.c
 create mode 100644 drivers/scsi/qedi/qedi_version.h
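
As context for review: qedi keeps firmware task contexts in equally sized
blocks, and qedi_get_task_mem() in qedi.h below resolves a task id (tid)
with a divide/modulo into the block array. A standalone sketch of that
lookup, with struct and field names simplified from struct qed_iscsi_tid
for illustration only:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct qed_iscsi_tid: each block holds
 * num_tids_per_block task contexts of 'size' bytes each.
 */
struct tid_info {
	uint8_t **blocks;		/* base address of each block */
	uint32_t num_tids_per_block;
	uint32_t size;			/* bytes per task context */
};

/* Same arithmetic as qedi_get_task_mem(): pick the block, then offset. */
static void *get_task_mem(struct tid_info *info, uint32_t tid)
{
	return info->blocks[tid / info->num_tids_per_block] +
	       (tid % info->num_tids_per_block) * info->size;
}
```

With 4 contexts of 64 bytes per block, tid 5 lands at offset 64 in the
second block, matching the driver's layout.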

diff --git a/MAINTAINERS b/MAINTAINERS
index 5e925a2..906d05f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9909,6 +9909,12 @@ F:	drivers/net/ethernet/qlogic/qed/
 F:	include/linux/qed/
 F:	drivers/net/ethernet/qlogic/qede/
 
+QLOGIC QL41xxx ISCSI DRIVER
+M:	QLogic-Storage-Upstream@cavium.com
+L:	linux-scsi@vger.kernel.org
+S:	Supported
+F:	drivers/scsi/qedi/
+
 QNX4 FILESYSTEM
 M:	Anders Larsen <al@alarsen.net>
 W:	http://www.alarsen.net/linux/qnx4fs/
diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
index bad4fae..28b4366 100644
--- a/drivers/net/ethernet/qlogic/Kconfig
+++ b/drivers/net/ethernet/qlogic/Kconfig
@@ -121,16 +121,4 @@ config INFINIBAND_QEDR
 config QED_ISCSI
 	bool
 
-config QEDI
-	tristate "QLogic QED 25/40/100Gb iSCSI driver"
-	depends on QED
-	select QED_LL2
-	select QED_ISCSI
-	default n
-	---help---
-	  This provides a temporary node that allows the compilation
-	  and logical testing of the hardware offload iSCSI support
-	  for QLogic QED. This would be replaced by the 'real' option
-	  once the QEDI driver is added [+relocated].
-
 endif # NET_VENDOR_QLOGIC
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 3e2bdb9..5cf03db 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1254,6 +1254,7 @@ config SCSI_QLOGICPTI
 
 source "drivers/scsi/qla2xxx/Kconfig"
 source "drivers/scsi/qla4xxx/Kconfig"
+source "drivers/scsi/qedi/Kconfig"
 
 config SCSI_LPFC
 	tristate "Emulex LightPulse Fibre Channel Support"
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 38d938d..da9e312 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -132,6 +132,7 @@ obj-$(CONFIG_PS3_ROM)		+= ps3rom.o
 obj-$(CONFIG_SCSI_CXGB3_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
 obj-$(CONFIG_SCSI_CXGB4_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
 obj-$(CONFIG_SCSI_BNX2_ISCSI)	+= libiscsi.o bnx2i/
+obj-$(CONFIG_QEDI)		+= libiscsi.o qedi/
 obj-$(CONFIG_BE2ISCSI)		+= libiscsi.o be2iscsi/
 obj-$(CONFIG_SCSI_ESAS2R)	+= esas2r/
 obj-$(CONFIG_SCSI_PMCRAID)	+= pmcraid.o
diff --git a/drivers/scsi/qedi/Kconfig b/drivers/scsi/qedi/Kconfig
new file mode 100644
index 0000000..23ca8a2
--- /dev/null
+++ b/drivers/scsi/qedi/Kconfig
@@ -0,0 +1,10 @@
+config QEDI
+	tristate "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver Support"
+	depends on PCI && SCSI
+	depends on QED
+	select SCSI_ISCSI_ATTRS
+	select QED_LL2
+	select QED_ISCSI
+	---help---
+	  This driver supports iSCSI offload for the QLogic FastLinQ
+	  41000 Series Converged Network Adapters.
diff --git a/drivers/scsi/qedi/Makefile b/drivers/scsi/qedi/Makefile
new file mode 100644
index 0000000..2b3e16b
--- /dev/null
+++ b/drivers/scsi/qedi/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_QEDI) := qedi.o
+qedi-y := qedi_main.o qedi_iscsi.o qedi_fw.o qedi_sysfs.o \
+	    qedi_dbg.o
+
+qedi-$(CONFIG_DEBUG_FS) += qedi_debugfs.o
diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
new file mode 100644
index 0000000..0a5035e
--- /dev/null
+++ b/drivers/scsi/qedi/qedi.h
@@ -0,0 +1,286 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_H_
+#define _QEDI_H_
+
+#define __PREVENT_QED_HSI__
+
+#include <scsi/scsi_transport_iscsi.h>
+#include <scsi/libiscsi.h>
+#include <scsi/scsi_host.h>
+#include <linux/uio_driver.h>
+
+#include "qedi_hsi.h"
+#include <linux/qed/qed_if.h>
+#include "qedi_dbg.h"
+#include <linux/qed/qed_iscsi_if.h>
+#include "qedi_version.h"
+
+#define QEDI_MODULE_NAME		"qedi"
+
+struct qedi_endpoint;
+
+/*
+ * PCI function probe defines
+ */
+#define QEDI_MODE_NORMAL	0
+#define QEDI_MODE_RECOVERY	1
+
+#define ISCSI_WQE_SET_PTU_INVALIDATE	1
+#define QEDI_MAX_ISCSI_TASK		4096
+#define QEDI_MAX_TASK_NUM		0x0FFF
+#define QEDI_MAX_ISCSI_CONNS_PER_HBA	1024
+#define QEDI_ISCSI_MAX_BDS_PER_CMD	256	/* Firmware max BDs is 256 */
+#define MAX_OUTSTANDING_TASKS_PER_CON	1024
+
+#define QEDI_MAX_BD_LEN		0xffff
+#define QEDI_BD_SPLIT_SZ	0x1000
+#define QEDI_PAGE_SIZE		4096
+#define QEDI_FAST_SGE_COUNT	4
+/* MAX Length for cached SGL */
+#define MAX_SGLEN_FOR_CACHESGL	((1U << 16) - 1)
+
+#define MAX_NUM_MSIX_PF         8
+#define MIN_NUM_CPUS_MSIX(x)	min(x->msix_count, num_online_cpus())
+
+#define QEDI_LOCAL_PORT_MIN     60000
+#define QEDI_LOCAL_PORT_MAX     61024
+#define QEDI_LOCAL_PORT_RANGE   (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
+#define QEDI_LOCAL_PORT_INVALID	0xffff
+
+/* Queue sizes in number of elements */
+#define QEDI_SQ_SIZE		MAX_OUTSTANDING_TASKS_PER_CON
+#define QEDI_CQ_SIZE		2048
+#define QEDI_CMDQ_SIZE		QEDI_MAX_ISCSI_TASK
+#define QEDI_PROTO_CQ_PROD_IDX	0
+
+struct qedi_glbl_q_params {
+	u64 hw_p_cq;	/* Completion queue PBL */
+	u64 hw_p_rq;	/* Request queue PBL */
+	u64 hw_p_cmdq;	/* Command queue PBL */
+};
+
+struct global_queue {
+	union iscsi_cqe *cq;
+	dma_addr_t cq_dma;
+	u32 cq_mem_size;
+	u32 cq_cons_idx; /* Completion queue consumer index */
+
+	void *cq_pbl;
+	dma_addr_t cq_pbl_dma;
+	u32 cq_pbl_size;
+
+};
+
+struct qedi_fastpath {
+	struct qed_sb_info	*sb_info;
+	u16			sb_id;
+#define QEDI_NAME_SIZE		16
+	char			name[QEDI_NAME_SIZE];
+	struct qedi_ctx         *qedi;
+};
+
+/* Used to pass fastpath information needed to process CQEs */
+struct qedi_io_work {
+	struct list_head list;
+	struct iscsi_cqe_solicited cqe;
+	u16	que_idx;
+};
+
+/**
+ * struct iscsi_cid_queue - Per adapter iscsi cid queue
+ *
+ * @cid_que_base:           queue base memory
+ * @cid_que:                queue memory pointer
+ * @cid_q_prod_idx:         producer index
+ * @cid_q_cons_idx:         consumer index
+ * @cid_q_max_idx:          max index, used to detect wrap-around
+ * @cid_free_cnt:           queue size
+ * @conn_cid_tbl:           iscsi cid to conn structure mapping table
+ *
+ * Per adapter iSCSI CID Queue
+ */
+struct iscsi_cid_queue {
+	void *cid_que_base;
+	u32 *cid_que;
+	u32 cid_q_prod_idx;
+	u32 cid_q_cons_idx;
+	u32 cid_q_max_idx;
+	u32 cid_free_cnt;
+	struct qedi_conn **conn_cid_tbl;
+};
+
+struct qedi_portid_tbl {
+	spinlock_t      lock;	/* Port id lock */
+	u16             start;
+	u16             max;
+	u16             next;
+	unsigned long   *table;
+};
+
+struct qedi_itt_map {
+	__le32	itt;
+};
+
+/* I/O tracing entry */
+#define QEDI_IO_TRACE_SIZE             2048
+struct qedi_io_log {
+#define QEDI_IO_TRACE_REQ              0
+#define QEDI_IO_TRACE_RSP              1
+	u8 direction;
+	u16 task_id;
+	u32 cid;
+	u32 port_id;	/* Remote port fabric ID */
+	int lun;
+	u8 op;		/* SCSI CDB */
+	u8 lba[4];
+	unsigned int bufflen;	/* SCSI buffer length */
+	unsigned int sg_count;	/* Number of SG elements */
+	u8 fast_sgs;		/* number of fast sgls */
+	u8 slow_sgs;		/* number of slow sgls */
+	u8 cached_sgs;		/* number of cached sgls */
+	int result;		/* Result passed back to mid-layer */
+	unsigned long jiffies;	/* Time stamp when I/O logged */
+	int refcount;		/* Reference count for task id */
+	unsigned int blk_req_cpu; /* CPU that the task is queued on by
+				   * blk layer
+				   */
+	unsigned int req_cpu;	/* CPU that the task is queued on */
+	unsigned int intr_cpu;	/* Interrupt CPU that the task is received on */
+	unsigned int blk_rsp_cpu;/* CPU that task is actually processed and
+				  * returned to blk layer
+				  */
+	bool cached_sge;
+	bool slow_sge;
+	bool fast_sge;
+};
+
+/* Number of entries in BDQ */
+#define QEDI_BDQ_NUM		256
+#define QEDI_BDQ_BUF_SIZE	256
+
+/* DMA coherent buffers for BDQ */
+struct qedi_bdq_buf {
+	void *buf_addr;
+	dma_addr_t buf_dma;
+};
+
+/* Main port level struct */
+struct qedi_ctx {
+	struct qedi_dbg_ctx dbg_ctx;
+	struct Scsi_Host *shost;
+	struct pci_dev *pdev;
+	struct qed_dev *cdev;
+	struct qed_dev_iscsi_info dev_info;
+	struct qed_int_info int_info;
+	struct qedi_glbl_q_params *p_cpuq;
+	struct global_queue **global_queues;
+	/* uio declaration */
+	struct qedi_uio_dev *udev;
+	struct list_head ll2_skb_list;
+	spinlock_t ll2_lock;	/* Light L2 lock */
+	spinlock_t hba_lock;	/* per port lock */
+	struct task_struct *ll2_recv_thread;
+	unsigned long flags;
+#define UIO_DEV_OPENED		1
+#define QEDI_IOTHREAD_WAKE	2
+#define QEDI_IN_RECOVERY	5
+#define QEDI_IN_OFFLINE		6
+
+	u8 mac[ETH_ALEN];
+	u32 src_ip[4];
+	u8 ip_type;
+
+	/* Physical address of above array */
+	u64 hw_p_cpuq;
+
+	struct qedi_bdq_buf bdq[QEDI_BDQ_NUM];
+	void *bdq_pbl;
+	dma_addr_t bdq_pbl_dma;
+	size_t bdq_pbl_mem_size;
+	void *bdq_pbl_list;
+	dma_addr_t bdq_pbl_list_dma;
+	u8 bdq_pbl_list_num_entries;
+	void __iomem *bdq_primary_prod;
+	void __iomem *bdq_secondary_prod;
+	u16 bdq_prod_idx;
+	u16 rq_num_entries;
+
+	u32 msix_count;
+	u32 max_sqes;
+	u8 num_queues;
+	u32 max_active_conns;
+
+	struct iscsi_cid_queue cid_que;
+	struct qedi_endpoint **ep_tbl;
+	struct qedi_portid_tbl lcl_port_tbl;
+
+	/* Rx fast path intr context */
+	struct qed_sb_info	*sb_array;
+	struct qedi_fastpath	*fp_array;
+	struct qed_iscsi_tid	tasks;
+
+#define QEDI_LINK_DOWN		0
+#define QEDI_LINK_UP		1
+	atomic_t link_state;
+
+#define QEDI_RESERVE_TASK_ID	0
+#define MAX_ISCSI_TASK_ENTRIES	4096
+#define QEDI_INVALID_TASK_ID	(MAX_ISCSI_TASK_ENTRIES + 1)
+	unsigned long task_idx_map[MAX_ISCSI_TASK_ENTRIES / BITS_PER_LONG];
+	struct qedi_itt_map *itt_map;
+	u16 tid_reuse_count[QEDI_MAX_ISCSI_TASK];
+	struct qed_pf_params pf_params;
+
+	struct workqueue_struct *tmf_thread;
+	struct workqueue_struct *offload_thread;
+
+	u16 ll2_mtu;
+
+	struct workqueue_struct *dpc_wq;
+
+	spinlock_t task_idx_lock;	/* To protect gbl context */
+	s32 last_tidx_alloc;
+	s32 last_tidx_clear;
+
+	struct qedi_io_log io_trace_buf[QEDI_IO_TRACE_SIZE];
+	spinlock_t io_trace_lock;	/* protect trace log buf */
+	u16 io_trace_idx;
+	unsigned int intr_cpu;
+	u32 cached_sgls;
+	bool use_cached_sge;
+	u32 slow_sgls;
+	bool use_slow_sge;
+	u32 fast_sgls;
+	bool use_fast_sge;
+
+	atomic_t num_offloads;
+};
+
+struct qedi_work {
+	struct list_head list;
+	struct qedi_ctx *qedi;
+	union iscsi_cqe cqe;
+	u16     que_idx;
+};
+
+struct qedi_percpu_s {
+	struct task_struct *iothread;
+	struct list_head work_list;
+	spinlock_t p_work_lock;		/* Per cpu worker lock */
+};
+
+static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32 tid)
+{
+	return (void *)(info->blocks[tid / info->num_tids_per_block] +
+			(tid % info->num_tids_per_block) * info->size);
+}
+
+#endif /* _QEDI_H_ */
diff --git a/drivers/scsi/qedi/qedi_dbg.c b/drivers/scsi/qedi/qedi_dbg.c
new file mode 100644
index 0000000..2678a15
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_dbg.c
@@ -0,0 +1,143 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include "qedi_dbg.h"
+#include <linux/vmalloc.h>
+
+void
+qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+	     const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	memcpy(nfunc, func, sizeof(nfunc) - 1);
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_crit("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+			nfunc, line, qedi->host_no, &vaf);
+	else
+		pr_crit("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+void
+qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+	      const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	memcpy(nfunc, func, sizeof(nfunc) - 1);
+
+	if (!(debug & QEDI_LOG_WARN))
+		return;
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+			nfunc, line, qedi->host_no, &vaf);
+	else
+		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+void
+qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+		const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	memcpy(nfunc, func, sizeof(nfunc) - 1);
+
+	if (!(debug & QEDI_LOG_NOTICE))
+		return;
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_notice("[%s]:[%s:%d]:%d: %pV",
+			  dev_name(&qedi->pdev->dev), nfunc, line,
+			  qedi->host_no, &vaf);
+	else
+		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+void
+qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+	      u32 level, const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	memcpy(nfunc, func, sizeof(nfunc) - 1);
+
+	if (!(debug & level))
+		return;
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+			nfunc, line, qedi->host_no, &vaf);
+	else
+		pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+int
+qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
+{
+	int ret = 0;
+
+	for (; iter->name; iter++) {
+		ret = sysfs_create_bin_file(&shost->shost_gendev.kobj,
+					    iter->attr);
+		if (ret)
+			pr_err("Unable to create sysfs %s attr, err(%d).\n",
+			       iter->name, ret);
+	}
+	return ret;
+}
+
+void
+qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
+{
+	for (; iter->name; iter++)
+		sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr);
+}
diff --git a/drivers/scsi/qedi/qedi_dbg.h b/drivers/scsi/qedi/qedi_dbg.h
new file mode 100644
index 0000000..5beb3ec
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_dbg.h
@@ -0,0 +1,144 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_DBG_H_
+#define _QEDI_DBG_H_
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/compiler.h>
+#include <linux/string.h>
+#include <linux/version.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_transport_iscsi.h>
+#include <linux/fs.h>
+
+#define __PREVENT_QED_HSI__
+#include <linux/qed/common_hsi.h>
+#include <linux/qed/qed_if.h>
+
+extern uint debug;
+
+/* Debug print level definitions */
+#define QEDI_LOG_DEFAULT	0x1		/* Set default logging mask */
+#define QEDI_LOG_INFO		0x2		/* Informational logs,
+						 * MAC address, WWPN, WWNN
+						 */
+#define QEDI_LOG_DISC		0x4		/* Init, discovery, rport */
+#define QEDI_LOG_LL2		0x8		/* LL2, VLAN logs */
+#define QEDI_LOG_CONN		0x10		/* Connection setup, cleanup */
+#define QEDI_LOG_EVT		0x20		/* Events, link, mtu */
+#define QEDI_LOG_TIMER		0x40		/* Timer events */
+#define QEDI_LOG_MP_REQ		0x80		/* Middle Path (MP) logs */
+#define QEDI_LOG_SCSI_TM	0x100		/* SCSI Aborts, Task Mgmt */
+#define QEDI_LOG_UNSOL		0x200		/* unsolicited event logs */
+#define QEDI_LOG_IO		0x400		/* scsi cmd, completion */
+#define QEDI_LOG_MQ		0x800		/* Multi Queue logs */
+#define QEDI_LOG_BSG		0x1000		/* BSG logs */
+#define QEDI_LOG_DEBUGFS	0x2000		/* debugFS logs */
+#define QEDI_LOG_LPORT		0x4000		/* lport logs */
+#define QEDI_LOG_ELS		0x8000		/* ELS logs */
+#define QEDI_LOG_NPIV		0x10000		/* NPIV logs */
+#define QEDI_LOG_SESS		0x20000		/* Session setup, cleanup */
+#define QEDI_LOG_UIO		0x40000		/* iSCSI UIO logs */
+#define QEDI_LOG_TID		0x80000         /* FW TID context acquire,
+						 * free
+						 */
+#define QEDI_TRACK_TID		0x100000        /* Track TID state. To be
+						 * enabled only at module load
+						 * and not run-time.
+						 */
+#define QEDI_TRACK_CMD_LIST    0x300000        /* Track active cmd list nodes,
+						* done with reference to TID,
+						* hence TRACK_TID also enabled.
+						*/
+#define QEDI_LOG_NOTICE		0x40000000	/* Notice logs */
+#define QEDI_LOG_WARN		0x80000000	/* Warning logs */
+
+/* Debug context structure */
+struct qedi_dbg_ctx {
+	unsigned int host_no;
+	struct pci_dev *pdev;
+#ifdef CONFIG_DEBUG_FS
+	struct dentry *bdf_dentry;
+#endif
+};
+
+#define QEDI_ERR(pdev, fmt, ...)	\
+		qedi_dbg_err(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
+#define QEDI_WARN(pdev, fmt, ...)	\
+		qedi_dbg_warn(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
+#define QEDI_NOTICE(pdev, fmt, ...)	\
+		qedi_dbg_notice(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
+#define QEDI_INFO(pdev, level, fmt, ...)	\
+		qedi_dbg_info(pdev, __func__, __LINE__, level, fmt,	\
+			      ## __VA_ARGS__)
+
+void qedi_dbg_err(struct qedi_dbg_ctx *, const char *, u32,
+		  const char *, ...);
+void qedi_dbg_warn(struct qedi_dbg_ctx *, const char *, u32,
+		   const char *, ...);
+void qedi_dbg_notice(struct qedi_dbg_ctx *, const char *, u32,
+		     const char *, ...);
+void qedi_dbg_info(struct qedi_dbg_ctx *, const char *, u32, u32,
+		   const char *, ...);
+
+struct Scsi_Host;
+
+struct sysfs_bin_attrs {
+	char *name;
+	struct bin_attribute *attr;
+};
+
+int qedi_create_sysfs_attr(struct Scsi_Host *,
+			   struct sysfs_bin_attrs *);
+void qedi_remove_sysfs_attr(struct Scsi_Host *,
+			    struct sysfs_bin_attrs *);
+
+#ifdef CONFIG_DEBUG_FS
+/* DebugFS related code */
+struct qedi_list_of_funcs {
+	char *oper_str;
+	ssize_t (*oper_func)(struct qedi_dbg_ctx *qedi);
+};
+
+struct qedi_debugfs_ops {
+	char *name;
+	struct qedi_list_of_funcs *qedi_funcs;
+};
+
+#define qedi_dbg_fileops(drv, ops) \
+{ \
+	.owner  = THIS_MODULE, \
+	.open   = simple_open, \
+	.read   = drv##_dbg_##ops##_cmd_read, \
+	.write  = drv##_dbg_##ops##_cmd_write \
+}
+
+/* Used for debugfs sequential files */
+#define qedi_dbg_fileops_seq(drv, ops) \
+{ \
+	.owner = THIS_MODULE, \
+	.open = drv##_dbg_##ops##_open, \
+	.read = seq_read, \
+	.llseek = seq_lseek, \
+	.release = single_release, \
+}
+
+void qedi_dbg_host_init(struct qedi_dbg_ctx *,
+			struct qedi_debugfs_ops *,
+			const struct file_operations *);
+void qedi_dbg_host_exit(struct qedi_dbg_ctx *);
+void qedi_dbg_init(char *);
+void qedi_dbg_exit(void);
+#endif /* CONFIG_DEBUG_FS */
+
+#endif /* _QEDI_DBG_H_ */
diff --git a/drivers/scsi/qedi/qedi_debugfs.c b/drivers/scsi/qedi/qedi_debugfs.c
new file mode 100644
index 0000000..9559362
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_debugfs.c
@@ -0,0 +1,244 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include "qedi.h"
+#include "qedi_dbg.h"
+
+#include <linux/uaccess.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+
+int do_not_recover;
+static struct dentry *qedi_dbg_root;
+
+void
+qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
+		   struct qedi_debugfs_ops *dops,
+		   const struct file_operations *fops)
+{
+	char host_dirname[32];
+	struct dentry *file_dentry = NULL;
+
+	sprintf(host_dirname, "host%u", qedi->host_no);
+	qedi->bdf_dentry = debugfs_create_dir(host_dirname, qedi_dbg_root);
+	if (!qedi->bdf_dentry)
+		return;
+
+	while (dops) {
+		if (!(dops->name))
+			break;
+
+		file_dentry = debugfs_create_file(dops->name, 0600,
+						  qedi->bdf_dentry, qedi,
+						  fops);
+		if (!file_dentry) {
+			QEDI_INFO(qedi, QEDI_LOG_DEBUGFS,
+				  "Debugfs entry %s creation failed\n",
+				  dops->name);
+			debugfs_remove_recursive(qedi->bdf_dentry);
+			return;
+		}
+		dops++;
+		fops++;
+	}
+}
+
+void
+qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi)
+{
+	debugfs_remove_recursive(qedi->bdf_dentry);
+	qedi->bdf_dentry = NULL;
+}
+
+void
+qedi_dbg_init(char *drv_name)
+{
+	qedi_dbg_root = debugfs_create_dir(drv_name, NULL);
+	if (!qedi_dbg_root)
+		QEDI_INFO(NULL, QEDI_LOG_DEBUGFS, "Init of debugfs failed\n");
+}
+
+void
+qedi_dbg_exit(void)
+{
+	debugfs_remove_recursive(qedi_dbg_root);
+	qedi_dbg_root = NULL;
+}
+
+static ssize_t
+qedi_dbg_do_not_recover_enable(struct qedi_dbg_ctx *qedi_dbg)
+{
+	if (!do_not_recover)
+		do_not_recover = 1;
+
+	QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
+		  do_not_recover);
+	return 0;
+}
+
+static ssize_t
+qedi_dbg_do_not_recover_disable(struct qedi_dbg_ctx *qedi_dbg)
+{
+	if (do_not_recover)
+		do_not_recover = 0;
+
+	QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
+		  do_not_recover);
+	return 0;
+}
+
+static struct qedi_list_of_funcs qedi_dbg_do_not_recover_ops[] = {
+	{ "enable", qedi_dbg_do_not_recover_enable },
+	{ "disable", qedi_dbg_do_not_recover_disable },
+	{ NULL, NULL }
+};
+
+struct qedi_debugfs_ops qedi_debugfs_ops[] = {
+	{ "gbl_ctx", NULL },
+	{ "do_not_recover", qedi_dbg_do_not_recover_ops},
+	{ "io_trace", NULL },
+	{ NULL, NULL }
+};
+
+static ssize_t
+qedi_dbg_do_not_recover_cmd_write(struct file *filp, const char __user *buffer,
+				  size_t count, loff_t *ppos)
+{
+	size_t cnt = 0;
+	struct qedi_dbg_ctx *qedi_dbg =
+			(struct qedi_dbg_ctx *)filp->private_data;
+	struct qedi_list_of_funcs *lof = qedi_dbg_do_not_recover_ops;
+	char cmd[32];
+
+	if (*ppos)
+		return 0;
+
+	if (!count || count >= sizeof(cmd))
+		return -EINVAL;
+
+	if (copy_from_user(cmd, buffer, count))
+		return -EFAULT;
+	cmd[count] = '\0';
+
+	while (lof) {
+		if (!(lof->oper_str))
+			break;
+
+		if (!strncmp(lof->oper_str, cmd, strlen(lof->oper_str))) {
+			cnt = lof->oper_func(qedi_dbg);
+			break;
+		}
+
+		lof++;
+	}
+	return (count - cnt);
+}
+
+static ssize_t
+qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
+				 size_t count, loff_t *ppos)
+{
+	char buf[32];
+	size_t cnt;
+
+	cnt = scnprintf(buf, sizeof(buf), "do_not_recover=%d\n",
+			do_not_recover);
+
+	return simple_read_from_buffer(buffer, count, ppos, buf, cnt);
+}
+
+static int
+qedi_gbl_ctx_show(struct seq_file *s, void *unused)
+{
+	struct qedi_fastpath *fp = NULL;
+	struct qed_sb_info *sb_info = NULL;
+	struct status_block *sb = NULL;
+	struct global_queue *que = NULL;
+	int id;
+	u16 prod_idx;
+	struct qedi_ctx *qedi = s->private;
+	unsigned long flags;
+
+	seq_puts(s, " DUMP CQ CONTEXT:\n");
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		seq_printf(s, "=========FAST CQ PATH [%d] ==========\n", id);
+		fp = &qedi->fp_array[id];
+		sb_info = fp->sb_info;
+		sb = sb_info->sb_virt;
+		prod_idx = (sb->pi_array[QEDI_PROTO_CQ_PROD_IDX] &
+			    STATUS_BLOCK_PROD_INDEX_MASK);
+		seq_printf(s, "SB PROD IDX: %d\n", prod_idx);
+		que = qedi->global_queues[fp->sb_id];
+		seq_printf(s, "DRV CONS IDX: %d\n", que->cq_cons_idx);
+		seq_printf(s, "CQ complete host memory: %d\n", fp->sb_id);
+		seq_puts(s, "=========== END ==================\n\n\n");
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+	}
+	return 0;
+}
+
+static int
+qedi_dbg_gbl_ctx_open(struct inode *inode, struct file *file)
+{
+	struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
+	struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
+					     dbg_ctx);
+
+	return single_open(file, qedi_gbl_ctx_show, qedi);
+}
+
+static int
+qedi_io_trace_show(struct seq_file *s, void *unused)
+{
+	int id, idx = 0;
+	struct qedi_ctx *qedi = s->private;
+	struct qedi_io_log *io_log;
+	unsigned long flags;
+
+	seq_puts(s, " DUMP IO LOGS:\n");
+	spin_lock_irqsave(&qedi->io_trace_lock, flags);
+	idx = qedi->io_trace_idx;
+	for (id = 0; id < QEDI_IO_TRACE_SIZE; id++) {
+		io_log = &qedi->io_trace_buf[idx];
+		seq_printf(s, "iodir-%d:", io_log->direction);
+		seq_printf(s, "tid-0x%x:", io_log->task_id);
+		seq_printf(s, "cid-0x%x:", io_log->cid);
+		seq_printf(s, "lun-%d:", io_log->lun);
+		seq_printf(s, "op-0x%02x:", io_log->op);
+		seq_printf(s, "0x%02x%02x%02x%02x:", io_log->lba[0],
+			   io_log->lba[1], io_log->lba[2], io_log->lba[3]);
+		seq_printf(s, "buflen-%d:", io_log->bufflen);
+		seq_printf(s, "sgcnt-%d:", io_log->sg_count);
+		seq_printf(s, "res-0x%08x:", io_log->result);
+		seq_printf(s, "jif-%lu:", io_log->jiffies);
+		seq_printf(s, "blk_req_cpu-%d:", io_log->blk_req_cpu);
+		seq_printf(s, "req_cpu-%d:", io_log->req_cpu);
+		seq_printf(s, "intr_cpu-%d:", io_log->intr_cpu);
+		seq_printf(s, "blk_rsp_cpu-%d\n", io_log->blk_rsp_cpu);
+
+		idx++;
+		if (idx == QEDI_IO_TRACE_SIZE)
+			idx = 0;
+	}
+	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
+	return 0;
+}
+
+static int
+qedi_dbg_io_trace_open(struct inode *inode, struct file *file)
+{
+	struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
+	struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
+					     dbg_ctx);
+
+	return single_open(file, qedi_io_trace_show, qedi);
+}
+
+const struct file_operations qedi_dbg_fops[] = {
+	qedi_dbg_fileops_seq(qedi, gbl_ctx),
+	qedi_dbg_fileops(qedi, do_not_recover),
+	qedi_dbg_fileops_seq(qedi, io_trace),
+	{ NULL, NULL },
+};
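The do_not_recover write path above dispatches on an operation string through a NULL-terminated table of name/handler pairs. A minimal userspace sketch of the same table-driven dispatch pattern (names here are illustrative stand-ins, not the driver's symbols):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-ins for the driver's enable/disable handlers. */
static int g_do_not_recover;

static int op_enable(void)  { g_do_not_recover = 1; return 0; }
static int op_disable(void) { g_do_not_recover = 0; return 0; }

struct op_entry {
	const char *name;
	int (*func)(void);
};

/* NULL-terminated dispatch table, like qedi_dbg_do_not_recover_ops. */
static const struct op_entry ops[] = {
	{ "enable",  op_enable },
	{ "disable", op_disable },
	{ NULL, NULL }
};

/* Walk the table until the sentinel; match on the operation prefix,
 * so a trailing newline from "echo enable > ..." still matches.
 */
static int dispatch(const char *cmd)
{
	const struct op_entry *e;

	for (e = ops; e->name; e++) {
		if (!strncmp(e->name, cmd, strlen(e->name)))
			return e->func();
	}
	return -1; /* unknown command */
}
```

The sentinel entry is what lets new operations be added by extending the array, with no change to the dispatch loop.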
diff --git a/drivers/scsi/qedi/qedi_hsi.h b/drivers/scsi/qedi/qedi_hsi.h
new file mode 100644
index 0000000..b442a62
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_hsi.h
@@ -0,0 +1,52 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+#ifndef __QEDI_HSI__
+#define __QEDI_HSI__
+/********************************/
+/* Add include to common target */
+/********************************/
+#include <linux/qed/common_hsi.h>
+
+/****************************************/
+/* Add include to common storage target */
+/****************************************/
+#include <linux/qed/storage_common.h>
+
+/************************************/
+/* Add include to common TCP target */
+/************************************/
+#include <linux/qed/tcp_common.h>
+
+/**************************************************************************/
+/* Add include to common iSCSI target for both eCore and protocol driver */
+/**************************************************************************/
+#include <linux/qed/iscsi_common.h>
+
+/*
+ * iSCSI CMDQ element
+ */
+struct iscsi_cmdqe {
+	__le16 conn_id;
+	u8 invalid_command;
+	u8 cmd_hdr_type;
+	__le32 reserved1[2];
+	__le32 cmd_payload[13];
+};
+
+/*
+ * iSCSI CMD header type
+ */
+enum iscsi_cmd_hdr_type {
+	ISCSI_CMD_HDR_TYPE_BHS_ONLY /* iSCSI BHS with no expected AHS */,
+	ISCSI_CMD_HDR_TYPE_BHS_W_AHS /* iSCSI BHS with expected AHS */,
+	ISCSI_CMD_HDR_TYPE_AHS /* iSCSI AHS */,
+	MAX_ISCSI_CMD_HDR_TYPE
+};
+
+#endif /* __QEDI_HSI__ */
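The CMDQ element defined above is a fixed 64-byte slot: 2 + 1 + 1 bytes of header followed by 2 reserved and 13 payload little-endian dwords. A userspace mirror of the layout with fixed-width types can sanity-check that size (assumption: no compiler padding, which holds here since every field is naturally aligned):

```c
#include <stdint.h>

/* Userspace mirror of struct iscsi_cmdqe; __le16/__le32 become plain
 * fixed-width integers for the purpose of the size check.
 */
struct iscsi_cmdqe_mirror {
	uint16_t conn_id;          /* 2 bytes */
	uint8_t  invalid_command;  /* 1 byte  */
	uint8_t  cmd_hdr_type;     /* 1 byte  */
	uint32_t reserved1[2];     /* 8 bytes */
	uint32_t cmd_payload[13];  /* 52 bytes */
};                                 /* total: 64 bytes */
```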
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
new file mode 100644
index 0000000..35ab2f9
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -0,0 +1,1550 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/if_arp.h>
+#include <scsi/iscsi_if.h>
+#include <linux/inet.h>
+#include <net/arp.h>
+#include <linux/list.h>
+#include <linux/kthread.h>
+#include <linux/mm.h>
+#include <linux/if_vlan.h>
+#include <linux/cpu.h>
+
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi.h>
+
+#include "qedi.h"
+
+static uint fw_debug;
+module_param(fw_debug, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(fw_debug, " Firmware debug level 0(default) to 3");
+
+static uint int_mode;
+module_param(int_mode, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(int_mode,
+		 " Force interrupt mode other than MSI-X: (1 INT#x; 2 MSI)");
+
+uint debug = QEDI_LOG_WARN | QEDI_LOG_SCSI_TM;
+module_param(debug, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug, " Default debug level");
+
+const struct qed_iscsi_ops *qedi_ops;
+static struct scsi_transport_template *qedi_scsi_transport;
+static struct pci_driver qedi_pci_driver;
+static DEFINE_PER_CPU(struct qedi_percpu_s, qedi_percpu);
+/* Static function declaration */
+static int qedi_alloc_global_queues(struct qedi_ctx *qedi);
+static void qedi_free_global_queues(struct qedi_ctx *qedi);
+
+static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
+{
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *qedi_ep;
+	struct async_data *data;
+	int rval = 0;
+
+	if (!context || !fw_handle) {
+		QEDI_ERR(NULL, "Recv event with ctx NULL\n");
+		return -EINVAL;
+	}
+
+	qedi = (struct qedi_ctx *)context;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Recv Event %d fw_handle %p\n", fw_event_code, fw_handle);
+
+	data = (struct async_data *)fw_handle;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "cid=0x%x tid=0x%x err-code=0x%x fw-dbg-param=0x%x\n",
+		   data->cid, data->itid, data->error_code,
+		   data->fw_debug_param);
+
+	qedi_ep = qedi->ep_tbl[data->cid];
+
+	if (!qedi_ep) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Cannot process event, ep already disconnected, cid=0x%x\n",
+			   data->cid);
+		WARN_ON(1);
+		return -ENODEV;
+	}
+
+	switch (fw_event_code) {
+	case ISCSI_EVENT_TYPE_ASYN_CONNECT_COMPLETE:
+		if (qedi_ep->state == EP_STATE_OFLDCONN_START)
+			qedi_ep->state = EP_STATE_OFLDCONN_COMPL;
+
+		wake_up_interruptible(&qedi_ep->tcp_ofld_wait);
+		break;
+	case ISCSI_EVENT_TYPE_ASYN_TERMINATE_DONE:
+		qedi_ep->state = EP_STATE_DISCONN_COMPL;
+		wake_up_interruptible(&qedi_ep->tcp_ofld_wait);
+		break;
+	case ISCSI_EVENT_TYPE_ISCSI_CONN_ERROR:
+		qedi_process_iscsi_error(qedi_ep, data);
+		break;
+	case ISCSI_EVENT_TYPE_ASYN_ABORT_RCVD:
+	case ISCSI_EVENT_TYPE_ASYN_SYN_RCVD:
+	case ISCSI_EVENT_TYPE_ASYN_MAX_RT_TIME:
+	case ISCSI_EVENT_TYPE_ASYN_MAX_RT_CNT:
+	case ISCSI_EVENT_TYPE_ASYN_MAX_KA_PROBES_CNT:
+	case ISCSI_EVENT_TYPE_ASYN_FIN_WAIT2:
+	case ISCSI_EVENT_TYPE_TCP_CONN_ERROR:
+		qedi_process_tcp_error(qedi_ep, data);
+		break;
+	default:
+		QEDI_ERR(&qedi->dbg_ctx, "Recv Unknown Event %u\n",
+			 fw_event_code);
+	}
+
+	return rval;
+}
+
+static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
+				  struct qed_sb_info *sb_info, u16 sb_id)
+{
+	struct status_block *sb_virt;
+	dma_addr_t sb_phys;
+	int ret;
+
+	sb_virt = dma_alloc_coherent(&qedi->pdev->dev,
+				     sizeof(struct status_block), &sb_phys,
+				     GFP_KERNEL);
+	if (!sb_virt) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Status block allocation failed for id = %d.\n",
+			  sb_id);
+		return -ENOMEM;
+	}
+
+	ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
+				       sb_id, QED_SB_TYPE_STORAGE);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Status block initialization failed for id = %d.\n",
+			  sb_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void qedi_free_sb(struct qedi_ctx *qedi)
+{
+	struct qed_sb_info *sb_info;
+	int id;
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		sb_info = &qedi->sb_array[id];
+		if (sb_info->sb_virt)
+			dma_free_coherent(&qedi->pdev->dev,
+					  sizeof(*sb_info->sb_virt),
+					  (void *)sb_info->sb_virt,
+					  sb_info->sb_phys);
+	}
+}
+
+static void qedi_free_fp(struct qedi_ctx *qedi)
+{
+	kfree(qedi->fp_array);
+	kfree(qedi->sb_array);
+}
+
+static void qedi_destroy_fp(struct qedi_ctx *qedi)
+{
+	qedi_free_sb(qedi);
+	qedi_free_fp(qedi);
+}
+
+static int qedi_alloc_fp(struct qedi_ctx *qedi)
+{
+	int ret = 0;
+
+	qedi->fp_array = kcalloc(MIN_NUM_CPUS_MSIX(qedi),
+				 sizeof(struct qedi_fastpath), GFP_KERNEL);
+	if (!qedi->fp_array) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "fastpath fp array allocation failed.\n");
+		return -ENOMEM;
+	}
+
+	qedi->sb_array = kcalloc(MIN_NUM_CPUS_MSIX(qedi),
+				 sizeof(struct qed_sb_info), GFP_KERNEL);
+	if (!qedi->sb_array) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "fastpath sb array allocation failed.\n");
+		ret = -ENOMEM;
+		goto free_fp;
+	}
+
+	return ret;
+
+free_fp:
+	qedi_free_fp(qedi);
+	return ret;
+}
+
+static void qedi_int_fp(struct qedi_ctx *qedi)
+{
+	struct qedi_fastpath *fp;
+	int id;
+
+	memset((void *)qedi->fp_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
+	       sizeof(*qedi->fp_array));
+	memset((void *)qedi->sb_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
+	       sizeof(*qedi->sb_array));
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		fp = &qedi->fp_array[id];
+		fp->sb_info = &qedi->sb_array[id];
+		fp->sb_id = id;
+		fp->qedi = qedi;
+		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
+			 "qedi", id);
+
+		/* fp_array[i] ---- irq cookie
+		 * So init data which is needed in int ctx
+		 */
+	}
+}
+
+static int qedi_prepare_fp(struct qedi_ctx *qedi)
+{
+	struct qedi_fastpath *fp;
+	int id, ret = 0;
+
+	ret = qedi_alloc_fp(qedi);
+	if (ret)
+		goto err;
+
+	qedi_int_fp(qedi);
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		fp = &qedi->fp_array[id];
+		ret = qedi_alloc_and_init_sb(qedi, fp->sb_info, fp->sb_id);
+		if (ret) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "SB allocation and initialization failed.\n");
+			ret = -EIO;
+			goto err_init;
+		}
+	}
+
+	return 0;
+
+err_init:
+	qedi_free_sb(qedi);
+	qedi_free_fp(qedi);
+err:
+	return ret;
+}
+
+static enum qed_int_mode qedi_int_mode_to_enum(void)
+{
+	switch (int_mode) {
+	case 0: return QED_INT_MODE_MSIX;
+	case 1: return QED_INT_MODE_INTA;
+	case 2: return QED_INT_MODE_MSI;
+	default:
+		QEDI_ERR(NULL, "Unknown int_mode=%08x, defaulting to MSI-X\n",
+			 int_mode);
+		return QED_INT_MODE_MSIX;
+	}
+}
+
+static int qedi_setup_cid_que(struct qedi_ctx *qedi)
+{
+	int i;
+
+	qedi->cid_que.cid_que_base = kmalloc_array(qedi->max_active_conns,
+						   sizeof(u32), GFP_KERNEL);
+	if (!qedi->cid_que.cid_que_base)
+		return -ENOMEM;
+
+	qedi->cid_que.conn_cid_tbl = kmalloc_array(qedi->max_active_conns,
+						   sizeof(struct qedi_conn *),
+						   GFP_KERNEL);
+	if (!qedi->cid_que.conn_cid_tbl) {
+		kfree(qedi->cid_que.cid_que_base);
+		qedi->cid_que.cid_que_base = NULL;
+		return -ENOMEM;
+	}
+
+	qedi->cid_que.cid_que = (u32 *)qedi->cid_que.cid_que_base;
+	qedi->cid_que.cid_q_prod_idx = 0;
+	qedi->cid_que.cid_q_cons_idx = 0;
+	qedi->cid_que.cid_q_max_idx = qedi->max_active_conns;
+	qedi->cid_que.cid_free_cnt = qedi->max_active_conns;
+
+	for (i = 0; i < qedi->max_active_conns; i++) {
+		qedi->cid_que.cid_que[i] = i;
+		qedi->cid_que.conn_cid_tbl[i] = NULL;
+	}
+
+	return 0;
+}
+
+static void qedi_release_cid_que(struct qedi_ctx *qedi)
+{
+	kfree(qedi->cid_que.cid_que_base);
+	qedi->cid_que.cid_que_base = NULL;
+
+	kfree(qedi->cid_que.conn_cid_tbl);
+	qedi->cid_que.conn_cid_tbl = NULL;
+}
+
+static int qedi_init_id_tbl(struct qedi_portid_tbl *id_tbl, u16 size,
+			    u16 start_id, u16 next)
+{
+	id_tbl->start = start_id;
+	id_tbl->max = size;
+	id_tbl->next = next;
+	spin_lock_init(&id_tbl->lock);
+	id_tbl->table = kcalloc(BITS_TO_LONGS(size), sizeof(unsigned long),
+				GFP_KERNEL);
+	if (!id_tbl->table)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void qedi_free_id_tbl(struct qedi_portid_tbl *id_tbl)
+{
+	kfree(id_tbl->table);
+	id_tbl->table = NULL;
+}
+
+int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id)
+{
+	int ret = -1;
+
+	id -= id_tbl->start;
+	if (id >= id_tbl->max)
+		return ret;
+
+	spin_lock(&id_tbl->lock);
+	if (!test_bit(id, id_tbl->table)) {
+		set_bit(id, id_tbl->table);
+		ret = 0;
+	}
+	spin_unlock(&id_tbl->lock);
+	return ret;
+}
+
+u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl)
+{
+	u16 id;
+
+	spin_lock(&id_tbl->lock);
+	id = find_next_zero_bit(id_tbl->table, id_tbl->max, id_tbl->next);
+	if (id >= id_tbl->max) {
+		id = QEDI_LOCAL_PORT_INVALID;
+		if (id_tbl->next != 0) {
+			id = find_first_zero_bit(id_tbl->table, id_tbl->next);
+			if (id >= id_tbl->next)
+				id = QEDI_LOCAL_PORT_INVALID;
+		}
+	}
+
+	if (id < id_tbl->max) {
+		set_bit(id, id_tbl->table);
+		id_tbl->next = (id + 1) & (id_tbl->max - 1);
+		id += id_tbl->start;
+	}
+
+	spin_unlock(&id_tbl->lock);
+
+	return id;
+}
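qedi_alloc_new_id() above scans the bitmap from `next` to the end, wraps around to scan from the front, and then advances `next` past the granted id; the `& (max - 1)` wrap assumes `max` is a power of two. The same search order, sketched in userspace over a small byte-per-id table without the spinlock (table size and helper names are illustrative):

```c
#include <stdint.h>

#define TBL_MAX    8u       /* power of two, as the wrap arithmetic assumes */
#define ID_INVALID 0xffffu  /* stand-in for QEDI_LOCAL_PORT_INVALID */

static uint8_t  table[TBL_MAX]; /* 1 = allocated */
static uint16_t next_hint;

/* Same search order as qedi_alloc_new_id(): scan [next, max), then
 * wrap and scan [0, next); advance the hint past the granted id so
 * recently freed ids are not immediately reused.
 */
static uint16_t alloc_id(void)
{
	uint16_t id;

	for (id = next_hint; id < TBL_MAX; id++)
		if (!table[id])
			goto found;
	for (id = 0; id < next_hint; id++)
		if (!table[id])
			goto found;
	return ID_INVALID;
found:
	table[id] = 1;
	next_hint = (id + 1) & (TBL_MAX - 1);
	return id;
}

static void free_id(uint16_t id)
{
	if (id < TBL_MAX)
		table[id] = 0;
}
```

Delaying reuse of freed ids this way reduces the chance of a stale reference to a just-released local port colliding with a new connection.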
+
+void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id)
+{
+	if (id == QEDI_LOCAL_PORT_INVALID)
+		return;
+
+	id -= id_tbl->start;
+	if (id >= id_tbl->max)
+		return;
+
+	clear_bit(id, id_tbl->table);
+}
+
+static void qedi_cm_free_mem(struct qedi_ctx *qedi)
+{
+	kfree(qedi->ep_tbl);
+	qedi->ep_tbl = NULL;
+	qedi_free_id_tbl(&qedi->lcl_port_tbl);
+}
+
+static int qedi_cm_alloc_mem(struct qedi_ctx *qedi)
+{
+	u16 port_id;
+
+	qedi->ep_tbl = kcalloc(qedi->max_active_conns,
+			       sizeof(struct qedi_endpoint *), GFP_KERNEL);
+	if (!qedi->ep_tbl)
+		return -ENOMEM;
+	port_id = prandom_u32() % QEDI_LOCAL_PORT_RANGE;
+	if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE,
+			     QEDI_LOCAL_PORT_MIN, port_id)) {
+		qedi_cm_free_mem(qedi);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev)
+{
+	struct Scsi_Host *shost;
+	struct qedi_ctx *qedi = NULL;
+
+	shost = iscsi_host_alloc(&qedi_host_template,
+				 sizeof(struct qedi_ctx), 0);
+	if (!shost) {
+		QEDI_ERR(NULL, "Could not allocate shost\n");
+		goto exit_setup_shost;
+	}
+
+	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA;
+	shost->max_channel = 0;
+	shost->max_lun = ~0;
+	shost->max_cmd_len = 16;
+	shost->transportt = qedi_scsi_transport;
+
+	qedi = iscsi_host_priv(shost);
+	memset(qedi, 0, sizeof(*qedi));
+	qedi->shost = shost;
+	qedi->dbg_ctx.host_no = shost->host_no;
+	qedi->pdev = pdev;
+	qedi->dbg_ctx.pdev = pdev;
+	qedi->max_active_conns = ISCSI_MAX_SESS_PER_HBA;
+	qedi->max_sqes = QEDI_SQ_SIZE;
+
+	if (shost_use_blk_mq(shost))
+		shost->nr_hw_queues = MIN_NUM_CPUS_MSIX(qedi);
+
+	pci_set_drvdata(pdev, qedi);
+
+exit_setup_shost:
+	return qedi;
+}
+
+static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi)
+{
+	u8 num_sq_pages;
+	u32 log_page_size;
+	int rval = 0;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC, "Min number of MSIX %d\n",
+		  MIN_NUM_CPUS_MSIX(qedi));
+
+	num_sq_pages = (MAX_OUSTANDING_TASKS_PER_CON * 8) / PAGE_SIZE;
+
+	qedi->num_queues = MIN_NUM_CPUS_MSIX(qedi);
+
+	memset(&qedi->pf_params.iscsi_pf_params, 0,
+	       sizeof(qedi->pf_params.iscsi_pf_params));
+
+	qedi->p_cpuq = pci_alloc_consistent(qedi->pdev,
+			qedi->num_queues * sizeof(struct qedi_glbl_q_params),
+			&qedi->hw_p_cpuq);
+	if (!qedi->p_cpuq) {
+		QEDI_ERR(&qedi->dbg_ctx, "pci_alloc_consistent fail\n");
+		rval = -ENOMEM;
+		goto err_alloc_mem;
+	}
+
+	rval = qedi_alloc_global_queues(qedi);
+	if (rval) {
+		QEDI_ERR(&qedi->dbg_ctx, "Global queue allocation failed.\n");
+		goto err_alloc_mem;
+	}
+
+	qedi->pf_params.iscsi_pf_params.num_cons = QEDI_MAX_ISCSI_CONNS_PER_HBA;
+	qedi->pf_params.iscsi_pf_params.num_tasks = QEDI_MAX_ISCSI_TASK;
+	qedi->pf_params.iscsi_pf_params.half_way_close_timeout = 10;
+	qedi->pf_params.iscsi_pf_params.num_sq_pages_in_ring = num_sq_pages;
+	qedi->pf_params.iscsi_pf_params.num_r2tq_pages_in_ring = num_sq_pages;
+	qedi->pf_params.iscsi_pf_params.num_uhq_pages_in_ring = num_sq_pages;
+	qedi->pf_params.iscsi_pf_params.num_queues = qedi->num_queues;
+	qedi->pf_params.iscsi_pf_params.debug_mode = fw_debug;
+
+	for (log_page_size = 0; log_page_size < 32; log_page_size++) {
+		if ((1 << log_page_size) == PAGE_SIZE)
+			break;
+	}
+	qedi->pf_params.iscsi_pf_params.log_page_size = log_page_size;
+
+	qedi->pf_params.iscsi_pf_params.glbl_q_params_addr = qedi->hw_p_cpuq;
+
+	/* RQ BDQ initializations.
+	 * rq_num_entries: suggested value for Initiator is 16 (4KB RQ)
+	 * rqe_log_size: 8 for 256B RQE
+	 */
+	qedi->pf_params.iscsi_pf_params.rqe_log_size = 8;
+	/* BDQ address and size */
+	qedi->pf_params.iscsi_pf_params.bdq_pbl_base_addr[BDQ_ID_RQ] =
+							qedi->bdq_pbl_list_dma;
+	qedi->pf_params.iscsi_pf_params.bdq_pbl_num_entries[BDQ_ID_RQ] =
+						qedi->bdq_pbl_list_num_entries;
+	qedi->pf_params.iscsi_pf_params.rq_buffer_size = QEDI_BDQ_BUF_SIZE;
+
+	/* cq_num_entries: num_tasks + rq_num_entries */
+	qedi->pf_params.iscsi_pf_params.cq_num_entries = 2048;
+
+	qedi->pf_params.iscsi_pf_params.gl_rq_pi = QEDI_PROTO_CQ_PROD_IDX;
+	qedi->pf_params.iscsi_pf_params.gl_cmd_pi = 1;
+	qedi->pf_params.iscsi_pf_params.ooo_enable = 1;
+
+err_alloc_mem:
+	return rval;
+}
+
+/* Free DMA coherent memory for array of queue pointers we pass to qed */
+static void qedi_free_iscsi_pf_param(struct qedi_ctx *qedi)
+{
+	size_t size = 0;
+
+	if (qedi->p_cpuq) {
+		size = qedi->num_queues * sizeof(struct qedi_glbl_q_params);
+		pci_free_consistent(qedi->pdev, size, qedi->p_cpuq,
+				    qedi->hw_p_cpuq);
+	}
+
+	qedi_free_global_queues(qedi);
+
+	kfree(qedi->global_queues);
+}
+
+static void qedi_link_update(void *dev, struct qed_link_output *link)
+{
+	struct qedi_ctx *qedi = (struct qedi_ctx *)dev;
+
+	if (link->link_up) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "Link Up event.\n");
+		atomic_set(&qedi->link_state, QEDI_LINK_UP);
+	} else {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Link Down event.\n");
+		atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
+	}
+}
+
+static struct qed_iscsi_cb_ops qedi_cb_ops = {
+	{
+		.link_update =		qedi_link_update,
+	}
+};
+
+static bool qedi_process_completions(struct qedi_fastpath *fp)
+{
+	struct qedi_work *qedi_work = NULL;
+	struct qedi_ctx *qedi = fp->qedi;
+	struct qed_sb_info *sb_info = fp->sb_info;
+	struct status_block *sb = sb_info->sb_virt;
+	struct qedi_percpu_s *p = NULL;
+	struct global_queue *que;
+	u16 prod_idx;
+	unsigned long flags;
+	union iscsi_cqe *cqe;
+	int cpu;
+
+	/* Get the current firmware producer index */
+	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
+
+	if (prod_idx >= QEDI_CQ_SIZE)
+		prod_idx = prod_idx % QEDI_CQ_SIZE;
+
+	que = qedi->global_queues[fp->sb_id];
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+		  "Before: global queue=%p prod_idx=%d cons_idx=%d, sb_id=%d\n",
+		  que, prod_idx, que->cq_cons_idx, fp->sb_id);
+
+	qedi->intr_cpu = fp->sb_id;
+	cpu = smp_processor_id();
+	p = &per_cpu(qedi_percpu, cpu);
+
+	WARN_ON(!p->iothread);
+
+	spin_lock_irqsave(&p->p_work_lock, flags);
+	while (que->cq_cons_idx != prod_idx) {
+		cqe = &que->cq[que->cq_cons_idx];
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+			  "cqe=%p prod_idx=%d cons_idx=%d.\n",
+			  cqe, prod_idx, que->cq_cons_idx);
+
+		/* Allocate a work item and copy the cqe into it */
+		qedi_work = kzalloc(sizeof(*qedi_work), GFP_ATOMIC);
+		if (!qedi_work) {
+			/* Stop draining rather than spin on the same CQE;
+			 * the remaining entries are picked up on the next
+			 * interrupt.
+			 */
+			WARN_ON(1);
+			break;
+		}
+
+		INIT_LIST_HEAD(&qedi_work->list);
+		qedi_work->qedi = qedi;
+		memcpy(&qedi_work->cqe, cqe, sizeof(union iscsi_cqe));
+		qedi_work->que_idx = fp->sb_id;
+		list_add_tail(&qedi_work->list, &p->work_list);
+
+		que->cq_cons_idx++;
+		if (que->cq_cons_idx == QEDI_CQ_SIZE)
+			que->cq_cons_idx = 0;
+	}
+	wake_up_process(p->iothread);
+	spin_unlock_irqrestore(&p->p_work_lock, flags);
+
+	return true;
+}
+
+static bool qedi_fp_has_work(struct qedi_fastpath *fp)
+{
+	struct qedi_ctx *qedi = fp->qedi;
+	struct global_queue *que;
+	struct qed_sb_info *sb_info = fp->sb_info;
+	struct status_block *sb = sb_info->sb_virt;
+	u16 prod_idx;
+
+	barrier();
+
+	/* Get the current firmware producer index */
+	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
+
+	/* Get the pointer to the global CQ this completion is on */
+	que = qedi->global_queues[fp->sb_id];
+
+	/* prod idx wrap around uint16 */
+	if (prod_idx >= QEDI_CQ_SIZE)
+		prod_idx = prod_idx % QEDI_CQ_SIZE;
+
+	return (que->cq_cons_idx != prod_idx);
+}
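qedi_fp_has_work() and qedi_process_completions() compare a free-running 16-bit firmware producer index, reduced modulo the CQ size, against the driver's consumer index, which is reset to zero when it reaches the ring size. The index arithmetic, sketched in userspace (the ring size below is an assumption, not the driver's QEDI_CQ_SIZE value):

```c
#include <stdint.h>

#define CQ_SIZE 1024u /* illustrative stand-in for QEDI_CQ_SIZE */

/* Reduce the free-running 16-bit producer index into ring range,
 * as the fastpath does before comparing against cq_cons_idx.
 */
static uint16_t ring_prod(uint16_t fw_prod_idx)
{
	return fw_prod_idx % CQ_SIZE;
}

/* Number of CQEs pending between the consumer index and the
 * (reduced) producer index, handling the wrap-around case.
 */
static uint16_t ring_pending(uint16_t fw_prod_idx, uint16_t cons_idx)
{
	uint16_t prod = ring_prod(fw_prod_idx);

	return (uint16_t)((prod + CQ_SIZE - cons_idx) % CQ_SIZE);
}
```

Because both sides are reduced modulo the same ring size, "has work" is simply `ring_prod(fw_prod_idx) != cons_idx`, which is exactly the comparison the driver makes.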
+
+/* MSI-X fastpath handler code */
+static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
+{
+	struct qedi_fastpath *fp = dev_id;
+	struct qedi_ctx *qedi = fp->qedi;
+	bool wake_io_thread = true;
+
+	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
+
+process_again:
+	wake_io_thread = qedi_process_completions(fp);
+	if (wake_io_thread) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+			  "process already running\n");
+	}
+
+	if (!qedi_fp_has_work(fp))
+		qed_sb_update_sb_idx(fp->sb_info);
+
+	/* Check for more work */
+	rmb();
+
+	if (!qedi_fp_has_work(fp))
+		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
+	else
+		goto process_again;
+
+	return IRQ_HANDLED;
+}
+
+/* simd handler for MSI/INTa */
+static void qedi_simd_int_handler(void *cookie)
+{
+	/* Cookie is qedi_ctx struct */
+	struct qedi_ctx *qedi = (struct qedi_ctx *)cookie;
+
+	QEDI_WARN(&qedi->dbg_ctx, "qedi=%p.\n", qedi);
+}
+
+#define QEDI_SIMD_HANDLER_NUM		0
+static void qedi_sync_free_irqs(struct qedi_ctx *qedi)
+{
+	int i;
+
+	if (qedi->int_info.msix_cnt) {
+		for (i = 0; i < qedi->int_info.used_cnt; i++) {
+			synchronize_irq(qedi->int_info.msix[i].vector);
+			irq_set_affinity_hint(qedi->int_info.msix[i].vector,
+					      NULL);
+			free_irq(qedi->int_info.msix[i].vector,
+				 &qedi->fp_array[i]);
+		}
+	} else {
+		qedi_ops->common->simd_handler_clean(qedi->cdev,
+						     QEDI_SIMD_HANDLER_NUM);
+	}
+
+	qedi->int_info.used_cnt = 0;
+	qedi_ops->common->set_fp_int(qedi->cdev, 0);
+}
+
+static int qedi_request_msix_irq(struct qedi_ctx *qedi)
+{
+	int i, rc, cpu;
+
+	cpu = cpumask_first(cpu_online_mask);
+	for (i = 0; i < MIN_NUM_CPUS_MSIX(qedi); i++) {
+		rc = request_irq(qedi->int_info.msix[i].vector,
+				 qedi_msix_handler, 0, "qedi",
+				 &qedi->fp_array[i]);
+
+		if (rc) {
+			QEDI_WARN(&qedi->dbg_ctx, "request_irq failed.\n");
+			qedi_sync_free_irqs(qedi);
+			return rc;
+		}
+		qedi->int_info.used_cnt++;
+		rc = irq_set_affinity_hint(qedi->int_info.msix[i].vector,
+					   get_cpu_mask(cpu));
+		cpu = cpumask_next(cpu, cpu_online_mask);
+	}
+
+	return 0;
+}
+
+static int qedi_setup_int(struct qedi_ctx *qedi)
+{
+	int rc = 0;
+
+	rc = qedi_ops->common->set_fp_int(qedi->cdev, num_online_cpus());
+	if (rc < 0)
+		goto exit_setup_int;
+
+	rc = qedi_ops->common->get_fp_int(qedi->cdev, &qedi->int_info);
+	if (rc)
+		goto exit_setup_int;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "Number of msix_cnt = 0x%x num of cpus = 0x%x\n",
+		   qedi->int_info.msix_cnt, num_online_cpus());
+
+	if (qedi->int_info.msix_cnt) {
+		rc = qedi_request_msix_irq(qedi);
+		goto exit_setup_int;
+	} else {
+		qedi_ops->common->simd_handler_config(qedi->cdev, &qedi,
+						      QEDI_SIMD_HANDLER_NUM,
+						      qedi_simd_int_handler);
+		qedi->int_info.used_cnt = 1;
+	}
+
+exit_setup_int:
+	return rc;
+}
+
+static void qedi_free_bdq(struct qedi_ctx *qedi)
+{
+	int i;
+
+	if (qedi->bdq_pbl_list)
+		dma_free_coherent(&qedi->pdev->dev, PAGE_SIZE,
+				  qedi->bdq_pbl_list, qedi->bdq_pbl_list_dma);
+
+	if (qedi->bdq_pbl)
+		dma_free_coherent(&qedi->pdev->dev, qedi->bdq_pbl_mem_size,
+				  qedi->bdq_pbl, qedi->bdq_pbl_dma);
+
+	for (i = 0; i < QEDI_BDQ_NUM; i++) {
+		if (qedi->bdq[i].buf_addr) {
+			dma_free_coherent(&qedi->pdev->dev, QEDI_BDQ_BUF_SIZE,
+					  qedi->bdq[i].buf_addr,
+					  qedi->bdq[i].buf_dma);
+		}
+	}
+}
+
+static void qedi_free_global_queues(struct qedi_ctx *qedi)
+{
+	int i;
+	struct global_queue **gl = qedi->global_queues;
+
+	for (i = 0; i < qedi->num_queues; i++) {
+		if (!gl[i])
+			continue;
+
+		if (gl[i]->cq)
+			dma_free_coherent(&qedi->pdev->dev, gl[i]->cq_mem_size,
+					  gl[i]->cq, gl[i]->cq_dma);
+		if (gl[i]->cq_pbl)
+			dma_free_coherent(&qedi->pdev->dev, gl[i]->cq_pbl_size,
+					  gl[i]->cq_pbl, gl[i]->cq_pbl_dma);
+
+		kfree(gl[i]);
+	}
+	qedi_free_bdq(qedi);
+}
+
+static int qedi_alloc_bdq(struct qedi_ctx *qedi)
+{
+	int i;
+	struct scsi_bd *pbl;
+	u64 *list;
+	dma_addr_t page;
+
+	/* Alloc dma memory for BDQ buffers */
+	for (i = 0; i < QEDI_BDQ_NUM; i++) {
+		qedi->bdq[i].buf_addr =
+				dma_alloc_coherent(&qedi->pdev->dev,
+						   QEDI_BDQ_BUF_SIZE,
+						   &qedi->bdq[i].buf_dma,
+						   GFP_KERNEL);
+		if (!qedi->bdq[i].buf_addr) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not allocate BDQ buffer %d.\n", i);
+			return -ENOMEM;
+		}
+	}
+
+	/* Alloc dma memory for BDQ page buffer list */
+	qedi->bdq_pbl_mem_size = QEDI_BDQ_NUM * sizeof(struct scsi_bd);
+	qedi->bdq_pbl_mem_size = ALIGN(qedi->bdq_pbl_mem_size, PAGE_SIZE);
+	qedi->rq_num_entries = qedi->bdq_pbl_mem_size / sizeof(struct scsi_bd);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN, "rq_num_entries = %d.\n",
+		  qedi->rq_num_entries);
+
+	qedi->bdq_pbl = dma_alloc_coherent(&qedi->pdev->dev,
+					   qedi->bdq_pbl_mem_size,
+					   &qedi->bdq_pbl_dma, GFP_KERNEL);
+	if (!qedi->bdq_pbl) {
+		QEDI_ERR(&qedi->dbg_ctx, "Could not allocate BDQ PBL.\n");
+		return -ENOMEM;
+	}
+
+	/*
+	 * Populate BDQ PBL with physical and virtual address of individual
+	 * BDQ buffers
+	 */
+	pbl = (struct scsi_bd  *)qedi->bdq_pbl;
+	for (i = 0; i < QEDI_BDQ_NUM; i++) {
+		pbl->address.hi =
+			cpu_to_le32((u32)(((u64)(qedi->bdq[i].buf_dma)) >> 32));
+		pbl->address.lo =
+			cpu_to_le32(((u32)(((u64)(qedi->bdq[i].buf_dma)) &
+					    0xffffffff)));
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "pbl [0x%p] pbl->address hi [0x%x] lo [0x%x], idx [%d]\n",
+			  pbl, pbl->address.hi, pbl->address.lo, i);
+		pbl->opaque.hi = cpu_to_le32((u32)(((u64)0) >> 32));
+		pbl->opaque.lo = cpu_to_le32(((u32)(((u64)i) & 0xffffffff)));
+		pbl++;
+	}
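Each PBL entry populated above carries a 64-bit DMA address as two little-endian 32-bit halves. The split and its inverse, sketched in userspace (the `cpu_to_le32()` byte-swap is omitted; on a little-endian host it is the identity):

```c
#include <stdint.h>

/* Split a 64-bit bus address into the hi/lo dwords a PBL entry stores. */
static void addr_split(uint64_t addr, uint32_t *hi, uint32_t *lo)
{
	*hi = (uint32_t)(addr >> 32);
	*lo = (uint32_t)(addr & 0xffffffffu);
}

/* Reassemble the 64-bit address from a PBL entry's two halves. */
static uint64_t addr_join(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```

The round trip is lossless, which is why the firmware can be handed page lists in this two-dword format regardless of the host's native pointer width.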
+
+	/* Allocate list of PBL pages */
+	qedi->bdq_pbl_list = dma_alloc_coherent(&qedi->pdev->dev,
+						PAGE_SIZE,
+						&qedi->bdq_pbl_list_dma,
+						GFP_KERNEL);
+	if (!qedi->bdq_pbl_list) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Could not allocate list of PBL pages.\n");
+		return -ENOMEM;
+	}
+	memset(qedi->bdq_pbl_list, 0, PAGE_SIZE);
+
+	/*
+	 * Now populate PBL list with pages that contain pointers to the
+	 * individual buffers.
+	 */
+	qedi->bdq_pbl_list_num_entries = qedi->bdq_pbl_mem_size / PAGE_SIZE;
+	list = (u64 *)qedi->bdq_pbl_list;
+	page = qedi->bdq_pbl_list_dma;
+	for (i = 0; i < qedi->bdq_pbl_list_num_entries; i++) {
+		*list = qedi->bdq_pbl_dma;
+		list++;
+		page += PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+{
+	u32 *list;
+	int i;
+	int status = 0, rc;
+	u32 *pbl;
+	dma_addr_t page;
+	int num_pages;
+
+	/*
+	 * Number of global queues (CQ / RQ). This should
+	 * be <= number of available MSIX vectors for the PF
+	 */
+	if (!qedi->num_queues) {
+		QEDI_ERR(&qedi->dbg_ctx, "No MSI-X vectors available!\n");
+		return -EINVAL;
+	}
+
+	/* Make sure we allocated the PBL that will contain the physical
+	 * addresses of our queues
+	 */
+	if (!qedi->p_cpuq) {
+		status = -EINVAL;
+		goto mem_alloc_failure;
+	}
+
+	qedi->global_queues = kcalloc(qedi->num_queues,
+				      sizeof(struct global_queue *),
+				      GFP_KERNEL);
+	if (!qedi->global_queues) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Unable to allocate global queues array ptr memory\n");
+		return -ENOMEM;
+	}
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "qedi->global_queues=%p.\n", qedi->global_queues);
+
+	/* Allocate DMA coherent buffers for BDQ */
+	rc = qedi_alloc_bdq(qedi);
+	if (rc)
+		goto mem_alloc_failure;
+
+	/* Allocate a CQ and an associated PBL for each MSI-X
+	 * vector.
+	 */
+	for (i = 0; i < qedi->num_queues; i++) {
+		qedi->global_queues[i] =
+					kzalloc(sizeof(*qedi->global_queues[0]),
+						GFP_KERNEL);
+		if (!qedi->global_queues[i]) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Unable to allocate global queue %d.\n", i);
+			goto mem_alloc_failure;
+		}
+
+		qedi->global_queues[i]->cq_mem_size =
+		    (QEDI_CQ_SIZE + 8) * sizeof(union iscsi_cqe);
+		qedi->global_queues[i]->cq_mem_size =
+		    (qedi->global_queues[i]->cq_mem_size +
+		    (QEDI_PAGE_SIZE - 1));
+
+		qedi->global_queues[i]->cq_pbl_size =
+		    (qedi->global_queues[i]->cq_mem_size /
+		    QEDI_PAGE_SIZE) * sizeof(void *);
+		qedi->global_queues[i]->cq_pbl_size =
+		    (qedi->global_queues[i]->cq_pbl_size +
+		    (QEDI_PAGE_SIZE - 1));
+
+		qedi->global_queues[i]->cq =
+		    dma_alloc_coherent(&qedi->pdev->dev,
+				       qedi->global_queues[i]->cq_mem_size,
+				       &qedi->global_queues[i]->cq_dma,
+				       GFP_KERNEL);
+
+		if (!qedi->global_queues[i]->cq) {
+			QEDI_WARN(&qedi->dbg_ctx,
+				  "Could not allocate cq.\n");
+			status = -ENOMEM;
+			goto mem_alloc_failure;
+		}
+		memset(qedi->global_queues[i]->cq, 0,
+		       qedi->global_queues[i]->cq_mem_size);
+
+		qedi->global_queues[i]->cq_pbl =
+		    dma_alloc_coherent(&qedi->pdev->dev,
+				       qedi->global_queues[i]->cq_pbl_size,
+				       &qedi->global_queues[i]->cq_pbl_dma,
+				       GFP_KERNEL);
+
+		if (!qedi->global_queues[i]->cq_pbl) {
+			QEDI_WARN(&qedi->dbg_ctx,
+				  "Could not allocate cq PBL.\n");
+			status = -ENOMEM;
+			goto mem_alloc_failure;
+		}
+		memset(qedi->global_queues[i]->cq_pbl, 0,
+		       qedi->global_queues[i]->cq_pbl_size);
+
+		/* Create PBL */
+		num_pages = qedi->global_queues[i]->cq_mem_size /
+		    QEDI_PAGE_SIZE;
+		page = qedi->global_queues[i]->cq_dma;
+		pbl = (u32 *)qedi->global_queues[i]->cq_pbl;
+
+		while (num_pages--) {
+			*pbl = (u32)page;
+			pbl++;
+			*pbl = (u32)((u64)page >> 32);
+			pbl++;
+			page += QEDI_PAGE_SIZE;
+		}
+	}
+
+	list = (u32 *)qedi->p_cpuq;
+
+	/*
+	 * The list is built as follows: CQ#0 PBL pointer, RQ#0 PBL pointer,
+	 * CQ#1 PBL pointer, RQ#1 PBL pointer, etc.  Each PBL pointer points
+	 * to the physical address which contains an array of pointers to the
+	 * physical addresses of the specific queue pages.
+	 */
+	for (i = 0; i < qedi->num_queues; i++) {
+		*list = (u32)qedi->global_queues[i]->cq_pbl_dma;
+		list++;
+		*list = (u32)((u64)qedi->global_queues[i]->cq_pbl_dma >> 32);
+		list++;
+
+		*list = (u32)0;
+		list++;
+		*list = (u32)((u64)0 >> 32);
+		list++;
+	}
+
+	return 0;
+
+mem_alloc_failure:
+	qedi_free_global_queues(qedi);
+	return status;
+}
+
+static int qedi_alloc_itt(struct qedi_ctx *qedi)
+{
+	qedi->itt_map = kcalloc(MAX_ISCSI_TASK_ENTRIES,
+				sizeof(struct qedi_itt_map), GFP_KERNEL);
+	if (!qedi->itt_map) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Unable to allocate itt map array memory\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static void qedi_free_itt(struct qedi_ctx *qedi)
+{
+	kfree(qedi->itt_map);
+}
+
+static struct qed_ll2_cb_ops qedi_ll2_cb_ops = {
+	.rx_cb = qedi_ll2_rx,
+	.tx_cb = NULL,
+};
+
+static int qedi_percpu_io_thread(void *arg)
+{
+	struct qedi_percpu_s *p = arg;
+	struct qedi_work *work, *tmp;
+	unsigned long flags;
+	LIST_HEAD(work_list);
+
+	set_user_nice(current, -20);
+
+	while (!kthread_should_stop()) {
+		spin_lock_irqsave(&p->p_work_lock, flags);
+		while (!list_empty(&p->work_list)) {
+			list_splice_init(&p->work_list, &work_list);
+			spin_unlock_irqrestore(&p->p_work_lock, flags);
+
+			list_for_each_entry_safe(work, tmp, &work_list, list) {
+				list_del_init(&work->list);
+				qedi_fp_process_cqes(work->qedi, &work->cqe,
+						     work->que_idx);
+				kfree(work);
+			}
+			spin_lock_irqsave(&p->p_work_lock, flags);
+		}
+		set_current_state(TASK_INTERRUPTIBLE);
+		spin_unlock_irqrestore(&p->p_work_lock, flags);
+		schedule();
+	}
+	__set_current_state(TASK_RUNNING);
+
+	return 0;
+}
+
+static void qedi_percpu_thread_create(unsigned int cpu)
+{
+	struct qedi_percpu_s *p;
+	struct task_struct *thread;
+
+	p = &per_cpu(qedi_percpu, cpu);
+
+	thread = kthread_create_on_node(qedi_percpu_io_thread, (void *)p,
+					cpu_to_node(cpu),
+					"qedi_thread/%d", cpu);
+	if (likely(!IS_ERR(thread))) {
+		kthread_bind(thread, cpu);
+		p->iothread = thread;
+		wake_up_process(thread);
+	}
+}
+
+static void qedi_percpu_thread_destroy(unsigned int cpu)
+{
+	struct qedi_percpu_s *p;
+	struct task_struct *thread;
+	struct qedi_work *work, *tmp;
+
+	p = &per_cpu(qedi_percpu, cpu);
+	spin_lock_bh(&p->p_work_lock);
+	thread = p->iothread;
+	p->iothread = NULL;
+
+	list_for_each_entry_safe(work, tmp, &p->work_list, list) {
+		list_del_init(&work->list);
+		qedi_fp_process_cqes(work->qedi, &work->cqe, work->que_idx);
+		kfree(work);
+	}
+
+	spin_unlock_bh(&p->p_work_lock);
+	if (thread)
+		kthread_stop(thread);
+}
+
+static int qedi_cpu_callback(struct notifier_block *nfb,
+			     unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (unsigned long)hcpu;
+
+	switch (action) {
+	case CPU_ONLINE:
+	case CPU_ONLINE_FROZEN:
+		QEDI_INFO(NULL, QEDI_LOG_INFO, "CPU %d online.\n", cpu);
+		qedi_percpu_thread_create(cpu);
+		break;
+	case CPU_DEAD:
+	case CPU_DEAD_FROZEN:
+		QEDI_INFO(NULL, QEDI_LOG_INFO, "CPU %d offline.\n", cpu);
+		qedi_percpu_thread_destroy(cpu);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block qedi_cpu_notifier = {
+	.notifier_call = qedi_cpu_callback,
+};
+
+static void __qedi_remove(struct pci_dev *pdev, int mode)
+{
+	struct qedi_ctx *qedi = pci_get_drvdata(pdev);
+
+	if (qedi->tmf_thread) {
+		flush_workqueue(qedi->tmf_thread);
+		destroy_workqueue(qedi->tmf_thread);
+		qedi->tmf_thread = NULL;
+	}
+
+	if (qedi->offload_thread) {
+		flush_workqueue(qedi->offload_thread);
+		destroy_workqueue(qedi->offload_thread);
+		qedi->offload_thread = NULL;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_host_exit(&qedi->dbg_ctx);
+#endif
+	if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags))
+		qedi_ops->common->set_power_state(qedi->cdev, PCI_D0);
+
+	qedi_sync_free_irqs(qedi);
+
+	if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags)) {
+		qedi_ops->stop(qedi->cdev);
+		qedi_ops->ll2->stop(qedi->cdev);
+	}
+
+	if (mode == QEDI_MODE_NORMAL)
+		qedi_free_iscsi_pf_param(qedi);
+
+	if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags)) {
+		qedi_ops->common->slowpath_stop(qedi->cdev);
+		qedi_ops->common->remove(qedi->cdev);
+	}
+
+	qedi_destroy_fp(qedi);
+
+	if (mode == QEDI_MODE_NORMAL) {
+		qedi_release_cid_que(qedi);
+		qedi_cm_free_mem(qedi);
+		qedi_free_uio(qedi->udev);
+		qedi_free_itt(qedi);
+
+		iscsi_host_remove(qedi->shost);
+		iscsi_host_free(qedi->shost);
+
+		if (qedi->ll2_recv_thread) {
+			kthread_stop(qedi->ll2_recv_thread);
+			qedi->ll2_recv_thread = NULL;
+		}
+		qedi_ll2_free_skbs(qedi);
+	}
+}
+
+static int __qedi_probe(struct pci_dev *pdev, int mode)
+{
+	struct qedi_ctx *qedi;
+	struct qed_ll2_params params;
+	u32 dp_module = 0;
+	u8 dp_level = 0;
+	bool is_vf = false;
+	char host_buf[16];
+	struct qed_link_params link_params;
+	struct qed_slowpath_params sp_params;
+	struct qed_probe_params qed_params;
+	void *task_start, *task_end;
+	int rc;
+	u16 tmp;
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		qedi = qedi_host_alloc(pdev);
+		if (!qedi) {
+			rc = -ENOMEM;
+			goto exit_probe;
+		}
+	} else {
+		qedi = pci_get_drvdata(pdev);
+	}
+
+	memset(&qed_params, 0, sizeof(qed_params));
+	qed_params.protocol = QED_PROTOCOL_ISCSI;
+	qed_params.dp_module = dp_module;
+	qed_params.dp_level = dp_level;
+	qed_params.is_vf = is_vf;
+	qedi->cdev = qedi_ops->common->probe(pdev, &qed_params);
+	if (!qedi->cdev) {
+		rc = -ENODEV;
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot initialize hardware\n");
+		goto free_host;
+	}
+
+	qedi->msix_count = MAX_NUM_MSIX_PF;
+	atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		rc = qedi_set_iscsi_pf_param(qedi);
+		if (rc) {
+			rc = -ENOMEM;
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Set iSCSI pf param fail\n");
+			goto free_host;
+		}
+	}
+
+	qedi_ops->common->update_pf_params(qedi->cdev, &qedi->pf_params);
+
+	rc = qedi_prepare_fp(qedi);
+	if (rc) {
+		QEDI_ERR(&qedi->dbg_ctx, "Fastpath preparation failed.\n");
+		goto free_pf_params;
+	}
+
+	/* Start the Slowpath-process */
+	memset(&sp_params, 0, sizeof(struct qed_slowpath_params));
+	sp_params.int_mode = qedi_int_mode_to_enum();
+	sp_params.drv_major = QEDI_DRIVER_MAJOR_VER;
+	sp_params.drv_minor = QEDI_DRIVER_MINOR_VER;
+	sp_params.drv_rev = QEDI_DRIVER_REV_VER;
+	sp_params.drv_eng = QEDI_DRIVER_ENG_VER;
+	strlcpy(sp_params.name, "qedi iSCSI", QED_DRV_VER_STR_SIZE);
+	rc = qedi_ops->common->slowpath_start(qedi->cdev, &sp_params);
+	if (rc) {
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot start slowpath\n");
+		goto stop_hw;
+	}
+
+	/* update_pf_params needs to be called before and after slowpath
+	 * start
+	 */
+	qedi_ops->common->update_pf_params(qedi->cdev, &qedi->pf_params);
+
+	rc = qedi_setup_int(qedi);
+	if (rc)
+		goto stop_iscsi_func;
+
+	qedi_ops->common->set_power_state(qedi->cdev, PCI_D0);
+
+	/* Learn information crucial for qedi to progress */
+	rc = qedi_ops->fill_dev_info(qedi->cdev, &qedi->dev_info);
+	if (rc)
+		goto stop_iscsi_func;
+
+	/* Record BDQ producer doorbell addresses */
+	qedi->bdq_primary_prod = qedi->dev_info.primary_dbq_rq_addr;
+	qedi->bdq_secondary_prod = qedi->dev_info.secondary_bdq_rq_addr;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "BDQ primary_prod=%p secondary_prod=%p.\n",
+		  qedi->bdq_primary_prod,
+		  qedi->bdq_secondary_prod);
+
+	/*
+	 * We need to write the number of BDs in the BDQ we've preallocated so
+	 * the f/w will do a prefetch and we'll get an unsolicited CQE when a
+	 * packet arrives.
+	 */
+	qedi->bdq_prod_idx = QEDI_BDQ_NUM;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "Writing %d to primary and secondary BDQ doorbell registers.\n",
+		  qedi->bdq_prod_idx);
+	writew(qedi->bdq_prod_idx, qedi->bdq_primary_prod);
+	tmp = readw(qedi->bdq_primary_prod);
+	writew(qedi->bdq_prod_idx, qedi->bdq_secondary_prod);
+	tmp = readw(qedi->bdq_secondary_prod);
+
+	ether_addr_copy(qedi->mac, qedi->dev_info.common.hw_mac);
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC, "MAC address is %pM.\n",
+		  qedi->mac);
+
+	sprintf(host_buf, "host_%d", qedi->shost->host_no);
+	qedi_ops->common->set_id(qedi->cdev, host_buf, QEDI_MODULE_VERSION);
+
+	qedi_ops->register_ops(qedi->cdev, &qedi_cb_ops, qedi);
+
+	memset(&params, 0, sizeof(params));
+	params.mtu = DEF_PATH_MTU + IPV6_HDR_LEN + TCP_HDR_LEN;
+	qedi->ll2_mtu = DEF_PATH_MTU;
+	params.drop_ttl0_packets = 0;
+	params.rx_vlan_stripping = 1;
+	ether_addr_copy(params.ll2_mac_address, qedi->dev_info.common.hw_mac);
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		/* set up rx path */
+		INIT_LIST_HEAD(&qedi->ll2_skb_list);
+		spin_lock_init(&qedi->ll2_lock);
+		/* start qedi context */
+		spin_lock_init(&qedi->hba_lock);
+		spin_lock_init(&qedi->task_idx_lock);
+	}
+	qedi_ops->ll2->register_cb_ops(qedi->cdev, &qedi_ll2_cb_ops, qedi);
+	qedi_ops->ll2->start(qedi->cdev, &params);
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		qedi->ll2_recv_thread = kthread_run(qedi_ll2_recv_thread,
+						    (void *)qedi,
+						    "qedi_ll2_thread");
+	}
+
+	rc = qedi_ops->start(qedi->cdev, &qedi->tasks,
+			     qedi, qedi_iscsi_event_cb);
+	if (rc) {
+		rc = -ENODEV;
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot start iSCSI function\n");
+		goto stop_slowpath;
+	}
+
+	task_start = qedi_get_task_mem(&qedi->tasks, 0);
+	task_end = qedi_get_task_mem(&qedi->tasks, MAX_TID_BLOCKS_ISCSI - 1);
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "Task context start=%p, end=%p block_size=%u.\n",
+		   task_start, task_end, qedi->tasks.size);
+
+	memset(&link_params, 0, sizeof(link_params));
+	link_params.link_up = true;
+	rc = qedi_ops->common->set_link(qedi->cdev, &link_params);
+	if (rc) {
+		QEDI_WARN(&qedi->dbg_ctx, "Link set up failed.\n");
+		atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_host_init(&qedi->dbg_ctx, &qedi_debugfs_ops,
+			   &qedi_dbg_fops);
+#endif
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		if (iscsi_host_add(qedi->shost, &pdev->dev)) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not add iscsi host\n");
+			rc = -ENOMEM;
+			goto remove_host;
+		}
+
+		/* Allocate uio buffers */
+		rc = qedi_alloc_uio_rings(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "UIO alloc ring failed err=%d\n", rc);
+			goto remove_host;
+		}
+
+		rc = qedi_init_uio(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "UIO init failed, err=%d\n", rc);
+			goto free_uio;
+		}
+
+		/* host the array on iscsi_conn */
+		rc = qedi_setup_cid_que(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not setup cid que\n");
+			goto free_uio;
+		}
+
+		rc = qedi_cm_alloc_mem(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not alloc cm memory\n");
+			goto free_cid_que;
+		}
+
+		rc = qedi_alloc_itt(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not alloc itt memory\n");
+			goto free_cid_que;
+		}
+
+		sprintf(host_buf, "host_%d", qedi->shost->host_no);
+		qedi->tmf_thread = create_singlethread_workqueue(host_buf);
+		if (!qedi->tmf_thread) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Unable to start tmf thread!\n");
+			rc = -ENODEV;
+			goto free_cid_que;
+		}
+
+		sprintf(host_buf, "qedi_ofld%d", qedi->shost->host_no);
+		qedi->offload_thread = create_workqueue(host_buf);
+		if (!qedi->offload_thread) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Unable to start offload thread!\n");
+			rc = -ENODEV;
+			goto free_cid_que;
+		}
+
+		/* F/w needs 1st task context memory entry for performance */
+		set_bit(QEDI_RESERVE_TASK_ID, qedi->task_idx_map);
+		atomic_set(&qedi->num_offloads, 0);
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "QLogic FastLinQ iSCSI Module qedi %s, FW %d.%d.%d.%d\n",
+		  QEDI_MODULE_VERSION, FW_MAJOR_VERSION, FW_MINOR_VERSION,
+		   FW_REVISION_VERSION, FW_ENGINEERING_VERSION);
+	return 0;
+
+free_cid_que:
+	qedi_release_cid_que(qedi);
+free_uio:
+	qedi_free_uio(qedi->udev);
+remove_host:
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_host_exit(&qedi->dbg_ctx);
+#endif
+	iscsi_host_remove(qedi->shost);
+stop_iscsi_func:
+	qedi_ops->stop(qedi->cdev);
+stop_slowpath:
+	qedi_ops->common->slowpath_stop(qedi->cdev);
+stop_hw:
+	qedi_ops->common->remove(qedi->cdev);
+free_pf_params:
+	qedi_free_iscsi_pf_param(qedi);
+free_host:
+	iscsi_host_free(qedi->shost);
+exit_probe:
+	return rc;
+}
+
+static int qedi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	return __qedi_probe(pdev, QEDI_MODE_NORMAL);
+}
+
+static void qedi_remove(struct pci_dev *pdev)
+{
+	__qedi_remove(pdev, QEDI_MODE_NORMAL);
+}
+
+static const struct pci_device_id qedi_pci_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, 0x165E) },
+	{ 0 },
+};
+MODULE_DEVICE_TABLE(pci, qedi_pci_tbl);
+
+static struct pci_driver qedi_pci_driver = {
+	.name = QEDI_MODULE_NAME,
+	.id_table = qedi_pci_tbl,
+	.probe = qedi_probe,
+	.remove = qedi_remove,
+};
+
+static int __init qedi_init(void)
+{
+	int rc = 0;
+	int ret;
+	struct qedi_percpu_s *p;
+	unsigned int cpu = 0;
+
+	qedi_ops = qed_get_iscsi_ops();
+	if (!qedi_ops) {
+		QEDI_ERR(NULL, "Failed to get qed iSCSI operations\n");
+		rc = -EINVAL;
+		goto exit_qedi_init_0;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_init("qedi");
+#endif
+
+	register_hotcpu_notifier(&qedi_cpu_notifier);
+
+	ret = pci_register_driver(&qedi_pci_driver);
+	if (ret) {
+		QEDI_ERR(NULL, "Failed to register driver\n");
+		rc = ret;
+		goto exit_qedi_init_2;
+	}
+
+	for_each_possible_cpu(cpu) {
+		p = &per_cpu(qedi_percpu, cpu);
+		INIT_LIST_HEAD(&p->work_list);
+		spin_lock_init(&p->p_work_lock);
+		p->iothread = NULL;
+	}
+
+	for_each_online_cpu(cpu)
+		qedi_percpu_thread_create(cpu);
+
+	return rc;
+
+exit_qedi_init_2:
+	unregister_hotcpu_notifier(&qedi_cpu_notifier);
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_exit();
+#endif
+	qed_put_iscsi_ops();
+exit_qedi_init_0:
+	return rc;
+}
+
+static void __exit qedi_cleanup(void)
+{
+	unsigned int cpu = 0;
+
+	for_each_online_cpu(cpu)
+		qedi_percpu_thread_destroy(cpu);
+
+	pci_unregister_driver(&qedi_pci_driver);
+	unregister_hotcpu_notifier(&qedi_cpu_notifier);
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_exit();
+#endif
+	qed_put_iscsi_ops();
+}
+
+MODULE_DESCRIPTION("QLogic FastLinQ 4xxxx iSCSI Module");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("QLogic Corporation");
+MODULE_VERSION(QEDI_MODULE_VERSION);
+module_init(qedi_init);
+module_exit(qedi_cleanup);
diff --git a/drivers/scsi/qedi/qedi_sysfs.c b/drivers/scsi/qedi/qedi_sysfs.c
new file mode 100644
index 0000000..a2cc3ed
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_sysfs.c
@@ -0,0 +1,52 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include "qedi.h"
+#include "qedi_gbl.h"
+#include "qedi_iscsi.h"
+#include "qedi_dbg.h"
+
+static inline struct qedi_ctx *qedi_dev_to_hba(struct device *dev)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+
+	return iscsi_host_priv(shost);
+}
+
+static ssize_t qedi_show_port_state(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
+
+	if (atomic_read(&qedi->link_state) == QEDI_LINK_UP)
+		return sprintf(buf, "Online\n");
+	else
+		return sprintf(buf, "Linkdown\n");
+}
+
+static ssize_t qedi_show_speed(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
+	struct qed_link_output if_link;
+
+	qedi_ops->common->get_link(qedi->cdev, &if_link);
+
+	return sprintf(buf, "%d Gbit\n", if_link.speed / 1000);
+}
+
+static DEVICE_ATTR(port_state, S_IRUGO, qedi_show_port_state, NULL);
+static DEVICE_ATTR(speed, S_IRUGO, qedi_show_speed, NULL);
+
+struct device_attribute *qedi_shost_attrs[] = {
+	&dev_attr_port_state,
+	&dev_attr_speed,
+	NULL
+};
diff --git a/drivers/scsi/qedi/qedi_version.h b/drivers/scsi/qedi/qedi_version.h
new file mode 100644
index 0000000..9543a1b
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_version.h
@@ -0,0 +1,14 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#define QEDI_MODULE_VERSION	"8.10.3.0"
+#define QEDI_DRIVER_MAJOR_VER		8
+#define QEDI_DRIVER_MINOR_VER		10
+#define QEDI_DRIVER_REV_VER		3
+#define QEDI_DRIVER_ENG_VER		0
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
for 41000 Series Converged Network Adapters by QLogic.

This patch consists of the following changes:
  - MAINTAINERS Makefile and Kconfig changes for qedi,
  - PCI driver registration,
  - iSCSI host level initialization,
  - Debugfs and log level infrastructure.
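As a quick illustration of the log level infrastructure (a stand-alone sketch, not driver code): each message category has a bit in the module-level 'debug' mask, and a message is emitted only when its category bit is enabled. The mask values below mirror a few of the QEDI_LOG_* defines in qedi_dbg.h; the should_log() helper is hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Category bits, mirroring a few QEDI_LOG_* masks from qedi_dbg.h */
#define LOG_INFO  0x2u        /* informational logs */
#define LOG_DISC  0x4u        /* init, discovery */
#define LOG_WARN  0x80000000u /* warning logs */

/* Analogue of the module's 'debug' parameter: the enabled categories */
static uint32_t debug_mask = LOG_INFO | LOG_WARN;

/* A message is printed only when its category bit is set in the mask */
static bool should_log(uint32_t level)
{
	return (debug_mask & level) != 0;
}
```

With the mask above, discovery (LOG_DISC) messages are filtered out while informational and warning messages pass.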

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 MAINTAINERS                         |    6 +
 drivers/net/ethernet/qlogic/Kconfig |   12 -
 drivers/scsi/Kconfig                |    1 +
 drivers/scsi/Makefile               |    1 +
 drivers/scsi/qedi/Kconfig           |   10 +
 drivers/scsi/qedi/Makefile          |    5 +
 drivers/scsi/qedi/qedi.h            |  286 +++++++
 drivers/scsi/qedi/qedi_dbg.c        |  143 ++++
 drivers/scsi/qedi/qedi_dbg.h        |  144 ++++
 drivers/scsi/qedi/qedi_debugfs.c    |  244 ++++++
 drivers/scsi/qedi/qedi_hsi.h        |   52 ++
 drivers/scsi/qedi/qedi_main.c       | 1550 +++++++++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_sysfs.c      |   52 ++
 drivers/scsi/qedi/qedi_version.h    |   14 +
 14 files changed, 2508 insertions(+), 12 deletions(-)
 create mode 100644 drivers/scsi/qedi/Kconfig
 create mode 100644 drivers/scsi/qedi/Makefile
 create mode 100644 drivers/scsi/qedi/qedi.h
 create mode 100644 drivers/scsi/qedi/qedi_dbg.c
 create mode 100644 drivers/scsi/qedi/qedi_dbg.h
 create mode 100644 drivers/scsi/qedi/qedi_debugfs.c
 create mode 100644 drivers/scsi/qedi/qedi_hsi.h
 create mode 100644 drivers/scsi/qedi/qedi_main.c
 create mode 100644 drivers/scsi/qedi/qedi_sysfs.c
 create mode 100644 drivers/scsi/qedi/qedi_version.h

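One implementation note for reviewers: in qedi_alloc_global_queues() (qedi_main.c), each 64-bit queue-page DMA address is written into the page base list (PBL) as two consecutive 32-bit words, low half first. A stand-alone sketch of that layout (illustrative only; pbl_write_addr() is not a driver function):

```c
#include <stdint.h>

/* Store a 64-bit DMA address as two consecutive 32-bit PBL entries,
 * low word first, matching the
 *   *pbl++ = (u32)page; *pbl++ = (u32)((u64)page >> 32);
 * pattern used when filling the CQ page base lists.
 */
static void pbl_write_addr(uint32_t *pbl, uint64_t addr)
{
	pbl[0] = (uint32_t)addr;         /* low 32 bits */
	pbl[1] = (uint32_t)(addr >> 32); /* high 32 bits */
}
```

The same low-word-first split is used for the per-CPU queue pointer list handed to the firmware.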
diff --git a/MAINTAINERS b/MAINTAINERS
index 5e925a2..906d05f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9909,6 +9909,12 @@ F:	drivers/net/ethernet/qlogic/qed/
 F:	include/linux/qed/
 F:	drivers/net/ethernet/qlogic/qede/
 
+QLOGIC QL41xxx ISCSI DRIVER
+M:	QLogic-Storage-Upstream@cavium.com
+L:	linux-scsi@vger.kernel.org
+S:	Supported
+F:	drivers/scsi/qedi/
+
 QNX4 FILESYSTEM
 M:	Anders Larsen <al@alarsen.net>
 W:	http://www.alarsen.net/linux/qnx4fs/
diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
index bad4fae..28b4366 100644
--- a/drivers/net/ethernet/qlogic/Kconfig
+++ b/drivers/net/ethernet/qlogic/Kconfig
@@ -121,16 +121,4 @@ config INFINIBAND_QEDR
 config QED_ISCSI
 	bool
 
-config QEDI
-	tristate "QLogic QED 25/40/100Gb iSCSI driver"
-	depends on QED
-	select QED_LL2
-	select QED_ISCSI
-	default n
-	---help---
-	  This provides a temporary node that allows the compilation
-	  and logical testing of the hardware offload iSCSI support
-	  for QLogic QED. This would be replaced by the 'real' option
-	  once the QEDI driver is added [+relocated].
-
 endif # NET_VENDOR_QLOGIC
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 3e2bdb9..5cf03db 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1254,6 +1254,7 @@ config SCSI_QLOGICPTI
 
 source "drivers/scsi/qla2xxx/Kconfig"
 source "drivers/scsi/qla4xxx/Kconfig"
+source "drivers/scsi/qedi/Kconfig"
 
 config SCSI_LPFC
 	tristate "Emulex LightPulse Fibre Channel Support"
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 38d938d..da9e312 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -132,6 +132,7 @@ obj-$(CONFIG_PS3_ROM)		+= ps3rom.o
 obj-$(CONFIG_SCSI_CXGB3_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
 obj-$(CONFIG_SCSI_CXGB4_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
 obj-$(CONFIG_SCSI_BNX2_ISCSI)	+= libiscsi.o bnx2i/
+obj-$(CONFIG_QEDI)		+= libiscsi.o qedi/
 obj-$(CONFIG_BE2ISCSI)		+= libiscsi.o be2iscsi/
 obj-$(CONFIG_SCSI_ESAS2R)	+= esas2r/
 obj-$(CONFIG_SCSI_PMCRAID)	+= pmcraid.o
diff --git a/drivers/scsi/qedi/Kconfig b/drivers/scsi/qedi/Kconfig
new file mode 100644
index 0000000..23ca8a2
--- /dev/null
+++ b/drivers/scsi/qedi/Kconfig
@@ -0,0 +1,10 @@
+config QEDI
+	tristate "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver Support"
+	depends on PCI && SCSI
+	depends on QED
+	select SCSI_ISCSI_ATTRS
+	select QED_LL2
+	select QED_ISCSI
+	---help---
+	This driver supports iSCSI offload for the QLogic FastLinQ
+	41000 Series Converged Network Adapters.
diff --git a/drivers/scsi/qedi/Makefile b/drivers/scsi/qedi/Makefile
new file mode 100644
index 0000000..2b3e16b
--- /dev/null
+++ b/drivers/scsi/qedi/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_QEDI) := qedi.o
+qedi-y := qedi_main.o qedi_iscsi.o qedi_fw.o qedi_sysfs.o \
+	    qedi_dbg.o
+
+qedi-$(CONFIG_DEBUG_FS) += qedi_debugfs.o
diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
new file mode 100644
index 0000000..0a5035e
--- /dev/null
+++ b/drivers/scsi/qedi/qedi.h
@@ -0,0 +1,286 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_H_
+#define _QEDI_H_
+
+#define __PREVENT_QED_HSI__
+
+#include <scsi/scsi_transport_iscsi.h>
+#include <scsi/libiscsi.h>
+#include <scsi/scsi_host.h>
+#include <linux/uio_driver.h>
+
+#include "qedi_hsi.h"
+#include <linux/qed/qed_if.h>
+#include "qedi_dbg.h"
+#include <linux/qed/qed_iscsi_if.h>
+#include "qedi_version.h"
+
+#define QEDI_MODULE_NAME		"qedi"
+
+struct qedi_endpoint;
+
+/*
+ * PCI function probe defines
+ */
+#define QEDI_MODE_NORMAL	0
+#define QEDI_MODE_RECOVERY	1
+
+#define ISCSI_WQE_SET_PTU_INVALIDATE	1
+#define QEDI_MAX_ISCSI_TASK		4096
+#define QEDI_MAX_TASK_NUM		0x0FFF
+#define QEDI_MAX_ISCSI_CONNS_PER_HBA	1024
+#define QEDI_ISCSI_MAX_BDS_PER_CMD	256	/* Firmware max BDs is 256 */
+#define MAX_OUSTANDING_TASKS_PER_CON	1024
+
+#define QEDI_MAX_BD_LEN		0xffff
+#define QEDI_BD_SPLIT_SZ	0x1000
+#define QEDI_PAGE_SIZE		4096
+#define QEDI_FAST_SGE_COUNT	4
+/* MAX Length for cached SGL */
+#define MAX_SGLEN_FOR_CACHESGL	((1U << 16) - 1)
+
+#define MAX_NUM_MSIX_PF         8
+#define MIN_NUM_CPUS_MSIX(x)	min(x->msix_count, num_online_cpus())
+
+#define QEDI_LOCAL_PORT_MIN     60000
+#define QEDI_LOCAL_PORT_MAX     61024
+#define QEDI_LOCAL_PORT_RANGE   (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
+#define QEDI_LOCAL_PORT_INVALID	0xffff
+
+/* Queue sizes in number of elements */
+#define QEDI_SQ_SIZE		MAX_OUSTANDING_TASKS_PER_CON
+#define QEDI_CQ_SIZE		2048
+#define QEDI_CMDQ_SIZE		QEDI_MAX_ISCSI_TASK
+#define QEDI_PROTO_CQ_PROD_IDX	0
+
+struct qedi_glbl_q_params {
+	u64 hw_p_cq;	/* Completion queue PBL */
+	u64 hw_p_rq;	/* Request queue PBL */
+	u64 hw_p_cmdq;	/* Command queue PBL */
+};
+
+struct global_queue {
+	union iscsi_cqe *cq;
+	dma_addr_t cq_dma;
+	u32 cq_mem_size;
+	u32 cq_cons_idx; /* Completion queue consumer index */
+
+	void *cq_pbl;
+	dma_addr_t cq_pbl_dma;
+	u32 cq_pbl_size;
+};
+
+struct qedi_fastpath {
+	struct qed_sb_info	*sb_info;
+	u16			sb_id;
+#define QEDI_NAME_SIZE		16
+	char			name[QEDI_NAME_SIZE];
+	struct qedi_ctx         *qedi;
+};
+
+/* Used to pass fastpath information needed to process CQEs */
+struct qedi_io_work {
+	struct list_head list;
+	struct iscsi_cqe_solicited cqe;
+	u16	que_idx;
+};
+
+/**
+ * struct iscsi_cid_queue - Per adapter iscsi cid queue
+ *
+ * @cid_que_base:           queue base memory
+ * @cid_que:                queue memory pointer
+ * @cid_q_prod_idx:         produce index
+ * @cid_q_cons_idx:         consumer index
+ * @cid_q_max_idx:          max index. used to detect wrap around condition
+ * @cid_free_cnt:           queue size
+ * @conn_cid_tbl:           iscsi cid to conn structure mapping table
+ *
+ * Per adapter iSCSI CID Queue
+ */
+struct iscsi_cid_queue {
+	void *cid_que_base;
+	u32 *cid_que;
+	u32 cid_q_prod_idx;
+	u32 cid_q_cons_idx;
+	u32 cid_q_max_idx;
+	u32 cid_free_cnt;
+	struct qedi_conn **conn_cid_tbl;
+};
+
+struct qedi_portid_tbl {
+	spinlock_t      lock;	/* Port id lock */
+	u16             start;
+	u16             max;
+	u16             next;
+	unsigned long   *table;
+};
+
+struct qedi_itt_map {
+	__le32	itt;
+};
+
+/* I/O tracing entry */
+#define QEDI_IO_TRACE_SIZE             2048
+struct qedi_io_log {
+#define QEDI_IO_TRACE_REQ              0
+#define QEDI_IO_TRACE_RSP              1
+	u8 direction;
+	u16 task_id;
+	u32 cid;
+	u32 port_id;	/* Remote port fabric ID */
+	int lun;
+	u8 op;		/* SCSI CDB */
+	u8 lba[4];
+	unsigned int bufflen;	/* SCSI buffer length */
+	unsigned int sg_count;	/* Number of SG elements */
+	u8 fast_sgs;		/* number of fast sgls */
+	u8 slow_sgs;		/* number of slow sgls */
+	u8 cached_sgs;		/* number of cached sgls */
+	int result;		/* Result passed back to mid-layer */
+	unsigned long jiffies;	/* Time stamp when I/O logged */
+	int refcount;		/* Reference count for task id */
+	unsigned int blk_req_cpu; /* CPU that the task is queued on by
+				   * blk layer
+				   */
+	unsigned int req_cpu;	/* CPU that the task is queued on */
+	unsigned int intr_cpu;	/* Interrupt CPU that the task is received on */
+	unsigned int blk_rsp_cpu;/* CPU that task is actually processed and
+				  * returned to blk layer
+				  */
+	bool cached_sge;
+	bool slow_sge;
+	bool fast_sge;
+};
+
+/* Number of entries in BDQ */
+#define QEDI_BDQ_NUM		256
+#define QEDI_BDQ_BUF_SIZE	256
+
+/* DMA coherent buffers for BDQ */
+struct qedi_bdq_buf {
+	void *buf_addr;
+	dma_addr_t buf_dma;
+};
+
+/* Main port level struct */
+struct qedi_ctx {
+	struct qedi_dbg_ctx dbg_ctx;
+	struct Scsi_Host *shost;
+	struct pci_dev *pdev;
+	struct qed_dev *cdev;
+	struct qed_dev_iscsi_info dev_info;
+	struct qed_int_info int_info;
+	struct qedi_glbl_q_params *p_cpuq;
+	struct global_queue **global_queues;
+	/* uio declaration */
+	struct qedi_uio_dev *udev;
+	struct list_head ll2_skb_list;
+	spinlock_t ll2_lock;	/* Light L2 lock */
+	spinlock_t hba_lock;	/* per port lock */
+	struct task_struct *ll2_recv_thread;
+	unsigned long flags;
+#define UIO_DEV_OPENED		1
+#define QEDI_IOTHREAD_WAKE	2
+#define QEDI_IN_RECOVERY	5
+#define QEDI_IN_OFFLINE		6
+
+	u8 mac[ETH_ALEN];
+	u32 src_ip[4];
+	u8 ip_type;
+
+	/* Physical address of above array */
+	u64 hw_p_cpuq;
+
+	struct qedi_bdq_buf bdq[QEDI_BDQ_NUM];
+	void *bdq_pbl;
+	dma_addr_t bdq_pbl_dma;
+	size_t bdq_pbl_mem_size;
+	void *bdq_pbl_list;
+	dma_addr_t bdq_pbl_list_dma;
+	u8 bdq_pbl_list_num_entries;
+	void __iomem *bdq_primary_prod;
+	void __iomem *bdq_secondary_prod;
+	u16 bdq_prod_idx;
+	u16 rq_num_entries;
+
+	u32 msix_count;
+	u32 max_sqes;
+	u8 num_queues;
+	u32 max_active_conns;
+
+	struct iscsi_cid_queue cid_que;
+	struct qedi_endpoint **ep_tbl;
+	struct qedi_portid_tbl lcl_port_tbl;
+
+	/* Rx fast path intr context */
+	struct qed_sb_info	*sb_array;
+	struct qedi_fastpath	*fp_array;
+	struct qed_iscsi_tid	tasks;
+
+#define QEDI_LINK_DOWN		0
+#define QEDI_LINK_UP		1
+	atomic_t link_state;
+
+#define QEDI_RESERVE_TASK_ID	0
+#define MAX_ISCSI_TASK_ENTRIES	4096
+#define QEDI_INVALID_TASK_ID	(MAX_ISCSI_TASK_ENTRIES + 1)
+	unsigned long task_idx_map[MAX_ISCSI_TASK_ENTRIES / BITS_PER_LONG];
+	struct qedi_itt_map *itt_map;
+	u16 tid_reuse_count[QEDI_MAX_ISCSI_TASK];
+	struct qed_pf_params pf_params;
+
+	struct workqueue_struct *tmf_thread;
+	struct workqueue_struct *offload_thread;
+
+	u16 ll2_mtu;
+
+	struct workqueue_struct *dpc_wq;
+
+	spinlock_t task_idx_lock;	/* To protect gbl context */
+	s32 last_tidx_alloc;
+	s32 last_tidx_clear;
+
+	struct qedi_io_log io_trace_buf[QEDI_IO_TRACE_SIZE];
+	spinlock_t io_trace_lock;	/* protect trace log buf */
+	u16 io_trace_idx;
+	unsigned int intr_cpu;
+	u32 cached_sgls;
+	bool use_cached_sge;
+	u32 slow_sgls;
+	bool use_slow_sge;
+	u32 fast_sgls;
+	bool use_fast_sge;
+
+	atomic_t num_offloads;
+};
+
+struct qedi_work {
+	struct list_head list;
+	struct qedi_ctx *qedi;
+	union iscsi_cqe cqe;
+	u16     que_idx;
+};
+
+struct qedi_percpu_s {
+	struct task_struct *iothread;
+	struct list_head work_list;
+	spinlock_t p_work_lock;		/* Per cpu worker lock */
+};
+
+static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32 tid)
+{
+	return (void *)(info->blocks[tid / info->num_tids_per_block] +
+			(tid % info->num_tids_per_block) * info->size);
+}
+
+#endif /* _QEDI_H_ */
diff --git a/drivers/scsi/qedi/qedi_dbg.c b/drivers/scsi/qedi/qedi_dbg.c
new file mode 100644
index 0000000..2678a15
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_dbg.c
@@ -0,0 +1,143 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include "qedi_dbg.h"
+#include <linux/vmalloc.h>
+
+void
+qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+	     const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	strncpy(nfunc, func, sizeof(nfunc) - 1);
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_crit("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+			nfunc, line, qedi->host_no, &vaf);
+	else
+		pr_crit("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+void
+qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+	      const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	strncpy(nfunc, func, sizeof(nfunc) - 1);
+
+	if (!(debug & QEDI_LOG_WARN))
+		return;
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+			nfunc, line, qedi->host_no, &vaf);
+	else
+		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+void
+qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+		const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	strncpy(nfunc, func, sizeof(nfunc) - 1);
+
+	if (!(debug & QEDI_LOG_NOTICE))
+		return;
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_notice("[%s]:[%s:%d]:%d: %pV",
+			  dev_name(&qedi->pdev->dev), nfunc, line,
+			  qedi->host_no, &vaf);
+	else
+		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+void
+qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
+	      u32 level, const char *fmt, ...)
+{
+	va_list va;
+	struct va_format vaf;
+	char nfunc[32];
+
+	memset(nfunc, 0, sizeof(nfunc));
+	strncpy(nfunc, func, sizeof(nfunc) - 1);
+
+	if (!(debug & level))
+		return;
+
+	va_start(va, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &va;
+
+	if (likely(qedi) && likely(qedi->pdev))
+		pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
+			nfunc, line, qedi->host_no, &vaf);
+	else
+		pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+
+	va_end(va);
+}
+
+int
+qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
+{
+	int ret = 0;
+
+	for (; iter->name; iter++) {
+		ret = sysfs_create_bin_file(&shost->shost_gendev.kobj,
+					    iter->attr);
+		if (ret)
+			pr_err("Unable to create sysfs %s attr, err(%d).\n",
+			       iter->name, ret);
+	}
+	return ret;
+}
+
+void
+qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
+{
+	for (; iter->name; iter++)
+		sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr);
+}
diff --git a/drivers/scsi/qedi/qedi_dbg.h b/drivers/scsi/qedi/qedi_dbg.h
new file mode 100644
index 0000000..5beb3ec
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_dbg.h
@@ -0,0 +1,144 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_DBG_H_
+#define _QEDI_DBG_H_
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/compiler.h>
+#include <linux/string.h>
+#include <linux/version.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_transport_iscsi.h>
+#include <linux/fs.h>
+
+#define __PREVENT_QED_HSI__
+#include <linux/qed/common_hsi.h>
+#include <linux/qed/qed_if.h>
+
+extern uint debug;
+
+/* Debug print level definitions */
+#define QEDI_LOG_DEFAULT	0x1		/* Set default logging mask */
+#define QEDI_LOG_INFO		0x2		/* Informational logs,
+						 * MAC address, WWPN, WWNN
+						 */
+#define QEDI_LOG_DISC		0x4		/* Init, discovery, rport */
+#define QEDI_LOG_LL2		0x8		/* LL2, VLAN logs */
+#define QEDI_LOG_CONN		0x10		/* Connection setup, cleanup */
+#define QEDI_LOG_EVT		0x20		/* Events, link, mtu */
+#define QEDI_LOG_TIMER		0x40		/* Timer events */
+#define QEDI_LOG_MP_REQ		0x80		/* Middle Path (MP) logs */
+#define QEDI_LOG_SCSI_TM	0x100		/* SCSI Aborts, Task Mgmt */
+#define QEDI_LOG_UNSOL		0x200		/* unsolicited event logs */
+#define QEDI_LOG_IO		0x400		/* scsi cmd, completion */
+#define QEDI_LOG_MQ		0x800		/* Multi Queue logs */
+#define QEDI_LOG_BSG		0x1000		/* BSG logs */
+#define QEDI_LOG_DEBUGFS	0x2000		/* debugFS logs */
+#define QEDI_LOG_LPORT		0x4000		/* lport logs */
+#define QEDI_LOG_ELS		0x8000		/* ELS logs */
+#define QEDI_LOG_NPIV		0x10000		/* NPIV logs */
+#define QEDI_LOG_SESS		0x20000		/* Connection setup, cleanup */
+#define QEDI_LOG_UIO		0x40000		/* iSCSI UIO logs */
+#define QEDI_LOG_TID		0x80000         /* FW TID context acquire,
+						 * free
+						 */
+#define QEDI_TRACK_TID		0x100000        /* Track TID state. To be
+						 * enabled only at module load
+						 * and not run-time.
+						 */
+#define QEDI_TRACK_CMD_LIST    0x300000        /* Track active cmd list nodes,
+						* done with reference to TID,
+						* hence TRACK_TID also enabled.
+						*/
+#define QEDI_LOG_NOTICE		0x40000000	/* Notice logs */
+#define QEDI_LOG_WARN		0x80000000	/* Warning logs */
+
+/* Debug context structure */
+struct qedi_dbg_ctx {
+	unsigned int host_no;
+	struct pci_dev *pdev;
+#ifdef CONFIG_DEBUG_FS
+	struct dentry *bdf_dentry;
+#endif
+};
+
+#define QEDI_ERR(pdev, fmt, ...)	\
+		qedi_dbg_err(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
+#define QEDI_WARN(pdev, fmt, ...)	\
+		qedi_dbg_warn(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
+#define QEDI_NOTICE(pdev, fmt, ...)	\
+		qedi_dbg_notice(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
+#define QEDI_INFO(pdev, level, fmt, ...)	\
+		qedi_dbg_info(pdev, __func__, __LINE__, level, fmt,	\
+			      ## __VA_ARGS__)
+
+void qedi_dbg_err(struct qedi_dbg_ctx *, const char *, u32,
+		  const char *, ...);
+void qedi_dbg_warn(struct qedi_dbg_ctx *, const char *, u32,
+		   const char *, ...);
+void qedi_dbg_notice(struct qedi_dbg_ctx *, const char *, u32,
+		     const char *, ...);
+void qedi_dbg_info(struct qedi_dbg_ctx *, const char *, u32, u32,
+		   const char *, ...);
+
+struct Scsi_Host;
+
+struct sysfs_bin_attrs {
+	char *name;
+	struct bin_attribute *attr;
+};
+
+int qedi_create_sysfs_attr(struct Scsi_Host *,
+			   struct sysfs_bin_attrs *);
+void qedi_remove_sysfs_attr(struct Scsi_Host *,
+			    struct sysfs_bin_attrs *);
+
+#ifdef CONFIG_DEBUG_FS
+/* DebugFS related code */
+struct qedi_list_of_funcs {
+	char *oper_str;
+	ssize_t (*oper_func)(struct qedi_dbg_ctx *qedi);
+};
+
+struct qedi_debugfs_ops {
+	char *name;
+	struct qedi_list_of_funcs *qedi_funcs;
+};
+
+#define qedi_dbg_fileops(drv, ops) \
+{ \
+	.owner  = THIS_MODULE, \
+	.open   = simple_open, \
+	.read   = drv##_dbg_##ops##_cmd_read, \
+	.write  = drv##_dbg_##ops##_cmd_write \
+}
+
+/* Used for debugfs sequential files */
+#define qedi_dbg_fileops_seq(drv, ops) \
+{ \
+	.owner = THIS_MODULE, \
+	.open = drv##_dbg_##ops##_open, \
+	.read = seq_read, \
+	.llseek = seq_lseek, \
+	.release = single_release, \
+}
+
+void qedi_dbg_host_init(struct qedi_dbg_ctx *,
+			struct qedi_debugfs_ops *,
+			const struct file_operations *);
+void qedi_dbg_host_exit(struct qedi_dbg_ctx *);
+void qedi_dbg_init(char *);
+void qedi_dbg_exit(void);
+#endif /* CONFIG_DEBUG_FS */
+
+#endif /* _QEDI_DBG_H_ */
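The QEDI_LOG_* values above are single-bit masks OR'd into the module's
`debug` parameter; a message is emitted only when its level bit is set. The
following is an illustrative userspace sketch of that gating, not driver
code, reusing a few of the mask values from the header.

```c
#include <stdint.h>

/* A few of the mask values from qedi_dbg.h, for illustration. */
#define QEDI_LOG_INFO	0x2
#define QEDI_LOG_CONN	0x10
#define QEDI_LOG_IO	0x400

/* Mirrors the "if (!(debug & level)) return;" check in qedi_dbg_info():
 * a message at `level` is printed only if its bit is set in `debug`.
 */
static int log_enabled(uint32_t debug, uint32_t level)
{
	return (debug & level) != 0;
}
```

Because the masks are independent bits, loading the module with
`debug=0x412` would enable INFO, CONN, and IO logging at once.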
diff --git a/drivers/scsi/qedi/qedi_debugfs.c b/drivers/scsi/qedi/qedi_debugfs.c
new file mode 100644
index 0000000..9559362
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_debugfs.c
@@ -0,0 +1,244 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include "qedi.h"
+#include "qedi_dbg.h"
+
+#include <linux/uaccess.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+
+int do_not_recover;
+static struct dentry *qedi_dbg_root;
+
+void
+qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
+		   struct qedi_debugfs_ops *dops,
+		   const struct file_operations *fops)
+{
+	char host_dirname[32];
+	struct dentry *file_dentry = NULL;
+
+	sprintf(host_dirname, "host%u", qedi->host_no);
+	qedi->bdf_dentry = debugfs_create_dir(host_dirname, qedi_dbg_root);
+	if (!qedi->bdf_dentry)
+		return;
+
+	for (; dops->name; dops++, fops++) {
+		file_dentry = debugfs_create_file(dops->name, 0600,
+						  qedi->bdf_dentry, qedi,
+						  fops);
+		if (!file_dentry) {
+			QEDI_INFO(qedi, QEDI_LOG_DEBUGFS,
+				  "Debugfs entry %s creation failed\n",
+				  dops->name);
+			debugfs_remove_recursive(qedi->bdf_dentry);
+			qedi->bdf_dentry = NULL;
+			return;
+		}
+	}
+}
+
+void
+qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi)
+{
+	debugfs_remove_recursive(qedi->bdf_dentry);
+	qedi->bdf_dentry = NULL;
+}
+
+void
+qedi_dbg_init(char *drv_name)
+{
+	qedi_dbg_root = debugfs_create_dir(drv_name, NULL);
+	if (!qedi_dbg_root)
+		QEDI_INFO(NULL, QEDI_LOG_DEBUGFS, "Init of debugfs failed\n");
+}
+
+void
+qedi_dbg_exit(void)
+{
+	debugfs_remove_recursive(qedi_dbg_root);
+	qedi_dbg_root = NULL;
+}
+
+static ssize_t
+qedi_dbg_do_not_recover_enable(struct qedi_dbg_ctx *qedi_dbg)
+{
+	do_not_recover = 1;
+
+	QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
+		  do_not_recover);
+	return 0;
+}
+
+static ssize_t
+qedi_dbg_do_not_recover_disable(struct qedi_dbg_ctx *qedi_dbg)
+{
+	do_not_recover = 0;
+
+	QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
+		  do_not_recover);
+	return 0;
+}
+
+static struct qedi_list_of_funcs qedi_dbg_do_not_recover_ops[] = {
+	{ "enable", qedi_dbg_do_not_recover_enable },
+	{ "disable", qedi_dbg_do_not_recover_disable },
+	{ NULL, NULL }
+};
+
+struct qedi_debugfs_ops qedi_debugfs_ops[] = {
+	{ "gbl_ctx", NULL },
+	{ "do_not_recover", qedi_dbg_do_not_recover_ops},
+	{ "io_trace", NULL },
+	{ NULL, NULL }
+};
+
+static ssize_t
+qedi_dbg_do_not_recover_cmd_write(struct file *filp, const char __user *buffer,
+				  size_t count, loff_t *ppos)
+{
+	size_t cnt = 0;
+	char dbuf[16] = { 0 };
+	struct qedi_dbg_ctx *qedi_dbg = filp->private_data;
+	struct qedi_list_of_funcs *lof = qedi_dbg_do_not_recover_ops;
+
+	if (*ppos)
+		return 0;
+
+	if (!count || count >= sizeof(dbuf))
+		return -EINVAL;
+
+	if (copy_from_user(dbuf, buffer, count))
+		return -EFAULT;
+
+	for (; lof->oper_str; lof++) {
+		if (!strncmp(lof->oper_str, dbuf, strlen(lof->oper_str))) {
+			cnt = lof->oper_func(qedi_dbg);
+			break;
+		}
+	}
+	return count - cnt;
+}
+
+static ssize_t
+qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
+				 size_t count, loff_t *ppos)
+{
+	char dbuf[32];
+	int len;
+
+	len = scnprintf(dbuf, sizeof(dbuf), "do_not_recover=%d\n",
+			do_not_recover);
+	return simple_read_from_buffer(buffer, count, ppos, dbuf, len);
+}
+
+static int
+qedi_gbl_ctx_show(struct seq_file *s, void *unused)
+{
+	struct qedi_fastpath *fp = NULL;
+	struct qed_sb_info *sb_info = NULL;
+	struct status_block *sb = NULL;
+	struct global_queue *que = NULL;
+	int id;
+	u16 prod_idx;
+	struct qedi_ctx *qedi = s->private;
+	unsigned long flags;
+
+	seq_puts(s, " DUMP CQ CONTEXT:\n");
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		seq_printf(s, "=========FAST CQ PATH [%d] ==========\n", id);
+		fp = &qedi->fp_array[id];
+		sb_info = fp->sb_info;
+		sb = sb_info->sb_virt;
+		prod_idx = (sb->pi_array[QEDI_PROTO_CQ_PROD_IDX] &
+			    STATUS_BLOCK_PROD_INDEX_MASK);
+		seq_printf(s, "SB PROD IDX: %d\n", prod_idx);
+		que = qedi->global_queues[fp->sb_id];
+		seq_printf(s, "DRV CONS IDX: %d\n", que->cq_cons_idx);
+		seq_printf(s, "CQ complete host memory: %d\n", fp->sb_id);
+		seq_puts(s, "=========== END ==================\n\n\n");
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+	}
+	return 0;
+}
+
+static int
+qedi_dbg_gbl_ctx_open(struct inode *inode, struct file *file)
+{
+	struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
+	struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
+					     dbg_ctx);
+
+	return single_open(file, qedi_gbl_ctx_show, qedi);
+}
+
+static int
+qedi_io_trace_show(struct seq_file *s, void *unused)
+{
+	int id, idx = 0;
+	struct qedi_ctx *qedi = s->private;
+	struct qedi_io_log *io_log;
+	unsigned long flags;
+
+	seq_puts(s, " DUMP IO LOGS:\n");
+	spin_lock_irqsave(&qedi->io_trace_lock, flags);
+	idx = qedi->io_trace_idx;
+	for (id = 0; id < QEDI_IO_TRACE_SIZE; id++) {
+		io_log = &qedi->io_trace_buf[idx];
+		seq_printf(s, "iodir-%d:", io_log->direction);
+		seq_printf(s, "tid-0x%x:", io_log->task_id);
+		seq_printf(s, "cid-0x%x:", io_log->cid);
+		seq_printf(s, "lun-%d:", io_log->lun);
+		seq_printf(s, "op-0x%02x:", io_log->op);
+		seq_printf(s, "0x%02x%02x%02x%02x:", io_log->lba[0],
+			   io_log->lba[1], io_log->lba[2], io_log->lba[3]);
+		seq_printf(s, "buflen-%d:", io_log->bufflen);
+		seq_printf(s, "sgcnt-%d:", io_log->sg_count);
+		seq_printf(s, "res-0x%08x:", io_log->result);
+		seq_printf(s, "jif-%lu:", io_log->jiffies);
+		seq_printf(s, "blk_req_cpu-%d:", io_log->blk_req_cpu);
+		seq_printf(s, "req_cpu-%d:", io_log->req_cpu);
+		seq_printf(s, "intr_cpu-%d:", io_log->intr_cpu);
+		seq_printf(s, "blk_rsp_cpu-%d\n", io_log->blk_rsp_cpu);
+
+		idx++;
+		if (idx == QEDI_IO_TRACE_SIZE)
+			idx = 0;
+	}
+	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
+	return 0;
+}
+
+static int
+qedi_dbg_io_trace_open(struct inode *inode, struct file *file)
+{
+	struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
+	struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
+					     dbg_ctx);
+
+	return single_open(file, qedi_io_trace_show, qedi);
+}
+
+const struct file_operations qedi_dbg_fops[] = {
+	qedi_dbg_fileops_seq(qedi, gbl_ctx),
+	qedi_dbg_fileops(qedi, do_not_recover),
+	qedi_dbg_fileops_seq(qedi, io_trace),
+	{ NULL, NULL },
+};
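qedi_io_trace_show() above dumps the ring starting at the current write
index, so entries come out oldest-first. A userspace sketch of that
wraparound walk follows; the ring size and names are illustrative, not the
driver's.

```c
#define TRACE_SIZE 8	/* stand-in for QEDI_IO_TRACE_SIZE */

/* Visit all TRACE_SIZE slots starting at start_idx, wrapping at the end
 * of the ring, and record the visit order; mirrors the idx handling in
 * qedi_io_trace_show().
 */
static void dump_order(int start_idx, int order[TRACE_SIZE])
{
	int id, idx = start_idx;

	for (id = 0; id < TRACE_SIZE; id++) {
		order[id] = idx;
		idx++;
		if (idx == TRACE_SIZE)
			idx = 0;
	}
}
```

Starting at the write index rather than slot 0 is what makes the dump come
out in chronological order once the ring has wrapped.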
diff --git a/drivers/scsi/qedi/qedi_hsi.h b/drivers/scsi/qedi/qedi_hsi.h
new file mode 100644
index 0000000..b442a62
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_hsi.h
@@ -0,0 +1,52 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+#ifndef __QEDI_HSI__
+#define __QEDI_HSI__
+/********************************/
+/* Add include to common target */
+/********************************/
+#include <linux/qed/common_hsi.h>
+
+/****************************************/
+/* Add include to common storage target */
+/****************************************/
+#include <linux/qed/storage_common.h>
+
+/************************************************************************/
+/* Add include to common TCP target */
+/************************************************************************/
+#include <linux/qed/tcp_common.h>
+
+/*************************************************************************/
+/* Add include to common iSCSI target for both eCore and protocol driver */
+/************************************************************************/
+#include <linux/qed/iscsi_common.h>
+
+/*
+ * iSCSI CMDQ element
+ */
+struct iscsi_cmdqe {
+	__le16 conn_id;
+	u8 invalid_command;
+	u8 cmd_hdr_type;
+	__le32 reserved1[2];
+	__le32 cmd_payload[13];
+};
+
+/*
+ * iSCSI CMD header type
+ */
+enum iscsi_cmd_hdr_type {
+	ISCSI_CMD_HDR_TYPE_BHS_ONLY /* iSCSI BHS with no expected AHS */,
+	ISCSI_CMD_HDR_TYPE_BHS_W_AHS /* iSCSI BHS with expected AHS */,
+	ISCSI_CMD_HDR_TYPE_AHS /* iSCSI AHS */,
+	MAX_ISCSI_CMD_HDR_TYPE
+};
+
+#endif /* __QEDI_HSI__ */
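The CMDQ element above is a fixed-layout HSI structure: 2 + 1 + 1 + 8 + 52
bytes, i.e. 64 bytes total. A userspace sketch checking that layout, with
standard fixed-width types standing in for the kernel's __le16/__le32
(every member is naturally aligned, so no padding is inserted):

```c
#include <stdint.h>

/* Userspace mirror of struct iscsi_cmdqe, with uint16_t/uint32_t standing
 * in for __le16/__le32.  Field order and sizes match the HSI definition,
 * giving a 64-byte element with no compiler padding.
 */
struct iscsi_cmdqe_mirror {
	uint16_t conn_id;
	uint8_t invalid_command;
	uint8_t cmd_hdr_type;
	uint32_t reserved1[2];
	uint32_t cmd_payload[13];
};
```

Keeping HSI structures free of implicit padding is what lets the driver and
firmware agree on the wire layout without packing pragmas.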
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
new file mode 100644
index 0000000..35ab2f9
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -0,0 +1,1550 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/if_arp.h>
+#include <scsi/iscsi_if.h>
+#include <linux/inet.h>
+#include <net/arp.h>
+#include <linux/list.h>
+#include <linux/kthread.h>
+#include <linux/mm.h>
+#include <linux/if_vlan.h>
+#include <linux/cpu.h>
+
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi.h>
+
+#include "qedi.h"
+
+static uint fw_debug;
+module_param(fw_debug, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(fw_debug, " Firmware debug level 0(default) to 3");
+
+static uint int_mode;
+module_param(int_mode, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(int_mode,
+		 " Force interrupt mode other than MSI-X: (1 INT#x; 2 MSI)");
+
+uint debug = QEDI_LOG_WARN | QEDI_LOG_SCSI_TM;
+module_param(debug, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug, " Default debug level");
+
+const struct qed_iscsi_ops *qedi_ops;
+static struct scsi_transport_template *qedi_scsi_transport;
+static struct pci_driver qedi_pci_driver;
+static DEFINE_PER_CPU(struct qedi_percpu_s, qedi_percpu);
+/* Static function declaration */
+static int qedi_alloc_global_queues(struct qedi_ctx *qedi);
+static void qedi_free_global_queues(struct qedi_ctx *qedi);
+
+static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
+{
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *qedi_ep;
+	struct async_data *data;
+	int rval = 0;
+
+	if (!context || !fw_handle) {
+		QEDI_ERR(NULL, "Recv event with ctx NULL\n");
+		return -EINVAL;
+	}
+
+	qedi = (struct qedi_ctx *)context;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Recv Event %d fw_handle %p\n", fw_event_code, fw_handle);
+
+	data = (struct async_data *)fw_handle;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "cid=0x%x tid=0x%x err-code=0x%x fw-dbg-param=0x%x\n",
+		   data->cid, data->itid, data->error_code,
+		   data->fw_debug_param);
+
+	qedi_ep = qedi->ep_tbl[data->cid];
+
+	if (!qedi_ep) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Cannot process event, ep already disconnected, cid=0x%x\n",
+			   data->cid);
+		WARN_ON(1);
+		return -ENODEV;
+	}
+
+	switch (fw_event_code) {
+	case ISCSI_EVENT_TYPE_ASYN_CONNECT_COMPLETE:
+		if (qedi_ep->state == EP_STATE_OFLDCONN_START)
+			qedi_ep->state = EP_STATE_OFLDCONN_COMPL;
+
+		wake_up_interruptible(&qedi_ep->tcp_ofld_wait);
+		break;
+	case ISCSI_EVENT_TYPE_ASYN_TERMINATE_DONE:
+		qedi_ep->state = EP_STATE_DISCONN_COMPL;
+		wake_up_interruptible(&qedi_ep->tcp_ofld_wait);
+		break;
+	case ISCSI_EVENT_TYPE_ISCSI_CONN_ERROR:
+		qedi_process_iscsi_error(qedi_ep, data);
+		break;
+	case ISCSI_EVENT_TYPE_ASYN_ABORT_RCVD:
+	case ISCSI_EVENT_TYPE_ASYN_SYN_RCVD:
+	case ISCSI_EVENT_TYPE_ASYN_MAX_RT_TIME:
+	case ISCSI_EVENT_TYPE_ASYN_MAX_RT_CNT:
+	case ISCSI_EVENT_TYPE_ASYN_MAX_KA_PROBES_CNT:
+	case ISCSI_EVENT_TYPE_ASYN_FIN_WAIT2:
+	case ISCSI_EVENT_TYPE_TCP_CONN_ERROR:
+		qedi_process_tcp_error(qedi_ep, data);
+		break;
+	default:
+		QEDI_ERR(&qedi->dbg_ctx, "Recv Unknown Event %u\n",
+			 fw_event_code);
+	}
+
+	return rval;
+}
+
+static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
+				  struct qed_sb_info *sb_info, u16 sb_id)
+{
+	struct status_block *sb_virt;
+	dma_addr_t sb_phys;
+	int ret;
+
+	sb_virt = dma_alloc_coherent(&qedi->pdev->dev,
+				     sizeof(struct status_block), &sb_phys,
+				     GFP_KERNEL);
+	if (!sb_virt) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Status block allocation failed for id = %d.\n",
+			  sb_id);
+		return -ENOMEM;
+	}
+
+	ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
+				       sb_id, QED_SB_TYPE_STORAGE);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Status block initialization failed for id = %d.\n",
+			  sb_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void qedi_free_sb(struct qedi_ctx *qedi)
+{
+	struct qed_sb_info *sb_info;
+	int id;
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		sb_info = &qedi->sb_array[id];
+		if (sb_info->sb_virt)
+			dma_free_coherent(&qedi->pdev->dev,
+					  sizeof(*sb_info->sb_virt),
+					  (void *)sb_info->sb_virt,
+					  sb_info->sb_phys);
+	}
+}
+
+static void qedi_free_fp(struct qedi_ctx *qedi)
+{
+	kfree(qedi->fp_array);
+	kfree(qedi->sb_array);
+}
+
+static void qedi_destroy_fp(struct qedi_ctx *qedi)
+{
+	qedi_free_sb(qedi);
+	qedi_free_fp(qedi);
+}
+
+static int qedi_alloc_fp(struct qedi_ctx *qedi)
+{
+	int ret = 0;
+
+	qedi->fp_array = kcalloc(MIN_NUM_CPUS_MSIX(qedi),
+				 sizeof(struct qedi_fastpath), GFP_KERNEL);
+	if (!qedi->fp_array) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "fastpath fp array allocation failed.\n");
+		return -ENOMEM;
+	}
+
+	qedi->sb_array = kcalloc(MIN_NUM_CPUS_MSIX(qedi),
+				 sizeof(struct qed_sb_info), GFP_KERNEL);
+	if (!qedi->sb_array) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "fastpath sb array allocation failed.\n");
+		ret = -ENOMEM;
+		goto free_fp;
+	}
+
+	return ret;
+
+free_fp:
+	qedi_free_fp(qedi);
+	return ret;
+}
+
+static void qedi_int_fp(struct qedi_ctx *qedi)
+{
+	struct qedi_fastpath *fp;
+	int id;
+
+	memset((void *)qedi->fp_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
+	       sizeof(*qedi->fp_array));
+	memset((void *)qedi->sb_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
+	       sizeof(*qedi->sb_array));
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		fp = &qedi->fp_array[id];
+		fp->sb_info = &qedi->sb_array[id];
+		fp->sb_id = id;
+		fp->qedi = qedi;
+		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
+			 "qedi", id);
+
+		/* fp_array[i] ---- irq cookie
+		 * So init data which is needed in int ctx
+		 */
+	}
+}
+
+static int qedi_prepare_fp(struct qedi_ctx *qedi)
+{
+	struct qedi_fastpath *fp;
+	int id, ret = 0;
+
+	ret = qedi_alloc_fp(qedi);
+	if (ret)
+		goto err;
+
+	qedi_int_fp(qedi);
+
+	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
+		fp = &qedi->fp_array[id];
+		ret = qedi_alloc_and_init_sb(qedi, fp->sb_info, fp->sb_id);
+		if (ret) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "SB allocation and initialization failed.\n");
+			ret = -EIO;
+			goto err_init;
+		}
+	}
+
+	return 0;
+
+err_init:
+	qedi_free_sb(qedi);
+	qedi_free_fp(qedi);
+err:
+	return ret;
+}
+
+static enum qed_int_mode qedi_int_mode_to_enum(void)
+{
+	switch (int_mode) {
+	case 0: return QED_INT_MODE_MSIX;
+	case 1: return QED_INT_MODE_INTA;
+	case 2: return QED_INT_MODE_MSI;
+	default:
+		QEDI_ERR(NULL, "Unknown int_mode value %08x, defaulting to MSI-X\n",
+			 int_mode);
+		return QED_INT_MODE_MSIX;
+	}
+}
+
+static int qedi_setup_cid_que(struct qedi_ctx *qedi)
+{
+	int i;
+
+	qedi->cid_que.cid_que_base = kmalloc((qedi->max_active_conns *
+					      sizeof(u32)), GFP_KERNEL);
+	if (!qedi->cid_que.cid_que_base)
+		return -ENOMEM;
+
+	qedi->cid_que.conn_cid_tbl = kmalloc((qedi->max_active_conns *
+					      sizeof(struct qedi_conn *)),
+					     GFP_KERNEL);
+	if (!qedi->cid_que.conn_cid_tbl) {
+		kfree(qedi->cid_que.cid_que_base);
+		qedi->cid_que.cid_que_base = NULL;
+		return -ENOMEM;
+	}
+
+	qedi->cid_que.cid_que = (u32 *)qedi->cid_que.cid_que_base;
+	qedi->cid_que.cid_q_prod_idx = 0;
+	qedi->cid_que.cid_q_cons_idx = 0;
+	qedi->cid_que.cid_q_max_idx = qedi->max_active_conns;
+	qedi->cid_que.cid_free_cnt = qedi->max_active_conns;
+
+	for (i = 0; i < qedi->max_active_conns; i++) {
+		qedi->cid_que.cid_que[i] = i;
+		qedi->cid_que.conn_cid_tbl[i] = NULL;
+	}
+
+	return 0;
+}
+
+static void qedi_release_cid_que(struct qedi_ctx *qedi)
+{
+	kfree(qedi->cid_que.cid_que_base);
+	qedi->cid_que.cid_que_base = NULL;
+
+	kfree(qedi->cid_que.conn_cid_tbl);
+	qedi->cid_que.conn_cid_tbl = NULL;
+}
+
+static int qedi_init_id_tbl(struct qedi_portid_tbl *id_tbl, u16 size,
+			    u16 start_id, u16 next)
+{
+	id_tbl->start = start_id;
+	id_tbl->max = size;
+	id_tbl->next = next;
+	spin_lock_init(&id_tbl->lock);
+	id_tbl->table = kzalloc(BITS_TO_LONGS(size) * sizeof(long),
+				GFP_KERNEL);
+	if (!id_tbl->table)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void qedi_free_id_tbl(struct qedi_portid_tbl *id_tbl)
+{
+	kfree(id_tbl->table);
+	id_tbl->table = NULL;
+}
+
+int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id)
+{
+	int ret = -1;
+
+	id -= id_tbl->start;
+	if (id >= id_tbl->max)
+		return ret;
+
+	spin_lock(&id_tbl->lock);
+	if (!test_bit(id, id_tbl->table)) {
+		set_bit(id, id_tbl->table);
+		ret = 0;
+	}
+	spin_unlock(&id_tbl->lock);
+	return ret;
+}
+
+u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl)
+{
+	u16 id;
+
+	spin_lock(&id_tbl->lock);
+	id = find_next_zero_bit(id_tbl->table, id_tbl->max, id_tbl->next);
+	if (id >= id_tbl->max) {
+		id = QEDI_LOCAL_PORT_INVALID;
+		if (id_tbl->next != 0) {
+			id = find_first_zero_bit(id_tbl->table, id_tbl->next);
+			if (id >= id_tbl->next)
+				id = QEDI_LOCAL_PORT_INVALID;
+		}
+	}
+
+	if (id < id_tbl->max) {
+		set_bit(id, id_tbl->table);
+		id_tbl->next = (id + 1) & (id_tbl->max - 1);
+		id += id_tbl->start;
+	}
+
+	spin_unlock(&id_tbl->lock);
+
+	return id;
+}
+
+void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id)
+{
+	if (id == QEDI_LOCAL_PORT_INVALID)
+		return;
+
+	id -= id_tbl->start;
+	if (id >= id_tbl->max)
+		return;
+
+	clear_bit(id, id_tbl->table);
+}
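qedi_alloc_new_id() above implements a circular bitmap allocator: it
searches for a zero bit from `next` to the end, wraps to the start on
failure, and advances `next` past the allocated ID (the `& (max - 1)` wrap
assumes max is a power of two). A minimal userspace sketch of the same
search follows, using a plain byte array instead of the kernel bitmap
helpers; sizes and names are illustrative.

```c
#define TBL_MAX 8		/* must be a power of two for the wrap mask */
#define ID_INVALID 0xffffu	/* stand-in for QEDI_LOCAL_PORT_INVALID */

struct id_tbl {
	unsigned char used[TBL_MAX];	/* 1 = allocated */
	unsigned int next;		/* next slot to try */
};

/* Circular search mirroring qedi_alloc_new_id(): scan [next, max), then
 * wrap to [0, next); mark the slot and advance next with the
 * power-of-two mask.
 */
static unsigned int alloc_new_id(struct id_tbl *t)
{
	unsigned int i, id = ID_INVALID;

	for (i = 0; i < TBL_MAX; i++) {
		unsigned int cand = (t->next + i) & (TBL_MAX - 1);

		if (!t->used[cand]) {
			id = cand;
			t->used[cand] = 1;
			t->next = (cand + 1) & (TBL_MAX - 1);
			break;
		}
	}
	return id;
}
```

Starting each search at `next` rather than at zero spreads allocations
around the table, which delays reuse of recently freed local port numbers.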
+
+static void qedi_cm_free_mem(struct qedi_ctx *qedi)
+{
+	kfree(qedi->ep_tbl);
+	qedi->ep_tbl = NULL;
+	qedi_free_id_tbl(&qedi->lcl_port_tbl);
+}
+
+static int qedi_cm_alloc_mem(struct qedi_ctx *qedi)
+{
+	u16 port_id;
+
+	qedi->ep_tbl = kzalloc((qedi->max_active_conns *
+				sizeof(struct qedi_endpoint *)), GFP_KERNEL);
+	if (!qedi->ep_tbl)
+		return -ENOMEM;
+	port_id = prandom_u32() % QEDI_LOCAL_PORT_RANGE;
+	if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE,
+			     QEDI_LOCAL_PORT_MIN, port_id)) {
+		qedi_cm_free_mem(qedi);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev)
+{
+	struct Scsi_Host *shost;
+	struct qedi_ctx *qedi = NULL;
+
+	shost = iscsi_host_alloc(&qedi_host_template,
+				 sizeof(struct qedi_ctx), 0);
+	if (!shost) {
+		QEDI_ERR(NULL, "Could not allocate shost\n");
+		goto exit_setup_shost;
+	}
+
+	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA;
+	shost->max_channel = 0;
+	shost->max_lun = ~0;
+	shost->max_cmd_len = 16;
+	shost->transportt = qedi_scsi_transport;
+
+	qedi = iscsi_host_priv(shost);
+	memset(qedi, 0, sizeof(*qedi));
+	qedi->shost = shost;
+	qedi->dbg_ctx.host_no = shost->host_no;
+	qedi->pdev = pdev;
+	qedi->dbg_ctx.pdev = pdev;
+	qedi->max_active_conns = ISCSI_MAX_SESS_PER_HBA;
+	qedi->max_sqes = QEDI_SQ_SIZE;
+
+	if (shost_use_blk_mq(shost))
+		shost->nr_hw_queues = MIN_NUM_CPUS_MSIX(qedi);
+
+	pci_set_drvdata(pdev, qedi);
+
+exit_setup_shost:
+	return qedi;
+}
+
+static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi)
+{
+	u8 num_sq_pages;
+	u32 log_page_size;
+	int rval = 0;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC, "Min number of MSIX %d\n",
+		  MIN_NUM_CPUS_MSIX(qedi));
+
+	num_sq_pages = (MAX_OUSTANDING_TASKS_PER_CON * 8) / PAGE_SIZE;
+
+	qedi->num_queues = MIN_NUM_CPUS_MSIX(qedi);
+
+	memset(&qedi->pf_params.iscsi_pf_params, 0,
+	       sizeof(qedi->pf_params.iscsi_pf_params));
+
+	qedi->p_cpuq = pci_alloc_consistent(qedi->pdev,
+			qedi->num_queues * sizeof(struct qedi_glbl_q_params),
+			&qedi->hw_p_cpuq);
+	if (!qedi->p_cpuq) {
+		QEDI_ERR(&qedi->dbg_ctx, "pci_alloc_consistent fail\n");
+		rval = -1;
+		goto err_alloc_mem;
+	}
+
+	rval = qedi_alloc_global_queues(qedi);
+	if (rval) {
+		QEDI_ERR(&qedi->dbg_ctx, "Global queue allocation failed.\n");
+		rval = -1;
+		goto err_alloc_mem;
+	}
+
+	qedi->pf_params.iscsi_pf_params.num_cons = QEDI_MAX_ISCSI_CONNS_PER_HBA;
+	qedi->pf_params.iscsi_pf_params.num_tasks = QEDI_MAX_ISCSI_TASK;
+	qedi->pf_params.iscsi_pf_params.half_way_close_timeout = 10;
+	qedi->pf_params.iscsi_pf_params.num_sq_pages_in_ring = num_sq_pages;
+	qedi->pf_params.iscsi_pf_params.num_r2tq_pages_in_ring = num_sq_pages;
+	qedi->pf_params.iscsi_pf_params.num_uhq_pages_in_ring = num_sq_pages;
+	qedi->pf_params.iscsi_pf_params.num_queues = qedi->num_queues;
+	qedi->pf_params.iscsi_pf_params.debug_mode = fw_debug;
+
+	log_page_size = ilog2(PAGE_SIZE);
+	qedi->pf_params.iscsi_pf_params.log_page_size = log_page_size;
+
+	qedi->pf_params.iscsi_pf_params.glbl_q_params_addr = qedi->hw_p_cpuq;
+
+	/* RQ BDQ initializations.
+	 * rq_num_entries: suggested value for Initiator is 16 (4KB RQ)
+	 * rqe_log_size: 8 for 256B RQE
+	 */
+	qedi->pf_params.iscsi_pf_params.rqe_log_size = 8;
+	/* BDQ address and size */
+	qedi->pf_params.iscsi_pf_params.bdq_pbl_base_addr[BDQ_ID_RQ] =
+							qedi->bdq_pbl_list_dma;
+	qedi->pf_params.iscsi_pf_params.bdq_pbl_num_entries[BDQ_ID_RQ] =
+						qedi->bdq_pbl_list_num_entries;
+	qedi->pf_params.iscsi_pf_params.rq_buffer_size = QEDI_BDQ_BUF_SIZE;
+
+	/* cq_num_entries: num_tasks + rq_num_entries */
+	qedi->pf_params.iscsi_pf_params.cq_num_entries = 2048;
+
+	qedi->pf_params.iscsi_pf_params.gl_rq_pi = QEDI_PROTO_CQ_PROD_IDX;
+	qedi->pf_params.iscsi_pf_params.gl_cmd_pi = 1;
+	qedi->pf_params.iscsi_pf_params.ooo_enable = 1;
+
+err_alloc_mem:
+	return rval;
+}
+
+/* Free DMA coherent memory for array of queue pointers we pass to qed */
+static void qedi_free_iscsi_pf_param(struct qedi_ctx *qedi)
+{
+	size_t size = 0;
+
+	if (qedi->p_cpuq) {
+		size = qedi->num_queues * sizeof(struct qedi_glbl_q_params);
+		pci_free_consistent(qedi->pdev, size, qedi->p_cpuq,
+				    qedi->hw_p_cpuq);
+	}
+
+	qedi_free_global_queues(qedi);
+
+	kfree(qedi->global_queues);
+}
+
+static void qedi_link_update(void *dev, struct qed_link_output *link)
+{
+	struct qedi_ctx *qedi = (struct qedi_ctx *)dev;
+
+	if (link->link_up) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "Link Up event.\n");
+		atomic_set(&qedi->link_state, QEDI_LINK_UP);
+	} else {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Link Down event.\n");
+		atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
+	}
+}
+
+static struct qed_iscsi_cb_ops qedi_cb_ops = {
+	{
+		.link_update =		qedi_link_update,
+	}
+};
+
+static bool qedi_process_completions(struct qedi_fastpath *fp)
+{
+	struct qedi_work *qedi_work = NULL;
+	struct qedi_ctx *qedi = fp->qedi;
+	struct qed_sb_info *sb_info = fp->sb_info;
+	struct status_block *sb = sb_info->sb_virt;
+	struct qedi_percpu_s *p = NULL;
+	struct global_queue *que;
+	u16 prod_idx;
+	unsigned long flags;
+	union iscsi_cqe *cqe;
+	int cpu;
+
+	/* Get the current firmware producer index */
+	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
+
+	prod_idx %= QEDI_CQ_SIZE;
+
+	que = qedi->global_queues[fp->sb_id];
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+		  "Before: global queue=%p prod_idx=%d cons_idx=%d, sb_id=%d\n",
+		  que, prod_idx, que->cq_cons_idx, fp->sb_id);
+
+	qedi->intr_cpu = fp->sb_id;
+	cpu = smp_processor_id();
+	p = &per_cpu(qedi_percpu, cpu);
+
+	WARN_ON(!p->iothread);
+
+	spin_lock_irqsave(&p->p_work_lock, flags);
+	while (que->cq_cons_idx != prod_idx) {
+		cqe = &que->cq[que->cq_cons_idx];
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+			  "cqe=%p prod_idx=%d cons_idx=%d.\n",
+			  cqe, prod_idx, que->cq_cons_idx);
+
+		/* Allocate a work item and copy the cqe into it.  Bail out
+		 * on allocation failure instead of retrying the same
+		 * consumer index forever.
+		 */
+		qedi_work = kzalloc(sizeof(*qedi_work), GFP_ATOMIC);
+		if (!qedi_work) {
+			WARN_ON(1);
+			break;
+		}
+
+		INIT_LIST_HEAD(&qedi_work->list);
+		qedi_work->qedi = qedi;
+		memcpy(&qedi_work->cqe, cqe, sizeof(union iscsi_cqe));
+		qedi_work->que_idx = fp->sb_id;
+		list_add_tail(&qedi_work->list, &p->work_list);
+
+		que->cq_cons_idx++;
+		if (que->cq_cons_idx == QEDI_CQ_SIZE)
+			que->cq_cons_idx = 0;
+	}
+	wake_up_process(p->iothread);
+	spin_unlock_irqrestore(&p->p_work_lock, flags);
+
+	return true;
+}
+
+static bool qedi_fp_has_work(struct qedi_fastpath *fp)
+{
+	struct qedi_ctx *qedi = fp->qedi;
+	struct global_queue *que;
+	struct qed_sb_info *sb_info = fp->sb_info;
+	struct status_block *sb = sb_info->sb_virt;
+	u16 prod_idx;
+
+	barrier();
+
+	/* Get the current firmware producer index */
+	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
+
+	/* Get the pointer to the global CQ this completion is on */
+	que = qedi->global_queues[fp->sb_id];
+
+	/* The producer index free-runs as a u16; wrap it to the CQ size */
+	prod_idx %= QEDI_CQ_SIZE;
+
+	return (que->cq_cons_idx != prod_idx);
+}
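The check above compares the firmware's free-running 16-bit producer index,
reduced modulo the CQ size, against the driver's consumer index. A hedged
userspace sketch of that comparison (the CQ size here is illustrative, not
the driver's QEDI_CQ_SIZE):

```c
#include <stdint.h>

#define CQ_SIZE 128	/* illustrative stand-in for QEDI_CQ_SIZE */

/* Mirrors qedi_fp_has_work(): the firmware producer index free-runs as a
 * u16, so reduce it modulo the CQ size before comparing with the
 * driver's consumer index; any difference means unprocessed CQEs.
 */
static int fp_has_work(uint16_t fw_prod_idx, uint16_t cons_idx)
{
	return (uint16_t)(fw_prod_idx % CQ_SIZE) != cons_idx;
}
```

Because both sides wrap at the same CQ size, equality after the modulo
reliably means the queue has been drained.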
+
+/* MSI-X fastpath handler code */
+static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
+{
+	struct qedi_fastpath *fp = dev_id;
+	struct qedi_ctx *qedi = fp->qedi;
+	bool wake_io_thread = true;
+
+	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
+
+process_again:
+	wake_io_thread = qedi_process_completions(fp);
+	if (wake_io_thread) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+			  "process already running\n");
+	}
+
+	if (qedi_fp_has_work(fp) == 0)
+		qed_sb_update_sb_idx(fp->sb_info);
+
+	/* Check for more work */
+	rmb();
+
+	if (qedi_fp_has_work(fp) == 0)
+		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
+	else
+		goto process_again;
+
+	return IRQ_HANDLED;
+}
+
+/* simd handler for MSI/INTa */
+static void qedi_simd_int_handler(void *cookie)
+{
+	/* Cookie is qedi_ctx struct */
+	struct qedi_ctx *qedi = (struct qedi_ctx *)cookie;
+
+	QEDI_WARN(&qedi->dbg_ctx, "qedi=%p.\n", qedi);
+}
+
+#define QEDI_SIMD_HANDLER_NUM		0
+static void qedi_sync_free_irqs(struct qedi_ctx *qedi)
+{
+	int i;
+
+	if (qedi->int_info.msix_cnt) {
+		for (i = 0; i < qedi->int_info.used_cnt; i++) {
+			synchronize_irq(qedi->int_info.msix[i].vector);
+			irq_set_affinity_hint(qedi->int_info.msix[i].vector,
+					      NULL);
+			free_irq(qedi->int_info.msix[i].vector,
+				 &qedi->fp_array[i]);
+		}
+	} else {
+		qedi_ops->common->simd_handler_clean(qedi->cdev,
+						     QEDI_SIMD_HANDLER_NUM);
+	}
+
+	qedi->int_info.used_cnt = 0;
+	qedi_ops->common->set_fp_int(qedi->cdev, 0);
+}
+
+static int qedi_request_msix_irq(struct qedi_ctx *qedi)
+{
+	int i, rc, cpu;
+
+	cpu = cpumask_first(cpu_online_mask);
+	for (i = 0; i < MIN_NUM_CPUS_MSIX(qedi); i++) {
+		rc = request_irq(qedi->int_info.msix[i].vector,
+				 qedi_msix_handler, 0, "qedi",
+				 &qedi->fp_array[i]);
+
+		if (rc) {
+			QEDI_WARN(&qedi->dbg_ctx, "request_irq failed.\n");
+			qedi_sync_free_irqs(qedi);
+			return rc;
+		}
+		qedi->int_info.used_cnt++;
+		rc = irq_set_affinity_hint(qedi->int_info.msix[i].vector,
+					   get_cpu_mask(cpu));
+		cpu = cpumask_next(cpu, cpu_online_mask);
+	}
+
+	return 0;
+}
+
+static int qedi_setup_int(struct qedi_ctx *qedi)
+{
+	int rc = 0;
+
+	rc = qedi_ops->common->set_fp_int(qedi->cdev, num_online_cpus());
+	if (rc < 0)
+		goto exit_setup_int;
+
+	rc = qedi_ops->common->get_fp_int(qedi->cdev, &qedi->int_info);
+	if (rc)
+		goto exit_setup_int;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "Number of msix_cnt = 0x%x num of cpus = 0x%x\n",
+		   qedi->int_info.msix_cnt, num_online_cpus());
+
+	if (qedi->int_info.msix_cnt) {
+		rc = qedi_request_msix_irq(qedi);
+		goto exit_setup_int;
+	} else {
+		qedi_ops->common->simd_handler_config(qedi->cdev, &qedi,
+						      QEDI_SIMD_HANDLER_NUM,
+						      qedi_simd_int_handler);
+		qedi->int_info.used_cnt = 1;
+	}
+
+exit_setup_int:
+	return rc;
+}
+
+static void qedi_free_bdq(struct qedi_ctx *qedi)
+{
+	int i;
+
+	if (qedi->bdq_pbl_list)
+		dma_free_coherent(&qedi->pdev->dev, PAGE_SIZE,
+				  qedi->bdq_pbl_list, qedi->bdq_pbl_list_dma);
+
+	if (qedi->bdq_pbl)
+		dma_free_coherent(&qedi->pdev->dev, qedi->bdq_pbl_mem_size,
+				  qedi->bdq_pbl, qedi->bdq_pbl_dma);
+
+	for (i = 0; i < QEDI_BDQ_NUM; i++) {
+		if (qedi->bdq[i].buf_addr) {
+			dma_free_coherent(&qedi->pdev->dev, QEDI_BDQ_BUF_SIZE,
+					  qedi->bdq[i].buf_addr,
+					  qedi->bdq[i].buf_dma);
+		}
+	}
+}
+
+static void qedi_free_global_queues(struct qedi_ctx *qedi)
+{
+	int i;
+	struct global_queue **gl = qedi->global_queues;
+
+	for (i = 0; i < qedi->num_queues; i++) {
+		if (!gl[i])
+			continue;
+
+		if (gl[i]->cq)
+			dma_free_coherent(&qedi->pdev->dev, gl[i]->cq_mem_size,
+					  gl[i]->cq, gl[i]->cq_dma);
+		if (gl[i]->cq_pbl)
+			dma_free_coherent(&qedi->pdev->dev, gl[i]->cq_pbl_size,
+					  gl[i]->cq_pbl, gl[i]->cq_pbl_dma);
+
+		kfree(gl[i]);
+	}
+	qedi_free_bdq(qedi);
+}
+
+static int qedi_alloc_bdq(struct qedi_ctx *qedi)
+{
+	int i;
+	struct scsi_bd *pbl;
+	u64 *list;
+	dma_addr_t page;
+
+	/* Alloc dma memory for BDQ buffers */
+	for (i = 0; i < QEDI_BDQ_NUM; i++) {
+		qedi->bdq[i].buf_addr =
+				dma_alloc_coherent(&qedi->pdev->dev,
+						   QEDI_BDQ_BUF_SIZE,
+						   &qedi->bdq[i].buf_dma,
+						   GFP_KERNEL);
+		if (!qedi->bdq[i].buf_addr) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not allocate BDQ buffer %d.\n", i);
+			return -ENOMEM;
+		}
+	}
+
+	/* Alloc dma memory for BDQ page buffer list */
+	qedi->bdq_pbl_mem_size = QEDI_BDQ_NUM * sizeof(struct scsi_bd);
+	qedi->bdq_pbl_mem_size = ALIGN(qedi->bdq_pbl_mem_size, PAGE_SIZE);
+	qedi->rq_num_entries = qedi->bdq_pbl_mem_size / sizeof(struct scsi_bd);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN, "rq_num_entries = %d.\n",
+		  qedi->rq_num_entries);
+
+	qedi->bdq_pbl = dma_alloc_coherent(&qedi->pdev->dev,
+					   qedi->bdq_pbl_mem_size,
+					   &qedi->bdq_pbl_dma, GFP_KERNEL);
+	if (!qedi->bdq_pbl) {
+		QEDI_ERR(&qedi->dbg_ctx, "Could not allocate BDQ PBL.\n");
+		return -ENOMEM;
+	}
+
+	/*
+	 * Populate BDQ PBL with physical and virtual address of individual
+	 * BDQ buffers
+	 */
+	pbl = (struct scsi_bd  *)qedi->bdq_pbl;
+	for (i = 0; i < QEDI_BDQ_NUM; i++) {
+		pbl->address.hi = cpu_to_le32(upper_32_bits(qedi->bdq[i].buf_dma));
+		pbl->address.lo = cpu_to_le32(lower_32_bits(qedi->bdq[i].buf_dma));
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "pbl [0x%p] pbl->address hi [0x%x] lo [0x%x], idx [%d]\n",
+			  pbl, pbl->address.hi, pbl->address.lo, i);
+		pbl->opaque.hi = 0;
+		pbl->opaque.lo = cpu_to_le32(i);
+		pbl++;
+	}
+
+	/* Allocate list of PBL pages */
+	qedi->bdq_pbl_list = dma_alloc_coherent(&qedi->pdev->dev,
+						PAGE_SIZE,
+						&qedi->bdq_pbl_list_dma,
+						GFP_KERNEL);
+	if (!qedi->bdq_pbl_list) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Could not allocate list of PBL pages.\n");
+		return -ENOMEM;
+	}
+	memset(qedi->bdq_pbl_list, 0, PAGE_SIZE);
+
+	/*
+	 * Now populate PBL list with pages that contain pointers to the
+	 * individual buffers.
+	 */
+	qedi->bdq_pbl_list_num_entries = qedi->bdq_pbl_mem_size / PAGE_SIZE;
+	list = (u64 *)qedi->bdq_pbl_list;
+	page = qedi->bdq_pbl_list_dma;
+	for (i = 0; i < qedi->bdq_pbl_list_num_entries; i++) {
+		*list = qedi->bdq_pbl_dma;
+		list++;
+		page += PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+{
+	u32 *list;
+	int i;
+	int status = 0, rc;
+	u32 *pbl;
+	dma_addr_t page;
+	int num_pages;
+
+	/*
+	 * Number of global queues (CQ / RQ). This should
+	 * be <= number of available MSIX vectors for the PF
+	 */
+	if (!qedi->num_queues) {
+		QEDI_ERR(&qedi->dbg_ctx, "No MSI-X vectors available!\n");
+		return 1;
+	}
+
+	/* Make sure we allocated the PBL that will contain the physical
+	 * addresses of our queues
+	 */
+	if (!qedi->p_cpuq) {
+		status = 1;
+		goto mem_alloc_failure;
+	}
+
+	qedi->global_queues = kcalloc(qedi->num_queues,
+				      sizeof(struct global_queue *), GFP_KERNEL);
+	if (!qedi->global_queues) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Unable to allocate global queues array ptr memory\n");
+		return -ENOMEM;
+	}
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "qedi->global_queues=%p.\n", qedi->global_queues);
+
+	/* Allocate DMA coherent buffers for BDQ */
+	rc = qedi_alloc_bdq(qedi);
+	if (rc)
+		goto mem_alloc_failure;
+
+	/* Allocate a CQ and an associated PBL for each MSI-X
+	 * vector.
+	 */
+	for (i = 0; i < qedi->num_queues; i++) {
+		qedi->global_queues[i] =
+					kzalloc(sizeof(*qedi->global_queues[0]),
+						GFP_KERNEL);
+		if (!qedi->global_queues[i]) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Unable to allocate global queue %d.\n", i);
+			goto mem_alloc_failure;
+		}
+
+		qedi->global_queues[i]->cq_mem_size =
+		    (QEDI_CQ_SIZE + 8) * sizeof(union iscsi_cqe);
+		qedi->global_queues[i]->cq_mem_size =
+		    (qedi->global_queues[i]->cq_mem_size +
+		    (QEDI_PAGE_SIZE - 1));
+
+		qedi->global_queues[i]->cq_pbl_size =
+		    (qedi->global_queues[i]->cq_mem_size /
+		    QEDI_PAGE_SIZE) * sizeof(void *);
+		qedi->global_queues[i]->cq_pbl_size =
+		    (qedi->global_queues[i]->cq_pbl_size +
+		    (QEDI_PAGE_SIZE - 1));
+
+		qedi->global_queues[i]->cq =
+		    dma_alloc_coherent(&qedi->pdev->dev,
+				       qedi->global_queues[i]->cq_mem_size,
+				       &qedi->global_queues[i]->cq_dma,
+				       GFP_KERNEL);
+
+		if (!qedi->global_queues[i]->cq) {
+			QEDI_WARN(&qedi->dbg_ctx,
+				  "Could not allocate cq.\n");
+			status = -ENOMEM;
+			goto mem_alloc_failure;
+		}
+		memset(qedi->global_queues[i]->cq, 0,
+		       qedi->global_queues[i]->cq_mem_size);
+
+		qedi->global_queues[i]->cq_pbl =
+		    dma_alloc_coherent(&qedi->pdev->dev,
+				       qedi->global_queues[i]->cq_pbl_size,
+				       &qedi->global_queues[i]->cq_pbl_dma,
+				       GFP_KERNEL);
+
+		if (!qedi->global_queues[i]->cq_pbl) {
+			QEDI_WARN(&qedi->dbg_ctx,
+				  "Could not allocate cq PBL.\n");
+			status = -ENOMEM;
+			goto mem_alloc_failure;
+		}
+		memset(qedi->global_queues[i]->cq_pbl, 0,
+		       qedi->global_queues[i]->cq_pbl_size);
+
+		/* Create PBL */
+		num_pages = qedi->global_queues[i]->cq_mem_size /
+		    QEDI_PAGE_SIZE;
+		page = qedi->global_queues[i]->cq_dma;
+		pbl = (u32 *)qedi->global_queues[i]->cq_pbl;
+
+		while (num_pages--) {
+			*pbl = (u32)page;
+			pbl++;
+			*pbl = (u32)((u64)page >> 32);
+			pbl++;
+			page += QEDI_PAGE_SIZE;
+		}
+	}
+
+	list = (u32 *)qedi->p_cpuq;
+
+	/*
+	 * The list is built as follows: CQ#0 PBL pointer, RQ#0 PBL pointer,
+	 * CQ#1 PBL pointer, RQ#1 PBL pointer, etc.  Each PBL pointer points
+	 * to the physical address which contains an array of pointers to the
+	 * physical addresses of the specific queue pages.
+	 */
+	for (i = 0; i < qedi->num_queues; i++) {
+		*list = (u32)qedi->global_queues[i]->cq_pbl_dma;
+		list++;
+		*list = (u32)((u64)qedi->global_queues[i]->cq_pbl_dma >> 32);
+		list++;
+
+		*list = 0;
+		list++;
+		*list = 0;
+		list++;
+	}
+
+	return 0;
+
+mem_alloc_failure:
+	qedi_free_global_queues(qedi);
+	return status;
+}
+
+static int qedi_alloc_itt(struct qedi_ctx *qedi)
+{
+	qedi->itt_map = kcalloc(MAX_ISCSI_TASK_ENTRIES,
+				sizeof(struct qedi_itt_map), GFP_KERNEL);
+	if (!qedi->itt_map) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Unable to allocate itt map array memory\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static void qedi_free_itt(struct qedi_ctx *qedi)
+{
+	kfree(qedi->itt_map);
+}
+
+static struct qed_ll2_cb_ops qedi_ll2_cb_ops = {
+	.rx_cb = qedi_ll2_rx,
+	.tx_cb = NULL,
+};
+
+static int qedi_percpu_io_thread(void *arg)
+{
+	struct qedi_percpu_s *p = arg;
+	struct qedi_work *work, *tmp;
+	unsigned long flags;
+	LIST_HEAD(work_list);
+
+	set_user_nice(current, -20);
+
+	while (!kthread_should_stop()) {
+		spin_lock_irqsave(&p->p_work_lock, flags);
+		while (!list_empty(&p->work_list)) {
+			list_splice_init(&p->work_list, &work_list);
+			spin_unlock_irqrestore(&p->p_work_lock, flags);
+
+			list_for_each_entry_safe(work, tmp, &work_list, list) {
+				list_del_init(&work->list);
+				qedi_fp_process_cqes(work->qedi, &work->cqe,
+						     work->que_idx);
+				kfree(work);
+			}
+			spin_lock_irqsave(&p->p_work_lock, flags);
+		}
+		set_current_state(TASK_INTERRUPTIBLE);
+		spin_unlock_irqrestore(&p->p_work_lock, flags);
+		schedule();
+	}
+	__set_current_state(TASK_RUNNING);
+
+	return 0;
+}
+
+static void qedi_percpu_thread_create(unsigned int cpu)
+{
+	struct qedi_percpu_s *p;
+	struct task_struct *thread;
+
+	p = &per_cpu(qedi_percpu, cpu);
+
+	thread = kthread_create_on_node(qedi_percpu_io_thread, (void *)p,
+					cpu_to_node(cpu),
+					"qedi_thread/%d", cpu);
+	if (likely(!IS_ERR(thread))) {
+		kthread_bind(thread, cpu);
+		p->iothread = thread;
+		wake_up_process(thread);
+	}
+}
+
+static void qedi_percpu_thread_destroy(unsigned int cpu)
+{
+	struct qedi_percpu_s *p;
+	struct task_struct *thread;
+	struct qedi_work *work, *tmp;
+
+	p = &per_cpu(qedi_percpu, cpu);
+	spin_lock_bh(&p->p_work_lock);
+	thread = p->iothread;
+	p->iothread = NULL;
+
+	list_for_each_entry_safe(work, tmp, &p->work_list, list) {
+		list_del_init(&work->list);
+		qedi_fp_process_cqes(work->qedi, &work->cqe, work->que_idx);
+		kfree(work);
+	}
+
+	spin_unlock_bh(&p->p_work_lock);
+	if (thread)
+		kthread_stop(thread);
+}
+
+static int qedi_cpu_callback(struct notifier_block *nfb,
+			     unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (unsigned long)hcpu;
+
+	switch (action) {
+	case CPU_ONLINE:
+	case CPU_ONLINE_FROZEN:
+		QEDI_ERR(NULL, "CPU %d online.\n", cpu);
+		qedi_percpu_thread_create(cpu);
+		break;
+	case CPU_DEAD:
+	case CPU_DEAD_FROZEN:
+		QEDI_ERR(NULL, "CPU %d offline.\n", cpu);
+		qedi_percpu_thread_destroy(cpu);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block qedi_cpu_notifier = {
+	.notifier_call = qedi_cpu_callback,
+};
+
+static void __qedi_remove(struct pci_dev *pdev, int mode)
+{
+	struct qedi_ctx *qedi = pci_get_drvdata(pdev);
+
+	if (qedi->tmf_thread) {
+		flush_workqueue(qedi->tmf_thread);
+		destroy_workqueue(qedi->tmf_thread);
+		qedi->tmf_thread = NULL;
+	}
+
+	if (qedi->offload_thread) {
+		flush_workqueue(qedi->offload_thread);
+		destroy_workqueue(qedi->offload_thread);
+		qedi->offload_thread = NULL;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_host_exit(&qedi->dbg_ctx);
+#endif
+	if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags))
+		qedi_ops->common->set_power_state(qedi->cdev, PCI_D0);
+
+	qedi_sync_free_irqs(qedi);
+
+	if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags)) {
+		qedi_ops->stop(qedi->cdev);
+		qedi_ops->ll2->stop(qedi->cdev);
+	}
+
+	if (mode == QEDI_MODE_NORMAL)
+		qedi_free_iscsi_pf_param(qedi);
+
+	if (!test_bit(QEDI_IN_OFFLINE, &qedi->flags)) {
+		qedi_ops->common->slowpath_stop(qedi->cdev);
+		qedi_ops->common->remove(qedi->cdev);
+	}
+
+	qedi_destroy_fp(qedi);
+
+	if (mode == QEDI_MODE_NORMAL) {
+		qedi_release_cid_que(qedi);
+		qedi_cm_free_mem(qedi);
+		qedi_free_uio(qedi->udev);
+		qedi_free_itt(qedi);
+
+		iscsi_host_remove(qedi->shost);
+		iscsi_host_free(qedi->shost);
+
+		if (qedi->ll2_recv_thread) {
+			kthread_stop(qedi->ll2_recv_thread);
+			qedi->ll2_recv_thread = NULL;
+		}
+		qedi_ll2_free_skbs(qedi);
+	}
+}
+
+static int __qedi_probe(struct pci_dev *pdev, int mode)
+{
+	struct qedi_ctx *qedi;
+	struct qed_ll2_params params;
+	u32 dp_module = 0;
+	u8 dp_level = 0;
+	bool is_vf = false;
+	char host_buf[16];
+	struct qed_link_params link_params;
+	struct qed_slowpath_params sp_params;
+	struct qed_probe_params qed_params;
+	void *task_start, *task_end;
+	int rc;
+	u16 tmp;
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		qedi = qedi_host_alloc(pdev);
+		if (!qedi) {
+			rc = -ENOMEM;
+			goto exit_probe;
+		}
+	} else {
+		qedi = pci_get_drvdata(pdev);
+	}
+
+	memset(&qed_params, 0, sizeof(qed_params));
+	qed_params.protocol = QED_PROTOCOL_ISCSI;
+	qed_params.dp_module = dp_module;
+	qed_params.dp_level = dp_level;
+	qed_params.is_vf = is_vf;
+	qedi->cdev = qedi_ops->common->probe(pdev, &qed_params);
+	if (!qedi->cdev) {
+		rc = -ENODEV;
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot initialize hardware\n");
+		goto free_host;
+	}
+
+	qedi->msix_count = MAX_NUM_MSIX_PF;
+	atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		rc = qedi_set_iscsi_pf_param(qedi);
+		if (rc) {
+			rc = -ENOMEM;
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Failed to set iSCSI pf param\n");
+			goto free_host;
+		}
+	}
+
+	qedi_ops->common->update_pf_params(qedi->cdev, &qedi->pf_params);
+
+	rc = qedi_prepare_fp(qedi);
+	if (rc) {
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot start slowpath.\n");
+		goto free_pf_params;
+	}
+
+	/* Start the Slowpath-process */
+	memset(&sp_params, 0, sizeof(struct qed_slowpath_params));
+	sp_params.int_mode = qedi_int_mode_to_enum();
+	sp_params.drv_major = QEDI_DRIVER_MAJOR_VER;
+	sp_params.drv_minor = QEDI_DRIVER_MINOR_VER;
+	sp_params.drv_rev = QEDI_DRIVER_REV_VER;
+	sp_params.drv_eng = QEDI_DRIVER_ENG_VER;
+	strlcpy(sp_params.name, "qedi iSCSI", QED_DRV_VER_STR_SIZE);
+	rc = qedi_ops->common->slowpath_start(qedi->cdev, &sp_params);
+	if (rc) {
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot start slowpath\n");
+		goto stop_hw;
+	}
+
+	/* update_pf_params needs to be called before and after slowpath
+	 * start
+	 */
+	qedi_ops->common->update_pf_params(qedi->cdev, &qedi->pf_params);
+
+	rc = qedi_setup_int(qedi);
+	if (rc)
+		goto stop_iscsi_func;
+
+	qedi_ops->common->set_power_state(qedi->cdev, PCI_D0);
+
+	/* Learn information crucial for qedi to progress */
+	rc = qedi_ops->fill_dev_info(qedi->cdev, &qedi->dev_info);
+	if (rc)
+		goto stop_iscsi_func;
+
+	/* Record BDQ producer doorbell addresses */
+	qedi->bdq_primary_prod = qedi->dev_info.primary_dbq_rq_addr;
+	qedi->bdq_secondary_prod = qedi->dev_info.secondary_bdq_rq_addr;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "BDQ primary_prod=%p secondary_prod=%p.\n",
+		  qedi->bdq_primary_prod,
+		  qedi->bdq_secondary_prod);
+
+	/*
+	 * We need to write the number of BDs in the BDQ we've preallocated so
+	 * the f/w will do a prefetch and we'll get an unsolicited CQE when a
+	 * packet arrives.
+	 */
+	qedi->bdq_prod_idx = QEDI_BDQ_NUM;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "Writing %d to primary and secondary BDQ doorbell registers.\n",
+		  qedi->bdq_prod_idx);
+	writew(qedi->bdq_prod_idx, qedi->bdq_primary_prod);
+	tmp = readw(qedi->bdq_primary_prod);
+	writew(qedi->bdq_prod_idx, qedi->bdq_secondary_prod);
+	tmp = readw(qedi->bdq_secondary_prod);
+
+	ether_addr_copy(qedi->mac, qedi->dev_info.common.hw_mac);
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC, "MAC address is %pM.\n",
+		  qedi->mac);
+
+	sprintf(host_buf, "host_%d", qedi->shost->host_no);
+	qedi_ops->common->set_id(qedi->cdev, host_buf, QEDI_MODULE_VERSION);
+
+	qedi_ops->register_ops(qedi->cdev, &qedi_cb_ops, qedi);
+
+	memset(&params, 0, sizeof(params));
+	params.mtu = DEF_PATH_MTU + IPV6_HDR_LEN + TCP_HDR_LEN;
+	qedi->ll2_mtu = DEF_PATH_MTU;
+	params.drop_ttl0_packets = 0;
+	params.rx_vlan_stripping = 1;
+	ether_addr_copy(params.ll2_mac_address, qedi->dev_info.common.hw_mac);
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		/* set up rx path */
+		INIT_LIST_HEAD(&qedi->ll2_skb_list);
+		spin_lock_init(&qedi->ll2_lock);
+		/* start qedi context */
+		spin_lock_init(&qedi->hba_lock);
+		spin_lock_init(&qedi->task_idx_lock);
+	}
+	qedi_ops->ll2->register_cb_ops(qedi->cdev, &qedi_ll2_cb_ops, qedi);
+	qedi_ops->ll2->start(qedi->cdev, &params);
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		qedi->ll2_recv_thread = kthread_run(qedi_ll2_recv_thread,
+						    (void *)qedi,
+						    "qedi_ll2_thread");
+	}
+
+	rc = qedi_ops->start(qedi->cdev, &qedi->tasks,
+			     qedi, qedi_iscsi_event_cb);
+	if (rc) {
+		rc = -ENODEV;
+		QEDI_ERR(&qedi->dbg_ctx, "Cannot start iSCSI function\n");
+		goto stop_slowpath;
+	}
+
+	task_start = qedi_get_task_mem(&qedi->tasks, 0);
+	task_end = qedi_get_task_mem(&qedi->tasks, MAX_TID_BLOCKS_ISCSI - 1);
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
+		  "Task context start=%p, end=%p block_size=%u.\n",
+		   task_start, task_end, qedi->tasks.size);
+
+	memset(&link_params, 0, sizeof(link_params));
+	link_params.link_up = true;
+	rc = qedi_ops->common->set_link(qedi->cdev, &link_params);
+	if (rc) {
+		QEDI_WARN(&qedi->dbg_ctx, "Link set up failed.\n");
+		atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_host_init(&qedi->dbg_ctx, &qedi_debugfs_ops,
+			   &qedi_dbg_fops);
+#endif
+
+	if (mode != QEDI_MODE_RECOVERY) {
+		if (iscsi_host_add(qedi->shost, &pdev->dev)) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not add iscsi host\n");
+			rc = -ENOMEM;
+			goto remove_host;
+		}
+
+		/* Allocate uio buffers */
+		rc = qedi_alloc_uio_rings(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "UIO alloc ring failed err=%d\n", rc);
+			goto remove_host;
+		}
+
+		rc = qedi_init_uio(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "UIO init failed, err=%d\n", rc);
+			goto free_uio;
+		}
+
+		/* host the array on iscsi_conn */
+		rc = qedi_setup_cid_que(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not setup cid que\n");
+			goto free_uio;
+		}
+
+		rc = qedi_cm_alloc_mem(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not alloc cm memory\n");
+			goto free_cid_que;
+		}
+
+		rc = qedi_alloc_itt(qedi);
+		if (rc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not alloc itt memory\n");
+			goto free_cid_que;
+		}
+
+		sprintf(host_buf, "host_%d", qedi->shost->host_no);
+		qedi->tmf_thread = create_singlethread_workqueue(host_buf);
+		if (!qedi->tmf_thread) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Unable to start tmf thread!\n");
+			rc = -ENODEV;
+			goto free_cid_que;
+		}
+
+		sprintf(host_buf, "qedi_ofld%d", qedi->shost->host_no);
+		qedi->offload_thread = create_workqueue(host_buf);
+		if (!qedi->offload_thread) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Unable to start offload thread!\n");
+			rc = -ENODEV;
+			goto free_cid_que;
+		}
+
+		/* F/w needs 1st task context memory entry for performance */
+		set_bit(QEDI_RESERVE_TASK_ID, qedi->task_idx_map);
+		atomic_set(&qedi->num_offloads, 0);
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "QLogic FastLinQ iSCSI Module qedi %s, FW %d.%d.%d.%d\n",
+		  QEDI_MODULE_VERSION, FW_MAJOR_VERSION, FW_MINOR_VERSION,
+		   FW_REVISION_VERSION, FW_ENGINEERING_VERSION);
+	return 0;
+
+free_cid_que:
+	qedi_release_cid_que(qedi);
+free_uio:
+	qedi_free_uio(qedi->udev);
+remove_host:
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_host_exit(&qedi->dbg_ctx);
+#endif
+	iscsi_host_remove(qedi->shost);
+stop_iscsi_func:
+	qedi_ops->stop(qedi->cdev);
+stop_slowpath:
+	qedi_ops->common->slowpath_stop(qedi->cdev);
+stop_hw:
+	qedi_ops->common->remove(qedi->cdev);
+free_pf_params:
+	qedi_free_iscsi_pf_param(qedi);
+free_host:
+	iscsi_host_free(qedi->shost);
+exit_probe:
+	return rc;
+}
+
+static int qedi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	return __qedi_probe(pdev, QEDI_MODE_NORMAL);
+}
+
+static void qedi_remove(struct pci_dev *pdev)
+{
+	__qedi_remove(pdev, QEDI_MODE_NORMAL);
+}
+
+static struct pci_device_id qedi_pci_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, 0x165E) },
+	{ 0 },
+};
+MODULE_DEVICE_TABLE(pci, qedi_pci_tbl);
+
+static struct pci_driver qedi_pci_driver = {
+	.name = QEDI_MODULE_NAME,
+	.id_table = qedi_pci_tbl,
+	.probe = qedi_probe,
+	.remove = qedi_remove,
+};
+
+static int __init qedi_init(void)
+{
+	int rc = 0;
+	struct qedi_percpu_s *p;
+	unsigned int cpu = 0;
+
+	qedi_ops = qed_get_iscsi_ops();
+	if (!qedi_ops) {
+		QEDI_ERR(NULL, "Failed to get qed iSCSI operations\n");
+		rc = -EINVAL;
+		goto exit_qedi_init_0;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_init("qedi");
+#endif
+
+	register_hotcpu_notifier(&qedi_cpu_notifier);
+
+	rc = pci_register_driver(&qedi_pci_driver);
+	if (rc) {
+		QEDI_ERR(NULL, "Failed to register driver\n");
+		goto exit_qedi_init_2;
+	}
+
+	for_each_possible_cpu(cpu) {
+		p = &per_cpu(qedi_percpu, cpu);
+		INIT_LIST_HEAD(&p->work_list);
+		spin_lock_init(&p->p_work_lock);
+		p->iothread = NULL;
+	}
+
+	for_each_online_cpu(cpu)
+		qedi_percpu_thread_create(cpu);
+
+	return rc;
+
+exit_qedi_init_2:
+	unregister_hotcpu_notifier(&qedi_cpu_notifier);
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_exit();
+#endif
+	qed_put_iscsi_ops();
+exit_qedi_init_0:
+	return rc;
+}
+
+static void __exit qedi_cleanup(void)
+{
+	unsigned int cpu = 0;
+
+	for_each_online_cpu(cpu)
+		qedi_percpu_thread_destroy(cpu);
+
+	pci_unregister_driver(&qedi_pci_driver);
+	unregister_hotcpu_notifier(&qedi_cpu_notifier);
+
+#ifdef CONFIG_DEBUG_FS
+	qedi_dbg_exit();
+#endif
+	qed_put_iscsi_ops();
+}
+
+MODULE_DESCRIPTION("QLogic FastLinQ 4xxxx iSCSI Module");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("QLogic Corporation");
+MODULE_VERSION(QEDI_MODULE_VERSION);
+module_init(qedi_init);
+module_exit(qedi_cleanup);
diff --git a/drivers/scsi/qedi/qedi_sysfs.c b/drivers/scsi/qedi/qedi_sysfs.c
new file mode 100644
index 0000000..a2cc3ed
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_sysfs.c
@@ -0,0 +1,52 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include "qedi.h"
+#include "qedi_gbl.h"
+#include "qedi_iscsi.h"
+#include "qedi_dbg.h"
+
+static inline struct qedi_ctx *qedi_dev_to_hba(struct device *dev)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+
+	return iscsi_host_priv(shost);
+}
+
+static ssize_t qedi_show_port_state(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
+
+	if (atomic_read(&qedi->link_state) == QEDI_LINK_UP)
+		return sprintf(buf, "Online\n");
+	else
+		return sprintf(buf, "Linkdown\n");
+}
+
+static ssize_t qedi_show_speed(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
+	struct qed_link_output if_link;
+
+	qedi_ops->common->get_link(qedi->cdev, &if_link);
+
+	return sprintf(buf, "%d Gbit\n", if_link.speed / 1000);
+}
+
+static DEVICE_ATTR(port_state, S_IRUGO, qedi_show_port_state, NULL);
+static DEVICE_ATTR(speed, S_IRUGO, qedi_show_speed, NULL);
+
+struct device_attribute *qedi_shost_attrs[] = {
+	&dev_attr_port_state,
+	&dev_attr_speed,
+	NULL
+};
diff --git a/drivers/scsi/qedi/qedi_version.h b/drivers/scsi/qedi/qedi_version.h
new file mode 100644
index 0000000..9543a1b
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_version.h
@@ -0,0 +1,14 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#define QEDI_MODULE_VERSION	"8.10.3.0"
+#define QEDI_DRIVER_MAJOR_VER		8
+#define QEDI_DRIVER_MINOR_VER		10
+#define QEDI_DRIVER_REV_VER		3
+#define QEDI_DRIVER_ENG_VER		0
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 38+ messages in thread
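The per-CPU I/O thread in `qedi_percpu_io_thread()` above uses the standard splice-under-lock drain pattern: `list_splice_init()` detaches every queued entry onto a private list while `p_work_lock` is held, and the CQEs are then processed with the lock dropped, so the lock is only held for the splice itself. A stand-alone user-space sketch of that drain loop (a bare singly linked list stands in for `list_head`, and locking is elided; names are illustrative, not driver code):

```c
#include <assert.h>
#include <stddef.h>

/* Models one qedi_work entry queued for the per-CPU thread. */
struct work {
	struct work *next;
	int cqe;	/* stand-in for the completion entry */
};

/* Models list_splice_init(): detach the whole pending list in one
 * operation; this is the only step done under the spinlock. */
static struct work *splice_init(struct work **head)
{
	struct work *batch = *head;

	*head = NULL;	/* pending list is now empty */
	return batch;
}

/* Models the qedi_percpu_io_thread() drain loop: splice, then walk the
 * private batch (lock dropped), then recheck for newly queued work. */
static int drain(struct work **head)
{
	int processed = 0;

	while (*head) {
		struct work *batch = splice_init(head);	/* lock held */

		while (batch) {				/* lock dropped */
			batch = batch->next;
			processed++;	/* qedi_fp_process_cqes() here */
		}
	}
	return processed;
}
```

The payoff of the design is that `qedi_ll2_rx()` in softirq context only ever contends for the lock for a list append, never for the duration of CQE processing.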

* [RFC 4/6] qedi: Add LL2 iSCSI interface for offload iSCSI.
  2016-10-19  5:01 ` manish.rangankar
@ 2016-10-19  5:01   ` manish.rangankar
  -1 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

This patch adds support for the iscsiuio interface using the qed Light L2
(LL2) interface.

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 drivers/scsi/qedi/qedi.h      |  73 +++++++++
 drivers/scsi/qedi/qedi_main.c | 357 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 430 insertions(+)
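For reviewers unfamiliar with the iscsiuio handshake: the rx path in `qedi_ll2_rx()` is a single-producer ring shared with user space. The driver advances `hw_rx_prod` modulo `RX_RING` and wakes the recv thread only while the producer has not caught up with the user-owned `host_rx_cons` index. A minimal sketch of just that index arithmetic (the function name is hypothetical; field names mirror `struct qedi_uio_ctrl`, and the surrounding locking and skb queueing are omitted):

```c
#include <assert.h>
#include <stdint.h>

#define RX_RING 15	/* TX_RX_RING(16) - 1, as defined in qedi.h */

/* Producer-side check modeled on qedi_ll2_rx(): the producer index may
 * only advance while it does not collide with the consumer index that
 * user space (iscsiuio) owns.  Returns 1 if the slot was posted (the
 * driver would then wake the recv thread), 0 if the ring is full. */
static int ll2_rx_ring_post(uint32_t *hw_rx_prod, uint32_t host_rx_cons)
{
	uint32_t prod = (*hw_rx_prod + 1) % RX_RING;

	if (prod == host_rx_cons)
		return 0;	/* ring full: leave producer unchanged */

	*hw_rx_prod = prod;
	return 1;
}
```

One slot is deliberately sacrificed so that `prod == cons` unambiguously means "empty" rather than "full", which is why `RX_RING` is `TX_RX_RING - 1`.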

diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
index 0a5035e..02fefbd 100644
--- a/drivers/scsi/qedi/qedi.h
+++ b/drivers/scsi/qedi/qedi.h
@@ -21,6 +21,7 @@
 #include <linux/qed/qed_if.h>
 #include "qedi_dbg.h"
 #include <linux/qed/qed_iscsi_if.h>
+#include <linux/qed/qed_ll2_if.h>
 #include "qedi_version.h"
 
 #define QEDI_MODULE_NAME		"qedi"
@@ -54,6 +55,77 @@
 #define QEDI_LOCAL_PORT_MAX     61024
 #define QEDI_LOCAL_PORT_RANGE   (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
 #define QEDI_LOCAL_PORT_INVALID	0xffff
+#define TX_RX_RING		16
+#define RX_RING			(TX_RX_RING - 1)
+#define LL2_SINGLE_BUF_SIZE	0x400
+#define QEDI_PAGE_SIZE		4096
+#define QEDI_PAGE_ALIGN(addr)	ALIGN(addr, QEDI_PAGE_SIZE)
+#define QEDI_PAGE_MASK		(~((QEDI_PAGE_SIZE) - 1))
+
+#define QEDI_PATH_HANDLE	0xFE0000000UL
+
+struct qedi_uio_ctrl {
+	/* meta data */
+	u32 uio_hsi_version;
+
+	/* user writes */
+	u32 host_tx_prod;
+	u32 host_rx_cons;
+	u32 host_rx_bd_cons;
+	u32 host_tx_pkt_len;
+	u32 host_rx_cons_cnt;
+
+	/* driver writes */
+	u32 hw_tx_cons;
+	u32 hw_rx_prod;
+	u32 hw_rx_bd_prod;
+	u32 hw_rx_prod_cnt;
+
+	/* other */
+	u8 mac_addr[6];
+	u8 reserve[2];
+};
+
+struct qedi_rx_bd {
+	u32 rx_pkt_index;
+	u32 rx_pkt_len;
+	u16 vlan_id;
+};
+
+#define QEDI_RX_DESC_CNT	(QEDI_PAGE_SIZE / sizeof(struct qedi_rx_bd))
+#define QEDI_MAX_RX_DESC_CNT	(QEDI_RX_DESC_CNT - 1)
+#define QEDI_NUM_RX_BD		(QEDI_RX_DESC_CNT * 1)
+#define QEDI_MAX_RX_BD		(QEDI_NUM_RX_BD - 1)
+
+#define QEDI_NEXT_RX_IDX(x)	((((x) & (QEDI_MAX_RX_DESC_CNT)) ==	\
+				  (QEDI_MAX_RX_DESC_CNT - 1)) ?		\
+				 (x) + 2 : (x) + 1)
+
+struct qedi_uio_dev {
+	struct uio_info		qedi_uinfo;
+	u32			uio_dev;
+	struct list_head	list;
+
+	u32			ll2_ring_size;
+	void			*ll2_ring;
+
+	u32			ll2_buf_size;
+	void			*ll2_buf;
+
+	void			*rx_pkt;
+	void			*tx_pkt;
+
+	struct qedi_ctx		*qedi;
+	struct pci_dev		*pdev;
+	void			*uctrl;
+};
+
+/* List to maintain the skb pointers */
+struct skb_work_list {
+	struct list_head list;
+	struct sk_buff *skb;
+	u16 vlan_id;
+};
 
 /* Queue sizes in number of elements */
 #define QEDI_SQ_SIZE		MAX_OUSTANDING_TASKS_PER_CON
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index 35ab2f9..58ac9a2 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -45,9 +45,12 @@
 static struct scsi_transport_template *qedi_scsi_transport;
 static struct pci_driver qedi_pci_driver;
 static DEFINE_PER_CPU(struct qedi_percpu_s, qedi_percpu);
+static LIST_HEAD(qedi_udev_list);
 /* Static function declaration */
 static int qedi_alloc_global_queues(struct qedi_ctx *qedi);
 static void qedi_free_global_queues(struct qedi_ctx *qedi);
+static void qedi_reset_uio_rings(struct qedi_uio_dev *udev);
+static void qedi_ll2_free_skbs(struct qedi_ctx *qedi);
 
 static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
 {
@@ -112,6 +115,224 @@ static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
 	return rval;
 }
 
+static int qedi_uio_open(struct uio_info *uinfo, struct inode *inode)
+{
+	struct qedi_uio_dev *udev = uinfo->priv;
+	struct qedi_ctx *qedi = udev->qedi;
+
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	if (udev->uio_dev != -1)
+		return -EBUSY;
+
+	rtnl_lock();
+	udev->uio_dev = iminor(inode);
+	qedi_reset_uio_rings(udev);
+	set_bit(UIO_DEV_OPENED, &qedi->flags);
+	rtnl_unlock();
+
+	return 0;
+}
+
+static int qedi_uio_close(struct uio_info *uinfo, struct inode *inode)
+{
+	struct qedi_uio_dev *udev = uinfo->priv;
+	struct qedi_ctx *qedi = udev->qedi;
+
+	udev->uio_dev = -1;
+	clear_bit(UIO_DEV_OPENED, &qedi->flags);
+	qedi_ll2_free_skbs(qedi);
+	return 0;
+}
+
+static void __qedi_free_uio_rings(struct qedi_uio_dev *udev)
+{
+	if (udev->ll2_ring) {
+		free_page((unsigned long)udev->ll2_ring);
+		udev->ll2_ring = NULL;
+	}
+
+	if (udev->ll2_buf) {
+		free_pages((unsigned long)udev->ll2_buf, 2);
+		udev->ll2_buf = NULL;
+	}
+}
+
+static void __qedi_free_uio(struct qedi_uio_dev *udev)
+{
+	uio_unregister_device(&udev->qedi_uinfo);
+
+	__qedi_free_uio_rings(udev);
+
+	pci_dev_put(udev->pdev);
+	kfree(udev->uctrl);
+	kfree(udev);
+}
+
+static void qedi_free_uio(struct qedi_uio_dev *udev)
+{
+	if (!udev)
+		return;
+
+	list_del_init(&udev->list);
+	__qedi_free_uio(udev);
+}
+
+static void qedi_reset_uio_rings(struct qedi_uio_dev *udev)
+{
+	struct qedi_ctx *qedi = NULL;
+	struct qedi_uio_ctrl *uctrl = NULL;
+
+	qedi = udev->qedi;
+	uctrl = udev->uctrl;
+
+	spin_lock_bh(&qedi->ll2_lock);
+	uctrl->host_rx_cons = 0;
+	uctrl->hw_rx_prod = 0;
+	uctrl->hw_rx_bd_prod = 0;
+	uctrl->host_rx_bd_cons = 0;
+
+	memset(udev->ll2_ring, 0, udev->ll2_ring_size);
+	memset(udev->ll2_buf, 0, udev->ll2_buf_size);
+	spin_unlock_bh(&qedi->ll2_lock);
+}
+
+static int __qedi_alloc_uio_rings(struct qedi_uio_dev *udev)
+{
+	int rc = 0;
+
+	if (udev->ll2_ring || udev->ll2_buf)
+		return rc;
+
+	/* Allocating memory for LL2 ring  */
+	udev->ll2_ring_size = QEDI_PAGE_SIZE;
+	udev->ll2_ring = (void *)get_zeroed_page(GFP_KERNEL | __GFP_COMP);
+	if (!udev->ll2_ring) {
+		rc = -ENOMEM;
+		goto exit_alloc_ring;
+	}
+
+	/* Allocating memory for Tx/Rx pkt buffer */
+	udev->ll2_buf_size = TX_RX_RING * LL2_SINGLE_BUF_SIZE;
+	udev->ll2_buf_size = QEDI_PAGE_ALIGN(udev->ll2_buf_size);
+	udev->ll2_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_COMP |
+						 __GFP_ZERO, 2);
+	if (!udev->ll2_buf) {
+		rc = -ENOMEM;
+		goto exit_alloc_buf;
+	}
+	return rc;
+
+exit_alloc_buf:
+	free_page((unsigned long)udev->ll2_ring);
+	udev->ll2_ring = NULL;
+exit_alloc_ring:
+	return rc;
+}
+
+static int qedi_alloc_uio_rings(struct qedi_ctx *qedi)
+{
+	struct qedi_uio_dev *udev = NULL;
+	struct qedi_uio_ctrl *uctrl = NULL;
+	int rc = 0;
+
+	list_for_each_entry(udev, &qedi_udev_list, list) {
+		if (udev->pdev == qedi->pdev) {
+			udev->qedi = qedi;
+			if (__qedi_alloc_uio_rings(udev)) {
* [RFC 4/6] qedi: Add LL2 iSCSI interface for offload iSCSI.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

This patch adds support for the iscsiuio user-space interface using the
qed Light L2 (LL2) interface.
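The Rx path added below (qedi_ll2_rx()) hands frames to iscsiuio through a
small producer/consumer ring: the producer index advances modulo RX_RING and
the ring is considered full when the next producer slot would collide with the
host consumer. As a reading aid, here is a minimal standalone model of that
accounting — simplified types and a hypothetical helper name, not the driver's
actual structures:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone model of the Rx ring accounting in qedi_ll2_rx():
 * TX_RX_RING slots, RX_RING = TX_RX_RING - 1 usable Rx entries,
 * so producer == consumer means "empty" and one slot is sacrificed
 * to distinguish full from empty. */
#define TX_RX_RING 16
#define RX_RING    (TX_RX_RING - 1)

/* Advance the producer index; return 1 if the frame was queued,
 * 0 if the ring is full (next producer would hit the consumer). */
static int rx_ring_produce(uint32_t *hw_rx_prod, uint32_t host_rx_cons)
{
	uint32_t prod = (*hw_rx_prod + 1) % RX_RING;

	if (prod == host_rx_cons)
		return 0;	/* full: caller keeps/drops the frame */
	*hw_rx_prod = prod;
	return 1;
}
```

In the driver the successful case additionally wakes qedi->ll2_recv_thread,
which copies the skb into the mmapped buffer region for iscsiuio.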

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 drivers/scsi/qedi/qedi.h      |  73 +++++++++
 drivers/scsi/qedi/qedi_main.c | 357 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 430 insertions(+)

diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
index 0a5035e..02fefbd 100644
--- a/drivers/scsi/qedi/qedi.h
+++ b/drivers/scsi/qedi/qedi.h
@@ -21,6 +21,7 @@
 #include <linux/qed/qed_if.h>
 #include "qedi_dbg.h"
 #include <linux/qed/qed_iscsi_if.h>
+#include <linux/qed/qed_ll2_if.h>
 #include "qedi_version.h"
 
 #define QEDI_MODULE_NAME		"qedi"
@@ -54,6 +55,78 @@
 #define QEDI_LOCAL_PORT_MAX     61024
 #define QEDI_LOCAL_PORT_RANGE   (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
 #define QEDI_LOCAL_PORT_INVALID	0xffff
+#define TX_RX_RING		16
+#define RX_RING			(TX_RX_RING - 1)
+#define LL2_SINGLE_BUF_SIZE	0x400
+#define QEDI_PAGE_SIZE		4096
+#define QEDI_PAGE_ALIGN(addr)	ALIGN(addr, QEDI_PAGE_SIZE)
+#define QEDI_PAGE_MASK		(~((QEDI_PAGE_SIZE) - 1))
+
+#define QEDI_PATH_HANDLE	0xFE0000000UL
+
+struct qedi_uio_ctrl {
+	/* meta data */
+	u32 uio_hsi_version;
+
+	/* user writes */
+	u32 host_tx_prod;
+	u32 host_rx_cons;
+	u32 host_rx_bd_cons;
+	u32 host_tx_pkt_len;
+	u32 host_rx_cons_cnt;
+
+	/* driver writes */
+	u32 hw_tx_cons;
+	u32 hw_rx_prod;
+	u32 hw_rx_bd_prod;
+	u32 hw_rx_prod_cnt;
+
+	/* other */
+	u8 mac_addr[6];
+	u8 reserve[2];
+};
+
+struct qedi_rx_bd {
+	u32 rx_pkt_index;
+	u32 rx_pkt_len;
+	u16 vlan_id;
+};
+
+#define QEDI_RX_DESC_CNT	(QEDI_PAGE_SIZE / sizeof(struct qedi_rx_bd))
+#define QEDI_MAX_RX_DESC_CNT	(QEDI_RX_DESC_CNT - 1)
+#define QEDI_NUM_RX_BD		(QEDI_RX_DESC_CNT * 1)
+#define QEDI_MAX_RX_BD		(QEDI_NUM_RX_BD - 1)
+
+#define QEDI_NEXT_RX_IDX(x)	((((x) & (QEDI_MAX_RX_DESC_CNT)) ==	\
+				  (QEDI_MAX_RX_DESC_CNT - 1)) ?		\
+				 (x) + 2 : (x) + 1)
+
+struct qedi_uio_dev {
+	struct uio_info		qedi_uinfo;
+	u32			uio_dev;
+	struct list_head	list;
+
+	u32			ll2_ring_size;
+	void			*ll2_ring;
+
+	u32			ll2_buf_size;
+	void			*ll2_buf;
+
+	void			*rx_pkt;
+	void			*tx_pkt;
+
+	struct qedi_ctx		*qedi;
+	struct pci_dev		*pdev;
+	void			*uctrl;
+};
+
+/* List to maintain the skb pointers */
+struct skb_work_list {
+	struct list_head list;
+	struct sk_buff *skb;
+	u16 vlan_id;
+};
 
 /* Queue sizes in number of elements */
 #define QEDI_SQ_SIZE		MAX_OUSTANDING_TASKS_PER_CON
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index 35ab2f9..58ac9a2 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -45,9 +45,12 @@
 static struct scsi_transport_template *qedi_scsi_transport;
 static struct pci_driver qedi_pci_driver;
 static DEFINE_PER_CPU(struct qedi_percpu_s, qedi_percpu);
+static LIST_HEAD(qedi_udev_list);
 /* Static function declaration */
 static int qedi_alloc_global_queues(struct qedi_ctx *qedi);
 static void qedi_free_global_queues(struct qedi_ctx *qedi);
+static void qedi_reset_uio_rings(struct qedi_uio_dev *udev);
+static void qedi_ll2_free_skbs(struct qedi_ctx *qedi);
 
 static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
 {
@@ -112,6 +115,224 @@ static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
 	return rval;
 }
 
+static int qedi_uio_open(struct uio_info *uinfo, struct inode *inode)
+{
+	struct qedi_uio_dev *udev = uinfo->priv;
+	struct qedi_ctx *qedi = udev->qedi;
+
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	if (udev->uio_dev != -1)
+		return -EBUSY;
+
+	rtnl_lock();
+	udev->uio_dev = iminor(inode);
+	qedi_reset_uio_rings(udev);
+	set_bit(UIO_DEV_OPENED, &qedi->flags);
+	rtnl_unlock();
+
+	return 0;
+}
+
+static int qedi_uio_close(struct uio_info *uinfo, struct inode *inode)
+{
+	struct qedi_uio_dev *udev = uinfo->priv;
+	struct qedi_ctx *qedi = udev->qedi;
+
+	udev->uio_dev = -1;
+	clear_bit(UIO_DEV_OPENED, &qedi->flags);
+	qedi_ll2_free_skbs(qedi);
+	return 0;
+}
+
+static void __qedi_free_uio_rings(struct qedi_uio_dev *udev)
+{
+	if (udev->ll2_ring) {
+		free_page((unsigned long)udev->ll2_ring);
+		udev->ll2_ring = NULL;
+	}
+
+	if (udev->ll2_buf) {
+		free_pages((unsigned long)udev->ll2_buf, 2);
+		udev->ll2_buf = NULL;
+	}
+}
+
+static void __qedi_free_uio(struct qedi_uio_dev *udev)
+{
+	uio_unregister_device(&udev->qedi_uinfo);
+
+	__qedi_free_uio_rings(udev);
+
+	pci_dev_put(udev->pdev);
+	kfree(udev->uctrl);
+	kfree(udev);
+}
+
+static void qedi_free_uio(struct qedi_uio_dev *udev)
+{
+	if (!udev)
+		return;
+
+	list_del_init(&udev->list);
+	__qedi_free_uio(udev);
+}
+
+static void qedi_reset_uio_rings(struct qedi_uio_dev *udev)
+{
+	struct qedi_ctx *qedi = NULL;
+	struct qedi_uio_ctrl *uctrl = NULL;
+
+	qedi = udev->qedi;
+	uctrl = udev->uctrl;
+
+	spin_lock_bh(&qedi->ll2_lock);
+	uctrl->host_rx_cons = 0;
+	uctrl->hw_rx_prod = 0;
+	uctrl->hw_rx_bd_prod = 0;
+	uctrl->host_rx_bd_cons = 0;
+
+	memset(udev->ll2_ring, 0, udev->ll2_ring_size);
+	memset(udev->ll2_buf, 0, udev->ll2_buf_size);
+	spin_unlock_bh(&qedi->ll2_lock);
+}
+
+static int __qedi_alloc_uio_rings(struct qedi_uio_dev *udev)
+{
+	int rc = 0;
+
+	if (udev->ll2_ring || udev->ll2_buf)
+		return rc;
+
+	/* Allocating memory for LL2 ring  */
+	udev->ll2_ring_size = QEDI_PAGE_SIZE;
+	udev->ll2_ring = (void *)get_zeroed_page(GFP_KERNEL | __GFP_COMP);
+	if (!udev->ll2_ring) {
+		rc = -ENOMEM;
+		goto exit_alloc_ring;
+	}
+
+	/* Allocating memory for Tx/Rx pkt buffer */
+	udev->ll2_buf_size = TX_RX_RING * LL2_SINGLE_BUF_SIZE;
+	udev->ll2_buf_size = QEDI_PAGE_ALIGN(udev->ll2_buf_size);
+	udev->ll2_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_COMP |
+						 __GFP_ZERO, 2);
+	if (!udev->ll2_buf) {
+		rc = -ENOMEM;
+		goto exit_alloc_buf;
+	}
+	return rc;
+
+exit_alloc_buf:
+	free_page((unsigned long)udev->ll2_ring);
+	udev->ll2_ring = NULL;
+exit_alloc_ring:
+	return rc;
+}
+
+static int qedi_alloc_uio_rings(struct qedi_ctx *qedi)
+{
+	struct qedi_uio_dev *udev = NULL;
+	struct qedi_uio_ctrl *uctrl = NULL;
+	int rc = 0;
+
+	list_for_each_entry(udev, &qedi_udev_list, list) {
+		if (udev->pdev == qedi->pdev) {
+			udev->qedi = qedi;
+			if (__qedi_alloc_uio_rings(udev)) {
+				udev->qedi = NULL;
+				return -ENOMEM;
+			}
+			qedi->udev = udev;
+			return 0;
+		}
+	}
+
+	udev = kzalloc(sizeof(*udev), GFP_KERNEL);
+	if (!udev) {
+		rc = -ENOMEM;
+		goto err_udev;
+	}
+
+	uctrl = kzalloc(sizeof(*uctrl), GFP_KERNEL);
+	if (!uctrl) {
+		rc = -ENOMEM;
+		goto err_uctrl;
+	}
+
+	udev->uio_dev = -1;
+
+	udev->qedi = qedi;
+	udev->pdev = qedi->pdev;
+	udev->uctrl = uctrl;
+
+	rc = __qedi_alloc_uio_rings(udev);
+	if (rc)
+		goto err_uio_rings;
+
+	list_add(&udev->list, &qedi_udev_list);
+
+	pci_dev_get(udev->pdev);
+	qedi->udev = udev;
+
+	udev->tx_pkt = udev->ll2_buf;
+	udev->rx_pkt = udev->ll2_buf + LL2_SINGLE_BUF_SIZE;
+	return 0;
+
+ err_uio_rings:
+	kfree(uctrl);
+ err_uctrl:
+	kfree(udev);
+ err_udev:
+	return -ENOMEM;
+}
+
+static int qedi_init_uio(struct qedi_ctx *qedi)
+{
+	struct qedi_uio_dev *udev = qedi->udev;
+	struct uio_info *uinfo;
+	int ret = 0;
+
+	if (!udev)
+		return -ENOMEM;
+
+	uinfo = &udev->qedi_uinfo;
+
+	uinfo->mem[0].addr = (unsigned long)udev->uctrl;
+	uinfo->mem[0].size = sizeof(struct qedi_uio_ctrl);
+	uinfo->mem[0].memtype = UIO_MEM_LOGICAL;
+
+	uinfo->mem[1].addr = (unsigned long)udev->ll2_ring;
+	uinfo->mem[1].size = udev->ll2_ring_size;
+	uinfo->mem[1].memtype = UIO_MEM_LOGICAL;
+
+	uinfo->mem[2].addr = (unsigned long)udev->ll2_buf;
+	uinfo->mem[2].size = udev->ll2_buf_size;
+	uinfo->mem[2].memtype = UIO_MEM_LOGICAL;
+
+	uinfo->name = "qedi_uio";
+	uinfo->version = QEDI_MODULE_VERSION;
+	uinfo->irq = UIO_IRQ_CUSTOM;
+
+	uinfo->open = qedi_uio_open;
+	uinfo->release = qedi_uio_close;
+
+	if (udev->uio_dev == -1) {
+		if (!uinfo->priv) {
+			uinfo->priv = udev;
+
+			ret = uio_register_device(&udev->pdev->dev, uinfo);
+			if (ret) {
+				QEDI_ERR(&qedi->dbg_ctx,
+					 "UIO registration failed\n");
+			}
+		}
+	}
+
+	return ret;
+}
+
 static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
 				  struct qed_sb_info *sb_info, u16 sb_id)
 {
@@ -441,6 +662,142 @@ static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev)
 	return qedi;
 }
 
+static int qedi_ll2_rx(void *cookie, struct sk_buff *skb, u32 arg1, u32 arg2)
+{
+	struct qedi_ctx *qedi = (struct qedi_ctx *)cookie;
+	struct qedi_uio_dev *udev;
+	struct qedi_uio_ctrl *uctrl;
+	struct skb_work_list *work;
+	u32 prod;
+
+	if (!qedi) {
+		QEDI_ERR(NULL, "qedi is NULL\n");
+		return -1;
+	}
+
+	if (!test_bit(UIO_DEV_OPENED, &qedi->flags)) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_UIO,
+			  "UIO DEV is not opened\n");
+		kfree_skb(skb);
+		return 0;
+	}
+
+	udev = qedi->udev;
+	uctrl = udev->uctrl;
+
+	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	if (!work) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Could not allocate work so dropping frame.\n");
+		kfree_skb(skb);
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&work->list);
+	work->skb = skb;
+
+	if (skb_vlan_tag_present(skb))
+		work->vlan_id = skb_vlan_tag_get(skb);
+
+	if (work->vlan_id)
+		__vlan_insert_tag(work->skb, htons(ETH_P_8021Q), work->vlan_id);
+
+	spin_lock_bh(&qedi->ll2_lock);
+	list_add_tail(&work->list, &qedi->ll2_skb_list);
+
+	++uctrl->hw_rx_prod_cnt;
+	prod = (uctrl->hw_rx_prod + 1) % RX_RING;
+	if (prod != uctrl->host_rx_cons) {
+		uctrl->hw_rx_prod = prod;
+		spin_unlock_bh(&qedi->ll2_lock);
+		wake_up_process(qedi->ll2_recv_thread);
+		return 0;
+	}
+
+	spin_unlock_bh(&qedi->ll2_lock);
+	return 0;
+}
+
+/* Map this skb into the iscsiuio mmapped region */
+static int qedi_ll2_process_skb(struct qedi_ctx *qedi, struct sk_buff *skb,
+				u16 vlan_id)
+{
+	struct qedi_uio_dev *udev = NULL;
+	struct qedi_uio_ctrl *uctrl = NULL;
+	struct qedi_rx_bd rxbd;
+	struct qedi_rx_bd *p_rxbd;
+	u32 rx_bd_prod;
+	void *pkt;
+	int len = 0;
+
+	if (!qedi) {
+		QEDI_ERR(NULL, "qedi is NULL\n");
+		return -1;
+	}
+
+	udev = qedi->udev;
+	uctrl = udev->uctrl;
+	pkt = udev->rx_pkt + (uctrl->hw_rx_prod * LL2_SINGLE_BUF_SIZE);
+	len = min_t(u32, skb->len, (u32)LL2_SINGLE_BUF_SIZE);
+	memcpy(pkt, skb->data, len);
+
+	memset(&rxbd, 0, sizeof(rxbd));
+	rxbd.rx_pkt_index = uctrl->hw_rx_prod;
+	rxbd.rx_pkt_len = len;
+	rxbd.vlan_id = vlan_id;
+
+	uctrl->hw_rx_bd_prod = (uctrl->hw_rx_bd_prod + 1) % QEDI_NUM_RX_BD;
+	rx_bd_prod = uctrl->hw_rx_bd_prod;
+	p_rxbd = (struct qedi_rx_bd *)udev->ll2_ring;
+	p_rxbd += rx_bd_prod;
+
+	memcpy(p_rxbd, &rxbd, sizeof(rxbd));
+
+	/* notify the iscsiuio about new packet */
+	uio_event_notify(&udev->qedi_uinfo);
+
+	return 0;
+}
+
+static void qedi_ll2_free_skbs(struct qedi_ctx *qedi)
+{
+	struct skb_work_list *work, *work_tmp;
+
+	spin_lock_bh(&qedi->ll2_lock);
+	list_for_each_entry_safe(work, work_tmp, &qedi->ll2_skb_list, list) {
+		list_del(&work->list);
+		if (work->skb)
+			kfree_skb(work->skb);
+		kfree(work);
+	}
+	spin_unlock_bh(&qedi->ll2_lock);
+}
+
+static int qedi_ll2_recv_thread(void *arg)
+{
+	struct qedi_ctx *qedi = (struct qedi_ctx *)arg;
+	struct skb_work_list *work, *work_tmp;
+
+	set_user_nice(current, -20);
+
+	while (!kthread_should_stop()) {
+		spin_lock_bh(&qedi->ll2_lock);
+		list_for_each_entry_safe(work, work_tmp, &qedi->ll2_skb_list,
+					 list) {
+			list_del(&work->list);
+			qedi_ll2_process_skb(qedi, work->skb, work->vlan_id);
+			kfree_skb(work->skb);
+			kfree(work);
+		}
+		set_current_state(TASK_INTERRUPTIBLE);
+		spin_unlock_bh(&qedi->ll2_lock);
+		schedule();
+	}
+
+	__set_current_state(TASK_RUNNING);
+	return 0;
+}
+
 static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi)
 {
 	u8 num_sq_pages;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [RFC 5/6] qedi: Add support for iSCSI session management.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

This patch adds iscsi_transport LLD support for Login, Logout,
NOP-IN/NOP-OUT, Async and Reject PDU processing, as well as
firmware async event handling.
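The PDU completion handlers below repeatedly pack the 24-bit iSCSI
DataSegmentLength into the 3-byte dlength field of the BHS with hton24().
A standalone sketch of that byte layout (an illustration with local helper
definitions, not the kernel's own macros) is:

```c
#include <assert.h>
#include <stdint.h>

/* Store a 24-bit value into 3 bytes, most significant byte first,
 * as the iSCSI BHS DataSegmentLength field requires. */
static void hton24(uint8_t *p, uint32_t v)
{
	p[0] = (v >> 16) & 0xff;
	p[1] = (v >> 8) & 0xff;
	p[2] = v & 0xff;
}

/* Recover the 24-bit value from the 3-byte big-endian field. */
static uint32_t ntoh24(const uint8_t *p)
{
	return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | p[2];
}
```

This mirrors calls such as hton24(resp_hdr_ptr->dlength, pld_len) in the
text/login/reject response handlers, where pld_len is first masked with the
firmware's DATA_SEG_LEN mask.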

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 drivers/scsi/qedi/qedi_fw.c    | 1123 ++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_gbl.h   |   67 ++
 drivers/scsi/qedi/qedi_iscsi.c | 1604 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_iscsi.h |  228 ++++++
 drivers/scsi/qedi/qedi_main.c  |  164 ++++
 5 files changed, 3186 insertions(+)
 create mode 100644 drivers/scsi/qedi/qedi_fw.c
 create mode 100644 drivers/scsi/qedi/qedi_gbl.h
 create mode 100644 drivers/scsi/qedi/qedi_iscsi.c
 create mode 100644 drivers/scsi/qedi/qedi_iscsi.h

diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
new file mode 100644
index 0000000..a820785
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_fw.c
@@ -0,0 +1,1123 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/blkdev.h>
+#include <scsi/scsi_tcq.h>
+#include <linux/delay.h>
+
+#include "qedi.h"
+#include "qedi_iscsi.h"
+#include "qedi_gbl.h"
+
+static int qedi_send_iscsi_tmf(struct qedi_conn *qedi_conn,
+			       struct iscsi_task *mtask);
+
+void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd)
+{
+	struct scsi_cmnd *sc = cmd->scsi_cmd;
+
+	if (cmd->io_tbl.sge_valid && sc) {
+		scsi_dma_unmap(sc);
+		cmd->io_tbl.sge_valid = 0;
+	}
+}
+
+static void qedi_process_logout_resp(struct qedi_ctx *qedi,
+				     union iscsi_cqe *cqe,
+				     struct iscsi_task *task,
+				     struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_logout_rsp *resp_hdr;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_logout_response_hdr *cqe_logout_response;
+	struct qedi_cmd *cmd;
+
+	cmd = (struct qedi_cmd *)task->dd_data;
+	cqe_logout_response = &cqe->cqe_common.iscsi_hdr.logout_response;
+	spin_lock(&session->back_lock);
+	resp_hdr = (struct iscsi_logout_rsp *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
+	resp_hdr->opcode = cqe_logout_response->opcode;
+	resp_hdr->flags = cqe_logout_response->flags;
+	resp_hdr->hlength = 0;
+
+	resp_hdr->itt = build_itt(cqe->cqe_solicited.itid, conn->session->age);
+	resp_hdr->statsn = cpu_to_be32(cqe_logout_response->stat_sn);
+	resp_hdr->exp_cmdsn = cpu_to_be32(cqe_logout_response->exp_cmd_sn);
+	resp_hdr->max_cmdsn = cpu_to_be32(cqe_logout_response->max_cmd_sn);
+
+	resp_hdr->t2wait = cpu_to_be32(cqe_logout_response->time2wait);
+	resp_hdr->t2retain = cpu_to_be32(cqe_logout_response->time2retain);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	} else {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Active cmd list node already deleted, tid=0x%x, cid=0x%x, io_cmd_node=%p\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id,
+			  &cmd->io_cmd);
+	}
+
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, NULL, 0);
+
+	spin_unlock(&session->back_lock);
+}
+
+static void qedi_process_text_resp(struct qedi_ctx *qedi,
+				   union iscsi_cqe *cqe,
+				   struct iscsi_task *task,
+				   struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_task_context *task_ctx;
+	struct iscsi_text_rsp *resp_hdr_ptr;
+	struct iscsi_text_response_hdr *cqe_text_response;
+	struct qedi_cmd *cmd;
+	int pld_len;
+	u32 *tmp;
+
+	cmd = (struct qedi_cmd *)task->dd_data;
+	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
+								  cmd->task_id);
+
+	cqe_text_response = &cqe->cqe_common.iscsi_hdr.text_response;
+	spin_lock(&session->back_lock);
+	resp_hdr_ptr =  (struct iscsi_text_rsp *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_hdr));
+	resp_hdr_ptr->opcode = cqe_text_response->opcode;
+	resp_hdr_ptr->flags = cqe_text_response->flags;
+	resp_hdr_ptr->hlength = 0;
+
+	hton24(resp_hdr_ptr->dlength,
+	       (cqe_text_response->hdr_second_dword &
+		ISCSI_TEXT_RESPONSE_HDR_DATA_SEG_LEN_MASK));
+	tmp = (u32 *)resp_hdr_ptr->dlength;
+
+	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
+				      conn->session->age);
+	resp_hdr_ptr->ttt = cqe_text_response->ttt;
+	resp_hdr_ptr->statsn = cpu_to_be32(cqe_text_response->stat_sn);
+	resp_hdr_ptr->exp_cmdsn = cpu_to_be32(cqe_text_response->exp_cmd_sn);
+	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_text_response->max_cmd_sn);
+
+	pld_len = cqe_text_response->hdr_second_dword &
+		  ISCSI_TEXT_RESPONSE_HDR_DATA_SEG_LEN_MASK;
+	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
+
+	memset(task_ctx, '\0', sizeof(*task_ctx));
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	} else {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Active cmd list node already deleted, tid=0x%x, cid=0x%x, io_cmd_node=%p\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id,
+			  &cmd->io_cmd);
+	}
+
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
+			     qedi_conn->gen_pdu.resp_buf,
+			     (qedi_conn->gen_pdu.resp_wr_ptr -
+			      qedi_conn->gen_pdu.resp_buf));
+	spin_unlock(&session->back_lock);
+}
+
+static void qedi_process_login_resp(struct qedi_ctx *qedi,
+				    union iscsi_cqe *cqe,
+				    struct iscsi_task *task,
+				    struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_task_context *task_ctx;
+	struct iscsi_login_rsp *resp_hdr_ptr;
+	struct iscsi_login_response_hdr *cqe_login_response;
+	struct qedi_cmd *cmd;
+	int pld_len;
+	u32 *tmp;
+
+	cmd = (struct qedi_cmd *)task->dd_data;
+
+	cqe_login_response = &cqe->cqe_common.iscsi_hdr.login_response;
+	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
+							  cmd->task_id);
+	spin_lock(&session->back_lock);
+	resp_hdr_ptr =  (struct iscsi_login_rsp *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_login_rsp));
+	resp_hdr_ptr->opcode = cqe_login_response->opcode;
+	resp_hdr_ptr->flags = cqe_login_response->flags_attr;
+	resp_hdr_ptr->hlength = 0;
+
+	hton24(resp_hdr_ptr->dlength,
+	       (cqe_login_response->hdr_second_dword &
+		ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK));
+	tmp = (u32 *)resp_hdr_ptr->dlength;
+	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
+				      conn->session->age);
+	resp_hdr_ptr->tsih = cqe_login_response->tsih;
+	resp_hdr_ptr->statsn = cpu_to_be32(cqe_login_response->stat_sn);
+	resp_hdr_ptr->exp_cmdsn = cpu_to_be32(cqe_login_response->exp_cmd_sn);
+	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_login_response->max_cmd_sn);
+	resp_hdr_ptr->status_class = cqe_login_response->status_class;
+	resp_hdr_ptr->status_detail = cqe_login_response->status_detail;
+	pld_len = cqe_login_response->hdr_second_dword &
+		  ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK;
+	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
+
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	}
+
+	memset(task_ctx, '\0', sizeof(*task_ctx));
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
+			     qedi_conn->gen_pdu.resp_buf,
+			     (qedi_conn->gen_pdu.resp_wr_ptr -
+			     qedi_conn->gen_pdu.resp_buf));
+
+	spin_unlock(&session->back_lock);
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+}
+
+static void qedi_get_rq_bdq_buf(struct qedi_ctx *qedi,
+				struct iscsi_cqe_unsolicited *cqe,
+				char *ptr, int len)
+{
+	u16 idx = 0;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "pld_len [%d], bdq_prod_idx [%d], idx [%d]\n",
+		  len, qedi->bdq_prod_idx,
+		  (qedi->bdq_prod_idx % qedi->rq_num_entries));
+
+	/* Obtain buffer address from rqe_opaque */
+	idx = cqe->rqe_opaque.lo;
+	if (idx > (QEDI_BDQ_NUM - 1)) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "wrong idx %d returned by FW, dropping the unsolicited pkt\n",
+			  idx);
+		return;
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "rqe_opaque.lo [0x%p], rqe_opaque.hi [0x%p], idx [%d]\n",
+		  cqe->rqe_opaque.lo, cqe->rqe_opaque.hi, idx);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "unsol_cqe_type = %d\n", cqe->unsol_cqe_type);
+	switch (cqe->unsol_cqe_type) {
+	case ISCSI_CQE_UNSOLICITED_SINGLE:
+	case ISCSI_CQE_UNSOLICITED_FIRST:
+		if (len)
+			memcpy(ptr, (void *)qedi->bdq[idx].buf_addr, len);
+		break;
+	case ISCSI_CQE_UNSOLICITED_MIDDLE:
+	case ISCSI_CQE_UNSOLICITED_LAST:
+		break;
+	default:
+		break;
+	}
+}
+
+static void qedi_put_rq_bdq_buf(struct qedi_ctx *qedi,
+				struct iscsi_cqe_unsolicited *cqe,
+				int count)
+{
+	u16 tmp;
+	u16 idx = 0;
+	struct scsi_bd *pbl;
+
+	/* Obtain buffer address from rqe_opaque */
+	idx = cqe->rqe_opaque.lo;
+	if (idx > (QEDI_BDQ_NUM - 1)) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "wrong idx %d returned by FW, dropping the unsolicited pkt\n",
+			  idx);
+		return;
+	}
+
+	pbl = (struct scsi_bd *)qedi->bdq_pbl;
+	pbl += (qedi->bdq_prod_idx % qedi->rq_num_entries);
+	pbl->address.hi =
+		      cpu_to_le32((u32)(((u64)(qedi->bdq[idx].buf_dma)) >> 32));
+	pbl->address.lo =
+			cpu_to_le32(((u32)(((u64)(qedi->bdq[idx].buf_dma)) &
+					    0xffffffff)));
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "pbl [0x%p] pbl->address hi [0x%llx] lo [0x%llx] idx [%d]\n",
+		  pbl, pbl->address.hi, pbl->address.lo, idx);
+	pbl->opaque.hi = cpu_to_le32((u32)(((u64)0) >> 32));
+	pbl->opaque.lo = cpu_to_le32(((u32)(((u64)idx) & 0xffffffff)));
+
+	/* Increment producer to let f/w know we've handled the frame */
+	qedi->bdq_prod_idx += count;
+
+	writew(qedi->bdq_prod_idx, qedi->bdq_primary_prod);
+	tmp = readw(qedi->bdq_primary_prod);
+
+	writew(qedi->bdq_prod_idx, qedi->bdq_secondary_prod);
+	tmp = readw(qedi->bdq_secondary_prod);
+}
+
+static void qedi_unsol_pdu_adjust_bdq(struct qedi_ctx *qedi,
+				      struct iscsi_cqe_unsolicited *cqe,
+				      u32 pdu_len, u32 num_bdqs,
+				      char *bdq_data)
+{
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "num_bdqs [%d]\n", num_bdqs);
+
+	qedi_get_rq_bdq_buf(qedi, cqe, bdq_data, pdu_len);
+	qedi_put_rq_bdq_buf(qedi, cqe, (num_bdqs + 1));
+}
+
+static int qedi_process_nopin_mesg(struct qedi_ctx *qedi,
+				   union iscsi_cqe *cqe,
+				   struct iscsi_task *task,
+				   struct qedi_conn *qedi_conn, u16 que_idx)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_nop_in_hdr *cqe_nop_in;
+	struct iscsi_nopin *hdr;
+	struct qedi_cmd *cmd;
+	int tgt_async_nop = 0;
+	u32 scsi_lun[2];
+	u32 pdu_len, num_bdqs;
+	char bdq_data[QEDI_BDQ_BUF_SIZE];
+	unsigned long flags;
+
+	spin_lock_bh(&session->back_lock);
+	cqe_nop_in = &cqe->cqe_common.iscsi_hdr.nop_in;
+
+	pdu_len = cqe_nop_in->hdr_second_dword &
+		  ISCSI_NOP_IN_HDR_DATA_SEG_LEN_MASK;
+	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
+
+	hdr = (struct iscsi_nopin *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(hdr, 0, sizeof(struct iscsi_hdr));
+	hdr->opcode = cqe_nop_in->opcode;
+	hdr->max_cmdsn = cpu_to_be32(cqe_nop_in->max_cmd_sn);
+	hdr->exp_cmdsn = cpu_to_be32(cqe_nop_in->exp_cmd_sn);
+	hdr->statsn = cpu_to_be32(cqe_nop_in->stat_sn);
+	hdr->ttt = cpu_to_be32(cqe_nop_in->ttt);
+
+	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
+					  pdu_len, num_bdqs, bdq_data);
+		hdr->itt = RESERVED_ITT;
+		tgt_async_nop = 1;
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+		goto done;
+	}
+
+	/* Response to one of our nop-outs */
+	if (task) {
+		cmd = task->dd_data;
+		hdr->flags = ISCSI_FLAG_CMD_FINAL;
+		hdr->itt = build_itt(cqe->cqe_solicited.itid,
+				     conn->session->age);
+		scsi_lun[0] = 0xffffffff;
+		scsi_lun[1] = 0xffffffff;
+		memcpy(&hdr->lun, scsi_lun, sizeof(struct scsi_lun));
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+			  "Freeing tid=0x%x for cid=0x%x\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id);
+		cmd->state = RESPONSE_RECEIVED;
+		spin_lock(&qedi_conn->list_lock);
+		if (likely(cmd->io_cmd_in_list)) {
+			cmd->io_cmd_in_list = false;
+			list_del_init(&cmd->io_cmd);
+			qedi_conn->active_cmd_count--;
+		}
+
+		spin_unlock(&qedi_conn->list_lock);
+		qedi_clear_task_idx(qedi, cmd->task_id);
+	}
+
+done:
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr, bdq_data, pdu_len);
+
+	spin_unlock_bh(&session->back_lock);
+	return tgt_async_nop;
+}
+
+static void qedi_process_async_mesg(struct qedi_ctx *qedi,
+				    union iscsi_cqe *cqe,
+				    struct iscsi_task *task,
+				    struct qedi_conn *qedi_conn,
+				    u16 que_idx)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_async_msg_hdr *cqe_async_msg;
+	struct iscsi_async *resp_hdr;
+	u32 scsi_lun[2];
+	u32 pdu_len, num_bdqs;
+	char bdq_data[QEDI_BDQ_BUF_SIZE];
+	unsigned long flags;
+
+	spin_lock_bh(&session->back_lock);
+
+	cqe_async_msg = &cqe->cqe_common.iscsi_hdr.async_msg;
+	pdu_len = cqe_async_msg->hdr_second_dword &
+		ISCSI_ASYNC_MSG_HDR_DATA_SEG_LEN_MASK;
+	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
+
+	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
+					  pdu_len, num_bdqs, bdq_data);
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+	}
+
+	resp_hdr = (struct iscsi_async *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
+	resp_hdr->opcode = cqe_async_msg->opcode;
+	resp_hdr->flags = 0x80;
+
+	scsi_lun[0] = cpu_to_be32(cqe_async_msg->lun.lo);
+	scsi_lun[1] = cpu_to_be32(cqe_async_msg->lun.hi);
+	memcpy(&resp_hdr->lun, scsi_lun, sizeof(struct scsi_lun));
+	resp_hdr->exp_cmdsn = cpu_to_be32(cqe_async_msg->exp_cmd_sn);
+	resp_hdr->max_cmdsn = cpu_to_be32(cqe_async_msg->max_cmd_sn);
+	resp_hdr->statsn = cpu_to_be32(cqe_async_msg->stat_sn);
+
+	resp_hdr->async_event = cqe_async_msg->async_event;
+	resp_hdr->async_vcode = cqe_async_msg->async_vcode;
+
+	resp_hdr->param1 = cpu_to_be16(cqe_async_msg->param1_rsrv);
+	resp_hdr->param2 = cpu_to_be16(cqe_async_msg->param2_rsrv);
+	resp_hdr->param3 = cpu_to_be16(cqe_async_msg->param3_rsrv);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, bdq_data,
+			     pdu_len);
+
+	spin_unlock_bh(&session->back_lock);
+}
+
+static void qedi_process_reject_mesg(struct qedi_ctx *qedi,
+				     union iscsi_cqe *cqe,
+				     struct iscsi_task *task,
+				     struct qedi_conn *qedi_conn,
+				     uint16_t que_idx)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_reject_hdr *cqe_reject;
+	struct iscsi_reject *hdr;
+	u32 pld_len, num_bdqs;
+	unsigned long flags;
+
+	spin_lock_bh(&session->back_lock);
+	cqe_reject = &cqe->cqe_common.iscsi_hdr.reject;
+	pld_len = cqe_reject->hdr_second_dword &
+		  ISCSI_REJECT_HDR_DATA_SEG_LEN_MASK;
+	num_bdqs = pld_len / QEDI_BDQ_BUF_SIZE;
+
+	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
+					  pld_len, num_bdqs, conn->data);
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+	}
+	hdr = (struct iscsi_reject *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(hdr, 0, sizeof(struct iscsi_hdr));
+	hdr->opcode = cqe_reject->opcode;
+	hdr->reason = cqe_reject->hdr_reason;
+	hdr->flags = cqe_reject->hdr_flags;
+	hton24(hdr->dlength, (cqe_reject->hdr_second_dword &
+			      ISCSI_REJECT_HDR_DATA_SEG_LEN_MASK));
+	hdr->max_cmdsn = cpu_to_be32(cqe_reject->max_cmd_sn);
+	hdr->exp_cmdsn = cpu_to_be32(cqe_reject->exp_cmd_sn);
+	hdr->statsn = cpu_to_be32(cqe_reject->stat_sn);
+	hdr->ffffffff = cpu_to_be32(0xffffffff);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
+			     conn->data, pld_len);
+	spin_unlock_bh(&session->back_lock);
+}
+
+static void qedi_mtask_completion(struct qedi_ctx *qedi,
+				  union iscsi_cqe *cqe,
+				  struct iscsi_task *task,
+				  struct qedi_conn *conn, uint16_t que_idx)
+{
+	struct iscsi_conn *iscsi_conn;
+	u32 hdr_opcode;
+
+	hdr_opcode = cqe->cqe_common.iscsi_hdr.common.hdr_first_byte;
+	iscsi_conn = conn->cls_conn->dd_data;
+
+	switch (hdr_opcode) {
+	case ISCSI_OPCODE_LOGIN_RESPONSE:
+		qedi_process_login_resp(qedi, cqe, task, conn);
+		break;
+	case ISCSI_OPCODE_TEXT_RESPONSE:
+		qedi_process_text_resp(qedi, cqe, task, conn);
+		break;
+	case ISCSI_OPCODE_LOGOUT_RESPONSE:
+		qedi_process_logout_resp(qedi, cqe, task, conn);
+		break;
+	case ISCSI_OPCODE_NOP_IN:
+		qedi_process_nopin_mesg(qedi, cqe, task, conn, que_idx);
+		break;
+	default:
+		QEDI_ERR(&qedi->dbg_ctx, "unknown opcode 0x%x\n", hdr_opcode);
+	}
+}
+
+static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
+					  struct iscsi_cqe_solicited *cqe,
+					  struct iscsi_task *task,
+					  struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct qedi_cmd *cmd = task->dd_data;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_UNSOL,
+		  "itid=0x%x, cmd task id=0x%x\n",
+		  cqe->itid, cmd->task_id);
+
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+
+	spin_lock_bh(&session->back_lock);
+	__iscsi_put_task(task);
+	spin_unlock_bh(&session->back_lock);
+}
+
+void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
+			  uint16_t que_idx)
+{
+	struct iscsi_task *task = NULL;
+	struct iscsi_nopout *nopout_hdr;
+	struct qedi_conn *q_conn;
+	struct iscsi_conn *conn;
+	struct iscsi_task_context *fw_task_ctx;
+	u32 comp_type;
+	u32 iscsi_cid;
+	u32 hdr_opcode;
+	u32 ptmp_itt = 0;
+	itt_t proto_itt = 0;
+	u8 cqe_err_bits = 0;
+
+	comp_type = cqe->cqe_common.cqe_type;
+	hdr_opcode = cqe->cqe_common.iscsi_hdr.common.hdr_first_byte;
+	cqe_err_bits =
+		cqe->cqe_common.error_bitmap.error_bits.cqe_error_status_bits;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "fw_cid=0x%x, cqe type=0x%x, opcode=0x%x\n",
+		  cqe->cqe_common.conn_id, comp_type, hdr_opcode);
+
+	if (comp_type >= MAX_ISCSI_CQES_TYPE) {
+		QEDI_WARN(&qedi->dbg_ctx, "Invalid CQE type 0x%x\n", comp_type);
+		return;
+	}
+
+	iscsi_cid  = cqe->cqe_common.conn_id;
+	q_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
+	if (!q_conn) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Session no longer exists for cid=0x%x!!\n",
+			  iscsi_cid);
+		return;
+	}
+
+	conn = q_conn->cls_conn->dd_data;
+
+	if (unlikely(cqe_err_bits &&
+		     GET_FIELD(cqe_err_bits,
+			       CQE_ERROR_BITMAP_DATA_DIGEST_ERR))) {
+		iscsi_conn_failure(conn, ISCSI_ERR_DATA_DGST);
+		return;
+	}
+
+	switch (comp_type) {
+	case ISCSI_CQE_TYPE_SOLICITED:
+	case ISCSI_CQE_TYPE_SOLICITED_WITH_SENSE:
+		fw_task_ctx =
+		  (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
+						      cqe->cqe_solicited.itid);
+		if (fw_task_ctx->ystorm_st_context.state.local_comp == 1) {
+			qedi_get_proto_itt(qedi, cqe->cqe_solicited.itid,
+					   &ptmp_itt);
+			proto_itt = build_itt(ptmp_itt, conn->session->age);
+		} else {
+			cqe->cqe_solicited.itid =
+					    qedi_get_itt(cqe->cqe_solicited);
+			proto_itt = build_itt(cqe->cqe_solicited.itid,
+					      conn->session->age);
+		}
+
+		spin_lock_bh(&conn->session->back_lock);
+		task = iscsi_itt_to_task(conn, proto_itt);
+		spin_unlock_bh(&conn->session->back_lock);
+
+		if (!task) {
+			QEDI_WARN(&qedi->dbg_ctx, "task is NULL\n");
+			return;
+		}
+
+		/* Process NOPIN local completion */
+		nopout_hdr = (struct iscsi_nopout *)task->hdr;
+		if ((nopout_hdr->itt == RESERVED_ITT) &&
+		    (cqe->cqe_solicited.itid != (u16)RESERVED_ITT))
+			qedi_process_nopin_local_cmpl(qedi, &cqe->cqe_solicited,
+						      task, q_conn);
+		else
+			/* Process other solicited responses */
+			qedi_mtask_completion(qedi, cqe, task, q_conn, que_idx);
+		break;
+	case ISCSI_CQE_TYPE_UNSOLICITED:
+		switch (hdr_opcode) {
+		case ISCSI_OPCODE_NOP_IN:
+			qedi_process_nopin_mesg(qedi, cqe, task, q_conn,
+						que_idx);
+			break;
+		case ISCSI_OPCODE_ASYNC_MSG:
+			qedi_process_async_mesg(qedi, cqe, task, q_conn,
+						que_idx);
+			break;
+		case ISCSI_OPCODE_REJECT:
+			qedi_process_reject_mesg(qedi, cqe, task, q_conn,
+						 que_idx);
+			break;
+		}
+		break;
+	default:
+		QEDI_ERR(&qedi->dbg_ctx, "Error cqe, type=0x%x\n", comp_type);
+		break;
+	}
+}
+
+static void qedi_add_to_sq(struct qedi_conn *qedi_conn, struct iscsi_task *task,
+			   u16 tid, uint16_t ptu_invalidate, int is_cleanup)
+{
+	struct iscsi_wqe *wqe;
+	struct iscsi_wqe_field *cont_field;
+	struct qedi_endpoint *ep;
+	struct scsi_cmnd *sc = task->sc;
+	struct iscsi_login_req *login_hdr;
+	struct qedi_cmd *cmd = task->dd_data;
+
+	login_hdr = (struct iscsi_login_req *)task->hdr;
+	ep = qedi_conn->ep;
+	wqe = &ep->sq[ep->sq_prod_idx];
+
+	memset(wqe, 0, sizeof(*wqe));
+
+	ep->sq_prod_idx++;
+	ep->fw_sq_prod_idx++;
+	if (ep->sq_prod_idx == QEDI_SQ_SIZE)
+		ep->sq_prod_idx = 0;
+
+	if (is_cleanup) {
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_TASK_CLEANUP);
+		wqe->task_id = tid;
+		return;
+	}
+
+	if (ptu_invalidate) {
+		SET_FIELD(wqe->flags, ISCSI_WQE_PTU_INVALIDATE,
+			  ISCSI_WQE_SET_PTU_INVALIDATE);
+	}
+
+	cont_field = &wqe->cont_prevtid_union.cont_field;
+
+	switch (task->hdr->opcode & ISCSI_OPCODE_MASK) {
+	case ISCSI_OP_LOGIN:
+	case ISCSI_OP_TEXT:
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_MIDDLE_PATH);
+		SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES,
+			  1);
+		cont_field->contlen_cdbsize_field = ntoh24(login_hdr->dlength);
+		break;
+	case ISCSI_OP_LOGOUT:
+	case ISCSI_OP_NOOP_OUT:
+	case ISCSI_OP_SCSI_TMFUNC:
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_NORMAL);
+		break;
+	default:
+		if (!sc)
+			break;
+
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_NORMAL);
+		cont_field->contlen_cdbsize_field =
+				(sc->sc_data_direction == DMA_TO_DEVICE) ?
+				scsi_bufflen(sc) : 0;
+		if (cmd->use_slowpath)
+			SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES, 0);
+		else
+			SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES,
+				  (sc->sc_data_direction ==
+				   DMA_TO_DEVICE) ?
+				  min((u16)QEDI_FAST_SGE_COUNT,
+				      (u16)cmd->io_tbl.sge_valid) : 0);
+		break;
+	}
+
+	wqe->task_id = tid;
+	/* Make sure SQ data is coherent */
+	wmb();
+}
+
+static void qedi_ring_doorbell(struct qedi_conn *qedi_conn)
+{
+	struct iscsi_db_data dbell = { 0 };
+
+	dbell.params |= DB_DEST_XCM << ISCSI_DB_DATA_DEST_SHIFT;
+	dbell.params |= DB_AGG_CMD_SET << ISCSI_DB_DATA_AGG_CMD_SHIFT;
+	dbell.params |=
+		   DQ_XCM_ISCSI_SQ_PROD_CMD << ISCSI_DB_DATA_AGG_VAL_SEL_SHIFT;
+
+	dbell.sq_prod = qedi_conn->ep->fw_sq_prod_idx;
+	writel(*(u32 *)&dbell, qedi_conn->ep->p_doorbell);
+	/* Make sure fw idx is coherent */
+	wmb();
+	mmiowb();
+	QEDI_INFO(&qedi_conn->qedi->dbg_ctx, QEDI_LOG_MP_REQ,
+		  "prod_idx=0x%x, fw_prod_idx=0x%x, cid=0x%x\n",
+		  qedi_conn->ep->sq_prod_idx, qedi_conn->ep->fw_sq_prod_idx,
+		  qedi_conn->iscsi_conn_id);
+}
+
+int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *task)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_login_req *login_hdr;
+	struct iscsi_login_req_hdr *fw_login_req = NULL;
+	struct iscsi_cached_sge_ctx *cached_sge = NULL;
+	struct iscsi_sge *single_sge = NULL;
+	struct iscsi_sge *req_sge = NULL;
+	struct iscsi_sge *resp_sge = NULL;
+	struct qedi_cmd *qedi_cmd;
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	login_hdr = (struct iscsi_login_req *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	fw_task_ctx =
+	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_login_req = &fw_task_ctx->ystorm_st_context.pdu_hdr.login_req;
+	fw_login_req->opcode = login_hdr->opcode;
+	fw_login_req->version_min = login_hdr->min_version;
+	fw_login_req->version_max = login_hdr->max_version;
+	fw_login_req->flags_attr = login_hdr->flags;
+	fw_login_req->isid_tabc = *((u16 *)login_hdr->isid + 2);
+	fw_login_req->isid_d = *((u32 *)login_hdr->isid);
+	fw_login_req->tsih = login_hdr->tsih;
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_login_req->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_login_req->cid = qedi_conn->iscsi_conn_id;
+	fw_login_req->cmd_sn = be32_to_cpu(login_hdr->cmdsn);
+	fw_login_req->exp_stat_sn = be32_to_cpu(login_hdr->exp_statsn);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+	cached_sge =
+	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
+	cached_sge->sge.sge_len = req_sge->sge_len;
+	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
+	cached_sge->sge.sge_addr.hi =
+			     (u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+
+	/* Mstorm context */
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SINGLE_SGE, 1);
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SLOW_IO, 0);
+	fw_task_ctx->mstorm_st_context.sgl_size = 1;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len =
+						ntoh24(login_hdr->dlength);
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	fw_task_ctx->ustorm_ag_context.exp_data_acked =
+						 ntoh24(login_hdr->dlength);
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	return 0;
+}
+
+int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_logout_req_hdr *fw_logout_req = NULL;
+	struct iscsi_task_context *fw_task_ctx = NULL;
+	struct iscsi_logout *logout_hdr = NULL;
+	struct qedi_cmd *qedi_cmd = NULL;
+	s16 tid = 0;
+	s16 ptu_invalidate = 0;
+
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	logout_hdr = (struct iscsi_logout *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	fw_task_ctx =
+	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_logout_req = &fw_task_ctx->ystorm_st_context.pdu_hdr.logout_req;
+	fw_logout_req->opcode = ISCSI_OPCODE_LOGOUT_REQUEST;
+	fw_logout_req->reason_code = 0x80 | logout_hdr->flags;
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_logout_req->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_logout_req->exp_stat_sn = be32_to_cpu(logout_hdr->exp_statsn);
+	fw_logout_req->cmd_sn = be32_to_cpu(logout_hdr->cmdsn);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						  qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+	fw_logout_req->cid = qedi_conn->iscsi_conn_id;
+	fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
+
+	/* Mstorm context */
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, 0);
+
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+
+	return 0;
+}
+
+int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
+			 struct iscsi_task *task)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_text_request_hdr *fw_text_request;
+	struct iscsi_cached_sge_ctx *cached_sge;
+	struct iscsi_sge *single_sge;
+	struct qedi_cmd *qedi_cmd;
+	struct iscsi_text *text_hdr;
+	struct iscsi_sge *req_sge;
+	struct iscsi_sge *resp_sge;
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	text_hdr = (struct iscsi_text *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	fw_task_ctx =
+	(struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_text_request =
+			&fw_task_ctx->ystorm_st_context.pdu_hdr.text_request;
+	fw_text_request->opcode = text_hdr->opcode;
+	fw_text_request->flags_attr = text_hdr->flags;
+
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_text_request->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_text_request->ttt = text_hdr->ttt;
+	fw_text_request->cmd_sn = be32_to_cpu(text_hdr->cmdsn);
+	fw_text_request->exp_stat_sn = be32_to_cpu(text_hdr->exp_statsn);
+	fw_text_request->hdr_second_dword = ntoh24(text_hdr->dlength);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						     qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						   qedi->tid_reuse_count[tid]++;
+
+	cached_sge =
+	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
+	cached_sge->sge.sge_len = req_sge->sge_len;
+	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
+	cached_sge->sge.sge_addr.hi =
+			      (u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+
+	/* Mstorm context */
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SINGLE_SGE, 1);
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SLOW_IO, 0);
+	fw_task_ctx->mstorm_st_context.sgl_size = 1;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_ag_context.exp_data_acked =
+						      ntoh24(text_hdr->dlength);
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len =
+						      ntoh24(text_hdr->dlength);
+	fw_task_ctx->ustorm_st_context.exp_data_sn =
+					      be32_to_cpu(text_hdr->exp_statsn);
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+
+	/* Add command to active command list */
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+
+	return 0;
+}
+
+int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task,
+			   char *datap, int data_len, int unsol)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_nop_out_hdr *fw_nop_out;
+	struct qedi_cmd *qedi_cmd;
+	struct iscsi_nopout *nopout_hdr;
+	struct iscsi_cached_sge_ctx *cached_sge;
+	struct iscsi_sge *single_sge;
+	struct iscsi_sge *req_sge;
+	struct iscsi_sge *resp_sge;
+	u32 scsi_lun[2];
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	nopout_hdr = (struct iscsi_nopout *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1) {
+		QEDI_WARN(&qedi->dbg_ctx, "Invalid tid\n");
+		return -ENOMEM;
+	}
+
+	fw_task_ctx =
+	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_nop_out = &fw_task_ctx->ystorm_st_context.pdu_hdr.nop_out;
+	SET_FIELD(fw_nop_out->flags_attr, ISCSI_NOP_OUT_HDR_CONST1, 1);
+	SET_FIELD(fw_nop_out->flags_attr, ISCSI_NOP_OUT_HDR_RSRV, 0);
+
+	memcpy(scsi_lun, &nopout_hdr->lun, sizeof(struct scsi_lun));
+	fw_nop_out->lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_nop_out->lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	qedi_update_itt_map(qedi, tid, task->itt);
+
+	if (nopout_hdr->ttt != ISCSI_TTT_ALL_ONES) {
+		fw_nop_out->itt = be32_to_cpu(nopout_hdr->itt);
+		fw_nop_out->ttt = be32_to_cpu(nopout_hdr->ttt);
+		fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
+		fw_task_ctx->ystorm_st_context.state.local_comp = 1;
+		SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+			  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 1);
+	} else {
+		fw_nop_out->itt = qedi_set_itt(tid, get_itt(task->itt));
+		fw_nop_out->ttt = ISCSI_TTT_ALL_ONES;
+		fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
+
+		spin_lock(&qedi_conn->list_lock);
+		list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+		qedi_cmd->io_cmd_in_list = true;
+		qedi_conn->active_cmd_count++;
+		spin_unlock(&qedi_conn->list_lock);
+	}
+
+	fw_nop_out->opcode = ISCSI_OPCODE_NOP_OUT;
+	fw_nop_out->cmd_sn = be32_to_cpu(nopout_hdr->cmdsn);
+	fw_nop_out->exp_stat_sn = be32_to_cpu(nopout_hdr->exp_statsn);
+
+	cached_sge =
+	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
+	cached_sge->sge.sge_len = req_sge->sge_len;
+	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
+	cached_sge->sge.sge_addr.hi =
+			(u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+
+	/* Mstorm context */
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = data_len;
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, 0);
+
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+
+	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	return 0;
+}
diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
new file mode 100644
index 0000000..85ea3d7
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_gbl.h
@@ -0,0 +1,67 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_GBL_H_
+#define _QEDI_GBL_H_
+
+#include "qedi_iscsi.h"
+
+extern uint io_tracing;
+extern int do_not_recover;
+extern struct scsi_host_template qedi_host_template;
+extern struct iscsi_transport qedi_iscsi_transport;
+extern const struct qed_iscsi_ops *qedi_ops;
+extern struct qedi_debugfs_ops qedi_debugfs_ops;
+extern const struct file_operations qedi_dbg_fops;
+extern struct device_attribute *qedi_shost_attrs[];
+
+int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
+void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
+
+int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *task);
+int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task);
+int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
+			 struct iscsi_task *task);
+int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task,
+			   char *datap, int data_len, int unsol);
+int qedi_get_task_idx(struct qedi_ctx *qedi);
+void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
+int qedi_iscsi_cleanup_task(struct iscsi_task *task,
+			    bool mark_cmd_node_deleted);
+void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd);
+void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt);
+void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt);
+void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid);
+void qedi_process_iscsi_error(struct qedi_endpoint *ep,
+			      struct async_data *data);
+void qedi_start_conn_recovery(struct qedi_ctx *qedi,
+			      struct qedi_conn *qedi_conn);
+struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid);
+void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data);
+void qedi_mark_device_missing(struct iscsi_cls_session *cls_session);
+void qedi_mark_device_available(struct iscsi_cls_session *cls_session);
+void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu);
+int qedi_recover_all_conns(struct qedi_ctx *qedi);
+void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
+			  uint16_t que_idx);
+void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
+		   u16 tid, int8_t direction);
+int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
+u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl);
+void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id);
+int qedi_create_sysfs_ctx_attr(struct qedi_ctx *qedi);
+void qedi_remove_sysfs_ctx_attr(struct qedi_ctx *qedi);
+void qedi_clearsq(struct qedi_ctx *qedi,
+		  struct qedi_conn *qedi_conn,
+		  struct iscsi_task *task);
+
+#endif
diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
new file mode 100644
index 0000000..caecdb8
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_iscsi.c
@@ -0,0 +1,1604 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/blkdev.h>
+#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
+#include <scsi/scsi_tcq.h>
+
+#include "qedi.h"
+#include "qedi_iscsi.h"
+#include "qedi_gbl.h"
+
+int qedi_recover_all_conns(struct qedi_ctx *qedi)
+{
+	struct qedi_conn *qedi_conn;
+	int i;
+
+	for (i = 0; i < qedi->max_active_conns; i++) {
+		qedi_conn = qedi_get_conn_from_id(qedi, i);
+		if (!qedi_conn)
+			continue;
+
+		qedi_start_conn_recovery(qedi, qedi_conn);
+	}
+
+	return SUCCESS;
+}
+
+static int qedi_eh_host_reset(struct scsi_cmnd *cmd)
+{
+	struct Scsi_Host *shost = cmd->device->host;
+	struct qedi_ctx *qedi;
+
+	qedi = (struct qedi_ctx *)iscsi_host_priv(shost);
+
+	return qedi_recover_all_conns(qedi);
+}
+
+struct scsi_host_template qedi_host_template = {
+	.module = THIS_MODULE,
+	.name = "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver",
+	.proc_name = QEDI_MODULE_NAME,
+	.queuecommand = iscsi_queuecommand,
+	.eh_abort_handler = iscsi_eh_abort,
+	.eh_device_reset_handler = iscsi_eh_device_reset,
+	.eh_target_reset_handler = iscsi_eh_recover_target,
+	.eh_host_reset_handler = qedi_eh_host_reset,
+	.target_alloc = iscsi_target_alloc,
+	.change_queue_depth = scsi_change_queue_depth,
+	.can_queue = QEDI_MAX_ISCSI_TASK,
+	.this_id = -1,
+	.sg_tablesize = QEDI_ISCSI_MAX_BDS_PER_CMD,
+	.max_sectors = 0xffff,
+	.cmd_per_lun = 128,
+	.use_clustering = ENABLE_CLUSTERING,
+	.shost_attrs = qedi_shost_attrs,
+};
+
+static void qedi_conn_free_login_resources(struct qedi_ctx *qedi,
+					   struct qedi_conn *qedi_conn)
+{
+	if (qedi_conn->gen_pdu.resp_bd_tbl) {
+		dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				  qedi_conn->gen_pdu.resp_bd_tbl,
+				  qedi_conn->gen_pdu.resp_bd_dma);
+		qedi_conn->gen_pdu.resp_bd_tbl = NULL;
+	}
+
+	if (qedi_conn->gen_pdu.req_bd_tbl) {
+		dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				  qedi_conn->gen_pdu.req_bd_tbl,
+				  qedi_conn->gen_pdu.req_bd_dma);
+		qedi_conn->gen_pdu.req_bd_tbl = NULL;
+	}
+
+	if (qedi_conn->gen_pdu.resp_buf) {
+		dma_free_coherent(&qedi->pdev->dev,
+				  ISCSI_DEF_MAX_RECV_SEG_LEN,
+				  qedi_conn->gen_pdu.resp_buf,
+				  qedi_conn->gen_pdu.resp_dma_addr);
+		qedi_conn->gen_pdu.resp_buf = NULL;
+	}
+
+	if (qedi_conn->gen_pdu.req_buf) {
+		dma_free_coherent(&qedi->pdev->dev,
+				  ISCSI_DEF_MAX_RECV_SEG_LEN,
+				  qedi_conn->gen_pdu.req_buf,
+				  qedi_conn->gen_pdu.req_dma_addr);
+		qedi_conn->gen_pdu.req_buf = NULL;
+	}
+}
+
+static int qedi_conn_alloc_login_resources(struct qedi_ctx *qedi,
+					   struct qedi_conn *qedi_conn)
+{
+	qedi_conn->gen_pdu.req_buf =
+		dma_alloc_coherent(&qedi->pdev->dev,
+				   ISCSI_DEF_MAX_RECV_SEG_LEN,
+				   &qedi_conn->gen_pdu.req_dma_addr,
+				   GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.req_buf)
+		goto login_req_buf_failure;
+
+	qedi_conn->gen_pdu.req_buf_size = 0;
+	qedi_conn->gen_pdu.req_wr_ptr = qedi_conn->gen_pdu.req_buf;
+
+	qedi_conn->gen_pdu.resp_buf =
+		dma_alloc_coherent(&qedi->pdev->dev,
+				   ISCSI_DEF_MAX_RECV_SEG_LEN,
+				   &qedi_conn->gen_pdu.resp_dma_addr,
+				   GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.resp_buf)
+		goto login_resp_buf_failure;
+
+	qedi_conn->gen_pdu.resp_buf_size = ISCSI_DEF_MAX_RECV_SEG_LEN;
+	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf;
+
+	qedi_conn->gen_pdu.req_bd_tbl =
+		dma_alloc_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				   &qedi_conn->gen_pdu.req_bd_dma, GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.req_bd_tbl)
+		goto login_req_bd_tbl_failure;
+
+	qedi_conn->gen_pdu.resp_bd_tbl =
+		dma_alloc_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				   &qedi_conn->gen_pdu.resp_bd_dma,
+				   GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.resp_bd_tbl)
+		goto login_resp_bd_tbl_failure;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SESS,
+		  "Allocation successful, cid=0x%x\n",
+		  qedi_conn->iscsi_conn_id);
+	return 0;
+
+login_resp_bd_tbl_failure:
+	dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+			  qedi_conn->gen_pdu.req_bd_tbl,
+			  qedi_conn->gen_pdu.req_bd_dma);
+	qedi_conn->gen_pdu.req_bd_tbl = NULL;
+
+login_req_bd_tbl_failure:
+	dma_free_coherent(&qedi->pdev->dev, ISCSI_DEF_MAX_RECV_SEG_LEN,
+			  qedi_conn->gen_pdu.resp_buf,
+			  qedi_conn->gen_pdu.resp_dma_addr);
+	qedi_conn->gen_pdu.resp_buf = NULL;
+login_resp_buf_failure:
+	dma_free_coherent(&qedi->pdev->dev, ISCSI_DEF_MAX_RECV_SEG_LEN,
+			  qedi_conn->gen_pdu.req_buf,
+			  qedi_conn->gen_pdu.req_dma_addr);
+	qedi_conn->gen_pdu.req_buf = NULL;
+login_req_buf_failure:
+	iscsi_conn_printk(KERN_ERR, qedi_conn->cls_conn->dd_data,
+			  "login resource alloc failed!!\n");
+	return -ENOMEM;
+}
+
+static void qedi_destroy_cmd_pool(struct qedi_ctx *qedi,
+				  struct iscsi_session *session)
+{
+	int i;
+
+	for (i = 0; i < session->cmds_max; i++) {
+		struct iscsi_task *task = session->cmds[i];
+		struct qedi_cmd *cmd = task->dd_data;
+
+		if (cmd->io_tbl.sge_tbl)
+			dma_free_coherent(&qedi->pdev->dev,
+					  QEDI_ISCSI_MAX_BDS_PER_CMD *
+					  sizeof(struct iscsi_sge),
+					  cmd->io_tbl.sge_tbl,
+					  cmd->io_tbl.sge_tbl_dma);
+
+		if (cmd->sense_buffer)
+			dma_free_coherent(&qedi->pdev->dev,
+					  SCSI_SENSE_BUFFERSIZE,
+					  cmd->sense_buffer,
+					  cmd->sense_buffer_dma);
+	}
+}
+
+static int qedi_alloc_sget(struct qedi_ctx *qedi, struct iscsi_session *session,
+			   struct qedi_cmd *cmd)
+{
+	struct qedi_io_bdt *io = &cmd->io_tbl;
+	struct iscsi_sge *sge;
+
+	io->sge_tbl = dma_alloc_coherent(&qedi->pdev->dev,
+					 QEDI_ISCSI_MAX_BDS_PER_CMD *
+					 sizeof(*sge),
+					 &io->sge_tbl_dma, GFP_KERNEL);
+	if (!io->sge_tbl) {
+		iscsi_session_printk(KERN_ERR, session,
+				     "Could not allocate BD table.\n");
+		return -ENOMEM;
+	}
+
+	io->sge_valid = 0;
+	return 0;
+}
+
+static int qedi_setup_cmd_pool(struct qedi_ctx *qedi,
+			       struct iscsi_session *session)
+{
+	int i;
+
+	for (i = 0; i < session->cmds_max; i++) {
+		struct iscsi_task *task = session->cmds[i];
+		struct qedi_cmd *cmd = task->dd_data;
+
+		task->hdr = &cmd->hdr;
+		task->hdr_max = sizeof(struct iscsi_hdr);
+
+		if (qedi_alloc_sget(qedi, session, cmd))
+			goto free_sgets;
+
+		cmd->sense_buffer = dma_alloc_coherent(&qedi->pdev->dev,
+						       SCSI_SENSE_BUFFERSIZE,
+						       &cmd->sense_buffer_dma,
+						       GFP_KERNEL);
+		if (!cmd->sense_buffer)
+			goto free_sgets;
+	}
+
+	return 0;
+
+free_sgets:
+	qedi_destroy_cmd_pool(qedi, session);
+	return -ENOMEM;
+}
+
+static struct iscsi_cls_session *
+qedi_session_create(struct iscsi_endpoint *ep, u16 cmds_max,
+		    u16 qdepth, uint32_t initial_cmdsn)
+{
+	struct Scsi_Host *shost;
+	struct iscsi_cls_session *cls_session;
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *qedi_ep;
+
+	if (!ep)
+		return NULL;
+
+	qedi_ep = ep->dd_data;
+	shost = qedi_ep->qedi->shost;
+	qedi = iscsi_host_priv(shost);
+
+	if (cmds_max > qedi->max_sqes)
+		cmds_max = qedi->max_sqes;
+	else if (cmds_max < QEDI_SQ_WQES_MIN)
+		cmds_max = QEDI_SQ_WQES_MIN;
+
+	cls_session = iscsi_session_setup(&qedi_iscsi_transport, shost,
+					  cmds_max, 0, sizeof(struct qedi_cmd),
+					  initial_cmdsn, ISCSI_MAX_TARGET);
+	if (!cls_session) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to setup session for ep=%p\n", qedi_ep);
+		return NULL;
+	}
+
+	if (qedi_setup_cmd_pool(qedi, cls_session->dd_data)) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to setup cmd pool for ep=%p\n", qedi_ep);
+		goto session_teardown;
+	}
+
+	return cls_session;
+
+session_teardown:
+	iscsi_session_teardown(cls_session);
+	return NULL;
+}
+
+static void qedi_session_destroy(struct iscsi_cls_session *cls_session)
+{
+	struct iscsi_session *session = cls_session->dd_data;
+	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+
+	qedi_destroy_cmd_pool(qedi, session);
+	iscsi_session_teardown(cls_session);
+}
+
+static struct iscsi_cls_conn *
+qedi_conn_create(struct iscsi_cls_session *cls_session, uint32_t cid)
+{
+	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+	struct iscsi_cls_conn *cls_conn;
+	struct qedi_conn *qedi_conn;
+	struct iscsi_conn *conn;
+
+	cls_conn = iscsi_conn_setup(cls_session, sizeof(*qedi_conn), cid);
+	if (!cls_conn) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "conn_new: iscsi conn setup failed, cid=0x%x, cls_sess=%p!\n",
+			 cid, cls_session);
+		return NULL;
+	}
+
+	conn = cls_conn->dd_data;
+	qedi_conn = conn->dd_data;
+	qedi_conn->cls_conn = cls_conn;
+	qedi_conn->qedi = qedi;
+	qedi_conn->ep = NULL;
+	qedi_conn->active_cmd_count = 0;
+	INIT_LIST_HEAD(&qedi_conn->active_cmd_list);
+	spin_lock_init(&qedi_conn->list_lock);
+
+	if (qedi_conn_alloc_login_resources(qedi, qedi_conn)) {
+		iscsi_conn_printk(KERN_ALERT, conn,
+				  "conn_new: login resc alloc failed, cid=0x%x, cls_sess=%p!!\n",
+				   cid, cls_session);
+		goto free_conn;
+	}
+
+	return cls_conn;
+
+free_conn:
+	iscsi_conn_teardown(cls_conn);
+	return NULL;
+}
+
+void qedi_mark_device_missing(struct iscsi_cls_session *cls_session)
+{
+	iscsi_block_session(cls_session);
+}
+
+void qedi_mark_device_available(struct iscsi_cls_session *cls_session)
+{
+	iscsi_unblock_session(cls_session);
+}
+
+static int qedi_bind_conn_to_iscsi_cid(struct qedi_ctx *qedi,
+				       struct qedi_conn *qedi_conn)
+{
+	u32 iscsi_cid = qedi_conn->iscsi_conn_id;
+
+	if (qedi->cid_que.conn_cid_tbl[iscsi_cid]) {
+		iscsi_conn_printk(KERN_ALERT, qedi_conn->cls_conn->dd_data,
+				  "conn bind - entry #%d not free\n",
+				  iscsi_cid);
+		return -EBUSY;
+	}
+
+	qedi->cid_que.conn_cid_tbl[iscsi_cid] = qedi_conn;
+	return 0;
+}
+
+struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid)
+{
+	if (!qedi->cid_que.conn_cid_tbl) {
+		QEDI_ERR(&qedi->dbg_ctx, "missing conn<->cid table\n");
+		return NULL;
+	}
+
+	if (iscsi_cid >= qedi->max_active_conns) {
+		QEDI_ERR(&qedi->dbg_ctx, "wrong cid #%d\n", iscsi_cid);
+		return NULL;
+	}
+	return qedi->cid_que.conn_cid_tbl[iscsi_cid];
+}
+
+static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+			  struct iscsi_cls_conn *cls_conn,
+			  u64 transport_fd, int is_leading)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+	struct qedi_endpoint *qedi_ep;
+	struct iscsi_endpoint *ep;
+
+	ep = iscsi_lookup_endpoint(transport_fd);
+	if (!ep)
+		return -EINVAL;
+
+	qedi_ep = ep->dd_data;
+	if ((qedi_ep->state == EP_STATE_TCP_FIN_RCVD) ||
+	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD))
+		return -EINVAL;
+
+	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+		return -EINVAL;
+
+	qedi_ep->conn = qedi_conn;
+	qedi_conn->ep = qedi_ep;
+	qedi_conn->iscsi_conn_id = qedi_ep->iscsi_cid;
+	qedi_conn->fw_cid = qedi_ep->fw_cid;
+	qedi_conn->cmd_cleanup_req = 0;
+	qedi_conn->cmd_cleanup_cmpl = 0;
+
+	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn))
+		return -EINVAL;
+
+	spin_lock_init(&qedi_conn->tmf_work_lock);
+	INIT_LIST_HEAD(&qedi_conn->tmf_work_list);
+	init_waitqueue_head(&qedi_conn->wait_queue);
+	return 0;
+}
+
+static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
+				  struct qedi_conn *qedi_conn)
+{
+	struct qed_iscsi_params_update *conn_info;
+	struct iscsi_cls_conn *cls_conn = qedi_conn->cls_conn;
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_endpoint *qedi_ep;
+	int rval;
+
+	qedi_ep = qedi_conn->ep;
+
+	conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
+	if (!conn_info) {
+		QEDI_ERR(&qedi->dbg_ctx, "memory alloc failed\n");
+		return -ENOMEM;
+	}
+
+	conn_info->update_flag = 0;
+
+	if (conn->hdrdgst_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_HD_EN, true);
+	if (conn->datadgst_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_DD_EN, true);
+	if (conn->session->initial_r2t_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_INITIAL_R2T,
+			  true);
+	if (conn->session->imm_data_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_IMMEDIATE_DATA,
+			  true);
+
+	conn_info->max_seq_size = conn->session->max_burst;
+	conn_info->max_recv_pdu_length = conn->max_recv_dlength;
+	conn_info->max_send_pdu_length = conn->max_xmit_dlength;
+	conn_info->first_seq_length = conn->session->first_burst;
+	conn_info->exp_stat_sn = conn->exp_statsn;
+
+	rval = qedi_ops->update_conn(qedi->cdev, qedi_ep->handle,
+				     conn_info);
+	if (rval) {
+		rval = -ENXIO;
+		QEDI_ERR(&qedi->dbg_ctx, "Could not update connection\n");
+		goto update_conn_err;
+	}
+
+	rval = 0;
+
+update_conn_err:
+	kfree(conn_info);
+	return rval;
+}
+
+/*
+ * Compute the TCP MSS for an offloaded connection: path MTU minus the
+ * TCP and IPv4/IPv6 header lengths, the VLAN tag and the TCP timestamp
+ * option, as applicable.  Falls back to DEF_MSS if the result is zero.
+ */
+static u16 qedi_calc_mss(u16 pmtu, u8 is_ipv6, u8 tcp_ts_en, u8 vlan_en)
+{
+	u16 mss = 0;
+	u16 hdrs = TCP_HDR_LEN;
+
+	if (is_ipv6)
+		hdrs += IPV6_HDR_LEN;
+	else
+		hdrs += IPV4_HDR_LEN;
+
+	if (vlan_en)
+		hdrs += VLAN_LEN;
+
+	mss = pmtu - hdrs;
+
+	if (tcp_ts_en)
+		mss -= TCP_OPTION_LEN;
+
+	if (!mss)
+		mss = DEF_MSS;
+
+	return mss;
+}
+
+static int qedi_iscsi_offload_conn(struct qedi_endpoint *qedi_ep)
+{
+	struct qedi_ctx *qedi = qedi_ep->qedi;
+	struct qed_iscsi_params_offload *conn_info;
+	int rval;
+	int i;
+
+	conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
+	if (!conn_info) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to allocate memory ep=%p\n", qedi_ep);
+		return -ENOMEM;
+	}
+
+	ether_addr_copy(conn_info->src.mac, qedi_ep->src_mac);
+	ether_addr_copy(conn_info->dst.mac, qedi_ep->dst_mac);
+
+	conn_info->src.ip[0] = ntohl(qedi_ep->src_addr[0]);
+	conn_info->dst.ip[0] = ntohl(qedi_ep->dst_addr[0]);
+
+	if (qedi_ep->ip_type == TCP_IPV4) {
+		conn_info->ip_version = 0;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "After ntohl: src_addr=%pI4, dst_addr=%pI4\n",
+			  qedi_ep->src_addr, qedi_ep->dst_addr);
+	} else {
+		for (i = 1; i < 4; i++) {
+			conn_info->src.ip[i] = ntohl(qedi_ep->src_addr[i]);
+			conn_info->dst.ip[i] = ntohl(qedi_ep->dst_addr[i]);
+		}
+
+		conn_info->ip_version = 1;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "After ntohl: src_addr=%pI6, dst_addr=%pI6\n",
+			  qedi_ep->src_addr, qedi_ep->dst_addr);
+	}
+
+	conn_info->src.port = qedi_ep->src_port;
+	conn_info->dst.port = qedi_ep->dst_port;
+
+	conn_info->layer_code = ISCSI_SLOW_PATH_LAYER_CODE;
+	conn_info->sq_pbl_addr = qedi_ep->sq_pbl_dma;
+	conn_info->vlan_id = qedi_ep->vlan_id;
+
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_TS_EN, 1);
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_DA_EN, 1);
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_DA_CNT_EN, 1);
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_KA_EN, 1);
+
+	conn_info->default_cq = (qedi_ep->fw_cid % 8);
+
+	conn_info->ka_max_probe_cnt = DEF_KA_MAX_PROBE_COUNT;
+	conn_info->dup_ack_theshold = 3;
+	conn_info->rcv_wnd = 65535;
+	conn_info->cwnd = DEF_MAX_CWND;
+
+	conn_info->ss_thresh = 65535;
+	conn_info->srtt = 300;
+	conn_info->rtt_var = 150;
+	conn_info->flow_label = 0;
+	conn_info->ka_timeout = DEF_KA_TIMEOUT;
+	conn_info->ka_interval = DEF_KA_INTERVAL;
+	conn_info->max_rt_time = DEF_MAX_RT_TIME;
+	conn_info->ttl = DEF_TTL;
+	conn_info->tos_or_tc = DEF_TOS;
+	conn_info->remote_port = qedi_ep->dst_port;
+	conn_info->local_port = qedi_ep->src_port;
+
+	conn_info->mss = qedi_calc_mss(qedi_ep->pmtu,
+				       (qedi_ep->ip_type == TCP_IPV6),
+				       1, (qedi_ep->vlan_id != 0));
+
+	conn_info->rcv_wnd_scale = 4;
+	conn_info->ts_ticks_per_second = 1000;
+	conn_info->da_timeout_value = 200;
+	conn_info->ack_frequency = 2;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Default cq index [%d], mss [%d]\n",
+		  conn_info->default_cq, conn_info->mss);
+
+	rval = qedi_ops->offload_conn(qedi->cdev, qedi_ep->handle, conn_info);
+	if (rval)
+		QEDI_ERR(&qedi->dbg_ctx, "offload_conn returned %d, ep=%p\n",
+			 rval, qedi_ep);
+
+	kfree(conn_info);
+	return rval;
+}
+
+static int qedi_conn_start(struct iscsi_cls_conn *cls_conn)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_ctx *qedi;
+	int rval;
+
+	qedi = qedi_conn->qedi;
+
+	rval = qedi_iscsi_update_conn(qedi, qedi_conn);
+	if (rval) {
+		iscsi_conn_printk(KERN_ALERT, conn,
+				  "conn_start: FW connection update failed.\n");
+		rval = -EINVAL;
+		goto start_err;
+	}
+
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+	qedi_conn->abrt_conn = 0;
+
+	rval = iscsi_conn_start(cls_conn);
+	if (rval) {
+		iscsi_conn_printk(KERN_ALERT, conn,
+				  "iscsi_conn_start failed!!\n");
+	}
+
+start_err:
+	return rval;
+}
+
+static void qedi_conn_destroy(struct iscsi_cls_conn *cls_conn)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct Scsi_Host *shost;
+	struct qedi_ctx *qedi;
+
+	shost = iscsi_session_to_shost(iscsi_conn_to_session(cls_conn));
+	qedi = iscsi_host_priv(shost);
+
+	qedi_conn_free_login_resources(qedi, qedi_conn);
+	iscsi_conn_teardown(cls_conn);
+}
+
+static int qedi_ep_get_param(struct iscsi_endpoint *ep,
+			     enum iscsi_param param, char *buf)
+{
+	struct qedi_endpoint *qedi_ep = ep->dd_data;
+	int len;
+
+	if (!qedi_ep)
+		return -ENOTCONN;
+
+	switch (param) {
+	case ISCSI_PARAM_CONN_PORT:
+		len = sprintf(buf, "%hu\n", qedi_ep->dst_port);
+		break;
+	case ISCSI_PARAM_CONN_ADDRESS:
+		if (qedi_ep->ip_type == TCP_IPV4)
+			len = sprintf(buf, "%pI4\n", qedi_ep->dst_addr);
+		else
+			len = sprintf(buf, "%pI6\n", qedi_ep->dst_addr);
+		break;
+	default:
+		return -ENOTCONN;
+	}
+
+	return len;
+}
+
+static int qedi_host_get_param(struct Scsi_Host *shost,
+			       enum iscsi_host_param param, char *buf)
+{
+	struct qedi_ctx *qedi;
+	int len;
+
+	qedi = iscsi_host_priv(shost);
+
+	switch (param) {
+	case ISCSI_HOST_PARAM_HWADDRESS:
+		len = sysfs_format_mac(buf, qedi->mac, ETH_ALEN);
+		break;
+	case ISCSI_HOST_PARAM_NETDEV_NAME:
+		len = sprintf(buf, "host%d\n", shost->host_no);
+		break;
+	case ISCSI_HOST_PARAM_IPADDRESS:
+		if (qedi->ip_type == TCP_IPV4)
+			len = sprintf(buf, "%pI4\n", qedi->src_ip);
+		else
+			len = sprintf(buf, "%pI6\n", qedi->src_ip);
+		break;
+	default:
+		return iscsi_host_get_param(shost, param, buf);
+	}
+
+	return len;
+}
+
+static void qedi_conn_get_stats(struct iscsi_cls_conn *cls_conn,
+				struct iscsi_stats *stats)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qed_iscsi_stats iscsi_stats;
+	struct Scsi_Host *shost;
+	struct qedi_ctx *qedi;
+
+	shost = iscsi_session_to_shost(iscsi_conn_to_session(cls_conn));
+	qedi = iscsi_host_priv(shost);
+	qedi_ops->get_stats(qedi->cdev, &iscsi_stats);
+
+	conn->txdata_octets = iscsi_stats.iscsi_tx_bytes_cnt;
+	conn->rxdata_octets = iscsi_stats.iscsi_rx_bytes_cnt;
+	conn->dataout_pdus_cnt = (uint32_t)iscsi_stats.iscsi_tx_data_pdu_cnt;
+	conn->datain_pdus_cnt = (uint32_t)iscsi_stats.iscsi_rx_data_pdu_cnt;
+	conn->r2t_pdus_cnt = (uint32_t)iscsi_stats.iscsi_rx_r2t_pdu_cnt;
+
+	stats->txdata_octets = conn->txdata_octets;
+	stats->rxdata_octets = conn->rxdata_octets;
+	stats->scsicmd_pdus = conn->scsicmd_pdus_cnt;
+	stats->dataout_pdus = conn->dataout_pdus_cnt;
+	stats->scsirsp_pdus = conn->scsirsp_pdus_cnt;
+	stats->datain_pdus = conn->datain_pdus_cnt;
+	stats->r2t_pdus = conn->r2t_pdus_cnt;
+	stats->tmfcmd_pdus = conn->tmfcmd_pdus_cnt;
+	stats->tmfrsp_pdus = conn->tmfrsp_pdus_cnt;
+	stats->digest_err = 0;
+	stats->timeout_err = 0;
+	strcpy(stats->custom[0].desc, "eh_abort_cnt");
+	stats->custom[0].value = conn->eh_abort_cnt;
+	stats->custom_length = 1;
+}
+
+static void qedi_iscsi_prep_generic_pdu_bd(struct qedi_conn *qedi_conn)
+{
+	struct iscsi_sge *bd_tbl;
+
+	bd_tbl = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+
+	bd_tbl->sge_addr.hi =
+		(u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+	bd_tbl->sge_addr.lo = (u32)qedi_conn->gen_pdu.req_dma_addr;
+	bd_tbl->sge_len = qedi_conn->gen_pdu.req_wr_ptr -
+				qedi_conn->gen_pdu.req_buf;
+	bd_tbl->reserved0 = 0;
+	bd_tbl = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	bd_tbl->sge_addr.hi =
+			(u32)((u64)qedi_conn->gen_pdu.resp_dma_addr >> 32);
+	bd_tbl->sge_addr.lo = (u32)qedi_conn->gen_pdu.resp_dma_addr;
+	bd_tbl->sge_len = ISCSI_DEF_MAX_RECV_SEG_LEN;
+	bd_tbl->reserved0 = 0;
+}
+
+static int qedi_iscsi_send_generic_request(struct iscsi_task *task)
+{
+	struct qedi_cmd *cmd = task->dd_data;
+	struct qedi_conn *qedi_conn = cmd->conn;
+	char *buf;
+	int data_len;
+	int rc = 0;
+
+	qedi_iscsi_prep_generic_pdu_bd(qedi_conn);
+	switch (task->hdr->opcode & ISCSI_OPCODE_MASK) {
+	case ISCSI_OP_LOGIN:
+		qedi_send_iscsi_login(qedi_conn, task);
+		break;
+	case ISCSI_OP_NOOP_OUT:
+		data_len = qedi_conn->gen_pdu.req_buf_size;
+		buf = qedi_conn->gen_pdu.req_buf;
+		if (data_len)
+			rc = qedi_send_iscsi_nopout(qedi_conn, task,
+						    buf, data_len, 1);
+		else
+			rc = qedi_send_iscsi_nopout(qedi_conn, task,
+						    NULL, 0, 1);
+		break;
+	case ISCSI_OP_LOGOUT:
+		rc = qedi_send_iscsi_logout(qedi_conn, task);
+		break;
+	case ISCSI_OP_TEXT:
+		rc = qedi_send_iscsi_text(qedi_conn, task);
+		break;
+	default:
+		iscsi_conn_printk(KERN_ALERT, qedi_conn->cls_conn->dd_data,
+				  "unsupported op 0x%x\n", task->hdr->opcode);
+	}
+
+	return rc;
+}
+
+static int qedi_mtask_xmit(struct iscsi_conn *conn, struct iscsi_task *task)
+{
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+
+	memset(qedi_conn->gen_pdu.req_buf, 0, ISCSI_DEF_MAX_RECV_SEG_LEN);
+
+	qedi_conn->gen_pdu.req_buf_size = task->data_count;
+
+	if (task->data_count) {
+		memcpy(qedi_conn->gen_pdu.req_buf, task->data,
+		       task->data_count);
+		qedi_conn->gen_pdu.req_wr_ptr =
+			qedi_conn->gen_pdu.req_buf + task->data_count;
+	}
+
+	cmd->conn = conn->dd_data;
+	cmd->scsi_cmd = NULL;
+	return qedi_iscsi_send_generic_request(task);
+}
+
+static int qedi_task_xmit(struct iscsi_task *task)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+	struct scsi_cmnd *sc = task->sc;
+
+	cmd->state = 0;
+	cmd->task = NULL;
+	cmd->use_slowpath = false;
+	cmd->conn = qedi_conn;
+	cmd->task = task;
+	cmd->io_cmd_in_list = false;
+	INIT_LIST_HEAD(&cmd->io_cmd);
+
+	if (!sc)
+		return qedi_mtask_xmit(conn, task);
+
+	cmd->scsi_cmd = sc;
+	return qedi_iscsi_send_ioreq(task);
+}
+
+static struct iscsi_endpoint *
+qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+		int non_blocking)
+{
+	struct qedi_ctx *qedi;
+	struct iscsi_endpoint *ep;
+	struct qedi_endpoint *qedi_ep;
+	struct sockaddr_in *addr;
+	struct sockaddr_in6 *addr6;
+	struct qed_dev *cdev = NULL;
+	struct qedi_uio_dev *udev = NULL;
+	struct iscsi_path path_req;
+	u32 msg_type = ISCSI_KEVENT_IF_DOWN;
+	u32 iscsi_cid = QEDI_CID_RESERVED;
+	u16 len = 0;
+	char *buf = NULL;
+	int ret;
+
+	if (!shost) {
+		ret = -ENXIO;
+		QEDI_ERR(NULL, "shost is NULL\n");
+		return ERR_PTR(ret);
+	}
+
+	if (do_not_recover) {
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+
+	qedi = iscsi_host_priv(shost);
+	cdev = qedi->cdev;
+	udev = qedi->udev;
+
+	if (test_bit(QEDI_IN_OFFLINE, &qedi->flags) ||
+	    test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+
+	ep = iscsi_create_endpoint(sizeof(struct qedi_endpoint));
+	if (!ep) {
+		QEDI_ERR(&qedi->dbg_ctx, "endpoint create fail\n");
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+	qedi_ep = ep->dd_data;
+	memset(qedi_ep, 0, sizeof(struct qedi_endpoint));
+	qedi_ep->state = EP_STATE_IDLE;
+	qedi_ep->iscsi_cid = (u32)-1;
+	qedi_ep->qedi = qedi;
+
+	if (dst_addr->sa_family == AF_INET) {
+		addr = (struct sockaddr_in *)dst_addr;
+		memcpy(qedi_ep->dst_addr, &addr->sin_addr.s_addr,
+		       sizeof(struct in_addr));
+		qedi_ep->dst_port = ntohs(addr->sin_port);
+		qedi_ep->ip_type = TCP_IPV4;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "dst_addr=%pI4, dst_port=%u\n",
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	} else if (dst_addr->sa_family == AF_INET6) {
+		addr6 = (struct sockaddr_in6 *)dst_addr;
+		memcpy(qedi_ep->dst_addr, &addr6->sin6_addr,
+		       sizeof(struct in6_addr));
+		qedi_ep->dst_port = ntohs(addr6->sin6_port);
+		qedi_ep->ip_type = TCP_IPV6;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "dst_addr=%pI6, dst_port=%u\n",
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	} else {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid endpoint\n");
+		ret = -EAFNOSUPPORT;
+		goto ep_conn_exit;
+	}
+
+	if (atomic_read(&qedi->link_state) != QEDI_LINK_UP) {
+		QEDI_WARN(&qedi->dbg_ctx, "qedi link down\n");
+		ret = -ENXIO;
+		goto ep_conn_exit;
+	}
+
+	ret = qedi_alloc_sq(qedi, qedi_ep);
+	if (ret)
+		goto ep_conn_exit;
+
+	ret = qedi_ops->acquire_conn(qedi->cdev, &qedi_ep->handle,
+				     &qedi_ep->fw_cid, &qedi_ep->p_doorbell);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx, "Could not acquire connection\n");
+		ret = -ENXIO;
+		goto ep_free_sq;
+	}
+
+	iscsi_cid = qedi_ep->handle;
+	qedi_ep->iscsi_cid = iscsi_cid;
+
+	init_waitqueue_head(&qedi_ep->ofld_wait);
+	init_waitqueue_head(&qedi_ep->tcp_ofld_wait);
+	qedi_ep->state = EP_STATE_OFLDCONN_START;
+	qedi->ep_tbl[iscsi_cid] = qedi_ep;
+
+	buf = (char *)&path_req;
+	len = sizeof(path_req);
+	memset(&path_req, 0, len);
+
+	msg_type = ISCSI_KEVENT_PATH_REQ;
+	path_req.handle = (u64)qedi_ep->iscsi_cid;
+	path_req.pmtu = qedi->ll2_mtu;
+	qedi_ep->pmtu = qedi->ll2_mtu;
+	if (qedi_ep->ip_type == TCP_IPV4) {
+		memcpy(&path_req.dst.v4_addr, &qedi_ep->dst_addr,
+		       sizeof(struct in_addr));
+		path_req.ip_addr_len = 4;
+	} else {
+		memcpy(&path_req.dst.v6_addr, &qedi_ep->dst_addr,
+		       sizeof(struct in6_addr));
+		path_req.ip_addr_len = 16;
+	}
+
+	ret = iscsi_offload_mesg(shost, &qedi_iscsi_transport, msg_type, buf,
+				 len);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "iscsi_offload_mesg() failed for cid=0x%x ret=%d\n",
+			 iscsi_cid, ret);
+		goto ep_rel_conn;
+	}
+
+	atomic_inc(&qedi->num_offloads);
+	return ep;
+
+ep_rel_conn:
+	qedi->ep_tbl[iscsi_cid] = NULL;
+	ret = qedi_ops->release_conn(qedi->cdev, qedi_ep->handle);
+	if (ret)
+		QEDI_WARN(&qedi->dbg_ctx, "release_conn returned %d\n",
+			  ret);
+ep_free_sq:
+	qedi_free_sq(qedi, qedi_ep);
+ep_conn_exit:
+	iscsi_destroy_endpoint(ep);
+	return ERR_PTR(ret);
+}
+
+static int qedi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
+{
+	struct qedi_endpoint *qedi_ep;
+	int ret = 0;
+
+	if (do_not_recover)
+		return 1;
+
+	qedi_ep = ep->dd_data;
+	if (qedi_ep->state == EP_STATE_IDLE ||
+	    qedi_ep->state == EP_STATE_OFLDCONN_FAILED)
+		return -1;
+
+	if (qedi_ep->state == EP_STATE_OFLDCONN_COMPL)
+		ret = 1;
+
+	ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
+					       ((qedi_ep->state ==
+						EP_STATE_OFLDCONN_FAILED) ||
+						(qedi_ep->state ==
+						EP_STATE_OFLDCONN_COMPL)),
+						msecs_to_jiffies(timeout_ms));
+
+	if (qedi_ep->state == EP_STATE_OFLDCONN_FAILED)
+		ret = -1;
+
+	if (ret > 0)
+		return 1;
+	else if (!ret)
+		return 0;
+	else
+		return ret;
+}
+
+static void qedi_cleanup_active_cmd_list(struct qedi_conn *qedi_conn)
+{
+	struct qedi_cmd *cmd, *cmd_tmp;
+
+	/* active_cmd_list is protected by list_lock everywhere else */
+	spin_lock(&qedi_conn->list_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
+				 io_cmd) {
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	}
+	spin_unlock(&qedi_conn->list_lock);
+}
+
+static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+{
+	struct qedi_endpoint *qedi_ep;
+	struct qedi_conn *qedi_conn = NULL;
+	struct iscsi_conn *conn = NULL;
+	struct qedi_ctx *qedi;
+	int ret = 0;
+	int wait_delay = 20 * HZ;
+	int abrt_conn = 0;
+	int count = 10;
+
+	qedi_ep = ep->dd_data;
+	qedi = qedi_ep->qedi;
+
+	flush_work(&qedi_ep->offload_work);
+
+	if (qedi_ep->conn) {
+		qedi_conn = qedi_ep->conn;
+		conn = qedi_conn->cls_conn->dd_data;
+		iscsi_suspend_queue(conn);
+		abrt_conn = qedi_conn->abrt_conn;
+
+		while (count--) {
+			if (!test_bit(QEDI_CONN_FW_CLEANUP,
+				      &qedi_conn->flags))
+				break;
+			msleep(1000);
+		}
+
+		if (test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
+			if (do_not_recover) {
+				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+					  "Do not recover cid=0x%x\n",
+					  qedi_ep->iscsi_cid);
+				goto ep_exit_recover;
+			}
+			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+				  "Reset recovery cid=0x%x, qedi_ep=%p, state=0x%x\n",
+				  qedi_ep->iscsi_cid, qedi_ep, qedi_ep->state);
+			qedi_cleanup_active_cmd_list(qedi_conn);
+			goto ep_release_conn;
+		}
+	}
+
+	if (do_not_recover)
+		goto ep_exit_recover;
+
+	switch (qedi_ep->state) {
+	case EP_STATE_OFLDCONN_START:
+		goto ep_release_conn;
+	case EP_STATE_OFLDCONN_FAILED:
+		break;
+	case EP_STATE_OFLDCONN_COMPL:
+		if (unlikely(!qedi_conn))
+			break;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Active cmd count=%d, abrt_conn=%d, ep state=0x%x, cid=0x%x, qedi_conn=%p\n",
+			  qedi_conn->active_cmd_count, abrt_conn,
+			  qedi_ep->state, qedi_ep->iscsi_cid, qedi_ep->conn);
+
+		abrt_conn = !!qedi_conn->active_cmd_count;
+
+		if (abrt_conn)
+			qedi_clearsq(qedi, qedi_conn, NULL);
+		break;
+	default:
+		break;
+	}
+
+	qedi_ep->state = EP_STATE_DISCONN_START;
+	ret = qedi_ops->destroy_conn(qedi->cdev, qedi_ep->handle, abrt_conn);
+	if (ret) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "destroy_conn failed returned %d\n", ret);
+	} else {
+		ret = wait_event_interruptible_timeout(
+					qedi_ep->tcp_ofld_wait,
+					(qedi_ep->state !=
+					 EP_STATE_DISCONN_START),
+					wait_delay);
+		if ((ret <= 0) || (qedi_ep->state == EP_STATE_DISCONN_START)) {
+			QEDI_WARN(&qedi->dbg_ctx,
+				  "Destroy conn timedout or interrupted, ret=%d, delay=%d, cid=0x%x\n",
+				  ret, wait_delay, qedi_ep->iscsi_cid);
+		}
+	}
+
+ep_release_conn:
+	ret = qedi_ops->release_conn(qedi->cdev, qedi_ep->handle);
+	if (ret)
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "release_conn returned %d, cid=0x%x\n",
+			  ret, qedi_ep->iscsi_cid);
+ep_exit_recover:
+	qedi_ep->state = EP_STATE_IDLE;
+	qedi->ep_tbl[qedi_ep->iscsi_cid] = NULL;
+	qedi->cid_que.conn_cid_tbl[qedi_ep->iscsi_cid] = NULL;
+	qedi_free_id(&qedi->lcl_port_tbl, qedi_ep->src_port);
+	qedi_free_sq(qedi, qedi_ep);
+
+	if (qedi_conn)
+		qedi_conn->ep = NULL;
+
+	qedi_ep->conn = NULL;
+	qedi_ep->qedi = NULL;
+	atomic_dec(&qedi->num_offloads);
+
+	iscsi_destroy_endpoint(ep);
+}
+
+static int qedi_data_avail(struct qedi_ctx *qedi, u16 vlanid)
+{
+	struct qed_dev *cdev = qedi->cdev;
+	struct qedi_uio_dev *udev;
+	struct qedi_uio_ctrl *uctrl;
+	struct sk_buff *skb;
+	u32 len;
+	int rc = 0;
+
+	udev = qedi->udev;
+	if (!udev) {
+		QEDI_ERR(&qedi->dbg_ctx, "udev is NULL.\n");
+		return -EINVAL;
+	}
+
+	uctrl = (struct qedi_uio_ctrl *)udev->uctrl;
+	if (!uctrl) {
+		QEDI_ERR(&qedi->dbg_ctx, "uctrl is NULL.\n");
+		return -EINVAL;
+	}
+
+	len = uctrl->host_tx_pkt_len;
+	if (!len) {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid len %u\n", len);
+		return -EINVAL;
+	}
+
+	skb = alloc_skb(len, GFP_ATOMIC);
+	if (!skb) {
+		QEDI_ERR(&qedi->dbg_ctx, "alloc_skb failed\n");
+		return -ENOMEM;
+	}
+
+	skb_put(skb, len);
+	memcpy(skb->data, udev->tx_pkt, len);
+	skb->ip_summed = CHECKSUM_NONE;
+
+	if (vlanid)
+		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlanid);
+
+	rc = qedi_ops->ll2->start_xmit(cdev, skb);
+	if (rc) {
+		QEDI_ERR(&qedi->dbg_ctx, "ll2 start_xmit returned %d\n",
+			 rc);
+		kfree_skb(skb);
+	}
+
+	uctrl->host_tx_pkt_len = 0;
+	uctrl->hw_tx_cons++;
+
+	return rc;
+}
+
+static void qedi_offload_work(struct work_struct *work)
+{
+	struct qedi_endpoint *qedi_ep =
+		container_of(work, struct qedi_endpoint, offload_work);
+	struct qedi_ctx *qedi;
+	int wait_delay = 20 * HZ;
+	int ret;
+
+	qedi = qedi_ep->qedi;
+
+	ret = qedi_iscsi_offload_conn(qedi_ep);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
+			 qedi_ep->iscsi_cid, qedi_ep, ret);
+		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+		return;
+	}
+
+	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
+					       (qedi_ep->state ==
+					       EP_STATE_OFLDCONN_COMPL),
+					       wait_delay);
+	if ((ret <= 0) || (qedi_ep->state != EP_STATE_OFLDCONN_COMPL)) {
+		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
+			 qedi_ep->iscsi_cid, qedi_ep);
+	}
+}
+
+static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+{
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *qedi_ep;
+	int ret = 0;
+	u32 iscsi_cid;
+	u16 port_id = 0;
+
+	if (!shost) {
+		ret = -ENXIO;
+		QEDI_ERR(NULL, "shost is NULL\n");
+		return ret;
+	}
+
+	if (strcmp(shost->hostt->proc_name, "qedi")) {
+		ret = -ENXIO;
+		QEDI_ERR(NULL, "shost %s is invalid\n",
+			 shost->hostt->proc_name);
+		return ret;
+	}
+
+	qedi = iscsi_host_priv(shost);
+	if (path_data->handle == QEDI_PATH_HANDLE) {
+		ret = qedi_data_avail(qedi, path_data->vlan_id);
+		goto set_path_exit;
+	}
+
+	iscsi_cid = (u32)path_data->handle;
+	qedi_ep = qedi->ep_tbl[iscsi_cid];
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "iscsi_cid=0x%x, qedi_ep=%p\n", iscsi_cid, qedi_ep);
+
+	if (!is_valid_ether_addr(&path_data->mac_addr[0])) {
+		QEDI_NOTICE(&qedi->dbg_ctx, "dst mac NOT VALID\n");
+		ret = -EIO;
+		goto set_path_exit;
+	}
+
+	ether_addr_copy(&qedi_ep->src_mac[0], &qedi->mac[0]);
+	ether_addr_copy(&qedi_ep->dst_mac[0], &path_data->mac_addr[0]);
+
+	qedi_ep->vlan_id = path_data->vlan_id;
+	if (path_data->pmtu < DEF_PATH_MTU) {
+		qedi_ep->pmtu = qedi->ll2_mtu;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "MTU cannot be %u, using default MTU %u\n",
+			   path_data->pmtu, qedi_ep->pmtu);
+	}
+
+	if (path_data->pmtu != qedi->ll2_mtu) {
+		if (path_data->pmtu > JUMBO_MTU) {
+			ret = -EINVAL;
+			QEDI_ERR(NULL, "Invalid MTU %u\n", path_data->pmtu);
+			goto set_path_exit;
+		}
+
+		qedi_reset_host_mtu(qedi, path_data->pmtu);
+		qedi_ep->pmtu = qedi->ll2_mtu;
+	}
+
+	port_id = qedi_ep->src_port;
+	if (port_id >= QEDI_LOCAL_PORT_MIN &&
+	    port_id < QEDI_LOCAL_PORT_MAX) {
+		if (qedi_alloc_id(&qedi->lcl_port_tbl, port_id))
+			port_id = 0;
+	} else {
+		port_id = 0;
+	}
+
+	if (!port_id) {
+		port_id = qedi_alloc_new_id(&qedi->lcl_port_tbl);
+		if (port_id == QEDI_LOCAL_PORT_INVALID) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Failed to allocate port id for iscsi_cid=0x%x\n",
+				 iscsi_cid);
+			ret = -ENOMEM;
+			goto set_path_exit;
+		}
+	}
+
+	qedi_ep->src_port = port_id;
+
+	if (qedi_ep->ip_type == TCP_IPV4) {
+		memcpy(&qedi_ep->src_addr[0], &path_data->src.v4_addr,
+		       sizeof(struct in_addr));
+		memcpy(&qedi->src_ip[0], &path_data->src.v4_addr,
+		       sizeof(struct in_addr));
+		qedi->ip_type = TCP_IPV4;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "src addr:port=%pI4:%u, dst addr:port=%pI4:%u\n",
+			  qedi_ep->src_addr, qedi_ep->src_port,
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	} else {
+		memcpy(&qedi_ep->src_addr[0], &path_data->src.v6_addr,
+		       sizeof(struct in6_addr));
+		memcpy(&qedi->src_ip[0], &path_data->src.v6_addr,
+		       sizeof(struct in6_addr));
+		qedi->ip_type = TCP_IPV6;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "src addr:port=%pI6:%u, dst addr:port=%pI6:%u\n",
+			  qedi_ep->src_addr, qedi_ep->src_port,
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	}
+
+	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
+	queue_work(qedi->offload_thread, &qedi_ep->offload_work);
+
+	ret = 0;
+
+set_path_exit:
+	return ret;
+}
+
+static umode_t qedi_attr_is_visible(int param_type, int param)
+{
+	switch (param_type) {
+	case ISCSI_HOST_PARAM:
+		switch (param) {
+		case ISCSI_HOST_PARAM_NETDEV_NAME:
+		case ISCSI_HOST_PARAM_HWADDRESS:
+		case ISCSI_HOST_PARAM_IPADDRESS:
+			return S_IRUGO;
+		default:
+			return 0;
+		}
+	case ISCSI_PARAM:
+		switch (param) {
+		case ISCSI_PARAM_MAX_RECV_DLENGTH:
+		case ISCSI_PARAM_MAX_XMIT_DLENGTH:
+		case ISCSI_PARAM_HDRDGST_EN:
+		case ISCSI_PARAM_DATADGST_EN:
+		case ISCSI_PARAM_CONN_ADDRESS:
+		case ISCSI_PARAM_CONN_PORT:
+		case ISCSI_PARAM_EXP_STATSN:
+		case ISCSI_PARAM_PERSISTENT_ADDRESS:
+		case ISCSI_PARAM_PERSISTENT_PORT:
+		case ISCSI_PARAM_PING_TMO:
+		case ISCSI_PARAM_RECV_TMO:
+		case ISCSI_PARAM_INITIAL_R2T_EN:
+		case ISCSI_PARAM_MAX_R2T:
+		case ISCSI_PARAM_IMM_DATA_EN:
+		case ISCSI_PARAM_FIRST_BURST:
+		case ISCSI_PARAM_MAX_BURST:
+		case ISCSI_PARAM_PDU_INORDER_EN:
+		case ISCSI_PARAM_DATASEQ_INORDER_EN:
+		case ISCSI_PARAM_ERL:
+		case ISCSI_PARAM_TARGET_NAME:
+		case ISCSI_PARAM_TPGT:
+		case ISCSI_PARAM_USERNAME:
+		case ISCSI_PARAM_PASSWORD:
+		case ISCSI_PARAM_USERNAME_IN:
+		case ISCSI_PARAM_PASSWORD_IN:
+		case ISCSI_PARAM_FAST_ABORT:
+		case ISCSI_PARAM_ABORT_TMO:
+		case ISCSI_PARAM_LU_RESET_TMO:
+		case ISCSI_PARAM_TGT_RESET_TMO:
+		case ISCSI_PARAM_IFACE_NAME:
+		case ISCSI_PARAM_INITIATOR_NAME:
+		case ISCSI_PARAM_BOOT_ROOT:
+		case ISCSI_PARAM_BOOT_NIC:
+		case ISCSI_PARAM_BOOT_TARGET:
+			return S_IRUGO;
+		default:
+			return 0;
+		}
+	}
+
+	return 0;
+}
+
+static void qedi_cleanup_task(struct iscsi_task *task)
+{
+	if (!task->sc || task->state == ISCSI_TASK_PENDING) {
+		QEDI_INFO(NULL, QEDI_LOG_IO, "Returning ref_cnt=%d\n",
+			  atomic_read(&task->refcount));
+		return;
+	}
+
+	qedi_iscsi_unmap_sg_list(task->dd_data);
+}
+
+struct iscsi_transport qedi_iscsi_transport = {
+	.owner = THIS_MODULE,
+	.name = QEDI_MODULE_NAME,
+	.caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_MULTI_R2T | CAP_DATADGST |
+		CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO,
+	.create_session = qedi_session_create,
+	.destroy_session = qedi_session_destroy,
+	.create_conn = qedi_conn_create,
+	.bind_conn = qedi_conn_bind,
+	.start_conn = qedi_conn_start,
+	.stop_conn = iscsi_conn_stop,
+	.destroy_conn = qedi_conn_destroy,
+	.set_param = iscsi_set_param,
+	.get_ep_param = qedi_ep_get_param,
+	.get_conn_param = iscsi_conn_get_param,
+	.get_session_param = iscsi_session_get_param,
+	.get_host_param = qedi_host_get_param,
+	.send_pdu = iscsi_conn_send_pdu,
+	.get_stats = qedi_conn_get_stats,
+	.xmit_task = qedi_task_xmit,
+	.cleanup_task = qedi_cleanup_task,
+	.session_recovery_timedout = iscsi_session_recovery_timedout,
+	.ep_connect = qedi_ep_connect,
+	.ep_poll = qedi_ep_poll,
+	.ep_disconnect = qedi_ep_disconnect,
+	.set_path = qedi_set_path,
+	.attr_is_visible = qedi_attr_is_visible,
+};
+
+void qedi_start_conn_recovery(struct qedi_ctx *qedi,
+			      struct qedi_conn *qedi_conn)
+{
+	struct iscsi_cls_session *cls_sess;
+	struct iscsi_cls_conn *cls_conn;
+	struct iscsi_conn *conn;
+
+	cls_conn = qedi_conn->cls_conn;
+	conn = cls_conn->dd_data;
+	cls_sess = iscsi_conn_to_session(cls_conn);
+
+	if (iscsi_is_session_online(cls_sess)) {
+		qedi_conn->abrt_conn = 1;
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failing connection, state=0x%x, cid=0x%x\n",
+			 conn->session->state, qedi_conn->iscsi_conn_id);
+		iscsi_conn_failure(qedi_conn->cls_conn->dd_data,
+				   ISCSI_ERR_CONN_FAILED);
+	}
+}
+
+void qedi_process_iscsi_error(struct qedi_endpoint *ep, struct async_data *data)
+{
+	struct qedi_conn *qedi_conn;
+	struct qedi_ctx *qedi;
+	char warn_notice[] = "iscsi_warning";
+	char error_notice[] = "iscsi_error";
+	char *message;
+	int need_recovery = 0;
+	u32 err_mask = 0;
+	char msg[64];
+
+	if (!ep)
+		return;
+
+	qedi_conn = ep->conn;
+	if (!qedi_conn)
+		return;
+
+	qedi = ep->qedi;
+
+	QEDI_ERR(&qedi->dbg_ctx, "async event iscsi error:0x%x\n",
+		 data->error_code);
+
+	if (err_mask) {
+		need_recovery = 0;
+		message = warn_notice;
+	} else {
+		need_recovery = 1;
+		message = error_notice;
+	}
+
+	switch (data->error_code) {
+	case ISCSI_STATUS_NONE:
+		strcpy(msg, "tcp_error none");
+		break;
+	case ISCSI_CONN_ERROR_TASK_CID_MISMATCH:
+		strcpy(msg, "task cid mismatch");
+		break;
+	case ISCSI_CONN_ERROR_TASK_NOT_VALID:
+		strcpy(msg, "invalid task");
+		break;
+	case ISCSI_CONN_ERROR_RQ_RING_IS_FULL:
+		strcpy(msg, "rq ring full");
+		break;
+	case ISCSI_CONN_ERROR_CMDQ_RING_IS_FULL:
+		strcpy(msg, "cmdq ring full");
+		break;
+	case ISCSI_CONN_ERROR_HQE_CACHING_FAILED:
+		strcpy(msg, "sge caching failed");
+		break;
+	case ISCSI_CONN_ERROR_HEADER_DIGEST_ERROR:
+		strcpy(msg, "hdr digest error");
+		break;
+	case ISCSI_CONN_ERROR_LOCAL_COMPLETION_ERROR:
+		strcpy(msg, "local cmpl error");
+		break;
+	case ISCSI_CONN_ERROR_DATA_OVERRUN:
+		strcpy(msg, "data overrun");
+		break;
+	case ISCSI_CONN_ERROR_OUT_OF_SGES_ERROR:
+		strcpy(msg, "out of sge error");
+		break;
+	case ISCSI_CONN_ERROR_TCP_SEG_PROC_IP_OPTIONS_ERROR:
+		strcpy(msg, "tcp seg ip options error");
+		break;
+	case ISCSI_CONN_ERROR_TCP_IP_FRAGMENT_ERROR:
+		strcpy(msg, "tcp ip fragment error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_AHS_LEN:
+		strcpy(msg, "AHS len protocol error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_ITT_OUT_OF_RANGE:
+		strcpy(msg, "itt out of range error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_EXCEEDS_PDU_SIZE:
+		strcpy(msg, "data seg more than pdu size");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE:
+		strcpy(msg, "invalid opcode");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE_BEFORE_UPDATE:
+		strcpy(msg, "invalid opcode before update");
+		break;
+	case ISCSI_CONN_ERROR_UNVALID_NOPIN_DSL:
+		strcpy(msg, "invalid nopin dsl");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_CARRIES_NO_DATA:
+		strcpy(msg, "r2t carries no data");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SN:
+		strcpy(msg, "data sn error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_IN_TTT:
+		strcpy(msg, "data TTT error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_TTT:
+		strcpy(msg, "r2t TTT error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_BUFFER_OFFSET:
+		strcpy(msg, "buffer offset error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_BUFFER_OFFSET_OOO:
+		strcpy(msg, "buffer offset ooo");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_SN:
+		strcpy(msg, "r2t sn error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_0:
+		strcpy(msg, "data xer len error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_1:
+		strcpy(msg, "data xer len1 error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_2:
+		strcpy(msg, "data xer len2 error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_LUN:
+		strcpy(msg, "protocol lun error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO:
+		strcpy(msg, "f bit zero error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO_S_BIT_ONE:
+		strcpy(msg, "f bit zero s bit one error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_EXP_STAT_SN:
+		strcpy(msg, "exp stat sn error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DSL_NOT_ZERO:
+		strcpy(msg, "dsl not zero error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_DSL:
+		strcpy(msg, "invalid dsl");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_TOO_BIG:
+		strcpy(msg, "data seg len too big");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_OUTSTANDING_R2T_COUNT:
+		strcpy(msg, "outstanding r2t count error");
+		break;
+	case ISCSI_CONN_ERROR_SENSE_DATA_LENGTH:
+		strcpy(msg, "sense datalen error");
+		break;
+	case ISCSI_ERROR_UNKNOWN:
+	default:
+		need_recovery = 0;
+		strcpy(msg, "unknown error");
+		break;
+	}
+	iscsi_conn_printk(KERN_ALERT,
+			  qedi_conn->cls_conn->dd_data,
+			  "qedi: %s - %s\n", message, msg);
+
+	if (need_recovery)
+		qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
+}
+
+void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data)
+{
+	struct qedi_conn *qedi_conn;
+
+	if (!ep)
+		return;
+
+	qedi_conn = ep->conn;
+	if (!qedi_conn)
+		return;
+
+	QEDI_ERR(&ep->qedi->dbg_ctx, "async event TCP error:0x%x\n",
+		 data->error_code);
+
+	qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
+}
diff --git a/drivers/scsi/qedi/qedi_iscsi.h b/drivers/scsi/qedi/qedi_iscsi.h
new file mode 100644
index 0000000..6da1c90
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_iscsi.h
@@ -0,0 +1,228 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_ISCSI_H_
+#define _QEDI_ISCSI_H_
+
+#include <linux/socket.h>
+#include <linux/completion.h>
+#include "qedi.h"
+
+#define ISCSI_MAX_SESS_PER_HBA	4096
+
+#define DEF_KA_TIMEOUT		7200000
+#define DEF_KA_INTERVAL		10000
+#define DEF_KA_MAX_PROBE_COUNT	10
+#define DEF_TOS			0
+#define DEF_TTL			0xfe
+#define DEF_SND_SEQ_SCALE	0
+#define DEF_RCV_BUF		0xffff
+#define DEF_SND_BUF		0xffff
+#define DEF_SEED		0
+#define DEF_MAX_RT_TIME		8000
+#define DEF_MAX_DA_COUNT        2
+#define DEF_SWS_TIMER		1000
+#define DEF_MAX_CWND		2
+#define DEF_PATH_MTU		1500
+#define DEF_MSS			1460
+#define DEF_LL2_MTU		1560
+#define JUMBO_MTU		9000
+
+#define MIN_MTU         576 /* rfc 791 */
+#define IPV4_HDR_LEN    20
+#define IPV6_HDR_LEN    40
+#define TCP_HDR_LEN     20
+#define TCP_OPTION_LEN  12
+#define VLAN_LEN         4
+
+enum {
+	EP_STATE_IDLE                   = 0x0,
+	EP_STATE_ACQRCONN_START         = 0x1,
+	EP_STATE_ACQRCONN_COMPL         = 0x2,
+	EP_STATE_OFLDCONN_START         = 0x4,
+	EP_STATE_OFLDCONN_COMPL         = 0x8,
+	EP_STATE_DISCONN_START          = 0x10,
+	EP_STATE_DISCONN_COMPL          = 0x20,
+	EP_STATE_CLEANUP_START          = 0x40,
+	EP_STATE_CLEANUP_CMPL           = 0x80,
+	EP_STATE_TCP_FIN_RCVD           = 0x100,
+	EP_STATE_TCP_RST_RCVD           = 0x200,
+	EP_STATE_LOGOUT_SENT            = 0x400,
+	EP_STATE_LOGOUT_RESP_RCVD       = 0x800,
+	EP_STATE_CLEANUP_FAILED         = 0x1000,
+	EP_STATE_OFLDCONN_FAILED        = 0x2000,
+	EP_STATE_CONNECT_FAILED         = 0x4000,
+	EP_STATE_DISCONN_TIMEDOUT       = 0x8000,
+};
+
+struct qedi_conn;
+
+struct qedi_endpoint {
+	struct qedi_ctx *qedi;
+	u32 dst_addr[4];
+	u32 src_addr[4];
+	u16 src_port;
+	u16 dst_port;
+	u16 vlan_id;
+	u16 pmtu;
+	u8 src_mac[ETH_ALEN];
+	u8 dst_mac[ETH_ALEN];
+	u8 ip_type;
+	int state;
+	wait_queue_head_t ofld_wait;
+	wait_queue_head_t tcp_ofld_wait;
+	u32 iscsi_cid;
+	/* identifier of the connection from qed */
+	u32 handle;
+	u32 fw_cid;
+	void __iomem *p_doorbell;
+
+	/* Send queue management */
+	struct iscsi_wqe *sq;
+	dma_addr_t sq_dma;
+
+	u16 sq_prod_idx;
+	u16 fw_sq_prod_idx;
+	u16 sq_con_idx;
+	u32 sq_mem_size;
+
+	void *sq_pbl;
+	dma_addr_t sq_pbl_dma;
+	u32 sq_pbl_size;
+	struct qedi_conn *conn;
+	struct work_struct offload_work;
+};
+
+#define QEDI_SQ_WQES_MIN	16
+
+struct qedi_io_bdt {
+	struct iscsi_sge *sge_tbl;
+	dma_addr_t sge_tbl_dma;
+	u16 sge_valid;
+};
+
+/**
+ * struct generic_pdu_resc - login pdu resource structure
+ *
+ * @req_buf:            driver buffer used to stage payload associated with
+ *                      the login request
+ * @req_dma_addr:       dma address for iscsi login request payload buffer
+ * @req_buf_size:       actual login request payload length
+ * @req_wr_ptr:         pointer into login request buffer when next data is
+ *                      to be written
+ * @resp_hdr:           iscsi header where iscsi login response header is to
+ *                      be recreated
+ * @resp_buf:           buffer to stage login response payload
+ * @resp_dma_addr:      login response payload buffer dma address
+ * @resp_buf_size:      login response payload length
+ * @resp_wr_ptr:        pointer into login response buffer when next data is
+ *                      to be written
+ * @req_bd_tbl:         iscsi login request payload BD table
+ * @req_bd_dma:         login request BD table dma address
+ * @resp_bd_tbl:        iscsi login response payload BD table
+ * @resp_bd_dma:        login response BD table dma address
+ *
+ * This structure defines buffer info for generic PDUs such as iSCSI Login,
+ *      Logout and NOP.
+ */
+struct generic_pdu_resc {
+	char *req_buf;
+	dma_addr_t req_dma_addr;
+	u32 req_buf_size;
+	char *req_wr_ptr;
+	struct iscsi_hdr resp_hdr;
+	char *resp_buf;
+	dma_addr_t resp_dma_addr;
+	u32 resp_buf_size;
+	char *resp_wr_ptr;
+	char *req_bd_tbl;
+	dma_addr_t req_bd_dma;
+	char *resp_bd_tbl;
+	dma_addr_t resp_bd_dma;
+};
+
+struct qedi_conn {
+	struct iscsi_cls_conn *cls_conn;
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *ep;
+	struct list_head active_cmd_list;
+	spinlock_t list_lock;		/* internal conn lock */
+	u32 active_cmd_count;
+	u32 cmd_cleanup_req;
+	u32 cmd_cleanup_cmpl;
+
+	u32 iscsi_conn_id;
+	int itt;
+	int abrt_conn;
+#define QEDI_CID_RESERVED	0x5AFF
+	u32 fw_cid;
+	/*
+	 * Buffer for login negotiation process
+	 */
+	struct generic_pdu_resc gen_pdu;
+
+	struct list_head tmf_work_list;
+	wait_queue_head_t wait_queue;
+	spinlock_t tmf_work_lock;	/* tmf work lock */
+	unsigned long flags;
+#define QEDI_CONN_FW_CLEANUP	1
+};
+
+struct qedi_cmd {
+	struct list_head io_cmd;
+	bool io_cmd_in_list;
+	struct iscsi_hdr hdr;
+	struct qedi_conn *conn;
+	struct scsi_cmnd *scsi_cmd;
+	struct scatterlist *sg;
+	struct qedi_io_bdt io_tbl;
+	struct iscsi_task_context request;
+	unsigned char *sense_buffer;
+	dma_addr_t sense_buffer_dma;
+	u16 task_id;
+
+	/* field populated for tmf work queue */
+	struct iscsi_task *task;
+	struct work_struct tmf_work;
+	int state;
+#define CLEANUP_WAIT	1
+#define CLEANUP_RECV	2
+#define CLEANUP_WAIT_FAILED	3
+#define CLEANUP_NOT_REQUIRED	4
+#define LUN_RESET_RESPONSE_RECEIVED	5
+#define RESPONSE_RECEIVED	6
+
+	int type;
+#define TYPEIO		1
+#define TYPERESET	2
+
+	struct qedi_work_map *list_tmf_work;
+	/* slowpath management */
+	bool use_slowpath;
+
+	struct iscsi_tm_rsp *tmf_resp_buf;
+};
+
+struct qedi_work_map {
+	struct list_head list;
+	struct qedi_cmd *qedi_cmd;
+	int rtid;
+
+	int state;
+#define QEDI_WORK_QUEUED	1
+#define QEDI_WORK_SCHEDULED	2
+#define QEDI_WORK_EXIT		3
+
+	struct work_struct *ptr_tmf_work;
+};
+
+#define qedi_set_itt(task_id, itt) ((u32)(((task_id) & 0xffff) | ((itt) << 16)))
+#define qedi_get_itt(cqe) (cqe.iscsi_hdr.cmd.itt >> 16)
+
+#endif /* _QEDI_ISCSI_H_ */
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index 58ac9a2..22d19a3 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -27,6 +27,8 @@
 #include <scsi/scsi.h>
 
 #include "qedi.h"
+#include "qedi_gbl.h"
+#include "qedi_iscsi.h"
 
 static uint fw_debug;
 module_param(fw_debug, uint, S_IRUGO | S_IWUSR);
@@ -1368,6 +1370,139 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
 	return status;
 }
 
+int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep)
+{
+	int rval = 0;
+	u32 *pbl;
+	dma_addr_t page;
+	int num_pages;
+
+	if (!ep)
+		return -EIO;
+
+	/* Calculate appropriate queue and PBL sizes */
+	ep->sq_mem_size = QEDI_SQ_SIZE * sizeof(struct iscsi_wqe);
+	ep->sq_mem_size += QEDI_PAGE_SIZE - 1;
+
+	ep->sq_pbl_size = (ep->sq_mem_size / QEDI_PAGE_SIZE) * sizeof(void *);
+	ep->sq_pbl_size = ep->sq_pbl_size + QEDI_PAGE_SIZE;
+
+	ep->sq = dma_zalloc_coherent(&qedi->pdev->dev, ep->sq_mem_size,
+				     &ep->sq_dma, GFP_KERNEL);
+	if (!ep->sq) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Could not allocate send queue.\n");
+		rval = -ENOMEM;
+		goto out;
+	}
+
+	ep->sq_pbl = dma_zalloc_coherent(&qedi->pdev->dev, ep->sq_pbl_size,
+					 &ep->sq_pbl_dma, GFP_KERNEL);
+	if (!ep->sq_pbl) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Could not allocate send queue PBL.\n");
+		rval = -ENOMEM;
+		goto out_free_sq;
+	}
+
+	/* Create PBL */
+	num_pages = ep->sq_mem_size / QEDI_PAGE_SIZE;
+	page = ep->sq_dma;
+	pbl = (u32 *)ep->sq_pbl;
+
+	while (num_pages--) {
+		*pbl = (u32)page;
+		pbl++;
+		*pbl = (u32)((u64)page >> 32);
+		pbl++;
+		page += QEDI_PAGE_SIZE;
+	}
+
+	return rval;
+
+out_free_sq:
+	dma_free_coherent(&qedi->pdev->dev, ep->sq_mem_size, ep->sq,
+			  ep->sq_dma);
+out:
+	return rval;
+}
+
+void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep)
+{
+	if (ep->sq_pbl)
+		dma_free_coherent(&qedi->pdev->dev, ep->sq_pbl_size, ep->sq_pbl,
+				  ep->sq_pbl_dma);
+	if (ep->sq)
+		dma_free_coherent(&qedi->pdev->dev, ep->sq_mem_size, ep->sq,
+				  ep->sq_dma);
+}
+
+int qedi_get_task_idx(struct qedi_ctx *qedi)
+{
+	s16 tmp_idx;
+
+again:
+	tmp_idx = find_first_zero_bit(qedi->task_idx_map,
+				      MAX_ISCSI_TASK_ENTRIES);
+
+	if (tmp_idx >= MAX_ISCSI_TASK_ENTRIES) {
+		QEDI_ERR(&qedi->dbg_ctx, "FW task context pool is full.\n");
+		tmp_idx = -1;
+		goto err_idx;
+	}
+
+	if (test_and_set_bit(tmp_idx, qedi->task_idx_map))
+		goto again;
+
+err_idx:
+	return tmp_idx;
+}
+
+void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx)
+{
+	if (!test_and_clear_bit(idx, qedi->task_idx_map)) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "FW task context already cleared, tid=0x%x\n", idx);
+		WARN_ON(1);
+	}
+}
+
+void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt)
+{
+	qedi->itt_map[tid].itt = proto_itt;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "update itt map tid=0x%x, with proto itt=0x%x\n", tid,
+		  qedi->itt_map[tid].itt);
+}
+
+void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, s16 *tid)
+{
+	u16 i;
+
+	for (i = 0; i < MAX_ISCSI_TASK_ENTRIES; i++) {
+		if (qedi->itt_map[i].itt == itt) {
+			*tid = i;
+			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+				  "Ref itt=0x%x, found at tid=0x%x\n",
+				  itt, *tid);
+			return;
+		}
+	}
+
+	WARN_ON(1);
+}
+
+void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt)
+{
+	*proto_itt = qedi->itt_map[tid].itt;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "Get itt map tid [0x%x] with proto itt [0x%x]\n",
+		  tid, *proto_itt);
+}
+
 static int qedi_alloc_itt(struct qedi_ctx *qedi)
 {
 	qedi->itt_map = kzalloc((sizeof(struct qedi_itt_map) *
@@ -1488,6 +1623,26 @@ static int qedi_cpu_callback(struct notifier_block *nfb,
 	.notifier_call = qedi_cpu_callback,
 };
 
+void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu)
+{
+	struct qed_ll2_params params;
+
+	qedi_recover_all_conns(qedi);
+
+	qedi_ops->ll2->stop(qedi->cdev);
+	qedi_ll2_free_skbs(qedi);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "old MTU %u, new MTU %u\n",
+		  qedi->ll2_mtu, mtu);
+	memset(&params, 0, sizeof(params));
+	qedi->ll2_mtu = mtu;
+	params.mtu = qedi->ll2_mtu + IPV6_HDR_LEN + TCP_HDR_LEN;
+	params.drop_ttl0_packets = 0;
+	params.rx_vlan_stripping = 1;
+	ether_addr_copy(params.ll2_mac_address, qedi->dev_info.common.hw_mac);
+	qedi_ops->ll2->start(qedi->cdev, &params);
+}
+
 static void __qedi_remove(struct pci_dev *pdev, int mode)
 {
 	struct qedi_ctx *qedi = pci_get_drvdata(pdev);
@@ -1852,6 +2007,13 @@ static int __init qedi_init(void)
 	qedi_dbg_init("qedi");
 #endif
 
+	qedi_scsi_transport = iscsi_register_transport(&qedi_iscsi_transport);
+	if (!qedi_scsi_transport) {
+		QEDI_ERR(NULL, "Could not register qedi transport\n");
+		rc = -ENOMEM;
+		goto exit_qedi_init_1;
+	}
+
 	register_hotcpu_notifier(&qedi_cpu_notifier);
 
 	ret = pci_register_driver(&qedi_pci_driver);
@@ -1874,6 +2036,7 @@ static int __init qedi_init(void)
 	return rc;
 
 exit_qedi_init_2:
+	iscsi_unregister_transport(&qedi_iscsi_transport);
 exit_qedi_init_1:
 #ifdef CONFIG_DEBUG_FS
 	qedi_dbg_exit();
@@ -1892,6 +2055,7 @@ static void __exit qedi_cleanup(void)
 
 	pci_unregister_driver(&qedi_pci_driver);
 	unregister_hotcpu_notifier(&qedi_cpu_notifier);
+	iscsi_unregister_transport(&qedi_iscsi_transport);
 
 #ifdef CONFIG_DEBUG_FS
 	qedi_dbg_exit();
-- 
1.8.3.1



* [RFC 5/6] qedi: Add support for iSCSI session management.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

This patch adds iscsi_transport LLD support for Login,
Logout, NOP-IN/NOP-OUT, Async and Reject PDU processing,
along with firmware async event handling.

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 drivers/scsi/qedi/qedi_fw.c    | 1123 ++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_gbl.h   |   67 ++
 drivers/scsi/qedi/qedi_iscsi.c | 1604 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_iscsi.h |  228 ++++++
 drivers/scsi/qedi/qedi_main.c  |  164 ++++
 5 files changed, 3186 insertions(+)
 create mode 100644 drivers/scsi/qedi/qedi_fw.c
 create mode 100644 drivers/scsi/qedi/qedi_gbl.h
 create mode 100644 drivers/scsi/qedi/qedi_iscsi.c
 create mode 100644 drivers/scsi/qedi/qedi_iscsi.h

diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
new file mode 100644
index 0000000..a820785
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_fw.c
@@ -0,0 +1,1123 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/blkdev.h>
+#include <scsi/scsi_tcq.h>
+#include <linux/delay.h>
+
+#include "qedi.h"
+#include "qedi_iscsi.h"
+#include "qedi_gbl.h"
+
+static int qedi_send_iscsi_tmf(struct qedi_conn *qedi_conn,
+			       struct iscsi_task *mtask);
+
+void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd)
+{
+	struct scsi_cmnd *sc = cmd->scsi_cmd;
+
+	if (cmd->io_tbl.sge_valid && sc) {
+		scsi_dma_unmap(sc);
+		cmd->io_tbl.sge_valid = 0;
+	}
+}
+
+static void qedi_process_logout_resp(struct qedi_ctx *qedi,
+				     union iscsi_cqe *cqe,
+				     struct iscsi_task *task,
+				     struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_logout_rsp *resp_hdr;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_logout_response_hdr *cqe_logout_response;
+	struct qedi_cmd *cmd;
+
+	cmd = (struct qedi_cmd *)task->dd_data;
+	cqe_logout_response = &cqe->cqe_common.iscsi_hdr.logout_response;
+	spin_lock(&session->back_lock);
+	resp_hdr = (struct iscsi_logout_rsp *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
+	resp_hdr->opcode = cqe_logout_response->opcode;
+	resp_hdr->flags = cqe_logout_response->flags;
+	resp_hdr->hlength = 0;
+
+	resp_hdr->itt = build_itt(cqe->cqe_solicited.itid, conn->session->age);
+	resp_hdr->statsn = cpu_to_be32(cqe_logout_response->stat_sn);
+	resp_hdr->exp_cmdsn = cpu_to_be32(cqe_logout_response->exp_cmd_sn);
+	resp_hdr->max_cmdsn = cpu_to_be32(cqe_logout_response->max_cmd_sn);
+
+	resp_hdr->t2wait = cpu_to_be32(cqe_logout_response->time2wait);
+	resp_hdr->t2retain = cpu_to_be32(cqe_logout_response->time2retain);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	} else {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Active cmd list node already deleted, tid=0x%x, cid=0x%x, io_cmd_node=%p\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id,
+			  &cmd->io_cmd);
+	}
+
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, NULL, 0);
+
+	spin_unlock(&session->back_lock);
+}
+
+static void qedi_process_text_resp(struct qedi_ctx *qedi,
+				   union iscsi_cqe *cqe,
+				   struct iscsi_task *task,
+				   struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_task_context *task_ctx;
+	struct iscsi_text_rsp *resp_hdr_ptr;
+	struct iscsi_text_response_hdr *cqe_text_response;
+	struct qedi_cmd *cmd;
+	int pld_len;
+	u32 *tmp;
+
+	cmd = (struct qedi_cmd *)task->dd_data;
+	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
+								  cmd->task_id);
+
+	cqe_text_response = &cqe->cqe_common.iscsi_hdr.text_response;
+	spin_lock(&session->back_lock);
+	resp_hdr_ptr =  (struct iscsi_text_rsp *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_hdr));
+	resp_hdr_ptr->opcode = cqe_text_response->opcode;
+	resp_hdr_ptr->flags = cqe_text_response->flags;
+	resp_hdr_ptr->hlength = 0;
+
+	hton24(resp_hdr_ptr->dlength,
+	       (cqe_text_response->hdr_second_dword &
+		ISCSI_TEXT_RESPONSE_HDR_DATA_SEG_LEN_MASK));
+	tmp = (u32 *)resp_hdr_ptr->dlength;
+
+	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
+				      conn->session->age);
+	resp_hdr_ptr->ttt = cqe_text_response->ttt;
+	resp_hdr_ptr->statsn = cpu_to_be32(cqe_text_response->stat_sn);
+	resp_hdr_ptr->exp_cmdsn = cpu_to_be32(cqe_text_response->exp_cmd_sn);
+	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_text_response->max_cmd_sn);
+
+	pld_len = cqe_text_response->hdr_second_dword &
+		  ISCSI_TEXT_RESPONSE_HDR_DATA_SEG_LEN_MASK;
+	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
+
+	memset(task_ctx, 0, sizeof(*task_ctx));
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	} else {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Active cmd list node already deleted, tid=0x%x, cid=0x%x, io_cmd_node=%p\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id,
+			  &cmd->io_cmd);
+	}
+
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
+			     qedi_conn->gen_pdu.resp_buf,
+			     (qedi_conn->gen_pdu.resp_wr_ptr -
+			      qedi_conn->gen_pdu.resp_buf));
+	spin_unlock(&session->back_lock);
+}
+
+static void qedi_process_login_resp(struct qedi_ctx *qedi,
+				    union iscsi_cqe *cqe,
+				    struct iscsi_task *task,
+				    struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_task_context *task_ctx;
+	struct iscsi_login_rsp *resp_hdr_ptr;
+	struct iscsi_login_response_hdr *cqe_login_response;
+	struct qedi_cmd *cmd;
+	int pld_len;
+	u32 *tmp;
+
+	cmd = (struct qedi_cmd *)task->dd_data;
+
+	cqe_login_response = &cqe->cqe_common.iscsi_hdr.login_response;
+	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
+							  cmd->task_id);
+	spin_lock(&session->back_lock);
+	resp_hdr_ptr =  (struct iscsi_login_rsp *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_login_rsp));
+	resp_hdr_ptr->opcode = cqe_login_response->opcode;
+	resp_hdr_ptr->flags = cqe_login_response->flags_attr;
+	resp_hdr_ptr->hlength = 0;
+
+	hton24(resp_hdr_ptr->dlength,
+	       (cqe_login_response->hdr_second_dword &
+		ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK));
+	tmp = (u32 *)resp_hdr_ptr->dlength;
+	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
+				      conn->session->age);
+	resp_hdr_ptr->tsih = cqe_login_response->tsih;
+	resp_hdr_ptr->statsn = cpu_to_be32(cqe_login_response->stat_sn);
+	resp_hdr_ptr->exp_cmdsn = cpu_to_be32(cqe_login_response->exp_cmd_sn);
+	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_login_response->max_cmd_sn);
+	resp_hdr_ptr->status_class = cqe_login_response->status_class;
+	resp_hdr_ptr->status_detail = cqe_login_response->status_detail;
+	pld_len = cqe_login_response->hdr_second_dword &
+		  ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK;
+	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
+
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	}
+
+	memset(task_ctx, 0, sizeof(*task_ctx));
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
+			     qedi_conn->gen_pdu.resp_buf,
+			     (qedi_conn->gen_pdu.resp_wr_ptr -
+			     qedi_conn->gen_pdu.resp_buf));
+
+	spin_unlock(&session->back_lock);
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+}
+
+static void qedi_get_rq_bdq_buf(struct qedi_ctx *qedi,
+				struct iscsi_cqe_unsolicited *cqe,
+				char *ptr, int len)
+{
+	u16 idx = 0;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "pld_len [%d], bdq_prod_idx [%d], idx [%d]\n",
+		  len, qedi->bdq_prod_idx,
+		  (qedi->bdq_prod_idx % qedi->rq_num_entries));
+
+	/* Obtain buffer address from rqe_opaque */
+	idx = cqe->rqe_opaque.lo;
+	if (idx > (QEDI_BDQ_NUM - 1)) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "wrong idx %d returned by FW, dropping the unsolicited pkt\n",
+			  idx);
+		return;
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "rqe_opaque.lo [0x%x], rqe_opaque.hi [0x%x], idx [%d]\n",
+		  cqe->rqe_opaque.lo, cqe->rqe_opaque.hi, idx);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "unsol_cqe_type = %d\n", cqe->unsol_cqe_type);
+	switch (cqe->unsol_cqe_type) {
+	case ISCSI_CQE_UNSOLICITED_SINGLE:
+	case ISCSI_CQE_UNSOLICITED_FIRST:
+		if (len)
+			memcpy(ptr, (void *)qedi->bdq[idx].buf_addr, len);
+		break;
+	case ISCSI_CQE_UNSOLICITED_MIDDLE:
+	case ISCSI_CQE_UNSOLICITED_LAST:
+		break;
+	default:
+		break;
+	}
+}
+
+static void qedi_put_rq_bdq_buf(struct qedi_ctx *qedi,
+				struct iscsi_cqe_unsolicited *cqe,
+				int count)
+{
+	u16 tmp;
+	u16 idx = 0;
+	struct scsi_bd *pbl;
+
+	/* Obtain buffer address from rqe_opaque */
+	idx = cqe->rqe_opaque.lo;
+	if (idx > (QEDI_BDQ_NUM - 1)) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "wrong idx %d returned by FW, dropping the unsolicited pkt\n",
+			  idx);
+		return;
+	}
+
+	pbl = (struct scsi_bd *)qedi->bdq_pbl;
+	pbl += (qedi->bdq_prod_idx % qedi->rq_num_entries);
+	pbl->address.hi =
+		      cpu_to_le32((u32)(((u64)(qedi->bdq[idx].buf_dma)) >> 32));
+	pbl->address.lo =
+			cpu_to_le32(((u32)(((u64)(qedi->bdq[idx].buf_dma)) &
+					    0xffffffff)));
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "pbl [0x%p] pbl->address hi [0x%x] lo [0x%x] idx [%d]\n",
+		  pbl, pbl->address.hi, pbl->address.lo, idx);
+	pbl->opaque.hi = cpu_to_le32((u32)(((u64)0) >> 32));
+	pbl->opaque.lo = cpu_to_le32(((u32)(((u64)idx) & 0xffffffff)));
+
+	/* Increment producer to let f/w know we've handled the frame */
+	qedi->bdq_prod_idx += count;
+
+	writew(qedi->bdq_prod_idx, qedi->bdq_primary_prod);
+	tmp = readw(qedi->bdq_primary_prod);
+
+	writew(qedi->bdq_prod_idx, qedi->bdq_secondary_prod);
+	tmp = readw(qedi->bdq_secondary_prod);
+}
+
+static void qedi_unsol_pdu_adjust_bdq(struct qedi_ctx *qedi,
+				      struct iscsi_cqe_unsolicited *cqe,
+				      u32 pdu_len, u32 num_bdqs,
+				      char *bdq_data)
+{
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "num_bdqs [%d]\n", num_bdqs);
+
+	qedi_get_rq_bdq_buf(qedi, cqe, bdq_data, pdu_len);
+	qedi_put_rq_bdq_buf(qedi, cqe, (num_bdqs + 1));
+}
+
+static int qedi_process_nopin_mesg(struct qedi_ctx *qedi,
+				   union iscsi_cqe *cqe,
+				   struct iscsi_task *task,
+				   struct qedi_conn *qedi_conn, u16 que_idx)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_nop_in_hdr *cqe_nop_in;
+	struct iscsi_nopin *hdr;
+	struct qedi_cmd *cmd;
+	int tgt_async_nop = 0;
+	u32 scsi_lun[2];
+	u32 pdu_len, num_bdqs;
+	char bdq_data[QEDI_BDQ_BUF_SIZE];
+	unsigned long flags;
+
+	spin_lock_bh(&session->back_lock);
+	cqe_nop_in = &cqe->cqe_common.iscsi_hdr.nop_in;
+
+	pdu_len = cqe_nop_in->hdr_second_dword &
+		  ISCSI_NOP_IN_HDR_DATA_SEG_LEN_MASK;
+	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
+
+	hdr = (struct iscsi_nopin *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(hdr, 0, sizeof(struct iscsi_hdr));
+	hdr->opcode = cqe_nop_in->opcode;
+	hdr->max_cmdsn = cpu_to_be32(cqe_nop_in->max_cmd_sn);
+	hdr->exp_cmdsn = cpu_to_be32(cqe_nop_in->exp_cmd_sn);
+	hdr->statsn = cpu_to_be32(cqe_nop_in->stat_sn);
+	hdr->ttt = cpu_to_be32(cqe_nop_in->ttt);
+
+	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
+					  pdu_len, num_bdqs, bdq_data);
+		hdr->itt = RESERVED_ITT;
+		tgt_async_nop = 1;
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+		goto done;
+	}
+
+	/* Response to one of our nop-outs */
+	if (task) {
+		cmd = task->dd_data;
+		hdr->flags = ISCSI_FLAG_CMD_FINAL;
+		hdr->itt = build_itt(cqe->cqe_solicited.itid,
+				     conn->session->age);
+		scsi_lun[0] = 0xffffffff;
+		scsi_lun[1] = 0xffffffff;
+		memcpy(&hdr->lun, scsi_lun, sizeof(struct scsi_lun));
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+			  "Freeing tid=0x%x for cid=0x%x\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id);
+		cmd->state = RESPONSE_RECEIVED;
+		spin_lock(&qedi_conn->list_lock);
+		if (likely(cmd->io_cmd_in_list)) {
+			cmd->io_cmd_in_list = false;
+			list_del_init(&cmd->io_cmd);
+			qedi_conn->active_cmd_count--;
+		}
+
+		spin_unlock(&qedi_conn->list_lock);
+		qedi_clear_task_idx(qedi, cmd->task_id);
+	}
+
+done:
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr, bdq_data, pdu_len);
+
+	spin_unlock_bh(&session->back_lock);
+	return tgt_async_nop;
+}
+
+static void qedi_process_async_mesg(struct qedi_ctx *qedi,
+				    union iscsi_cqe *cqe,
+				    struct iscsi_task *task,
+				    struct qedi_conn *qedi_conn,
+				    u16 que_idx)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_async_msg_hdr *cqe_async_msg;
+	struct iscsi_async *resp_hdr;
+	u32 scsi_lun[2];
+	u32 pdu_len, num_bdqs;
+	char bdq_data[QEDI_BDQ_BUF_SIZE];
+	unsigned long flags;
+
+	spin_lock_bh(&session->back_lock);
+
+	cqe_async_msg = &cqe->cqe_common.iscsi_hdr.async_msg;
+	pdu_len = cqe_async_msg->hdr_second_dword &
+		ISCSI_ASYNC_MSG_HDR_DATA_SEG_LEN_MASK;
+	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
+
+	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
+					  pdu_len, num_bdqs, bdq_data);
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+	}
+
+	resp_hdr = (struct iscsi_async *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
+	resp_hdr->opcode = cqe_async_msg->opcode;
+	resp_hdr->flags = 0x80;
+
+	scsi_lun[0] = cpu_to_be32(cqe_async_msg->lun.lo);
+	scsi_lun[1] = cpu_to_be32(cqe_async_msg->lun.hi);
+	memcpy(&resp_hdr->lun, scsi_lun, sizeof(struct scsi_lun));
+	resp_hdr->exp_cmdsn = cpu_to_be32(cqe_async_msg->exp_cmd_sn);
+	resp_hdr->max_cmdsn = cpu_to_be32(cqe_async_msg->max_cmd_sn);
+	resp_hdr->statsn = cpu_to_be32(cqe_async_msg->stat_sn);
+
+	resp_hdr->async_event = cqe_async_msg->async_event;
+	resp_hdr->async_vcode = cqe_async_msg->async_vcode;
+
+	resp_hdr->param1 = cpu_to_be16(cqe_async_msg->param1_rsrv);
+	resp_hdr->param2 = cpu_to_be16(cqe_async_msg->param2_rsrv);
+	resp_hdr->param3 = cpu_to_be16(cqe_async_msg->param3_rsrv);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, bdq_data,
+			     pdu_len);
+
+	spin_unlock_bh(&session->back_lock);
+}
+
+static void qedi_process_reject_mesg(struct qedi_ctx *qedi,
+				     union iscsi_cqe *cqe,
+				     struct iscsi_task *task,
+				     struct qedi_conn *qedi_conn,
+				     uint16_t que_idx)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_reject_hdr *cqe_reject;
+	struct iscsi_reject *hdr;
+	u32 pld_len, num_bdqs;
+	unsigned long flags;
+
+	spin_lock_bh(&session->back_lock);
+	cqe_reject = &cqe->cqe_common.iscsi_hdr.reject;
+	pld_len = cqe_reject->hdr_second_dword &
+		  ISCSI_REJECT_HDR_DATA_SEG_LEN_MASK;
+	num_bdqs = pld_len / QEDI_BDQ_BUF_SIZE;
+
+	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
+		spin_lock_irqsave(&qedi->hba_lock, flags);
+		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
+					  pld_len, num_bdqs, conn->data);
+		spin_unlock_irqrestore(&qedi->hba_lock, flags);
+	}
+	hdr = (struct iscsi_reject *)&qedi_conn->gen_pdu.resp_hdr;
+	memset(hdr, 0, sizeof(struct iscsi_hdr));
+	hdr->opcode = cqe_reject->opcode;
+	hdr->reason = cqe_reject->hdr_reason;
+	hdr->flags = cqe_reject->hdr_flags;
+	hton24(hdr->dlength, (cqe_reject->hdr_second_dword &
+			      ISCSI_REJECT_HDR_DATA_SEG_LEN_MASK));
+	hdr->max_cmdsn = cpu_to_be32(cqe_reject->max_cmd_sn);
+	hdr->exp_cmdsn = cpu_to_be32(cqe_reject->exp_cmd_sn);
+	hdr->statsn = cpu_to_be32(cqe_reject->stat_sn);
+	hdr->ffffffff = cpu_to_be32(0xffffffff);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
+			     conn->data, pld_len);
+	spin_unlock_bh(&session->back_lock);
+}
+
+static void qedi_mtask_completion(struct qedi_ctx *qedi,
+				  union iscsi_cqe *cqe,
+				  struct iscsi_task *task,
+				  struct qedi_conn *conn, uint16_t que_idx)
+{
+	struct iscsi_conn *iscsi_conn;
+	u32 hdr_opcode;
+
+	hdr_opcode = cqe->cqe_common.iscsi_hdr.common.hdr_first_byte;
+	iscsi_conn = conn->cls_conn->dd_data;
+
+	switch (hdr_opcode) {
+	case ISCSI_OPCODE_LOGIN_RESPONSE:
+		qedi_process_login_resp(qedi, cqe, task, conn);
+		break;
+	case ISCSI_OPCODE_TEXT_RESPONSE:
+		qedi_process_text_resp(qedi, cqe, task, conn);
+		break;
+	case ISCSI_OPCODE_LOGOUT_RESPONSE:
+		qedi_process_logout_resp(qedi, cqe, task, conn);
+		break;
+	case ISCSI_OPCODE_NOP_IN:
+		qedi_process_nopin_mesg(qedi, cqe, task, conn, que_idx);
+		break;
+	default:
+		QEDI_ERR(&qedi->dbg_ctx, "unknown opcode 0x%x\n", hdr_opcode);
+	}
+}
+
+static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
+					  struct iscsi_cqe_solicited *cqe,
+					  struct iscsi_task *task,
+					  struct qedi_conn *qedi_conn)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct qedi_cmd *cmd = task->dd_data;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_UNSOL,
+		  "itid=0x%x, cmd task id=0x%x\n",
+		  cqe->itid, cmd->task_id);
+
+	cmd->state = RESPONSE_RECEIVED;
+	qedi_clear_task_idx(qedi, cmd->task_id);
+
+	spin_lock_bh(&session->back_lock);
+	__iscsi_put_task(task);
+	spin_unlock_bh(&session->back_lock);
+}
+
+void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
+			  uint16_t que_idx)
+{
+	struct iscsi_task *task = NULL;
+	struct iscsi_nopout *nopout_hdr;
+	struct qedi_conn *q_conn;
+	struct iscsi_conn *conn;
+	struct iscsi_task_context *fw_task_ctx;
+	u32 comp_type;
+	u32 iscsi_cid;
+	u32 hdr_opcode;
+	u32 ptmp_itt = 0;
+	itt_t proto_itt = 0;
+	u8 cqe_err_bits = 0;
+
+	comp_type = cqe->cqe_common.cqe_type;
+	hdr_opcode = cqe->cqe_common.iscsi_hdr.common.hdr_first_byte;
+	cqe_err_bits =
+		cqe->cqe_common.error_bitmap.error_bits.cqe_error_status_bits;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "fw_cid=0x%x, cqe type=0x%x, opcode=0x%x\n",
+		  cqe->cqe_common.conn_id, comp_type, hdr_opcode);
+
+	if (comp_type >= MAX_ISCSI_CQES_TYPE) {
+		QEDI_WARN(&qedi->dbg_ctx, "Invalid CQE type 0x%x\n", comp_type);
+		return;
+	}
+
+	iscsi_cid  = cqe->cqe_common.conn_id;
+	q_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
+	if (!q_conn) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Session no longer exists for cid=0x%x!!\n",
+			  iscsi_cid);
+		return;
+	}
+
+	conn = q_conn->cls_conn->dd_data;
+
+	if (unlikely(cqe_err_bits &&
+		     GET_FIELD(cqe_err_bits,
+			       CQE_ERROR_BITMAP_DATA_DIGEST_ERR))) {
+		iscsi_conn_failure(conn, ISCSI_ERR_DATA_DGST);
+		return;
+	}
+
+	switch (comp_type) {
+	case ISCSI_CQE_TYPE_SOLICITED:
+	case ISCSI_CQE_TYPE_SOLICITED_WITH_SENSE:
+		fw_task_ctx =
+		  (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
+						      cqe->cqe_solicited.itid);
+		if (fw_task_ctx->ystorm_st_context.state.local_comp == 1) {
+			qedi_get_proto_itt(qedi, cqe->cqe_solicited.itid,
+					   &ptmp_itt);
+			proto_itt = build_itt(ptmp_itt, conn->session->age);
+		} else {
+			cqe->cqe_solicited.itid =
+					    qedi_get_itt(cqe->cqe_solicited);
+			proto_itt = build_itt(cqe->cqe_solicited.itid,
+					      conn->session->age);
+		}
+
+		spin_lock_bh(&conn->session->back_lock);
+		task = iscsi_itt_to_task(conn, proto_itt);
+		spin_unlock_bh(&conn->session->back_lock);
+
+		if (!task) {
+			QEDI_WARN(&qedi->dbg_ctx, "task is NULL\n");
+			return;
+		}
+
+		/* Process NOPIN local completion */
+		nopout_hdr = (struct iscsi_nopout *)task->hdr;
+		if ((nopout_hdr->itt == RESERVED_ITT) &&
+		    (cqe->cqe_solicited.itid != (u16)RESERVED_ITT))
+			qedi_process_nopin_local_cmpl(qedi, &cqe->cqe_solicited,
+						      task, q_conn);
+		else
+			/* Process other solicited responses */
+			qedi_mtask_completion(qedi, cqe, task, q_conn, que_idx);
+		break;
+	case ISCSI_CQE_TYPE_UNSOLICITED:
+		switch (hdr_opcode) {
+		case ISCSI_OPCODE_NOP_IN:
+			qedi_process_nopin_mesg(qedi, cqe, task, q_conn,
+						que_idx);
+			break;
+		case ISCSI_OPCODE_ASYNC_MSG:
+			qedi_process_async_mesg(qedi, cqe, task, q_conn,
+						que_idx);
+			break;
+		case ISCSI_OPCODE_REJECT:
+			qedi_process_reject_mesg(qedi, cqe, task, q_conn,
+						 que_idx);
+			break;
+		}
+		goto exit_fp_process;
+	default:
+		QEDI_ERR(&qedi->dbg_ctx, "Error cqe.\n");
+		break;
+	}
+
+exit_fp_process:
+	return;
+}
+
+static void qedi_add_to_sq(struct qedi_conn *qedi_conn, struct iscsi_task *task,
+			   u16 tid, uint16_t ptu_invalidate, int is_cleanup)
+{
+	struct iscsi_wqe *wqe;
+	struct iscsi_wqe_field *cont_field;
+	struct qedi_endpoint *ep;
+	struct scsi_cmnd *sc = task->sc;
+	struct iscsi_login_req *login_hdr;
+	struct qedi_cmd *cmd = task->dd_data;
+
+	login_hdr = (struct iscsi_login_req *)task->hdr;
+	ep = qedi_conn->ep;
+	wqe = &ep->sq[ep->sq_prod_idx];
+
+	memset(wqe, 0, sizeof(*wqe));
+
+	ep->sq_prod_idx++;
+	ep->fw_sq_prod_idx++;
+	if (ep->sq_prod_idx == QEDI_SQ_SIZE)
+		ep->sq_prod_idx = 0;
+
+	if (is_cleanup) {
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_TASK_CLEANUP);
+		wqe->task_id = tid;
+		return;
+	}
+
+	if (ptu_invalidate) {
+		SET_FIELD(wqe->flags, ISCSI_WQE_PTU_INVALIDATE,
+			  ISCSI_WQE_SET_PTU_INVALIDATE);
+	}
+
+	cont_field = &wqe->cont_prevtid_union.cont_field;
+
+	switch (task->hdr->opcode & ISCSI_OPCODE_MASK) {
+	case ISCSI_OP_LOGIN:
+	case ISCSI_OP_TEXT:
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_MIDDLE_PATH);
+		SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES,
+			  1);
+		cont_field->contlen_cdbsize_field = ntoh24(login_hdr->dlength);
+		break;
+	case ISCSI_OP_LOGOUT:
+	case ISCSI_OP_NOOP_OUT:
+	case ISCSI_OP_SCSI_TMFUNC:
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_NORMAL);
+		break;
+	default:
+		if (!sc)
+			break;
+
+		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
+			  ISCSI_WQE_TYPE_NORMAL);
+		cont_field->contlen_cdbsize_field =
+				(sc->sc_data_direction == DMA_TO_DEVICE) ?
+				scsi_bufflen(sc) : 0;
+		if (cmd->use_slowpath)
+			SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES, 0);
+		else
+			SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES,
+				  (sc->sc_data_direction ==
+				   DMA_TO_DEVICE) ?
+				  min((u16)QEDI_FAST_SGE_COUNT,
+				      (u16)cmd->io_tbl.sge_valid) : 0);
+		break;
+	}
+
+	wqe->task_id = tid;
+	/* Make sure SQ data is coherent */
+	wmb();
+}
+
+static void qedi_ring_doorbell(struct qedi_conn *qedi_conn)
+{
+	struct iscsi_db_data dbell = { 0 };
+
+	dbell.agg_flags = 0;
+
+	dbell.params |= DB_DEST_XCM << ISCSI_DB_DATA_DEST_SHIFT;
+	dbell.params |= DB_AGG_CMD_SET << ISCSI_DB_DATA_AGG_CMD_SHIFT;
+	dbell.params |=
+		   DQ_XCM_ISCSI_SQ_PROD_CMD << ISCSI_DB_DATA_AGG_VAL_SEL_SHIFT;
+
+	dbell.sq_prod = qedi_conn->ep->fw_sq_prod_idx;
+	writel(*(u32 *)&dbell, qedi_conn->ep->p_doorbell);
+	/* Make sure fw idx is coherent */
+	wmb();
+	mmiowb();
+	QEDI_INFO(&qedi_conn->qedi->dbg_ctx, QEDI_LOG_MP_REQ,
+		  "prod_idx=0x%x, fw_prod_idx=0x%x, cid=0x%x\n",
+		  qedi_conn->ep->sq_prod_idx, qedi_conn->ep->fw_sq_prod_idx,
+		  qedi_conn->iscsi_conn_id);
+}
+
+int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *task)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_login_req *login_hdr;
+	struct iscsi_login_req_hdr *fw_login_req = NULL;
+	struct iscsi_cached_sge_ctx *cached_sge = NULL;
+	struct iscsi_sge *single_sge = NULL;
+	struct iscsi_sge *req_sge = NULL;
+	struct iscsi_sge *resp_sge = NULL;
+	struct qedi_cmd *qedi_cmd;
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	login_hdr = (struct iscsi_login_req *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	fw_task_ctx =
+	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_login_req = &fw_task_ctx->ystorm_st_context.pdu_hdr.login_req;
+	fw_login_req->opcode = login_hdr->opcode;
+	fw_login_req->version_min = login_hdr->min_version;
+	fw_login_req->version_max = login_hdr->max_version;
+	fw_login_req->flags_attr = login_hdr->flags;
+	fw_login_req->isid_tabc = *((u16 *)login_hdr->isid + 2);
+	fw_login_req->isid_d = *((u32 *)login_hdr->isid);
+	fw_login_req->tsih = login_hdr->tsih;
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_login_req->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_login_req->cid = qedi_conn->iscsi_conn_id;
+	fw_login_req->cmd_sn = be32_to_cpu(login_hdr->cmdsn);
+	fw_login_req->exp_stat_sn = 0;
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+	cached_sge =
+	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
+	cached_sge->sge.sge_len = req_sge->sge_len;
+	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
+	cached_sge->sge.sge_addr.hi =
+			     (u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+
+	/* Mstorm context */
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SINGLE_SGE, 1);
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SLOW_IO, 0);
+	fw_task_ctx->mstorm_st_context.sgl_size = 1;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len =
+						ntoh24(login_hdr->dlength);
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	fw_task_ctx->ustorm_ag_context.exp_data_acked =
+						 ntoh24(login_hdr->dlength);
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	return 0;
+}
+
+int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_logout_req_hdr *fw_logout_req = NULL;
+	struct iscsi_task_context *fw_task_ctx = NULL;
+	struct iscsi_logout *logout_hdr = NULL;
+	struct qedi_cmd *qedi_cmd = NULL;
+	s16 tid = 0;
+	s16 ptu_invalidate = 0;
+
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	logout_hdr = (struct iscsi_logout *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	fw_task_ctx =
+	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_logout_req = &fw_task_ctx->ystorm_st_context.pdu_hdr.logout_req;
+	fw_logout_req->opcode = ISCSI_OPCODE_LOGOUT_REQUEST;
+	fw_logout_req->reason_code = 0x80 | logout_hdr->flags;
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_logout_req->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_logout_req->exp_stat_sn = be32_to_cpu(logout_hdr->exp_statsn);
+	fw_logout_req->cmd_sn = be32_to_cpu(logout_hdr->cmdsn);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						  qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+	fw_logout_req->cid = qedi_conn->iscsi_conn_id;
+	fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
+
+	/* Mstorm context */
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, 0);
+
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+
+	return 0;
+}
+
+int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
+			 struct iscsi_task *task)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_text_request_hdr *fw_text_request;
+	struct iscsi_cached_sge_ctx *cached_sge;
+	struct iscsi_sge *single_sge;
+	struct qedi_cmd *qedi_cmd;
+	/* For 6.5 hdr iscsi_hdr */
+	struct iscsi_text *text_hdr;
+	struct iscsi_sge *req_sge;
+	struct iscsi_sge *resp_sge;
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	text_hdr = (struct iscsi_text *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	fw_task_ctx =
+	(struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_text_request =
+			&fw_task_ctx->ystorm_st_context.pdu_hdr.text_request;
+	fw_text_request->opcode = text_hdr->opcode;
+	fw_text_request->flags_attr = text_hdr->flags;
+
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_text_request->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_text_request->ttt = text_hdr->ttt;
+	fw_text_request->cmd_sn = be32_to_cpu(text_hdr->cmdsn);
+	fw_text_request->exp_stat_sn = be32_to_cpu(text_hdr->exp_statsn);
+	fw_text_request->hdr_second_dword = ntoh24(text_hdr->dlength);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						     qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						   qedi->tid_reuse_count[tid]++;
+
+	cached_sge =
+	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
+	cached_sge->sge.sge_len = req_sge->sge_len;
+	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
+	cached_sge->sge.sge_addr.hi =
+			      (u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+
+	/* Mstorm context */
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SINGLE_SGE, 1);
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SLOW_IO, 0);
+	fw_task_ctx->mstorm_st_context.sgl_size = 1;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_ag_context.exp_data_acked =
+						      ntoh24(text_hdr->dlength);
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len =
+						      ntoh24(text_hdr->dlength);
+	fw_task_ctx->ustorm_st_context.exp_data_sn =
+					      be32_to_cpu(text_hdr->exp_statsn);
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+
+	/*  Add command in active command list */
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+
+	return 0;
+}
+
+int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task,
+			   char *datap, int data_len, int unsol)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_nop_out_hdr *fw_nop_out;
+	struct qedi_cmd *qedi_cmd;
+	/* For 6.5 hdr iscsi_hdr */
+	struct iscsi_nopout *nopout_hdr;
+	struct iscsi_cached_sge_ctx *cached_sge;
+	struct iscsi_sge *single_sge;
+	struct iscsi_sge *req_sge;
+	struct iscsi_sge *resp_sge;
+	u32 scsi_lun[2];
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)task->dd_data;
+	nopout_hdr = (struct iscsi_nopout *)task->hdr;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1) {
+		QEDI_WARN(&qedi->dbg_ctx, "Invalid tid\n");
+		return -ENOMEM;
+	}
+
+	fw_task_ctx =
+	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+	qedi_cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_nop_out = &fw_task_ctx->ystorm_st_context.pdu_hdr.nop_out;
+	SET_FIELD(fw_nop_out->flags_attr, ISCSI_NOP_OUT_HDR_CONST1, 1);
+	SET_FIELD(fw_nop_out->flags_attr, ISCSI_NOP_OUT_HDR_RSRV, 0);
+
+	memcpy(scsi_lun, &nopout_hdr->lun, sizeof(struct scsi_lun));
+	fw_nop_out->lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_nop_out->lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	qedi_update_itt_map(qedi, tid, task->itt);
+
+	if (nopout_hdr->ttt != ISCSI_TTT_ALL_ONES) {
+		fw_nop_out->itt = be32_to_cpu(nopout_hdr->itt);
+		fw_nop_out->ttt = be32_to_cpu(nopout_hdr->ttt);
+		fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
+		fw_task_ctx->ystorm_st_context.state.local_comp = 1;
+		SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+			  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 1);
+	} else {
+		fw_nop_out->itt = qedi_set_itt(tid, get_itt(task->itt));
+		fw_nop_out->ttt = ISCSI_TTT_ALL_ONES;
+		fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
+
+		spin_lock(&qedi_conn->list_lock);
+		list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+		qedi_cmd->io_cmd_in_list = true;
+		qedi_conn->active_cmd_count++;
+		spin_unlock(&qedi_conn->list_lock);
+	}
+
+	fw_nop_out->opcode = ISCSI_OPCODE_NOP_OUT;
+	fw_nop_out->cmd_sn = be32_to_cpu(nopout_hdr->cmdsn);
+	fw_nop_out->exp_stat_sn = be32_to_cpu(nopout_hdr->exp_statsn);
+
+	cached_sge =
+	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
+	cached_sge->sge.sge_len = req_sge->sge_len;
+	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
+	cached_sge->sge.sge_addr.hi =
+			(u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+
+	/* Mstorm context */
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = data_len;
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, 0);
+
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+
+	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	return 0;
+}
diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
new file mode 100644
index 0000000..85ea3d7
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_gbl.h
@@ -0,0 +1,67 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_GBL_H_
+#define _QEDI_GBL_H_
+
+#include "qedi_iscsi.h"
+
+extern uint io_tracing;
+extern int do_not_recover;
+extern struct scsi_host_template qedi_host_template;
+extern struct iscsi_transport qedi_iscsi_transport;
+extern const struct qed_iscsi_ops *qedi_ops;
+extern struct qedi_debugfs_ops qedi_debugfs_ops;
+extern const struct file_operations qedi_dbg_fops;
+extern struct device_attribute *qedi_shost_attrs[];
+
+int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
+void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
+
+int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *task);
+int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task);
+int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
+			 struct iscsi_task *task);
+int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
+			   struct iscsi_task *task,
+			   char *datap, int data_len, int unsol);
+int qedi_get_task_idx(struct qedi_ctx *qedi);
+void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
+int qedi_iscsi_cleanup_task(struct iscsi_task *task,
+			    bool mark_cmd_node_deleted);
+void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd);
+void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt);
+void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt);
+void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid);
+void qedi_process_iscsi_error(struct qedi_endpoint *ep,
+			      struct async_data *data);
+void qedi_start_conn_recovery(struct qedi_ctx *qedi,
+			      struct qedi_conn *qedi_conn);
+struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid);
+void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data);
+void qedi_mark_device_missing(struct iscsi_cls_session *cls_session);
+void qedi_mark_device_available(struct iscsi_cls_session *cls_session);
+void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu);
+int qedi_recover_all_conns(struct qedi_ctx *qedi);
+void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
+			  uint16_t que_idx);
+void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
+		   u16 tid, int8_t direction);
+int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
+u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl);
+void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id);
+int qedi_create_sysfs_ctx_attr(struct qedi_ctx *qedi);
+void qedi_remove_sysfs_ctx_attr(struct qedi_ctx *qedi);
+void qedi_clearsq(struct qedi_ctx *qedi,
+		  struct qedi_conn *qedi_conn,
+		  struct iscsi_task *task);
+
+#endif
diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
new file mode 100644
index 0000000..caecdb8
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_iscsi.c
@@ -0,0 +1,1604 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/blkdev.h>
+#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
+#include <scsi/scsi_tcq.h>
+
+#include "qedi.h"
+#include "qedi_iscsi.h"
+#include "qedi_gbl.h"
+
+int qedi_recover_all_conns(struct qedi_ctx *qedi)
+{
+	struct qedi_conn *qedi_conn;
+	int i;
+
+	for (i = 0; i < qedi->max_active_conns; i++) {
+		qedi_conn = qedi_get_conn_from_id(qedi, i);
+		if (!qedi_conn)
+			continue;
+
+		qedi_start_conn_recovery(qedi, qedi_conn);
+	}
+
+	return SUCCESS;
+}
+
+static int qedi_eh_host_reset(struct scsi_cmnd *cmd)
+{
+	struct Scsi_Host *shost = cmd->device->host;
+	struct qedi_ctx *qedi;
+
+	qedi = (struct qedi_ctx *)iscsi_host_priv(shost);
+
+	return qedi_recover_all_conns(qedi);
+}
+
+struct scsi_host_template qedi_host_template = {
+	.module = THIS_MODULE,
+	.name = "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver",
+	.proc_name = QEDI_MODULE_NAME,
+	.queuecommand = iscsi_queuecommand,
+	.eh_abort_handler = iscsi_eh_abort,
+	.eh_device_reset_handler = iscsi_eh_device_reset,
+	.eh_target_reset_handler = iscsi_eh_recover_target,
+	.eh_host_reset_handler = qedi_eh_host_reset,
+	.target_alloc = iscsi_target_alloc,
+	.change_queue_depth = scsi_change_queue_depth,
+	.can_queue = QEDI_MAX_ISCSI_TASK,
+	.this_id = -1,
+	.sg_tablesize = QEDI_ISCSI_MAX_BDS_PER_CMD,
+	.max_sectors = 0xffff,
+	.cmd_per_lun = 128,
+	.use_clustering = ENABLE_CLUSTERING,
+	.shost_attrs = qedi_shost_attrs,
+};
+
+static void qedi_conn_free_login_resources(struct qedi_ctx *qedi,
+					   struct qedi_conn *qedi_conn)
+{
+	if (qedi_conn->gen_pdu.resp_bd_tbl) {
+		dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				  qedi_conn->gen_pdu.resp_bd_tbl,
+				  qedi_conn->gen_pdu.resp_bd_dma);
+		qedi_conn->gen_pdu.resp_bd_tbl = NULL;
+	}
+
+	if (qedi_conn->gen_pdu.req_bd_tbl) {
+		dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				  qedi_conn->gen_pdu.req_bd_tbl,
+				  qedi_conn->gen_pdu.req_bd_dma);
+		qedi_conn->gen_pdu.req_bd_tbl = NULL;
+	}
+
+	if (qedi_conn->gen_pdu.resp_buf) {
+		dma_free_coherent(&qedi->pdev->dev,
+				  ISCSI_DEF_MAX_RECV_SEG_LEN,
+				  qedi_conn->gen_pdu.resp_buf,
+				  qedi_conn->gen_pdu.resp_dma_addr);
+		qedi_conn->gen_pdu.resp_buf = NULL;
+	}
+
+	if (qedi_conn->gen_pdu.req_buf) {
+		dma_free_coherent(&qedi->pdev->dev,
+				  ISCSI_DEF_MAX_RECV_SEG_LEN,
+				  qedi_conn->gen_pdu.req_buf,
+				  qedi_conn->gen_pdu.req_dma_addr);
+		qedi_conn->gen_pdu.req_buf = NULL;
+	}
+}
+
+static int qedi_conn_alloc_login_resources(struct qedi_ctx *qedi,
+					   struct qedi_conn *qedi_conn)
+{
+	qedi_conn->gen_pdu.req_buf =
+		dma_alloc_coherent(&qedi->pdev->dev,
+				   ISCSI_DEF_MAX_RECV_SEG_LEN,
+				   &qedi_conn->gen_pdu.req_dma_addr,
+				   GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.req_buf)
+		goto login_req_buf_failure;
+
+	qedi_conn->gen_pdu.req_buf_size = 0;
+	qedi_conn->gen_pdu.req_wr_ptr = qedi_conn->gen_pdu.req_buf;
+
+	qedi_conn->gen_pdu.resp_buf =
+		dma_alloc_coherent(&qedi->pdev->dev,
+				   ISCSI_DEF_MAX_RECV_SEG_LEN,
+				   &qedi_conn->gen_pdu.resp_dma_addr,
+				   GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.resp_buf)
+		goto login_resp_buf_failure;
+
+	qedi_conn->gen_pdu.resp_buf_size = ISCSI_DEF_MAX_RECV_SEG_LEN;
+	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf;
+
+	qedi_conn->gen_pdu.req_bd_tbl =
+		dma_alloc_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				   &qedi_conn->gen_pdu.req_bd_dma, GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.req_bd_tbl)
+		goto login_req_bd_tbl_failure;
+
+	qedi_conn->gen_pdu.resp_bd_tbl =
+		dma_alloc_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+				   &qedi_conn->gen_pdu.resp_bd_dma,
+				   GFP_KERNEL);
+	if (!qedi_conn->gen_pdu.resp_bd_tbl)
+		goto login_resp_bd_tbl_failure;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SESS,
+		  "Allocation successful, cid=0x%x\n",
+		  qedi_conn->iscsi_conn_id);
+	return 0;
+
+login_resp_bd_tbl_failure:
+	dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
+			  qedi_conn->gen_pdu.req_bd_tbl,
+			  qedi_conn->gen_pdu.req_bd_dma);
+	qedi_conn->gen_pdu.req_bd_tbl = NULL;
+
+login_req_bd_tbl_failure:
+	dma_free_coherent(&qedi->pdev->dev, ISCSI_DEF_MAX_RECV_SEG_LEN,
+			  qedi_conn->gen_pdu.resp_buf,
+			  qedi_conn->gen_pdu.resp_dma_addr);
+	qedi_conn->gen_pdu.resp_buf = NULL;
+login_resp_buf_failure:
+	dma_free_coherent(&qedi->pdev->dev, ISCSI_DEF_MAX_RECV_SEG_LEN,
+			  qedi_conn->gen_pdu.req_buf,
+			  qedi_conn->gen_pdu.req_dma_addr);
+	qedi_conn->gen_pdu.req_buf = NULL;
+login_req_buf_failure:
+	iscsi_conn_printk(KERN_ERR, qedi_conn->cls_conn->dd_data,
+			  "login resource alloc failed!!\n");
+	return -ENOMEM;
+}
+
+static void qedi_destroy_cmd_pool(struct qedi_ctx *qedi,
+				  struct iscsi_session *session)
+{
+	int i;
+
+	for (i = 0; i < session->cmds_max; i++) {
+		struct iscsi_task *task = session->cmds[i];
+		struct qedi_cmd *cmd = task->dd_data;
+
+		if (cmd->io_tbl.sge_tbl)
+			dma_free_coherent(&qedi->pdev->dev,
+					  QEDI_ISCSI_MAX_BDS_PER_CMD *
+					  sizeof(struct iscsi_sge),
+					  cmd->io_tbl.sge_tbl,
+					  cmd->io_tbl.sge_tbl_dma);
+
+		if (cmd->sense_buffer)
+			dma_free_coherent(&qedi->pdev->dev,
+					  SCSI_SENSE_BUFFERSIZE,
+					  cmd->sense_buffer,
+					  cmd->sense_buffer_dma);
+	}
+}
+
+static int qedi_alloc_sget(struct qedi_ctx *qedi, struct iscsi_session *session,
+			   struct qedi_cmd *cmd)
+{
+	struct qedi_io_bdt *io = &cmd->io_tbl;
+	struct iscsi_sge *sge;
+
+	io->sge_tbl = dma_alloc_coherent(&qedi->pdev->dev,
+					 QEDI_ISCSI_MAX_BDS_PER_CMD *
+					 sizeof(*sge),
+					 &io->sge_tbl_dma, GFP_KERNEL);
+	if (!io->sge_tbl) {
+		iscsi_session_printk(KERN_ERR, session,
+				     "Could not allocate BD table.\n");
+		return -ENOMEM;
+	}
+
+	io->sge_valid = 0;
+	return 0;
+}
+
+static int qedi_setup_cmd_pool(struct qedi_ctx *qedi,
+			       struct iscsi_session *session)
+{
+	int i;
+
+	for (i = 0; i < session->cmds_max; i++) {
+		struct iscsi_task *task = session->cmds[i];
+		struct qedi_cmd *cmd = task->dd_data;
+
+		task->hdr = &cmd->hdr;
+		task->hdr_max = sizeof(struct iscsi_hdr);
+
+		if (qedi_alloc_sget(qedi, session, cmd))
+			goto free_sgets;
+
+		cmd->sense_buffer = dma_alloc_coherent(&qedi->pdev->dev,
+						       SCSI_SENSE_BUFFERSIZE,
+						       &cmd->sense_buffer_dma,
+						       GFP_KERNEL);
+		if (!cmd->sense_buffer)
+			goto free_sgets;
+	}
+
+	return 0;
+
+free_sgets:
+	qedi_destroy_cmd_pool(qedi, session);
+	return -ENOMEM;
+}
+
+static struct iscsi_cls_session *
+qedi_session_create(struct iscsi_endpoint *ep, u16 cmds_max,
+		    u16 qdepth, uint32_t initial_cmdsn)
+{
+	struct Scsi_Host *shost;
+	struct iscsi_cls_session *cls_session;
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *qedi_ep;
+
+	if (!ep)
+		return NULL;
+
+	qedi_ep = ep->dd_data;
+	shost = qedi_ep->qedi->shost;
+	qedi = iscsi_host_priv(shost);
+
+	if (cmds_max > qedi->max_sqes)
+		cmds_max = qedi->max_sqes;
+	else if (cmds_max < QEDI_SQ_WQES_MIN)
+		cmds_max = QEDI_SQ_WQES_MIN;
+
+	cls_session = iscsi_session_setup(&qedi_iscsi_transport, shost,
+					  cmds_max, 0, sizeof(struct qedi_cmd),
+					  initial_cmdsn, ISCSI_MAX_TARGET);
+	if (!cls_session) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to setup session for ep=%p\n", qedi_ep);
+		return NULL;
+	}
+
+	if (qedi_setup_cmd_pool(qedi, cls_session->dd_data)) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to setup cmd pool for ep=%p\n", qedi_ep);
+		goto session_teardown;
+	}
+
+	return cls_session;
+
+session_teardown:
+	iscsi_session_teardown(cls_session);
+	return NULL;
+}
+
+static void qedi_session_destroy(struct iscsi_cls_session *cls_session)
+{
+	struct iscsi_session *session = cls_session->dd_data;
+	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+
+	qedi_destroy_cmd_pool(qedi, session);
+	iscsi_session_teardown(cls_session);
+}
+
+static struct iscsi_cls_conn *
+qedi_conn_create(struct iscsi_cls_session *cls_session, uint32_t cid)
+{
+	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+	struct iscsi_cls_conn *cls_conn;
+	struct qedi_conn *qedi_conn;
+	struct iscsi_conn *conn;
+
+	cls_conn = iscsi_conn_setup(cls_session, sizeof(*qedi_conn),
+				    cid);
+	if (!cls_conn) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "conn_new: iscsi conn setup failed, cid=0x%x, cls_sess=%p!\n",
+			 cid, cls_session);
+		return NULL;
+	}
+
+	conn = cls_conn->dd_data;
+	qedi_conn = conn->dd_data;
+	qedi_conn->cls_conn = cls_conn;
+	qedi_conn->qedi = qedi;
+	qedi_conn->ep = NULL;
+	qedi_conn->active_cmd_count = 0;
+	INIT_LIST_HEAD(&qedi_conn->active_cmd_list);
+	spin_lock_init(&qedi_conn->list_lock);
+
+	if (qedi_conn_alloc_login_resources(qedi, qedi_conn)) {
+		iscsi_conn_printk(KERN_ALERT, conn,
+				  "conn_new: login resc alloc failed, cid=0x%x, cls_sess=%p!!\n",
+				   cid, cls_session);
+		goto free_conn;
+	}
+
+	return cls_conn;
+
+free_conn:
+	iscsi_conn_teardown(cls_conn);
+	return NULL;
+}
+
+void qedi_mark_device_missing(struct iscsi_cls_session *cls_session)
+{
+	iscsi_block_session(cls_session);
+}
+
+void qedi_mark_device_available(struct iscsi_cls_session *cls_session)
+{
+	iscsi_unblock_session(cls_session);
+}
+
+static int qedi_bind_conn_to_iscsi_cid(struct qedi_ctx *qedi,
+				       struct qedi_conn *qedi_conn)
+{
+	u32 iscsi_cid = qedi_conn->iscsi_conn_id;
+
+	if (qedi->cid_que.conn_cid_tbl[iscsi_cid]) {
+		iscsi_conn_printk(KERN_ALERT, qedi_conn->cls_conn->dd_data,
+				  "conn bind - entry #%d not free\n",
+				  iscsi_cid);
+		return -EBUSY;
+	}
+
+	qedi->cid_que.conn_cid_tbl[iscsi_cid] = qedi_conn;
+	return 0;
+}
+
+struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid)
+{
+	if (!qedi->cid_que.conn_cid_tbl) {
+		QEDI_ERR(&qedi->dbg_ctx, "missing conn<->cid table\n");
+		return NULL;
+	} else if (iscsi_cid >= qedi->max_active_conns) {
+		QEDI_ERR(&qedi->dbg_ctx, "wrong cid #%d\n", iscsi_cid);
+		return NULL;
+	}
+	return qedi->cid_que.conn_cid_tbl[iscsi_cid];
+}
+
+static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+			  struct iscsi_cls_conn *cls_conn,
+			  u64 transport_fd, int is_leading)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+	struct qedi_endpoint *qedi_ep;
+	struct iscsi_endpoint *ep;
+
+	ep = iscsi_lookup_endpoint(transport_fd);
+	if (!ep)
+		return -EINVAL;
+
+	qedi_ep = ep->dd_data;
+	if ((qedi_ep->state == EP_STATE_TCP_FIN_RCVD) ||
+	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD))
+		return -EINVAL;
+
+	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+		return -EINVAL;
+
+	qedi_ep->conn = qedi_conn;
+	qedi_conn->ep = qedi_ep;
+	qedi_conn->iscsi_conn_id = qedi_ep->iscsi_cid;
+	qedi_conn->fw_cid = qedi_ep->fw_cid;
+	qedi_conn->cmd_cleanup_req = 0;
+	qedi_conn->cmd_cleanup_cmpl = 0;
+
+	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn))
+		return -EINVAL;
+
+	spin_lock_init(&qedi_conn->tmf_work_lock);
+	INIT_LIST_HEAD(&qedi_conn->tmf_work_list);
+	init_waitqueue_head(&qedi_conn->wait_queue);
+	return 0;
+}
+
+static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
+				  struct qedi_conn *qedi_conn)
+{
+	struct qed_iscsi_params_update *conn_info;
+	struct iscsi_cls_conn *cls_conn = qedi_conn->cls_conn;
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_endpoint *qedi_ep;
+	int rval;
+
+	qedi_ep = qedi_conn->ep;
+
+	conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
+	if (!conn_info) {
+		QEDI_ERR(&qedi->dbg_ctx, "memory alloc failed\n");
+		return -ENOMEM;
+	}
+
+	conn_info->update_flag = 0;
+
+	if (conn->hdrdgst_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_HD_EN, true);
+	if (conn->datadgst_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_DD_EN, true);
+	if (conn->session->initial_r2t_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_INITIAL_R2T,
+			  true);
+	if (conn->session->imm_data_en)
+		SET_FIELD(conn_info->update_flag,
+			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_IMMEDIATE_DATA,
+			  true);
+
+	conn_info->max_seq_size = conn->session->max_burst;
+	conn_info->max_recv_pdu_length = conn->max_recv_dlength;
+	conn_info->max_send_pdu_length = conn->max_xmit_dlength;
+	conn_info->first_seq_length = conn->session->first_burst;
+	conn_info->exp_stat_sn = conn->exp_statsn;
+
+	rval = qedi_ops->update_conn(qedi->cdev, qedi_ep->handle,
+				     conn_info);
+	if (rval) {
+		rval = -ENXIO;
+		QEDI_ERR(&qedi->dbg_ctx, "Could not update connection\n");
+	}
+
+	kfree(conn_info);
+	return rval;
+}
+
+static u16 qedi_calc_mss(u16 pmtu, u8 is_ipv6, u8 tcp_ts_en, u8 vlan_en)
+{
+	u16 mss = 0;
+	u16 hdrs = TCP_HDR_LEN;
+
+	if (is_ipv6)
+		hdrs += IPV6_HDR_LEN;
+	else
+		hdrs += IPV4_HDR_LEN;
+
+	if (vlan_en)
+		hdrs += VLAN_LEN;
+
+	mss = pmtu - hdrs;
+
+	if (tcp_ts_en)
+		mss -= TCP_OPTION_LEN;
+
+	if (!mss)
+		mss = DEF_MSS;
+
+	return mss;
+}
+
+static int qedi_iscsi_offload_conn(struct qedi_endpoint *qedi_ep)
+{
+	struct qedi_ctx *qedi = qedi_ep->qedi;
+	struct qed_iscsi_params_offload *conn_info;
+	int rval;
+	int i;
+
+	conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
+	if (!conn_info) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to allocate memory ep=%p\n", qedi_ep);
+		return -ENOMEM;
+	}
+
+	ether_addr_copy(conn_info->src.mac, qedi_ep->src_mac);
+	ether_addr_copy(conn_info->dst.mac, qedi_ep->dst_mac);
+
+	conn_info->src.ip[0] = ntohl(qedi_ep->src_addr[0]);
+	conn_info->dst.ip[0] = ntohl(qedi_ep->dst_addr[0]);
+
+	if (qedi_ep->ip_type == TCP_IPV4) {
+		conn_info->ip_version = 0;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "After ntohl: src_addr=%pI4, dst_addr=%pI4\n",
+			  qedi_ep->src_addr, qedi_ep->dst_addr);
+	} else {
+		for (i = 1; i < 4; i++) {
+			conn_info->src.ip[i] = ntohl(qedi_ep->src_addr[i]);
+			conn_info->dst.ip[i] = ntohl(qedi_ep->dst_addr[i]);
+		}
+
+		conn_info->ip_version = 1;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "After ntohl: src_addr=%pI6, dst_addr=%pI6\n",
+			  qedi_ep->src_addr, qedi_ep->dst_addr);
+	}
+
+	conn_info->src.port = qedi_ep->src_port;
+	conn_info->dst.port = qedi_ep->dst_port;
+
+	conn_info->layer_code = ISCSI_SLOW_PATH_LAYER_CODE;
+	conn_info->sq_pbl_addr = qedi_ep->sq_pbl_dma;
+	conn_info->vlan_id = qedi_ep->vlan_id;
+
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_TS_EN, 1);
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_DA_EN, 1);
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_DA_CNT_EN, 1);
+	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_KA_EN, 1);
+
+	conn_info->default_cq = (qedi_ep->fw_cid % 8);
+
+	conn_info->ka_max_probe_cnt = DEF_KA_MAX_PROBE_COUNT;
+	conn_info->dup_ack_theshold = 3;
+	conn_info->rcv_wnd = 65535;
+	conn_info->cwnd = DEF_MAX_CWND;
+
+	conn_info->ss_thresh = 65535;
+	conn_info->srtt = 300;
+	conn_info->rtt_var = 150;
+	conn_info->flow_label = 0;
+	conn_info->ka_timeout = DEF_KA_TIMEOUT;
+	conn_info->ka_interval = DEF_KA_INTERVAL;
+	conn_info->max_rt_time = DEF_MAX_RT_TIME;
+	conn_info->ttl = DEF_TTL;
+	conn_info->tos_or_tc = DEF_TOS;
+	conn_info->remote_port = qedi_ep->dst_port;
+	conn_info->local_port = qedi_ep->src_port;
+
+	conn_info->mss = qedi_calc_mss(qedi_ep->pmtu,
+				       (qedi_ep->ip_type == TCP_IPV6),
+				       1, (qedi_ep->vlan_id != 0));
+
+	conn_info->rcv_wnd_scale = 4;
+	conn_info->ts_ticks_per_second = 1000;
+	conn_info->da_timeout_value = 200;
+	conn_info->ack_frequency = 2;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Default cq index [%d], mss [%d]\n",
+		  conn_info->default_cq, conn_info->mss);
+
+	rval = qedi_ops->offload_conn(qedi->cdev, qedi_ep->handle, conn_info);
+	if (rval)
+		QEDI_ERR(&qedi->dbg_ctx, "offload_conn returned %d, ep=%p\n",
+			 rval, qedi_ep);
+
+	kfree(conn_info);
+	return rval;
+}
+
+static int qedi_conn_start(struct iscsi_cls_conn *cls_conn)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_ctx *qedi;
+	int rval;
+
+	qedi = qedi_conn->qedi;
+
+	rval = qedi_iscsi_update_conn(qedi, qedi_conn);
+	if (rval) {
+		iscsi_conn_printk(KERN_ALERT, conn,
+				  "conn_start: FW offload conn failed.\n");
+		rval = -EINVAL;
+		goto start_err;
+	}
+
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+	qedi_conn->abrt_conn = 0;
+
+	rval = iscsi_conn_start(cls_conn);
+	if (rval)
+		iscsi_conn_printk(KERN_ALERT, conn,
+				  "conn_start: iscsi_conn_start failed.\n");
+
+start_err:
+	return rval;
+}
+
+static void qedi_conn_destroy(struct iscsi_cls_conn *cls_conn)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct Scsi_Host *shost;
+	struct qedi_ctx *qedi;
+
+	shost = iscsi_session_to_shost(iscsi_conn_to_session(cls_conn));
+	qedi = iscsi_host_priv(shost);
+
+	qedi_conn_free_login_resources(qedi, qedi_conn);
+	iscsi_conn_teardown(cls_conn);
+}
+
+static int qedi_ep_get_param(struct iscsi_endpoint *ep,
+			     enum iscsi_param param, char *buf)
+{
+	struct qedi_endpoint *qedi_ep = ep->dd_data;
+	int len;
+
+	if (!qedi_ep)
+		return -ENOTCONN;
+
+	switch (param) {
+	case ISCSI_PARAM_CONN_PORT:
+		len = sprintf(buf, "%hu\n", qedi_ep->dst_port);
+		break;
+	case ISCSI_PARAM_CONN_ADDRESS:
+		if (qedi_ep->ip_type == TCP_IPV4)
+			len = sprintf(buf, "%pI4\n", qedi_ep->dst_addr);
+		else
+			len = sprintf(buf, "%pI6\n", qedi_ep->dst_addr);
+		break;
+	default:
+		return -ENOTCONN;
+	}
+
+	return len;
+}
+
+static int qedi_host_get_param(struct Scsi_Host *shost,
+			       enum iscsi_host_param param, char *buf)
+{
+	struct qedi_ctx *qedi;
+	int len;
+
+	qedi = iscsi_host_priv(shost);
+
+	switch (param) {
+	case ISCSI_HOST_PARAM_HWADDRESS:
+		len = sysfs_format_mac(buf, qedi->mac, ETH_ALEN);
+		break;
+	case ISCSI_HOST_PARAM_NETDEV_NAME:
+		len = sprintf(buf, "host%d\n", shost->host_no);
+		break;
+	case ISCSI_HOST_PARAM_IPADDRESS:
+		if (qedi->ip_type == TCP_IPV4)
+			len = sprintf(buf, "%pI4\n", qedi->src_ip);
+		else
+			len = sprintf(buf, "%pI6\n", qedi->src_ip);
+		break;
+	default:
+		return iscsi_host_get_param(shost, param, buf);
+	}
+
+	return len;
+}
+
+static void qedi_conn_get_stats(struct iscsi_cls_conn *cls_conn,
+				struct iscsi_stats *stats)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct qed_iscsi_stats iscsi_stats;
+	struct Scsi_Host *shost;
+	struct qedi_ctx *qedi;
+
+	shost = iscsi_session_to_shost(iscsi_conn_to_session(cls_conn));
+	qedi = iscsi_host_priv(shost);
+	qedi_ops->get_stats(qedi->cdev, &iscsi_stats);
+
+	conn->txdata_octets = iscsi_stats.iscsi_tx_bytes_cnt;
+	conn->rxdata_octets = iscsi_stats.iscsi_rx_bytes_cnt;
+	conn->dataout_pdus_cnt = (uint32_t)iscsi_stats.iscsi_tx_data_pdu_cnt;
+	conn->datain_pdus_cnt = (uint32_t)iscsi_stats.iscsi_rx_data_pdu_cnt;
+	conn->r2t_pdus_cnt = (uint32_t)iscsi_stats.iscsi_rx_r2t_pdu_cnt;
+
+	stats->txdata_octets = conn->txdata_octets;
+	stats->rxdata_octets = conn->rxdata_octets;
+	stats->scsicmd_pdus = conn->scsicmd_pdus_cnt;
+	stats->dataout_pdus = conn->dataout_pdus_cnt;
+	stats->scsirsp_pdus = conn->scsirsp_pdus_cnt;
+	stats->datain_pdus = conn->datain_pdus_cnt;
+	stats->r2t_pdus = conn->r2t_pdus_cnt;
+	stats->tmfcmd_pdus = conn->tmfcmd_pdus_cnt;
+	stats->tmfrsp_pdus = conn->tmfrsp_pdus_cnt;
+	stats->digest_err = 0;
+	stats->timeout_err = 0;
+	strcpy(stats->custom[0].desc, "eh_abort_cnt");
+	stats->custom[0].value = conn->eh_abort_cnt;
+	stats->custom_length = 1;
+}
+
+static void qedi_iscsi_prep_generic_pdu_bd(struct qedi_conn *qedi_conn)
+{
+	struct iscsi_sge *bd_tbl;
+
+	bd_tbl = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+
+	bd_tbl->sge_addr.hi =
+		(u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
+	bd_tbl->sge_addr.lo = (u32)qedi_conn->gen_pdu.req_dma_addr;
+	bd_tbl->sge_len = qedi_conn->gen_pdu.req_wr_ptr -
+				qedi_conn->gen_pdu.req_buf;
+	bd_tbl->reserved0 = 0;
+	bd_tbl = (struct iscsi_sge  *)qedi_conn->gen_pdu.resp_bd_tbl;
+	bd_tbl->sge_addr.hi =
+			(u32)((u64)qedi_conn->gen_pdu.resp_dma_addr >> 32);
+	bd_tbl->sge_addr.lo = (u32)qedi_conn->gen_pdu.resp_dma_addr;
+	bd_tbl->sge_len = ISCSI_DEF_MAX_RECV_SEG_LEN;
+	bd_tbl->reserved0 = 0;
+}
+
+static int qedi_iscsi_send_generic_request(struct iscsi_task *task)
+{
+	struct qedi_cmd *cmd = task->dd_data;
+	struct qedi_conn *qedi_conn = cmd->conn;
+	char *buf;
+	int data_len;
+	int rc = 0;
+
+	qedi_iscsi_prep_generic_pdu_bd(qedi_conn);
+	switch (task->hdr->opcode & ISCSI_OPCODE_MASK) {
+	case ISCSI_OP_LOGIN:
+		rc = qedi_send_iscsi_login(qedi_conn, task);
+		break;
+	case ISCSI_OP_NOOP_OUT:
+		data_len = qedi_conn->gen_pdu.req_buf_size;
+		buf = qedi_conn->gen_pdu.req_buf;
+		if (data_len)
+			rc = qedi_send_iscsi_nopout(qedi_conn, task,
+						    buf, data_len, 1);
+		else
+			rc = qedi_send_iscsi_nopout(qedi_conn, task,
+						    NULL, 0, 1);
+		break;
+	case ISCSI_OP_LOGOUT:
+		rc = qedi_send_iscsi_logout(qedi_conn, task);
+		break;
+	case ISCSI_OP_TEXT:
+		rc = qedi_send_iscsi_text(qedi_conn, task);
+		break;
+	default:
+		iscsi_conn_printk(KERN_ALERT, qedi_conn->cls_conn->dd_data,
+				  "unsupported op 0x%x\n", task->hdr->opcode);
+	}
+
+	return rc;
+}
+
+static int qedi_mtask_xmit(struct iscsi_conn *conn, struct iscsi_task *task)
+{
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+
+	memset(qedi_conn->gen_pdu.req_buf, 0, ISCSI_DEF_MAX_RECV_SEG_LEN);
+
+	qedi_conn->gen_pdu.req_buf_size = task->data_count;
+
+	if (task->data_count) {
+		memcpy(qedi_conn->gen_pdu.req_buf, task->data,
+		       task->data_count);
+		qedi_conn->gen_pdu.req_wr_ptr =
+			qedi_conn->gen_pdu.req_buf + task->data_count;
+	}
+
+	cmd->conn = qedi_conn;
+	cmd->scsi_cmd = NULL;
+	return qedi_iscsi_send_generic_request(task);
+}
+
+static int qedi_task_xmit(struct iscsi_task *task)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+	struct scsi_cmnd *sc = task->sc;
+
+	cmd->state = 0;
+	cmd->task = NULL;
+	cmd->use_slowpath = false;
+	cmd->conn = qedi_conn;
+	cmd->task = task;
+	cmd->io_cmd_in_list = false;
+	INIT_LIST_HEAD(&cmd->io_cmd);
+
+	if (!sc)
+		return qedi_mtask_xmit(conn, task);
+
+	cmd->scsi_cmd = sc;
+	return qedi_iscsi_send_ioreq(task);
+}
+
+static struct iscsi_endpoint *
+qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+		int non_blocking)
+{
+	struct qedi_ctx *qedi;
+	struct iscsi_endpoint *ep;
+	struct qedi_endpoint *qedi_ep;
+	struct sockaddr_in *addr;
+	struct sockaddr_in6 *addr6;
+	struct qed_dev *cdev = NULL;
+	struct qedi_uio_dev *udev = NULL;
+	struct iscsi_path path_req;
+	u32 msg_type = ISCSI_KEVENT_IF_DOWN;
+	u32 iscsi_cid = QEDI_CID_RESERVED;
+	u16 len = 0;
+	char *buf = NULL;
+	int ret;
+
+	if (!shost) {
+		ret = -ENXIO;
+		QEDI_ERR(NULL, "shost is NULL\n");
+		return ERR_PTR(ret);
+	}
+
+	if (do_not_recover) {
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+
+	qedi = iscsi_host_priv(shost);
+	cdev = qedi->cdev;
+	udev = qedi->udev;
+
+	if (test_bit(QEDI_IN_OFFLINE, &qedi->flags) ||
+	    test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+
+	ep = iscsi_create_endpoint(sizeof(struct qedi_endpoint));
+	if (!ep) {
+		QEDI_ERR(&qedi->dbg_ctx, "endpoint create fail\n");
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+	qedi_ep = ep->dd_data;
+	memset(qedi_ep, 0, sizeof(struct qedi_endpoint));
+	qedi_ep->state = EP_STATE_IDLE;
+	qedi_ep->iscsi_cid = (u32)-1;
+	qedi_ep->qedi = qedi;
+
+	if (dst_addr->sa_family == AF_INET) {
+		addr = (struct sockaddr_in *)dst_addr;
+		memcpy(qedi_ep->dst_addr, &addr->sin_addr.s_addr,
+		       sizeof(struct in_addr));
+		qedi_ep->dst_port = ntohs(addr->sin_port);
+		qedi_ep->ip_type = TCP_IPV4;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "dst_addr=%pI4, dst_port=%u\n",
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	} else if (dst_addr->sa_family == AF_INET6) {
+		addr6 = (struct sockaddr_in6 *)dst_addr;
+		memcpy(qedi_ep->dst_addr, &addr6->sin6_addr,
+		       sizeof(struct in6_addr));
+		qedi_ep->dst_port = ntohs(addr6->sin6_port);
+		qedi_ep->ip_type = TCP_IPV6;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "dst_addr=%pI6, dst_port=%u\n",
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	} else {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid endpoint\n");
+		ret = -EAFNOSUPPORT;
+		goto ep_conn_exit;
+	}
+
+	if (atomic_read(&qedi->link_state) != QEDI_LINK_UP) {
+		QEDI_WARN(&qedi->dbg_ctx, "qedi link down\n");
+		ret = -ENXIO;
+		goto ep_conn_exit;
+	}
+
+	ret = qedi_alloc_sq(qedi, qedi_ep);
+	if (ret)
+		goto ep_conn_exit;
+
+	ret = qedi_ops->acquire_conn(qedi->cdev, &qedi_ep->handle,
+				     &qedi_ep->fw_cid, &qedi_ep->p_doorbell);
+
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx, "Could not acquire connection\n");
+		ret = -ENXIO;
+		goto ep_free_sq;
+	}
+
+	iscsi_cid = qedi_ep->handle;
+	qedi_ep->iscsi_cid = iscsi_cid;
+
+	init_waitqueue_head(&qedi_ep->ofld_wait);
+	init_waitqueue_head(&qedi_ep->tcp_ofld_wait);
+	qedi_ep->state = EP_STATE_OFLDCONN_START;
+	qedi->ep_tbl[iscsi_cid] = qedi_ep;
+
+	buf = (char *)&path_req;
+	len = sizeof(path_req);
+	memset(&path_req, 0, len);
+
+	msg_type = ISCSI_KEVENT_PATH_REQ;
+	path_req.handle = (u64)qedi_ep->iscsi_cid;
+	path_req.pmtu = qedi->ll2_mtu;
+	qedi_ep->pmtu = qedi->ll2_mtu;
+	if (qedi_ep->ip_type == TCP_IPV4) {
+		memcpy(&path_req.dst.v4_addr, &qedi_ep->dst_addr,
+		       sizeof(struct in_addr));
+		path_req.ip_addr_len = 4;
+	} else {
+		memcpy(&path_req.dst.v6_addr, &qedi_ep->dst_addr,
+		       sizeof(struct in6_addr));
+		path_req.ip_addr_len = 16;
+	}
+
+	ret = iscsi_offload_mesg(shost, &qedi_iscsi_transport, msg_type, buf,
+				 len);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "iscsi_offload_mesg() failed for cid=0x%x ret=%d\n",
+			 iscsi_cid, ret);
+		goto ep_rel_conn;
+	}
+
+	atomic_inc(&qedi->num_offloads);
+	return ep;
+
+ep_rel_conn:
+	qedi->ep_tbl[iscsi_cid] = NULL;
+	ret = qedi_ops->release_conn(qedi->cdev, qedi_ep->handle);
+	if (ret)
+		QEDI_WARN(&qedi->dbg_ctx, "release_conn returned %d\n",
+			  ret);
+ep_free_sq:
+	qedi_free_sq(qedi, qedi_ep);
+ep_conn_exit:
+	iscsi_destroy_endpoint(ep);
+	return ERR_PTR(ret);
+}
+
+static int qedi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
+{
+	struct qedi_endpoint *qedi_ep;
+	int ret = 0;
+
+	if (do_not_recover)
+		return 1;
+
+	qedi_ep = ep->dd_data;
+	if (qedi_ep->state == EP_STATE_IDLE ||
+	    qedi_ep->state == EP_STATE_OFLDCONN_FAILED)
+		return -1;
+
+	if (qedi_ep->state == EP_STATE_OFLDCONN_COMPL)
+		return 1;
+
+	ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
+					       ((qedi_ep->state ==
+						EP_STATE_OFLDCONN_FAILED) ||
+						(qedi_ep->state ==
+						EP_STATE_OFLDCONN_COMPL)),
+						msecs_to_jiffies(timeout_ms));
+
+	if (qedi_ep->state == EP_STATE_OFLDCONN_FAILED)
+		ret = -1;
+
+	if (ret > 0)
+		return 1;
+	else if (!ret)
+		return 0;
+	else
+		return ret;
+}
+
+static void qedi_cleanup_active_cmd_list(struct qedi_conn *qedi_conn)
+{
+	struct qedi_cmd *cmd, *cmd_tmp;
+
+	list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
+				 io_cmd) {
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	}
+}
+
+static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+{
+	struct qedi_endpoint *qedi_ep;
+	struct qedi_conn *qedi_conn = NULL;
+	struct iscsi_conn *conn = NULL;
+	struct qedi_ctx *qedi;
+	int ret = 0;
+	int wait_delay = 20 * HZ;
+	int abrt_conn = 0;
+	int count = 10;
+
+	qedi_ep = ep->dd_data;
+	qedi = qedi_ep->qedi;
+
+	flush_work(&qedi_ep->offload_work);
+
+	if (qedi_ep->conn) {
+		qedi_conn = qedi_ep->conn;
+		conn = qedi_conn->cls_conn->dd_data;
+		iscsi_suspend_queue(conn);
+		abrt_conn = qedi_conn->abrt_conn;
+
+		while (count--)	{
+			if (!test_bit(QEDI_CONN_FW_CLEANUP,
+				      &qedi_conn->flags)) {
+				break;
+			}
+			msleep(1000);
+		}
+
+		if (test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
+			if (do_not_recover) {
+				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+					  "Do not recover cid=0x%x\n",
+					  qedi_ep->iscsi_cid);
+				goto ep_exit_recover;
+			}
+			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+				  "Reset recovery cid=0x%x, qedi_ep=%p, state=0x%x\n",
+				  qedi_ep->iscsi_cid, qedi_ep, qedi_ep->state);
+			qedi_cleanup_active_cmd_list(qedi_conn);
+			goto ep_release_conn;
+		}
+	}
+
+	if (do_not_recover)
+		goto ep_exit_recover;
+
+	switch (qedi_ep->state) {
+	case EP_STATE_OFLDCONN_START:
+		goto ep_release_conn;
+	case EP_STATE_OFLDCONN_FAILED:
+		break;
+	case EP_STATE_OFLDCONN_COMPL:
+		if (unlikely(!qedi_conn))
+			break;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Active cmd count=%d, abrt_conn=%d, ep state=0x%x, cid=0x%x, qedi_conn=%p\n",
+			  qedi_conn->active_cmd_count, abrt_conn,
+			  qedi_ep->state,
+			  qedi_ep->iscsi_cid,
+			  qedi_ep->conn
+			  );
+
+		abrt_conn = !!qedi_conn->active_cmd_count;
+
+		if (abrt_conn)
+			qedi_clearsq(qedi, qedi_conn, NULL);
+		break;
+	default:
+		break;
+	}
+
+	qedi_ep->state = EP_STATE_DISCONN_START;
+	ret = qedi_ops->destroy_conn(qedi->cdev, qedi_ep->handle, abrt_conn);
+	if (ret) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "destroy_conn failed returned %d\n", ret);
+	} else {
+		ret = wait_event_interruptible_timeout(
+					qedi_ep->tcp_ofld_wait,
+					(qedi_ep->state !=
+					 EP_STATE_DISCONN_START),
+					wait_delay);
+		if ((ret <= 0) || (qedi_ep->state == EP_STATE_DISCONN_START)) {
+			QEDI_WARN(&qedi->dbg_ctx,
+				  "Destroy conn timedout or interrupted, ret=%d, delay=%d, cid=0x%x\n",
+				  ret, wait_delay, qedi_ep->iscsi_cid);
+		}
+	}
+
+ep_release_conn:
+	ret = qedi_ops->release_conn(qedi->cdev, qedi_ep->handle);
+	if (ret)
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "release_conn returned %d, cid=0x%x\n",
+			  ret, qedi_ep->iscsi_cid);
+ep_exit_recover:
+	qedi_ep->state = EP_STATE_IDLE;
+	qedi->ep_tbl[qedi_ep->iscsi_cid] = NULL;
+	qedi->cid_que.conn_cid_tbl[qedi_ep->iscsi_cid] = NULL;
+	qedi_free_id(&qedi->lcl_port_tbl, qedi_ep->src_port);
+	qedi_free_sq(qedi, qedi_ep);
+
+	if (qedi_conn)
+		qedi_conn->ep = NULL;
+
+	qedi_ep->conn = NULL;
+	qedi_ep->qedi = NULL;
+	atomic_dec(&qedi->num_offloads);
+
+	iscsi_destroy_endpoint(ep);
+}
+
+static int qedi_data_avail(struct qedi_ctx *qedi, u16 vlanid)
+{
+	struct qed_dev *cdev = qedi->cdev;
+	struct qedi_uio_dev *udev;
+	struct qedi_uio_ctrl *uctrl;
+	struct sk_buff *skb;
+	u32 len;
+	int rc = 0;
+
+	udev = qedi->udev;
+	if (!udev) {
+		QEDI_ERR(&qedi->dbg_ctx, "udev is NULL.\n");
+		return -EINVAL;
+	}
+
+	uctrl = (struct qedi_uio_ctrl *)udev->uctrl;
+	if (!uctrl) {
+		QEDI_ERR(&qedi->dbg_ctx, "uctrl is NULL.\n");
+		return -EINVAL;
+	}
+
+	len = uctrl->host_tx_pkt_len;
+	if (!len) {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid len %u\n", len);
+		return -EINVAL;
+	}
+
+	skb = alloc_skb(len, GFP_ATOMIC);
+	if (!skb) {
+		QEDI_ERR(&qedi->dbg_ctx, "alloc_skb failed\n");
+		return -EINVAL;
+	}
+
+	skb_put(skb, len);
+	memcpy(skb->data, udev->tx_pkt, len);
+	skb->ip_summed = CHECKSUM_NONE;
+
+	if (vlanid)
+		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlanid);
+
+	rc = qedi_ops->ll2->start_xmit(cdev, skb);
+	if (rc) {
+		QEDI_ERR(&qedi->dbg_ctx, "ll2 start_xmit returned %d\n",
+			 rc);
+		kfree_skb(skb);
+	}
+
+	uctrl->host_tx_pkt_len = 0;
+	uctrl->hw_tx_cons++;
+
+	return rc;
+}
+
+static void qedi_offload_work(struct work_struct *work)
+{
+	struct qedi_endpoint *qedi_ep =
+		container_of(work, struct qedi_endpoint, offload_work);
+	struct qedi_ctx *qedi;
+	int wait_delay = 20 * HZ;
+	int ret;
+
+	qedi = qedi_ep->qedi;
+
+	ret = qedi_iscsi_offload_conn(qedi_ep);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
+			 qedi_ep->iscsi_cid, qedi_ep, ret);
+		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+		return;
+	}
+
+	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
+					       (qedi_ep->state ==
+					       EP_STATE_OFLDCONN_COMPL),
+					       wait_delay);
+	if ((ret <= 0) || (qedi_ep->state != EP_STATE_OFLDCONN_COMPL)) {
+		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
+			 qedi_ep->iscsi_cid, qedi_ep);
+	}
+}
+
+static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+{
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *qedi_ep;
+	int ret = 0;
+	u32 iscsi_cid;
+	u16 port_id = 0;
+
+	if (!shost) {
+		ret = -ENXIO;
+		QEDI_ERR(NULL, "shost is NULL\n");
+		return ret;
+	}
+
+	if (strcmp(shost->hostt->proc_name, "qedi")) {
+		ret = -ENXIO;
+		QEDI_ERR(NULL, "shost %s is invalid\n",
+			 shost->hostt->proc_name);
+		return ret;
+	}
+
+	qedi = iscsi_host_priv(shost);
+	if (path_data->handle == QEDI_PATH_HANDLE) {
+		ret = qedi_data_avail(qedi, path_data->vlan_id);
+		goto set_path_exit;
+	}
+
+	iscsi_cid = (u32)path_data->handle;
+	qedi_ep = qedi->ep_tbl[iscsi_cid];
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "iscsi_cid=0x%x, qedi_ep=%p\n", iscsi_cid, qedi_ep);
+
+	if (!is_valid_ether_addr(&path_data->mac_addr[0])) {
+		QEDI_NOTICE(&qedi->dbg_ctx, "dst mac NOT VALID\n");
+		ret = -EIO;
+		goto set_path_exit;
+	}
+
+	ether_addr_copy(&qedi_ep->src_mac[0], &qedi->mac[0]);
+	ether_addr_copy(&qedi_ep->dst_mac[0], &path_data->mac_addr[0]);
+
+	qedi_ep->vlan_id = path_data->vlan_id;
+	if (path_data->pmtu < DEF_PATH_MTU) {
+		qedi_ep->pmtu = qedi->ll2_mtu;
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "MTU cannot be %u, using default MTU %u\n",
+			   path_data->pmtu, qedi_ep->pmtu);
+	}
+
+	if (path_data->pmtu != qedi->ll2_mtu) {
+		if (path_data->pmtu > JUMBO_MTU) {
+			ret = -EINVAL;
+			QEDI_ERR(NULL, "Invalid MTU %u\n", path_data->pmtu);
+			goto set_path_exit;
+		}
+
+		qedi_reset_host_mtu(qedi, path_data->pmtu);
+		qedi_ep->pmtu = qedi->ll2_mtu;
+	}
+
+	port_id = qedi_ep->src_port;
+	if (port_id >= QEDI_LOCAL_PORT_MIN &&
+	    port_id < QEDI_LOCAL_PORT_MAX) {
+		if (qedi_alloc_id(&qedi->lcl_port_tbl, port_id))
+			port_id = 0;
+	} else {
+		port_id = 0;
+	}
+
+	if (!port_id) {
+		port_id = qedi_alloc_new_id(&qedi->lcl_port_tbl);
+		if (port_id == QEDI_LOCAL_PORT_INVALID) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Failed to allocate port id for iscsi_cid=0x%x\n",
+				 iscsi_cid);
+			ret = -ENOMEM;
+			goto set_path_exit;
+		}
+	}
+
+	qedi_ep->src_port = port_id;
+
+	if (qedi_ep->ip_type == TCP_IPV4) {
+		memcpy(&qedi_ep->src_addr[0], &path_data->src.v4_addr,
+		       sizeof(struct in_addr));
+		memcpy(&qedi->src_ip[0], &path_data->src.v4_addr,
+		       sizeof(struct in_addr));
+		qedi->ip_type = TCP_IPV4;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "src addr:port=%pI4:%u, dst addr:port=%pI4:%u\n",
+			  qedi_ep->src_addr, qedi_ep->src_port,
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	} else {
+		memcpy(&qedi_ep->src_addr[0], &path_data->src.v6_addr,
+		       sizeof(struct in6_addr));
+		memcpy(&qedi->src_ip[0], &path_data->src.v6_addr,
+		       sizeof(struct in6_addr));
+		qedi->ip_type = TCP_IPV6;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+			  "src addr:port=%pI6:%u, dst addr:port=%pI6:%u\n",
+			  qedi_ep->src_addr, qedi_ep->src_port,
+			  qedi_ep->dst_addr, qedi_ep->dst_port);
+	}
+
+	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
+	queue_work(qedi->offload_thread, &qedi_ep->offload_work);
+
+	ret = 0;
+
+set_path_exit:
+	return ret;
+}
+
+static umode_t qedi_attr_is_visible(int param_type, int param)
+{
+	switch (param_type) {
+	case ISCSI_HOST_PARAM:
+		switch (param) {
+		case ISCSI_HOST_PARAM_NETDEV_NAME:
+		case ISCSI_HOST_PARAM_HWADDRESS:
+		case ISCSI_HOST_PARAM_IPADDRESS:
+			return S_IRUGO;
+		default:
+			return 0;
+		}
+	case ISCSI_PARAM:
+		switch (param) {
+		case ISCSI_PARAM_MAX_RECV_DLENGTH:
+		case ISCSI_PARAM_MAX_XMIT_DLENGTH:
+		case ISCSI_PARAM_HDRDGST_EN:
+		case ISCSI_PARAM_DATADGST_EN:
+		case ISCSI_PARAM_CONN_ADDRESS:
+		case ISCSI_PARAM_CONN_PORT:
+		case ISCSI_PARAM_EXP_STATSN:
+		case ISCSI_PARAM_PERSISTENT_ADDRESS:
+		case ISCSI_PARAM_PERSISTENT_PORT:
+		case ISCSI_PARAM_PING_TMO:
+		case ISCSI_PARAM_RECV_TMO:
+		case ISCSI_PARAM_INITIAL_R2T_EN:
+		case ISCSI_PARAM_MAX_R2T:
+		case ISCSI_PARAM_IMM_DATA_EN:
+		case ISCSI_PARAM_FIRST_BURST:
+		case ISCSI_PARAM_MAX_BURST:
+		case ISCSI_PARAM_PDU_INORDER_EN:
+		case ISCSI_PARAM_DATASEQ_INORDER_EN:
+		case ISCSI_PARAM_ERL:
+		case ISCSI_PARAM_TARGET_NAME:
+		case ISCSI_PARAM_TPGT:
+		case ISCSI_PARAM_USERNAME:
+		case ISCSI_PARAM_PASSWORD:
+		case ISCSI_PARAM_USERNAME_IN:
+		case ISCSI_PARAM_PASSWORD_IN:
+		case ISCSI_PARAM_FAST_ABORT:
+		case ISCSI_PARAM_ABORT_TMO:
+		case ISCSI_PARAM_LU_RESET_TMO:
+		case ISCSI_PARAM_TGT_RESET_TMO:
+		case ISCSI_PARAM_IFACE_NAME:
+		case ISCSI_PARAM_INITIATOR_NAME:
+		case ISCSI_PARAM_BOOT_ROOT:
+		case ISCSI_PARAM_BOOT_NIC:
+		case ISCSI_PARAM_BOOT_TARGET:
+			return S_IRUGO;
+		default:
+			return 0;
+		}
+	}
+
+	return 0;
+}
+
+static void qedi_cleanup_task(struct iscsi_task *task)
+{
+	if (!task->sc || task->state == ISCSI_TASK_PENDING) {
+		QEDI_INFO(NULL, QEDI_LOG_IO, "Returning ref_cnt=%d\n",
+			  atomic_read(&task->refcount));
+		return;
+	}
+
+	qedi_iscsi_unmap_sg_list(task->dd_data);
+}
+
+struct iscsi_transport qedi_iscsi_transport = {
+	.owner = THIS_MODULE,
+	.name = QEDI_MODULE_NAME,
+	.caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_MULTI_R2T | CAP_DATADGST |
+		CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO,
+	.create_session = qedi_session_create,
+	.destroy_session = qedi_session_destroy,
+	.create_conn = qedi_conn_create,
+	.bind_conn = qedi_conn_bind,
+	.start_conn = qedi_conn_start,
+	.stop_conn = iscsi_conn_stop,
+	.destroy_conn = qedi_conn_destroy,
+	.set_param = iscsi_set_param,
+	.get_ep_param = qedi_ep_get_param,
+	.get_conn_param = iscsi_conn_get_param,
+	.get_session_param = iscsi_session_get_param,
+	.get_host_param = qedi_host_get_param,
+	.send_pdu = iscsi_conn_send_pdu,
+	.get_stats = qedi_conn_get_stats,
+	.xmit_task = qedi_task_xmit,
+	.cleanup_task = qedi_cleanup_task,
+	.session_recovery_timedout = iscsi_session_recovery_timedout,
+	.ep_connect = qedi_ep_connect,
+	.ep_poll = qedi_ep_poll,
+	.ep_disconnect = qedi_ep_disconnect,
+	.set_path = qedi_set_path,
+	.attr_is_visible = qedi_attr_is_visible,
+};
+
+void qedi_start_conn_recovery(struct qedi_ctx *qedi,
+			      struct qedi_conn *qedi_conn)
+{
+	struct iscsi_cls_session *cls_sess;
+	struct iscsi_cls_conn *cls_conn;
+	struct iscsi_conn *conn;
+
+	cls_conn = qedi_conn->cls_conn;
+	conn = cls_conn->dd_data;
+	cls_sess = iscsi_conn_to_session(cls_conn);
+
+	if (iscsi_is_session_online(cls_sess)) {
+		qedi_conn->abrt_conn = 1;
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failing connection, state=0x%x, cid=0x%x\n",
+			 conn->session->state, qedi_conn->iscsi_conn_id);
+		iscsi_conn_failure(qedi_conn->cls_conn->dd_data,
+				   ISCSI_ERR_CONN_FAILED);
+	}
+}
+
+void qedi_process_iscsi_error(struct qedi_endpoint *ep, struct async_data *data)
+{
+	struct qedi_conn *qedi_conn;
+	struct qedi_ctx *qedi;
+	char warn_notice[] = "iscsi_warning";
+	char error_notice[] = "iscsi_error";
+	char *message;
+	int need_recovery = 0;
+	u32 err_mask = 0;
+	char msg[64];
+
+	if (!ep)
+		return;
+
+	qedi_conn = ep->conn;
+	if (!qedi_conn)
+		return;
+
+	qedi = ep->qedi;
+
+	QEDI_ERR(&qedi->dbg_ctx, "async event iscsi error:0x%x\n",
+		 data->error_code);
+
+	if (err_mask) {
+		need_recovery = 0;
+		message = warn_notice;
+	} else {
+		need_recovery = 1;
+		message = error_notice;
+	}
+
+	switch (data->error_code) {
+	case ISCSI_STATUS_NONE:
+		strcpy(msg, "tcp_error none");
+		break;
+	case ISCSI_CONN_ERROR_TASK_CID_MISMATCH:
+		strcpy(msg, "task cid mismatch");
+		break;
+	case ISCSI_CONN_ERROR_TASK_NOT_VALID:
+		strcpy(msg, "invalid task");
+		break;
+	case ISCSI_CONN_ERROR_RQ_RING_IS_FULL:
+		strcpy(msg, "rq ring full");
+		break;
+	case ISCSI_CONN_ERROR_CMDQ_RING_IS_FULL:
+		strcpy(msg, "cmdq ring full");
+		break;
+	case ISCSI_CONN_ERROR_HQE_CACHING_FAILED:
+		strcpy(msg, "sge caching failed");
+		break;
+	case ISCSI_CONN_ERROR_HEADER_DIGEST_ERROR:
+		strcpy(msg, "hdr digest error");
+		break;
+	case ISCSI_CONN_ERROR_LOCAL_COMPLETION_ERROR:
+		strcpy(msg, "local cmpl error");
+		break;
+	case ISCSI_CONN_ERROR_DATA_OVERRUN:
+		strcpy(msg, "data overrun");
+		break;
+	case ISCSI_CONN_ERROR_OUT_OF_SGES_ERROR:
+		strcpy(msg, "out of sge error");
+		break;
+	case ISCSI_CONN_ERROR_TCP_SEG_PROC_IP_OPTIONS_ERROR:
+		strcpy(msg, "tcp seg ip options error");
+		break;
+	case ISCSI_CONN_ERROR_TCP_IP_FRAGMENT_ERROR:
+		strcpy(msg, "tcp ip fragment error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_AHS_LEN:
+		strcpy(msg, "AHS len protocol error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_ITT_OUT_OF_RANGE:
+		strcpy(msg, "itt out of range error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_EXCEEDS_PDU_SIZE:
+		strcpy(msg, "data seg more than pdu size");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE:
+		strcpy(msg, "invalid opcode");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE_BEFORE_UPDATE:
+		strcpy(msg, "invalid opcode before update");
+		break;
+	case ISCSI_CONN_ERROR_UNVALID_NOPIN_DSL:
+		strcpy(msg, "invalid nopin dsl");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_CARRIES_NO_DATA:
+		strcpy(msg, "r2t carries no data");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SN:
+		strcpy(msg, "data sn error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_IN_TTT:
+		strcpy(msg, "data TTT error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_TTT:
+		strcpy(msg, "r2t TTT error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_BUFFER_OFFSET:
+		strcpy(msg, "buffer offset error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_BUFFER_OFFSET_OOO:
+		strcpy(msg, "buffer offset ooo");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_SN:
+		strcpy(msg, "r2t sn error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_0:
+		strcpy(msg, "data xer len error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_1:
+		strcpy(msg, "data xer len1 error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_2:
+		strcpy(msg, "data xer len2 error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_LUN:
+		strcpy(msg, "protocol lun error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO:
+		strcpy(msg, "f bit zero error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO_S_BIT_ONE:
+		strcpy(msg, "f bit zero s bit one error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_EXP_STAT_SN:
+		strcpy(msg, "exp stat sn error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DSL_NOT_ZERO:
+		strcpy(msg, "dsl not zero error");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_DSL:
+		strcpy(msg, "invalid dsl");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_TOO_BIG:
+		strcpy(msg, "data seg len too big");
+		break;
+	case ISCSI_CONN_ERROR_PROTOCOL_ERR_OUTSTANDING_R2T_COUNT:
+		strcpy(msg, "outstanding r2t count error");
+		break;
+	case ISCSI_CONN_ERROR_SENSE_DATA_LENGTH:
+		strcpy(msg, "sense datalen error");
+		break;
+	case ISCSI_ERROR_UNKNOWN:
+	default:
+		need_recovery = 0;
+		strcpy(msg, "unknown error");
+		break;
+	}
+	iscsi_conn_printk(KERN_ALERT,
+			  qedi_conn->cls_conn->dd_data,
+			  "qedi: %s - %s\n", message, msg);
+
+	if (need_recovery)
+		qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
+}
+
+void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data)
+{
+	struct qedi_conn *qedi_conn;
+
+	if (!ep)
+		return;
+
+	qedi_conn = ep->conn;
+	if (!qedi_conn)
+		return;
+
+	QEDI_ERR(&ep->qedi->dbg_ctx, "async event TCP error:0x%x\n",
+		 data->error_code);
+
+	qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
+}
diff --git a/drivers/scsi/qedi/qedi_iscsi.h b/drivers/scsi/qedi/qedi_iscsi.h
new file mode 100644
index 0000000..6da1c90
--- /dev/null
+++ b/drivers/scsi/qedi/qedi_iscsi.h
@@ -0,0 +1,228 @@
+/*
+ * QLogic iSCSI Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QEDI_ISCSI_H_
+#define _QEDI_ISCSI_H_
+
+#include <linux/socket.h>
+#include <linux/completion.h>
+#include "qedi.h"
+
+#define ISCSI_MAX_SESS_PER_HBA	4096
+
+#define DEF_KA_TIMEOUT		7200000
+#define DEF_KA_INTERVAL		10000
+#define DEF_KA_MAX_PROBE_COUNT	10
+#define DEF_TOS			0
+#define DEF_TTL			0xfe
+#define DEF_SND_SEQ_SCALE	0
+#define DEF_RCV_BUF		0xffff
+#define DEF_SND_BUF		0xffff
+#define DEF_SEED		0
+#define DEF_MAX_RT_TIME		8000
+#define DEF_MAX_DA_COUNT        2
+#define DEF_SWS_TIMER		1000
+#define DEF_MAX_CWND		2
+#define DEF_PATH_MTU		1500
+#define DEF_MSS			1460
+#define DEF_LL2_MTU		1560
+#define JUMBO_MTU		9000
+
+#define MIN_MTU         576 /* rfc 791 */
+#define IPV4_HDR_LEN    20
+#define IPV6_HDR_LEN    40
+#define TCP_HDR_LEN     20
+#define TCP_OPTION_LEN  12
+#define VLAN_LEN         4
+
+enum {
+	EP_STATE_IDLE                   = 0x0,
+	EP_STATE_ACQRCONN_START         = 0x1,
+	EP_STATE_ACQRCONN_COMPL         = 0x2,
+	EP_STATE_OFLDCONN_START         = 0x4,
+	EP_STATE_OFLDCONN_COMPL         = 0x8,
+	EP_STATE_DISCONN_START          = 0x10,
+	EP_STATE_DISCONN_COMPL          = 0x20,
+	EP_STATE_CLEANUP_START          = 0x40,
+	EP_STATE_CLEANUP_CMPL           = 0x80,
+	EP_STATE_TCP_FIN_RCVD           = 0x100,
+	EP_STATE_TCP_RST_RCVD           = 0x200,
+	EP_STATE_LOGOUT_SENT            = 0x400,
+	EP_STATE_LOGOUT_RESP_RCVD       = 0x800,
+	EP_STATE_CLEANUP_FAILED         = 0x1000,
+	EP_STATE_OFLDCONN_FAILED        = 0x2000,
+	EP_STATE_CONNECT_FAILED         = 0x4000,
+	EP_STATE_DISCONN_TIMEDOUT       = 0x8000,
+};
+
+struct qedi_conn;
+
+struct qedi_endpoint {
+	struct qedi_ctx *qedi;
+	u32 dst_addr[4];
+	u32 src_addr[4];
+	u16 src_port;
+	u16 dst_port;
+	u16 vlan_id;
+	u16 pmtu;
+	u8 src_mac[ETH_ALEN];
+	u8 dst_mac[ETH_ALEN];
+	u8 ip_type;
+	int state;
+	wait_queue_head_t ofld_wait;
+	wait_queue_head_t tcp_ofld_wait;
+	u32 iscsi_cid;
+	/* identifier of the connection from qed */
+	u32 handle;
+	u32 fw_cid;
+	void __iomem *p_doorbell;
+
+	/* Send queue management */
+	struct iscsi_wqe *sq;
+	dma_addr_t sq_dma;
+
+	u16 sq_prod_idx;
+	u16 fw_sq_prod_idx;
+	u16 sq_con_idx;
+	u32 sq_mem_size;
+
+	void *sq_pbl;
+	dma_addr_t sq_pbl_dma;
+	u32 sq_pbl_size;
+	struct qedi_conn *conn;
+	struct work_struct offload_work;
+};
+
+#define QEDI_SQ_WQES_MIN	16
+
+struct qedi_io_bdt {
+	struct iscsi_sge *sge_tbl;
+	dma_addr_t sge_tbl_dma;
+	u16 sge_valid;
+};
+
+/**
+ * struct generic_pdu_resc - login pdu resource structure
+ *
+ * @req_buf:            driver buffer used to stage payload associated with
+ *                      the login request
+ * @req_dma_addr:       dma address for iscsi login request payload buffer
+ * @req_buf_size:       actual login request payload length
+ * @req_wr_ptr:         pointer into login request buffer when next data is
+ *                      to be written
+ * @resp_hdr:           iscsi header where iscsi login response header is to
+ *                      be recreated
+ * @resp_buf:           buffer to stage login response payload
+ * @resp_dma_addr:      login response payload buffer dma address
+ * @resp_buf_size:      login response payload length
+ * @resp_wr_ptr:        pointer into login response buffer when next data is
+ *                      to be written
+ * @req_bd_tbl:         iscsi login request payload BD table
+ * @req_bd_dma:         login request BD table dma address
+ * @resp_bd_tbl:        iscsi login response payload BD table
+ * @resp_bd_dma:        login response BD table dma address
+ *
+ * The following structure defines buffer information for generic PDUs such
+ *      as iSCSI Login, Logout and NOP
+ */
+struct generic_pdu_resc {
+	char *req_buf;
+	dma_addr_t req_dma_addr;
+	u32 req_buf_size;
+	char *req_wr_ptr;
+	struct iscsi_hdr resp_hdr;
+	char *resp_buf;
+	dma_addr_t resp_dma_addr;
+	u32 resp_buf_size;
+	char *resp_wr_ptr;
+	char *req_bd_tbl;
+	dma_addr_t req_bd_dma;
+	char *resp_bd_tbl;
+	dma_addr_t resp_bd_dma;
+};
+
+struct qedi_conn {
+	struct iscsi_cls_conn *cls_conn;
+	struct qedi_ctx *qedi;
+	struct qedi_endpoint *ep;
+	struct list_head active_cmd_list;
+	spinlock_t list_lock;		/* internal conn lock */
+	u32 active_cmd_count;
+	u32 cmd_cleanup_req;
+	u32 cmd_cleanup_cmpl;
+
+	u32 iscsi_conn_id;
+	int itt;
+	int abrt_conn;
+#define QEDI_CID_RESERVED	0x5AFF
+	u32 fw_cid;
+	/*
+	 * Buffer for login negotiation process
+	 */
+	struct generic_pdu_resc gen_pdu;
+
+	struct list_head tmf_work_list;
+	wait_queue_head_t wait_queue;
+	spinlock_t tmf_work_lock;	/* tmf work lock */
+	unsigned long flags;
+#define QEDI_CONN_FW_CLEANUP	1
+};
+
+struct qedi_cmd {
+	struct list_head io_cmd;
+	bool io_cmd_in_list;
+	struct iscsi_hdr hdr;
+	struct qedi_conn *conn;
+	struct scsi_cmnd *scsi_cmd;
+	struct scatterlist *sg;
+	struct qedi_io_bdt io_tbl;
+	struct iscsi_task_context request;
+	unsigned char *sense_buffer;
+	dma_addr_t sense_buffer_dma;
+	u16 task_id;
+
+	/* field populated for tmf work queue */
+	struct iscsi_task *task;
+	struct work_struct tmf_work;
+	int state;
+#define CLEANUP_WAIT	1
+#define CLEANUP_RECV	2
+#define CLEANUP_WAIT_FAILED	3
+#define CLEANUP_NOT_REQUIRED	4
+#define LUN_RESET_RESPONSE_RECEIVED	5
+#define RESPONSE_RECEIVED	6
+
+	int type;
+#define TYPEIO		1
+#define TYPERESET	2
+
+	struct qedi_work_map *list_tmf_work;
+	/* slowpath management */
+	bool use_slowpath;
+
+	struct iscsi_tm_rsp *tmf_resp_buf;
+};
+
+struct qedi_work_map {
+	struct list_head list;
+	struct qedi_cmd *qedi_cmd;
+	int rtid;
+
+	int state;
+#define QEDI_WORK_QUEUED	1
+#define QEDI_WORK_SCHEDULED	2
+#define QEDI_WORK_EXIT		3
+
+	struct work_struct *ptr_tmf_work;
+};
+
+#define qedi_set_itt(task_id, itt) ((u32)((task_id & 0xffff) | (itt << 16)))
+#define qedi_get_itt(cqe) (cqe.iscsi_hdr.cmd.itt >> 16)
+
+#endif /* _QEDI_ISCSI_H_ */
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index 58ac9a2..22d19a3 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -27,6 +27,8 @@
 #include <scsi/scsi.h>
 
 #include "qedi.h"
+#include "qedi_gbl.h"
+#include "qedi_iscsi.h"
 
 static uint fw_debug;
 module_param(fw_debug, uint, S_IRUGO | S_IWUSR);
@@ -1368,6 +1370,139 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
 	return status;
 }
 
+int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep)
+{
+	int rval = 0;
+	u32 *pbl;
+	dma_addr_t page;
+	int num_pages;
+
+	if (!ep)
+		return -EIO;
+
+	/* Calculate appropriate queue and PBL sizes */
+	ep->sq_mem_size = QEDI_SQ_SIZE * sizeof(struct iscsi_wqe);
+	ep->sq_mem_size += QEDI_PAGE_SIZE - 1;
+
+	ep->sq_pbl_size = (ep->sq_mem_size / QEDI_PAGE_SIZE) * sizeof(void *);
+	ep->sq_pbl_size = ep->sq_pbl_size + QEDI_PAGE_SIZE;
+
+	ep->sq = dma_alloc_coherent(&qedi->pdev->dev, ep->sq_mem_size,
+				    &ep->sq_dma, GFP_KERNEL);
+	if (!ep->sq) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Could not allocate send queue.\n");
+		rval = -ENOMEM;
+		goto out;
+	}
+	memset(ep->sq, 0, ep->sq_mem_size);
+
+	ep->sq_pbl = dma_alloc_coherent(&qedi->pdev->dev, ep->sq_pbl_size,
+					&ep->sq_pbl_dma, GFP_KERNEL);
+	if (!ep->sq_pbl) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Could not allocate send queue PBL.\n");
+		rval = -ENOMEM;
+		goto out_free_sq;
+	}
+	memset(ep->sq_pbl, 0, ep->sq_pbl_size);
+
+	/* Create PBL */
+	num_pages = ep->sq_mem_size / QEDI_PAGE_SIZE;
+	page = ep->sq_dma;
+	pbl = (u32 *)ep->sq_pbl;
+
+	while (num_pages--) {
+		*pbl = (u32)page;
+		pbl++;
+		*pbl = (u32)((u64)page >> 32);
+		pbl++;
+		page += QEDI_PAGE_SIZE;
+	}
+
+	return rval;
+
+out_free_sq:
+	dma_free_coherent(&qedi->pdev->dev, ep->sq_mem_size, ep->sq,
+			  ep->sq_dma);
+out:
+	return rval;
+}
+
+void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep)
+{
+	if (ep->sq_pbl)
+		dma_free_coherent(&qedi->pdev->dev, ep->sq_pbl_size, ep->sq_pbl,
+				  ep->sq_pbl_dma);
+	if (ep->sq)
+		dma_free_coherent(&qedi->pdev->dev, ep->sq_mem_size, ep->sq,
+				  ep->sq_dma);
+}
+
+int qedi_get_task_idx(struct qedi_ctx *qedi)
+{
+	s16 tmp_idx;
+
+again:
+	tmp_idx = find_first_zero_bit(qedi->task_idx_map,
+				      MAX_ISCSI_TASK_ENTRIES);
+
+	if (tmp_idx >= MAX_ISCSI_TASK_ENTRIES) {
+		QEDI_ERR(&qedi->dbg_ctx, "FW task context pool is full.\n");
+		tmp_idx = -1;
+		goto err_idx;
+	}
+
+	if (test_and_set_bit(tmp_idx, qedi->task_idx_map))
+		goto again;
+
+err_idx:
+	return tmp_idx;
+}
+
+void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx)
+{
+	if (!test_and_clear_bit(idx, qedi->task_idx_map)) {
+		QEDI_ERR(&qedi->dbg_ctx,
			 "FW task context already cleared, tid=0x%x\n", idx);
+		WARN_ON(1);
+	}
+}
+
+void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt)
+{
+	qedi->itt_map[tid].itt = proto_itt;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "update itt map tid=0x%x, with proto itt=0x%x\n", tid,
+		  qedi->itt_map[tid].itt);
+}
+
+void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, s16 *tid)
+{
+	u16 i;
+
+	for (i = 0; i < MAX_ISCSI_TASK_ENTRIES; i++) {
+		if (qedi->itt_map[i].itt == itt) {
+			*tid = i;
+			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+				  "Ref itt=0x%x, found at tid=0x%x\n",
+				  itt, *tid);
+			return;
+		}
+	}
+
+	WARN_ON(1);
+}
+
+void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt)
+{
+	*proto_itt = qedi->itt_map[tid].itt;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
+		  "Get itt map tid=0x%x, with proto itt=0x%x\n",
+		  tid, *proto_itt);
+}
+
 static int qedi_alloc_itt(struct qedi_ctx *qedi)
 {
 	qedi->itt_map = kzalloc((sizeof(struct qedi_itt_map) *
@@ -1488,6 +1623,26 @@ static int qedi_cpu_callback(struct notifier_block *nfb,
 	.notifier_call = qedi_cpu_callback,
 };
 
+void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu)
+{
+	struct qed_ll2_params params;
+
+	qedi_recover_all_conns(qedi);
+
+	qedi_ops->ll2->stop(qedi->cdev);
+	qedi_ll2_free_skbs(qedi);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "old MTU %u, new MTU %u\n",
+		  qedi->ll2_mtu, mtu);
+	memset(&params, 0, sizeof(params));
+	qedi->ll2_mtu = mtu;
+	params.mtu = qedi->ll2_mtu + IPV6_HDR_LEN + TCP_HDR_LEN;
+	params.drop_ttl0_packets = 0;
+	params.rx_vlan_stripping = 1;
+	ether_addr_copy(params.ll2_mac_address, qedi->dev_info.common.hw_mac);
+	qedi_ops->ll2->start(qedi->cdev, &params);
+}
+
 static void __qedi_remove(struct pci_dev *pdev, int mode)
 {
 	struct qedi_ctx *qedi = pci_get_drvdata(pdev);
@@ -1852,6 +2007,13 @@ static int __init qedi_init(void)
 	qedi_dbg_init("qedi");
 #endif
 
+	qedi_scsi_transport = iscsi_register_transport(&qedi_iscsi_transport);
+	if (!qedi_scsi_transport) {
+		QEDI_ERR(NULL, "Could not register qedi transport\n");
+		rc = -ENOMEM;
+		goto exit_qedi_init_1;
+	}
+
 	register_hotcpu_notifier(&qedi_cpu_notifier);
 
 	ret = pci_register_driver(&qedi_pci_driver);
@@ -1874,6 +2036,7 @@ static int __init qedi_init(void)
 	return rc;
 
 exit_qedi_init_2:
+	iscsi_unregister_transport(&qedi_iscsi_transport);
 exit_qedi_init_1:
 #ifdef CONFIG_DEBUG_FS
 	qedi_dbg_exit();
@@ -1892,6 +2055,7 @@ static void __exit qedi_cleanup(void)
 
 	pci_unregister_driver(&qedi_pci_driver);
 	unregister_hotcpu_notifier(&qedi_cpu_notifier);
+	iscsi_unregister_transport(&qedi_iscsi_transport);
 
 #ifdef CONFIG_DEBUG_FS
 	qedi_dbg_exit();
-- 
1.8.3.1



* [RFC 6/6] qedi: Add support for data path.
  2016-10-19  5:01 ` manish.rangankar
@ 2016-10-19  5:01   ` manish.rangankar
  -1 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

This patch adds support for the iSCSI data path and task management function
(TMF) handling.

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 drivers/scsi/qedi/qedi_fw.c    | 1282 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_gbl.h   |    6 +
 drivers/scsi/qedi/qedi_iscsi.c |    6 +
 drivers/scsi/qedi/qedi_main.c  |    4 +
 4 files changed, 1298 insertions(+)

diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
index a820785..af1e14d 100644
--- a/drivers/scsi/qedi/qedi_fw.c
+++ b/drivers/scsi/qedi/qedi_fw.c
@@ -147,6 +147,114 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
 	spin_unlock(&session->back_lock);
 }
 
+static void qedi_tmf_resp_work(struct work_struct *work)
+{
+	struct qedi_cmd *qedi_cmd =
+				container_of(work, struct qedi_cmd, tmf_work);
+	struct qedi_conn *qedi_conn = qedi_cmd->conn;
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_tm_rsp *resp_hdr_ptr;
+	struct iscsi_cls_session *cls_sess;
+	int rval = 0;
+
+	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+	resp_hdr_ptr =  (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
+	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
+
+	iscsi_block_session(session->cls_session);
+	rval = qedi_cleanup_all_io(qedi, qedi_conn, qedi_cmd->task, true);
+	if (rval) {
+		clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+		qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+		iscsi_unblock_session(session->cls_session);
+		return;
+	}
+
+	iscsi_unblock_session(session->cls_session);
+	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+
+	spin_lock(&session->back_lock);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+	spin_unlock(&session->back_lock);
+	kfree(resp_hdr_ptr);
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+}
+
+static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
+				  union iscsi_cqe *cqe,
+				  struct iscsi_task *task,
+				  struct qedi_conn *qedi_conn)
+
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_tmf_response_hdr *cqe_tmp_response;
+	struct iscsi_tm_rsp *resp_hdr_ptr;
+	struct iscsi_tm *tmf_hdr;
+	struct qedi_cmd *qedi_cmd = NULL;
+	u32 *tmp;
+
+	cqe_tmp_response = &cqe->cqe_common.iscsi_hdr.tmf_response;
+
+	qedi_cmd = task->dd_data;
+	qedi_cmd->tmf_resp_buf = kzalloc(sizeof(*resp_hdr_ptr), GFP_KERNEL);
+	if (!qedi_cmd->tmf_resp_buf) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to allocate resp buf, cid=0x%x\n",
+			  qedi_conn->iscsi_conn_id);
+		return;
+	}
+
+	spin_lock(&session->back_lock);
+	resp_hdr_ptr =  (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
+	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_tm_rsp));
+
+	/* Fill up the header */
+	resp_hdr_ptr->opcode = cqe_tmp_response->opcode;
+	resp_hdr_ptr->flags = cqe_tmp_response->hdr_flags;
+	resp_hdr_ptr->response = cqe_tmp_response->hdr_response;
+	resp_hdr_ptr->hlength = 0;
+
+	hton24(resp_hdr_ptr->dlength,
+	       (cqe_tmp_response->hdr_second_dword &
+		ISCSI_TMF_RESPONSE_HDR_DATA_SEG_LEN_MASK));
+	tmp = (u32 *)resp_hdr_ptr->dlength;
+	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
+				      conn->session->age);
+	resp_hdr_ptr->statsn = cpu_to_be32(cqe_tmp_response->stat_sn);
+	resp_hdr_ptr->exp_cmdsn  = cpu_to_be32(cqe_tmp_response->exp_cmd_sn);
+	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_tmp_response->max_cmd_sn);
+
+	tmf_hdr = (struct iscsi_tm *)qedi_cmd->task->hdr;
+
+	if (likely(qedi_cmd->io_cmd_in_list)) {
+		qedi_cmd->io_cmd_in_list = false;
+		list_del_init(&qedi_cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	}
+
+	if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	      ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
+	    ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	      ISCSI_TM_FUNC_TARGET_WARM_RESET) ||
+	    ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	      ISCSI_TM_FUNC_TARGET_COLD_RESET)) {
+		INIT_WORK(&qedi_cmd->tmf_work, qedi_tmf_resp_work);
+		queue_work(qedi->tmf_thread, &qedi_cmd->tmf_work);
+		goto unblock_sess;
+	}
+
+	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+	kfree(resp_hdr_ptr);
+
+unblock_sess:
+	spin_unlock(&session->back_lock);
+}
+
 static void qedi_process_login_resp(struct qedi_ctx *qedi,
 				    union iscsi_cqe *cqe,
 				    struct iscsi_task *task,
@@ -470,6 +578,121 @@ static void qedi_process_reject_mesg(struct qedi_ctx *qedi,
 	spin_unlock_bh(&session->back_lock);
 }
 
+static void qedi_scsi_completion(struct qedi_ctx *qedi,
+				 union iscsi_cqe *cqe,
+				 struct iscsi_task *task,
+				 struct iscsi_conn *conn)
+{
+	struct scsi_cmnd *sc_cmd;
+	struct qedi_cmd *cmd = task->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_scsi_rsp *hdr;
+	struct iscsi_data_in_hdr *cqe_data_in;
+	int datalen = 0;
+	struct qedi_conn *qedi_conn;
+	u32 iscsi_cid;
+	bool mark_cmd_node_deleted = false;
+	u8 cqe_err_bits = 0;
+
+	iscsi_cid  = cqe->cqe_common.conn_id;
+	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
+
+	cqe_data_in = &cqe->cqe_common.iscsi_hdr.data_in;
+	cqe_err_bits =
+		cqe->cqe_common.error_bitmap.error_bits.cqe_error_status_bits;
+
+	spin_lock_bh(&session->back_lock);
+	/* get the scsi command */
+	sc_cmd = cmd->scsi_cmd;
+
+	if (!sc_cmd) {
+		QEDI_WARN(&qedi->dbg_ctx, "sc_cmd is NULL!\n");
+		goto error;
+	}
+
+	if (!sc_cmd->SCp.ptr) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "SCp.ptr is NULL, returned in another context.\n");
+		goto error;
+	}
+
+	if (!sc_cmd->request) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "sc_cmd->request is NULL, sc_cmd=%p.\n",
+			  sc_cmd);
+		goto error;
+	}
+
+	if (!sc_cmd->request->special) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "request->special is NULL so request is not valid, sc_cmd=%p.\n",
+			  sc_cmd);
+		goto error;
+	}
+
+	if (!sc_cmd->request->q) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "request->q is NULL so request is not valid, sc_cmd=%p.\n",
+			  sc_cmd);
+		goto error;
+	}
+
+	qedi_iscsi_unmap_sg_list(cmd);
+
+	hdr = (struct iscsi_scsi_rsp *)task->hdr;
+	hdr->opcode = cqe_data_in->opcode;
+	hdr->max_cmdsn = cpu_to_be32(cqe_data_in->max_cmd_sn);
+	hdr->exp_cmdsn = cpu_to_be32(cqe_data_in->exp_cmd_sn);
+	hdr->itt = build_itt(cqe->cqe_solicited.itid, conn->session->age);
+	hdr->response = cqe_data_in->reserved1;
+	hdr->cmd_status = cqe_data_in->status_rsvd;
+	hdr->flags = cqe_data_in->flags;
+	hdr->residual_count = cpu_to_be32(cqe_data_in->residual_count);
+
+	if (hdr->cmd_status == SAM_STAT_CHECK_CONDITION) {
+		datalen = cqe_data_in->reserved2 &
+			  ISCSI_COMMON_HDR_DATA_SEG_LEN_MASK;
+		memcpy((char *)conn->data, (char *)cmd->sense_buffer, datalen);
+	}
+
+	/* If f/w reports data underrun err then set residual to IO transfer
+	 * length, set Underrun flag and clear Overrun flag explicitly
+	 */
+	if (unlikely(cqe_err_bits &&
+		     GET_FIELD(cqe_err_bits, CQE_ERROR_BITMAP_UNDER_RUN_ERR))) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Under flow itt=0x%x proto flags=0x%x tid=0x%x cid 0x%x fw resid 0x%x sc dlen 0x%x\n",
+			  hdr->itt, cqe_data_in->flags, cmd->task_id,
+			  qedi_conn->iscsi_conn_id, hdr->residual_count,
+			  scsi_bufflen(sc_cmd));
+		hdr->residual_count = cpu_to_be32(scsi_bufflen(sc_cmd));
+		hdr->flags |= ISCSI_FLAG_CMD_UNDERFLOW;
+		hdr->flags &= (~ISCSI_FLAG_CMD_OVERFLOW);
+	}
+
+	spin_lock(&qedi_conn->list_lock);
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+		mark_cmd_node_deleted = true;
+	}
+	spin_unlock(&qedi_conn->list_lock);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+	cmd->state = RESPONSE_RECEIVED;
+	if (io_tracing)
+		qedi_trace_io(qedi, task, cmd->task_id, QEDI_IO_TRACE_RSP);
+
+	qedi_clear_task_idx(qedi, cmd->task_id);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
+			     conn->data, datalen);
+error:
+	spin_unlock_bh(&session->back_lock);
+}
+
 static void qedi_mtask_completion(struct qedi_ctx *qedi,
 				  union iscsi_cqe *cqe,
 				  struct iscsi_task *task,
@@ -482,9 +705,16 @@ static void qedi_mtask_completion(struct qedi_ctx *qedi,
 	iscsi_conn = conn->cls_conn->dd_data;
 
 	switch (hdr_opcode) {
+	case ISCSI_OPCODE_SCSI_RESPONSE:
+	case ISCSI_OPCODE_DATA_IN:
+		qedi_scsi_completion(qedi, cqe, task, iscsi_conn);
+		break;
 	case ISCSI_OPCODE_LOGIN_RESPONSE:
 		qedi_process_login_resp(qedi, cqe, task, conn);
 		break;
+	case ISCSI_OPCODE_TMF_RESPONSE:
+		qedi_process_tmf_resp(qedi, cqe, task, conn);
+		break;
 	case ISCSI_OPCODE_TEXT_RESPONSE:
 		qedi_process_text_resp(qedi, cqe, task, conn);
 		break;
@@ -520,6 +750,131 @@ static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
 	spin_unlock_bh(&session->back_lock);
 }
 
+static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+					  struct iscsi_cqe_solicited *cqe,
+					  struct iscsi_task *task,
+					  struct iscsi_conn *conn)
+{
+	struct qedi_work_map *work, *work_tmp;
+	u32 proto_itt = cqe->itid;
+	u32 ptmp_itt = 0;
+	itt_t protoitt = 0;
+	int found = 0;
+	struct qedi_cmd *qedi_cmd = NULL;
+	u32 rtid = 0;
+	u32 iscsi_cid;
+	struct qedi_conn *qedi_conn;
+	struct qedi_cmd *cmd_new, *dbg_cmd;
+	struct iscsi_task *mtask;
+	struct iscsi_tm *tmf_hdr = NULL;
+
+	iscsi_cid = cqe->conn_id;
+	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
+
+	/* Based on this itt get the corresponding qedi_cmd */
+	spin_lock_bh(&qedi_conn->tmf_work_lock);
+	list_for_each_entry_safe(work, work_tmp, &qedi_conn->tmf_work_list,
+				 list) {
+		if (work->rtid == proto_itt) {
+			/* We found the command */
+			qedi_cmd = work->qedi_cmd;
+			if (!qedi_cmd->list_tmf_work) {
+				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+					  "TMF work not found, cqe->tid=0x%x, cid=0x%x\n",
+					  proto_itt, qedi_conn->iscsi_conn_id);
+				WARN_ON(1);
+			}
+			found = 1;
+			mtask = qedi_cmd->task;
+			tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+			rtid = work->rtid;
+
+			list_del_init(&work->list);
+			kfree(work);
+			qedi_cmd->list_tmf_work = NULL;
+		}
+	}
+	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+	if (found) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "TMF work, cqe->tid=0x%x, tmf flags=0x%x, cid=0x%x\n",
+			  proto_itt, tmf_hdr->flags, qedi_conn->iscsi_conn_id);
+
+		if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_ABORT_TASK) {
+			spin_lock_bh(&conn->session->back_lock);
+
+			protoitt = build_itt(get_itt(tmf_hdr->rtt),
+					     conn->session->age);
+			task = iscsi_itt_to_task(conn, protoitt);
+
+			spin_unlock_bh(&conn->session->back_lock);
+
+			if (!task) {
+				QEDI_NOTICE(&qedi->dbg_ctx,
+					    "IO task completed, tmf rtt=0x%x, cid=0x%x\n",
+					    get_itt(tmf_hdr->rtt),
+					    qedi_conn->iscsi_conn_id);
+				return;
+			}
+
+			dbg_cmd = task->dd_data;
+
+			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+				  "Abort tmf rtt=0x%x, i/o itt=0x%x, i/o tid=0x%x, cid=0x%x\n",
+				  get_itt(tmf_hdr->rtt), get_itt(task->itt),
+				  dbg_cmd->task_id, qedi_conn->iscsi_conn_id);
+
+			if (qedi_cmd->state == CLEANUP_WAIT_FAILED)
+				qedi_cmd->state = CLEANUP_RECV;
+
+			qedi_clear_task_idx(qedi_conn->qedi, rtid);
+
+			spin_lock(&qedi_conn->list_lock);
+			list_del_init(&dbg_cmd->io_cmd);
+			qedi_conn->active_cmd_count--;
+			spin_unlock(&qedi_conn->list_lock);
+			qedi_cmd->state = CLEANUP_RECV;
+			wake_up_interruptible(&qedi_conn->wait_queue);
+		}
+	} else if (qedi_conn->cmd_cleanup_req > 0) {
+		spin_lock_bh(&conn->session->back_lock);
+		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
+		protoitt = build_itt(ptmp_itt, conn->session->age);
+		task = iscsi_itt_to_task(conn, protoitt);
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "cleanup io itid=0x%x, protoitt=0x%x, cmd_cleanup_cmpl=%d, cid=0x%x\n",
+			  cqe->itid, protoitt, qedi_conn->cmd_cleanup_cmpl,
+			  qedi_conn->iscsi_conn_id);
+
+		spin_unlock_bh(&conn->session->back_lock);
+		if (!task) {
+			QEDI_NOTICE(&qedi->dbg_ctx,
+				    "task is null, itid=0x%x, cid=0x%x\n",
+				    cqe->itid, qedi_conn->iscsi_conn_id);
+			return;
+		}
+		qedi_conn->cmd_cleanup_cmpl++;
+		wake_up(&qedi_conn->wait_queue);
+		cmd_new = task->dd_data;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+			  "Freeing tid=0x%x for cid=0x%x\n",
+			  cqe->itid, qedi_conn->iscsi_conn_id);
+		qedi_clear_task_idx(qedi_conn->qedi, cqe->itid);
+
+	} else {
+		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
+		protoitt = build_itt(ptmp_itt, conn->session->age);
+		task = iscsi_itt_to_task(conn, protoitt);
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Delayed or untracked cleanup response, itt=0x%x, tid=0x%x, cid=0x%x, task=%p\n",
+			 protoitt, cqe->itid, qedi_conn->iscsi_conn_id, task);
+		WARN_ON(1);
+	}
+}
+
 void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
 			  uint16_t que_idx)
 {
@@ -619,6 +974,14 @@ void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
 			break;
 		}
 		goto exit_fp_process;
+	case ISCSI_CQE_TYPE_DUMMY:
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, "Dummy CQE\n");
+		goto exit_fp_process;
+	case ISCSI_CQE_TYPE_TASK_CLEANUP:
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, "Cleanup CQE\n");
+		qedi_process_cmd_cleanup_resp(qedi, &cqe->cqe_solicited, task,
+					      conn);
+		goto exit_fp_process;
 	default:
 		QEDI_ERR(&qedi->dbg_ctx, "Error cqe.\n");
 		break;
@@ -904,6 +1267,440 @@ int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
 	return 0;
 }
 
+int qedi_cleanup_all_io(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
+			struct iscsi_task *task, bool in_recovery)
+{
+	int rval;
+	struct iscsi_task *ctask;
+	struct qedi_cmd *cmd, *cmd_tmp;
+	struct iscsi_tm *tmf_hdr;
+	unsigned int lun = 0;
+	bool lun_reset = false;
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+
+	/* task is NULL when called from recovery; valid when called from a TMF response */
+	if (task) {
+		tmf_hdr = (struct iscsi_tm *)task->hdr;
+
+		if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+			ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) {
+			lun_reset = true;
+			lun = scsilun_to_int(&tmf_hdr->lun);
+		}
+	}
+
+	qedi_conn->cmd_cleanup_req = 0;
+	qedi_conn->cmd_cleanup_cmpl = 0;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "active_cmd_count=%d, cid=0x%x, in_recovery=%d, lun_reset=%d\n",
+		  qedi_conn->active_cmd_count, qedi_conn->iscsi_conn_id,
+		  in_recovery, lun_reset);
+
+	if (lun_reset)
+		spin_lock_bh(&session->back_lock);
+
+	spin_lock(&qedi_conn->list_lock);
+
+	list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
+				 io_cmd) {
+		ctask = cmd->task;
+		if (ctask == task)
+			continue;
+
+		if (lun_reset) {
+			if (cmd->scsi_cmd && cmd->scsi_cmd->device) {
+				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+					  "tid=0x%x itt=0x%x scsi_cmd_ptr=%p device=%p task_state=%d cmd_state=0x%x cid=0x%x\n",
+					  cmd->task_id, get_itt(ctask->itt),
+					  cmd->scsi_cmd, cmd->scsi_cmd->device,
+					  ctask->state, cmd->state,
+					  qedi_conn->iscsi_conn_id);
+				if (cmd->scsi_cmd->device->lun != lun)
+					continue;
+			}
+		}
+		qedi_conn->cmd_cleanup_req++;
+		qedi_iscsi_cleanup_task(ctask, true);
+
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Deleted active cmd list node io_cmd=%p, cid=0x%x\n",
+			  &cmd->io_cmd, qedi_conn->iscsi_conn_id);
+	}
+
+	spin_unlock(&qedi_conn->list_lock);
+
+	if (lun_reset)
+		spin_unlock_bh(&session->back_lock);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "cmd_cleanup_req=%d, cid=0x%x\n",
+		  qedi_conn->cmd_cleanup_req,
+		  qedi_conn->iscsi_conn_id);
+
+	rval  = wait_event_interruptible_timeout(qedi_conn->wait_queue,
+						 ((qedi_conn->cmd_cleanup_req ==
+						 qedi_conn->cmd_cleanup_cmpl) ||
+						 qedi_conn->ep),
+						 5 * HZ);
+	if (rval) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "i/o cmd_cleanup_req=%d, equal to cmd_cleanup_cmpl=%d, cid=0x%x\n",
+			  qedi_conn->cmd_cleanup_req,
+			  qedi_conn->cmd_cleanup_cmpl,
+			  qedi_conn->iscsi_conn_id);
+
+		return 0;
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "i/o cmd_cleanup_req=%d, not equal to cmd_cleanup_cmpl=%d, cid=0x%x\n",
+		  qedi_conn->cmd_cleanup_req,
+		  qedi_conn->cmd_cleanup_cmpl,
+		  qedi_conn->iscsi_conn_id);
+
+	iscsi_host_for_each_session(qedi->shost,
+				    qedi_mark_device_missing);
+	qedi_ops->common->drain(qedi->cdev);
+
+	/* Enable I/Os for all other sessions except the current one. */
+	if (!wait_event_interruptible_timeout(qedi_conn->wait_queue,
+					      (qedi_conn->cmd_cleanup_req ==
+					       qedi_conn->cmd_cleanup_cmpl),
+					      5 * HZ)) {
+		iscsi_host_for_each_session(qedi->shost,
+					    qedi_mark_device_available);
+		return -1;
+	}
+
+	iscsi_host_for_each_session(qedi->shost,
+				    qedi_mark_device_available);
+
+	return 0;
+}
+
+void qedi_clearsq(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
+		  struct iscsi_task *task)
+{
+	struct qedi_endpoint *qedi_ep;
+	int rval;
+
+	qedi_ep = qedi_conn->ep;
+	qedi_conn->cmd_cleanup_req = 0;
+	qedi_conn->cmd_cleanup_cmpl = 0;
+
+	if (!qedi_ep) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Cannot proceed, ep already disconnected, cid=0x%x\n",
+			  qedi_conn->iscsi_conn_id);
+		return;
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Clearing SQ for cid=0x%x, conn=%p, ep=%p\n",
+		  qedi_conn->iscsi_conn_id, qedi_conn, qedi_ep);
+
+	qedi_ops->clear_sq(qedi->cdev, qedi_ep->handle);
+
+	rval = qedi_cleanup_all_io(qedi, qedi_conn, task, true);
+	if (rval) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "fatal error, need hard reset, cid=0x%x\n",
+			 qedi_conn->iscsi_conn_id);
+		WARN_ON(1);
+	}
+}
+
+static int qedi_wait_for_cleanup_request(struct qedi_ctx *qedi,
+					 struct qedi_conn *qedi_conn,
+					 struct iscsi_task *task,
+					 struct qedi_cmd *qedi_cmd,
+					 struct qedi_work_map *list_work)
+{
+	struct qedi_cmd *cmd = (struct qedi_cmd *)task->dd_data;
+	int wait;
+
+	wait  = wait_event_interruptible_timeout(qedi_conn->wait_queue,
+						 ((qedi_cmd->state ==
+						   CLEANUP_RECV) ||
+						 ((qedi_cmd->type == TYPEIO) &&
+						  (cmd->state ==
+						   RESPONSE_RECEIVED))),
+						 5 * HZ);
+	if (!wait) {
+		qedi_cmd->state = CLEANUP_WAIT_FAILED;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "Cleanup timed out, tid=0x%x, issuing connection recovery, cid=0x%x\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id);
+
+		return -1;
+	}
+	return 0;
+}
+
+static void qedi_tmf_work(struct work_struct *work)
+{
+	struct qedi_cmd *qedi_cmd =
+		container_of(work, struct qedi_cmd, tmf_work);
+	struct qedi_conn *qedi_conn = qedi_cmd->conn;
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_cls_session *cls_sess;
+	struct qedi_work_map *list_work = NULL;
+	struct iscsi_task *mtask;
+	struct qedi_cmd *cmd;
+	struct iscsi_task *ctask;
+	struct iscsi_tm *tmf_hdr;
+	s16 rval = 0;
+	s16 tid = 0;
+
+	mtask = qedi_cmd->task;
+	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
+	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+
+	ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
+	if (!ctask || !ctask->sc) {
+		QEDI_ERR(&qedi->dbg_ctx, "Task already completed\n");
+		goto abort_ret;
+	}
+
+	cmd = (struct qedi_cmd *)ctask->dd_data;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Abort tmf rtt=0x%x, cmd itt=0x%x, cmd tid=0x%x, cid=0x%x\n",
+		  get_itt(tmf_hdr->rtt), get_itt(ctask->itt), cmd->task_id,
+		  qedi_conn->iscsi_conn_id);
+
+	if (do_not_recover) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Not sending cleanup/abort, do_not_recover=%d\n",
+			 do_not_recover);
+		goto abort_ret;
+	}
+
+	list_work = kzalloc(sizeof(*list_work), GFP_ATOMIC);
+	if (!list_work) {
+		QEDI_ERR(&qedi->dbg_ctx, "Memory allocation failed\n");
+		goto abort_ret;
+	}
+
+	qedi_cmd->type = TYPEIO;
+	list_work->qedi_cmd = qedi_cmd;
+	list_work->rtid = cmd->task_id;
+	list_work->state = QEDI_WORK_SCHEDULED;
+	qedi_cmd->list_tmf_work = list_work;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "Queue tmf work=%p, list node=%p, cid=0x%x, tmf flags=0x%x\n",
+		  list_work->ptr_tmf_work, list_work, qedi_conn->iscsi_conn_id,
+		  tmf_hdr->flags);
+
+	spin_lock_bh(&qedi_conn->tmf_work_lock);
+	list_add_tail(&list_work->list, &qedi_conn->tmf_work_list);
+	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+	qedi_iscsi_cleanup_task(ctask, false);
+
+	rval = qedi_wait_for_cleanup_request(qedi, qedi_conn, ctask, qedi_cmd,
+					     list_work);
+	if (rval == -1) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "FW cleanup got escalated, cid=0x%x\n",
+			  qedi_conn->iscsi_conn_id);
+		goto ldel_exit;
+	}
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1) {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid tid, cid=0x%x\n",
+			 qedi_conn->iscsi_conn_id);
+		goto ldel_exit;
+	}
+
+	qedi_cmd->task_id = tid;
+	qedi_send_iscsi_tmf(qedi_conn, qedi_cmd->task);
+
+abort_ret:
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+	return;
+
+ldel_exit:
+	spin_lock_bh(&qedi_conn->tmf_work_lock);
+	if (!qedi_cmd->list_tmf_work) {
+		list_del_init(&list_work->list);
+		qedi_cmd->list_tmf_work = NULL;
+		kfree(list_work);
+	}
+	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_del_init(&cmd->io_cmd);
+	qedi_conn->active_cmd_count--;
+	spin_unlock(&qedi_conn->list_lock);
+
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+}
+
+static int qedi_send_iscsi_tmf(struct qedi_conn *qedi_conn,
+			       struct iscsi_task *mtask)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_tmf_request_hdr *fw_tmf_request;
+	struct iscsi_sge *single_sge;
+	struct qedi_cmd *qedi_cmd;
+	struct qedi_cmd *cmd;
+	struct iscsi_task *ctask;
+	struct iscsi_tm *tmf_hdr;
+	struct iscsi_sge *req_sge;
+	struct iscsi_sge *resp_sge;
+	u32 scsi_lun[2];
+	s16 tid = 0, ptu_invalidate = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)mtask->dd_data;
+	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+
+	tid = qedi_cmd->task_id;
+	qedi_update_itt_map(qedi, tid, mtask->itt);
+
+	fw_task_ctx =
+	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+
+	fw_tmf_request = &fw_task_ctx->ystorm_st_context.pdu_hdr.tmf_request;
+	fw_tmf_request->itt = qedi_set_itt(tid, get_itt(mtask->itt));
+	fw_tmf_request->cmd_sn = be32_to_cpu(tmf_hdr->cmdsn);
+
+	memcpy(scsi_lun, &tmf_hdr->lun, sizeof(struct scsi_lun));
+	fw_tmf_request->lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_tmf_request->lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+
+	if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	     ISCSI_TM_FUNC_ABORT_TASK) {
+		ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
+		if (!ctask || !ctask->sc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not get reference task\n");
+			return 0;
+		}
+		cmd = (struct qedi_cmd *)ctask->dd_data;
+		fw_tmf_request->rtt =
+				qedi_set_itt(cmd->task_id,
+					     get_itt(tmf_hdr->rtt));
+	} else {
+		fw_tmf_request->rtt = ISCSI_RESERVED_TAG;
+	}
+
+	fw_tmf_request->opcode = tmf_hdr->opcode;
+	fw_tmf_request->function = tmf_hdr->flags;
+	fw_tmf_request->hdr_second_dword = ntoh24(tmf_hdr->dlength);
+	fw_tmf_request->ref_cmd_sn = be32_to_cpu(tmf_hdr->refcmdsn);
+
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SINGLE_SGE, 1);
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SLOW_IO, 0);
+	fw_task_ctx->mstorm_st_context.sgl_size = 1;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, 0);
+
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "Add TMF to SQ, tmf tid=0x%x, itt=0x%x, cid=0x%x\n",
+		  tid, mtask->itt, qedi_conn->iscsi_conn_id);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, mtask, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	return 0;
+}
+
+int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *mtask)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_tm *tmf_hdr;
+	struct qedi_cmd *qedi_cmd = (struct qedi_cmd *)mtask->dd_data;
+	s16 tid = 0;
+
+	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+	qedi_cmd->task = mtask;
+
+	/* For ABORT_TASK, schedule the cleanup work and return. */
+	if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	    ISCSI_TM_FUNC_ABORT_TASK) {
+		qedi_cmd->state = CLEANUP_WAIT;
+		INIT_WORK(&qedi_cmd->tmf_work, qedi_tmf_work);
+		queue_work(qedi->tmf_thread, &qedi_cmd->tmf_work);
+
+	} else if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
+		   ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_TARGET_WARM_RESET) ||
+		   ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_TARGET_COLD_RESET)) {
+		tid = qedi_get_task_idx(qedi);
+		if (tid == -1) {
+			QEDI_ERR(&qedi->dbg_ctx, "Invalid tid, cid=0x%x\n",
+				 qedi_conn->iscsi_conn_id);
+			return -1;
+		}
+		qedi_cmd->task_id = tid;
+
+		qedi_send_iscsi_tmf(qedi_conn, qedi_cmd->task);
+
+	} else {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid tmf, cid=0x%x\n",
+			 qedi_conn->iscsi_conn_id);
+		return -1;
+	}
+
+	return 0;
+}
+
 int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
 			 struct iscsi_task *task)
 {
@@ -1121,3 +1918,488 @@ int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
 	qedi_ring_doorbell(qedi_conn);
 	return 0;
 }
+
+/* Split a DMA address range into SGE fragments of at most QEDI_BD_SPLIT_SZ
+ * bytes each, starting at bd_index; returns the number of fragments produced.
+ */
+static int qedi_split_bd(struct qedi_cmd *cmd, u64 addr, int sg_len,
+			 int bd_index)
+{
+	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+	int frag_size, sg_frags;
+
+	sg_frags = 0;
+
+	while (sg_len) {
+		if (addr % QEDI_PAGE_SIZE)
+			frag_size =
+				   (QEDI_PAGE_SIZE - (addr % QEDI_PAGE_SIZE));
+		else
+			frag_size = (sg_len > QEDI_BD_SPLIT_SZ) ? 0 :
+				    (sg_len % QEDI_BD_SPLIT_SZ);
+
+		if (frag_size == 0)
+			frag_size = QEDI_BD_SPLIT_SZ;
+
+		bd[bd_index + sg_frags].sge_addr.lo = (addr & 0xffffffff);
+		bd[bd_index + sg_frags].sge_addr.hi = (addr >> 32);
+		bd[bd_index + sg_frags].sge_len = (u16)frag_size;
+		QEDI_INFO(&cmd->conn->qedi->dbg_ctx, QEDI_LOG_IO,
+			  "split sge %d: addr=%llx, len=%x\n",
+			  (bd_index + sg_frags), addr, frag_size);
+
+		addr += (u64)frag_size;
+		sg_frags++;
+		sg_len -= frag_size;
+	}
+	return sg_frags;
+}
+
+static int qedi_map_scsi_sg(struct qedi_ctx *qedi, struct qedi_cmd *cmd)
+{
+	struct scsi_cmnd *sc = cmd->scsi_cmd;
+	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+	struct scatterlist *sg;
+	int byte_count = 0;
+	int bd_count = 0;
+	int sg_count;
+	int sg_len;
+	int sg_frags;
+	u64 addr, end_addr;
+	int i;
+
+	WARN_ON(scsi_sg_count(sc) > QEDI_ISCSI_MAX_BDS_PER_CMD);
+
+	sg_count = dma_map_sg(&qedi->pdev->dev, scsi_sglist(sc),
+			      scsi_sg_count(sc), sc->sc_data_direction);
+
+	/*
+	 * Send a single SGE as a cached SGL when the command maps to
+	 * exactly one SGE shorter than 64K.
+	 */
+	sg = scsi_sglist(sc);
+	if ((sg_count == 1) && (sg_dma_len(sg) <= MAX_SGLEN_FOR_CACHESGL)) {
+		sg_len = sg_dma_len(sg);
+		addr = (u64)sg_dma_address(sg);
+
+		bd[bd_count].sge_addr.lo = (addr & 0xffffffff);
+		bd[bd_count].sge_addr.hi = (addr >> 32);
+		bd[bd_count].sge_len = (u16)sg_len;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+			  "single-cached-sgl: bd_count:%d addr=%llx, len=%x\n",
+			  sg_count, addr, sg_len);
+
+		return ++bd_count;
+	}
+
+	scsi_for_each_sg(sc, sg, sg_count, i) {
+		sg_len = sg_dma_len(sg);
+		addr = (u64)sg_dma_address(sg);
+		end_addr = (addr + sg_len);
+
+		/*
+		 * first sg elem in the 'list',
+		 * check if end addr is page-aligned.
+		 */
+		if ((i == 0) && (sg_count > 1) && (end_addr % QEDI_PAGE_SIZE))
+			cmd->use_slowpath = true;
+
+		/*
+		 * last sg elem in the 'list',
+		 * check if start addr is page-aligned.
+		 */
+		else if ((i == (sg_count - 1)) &&
+			 (sg_count > 1) && (addr % QEDI_PAGE_SIZE))
+			cmd->use_slowpath = true;
+
+		/*
+		 * middle sg elements in list,
+		 * check if start and end addr is page-aligned
+		 */
+		else if ((i != 0) && (i != (sg_count - 1)) &&
+			 ((addr % QEDI_PAGE_SIZE) ||
+			 (end_addr % QEDI_PAGE_SIZE)))
+			cmd->use_slowpath = true;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "sg[%d] size=0x%x\n",
+			  i, sg_len);
+
+		if (sg_len > QEDI_BD_SPLIT_SZ) {
+			sg_frags = qedi_split_bd(cmd, addr, sg_len, bd_count);
+		} else {
+			sg_frags = 1;
+			bd[bd_count].sge_addr.lo = addr & 0xffffffff;
+			bd[bd_count].sge_addr.hi = addr >> 32;
+			bd[bd_count].sge_len = sg_len;
+		}
+		byte_count += sg_len;
+		bd_count += sg_frags;
+	}
+
+	if (byte_count != scsi_bufflen(sc))
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "byte_count = %d != scsi_bufflen = %d\n", byte_count,
+			 scsi_bufflen(sc));
+	else
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "byte_count = %d\n",
+			  byte_count);
+
+	WARN_ON(byte_count != scsi_bufflen(sc));
+
+	return bd_count;
+}
+
+static void qedi_iscsi_map_sg_list(struct qedi_cmd *cmd)
+{
+	int bd_count;
+	struct scsi_cmnd *sc = cmd->scsi_cmd;
+
+	if (scsi_sg_count(sc)) {
+		bd_count  = qedi_map_scsi_sg(cmd->conn->qedi, cmd);
+		if (bd_count == 0)
+			return;
+	} else {
+		struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+
+		bd[0].sge_addr.lo = 0;
+		bd[0].sge_addr.hi = 0;
+		bd[0].sge_len = 0;
+		bd_count = 0;
+	}
+	cmd->io_tbl.sge_valid = bd_count;
+}
+
+/* Copy the SCSI CDB into the firmware task context as big-endian dwords. */
+static void qedi_cpy_scsi_cdb(struct scsi_cmnd *sc, u32 *dstp)
+{
+	u32 dword;
+	int lpcnt;
+	u8 *srcp;
+
+	lpcnt = sc->cmd_len / sizeof(dword);
+	srcp = (u8 *)sc->cmnd;
+	while (lpcnt--) {
+		memcpy(&dword, (const void *)srcp, 4);
+		*dstp = cpu_to_be32(dword);
+		srcp += 4;
+		dstp++;
+	}
+	if (sc->cmd_len & 0x3) {
+		dword = (u32)srcp[0] | ((u32)srcp[1] << 8);
+		*dstp = cpu_to_be32(dword);
+	}
+}
+
+void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
+		   u16 tid, int8_t direction)
+{
+	struct qedi_io_log *io_log;
+	struct iscsi_conn *conn = task->conn;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct scsi_cmnd *sc_cmd = task->sc;
+	unsigned long flags;
+	u8 op;
+
+	spin_lock_irqsave(&qedi->io_trace_lock, flags);
+
+	io_log = &qedi->io_trace_buf[qedi->io_trace_idx];
+	io_log->direction = direction;
+	io_log->task_id = tid;
+	io_log->cid = qedi_conn->iscsi_conn_id;
+	io_log->lun = sc_cmd->device->lun;
+	io_log->op = sc_cmd->cmnd[0];
+	op = sc_cmd->cmnd[0];
+
+	if (op == READ_10 || op == WRITE_10) {
+		io_log->lba[0] = sc_cmd->cmnd[2];
+		io_log->lba[1] = sc_cmd->cmnd[3];
+		io_log->lba[2] = sc_cmd->cmnd[4];
+		io_log->lba[3] = sc_cmd->cmnd[5];
+	} else {
+		io_log->lba[0] = 0;
+		io_log->lba[1] = 0;
+		io_log->lba[2] = 0;
+		io_log->lba[3] = 0;
+	}
+	io_log->bufflen = scsi_bufflen(sc_cmd);
+	io_log->sg_count = scsi_sg_count(sc_cmd);
+	io_log->fast_sgs = qedi->fast_sgls;
+	io_log->cached_sgs = qedi->cached_sgls;
+	io_log->slow_sgs = qedi->slow_sgls;
+	io_log->cached_sge = qedi->use_cached_sge;
+	io_log->slow_sge = qedi->use_slow_sge;
+	io_log->fast_sge = qedi->use_fast_sge;
+	io_log->result = sc_cmd->result;
+	io_log->jiffies = jiffies;
+	io_log->blk_req_cpu = smp_processor_id();
+
+	if (direction == QEDI_IO_TRACE_REQ) {
+		/* For requests we only care about the submission CPU */
+		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
+		io_log->intr_cpu = 0;
+		io_log->blk_rsp_cpu = 0;
+	} else if (direction == QEDI_IO_TRACE_RSP) {
+		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
+		io_log->intr_cpu = qedi->intr_cpu;
+		io_log->blk_rsp_cpu = smp_processor_id();
+	}
+
+	qedi->io_trace_idx++;
+	if (qedi->io_trace_idx == QEDI_IO_TRACE_SIZE)
+		qedi->io_trace_idx = 0;
+
+	qedi->use_cached_sge = false;
+	qedi->use_slow_sge = false;
+	qedi->use_fast_sge = false;
+
+	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
+}
+
+int qedi_iscsi_send_ioreq(struct iscsi_task *task)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct iscsi_session *session = conn->session;
+	struct Scsi_Host *shost = iscsi_session_to_shost(session->cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+	struct scsi_cmnd *sc = task->sc;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_cached_sge_ctx *cached_sge;
+	struct iscsi_phys_sgl_ctx *phys_sgl;
+	struct iscsi_virt_sgl_ctx *virt_sgl;
+	struct ystorm_iscsi_task_st_ctx *yst_cxt;
+	struct mstorm_iscsi_task_st_ctx *mst_cxt;
+	struct iscsi_sgl *sgl_struct;
+	struct iscsi_sge *single_sge;
+	struct iscsi_scsi_req *hdr = (struct iscsi_scsi_req *)task->hdr;
+	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+	enum iscsi_task_type task_type;
+	struct iscsi_cmd_hdr *fw_cmd;
+	u32 scsi_lun[2];
+	u16 cq_idx = smp_processor_id() % qedi->num_queues;
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+	u8 num_fast_sgs;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	qedi_iscsi_map_sg_list(cmd);
+
+	int_to_scsilun(sc->device->lun, (struct scsi_lun *)scsi_lun);
+	fw_task_ctx =
+	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+	cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_cmd = &fw_task_ctx->ystorm_st_context.pdu_hdr.cmd;
+	SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_ATTR, ISCSI_ATTR_SIMPLE);
+
+	if (sc->sc_data_direction == DMA_TO_DEVICE) {
+		if (conn->session->initial_r2t_en) {
+			fw_task_ctx->ustorm_ag_context.exp_data_acked =
+				min((conn->session->imm_data_en *
+				    conn->max_xmit_dlength),
+				    conn->session->first_burst);
+			fw_task_ctx->ustorm_ag_context.exp_data_acked =
+			      min(fw_task_ctx->ustorm_ag_context.exp_data_acked,
+				  scsi_bufflen(sc));
+		} else {
+			fw_task_ctx->ustorm_ag_context.exp_data_acked =
+			      min(conn->session->first_burst, scsi_bufflen(sc));
+		}
+
+		SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_WRITE, 1);
+		task_type = ISCSI_TASK_TYPE_INITIATOR_WRITE;
+	} else {
+		if (scsi_bufflen(sc))
+			SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_READ, 1);
+		task_type = ISCSI_TASK_TYPE_INITIATOR_READ;
+	}
+
+	fw_cmd->lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_cmd->lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_cmd->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_cmd->expected_transfer_length = scsi_bufflen(sc);
+	fw_cmd->cmd_sn = be32_to_cpu(hdr->cmdsn);
+	fw_cmd->opcode = hdr->opcode;
+	qedi_cpy_scsi_cdb(sc, (u32 *)fw_cmd->cdb);
+
+	/* Mstorm context */
+	fw_task_ctx->mstorm_st_context.sense_db.lo = (u32)cmd->sense_buffer_dma;
+	fw_task_ctx->mstorm_st_context.sense_db.hi =
+					(u32)((u64)cmd->sense_buffer_dma >> 32);
+	fw_task_ctx->mstorm_ag_context.task_cid = qedi_conn->iscsi_conn_id;
+	fw_task_ctx->mstorm_st_context.task_type = task_type;
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						     qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						   qedi->tid_reuse_count[tid]++;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = scsi_bufflen(sc);
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = scsi_bufflen(sc);
+	fw_task_ctx->ustorm_st_context.exp_data_sn =
+						   be32_to_cpu(hdr->exp_statsn);
+	fw_task_ctx->ustorm_st_context.task_type = task_type;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = cq_idx;
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+
+	num_fast_sgs = (cmd->io_tbl.sge_valid ?
+			min((u16)QEDI_FAST_SGE_COUNT,
+			    (u16)cmd->io_tbl.sge_valid) : 0);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, num_fast_sgs);
+
+	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "Total sge count [%d]\n",
+		  cmd->io_tbl.sge_valid);
+
+	yst_cxt = &fw_task_ctx->ystorm_st_context;
+	mst_cxt = &fw_task_ctx->mstorm_st_context;
+	/* Tx path */
+	if (task_type == ISCSI_TASK_TYPE_INITIATOR_WRITE) {
+		/* not considering super I/O or fast I/O */
+		if (cmd->io_tbl.sge_valid == 1) {
+			cached_sge = &yst_cxt->state.sgl_ctx_union.cached_sge;
+			cached_sge->sge.sge_addr.lo = bd[0].sge_addr.lo;
+			cached_sge->sge.sge_addr.hi = bd[0].sge_addr.hi;
+			cached_sge->sge.sge_len = bd[0].sge_len;
+			qedi->cached_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && cmd->use_slowpath) {
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 1);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES, 0);
+			phys_sgl = &yst_cxt->state.sgl_ctx_union.phys_sgl;
+			phys_sgl->sgl_base.lo = (u32)(cmd->io_tbl.sge_tbl_dma);
+			phys_sgl->sgl_base.hi =
+				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			phys_sgl->sgl_size = cmd->io_tbl.sge_valid;
+			qedi->slow_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && !cmd->use_slowpath) {
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 0);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES,
+				  min((u16)QEDI_FAST_SGE_COUNT,
+				      (u16)cmd->io_tbl.sge_valid));
+			virt_sgl = &yst_cxt->state.sgl_ctx_union.virt_sgl;
+			virt_sgl->sgl_base.lo = (u32)(cmd->io_tbl.sge_tbl_dma);
+			virt_sgl->sgl_base.hi =
+				      (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			virt_sgl->sgl_initial_offset =
+				 (u32)bd[0].sge_addr.lo & (QEDI_PAGE_SIZE - 1);
+			qedi->fast_sgls++;
+		}
+		fw_task_ctx->mstorm_st_context.sgl_size = cmd->io_tbl.sge_valid;
+		fw_task_ctx->mstorm_st_context.rem_task_size = scsi_bufflen(sc);
+	} else {
+		/* Rx path */
+		if (cmd->io_tbl.sge_valid == 1) {
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 0);
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SINGLE_SGE, 1);
+			single_sge = &mst_cxt->sgl_union.single_sge;
+			single_sge->sge_addr.lo = bd[0].sge_addr.lo;
+			single_sge->sge_addr.hi = bd[0].sge_addr.hi;
+			single_sge->sge_len = bd[0].sge_len;
+			qedi->cached_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && cmd->use_slowpath) {
+			sgl_struct = &mst_cxt->sgl_union.sgl_struct;
+			sgl_struct->sgl_addr.lo =
+						(u32)(cmd->io_tbl.sge_tbl_dma);
+			sgl_struct->sgl_addr.hi =
+				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 1);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES, 0);
+			sgl_struct->updated_sge_size = 0;
+			sgl_struct->updated_sge_offset = 0;
+			qedi->slow_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && !cmd->use_slowpath) {
+			sgl_struct = &mst_cxt->sgl_union.sgl_struct;
+			sgl_struct->sgl_addr.lo =
+						(u32)(cmd->io_tbl.sge_tbl_dma);
+			sgl_struct->sgl_addr.hi =
+				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			sgl_struct->byte_offset =
+				(u32)bd[0].sge_addr.lo & (QEDI_PAGE_SIZE - 1);
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 0);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES, 0);
+			sgl_struct->updated_sge_size = 0;
+			sgl_struct->updated_sge_offset = 0;
+			qedi->fast_sgls++;
+		}
+		fw_task_ctx->mstorm_st_context.sgl_size = cmd->io_tbl.sge_valid;
+		fw_task_ctx->mstorm_st_context.rem_task_size = scsi_bufflen(sc);
+	}
+
+	if (cmd->io_tbl.sge_valid == 1) {
+		/* Single-SGE */
+		qedi->use_cached_sge = true;
+	} else {
+		if (cmd->use_slowpath)
+			qedi->use_slow_sge = true;
+		else
+			qedi->use_fast_sge = true;
+	}
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+		  "%s: %s-SGL: num_sges=0x%x first-sge-lo=0x%x first-sge-hi=0x%x\n",
+		  (task_type == ISCSI_TASK_TYPE_INITIATOR_WRITE) ?
+		  "Write " : "Read ", (cmd->io_tbl.sge_valid == 1) ?
+		  "Single" : (cmd->use_slowpath ? "SLOW" : "FAST"),
+		  (u16)cmd->io_tbl.sge_valid, (u32)(cmd->io_tbl.sge_tbl_dma),
+		  (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32));
+
+	/*  Add command in active command list */
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&cmd->io_cmd, &qedi_conn->active_cmd_list);
+	cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	if (io_tracing)
+		qedi_trace_io(qedi, task, tid, QEDI_IO_TRACE_REQ);
+
+	return 0;
+}
+
+int qedi_iscsi_cleanup_task(struct iscsi_task *task, bool mark_cmd_node_deleted)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+	s16 ptu_invalidate = 0;
+
+	QEDI_INFO(&qedi_conn->qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "issue cleanup tid=0x%x itt=0x%x task_state=%d cmd_state=0x%x cid=0x%x\n",
+		  cmd->task_id, get_itt(task->itt), task->state,
+		  cmd->state, qedi_conn->iscsi_conn_id);
+
+	qedi_add_to_sq(qedi_conn, task, cmd->task_id, ptu_invalidate, true);
+	qedi_ring_doorbell(qedi_conn);
+
+	return 0;
+}
diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
index 85ea3d7..c50c2b1 100644
--- a/drivers/scsi/qedi/qedi_gbl.h
+++ b/drivers/scsi/qedi/qedi_gbl.h
@@ -28,11 +28,14 @@ int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
 			  struct iscsi_task *task);
 int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
 			   struct iscsi_task *task);
+int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *mtask);
 int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
 			 struct iscsi_task *task);
 int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
 			   struct iscsi_task *task,
 			   char *datap, int data_len, int unsol);
+int qedi_iscsi_send_ioreq(struct iscsi_task *task);
 int qedi_get_task_idx(struct qedi_ctx *qedi);
 void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
 int qedi_iscsi_cleanup_task(struct iscsi_task *task,
@@ -53,6 +56,9 @@ void qedi_start_conn_recovery(struct qedi_ctx *qedi,
 int qedi_recover_all_conns(struct qedi_ctx *qedi);
 void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
 			  uint16_t que_idx);
+int qedi_cleanup_all_io(struct qedi_ctx *qedi,
+			struct qedi_conn *qedi_conn,
+			struct iscsi_task *task, bool in_recovery);
 void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
 		   u16 tid, int8_t direction);
 int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
index caecdb8..7a07211 100644
--- a/drivers/scsi/qedi/qedi_iscsi.c
+++ b/drivers/scsi/qedi/qedi_iscsi.c
@@ -755,6 +755,9 @@ static int qedi_iscsi_send_generic_request(struct iscsi_task *task)
 	case ISCSI_OP_LOGOUT:
 		rc = qedi_send_iscsi_logout(qedi_conn, task);
 		break;
+	case ISCSI_OP_SCSI_TMFUNC:
+		rc = qedi_iscsi_abort_work(qedi_conn, task);
+		break;
 	case ISCSI_OP_TEXT:
 		rc = qedi_send_iscsi_text(qedi_conn, task);
 		break;
@@ -804,6 +807,9 @@ static int qedi_task_xmit(struct iscsi_task *task)
 
 	if (!sc)
 		return qedi_mtask_xmit(conn, task);
+
+	cmd->scsi_cmd = sc;
+	return qedi_iscsi_send_ioreq(task);
 }
 
 static struct iscsi_endpoint *
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index 22d19a3..fd0d335 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -43,6 +43,10 @@
 module_param(debug, uint, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(debug, " Default debug level");
 
+uint io_tracing;
+module_param(io_tracing, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(io_tracing,
+		 " Enable logging of SCSI requests/completions into trace buffer (default off).");
 const struct qed_iscsi_ops *qedi_ops;
 static struct scsi_transport_template *qedi_scsi_transport;
 static struct pci_driver qedi_pci_driver;
-- 
1.8.3.1


* [RFC 6/6] qedi: Add support for data path.
@ 2016-10-19  5:01   ` manish.rangankar
  0 siblings, 0 replies; 38+ messages in thread
From: manish.rangankar @ 2016-10-19  5:01 UTC (permalink / raw)
  To: lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Manish Rangankar, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

From: Manish Rangankar <manish.rangankar@cavium.com>

This patch adds support for the iSCSI I/O data path and task management
function (TMF) handling.

Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
---
 drivers/scsi/qedi/qedi_fw.c    | 1282 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qedi/qedi_gbl.h   |    6 +
 drivers/scsi/qedi/qedi_iscsi.c |    6 +
 drivers/scsi/qedi/qedi_main.c  |    4 +
 4 files changed, 1298 insertions(+)

diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
index a820785..af1e14d 100644
--- a/drivers/scsi/qedi/qedi_fw.c
+++ b/drivers/scsi/qedi/qedi_fw.c
@@ -147,6 +147,114 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
 	spin_unlock(&session->back_lock);
 }
 
+static void qedi_tmf_resp_work(struct work_struct *work)
+{
+	struct qedi_cmd *qedi_cmd =
+				container_of(work, struct qedi_cmd, tmf_work);
+	struct qedi_conn *qedi_conn = qedi_cmd->conn;
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_tm_rsp *resp_hdr_ptr;
+	struct iscsi_cls_session *cls_sess;
+	int rval = 0;
+
+	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+	resp_hdr_ptr = (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
+	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
+
+	iscsi_block_session(session->cls_session);
+	rval = qedi_cleanup_all_io(qedi, qedi_conn, qedi_cmd->task, true);
+	if (rval) {
+		clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+		qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+		iscsi_unblock_session(session->cls_session);
+		return;
+	}
+
+	iscsi_unblock_session(session->cls_session);
+	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+
+	spin_lock(&session->back_lock);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+	spin_unlock(&session->back_lock);
+	kfree(resp_hdr_ptr);
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+}
+
+static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
+				  union iscsi_cqe *cqe,
+				  struct iscsi_task *task,
+				  struct qedi_conn *qedi_conn)
+
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_tmf_response_hdr *cqe_tmp_response;
+	struct iscsi_tm_rsp *resp_hdr_ptr;
+	struct iscsi_tm *tmf_hdr;
+	struct qedi_cmd *qedi_cmd = NULL;
+	u32 *tmp;
+
+	cqe_tmp_response = &cqe->cqe_common.iscsi_hdr.tmf_response;
+
+	qedi_cmd = task->dd_data;
+	qedi_cmd->tmf_resp_buf = kzalloc(sizeof(*resp_hdr_ptr), GFP_KERNEL);
+	if (!qedi_cmd->tmf_resp_buf) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Failed to allocate resp buf, cid=0x%x\n",
+			  qedi_conn->iscsi_conn_id);
+		return;
+	}
+
+	spin_lock(&session->back_lock);
+	resp_hdr_ptr = (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
+	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_tm_rsp));
+
+	/* Fill up the header */
+	resp_hdr_ptr->opcode = cqe_tmp_response->opcode;
+	resp_hdr_ptr->flags = cqe_tmp_response->hdr_flags;
+	resp_hdr_ptr->response = cqe_tmp_response->hdr_response;
+	resp_hdr_ptr->hlength = 0;
+
+	hton24(resp_hdr_ptr->dlength,
+	       (cqe_tmp_response->hdr_second_dword &
+		ISCSI_TMF_RESPONSE_HDR_DATA_SEG_LEN_MASK));
+	tmp = (u32 *)resp_hdr_ptr->dlength;
+	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
+				      conn->session->age);
+	resp_hdr_ptr->statsn = cpu_to_be32(cqe_tmp_response->stat_sn);
+	resp_hdr_ptr->exp_cmdsn  = cpu_to_be32(cqe_tmp_response->exp_cmd_sn);
+	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_tmp_response->max_cmd_sn);
+
+	tmf_hdr = (struct iscsi_tm *)qedi_cmd->task->hdr;
+
+	if (likely(qedi_cmd->io_cmd_in_list)) {
+		qedi_cmd->io_cmd_in_list = false;
+		list_del_init(&qedi_cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+	}
+
+	if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	      ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
+	    ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	      ISCSI_TM_FUNC_TARGET_WARM_RESET) ||
+	    ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	      ISCSI_TM_FUNC_TARGET_COLD_RESET)) {
+		INIT_WORK(&qedi_cmd->tmf_work, qedi_tmf_resp_work);
+		queue_work(qedi->tmf_thread, &qedi_cmd->tmf_work);
+		goto unblock_sess;
+	}
+
+	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+	kfree(resp_hdr_ptr);
+
+unblock_sess:
+	spin_unlock(&session->back_lock);
+}
+
 static void qedi_process_login_resp(struct qedi_ctx *qedi,
 				    union iscsi_cqe *cqe,
 				    struct iscsi_task *task,
@@ -470,6 +578,121 @@ static void qedi_process_reject_mesg(struct qedi_ctx *qedi,
 	spin_unlock_bh(&session->back_lock);
 }
 
+static void qedi_scsi_completion(struct qedi_ctx *qedi,
+				 union iscsi_cqe *cqe,
+				 struct iscsi_task *task,
+				 struct iscsi_conn *conn)
+{
+	struct scsi_cmnd *sc_cmd;
+	struct qedi_cmd *cmd = task->dd_data;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_scsi_rsp *hdr;
+	struct iscsi_data_in_hdr *cqe_data_in;
+	int datalen = 0;
+	struct qedi_conn *qedi_conn;
+	u32 iscsi_cid;
+	bool mark_cmd_node_deleted = false;
+	u8 cqe_err_bits = 0;
+
+	iscsi_cid  = cqe->cqe_common.conn_id;
+	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
+
+	cqe_data_in = &cqe->cqe_common.iscsi_hdr.data_in;
+	cqe_err_bits =
+		cqe->cqe_common.error_bitmap.error_bits.cqe_error_status_bits;
+
+	spin_lock_bh(&session->back_lock);
+	/* get the scsi command */
+	sc_cmd = cmd->scsi_cmd;
+
+	if (!sc_cmd) {
+		QEDI_WARN(&qedi->dbg_ctx, "sc_cmd is NULL!\n");
+		goto error;
+	}
+
+	if (!sc_cmd->SCp.ptr) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "SCp.ptr is NULL, returned in another context.\n");
+		goto error;
+	}
+
+	if (!sc_cmd->request) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "sc_cmd->request is NULL, sc_cmd=%p.\n",
+			  sc_cmd);
+		goto error;
+	}
+
+	if (!sc_cmd->request->special) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "request->special is NULL so request is not valid, sc_cmd=%p.\n",
+			  sc_cmd);
+		goto error;
+	}
+
+	if (!sc_cmd->request->q) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "request->q is NULL so request is not valid, sc_cmd=%p.\n",
+			  sc_cmd);
+		goto error;
+	}
+
+	qedi_iscsi_unmap_sg_list(cmd);
+
+	hdr = (struct iscsi_scsi_rsp *)task->hdr;
+	hdr->opcode = cqe_data_in->opcode;
+	hdr->max_cmdsn = cpu_to_be32(cqe_data_in->max_cmd_sn);
+	hdr->exp_cmdsn = cpu_to_be32(cqe_data_in->exp_cmd_sn);
+	hdr->itt = build_itt(cqe->cqe_solicited.itid, conn->session->age);
+	hdr->response = cqe_data_in->reserved1;
+	hdr->cmd_status = cqe_data_in->status_rsvd;
+	hdr->flags = cqe_data_in->flags;
+	hdr->residual_count = cpu_to_be32(cqe_data_in->residual_count);
+
+	if (hdr->cmd_status == SAM_STAT_CHECK_CONDITION) {
+		datalen = cqe_data_in->reserved2 &
+			  ISCSI_COMMON_HDR_DATA_SEG_LEN_MASK;
+		memcpy((char *)conn->data, (char *)cmd->sense_buffer, datalen);
+	}
+
+	/* If f/w reports data underrun err then set residual to IO transfer
+	 * length, set Underrun flag and clear Overrun flag explicitly
+	 */
+	if (unlikely(cqe_err_bits &&
+		     GET_FIELD(cqe_err_bits, CQE_ERROR_BITMAP_UNDER_RUN_ERR))) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "Under flow itt=0x%x proto flags=0x%x tid=0x%x cid 0x%x fw resid 0x%x sc dlen 0x%x\n",
+			  hdr->itt, cqe_data_in->flags, cmd->task_id,
+			  qedi_conn->iscsi_conn_id, hdr->residual_count,
+			  scsi_bufflen(sc_cmd));
+		hdr->residual_count = cpu_to_be32(scsi_bufflen(sc_cmd));
+		hdr->flags |= ISCSI_FLAG_CMD_UNDERFLOW;
+		hdr->flags &= (~ISCSI_FLAG_CMD_OVERFLOW);
+	}
+
+	spin_lock(&qedi_conn->list_lock);
+	if (likely(cmd->io_cmd_in_list)) {
+		cmd->io_cmd_in_list = false;
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+		mark_cmd_node_deleted = true;
+	}
+	spin_unlock(&qedi_conn->list_lock);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+		  "Freeing tid=0x%x for cid=0x%x\n",
+		  cmd->task_id, qedi_conn->iscsi_conn_id);
+	cmd->state = RESPONSE_RECEIVED;
+	if (io_tracing)
+		qedi_trace_io(qedi, task, cmd->task_id, QEDI_IO_TRACE_RSP);
+
+	qedi_clear_task_idx(qedi, cmd->task_id);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
+			     conn->data, datalen);
+error:
+	spin_unlock_bh(&session->back_lock);
+}
+
 static void qedi_mtask_completion(struct qedi_ctx *qedi,
 				  union iscsi_cqe *cqe,
 				  struct iscsi_task *task,
@@ -482,9 +705,16 @@ static void qedi_mtask_completion(struct qedi_ctx *qedi,
 	iscsi_conn = conn->cls_conn->dd_data;
 
 	switch (hdr_opcode) {
+	case ISCSI_OPCODE_SCSI_RESPONSE:
+	case ISCSI_OPCODE_DATA_IN:
+		qedi_scsi_completion(qedi, cqe, task, iscsi_conn);
+		break;
 	case ISCSI_OPCODE_LOGIN_RESPONSE:
 		qedi_process_login_resp(qedi, cqe, task, conn);
 		break;
+	case ISCSI_OPCODE_TMF_RESPONSE:
+		qedi_process_tmf_resp(qedi, cqe, task, conn);
+		break;
 	case ISCSI_OPCODE_TEXT_RESPONSE:
 		qedi_process_text_resp(qedi, cqe, task, conn);
 		break;
@@ -520,6 +750,131 @@ static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
 	spin_unlock_bh(&session->back_lock);
 }
 
+static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+					  struct iscsi_cqe_solicited *cqe,
+					  struct iscsi_task *task,
+					  struct iscsi_conn *conn)
+{
+	struct qedi_work_map *work, *work_tmp;
+	u32 proto_itt = cqe->itid;
+	u32 ptmp_itt = 0;
+	itt_t protoitt = 0;
+	int found = 0;
+	struct qedi_cmd *qedi_cmd = NULL;
+	u32 rtid = 0;
+	u32 iscsi_cid;
+	struct qedi_conn *qedi_conn;
+	struct qedi_cmd *cmd_new, *dbg_cmd;
+	struct iscsi_task *mtask;
+	struct iscsi_tm *tmf_hdr = NULL;
+
+	iscsi_cid = cqe->conn_id;
+	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
+
+	/* Based on this itt get the corresponding qedi_cmd */
+	spin_lock_bh(&qedi_conn->tmf_work_lock);
+	list_for_each_entry_safe(work, work_tmp, &qedi_conn->tmf_work_list,
+				 list) {
+		if (work->rtid == proto_itt) {
+			/* We found the command */
+			qedi_cmd = work->qedi_cmd;
+			if (!qedi_cmd->list_tmf_work) {
+				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+					  "TMF work not found, cqe->tid=0x%x, cid=0x%x\n",
+					  proto_itt, qedi_conn->iscsi_conn_id);
+				WARN_ON(1);
+			}
+			found = 1;
+			mtask = qedi_cmd->task;
+			tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+			rtid = work->rtid;
+
+			list_del_init(&work->list);
+			kfree(work);
+			qedi_cmd->list_tmf_work = NULL;
+		}
+	}
+	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+	if (found) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "TMF work, cqe->tid=0x%x, tmf flags=0x%x, cid=0x%x\n",
+			  proto_itt, tmf_hdr->flags, qedi_conn->iscsi_conn_id);
+
+		if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_ABORT_TASK) {
+			spin_lock_bh(&conn->session->back_lock);
+
+			protoitt = build_itt(get_itt(tmf_hdr->rtt),
+					     conn->session->age);
+			task = iscsi_itt_to_task(conn, protoitt);
+
+			spin_unlock_bh(&conn->session->back_lock);
+
+			if (!task) {
+				QEDI_NOTICE(&qedi->dbg_ctx,
+					    "IO task completed, tmf rtt=0x%x, cid=0x%x\n",
+					    get_itt(tmf_hdr->rtt),
+					    qedi_conn->iscsi_conn_id);
+				return;
+			}
+
+			dbg_cmd = task->dd_data;
+
+			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+				  "Abort tmf rtt=0x%x, i/o itt=0x%x, i/o tid=0x%x, cid=0x%x\n",
+				  get_itt(tmf_hdr->rtt), get_itt(task->itt),
+				  dbg_cmd->task_id, qedi_conn->iscsi_conn_id);
+
+			if (qedi_cmd->state == CLEANUP_WAIT_FAILED)
+				qedi_cmd->state = CLEANUP_RECV;
+
+			qedi_clear_task_idx(qedi_conn->qedi, rtid);
+
+			spin_lock(&qedi_conn->list_lock);
+			list_del_init(&dbg_cmd->io_cmd);
+			qedi_conn->active_cmd_count--;
+			spin_unlock(&qedi_conn->list_lock);
+			qedi_cmd->state = CLEANUP_RECV;
+			wake_up_interruptible(&qedi_conn->wait_queue);
+		}
+	} else if (qedi_conn->cmd_cleanup_req > 0) {
+		spin_lock_bh(&conn->session->back_lock);
+		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
+		protoitt = build_itt(ptmp_itt, conn->session->age);
+		task = iscsi_itt_to_task(conn, protoitt);
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "cleanup io itid=0x%x, protoitt=0x%x, cmd_cleanup_cmpl=%d, cid=0x%x\n",
+			  cqe->itid, protoitt, qedi_conn->cmd_cleanup_cmpl,
+			  qedi_conn->iscsi_conn_id);
+
+		spin_unlock_bh(&conn->session->back_lock);
+		if (!task) {
+			QEDI_NOTICE(&qedi->dbg_ctx,
+				    "task is null, itid=0x%x, cid=0x%x\n",
+				    cqe->itid, qedi_conn->iscsi_conn_id);
+			return;
+		}
+		qedi_conn->cmd_cleanup_cmpl++;
+		wake_up(&qedi_conn->wait_queue);
+		cmd_new = task->dd_data;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+			  "Freeing tid=0x%x for cid=0x%x\n",
+			  cqe->itid, qedi_conn->iscsi_conn_id);
+		qedi_clear_task_idx(qedi_conn->qedi, cqe->itid);
+
+	} else {
+		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
+		protoitt = build_itt(ptmp_itt, conn->session->age);
+		task = iscsi_itt_to_task(conn, protoitt);
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Delayed or untracked cleanup response, itt=0x%x, tid=0x%x, cid=0x%x, task=%p\n",
+			 protoitt, cqe->itid, qedi_conn->iscsi_conn_id, task);
+		WARN_ON(1);
+	}
+}
+
 void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
 			  uint16_t que_idx)
 {
@@ -619,6 +974,14 @@ void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
 			break;
 		}
 		goto exit_fp_process;
+	case ISCSI_CQE_TYPE_DUMMY:
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, "Dummy CQE\n");
+		goto exit_fp_process;
+	case ISCSI_CQE_TYPE_TASK_CLEANUP:
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, "Cleanup CQE\n");
+		qedi_process_cmd_cleanup_resp(qedi, &cqe->cqe_solicited, task,
+					      conn);
+		goto exit_fp_process;
 	default:
 		QEDI_ERR(&qedi->dbg_ctx, "Error cqe.\n");
 		break;
@@ -904,6 +1267,440 @@ int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
 	return 0;
 }
 
+int qedi_cleanup_all_io(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
+			struct iscsi_task *task, bool in_recovery)
+{
+	int rval;
+	struct iscsi_task *ctask;
+	struct qedi_cmd *cmd, *cmd_tmp;
+	struct iscsi_tm *tmf_hdr;
+	unsigned int lun = 0;
+	bool lun_reset = false;
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_session *session = conn->session;
+
+	/* task is NULL when called from recovery; valid when from TMF response */
+	if (task) {
+		tmf_hdr = (struct iscsi_tm *)task->hdr;
+
+		if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+			ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) {
+			lun_reset = true;
+			lun = scsilun_to_int(&tmf_hdr->lun);
+		}
+	}
+
+	qedi_conn->cmd_cleanup_req = 0;
+	qedi_conn->cmd_cleanup_cmpl = 0;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "active_cmd_count=%d, cid=0x%x, in_recovery=%d, lun_reset=%d\n",
+		  qedi_conn->active_cmd_count, qedi_conn->iscsi_conn_id,
+		  in_recovery, lun_reset);
+
+	if (lun_reset)
+		spin_lock_bh(&session->back_lock);
+
+	spin_lock(&qedi_conn->list_lock);
+
+	list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
+				 io_cmd) {
+		ctask = cmd->task;
+		if (ctask == task)
+			continue;
+
+		if (lun_reset) {
+			if (cmd->scsi_cmd && cmd->scsi_cmd->device) {
+				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+					  "tid=0x%x itt=0x%x scsi_cmd_ptr=%p device=%p task_state=%d cmd_state=0x%x cid=0x%x\n",
+					  cmd->task_id, get_itt(ctask->itt),
+					  cmd->scsi_cmd, cmd->scsi_cmd->device,
+					  ctask->state, cmd->state,
+					  qedi_conn->iscsi_conn_id);
+				if (cmd->scsi_cmd->device->lun != lun)
+					continue;
+			}
+		}
+		qedi_conn->cmd_cleanup_req++;
+		qedi_iscsi_cleanup_task(ctask, true);
+
+		list_del_init(&cmd->io_cmd);
+		qedi_conn->active_cmd_count--;
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Deleted active cmd list node io_cmd=%p, cid=0x%x\n",
+			  &cmd->io_cmd, qedi_conn->iscsi_conn_id);
+	}
+
+	spin_unlock(&qedi_conn->list_lock);
+
+	if (lun_reset)
+		spin_unlock_bh(&session->back_lock);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "cmd_cleanup_req=%d, cid=0x%x\n",
+		  qedi_conn->cmd_cleanup_req,
+		  qedi_conn->iscsi_conn_id);
+
+	rval  = wait_event_interruptible_timeout(qedi_conn->wait_queue,
+						 ((qedi_conn->cmd_cleanup_req ==
+						 qedi_conn->cmd_cleanup_cmpl) ||
+						 qedi_conn->ep),
+						 5 * HZ);
+	if (rval) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "i/o cmd_cleanup_req=%d, equal to cmd_cleanup_cmpl=%d, cid=0x%x\n",
+			  qedi_conn->cmd_cleanup_req,
+			  qedi_conn->cmd_cleanup_cmpl,
+			  qedi_conn->iscsi_conn_id);
+
+		return 0;
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "i/o cmd_cleanup_req=%d, not equal to cmd_cleanup_cmpl=%d, cid=0x%x\n",
+		  qedi_conn->cmd_cleanup_req,
+		  qedi_conn->cmd_cleanup_cmpl,
+		  qedi_conn->iscsi_conn_id);
+
+	iscsi_host_for_each_session(qedi->shost,
+				    qedi_mark_device_missing);
+	qedi_ops->common->drain(qedi->cdev);
+
+	/* Enable IOs for all other sessions except current.*/
+	if (!wait_event_interruptible_timeout(qedi_conn->wait_queue,
+					      (qedi_conn->cmd_cleanup_req ==
+					       qedi_conn->cmd_cleanup_cmpl),
+					      5 * HZ)) {
+		iscsi_host_for_each_session(qedi->shost,
+					    qedi_mark_device_available);
+		return -1;
+	}
+
+	iscsi_host_for_each_session(qedi->shost,
+				    qedi_mark_device_available);
+
+	return 0;
+}
+
+void qedi_clearsq(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
+		  struct iscsi_task *task)
+{
+	struct qedi_endpoint *qedi_ep;
+	int rval;
+
+	qedi_ep = qedi_conn->ep;
+	qedi_conn->cmd_cleanup_req = 0;
+	qedi_conn->cmd_cleanup_cmpl = 0;
+
+	if (!qedi_ep) {
+		QEDI_WARN(&qedi->dbg_ctx,
+			  "Cannot proceed, ep already disconnected, cid=0x%x\n",
+			  qedi_conn->iscsi_conn_id);
+		return;
+	}
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Clearing SQ for cid=0x%x, conn=%p, ep=%p\n",
+		  qedi_conn->iscsi_conn_id, qedi_conn, qedi_ep);
+
+	qedi_ops->clear_sq(qedi->cdev, qedi_ep->handle);
+
+	rval = qedi_cleanup_all_io(qedi, qedi_conn, task, true);
+	if (rval) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "fatal error, need hard reset, cid=0x%x\n",
+			 qedi_conn->iscsi_conn_id);
+		WARN_ON(1);
+	}
+}
+
+static int qedi_wait_for_cleanup_request(struct qedi_ctx *qedi,
+					 struct qedi_conn *qedi_conn,
+					 struct iscsi_task *task,
+					 struct qedi_cmd *qedi_cmd,
+					 struct qedi_work_map *list_work)
+{
+	struct qedi_cmd *cmd = (struct qedi_cmd *)task->dd_data;
+	int wait;
+
+	wait  = wait_event_interruptible_timeout(qedi_conn->wait_queue,
+						 ((qedi_cmd->state ==
+						   CLEANUP_RECV) ||
+						 ((qedi_cmd->type == TYPEIO) &&
+						  (cmd->state ==
+						   RESPONSE_RECEIVED))),
+						 5 * HZ);
+	if (!wait) {
+		qedi_cmd->state = CLEANUP_WAIT_FAILED;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+			  "Cleanup timed out tid=0x%x, issue connection recovery, cid=0x%x\n",
+			  cmd->task_id, qedi_conn->iscsi_conn_id);
+
+		return -1;
+	}
+	return 0;
+}
+
+static void qedi_tmf_work(struct work_struct *work)
+{
+	struct qedi_cmd *qedi_cmd =
+		container_of(work, struct qedi_cmd, tmf_work);
+	struct qedi_conn *qedi_conn = qedi_cmd->conn;
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct iscsi_cls_session *cls_sess;
+	struct qedi_work_map *list_work = NULL;
+	struct iscsi_task *mtask;
+	struct qedi_cmd *cmd;
+	struct iscsi_task *ctask;
+	struct iscsi_tm *tmf_hdr;
+	s16 rval = 0;
+	s16 tid = 0;
+
+	mtask = qedi_cmd->task;
+	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
+	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+
+	ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
+	if (!ctask || !ctask->sc) {
+		QEDI_ERR(&qedi->dbg_ctx, "Task already completed\n");
+		goto abort_ret;
+	}
+
+	cmd = (struct qedi_cmd *)ctask->dd_data;
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+		  "Abort tmf rtt=0x%x, cmd itt=0x%x, cmd tid=0x%x, cid=0x%x\n",
+		  get_itt(tmf_hdr->rtt), get_itt(ctask->itt), cmd->task_id,
+		  qedi_conn->iscsi_conn_id);
+
+	if (do_not_recover) {
+		QEDI_ERR(&qedi->dbg_ctx, "DONT SEND CLEANUP/ABORT %d\n",
+			 do_not_recover);
+		goto abort_ret;
+	}
+
+	list_work = kzalloc(sizeof(*list_work), GFP_ATOMIC);
+	if (!list_work) {
+		QEDI_ERR(&qedi->dbg_ctx, "Memory allocation failed\n");
+		goto abort_ret;
+	}
+
+	qedi_cmd->type = TYPEIO;
+	list_work->qedi_cmd = qedi_cmd;
+	list_work->rtid = cmd->task_id;
+	list_work->state = QEDI_WORK_SCHEDULED;
+	qedi_cmd->list_tmf_work = list_work;
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "Queue tmf work=%p, list node=%p, cid=0x%x, tmf flags=0x%x\n",
+		  list_work->ptr_tmf_work, list_work, qedi_conn->iscsi_conn_id,
+		  tmf_hdr->flags);
+
+	spin_lock_bh(&qedi_conn->tmf_work_lock);
+	list_add_tail(&list_work->list, &qedi_conn->tmf_work_list);
+	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+	qedi_iscsi_cleanup_task(ctask, false);
+
+	rval = qedi_wait_for_cleanup_request(qedi, qedi_conn, ctask, qedi_cmd,
+					     list_work);
+	if (rval == -1) {
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+			  "FW cleanup got escalated, cid=0x%x\n",
+			  qedi_conn->iscsi_conn_id);
+		goto ldel_exit;
+	}
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1) {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid tid, cid=0x%x\n",
+			 qedi_conn->iscsi_conn_id);
+		goto ldel_exit;
+	}
+
+	qedi_cmd->task_id = tid;
+	qedi_send_iscsi_tmf(qedi_conn, qedi_cmd->task);
+
+abort_ret:
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+	return;
+
+ldel_exit:
+	spin_lock_bh(&qedi_conn->tmf_work_lock);
+	if (!qedi_cmd->list_tmf_work) {
+		list_del_init(&list_work->list);
+		qedi_cmd->list_tmf_work = NULL;
+		kfree(list_work);
+	}
+	spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_del_init(&cmd->io_cmd);
+	qedi_conn->active_cmd_count--;
+	spin_unlock(&qedi_conn->list_lock);
+
+	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+}
+
+static int qedi_send_iscsi_tmf(struct qedi_conn *qedi_conn,
+			       struct iscsi_task *mtask)
+{
+	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_tmf_request_hdr *fw_tmf_request;
+	struct iscsi_sge *single_sge;
+	struct qedi_cmd *qedi_cmd;
+	struct qedi_cmd *cmd;
+	struct iscsi_task *ctask;
+	struct iscsi_tm *tmf_hdr;
+	struct iscsi_sge *req_sge;
+	struct iscsi_sge *resp_sge;
+	u32 scsi_lun[2];
+	s16 tid = 0, ptu_invalidate = 0;
+
+	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
+	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
+	qedi_cmd = (struct qedi_cmd *)mtask->dd_data;
+	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+
+	tid = qedi_cmd->task_id;
+	qedi_update_itt_map(qedi, tid, mtask->itt);
+
+	fw_task_ctx =
+	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+
+	fw_tmf_request = &fw_task_ctx->ystorm_st_context.pdu_hdr.tmf_request;
+	fw_tmf_request->itt = qedi_set_itt(tid, get_itt(mtask->itt));
+	fw_tmf_request->cmd_sn = be32_to_cpu(tmf_hdr->cmdsn);
+
+	memcpy(scsi_lun, &tmf_hdr->lun, sizeof(struct scsi_lun));
+	fw_tmf_request->lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_tmf_request->lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						qedi->tid_reuse_count[tid]++;
+
+	if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	     ISCSI_TM_FUNC_ABORT_TASK) {
+		ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
+		if (!ctask || !ctask->sc) {
+			QEDI_ERR(&qedi->dbg_ctx,
+				 "Could not get reference task\n");
+			return 0;
+		}
+		cmd = (struct qedi_cmd *)ctask->dd_data;
+		fw_tmf_request->rtt =
+				qedi_set_itt(cmd->task_id,
+					     get_itt(tmf_hdr->rtt));
+	} else {
+		fw_tmf_request->rtt = ISCSI_RESERVED_TAG;
+	}
+
+	fw_tmf_request->opcode = tmf_hdr->opcode;
+	fw_tmf_request->function = tmf_hdr->flags;
+	fw_tmf_request->hdr_second_dword = ntoh24(tmf_hdr->dlength);
+	fw_tmf_request->ref_cmd_sn = be32_to_cpu(tmf_hdr->refcmdsn);
+
+	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
+	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
+	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
+	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
+	single_sge->sge_len = resp_sge->sge_len;
+
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SINGLE_SGE, 1);
+	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+		  ISCSI_MFLAGS_SLOW_IO, 0);
+	fw_task_ctx->mstorm_st_context.sgl_size = 1;
+	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = 0;
+	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
+	fw_task_ctx->ustorm_st_context.task_type =  ISCSI_TASK_TYPE_MIDPATH;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
+
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, 0);
+
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "Add TMF to SQ, tmf tid=0x%x, itt=0x%x, cid=0x%x\n",
+		  tid,  mtask->itt, qedi_conn->iscsi_conn_id);
+
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
+	qedi_cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, mtask, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	return 0;
+}
+
+int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *mtask)
+{
+	struct qedi_ctx *qedi = qedi_conn->qedi;
+	struct iscsi_tm *tmf_hdr;
+	struct qedi_cmd *qedi_cmd = (struct qedi_cmd *)mtask->dd_data;
+	s16 tid = 0;
+
+	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+	qedi_cmd->task = mtask;
+
+	/* If abort task then schedule the work and return */
+	if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+	    ISCSI_TM_FUNC_ABORT_TASK) {
+		qedi_cmd->state = CLEANUP_WAIT;
+		INIT_WORK(&qedi_cmd->tmf_work, qedi_tmf_work);
+		queue_work(qedi->tmf_thread, &qedi_cmd->tmf_work);
+
+	} else if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
+		   ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_TARGET_WARM_RESET) ||
+		   ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+		    ISCSI_TM_FUNC_TARGET_COLD_RESET)) {
+		tid = qedi_get_task_idx(qedi);
+		if (tid == -1) {
+			QEDI_ERR(&qedi->dbg_ctx, "Invalid tid, cid=0x%x\n",
+				 qedi_conn->iscsi_conn_id);
+			return -1;
+		}
+		qedi_cmd->task_id = tid;
+
+		qedi_send_iscsi_tmf(qedi_conn, qedi_cmd->task);
+
+	} else {
+		QEDI_ERR(&qedi->dbg_ctx, "Invalid tmf, cid=0x%x\n",
+			 qedi_conn->iscsi_conn_id);
+		return -1;
+	}
+
+	return 0;
+}
+
 int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
 			 struct iscsi_task *task)
 {
@@ -1121,3 +1918,488 @@ int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
 	qedi_ring_doorbell(qedi_conn);
 	return 0;
 }
+
+static int qedi_split_bd(struct qedi_cmd *cmd, u64 addr, int sg_len,
+			 int bd_index)
+{
+	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+	int frag_size, sg_frags;
+
+	sg_frags = 0;
+
+	while (sg_len) {
+		if (addr % QEDI_PAGE_SIZE)
+			frag_size =
+				   (QEDI_PAGE_SIZE - (addr % QEDI_PAGE_SIZE));
+		else
+			frag_size = (sg_len > QEDI_BD_SPLIT_SZ) ? 0 :
+				    (sg_len % QEDI_BD_SPLIT_SZ);
+
+		if (frag_size == 0)
+			frag_size = QEDI_BD_SPLIT_SZ;
+
+		bd[bd_index + sg_frags].sge_addr.lo = (addr & 0xffffffff);
+		bd[bd_index + sg_frags].sge_addr.hi = (addr >> 32);
+		bd[bd_index + sg_frags].sge_len = (u16)frag_size;
+		QEDI_INFO(&cmd->conn->qedi->dbg_ctx, QEDI_LOG_IO,
+			  "split sge %d: addr=%llx, len=%x",
+			  (bd_index + sg_frags), addr, frag_size);
+
+		addr += (u64)frag_size;
+		sg_frags++;
+		sg_len -= frag_size;
+	}
+	return sg_frags;
+}
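For reference, the splitting rule above (first cut to the next QEDI_PAGE_SIZE boundary, then emit QEDI_BD_SPLIT_SZ-sized pieces) can be exercised as a stand-alone user-space sketch. The 4 KiB page and 32 KiB split constants below are illustrative assumptions, not the driver's actual values:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SZ  4096u   /* stand-in for QEDI_PAGE_SIZE (assumed) */
#define SPLIT_SZ 32768u  /* stand-in for QEDI_BD_SPLIT_SZ (assumed) */

/* Count how many BDs a buffer at 'addr' spanning 'len' bytes would be
 * split into, mirroring the loop in qedi_split_bd(). */
static int count_frags(uint64_t addr, uint32_t len)
{
	int frags = 0;

	while (len) {
		uint32_t frag;

		if (addr % PAGE_SZ)
			frag = PAGE_SZ - (addr % PAGE_SZ);
		else
			frag = (len > SPLIT_SZ) ? 0 : (len % SPLIT_SZ);
		if (frag == 0)
			frag = SPLIT_SZ;
		if (frag > len)	/* the driver assumes len covers the partial page */
			frag = len;
		addr += frag;
		len -= frag;
		frags++;
	}
	return frags;
}
```

With these constants, an aligned 64 KiB buffer splits into two 32 KiB BDs, while a buffer starting mid-page emits the partial-page fragment first.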
+
+static int qedi_map_scsi_sg(struct qedi_ctx *qedi, struct qedi_cmd *cmd)
+{
+	struct scsi_cmnd *sc = cmd->scsi_cmd;
+	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+	struct scatterlist *sg;
+	int byte_count = 0;
+	int bd_count = 0;
+	int sg_count;
+	int sg_len;
+	int sg_frags;
+	u64 addr, end_addr;
+	int i;
+
+	WARN_ON(scsi_sg_count(sc) > QEDI_ISCSI_MAX_BDS_PER_CMD);
+
+	sg_count = dma_map_sg(&qedi->pdev->dev, scsi_sglist(sc),
+			      scsi_sg_count(sc), sc->sc_data_direction);
+
+	/*
+	 * New condition to send single SGE as cached-SGL.
+	 * Single SGE with length less than 64K.
+	 */
+	sg = scsi_sglist(sc);
+	if ((sg_count == 1) && (sg_dma_len(sg) <= MAX_SGLEN_FOR_CACHESGL)) {
+		sg_len = sg_dma_len(sg);
+		addr = (u64)sg_dma_address(sg);
+
+		bd[bd_count].sge_addr.lo = (addr & 0xffffffff);
+		bd[bd_count].sge_addr.hi = (addr >> 32);
+		bd[bd_count].sge_len = (u16)sg_len;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+			  "single-cached-sgl: bd_count:%d addr=%llx, len=%x",
+			  sg_count, addr, sg_len);
+
+		return ++bd_count;
+	}
+
+	scsi_for_each_sg(sc, sg, sg_count, i) {
+		sg_len = sg_dma_len(sg);
+		addr = (u64)sg_dma_address(sg);
+		end_addr = (addr + sg_len);
+
+		/*
+		 * first sg elem in the 'list',
+		 * check if end addr is page-aligned.
+		 */
+		if ((i == 0) && (sg_count > 1) && (end_addr % QEDI_PAGE_SIZE))
+			cmd->use_slowpath = true;
+
+		/*
+		 * last sg elem in the 'list',
+		 * check if start addr is page-aligned.
+		 */
+		else if ((i == (sg_count - 1)) &&
+			 (sg_count > 1) && (addr % QEDI_PAGE_SIZE))
+			cmd->use_slowpath = true;
+
+		/*
+		 * middle sg elements in list,
+		 * check if start and end addr are page-aligned
+		 */
+		else if ((i != 0) && (i != (sg_count - 1)) &&
+			 ((addr % QEDI_PAGE_SIZE) ||
+			 (end_addr % QEDI_PAGE_SIZE)))
+			cmd->use_slowpath = true;
+
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "sg[%d] size=0x%x",
+			  i, sg_len);
+
+		if (sg_len > QEDI_BD_SPLIT_SZ) {
+			sg_frags = qedi_split_bd(cmd, addr, sg_len, bd_count);
+		} else {
+			sg_frags = 1;
+			bd[bd_count].sge_addr.lo = addr & 0xffffffff;
+			bd[bd_count].sge_addr.hi = addr >> 32;
+			bd[bd_count].sge_len = sg_len;
+		}
+		byte_count += sg_len;
+		bd_count += sg_frags;
+	}
+
+	if (byte_count != scsi_bufflen(sc))
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "byte_count = %d != scsi_bufflen = %d\n", byte_count,
+			 scsi_bufflen(sc));
+	else
+		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "byte_count = %d\n",
+			  byte_count);
+
+	WARN_ON(byte_count != scsi_bufflen(sc));
+
+	return bd_count;
+}
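The three alignment checks above reduce to one rule: the fast path requires every SG-element boundary that is interior to the list to fall on a page boundary (first element must end aligned, last must start aligned, middle elements need both). A hedged user-space sketch of that decision, assuming 4 KiB pages and a hypothetical `struct sge` standing in for the scatterlist:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SZ 4096u  /* stand-in for QEDI_PAGE_SIZE (assumed) */

struct sge {           /* hypothetical stand-in for a mapped SG element */
	uint64_t addr;
	uint32_t len;
};

/* Return 1 if the SGL must take the slow path, 0 if the fast path is
 * usable, mirroring the checks in qedi_map_scsi_sg(). */
static int needs_slowpath(const struct sge *sg, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		uint64_t end = sg[i].addr + sg[i].len;

		if (n > 1 && i == 0 && (end % PAGE_SZ))
			return 1;		/* first elem: end must align */
		if (n > 1 && i == n - 1 && (sg[i].addr % PAGE_SZ))
			return 1;		/* last elem: start must align */
		if (i != 0 && i != n - 1 &&
		    ((sg[i].addr % PAGE_SZ) || (end % PAGE_SZ)))
			return 1;		/* middle elems: both ends */
	}
	return 0;
}
```

A single-element list never forces the slow path here, matching the driver's cached-SGE special case.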
+
+static void qedi_iscsi_map_sg_list(struct qedi_cmd *cmd)
+{
+	int bd_count;
+	struct scsi_cmnd *sc = cmd->scsi_cmd;
+
+	if (scsi_sg_count(sc)) {
+		bd_count  = qedi_map_scsi_sg(cmd->conn->qedi, cmd);
+		if (bd_count == 0)
+			return;
+	} else {
+		struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+
+		bd[0].sge_addr.lo = 0;
+		bd[0].sge_addr.hi = 0;
+		bd[0].sge_len = 0;
+		bd_count = 0;
+	}
+	cmd->io_tbl.sge_valid = bd_count;
+}
+
+static void qedi_cpy_scsi_cdb(struct scsi_cmnd *sc, u32 *dstp)
+{
+	u32 dword;
+	int lpcnt;
+	u8 *srcp;
+
+	lpcnt = sc->cmd_len / sizeof(dword);
+	srcp = (u8 *)sc->cmnd;
+	while (lpcnt--) {
+		memcpy(&dword, (const void *)srcp, 4);
+		*dstp = cpu_to_be32(dword);
+		srcp += 4;
+		dstp++;
+	}
+	if (sc->cmd_len & 0x3) {
+		dword = 0;
+		memcpy(&dword, srcp, sc->cmd_len & 0x3);
+		*dstp = cpu_to_be32(dword);
+	}
+}
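qedi_cpy_scsi_cdb() stores the CDB as 32-bit big-endian words for the firmware, zero-padding any 1-3 byte tail into the final word. A minimal user-space equivalent, using htonl() as a stand-in for cpu_to_be32():

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl() as a stand-in for cpu_to_be32() */

/* Copy a 'len'-byte SCSI CDB into 32-bit big-endian words. */
static void cdb_to_be32(const uint8_t *src, int len, uint32_t *dst)
{
	int i;

	for (i = 0; i + 4 <= len; i += 4) {
		uint32_t w;

		memcpy(&w, src + i, 4);
		*dst++ = htonl(w);
	}
	if (len & 0x3) {
		uint32_t w = 0;

		memcpy(&w, src + i, len & 0x3);	/* zero-pad the tail word */
		*dst = htonl(w);
	}
}
```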
+
+void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
+		   u16 tid, int8_t direction)
+{
+	struct qedi_io_log *io_log;
+	struct iscsi_conn *conn = task->conn;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct scsi_cmnd *sc_cmd = task->sc;
+	unsigned long flags;
+	u8 op;
+
+	spin_lock_irqsave(&qedi->io_trace_lock, flags);
+
+	io_log = &qedi->io_trace_buf[qedi->io_trace_idx];
+	io_log->direction = direction;
+	io_log->task_id = tid;
+	io_log->cid = qedi_conn->iscsi_conn_id;
+	io_log->lun = sc_cmd->device->lun;
+	io_log->op = sc_cmd->cmnd[0];
+	op = sc_cmd->cmnd[0];
+
+	if (op == READ_10 || op == WRITE_10) {
+		io_log->lba[0] = sc_cmd->cmnd[2];
+		io_log->lba[1] = sc_cmd->cmnd[3];
+		io_log->lba[2] = sc_cmd->cmnd[4];
+		io_log->lba[3] = sc_cmd->cmnd[5];
+	} else {
+		io_log->lba[0] = 0;
+		io_log->lba[1] = 0;
+		io_log->lba[2] = 0;
+		io_log->lba[3] = 0;
+	}
+	io_log->bufflen = scsi_bufflen(sc_cmd);
+	io_log->sg_count = scsi_sg_count(sc_cmd);
+	io_log->fast_sgs = qedi->fast_sgls;
+	io_log->cached_sgs = qedi->cached_sgls;
+	io_log->slow_sgs = qedi->slow_sgls;
+	io_log->cached_sge = qedi->use_cached_sge;
+	io_log->slow_sge = qedi->use_slow_sge;
+	io_log->fast_sge = qedi->use_fast_sge;
+	io_log->result = sc_cmd->result;
+	io_log->jiffies = jiffies;
+	io_log->blk_req_cpu = smp_processor_id();
+
+	if (direction == QEDI_IO_TRACE_REQ) {
+		/* For requests we only care about the submission CPU */
+		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
+		io_log->intr_cpu = 0;
+		io_log->blk_rsp_cpu = 0;
+	} else if (direction == QEDI_IO_TRACE_RSP) {
+		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
+		io_log->intr_cpu = qedi->intr_cpu;
+		io_log->blk_rsp_cpu = smp_processor_id();
+	}
+
+	qedi->io_trace_idx++;
+	if (qedi->io_trace_idx == QEDI_IO_TRACE_SIZE)
+		qedi->io_trace_idx = 0;
+
+	qedi->use_cached_sge = false;
+	qedi->use_slow_sge = false;
+	qedi->use_fast_sge = false;
+
+	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
+}
+
+int qedi_iscsi_send_ioreq(struct iscsi_task *task)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct iscsi_session *session = conn->session;
+	struct Scsi_Host *shost = iscsi_session_to_shost(session->cls_session);
+	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+	struct scsi_cmnd *sc = task->sc;
+	struct iscsi_task_context *fw_task_ctx;
+	struct iscsi_cached_sge_ctx *cached_sge;
+	struct iscsi_phys_sgl_ctx *phys_sgl;
+	struct iscsi_virt_sgl_ctx *virt_sgl;
+	struct ystorm_iscsi_task_st_ctx *yst_cxt;
+	struct mstorm_iscsi_task_st_ctx *mst_cxt;
+	struct iscsi_sgl *sgl_struct;
+	struct iscsi_sge *single_sge;
+	struct iscsi_scsi_req *hdr = (struct iscsi_scsi_req *)task->hdr;
+	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
+	enum iscsi_task_type task_type;
+	struct iscsi_cmd_hdr *fw_cmd;
+	u32 scsi_lun[2];
+	u16 cq_idx = smp_processor_id() % qedi->num_queues;
+	s16 ptu_invalidate = 0;
+	s16 tid = 0;
+	u8 num_fast_sgs;
+
+	tid = qedi_get_task_idx(qedi);
+	if (tid == -1)
+		return -ENOMEM;
+
+	qedi_iscsi_map_sg_list(cmd);
+
+	int_to_scsilun(sc->device->lun, (struct scsi_lun *)scsi_lun);
+	fw_task_ctx =
+	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
+
+	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
+	cmd->task_id = tid;
+
+	/* Ystorm context */
+	fw_cmd = &fw_task_ctx->ystorm_st_context.pdu_hdr.cmd;
+	SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_ATTR, ISCSI_ATTR_SIMPLE);
+
+	if (sc->sc_data_direction == DMA_TO_DEVICE) {
+		if (conn->session->initial_r2t_en) {
+			fw_task_ctx->ustorm_ag_context.exp_data_acked =
+				min((conn->session->imm_data_en *
+				    conn->max_xmit_dlength),
+				    conn->session->first_burst);
+			fw_task_ctx->ustorm_ag_context.exp_data_acked =
+			      min(fw_task_ctx->ustorm_ag_context.exp_data_acked,
+				  scsi_bufflen(sc));
+		} else {
+			fw_task_ctx->ustorm_ag_context.exp_data_acked =
+			      min(conn->session->first_burst, scsi_bufflen(sc));
+		}
+
+		SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_WRITE, 1);
+		task_type = ISCSI_TASK_TYPE_INITIATOR_WRITE;
+	} else {
+		if (scsi_bufflen(sc))
+			SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_READ, 1);
+		task_type = ISCSI_TASK_TYPE_INITIATOR_READ;
+	}
+
+	fw_cmd->lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_cmd->lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	qedi_update_itt_map(qedi, tid, task->itt);
+	fw_cmd->itt = qedi_set_itt(tid, get_itt(task->itt));
+	fw_cmd->expected_transfer_length = scsi_bufflen(sc);
+	fw_cmd->cmd_sn = be32_to_cpu(hdr->cmdsn);
+	fw_cmd->opcode = hdr->opcode;
+	qedi_cpy_scsi_cdb(sc, (u32 *)fw_cmd->cdb);
+
+	/* Mstorm context */
+	fw_task_ctx->mstorm_st_context.sense_db.lo = (u32)cmd->sense_buffer_dma;
+	fw_task_ctx->mstorm_st_context.sense_db.hi =
+					(u32)((u64)cmd->sense_buffer_dma >> 32);
+	fw_task_ctx->mstorm_ag_context.task_cid = qedi_conn->iscsi_conn_id;
+	fw_task_ctx->mstorm_st_context.task_type = task_type;
+
+	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
+		ptu_invalidate = 1;
+		qedi->tid_reuse_count[tid] = 0;
+	}
+	fw_task_ctx->ystorm_st_context.state.reuse_count =
+						     qedi->tid_reuse_count[tid];
+	fw_task_ctx->mstorm_st_context.reuse_count =
+						   qedi->tid_reuse_count[tid]++;
+
+	/* Ustorm context */
+	fw_task_ctx->ustorm_st_context.rem_rcv_len = scsi_bufflen(sc);
+	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = scsi_bufflen(sc);
+	fw_task_ctx->ustorm_st_context.exp_data_sn =
+						   be32_to_cpu(hdr->exp_statsn);
+	fw_task_ctx->ustorm_st_context.task_type = task_type;
+	fw_task_ctx->ustorm_st_context.cq_rss_number = cq_idx;
+	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
+
+	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
+		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
+		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
+
+	num_fast_sgs = (cmd->io_tbl.sge_valid ?
+			min((u16)QEDI_FAST_SGE_COUNT,
+			    (u16)cmd->io_tbl.sge_valid) : 0);
+	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+		  ISCSI_REG1_NUM_FAST_SGES, num_fast_sgs);
+
+	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
+	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
+
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "Total sge count [%d]\n",
+		  cmd->io_tbl.sge_valid);
+
+	yst_cxt = &fw_task_ctx->ystorm_st_context;
+	mst_cxt = &fw_task_ctx->mstorm_st_context;
+	/* Tx path */
+	if (task_type == ISCSI_TASK_TYPE_INITIATOR_WRITE) {
+		/* not considering superIO or FastIO */
+		if (cmd->io_tbl.sge_valid == 1) {
+			cached_sge = &yst_cxt->state.sgl_ctx_union.cached_sge;
+			cached_sge->sge.sge_addr.lo = bd[0].sge_addr.lo;
+			cached_sge->sge.sge_addr.hi = bd[0].sge_addr.hi;
+			cached_sge->sge.sge_len = bd[0].sge_len;
+			qedi->cached_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && cmd->use_slowpath) {
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 1);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES, 0);
+			phys_sgl = &yst_cxt->state.sgl_ctx_union.phys_sgl;
+			phys_sgl->sgl_base.lo = (u32)(cmd->io_tbl.sge_tbl_dma);
+			phys_sgl->sgl_base.hi =
+				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			phys_sgl->sgl_size = cmd->io_tbl.sge_valid;
+			qedi->slow_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && !cmd->use_slowpath) {
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 0);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES,
+				  min((u16)QEDI_FAST_SGE_COUNT,
+				      (u16)cmd->io_tbl.sge_valid));
+			virt_sgl = &yst_cxt->state.sgl_ctx_union.virt_sgl;
+			virt_sgl->sgl_base.lo = (u32)(cmd->io_tbl.sge_tbl_dma);
+			virt_sgl->sgl_base.hi =
+				      (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			virt_sgl->sgl_initial_offset =
+				 (u32)bd[0].sge_addr.lo & (QEDI_PAGE_SIZE - 1);
+			qedi->fast_sgls++;
+		}
+		fw_task_ctx->mstorm_st_context.sgl_size = cmd->io_tbl.sge_valid;
+		fw_task_ctx->mstorm_st_context.rem_task_size = scsi_bufflen(sc);
+	} else {
+	/* Rx path */
+		if (cmd->io_tbl.sge_valid == 1) {
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 0);
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SINGLE_SGE, 1);
+			single_sge = &mst_cxt->sgl_union.single_sge;
+			single_sge->sge_addr.lo = bd[0].sge_addr.lo;
+			single_sge->sge_addr.hi = bd[0].sge_addr.hi;
+			single_sge->sge_len = bd[0].sge_len;
+			qedi->cached_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && cmd->use_slowpath) {
+			sgl_struct = &mst_cxt->sgl_union.sgl_struct;
+			sgl_struct->sgl_addr.lo =
+						(u32)(cmd->io_tbl.sge_tbl_dma);
+			sgl_struct->sgl_addr.hi =
+				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 1);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES, 0);
+			sgl_struct->updated_sge_size = 0;
+			sgl_struct->updated_sge_offset = 0;
+			qedi->slow_sgls++;
+		} else if ((cmd->io_tbl.sge_valid != 1) && !cmd->use_slowpath) {
+			sgl_struct = &mst_cxt->sgl_union.sgl_struct;
+			sgl_struct->sgl_addr.lo =
+						(u32)(cmd->io_tbl.sge_tbl_dma);
+			sgl_struct->sgl_addr.hi =
+				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
+			sgl_struct->byte_offset =
+				(u32)bd[0].sge_addr.lo & (QEDI_PAGE_SIZE - 1);
+			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
+				  ISCSI_MFLAGS_SLOW_IO, 0);
+			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
+				  ISCSI_REG1_NUM_FAST_SGES, 0);
+			sgl_struct->updated_sge_size = 0;
+			sgl_struct->updated_sge_offset = 0;
+			qedi->fast_sgls++;
+		}
+		fw_task_ctx->mstorm_st_context.sgl_size = cmd->io_tbl.sge_valid;
+		fw_task_ctx->mstorm_st_context.rem_task_size = scsi_bufflen(sc);
+	}
+
+	if (cmd->io_tbl.sge_valid == 1)
+		/* Single-SGL */
+		qedi->use_cached_sge = true;
+	else {
+		if (cmd->use_slowpath)
+			qedi->use_slow_sge = true;
+		else
+			qedi->use_fast_sge = true;
+	}
+	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
+		  "%s: %s-SGL: num_sges=0x%x first-sge-lo=0x%x first-sge-hi=0x%x",
+		  (task_type == ISCSI_TASK_TYPE_INITIATOR_WRITE) ?
+		  "Write " : "Read ", (cmd->io_tbl.sge_valid == 1) ?
+		  "Single" : (cmd->use_slowpath ? "SLOW" : "FAST"),
+		  (u16)cmd->io_tbl.sge_valid, (u32)(cmd->io_tbl.sge_tbl_dma),
+		  (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32));
+
+	/*  Add command in active command list */
+	spin_lock(&qedi_conn->list_lock);
+	list_add_tail(&cmd->io_cmd, &qedi_conn->active_cmd_list);
+	cmd->io_cmd_in_list = true;
+	qedi_conn->active_cmd_count++;
+	spin_unlock(&qedi_conn->list_lock);
+
+	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
+	qedi_ring_doorbell(qedi_conn);
+	if (io_tracing)
+		qedi_trace_io(qedi, task, tid, QEDI_IO_TRACE_REQ);
+
+	return 0;
+}
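The exp_data_acked setup in the write path above condenses to a little min() arithmetic over the negotiated session parameters. A sketch (parameter names mirror the iSCSI session fields; the values in any test are purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

/* How much WRITE data the initiator may consider pre-acknowledged:
 * with InitialR2T enabled, only immediate data counts (capped by
 * FirstBurstLength); otherwise the first burst applies. Always capped
 * by the actual transfer length. */
static uint32_t exp_data_acked(int initial_r2t_en, int imm_data_en,
			       uint32_t max_xmit_dlength,
			       uint32_t first_burst, uint32_t bufflen)
{
	uint32_t acked;

	if (initial_r2t_en)
		acked = min_u32(imm_data_en * max_xmit_dlength, first_burst);
	else
		acked = first_burst;
	return min_u32(acked, bufflen);
}
```

For example, with immediate data enabled and an 8 KiB MaxXmitDataSegmentLength, a 4 KiB write is fully pre-acknowledged, while disabling immediate data under InitialR2T yields zero.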
+
+int qedi_iscsi_cleanup_task(struct iscsi_task *task, bool mark_cmd_node_deleted)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct qedi_conn *qedi_conn = conn->dd_data;
+	struct qedi_cmd *cmd = task->dd_data;
+	s16 ptu_invalidate = 0;
+
+	QEDI_INFO(&qedi_conn->qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
+		  "issue cleanup tid=0x%x itt=0x%x task_state=%d cmd_state=0%x cid=0x%x\n",
+		  cmd->task_id, get_itt(task->itt), task->state,
+		  cmd->state, qedi_conn->iscsi_conn_id);
+
+	qedi_add_to_sq(qedi_conn, task, cmd->task_id, ptu_invalidate, true);
+	qedi_ring_doorbell(qedi_conn);
+
+	return 0;
+}
diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
index 85ea3d7..c50c2b1 100644
--- a/drivers/scsi/qedi/qedi_gbl.h
+++ b/drivers/scsi/qedi/qedi_gbl.h
@@ -28,11 +28,14 @@ int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
 			  struct iscsi_task *task);
 int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
 			   struct iscsi_task *task);
+int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
+			  struct iscsi_task *mtask);
 int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
 			 struct iscsi_task *task);
 int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
 			   struct iscsi_task *task,
 			   char *datap, int data_len, int unsol);
+int qedi_iscsi_send_ioreq(struct iscsi_task *task);
 int qedi_get_task_idx(struct qedi_ctx *qedi);
 void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
 int qedi_iscsi_cleanup_task(struct iscsi_task *task,
@@ -53,6 +56,9 @@ void qedi_start_conn_recovery(struct qedi_ctx *qedi,
 int qedi_recover_all_conns(struct qedi_ctx *qedi);
 void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
 			  uint16_t que_idx);
+int qedi_cleanup_all_io(struct qedi_ctx *qedi,
+			struct qedi_conn *qedi_conn,
+			struct iscsi_task *task, bool in_recovery);
 void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
 		   u16 tid, int8_t direction);
 int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
index caecdb8..7a07211 100644
--- a/drivers/scsi/qedi/qedi_iscsi.c
+++ b/drivers/scsi/qedi/qedi_iscsi.c
@@ -755,6 +755,9 @@ static int qedi_iscsi_send_generic_request(struct iscsi_task *task)
 	case ISCSI_OP_LOGOUT:
 		rc = qedi_send_iscsi_logout(qedi_conn, task);
 		break;
+	case ISCSI_OP_SCSI_TMFUNC:
+		rc = qedi_iscsi_abort_work(qedi_conn, task);
+		break;
 	case ISCSI_OP_TEXT:
 		rc = qedi_send_iscsi_text(qedi_conn, task);
 		break;
@@ -804,6 +807,9 @@ static int qedi_task_xmit(struct iscsi_task *task)
 
 	if (!sc)
 		return qedi_mtask_xmit(conn, task);
+
+	cmd->scsi_cmd = sc;
+	return qedi_iscsi_send_ioreq(task);
 }
 
 static struct iscsi_endpoint *
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index 22d19a3..fd0d335 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -43,6 +43,10 @@
 module_param(debug, uint, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(debug, " Default debug level");
 
+uint io_tracing;
+module_param(io_tracing, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(io_tracing,
+		 " Enable logging of SCSI requests/completions into trace buffer. (default off).");
 const struct qed_iscsi_ops *qedi_ops;
 static struct scsi_transport_template *qedi_scsi_transport;
 static struct pci_driver qedi_pci_driver;
-- 
1.8.3.1


* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
  2016-10-19  5:01   ` manish.rangankar
@ 2016-10-19  7:31   ` Hannes Reinecke
  2016-10-19 22:28       ` Arun Easi
  -1 siblings, 1 reply; 38+ messages in thread
From: Hannes Reinecke @ 2016-10-19  7:31 UTC (permalink / raw)
  To: manish.rangankar, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Yuval Mintz, Arun Easi

On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> 
> This adds the backbone required for the various HW initializations
> which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
> 4xxxx line of adapters - FW notification, resource initializations, etc.
> 
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> ---
>  drivers/net/ethernet/qlogic/Kconfig            |   15 +
>  drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
>  drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
>  drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
>  drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
>  drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
>  drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
>  drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
>  drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
>  drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
>  drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
>  drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
>  drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
>  include/linux/qed/qed_if.h                     |    2 +
>  include/linux/qed/qed_iscsi_if.h               |  249 +++++
>  15 files changed, 1692 insertions(+), 22 deletions(-)
>  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
>  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
>  create mode 100644 include/linux/qed/qed_iscsi_if.h
> 
> diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
> index 0df1391f9..bad4fae 100644
> --- a/drivers/net/ethernet/qlogic/Kconfig
> +++ b/drivers/net/ethernet/qlogic/Kconfig
> @@ -118,4 +118,19 @@ config INFINIBAND_QEDR
>  	  for QLogic QED. This would be replaced by the 'real' option
>  	  once the QEDR driver is added [+relocated].
>  
> +config QED_ISCSI
> +	bool
> +
> +config QEDI
> +	tristate "QLogic QED 25/40/100Gb iSCSI driver"
> +	depends on QED
> +	select QED_LL2
> +	select QED_ISCSI
> +	default n
> +	---help---
> +	  This provides a temporary node that allows the compilation
> +	  and logical testing of the hardware offload iSCSI support
> +	  for QLogic QED. This would be replaced by the 'real' option
> +	  once the QEDI driver is added [+relocated].
> +
>  endif # NET_VENDOR_QLOGIC
> diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
> index cda0af7..b76669c 100644
> --- a/drivers/net/ethernet/qlogic/qed/Makefile
> +++ b/drivers/net/ethernet/qlogic/qed/Makefile
> @@ -6,3 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
>  qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
>  qed-$(CONFIG_QED_LL2) += qed_ll2.o
>  qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
> +qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
> diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
> index 653bb57..a61b1c0 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed.h
> @@ -35,6 +35,7 @@
>  
>  #define QED_WFQ_UNIT	100
>  
> +#define ISCSI_BDQ_ID(_port_id) (_port_id)
>  #define QED_WID_SIZE            (1024)
>  #define QED_PF_DEMS_SIZE        (4)
>  
> @@ -167,6 +168,7 @@ enum QED_RESOURCES {
>  	QED_ILT,
>  	QED_LL2_QUEUE,
>  	QED_RDMA_STATS_QUEUE,
> +	QED_CMDQS_CQS,
>  	QED_MAX_RESC,
>  };
>  
> @@ -379,6 +381,7 @@ struct qed_hwfn {
>  	bool				using_ll2;
>  	struct qed_ll2_info		*p_ll2_info;
>  	struct qed_rdma_info		*p_rdma_info;
> +	struct qed_iscsi_info		*p_iscsi_info;
>  	struct qed_pf_params		pf_params;
>  
>  	bool b_rdma_enabled_in_prs;
> @@ -578,6 +581,8 @@ struct qed_dev {
>  	/* Linux specific here */
>  	struct  qede_dev		*edev;
>  	struct  pci_dev			*pdev;
> +	u32 flags;
> +#define QED_FLAG_STORAGE_STARTED	(BIT(0))
>  	int				msg_enable;
>  
>  	struct pci_params		pci_params;
> @@ -591,6 +596,7 @@ struct qed_dev {
>  	union {
>  		struct qed_common_cb_ops	*common;
>  		struct qed_eth_cb_ops		*eth;
> +		struct qed_iscsi_cb_ops		*iscsi;
>  	} protocol_ops;
>  	void				*ops_cookie;
>  
> @@ -600,7 +606,7 @@ struct qed_dev {
>  	struct qed_cb_ll2_info		*ll2;
>  	u8				ll2_mac_address[ETH_ALEN];
>  #endif
> -
> +	DECLARE_HASHTABLE(connections, 10);
>  	const struct firmware		*firmware;
>  
>  	u32 rdma_max_sge;
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
> index 754f6a9..a4234c0 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
> @@ -29,6 +29,7 @@
>  #include "qed_hw.h"
>  #include "qed_init_ops.h"
>  #include "qed_int.h"
> +#include "qed_iscsi.h"
>  #include "qed_ll2.h"
>  #include "qed_mcp.h"
>  #include "qed_reg_addr.h"
> @@ -155,6 +156,9 @@ void qed_resc_free(struct qed_dev *cdev)
>  #ifdef CONFIG_QED_LL2
>  		qed_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
>  #endif
> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> +			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
>  		qed_iov_free(p_hwfn);
>  		qed_dmae_info_free(p_hwfn);
>  		qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
> @@ -411,6 +415,7 @@ int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
>  
>  int qed_resc_alloc(struct qed_dev *cdev)
>  {
> +	struct qed_iscsi_info *p_iscsi_info;
>  #ifdef CONFIG_QED_LL2
>  	struct qed_ll2_info *p_ll2_info;
>  #endif
> @@ -532,6 +537,13 @@ int qed_resc_alloc(struct qed_dev *cdev)
>  			p_hwfn->p_ll2_info = p_ll2_info;
>  		}
>  #endif
> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +			p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
> +			p_iscsi_info = qed_iscsi_alloc(p_hwfn);
> +			if (!p_iscsi_info)
> +				goto alloc_no_mem;
> +			p_hwfn->p_iscsi_info = p_iscsi_info;
> +		}
>  
>  		/* DMA info initialization */
>  		rc = qed_dmae_info_alloc(p_hwfn);
> @@ -585,6 +597,9 @@ void qed_resc_setup(struct qed_dev *cdev)
>  		if (p_hwfn->using_ll2)
>  			qed_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
>  #endif
> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> +			qed_iscsi_setup(p_hwfn, p_hwfn->p_iscsi_info);
>  	}
>  }
>  
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h
> index 0948be6..cc28066 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_int.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed_int.h
> @@ -218,7 +218,6 @@ struct qed_igu_info {
>  	u16			free_blks;
>  };
>  
> -/* TODO Names of function may change... */
>  void qed_int_igu_init_pure_rt(struct qed_hwfn *p_hwfn,
>  			      struct qed_ptt *p_ptt,
>  			      bool b_set,
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> new file mode 100644
> index 0000000..cb22dad
> --- /dev/null
> +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> @@ -0,0 +1,1310 @@
> +/* QLogic qed NIC Driver

Shouldn't that be qedi iSCSI Driver?

> + * Copyright (c) 2015 QLogic Corporation
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include <linux/types.h>
> +#include <asm/byteorder.h>
> +#include <asm/param.h>
> +#include <linux/delay.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/etherdevice.h>
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <linux/log2.h>
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/slab.h>
> +#include <linux/stddef.h>
> +#include <linux/string.h>
> +#include <linux/version.h>
> +#include <linux/workqueue.h>
> +#include <linux/errno.h>
> +#include <linux/list.h>
> +#include <linux/spinlock.h>
> +#include <linux/qed/qed_iscsi_if.h>
> +#include "qed.h"
> +#include "qed_cxt.h"
> +#include "qed_dev_api.h"
> +#include "qed_hsi.h"
> +#include "qed_hw.h"
> +#include "qed_int.h"
> +#include "qed_iscsi.h"
> +#include "qed_ll2.h"
> +#include "qed_mcp.h"
> +#include "qed_sp.h"
> +#include "qed_sriov.h"
> +#include "qed_reg_addr.h"
> +
> +struct qed_iscsi_conn {
> +	struct list_head list_entry;
> +	bool free_on_delete;
> +
> +	u16 conn_id;
> +	u32 icid;
> +	u32 fw_cid;
> +
> +	u8 layer_code;
> +	u8 offl_flags;
> +	u8 connect_mode;
> +	u32 initial_ack;
> +	dma_addr_t sq_pbl_addr;
> +	struct qed_chain r2tq;
> +	struct qed_chain xhq;
> +	struct qed_chain uhq;
> +
> +	struct tcp_upload_params *tcp_upload_params_virt_addr;
> +	dma_addr_t tcp_upload_params_phys_addr;
> +	struct scsi_terminate_extra_params *queue_cnts_virt_addr;
> +	dma_addr_t queue_cnts_phys_addr;
> +	dma_addr_t syn_phy_addr;
> +
> +	u16 syn_ip_payload_length;
> +	u8 local_mac[6];
> +	u8 remote_mac[6];
> +	u16 vlan_id;
> +	u8 tcp_flags;
> +	u8 ip_version;
> +	u32 remote_ip[4];
> +	u32 local_ip[4];
> +	u8 ka_max_probe_cnt;
> +	u8 dup_ack_theshold;
> +	u32 rcv_next;
> +	u32 snd_una;
> +	u32 snd_next;
> +	u32 snd_max;
> +	u32 snd_wnd;
> +	u32 rcv_wnd;
> +	u32 snd_wl1;
> +	u32 cwnd;
> +	u32 ss_thresh;
> +	u16 srtt;
> +	u16 rtt_var;
> +	u32 ts_time;
> +	u32 ts_recent;
> +	u32 ts_recent_age;
> +	u32 total_rt;
> +	u32 ka_timeout_delta;
> +	u32 rt_timeout_delta;
> +	u8 dup_ack_cnt;
> +	u8 snd_wnd_probe_cnt;
> +	u8 ka_probe_cnt;
> +	u8 rt_cnt;
> +	u32 flow_label;
> +	u32 ka_timeout;
> +	u32 ka_interval;
> +	u32 max_rt_time;
> +	u32 initial_rcv_wnd;
> +	u8 ttl;
> +	u8 tos_or_tc;
> +	u16 remote_port;
> +	u16 local_port;
> +	u16 mss;
> +	u8 snd_wnd_scale;
> +	u8 rcv_wnd_scale;
> +	u32 ts_ticks_per_second;
> +	u16 da_timeout_value;
> +	u8 ack_frequency;
> +
> +	u8 update_flag;
> +	u8 default_cq;
> +	u32 max_seq_size;
> +	u32 max_recv_pdu_length;
> +	u32 max_send_pdu_length;
> +	u32 first_seq_length;
> +	u32 exp_stat_sn;
> +	u32 stat_sn;
> +	u16 physical_q0;
> +	u16 physical_q1;
> +	u8 abortive_dsconnect;
> +};
> +
> +static int
> +qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
> +			enum spq_mode comp_mode,
> +			struct qed_spq_comp_cb *p_comp_addr,
> +			void *event_context, iscsi_event_cb_t async_event_cb)
> +{
> +	struct iscsi_init_ramrod_params *p_ramrod = NULL;
> +	struct scsi_init_func_queues *p_queue = NULL;
> +	struct qed_iscsi_pf_params *p_params = NULL;
> +	struct iscsi_spe_func_init *p_init = NULL;
> +	struct qed_spq_entry *p_ent = NULL;
> +	struct qed_sp_init_data init_data;
> +	int rc = 0;
> +	u32 dval;
> +	u16 val;
> +	u8 i;
> +
> +	/* Get SPQ entry */
> +	memset(&init_data, 0, sizeof(init_data));
> +	init_data.cid = qed_spq_get_cid(p_hwfn);
> +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> +	init_data.comp_mode = comp_mode;
> +	init_data.p_comp_data = p_comp_addr;
> +
> +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> +				 ISCSI_RAMROD_CMD_ID_INIT_FUNC,
> +				 PROTOCOLID_ISCSI, &init_data);
> +	if (rc)
> +		return rc;
> +
> +	p_ramrod = &p_ent->ramrod.iscsi_init;
> +	p_init = &p_ramrod->iscsi_init_spe;
> +	p_params = &p_hwfn->pf_params.iscsi_pf_params;
> +	p_queue = &p_init->q_params;
> +
> +	SET_FIELD(p_init->hdr.flags,
> +		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, ISCSI_SLOW_PATH_LAYER_CODE);
> +	p_init->hdr.op_code = ISCSI_RAMROD_CMD_ID_INIT_FUNC;
> +
> +	val = p_params->half_way_close_timeout;
> +	p_init->half_way_close_timeout = cpu_to_le16(val);
> +	p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring;
> +	p_init->num_r2tq_pages_in_ring = p_params->num_r2tq_pages_in_ring;
> +	p_init->num_uhq_pages_in_ring = p_params->num_uhq_pages_in_ring;
> +	p_init->func_params.log_page_size = p_params->log_page_size;
> +	val = p_params->num_tasks;
> +	p_init->func_params.num_tasks = cpu_to_le16(val);
> +	p_init->debug_mode.flags = p_params->debug_mode;
> +
> +	DMA_REGPAIR_LE(p_queue->glbl_q_params_addr,
> +		       p_params->glbl_q_params_addr);
> +
> +	val = p_params->cq_num_entries;
> +	p_queue->cq_num_entries = cpu_to_le16(val);
> +	val = p_params->cmdq_num_entries;
> +	p_queue->cmdq_num_entries = cpu_to_le16(val);
> +	p_queue->num_queues = p_params->num_queues;
> +	dval = (u8)p_hwfn->hw_info.resc_start[QED_CMDQS_CQS];
> +	p_queue->queue_relative_offset = (u8)dval;
> +	p_queue->cq_sb_pi = p_params->gl_rq_pi;
> +	p_queue->cmdq_sb_pi = p_params->gl_cmd_pi;
> +
> +	for (i = 0; i < p_params->num_queues; i++) {
> +		val = p_hwfn->sbs_info[i]->igu_sb_id;
> +		p_queue->cq_cmdq_sb_num_arr[i] = cpu_to_le16(val);
> +	}
> +
> +	p_queue->bdq_resource_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> +
> +	DMA_REGPAIR_LE(p_queue->bdq_pbl_base_address[BDQ_ID_RQ],
> +		       p_params->bdq_pbl_base_addr[BDQ_ID_RQ]);
> +	p_queue->bdq_pbl_num_entries[BDQ_ID_RQ] =
> +	    p_params->bdq_pbl_num_entries[BDQ_ID_RQ];
> +	val = p_params->bdq_xoff_threshold[BDQ_ID_RQ];
> +	p_queue->bdq_xoff_threshold[BDQ_ID_RQ] = cpu_to_le16(val);
> +	val = p_params->bdq_xon_threshold[BDQ_ID_RQ];
> +	p_queue->bdq_xon_threshold[BDQ_ID_RQ] = cpu_to_le16(val);
> +
> +	DMA_REGPAIR_LE(p_queue->bdq_pbl_base_address[BDQ_ID_IMM_DATA],
> +		       p_params->bdq_pbl_base_addr[BDQ_ID_IMM_DATA]);
> +	p_queue->bdq_pbl_num_entries[BDQ_ID_IMM_DATA] =
> +	    p_params->bdq_pbl_num_entries[BDQ_ID_IMM_DATA];
> +	val = p_params->bdq_xoff_threshold[BDQ_ID_IMM_DATA];
> +	p_queue->bdq_xoff_threshold[BDQ_ID_IMM_DATA] = cpu_to_le16(val);
> +	val = p_params->bdq_xon_threshold[BDQ_ID_IMM_DATA];
> +	p_queue->bdq_xon_threshold[BDQ_ID_IMM_DATA] = cpu_to_le16(val);
> +	val = p_params->rq_buffer_size;
> +	p_queue->rq_buffer_size = cpu_to_le16(val);
> +	if (p_params->is_target) {
> +		SET_FIELD(p_queue->q_validity,
> +			  SCSI_INIT_FUNC_QUEUES_RQ_VALID, 1);
> +		if (p_queue->bdq_pbl_num_entries[BDQ_ID_IMM_DATA])
> +			SET_FIELD(p_queue->q_validity,
> +				  SCSI_INIT_FUNC_QUEUES_IMM_DATA_VALID, 1);
> +		SET_FIELD(p_queue->q_validity,
> +			  SCSI_INIT_FUNC_QUEUES_CMD_VALID, 1);
> +	} else {
> +		SET_FIELD(p_queue->q_validity,
> +			  SCSI_INIT_FUNC_QUEUES_RQ_VALID, 1);
> +	}
> +	p_ramrod->tcp_init.two_msl_timer = cpu_to_le32(p_params->two_msl_timer);
> +	val = p_params->tx_sws_timer;
> +	p_ramrod->tcp_init.tx_sws_timer = cpu_to_le16(val);
> +	p_ramrod->tcp_init.maxfinrt = p_params->max_fin_rt;
> +
> +	p_hwfn->p_iscsi_info->event_context = event_context;
> +	p_hwfn->p_iscsi_info->event_cb = async_event_cb;
> +
> +	return qed_spq_post(p_hwfn, p_ent, NULL);
> +}
> +
> +static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
> +				     struct qed_iscsi_conn *p_conn,
> +				     enum spq_mode comp_mode,
> +				     struct qed_spq_comp_cb *p_comp_addr)
> +{
> +	struct iscsi_spe_conn_offload *p_ramrod = NULL;
> +	struct tcp_offload_params_opt2 *p_tcp2 = NULL;
> +	struct tcp_offload_params *p_tcp = NULL;
> +	struct qed_spq_entry *p_ent = NULL;
> +	struct qed_sp_init_data init_data;
> +	union qed_qm_pq_params pq_params;
> +	u16 pq0_id = 0, pq1_id = 0;
> +	dma_addr_t r2tq_pbl_addr;
> +	dma_addr_t xhq_pbl_addr;
> +	dma_addr_t uhq_pbl_addr;
> +	int rc = 0;
> +	u32 dval;
> +	u16 wval;
> +	u8 ucval;
> +	u8 i;
> +
> +	/* Get SPQ entry */
> +	memset(&init_data, 0, sizeof(init_data));
> +	init_data.cid = p_conn->icid;
> +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> +	init_data.comp_mode = comp_mode;
> +	init_data.p_comp_data = p_comp_addr;
> +
> +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> +				 ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
> +				 PROTOCOLID_ISCSI, &init_data);
> +	if (rc)
> +		return rc;
> +
> +	p_ramrod = &p_ent->ramrod.iscsi_conn_offload;
> +
> +	/* Transmission PQ is the first of the PF */
> +	memset(&pq_params, 0, sizeof(pq_params));
> +	pq0_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
> +	p_conn->physical_q0 = cpu_to_le16(pq0_id);
> +	p_ramrod->iscsi.physical_q0 = cpu_to_le16(pq0_id);
> +
> +	/* iSCSI Pure-ACK PQ */
> +	pq_params.iscsi.q_idx = 1;
> +	pq1_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
> +	p_conn->physical_q1 = cpu_to_le16(pq1_id);
> +	p_ramrod->iscsi.physical_q1 = cpu_to_le16(pq1_id);
> +
> +	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN;
> +	SET_FIELD(p_ramrod->hdr.flags, ISCSI_SLOW_PATH_HDR_LAYER_CODE,
> +		  p_conn->layer_code);
> +
> +	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
> +	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
> +
> +	DMA_REGPAIR_LE(p_ramrod->iscsi.sq_pbl_addr, p_conn->sq_pbl_addr);
> +
> +	r2tq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->r2tq);
> +	DMA_REGPAIR_LE(p_ramrod->iscsi.r2tq_pbl_addr, r2tq_pbl_addr);
> +
> +	xhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->xhq);
> +	DMA_REGPAIR_LE(p_ramrod->iscsi.xhq_pbl_addr, xhq_pbl_addr);
> +
> +	uhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->uhq);
> +	DMA_REGPAIR_LE(p_ramrod->iscsi.uhq_pbl_addr, uhq_pbl_addr);
> +
> +	p_ramrod->iscsi.initial_ack = cpu_to_le32(p_conn->initial_ack);
> +	p_ramrod->iscsi.flags = p_conn->offl_flags;
> +	p_ramrod->iscsi.default_cq = p_conn->default_cq;
> +	p_ramrod->iscsi.stat_sn = cpu_to_le32(p_conn->stat_sn);
> +
> +	if (!GET_FIELD(p_ramrod->iscsi.flags,
> +		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> +		p_tcp = &p_ramrod->tcp;
> +		ucval = p_conn->local_mac[1];
> +		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->local_mac[0];
> +		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->local_mac[3];
> +		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->local_mac[2];
> +		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->local_mac[5];
> +		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->local_mac[4];
> +		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
> +		ucval = p_conn->remote_mac[1];
> +		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->remote_mac[0];
> +		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->remote_mac[3];
> +		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->remote_mac[2];
> +		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->remote_mac[5];
> +		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->remote_mac[4];
> +		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
> +
This looks terribly like open-coded endianness swapping. Are you sure
this is correct for all architectures and endianness settings?
And wouldn't it be better to use one of the get_unaligned_XXX helpers
here?

> +		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
> +
> +		p_tcp->flags = p_conn->tcp_flags;
> +		p_tcp->ip_version = p_conn->ip_version;
> +		for (i = 0; i < 4; i++) {
> +			dval = p_conn->remote_ip[i];
> +			p_tcp->remote_ip[i] = cpu_to_le32(dval);
> +			dval = p_conn->local_ip[i];
> +			p_tcp->local_ip[i] = cpu_to_le32(dval);
> +		}
> +		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
> +		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
> +
> +		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
> +		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
> +		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
> +		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
> +		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
> +		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
> +		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
> +		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
> +		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
> +		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
> +		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
> +		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
> +		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
> +		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
> +		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
> +		dval = p_conn->ka_timeout_delta;
> +		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
> +		dval = p_conn->rt_timeout_delta;
> +		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
> +		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
> +		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
> +		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
> +		p_tcp->rt_cnt = p_conn->rt_cnt;
> +		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
> +		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
> +		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
> +		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
> +		dval = p_conn->initial_rcv_wnd;
> +		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
> +		p_tcp->ttl = p_conn->ttl;
> +		p_tcp->tos_or_tc = p_conn->tos_or_tc;
> +		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
> +		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
> +		p_tcp->mss = cpu_to_le16(p_conn->mss);
> +		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
> +		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> +		dval = p_conn->ts_ticks_per_second;
> +		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
> +		wval = p_conn->da_timeout_value;
> +		p_tcp->da_timeout_value = cpu_to_le16(wval);
> +		p_tcp->ack_frequency = p_conn->ack_frequency;
> +		p_tcp->connect_mode = p_conn->connect_mode;
> +	} else {
> +		p_tcp2 =
> +		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
> +		ucval = p_conn->local_mac[1];
> +		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->local_mac[0];
> +		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->local_mac[3];
> +		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->local_mac[2];
> +		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->local_mac[5];
> +		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->local_mac[4];
> +		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
> +
> +		ucval = p_conn->remote_mac[1];
> +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->remote_mac[0];
> +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->remote_mac[3];
> +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->remote_mac[2];
> +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->remote_mac[5];
> +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->remote_mac[4];
> +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
> +
Same comment applies here for the option2 path.

> +		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);
> +		p_tcp2->flags = p_conn->tcp_flags;
> +
> +		p_tcp2->ip_version = p_conn->ip_version;
> +		for (i = 0; i < 4; i++) {
> +			dval = p_conn->remote_ip[i];
> +			p_tcp2->remote_ip[i] = cpu_to_le32(dval);
> +			dval = p_conn->local_ip[i];
> +			p_tcp2->local_ip[i] = cpu_to_le32(dval);
> +		}
> +
> +		p_tcp2->flow_label = cpu_to_le32(p_conn->flow_label);
> +		p_tcp2->ttl = p_conn->ttl;
> +		p_tcp2->tos_or_tc = p_conn->tos_or_tc;
> +		p_tcp2->remote_port = cpu_to_le16(p_conn->remote_port);
> +		p_tcp2->local_port = cpu_to_le16(p_conn->local_port);
> +		p_tcp2->mss = cpu_to_le16(p_conn->mss);
> +		p_tcp2->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> +		p_tcp2->connect_mode = p_conn->connect_mode;
> +		wval = p_conn->syn_ip_payload_length;
> +		p_tcp2->syn_ip_payload_length = cpu_to_le16(wval);
> +		p_tcp2->syn_phy_addr_lo = DMA_LO_LE(p_conn->syn_phy_addr);
> +		p_tcp2->syn_phy_addr_hi = DMA_HI_LE(p_conn->syn_phy_addr);
> +	}
> +
> +	return qed_spq_post(p_hwfn, p_ent, NULL);
> +}
> +
> +static int qed_sp_iscsi_conn_update(struct qed_hwfn *p_hwfn,
> +				    struct qed_iscsi_conn *p_conn,
> +				    enum spq_mode comp_mode,
> +				    struct qed_spq_comp_cb *p_comp_addr)
> +{
> +	struct iscsi_conn_update_ramrod_params *p_ramrod = NULL;
> +	struct qed_spq_entry *p_ent = NULL;
> +	struct qed_sp_init_data init_data;
> +	int rc = -EINVAL;
> +	u32 dval;
> +
> +	/* Get SPQ entry */
> +	memset(&init_data, 0, sizeof(init_data));
> +	init_data.cid = p_conn->icid;
> +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> +	init_data.comp_mode = comp_mode;
> +	init_data.p_comp_data = p_comp_addr;
> +
> +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> +				 ISCSI_RAMROD_CMD_ID_UPDATE_CONN,
> +				 PROTOCOLID_ISCSI, &init_data);
> +	if (rc)
> +		return rc;
> +
> +	p_ramrod = &p_ent->ramrod.iscsi_conn_update;
> +	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_UPDATE_CONN;
> +	SET_FIELD(p_ramrod->hdr.flags,
> +		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
> +
> +	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
> +	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
> +	p_ramrod->flags = p_conn->update_flag;
> +	p_ramrod->max_seq_size = cpu_to_le32(p_conn->max_seq_size);
> +	dval = p_conn->max_recv_pdu_length;
> +	p_ramrod->max_recv_pdu_length = cpu_to_le32(dval);
> +	dval = p_conn->max_send_pdu_length;
> +	p_ramrod->max_send_pdu_length = cpu_to_le32(dval);
> +	dval = p_conn->first_seq_length;
> +	p_ramrod->first_seq_length = cpu_to_le32(dval);
> +	p_ramrod->exp_stat_sn = cpu_to_le32(p_conn->exp_stat_sn);
> +
> +	return qed_spq_post(p_hwfn, p_ent, NULL);
> +}
> +
> +static int qed_sp_iscsi_conn_terminate(struct qed_hwfn *p_hwfn,
> +				       struct qed_iscsi_conn *p_conn,
> +				       enum spq_mode comp_mode,
> +				       struct qed_spq_comp_cb *p_comp_addr)
> +{
> +	struct iscsi_spe_conn_termination *p_ramrod = NULL;
> +	struct qed_spq_entry *p_ent = NULL;
> +	struct qed_sp_init_data init_data;
> +	int rc = -EINVAL;
> +
> +	/* Get SPQ entry */
> +	memset(&init_data, 0, sizeof(init_data));
> +	init_data.cid = p_conn->icid;
> +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> +	init_data.comp_mode = comp_mode;
> +	init_data.p_comp_data = p_comp_addr;
> +
> +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> +				 ISCSI_RAMROD_CMD_ID_TERMINATION_CONN,
> +				 PROTOCOLID_ISCSI, &init_data);
> +	if (rc)
> +		return rc;
> +
> +	p_ramrod = &p_ent->ramrod.iscsi_conn_terminate;
> +	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_TERMINATION_CONN;
> +	SET_FIELD(p_ramrod->hdr.flags,
> +		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
> +
> +	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
> +	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
> +	p_ramrod->abortive = p_conn->abortive_dsconnect;
> +
> +	DMA_REGPAIR_LE(p_ramrod->query_params_addr,
> +		       p_conn->tcp_upload_params_phys_addr);
> +	DMA_REGPAIR_LE(p_ramrod->queue_cnts_addr, p_conn->queue_cnts_phys_addr);
> +
> +	return qed_spq_post(p_hwfn, p_ent, NULL);
> +}
> +
> +static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn,
> +				      struct qed_iscsi_conn *p_conn,
> +				      enum spq_mode comp_mode,
> +				      struct qed_spq_comp_cb *p_comp_addr)
> +{
> +	struct iscsi_slow_path_hdr *p_ramrod = NULL;
> +	struct qed_spq_entry *p_ent = NULL;
> +	struct qed_sp_init_data init_data;
> +	int rc = -EINVAL;
> +
> +	/* Get SPQ entry */
> +	memset(&init_data, 0, sizeof(init_data));
> +	init_data.cid = p_conn->icid;
> +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> +	init_data.comp_mode = comp_mode;
> +	init_data.p_comp_data = p_comp_addr;
> +
> +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> +				 ISCSI_RAMROD_CMD_ID_CLEAR_SQ,
> +				 PROTOCOLID_ISCSI, &init_data);
> +	if (rc)
> +		return rc;
> +
> +	p_ramrod = &p_ent->ramrod.iscsi_empty;
> +	p_ramrod->op_code = ISCSI_RAMROD_CMD_ID_CLEAR_SQ;
> +	SET_FIELD(p_ramrod->flags,
> +		  ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code);
> +
> +	return qed_spq_post(p_hwfn, p_ent, NULL);
> +}
> +
> +static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn,
> +				  enum spq_mode comp_mode,
> +				  struct qed_spq_comp_cb *p_comp_addr)
> +{
> +	struct iscsi_spe_func_dstry *p_ramrod = NULL;
> +	struct qed_spq_entry *p_ent = NULL;
> +	struct qed_sp_init_data init_data;
> +	int rc = 0;
> +
> +	/* Get SPQ entry */
> +	memset(&init_data, 0, sizeof(init_data));
> +	init_data.cid = qed_spq_get_cid(p_hwfn);
> +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> +	init_data.comp_mode = comp_mode;
> +	init_data.p_comp_data = p_comp_addr;
> +
> +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> +				 ISCSI_RAMROD_CMD_ID_DESTROY_FUNC,
> +				 PROTOCOLID_ISCSI, &init_data);
> +	if (rc)
> +		return rc;
> +
> +	p_ramrod = &p_ent->ramrod.iscsi_destroy;
> +	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_DESTROY_FUNC;
> +
> +	return qed_spq_post(p_hwfn, p_ent, NULL);
> +}
> +
> +static void __iomem *qed_iscsi_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
> +{
> +	return (u8 __iomem *)p_hwfn->doorbells +
> +			     qed_db_addr(cid, DQ_DEMS_LEGACY);
> +}
> +
> +static void __iomem *qed_iscsi_get_primary_bdq_prod(struct qed_hwfn *p_hwfn,
> +						    u8 bdq_id)
> +{
> +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> +
> +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_MSDM_RAM +
> +			     MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> +							     bdq_id);
> +}
> +
> +static void __iomem *qed_iscsi_get_secondary_bdq_prod(struct qed_hwfn *p_hwfn,
> +						      u8 bdq_id)
> +{
> +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> +
> +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
> +			     TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> +							     bdq_id);
> +}
> +
> +static int qed_iscsi_setup_connection(struct qed_hwfn *p_hwfn,
> +				      struct qed_iscsi_conn *p_conn)
> +{
> +	if (!p_conn->queue_cnts_virt_addr)
> +		goto nomem;
> +	memset(p_conn->queue_cnts_virt_addr, 0,
> +	       sizeof(*p_conn->queue_cnts_virt_addr));
> +
> +	if (!p_conn->tcp_upload_params_virt_addr)
> +		goto nomem;
> +	memset(p_conn->tcp_upload_params_virt_addr, 0,
> +	       sizeof(*p_conn->tcp_upload_params_virt_addr));
> +
> +	if (!p_conn->r2tq.p_virt_addr)
> +		goto nomem;
> +	qed_chain_pbl_zero_mem(&p_conn->r2tq);
> +
> +	if (!p_conn->uhq.p_virt_addr)
> +		goto nomem;
> +	qed_chain_pbl_zero_mem(&p_conn->uhq);
> +
> +	if (!p_conn->xhq.p_virt_addr)
> +		goto nomem;
> +	qed_chain_pbl_zero_mem(&p_conn->xhq);
> +
> +	return 0;
> +nomem:
> +	return -ENOMEM;
> +}
> +
> +static int qed_iscsi_allocate_connection(struct qed_hwfn *p_hwfn,
> +					 struct qed_iscsi_conn **p_out_conn)
> +{
> +	u16 uhq_num_elements = 0, xhq_num_elements = 0, r2tq_num_elements = 0;
> +	struct scsi_terminate_extra_params *p_q_cnts = NULL;
> +	struct qed_iscsi_pf_params *p_params = NULL;
> +	struct tcp_upload_params *p_tcp = NULL;
> +	struct qed_iscsi_conn *p_conn = NULL;
> +	int rc = 0;
> +
> +	/* Try finding a free connection that can be used */
> +	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
> +	if (!list_empty(&p_hwfn->p_iscsi_info->free_list))
> +		p_conn = list_first_entry(&p_hwfn->p_iscsi_info->free_list,
> +					  struct qed_iscsi_conn, list_entry);
> +	if (p_conn) {
> +		list_del(&p_conn->list_entry);
> +		spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
> +		*p_out_conn = p_conn;
> +		return 0;
> +	}
> +	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
> +
> +	/* Need to allocate a new connection */
> +	p_params = &p_hwfn->pf_params.iscsi_pf_params;
> +
> +	p_conn = kzalloc(sizeof(*p_conn), GFP_KERNEL);
> +	if (!p_conn)
> +		return -ENOMEM;
> +
> +	p_q_cnts = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
> +				      sizeof(*p_q_cnts),
> +				      &p_conn->queue_cnts_phys_addr,
> +				      GFP_KERNEL);
> +	if (!p_q_cnts)
> +		goto nomem_queue_cnts_param;
> +	p_conn->queue_cnts_virt_addr = p_q_cnts;
> +
> +	p_tcp = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
> +				   sizeof(*p_tcp),
> +				   &p_conn->tcp_upload_params_phys_addr,
> +				   GFP_KERNEL);
> +	if (!p_tcp)
> +		goto nomem_upload_param;
> +	p_conn->tcp_upload_params_virt_addr = p_tcp;
> +
> +	r2tq_num_elements = p_params->num_r2tq_pages_in_ring *
> +			    QED_CHAIN_PAGE_SIZE / 0x80;
> +	rc = qed_chain_alloc(p_hwfn->cdev,
> +			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
> +			     QED_CHAIN_MODE_PBL,
> +			     QED_CHAIN_CNT_TYPE_U16,
> +			     r2tq_num_elements, 0x80, &p_conn->r2tq);
> +	if (rc)
> +		goto nomem_r2tq;
> +
> +	uhq_num_elements = p_params->num_uhq_pages_in_ring *
> +			   QED_CHAIN_PAGE_SIZE / sizeof(struct iscsi_uhqe);
> +	rc = qed_chain_alloc(p_hwfn->cdev,
> +			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
> +			     QED_CHAIN_MODE_PBL,
> +			     QED_CHAIN_CNT_TYPE_U16,
> +			     uhq_num_elements,
> +			     sizeof(struct iscsi_uhqe), &p_conn->uhq);
> +	if (rc)
> +		goto nomem_uhq;
> +
> +	xhq_num_elements = uhq_num_elements;
> +	rc = qed_chain_alloc(p_hwfn->cdev,
> +			     QED_CHAIN_USE_TO_CONSUME_PRODUCE,
> +			     QED_CHAIN_MODE_PBL,
> +			     QED_CHAIN_CNT_TYPE_U16,
> +			     xhq_num_elements,
> +			     sizeof(struct iscsi_xhqe), &p_conn->xhq);
> +	if (rc)
> +		goto nomem;
> +
> +	p_conn->free_on_delete = true;
> +	*p_out_conn = p_conn;
> +	return 0;
> +
> +nomem:
> +	qed_chain_free(p_hwfn->cdev, &p_conn->uhq);
> +nomem_uhq:
> +	qed_chain_free(p_hwfn->cdev, &p_conn->r2tq);
> +nomem_r2tq:
> +	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
> +			  sizeof(struct tcp_upload_params),
> +			  p_conn->tcp_upload_params_virt_addr,
> +			  p_conn->tcp_upload_params_phys_addr);
> +nomem_upload_param:
> +	dma_free_coherent(&p_hwfn->cdev->pdev->dev,
> +			  sizeof(struct scsi_terminate_extra_params),
> +			  p_conn->queue_cnts_virt_addr,
> +			  p_conn->queue_cnts_phys_addr);
> +nomem_queue_cnts_param:
> +	kfree(p_conn);
> +
> +	return -ENOMEM;
> +}
> +
> +static int qed_iscsi_acquire_connection(struct qed_hwfn *p_hwfn,
> +					struct qed_iscsi_conn *p_in_conn,
> +					struct qed_iscsi_conn **p_out_conn)
> +{
> +	struct qed_iscsi_conn *p_conn = NULL;
> +	int rc = 0;
> +	u32 icid;
> +
> +	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
> +	rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_ISCSI, &icid);
> +	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
> +	if (rc)
> +		return rc;
> +
> +	/* Use input connection or allocate a new one */
> +	if (p_in_conn)
> +		p_conn = p_in_conn;
> +	else
> +		rc = qed_iscsi_allocate_connection(p_hwfn, &p_conn);
> +
> +	if (!rc)
> +		rc = qed_iscsi_setup_connection(p_hwfn, p_conn);
> +
> +	if (rc) {
> +		spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
> +		qed_cxt_release_cid(p_hwfn, icid);
> +		spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
> +		return rc;
> +	}
> +
> +	p_conn->icid = icid;
> +	p_conn->conn_id = (u16)icid;
> +	p_conn->fw_cid = (p_hwfn->hw_info.opaque_fid << 16) | icid;
> +
> +	*p_out_conn = p_conn;
> +
> +	return rc;
> +}
> +
> +static void qed_iscsi_release_connection(struct qed_hwfn *p_hwfn,
> +					 struct qed_iscsi_conn *p_conn)
> +{
> +	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
> +	list_add_tail(&p_conn->list_entry, &p_hwfn->p_iscsi_info->free_list);
> +	qed_cxt_release_cid(p_hwfn, p_conn->icid);
> +	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
> +}
> +
> +struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn)
> +{
> +	struct qed_iscsi_info *p_iscsi_info;
> +
> +	p_iscsi_info = kzalloc(sizeof(*p_iscsi_info), GFP_KERNEL);
> +	if (!p_iscsi_info) {
> +		DP_NOTICE(p_hwfn, "Failed to allocate qed_iscsi_info'\n");
> +		return NULL;
> +	}
> +
> +	INIT_LIST_HEAD(&p_iscsi_info->free_list);
> +	return p_iscsi_info;
> +}
> +
> +void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
> +		     struct qed_iscsi_info *p_iscsi_info)
> +{
> +	spin_lock_init(&p_iscsi_info->lock);
> +}
> +
> +void qed_iscsi_free(struct qed_hwfn *p_hwfn,
> +		    struct qed_iscsi_info *p_iscsi_info)
> +{
> +	kfree(p_iscsi_info);
> +}
> +
> +static void _qed_iscsi_get_tstats(struct qed_hwfn *p_hwfn,
> +				  struct qed_ptt *p_ptt,
> +				  struct qed_iscsi_stats *p_stats)
> +{
> +	struct tstorm_iscsi_stats_drv tstats;
> +	u32 tstats_addr;
> +
> +	memset(&tstats, 0, sizeof(tstats));
> +	tstats_addr = BAR0_MAP_REG_TSDM_RAM +
> +		      TSTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
> +	qed_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, sizeof(tstats));
> +
> +	p_stats->iscsi_rx_bytes_cnt =
> +	    HILO_64_REGPAIR(tstats.iscsi_rx_bytes_cnt);
> +	p_stats->iscsi_rx_packet_cnt =
> +	    HILO_64_REGPAIR(tstats.iscsi_rx_packet_cnt);
> +	p_stats->iscsi_cmdq_threshold_cnt =
> +	    le32_to_cpu(tstats.iscsi_cmdq_threshold_cnt);
> +	p_stats->iscsi_rq_threshold_cnt =
> +	    le32_to_cpu(tstats.iscsi_rq_threshold_cnt);
> +	p_stats->iscsi_immq_threshold_cnt =
> +	    le32_to_cpu(tstats.iscsi_immq_threshold_cnt);
> +}
> +
> +static void _qed_iscsi_get_mstats(struct qed_hwfn *p_hwfn,
> +				  struct qed_ptt *p_ptt,
> +				  struct qed_iscsi_stats *p_stats)
> +{
> +	struct mstorm_iscsi_stats_drv mstats;
> +	u32 mstats_addr;
> +
> +	memset(&mstats, 0, sizeof(mstats));
> +	mstats_addr = BAR0_MAP_REG_MSDM_RAM +
> +		      MSTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
> +	qed_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, sizeof(mstats));
> +
> +	p_stats->iscsi_rx_dropped_pdus_task_not_valid =
> +	    HILO_64_REGPAIR(mstats.iscsi_rx_dropped_pdus_task_not_valid);
> +}
> +
> +static void _qed_iscsi_get_ustats(struct qed_hwfn *p_hwfn,
> +				  struct qed_ptt *p_ptt,
> +				  struct qed_iscsi_stats *p_stats)
> +{
> +	struct ustorm_iscsi_stats_drv ustats;
> +	u32 ustats_addr;
> +
> +	memset(&ustats, 0, sizeof(ustats));
> +	ustats_addr = BAR0_MAP_REG_USDM_RAM +
> +		      USTORM_ISCSI_RX_STATS_OFFSET(p_hwfn->rel_pf_id);
> +	qed_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, sizeof(ustats));
> +
> +	p_stats->iscsi_rx_data_pdu_cnt =
> +	    HILO_64_REGPAIR(ustats.iscsi_rx_data_pdu_cnt);
> +	p_stats->iscsi_rx_r2t_pdu_cnt =
> +	    HILO_64_REGPAIR(ustats.iscsi_rx_r2t_pdu_cnt);
> +	p_stats->iscsi_rx_total_pdu_cnt =
> +	    HILO_64_REGPAIR(ustats.iscsi_rx_total_pdu_cnt);
> +}
> +
> +static void _qed_iscsi_get_xstats(struct qed_hwfn *p_hwfn,
> +				  struct qed_ptt *p_ptt,
> +				  struct qed_iscsi_stats *p_stats)
> +{
> +	struct xstorm_iscsi_stats_drv xstats;
> +	u32 xstats_addr;
> +
> +	memset(&xstats, 0, sizeof(xstats));
> +	xstats_addr = BAR0_MAP_REG_XSDM_RAM +
> +		      XSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
> +	qed_memcpy_from(p_hwfn, p_ptt, &xstats, xstats_addr, sizeof(xstats));
> +
> +	p_stats->iscsi_tx_go_to_slow_start_event_cnt =
> +	    HILO_64_REGPAIR(xstats.iscsi_tx_go_to_slow_start_event_cnt);
> +	p_stats->iscsi_tx_fast_retransmit_event_cnt =
> +	    HILO_64_REGPAIR(xstats.iscsi_tx_fast_retransmit_event_cnt);
> +}
> +
> +static void _qed_iscsi_get_ystats(struct qed_hwfn *p_hwfn,
> +				  struct qed_ptt *p_ptt,
> +				  struct qed_iscsi_stats *p_stats)
> +{
> +	struct ystorm_iscsi_stats_drv ystats;
> +	u32 ystats_addr;
> +
> +	memset(&ystats, 0, sizeof(ystats));
> +	ystats_addr = BAR0_MAP_REG_YSDM_RAM +
> +		      YSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
> +	qed_memcpy_from(p_hwfn, p_ptt, &ystats, ystats_addr, sizeof(ystats));
> +
> +	p_stats->iscsi_tx_data_pdu_cnt =
> +	    HILO_64_REGPAIR(ystats.iscsi_tx_data_pdu_cnt);
> +	p_stats->iscsi_tx_r2t_pdu_cnt =
> +	    HILO_64_REGPAIR(ystats.iscsi_tx_r2t_pdu_cnt);
> +	p_stats->iscsi_tx_total_pdu_cnt =
> +	    HILO_64_REGPAIR(ystats.iscsi_tx_total_pdu_cnt);
> +}
> +
> +static void _qed_iscsi_get_pstats(struct qed_hwfn *p_hwfn,
> +				  struct qed_ptt *p_ptt,
> +				  struct qed_iscsi_stats *p_stats)
> +{
> +	struct pstorm_iscsi_stats_drv pstats;
> +	u32 pstats_addr;
> +
> +	memset(&pstats, 0, sizeof(pstats));
> +	pstats_addr = BAR0_MAP_REG_PSDM_RAM +
> +		      PSTORM_ISCSI_TX_STATS_OFFSET(p_hwfn->rel_pf_id);
> +	qed_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, sizeof(pstats));
> +
> +	p_stats->iscsi_tx_bytes_cnt =
> +	    HILO_64_REGPAIR(pstats.iscsi_tx_bytes_cnt);
> +	p_stats->iscsi_tx_packet_cnt =
> +	    HILO_64_REGPAIR(pstats.iscsi_tx_packet_cnt);
> +}
> +
> +static int qed_iscsi_get_stats(struct qed_hwfn *p_hwfn,
> +			       struct qed_iscsi_stats *stats)
> +{
> +	struct qed_ptt *p_ptt;
> +
> +	memset(stats, 0, sizeof(*stats));
> +
> +	p_ptt = qed_ptt_acquire(p_hwfn);
> +	if (!p_ptt) {
> +		DP_ERR(p_hwfn, "Failed to acquire ptt\n");
> +		return -EAGAIN;
> +	}
> +
> +	_qed_iscsi_get_tstats(p_hwfn, p_ptt, stats);
> +	_qed_iscsi_get_mstats(p_hwfn, p_ptt, stats);
> +	_qed_iscsi_get_ustats(p_hwfn, p_ptt, stats);
> +
> +	_qed_iscsi_get_xstats(p_hwfn, p_ptt, stats);
> +	_qed_iscsi_get_ystats(p_hwfn, p_ptt, stats);
> +	_qed_iscsi_get_pstats(p_hwfn, p_ptt, stats);
> +
> +	qed_ptt_release(p_hwfn, p_ptt);
> +
> +	return 0;
> +}
> +
> +struct qed_hash_iscsi_con {
> +	struct hlist_node node;
> +	struct qed_iscsi_conn *con;
> +};
> +
> +static int qed_fill_iscsi_dev_info(struct qed_dev *cdev,
> +				   struct qed_dev_iscsi_info *info)
> +{
> +	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
> +
> +	int rc;
> +
> +	memset(info, 0, sizeof(*info));
> +	rc = qed_fill_dev_info(cdev, &info->common);
> +
> +	info->primary_dbq_rq_addr =
> +	    qed_iscsi_get_primary_bdq_prod(hwfn, BDQ_ID_RQ);
> +	info->secondary_bdq_rq_addr =
> +	    qed_iscsi_get_secondary_bdq_prod(hwfn, BDQ_ID_RQ);
> +
> +	return rc;
> +}
> +
> +static void qed_register_iscsi_ops(struct qed_dev *cdev,
> +				   struct qed_iscsi_cb_ops *ops, void *cookie)
> +{
> +	cdev->protocol_ops.iscsi = ops;
> +	cdev->ops_cookie = cookie;
> +}
> +
> +static struct qed_hash_iscsi_con *qed_iscsi_get_hash(struct qed_dev *cdev,
> +						     u32 handle)
> +{
> +	struct qed_hash_iscsi_con *hash_con = NULL;
> +
> +	if (!(cdev->flags & QED_FLAG_STORAGE_STARTED))
> +		return NULL;
> +
> +	hash_for_each_possible(cdev->connections, hash_con, node, handle) {
> +		if (hash_con->con->icid == handle)
> +			break;
> +	}
> +
> +	if (!hash_con || (hash_con->con->icid != handle))
> +		return NULL;
> +
> +	return hash_con;
> +}
> +
> +static int qed_iscsi_stop(struct qed_dev *cdev)
> +{
> +	int rc;
> +
> +	if (!(cdev->flags & QED_FLAG_STORAGE_STARTED)) {
> +		DP_NOTICE(cdev, "iscsi already stopped\n");
> +		return 0;
> +	}
> +
> +	if (!hash_empty(cdev->connections)) {
> +		DP_NOTICE(cdev,
> +			  "Can't stop iscsi - not all connections were returned\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Stop the iscsi */
> +	rc = qed_sp_iscsi_func_stop(QED_LEADING_HWFN(cdev),
> +				    QED_SPQ_MODE_EBLOCK, NULL);
> +	cdev->flags &= ~QED_FLAG_STORAGE_STARTED;
> +
> +	return rc;
> +}
> +
> +static int qed_iscsi_start(struct qed_dev *cdev,
> +			   struct qed_iscsi_tid *tasks,
> +			   void *event_context,
> +			   iscsi_event_cb_t async_event_cb)
> +{
> +	int rc;
> +
> +	if (cdev->flags & QED_FLAG_STORAGE_STARTED) {
> +		DP_NOTICE(cdev, "iscsi already started;\n");
> +		return 0;
> +	}
> +
> +	rc = qed_sp_iscsi_func_start(QED_LEADING_HWFN(cdev),
> +				     QED_SPQ_MODE_EBLOCK, NULL, event_context,
> +				     async_event_cb);
> +	if (rc) {
> +		DP_NOTICE(cdev, "Failed to start iscsi\n");
> +		return rc;
> +	}
> +
> +	cdev->flags |= QED_FLAG_STORAGE_STARTED;
> +	hash_init(cdev->connections);
> +
> +	if (tasks) {
> +		struct qed_tid_mem *tid_info = kzalloc(sizeof(*tid_info),
> +						       GFP_KERNEL);
> +
> +		if (!tid_info) {
> +			DP_NOTICE(cdev,
> +				  "Failed to allocate tasks information\n");
> +			qed_iscsi_stop(cdev);
> +			return -ENOMEM;
> +		}
> +
> +		rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
> +					      tid_info);
> +		if (rc) {
> +			DP_NOTICE(cdev, "Failed to gather task information\n");
> +			qed_iscsi_stop(cdev);
> +			kfree(tid_info);
> +			return rc;
> +		}
> +
> +		/* Fill task information */
> +		tasks->size = tid_info->tid_size;
> +		tasks->num_tids_per_block = tid_info->num_tids_per_block;
> +		memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);
> +
> +		kfree(tid_info);
> +	}
> +
> +	return 0;
> +}
> +
> +static int qed_iscsi_acquire_conn(struct qed_dev *cdev,
> +				  u32 *handle,
> +				  u32 *fw_cid, void __iomem **p_doorbell)
> +{
> +	struct qed_hash_iscsi_con *hash_con;
> +	int rc;
> +
> +	/* Allocate a hashed connection */
> +	hash_con = kzalloc(sizeof(*hash_con), GFP_ATOMIC);
> +	if (!hash_con) {
> +		DP_NOTICE(cdev, "Failed to allocate hashed connection\n");
> +		return -ENOMEM;
> +	}
> +
> +	/* Acquire the connection */
> +	rc = qed_iscsi_acquire_connection(QED_LEADING_HWFN(cdev), NULL,
> +					  &hash_con->con);
> +	if (rc) {
> +		DP_NOTICE(cdev, "Failed to acquire Connection\n");
> +		kfree(hash_con);
> +		return rc;
> +	}
> +
> +	/* Added the connection to hash table */
> +	*handle = hash_con->con->icid;
> +	*fw_cid = hash_con->con->fw_cid;
> +	hash_add(cdev->connections, &hash_con->node, *handle);
> +
> +	if (p_doorbell)
> +		*p_doorbell = qed_iscsi_get_db_addr(QED_LEADING_HWFN(cdev),
> +						    *handle);
> +
> +	return 0;
> +}
> +
> +static int qed_iscsi_release_conn(struct qed_dev *cdev, u32 handle)
> +{
> +	struct qed_hash_iscsi_con *hash_con;
> +
> +	hash_con = qed_iscsi_get_hash(cdev, handle);
> +	if (!hash_con) {
> +		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
> +			  handle);
> +		return -EINVAL;
> +	}
> +
> +	hlist_del(&hash_con->node);
> +	qed_iscsi_release_connection(QED_LEADING_HWFN(cdev), hash_con->con);
> +	kfree(hash_con);
> +
> +	return 0;
> +}
> +
> +static int qed_iscsi_offload_conn(struct qed_dev *cdev,
> +				  u32 handle,
> +				  struct qed_iscsi_params_offload *conn_info)
> +{
> +	struct qed_hash_iscsi_con *hash_con;
> +	struct qed_iscsi_conn *con;
> +
> +	hash_con = qed_iscsi_get_hash(cdev, handle);
> +	if (!hash_con) {
> +		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
> +			  handle);
> +		return -EINVAL;
> +	}
> +
> +	/* Update the connection with information from the params */
> +	con = hash_con->con;
> +
> +	ether_addr_copy(con->local_mac, conn_info->src.mac);
> +	ether_addr_copy(con->remote_mac, conn_info->dst.mac);
> +	memcpy(con->local_ip, conn_info->src.ip, sizeof(con->local_ip));
> +	memcpy(con->remote_ip, conn_info->dst.ip, sizeof(con->remote_ip));
> +	con->local_port = conn_info->src.port;
> +	con->remote_port = conn_info->dst.port;
> +
> +	con->layer_code = conn_info->layer_code;
> +	con->sq_pbl_addr = conn_info->sq_pbl_addr;
> +	con->initial_ack = conn_info->initial_ack;
> +	con->vlan_id = conn_info->vlan_id;
> +	con->tcp_flags = conn_info->tcp_flags;
> +	con->ip_version = conn_info->ip_version;
> +	con->default_cq = conn_info->default_cq;
> +	con->ka_max_probe_cnt = conn_info->ka_max_probe_cnt;
> +	con->dup_ack_theshold = conn_info->dup_ack_theshold;
> +	con->rcv_next = conn_info->rcv_next;
> +	con->snd_una = conn_info->snd_una;
> +	con->snd_next = conn_info->snd_next;
> +	con->snd_max = conn_info->snd_max;
> +	con->snd_wnd = conn_info->snd_wnd;
> +	con->rcv_wnd = conn_info->rcv_wnd;
> +	con->snd_wl1 = conn_info->snd_wl1;
> +	con->cwnd = conn_info->cwnd;
> +	con->ss_thresh = conn_info->ss_thresh;
> +	con->srtt = conn_info->srtt;
> +	con->rtt_var = conn_info->rtt_var;
> +	con->ts_time = conn_info->ts_time;
> +	con->ts_recent = conn_info->ts_recent;
> +	con->ts_recent_age = conn_info->ts_recent_age;
> +	con->total_rt = conn_info->total_rt;
> +	con->ka_timeout_delta = conn_info->ka_timeout_delta;
> +	con->rt_timeout_delta = conn_info->rt_timeout_delta;
> +	con->dup_ack_cnt = conn_info->dup_ack_cnt;
> +	con->snd_wnd_probe_cnt = conn_info->snd_wnd_probe_cnt;
> +	con->ka_probe_cnt = conn_info->ka_probe_cnt;
> +	con->rt_cnt = conn_info->rt_cnt;
> +	con->flow_label = conn_info->flow_label;
> +	con->ka_timeout = conn_info->ka_timeout;
> +	con->ka_interval = conn_info->ka_interval;
> +	con->max_rt_time = conn_info->max_rt_time;
> +	con->initial_rcv_wnd = conn_info->initial_rcv_wnd;
> +	con->ttl = conn_info->ttl;
> +	con->tos_or_tc = conn_info->tos_or_tc;
> +	con->remote_port = conn_info->remote_port;
> +	con->local_port = conn_info->local_port;
> +	con->mss = conn_info->mss;
> +	con->snd_wnd_scale = conn_info->snd_wnd_scale;
> +	con->rcv_wnd_scale = conn_info->rcv_wnd_scale;
> +	con->ts_ticks_per_second = conn_info->ts_ticks_per_second;
> +	con->da_timeout_value = conn_info->da_timeout_value;
> +	con->ack_frequency = conn_info->ack_frequency;
> +
> +	/* Set default values on other connection fields */
> +	con->offl_flags = 0x1;
> +
> +	return qed_sp_iscsi_conn_offload(QED_LEADING_HWFN(cdev), con,
> +					 QED_SPQ_MODE_EBLOCK, NULL);
> +}
> +
> +static int qed_iscsi_update_conn(struct qed_dev *cdev,
> +				 u32 handle,
> +				 struct qed_iscsi_params_update *conn_info)
> +{
> +	struct qed_hash_iscsi_con *hash_con;
> +	struct qed_iscsi_conn *con;
> +
> +	hash_con = qed_iscsi_get_hash(cdev, handle);
> +	if (!hash_con) {
> +		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
> +			  handle);
> +		return -EINVAL;
> +	}
> +
> +	/* Update the connection with information from the params */
> +	con = hash_con->con;
> +	con->update_flag = conn_info->update_flag;
> +	con->max_seq_size = conn_info->max_seq_size;
> +	con->max_recv_pdu_length = conn_info->max_recv_pdu_length;
> +	con->max_send_pdu_length = conn_info->max_send_pdu_length;
> +	con->first_seq_length = conn_info->first_seq_length;
> +	con->exp_stat_sn = conn_info->exp_stat_sn;
> +
> +	return qed_sp_iscsi_conn_update(QED_LEADING_HWFN(cdev), con,
> +					QED_SPQ_MODE_EBLOCK, NULL);
> +}
> +
> +static int qed_iscsi_clear_conn_sq(struct qed_dev *cdev, u32 handle)
> +{
> +	struct qed_hash_iscsi_con *hash_con;
> +
> +	hash_con = qed_iscsi_get_hash(cdev, handle);
> +	if (!hash_con) {
> +		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
> +			  handle);
> +		return -EINVAL;
> +	}
> +
> +	return qed_sp_iscsi_conn_clear_sq(QED_LEADING_HWFN(cdev),
> +					  hash_con->con,
> +					  QED_SPQ_MODE_EBLOCK, NULL);
> +}
> +
> +static int qed_iscsi_destroy_conn(struct qed_dev *cdev,
> +				  u32 handle, u8 abrt_conn)
> +{
> +	struct qed_hash_iscsi_con *hash_con;
> +
> +	hash_con = qed_iscsi_get_hash(cdev, handle);
> +	if (!hash_con) {
> +		DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
> +			  handle);
> +		return -EINVAL;
> +	}
> +
> +	hash_con->con->abortive_dsconnect = abrt_conn;
> +
> +	return qed_sp_iscsi_conn_terminate(QED_LEADING_HWFN(cdev),
> +					   hash_con->con,
> +					   QED_SPQ_MODE_EBLOCK, NULL);
> +}
> +
> +static int qed_iscsi_stats(struct qed_dev *cdev, struct qed_iscsi_stats *stats)
> +{
> +	return qed_iscsi_get_stats(QED_LEADING_HWFN(cdev), stats);
> +}
> +
> +static const struct qed_iscsi_ops qed_iscsi_ops_pass = {
> +	.common = &qed_common_ops_pass,
> +	.ll2 = &qed_ll2_ops_pass,
> +	.fill_dev_info = &qed_fill_iscsi_dev_info,
> +	.register_ops = &qed_register_iscsi_ops,
> +	.start = &qed_iscsi_start,
> +	.stop = &qed_iscsi_stop,
> +	.acquire_conn = &qed_iscsi_acquire_conn,
> +	.release_conn = &qed_iscsi_release_conn,
> +	.offload_conn = &qed_iscsi_offload_conn,
> +	.update_conn = &qed_iscsi_update_conn,
> +	.destroy_conn = &qed_iscsi_destroy_conn,
> +	.clear_sq = &qed_iscsi_clear_conn_sq,
> +	.get_stats = &qed_iscsi_stats,
> +};
> +
> +const struct qed_iscsi_ops *qed_get_iscsi_ops(void)
> +{
> +	return &qed_iscsi_ops_pass;
> +}
> +EXPORT_SYMBOL(qed_get_iscsi_ops);
> +
> +void qed_put_iscsi_ops(void)
> +{
> +}
> +EXPORT_SYMBOL(qed_put_iscsi_ops);
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.h b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> new file mode 100644
> index 0000000..269848c
> --- /dev/null
> +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> @@ -0,0 +1,52 @@
> +/* QLogic qed NIC Driver
> + * Copyright (c) 2015 QLogic Corporation
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#ifndef _QED_ISCSI_H
> +#define _QED_ISCSI_H
> +#include <linux/types.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/qed/tcp_common.h>
> +#include <linux/qed/qed_iscsi_if.h>
> +#include <linux/qed/qed_chain.h>
> +#include "qed.h"
> +#include "qed_hsi.h"
> +#include "qed_mcp.h"
> +#include "qed_sp.h"
> +
> +struct qed_iscsi_info {
> +	spinlock_t lock;
> +	struct list_head free_list;
> +	u16 max_num_outstanding_tasks;
> +	void *event_context;
> +	iscsi_event_cb_t event_cb;
> +};
> +
> +#ifdef CONFIG_QED_LL2
> +extern const struct qed_ll2_ops qed_ll2_ops_pass;
> +#endif
> +
> +#if IS_ENABLED(CONFIG_QEDI)
> +struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn);
> +
> +void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
> +		     struct qed_iscsi_info *p_iscsi_info);
> +
> +void qed_iscsi_free(struct qed_hwfn *p_hwfn,
> +		    struct qed_iscsi_info *p_iscsi_info);
> +#else /* IS_ENABLED(CONFIG_QEDI) */
> +static inline struct qed_iscsi_info *qed_iscsi_alloc(
> +		struct qed_hwfn *p_hwfn) { return NULL; }
> +static inline void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
> +		struct qed_iscsi_info *p_iscsi_info) {}
> +static inline void qed_iscsi_free(struct qed_hwfn *p_hwfn,
> +		struct qed_iscsi_info *p_iscsi_info) {}
> +#endif /* IS_ENABLED(CONFIG_QEDI) */
> +
> +#endif
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
> index ddd410a..07e2f77 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
> @@ -2187,6 +2187,5 @@ const struct qed_eth_ops *qed_get_eth_ops(void)
>  
>  void qed_put_eth_ops(void)
>  {
> -	/* TODO - reference count for module? */
>  }
>  EXPORT_SYMBOL(qed_put_eth_ops);
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> index a6db107..e67f3c9 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> @@ -299,6 +299,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  		p_tx->cur_completing_bd_idx = 1;
>  		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
>  		tx_frag = p_pkt->bds_set[0].tx_frag;
> +#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
>  		if (p_ll2_conn->gsi_enable)
>  			qed_ll2b_release_tx_gsi_packet(p_hwfn,
>  						       p_ll2_conn->my_id,
> @@ -307,6 +308,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  						       b_last_frag,
>  						       b_last_packet);
>  		else
> +#endif
>  			qed_ll2b_complete_tx_packet(p_hwfn,
>  						    p_ll2_conn->my_id,
>  						    p_pkt->cookie,
Huh? What is that doing here?
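[For context on the construct being questioned here: the kernel's IS_ENABLED() macro expands to a constant 0 or 1, so config-dependent code can usually live inside an ordinary if and let the compiler discard the dead branch, instead of an #if/#endif pair that straddles half of an if/else as in the hunk above. A minimal userspace sketch of the pattern -- CONFIG_QEDR_ENABLED, gsi_complete() and plain_complete() are stand-ins, not names from the driver:]

```c
/* Stand-in for a Kconfig symbol; the kernel's IS_ENABLED(CONFIG_X)
 * likewise expands to a constant 0 or 1. */
#define CONFIG_QEDR_ENABLED 0

static const char *gsi_complete(void)   { return "gsi completion"; }
static const char *plain_complete(void) { return "plain completion"; }

/* Both branches are always parsed and type-checked; the compiler
 * eliminates the dead one.  No preprocessor block cuts an if/else
 * in half, so there is no dangling "else" when the config is off. */
static const char *complete_tx_packet(void)
{
	if (CONFIG_QEDR_ENABLED)
		return gsi_complete();
	else
		return plain_complete();
}
```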

> @@ -367,6 +369,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
>  
>  		spin_unlock_irqrestore(&p_tx->lock, flags);
>  		tx_frag = p_pkt->bds_set[0].tx_frag;
> +#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
>  		if (p_ll2_conn->gsi_enable)
>  			qed_ll2b_complete_tx_gsi_packet(p_hwfn,
>  							p_ll2_conn->my_id,
> @@ -374,6 +377,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
>  							tx_frag,
>  							b_last_frag, !num_bds);
>  		else
> +#endif
>  			qed_ll2b_complete_tx_packet(p_hwfn,
>  						    p_ll2_conn->my_id,
>  						    p_pkt->cookie,
> @@ -421,6 +425,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
>  			  "Mismatch between active_descq and the LL2 Rx chain\n");
>  	list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
>  
> +#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
>  	spin_unlock_irqrestore(&p_rx->lock, lock_flags);
>  	qed_ll2b_complete_rx_gsi_packet(p_hwfn,
>  					p_ll2_info->my_id,
> @@ -433,6 +438,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
>  					src_mac_addrhi,
>  					src_mac_addrlo, b_last_cqe);
>  	spin_lock_irqsave(&p_rx->lock, lock_flags);
> +#endif
>  
>  	return 0;
>  }
> @@ -1516,11 +1522,12 @@ static void qed_ll2_register_cb_ops(struct qed_dev *cdev,
>  
>  static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
>  {
> -	struct qed_ll2_info ll2_info;
> +	struct qed_ll2_info *ll2_info;
>  	struct qed_ll2_buffer *buffer;
>  	enum qed_ll2_conn_type conn_type;
>  	struct qed_ptt *p_ptt;
>  	int rc, i;
> +	u8 gsi_enable = 1;
>  
>  	/* Initialize LL2 locks & lists */
>  	INIT_LIST_HEAD(&cdev->ll2->list);
> @@ -1552,6 +1559,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
>  	switch (QED_LEADING_HWFN(cdev)->hw_info.personality) {
>  	case QED_PCI_ISCSI:
>  		conn_type = QED_LL2_TYPE_ISCSI;
> +		gsi_enable = 0;
>  		break;
>  	case QED_PCI_ETH_ROCE:
>  		conn_type = QED_LL2_TYPE_ROCE;
> @@ -1561,18 +1569,23 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
>  	}
>  
>  	/* Prepare the temporary ll2 information */
> -	memset(&ll2_info, 0, sizeof(ll2_info));
> -	ll2_info.conn_type = conn_type;
> -	ll2_info.mtu = params->mtu;
> -	ll2_info.rx_drop_ttl0_flg = params->drop_ttl0_packets;
> -	ll2_info.rx_vlan_removal_en = params->rx_vlan_stripping;
> -	ll2_info.tx_tc = 0;
> -	ll2_info.tx_dest = CORE_TX_DEST_NW;
> -	ll2_info.gsi_enable = 1;
> -
> -	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), &ll2_info,
> +	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
> +	if (!ll2_info) {
> +		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
> +		goto fail;
> +	}
> +	ll2_info->conn_type = conn_type;
> +	ll2_info->mtu = params->mtu;
> +	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
> +	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
> +	ll2_info->tx_tc = 0;
> +	ll2_info->tx_dest = CORE_TX_DEST_NW;
> +	ll2_info->gsi_enable = gsi_enable;
> +
> +	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), ll2_info,
>  					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
>  					&cdev->ll2->handle);
> +	kfree(ll2_info);
>  	if (rc) {
>  		DP_INFO(cdev, "Failed to acquire LL2 connection\n");
>  		goto fail;
Where is the benefit of this hunk? And is it related to iSCSI?
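[The likely benefit, for the record: struct qed_ll2_info grows with the iSCSI additions, and a large structure on the fixed-size kernel stack risks frame-size warnings or overflow, so the temporary is moved to the heap for the duration of the acquire call. A hedged userspace sketch of the shape of the change -- struct ll2_info, acquire() and ll2_start() are illustrative stand-ins for the driver's types and functions:]

```c
#include <stdlib.h>

/* Stand-in for struct qed_ll2_info: big enough that a stack
 * copy would be unwelcome in kernel context. */
struct ll2_info {
	unsigned int mtu;
	char pad[4096];
};

/* Stand-in for qed_ll2_acquire_connection(). */
static int acquire(const struct ll2_info *info)
{
	return info->mtu ? 0 : -1;
}

static int ll2_start(unsigned int mtu)
{
	/* Before: "struct ll2_info info;" on the stack, memset to 0.
	 * After: heap allocation, freed as soon as acquire() returns,
	 * since the structure is only a scratch parameter block. */
	struct ll2_info *info = calloc(1, sizeof(*info));
	int rc;

	if (!info)
		return -1;
	info->mtu = mtu;
	rc = acquire(info);
	free(info);
	return rc;
}
```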

> diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
> index 4ee3151..a01ad9d 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_main.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
> @@ -1239,7 +1239,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
>  	if (link.link_up)
>  		if_link->link_up = true;
>  
> -	/* TODO - at the moment assume supported and advertised speed equal */
>  	if_link->supported_caps = QED_LM_FIBRE_BIT;
>  	if (params.speed.autoneg)
>  		if_link->supported_caps |= QED_LM_Autoneg_BIT;
> @@ -1294,7 +1293,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
>  	if (link.link_up)
>  		if_link->speed = link.speed;
>  
> -	/* TODO - fill duplex properly */
>  	if_link->duplex = DUPLEX_FULL;
>  	qed_mcp_get_media_type(hwfn->cdev, &media_type);
>  	if_link->port = qed_get_port_type(media_type);
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
> index dff520e..2e5f51b 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
> @@ -314,9 +314,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
>  
>  /* Using hwfn number (and not pf_num) is required since in CMT mode,
>   * same pf_num may be used by two different hwfn
> - * TODO - this shouldn't really be in .h file, but until all fields
> - * required during hw-init will be placed in their correct place in shmem
> - * we need it in qed_dev.c [for readin the nvram reflection in shmem].
>   */
>  #define MCP_PF_ID_BY_REL(p_hwfn, rel_pfid) (QED_IS_BB((p_hwfn)->cdev) ?	       \
>  					    ((rel_pfid) |		       \
> @@ -324,9 +321,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
>  					    rel_pfid)
>  #define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
>  
> -/* TODO - this is only correct as long as only BB is supported, and
> - * no port-swapping is implemented; Afterwards we'll need to fix it.
> - */
>  #define MFW_PORT(_p_hwfn)       ((_p_hwfn)->abs_pf_id %	\
>  				 ((_p_hwfn)->cdev->num_ports_in_engines * 2))
>  struct qed_mcp_info {
Please split off the patch and use a separate one to remove all the TODO
entries. They do not relate to the iSCSI offload bit.

> diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> index b414a05..9754420 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> @@ -82,6 +82,8 @@
>  	0x1c80000UL
>  #define BAR0_MAP_REG_XSDM_RAM \
>  	0x1e00000UL
> +#define BAR0_MAP_REG_YSDM_RAM \
> +	0x1e80000UL
>  #define  NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF \
>  	0x5011f4UL
>  #define  PRS_REG_SEARCH_TCP \
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> index caff415..d3fa578 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> @@ -24,6 +24,7 @@
>  #include "qed_hsi.h"
>  #include "qed_hw.h"
>  #include "qed_int.h"
> +#include "qed_iscsi.h"
>  #include "qed_mcp.h"
>  #include "qed_reg_addr.h"
>  #include "qed_sp.h"
> @@ -249,6 +250,20 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
>  		return qed_sriov_eqe_event(p_hwfn,
>  					   p_eqe->opcode,
>  					   p_eqe->echo, &p_eqe->data);
> +	case PROTOCOLID_ISCSI:
> +		if (!IS_ENABLED(CONFIG_QEDI))
> +			return -EINVAL;
> +
> +		if (p_hwfn->p_iscsi_info->event_cb) {
> +			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
> +
> +			return p_iscsi->event_cb(p_iscsi->event_context,
> +						 p_eqe->opcode, &p_eqe->data);
> +		} else {
> +			DP_NOTICE(p_hwfn,
> +				  "iSCSI async completion is not set\n");
> +			return -EINVAL;
> +		}
>  	default:
>  		DP_NOTICE(p_hwfn,
>  			  "Unknown Async completion for protocol: %d\n",
> diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
> index f9ae903..c0c9fa8 100644
> --- a/include/linux/qed/qed_if.h
> +++ b/include/linux/qed/qed_if.h
> @@ -165,6 +165,7 @@ struct qed_iscsi_pf_params {
>  	u32 max_cwnd;
>  	u16 cq_num_entries;
>  	u16 cmdq_num_entries;
> +	u32 two_msl_timer;
>  	u16 dup_ack_threshold;
>  	u16 tx_sws_timer;
>  	u16 min_rto;
> @@ -271,6 +272,7 @@ struct qed_dev_info {
>  enum qed_sb_type {
>  	QED_SB_TYPE_L2_QUEUE,
>  	QED_SB_TYPE_CNQ,
> +	QED_SB_TYPE_STORAGE,
>  };
>  
>  enum qed_protocol {
> diff --git a/include/linux/qed/qed_iscsi_if.h b/include/linux/qed/qed_iscsi_if.h
> new file mode 100644
> index 0000000..6735ee5
> --- /dev/null
> +++ b/include/linux/qed/qed_iscsi_if.h
> @@ -0,0 +1,249 @@
> +/* QLogic qed NIC Driver
Again, this is the iSCSI driver, is it not?

> + * Copyright (c) 2015 QLogic Corporation
> + *
And you _might_ want to check the copyright, seeing that it's being
posted from the cavium.com domain ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [RFC 2/6] qed: Add iSCSI out of order packet handling.
  2016-10-19  5:01   ` manish.rangankar
@ 2016-10-19  7:36   ` Hannes Reinecke
  2016-10-20 12:58     ` Mintz, Yuval
  -1 siblings, 1 reply; 38+ messages in thread
From: Hannes Reinecke @ 2016-10-19  7:36 UTC (permalink / raw)
  To: manish.rangankar, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Yuval Mintz, Arun Easi

On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> 
> This patch adds out of order packet handling for hardware offloaded
> iSCSI. Out of order packet handling requires driver buffer allocation
> and assistance.
> 
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> ---
>  drivers/net/ethernet/qlogic/qed/Makefile   |   2 +-
>  drivers/net/ethernet/qlogic/qed/qed.h      |   1 +
>  drivers/net/ethernet/qlogic/qed/qed_dev.c  |  14 +-
>  drivers/net/ethernet/qlogic/qed/qed_ll2.c  | 559 +++++++++++++++++++++++++++--
>  drivers/net/ethernet/qlogic/qed/qed_ll2.h  |   9 +
>  drivers/net/ethernet/qlogic/qed/qed_ooo.c  | 510 ++++++++++++++++++++++++++
>  drivers/net/ethernet/qlogic/qed/qed_ooo.h  | 116 ++++++
>  drivers/net/ethernet/qlogic/qed/qed_roce.c |   1 +
>  drivers/net/ethernet/qlogic/qed/qed_spq.c  |   9 +
>  9 files changed, 1195 insertions(+), 26 deletions(-)
>  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.c
>  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_ooo.h
> 
> diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
> index b76669c..9121bf0 100644
> --- a/drivers/net/ethernet/qlogic/qed/Makefile
> +++ b/drivers/net/ethernet/qlogic/qed/Makefile
> @@ -6,4 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
>  qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
>  qed-$(CONFIG_QED_LL2) += qed_ll2.o
>  qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
> -qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
> +qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o qed_ooo.o
> diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
> index a61b1c0..e5626ae 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed.h
> @@ -380,6 +380,7 @@ struct qed_hwfn {
>  	/* Protocol related */
>  	bool				using_ll2;
>  	struct qed_ll2_info		*p_ll2_info;
> +	struct qed_ooo_info		*p_ooo_info;
>  	struct qed_rdma_info		*p_rdma_info;
>  	struct qed_iscsi_info		*p_iscsi_info;
>  	struct qed_pf_params		pf_params;
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
> index a4234c0..060e9a4 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
> @@ -32,6 +32,7 @@
>  #include "qed_iscsi.h"
>  #include "qed_ll2.h"
>  #include "qed_mcp.h"
> +#include "qed_ooo.h"
>  #include "qed_reg_addr.h"
>  #include "qed_sp.h"
>  #include "qed_sriov.h"
> @@ -157,8 +158,10 @@ void qed_resc_free(struct qed_dev *cdev)
>  		qed_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
>  #endif
>  		if (IS_ENABLED(CONFIG_QEDI) &&
> -				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> +				p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
>  			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
> +			qed_ooo_free(p_hwfn, p_hwfn->p_ooo_info);
> +		}
>  		qed_iov_free(p_hwfn);
>  		qed_dmae_info_free(p_hwfn);
>  		qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
> @@ -416,6 +419,7 @@ int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
>  int qed_resc_alloc(struct qed_dev *cdev)
>  {
>  	struct qed_iscsi_info *p_iscsi_info;
> +	struct qed_ooo_info *p_ooo_info;
>  #ifdef CONFIG_QED_LL2
>  	struct qed_ll2_info *p_ll2_info;
>  #endif
> @@ -543,6 +547,10 @@ int qed_resc_alloc(struct qed_dev *cdev)
>  			if (!p_iscsi_info)
>  				goto alloc_no_mem;
>  			p_hwfn->p_iscsi_info = p_iscsi_info;
> +			p_ooo_info = qed_ooo_alloc(p_hwfn);
> +			if (!p_ooo_info)
> +				goto alloc_no_mem;
> +			p_hwfn->p_ooo_info = p_ooo_info;
>  		}
>  
>  		/* DMA info initialization */
> @@ -598,8 +606,10 @@ void qed_resc_setup(struct qed_dev *cdev)
>  			qed_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
>  #endif
>  		if (IS_ENABLED(CONFIG_QEDI) &&
> -				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> +				p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
>  			qed_iscsi_setup(p_hwfn, p_hwfn->p_iscsi_info);
> +			qed_ooo_setup(p_hwfn, p_hwfn->p_ooo_info);
> +		}
>  	}
>  }
>  
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> index e67f3c9..4ce12e9 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> @@ -36,6 +36,7 @@
>  #include "qed_int.h"
>  #include "qed_ll2.h"
>  #include "qed_mcp.h"
> +#include "qed_ooo.h"
>  #include "qed_reg_addr.h"
>  #include "qed_sp.h"
>  
> @@ -295,27 +296,36 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  		list_del(&p_pkt->list_entry);
>  		b_last_packet = list_empty(&p_tx->active_descq);
>  		list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
> -		p_tx->cur_completing_packet = *p_pkt;
> -		p_tx->cur_completing_bd_idx = 1;
> -		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
> -		tx_frag = p_pkt->bds_set[0].tx_frag;
> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
> +			struct qed_ooo_buffer *p_buffer;
> +
> +			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
> +			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						p_buffer);
> +		} else {
> +			p_tx->cur_completing_packet = *p_pkt;
> +			p_tx->cur_completing_bd_idx = 1;
> +			b_last_frag = p_tx->cur_completing_bd_idx ==
> +				p_pkt->bd_used;
> +			tx_frag = p_pkt->bds_set[0].tx_frag;
>  #if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
> -		if (p_ll2_conn->gsi_enable)
> -			qed_ll2b_release_tx_gsi_packet(p_hwfn,
> -						       p_ll2_conn->my_id,
> -						       p_pkt->cookie,
> -						       tx_frag,
> -						       b_last_frag,
> -						       b_last_packet);
> -		else
> +			if (p_ll2_conn->gsi_enable)
> +				qed_ll2b_release_tx_gsi_packet(p_hwfn,
> +					       p_ll2_conn->my_id,
> +					       p_pkt->cookie,
> +					       tx_frag,
> +					       b_last_frag,
> +					       b_last_packet);
> +			else
>  #endif
> -			qed_ll2b_complete_tx_packet(p_hwfn,
> +				qed_ll2b_complete_tx_packet(p_hwfn,
>  						    p_ll2_conn->my_id,
>  						    p_pkt->cookie,
>  						    tx_frag,
>  						    b_last_frag,
>  						    b_last_packet);
> -
> +		}
>  	}
>  }
>  
> @@ -546,13 +556,466 @@ void qed_ll2_rxq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  		list_del(&p_pkt->list_entry);
>  		list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
>  
> -		rx_buf_addr = p_pkt->rx_buf_addr;
> -		cookie = p_pkt->cookie;
> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
> +			struct qed_ooo_buffer *p_buffer;
> +
> +			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
> +			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						p_buffer);
> +		} else {
> +			rx_buf_addr = p_pkt->rx_buf_addr;
> +			cookie = p_pkt->cookie;
> +
> +			b_last = list_empty(&p_rx->active_descq);
> +		}
> +	}
> +}
> +
> +#if IS_ENABLED(CONFIG_QEDI)
> +static u8 qed_ll2_convert_rx_parse_to_tx_flags(u16 parse_flags)
> +{
> +	u8 bd_flags = 0;
> +
> +	if (GET_FIELD(parse_flags, PARSING_AND_ERR_FLAGS_TAG8021QEXIST))
> +		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_VLAN_INSERTION, 1);
> +
> +	return bd_flags;
> +}
> +
> +static int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
> +{
> +	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
> +	struct qed_ll2_rx_queue *p_rx = &p_ll2_conn->rx_queue;
> +	u16 packet_length = 0, parse_flags = 0, vlan = 0;
> +	struct qed_ll2_rx_packet *p_pkt = NULL;
> +	u32 num_ooo_add_to_peninsula = 0, cid;
> +	union core_rx_cqe_union *cqe = NULL;
> +	u16 cq_new_idx = 0, cq_old_idx = 0;
> +	struct qed_ooo_buffer *p_buffer;
> +	struct ooo_opaque *iscsi_ooo;
> +	u8 placement_offset = 0;
> +	u8 cqe_type;
> +	int rc;
> +
> +	cq_new_idx = le16_to_cpu(*p_rx->p_fw_cons);
> +	cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
> +	if (cq_new_idx == cq_old_idx)
> +		return 0;
> +
> +	while (cq_new_idx != cq_old_idx) {
> +		struct core_rx_fast_path_cqe *p_cqe_fp;
> +
> +		cqe = qed_chain_consume(&p_rx->rcq_chain);
> +		cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
> +		cqe_type = cqe->rx_cqe_sp.type;
> +
> +		if (cqe_type != CORE_RX_CQE_TYPE_REGULAR) {
> +			DP_NOTICE(p_hwfn,
> +				  "Got a non-regular LB LL2 completion [type 0x%02x]\n",
> +				  cqe_type);
> +			return -EINVAL;
> +		}
> +		p_cqe_fp = &cqe->rx_cqe_fp;
> +
> +		placement_offset = p_cqe_fp->placement_offset;
> +		parse_flags = le16_to_cpu(p_cqe_fp->parse_flags.flags);
> +		packet_length = le16_to_cpu(p_cqe_fp->packet_length);
> +		vlan = le16_to_cpu(p_cqe_fp->vlan);
> +		iscsi_ooo = (struct ooo_opaque *)&p_cqe_fp->opaque_data;
> +		qed_ooo_save_history_entry(p_hwfn, p_hwfn->p_ooo_info,
> +					   iscsi_ooo);
> +		cid = le32_to_cpu(iscsi_ooo->cid);
> +
> +		/* Process delete isle first */
> +		if (iscsi_ooo->drop_size)
> +			qed_ooo_delete_isles(p_hwfn, p_hwfn->p_ooo_info, cid,
> +					     iscsi_ooo->drop_isle,
> +					     iscsi_ooo->drop_size);
> +
> +		if (iscsi_ooo->ooo_opcode == TCP_EVENT_NOP)
> +			continue;
> +
> +		/* Now process create/add/join isles */
> +		if (list_empty(&p_rx->active_descq)) {
> +			DP_NOTICE(p_hwfn,
> +				  "LL2 OOO RX chain has no submitted buffers\n");
> +			return -EIO;
> +		}
> +
> +		p_pkt = list_first_entry(&p_rx->active_descq,
> +					 struct qed_ll2_rx_packet, list_entry);
> +
> +		if ((iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_NEW_ISLE) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_RIGHT) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_LEFT) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_PEN) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_JOIN)) {
> +			if (!p_pkt) {
> +				DP_NOTICE(p_hwfn,
> +					  "LL2 OOO RX packet is not valid\n");
> +				return -EIO;
> +			}
> +			list_del(&p_pkt->list_entry);
> +			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
> +			p_buffer->packet_length = packet_length;
> +			p_buffer->parse_flags = parse_flags;
> +			p_buffer->vlan = vlan;
> +			p_buffer->placement_offset = placement_offset;
> +			qed_chain_consume(&p_rx->rxq_chain);
> +			list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
> +
> +			switch (iscsi_ooo->ooo_opcode) {
> +			case TCP_EVENT_ADD_NEW_ISLE:
> +				qed_ooo_add_new_isle(p_hwfn,
> +						     p_hwfn->p_ooo_info,
> +						     cid,
> +						     iscsi_ooo->ooo_isle,
> +						     p_buffer);
> +				break;
> +			case TCP_EVENT_ADD_ISLE_RIGHT:
> +				qed_ooo_add_new_buffer(p_hwfn,
> +						       p_hwfn->p_ooo_info,
> +						       cid,
> +						       iscsi_ooo->ooo_isle,
> +						       p_buffer,
> +						       QED_OOO_RIGHT_BUF);
> +				break;
> +			case TCP_EVENT_ADD_ISLE_LEFT:
> +				qed_ooo_add_new_buffer(p_hwfn,
> +						       p_hwfn->p_ooo_info,
> +						       cid,
> +						       iscsi_ooo->ooo_isle,
> +						       p_buffer,
> +						       QED_OOO_LEFT_BUF);
> +				break;
> +			case TCP_EVENT_JOIN:
> +				qed_ooo_add_new_buffer(p_hwfn,
> +						       p_hwfn->p_ooo_info,
> +						       cid,
> +						       iscsi_ooo->ooo_isle +
> +						       1,
> +						       p_buffer,
> +						       QED_OOO_LEFT_BUF);
> +				qed_ooo_join_isles(p_hwfn,
> +						   p_hwfn->p_ooo_info,
> +						   cid, iscsi_ooo->ooo_isle);
> +				break;
> +			case TCP_EVENT_ADD_PEN:
> +				num_ooo_add_to_peninsula++;
> +				qed_ooo_put_ready_buffer(p_hwfn,
> +							 p_hwfn->p_ooo_info,
> +							 p_buffer, true);
> +				break;
> +			}
> +		} else {
> +			DP_NOTICE(p_hwfn,
> +				  "Unexpected event (%d) TX OOO completion\n",
> +				  iscsi_ooo->ooo_opcode);
> +		}
> +	}
>  
> -		b_last = list_empty(&p_rx->active_descq);
> +	/* Submit RX buffer here */
> +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> +						   p_hwfn->p_ooo_info))) {
> +		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
> +					    p_buffer->rx_buffer_phys_addr,
> +					    0, p_buffer, true);
> +		if (rc) {
> +			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						p_buffer);
> +			break;
> +		}
>  	}
> +
> +	/* Submit Tx buffers here */
> +	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
> +						    p_hwfn->p_ooo_info))) {
> +		u16 l4_hdr_offset_w = 0;
> +		dma_addr_t first_frag;
> +		u8 bd_flags = 0;
> +
> +		first_frag = p_buffer->rx_buffer_phys_addr +
> +			     p_buffer->placement_offset;
> +		parse_flags = p_buffer->parse_flags;
> +		bd_flags = qed_ll2_convert_rx_parse_to_tx_flags(parse_flags);
> +		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_FORCE_VLAN_MODE, 1);
> +		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_L4_PROTOCOL, 1);
> +
> +		rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
> +					       p_buffer->vlan, bd_flags,
> +					       l4_hdr_offset_w,
> +					       p_ll2_conn->tx_dest, 0,
> +					       first_frag,
> +					       p_buffer->packet_length,
> +					       p_buffer, true);
> +		if (rc) {
> +			qed_ooo_put_ready_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						 p_buffer, false);
> +			break;
> +		}
> +	}
> +
> +	return 0;
>  }
>  
> +static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
> +{
> +	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
> +	struct qed_ll2_tx_queue *p_tx = &p_ll2_conn->tx_queue;
> +	struct qed_ll2_tx_packet *p_pkt = NULL;
> +	struct qed_ooo_buffer *p_buffer;
> +	bool b_dont_submit_rx = false;
> +	u16 new_idx = 0, num_bds = 0;
> +	int rc;
> +
> +	new_idx = le16_to_cpu(*p_tx->p_fw_cons);
> +	num_bds = ((s16)new_idx - (s16)p_tx->bds_idx);
> +
> +	if (!num_bds)
> +		return 0;
> +
> +	while (num_bds) {
> +		if (list_empty(&p_tx->active_descq))
> +			return -EINVAL;
> +
> +		p_pkt = list_first_entry(&p_tx->active_descq,
> +					 struct qed_ll2_tx_packet, list_entry);
> +		if (!p_pkt)
> +			return -EINVAL;
> +
> +		if (p_pkt->bd_used != 1) {
> +			DP_NOTICE(p_hwfn,
> +				  "Unexpectedly many BDs(%d) in TX OOO completion\n",
> +				  p_pkt->bd_used);
> +			return -EINVAL;
> +		}
> +
> +		list_del(&p_pkt->list_entry);
> +
> +		num_bds--;
> +		p_tx->bds_idx++;
> +		qed_chain_consume(&p_tx->txq_chain);
> +
> +		p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
> +		list_add_tail(&p_pkt->list_entry, &p_tx->free_descq);
> +
> +		if (b_dont_submit_rx) {
> +			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						p_buffer);
> +			continue;
> +		}
> +
> +		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
> +					    p_buffer->rx_buffer_phys_addr, 0,
> +					    p_buffer, true);
> +		if (rc != 0) {
> +			qed_ooo_put_free_buffer(p_hwfn,
> +						p_hwfn->p_ooo_info, p_buffer);
> +			b_dont_submit_rx = true;
> +		}
> +	}
> +
> +	/* Submit Tx buffers here */
> +	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
> +						    p_hwfn->p_ooo_info))) {
> +		u16 l4_hdr_offset_w = 0, parse_flags = p_buffer->parse_flags;
> +		dma_addr_t first_frag;
> +		u8 bd_flags = 0;
> +
> +		first_frag = p_buffer->rx_buffer_phys_addr +
> +		    p_buffer->placement_offset;
> +		bd_flags = qed_ll2_convert_rx_parse_to_tx_flags(parse_flags);
> +		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_FORCE_VLAN_MODE, 1);
> +		SET_FIELD(bd_flags, CORE_TX_BD_FLAGS_L4_PROTOCOL, 1);
> +		rc = qed_ll2_prepare_tx_packet(p_hwfn, p_ll2_conn->my_id, 1,
> +					       p_buffer->vlan, bd_flags,
> +					       l4_hdr_offset_w,
> +					       p_ll2_conn->tx_dest, 0,
> +					       first_frag,
> +					       p_buffer->packet_length,
> +					       p_buffer, true);
> +		if (rc) {
> +			qed_ooo_put_ready_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						 p_buffer, false);
> +			break;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +qed_ll2_acquire_connection_ooo(struct qed_hwfn *p_hwfn,
> +			       struct qed_ll2_info *p_ll2_info,
> +			       u16 rx_num_ooo_buffers, u16 mtu)
> +{
> +	struct qed_ooo_buffer *p_buf = NULL;
> +	void *p_virt;
> +	u16 buf_idx;
> +	int rc = 0;
> +
> +	if (p_ll2_info->conn_type != QED_LL2_TYPE_ISCSI_OOO)
> +		return rc;
> +
> +	if (!rx_num_ooo_buffers)
> +		return -EINVAL;
> +
> +	for (buf_idx = 0; buf_idx < rx_num_ooo_buffers; buf_idx++) {
> +		p_buf = kzalloc(sizeof(*p_buf), GFP_KERNEL);
> +		if (!p_buf) {
> +			DP_NOTICE(p_hwfn,
> +				  "Failed to allocate ooo descriptor\n");
> +			rc = -ENOMEM;
> +			goto out;
> +		}
> +
> +		p_buf->rx_buffer_size = mtu + 26 + ETH_CACHE_LINE_SIZE;
> +		p_buf->rx_buffer_size = (p_buf->rx_buffer_size +
> +					 ETH_CACHE_LINE_SIZE - 1) &
> +					~(ETH_CACHE_LINE_SIZE - 1);
> +		p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
> +					    p_buf->rx_buffer_size,
> +					    &p_buf->rx_buffer_phys_addr,
> +					    GFP_KERNEL);
> +		if (!p_virt) {
> +			DP_NOTICE(p_hwfn, "Failed to allocate ooo buffer\n");
> +			kfree(p_buf);
> +			rc = -ENOMEM;
> +			goto out;
> +		}
> +
> +		p_buf->rx_buffer_virt_addr = p_virt;
> +		qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info, p_buf);
> +	}
> +
> +	DP_VERBOSE(p_hwfn, QED_MSG_LL2,
> +		   "Allocated [%04x] LL2 OOO buffers [each of size 0x%08x]\n",
> +		   rx_num_ooo_buffers, p_buf->rx_buffer_size);
> +
> +out:
> +	return rc;
> +}
> +
> +static void
> +qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
> +				 struct qed_ll2_info *p_ll2_conn)
> +{
> +	struct qed_ooo_buffer *p_buffer;
> +	int rc;
> +
> +	if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
> +		return;
> +
> +	qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
> +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> +						   p_hwfn->p_ooo_info))) {
> +		rc = qed_ll2_post_rx_buffer(p_hwfn,
> +					    p_ll2_conn->my_id,
> +					    p_buffer->rx_buffer_phys_addr,
> +					    0, p_buffer, true);
> +		if (rc) {
> +			qed_ooo_put_free_buffer(p_hwfn,
> +						p_hwfn->p_ooo_info, p_buffer);
> +			break;
> +		}
> +	}
> +}
> +
> +static void qed_ll2_release_connection_ooo(struct qed_hwfn *p_hwfn,
> +					   struct qed_ll2_info *p_ll2_conn)
> +{
> +	struct qed_ooo_buffer *p_buffer;
> +
> +	if (p_ll2_conn->conn_type != QED_LL2_TYPE_ISCSI_OOO)
> +		return;
> +
> +	qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
> +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> +						   p_hwfn->p_ooo_info))) {
> +		dma_free_coherent(&p_hwfn->cdev->pdev->dev,
> +				  p_buffer->rx_buffer_size,
> +				  p_buffer->rx_buffer_virt_addr,
> +				  p_buffer->rx_buffer_phys_addr);
> +		kfree(p_buffer);
> +	}
> +}
> +
> +static void qed_ll2_stop_ooo(struct qed_dev *cdev)
> +{
> +	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
> +	u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
> +
> +	DP_VERBOSE(cdev, QED_MSG_STORAGE, "Stopping LL2 OOO queue [%02x]\n",
> +		   *handle);
> +
> +	qed_ll2_terminate_connection(hwfn, *handle);
> +	qed_ll2_release_connection(hwfn, *handle);
> +	*handle = QED_LL2_UNUSED_HANDLE;
> +}
> +
> +static int qed_ll2_start_ooo(struct qed_dev *cdev,
> +			     struct qed_ll2_params *params)
> +{
> +	struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
> +	u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
> +	struct qed_ll2_info *ll2_info;
> +	int rc;
> +
> +	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
> +	if (!ll2_info) {
> +		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
> +		return -ENOMEM;
> +	}
> +	ll2_info->conn_type = QED_LL2_TYPE_ISCSI_OOO;
> +	ll2_info->mtu = params->mtu;
> +	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
> +	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
> +	ll2_info->tx_tc = OOO_LB_TC;
> +	ll2_info->tx_dest = CORE_TX_DEST_LB;
> +
> +	rc = qed_ll2_acquire_connection(hwfn, ll2_info,
> +					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
> +					handle);
> +	kfree(ll2_info);
> +	if (rc) {
> +		DP_INFO(cdev, "Failed to acquire LL2 OOO connection\n");
> +		goto out;
> +	}
> +
> +	rc = qed_ll2_establish_connection(hwfn, *handle);
> +	if (rc) {
> +		DP_INFO(cdev, "Failed to establish LL2 OOO connection\n");
> +		goto fail;
> +	}
> +
> +	return 0;
> +
> +fail:
> +	qed_ll2_release_connection(hwfn, *handle);
> +out:
> +	*handle = QED_LL2_UNUSED_HANDLE;
> +	return rc;
> +}
> +#else /* IS_ENABLED(CONFIG_QEDI) */
> +static inline int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn,
> +		void *p_cookie) { return -EINVAL; }
> +static inline int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn,
> +		void *p_cookie) { return -EINVAL; }
> +static inline int
> +qed_ll2_acquire_connection_ooo(struct qed_hwfn *p_hwfn,
> +			struct qed_ll2_info *p_ll2_info,
> +			u16 rx_num_ooo_buffers, u16 mtu) { return -EINVAL; }
> +static inline void
> +qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn,
> +			struct qed_ll2_info *p_ll2_conn) { return; }
> +static inline void qed_ll2_release_connection_ooo(struct qed_hwfn *p_hwfn,
> +			struct qed_ll2_info *p_ll2_conn) { return; }
> +static inline void qed_ll2_stop_ooo(struct qed_dev *cdev) { return; }
> +static inline int qed_ll2_start_ooo(struct qed_dev *cdev,
> +			struct qed_ll2_params *params) { return -EINVAL; }
> +#endif /* IS_ENABLED(CONFIG_QEDI) */
> +
>  static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
>  				     struct qed_ll2_info *p_ll2_conn,
>  				     u8 action_on_error)
> @@ -594,7 +1057,8 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
>  	p_ramrod->drop_ttl0_flg = p_ll2_conn->rx_drop_ttl0_flg;
>  	p_ramrod->inner_vlan_removal_en = p_ll2_conn->rx_vlan_removal_en;
>  	p_ramrod->queue_id = p_ll2_conn->queue_id;
> -	p_ramrod->main_func_queue = 1;
> +	p_ramrod->main_func_queue = (conn_type == QED_LL2_TYPE_ISCSI_OOO) ? 0
> +									  : 1;
>  
>  	if ((IS_MF_DEFAULT(p_hwfn) || IS_MF_SI(p_hwfn)) &&
>  	    p_ramrod->main_func_queue && (conn_type != QED_LL2_TYPE_ROCE)) {
> @@ -625,6 +1089,11 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
>  	if (!QED_LL2_TX_REGISTERED(p_ll2_conn))
>  		return 0;
>  
> +	if (p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
> +		p_ll2_conn->tx_stats_en = 0;
> +	else
> +		p_ll2_conn->tx_stats_en = 1;
> +
>  	/* Get SPQ entry */
>  	memset(&init_data, 0, sizeof(init_data));
>  	init_data.cid = p_ll2_conn->cid;
> @@ -642,7 +1111,6 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
>  	p_ramrod->sb_id = cpu_to_le16(qed_int_get_sp_sb_id(p_hwfn));
>  	p_ramrod->sb_index = p_tx->tx_sb_index;
>  	p_ramrod->mtu = cpu_to_le16(p_ll2_conn->mtu);
> -	p_ll2_conn->tx_stats_en = 1;
>  	p_ramrod->stats_en = p_ll2_conn->tx_stats_en;
>  	p_ramrod->stats_id = p_ll2_conn->tx_stats_id;
>  
> @@ -866,9 +1334,22 @@ int qed_ll2_acquire_connection(struct qed_hwfn *p_hwfn,
>  	if (rc)
>  		goto q_allocate_fail;
>  
> +	if (IS_ENABLED(CONFIG_QEDI)) {
> +		rc = qed_ll2_acquire_connection_ooo(p_hwfn, p_ll2_info,
> +					    rx_num_desc * 2, p_params->mtu);
> +		if (rc)
> +			goto q_allocate_fail;
> +	}
> +
>  	/* Register callbacks for the Rx/Tx queues */
> -	comp_rx_cb = qed_ll2_rxq_completion;
> -	comp_tx_cb = qed_ll2_txq_completion;
> +	if (IS_ENABLED(CONFIG_QEDI) &&
> +			p_params->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
> +		comp_rx_cb = qed_ll2_lb_rxq_completion;
> +		comp_tx_cb = qed_ll2_lb_txq_completion;
> +	} else {
> +		comp_rx_cb = qed_ll2_rxq_completion;
> +		comp_tx_cb = qed_ll2_txq_completion;
> +	}
>  
>  	if (rx_num_desc) {
>  		qed_int_register_cb(p_hwfn, comp_rx_cb,
> @@ -981,6 +1462,9 @@ int qed_ll2_establish_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  	if (p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)
>  		qed_wr(p_hwfn, p_hwfn->p_main_ptt, PRS_REG_USE_LIGHT_L2, 1);
>  
> +	if (IS_ENABLED(CONFIG_QEDI))
> +		qed_ll2_establish_connection_ooo(p_hwfn, p_ll2_conn);
> +
>  	return rc;
>  }
>  
> @@ -1223,6 +1707,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
>  			      u16 vlan,
>  			      u8 bd_flags,
>  			      u16 l4_hdr_offset_w,
> +			      enum qed_ll2_tx_dest e_tx_dest,
>  			      enum qed_ll2_roce_flavor_type qed_roce_flavor,
>  			      dma_addr_t first_frag,
>  			      u16 first_frag_len, void *cookie, u8 notify_fw)
> @@ -1232,6 +1717,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
>  	enum core_roce_flavor_type roce_flavor;
>  	struct qed_ll2_tx_queue *p_tx;
>  	struct qed_chain *p_tx_chain;
> +	enum core_tx_dest tx_dest;
>  	unsigned long flags;
>  	int rc = 0;
>  
> @@ -1262,6 +1748,8 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
>  		goto out;
>  	}
>  
> +	tx_dest = e_tx_dest == QED_LL2_TX_DEST_NW ? CORE_TX_DEST_NW :
> +						    CORE_TX_DEST_LB;
>  	if (qed_roce_flavor == QED_LL2_ROCE) {
>  		roce_flavor = CORE_ROCE;
>  	} else if (qed_roce_flavor == QED_LL2_RROCE) {
> @@ -1276,7 +1764,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
>  				      num_of_bds, first_frag,
>  				      first_frag_len, cookie, notify_fw);
>  	qed_ll2_prepare_tx_packet_set_bd(p_hwfn, p_ll2_conn, p_curp,
> -					 num_of_bds, CORE_TX_DEST_NW,
> +					 num_of_bds, tx_dest,
>  					 vlan, bd_flags, l4_hdr_offset_w,
>  					 roce_flavor,
>  					 first_frag, first_frag_len);
> @@ -1351,6 +1839,10 @@ int qed_ll2_terminate_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  		qed_ll2_rxq_flush(p_hwfn, connection_handle);
>  	}
>  
> +	if (IS_ENABLED(CONFIG_QEDI) &&
> +			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO)
> +		qed_ooo_release_all_isles(p_hwfn, p_hwfn->p_ooo_info);
> +
>  	return rc;
>  }
>  
> @@ -1381,6 +1873,9 @@ void qed_ll2_release_connection(struct qed_hwfn *p_hwfn, u8 connection_handle)
>  
>  	qed_cxt_release_cid(p_hwfn, p_ll2_conn->cid);
>  
> +	if (IS_ENABLED(CONFIG_QEDI))
> +		qed_ll2_release_connection_ooo(p_hwfn, p_ll2_conn);
> +
>  	mutex_lock(&p_ll2_conn->mutex);
>  	p_ll2_conn->b_active = false;
>  	mutex_unlock(&p_ll2_conn->mutex);
> @@ -1628,6 +2123,18 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
>  		goto release_terminate;
>  	}
>  
> +	if (IS_ENABLED(CONFIG_QEDI) &&
> +		(cdev->hwfns[0].hw_info.personality == QED_PCI_ISCSI) &&
> +		cdev->hwfns[0].pf_params.iscsi_pf_params.ooo_enable) {
> +		DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
> +		rc = qed_ll2_start_ooo(cdev, params);
> +		if (rc) {
> +			DP_INFO(cdev,
> +				"Failed to initialize the OOO LL2 queue\n");
> +			goto release_terminate;
> +		}
> +	}
> +
>  	p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
>  	if (!p_ptt) {
>  		DP_INFO(cdev, "Failed to acquire PTT\n");
> @@ -1677,6 +2184,11 @@ static int qed_ll2_stop(struct qed_dev *cdev)
>  	qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt);
>  	eth_zero_addr(cdev->ll2_mac_address);
>  
> +	if (IS_ENABLED(CONFIG_QEDI) &&
> +		(cdev->hwfns[0].hw_info.personality == QED_PCI_ISCSI) &&
> +		cdev->hwfns[0].pf_params.iscsi_pf_params.ooo_enable)
> +		qed_ll2_stop_ooo(cdev);
> +
>  	rc = qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev),
>  					  cdev->ll2->handle);
>  	if (rc)
> @@ -1731,7 +2243,8 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb)
>  	rc = qed_ll2_prepare_tx_packet(QED_LEADING_HWFN(cdev),
>  				       cdev->ll2->handle,
>  				       1 + skb_shinfo(skb)->nr_frags,
> -				       vlan, flags, 0, 0 /* RoCE FLAVOR */,
> +				       vlan, flags, 0, QED_LL2_TX_DEST_NW,
> +				       0 /* RoCE FLAVOR */,
>  				       mapping, skb->len, skb, 1);
>  	if (rc)
>  		goto err;
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.h b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
> index 80a5dc2..2b31d30 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.h
> @@ -41,6 +41,12 @@ enum qed_ll2_conn_type {
>  	MAX_QED_LL2_RX_CONN_TYPE
>  };
>  
> +enum qed_ll2_tx_dest {
> +	QED_LL2_TX_DEST_NW, /* Light L2 TX Destination to the Network */
> +	QED_LL2_TX_DEST_LB, /* Light L2 TX Destination to the Loopback */
> +	QED_LL2_TX_DEST_MAX
> +};
> +
>  struct qed_ll2_rx_packet {
>  	struct list_head list_entry;
>  	struct core_rx_bd_with_buff_len *rxq_bd;
> @@ -192,6 +198,8 @@ int qed_ll2_post_rx_buffer(struct qed_hwfn *p_hwfn,
>   * @param l4_hdr_offset_w	L4 Header Offset from start of packet
>   *				(in words). This is needed if both l4_csum
>   *				and ipv6_ext are set
> + * @param e_tx_dest             indicates if the packet is to be transmitted via
> + *                              loopback or to the network
>   * @param first_frag
>   * @param first_frag_len
>   * @param cookie
> @@ -206,6 +214,7 @@ int qed_ll2_prepare_tx_packet(struct qed_hwfn *p_hwfn,
>  			      u16 vlan,
>  			      u8 bd_flags,
>  			      u16 l4_hdr_offset_w,
> +			      enum qed_ll2_tx_dest e_tx_dest,
>  			      enum qed_ll2_roce_flavor_type qed_roce_flavor,
>  			      dma_addr_t first_frag,
>  			      u16 first_frag_len, void *cookie, u8 notify_fw);
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_ooo.c b/drivers/net/ethernet/qlogic/qed/qed_ooo.c
> new file mode 100644
> index 0000000..a037a6f
> --- /dev/null
> +++ b/drivers/net/ethernet/qlogic/qed/qed_ooo.c
> @@ -0,0 +1,510 @@
> +/* QLogic qed NIC Driver
> + * Copyright (c) 2015 QLogic Corporation
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include <linux/types.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/pci.h>
> +#include <linux/slab.h>
> +#include <linux/string.h>
> +#include "qed.h"
> +#include "qed_iscsi.h"
> +#include "qed_ll2.h"
> +#include "qed_ooo.h"
> +
> +static struct qed_ooo_archipelago
> +*qed_ooo_seek_archipelago(struct qed_hwfn *p_hwfn,
> +			  struct qed_ooo_info *p_ooo_info,
> +			  u32 cid)
> +{
> +	struct qed_ooo_archipelago *p_archipelago = NULL;
> +
> +	list_for_each_entry(p_archipelago,
> +			    &p_ooo_info->archipelagos_list, list_entry) {
> +		if (p_archipelago->cid == cid)
> +			return p_archipelago;
> +	}
> +
> +	return NULL;
> +}
> +
> +static struct qed_ooo_isle *qed_ooo_seek_isle(struct qed_hwfn *p_hwfn,
> +					      struct qed_ooo_info *p_ooo_info,
> +					      u32 cid, u8 isle)
> +{
> +	struct qed_ooo_archipelago *p_archipelago = NULL;
> +	struct qed_ooo_isle *p_isle = NULL;
> +	u8 the_num_of_isle = 1;
> +
> +	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
> +	if (!p_archipelago) {
> +		DP_NOTICE(p_hwfn,
> +			  "Connection %d is not found in OOO list\n", cid);
> +		return NULL;
> +	}
> +
> +	list_for_each_entry(p_isle, &p_archipelago->isles_list, list_entry) {
> +		if (the_num_of_isle == isle)
> +			return p_isle;
> +		the_num_of_isle++;
> +	}
> +
> +	return NULL;
> +}
> +
> +void qed_ooo_save_history_entry(struct qed_hwfn *p_hwfn,
> +				struct qed_ooo_info *p_ooo_info,
> +				struct ooo_opaque *p_cqe)
> +{
> +	struct qed_ooo_history *p_history = &p_ooo_info->ooo_history;
> +
> +	if (p_history->head_idx == p_history->num_of_cqes)
> +		p_history->head_idx = 0;
> +	p_history->p_cqes[p_history->head_idx] = *p_cqe;
> +	p_history->head_idx++;
> +}
> +
> +struct qed_ooo_info *qed_ooo_alloc(struct qed_hwfn *p_hwfn)
> +{
> +	struct qed_ooo_info *p_ooo_info;
> +	u16 max_num_archipelagos = 0;
> +	u16 max_num_isles = 0;
> +	u32 i;
> +
> +	if (p_hwfn->hw_info.personality != QED_PCI_ISCSI) {
> +		DP_NOTICE(p_hwfn,
> +			  "Failed to allocate qed_ooo_info: unknown personality\n");
> +		return NULL;
> +	}
> +
> +	max_num_archipelagos = p_hwfn->pf_params.iscsi_pf_params.num_cons;
> +	max_num_isles = QED_MAX_NUM_ISLES + max_num_archipelagos;
> +
> +	if (!max_num_archipelagos) {
> +		DP_NOTICE(p_hwfn,
> +			  "Failed to allocate qed_ooo_info: unknown amount of connections\n");
> +		return NULL;
> +	}
> +
> +	p_ooo_info = kzalloc(sizeof(*p_ooo_info), GFP_KERNEL);
> +	if (!p_ooo_info) {
> +		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info\n");
> +		return NULL;
> +	}
> +
> +	INIT_LIST_HEAD(&p_ooo_info->free_buffers_list);
> +	INIT_LIST_HEAD(&p_ooo_info->ready_buffers_list);
> +	INIT_LIST_HEAD(&p_ooo_info->free_isles_list);
> +	INIT_LIST_HEAD(&p_ooo_info->free_archipelagos_list);
> +	INIT_LIST_HEAD(&p_ooo_info->archipelagos_list);
> +
> +	p_ooo_info->p_isles_mem = kcalloc(max_num_isles,
> +					  sizeof(struct qed_ooo_isle),
> +					  GFP_KERNEL);
> +	if (!p_ooo_info->p_isles_mem) {
> +		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info(isles)\n");
> +		goto no_isles_mem;
> +	}
> +
> +	for (i = 0; i < max_num_isles; i++) {
> +		INIT_LIST_HEAD(&p_ooo_info->p_isles_mem[i].buffers_list);
> +		list_add_tail(&p_ooo_info->p_isles_mem[i].list_entry,
> +			      &p_ooo_info->free_isles_list);
> +	}
> +
> +	p_ooo_info->p_archipelagos_mem =
> +				kcalloc(max_num_archipelagos,
> +					sizeof(struct qed_ooo_archipelago),
> +					GFP_KERNEL);
> +	if (!p_ooo_info->p_archipelagos_mem) {
> +		DP_NOTICE(p_hwfn,
> +			  "Failed to allocate qed_ooo_info(archipelagos)\n");
> +		goto no_archipelagos_mem;
> +	}
> +
> +	for (i = 0; i < max_num_archipelagos; i++) {
> +		INIT_LIST_HEAD(&p_ooo_info->p_archipelagos_mem[i].isles_list);
> +		list_add_tail(&p_ooo_info->p_archipelagos_mem[i].list_entry,
> +			      &p_ooo_info->free_archipelagos_list);
> +	}
> +
> +	p_ooo_info->ooo_history.p_cqes =
> +				kcalloc(QED_MAX_NUM_OOO_HISTORY_ENTRIES,
> +					sizeof(struct ooo_opaque),
> +					GFP_KERNEL);
> +	if (!p_ooo_info->ooo_history.p_cqes) {
> +		DP_NOTICE(p_hwfn, "Failed to allocate qed_ooo_info(history)\n");
> +		goto no_history_mem;
> +	}
> +
> +	return p_ooo_info;
> +
> +no_history_mem:
> +	kfree(p_ooo_info->p_archipelagos_mem);
> +no_archipelagos_mem:
> +	kfree(p_ooo_info->p_isles_mem);
> +no_isles_mem:
> +	kfree(p_ooo_info);
> +	return NULL;
> +}
> +
> +void qed_ooo_release_connection_isles(struct qed_hwfn *p_hwfn,
> +				      struct qed_ooo_info *p_ooo_info, u32 cid)
> +{
> +	struct qed_ooo_archipelago *p_archipelago;
> +	struct qed_ooo_buffer *p_buffer;
> +	struct qed_ooo_isle *p_isle;
> +	bool b_found = false;
> +
> +	if (list_empty(&p_ooo_info->archipelagos_list))
> +		return;
> +
> +	list_for_each_entry(p_archipelago,
> +			    &p_ooo_info->archipelagos_list, list_entry) {
> +		if (p_archipelago->cid == cid) {
> +			list_del(&p_archipelago->list_entry);
> +			b_found = true;
> +			break;
> +		}
> +	}
> +
> +	if (!b_found)
> +		return;
> +
> +	while (!list_empty(&p_archipelago->isles_list)) {
> +		p_isle = list_first_entry(&p_archipelago->isles_list,
> +					  struct qed_ooo_isle, list_entry);
> +
> +		list_del(&p_isle->list_entry);
> +
> +		while (!list_empty(&p_isle->buffers_list)) {
> +			p_buffer = list_first_entry(&p_isle->buffers_list,
> +						    struct qed_ooo_buffer,
> +						    list_entry);
> +
> +			if (!p_buffer)
> +				break;
> +
> +			list_del(&p_buffer->list_entry);
> +			list_add_tail(&p_buffer->list_entry,
> +				      &p_ooo_info->free_buffers_list);
> +		}
> +		list_add_tail(&p_isle->list_entry,
> +			      &p_ooo_info->free_isles_list);
> +	}
> +
> +	list_add_tail(&p_archipelago->list_entry,
> +		      &p_ooo_info->free_archipelagos_list);
> +}
> +
> +void qed_ooo_release_all_isles(struct qed_hwfn *p_hwfn,
> +			       struct qed_ooo_info *p_ooo_info)
> +{
> +	struct qed_ooo_archipelago *p_arch;
> +	struct qed_ooo_buffer *p_buffer;
> +	struct qed_ooo_isle *p_isle;
> +
> +	while (!list_empty(&p_ooo_info->archipelagos_list)) {
> +		p_arch = list_first_entry(&p_ooo_info->archipelagos_list,
> +					  struct qed_ooo_archipelago,
> +					  list_entry);
> +
> +		list_del(&p_arch->list_entry);
> +
> +		while (!list_empty(&p_arch->isles_list)) {
> +			p_isle = list_first_entry(&p_arch->isles_list,
> +						  struct qed_ooo_isle,
> +						  list_entry);
> +
> +			list_del(&p_isle->list_entry);
> +
> +			while (!list_empty(&p_isle->buffers_list)) {
> +				p_buffer =
> +				    list_first_entry(&p_isle->buffers_list,
> +						     struct qed_ooo_buffer,
> +						     list_entry);
> +
> +				if (!p_buffer)
> +					break;
> +
> +				list_del(&p_buffer->list_entry);
> +				list_add_tail(&p_buffer->list_entry,
> +					      &p_ooo_info->free_buffers_list);
> +			}
> +			list_add_tail(&p_isle->list_entry,
> +				      &p_ooo_info->free_isles_list);
> +		}
> +		list_add_tail(&p_arch->list_entry,
> +			      &p_ooo_info->free_archipelagos_list);
> +	}
> +	if (!list_empty(&p_ooo_info->ready_buffers_list))
> +		list_splice_tail_init(&p_ooo_info->ready_buffers_list,
> +				      &p_ooo_info->free_buffers_list);
> +}
> +
> +void qed_ooo_setup(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info)
> +{
> +	qed_ooo_release_all_isles(p_hwfn, p_ooo_info);
> +	memset(p_ooo_info->ooo_history.p_cqes, 0,
> +	       p_ooo_info->ooo_history.num_of_cqes *
> +	       sizeof(struct ooo_opaque));
> +	p_ooo_info->ooo_history.head_idx = 0;
> +}
> +
> +void qed_ooo_free(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info)
> +{
> +	struct qed_ooo_buffer *p_buffer;
> +
> +	qed_ooo_release_all_isles(p_hwfn, p_ooo_info);
> +	while (!list_empty(&p_ooo_info->free_buffers_list)) {
> +		p_buffer = list_first_entry(&p_ooo_info->free_buffers_list,
> +					    struct qed_ooo_buffer, list_entry);
> +
> +		if (!p_buffer)
> +			break;
> +
> +		list_del(&p_buffer->list_entry);
> +		dma_free_coherent(&p_hwfn->cdev->pdev->dev,
> +				  p_buffer->rx_buffer_size,
> +				  p_buffer->rx_buffer_virt_addr,
> +				  p_buffer->rx_buffer_phys_addr);
> +		kfree(p_buffer);
> +	}
> +
> +	kfree(p_ooo_info->p_isles_mem);
> +	kfree(p_ooo_info->p_archipelagos_mem);
> +	kfree(p_ooo_info->ooo_history.p_cqes);
> +	kfree(p_ooo_info);
> +}
> +
> +void qed_ooo_put_free_buffer(struct qed_hwfn *p_hwfn,
> +			     struct qed_ooo_info *p_ooo_info,
> +			     struct qed_ooo_buffer *p_buffer)
> +{
> +	list_add_tail(&p_buffer->list_entry, &p_ooo_info->free_buffers_list);
> +}
> +
> +struct qed_ooo_buffer *qed_ooo_get_free_buffer(struct qed_hwfn *p_hwfn,
> +					       struct qed_ooo_info *p_ooo_info)
> +{
> +	struct qed_ooo_buffer *p_buffer = NULL;
> +
> +	if (!list_empty(&p_ooo_info->free_buffers_list)) {
> +		p_buffer = list_first_entry(&p_ooo_info->free_buffers_list,
> +					    struct qed_ooo_buffer, list_entry);
> +
> +		list_del(&p_buffer->list_entry);
> +	}
> +
> +	return p_buffer;
> +}
> +
> +void qed_ooo_put_ready_buffer(struct qed_hwfn *p_hwfn,
> +			      struct qed_ooo_info *p_ooo_info,
> +			      struct qed_ooo_buffer *p_buffer, u8 on_tail)
> +{
> +	if (on_tail)
> +		list_add_tail(&p_buffer->list_entry,
> +			      &p_ooo_info->ready_buffers_list);
> +	else
> +		list_add(&p_buffer->list_entry,
> +			 &p_ooo_info->ready_buffers_list);
> +}
> +
> +struct qed_ooo_buffer *qed_ooo_get_ready_buffer(struct qed_hwfn *p_hwfn,
> +						struct qed_ooo_info *p_ooo_info)
> +{
> +	struct qed_ooo_buffer *p_buffer = NULL;
> +
> +	if (!list_empty(&p_ooo_info->ready_buffers_list)) {
> +		p_buffer = list_first_entry(&p_ooo_info->ready_buffers_list,
> +					    struct qed_ooo_buffer, list_entry);
> +
> +		list_del(&p_buffer->list_entry);
> +	}
> +
> +	return p_buffer;
> +}
> +
> +void qed_ooo_delete_isles(struct qed_hwfn *p_hwfn,
> +			  struct qed_ooo_info *p_ooo_info,
> +			  u32 cid, u8 drop_isle, u8 drop_size)
> +{
> +	struct qed_ooo_archipelago *p_archipelago = NULL;
> +	struct qed_ooo_isle *p_isle = NULL;
> +	u8 isle_idx;
> +
> +	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
> +	for (isle_idx = 0; isle_idx < drop_size; isle_idx++) {
> +		p_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid, drop_isle);
> +		if (!p_isle) {
> +			DP_NOTICE(p_hwfn,
> +				  "Isle %d is not found (cid %d)\n",
> +				  drop_isle, cid);
> +			return;
> +		}
> +		if (list_empty(&p_isle->buffers_list))
> +			DP_NOTICE(p_hwfn,
> +				  "Isle %d is empty (cid %d)\n", drop_isle, cid);
> +		else
> +			list_splice_tail_init(&p_isle->buffers_list,
> +					      &p_ooo_info->free_buffers_list);
> +
> +		list_del(&p_isle->list_entry);
> +		p_ooo_info->cur_isles_number--;
> +		list_add(&p_isle->list_entry, &p_ooo_info->free_isles_list);
> +	}
> +
> +	if (list_empty(&p_archipelago->isles_list)) {
> +		list_del(&p_archipelago->list_entry);
> +		list_add(&p_archipelago->list_entry,
> +			 &p_ooo_info->free_archipelagos_list);
> +	}
> +}
> +
> +void qed_ooo_add_new_isle(struct qed_hwfn *p_hwfn,
> +			  struct qed_ooo_info *p_ooo_info,
> +			  u32 cid, u8 ooo_isle,
> +			  struct qed_ooo_buffer *p_buffer)
> +{
> +	struct qed_ooo_archipelago *p_archipelago = NULL;
> +	struct qed_ooo_isle *p_prev_isle = NULL;
> +	struct qed_ooo_isle *p_isle = NULL;
> +
> +	if (ooo_isle > 1) {
> +		p_prev_isle = qed_ooo_seek_isle(p_hwfn,
> +						p_ooo_info, cid, ooo_isle - 1);
> +		if (!p_prev_isle) {
> +			DP_NOTICE(p_hwfn,
> +				  "Isle %d is not found (cid %d)\n",
> +				  ooo_isle - 1, cid);
> +			return;
> +		}
> +	}
> +	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
> +	if (!p_archipelago && (ooo_isle != 1)) {
> +		DP_NOTICE(p_hwfn,
> +			  "Connection %d is not found in OOO list\n", cid);
> +		return;
> +	}
> +
> +	if (!list_empty(&p_ooo_info->free_isles_list)) {
> +		p_isle = list_first_entry(&p_ooo_info->free_isles_list,
> +					  struct qed_ooo_isle, list_entry);
> +
> +		list_del(&p_isle->list_entry);
> +		if (!list_empty(&p_isle->buffers_list)) {
> +			DP_NOTICE(p_hwfn, "Free isle is not empty\n");
> +			INIT_LIST_HEAD(&p_isle->buffers_list);
> +		}
> +	} else {
> +		DP_NOTICE(p_hwfn, "No more free isles\n");
> +		return;
> +	}
> +
> +	if (!p_archipelago &&
> +	    !list_empty(&p_ooo_info->free_archipelagos_list)) {
> +		p_archipelago =
> +		    list_first_entry(&p_ooo_info->free_archipelagos_list,
> +				     struct qed_ooo_archipelago, list_entry);
> +
> +		list_del(&p_archipelago->list_entry);
> +		if (!list_empty(&p_archipelago->isles_list)) {
> +			DP_NOTICE(p_hwfn,
> +				  "Free OOO connection is not empty\n");
> +			INIT_LIST_HEAD(&p_archipelago->isles_list);
> +		}
> +		p_archipelago->cid = cid;
> +		list_add(&p_archipelago->list_entry,
> +			 &p_ooo_info->archipelagos_list);
> +	} else if (!p_archipelago) {
> +		DP_NOTICE(p_hwfn, "No more free OOO connections\n");
> +		list_add(&p_isle->list_entry,
> +			 &p_ooo_info->free_isles_list);
> +		list_add(&p_buffer->list_entry,
> +			 &p_ooo_info->free_buffers_list);
> +		return;
> +	}
> +
> +	list_add(&p_buffer->list_entry, &p_isle->buffers_list);
> +	p_ooo_info->cur_isles_number++;
> +	p_ooo_info->gen_isles_number++;
> +
> +	if (p_ooo_info->cur_isles_number > p_ooo_info->max_isles_number)
> +		p_ooo_info->max_isles_number = p_ooo_info->cur_isles_number;
> +
> +	if (!p_prev_isle)
> +		list_add(&p_isle->list_entry, &p_archipelago->isles_list);
> +	else
> +		list_add(&p_isle->list_entry, &p_prev_isle->list_entry);
> +}
> +
> +void qed_ooo_add_new_buffer(struct qed_hwfn *p_hwfn,
> +			    struct qed_ooo_info *p_ooo_info,
> +			    u32 cid,
> +			    u8 ooo_isle,
> +			    struct qed_ooo_buffer *p_buffer, u8 buffer_side)
> +{
> +	struct qed_ooo_isle *p_isle = NULL;
> +
> +	p_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid, ooo_isle);
> +	if (!p_isle) {
> +		DP_NOTICE(p_hwfn,
> +			  "Isle %d is not found (cid %d)\n", ooo_isle, cid);
> +		return;
> +	}
> +
> +	if (buffer_side == QED_OOO_LEFT_BUF)
> +		list_add(&p_buffer->list_entry, &p_isle->buffers_list);
> +	else
> +		list_add_tail(&p_buffer->list_entry, &p_isle->buffers_list);
> +}
> +
> +void qed_ooo_join_isles(struct qed_hwfn *p_hwfn,
> +			struct qed_ooo_info *p_ooo_info, u32 cid, u8 left_isle)
> +{
> +	struct qed_ooo_archipelago *p_archipelago = NULL;
> +	struct qed_ooo_isle *p_right_isle = NULL;
> +	struct qed_ooo_isle *p_left_isle = NULL;
> +
> +	p_right_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid,
> +					 left_isle + 1);
> +	if (!p_right_isle) {
> +		DP_NOTICE(p_hwfn,
> +			  "Right isle %d is not found (cid %d)\n",
> +			  left_isle + 1, cid);
> +		return;
> +	}
> +
> +	p_archipelago = qed_ooo_seek_archipelago(p_hwfn, p_ooo_info, cid);
> +	list_del(&p_right_isle->list_entry);
> +	p_ooo_info->cur_isles_number--;
> +	if (left_isle) {
> +		p_left_isle = qed_ooo_seek_isle(p_hwfn, p_ooo_info, cid,
> +						left_isle);
> +		if (!p_left_isle) {
> +			DP_NOTICE(p_hwfn,
> +				  "Left isle %d is not found (cid %d)\n",
> +				  left_isle, cid);
> +			return;
> +		}
> +		list_splice_tail_init(&p_right_isle->buffers_list,
> +				      &p_left_isle->buffers_list);
> +	} else {
> +		list_splice_tail_init(&p_right_isle->buffers_list,
> +				      &p_ooo_info->ready_buffers_list);
> +		if (list_empty(&p_archipelago->isles_list)) {
> +			list_del(&p_archipelago->list_entry);
> +			list_add(&p_archipelago->list_entry,
> +				 &p_ooo_info->free_archipelagos_list);
> +		}
> +	}
> +	list_add_tail(&p_right_isle->list_entry, &p_ooo_info->free_isles_list);
> +}
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_ooo.h b/drivers/net/ethernet/qlogic/qed/qed_ooo.h
> new file mode 100644
> index 0000000..75c6e48
> --- /dev/null
> +++ b/drivers/net/ethernet/qlogic/qed/qed_ooo.h
> @@ -0,0 +1,116 @@
> +/* QLogic qed NIC Driver
> + * Copyright (c) 2015 QLogic Corporation
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#ifndef _QED_OOO_H
> +#define _QED_OOO_H
> +#include <linux/types.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include "qed.h"
> +
> +#define QED_MAX_NUM_ISLES	256
> +#define QED_MAX_NUM_OOO_HISTORY_ENTRIES	512
> +
> +#define QED_OOO_LEFT_BUF	0
> +#define QED_OOO_RIGHT_BUF	1
> +
> +struct qed_ooo_buffer {
> +	struct list_head list_entry;
> +	void *rx_buffer_virt_addr;
> +	dma_addr_t rx_buffer_phys_addr;
> +	u32 rx_buffer_size;
> +	u16 packet_length;
> +	u16 parse_flags;
> +	u16 vlan;
> +	u8 placement_offset;
> +};
> +
> +struct qed_ooo_isle {
> +	struct list_head list_entry;
> +	struct list_head buffers_list;
> +};
> +
> +struct qed_ooo_archipelago {
> +	struct list_head list_entry;
> +	struct list_head isles_list;
> +	u32 cid;
> +};
> +
> +struct qed_ooo_history {
> +	struct ooo_opaque *p_cqes;
> +	u32 head_idx;
> +	u32 num_of_cqes;
> +};
> +
> +struct qed_ooo_info {
> +	struct list_head free_buffers_list;
> +	struct list_head ready_buffers_list;
> +	struct list_head free_isles_list;
> +	struct list_head free_archipelagos_list;
> +	struct list_head archipelagos_list;
> +	struct qed_ooo_archipelago *p_archipelagos_mem;
> +	struct qed_ooo_isle *p_isles_mem;
> +	struct qed_ooo_history ooo_history;
> +	u32 cur_isles_number;
> +	u32 max_isles_number;
> +	u32 gen_isles_number;
> +};
> +
> +void qed_ooo_save_history_entry(struct qed_hwfn *p_hwfn,
> +				struct qed_ooo_info *p_ooo_info,
> +				struct ooo_opaque *p_cqe);
> +
> +struct qed_ooo_info *qed_ooo_alloc(struct qed_hwfn *p_hwfn);
> +
> +void qed_ooo_release_connection_isles(struct qed_hwfn *p_hwfn,
> +				      struct qed_ooo_info *p_ooo_info,
> +				      u32 cid);
> +
> +void qed_ooo_release_all_isles(struct qed_hwfn *p_hwfn,
> +			       struct qed_ooo_info *p_ooo_info);
> +
> +void qed_ooo_setup(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info);
> +
> +void qed_ooo_free(struct qed_hwfn *p_hwfn, struct qed_ooo_info *p_ooo_info);
> +
> +void qed_ooo_put_free_buffer(struct qed_hwfn *p_hwfn,
> +			     struct qed_ooo_info *p_ooo_info,
> +			     struct qed_ooo_buffer *p_buffer);
> +
> +struct qed_ooo_buffer *
> +qed_ooo_get_free_buffer(struct qed_hwfn *p_hwfn,
> +			struct qed_ooo_info *p_ooo_info);
> +
> +void qed_ooo_put_ready_buffer(struct qed_hwfn *p_hwfn,
> +			      struct qed_ooo_info *p_ooo_info,
> +			      struct qed_ooo_buffer *p_buffer, u8 on_tail);
> +
> +struct qed_ooo_buffer *
> +qed_ooo_get_ready_buffer(struct qed_hwfn *p_hwfn,
> +			 struct qed_ooo_info *p_ooo_info);
> +
> +void qed_ooo_delete_isles(struct qed_hwfn *p_hwfn,
> +			  struct qed_ooo_info *p_ooo_info,
> +			  u32 cid, u8 drop_isle, u8 drop_size);
> +
> +void qed_ooo_add_new_isle(struct qed_hwfn *p_hwfn,
> +			  struct qed_ooo_info *p_ooo_info,
> +			  u32 cid,
> +			  u8 ooo_isle, struct qed_ooo_buffer *p_buffer);
> +
> +void qed_ooo_add_new_buffer(struct qed_hwfn *p_hwfn,
> +			    struct qed_ooo_info *p_ooo_info,
> +			    u32 cid,
> +			    u8 ooo_isle,
> +			    struct qed_ooo_buffer *p_buffer, u8 buffer_side);
> +
> +void qed_ooo_join_isles(struct qed_hwfn *p_hwfn,
> +			struct qed_ooo_info *p_ooo_info, u32 cid,
> +			u8 left_isle);
> +
> +#endif
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
> index 2343005..1768cdb 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
> @@ -2866,6 +2866,7 @@ static int qed_roce_ll2_tx(struct qed_dev *cdev,
>  	/* Tx header */
>  	rc = qed_ll2_prepare_tx_packet(QED_LEADING_HWFN(cdev), roce_ll2->handle,
>  				       1 + pkt->n_seg, 0, flags, 0,
> +				       QED_LL2_TX_DEST_NW,
>  				       qed_roce_flavor, pkt->header.baddr,
>  				       pkt->header.len, pkt, 1);
>  	if (rc) {
> diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> index d3fa578..b44fd4c 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
> +++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> @@ -26,6 +26,7 @@
>  #include "qed_int.h"
>  #include "qed_iscsi.h"
>  #include "qed_mcp.h"
> +#include "qed_ooo.h"
>  #include "qed_reg_addr.h"
>  #include "qed_sp.h"
>  #include "qed_sriov.h"
> @@ -253,6 +254,14 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
>  	case PROTOCOLID_ISCSI:
>  		if (!IS_ENABLED(CONFIG_QEDI))
>  			return -EINVAL;
> +		if (p_eqe->opcode == ISCSI_EVENT_TYPE_ASYN_DELETE_OOO_ISLES) {
> +			u32 cid = le32_to_cpu(p_eqe->data.iscsi_info.cid);
> +
> +			qed_ooo_release_connection_isles(p_hwfn,
> +							 p_hwfn->p_ooo_info,
> +							 cid);
> +			return 0;
> +		}
>  
>  		if (p_hwfn->p_iscsi_info->event_cb) {
>  			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
> 
Hmm. The entire out-of-order handling is pretty generic. I really wonder
if this doesn't apply to iSCSI in general; surely iscsi_tcp suffers from
the same problem, no?
If so, wouldn't it be better to move it into generic (iSCSI) code so
that all implementations would benefit from it?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  2016-10-19  5:01   ` manish.rangankar
@ 2016-10-19  7:45   ` Hannes Reinecke
  2016-10-20  8:27     ` Rangankar, Manish
  -1 siblings, 1 reply; 38+ messages in thread
From: Hannes Reinecke @ 2016-10-19  7:45 UTC (permalink / raw)
  To: manish.rangankar, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Nilesh Javali, Adheer Chandravanshi,
	Chad Dupuis, Saurav Kashyap, Arun Easi

On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> From: Manish Rangankar <manish.rangankar@cavium.com>
> 
> The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
> for 41000 Series Converged Network Adapters by QLogic.
> 
> This patch consists of following changes:
>   - MAINTAINERS Makefile and Kconfig changes for qedi,
>   - PCI driver registration,
>   - iSCSI host level initialization,
>   - Debugfs and log level infrastructure.
> 
> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
> ---
>  MAINTAINERS                         |    6 +
>  drivers/net/ethernet/qlogic/Kconfig |   12 -
>  drivers/scsi/Kconfig                |    1 +
>  drivers/scsi/Makefile               |    1 +
>  drivers/scsi/qedi/Kconfig           |   10 +
>  drivers/scsi/qedi/Makefile          |    5 +
>  drivers/scsi/qedi/qedi.h            |  286 +++++++
>  drivers/scsi/qedi/qedi_dbg.c        |  143 ++++
>  drivers/scsi/qedi/qedi_dbg.h        |  144 ++++
>  drivers/scsi/qedi/qedi_debugfs.c    |  244 ++++++
>  drivers/scsi/qedi/qedi_hsi.h        |   52 ++
>  drivers/scsi/qedi/qedi_main.c       | 1550 +++++++++++++++++++++++++++++++++++
>  drivers/scsi/qedi/qedi_sysfs.c      |   52 ++
>  drivers/scsi/qedi/qedi_version.h    |   14 +
>  14 files changed, 2508 insertions(+), 12 deletions(-)
>  create mode 100644 drivers/scsi/qedi/Kconfig
>  create mode 100644 drivers/scsi/qedi/Makefile
>  create mode 100644 drivers/scsi/qedi/qedi.h
>  create mode 100644 drivers/scsi/qedi/qedi_dbg.c
>  create mode 100644 drivers/scsi/qedi/qedi_dbg.h
>  create mode 100644 drivers/scsi/qedi/qedi_debugfs.c
>  create mode 100644 drivers/scsi/qedi/qedi_hsi.h
>  create mode 100644 drivers/scsi/qedi/qedi_main.c
>  create mode 100644 drivers/scsi/qedi/qedi_sysfs.c
>  create mode 100644 drivers/scsi/qedi/qedi_version.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 5e925a2..906d05f 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -9909,6 +9909,12 @@ F:	drivers/net/ethernet/qlogic/qed/
>  F:	include/linux/qed/
>  F:	drivers/net/ethernet/qlogic/qede/
>  
> +QLOGIC QL41xxx ISCSI DRIVER
> +M:	QLogic-Storage-Upstream@cavium.com
> +L:	linux-scsi@vger.kernel.org
> +S:	Supported
> +F:	drivers/scsi/qedi/
> +
>  QNX4 FILESYSTEM
>  M:	Anders Larsen <al@alarsen.net>
>  W:	http://www.alarsen.net/linux/qnx4fs/
> diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
> index bad4fae..28b4366 100644
> --- a/drivers/net/ethernet/qlogic/Kconfig
> +++ b/drivers/net/ethernet/qlogic/Kconfig
> @@ -121,16 +121,4 @@ config INFINIBAND_QEDR
>  config QED_ISCSI
>  	bool
>  
> -config QEDI
> -	tristate "QLogic QED 25/40/100Gb iSCSI driver"
> -	depends on QED
> -	select QED_LL2
> -	select QED_ISCSI
> -	default n
> -	---help---
> -	  This provides a temporary node that allows the compilation
> -	  and logical testing of the hardware offload iSCSI support
> -	  for QLogic QED. This would be replaced by the 'real' option
> -	  once the QEDI driver is added [+relocated].
> -
>  endif # NET_VENDOR_QLOGIC
Huh? You only just introduced this node in patch 1/6.
Please fold the two patches together so that this removal can be omitted.

> diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
> index 3e2bdb9..5cf03db 100644
> --- a/drivers/scsi/Kconfig
> +++ b/drivers/scsi/Kconfig
> @@ -1254,6 +1254,7 @@ config SCSI_QLOGICPTI
>  
>  source "drivers/scsi/qla2xxx/Kconfig"
>  source "drivers/scsi/qla4xxx/Kconfig"
> +source "drivers/scsi/qedi/Kconfig"
>  
>  config SCSI_LPFC
>  	tristate "Emulex LightPulse Fibre Channel Support"
> diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
> index 38d938d..da9e312 100644
> --- a/drivers/scsi/Makefile
> +++ b/drivers/scsi/Makefile
> @@ -132,6 +132,7 @@ obj-$(CONFIG_PS3_ROM)		+= ps3rom.o
>  obj-$(CONFIG_SCSI_CXGB3_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
>  obj-$(CONFIG_SCSI_CXGB4_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
>  obj-$(CONFIG_SCSI_BNX2_ISCSI)	+= libiscsi.o bnx2i/
> +obj-$(CONFIG_QEDI)          += libiscsi.o qedi/
>  obj-$(CONFIG_BE2ISCSI)		+= libiscsi.o be2iscsi/
>  obj-$(CONFIG_SCSI_ESAS2R)	+= esas2r/
>  obj-$(CONFIG_SCSI_PMCRAID)	+= pmcraid.o
> diff --git a/drivers/scsi/qedi/Kconfig b/drivers/scsi/qedi/Kconfig
> new file mode 100644
> index 0000000..23ca8a2
> --- /dev/null
> +++ b/drivers/scsi/qedi/Kconfig
> @@ -0,0 +1,10 @@
> +config QEDI
> +	tristate "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver Support"
> +	depends on PCI && SCSI
> +	depends on QED
> +	select SCSI_ISCSI_ATTRS
> +	select QED_LL2
> +	select QED_ISCSI
> +	---help---
> +	This driver supports iSCSI offload for the QLogic FastLinQ
> +	41000 Series Converged Network Adapters.
> diff --git a/drivers/scsi/qedi/Makefile b/drivers/scsi/qedi/Makefile
> new file mode 100644
> index 0000000..2b3e16b
> --- /dev/null
> +++ b/drivers/scsi/qedi/Makefile
> @@ -0,0 +1,5 @@
> +obj-$(CONFIG_QEDI) := qedi.o
> +qedi-y := qedi_main.o qedi_iscsi.o qedi_fw.o qedi_sysfs.o \
> +	    qedi_dbg.o
> +
> +qedi-$(CONFIG_DEBUG_FS) += qedi_debugfs.o
> diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
> new file mode 100644
> index 0000000..0a5035e
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi.h
> @@ -0,0 +1,286 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#ifndef _QEDI_H_
> +#define _QEDI_H_
> +
> +#define __PREVENT_QED_HSI__
> +
> +#include <scsi/scsi_transport_iscsi.h>
> +#include <scsi/libiscsi.h>
> +#include <scsi/scsi_host.h>
> +#include <linux/uio_driver.h>
> +
> +#include "qedi_hsi.h"
> +#include <linux/qed/qed_if.h>
> +#include "qedi_dbg.h"
> +#include <linux/qed/qed_iscsi_if.h>
> +#include "qedi_version.h"
> +
> +#define QEDI_MODULE_NAME		"qedi"
> +
> +struct qedi_endpoint;
> +
> +/*
> + * PCI function probe defines
> + */
> +#define QEDI_MODE_NORMAL	0
> +#define QEDI_MODE_RECOVERY	1
> +
> +#define ISCSI_WQE_SET_PTU_INVALIDATE	1
> +#define QEDI_MAX_ISCSI_TASK		4096
> +#define QEDI_MAX_TASK_NUM		0x0FFF
> +#define QEDI_MAX_ISCSI_CONNS_PER_HBA	1024
> +#define QEDI_ISCSI_MAX_BDS_PER_CMD	256	/* Firmware max BDs is 256 */
> +#define MAX_OUSTANDING_TASKS_PER_CON	1024
> +
> +#define QEDI_MAX_BD_LEN		0xffff
> +#define QEDI_BD_SPLIT_SZ	0x1000
> +#define QEDI_PAGE_SIZE		4096
> +#define QEDI_FAST_SGE_COUNT	4
> +/* MAX Length for cached SGL */
> +#define MAX_SGLEN_FOR_CACHESGL	((1U << 16) - 1)
> +
> +#define MAX_NUM_MSIX_PF         8
> +#define MIN_NUM_CPUS_MSIX(x)	min(x->msix_count, num_online_cpus())
> +
> +#define QEDI_LOCAL_PORT_MIN     60000
> +#define QEDI_LOCAL_PORT_MAX     61024
> +#define QEDI_LOCAL_PORT_RANGE   (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
> +#define QEDI_LOCAL_PORT_INVALID	0xffff
> +
> +/* Queue sizes in number of elements */
> +#define QEDI_SQ_SIZE		MAX_OUSTANDING_TASKS_PER_CON
> +#define QEDI_CQ_SIZE		2048
> +#define QEDI_CMDQ_SIZE		QEDI_MAX_ISCSI_TASK
> +#define QEDI_PROTO_CQ_PROD_IDX	0
> +
> +struct qedi_glbl_q_params {
> +	u64 hw_p_cq;	/* Completion queue PBL */
> +	u64 hw_p_rq;	/* Request queue PBL */
> +	u64 hw_p_cmdq;	/* Command queue PBL */
> +};
> +
> +struct global_queue {
> +	union iscsi_cqe *cq;
> +	dma_addr_t cq_dma;
> +	u32 cq_mem_size;
> +	u32 cq_cons_idx; /* Completion queue consumer index */
> +
> +	void *cq_pbl;
> +	dma_addr_t cq_pbl_dma;
> +	u32 cq_pbl_size;
> +
> +};
> +
> +struct qedi_fastpath {
> +	struct qed_sb_info	*sb_info;
> +	u16			sb_id;
> +#define QEDI_NAME_SIZE		16
> +	char			name[QEDI_NAME_SIZE];
> +	struct qedi_ctx         *qedi;
> +};
> +
> +/* Used to pass fastpath information needed to process CQEs */
> +struct qedi_io_work {
> +	struct list_head list;
> +	struct iscsi_cqe_solicited cqe;
> +	u16	que_idx;
> +};
> +
> +/**
> + * struct iscsi_cid_queue - Per adapter iscsi cid queue
> + *
> + * @cid_que_base:           queue base memory
> + * @cid_que:                queue memory pointer
> + * @cid_q_prod_idx:         producer index
> + * @cid_q_cons_idx:         consumer index
> + * @cid_q_max_idx:          max index, used to detect wrap-around condition
> + * @cid_free_cnt:           queue size
> + * @conn_cid_tbl:           iscsi cid to conn structure mapping table
> + *
> + * Per adapter iSCSI CID Queue
> + */
> +struct iscsi_cid_queue {
> +	void *cid_que_base;
> +	u32 *cid_que;
> +	u32 cid_q_prod_idx;
> +	u32 cid_q_cons_idx;
> +	u32 cid_q_max_idx;
> +	u32 cid_free_cnt;
> +	struct qedi_conn **conn_cid_tbl;
> +};
> +
> +struct qedi_portid_tbl {
> +	spinlock_t      lock;	/* Port id lock */
> +	u16             start;
> +	u16             max;
> +	u16             next;
> +	unsigned long   *table;
> +};
> +
> +struct qedi_itt_map {
> +	__le32	itt;
> +};
> +
> +/* I/O tracing entry */
> +#define QEDI_IO_TRACE_SIZE             2048
> +struct qedi_io_log {
> +#define QEDI_IO_TRACE_REQ              0
> +#define QEDI_IO_TRACE_RSP              1
> +	u8 direction;
> +	u16 task_id;
> +	u32 cid;
> +	u32 port_id;	/* Remote port fabric ID */
> +	int lun;
> +	u8 op;		/* SCSI CDB */
> +	u8 lba[4];
> +	unsigned int bufflen;	/* SCSI buffer length */
> +	unsigned int sg_count;	/* Number of SG elements */
> +	u8 fast_sgs;		/* number of fast sgls */
> +	u8 slow_sgs;		/* number of slow sgls */
> +	u8 cached_sgs;		/* number of cached sgls */
> +	int result;		/* Result passed back to mid-layer */
> +	unsigned long jiffies;	/* Time stamp when I/O logged */
> +	int refcount;		/* Reference count for task id */
> +	unsigned int blk_req_cpu; /* CPU that the task is queued on by
> +				   * blk layer
> +				   */
> +	unsigned int req_cpu;	/* CPU that the task is queued on */
> +	unsigned int intr_cpu;	/* Interrupt CPU that the task is received on */
> +	unsigned int blk_rsp_cpu;/* CPU that task is actually processed and
> +				  * returned to blk layer
> +				  */
> +	bool cached_sge;
> +	bool slow_sge;
> +	bool fast_sge;
> +};
> +
> +/* Number of entries in BDQ */
> +#define QEDI_BDQ_NUM		256
> +#define QEDI_BDQ_BUF_SIZE	256
> +
> +/* DMA coherent buffers for BDQ */
> +struct qedi_bdq_buf {
> +	void *buf_addr;
> +	dma_addr_t buf_dma;
> +};
> +
> +/* Main port level struct */
> +struct qedi_ctx {
> +	struct qedi_dbg_ctx dbg_ctx;
> +	struct Scsi_Host *shost;
> +	struct pci_dev *pdev;
> +	struct qed_dev *cdev;
> +	struct qed_dev_iscsi_info dev_info;
> +	struct qed_int_info int_info;
> +	struct qedi_glbl_q_params *p_cpuq;
> +	struct global_queue **global_queues;
> +	/* uio declaration */
> +	struct qedi_uio_dev *udev;
> +	struct list_head ll2_skb_list;
> +	spinlock_t ll2_lock;	/* Light L2 lock */
> +	spinlock_t hba_lock;	/* per port lock */
> +	struct task_struct *ll2_recv_thread;
> +	unsigned long flags;
> +#define UIO_DEV_OPENED		1
> +#define QEDI_IOTHREAD_WAKE	2
> +#define QEDI_IN_RECOVERY	5
> +#define QEDI_IN_OFFLINE		6
> +
> +	u8 mac[ETH_ALEN];
> +	u32 src_ip[4];
> +	u8 ip_type;
> +
> +	/* Physical address of above array */
> +	u64 hw_p_cpuq;
> +
> +	struct qedi_bdq_buf bdq[QEDI_BDQ_NUM];
> +	void *bdq_pbl;
> +	dma_addr_t bdq_pbl_dma;
> +	size_t bdq_pbl_mem_size;
> +	void *bdq_pbl_list;
> +	dma_addr_t bdq_pbl_list_dma;
> +	u8 bdq_pbl_list_num_entries;
> +	void __iomem *bdq_primary_prod;
> +	void __iomem *bdq_secondary_prod;
> +	u16 bdq_prod_idx;
> +	u16 rq_num_entries;
> +
> +	u32 msix_count;
> +	u32 max_sqes;
> +	u8 num_queues;
> +	u32 max_active_conns;
> +
> +	struct iscsi_cid_queue cid_que;
> +	struct qedi_endpoint **ep_tbl;
> +	struct qedi_portid_tbl lcl_port_tbl;
> +
> +	/* Rx fast path intr context */
> +	struct qed_sb_info	*sb_array;
> +	struct qedi_fastpath	*fp_array;
> +	struct qed_iscsi_tid	tasks;
> +
> +#define QEDI_LINK_DOWN		0
> +#define QEDI_LINK_UP		1
> +	atomic_t link_state;
> +
> +#define QEDI_RESERVE_TASK_ID	0
> +#define MAX_ISCSI_TASK_ENTRIES	4096
> +#define QEDI_INVALID_TASK_ID	(MAX_ISCSI_TASK_ENTRIES + 1)
> +	unsigned long task_idx_map[MAX_ISCSI_TASK_ENTRIES / BITS_PER_LONG];
> +	struct qedi_itt_map *itt_map;
> +	u16 tid_reuse_count[QEDI_MAX_ISCSI_TASK];
> +	struct qed_pf_params pf_params;
> +
> +	struct workqueue_struct *tmf_thread;
> +	struct workqueue_struct *offload_thread;
> +
> +	u16 ll2_mtu;
> +
> +	struct workqueue_struct *dpc_wq;
> +
> +	spinlock_t task_idx_lock;	/* To protect gbl context */
> +	s32 last_tidx_alloc;
> +	s32 last_tidx_clear;
> +
> +	struct qedi_io_log io_trace_buf[QEDI_IO_TRACE_SIZE];
> +	spinlock_t io_trace_lock;	/* protect trace log buf */
> +	u16 io_trace_idx;
> +	unsigned int intr_cpu;
> +	u32 cached_sgls;
> +	bool use_cached_sge;
> +	u32 slow_sgls;
> +	bool use_slow_sge;
> +	u32 fast_sgls;
> +	bool use_fast_sge;
> +
> +	atomic_t num_offloads;
> +};
> +
> +struct qedi_work {
> +	struct list_head list;
> +	struct qedi_ctx *qedi;
> +	union iscsi_cqe cqe;
> +	u16     que_idx;
> +};
> +
> +struct qedi_percpu_s {
> +	struct task_struct *iothread;
> +	struct list_head work_list;
> +	spinlock_t p_work_lock;		/* Per cpu worker lock */
> +};
> +
> +static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32 tid)
> +{
> +	return (void *)(info->blocks[tid / info->num_tids_per_block] +
> +			(tid % info->num_tids_per_block) * info->size);
> +}
> +
> +#endif /* _QEDI_H_ */
> diff --git a/drivers/scsi/qedi/qedi_dbg.c b/drivers/scsi/qedi/qedi_dbg.c
> new file mode 100644
> index 0000000..2678a15
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_dbg.c
> @@ -0,0 +1,143 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include "qedi_dbg.h"
> +#include <linux/vmalloc.h>
> +
> +void
> +qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
> +	     const char *fmt, ...)
> +{
> +	va_list va;
> +	struct va_format vaf;
> +	char nfunc[32];
> +
> +	memset(nfunc, 0, sizeof(nfunc));
> +	memcpy(nfunc, func, sizeof(nfunc) - 1);
> +
> +	va_start(va, fmt);
> +
> +	vaf.fmt = fmt;
> +	vaf.va = &va;
> +
> +	if (likely(qedi) && likely(qedi->pdev))
> +		pr_crit("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
> +			nfunc, line, qedi->host_no, &vaf);
> +	else
> +		pr_crit("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
> +
> +	va_end(va);
> +}
> +
> +void
> +qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
> +	      const char *fmt, ...)
> +{
> +	va_list va;
> +	struct va_format vaf;
> +	char nfunc[32];
> +
> +	memset(nfunc, 0, sizeof(nfunc));
> +	memcpy(nfunc, func, sizeof(nfunc) - 1);
> +
> +	va_start(va, fmt);
> +
> +	vaf.fmt = fmt;
> +	vaf.va = &va;
> +
> +	if (!(debug & QEDI_LOG_WARN))
> +		return;
> +
> +	if (likely(qedi) && likely(qedi->pdev))
> +		pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
> +			nfunc, line, qedi->host_no, &vaf);
> +	else
> +		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
> +
> +	va_end(va);
> +}
> +
> +void
> +qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
> +		const char *fmt, ...)
> +{
> +	va_list va;
> +	struct va_format vaf;
> +	char nfunc[32];
> +
> +	memset(nfunc, 0, sizeof(nfunc));
> +	memcpy(nfunc, func, sizeof(nfunc) - 1);
> +
> +	va_start(va, fmt);
> +
> +	vaf.fmt = fmt;
> +	vaf.va = &va;
> +
> +	if (!(debug & QEDI_LOG_NOTICE))
> +		return;
> +
> +	if (likely(qedi) && likely(qedi->pdev))
> +		pr_notice("[%s]:[%s:%d]:%d: %pV",
> +			  dev_name(&qedi->pdev->dev), nfunc, line,
> +			  qedi->host_no, &vaf);
> +	else
> +		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
> +
> +	va_end(va);
> +}
> +
> +void
> +qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
> +	      u32 level, const char *fmt, ...)
> +{
> +	va_list va;
> +	struct va_format vaf;
> +	char nfunc[32];
> +
> +	memset(nfunc, 0, sizeof(nfunc));
> +	memcpy(nfunc, func, sizeof(nfunc) - 1);
> +
> +	va_start(va, fmt);
> +
> +	vaf.fmt = fmt;
> +	vaf.va = &va;
> +
> +	if (!(debug & level))
> +		return;
> +
> +	if (likely(qedi) && likely(qedi->pdev))
> +		pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
> +			nfunc, line, qedi->host_no, &vaf);
> +	else
> +		pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
> +
> +	va_end(va);
> +}
> +
> +int
> +qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
> +{
> +	int ret = 0;
> +
> +	for (; iter->name; iter++) {
> +		ret = sysfs_create_bin_file(&shost->shost_gendev.kobj,
> +					    iter->attr);
> +		if (ret)
> +			pr_err("Unable to create sysfs %s attr, err(%d).\n",
> +			       iter->name, ret);
> +	}
> +	return ret;
> +}
> +
> +void
> +qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
> +{
> +	for (; iter->name; iter++)
> +		sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr);
> +}
> diff --git a/drivers/scsi/qedi/qedi_dbg.h b/drivers/scsi/qedi/qedi_dbg.h
> new file mode 100644
> index 0000000..5beb3ec
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_dbg.h
> @@ -0,0 +1,144 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#ifndef _QEDI_DBG_H_
> +#define _QEDI_DBG_H_
> +
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/compiler.h>
> +#include <linux/string.h>
> +#include <linux/version.h>
> +#include <linux/pci.h>
> +#include <linux/delay.h>
> +#include <scsi/scsi_transport.h>
> +#include <scsi/scsi_transport_iscsi.h>
> +#include <linux/fs.h>
> +
> +#define __PREVENT_QED_HSI__
> +#include <linux/qed/common_hsi.h>
> +#include <linux/qed/qed_if.h>
> +
> +extern uint debug;
> +
> +/* Debug print level definitions */
> +#define QEDI_LOG_DEFAULT	0x1		/* Set default logging mask */
> +#define QEDI_LOG_INFO		0x2		/* Informational logs,
> +						 * MAC address, WWPN, WWNN
> +						 */
> +#define QEDI_LOG_DISC		0x4		/* Init, discovery, rport */
> +#define QEDI_LOG_LL2		0x8		/* LL2, VLAN logs */
> +#define QEDI_LOG_CONN		0x10		/* Connection setup, cleanup */
> +#define QEDI_LOG_EVT		0x20		/* Events, link, mtu */
> +#define QEDI_LOG_TIMER		0x40		/* Timer events */
> +#define QEDI_LOG_MP_REQ		0x80		/* Middle Path (MP) logs */
> +#define QEDI_LOG_SCSI_TM	0x100		/* SCSI Aborts, Task Mgmt */
> +#define QEDI_LOG_UNSOL		0x200		/* unsolicited event logs */
> +#define QEDI_LOG_IO		0x400		/* scsi cmd, completion */
> +#define QEDI_LOG_MQ		0x800		/* Multi Queue logs */
> +#define QEDI_LOG_BSG		0x1000		/* BSG logs */
> +#define QEDI_LOG_DEBUGFS	0x2000		/* debugFS logs */
> +#define QEDI_LOG_LPORT		0x4000		/* lport logs */
> +#define QEDI_LOG_ELS		0x8000		/* ELS logs */
> +#define QEDI_LOG_NPIV		0x10000		/* NPIV logs */
> +#define QEDI_LOG_SESS		0x20000		/* Connection setup, cleanup */
> +#define QEDI_LOG_UIO		0x40000		/* iSCSI UIO logs */
> +#define QEDI_LOG_TID		0x80000         /* FW TID context acquire,
> +						 * free
> +						 */
> +#define QEDI_TRACK_TID		0x100000        /* Track TID state. To be
> +						 * enabled only at module load
> +						 * and not run-time.
> +						 */
> +#define QEDI_TRACK_CMD_LIST    0x300000        /* Track active cmd list nodes,
> +						* done with reference to TID,
> +						* hence TRACK_TID also enabled.
> +						*/
> +#define QEDI_LOG_NOTICE		0x40000000	/* Notice logs */
> +#define QEDI_LOG_WARN		0x80000000	/* Warning logs */
> +
> +/* Debug context structure */
> +struct qedi_dbg_ctx {
> +	unsigned int host_no;
> +	struct pci_dev *pdev;
> +#ifdef CONFIG_DEBUG_FS
> +	struct dentry *bdf_dentry;
> +#endif
> +};
> +
> +#define QEDI_ERR(pdev, fmt, ...)	\
> +		qedi_dbg_err(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
> +#define QEDI_WARN(pdev, fmt, ...)	\
> +		qedi_dbg_warn(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
> +#define QEDI_NOTICE(pdev, fmt, ...)	\
> +		qedi_dbg_notice(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
> +#define QEDI_INFO(pdev, level, fmt, ...)	\
> +		qedi_dbg_info(pdev, __func__, __LINE__, level, fmt,	\
> +			      ## __VA_ARGS__)
> +
> +void qedi_dbg_err(struct qedi_dbg_ctx *, const char *, u32,
> +		  const char *, ...);
> +void qedi_dbg_warn(struct qedi_dbg_ctx *, const char *, u32,
> +		   const char *, ...);
> +void qedi_dbg_notice(struct qedi_dbg_ctx *, const char *, u32,
> +		     const char *, ...);
> +void qedi_dbg_info(struct qedi_dbg_ctx *, const char *, u32, u32,
> +		   const char *, ...);
> +
> +struct Scsi_Host;
> +
> +struct sysfs_bin_attrs {
> +	char *name;
> +	struct bin_attribute *attr;
> +};
> +
> +int qedi_create_sysfs_attr(struct Scsi_Host *,
> +			   struct sysfs_bin_attrs *);
> +void qedi_remove_sysfs_attr(struct Scsi_Host *,
> +			    struct sysfs_bin_attrs *);
> +
> +#ifdef CONFIG_DEBUG_FS
> +/* DebugFS related code */
> +struct qedi_list_of_funcs {
> +	char *oper_str;
> +	ssize_t (*oper_func)(struct qedi_dbg_ctx *qedi);
> +};
> +
> +struct qedi_debugfs_ops {
> +	char *name;
> +	struct qedi_list_of_funcs *qedi_funcs;
> +};
> +
> +#define qedi_dbg_fileops(drv, ops) \
> +{ \
> +	.owner  = THIS_MODULE, \
> +	.open   = simple_open, \
> +	.read   = drv##_dbg_##ops##_cmd_read, \
> +	.write  = drv##_dbg_##ops##_cmd_write \
> +}
> +
> +/* Used for debugfs sequential files */
> +#define qedi_dbg_fileops_seq(drv, ops) \
> +{ \
> +	.owner = THIS_MODULE, \
> +	.open = drv##_dbg_##ops##_open, \
> +	.read = seq_read, \
> +	.llseek = seq_lseek, \
> +	.release = single_release, \
> +}
> +
> +void qedi_dbg_host_init(struct qedi_dbg_ctx *,
> +			struct qedi_debugfs_ops *,
> +			const struct file_operations *);
> +void qedi_dbg_host_exit(struct qedi_dbg_ctx *);
> +void qedi_dbg_init(char *);
> +void qedi_dbg_exit(void);
> +#endif /* CONFIG_DEBUG_FS */
> +
> +#endif /* _QEDI_DBG_H_ */
> diff --git a/drivers/scsi/qedi/qedi_debugfs.c b/drivers/scsi/qedi/qedi_debugfs.c
> new file mode 100644
> index 0000000..9559362
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_debugfs.c
> @@ -0,0 +1,244 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include "qedi.h"
> +#include "qedi_dbg.h"
> +
> +#include <linux/uaccess.h>
> +#include <linux/debugfs.h>
> +#include <linux/module.h>
> +
> +int do_not_recover;
> +static struct dentry *qedi_dbg_root;
> +
> +void
> +qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
> +		   struct qedi_debugfs_ops *dops,
> +		   const struct file_operations *fops)
> +{
> +	char host_dirname[32];
> +	struct dentry *file_dentry = NULL;
> +
> +	sprintf(host_dirname, "host%u", qedi->host_no);
> +	qedi->bdf_dentry = debugfs_create_dir(host_dirname, qedi_dbg_root);
> +	if (!qedi->bdf_dentry)
> +		return;
> +
> +	while (dops) {
> +		if (!(dops->name))
> +			break;
> +
> +		file_dentry = debugfs_create_file(dops->name, 0600,
> +						  qedi->bdf_dentry, qedi,
> +						  fops);
> +		if (!file_dentry) {
> +			QEDI_INFO(qedi, QEDI_LOG_DEBUGFS,
> +				  "Debugfs entry %s creation failed\n",
> +				  dops->name);
> +			debugfs_remove_recursive(qedi->bdf_dentry);
> +			return;
> +		}
> +		dops++;
> +		fops++;
> +	}
> +}
> +
> +void
> +qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi)
> +{
> +	debugfs_remove_recursive(qedi->bdf_dentry);
> +	qedi->bdf_dentry = NULL;
> +}
> +
> +void
> +qedi_dbg_init(char *drv_name)
> +{
> +	qedi_dbg_root = debugfs_create_dir(drv_name, NULL);
> +	if (!qedi_dbg_root)
> +		QEDI_INFO(NULL, QEDI_LOG_DEBUGFS, "Init of debugfs failed\n");
> +}
> +
> +void
> +qedi_dbg_exit(void)
> +{
> +	debugfs_remove_recursive(qedi_dbg_root);
> +	qedi_dbg_root = NULL;
> +}
> +
> +static ssize_t
> +qedi_dbg_do_not_recover_enable(struct qedi_dbg_ctx *qedi_dbg)
> +{
> +	if (!do_not_recover)
> +		do_not_recover = 1;
> +
> +	QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
> +		  do_not_recover);
> +	return 0;
> +}
> +
> +static ssize_t
> +qedi_dbg_do_not_recover_disable(struct qedi_dbg_ctx *qedi_dbg)
> +{
> +	if (do_not_recover)
> +		do_not_recover = 0;
> +
> +	QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
> +		  do_not_recover);
> +	return 0;
> +}
> +
> +static struct qedi_list_of_funcs qedi_dbg_do_not_recover_ops[] = {
> +	{ "enable", qedi_dbg_do_not_recover_enable },
> +	{ "disable", qedi_dbg_do_not_recover_disable },
> +	{ NULL, NULL }
> +};
> +
> +struct qedi_debugfs_ops qedi_debugfs_ops[] = {
> +	{ "gbl_ctx", NULL },
> +	{ "do_not_recover", qedi_dbg_do_not_recover_ops},
> +	{ "io_trace", NULL },
> +	{ NULL, NULL }
> +};
> +
> +static ssize_t
> +qedi_dbg_do_not_recover_cmd_write(struct file *filp, const char __user *buffer,
> +				  size_t count, loff_t *ppos)
> +{
> +	size_t cnt = 0;
> +	struct qedi_dbg_ctx *qedi_dbg =
> +			(struct qedi_dbg_ctx *)filp->private_data;
> +	struct qedi_list_of_funcs *lof = qedi_dbg_do_not_recover_ops;
> +
> +	if (*ppos)
> +		return 0;
> +
> +	while (lof) {
> +		if (!(lof->oper_str))
> +			break;
> +
> +		if (!strncmp(lof->oper_str, buffer, strlen(lof->oper_str))) {
> +			cnt = lof->oper_func(qedi_dbg);
> +			break;
> +		}
> +
> +		lof++;
> +	}
> +	return (count - cnt);
> +}
> +
> +static ssize_t
> +qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
> +				 size_t count, loff_t *ppos)
> +{
> +	size_t cnt = 0;
> +
> +	if (*ppos)
> +		return 0;
> +
> +	cnt = sprintf(buffer, "do_not_recover=%d\n", do_not_recover);
> +	cnt = min_t(int, count, cnt - *ppos);
> +	*ppos += cnt;
> +	return cnt;
> +}
> +
> +static int
> +qedi_gbl_ctx_show(struct seq_file *s, void *unused)
> +{
> +	struct qedi_fastpath *fp = NULL;
> +	struct qed_sb_info *sb_info = NULL;
> +	struct status_block *sb = NULL;
> +	struct global_queue *que = NULL;
> +	int id;
> +	u16 prod_idx;
> +	struct qedi_ctx *qedi = s->private;
> +	unsigned long flags;
> +
> +	seq_puts(s, " DUMP CQ CONTEXT:\n");
> +
> +	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
> +		spin_lock_irqsave(&qedi->hba_lock, flags);
> +		seq_printf(s, "=========FAST CQ PATH [%d] ==========\n", id);
> +		fp = &qedi->fp_array[id];
> +		sb_info = fp->sb_info;
> +		sb = sb_info->sb_virt;
> +		prod_idx = (sb->pi_array[QEDI_PROTO_CQ_PROD_IDX] &
> +			    STATUS_BLOCK_PROD_INDEX_MASK);
> +		seq_printf(s, "SB PROD IDX: %d\n", prod_idx);
> +		que = qedi->global_queues[fp->sb_id];
> +		seq_printf(s, "DRV CONS IDX: %d\n", que->cq_cons_idx);
> +		seq_printf(s, "CQ complete host memory: %d\n", fp->sb_id);
> +		seq_puts(s, "=========== END ==================\n\n\n");
> +		spin_unlock_irqrestore(&qedi->hba_lock, flags);
> +	}
> +	return 0;
> +}
> +
> +static int
> +qedi_dbg_gbl_ctx_open(struct inode *inode, struct file *file)
> +{
> +	struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
> +	struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
> +					     dbg_ctx);
> +
> +	return single_open(file, qedi_gbl_ctx_show, qedi);
> +}
> +
> +static int
> +qedi_io_trace_show(struct seq_file *s, void *unused)
> +{
> +	int id, idx = 0;
> +	struct qedi_ctx *qedi = s->private;
> +	struct qedi_io_log *io_log;
> +	unsigned long flags;
> +
> +	seq_puts(s, " DUMP IO LOGS:\n");
> +	spin_lock_irqsave(&qedi->io_trace_lock, flags);
> +	idx = qedi->io_trace_idx;
> +	for (id = 0; id < QEDI_IO_TRACE_SIZE; id++) {
> +		io_log = &qedi->io_trace_buf[idx];
> +		seq_printf(s, "iodir-%d:", io_log->direction);
> +		seq_printf(s, "tid-0x%x:", io_log->task_id);
> +		seq_printf(s, "cid-0x%x:", io_log->cid);
> +		seq_printf(s, "lun-%d:", io_log->lun);
> +		seq_printf(s, "op-0x%02x:", io_log->op);
> +		seq_printf(s, "0x%02x%02x%02x%02x:", io_log->lba[0],
> +			   io_log->lba[1], io_log->lba[2], io_log->lba[3]);
> +		seq_printf(s, "buflen-%d:", io_log->bufflen);
> +		seq_printf(s, "sgcnt-%d:", io_log->sg_count);
> +		seq_printf(s, "res-0x%08x:", io_log->result);
> +		seq_printf(s, "jif-%lu:", io_log->jiffies);
> +		seq_printf(s, "blk_req_cpu-%d:", io_log->blk_req_cpu);
> +		seq_printf(s, "req_cpu-%d:", io_log->req_cpu);
> +		seq_printf(s, "intr_cpu-%d:", io_log->intr_cpu);
> +		seq_printf(s, "blk_rsp_cpu-%d\n", io_log->blk_rsp_cpu);
> +
> +		idx++;
> +		if (idx == QEDI_IO_TRACE_SIZE)
> +			idx = 0;
> +	}
> +	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
> +	return 0;
> +}
> +
> +static int
> +qedi_dbg_io_trace_open(struct inode *inode, struct file *file)
> +{
> +	struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
> +	struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
> +					     dbg_ctx);
> +
> +	return single_open(file, qedi_io_trace_show, qedi);
> +}
> +
> +const struct file_operations qedi_dbg_fops[] = {
> +	qedi_dbg_fileops_seq(qedi, gbl_ctx),
> +	qedi_dbg_fileops(qedi, do_not_recover),
> +	qedi_dbg_fileops_seq(qedi, io_trace),
> +	{ NULL, NULL },
> +};
> diff --git a/drivers/scsi/qedi/qedi_hsi.h b/drivers/scsi/qedi/qedi_hsi.h
> new file mode 100644
> index 0000000..b442a62
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_hsi.h
> @@ -0,0 +1,52 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +#ifndef __QEDI_HSI__
> +#define __QEDI_HSI__
> +/********************************/
> +/* Add include to common target */
> +/********************************/
> +#include <linux/qed/common_hsi.h>
> +
Please use kernel-doc style for these comments.
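
For reference, a kernel-doc block for the CMDQ element further down might look like the sketch below. The field descriptions are my guesses from the names, not taken from the HSI definition, and the typedefs are stand-ins so the sketch compiles outside the kernel tree:

```c
#include <assert.h>

/* Stand-in typedefs so this sketch compiles outside the kernel tree. */
typedef unsigned short __le16;
typedef unsigned int   __le32;
typedef unsigned char  u8;

/**
 * struct iscsi_cmdqe - iSCSI CMDQ element
 * @conn_id:         ID of the connection this element belongs to (guessed)
 * @invalid_command: nonzero if the firmware flagged the command invalid (guessed)
 * @cmd_hdr_type:    one of &enum iscsi_cmd_hdr_type
 * @reserved1:       reserved, must be zero
 * @cmd_payload:     opaque command payload handed to the firmware (guessed)
 */
struct iscsi_cmdqe {
	__le16 conn_id;
	u8 invalid_command;
	u8 cmd_hdr_type;
	__le32 reserved1[2];
	__le32 cmd_payload[13];
};
```

scripts/kernel-doc can then extract these into the generated documentation, which plain banner comments cannot provide.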

> +/****************************************/
> +/* Add include to common storage target */
> +/****************************************/
> +#include <linux/qed/storage_common.h>
> +
> +/************************************************************************/
> +/* Add include to common TCP target */
> +/************************************************************************/
> +#include <linux/qed/tcp_common.h>
> +
> +/*************************************************************************/
> +/* Add include to common iSCSI target for both eCore and protocol driver */
> +/*************************************************************************/
> +#include <linux/qed/iscsi_common.h>
> +
> +/*
> + * iSCSI CMDQ element
> + */
> +struct iscsi_cmdqe {
> +	__le16 conn_id;
> +	u8 invalid_command;
> +	u8 cmd_hdr_type;
> +	__le32 reserved1[2];
> +	__le32 cmd_payload[13];
> +};
> +
> +/*
> + * iSCSI CMD header type
> + */
> +enum iscsi_cmd_hdr_type {
> +	ISCSI_CMD_HDR_TYPE_BHS_ONLY /* iSCSI BHS with no expected AHS */,
> +	ISCSI_CMD_HDR_TYPE_BHS_W_AHS /* iSCSI BHS with expected AHS */,
> +	ISCSI_CMD_HDR_TYPE_AHS /* iSCSI AHS */,
> +	MAX_ISCSI_CMD_HDR_TYPE
> +};
> +
> +#endif /* __QEDI_HSI__ */
> diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
> new file mode 100644
> index 0000000..35ab2f9
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_main.c
> @@ -0,0 +1,1550 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/kernel.h>
> +#include <linux/if_arp.h>
> +#include <scsi/iscsi_if.h>
> +#include <linux/inet.h>
> +#include <net/arp.h>
> +#include <linux/list.h>
> +#include <linux/kthread.h>
> +#include <linux/mm.h>
> +#include <linux/if_vlan.h>
> +#include <linux/cpu.h>
> +
> +#include <scsi/scsi_cmnd.h>
> +#include <scsi/scsi_device.h>
> +#include <scsi/scsi_eh.h>
> +#include <scsi/scsi_host.h>
> +#include <scsi/scsi.h>
> +
> +#include "qedi.h"
> +
> +static uint fw_debug;
> +module_param(fw_debug, uint, S_IRUGO | S_IWUSR);
> +MODULE_PARM_DESC(fw_debug, "Firmware debug level, 0 (default) to 3");
> +
> +static uint int_mode;
> +module_param(int_mode, uint, S_IRUGO | S_IWUSR);
> +MODULE_PARM_DESC(int_mode,
> +		 " Force interrupt mode other than MSI-X: (1 INT#x; 2 MSI)");
> +
> +uint debug = QEDI_LOG_WARN | QEDI_LOG_SCSI_TM;
> +module_param(debug, uint, S_IRUGO | S_IWUSR);
> +MODULE_PARM_DESC(debug, " Default debug level");
> +
> +const struct qed_iscsi_ops *qedi_ops;
> +static struct scsi_transport_template *qedi_scsi_transport;
> +static struct pci_driver qedi_pci_driver;
> +static DEFINE_PER_CPU(struct qedi_percpu_s, qedi_percpu);
> +/* Static function declaration */
> +static int qedi_alloc_global_queues(struct qedi_ctx *qedi);
> +static void qedi_free_global_queues(struct qedi_ctx *qedi);
> +
> +static int qedi_iscsi_event_cb(void *context, u8 fw_event_code, void *fw_handle)
> +{
> +	struct qedi_ctx *qedi;
> +	struct qedi_endpoint *qedi_ep;
> +	struct async_data *data;
> +	int rval = 0;
> +
> +	if (!context || !fw_handle) {
> +		QEDI_ERR(NULL, "Recv event with ctx NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	qedi = (struct qedi_ctx *)context;
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +		  "Recv Event %d fw_handle %p\n", fw_event_code, fw_handle);
> +
> +	data = (struct async_data *)fw_handle;
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +		  "cid=0x%x tid=0x%x err-code=0x%x fw-dbg-param=0x%x\n",
> +		   data->cid, data->itid, data->error_code,
> +		   data->fw_debug_param);
> +
> +	qedi_ep = qedi->ep_tbl[data->cid];
> +
> +	if (!qedi_ep) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "Cannot process event, ep already disconnected, cid=0x%x\n",
> +			   data->cid);
> +		WARN_ON(1);
> +		return -ENODEV;
> +	}
> +
> +	switch (fw_event_code) {
> +	case ISCSI_EVENT_TYPE_ASYN_CONNECT_COMPLETE:
> +		if (qedi_ep->state == EP_STATE_OFLDCONN_START)
> +			qedi_ep->state = EP_STATE_OFLDCONN_COMPL;
> +
> +		wake_up_interruptible(&qedi_ep->tcp_ofld_wait);
> +		break;
> +	case ISCSI_EVENT_TYPE_ASYN_TERMINATE_DONE:
> +		qedi_ep->state = EP_STATE_DISCONN_COMPL;
> +		wake_up_interruptible(&qedi_ep->tcp_ofld_wait);
> +		break;
> +	case ISCSI_EVENT_TYPE_ISCSI_CONN_ERROR:
> +		qedi_process_iscsi_error(qedi_ep, data);
> +		break;
> +	case ISCSI_EVENT_TYPE_ASYN_ABORT_RCVD:
> +	case ISCSI_EVENT_TYPE_ASYN_SYN_RCVD:
> +	case ISCSI_EVENT_TYPE_ASYN_MAX_RT_TIME:
> +	case ISCSI_EVENT_TYPE_ASYN_MAX_RT_CNT:
> +	case ISCSI_EVENT_TYPE_ASYN_MAX_KA_PROBES_CNT:
> +	case ISCSI_EVENT_TYPE_ASYN_FIN_WAIT2:
> +	case ISCSI_EVENT_TYPE_TCP_CONN_ERROR:
> +		qedi_process_tcp_error(qedi_ep, data);
> +		break;
> +	default:
> +		QEDI_ERR(&qedi->dbg_ctx, "Recv Unknown Event %u\n",
> +			 fw_event_code);
> +	}
> +
> +	return rval;
> +}
> +
> +static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
> +				  struct qed_sb_info *sb_info, u16 sb_id)
> +{
> +	struct status_block *sb_virt;
> +	dma_addr_t sb_phys;
> +	int ret;
> +
> +	sb_virt = dma_alloc_coherent(&qedi->pdev->dev,
> +				     sizeof(struct status_block), &sb_phys,
> +				     GFP_KERNEL);
> +	if (!sb_virt) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Status block allocation failed for id = %d.\n",
> +			  sb_id);
> +		return -ENOMEM;
> +	}
> +
> +	ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
> +				       sb_id, QED_SB_TYPE_STORAGE);
> +	if (ret) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Status block initialization failed for id = %d.\n",
> +			  sb_id);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void qedi_free_sb(struct qedi_ctx *qedi)
> +{
> +	struct qed_sb_info *sb_info;
> +	int id;
> +
> +	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
> +		sb_info = &qedi->sb_array[id];
> +		if (sb_info->sb_virt)
> +			dma_free_coherent(&qedi->pdev->dev,
> +					  sizeof(*sb_info->sb_virt),
> +					  (void *)sb_info->sb_virt,
> +					  sb_info->sb_phys);
> +	}
> +}
> +
> +static void qedi_free_fp(struct qedi_ctx *qedi)
> +{
> +	kfree(qedi->fp_array);
> +	kfree(qedi->sb_array);
> +}
> +
> +static void qedi_destroy_fp(struct qedi_ctx *qedi)
> +{
> +	qedi_free_sb(qedi);
> +	qedi_free_fp(qedi);
> +}
> +
> +static int qedi_alloc_fp(struct qedi_ctx *qedi)
> +{
> +	int ret = 0;
> +
> +	qedi->fp_array = kcalloc(MIN_NUM_CPUS_MSIX(qedi),
> +				 sizeof(struct qedi_fastpath), GFP_KERNEL);
> +	if (!qedi->fp_array) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "fastpath fp array allocation failed.\n");
> +		return -ENOMEM;
> +	}
> +
> +	qedi->sb_array = kcalloc(MIN_NUM_CPUS_MSIX(qedi),
> +				 sizeof(struct qed_sb_info), GFP_KERNEL);
> +	if (!qedi->sb_array) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "fastpath sb array allocation failed.\n");
> +		ret = -ENOMEM;
> +		goto free_fp;
> +	}
> +
> +	return ret;
> +
> +free_fp:
> +	qedi_free_fp(qedi);
> +	return ret;
> +}
> +
> +static void qedi_int_fp(struct qedi_ctx *qedi)
> +{
> +	struct qedi_fastpath *fp;
> +	int id;
> +
> +	memset((void *)qedi->fp_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
> +	       sizeof(*qedi->fp_array));
> +	memset((void *)qedi->sb_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
> +	       sizeof(*qedi->sb_array));
> +
> +	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
> +		fp = &qedi->fp_array[id];
> +		fp->sb_info = &qedi->sb_array[id];
> +		fp->sb_id = id;
> +		fp->qedi = qedi;
> +		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
> +			 "qedi", id);
> +
> +		/* fp_array[i] ---- irq cookie
> +		 * So init data which is needed in int ctx
> +		 */
> +	}
> +}
> +
Please check whether you can make use of Christoph's IRQ affinity rework.

> +static int qedi_prepare_fp(struct qedi_ctx *qedi)
> +{
> +	struct qedi_fastpath *fp;
> +	int id, ret = 0;
> +
> +	ret = qedi_alloc_fp(qedi);
> +	if (ret)
> +		goto err;
> +
> +	qedi_int_fp(qedi);
> +
> +	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
> +		fp = &qedi->fp_array[id];
> +		ret = qedi_alloc_and_init_sb(qedi, fp->sb_info, fp->sb_id);
> +		if (ret) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "SB allocation and initialization failed.\n");
> +			ret = -EIO;
> +			goto err_init;
> +		}
> +	}
> +
> +	return 0;
> +
> +err_init:
> +	qedi_free_sb(qedi);
> +	qedi_free_fp(qedi);
> +err:
> +	return ret;
> +}
> +
> +static enum qed_int_mode qedi_int_mode_to_enum(void)
> +{
> +	switch (int_mode) {
> +	case 0: return QED_INT_MODE_MSIX;
> +	case 1: return QED_INT_MODE_INTA;
> +	case 2: return QED_INT_MODE_MSI;
> +	default:
> +		QEDI_ERR(NULL,
> +			 "Unknown qedi int_mode=%08x; defaulting to MSI-X\n",
> +			 int_mode);
> +		return QED_INT_MODE_MSIX;
> +	}
> +}
> +
> +static int qedi_setup_cid_que(struct qedi_ctx *qedi)
> +{
> +	int i;
> +
> +	qedi->cid_que.cid_que_base = kmalloc((qedi->max_active_conns *
> +					      sizeof(u32)), GFP_KERNEL);
> +	if (!qedi->cid_que.cid_que_base)
> +		return -ENOMEM;
> +
> +	qedi->cid_que.conn_cid_tbl = kmalloc((qedi->max_active_conns *
> +					      sizeof(struct qedi_conn *)),
> +					     GFP_KERNEL);
> +	if (!qedi->cid_que.conn_cid_tbl) {
> +		kfree(qedi->cid_que.cid_que_base);
> +		qedi->cid_que.cid_que_base = NULL;
> +		return -ENOMEM;
> +	}
> +
> +	qedi->cid_que.cid_que = (u32 *)qedi->cid_que.cid_que_base;
> +	qedi->cid_que.cid_q_prod_idx = 0;
> +	qedi->cid_que.cid_q_cons_idx = 0;
> +	qedi->cid_que.cid_q_max_idx = qedi->max_active_conns;
> +	qedi->cid_que.cid_free_cnt = qedi->max_active_conns;
> +
> +	for (i = 0; i < qedi->max_active_conns; i++) {
> +		qedi->cid_que.cid_que[i] = i;
> +		qedi->cid_que.conn_cid_tbl[i] = NULL;
> +	}
> +
> +	return 0;
> +}
> +
> +static void qedi_release_cid_que(struct qedi_ctx *qedi)
> +{
> +	kfree(qedi->cid_que.cid_que_base);
> +	qedi->cid_que.cid_que_base = NULL;
> +
> +	kfree(qedi->cid_que.conn_cid_tbl);
> +	qedi->cid_que.conn_cid_tbl = NULL;
> +}
> +
> +static int qedi_init_id_tbl(struct qedi_portid_tbl *id_tbl, u16 size,
> +			    u16 start_id, u16 next)
> +{
> +	id_tbl->start = start_id;
> +	id_tbl->max = size;
> +	id_tbl->next = next;
> +	spin_lock_init(&id_tbl->lock);
> +	/* test_bit()/set_bit() operate on unsigned long, so size the
> +	 * bitmap in longs rather than 32-bit words.
> +	 */
> +	id_tbl->table = kzalloc(BITS_TO_LONGS(size) * sizeof(unsigned long),
> +				GFP_KERNEL);
> +	if (!id_tbl->table)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +
> +static void qedi_free_id_tbl(struct qedi_portid_tbl *id_tbl)
> +{
> +	kfree(id_tbl->table);
> +	id_tbl->table = NULL;
> +}
> +
> +int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id)
> +{
> +	int ret = -1;
> +
> +	id -= id_tbl->start;
> +	if (id >= id_tbl->max)
> +		return ret;
> +
> +	spin_lock(&id_tbl->lock);
> +	if (!test_bit(id, id_tbl->table)) {
> +		set_bit(id, id_tbl->table);
> +		ret = 0;
> +	}
> +	spin_unlock(&id_tbl->lock);
> +	return ret;
> +}
> +
> +u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl)
> +{
> +	u16 id;
> +
> +	spin_lock(&id_tbl->lock);
> +	id = find_next_zero_bit(id_tbl->table, id_tbl->max, id_tbl->next);
> +	if (id >= id_tbl->max) {
> +		id = QEDI_LOCAL_PORT_INVALID;
> +		if (id_tbl->next != 0) {
> +			id = find_first_zero_bit(id_tbl->table, id_tbl->next);
> +			if (id >= id_tbl->next)
> +				id = QEDI_LOCAL_PORT_INVALID;
> +		}
> +	}
> +
> +	if (id < id_tbl->max) {
> +		set_bit(id, id_tbl->table);
> +		id_tbl->next = (id + 1) & (id_tbl->max - 1);
> +		id += id_tbl->start;
> +	}
> +
> +	spin_unlock(&id_tbl->lock);
> +
> +	return id;
> +}
> +
> +void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id)
> +{
> +	if (id == QEDI_LOCAL_PORT_INVALID)
> +		return;
> +
> +	id -= id_tbl->start;
> +	if (id >= id_tbl->max)
> +		return;
> +
> +	clear_bit(id, id_tbl->table);
> +}
> +
> +static void qedi_cm_free_mem(struct qedi_ctx *qedi)
> +{
> +	kfree(qedi->ep_tbl);
> +	qedi->ep_tbl = NULL;
> +	qedi_free_id_tbl(&qedi->lcl_port_tbl);
> +}
> +
> +static int qedi_cm_alloc_mem(struct qedi_ctx *qedi)
> +{
> +	u16 port_id;
> +
> +	qedi->ep_tbl = kzalloc((qedi->max_active_conns *
> +				sizeof(struct qedi_endpoint *)), GFP_KERNEL);
> +	if (!qedi->ep_tbl)
> +		return -ENOMEM;
> +	port_id = prandom_u32() % QEDI_LOCAL_PORT_RANGE;
> +	if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE,
> +			     QEDI_LOCAL_PORT_MIN, port_id)) {
> +		qedi_cm_free_mem(qedi);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev)
> +{
> +	struct Scsi_Host *shost;
> +	struct qedi_ctx *qedi = NULL;
> +
> +	shost = iscsi_host_alloc(&qedi_host_template,
> +				 sizeof(struct qedi_ctx), 0);
> +	if (!shost) {
> +		QEDI_ERR(NULL, "Could not allocate shost\n");
> +		goto exit_setup_shost;
> +	}
> +
> +	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA;
> +	shost->max_channel = 0;
> +	shost->max_lun = ~0;
> +	shost->max_cmd_len = 16;
> +	shost->transportt = qedi_scsi_transport;
> +
> +	qedi = iscsi_host_priv(shost);
> +	memset(qedi, 0, sizeof(*qedi));
> +	qedi->shost = shost;
> +	qedi->dbg_ctx.host_no = shost->host_no;
> +	qedi->pdev = pdev;
> +	qedi->dbg_ctx.pdev = pdev;
> +	qedi->max_active_conns = ISCSI_MAX_SESS_PER_HBA;
> +	qedi->max_sqes = QEDI_SQ_SIZE;
> +
> +	if (shost_use_blk_mq(shost))
> +		shost->nr_hw_queues = MIN_NUM_CPUS_MSIX(qedi);
> +
> +	pci_set_drvdata(pdev, qedi);
> +
> +exit_setup_shost:
> +	return qedi;
> +}
> +
> +static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi)
> +{
> +	u8 num_sq_pages;
> +	u32 log_page_size;
> +	int rval = 0;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC, "Min number of MSIX %d\n",
> +		  MIN_NUM_CPUS_MSIX(qedi));
> +
> +	num_sq_pages = (MAX_OUSTANDING_TASKS_PER_CON * 8) / PAGE_SIZE;
> +
> +	qedi->num_queues = MIN_NUM_CPUS_MSIX(qedi);
> +
> +	memset(&qedi->pf_params.iscsi_pf_params, 0,
> +	       sizeof(qedi->pf_params.iscsi_pf_params));
> +
> +	qedi->p_cpuq = pci_alloc_consistent(qedi->pdev,
> +			qedi->num_queues * sizeof(struct qedi_glbl_q_params),
> +			&qedi->hw_p_cpuq);
> +	if (!qedi->p_cpuq) {
> +		QEDI_ERR(&qedi->dbg_ctx, "pci_alloc_consistent fail\n");
> +		rval = -1;
> +		goto err_alloc_mem;
> +	}
> +
> +	rval = qedi_alloc_global_queues(qedi);
> +	if (rval) {
> +		QEDI_ERR(&qedi->dbg_ctx, "Global queue allocation failed.\n");
> +		rval = -1;
> +		goto err_alloc_mem;
> +	}
> +
> +	qedi->pf_params.iscsi_pf_params.num_cons = QEDI_MAX_ISCSI_CONNS_PER_HBA;
> +	qedi->pf_params.iscsi_pf_params.num_tasks = QEDI_MAX_ISCSI_TASK;
> +	qedi->pf_params.iscsi_pf_params.half_way_close_timeout = 10;
> +	qedi->pf_params.iscsi_pf_params.num_sq_pages_in_ring = num_sq_pages;
> +	qedi->pf_params.iscsi_pf_params.num_r2tq_pages_in_ring = num_sq_pages;
> +	qedi->pf_params.iscsi_pf_params.num_uhq_pages_in_ring = num_sq_pages;
> +	qedi->pf_params.iscsi_pf_params.num_queues = qedi->num_queues;
> +	qedi->pf_params.iscsi_pf_params.debug_mode = fw_debug;
> +
> +	/* PAGE_SHIFT is log2(PAGE_SIZE); no need to search for it */
> +	qedi->pf_params.iscsi_pf_params.log_page_size = PAGE_SHIFT;
> +
> +	qedi->pf_params.iscsi_pf_params.glbl_q_params_addr = qedi->hw_p_cpuq;
> +
> +	/* RQ BDQ initializations.
> +	 * rq_num_entries: suggested value for Initiator is 16 (4KB RQ)
> +	 * rqe_log_size: 8 for 256B RQE
> +	 */
> +	qedi->pf_params.iscsi_pf_params.rqe_log_size = 8;
> +	/* BDQ address and size */
> +	qedi->pf_params.iscsi_pf_params.bdq_pbl_base_addr[BDQ_ID_RQ] =
> +							qedi->bdq_pbl_list_dma;
> +	qedi->pf_params.iscsi_pf_params.bdq_pbl_num_entries[BDQ_ID_RQ] =
> +						qedi->bdq_pbl_list_num_entries;
> +	qedi->pf_params.iscsi_pf_params.rq_buffer_size = QEDI_BDQ_BUF_SIZE;
> +
> +	/* cq_num_entries: num_tasks + rq_num_entries */
> +	qedi->pf_params.iscsi_pf_params.cq_num_entries = 2048;
> +
> +	qedi->pf_params.iscsi_pf_params.gl_rq_pi = QEDI_PROTO_CQ_PROD_IDX;
> +	qedi->pf_params.iscsi_pf_params.gl_cmd_pi = 1;
> +	qedi->pf_params.iscsi_pf_params.ooo_enable = 1;
> +
> +err_alloc_mem:
> +	return rval;
> +}
> +
> +/* Free DMA coherent memory for array of queue pointers we pass to qed */
> +static void qedi_free_iscsi_pf_param(struct qedi_ctx *qedi)
> +{
> +	size_t size = 0;
> +
> +	if (qedi->p_cpuq) {
> +		size = qedi->num_queues * sizeof(struct qedi_glbl_q_params);
> +		pci_free_consistent(qedi->pdev, size, qedi->p_cpuq,
> +				    qedi->hw_p_cpuq);
> +	}
> +
> +	qedi_free_global_queues(qedi);
> +
> +	kfree(qedi->global_queues);
> +}
> +
> +static void qedi_link_update(void *dev, struct qed_link_output *link)
> +{
> +	struct qedi_ctx *qedi = (struct qedi_ctx *)dev;
> +
> +	if (link->link_up) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "Link Up event.\n");
> +		atomic_set(&qedi->link_state, QEDI_LINK_UP);
> +	} else {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "Link Down event.\n");
> +		atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
> +	}
> +}
> +
> +static struct qed_iscsi_cb_ops qedi_cb_ops = {
> +	{
> +		.link_update =		qedi_link_update,
> +	}
> +};
> +
> +static bool qedi_process_completions(struct qedi_fastpath *fp)
> +{
> +	struct qedi_work *qedi_work = NULL;
> +	struct qedi_ctx *qedi = fp->qedi;
> +	struct qed_sb_info *sb_info = fp->sb_info;
> +	struct status_block *sb = sb_info->sb_virt;
> +	struct qedi_percpu_s *p = NULL;
> +	struct global_queue *que;
> +	u16 prod_idx;
> +	unsigned long flags;
> +	union iscsi_cqe *cqe;
> +	int cpu;
> +
> +	/* Get the current firmware producer index */
> +	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
> +
> +	if (prod_idx >= QEDI_CQ_SIZE)
> +		prod_idx = prod_idx % QEDI_CQ_SIZE;
> +
> +	que = qedi->global_queues[fp->sb_id];
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
> +		  "Before: global queue=%p prod_idx=%d cons_idx=%d, sb_id=%d\n",
> +		  que, prod_idx, que->cq_cons_idx, fp->sb_id);
> +
> +	qedi->intr_cpu = fp->sb_id;
> +	cpu = smp_processor_id();
> +	p = &per_cpu(qedi_percpu, cpu);
> +
> +	WARN_ON(!p->iothread);
> +
> +	spin_lock_irqsave(&p->p_work_lock, flags);
> +	while (que->cq_cons_idx != prod_idx) {
> +		cqe = &que->cq[que->cq_cons_idx];
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
> +			  "cqe=%p prod_idx=%d cons_idx=%d.\n",
> +			  cqe, prod_idx, que->cq_cons_idx);
> +
> +		/* Allocate a work item and copy the cqe into it */
> +		qedi_work = kzalloc(sizeof(*qedi_work), GFP_ATOMIC);
> +		if (!qedi_work) {
> +			/* 'continue' here would spin forever without
> +			 * advancing the consumer index; drop out and let
> +			 * the next interrupt retry the remaining CQEs.
> +			 */
> +			WARN_ON(1);
> +			break;
> +		}
> +		INIT_LIST_HEAD(&qedi_work->list);
> +		qedi_work->qedi = qedi;
> +		memcpy(&qedi_work->cqe, cqe, sizeof(union iscsi_cqe));
> +		qedi_work->que_idx = fp->sb_id;
> +		list_add_tail(&qedi_work->list, &p->work_list);
> +
Memory allocation in an interrupt routine?
You must be kidding ...
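
The usual way out is to make the hot path allocation-free: carve the work items from a pool preallocated at init time (sized to the CQ depth) and just link/unlink them under the lock. Below is a minimal userspace sketch of that freelist idea; all names are mine, not the driver's, and the per-CPU/locking details are omitted:

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 8	/* in the driver this would track the CQ depth */

struct work_item {
	struct work_item *next;
	/* in the driver: the copied cqe, qedi ctx, queue index, ... */
};

struct work_pool {
	struct work_item items[POOL_SIZE];
	struct work_item *free_list;
};

/* Fill the freelist once, at init time (process context, may sleep). */
static void pool_init(struct work_pool *p)
{
	size_t i;

	p->free_list = NULL;
	for (i = 0; i < POOL_SIZE; i++) {
		p->items[i].next = p->free_list;
		p->free_list = &p->items[i];
	}
}

/* O(1) and allocation-free: safe from a completion/interrupt path. */
static struct work_item *pool_get(struct work_pool *p)
{
	struct work_item *w = p->free_list;

	if (w)
		p->free_list = w->next;
	return w;
}

static void pool_put(struct work_pool *p, struct work_item *w)
{
	w->next = p->free_list;
	p->free_list = w;
}

/* Drain the pool, give one item back, take it again. */
static int pool_demo(void)
{
	static struct work_pool pool;
	struct work_item *w;
	int got = 0;

	pool_init(&pool);
	while ((w = pool_get(&pool)) != NULL)
		got++;			/* POOL_SIZE successful gets */
	pool_put(&pool, &pool.items[0]);
	if (pool_get(&pool) != NULL)
		got++;			/* one more after a put */
	return got;
}
```

A bounded pool also gives natural backpressure: when it runs dry, the handler stops consuming and the next interrupt picks up the remaining CQEs.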

> +		que->cq_cons_idx++;
> +		if (que->cq_cons_idx == QEDI_CQ_SIZE)
> +			que->cq_cons_idx = 0;
> +	}
> +	wake_up_process(p->iothread);
> +	spin_unlock_irqrestore(&p->p_work_lock, flags);
> +
> +	return true;
> +}
> +
> +static bool qedi_fp_has_work(struct qedi_fastpath *fp)
> +{
> +	struct qedi_ctx *qedi = fp->qedi;
> +	struct global_queue *que;
> +	struct qed_sb_info *sb_info = fp->sb_info;
> +	struct status_block *sb = sb_info->sb_virt;
> +	u16 prod_idx;
> +
> +	barrier();
> +
> +	/* Get the current firmware producer index */
> +	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
> +
> +	/* Get the pointer to the global CQ this completion is on */
> +	que = qedi->global_queues[fp->sb_id];
> +
> +	/* prod idx wrap around uint16 */
> +	if (prod_idx >= QEDI_CQ_SIZE)
> +		prod_idx = prod_idx % QEDI_CQ_SIZE;
> +
> +	return (que->cq_cons_idx != prod_idx);
> +}
> +
> +/* MSI-X fastpath handler code */
> +static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
> +{
> +	struct qedi_fastpath *fp = dev_id;
> +	struct qedi_ctx *qedi = fp->qedi;
> +	bool wake_io_thread = true;
> +
> +	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
> +
> +process_again:
> +	wake_io_thread = qedi_process_completions(fp);
> +	if (wake_io_thread) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
> +			  "process already running\n");
> +	}
> +
> +	if (qedi_fp_has_work(fp) == 0)
> +		qed_sb_update_sb_idx(fp->sb_info);
> +
> +	/* Check for more work */
> +	rmb();
> +
> +	if (qedi_fp_has_work(fp) == 0)
> +		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
> +	else
> +		goto process_again;
> +
> +	return IRQ_HANDLED;
> +}
> +
> +/* simd handler for MSI/INTa */
> +static void qedi_simd_int_handler(void *cookie)
> +{
> +	/* Cookie is qedi_ctx struct */
> +	struct qedi_ctx *qedi = (struct qedi_ctx *)cookie;
> +
> +	QEDI_WARN(&qedi->dbg_ctx, "qedi=%p.\n", qedi);
> +}
> +
> +#define QEDI_SIMD_HANDLER_NUM		0
> +static void qedi_sync_free_irqs(struct qedi_ctx *qedi)
> +{
> +	int i;
> +
> +	if (qedi->int_info.msix_cnt) {
> +		for (i = 0; i < qedi->int_info.used_cnt; i++) {
> +			synchronize_irq(qedi->int_info.msix[i].vector);
> +			irq_set_affinity_hint(qedi->int_info.msix[i].vector,
> +					      NULL);
> +			free_irq(qedi->int_info.msix[i].vector,
> +				 &qedi->fp_array[i]);
> +		}
> +	} else {
> +		qedi_ops->common->simd_handler_clean(qedi->cdev,
> +						     QEDI_SIMD_HANDLER_NUM);
> +	}
> +
> +	qedi->int_info.used_cnt = 0;
> +	qedi_ops->common->set_fp_int(qedi->cdev, 0);
> +}
> +
Again, consider using the interrupt affinity rework from Christoph Hellwig.
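
For reference, with the pci_alloc_irq_vectors() interface that rework introduced, the vector setup and the teardown above collapse to roughly the following. This is a hand-written, non-compilable sketch of the generic PCI/IRQ API, not the driver's actual code:

```c
/* probe: the core allocates MSI-X vectors and spreads them across CPUs */
nvec = pci_alloc_irq_vectors(pdev, 1, num_online_cpus(),
			     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
if (nvec < 0)
	return nvec;

for (i = 0; i < nvec; i++)
	rc = request_irq(pci_irq_vector(pdev, i), qedi_msix_handler, 0,
			 qedi->fp_array[i].name, &qedi->fp_array[i]);

/* teardown: no manual irq_set_affinity_hint() bookkeeping needed */
for (i = 0; i < nvec; i++)
	free_irq(pci_irq_vector(pdev, i), &qedi->fp_array[i]);
pci_free_irq_vectors(pdev);
```

With PCI_IRQ_AFFINITY the core computes the spreading, so the driver no longer has to manage affinity hints itself.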

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [RFC 4/6] qedi: Add LL2 iSCSI interface for offload iSCSI.
  2016-10-19  5:01   ` manish.rangankar
  (?)
@ 2016-10-19  7:53   ` Hannes Reinecke
  -1 siblings, 0 replies; 38+ messages in thread
From: Hannes Reinecke @ 2016-10-19  7:53 UTC (permalink / raw)
  To: manish.rangankar, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Nilesh Javali, Adheer Chandravanshi,
	Chad Dupuis, Saurav Kashyap, Arun Easi

On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> From: Manish Rangankar <manish.rangankar@cavium.com>
> 
> This patch adds support for the iscsiuio interface using the qed
> Light L2 (LL2) interface.
> 
> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
> ---
>  drivers/scsi/qedi/qedi.h      |  73 +++++++++
>  drivers/scsi/qedi/qedi_main.c | 357 ++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 430 insertions(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [RFC 5/6] qedi: Add support for iSCSI session management.
  2016-10-19  5:01   ` manish.rangankar
  (?)
@ 2016-10-19  8:03   ` Hannes Reinecke
  2016-10-20  9:09     ` Rangankar, Manish
  -1 siblings, 1 reply; 38+ messages in thread
From: Hannes Reinecke @ 2016-10-19  8:03 UTC (permalink / raw)
  To: manish.rangankar, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Nilesh Javali, Adheer Chandravanshi,
	Chad Dupuis, Saurav Kashyap, Arun Easi

On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> From: Manish Rangankar <manish.rangankar@cavium.com>
> 
> This patch adds iscsi_transport LLD support for Login,
> Logout, NOP-IN/NOP-OUT, Async and Reject PDU processing,
> and for firmware async event handling.
> 
> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
> ---
>  drivers/scsi/qedi/qedi_fw.c    | 1123 ++++++++++++++++++++++++++++
>  drivers/scsi/qedi/qedi_gbl.h   |   67 ++
>  drivers/scsi/qedi/qedi_iscsi.c | 1604 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/qedi/qedi_iscsi.h |  228 ++++++
>  drivers/scsi/qedi/qedi_main.c  |  164 ++++
>  5 files changed, 3186 insertions(+)
>  create mode 100644 drivers/scsi/qedi/qedi_fw.c
>  create mode 100644 drivers/scsi/qedi/qedi_gbl.h
>  create mode 100644 drivers/scsi/qedi/qedi_iscsi.c
>  create mode 100644 drivers/scsi/qedi/qedi_iscsi.h
> 
> diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
> new file mode 100644
> index 0000000..a820785
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_fw.c
> @@ -0,0 +1,1123 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include <linux/blkdev.h>
> +#include <scsi/scsi_tcq.h>
> +#include <linux/delay.h>
> +
> +#include "qedi.h"
> +#include "qedi_iscsi.h"
> +#include "qedi_gbl.h"
> +
> +static int qedi_send_iscsi_tmf(struct qedi_conn *qedi_conn,
> +			       struct iscsi_task *mtask);
> +
> +void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd)
> +{
> +	struct scsi_cmnd *sc = cmd->scsi_cmd;
> +
> +	if (cmd->io_tbl.sge_valid && sc) {
> +		scsi_dma_unmap(sc);
> +		cmd->io_tbl.sge_valid = 0;
> +	}
> +}
> +
> +static void qedi_process_logout_resp(struct qedi_ctx *qedi,
> +				     union iscsi_cqe *cqe,
> +				     struct iscsi_task *task,
> +				     struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_logout_rsp *resp_hdr;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_logout_response_hdr *cqe_logout_response;
> +	struct qedi_cmd *cmd;
> +
> +	cmd = (struct qedi_cmd *)task->dd_data;
> +	cqe_logout_response = &cqe->cqe_common.iscsi_hdr.logout_response;
> +	spin_lock(&session->back_lock);
> +	resp_hdr = (struct iscsi_logout_rsp *)&qedi_conn->gen_pdu.resp_hdr;
> +	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
> +	resp_hdr->opcode = cqe_logout_response->opcode;
> +	resp_hdr->flags = cqe_logout_response->flags;
> +	resp_hdr->hlength = 0;
> +
> +	resp_hdr->itt = build_itt(cqe->cqe_solicited.itid, conn->session->age);
> +	resp_hdr->statsn = cpu_to_be32(cqe_logout_response->stat_sn);
> +	resp_hdr->exp_cmdsn = cpu_to_be32(cqe_logout_response->exp_cmd_sn);
> +	resp_hdr->max_cmdsn = cpu_to_be32(cqe_logout_response->max_cmd_sn);
> +
> +	resp_hdr->t2wait = cpu_to_be32(cqe_logout_response->time2wait);
> +	resp_hdr->t2retain = cpu_to_be32(cqe_logout_response->time2retain);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
> +		  "Freeing tid=0x%x for cid=0x%x\n",
> +		  cmd->task_id, qedi_conn->iscsi_conn_id);
> +
> +	if (likely(cmd->io_cmd_in_list)) {
> +		cmd->io_cmd_in_list = false;
> +		list_del_init(&cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +	} else {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "Active cmd list node already deleted, tid=0x%x, cid=0x%x, io_cmd_node=%p\n",
> +			  cmd->task_id, qedi_conn->iscsi_conn_id,
> +			  &cmd->io_cmd);
> +	}
> +
> +	cmd->state = RESPONSE_RECEIVED;
> +	qedi_clear_task_idx(qedi, cmd->task_id);
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, NULL, 0);
> +
> +	spin_unlock(&session->back_lock);
> +}
> +
> +static void qedi_process_text_resp(struct qedi_ctx *qedi,
> +				   union iscsi_cqe *cqe,
> +				   struct iscsi_task *task,
> +				   struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_task_context *task_ctx;
> +	struct iscsi_text_rsp *resp_hdr_ptr;
> +	struct iscsi_text_response_hdr *cqe_text_response;
> +	struct qedi_cmd *cmd;
> +	int pld_len;
> +	u32 *tmp;
> +
> +	cmd = (struct qedi_cmd *)task->dd_data;
> +	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
> +								  cmd->task_id);
> +
> +	cqe_text_response = &cqe->cqe_common.iscsi_hdr.text_response;
> +	spin_lock(&session->back_lock);
> +	resp_hdr_ptr =  (struct iscsi_text_rsp *)&qedi_conn->gen_pdu.resp_hdr;
> +	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_hdr));
> +	resp_hdr_ptr->opcode = cqe_text_response->opcode;
> +	resp_hdr_ptr->flags = cqe_text_response->flags;
> +	resp_hdr_ptr->hlength = 0;
> +
> +	hton24(resp_hdr_ptr->dlength,
> +	       (cqe_text_response->hdr_second_dword &
> +		ISCSI_TEXT_RESPONSE_HDR_DATA_SEG_LEN_MASK));
> +	tmp = (u32 *)resp_hdr_ptr->dlength;
> +
> +	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
> +				      conn->session->age);
> +	resp_hdr_ptr->ttt = cqe_text_response->ttt;
> +	resp_hdr_ptr->statsn = cpu_to_be32(cqe_text_response->stat_sn);
> +	resp_hdr_ptr->exp_cmdsn = cpu_to_be32(cqe_text_response->exp_cmd_sn);
> +	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_text_response->max_cmd_sn);
> +
> +	pld_len = cqe_text_response->hdr_second_dword &
> +		  ISCSI_TEXT_RESPONSE_HDR_DATA_SEG_LEN_MASK;
> +	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
> +
> +	memset(task_ctx, '\0', sizeof(*task_ctx));
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
> +		  "Freeing tid=0x%x for cid=0x%x\n",
> +		  cmd->task_id, qedi_conn->iscsi_conn_id);
> +
> +	if (likely(cmd->io_cmd_in_list)) {
> +		cmd->io_cmd_in_list = false;
> +		list_del_init(&cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +	} else {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "Active cmd list node already deleted, tid=0x%x, cid=0x%x, io_cmd_node=%p\n",
> +			  cmd->task_id, qedi_conn->iscsi_conn_id,
> +			  &cmd->io_cmd);
> +	}
> +
> +	cmd->state = RESPONSE_RECEIVED;
> +	qedi_clear_task_idx(qedi, cmd->task_id);
> +
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
> +			     qedi_conn->gen_pdu.resp_buf,
> +			     (qedi_conn->gen_pdu.resp_wr_ptr -
> +			      qedi_conn->gen_pdu.resp_buf));
> +	spin_unlock(&session->back_lock);
> +}
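The dlength handling above relies on hton24()/ntoh24() to move the 24-bit
DataSegmentLength in and out of the three-byte iSCSI header field. For anyone
following along, a minimal userspace sketch of those helpers (names borrowed
from the kernel's iscsi_proto.h; this is an illustration, not the kernel code):

```c
#include <stdint.h>

/* Store a 24-bit host-order value into 3 bytes, most significant byte
 * first, mirroring the kernel's hton24() used for DataSegmentLength. */
static void hton24(uint8_t *p, uint32_t val)
{
	p[0] = (val >> 16) & 0xff;
	p[1] = (val >> 8) & 0xff;
	p[2] = val & 0xff;
}

/* Reassemble the 3 big-endian bytes back into host order. */
static uint32_t ntoh24(const uint8_t *p)
{
	return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | p[2];
}
```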
> +
> +static void qedi_process_login_resp(struct qedi_ctx *qedi,
> +				    union iscsi_cqe *cqe,
> +				    struct iscsi_task *task,
> +				    struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_task_context *task_ctx;
> +	struct iscsi_login_rsp *resp_hdr_ptr;
> +	struct iscsi_login_response_hdr *cqe_login_response;
> +	struct qedi_cmd *cmd;
> +	int pld_len;
> +	u32 *tmp;
> +
> +	cmd = (struct qedi_cmd *)task->dd_data;
> +
> +	cqe_login_response = &cqe->cqe_common.iscsi_hdr.login_response;
> +	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
> +							  cmd->task_id);
> +	spin_lock(&session->back_lock);
> +	resp_hdr_ptr = (struct iscsi_login_rsp *)&qedi_conn->gen_pdu.resp_hdr;
> +	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_login_rsp));
> +	resp_hdr_ptr->opcode = cqe_login_response->opcode;
> +	resp_hdr_ptr->flags = cqe_login_response->flags_attr;
> +	resp_hdr_ptr->hlength = 0;
> +
> +	hton24(resp_hdr_ptr->dlength,
> +	       (cqe_login_response->hdr_second_dword &
> +		ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK));
> +	tmp = (u32 *)resp_hdr_ptr->dlength;
> +	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
> +				      conn->session->age);
> +	resp_hdr_ptr->tsih = cqe_login_response->tsih;
> +	resp_hdr_ptr->statsn = cpu_to_be32(cqe_login_response->stat_sn);
> +	resp_hdr_ptr->exp_cmdsn = cpu_to_be32(cqe_login_response->exp_cmd_sn);
> +	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_login_response->max_cmd_sn);
> +	resp_hdr_ptr->status_class = cqe_login_response->status_class;
> +	resp_hdr_ptr->status_detail = cqe_login_response->status_detail;
> +	pld_len = cqe_login_response->hdr_second_dword &
> +		  ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK;
> +	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
> +
> +	if (likely(cmd->io_cmd_in_list)) {
> +		cmd->io_cmd_in_list = false;
> +		list_del_init(&cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +	}
> +
> +	memset(task_ctx, 0, sizeof(*task_ctx));
> +
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
> +			     qedi_conn->gen_pdu.resp_buf,
> +			     (qedi_conn->gen_pdu.resp_wr_ptr -
> +			     qedi_conn->gen_pdu.resp_buf));
> +
> +	spin_unlock(&session->back_lock);
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
> +		  "Freeing tid=0x%x for cid=0x%x\n",
> +		  cmd->task_id, qedi_conn->iscsi_conn_id);
> +	cmd->state = RESPONSE_RECEIVED;
> +	qedi_clear_task_idx(qedi, cmd->task_id);
> +}
> +
> +static void qedi_get_rq_bdq_buf(struct qedi_ctx *qedi,
> +				struct iscsi_cqe_unsolicited *cqe,
> +				char *ptr, int len)
> +{
> +	u16 idx = 0;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "pld_len [%d], bdq_prod_idx [%d], idx [%d]\n",
> +		  len, qedi->bdq_prod_idx,
> +		  (qedi->bdq_prod_idx % qedi->rq_num_entries));
> +
> +	/* Obtain buffer address from rqe_opaque */
> +	idx = cqe->rqe_opaque.lo;
> +	if (idx > (QEDI_BDQ_NUM - 1)) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "wrong idx %d returned by FW, dropping the unsolicited pkt\n",
> +			  idx);
> +		return;
> +	}
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "rqe_opaque.lo [0x%x], rqe_opaque.hi [0x%x], idx [%d]\n",
> +		  cqe->rqe_opaque.lo, cqe->rqe_opaque.hi, idx);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "unsol_cqe_type = %d\n", cqe->unsol_cqe_type);
> +	switch (cqe->unsol_cqe_type) {
> +	case ISCSI_CQE_UNSOLICITED_SINGLE:
> +	case ISCSI_CQE_UNSOLICITED_FIRST:
> +		if (len)
> +			memcpy(ptr, (void *)qedi->bdq[idx].buf_addr, len);
> +		break;
> +	case ISCSI_CQE_UNSOLICITED_MIDDLE:
> +	case ISCSI_CQE_UNSOLICITED_LAST:
> +		break;
> +	default:
> +		break;
> +	}
> +}
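One note on the bounds check above: 'idx' is a u16, so an '(idx < 0)' test can
never fire (the compiler will warn about it); only the upper-bound comparison
does any work. A tiny userspace demonstration of the pitfall, with an assumed
value for QEDI_BDQ_NUM since the real constant is in the driver headers:

```c
#include <stdint.h>

/* Stand-in for the driver's BDQ depth; assumed value, for illustration. */
#define QEDI_BDQ_NUM 256

/* Returns 1 when the firmware-supplied index is usable. Because idx is
 * unsigned, a "negative" value from the wire simply shows up as a large
 * positive number, so the upper-bound test alone is sufficient. */
static int bdq_idx_valid(uint16_t idx)
{
	return idx <= QEDI_BDQ_NUM - 1;
}
```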
> +
> +static void qedi_put_rq_bdq_buf(struct qedi_ctx *qedi,
> +				struct iscsi_cqe_unsolicited *cqe,
> +				int count)
> +{
> +	u16 tmp;
> +	u16 idx = 0;
> +	struct scsi_bd *pbl;
> +
> +	/* Obtain buffer address from rqe_opaque */
> +	idx = cqe->rqe_opaque.lo;
> +	if (idx > (QEDI_BDQ_NUM - 1)) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "wrong idx %d returned by FW, dropping the unsolicited pkt\n",
> +			  idx);
> +		return;
> +	}
> +
> +	pbl = (struct scsi_bd *)qedi->bdq_pbl;
> +	pbl += (qedi->bdq_prod_idx % qedi->rq_num_entries);
> +	pbl->address.hi =
> +		      cpu_to_le32((u32)(((u64)(qedi->bdq[idx].buf_dma)) >> 32));
> +	pbl->address.lo =
> +			cpu_to_le32(((u32)(((u64)(qedi->bdq[idx].buf_dma)) &
> +					    0xffffffff)));
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "pbl [0x%p] pbl->address hi [0x%x] lo [0x%x] idx [%d]\n",
> +		  pbl, pbl->address.hi, pbl->address.lo, idx);
> +	pbl->opaque.hi = cpu_to_le32((u32)(((u64)0) >> 32));
> +	pbl->opaque.lo = cpu_to_le32(((u32)(((u64)idx) & 0xffffffff)));
> +
> +	/* Increment producer to let f/w know we've handled the frame */
> +	qedi->bdq_prod_idx += count;
> +
> +	writew(qedi->bdq_prod_idx, qedi->bdq_primary_prod);
> +	tmp = readw(qedi->bdq_primary_prod);
> +
> +	writew(qedi->bdq_prod_idx, qedi->bdq_secondary_prod);
> +	tmp = readw(qedi->bdq_secondary_prod);
> +}
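qedi_put_rq_bdq_buf() re-posts the buffer by writing its 64-bit DMA address
into the PBL entry as two 32-bit halves. The split/merge logic in isolation,
sketched in plain C (the cpu_to_le32() conversions are omitted since this
sketch runs in host order):

```c
#include <stdint.h>

struct bd_addr {
	uint32_t hi;
	uint32_t lo;
};

/* Split a 64-bit DMA address into the hi/lo dwords the PBL expects. */
static struct bd_addr bd_addr_split(uint64_t dma)
{
	struct bd_addr a;

	a.hi = (uint32_t)(dma >> 32);
	a.lo = (uint32_t)(dma & 0xffffffff);
	return a;
}

/* Reassemble the halves, to verify the round trip. */
static uint64_t bd_addr_join(struct bd_addr a)
{
	return ((uint64_t)a.hi << 32) | a.lo;
}
```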
> +
> +static void qedi_unsol_pdu_adjust_bdq(struct qedi_ctx *qedi,
> +				      struct iscsi_cqe_unsolicited *cqe,
> +				      u32 pdu_len, u32 num_bdqs,
> +				      char *bdq_data)
> +{
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "num_bdqs [%d]\n", num_bdqs);
> +
> +	qedi_get_rq_bdq_buf(qedi, cqe, bdq_data, pdu_len);
> +	qedi_put_rq_bdq_buf(qedi, cqe, (num_bdqs + 1));
> +}
> +
> +static int qedi_process_nopin_mesg(struct qedi_ctx *qedi,
> +				   union iscsi_cqe *cqe,
> +				   struct iscsi_task *task,
> +				   struct qedi_conn *qedi_conn, u16 que_idx)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_nop_in_hdr *cqe_nop_in;
> +	struct iscsi_nopin *hdr;
> +	struct qedi_cmd *cmd;
> +	int tgt_async_nop = 0;
> +	u32 scsi_lun[2];
> +	u32 pdu_len, num_bdqs;
> +	char bdq_data[QEDI_BDQ_BUF_SIZE];
> +	unsigned long flags;
> +
> +	spin_lock_bh(&session->back_lock);
> +	cqe_nop_in = &cqe->cqe_common.iscsi_hdr.nop_in;
> +
> +	pdu_len = cqe_nop_in->hdr_second_dword &
> +		  ISCSI_NOP_IN_HDR_DATA_SEG_LEN_MASK;
> +	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
> +
> +	hdr = (struct iscsi_nopin *)&qedi_conn->gen_pdu.resp_hdr;
> +	memset(hdr, 0, sizeof(struct iscsi_hdr));
> +	hdr->opcode = cqe_nop_in->opcode;
> +	hdr->max_cmdsn = cpu_to_be32(cqe_nop_in->max_cmd_sn);
> +	hdr->exp_cmdsn = cpu_to_be32(cqe_nop_in->exp_cmd_sn);
> +	hdr->statsn = cpu_to_be32(cqe_nop_in->stat_sn);
> +	hdr->ttt = cpu_to_be32(cqe_nop_in->ttt);
> +
> +	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
> +		spin_lock_irqsave(&qedi->hba_lock, flags);
> +		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
> +					  pdu_len, num_bdqs, bdq_data);
> +		hdr->itt = RESERVED_ITT;
> +		tgt_async_nop = 1;
> +		spin_unlock_irqrestore(&qedi->hba_lock, flags);
> +		goto done;
> +	}
> +
> +	/* Response to one of our nop-outs */
> +	if (task) {
> +		cmd = task->dd_data;
> +		hdr->flags = ISCSI_FLAG_CMD_FINAL;
> +		hdr->itt = build_itt(cqe->cqe_solicited.itid,
> +				     conn->session->age);
> +		scsi_lun[0] = 0xffffffff;
> +		scsi_lun[1] = 0xffffffff;
> +		memcpy(&hdr->lun, scsi_lun, sizeof(struct scsi_lun));
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
> +			  "Freeing tid=0x%x for cid=0x%x\n",
> +			  cmd->task_id, qedi_conn->iscsi_conn_id);
> +		cmd->state = RESPONSE_RECEIVED;
> +		spin_lock(&qedi_conn->list_lock);
> +		if (likely(cmd->io_cmd_in_list)) {
> +			cmd->io_cmd_in_list = false;
> +			list_del_init(&cmd->io_cmd);
> +			qedi_conn->active_cmd_count--;
> +		}
> +
> +		spin_unlock(&qedi_conn->list_lock);
> +		qedi_clear_task_idx(qedi, cmd->task_id);
> +	}
> +
> +done:
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr, bdq_data, pdu_len);
> +
> +	spin_unlock_bh(&session->back_lock);
> +	return tgt_async_nop;
> +}
> +
> +static void qedi_process_async_mesg(struct qedi_ctx *qedi,
> +				    union iscsi_cqe *cqe,
> +				    struct iscsi_task *task,
> +				    struct qedi_conn *qedi_conn,
> +				    u16 que_idx)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_async_msg_hdr *cqe_async_msg;
> +	struct iscsi_async *resp_hdr;
> +	u32 scsi_lun[2];
> +	u32 pdu_len, num_bdqs;
> +	char bdq_data[QEDI_BDQ_BUF_SIZE];
> +	unsigned long flags;
> +
> +	spin_lock_bh(&session->back_lock);
> +
> +	cqe_async_msg = &cqe->cqe_common.iscsi_hdr.async_msg;
> +	pdu_len = cqe_async_msg->hdr_second_dword &
> +		ISCSI_ASYNC_MSG_HDR_DATA_SEG_LEN_MASK;
> +	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
> +
> +	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
> +		spin_lock_irqsave(&qedi->hba_lock, flags);
> +		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
> +					  pdu_len, num_bdqs, bdq_data);
> +		spin_unlock_irqrestore(&qedi->hba_lock, flags);
> +	}
> +
> +	resp_hdr = (struct iscsi_async *)&qedi_conn->gen_pdu.resp_hdr;
> +	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
> +	resp_hdr->opcode = cqe_async_msg->opcode;
> +	resp_hdr->flags = 0x80;
> +
> +	scsi_lun[0] = cpu_to_be32(cqe_async_msg->lun.lo);
> +	scsi_lun[1] = cpu_to_be32(cqe_async_msg->lun.hi);
I _think_ we already have a SCSI LUN structure ('struct scsi_lun') and common
helpers for it; better to use those than to open-code the conversion here ...
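For reference, 'struct scsi_lun' is just eight bytes in wire (big-endian)
order, and the two be32 words built above map onto it directly. A userspace
sketch of what the cpu_to_be32() + memcpy() sequence ends up storing (the
struct mirrors the kernel's definition; the helper name is mine):

```c
#include <stdint.h>

/* Mirror of the kernel's struct scsi_lun: 8 bytes in wire order. */
struct scsi_lun {
	uint8_t scsi_lun[8];
};

/* Portable stand-in for cpu_to_be32() + memcpy(): store two 32-bit
 * words into the LUN structure most-significant byte first. */
static void lun_from_words(struct scsi_lun *lun, uint32_t lo, uint32_t hi)
{
	int i;

	for (i = 0; i < 4; i++) {
		lun->scsi_lun[i] = (lo >> (24 - 8 * i)) & 0xff;
		lun->scsi_lun[4 + i] = (hi >> (24 - 8 * i)) & 0xff;
	}
}
```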

> +	memcpy(&resp_hdr->lun, scsi_lun, sizeof(struct scsi_lun));
> +	resp_hdr->exp_cmdsn = cpu_to_be32(cqe_async_msg->exp_cmd_sn);
> +	resp_hdr->max_cmdsn = cpu_to_be32(cqe_async_msg->max_cmd_sn);
> +	resp_hdr->statsn = cpu_to_be32(cqe_async_msg->stat_sn);
> +
> +	resp_hdr->async_event = cqe_async_msg->async_event;
> +	resp_hdr->async_vcode = cqe_async_msg->async_vcode;
> +
> +	resp_hdr->param1 = cpu_to_be16(cqe_async_msg->param1_rsrv);
> +	resp_hdr->param2 = cpu_to_be16(cqe_async_msg->param2_rsrv);
> +	resp_hdr->param3 = cpu_to_be16(cqe_async_msg->param3_rsrv);
> +
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, bdq_data,
> +			     pdu_len);
> +
> +	spin_unlock_bh(&session->back_lock);
> +}
> +
> +static void qedi_process_reject_mesg(struct qedi_ctx *qedi,
> +				     union iscsi_cqe *cqe,
> +				     struct iscsi_task *task,
> +				     struct qedi_conn *qedi_conn,
> +				     uint16_t que_idx)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_reject_hdr *cqe_reject;
> +	struct iscsi_reject *hdr;
> +	u32 pld_len, num_bdqs;
> +	unsigned long flags;
> +
> +	spin_lock_bh(&session->back_lock);
> +	cqe_reject = &cqe->cqe_common.iscsi_hdr.reject;
> +	pld_len = cqe_reject->hdr_second_dword &
> +		  ISCSI_REJECT_HDR_DATA_SEG_LEN_MASK;
> +	num_bdqs = pld_len / QEDI_BDQ_BUF_SIZE;
> +
> +	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
> +		spin_lock_irqsave(&qedi->hba_lock, flags);
> +		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
> +					  pld_len, num_bdqs, conn->data);
> +		spin_unlock_irqrestore(&qedi->hba_lock, flags);
> +	}
> +	hdr = (struct iscsi_reject *)&qedi_conn->gen_pdu.resp_hdr;
> +	memset(hdr, 0, sizeof(struct iscsi_hdr));
> +	hdr->opcode = cqe_reject->opcode;
> +	hdr->reason = cqe_reject->hdr_reason;
> +	hdr->flags = cqe_reject->hdr_flags;
> +	hton24(hdr->dlength, (cqe_reject->hdr_second_dword &
> +			      ISCSI_REJECT_HDR_DATA_SEG_LEN_MASK));
> +	hdr->max_cmdsn = cpu_to_be32(cqe_reject->max_cmd_sn);
> +	hdr->exp_cmdsn = cpu_to_be32(cqe_reject->exp_cmd_sn);
> +	hdr->statsn = cpu_to_be32(cqe_reject->stat_sn);
> +	hdr->ffffffff = cpu_to_be32(0xffffffff);
> +
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
> +			     conn->data, pld_len);
> +	spin_unlock_bh(&session->back_lock);
> +}
> +
> +static void qedi_mtask_completion(struct qedi_ctx *qedi,
> +				  union iscsi_cqe *cqe,
> +				  struct iscsi_task *task,
> +				  struct qedi_conn *conn, uint16_t que_idx)
> +{
> +	struct iscsi_conn *iscsi_conn;
> +	u32 hdr_opcode;
> +
> +	hdr_opcode = cqe->cqe_common.iscsi_hdr.common.hdr_first_byte;
> +	iscsi_conn = conn->cls_conn->dd_data;
> +
> +	switch (hdr_opcode) {
> +	case ISCSI_OPCODE_LOGIN_RESPONSE:
> +		qedi_process_login_resp(qedi, cqe, task, conn);
> +		break;
> +	case ISCSI_OPCODE_TEXT_RESPONSE:
> +		qedi_process_text_resp(qedi, cqe, task, conn);
> +		break;
> +	case ISCSI_OPCODE_LOGOUT_RESPONSE:
> +		qedi_process_logout_resp(qedi, cqe, task, conn);
> +		break;
> +	case ISCSI_OPCODE_NOP_IN:
> +		qedi_process_nopin_mesg(qedi, cqe, task, conn, que_idx);
> +		break;
> +	default:
> +		QEDI_ERR(&qedi->dbg_ctx, "unknown opcode\n");
> +	}
> +}
> +
> +static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
> +					  struct iscsi_cqe_solicited *cqe,
> +					  struct iscsi_task *task,
> +					  struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct qedi_cmd *cmd = task->dd_data;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_UNSOL,
> +		  "itid=0x%x, cmd task id=0x%x\n",
> +		  cqe->itid, cmd->task_id);
> +
> +	cmd->state = RESPONSE_RECEIVED;
> +	qedi_clear_task_idx(qedi, cmd->task_id);
> +
> +	spin_lock_bh(&session->back_lock);
> +	__iscsi_put_task(task);
> +	spin_unlock_bh(&session->back_lock);
> +}
> +
> +void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
> +			  uint16_t que_idx)
> +{
> +	struct iscsi_task *task = NULL;
> +	struct iscsi_nopout *nopout_hdr;
> +	struct qedi_conn *q_conn;
> +	struct iscsi_conn *conn;
> +	struct iscsi_task_context *fw_task_ctx;
> +	u32 comp_type;
> +	u32 iscsi_cid;
> +	u32 hdr_opcode;
> +	u32 ptmp_itt = 0;
> +	itt_t proto_itt = 0;
> +	u8 cqe_err_bits = 0;
> +
> +	comp_type = cqe->cqe_common.cqe_type;
> +	hdr_opcode = cqe->cqe_common.iscsi_hdr.common.hdr_first_byte;
> +	cqe_err_bits =
> +		cqe->cqe_common.error_bitmap.error_bits.cqe_error_status_bits;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "fw_cid=0x%x, cqe type=0x%x, opcode=0x%x\n",
> +		  cqe->cqe_common.conn_id, comp_type, hdr_opcode);
> +
> +	if (comp_type >= MAX_ISCSI_CQES_TYPE) {
> +		QEDI_WARN(&qedi->dbg_ctx, "Invalid CqE type\n");
> +		return;
> +	}
> +
> +	iscsi_cid  = cqe->cqe_common.conn_id;
> +	q_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
> +	if (!q_conn) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "Session no longer exists for cid=0x%x!!\n",
> +			  iscsi_cid);
> +		return;
> +	}
> +
> +	conn = q_conn->cls_conn->dd_data;
> +
> +	if (unlikely(cqe_err_bits &&
> +		     GET_FIELD(cqe_err_bits,
> +			       CQE_ERROR_BITMAP_DATA_DIGEST_ERR))) {
> +		iscsi_conn_failure(conn, ISCSI_ERR_DATA_DGST);
> +		return;
> +	}
> +
> +	switch (comp_type) {
> +	case ISCSI_CQE_TYPE_SOLICITED:
> +	case ISCSI_CQE_TYPE_SOLICITED_WITH_SENSE:
> +		fw_task_ctx =
> +		  (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
> +						      cqe->cqe_solicited.itid);
> +		if (fw_task_ctx->ystorm_st_context.state.local_comp == 1) {
> +			qedi_get_proto_itt(qedi, cqe->cqe_solicited.itid,
> +					   &ptmp_itt);
> +			proto_itt = build_itt(ptmp_itt, conn->session->age);
> +		} else {
> +			cqe->cqe_solicited.itid =
> +					    qedi_get_itt(cqe->cqe_solicited);
> +			proto_itt = build_itt(cqe->cqe_solicited.itid,
> +					      conn->session->age);
> +		}
> +
> +		spin_lock_bh(&conn->session->back_lock);
> +		task = iscsi_itt_to_task(conn, proto_itt);
> +		spin_unlock_bh(&conn->session->back_lock);
> +
> +		if (!task) {
> +			QEDI_WARN(&qedi->dbg_ctx, "task is NULL\n");
> +			return;
> +		}
> +
> +		/* Process NOPIN local completion */
> +		nopout_hdr = (struct iscsi_nopout *)task->hdr;
> +		if ((nopout_hdr->itt == RESERVED_ITT) &&
> +		    (cqe->cqe_solicited.itid != (u16)RESERVED_ITT))
> +			qedi_process_nopin_local_cmpl(qedi, &cqe->cqe_solicited,
> +						      task, q_conn);
> +		else
> +			/* Process other solicited responses */
> +			qedi_mtask_completion(qedi, cqe, task, q_conn, que_idx);
> +		break;
> +	case ISCSI_CQE_TYPE_UNSOLICITED:
> +		switch (hdr_opcode) {
> +		case ISCSI_OPCODE_NOP_IN:
> +			qedi_process_nopin_mesg(qedi, cqe, task, q_conn,
> +						que_idx);
> +			break;
> +		case ISCSI_OPCODE_ASYNC_MSG:
> +			qedi_process_async_mesg(qedi, cqe, task, q_conn,
> +						que_idx);
> +			break;
> +		case ISCSI_OPCODE_REJECT:
> +			qedi_process_reject_mesg(qedi, cqe, task, q_conn,
> +						 que_idx);
> +			break;
> +		}
> +		goto exit_fp_process;
> +	default:
> +		QEDI_ERR(&qedi->dbg_ctx, "Error cqe.\n");
> +		break;
> +	}
> +
> +exit_fp_process:
> +	return;
> +}
> +
> +static void qedi_add_to_sq(struct qedi_conn *qedi_conn, struct iscsi_task *task,
> +			   u16 tid, uint16_t ptu_invalidate, int is_cleanup)
> +{
> +	struct iscsi_wqe *wqe;
> +	struct iscsi_wqe_field *cont_field;
> +	struct qedi_endpoint *ep;
> +	struct scsi_cmnd *sc = task->sc;
> +	struct iscsi_login_req *login_hdr;
> +	struct qedi_cmd *cmd = task->dd_data;
> +
> +	login_hdr = (struct iscsi_login_req *)task->hdr;
> +	ep = qedi_conn->ep;
> +	wqe = &ep->sq[ep->sq_prod_idx];
> +
> +	memset(wqe, 0, sizeof(*wqe));
> +
> +	ep->sq_prod_idx++;
> +	ep->fw_sq_prod_idx++;
> +	if (ep->sq_prod_idx == QEDI_SQ_SIZE)
> +		ep->sq_prod_idx = 0;
> +
> +	if (is_cleanup) {
> +		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
> +			  ISCSI_WQE_TYPE_TASK_CLEANUP);
> +		wqe->task_id = tid;
> +		return;
> +	}
> +
> +	if (ptu_invalidate) {
> +		SET_FIELD(wqe->flags, ISCSI_WQE_PTU_INVALIDATE,
> +			  ISCSI_WQE_SET_PTU_INVALIDATE);
> +	}
> +
> +	cont_field = &wqe->cont_prevtid_union.cont_field;
> +
> +	switch (task->hdr->opcode & ISCSI_OPCODE_MASK) {
> +	case ISCSI_OP_LOGIN:
> +	case ISCSI_OP_TEXT:
> +		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
> +			  ISCSI_WQE_TYPE_MIDDLE_PATH);
> +		SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES,
> +			  1);
> +		cont_field->contlen_cdbsize_field = ntoh24(login_hdr->dlength);
> +		break;
> +	case ISCSI_OP_LOGOUT:
> +	case ISCSI_OP_NOOP_OUT:
> +	case ISCSI_OP_SCSI_TMFUNC:
> +		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
> +			  ISCSI_WQE_TYPE_NORMAL);
> +		break;
> +	default:
> +		if (!sc)
> +			break;
> +
> +		SET_FIELD(wqe->flags, ISCSI_WQE_WQE_TYPE,
> +			  ISCSI_WQE_TYPE_NORMAL);
> +		cont_field->contlen_cdbsize_field =
> +				(sc->sc_data_direction == DMA_TO_DEVICE) ?
> +				scsi_bufflen(sc) : 0;
> +		if (cmd->use_slowpath)
> +			SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES, 0);
> +		else
> +			SET_FIELD(wqe->flags, ISCSI_WQE_NUM_FAST_SGES,
> +				  (sc->sc_data_direction ==
> +				   DMA_TO_DEVICE) ?
> +				  min((u16)QEDI_FAST_SGE_COUNT,
> +				      (u16)cmd->io_tbl.sge_valid) : 0);
> +		break;
> +	}
> +
> +	wqe->task_id = tid;
> +	/* Make sure SQ data is coherent */
> +	wmb();
> +}
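qedi_add_to_sq() keeps two producer counters: sq_prod_idx wraps at
QEDI_SQ_SIZE to index the ring, while fw_sq_prod_idx runs free for the
doorbell write. The increment-and-wrap sequence in isolation (the ring depth
is an assumed value here, for illustration):

```c
#include <stdint.h>

#define QEDI_SQ_SIZE 128	/* assumed ring depth, for illustration */

struct sq_state {
	uint16_t sq_prod_idx;	/* next ring slot to fill; wraps */
	uint16_t fw_sq_prod_idx; /* free-running index given to firmware */
};

/* Advance both producers after queueing one WQE, mirroring the
 * increment-and-wrap sequence in qedi_add_to_sq(). */
static void sq_advance(struct sq_state *s)
{
	s->sq_prod_idx++;
	s->fw_sq_prod_idx++;
	if (s->sq_prod_idx == QEDI_SQ_SIZE)
		s->sq_prod_idx = 0;
}
```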
> +
> +static void qedi_ring_doorbell(struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_db_data dbell = { 0 };
> +
> +	dbell.agg_flags = 0;
> +
> +	dbell.params |= DB_DEST_XCM << ISCSI_DB_DATA_DEST_SHIFT;
> +	dbell.params |= DB_AGG_CMD_SET << ISCSI_DB_DATA_AGG_CMD_SHIFT;
> +	dbell.params |=
> +		   DQ_XCM_ISCSI_SQ_PROD_CMD << ISCSI_DB_DATA_AGG_VAL_SEL_SHIFT;
> +
> +	dbell.sq_prod = qedi_conn->ep->fw_sq_prod_idx;
> +	writel(*(u32 *)&dbell, qedi_conn->ep->p_doorbell);
> +	/* Make sure fw idx is coherent */
> +	wmb();
> +	mmiowb();
> +	QEDI_INFO(&qedi_conn->qedi->dbg_ctx, QEDI_LOG_MP_REQ,
> +		  "prod_idx=0x%x, fw_prod_idx=0x%x, cid=0x%x\n",
> +		  qedi_conn->ep->sq_prod_idx, qedi_conn->ep->fw_sq_prod_idx,
> +		  qedi_conn->iscsi_conn_id);
> +}
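qedi_ring_doorbell() packs three fields into dbell.params by shifting each
value to its field position before the single 32-bit write. The general
pattern, with made-up field offsets and mask (the real ISCSI_DB_DATA_* shifts
live in the qed firmware HSI headers, not here):

```c
#include <stdint.h>

/* Hypothetical field layout for illustration; the real shift/mask
 * values come from the qed firmware HSI definitions. */
#define DB_DEST_SHIFT		0
#define DB_AGG_CMD_SHIFT	2
#define DB_AGG_VAL_SEL_SHIFT	4
#define DB_FIELD_MASK		0x3

/* OR each field value into its position, as the driver does for
 * dbell.params before the doorbell write. */
static uint32_t db_pack(uint32_t dest, uint32_t agg_cmd, uint32_t val_sel)
{
	uint32_t params = 0;

	params |= dest << DB_DEST_SHIFT;
	params |= agg_cmd << DB_AGG_CMD_SHIFT;
	params |= val_sel << DB_AGG_VAL_SEL_SHIFT;
	return params;
}

/* Extract one field back out, for verification. */
static uint32_t db_unpack(uint32_t params, int shift)
{
	return (params >> shift) & DB_FIELD_MASK;
}
```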
> +
> +int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
> +			  struct iscsi_task *task)
> +{
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_task_context *fw_task_ctx;
> +	struct iscsi_login_req *login_hdr;
> +	struct iscsi_login_req_hdr *fw_login_req = NULL;
> +	struct iscsi_cached_sge_ctx *cached_sge = NULL;
> +	struct iscsi_sge *single_sge = NULL;
> +	struct iscsi_sge *req_sge = NULL;
> +	struct iscsi_sge *resp_sge = NULL;
> +	struct qedi_cmd *qedi_cmd;
> +	s16 ptu_invalidate = 0;
> +	s16 tid = 0;
> +
> +	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
> +	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
> +	qedi_cmd = (struct qedi_cmd *)task->dd_data;
> +	login_hdr = (struct iscsi_login_req *)task->hdr;
> +
> +	tid = qedi_get_task_idx(qedi);
> +	if (tid == -1)
> +		return -ENOMEM;
> +
> +	fw_task_ctx =
> +	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
> +
> +	qedi_cmd->task_id = tid;
> +
> +	/* Ystorm context */
> +	fw_login_req = &fw_task_ctx->ystorm_st_context.pdu_hdr.login_req;
> +	fw_login_req->opcode = login_hdr->opcode;
> +	fw_login_req->version_min = login_hdr->min_version;
> +	fw_login_req->version_max = login_hdr->max_version;
> +	fw_login_req->flags_attr = login_hdr->flags;
> +	fw_login_req->isid_tabc = *((u16 *)login_hdr->isid + 2);
> +	fw_login_req->isid_d = *((u32 *)login_hdr->isid);
> +	fw_login_req->tsih = login_hdr->tsih;
> +	qedi_update_itt_map(qedi, tid, task->itt);
> +	fw_login_req->itt = qedi_set_itt(tid, get_itt(task->itt));
> +	fw_login_req->cid = qedi_conn->iscsi_conn_id;
> +	fw_login_req->cmd_sn = be32_to_cpu(login_hdr->cmdsn);
> +	fw_login_req->exp_stat_sn = be32_to_cpu(login_hdr->exp_statsn);
> +
> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
> +		ptu_invalidate = 1;
> +		qedi->tid_reuse_count[tid] = 0;
> +	}
> +
> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
> +						qedi->tid_reuse_count[tid];
> +	fw_task_ctx->mstorm_st_context.reuse_count =
> +						qedi->tid_reuse_count[tid]++;
> +	cached_sge =
> +	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
> +	cached_sge->sge.sge_len = req_sge->sge_len;
> +	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
> +	cached_sge->sge.sge_addr.hi =
> +			     (u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
> +
> +	/* Mstorm context */
> +	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
> +	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
> +	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
> +	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
> +	single_sge->sge_len = resp_sge->sge_len;
> +
> +	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +		  ISCSI_MFLAGS_SINGLE_SGE, 1);
> +	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +		  ISCSI_MFLAGS_SLOW_IO, 0);
> +	fw_task_ctx->mstorm_st_context.sgl_size = 1;
> +	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
> +
> +	/* Ustorm context */
> +	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
> +	fw_task_ctx->ustorm_st_context.exp_data_transfer_len =
> +						ntoh24(login_hdr->dlength);
> +	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
> +	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
> +	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
> +	fw_task_ctx->ustorm_ag_context.exp_data_acked =
> +						 ntoh24(login_hdr->dlength);
> +	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
> +		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
> +		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
> +
> +	spin_lock(&qedi_conn->list_lock);
> +	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
> +	qedi_cmd->io_cmd_in_list = true;
> +	qedi_conn->active_cmd_count++;
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
> +	qedi_ring_doorbell(qedi_conn);
> +	return 0;
> +}
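qedi_send_iscsi_login() maps between the driver tid and the protocol ITT via
qedi_set_itt()/build_itt(), which combine a task index with the session age so
that an ITT left over from a previous session generation is rejected. A hedged
sketch of that scheme (the bit layout and helper names here are illustrative,
not the driver's actual encoding):

```c
#include <stdint.h>

#define ITT_AGE_SHIFT	16	/* illustrative layout: tid in the low
				 * 16 bits, session age above it */

/* Combine a task index and the session age into one opaque ITT. */
static uint32_t build_itt_sketch(uint16_t tid, uint16_t age)
{
	return ((uint32_t)age << ITT_AGE_SHIFT) | tid;
}

/* Recover the task index, checking that the embedded age still matches
 * the live session before trusting the tid. */
static int itt_to_tid(uint32_t itt, uint16_t cur_age, uint16_t *tid)
{
	if ((itt >> ITT_AGE_SHIFT) != cur_age)
		return -1;	/* stale ITT from an old session */
	*tid = itt & 0xffff;
	return 0;
}
```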
> +
> +int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
> +			   struct iscsi_task *task)
> +{
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_logout_req_hdr *fw_logout_req = NULL;
> +	struct iscsi_task_context *fw_task_ctx = NULL;
> +	struct iscsi_logout *logout_hdr = NULL;
> +	struct qedi_cmd *qedi_cmd = NULL;
> +	s16 tid = 0;
> +	s16 ptu_invalidate = 0;
> +
> +	qedi_cmd = (struct qedi_cmd *)task->dd_data;
> +	logout_hdr = (struct iscsi_logout *)task->hdr;
> +
> +	tid = qedi_get_task_idx(qedi);
> +	if (tid == -1)
> +		return -ENOMEM;
> +
> +	fw_task_ctx =
> +	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +
> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
> +	qedi_cmd->task_id = tid;
> +
> +	/* Ystorm context */
> +	fw_logout_req = &fw_task_ctx->ystorm_st_context.pdu_hdr.logout_req;
> +	fw_logout_req->opcode = ISCSI_OPCODE_LOGOUT_REQUEST;
> +	fw_logout_req->reason_code = 0x80 | logout_hdr->flags;
> +	qedi_update_itt_map(qedi, tid, task->itt);
> +	fw_logout_req->itt = qedi_set_itt(tid, get_itt(task->itt));
> +	fw_logout_req->exp_stat_sn = be32_to_cpu(logout_hdr->exp_statsn);
> +	fw_logout_req->cmd_sn = be32_to_cpu(logout_hdr->cmdsn);
> +
> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
> +		ptu_invalidate = 1;
> +		qedi->tid_reuse_count[tid] = 0;
> +	}
> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
> +						  qedi->tid_reuse_count[tid];
> +	fw_task_ctx->mstorm_st_context.reuse_count =
> +						qedi->tid_reuse_count[tid]++;
> +	fw_logout_req->cid = qedi_conn->iscsi_conn_id;
> +	fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
> +
> +	/* Mstorm context */
> +	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
> +
> +	/* Ustorm context */
> +	fw_task_ctx->ustorm_st_context.rem_rcv_len = 0;
> +	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = 0;
> +	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
> +	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
> +
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
> +		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +		  ISCSI_REG1_NUM_FAST_SGES, 0);
> +
> +	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
> +	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
> +		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
> +
> +	spin_lock(&qedi_conn->list_lock);
> +	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
> +	qedi_cmd->io_cmd_in_list = true;
> +	qedi_conn->active_cmd_count++;
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
> +	qedi_ring_doorbell(qedi_conn);
> +
> +	return 0;
> +}
> +
> +int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
> +			 struct iscsi_task *task)
> +{
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_task_context *fw_task_ctx;
> +	struct iscsi_text_request_hdr *fw_text_request;
> +	struct iscsi_cached_sge_ctx *cached_sge;
> +	struct iscsi_sge *single_sge;
> +	struct qedi_cmd *qedi_cmd;
> +	/* For 6.5 hdr iscsi_hdr */
> +	struct iscsi_text *text_hdr;
> +	struct iscsi_sge *req_sge;
> +	struct iscsi_sge *resp_sge;
> +	s16 ptu_invalidate = 0;
> +	s16 tid = 0;
> +
> +	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
> +	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
> +	qedi_cmd = (struct qedi_cmd *)task->dd_data;
> +	text_hdr = (struct iscsi_text *)task->hdr;
> +
> +	tid = qedi_get_task_idx(qedi);
> +	if (tid == -1)
> +		return -ENOMEM;
> +
> +	fw_task_ctx =
> +	(struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
> +
> +	qedi_cmd->task_id = tid;
> +
> +	/* Ystorm context */
> +	fw_text_request =
> +			&fw_task_ctx->ystorm_st_context.pdu_hdr.text_request;
> +	fw_text_request->opcode = text_hdr->opcode;
> +	fw_text_request->flags_attr = text_hdr->flags;
> +
> +	qedi_update_itt_map(qedi, tid, task->itt);
> +	fw_text_request->itt = qedi_set_itt(tid, get_itt(task->itt));
> +	fw_text_request->ttt = text_hdr->ttt;
> +	fw_text_request->cmd_sn = be32_to_cpu(text_hdr->cmdsn);
> +	fw_text_request->exp_stat_sn = be32_to_cpu(text_hdr->exp_statsn);
> +	fw_text_request->hdr_second_dword = ntoh24(text_hdr->dlength);
> +
> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
> +		ptu_invalidate = 1;
> +		qedi->tid_reuse_count[tid] = 0;
> +	}
> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
> +						     qedi->tid_reuse_count[tid];
> +	fw_task_ctx->mstorm_st_context.reuse_count =
> +						   qedi->tid_reuse_count[tid]++;
> +
> +	cached_sge =
> +	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
> +	cached_sge->sge.sge_len = req_sge->sge_len;
> +	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
> +	cached_sge->sge.sge_addr.hi =
> +			      (u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
> +
> +	/* Mstorm context */
> +	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
> +	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
> +	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
> +	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
> +	single_sge->sge_len = resp_sge->sge_len;
> +
> +	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +		  ISCSI_MFLAGS_SINGLE_SGE, 1);
> +	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +		  ISCSI_MFLAGS_SLOW_IO, 0);
> +	fw_task_ctx->mstorm_st_context.sgl_size = 1;
> +	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
> +
> +	/* Ustorm context */
> +	fw_task_ctx->ustorm_ag_context.exp_data_acked =
> +						      ntoh24(text_hdr->dlength);
> +	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
> +	fw_task_ctx->ustorm_st_context.exp_data_transfer_len =
> +						      ntoh24(text_hdr->dlength);
> +	fw_task_ctx->ustorm_st_context.exp_data_sn =
> +					      be32_to_cpu(text_hdr->exp_statsn);
> +	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
> +	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
> +	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
> +		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
> +
> +	/*  Add command in active command list */
> +	spin_lock(&qedi_conn->list_lock);
> +	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
> +	qedi_cmd->io_cmd_in_list = true;
> +	qedi_conn->active_cmd_count++;
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
> +	qedi_ring_doorbell(qedi_conn);
> +
> +	return 0;
> +}
> +
> +int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
> +			   struct iscsi_task *task,
> +			   char *datap, int data_len, int unsol)
> +{
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_task_context *fw_task_ctx;
> +	struct iscsi_nop_out_hdr *fw_nop_out;
> +	struct qedi_cmd *qedi_cmd;
> +	/* For 6.5 hdr iscsi_hdr */
> +	struct iscsi_nopout *nopout_hdr;
> +	struct iscsi_cached_sge_ctx *cached_sge;
> +	struct iscsi_sge *single_sge;
> +	struct iscsi_sge *req_sge;
> +	struct iscsi_sge *resp_sge;
> +	u32 scsi_lun[2];
> +	s16 ptu_invalidate = 0;
> +	s16 tid = 0;
> +
> +	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
> +	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
> +	qedi_cmd = (struct qedi_cmd *)task->dd_data;
> +	nopout_hdr = (struct iscsi_nopout *)task->hdr;
> +
> +	tid = qedi_get_task_idx(qedi);
> +	if (tid == -1) {
> +		QEDI_WARN(&qedi->dbg_ctx, "Invalid tid\n");
> +		return -ENOMEM;
> +	}
> +
> +	fw_task_ctx =
> +	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +
> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
> +	qedi_cmd->task_id = tid;
> +
> +	/* Ystorm context */
> +	fw_nop_out = &fw_task_ctx->ystorm_st_context.pdu_hdr.nop_out;
> +	SET_FIELD(fw_nop_out->flags_attr, ISCSI_NOP_OUT_HDR_CONST1, 1);
> +	SET_FIELD(fw_nop_out->flags_attr, ISCSI_NOP_OUT_HDR_RSRV, 0);
> +
> +	memcpy(scsi_lun, &nopout_hdr->lun, sizeof(struct scsi_lun));
> +	fw_nop_out->lun.lo = be32_to_cpu(scsi_lun[0]);
> +	fw_nop_out->lun.hi = be32_to_cpu(scsi_lun[1]);
> +
> +	qedi_update_itt_map(qedi, tid, task->itt);
> +
> +	if (nopout_hdr->ttt != ISCSI_TTT_ALL_ONES) {
> +		fw_nop_out->itt = be32_to_cpu(nopout_hdr->itt);
> +		fw_nop_out->ttt = be32_to_cpu(nopout_hdr->ttt);
> +		fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
> +		fw_task_ctx->ystorm_st_context.state.local_comp = 1;
> +		SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
> +			  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 1);
> +	} else {
> +		fw_nop_out->itt = qedi_set_itt(tid, get_itt(task->itt));
> +		fw_nop_out->ttt = ISCSI_TTT_ALL_ONES;
> +		fw_task_ctx->ystorm_st_context.state.buffer_offset[0] = 0;
> +
> +		spin_lock(&qedi_conn->list_lock);
> +		list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
> +		qedi_cmd->io_cmd_in_list = true;
> +		qedi_conn->active_cmd_count++;
> +		spin_unlock(&qedi_conn->list_lock);
> +	}
> +
> +	fw_nop_out->opcode = ISCSI_OPCODE_NOP_OUT;
> +	fw_nop_out->cmd_sn = be32_to_cpu(nopout_hdr->cmdsn);
> +	fw_nop_out->exp_stat_sn = be32_to_cpu(nopout_hdr->exp_statsn);
> +
> +	cached_sge =
> +	       &fw_task_ctx->ystorm_st_context.state.sgl_ctx_union.cached_sge;
> +	cached_sge->sge.sge_len = req_sge->sge_len;
> +	cached_sge->sge.sge_addr.lo = (u32)(qedi_conn->gen_pdu.req_dma_addr);
> +	cached_sge->sge.sge_addr.hi =
> +			(u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
> +
> +	/* Mstorm context */
> +	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
> +
> +	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
> +	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
> +	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
> +	single_sge->sge_len = resp_sge->sge_len;
> +	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
> +
> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
> +		ptu_invalidate = 1;
> +		qedi->tid_reuse_count[tid] = 0;
> +	}
> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
> +						qedi->tid_reuse_count[tid];
> +	fw_task_ctx->mstorm_st_context.reuse_count =
> +						qedi->tid_reuse_count[tid]++;
> +	/* Ustorm context */
> +	fw_task_ctx->ustorm_st_context.rem_rcv_len = resp_sge->sge_len;
> +	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = data_len;
> +	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
> +	fw_task_ctx->ustorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
> +
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +		  ISCSI_REG1_NUM_FAST_SGES, 0);
> +
> +	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
> +	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
> +		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
> +
> +	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
> +	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
> +
> +	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
> +	qedi_ring_doorbell(qedi_conn);
> +	return 0;
> +}
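[Reviewer note, not part of the patch: the SET_FIELD() calls above pack one-bit and multi-bit values into the firmware context words. For readers without the qed HSI headers handy, this is the usual mask/shift pattern; a stand-alone sketch under that assumption — the names and field widths here are illustrative, the real macro derives `_MASK`/`_SHIFT` from the field name by token pasting:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins; the real values come from the qed HSI headers. */
#define DEMO_FIELD_MASK  0x1u
#define DEMO_FIELD_SHIFT 3u

/* Clear the field in 'word', then write 'val' into it. */
static inline uint32_t demo_set_field(uint32_t word, uint32_t mask,
				      uint32_t shift, uint32_t val)
{
	word &= ~(mask << shift);	/* zero the target field */
	word |= (val & mask) << shift;	/* insert the new value */
	return word;
}
```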
> diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
> new file mode 100644
> index 0000000..85ea3d7
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_gbl.h
> @@ -0,0 +1,67 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#ifndef _QEDI_GBL_H_
> +#define _QEDI_GBL_H_
> +
> +#include "qedi_iscsi.h"
> +
> +extern uint io_tracing;
> +extern int do_not_recover;
> +extern struct scsi_host_template qedi_host_template;
> +extern struct iscsi_transport qedi_iscsi_transport;
> +extern const struct qed_iscsi_ops *qedi_ops;
> +extern struct qedi_debugfs_ops qedi_debugfs_ops;
> +extern const struct file_operations qedi_dbg_fops;
> +extern struct device_attribute *qedi_shost_attrs[];
> +
> +int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
> +void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
> +
> +int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
> +			  struct iscsi_task *task);
> +int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
> +			   struct iscsi_task *task);
> +int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
> +			 struct iscsi_task *task);
> +int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
> +			   struct iscsi_task *task,
> +			   char *datap, int data_len, int unsol);
> +int qedi_get_task_idx(struct qedi_ctx *qedi);
> +void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
> +int qedi_iscsi_cleanup_task(struct iscsi_task *task,
> +			    bool mark_cmd_node_deleted);
> +void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd);
> +void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt);
> +void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt);
> +void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid);
> +void qedi_process_iscsi_error(struct qedi_endpoint *ep,
> +			      struct async_data *data);
> +void qedi_start_conn_recovery(struct qedi_ctx *qedi,
> +			      struct qedi_conn *qedi_conn);
> +struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid);
> +void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data);
> +void qedi_mark_device_missing(struct iscsi_cls_session *cls_session);
> +void qedi_mark_device_available(struct iscsi_cls_session *cls_session);
> +void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu);
> +int qedi_recover_all_conns(struct qedi_ctx *qedi);
> +void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
> +			  uint16_t que_idx);
> +void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
> +		   u16 tid, int8_t direction);
> +int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
> +u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl);
> +void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id);
> +int qedi_create_sysfs_ctx_attr(struct qedi_ctx *qedi);
> +void qedi_remove_sysfs_ctx_attr(struct qedi_ctx *qedi);
> +void qedi_clearsq(struct qedi_ctx *qedi,
> +		  struct qedi_conn *qedi_conn,
> +		  struct iscsi_task *task);
> +
> +#endif
> diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
> new file mode 100644
> index 0000000..caecdb8
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_iscsi.c
> @@ -0,0 +1,1604 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#include <linux/blkdev.h>
> +#include <linux/etherdevice.h>
> +#include <linux/if_ether.h>
> +#include <linux/if_vlan.h>
> +#include <scsi/scsi_tcq.h>
> +
> +#include "qedi.h"
> +#include "qedi_iscsi.h"
> +#include "qedi_gbl.h"
> +
> +int qedi_recover_all_conns(struct qedi_ctx *qedi)
> +{
> +	struct qedi_conn *qedi_conn;
> +	int i;
> +
> +	for (i = 0; i < qedi->max_active_conns; i++) {
> +		qedi_conn = qedi_get_conn_from_id(qedi, i);
> +		if (!qedi_conn)
> +			continue;
> +
> +		qedi_start_conn_recovery(qedi, qedi_conn);
> +	}
> +
> +	return SUCCESS;
> +}
> +
> +static int qedi_eh_host_reset(struct scsi_cmnd *cmd)
> +{
> +	struct Scsi_Host *shost = cmd->device->host;
> +	struct qedi_ctx *qedi;
> +
> +	qedi = (struct qedi_ctx *)iscsi_host_priv(shost);
> +
> +	return qedi_recover_all_conns(qedi);
> +}
> +
> +struct scsi_host_template qedi_host_template = {
> +	.module = THIS_MODULE,
> +	.name = "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver",
> +	.proc_name = QEDI_MODULE_NAME,
> +	.queuecommand = iscsi_queuecommand,
> +	.eh_abort_handler = iscsi_eh_abort,
> +	.eh_device_reset_handler = iscsi_eh_device_reset,
> +	.eh_target_reset_handler = iscsi_eh_recover_target,
> +	.eh_host_reset_handler = qedi_eh_host_reset,
> +	.target_alloc = iscsi_target_alloc,
> +	.change_queue_depth = scsi_change_queue_depth,
> +	.can_queue = QEDI_MAX_ISCSI_TASK,
> +	.this_id = -1,
> +	.sg_tablesize = QEDI_ISCSI_MAX_BDS_PER_CMD,
> +	.max_sectors = 0xffff,
> +	.cmd_per_lun = 128,
> +	.use_clustering = ENABLE_CLUSTERING,
> +	.shost_attrs = qedi_shost_attrs,
> +};
> +
> +static void qedi_conn_free_login_resources(struct qedi_ctx *qedi,
> +					   struct qedi_conn *qedi_conn)
> +{
> +	if (qedi_conn->gen_pdu.resp_bd_tbl) {
> +		dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
> +				  qedi_conn->gen_pdu.resp_bd_tbl,
> +				  qedi_conn->gen_pdu.resp_bd_dma);
> +		qedi_conn->gen_pdu.resp_bd_tbl = NULL;
> +	}
> +
> +	if (qedi_conn->gen_pdu.req_bd_tbl) {
> +		dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
> +				  qedi_conn->gen_pdu.req_bd_tbl,
> +				  qedi_conn->gen_pdu.req_bd_dma);
> +		qedi_conn->gen_pdu.req_bd_tbl = NULL;
> +	}
> +
> +	if (qedi_conn->gen_pdu.resp_buf) {
> +		dma_free_coherent(&qedi->pdev->dev,
> +				  ISCSI_DEF_MAX_RECV_SEG_LEN,
> +				  qedi_conn->gen_pdu.resp_buf,
> +				  qedi_conn->gen_pdu.resp_dma_addr);
> +		qedi_conn->gen_pdu.resp_buf = NULL;
> +	}
> +
> +	if (qedi_conn->gen_pdu.req_buf) {
> +		dma_free_coherent(&qedi->pdev->dev,
> +				  ISCSI_DEF_MAX_RECV_SEG_LEN,
> +				  qedi_conn->gen_pdu.req_buf,
> +				  qedi_conn->gen_pdu.req_dma_addr);
> +		qedi_conn->gen_pdu.req_buf = NULL;
> +	}
> +}
> +
> +static int qedi_conn_alloc_login_resources(struct qedi_ctx *qedi,
> +					   struct qedi_conn *qedi_conn)
> +{
> +	qedi_conn->gen_pdu.req_buf =
> +		dma_alloc_coherent(&qedi->pdev->dev,
> +				   ISCSI_DEF_MAX_RECV_SEG_LEN,
> +				   &qedi_conn->gen_pdu.req_dma_addr,
> +				   GFP_KERNEL);
> +	if (!qedi_conn->gen_pdu.req_buf)
> +		goto login_req_buf_failure;
> +
> +	qedi_conn->gen_pdu.req_buf_size = 0;
> +	qedi_conn->gen_pdu.req_wr_ptr = qedi_conn->gen_pdu.req_buf;
> +
> +	qedi_conn->gen_pdu.resp_buf =
> +		dma_alloc_coherent(&qedi->pdev->dev,
> +				   ISCSI_DEF_MAX_RECV_SEG_LEN,
> +				   &qedi_conn->gen_pdu.resp_dma_addr,
> +				   GFP_KERNEL);
> +	if (!qedi_conn->gen_pdu.resp_buf)
> +		goto login_resp_buf_failure;
> +
> +	qedi_conn->gen_pdu.resp_buf_size = ISCSI_DEF_MAX_RECV_SEG_LEN;
> +	qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf;
> +
> +	qedi_conn->gen_pdu.req_bd_tbl =
> +		dma_alloc_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
> +				   &qedi_conn->gen_pdu.req_bd_dma, GFP_KERNEL);
> +	if (!qedi_conn->gen_pdu.req_bd_tbl)
> +		goto login_req_bd_tbl_failure;
> +
> +	qedi_conn->gen_pdu.resp_bd_tbl =
> +		dma_alloc_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
> +				   &qedi_conn->gen_pdu.resp_bd_dma,
> +				   GFP_KERNEL);
> +	if (!qedi_conn->gen_pdu.resp_bd_tbl)
> +		goto login_resp_bd_tbl_failure;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SESS,
> +		  "Allocation successful, cid=0x%x\n",
> +		  qedi_conn->iscsi_conn_id);
> +	return 0;
> +
> +login_resp_bd_tbl_failure:
> +	dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE,
> +			  qedi_conn->gen_pdu.req_bd_tbl,
> +			  qedi_conn->gen_pdu.req_bd_dma);
> +	qedi_conn->gen_pdu.req_bd_tbl = NULL;
> +
> +login_req_bd_tbl_failure:
> +	dma_free_coherent(&qedi->pdev->dev, ISCSI_DEF_MAX_RECV_SEG_LEN,
> +			  qedi_conn->gen_pdu.resp_buf,
> +			  qedi_conn->gen_pdu.resp_dma_addr);
> +	qedi_conn->gen_pdu.resp_buf = NULL;
> +login_resp_buf_failure:
> +	dma_free_coherent(&qedi->pdev->dev, ISCSI_DEF_MAX_RECV_SEG_LEN,
> +			  qedi_conn->gen_pdu.req_buf,
> +			  qedi_conn->gen_pdu.req_dma_addr);
> +	qedi_conn->gen_pdu.req_buf = NULL;
> +login_req_buf_failure:
> +	iscsi_conn_printk(KERN_ERR, qedi_conn->cls_conn->dd_data,
> +			  "login resource alloc failed!!\n");
> +	return -ENOMEM;
> +}
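[Reviewer note, not part of the patch: the failure labels above unwind in reverse allocation order, the standard kernel goto-ladder shape. A minimal user-space sketch of the same pattern, with malloc()/free() standing in for dma_alloc_coherent()/dma_free_coherent():]

```c
#include <assert.h>
#include <stdlib.h>

/* Returns 0 on success; on failure, frees anything already
 * allocated (in reverse order) and returns -1 with *a and *b NULL. */
static int demo_alloc_pair(void **a, void **b, size_t len)
{
	*a = malloc(len);
	if (!*a)
		goto fail_a;

	*b = malloc(len);
	if (!*b)
		goto fail_b;

	return 0;

fail_b:
	free(*a);	/* unwind in reverse allocation order */
	*a = NULL;
fail_a:
	return -1;
}
```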
> +
> +static void qedi_destroy_cmd_pool(struct qedi_ctx *qedi,
> +				  struct iscsi_session *session)
> +{
> +	int i;
> +
> +	for (i = 0; i < session->cmds_max; i++) {
> +		struct iscsi_task *task = session->cmds[i];
> +		struct qedi_cmd *cmd = task->dd_data;
> +
> +		if (cmd->io_tbl.sge_tbl)
> +			dma_free_coherent(&qedi->pdev->dev,
> +					  QEDI_ISCSI_MAX_BDS_PER_CMD *
> +					  sizeof(struct iscsi_sge),
> +					  cmd->io_tbl.sge_tbl,
> +					  cmd->io_tbl.sge_tbl_dma);
> +
> +		if (cmd->sense_buffer)
> +			dma_free_coherent(&qedi->pdev->dev,
> +					  SCSI_SENSE_BUFFERSIZE,
> +					  cmd->sense_buffer,
> +					  cmd->sense_buffer_dma);
> +	}
> +}
> +
> +static int qedi_alloc_sget(struct qedi_ctx *qedi, struct iscsi_session *session,
> +			   struct qedi_cmd *cmd)
> +{
> +	struct qedi_io_bdt *io = &cmd->io_tbl;
> +	struct iscsi_sge *sge;
> +
> +	io->sge_tbl = dma_alloc_coherent(&qedi->pdev->dev,
> +					 QEDI_ISCSI_MAX_BDS_PER_CMD *
> +					 sizeof(*sge),
> +					 &io->sge_tbl_dma, GFP_KERNEL);
> +	if (!io->sge_tbl) {
> +		iscsi_session_printk(KERN_ERR, session,
> +				     "Could not allocate BD table.\n");
> +		return -ENOMEM;
> +	}
> +
> +	io->sge_valid = 0;
> +	return 0;
> +}
> +
> +static int qedi_setup_cmd_pool(struct qedi_ctx *qedi,
> +			       struct iscsi_session *session)
> +{
> +	int i;
> +
> +	for (i = 0; i < session->cmds_max; i++) {
> +		struct iscsi_task *task = session->cmds[i];
> +		struct qedi_cmd *cmd = task->dd_data;
> +
> +		task->hdr = &cmd->hdr;
> +		task->hdr_max = sizeof(struct iscsi_hdr);
> +
> +		if (qedi_alloc_sget(qedi, session, cmd))
> +			goto free_sgets;
> +
> +		cmd->sense_buffer = dma_alloc_coherent(&qedi->pdev->dev,
> +						       SCSI_SENSE_BUFFERSIZE,
> +						       &cmd->sense_buffer_dma,
> +						       GFP_KERNEL);
> +		if (!cmd->sense_buffer)
> +			goto free_sgets;
> +	}
> +
> +	return 0;
> +
> +free_sgets:
> +	qedi_destroy_cmd_pool(qedi, session);
> +	return -ENOMEM;
> +}
> +
> +static struct iscsi_cls_session *
> +qedi_session_create(struct iscsi_endpoint *ep, u16 cmds_max,
> +		    u16 qdepth, uint32_t initial_cmdsn)
> +{
> +	struct Scsi_Host *shost;
> +	struct iscsi_cls_session *cls_session;
> +	struct qedi_ctx *qedi;
> +	struct qedi_endpoint *qedi_ep;
> +
> +	if (!ep)
> +		return NULL;
> +
> +	qedi_ep = ep->dd_data;
> +	shost = qedi_ep->qedi->shost;
> +	qedi = iscsi_host_priv(shost);
> +
> +	if (cmds_max > qedi->max_sqes)
> +		cmds_max = qedi->max_sqes;
> +	else if (cmds_max < QEDI_SQ_WQES_MIN)
> +		cmds_max = QEDI_SQ_WQES_MIN;
> +
> +	cls_session = iscsi_session_setup(&qedi_iscsi_transport, shost,
> +					  cmds_max, 0, sizeof(struct qedi_cmd),
> +					  initial_cmdsn, ISCSI_MAX_TARGET);
> +	if (!cls_session) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Failed to setup session for ep=%p\n", qedi_ep);
> +		return NULL;
> +	}
> +
> +	if (qedi_setup_cmd_pool(qedi, cls_session->dd_data)) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Failed to setup cmd pool for ep=%p\n", qedi_ep);
> +		goto session_teardown;
> +	}
> +
> +	return cls_session;
> +
> +session_teardown:
> +	iscsi_session_teardown(cls_session);
> +	return NULL;
> +}
> +
> +static void qedi_session_destroy(struct iscsi_cls_session *cls_session)
> +{
> +	struct iscsi_session *session = cls_session->dd_data;
> +	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
> +	struct qedi_ctx *qedi = iscsi_host_priv(shost);
> +
> +	qedi_destroy_cmd_pool(qedi, session);
> +	iscsi_session_teardown(cls_session);
> +}
> +
> +static struct iscsi_cls_conn *
> +qedi_conn_create(struct iscsi_cls_session *cls_session, uint32_t cid)
> +{
> +	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
> +	struct qedi_ctx *qedi = iscsi_host_priv(shost);
> +	struct iscsi_cls_conn *cls_conn;
> +	struct qedi_conn *qedi_conn;
> +	struct iscsi_conn *conn;
> +
> +	cls_conn = iscsi_conn_setup(cls_session, sizeof(*qedi_conn),
> +				    cid);
> +	if (!cls_conn) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "conn_new: iscsi conn setup failed, cid=0x%x, cls_sess=%p!\n",
> +			 cid, cls_session);
> +		return NULL;
> +	}
> +
> +	conn = cls_conn->dd_data;
> +	qedi_conn = conn->dd_data;
> +	qedi_conn->cls_conn = cls_conn;
> +	qedi_conn->qedi = qedi;
> +	qedi_conn->ep = NULL;
> +	qedi_conn->active_cmd_count = 0;
> +	INIT_LIST_HEAD(&qedi_conn->active_cmd_list);
> +	spin_lock_init(&qedi_conn->list_lock);
> +
> +	if (qedi_conn_alloc_login_resources(qedi, qedi_conn)) {
> +		iscsi_conn_printk(KERN_ALERT, conn,
> +				  "conn_new: login resc alloc failed, cid=0x%x, cls_sess=%p!!\n",
> +				   cid, cls_session);
> +		goto free_conn;
> +	}
> +
> +	return cls_conn;
> +
> +free_conn:
> +	iscsi_conn_teardown(cls_conn);
> +	return NULL;
> +}
> +
> +void qedi_mark_device_missing(struct iscsi_cls_session *cls_session)
> +{
> +	iscsi_block_session(cls_session);
> +}
> +
> +void qedi_mark_device_available(struct iscsi_cls_session *cls_session)
> +{
> +	iscsi_unblock_session(cls_session);
> +}
> +
> +static int qedi_bind_conn_to_iscsi_cid(struct qedi_ctx *qedi,
> +				       struct qedi_conn *qedi_conn)
> +{
> +	u32 iscsi_cid = qedi_conn->iscsi_conn_id;
> +
> +	if (qedi->cid_que.conn_cid_tbl[iscsi_cid]) {
> +		iscsi_conn_printk(KERN_ALERT, qedi_conn->cls_conn->dd_data,
> +				  "conn bind - entry #%d not free\n",
> +				  iscsi_cid);
> +		return -EBUSY;
> +	}
> +
> +	qedi->cid_que.conn_cid_tbl[iscsi_cid] = qedi_conn;
> +	return 0;
> +}
> +
> +struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid)
> +{
> +	if (!qedi->cid_que.conn_cid_tbl) {
> +		QEDI_ERR(&qedi->dbg_ctx, "missing conn<->cid table\n");
> +		return NULL;
> +
> +	} else if (iscsi_cid >= qedi->max_active_conns) {
> +		QEDI_ERR(&qedi->dbg_ctx, "wrong cid #%d\n", iscsi_cid);
> +		return NULL;
> +	}
> +	return qedi->cid_que.conn_cid_tbl[iscsi_cid];
> +}
> +
> +static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
> +			  struct iscsi_cls_conn *cls_conn,
> +			  u64 transport_fd, int is_leading)
> +{
> +	struct iscsi_conn *conn = cls_conn->dd_data;
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
> +	struct qedi_ctx *qedi = iscsi_host_priv(shost);
> +	struct qedi_endpoint *qedi_ep;
> +	struct iscsi_endpoint *ep;
> +
> +	ep = iscsi_lookup_endpoint(transport_fd);
> +	if (!ep)
> +		return -EINVAL;
> +
> +	qedi_ep = ep->dd_data;
> +	if ((qedi_ep->state == EP_STATE_TCP_FIN_RCVD) ||
> +	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD))
> +		return -EINVAL;
> +
> +	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
> +		return -EINVAL;
> +
> +	qedi_ep->conn = qedi_conn;
> +	qedi_conn->ep = qedi_ep;
> +	qedi_conn->iscsi_conn_id = qedi_ep->iscsi_cid;
> +	qedi_conn->fw_cid = qedi_ep->fw_cid;
> +	qedi_conn->cmd_cleanup_req = 0;
> +	qedi_conn->cmd_cleanup_cmpl = 0;
> +
> +	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn))
> +		return -EINVAL;
> +
> +	spin_lock_init(&qedi_conn->tmf_work_lock);
> +	INIT_LIST_HEAD(&qedi_conn->tmf_work_list);
> +	init_waitqueue_head(&qedi_conn->wait_queue);
> +	return 0;
> +}
> +
> +static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
> +				  struct qedi_conn *qedi_conn)
> +{
> +	struct qed_iscsi_params_update *conn_info;
> +	struct iscsi_cls_conn *cls_conn = qedi_conn->cls_conn;
> +	struct iscsi_conn *conn = cls_conn->dd_data;
> +	struct qedi_endpoint *qedi_ep;
> +	int rval;
> +
> +	qedi_ep = qedi_conn->ep;
> +
> +	conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
> +	if (!conn_info) {
> +		QEDI_ERR(&qedi->dbg_ctx, "memory alloc failed\n");
> +		return -ENOMEM;
> +	}
> +
> +	conn_info->update_flag = 0;
> +
> +	if (conn->hdrdgst_en)
> +		SET_FIELD(conn_info->update_flag,
> +			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_HD_EN, true);
> +	if (conn->datadgst_en)
> +		SET_FIELD(conn_info->update_flag,
> +			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_DD_EN, true);
> +	if (conn->session->initial_r2t_en)
> +		SET_FIELD(conn_info->update_flag,
> +			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_INITIAL_R2T,
> +			  true);
> +	if (conn->session->imm_data_en)
> +		SET_FIELD(conn_info->update_flag,
> +			  ISCSI_CONN_UPDATE_RAMROD_PARAMS_IMMEDIATE_DATA,
> +			  true);
> +
> +	conn_info->max_seq_size = conn->session->max_burst;
> +	conn_info->max_recv_pdu_length = conn->max_recv_dlength;
> +	conn_info->max_send_pdu_length = conn->max_xmit_dlength;
> +	conn_info->first_seq_length = conn->session->first_burst;
> +	conn_info->exp_stat_sn = conn->exp_statsn;
> +
> +	rval = qedi_ops->update_conn(qedi->cdev, qedi_ep->handle,
> +				     conn_info);
> +	if (rval) {
> +		rval = -ENXIO;
> +		QEDI_ERR(&qedi->dbg_ctx, "Could not update connection\n");
> +		goto update_conn_err;
> +	}
> +
> +	kfree(conn_info);
> +	rval = 0;
> +
> +update_conn_err:
> +	return rval;
> +}
> +
> +static u16 qedi_calc_mss(u16 pmtu, u8 is_ipv6, u8 tcp_ts_en, u8 vlan_en)
> +{
> +	u16 mss = 0;
> +	u16 hdrs = TCP_HDR_LEN;
> +
> +	if (is_ipv6)
> +		hdrs += IPV6_HDR_LEN;
> +	else
> +		hdrs += IPV4_HDR_LEN;
> +
> +	if (vlan_en)
> +		hdrs += VLAN_LEN;
> +
> +	mss = pmtu - hdrs;
> +
> +	if (tcp_ts_en)
> +		mss -= TCP_OPTION_LEN;
> +
> +	if (!mss)
> +		mss = DEF_MSS;
> +
> +	return mss;
> +}
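[Reviewer note, not part of the patch: with the header-size constants this driver appears to use (TCP 20, IPv4 20, IPv6 40, VLAN tag 4, timestamp option 12 — assumed values, to be checked against qedi.h), an IPv4 path MTU of 1500 with timestamps and no VLAN gives 1500 - 20 - 20 - 12 = 1448. A user-space mirror of the function for checking the arithmetic:]

```c
#include <assert.h>
#include <stdint.h>

/* Assumed constants mirroring qedi.h; verify against the actual header. */
#define TCP_HDR_LEN    20
#define IPV4_HDR_LEN   20
#define IPV6_HDR_LEN   40
#define VLAN_LEN        4
#define TCP_OPTION_LEN 12
#define DEF_MSS      1460

static uint16_t demo_calc_mss(uint16_t pmtu, int is_ipv6, int tcp_ts_en,
			      int vlan_en)
{
	uint16_t hdrs = TCP_HDR_LEN;
	uint16_t mss;

	hdrs += is_ipv6 ? IPV6_HDR_LEN : IPV4_HDR_LEN;
	if (vlan_en)
		hdrs += VLAN_LEN;

	mss = pmtu - hdrs;

	if (tcp_ts_en)
		mss -= TCP_OPTION_LEN;

	if (!mss)		/* fall back only on an exactly-zero result */
		mss = DEF_MSS;

	return mss;
}
```

One thing worth noting in review: the `!mss` guard only catches an exact zero; a pmtu smaller than the header sum would wrap the u16 rather than fall back to DEF_MSS.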
> +
> +static int qedi_iscsi_offload_conn(struct qedi_endpoint *qedi_ep)
> +{
> +	struct qedi_ctx *qedi = qedi_ep->qedi;
> +	struct qed_iscsi_params_offload *conn_info;
> +	int rval;
> +	int i;
> +
> +	conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
> +	if (!conn_info) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Failed to allocate memory ep=%p\n", qedi_ep);
> +		return -ENOMEM;
> +	}
> +
> +	ether_addr_copy(conn_info->src.mac, qedi_ep->src_mac);
> +	ether_addr_copy(conn_info->dst.mac, qedi_ep->dst_mac);
> +
> +	conn_info->src.ip[0] = ntohl(qedi_ep->src_addr[0]);
> +	conn_info->dst.ip[0] = ntohl(qedi_ep->dst_addr[0]);
> +
> +	if (qedi_ep->ip_type == TCP_IPV4) {
> +		conn_info->ip_version = 0;
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "After ntohl: src_addr=%pI4, dst_addr=%pI4\n",
> +			  qedi_ep->src_addr, qedi_ep->dst_addr);
> +	} else {
> +		for (i = 1; i < 4; i++) {
> +			conn_info->src.ip[i] = ntohl(qedi_ep->src_addr[i]);
> +			conn_info->dst.ip[i] = ntohl(qedi_ep->dst_addr[i]);
> +		}
> +
> +		conn_info->ip_version = 1;
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "After ntohl: src_addr=%pI6, dst_addr=%pI6\n",
> +			  qedi_ep->src_addr, qedi_ep->dst_addr);
> +	}
> +
> +	conn_info->src.port = qedi_ep->src_port;
> +	conn_info->dst.port = qedi_ep->dst_port;
> +
> +	conn_info->layer_code = ISCSI_SLOW_PATH_LAYER_CODE;
> +	conn_info->sq_pbl_addr = qedi_ep->sq_pbl_dma;
> +	conn_info->vlan_id = qedi_ep->vlan_id;
> +
> +	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_TS_EN, 1);
> +	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_DA_EN, 1);
> +	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_DA_CNT_EN, 1);
> +	SET_FIELD(conn_info->tcp_flags, TCP_OFFLOAD_PARAMS_KA_EN, 1);
> +
> +	conn_info->default_cq = (qedi_ep->fw_cid % 8);
> +
> +	conn_info->ka_max_probe_cnt = DEF_KA_MAX_PROBE_COUNT;
> +	conn_info->dup_ack_theshold = 3;
> +	conn_info->rcv_wnd = 65535;
> +	conn_info->cwnd = DEF_MAX_CWND;
> +
> +	conn_info->ss_thresh = 65535;
> +	conn_info->srtt = 300;
> +	conn_info->rtt_var = 150;
> +	conn_info->flow_label = 0;
> +	conn_info->ka_timeout = DEF_KA_TIMEOUT;
> +	conn_info->ka_interval = DEF_KA_INTERVAL;
> +	conn_info->max_rt_time = DEF_MAX_RT_TIME;
> +	conn_info->ttl = DEF_TTL;
> +	conn_info->tos_or_tc = DEF_TOS;
> +	conn_info->remote_port = qedi_ep->dst_port;
> +	conn_info->local_port = qedi_ep->src_port;
> +
> +	conn_info->mss = qedi_calc_mss(qedi_ep->pmtu,
> +				       (qedi_ep->ip_type == TCP_IPV6),
> +				       1, (qedi_ep->vlan_id != 0));
> +
> +	conn_info->rcv_wnd_scale = 4;
> +	conn_info->ts_ticks_per_second = 1000;
> +	conn_info->da_timeout_value = 200;
> +	conn_info->ack_frequency = 2;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +		  "Default cq index [%d], mss [%d]\n",
> +		  conn_info->default_cq, conn_info->mss);
> +
> +	rval = qedi_ops->offload_conn(qedi->cdev, qedi_ep->handle, conn_info);
> +	if (rval)
> +		QEDI_ERR(&qedi->dbg_ctx, "offload_conn returned %d, ep=%p\n",
> +			 rval, qedi_ep);
> +
> +	kfree(conn_info);
> +	return rval;
> +}
> +
> +static int qedi_conn_start(struct iscsi_cls_conn *cls_conn)
> +{
> +	struct iscsi_conn *conn = cls_conn->dd_data;
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct qedi_ctx *qedi;
> +	int rval;
> +
> +	qedi = qedi_conn->qedi;
> +
> +	rval = qedi_iscsi_update_conn(qedi, qedi_conn);
> +	if (rval) {
> +		iscsi_conn_printk(KERN_ALERT, conn,
> +				  "conn_start: FW connection update failed.\n");
> +		rval = -EINVAL;
> +		goto start_err;
> +	}
> +
> +	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +	qedi_conn->abrt_conn = 0;
> +
> +	rval = iscsi_conn_start(cls_conn);
> +	if (rval) {
> +		iscsi_conn_printk(KERN_ALERT, conn,
> +				  "iscsi_conn_start failed\n");
> +	}
> +
> +start_err:
> +	return rval;
> +}
> +
> +static void qedi_conn_destroy(struct iscsi_cls_conn *cls_conn)
> +{
> +	struct iscsi_conn *conn = cls_conn->dd_data;
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct Scsi_Host *shost;
> +	struct qedi_ctx *qedi;
> +
> +	shost = iscsi_session_to_shost(iscsi_conn_to_session(cls_conn));
> +	qedi = iscsi_host_priv(shost);
> +
> +	qedi_conn_free_login_resources(qedi, qedi_conn);
> +	iscsi_conn_teardown(cls_conn);
> +}
> +
> +static int qedi_ep_get_param(struct iscsi_endpoint *ep,
> +			     enum iscsi_param param, char *buf)
> +{
> +	struct qedi_endpoint *qedi_ep = ep->dd_data;
> +	int len;
> +
> +	if (!qedi_ep)
> +		return -ENOTCONN;
> +
> +	switch (param) {
> +	case ISCSI_PARAM_CONN_PORT:
> +		len = sprintf(buf, "%hu\n", qedi_ep->dst_port);
> +		break;
> +	case ISCSI_PARAM_CONN_ADDRESS:
> +		if (qedi_ep->ip_type == TCP_IPV4)
> +			len = sprintf(buf, "%pI4\n", qedi_ep->dst_addr);
> +		else
> +			len = sprintf(buf, "%pI6\n", qedi_ep->dst_addr);
> +		break;
> +	default:
> +		return -ENOTCONN;
> +	}
> +
> +	return len;
> +}
> +
> +static int qedi_host_get_param(struct Scsi_Host *shost,
> +			       enum iscsi_host_param param, char *buf)
> +{
> +	struct qedi_ctx *qedi;
> +	int len;
> +
> +	qedi = iscsi_host_priv(shost);
> +
> +	switch (param) {
> +	case ISCSI_HOST_PARAM_HWADDRESS:
> +		len = sysfs_format_mac(buf, qedi->mac, 6);
> +		break;
> +	case ISCSI_HOST_PARAM_NETDEV_NAME:
> +		len = sprintf(buf, "host%d\n", shost->host_no);
> +		break;
> +	case ISCSI_HOST_PARAM_IPADDRESS:
> +		if (qedi->ip_type == TCP_IPV4)
> +			len = sprintf(buf, "%pI4\n", qedi->src_ip);
> +		else
> +			len = sprintf(buf, "%pI6\n", qedi->src_ip);
> +		break;
> +	default:
> +		return iscsi_host_get_param(shost, param, buf);
> +	}
> +
> +	return len;
> +}
> +
> +static void qedi_conn_get_stats(struct iscsi_cls_conn *cls_conn,
> +				struct iscsi_stats *stats)
> +{
> +	struct iscsi_conn *conn = cls_conn->dd_data;
> +	struct qed_iscsi_stats iscsi_stats;
> +	struct Scsi_Host *shost;
> +	struct qedi_ctx *qedi;
> +
> +	shost = iscsi_session_to_shost(iscsi_conn_to_session(cls_conn));
> +	qedi = iscsi_host_priv(shost);
> +	qedi_ops->get_stats(qedi->cdev, &iscsi_stats);
> +
> +	conn->txdata_octets = iscsi_stats.iscsi_tx_bytes_cnt;
> +	conn->rxdata_octets = iscsi_stats.iscsi_rx_bytes_cnt;
> +	conn->dataout_pdus_cnt = (uint32_t)iscsi_stats.iscsi_tx_data_pdu_cnt;
> +	conn->datain_pdus_cnt = (uint32_t)iscsi_stats.iscsi_rx_data_pdu_cnt;
> +	conn->r2t_pdus_cnt = (uint32_t)iscsi_stats.iscsi_rx_r2t_pdu_cnt;
> +
> +	stats->txdata_octets = conn->txdata_octets;
> +	stats->rxdata_octets = conn->rxdata_octets;
> +	stats->scsicmd_pdus = conn->scsicmd_pdus_cnt;
> +	stats->dataout_pdus = conn->dataout_pdus_cnt;
> +	stats->scsirsp_pdus = conn->scsirsp_pdus_cnt;
> +	stats->datain_pdus = conn->datain_pdus_cnt;
> +	stats->r2t_pdus = conn->r2t_pdus_cnt;
> +	stats->tmfcmd_pdus = conn->tmfcmd_pdus_cnt;
> +	stats->tmfrsp_pdus = conn->tmfrsp_pdus_cnt;
> +	stats->digest_err = 0;
> +	stats->timeout_err = 0;
> +	strcpy(stats->custom[0].desc, "eh_abort_cnt");
> +	stats->custom[0].value = conn->eh_abort_cnt;
> +	stats->custom_length = 1;
> +}
> +
> +static void qedi_iscsi_prep_generic_pdu_bd(struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_sge *bd_tbl;
> +
> +	bd_tbl = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
> +
> +	bd_tbl->sge_addr.hi =
> +		(u32)((u64)qedi_conn->gen_pdu.req_dma_addr >> 32);
> +	bd_tbl->sge_addr.lo = (u32)qedi_conn->gen_pdu.req_dma_addr;
> +	bd_tbl->sge_len = qedi_conn->gen_pdu.req_wr_ptr -
> +				qedi_conn->gen_pdu.req_buf;
> +	bd_tbl->reserved0 = 0;
> +	bd_tbl = (struct iscsi_sge  *)qedi_conn->gen_pdu.resp_bd_tbl;
> +	bd_tbl->sge_addr.hi =
> +			(u32)((u64)qedi_conn->gen_pdu.resp_dma_addr >> 32);
> +	bd_tbl->sge_addr.lo = (u32)qedi_conn->gen_pdu.resp_dma_addr;
> +	bd_tbl->sge_len = ISCSI_DEF_MAX_RECV_SEG_LEN;
> +	bd_tbl->reserved0 = 0;
> +}
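[Reviewer note, not part of the patch: here, as in the nopout path's cached SGE setup, a 64-bit DMA address is split into low/high 32-bit halves for the firmware SGE. A trivial stand-alone equivalent of those casts:]

```c
#include <assert.h>
#include <stdint.h>

static inline uint32_t demo_addr_lo(uint64_t addr)
{
	return (uint32_t)addr;		/* low 32 bits */
}

static inline uint32_t demo_addr_hi(uint64_t addr)
{
	return (uint32_t)(addr >> 32);	/* high 32 bits */
}
```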
> +
> +static int qedi_iscsi_send_generic_request(struct iscsi_task *task)
> +{
> +	struct qedi_cmd *cmd = task->dd_data;
> +	struct qedi_conn *qedi_conn = cmd->conn;
> +	char *buf;
> +	int data_len;
> +	int rc = 0;
> +
> +	qedi_iscsi_prep_generic_pdu_bd(qedi_conn);
> +	switch (task->hdr->opcode & ISCSI_OPCODE_MASK) {
> +	case ISCSI_OP_LOGIN:
> +		qedi_send_iscsi_login(qedi_conn, task);
> +		break;
> +	case ISCSI_OP_NOOP_OUT:
> +		data_len = qedi_conn->gen_pdu.req_buf_size;
> +		buf = qedi_conn->gen_pdu.req_buf;
> +		if (data_len)
> +			rc = qedi_send_iscsi_nopout(qedi_conn, task,
> +						    buf, data_len, 1);
> +		else
> +			rc = qedi_send_iscsi_nopout(qedi_conn, task,
> +						    NULL, 0, 1);
> +		break;
> +	case ISCSI_OP_LOGOUT:
> +		rc = qedi_send_iscsi_logout(qedi_conn, task);
> +		break;
> +	case ISCSI_OP_TEXT:
> +		rc = qedi_send_iscsi_text(qedi_conn, task);
> +		break;
> +	default:
> +		iscsi_conn_printk(KERN_ALERT, qedi_conn->cls_conn->dd_data,
> +				  "unsupported op 0x%x\n", task->hdr->opcode);
> +	}
> +
> +	return rc;
> +}
> +
> +static int qedi_mtask_xmit(struct iscsi_conn *conn, struct iscsi_task *task)
> +{
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct qedi_cmd *cmd = task->dd_data;
> +
> +	memset(qedi_conn->gen_pdu.req_buf, 0, ISCSI_DEF_MAX_RECV_SEG_LEN);
> +
> +	qedi_conn->gen_pdu.req_buf_size = task->data_count;
> +
> +	if (task->data_count) {
> +		memcpy(qedi_conn->gen_pdu.req_buf, task->data,
> +		       task->data_count);
> +		qedi_conn->gen_pdu.req_wr_ptr =
> +			qedi_conn->gen_pdu.req_buf + task->data_count;
> +	}
> +
> +	cmd->conn = conn->dd_data;
> +	cmd->scsi_cmd = NULL;
> +	return qedi_iscsi_send_generic_request(task);
> +}
> +
> +static int qedi_task_xmit(struct iscsi_task *task)
> +{
> +	struct iscsi_conn *conn = task->conn;
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct qedi_cmd *cmd = task->dd_data;
> +	struct scsi_cmnd *sc = task->sc;
> +
> +	cmd->state = 0;
> +	cmd->task = NULL;
> +	cmd->use_slowpath = false;
> +	cmd->conn = qedi_conn;
> +	cmd->task = task;
> +	cmd->io_cmd_in_list = false;
> +	INIT_LIST_HEAD(&cmd->io_cmd);
> +
> +	if (!sc)
> +		return qedi_mtask_xmit(conn, task);
> +}
> +
> +static struct iscsi_endpoint *
> +qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
> +		int non_blocking)
> +{
> +	struct qedi_ctx *qedi;
> +	struct iscsi_endpoint *ep;
> +	struct qedi_endpoint *qedi_ep;
> +	struct sockaddr_in *addr;
> +	struct sockaddr_in6 *addr6;
> +	struct qed_dev *cdev = NULL;
> +	struct qedi_uio_dev *udev = NULL;
> +	struct iscsi_path path_req;
> +	u32 msg_type = ISCSI_KEVENT_IF_DOWN;
> +	u32 iscsi_cid = QEDI_CID_RESERVED;
> +	u16 len = 0;
> +	char *buf = NULL;
> +	int ret;
> +
> +	if (!shost) {
> +		ret = -ENXIO;
> +		QEDI_ERR(NULL, "shost is NULL\n");
> +		return ERR_PTR(ret);
> +	}
> +
> +	if (do_not_recover) {
> +		ret = -ENOMEM;
> +		return ERR_PTR(ret);
> +	}
> +
> +	qedi = iscsi_host_priv(shost);
> +	cdev = qedi->cdev;
> +	udev = qedi->udev;
> +
> +	if (test_bit(QEDI_IN_OFFLINE, &qedi->flags) ||
> +	    test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
> +		ret = -ENOMEM;
> +		return ERR_PTR(ret);
> +	}
> +
> +	ep = iscsi_create_endpoint(sizeof(struct qedi_endpoint));
> +	if (!ep) {
> +		QEDI_ERR(&qedi->dbg_ctx, "endpoint create fail\n");
> +		ret = -ENOMEM;
> +		return ERR_PTR(ret);
> +	}
> +	qedi_ep = ep->dd_data;
> +	memset(qedi_ep, 0, sizeof(struct qedi_endpoint));
> +	qedi_ep->state = EP_STATE_IDLE;
> +	qedi_ep->iscsi_cid = (u32)-1;
> +	qedi_ep->qedi = qedi;
> +
> +	if (dst_addr->sa_family == AF_INET) {
> +		addr = (struct sockaddr_in *)dst_addr;
> +		memcpy(qedi_ep->dst_addr, &addr->sin_addr.s_addr,
> +		       sizeof(struct in_addr));
> +		qedi_ep->dst_port = ntohs(addr->sin_port);
> +		qedi_ep->ip_type = TCP_IPV4;
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "dst_addr=%pI4, dst_port=%u\n",
> +			  qedi_ep->dst_addr, qedi_ep->dst_port);
> +	} else if (dst_addr->sa_family == AF_INET6) {
> +		addr6 = (struct sockaddr_in6 *)dst_addr;
> +		memcpy(qedi_ep->dst_addr, &addr6->sin6_addr,
> +		       sizeof(struct in6_addr));
> +		qedi_ep->dst_port = ntohs(addr6->sin6_port);
> +		qedi_ep->ip_type = TCP_IPV6;
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "dst_addr=%pI6, dst_port=%u\n",
> +			  qedi_ep->dst_addr, qedi_ep->dst_port);
> +	} else {
> +		QEDI_ERR(&qedi->dbg_ctx, "Invalid endpoint\n");
> +		ret = -EAFNOSUPPORT;
> +		goto ep_conn_exit;
> +	}
> +
> +	if (atomic_read(&qedi->link_state) != QEDI_LINK_UP) {
> +		QEDI_WARN(&qedi->dbg_ctx, "qedi link down\n");
> +		ret = -ENXIO;
> +		goto ep_conn_exit;
> +	}
> +
> +	ret = qedi_alloc_sq(qedi, qedi_ep);
> +	if (ret)
> +		goto ep_conn_exit;
> +
> +	ret = qedi_ops->acquire_conn(qedi->cdev, &qedi_ep->handle,
> +				     &qedi_ep->fw_cid, &qedi_ep->p_doorbell);
> +
> +	if (ret) {
> +		QEDI_ERR(&qedi->dbg_ctx, "Could not acquire connection\n");
> +		ret = -ENXIO;
> +		goto ep_free_sq;
> +	}
> +
> +	iscsi_cid = qedi_ep->handle;
> +	qedi_ep->iscsi_cid = iscsi_cid;
> +
> +	init_waitqueue_head(&qedi_ep->ofld_wait);
> +	init_waitqueue_head(&qedi_ep->tcp_ofld_wait);
> +	qedi_ep->state = EP_STATE_OFLDCONN_START;
> +	qedi->ep_tbl[iscsi_cid] = qedi_ep;
> +
> +	buf = (char *)&path_req;
> +	len = sizeof(path_req);
> +	memset(&path_req, 0, len);
> +
> +	msg_type = ISCSI_KEVENT_PATH_REQ;
> +	path_req.handle = (u64)qedi_ep->iscsi_cid;
> +	path_req.pmtu = qedi->ll2_mtu;
> +	qedi_ep->pmtu = qedi->ll2_mtu;
> +	if (qedi_ep->ip_type == TCP_IPV4) {
> +		memcpy(&path_req.dst.v4_addr, &qedi_ep->dst_addr,
> +		       sizeof(struct in_addr));
> +		path_req.ip_addr_len = 4;
> +	} else {
> +		memcpy(&path_req.dst.v6_addr, &qedi_ep->dst_addr,
> +		       sizeof(struct in6_addr));
> +		path_req.ip_addr_len = 16;
> +	}
> +
> +	ret = iscsi_offload_mesg(shost, &qedi_iscsi_transport, msg_type, buf,
> +				 len);
> +	if (ret) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "iscsi_offload_mesg() failed for cid=0x%x ret=%d\n",
> +			 iscsi_cid, ret);
> +		goto ep_rel_conn;
> +	}
> +
> +	atomic_inc(&qedi->num_offloads);
> +	return ep;
> +
> +ep_rel_conn:
> +	qedi->ep_tbl[iscsi_cid] = NULL;
> +	ret = qedi_ops->release_conn(qedi->cdev, qedi_ep->handle);
> +	if (ret)
> +		QEDI_WARN(&qedi->dbg_ctx, "release_conn returned %d\n",
> +			  ret);
> +ep_free_sq:
> +	qedi_free_sq(qedi, qedi_ep);
> +ep_conn_exit:
> +	iscsi_destroy_endpoint(ep);
> +	return ERR_PTR(ret);
> +}
> +
> +static int qedi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
> +{
> +	struct qedi_endpoint *qedi_ep;
> +	int ret = 0;
> +
> +	if (do_not_recover)
> +		return 1;
> +
> +	qedi_ep = ep->dd_data;
> +	if (qedi_ep->state == EP_STATE_IDLE ||
> +	    qedi_ep->state == EP_STATE_OFLDCONN_FAILED)
> +		return -1;
> +
> +	if (qedi_ep->state == EP_STATE_OFLDCONN_COMPL)
> +		return 1;
> +
> +	ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
> +					       ((qedi_ep->state ==
> +						EP_STATE_OFLDCONN_FAILED) ||
> +						(qedi_ep->state ==
> +						EP_STATE_OFLDCONN_COMPL)),
> +						msecs_to_jiffies(timeout_ms));
> +
> +	if (qedi_ep->state == EP_STATE_OFLDCONN_FAILED)
> +		ret = -1;
> +
> +	if (ret > 0)
> +		return 1;
> +	else if (!ret)
> +		return 0;
> +	else
> +		return ret;
> +}
> +
> +static void qedi_cleanup_active_cmd_list(struct qedi_conn *qedi_conn)
> +{
> +	struct qedi_cmd *cmd, *cmd_tmp;
> +
> +	list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
> +				 io_cmd) {
> +		list_del_init(&cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +	}
> +}
> +
> +static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
> +{
> +	struct qedi_endpoint *qedi_ep;
> +	struct qedi_conn *qedi_conn = NULL;
> +	struct iscsi_conn *conn = NULL;
> +	struct qedi_ctx *qedi;
> +	int ret = 0;
> +	int wait_delay = 20 * HZ;
> +	int abrt_conn = 0;
> +	int count = 10;
> +
> +	qedi_ep = ep->dd_data;
> +	qedi = qedi_ep->qedi;
> +
> +	flush_work(&qedi_ep->offload_work);
> +
> +	if (qedi_ep->conn) {
> +		qedi_conn = qedi_ep->conn;
> +		conn = qedi_conn->cls_conn->dd_data;
> +		iscsi_suspend_queue(conn);
> +		abrt_conn = qedi_conn->abrt_conn;
> +
> +		while (count--)	{
> +			if (!test_bit(QEDI_CONN_FW_CLEANUP,
> +				      &qedi_conn->flags)) {
> +				break;
> +			}
> +			msleep(1000);
> +		}
> +
> +		if (test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
> +			if (do_not_recover) {
> +				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +					  "Do not recover cid=0x%x\n",
> +					  qedi_ep->iscsi_cid);
> +				goto ep_exit_recover;
> +			}
> +			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +				  "Reset recovery cid=0x%x, qedi_ep=%p, state=0x%x\n",
> +				  qedi_ep->iscsi_cid, qedi_ep, qedi_ep->state);
> +			qedi_cleanup_active_cmd_list(qedi_conn);
> +			goto ep_release_conn;
> +		}
> +	}
> +
> +	if (do_not_recover)
> +		goto ep_exit_recover;
> +
> +	switch (qedi_ep->state) {
> +	case EP_STATE_OFLDCONN_START:
> +		goto ep_release_conn;
> +	case EP_STATE_OFLDCONN_FAILED:
> +		break;
> +	case EP_STATE_OFLDCONN_COMPL:
> +		if (unlikely(!qedi_conn))
> +			break;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "Active cmd count=%d, abrt_conn=%d, ep state=0x%x, cid=0x%x, qedi_conn=%p\n",
> +			  qedi_conn->active_cmd_count, abrt_conn,
> +			  qedi_ep->state, qedi_ep->iscsi_cid, qedi_ep->conn);
> +
> +		if (!qedi_conn->active_cmd_count)
> +			abrt_conn = 0;
> +		else
> +			abrt_conn = 1;
> +
> +		if (abrt_conn)
> +			qedi_clearsq(qedi, qedi_conn, NULL);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	qedi_ep->state = EP_STATE_DISCONN_START;
> +	ret = qedi_ops->destroy_conn(qedi->cdev, qedi_ep->handle, abrt_conn);
> +	if (ret) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "destroy_conn failed returned %d\n", ret);
> +	} else {
> +		ret = wait_event_interruptible_timeout(
> +					qedi_ep->tcp_ofld_wait,
> +					(qedi_ep->state !=
> +					 EP_STATE_DISCONN_START),
> +					wait_delay);
> +		if ((ret <= 0) || (qedi_ep->state == EP_STATE_DISCONN_START)) {
> +			QEDI_WARN(&qedi->dbg_ctx,
> +				  "Destroy conn timed out or interrupted, ret=%d, delay=%d, cid=0x%x\n",
> +				  ret, wait_delay, qedi_ep->iscsi_cid);
> +		}
> +	}
> +
> +ep_release_conn:
> +	ret = qedi_ops->release_conn(qedi->cdev, qedi_ep->handle);
> +	if (ret)
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "release_conn returned %d, cid=0x%x\n",
> +			  ret, qedi_ep->iscsi_cid);
> +ep_exit_recover:
> +	qedi_ep->state = EP_STATE_IDLE;
> +	qedi->ep_tbl[qedi_ep->iscsi_cid] = NULL;
> +	qedi->cid_que.conn_cid_tbl[qedi_ep->iscsi_cid] = NULL;
> +	qedi_free_id(&qedi->lcl_port_tbl, qedi_ep->src_port);
> +	qedi_free_sq(qedi, qedi_ep);
> +
> +	if (qedi_conn)
> +		qedi_conn->ep = NULL;
> +
> +	qedi_ep->conn = NULL;
> +	qedi_ep->qedi = NULL;
> +	atomic_dec(&qedi->num_offloads);
> +
> +	iscsi_destroy_endpoint(ep);
> +}
> +
> +static int qedi_data_avail(struct qedi_ctx *qedi, u16 vlanid)
> +{
> +	struct qed_dev *cdev = qedi->cdev;
> +	struct qedi_uio_dev *udev;
> +	struct qedi_uio_ctrl *uctrl;
> +	struct sk_buff *skb;
> +	u32 len;
> +	int rc = 0;
> +
> +	udev = qedi->udev;
> +	if (!udev) {
> +		QEDI_ERR(&qedi->dbg_ctx, "udev is NULL.\n");
> +		return -EINVAL;
> +	}
> +
> +	uctrl = (struct qedi_uio_ctrl *)udev->uctrl;
> +	if (!uctrl) {
> +		QEDI_ERR(&qedi->dbg_ctx, "uctrl is NULL.\n");
> +		return -EINVAL;
> +	}
> +
> +	len = uctrl->host_tx_pkt_len;
> +	if (!len) {
> +		QEDI_ERR(&qedi->dbg_ctx, "Invalid len %u\n", len);
> +		return -EINVAL;
> +	}
> +
> +	skb = alloc_skb(len, GFP_ATOMIC);
> +	if (!skb) {
> +		QEDI_ERR(&qedi->dbg_ctx, "alloc_skb failed\n");
> +		return -EINVAL;
> +	}
> +
> +	skb_put(skb, len);
> +	memcpy(skb->data, udev->tx_pkt, len);
> +	skb->ip_summed = CHECKSUM_NONE;
> +
> +	if (vlanid)
> +		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlanid);
> +
> +	rc = qedi_ops->ll2->start_xmit(cdev, skb);
> +	if (rc) {
> +		QEDI_ERR(&qedi->dbg_ctx, "ll2 start_xmit returned %d\n",
> +			 rc);
> +		kfree_skb(skb);
> +	}
> +
> +	uctrl->host_tx_pkt_len = 0;
> +	uctrl->hw_tx_cons++;
> +
> +	return rc;
> +}
> +
> +static void qedi_offload_work(struct work_struct *work)
> +{
> +	struct qedi_endpoint *qedi_ep =
> +		container_of(work, struct qedi_endpoint, offload_work);
> +	struct qedi_ctx *qedi;
> +	int wait_delay = 20 * HZ;
> +	int ret;
> +
> +	qedi = qedi_ep->qedi;
> +
> +	ret = qedi_iscsi_offload_conn(qedi_ep);
> +	if (ret) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
> +			 qedi_ep->iscsi_cid, qedi_ep, ret);
> +		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
> +		return;
> +	}
> +
> +	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
> +					       (qedi_ep->state ==
> +					       EP_STATE_OFLDCONN_COMPL),
> +					       wait_delay);
> +	if ((ret <= 0) || (qedi_ep->state != EP_STATE_OFLDCONN_COMPL)) {
> +		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
> +			 qedi_ep->iscsi_cid, qedi_ep);
> +	}
> +}
> +
> +static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
> +{
> +	struct qedi_ctx *qedi;
> +	struct qedi_endpoint *qedi_ep;
> +	int ret = 0;
> +	u32 iscsi_cid;
> +	u16 port_id = 0;
> +
> +	if (!shost) {
> +		ret = -ENXIO;
> +		QEDI_ERR(NULL, "shost is NULL\n");
> +		return ret;
> +	}
> +
> +	if (strcmp(shost->hostt->proc_name, "qedi")) {
> +		ret = -ENXIO;
> +		QEDI_ERR(NULL, "shost %s is invalid\n",
> +			 shost->hostt->proc_name);
> +		return ret;
> +	}
> +
> +	qedi = iscsi_host_priv(shost);
> +	if (path_data->handle == QEDI_PATH_HANDLE) {
> +		ret = qedi_data_avail(qedi, path_data->vlan_id);
> +		goto set_path_exit;
> +	}
> +
> +	iscsi_cid = (u32)path_data->handle;
> +	qedi_ep = qedi->ep_tbl[iscsi_cid];
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "iscsi_cid=0x%x, qedi_ep=%p\n", iscsi_cid, qedi_ep);
> +
> +	if (!is_valid_ether_addr(&path_data->mac_addr[0])) {
> +		QEDI_NOTICE(&qedi->dbg_ctx, "dst mac NOT VALID\n");
> +		ret = -EIO;
> +		goto set_path_exit;
> +	}
> +
> +	ether_addr_copy(&qedi_ep->src_mac[0], &qedi->mac[0]);
> +	ether_addr_copy(&qedi_ep->dst_mac[0], &path_data->mac_addr[0]);
> +
> +	qedi_ep->vlan_id = path_data->vlan_id;
> +	if (path_data->pmtu < DEF_PATH_MTU) {
> +		qedi_ep->pmtu = qedi->ll2_mtu;
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "MTU cannot be %u, using default MTU %u\n",
> +			   path_data->pmtu, qedi_ep->pmtu);
> +	}
> +
> +	if (path_data->pmtu != qedi->ll2_mtu) {
> +		if (path_data->pmtu > JUMBO_MTU) {
> +			ret = -EINVAL;
> +			QEDI_ERR(NULL, "Invalid MTU %u\n", path_data->pmtu);
> +			goto set_path_exit;
> +		}
> +
> +		qedi_reset_host_mtu(qedi, path_data->pmtu);
> +		qedi_ep->pmtu = qedi->ll2_mtu;
> +	}
> +
> +	port_id = qedi_ep->src_port;
> +	if (port_id >= QEDI_LOCAL_PORT_MIN &&
> +	    port_id < QEDI_LOCAL_PORT_MAX) {
> +		if (qedi_alloc_id(&qedi->lcl_port_tbl, port_id))
> +			port_id = 0;
> +	} else {
> +		port_id = 0;
> +	}
> +
> +	if (!port_id) {
> +		port_id = qedi_alloc_new_id(&qedi->lcl_port_tbl);
> +		if (port_id == QEDI_LOCAL_PORT_INVALID) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Failed to allocate port id for iscsi_cid=0x%x\n",
> +				 iscsi_cid);
> +			ret = -ENOMEM;
> +			goto set_path_exit;
> +		}
> +	}
> +
> +	qedi_ep->src_port = port_id;
> +
> +	if (qedi_ep->ip_type == TCP_IPV4) {
> +		memcpy(&qedi_ep->src_addr[0], &path_data->src.v4_addr,
> +		       sizeof(struct in_addr));
> +		memcpy(&qedi->src_ip[0], &path_data->src.v4_addr,
> +		       sizeof(struct in_addr));
> +		qedi->ip_type = TCP_IPV4;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "src addr:port=%pI4:%u, dst addr:port=%pI4:%u\n",
> +			  qedi_ep->src_addr, qedi_ep->src_port,
> +			  qedi_ep->dst_addr, qedi_ep->dst_port);
> +	} else {
> +		memcpy(&qedi_ep->src_addr[0], &path_data->src.v6_addr,
> +		       sizeof(struct in6_addr));
> +		memcpy(&qedi->src_ip[0], &path_data->src.v6_addr,
> +		       sizeof(struct in6_addr));
> +		qedi->ip_type = TCP_IPV6;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +			  "src addr:port=%pI6:%u, dst addr:port=%pI6:%u\n",
> +			  qedi_ep->src_addr, qedi_ep->src_port,
> +			  qedi_ep->dst_addr, qedi_ep->dst_port);
> +	}
> +
> +	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
> +	queue_work(qedi->offload_thread, &qedi_ep->offload_work);
> +
> +	ret = 0;
> +
> +set_path_exit:
> +	return ret;
> +}
> +
> +static umode_t qedi_attr_is_visible(int param_type, int param)
> +{
> +	switch (param_type) {
> +	case ISCSI_HOST_PARAM:
> +		switch (param) {
> +		case ISCSI_HOST_PARAM_NETDEV_NAME:
> +		case ISCSI_HOST_PARAM_HWADDRESS:
> +		case ISCSI_HOST_PARAM_IPADDRESS:
> +			return S_IRUGO;
> +		default:
> +			return 0;
> +		}
> +	case ISCSI_PARAM:
> +		switch (param) {
> +		case ISCSI_PARAM_MAX_RECV_DLENGTH:
> +		case ISCSI_PARAM_MAX_XMIT_DLENGTH:
> +		case ISCSI_PARAM_HDRDGST_EN:
> +		case ISCSI_PARAM_DATADGST_EN:
> +		case ISCSI_PARAM_CONN_ADDRESS:
> +		case ISCSI_PARAM_CONN_PORT:
> +		case ISCSI_PARAM_EXP_STATSN:
> +		case ISCSI_PARAM_PERSISTENT_ADDRESS:
> +		case ISCSI_PARAM_PERSISTENT_PORT:
> +		case ISCSI_PARAM_PING_TMO:
> +		case ISCSI_PARAM_RECV_TMO:
> +		case ISCSI_PARAM_INITIAL_R2T_EN:
> +		case ISCSI_PARAM_MAX_R2T:
> +		case ISCSI_PARAM_IMM_DATA_EN:
> +		case ISCSI_PARAM_FIRST_BURST:
> +		case ISCSI_PARAM_MAX_BURST:
> +		case ISCSI_PARAM_PDU_INORDER_EN:
> +		case ISCSI_PARAM_DATASEQ_INORDER_EN:
> +		case ISCSI_PARAM_ERL:
> +		case ISCSI_PARAM_TARGET_NAME:
> +		case ISCSI_PARAM_TPGT:
> +		case ISCSI_PARAM_USERNAME:
> +		case ISCSI_PARAM_PASSWORD:
> +		case ISCSI_PARAM_USERNAME_IN:
> +		case ISCSI_PARAM_PASSWORD_IN:
> +		case ISCSI_PARAM_FAST_ABORT:
> +		case ISCSI_PARAM_ABORT_TMO:
> +		case ISCSI_PARAM_LU_RESET_TMO:
> +		case ISCSI_PARAM_TGT_RESET_TMO:
> +		case ISCSI_PARAM_IFACE_NAME:
> +		case ISCSI_PARAM_INITIATOR_NAME:
> +		case ISCSI_PARAM_BOOT_ROOT:
> +		case ISCSI_PARAM_BOOT_NIC:
> +		case ISCSI_PARAM_BOOT_TARGET:
> +			return S_IRUGO;
> +		default:
> +			return 0;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void qedi_cleanup_task(struct iscsi_task *task)
> +{
> +	if (!task->sc || task->state == ISCSI_TASK_PENDING) {
> +		QEDI_INFO(NULL, QEDI_LOG_IO, "Returning ref_cnt=%d\n",
> +			  atomic_read(&task->refcount));
> +		return;
> +	}
> +
> +	qedi_iscsi_unmap_sg_list(task->dd_data);
> +}
> +
> +struct iscsi_transport qedi_iscsi_transport = {
> +	.owner = THIS_MODULE,
> +	.name = QEDI_MODULE_NAME,
> +	.caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_MULTI_R2T | CAP_DATADGST |
> +		CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO,
> +	.create_session = qedi_session_create,
> +	.destroy_session = qedi_session_destroy,
> +	.create_conn = qedi_conn_create,
> +	.bind_conn = qedi_conn_bind,
> +	.start_conn = qedi_conn_start,
> +	.stop_conn = iscsi_conn_stop,
> +	.destroy_conn = qedi_conn_destroy,
> +	.set_param = iscsi_set_param,
> +	.get_ep_param = qedi_ep_get_param,
> +	.get_conn_param = iscsi_conn_get_param,
> +	.get_session_param = iscsi_session_get_param,
> +	.get_host_param = qedi_host_get_param,
> +	.send_pdu = iscsi_conn_send_pdu,
> +	.get_stats = qedi_conn_get_stats,
> +	.xmit_task = qedi_task_xmit,
> +	.cleanup_task = qedi_cleanup_task,
> +	.session_recovery_timedout = iscsi_session_recovery_timedout,
> +	.ep_connect = qedi_ep_connect,
> +	.ep_poll = qedi_ep_poll,
> +	.ep_disconnect = qedi_ep_disconnect,
> +	.set_path = qedi_set_path,
> +	.attr_is_visible = qedi_attr_is_visible,
> +};
> +
> +void qedi_start_conn_recovery(struct qedi_ctx *qedi,
> +			      struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_cls_session *cls_sess;
> +	struct iscsi_cls_conn *cls_conn;
> +	struct iscsi_conn *conn;
> +
> +	cls_conn = qedi_conn->cls_conn;
> +	conn = cls_conn->dd_data;
> +	cls_sess = iscsi_conn_to_session(cls_conn);
> +
> +	if (iscsi_is_session_online(cls_sess)) {
> +		qedi_conn->abrt_conn = 1;
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Failing connection, state=0x%x, cid=0x%x\n",
> +			 conn->session->state, qedi_conn->iscsi_conn_id);
> +		iscsi_conn_failure(qedi_conn->cls_conn->dd_data,
> +				   ISCSI_ERR_CONN_FAILED);
> +	}
> +}
> +
> +void qedi_process_iscsi_error(struct qedi_endpoint *ep, struct async_data *data)
> +{
> +	struct qedi_conn *qedi_conn;
> +	struct qedi_ctx *qedi;
> +	char warn_notice[] = "iscsi_warning";
> +	char error_notice[] = "iscsi_error";
> +	char *message;
> +	int need_recovery = 0;
> +	u32 err_mask = 0;
> +	char msg[64];
> +
> +	if (!ep)
> +		return;
> +
> +	qedi_conn = ep->conn;
> +	if (!qedi_conn)
> +		return;
> +
> +	qedi = ep->qedi;
> +
> +	QEDI_ERR(&qedi->dbg_ctx, "async event iscsi error:0x%x\n",
> +		 data->error_code);
> +
> +	if (err_mask) {
> +		need_recovery = 0;
> +		message = warn_notice;
> +	} else {
> +		need_recovery = 1;
> +		message = error_notice;
> +	}
> +
> +	switch (data->error_code) {
> +	case ISCSI_STATUS_NONE:
> +		strcpy(msg, "tcp_error none");
> +		break;
> +	case ISCSI_CONN_ERROR_TASK_CID_MISMATCH:
> +		strcpy(msg, "task cid mismatch");
> +		break;
> +	case ISCSI_CONN_ERROR_TASK_NOT_VALID:
> +		strcpy(msg, "invalid task");
> +		break;
> +	case ISCSI_CONN_ERROR_RQ_RING_IS_FULL:
> +		strcpy(msg, "rq ring full");
> +		break;
> +	case ISCSI_CONN_ERROR_CMDQ_RING_IS_FULL:
> +		strcpy(msg, "cmdq ring full");
> +		break;
> +	case ISCSI_CONN_ERROR_HQE_CACHING_FAILED:
> +		strcpy(msg, "sge caching failed");
> +		break;
> +	case ISCSI_CONN_ERROR_HEADER_DIGEST_ERROR:
> +		strcpy(msg, "hdr digest error");
> +		break;
> +	case ISCSI_CONN_ERROR_LOCAL_COMPLETION_ERROR:
> +		strcpy(msg, "local cmpl error");
> +		break;
> +	case ISCSI_CONN_ERROR_DATA_OVERRUN:
> +		strcpy(msg, "data overrun");
> +		break;
> +	case ISCSI_CONN_ERROR_OUT_OF_SGES_ERROR:
> +		strcpy(msg, "out of sge error");
> +		break;
> +	case ISCSI_CONN_ERROR_TCP_SEG_PROC_IP_OPTIONS_ERROR:
> +		strcpy(msg, "tcp seg ip options error");
> +		break;
> +	case ISCSI_CONN_ERROR_TCP_IP_FRAGMENT_ERROR:
> +		strcpy(msg, "tcp ip fragment error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_AHS_LEN:
> +		strcpy(msg, "AHS len protocol error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_ITT_OUT_OF_RANGE:
> +		strcpy(msg, "itt out of range error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_EXCEEDS_PDU_SIZE:
> +		strcpy(msg, "data seg more than pdu size");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE:
> +		strcpy(msg, "invalid opcode");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE_BEFORE_UPDATE:
> +		strcpy(msg, "invalid opcode before update");
> +		break;
> +	case ISCSI_CONN_ERROR_UNVALID_NOPIN_DSL:
> +		strcpy(msg, "unexpected opcode");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_CARRIES_NO_DATA:
> +		strcpy(msg, "r2t carries no data");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SN:
> +		strcpy(msg, "data sn error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_IN_TTT:
> +		strcpy(msg, "data TTT error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_TTT:
> +		strcpy(msg, "r2t TTT error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_BUFFER_OFFSET:
> +		strcpy(msg, "buffer offset error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_BUFFER_OFFSET_OOO:
> +		strcpy(msg, "buffer offset ooo");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_SN:
> +		strcpy(msg, "r2t sn error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_0:
> +		strcpy(msg, "data xer len error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_1:
> +		strcpy(msg, "data xer len1 error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_2:
> +		strcpy(msg, "data xer len2 error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_LUN:
> +		strcpy(msg, "protocol lun error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO:
> +		strcpy(msg, "f bit zero error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO_S_BIT_ONE:
> +		strcpy(msg, "f bit zero s bit one error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_EXP_STAT_SN:
> +		strcpy(msg, "exp stat sn error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DSL_NOT_ZERO:
> +		strcpy(msg, "dsl not zero error");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_DSL:
> +		strcpy(msg, "invalid dsl");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_TOO_BIG:
> +		strcpy(msg, "data seg len too big");
> +		break;
> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_OUTSTANDING_R2T_COUNT:
> +		strcpy(msg, "outstanding r2t count error");
> +		break;
> +	case ISCSI_CONN_ERROR_SENSE_DATA_LENGTH:
> +		strcpy(msg, "sense datalen error");
> +		break;
Please use an array for mapping values onto strings.
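A lookup table indexed by the error code keeps this compact and makes new codes a one-line addition. A minimal standalone sketch of the suggested shape; the enum values and strings below are illustrative stand-ins, not the real ISCSI_CONN_ERROR_* codes from the firmware headers:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative codes standing in for the firmware's ISCSI_CONN_ERROR_* values. */
enum {
	ERR_NONE = 0,
	ERR_TASK_CID_MISMATCH,
	ERR_TASK_NOT_VALID,
	ERR_MAX
};

/* Designated initializers keep the table aligned with the enum and make
 * gaps harmless: a missing entry stays NULL and falls back below. */
static const char * const err_msg[ERR_MAX] = {
	[ERR_NONE]              = "tcp_error none",
	[ERR_TASK_CID_MISMATCH] = "task cid mismatch",
	[ERR_TASK_NOT_VALID]    = "invalid task",
};

static const char *err_to_msg(unsigned int code)
{
	if (code >= ERR_MAX || !err_msg[code])
		return "unknown error";
	return err_msg[code];
}
```

The out-of-range fallback also replaces the default: branch of the switch, so the "unknown error" path needs no separate handling.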

> +	case ISCSI_ERROR_UNKNOWN:
> +	default:
> +		need_recovery = 0;
> +		strcpy(msg, "unknown error");
> +		break;
> +	}
> +	iscsi_conn_printk(KERN_ALERT,
> +			  qedi_conn->cls_conn->dd_data,
> +			  "qedi: %s - %s\n", message, msg);
> +
> +	if (need_recovery)
> +		qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
> +}
> +
> +void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data)
> +{
> +	struct qedi_conn *qedi_conn;
> +
> +	if (!ep)
> +		return;
> +
> +	qedi_conn = ep->conn;
> +	if (!qedi_conn)
> +		return;
> +
> +	QEDI_ERR(&ep->qedi->dbg_ctx, "async event TCP error:0x%x\n",
> +		 data->error_code);
> +
> +	qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
> +}
> diff --git a/drivers/scsi/qedi/qedi_iscsi.h b/drivers/scsi/qedi/qedi_iscsi.h
> new file mode 100644
> index 0000000..6da1c90
> --- /dev/null
> +++ b/drivers/scsi/qedi/qedi_iscsi.h
> @@ -0,0 +1,228 @@
> +/*
> + * QLogic iSCSI Offload Driver
> + * Copyright (c) 2016 Cavium Inc.
> + *
> + * This software is available under the terms of the GNU General Public License
> + * (GPL) Version 2, available from the file COPYING in the main directory of
> + * this source tree.
> + */
> +
> +#ifndef _QEDI_ISCSI_H_
> +#define _QEDI_ISCSI_H_
> +
> +#include <linux/socket.h>
> +#include <linux/completion.h>
> +#include "qedi.h"
> +
> +#define ISCSI_MAX_SESS_PER_HBA	4096
> +
> +#define DEF_KA_TIMEOUT		7200000
> +#define DEF_KA_INTERVAL		10000
> +#define DEF_KA_MAX_PROBE_COUNT	10
> +#define DEF_TOS			0
> +#define DEF_TTL			0xfe
> +#define DEF_SND_SEQ_SCALE	0
> +#define DEF_RCV_BUF		0xffff
> +#define DEF_SND_BUF		0xffff
> +#define DEF_SEED		0
> +#define DEF_MAX_RT_TIME		8000
> +#define DEF_MAX_DA_COUNT        2
> +#define DEF_SWS_TIMER		1000
> +#define DEF_MAX_CWND		2
> +#define DEF_PATH_MTU		1500
> +#define DEF_MSS			1460
> +#define DEF_LL2_MTU		1560
> +#define JUMBO_MTU		9000
> +
> +#define MIN_MTU         576 /* rfc 793 */
> +#define IPV4_HDR_LEN    20
> +#define IPV6_HDR_LEN    40
> +#define TCP_HDR_LEN     20
> +#define TCP_OPTION_LEN  12
> +#define VLAN_LEN         4
> +
> +enum {
> +	EP_STATE_IDLE                   = 0x0,
> +	EP_STATE_ACQRCONN_START         = 0x1,
> +	EP_STATE_ACQRCONN_COMPL         = 0x2,
> +	EP_STATE_OFLDCONN_START         = 0x4,
> +	EP_STATE_OFLDCONN_COMPL         = 0x8,
> +	EP_STATE_DISCONN_START          = 0x10,
> +	EP_STATE_DISCONN_COMPL          = 0x20,
> +	EP_STATE_CLEANUP_START          = 0x40,
> +	EP_STATE_CLEANUP_CMPL           = 0x80,
> +	EP_STATE_TCP_FIN_RCVD           = 0x100,
> +	EP_STATE_TCP_RST_RCVD           = 0x200,
> +	EP_STATE_LOGOUT_SENT            = 0x400,
> +	EP_STATE_LOGOUT_RESP_RCVD       = 0x800,
> +	EP_STATE_CLEANUP_FAILED         = 0x1000,
> +	EP_STATE_OFLDCONN_FAILED        = 0x2000,
> +	EP_STATE_CONNECT_FAILED         = 0x4000,
> +	EP_STATE_DISCONN_TIMEDOUT       = 0x8000,
> +};
> +
> +struct qedi_conn;
> +
> +struct qedi_endpoint {
> +	struct qedi_ctx *qedi;
> +	u32 dst_addr[4];
> +	u32 src_addr[4];
> +	u16 src_port;
> +	u16 dst_port;
> +	u16 vlan_id;
> +	u16 pmtu;
> +	u8 src_mac[ETH_ALEN];
> +	u8 dst_mac[ETH_ALEN];
> +	u8 ip_type;
> +	int state;
> +	wait_queue_head_t ofld_wait;
> +	wait_queue_head_t tcp_ofld_wait;
> +	u32 iscsi_cid;
> +	/* identifier of the connection from qed */
> +	u32 handle;
> +	u32 fw_cid;
> +	void __iomem *p_doorbell;
> +
> +	/* Send queue management */
> +	struct iscsi_wqe *sq;
> +	dma_addr_t sq_dma;
> +
> +	u16 sq_prod_idx;
> +	u16 fw_sq_prod_idx;
> +	u16 sq_con_idx;
> +	u32 sq_mem_size;
> +
> +	void *sq_pbl;
> +	dma_addr_t sq_pbl_dma;
> +	u32 sq_pbl_size;
> +	struct qedi_conn *conn;
> +	struct work_struct offload_work;
> +};
> +
> +#define QEDI_SQ_WQES_MIN	16
> +
> +struct qedi_io_bdt {
> +	struct iscsi_sge *sge_tbl;
> +	dma_addr_t sge_tbl_dma;
> +	u16 sge_valid;
> +};
> +
> +/**
> + * struct generic_pdu_resc - login pdu resource structure
> + *
> + * @req_buf:            driver buffer used to stage payload associated with
> + *                      the login request
> + * @req_dma_addr:       dma address for iscsi login request payload buffer
> + * @req_buf_size:       actual login request payload length
> + * @req_wr_ptr:         pointer into login request buffer when next data is
> + *                      to be written
> + * @resp_hdr:           iscsi header where iscsi login response header is to
> + *                      be recreated
> + * @resp_buf:           buffer to stage login response payload
> + * @resp_dma_addr:      login response payload buffer dma address
> + * @resp_buf_size:      login response payload length
> + * @resp_wr_ptr:        pointer into login response buffer when next data is
> + *                      to be written
> + * @req_bd_tbl:         iscsi login request payload BD table
> + * @req_bd_dma:         login request BD table dma address
> + * @resp_bd_tbl:        iscsi login response payload BD table
> + * @resp_bd_dma:        login response BD table dma address
> + *
> + * The following structure defines buffer info for generic PDUs such as
> + * iSCSI Login, Logout and NOP.
> + */
> +struct generic_pdu_resc {
> +	char *req_buf;
> +	dma_addr_t req_dma_addr;
> +	u32 req_buf_size;
> +	char *req_wr_ptr;
> +	struct iscsi_hdr resp_hdr;
> +	char *resp_buf;
> +	dma_addr_t resp_dma_addr;
> +	u32 resp_buf_size;
> +	char *resp_wr_ptr;
> +	char *req_bd_tbl;
> +	dma_addr_t req_bd_dma;
> +	char *resp_bd_tbl;
> +	dma_addr_t resp_bd_dma;
> +};
> +
> +struct qedi_conn {
> +	struct iscsi_cls_conn *cls_conn;
> +	struct qedi_ctx *qedi;
> +	struct qedi_endpoint *ep;
> +	struct list_head active_cmd_list;
> +	spinlock_t list_lock;		/* internal conn lock */
> +	u32 active_cmd_count;
> +	u32 cmd_cleanup_req;
> +	u32 cmd_cleanup_cmpl;
> +
> +	u32 iscsi_conn_id;
> +	int itt;
> +	int abrt_conn;
> +#define QEDI_CID_RESERVED	0x5AFF
> +	u32 fw_cid;
> +	/*
> +	 * Buffer for login negotiation process
> +	 */
> +	struct generic_pdu_resc gen_pdu;
> +
> +	struct list_head tmf_work_list;
> +	wait_queue_head_t wait_queue;
> +	spinlock_t tmf_work_lock;	/* tmf work lock */
> +	unsigned long flags;
> +#define QEDI_CONN_FW_CLEANUP	1
> +};
> +
> +struct qedi_cmd {
> +	struct list_head io_cmd;
> +	bool io_cmd_in_list;
> +	struct iscsi_hdr hdr;
> +	struct qedi_conn *conn;
> +	struct scsi_cmnd *scsi_cmd;
> +	struct scatterlist *sg;
> +	struct qedi_io_bdt io_tbl;
> +	struct iscsi_task_context request;
> +	unsigned char *sense_buffer;
> +	dma_addr_t sense_buffer_dma;
> +	u16 task_id;
> +
> +	/* field populated for tmf work queue */
> +	struct iscsi_task *task;
> +	struct work_struct tmf_work;
> +	int state;
> +#define CLEANUP_WAIT	1
> +#define CLEANUP_RECV	2
> +#define CLEANUP_WAIT_FAILED	3
> +#define CLEANUP_NOT_REQUIRED	4
> +#define LUN_RESET_RESPONSE_RECEIVED	5
> +#define RESPONSE_RECEIVED	6
> +
> +	int type;
> +#define TYPEIO		1
> +#define TYPERESET	2
> +
> +	struct qedi_work_map *list_tmf_work;
> +	/* slowpath management */
> +	bool use_slowpath;
> +
> +	struct iscsi_tm_rsp *tmf_resp_buf;
> +};
> +
> +struct qedi_work_map {
> +	struct list_head list;
> +	struct qedi_cmd *qedi_cmd;
> +	int rtid;
> +
> +	int state;
> +#define QEDI_WORK_QUEUED	1
> +#define QEDI_WORK_SCHEDULED	2
> +#define QEDI_WORK_EXIT		3
> +
> +	struct work_struct *ptr_tmf_work;
> +};
> +
> +#define qedi_set_itt(task_id, itt) ((u32)(((task_id) & 0xffff) | ((itt) << 16)))
> +#define qedi_get_itt(cqe) ((cqe).iscsi_hdr.cmd.itt >> 16)
> +
> +#endif /* _QEDI_ISCSI_H_ */
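The qedi_set_itt()/qedi_get_itt() macros above pack the 16-bit firmware task id into the low half of the on-wire itt and the protocol itt into the high half. Isolated from the driver, the packing is (function names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Low 16 bits carry the firmware task id, high 16 bits the protocol itt,
 * mirroring the qedi_set_itt()/qedi_get_itt() macros. */
static inline uint32_t pack_itt(uint32_t task_id, uint32_t proto_itt)
{
	return (task_id & 0xffff) | (proto_itt << 16);
}

static inline uint32_t unpack_proto_itt(uint32_t fw_itt)
{
	return fw_itt >> 16;
}

static inline uint32_t unpack_task_id(uint32_t fw_itt)
{
	return fw_itt & 0xffff;
}
```

Note the shift silently drops any protocol itt bits above 16, as the original macro does.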
> diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
> index 58ac9a2..22d19a3 100644
> --- a/drivers/scsi/qedi/qedi_main.c
> +++ b/drivers/scsi/qedi/qedi_main.c
> @@ -27,6 +27,8 @@
>  #include <scsi/scsi.h>
>  
>  #include "qedi.h"
> +#include "qedi_gbl.h"
> +#include "qedi_iscsi.h"
>  
>  static uint fw_debug;
>  module_param(fw_debug, uint, S_IRUGO | S_IWUSR);
> @@ -1368,6 +1370,139 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
>  	return status;
>  }
>  
> +int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep)
> +{
> +	int rval = 0;
> +	u32 *pbl;
> +	dma_addr_t page;
> +	int num_pages;
> +
> +	if (!ep)
> +		return -EIO;
> +
> +	/* Calculate appropriate queue and PBL sizes */
> +	ep->sq_mem_size = QEDI_SQ_SIZE * sizeof(struct iscsi_wqe);
> +	ep->sq_mem_size += QEDI_PAGE_SIZE - 1;
> +
> +	ep->sq_pbl_size = (ep->sq_mem_size / QEDI_PAGE_SIZE) * sizeof(void *);
> +	ep->sq_pbl_size = ep->sq_pbl_size + QEDI_PAGE_SIZE;
> +
> +	ep->sq = dma_alloc_coherent(&qedi->pdev->dev, ep->sq_mem_size,
> +				    &ep->sq_dma, GFP_KERNEL);
> +	if (!ep->sq) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "Could not allocate send queue.\n");
> +		rval = -ENOMEM;
> +		goto out;
> +	}
> +	memset(ep->sq, 0, ep->sq_mem_size);
> +
> +	ep->sq_pbl = dma_alloc_coherent(&qedi->pdev->dev, ep->sq_pbl_size,
> +					&ep->sq_pbl_dma, GFP_KERNEL);
> +	if (!ep->sq_pbl) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "Could not allocate send queue PBL.\n");
> +		rval = -ENOMEM;
> +		goto out_free_sq;
> +	}
> +	memset(ep->sq_pbl, 0, ep->sq_pbl_size);
> +
> +	/* Create PBL */
> +	num_pages = ep->sq_mem_size / QEDI_PAGE_SIZE;
> +	page = ep->sq_dma;
> +	pbl = (u32 *)ep->sq_pbl;
> +
> +	while (num_pages--) {
> +		*pbl = (u32)page;
> +		pbl++;
> +		*pbl = (u32)((u64)page >> 32);
> +		pbl++;
> +		page += QEDI_PAGE_SIZE;
> +	}
> +
> +	return rval;
> +
> +out_free_sq:
> +	dma_free_coherent(&qedi->pdev->dev, ep->sq_mem_size, ep->sq,
> +			  ep->sq_dma);
> +out:
> +	return rval;
> +}
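The PBL loop in qedi_alloc_sq() writes each page address into the page-buffer list as two 32-bit words, low half first. Stripped of the DMA plumbing it is just:

```c
#include <assert.h>
#include <stdint.h>

/* Write num_pages page addresses into the page-buffer list as (lo32, hi32)
 * pairs, advancing by page_size each step, as qedi_alloc_sq() does. */
static void fill_pbl(uint32_t *pbl, uint64_t base, int num_pages,
		     uint64_t page_size)
{
	while (num_pages--) {
		*pbl++ = (uint32_t)base;		/* low 32 bits */
		*pbl++ = (uint32_t)(base >> 32);	/* high 32 bits */
		base += page_size;
	}
}
```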
> +
> +void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep)
> +{
> +	if (ep->sq_pbl)
> +		dma_free_coherent(&qedi->pdev->dev, ep->sq_pbl_size, ep->sq_pbl,
> +				  ep->sq_pbl_dma);
> +	if (ep->sq)
> +		dma_free_coherent(&qedi->pdev->dev, ep->sq_mem_size, ep->sq,
> +				  ep->sq_dma);
> +}
> +
> +int qedi_get_task_idx(struct qedi_ctx *qedi)
> +{
> +	s16 tmp_idx;
> +
> +again:
> +	tmp_idx = find_first_zero_bit(qedi->task_idx_map,
> +				      MAX_ISCSI_TASK_ENTRIES);
> +
> +	if (tmp_idx >= MAX_ISCSI_TASK_ENTRIES) {
> +		QEDI_ERR(&qedi->dbg_ctx, "FW task context pool is full.\n");
> +		tmp_idx = -1;
> +		goto err_idx;
> +	}
> +
> +	if (test_and_set_bit(tmp_idx, qedi->task_idx_map))
> +		goto again;
> +
> +err_idx:
> +	return tmp_idx;
> +}
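qedi_get_task_idx() pairs a lock-free find_first_zero_bit() with test_and_set_bit() and retries, because another context can claim the bit between the scan and the set. A single-word, single-threaded sketch of the same find-then-claim shape (the kernel primitives are what make the claim atomic):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ENTRIES 64

/* Find the first clear bit and claim it; returns -1 when the pool is full.
 * In the driver the claim is test_and_set_bit(), so a losing racer simply
 * rescans. Here a plain word stands in for the kernel bitmap. */
static int alloc_idx(uint64_t *map)
{
	for (int idx = 0; idx < MAX_ENTRIES; idx++) {
		uint64_t bit = 1ULL << idx;

		if (!(*map & bit)) {
			*map |= bit;	/* test_and_set_bit() in the driver */
			return idx;
		}
	}
	return -1;	/* FW task context pool is full */
}
```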
> +
> +void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx)
> +{
> +	if (!test_and_clear_bit(idx, qedi->task_idx_map)) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "FW task context already cleared, tid=0x%x\n", idx);
> +		WARN_ON(1);
> +	}
> +}
> +
> +void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt)
> +{
> +	qedi->itt_map[tid].itt = proto_itt;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "update itt map tid=0x%x, with proto itt=0x%x\n", tid,
> +		  qedi->itt_map[tid].itt);
> +}
> +
> +void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, s16 *tid)
> +{
> +	u16 i;
> +
> +	for (i = 0; i < MAX_ISCSI_TASK_ENTRIES; i++) {
> +		if (qedi->itt_map[i].itt == itt) {
> +			*tid = i;
> +			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +				  "Ref itt=0x%x, found at tid=0x%x\n",
> +				  itt, *tid);
> +			return;
> +		}
> +	}
> +
> +	WARN_ON(1);
> +}
> +
> +void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt)
> +{
> +	*proto_itt = qedi->itt_map[tid].itt;
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "Get itt map tid [0x%x with proto itt[0x%x]",
> +		  tid, *proto_itt);
> +}
> +
>  static int qedi_alloc_itt(struct qedi_ctx *qedi)
>  {
>  	qedi->itt_map = kzalloc((sizeof(struct qedi_itt_map) *
> @@ -1488,6 +1623,26 @@ static int qedi_cpu_callback(struct notifier_block *nfb,
>  	.notifier_call = qedi_cpu_callback,
>  };
>  
> +void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu)
> +{
> +	struct qed_ll2_params params;
> +
> +	qedi_recover_all_conns(qedi);
> +
> +	qedi_ops->ll2->stop(qedi->cdev);
> +	qedi_ll2_free_skbs(qedi);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "old MTU %u, new MTU %u\n",
> +		  qedi->ll2_mtu, mtu);
> +	memset(&params, 0, sizeof(params));
> +	qedi->ll2_mtu = mtu;
> +	params.mtu = qedi->ll2_mtu + IPV6_HDR_LEN + TCP_HDR_LEN;
> +	params.drop_ttl0_packets = 0;
> +	params.rx_vlan_stripping = 1;
> +	ether_addr_copy(params.ll2_mac_address, qedi->dev_info.common.hw_mac);
> +	qedi_ops->ll2->start(qedi->cdev, &params);
> +}
> +
>  static void __qedi_remove(struct pci_dev *pdev, int mode)
>  {
>  	struct qedi_ctx *qedi = pci_get_drvdata(pdev);
> @@ -1852,6 +2007,13 @@ static int __init qedi_init(void)
>  	qedi_dbg_init("qedi");
>  #endif
>  
> +	qedi_scsi_transport = iscsi_register_transport(&qedi_iscsi_transport);
> +	if (!qedi_scsi_transport) {
> +		QEDI_ERR(NULL, "Could not register qedi transport");
> +		rc = -ENOMEM;
> +		goto exit_qedi_init_1;
> +	}
> +
>  	register_hotcpu_notifier(&qedi_cpu_notifier);
>  
>  	ret = pci_register_driver(&qedi_pci_driver);
> @@ -1874,6 +2036,7 @@ static int __init qedi_init(void)
>  	return rc;
>  
>  exit_qedi_init_2:
> +	iscsi_unregister_transport(&qedi_iscsi_transport);
>  exit_qedi_init_1:
>  #ifdef CONFIG_DEBUG_FS
>  	qedi_dbg_exit();
> @@ -1892,6 +2055,7 @@ static void __exit qedi_cleanup(void)
>  
>  	pci_unregister_driver(&qedi_pci_driver);
>  	unregister_hotcpu_notifier(&qedi_cpu_notifier);
> +	iscsi_unregister_transport(&qedi_iscsi_transport);
>  
>  #ifdef CONFIG_DEBUG_FS
>  	qedi_dbg_exit();
> 
Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
  2016-10-19  5:01   ` manish.rangankar
@ 2016-10-19  9:09   ` Johannes Thumshirn
  2016-10-20  0:14       ` Arun Easi
  -1 siblings, 1 reply; 38+ messages in thread
From: Johannes Thumshirn @ 2016-10-19  9:09 UTC (permalink / raw)
  To: manish.rangankar
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Yuval.Mintz, QLogic-Storage-Upstream, Yuval Mintz, Arun Easi

Hi Manish,

Some initial comments

On Wed, Oct 19, 2016 at 01:01:08AM -0400, manish.rangankar@cavium.com wrote:
> From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> 
> This adds the backbone required for the various HW initializations
> which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
> 4xxxx line of adapters - FW notification, resource initializations, etc.
> 
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> ---
>  drivers/net/ethernet/qlogic/Kconfig            |   15 +
>  drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
>  drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
>  drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
>  drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
>  drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
>  drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
>  drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
>  drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
>  drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
>  drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
>  drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
>  drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
>  include/linux/qed/qed_if.h                     |    2 +
>  include/linux/qed/qed_iscsi_if.h               |  249 +++++
>  15 files changed, 1692 insertions(+), 22 deletions(-)
>  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
>  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
>  create mode 100644 include/linux/qed/qed_iscsi_if.h
> 
> diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
> index 0df1391f9..bad4fae 100644
> --- a/drivers/net/ethernet/qlogic/Kconfig
> +++ b/drivers/net/ethernet/qlogic/Kconfig
> @@ -118,4 +118,19 @@ config INFINIBAND_QEDR
>  	  for QLogic QED. This would be replaced by the 'real' option
>  	  once the QEDR driver is added [+relocated].
>  
> +config QED_ISCSI
> +	bool
> +
> +config QEDI
> +	tristate "QLogic QED 25/40/100Gb iSCSI driver"
> +	depends on QED
> +	select QED_LL2
> +	select QED_ISCSI
> +	default n
> +	---help---
> +	  This provides a temporary node that allows the compilation
> +	  and logical testing of the hardware offload iSCSI support
> +	  for QLogic QED. This would be replaced by the 'real' option
> +	  once the QEDI driver is added [+relocated].
> +
>  endif # NET_VENDOR_QLOGIC
> diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
> index cda0af7..b76669c 100644
> --- a/drivers/net/ethernet/qlogic/qed/Makefile
> +++ b/drivers/net/ethernet/qlogic/qed/Makefile
> @@ -6,3 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
>  qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
>  qed-$(CONFIG_QED_LL2) += qed_ll2.o
>  qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
> +qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
> diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
> index 653bb57..a61b1c0 100644
> --- a/drivers/net/ethernet/qlogic/qed/qed.h
> +++ b/drivers/net/ethernet/qlogic/qed/qed.h
> @@ -35,6 +35,7 @@
>  
>  #define QED_WFQ_UNIT	100
>  
> +#define ISCSI_BDQ_ID(_port_id) (_port_id)

This looks a bit odd to me.

[...]

>  #endif
> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> +			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);


Why not introduce a small helper like:
static inline bool qed_is_iscsi_personality(struct qed_hwfn *p_hwfn)
{
	return IS_ENABLED(CONFIG_QEDI) && p_hwfn->hw_info.personality ==
		QED_PCI_ISCSI;
}

>  		qed_iov_free(p_hwfn);

[...]

> +
> +	if (!GET_FIELD(p_ramrod->iscsi.flags,
> +		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> +		p_tcp = &p_ramrod->tcp;
> +		ucval = p_conn->local_mac[1];
> +		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->local_mac[0];
> +		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->local_mac[3];
> +		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->local_mac[2];
> +		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->local_mac[5];
> +		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->local_mac[4];
> +		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
> +		ucval = p_conn->remote_mac[1];
> +		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->remote_mac[0];
> +		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->remote_mac[3];
> +		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->remote_mac[2];
> +		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->remote_mac[5];
> +		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->remote_mac[4];
> +		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
> +
> +		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
> +
> +		p_tcp->flags = p_conn->tcp_flags;
> +		p_tcp->ip_version = p_conn->ip_version;
> +		for (i = 0; i < 4; i++) {
> +			dval = p_conn->remote_ip[i];
> +			p_tcp->remote_ip[i] = cpu_to_le32(dval);
> +			dval = p_conn->local_ip[i];
> +			p_tcp->local_ip[i] = cpu_to_le32(dval);
> +		}
> +		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
> +		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
> +
> +		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
> +		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
> +		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
> +		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
> +		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
> +		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
> +		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
> +		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
> +		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
> +		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
> +		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
> +		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
> +		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
> +		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
> +		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
> +		dval = p_conn->ka_timeout_delta;
> +		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
> +		dval = p_conn->rt_timeout_delta;
> +		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
> +		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
> +		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
> +		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
> +		p_tcp->rt_cnt = p_conn->rt_cnt;
> +		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
> +		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
> +		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
> +		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
> +		dval = p_conn->initial_rcv_wnd;
> +		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
> +		p_tcp->ttl = p_conn->ttl;
> +		p_tcp->tos_or_tc = p_conn->tos_or_tc;
> +		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
> +		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
> +		p_tcp->mss = cpu_to_le16(p_conn->mss);
> +		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
> +		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> +		dval = p_conn->ts_ticks_per_second;
> +		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
> +		wval = p_conn->da_timeout_value;
> +		p_tcp->da_timeout_value = cpu_to_le16(wval);
> +		p_tcp->ack_frequency = p_conn->ack_frequency;
> +		p_tcp->connect_mode = p_conn->connect_mode;
> +	} else {
> +		p_tcp2 =
> +		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
> +		ucval = p_conn->local_mac[1];
> +		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->local_mac[0];
> +		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->local_mac[3];
> +		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->local_mac[2];
> +		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->local_mac[5];
> +		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->local_mac[4];
> +		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
> +
> +		ucval = p_conn->remote_mac[1];
> +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
> +		ucval = p_conn->remote_mac[0];
> +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
> +		ucval = p_conn->remote_mac[3];
> +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
> +		ucval = p_conn->remote_mac[2];
> +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
> +		ucval = p_conn->remote_mac[5];
> +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
> +		ucval = p_conn->remote_mac[4];
> +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
> +
> +		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);
> +		p_tcp2->flags = p_conn->tcp_flags;
> +
> +		p_tcp2->ip_version = p_conn->ip_version;
> +		for (i = 0; i < 4; i++) {
> +			dval = p_conn->remote_ip[i];
> +			p_tcp2->remote_ip[i] = cpu_to_le32(dval);
> +			dval = p_conn->local_ip[i];
> +			p_tcp2->local_ip[i] = cpu_to_le32(dval);
> +		}
> +
> +		p_tcp2->flow_label = cpu_to_le32(p_conn->flow_label);
> +		p_tcp2->ttl = p_conn->ttl;
> +		p_tcp2->tos_or_tc = p_conn->tos_or_tc;
> +		p_tcp2->remote_port = cpu_to_le16(p_conn->remote_port);
> +		p_tcp2->local_port = cpu_to_le16(p_conn->local_port);
> +		p_tcp2->mss = cpu_to_le16(p_conn->mss);
> +		p_tcp2->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> +		p_tcp2->connect_mode = p_conn->connect_mode;
> +		wval = p_conn->syn_ip_payload_length;
> +		p_tcp2->syn_ip_payload_length = cpu_to_le16(wval);
> +		p_tcp2->syn_phy_addr_lo = DMA_LO_LE(p_conn->syn_phy_addr);
> +		p_tcp2->syn_phy_addr_hi = DMA_HI_LE(p_conn->syn_phy_addr);
> +	}

Is there any chance you could factor out the above blocks into their own
functions, so you have:


	if (!GET_FIELD(p_ramrod->iscsi.flags,
		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
		qedi_do_stuff_off_chip();
	} else {
		qedi_do_stuff_on_chip();
	}
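For illustration, the repeated MAC swizzling inside both branches could also be collapsed into a small helper. A userspace sketch of the byte layout the quoted hunk builds (hypothetical helper name, plain uint16_t in place of the driver's __le16 fields), not the driver's actual API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: pack a 6-byte MAC into three 16-bit halves the
 * way the ramrod code above does -- byte [0] of each half gets the odd
 * MAC byte, byte [1] the even one, so on a little-endian CPU
 * hi = mac[1] | mac[0] << 8, and likewise for mid/lo. */
static void mac_to_halves(const uint8_t mac[6],
			  uint16_t *hi, uint16_t *mid, uint16_t *lo)
{
	*hi  = (uint16_t)mac[1] | ((uint16_t)mac[0] << 8);
	*mid = (uint16_t)mac[3] | ((uint16_t)mac[2] << 8);
	*lo  = (uint16_t)mac[5] | ((uint16_t)mac[4] << 8);
}
```

In the kernel code this would stay as stores into the __le16 fields, but calling one helper for the local and remote MACs would remove the twelve-line repetition in both the on-chip and off-chip branches.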

> +

[...]

> +static void __iomem *qed_iscsi_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
> +{
> +	return (u8 __iomem *)p_hwfn->doorbells +
> +			     qed_db_addr(cid, DQ_DEMS_LEGACY);
> +}
> +
> +static void __iomem *qed_iscsi_get_primary_bdq_prod(struct qed_hwfn *p_hwfn,
> +						    u8 bdq_id)
> +{
> +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> +
> +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_MSDM_RAM +
> +			     MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> +							     bdq_id);
> +}
> +
> +static void __iomem *qed_iscsi_get_secondary_bdq_prod(struct qed_hwfn *p_hwfn,
> +						      u8 bdq_id)
> +{
> +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> +
> +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
> +			     TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> +							     bdq_id);
> +}

Why are you casting to u8 * here when you're returning void *?

[...]

> +
> +	if (tasks) {
> +		struct qed_tid_mem *tid_info = kzalloc(sizeof(*tid_info),
> +						       GFP_KERNEL);
> +
> +		if (!tid_info) {
> +			DP_NOTICE(cdev,
> +				  "Failed to allocate tasks information\n");
> +			qed_iscsi_stop(cdev);
> +			return -ENOMEM;
> +		}
> +
> +		rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
> +					      tid_info);
> +		if (rc) {
> +			DP_NOTICE(cdev, "Failed to gather task information\n");
> +			qed_iscsi_stop(cdev);
> +			kfree(tid_info);
> +			return rc;
> +		}
> +
> +		/* Fill task information */
> +		tasks->size = tid_info->tid_size;
> +		tasks->num_tids_per_block = tid_info->num_tids_per_block;
> +		memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);
> +
> +		kfree(tid_info);
> +	}
> +
> +	return 0;
> +}

Maybe:

struct qed_tid_mem *tid_info;
[...]
if (!tasks)
	return 0;

tid_info = kzalloc(sizeof(*tid_info), GFP_KERNEL);

if (!tid_info) {
	DP_NOTICE(cdev, "Failed to allocate tasks information\n");
	qed_iscsi_stop(cdev);
	return -ENOMEM;
}

rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev), tid_info);
if (rc) {
	DP_NOTICE(cdev, "Failed to gather task information\n");
	qed_iscsi_stop(cdev);
	kfree(tid_info);
	return rc;
}

/* Fill task information */
tasks->size = tid_info->tid_size;
tasks->num_tids_per_block = tid_info->num_tids_per_block;
memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);

kfree(tid_info);

> +

[...]

> +/**
> + * @brief start iscsi in FW
> + *
> + * @param cdev
> + * @param tasks - qed will fill information about tasks
> + *

Please use proper kerneldoc and not doxygen syntax.
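For reference, the same comment in kernel-doc form would look roughly like this (the function name is assumed here, since the quoted hunk does not show it):

```c
/**
 * start() - start iSCSI in FW
 * @cdev: qed device handle
 * @tasks: filled by qed with per-task information
 *
 * Return: 0 on success, negative errno on failure.
 */
```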

Thanks,
	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [RFC 2/6] qed: Add iSCSI out of order packet handling.
  2016-10-19  5:01   ` manish.rangankar
@ 2016-10-19  9:39   ` Johannes Thumshirn
  2016-10-20  0:43       ` Arun Easi
  -1 siblings, 1 reply; 38+ messages in thread
From: Johannes Thumshirn @ 2016-10-19  9:39 UTC (permalink / raw)
  To: manish.rangankar
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Yuval.Mintz, QLogic-Storage-Upstream, Yuval Mintz, Arun Easi

On Wed, Oct 19, 2016 at 01:01:09AM -0400, manish.rangankar@cavium.com wrote:
> From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> 
> This patch adds out of order packet handling for hardware offloaded
> iSCSI. Out of order packet handling requires driver buffer allocation
> and assistance.
> 
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> ---

[...]

> +		if (IS_ENABLED(CONFIG_QEDI) &&
> +			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {

If you're going to implement the qed_is_iscsi_personality() helper, please
consider a qed_ll2_is_iscsi_ooo() one as well.
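Such a helper might look something like this (the struct and field names are taken from the quoted hunk; the connection-info type name is an assumption):

```c
static inline bool qed_ll2_is_iscsi_ooo(struct qed_ll2_info *p_ll2_conn)
{
	return IS_ENABLED(CONFIG_QEDI) &&
	       p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO;
}
```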

> +			struct qed_ooo_buffer *p_buffer;

[...]

> +	while (cq_new_idx != cq_old_idx) {
> +		struct core_rx_fast_path_cqe *p_cqe_fp;
> +
> +		cqe = qed_chain_consume(&p_rx->rcq_chain);
> +		cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
> +		cqe_type = cqe->rx_cqe_sp.type;
> +
> +		if (cqe_type != CORE_RX_CQE_TYPE_REGULAR) {
> +			DP_NOTICE(p_hwfn,
> +				  "Got a non-regular LB LL2 completion [type 0x%02x]\n",
> +				  cqe_type);
> +			return -EINVAL;
> +		}
> +		p_cqe_fp = &cqe->rx_cqe_fp;
> +
> +		placement_offset = p_cqe_fp->placement_offset;
> +		parse_flags = le16_to_cpu(p_cqe_fp->parse_flags.flags);
> +		packet_length = le16_to_cpu(p_cqe_fp->packet_length);
> +		vlan = le16_to_cpu(p_cqe_fp->vlan);
> +		iscsi_ooo = (struct ooo_opaque *)&p_cqe_fp->opaque_data;
> +		qed_ooo_save_history_entry(p_hwfn, p_hwfn->p_ooo_info,
> +					   iscsi_ooo);
> +		cid = le32_to_cpu(iscsi_ooo->cid);
> +
> +		/* Process delete isle first */
> +		if (iscsi_ooo->drop_size)
> +			qed_ooo_delete_isles(p_hwfn, p_hwfn->p_ooo_info, cid,
> +					     iscsi_ooo->drop_isle,
> +					     iscsi_ooo->drop_size);
> +
> +		if (iscsi_ooo->ooo_opcode == TCP_EVENT_NOP)
> +			continue;
> +
> +		/* Now process create/add/join isles */
> +		if (list_empty(&p_rx->active_descq)) {
> +			DP_NOTICE(p_hwfn,
> +				  "LL2 OOO RX chain has no submitted buffers\n");
> +			return -EIO;
> +		}
> +
> +		p_pkt = list_first_entry(&p_rx->active_descq,
> +					 struct qed_ll2_rx_packet, list_entry);
> +
> +		if ((iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_NEW_ISLE) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_RIGHT) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_LEFT) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_PEN) ||
> +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_JOIN)) {
> +			if (!p_pkt) {
> +				DP_NOTICE(p_hwfn,
> +					  "LL2 OOO RX packet is not valid\n");
> +				return -EIO;
> +			}
> +			list_del(&p_pkt->list_entry);
> +			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
> +			p_buffer->packet_length = packet_length;
> +			p_buffer->parse_flags = parse_flags;
> +			p_buffer->vlan = vlan;
> +			p_buffer->placement_offset = placement_offset;
> +			qed_chain_consume(&p_rx->rxq_chain);
> +			list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
> +
> +			switch (iscsi_ooo->ooo_opcode) {
> +			case TCP_EVENT_ADD_NEW_ISLE:
> +				qed_ooo_add_new_isle(p_hwfn,
> +						     p_hwfn->p_ooo_info,
> +						     cid,
> +						     iscsi_ooo->ooo_isle,
> +						     p_buffer);
> +				break;
> +			case TCP_EVENT_ADD_ISLE_RIGHT:
> +				qed_ooo_add_new_buffer(p_hwfn,
> +						       p_hwfn->p_ooo_info,
> +						       cid,
> +						       iscsi_ooo->ooo_isle,
> +						       p_buffer,
> +						       QED_OOO_RIGHT_BUF);
> +				break;
> +			case TCP_EVENT_ADD_ISLE_LEFT:
> +				qed_ooo_add_new_buffer(p_hwfn,
> +						       p_hwfn->p_ooo_info,
> +						       cid,
> +						       iscsi_ooo->ooo_isle,
> +						       p_buffer,
> +						       QED_OOO_LEFT_BUF);
> +				break;
> +			case TCP_EVENT_JOIN:
> +				qed_ooo_add_new_buffer(p_hwfn,
> +						       p_hwfn->p_ooo_info,
> +						       cid,
> +						       iscsi_ooo->ooo_isle +
> +						       1,
> +						       p_buffer,
> +						       QED_OOO_LEFT_BUF);
> +				qed_ooo_join_isles(p_hwfn,
> +						   p_hwfn->p_ooo_info,
> +						   cid, iscsi_ooo->ooo_isle);
> +				break;
> +			case TCP_EVENT_ADD_PEN:
> +				num_ooo_add_to_peninsula++;
> +				qed_ooo_put_ready_buffer(p_hwfn,
> +							 p_hwfn->p_ooo_info,
> +							 p_buffer, true);
> +				break;
> +			}
> +		} else {
> +			DP_NOTICE(p_hwfn,
> +				  "Unexpected event (%d) TX OOO completion\n",
> +				  iscsi_ooo->ooo_opcode);
> +		}
> +	}

Can you factor the body of that "while (cq_new_idx != cq_old_idx)" loop out
into its own function?

>  
> -		b_last = list_empty(&p_rx->active_descq);
> +	/* Submit RX buffer here */
> +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> +						   p_hwfn->p_ooo_info))) {

This could be an opportunity for a qed_for_each_free_buffer() or maybe even a
qed_ooo_submit_rx_buffers() and qed_ooo_submit_tx_buffers() as this is mostly
duplicate code.
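A qed_ooo_submit_rx_buffers() built from the quoted loop could be as simple as this (sketch only; signatures assumed from the code below):

```c
static void qed_ooo_submit_rx_buffers(struct qed_hwfn *p_hwfn,
				      struct qed_ll2_info *p_ll2_conn)
{
	struct qed_ooo_buffer *p_buffer;
	int rc;

	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
						   p_hwfn->p_ooo_info))) {
		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
					    p_buffer->rx_buffer_phys_addr,
					    0, p_buffer, true);
		if (rc) {
			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
						p_buffer);
			break;
		}
	}
}
```

and then each of the duplicated sites becomes a single call.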

> +		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
> +					    p_buffer->rx_buffer_phys_addr,
> +					    0, p_buffer, true);
> +		if (rc) {
> +			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
> +						p_buffer);
> +			break;
> +		}
>  	}
> +
> +	/* Submit Tx buffers here */
> +	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
> +						    p_hwfn->p_ooo_info))) {

Ditto.

[...]
> +
> +	/* Submit Tx buffers here */
> +	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
> +						    p_hwfn->p_ooo_info))) {


And here

[...]

> +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> +						   p_hwfn->p_ooo_info))) {

[..]

> +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> +						   p_hwfn->p_ooo_info))) {

[...]

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  2016-10-19  5:01   ` manish.rangankar
@ 2016-10-19 10:02   ` Johannes Thumshirn
  2016-10-20  8:41     ` Rangankar, Manish
  2016-10-23 14:04     ` Rangankar, Manish
  -1 siblings, 2 replies; 38+ messages in thread
From: Johannes Thumshirn @ 2016-10-19 10:02 UTC (permalink / raw)
  To: manish.rangankar
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Yuval.Mintz, QLogic-Storage-Upstream, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

On Wed, Oct 19, 2016 at 01:01:10AM -0400, manish.rangankar@cavium.com wrote:
> From: Manish Rangankar <manish.rangankar@cavium.com>
> 
> The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
> for 41000 Series Converged Network Adapters by QLogic.
> 
> This patch consists of following changes:
>   - MAINTAINERS Makefile and Kconfig changes for qedi,
>   - PCI driver registration,
>   - iSCSI host level initialization,
>   - Debugfs and log level infrastructure.
> 
> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
> ---

[...]

> +static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32 tid)
> +{
> +	return (void *)(info->blocks[tid / info->num_tids_per_block] +
> +			(tid % info->num_tids_per_block) * info->size);
> +}

Unnecessary cast here.


[...]

> +void
> +qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
> +	     const char *fmt, ...)
> +{
> +	va_list va;
> +	struct va_format vaf;
> +	char nfunc[32];
> +
> +	memset(nfunc, 0, sizeof(nfunc));
> +	memcpy(nfunc, func, sizeof(nfunc) - 1);
> +
> +	va_start(va, fmt);
> +
> +	vaf.fmt = fmt;
> +	vaf.va = &va;
> +
> +	if (likely(qedi) && likely(qedi->pdev))
> +		pr_crit("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
> +			nfunc, line, qedi->host_no, &vaf);
> +	else
> +		pr_crit("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);

pr_crit, seriously?

[...]

> +static void qedi_int_fp(struct qedi_ctx *qedi)
> +{
> +	struct qedi_fastpath *fp;
> +	int id;
> +
> +	memset((void *)qedi->fp_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
> +	       sizeof(*qedi->fp_array));
> +	memset((void *)qedi->sb_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
> +	       sizeof(*qedi->sb_array));

I don't think the cast is necessary here.

[...]

> +static int qedi_setup_cid_que(struct qedi_ctx *qedi)
> +{
> +	int i;
> +
> +	qedi->cid_que.cid_que_base = kmalloc((qedi->max_active_conns *
> +					      sizeof(u32)), GFP_KERNEL);
> +	if (!qedi->cid_que.cid_que_base)
> +		return -ENOMEM;
> +
> +	qedi->cid_que.conn_cid_tbl = kmalloc((qedi->max_active_conns *
> +					      sizeof(struct qedi_conn *)),
> +					     GFP_KERNEL);

Please use kmalloc_array() here.
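That is, something along these lines for both allocations:

```c
qedi->cid_que.cid_que_base = kmalloc_array(qedi->max_active_conns,
					   sizeof(u32), GFP_KERNEL);

qedi->cid_que.conn_cid_tbl = kmalloc_array(qedi->max_active_conns,
					   sizeof(struct qedi_conn *),
					   GFP_KERNEL);
```

which also gives you overflow checking on the multiplication for free.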

[...]

> +/* MSI-X fastpath handler code */
> +static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
> +{
> +	struct qedi_fastpath *fp = dev_id;
> +	struct qedi_ctx *qedi = fp->qedi;
> +	bool wake_io_thread = true;
> +
> +	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
> +
> +process_again:
> +	wake_io_thread = qedi_process_completions(fp);
> +	if (wake_io_thread) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
> +			  "process already running\n");
> +	}
> +
> +	if (qedi_fp_has_work(fp) == 0)
> +		qed_sb_update_sb_idx(fp->sb_info);
> +
> +	/* Check for more work */
> +	rmb();
> +
> +	if (qedi_fp_has_work(fp) == 0)
> +		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
> +	else
> +		goto process_again;
> +
> +	return IRQ_HANDLED;
> +}

You might want to consider workqueues here.

[...]

> +static int qedi_alloc_itt(struct qedi_ctx *qedi)
> +{
> +	qedi->itt_map = kzalloc((sizeof(struct qedi_itt_map) *
> +				MAX_ISCSI_TASK_ENTRIES), GFP_KERNEL);

that screams for kcalloc()
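That is, something like:

```c
qedi->itt_map = kcalloc(MAX_ISCSI_TASK_ENTRIES,
			sizeof(struct qedi_itt_map), GFP_KERNEL);
```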

> +	if (!qedi->itt_map) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Unable to allocate itt map array memory\n");
> +		return -ENOMEM;
> +	}
> +	return 0;
> +}
> +
> +static void qedi_free_itt(struct qedi_ctx *qedi)
> +{
> +	kfree(qedi->itt_map);
> +}
> +
> +static struct qed_ll2_cb_ops qedi_ll2_cb_ops = {
> +	.rx_cb = qedi_ll2_rx,
> +	.tx_cb = NULL,
> +};
> +
> +static int qedi_percpu_io_thread(void *arg)
> +{
> +	struct qedi_percpu_s *p = arg;
> +	struct qedi_work *work, *tmp;
> +	unsigned long flags;
> +	LIST_HEAD(work_list);
> +
> +	set_user_nice(current, -20);
> +
> +	while (!kthread_should_stop()) {
> +		spin_lock_irqsave(&p->p_work_lock, flags);
> +		while (!list_empty(&p->work_list)) {
> +			list_splice_init(&p->work_list, &work_list);
> +			spin_unlock_irqrestore(&p->p_work_lock, flags);
> +
> +			list_for_each_entry_safe(work, tmp, &work_list, list) {
> +				list_del_init(&work->list);
> +				qedi_fp_process_cqes(work->qedi, &work->cqe,
> +						     work->que_idx);
> +				kfree(work);
> +			}
> +			spin_lock_irqsave(&p->p_work_lock, flags);
> +		}
> +		set_current_state(TASK_INTERRUPTIBLE);
> +		spin_unlock_irqrestore(&p->p_work_lock, flags);
> +		schedule();
> +	}
> +	__set_current_state(TASK_RUNNING);
> +
> +	return 0;
> +}

A kthread at nice -20, with IRQs turned off, looping over a list: what could
possibly go wrong here? I bet you your favorite beverage that this will
cause soft lockups when running I/O stress tests, BTDT.

[...]

> +	if (mode != QEDI_MODE_RECOVERY) {
> +		if (iscsi_host_add(qedi->shost, &pdev->dev)) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Could not add iscsi host\n");
> +			rc = -ENOMEM;
> +			goto remove_host;
> +		}
> +
> +		/* Allocate uio buffers */
> +		rc = qedi_alloc_uio_rings(qedi);
> +		if (rc) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "UIO alloc ring failed err=%d\n", rc);
> +			goto remove_host;
> +		}
> +
> +		rc = qedi_init_uio(qedi);
> +		if (rc) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "UIO init failed, err=%d\n", rc);
> +			goto free_uio;
> +		}
> +
> +		/* host the array on iscsi_conn */
> +		rc = qedi_setup_cid_que(qedi);
> +		if (rc) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Could not setup cid que\n");
> +			goto free_uio;
> +		}
> +
> +		rc = qedi_cm_alloc_mem(qedi);
> +		if (rc) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Could not alloc cm memory\n");
> +			goto free_cid_que;
> +		}
> +
> +		rc = qedi_alloc_itt(qedi);
> +		if (rc) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Could not alloc itt memory\n");
> +			goto free_cid_que;
> +		}
> +
> +		sprintf(host_buf, "host_%d", qedi->shost->host_no);
> +		qedi->tmf_thread = create_singlethread_workqueue(host_buf);
> +		if (!qedi->tmf_thread) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Unable to start tmf thread!\n");
> +			rc = -ENODEV;
> +			goto free_cid_que;
> +		}
> +
> +		sprintf(host_buf, "qedi_ofld%d", qedi->shost->host_no);
> +		qedi->offload_thread = create_workqueue(host_buf);
> +		if (!qedi->offload_thread) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Unable to start offload thread!\n");
> +			rc = -ENODEV;
> +			goto free_cid_que;
> +		}
> +
> +		/* F/w needs 1st task context memory entry for performance */
> +		set_bit(QEDI_RESERVE_TASK_ID, qedi->task_idx_map);
> +		atomic_set(&qedi->num_offloads, 0);
> +	}
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +		  "QLogic FastLinQ iSCSI Module qedi %s, FW %d.%d.%d.%d\n",
> +		  QEDI_MODULE_VERSION, FW_MAJOR_VERSION, FW_MINOR_VERSION,
> +		   FW_REVISION_VERSION, FW_ENGINEERING_VERSION);
> +	return 0;

Please put the QEDI_INFO() above the if and invert the condition.


Thanks,
	Johannes
-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [RFC 6/6] qedi: Add support for data path.
  2016-10-19  5:01   ` manish.rangankar
  (?)
@ 2016-10-19 10:24   ` Hannes Reinecke
  2016-10-20  9:24     ` Rangankar, Manish
  -1 siblings, 1 reply; 38+ messages in thread
From: Hannes Reinecke @ 2016-10-19 10:24 UTC (permalink / raw)
  To: manish.rangankar, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Yuval.Mintz,
	QLogic-Storage-Upstream, Nilesh Javali, Adheer Chandravanshi,
	Chad Dupuis, Saurav Kashyap, Arun Easi

On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> From: Manish Rangankar <manish.rangankar@cavium.com>
> 
> This patch adds support for data path and TMF handling.
> 
> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
> ---
>  drivers/scsi/qedi/qedi_fw.c    | 1282 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/qedi/qedi_gbl.h   |    6 +
>  drivers/scsi/qedi/qedi_iscsi.c |    6 +
>  drivers/scsi/qedi/qedi_main.c  |    4 +
>  4 files changed, 1298 insertions(+)
> 
> diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
> index a820785..af1e14d 100644
> --- a/drivers/scsi/qedi/qedi_fw.c
> +++ b/drivers/scsi/qedi/qedi_fw.c
> @@ -147,6 +147,114 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
>  	spin_unlock(&session->back_lock);
>  }
>  
> +static void qedi_tmf_resp_work(struct work_struct *work)
> +{
> +	struct qedi_cmd *qedi_cmd =
> +				container_of(work, struct qedi_cmd, tmf_work);
> +	struct qedi_conn *qedi_conn = qedi_cmd->conn;
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_tm_rsp *resp_hdr_ptr;
> +	struct iscsi_cls_session *cls_sess;
> +	int rval = 0;
> +
> +	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +	resp_hdr_ptr =  (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
> +	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
> +
> +	iscsi_block_session(session->cls_session);
> +	rval = qedi_cleanup_all_io(qedi, qedi_conn, qedi_cmd->task, true);
> +	if (rval) {
> +		clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +		qedi_clear_task_idx(qedi, qedi_cmd->task_id);
> +		iscsi_unblock_session(session->cls_session);
> +		return;
> +	}
> +
> +	iscsi_unblock_session(session->cls_session);
> +	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
> +
> +	spin_lock(&session->back_lock);
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
> +	spin_unlock(&session->back_lock);
> +	kfree(resp_hdr_ptr);
> +	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +}
> +
> +static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
> +				  union iscsi_cqe *cqe,
> +				  struct iscsi_task *task,
> +				  struct qedi_conn *qedi_conn)
> +
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_tmf_response_hdr *cqe_tmp_response;
> +	struct iscsi_tm_rsp *resp_hdr_ptr;
> +	struct iscsi_tm *tmf_hdr;
> +	struct qedi_cmd *qedi_cmd = NULL;
> +	u32 *tmp;
> +
> +	cqe_tmp_response = &cqe->cqe_common.iscsi_hdr.tmf_response;
> +
> +	qedi_cmd = task->dd_data;
> +	qedi_cmd->tmf_resp_buf = kzalloc(sizeof(*resp_hdr_ptr), GFP_KERNEL);
> +	if (!qedi_cmd->tmf_resp_buf) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Failed to allocate resp buf, cid=0x%x\n",
> +			  qedi_conn->iscsi_conn_id);
> +		return;
> +	}
> +
> +	spin_lock(&session->back_lock);
> +	resp_hdr_ptr =  (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
> +	memset(resp_hdr_ptr, 0, sizeof(struct iscsi_tm_rsp));
> +
> +	/* Fill up the header */
> +	resp_hdr_ptr->opcode = cqe_tmp_response->opcode;
> +	resp_hdr_ptr->flags = cqe_tmp_response->hdr_flags;
> +	resp_hdr_ptr->response = cqe_tmp_response->hdr_response;
> +	resp_hdr_ptr->hlength = 0;
> +
> +	hton24(resp_hdr_ptr->dlength,
> +	       (cqe_tmp_response->hdr_second_dword &
> +		ISCSI_TMF_RESPONSE_HDR_DATA_SEG_LEN_MASK));
> +	tmp = (u32 *)resp_hdr_ptr->dlength;
> +	resp_hdr_ptr->itt = build_itt(cqe->cqe_solicited.itid,
> +				      conn->session->age);
> +	resp_hdr_ptr->statsn = cpu_to_be32(cqe_tmp_response->stat_sn);
> +	resp_hdr_ptr->exp_cmdsn  = cpu_to_be32(cqe_tmp_response->exp_cmd_sn);
> +	resp_hdr_ptr->max_cmdsn = cpu_to_be32(cqe_tmp_response->max_cmd_sn);
> +
> +	tmf_hdr = (struct iscsi_tm *)qedi_cmd->task->hdr;
> +
> +	if (likely(qedi_cmd->io_cmd_in_list)) {
> +		qedi_cmd->io_cmd_in_list = false;
> +		list_del_init(&qedi_cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +	}
> +
> +	if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +	      ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
> +	    ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +	      ISCSI_TM_FUNC_TARGET_WARM_RESET) ||
> +	    ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +	      ISCSI_TM_FUNC_TARGET_COLD_RESET)) {
> +		INIT_WORK(&qedi_cmd->tmf_work, qedi_tmf_resp_work);
> +		queue_work(qedi->tmf_thread, &qedi_cmd->tmf_work);
> +		goto unblock_sess;
> +	}
> +
> +	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
> +
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
> +	kfree(resp_hdr_ptr);
> +
> +unblock_sess:
> +	spin_unlock(&session->back_lock);
> +}
> +
>  static void qedi_process_login_resp(struct qedi_ctx *qedi,
>  				    union iscsi_cqe *cqe,
>  				    struct iscsi_task *task,
> @@ -470,6 +578,121 @@ static void qedi_process_reject_mesg(struct qedi_ctx *qedi,
>  	spin_unlock_bh(&session->back_lock);
>  }
>  
> +static void qedi_scsi_completion(struct qedi_ctx *qedi,
> +				 union iscsi_cqe *cqe,
> +				 struct iscsi_task *task,
> +				 struct iscsi_conn *conn)
> +{
> +	struct scsi_cmnd *sc_cmd;
> +	struct qedi_cmd *cmd = task->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_scsi_rsp *hdr;
> +	struct iscsi_data_in_hdr *cqe_data_in;
> +	int datalen = 0;
> +	struct qedi_conn *qedi_conn;
> +	u32 iscsi_cid;
> +	bool mark_cmd_node_deleted = false;
> +	u8 cqe_err_bits = 0;
> +
> +	iscsi_cid  = cqe->cqe_common.conn_id;
> +	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
> +
> +	cqe_data_in = &cqe->cqe_common.iscsi_hdr.data_in;
> +	cqe_err_bits =
> +		cqe->cqe_common.error_bitmap.error_bits.cqe_error_status_bits;
> +
> +	spin_lock_bh(&session->back_lock);
> +	/* get the scsi command */
> +	sc_cmd = cmd->scsi_cmd;
> +
> +	if (!sc_cmd) {
> +		QEDI_WARN(&qedi->dbg_ctx, "sc_cmd is NULL!\n");
> +		goto error;
> +	}
> +
> +	if (!sc_cmd->SCp.ptr) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "SCp.ptr is NULL, returned in another context.\n");
> +		goto error;
> +	}
> +
> +	if (!sc_cmd->request) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "sc_cmd->request is NULL, sc_cmd=%p.\n",
> +			  sc_cmd);
> +		goto error;
> +	}
> +
> +	if (!sc_cmd->request->special) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "request->special is NULL so request not valid, sc_cmd=%p.\n",
> +			  sc_cmd);
> +		goto error;
> +	}
> +
> +	if (!sc_cmd->request->q) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "request->q is NULL so request is not valid, sc_cmd=%p.\n",
> +			  sc_cmd);
> +		goto error;
> +	}
> +
> +	qedi_iscsi_unmap_sg_list(cmd);
> +
> +	hdr = (struct iscsi_scsi_rsp *)task->hdr;
> +	hdr->opcode = cqe_data_in->opcode;
> +	hdr->max_cmdsn = cpu_to_be32(cqe_data_in->max_cmd_sn);
> +	hdr->exp_cmdsn = cpu_to_be32(cqe_data_in->exp_cmd_sn);
> +	hdr->itt = build_itt(cqe->cqe_solicited.itid, conn->session->age);
> +	hdr->response = cqe_data_in->reserved1;
> +	hdr->cmd_status = cqe_data_in->status_rsvd;
> +	hdr->flags = cqe_data_in->flags;
> +	hdr->residual_count = cpu_to_be32(cqe_data_in->residual_count);
> +
> +	if (hdr->cmd_status == SAM_STAT_CHECK_CONDITION) {
> +		datalen = cqe_data_in->reserved2 &
> +			  ISCSI_COMMON_HDR_DATA_SEG_LEN_MASK;
> +		memcpy((char *)conn->data, (char *)cmd->sense_buffer, datalen);
> +	}
> +
> +	/* If f/w reports data underrun err then set residual to IO transfer
> +	 * length, set Underrun flag and clear Overrun flag explicitly
> +	 */
> +	if (unlikely(cqe_err_bits &&
> +		     GET_FIELD(cqe_err_bits, CQE_ERROR_BITMAP_UNDER_RUN_ERR))) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "Under flow itt=0x%x proto flags=0x%x tid=0x%x cid 0x%x fw resid 0x%x sc dlen 0x%x\n",
> +			  hdr->itt, cqe_data_in->flags, cmd->task_id,
> +			  qedi_conn->iscsi_conn_id, hdr->residual_count,
> +			  scsi_bufflen(sc_cmd));
> +		hdr->residual_count = cpu_to_be32(scsi_bufflen(sc_cmd));
> +		hdr->flags |= ISCSI_FLAG_CMD_UNDERFLOW;
> +		hdr->flags &= (~ISCSI_FLAG_CMD_OVERFLOW);
> +	}
> +
> +	spin_lock(&qedi_conn->list_lock);
> +	if (likely(cmd->io_cmd_in_list)) {
> +		cmd->io_cmd_in_list = false;
> +		list_del_init(&cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +		mark_cmd_node_deleted = true;
> +	}
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
> +		  "Freeing tid=0x%x for cid=0x%x\n",
> +		  cmd->task_id, qedi_conn->iscsi_conn_id);
> +	cmd->state = RESPONSE_RECEIVED;
> +	if (io_tracing)
> +		qedi_trace_io(qedi, task, cmd->task_id, QEDI_IO_TRACE_RSP);
> +
> +	qedi_clear_task_idx(qedi, cmd->task_id);
> +	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
> +			     conn->data, datalen);
> +error:
> +	spin_unlock_bh(&session->back_lock);
> +}
> +
>  static void qedi_mtask_completion(struct qedi_ctx *qedi,
>  				  union iscsi_cqe *cqe,
>  				  struct iscsi_task *task,
> @@ -482,9 +705,16 @@ static void qedi_mtask_completion(struct qedi_ctx *qedi,
>  	iscsi_conn = conn->cls_conn->dd_data;
>  
>  	switch (hdr_opcode) {
> +	case ISCSI_OPCODE_SCSI_RESPONSE:
> +	case ISCSI_OPCODE_DATA_IN:
> +		qedi_scsi_completion(qedi, cqe, task, iscsi_conn);
> +		break;
>  	case ISCSI_OPCODE_LOGIN_RESPONSE:
>  		qedi_process_login_resp(qedi, cqe, task, conn);
>  		break;
> +	case ISCSI_OPCODE_TMF_RESPONSE:
> +		qedi_process_tmf_resp(qedi, cqe, task, conn);
> +		break;
>  	case ISCSI_OPCODE_TEXT_RESPONSE:
>  		qedi_process_text_resp(qedi, cqe, task, conn);
>  		break;
> @@ -520,6 +750,131 @@ static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
>  	spin_unlock_bh(&session->back_lock);
>  }
>  
> +static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
> +					  struct iscsi_cqe_solicited *cqe,
> +					  struct iscsi_task *task,
> +					  struct iscsi_conn *conn)
> +{
> +	struct qedi_work_map *work, *work_tmp;
> +	u32 proto_itt = cqe->itid;
> +	u32 ptmp_itt = 0;
> +	itt_t protoitt = 0;
> +	int found = 0;
> +	struct qedi_cmd *qedi_cmd = NULL;
> +	u32 rtid = 0;
> +	u32 iscsi_cid;
> +	struct qedi_conn *qedi_conn;
> +	struct qedi_cmd *cmd_new, *dbg_cmd;
> +	struct iscsi_task *mtask;
> +	struct iscsi_tm *tmf_hdr = NULL;
> +
> +	iscsi_cid = cqe->conn_id;
> +	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
> +
> +	/* Based on this itt get the corresponding qedi_cmd */
> +	spin_lock_bh(&qedi_conn->tmf_work_lock);
> +	list_for_each_entry_safe(work, work_tmp, &qedi_conn->tmf_work_list,
> +				 list) {
> +		if (work->rtid == proto_itt) {
> +			/* We found the command */
> +			qedi_cmd = work->qedi_cmd;
> +			if (!qedi_cmd->list_tmf_work) {
> +				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +					  "TMF work not found, cqe->tid=0x%x, cid=0x%x\n",
> +					  proto_itt, qedi_conn->iscsi_conn_id);
> +				WARN_ON(1);
> +			}
> +			found = 1;
> +			mtask = qedi_cmd->task;
> +			tmf_hdr = (struct iscsi_tm *)mtask->hdr;
> +			rtid = work->rtid;
> +
> +			list_del_init(&work->list);
> +			kfree(work);
> +			qedi_cmd->list_tmf_work = NULL;
> +		}
> +	}
> +	spin_unlock_bh(&qedi_conn->tmf_work_lock);
> +
> +	if (found) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +			  "TMF work, cqe->tid=0x%x, tmf flags=0x%x, cid=0x%x\n",
> +			  proto_itt, tmf_hdr->flags, qedi_conn->iscsi_conn_id);
> +
> +		if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +		    ISCSI_TM_FUNC_ABORT_TASK) {
> +			spin_lock_bh(&conn->session->back_lock);
> +
> +			protoitt = build_itt(get_itt(tmf_hdr->rtt),
> +					     conn->session->age);
> +			task = iscsi_itt_to_task(conn, protoitt);
> +
> +			spin_unlock_bh(&conn->session->back_lock);
> +
> +			if (!task) {
> +				QEDI_NOTICE(&qedi->dbg_ctx,
> +					    "IO task completed, tmf rtt=0x%x, cid=0x%x\n",
> +					    get_itt(tmf_hdr->rtt),
> +					    qedi_conn->iscsi_conn_id);
> +				return;
> +			}
> +
> +			dbg_cmd = task->dd_data;
> +
> +			QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +				  "Abort tmf rtt=0x%x, i/o itt=0x%x, i/o tid=0x%x, cid=0x%x\n",
> +				  get_itt(tmf_hdr->rtt), get_itt(task->itt),
> +				  dbg_cmd->task_id, qedi_conn->iscsi_conn_id);
> +
> +			if (qedi_cmd->state == CLEANUP_WAIT_FAILED)
> +				qedi_cmd->state = CLEANUP_RECV;
> +
> +			qedi_clear_task_idx(qedi_conn->qedi, rtid);
> +
> +			spin_lock(&qedi_conn->list_lock);
> +			list_del_init(&dbg_cmd->io_cmd);
> +			qedi_conn->active_cmd_count--;
> +			spin_unlock(&qedi_conn->list_lock);
> +			qedi_cmd->state = CLEANUP_RECV;
> +			wake_up_interruptible(&qedi_conn->wait_queue);
> +		}
> +	} else if (qedi_conn->cmd_cleanup_req > 0) {
> +		spin_lock_bh(&conn->session->back_lock);
> +		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
> +		protoitt = build_itt(ptmp_itt, conn->session->age);
> +		task = iscsi_itt_to_task(conn, protoitt);
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +			  "cleanup io itid=0x%x, protoitt=0x%x, cmd_cleanup_cmpl=%d, cid=0x%x\n",
> +			  cqe->itid, protoitt, qedi_conn->cmd_cleanup_cmpl,
> +			  qedi_conn->iscsi_conn_id);
> +
> +		spin_unlock_bh(&conn->session->back_lock);
> +		if (!task) {
> +			QEDI_NOTICE(&qedi->dbg_ctx,
> +				    "task is null, itid=0x%x, cid=0x%x\n",
> +				    cqe->itid, qedi_conn->iscsi_conn_id);
> +			return;
> +		}
> +		qedi_conn->cmd_cleanup_cmpl++;
> +		wake_up(&qedi_conn->wait_queue);
> +		cmd_new = task->dd_data;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
> +			  "Freeing tid=0x%x for cid=0x%x\n",
> +			  cqe->itid, qedi_conn->iscsi_conn_id);
> +		qedi_clear_task_idx(qedi_conn->qedi, cqe->itid);
> +
> +	} else {
> +		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
> +		protoitt = build_itt(ptmp_itt, conn->session->age);
> +		task = iscsi_itt_to_task(conn, protoitt);
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "Delayed or untracked cleanup response, itt=0x%x, tid=0x%x, cid=0x%x, task=%p\n",
> +			 protoitt, cqe->itid, qedi_conn->iscsi_conn_id, task);
> +		WARN_ON(1);
> +	}
> +}
> +
>  void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
>  			  uint16_t que_idx)
>  {
> @@ -619,6 +974,14 @@ void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
>  			break;
>  		}
>  		goto exit_fp_process;
> +	case ISCSI_CQE_TYPE_DUMMY:
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, "Dummy CqE\n");
> +		goto exit_fp_process;
> +	case ISCSI_CQE_TYPE_TASK_CLEANUP:
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, "CleanUp CqE\n");
> +		qedi_process_cmd_cleanup_resp(qedi, &cqe->cqe_solicited, task,
> +					      conn);
> +		goto exit_fp_process;
>  	default:
>  		QEDI_ERR(&qedi->dbg_ctx, "Error cqe.\n");
>  		break;
> @@ -904,6 +1267,440 @@ int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
>  	return 0;
>  }
>  
> +int qedi_cleanup_all_io(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
> +			struct iscsi_task *task, bool in_recovery)
> +{
> +	int rval;
> +	struct iscsi_task *ctask;
> +	struct qedi_cmd *cmd, *cmd_tmp;
> +	struct iscsi_tm *tmf_hdr;
> +	unsigned int lun = 0;
> +	bool lun_reset = false;
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +
> +	/* Called from recovery with task == NULL, or from a TMF response with a valid task */
> +	if (task) {
> +		tmf_hdr = (struct iscsi_tm *)task->hdr;
> +
> +		if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +			ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) {
> +			lun_reset = true;
> +			lun = scsilun_to_int(&tmf_hdr->lun);
> +		}
> +	}
> +
> +	qedi_conn->cmd_cleanup_req = 0;
> +	qedi_conn->cmd_cleanup_cmpl = 0;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +		  "active_cmd_count=%d, cid=0x%x, in_recovery=%d, lun_reset=%d\n",
> +		  qedi_conn->active_cmd_count, qedi_conn->iscsi_conn_id,
> +		  in_recovery, lun_reset);
> +
> +	if (lun_reset)
> +		spin_lock_bh(&session->back_lock);
> +
> +	spin_lock(&qedi_conn->list_lock);
> +
> +	list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
> +				 io_cmd) {
> +		ctask = cmd->task;
> +		if (ctask == task)
> +			continue;
> +
> +		if (lun_reset) {
> +			if (cmd->scsi_cmd && cmd->scsi_cmd->device) {
> +				QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +					  "tid=0x%x itt=0x%x scsi_cmd_ptr=%p device=%p task_state=%d cmd_state=0%x cid=0x%x\n",
> +					  cmd->task_id, get_itt(ctask->itt),
> +					  cmd->scsi_cmd, cmd->scsi_cmd->device,
> +					  ctask->state, cmd->state,
> +					  qedi_conn->iscsi_conn_id);
> +				if (cmd->scsi_cmd->device->lun != lun)
> +					continue;
> +			}
> +		}
> +		qedi_conn->cmd_cleanup_req++;
> +		qedi_iscsi_cleanup_task(ctask, true);
> +
> +		list_del_init(&cmd->io_cmd);
> +		qedi_conn->active_cmd_count--;
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "Deleted active cmd list node io_cmd=%p, cid=0x%x\n",
> +			  &cmd->io_cmd, qedi_conn->iscsi_conn_id);
> +	}
> +
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	if (lun_reset)
> +		spin_unlock_bh(&session->back_lock);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +		  "cmd_cleanup_req=%d, cid=0x%x\n",
> +		  qedi_conn->cmd_cleanup_req,
> +		  qedi_conn->iscsi_conn_id);
> +
> +	rval  = wait_event_interruptible_timeout(qedi_conn->wait_queue,
> +						 ((qedi_conn->cmd_cleanup_req ==
> +						 qedi_conn->cmd_cleanup_cmpl) ||
> +						 qedi_conn->ep),
> +						 5 * HZ);
> +	if (rval) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +			  "i/o cmd_cleanup_req=%d, equal to cmd_cleanup_cmpl=%d, cid=0x%x\n",
> +			  qedi_conn->cmd_cleanup_req,
> +			  qedi_conn->cmd_cleanup_cmpl,
> +			  qedi_conn->iscsi_conn_id);
> +
> +		return 0;
> +	}
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +		  "i/o cmd_cleanup_req=%d, not equal to cmd_cleanup_cmpl=%d, cid=0x%x\n",
> +		  qedi_conn->cmd_cleanup_req,
> +		  qedi_conn->cmd_cleanup_cmpl,
> +		  qedi_conn->iscsi_conn_id);
> +
> +	iscsi_host_for_each_session(qedi->shost,
> +				    qedi_mark_device_missing);
> +	qedi_ops->common->drain(qedi->cdev);
> +
> +	/* Enable I/O on all sessions except the current one. */
> +	if (!wait_event_interruptible_timeout(qedi_conn->wait_queue,
> +					      (qedi_conn->cmd_cleanup_req ==
> +					       qedi_conn->cmd_cleanup_cmpl),
> +					      5 * HZ)) {
> +		iscsi_host_for_each_session(qedi->shost,
> +					    qedi_mark_device_available);
> +		return -1;
> +	}
> +
> +	iscsi_host_for_each_session(qedi->shost,
> +				    qedi_mark_device_available);
> +
> +	return 0;
> +}
> +
> +void qedi_clearsq(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
> +		  struct iscsi_task *task)
> +{
> +	struct qedi_endpoint *qedi_ep;
> +	int rval;
> +
> +	qedi_ep = qedi_conn->ep;
> +	qedi_conn->cmd_cleanup_req = 0;
> +	qedi_conn->cmd_cleanup_cmpl = 0;
> +
> +	if (!qedi_ep) {
> +		QEDI_WARN(&qedi->dbg_ctx,
> +			  "Cannot proceed, ep already disconnected, cid=0x%x\n",
> +			  qedi_conn->iscsi_conn_id);
> +		return;
> +	}
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +		  "Clearing SQ for cid=0x%x, conn=%p, ep=%p\n",
> +		  qedi_conn->iscsi_conn_id, qedi_conn, qedi_ep);
> +
> +	qedi_ops->clear_sq(qedi->cdev, qedi_ep->handle);
> +
> +	rval = qedi_cleanup_all_io(qedi, qedi_conn, task, true);
> +	if (rval) {
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "fatal error, need hard reset, cid=0x%x\n",
> +			 qedi_conn->iscsi_conn_id);
> +		WARN_ON(1);
> +	}
> +}
> +
> +static int qedi_wait_for_cleanup_request(struct qedi_ctx *qedi,
> +					 struct qedi_conn *qedi_conn,
> +					 struct iscsi_task *task,
> +					 struct qedi_cmd *qedi_cmd,
> +					 struct qedi_work_map *list_work)
> +{
> +	struct qedi_cmd *cmd = (struct qedi_cmd *)task->dd_data;
> +	int wait;
> +
> +	wait  = wait_event_interruptible_timeout(qedi_conn->wait_queue,
> +						 ((qedi_cmd->state ==
> +						   CLEANUP_RECV) ||
> +						 ((qedi_cmd->type == TYPEIO) &&
> +						  (cmd->state ==
> +						   RESPONSE_RECEIVED))),
> +						 5 * HZ);
> +	if (!wait) {
> +		qedi_cmd->state = CLEANUP_WAIT_FAILED;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +			  "Cleanup timed out, tid=0x%x, issuing connection recovery, cid=0x%x\n",
> +			  cmd->task_id, qedi_conn->iscsi_conn_id);
> +
> +		return -1;
> +	}
> +	return 0;
> +}
> +
> +static void qedi_tmf_work(struct work_struct *work)
> +{
> +	struct qedi_cmd *qedi_cmd =
> +		container_of(work, struct qedi_cmd, tmf_work);
> +	struct qedi_conn *qedi_conn = qedi_cmd->conn;
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_cls_session *cls_sess;
> +	struct qedi_work_map *list_work = NULL;
> +	struct iscsi_task *mtask;
> +	struct qedi_cmd *cmd;
> +	struct iscsi_task *ctask;
> +	struct iscsi_tm *tmf_hdr;
> +	s16 rval = 0;
> +	s16 tid = 0;
> +
> +	mtask = qedi_cmd->task;
> +	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
> +	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
> +	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +
> +	ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
> +	if (!ctask || !ctask->sc) {
> +		QEDI_ERR(&qedi->dbg_ctx, "Task already completed\n");
> +		goto abort_ret;
> +	}
> +
> +	cmd = (struct qedi_cmd *)ctask->dd_data;
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +		  "Abort tmf rtt=0x%x, cmd itt=0x%x, cmd tid=0x%x, cid=0x%x\n",
> +		  get_itt(tmf_hdr->rtt), get_itt(ctask->itt), cmd->task_id,
> +		  qedi_conn->iscsi_conn_id);
> +
> +	if (do_not_recover) {
> +		QEDI_ERR(&qedi->dbg_ctx, "DONT SEND CLEANUP/ABORT %d\n",
> +			 do_not_recover);
> +		goto abort_ret;
> +	}
> +
> +	list_work = kzalloc(sizeof(*list_work), GFP_ATOMIC);
> +	if (!list_work) {
> +		QEDI_ERR(&qedi->dbg_ctx, "Memory allocation failed\n");
> +		goto abort_ret;
> +	}
> +
> +	qedi_cmd->type = TYPEIO;
> +	list_work->qedi_cmd = qedi_cmd;
> +	list_work->rtid = cmd->task_id;
> +	list_work->state = QEDI_WORK_SCHEDULED;
> +	qedi_cmd->list_tmf_work = list_work;
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +		  "Queue tmf work=%p, list node=%p, cid=0x%x, tmf flags=0x%x\n",
> +		  list_work->ptr_tmf_work, list_work, qedi_conn->iscsi_conn_id,
> +		  tmf_hdr->flags);
> +
> +	spin_lock_bh(&qedi_conn->tmf_work_lock);
> +	list_add_tail(&list_work->list, &qedi_conn->tmf_work_list);
> +	spin_unlock_bh(&qedi_conn->tmf_work_lock);
> +
> +	qedi_iscsi_cleanup_task(ctask, false);
> +
> +	rval = qedi_wait_for_cleanup_request(qedi, qedi_conn, ctask, qedi_cmd,
> +					     list_work);
> +	if (rval == -1) {
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
> +			  "FW cleanup got escalated, cid=0x%x\n",
> +			  qedi_conn->iscsi_conn_id);
> +		goto ldel_exit;
> +	}
> +
> +	tid = qedi_get_task_idx(qedi);
> +	if (tid == -1) {
> +		QEDI_ERR(&qedi->dbg_ctx, "Invalid tid, cid=0x%x\n",
> +			 qedi_conn->iscsi_conn_id);
> +		goto ldel_exit;
> +	}
> +
> +	qedi_cmd->task_id = tid;
> +	qedi_send_iscsi_tmf(qedi_conn, qedi_cmd->task);
> +
> +abort_ret:
> +	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +	return;
> +
> +ldel_exit:
> +	spin_lock_bh(&qedi_conn->tmf_work_lock);
> +	if (!qedi_cmd->list_tmf_work) {
> +		list_del_init(&list_work->list);
> +		qedi_cmd->list_tmf_work = NULL;
> +		kfree(list_work);
> +	}
> +	spin_unlock_bh(&qedi_conn->tmf_work_lock);
> +
> +	spin_lock(&qedi_conn->list_lock);
> +	list_del_init(&cmd->io_cmd);
> +	qedi_conn->active_cmd_count--;
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
> +}
> +
> +static int qedi_send_iscsi_tmf(struct qedi_conn *qedi_conn,
> +			       struct iscsi_task *mtask)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_task_context *fw_task_ctx;
> +	struct iscsi_tmf_request_hdr *fw_tmf_request;
> +	struct iscsi_sge *single_sge;
> +	struct qedi_cmd *qedi_cmd;
> +	struct qedi_cmd *cmd;
> +	struct iscsi_task *ctask;
> +	struct iscsi_tm *tmf_hdr;
> +	struct iscsi_sge *req_sge;
> +	struct iscsi_sge *resp_sge;
> +	u32 scsi_lun[2];
> +	s16 tid = 0, ptu_invalidate = 0;
> +
> +	req_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.req_bd_tbl;
> +	resp_sge = (struct iscsi_sge *)qedi_conn->gen_pdu.resp_bd_tbl;
> +	qedi_cmd = (struct qedi_cmd *)mtask->dd_data;
> +	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
> +
> +	tid = qedi_cmd->task_id;
> +	qedi_update_itt_map(qedi, tid, mtask->itt);
> +
> +	fw_task_ctx =
> +	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
> +
> +	fw_tmf_request = &fw_task_ctx->ystorm_st_context.pdu_hdr.tmf_request;
> +	fw_tmf_request->itt = qedi_set_itt(tid, get_itt(mtask->itt));
> +	fw_tmf_request->cmd_sn = be32_to_cpu(tmf_hdr->cmdsn);
> +
> +	memcpy(scsi_lun, &tmf_hdr->lun, sizeof(struct scsi_lun));
> +	fw_tmf_request->lun.lo = be32_to_cpu(scsi_lun[0]);
> +	fw_tmf_request->lun.hi = be32_to_cpu(scsi_lun[1]);
> +
> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
> +		ptu_invalidate = 1;
> +		qedi->tid_reuse_count[tid] = 0;
> +	}
> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
> +						qedi->tid_reuse_count[tid];
> +	fw_task_ctx->mstorm_st_context.reuse_count =
> +						qedi->tid_reuse_count[tid]++;
> +
> +	if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +	     ISCSI_TM_FUNC_ABORT_TASK) {
> +		ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
> +		if (!ctask || !ctask->sc) {
> +			QEDI_ERR(&qedi->dbg_ctx,
> +				 "Could not get reference task\n");
> +			return 0;
> +		}
> +		cmd = (struct qedi_cmd *)ctask->dd_data;
> +		fw_tmf_request->rtt =
> +				qedi_set_itt(cmd->task_id,
> +					     get_itt(tmf_hdr->rtt));
> +	} else {
> +		fw_tmf_request->rtt = ISCSI_RESERVED_TAG;
> +	}
> +
> +	fw_tmf_request->opcode = tmf_hdr->opcode;
> +	fw_tmf_request->function = tmf_hdr->flags;
> +	fw_tmf_request->hdr_second_dword = ntoh24(tmf_hdr->dlength);
> +	fw_tmf_request->ref_cmd_sn = be32_to_cpu(tmf_hdr->refcmdsn);
> +
> +	single_sge = &fw_task_ctx->mstorm_st_context.sgl_union.single_sge;
> +	fw_task_ctx->mstorm_st_context.task_type = ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->mstorm_ag_context.task_cid = (u16)qedi_conn->iscsi_conn_id;
> +	single_sge->sge_addr.lo = resp_sge->sge_addr.lo;
> +	single_sge->sge_addr.hi = resp_sge->sge_addr.hi;
> +	single_sge->sge_len = resp_sge->sge_len;
> +
> +	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +		  ISCSI_MFLAGS_SINGLE_SGE, 1);
> +	SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +		  ISCSI_MFLAGS_SLOW_IO, 0);
> +	fw_task_ctx->mstorm_st_context.sgl_size = 1;
> +	fw_task_ctx->mstorm_st_context.rem_task_size = resp_sge->sge_len;
> +
> +	/* Ustorm context */
> +	fw_task_ctx->ustorm_st_context.rem_rcv_len = 0;
> +	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = 0;
> +	fw_task_ctx->ustorm_st_context.exp_data_sn = 0;
> +	fw_task_ctx->ustorm_st_context.task_type =  ISCSI_TASK_TYPE_MIDPATH;
> +	fw_task_ctx->ustorm_st_context.cq_rss_number = 0;
> +
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
> +		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +		  ISCSI_REG1_NUM_FAST_SGES, 0);
> +
> +	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
> +	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
> +		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
> +	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
> +	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +		  "Add TMF to SQ, tmf tid=0x%x, itt=0x%x, cid=0x%x\n",
> +		  tid,  mtask->itt, qedi_conn->iscsi_conn_id);
> +
> +	spin_lock(&qedi_conn->list_lock);
> +	list_add_tail(&qedi_cmd->io_cmd, &qedi_conn->active_cmd_list);
> +	qedi_cmd->io_cmd_in_list = true;
> +	qedi_conn->active_cmd_count++;
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	qedi_add_to_sq(qedi_conn, mtask, tid, ptu_invalidate, false);
> +	qedi_ring_doorbell(qedi_conn);
> +	return 0;
> +}
> +
> +int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
> +			  struct iscsi_task *mtask)
> +{
> +	struct qedi_ctx *qedi = qedi_conn->qedi;
> +	struct iscsi_tm *tmf_hdr;
> +	struct qedi_cmd *qedi_cmd = (struct qedi_cmd *)mtask->dd_data;
> +	s16 tid = 0;
> +
> +	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
> +	qedi_cmd->task = mtask;
> +
> +	/* If abort task then schedule the work and return */
> +	if ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +	    ISCSI_TM_FUNC_ABORT_TASK) {
> +		qedi_cmd->state = CLEANUP_WAIT;
> +		INIT_WORK(&qedi_cmd->tmf_work, qedi_tmf_work);
> +		queue_work(qedi->tmf_thread, &qedi_cmd->tmf_work);
> +
> +	} else if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +		    ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
> +		   ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +		    ISCSI_TM_FUNC_TARGET_WARM_RESET) ||
> +		   ((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
> +		    ISCSI_TM_FUNC_TARGET_COLD_RESET)) {
> +		tid = qedi_get_task_idx(qedi);
> +		if (tid == -1) {
> +			QEDI_ERR(&qedi->dbg_ctx, "Invalid tid, cid=0x%x\n",
> +				 qedi_conn->iscsi_conn_id);
> +			return -1;
> +		}
> +		qedi_cmd->task_id = tid;
> +
> +		qedi_send_iscsi_tmf(qedi_conn, qedi_cmd->task);
> +
> +	} else {
> +		QEDI_ERR(&qedi->dbg_ctx, "Invalid tmf, cid=0x%x\n",
> +			 qedi_conn->iscsi_conn_id);
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
>  int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
>  			 struct iscsi_task *task)
>  {
> @@ -1121,3 +1918,488 @@ int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
>  	qedi_ring_doorbell(qedi_conn);
>  	return 0;
>  }
> +
> +static int qedi_split_bd(struct qedi_cmd *cmd, u64 addr, int sg_len,
> +			 int bd_index)
> +{
> +	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
> +	int frag_size, sg_frags;
> +
> +	sg_frags = 0;
> +
> +	while (sg_len) {
> +		if (addr % QEDI_PAGE_SIZE)
> +			frag_size =
> +				   (QEDI_PAGE_SIZE - (addr % QEDI_PAGE_SIZE));
> +		else
> +			frag_size = (sg_len > QEDI_BD_SPLIT_SZ) ? 0 :
> +				    (sg_len % QEDI_BD_SPLIT_SZ);
> +
> +		if (frag_size == 0)
> +			frag_size = QEDI_BD_SPLIT_SZ;
> +
> +		bd[bd_index + sg_frags].sge_addr.lo = (addr & 0xffffffff);
> +		bd[bd_index + sg_frags].sge_addr.hi = (addr >> 32);
> +		bd[bd_index + sg_frags].sge_len = (u16)frag_size;
> +		QEDI_INFO(&cmd->conn->qedi->dbg_ctx, QEDI_LOG_IO,
> +			  "split sge %d: addr=%llx, len=%x",
> +			  (bd_index + sg_frags), addr, frag_size);
> +
> +		addr += (u64)frag_size;
> +		sg_frags++;
> +		sg_len -= frag_size;
> +	}
> +	return sg_frags;
> +}
> +
> +static int qedi_map_scsi_sg(struct qedi_ctx *qedi, struct qedi_cmd *cmd)
> +{
> +	struct scsi_cmnd *sc = cmd->scsi_cmd;
> +	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
> +	struct scatterlist *sg;
> +	int byte_count = 0;
> +	int bd_count = 0;
> +	int sg_count;
> +	int sg_len;
> +	int sg_frags;
> +	u64 addr, end_addr;
> +	int i;
> +
> +	WARN_ON(scsi_sg_count(sc) > QEDI_ISCSI_MAX_BDS_PER_CMD);
> +
> +	sg_count = dma_map_sg(&qedi->pdev->dev, scsi_sglist(sc),
> +			      scsi_sg_count(sc), sc->sc_data_direction);
> +
> +	/*
> +	 * Use the cached-SGL path when there is a single SGE
> +	 * with length below 64K.
> +	 */
> +	sg = scsi_sglist(sc);
> +	if ((sg_count == 1) && (sg_dma_len(sg) <= MAX_SGLEN_FOR_CACHESGL)) {
> +		sg_len = sg_dma_len(sg);
> +		addr = (u64)sg_dma_address(sg);
> +
> +		bd[bd_count].sge_addr.lo = (addr & 0xffffffff);
> +		bd[bd_count].sge_addr.hi = (addr >> 32);
> +		bd[bd_count].sge_len = (u16)sg_len;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
> +			  "single-cached-sgl: bd_count:%d addr=%llx, len=%x",
> +			  sg_count, addr, sg_len);
> +
> +		return ++bd_count;
> +	}
> +
> +	scsi_for_each_sg(sc, sg, sg_count, i) {
> +		sg_len = sg_dma_len(sg);
> +		addr = (u64)sg_dma_address(sg);
> +		end_addr = (addr + sg_len);
> +
> +		/*
> +		 * first sg elem in the 'list',
> +		 * check if end addr is page-aligned.
> +		 */
> +		if ((i == 0) && (sg_count > 1) && (end_addr % QEDI_PAGE_SIZE))
> +			cmd->use_slowpath = true;
> +
> +		/*
> +		 * last sg elem in the 'list',
> +		 * check if start addr is page-aligned.
> +		 */
> +		else if ((i == (sg_count - 1)) &&
> +			 (sg_count > 1) && (addr % QEDI_PAGE_SIZE))
> +			cmd->use_slowpath = true;
> +
> +		/*
> +		 * middle sg elements in list,
> +		 * check if start and end addr is page-aligned
> +		 */
> +		else if ((i != 0) && (i != (sg_count - 1)) &&
> +			 ((addr % QEDI_PAGE_SIZE) ||
> +			 (end_addr % QEDI_PAGE_SIZE)))
> +			cmd->use_slowpath = true;
> +
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "sg[%d] size=0x%x",
> +			  i, sg_len);
> +
> +		if (sg_len > QEDI_BD_SPLIT_SZ) {
> +			sg_frags = qedi_split_bd(cmd, addr, sg_len, bd_count);
> +		} else {
> +			sg_frags = 1;
> +			bd[bd_count].sge_addr.lo = addr & 0xffffffff;
> +			bd[bd_count].sge_addr.hi = addr >> 32;
> +			bd[bd_count].sge_len = sg_len;
> +		}
> +		byte_count += sg_len;
> +		bd_count += sg_frags;
> +	}
> +
> +	if (byte_count != scsi_bufflen(sc))
> +		QEDI_ERR(&qedi->dbg_ctx,
> +			 "byte_count = %d != scsi_bufflen = %d\n", byte_count,
> +			 scsi_bufflen(sc));
> +	else
> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "byte_count = %d\n",
> +			  byte_count);
> +
> +	WARN_ON(byte_count != scsi_bufflen(sc));
> +
> +	return bd_count;
> +}
> +
> +static void qedi_iscsi_map_sg_list(struct qedi_cmd *cmd)
> +{
> +	int bd_count;
> +	struct scsi_cmnd *sc = cmd->scsi_cmd;
> +
> +	if (scsi_sg_count(sc)) {
> +		bd_count  = qedi_map_scsi_sg(cmd->conn->qedi, cmd);
> +		if (bd_count == 0)
> +			return;
> +	} else {
> +		struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
> +
> +		bd[0].sge_addr.lo = 0;
> +		bd[0].sge_addr.hi = 0;
> +		bd[0].sge_len = 0;
> +		bd_count = 0;
> +	}
> +	cmd->io_tbl.sge_valid = bd_count;
> +}
> +
> +static void qedi_cpy_scsi_cdb(struct scsi_cmnd *sc, u32 *dstp)
> +{
> +	u32 dword;
> +	int lpcnt;
> +	u8 *srcp;
> +
> +	lpcnt = sc->cmd_len / sizeof(dword);
> +	srcp = (u8 *)sc->cmnd;
> +	while (lpcnt--) {
> +		memcpy(&dword, (const void *)srcp, 4);
> +		*dstp = cpu_to_be32(dword);
> +		srcp += 4;
> +		dstp++;
> +	}
> +	if (sc->cmd_len & 0x3) {
> +		dword = (u32)srcp[0] | ((u32)srcp[1] << 8);
> +		*dstp = cpu_to_be32(dword);
> +	}
> +}
> +
> +void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
> +		   u16 tid, int8_t direction)
> +{
> +	struct qedi_io_log *io_log;
> +	struct iscsi_conn *conn = task->conn;
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct scsi_cmnd *sc_cmd = task->sc;
> +	unsigned long flags;
> +	u8 op;
> +
> +	spin_lock_irqsave(&qedi->io_trace_lock, flags);
> +
> +	io_log = &qedi->io_trace_buf[qedi->io_trace_idx];
> +	io_log->direction = direction;
> +	io_log->task_id = tid;
> +	io_log->cid = qedi_conn->iscsi_conn_id;
> +	io_log->lun = sc_cmd->device->lun;
> +	io_log->op = sc_cmd->cmnd[0];
> +	op = sc_cmd->cmnd[0];
> +
> +	if (op == READ_10 || op == WRITE_10) {
> +		io_log->lba[0] = sc_cmd->cmnd[2];
> +		io_log->lba[1] = sc_cmd->cmnd[3];
> +		io_log->lba[2] = sc_cmd->cmnd[4];
> +		io_log->lba[3] = sc_cmd->cmnd[5];
> +	} else {
> +		io_log->lba[0] = 0;
> +		io_log->lba[1] = 0;
> +		io_log->lba[2] = 0;
> +		io_log->lba[3] = 0;
> +	}
Only for READ_10 and WRITE_10? What about the other read or write commands?

> +	io_log->bufflen = scsi_bufflen(sc_cmd);
> +	io_log->sg_count = scsi_sg_count(sc_cmd);
> +	io_log->fast_sgs = qedi->fast_sgls;
> +	io_log->cached_sgs = qedi->cached_sgls;
> +	io_log->slow_sgs = qedi->slow_sgls;
> +	io_log->cached_sge = qedi->use_cached_sge;
> +	io_log->slow_sge = qedi->use_slow_sge;
> +	io_log->fast_sge = qedi->use_fast_sge;
> +	io_log->result = sc_cmd->result;
> +	io_log->jiffies = jiffies;
> +	io_log->blk_req_cpu = smp_processor_id();
> +
> +	if (direction == QEDI_IO_TRACE_REQ) {
> +		/* For requests we only care about the submission CPU */
> +		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
> +		io_log->intr_cpu = 0;
> +		io_log->blk_rsp_cpu = 0;
> +	} else if (direction == QEDI_IO_TRACE_RSP) {
> +		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
> +		io_log->intr_cpu = qedi->intr_cpu;
> +		io_log->blk_rsp_cpu = smp_processor_id();
> +	}
> +
> +	qedi->io_trace_idx++;
> +	if (qedi->io_trace_idx == QEDI_IO_TRACE_SIZE)
> +		qedi->io_trace_idx = 0;
> +
> +	qedi->use_cached_sge = false;
> +	qedi->use_slow_sge = false;
> +	qedi->use_fast_sge = false;
> +
> +	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
> +}
> +
> +int qedi_iscsi_send_ioreq(struct iscsi_task *task)
> +{
> +	struct iscsi_conn *conn = task->conn;
> +	struct iscsi_session *session = conn->session;
> +	struct Scsi_Host *shost = iscsi_session_to_shost(session->cls_session);
> +	struct qedi_ctx *qedi = iscsi_host_priv(shost);
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct qedi_cmd *cmd = task->dd_data;
> +	struct scsi_cmnd *sc = task->sc;
> +	struct iscsi_task_context *fw_task_ctx;
> +	struct iscsi_cached_sge_ctx *cached_sge;
> +	struct iscsi_phys_sgl_ctx *phys_sgl;
> +	struct iscsi_virt_sgl_ctx *virt_sgl;
> +	struct ystorm_iscsi_task_st_ctx *yst_cxt;
> +	struct mstorm_iscsi_task_st_ctx *mst_cxt;
> +	struct iscsi_sgl *sgl_struct;
> +	struct iscsi_sge *single_sge;
> +	struct iscsi_scsi_req *hdr = (struct iscsi_scsi_req *)task->hdr;
> +	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
> +	enum iscsi_task_type task_type;
> +	struct iscsi_cmd_hdr *fw_cmd;
> +	u32 scsi_lun[2];
> +	u16 cq_idx = smp_processor_id() % qedi->num_queues;
> +	s16 ptu_invalidate = 0;
> +	s16 tid = 0;
> +	u8 num_fast_sgs;
> +
> +	tid = qedi_get_task_idx(qedi);
> +	if (tid == -1)
> +		return -ENOMEM;
> +
> +	qedi_iscsi_map_sg_list(cmd);
> +
> +	int_to_scsilun(sc->device->lun, (struct scsi_lun *)scsi_lun);
> +	fw_task_ctx =
> +	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +
> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
> +	cmd->task_id = tid;
> +
> +	/* Ystrom context */
Ystrom or Ystorm?

> +	fw_cmd = &fw_task_ctx->ystorm_st_context.pdu_hdr.cmd;
> +	SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_ATTR, ISCSI_ATTR_SIMPLE);
> +
> +	if (sc->sc_data_direction == DMA_TO_DEVICE) {
> +		if (conn->session->initial_r2t_en) {
> +			fw_task_ctx->ustorm_ag_context.exp_data_acked =
> +				min((conn->session->imm_data_en *
> +				    conn->max_xmit_dlength),
> +				    conn->session->first_burst);
> +			fw_task_ctx->ustorm_ag_context.exp_data_acked =
> +			      min(fw_task_ctx->ustorm_ag_context.exp_data_acked,
> +				  scsi_bufflen(sc));
> +		} else {
> +			fw_task_ctx->ustorm_ag_context.exp_data_acked =
> +			      min(conn->session->first_burst, scsi_bufflen(sc));
> +		}
> +
> +		SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_WRITE, 1);
> +		task_type = ISCSI_TASK_TYPE_INITIATOR_WRITE;
> +	} else {
> +		if (scsi_bufflen(sc))
> +			SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_READ, 1);
> +		task_type = ISCSI_TASK_TYPE_INITIATOR_READ;
> +	}
> +
> +	fw_cmd->lun.lo = be32_to_cpu(scsi_lun[0]);
> +	fw_cmd->lun.hi = be32_to_cpu(scsi_lun[1]);
> +
> +	qedi_update_itt_map(qedi, tid, task->itt);
> +	fw_cmd->itt = qedi_set_itt(tid, get_itt(task->itt));
> +	fw_cmd->expected_transfer_length = scsi_bufflen(sc);
> +	fw_cmd->cmd_sn = be32_to_cpu(hdr->cmdsn);
> +	fw_cmd->opcode = hdr->opcode;
> +	qedi_cpy_scsi_cdb(sc, (u32 *)fw_cmd->cdb);
> +
> +	/* Mstorm context */
> +	fw_task_ctx->mstorm_st_context.sense_db.lo = (u32)cmd->sense_buffer_dma;
> +	fw_task_ctx->mstorm_st_context.sense_db.hi =
> +					(u32)((u64)cmd->sense_buffer_dma >> 32);
> +	fw_task_ctx->mstorm_ag_context.task_cid = qedi_conn->iscsi_conn_id;
> +	fw_task_ctx->mstorm_st_context.task_type = task_type;
> +
> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
> +		ptu_invalidate = 1;
> +		qedi->tid_reuse_count[tid] = 0;
> +	}
> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
> +						     qedi->tid_reuse_count[tid];
> +	fw_task_ctx->mstorm_st_context.reuse_count =
> +						   qedi->tid_reuse_count[tid]++;
> +
> +	/* Ustrorm context */
Ustrorm?

> +	fw_task_ctx->ustorm_st_context.rem_rcv_len = scsi_bufflen(sc);
> +	fw_task_ctx->ustorm_st_context.exp_data_transfer_len = scsi_bufflen(sc);
> +	fw_task_ctx->ustorm_st_context.exp_data_sn =
> +						   be32_to_cpu(hdr->exp_statsn);
> +	fw_task_ctx->ustorm_st_context.task_type = task_type;
> +	fw_task_ctx->ustorm_st_context.cq_rss_number = cq_idx;
> +	fw_task_ctx->ustorm_ag_context.icid = (u16)qedi_conn->iscsi_conn_id;
> +
> +	SET_FIELD(fw_task_ctx->ustorm_ag_context.flags1,
> +		  USTORM_ISCSI_TASK_AG_CTX_R2T2RECV, 1);
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.flags,
> +		  USTORM_ISCSI_TASK_ST_CTX_LOCAL_COMP, 0);
> +
> +	num_fast_sgs = (cmd->io_tbl.sge_valid ?
> +			min((u16)QEDI_FAST_SGE_COUNT,
> +			    (u16)cmd->io_tbl.sge_valid) : 0);
> +	SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +		  ISCSI_REG1_NUM_FAST_SGES, num_fast_sgs);
> +
> +	fw_task_ctx->ustorm_st_context.lun.lo = be32_to_cpu(scsi_lun[0]);
> +	fw_task_ctx->ustorm_st_context.lun.hi = be32_to_cpu(scsi_lun[1]);
> +
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO, "Total sge count [%d]\n",
> +		  cmd->io_tbl.sge_valid);
> +
> +	yst_cxt = &fw_task_ctx->ystorm_st_context;
> +	mst_cxt = &fw_task_ctx->mstorm_st_context;
> +	/* Tx path */
> +	if (task_type == ISCSI_TASK_TYPE_INITIATOR_WRITE) {
> +		/* not considering  superIO or FastIO */
> +		if (cmd->io_tbl.sge_valid == 1) {
> +			cached_sge = &yst_cxt->state.sgl_ctx_union.cached_sge;
> +			cached_sge->sge.sge_addr.lo = bd[0].sge_addr.lo;
> +			cached_sge->sge.sge_addr.hi = bd[0].sge_addr.hi;
> +			cached_sge->sge.sge_len = bd[0].sge_len;
> +			qedi->cached_sgls++;
> +		} else if ((cmd->io_tbl.sge_valid != 1) && cmd->use_slowpath) {
> +			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +				  ISCSI_MFLAGS_SLOW_IO, 1);
> +			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +				  ISCSI_REG1_NUM_FAST_SGES, 0);
> +			phys_sgl = &yst_cxt->state.sgl_ctx_union.phys_sgl;
> +			phys_sgl->sgl_base.lo = (u32)(cmd->io_tbl.sge_tbl_dma);
> +			phys_sgl->sgl_base.hi =
> +				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
> +			phys_sgl->sgl_size = cmd->io_tbl.sge_valid;
> +			qedi->slow_sgls++;
> +		} else if ((cmd->io_tbl.sge_valid != 1) && !cmd->use_slowpath) {
> +			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +				  ISCSI_MFLAGS_SLOW_IO, 0);
> +			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +				  ISCSI_REG1_NUM_FAST_SGES,
> +				  min((u16)QEDI_FAST_SGE_COUNT,
> +				      (u16)cmd->io_tbl.sge_valid));
> +			virt_sgl = &yst_cxt->state.sgl_ctx_union.virt_sgl;
> +			virt_sgl->sgl_base.lo = (u32)(cmd->io_tbl.sge_tbl_dma);
> +			virt_sgl->sgl_base.hi =
> +				      (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
> +			virt_sgl->sgl_initial_offset =
> +				 (u32)bd[0].sge_addr.lo & (QEDI_PAGE_SIZE - 1);
> +			qedi->fast_sgls++;
> +		}
> +		fw_task_ctx->mstorm_st_context.sgl_size = cmd->io_tbl.sge_valid;
> +		fw_task_ctx->mstorm_st_context.rem_task_size = scsi_bufflen(sc);
> +	} else {
> +	/* Rx path */
> +		if (cmd->io_tbl.sge_valid == 1) {
> +			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +				  ISCSI_MFLAGS_SLOW_IO, 0);
> +			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +				  ISCSI_MFLAGS_SINGLE_SGE, 1);
> +			single_sge = &mst_cxt->sgl_union.single_sge;
> +			single_sge->sge_addr.lo = bd[0].sge_addr.lo;
> +			single_sge->sge_addr.hi = bd[0].sge_addr.hi;
> +			single_sge->sge_len = bd[0].sge_len;
> +			qedi->cached_sgls++;
> +		} else if ((cmd->io_tbl.sge_valid != 1) && cmd->use_slowpath) {
> +			sgl_struct = &mst_cxt->sgl_union.sgl_struct;
> +			sgl_struct->sgl_addr.lo =
> +						(u32)(cmd->io_tbl.sge_tbl_dma);
> +			sgl_struct->sgl_addr.hi =
> +				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
> +			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +				  ISCSI_MFLAGS_SLOW_IO, 1);
> +			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +				  ISCSI_REG1_NUM_FAST_SGES, 0);
> +			sgl_struct->updated_sge_size = 0;
> +			sgl_struct->updated_sge_offset = 0;
> +			qedi->slow_sgls++;
> +		} else if ((cmd->io_tbl.sge_valid != 1) && !cmd->use_slowpath) {
> +			sgl_struct = &mst_cxt->sgl_union.sgl_struct;
> +			sgl_struct->sgl_addr.lo =
> +						(u32)(cmd->io_tbl.sge_tbl_dma);
> +			sgl_struct->sgl_addr.hi =
> +				     (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32);
> +			sgl_struct->byte_offset =
> +				(u32)bd[0].sge_addr.lo & (QEDI_PAGE_SIZE - 1);
> +			SET_FIELD(fw_task_ctx->mstorm_st_context.flags.mflags,
> +				  ISCSI_MFLAGS_SLOW_IO, 0);
> +			SET_FIELD(fw_task_ctx->ustorm_st_context.reg1.reg1_map,
> +				  ISCSI_REG1_NUM_FAST_SGES, 0);
> +			sgl_struct->updated_sge_size = 0;
> +			sgl_struct->updated_sge_offset = 0;
> +			qedi->fast_sgls++;
> +		}
> +		fw_task_ctx->mstorm_st_context.sgl_size = cmd->io_tbl.sge_valid;
> +		fw_task_ctx->mstorm_st_context.rem_task_size = scsi_bufflen(sc);
> +	}
> +
> +	if (cmd->io_tbl.sge_valid == 1)
> +		/* Singel-SGL */
> +		qedi->use_cached_sge = true;
> +	else {
> +		if (cmd->use_slowpath)
> +			qedi->use_slow_sge = true;
> +		else
> +			qedi->use_fast_sge = true;
> +	}
> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
> +		  "%s: %s-SGL: num_sges=0x%x first-sge-lo=0x%x first-sge-hi=0x%x",
> +		  (task_type == ISCSI_TASK_TYPE_INITIATOR_WRITE) ?
> +		  "Write " : "Read ", (cmd->io_tbl.sge_valid == 1) ?
> +		  "Single" : (cmd->use_slowpath ? "SLOW" : "FAST"),
> +		  (u16)cmd->io_tbl.sge_valid, (u32)(cmd->io_tbl.sge_tbl_dma),
> +		  (u32)((u64)cmd->io_tbl.sge_tbl_dma >> 32));
> +
> +	/*  Add command in active command list */
> +	spin_lock(&qedi_conn->list_lock);
> +	list_add_tail(&cmd->io_cmd, &qedi_conn->active_cmd_list);
> +	cmd->io_cmd_in_list = true;
> +	qedi_conn->active_cmd_count++;
> +	spin_unlock(&qedi_conn->list_lock);
> +
> +	qedi_add_to_sq(qedi_conn, task, tid, ptu_invalidate, false);
> +	qedi_ring_doorbell(qedi_conn);
> +	if (io_tracing)
> +		qedi_trace_io(qedi, task, tid, QEDI_IO_TRACE_REQ);
> +
> +	return 0;
> +}
> +
> +int qedi_iscsi_cleanup_task(struct iscsi_task *task, bool mark_cmd_node_deleted)
> +{
> +	struct iscsi_conn *conn = task->conn;
> +	struct qedi_conn *qedi_conn = conn->dd_data;
> +	struct qedi_cmd *cmd = task->dd_data;
> +	s16 ptu_invalidate = 0;
> +
> +	QEDI_INFO(&qedi_conn->qedi->dbg_ctx, QEDI_LOG_SCSI_TM,
> +		  "issue cleanup tid=0x%x itt=0x%x task_state=%d cmd_state=0%x cid=0x%x\n",
> +		  cmd->task_id, get_itt(task->itt), task->state,
> +		  cmd->state, qedi_conn->iscsi_conn_id);
> +
> +	qedi_add_to_sq(qedi_conn, task, cmd->task_id, ptu_invalidate, true);
> +	qedi_ring_doorbell(qedi_conn);
> +
> +	return 0;
> +}
> diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
> index 85ea3d7..c50c2b1 100644
> --- a/drivers/scsi/qedi/qedi_gbl.h
> +++ b/drivers/scsi/qedi/qedi_gbl.h
> @@ -28,11 +28,14 @@ int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
>  			  struct iscsi_task *task);
>  int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
>  			   struct iscsi_task *task);
> +int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
> +			  struct iscsi_task *mtask);
>  int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
>  			 struct iscsi_task *task);
>  int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
>  			   struct iscsi_task *task,
>  			   char *datap, int data_len, int unsol);
> +int qedi_iscsi_send_ioreq(struct iscsi_task *task);
>  int qedi_get_task_idx(struct qedi_ctx *qedi);
>  void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
>  int qedi_iscsi_cleanup_task(struct iscsi_task *task,
> @@ -53,6 +56,9 @@ void qedi_start_conn_recovery(struct qedi_ctx *qedi,
>  int qedi_recover_all_conns(struct qedi_ctx *qedi);
>  void qedi_fp_process_cqes(struct qedi_ctx *qedi, union iscsi_cqe *cqe,
>  			  uint16_t que_idx);
> +int qedi_cleanup_all_io(struct qedi_ctx *qedi,
> +			struct qedi_conn *qedi_conn,
> +			struct iscsi_task *task, bool in_recovery);
>  void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
>  		   u16 tid, int8_t direction);
>  int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
> diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
> index caecdb8..7a07211 100644
> --- a/drivers/scsi/qedi/qedi_iscsi.c
> +++ b/drivers/scsi/qedi/qedi_iscsi.c
> @@ -755,6 +755,9 @@ static int qedi_iscsi_send_generic_request(struct iscsi_task *task)
>  	case ISCSI_OP_LOGOUT:
>  		rc = qedi_send_iscsi_logout(qedi_conn, task);
>  		break;
> +	case ISCSI_OP_SCSI_TMFUNC:
> +		rc = qedi_iscsi_abort_work(qedi_conn, task);
> +		break;
>  	case ISCSI_OP_TEXT:
>  		rc = qedi_send_iscsi_text(qedi_conn, task);
>  		break;
> @@ -804,6 +807,9 @@ static int qedi_task_xmit(struct iscsi_task *task)
>  
>  	if (!sc)
>  		return qedi_mtask_xmit(conn, task);
> +
> +	cmd->scsi_cmd = sc;
> +	return qedi_iscsi_send_ioreq(task);
>  }
>  
>  static struct iscsi_endpoint *
> diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
> index 22d19a3..fd0d335 100644
> --- a/drivers/scsi/qedi/qedi_main.c
> +++ b/drivers/scsi/qedi/qedi_main.c
> @@ -43,6 +43,10 @@
>  module_param(debug, uint, S_IRUGO | S_IWUSR);
>  MODULE_PARM_DESC(debug, " Default debug level");
>  
> +uint io_tracing;
> +module_param(io_tracing, uint, S_IRUGO | S_IWUSR);
> +MODULE_PARM_DESC(io_tracing,
> +		 " Enable logging of SCSI requests/completions into trace buffer. (default off).");
>  const struct qed_iscsi_ops *qedi_ops;
>  static struct scsi_transport_template *qedi_scsi_transport;
>  static struct pci_driver qedi_pci_driver;
> 
Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 5/6] qedi: Add support for iSCSI session management.
  2016-10-19  5:01   ` manish.rangankar
  (?)
  (?)
@ 2016-10-19 13:28   ` Johannes Thumshirn
  2016-10-20  9:12     ` Rangankar, Manish
  -1 siblings, 1 reply; 38+ messages in thread
From: Johannes Thumshirn @ 2016-10-19 13:28 UTC (permalink / raw)
  To: manish.rangankar
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Yuval.Mintz, QLogic-Storage-Upstream, Nilesh Javali,
	Adheer Chandravanshi, Chad Dupuis, Saurav Kashyap, Arun Easi

On Wed, Oct 19, 2016 at 01:01:12AM -0400, manish.rangankar@cavium.com wrote:
> From: Manish Rangankar <manish.rangankar@cavium.com>
> 
> This patch adds support for iscsi_transport LLD Login,
> Logout, NOP-IN/NOP-OUT, Async, Reject PDU processing
> and Firmware async event handling support.
> 
> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
> Signed-off-by: Arun Easi <arun.easi@cavium.com>
> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
> ---

[...]

> +void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd)
> +{
> +	struct scsi_cmnd *sc = cmd->scsi_cmd;
> +
> +	if (cmd->io_tbl.sge_valid && sc) {
> +		scsi_dma_unmap(sc);
> +		cmd->io_tbl.sge_valid = 0;
> +	}
> +}

Maybe set sge_valid to 0 and then call scsi_dma_unmap(). I don't know if it's
really racy but it looks like it is.

[...]

> +static void qedi_process_text_resp(struct qedi_ctx *qedi,
> +				   union iscsi_cqe *cqe,
> +				   struct iscsi_task *task,
> +				   struct qedi_conn *qedi_conn)
> +{
> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
> +	struct iscsi_session *session = conn->session;
> +	struct iscsi_task_context *task_ctx;
> +	struct iscsi_text_rsp *resp_hdr_ptr;
> +	struct iscsi_text_response_hdr *cqe_text_response;
> +	struct qedi_cmd *cmd;
> +	int pld_len;
> +	u32 *tmp;
> +
> +	cmd = (struct qedi_cmd *)task->dd_data;
> +	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
> +								  cmd->task_id);

No need to cast here, qedi_get_task_mem() returns void *.

[...]

> +	cqe_login_response = &cqe->cqe_common.iscsi_hdr.login_response;
> +	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
> +							  cmd->task_id);

Same here.

[...]

> +	}
> +
> +	pbl = (struct scsi_bd *)qedi->bdq_pbl;
> +	pbl += (qedi->bdq_prod_idx % qedi->rq_num_entries);
> +	pbl->address.hi =
> +		      cpu_to_le32((u32)(((u64)(qedi->bdq[idx].buf_dma)) >> 32));
> +	pbl->address.lo =
> +			cpu_to_le32(((u32)(((u64)(qedi->bdq[idx].buf_dma)) &
> +					    0xffffffff)));

Is this LISP or C?

> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
> +		  "pbl [0x%p] pbl->address hi [0x%llx] lo [0x%llx] idx [%d]\n",
> +		  pbl, pbl->address.hi, pbl->address.lo, idx);
> +	pbl->opaque.hi = cpu_to_le32((u32)(((u64)0) >> 32));

Isn't this plain pbl->opaque.hi = 0; ?

> +	pbl->opaque.lo = cpu_to_le32(((u32)(((u64)idx) & 0xffffffff)));
> +

[...]

> +	switch (comp_type) {
> +	case ISCSI_CQE_TYPE_SOLICITED:
> +	case ISCSI_CQE_TYPE_SOLICITED_WITH_SENSE:
> +		fw_task_ctx =
> +		  (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
> +						      cqe->cqe_solicited.itid);

Again, no cast needed.

[...]

> +	writel(*(u32 *)&dbell, qedi_conn->ep->p_doorbell);
> +	/* Make sure fw idx is coherent */
> +	wmb();
> +	mmiowb();

Isn't either wmb() or mmiowb() enough?

[..]

> +
> +	fw_task_ctx =
> +	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);

Cast again.

[...]

> +	fw_task_ctx =
> +	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);

^^

[...]

> +	fw_task_ctx =
> +	(struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);


[...]

> +	fw_task_ctx =
> +	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
> +

[...]

> +
> +	qedi = (struct qedi_ctx *)iscsi_host_priv(shost);

Same goes for iscsi_host_priv();

[...]

> +	ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
> +					       ((qedi_ep->state ==
> +						EP_STATE_OFLDCONN_FAILED) ||
> +						(qedi_ep->state ==
> +						EP_STATE_OFLDCONN_COMPL)),
> +						msecs_to_jiffies(timeout_ms));

Maybe:
#define QEDI_OFLDCONN_STATE(q) ((q)->state == EP_STATE_OFLDCONN_FAILED || \
				(q)->state == EP_STATE_OFLDCONN_COMPL)

ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
					QEDI_OFLDCONN_STATE(qedi_ep),
					msecs_to_jiffies(timeout_ms));

But that could be just me hating linewraps.

[...]

Thanks,
	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
  2016-10-19  7:31   ` Hannes Reinecke
@ 2016-10-19 22:28       ` Arun Easi
  0 siblings, 0 replies; 38+ messages in thread
From: Arun Easi @ 2016-10-19 22:28 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: manish.rangankar, lduncan, cleech, martin.petersen, jejb,
	linux-scsi, netdev, Yuval.Mintz, QLogic-Storage-Upstream,
	Yuval Mintz

Thanks Hannes for the review. Please see my comments inline.

On Wed, 19 Oct 2016, 12:31am, Hannes Reinecke wrote:

> On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> > From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> > 
> > This adds the backbone required for the various HW initalizations
> > which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
> > 4xxxx line of adapters - FW notification, resource initializations, etc.
> > 
> > Signed-off-by: Arun Easi <arun.easi@cavium.com>
> > Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> > ---
> >  drivers/net/ethernet/qlogic/Kconfig            |   15 +
> >  drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
> >  drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
> >  drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
> >  drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
> >  drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
> >  drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
> >  drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
> >  drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
> >  drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
> >  include/linux/qed/qed_if.h                     |    2 +
> >  include/linux/qed/qed_iscsi_if.h               |  249 +++++
> >  15 files changed, 1692 insertions(+), 22 deletions(-)
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> >  create mode 100644 include/linux/qed/qed_iscsi_if.h
> > 

-- snipped --

> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> > new file mode 100644
> > index 0000000..cb22dad
> > --- /dev/null
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> > @@ -0,0 +1,1310 @@
> > +/* QLogic qed NIC Driver
> 
> Shouldn't that be qedi iSCSI Driver?

Actually, this is the common module under drivers/net/, which was 
submitted along with the NIC driver, qede, so the comment stayed.

In this driver architecture there is one common module, qed, shared by all
protocols, plus a protocol-specific module for each (qede, qedr, qedi,
etc.).

This comment needs to be changed in all files under qed/. We will submit 
another patch to do that.

> > +static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
> > +				     struct qed_iscsi_conn *p_conn,
> > +				     enum spq_mode comp_mode,
> > +				     struct qed_spq_comp_cb *p_comp_addr)
> > +{
> > +	struct iscsi_spe_conn_offload *p_ramrod = NULL;
> > +	struct tcp_offload_params_opt2 *p_tcp2 = NULL;
> > +	struct tcp_offload_params *p_tcp = NULL;
> > +	struct qed_spq_entry *p_ent = NULL;
> > +	struct qed_sp_init_data init_data;
> > +	union qed_qm_pq_params pq_params;
> > +	u16 pq0_id = 0, pq1_id = 0;
> > +	dma_addr_t r2tq_pbl_addr;
> > +	dma_addr_t xhq_pbl_addr;
> > +	dma_addr_t uhq_pbl_addr;
> > +	int rc = 0;
> > +	u32 dval;
> > +	u16 wval;
> > +	u8 ucval;
> > +	u8 i;
> > +
> > +	/* Get SPQ entry */
> > +	memset(&init_data, 0, sizeof(init_data));
> > +	init_data.cid = p_conn->icid;
> > +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> > +	init_data.comp_mode = comp_mode;
> > +	init_data.p_comp_data = p_comp_addr;
> > +
> > +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> > +				 ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
> > +				 PROTOCOLID_ISCSI, &init_data);
> > +	if (rc)
> > +		return rc;
> > +
> > +	p_ramrod = &p_ent->ramrod.iscsi_conn_offload;
> > +
> > +	/* Transmission PQ is the first of the PF */
> > +	memset(&pq_params, 0, sizeof(pq_params));
> > +	pq0_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
> > +	p_conn->physical_q0 = cpu_to_le16(pq0_id);
> > +	p_ramrod->iscsi.physical_q0 = cpu_to_le16(pq0_id);
> > +
> > +	/* iSCSI Pure-ACK PQ */
> > +	pq_params.iscsi.q_idx = 1;
> > +	pq1_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
> > +	p_conn->physical_q1 = cpu_to_le16(pq1_id);
> > +	p_ramrod->iscsi.physical_q1 = cpu_to_le16(pq1_id);
> > +
> > +	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN;
> > +	SET_FIELD(p_ramrod->hdr.flags, ISCSI_SLOW_PATH_HDR_LAYER_CODE,
> > +		  p_conn->layer_code);
> > +
> > +	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
> > +	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
> > +
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.sq_pbl_addr, p_conn->sq_pbl_addr);
> > +
> > +	r2tq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->r2tq);
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.r2tq_pbl_addr, r2tq_pbl_addr);
> > +
> > +	xhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->xhq);
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.xhq_pbl_addr, xhq_pbl_addr);
> > +
> > +	uhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->uhq);
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.uhq_pbl_addr, uhq_pbl_addr);
> > +
> > +	p_ramrod->iscsi.initial_ack = cpu_to_le32(p_conn->initial_ack);
> > +	p_ramrod->iscsi.flags = p_conn->offl_flags;
> > +	p_ramrod->iscsi.default_cq = p_conn->default_cq;
> > +	p_ramrod->iscsi.stat_sn = cpu_to_le32(p_conn->stat_sn);
> > +
> > +	if (!GET_FIELD(p_ramrod->iscsi.flags,
> > +		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> > +		p_tcp = &p_ramrod->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
> > +
> This looks terribly like endianness swapping. You sure this is
> applicable for all architecture and endianness settings?
> And wouldn't it be better to use one of the get_unaligned_XXX functions
> here?

The MAC address in the p_tcp structure is stored in the reverse byte order
from the one in p_conn. A for loop, or three swab16p() calls per copy,
would also do; we will make that change.

> 
> > +		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
> > +
> > +		p_tcp->flags = p_conn->tcp_flags;
> > +		p_tcp->ip_version = p_conn->ip_version;
> > +		for (i = 0; i < 4; i++) {
> > +			dval = p_conn->remote_ip[i];
> > +			p_tcp->remote_ip[i] = cpu_to_le32(dval);
> > +			dval = p_conn->local_ip[i];
> > +			p_tcp->local_ip[i] = cpu_to_le32(dval);
> > +		}
> > +		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
> > +		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
> > +
> > +		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
> > +		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
> > +		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
> > +		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
> > +		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
> > +		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
> > +		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
> > +		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
> > +		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
> > +		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
> > +		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
> > +		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
> > +		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
> > +		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
> > +		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
> > +		dval = p_conn->ka_timeout_delta;
> > +		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
> > +		dval = p_conn->rt_timeout_delta;
> > +		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
> > +		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
> > +		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
> > +		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
> > +		p_tcp->rt_cnt = p_conn->rt_cnt;
> > +		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
> > +		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
> > +		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
> > +		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
> > +		dval = p_conn->initial_rcv_wnd;
> > +		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
> > +		p_tcp->ttl = p_conn->ttl;
> > +		p_tcp->tos_or_tc = p_conn->tos_or_tc;
> > +		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
> > +		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
> > +		p_tcp->mss = cpu_to_le16(p_conn->mss);
> > +		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
> > +		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> > +		dval = p_conn->ts_ticks_per_second;
> > +		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
> > +		wval = p_conn->da_timeout_value;
> > +		p_tcp->da_timeout_value = cpu_to_le16(wval);
> > +		p_tcp->ack_frequency = p_conn->ack_frequency;
> > +		p_tcp->connect_mode = p_conn->connect_mode;
> > +	} else {
> > +		p_tcp2 =
> > +		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
> > +
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
> > +
> Same here.

Noted.

> 
> > +		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);

-- snip --

> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.h b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> > new file mode 100644
> > index 0000000..269848c
> > --- /dev/null
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> > @@ -0,0 +1,52 @@
> > +/* QLogic qed NIC Driver
> > + * Copyright (c) 2015 QLogic Corporation
> > + *
> > + * This software is available under the terms of the GNU General Public License
> > + * (GPL) Version 2, available from the file COPYING in the main directory of
> > + * this source tree.
> > + */
> > +
> > +#ifndef _QED_ISCSI_H
> > +#define _QED_ISCSI_H
> > +#include <linux/types.h>
> > +#include <linux/list.h>
> > +#include <linux/slab.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/qed/tcp_common.h>
> > +#include <linux/qed/qed_iscsi_if.h>
> > +#include <linux/qed/qed_chain.h>
> > +#include "qed.h"
> > +#include "qed_hsi.h"
> > +#include "qed_mcp.h"
> > +#include "qed_sp.h"
> > +
> > +struct qed_iscsi_info {
> > +	spinlock_t lock;
> > +	struct list_head free_list;
> > +	u16 max_num_outstanding_tasks;
> > +	void *event_context;
> > +	iscsi_event_cb_t event_cb;
> > +};
> > +
> > +#ifdef CONFIG_QED_LL2
> > +extern const struct qed_ll2_ops qed_ll2_ops_pass;
> > +#endif
> > +
> > +#if IS_ENABLED(CONFIG_QEDI)
> > +struct qed_iscsi_info *qed_iscsi_alloc(struct qed_hwfn *p_hwfn);
> > +
> > +void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
> > +		     struct qed_iscsi_info *p_iscsi_info);
> > +
> > +void qed_iscsi_free(struct qed_hwfn *p_hwfn,
> > +		    struct qed_iscsi_info *p_iscsi_info);
> > +#else /* IS_ENABLED(CONFIG_QEDI) */
> > +static inline struct qed_iscsi_info *qed_iscsi_alloc(
> > +		struct qed_hwfn *p_hwfn) { return NULL; }
> > +static inline void qed_iscsi_setup(struct qed_hwfn *p_hwfn,
> > +		struct qed_iscsi_info *p_iscsi_info) {}
> > +static inline void qed_iscsi_free(struct qed_hwfn *p_hwfn,
> > +		struct qed_iscsi_info *p_iscsi_info) {}
> > +#endif /* IS_ENABLED(CONFIG_QEDI) */
> > +
> > +#endif
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
> > index ddd410a..07e2f77 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
> > @@ -2187,6 +2187,5 @@ const struct qed_eth_ops *qed_get_eth_ops(void)
> >  
> >  void qed_put_eth_ops(void)
> >  {
> > -	/* TODO - reference count for module? */
> >  }
> >  EXPORT_SYMBOL(qed_put_eth_ops);
> >
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> > index a6db107..e67f3c9 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
> > @@ -299,6 +299,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
> >  		p_tx->cur_completing_bd_idx = 1;
> >  		b_last_frag = p_tx->cur_completing_bd_idx == p_pkt->bd_used;
> >  		tx_frag = p_pkt->bds_set[0].tx_frag;
> > +#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
> >  		if (p_ll2_conn->gsi_enable)
> >  			qed_ll2b_release_tx_gsi_packet(p_hwfn,
> >  						       p_ll2_conn->my_id,
> > @@ -307,6 +308,7 @@ static void qed_ll2_txq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
> >  						       b_last_frag,
> >  						       b_last_packet);
> >  		else
> > +#endif
> >  			qed_ll2b_complete_tx_packet(p_hwfn,
> >  						    p_ll2_conn->my_id,
> >  						    p_pkt->cookie,
> Huh? What is that doing here?
> 

This is the InfiniBand part of the common module. The "#if" was added
to prevent a compile error when the InfiniBand part is not built in
(as is the case here, for iSCSI).

BTW, another patch to fix that was already submitted by Yuval M.; this
RFC just came in between. We will be pulling in that change for the
next series.

> > @@ -367,6 +369,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
> >  
> >  		spin_unlock_irqrestore(&p_tx->lock, flags);
> >  		tx_frag = p_pkt->bds_set[0].tx_frag;
> > +#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
> >  		if (p_ll2_conn->gsi_enable)
> >  			qed_ll2b_complete_tx_gsi_packet(p_hwfn,
> >  							p_ll2_conn->my_id,
> > @@ -374,6 +377,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
> >  							tx_frag,
> >  							b_last_frag, !num_bds);
> >  		else
> > +#endif
> >  			qed_ll2b_complete_tx_packet(p_hwfn,
> >  						    p_ll2_conn->my_id,
> >  						    p_pkt->cookie,
> > @@ -421,6 +425,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
> >  			  "Mismatch between active_descq and the LL2 Rx chain\n");
> >  	list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
> >  
> > +#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
> >  	spin_unlock_irqrestore(&p_rx->lock, lock_flags);
> >  	qed_ll2b_complete_rx_gsi_packet(p_hwfn,
> >  					p_ll2_info->my_id,
> > @@ -433,6 +438,7 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
> >  					src_mac_addrhi,
> >  					src_mac_addrlo, b_last_cqe);
> >  	spin_lock_irqsave(&p_rx->lock, lock_flags);
> > +#endif
> >  
> >  	return 0;
> >  }
> > @@ -1516,11 +1522,12 @@ static void qed_ll2_register_cb_ops(struct qed_dev *cdev,
> >  
> >  static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
> >  {
> > -	struct qed_ll2_info ll2_info;
> > +	struct qed_ll2_info *ll2_info;
> >  	struct qed_ll2_buffer *buffer;
> >  	enum qed_ll2_conn_type conn_type;
> >  	struct qed_ptt *p_ptt;
> >  	int rc, i;
> > +	u8 gsi_enable = 1;
> >  
> >  	/* Initialize LL2 locks & lists */
> >  	INIT_LIST_HEAD(&cdev->ll2->list);
> > @@ -1552,6 +1559,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
> >  	switch (QED_LEADING_HWFN(cdev)->hw_info.personality) {
> >  	case QED_PCI_ISCSI:
> >  		conn_type = QED_LL2_TYPE_ISCSI;
> > +		gsi_enable = 0;
> >  		break;
> >  	case QED_PCI_ETH_ROCE:
> >  		conn_type = QED_LL2_TYPE_ROCE;
> > @@ -1561,18 +1569,23 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
> >  	}
> >  
> >  	/* Prepare the temporary ll2 information */
> > -	memset(&ll2_info, 0, sizeof(ll2_info));
> > -	ll2_info.conn_type = conn_type;
> > -	ll2_info.mtu = params->mtu;
> > -	ll2_info.rx_drop_ttl0_flg = params->drop_ttl0_packets;
> > -	ll2_info.rx_vlan_removal_en = params->rx_vlan_stripping;
> > -	ll2_info.tx_tc = 0;
> > -	ll2_info.tx_dest = CORE_TX_DEST_NW;
> > -	ll2_info.gsi_enable = 1;
> > -
> > -	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), &ll2_info,
> > +	ll2_info = kzalloc(sizeof(*ll2_info), GFP_KERNEL);
> > +	if (!ll2_info) {
> > +		DP_INFO(cdev, "Failed to allocate LL2 info buffer\n");
> > +		goto fail;
> > +	}
> > +	ll2_info->conn_type = conn_type;
> > +	ll2_info->mtu = params->mtu;
> > +	ll2_info->rx_drop_ttl0_flg = params->drop_ttl0_packets;
> > +	ll2_info->rx_vlan_removal_en = params->rx_vlan_stripping;
> > +	ll2_info->tx_tc = 0;
> > +	ll2_info->tx_dest = CORE_TX_DEST_NW;
> > +	ll2_info->gsi_enable = gsi_enable;
> > +
> > +	rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), ll2_info,
> >  					QED_LL2_RX_SIZE, QED_LL2_TX_SIZE,
> >  					&cdev->ll2->handle);
> > +	kfree(ll2_info);
> >  	if (rc) {
> >  		DP_INFO(cdev, "Failed to acquire LL2 connection\n");
> >  		goto fail;
> Where is the benefit of this hunk? And is it related to iSCSI?

This hunk prevents a large-stack-frame warning (seen with gcc 4.8.3).
This is a common function, applicable to iSCSI as well.

> 
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
> > index 4ee3151..a01ad9d 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_main.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
> > @@ -1239,7 +1239,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
> >  	if (link.link_up)
> >  		if_link->link_up = true;
> >  
> > -	/* TODO - at the moment assume supported and advertised speed equal */
> >  	if_link->supported_caps = QED_LM_FIBRE_BIT;
> >  	if (params.speed.autoneg)
> >  		if_link->supported_caps |= QED_LM_Autoneg_BIT;
> > @@ -1294,7 +1293,6 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
> >  	if (link.link_up)
> >  		if_link->speed = link.speed;
> >  
> > -	/* TODO - fill duplex properly */
> >  	if_link->duplex = DUPLEX_FULL;
> >  	qed_mcp_get_media_type(hwfn->cdev, &media_type);
> >  	if_link->port = qed_get_port_type(media_type);
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
> > index dff520e..2e5f51b 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
> > @@ -314,9 +314,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
> >  
> >  /* Using hwfn number (and not pf_num) is required since in CMT mode,
> >   * same pf_num may be used by two different hwfn
> > - * TODO - this shouldn't really be in .h file, but until all fields
> > - * required during hw-init will be placed in their correct place in shmem
> > - * we need it in qed_dev.c [for readin the nvram reflection in shmem].
> >   */
> >  #define MCP_PF_ID_BY_REL(p_hwfn, rel_pfid) (QED_IS_BB((p_hwfn)->cdev) ?	       \
> >  					    ((rel_pfid) |		       \
> > @@ -324,9 +321,6 @@ int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
> >  					    rel_pfid)
> >  #define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
> >  
> > -/* TODO - this is only correct as long as only BB is supported, and
> > - * no port-swapping is implemented; Afterwards we'll need to fix it.
> > - */
> >  #define MFW_PORT(_p_hwfn)       ((_p_hwfn)->abs_pf_id %	\
> >  				 ((_p_hwfn)->cdev->num_ports_in_engines * 2))
> >  struct qed_mcp_info {
> Please split off the patch and use a separate one to remove all the TODO
> entries. They do not relate to the iSCSI offload bit.
> 

Will do.

> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> > index b414a05..9754420 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> > @@ -82,6 +82,8 @@
> >  	0x1c80000UL
> >  #define BAR0_MAP_REG_XSDM_RAM \
> >  	0x1e00000UL
> > +#define BAR0_MAP_REG_YSDM_RAM \
> > +	0x1e80000UL
> >  #define  NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF \
> >  	0x5011f4UL
> >  #define  PRS_REG_SEARCH_TCP \
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> > index caff415..d3fa578 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> > @@ -24,6 +24,7 @@
> >  #include "qed_hsi.h"
> >  #include "qed_hw.h"
> >  #include "qed_int.h"
> > +#include "qed_iscsi.h"
> >  #include "qed_mcp.h"
> >  #include "qed_reg_addr.h"
> >  #include "qed_sp.h"
> > @@ -249,6 +250,20 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
> >  		return qed_sriov_eqe_event(p_hwfn,
> >  					   p_eqe->opcode,
> >  					   p_eqe->echo, &p_eqe->data);
> > +	case PROTOCOLID_ISCSI:
> > +		if (!IS_ENABLED(CONFIG_QEDI))
> > +			return -EINVAL;
> > +
> > +		if (p_hwfn->p_iscsi_info->event_cb) {
> > +			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
> > +
> > +			return p_iscsi->event_cb(p_iscsi->event_context,
> > +						 p_eqe->opcode, &p_eqe->data);
> > +		} else {
> > +			DP_NOTICE(p_hwfn,
> > +				  "iSCSI async completion is not set\n");
> > +			return -EINVAL;
> > +		}
> >  	default:
> >  		DP_NOTICE(p_hwfn,
> >  			  "Unknown Async completion for protocol: %d\n",
> > diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
> > index f9ae903..c0c9fa8 100644
> > --- a/include/linux/qed/qed_if.h
> > +++ b/include/linux/qed/qed_if.h
> > @@ -165,6 +165,7 @@ struct qed_iscsi_pf_params {
> >  	u32 max_cwnd;
> >  	u16 cq_num_entries;
> >  	u16 cmdq_num_entries;
> > +	u32 two_msl_timer;
> >  	u16 dup_ack_threshold;
> >  	u16 tx_sws_timer;
> >  	u16 min_rto;
> > @@ -271,6 +272,7 @@ struct qed_dev_info {
> >  enum qed_sb_type {
> >  	QED_SB_TYPE_L2_QUEUE,
> >  	QED_SB_TYPE_CNQ,
> > +	QED_SB_TYPE_STORAGE,
> >  };
> >  
> >  enum qed_protocol {
> > diff --git a/include/linux/qed/qed_iscsi_if.h b/include/linux/qed/qed_iscsi_if.h
> > new file mode 100644
> > index 0000000..6735ee5
> > --- /dev/null
> > +++ b/include/linux/qed/qed_iscsi_if.h
> > @@ -0,0 +1,249 @@
> > +/* QLogic qed NIC Driver
> Again, this is the iSCSI driver, is it not?
> 
> > + * Copyright (c) 2015 QLogic Corporation
> > + *
> And you _might_ want to check the copyright, seeing that it's being
> posted from the cavium.com domain ...
> 

Yes, the rest of the (already existing) qed files carry the same
copyright, so this patch did not modify it. qedi, OTOH, consists of
all new files, which use the updated copyright. A new patch will be
posted to update the qed files.

Regards,
-Arun

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
@ 2016-10-19 22:28       ` Arun Easi
  0 siblings, 0 replies; 38+ messages in thread
From: Arun Easi @ 2016-10-19 22:28 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: manish.rangankar, lduncan, cleech, martin.petersen, jejb,
	linux-scsi, netdev, Yuval.Mintz, QLogic-Storage-Upstream,
	Yuval Mintz

Thanks, Hannes, for the review. Please see my comments inline.

On Wed, 19 Oct 2016, 12:31am, Hannes Reinecke wrote:

> On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
> > From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> > 
> > This adds the backbone required for the various HW initializations
> > which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
> > 4xxxx line of adapters - FW notification, resource initializations, etc.
> > 
> > Signed-off-by: Arun Easi <arun.easi@cavium.com>
> > Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> > ---
> >  drivers/net/ethernet/qlogic/Kconfig            |   15 +
> >  drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
> >  drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
> >  drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
> >  drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
> >  drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
> >  drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
> >  drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
> >  drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
> >  drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
> >  include/linux/qed/qed_if.h                     |    2 +
> >  include/linux/qed/qed_iscsi_if.h               |  249 +++++
> >  15 files changed, 1692 insertions(+), 22 deletions(-)
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> >  create mode 100644 include/linux/qed/qed_iscsi_if.h
> > 

-- snipped --

> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> > new file mode 100644
> > index 0000000..cb22dad
> > --- /dev/null
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> > @@ -0,0 +1,1310 @@
> > +/* QLogic qed NIC Driver
> 
> Shouldn't that be qedi iSCSI Driver?

Actually, this is the common module under drivers/net/, which was 
submitted along with the NIC driver, qede, so the comment stayed.

In this driver architecture, for all protocols, there is this
common module, qed, as well as a protocol module (qede, qedr, qedi
etc.).

This comment needs to be changed in all files under qed/. We will submit 
another patch to do that.

> > +static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
> > +				     struct qed_iscsi_conn *p_conn,
> > +				     enum spq_mode comp_mode,
> > +				     struct qed_spq_comp_cb *p_comp_addr)
> > +{
> > +	struct iscsi_spe_conn_offload *p_ramrod = NULL;
> > +	struct tcp_offload_params_opt2 *p_tcp2 = NULL;
> > +	struct tcp_offload_params *p_tcp = NULL;
> > +	struct qed_spq_entry *p_ent = NULL;
> > +	struct qed_sp_init_data init_data;
> > +	union qed_qm_pq_params pq_params;
> > +	u16 pq0_id = 0, pq1_id = 0;
> > +	dma_addr_t r2tq_pbl_addr;
> > +	dma_addr_t xhq_pbl_addr;
> > +	dma_addr_t uhq_pbl_addr;
> > +	int rc = 0;
> > +	u32 dval;
> > +	u16 wval;
> > +	u8 ucval;
> > +	u8 i;
> > +
> > +	/* Get SPQ entry */
> > +	memset(&init_data, 0, sizeof(init_data));
> > +	init_data.cid = p_conn->icid;
> > +	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
> > +	init_data.comp_mode = comp_mode;
> > +	init_data.p_comp_data = p_comp_addr;
> > +
> > +	rc = qed_sp_init_request(p_hwfn, &p_ent,
> > +				 ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
> > +				 PROTOCOLID_ISCSI, &init_data);
> > +	if (rc)
> > +		return rc;
> > +
> > +	p_ramrod = &p_ent->ramrod.iscsi_conn_offload;
> > +
> > +	/* Transmission PQ is the first of the PF */
> > +	memset(&pq_params, 0, sizeof(pq_params));
> > +	pq0_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
> > +	p_conn->physical_q0 = cpu_to_le16(pq0_id);
> > +	p_ramrod->iscsi.physical_q0 = cpu_to_le16(pq0_id);
> > +
> > +	/* iSCSI Pure-ACK PQ */
> > +	pq_params.iscsi.q_idx = 1;
> > +	pq1_id = qed_get_qm_pq(p_hwfn, PROTOCOLID_ISCSI, &pq_params);
> > +	p_conn->physical_q1 = cpu_to_le16(pq1_id);
> > +	p_ramrod->iscsi.physical_q1 = cpu_to_le16(pq1_id);
> > +
> > +	p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN;
> > +	SET_FIELD(p_ramrod->hdr.flags, ISCSI_SLOW_PATH_HDR_LAYER_CODE,
> > +		  p_conn->layer_code);
> > +
> > +	p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
> > +	p_ramrod->fw_cid = cpu_to_le32(p_conn->icid);
> > +
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.sq_pbl_addr, p_conn->sq_pbl_addr);
> > +
> > +	r2tq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->r2tq);
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.r2tq_pbl_addr, r2tq_pbl_addr);
> > +
> > +	xhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->xhq);
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.xhq_pbl_addr, xhq_pbl_addr);
> > +
> > +	uhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->uhq);
> > +	DMA_REGPAIR_LE(p_ramrod->iscsi.uhq_pbl_addr, uhq_pbl_addr);
> > +
> > +	p_ramrod->iscsi.initial_ack = cpu_to_le32(p_conn->initial_ack);
> > +	p_ramrod->iscsi.flags = p_conn->offl_flags;
> > +	p_ramrod->iscsi.default_cq = p_conn->default_cq;
> > +	p_ramrod->iscsi.stat_sn = cpu_to_le32(p_conn->stat_sn);
> > +
> > +	if (!GET_FIELD(p_ramrod->iscsi.flags,
> > +		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> > +		p_tcp = &p_ramrod->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
> > +
> This looks terribly like endianness swapping. You sure this is
> applicable for all architecture and endianness settings?
> And wouldn't it be better to use one of the get_unaligned_XXX functions
> here?

The MAC address in the p_tcp structure stores the bytes in the reverse
order of p_conn. A for loop, or three swab16p() calls per copy, would
also do; will make that change.
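For illustration, the 16-bit-swap style of copy described above could look
roughly like the sketch below (plain userspace C with hypothetical helper and
struct names; the kernel code would use swab16()/get_unaligned() on the real
local_mac_addr_{hi,mid,lo} fields instead):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the three 16-bit MAC words in the ramrod
 * (local_mac_addr_hi/mid/lo); names and layout are assumptions for
 * illustration, not the real firmware structure. */
struct mac_words {
	uint16_t hi;
	uint16_t mid;
	uint16_t lo;
};

/* Byte-swap a 16-bit value, like the kernel's swab16(). */
static uint16_t swap16(uint16_t v)
{
	return (uint16_t)((v << 8) | (v >> 8));
}

/* Copy a 6-byte MAC into three byte-swapped 16-bit words, replacing the
 * twelve open-coded single-byte assignments in the hunk above. */
static void mac_to_words(struct mac_words *w, const uint8_t mac[6])
{
	uint16_t tmp;

	memcpy(&tmp, &mac[0], sizeof(tmp));	/* kernel: get_unaligned() */
	w->hi = swap16(tmp);
	memcpy(&tmp, &mac[2], sizeof(tmp));
	w->mid = swap16(tmp);
	memcpy(&tmp, &mac[4], sizeof(tmp));
	w->lo = swap16(tmp);
}
```

Reading each byte pair and swapping the 16-bit value leaves the two bytes
reversed in memory on both little- and big-endian hosts, matching what the
open-coded byte stores in the patch do.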

> 
> > +		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
> > +
> > +		p_tcp->flags = p_conn->tcp_flags;
> > +		p_tcp->ip_version = p_conn->ip_version;
> > +		for (i = 0; i < 4; i++) {
> > +			dval = p_conn->remote_ip[i];
> > +			p_tcp->remote_ip[i] = cpu_to_le32(dval);
> > +			dval = p_conn->local_ip[i];
> > +			p_tcp->local_ip[i] = cpu_to_le32(dval);
> > +		}
> > +		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
> > +		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
> > +
> > +		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
> > +		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
> > +		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
> > +		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
> > +		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
> > +		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
> > +		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
> > +		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
> > +		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
> > +		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
> > +		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
> > +		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
> > +		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
> > +		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
> > +		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
> > +		dval = p_conn->ka_timeout_delta;
> > +		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
> > +		dval = p_conn->rt_timeout_delta;
> > +		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
> > +		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
> > +		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
> > +		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
> entries. They do not relate to the iSCSI offload bit.
> 

Will do.

> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> > index b414a05..9754420 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
> > @@ -82,6 +82,8 @@
> >  	0x1c80000UL
> >  #define BAR0_MAP_REG_XSDM_RAM \
> >  	0x1e00000UL
> > +#define BAR0_MAP_REG_YSDM_RAM \
> > +	0x1e80000UL
> >  #define  NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF \
> >  	0x5011f4UL
> >  #define  PRS_REG_SEARCH_TCP \
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> > index caff415..d3fa578 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
> > @@ -24,6 +24,7 @@
> >  #include "qed_hsi.h"
> >  #include "qed_hw.h"
> >  #include "qed_int.h"
> > +#include "qed_iscsi.h"
> >  #include "qed_mcp.h"
> >  #include "qed_reg_addr.h"
> >  #include "qed_sp.h"
> > @@ -249,6 +250,20 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
> >  		return qed_sriov_eqe_event(p_hwfn,
> >  					   p_eqe->opcode,
> >  					   p_eqe->echo, &p_eqe->data);
> > +	case PROTOCOLID_ISCSI:
> > +		if (!IS_ENABLED(CONFIG_QEDI))
> > +			return -EINVAL;
> > +
> > +		if (p_hwfn->p_iscsi_info->event_cb) {
> > +			struct qed_iscsi_info *p_iscsi = p_hwfn->p_iscsi_info;
> > +
> > +			return p_iscsi->event_cb(p_iscsi->event_context,
> > +						 p_eqe->opcode, &p_eqe->data);
> > +		} else {
> > +			DP_NOTICE(p_hwfn,
> > +				  "iSCSI async completion is not set\n");
> > +			return -EINVAL;
> > +		}
> >  	default:
> >  		DP_NOTICE(p_hwfn,
> >  			  "Unknown Async completion for protocol: %d\n",
> > diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
> > index f9ae903..c0c9fa8 100644
> > --- a/include/linux/qed/qed_if.h
> > +++ b/include/linux/qed/qed_if.h
> > @@ -165,6 +165,7 @@ struct qed_iscsi_pf_params {
> >  	u32 max_cwnd;
> >  	u16 cq_num_entries;
> >  	u16 cmdq_num_entries;
> > +	u32 two_msl_timer;
> >  	u16 dup_ack_threshold;
> >  	u16 tx_sws_timer;
> >  	u16 min_rto;
> > @@ -271,6 +272,7 @@ struct qed_dev_info {
> >  enum qed_sb_type {
> >  	QED_SB_TYPE_L2_QUEUE,
> >  	QED_SB_TYPE_CNQ,
> > +	QED_SB_TYPE_STORAGE,
> >  };
> >  
> >  enum qed_protocol {
> > diff --git a/include/linux/qed/qed_iscsi_if.h b/include/linux/qed/qed_iscsi_if.h
> > new file mode 100644
> > index 0000000..6735ee5
> > --- /dev/null
> > +++ b/include/linux/qed/qed_iscsi_if.h
> > @@ -0,0 +1,249 @@
> > +/* QLogic qed NIC Driver
> Again, this is the iSCSI driver, is it not?
> 
> > + * Copyright (c) 2015 QLogic Corporation
> > + *
> And you _might_ want to check the copyright, seeing that it's being
> posted from the cavium.com domain ...
> 

Yes, the rest of the existing qed files carry the same copyright,
so this patch did not modify it. qedi, OTOH, consists entirely of
new files and uses the updated copyright. A separate patch will be
posted to update the qed files.

Regards,
-Arun

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
  2016-10-19  9:09   ` Johannes Thumshirn
@ 2016-10-20  0:14       ` Arun Easi
  0 siblings, 0 replies; 38+ messages in thread
From: Arun Easi @ 2016-10-20  0:14 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: manish.rangankar, lduncan, cleech, martin.petersen, jejb,
	linux-scsi, netdev, Yuval.Mintz, QLogic-Storage-Upstream,
	Yuval Mintz

Thanks, Johannes, for the review. Please see my responses below.

On Wed, 19 Oct 2016, 2:09am, Johannes Thumshirn wrote:

> Hi Manish,
> 
> Some initital comments
> 
> On Wed, Oct 19, 2016 at 01:01:08AM -0400, manish.rangankar@cavium.com wrote:
> > From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> > 
> > This adds the backbone required for the various HW initalizations
> > which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
> > 4xxxx line of adapters - FW notification, resource initializations, etc.
> > 
> > Signed-off-by: Arun Easi <arun.easi@cavium.com>
> > Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> > ---
> >  drivers/net/ethernet/qlogic/Kconfig            |   15 +
> >  drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
> >  drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
> >  drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
> >  drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
> >  drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
> >  drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
> >  drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
> >  drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
> >  drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
> >  include/linux/qed/qed_if.h                     |    2 +
> >  include/linux/qed/qed_iscsi_if.h               |  249 +++++
> >  15 files changed, 1692 insertions(+), 22 deletions(-)
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> >  create mode 100644 include/linux/qed/qed_iscsi_if.h
> > 
> > diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
> > index 0df1391f9..bad4fae 100644
> > --- a/drivers/net/ethernet/qlogic/Kconfig
> > +++ b/drivers/net/ethernet/qlogic/Kconfig
> > @@ -118,4 +118,19 @@ config INFINIBAND_QEDR
> >  	  for QLogic QED. This would be replaced by the 'real' option
> >  	  once the QEDR driver is added [+relocated].
> >  
> > +config QED_ISCSI
> > +	bool
> > +
> > +config QEDI
> > +	tristate "QLogic QED 25/40/100Gb iSCSI driver"
> > +	depends on QED
> > +	select QED_LL2
> > +	select QED_ISCSI
> > +	default n
> > +	---help---
> > +	  This provides a temporary node that allows the compilation
> > +	  and logical testing of the hardware offload iSCSI support
> > +	  for QLogic QED. This would be replaced by the 'real' option
> > +	  once the QEDI driver is added [+relocated].
> > +
> >  endif # NET_VENDOR_QLOGIC
> > diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
> > index cda0af7..b76669c 100644
> > --- a/drivers/net/ethernet/qlogic/qed/Makefile
> > +++ b/drivers/net/ethernet/qlogic/qed/Makefile
> > @@ -6,3 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
> >  qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
> >  qed-$(CONFIG_QED_LL2) += qed_ll2.o
> >  qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
> > +qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
> > index 653bb57..a61b1c0 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed.h
> > @@ -35,6 +35,7 @@
> >  
> >  #define QED_WFQ_UNIT	100
> >  
> > +#define ISCSI_BDQ_ID(_port_id) (_port_id)
> 
> This looks a bit odd to me.
> 
> [...]
> 
> >  #endif
> > +		if (IS_ENABLED(CONFIG_QEDI) &&
> > +				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> > +			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
> 
> 
> Why not introduce a small helper like:
> static inline bool qed_is_iscsi_personality()
> {
> 	return IS_ENABLED(CONFIG_QEDI) && p_hwfn->hw_info.personality ==
> 		QED_PCI_ISCSI;
> }

I think I can remove the IS_ENABLED() check in places like this
and keep the check contained in the header file. qed_iscsi_free()
is already taken care of; if I do the same for qed_ooo*, the check
would reduce to "p_hwfn->hw_info.personality == QED_PCI_ISCSI",
which keeps it consistent with how similar checks are done for the
other protocols.

> 
> >  		qed_iov_free(p_hwfn);
> 
> [...]
> 
> > +
> > +	if (!GET_FIELD(p_ramrod->iscsi.flags,
> > +		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> > +		p_tcp = &p_ramrod->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
> > +
> > +		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
> > +
> > +		p_tcp->flags = p_conn->tcp_flags;
> > +		p_tcp->ip_version = p_conn->ip_version;
> > +		for (i = 0; i < 4; i++) {
> > +			dval = p_conn->remote_ip[i];
> > +			p_tcp->remote_ip[i] = cpu_to_le32(dval);
> > +			dval = p_conn->local_ip[i];
> > +			p_tcp->local_ip[i] = cpu_to_le32(dval);
> > +		}
> > +		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
> > +		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
> > +
> > +		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
> > +		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
> > +		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
> > +		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
> > +		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
> > +		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
> > +		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
> > +		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
> > +		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
> > +		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
> > +		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
> > +		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
> > +		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
> > +		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
> > +		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
> > +		dval = p_conn->ka_timeout_delta;
> > +		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
> > +		dval = p_conn->rt_timeout_delta;
> > +		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
> > +		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
> > +		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
> > +		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
> > +		p_tcp->rt_cnt = p_conn->rt_cnt;
> > +		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
> > +		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
> > +		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
> > +		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
> > +		dval = p_conn->initial_rcv_wnd;
> > +		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
> > +		p_tcp->ttl = p_conn->ttl;
> > +		p_tcp->tos_or_tc = p_conn->tos_or_tc;
> > +		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
> > +		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
> > +		p_tcp->mss = cpu_to_le16(p_conn->mss);
> > +		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
> > +		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> > +		dval = p_conn->ts_ticks_per_second;
> > +		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
> > +		wval = p_conn->da_timeout_value;
> > +		p_tcp->da_timeout_value = cpu_to_le16(wval);
> > +		p_tcp->ack_frequency = p_conn->ack_frequency;
> > +		p_tcp->connect_mode = p_conn->connect_mode;
> > +	} else {
> > +		p_tcp2 =
> > +		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
> > +
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
> > +
> > +		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);
> > +		p_tcp2->flags = p_conn->tcp_flags;
> > +
> > +		p_tcp2->ip_version = p_conn->ip_version;
> > +		for (i = 0; i < 4; i++) {
> > +			dval = p_conn->remote_ip[i];
> > +			p_tcp2->remote_ip[i] = cpu_to_le32(dval);
> > +			dval = p_conn->local_ip[i];
> > +			p_tcp2->local_ip[i] = cpu_to_le32(dval);
> > +		}
> > +
> > +		p_tcp2->flow_label = cpu_to_le32(p_conn->flow_label);
> > +		p_tcp2->ttl = p_conn->ttl;
> > +		p_tcp2->tos_or_tc = p_conn->tos_or_tc;
> > +		p_tcp2->remote_port = cpu_to_le16(p_conn->remote_port);
> > +		p_tcp2->local_port = cpu_to_le16(p_conn->local_port);
> > +		p_tcp2->mss = cpu_to_le16(p_conn->mss);
> > +		p_tcp2->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> > +		p_tcp2->connect_mode = p_conn->connect_mode;
> > +		wval = p_conn->syn_ip_payload_length;
> > +		p_tcp2->syn_ip_payload_length = cpu_to_le16(wval);
> > +		p_tcp2->syn_phy_addr_lo = DMA_LO_LE(p_conn->syn_phy_addr);
> > +		p_tcp2->syn_phy_addr_hi = DMA_HI_LE(p_conn->syn_phy_addr);
> > +	}
> 
> Is there any chance you could factor out above blocks into own functions so
> you have
> 
> 
> 	if (!GET_FIELD(p_ramrod->iscsi.flags,
> 		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> 		qedi_do_stuff_off_chip();
> 	else 
> 		qedi_do_stuff_on_chip();
> 

This function mostly fills in data needed for the firmware
interface. Keeping all the data necessary for the
ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN command in this one function makes
it easier to see what is being fed to the firmware. If you do not
have strong objections, I would like to keep it this way.

> > +
> 
> [...]
> 
> > +static void __iomem *qed_iscsi_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
> > +{
> > +	return (u8 __iomem *)p_hwfn->doorbells +
> > +			     qed_db_addr(cid, DQ_DEMS_LEGACY);
> > +}
> > +
> > +static void __iomem *qed_iscsi_get_primary_bdq_prod(struct qed_hwfn *p_hwfn,
> > +						    u8 bdq_id)
> > +{
> > +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> > +
> > +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_MSDM_RAM +
> > +			     MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> > +							     bdq_id);
> > +}
> > +
> > +static void __iomem *qed_iscsi_get_secondary_bdq_prod(struct qed_hwfn *p_hwfn,
> > +						      u8 bdq_id)
> > +{
> > +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> > +
> > +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
> > +			     TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> > +							     bdq_id);
> > +}
> 
> Why are you casting to u8* here, you're returning void*? 
> 

The cast is for "p_hwfn->regview", which is a void pointer; casting
to u8 * makes the byte-granular pointer arithmetic explicit before
the result is returned as void *.

> [...]
> 
> > +
> > +	if (tasks) {
> > +		struct qed_tid_mem *tid_info = kzalloc(sizeof(*tid_info),
> > +						       GFP_KERNEL);
> > +
> > +		if (!tid_info) {
> > +			DP_NOTICE(cdev,
> > +				  "Failed to allocate tasks information\n");
> > +			qed_iscsi_stop(cdev);
> > +			return -ENOMEM;
> > +		}
> > +
> > +		rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
> > +					      tid_info);
> > +		if (rc) {
> > +			DP_NOTICE(cdev, "Failed to gather task information\n");
> > +			qed_iscsi_stop(cdev);
> > +			kfree(tid_info);
> > +			return rc;
> > +		}
> > +
> > +		/* Fill task information */
> > +		tasks->size = tid_info->tid_size;
> > +		tasks->num_tids_per_block = tid_info->num_tids_per_block;
> > +		memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);
> > +
> > +		kfree(tid_info);
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Maybe:
> 
> struct qed_tid_mem *tid_info;
> [...]
> if (!tasks)
> 	return 0;
> 
> tid_info = kzalloc(sizeof(*tid_info), GFP_KERNEL);
> 
> if (!tid_info) {
> 	DP_NOTICE(cdev, "Failed to allocate tasks information\n");
> 	qed_iscsi_stop(cdev);
> 	return -ENOMEM;
> }
> 
> rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev), tid_info);
> if (rc) {
> 	DP_NOTICE(cdev, "Failed to gather task information\n");
> 	qed_iscsi_stop(cdev);
> 	kfree(tid_info);
> 	return rc;
> }
> 
> /* Fill task information */
> tasks->size = tid_info->tid_size;
> tasks->num_tids_per_block = tid_info->num_tids_per_block;
> memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);
> 
> kfree(tid_info);
> 

Sure, will do.

> > +
> 
> [...]
> 
> > +/**
> > + * @brief start iscsi in FW
> > + *
> > + * @param cdev
> > + * @param tasks - qed will fill information about tasks
> > + *
> 
> Please use proper kerneldoc and not doxygen syntax.
> 

Sure, will do.

Regards,
-Arun

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
@ 2016-10-20  0:14       ` Arun Easi
  0 siblings, 0 replies; 38+ messages in thread
From: Arun Easi @ 2016-10-20  0:14 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: manish.rangankar, lduncan, cleech, martin.petersen, jejb,
	linux-scsi, netdev, Yuval.Mintz, QLogic-Storage-Upstream,
	Yuval Mintz

Thanks Johannes for the review, please see my response below.

On Wed, 19 Oct 2016, 2:09am, Johannes Thumshirn wrote:

> Hi Manish,
> 
> Some initital comments
> 
> On Wed, Oct 19, 2016 at 01:01:08AM -0400, manish.rangankar@cavium.com wrote:
> > From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> > 
> > This adds the backbone required for the various HW initalizations
> > which are necessary for the iSCSI driver (qedi) for QLogic FastLinQ
> > 4xxxx line of adapters - FW notification, resource initializations, etc.
> > 
> > Signed-off-by: Arun Easi <arun.easi@cavium.com>
> > Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> > ---
> >  drivers/net/ethernet/qlogic/Kconfig            |   15 +
> >  drivers/net/ethernet/qlogic/qed/Makefile       |    1 +
> >  drivers/net/ethernet/qlogic/qed/qed.h          |    8 +-
> >  drivers/net/ethernet/qlogic/qed/qed_dev.c      |   15 +
> >  drivers/net/ethernet/qlogic/qed/qed_int.h      |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.c    | 1310 ++++++++++++++++++++++++
> >  drivers/net/ethernet/qlogic/qed/qed_iscsi.h    |   52 +
> >  drivers/net/ethernet/qlogic/qed/qed_l2.c       |    1 -
> >  drivers/net/ethernet/qlogic/qed/qed_ll2.c      |   35 +-
> >  drivers/net/ethernet/qlogic/qed/qed_main.c     |    2 -
> >  drivers/net/ethernet/qlogic/qed/qed_mcp.h      |    6 -
> >  drivers/net/ethernet/qlogic/qed/qed_reg_addr.h |    2 +
> >  drivers/net/ethernet/qlogic/qed/qed_spq.c      |   15 +
> >  include/linux/qed/qed_if.h                     |    2 +
> >  include/linux/qed/qed_iscsi_if.h               |  249 +++++
> >  15 files changed, 1692 insertions(+), 22 deletions(-)
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.c
> >  create mode 100644 drivers/net/ethernet/qlogic/qed/qed_iscsi.h
> >  create mode 100644 include/linux/qed/qed_iscsi_if.h
> > 
> > diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
> > index 0df1391f9..bad4fae 100644
> > --- a/drivers/net/ethernet/qlogic/Kconfig
> > +++ b/drivers/net/ethernet/qlogic/Kconfig
> > @@ -118,4 +118,19 @@ config INFINIBAND_QEDR
> >  	  for QLogic QED. This would be replaced by the 'real' option
> >  	  once the QEDR driver is added [+relocated].
> >  
> > +config QED_ISCSI
> > +	bool
> > +
> > +config QEDI
> > +	tristate "QLogic QED 25/40/100Gb iSCSI driver"
> > +	depends on QED
> > +	select QED_LL2
> > +	select QED_ISCSI
> > +	default n
> > +	---help---
> > +	  This provides a temporary node that allows the compilation
> > +	  and logical testing of the hardware offload iSCSI support
> > +	  for QLogic QED. This would be replaced by the 'real' option
> > +	  once the QEDI driver is added [+relocated].
> > +
> >  endif # NET_VENDOR_QLOGIC
> > diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
> > index cda0af7..b76669c 100644
> > --- a/drivers/net/ethernet/qlogic/qed/Makefile
> > +++ b/drivers/net/ethernet/qlogic/qed/Makefile
> > @@ -6,3 +6,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
> >  qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
> >  qed-$(CONFIG_QED_LL2) += qed_ll2.o
> >  qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
> > +qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
> > index 653bb57..a61b1c0 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed.h
> > @@ -35,6 +35,7 @@
> >  
> >  #define QED_WFQ_UNIT	100
> >  
> > +#define ISCSI_BDQ_ID(_port_id) (_port_id)
> 
> This looks a bit odd to me.
> 
> [...]
> 
> >  #endif
> > +		if (IS_ENABLED(CONFIG_QEDI) &&
> > +				p_hwfn->hw_info.personality == QED_PCI_ISCSI)
> > +			qed_iscsi_free(p_hwfn, p_hwfn->p_iscsi_info);
> 
> 
> Why not introduce a small helper like:
> static inline bool qed_is_iscsi_personality()
> {
> 	return IS_ENABLED(CONFIG_QEDI) && p_hwfn->hw_info.personality ==
> 		QED_PCI_ISCSI;
> }

I think I can remove the IS_ENABLED() check in places like this
and have the check contained in header file. qed_iscsi_free()
already is taken care, if I do the same fore qed_ooo*, I think
the check would just be "p_hwfn->hw_info.personality ==
QED_PCI_ISCSI", which would keep it consistent with the other
areas where similar check is done for other protocols.

> 
> >  		qed_iov_free(p_hwfn);
> 
> [...]
> 
> > +
> > +	if (!GET_FIELD(p_ramrod->iscsi.flags,
> > +		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> > +		p_tcp = &p_ramrod->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp->local_mac_addr_lo))[1] = ucval;
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp->remote_mac_addr_lo))[1] = ucval;
> > +
> > +		p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
> > +
> > +		p_tcp->flags = p_conn->tcp_flags;
> > +		p_tcp->ip_version = p_conn->ip_version;
> > +		for (i = 0; i < 4; i++) {
> > +			dval = p_conn->remote_ip[i];
> > +			p_tcp->remote_ip[i] = cpu_to_le32(dval);
> > +			dval = p_conn->local_ip[i];
> > +			p_tcp->local_ip[i] = cpu_to_le32(dval);
> > +		}
> > +		p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
> > +		p_tcp->dup_ack_theshold = p_conn->dup_ack_theshold;
> > +
> > +		p_tcp->rcv_next = cpu_to_le32(p_conn->rcv_next);
> > +		p_tcp->snd_una = cpu_to_le32(p_conn->snd_una);
> > +		p_tcp->snd_next = cpu_to_le32(p_conn->snd_next);
> > +		p_tcp->snd_max = cpu_to_le32(p_conn->snd_max);
> > +		p_tcp->snd_wnd = cpu_to_le32(p_conn->snd_wnd);
> > +		p_tcp->rcv_wnd = cpu_to_le32(p_conn->rcv_wnd);
> > +		p_tcp->snd_wl1 = cpu_to_le32(p_conn->snd_wl1);
> > +		p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
> > +		p_tcp->ss_thresh = cpu_to_le32(p_conn->ss_thresh);
> > +		p_tcp->srtt = cpu_to_le16(p_conn->srtt);
> > +		p_tcp->rtt_var = cpu_to_le16(p_conn->rtt_var);
> > +		p_tcp->ts_time = cpu_to_le32(p_conn->ts_time);
> > +		p_tcp->ts_recent = cpu_to_le32(p_conn->ts_recent);
> > +		p_tcp->ts_recent_age = cpu_to_le32(p_conn->ts_recent_age);
> > +		p_tcp->total_rt = cpu_to_le32(p_conn->total_rt);
> > +		dval = p_conn->ka_timeout_delta;
> > +		p_tcp->ka_timeout_delta = cpu_to_le32(dval);
> > +		dval = p_conn->rt_timeout_delta;
> > +		p_tcp->rt_timeout_delta = cpu_to_le32(dval);
> > +		p_tcp->dup_ack_cnt = p_conn->dup_ack_cnt;
> > +		p_tcp->snd_wnd_probe_cnt = p_conn->snd_wnd_probe_cnt;
> > +		p_tcp->ka_probe_cnt = p_conn->ka_probe_cnt;
> > +		p_tcp->rt_cnt = p_conn->rt_cnt;
> > +		p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
> > +		p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
> > +		p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
> > +		p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
> > +		dval = p_conn->initial_rcv_wnd;
> > +		p_tcp->initial_rcv_wnd = cpu_to_le32(dval);
> > +		p_tcp->ttl = p_conn->ttl;
> > +		p_tcp->tos_or_tc = p_conn->tos_or_tc;
> > +		p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
> > +		p_tcp->local_port = cpu_to_le16(p_conn->local_port);
> > +		p_tcp->mss = cpu_to_le16(p_conn->mss);
> > +		p_tcp->snd_wnd_scale = p_conn->snd_wnd_scale;
> > +		p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> > +		dval = p_conn->ts_ticks_per_second;
> > +		p_tcp->ts_ticks_per_second = cpu_to_le32(dval);
> > +		wval = p_conn->da_timeout_value;
> > +		p_tcp->da_timeout_value = cpu_to_le16(wval);
> > +		p_tcp->ack_frequency = p_conn->ack_frequency;
> > +		p_tcp->connect_mode = p_conn->connect_mode;
> > +	} else {
> > +		p_tcp2 =
> > +		    &((struct iscsi_spe_conn_offload_option2 *)p_ramrod)->tcp;
> > +		ucval = p_conn->local_mac[1];
> > +		((u8 *)(&p_tcp2->local_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->local_mac[0];
> > +		((u8 *)(&p_tcp2->local_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->local_mac[3];
> > +		((u8 *)(&p_tcp2->local_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->local_mac[2];
> > +		((u8 *)(&p_tcp2->local_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->local_mac[5];
> > +		((u8 *)(&p_tcp2->local_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->local_mac[4];
> > +		((u8 *)(&p_tcp2->local_mac_addr_lo))[1] = ucval;
> > +
> > +		ucval = p_conn->remote_mac[1];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[0] = ucval;
> > +		ucval = p_conn->remote_mac[0];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_hi))[1] = ucval;
> > +		ucval = p_conn->remote_mac[3];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[0] = ucval;
> > +		ucval = p_conn->remote_mac[2];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_mid))[1] = ucval;
> > +		ucval = p_conn->remote_mac[5];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[0] = ucval;
> > +		ucval = p_conn->remote_mac[4];
> > +		((u8 *)(&p_tcp2->remote_mac_addr_lo))[1] = ucval;
> > +
> > +		p_tcp2->vlan_id = cpu_to_le16(p_conn->vlan_id);
> > +		p_tcp2->flags = p_conn->tcp_flags;
> > +
> > +		p_tcp2->ip_version = p_conn->ip_version;
> > +		for (i = 0; i < 4; i++) {
> > +			dval = p_conn->remote_ip[i];
> > +			p_tcp2->remote_ip[i] = cpu_to_le32(dval);
> > +			dval = p_conn->local_ip[i];
> > +			p_tcp2->local_ip[i] = cpu_to_le32(dval);
> > +		}
> > +
> > +		p_tcp2->flow_label = cpu_to_le32(p_conn->flow_label);
> > +		p_tcp2->ttl = p_conn->ttl;
> > +		p_tcp2->tos_or_tc = p_conn->tos_or_tc;
> > +		p_tcp2->remote_port = cpu_to_le16(p_conn->remote_port);
> > +		p_tcp2->local_port = cpu_to_le16(p_conn->local_port);
> > +		p_tcp2->mss = cpu_to_le16(p_conn->mss);
> > +		p_tcp2->rcv_wnd_scale = p_conn->rcv_wnd_scale;
> > +		p_tcp2->connect_mode = p_conn->connect_mode;
> > +		wval = p_conn->syn_ip_payload_length;
> > +		p_tcp2->syn_ip_payload_length = cpu_to_le16(wval);
> > +		p_tcp2->syn_phy_addr_lo = DMA_LO_LE(p_conn->syn_phy_addr);
> > +		p_tcp2->syn_phy_addr_hi = DMA_HI_LE(p_conn->syn_phy_addr);
> > +	}
> 
> Is there any chance you could factor out above blocks into own functions so
> you have
> 
> 
> 	if (!GET_FIELD(p_ramrod->iscsi.flags,
> 		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> 		qedi_do_stuff_off_chip();
> 	else 
> 		qedi_do_stuff_on_chip();
> 

This function mostly fills data needed for the firmware interface.
By having all data necessary for the command
ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN in this function it is easier to
refer what is being fed to firmware. If you do not have strong
objections, I would like to keep it this way.

> > +
> 
> [...]
> 
> > +static void __iomem *qed_iscsi_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
> > +{
> > +	return (u8 __iomem *)p_hwfn->doorbells +
> > +			     qed_db_addr(cid, DQ_DEMS_LEGACY);
> > +}
> > +
> > +static void __iomem *qed_iscsi_get_primary_bdq_prod(struct qed_hwfn *p_hwfn,
> > +						    u8 bdq_id)
> > +{
> > +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> > +
> > +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_MSDM_RAM +
> > +			     MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> > +							     bdq_id);
> > +}
> > +
> > +static void __iomem *qed_iscsi_get_secondary_bdq_prod(struct qed_hwfn *p_hwfn,
> > +						      u8 bdq_id)
> > +{
> > +	u8 bdq_function_id = ISCSI_BDQ_ID(p_hwfn->port_id);
> > +
> > +	return (u8 __iomem *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
> > +			     TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(bdq_function_id,
> > +							     bdq_id);
> > +}
> 
> Why are you casting to u8* here, you're returning void*? 
> 

The cast is needed for the byte-offset arithmetic on "p_hwfn->regview".

> [...]
> 
> > +
> > +	if (tasks) {
> > +		struct qed_tid_mem *tid_info = kzalloc(sizeof(*tid_info),
> > +						       GFP_KERNEL);
> > +
> > +		if (!tid_info) {
> > +			DP_NOTICE(cdev,
> > +				  "Failed to allocate tasks information\n");
> > +			qed_iscsi_stop(cdev);
> > +			return -ENOMEM;
> > +		}
> > +
> > +		rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
> > +					      tid_info);
> > +		if (rc) {
> > +			DP_NOTICE(cdev, "Failed to gather task information\n");
> > +			qed_iscsi_stop(cdev);
> > +			kfree(tid_info);
> > +			return rc;
> > +		}
> > +
> > +		/* Fill task information */
> > +		tasks->size = tid_info->tid_size;
> > +		tasks->num_tids_per_block = tid_info->num_tids_per_block;
> > +		memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);
> > +
> > +		kfree(tid_info);
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Maybe:
> 
> struct qed_tid_mem *tid_info;
> [...]
> if (!tasks)
> 	return 0;
> 
> tid_info = kzalloc(sizeof(*tid_info), GFP_KERNEL);
> 
> if (!tid_info) {
> 	DP_NOTICE(cdev, "Failed to allocate tasks information\n");
> 	qed_iscsi_stop(cdev);
> 	return -ENOMEM;
> }
> 
> rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev), tid_info);
> if (rc) {
> 	DP_NOTICE(cdev, "Failed to gather task information\n");
> 	qed_iscsi_stop(cdev);
> 	kfree(tid_info);
> 	return rc;
> }
> 
> /* Fill task information */
> tasks->size = tid_info->tid_size;
> tasks->num_tids_per_block = tid_info->num_tids_per_block;
> memcpy(tasks->blocks, tid_info->blocks, MAX_TID_BLOCKS);
> 
> kfree(tid_info);
> 

Sure, will do.

> > +
> 
> [...]
> 
> > +/**
> > + * @brief start iscsi in FW
> > + *
> > + * @param cdev
> > + * @param tasks - qed will fill information about tasks
> > + *
> 
> Please use proper kerneldoc and not doxygen syntax.
> 

Sure, will do.

Regards,
-Arun

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 2/6] qed: Add iSCSI out of order packet handling.
  2016-10-19  9:39   ` Johannes Thumshirn
@ 2016-10-20  0:43       ` Arun Easi
  0 siblings, 0 replies; 38+ messages in thread
From: Arun Easi @ 2016-10-20  0:43 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: manish.rangankar, lduncan, cleech, martin.petersen, jejb,
	linux-scsi, netdev, Yuval.Mintz, QLogic-Storage-Upstream,
	Yuval Mintz

On Wed, 19 Oct 2016, 2:39am, Johannes Thumshirn wrote:

> On Wed, Oct 19, 2016 at 01:01:09AM -0400, manish.rangankar@cavium.com wrote:
> > From: Yuval Mintz <Yuval.Mintz@qlogic.com>
> > 
> > This patch adds out of order packet handling for hardware offloaded
> > iSCSI. Out of order packet handling requires driver buffer allocation
> > and assistance.
> > 
> > Signed-off-by: Arun Easi <arun.easi@cavium.com>
> > Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> > ---
> 
> [...]
> 
> > +		if (IS_ENABLED(CONFIG_QEDI) &&
> > +			p_ll2_conn->conn_type == QED_LL2_TYPE_ISCSI_OOO) {
> 
> If you're going to implement the qed_is_iscsi_personality() helper, please
> consider a qed_ll2_is_iscsi_ooo() as well.

I see that I can avoid the IS_ENABLED() here as well. I will fix this
in the next revision.

> 
> > +			struct qed_ooo_buffer *p_buffer;
> 
> [...]
> 
> > +	while (cq_new_idx != cq_old_idx) {
> > +		struct core_rx_fast_path_cqe *p_cqe_fp;
> > +
> > +		cqe = qed_chain_consume(&p_rx->rcq_chain);
> > +		cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
> > +		cqe_type = cqe->rx_cqe_sp.type;
> > +
> > +		if (cqe_type != CORE_RX_CQE_TYPE_REGULAR) {
> > +			DP_NOTICE(p_hwfn,
> > +				  "Got a non-regular LB LL2 completion [type 0x%02x]\n",
> > +				  cqe_type);
> > +			return -EINVAL;
> > +		}
> > +		p_cqe_fp = &cqe->rx_cqe_fp;
> > +
> > +		placement_offset = p_cqe_fp->placement_offset;
> > +		parse_flags = le16_to_cpu(p_cqe_fp->parse_flags.flags);
> > +		packet_length = le16_to_cpu(p_cqe_fp->packet_length);
> > +		vlan = le16_to_cpu(p_cqe_fp->vlan);
> > +		iscsi_ooo = (struct ooo_opaque *)&p_cqe_fp->opaque_data;
> > +		qed_ooo_save_history_entry(p_hwfn, p_hwfn->p_ooo_info,
> > +					   iscsi_ooo);
> > +		cid = le32_to_cpu(iscsi_ooo->cid);
> > +
> > +		/* Process delete isle first */
> > +		if (iscsi_ooo->drop_size)
> > +			qed_ooo_delete_isles(p_hwfn, p_hwfn->p_ooo_info, cid,
> > +					     iscsi_ooo->drop_isle,
> > +					     iscsi_ooo->drop_size);
> > +
> > +		if (iscsi_ooo->ooo_opcode == TCP_EVENT_NOP)
> > +			continue;
> > +
> > +		/* Now process create/add/join isles */
> > +		if (list_empty(&p_rx->active_descq)) {
> > +			DP_NOTICE(p_hwfn,
> > +				  "LL2 OOO RX chain has no submitted buffers\n");
> > +			return -EIO;
> > +		}
> > +
> > +		p_pkt = list_first_entry(&p_rx->active_descq,
> > +					 struct qed_ll2_rx_packet, list_entry);
> > +
> > +		if ((iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_NEW_ISLE) ||
> > +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_RIGHT) ||
> > +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_ISLE_LEFT) ||
> > +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_ADD_PEN) ||
> > +		    (iscsi_ooo->ooo_opcode == TCP_EVENT_JOIN)) {
> > +			if (!p_pkt) {
> > +				DP_NOTICE(p_hwfn,
> > +					  "LL2 OOO RX packet is not valid\n");
> > +				return -EIO;
> > +			}
> > +			list_del(&p_pkt->list_entry);
> > +			p_buffer = (struct qed_ooo_buffer *)p_pkt->cookie;
> > +			p_buffer->packet_length = packet_length;
> > +			p_buffer->parse_flags = parse_flags;
> > +			p_buffer->vlan = vlan;
> > +			p_buffer->placement_offset = placement_offset;
> > +			qed_chain_consume(&p_rx->rxq_chain);
> > +			list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
> > +
> > +			switch (iscsi_ooo->ooo_opcode) {
> > +			case TCP_EVENT_ADD_NEW_ISLE:
> > +				qed_ooo_add_new_isle(p_hwfn,
> > +						     p_hwfn->p_ooo_info,
> > +						     cid,
> > +						     iscsi_ooo->ooo_isle,
> > +						     p_buffer);
> > +				break;
> > +			case TCP_EVENT_ADD_ISLE_RIGHT:
> > +				qed_ooo_add_new_buffer(p_hwfn,
> > +						       p_hwfn->p_ooo_info,
> > +						       cid,
> > +						       iscsi_ooo->ooo_isle,
> > +						       p_buffer,
> > +						       QED_OOO_RIGHT_BUF);
> > +				break;
> > +			case TCP_EVENT_ADD_ISLE_LEFT:
> > +				qed_ooo_add_new_buffer(p_hwfn,
> > +						       p_hwfn->p_ooo_info,
> > +						       cid,
> > +						       iscsi_ooo->ooo_isle,
> > +						       p_buffer,
> > +						       QED_OOO_LEFT_BUF);
> > +				break;
> > +			case TCP_EVENT_JOIN:
> > +				qed_ooo_add_new_buffer(p_hwfn,
> > +						       p_hwfn->p_ooo_info,
> > +						       cid,
> > +						       iscsi_ooo->ooo_isle +
> > +						       1,
> > +						       p_buffer,
> > +						       QED_OOO_LEFT_BUF);
> > +				qed_ooo_join_isles(p_hwfn,
> > +						   p_hwfn->p_ooo_info,
> > +						   cid, iscsi_ooo->ooo_isle);
> > +				break;
> > +			case TCP_EVENT_ADD_PEN:
> > +				num_ooo_add_to_peninsula++;
> > +				qed_ooo_put_ready_buffer(p_hwfn,
> > +							 p_hwfn->p_ooo_info,
> > +							 p_buffer, true);
> > +				break;
> > +			}
> > +		} else {
> > +			DP_NOTICE(p_hwfn,
> > +				  "Unexpected event (%d) TX OOO completion\n",
> > +				  iscsi_ooo->ooo_opcode);
> > +		}
> > +	}
> 
> Can you factor the body of that "while (cq_new_idx != cq_old_idx)" loop out
> into its own function?

Ok, will do.

> 
> >  
> > -		b_last = list_empty(&p_rx->active_descq);
> > +	/* Submit RX buffer here */
> > +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> > +						   p_hwfn->p_ooo_info))) {
> 
> This could be an opportunity for a qed_for_each_free_buffer() or maybe even a
> qed_ooo_submit_rx_buffers() and qed_ooo_submit_tx_buffers() as this is mostly
> duplicate code.

Sure, will do. Thank you.

Regards,
-Arun
> 
> > +		rc = qed_ll2_post_rx_buffer(p_hwfn, p_ll2_conn->my_id,
> > +					    p_buffer->rx_buffer_phys_addr,
> > +					    0, p_buffer, true);
> > +		if (rc) {
> > +			qed_ooo_put_free_buffer(p_hwfn, p_hwfn->p_ooo_info,
> > +						p_buffer);
> > +			break;
> > +		}
> >  	}
> > +
> > +	/* Submit Tx buffers here */
> > +	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
> > +						    p_hwfn->p_ooo_info))) {
> 
> Ditto.
> 
> [...]
> > +
> > +	/* Submit Tx buffers here */
> > +	while ((p_buffer = qed_ooo_get_ready_buffer(p_hwfn,
> > +						    p_hwfn->p_ooo_info))) {
> 
> 
> And here
> 
> [...]
> 
> > +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> > +						   p_hwfn->p_ooo_info))) {
> 
> [..]
> 
> > +	while ((p_buffer = qed_ooo_get_free_buffer(p_hwfn,
> > +						   p_hwfn->p_ooo_info))) {
> 
> [...]
> 
> 

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 1/6] qed: Add support for hardware offloaded iSCSI.
  2016-10-20  0:14       ` Arun Easi
  (?)
@ 2016-10-20  7:09       ` Johannes Thumshirn
  -1 siblings, 0 replies; 38+ messages in thread
From: Johannes Thumshirn @ 2016-10-20  7:09 UTC (permalink / raw)
  To: Arun Easi
  Cc: manish.rangankar, lduncan, cleech, martin.petersen, jejb,
	linux-scsi, netdev, Yuval.Mintz, QLogic-Storage-Upstream,
	Yuval Mintz

Hi Arun,

On Wed, Oct 19, 2016 at 05:14:59PM -0700, Arun Easi wrote:
> Thanks Johannes for the review, please see my response below.
> 

[...]

> > 
> > Why not introduce a small helper like:
> > static inline bool qed_is_iscsi_personality()
> > {
> > 	return IS_ENABLED(CONFIG_QEDI) && p_hwfn->hw_info.personality ==
> > 		QED_PCI_ISCSI;
> > }
> 
I think I can remove the IS_ENABLED() check in places like this
and have the check contained in the header file. qed_iscsi_free()
is already taken care of; if I do the same for qed_ooo*, I think
the check would just be "p_hwfn->hw_info.personality ==
QED_PCI_ISCSI", which would keep it consistent with the other
areas where a similar check is done for other protocols.

Sounds good.

> 

[...]

> > 
> > Is there any chance you could factor out above blocks into own functions so
> > you have
> > 
> > 
> > 	if (!GET_FIELD(p_ramrod->iscsi.flags,
> > 		       ISCSI_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B)) {
> > 		qedi_do_stuff_off_chip();
> > 	else 
> > 		qedi_do_stuff_on_chip();
> > 
> 
> > This function mostly fills in the data needed for the firmware interface.
> > Having all the data necessary for the ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN
> > command in this one function makes it easier to see what is being fed to
> > the firmware. If you do not have strong objections, I would like to keep
> > it this way.

No strong objections, I just don't think these lengthy blocks are readable,
but I don't want to do too much bikeshedding about it.

Thanks,
	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  2016-10-19  7:45   ` Hannes Reinecke
@ 2016-10-20  8:27     ` Rangankar, Manish
  0 siblings, 0 replies; 38+ messages in thread
From: Rangankar, Manish @ 2016-10-20  8:27 UTC (permalink / raw)
  To: Hannes Reinecke, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Mintz, Yuval,
	Dept-Eng QLogic Storage Upstream, Javali, Nilesh,
	Adheer Chandravanshi, Dupuis, Chad, Kashyap, Saurav, Easi, Arun

Thanks Hannes for the review, please see my comments below,

On 19/10/16 1:15 PM, "Hannes Reinecke" <hare@suse.de> wrote:

>On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
>> From: Manish Rangankar <manish.rangankar@cavium.com>
>> 
>> The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
>> for 41000 Series Converged Network Adapters by QLogic.
>> 
>> This patch consists of following changes:
>>   - MAINTAINERS Makefile and Kconfig changes for qedi,
>>   - PCI driver registration,
>>   - iSCSI host level initialization,
>>   - Debugfs and log level infrastructure.
>> 
>> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
>> Signed-off-by: Arun Easi <arun.easi@cavium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
>> ---
>>  MAINTAINERS                         |    6 +
>>  drivers/net/ethernet/qlogic/Kconfig |   12 -
>>  drivers/scsi/Kconfig                |    1 +
>>  drivers/scsi/Makefile               |    1 +
>>  drivers/scsi/qedi/Kconfig           |   10 +
>>  drivers/scsi/qedi/Makefile          |    5 +
>>  drivers/scsi/qedi/qedi.h            |  286 +++++++
>>  drivers/scsi/qedi/qedi_dbg.c        |  143 ++++
>>  drivers/scsi/qedi/qedi_dbg.h        |  144 ++++
>>  drivers/scsi/qedi/qedi_debugfs.c    |  244 ++++++
>>  drivers/scsi/qedi/qedi_hsi.h        |   52 ++
>>  drivers/scsi/qedi/qedi_main.c       | 1550
>>+++++++++++++++++++++++++++++++++++
>>  drivers/scsi/qedi/qedi_sysfs.c      |   52 ++
>>  drivers/scsi/qedi/qedi_version.h    |   14 +
>>  14 files changed, 2508 insertions(+), 12 deletions(-)
>>  create mode 100644 drivers/scsi/qedi/Kconfig
>>  create mode 100644 drivers/scsi/qedi/Makefile
>>  create mode 100644 drivers/scsi/qedi/qedi.h
>>  create mode 100644 drivers/scsi/qedi/qedi_dbg.c
>>  create mode 100644 drivers/scsi/qedi/qedi_dbg.h
>>  create mode 100644 drivers/scsi/qedi/qedi_debugfs.c
>>  create mode 100644 drivers/scsi/qedi/qedi_hsi.h
>>  create mode 100644 drivers/scsi/qedi/qedi_main.c
>>  create mode 100644 drivers/scsi/qedi/qedi_sysfs.c
>>  create mode 100644 drivers/scsi/qedi/qedi_version.h
>> 
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 5e925a2..906d05f 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -9909,6 +9909,12 @@ F:	drivers/net/ethernet/qlogic/qed/
>>  F:	include/linux/qed/
>>  F:	drivers/net/ethernet/qlogic/qede/
>>  
>> +QLOGIC QL41xxx ISCSI DRIVER
>> +M:	QLogic-Storage-Upstream@cavium.com
>> +L:	linux-scsi@vger.kernel.org
>> +S:	Supported
>> +F:	drivers/scsi/qedi/
>> +
>>  QNX4 FILESYSTEM
>>  M:	Anders Larsen <al@alarsen.net>
>>  W:	http://www.alarsen.net/linux/qnx4fs/
>> diff --git a/drivers/net/ethernet/qlogic/Kconfig
>>b/drivers/net/ethernet/qlogic/Kconfig
>> index bad4fae..28b4366 100644
>> --- a/drivers/net/ethernet/qlogic/Kconfig
>> +++ b/drivers/net/ethernet/qlogic/Kconfig
>> @@ -121,16 +121,4 @@ config INFINIBAND_QEDR
>>  config QED_ISCSI
>>  	bool
>>  
>> -config QEDI
>> -	tristate "QLogic QED 25/40/100Gb iSCSI driver"
>> -	depends on QED
>> -	select QED_LL2
>> -	select QED_ISCSI
>> -	default n
>> -	---help---
>> -	  This provides a temporary node that allows the compilation
>> -	  and logical testing of the hardware offload iSCSI support
>> -	  for QLogic QED. This would be replaced by the 'real' option
>> -	  once the QEDI driver is added [+relocated].
>> -
>>  endif # NET_VENDOR_QLOGIC
>Huh? You just introduce this one in patch 1/6.
>Please fold them together so that this can be omitted.

Yes, we will remove this in the next revision.

-- snipped --


>> @@ -0,0 +1,52 @@
>> +/*
>> + * QLogic iSCSI Offload Driver
>> + * Copyright (c) 2016 Cavium Inc.
>> + *
>> + * This software is available under the terms of the GNU General
>>Public License
>> + * (GPL) Version 2, available from the file COPYING in the main
>>directory of
>> + * this source tree.
>> + */
>> +#ifndef __QEDI_HSI__
>> +#define __QEDI_HSI__
>> +/********************************/
>> +/* Add include to common target */
>> +/********************************/
>> +#include <linux/qed/common_hsi.h>
>> +
>Please use kernel-doc style for comments

Will do.

--snipped--
>> +static void qedi_int_fp(struct qedi_ctx *qedi)
>> +{
>> +	struct qedi_fastpath *fp;
>> +	int id;
>> +
>> +	memset((void *)qedi->fp_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
>> +	       sizeof(*qedi->fp_array));
>> +	memset((void *)qedi->sb_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
>> +	       sizeof(*qedi->sb_array));
>> +
>> +	for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
>> +		fp = &qedi->fp_array[id];
>> +		fp->sb_info = &qedi->sb_array[id];
>> +		fp->sb_id = id;
>> +		fp->qedi = qedi;
>> +		snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
>> +			 "qedi", id);
>> +
>> +		/* fp_array[i] ---- irq cookie
>> +		 * So init data which is needed in int ctx
>> +		 */
>> +	}
>> +}
>> +
>Please check if you cannot make use of Christophs irq rework.

Sure, we will explore this.

--snipped--
>> +static bool qedi_process_completions(struct qedi_fastpath *fp)
>> +{
>> +	struct qedi_work *qedi_work = NULL;
>> +	struct qedi_ctx *qedi = fp->qedi;
>> +	struct qed_sb_info *sb_info = fp->sb_info;
>> +	struct status_block *sb = sb_info->sb_virt;
>> +	struct qedi_percpu_s *p = NULL;
>> +	struct global_queue *que;
>> +	u16 prod_idx;
>> +	unsigned long flags;
>> +	union iscsi_cqe *cqe;
>> +	int cpu;
>> +
>> +	/* Get the current firmware producer index */
>> +	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
>> +
>> +	if (prod_idx >= QEDI_CQ_SIZE)
>> +		prod_idx = prod_idx % QEDI_CQ_SIZE;
>> +
>> +	que = qedi->global_queues[fp->sb_id];
>> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
>> +		  "Before: global queue=%p prod_idx=%d cons_idx=%d, sb_id=%d\n",
>> +		  que, prod_idx, que->cq_cons_idx, fp->sb_id);
>> +
>> +	qedi->intr_cpu = fp->sb_id;
>> +	cpu = smp_processor_id();
>> +	p = &per_cpu(qedi_percpu, cpu);
>> +
>> +	if (unlikely(!p->iothread))
>> +		WARN_ON(1);
>> +
>> +	spin_lock_irqsave(&p->p_work_lock, flags);
>> +	while (que->cq_cons_idx != prod_idx) {
>> +		cqe = &que->cq[que->cq_cons_idx];
>> +
>> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_IO,
>> +			  "cqe=%p prod_idx=%d cons_idx=%d.\n",
>> +			  cqe, prod_idx, que->cq_cons_idx);
>> +
>> +		/* Alloc and copy to the cqe */
>> +		qedi_work = kzalloc(sizeof(*qedi_work), GFP_ATOMIC);
>> +		if (qedi_work) {
>> +			INIT_LIST_HEAD(&qedi_work->list);
>> +			qedi_work->qedi = qedi;
>> +			memcpy(&qedi_work->cqe, cqe, sizeof(union iscsi_cqe));
>> +			qedi_work->que_idx = fp->sb_id;
>> +			list_add_tail(&qedi_work->list, &p->work_list);
>> +		} else {
>> +			WARN_ON(1);
>> +			continue;
>> +		}
>> +
>Memory allocation in an interrupt routine?
>You must be kidding ...

We will revisit this code.

>
>> +		que->cq_cons_idx++;
>> +		if (que->cq_cons_idx == QEDI_CQ_SIZE)
>> +			que->cq_cons_idx = 0;
>> +	}
>> +	wake_up_process(p->iothread);
>> +	spin_unlock_irqrestore(&p->p_work_lock, flags);
>> +
>> +	return true;
>> +}
>> +
>> +static bool qedi_fp_has_work(struct qedi_fastpath *fp)
>> +{
>> +	struct qedi_ctx *qedi = fp->qedi;
>> +	struct global_queue *que;
>> +	struct qed_sb_info *sb_info = fp->sb_info;
>> +	struct status_block *sb = sb_info->sb_virt;
>> +	u16 prod_idx;
>> +
>> +	barrier();
>> +
>> +	/* Get the current firmware producer index */
>> +	prod_idx = sb->pi_array[QEDI_PROTO_CQ_PROD_IDX];
>> +
>> +	/* Get the pointer to the global CQ this completion is on */
>> +	que = qedi->global_queues[fp->sb_id];
>> +
>> +	/* prod idx wrap around uint16 */
>> +	if (prod_idx >= QEDI_CQ_SIZE)
>> +		prod_idx = prod_idx % QEDI_CQ_SIZE;
>> +
>> +	return (que->cq_cons_idx != prod_idx);
>> +}
>> +
>> +/* MSI-X fastpath handler code */
>> +static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
>> +{
>> +	struct qedi_fastpath *fp = dev_id;
>> +	struct qedi_ctx *qedi = fp->qedi;
>> +	bool wake_io_thread = true;
>> +
>> +	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
>> +
>> +process_again:
>> +	wake_io_thread = qedi_process_completions(fp);
>> +	if (wake_io_thread) {
>> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
>> +			  "process already running\n");
>> +	}
>> +
>> +	if (qedi_fp_has_work(fp) == 0)
>> +		qed_sb_update_sb_idx(fp->sb_info);
>> +
>> +	/* Check for more work */
>> +	rmb();
>> +
>> +	if (qedi_fp_has_work(fp) == 0)
>> +		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
>> +	else
>> +		goto process_again;
>> +
>> +	return IRQ_HANDLED;
>> +}
>> +
>> +/* simd handler for MSI/INTa */
>> +static void qedi_simd_int_handler(void *cookie)
>> +{
>> +	/* Cookie is qedi_ctx struct */
>> +	struct qedi_ctx *qedi = (struct qedi_ctx *)cookie;
>> +
>> +	QEDI_WARN(&qedi->dbg_ctx, "qedi=%p.\n", qedi);
>> +}
>> +
>> +#define QEDI_SIMD_HANDLER_NUM		0
>> +static void qedi_sync_free_irqs(struct qedi_ctx *qedi)
>> +{
>> +	int i;
>> +
>> +	if (qedi->int_info.msix_cnt) {
>> +		for (i = 0; i < qedi->int_info.used_cnt; i++) {
>> +			synchronize_irq(qedi->int_info.msix[i].vector);
>> +			irq_set_affinity_hint(qedi->int_info.msix[i].vector,
>> +					      NULL);
>> +			free_irq(qedi->int_info.msix[i].vector,
>> +				 &qedi->fp_array[i]);
>> +		}
>> +	} else {
>> +		qedi_ops->common->simd_handler_clean(qedi->cdev,
>> +						     QEDI_SIMD_HANDLER_NUM);
>> +	}
>> +
>> +	qedi->int_info.used_cnt = 0;
>> +	qedi_ops->common->set_fp_int(qedi->cdev, 0);
>> +}
>> +
>Again, consider using the interrupt affinity rework from Christoph Hellwig

Sure, we will explore this one also.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  2016-10-19 10:02   ` Johannes Thumshirn
@ 2016-10-20  8:41     ` Rangankar, Manish
  2016-10-23 14:04     ` Rangankar, Manish
  1 sibling, 0 replies; 38+ messages in thread
From: Rangankar, Manish @ 2016-10-20  8:41 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Mintz, Yuval, Dept-Eng QLogic Storage Upstream, Javali, Nilesh,
	Adheer Chandravanshi, Dupuis, Chad, Kashyap, Saurav, Easi, Arun

Thanks Johannes for the review, please see comments below,


On 19/10/16 3:32 PM, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:

>On Wed, Oct 19, 2016 at 01:01:10AM -0400, manish.rangankar@cavium.com
>wrote:
>> From: Manish Rangankar <manish.rangankar@cavium.com>
>> 
>> The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
>> for 41000 Series Converged Network Adapters by QLogic.
>> 
>> This patch consists of following changes:
>>   - MAINTAINERS Makefile and Kconfig changes for qedi,
>>   - PCI driver registration,
>>   - iSCSI host level initialization,
>>   - Debugfs and log level infrastructure.
>> 
>> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
>> Signed-off-by: Arun Easi <arun.easi@cavium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
>> ---
>
>[...]
>
>> +static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32
>>tid)
>> +{
>> +	return (void *)(info->blocks[tid / info->num_tids_per_block] +
>> +			(tid % info->num_tids_per_block) * info->size);
>> +}
>
>Unnecessary cast here.

Noted

>
>
>[...]
>
>> +void
>> +qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
>> +	     const char *fmt, ...)
>> +{
>> +	va_list va;
>> +	struct va_format vaf;
>> +	char nfunc[32];
>> +
>> +	memset(nfunc, 0, sizeof(nfunc));
>> +	memcpy(nfunc, func, sizeof(nfunc) - 1);
>> +
>> +	va_start(va, fmt);
>> +
>> +	vaf.fmt = fmt;
>> +	vaf.va = &va;
>> +
>> +	if (likely(qedi) && likely(qedi->pdev))
>> +		pr_crit("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
>> +			nfunc, line, qedi->host_no, &vaf);
>> +	else
>> +		pr_crit("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
>
>pr_crit, seriously?

We will change it to pr_err.

>
>[...]
>
>> +static void qedi_int_fp(struct qedi_ctx *qedi)
>> +{
>> +	struct qedi_fastpath *fp;
>> +	int id;
>> +
>> +	memset((void *)qedi->fp_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
>> +	       sizeof(*qedi->fp_array));
>> +	memset((void *)qedi->sb_array, 0, MIN_NUM_CPUS_MSIX(qedi) *
>> +	       sizeof(*qedi->sb_array));
>
>I don't think the cast is necessary here.

Noted


>
>[...]
>
>> +static int qedi_setup_cid_que(struct qedi_ctx *qedi)
>> +{
>> +	int i;
>> +
>> +	qedi->cid_que.cid_que_base = kmalloc((qedi->max_active_conns *
>> +					      sizeof(u32)), GFP_KERNEL);
>> +	if (!qedi->cid_que.cid_que_base)
>> +		return -ENOMEM;
>> +
>> +	qedi->cid_que.conn_cid_tbl = kmalloc((qedi->max_active_conns *
>> +					      sizeof(struct qedi_conn *)),
>> +					     GFP_KERNEL);
>
>Please use kmalloc_array() here.

Will do.

>
>[...]
>
>> +/* MSI-X fastpath handler code */
>> +static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
>> +{
>> +	struct qedi_fastpath *fp = dev_id;
>> +	struct qedi_ctx *qedi = fp->qedi;
>> +	bool wake_io_thread = true;
>> +
>> +	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
>> +
>> +process_again:
>> +	wake_io_thread = qedi_process_completions(fp);
>> +	if (wake_io_thread) {
>> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
>> +			  "process already running\n");
>> +	}
>> +
>> +	if (qedi_fp_has_work(fp) == 0)
>> +		qed_sb_update_sb_idx(fp->sb_info);
>> +
>> +	/* Check for more work */
>> +	rmb();
>> +
>> +	if (qedi_fp_has_work(fp) == 0)
>> +		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
>> +	else
>> +		goto process_again;
>> +
>> +	return IRQ_HANDLED;
>> +}
>
>You might want to consider workqueues here.

We will revisit this code.

>
>[...]
>
>> +static int qedi_alloc_itt(struct qedi_ctx *qedi)
>> +{
>> +	qedi->itt_map = kzalloc((sizeof(struct qedi_itt_map) *
>> +				MAX_ISCSI_TASK_ENTRIES), GFP_KERNEL);
>
>that screams for kcalloc()
>
>> +	if (!qedi->itt_map) {
>> +		QEDI_ERR(&qedi->dbg_ctx,
>> +			 "Unable to allocate itt map array memory\n");
>> +		return -ENOMEM;
>> +	}
>> +	return 0;
>> +}
>> +
>> +static void qedi_free_itt(struct qedi_ctx *qedi)
>> +{
>> +	kfree(qedi->itt_map);
>> +}
>> +
>> +static struct qed_ll2_cb_ops qedi_ll2_cb_ops = {
>> +	.rx_cb = qedi_ll2_rx,
>> +	.tx_cb = NULL,
>> +};
>> +
>> +static int qedi_percpu_io_thread(void *arg)
>> +{
>> +	struct qedi_percpu_s *p = arg;
>> +	struct qedi_work *work, *tmp;
>> +	unsigned long flags;
>> +	LIST_HEAD(work_list);
>> +
>> +	set_user_nice(current, -20);
>> +
>> +	while (!kthread_should_stop()) {
>> +		spin_lock_irqsave(&p->p_work_lock, flags);
>> +		while (!list_empty(&p->work_list)) {
>> +			list_splice_init(&p->work_list, &work_list);
>> +			spin_unlock_irqrestore(&p->p_work_lock, flags);
>> +
>> +			list_for_each_entry_safe(work, tmp, &work_list, list) {
>> +				list_del_init(&work->list);
>> +				qedi_fp_process_cqes(work->qedi, &work->cqe,
>> +						     work->que_idx);
>> +				kfree(work);
>> +			}
>> +			spin_lock_irqsave(&p->p_work_lock, flags);
>> +		}
>> +		set_current_state(TASK_INTERRUPTIBLE);
>> +		spin_unlock_irqrestore(&p->p_work_lock, flags);
>> +		schedule();
>> +	}
>> +	__set_current_state(TASK_RUNNING);
>> +
>> +	return 0;
>> +}
>
>A kthread with prio -20 IRQs turned off looping over a list, what could
>possibly go wrong here. I bet you your favorite beverage that this will
>cause Soft Lockups when running I/O stress tests BTDT.

Will remove this.

>
>[...]
>
>> +	if (mode != QEDI_MODE_RECOVERY) {
>> +		if (iscsi_host_add(qedi->shost, &pdev->dev)) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "Could not add iscsi host\n");
>> +			rc = -ENOMEM;
>> +			goto remove_host;
>> +		}
>> +
>> +		/* Allocate uio buffers */
>> +		rc = qedi_alloc_uio_rings(qedi);
>> +		if (rc) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "UIO alloc ring failed err=%d\n", rc);
>> +			goto remove_host;
>> +		}
>> +
>> +		rc = qedi_init_uio(qedi);
>> +		if (rc) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "UIO init failed, err=%d\n", rc);
>> +			goto free_uio;
>> +		}
>> +
>> +		/* host the array on iscsi_conn */
>> +		rc = qedi_setup_cid_que(qedi);
>> +		if (rc) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "Could not setup cid que\n");
>> +			goto free_uio;
>> +		}
>> +
>> +		rc = qedi_cm_alloc_mem(qedi);
>> +		if (rc) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "Could not alloc cm memory\n");
>> +			goto free_cid_que;
>> +		}
>> +
>> +		rc = qedi_alloc_itt(qedi);
>> +		if (rc) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "Could not alloc itt memory\n");
>> +			goto free_cid_que;
>> +		}
>> +
>> +		sprintf(host_buf, "host_%d", qedi->shost->host_no);
>> +		qedi->tmf_thread = create_singlethread_workqueue(host_buf);
>> +		if (!qedi->tmf_thread) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "Unable to start tmf thread!\n");
>> +			rc = -ENODEV;
>> +			goto free_cid_que;
>> +		}
>> +
>> +		sprintf(host_buf, "qedi_ofld%d", qedi->shost->host_no);
>> +		qedi->offload_thread = create_workqueue(host_buf);
>> +		if (!qedi->offload_thread) {
>> +			QEDI_ERR(&qedi->dbg_ctx,
>> +				 "Unable to start offload thread!\n");
>> +			rc = -ENODEV;
>> +			goto free_cid_que;
>> +		}
>> +
>> +		/* F/w needs 1st task context memory entry for performance */
>> +		set_bit(QEDI_RESERVE_TASK_ID, qedi->task_idx_map);
>> +		atomic_set(&qedi->num_offloads, 0);
>> +	}
>> +
>> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
>> +		  "QLogic FastLinQ iSCSI Module qedi %s, FW %d.%d.%d.%d\n",
>> +		  QEDI_MODULE_VERSION, FW_MAJOR_VERSION, FW_MINOR_VERSION,
>> +		   FW_REVISION_VERSION, FW_ENGINEERING_VERSION);
>> +	return 0;
>
>Please put the QEDI_INFO() above the if and invert the condition.

Will do.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC 5/6] qedi: Add support for iSCSI session management.
  2016-10-19  8:03   ` Hannes Reinecke
@ 2016-10-20  9:09     ` Rangankar, Manish
  0 siblings, 0 replies; 38+ messages in thread
From: Rangankar, Manish @ 2016-10-20  9:09 UTC (permalink / raw)
  To: Hannes Reinecke, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Mintz, Yuval,
	Dept-Eng QLogic Storage Upstream, Javali, Nilesh,
	Adheer Chandravanshi, Dupuis, Chad, Kashyap, Saurav, Easi, Arun



On 19/10/16 1:33 PM, "Hannes Reinecke" <hare@suse.de> wrote:

>On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
>> From: Manish Rangankar <manish.rangankar@cavium.com>
>> 
>> This patch adds support for iscsi_transport LLD Login,
>> Logout, NOP-IN/NOP-OUT, Async, Reject PDU processing
>> and Firmware async event handling support.
>> 
>> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
>> Signed-off-by: Arun Easi <arun.easi@cavium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
>> ---
>>  drivers/scsi/qedi/qedi_fw.c    | 1123 ++++++++++++++++++++++++++++
>>  drivers/scsi/qedi/qedi_gbl.h   |   67 ++
>>  drivers/scsi/qedi/qedi_iscsi.c | 1604 ++++++++++++++++++++++++++++++++++++++++
>>  drivers/scsi/qedi/qedi_iscsi.h |  228 ++++++
>>  drivers/scsi/qedi/qedi_main.c  |  164 ++++
>>  5 files changed, 3186 insertions(+)
>>  create mode 100644 drivers/scsi/qedi/qedi_fw.c
>>  create mode 100644 drivers/scsi/qedi/qedi_gbl.h
>>  create mode 100644 drivers/scsi/qedi/qedi_iscsi.c
>>  create mode 100644 drivers/scsi/qedi/qedi_iscsi.h
>> 

--snipped--
>>
>> +static void qedi_process_async_mesg(struct qedi_ctx *qedi,
>> +				    union iscsi_cqe *cqe,
>> +				    struct iscsi_task *task,
>> +				    struct qedi_conn *qedi_conn,
>> +				    u16 que_idx)
>> +{
>> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
>> +	struct iscsi_session *session = conn->session;
>> +	struct iscsi_async_msg_hdr *cqe_async_msg;
>> +	struct iscsi_async *resp_hdr;
>> +	u32 scsi_lun[2];
>> +	u32 pdu_len, num_bdqs;
>> +	char bdq_data[QEDI_BDQ_BUF_SIZE];
>> +	unsigned long flags;
>> +
>> +	spin_lock_bh(&session->back_lock);
>> +
>> +	cqe_async_msg = &cqe->cqe_common.iscsi_hdr.async_msg;
>> +	pdu_len = cqe_async_msg->hdr_second_dword &
>> +		ISCSI_ASYNC_MSG_HDR_DATA_SEG_LEN_MASK;
>> +	num_bdqs = pdu_len / QEDI_BDQ_BUF_SIZE;
>> +
>> +	if (cqe->cqe_common.cqe_type == ISCSI_CQE_TYPE_UNSOLICITED) {
>> +		spin_lock_irqsave(&qedi->hba_lock, flags);
>> +		qedi_unsol_pdu_adjust_bdq(qedi, &cqe->cqe_unsolicited,
>> +					  pdu_len, num_bdqs, bdq_data);
>> +		spin_unlock_irqrestore(&qedi->hba_lock, flags);
>> +	}
>> +
>> +	resp_hdr = (struct iscsi_async *)&qedi_conn->gen_pdu.resp_hdr;
>> +	memset(resp_hdr, 0, sizeof(struct iscsi_hdr));
>> +	resp_hdr->opcode = cqe_async_msg->opcode;
>> +	resp_hdr->flags = 0x80;
>> +
>> +	scsi_lun[0] = cpu_to_be32(cqe_async_msg->lun.lo);
>> +	scsi_lun[1] = cpu_to_be32(cqe_async_msg->lun.hi);
>I _think_ we have a SCSI LUN structure ...

Will do.

--snipped--
>> +void qedi_process_iscsi_error(struct qedi_endpoint *ep, struct async_data *data)
>> +{
>> +	struct qedi_conn *qedi_conn;
>> +	struct qedi_ctx *qedi;
>> +	char warn_notice[] = "iscsi_warning";
>> +	char error_notice[] = "iscsi_error";
>> +	char *message;
>> +	int need_recovery = 0;
>> +	u32 err_mask = 0;
>> +	char msg[64];
>> +
>> +	if (!ep)
>> +		return;
>> +
>> +	qedi_conn = ep->conn;
>> +	if (!qedi_conn)
>> +		return;
>> +
>> +	qedi = ep->qedi;
>> +
>> +	QEDI_ERR(&qedi->dbg_ctx, "async event iscsi error:0x%x\n",
>> +		 data->error_code);
>> +
>> +	if (err_mask) {
>> +		need_recovery = 0;
>> +		message = warn_notice;
>> +	} else {
>> +		need_recovery = 1;
>> +		message = error_notice;
>> +	}
>> +
>> +	switch (data->error_code) {
>> +	case ISCSI_STATUS_NONE:
>> +		strcpy(msg, "tcp_error none");
>> +		break;
>> +	case ISCSI_CONN_ERROR_TASK_CID_MISMATCH:
>> +		strcpy(msg, "task cid mismatch");
>> +		break;
>> +	case ISCSI_CONN_ERROR_TASK_NOT_VALID:
>> +		strcpy(msg, "invalid task");
>> +		break;
>> +	case ISCSI_CONN_ERROR_RQ_RING_IS_FULL:
>> +		strcpy(msg, "rq ring full");
>> +		break;
>> +	case ISCSI_CONN_ERROR_CMDQ_RING_IS_FULL:
>> +		strcpy(msg, "cmdq ring full");
>> +		break;
>> +	case ISCSI_CONN_ERROR_HQE_CACHING_FAILED:
>> +		strcpy(msg, "sge caching failed");
>> +		break;
>> +	case ISCSI_CONN_ERROR_HEADER_DIGEST_ERROR:
>> +		strcpy(msg, "hdr digest error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_LOCAL_COMPLETION_ERROR:
>> +		strcpy(msg, "local cmpl error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_DATA_OVERRUN:
>> +		strcpy(msg, "invalid task");
>> +		break;
>> +	case ISCSI_CONN_ERROR_OUT_OF_SGES_ERROR:
>> +		strcpy(msg, "out of sge error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_TCP_SEG_PROC_IP_OPTIONS_ERROR:
>> +		strcpy(msg, "tcp seg ip options error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_TCP_IP_FRAGMENT_ERROR:
>> +		strcpy(msg, "tcp ip fragment error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_AHS_LEN:
>> +		strcpy(msg, "AHS len protocol error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_ITT_OUT_OF_RANGE:
>> +		strcpy(msg, "itt out of range error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_EXCEEDS_PDU_SIZE:
>> +		strcpy(msg, "data seg more than pdu size");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE:
>> +		strcpy(msg, "invalid opcode");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_OPCODE_BEFORE_UPDATE:
>> +		strcpy(msg, "invalid opcode before update");
>> +		break;
>> +	case ISCSI_CONN_ERROR_UNVALID_NOPIN_DSL:
>> +		strcpy(msg, "unexpected opcode");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_CARRIES_NO_DATA:
>> +		strcpy(msg, "r2t carries no data");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SN:
>> +		strcpy(msg, "data sn error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_IN_TTT:
>> +		strcpy(msg, "data TTT error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_TTT:
>> +		strcpy(msg, "r2t TTT error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_BUFFER_OFFSET:
>> +		strcpy(msg, "buffer offset error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_BUFFER_OFFSET_OOO:
>> +		strcpy(msg, "buffer offset ooo");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_R2T_SN:
>> +		strcpy(msg, "data seg len 0");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_0:
>> +		strcpy(msg, "data xer len error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_1:
>> +		strcpy(msg, "data xer len1 error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DESIRED_DATA_TRNS_LEN_2:
>> +		strcpy(msg, "data xer len2 error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_LUN:
>> +		strcpy(msg, "protocol lun error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO:
>> +		strcpy(msg, "f bit zero error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_F_BIT_ZERO_S_BIT_ONE:
>> +		strcpy(msg, "f bit zero s bit one error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_EXP_STAT_SN:
>> +		strcpy(msg, "exp stat sn error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DSL_NOT_ZERO:
>> +		strcpy(msg, "dsl not zero error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_INVALID_DSL:
>> +		strcpy(msg, "invalid dsl");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_DATA_SEG_LEN_TOO_BIG:
>> +		strcpy(msg, "data seg len too big");
>> +		break;
>> +	case ISCSI_CONN_ERROR_PROTOCOL_ERR_OUTSTANDING_R2T_COUNT:
>> +		strcpy(msg, "outstanding r2t count error");
>> +		break;
>> +	case ISCSI_CONN_ERROR_SENSE_DATA_LENGTH:
>> +		strcpy(msg, "sense datalen error");
>> +		break;
>Please use an array for mapping values onto strings.

Will add this change in next revision.

Thanks,
Manish R.



* Re: [RFC 5/6] qedi: Add support for iSCSI session management.
  2016-10-19 13:28   ` Johannes Thumshirn
@ 2016-10-20  9:12     ` Rangankar, Manish
  0 siblings, 0 replies; 38+ messages in thread
From: Rangankar, Manish @ 2016-10-20  9:12 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Mintz, Yuval, Dept-Eng QLogic Storage Upstream, Javali, Nilesh,
	Adheer Chandravanshi, Dupuis, Chad, Kashyap, Saurav, Easi, Arun



On 19/10/16 6:58 PM, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:

>On Wed, Oct 19, 2016 at 01:01:12AM -0400, manish.rangankar@cavium.com
>wrote:
>> From: Manish Rangankar <manish.rangankar@cavium.com>
>> 
>> This patch adds support for iscsi_transport LLD Login,
>> Logout, NOP-IN/NOP-OUT, Async, Reject PDU processing
>> and Firmware async event handling support.
>> 
>> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
>> Signed-off-by: Arun Easi <arun.easi@cavium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
>> ---
>
>[...]
>
>> +void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd)
>> +{
>> +	struct scsi_cmnd *sc = cmd->scsi_cmd;
>> +
>> +	if (cmd->io_tbl.sge_valid && sc) {
>> +		scsi_dma_unmap(sc);
>> +		cmd->io_tbl.sge_valid = 0;
>> +	}
>> +}
>
>Maybe set sge_valid to 0 and then call scsi_dma_unmap(). I don't know if
>it's
>really racy but it looks like it is.
>
>[...]
>
>> +static void qedi_process_text_resp(struct qedi_ctx *qedi,
>> +				   union iscsi_cqe *cqe,
>> +				   struct iscsi_task *task,
>> +				   struct qedi_conn *qedi_conn)
>> +{
>> +	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
>> +	struct iscsi_session *session = conn->session;
>> +	struct iscsi_task_context *task_ctx;
>> +	struct iscsi_text_rsp *resp_hdr_ptr;
>> +	struct iscsi_text_response_hdr *cqe_text_response;
>> +	struct qedi_cmd *cmd;
>> +	int pld_len;
>> +	u32 *tmp;
>> +
>> +	cmd = (struct qedi_cmd *)task->dd_data;
>> +	task_ctx = (struct iscsi_task_context
>>*)qedi_get_task_mem(&qedi->tasks,
>> +								  cmd->task_id);
>
>No need to cast here, qedi_get_task_mem() returns void *.
>
>[...]
>
>> +	cqe_login_response = &cqe->cqe_common.iscsi_hdr.login_response;
>> +	task_ctx = (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
>> +							  cmd->task_id);
>
>Same here.
>
>[...]
>
>> +	}
>> +
>> +	pbl = (struct scsi_bd *)qedi->bdq_pbl;
>> +	pbl += (qedi->bdq_prod_idx % qedi->rq_num_entries);
>> +	pbl->address.hi =
>> +		      cpu_to_le32((u32)(((u64)(qedi->bdq[idx].buf_dma)) >> 32));
>> +	pbl->address.lo =
>> +			cpu_to_le32(((u32)(((u64)(qedi->bdq[idx].buf_dma)) &
>> +					    0xffffffff)));
>
>Is this LISP or C?
>
>> +	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN,
>> +		  "pbl [0x%p] pbl->address hi [0x%llx] lo [0x%llx] idx [%d]\n",
>> +		  pbl, pbl->address.hi, pbl->address.lo, idx);
>> +	pbl->opaque.hi = cpu_to_le32((u32)(((u64)0) >> 32));
>
>Isn't this plain pbl->opaque.hi = 0; ?
>
>> +	pbl->opaque.lo = cpu_to_le32(((u32)(((u64)idx) & 0xffffffff)));
>> +
>
>[...]
>
>> +	switch (comp_type) {
>> +	case ISCSI_CQE_TYPE_SOLICITED:
>> +	case ISCSI_CQE_TYPE_SOLICITED_WITH_SENSE:
>> +		fw_task_ctx =
>> +		  (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks,
>> +						      cqe->cqe_solicited.itid);
>
>Again, no cast needed.
>
>[...]
>
>> +	writel(*(u32 *)&dbell, qedi_conn->ep->p_doorbell);
>> +	/* Make sure fw idx is coherent */
>> +	wmb();
>> +	mmiowb();
>
>Isn't either wmb() or mmiowb() enough?
>
>[..]
>
>> +
>> +	fw_task_ctx =
>> +	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
>
>Cast again.
>
>[...]
>
>> +	fw_task_ctx =
>> +	     (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
>
>^^
>
>[...]
>
>> +	fw_task_ctx =
>> +	(struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
>
>
>[...]
>
>> +	fw_task_ctx =
>> +	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
>> +
>
>[...]
>
>> +
>> +	qedi = (struct qedi_ctx *)iscsi_host_priv(shost);
>
>Same goes for iscsi_host_priv();
>
>[...]
>
>> +	ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
>> +					       ((qedi_ep->state ==
>> +						EP_STATE_OFLDCONN_FAILED) ||
>> +						(qedi_ep->state ==
>> +						EP_STATE_OFLDCONN_COMPL)),
>> +						msecs_to_jiffies(timeout_ms));
>
>Maybe:
>#define QEDI_OLDCON_STATE(q) ((q)->state == EP_STATE_OFLDCONN_FAILED || \
>				(q)->state == EP_STATE_OFLDCONN_COMPL)
>
>ret = wait_event_interruptible_timeout(qedi_ep->ofld_wait,
>					QEDI_OLDCON_STATE(qedi_ep),
>					msec_to_jiffies(timeout_ms));
>
>But that could be just me hating linewraps.
>
>[...]

We will address all the above review comments in the next revision.

Thanks,
Manish R.



* Re: [RFC 6/6] qedi: Add support for data path.
  2016-10-19 10:24   ` Hannes Reinecke
@ 2016-10-20  9:24     ` Rangankar, Manish
  0 siblings, 0 replies; 38+ messages in thread
From: Rangankar, Manish @ 2016-10-20  9:24 UTC (permalink / raw)
  To: Hannes Reinecke, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev, Mintz, Yuval,
	Dept-Eng QLogic Storage Upstream, Javali, Nilesh,
	Adheer Chandravanshi, Dupuis, Chad, Kashyap, Saurav, Easi, Arun



On 19/10/16 3:54 PM, "Hannes Reinecke" <hare@suse.de> wrote:

>On 10/19/2016 07:01 AM, manish.rangankar@cavium.com wrote:
>> From: Manish Rangankar <manish.rangankar@cavium.com>
>> 
>> This patch adds support for data path and TMF handling.
>> 
>> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
>> Signed-off-by: Arun Easi <arun.easi@cavium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
>> ---
>>  drivers/scsi/qedi/qedi_fw.c    | 1282
>>++++++++++++++++++++++++++++++++++++++++
>>  drivers/scsi/qedi/qedi_gbl.h   |    6 +
>>  drivers/scsi/qedi/qedi_iscsi.c |    6 +
>>  drivers/scsi/qedi/qedi_main.c  |    4 +
>>  4 files changed, 1298 insertions(+)
>> 
>> diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
>> index a820785..af1e14d 100644
>> --- a/drivers/scsi/qedi/qedi_fw.c
>> +++ b/drivers/scsi/qedi/qedi_fw.c
>> @@ -147,6 +147,114 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
>>  	spin_unlock(&session->back_lock);
>>  }

--snipped--
>> +void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
>> +		   u16 tid, int8_t direction)
>> +{
>> +	struct qedi_io_log *io_log;
>> +	struct iscsi_conn *conn = task->conn;
>> +	struct qedi_conn *qedi_conn = conn->dd_data;
>> +	struct scsi_cmnd *sc_cmd = task->sc;
>> +	unsigned long flags;
>> +	u8 op;
>> +
>> +	spin_lock_irqsave(&qedi->io_trace_lock, flags);
>> +
>> +	io_log = &qedi->io_trace_buf[qedi->io_trace_idx];
>> +	io_log->direction = direction;
>> +	io_log->task_id = tid;
>> +	io_log->cid = qedi_conn->iscsi_conn_id;
>> +	io_log->lun = sc_cmd->device->lun;
>> +	io_log->op = sc_cmd->cmnd[0];
>> +	op = sc_cmd->cmnd[0];
>> +
>> +	if (op == READ_10 || op == WRITE_10) {
>> +		io_log->lba[0] = sc_cmd->cmnd[2];
>> +		io_log->lba[1] = sc_cmd->cmnd[3];
>> +		io_log->lba[2] = sc_cmd->cmnd[4];
>> +		io_log->lba[3] = sc_cmd->cmnd[5];
>> +	} else {
>> +		io_log->lba[0] = 0;
>> +		io_log->lba[1] = 0;
>> +		io_log->lba[2] = 0;
>> +		io_log->lba[3] = 0;
>> +	}
>Only for READ_10 and WRITE_10? What about the other read or write
>commands?

We will add support for other scsi commands in the next revision.

>
>> +	io_log->bufflen = scsi_bufflen(sc_cmd);
>> +	io_log->sg_count = scsi_sg_count(sc_cmd);
>> +	io_log->fast_sgs = qedi->fast_sgls;
>> +	io_log->cached_sgs = qedi->cached_sgls;
>> +	io_log->slow_sgs = qedi->slow_sgls;
>> +	io_log->cached_sge = qedi->use_cached_sge;
>> +	io_log->slow_sge = qedi->use_slow_sge;
>> +	io_log->fast_sge = qedi->use_fast_sge;
>> +	io_log->result = sc_cmd->result;
>> +	io_log->jiffies = jiffies;
>> +	io_log->blk_req_cpu = smp_processor_id();
>> +
>> +	if (direction == QEDI_IO_TRACE_REQ) {
>> +		/* For requests we only care about the submission CPU */
>> +		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
>> +		io_log->intr_cpu = 0;
>> +		io_log->blk_rsp_cpu = 0;
>> +	} else if (direction == QEDI_IO_TRACE_RSP) {
>> +		io_log->req_cpu = smp_processor_id() % qedi->num_queues;
>> +		io_log->intr_cpu = qedi->intr_cpu;
>> +		io_log->blk_rsp_cpu = smp_processor_id();
>> +	}
>> +
>> +	qedi->io_trace_idx++;
>> +	if (qedi->io_trace_idx == QEDI_IO_TRACE_SIZE)
>> +		qedi->io_trace_idx = 0;
>> +
>> +	qedi->use_cached_sge = false;
>> +	qedi->use_slow_sge = false;
>> +	qedi->use_fast_sge = false;
>> +
>> +	spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
>> +}
>> +
>> +int qedi_iscsi_send_ioreq(struct iscsi_task *task)
>> +{
>> +	struct iscsi_conn *conn = task->conn;
>> +	struct iscsi_session *session = conn->session;
>> +	struct Scsi_Host *shost = iscsi_session_to_shost(session->cls_session);
>> +	struct qedi_ctx *qedi = iscsi_host_priv(shost);
>> +	struct qedi_conn *qedi_conn = conn->dd_data;
>> +	struct qedi_cmd *cmd = task->dd_data;
>> +	struct scsi_cmnd *sc = task->sc;
>> +	struct iscsi_task_context *fw_task_ctx;
>> +	struct iscsi_cached_sge_ctx *cached_sge;
>> +	struct iscsi_phys_sgl_ctx *phys_sgl;
>> +	struct iscsi_virt_sgl_ctx *virt_sgl;
>> +	struct ystorm_iscsi_task_st_ctx *yst_cxt;
>> +	struct mstorm_iscsi_task_st_ctx *mst_cxt;
>> +	struct iscsi_sgl *sgl_struct;
>> +	struct iscsi_sge *single_sge;
>> +	struct iscsi_scsi_req *hdr = (struct iscsi_scsi_req *)task->hdr;
>> +	struct iscsi_sge *bd = cmd->io_tbl.sge_tbl;
>> +	enum iscsi_task_type task_type;
>> +	struct iscsi_cmd_hdr *fw_cmd;
>> +	u32 scsi_lun[2];
>> +	u16 cq_idx = smp_processor_id() % qedi->num_queues;
>> +	s16 ptu_invalidate = 0;
>> +	s16 tid = 0;
>> +	u8 num_fast_sgs;
>> +
>> +	tid = qedi_get_task_idx(qedi);
>> +	if (tid == -1)
>> +		return -ENOMEM;
>> +
>> +	qedi_iscsi_map_sg_list(cmd);
>> +
>> +	int_to_scsilun(sc->device->lun, (struct scsi_lun *)scsi_lun);
>> +	fw_task_ctx =
>> +	      (struct iscsi_task_context *)qedi_get_task_mem(&qedi->tasks, tid);
>> +
>> +	memset(fw_task_ctx, 0, sizeof(struct iscsi_task_context));
>> +	cmd->task_id = tid;
>> +
>> +	/* Ystrom context */
>Ystrom or Ystorm?

Noted

>
>> +	fw_cmd = &fw_task_ctx->ystorm_st_context.pdu_hdr.cmd;
>> +	SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_ATTR, ISCSI_ATTR_SIMPLE);
>> +
>> +	if (sc->sc_data_direction == DMA_TO_DEVICE) {
>> +		if (conn->session->initial_r2t_en) {
>> +			fw_task_ctx->ustorm_ag_context.exp_data_acked =
>> +				min((conn->session->imm_data_en *
>> +				    conn->max_xmit_dlength),
>> +				    conn->session->first_burst);
>> +			fw_task_ctx->ustorm_ag_context.exp_data_acked =
>> +			      min(fw_task_ctx->ustorm_ag_context.exp_data_acked,
>> +				  scsi_bufflen(sc));
>> +		} else {
>> +			fw_task_ctx->ustorm_ag_context.exp_data_acked =
>> +			      min(conn->session->first_burst, scsi_bufflen(sc));
>> +		}
>> +
>> +		SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_WRITE, 1);
>> +		task_type = ISCSI_TASK_TYPE_INITIATOR_WRITE;
>> +	} else {
>> +		if (scsi_bufflen(sc))
>> +			SET_FIELD(fw_cmd->flags_attr, ISCSI_CMD_HDR_READ, 1);
>> +		task_type = ISCSI_TASK_TYPE_INITIATOR_READ;
>> +	}
>> +
>> +	fw_cmd->lun.lo = be32_to_cpu(scsi_lun[0]);
>> +	fw_cmd->lun.hi = be32_to_cpu(scsi_lun[1]);
>> +
>> +	qedi_update_itt_map(qedi, tid, task->itt);
>> +	fw_cmd->itt = qedi_set_itt(tid, get_itt(task->itt));
>> +	fw_cmd->expected_transfer_length = scsi_bufflen(sc);
>> +	fw_cmd->cmd_sn = be32_to_cpu(hdr->cmdsn);
>> +	fw_cmd->opcode = hdr->opcode;
>> +	qedi_cpy_scsi_cdb(sc, (u32 *)fw_cmd->cdb);
>> +
>> +	/* Mstorm context */
>> +	fw_task_ctx->mstorm_st_context.sense_db.lo = (u32)cmd->sense_buffer_dma;
>> +	fw_task_ctx->mstorm_st_context.sense_db.hi =
>> +					(u32)((u64)cmd->sense_buffer_dma >> 32);
>> +	fw_task_ctx->mstorm_ag_context.task_cid = qedi_conn->iscsi_conn_id;
>> +	fw_task_ctx->mstorm_st_context.task_type = task_type;
>> +
>> +	if (qedi->tid_reuse_count[tid] == QEDI_MAX_TASK_NUM) {
>> +		ptu_invalidate = 1;
>> +		qedi->tid_reuse_count[tid] = 0;
>> +	}
>> +	fw_task_ctx->ystorm_st_context.state.reuse_count =
>> +						     qedi->tid_reuse_count[tid];
>> +	fw_task_ctx->mstorm_st_context.reuse_count =
>> +						   qedi->tid_reuse_count[tid]++;
>> +
>> +	/* Ustrorm context */
>Ustrorm?

Noted

Thanks,
Manish R.


* RE: [RFC 2/6] qed: Add iSCSI out of order packet handling.
  2016-10-19  7:36   ` Hannes Reinecke
@ 2016-10-20 12:58     ` Mintz, Yuval
  0 siblings, 0 replies; 38+ messages in thread
From: Mintz, Yuval @ 2016-10-20 12:58 UTC (permalink / raw)
  To: Hannes Reinecke, Rangankar, Manish, lduncan, cleech
  Cc: martin.petersen, jejb, linux-scsi, netdev,
	Dept-Eng QLogic Storage Upstream, Easi, Arun

> > This patch adds out of order packet handling for hardware offloaded
> > iSCSI. Out of order packet handling requires driver buffer allocation
> > and assistance.
> >
> > Signed-off-by: Arun Easi <arun.easi@cavium.com>
> > Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
> >
> Hmm. The entire out-of-order handling is pretty generic. I really wonder
> if this doesn't apply to iSCSI in general; surely iscsi_tcp suffers from
> the same problem, no?
> If so, wouldn't it be better to move it into generic (iSCSI) code so
> that all implementations would benefit from it?

[disclaimer - I'm far from knowledgeable in iSCSI]

I agree that the problem of out-of-order handling is probably generic,
but our solution is very device-oriented.
As the device lacks [a lot of] internal memory, it uses the host memory
for out-of-order buffers and the driver assistance in pushing them when
they are needed.
From the driver's perspective, all the data is completely opaque; all it does is
follow the firmware's guidance in storing and re-transmitting buffers when
required.

Now, I guess the logic could be divided between hardware-specifics -
Interaction with 'client' [in our case, device's firmware], to receive
new data, instructions regarding placement and re-transmission,
and a lower generic data structure which supports manipulation of
buffers [push-left, push-right, join, etc.].

But given that the data structure would completely lack any protocol
knowledge [as our implementation neither has nor requires such],
I think there would be very little gain - we might find out that as much
as 80% of the code is device interaction, and the remaining so-called
'generic' data structure won't be that useful to other clients, as it is
closely tied to our device's needs and API.

Either way, placing this under iscsi would probably be insufficient for our
future needs, as our qed-iwarp driver would also require this functionality.

Thanks,
Yuval


* Re: [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  2016-10-19 10:02   ` Johannes Thumshirn
  2016-10-20  8:41     ` Rangankar, Manish
@ 2016-10-23 14:04     ` Rangankar, Manish
  1 sibling, 0 replies; 38+ messages in thread
From: Rangankar, Manish @ 2016-10-23 14:04 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: lduncan, cleech, martin.petersen, jejb, linux-scsi, netdev,
	Mintz, Yuval, Dept-Eng QLogic Storage Upstream, Javali, Nilesh,
	Adheer Chandravanshi, Dupuis, Chad, Kashyap, Saurav, Easi, Arun


On 19/10/16 3:32 PM, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:

>On Wed, Oct 19, 2016 at 01:01:10AM -0400, manish.rangankar@cavium.com
>wrote:
>> From: Manish Rangankar <manish.rangankar@cavium.com>
>> 
>> The QLogic FastLinQ Driver for iSCSI (qedi) is the iSCSI specific module
>> for 41000 Series Converged Network Adapters by QLogic.
>> 
>> This patch consists of following changes:
>>   - MAINTAINERS Makefile and Kconfig changes for qedi,
>>   - PCI driver registration,
>>   - iSCSI host level initialization,
>>   - Debugfs and log level infrastructure.
>> 
>> Signed-off-by: Nilesh Javali <nilesh.javali@cavium.com>
>> Signed-off-by: Adheer Chandravanshi <adheer.chandravanshi@qlogic.com>
>> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com>
>> Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com>
>> Signed-off-by: Arun Easi <arun.easi@cavium.com>
>> Signed-off-by: Manish Rangankar <manish.rangankar@cavium.com>
>> ---
>
>[...]
>
>> +/* MSI-X fastpath handler code */
>> +static irqreturn_t qedi_msix_handler(int irq, void *dev_id)
>> +{
>> +	struct qedi_fastpath *fp = dev_id;
>> +	struct qedi_ctx *qedi = fp->qedi;
>> +	bool wake_io_thread = true;
>> +
>> +	qed_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
>> +
>> +process_again:
>> +	wake_io_thread = qedi_process_completions(fp);
>> +	if (wake_io_thread) {
>> +		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_DISC,
>> +			  "process already running\n");
>> +	}
>> +
>> +	if (qedi_fp_has_work(fp) == 0)
>> +		qed_sb_update_sb_idx(fp->sb_info);
>> +
>> +	/* Check for more work */
>> +	rmb();
>> +
>> +	if (qedi_fp_has_work(fp) == 0)
>> +		qed_sb_ack(fp->sb_info, IGU_INT_ENABLE, 1);
>> +	else
>> +		goto process_again;
>> +
>> +	return IRQ_HANDLED;
>> +}
>
>You might want to consider workqueues here.

If there is no serious objection to the current per-CPU threads
implementation,
then we would like to make the workqueue changes just after the first
submission. This is because,
for this change, we have to go through a complete validation cycle on our part.


Thanks,
Manish R.



end of thread, other threads:[~2016-10-23 14:04 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-19  5:01 [RFC 0/6] Add QLogic FastLinQ iSCSI (qedi) driver manish.rangankar
2016-10-19  5:01 ` manish.rangankar
2016-10-19  5:01 ` [RFC 1/6] qed: Add support for hardware offloaded iSCSI manish.rangankar
2016-10-19  5:01   ` manish.rangankar
2016-10-19  7:31   ` Hannes Reinecke
2016-10-19 22:28     ` Arun Easi
2016-10-19 22:28       ` Arun Easi
2016-10-19  9:09   ` Johannes Thumshirn
2016-10-20  0:14     ` Arun Easi
2016-10-20  0:14       ` Arun Easi
2016-10-20  7:09       ` Johannes Thumshirn
2016-10-19  5:01 ` [RFC 2/6] qed: Add iSCSI out of order packet handling manish.rangankar
2016-10-19  5:01   ` manish.rangankar
2016-10-19  7:36   ` Hannes Reinecke
2016-10-20 12:58     ` Mintz, Yuval
2016-10-19  9:39   ` Johannes Thumshirn
2016-10-20  0:43     ` Arun Easi
2016-10-20  0:43       ` Arun Easi
2016-10-19  5:01 ` [RFC 3/6] qedi: Add QLogic FastLinQ offload iSCSI driver framework manish.rangankar
2016-10-19  5:01   ` manish.rangankar
2016-10-19  7:45   ` Hannes Reinecke
2016-10-20  8:27     ` Rangankar, Manish
2016-10-19 10:02   ` Johannes Thumshirn
2016-10-20  8:41     ` Rangankar, Manish
2016-10-23 14:04     ` Rangankar, Manish
2016-10-19  5:01 ` [RFC 4/6] qedi: Add LL2 iSCSI interface for offload iSCSI manish.rangankar
2016-10-19  5:01   ` manish.rangankar
2016-10-19  7:53   ` Hannes Reinecke
2016-10-19  5:01 ` [RFC 5/6] qedi: Add support for iSCSI session management manish.rangankar
2016-10-19  5:01   ` manish.rangankar
2016-10-19  8:03   ` Hannes Reinecke
2016-10-20  9:09     ` Rangankar, Manish
2016-10-19 13:28   ` Johannes Thumshirn
2016-10-20  9:12     ` Rangankar, Manish
2016-10-19  5:01 ` [RFC 6/6] qedi: Add support for data path manish.rangankar
2016-10-19  5:01   ` manish.rangankar
2016-10-19 10:24   ` Hannes Reinecke
2016-10-20  9:24     ` Rangankar, Manish
