* [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
@ 2011-12-18  2:02 Nicholas A. Bellinger
  2011-12-18  2:02 ` [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support Nicholas A. Bellinger
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-18  2:02 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: Andrew Vasquez, Giridhar Malavali, Christoph Hellwig,
	James Bottomley, Roland Dreier, Joern Engel, Madhuranath Iyengar,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

Hi Andrew, James, Roland, Christoph & Co,

The following is the fourth RFC series for adding qla2xxx LLD target mode support
into mainline 8.03.07.07-k @ 3.2-rc5, along with the accompanying tcm_qla2xxx.ko
fabric module cut against target v4.1 infrastructure.  As before, this series
has been broken up into reviewable sections and should be considered a 'for-3.4'
item while the remaining TODO items are resolved.

The code is available directly against the most recent target-pending/for-next
changes using new v3.3 target_submit_cmd logic here:

  git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git qla_tgt-rfc-v4

There have been many improvements, bugfixes and cleanups since RFC-v3 during
the course of 2011.  Thanks to everyone who has contributed to getting this
code into shape, and thanks as well to Vlad and the SCST community for the
original qla2x00t implementation.

Here is a brief rundown of the changes since RFC-v3 earlier this year:

*) Convert to use scatterlists for all CDB types for mainline target (andy + hch)
*) Fix for NULL s_id in qla_tgt_exec_sess_work() abort handling
   (roland via r3244 scst svn)
*) Disable EXPLICIT_CONFORM for FCP READ CHECK_CONDITION to address issue where
   SRR was being triggered during CHECK_CONDITION handling (nab + roland)
*) Address issues in qla_target.c request queue full handling (roland)
*) Add support for active I/O shutdown using generic target logic.  This includes
   numerous bugfixes related to shutdown. (roland + pure team + nab)
*) Kick target ATIO queue after qla2x00_fw_ready() completion. (roland + andrew)
*) Convert qla_target.c to use mainline v3.1 qla_dbg macros (nab)
*) Move to common hardware structure definitions shared between 24xx and older
   hardware. (madhu)
*) Make qla_target.c follow qla2xxx consistent code+naming conventions (madhu)
*) Add tcm_qla2xxx_free_wq for process context release instead of using
   TRANSPORT_FREE_CMD_INTR (hch + nab)
*) Refactor qla_tgt_send_term_exchange() in order to remove qla_tgt_cmd->locked_rsp
   exception path usage. (nab)
*) Conversion to use an internal workqueue in qla_target.c code for I/O dispatch
   into tcm_qla2xxx instead of legacy ->new_cmd_map(). (hch + nab)
*) Removal of unnecessary qla_hw_data->hardware_lock access in tcm_qla2xxx response
   path in tcm_qla2xxx_check_stop_free() and tcm_qla2xxx_release_cmd() (nab + joern)
*) Conversion to use target_submit_cmd() with for-next v3.3 code (nab + hch)
*) Merge into single tcm_qla2xxx.[c,h] files, and move into drivers/scsi/qla2xxx/

So to get the ball rolling on the remaining items, one open question is how to
resolve mixed target/initiator mode operation on a per-port hardware context basis.

This is currently done with a qla2xxx module parameter, but to do mixed mode
properly we will need something smarter between scsi-core and target-core ports.
Note we currently set qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED, so by default
patch #1 effectively disables initiator mode by skipping the scsi_scan_host()
call, which avoids scsi-core timeouts when performing an immediate transition
from initiator mode -> target mode via ISP reset.
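
For reference, the qlini_mode check amounts to something like the following
sketch in the qla2xxx probe path (the qla_ini_mode_enabled() helper and the
exact hook point are illustrative, not necessarily the patch's final form):

        /* Sketch only: skip initiator-mode LUN scanning when initiator
         * mode has been disabled via the qlini_mode module parameter. */
        if (qla_ini_mode_enabled(base_vha))
                scsi_scan_host(host);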

What we would like to eventually do is run the qla2xxx LLD to allow both initiator
and target mode access based on the physical HW port.  We tried some simple
qla_target.c changes to make this work, but to really do it properly
and address the current qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED usage it will
need to involve scsi-core, so that individual HW ports can be configured and
dynamically changed across different access modes.

(hch + james comments here..?)

So along with resolving this issue with mixed T/I mode, other TODO items
include:

*) Separate out per qla_tgt session management from qla_hw_data->hardware_lock
   to use a separate lock. (nab)
*) Move write-pending abort checks into process-context (nab)
*) Finish up NPIV support for /sys/kernel/config/target/qla2xxx_npiv/ and determine
   remaining NPIV I/O path items. (nab + madhu)
*) FC jammer testing to verify the bugfix for SRRs with non-zero relative offsets
   before re-enabling them. (andrew + qlogic team)
*) Add proper support for ABORT_TASK in target-core (nab)
*) Multi-queue support with qla_hw_data->mqenable=1 (mq target fw support..?) 
*) Global event handling for active sessions in qla_tgt_reset()

Please have a look and let us know if you have any comments,

Thank you,

Nicholas Bellinger (3):
  qla2xxx: Add LLD internal target-mode support
  qla2xxx: Enable 2xxx series LLD target mode support
  qla2xxx: Add tcm_qla2xxx fabric module for mainline target

 drivers/scsi/qla2xxx/Kconfig       |    8 +
 drivers/scsi/qla2xxx/Makefile      |    3 +-
 drivers/scsi/qla2xxx/qla_attr.c    |    5 +-
 drivers/scsi/qla2xxx/qla_dbg.c     |   13 +-
 drivers/scsi/qla2xxx/qla_dbg.h     |    5 +
 drivers/scsi/qla2xxx/qla_def.h     |   70 +-
 drivers/scsi/qla2xxx/qla_gbl.h     |    7 +
 drivers/scsi/qla2xxx/qla_gs.c      |    4 +-
 drivers/scsi/qla2xxx/qla_init.c    |  101 +-
 drivers/scsi/qla2xxx/qla_iocb.c    |  105 +-
 drivers/scsi/qla2xxx/qla_isr.c     |   86 +-
 drivers/scsi/qla2xxx/qla_mbx.c     |  122 +-
 drivers/scsi/qla2xxx/qla_mid.c     |   21 +-
 drivers/scsi/qla2xxx/qla_os.c      |  126 +-
 drivers/scsi/qla2xxx/qla_target.c  | 5482 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_target.h  | 1147 ++++++++
 drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2059 ++++++++++++++
 drivers/scsi/qla2xxx/tcm_qla2xxx.h |  148 +
 18 files changed, 9454 insertions(+), 58 deletions(-)
 create mode 100644 drivers/scsi/qla2xxx/qla_target.c
 create mode 100644 drivers/scsi/qla2xxx/qla_target.h
 create mode 100644 drivers/scsi/qla2xxx/tcm_qla2xxx.c
 create mode 100644 drivers/scsi/qla2xxx/tcm_qla2xxx.h

-- 
1.7.2.3


* [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support
  2011-12-18  2:02 [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Nicholas A. Bellinger
@ 2011-12-18  2:02 ` Nicholas A. Bellinger
  2011-12-19 22:59   ` Roland Dreier
  2011-12-18  2:02 ` [RFC-v4 2/3] qla2xxx: Enable 2xxx series LLD target mode support Nicholas A. Bellinger
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-18  2:02 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: Andrew Vasquez, Giridhar Malavali, Christoph Hellwig,
	James Bottomley, Roland Dreier, Joern Engel, Madhuranath Iyengar,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds the internal qla_target.[c,h] support pieces for qla2xxx
series target mode.  This code was originally based on the external
qla2x00t module for 8.02.01-k4, and has been refactored to push the bulk
of the code into the mainline qla2xxx.ko LLD -> qla_target.c.

The implementation uses internal workqueues for I/O context submission into
the tcm_qla2xxx code, and includes the following API for external interaction,
allowing the qla2xxx LLD to function without direct target-core dependencies:

struct qla_tgt_func_tmpl {

        int (*handle_cmd)(struct scsi_qla_host *, struct qla_tgt_cmd *,
                        unsigned char *, uint32_t, int, int, int);
        int (*handle_data)(struct qla_tgt_cmd *);
        int (*handle_tmr)(struct qla_tgt_mgmt_cmd *, uint32_t, uint8_t);
        void (*free_cmd)(struct qla_tgt_cmd *);
        void (*free_session)(struct qla_tgt_sess *);

        int (*check_initiator_node_acl)(struct scsi_qla_host *, unsigned char *,
                                        void *, uint8_t *, uint16_t);
        struct qla_tgt_sess *(*find_sess_by_loop_id)(struct scsi_qla_host *,
                                                const uint16_t);
        struct qla_tgt_sess *(*find_sess_by_s_id)(struct scsi_qla_host *,
                                                const uint8_t *);
};
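
As a usage sketch, the tcm_qla2xxx fabric module fills out this template and
the LLD invokes the callbacks through qla_hw_data->tgt_ops (the callback names
below are illustrative):

        static struct qla_tgt_func_tmpl tcm_qla2xxx_template = {
                .handle_cmd             = tcm_qla2xxx_handle_cmd,
                .handle_data            = tcm_qla2xxx_handle_data,
                .handle_tmr             = tcm_qla2xxx_handle_tmr,
                .free_cmd               = tcm_qla2xxx_free_cmd,
                .free_session           = tcm_qla2xxx_free_session,
                .check_initiator_node_acl = tcm_qla2xxx_check_initiator_node_acl,
                .find_sess_by_loop_id   = tcm_qla2xxx_find_sess_by_loop_id,
                .find_sess_by_s_id      = tcm_qla2xxx_find_sess_by_s_id,
        };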

Cc: Andrew Vasquez <andrew.vasquez@qlogic.com>
Cc: Giridhar Malavali <giridhar.malavali@qlogic.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Bottomley <JBottomley@Parallels.com>
Cc: Roland Dreier <roland@purestorage.com>
Cc: Joern Engel <joern@logfs.org>
Cc: Madhuranath Iyengar <mni@risingtidesystems.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/scsi/qla2xxx/qla_target.c | 5482 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_target.h | 1147 ++++++++
 2 files changed, 6629 insertions(+), 0 deletions(-)
 create mode 100644 drivers/scsi/qla2xxx/qla_target.c
 create mode 100644 drivers/scsi/qla2xxx/qla_target.h

diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
new file mode 100644
index 0000000..1b9be36
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -0,0 +1,5482 @@
+/*
+ *  qla_target.c SCSI LLD infrastructure for QLogic 22xx/23xx/24xx/25xx
+ *
+ *  based on qla2x00t.c code:
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2006 Nathaniel Clark <nate@misrule.us>
+ *  Copyright (C) 2006 - 2010 ID7 Ltd.
+ *
+ *  Forward port and refactoring to modern qla2xxx and target/configfs
+ *
+ *  Copyright (C) 2010-2011 Nicholas A. Bellinger <nab@kernel.org>
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/blkdev.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/list.h>
+#include <linux/workqueue.h>
+#include <asm/unaligned.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_tcq.h>
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+
+#include "qla_def.h"
+#include "qla_target.h"
+
+static char *qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED;
+module_param(qlini_mode, charp, S_IRUGO);
+MODULE_PARM_DESC(qlini_mode,
+	"Determines when initiator mode will be enabled. Possible values: "
+	"\"exclusive\" - initiator mode will be enabled on load, "
+	"disabled on enabling target mode and then on disabling target mode "
+	"enabled back; "
+	"\"disabled\" (default) - initiator mode will never be enabled; "
+	"\"enabled\" - initiator mode will always stay enabled.");
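+/*
+ * Example usage (illustrative): keep initiator mode enabled at load time:
+ *
+ *   modprobe qla2xxx qlini_mode="enabled"
+ */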
+
+static int ql2x_ini_mode = QLA2XXX_INI_MODE_EXCLUSIVE;
+
+/*
+ * From scsi/fc/fc_fcp.h
+ */
+enum fcp_resp_rsp_codes {
+	FCP_TMF_CMPL = 0,
+	FCP_DATA_LEN_INVALID = 1,
+	FCP_CMND_FIELDS_INVALID = 2,
+	FCP_DATA_PARAM_MISMATCH = 3,
+	FCP_TMF_REJECTED = 4,
+	FCP_TMF_FAILED = 5,
+	FCP_TMF_INVALID_LUN = 9,
+};
+
+/*
+ * fc_pri_ta from scsi/fc/fc_fcp.h
+ */
+#define FCP_PTA_SIMPLE      0   /* simple task attribute */
+#define FCP_PTA_HEADQ       1   /* head of queue task attribute */
+#define FCP_PTA_ORDERED     2   /* ordered task attribute */
+#define FCP_PTA_ACA         4   /* auto. contigent allegiance */
+#define FCP_PTA_MASK        7   /* mask for task attribute field */
+#define FCP_PRI_SHIFT       3   /* priority field starts in bit 3 */
+#define FCP_PRI_RESVD_MASK  0x80        /* reserved bits in priority field */
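+/*
+ * Illustrative decode of an FCP_CMND pri_ta byte using the above masks:
+ *
+ *   task_attr = pri_ta & FCP_PTA_MASK;
+ *   priority  = (pri_ta & ~FCP_PRI_RESVD_MASK) >> FCP_PRI_SHIFT;
+ */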
+
+/*
+ * This driver calls qla2x00_req_pkt() and qla2x00_issue_marker(), which
+ * must be called under HW lock and could unlock/lock it inside.
+ * It isn't an issue, since in the current implementation on the time when
+ * those functions are called:
+ *
+ *   - Either context is IRQ and only IRQ handler can modify HW data,
+ *     including rings related fields,
+ *
+ *   - Or access to target mode variables from struct qla_tgt doesn't
+ *     cross those functions boundaries, except tgt_stop, which
+ *     additionally protected by irq_cmd_count.
+ */
+
+static int __qla_tgt_24xx_xmit_response(struct qla_tgt_cmd *, int, uint8_t);
+
+/* Predefs for callbacks handed to qla2xxx LLD */
+static void qla_tgt_24xx_atio_pkt(struct scsi_qla_host *ha, atio_from_isp_t *pkt);
+static void qla_tgt_response_pkt(struct scsi_qla_host *ha, response_t *pkt);
+static int qla_tgt_issue_task_mgmt(struct qla_tgt_sess *sess, uint32_t lun,
+	int fn, void *iocb, int flags);
+static void qla_tgt_send_term_exchange(struct scsi_qla_host *ha, struct qla_tgt_cmd *cmd,
+	atio_from_isp_t *atio, int ha_locked);
+static void qla_tgt_reject_free_srr_imm(struct scsi_qla_host *ha, struct qla_tgt_srr_imm *imm,
+	int ha_lock);
+/*
+ * Global Variables
+ */
+static struct kmem_cache *qla_tgt_cmd_cachep;
+static struct kmem_cache *qla_tgt_mgmt_cmd_cachep;
+static mempool_t *qla_tgt_mgmt_cmd_mempool;
+static struct workqueue_struct *qla_tgt_wq;
+static DEFINE_MUTEX(qla_tgt_mutex);
+static LIST_HEAD(qla_tgt_glist);
+/*
+ * From qla2xxx/qla_iocb.c and used by various qla_target.c logic
+ */
+extern request_t *qla2x00_req_pkt(struct scsi_qla_host *);
+
+/* ha->hardware_lock supposed to be held on entry (to protect tgt->sess_list) */
+static struct qla_tgt_sess *qla_tgt_find_sess_by_port_name(
+	struct qla_tgt *tgt,
+	const uint8_t *port_name)
+{
+	struct qla_tgt_sess *sess;
+
+	list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
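+		/* Byte-wise WWPN compare, equivalent to
+		 * !memcmp(sess->port_name, port_name, 8) */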
+		if ((sess->port_name[0] == port_name[0]) &&
+		    (sess->port_name[1] == port_name[1]) &&
+		    (sess->port_name[2] == port_name[2]) &&
+		    (sess->port_name[3] == port_name[3]) &&
+		    (sess->port_name[4] == port_name[4]) &&
+		    (sess->port_name[5] == port_name[5]) &&
+		    (sess->port_name[6] == port_name[6]) &&
+		    (sess->port_name[7] == port_name[7]))
+			return sess;
+	}
+
+	return NULL;
+}
+
+/* Might release hw lock, then reacquire!! */
+static inline int qla_tgt_issue_marker(struct scsi_qla_host *vha, int vha_locked)
+{
+	/* Send marker if required */
+	if (unlikely(vha->marker_needed != 0)) {
+		int rc = qla2x00_issue_marker(vha, vha_locked);
+		if (rc != QLA_SUCCESS) {
+			printk(KERN_ERR "qla_target(%d): issue_marker() "
+				"failed\n", vha->vp_idx);
+		}
+		return rc;
+	}
+	return QLA_SUCCESS;
+}
+
+static inline
+struct scsi_qla_host *qla_tgt_find_host_by_d_id(struct scsi_qla_host *vha, uint8_t *d_id)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if ((vha->d_id.b.area != d_id[1]) || (vha->d_id.b.domain != d_id[0]))
+		return NULL;
+
+	if (vha->d_id.b.al_pa == d_id[2])
+		return vha;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		uint8_t vp_idx;
+		BUG_ON(ha->tgt_vp_map == NULL);
+		vp_idx = ha->tgt_vp_map[d_id[2]].idx;
+		if (likely(test_bit(vp_idx, ha->vp_idx_map)))
+			return ha->tgt_vp_map[vp_idx].vha;
+	}
+
+	return NULL;
+}
+
+static inline
+struct scsi_qla_host *qla_tgt_find_host_by_vp_idx(struct scsi_qla_host *vha, uint16_t vp_idx)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if (vha->vp_idx == vp_idx)
+		return vha;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		BUG_ON(ha->tgt_vp_map == NULL);
+		if (likely(test_bit(vp_idx, ha->vp_idx_map)))
+			return ha->tgt_vp_map[vp_idx].vha;
+	}
+
+	return NULL;
+}
+
+void qla_tgt_24xx_atio_pkt_all_vps(struct scsi_qla_host *vha, atio_from_isp_t *atio)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	switch (atio->u.raw.entry_type) {
+	case ATIO_TYPE7:
+	{
+		struct scsi_qla_host *host = qla_tgt_find_host_by_d_id(vha,
+						atio->u.isp24.fcp_hdr.d_id);
+		if (unlikely(NULL == host)) {
+			printk(KERN_ERR "qla_target(%d): Received ATIO_TYPE7 "
+				"with unknown d_id %x:%x:%x\n", vha->vp_idx,
+				atio->u.isp24.fcp_hdr.d_id[0],
+				atio->u.isp24.fcp_hdr.d_id[1],
+				atio->u.isp24.fcp_hdr.d_id[2]);
+			break;
+		}
+		qla_tgt_24xx_atio_pkt(host, atio);
+		break;
+	}
+
+	case IMMED_NOTIFY_TYPE:
+	{
+		struct scsi_qla_host *host = vha;
+
+		if (IS_FWI2_CAPABLE(ha)) {
+			imm_ntfy_from_isp_t *entry = (imm_ntfy_from_isp_t *)atio;
+			if ((entry->u.isp24.vp_index != 0xFF) &&
+			    (entry->u.isp24.nport_handle != 0xFFFF)) {
+				host = qla_tgt_find_host_by_vp_idx(vha,
+							entry->u.isp24.vp_index);
+				if (unlikely(!host)) {
+					printk(KERN_ERR "qla_target(%d): Received "
+						"ATIO (IMMED_NOTIFY_TYPE) "
+						"with unknown vp_index %d\n",
+						vha->vp_idx, entry->u.isp24.vp_index);
+					break;
+				}
+			}
+		}
+		qla_tgt_24xx_atio_pkt(host, atio);
+		break;
+	}
+
+	default:
+		printk(KERN_ERR "qla_target(%d): Received unknown ATIO atio "
+		     "type %x\n", vha->vp_idx, atio->u.raw.entry_type);
+		break;
+	}
+
+	return;
+}
+
+void qla_tgt_response_pkt_all_vps(struct scsi_qla_host *vha, response_t *pkt)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	switch (pkt->entry_type) {
+	case CTIO_TYPE7:
+	{
+		ctio7_from_24xx_t *entry = (ctio7_from_24xx_t *)pkt;
+		struct scsi_qla_host *host = qla_tgt_find_host_by_vp_idx(vha,
+						entry->vp_index);
+		if (unlikely(!host)) {
+			printk(KERN_ERR "qla_target(%d): Response pkt (CTIO_TYPE7) "
+				"received, with unknown vp_index %d\n",
+				vha->vp_idx, entry->vp_index);
+			break;
+		}
+		qla_tgt_response_pkt(host, pkt);
+		break;
+	}
+
+	case IMMED_NOTIFY_TYPE:
+	{
+		struct scsi_qla_host *host = vha;
+		if (IS_FWI2_CAPABLE(ha)) {
+			imm_ntfy_from_isp_t *entry = (imm_ntfy_from_isp_t *)pkt;
+			host = qla_tgt_find_host_by_vp_idx(vha, entry->u.isp24.vp_index);
+			if (unlikely(!host)) {
+				printk(KERN_ERR "qla_target(%d): Response pkt "
+					"(IMMED_NOTIFY_TYPE) received, "
+					"with unknown vp_index %d\n",
+					vha->vp_idx, entry->u.isp24.vp_index);
+				break;
+			}
+		}
+		qla_tgt_response_pkt(host, pkt);
+		break;
+	}
+
+	case NOTIFY_ACK_TYPE:
+	{
+		struct scsi_qla_host *host = vha;
+		if (IS_FWI2_CAPABLE(ha)) {
+			nack_to_isp_t *entry = (nack_to_isp_t *)pkt;
+			if (0xFF != entry->u.isp24.vp_index) {
+				host = qla_tgt_find_host_by_vp_idx(vha,
+						entry->u.isp24.vp_index);
+				if (unlikely(!host)) {
+					printk(KERN_ERR "qla_target(%d): Response "
+						"pkt (NOTIFY_ACK_TYPE) "
+						"received, with unknown "
+						"vp_index %d\n", vha->vp_idx,
+						entry->u.isp24.vp_index);
+					break;
+				}
+			}
+		}
+		qla_tgt_response_pkt(host, pkt);
+		break;
+	}
+
+	case ABTS_RECV_24XX:
+	{
+		abts_recv_from_24xx_t *entry = (abts_recv_from_24xx_t *)pkt;
+		struct scsi_qla_host *host = qla_tgt_find_host_by_vp_idx(vha,
+						entry->vp_index);
+		if (unlikely(!host)) {
+			printk(KERN_ERR "qla_target(%d): Response pkt "
+				"(ABTS_RECV_24XX) received, with unknown "
+				"vp_index %d\n", vha->vp_idx, entry->vp_index);
+			break;
+		}
+		qla_tgt_response_pkt(host, pkt);
+		break;
+	}
+
+	case ABTS_RESP_24XX:
+	{
+		abts_resp_to_24xx_t *entry = (abts_resp_to_24xx_t *)pkt;
+		struct scsi_qla_host *host = qla_tgt_find_host_by_vp_idx(vha,
+						entry->vp_index);
+		if (unlikely(!host)) {
+			printk(KERN_ERR "qla_target(%d): Response pkt "
+				"(ABTS_RECV_24XX) received, with unknown "
+				"vp_index %d\n", vha->vp_idx, entry->vp_index);
+			break;
+		}
+		qla_tgt_response_pkt(host, pkt);
+		break;
+	}
+
+	default:
+		qla_tgt_response_pkt(vha, pkt);
+		break;
+	}
+
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_free_session_done(struct qla_tgt_sess *sess)
+{
+	struct qla_tgt *tgt;
+	struct scsi_qla_host *vha = sess->vha;
+	struct qla_hw_data *ha = vha->hw;
+
+	tgt = sess->tgt;
+
+	sess->tearing_down = 1;
+
+	/*
+	 * Release the target session for FC Nexus from fabric module code.
+	 */
+	if (sess->se_sess != NULL)
+		ha->tgt_ops->free_session(sess);
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe104, "Unregistration of"
+		" sess %p finished\n", sess);
+
+	kfree(sess);
+
+	if (!tgt)
+		return;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe002, "empty(sess_list) %d"
+		" sess_count %d\n", list_empty(&tgt->sess_list), tgt->sess_count);
+	/*
+	 * We need to protect against race, when tgt is freed before or
+	 * inside wake_up()
+	 */
+	tgt->sess_count--;
+	if (tgt->sess_count == 0)
+		wake_up_all(&tgt->waitQ);
+}
+
+static void __qla_tgt_unreg_sess(struct kref *kref)
+{
+	struct qla_tgt_sess *sess = container_of(kref, struct qla_tgt_sess,
+				sess_kref);
+
+	list_del(&sess->sess_list_entry);
+
+	if (sess->deleted)
+		list_del(&sess->del_list_entry);
+
+	printk(KERN_INFO "qla_target(%d): %ssession for loop_id %d deleted\n",
+		sess->vha->vp_idx, sess->local ? "local " : "",
+		sess->loop_id);
+
+	qla_tgt_free_session_done(sess);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_unreg_sess(struct kref *kref)
+{
+	struct qla_tgt_sess *sess = container_of(kref, struct qla_tgt_sess,
+				sess_kref);
+	struct scsi_qla_host *vha = sess->vha;
+	unsigned long flags;
+
+	spin_lock_irqsave(&vha->hw->hardware_lock, flags);
+	__qla_tgt_unreg_sess(kref);
+	spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+int __qla_tgt_sess_put(struct qla_tgt_sess *sess)
+{
+	return kref_put(&sess->sess_kref, __qla_tgt_unreg_sess);
+}
+EXPORT_SYMBOL(__qla_tgt_sess_put);
+
+/* called without ha->hardware_lock held */
+static int qla_tgt_sess_put(struct qla_tgt_sess *sess)
+{
+	return kref_put(&sess->sess_kref, qla_tgt_unreg_sess);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_reset(struct scsi_qla_host *vha, void *iocb, int mcmd)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess = NULL;
+	uint32_t unpacked_lun, lun = 0;
+	uint16_t loop_id;
+	int res = 0;
+	uint8_t s_id[3];
+	imm_ntfy_from_isp_t *n = (imm_ntfy_from_isp_t *)iocb;
+
+	memset(&s_id, 0, 3);
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		loop_id = le16_to_cpu(n->u.isp24.nport_handle);
+		s_id[0] = n->u.isp24.port_id[0];
+		s_id[1] = n->u.isp24.port_id[1];
+		s_id[2] = n->u.isp24.port_id[2];
+	} else
+		loop_id = GET_TARGET_ID(ha, (atio_from_isp_t *)n);
+
+	if (loop_id == 0xFFFF) {
+#warning FIXME: Re-enable Global event handling..
+#if 0
+		/* Global event */
+		printk(KERN_INFO "Processing qla_tgt_reset with loop_id=0xffff global event\n");
+		atomic_inc(&ha->qla_tgt->tgt_global_resets_count);
+		qla_tgt_clear_tgt_db(ha->qla_tgt, 1);
+		if (!list_empty(&ha->qla_tgt->sess_list)) {
+			sess = list_entry(ha->qla_tgt->sess_list.next,
+				typeof(*sess), sess_list_entry);
+			switch (mcmd) {
+			case QLA_TGT_NEXUS_LOSS_SESS:
+				mcmd = QLA_TGT_NEXUS_LOSS;
+				break;
+			case QLA_TGT_ABORT_ALL_SESS:
+				mcmd = QLA_TGT_ABORT_ALL;
+				break;
+			case QLA_TGT_NEXUS_LOSS:
+			case QLA_TGT_ABORT_ALL:
+				break;
+			default:
+				printk(KERN_ERR "qla_target(%d): Not allowed "
+					"command %x in %s", vha->vp_idx,
+					mcmd, __func__);
+				sess = NULL;
+				break;
+			}
+		} else
+			sess = NULL;
+#endif
+	} else {
+		sess = ha->tgt_ops->find_sess_by_loop_id(vha, loop_id);
+	}
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe003, "Using sess for"
+			" qla_tgt_reset: %p\n", sess);
+	if (!sess) {
+		res = -ESRCH;
+		ha->qla_tgt->tm_to_unknown = 1;
+		return res;
+	}
+
+	printk(KERN_INFO "scsi(%ld): resetting (session %p from port "
+		"%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x, "
+		"mcmd %x, loop_id %d)\n", vha->host_no, sess,
+		sess->port_name[0], sess->port_name[1],
+		sess->port_name[2], sess->port_name[3],
+		sess->port_name[4], sess->port_name[5],
+		sess->port_name[6], sess->port_name[7],
+		mcmd, loop_id);
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		atio_from_isp_t *a = (atio_from_isp_t *)iocb;
+		lun = a->u.isp24.fcp_cmnd.lun;
+	} else
+		lun = swab16(le16_to_cpu(n->u.isp2x.lun));
+
+	unpacked_lun = scsilun_to_int((struct scsi_lun *)&lun);
+
+	return qla_tgt_issue_task_mgmt(sess, unpacked_lun, mcmd,
+				iocb, QLA24XX_MGMT_SEND_NACK);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_schedule_sess_for_deletion(struct qla_tgt_sess *sess, bool immediate)
+{
+	struct qla_tgt *tgt = sess->tgt;
+	uint32_t dev_loss_tmo = tgt->ha->port_down_retry_count + 5;
+
+	if (sess->deleted)
+		return;
+
+	ql_dbg(ql_dbg_tgt, sess->vha, 0xe004, "Scheduling sess %p for"
+		" deletion (immediate %d)", sess, immediate);
+	list_add_tail(&sess->del_list_entry, &tgt->del_sess_list);
+	sess->deleted = 1;
+
+	if (immediate)
+		dev_loss_tmo = 0;
+
+	sess->expires = jiffies + dev_loss_tmo * HZ;
+
+	printk(KERN_INFO "qla_target(%d): session for port %02x:%02x:%02x:"
+		"%02x:%02x:%02x:%02x:%02x (loop ID %d) scheduled for "
+		"deletion in %u secs (expires: %lu) immed: %d\n", sess->vha->vp_idx,
+		sess->port_name[0], sess->port_name[1],
+		sess->port_name[2], sess->port_name[3],
+		sess->port_name[4], sess->port_name[5],
+		sess->port_name[6], sess->port_name[7],
+		sess->loop_id, dev_loss_tmo, sess->expires, immediate);
+
+	if (immediate)
+		schedule_delayed_work(&tgt->sess_del_work, 0);
+	else
+		schedule_delayed_work(&tgt->sess_del_work, sess->expires - jiffies);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_clear_tgt_db(struct qla_tgt *tgt, bool local_only)
+{
+	struct qla_tgt_sess *sess;
+
+	list_for_each_entry(sess, &tgt->sess_list, sess_list_entry)
+		qla_tgt_schedule_sess_for_deletion(sess, true);
+
+	/* At this point tgt could be already dead */
+}
+
+static int qla24xx_get_loop_id(struct scsi_qla_host *vha, const uint8_t *s_id,
+	uint16_t *loop_id)
+{
+	struct qla_hw_data *ha = vha->hw;
+	dma_addr_t gid_list_dma;
+	struct gid_list_info *gid_list;
+	char *id_iter;
+	int res, rc, i;
+	uint16_t entries;
+
+	gid_list = dma_alloc_coherent(&ha->pdev->dev, GID_LIST_SIZE,
+			&gid_list_dma, GFP_KERNEL);
+	if (!gid_list) {
+		printk(KERN_ERR "qla_target(%d): DMA Alloc failed of %zd\n",
+			vha->vp_idx, GID_LIST_SIZE);
+		return -ENOMEM;
+	}
+
+	/* Get list of logged in devices */
+	rc = qla2x00_get_id_list(vha, gid_list, gid_list_dma, &entries);
+	if (rc != QLA_SUCCESS) {
+		printk(KERN_ERR "qla_target(%d): get_id_list() failed: %x\n",
+			vha->vp_idx, rc);
+		res = -1;
+		goto out_free_id_list;
+	}
+
+	id_iter = (char *)gid_list;
+	res = -1;
+	for (i = 0; i < entries; i++) {
+		struct gid_list_info *gid = (struct gid_list_info *)id_iter;
+		if ((gid->al_pa == s_id[2]) &&
+		    (gid->area == s_id[1]) &&
+		    (gid->domain == s_id[0])) {
+			*loop_id = le16_to_cpu(gid->loop_id);
+			res = 0;
+			break;
+		}
+		id_iter += ha->gid_list_info_size;
+	}
+
+out_free_id_list:
+	dma_free_coherent(&ha->pdev->dev, GID_LIST_SIZE, gid_list, gid_list_dma);
+
+	return res;
+}
+
+static bool qla_tgt_check_fcport_exist(struct scsi_qla_host *vha, struct qla_tgt_sess *sess)
+{
+	struct qla_hw_data *ha = vha->hw;
+	bool res, found = false;
+	int rc, i;
+	uint16_t loop_id = 0xFFFF; /* to eliminate compiler's warning */
+	uint16_t entries;
+	void *pmap;
+	int pmap_len;
+	fc_port_t *fcport;
+	int global_resets;
+
+retry:
+	global_resets = atomic_read(&ha->qla_tgt->tgt_global_resets_count);
+
+	rc = qla2x00_get_node_name_list(vha, &pmap, &pmap_len);
+	if (rc != QLA_SUCCESS) {
+		res = false;
+		goto out;
+	}
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		struct qla_port_24xx_data *pmap24 = pmap;
+
+		entries = pmap_len/sizeof(*pmap24);
+
+		for (i = 0; i < entries; ++i) {
+			if ((sess->port_name[0] == pmap24[i].port_name[0]) &&
+			    (sess->port_name[1] == pmap24[i].port_name[1]) &&
+			    (sess->port_name[2] == pmap24[i].port_name[2]) &&
+			    (sess->port_name[3] == pmap24[i].port_name[3]) &&
+			    (sess->port_name[4] == pmap24[i].port_name[4]) &&
+			    (sess->port_name[5] == pmap24[i].port_name[5]) &&
+			    (sess->port_name[6] == pmap24[i].port_name[6]) &&
+			    (sess->port_name[7] == pmap24[i].port_name[7])) {
+				loop_id = le16_to_cpu(pmap24[i].loop_id);
+				found = true;
+				break;
+			}
+		}
+	} else {
+		struct qla_port_2xxx_data *pmap2x = pmap;
+
+		entries = pmap_len/sizeof(*pmap2x);
+
+		for (i = 0; i < entries; ++i) {
+			if ((sess->port_name[0] == pmap2x[i].port_name[0]) &&
+			    (sess->port_name[1] == pmap2x[i].port_name[1]) &&
+			    (sess->port_name[2] == pmap2x[i].port_name[2]) &&
+			    (sess->port_name[3] == pmap2x[i].port_name[3]) &&
+			    (sess->port_name[4] == pmap2x[i].port_name[4]) &&
+			    (sess->port_name[5] == pmap2x[i].port_name[5]) &&
+			    (sess->port_name[6] == pmap2x[i].port_name[6]) &&
+			    (sess->port_name[7] == pmap2x[i].port_name[7])) {
+				loop_id = le16_to_cpu(pmap2x[i].loop_id);
+				found = true;
+				break;
+			}
+		}
+	}
+
+	kfree(pmap);
+
+	if (!found) {
+		res = false;
+		goto out;
+	}
+
+	printk(KERN_INFO "qla_tgt_check_fcport_exist(): loop_id %d", loop_id);
+
+	fcport = kzalloc(sizeof(*fcport), GFP_KERNEL);
+	if (fcport == NULL) {
+		printk(KERN_ERR "qla_target(%d): Allocation of tmp FC port failed",
+			vha->vp_idx);
+		res = false;
+		goto out;
+	}
+
+	fcport->loop_id = loop_id;
+
+	rc = qla2x00_get_port_database(vha, fcport, 0);
+	if (rc != QLA_SUCCESS) {
+		printk(KERN_ERR "qla_target(%d): Failed to retrieve fcport "
+			"information -- get_port_database() returned %x "
+			"(loop_id=0x%04x)", vha->vp_idx, rc, loop_id);
+		res = false;
+		goto out_free_fcport;
+	}
+
+	if (global_resets != atomic_read(&ha->qla_tgt->tgt_global_resets_count)) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe105, "qla_target(%d): global reset"
+			" during session discovery (counter was %d, new %d),"
+			" retrying", vha->vp_idx, global_resets,
+			atomic_read(&ha->qla_tgt->tgt_global_resets_count));
+		goto retry;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe106, "Updating sess %p (s_id %x:%x:%x, "
+		"loop_id %d) to d_id %x:%x:%x, loop_id %d", sess,
+		sess->s_id.b.domain, sess->s_id.b.al_pa,
+		sess->s_id.b.area, sess->loop_id, fcport->d_id.b.domain,
+		fcport->d_id.b.al_pa, fcport->d_id.b.area, fcport->loop_id);
+
+	sess->s_id = fcport->d_id;
+	sess->loop_id = fcport->loop_id;
+	sess->conf_compl_supported = fcport->conf_compl_supported;
+
+	res = true;
+
+out_free_fcport:
+	kfree(fcport);
+
+out:
+	return res;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_undelete_sess(struct qla_tgt_sess *sess)
+{
+	BUG_ON(!sess->deleted);
+
+	list_del(&sess->del_list_entry);
+	sess->deleted = 0;
+}
+
+static void qla_tgt_del_sess_work_fn(struct delayed_work *work)
+{
+	struct qla_tgt *tgt = container_of(work, struct qla_tgt,
+					sess_del_work);
+	struct scsi_qla_host *vha = tgt->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	while (!list_empty(&tgt->del_sess_list)) {
+		sess = list_entry(tgt->del_sess_list.next, typeof(*sess),
+				del_list_entry);
+		if (time_after_eq(jiffies, sess->expires)) {
+			bool cancel;
+
+			qla_tgt_undelete_sess(sess);
+
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+			cancel = qla_tgt_check_fcport_exist(vha, sess);
+			spin_lock_irqsave(&ha->hardware_lock, flags);
+
+			if (cancel) {
+				if (sess->deleted) {
+					/*
+					 * sess was again deleted while we were
+					 * discovering it
+					 */
+					continue;
+				}
+
+				printk(KERN_INFO "qla_target(%d): cancel deletion of "
+					"session for port %02x:%02x:%02x:%02x:%02x:"
+					"%02x:%02x:%02x (loop ID %d), because it isn't"
+					" deleted by firmware", vha->vp_idx,
+					sess->port_name[0], sess->port_name[1],
+					sess->port_name[2], sess->port_name[3],
+					sess->port_name[4], sess->port_name[5],
+					sess->port_name[6], sess->port_name[7],
+					sess->loop_id);
+			} else {
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe107, "Timeout: sess %p"
+					" about to be deleted\n", sess);
+				__qla_tgt_sess_put(sess);
+			}
+		} else {
+			schedule_delayed_work(&tgt->sess_del_work,
+				sess->expires - jiffies);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+/*
+ * Adds an extra ref to allow to drop hw lock after adding sess to the list.
+ * Caller must put it.
+ */
+static struct qla_tgt_sess *qla_tgt_create_sess(
+	struct scsi_qla_host *vha,
+	fc_port_t *fcport,
+	bool local)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess;
+	unsigned long flags;
+	unsigned char be_sid[3];
+
+	/* Check to avoid double sessions */
+#if 0
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+		if ((sess->port_name[0] == fcport->port_name[0]) &&
+		    (sess->port_name[1] == fcport->port_name[1]) &&
+		    (sess->port_name[2] == fcport->port_name[2]) &&
+		    (sess->port_name[3] == fcport->port_name[3]) &&
+		    (sess->port_name[4] == fcport->port_name[4]) &&
+		    (sess->port_name[5] == fcport->port_name[5]) &&
+		    (sess->port_name[6] == fcport->port_name[6]) &&
+		    (sess->port_name[7] == fcport->port_name[7])) {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe108, "Double sess %p"
+				" found (s_id %x:%x:%x, "
+				"loop_id %d), updating to d_id %x:%x:%x, "
+				"loop_id %d", sess, sess->s_id.b.domain,
+				sess->s_id.b.al_pa, sess->s_id.b.area,
+				sess->loop_id, fcport->d_id.b.domain,
+				fcport->d_id.b.al_pa, fcport->d_id.b.area,
+				fcport->loop_id);
+
+			if (sess->deleted)
+				qla_tgt_undelete_sess(sess);
+
+			qla_tgt_sess_get(sess);
+			sess->s_id = fcport->d_id;
+			sess->loop_id = fcport->loop_id;
+			sess->conf_compl_supported = fcport->conf_compl_supported;
+			if (sess->local && !local)
+				sess->local = 0;
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+			goto out;
+		}
+	}
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+#endif
+	/* We are under tgt_mutex, so a new sess can't be added behind us */
+
+	sess = kzalloc(sizeof(*sess), GFP_KERNEL);
+	if (!sess) {
+		printk(KERN_ERR "qla_target(%u): session allocation failed, "
+			"all commands from port %02x:%02x:%02x:%02x:"
+			"%02x:%02x:%02x:%02x will be refused", vha->vp_idx,
+			fcport->port_name[0], fcport->port_name[1],
+			fcport->port_name[2], fcport->port_name[3],
+			fcport->port_name[4], fcport->port_name[5],
+			fcport->port_name[6], fcport->port_name[7]);
+
+		return NULL;
+	}
+	/*
+	 * Take two references to ->sess_kref here to handle qla_tgt_sess
+	 * access across ->hardware_lock reacquire.
+	 */
+	kref_init(&sess->sess_kref);
+	kref_get(&sess->sess_kref);
+
+	sess->tgt = ha->qla_tgt;
+	sess->vha = vha;
+	sess->s_id = fcport->d_id;
+	sess->loop_id = fcport->loop_id;
+	sess->local = local;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe109, "Adding sess %p to tgt %p via"
+		" ->check_initiator_node_acl()\n", sess, ha->qla_tgt);
+
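+	/* Build the FC S_ID in wire order: domain, area, al_pa */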
+	be_sid[0] = sess->s_id.b.domain;
+	be_sid[1] = sess->s_id.b.area;
+	be_sid[2] = sess->s_id.b.al_pa;
+	/*
+	 * Determine if this fc_port->port_name is allowed to access
+	 * target mode using explict NodeACLs+MappedLUNs, or using
+	 * TPG demo mode.  If this is successful a target mode FC nexus
+	 * is created.
+	 */
+	if (ha->tgt_ops->check_initiator_node_acl(vha, &fcport->port_name[0],
+				sess, &be_sid[0], fcport->loop_id) < 0) {
+		kfree(sess);
+		return NULL;
+	}
+
+	sess->conf_compl_supported = fcport->conf_compl_supported;
+	BUILD_BUG_ON(sizeof(sess->port_name) != sizeof(fcport->port_name));
+	memcpy(sess->port_name, fcport->port_name, sizeof(sess->port_name));
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	list_add_tail(&sess->sess_list_entry, &ha->qla_tgt->sess_list);
+	ha->qla_tgt->sess_count++;
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	printk(KERN_INFO "qla_target(%d): %ssession for wwn %02x:%02x:%02x:%02x:"
+		"%02x:%02x:%02x:%02x (loop_id %d, s_id %x:%x:%x, confirmed"
+		" completion %ssupported) added\n", vha->vp_idx, local ?
+		"local " : "", fcport->port_name[0], fcport->port_name[1],
+		fcport->port_name[2], fcport->port_name[3], fcport->port_name[4],
+		fcport->port_name[5], fcport->port_name[6], fcport->port_name[7],
+		fcport->loop_id, sess->s_id.b.domain, sess->s_id.b.area,
+		sess->s_id.b.al_pa, sess->conf_compl_supported ? "" : "not ");
+
+	return sess;
+}
+
+/*
+ * Called from drivers/scsi/qla2xxx/qla_init.c:qla2x00_reg_remote_port()
+ */
+void qla_tgt_fc_port_added(struct scsi_qla_host *vha, fc_port_t *fcport)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	struct qla_tgt_sess *sess;
+	unsigned long flags;
+	unsigned char s_id[3];
+
+	if (!vha->hw->tgt_ops)
+		return;
+
+	if (!tgt || (fcport->port_type != FCT_INITIATOR))
+		return;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	if (tgt->tgt_stop) {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		return;
+	}
+	sess = qla_tgt_find_sess_by_port_name(tgt, fcport->port_name);
+	if (!sess) {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+		memset(&s_id, 0, 3);
+		s_id[0] = fcport->d_id.b.domain;
+		s_id[1] = fcport->d_id.b.area;
+		s_id[2] = fcport->d_id.b.al_pa;
+
+		mutex_lock(&ha->tgt_mutex);
+		sess = qla_tgt_create_sess(vha, fcport, false);
+		mutex_unlock(&ha->tgt_mutex);
+
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+		/* put the extra creation ref */
+		if (sess != NULL)
+			__qla_tgt_sess_put(sess);
+	} else {
+		if (sess->deleted) {
+			qla_tgt_undelete_sess(sess);
+
+			printk(KERN_INFO "qla_target(%u): %ssession for port %02x:"
+				"%02x:%02x:%02x:%02x:%02x:%02x:%02x (loop ID %d) "
+				"reappeared\n", vha->vp_idx,
+				sess->local ? "local " : "", sess->port_name[0],
+				sess->port_name[1], sess->port_name[2],
+				sess->port_name[3], sess->port_name[4],
+				sess->port_name[5], sess->port_name[6],
+				sess->port_name[7], sess->loop_id);
+
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe10a, "Reappeared sess %p\n", sess);
+		}
+		sess->s_id = fcport->d_id;
+		sess->loop_id = fcport->loop_id;
+		sess->conf_compl_supported = fcport->conf_compl_supported;
+	}
+
+	if (sess && sess->local) {
+		printk(KERN_INFO "qla_target(%u): local session for "
+			"port %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x "
+			"(loop ID %d) became global\n", vha->vp_idx,
+			fcport->port_name[0], fcport->port_name[1],
+			fcport->port_name[2], fcport->port_name[3],
+			fcport->port_name[4], fcport->port_name[5],
+			fcport->port_name[6], fcport->port_name[7],
+			sess->loop_id);
+		sess->local = 0;
+	}
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+void qla_tgt_fc_port_deleted(struct scsi_qla_host *vha, fc_port_t *fcport)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	struct qla_tgt_sess *sess;
+	unsigned long flags;
+
+	if (!vha->hw->tgt_ops)
+		return;
+
+	if (!tgt || (fcport->port_type != FCT_INITIATOR))
+		return;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	if (tgt->tgt_stop) {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		return;
+	}
+	sess = qla_tgt_find_sess_by_port_name(tgt, fcport->port_name);
+	if (!sess) {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		return;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe10b, "qla_tgt_fc_port_deleted %p", sess);
+
+	sess->local = 1;
+	qla_tgt_schedule_sess_for_deletion(sess, false);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+static inline int test_tgt_sess_count(struct qla_tgt *tgt)
+{
+	struct qla_hw_data *ha = tgt->ha;
+	unsigned long flags;
+	int res;
+	/*
+	 * We need to protect against race, when tgt is freed before or
+	 * inside wake_up()
+	 */
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	ql_dbg(ql_dbg_tgt, tgt->vha, 0xe005, "tgt %p, empty(sess_list)=%d sess_count=%d\n",
+	      tgt, list_empty(&tgt->sess_list), tgt->sess_count);
+	res = (tgt->sess_count == 0);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return res;
+}
+
+/* Called by tcm_qla2xxx configfs code */
+void qla_tgt_stop_phase1(struct qla_tgt *tgt)
+{
+	struct scsi_qla_host *vha = tgt->vha;
+	struct qla_hw_data *ha = tgt->ha;
+	unsigned long flags;
+
+	if (tgt->tgt_stop || tgt->tgt_stopped) {
+		printk(KERN_ERR "Already in tgt->tgt_stop or tgt_stopped state\n");
+		dump_stack();
+		return;
+	}
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe006, "Stopping target for host %ld(%p)\n",
+				vha->host_no, vha);
+	/*
+	 * Mutex needed to sync with qla_tgt_fc_port_[added,deleted].
+	 * Lock is needed, because we still can get an incoming packet.
+	 */
+	mutex_lock(&ha->tgt_mutex);
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	tgt->tgt_stop = 1;
+	qla_tgt_clear_tgt_db(tgt, true);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+	mutex_unlock(&ha->tgt_mutex);
+
+	flush_delayed_work_sync(&tgt->sess_del_work);
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe10c, "Waiting for sess works (tgt %p)", tgt);
+	spin_lock_irqsave(&tgt->sess_work_lock, flags);
+	while (!list_empty(&tgt->sess_works_list)) {
+		spin_unlock_irqrestore(&tgt->sess_work_lock, flags);
+		flush_scheduled_work();
+		spin_lock_irqsave(&tgt->sess_work_lock, flags);
+	}
+	spin_unlock_irqrestore(&tgt->sess_work_lock, flags);
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe10d, "Waiting for tgt %p: list_empty(sess_list)=%d "
+		"sess_count=%d\n", tgt, list_empty(&tgt->sess_list),
+		tgt->sess_count);
+
+	wait_event(tgt->waitQ, test_tgt_sess_count(tgt));
+
+	/* Big hammer */
+	if (!ha->host_shutting_down && qla_tgt_mode_enabled(vha))
+		qla_tgt_disable_vha(vha);
+
+	/* Wait for sessions to clear out (just in case) */
+	wait_event(tgt->waitQ, test_tgt_sess_count(tgt));
+}
+EXPORT_SYMBOL(qla_tgt_stop_phase1);
+
+/* Called by tcm_qla2xxx configfs code */
+void qla_tgt_stop_phase2(struct qla_tgt *tgt)
+{
+	struct qla_hw_data *ha = tgt->ha;
+	unsigned long flags;
+
+	if (tgt->tgt_stopped) {
+		printk(KERN_ERR "Already in tgt->tgt_stopped state\n");
+		dump_stack();
+		return;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe10e, "Waiting for %d IRQ commands to"
+		" complete (tgt %p)", tgt->irq_cmd_count, tgt);
+
+	mutex_lock(&ha->tgt_mutex);
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	while (tgt->irq_cmd_count != 0) {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		udelay(2);
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+	}
+	tgt->tgt_stop = 0;
+	tgt->tgt_stopped = 1;
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+	mutex_unlock(&ha->tgt_mutex);
+
+	ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe10f, "Stop of tgt %p finished", tgt);
+}
+EXPORT_SYMBOL(qla_tgt_stop_phase2);
+
+/* Called from qla_tgt_remove_target() -> qla2x00_remove_one() */
+void qla_tgt_release(struct qla_tgt *tgt)
+{
+	struct qla_hw_data *ha = tgt->ha;
+
+	if ((ha->qla_tgt != NULL) && !tgt->tgt_stopped)
+		qla_tgt_stop_phase2(tgt);
+
+	ha->qla_tgt = NULL;
+
+	ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe110, "Release of tgt %p finished\n", tgt);
+
+	kfree(tgt);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_sched_sess_work(struct qla_tgt *tgt, int type,
+	const void *param, unsigned int param_size)
+{
+	struct qla_tgt_sess_work_param *prm;
+	unsigned long flags;
+
+	prm = kzalloc(sizeof(*prm), GFP_ATOMIC);
+	if (!prm) {
+		printk(KERN_ERR "qla_target(%d): Unable to create session "
+			"work, command will be refused", 0);
+		return -ENOMEM;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe111, "Scheduling work (type %d, prm %p)"
+		" to find session for param %p (size %d, tgt %p)\n", type, prm, param,
+		param_size, tgt);
+
+	prm->type = type;
+	memcpy(&prm->tm_iocb, param, param_size);
+
+	spin_lock_irqsave(&tgt->sess_work_lock, flags);
+	if (!tgt->sess_works_pending)
+		tgt->tm_to_unknown = 0;
+	list_add_tail(&prm->sess_works_list_entry, &tgt->sess_works_list);
+	tgt->sess_works_pending = 1;
+	spin_unlock_irqrestore(&tgt->sess_work_lock, flags);
+
+	schedule_work(&tgt->sess_work);
+
+	return 0;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ * This function issues a modify LUN IOCB to ISP 2xxx to change or modify
+ * the command count.
+ */
+static void qla_tgt_2xxx_send_modify_lun(struct scsi_qla_host *vha, int cmd_count,
+	int imm_count)
+{
+	struct qla_hw_data *ha = vha->hw;
+	modify_lun_t *pkt;
+
+	printk(KERN_INFO "Sending MODIFY_LUN (ha=%p, cmd=%d, imm=%d)\n",
+		  ha, cmd_count, imm_count);
+
+	/* Sending marker isn't necessary, since we called from ISR */
+
+	pkt = (modify_lun_t *)qla2x00_req_pkt(vha);
+	if (!pkt) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet\n", vha->vp_idx, __func__);
+		return;
+	}
+
+	ha->qla_tgt->modify_lun_expected++;
+
+	pkt->entry_type = MODIFY_LUN_TYPE;
+	pkt->entry_count = 1;
+	if (cmd_count < 0) {
+		pkt->operators = MODIFY_LUN_CMD_SUB;	/* Subtract from command count */
+		pkt->command_count = -cmd_count;
+	} else if (cmd_count > 0) {
+		pkt->operators = MODIFY_LUN_CMD_ADD;	/* Add to command count */
+		pkt->command_count = cmd_count;
+	}
+
+	if (imm_count < 0) {
+		pkt->operators |= MODIFY_LUN_IMM_SUB;
+		pkt->immed_notify_count = -imm_count;
+	} else if (imm_count > 0) {
+		pkt->operators |= MODIFY_LUN_IMM_ADD;
+		pkt->immed_notify_count = imm_count;
+	}
+
+	pkt->timeout = 0;	/* Use default */
+
+	qla2x00_isp_cmd(vha, vha->req);
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static void qla_tgt_send_notify_ack(struct scsi_qla_host *vha,
+	imm_ntfy_from_isp_t *ntfy,
+	uint32_t add_flags, uint16_t resp_code, int resp_code_valid,
+	uint16_t srr_flags, uint16_t srr_reject_code, uint8_t srr_explan)
+{
+	struct qla_hw_data *ha = vha->hw;
+	request_t *pkt;
+	nack_to_isp_t *nack;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe007, "Sending NOTIFY_ACK (ha=%p)\n", ha);
+
+	/* Send marker if required */
+	if (qla_tgt_issue_marker(vha, 1) != QLA_SUCCESS)
+		return;
+
+	pkt = (request_t *)qla2x00_req_pkt(vha);
+	if (!pkt) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet\n", vha->vp_idx, __func__);
+		return;
+	}
+
+	if (ha->qla_tgt != NULL)
+		ha->qla_tgt->notify_ack_expected++;
+
+	pkt->entry_type = NOTIFY_ACK_TYPE;
+	pkt->entry_count = 1;
+
+	nack = (nack_to_isp_t *)pkt;
+	nack->ox_id = ntfy->ox_id;
+	if (IS_FWI2_CAPABLE(ha)) {
+		nack->u.isp24.nport_handle = ntfy->u.isp24.nport_handle;
+		if (le16_to_cpu(ntfy->u.isp24.status) == IMM_NTFY_ELS) {
+			nack->u.isp24.flags = ntfy->u.isp24.flags &
+				__constant_cpu_to_le32(NOTIFY24XX_FLAGS_PUREX_IOCB);
+		}
+		nack->u.isp24.srr_rx_id = ntfy->u.isp24.srr_rx_id;
+		nack->u.isp24.status = ntfy->u.isp24.status;
+		nack->u.isp24.status_subcode = ntfy->u.isp24.status_subcode;
+		nack->u.isp24.exchange_address = ntfy->u.isp24.exchange_address;
+		nack->u.isp24.srr_rel_offs = ntfy->u.isp24.srr_rel_offs;
+		nack->u.isp24.srr_ui = ntfy->u.isp24.srr_ui;
+		nack->u.isp24.srr_flags = cpu_to_le16(srr_flags);
+		nack->u.isp24.srr_reject_code = srr_reject_code;
+		nack->u.isp24.srr_reject_code_expl = srr_explan;
+		nack->u.isp24.vp_index = ntfy->u.isp24.vp_index;
+
+		ql_dbg(ql_dbg_tgt_pkt, vha, 0xe201,
+			"qla_target(%d): Sending 24xx Notify Ack %d\n",
+			vha->vp_idx, nack->u.isp24.status);
+	} else {
+		SET_TARGET_ID(ha, nack->u.isp2x.target,
+			GET_TARGET_ID(ha, (atio_from_isp_t *)ntfy));
+		nack->u.isp2x.status = ntfy->u.isp2x.status;
+		nack->u.isp2x.task_flags = ntfy->u.isp2x.task_flags;
+		nack->u.isp2x.seq_id = ntfy->u.isp2x.seq_id;
+		/* Do not increment here, the chip isn't decrementing */
+		/* nack->u.isp2x.flags = __constant_cpu_to_le16(NOTIFY_ACK_RES_COUNT); */
+		nack->u.isp2x.flags |= cpu_to_le16(add_flags);
+		nack->u.isp2x.srr_rx_id = ntfy->u.isp2x.srr_rx_id;
+		nack->u.isp2x.srr_rel_offs = ntfy->u.isp2x.srr_rel_offs;
+		nack->u.isp2x.srr_ui = ntfy->u.isp2x.srr_ui;
+		nack->u.isp2x.srr_flags = cpu_to_le16(srr_flags);
+		nack->u.isp2x.srr_reject_code = cpu_to_le16(srr_reject_code);
+		nack->u.isp2x.srr_reject_code_expl = srr_explan;
+
+		if (resp_code_valid) {
+			nack->u.isp2x.resp_code = cpu_to_le16(resp_code);
+			nack->u.isp2x.flags |= __constant_cpu_to_le16(
+				NOTIFY_ACK_TM_RESP_CODE_VALID);
+		}
+
+		ql_dbg(ql_dbg_tgt_pkt, vha, 0xe200, "qla_target(%d): Sending Notify Ack"
+			" Seq %#x -> I %#x St %#x RC %#x\n", vha->vp_idx,
+			le16_to_cpu(ntfy->u.isp2x.seq_id),
+			GET_TARGET_ID(ha, (atio_from_isp_t *)ntfy),
+			le16_to_cpu(ntfy->u.isp2x.status),
+			le16_to_cpu(nack->u.isp2x.resp_code));
+	}
+
+	qla2x00_isp_cmd(vha, vha->req);
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static void qla_tgt_24xx_send_abts_resp(struct scsi_qla_host *vha,
+	abts_recv_from_24xx_t *abts, uint32_t status,
+	bool ids_reversed)
+{
+	struct qla_hw_data *ha = vha->hw;
+	abts_resp_to_24xx_t *resp;
+	uint32_t f_ctl;
+	uint8_t *p;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe008, "Sending task mgmt ABTS response"
+		" (ha=%p, abts=%p, status=%x)\n", ha, abts, status);
+
+	/* Send marker if required */
+	if (qla_tgt_issue_marker(vha, 1) != QLA_SUCCESS)
+		return;
+
+	resp = (abts_resp_to_24xx_t *)qla2x00_req_pkt(vha);
+	if (!resp) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet", vha->vp_idx, __func__);
+		return;
+	}
+
+	resp->entry_type = ABTS_RESP_24XX;
+	resp->entry_count = 1;
+	resp->nport_handle = abts->nport_handle;
+	resp->vp_index = vha->vp_idx;
+	resp->sof_type = abts->sof_type;
+	resp->exchange_address = abts->exchange_address;
+	resp->fcp_hdr_le = abts->fcp_hdr_le;
+	f_ctl = __constant_cpu_to_le32(F_CTL_EXCH_CONTEXT_RESP |
+			F_CTL_LAST_SEQ | F_CTL_END_SEQ |
+			F_CTL_SEQ_INITIATIVE);
+	p = (uint8_t *)&f_ctl;
+	resp->fcp_hdr_le.f_ctl[0] = *p++;
+	resp->fcp_hdr_le.f_ctl[1] = *p++;
+	resp->fcp_hdr_le.f_ctl[2] = *p;
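+	/*
+	 * When ids_reversed is set, the incoming packet is the firmware's
+	 * reply to an ABTS response we generated, so its ID fields are
+	 * already reversed and can be copied straight through; otherwise
+	 * swap S_ID and D_ID for the response.
+	 */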
+	if (ids_reversed) {
+		resp->fcp_hdr_le.d_id[0] = abts->fcp_hdr_le.d_id[0];
+		resp->fcp_hdr_le.d_id[1] = abts->fcp_hdr_le.d_id[1];
+		resp->fcp_hdr_le.d_id[2] = abts->fcp_hdr_le.d_id[2];
+		resp->fcp_hdr_le.s_id[0] = abts->fcp_hdr_le.s_id[0];
+		resp->fcp_hdr_le.s_id[1] = abts->fcp_hdr_le.s_id[1];
+		resp->fcp_hdr_le.s_id[2] = abts->fcp_hdr_le.s_id[2];
+	} else {
+		resp->fcp_hdr_le.d_id[0] = abts->fcp_hdr_le.s_id[0];
+		resp->fcp_hdr_le.d_id[1] = abts->fcp_hdr_le.s_id[1];
+		resp->fcp_hdr_le.d_id[2] = abts->fcp_hdr_le.s_id[2];
+		resp->fcp_hdr_le.s_id[0] = abts->fcp_hdr_le.d_id[0];
+		resp->fcp_hdr_le.s_id[1] = abts->fcp_hdr_le.d_id[1];
+		resp->fcp_hdr_le.s_id[2] = abts->fcp_hdr_le.d_id[2];
+	}
+	resp->exchange_addr_to_abort = abts->exchange_addr_to_abort;
+	if (status == FCP_TMF_CMPL) {
+		resp->fcp_hdr_le.r_ctl = R_CTL_BASIC_LINK_SERV | R_CTL_B_ACC;
+		resp->payload.ba_acct.seq_id_valid = SEQ_ID_INVALID;
+		resp->payload.ba_acct.low_seq_cnt = 0x0000;
+		resp->payload.ba_acct.high_seq_cnt = 0xFFFF;
+		resp->payload.ba_acct.ox_id = abts->fcp_hdr_le.ox_id;
+		resp->payload.ba_acct.rx_id = abts->fcp_hdr_le.rx_id;
+	} else {
+		resp->fcp_hdr_le.r_ctl = R_CTL_BASIC_LINK_SERV | R_CTL_B_RJT;
+		resp->payload.ba_rjt.reason_code =
+			BA_RJT_REASON_CODE_UNABLE_TO_PERFORM;
+		/* Other bytes are zero */
+	}
+
+	ha->qla_tgt->abts_resp_expected++;
+
+	qla2x00_isp_cmd(vha, vha->req);
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static void qla_tgt_24xx_retry_term_exchange(struct scsi_qla_host *vha,
+	abts_resp_from_24xx_fw_t *entry)
+{
+	ctio7_to_24xx_t *ctio;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe009, "Sending retry TERM EXCH CTIO7"
+			" (ha=%p)\n", vha->hw);
+	/* Send marker if required */
+	if (qla_tgt_issue_marker(vha, 1) != QLA_SUCCESS)
+		return;
+
+	ctio = (ctio7_to_24xx_t *)qla2x00_req_pkt(vha);
+	if (ctio == NULL) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet\n", vha->vp_idx, __func__);
+		return;
+	}
+
+	/*
+	 * On entry we have the firmware's response to an ABTS response that
+	 * we generated ourselves, so the ID fields in it are already reversed.
+	 */
+
+	ctio->entry_type = CTIO_TYPE7;
+	ctio->entry_count = 1;
+	ctio->nport_handle = entry->nport_handle;
+	ctio->handle = QLA_TGT_SKIP_HANDLE |	CTIO_COMPLETION_HANDLE_MARK;
+	ctio->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT);
+	ctio->vp_index = vha->vp_idx;
+	ctio->initiator_id[0] = entry->fcp_hdr_le.d_id[0];
+	ctio->initiator_id[1] = entry->fcp_hdr_le.d_id[1];
+	ctio->initiator_id[2] = entry->fcp_hdr_le.d_id[2];
+	ctio->exchange_addr = entry->exchange_addr_to_abort;
+	ctio->u.status1.flags =
+		__constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_TERMINATE);
+	ctio->u.status1.ox_id = entry->fcp_hdr_le.ox_id;
+
+	qla2x00_isp_cmd(vha, vha->req);
+
+	qla_tgt_24xx_send_abts_resp(vha, (abts_recv_from_24xx_t *)entry,
+		FCP_TMF_CMPL, true);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int __qla_tgt_24xx_handle_abts(struct scsi_qla_host *vha,
+	abts_recv_from_24xx_t *abts, struct qla_tgt_sess *sess)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_mgmt_cmd *mcmd;
+	int rc;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe112, "qla_target(%d): task abort (tag=%d)\n",
+		vha->vp_idx, abts->exchange_addr_to_abort);
+
+	mcmd = mempool_alloc(qla_tgt_mgmt_cmd_mempool, GFP_ATOMIC);
+	if (mcmd == NULL) {
+		printk(KERN_ERR "qla_target(%d): %s: Allocation of ABORT cmd failed",
+			vha->vp_idx, __func__);
+		return -ENOMEM;
+	}
+	memset(mcmd, 0, sizeof(*mcmd));
+
+	mcmd->sess = sess;
+	memcpy(&mcmd->orig_iocb.abts, abts, sizeof(mcmd->orig_iocb.abts));
+
+	rc = ha->tgt_ops->handle_tmr(mcmd, 0, ABORT_TASK);
+	if (rc != 0) {
+		printk(KERN_ERR "qla_target(%d):  tgt_ops->handle_tmr()"
+				" failed: %d", vha->vp_idx, rc);
+		mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static void qla_tgt_24xx_handle_abts(struct scsi_qla_host *vha,
+	abts_recv_from_24xx_t *abts)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess;
+	uint32_t tag = abts->exchange_addr_to_abort, s_id;
+	int rc;
+
+	if (le32_to_cpu(abts->fcp_hdr_le.parameter) & ABTS_PARAM_ABORT_SEQ) {
+		printk(KERN_ERR "qla_target(%d): ABTS: Abort Sequence not "
+			"supported\n", vha->vp_idx);
+		qla_tgt_24xx_send_abts_resp(vha, abts, FCP_TMF_REJECTED, false);
+		return;
+	}
+
+	if (tag == ATIO_EXCHANGE_ADDRESS_UNKNOWN) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe113, "qla_target(%d): ABTS: Unknown Exchange "
+			"Address received\n", vha->vp_idx);
+		qla_tgt_24xx_send_abts_resp(vha, abts, FCP_TMF_REJECTED, false);
+		return;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe114, "qla_target(%d): task abort (s_id=%x:%x:%x, "
+		"tag=%d, param=%x)\n", vha->vp_idx, abts->fcp_hdr_le.s_id[2],
+		abts->fcp_hdr_le.s_id[1], abts->fcp_hdr_le.s_id[0], tag,
+		le32_to_cpu(abts->fcp_hdr_le.parameter));
+
+	memset(&s_id, 0, 3);
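+	/* Pack the three S_ID bytes from the little-endian FCP header into
+	 * a u32 for the ->find_sess_by_s_id() lookup below. */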
+	s_id = (abts->fcp_hdr_le.s_id[0] << 16) | (abts->fcp_hdr_le.s_id[1] << 8) |
+		abts->fcp_hdr_le.s_id[2];
+
+	sess = ha->tgt_ops->find_sess_by_s_id(vha, (unsigned char *)&s_id);
+	if (!sess) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe115, "qla_target(%d): task abort for"
+			" non-existent session\n", vha->vp_idx);
+		rc = qla_tgt_sched_sess_work(ha->qla_tgt, QLA_TGT_SESS_WORK_ABORT,
+					abts, sizeof(*abts));
+		if (rc != 0) {
+			ha->qla_tgt->tm_to_unknown = 1;
+			qla_tgt_24xx_send_abts_resp(vha, abts, FCP_TMF_REJECTED, false);
+		}
+		return;
+	}
+
+	rc = __qla_tgt_24xx_handle_abts(vha, abts, sess);
+	if (rc != 0) {
+		printk(KERN_ERR "qla_target(%d): __qla_tgt_24xx_handle_abts() failed: %d\n",
+			    vha->vp_idx, rc);
+		qla_tgt_24xx_send_abts_resp(vha, abts, FCP_TMF_REJECTED, false);
+		return;
+	}
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static void qla_tgt_24xx_send_task_mgmt_ctio(struct scsi_qla_host *ha,
+	struct qla_tgt_mgmt_cmd *mcmd, uint32_t resp_code)
+{
+	atio_from_isp_t *atio = &mcmd->orig_iocb.atio;
+	ctio7_to_24xx_t *ctio;
+
+	ql_dbg(ql_dbg_tgt, ha, 0xe00a, "Sending task mgmt CTIO7 (ha=%p,"
+		" atio=%p, resp_code=%x)\n", ha, atio, resp_code);
+
+	/* Send marker if required */
+	if (qla_tgt_issue_marker(ha, 1) != QLA_SUCCESS)
+		return;
+
+	ctio = (ctio7_to_24xx_t *)qla2x00_req_pkt(ha);
+	if (ctio == NULL) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet\n", ha->vp_idx, __func__);
+		return;
+	}
+
+	ctio->entry_type = CTIO_TYPE7;
+	ctio->entry_count = 1;
+	ctio->handle = QLA_TGT_SKIP_HANDLE | CTIO_COMPLETION_HANDLE_MARK;
+	ctio->nport_handle = mcmd->sess->loop_id;
+	ctio->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT);
+	ctio->vp_index = ha->vp_idx;
+	ctio->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2];
+	ctio->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1];
+	ctio->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0];
+	ctio->exchange_addr = atio->u.isp24.exchange_addr;
+	ctio->u.status1.flags = (atio->u.isp24.attr << 9) | __constant_cpu_to_le16(
+		CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_SEND_STATUS);
+	ctio->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id);
+	ctio->u.status1.scsi_status = __constant_cpu_to_le16(SS_RESPONSE_INFO_LEN_VALID);
+	ctio->u.status1.response_len = __constant_cpu_to_le16(8);
+	((uint32_t *)ctio->u.status1.sense_data)[0] = cpu_to_be32(resp_code);
+
+	qla2x00_isp_cmd(ha, ha->req);
+}
+
+void qla_tgt_free_mcmd(struct qla_tgt_mgmt_cmd *mcmd)
+{
+	mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);
+}
+EXPORT_SYMBOL(qla_tgt_free_mcmd);
+
+/* callback from target fabric module code */
+void qla_tgt_xmit_tm_rsp(struct qla_tgt_mgmt_cmd *mcmd)
+{
+	struct scsi_qla_host *vha = mcmd->sess->vha;
+	struct qla_hw_data *ha = vha->hw;
+	unsigned long flags;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe116, "TM response mcmd"
+		" (%p) status %#x state %#x", mcmd, mcmd->se_tmr_req->response,
+		mcmd->flags);
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	if (IS_FWI2_CAPABLE(ha)) {
+		if (mcmd->flags == QLA24XX_MGMT_SEND_NACK)
+			qla_tgt_send_notify_ack(vha, &mcmd->orig_iocb.imm_ntfy,
+				0, 0, 0, 0, 0, 0);
+		else {
+			if (mcmd->se_tmr_req->function == ABORT_TASK)
+				qla_tgt_24xx_send_abts_resp(vha, &mcmd->orig_iocb.abts,
+					mcmd->fc_tm_rsp, false);
+			else
+				qla_tgt_24xx_send_task_mgmt_ctio(vha, mcmd, mcmd->fc_tm_rsp);
+		}
+	} else {
+		qla_tgt_send_notify_ack(vha, (void *)&mcmd->orig_iocb,
+			0, mcmd->fc_tm_rsp, 1, 0, 0, 0);
+	}
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+EXPORT_SYMBOL(qla_tgt_xmit_tm_rsp);
+
+/* No locks */
+static int qla_tgt_pci_map_calc_cnt(struct qla_tgt_prm *prm)
+{
+	struct qla_tgt_cmd *cmd = prm->cmd;
+
+	BUG_ON(cmd->sg_cnt == 0);
+
+	prm->sg = (struct scatterlist *)cmd->sg;
+	prm->seg_cnt = pci_map_sg(prm->tgt->ha->pdev, cmd->sg,
+				cmd->sg_cnt, cmd->dma_data_direction);
+	if (unlikely(prm->seg_cnt == 0))
+		goto out_err;
+
+	prm->cmd->sg_mapped = 1;
+
+	/*
+	 * If there are more sg entries than fit in the command IOCB, we
+	 * need to allocate continuation entries
+	 */
+	if (prm->seg_cnt > prm->tgt->datasegs_per_cmd)
+		prm->req_cnt += DIV_ROUND_UP(prm->seg_cnt - prm->tgt->datasegs_per_cmd,
+					     prm->tgt->datasegs_per_cont);
+
+	ql_dbg(ql_dbg_tgt, prm->cmd->vha, 0xe00c, "seg_cnt=%d, req_cnt=%d\n",
+			prm->seg_cnt, prm->req_cnt);
+	return 0;
+
+out_err:
+	printk(KERN_ERR "qla_target(%d): PCI mapping failed: sg_cnt=%d",
+		0, prm->cmd->sg_cnt);
+	return -1;
+}
+
+static inline void qla_tgt_unmap_sg(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	BUG_ON(!cmd->sg_mapped);
+	pci_unmap_sg(ha->pdev, cmd->sg, cmd->sg_cnt, cmd->dma_data_direction);
+	cmd->sg_mapped = 0;
+}
+
+static int qla_tgt_check_reserve_free_req(struct scsi_qla_host *vha, uint32_t req_cnt)
+{
+	struct qla_hw_data *ha = vha->hw;
+	device_reg_t __iomem *reg = ha->iobase;
+	uint32_t cnt;
+
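+	/*
+	 * vha->req->cnt caches the free request-entry count; re-read the
+	 * firmware's out pointer and recompute it only when the cached value
+	 * suggests there may not be room for req_cnt + 2 entries.
+	 */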
+	if (vha->req->cnt < (req_cnt + 2)) {
+		if (IS_FWI2_CAPABLE(ha))
+			cnt = (uint16_t)RD_REG_DWORD(
+				    &reg->isp24.req_q_out);
+		else
+			cnt = qla2x00_debounce_register(
+				    ISP_REQ_Q_OUT(ha, &reg->isp));
+		ql_dbg(ql_dbg_tgt, vha, 0xe00d, "Request ring circled: cnt=%d, "
+			"vha->req->ring_index=%d, vha->req->cnt=%d, req_cnt=%d\n",
+			cnt, vha->req->ring_index, vha->req->cnt, req_cnt);
+		if (vha->req->ring_index < cnt)
+			vha->req->cnt = cnt - vha->req->ring_index;
+		else
+			vha->req->cnt = vha->req->length -
+			    (vha->req->ring_index - cnt);
+	}
+
+	if (unlikely(vha->req->cnt < (req_cnt + 2))) {
+		ql_dbg(ql_dbg_tgt, vha, 0xe00e, "qla_target(%d): There is no room in the "
+			"request ring: vha->req->ring_index=%d, vha->req->cnt=%d, "
+			"req_cnt=%d\n", vha->vp_idx, vha->req->ring_index,
+			vha->req->cnt, req_cnt);
+		return -EAGAIN;
+	}
+	vha->req->cnt -= req_cnt;
+
+	return 0;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static inline void *qla_tgt_get_req_pkt(struct scsi_qla_host *vha)
+{
+	/* Adjust ring index. */
+	vha->req->ring_index++;
+	if (vha->req->ring_index == vha->req->length) {
+		vha->req->ring_index = 0;
+		vha->req->ring_ptr = vha->req->ring;
+	} else {
+		vha->req->ring_ptr++;
+	}
+	return (cont_entry_t *)vha->req->ring_ptr;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static inline uint32_t qla_tgt_make_handle(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	uint32_t h;
+
+	h = ha->current_handle;
+	/* always increment cmd handle */
+	do {
+		++h;
+		if (h > MAX_OUTSTANDING_COMMANDS)
+			h = 1; /* 0 is QLA_TGT_NULL_HANDLE */
+		if (h == ha->current_handle) {
+			printk(KERN_INFO "qla_target(%d): Ran out of "
+				"empty cmd slots in ha %p\n", vha->vp_idx, ha);
+			h = QLA_TGT_NULL_HANDLE;
+			break;
+		}
+	} while ((h == QLA_TGT_NULL_HANDLE) ||
+		 (h == QLA_TGT_SKIP_HANDLE) ||
+		 (ha->cmds[h-1] != NULL));
+
+	if (h != QLA_TGT_NULL_HANDLE)
+		ha->current_handle = h;
+
+	return h;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_2xxx_build_ctio_pkt(struct qla_tgt_prm *prm, struct scsi_qla_host *vha)
+{
+	uint32_t h;
+	ctio_to_2xxx_t *pkt;
+	atio_from_isp_t *atio = &prm->cmd->atio;
+	struct qla_hw_data *ha = vha->hw;
+
+	pkt = (ctio_to_2xxx_t *)vha->req->ring_ptr;
+	prm->pkt = pkt;
+	memset(pkt, 0, sizeof(*pkt));
+
+	if (prm->tgt->tgt_enable_64bit_addr)
+		pkt->entry_type = CTIO_A64_TYPE;
+	else
+		pkt->entry_type = CONTINUE_TGT_IO_TYPE;
+
+	pkt->entry_count = (uint8_t)prm->req_cnt;
+
+	h = qla_tgt_make_handle(vha);
+	if (h != QLA_TGT_NULL_HANDLE)
+		ha->cmds[h-1] = prm->cmd;
+
+	pkt->handle = h | CTIO_COMPLETION_HANDLE_MARK;
+	pkt->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT);
+
+	/* Set initiator ID */
+	h = GET_TARGET_ID(ha, atio);
+	SET_TARGET_ID(ha, pkt->target, h);
+
+	pkt->rx_id = atio->u.isp2x.rx_id;
+	pkt->relative_offset = cpu_to_le32(prm->cmd->offset);
+
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe202, "qla_target(%d): handle(se_cmd) -> %08x, "
+		"timeout %d L %#x -> I %#x E %#x\n", vha->vp_idx,
+		pkt->handle, QLA_TGT_TIMEOUT,
+		le16_to_cpu(atio->u.isp2x.lun),
+		GET_TARGET_ID(ha, atio), pkt->rx_id);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_24xx_build_ctio_pkt(struct qla_tgt_prm *prm, struct scsi_qla_host *vha)
+{
+	uint32_t h;
+	ctio7_to_24xx_t *pkt;
+	struct qla_hw_data *ha = vha->hw;
+	atio_from_isp_t *atio = &prm->cmd->atio;
+
+	pkt = (ctio7_to_24xx_t *)vha->req->ring_ptr;
+	prm->pkt = pkt;
+	memset(pkt, 0, sizeof(*pkt));
+
+	pkt->entry_type = CTIO_TYPE7;
+	pkt->entry_count = (uint8_t)prm->req_cnt;
+	pkt->vp_index = vha->vp_idx;
+
+	h = qla_tgt_make_handle(vha);
+	if (unlikely(h == QLA_TGT_NULL_HANDLE)) {
+		/*
+		 * A CTIO7 completion does not report the initiator's loop ID,
+		 * so a command sent with a NULL handle could never be matched
+		 * back to its session; fail the request instead.
+		 */
+		return -EAGAIN;
+	}
+	ha->cmds[h-1] = prm->cmd;
+
+	pkt->handle = h | CTIO_COMPLETION_HANDLE_MARK;
+	pkt->nport_handle = prm->cmd->loop_id;
+	pkt->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT);
+	pkt->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2];
+	pkt->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1];
+	pkt->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0];
+	pkt->exchange_addr = atio->u.isp24.exchange_addr;
+	pkt->u.status0.flags |= (atio->u.isp24.attr << 9);
+	pkt->u.status0.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id);
+	pkt->u.status0.relative_offset = cpu_to_le32(prm->cmd->offset);
+
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe203, "qla_target(%d): handle(cmd) -> %08x, "
+		"timeout %d, ox_id %#x\n", vha->vp_idx, pkt->handle,
+		QLA_TGT_TIMEOUT, le16_to_cpu(pkt->u.status0.ox_id));
+	return 0;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. We have already made sure
+ * that there are enough request entries that the lock never needs to be dropped.
+ */
+static void qla_tgt_load_cont_data_segments(struct qla_tgt_prm *prm, struct scsi_qla_host *vha)
+{
+	int cnt;
+	uint32_t *dword_ptr;
+	int enable_64bit_addressing = prm->tgt->tgt_enable_64bit_addr;
+
+	/* Build continuation packets */
+	while (prm->seg_cnt > 0) {
+		cont_a64_entry_t *cont_pkt64 =
+			(cont_a64_entry_t *)qla_tgt_get_req_pkt(vha);
+
+		/*
+		 * Make sure that none of cont_pkt64's 64-bit specific
+		 * fields is used for 32-bit addressing; cast to
+		 * (cont_entry_t *) in that case.
+		 */
+
+		memset(cont_pkt64, 0, sizeof(*cont_pkt64));
+
+		cont_pkt64->entry_count = 1;
+		cont_pkt64->sys_define = 0;
+
+		if (enable_64bit_addressing) {
+			cont_pkt64->entry_type = CONTINUE_A64_TYPE;
+			dword_ptr =
+			    (uint32_t *)&cont_pkt64->dseg_0_address;
+		} else {
+			cont_pkt64->entry_type = CONTINUE_TYPE;
+			dword_ptr =
+			    (uint32_t *)&((cont_entry_t *)
+					    cont_pkt64)->dseg_0_address;
+		}
+
+		/* Load continuation entry data segments */
+		for (cnt = 0;
+		     cnt < prm->tgt->datasegs_per_cont && prm->seg_cnt;
+		     cnt++, prm->seg_cnt--) {
+			*dword_ptr++ =
+			    cpu_to_le32(pci_dma_lo32
+					(sg_dma_address(prm->sg)));
+			if (enable_64bit_addressing) {
+				*dword_ptr++ =
+				    cpu_to_le32(pci_dma_hi32
+						(sg_dma_address
+						 (prm->sg)));
+			}
+			*dword_ptr++ = cpu_to_le32(sg_dma_len(prm->sg));
+
+			ql_dbg(ql_dbg_tgt_sgl, vha, 0xe300, "S/G Segment Cont. phys_addr="
+				"%llx:%llx, len=%d\n",
+			      (long long unsigned int)pci_dma_hi32(sg_dma_address(prm->sg)),
+			      (long long unsigned int)pci_dma_lo32(sg_dma_address(prm->sg)),
+			      (int)sg_dma_len(prm->sg));
+
+			prm->sg = sg_next(prm->sg);
+		}
+	}
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. We have already made sure
+ * that there are enough request entries that the lock never needs to be dropped.
+ */
+static void qla_tgt_load_data_segments(struct qla_tgt_prm *prm,
+	struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	int cnt;
+	uint32_t *dword_ptr;
+	int enable_64bit_addressing = prm->tgt->tgt_enable_64bit_addr;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		ctio7_to_24xx_t *pkt24 = (ctio7_to_24xx_t *)prm->pkt;
+
+		ql_dbg(ql_dbg_tgt, vha, 0xe00f,
+			"iocb->scsi_status=%x, iocb->flags=%x\n",
+			le16_to_cpu(pkt24->u.status0.scsi_status),
+			le16_to_cpu(pkt24->u.status0.flags));
+
+		pkt24->u.status0.transfer_length = cpu_to_le32(prm->cmd->bufflen);
+
+		/* Setup packet address segment pointer */
+		dword_ptr = pkt24->u.status0.dseg_0_address;
+
+		/* Set total data segment count */
+		if (prm->seg_cnt)
+			pkt24->dseg_count = cpu_to_le16(prm->seg_cnt);
+	} else {
+		ctio_to_2xxx_t *pkt2x = (ctio_to_2xxx_t *)prm->pkt;
+
+		ql_dbg(ql_dbg_tgt_pkt, vha, 0xe204,
+			"iocb->scsi_status=%x, iocb->flags=%x\n",
+			le16_to_cpu(pkt2x->scsi_status), le16_to_cpu(pkt2x->flags));
+
+		pkt2x->transfer_length = cpu_to_le32(prm->cmd->bufflen);
+
+		/* Setup packet address segment pointer */
+		dword_ptr = &pkt2x->dseg_0_address;
+
+		/* Set total data segment count */
+		if (prm->seg_cnt)
+			pkt2x->dseg_count = cpu_to_le16(prm->seg_cnt);
+	}
+
+	if (prm->seg_cnt == 0) {
+		/* No data transfer */
+		*dword_ptr++ = 0;
+		*dword_ptr = 0;
+		return;
+	}
+
+	/* If scatter gather */
+	ql_dbg(ql_dbg_tgt_sgl, vha, 0xe303, "%s", "Building S/G data segments...");
+
+	/* Load command entry data segments */
+	for (cnt = 0;
+	     (cnt < prm->tgt->datasegs_per_cmd) && prm->seg_cnt;
+	     cnt++, prm->seg_cnt--) {
+		*dword_ptr++ =
+		    cpu_to_le32(pci_dma_lo32(sg_dma_address(prm->sg)));
+		if (enable_64bit_addressing) {
+			*dword_ptr++ =
+			    cpu_to_le32(pci_dma_hi32(
+					sg_dma_address(prm->sg)));
+		}
+		*dword_ptr++ = cpu_to_le32(sg_dma_len(prm->sg));
+
+		ql_dbg(ql_dbg_tgt_sgl, vha, 0xe304, "S/G Segment phys_addr="
+			"%llx:%llx, len=%d\n",
+		      (long long unsigned int)pci_dma_hi32(sg_dma_address(
+								prm->sg)),
+		      (long long unsigned int)pci_dma_lo32(sg_dma_address(
+								prm->sg)),
+		      (int)sg_dma_len(prm->sg));
+
+		prm->sg = sg_next(prm->sg);
+	}
+
+	qla_tgt_load_cont_data_segments(prm, vha);
+}
+
+static inline int qla_tgt_has_data(struct qla_tgt_cmd *cmd)
+{
+	return cmd->bufflen > 0;
+}
+
+/*
+ * Called without ha->hardware_lock held
+ */
+static int qla_tgt_pre_xmit_response(struct qla_tgt_cmd *cmd, struct qla_tgt_prm *prm,
+			int xmit_type, uint8_t scsi_status, uint32_t *full_req_cnt)
+{
+	struct qla_tgt *tgt = cmd->tgt;
+	struct scsi_qla_host *vha = tgt->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+
+	if (unlikely(cmd->aborted)) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe118, "qla_target(%d): terminating exchange "
+			"for aborted cmd=%p (se_cmd=%p, tag=%d)",
+			vha->vp_idx, cmd, se_cmd, cmd->tag);
+
+		cmd->state = QLA_TGT_STATE_ABORTED;
+
+		qla_tgt_send_term_exchange(vha, cmd, &cmd->atio, 0);
+
+		/* !! At this point cmd could be already freed !! */
+		return QLA_TGT_PRE_XMIT_RESP_CMD_ABORTED;
+	}
+
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe205, "qla_target(%d): tag=%u\n", vha->vp_idx, cmd->tag);
+
+	prm->cmd = cmd;
+	prm->tgt = tgt;
+	prm->rq_result = scsi_status;
+	prm->sense_buffer = &cmd->sense_buffer[0];
+	prm->sense_buffer_len = TRANSPORT_SENSE_BUFFER;
+	prm->sg = NULL;
+	prm->seg_cnt = -1;
+	prm->req_cnt = 1;
+	prm->add_status_pkt = 0;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe010, "rq_result=%x, xmit_type=%x\n",
+				prm->rq_result, xmit_type);
+
+	/* Send marker if required */
+	if (qla_tgt_issue_marker(vha, 0) != QLA_SUCCESS)
+		return -EFAULT;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe011, "CTIO start: vha(%d)\n", vha->vp_idx);
+
+	if ((xmit_type & QLA_TGT_XMIT_DATA) && qla_tgt_has_data(cmd)) {
+		if  (qla_tgt_pci_map_calc_cnt(prm) != 0)
+			return -EAGAIN;
+	}
+
+	*full_req_cnt = prm->req_cnt;
+
+	if (se_cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+		prm->residual = se_cmd->residual_count;
+		ql_dbg(ql_dbg_tgt, vha, 0xe012, "Residual underflow: %d (tag %d, "
+			"op %x, bufflen %d, rq_result %x)\n",
+			prm->residual, cmd->tag,
+			se_cmd->t_task_cdb ? se_cmd->t_task_cdb[0] : 0,
+			cmd->bufflen, prm->rq_result);
+		prm->rq_result |= SS_RESIDUAL_UNDER;
+	} else if (se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) {
+		prm->residual = se_cmd->residual_count;
+		ql_dbg(ql_dbg_tgt, vha, 0xe013, "Residual overflow: %d (tag %d, "
+			"op %x, bufflen %d, rq_result %x)\n",
+			prm->residual, cmd->tag,
+			se_cmd->t_task_cdb ? se_cmd->t_task_cdb[0] : 0,
+			cmd->bufflen, prm->rq_result);
+		prm->rq_result |= SS_RESIDUAL_OVER;
+		prm->residual = -prm->residual;
+	}
+
+	if (xmit_type & QLA_TGT_XMIT_STATUS) {
+		/*
+		 * If QLA_TGT_XMIT_DATA is not set, add_status_pkt will be ignored
+		 * in *xmit_response() below
+		 */
+		if (qla_tgt_has_data(cmd)) {
+			if (QLA_TGT_SENSE_VALID(prm->sense_buffer) ||
+			    (IS_FWI2_CAPABLE(ha) &&
+			     (prm->rq_result != 0))) {
+				prm->add_status_pkt = 1;
+				(*full_req_cnt)++;
+			}
+		}
+	}
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe014, "req_cnt=%d, full_req_cnt=%d,"
+		" add_status_pkt=%d\n", prm->req_cnt, *full_req_cnt,
+		prm->add_status_pkt);
+
+	return 0;
+}
+
+static inline int qla_tgt_need_explicit_conf(struct qla_hw_data *ha,
+	struct qla_tgt_cmd *cmd, int sending_sense)
+{
+	if (ha->enable_class_2)
+		return 0;
+
+	if (sending_sense)
+		return cmd->conf_compl_supported;
+	else
+		return ha->enable_explicit_conf && cmd->conf_compl_supported;
+}
+
+static void qla_tgt_2xxx_init_ctio_to_isp(ctio_from_2xxx_t *ctio_m1,
+	struct qla_tgt_prm *prm, struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	prm->sense_buffer_len = min((uint32_t)prm->sense_buffer_len,
+				    (uint32_t)sizeof(ctio_m1->sense_data));
+
+	ctio_m1->flags = __constant_cpu_to_le16(OF_SSTS | OF_FAST_POST |
+				     OF_NO_DATA | OF_SS_MODE_1);
+	ctio_m1->flags |= __constant_cpu_to_le16(OF_INC_RC);
+	if (qla_tgt_need_explicit_conf(ha, prm->cmd, 0)) {
+		ctio_m1->flags |= __constant_cpu_to_le16(OF_EXPL_CONF |
+					OF_CONF_REQ);
+	}
+	ctio_m1->scsi_status = cpu_to_le16(prm->rq_result);
+	ctio_m1->residual = cpu_to_le32(prm->residual);
+	if (QLA_TGT_SENSE_VALID(prm->sense_buffer)) {
+		if (qla_tgt_need_explicit_conf(ha, prm->cmd, 1)) {
+			ctio_m1->flags |= __constant_cpu_to_le16(OF_EXPL_CONF |
+						OF_CONF_REQ);
+		}
+		ctio_m1->scsi_status |= __constant_cpu_to_le16(
+						SS_SENSE_LEN_VALID);
+		ctio_m1->sense_length = cpu_to_le16(prm->sense_buffer_len);
+		memcpy(ctio_m1->sense_data, prm->sense_buffer,
+		       prm->sense_buffer_len);
+	} else {
+		memset(ctio_m1->sense_data, 0, sizeof(ctio_m1->sense_data));
+		ctio_m1->sense_length = 0;
+	}
+
+	/* FIXME: can sense data longer than 26 bytes ever occur here? */
+}
+
+static int __qla_tgt_2xxx_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+	uint8_t scsi_status)
+{
+	struct scsi_qla_host *vha = cmd->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_prm prm;
+	ctio_to_2xxx_t *pkt;
+	unsigned long flags = 0;
+	uint32_t full_req_cnt = 0;
+	int res;
+
+	memset(&prm, 0, sizeof(prm));
+
+	res = qla_tgt_pre_xmit_response(cmd, &prm, xmit_type, scsi_status, &full_req_cnt);
+	if (unlikely(res != 0)) {
+		if (res == QLA_TGT_PRE_XMIT_RESP_CMD_ABORTED)
+			return 0;
+
+		return res;
+	}
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	/* Does F/W have enough IOCBs for this request? */
+	res = qla_tgt_check_reserve_free_req(vha, full_req_cnt);
+	if (unlikely(res))
+		goto out_unmap_unlock;
+
+	qla_tgt_2xxx_build_ctio_pkt(&prm, cmd->vha);
+	pkt = (ctio_to_2xxx_t *)prm.pkt;
+
+	if (qla_tgt_has_data(cmd) && (xmit_type & QLA_TGT_XMIT_DATA)) {
+		pkt->flags |= __constant_cpu_to_le16(OF_FAST_POST | OF_DATA_IN);
+		pkt->flags |= __constant_cpu_to_le16(OF_INC_RC);
+
+		qla_tgt_load_data_segments(&prm, vha);
+
+		if (prm.add_status_pkt == 0) {
+			if (xmit_type & QLA_TGT_XMIT_STATUS) {
+				pkt->scsi_status = cpu_to_le16(prm.rq_result);
+				pkt->residual = cpu_to_le32(prm.residual);
+				pkt->flags |= __constant_cpu_to_le16(OF_SSTS);
+				if (qla_tgt_need_explicit_conf(ha, cmd, 0)) {
+					pkt->flags |= __constant_cpu_to_le16(
+							OF_EXPL_CONF |
+							OF_CONF_REQ);
+				}
+			}
+		} else {
+			/*
+			 * We have already made sure that there are enough
+			 * request entries, so the HW lock is not dropped in
+			 * req_pkt().
+			 */
+			ctio_from_2xxx_t *ctio_m1 =
+				(ctio_from_2xxx_t *)qla_tgt_get_req_pkt(vha);
+
+			ql_dbg(ql_dbg_tgt, vha, 0xe015, "%s", "Building"
+				" additional status packet");
+
+			memcpy(ctio_m1, pkt, sizeof(*ctio_m1));
+			ctio_m1->entry_count = 1;
+			ctio_m1->dseg_count = 0;
+
+			/* Real finish is ctio_m1's finish */
+			pkt->handle |= CTIO_INTERMEDIATE_HANDLE_MARK;
+			pkt->flags &= ~__constant_cpu_to_le16(OF_INC_RC);
+
+			qla_tgt_2xxx_init_ctio_to_isp(ctio_m1, &prm, cmd->vha);
+		}
+	} else
+		qla_tgt_2xxx_init_ctio_to_isp((ctio_from_2xxx_t *)pkt,
+					&prm, cmd->vha);
+
+	cmd->state = QLA_TGT_STATE_PROCESSED; /* Mid-level is done processing */
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe016, "Xmitting CTIO response pkt for 2xxx:"
+			" %p scsi_status: 0x%02x\n", pkt, scsi_status);
+
+	qla2x00_isp_cmd(vha, vha->req);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return 0;
+
+out_unmap_unlock:
+	if (cmd->sg_mapped)
+		qla_tgt_unmap_sg(vha, cmd);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return res;
+}
+
+#ifdef CONFIG_QLA_TGT_DEBUG_SRR
+/*
+ *  Originally taken from the XFS code.
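+ *
+ *  This is the Park-Miller "minimal standard" generator,
+ *  next = (16807 * prev) mod (2^31 - 1), computed via Schrage's
+ *  decomposition (127773 = M / 16807, 2836 = M % 16807) so the
+ *  multiplication never overflows a 32-bit long.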
+ */
+static unsigned long qla_tgt_srr_random(void)
+{
+	static int initialized;
+	static unsigned long random_value;
+	static DEFINE_SPINLOCK(lock);
+	/* cycles pseudo-randomly through all values between 1 and 2^31 - 2 */
+	long rv, lo, hi;
+	unsigned long flags;
+
+	spin_lock_irqsave(&lock, flags);
+	if (!initialized) {
+		random_value = jiffies;
+		initialized = 1;
+	}
+	rv = random_value;
+	hi = rv / 127773;
+	lo = rv % 127773;
+	rv = 16807 * lo - 2836 * hi;
+	if (rv <= 0)
+		rv += 2147483647;
+	random_value = rv;
+	spin_unlock_irqrestore(&lock, flags);
+	return rv;
+}
+
+static void qla_tgt_check_srr_debug(struct qla_tgt_cmd *cmd, int *xmit_type)
+{
+#if 0 /* This doesn't simulate a real lost status packet, so it won't lead to an SRR */
+	if ((*xmit_type & QLA_TGT_XMIT_STATUS) && (qla_tgt_srr_random() % 200) == 50) {
+		*xmit_type &= ~QLA_TGT_XMIT_STATUS;
+		ql_dbg(ql_dbg_tgt_mgt, cmd->vha, 0xe119, "Dropping cmd %p (tag %d) status",
+			cmd, cmd->tag);
+	}
+#endif
+	/*
+	 * It's currently not possible to simulate SRRs for FCP_WRITE without
+	 * a physical link layer failure, so don't even try here.
+	 */
+	if (cmd->dma_data_direction != DMA_FROM_DEVICE)
+		return;
+
+	if (qla_tgt_has_data(cmd) && (cmd->sg_cnt > 1) &&
+	    ((qla_tgt_srr_random() % 100) == 20)) {
+		int i, leave = 0;
+		unsigned int tot_len = 0;
+
+		while (leave == 0)
+			leave = qla_tgt_srr_random() % cmd->sg_cnt;
+
+		for (i = 0; i < leave; i++)
+			tot_len += cmd->sg[i].length;
+
+		ql_dbg(ql_dbg_tgt_mgt, cmd->vha, 0xe11a, "Cutting cmd %p (tag %d) buffer"
+			" tail to len %d, sg_cnt %d (cmd->bufflen %d, cmd->sg_cnt %d)",
+			cmd, cmd->tag, tot_len, leave, cmd->bufflen, cmd->sg_cnt);
+
+		cmd->bufflen = tot_len;
+		cmd->sg_cnt = leave;
+	}
+
+	if (qla_tgt_has_data(cmd) && ((qla_tgt_srr_random() % 100) == 70)) {
+		unsigned int offset = qla_tgt_srr_random() % cmd->bufflen;
+
+		ql_dbg(ql_dbg_tgt_mgt, cmd->vha, 0xe11b, "Cutting cmd %p (tag %d) buffer head "
+			"to offset %d (cmd->bufflen %d)", cmd, cmd->tag,
+			offset, cmd->bufflen);
+		if (offset == 0)
+			*xmit_type &= ~QLA_TGT_XMIT_DATA;
+		else if (qla_tgt_set_data_offset(cmd, offset)) {
+			ql_dbg(ql_dbg_tgt_mgt, cmd->vha, 0xe11c, "qla_tgt_set_data_offset()"
+				" failed (tag %d)", cmd->tag);
+		}
+	}
+}
+#else
+static inline void qla_tgt_check_srr_debug(struct qla_tgt_cmd *cmd, int *xmit_type) {}
+#endif
+
+int qla_tgt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type, uint8_t scsi_status)
+{
+	qla_tgt_check_srr_debug(cmd, &xmit_type);
+
+	ql_dbg(ql_dbg_tgt, cmd->vha, 0xe017, "is_send_status=%d,"
+		" cmd->bufflen=%d, cmd->sg_cnt=%d, cmd->dma_data_direction=%d",
+		(xmit_type & QLA_TGT_XMIT_STATUS) ? 1 : 0, cmd->bufflen,
+		cmd->sg_cnt, cmd->dma_data_direction);
+
+	return (IS_FWI2_CAPABLE(cmd->tgt->ha)) ?
+		__qla_tgt_24xx_xmit_response(cmd, xmit_type, scsi_status) :
+		__qla_tgt_2xxx_xmit_response(cmd, xmit_type, scsi_status);
+}
+EXPORT_SYMBOL(qla_tgt_xmit_response);
+
+static void qla_tgt_24xx_init_ctio_to_isp(ctio7_to_24xx_t *ctio,
+	struct qla_tgt_prm *prm)
+{
+	prm->sense_buffer_len = min((uint32_t)prm->sense_buffer_len,
+				    (uint32_t)sizeof(ctio->u.status1.sense_data));
+	ctio->u.status0.flags |= __constant_cpu_to_le16(CTIO7_FLAGS_SEND_STATUS);
+	if (qla_tgt_need_explicit_conf(prm->tgt->ha, prm->cmd, 0)) {
+		ctio->u.status0.flags |= __constant_cpu_to_le16(
+				CTIO7_FLAGS_EXPLICIT_CONFORM |
+				CTIO7_FLAGS_CONFORM_REQ);
+	}
+	ctio->u.status0.residual = cpu_to_le32(prm->residual);
+	ctio->u.status0.scsi_status = cpu_to_le16(prm->rq_result);
+	if (QLA_TGT_SENSE_VALID(prm->sense_buffer)) {
+		int i;
+
+		if (qla_tgt_need_explicit_conf(prm->tgt->ha, prm->cmd, 1)) {
+			if (prm->cmd->se_cmd.scsi_status != 0) {
+				ql_dbg(ql_dbg_tgt, prm->cmd->vha, 0xe018,
+					"Skipping EXPLICIT_CONFORM and CTIO7_FLAGS_CONFORM_REQ"
+					" for FCP READ w/ non GOOD status\n");
+				goto skip_explicit_conf;
+			}
+			ctio->u.status1.flags |= __constant_cpu_to_le16(
+				CTIO7_FLAGS_EXPLICIT_CONFORM |
+				CTIO7_FLAGS_CONFORM_REQ);
+		}
+skip_explicit_conf:
+		ctio->u.status1.flags &= ~__constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_0);
+		ctio->u.status1.flags |= __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1);
+		ctio->u.status1.scsi_status |= __constant_cpu_to_le16(SS_SENSE_LEN_VALID);
+		ctio->u.status1.sense_length = cpu_to_le16(prm->sense_buffer_len);
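+		/* Copy the sense buffer into the CTIO as big-endian dwords */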
+		for (i = 0; i < prm->sense_buffer_len/4; i++)
+			((uint32_t *)ctio->u.status1.sense_data)[i] =
+				cpu_to_be32(((uint32_t *)prm->sense_buffer)[i]);
+#if 0
+		if (unlikely((prm->sense_buffer_len % 4) != 0)) {
+			static int q;
+			if (q < 10) {
+				printk(KERN_INFO "qla_target(%d): %d bytes of sense "
+					"lost", prm->tgt->ha->vp_idx,
+					prm->sense_buffer_len % 4);
+				q++;
+			}
+		}
+#endif
+	} else {
+		ctio->u.status1.flags &= ~__constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_0);
+		ctio->u.status1.flags |= __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1);
+		ctio->u.status1.sense_length = 0;
+		memset(ctio->u.status1.sense_data, 0, sizeof(ctio->u.status1.sense_data));
+	}
+
+	/* FIXME: can sense data longer than 24 bytes ever occur here? */
+}
+
+/*
+ * Callback to set up a response for an xmit_type of QLA_TGT_XMIT_DATA
+ * and/or QLA_TGT_XMIT_STATUS for >= 24xx silicon
+ */
+static int __qla_tgt_24xx_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+	uint8_t scsi_status)
+{
+	struct scsi_qla_host *vha = cmd->vha;
+	struct qla_hw_data *ha = vha->hw;
+	ctio7_to_24xx_t *pkt;
+	struct qla_tgt_prm prm;
+	uint32_t full_req_cnt = 0;
+	unsigned long flags = 0;
+	int res;
+
+	memset(&prm, 0, sizeof(prm));
+
+	res = qla_tgt_pre_xmit_response(cmd, &prm, xmit_type, scsi_status, &full_req_cnt);
+	if (unlikely(res != 0)) {
+		if (res == QLA_TGT_PRE_XMIT_RESP_CMD_ABORTED)
+			return 0;
+
+		return res;
+	}
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	/* Does F/W have enough IOCBs for this request? */
+	res = qla_tgt_check_reserve_free_req(vha, full_req_cnt);
+	if (unlikely(res))
+		goto out_unmap_unlock;
+
+	res = qla_tgt_24xx_build_ctio_pkt(&prm, vha);
+	if (unlikely(res != 0))
+		goto out_unmap_unlock;
+
+	pkt = (ctio7_to_24xx_t *)prm.pkt;
+
+	if (qla_tgt_has_data(cmd) && (xmit_type & QLA_TGT_XMIT_DATA)) {
+		pkt->u.status0.flags |= __constant_cpu_to_le16(CTIO7_FLAGS_DATA_IN |
+				CTIO7_FLAGS_STATUS_MODE_0);
+
+		qla_tgt_load_data_segments(&prm, vha);
+
+		if (prm.add_status_pkt == 0) {
+			if (xmit_type & QLA_TGT_XMIT_STATUS) {
+				pkt->u.status0.scsi_status = cpu_to_le16(prm.rq_result);
+				pkt->u.status0.residual = cpu_to_le32(prm.residual);
+				pkt->u.status0.flags |= __constant_cpu_to_le16(
+						CTIO7_FLAGS_SEND_STATUS);
+				if (qla_tgt_need_explicit_conf(ha, cmd, 0)) {
+					pkt->u.status0.flags |= __constant_cpu_to_le16(
+						CTIO7_FLAGS_EXPLICIT_CONFORM |
+						CTIO7_FLAGS_CONFORM_REQ);
+				}
+			}
+
+		} else {
+			/*
+			 * We have already made sure that there are enough
+			 * request entries, so the HW lock is not dropped in
+			 * req_pkt().
+			 */
+			ctio7_to_24xx_t *ctio =
+				(ctio7_to_24xx_t *)qla_tgt_get_req_pkt(vha);
+
+			ql_dbg(ql_dbg_tgt, vha, 0xe019, "Building additional"
+					" status packet\n");
+
+			memcpy(ctio, pkt, sizeof(*ctio));
+			ctio->entry_count = 1;
+			ctio->dseg_count = 0;
+			ctio->u.status1.flags &= ~__constant_cpu_to_le16(
+						CTIO7_FLAGS_DATA_IN);
+
+			/* Real finish is ctio_m1's finish */
+			pkt->handle |= CTIO_INTERMEDIATE_HANDLE_MARK;
+			pkt->u.status0.flags |= __constant_cpu_to_le16(
+					CTIO7_FLAGS_DONT_RET_CTIO);
+			qla_tgt_24xx_init_ctio_to_isp((ctio7_to_24xx_t *)ctio,
+							&prm);
+			printk("Status CTIO7: %p\n", ctio);
+		}
+	} else
+		qla_tgt_24xx_init_ctio_to_isp(pkt, &prm);
+
+	cmd->state = QLA_TGT_STATE_PROCESSED; /* Mid-level is done processing */
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe01a, "Xmitting CTIO7 response pkt for 24xx:"
+			" %p scsi_status: 0x%02x\n", pkt, scsi_status);
+
+	qla2x00_isp_cmd(vha, vha->req);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return 0;
+
+out_unmap_unlock:
+	if (cmd->sg_mapped)
+		qla_tgt_unmap_sg(vha, cmd);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return res;
+}
+
+int qla_tgt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
+{
+	struct scsi_qla_host *vha = cmd->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = cmd->tgt;
+	struct qla_tgt_prm prm;
+	unsigned long flags;
+	int res = 0;
+
+	memset(&prm, 0, sizeof(prm));
+	prm.cmd = cmd;
+	prm.tgt = tgt;
+	prm.sg = NULL;
+	prm.req_cnt = 1;
+
+	/* Send marker if required */
+	if (qla_tgt_issue_marker(vha, 0) != QLA_SUCCESS)
+		return -EIO;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe01b, "CTIO_start: vha(%d)", (int)vha->vp_idx);
+
+	/* Calculate number of entries and segments required */
+	if (qla_tgt_pci_map_calc_cnt(&prm) != 0)
+		return -EAGAIN;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	/* Does F/W have enough IOCBs for this request? */
+	res = qla_tgt_check_reserve_free_req(vha, prm.req_cnt);
+	if (res != 0)
+		goto out_unlock_free_unmap;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		ctio7_to_24xx_t *pkt;
+		res = qla_tgt_24xx_build_ctio_pkt(&prm, vha);
+		if (unlikely(res != 0))
+			goto out_unlock_free_unmap;
+		pkt = (ctio7_to_24xx_t *)prm.pkt;
+		pkt->u.status0.flags |= __constant_cpu_to_le16(CTIO7_FLAGS_DATA_OUT |
+				CTIO7_FLAGS_STATUS_MODE_0);
+		qla_tgt_load_data_segments(&prm, vha);
+	} else {
+		ctio_to_2xxx_t *pkt;
+		qla_tgt_2xxx_build_ctio_pkt(&prm, vha);
+		pkt = (ctio_to_2xxx_t *)prm.pkt;
+		pkt->flags = __constant_cpu_to_le16(OF_FAST_POST | OF_DATA_OUT);
+		qla_tgt_load_data_segments(&prm, vha);
+	}
+
+	cmd->state = QLA_TGT_STATE_NEED_DATA;
+
+	qla2x00_isp_cmd(vha, vha->req);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return res;
+
+out_unlock_free_unmap:
+	if (cmd->sg_mapped)
+		qla_tgt_unmap_sg(vha, cmd);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	return res;
+}
+EXPORT_SYMBOL(qla_tgt_rdy_to_xfer);
+
+/* If hardware_lock held on entry, might drop it, then reacquire */
+/* This function sends the appropriate CTIO to ISP 2xxx or 24xx */
+static int __qla_tgt_send_term_exchange(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
+	atio_from_isp_t *atio)
+{
+	struct qla_hw_data *ha = vha->hw;
+	request_t *pkt;
+	int ret = 0;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe01c, "Sending TERM EXCH CTIO (ha=%p)\n", ha);
+
+	pkt = (request_t *)qla2x00_req_pkt(vha);
+	if (pkt == NULL) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet\n", vha->vp_idx, __func__);
+		return -ENOMEM;
+	}
+
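+	/*
+	 * A return value of 1 tells the caller that the command reached at
+	 * least QLA_TGT_STATE_PROCESSED and may be freed once the exchange
+	 * has been terminated.
+	 */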
+	if (cmd != NULL) {
+		if (cmd->state < QLA_TGT_STATE_PROCESSED) {
+			printk(KERN_ERR "qla_target(%d): Terminating cmd %p with "
+				"incorrect state %d\n", vha->vp_idx, cmd,
+				cmd->state);
+		} else
+			ret = 1;
+	}
+
+	pkt->entry_count = 1;
+	pkt->handle = QLA_TGT_SKIP_HANDLE | CTIO_COMPLETION_HANDLE_MARK;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		ctio7_to_24xx_t *ctio24 = (ctio7_to_24xx_t *)pkt;
+		ctio24->entry_type = CTIO_TYPE7;
+		if (cmd == NULL)
+			ctio24->nport_handle = CTIO7_NHANDLE_UNRECOGNIZED;
+		ctio24->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT);
+		ctio24->vp_index = vha->vp_idx;
+		ctio24->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2];
+		ctio24->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1];
+		ctio24->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0];
+		ctio24->exchange_addr = atio->u.isp24.exchange_addr;
+		ctio24->u.status1.flags = (atio->u.isp24.attr << 9) | __constant_cpu_to_le16(
+			CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_TERMINATE);
+		ctio24->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id);
+
+		/* Most likely, it isn't needed */
+		ctio24->u.status1.residual = get_unaligned((uint32_t *)
+			&atio->u.isp24.fcp_cmnd.add_cdb[atio->u.isp24.fcp_cmnd.add_cdb_len]);
+		if (ctio24->u.status1.residual != 0)
+			ctio24->u.status1.scsi_status |= SS_RESIDUAL_UNDER;
+	} else {
+		ctio_from_2xxx_t *ctio = (ctio_from_2xxx_t *)pkt;
+
+		ctio->entry_type = CTIO_RET_TYPE;
+
+		/* Set IDs */
+		SET_TARGET_ID(ha, ctio->target, GET_TARGET_ID(ha, atio));
+		ctio->rx_id = atio->u.isp2x.rx_id;
+
+		/* Most likely, it isn't needed */
+		ctio->residual = atio->u.isp2x.data_length;
+		if (ctio->residual != 0)
+			ctio->scsi_status |= SS_RESIDUAL_UNDER;
+
+		ctio->flags = __constant_cpu_to_le16(OF_FAST_POST | OF_TERM_EXCH |
+				OF_NO_DATA | OF_SS_MODE_1);
+		ctio->flags |= __constant_cpu_to_le16(OF_INC_RC);
+	}
+
+	qla2x00_isp_cmd(vha, vha->req);
+	return ret;
+}
+
+static void qla_tgt_send_term_exchange(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
+        atio_from_isp_t *atio, int ha_locked)
+{
+	unsigned long flags;
+	int rc;
+
+	if (qla_tgt_issue_marker(vha, ha_locked) < 0)
+		return;
+
+	if (ha_locked) {
+		rc = __qla_tgt_send_term_exchange(vha, cmd, atio);
+		goto done;
+	}
+	spin_lock_irqsave(&vha->hw->hardware_lock, flags);
+	rc = __qla_tgt_send_term_exchange(vha, cmd, atio);
+	spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
+done:
+	if (rc == 1) {
+		if (!ha_locked && !in_interrupt())
+			msleep(250); /* just in case */
+
+		vha->hw->tgt_ops->free_cmd(cmd);
+	}
+}
+
+void qla_tgt_free_cmd(struct qla_tgt_cmd *cmd)
+{
+	BUG_ON(cmd->sg_mapped);
+
+	if (unlikely(cmd->free_sg))
+		kfree(cmd->sg);
+	kmem_cache_free(qla_tgt_cmd_cachep, cmd);
+}
+EXPORT_SYMBOL(qla_tgt_free_cmd);
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_prepare_srr_ctio(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
+	void *ctio)
+{
+	struct qla_tgt_srr_ctio *sc;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	struct qla_tgt_srr_imm *imm;
+
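+	/*
+	 * An SRR arrives as a CTIO with SRR status plus an immediate notify
+	 * carrying the same srr_id; srr_work is only scheduled once both
+	 * halves have been queued.
+	 */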
+	tgt->ctio_srr_id++;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe11d, "qla_target(%d): CTIO with SRR "
+		"status received\n", vha->vp_idx);
+
+	if (!ctio) {
+		printk(KERN_ERR "qla_target(%d): SRR CTIO, "
+			"but ctio is NULL\n", vha->vp_idx);
+		return -EINVAL;
+	}
+
+	sc = kzalloc(sizeof(*sc), GFP_ATOMIC);
+	if (sc != NULL) {
+		sc->cmd = cmd;
+		/* IRQ is already OFF */
+		spin_lock(&tgt->srr_lock);
+		sc->srr_id = tgt->ctio_srr_id;
+		list_add_tail(&sc->srr_list_entry,
+			&tgt->srr_ctio_list);
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe11e, "CTIO SRR %p added (id %d)\n",
+			sc, sc->srr_id);
+		if (tgt->imm_srr_id == tgt->ctio_srr_id) {
+			int found = 0;
+			list_for_each_entry(imm, &tgt->srr_imm_list,
+					srr_list_entry) {
+				if (imm->srr_id == sc->srr_id) {
+					found = 1;
+					break;
+				}
+			}
+			if (found) {
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe11f,
+					"Scheduling srr work\n");
+				schedule_work(&tgt->srr_work);
+			} else {
+				printk(KERN_ERR "qla_target(%d): imm_srr_id "
+					"== ctio_srr_id (%d), but there is no "
+					"corresponding SRR IMM, deleting CTIO "
+					"SRR %p\n", vha->vp_idx, tgt->ctio_srr_id,
+					sc);
+				list_del(&sc->srr_list_entry);
+				spin_unlock(&tgt->srr_lock);
+
+				kfree(sc);
+				return -EINVAL;
+			}
+		}
+		spin_unlock(&tgt->srr_lock);
+	} else {
+		struct qla_tgt_srr_imm *ti;
+
+		printk(KERN_ERR "qla_target(%d): Unable to allocate SRR CTIO entry\n",
+			vha->vp_idx);
+		spin_lock(&tgt->srr_lock);
+		list_for_each_entry_safe(imm, ti, &tgt->srr_imm_list,
+					srr_list_entry) {
+			if (imm->srr_id == tgt->ctio_srr_id) {
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe120, "IMM SRR %p"
+					" deleted (id %d)\n", imm, imm->srr_id);
+				list_del(&imm->srr_list_entry);
+				qla_tgt_reject_free_srr_imm(vha, imm, 1);
+			}
+		}
+		spin_unlock(&tgt->srr_lock);
+
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static int qla_tgt_term_ctio_exchange(struct scsi_qla_host *vha, void *ctio,
+	struct qla_tgt_cmd *cmd, uint32_t status)
+{
+	struct qla_hw_data *ha = vha->hw;
+	int term = 0;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		if (ctio != NULL) {
+			ctio7_from_24xx_t *c = (ctio7_from_24xx_t *)ctio;
+			term = !(c->flags &
+				__constant_cpu_to_le16(OF_TERM_EXCH));
+		} else
+			term = 1;
+		if (term)
+			qla_tgt_send_term_exchange(vha, cmd, &cmd->atio, 1);
+	} else {
+		if (status != CTIO_SUCCESS)
+			qla_tgt_2xxx_send_modify_lun(vha, 1, 0);
+#if 0 /* it doesn't seem to be needed */
+		if (ctio != NULL) {
+			ctio_to_2xxx_t *c = (ctio_to_2xxx_t *)ctio;
+			term = !(c->flags &
+				__constant_cpu_to_le16(
+					CTIO7_FLAGS_TERMINATE));
+		} else
+			term = 1;
+		if (term) {
+			qla_tgt_send_term_exchange(vha, cmd, &cmd->atio, 1);
+		}
+#endif
+	}
+	return term;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static inline struct qla_tgt_cmd *qla_tgt_get_cmd(struct scsi_qla_host *vha, uint32_t handle)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	handle--;
+	if (ha->cmds[handle] != NULL) {
+		struct qla_tgt_cmd *cmd = ha->cmds[handle];
+		ha->cmds[handle] = NULL;
+		return cmd;
+	}
+	return NULL;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static struct qla_tgt_cmd *qla_tgt_ctio_to_cmd(struct scsi_qla_host *vha, uint32_t handle,
+	void *ctio)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_cmd *cmd = NULL;
+
+	/* Clear out internal marks */
+	handle &= ~(CTIO_COMPLETION_HANDLE_MARK | CTIO_INTERMEDIATE_HANDLE_MARK);
+
+	if (handle != QLA_TGT_NULL_HANDLE) {
+		if (unlikely(handle == QLA_TGT_SKIP_HANDLE)) {
+			ql_dbg(ql_dbg_tgt, vha, 0xe01e, "%s", "SKIP_HANDLE CTIO\n");
+			return NULL;
+		}
+		/* handle-1 is actually used */
+		if (unlikely(handle > MAX_OUTSTANDING_COMMANDS)) {
+			printk(KERN_ERR "qla_target(%d): Wrong handle %x "
+				"received\n", vha->vp_idx, handle);
+			return NULL;
+		}
+		cmd = qla_tgt_get_cmd(vha, handle);
+		if (unlikely(cmd == NULL)) {
+			printk(KERN_WARNING "qla_target(%d): Suspicious: unable to "
+				   "find the command with handle %x\n",
+				   vha->vp_idx, handle);
+			return NULL;
+		}
+	} else if (ctio != NULL) {
+		struct qla_tgt_sess *sess;
+		int tag;
+		uint16_t loop_id;
+
+		if (IS_FWI2_CAPABLE(ha)) {
+			/* We can't get loop ID from CTIO7 */
+			printk(KERN_ERR "qla_target(%d): Wrong CTIO received: "
+				"QLA24xx doesn't support NULL handles\n",
+				vha->vp_idx);
+			return NULL;
+		} else {
+			ctio_to_2xxx_t *c = (ctio_to_2xxx_t *)ctio;
+			loop_id = GET_TARGET_ID(ha, (atio_from_isp_t *)ctio);
+			tag = c->rx_id;
+		}
+
+		sess = ha->tgt_ops->find_sess_by_loop_id(vha, loop_id);
+		if (!sess) {
+			printk(KERN_WARNING "qla_target(%d): Suspicious: "
+				   "ctio_completion for non-existing session "
+				   "(loop_id %d, tag %d)\n",
+				   vha->vp_idx, loop_id, tag);
+			return NULL;
+		}
+	}
+
+	return cmd;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static void qla_tgt_do_ctio_completion(struct scsi_qla_host *vha, uint32_t handle,
+	uint32_t status, void *ctio)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct se_cmd *se_cmd;
+	struct target_core_fabric_ops *tfo;
+	struct qla_tgt_cmd *cmd;
+
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe206, "qla_target(%d): handle(ctio %p status"
+		" %#x) <- %08x\n", vha->vp_idx, ctio, status, handle);
+
+	if (handle & CTIO_INTERMEDIATE_HANDLE_MARK) {
+		/* That could happen only in case of an error/reset/abort */
+		if (status != CTIO_SUCCESS) {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe121, "Intermediate CTIO received"
+				" (status %x)\n", status);
+		}
+		return;
+	}
+
+	cmd = qla_tgt_ctio_to_cmd(vha, handle, ctio);
+	if (cmd == NULL) {
+		if (status != CTIO_SUCCESS)
+			qla_tgt_term_ctio_exchange(vha, ctio, NULL, status);
+		return;
+	}
+	se_cmd = &cmd->se_cmd;
+	tfo = se_cmd->se_tfo;
+
+	if (cmd->sg_mapped)
+		qla_tgt_unmap_sg(vha, cmd);
+
+	if (unlikely(status != CTIO_SUCCESS)) {
+		switch (status & 0xFFFF) {
+		case CTIO_LIP_RESET:
+		case CTIO_TARGET_RESET:
+		case CTIO_ABORTED:
+		case CTIO_TIMEOUT:
+		case CTIO_INVALID_RX_ID:
+			/* These statuses are expected during reset/abort handling */
+			printk(KERN_INFO "qla_target(%d): CTIO with "
+				"status %#x received, state %x, se_cmd %p, "
+				"(LIP_RESET=e, ABORTED=2, TARGET_RESET=17, "
+				"TIMEOUT=b, INVALID_RX_ID=8)\n", vha->vp_idx,
+				status, cmd->state, se_cmd);
+			break;
+
+		case CTIO_PORT_LOGGED_OUT:
+		case CTIO_PORT_UNAVAILABLE:
+			printk(KERN_INFO "qla_target(%d): CTIO with PORT LOGGED "
+				"OUT (29) or PORT UNAVAILABLE (28) status %x "
+				"received (state %x, se_cmd %p)\n",
+				vha->vp_idx, status, cmd->state, se_cmd);
+			break;
+
+		case CTIO_SRR_RECEIVED:
+			printk(KERN_INFO "qla_target(%d): CTIO with SRR_RECEIVED"
+				" status %x received (state %x, se_cmd %p)\n",
+				vha->vp_idx, status, cmd->state, se_cmd);
+			if (qla_tgt_prepare_srr_ctio(vha, cmd, ctio) != 0)
+				break;
+			else
+				return;
+
+		default:
+			printk(KERN_ERR "qla_target(%d): CTIO with error status "
+				"0x%x received (state %x, se_cmd %p\n",
+				vha->vp_idx, status, cmd->state, se_cmd);
+			break;
+		}
+
+		if (cmd->state != QLA_TGT_STATE_NEED_DATA)
+			if (qla_tgt_term_ctio_exchange(vha, ctio, cmd, status))
+				return;
+	}
+
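+	/*
+	 * PROCESSED: the status CTIO completed and the command is done.
+	 * NEED_DATA: the data-out CTIO completed, so hand the write data up
+	 * to the fabric module.  ABORTED: nothing left to do but free below.
+	 */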
+	if (cmd->state == QLA_TGT_STATE_PROCESSED) {
+		ql_dbg(ql_dbg_tgt, vha, 0xe01f, "Command %p finished\n", cmd);
+	} else if (cmd->state == QLA_TGT_STATE_NEED_DATA) {
+		int rx_status = 0;
+
+		cmd->state = QLA_TGT_STATE_DATA_IN;
+
+		if (unlikely(status != CTIO_SUCCESS))
+			rx_status = -EIO;
+		else
+			cmd->write_data_transferred = 1;
+
+		ql_dbg(ql_dbg_tgt, vha, 0xe020, "Data received, context %x,"
+				" rx_status %d\n", 0x0, rx_status);
+
+		ha->tgt_ops->handle_data(cmd);
+		return;
+	} else if (cmd->state == QLA_TGT_STATE_ABORTED) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe122, "Aborted command %p (tag %d) finished\n",
+				cmd, cmd->tag);
+	} else {
+		printk(KERN_ERR "qla_target(%d): A command in state (%d) should "
+			"not return a CTIO complete\n", vha->vp_idx, cmd->state);
+	}
+
+	if (unlikely(status != CTIO_SUCCESS)) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe123, "Finishing failed CTIO\n");
+		dump_stack();
+	}
+
+	ha->tgt_ops->free_cmd(cmd);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+/* called via callback from qla2xxx */
+void qla_tgt_ctio_completion(struct scsi_qla_host *vha, uint32_t handle)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+
+	if (likely(tgt == NULL)) {
+		ql_dbg(ql_dbg_tgt, vha, 0xe021, "CTIO, but target mode not enabled"
+			" (ha %d %p handle %#x)", vha->vp_idx, ha, handle);
+		return;
+	}
+
+	tgt->irq_cmd_count++;
+	qla_tgt_do_ctio_completion(vha, handle, CTIO_SUCCESS, NULL);
+	tgt->irq_cmd_count--;
+}
+
+static inline int qla_tgt_get_fcp_task_attr(uint8_t task_codes)
+{
+	int fcp_task_attr;
+
+	switch (task_codes) {
+	case ATIO_SIMPLE_QUEUE:
+		fcp_task_attr = MSG_SIMPLE_TAG;
+		break;
+	case ATIO_HEAD_OF_QUEUE:
+		fcp_task_attr = MSG_HEAD_TAG;
+		break;
+	case ATIO_ORDERED_QUEUE:
+		fcp_task_attr = MSG_ORDERED_TAG;
+		break;
+	case ATIO_ACA_QUEUE:
+		fcp_task_attr = MSG_ACA_TAG;
+		break;
+	case ATIO_UNTAGGED:
+		fcp_task_attr = MSG_SIMPLE_TAG;
+		break;
+	default:
+		printk(KERN_WARNING "qla_target: unknown task code %x, use "
+			"ORDERED instead\n", task_codes);
+		fcp_task_attr = MSG_ORDERED_TAG;
+		break;
+	}
+
+	return fcp_task_attr;
+}
+
+static struct qla_tgt_sess *qla_tgt_make_local_sess(struct scsi_qla_host *,
+					uint8_t *, uint16_t);
+/*
+ * Process context for I/O path into tcm_qla2xxx code
+ */
+static void qla_tgt_do_work(struct work_struct *work)
+{
+	struct qla_tgt_cmd *cmd = container_of(work, struct qla_tgt_cmd, work);
+	scsi_qla_host_t *vha = cmd->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	struct qla_tgt_sess *sess = NULL;
+	atio_from_isp_t *atio = &cmd->atio;
+	unsigned char *cdb;
+	unsigned long flags;
+	uint32_t data_length;
+	int ret, fcp_task_attr, data_dir, bidi = 0;
+
+	if (tgt->tgt_stop)
+		goto out_term;
+
+	sess = cmd->sess;
+	if (!sess) {
+		uint8_t *s_id = NULL;
+		uint16_t loop_id = 0;
+
+		if (IS_FWI2_CAPABLE(ha))
+			s_id = atio->u.isp24.fcp_hdr.s_id;
+		else
+			loop_id = GET_TARGET_ID(ha, atio);
+
+		mutex_lock(&ha->tgt_mutex);
+		sess = qla_tgt_make_local_sess(vha, s_id, loop_id);
+		/* sess has got an extra creation ref */
+		mutex_unlock(&ha->tgt_mutex);
+
+		if (!sess)
+			goto out_term;
+	}
+
+	if (tgt->tgt_stop)
+		goto out_term;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		cdb = &atio->u.isp24.fcp_cmnd.cdb[0];
+		cmd->tag = atio->u.isp24.exchange_addr;
+		cmd->unpacked_lun = scsilun_to_int(
+				(struct scsi_lun *)&atio->u.isp24.fcp_cmnd.lun);
+
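+		/*
+		 * Bidirectional commands start with the write half, so the
+		 * initial transfer is set up as DMA_TO_DEVICE.
+		 */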
+		if (atio->u.isp24.fcp_cmnd.rddata &&
+		    atio->u.isp24.fcp_cmnd.wrdata) {
+			bidi = 1;
+			data_dir = DMA_TO_DEVICE;
+		} else if (atio->u.isp24.fcp_cmnd.rddata)
+			data_dir = DMA_FROM_DEVICE;
+		else if (atio->u.isp24.fcp_cmnd.wrdata)
+			data_dir = DMA_TO_DEVICE;
+		else
+			data_dir = DMA_NONE;
+
+		fcp_task_attr = qla_tgt_get_fcp_task_attr(
+				atio->u.isp24.fcp_cmnd.task_attr);
+		data_length = be32_to_cpu(get_unaligned((uint32_t *)
+				&atio->u.isp24.fcp_cmnd.add_cdb[
+					atio->u.isp24.fcp_cmnd.add_cdb_len]));
+	} else {
+		uint16_t lun;
+
+		cdb = &atio->u.isp2x.cdb[0];
+		cmd->tag = atio->u.isp2x.rx_id;
+		lun = swab16(le16_to_cpu(atio->u.isp2x.lun));
+		cmd->unpacked_lun = scsilun_to_int((struct scsi_lun *)&lun);
+
+		if ((atio->u.isp2x.execution_codes & (ATIO_EXEC_READ | ATIO_EXEC_WRITE)) ==
+					(ATIO_EXEC_READ | ATIO_EXEC_WRITE)) {
+			bidi = 1;
+			data_dir = DMA_TO_DEVICE;
+		} else if (atio->u.isp2x.execution_codes & ATIO_EXEC_READ)
+			data_dir = DMA_FROM_DEVICE;
+		else if (atio->u.isp2x.execution_codes & ATIO_EXEC_WRITE)
+			data_dir = DMA_TO_DEVICE;
+		else
+			data_dir = DMA_NONE;
+
+		fcp_task_attr = qla_tgt_get_fcp_task_attr(atio->u.isp2x.task_codes);
+		data_length = le32_to_cpu(atio->u.isp2x.data_length);
+	}
+
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe207, "qla_target: START qla command: %p"
+		" lun: 0x%04x (tag %d)\n", cmd, cmd->unpacked_lun, cmd->tag);
+
+	ret = vha->hw->tgt_ops->handle_cmd(vha, cmd, cdb, data_length,
+			fcp_task_attr, data_dir, bidi);
+	if (ret != 0)
+		goto out_term;
+	/*
+	 * Drop the extra session reference taken in qla_tgt_handle_cmd_for_atio()
+	 */
+	qla_tgt_sess_put(sess);
+	return;
+
+out_term:
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe14d, "Terminating work cmd %p\n", cmd);
+	/*
+	 * The cmd has not been sent to the target yet, so pass NULL as the
+	 * second argument
+	 */
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	qla_tgt_send_term_exchange(vha, NULL, &cmd->atio, 1);
+
+	if (sess)
+		__qla_tgt_sess_put(sess);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_handle_cmd_for_atio(struct scsi_qla_host *vha,
+	atio_from_isp_t *atio)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	struct qla_tgt_sess *sess;
+	struct qla_tgt_cmd *cmd;
+	int res = 0;
+
+	if (unlikely(tgt->tgt_stop)) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe124, "New command while device %p"
+			" is shutting down\n", tgt);
+		return -EFAULT;
+	}
+
+	cmd = kmem_cache_zalloc(qla_tgt_cmd_cachep, GFP_ATOMIC);
+	if (!cmd) {
+		printk(KERN_INFO "qla_target(%d): Allocation of cmd "
+			"failed\n", vha->vp_idx);
+		return -ENOMEM;
+	}
+
+	INIT_LIST_HEAD(&cmd->cmd_list);
+
+	memcpy(&cmd->atio, atio, sizeof(*atio));
+	cmd->state = QLA_TGT_STATE_NEW;
+	cmd->tgt = ha->qla_tgt;
+	cmd->vha = vha;
+
+	if (IS_FWI2_CAPABLE(ha))
+		sess = ha->tgt_ops->find_sess_by_s_id(vha,
+					atio->u.isp24.fcp_hdr.s_id);
+	else
+		sess = ha->tgt_ops->find_sess_by_loop_id(vha,
+					GET_TARGET_ID(ha, atio));
+
+	if (unlikely(!sess)) {
+		if (IS_FWI2_CAPABLE(ha)) {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe125, "qla_target(%d):"
+				" Unable to find wwn login (s_id %x:%x:%x),"
+				" trying to create it manually\n", vha->vp_idx,
+				atio->u.isp24.fcp_hdr.s_id[0],
+				atio->u.isp24.fcp_hdr.s_id[1],
+				atio->u.isp24.fcp_hdr.s_id[2]);
+		} else {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe126, "qla_target(%d):"
+				" Unable to find wwn login (loop_id=%d), trying"
+				" to create it manually\n", vha->vp_idx,
+				GET_TARGET_ID(ha, atio));
+		}
+		if (atio->u.raw.entry_count > 1) {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe127, "Dropping multy entry"
+					" cmd %p\n", cmd);
+			goto out_free_cmd;
+		}
+		goto out_sched;
+	}
+
+	if (sess->tearing_down || tgt->tgt_stop)
+		goto out_free_cmd;
+
+	cmd->sess = sess;
+	cmd->loop_id = sess->loop_id;
+	cmd->conf_compl_supported = sess->conf_compl_supported;
+	/*
+	 * Get the extra kref_get() before dropping qla_hw_data->hardware_lock,
+	 * and call qla_tgt_sess_put() -> kref_put() in qla_tgt_do_work() process
+	 * context to drop the extra reference.
+	 */
+	kref_get(&sess->sess_kref);
+
+out_sched:
+	INIT_WORK(&cmd->work, qla_tgt_do_work);
+	queue_work(qla_tgt_wq, &cmd->work);
+	return 0;
+
+out_free_cmd:
+	qla_tgt_free_cmd(cmd);
+	return res;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_issue_task_mgmt(struct qla_tgt_sess *sess, uint32_t lun,
+	int fn, void *iocb, int flags)
+{
+	struct scsi_qla_host *vha = sess->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_mgmt_cmd *mcmd;
+	int res;
+	uint8_t tmr_func;
+
+	mcmd = mempool_alloc(qla_tgt_mgmt_cmd_mempool, GFP_ATOMIC);
+	if (!mcmd) {
+		printk(KERN_ERR "qla_target(%d): Allocation of management "
+			"command failed, some commands and their data could "
+			"leak\n", vha->vp_idx);
+		return -ENOMEM;
+	}
+	memset(mcmd, 0, sizeof(*mcmd));
+	mcmd->sess = sess;
+
+	if (iocb) {
+		memcpy(&mcmd->orig_iocb.imm_ntfy, iocb,
+			sizeof(mcmd->orig_iocb.imm_ntfy));
+	}
+	mcmd->tmr_func = fn;
+	mcmd->flags = flags;
+
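+	/* Translate the qla2xxx TM function code into a generic TMR_* value */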
+	switch (fn) {
+	case QLA_TGT_CLEAR_ACA:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe400, "qla_target(%d): CLEAR_ACA received\n",
+			sess->vha->vp_idx);
+		tmr_func = TMR_CLEAR_ACA;
+		break;
+
+	case QLA_TGT_TARGET_RESET:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe401, "qla_target(%d): TARGET_RESET received\n",
+			sess->vha->vp_idx);
+		tmr_func = TMR_TARGET_WARM_RESET;
+		break;
+
+	case QLA_TGT_LUN_RESET:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe402, "qla_target(%d): LUN_RESET received\n",
+			sess->vha->vp_idx);
+		tmr_func = TMR_LUN_RESET;
+		break;
+
+	case QLA_TGT_CLEAR_TS:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe403, "qla_target(%d): CLEAR_TS received\n",
+			sess->vha->vp_idx);
+		tmr_func = TMR_CLEAR_TASK_SET;
+		break;
+
+	case QLA_TGT_ABORT_TS:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe405, "qla_target(%d): ABORT_TS received\n",
+			sess->vha->vp_idx);
+		tmr_func = TMR_ABORT_TASK_SET;
+		break;
+#if 0
+	case QLA_TGT_ABORT_ALL:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe406, "qla_target(%d): Doing ABORT_ALL_TASKS\n",
+			sess->vha->vp_idx);
+		tmr_func = 0;
+		break;
+
+	case QLA_TGT_ABORT_ALL_SESS:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe407, "qla_target(%d): Doing ABORT_ALL_TASKS_SESS\n",
+			sess->vha->vp_idx);
+		tmr_func = 0;
+		break;
+
+	case QLA_TGT_NEXUS_LOSS_SESS:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe408, "qla_target(%d): Doing NEXUS_LOSS_SESS\n",
+			sess->vha->vp_idx);
+		tmr_func = 0;
+		break;
+
+	case QLA_TGT_NEXUS_LOSS:
+		ql_dbg(ql_dbg_tgt_tmr, vha, 0xe409, "qla_target(%d): Doing NEXUS_LOSS\n",
+			sess->vha->vp_idx);
+		tmr_func = 0;
+		break;
+#endif
+	default:
+		printk(KERN_ERR "qla_target(%d): Unknown task mgmt fn 0x%x\n",
+			    sess->vha->vp_idx, fn);
+		mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);
+		return -ENOSYS;
+	}
+
+	res = ha->tgt_ops->handle_tmr(mcmd, lun, tmr_func);
+	if (res != 0) {
+		printk(KERN_ERR "qla_target(%d): tgt_ops->handle_tmr() failed: %d\n",
+			    sess->vha->vp_idx, res);
+		mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_handle_task_mgmt(struct scsi_qla_host *vha, void *iocb)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt;
+	struct qla_tgt_sess *sess;
+	uint32_t lun, unpacked_lun;
+	int lun_size, fn, res = 0;
+
+	tgt = ha->qla_tgt;
+	if (IS_FWI2_CAPABLE(ha)) {
+		atio_from_isp_t *a = (atio_from_isp_t *)iocb;
+
+		lun = a->u.isp24.fcp_cmnd.lun;
+		lun_size = sizeof(a->u.isp24.fcp_cmnd.lun);
+		fn = a->u.isp24.fcp_cmnd.task_mgmt_flags;
+		sess = ha->tgt_ops->find_sess_by_s_id(vha,
+					a->u.isp24.fcp_hdr.s_id);
+	} else {
+		imm_ntfy_from_isp_t *n = (imm_ntfy_from_isp_t *)iocb;
+		/* make it be in network byte order */
+		lun = swab16(le16_to_cpu(n->u.isp2x.lun));
+		lun_size = sizeof(lun);
+		fn = n->u.isp2x.task_flags >> IMM_NTFY_TASK_MGMT_SHIFT;
+		sess = ha->tgt_ops->find_sess_by_loop_id(vha,
+					GET_TARGET_ID(ha, (atio_from_isp_t *)iocb));
+	}
+	unpacked_lun = scsilun_to_int((struct scsi_lun *)&lun);
+
+	if (!sess) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe128, "qla_target(%d): task mgmt fn 0x%x for "
+			"non-existant session\n", vha->vp_idx, fn);
+		res = qla_tgt_sched_sess_work(tgt, QLA_TGT_SESS_WORK_TM, iocb,
+			IS_FWI2_CAPABLE(ha) ? sizeof(atio_from_isp_t) :
+					      sizeof(imm_ntfy_from_isp_t));
+		if (res != 0)
+			tgt->tm_to_unknown = 1;
+
+		return res;
+	}
+
+	return qla_tgt_issue_task_mgmt(sess, unpacked_lun, fn, iocb, 0);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int __qla_tgt_abort_task(struct scsi_qla_host *vha,
+	imm_ntfy_from_isp_t *iocb, struct qla_tgt_sess *sess)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_mgmt_cmd *mcmd;
+	uint32_t lun, unpacked_lun;
+	int rc;
+	uint16_t tag;
+
+	mcmd = mempool_alloc(qla_tgt_mgmt_cmd_mempool, GFP_ATOMIC);
+	if (mcmd == NULL) {
+		printk(KERN_ERR "qla_target(%d): %s: Allocation of ABORT"
+			" cmd failed\n", vha->vp_idx, __func__);
+		return -ENOMEM;
+	}
+	memset(mcmd, 0, sizeof(*mcmd));
+
+	mcmd->sess = sess;
+	memcpy(&mcmd->orig_iocb.imm_ntfy, iocb, sizeof(mcmd->orig_iocb.imm_ntfy));
+
+	tag = le16_to_cpu(iocb->u.isp2x.seq_id);
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		atio_from_isp_t *a = (atio_from_isp_t *)iocb;
+		lun = a->u.isp24.fcp_cmnd.lun;
+	} else
+		lun = swab16(le16_to_cpu(iocb->u.isp2x.lun));
+
+	unpacked_lun = scsilun_to_int((struct scsi_lun *)&lun);
+
+	rc = ha->tgt_ops->handle_tmr(mcmd, unpacked_lun, ABORT_TASK);
+	if (rc != 0) {
+		printk(KERN_ERR "qla_target(%d): tgt_ops->handle_tmr()"
+			" failed: %d\n", vha->vp_idx, rc);
+		mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static int qla_tgt_abort_task(struct scsi_qla_host *vha, imm_ntfy_from_isp_t *iocb)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess;
+	int loop_id, res;
+
+	loop_id = GET_TARGET_ID(ha, (atio_from_isp_t *)iocb);
+
+	sess = ha->tgt_ops->find_sess_by_loop_id(vha, loop_id);
+	if (sess == NULL) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe129, "qla_target(%d): task abort for "
+			"non-existent session\n", vha->vp_idx);
+		res = qla_tgt_sched_sess_work(ha->qla_tgt, QLA_TGT_SESS_WORK_ABORT,
+					iocb, sizeof(*iocb));
+		if (res != 0)
+			ha->qla_tgt->tm_to_unknown = 1;
+
+		return res;
+	}
+
+	return __qla_tgt_abort_task(vha, iocb, sess);
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire
+ */
+static int qla_tgt_24xx_handle_els(struct scsi_qla_host *vha,
+	imm_ntfy_from_isp_t *iocb)
+{
+	struct qla_hw_data *ha = vha->hw;
+	int res = 0;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe12a, "qla_target(%d): Port ID: 0x%02x:%02x:%02x"
+		" ELS opcode: 0x%02x\n", vha->vp_idx, iocb->u.isp24.port_id[0],
+		iocb->u.isp24.port_id[1], iocb->u.isp24.port_id[2],
+		iocb->u.isp24.status_subcode);
+
+	switch (iocb->u.isp24.status_subcode) {
+	case ELS_PLOGI:
+	case ELS_FLOGI:
+	case ELS_PRLI:
+	case ELS_LOGO:
+	case ELS_PRLO:
+		res = qla_tgt_reset(vha, iocb, QLA_TGT_NEXUS_LOSS_SESS);
+		break;
+	case ELS_PDISC:
+	case ELS_ADISC:
+	{
+		struct qla_tgt *tgt = ha->qla_tgt;
+		if (tgt->link_reinit_iocb_pending) {
+			qla_tgt_send_notify_ack(vha, &tgt->link_reinit_iocb,
+				0, 0, 0, 0, 0, 0);
+			tgt->link_reinit_iocb_pending = 0;
+		}
+		res = 1; /* send notify ack */
+		break;
+	}
+
+	default:
+		printk(KERN_ERR "qla_target(%d): Unsupported ELS command %x "
+			"received\n", vha->vp_idx, iocb->u.isp24.status_subcode);
+		res = qla_tgt_reset(vha, iocb, QLA_TGT_NEXUS_LOSS_SESS);
+		break;
+	}
+
+	return res;
+}
+
+static int qla_tgt_set_data_offset(struct qla_tgt_cmd *cmd, uint32_t offset)
+{
+	struct scatterlist *sg, *sgp, *sg_srr, *sg_srr_start = NULL;
+	size_t first_offset = 0, rem_offset = offset, tmp = 0;
+	int i, sg_srr_cnt, bufflen = 0;
+
+	ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe305, "Entering qla_tgt_set_data_offset:"
+		" cmd: %p, cmd->sg: %p, cmd->sg_cnt: %u, direction: %d\n",
+		cmd, cmd->sg, cmd->sg_cnt, cmd->dma_data_direction);
+
+	/*
+	 * FIXME: Reject non zero SRR relative offset until we can test
+	 * this code properly.
+	 */
+	printk("Rejecting non zero SRR rel_offs: %u\n", offset);
+	return -1;
+
+	if (!cmd->sg || !cmd->sg_cnt) {
+		printk(KERN_ERR "Missing cmd->sg or zero cmd->sg_cnt in"
+				" qla_tgt_set_data_offset\n");
+		return -EINVAL;
+	}
+	/*
+	 * Walk the current cmd->sg list until we locate the new sg_srr_start
+	 */
+	for_each_sg(cmd->sg, sg, cmd->sg_cnt, i) {
+		ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe306, "sg[%d]: %p page: %p,"
+			" length: %d, offset: %d\n", i, sg, sg_page(sg),
+			sg->length, sg->offset);
+
+		if ((sg->length + tmp) > offset) {
+			first_offset = rem_offset;
+			sg_srr_start = sg;
+			ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe307, "Found matching sg[%d],"
+				" using %p as sg_srr_start, and using first_offset:"
+				" %lu\n", i, sg, first_offset);
+			break;
+		}
+		tmp += sg->length;
+		rem_offset -= sg->length;
+	}
+
+	if (!sg_srr_start) {
+		printk(KERN_ERR "Unable to locate sg_srr_start for offset: %u\n", offset);
+		return -EINVAL;
+	}
+	sg_srr_cnt = (cmd->sg_cnt - i);
+
+	sg_srr = kcalloc(sg_srr_cnt, sizeof(struct scatterlist), GFP_KERNEL);
+	if (!sg_srr) {
+		printk(KERN_ERR "Unable to allocate sg_srr\n");
+		return -ENOMEM;
+	}
+	sg_init_table(sg_srr, sg_srr_cnt);
+	sgp = &sg_srr[0];
+	/*
+	 * Walk the remaining list for sg_srr_start, mapping to the newly
+	 * allocated sg_srr taking first_offset into account.
+	 */
+	for_each_sg(sg_srr_start, sg, sg_srr_cnt, i) {
+		if (first_offset) {
+			sg_set_page(sgp, sg_page(sg),
+				(sg->length - first_offset), first_offset);
+			first_offset = 0;
+		} else {
+			sg_set_page(sgp, sg_page(sg), sg->length, 0);
+		}
+		bufflen += sgp->length;
+
+		sgp = sg_next(sgp);
+		if (!sgp)
+			break;
+	}
+
+	cmd->sg = sg_srr;
+	cmd->sg_cnt = sg_srr_cnt;
+	cmd->bufflen = bufflen;
+	cmd->offset += offset;
+	cmd->free_sg = 1;
+
+	ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe308, "New cmd->sg: %p\n", cmd->sg);
+	ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe309, "New cmd->sg_cnt: %u\n", cmd->sg_cnt);
+	ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe30b, "New cmd->bufflen: %u\n", cmd->bufflen);
+	ql_dbg(ql_dbg_tgt_sgl, cmd->vha, 0xe30c, "New cmd->offset: %u\n", cmd->offset);
+
+	BUG_ON(cmd->sg_cnt < 0);
+	BUG_ON(cmd->bufflen < 0);
+
+	return 0;
+}
+
+static inline int qla_tgt_srr_adjust_data(struct qla_tgt_cmd *cmd,
+	uint32_t srr_rel_offs, int *xmit_type)
+{
+	int res = 0, rel_offs;
+
+	rel_offs = srr_rel_offs - cmd->offset;
+	ql_dbg(ql_dbg_tgt_mgt, cmd->vha, 0xe12b, "srr_rel_offs=%d, rel_offs=%d",
+			srr_rel_offs, rel_offs);
+
+	*xmit_type = QLA_TGT_XMIT_ALL;
+
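+	/*
+	 * Three cases: a negative relative offset is malformed and gets
+	 * rejected; an offset equal to the full buffer length means the
+	 * initiator already received all data, so only status must be
+	 * resent; anything in between requires rebuilding the SGL at the
+	 * new offset.
+	 */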
+	if (rel_offs < 0) {
+		printk(KERN_ERR "qla_target(%d): SRR rel_offs (%d) "
+			"< 0", cmd->vha->vp_idx, rel_offs);
+		res = -1;
+	} else if (rel_offs == cmd->bufflen)
+		*xmit_type = QLA_TGT_XMIT_STATUS;
+	else if (rel_offs > 0)
+		res = qla_tgt_set_data_offset(cmd, rel_offs);
+
+	return res;
+}
+
+/* No locks, thread context */
+static void qla_tgt_handle_srr(struct scsi_qla_host *vha, struct qla_tgt_srr_ctio *sctio,
+	struct qla_tgt_srr_imm *imm)
+{
+	imm_ntfy_from_isp_t *ntfy = (imm_ntfy_from_isp_t *)&imm->imm_ntfy;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_cmd *cmd = sctio->cmd;
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+	unsigned long flags;
+	int xmit_type = 0, resp = 0;
+	uint32_t offset;
+	uint16_t srr_ui;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		offset = le32_to_cpu(ntfy->u.isp24.srr_rel_offs);
+		srr_ui = ntfy->u.isp24.srr_ui;
+	} else {
+		offset = le32_to_cpu(ntfy->u.isp2x.srr_rel_offs);
+		srr_ui = ntfy->u.isp2x.srr_ui;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe12c, "SRR cmd %p, srr_ui %x\n",
+			cmd, srr_ui);
+
+	switch (srr_ui) {
+	case SRR_IU_STATUS:
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+		qla_tgt_send_notify_ack(vha, ntfy,
+			0, 0, 0, NOTIFY_ACK_SRR_FLAGS_ACCEPT, 0, 0);
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		xmit_type = QLA_TGT_XMIT_STATUS;
+		resp = 1;
+		break;
+	case SRR_IU_DATA_IN:
+		if (!cmd->sg || !cmd->sg_cnt) {
+			printk(KERN_ERR "Unable to process SRR_IU_DATA_IN due to"
+				" missing cmd->sg, state: %d\n", cmd->state);
+			dump_stack();
+			goto out_reject;
+		}
+		if (se_cmd->scsi_status != 0) {
+			ql_dbg(ql_dbg_tgt, vha, 0xe022, "Rejecting SRR_IU_DATA_IN"
+					" with non GOOD scsi_status\n");
+			goto out_reject;
+		}
+		cmd->bufflen = se_cmd->data_length;
+
+		if (qla_tgt_has_data(cmd)) {
+			if (qla_tgt_srr_adjust_data(cmd, offset, &xmit_type) != 0)
+				goto out_reject;
+			spin_lock_irqsave(&ha->hardware_lock, flags);
+			qla_tgt_send_notify_ack(vha, ntfy,
+				0, 0, 0, NOTIFY_ACK_SRR_FLAGS_ACCEPT, 0, 0);
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+			resp = 1;
+		} else {
+			printk(KERN_ERR "qla_target(%d): SRR for in data for cmd "
+				"without them (tag %d, SCSI status %d), "
+				"reject", vha->vp_idx, cmd->tag,
+				cmd->se_cmd.scsi_status);
+			goto out_reject;
+		}
+		break;
+	case SRR_IU_DATA_OUT:
+		if (!cmd->sg || !cmd->sg_cnt) {
+			printk(KERN_ERR "Unable to process SRR_IU_DATA_OUT due to"
+				" missing cmd->sg\n");
+			dump_stack();
+			goto out_reject;
+		}
+		if (se_cmd->scsi_status != 0) {
+			ql_dbg(ql_dbg_tgt, vha, 0xe023, "Rejecting SRR_IU_DATA_OUT"
+					" with non GOOD scsi_status\n");
+			goto out_reject;
+		}
+		cmd->bufflen = se_cmd->data_length;
+
+		if (qla_tgt_has_data(cmd)) {
+			if (qla_tgt_srr_adjust_data(cmd, offset, &xmit_type) != 0)
+				goto out_reject;
+			spin_lock_irqsave(&ha->hardware_lock, flags);
+			qla_tgt_send_notify_ack(vha, ntfy,
+				0, 0, 0, NOTIFY_ACK_SRR_FLAGS_ACCEPT, 0, 0);
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+			if (xmit_type & QLA_TGT_XMIT_DATA)
+				qla_tgt_rdy_to_xfer(cmd);
+		} else {
+			printk(KERN_ERR "qla_target(%d): SRR for out data for cmd "
+				"without them (tag %d, SCSI status %d), "
+				"reject", vha->vp_idx, cmd->tag,
+				cmd->se_cmd.scsi_status);
+			goto out_reject;
+		}
+		break;
+	default:
+		printk(KERN_ERR "qla_target(%d): Unknown srr_ui value %x",
+			vha->vp_idx, srr_ui);
+		goto out_reject;
+	}
+
+	/* Transmit the response for the status and data-in SRR cases */
+	if (resp) {
+		if (IS_FWI2_CAPABLE(ha))
+			__qla_tgt_24xx_xmit_response(cmd, xmit_type, se_cmd->scsi_status);
+		else
+			__qla_tgt_2xxx_xmit_response(cmd, xmit_type, se_cmd->scsi_status);
+	}
+
+	return;
+
+out_reject:
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	qla_tgt_send_notify_ack(vha, ntfy, 0, 0, 0,
+		NOTIFY_ACK_SRR_FLAGS_REJECT,
+		NOTIFY_ACK_SRR_REJECT_REASON_UNABLE_TO_PERFORM,
+		NOTIFY_ACK_SRR_FLAGS_REJECT_EXPL_NO_EXPL);
+	if (cmd->state == QLA_TGT_STATE_NEED_DATA) {
+		cmd->state = QLA_TGT_STATE_DATA_IN;
+		dump_stack();
+	} else
+		qla_tgt_send_term_exchange(vha, cmd, &cmd->atio, 1);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+static void qla_tgt_reject_free_srr_imm(struct scsi_qla_host *vha, struct qla_tgt_srr_imm *imm,
+	int ha_locked)
+{
+	struct qla_hw_data *ha = vha->hw;
+	unsigned long flags = 0;
+
+	if (!ha_locked)
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	qla_tgt_send_notify_ack(vha, (void *)&imm->imm_ntfy, 0, 0, 0,
+		NOTIFY_ACK_SRR_FLAGS_REJECT,
+		NOTIFY_ACK_SRR_REJECT_REASON_UNABLE_TO_PERFORM,
+		NOTIFY_ACK_SRR_FLAGS_REJECT_EXPL_NO_EXPL);
+
+	if (!ha_locked)
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	kfree(imm);
+}
+
+static void qla_tgt_handle_srr_work(struct work_struct *work)
+{
+	struct qla_tgt *tgt = container_of(work, struct qla_tgt, srr_work);
+	struct scsi_qla_host *vha = NULL;
+	struct qla_tgt_srr_ctio *sctio;
+	unsigned long flags;
+
+	ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe12e, "Entering SRR work (tgt %p)\n", tgt);
+
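+	/*
+	 * srr_lock is dropped while each matched CTIO/IMM pair is being
+	 * handled, so restart the list walk from the top after every
+	 * processed entry instead of trusting a stale list cursor.
+	 */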
+restart:
+	spin_lock_irqsave(&tgt->srr_lock, flags);
+	list_for_each_entry(sctio, &tgt->srr_ctio_list, srr_list_entry) {
+		struct qla_tgt_srr_imm *imm, *i, *ti;
+		struct qla_tgt_cmd *cmd;
+		struct se_cmd *se_cmd;
+
+		imm = NULL;
+		list_for_each_entry_safe(i, ti, &tgt->srr_imm_list,
+						srr_list_entry) {
+			if (i->srr_id == sctio->srr_id) {
+				list_del(&i->srr_list_entry);
+				if (imm) {
+					printk(KERN_ERR "qla_target(%d): There must "
+					  "be only one IMM SRR per CTIO SRR "
+					  "(IMM SRR %p, id %d, CTIO %p\n",
+					  vha->vp_idx, i, i->srr_id, sctio);
+					qla_tgt_reject_free_srr_imm(vha, i, 0);
+				} else
+					imm = i;
+			}
+		}
+
+		ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe12f, "IMM SRR %p, CTIO SRR %p (id %d)\n",
+			imm, sctio, sctio->srr_id);
+
+		if (imm == NULL) {
+			ql_dbg(ql_dbg_tgt_mgt, tgt->vha, 0xe130, "Not found matching IMM"
+				" for SRR CTIO (id %d)\n", sctio->srr_id);
+			continue;
+		} else
+			list_del(&sctio->srr_list_entry);
+
+		spin_unlock_irqrestore(&tgt->srr_lock, flags);
+
+		cmd = sctio->cmd;
+		vha = cmd->vha;
+		/*
+		 * Reset qla_tgt_cmd SRR values and SGL pointer+count to follow
+		 * tcm_qla2xxx_write_pending() and tcm_qla2xxx_queue_data_in()
+		 * logic..
+		 */
+		cmd->offset = 0;
+		if (cmd->free_sg) {
+			kfree(cmd->sg);
+			cmd->sg = NULL;
+			cmd->free_sg = 0;
+		}
+		se_cmd = &cmd->se_cmd;
+
+		cmd->sg_cnt = se_cmd->t_tasks_sg_chained_no;
+		cmd->sg = se_cmd->t_tasks_sg_chained;
+
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe131,  "SRR cmd %p (se_cmd %p, tag %d, op %x), "
+			"sg_cnt=%d, offset=%d", cmd, &cmd->se_cmd,
+			cmd->tag, se_cmd->t_task_cdb[0], cmd->sg_cnt,
+			cmd->offset);
+
+		qla_tgt_handle_srr(vha, sctio, imm);
+
+		kfree(imm);
+		kfree(sctio);
+		goto restart;
+	}
+	spin_unlock_irqrestore(&tgt->srr_lock, flags);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+static void qla_tgt_prepare_srr_imm(struct scsi_qla_host *vha,
+	imm_ntfy_from_isp_t *iocb)
+{
+	struct qla_tgt_srr_imm *imm;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	struct qla_tgt_srr_ctio *sctio;
+
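+	/*
+	 * The IMM and CTIO halves of a single SRR are paired through a
+	 * shared sequence number: imm_srr_id counts IMM arrivals here,
+	 * ctio_srr_id counts the CTIO side, and srr_work only runs once
+	 * both halves with a matching srr_id have been queued.
+	 */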
+	tgt->imm_srr_id++;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe132, "qla_target(%d): SRR received\n",
+			vha->vp_idx);
+
+	imm = kzalloc(sizeof(*imm), GFP_ATOMIC);
+	if (imm != NULL) {
+		memcpy(&imm->imm_ntfy, iocb, sizeof(imm->imm_ntfy));
+
+		/* IRQ is already OFF */
+		spin_lock(&tgt->srr_lock);
+		imm->srr_id = tgt->imm_srr_id;
+		list_add_tail(&imm->srr_list_entry,
+			&tgt->srr_imm_list);
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe133, "IMM NTFY SRR %p added (id %d,"
+			" ui %x)\n", imm, imm->srr_id, iocb->u.isp24.srr_ui);
+		if (tgt->imm_srr_id == tgt->ctio_srr_id) {
+			int found = 0;
+			list_for_each_entry(sctio, &tgt->srr_ctio_list,
+					srr_list_entry) {
+				if (sctio->srr_id == imm->srr_id) {
+					found = 1;
+					break;
+				}
+			}
+			if (found) {
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe134, "%s", "Scheduling srr work\n");
+				schedule_work(&tgt->srr_work);
+			} else {
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe135, "qla_target(%d): imm_srr_id "
+					"== ctio_srr_id (%d), but there is no "
+					"corresponding SRR CTIO, deleting IMM "
+					"SRR %p\n", vha->vp_idx, tgt->ctio_srr_id,
+					imm);
+				list_del(&imm->srr_list_entry);
+
+				kfree(imm);
+
+				spin_unlock(&tgt->srr_lock);
+				goto out_reject;
+			}
+		}
+		spin_unlock(&tgt->srr_lock);
+	} else {
+		struct qla_tgt_srr_ctio *ts;
+
+		printk(KERN_ERR "qla_target(%d): Unable to allocate SRR IMM "
+			"entry, SRR request will be rejected\n", vha->vp_idx);
+
+		/* IRQ is already OFF */
+		spin_lock(&tgt->srr_lock);
+		list_for_each_entry_safe(sctio, ts, &tgt->srr_ctio_list,
+					srr_list_entry) {
+			if (sctio->srr_id == tgt->imm_srr_id) {
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe136, "CTIO SRR %p deleted "
+					"(id %d)\n", sctio, sctio->srr_id);
+				list_del(&sctio->srr_list_entry);
+				qla_tgt_send_term_exchange(vha, sctio->cmd,
+					&sctio->cmd->atio, 1);
+				kfree(sctio);
+			}
+		}
+		spin_unlock(&tgt->srr_lock);
+		goto out_reject;
+	}
+
+	return;
+
+out_reject:
+	qla_tgt_send_notify_ack(vha, iocb, 0, 0, 0,
+		NOTIFY_ACK_SRR_FLAGS_REJECT,
+		NOTIFY_ACK_SRR_REJECT_REASON_UNABLE_TO_PERFORM,
+		NOTIFY_ACK_SRR_FLAGS_REJECT_EXPL_NO_EXPL);
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire.
+ */
+static void qla_tgt_handle_imm_notify(struct scsi_qla_host *vha,
+	imm_ntfy_from_isp_t *iocb)
+{
+	struct qla_hw_data *ha = vha->hw;
+	uint32_t add_flags = 0;
+	int send_notify_ack = 1;
+	uint16_t status;
+
+	status = le16_to_cpu(iocb->u.isp2x.status);
+	switch (status) {
+	case IMM_NTFY_LIP_RESET:
+	{
+		if (IS_FWI2_CAPABLE(ha)) {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe137, "qla_target(%d): LIP reset"
+				" (loop %#x), subcode %x\n", vha->vp_idx,
+				le16_to_cpu(iocb->u.isp24.nport_handle),
+				iocb->u.isp24.status_subcode);
+		} else {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe138, "qla_target(%d): LIP reset"
+				" (I %#x)\n", vha->vp_idx,
+				GET_TARGET_ID(ha, (atio_from_isp_t *)iocb));
+			/* set the Clear LIP reset event flag */
+			add_flags |= NOTIFY_ACK_CLEAR_LIP_RESET;
+		}
+		if (qla_tgt_reset(vha, iocb, QLA_TGT_ABORT_ALL) == 0)
+			send_notify_ack = 0;
+		break;
+	}
+
+	case IMM_NTFY_LIP_LINK_REINIT:
+	{
+		struct qla_tgt *tgt = ha->qla_tgt;
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe139, "qla_target(%d): LINK REINIT (loop %#x, "
+			"subcode %x)\n", vha->vp_idx,
+			le16_to_cpu(iocb->u.isp24.nport_handle),
+			iocb->u.isp24.status_subcode);
+		if (tgt->link_reinit_iocb_pending) {
+			qla_tgt_send_notify_ack(vha, &tgt->link_reinit_iocb,
+				0, 0, 0, 0, 0, 0);
+		}
+		memcpy(&tgt->link_reinit_iocb, iocb, sizeof(*iocb));
+		tgt->link_reinit_iocb_pending = 1;
+		/*
+		 * QLogic requires waiting after a LINK REINIT for possible
+		 * PDISC or ADISC ELS commands.
+		 */
+		send_notify_ack = 0;
+		break;
+	}
+
+	case IMM_NTFY_PORT_LOGOUT:
+		if (IS_FWI2_CAPABLE(ha)) {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe13a, "qla_target(%d): Port logout (loop "
+				"%#x, subcode %x)\n", vha->vp_idx,
+				le16_to_cpu(iocb->u.isp24.nport_handle),
+				iocb->u.isp24.status_subcode);
+		} else {
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe13b, "qla_target(%d): Port logout (S "
+				"%08x -> L %#x)\n", vha->vp_idx,
+				le16_to_cpu(iocb->u.isp2x.seq_id),
+				le16_to_cpu(iocb->u.isp2x.lun));
+		}
+		if (qla_tgt_reset(vha, iocb, QLA_TGT_NEXUS_LOSS_SESS) == 0)
+			send_notify_ack = 0;
+		/* The sessions will be cleared in the callback, if needed */
+		break;
+
+	case IMM_NTFY_GLBL_TPRLO:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe13c, "qla_target(%d): Global TPRLO (%x)\n",
+			vha->vp_idx, status);
+		if (qla_tgt_reset(vha, iocb, QLA_TGT_NEXUS_LOSS) == 0)
+			send_notify_ack = 0;
+		/* The sessions will be cleared in the callback, if needed */
+		break;
+
+	case IMM_NTFY_PORT_CONFIG:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe13d, "qla_target(%d): Port config changed (%x)\n",
+			vha->vp_idx, status);
+		if (qla_tgt_reset(vha, iocb, QLA_TGT_ABORT_ALL) == 0)
+			send_notify_ack = 0;
+		/* The sessions will be cleared in the callback, if needed */
+		break;
+
+	case IMM_NTFY_GLBL_LOGO:
+		printk(KERN_WARNING "qla_target(%d): Link failure detected\n",
+			vha->vp_idx);
+		/* I_T nexus loss */
+		if (qla_tgt_reset(vha, iocb, QLA_TGT_NEXUS_LOSS) == 0)
+			send_notify_ack = 0;
+		break;
+
+	case IMM_NTFY_IOCB_OVERFLOW:
+		printk(KERN_ERR "qla_target(%d): Cannot provide requested "
+			"capability (IOCB overflowed the immediate notify "
+			"resource count)\n", vha->vp_idx);
+		break;
+
+	case IMM_NTFY_ABORT_TASK:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe13e,
+			"qla_target(%d): Abort Task (S %08x I %#x -> "
+			"L %#x)\n", vha->vp_idx, le16_to_cpu(iocb->u.isp2x.seq_id),
+			GET_TARGET_ID(ha, (atio_from_isp_t *)iocb),
+			le16_to_cpu(iocb->u.isp2x.lun));
+		if (qla_tgt_abort_task(vha, iocb) == 0)
+			send_notify_ack = 0;
+		break;
+
+	case IMM_NTFY_RESOURCE:
+		printk(KERN_ERR "qla_target(%d): Out of resources, host %ld\n",
+			    vha->vp_idx, vha->host_no);
+		break;
+
+	case IMM_NTFY_MSG_RX:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe13f,
+			"qla_target(%d): Immediate notify task %x\n",
+			vha->vp_idx, iocb->u.isp2x.task_flags);
+		if (qla_tgt_handle_task_mgmt(vha, iocb) == 0)
+			send_notify_ack = 0;
+		break;
+
+	case IMM_NTFY_ELS:
+		if (qla_tgt_24xx_handle_els(vha, iocb) == 0)
+			send_notify_ack = 0;
+		break;
+
+	case IMM_NTFY_SRR:
+		qla_tgt_prepare_srr_imm(vha, iocb);
+		send_notify_ack = 0;
+		break;
+
+	default:
+		printk(KERN_ERR "qla_target(%d): Received unknown immediate "
+			"notify status %x\n", vha->vp_idx, status);
+		break;
+	}
+
+	if (send_notify_ack)
+		qla_tgt_send_notify_ack(vha, iocb, add_flags, 0, 0, 0, 0, 0);
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire.
+ * This function sends busy to ISP 2xxx or 24xx.
+ */
+static void qla_tgt_send_busy(struct scsi_qla_host *vha,
+	atio_from_isp_t *atio, uint16_t status)
+{
+	struct qla_hw_data *ha = vha->hw;
+	request_t *pkt;
+	struct qla_tgt_sess *sess = NULL;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		sess = ha->tgt_ops->find_sess_by_s_id(vha, atio->u.isp24.fcp_hdr.s_id);
+		if (!sess) {
+			qla_tgt_send_term_exchange(vha, NULL, atio, 1);
+			return;
+		}
+	}
+
+	/* Sending a marker isn't necessary, since we're called from the ISR */
+
+	pkt = (request_t *)qla2x00_req_pkt(vha);
+	if (!pkt) {
+		printk(KERN_ERR "qla_target(%d): %s failed: unable to allocate "
+			"request packet", vha->vp_idx, __func__);
+		return;
+	}
+
+	pkt->entry_count = 1;
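+	/*
+	 * No command is attached to this CTIO, so mark the handle for the
+	 * completion path to recognize and skip.
+	 */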
+	pkt->handle = QLA_TGT_SKIP_HANDLE | CTIO_COMPLETION_HANDLE_MARK;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		ctio7_to_24xx_t *ctio24 = (ctio7_to_24xx_t *)pkt;
+
+		ctio24->entry_type = CTIO_TYPE7;
+		ctio24->nport_handle = sess->loop_id;
+		ctio24->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT);
+		ctio24->vp_index = vha->vp_idx;
+		ctio24->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2];
+		ctio24->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1];
+		ctio24->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0];
+		ctio24->exchange_addr = atio->u.isp24.exchange_addr;
+		ctio24->u.status1.flags = (atio->u.isp24.attr << 9) | __constant_cpu_to_le16(
+			CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_SEND_STATUS |
+			CTIO7_FLAGS_DONT_RET_CTIO);
+		/*
+		 * CTIO from fw w/o se_cmd doesn't provide enough info to retry
+		 * it, if the explicit confirmation is used.
+		 */
+		ctio24->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id);
+		ctio24->u.status1.scsi_status = cpu_to_le16(status);
+		ctio24->u.status1.residual = get_unaligned((uint32_t *)
+			&atio->u.isp24.fcp_cmnd.add_cdb[atio->u.isp24.fcp_cmnd.add_cdb_len]);
+		if (ctio24->u.status1.residual != 0)
+			ctio24->u.status1.scsi_status |= SS_RESIDUAL_UNDER;
+	} else {
+		ctio_from_2xxx_t *ctio2x = (ctio_from_2xxx_t *)pkt;
+
+		ctio2x->entry_type = CTIO_RET_TYPE;
+		ctio2x->entry_count = 1;
+		ctio2x->handle = QLA_TGT_SKIP_HANDLE | CTIO_COMPLETION_HANDLE_MARK;
+		ctio2x->scsi_status = __constant_cpu_to_le16(SAM_STAT_BUSY);
+		ctio2x->residual = atio->u.isp2x.data_length;
+		if (ctio2x->residual != 0)
+			ctio2x->scsi_status |= SS_RESIDUAL_UNDER;
+
+		/* Set IDs */
+		SET_TARGET_ID(ha, ctio2x->target, GET_TARGET_ID(ha, atio));
+		ctio2x->rx_id = atio->u.isp2x.rx_id;
+
+		ctio2x->flags = __constant_cpu_to_le16(OF_SSTS | OF_FAST_POST |
+				OF_NO_DATA | OF_SS_MODE_1);
+		ctio2x->flags |= __constant_cpu_to_le16(OF_INC_RC);
+		/*
+		 * CTIO from fw w/o se_cmd doesn't provide enough info to retry
+		 * it, if the explicit confirmation is used.
+		 */
+	}
+
+	qla2x00_isp_cmd(vha, vha->req);
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+/* called via callback from qla2xxx */
+static void qla_tgt_24xx_atio_pkt(struct scsi_qla_host *vha, atio_from_isp_t *atio)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	int rc;
+
+	if (unlikely(tgt == NULL)) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe140, "ATIO pkt, but no tgt (ha %p)", ha);
+		return;
+	}
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe209, "qla_target(%d): ATIO pkt %p:"
+		" type %02x count %02x", vha->vp_idx, atio, atio->u.raw.entry_type,
+		atio->u.raw.entry_count);
+	/*
+	 * In tgt_stop mode we should also allow all requests to pass.
+	 * Otherwise, some commands can get stuck.
+	 */
+
+	tgt->irq_cmd_count++;
+
+	switch (atio->u.raw.entry_type) {
+	case ATIO_TYPE7:
+		ql_dbg(ql_dbg_tgt, vha, 0xe026, "ATIO_TYPE7 instance %d, lun"
+			" %Lx, read/write %d/%d, add_cdb_len %d, data_length "
+			"%04x, s_id %x:%x:%x\n", vha->vp_idx,
+			atio->u.isp24.fcp_cmnd.lun,
+			atio->u.isp24.fcp_cmnd.rddata, atio->u.isp24.fcp_cmnd.wrdata,
+			atio->u.isp24.fcp_cmnd.add_cdb_len,
+			be32_to_cpu(get_unaligned((uint32_t *)
+				&atio->u.isp24.fcp_cmnd.add_cdb[atio->u.isp24.fcp_cmnd.add_cdb_len])),
+			atio->u.isp24.fcp_hdr.s_id[0], atio->u.isp24.fcp_hdr.s_id[1],
+			atio->u.isp24.fcp_hdr.s_id[2]);
+
+		if (unlikely(atio->u.isp24.exchange_addr ==
+				ATIO_EXCHANGE_ADDRESS_UNKNOWN)) {
+			printk(KERN_INFO "qla_target(%d): ATIO_TYPE7 "
+				"received with UNKNOWN exchange address, "
+				"sending QUEUE_FULL\n", vha->vp_idx);
+			qla_tgt_send_busy(vha, atio, SAM_STAT_TASK_SET_FULL);
+			break;
+		}
+		if (likely(atio->u.isp24.fcp_cmnd.task_mgmt_flags == 0))
+			rc = qla_tgt_handle_cmd_for_atio(vha, atio);
+		else
+			rc = qla_tgt_handle_task_mgmt(vha, atio);
+		if (unlikely(rc != 0)) {
+			if (rc == -ESRCH) {
+#if 1 /* With TERM EXCHANGE some FC cards refuse to boot */
+				qla_tgt_send_busy(vha, atio, SAM_STAT_BUSY);
+#else
+				qla_tgt_send_term_exchange(vha, NULL, atio, 1);
+#endif
+			} else {
+				if (tgt->tgt_stop) {
+					printk(KERN_INFO "qla_target: Unable to send "
+					"command to target for req, ignoring \n");
+				} else {
+					printk(KERN_INFO "qla_target(%d): Unable to send "
+					   "command to target, sending BUSY status\n",
+					   vha->vp_idx);
+					qla_tgt_send_busy(vha, atio, SAM_STAT_BUSY);
+				}
+			}
+		}
+		break;
+
+	case IMMED_NOTIFY_TYPE:
+	{
+		if (unlikely(atio->u.isp2x.entry_status != 0)) {
+			printk(KERN_ERR "qla_target(%d): Received ATIO packet %x "
+				"with error status %x\n", vha->vp_idx,
+				atio->u.raw.entry_type, atio->u.isp2x.entry_status);
+			break;
+		}
+		ql_dbg(ql_dbg_tgt, vha, 0xe027, "%s", "IMMED_NOTIFY ATIO");
+		qla_tgt_handle_imm_notify(vha, (imm_ntfy_from_isp_t *)atio);
+		break;
+	}
+
+	default:
+		printk(KERN_ERR "qla_target(%d): Received unknown ATIO atio "
+		     "type %x\n", vha->vp_idx, atio->u.raw.entry_type);
+		break;
+	}
+
+	tgt->irq_cmd_count--;
+}
+
+/* ha->hardware_lock supposed to be held on entry */
+/* called via callback from qla2xxx */
+static void qla_tgt_response_pkt(struct scsi_qla_host *vha, response_t *pkt)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+
+	if (unlikely(tgt == NULL)) {
+		printk(KERN_ERR "qla_target(%d): Response pkt %x received, but no "
+			"tgt (ha %p)\n", vha->vp_idx, pkt->entry_type, ha);
+		return;
+	}
+
+	ql_dbg(ql_dbg_tgt_pkt, vha, 0xe20a, "qla_target(%d): response pkt %p: T %02x"
+		" C %02x S %02x handle %#x\n", vha->vp_idx, pkt, pkt->entry_type,
+		pkt->entry_count, pkt->entry_status, pkt->handle);
+
+	/*
+	 * In tgt_stop mode we should also allow all requests to pass.
+	 * Otherwise, some commands can get stuck.
+	 */
+
+	tgt->irq_cmd_count++;
+
+	switch (pkt->entry_type) {
+	case CTIO_TYPE7:
+	{
+		ctio7_from_24xx_t *entry = (ctio7_from_24xx_t *)pkt;
+		ql_dbg(ql_dbg_tgt, vha, 0xe028, "CTIO_TYPE7: instance %d\n", vha->vp_idx);
+		qla_tgt_do_ctio_completion(vha, entry->handle,
+			le16_to_cpu(entry->status)|(pkt->entry_status << 16),
+			entry);
+		break;
+	}
+
+	case ACCEPT_TGT_IO_TYPE:
+	{
+		atio_from_isp_t *atio = (atio_from_isp_t *)pkt;
+		int rc;
+		ql_dbg(ql_dbg_tgt, vha, 0xe029, "ACCEPT_TGT_IO instance %d status %04x "
+			  "lun %04x read/write %d data_length %04x "
+			  "target_id %02x rx_id %04x\n ",
+			  vha->vp_idx, le16_to_cpu(atio->u.isp2x.status),
+			  le16_to_cpu(atio->u.isp2x.lun),
+			  atio->u.isp2x.execution_codes,
+			  le32_to_cpu(atio->u.isp2x.data_length),
+			  GET_TARGET_ID(ha, atio), atio->u.isp2x.rx_id);
+		if (atio->u.isp2x.status != __constant_cpu_to_le16(ATIO_CDB_VALID)) {
+			printk(KERN_ERR "qla_target(%d): ATIO with error "
+				    "status %x received\n", vha->vp_idx,
+				    le16_to_cpu(atio->u.isp2x.status));
+			break;
+		}
+		ql_dbg(ql_dbg_tgt_pkt, vha, 0xe20b, "FCP CDB: 0x%02x, sizeof(cdb): %zu",
+			atio->u.isp2x.cdb[0], sizeof(atio->u.isp2x.cdb));
+
+		rc = qla_tgt_handle_cmd_for_atio(vha, atio);
+		if (unlikely(rc != 0)) {
+			if (rc == -ESRCH) {
+#if 1 /* With TERM EXCHANGE some FC cards refuse to boot */
+				qla_tgt_send_busy(vha, atio, 0);
+#else
+				qla_tgt_send_term_exchange(vha, NULL, atio, 1);
+#endif
+			} else {
+				if (tgt->tgt_stop) {
+					printk(KERN_INFO "qla_target: Unable to send "
+						"command to target, sending TERM EXCHANGE"
+						" for rsp\n");
+					qla_tgt_send_term_exchange(vha, NULL,
+						atio, 1);
+				} else {
+					printk(KERN_INFO "qla_target(%d): Unable to send "
+						"command to target, sending BUSY status\n",
+						vha->vp_idx);
+					qla_tgt_send_busy(vha, atio, 0);
+				}
+			}
+		}
+	}
+	break;
+
+	case CONTINUE_TGT_IO_TYPE:
+	{
+		ctio_to_2xxx_t *entry = (ctio_to_2xxx_t *)pkt;
+		ql_dbg(ql_dbg_tgt, vha, 0xe02a, "CONTINUE_TGT_IO: instance %d\n", vha->vp_idx);
+		qla_tgt_do_ctio_completion(vha, entry->handle,
+			le16_to_cpu(entry->status)|(pkt->entry_status << 16),
+			entry);
+		break;
+	}
+
+	case CTIO_A64_TYPE:
+	{
+		ctio_to_2xxx_t *entry = (ctio_to_2xxx_t *)pkt;
+		ql_dbg(ql_dbg_tgt, vha, 0xe02b, "CTIO_A64: instance %d\n", vha->vp_idx);
+		qla_tgt_do_ctio_completion(vha, entry->handle,
+			le16_to_cpu(entry->status)|(pkt->entry_status << 16),
+			entry);
+		break;
+	}
+
+	case IMMED_NOTIFY_TYPE:
+		ql_dbg(ql_dbg_tgt, vha, 0xe02c, "%s", "IMMED_NOTIFY\n");
+		qla_tgt_handle_imm_notify(vha, (imm_ntfy_from_isp_t *)pkt);
+		break;
+
+	case NOTIFY_ACK_TYPE:
+		if (tgt->notify_ack_expected > 0) {
+			nack_to_isp_t *entry = (nack_to_isp_t *)pkt;
+			ql_dbg(ql_dbg_tgt, vha, 0xe02d, "NOTIFY_ACK seq %08x status %x\n",
+				  le16_to_cpu(entry->u.isp2x.seq_id),
+				  le16_to_cpu(entry->u.isp2x.status));
+			tgt->notify_ack_expected--;
+			if (entry->u.isp2x.status !=
+				__constant_cpu_to_le16(NOTIFY_ACK_SUCCESS)) {
+				printk(KERN_ERR "qla_target(%d): NOTIFY_ACK "
+					    "failed %x\n", vha->vp_idx,
+					    le16_to_cpu(entry->u.isp2x.status));
+			}
+		} else {
+			printk(KERN_ERR "qla_target(%d): Unexpected NOTIFY_ACK "
+				    "received\n", vha->vp_idx);
+		}
+		break;
+
+	case ABTS_RECV_24XX:
+		ql_dbg(ql_dbg_tgt, vha, 0xe02e, "ABTS_RECV_24XX: instance %d\n", vha->vp_idx);
+		qla_tgt_24xx_handle_abts(vha, (abts_recv_from_24xx_t *)pkt);
+		break;
+
+	case ABTS_RESP_24XX:
+		if (tgt->abts_resp_expected > 0) {
+			abts_resp_from_24xx_fw_t *entry =
+				(abts_resp_from_24xx_fw_t *)pkt;
+			ql_dbg(ql_dbg_tgt, vha, 0xe02f, "ABTS_RESP_24XX: compl_status %x\n",
+				entry->compl_status);
+			tgt->abts_resp_expected--;
+			if (le16_to_cpu(entry->compl_status) != ABTS_RESP_COMPL_SUCCESS) {
+				if ((entry->error_subcode1 == 0x1E) &&
+				    (entry->error_subcode2 == 0)) {
+					/*
+					 * We've got a race here: aborted exchange not
+					 * terminated, i.e. response for the aborted
+					 * command was sent between the abort request
+					 * was received and processed. Unfortunately,
+					 * the firmware has a silly requirement that
+					 * all aborted exchanges must be explicitly
+					 * terminated, otherwise it refuses to send
+					 * responses for the abort requests. So, we
+					 * have to (re)terminate the exchange and
+					 * retry the abort response.
+					 */
+					qla_tgt_24xx_retry_term_exchange(vha, entry);
+				} else
+					printk(KERN_ERR "qla_target(%d): ABTS_RESP_24XX "
+					    "failed %x (subcode %x:%x)", vha->vp_idx,
+					    entry->compl_status, entry->error_subcode1,
+					    entry->error_subcode2);
+			}
+		} else {
+			printk(KERN_ERR "qla_target(%d): Unexpected ABTS_RESP_24XX "
+				    "received\n", vha->vp_idx);
+		}
+		break;
+
+	case MODIFY_LUN_TYPE:
+		if (tgt->modify_lun_expected > 0) {
+			modify_lun_t *entry = (modify_lun_t *)pkt;
+			ql_dbg(ql_dbg_tgt, vha, 0xe030, "MODIFY_LUN %x, imm %c%d, cmd %c%d",
+				  entry->status,
+				  (entry->operators & MODIFY_LUN_IMM_ADD) ? '+'
+				  : (entry->operators & MODIFY_LUN_IMM_SUB) ? '-'
+				  : ' ',
+				  entry->immed_notify_count,
+				  (entry->operators & MODIFY_LUN_CMD_ADD) ? '+'
+				  : (entry->operators & MODIFY_LUN_CMD_SUB) ? '-'
+				  : ' ',
+				  entry->command_count);
+			tgt->modify_lun_expected--;
+			if (entry->status != MODIFY_LUN_SUCCESS) {
+				printk(KERN_ERR "qla_target(%d): MODIFY_LUN "
+					    "failed %x\n", vha->vp_idx,
+					    entry->status);
+			}
+		} else {
+			printk(KERN_ERR "qla_target(%d): Unexpected MODIFY_LUN "
+			    "received\n", (ha != NULL) ? vha->vp_idx : -1);
+		}
+		break;
+
+	case ENABLE_LUN_TYPE:
+	{
+		enable_lun_t *entry = (enable_lun_t *)pkt;
+		ql_dbg(ql_dbg_tgt, vha, 0xe031, "ENABLE_LUN %x imm %u cmd %u\n",
+			  entry->status, entry->immed_notify_count,
+			  entry->command_count);
+		if (entry->status == ENABLE_LUN_ALREADY_ENABLED) {
+			ql_dbg(ql_dbg_tgt, vha, 0xe032, "LUN is already enabled: %#x\n",
+				  entry->status);
+			entry->status = ENABLE_LUN_SUCCESS;
+		} else if (entry->status == ENABLE_LUN_RC_NONZERO) {
+			ql_dbg(ql_dbg_tgt, vha, 0xe033, "ENABLE_LUN succeeded, but with "
+				"error: %#x\n", entry->status);
+			entry->status = ENABLE_LUN_SUCCESS;
+		} else if (entry->status != ENABLE_LUN_SUCCESS) {
+			printk(KERN_ERR "qla_target(%d): ENABLE_LUN "
+				"failed %x\n", vha->vp_idx, entry->status);
+			qla_tgt_clear_mode(vha);
+		} /* else success */
+		break;
+	}
+
+	default:
+		printk(KERN_ERR "qla_target(%d): Received unknown response pkt "
+		     "type %x\n", vha->vp_idx, pkt->entry_type);
+		break;
+	}
+
+	tgt->irq_cmd_count--;
+}
+
+/*
+ * ha->hardware_lock supposed to be held on entry. Might drop it, then reacquire.
+ */
+void qla_tgt_async_event(uint16_t code, struct scsi_qla_host *vha, uint16_t *mailbox)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	int reason_code;
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe034, "scsi(%ld): ha state %d init_done %d"
+		" oper_mode %d topo %d\n", vha->host_no, atomic_read(&vha->loop_state),
+		vha->flags.init_done, ha->operating_mode, ha->current_topology);
+
+	if (!ha->tgt_ops)
+		return;
+
+	if (unlikely(tgt == NULL)) {
+		ql_dbg(ql_dbg_tgt, vha, 0xe035, "ASYNC EVENT %#x, but no tgt"
+				" (ha %p)", code, ha);
+		return;
+	}
+
+	if (((code == MBA_POINT_TO_POINT) || (code == MBA_CHG_IN_CONNECTION)) &&
+	     IS_QLA2100(ha))
+		return;
+	/*
+	 * In tgt_stop mode we should also allow all requests to pass.
+	 * Otherwise, some commands can get stuck.
+	 */
+
+	tgt->irq_cmd_count++;
+
+	switch (code) {
+	case MBA_RESET:			/* Reset */
+	case MBA_SYSTEM_ERR:		/* System Error */
+	case MBA_REQ_TRANSFER_ERR:	/* Request Transfer Error */
+	case MBA_RSP_TRANSFER_ERR:	/* Response Transfer Error */
+	case MBA_WAKEUP_THRES:		/* Request Queue Wake-up. */
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe141, "qla_target(%d): System error async event %#x "
+			"occured", vha->vp_idx, code);
+		break;
+
+	case MBA_LOOP_UP:
+	{
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe142, "qla_target(%d): Async LOOP_UP occurred "
+			"(m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx,
+			le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]),
+			le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4]));
+		if (tgt->link_reinit_iocb_pending) {
+			qla_tgt_send_notify_ack(vha, (void *)&tgt->link_reinit_iocb,
+				0, 0, 0, 0, 0, 0);
+			tgt->link_reinit_iocb_pending = 0;
+		}
+		break;
+	}
+
+	case MBA_LIP_OCCURRED:
+	case MBA_LOOP_DOWN:
+	case MBA_LIP_RESET:
+	case MBA_RSCN_UPDATE:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe143, "qla_target(%d): Async event %#x occurred "
+			"(m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx,
+			code, le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]),
+			le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4]));
+		break;
+
+	case MBA_PORT_UPDATE:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe144, "qla_target(%d): Port update async event %#x "
+			"occured: updating the ports database (m[1]=%x, m[2]=%x, "
+			"m[3]=%x, m[4]=%x)", vha->vp_idx, code,
+			le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]),
+			le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4]));
+		reason_code = le16_to_cpu(mailbox[2]);
+		if (reason_code == 0x4)
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe145, "Async MB 2: Got PLOGI Complete\n");
+		else if (reason_code == 0x7)
+			ql_dbg(ql_dbg_tgt_mgt, vha, 0xe146, "Async MB 2: Port Logged Out\n");
+		break;
+
+	default:
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe147, "qla_target(%d): Async event %#x occurred: "
+			"ignore (m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)",
+			vha->vp_idx, code,
+			le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]),
+			le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4]));
+		break;
+	}
+
+	tgt->irq_cmd_count--;
+}
+
+static fc_port_t *qla_tgt_get_port_database(struct scsi_qla_host *vha,
+	const uint8_t *s_id, uint16_t loop_id)
+{
+	fc_port_t *fcport;
+	int rc;
+
+	fcport = kzalloc(sizeof(*fcport), GFP_KERNEL);
+	if (!fcport) {
+		printk(KERN_ERR "qla_target(%d): Allocation of tmp FC port failed",
+				vha->vp_idx);
+		return NULL;
+	}
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe148, "loop_id %d", loop_id);
+
+	fcport->loop_id = loop_id;
+
+	rc = qla2x00_get_port_database(vha, fcport, 0);
+	if (rc != QLA_SUCCESS) {
+		printk(KERN_ERR "qla_target(%d): Failed to retrieve fcport "
+			"information -- get_port_database() returned %x "
+			"(loop_id=0x%04x)", vha->vp_idx, rc, loop_id);
+		kfree(fcport);
+		return NULL;
+	}
+
+	return fcport;
+}
+
+/* Must be called under tgt_mutex */
+static struct qla_tgt_sess *qla_tgt_make_local_sess(struct scsi_qla_host *vha,
+	uint8_t *s_id, uint16_t loop_id)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess = NULL;
+	fc_port_t *fcport = NULL;
+	int rc, global_resets;
+
+retry:
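+	/*
+	 * Snapshot the global reset counter; if it changes while the port
+	 * database is being fetched below, the discovered data is stale
+	 * and the whole lookup is restarted.
+	 */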
+	global_resets = atomic_read(&ha->qla_tgt->tgt_global_resets_count);
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		rc = qla24xx_get_loop_id(vha, s_id, &loop_id);
+		if (rc != 0) {
+			if ((s_id[0] == 0xFF) &&
+			    (s_id[1] == 0xFC)) {
+				/*
+				 * This is Domain Controller, so it should be
+				 * OK to drop SCSI commands from it.
+				 */
+				ql_dbg(ql_dbg_tgt_mgt, vha, 0xe149, "Unable to find"
+					" initiator with S_ID %x:%x:%x", s_id[0],
+					s_id[1], s_id[2]);
+			} else
+				printk(KERN_ERR "qla_target(%d): Unable to find "
+					"initiator with S_ID %x:%x:%x",
+					vha->vp_idx, s_id[0], s_id[1],
+					s_id[2]);
+			return NULL;
+		}
+	}
+
+	fcport = qla_tgt_get_port_database(vha, s_id, loop_id);
+	if (!fcport)
+		return NULL;
+
+	if (global_resets != atomic_read(&ha->qla_tgt->tgt_global_resets_count)) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe14a, "qla_target(%d): global reset"
+			" during session discovery (counter was %d, new %d),"
+			" retrying", vha->vp_idx, global_resets,
+			atomic_read(&ha->qla_tgt->tgt_global_resets_count));
+		kfree(fcport);
+		goto retry;
+	}
+
+	sess = qla_tgt_create_sess(vha, fcport, true);
+
+	kfree(fcport);
+	return sess;
+}
+
+static void qla_tgt_abort_work(struct qla_tgt *tgt,
+	struct qla_tgt_sess_work_param *prm)
+{
+	struct scsi_qla_host *vha = tgt->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess = NULL;
+	unsigned long flags;
+	uint32_t be_s_id;
+	uint8_t *s_id = NULL; /* to hide compiler warnings */
+	uint8_t local_s_id[3];
+	int rc, loop_id = -1; /* to hide compiler warnings */
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	if (tgt->tgt_stop)
+		goto out_term;
+
+	if (IS_FWI2_CAPABLE(ha)) {
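+		/*
+		 * The ABTS IOCB carries the S_ID in little-endian byte
+		 * order; repack it into the wire order that the session
+		 * lookup (and the local-session fallback below) expects.
+		 */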
+		be_s_id = (prm->abts.fcp_hdr_le.s_id[0] << 16) |
+			(prm->abts.fcp_hdr_le.s_id[1] << 8) |
+			prm->abts.fcp_hdr_le.s_id[2];
+
+		sess = ha->tgt_ops->find_sess_by_s_id(vha,
+				(unsigned char *)&be_s_id);
+		if (!sess) {
+			s_id = local_s_id;
+			s_id[0] = prm->abts.fcp_hdr_le.s_id[2];
+			s_id[1] = prm->abts.fcp_hdr_le.s_id[1];
+			s_id[2] = prm->abts.fcp_hdr_le.s_id[0];
+		}
+	} else {
+		loop_id = GET_TARGET_ID(ha, (atio_from_isp_t *)&prm->tm_iocb);
+		sess = ha->tgt_ops->find_sess_by_loop_id(vha, loop_id);
+	}
+
+	if (sess) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe14c, "sess %p found\n", sess);
+		kref_get(&sess->sess_kref);
+	} else {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+		mutex_lock(&ha->tgt_mutex);
+		sess = qla_tgt_make_local_sess(vha, s_id, loop_id);
+		/* sess has got an extra creation ref */
+		mutex_unlock(&ha->tgt_mutex);
+
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+
+		if (!sess)
+			goto out_term;
+	}
+
+	if (tgt->tgt_stop)
+		goto out_term;
+
+	if (IS_FWI2_CAPABLE(ha))
+		rc = __qla_tgt_24xx_handle_abts(vha, &prm->abts, sess);
+	else
+		rc = __qla_tgt_abort_task(vha, &prm->tm_iocb, sess);
+	if (rc != 0)
+		goto out_term;
+
+	if (sess)
+		__qla_tgt_sess_put(sess);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+	return;
+
+out_term:
+	if (IS_FWI2_CAPABLE(ha)) {
+		qla_tgt_24xx_send_abts_resp(vha, &prm->abts,
+			FCP_TMF_REJECTED, false);
+	} else {
+		qla_tgt_send_notify_ack(vha, (void *)&prm->tm_iocb,
+			0, 0, 0, 0, 0, 0);
+	}
+
+	if (sess)
+		__qla_tgt_sess_put(sess);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+static void qla_tgt_tmr_work(struct qla_tgt *tgt,
+	struct qla_tgt_sess_work_param *prm)
+{
+	struct scsi_qla_host *vha = tgt->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt_sess *sess = NULL;
+	unsigned long flags;
+	uint8_t *s_id = NULL; /* to hide compiler warnings */
+	int rc, loop_id = -1; /* to hide compiler warnings */
+	uint32_t lun, unpacked_lun;
+	int lun_size, fn;
+	void *iocb;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	if (tgt->tgt_stop)
+		goto out_term;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		s_id = prm->tm_iocb2.u.isp24.fcp_hdr.s_id;
+		sess = ha->tgt_ops->find_sess_by_s_id(vha, s_id);
+	} else {
+		loop_id = GET_TARGET_ID(ha, (atio_from_isp_t *)&prm->tm_iocb);
+		sess = ha->tgt_ops->find_sess_by_loop_id(vha, loop_id);
+	}
+
+	if (sess) {
+		ql_dbg(ql_dbg_tgt_mgt, vha, 0xe14c, "sess %p found\n", sess);
+		kref_get(&sess->sess_kref);
+	} else {
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+		mutex_lock(&ha->tgt_mutex);
+		sess = qla_tgt_make_local_sess(vha, s_id, loop_id);
+		/* sess has got an extra creation ref */
+		mutex_unlock(&ha->tgt_mutex);
+
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+		if (!sess)
+			goto out_term;
+	}
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		atio_from_isp_t *a = &prm->tm_iocb2;
+		iocb = a;
+		lun = a->u.isp24.fcp_cmnd.lun;
+		lun_size = sizeof(lun);
+		fn = a->u.isp24.fcp_cmnd.task_mgmt_flags;
+	} else {
+		imm_ntfy_from_isp_t *n = &prm->tm_iocb;
+		iocb = n;
+		/* make it be in network byte order */
+		lun = swab16(le16_to_cpu(n->u.isp2x.lun));
+		lun_size = sizeof(lun);
+		fn = n->u.isp2x.task_flags >> IMM_NTFY_TASK_MGMT_SHIFT;
+	}
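+	/* Flatten the wire-format LUN into the value the target core expects */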
+	unpacked_lun = scsilun_to_int((struct scsi_lun *)&lun);
+
+	rc = qla_tgt_issue_task_mgmt(sess, unpacked_lun, fn, iocb, 0);
+	if (rc != 0)
+		goto out_term;
+
+	if (sess)
+		__qla_tgt_sess_put(sess);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+	return;
+
+out_term:
+	if (IS_FWI2_CAPABLE(ha))
+		qla_tgt_send_term_exchange(vha, NULL, &prm->tm_iocb2, 1);
+	else
+		qla_tgt_send_notify_ack(vha, &prm->tm_iocb,
+			0, 0, 0, 0, 0, 0);
+	if (sess)
+		__qla_tgt_sess_put(sess);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+static void qla_tgt_sess_work_fn(struct work_struct *work)
+{
+	struct qla_tgt *tgt = container_of(work, struct qla_tgt, sess_work);
+	struct scsi_qla_host *vha = tgt->vha;
+	struct qla_hw_data *ha = vha->hw;
+	unsigned long flags;
+
+	ql_dbg(ql_dbg_tgt_mgt, vha, 0xe14e, "Sess work (tgt %p)", tgt);
+
+	spin_lock_irqsave(&tgt->sess_work_lock, flags);
+	while (!list_empty(&tgt->sess_works_list)) {
+		struct qla_tgt_sess_work_param *prm = list_entry(
+			tgt->sess_works_list.next, typeof(*prm),
+			sess_works_list_entry);
+
+		/*
+		 * This work can be scheduled on several CPUs at a time, so we
+		 * must delete the entry to eliminate double processing
+		 */
+		list_del(&prm->sess_works_list_entry);
+
+		spin_unlock_irqrestore(&tgt->sess_work_lock, flags);
+
+		switch (prm->type) {
+		case QLA_TGT_SESS_WORK_ABORT:
+			qla_tgt_abort_work(tgt, prm);
+			break;
+		case QLA_TGT_SESS_WORK_TM:
+			qla_tgt_tmr_work(tgt, prm);
+			break;
+		default:
+			BUG();
+			break;
+		}
+
+		spin_lock_irqsave(&tgt->sess_work_lock, flags);
+
+		kfree(prm);
+	}
+	spin_unlock_irqrestore(&tgt->sess_work_lock, flags);
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	spin_lock(&tgt->sess_work_lock);
+	if (list_empty(&tgt->sess_works_list)) {
+		tgt->sess_works_pending = 0;
+		tgt->tm_to_unknown = 0;
+	}
+	spin_unlock(&tgt->sess_work_lock);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+/* Must be called under tgt_host_action_mutex */
+int qla_tgt_add_target(struct qla_hw_data *ha, struct scsi_qla_host *base_vha)
+{
+	struct qla_tgt *tgt;
+
+	ql_dbg(ql_dbg_tgt, base_vha, 0xe036, "Registering target for host %ld(%p)",
+			base_vha->host_no, ha);
+
+	BUG_ON((ha->qla_tgt != NULL) || (ha->tgt_ops != NULL));
+
+	tgt = kzalloc(sizeof(struct qla_tgt), GFP_KERNEL);
+	if (!tgt) {
+		printk(KERN_ERR "Unable to allocate struct qla_tgt\n");
+		return -ENOMEM;
+	}
+
+	tgt->ha = ha;
+	tgt->vha = base_vha;
+	init_waitqueue_head(&tgt->waitQ);
+	INIT_LIST_HEAD(&tgt->sess_list);
+	INIT_LIST_HEAD(&tgt->del_sess_list);
+	INIT_DELAYED_WORK(&tgt->sess_del_work,
+		(void (*)(struct work_struct *))qla_tgt_del_sess_work_fn);
+	spin_lock_init(&tgt->sess_work_lock);
+	INIT_WORK(&tgt->sess_work, qla_tgt_sess_work_fn);
+	INIT_LIST_HEAD(&tgt->sess_works_list);
+	spin_lock_init(&tgt->srr_lock);
+	INIT_LIST_HEAD(&tgt->srr_ctio_list);
+	INIT_LIST_HEAD(&tgt->srr_imm_list);
+	INIT_WORK(&tgt->srr_work, qla_tgt_handle_srr_work);
+	atomic_set(&tgt->tgt_global_resets_count, 0);
+
+	ha->qla_tgt = tgt;
+
+	if (IS_FWI2_CAPABLE(ha)) {
+		printk(KERN_INFO "qla_target(%d): using 64 Bit PCI "
+			   "addressing", base_vha->vp_idx);
+		tgt->tgt_enable_64bit_addr = 1;
+		/* 3 is reserved */
+		tgt->sg_tablesize =
+		    QLA_TGT_MAX_SG_24XX(base_vha->req->length - 3);
+		tgt->datasegs_per_cmd = QLA_TGT_DATASEGS_PER_CMD_24XX;
+		tgt->datasegs_per_cont = QLA_TGT_DATASEGS_PER_CONT_24XX;
+	} else {
+		if (ha->flags.enable_64bit_addressing) {
+			printk(KERN_INFO "qla_target(%d): 64 Bit PCI "
+				   "addressing enabled", base_vha->vp_idx);
+			tgt->tgt_enable_64bit_addr = 1;
+			/* 3 is reserved */
+			tgt->sg_tablesize =
+				QLA_TGT_MAX_SG64(base_vha->req->length - 3);
+			tgt->datasegs_per_cmd = QLA_TGT_DATASEGS_PER_CMD64;
+			tgt->datasegs_per_cont = QLA_TGT_DATASEGS_PER_CONT64;
+		} else {
+			printk(KERN_INFO "qla_target(%d): Using 32 Bit "
+				   "PCI addressing", base_vha->vp_idx);
+			tgt->sg_tablesize =
+				QLA_TGT_MAX_SG32(base_vha->req->length - 3);
+			tgt->datasegs_per_cmd = QLA_TGT_DATASEGS_PER_CMD32;
+			tgt->datasegs_per_cont = QLA_TGT_DATASEGS_PER_CONT32;
+		}
+	}
+
+	mutex_lock(&qla_tgt_mutex);
+	list_add_tail(&tgt->tgt_list_entry, &qla_tgt_glist);
+	mutex_unlock(&qla_tgt_mutex);
+
+	return 0;
+}
+
+/* Must be called under tgt_host_action_mutex */
+int qla_tgt_remove_target(struct qla_hw_data *ha, struct scsi_qla_host *vha)
+{
+	if (!ha->qla_tgt) {
+		printk(KERN_ERR "qla_target(%d): Can't remove "
+			"existing target", vha->vp_idx);
+		return 0;
+	}
+
+	mutex_lock(&qla_tgt_mutex);
+	list_del(&ha->qla_tgt->tgt_list_entry);
+	mutex_unlock(&qla_tgt_mutex);
+
+	ql_dbg(ql_dbg_tgt, vha, 0xe037, "Unregistering target for host %ld(%p)",
+			vha->host_no, ha);
+	qla_tgt_release(ha->qla_tgt);
+
+	return 0;
+}
+
+static void qla_tgt_lport_dump(struct scsi_qla_host *vha, u64 wwpn, unsigned char *b)
+{
+	int i;
+
+	pr_debug("qla2xxx HW vha->node_name: ");
+	for (i = 0; i < 8; i++)
+		pr_debug("%02x ", vha->node_name[i]);
+	pr_debug("\n");
+	pr_debug("qla2xxx HW vha->port_name: ");
+	for (i = 0; i < 8; i++)
+		pr_debug("%02x ", vha->port_name[i]);
+	pr_debug("\n");
+
+	pr_debug("qla2xxx passed configfs WWPN: ");
+	put_unaligned_be64(wwpn, b);
+	for (i = 0; i < 8; i++)
+		pr_debug("%02x ", b[i]);
+	pr_debug("\n");
+}
+
+/**
+ * qla_tgt_lport_register - register lport with external module
+ *
+ * @qla_tgt_ops: Pointer for tcm_qla2xxx qla_tgt_ops
+ * @wwpn: Passed FC target WWPN
+ * @callback:  lport initialization callback for tcm_qla2xxx code
+ * @target_lport_ptr: pointer for tcm_qla2xxx specific lport data
+ */
+int qla_tgt_lport_register(struct qla_tgt_func_tmpl *qla_tgt_ops, u64 wwpn,
+                       int (*callback)(struct scsi_qla_host *),
+                       void *target_lport_ptr)
+{
+	struct qla_tgt *tgt;
+	struct scsi_qla_host *vha;
+	struct qla_hw_data *ha;
+	struct Scsi_Host *host;
+	unsigned long flags;
+	int rc;
+	u8 b[8];
+
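+	/*
+	 * Walk every registered qla_tgt and claim the first physical port
+	 * whose WWPN matches the one passed down from configfs.
+	 */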
+	mutex_lock(&qla_tgt_mutex);
+	list_for_each_entry(tgt, &qla_tgt_glist, tgt_list_entry) {
+		vha = tgt->vha;
+		ha = vha->hw;
+
+		host = vha->host;
+		if (!host)
+			continue;
+
+		if (ha->tgt_ops != NULL)
+			continue;
+
+		if (!(host->hostt->supported_mode & MODE_TARGET))
+			continue;
+
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+		if (host->active_mode & MODE_TARGET) {
+			pr_debug("MODE_TARGET already active on qla2xxx"
+					"(%d)\n",  host->host_no);
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+			continue;
+		}
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+		if (!scsi_host_get(host)) {
+			pr_err("Unable to scsi_host_get() for"
+					" qla2xxx scsi_host\n");
+			continue;
+		}
+		qla_tgt_lport_dump(vha, wwpn, b);
+
+		if (memcmp(vha->port_name, b, 8)) {
+			scsi_host_put(host);
+			continue;
+		}
+		/*
+		 * Setup passed parameters ahead of invoking callback
+		 */
+		ha->tgt_ops = qla_tgt_ops;
+		ha->target_lport_ptr = target_lport_ptr;
+		rc = (*callback)(vha);
+		if (rc != 0) {
+			ha->tgt_ops = NULL;
+			ha->target_lport_ptr = NULL;
+		}
+		mutex_unlock(&qla_tgt_mutex);
+		return rc;
+	}
+	mutex_unlock(&qla_tgt_mutex);
+
+	return -ENODEV;
+}
+EXPORT_SYMBOL(qla_tgt_lport_register);
+
+/**
+ * qla_tgt_lport_deregister - Deregister lport
+ *
+ * @vha:  Registered scsi_qla_host pointer
+ */
+void qla_tgt_lport_deregister(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct Scsi_Host *sh = vha->host;
+	/*
+	 * Clear the target_lport_ptr qla_target_template pointer in qla_hw_data
+	 */
+	ha->target_lport_ptr = NULL;
+	ha->tgt_ops = NULL;
+	/*
+	 * Release the Scsi_Host reference for the underlying qla2xxx host
+	 */
+	scsi_host_put(sh);
+}
+EXPORT_SYMBOL(qla_tgt_lport_deregister);
+
+/* Must be called under HW lock */
+void qla_tgt_set_mode(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+
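+	/*
+	 * With initiator mode disabled or exclusive, the port runs as a
+	 * pure target; with initiator mode enabled, target mode is stacked
+	 * on top of the existing initiator mode.
+	 */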
+	switch (ql2x_ini_mode) {
+	case QLA2XXX_INI_MODE_DISABLED:
+	case QLA2XXX_INI_MODE_EXCLUSIVE:
+		vha->host->active_mode = MODE_TARGET;
+		break;
+	case QLA2XXX_INI_MODE_ENABLED:
+		vha->host->active_mode |= MODE_TARGET;
+		break;
+	default:
+		break;
+	}
+
+	if (ha->ini_mode_force_reverse)
+		qla_reverse_ini_mode(vha);
+}
+
+/* Must be called under HW lock */
+void qla_tgt_clear_mode(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	switch (ql2x_ini_mode) {
+	case QLA2XXX_INI_MODE_DISABLED:
+		vha->host->active_mode = MODE_UNKNOWN;
+		break;
+	case QLA2XXX_INI_MODE_EXCLUSIVE:
+		vha->host->active_mode = MODE_INITIATOR;
+		break;
+	case QLA2XXX_INI_MODE_ENABLED:
+		vha->host->active_mode &= ~MODE_TARGET;
+		break;
+	default:
+		break;
+	}
+
+	if (ha->ini_mode_force_reverse)
+		qla_reverse_ini_mode(vha);
+}
+
+/*
+ * qla_tgt_enable_vha - NO LOCK HELD
+ *
+ * host_reset, bring up w/ Target Mode Enabled
+ */
+void
+qla_tgt_enable_vha(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	unsigned long flags;
+
+	if (!tgt) {
+		printk(KERN_ERR "Unable to locate qla_tgt pointer from"
+				" struct qla_hw_data\n");
+		dump_stack();
+		return;
+	}
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	tgt->tgt_stopped = 0;
+	qla_tgt_set_mode(vha);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+	qla2xxx_wake_dpc(vha);
+	qla2x00_wait_for_hba_online(vha);
+}
+EXPORT_SYMBOL(qla_tgt_enable_vha);
+
+/*
+ * qla_tgt_disable_vha - NO LOCK HELD
+ *
+ * Disable Target Mode and reset the adapter
+ */
+void
+qla_tgt_disable_vha(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_tgt *tgt = ha->qla_tgt;
+	unsigned long flags;
+
+	if (!tgt) {
+		printk(KERN_ERR "Unable to locate qla_tgt pointer from"
+				" struct qla_hw_data\n");
+		dump_stack();
+		return;
+	}
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	qla_tgt_clear_mode(vha);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+	set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+	qla2xxx_wake_dpc(vha);
+	qla2x00_wait_for_hba_online(vha);
+}
+
+/*
 * Called from qla_init.c:qla24xx_vport_create() context to setup
+ * the target mode specific struct scsi_qla_host and struct qla_hw_data
+ * members.
+ */
+void
+qla_tgt_vport_create(struct scsi_qla_host *vha, struct qla_hw_data *ha)
+{
+	mutex_init(&ha->tgt_mutex);
+	mutex_init(&ha->tgt_host_action_mutex);
+	qla_tgt_clear_mode(vha);
+	qla_tgt_2xxx_send_enable_lun(vha, false);
+
+	/*
+	 * NOTE: Currently the value is kept the same for <24xx and
+	 * 	 >=24xx ISPs. If it is necessary to change it,
+	 *	 the check should be added for specific ISPs,
+	 *	 assigning the value appropriately.
+	 */
+	ha->atio_q_length = ATIO_ENTRY_CNT_24XX;
+}
+
+void
+qla_tgt_rff_id(struct scsi_qla_host *vha, struct ct_sns_req *ct_req)
+{
+	/*
+	 * FC-4 feature bits for the RFF_ID name server registration:
+	 * BIT_0 advertises target functionality, BIT_1 initiator.
+	 */
+	if (qla_tgt_mode_enabled(vha)) {
+		if (qla_ini_mode_enabled(vha))
+			ct_req->req.rff_id.fc4_feature = BIT_0 | BIT_1;
+		else
+			ct_req->req.rff_id.fc4_feature = BIT_0;
+	} else if (qla_ini_mode_enabled(vha)) {
+		ct_req->req.rff_id.fc4_feature = BIT_1;
+	}
+}
+
+/*
+ * Called from qla_init.c:qla2x00_initialize_adapter()
+ */
+void
+qla_tgt_initialize_adapter(struct scsi_qla_host *vha, struct qla_hw_data *ha)
+{
+	/* Enable target response to SCSI bus. */
+	if (qla_tgt_mode_enabled(vha))
+		qla_tgt_2xxx_send_enable_lun(vha, true);
+	else if (qla_ini_mode_enabled(vha))
+		qla_tgt_2xxx_send_enable_lun(vha, false);
+}
+
+/*
+ * qla_tgt_init_atio_q_entries() - Initializes ATIO queue entries.
+ * @ha: HA context
+ *
+ * Beginning of ATIO ring has initialization control block already built
+ * by nvram config routine.
+ */
+void
+qla_tgt_init_atio_q_entries(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	uint16_t cnt;
+	atio_from_isp_t *pkt = (atio_from_isp_t *)ha->atio_ring;
+
+	for (cnt = 0; cnt < ha->atio_q_length; cnt++) {
+		pkt->u.raw.signature = ATIO_PROCESSED;
+		pkt++;
+	}
+}
+
+/*
+ * qla_tgt_24xx_process_atio_queue() - Process ATIO queue entries.
+ * @ha: SCSI driver HA context
+ */
+void
+qla_tgt_24xx_process_atio_queue(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
+	atio_from_isp_t *pkt;
+	int cnt, i;
+
+	if (!vha->flags.online)
+		return;
+
+	while (ha->atio_ring_ptr->signature != ATIO_PROCESSED) {
+		pkt = (atio_from_isp_t *)ha->atio_ring_ptr;
+		cnt = pkt->u.raw.entry_count;
+
+		qla_tgt_24xx_atio_pkt_all_vps(vha, pkt);
+
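+		/*
+		 * One ATIO may span several ring entries (entry_count > 1);
+		 * advance the ring index past all of them, wrapping at the
+		 * end of the queue, and mark each entry as processed.
+		 */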
+		for (i = 0; i < cnt; i++) {
+			ha->atio_ring_index++;
+			if (ha->atio_ring_index == ha->atio_q_length) {
+				ha->atio_ring_index = 0;
+				ha->atio_ring_ptr = ha->atio_ring;
+			} else
+				ha->atio_ring_ptr++;
+
+			pkt->u.raw.signature = ATIO_PROCESSED;
+			pkt = (atio_from_isp_t *)ha->atio_ring_ptr;
+		}
+		wmb();
+	}
+
+	/* Adjust ring index */
+	WRT_REG_DWORD(&reg->atio_q_out, ha->atio_ring_index);
+}
+
+void
+qla_tgt_24xx_config_rings(struct scsi_qla_host *vha, device_reg_t __iomem *reg)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+#warning FIXME: atio_q in/out for ha->mqenable=1..?
+	if (ha->mqenable) {
+#if 0
+                WRT_REG_DWORD(&reg->isp25mq.atio_q_in, 0);
+                WRT_REG_DWORD(&reg->isp25mq.atio_q_out, 0);
+                RD_REG_DWORD(&reg->isp25mq.atio_q_out);
+#endif
+	} else {
+		/* Setup ATIO registers for target mode */
+		WRT_REG_DWORD(&reg->isp24.atio_q_in, 0);
+		WRT_REG_DWORD(&reg->isp24.atio_q_out, 0);
+		RD_REG_DWORD(&reg->isp24.atio_q_out);
+	}
+}
+
+void
+qla_tgt_2xxx_config_nvram_stage1(struct scsi_qla_host *vha, nvram_t *nv)
+{
+	struct qla_hw_data *ha = vha->hw;
+	/*
+	 * Setup driver NVRAM options.
+	 */
+	if (!IS_QLA2100(ha)) {
+		/* Check if target mode enabled */
+		if (qla_tgt_mode_enabled(vha)) {
+			if (!ha->saved_set) {
+				/* We save only once */
+				ha->saved_firmware_options[0] = nv->firmware_options[0];
+				ha->saved_firmware_options[1] = nv->firmware_options[1];
+				ha->saved_add_firmware_options[0] = nv->add_firmware_options[0];
+				ha->saved_add_firmware_options[1] = nv->add_firmware_options[1];
+				ha->saved_set = 1;
+			}
+			/* Enable target mode */
+			nv->firmware_options[0] |= BIT_4;
+			/* Disable ini mode, if requested */
+			if (!qla_ini_mode_enabled(vha))
+				nv->firmware_options[0] |= BIT_5;
+
+			/* Disable Full Login after LIP */
+			nv->firmware_options[1] &= ~BIT_5;
+			/* Enable initial LIP */
+			nv->firmware_options[1] &= ~BIT_1;
+			/* Enable FC tapes support */
+			nv->add_firmware_options[1] |= BIT_4;
+			/* Enable Command Queuing in Target Mode */
+			nv->add_firmware_options[1] |= BIT_6;
+		} else {
+			if (ha->saved_set) {
+				nv->firmware_options[0] = ha->saved_firmware_options[0];
+				nv->firmware_options[1] = ha->saved_firmware_options[1];
+				nv->add_firmware_options[0] = ha->saved_add_firmware_options[0];
+				nv->add_firmware_options[1] = ha->saved_add_firmware_options[1];
+			}
+		}
+	}
+
+	if (!IS_QLA2100(ha)) {
+		if (ha->enable_class_2) {
+			if (vha->flags.init_done) {
+				fc_host_supported_classes(vha->host) =
+					FC_COS_CLASS2 | FC_COS_CLASS3;
+			}
+			nv->add_firmware_options[1] |= BIT_0;
+		} else {
+			if (vha->flags.init_done) {
+				fc_host_supported_classes(vha->host) =
+					FC_COS_CLASS3;
+			}
+			nv->add_firmware_options[1] &= ~BIT_0;
+		}
+	}
+}
+
+void
+qla_tgt_2xxx_config_nvram_stage2(struct scsi_qla_host *vha, init_cb_t *icb)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if (ha->node_name_set) {
+		memcpy(icb->node_name, ha->tgt_node_name, WWN_SIZE);
+		icb->firmware_options[1] |= BIT_6;
+	}
+}
+
+void
+qla_tgt_24xx_config_nvram_stage1(struct scsi_qla_host *vha, struct nvram_24xx *nv)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if (qla_tgt_mode_enabled(vha)) {
+		if (!ha->saved_set) {
+			/* We save only once */
+			ha->saved_exchange_count = nv->exchange_count;
+			ha->saved_firmware_options_1 = nv->firmware_options_1;
+			ha->saved_firmware_options_2 = nv->firmware_options_2;
+			ha->saved_firmware_options_3 = nv->firmware_options_3;
+			ha->saved_set = 1;
+		}
+
+		nv->exchange_count = __constant_cpu_to_le16(0xFFFF);
+
+		/* Enable target mode */
+		nv->firmware_options_1 |= __constant_cpu_to_le32(BIT_4);
+
+		/* Disable ini mode, if requested */
+		if (!qla_ini_mode_enabled(vha))
+			nv->firmware_options_1 |= __constant_cpu_to_le32(BIT_5);
+
+		/* Disable Full Login after LIP */
+		nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_13);
+		/* Enable initial LIP */
+		nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_9);
+		/* Enable FC tapes support */
+		nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_12);
+		/* Disable Full Login after LIP */
+		nv->host_p &= __constant_cpu_to_le32(~BIT_10);
+		/* Enable target PRLI control */
+		nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_14);
+	} else {
+		if (ha->saved_set) {
+			nv->exchange_count = ha->saved_exchange_count;
+			nv->firmware_options_1 = ha->saved_firmware_options_1;
+			nv->firmware_options_2 = ha->saved_firmware_options_2;
+			nv->firmware_options_3 = ha->saved_firmware_options_3;
+		}
+	}
+
+	/* Enable out-of-order frames reassembly */
+	nv->firmware_options_3 |= __constant_cpu_to_le32(BIT_6|BIT_9);
+
+	if (ha->enable_class_2) {
+		if (vha->flags.init_done)
+			fc_host_supported_classes(vha->host) =
+				FC_COS_CLASS2 | FC_COS_CLASS3;
+
+		nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_8);
+	} else {
+		if (vha->flags.init_done)
+			fc_host_supported_classes(vha->host) = FC_COS_CLASS3;
+
+		nv->firmware_options_2 &= ~__constant_cpu_to_le32(BIT_8);
+	}
+}
+
+void
+qla_tgt_24xx_config_nvram_stage2(struct scsi_qla_host *vha, struct init_cb_24xx *icb)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if (ha->node_name_set) {
+		memcpy(icb->node_name, ha->tgt_node_name, WWN_SIZE);
+		icb->firmware_options_1 |= __constant_cpu_to_le32(BIT_14);
+	}
+}
+
+void
+qla_tgt_abort_isp(struct scsi_qla_host *vha)
+{
+	/* Enable target response to SCSI bus. */
+	if (qla_tgt_mode_enabled(vha))
+		qla_tgt_2xxx_send_enable_lun(vha, true);
+}
+
+int
+qla_tgt_2xxx_process_response_error(struct scsi_qla_host *vha, sts_entry_t *pkt)
+{
+	if (!qla_tgt_mode_enabled(vha))
+		return 0;
+
+	switch (pkt->entry_type) {
+	case ACCEPT_TGT_IO_TYPE:
+	case CONTINUE_TGT_IO_TYPE:
+	case CTIO_A64_TYPE:
+	case IMMED_NOTIFY_TYPE:
+	case NOTIFY_ACK_TYPE:
+	case ENABLE_LUN_TYPE:
+	case MODIFY_LUN_TYPE:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+int
+qla_tgt_24xx_process_response_error(struct scsi_qla_host *vha, struct sts_entry_24xx *pkt)
+{
+	switch (pkt->entry_type) {
+	case ABTS_RECV_24XX:
+	case ABTS_RESP_24XX:
+	case CTIO_TYPE7:
+	case NOTIFY_ACK_TYPE:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+void
+qla_tgt_modify_vp_config(struct scsi_qla_host *vha, struct vp_config_entry_24xx *vpmod)
+{
+	if (qla_tgt_mode_enabled(vha))
+		vpmod->options_idx1 &= ~BIT_5;
+	/* Disable ini mode, if requested */
+	if (!qla_ini_mode_enabled(vha))
+		vpmod->options_idx1 &= ~BIT_4;
+}
+
+void
+qla_tgt_probe_one_stage1(struct scsi_qla_host *base_vha, struct qla_hw_data *ha)
+{
+	mutex_init(&ha->tgt_mutex);
+	mutex_init(&ha->tgt_host_action_mutex);
+	qla_tgt_clear_mode(base_vha);
+}
+
+int
+qla_tgt_mem_alloc(struct qla_hw_data *ha)
+{
+	if (IS_FWI2_CAPABLE(ha)) {
+		ha->tgt_vp_map = kcalloc(MAX_MULTI_ID_FABRIC,
+					sizeof(struct qla_tgt_vp_map), GFP_KERNEL);
+		if (!ha->tgt_vp_map)
+			return -ENOMEM;
+
+		ha->atio_ring = dma_alloc_coherent(&ha->pdev->dev,
+				(ha->atio_q_length + 1) * sizeof(atio_from_isp_t),
+				&ha->atio_dma, GFP_KERNEL);
+		if (!ha->atio_ring) {
+			kfree(ha->tgt_vp_map);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+void
+qla_tgt_mem_free(struct qla_hw_data *ha)
+{
+	if (ha->atio_ring) {
+		dma_free_coherent(&ha->pdev->dev, (ha->atio_q_length + 1) *
+				sizeof(atio_from_isp_t), ha->atio_ring, ha->atio_dma);
+	}
+	kfree(ha->tgt_vp_map);
+}
+
+static int __init qla_tgt_parse_ini_mode(void)
+{
+	if (strcasecmp(qlini_mode, QLA2XXX_INI_MODE_STR_EXCLUSIVE) == 0)
+		ql2x_ini_mode = QLA2XXX_INI_MODE_EXCLUSIVE;
+	else if (strcasecmp(qlini_mode, QLA2XXX_INI_MODE_STR_DISABLED) == 0)
+		ql2x_ini_mode = QLA2XXX_INI_MODE_DISABLED;
+	else if (strcasecmp(qlini_mode, QLA2XXX_INI_MODE_STR_ENABLED) == 0)
+		ql2x_ini_mode = QLA2XXX_INI_MODE_ENABLED;
+	else
+		return false;
+
+	return true;
+}
+
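+/*
+ * Illustrative usage of the qlini_mode module parameter parsed above
+ * (a sketch, not additional driver code):
+ *
+ *	modprobe qla2xxx qlini_mode=disabled
+ *
+ * selects QLA2XXX_INI_MODE_DISABLED, leaving the port in target-only
+ * operation.
+ */
+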
+int __init qla_tgt_init(void)
+{
+	int ret;
+
+	if (!qla_tgt_parse_ini_mode()) {
+		printk(KERN_ERR "qla_tgt_parse_ini_mode() failed\n");
+		return -EINVAL;
+	}
+
+	qla_tgt_cmd_cachep = kmem_cache_create("qla_tgt_cmd_cachep",
+			sizeof(struct qla_tgt_cmd), __alignof__(struct qla_tgt_cmd),
+			0, NULL);
+	if (!qla_tgt_cmd_cachep) {
+		printk(KERN_ERR "kmem_cache_create for qla_tgt_cmd_cachep failed\n");
+		return -ENOMEM;
+	}
+
+	qla_tgt_mgmt_cmd_cachep = kmem_cache_create("qla_tgt_mgmt_cmd_cachep",
+		sizeof(struct qla_tgt_mgmt_cmd), __alignof__(struct qla_tgt_mgmt_cmd),
+			0, NULL);
+	if (!qla_tgt_mgmt_cmd_cachep) {
+		pr_warn(KERN_ERR "kmem_cache_create for qla_tgt_mgmt_cmd_cachep failed\n");
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	qla_tgt_mgmt_cmd_mempool = mempool_create(25, mempool_alloc_slab,
+				mempool_free_slab, qla_tgt_mgmt_cmd_cachep);
+	if (!qla_tgt_mgmt_cmd_mempool) {
+		pr_warn(KERN_ERR "mempool_create for qla_tgt_mgmt_cmd_mempool failed\n");
+		ret = -ENOMEM;
+		goto out_mgmt_cmd_cachep;
+	}
+
+	qla_tgt_wq = alloc_workqueue("qla_tgt_wq", 0, 0);
+	if (!qla_tgt_wq) {
+		pr_warn(KERN_ERR "alloc_workqueue for qla_tgt_wq failed\n");
+		ret = -ENOMEM;
+		goto out_cmd_mempool;
+	}
+
+	return 0;
+
+out_cmd_mempool:
+	mempool_destroy(qla_tgt_mgmt_cmd_mempool);
+out_mgmt_cmd_cachep:
+	kmem_cache_destroy(qla_tgt_mgmt_cmd_cachep);
+out:
+	kmem_cache_destroy(qla_tgt_cmd_cachep);
+	return ret;
+}
+
+void __exit qla_tgt_exit(void)
+{
+	destroy_workqueue(qla_tgt_wq);
+	mempool_destroy(qla_tgt_mgmt_cmd_mempool);
+	kmem_cache_destroy(qla_tgt_mgmt_cmd_cachep);
+	kmem_cache_destroy(qla_tgt_cmd_cachep);
+}
diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h
new file mode 100644
index 0000000..4b35e29
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_target.h
@@ -0,0 +1,1147 @@
+/*
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2006 Nathaniel Clark <nate@misrule.us>
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Forward port and refactoring to modern qla2xxx and target/configfs
+ *
+ *  Copyright (C) 2010-2011 Nicholas A. Bellinger <nab@kernel.org>
+ *
+ *  Additional file for the target driver support.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation; either version 2
+ *  of the License, or (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+/*
+ * This is the global def file that is useful for including from the
+ * target portion.
+ */
+
+#ifndef __QLA_TARGET_H
+#define __QLA_TARGET_H
+
+#include "qla_def.h"
+
+/*
+ * Must be changed on any change in any initiator visible interfaces or
+ * data in the target add-on
+ */
+#define QLA2XXX_TARGET_MAGIC	269
+
+/*
+ * Must be changed on any change in any target visible interfaces or
+ * data in the initiator
+ */
+#define QLA2XXX_INITIATOR_MAGIC   57222
+
+#define QLA2XXX_INI_MODE_STR_EXCLUSIVE	"exclusive"
+#define QLA2XXX_INI_MODE_STR_DISABLED	"disabled"
+#define QLA2XXX_INI_MODE_STR_ENABLED	"enabled"
+
+#define QLA2XXX_INI_MODE_EXCLUSIVE	0
+#define QLA2XXX_INI_MODE_DISABLED	1
+#define QLA2XXX_INI_MODE_ENABLED	2
+
+#define QLA2XXX_COMMAND_COUNT_INIT	250
+#define QLA2XXX_IMMED_NOTIFY_COUNT_INIT 250
+
+/*
+ * Used to mark which completion handles (for RIO statuses) are for CTIOs
+ * vs. regular (non-target) info. This is checked in
+ * qla2x00_process_response_queue() to see if a handle coming back in a
+ * multi-complete should go to the target driver or be handled by qla2xxx
+ * itself.
+ */
+#define CTIO_COMPLETION_HANDLE_MARK	BIT_29
+#if (CTIO_COMPLETION_HANDLE_MARK <= MAX_OUTSTANDING_COMMANDS)
+#error "CTIO_COMPLETION_HANDLE_MARK must be larger than MAX_OUTSTANDING_COMMANDS"
+#endif
+#define HANDLE_IS_CTIO_COMP(h) ((h) & CTIO_COMPLETION_HANDLE_MARK)
+
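+/*
+ * Sketch only (simplified from the CTIO send/completion paths): a handle
+ * is tagged with the mark above before the CTIO is handed to the firmware,
+ * and completions are then demultiplexed on it:
+ *
+ *	pkt->handle = h | CTIO_COMPLETION_HANDLE_MARK;
+ *	...
+ *	if (HANDLE_IS_CTIO_COMP(handle))
+ *		qla_tgt_ctio_completion(vha, handle);
+ */
+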
+/* Used to mark CTIO as intermediate */
+#define CTIO_INTERMEDIATE_HANDLE_MARK	BIT_30
+
+#ifndef OF_SS_MODE_0
+/*
+ * ISP target entries - Flags bit definitions.
+ */
+#define OF_SS_MODE_0        0
+#define OF_SS_MODE_1        1
+#define OF_SS_MODE_2        2
+#define OF_SS_MODE_3        3
+
+#define OF_EXPL_CONF        BIT_5       /* Explicit Confirmation Requested */
+#define OF_DATA_IN          BIT_6       /* Data in to initiator */
+					/*  (data from target to initiator) */
+#define OF_DATA_OUT         BIT_7       /* Data out from initiator */
+					/*  (data from initiator to target) */
+#define OF_NO_DATA          (BIT_7 | BIT_6)
+#define OF_INC_RC           BIT_8       /* Increment command resource count */
+#define OF_FAST_POST        BIT_9       /* Enable mailbox fast posting. */
+#define OF_CONF_REQ         BIT_13      /* Confirmation Requested */
+#define OF_TERM_EXCH        BIT_14      /* Terminate exchange */
+#define OF_SSTS             BIT_15      /* Send SCSI status */
+#endif
+
+#ifndef QLA_TGT_DATASEGS_PER_CMD32
+#define QLA_TGT_DATASEGS_PER_CMD32	3
+#define QLA_TGT_DATASEGS_PER_CONT32	7
+#define QLA_TGT_MAX_SG32(ql) \
+   (((ql) > 0) ? (QLA_TGT_DATASEGS_PER_CMD32 + QLA_TGT_DATASEGS_PER_CONT32*((ql) - 1)) : 0)
+
+#define QLA_TGT_DATASEGS_PER_CMD64	2
+#define QLA_TGT_DATASEGS_PER_CONT64	5
+#define QLA_TGT_MAX_SG64(ql) \
+   (((ql) > 0) ? (QLA_TGT_DATASEGS_PER_CMD64 + QLA_TGT_DATASEGS_PER_CONT64*((ql) - 1)) : 0)
+#endif
+
+#ifndef QLA_TGT_DATASEGS_PER_CMD_24XX
+#define QLA_TGT_DATASEGS_PER_CMD_24XX	1
+#define QLA_TGT_DATASEGS_PER_CONT_24XX	5
+#define QLA_TGT_MAX_SG_24XX(ql) \
+   (min(1270, ((ql) > 0) ? (QLA_TGT_DATASEGS_PER_CMD_24XX + QLA_TGT_DATASEGS_PER_CONT_24XX*((ql) - 1)) : 0))
+#endif
+
+/********************************************************************\
+ * ISP Queue types left out of new QLogic driver (from old version)
+\********************************************************************/
+
+#ifndef ENABLE_LUN_TYPE
+#define ENABLE_LUN_TYPE 0x0B		/* Enable LUN entry. */
+/*
+ * ISP queue - enable LUN entry structure definition.
+ */
+typedef struct {
+	uint8_t	 entry_type;		/* Entry type. */
+	uint8_t	 entry_count;		/* Entry count. */
+	uint8_t	 sys_define;		/* System defined. */
+	uint8_t	 entry_status;		/* Entry Status. */
+	uint32_t sys_define_2;		/* System defined. */
+	uint8_t	 reserved_8;
+	uint8_t	 reserved_1;
+	uint16_t reserved_2;
+	uint32_t reserved_3;
+	uint8_t	 status;
+	uint8_t	 reserved_4;
+	uint8_t	 command_count;		/* Number of ATIOs allocated. */
+	uint8_t	 immed_notify_count;	/* Number of Immediate Notify entries allocated. */
+	uint16_t reserved_5;
+	uint16_t timeout;		/* 0 = 30 seconds, 0xFFFF = disable */
+	uint16_t reserved_6[20];
+} __attribute__((packed)) enable_lun_t;
+#define ENABLE_LUN_SUCCESS          0x01
+#define ENABLE_LUN_RC_NONZERO       0x04
+#define ENABLE_LUN_INVALID_REQUEST  0x06
+#define ENABLE_LUN_ALREADY_ENABLED  0x3E
+#endif
+
+#ifndef MODIFY_LUN_TYPE
+#define MODIFY_LUN_TYPE 0x0C	  /* Modify LUN entry. */
+/*
+ * ISP queue - modify LUN entry structure definition.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t sys_define_2;		    /* System defined. */
+	uint8_t	 reserved_8;
+	uint8_t	 reserved_1;
+	uint8_t	 operators;
+	uint8_t	 reserved_2;
+	uint32_t reserved_3;
+	uint8_t	 status;
+	uint8_t	 reserved_4;
+	uint8_t	 command_count;		    /* Number of ATIOs allocated. */
+	uint8_t	 immed_notify_count;	    /* Number of Immediate Notify */
+	/* entries allocated. */
+	uint16_t reserved_5;
+	uint16_t timeout;		    /* 0 = 30 seconds, 0xFFFF = disable */
+	uint16_t reserved_7[20];
+} __attribute__((packed)) modify_lun_t;
+#define MODIFY_LUN_SUCCESS	0x01
+#define MODIFY_LUN_CMD_ADD BIT_0
+#define MODIFY_LUN_CMD_SUB BIT_1
+#define MODIFY_LUN_IMM_ADD BIT_2
+#define MODIFY_LUN_IMM_SUB BIT_3
+#endif
+
+#define GET_TARGET_ID(ha, iocb) ((HAS_EXTENDED_IDS(ha))				\
+				 ? le16_to_cpu((iocb)->u.isp2x.target.extended)	\
+				 : (uint16_t)(iocb)->u.isp2x.target.id.standard)
+
+#ifndef IMMED_NOTIFY_TYPE
+#define IMMED_NOTIFY_TYPE 0x0D		/* Immediate notify entry. */
+/*
+ * ISP queue -	immediate notify entry structure definition.
+ *		This is sent by the ISP to the target driver.
+ *		This IOCB carries reports of initiator events that
+ *		need to be handled by the target driver immediately.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	union {
+		struct {
+			uint32_t sys_define_2; /* System defined. */
+			target_id_t target;
+			uint16_t lun;
+			uint8_t  target_id;
+			uint8_t  reserved_1;
+			uint16_t status_modifier;
+			uint16_t status;
+			uint16_t task_flags;
+			uint16_t seq_id;
+			uint16_t srr_rx_id;
+			uint32_t srr_rel_offs;
+			uint16_t srr_ui;
+#define SRR_IU_DATA_IN	0x1
+#define SRR_IU_DATA_OUT	0x5
+#define SRR_IU_STATUS	0x7
+			uint16_t srr_ox_id;
+			uint8_t reserved_2[28];
+		} isp2x;
+		struct {
+			uint32_t reserved;
+			uint16_t nport_handle;
+			uint16_t reserved_2;
+			uint16_t flags;
+#define NOTIFY24XX_FLAGS_GLOBAL_TPRLO   BIT_1
+#define NOTIFY24XX_FLAGS_PUREX_IOCB     BIT_0
+			uint16_t srr_rx_id;
+			uint16_t status;
+			uint8_t  status_subcode;
+			uint8_t  reserved_3;
+			uint32_t exchange_address;
+			uint32_t srr_rel_offs;
+			uint16_t srr_ui;
+			uint16_t srr_ox_id;
+			uint8_t  reserved_4[19];
+			uint8_t  vp_index;
+			uint32_t reserved_5;
+			uint8_t  port_id[3];
+			uint8_t  reserved_6;
+		} isp24;
+	} u;
+	uint16_t reserved_7;
+	uint16_t ox_id;
+} __attribute__((packed)) imm_ntfy_from_isp_t;
+#endif
+
+#ifndef NOTIFY_ACK_TYPE
+#define NOTIFY_ACK_TYPE 0x0E	  /* Notify acknowledge entry. */
+/*
+ * ISP queue -	notify acknowledge entry structure definition.
+ *		This is sent to the ISP from the target driver.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	union {
+		struct {
+			uint32_t sys_define_2; /* System defined. */
+			target_id_t target;
+			uint8_t	 target_id;
+			uint8_t	 reserved_1;
+			uint16_t flags;
+			uint16_t resp_code;
+			uint16_t status;
+			uint16_t task_flags;
+			uint16_t seq_id;
+			uint16_t srr_rx_id;
+			uint32_t srr_rel_offs;
+			uint16_t srr_ui;
+			uint16_t srr_flags;
+			uint16_t srr_reject_code;
+			uint8_t  srr_reject_vendor_uniq;
+			uint8_t  srr_reject_code_expl;
+			uint8_t  reserved_2[24];
+		} isp2x;
+		struct {
+			uint32_t handle;
+			uint16_t nport_handle;
+			uint16_t reserved_1;
+			uint16_t flags;
+			uint16_t srr_rx_id;
+			uint16_t status;
+			uint8_t  status_subcode;
+			uint8_t  reserved_3;
+			uint32_t exchange_address;
+			uint32_t srr_rel_offs;
+			uint16_t srr_ui;
+			uint16_t srr_flags;
+			uint8_t  reserved_4[19];
+			uint8_t  vp_index;
+			uint8_t  srr_reject_vendor_uniq;
+			uint8_t  srr_reject_code_expl;
+			uint8_t  srr_reject_code;
+			uint8_t  reserved_5[5];
+		} isp24;
+	} u;
+	uint8_t  reserved[2];
+	uint16_t ox_id;
+} __attribute__((packed)) nack_to_isp_t;
+#define NOTIFY_ACK_SRR_FLAGS_ACCEPT	0
+#define NOTIFY_ACK_SRR_FLAGS_REJECT	1
+
+#define NOTIFY_ACK_SRR_REJECT_REASON_UNABLE_TO_PERFORM	0x9
+
+#define NOTIFY_ACK_SRR_FLAGS_REJECT_EXPL_NO_EXPL		0
+#define NOTIFY_ACK_SRR_FLAGS_REJECT_EXPL_UNABLE_TO_SUPPLY_DATA	0x2a
+
+#define NOTIFY_ACK_SUCCESS      0x01
+#endif
+
+#ifndef ACCEPT_TGT_IO_TYPE
+#define ACCEPT_TGT_IO_TYPE 0x16 /* Accept target I/O entry. */
+#endif
+
+#ifndef CONTINUE_TGT_IO_TYPE
+#define CONTINUE_TGT_IO_TYPE 0x17
+/*
+ * ISP queue -	Continue Target I/O (CTIO) entry for status mode 0 structure.
+ *		This structure is sent to the ISP 2xxx from the target driver.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t handle;		    /* System defined handle */
+	target_id_t target;
+	uint16_t rx_id;
+	uint16_t flags;
+	uint16_t status;
+	uint16_t timeout;		    /* 0 = 30 seconds, 0xFFFF = disable */
+	uint16_t dseg_count;		    /* Data segment count. */
+	uint32_t relative_offset;
+	uint32_t residual;
+	uint16_t reserved_1[3];
+	uint16_t scsi_status;
+	uint32_t transfer_length;
+	uint32_t dseg_0_address;	    /* Data segment 0 address. */
+	uint32_t dseg_0_length;		    /* Data segment 0 length. */
+	uint32_t dseg_1_address;	    /* Data segment 1 address. */
+	uint32_t dseg_1_length;		    /* Data segment 1 length. */
+	uint32_t dseg_2_address;	    /* Data segment 2 address. */
+	uint32_t dseg_2_length;		    /* Data segment 2 length. */
+} __attribute__((packed)) ctio_to_2xxx_t;
+#define ATIO_PATH_INVALID       0x07
+#define ATIO_CANT_PROV_CAP      0x16
+#define ATIO_CDB_VALID          0x3D
+
+#define ATIO_EXEC_READ          BIT_1
+#define ATIO_EXEC_WRITE         BIT_0
+#endif
+
+#ifndef CTIO_A64_TYPE
+#define CTIO_A64_TYPE 0x1F
+#define CTIO_SUCCESS			0x01
+#define CTIO_ABORTED			0x02
+#define CTIO_INVALID_RX_ID		0x08
+#define CTIO_TIMEOUT			0x0B
+#define CTIO_LIP_RESET			0x0E
+#define CTIO_TARGET_RESET		0x17
+#define CTIO_PORT_UNAVAILABLE		0x28
+#define CTIO_PORT_LOGGED_OUT		0x29
+#define CTIO_PORT_CONF_CHANGED		0x2A
+#define CTIO_SRR_RECEIVED		0x45
+#endif
+
+#ifndef CTIO_RET_TYPE
+#define CTIO_RET_TYPE	0x17		/* CTIO return entry */
+/*
+ * ISP queue - CTIO returned entry, sent by the ISP 2xxx to the target driver.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t handle;		    /* System defined handle. */
+	target_id_t target;
+	uint16_t rx_id;
+	uint16_t flags;
+	uint16_t status;
+	uint16_t timeout;	    /* 0 = 30 seconds, 0xFFFF = disable */
+	uint16_t dseg_count;	    /* Data segment count. */
+	uint32_t relative_offset;
+	uint32_t residual;
+	uint16_t reserved_1[2];
+	uint16_t sense_length;
+	uint16_t scsi_status;
+	uint16_t response_length;
+	uint8_t	 sense_data[26];
+} __attribute__((packed)) ctio_from_2xxx_t;
+#endif
+
+#define ATIO_TYPE7 0x06 /* Accept target I/O entry for 24xx */
+
+typedef struct {
+	uint8_t  r_ctl;
+	uint8_t  d_id[3];
+	uint8_t  cs_ctl;
+	uint8_t  s_id[3];
+	uint8_t  type;
+	uint8_t  f_ctl[3];
+	uint8_t  seq_id;
+	uint8_t  df_ctl;
+	uint16_t seq_cnt;
+	uint16_t ox_id;
+	uint16_t rx_id;
+	uint32_t parameter;
+} __attribute__((packed)) fcp_hdr_t;
+
+typedef struct {
+	uint8_t  d_id[3];
+	uint8_t  r_ctl;
+	uint8_t  s_id[3];
+	uint8_t  cs_ctl;
+	uint8_t  f_ctl[3];
+	uint8_t  type;
+	uint16_t seq_cnt;
+	uint8_t  df_ctl;
+	uint8_t  seq_id;
+	uint16_t rx_id;
+	uint16_t ox_id;
+	uint32_t parameter;
+} __attribute__((packed)) fcp_hdr_le_t;
+
+#define F_CTL_EXCH_CONTEXT_RESP	BIT_23
+#define F_CTL_SEQ_CONTEXT_RESIP	BIT_22
+#define F_CTL_LAST_SEQ		BIT_20
+#define F_CTL_END_SEQ		BIT_19
+#define F_CTL_SEQ_INITIATIVE	BIT_16
+
+#define R_CTL_BASIC_LINK_SERV	0x80
+#define R_CTL_B_ACC		0x4
+#define R_CTL_B_RJT		0x5
+
+typedef struct {
+	uint64_t lun;
+	uint8_t  cmnd_ref;
+	uint8_t  task_attr:3;
+	uint8_t  reserved:5;
+	uint8_t  task_mgmt_flags;
+#define FCP_CMND_TASK_MGMT_CLEAR_ACA		6
+#define FCP_CMND_TASK_MGMT_TARGET_RESET		5
+#define FCP_CMND_TASK_MGMT_LU_RESET		4
+#define FCP_CMND_TASK_MGMT_CLEAR_TASK_SET	2
+#define FCP_CMND_TASK_MGMT_ABORT_TASK_SET	1
+	uint8_t  wrdata:1;
+	uint8_t  rddata:1;
+	uint8_t  add_cdb_len:6;
+	uint8_t  cdb[16];
+	/*
+	 * add_cdb is optional and can be absent from atio7_fcp_cmnd_t. Its
+	 * size is 4 only to make sizeof(atio7_fcp_cmnd_t) match what the
+	 * BUILD_BUG_ON() in qla_tgt_init() expects.
+	 */
+	uint8_t  add_cdb[4];
+	/* uint32_t data_length; */
+} __attribute__((packed)) atio7_fcp_cmnd_t;
+
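+/*
+ * Sketch, assuming the FCP_CMND layout above: add_cdb_len counts 4-byte
+ * words, so the FCP_DL field that follows any additional CDB bytes would
+ * sit at roughly:
+ *
+ *	offsetof(atio7_fcp_cmnd_t, add_cdb) + (fcp_cmnd->add_cdb_len * 4)
+ */
+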
+/*
+ * ISP queue -	Accept Target I/O (ATIO) type entry IOCB structure.
+ *		This is sent from the ISP to the target driver.
+ */
+typedef struct {
+	union {
+		struct {
+			uint16_t entry_hdr;
+			uint8_t  sys_define;   /* System defined. */
+			uint8_t  entry_status; /* Entry Status.   */
+			uint32_t sys_define_2; /* System defined. */
+			target_id_t target;
+			uint16_t rx_id;
+			uint16_t flags;
+			uint16_t status;
+			uint8_t  command_ref;
+			uint8_t  task_codes;
+			uint8_t  task_flags;
+			uint8_t  execution_codes;
+			uint8_t  cdb[MAX_CMDSZ];
+			uint32_t data_length;
+			uint16_t lun;
+			uint8_t  initiator_port_name[WWN_SIZE]; /* on qla23xx */
+			uint16_t reserved_32[6];
+			uint16_t ox_id;
+		} isp2x;
+		struct {
+			uint16_t entry_hdr;
+			uint8_t  fcp_cmnd_len_low;
+			uint8_t  fcp_cmnd_len_high:4;
+			uint8_t  attr:4;
+			uint32_t exchange_addr;
+#define ATIO_EXCHANGE_ADDRESS_UNKNOWN	0xFFFFFFFF
+			fcp_hdr_t fcp_hdr;
+			atio7_fcp_cmnd_t fcp_cmnd;
+		} isp24;
+		struct {
+			uint8_t  entry_type;	/* Entry type. */
+			uint8_t  entry_count;	/* Entry count. */
+			uint8_t  data[58];
+			uint32_t signature;
+#define ATIO_PROCESSED 0xDEADDEAD		/* Signature */
+		} raw;
+	} u;
+} __attribute__((packed)) atio_from_isp_t;
+
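+/*
+ * Illustrative only: the union above is demultiplexed on the entry type,
+ * e.g. an ATIO_TYPE7 entry selects the isp24 layout on 24xx hardware:
+ *
+ *	if (atio->u.raw.entry_type == ATIO_TYPE7)
+ *		cdb = atio->u.isp24.fcp_cmnd.cdb;
+ */
+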
+#define CTIO_TYPE7 0x12 /* Continue target I/O entry (for 24xx) */
+
+/*
+ * ISP queue -	Continue Target I/O (CTIO) type 7 entry (for 24xx) structure.
+ *		This structure is sent to the ISP 24xx from the target driver.
+ */
+
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t handle;		    /* System defined handle */
+	uint16_t nport_handle;
+#define CTIO7_NHANDLE_UNRECOGNIZED	0xFFFF
+	uint16_t timeout;
+	uint16_t dseg_count;		    /* Data segment count. */
+	uint8_t  vp_index;
+	uint8_t  add_flags;
+	uint8_t  initiator_id[3];
+	uint8_t  reserved;
+	uint32_t exchange_addr;
+	union {
+		struct {
+			uint16_t reserved1;
+			uint16_t flags;
+			uint32_t residual;
+			uint16_t ox_id;
+			uint16_t scsi_status;
+			uint32_t relative_offset;
+			uint32_t reserved2;
+			uint32_t transfer_length;
+			uint32_t reserved3;
+			uint32_t dseg_0_address[2]; /* Data segment 0 address. */
+			uint32_t dseg_0_length; /* Data segment 0 length. */
+		} status0;
+		struct {
+			uint16_t sense_length;
+			uint16_t flags;
+			uint32_t residual;
+			uint16_t ox_id;
+			uint16_t scsi_status;
+			uint16_t response_len;
+			uint16_t reserved;
+			uint8_t sense_data[24];
+		} status1;
+	} u;
+} __attribute__((packed)) ctio7_to_24xx_t;
+
+/*
+ * ISP queue - CTIO type 7 returned entry, sent by the ISP 24xx to the target driver.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t handle;		    /* System defined handle */
+	uint16_t status;
+	uint16_t timeout;
+	uint16_t dseg_count;		    /* Data segment count. */
+	uint8_t  vp_index;
+	uint8_t  reserved1[5];
+	uint32_t exchange_address;
+	uint16_t reserved2;
+	uint16_t flags;
+	uint32_t residual;
+	uint16_t ox_id;
+	uint16_t reserved3;
+	uint32_t relative_offset;
+	uint8_t  reserved4[24];
+} __attribute__((packed)) ctio7_from_24xx_t;
+
+/* CTIO7 flags values */
+#define CTIO7_FLAGS_SEND_STATUS		BIT_15
+#define CTIO7_FLAGS_TERMINATE		BIT_14
+#define CTIO7_FLAGS_CONFORM_REQ		BIT_13
+#define CTIO7_FLAGS_DONT_RET_CTIO	BIT_8
+#define CTIO7_FLAGS_STATUS_MODE_0	0
+#define CTIO7_FLAGS_STATUS_MODE_1	BIT_6
+#define CTIO7_FLAGS_EXPLICIT_CONFORM	BIT_5
+#define CTIO7_FLAGS_CONFIRM_SATISF	BIT_4
+#define CTIO7_FLAGS_DSD_PTR		BIT_2
+#define CTIO7_FLAGS_DATA_IN		BIT_1
+#define CTIO7_FLAGS_DATA_OUT		BIT_0
+
+#define ELS_PLOGI			0x3
+#define ELS_FLOGI			0x4
+#define ELS_LOGO			0x5
+#define ELS_PRLI			0x20
+#define ELS_PRLO			0x21
+#define ELS_TPRLO			0x24
+#define ELS_PDISC			0x50
+#define ELS_ADISC			0x52
+
+/*
+ * ISP queue - ABTS received/response entries structure definition for 24xx.
+ */
+#define ABTS_RECV_24XX		0x54 /* ABTS received (for 24xx) */
+#define ABTS_RESP_24XX		0x55 /* ABTS response (for 24xx) */
+
+/*
+ * ISP queue -	ABTS received IOCB entry structure definition for 24xx.
+ *		The ABTS BLS received from the wire is sent to the
+ *		target driver by the ISP 24xx.
+ *		The IOCB is placed on the response queue.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint8_t  reserved_1[6];
+	uint16_t nport_handle;
+	uint8_t  reserved_2[2];
+	uint8_t  vp_index;
+	uint8_t  reserved_3:4;
+	uint8_t  sof_type:4;
+	uint32_t exchange_address;
+	fcp_hdr_le_t fcp_hdr_le;
+	uint8_t  reserved_4[16];
+	uint32_t exchange_addr_to_abort;
+} __attribute__((packed)) abts_recv_from_24xx_t;
+
+#define ABTS_PARAM_ABORT_SEQ		BIT_0
+
+typedef struct {
+	uint16_t reserved;
+	uint8_t  seq_id_last;
+	uint8_t  seq_id_valid;
+#define SEQ_ID_VALID	0x80
+#define SEQ_ID_INVALID	0x00
+	uint16_t rx_id;
+	uint16_t ox_id;
+	uint16_t high_seq_cnt;
+	uint16_t low_seq_cnt;
+} __attribute__((packed)) ba_acc_le_t;
+
+typedef struct {
+	uint8_t vendor_uniq;
+	uint8_t reason_expl;
+	uint8_t reason_code;
+#define BA_RJT_REASON_CODE_INVALID_COMMAND	0x1
+#define BA_RJT_REASON_CODE_UNABLE_TO_PERFORM	0x9
+	uint8_t reserved;
+} __attribute__((packed)) ba_rjt_le_t;
+
+/*
+ * ISP queue -	ABTS Response IOCB entry structure definition for 24xx.
+ *		The ABTS response to the ABTS received is sent by the
+ *		target driver to the ISP 24xx.
+ *		The IOCB is placed on the request queue.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t handle;
+	uint16_t reserved_1;
+	uint16_t nport_handle;
+	uint16_t control_flags;
+#define ABTS_CONTR_FLG_TERM_EXCHG	BIT_0
+	uint8_t  vp_index;
+	uint8_t  reserved_3:4;
+	uint8_t  sof_type:4;
+	uint32_t exchange_address;
+	fcp_hdr_le_t fcp_hdr_le;
+	union {
+		ba_acc_le_t ba_acct;
+		ba_rjt_le_t ba_rjt;
+	} __attribute__((packed)) payload;
+	uint32_t reserved_4;
+	uint32_t exchange_addr_to_abort;
+} __attribute__((packed)) abts_resp_to_24xx_t;
+
+/*
+ * ISP queue -	ABTS Response IOCB from ISP24xx Firmware entry structure.
+ *		The ABTS response with completion status to the ABTS response
+ * 		(sent by the target driver to the ISP 24xx) is sent by the
+ *		ISP24xx firmware to the target driver.
+ *		The IOCB is placed on the response queue.
+ */
+typedef struct {
+	uint8_t	 entry_type;		    /* Entry type. */
+	uint8_t	 entry_count;		    /* Entry count. */
+	uint8_t	 sys_define;		    /* System defined. */
+	uint8_t	 entry_status;		    /* Entry Status. */
+	uint32_t handle;
+	uint16_t compl_status;
+#define ABTS_RESP_COMPL_SUCCESS		0
+#define ABTS_RESP_COMPL_SUBCODE_ERROR	0x31
+	uint16_t nport_handle;
+	uint16_t reserved_1;
+	uint8_t  reserved_2;
+	uint8_t  reserved_3:4;
+	uint8_t  sof_type:4;
+	uint32_t exchange_address;
+	fcp_hdr_le_t fcp_hdr_le;
+	uint8_t reserved_4[8];
+	uint32_t error_subcode1;
+#define ABTS_RESP_SUBCODE_ERR_ABORTED_EXCH_NOT_TERM	0x1E
+	uint32_t error_subcode2;
+	uint32_t exchange_addr_to_abort;
+} __attribute__((packed)) abts_resp_from_24xx_fw_t;
+
+/********************************************************************\
+ * Type Definitions used by initiator & target halves
+\********************************************************************/
+
+struct qla_tgt_mgmt_cmd;
+struct qla_tgt_sess;
+
+/*
+ * This structure provides a template of function calls that the
+ * target driver (from within qla_target.c) can issue to the
+ * target module (tcm_qla2xxx).
+ */
+struct qla_tgt_func_tmpl {
+
+	int (*handle_cmd)(struct scsi_qla_host *, struct qla_tgt_cmd *,
+			unsigned char *, uint32_t, int, int, int);
+	int (*handle_data)(struct qla_tgt_cmd *);
+	int (*handle_tmr)(struct qla_tgt_mgmt_cmd *, uint32_t, uint8_t);
+	void (*free_cmd)(struct qla_tgt_cmd *);
+	void (*free_session)(struct qla_tgt_sess *);
+
+	int (*check_initiator_node_acl)(struct scsi_qla_host *, unsigned char *,
+					void *, uint8_t *, uint16_t);
+	struct qla_tgt_sess *(*find_sess_by_loop_id)(struct scsi_qla_host *,
+						const uint16_t);
+	struct qla_tgt_sess *(*find_sess_by_s_id)(struct scsi_qla_host *,
+						const uint8_t *);
+};
+
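+/*
+ * A fabric module fills in this template and hands it to
+ * qla_tgt_lport_register().  Hypothetical sketch (handler names are
+ * illustrative, not a definitive tcm_qla2xxx listing):
+ *
+ *	static struct qla_tgt_func_tmpl tcm_qla2xxx_template = {
+ *		.handle_cmd	= tcm_qla2xxx_handle_cmd,
+ *		.handle_data	= tcm_qla2xxx_handle_data,
+ *		.handle_tmr	= tcm_qla2xxx_handle_tmr,
+ *		.free_cmd	= tcm_qla2xxx_free_cmd,
+ *	};
+ */
+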
+int qla2x00_wait_for_hba_online(struct scsi_qla_host *);
+
+#include <target/target_core_base.h>
+
+#define QLA_TGT_TIMEOUT			10	/* in seconds */
+
+#define QLA_TGT_MAX_HW_PENDING_TIME	60 /* in seconds */
+
+/* Immediate notify status constants */
+#define IMM_NTFY_LIP_RESET          0x000E
+#define IMM_NTFY_LIP_LINK_REINIT    0x000F
+#define IMM_NTFY_IOCB_OVERFLOW      0x0016
+#define IMM_NTFY_ABORT_TASK         0x0020
+#define IMM_NTFY_PORT_LOGOUT        0x0029
+#define IMM_NTFY_PORT_CONFIG        0x002A
+#define IMM_NTFY_GLBL_TPRLO         0x002D
+#define IMM_NTFY_GLBL_LOGO          0x002E
+#define IMM_NTFY_RESOURCE           0x0034
+#define IMM_NTFY_MSG_RX             0x0036
+#define IMM_NTFY_SRR                0x0045
+#define IMM_NTFY_ELS                0x0046
+
+/* Immediate notify task flags */
+#define IMM_NTFY_TASK_MGMT_SHIFT    8
+
+#define QLA_TGT_CLEAR_ACA               0x40
+#define QLA_TGT_TARGET_RESET            0x20
+#define QLA_TGT_LUN_RESET               0x10
+#define QLA_TGT_CLEAR_TS                0x04
+#define QLA_TGT_ABORT_TS                0x02
+#define QLA_TGT_ABORT_ALL_SESS          0xFFFF
+#define QLA_TGT_ABORT_ALL               0xFFFE
+#define QLA_TGT_NEXUS_LOSS_SESS         0xFFFD
+#define QLA_TGT_NEXUS_LOSS              0xFFFC
+
+/* Notify Acknowledge flags */
+#define NOTIFY_ACK_RES_COUNT        BIT_8
+#define NOTIFY_ACK_CLEAR_LIP_RESET  BIT_5
+#define NOTIFY_ACK_TM_RESP_CODE_VALID BIT_4
+
+/* Command's states */
+#define QLA_TGT_STATE_NEW               0	/* New command and target processing it */
+#define QLA_TGT_STATE_NEED_DATA         1	/* target needs data to continue */
+#define QLA_TGT_STATE_DATA_IN           2	/* Data arrived and target is processing */
+#define QLA_TGT_STATE_PROCESSED         3	/* target done processing */
+#define QLA_TGT_STATE_ABORTED           4	/* Command aborted */
+
+/* Special handles */
+#define QLA_TGT_NULL_HANDLE             0
+#define QLA_TGT_SKIP_HANDLE             (0xFFFFFFFF & ~CTIO_COMPLETION_HANDLE_MARK)
+
+/* ATIO task_codes field */
+#define ATIO_SIMPLE_QUEUE           0
+#define ATIO_HEAD_OF_QUEUE          1
+#define ATIO_ORDERED_QUEUE          2
+#define ATIO_ACA_QUEUE              4
+#define ATIO_UNTAGGED               5
+
+/* TM failed response codes, see FCP (9.4.11 FCP_RSP_INFO) */
+#define	FC_TM_SUCCESS               0
+#define	FC_TM_BAD_FCP_DATA          1
+#define	FC_TM_BAD_CMD               2
+#define	FC_TM_FCP_DATA_MISMATCH     3
+#define	FC_TM_REJECT                4
+#define FC_TM_FAILED                5
+
+/*
+ * Error code of qla_tgt_pre_xmit_response() meaning that the cmd's exchange
+ * was terminated, so no further action is needed and success should be
+ * returned to the target.
+ */
+#define QLA_TGT_PRE_XMIT_RESP_CMD_ABORTED	0x1717
+
+#if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G)
+#define pci_dma_lo32(a) (a & 0xffffffff)
+#define pci_dma_hi32(a) ((((a) >> 16)>>16) & 0xffffffff)
+#else
+#define pci_dma_lo32(a) (a & 0xffffffff)
+#define pci_dma_hi32(a) 0
+#endif
+
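+/*
+ * Illustrative only: a 64-bit DMA address is split into the two-dword
+ * dseg_0_address[] of the CTIO type 7 entry defined above, roughly:
+ *
+ *	pkt->u.status0.dseg_0_address[0] = cpu_to_le32(pci_dma_lo32(addr));
+ *	pkt->u.status0.dseg_0_address[1] = cpu_to_le32(pci_dma_hi32(addr));
+ */
+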
+#define QLA_TGT_SENSE_VALID(sense)  ((sense != NULL) && \
+				(((const uint8_t *)(sense))[0] & 0x70) == 0x70)
+
+struct qla_port_2xxx_data {
+	uint8_t port_name[WWN_SIZE];
+	uint16_t loop_id;
+};
+
+struct qla_port_24xx_data {
+	uint8_t port_name[WWN_SIZE];
+	uint16_t loop_id;
+	uint16_t reserved;
+};
+
+struct qla_tgt {
+	struct scsi_qla_host *vha;
+	struct qla_hw_data *ha;
+
+	/*
+	 * To sync between IRQ handlers and qla_tgt_target_release(). Needed
+	 * because req_pkt() can drop and reacquire the HW lock internally.
+	 * Protected by the HW lock.
+	 */
+	int irq_cmd_count;
+
+	int datasegs_per_cmd, datasegs_per_cont, sg_tablesize;
+
+	/* Target's flags, serialized by ha->hardware_lock */
+	unsigned int tgt_enable_64bit_addr:1;	/* 64-bit PCI addressing enabled */
+	unsigned int link_reinit_iocb_pending:1;
+	unsigned int tm_to_unknown:1; /* TM to unknown session was sent */
+	unsigned int sess_works_pending:1; /* there are sess_work entries */
+
+	/*
+	 * Protected by tgt_mutex AND hardware_lock for writing and tgt_mutex
+	 * OR hardware_lock for reading.
+	 */
+	int tgt_stop; /* the target mode driver is being stopped */
+	int tgt_stopped; /* the target mode driver has been stopped */
+
+	/* Count of sessions referring to qla_tgt. Protected by hardware_lock. */
+	int sess_count;
+
+	/* Protected by hardware_lock. Addition also protected by tgt_mutex. */
+	struct list_head sess_list;
+
+	/* Protected by hardware_lock */
+	struct list_head del_sess_list;
+	struct delayed_work sess_del_work;
+
+	spinlock_t sess_work_lock;
+	struct list_head sess_works_list;
+	struct work_struct sess_work;
+
+	imm_ntfy_from_isp_t link_reinit_iocb;
+	wait_queue_head_t waitQ;
+	int notify_ack_expected;
+	int abts_resp_expected;
+	int modify_lun_expected;
+
+	int ctio_srr_id;
+	int imm_srr_id;
+	spinlock_t srr_lock;
+	struct list_head srr_ctio_list;
+	struct list_head srr_imm_list;
+	struct work_struct srr_work;
+
+	atomic_t tgt_global_resets_count;
+
+	struct list_head tgt_list_entry;
+};
+
+/*
+ * Equivalent to IT Nexus (Initiator-Target)
+ */
+struct qla_tgt_sess {
+	uint16_t loop_id;
+	port_id_t s_id;
+
+	unsigned int conf_compl_supported:1;
+	unsigned int deleted:1;
+	unsigned int local:1;
+	unsigned int tearing_down:1;
+
+	struct se_session *se_sess;
+	struct scsi_qla_host *vha;
+	struct qla_tgt *tgt;
+
+	struct kref sess_kref;
+
+	struct list_head sess_list_entry;
+	unsigned long expires;
+	struct list_head del_list_entry;
+
+	uint8_t port_name[WWN_SIZE];
+};
+
+struct qla_tgt_cmd {
+	struct qla_tgt_sess *sess;
+	int state;
+	struct se_cmd se_cmd;
+	struct work_struct free_work;
+	struct work_struct work;
+	/* Sense buffer that will be mapped into outgoing status */
+	unsigned char sense_buffer[TRANSPORT_SENSE_BUFFER];
+
+	unsigned int conf_compl_supported:1;/* to save extra sess dereferences */
+	unsigned int sg_mapped:1;
+	unsigned int free_sg:1;
+	unsigned int aborted:1; /* Needed in case of SRR */
+	unsigned int write_data_transferred:1;
+
+	struct scatterlist *sg;	/* cmd data buffer SG vector */
+	int sg_cnt;		/* SG segments count */
+	int bufflen;		/* cmd buffer length */
+	int offset;
+	uint32_t tag;
+	uint32_t unpacked_lun;
+	enum dma_data_direction dma_data_direction;
+
+	uint16_t loop_id;		    /* to save extra sess dereferences */
+	struct qla_tgt *tgt;		    /* to save extra sess dereferences */
+	struct scsi_qla_host *vha;
+	struct list_head cmd_list;
+
+	atio_from_isp_t atio;
+};
+
+struct qla_tgt_sess_work_param {
+	struct list_head sess_works_list_entry;
+
+#define QLA_TGT_SESS_WORK_ABORT	1
+#define QLA_TGT_SESS_WORK_TM	2
+	int type;
+
+	union {
+		abts_recv_from_24xx_t abts;
+		imm_ntfy_from_isp_t tm_iocb;
+		atio_from_isp_t tm_iocb2;
+	};
+};
+
+struct qla_tgt_mgmt_cmd {
+	uint8_t tmr_func;
+	uint8_t fc_tm_rsp;
+	struct qla_tgt_sess *sess;
+	struct se_cmd se_cmd;
+	struct se_tmr_req *se_tmr_req;
+	unsigned int flags;
+#define QLA24XX_MGMT_SEND_NACK	1
+	union {
+		atio_from_isp_t atio;
+		imm_ntfy_from_isp_t imm_ntfy;
+		abts_recv_from_24xx_t abts;
+	} __attribute__((packed)) orig_iocb;
+};
+
+struct qla_tgt_prm {
+	struct qla_tgt_cmd *cmd;
+	struct qla_tgt *tgt;
+	void *pkt;
+	struct scatterlist *sg;	/* cmd data buffer SG vector */
+	int seg_cnt;
+	int req_cnt;
+	uint16_t rq_result;
+	uint16_t scsi_status;
+	unsigned char *sense_buffer;
+	int sense_buffer_len;
+	int residual;
+	int add_status_pkt;
+};
+
+struct qla_tgt_srr_imm {
+	struct list_head srr_list_entry;
+	int srr_id;
+	imm_ntfy_from_isp_t imm_ntfy;
+};
+
+struct qla_tgt_srr_ctio {
+	struct list_head srr_list_entry;
+	int srr_id;
+	struct qla_tgt_cmd *cmd;
+};
+
+#define QLA_TGT_XMIT_DATA		1
+#define QLA_TGT_XMIT_STATUS		2
+#define QLA_TGT_XMIT_ALL		(QLA_TGT_XMIT_STATUS|QLA_TGT_XMIT_DATA)
+
+#include <linux/version.h>
+
+extern struct qla_tgt_data qla_target;
+/*
+ * Internal function prototypes
+ */
+void qla_tgt_disable_vha(struct scsi_qla_host *);
+
+/*
+ * Function prototypes for qla_target.c logic used by qla2xxx LLD code.
+ */
+extern int qla_tgt_add_target(struct qla_hw_data *, struct scsi_qla_host *);
+extern int qla_tgt_remove_target(struct qla_hw_data *, struct scsi_qla_host *);
+extern int qla_tgt_lport_register(struct qla_tgt_func_tmpl *, u64,
+			int (*callback)(struct scsi_qla_host *), void *);
+extern void  qla_tgt_lport_deregister(struct scsi_qla_host *);
+extern void qla_tgt_fc_port_added(struct scsi_qla_host *, fc_port_t *);
+extern void qla_tgt_fc_port_deleted(struct scsi_qla_host *, fc_port_t *);
+extern void qla_tgt_set_mode(struct scsi_qla_host *ha);
+extern void qla_tgt_clear_mode(struct scsi_qla_host *ha);
+extern int __init qla_tgt_init(void);
+extern void __exit qla_tgt_exit(void);
+
+static inline bool qla_tgt_mode_enabled(struct scsi_qla_host *ha)
+{
+	return ha->host->active_mode & MODE_TARGET;
+}
+
+static inline bool qla_ini_mode_enabled(struct scsi_qla_host *ha)
+{
+	return ha->host->active_mode & MODE_INITIATOR;
+}
+
+static inline void qla_reverse_ini_mode(struct scsi_qla_host *ha)
+{
+	if (ha->host->active_mode & MODE_INITIATOR)
+		ha->host->active_mode &= ~MODE_INITIATOR;
+	else
+		ha->host->active_mode |= MODE_INITIATOR;
+}
+
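+/*
+ * Sketch of the intended pairing with the ini_mode_force_reverse flag in
+ * struct qla_hw_data (an assumption drawn from this series, simplified):
+ *
+ *	if (vha->hw->ini_mode_force_reverse)
+ *		qla_reverse_ini_mode(vha);
+ */
+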
+/********************************************************************\
+ * ISP Queue types left out of new QLogic driver (from old version)
+\********************************************************************/
+
+/*
+ * __qla_tgt_2xxx_send_enable_lun
+ *	Issue enable or disable LUN entry IOCB to ISP 2xxx.
+ *	NOTE: This IOCB does not exist on ISPs >= 24xx and so is never
+ *	issued to them.
+ *
+ * Input:
+ *	vha = adapter block pointer.
+ *
+ * Caller MUST hold the hardware lock. This function might release it,
+ * then reacquire it.
+ */
+static inline void
+__qla_tgt_2xxx_send_enable_lun(struct scsi_qla_host *vha, int enable)
+{
+	enable_lun_t *pkt;
+
+	pkt = (enable_lun_t *)qla2x00_alloc_iocbs(vha, NULL);
+	if (pkt != NULL) {
+		pkt->entry_type = ENABLE_LUN_TYPE;
+		if (enable) {
+			pkt->command_count = QLA2XXX_COMMAND_COUNT_INIT;
+			pkt->immed_notify_count = QLA2XXX_IMMED_NOTIFY_COUNT_INIT;
+			pkt->timeout = 0xffff;
+		} else {
+			pkt->command_count = 0;
+			pkt->immed_notify_count = 0;
+			pkt->timeout = 0;
+		}
+
+		/* Issue command to ISP */
+		qla2x00_isp_cmd(vha, vha->req);
+
+	} else {
+		qla_tgt_clear_mode(vha);
+		printk(KERN_ERR "%s: **** FAILED ****\n", __func__);
+	}
+}
+
+/*
+ * qla_tgt_2xxx_send_enable_lun
+ *      Issue enable or disable LUN entry IOCB.
+ *
+ * Input:
+ *      vha = adapter block pointer.
+ *	enable = enable/disable flag.
+ */
+static inline void
+qla_tgt_2xxx_send_enable_lun(struct scsi_qla_host *vha, bool enable)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if (!IS_FWI2_CAPABLE(ha)) {
+		unsigned long flags;
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+		__qla_tgt_2xxx_send_enable_lun(vha, enable);
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+	}
+}
+
+/*
+ * Exported symbols from qla_target.c LLD logic used by qla2xxx code.
+ */
+extern void qla_tgt_24xx_atio_pkt_all_vps(struct scsi_qla_host *,
+	atio_from_isp_t *);
+extern void qla_tgt_response_pkt_all_vps(struct scsi_qla_host *, response_t *);
+extern int qla_tgt_rdy_to_xfer(struct qla_tgt_cmd *);
+extern int qla_tgt_xmit_response(struct qla_tgt_cmd *, int, uint8_t);
+extern void qla_tgt_xmit_tm_rsp(struct qla_tgt_mgmt_cmd *);
+extern void qla_tgt_free_mcmd(struct qla_tgt_mgmt_cmd *);
+extern void qla_tgt_free_cmd(struct qla_tgt_cmd *cmd);
+extern int __qla_tgt_sess_put(struct qla_tgt_sess *);
+extern void qla_tgt_ctio_completion(struct scsi_qla_host *, uint32_t);
+extern void qla_tgt_async_event(uint16_t, struct scsi_qla_host *, uint16_t *);
+extern void qla_tgt_enable_vha(struct scsi_qla_host *);
+extern void qla_tgt_vport_create(struct scsi_qla_host *, struct qla_hw_data *);
+extern void qla_tgt_rff_id(struct scsi_qla_host *, struct ct_sns_req *);
+extern void qla_tgt_initialize_adapter(struct scsi_qla_host *, struct qla_hw_data *);
+extern void qla_tgt_init_atio_q_entries(struct scsi_qla_host *);
+extern void qla_tgt_24xx_process_atio_queue(struct scsi_qla_host *);
+extern void qla_tgt_24xx_config_rings(struct scsi_qla_host *, device_reg_t __iomem *);
+extern void qla_tgt_2xxx_config_nvram_stage1(struct scsi_qla_host *, nvram_t *);
+extern void qla_tgt_2xxx_config_nvram_stage2(struct scsi_qla_host *, init_cb_t *);
+extern void qla_tgt_24xx_config_nvram_stage1(struct scsi_qla_host *, struct nvram_24xx *);
+extern void qla_tgt_24xx_config_nvram_stage2(struct scsi_qla_host *, struct init_cb_24xx *);
+extern void qla_tgt_abort_isp(struct scsi_qla_host *);
+extern int qla_tgt_2xxx_process_response_error(struct scsi_qla_host *, sts_entry_t *);
+extern int qla_tgt_24xx_process_response_error(struct scsi_qla_host *, struct sts_entry_24xx *);
+extern void qla_tgt_modify_vp_config(struct scsi_qla_host *, struct vp_config_entry_24xx *);
+extern void qla_tgt_probe_one_stage1(struct scsi_qla_host *, struct qla_hw_data *);
+extern int qla_tgt_mem_alloc(struct qla_hw_data *);
+extern void qla_tgt_mem_free(struct qla_hw_data *);
+extern void qla_tgt_stop_phase1(struct qla_tgt *);
+extern void qla_tgt_stop_phase2(struct qla_tgt *);
+
+#endif /* __QLA_TARGET_H */
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC-v4 2/3] qla2xxx: Enable 2xxx series LLD target mode support
  2011-12-18  2:02 [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Nicholas A. Bellinger
  2011-12-18  2:02 ` [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support Nicholas A. Bellinger
@ 2011-12-18  2:02 ` Nicholas A. Bellinger
  2011-12-18  2:02 ` [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target Nicholas A. Bellinger
  2011-12-21 17:11 ` [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Christoph Hellwig
  3 siblings, 0 replies; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-18  2:02 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: Andrew Vasquez, Giridhar Malavali, Christoph Hellwig,
	James Bottomley, Roland Dreier, Joern Engel, Madhuranath Iyengar,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch enables target mode support with the qla2xxx SCSI LLD using
qla_target.c logic.  This includes:

*) Addition of target mode specific members to existing data
structures in qla_def.h and struct qla_hw_data->tgt_ops using
qla_target.h:struct qla_tgt_func_tmpl

*) Addition of struct qla_tgt_func_tmpl and direct calls into
qla_target.c logic w/ qla_tgt_* prefixed functions.

*) Addition of qla_iocb.c:qla2x00_req_pkt() for ring processing, and
qla2x00_issue_marker() for handling request/response queue processing
for target mode operation

*) Addition of various qla_tgt_mode_enabled() logic checks in
qla24xx_nvram_config(), qla2x00_initialize_adapter(), qla2x00_rff_id(),
qla2x00_abort_isp(), qla24xx_modify_vp_config(), and qla2x00_vp_abort_isp().

More specific checks for qla_hw_data->qla2x_tmpl include:

*) control plane:

qla_init.c:qla2x00_rport_del() -> qla_tgt_fc_port_deleted()
qla_init.c:qla2x00_reg_remote_port() -> qla_tgt_fc_port_added()
qla_init.c:qla2x00_device_resync() -> qla2x00_mark_device_lost()

*) I/O path:

qla_isr.c:qla2x00_async_event() -> qla_tgt_async_event()
qla_isr.c:qla2x00_process_response_queue() -> qla_tgt_response_pkt_all_vps()
qla_isr.c:qla24xx_process_response_queue() -> qla_tgt_response_pkt_all_vps()

*) interrupt handlers:

qla_isr.c:qla24xx_intr_handler() -> qla_tgt_24xx_process_atio_queue() +
                                    qla24xx_process_response_queue()
qla24xx_msix_default(): qla_tgt_24xx_process_atio_queue() +
		        qla24xx_process_response_queue()
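
A minimal sketch of the gating pattern used at these call sites (simplified;
the hunks below carry the real locking and context):

	if (qla_tgt_mode_enabled(vha))
		qla_tgt_24xx_process_atio_queue(vha);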


Cc: Andrew Vasquez <andrew.vasquez@qlogic.com>
Cc: Giridhar Malavali <giridhar.malavali@qlogic.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Bottomley <JBottomley@Parallels.com>
Cc: Roland Dreier <roland@purestorage.com>
Cc: Joern Engel <joern@logfs.org>
Cc: Madhuranath Iyengar <mni@risingtidesystems.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/scsi/qla2xxx/Makefile   |    2 +-
 drivers/scsi/qla2xxx/qla_attr.c |    5 +-
 drivers/scsi/qla2xxx/qla_dbg.c  |   13 ++---
 drivers/scsi/qla2xxx/qla_dbg.h  |    5 ++
 drivers/scsi/qla2xxx/qla_def.h  |   70 +++++++++++++++++++--
 drivers/scsi/qla2xxx/qla_gbl.h  |    7 ++
 drivers/scsi/qla2xxx/qla_gs.c   |    4 +-
 drivers/scsi/qla2xxx/qla_init.c |  101 ++++++++++++++++++++++++++++---
 drivers/scsi/qla2xxx/qla_iocb.c |  105 +++++++++++++++++++++++++++++++-
 drivers/scsi/qla2xxx/qla_isr.c  |   86 ++++++++++++++++++++++++++-
 drivers/scsi/qla2xxx/qla_mbx.c  |  122 +++++++++++++++++++++++++++++++++++--
 drivers/scsi/qla2xxx/qla_mid.c  |   21 ++++++-
 drivers/scsi/qla2xxx/qla_os.c   |  126 +++++++++++++++++++++++++++++++++------
 13 files changed, 609 insertions(+), 58 deletions(-)

diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
index 5df782f..702931ff 100644
--- a/drivers/scsi/qla2xxx/Makefile
+++ b/drivers/scsi/qla2xxx/Makefile
@@ -1,5 +1,5 @@
 qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
 		qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o qla_bsg.o \
-        qla_nx.o
+        qla_nx.o qla_target.o
 
 obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
index ac326c4..e5dd55c 100644
--- a/drivers/scsi/qla2xxx/qla_attr.c
+++ b/drivers/scsi/qla2xxx/qla_attr.c
@@ -5,6 +5,7 @@
  * See LICENSE.qla2xxx for copyright and licensing details.
  */
 #include "qla_def.h"
+#include "qla_target.h"
 
 #include <linux/kthread.h>
 #include <linux/vmalloc.h>
@@ -1855,6 +1856,7 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
 	fc_host_supported_speeds(vha->host) =
 		fc_host_supported_speeds(base_vha->host);
 
+	qla_tgt_vport_create(vha, ha);
 	qla24xx_vport_disable(fc_vport, disable);
 
 	if (ha->flags.cpu_affinity_enabled) {
@@ -2068,7 +2070,8 @@ qla2x00_init_host_attr(scsi_qla_host_t *vha)
 	fc_host_dev_loss_tmo(vha->host) = ha->port_down_retry_count;
 	fc_host_node_name(vha->host) = wwn_to_u64(vha->node_name);
 	fc_host_port_name(vha->host) = wwn_to_u64(vha->port_name);
-	fc_host_supported_classes(vha->host) = FC_COS_CLASS3;
+	fc_host_supported_classes(vha->host) = ha->enable_class_2 ?
+			(FC_COS_CLASS2|FC_COS_CLASS3) : FC_COS_CLASS3;
 	fc_host_max_npiv_vports(vha->host) = ha->max_npiv_vports;
 	fc_host_npiv_vports_inuse(vha->host) = ha->cur_vport_count;
 
diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
index 9df4787..eaffa0a 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.c
+++ b/drivers/scsi/qla2xxx/qla_dbg.c
@@ -25,6 +25,11 @@
  * | ISP82XX Specific             |       0xb051       |    		|
  * | MultiQ                       |       0xc00b       |		|
  * | Misc                         |       0xd00b       |		|
+ * | Target Mode		  |	  0xe037       |		|
+ * | Target Mode Management	  |	  0xe14e       |		|
+ * | Target Mode SCSI Packets	  |	  0xe20b       |		|
+ * | Target Mode Scatterlists	  |	  0xe30c       |		|
+ * | Target Mode Task Management  |	  0xe409       |		|
  * ----------------------------------------------------------------------
  */
 
@@ -1671,8 +1676,6 @@ ql_dbg(uint32_t level, scsi_qla_host_t *vha, int32_t id, char *msg, ...) {
 	uint32_t len;
 	struct pci_dev *pdev = NULL;
 
-	memset(pbuf, 0, QL_DBG_BUF_LEN);
-
 	va_start(ap, msg);
 
 	if ((level & ql2xextended_error_logging) == level) {
@@ -1719,8 +1722,6 @@ ql_dbg_pci(uint32_t level, struct pci_dev *pdev, int32_t id, char *msg, ...) {
 	if (pdev == NULL)
 		return;
 
-	memset(pbuf, 0, QL_DBG_BUF_LEN);
-
 	va_start(ap, msg);
 
 	if ((level & ql2xextended_error_logging) == level) {
@@ -1758,8 +1759,6 @@ ql_log(uint32_t level, scsi_qla_host_t *vha, int32_t id, char *msg, ...) {
 	uint32_t len;
 	struct pci_dev *pdev = NULL;
 
-	memset(pbuf, 0, QL_DBG_BUF_LEN);
-
 	va_start(ap, msg);
 
 	if (level <= ql_errlev) {
@@ -1818,8 +1817,6 @@ ql_log_pci(uint32_t level, struct pci_dev *pdev, int32_t id, char *msg, ...) {
 	if (pdev == NULL)
 		return;
 
-	memset(pbuf, 0, QL_DBG_BUF_LEN);
-
 	va_start(ap, msg);
 
 	if (level <= ql_errlev) {
diff --git a/drivers/scsi/qla2xxx/qla_dbg.h b/drivers/scsi/qla2xxx/qla_dbg.h
index 98a377b..26752e2 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.h
+++ b/drivers/scsi/qla2xxx/qla_dbg.h
@@ -275,5 +275,10 @@ ql_log_pci(uint32_t, struct pci_dev *pdev, int32_t, char *, ...);
 #define ql_dbg_misc	0x00010000 /* For dumping everything that is not
 				    * not covered by upper categories
 				    */
+#define ql_dbg_tgt	0x00008000 /* Target mode */
+#define ql_dbg_tgt_mgt	0x00004000 /* Target mode management */
+#define ql_dbg_tgt_pkt	0x00002000 /* Target mode SCSI packets */
+#define ql_dbg_tgt_sgl	0x00001000 /* Target mode scatterlists */
+#define ql_dbg_tgt_tmr	0x00000800 /* Target mode task management */
 
 #define QL_DBG_BUF_LEN	512
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index fcf052c..b2f3cf0 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -176,7 +176,7 @@
 #define	LOOP_DOWN_RESET			(LOOP_DOWN_TIME - 30)
 
 /* Maximum outstanding commands in ISP queues (1-65535) */
-#define MAX_OUTSTANDING_COMMANDS	1024
+#define MAX_OUTSTANDING_COMMANDS	16384
 
 /* ISP request and response entry counts (37-65535) */
 #define REQUEST_ENTRY_CNT_2100		128	/* Number of request entries. */
@@ -185,6 +185,7 @@
 #define RESPONSE_ENTRY_CNT_2100		64	/* Number of response entries.*/
 #define RESPONSE_ENTRY_CNT_2300		512	/* Number of response entries.*/
 #define RESPONSE_ENTRY_CNT_MQ		128	/* Number of response entries.*/
+#define ATIO_ENTRY_CNT_24XX		4096	/* Number of ATIO entries. */
 
 struct req_que;
 
@@ -546,7 +547,7 @@ typedef struct {
 #define MBA_SYSTEM_ERR		0x8002	/* System Error. */
 #define MBA_REQ_TRANSFER_ERR	0x8003	/* Request Transfer Error. */
 #define MBA_RSP_TRANSFER_ERR	0x8004	/* Response Transfer Error. */
-#define MBA_WAKEUP_THRES	0x8005	/* Request Queue Wake-up. */
+#define MBA_WAKEUP_THRES       0x8005  /* Request Queue Wake-up. */
 #define MBA_LIP_OCCURRED	0x8010	/* Loop Initialization Procedure */
 					/* occurred. */
 #define MBA_LOOP_UP		0x8011	/* FC Loop UP. */
@@ -1220,11 +1221,27 @@ typedef struct {
  * ISP queue - response queue entry definition.
  */
 typedef struct {
-	uint8_t		data[60];
+	uint8_t		entry_type;		/* Entry type. */
+	uint8_t		entry_count;		/* Entry count. */
+	uint8_t		sys_define;		/* System defined. */
+	uint8_t		entry_status;		/* Entry Status. */
+	uint32_t	handle;			/* System defined handle */
+	uint8_t		data[52];
 	uint32_t	signature;
 #define RESPONSE_PROCESSED	0xDEADDEAD	/* Signature */
 } response_t;
 
+/*
+ * ISP queue - ATIO queue entry definition.
+ */
+typedef struct {
+	uint8_t		entry_type;		/* Entry type. */
+	uint8_t		entry_count;		/* Entry count. */
+	uint8_t		data[58];
+	uint32_t	signature;
+#define ATIO_PROCESSED 0xDEADDEAD		/* Signature */
+} atio_t;
+
 typedef union {
 	uint16_t extended;
 	struct {
@@ -1707,6 +1724,9 @@ typedef struct fc_port {
 
 	uint16_t vp_idx;
 	uint8_t fc4_type;
+
+	/* True, if confirmed completion is supported */
+	uint8_t conf_compl_supported:1;
 } fc_port_t;
 
 /*
@@ -2823,12 +2843,44 @@ struct qla_hw_data {
 
 	uint8_t fw_type;
 	__le32 file_prd_off;	/* File firmware product offset */
-
 	uint32_t	md_template_size;
 	void		*md_tmplt_hdr;
-	dma_addr_t      md_tmplt_hdr_dma;
-	void            *md_dump;
+	dma_addr_t	md_tmplt_hdr_dma;
+	void		*md_dump;
 	uint32_t	md_dump_size;
+
+	/* Protected by hw lock */
+	uint32_t enable_class_2:1;
+	uint32_t enable_explicit_conf:1;
+	uint32_t host_shutting_down:1;
+	uint32_t ini_mode_force_reverse:1;
+	uint32_t node_name_set:1;
+
+	dma_addr_t atio_dma;	/* Physical address. */
+	atio_t *atio_ring;	/* Base virtual address */
+	atio_t *atio_ring_ptr;	/* Current address. */
+	uint16_t atio_ring_index; /* Current index. */
+	uint16_t atio_q_length;
+
+	void *target_lport_ptr;
+	struct qla_tgt_func_tmpl *tgt_ops;
+	struct qla_tgt *qla_tgt;
+	struct qla_tgt_cmd *cmds[MAX_OUTSTANDING_COMMANDS];
+	uint16_t current_handle;
+
+	struct qla_tgt_vp_map *tgt_vp_map;
+	struct mutex tgt_mutex;
+	struct mutex tgt_host_action_mutex;
+
+	int saved_set;
+	uint16_t saved_exchange_count;
+	uint32_t saved_firmware_options_1;
+	uint32_t saved_firmware_options_2;
+	uint32_t saved_firmware_options_3;
+	uint8_t saved_firmware_options[2];
+	uint8_t saved_add_firmware_options[2];
+
+	uint8_t tgt_node_name[WWN_SIZE];
 };
 
 /*
@@ -2955,6 +3007,11 @@ typedef struct scsi_qla_host {
 	atomic_t	vref_count;
 } scsi_qla_host_t;
 
+struct qla_tgt_vp_map {
+	uint8_t	idx;
+	scsi_qla_host_t *vha;
+};
+
 /*
  * Macros to help code, maintain, etc.
  */
@@ -2978,7 +3035,6 @@ typedef struct scsi_qla_host {
 	atomic_dec(&__vha->vref_count);			     \
 } while (0)
 
-
 #define qla_printk(level, ha, format, arg...) \
 	dev_printk(level , &((ha)->pdev->dev) , format , ## arg)
 
diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
index ce32d81..8c07d24 100644
--- a/drivers/scsi/qla2xxx/qla_gbl.h
+++ b/drivers/scsi/qla2xxx/qla_gbl.h
@@ -178,6 +178,7 @@ extern int  qla2x00_vp_abort_isp(scsi_qla_host_t *);
 /*
  * Global Function Prototypes in qla_iocb.c source file.
  */
+
 extern uint16_t qla2x00_calc_iocbs_32(uint16_t);
 extern uint16_t qla2x00_calc_iocbs_64(uint16_t);
 extern void qla2x00_build_scsi_iocbs_32(srb_t *, cmd_entry_t *, uint16_t);
@@ -191,6 +192,9 @@ extern uint16_t qla24xx_calc_iocbs(scsi_qla_host_t *, uint16_t);
 extern void qla24xx_build_scsi_iocbs(srb_t *, struct cmd_type_7 *, uint16_t);
 extern int qla24xx_dif_start_scsi(srb_t *);
 
+extern void *qla2x00_alloc_iocbs(scsi_qla_host_t *, srb_t *);
+extern void qla2x00_isp_cmd(struct scsi_qla_host *, struct req_que *);
+extern int qla2x00_issue_marker(scsi_qla_host_t *, int);
 
 /*
  * Global Function Prototypes in qla_mbx.c source file.
@@ -243,6 +247,9 @@ extern int
 qla2x00_init_firmware(scsi_qla_host_t *, uint16_t);
 
 extern int
+qla2x00_get_node_name_list(scsi_qla_host_t *, void **, int *);
+
+extern int
 qla2x00_get_port_database(scsi_qla_host_t *, fc_port_t *, uint8_t);
 
 extern int
diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
index 37937aa..e922f71 100644
--- a/drivers/scsi/qla2xxx/qla_gs.c
+++ b/drivers/scsi/qla2xxx/qla_gs.c
@@ -5,6 +5,7 @@
  * See LICENSE.qla2xxx for copyright and licensing details.
  */
 #include "qla_def.h"
+#include "qla_target.h"
 
 static int qla2x00_sns_ga_nxt(scsi_qla_host_t *, fc_port_t *);
 static int qla2x00_sns_gid_pt(scsi_qla_host_t *, sw_info_t *);
@@ -545,7 +546,8 @@ qla2x00_rff_id(scsi_qla_host_t *vha)
 	ct_req->req.rff_id.port_id[1] = vha->d_id.b.area;
 	ct_req->req.rff_id.port_id[2] = vha->d_id.b.al_pa;
 
-	ct_req->req.rff_id.fc4_feature = BIT_1;
+	qla_tgt_rff_id(vha, ct_req);
+
 	ct_req->req.rff_id.fc4_type = 0x08;		/* SCSI - FCP */
 
 	/* Execute MS IOCB */
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index f03e915f..cd1cb20 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -17,6 +17,9 @@
 #include <asm/prom.h>
 #endif
 
+#include <target/target_core_base.h>
+#include "qla_target.h"
+
 /*
 *  QLogic ISP2x00 Hardware Support Function Prototypes.
 */
@@ -570,7 +573,10 @@ qla2x00_initialize_adapter(scsi_qla_host_t *vha)
 			return QLA_FUNCTION_FAILED;
 		}
 	}
-	rval = qla2x00_init_rings(vha);
+
+	if (qla_ini_mode_enabled(vha))
+		rval = qla2x00_init_rings(vha);
+
 	ha->flags.chip_reset_done = 1;
 
 	if (rval == QLA_SUCCESS && IS_QLA84XX(ha)) {
@@ -586,6 +592,9 @@ qla2x00_initialize_adapter(scsi_qla_host_t *vha)
 	if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha))
 		qla24xx_read_fcp_prio_cfg(vha);
 
+	if (rval == QLA_SUCCESS)
+		qla_tgt_initialize_adapter(vha, ha);
+
 	return (rval);
 }
 
@@ -1733,6 +1742,12 @@ qla24xx_config_rings(struct scsi_qla_host *vha)
 	icb->response_q_address[0] = cpu_to_le32(LSD(rsp->dma));
 	icb->response_q_address[1] = cpu_to_le32(MSD(rsp->dma));
 
+	/* Setup ATIO queue dma pointers for target mode */
+	icb->atio_q_inpointer = __constant_cpu_to_le16(0);
+	icb->atio_q_length = cpu_to_le16(ha->atio_q_length);
+	icb->atio_q_address[0] = cpu_to_le32(LSD(ha->atio_dma));
+	icb->atio_q_address[1] = cpu_to_le32(MSD(ha->atio_dma));
+
 	if (ha->mqenable) {
 		icb->qos = __constant_cpu_to_le16(QLA_DEFAULT_QUE_QOS);
 		icb->rid = __constant_cpu_to_le16(rid);
@@ -1775,6 +1790,8 @@ qla24xx_config_rings(struct scsi_qla_host *vha)
 		WRT_REG_DWORD(&reg->isp24.rsp_q_in, 0);
 		WRT_REG_DWORD(&reg->isp24.rsp_q_out, 0);
 	}
+	qla_tgt_24xx_config_rings(vha, reg);
+
 	/* PCI posting */
 	RD_REG_DWORD(&ioreg->hccr);
 }
@@ -1836,6 +1853,11 @@ qla2x00_init_rings(scsi_qla_host_t *vha)
 
 	spin_unlock(&ha->vport_slock);
 
+	ha->atio_ring_ptr = ha->atio_ring;
+	ha->atio_ring_index = 0;
+	/* Initialize ATIO queue entries */
+	qla_tgt_init_atio_q_entries(vha);
+
 	ha->isp_ops->config_rings(vha);
 
 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
@@ -2096,6 +2118,8 @@ qla2x00_configure_hba(scsi_qla_host_t *vha)
 	vha->d_id.b.area = area;
 	vha->d_id.b.al_pa = al_pa;
 
+	ha->tgt_vp_map[al_pa].idx = vha->vp_idx;
+
 	if (!vha->flags.init_done)
 		ql_log(ql_log_info, vha, 0x2010,
 		    "Topology - %s, Host Loop address 0x%x.\n",
@@ -2301,21 +2325,31 @@ qla2x00_nvram_config(scsi_qla_host_t *vha)
 	}
 #endif
 
+	qla_tgt_2xxx_config_nvram_stage1(vha, nv);
+
 	/* Reset Initialization control block */
 	memset(icb, 0, ha->init_cb_size);
 
 	/*
 	 * Setup driver NVRAM options.
 	 */
+	/* Enable ADISC and fairness */
 	nv->firmware_options[0] |= (BIT_6 | BIT_1);
 	nv->firmware_options[0] &= ~(BIT_5 | BIT_4);
 	nv->firmware_options[1] |= (BIT_5 | BIT_0);
+	/* Enable PDB changed AE */
+	nv->firmware_options[1] |= BIT_0;
+	/* Stop Port Queue on Full Status */
 	nv->firmware_options[1] &= ~BIT_4;
 
 	if (IS_QLA23XX(ha)) {
+		/* Enable full duplex */
 		nv->firmware_options[0] |= BIT_2;
+		/* Disable Fast Status Posting */
 		nv->firmware_options[0] &= ~BIT_3;
-		nv->firmware_options[0] &= ~BIT_6;
+		/* out-of-order frames reassembly */
+		nv->special_options[0] |= BIT_6;
+		/* P2P preferred, otherwise loop */
 		nv->add_firmware_options[1] |= BIT_5 | BIT_4;
 
 		if (IS_QLA2300(ha)) {
@@ -2329,6 +2363,7 @@ qla2x00_nvram_config(scsi_qla_host_t *vha)
 			    sizeof(nv->model_number), "QLA23xx");
 		}
 	} else if (IS_QLA2200(ha)) {
+		/* Enable full duplex */
 		nv->firmware_options[0] |= BIT_2;
 		/*
 		 * 'Point-to-point preferred, else loop' is not a safe
@@ -2360,12 +2395,14 @@ qla2x00_nvram_config(scsi_qla_host_t *vha)
 	while (cnt--)
 		*dptr1++ = *dptr2++;
 
-	/* Use alternate WWN? */
 	if (nv->host_p[1] & BIT_7) {
+		/* Use alternate WWN? */
 		memcpy(icb->node_name, nv->alternate_node_name, WWN_SIZE);
 		memcpy(icb->port_name, nv->alternate_port_name, WWN_SIZE);
 	}
 
+	qla_tgt_2xxx_config_nvram_stage2(vha, icb);
+
 	/* Prepare nodename */
 	if ((icb->firmware_options[1] & BIT_6) == 0) {
 		/*
@@ -2512,14 +2549,21 @@ qla2x00_rport_del(void *data)
 {
 	fc_port_t *fcport = data;
 	struct fc_rport *rport;
+	scsi_qla_host_t *vha = fcport->vha;
 	unsigned long flags;
 
 	spin_lock_irqsave(fcport->vha->host->host_lock, flags);
 	rport = fcport->drport ? fcport->drport: fcport->rport;
 	fcport->drport = NULL;
 	spin_unlock_irqrestore(fcport->vha->host->host_lock, flags);
-	if (rport)
+	if (rport) {
 		fc_remote_port_delete(rport);
+		/*
+		 * Release the target mode FC NEXUS in qla_target.c code
+		 * if target mode is enabled.
+		 */
+		qla_tgt_fc_port_deleted(vha, fcport);
+	}
 }
 
 /**
@@ -2915,6 +2959,12 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
 		    "Unable to allocate fc remote port.\n");
 		return;
 	}
+	/*
+	 * Create target mode FC NEXUS in qla_target.c if target mode is
+	 * enabled..
+	 */
+	qla_tgt_fc_port_added(vha, fcport);
+
 	spin_lock_irqsave(fcport->vha->host->host_lock, flags);
 	*((fc_port_t **)rport->dd_data) = fcport;
 	spin_unlock_irqrestore(fcport->vha->host->host_lock, flags);
@@ -3580,11 +3630,13 @@ qla2x00_device_resync(scsi_qla_host_t *vha)
 				continue;
 
 			if (atomic_read(&fcport->state) == FCS_ONLINE) {
-				if (format != 3 ||
-				    fcport->port_type != FCT_INITIATOR) {
+				if (vha->hw->tgt_ops != NULL)
 					qla2x00_mark_device_lost(vha, fcport,
-					    0, 0);
-				}
+							0, 0);
+				else if ((format != 3) ||
+					   (fcport->port_type != FCT_INITIATOR))
+					qla2x00_mark_device_lost(vha, fcport,
+						0, 0);
 			}
 		}
 	}
@@ -3734,6 +3786,13 @@ qla2x00_fabric_login(scsi_qla_host_t *vha, fc_port_t *fcport,
 			if (mb[10] & BIT_1)
 				fcport->supported_classes |= FC_COS_CLASS3;
 
+			if (IS_FWI2_CAPABLE(ha)) {
+				if (mb[10] & BIT_7)
+					fcport->conf_compl_supported = 1;
+			} else {
+				/* mb[10] bits are undocumented, TODO */
+			}
+
 			rval = QLA_SUCCESS;
 			break;
 		} else if (mb[0] == MBS_LOOP_ID_USED) {
@@ -4095,6 +4154,8 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
 
 			vha->flags.online = 1;
 
+			qla_tgt_abort_isp(vha);
+
 			ha->isp_ops->enable_intrs(ha);
 
 			ha->isp_abort_cnt = 0;
@@ -4211,6 +4272,7 @@ qla2x00_restart_isp(scsi_qla_host_t *vha)
 	struct qla_hw_data *ha = vha->hw;
 	struct req_que *req = ha->req_q_map[0];
 	struct rsp_que *rsp = ha->rsp_q_map[0];
+	unsigned long flags;
 
 	/* If firmware needs to be loaded */
 	if (qla2x00_isp_firmware(vha)) {
@@ -4235,6 +4297,16 @@ qla2x00_restart_isp(scsi_qla_host_t *vha)
 			qla2x00_marker(vha, req, rsp, 0, 0, MK_SYNC_ALL);
 
 			vha->flags.online = 1;
+
+			/*
+			 * Process any ATIO queue entries that came in
+			 * while we weren't online.
+			 */
+			spin_lock_irqsave(&ha->hardware_lock, flags);
+			if (qla_tgt_mode_enabled(vha))
+				qla_tgt_24xx_process_atio_queue(vha);
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
 			/* Wait at most MAX_TARGET RSCNs for a stable link. */
 			wait_time = 256;
 			do {
@@ -4475,6 +4547,15 @@ qla24xx_nvram_config(scsi_qla_host_t *vha)
 		rval = 1;
 	}
 
+	if (!qla_ini_mode_enabled(vha)) {
+		/* Don't enable full login after initial LIP */
+		nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_13);
+		/* Don't enable LIP full login for initiator */
+		nv->host_p &= __constant_cpu_to_le32(~BIT_10);
+	}
+
+	qla_tgt_24xx_config_nvram_stage1(vha, nv);
+
 	/* Reset Initialization control block */
 	memset(icb, 0, ha->init_cb_size);
 
@@ -4502,8 +4583,10 @@ qla24xx_nvram_config(scsi_qla_host_t *vha)
 	qla2x00_set_model_info(vha, nv->model_name, sizeof(nv->model_name),
 	    "QLA2462");
 
-	/* Use alternate WWN? */
+	qla_tgt_24xx_config_nvram_stage2(vha, icb);
+
 	if (nv->host_p & __constant_cpu_to_le32(BIT_15)) {
+		/* Use alternate WWN? */
 		memcpy(icb->node_name, nv->alternate_node_name, WWN_SIZE);
 		memcpy(icb->port_name, nv->alternate_port_name, WWN_SIZE);
 	}
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index dbec896..d3a65e0 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -5,14 +5,13 @@
  * See LICENSE.qla2xxx for copyright and licensing details.
  */
 #include "qla_def.h"
+#include "qla_target.h"
 
 #include <linux/blkdev.h>
 #include <linux/delay.h>
 
 #include <scsi/scsi_tcq.h>
 
-static void qla2x00_isp_cmd(struct scsi_qla_host *, struct req_que *);
-
 static void qla25xx_set_que(srb_t *, struct rsp_que **);
 /**
  * qla2x00_get_cmd_direction() - Determine control_flag data direction.
@@ -536,13 +535,111 @@ qla2x00_marker(struct scsi_qla_host *vha, struct req_que *req,
 	return (ret);
 }
 
+/*
+ * qla2x00_issue_marker
+ *
+ * Issue marker
+ * Caller CAN have hardware lock held as specified by ha_locked parameter.
+ * Might release it, then reacquire.
+ */
+int qla2x00_issue_marker(scsi_qla_host_t *vha, int ha_locked)
+{
+	if (ha_locked) {
+		if (__qla2x00_marker(vha, vha->req, vha->req->rsp, 0, 0,
+					MK_SYNC_ALL) != QLA_SUCCESS)
+			return QLA_FUNCTION_FAILED;
+	} else {
+		if (qla2x00_marker(vha, vha->req, vha->req->rsp, 0, 0,
+					MK_SYNC_ALL) != QLA_SUCCESS)
+			return QLA_FUNCTION_FAILED;
+	}
+	vha->marker_needed = 0;
+
+	return QLA_SUCCESS;
+}
+
+/**
+ * qla2x00_req_pkt() - Retrieve a request packet from the request ring.
+ * @ha: HA context
+ *
+ * Note: The caller must hold the hardware lock before calling this routine.
+ * Might release it, then reacquire.
+ *
+ * Returns NULL if function failed, else, a pointer to the request packet.
+ */
+request_t *
+qla2x00_req_pkt(scsi_qla_host_t *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	device_reg_t __iomem *reg = ha->iobase;
+	request_t *pkt = NULL;
+	uint32_t *dword_ptr, timer;
+	uint16_t req_cnt = 1, cnt;
+
+	/* Wait 1 second for slot. */
+	for (timer = HZ; timer; timer--) {
+		if ((req_cnt + 2) >= vha->req->cnt) {
+			/* Calculate number of free request entries. */
+			if (IS_FWI2_CAPABLE(ha))
+				cnt = (uint16_t)RD_REG_DWORD(&reg->isp24.req_q_out);
+			else
+				cnt = qla2x00_debounce_register(
+					ISP_REQ_Q_OUT(ha, &reg->isp));
+
+			if  (vha->req->ring_index < cnt)
+				vha->req->cnt = cnt - vha->req->ring_index;
+			else
+				vha->req->cnt = vha->req->length -
+					(vha->req->ring_index - cnt);
+		}
+
+		/* If room for request in request ring. */
+		if ((req_cnt + 2) < vha->req->cnt) {
+			vha->req->cnt--;
+			pkt = vha->req->ring_ptr;
+
+			/* Zero out packet. */
+			dword_ptr = (uint32_t *)pkt;
+			for (cnt = 0; cnt < REQUEST_ENTRY_SIZE / 4; cnt++)
+				*dword_ptr++ = 0;
+
+			/* Set system defined field. */
+			pkt->sys_define = (uint8_t)vha->req->ring_index;
+
+			/* Set entry count. */
+			pkt->entry_count = 1;
+
+			return pkt;
+		}
+
+		/* Release ring specific lock */
+		spin_unlock_irq(&ha->hardware_lock);
+
+		/* 2 us */
+		udelay(2);
+		/*
+		 * Check for pending interrupts; during init we issue the
+		 * marker directly.
+		 */
+		if (!vha->marker_needed && !vha->flags.init_done)
+			qla2x00_poll(vha->req->rsp);
+
+		/* Reacquire ring specific lock */
+		spin_lock_irq(&ha->hardware_lock);
+	}
+
+	printk(KERN_ERR "Unable to locate request_t *pkt in ring\n");
+	dump_stack();
+
+	return NULL;
+}
+
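+/*
+ * A minimal usage sketch (an assumed caller pattern, not part of this
+ * patch) for qla2x00_req_pkt() above together with qla2x00_isp_cmd()
+ * below: build an IOCB with the hardware lock held, then update the
+ * request ring pointer:
+ *
+ *	spin_lock_irqsave(&ha->hardware_lock, flags);
+ *	pkt = qla2x00_req_pkt(vha);
+ *	if (pkt != NULL) {
+ *		... fill in IOCB fields ...
+ *		qla2x00_isp_cmd(vha, vha->req);
+ *	}
+ *	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+ */
+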
 /**
  * qla2x00_isp_cmd() - Modify the request ring pointer.
  * @ha: HA context
  *
  * Note: The caller must hold the hardware lock before calling this routine.
  */
-static void
+void
 qla2x00_isp_cmd(struct scsi_qla_host *vha, struct req_que *req)
 {
 	struct qla_hw_data *ha = vha->hw;
@@ -597,6 +694,7 @@ qla2x00_isp_cmd(struct scsi_qla_host *vha, struct req_que *req)
 	}
 
 }
+EXPORT_SYMBOL(qla2x00_isp_cmd);
 
 /**
  * qla24xx_calc_iocbs() - Determine number of Command Type 3 and
@@ -1792,6 +1890,7 @@ skip_cmd_array:
 queuing_error:
 	return pkt;
 }
+EXPORT_SYMBOL(qla2x00_alloc_iocbs);
 
 static void
 qla2x00_start_iocbs(srb_t *sp)
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index 2516adf..90caf60 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -5,6 +5,7 @@
  * See LICENSE.qla2xxx for copyright and licensing details.
  */
 #include "qla_def.h"
+#include "qla_target.h"
 
 #include <linux/delay.h>
 #include <linux/slab.h>
@@ -214,6 +215,12 @@ qla2300_intr_handler(int irq, void *dev_id)
 			mb[2] = RD_MAILBOX_REG(ha, reg, 2);
 			qla2x00_async_event(vha, rsp, mb);
 			break;
+		case 0x17: /* FAST_CTIO_COMP */
+			mb[0] = MBA_CTIO_COMPLETION;
+			mb[1] = MSW(stat);
+			mb[2] = RD_MAILBOX_REG(ha, reg, 2);
+			qla2x00_async_event(vha, rsp, mb);
+			break;
 		default:
 			ql_dbg(ql_dbg_async, vha, 0x5028,
 			    "Unrecognized interrupt type (%d).\n", stat & 0xff);
@@ -334,6 +341,7 @@ qla2x00_async_event(scsi_qla_host_t *vha, struct rsp_que *rsp, uint16_t *mb)
 	if (IS_QLA8XXX_TYPE(ha))
 		goto skip_rio;
 	switch (mb[0]) {
+	case MBA_CTIO_COMPLETION:
 	case MBA_SCSI_COMPLETION:
 		handles[0] = le32_to_cpu((uint32_t)((mb[2] << 16) | mb[1]));
 		handle_cnt = 1;
@@ -395,6 +403,10 @@ skip_rio:
 				handles[cnt]);
 		break;
 
+	case MBA_CTIO_COMPLETION:
+		qla_tgt_ctio_completion(vha, handles[0]);
+		break;
+
 	case MBA_RESET:			/* Reset */
 		ql_dbg(ql_dbg_async, vha, 0x5002,
 		    "Asynchronous RESET.\n");
@@ -450,8 +462,10 @@ skip_rio:
 	case MBA_WAKEUP_THRES:		/* Request Queue Wake-up */
 		ql_dbg(ql_dbg_async, vha, 0x5008,
 		    "Asynchronous WAKEUP_THRES.\n");
-		break;
 
+		if (qla_tgt_mode_enabled(vha))
+			set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+		break;
 	case MBA_LIP_OCCURRED:		/* Loop Initialization Procedure */
 		ql_log(ql_log_info, vha, 0x5009,
 		    "LIP occurred (%x).\n", mb[1]);
@@ -665,6 +679,8 @@ skip_rio:
 			ql_dbg(ql_dbg_async, vha, 0x5011,
 			    "Asynchronous PORT UPDATE ignored %04x/%04x/%04x.\n",
 			    mb[1], mb[2], mb[3]);
+
+			qla_tgt_async_event(mb[0], vha, mb);
 			break;
 		}
 
@@ -683,6 +699,8 @@ skip_rio:
 
 		set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
 		set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+
+		qla_tgt_async_event(mb[0], vha, mb);
 		break;
 
 	case MBA_RSCN_UPDATE:		/* State Change Registration */
@@ -809,6 +827,8 @@ skip_rio:
 		break;
 	}
 
+	qla_tgt_async_event(mb[0], vha, mb);
+
 	if (!vha->vp_idx && ha->num_vhosts)
 		qla2x00_alert_all_vps(rsp, mb);
 }
@@ -825,6 +845,11 @@ qla2x00_process_completed_request(struct scsi_qla_host *vha,
 	srb_t *sp;
 	struct qla_hw_data *ha = vha->hw;
 
+	if (HANDLE_IS_CTIO_COMP(index)) {
+		qla_tgt_ctio_completion(vha, index);
+		return;
+	}
+
 	/* Validate handle. */
 	if (index >= MAX_OUTSTANDING_COMMANDS) {
 		ql_log(ql_log_warn, vha, 0x3014,
@@ -1341,12 +1366,25 @@ qla2x00_process_response_queue(struct rsp_que *rsp)
 			    "Process error entry.\n");
 
 			qla2x00_error_entry(vha, rsp, pkt);
+
+			if (qla_tgt_2xxx_process_response_error(vha, pkt) == 1)
+				break;
+
 			((response_t *)pkt)->signature = RESPONSE_PROCESSED;
 			wmb();
 			continue;
 		}
 
 		switch (pkt->entry_type) {
+		case ACCEPT_TGT_IO_TYPE:
+		case CONTINUE_TGT_IO_TYPE:
+		case CTIO_A64_TYPE:
+		case IMMED_NOTIFY_TYPE:
+		case NOTIFY_ACK_TYPE:
+		case ENABLE_LUN_TYPE:
+		case MODIFY_LUN_TYPE:
+			qla_tgt_response_pkt_all_vps(vha, (response_t *)pkt);
+			break;
 		case STATUS_TYPE:
 			qla2x00_status_entry(vha, rsp, pkt);
 			break;
@@ -1911,7 +1949,7 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt)
 	struct qla_hw_data *ha = vha->hw;
 	uint32_t handle = LSW(pkt->handle);
 	uint16_t que = MSW(pkt->handle);
-	struct req_que *req = ha->req_q_map[que];
+	struct req_que *req;
 
 	if (pkt->entry_status & RF_INV_E_ORDER)
 		ql_dbg(ql_dbg_async, vha, 0x502a,
@@ -1932,6 +1970,15 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt)
 		ql_dbg(ql_dbg_async, vha, 0x502f,
 		    "UNKNOWN flag error.\n");
 
+	if (que >= ha->max_req_queues) {
+		/* Target command with high bits of handle set */
+		qla_printk(KERN_ERR, ha, "%s: error entry, type 0x%x status 0x%x\n",
+			   __func__, pkt->entry_type, pkt->entry_status);
+		return;
+	}
+
+	req = ha->req_q_map[que];
+
 	/* Validate handle. */
 	if (handle < MAX_OUTSTANDING_COMMANDS)
 		sp = req->outstanding_cmds[handle];
@@ -1998,6 +2045,16 @@ qla24xx_mbx_completion(scsi_qla_host_t *vha, uint16_t mb0)
 		ql_dbg(ql_dbg_async, vha, 0x504e,
 		    "MBX pointer ERROR.\n");
 	}
+
+#if defined(QL_DEBUG_LEVEL_1)
+	printk(KERN_INFO "scsi(%ld): Mailbox registers:", vha->host_no);
+	for (cnt = 0; cnt < vha->mbx_count; cnt++) {
+		if ((cnt % 4) == 0)
+			printk(KERN_CONT "\n");
+		printk("mbox %02d: 0x%04x   ", cnt, ha->mailbox_out[cnt]);
+	}
+	printk(KERN_CONT "\n");
+#endif
 }
 
 /**
@@ -2029,6 +2086,10 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
 			    "Process error entry.\n");
 
 			qla2x00_error_entry(vha, rsp, (sts_entry_t *) pkt);
+
+			if (qla_tgt_24xx_process_response_error(vha, pkt) == 1)
+				break;
+
 			((response_t *)pkt)->signature = RESPONSE_PROCESSED;
 			wmb();
 			continue;
@@ -2060,6 +2121,13 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
                 case ELS_IOCB_TYPE:
 			qla24xx_els_ct_entry(vha, rsp->req, pkt, ELS_IOCB_TYPE);
 			break;
+		case ABTS_RECV_24XX:
+			/* ensure that the ATIO queue is empty */
+			qla_tgt_24xx_process_atio_queue(vha);
+		case ABTS_RESP_24XX:
+		case CTIO_TYPE7:
+		case NOTIFY_ACK_TYPE:
+			qla_tgt_response_pkt_all_vps(vha, (response_t *)pkt);
+			break;
 		case MARKER_TYPE:
 			/* Do nothing in this case, this check is to prevent it
 			 * from falling into default case
@@ -2212,6 +2280,13 @@ qla24xx_intr_handler(int irq, void *dev_id)
 		case 0x14:
 			qla24xx_process_response_queue(vha, rsp);
 			break;
+		case 0x1C: /* ATIO queue updated */
+			qla_tgt_24xx_process_atio_queue(vha);
+			break;
+		case 0x1D: /* ATIO and response queues updated */
+			qla_tgt_24xx_process_atio_queue(vha);
+			qla24xx_process_response_queue(vha, rsp);
+			break;
 		default:
 			ql_dbg(ql_dbg_async, vha, 0x504f,
 			    "Unrecognized interrupt type (%d).\n", stat * 0xff);
@@ -2356,6 +2431,13 @@ qla24xx_msix_default(int irq, void *dev_id)
 		case 0x14:
 			qla24xx_process_response_queue(vha, rsp);
 			break;
+		case 0x1C: /* ATIO queue updated */
+			qla_tgt_24xx_process_atio_queue(vha);
+			break;
+		case 0x1D: /* ATIO and response queues updated */
+			qla_tgt_24xx_process_atio_queue(vha);
+			qla24xx_process_response_queue(vha, rsp);
+			break;
 		default:
 			ql_dbg(ql_dbg_async, vha, 0x5051,
 			    "Unrecognized interrupt type (%d).\n", stat & 0xff);
diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
index 3b3cec9..7937c1d 100644
--- a/drivers/scsi/qla2xxx/qla_mbx.c
+++ b/drivers/scsi/qla2xxx/qla_mbx.c
@@ -5,6 +5,7 @@
  * See LICENSE.qla2xxx for copyright and licensing details.
  */
 #include "qla_def.h"
+#include "qla_target.h"
 
 #include <linux/delay.h>
 #include <linux/gfp.h>
@@ -1170,6 +1171,99 @@ qla2x00_init_firmware(scsi_qla_host_t *vha, uint16_t size)
 }
 
 /*
+ * qla2x00_get_node_name_list
+ *      Issue get node name list mailbox command, kmalloc()
+ *      and return the resulting list. Caller must kfree() it!
+ *
+ * Input:
+ *      ha = adapter state pointer.
+ *      out_data = resulting list
+ *      out_len = length of the resulting list
+ *
+ * Returns:
+ *      qla2x00 local function return status code.
+ *
+ * Context:
+ *      Kernel context.
+ */
+int
+qla2x00_get_node_name_list(scsi_qla_host_t *vha, void **out_data, int *out_len)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_port_24xx_data *list = NULL;
+	void *pmap;
+	mbx_cmd_t mc;
+	dma_addr_t pmap_dma;
+	ulong dma_size;
+	int rval, left;
+
+	BUILD_BUG_ON(sizeof(struct qla_port_24xx_data) <
+			sizeof(struct qla_port_2xxx_data));
+
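+	/*
+	 * Start with room for a single entry; when the firmware fails the
+	 * command with MBS_COMMAND_ERROR and mb[1] == 0xA, mb[2] reports
+	 * how much more space is required, and the loop below retries with
+	 * a larger allocation.
+	 */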
+	left = 1;
+	while (left > 0) {
+		dma_size = left * sizeof(*list);
+		pmap = dma_alloc_coherent(&ha->pdev->dev, dma_size,
+					 &pmap_dma, GFP_KERNEL);
+		if (!pmap) {
+			printk(KERN_ERR "%s(%ld): DMA alloc of %ld bytes "
+				"failed\n", __func__, vha->host_no, dma_size);
+			rval = QLA_MEMORY_ALLOC_FAILED;
+			goto out;
+		}
+
+		mc.mb[0] = MBC_PORT_NODE_NAME_LIST;
+		mc.mb[1] = BIT_1 | BIT_3;
+		mc.mb[2] = MSW(pmap_dma);
+		mc.mb[3] = LSW(pmap_dma);
+		mc.mb[6] = MSW(MSD(pmap_dma));
+		mc.mb[7] = LSW(MSD(pmap_dma));
+		mc.mb[8] = dma_size;
+		mc.out_mb = MBX_0|MBX_1|MBX_2|MBX_3|MBX_6|MBX_7|MBX_8;
+		mc.in_mb = MBX_0|MBX_1;
+		mc.tov = 30;
+		mc.flags = MBX_DMA_IN;
+
+		rval = qla2x00_mailbox_command(vha, &mc);
+		if (rval != QLA_SUCCESS) {
+			if ((mc.mb[0] == MBS_COMMAND_ERROR) &&
+			    (mc.mb[1] == 0xA)) {
+				if (IS_FWI2_CAPABLE(ha))
+					left += le16_to_cpu(mc.mb[2]) / sizeof(struct qla_port_24xx_data);
+				else
+					left += le16_to_cpu(mc.mb[2]) / sizeof(struct qla_port_2xxx_data);
+				goto restart;
+			}
+			goto out_free;
+		}
+
+		left = 0;
+
+		list = kzalloc(dma_size, GFP_KERNEL);
+		if (!list) {
+			printk(KERN_ERR "%s(%ld): failed to allocate node names"
+				" list structure.\n", __func__, vha->host_no);
+			rval = QLA_MEMORY_ALLOC_FAILED;
+			goto out_free;
+		}
+
+		memcpy(list, pmap, dma_size);
+restart:
+		dma_free_coherent(&ha->pdev->dev, dma_size, pmap, pmap_dma);
+	}
+
+	*out_data = list;
+	*out_len = dma_size;
+
+out:
+	return rval;
+
+out_free:
+	dma_free_coherent(&ha->pdev->dev, dma_size, pmap, pmap_dma);
+	return rval;
+}
+
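+/*
+ * Example caller pattern for qla2x00_get_node_name_list() (a sketch,
+ * not taken from this patch):
+ *
+ *	void *pmap;
+ *	int pmap_len;
+ *
+ *	if (qla2x00_get_node_name_list(vha, &pmap, &pmap_len) == QLA_SUCCESS) {
+ *		struct qla_port_24xx_data *pd = pmap;
+ *		... walk pmap_len / sizeof(*pd) entries ...
+ *		kfree(pmap);
+ *	}
+ */
+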
+/*
  * qla2x00_get_port_database
  *	Issue normal/enhanced get port database mailbox command
  *	and copy device name as necessary.
@@ -1263,10 +1357,17 @@ qla2x00_get_port_database(scsi_qla_host_t *vha, fc_port_t *fcport, uint8_t opt)
 		fcport->d_id.b.rsvd_1 = 0;
 
 		/* If not target must be initiator or unknown type. */
-		if ((pd24->prli_svc_param_word_3[0] & BIT_4) == 0)
-			fcport->port_type = FCT_INITIATOR;
-		else
+		if ((pd24->prli_svc_param_word_3[0] & BIT_4))
 			fcport->port_type = FCT_TARGET;
+		else if ((pd24->prli_svc_param_word_3[0] & BIT_5))
+			fcport->port_type = FCT_INITIATOR;
+
+		/* Passback COS information. */
+		fcport->supported_classes = (pd24->flags & PDF_CLASS_2) ?
+				FC_COS_CLASS2 : FC_COS_CLASS3;
+
+		if (pd24->prli_svc_param_word_3[0] & BIT_7)
+			fcport->conf_compl_supported = 1;
 	} else {
 		/* Check for logged in state. */
 		if (pd->master_state != PD_STATE_PORT_LOGGED_IN &&
@@ -1291,14 +1392,17 @@ qla2x00_get_port_database(scsi_qla_host_t *vha, fc_port_t *fcport, uint8_t opt)
 		fcport->d_id.b.rsvd_1 = 0;
 
 		/* If not target must be initiator or unknown type. */
-		if ((pd->prli_svc_param_word_3[0] & BIT_4) == 0)
-			fcport->port_type = FCT_INITIATOR;
-		else
+		if ((pd24->prli_svc_param_word_3[0] & BIT_4))
 			fcport->port_type = FCT_TARGET;
+		else if ((pd24->prli_svc_param_word_3[0] & BIT_5))
+			fcport->port_type = FCT_INITIATOR;
 
 		/* Passback COS information. */
 		fcport->supported_classes = (pd->options & BIT_4) ?
 		    FC_COS_CLASS2: FC_COS_CLASS3;
+
+		if (pd->prli_svc_param_word_3[0] & BIT_7)
+			fcport->conf_compl_supported = 1;
 	}
 
 gpd_error_out:
@@ -1314,6 +1418,7 @@ gpd_error_out:
 
 	return rval;
 }
+EXPORT_SYMBOL(qla2x00_get_port_database);
 
 /*
  * qla2x00_get_firmware_state
@@ -1663,6 +1768,8 @@ qla24xx_login_fabric(scsi_qla_host_t *vha, uint16_t loop_id, uint8_t domain,
 			mb[10] |= BIT_0;	/* Class 2. */
 		if (lg->io_parameter[9] || lg->io_parameter[10])
 			mb[10] |= BIT_1;	/* Class 3. */
+		if (lg->io_parameter[0] & __constant_cpu_to_le32(BIT_7))
+			mb[10] |= BIT_7;	/* Confirmed Completion Allowed */
 	}
 
 	dma_pool_free(ha->s_dma_pool, lg, lg_dma);
@@ -2943,6 +3050,9 @@ qla24xx_modify_vp_config(scsi_qla_host_t *vha)
 	vpmod->vp_count = 1;
 	vpmod->vp_index1 = vha->vp_idx;
 	vpmod->options_idx1 = BIT_3|BIT_4|BIT_5;
+
+	qla_tgt_modify_vp_config(vha, vpmod);
+
 	memcpy(vpmod->node_name_idx1, vha->node_name, WWN_SIZE);
 	memcpy(vpmod->port_name_idx1, vha->port_name, WWN_SIZE);
 	vpmod->entry_count = 1;
diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
index f488cc6..4ada731 100644
--- a/drivers/scsi/qla2xxx/qla_mid.c
+++ b/drivers/scsi/qla2xxx/qla_mid.c
@@ -6,6 +6,7 @@
  */
 #include "qla_def.h"
 #include "qla_gbl.h"
+#include "qla_target.h"
 
 #include <linux/moduleparam.h>
 #include <linux/vmalloc.h>
@@ -49,6 +50,7 @@ qla24xx_allocate_vp_id(scsi_qla_host_t *vha)
 
 	spin_lock_irqsave(&ha->vport_slock, flags);
 	list_add_tail(&vha->list, &ha->vp_list);
+	ha->tgt_vp_map[vp_id].vha = vha;
 	spin_unlock_irqrestore(&ha->vport_slock, flags);
 
 	mutex_unlock(&ha->vport_lock);
@@ -79,6 +81,7 @@ qla24xx_deallocate_vp_id(scsi_qla_host_t *vha)
 		spin_lock_irqsave(&ha->vport_slock, flags);
 	}
 	list_del(&vha->list);
+	ha->tgt_vp_map[vha->vp_idx].vha = NULL;
 	spin_unlock_irqrestore(&ha->vport_slock, flags);
 
 	vp_id = vha->vp_idx;
@@ -144,12 +147,16 @@ qla2x00_mark_vp_devices_dead(scsi_qla_host_t *vha)
 int
 qla24xx_disable_vp(scsi_qla_host_t *vha)
 {
+	struct qla_hw_data *ha = vha->hw;
 	int ret;
 
 	ret = qla24xx_control_vp(vha, VCE_COMMAND_DISABLE_VPS_LOGO_ALL);
 	atomic_set(&vha->loop_state, LOOP_DOWN);
 	atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
 
+	/* Remove port id from vp target map */
+	ha->tgt_vp_map[vha->d_id.b.al_pa].idx = 0;
+
 	qla2x00_mark_vp_devices_dead(vha);
 	atomic_set(&vha->vp_state, VP_FAILED);
 	vha->flags.management_server_logged_in = 0;
@@ -267,6 +274,8 @@ qla2x00_alert_all_vps(struct rsp_que *rsp, uint16_t *mb)
 int
 qla2x00_vp_abort_isp(scsi_qla_host_t *vha)
 {
+	int ret;
+
 	/*
 	 * Physical port will do most of the abort and recovery work. We can
 	 * just treat it as a loop down
@@ -288,8 +297,16 @@ qla2x00_vp_abort_isp(scsi_qla_host_t *vha)
 		qla24xx_control_vp(vha, VCE_COMMAND_DISABLE_VPS_LOGO_ALL);
 
 	ql_dbg(ql_dbg_taskm, vha, 0x801d,
-	    "Scheduling enable of Vport %d.\n", vha->vp_idx);
-	return qla24xx_enable_vp(vha);
+		"Scheduling enable of Vport %d.\n", vha->vp_idx);
+	ret = qla24xx_enable_vp(vha);
+	if (ret)
+		return ret;
+
+	/* Enable target response to SCSI bus. */
+	if (qla_tgt_mode_enabled(vha))
+		qla_tgt_2xxx_send_enable_lun(vha, true);
+
+	return 0;
 }
 
 static int
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index fd14c7b..b1a2444 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -4,8 +4,6 @@
  *
  * See LICENSE.qla2xxx for copyright and licensing details.
  */
-#include "qla_def.h"
-
 #include <linux/moduleparam.h>
 #include <linux/vmalloc.h>
 #include <linux/delay.h>
@@ -13,12 +11,15 @@
 #include <linux/mutex.h>
 #include <linux/kobject.h>
 #include <linux/slab.h>
-
+#include <linux/workqueue.h>
 #include <scsi/scsi_tcq.h>
 #include <scsi/scsicam.h>
 #include <scsi/scsi_transport.h>
 #include <scsi/scsi_transport_fc.h>
 
+#include "qla_def.h"
+#include "qla_target.h"
+
 /*
  * Driver version
  */
@@ -40,6 +41,12 @@ static struct kmem_cache *ctx_cachep;
  */
 int ql_errlev = ql_log_all;
 
+int ql2xenableclass2;
+module_param(ql2xenableclass2, int, S_IRUGO|S_IRUSR);
+MODULE_PARM_DESC(ql2xenableclass2,
+		"Specify if Class 2 operations are supported from the very "
+		"beginning.");
+
 int ql2xlogintimeout = 20;
 module_param(ql2xlogintimeout, int, S_IRUGO);
 MODULE_PARM_DESC(ql2xlogintimeout,
@@ -252,6 +259,8 @@ struct scsi_host_template qla2xxx_driver_template = {
 
 	.max_sectors		= 0xFFFF,
 	.shost_attrs		= qla2x00_host_attrs,
+
+	.supported_mode		= MODE_INITIATOR | MODE_TARGET,
 };
 
 static struct scsi_transport_template *qla2xxx_transport_template = NULL;
@@ -830,7 +839,7 @@ qla2x00_wait_for_chip_reset(scsi_qla_host_t *vha)
  *    Success (LOOP_READY) : 0
  *    Failed  (LOOP_NOT_READY) : 1
  */
-static inline int
+static int
 qla2x00_wait_for_loop_ready(scsi_qla_host_t *vha)
 {
 	int 	 return_status = QLA_SUCCESS;
@@ -863,6 +872,38 @@ sp_get(struct srb *sp)
 	atomic_inc(&sp->ref_count);
 }
 
+void
+qla2xxx_abort_fcport_cmds(fc_port_t *fcport)
+{
+	scsi_qla_host_t *vha = fcport->vha;
+	struct qla_hw_data *ha = vha->hw;
+	srb_t *sp;
+	unsigned long flags;
+	int cnt;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	for (cnt = 1; cnt < MAX_OUTSTANDING_COMMANDS; cnt++) {
+		sp = vha->req->outstanding_cmds[cnt];
+		if (!sp)
+			continue;
+		if (sp->fcport != fcport)
+			continue;
+
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		if (ha->isp_ops->abort_command(sp)) {
+			ql_dbg(ql_dbg_taskm, vha, 0x8010,
+				"Abort failed --  %lx\n", sp->cmd->serial_number);
+		} else {
+			if (qla2x00_eh_wait_on_command(sp->cmd) != QLA_SUCCESS)
+				ql_dbg(ql_dbg_taskm, vha, 0x8011,
+					"Abort failed while waiting --  %lx\n",
+					sp->cmd->serial_number);
+		}
+		spin_lock_irqsave(&ha->hardware_lock, flags);
+	}
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
 /**************************************************************************
 * qla2xxx_eh_abort
 *
@@ -2078,6 +2119,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	ql_dbg_pci(ql_dbg_init, pdev, 0x000a,
 	    "Memory allocated for ha=%p.\n", ha);
 	ha->pdev = pdev;
+	ha->enable_class_2 = ql2xenableclass2;
 
 	/* Clear our data area */
 	ha->bars = bars;
@@ -2148,6 +2190,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		ha->mbx_count = MAILBOX_REGISTER_COUNT;
 		req_length = REQUEST_ENTRY_CNT_24XX;
 		rsp_length = RESPONSE_ENTRY_CNT_2300;
+		ha->atio_q_length = ATIO_ENTRY_CNT_24XX;
 		ha->max_loop_id = SNS_LAST_LOOP_ID_2300;
 		ha->init_cb_size = sizeof(struct mid_init_cb_24xx);
 		ha->gid_list_info_size = 8;
@@ -2162,6 +2205,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		ha->mbx_count = MAILBOX_REGISTER_COUNT;
 		req_length = REQUEST_ENTRY_CNT_24XX;
 		rsp_length = RESPONSE_ENTRY_CNT_2300;
+		ha->atio_q_length = ATIO_ENTRY_CNT_24XX;
 		ha->max_loop_id = SNS_LAST_LOOP_ID_2300;
 		ha->init_cb_size = sizeof(struct mid_init_cb_24xx);
 		ha->gid_list_info_size = 8;
@@ -2293,6 +2337,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	    host->max_cmd_len, host->max_channel, host->max_lun,
 	    host->transportt, sht->vendor_id);
 
+	qla_tgt_probe_one_stage1(base_vha, ha);
+
 	/* Set up the irqs */
 	ret = qla2x00_request_irqs(ha, rsp);
 	if (ret)
@@ -2390,6 +2436,14 @@ que_init:
 	ql_dbg(ql_dbg_init, base_vha, 0x00ee,
 	    "DPC thread started successfully.\n");
 
+	/*
+	 * If we're not coming up in initiator mode, we might sit for
+	 * a while without waking up the dpc thread, which leads to a
+	 * stuck process warning.  So just kick the dpc once here and
+	 * let the kthread start (and go back to sleep in qla2x00_do_dpc).
+	 */
+	qla2xxx_wake_dpc(base_vha);
+
 skip_dpc:
 	list_add_tail(&base_vha->list, &ha->vp_list);
 	base_vha->host->irq = ha->pdev->irq;
@@ -2435,7 +2489,10 @@ skip_dpc:
 	ql_dbg(ql_dbg_init, base_vha, 0x00f2,
 	    "Init done and hba is online.\n");
 
-	scsi_scan_host(host);
+	if (qla_ini_mode_enabled(base_vha))
+		scsi_scan_host(host);
+	else
+		qla_printk(KERN_INFO, ha, "skipping scsi_scan_host() for non-initiator port\n");
 
 	qla2x00_alloc_sysfs_attr(base_vha);
 
@@ -2456,6 +2513,8 @@ skip_dpc:
 	    base_vha->host_no,
 	    ha->isp_ops->fw_version_str(base_vha, fw_str));
 
+	qla_tgt_add_target(ha, base_vha);
+
 	return 0;
 
 probe_init_failed:
@@ -2536,15 +2595,33 @@ qla2x00_shutdown(struct pci_dev *pdev)
 }
 
 static void
+qla2x00_stop_dpc_thread(scsi_qla_host_t *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct task_struct *t = ha->dpc_thread;
+
+	if (ha->dpc_thread == NULL)
+		return;
+	/*
+	 * qla2xxx_wake_dpc checks for ->dpc_thread
+	 * so we need to zero it out.
+	 */
+	ha->dpc_thread = NULL;
+	kthread_stop(t);
+}
+
+static void
 qla2x00_remove_one(struct pci_dev *pdev)
 {
 	scsi_qla_host_t *base_vha, *vha;
-	struct qla_hw_data  *ha;
+	struct qla_hw_data *ha;
 	unsigned long flags;
 
 	base_vha = pci_get_drvdata(pdev);
 	ha = base_vha->hw;
 
+	ha->host_shutting_down = 1;
+
 	mutex_lock(&ha->vport_lock);
 	while (ha->cur_vport_count) {
 		struct Scsi_Host *scsi_host;
@@ -2598,6 +2675,7 @@ qla2x00_remove_one(struct pci_dev *pdev)
 		ha->dpc_thread = NULL;
 		kthread_stop(t);
 	}
+	qla_tgt_remove_target(ha, base_vha);
 
 	qla2x00_free_sysfs_attr(base_vha);
 
@@ -2646,17 +2724,7 @@ qla2x00_free_device(scsi_qla_host_t *vha)
 	if (vha->timer_active)
 		qla2x00_stop_timer(vha);
 
-	/* Kill the kernel thread for this host */
-	if (ha->dpc_thread) {
-		struct task_struct *t = ha->dpc_thread;
-
-		/*
-		 * qla2xxx_wake_dpc checks for ->dpc_thread
-		 * so we need to zero it out.
-		 */
-		ha->dpc_thread = NULL;
-		kthread_stop(t);
-	}
+	qla2x00_stop_dpc_thread(vha);
 
 	qla25xx_delete_queues(vha);
 
@@ -2822,10 +2890,13 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len,
 	if (!ha->init_cb)
 		goto fail;
 
+	if (qla_tgt_mem_alloc(ha) < 0)
+		goto fail_free_init_cb;
+
 	ha->gid_list = dma_alloc_coherent(&ha->pdev->dev, GID_LIST_SIZE,
 		&ha->gid_list_dma, GFP_KERNEL);
 	if (!ha->gid_list)
-		goto fail_free_init_cb;
+		goto fail_free_tgt_mem;
 
 	ha->srb_mempool = mempool_create_slab_pool(SRB_MIN_REQ, srb_cachep);
 	if (!ha->srb_mempool)
@@ -3042,6 +3113,8 @@ fail_free_gid_list:
 	ha->gid_list_dma);
 	ha->gid_list = NULL;
 	ha->gid_list_dma = 0;
+fail_free_tgt_mem:
+	qla_tgt_mem_free(ha);
 fail_free_init_cb:
 	dma_free_coherent(&ha->pdev->dev, ha->init_cb_size, ha->init_cb,
 	ha->init_cb_dma);
@@ -3160,6 +3233,8 @@ qla2x00_mem_free(struct qla_hw_data *ha)
 	if (ha->ctx_mempool)
 		mempool_destroy(ha->ctx_mempool);
 
+	qla_tgt_mem_free(ha);
+
 	if (ha->init_cb)
 		dma_free_coherent(&ha->pdev->dev, ha->init_cb_size,
 			ha->init_cb, ha->init_cb_dma);
@@ -3188,6 +3263,10 @@ qla2x00_mem_free(struct qla_hw_data *ha)
 
 	ha->gid_list = NULL;
 	ha->gid_list_dma = 0;
+
+	ha->atio_ring = NULL;
+	ha->atio_dma = 0;
+	ha->tgt_vp_map = NULL;
 }
 
 struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
@@ -4387,6 +4466,13 @@ qla2x00_module_init(void)
 		return -ENOMEM;
 	}
 
+	/* Initialize target kmem_cache and mem_pools */
+	ret = qla_tgt_init();
+	if (ret < 0) {
+		kmem_cache_destroy(srb_cachep);
+		return ret;
+	}
+
 	/* Derive version string. */
 	strcpy(qla2x00_version_str, QLA2XXX_VERSION);
 	if (ql2xextended_error_logging)
@@ -4398,6 +4484,7 @@ qla2x00_module_init(void)
 		kmem_cache_destroy(srb_cachep);
 		ql_log(ql_log_fatal, NULL, 0x0002,
 		    "fc_attach_transport failed...Failing load!.\n");
+		qla_tgt_exit();
 		return -ENODEV;
 	}
 
@@ -4411,6 +4498,7 @@ qla2x00_module_init(void)
 	    fc_attach_transport(&qla2xxx_transport_vport_functions);
 	if (!qla2xxx_transport_vport_template) {
 		kmem_cache_destroy(srb_cachep);
+		qla_tgt_exit();
 		fc_release_transport(qla2xxx_transport_template);
 		ql_log(ql_log_fatal, NULL, 0x0004,
 		    "fc_attach_transport vport failed...Failing load!.\n");
@@ -4422,6 +4510,7 @@ qla2x00_module_init(void)
 	ret = pci_register_driver(&qla2xxx_pci_driver);
 	if (ret) {
 		kmem_cache_destroy(srb_cachep);
+		qla_tgt_exit();
 		fc_release_transport(qla2xxx_transport_template);
 		fc_release_transport(qla2xxx_transport_vport_template);
 		ql_log(ql_log_fatal, NULL, 0x0006,
@@ -4441,6 +4530,7 @@ qla2x00_module_exit(void)
 	pci_unregister_driver(&qla2xxx_pci_driver);
 	qla2x00_release_firmware();
 	kmem_cache_destroy(srb_cachep);
+	qla_tgt_exit();
 	if (ctx_cachep)
 		kmem_cache_destroy(ctx_cachep);
 	fc_release_transport(qla2xxx_transport_template);
-- 
1.7.2.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target
  2011-12-18  2:02 [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Nicholas A. Bellinger
  2011-12-18  2:02 ` [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support Nicholas A. Bellinger
  2011-12-18  2:02 ` [RFC-v4 2/3] qla2xxx: Enable 2xxx series LLD target mode support Nicholas A. Bellinger
@ 2011-12-18  2:02 ` Nicholas A. Bellinger
  2011-12-22  8:10   ` Roland Dreier
  2011-12-21 17:11 ` [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Christoph Hellwig
  3 siblings, 1 reply; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-18  2:02 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: Andrew Vasquez, Giridhar Malavali, Christoph Hellwig,
	James Bottomley, Roland Dreier, Joern Engel, Madhuranath Iyengar,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds support for tcm_qla2xxx fabric module code using the
new qla_target.c LLD logic.  This includes support for explicit NodeACLs
via configfs using tcm_qla2xxx_setup_nacl_from_rport() from libfc
struct fc_host->rports, and demo-mode support for virtual LUN=0 access.

This patch also adds support for using tcm_qla2xxx_lport->lport_fcport_map
and ->lport_loopid_map to track struct se_node_acl pointers for individual
24-bit Port ID and 16-bit Loop ID values for qla_target_template
->find_sess_by_s_id() and ->find_sess_by_loop_id() used in a number of
locations into the primary I/O dispatch logic in qla_target.c LLD code.

The main piece for FC Nexus setup is in tcm_qla2xxx_check_initiator_node_acl(),
which calls tcm_qla2xxx_set_sess_by_[s_id,loop_id]() to setup our
lport->lport_fcport_map and lport_loopid_map pointers respectively, and
register the new nexus with TCM via __transport_register_session().
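
To illustrate the map layout, here is a minimal sketch of the 24-bit Port ID
lookup (derived from the tcm_qla2xxx.c code below; the helper name
lookup_by_s_id is hypothetical, as the patch open-codes this pattern):

	static struct se_node_acl *lookup_by_s_id(
		struct tcm_qla2xxx_lport *lport, u32 port_id)
	{
		u8 domain = (port_id >> 16) & 0xff;
		u8 area = (port_id >> 8) & 0xff;
		u8 al_pa = port_id & 0xff;
		struct tcm_qla2xxx_fc_domain *d =
			&((struct tcm_qla2xxx_fc_domain *)
				lport->lport_fcport_map)[domain];

		/* Walk domain -> area -> al_pa to the cached nacl */
		return d->areas[area].al_pas[al_pa].se_nacl;
	}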

Cc: Andrew Vasquez <andrew.vasquez@qlogic.com>
Cc: Giridhar Malavali <giridhar.malavali@qlogic.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Bottomley <JBottomley@Parallels.com>
Cc: Roland Dreier <roland@purestorage.com>
Cc: Joern Engel <joern@logfs.org>
Cc: Madhuranath Iyengar <mni@risingtidesystems.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/scsi/qla2xxx/Kconfig       |    8 +
 drivers/scsi/qla2xxx/Makefile      |    1 +
 drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2059 ++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/tcm_qla2xxx.h |  148 +++
 4 files changed, 2216 insertions(+), 0 deletions(-)
 create mode 100644 drivers/scsi/qla2xxx/tcm_qla2xxx.c
 create mode 100644 drivers/scsi/qla2xxx/tcm_qla2xxx.h

diff --git a/drivers/scsi/qla2xxx/Kconfig b/drivers/scsi/qla2xxx/Kconfig
index 6208d56..6c3ce52 100644
--- a/drivers/scsi/qla2xxx/Kconfig
+++ b/drivers/scsi/qla2xxx/Kconfig
@@ -25,3 +25,11 @@ config SCSI_QLA_FC
 	Firmware images can be retrieved from:
 
 		ftp://ftp.qlogic.com/outgoing/linux/firmware/
+
+config TCM_QLA2XXX
+	tristate "TCM_QLA2XXX fabric module for Qlogic 2xxx series target mode HBAs"
+	depends on SCSI_QLA_FC && TARGET_CORE
+	select LIBFC
+	default n
+	---help---
+	Say Y here to enable the TCM_QLA2XXX fabric module for QLogic 2xxx series target mode HBAs.
diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
index 702931ff..dce7d78 100644
--- a/drivers/scsi/qla2xxx/Makefile
+++ b/drivers/scsi/qla2xxx/Makefile
@@ -3,3 +3,4 @@ qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
         qla_nx.o qla_target.o
 
 obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
+obj-$(CONFIG_TCM_QLA2XXX) += tcm_qla2xxx.o
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
new file mode 100644
index 0000000..dd73378
--- /dev/null
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -0,0 +1,2059 @@
+/*******************************************************************************
+ * This file contains tcm implementation using v4 configfs fabric infrastructure
+ * for QLogic target mode HBAs
+ *
+ * (c) Copyright 2010-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@risingtidesystems.com>
+ *
+ * tcm_qla2xxx_parse_wwn() and tcm_qla2xxx_format_wwn() contains code from
+ * the TCM_FC / Open-FCoE.org fabric module.
+ *
+ * Copyright (c) 2010 Cisco Systems, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ****************************************************************************/
+
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <generated/utsrelease.h>
+#include <linux/utsname.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/kthread.h>
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/configfs.h>
+#include <linux/ctype.h>
+#include <asm/unaligned.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_cmnd.h>
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+#include <target/target_core_fabric_configfs.h>
+#include <target/target_core_configfs.h>
+#include <target/configfs_macros.h>
+
+#include "qla_def.h"
+#include "qla_target.h"
+#include "tcm_qla2xxx.h"
+
+extern struct workqueue_struct *tcm_qla2xxx_free_wq;
+extern struct workqueue_struct *tcm_qla2xxx_cmd_wq;
+
+int tcm_qla2xxx_check_true(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+int tcm_qla2xxx_check_false(struct se_portal_group *se_tpg)
+{
+	return 0;
+}
+
+/*
+ * Parse WWN.
+ * If strict, we require lower-case hex and colon separators to be sure
+ * the name is the same as what would be generated by ft_format_wwn()
+ * so the name and wwn are mapped one-to-one.
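+ * Example of the strict form: "50:06:01:60:ba:20:1d:e6" (eight
+ * colon-separated, lower-case hex bytes; example value).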
+ */
+ssize_t tcm_qla2xxx_parse_wwn(const char *name, u64 *wwn, int strict)
+{
+	const char *cp;
+	char c;
+	u32 nibble;
+	u32 byte = 0;
+	u32 pos = 0;
+	u32 err;
+
+	*wwn = 0;
+	for (cp = name; cp < &name[TCM_QLA2XXX_NAMELEN - 1]; cp++) {
+		c = *cp;
+		if (c == '\n' && cp[1] == '\0')
+			continue;
+		if (strict && pos++ == 2 && byte++ < 7) {
+			pos = 0;
+			if (c == ':')
+				continue;
+			err = 1;
+			goto fail;
+		}
+		if (c == '\0') {
+			err = 2;
+			if (strict && byte != 8)
+				goto fail;
+			return cp - name;
+		}
+		err = 3;
+		if (isdigit(c))
+			nibble = c - '0';
+		else if (isxdigit(c) && (islower(c) || !strict))
+			nibble = tolower(c) - 'a' + 10;
+		else
+			goto fail;
+		*wwn = (*wwn << 4) | nibble;
+	}
+	err = 4;
+fail:
+	pr_debug("err %u len %zu pos %u byte %u\n",
+			err, cp - name, pos, byte);
+	return -1;
+}
+
+ssize_t tcm_qla2xxx_format_wwn(char *buf, size_t len, u64 wwn)
+{
+	u8 b[8];
+
+	put_unaligned_be64(wwn, b);
+	return snprintf(buf, len,
+		"%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x",
+		b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7]);
+}
+
+char *tcm_qla2xxx_get_fabric_name(void)
+{
+	return "qla2xxx";
+}
+
+/*
+ * From drivers/scsi/scsi_transport_fc.c:fc_parse_wwn
+ */
+static int tcm_qla2xxx_npiv_extract_wwn(const char *ns, u64 *nm)
+{
+	unsigned int i, j;
+	int value;
+	u8 wwn[8];
+
+	memset(wwn, 0, sizeof(wwn));
+
+	/* Validate and store the new name */
+	for (i = 0, j = 0; i < 16; i++) {
+		value = hex_to_bin(*ns++);
+		if (value >= 0)
+			j = (j << 4) | value;
+		else
+			return -EINVAL;
+
+		if (i % 2) {
+			wwn[i/2] = j & 0xff;
+			j = 0;
+		}
+	}
+
+	*nm = wwn_to_u64(wwn);
+	return 0;
+}
+
+/*
+ * This parsing logic follows drivers/scsi/scsi_transport_fc.c:store_fc_host_vport_create()
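+ * Expected input: "<16 hex digit WWPN>:<16 hex digit WWNN>", e.g.
+ * "2100001b32017de2:2000001b32017de2" (example values).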
+ */
+int tcm_qla2xxx_npiv_parse_wwn(
+	const char *name,
+	size_t count,
+	u64 *wwpn,
+	u64 *wwnn)
+{
+	unsigned int cnt = count;
+	int rc;
+
+	*wwpn = 0;
+	*wwnn = 0;
+
+	/* count may include a LF at end of string */
+	if (name[cnt-1] == '\n')
+		cnt--;
+
+	/* validate we have enough characters for WWPN */
+	if ((cnt != (16+1+16)) || (name[16] != ':'))
+		return -EINVAL;
+
+	rc = tcm_qla2xxx_npiv_extract_wwn(&name[0], wwpn);
+	if (rc != 0)
+		return rc;
+
+	rc = tcm_qla2xxx_npiv_extract_wwn(&name[17], wwnn);
+	if (rc != 0)
+		return rc;
+
+	return 0;
+}
+
+ssize_t tcm_qla2xxx_npiv_format_wwn(char *buf, size_t len, u64 wwpn, u64 wwnn)
+{
+	u8 b[8], b2[8];
+
+	put_unaligned_be64(wwpn, b);
+	put_unaligned_be64(wwnn, b2);
+	return snprintf(buf, len,
+		"%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x,"
+		"%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x",
+		b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7],
+		b2[0], b2[1], b2[2], b2[3], b2[4], b2[5], b2[6], b2[7]);
+}
+
+char *tcm_qla2xxx_npiv_get_fabric_name(void)
+{
+	return "qla2xxx_npiv";
+}
+
+u8 tcm_qla2xxx_get_fabric_proto_ident(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+	u8 proto_id;
+
+	switch (lport->lport_proto_id) {
+	case SCSI_PROTOCOL_FCP:
+	default:
+		proto_id = fc_get_fabric_proto_ident(se_tpg);
+		break;
+	}
+
+	return proto_id;
+}
+
+char *tcm_qla2xxx_get_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+
+	return &lport->lport_name[0];
+}
+
+char *tcm_qla2xxx_npiv_get_fabric_wwn(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+
+	return &lport->lport_npiv_name[0];
+}
+
+u16 tcm_qla2xxx_get_tag(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	return tpg->lport_tpgt;
+}
+
+u32 tcm_qla2xxx_get_default_depth(struct se_portal_group *se_tpg)
+{
+	return 1;
+}
+
+u32 tcm_qla2xxx_get_pr_transport_id(
+	struct se_portal_group *se_tpg,
+	struct se_node_acl *se_nacl,
+	struct t10_pr_registration *pr_reg,
+	int *format_code,
+	unsigned char *buf)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+	int ret = 0;
+
+	switch (lport->lport_proto_id) {
+	case SCSI_PROTOCOL_FCP:
+	default:
+		ret = fc_get_pr_transport_id(se_tpg, se_nacl, pr_reg,
+					format_code, buf);
+		break;
+	}
+
+	return ret;
+}
+
+u32 tcm_qla2xxx_get_pr_transport_id_len(
+	struct se_portal_group *se_tpg,
+	struct se_node_acl *se_nacl,
+	struct t10_pr_registration *pr_reg,
+	int *format_code)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+	int ret = 0;
+
+	switch (lport->lport_proto_id) {
+	case SCSI_PROTOCOL_FCP:
+	default:
+		ret = fc_get_pr_transport_id_len(se_tpg, se_nacl, pr_reg,
+					format_code);
+		break;
+	}
+
+	return ret;
+}
+
+char *tcm_qla2xxx_parse_pr_out_transport_id(
+	struct se_portal_group *se_tpg,
+	const char *buf,
+	u32 *out_tid_len,
+	char **port_nexus_ptr)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+	char *tid = NULL;
+
+	switch (lport->lport_proto_id) {
+	case SCSI_PROTOCOL_FCP:
+	default:
+		tid = fc_parse_pr_out_transport_id(se_tpg, buf, out_tid_len,
+					port_nexus_ptr);
+		break;
+	}
+
+	return tid;
+}
+
+int tcm_qla2xxx_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+
+	return QLA_TPG_ATTRIB(tpg)->generate_node_acls;
+}
+
+int tcm_qla2xxx_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+
+	return QLA_TPG_ATTRIB(tpg)->cache_dynamic_acls;
+}
+
+int tcm_qla2xxx_check_demo_write_protect(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+
+	return QLA_TPG_ATTRIB(tpg)->demo_mode_write_protect;
+}
+
+int tcm_qla2xxx_check_prod_write_protect(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+
+	return QLA_TPG_ATTRIB(tpg)->prod_mode_write_protect;
+}
+
+struct se_node_acl *tcm_qla2xxx_alloc_fabric_acl(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_nacl *nacl;
+
+	nacl = kzalloc(sizeof(struct tcm_qla2xxx_nacl), GFP_KERNEL);
+	if (!nacl) {
+		pr_err("Unable to alocate struct tcm_qla2xxx_nacl\n");
+		return NULL;
+	}
+
+	return &nacl->se_node_acl;
+}
+
+void tcm_qla2xxx_release_fabric_acl(
+	struct se_portal_group *se_tpg,
+	struct se_node_acl *se_nacl)
+{
+	struct tcm_qla2xxx_nacl *nacl = container_of(se_nacl,
+			struct tcm_qla2xxx_nacl, se_node_acl);
+	kfree(nacl);
+}
+
+u32 tcm_qla2xxx_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+				struct tcm_qla2xxx_tpg, se_tpg);
+
+	return tpg->lport_tpgt;
+}
+
+static void tcm_qla2xxx_complete_free(struct work_struct *work)
+{
+	struct qla_tgt_cmd *cmd = container_of(work, struct qla_tgt_cmd, work);
+
+	transport_generic_free_cmd(&cmd->se_cmd, 0);
+}
+
+/*
+ * Called from qla_target_template->free_cmd(), and will call
+ * tcm_qla2xxx_release_cmd via normal struct target_core_fabric_ops
+ * release callback.  qla_hw_data->hardware_lock is expected to be held
+ */
+void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *cmd)
+{
+	barrier();
+	/*
+	 * Handle tcm_qla2xxx_init_cmd() -> transport_get_lun_for_cmd()
+	 * failure case where cmd->se_cmd.se_dev was not assigned, and
+	 * a call to transport_generic_free_cmd_intr() is not possible..
+	 */
+	if (!cmd->se_cmd.se_dev) {
+		target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
+		transport_generic_free_cmd(&cmd->se_cmd, 0);
+		return;
+	}
+
+	if (!atomic_read(&cmd->se_cmd.t_transport_complete))
+		target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
+
+	INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free);
+	queue_work(tcm_qla2xxx_free_wq, &cmd->work);
+}
+
+/*
+ * Called from struct target_core_fabric_ops->check_stop_free() context
+ */
+int tcm_qla2xxx_check_stop_free(struct se_cmd *se_cmd)
+{
+	if (se_cmd->se_tmr_req) {
+		struct qla_tgt_mgmt_cmd *mcmd = container_of(se_cmd,
+				struct qla_tgt_mgmt_cmd, se_cmd);
+		/*
+		 * Release the associated se_cmd->se_tmr_req and se_cmd
+		 * TMR related state now.
+		 */
+		transport_generic_free_cmd(se_cmd, 1);
+		qla_tgt_free_mcmd(mcmd);
+		return 1;
+	}
+	return target_put_sess_cmd(se_cmd->se_sess, se_cmd);
+}
+
+/* tcm_qla2xxx_release_cmd - Callback from TCM Core to release underlying fabric descriptor
+ * @se_cmd command to release
+ */
+void tcm_qla2xxx_release_cmd(struct se_cmd *se_cmd)
+{
+	struct qla_tgt_cmd *cmd;
+
+	if (se_cmd->se_tmr_req != NULL)
+		return;
+
+	cmd = container_of(se_cmd, struct qla_tgt_cmd, se_cmd);
+	qla_tgt_free_cmd(cmd);
+}
+
+int tcm_qla2xxx_shutdown_session(struct se_session *se_sess)
+{
+	struct qla_tgt_sess *sess = se_sess->fabric_sess_ptr;
+
+	if (!sess) {
+		pr_err("se_sess->fabric_sess_ptr is NULL\n");
+		dump_stack();
+		return 0;
+	}
+	return 1;
+}
+
+extern int tcm_qla2xxx_clear_nacl_from_fcport_map(struct se_node_acl *);
+
+void tcm_qla2xxx_close_session(struct se_session *se_sess)
+{
+	struct se_node_acl *se_nacl = se_sess->se_node_acl;
+	struct qla_tgt_sess *sess = se_sess->fabric_sess_ptr;
+	struct scsi_qla_host *vha;
+	unsigned long flags;
+
+	if (!sess) {
+		pr_err("se_sess->fabric_sess_ptr is NULL\n");
+		dump_stack();
+		return;
+	}
+	vha = sess->vha;
+
+	spin_lock_irqsave(&vha->hw->hardware_lock, flags);
+	tcm_qla2xxx_clear_nacl_from_fcport_map(se_nacl);
+	__qla_tgt_sess_put(sess);
+	spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
+}
+
+void tcm_qla2xxx_stop_session(struct se_session *se_sess, int sess_sleep , int conn_sleep)
+{
+	struct qla_tgt_sess *sess = se_sess->fabric_sess_ptr;
+	struct scsi_qla_host *vha;
+	unsigned long flags;
+
+	if (!sess) {
+		pr_err("se_sess->fabric_sess_ptr is NULL\n");
+		dump_stack();
+		return;
+	}
+	vha = sess->vha;
+
+	spin_lock_irqsave(&vha->hw->hardware_lock, flags);
+	tcm_qla2xxx_clear_nacl_from_fcport_map(se_sess->se_node_acl);
+	spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
+}
+
+void tcm_qla2xxx_reset_nexus(struct se_session *se_sess)
+{
+	return;
+}
+
+int tcm_qla2xxx_sess_logged_in(struct se_session *se_sess)
+{
+	return 0;
+}
+
+u32 tcm_qla2xxx_sess_get_index(struct se_session *se_sess)
+{
+	return 0;
+}
+
+/*
+ * The LIO target core uses DMA_TO_DEVICE to mean that data is going
+ * to the target (eg handling a WRITE) and DMA_FROM_DEVICE to mean
+ * that data is coming from the target (eg handling a READ).  However,
+ * this is just the opposite of what we have to tell the DMA mapping
+ * layer -- eg when handling a READ, the HBA will have to DMA the data
+ * out of memory so it can send it to the initiator, which means we
+ * need to use DMA_TO_DEVICE when we map the data.
+ */
+static enum dma_data_direction tcm_qla2xxx_mapping_dir(struct se_cmd *se_cmd)
+{
+	if (se_cmd->se_cmd_flags & SCF_BIDI)
+		return DMA_BIDIRECTIONAL;
+
+	switch (se_cmd->data_direction) {
+	case DMA_TO_DEVICE:
+		return DMA_FROM_DEVICE;
+	case DMA_FROM_DEVICE:
+		return DMA_TO_DEVICE;
+	case DMA_NONE:
+	default:
+		return DMA_NONE;
+	}
+}
+
+int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd)
+{
+	struct qla_tgt_cmd *cmd = container_of(se_cmd, struct qla_tgt_cmd, se_cmd);
+
+	cmd->bufflen = se_cmd->data_length;
+	cmd->dma_data_direction = tcm_qla2xxx_mapping_dir(se_cmd);
+
+	/*
+	 * Setup the struct se_task->task_sg[] chained SG list
+	 */
+	transport_do_task_sg_chain(se_cmd);
+	cmd->sg_cnt = se_cmd->t_tasks_sg_chained_no;
+	cmd->sg = se_cmd->t_tasks_sg_chained;
+
+	/*
+	 * qla_target.c:qla_tgt_rdy_to_xfer() will call pci_map_sg() to setup
+	 * the SGL mappings into PCIe memory for incoming FCP WRITE data.
+	 */
+	return qla_tgt_rdy_to_xfer(cmd);
+}
+
+int tcm_qla2xxx_write_pending_status(struct se_cmd *se_cmd)
+{
+	unsigned long flags;
+	/*
+	 * Check for WRITE_PENDING status to determine if we need to wait for
+	 * CTIO aborts to be posted via hardware in tcm_qla2xxx_handle_data().
+	 */
+	spin_lock_irqsave(&se_cmd->t_state_lock, flags);
+	if (se_cmd->t_state == TRANSPORT_WRITE_PENDING ||
+	    se_cmd->t_state == TRANSPORT_COMPLETE_QF_WP) {
+		spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
+		wait_for_completion_timeout(&se_cmd->t_transport_stop_comp, 3000);
+		return 0;
+	}
+	spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
+
+	return 0;
+}
+
+void tcm_qla2xxx_set_default_node_attrs(struct se_node_acl *nacl)
+{
+	return;
+}
+
+u32 tcm_qla2xxx_get_task_tag(struct se_cmd *se_cmd)
+{
+	struct qla_tgt_cmd *cmd = container_of(se_cmd, struct qla_tgt_cmd, se_cmd);
+
+	return cmd->tag;
+}
+
+int tcm_qla2xxx_get_cmd_state(struct se_cmd *se_cmd)
+{
+	return 0;
+}
+
+/*
+ * Called from process context in qla_target.c:qla_tgt_do_work() code
+ */
+int tcm_qla2xxx_handle_cmd(scsi_qla_host_t *vha, struct qla_tgt_cmd *cmd,
+			unsigned char *cdb, uint32_t data_length, int fcp_task_attr,
+			int data_dir, int bidi)
+{
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+	struct se_session *se_sess;
+	struct qla_tgt_sess *sess;
+	int rc, flags = TARGET_SCF_ACK_KREF;
+
+	if (bidi)
+		flags |= TARGET_SCF_BIDI_OP;
+
+	sess = cmd->sess;
+	if (!sess) {
+		pr_err("Unable to locate struct qla_tgt_sess from qla_tgt_cmd\n");
+		return -EINVAL;
+	}
+
+	se_sess = sess->se_sess;
+	if (!se_sess) {
+		pr_err("Unable to locate active struct se_session\n");
+		return -EINVAL;
+	}
+
+	rc = target_submit_cmd(se_cmd, se_sess, cdb, &cmd->sense_buffer[0],
+				cmd->unpacked_lun, data_length, fcp_task_attr,
+				data_dir, flags);
+	if (rc != 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+void tcm_qla2xxx_do_rsp(struct work_struct *work)
+{
+	struct qla_tgt_cmd *cmd = container_of(work, struct qla_tgt_cmd, work);
+	/*
+	 * Dispatch ->queue_status from workqueue process context
+	 */
+	transport_send_check_condition_and_sense(&cmd->se_cmd,
+				cmd->se_cmd.scsi_sense_reason, 0);
+}
+
+/*
+ * Called from qla_target.c:qla_tgt_do_ctio_completion()
+ */
+int tcm_qla2xxx_handle_data(struct qla_tgt_cmd *cmd)
+{
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+	unsigned long flags;
+	/*
+	 * Ensure that the complete FCP WRITE payload has been received.
+	 * Otherwise return an exception via CHECK_CONDITION status.
+	 */
+	if (!cmd->write_data_transferred) {
+		/*
+		 * Check if se_cmd has already been aborted via LUN_RESET, and is
+		 * waiting upon completion in tcm_qla2xxx_write_pending_status()..
+		 */
+		spin_lock_irqsave(&se_cmd->t_state_lock, flags);
+		if (atomic_read(&se_cmd->t_transport_aborted)) {
+			spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
+			complete(&se_cmd->t_transport_stop_comp);
+			return 0;
+		}
+		spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
+
+		se_cmd->scsi_sense_reason = TCM_CHECK_CONDITION_ABORT_CMD;
+		INIT_WORK(&cmd->work, tcm_qla2xxx_do_rsp);
+		queue_work(tcm_qla2xxx_free_wq, &cmd->work);
+		return 0;
+	}
+	/*
+	 * We now tell TCM to queue this WRITE CDB with TRANSPORT_PROCESS_WRITE
+	 * status to the backstore processing thread.
+	 */
+	return transport_generic_handle_data(&cmd->se_cmd);
+}
+
+/*
+ * Called from qla_target.c:qla_tgt_issue_task_mgmt()
+ */
+int tcm_qla2xxx_handle_tmr(struct qla_tgt_mgmt_cmd *mcmd, uint32_t lun, uint8_t tmr_func)
+{
+	struct qla_tgt_sess *sess = mcmd->sess;
+	struct se_session *se_sess = sess->se_sess;
+	struct se_portal_group *se_tpg = se_sess->se_tpg;
+	struct se_cmd *se_cmd = &mcmd->se_cmd;
+	/*
+	 * Initialize struct se_cmd descriptor from target_core_mod infrastructure
+	 */
+	transport_init_se_cmd(se_cmd, se_tpg->se_tpg_tfo, se_sess, 0,
+				DMA_NONE, 0, NULL);
+	/*
+	 * Allocate the TCM TMR
+	 */
+	se_cmd->se_tmr_req = core_tmr_alloc_req(se_cmd, mcmd, tmr_func, GFP_ATOMIC);
+	if (!se_cmd->se_tmr_req)
+		return -ENOMEM;
+	/*
+	 * Save the se_tmr_req for qla_tgt_xmit_tm_rsp() callback into LLD code
+	 */
+	mcmd->se_tmr_req = se_cmd->se_tmr_req;
+	/*
+	 * Locate the underlying TCM struct se_lun from sc->device->lun
+	 */
+	if (transport_lookup_tmr_lun(se_cmd, lun) < 0) {
+		transport_generic_free_cmd(se_cmd, 1);
+		return -EINVAL;
+	}
+	/*
+	 * Queue the TMR associated se_cmd into TCM Core for processing
+	 */
+	return transport_generic_handle_tmr(se_cmd);
+}
+
+int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
+{
+	struct qla_tgt_cmd *cmd = container_of(se_cmd, struct qla_tgt_cmd, se_cmd);
+
+	cmd->bufflen = se_cmd->data_length;
+	cmd->dma_data_direction = tcm_qla2xxx_mapping_dir(se_cmd);
+	cmd->aborted = atomic_read(&se_cmd->t_transport_aborted);
+
+	/*
+	 * Setup the struct se_task->task_sg[] chained SG list
+	 */
+	transport_do_task_sg_chain(se_cmd);
+	cmd->sg_cnt = se_cmd->t_tasks_sg_chained_no;
+	cmd->sg = se_cmd->t_tasks_sg_chained;
+	cmd->offset = 0;
+
+	/*
+	 * Now queue completed DATA_IN the qla2xxx LLD and response ring
+	 */
+	return qla_tgt_xmit_response(cmd, QLA_TGT_XMIT_DATA|QLA_TGT_XMIT_STATUS,
+				se_cmd->scsi_status);
+}
+
+int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
+{
+	struct qla_tgt_cmd *cmd = container_of(se_cmd, struct qla_tgt_cmd, se_cmd);
+	int xmit_type = QLA_TGT_XMIT_STATUS;
+
+	cmd->bufflen = se_cmd->data_length;
+	cmd->sg = NULL;
+	cmd->sg_cnt = 0;
+	cmd->offset = 0;
+	cmd->dma_data_direction = tcm_qla2xxx_mapping_dir(se_cmd);
+	cmd->aborted = atomic_read(&se_cmd->t_transport_aborted);
+
+	if (se_cmd->data_direction == DMA_FROM_DEVICE) {
+		/*
+		 * For FCP_READ with CHECK_CONDITION status, clear cmd->bufflen
+		 * for qla_tgt_xmit_response LLD code
+		 */
+		se_cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
+		se_cmd->residual_count = se_cmd->data_length;
+
+		cmd->bufflen = 0;
+	}
+	/*
+	 * Now queue status response to qla2xxx LLD code and response ring
+	 */
+	return qla_tgt_xmit_response(cmd, xmit_type, se_cmd->scsi_status);
+}
+
+int tcm_qla2xxx_queue_tm_rsp(struct se_cmd *se_cmd)
+{
+	struct se_tmr_req *se_tmr = se_cmd->se_tmr_req;
+	struct qla_tgt_mgmt_cmd *mcmd = container_of(se_cmd,
+				struct qla_tgt_mgmt_cmd, se_cmd);
+
+	pr_debug("queue_tm_rsp: mcmd: %p func: 0x%02x response: 0x%02x\n",
+			mcmd, se_tmr->function, se_tmr->response);
+	/*
+	 * Do translation between TCM TM response codes and
+	 * QLA2xxx FC TM response codes.
+	 */
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		mcmd->fc_tm_rsp = FC_TM_SUCCESS;
+		break;
+	case TMR_TASK_DOES_NOT_EXIST:
+		mcmd->fc_tm_rsp = FC_TM_BAD_CMD;
+		break;
+	case TMR_FUNCTION_REJECTED:
+		mcmd->fc_tm_rsp = FC_TM_REJECT;
+		break;
+	case TMR_LUN_DOES_NOT_EXIST:
+	default:
+		mcmd->fc_tm_rsp = FC_TM_FAILED;
+		break;
+	}
+	/*
+	 * Queue the TM response to QLA2xxx LLD to build a
+	 * CTIO response packet.
+	 */
+	qla_tgt_xmit_tm_rsp(mcmd);
+
+	return 0;
+}
+
+u16 tcm_qla2xxx_get_fabric_sense_len(void)
+{
+	return 0;
+}
+
+u16 tcm_qla2xxx_set_fabric_sense_len(struct se_cmd *se_cmd, u32 sense_length)
+{
+	return 0;
+}
+
+int tcm_qla2xxx_is_state_remove(struct se_cmd *se_cmd)
+{
+	return 0;
+}
+
+/* Local pointer to allocated TCM configfs fabric module */
+struct target_fabric_configfs *tcm_qla2xxx_fabric_configfs;
+struct target_fabric_configfs *tcm_qla2xxx_npiv_fabric_configfs;
+
+struct workqueue_struct *tcm_qla2xxx_free_wq;
+struct workqueue_struct *tcm_qla2xxx_cmd_wq;
+
+static int tcm_qla2xxx_setup_nacl_from_rport(
+	struct se_portal_group *se_tpg,
+	struct se_node_acl *se_nacl,
+	struct tcm_qla2xxx_lport *lport,
+	struct tcm_qla2xxx_nacl *nacl,
+	u64 rport_wwnn)
+{
+	struct scsi_qla_host *vha = lport->qla_vha;
+	struct Scsi_Host *sh = vha->host;
+	struct fc_host_attrs *fc_host = shost_to_fc_host(sh);
+	struct fc_rport *rport;
+	struct tcm_qla2xxx_fc_domain *d;
+	struct tcm_qla2xxx_fc_area *a;
+	struct tcm_qla2xxx_fc_al_pa *p;
+	unsigned long flags;
+	unsigned char domain, area, al_pa;
+	/*
+	 * Scan the existing rports, and create a session for the
+	 * explicit NodeACL if a matching rport->node_name already
+	 * exists.
+	 */
+	spin_lock_irqsave(sh->host_lock, flags);
+	list_for_each_entry(rport, &fc_host->rports, peers) {
+		if (rport_wwnn != rport->node_name)
+			continue;
+
+		pr_debug("Located existing rport->node_name: 0x%016LX,"
+			" port_id: 0x%06x\n", rport->node_name,
+			rport->port_id);
+		domain = (rport->port_id >> 16) & 0xff;
+		area = (rport->port_id >> 8) & 0xff;
+		al_pa = rport->port_id & 0xff;
+		nacl->nport_id = rport->port_id;
+
+		pr_debug("fc_rport domain: 0x%02x area: 0x%02x al_pa: %02x\n",
+				domain, area, al_pa);
+		spin_unlock_irqrestore(sh->host_lock, flags);
+
+		spin_lock_irqsave(&vha->hw->hardware_lock, flags);
+		d = &((struct tcm_qla2xxx_fc_domain *)lport->lport_fcport_map)[domain];
+		pr_debug("Using d: %p for domain: 0x%02x\n", d, domain);
+		a = &d->areas[area];
+		pr_debug("Using a: %p for area: 0x%02x\n", a, area);
+		p = &a->al_pas[al_pa];
+		pr_debug("Using p: %p for al_pa: 0x%02x\n", p, al_pa);
+
+		p->se_nacl = se_nacl;
+		pr_debug("Setting p->se_nacl to se_nacl: %p for WWNN: 0x%016LX,"
+			" port_id: 0x%06x\n", se_nacl, rport_wwnn,
+			nacl->nport_id);
+		spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
+
+		return 1;
+	}
+	spin_unlock_irqrestore(sh->host_lock, flags);
+
+	return 0;
+}
+
+/*
+ * Expected to be called with struct qla_hw_data->hardware_lock held
+ */
+int tcm_qla2xxx_clear_nacl_from_fcport_map(
+	struct se_node_acl *se_nacl)
+{
+	struct se_portal_group *se_tpg = se_nacl->se_tpg;
+	struct se_wwn *se_wwn = se_tpg->se_tpg_wwn;
+	struct tcm_qla2xxx_lport *lport = container_of(se_wwn,
+				struct tcm_qla2xxx_lport, lport_wwn);
+	struct tcm_qla2xxx_nacl *nacl = container_of(se_nacl,
+				struct tcm_qla2xxx_nacl, se_node_acl);
+	struct tcm_qla2xxx_fc_domain *d;
+	struct tcm_qla2xxx_fc_area *a;
+	struct tcm_qla2xxx_fc_al_pa *p;
+	unsigned char domain, area, al_pa;
+
+	domain = (nacl->nport_id >> 16) & 0xff;
+	area = (nacl->nport_id >> 8) & 0xff;
+	al_pa = nacl->nport_id & 0xff;
+
+	pr_debug("fc_rport domain: 0x%02x area: 0x%02x al_pa: %02x\n",
+			domain, area, al_pa);
+
+	d = &((struct tcm_qla2xxx_fc_domain *)lport->lport_fcport_map)[domain];
+	pr_debug("Using d: %p for domain: 0x%02x\n", d, domain);
+	a = &d->areas[area];
+	pr_debug("Using a: %p for area: 0x%02x\n", a, area);
+	p = &a->al_pas[al_pa];
+	pr_debug("Using p: %p for al_pa: 0x%02x\n", p, al_pa);
+
+	p->se_nacl = NULL;
+	pr_debug("Clearing p->se_nacl to se_nacl: %p for WWNN: 0x%016LX,"
+		" port_id: 0x%06x\n", se_nacl, nacl->nport_wwnn,
+		nacl->nport_id);
+
+	return 0;
+}
+
+static struct se_node_acl *tcm_qla2xxx_make_nodeacl(
+	struct se_portal_group *se_tpg,
+	struct config_group *group,
+	const char *name)
+{
+	struct se_wwn *se_wwn = se_tpg->se_tpg_wwn;
+	struct tcm_qla2xxx_lport *lport = container_of(se_wwn,
+				struct tcm_qla2xxx_lport, lport_wwn);
+	struct se_node_acl *se_nacl, *se_nacl_new;
+	struct tcm_qla2xxx_nacl *nacl;
+	u64 wwnn;
+	u32 qla2xxx_nexus_depth;
+	int rc;
+
+	if (tcm_qla2xxx_parse_wwn(name, &wwnn, 1) < 0)
+		return ERR_PTR(-EINVAL);
+
+	se_nacl_new = tcm_qla2xxx_alloc_fabric_acl(se_tpg);
+	if (!se_nacl_new)
+		return ERR_PTR(-ENOMEM);
+//#warning FIXME: Hardcoded qla2xxx_nexus depth in tcm_qla2xxx_make_nodeacl()
+	qla2xxx_nexus_depth = 1;
+
+	/*
+	 * se_nacl_new may be released by core_tpg_add_initiator_node_acl()
+	 * when converting a NodeACL from demo mode -> explicit
+	 */
+	se_nacl = core_tpg_add_initiator_node_acl(se_tpg, se_nacl_new,
+				name, qla2xxx_nexus_depth);
+	if (IS_ERR(se_nacl)) {
+		tcm_qla2xxx_release_fabric_acl(se_tpg, se_nacl_new);
+		return se_nacl;
+	}
+	/*
+	 * Locate our struct tcm_qla2xxx_nacl and set the FC Nport WWPN
+	 */
+	nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
+	nacl->nport_wwnn = wwnn;
+	tcm_qla2xxx_format_wwn(&nacl->nport_name[0], TCM_QLA2XXX_NAMELEN, wwnn);
+	/*
+	 * Setup a se_nacl handle based on a matching struct fc_rport setup
+	 * via drivers/scsi/qla2xxx/qla_init.c:qla2x00_reg_remote_port()
+	 */
+	rc = tcm_qla2xxx_setup_nacl_from_rport(se_tpg, se_nacl, lport,
+					nacl, wwnn);
+	if (rc < 0) {
+		tcm_qla2xxx_release_fabric_acl(se_tpg, se_nacl_new);
+		return ERR_PTR(rc);
+	}
+
+	return se_nacl;
+}
+
+static void tcm_qla2xxx_drop_nodeacl(struct se_node_acl *se_acl)
+{
+	struct se_portal_group *se_tpg = se_acl->se_tpg;
+	struct tcm_qla2xxx_nacl *nacl = container_of(se_acl,
+				struct tcm_qla2xxx_nacl, se_node_acl);
+
+	core_tpg_del_initiator_node_acl(se_tpg, se_acl, 1);
+	kfree(nacl);
+}
+
+/* Start items for tcm_qla2xxx_tpg_attrib_cit */
+
+#define DEF_QLA_TPG_ATTRIB(name)					\
+									\
+static ssize_t tcm_qla2xxx_tpg_attrib_show_##name(			\
+	struct se_portal_group *se_tpg,					\
+	char *page)							\
+{									\
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,		\
+			struct tcm_qla2xxx_tpg, se_tpg);		\
+									\
+	return sprintf(page, "%u\n", QLA_TPG_ATTRIB(tpg)->name);	\
+}									\
+									\
+static ssize_t tcm_qla2xxx_tpg_attrib_store_##name(			\
+	struct se_portal_group *se_tpg,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,		\
+			struct tcm_qla2xxx_tpg, se_tpg);		\
+	unsigned long val;						\
+	int ret;							\
+									\
+	ret = strict_strtoul(page, 0, &val);				\
+	if (ret < 0) {							\
+		pr_err("strict_strtoul() failed with"		\
+				" ret: %d\n", ret);			\
+		return -EINVAL;						\
+	}								\
+	ret = tcm_qla2xxx_set_attrib_##name(tpg, val);			\
+									\
+	return (!ret) ? count : -EINVAL;				\
+}
+
+#define DEF_QLA_TPG_ATTR_BOOL(_name)					\
+									\
+static int tcm_qla2xxx_set_attrib_##_name(				\
+	struct tcm_qla2xxx_tpg *tpg,					\
+	unsigned long val)						\
+{									\
+	struct tcm_qla2xxx_tpg_attrib *a = &tpg->tpg_attrib;		\
+									\
+	if ((val != 0) && (val != 1)) {					\
+		pr_err("Illegal boolean value %lu\n", val);		\
+		return -EINVAL;						\
+	}								\
+									\
+	a->_name = val;							\
+	return 0;							\
+}
+
+#define QLA_TPG_ATTR(_name, _mode)	TF_TPG_ATTRIB_ATTR(tcm_qla2xxx, _name, _mode);
+
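+/*
+ * As an example of what the macros above generate: the
+ * DEF_QLA_TPG_ATTR_BOOL() / DEF_QLA_TPG_ATTRIB() / QLA_TPG_ATTR()
+ * trio for generate_node_acls below expands into
+ * tcm_qla2xxx_set_attrib_generate_node_acls() plus the configfs
+ * show/store methods tcm_qla2xxx_tpg_attrib_show_generate_node_acls()
+ * and tcm_qla2xxx_tpg_attrib_store_generate_node_acls().
+ */
+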
+/*
+ * Define tcm_qla2xxx_tpg_attrib_s_generate_node_acls
+ */
+DEF_QLA_TPG_ATTR_BOOL(generate_node_acls);
+DEF_QLA_TPG_ATTRIB(generate_node_acls);
+QLA_TPG_ATTR(generate_node_acls, S_IRUGO | S_IWUSR);
+
+/*
+ * Define tcm_qla2xxx_tpg_attrib_s_cache_dynamic_acls
+ */
+DEF_QLA_TPG_ATTR_BOOL(cache_dynamic_acls);
+DEF_QLA_TPG_ATTRIB(cache_dynamic_acls);
+QLA_TPG_ATTR(cache_dynamic_acls, S_IRUGO | S_IWUSR);
+
+/*
+ * Define tcm_qla2xxx_tpg_attrib_s_demo_mode_write_protect
+ */
+DEF_QLA_TPG_ATTR_BOOL(demo_mode_write_protect);
+DEF_QLA_TPG_ATTRIB(demo_mode_write_protect);
+QLA_TPG_ATTR(demo_mode_write_protect, S_IRUGO | S_IWUSR);
+
+/*
+ * Define tcm_qla2xxx_tpg_attrib_s_prod_mode_write_protect
+ */
+DEF_QLA_TPG_ATTR_BOOL(prod_mode_write_protect);
+DEF_QLA_TPG_ATTRIB(prod_mode_write_protect);
+QLA_TPG_ATTR(prod_mode_write_protect, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *tcm_qla2xxx_tpg_attrib_attrs[] = {
+	&tcm_qla2xxx_tpg_attrib_generate_node_acls.attr,
+	&tcm_qla2xxx_tpg_attrib_cache_dynamic_acls.attr,
+	&tcm_qla2xxx_tpg_attrib_demo_mode_write_protect.attr,
+	&tcm_qla2xxx_tpg_attrib_prod_mode_write_protect.attr,
+	NULL,
+};
+
+/* End items for tcm_qla2xxx_tpg_attrib_cit */
+
+static ssize_t tcm_qla2xxx_tpg_show_enable(
+	struct se_portal_group *se_tpg,
+	char *page)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+			struct tcm_qla2xxx_tpg, se_tpg);
+
+	return snprintf(page, PAGE_SIZE, "%d\n",
+			atomic_read(&tpg->lport_tpg_enabled));
+}
+
+static ssize_t tcm_qla2xxx_tpg_store_enable(
+	struct se_portal_group *se_tpg,
+	const char *page,
+	size_t count)
+{
+	struct se_wwn *se_wwn = se_tpg->se_tpg_wwn;
+	struct tcm_qla2xxx_lport *lport = container_of(se_wwn,
+			struct tcm_qla2xxx_lport, lport_wwn);
+	struct scsi_qla_host *vha = lport->qla_vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+			struct tcm_qla2xxx_tpg, se_tpg);
+	char *endptr;
+	u32 op;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		pr_err("Illegal value for tpg_enable: %u\n", op);
+		return -EINVAL;
+	}
+
+	if (op) {
+		atomic_set(&tpg->lport_tpg_enabled, 1);
+		qla_tgt_enable_vha(vha);
+	} else {
+		if (!ha->qla_tgt) {
+			pr_err("struct qla_hw_data *ha->qla_tgt is NULL\n");
+			return -ENODEV;
+		}
+		atomic_set(&tpg->lport_tpg_enabled, 0);
+		qla_tgt_stop_phase1(ha->qla_tgt);
+	}
+
+	return count;
+}
+
+TF_TPG_BASE_ATTR(tcm_qla2xxx, enable, S_IRUGO | S_IWUSR);
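+
+/*
+ * From userspace this appears under the generic target configfs
+ * layout; illustrative usage only, assuming the standard configfs
+ * mount point:
+ *
+ *   echo 1 > /sys/kernel/config/target/qla2xxx/<lport_wwpn>/tpgt_1/enable
+ */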
+
+static struct configfs_attribute *tcm_qla2xxx_tpg_attrs[] = {
+	&tcm_qla2xxx_tpg_enable.attr,
+	NULL,
+};
+
+static struct se_portal_group *tcm_qla2xxx_make_tpg(
+	struct se_wwn *wwn,
+	struct config_group *group,
+	const char *name)
+{
+	struct tcm_qla2xxx_lport *lport = container_of(wwn,
+			struct tcm_qla2xxx_lport, lport_wwn);
+	struct tcm_qla2xxx_tpg *tpg;
+	unsigned long tpgt;
+	int ret;
+
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (strict_strtoul(name + 5, 10, &tpgt) || tpgt > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	if (!lport->qla_npiv_vp && (tpgt != 1)) {
+		pr_err("In non NPIV mode, a single TPG=1 is used for"
+			" HW port mappings\n");
+		return ERR_PTR(-ENOSYS);
+	}
+
+	tpg = kzalloc(sizeof(struct tcm_qla2xxx_tpg), GFP_KERNEL);
+	if (!tpg) {
+		pr_err("Unable to allocate struct tcm_qla2xxx_tpg\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	tpg->lport = lport;
+	tpg->lport_tpgt = tpgt;
+	/*
+	 * By default allow READ-ONLY TPG demo-mode access w/ cached dynamic NodeACLs
+	 */
+	QLA_TPG_ATTRIB(tpg)->generate_node_acls = 1;
+	QLA_TPG_ATTRIB(tpg)->demo_mode_write_protect = 1;
+	QLA_TPG_ATTRIB(tpg)->cache_dynamic_acls = 1;
+
+	ret = core_tpg_register(&tcm_qla2xxx_fabric_configfs->tf_ops, wwn,
+				&tpg->se_tpg, tpg, TRANSPORT_TPG_TYPE_NORMAL);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	/*
+	 * Setup local TPG=1 pointer for non NPIV mode.
+	 */
+	if (lport->qla_npiv_vp == NULL)
+		lport->tpg_1 = tpg;
+
+	return &tpg->se_tpg;
+}
+
+static void tcm_qla2xxx_drop_tpg(struct se_portal_group *se_tpg)
+{
+	struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
+			struct tcm_qla2xxx_tpg, se_tpg);
+	struct tcm_qla2xxx_lport *lport = tpg->lport;
+	struct scsi_qla_host *vha = lport->qla_vha;
+	struct qla_hw_data *ha = vha->hw;
+	/*
+	 * Call into qla2x_target.c LLD logic to shutdown the active
+	 * FC Nexuses and disable target mode operation for this qla_hw_data
+	 */
+	if (ha->qla_tgt && !ha->qla_tgt->tgt_stop)
+		qla_tgt_stop_phase1(ha->qla_tgt);
+
+	core_tpg_deregister(se_tpg);
+	/*
+	 * Clear local TPG=1 pointer for non NPIV mode.
+	 */
+	if (lport->qla_npiv_vp == NULL)
+		lport->tpg_1 = NULL;
+
+	kfree(tpg);
+}
+
+static struct se_portal_group *tcm_qla2xxx_npiv_make_tpg(
+	struct se_wwn *wwn,
+	struct config_group *group,
+	const char *name)
+{
+	struct tcm_qla2xxx_lport *lport = container_of(wwn,
+			struct tcm_qla2xxx_lport, lport_wwn);
+	struct tcm_qla2xxx_tpg *tpg;
+	unsigned long tpgt;
+	int ret;
+
+	if (strstr(name, "tpgt_") != name)
+		return ERR_PTR(-EINVAL);
+	if (strict_strtoul(name + 5, 10, &tpgt) || tpgt > USHRT_MAX)
+		return ERR_PTR(-EINVAL);
+
+	tpg = kzalloc(sizeof(struct tcm_qla2xxx_tpg), GFP_KERNEL);
+	if (!tpg) {
+		pr_err("Unable to allocate struct tcm_qla2xxx_tpg\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	tpg->lport = lport;
+	tpg->lport_tpgt = tpgt;
+
+	ret = core_tpg_register(&tcm_qla2xxx_npiv_fabric_configfs->tf_ops, wwn,
+				&tpg->se_tpg, tpg, TRANSPORT_TPG_TYPE_NORMAL);
+	if (ret < 0) {
+		kfree(tpg);
+		return NULL;
+	}
+	return &tpg->se_tpg;
+}
+
+/*
+ * Expected to be called with struct qla_hw_data->hardware_lock held
+ */
+static struct qla_tgt_sess *tcm_qla2xxx_find_sess_by_s_id(
+	scsi_qla_host_t *vha,
+	const uint8_t *s_id)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct tcm_qla2xxx_lport *lport;
+	struct se_node_acl *se_nacl;
+	struct tcm_qla2xxx_nacl *nacl;
+	struct tcm_qla2xxx_fc_domain *d;
+	struct tcm_qla2xxx_fc_area *a;
+	struct tcm_qla2xxx_fc_al_pa *p;
+	unsigned char domain, area, al_pa;
+
+	lport = ha->target_lport_ptr;
+	if (!lport) {
+		pr_err("Unable to locate struct tcm_qla2xxx_lport\n");
+		dump_stack();
+		return NULL;
+	}
+
+	domain = s_id[0];
+	area = s_id[1];
+	al_pa = s_id[2];
+
+	pr_debug("find_sess_by_s_id: domain 0x%02x area: 0x%02x al_pa: %02x\n",
+			domain, area, al_pa);
+
+	d = &((struct tcm_qla2xxx_fc_domain *)lport->lport_fcport_map)[domain];
+	pr_debug("Using d: %p for domain: 0x%02x\n", d, domain);
+	a = &d->areas[area];
+	pr_debug("Using a: %p for area: 0x%02x\n", a, area);
+	p = &a->al_pas[al_pa];
+	pr_debug("Using p: %p for al_pa: 0x%02x\n", p, al_pa);
+
+	se_nacl = p->se_nacl;
+	if (!se_nacl) {
+		pr_debug("Unable to locate s_id: 0x%02x area: 0x%02x"
+			" al_pa: %02x\n", domain, area, al_pa);
+		return NULL;
+	}
+	pr_debug("find_sess_by_s_id: located se_nacl: %p,"
+		" initiatorname: %s\n", se_nacl, se_nacl->initiatorname);
+
+	nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
+	if (!nacl->qla_tgt_sess) {
+		pr_err("Unable to locate struct qla_tgt_sess\n");
+		return NULL;
+	}
+
+	return nacl->qla_tgt_sess;
+}
+
+/*
+ * Expected to be called with struct qla_hw_data->hardware_lock held
+ */
+static void tcm_qla2xxx_set_sess_by_s_id(
+	struct tcm_qla2xxx_lport *lport,
+	struct se_node_acl *new_se_nacl,
+	struct tcm_qla2xxx_nacl *nacl,
+	struct se_session *se_sess,
+	struct qla_tgt_sess *qla_tgt_sess,
+	uint8_t *s_id)
+{
+	struct se_node_acl *saved_nacl;
+	struct tcm_qla2xxx_fc_domain *d;
+	struct tcm_qla2xxx_fc_area *a;
+	struct tcm_qla2xxx_fc_al_pa *p;
+	unsigned char domain, area, al_pa;
+
+	domain = s_id[0];
+	area = s_id[1];
+	al_pa = s_id[2];
+	pr_debug("set_sess_by_s_id: domain 0x%02x area: 0x%02x al_pa: %02x\n",
+			domain, area, al_pa);
+
+	d = &((struct tcm_qla2xxx_fc_domain *)lport->lport_fcport_map)[domain];
+	pr_debug("Using d: %p for domain: 0x%02x\n", d, domain);
+	a = &d->areas[area];
+	pr_debug("Using a: %p for area: 0x%02x\n", a, area);
+	p = &a->al_pas[al_pa];
+	pr_debug("Using p: %p for al_pa: 0x%02x\n", p, al_pa);
+
+	saved_nacl = p->se_nacl;
+	if (!saved_nacl) {
+		pr_debug("Setting up new p->se_nacl to new_se_nacl\n");
+		p->se_nacl = new_se_nacl;
+		qla_tgt_sess->se_sess = se_sess;
+		nacl->qla_tgt_sess = qla_tgt_sess;
+		return;
+	}
+
+	if (nacl->qla_tgt_sess) {
+		if (new_se_nacl == NULL) {
+			pr_debug("Clearing existing nacl->qla_tgt_sess"
+					" and p->se_nacl\n");
+			p->se_nacl = NULL;
+			nacl->qla_tgt_sess = NULL;
+			return;
+		}
+		pr_debug("Replacing existing nacl->qla_tgt_sess and"
+				" p->se_nacl\n");
+		p->se_nacl = new_se_nacl;
+		qla_tgt_sess->se_sess = se_sess;
+		nacl->qla_tgt_sess = qla_tgt_sess;
+		return;
+	}
+
+	if (new_se_nacl == NULL) {
+		pr_debug("Clearing existing p->se_nacl\n");
+		p->se_nacl = NULL;
+		return;
+	}
+
+	pr_debug("Replacing existing p->se_nacl w/o active"
+				" nacl->qla_tgt_sess\n");
+	p->se_nacl = new_se_nacl;
+	qla_tgt_sess->se_sess = se_sess;
+	nacl->qla_tgt_sess = qla_tgt_sess;
+
+	pr_debug("Setup nacl->qla_tgt_sess %p by s_id for se_nacl: %p,"
+		" initiatorname: %s\n", nacl->qla_tgt_sess, new_se_nacl,
+		new_se_nacl->initiatorname);
+}
+
+/*
+ * Expected to be called with struct qla_hw_data->hardware_lock held
+ */
+static struct qla_tgt_sess *tcm_qla2xxx_find_sess_by_loop_id(
+	scsi_qla_host_t *vha,
+	const uint16_t loop_id)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct tcm_qla2xxx_lport *lport;
+	struct se_node_acl *se_nacl;
+	struct tcm_qla2xxx_nacl *nacl;
+	struct tcm_qla2xxx_fc_loopid *fc_loopid;
+
+	lport = ha->target_lport_ptr;
+	if (!lport) {
+		pr_err("Unable to locate struct tcm_qla2xxx_lport\n");
+		dump_stack();
+		return NULL;
+	}
+
+	pr_debug("find_sess_by_loop_id: Using loop_id: 0x%04x\n", loop_id);
+
+	fc_loopid = &((struct tcm_qla2xxx_fc_loopid *)lport->lport_loopid_map)[loop_id];
+
+	se_nacl = fc_loopid->se_nacl;
+	if (!se_nacl) {
+		pr_debug("Unable to locate se_nacl by loop_id:"
+				" 0x%04x\n", loop_id);
+		return NULL;
+	}
+
+	nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
+
+	if (!nacl->qla_tgt_sess) {
+		pr_err("Unable to locate struct qla_tgt_sess\n");
+		return NULL;
+	}
+
+	return nacl->qla_tgt_sess;
+}
+
+/*
+ * Expected to be called with struct qla_hw_data->hardware_lock held
+ */
+static void tcm_qla2xxx_set_sess_by_loop_id(
+	struct tcm_qla2xxx_lport *lport,
+	struct se_node_acl *new_se_nacl,
+	struct tcm_qla2xxx_nacl *nacl,
+	struct se_session *se_sess,
+	struct qla_tgt_sess *qla_tgt_sess,
+	uint16_t loop_id)
+{
+	struct se_node_acl *saved_nacl;
+	struct tcm_qla2xxx_fc_loopid *fc_loopid;
+
+	pr_debug("set_sess_by_loop_id: Using loop_id: 0x%04x\n", loop_id);
+
+	fc_loopid = &((struct tcm_qla2xxx_fc_loopid *)lport->lport_loopid_map)[loop_id];
+
+	saved_nacl = fc_loopid->se_nacl;
+	if (!saved_nacl) {
+		pr_debug("Setting up new fc_loopid->se_nacl"
+				" to new_se_nacl\n");
+		fc_loopid->se_nacl = new_se_nacl;
+		if (qla_tgt_sess->se_sess != se_sess)
+			qla_tgt_sess->se_sess = se_sess;
+		if (nacl->qla_tgt_sess != qla_tgt_sess)
+			nacl->qla_tgt_sess = qla_tgt_sess;
+		return;
+	}
+
+	if (nacl->qla_tgt_sess) {
+		if (new_se_nacl == NULL) {
+			pr_debug("Clearing nacl->qla_tgt_sess and"
+					" fc_loopid->se_nacl\n");
+			fc_loopid->se_nacl = NULL;
+			nacl->qla_tgt_sess = NULL;
+			return;
+		}
+
+		pr_debug("Replacing existing nacl->qla_tgt_sess and"
+				" fc_loopid->se_nacl\n");
+		fc_loopid->se_nacl = new_se_nacl;
+		if (qla_tgt_sess->se_sess != se_sess)
+			qla_tgt_sess->se_sess = se_sess;
+		if (nacl->qla_tgt_sess != qla_tgt_sess)
+			nacl->qla_tgt_sess = qla_tgt_sess;
+		return;
+	}
+
+	if (new_se_nacl == NULL) {
+		pr_debug("Clearing fc_loopid->se_nacl\n");
+		fc_loopid->se_nacl = NULL;
+		return;
+	}
+
+	pr_debug("Replacing existing fc_loopid->se_nacl w/o"
+			" active nacl->qla_tgt_sess\n");
+	fc_loopid->se_nacl = new_se_nacl;
+	if (qla_tgt_sess->se_sess != se_sess)
+		qla_tgt_sess->se_sess = se_sess;
+	if (nacl->qla_tgt_sess != qla_tgt_sess)
+		nacl->qla_tgt_sess = qla_tgt_sess;
+
+	pr_debug("Setup nacl->qla_tgt_sess %p by loop_id for se_nacl: %p,"
+		" initiatorname: %s\n", nacl->qla_tgt_sess, new_se_nacl,
+		new_se_nacl->initiatorname);
+}
+
+static void tcm_qla2xxx_free_session(struct qla_tgt_sess *sess)
+{
+	struct qla_tgt *tgt = sess->tgt;
+	struct qla_hw_data *ha = tgt->ha;
+	struct se_session *se_sess;
+	struct se_node_acl *se_nacl;
+	struct tcm_qla2xxx_lport *lport;
+	struct tcm_qla2xxx_nacl *nacl;
+	unsigned char be_sid[3];
+
+	se_sess = sess->se_sess;
+	if (!se_sess) {
+		pr_err("struct qla_tgt_sess->se_sess is NULL\n");
+		dump_stack();
+		return;
+	}
+	se_nacl = se_sess->se_node_acl;
+	nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
+
+	lport = ha->target_lport_ptr;
+	if (!lport) {
+		pr_err("Unable to locate struct tcm_qla2xxx_lport\n");
+		dump_stack();
+		return;
+	}
+
+	target_splice_sess_cmd_list(se_sess);
+	spin_unlock_irq(&ha->hardware_lock);
+
+	target_wait_for_sess_cmds(se_sess, 0);
+
+	spin_lock_irq(&ha->hardware_lock);
+
+	/*
+	 * Now clear the struct se_node_acl->nacl_sess pointer
+	 */
+	transport_deregister_session_configfs(sess->se_sess);
+
+	/*
+	 * And now clear the se_nacl and session pointers from our HW lport
+	 * mappings for fabric S_ID and LOOP_ID.
+	 */
+	memset(be_sid, 0, 3);
+	be_sid[0] = sess->s_id.b.domain;
+	be_sid[1] = sess->s_id.b.area;
+	be_sid[2] = sess->s_id.b.al_pa;
+
+	tcm_qla2xxx_set_sess_by_s_id(lport, NULL, nacl, se_sess,
+			sess, be_sid);
+	tcm_qla2xxx_set_sess_by_loop_id(lport, NULL, nacl, se_sess,
+			sess, sess->loop_id);
+	/*
+	 * Release the FC nexus -> target se_session link now.
+	 */
+	transport_deregister_session(sess->se_sess);
+}
+
+/*
+ * Called via qla_tgt_create_sess():ha->qla2x_tmpl->check_initiator_node_acl()
+ * to locate struct se_node_acl
+ */
+static int tcm_qla2xxx_check_initiator_node_acl(
+	scsi_qla_host_t *vha,
+	unsigned char *fc_wwpn,
+	void *qla_tgt_sess,
+	uint8_t *s_id,
+	uint16_t loop_id)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct tcm_qla2xxx_lport *lport;
+	struct tcm_qla2xxx_tpg *tpg;
+	struct tcm_qla2xxx_nacl *nacl;
+	struct se_portal_group *se_tpg;
+	struct se_node_acl *se_nacl;
+	struct se_session *se_sess;
+	struct qla_tgt_sess *sess = qla_tgt_sess;
+	unsigned char port_name[36];
+	unsigned long flags;
+
+	lport = ha->target_lport_ptr;
+	if (!lport) {
+		pr_err("Unable to locate struct tcm_qla2xxx_lport\n");
+		dump_stack();
+		return -EINVAL;
+	}
+	/*
+	 * Locate the TPG=1 reference..
+	 */
+	tpg = lport->tpg_1;
+	if (!tpg) {
+		pr_err("Unable to locate struct tcm_qla2xxx_lport->tpg_1\n");
+		return -EINVAL;
+	}
+	se_tpg = &tpg->se_tpg;
+
+	se_sess = transport_init_session();
+	if (IS_ERR(se_sess)) {
+		pr_err("Unable to initialize struct se_session\n");
+		return PTR_ERR(se_sess);
+	}
+	/*
+	 * Format the FCP initiator port_name into colon-separated values to
+	 * match the format used by tcm_qla2xxx explicit ConfigFS NodeACLs.
+	 */
+	memset(port_name, 0, sizeof(port_name));
+	snprintf(port_name, sizeof(port_name),
+		"%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
+		fc_wwpn[0], fc_wwpn[1], fc_wwpn[2], fc_wwpn[3], fc_wwpn[4],
+		fc_wwpn[5], fc_wwpn[6], fc_wwpn[7]);
+	/*
+	 * Locate our struct se_node_acl either from an explicit NodeACL created
+	 * via ConfigFS, or via running in TPG demo mode.
+	 */
+	se_sess->se_node_acl = core_tpg_check_initiator_node_acl(se_tpg, port_name);
+	if (!se_sess->se_node_acl) {
+		transport_free_session(se_sess);
+		return -EINVAL;
+	}
+	se_nacl = se_sess->se_node_acl;
+	nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
+	/*
+	 * And now setup the new se_nacl and session pointers into our HW lport
+	 * mappings for fabric S_ID and LOOP_ID.
+	 */
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	tcm_qla2xxx_set_sess_by_s_id(lport, se_nacl, nacl, se_sess,
+			qla_tgt_sess, s_id);
+	tcm_qla2xxx_set_sess_by_loop_id(lport, se_nacl, nacl, se_sess,
+			qla_tgt_sess, loop_id);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+	/*
+	 * Finally register the new FC Nexus with TCM
+	 */
+	__transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess);
+
+	return 0;
+}
+
+/*
+ * Calls into tcm_qla2xxx used by qla2xxx LLD I/O path.
+ */
+static struct qla_tgt_func_tmpl tcm_qla2xxx_template = {
+	.handle_cmd		= tcm_qla2xxx_handle_cmd,
+	.handle_data		= tcm_qla2xxx_handle_data,
+	.handle_tmr		= tcm_qla2xxx_handle_tmr,
+	.free_cmd		= tcm_qla2xxx_free_cmd,
+	.free_session		= tcm_qla2xxx_free_session,
+	.check_initiator_node_acl = tcm_qla2xxx_check_initiator_node_acl,
+	.find_sess_by_s_id	= tcm_qla2xxx_find_sess_by_s_id,
+	.find_sess_by_loop_id	= tcm_qla2xxx_find_sess_by_loop_id,
+};
+
+static int tcm_qla2xxx_init_lport(struct tcm_qla2xxx_lport *lport)
+{
+	lport->lport_fcport_map = vzalloc(
+			sizeof(struct tcm_qla2xxx_fc_domain) * 256);
+	if (!lport->lport_fcport_map) {
+		pr_err("Unable to allocate lport_fcport_map of %zu"
+			" bytes\n", sizeof(struct tcm_qla2xxx_fc_domain) * 256);
+		return -ENOMEM;
+	}
+	pr_debug("qla2xxx: Allocated lport_fcport_map of %zu bytes\n",
+			sizeof(struct tcm_qla2xxx_fc_domain) * 256);
+
+	lport->lport_loopid_map = vzalloc(sizeof(struct tcm_qla2xxx_fc_loopid) *
+				65536);
+	if (!lport->lport_loopid_map) {
+		pr_err("Unable to allocate lport->lport_loopid_map"
+			" of %zu bytes\n", sizeof(struct tcm_qla2xxx_fc_loopid)
+			* 65536);
+		vfree(lport->lport_fcport_map);
+		return -ENOMEM;
+	}
+	pr_debug("qla2xxx: Allocated lport_loopid_map of %zu bytes\n",
+			sizeof(struct tcm_qla2xxx_fc_loopid) * 65536);
+	return 0;
+}
+
+static int tcm_qla2xxx_lport_register_cb(struct scsi_qla_host *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+	struct tcm_qla2xxx_lport *lport;
+	/*
+	 * Setup the local lport->qla_vha pointer, now that qla2xxx LLD
+	 * has located the HW port matching this lport's WWPN.
+	 */
+	lport = (struct tcm_qla2xxx_lport *)ha->target_lport_ptr;
+	lport->qla_vha = vha;
+
+	return 0;
+}
+
+static struct se_wwn *tcm_qla2xxx_make_lport(
+	struct target_fabric_configfs *tf,
+	struct config_group *group,
+	const char *name)
+{
+	struct tcm_qla2xxx_lport *lport;
+	u64 wwpn;
+	int ret = -ENODEV;
+
+	if (tcm_qla2xxx_parse_wwn(name, &wwpn, 1) < 0)
+		return ERR_PTR(-EINVAL);
+
+	lport = kzalloc(sizeof(struct tcm_qla2xxx_lport), GFP_KERNEL);
+	if (!lport) {
+		pr_err("Unable to allocate struct tcm_qla2xxx_lport\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	lport->lport_wwpn = wwpn;
+	tcm_qla2xxx_format_wwn(&lport->lport_name[0], TCM_QLA2XXX_NAMELEN, wwpn);
+
+	ret = tcm_qla2xxx_init_lport(lport);
+	if (ret != 0)
+		goto out;
+
+	ret = qla_tgt_lport_register(&tcm_qla2xxx_template, wwpn,
+				tcm_qla2xxx_lport_register_cb, lport);
+	if (ret != 0)
+		goto out_lport;
+
+	return &lport->lport_wwn;
+out_lport:
+	vfree(lport->lport_loopid_map);
+	vfree(lport->lport_fcport_map);
+out:
+	kfree(lport);
+	return ERR_PTR(ret);
+}
+
+static void tcm_qla2xxx_drop_lport(struct se_wwn *wwn)
+{
+	struct tcm_qla2xxx_lport *lport = container_of(wwn,
+			struct tcm_qla2xxx_lport, lport_wwn);
+	struct scsi_qla_host *vha = lport->qla_vha;
+	struct qla_hw_data *ha = vha->hw;
+	/*
+	 * Call into qla2x_target.c LLD logic to complete the
+	 * shutdown of struct qla_tgt after the call to
+	 * qla_tgt_stop_phase1() from tcm_qla2xxx_drop_tpg() above..
+	 */
+	if (ha->qla_tgt && !ha->qla_tgt->tgt_stopped)
+		qla_tgt_stop_phase2(ha->qla_tgt);
+
+	qla_tgt_lport_deregister(vha);
+
+	vfree(lport->lport_loopid_map);
+	vfree(lport->lport_fcport_map);
+	kfree(lport);
+}
+
+static struct se_wwn *tcm_qla2xxx_npiv_make_lport(
+	struct target_fabric_configfs *tf,
+	struct config_group *group,
+	const char *name)
+{
+	struct tcm_qla2xxx_lport *lport;
+	u64 npiv_wwpn, npiv_wwnn;
+	int ret;
+
+	if (tcm_qla2xxx_npiv_parse_wwn(name, strlen(name)+1,
+				&npiv_wwpn, &npiv_wwnn) < 0)
+		return ERR_PTR(-EINVAL);
+
+	lport = kzalloc(sizeof(struct tcm_qla2xxx_lport), GFP_KERNEL);
+	if (!lport) {
+		pr_err("Unable to allocate struct tcm_qla2xxx_lport"
+				" for NPIV\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	lport->lport_npiv_wwpn = npiv_wwpn;
+	lport->lport_npiv_wwnn = npiv_wwnn;
+	tcm_qla2xxx_npiv_format_wwn(&lport->lport_npiv_name[0],
+			TCM_QLA2XXX_NAMELEN, npiv_wwpn, npiv_wwnn);
+
+/* FIXME: tcm_qla2xxx_npiv_make_lport */
+	ret = -ENOSYS;
+	if (ret != 0)
+		goto out;
+
+	return &lport->lport_wwn;
+out:
+	kfree(lport);
+	return ERR_PTR(ret);
+}
+
+static void tcm_qla2xxx_npiv_drop_lport(struct se_wwn *wwn)
+{
+	struct tcm_qla2xxx_lport *lport = container_of(wwn,
+			struct tcm_qla2xxx_lport, lport_wwn);
+	struct scsi_qla_host *vha = lport->qla_vha;
+	struct Scsi_Host *sh = vha->host;
+	/*
+	 * Notify libfc that we want to release the lport->npiv_vport
+	 */
+	fc_vport_terminate(lport->npiv_vport);
+
+	scsi_host_put(sh);
+	kfree(lport);
+}
+
+static ssize_t tcm_qla2xxx_wwn_show_attr_version(
+	struct target_fabric_configfs *tf,
+	char *page)
+{
+	return sprintf(page, "TCM QLOGIC QLA2XXX NPIV capable fabric module %s on %s/%s"
+		" on "UTS_RELEASE"\n", TCM_QLA2XXX_VERSION, utsname()->sysname,
+		utsname()->machine);
+}
+
+TF_WWN_ATTR_RO(tcm_qla2xxx, version);
+
+static struct configfs_attribute *tcm_qla2xxx_wwn_attrs[] = {
+	&tcm_qla2xxx_wwn_version.attr,
+	NULL,
+};
+
+static struct target_core_fabric_ops tcm_qla2xxx_ops = {
+	.get_fabric_name		= tcm_qla2xxx_get_fabric_name,
+	.get_fabric_proto_ident		= tcm_qla2xxx_get_fabric_proto_ident,
+	.tpg_get_wwn			= tcm_qla2xxx_get_fabric_wwn,
+	.tpg_get_tag			= tcm_qla2xxx_get_tag,
+	.tpg_get_default_depth		= tcm_qla2xxx_get_default_depth,
+	.tpg_get_pr_transport_id	= tcm_qla2xxx_get_pr_transport_id,
+	.tpg_get_pr_transport_id_len	= tcm_qla2xxx_get_pr_transport_id_len,
+	.tpg_parse_pr_out_transport_id	= tcm_qla2xxx_parse_pr_out_transport_id,
+	.tpg_check_demo_mode		= tcm_qla2xxx_check_demo_mode,
+	.tpg_check_demo_mode_cache	= tcm_qla2xxx_check_demo_mode_cache,
+	.tpg_check_demo_mode_write_protect = tcm_qla2xxx_check_demo_write_protect,
+	.tpg_check_prod_mode_write_protect = tcm_qla2xxx_check_prod_write_protect,
+	.tpg_check_demo_mode_login_only = tcm_qla2xxx_check_true,
+	.tpg_alloc_fabric_acl		= tcm_qla2xxx_alloc_fabric_acl,
+	.tpg_release_fabric_acl		= tcm_qla2xxx_release_fabric_acl,
+	.tpg_get_inst_index		= tcm_qla2xxx_tpg_get_inst_index,
+	.new_cmd_map			= NULL,
+	.check_stop_free		= tcm_qla2xxx_check_stop_free,
+	.release_cmd			= tcm_qla2xxx_release_cmd,
+	.shutdown_session		= tcm_qla2xxx_shutdown_session,
+	.close_session			= tcm_qla2xxx_close_session,
+	.stop_session			= tcm_qla2xxx_stop_session,
+	.fall_back_to_erl0		= tcm_qla2xxx_reset_nexus,
+	.sess_logged_in			= tcm_qla2xxx_sess_logged_in,
+	.sess_get_index			= tcm_qla2xxx_sess_get_index,
+	.sess_get_initiator_sid		= NULL,
+	.write_pending			= tcm_qla2xxx_write_pending,
+	.write_pending_status		= tcm_qla2xxx_write_pending_status,
+	.set_default_node_attributes	= tcm_qla2xxx_set_default_node_attrs,
+	.get_task_tag			= tcm_qla2xxx_get_task_tag,
+	.get_cmd_state			= tcm_qla2xxx_get_cmd_state,
+	.queue_data_in			= tcm_qla2xxx_queue_data_in,
+	.queue_status			= tcm_qla2xxx_queue_status,
+	.queue_tm_rsp			= tcm_qla2xxx_queue_tm_rsp,
+	.get_fabric_sense_len		= tcm_qla2xxx_get_fabric_sense_len,
+	.set_fabric_sense_len		= tcm_qla2xxx_set_fabric_sense_len,
+	.is_state_remove		= tcm_qla2xxx_is_state_remove,
+	/*
+	 * Setup function pointers for generic logic in target_core_fabric_configfs.c
+	 */
+	.fabric_make_wwn		= tcm_qla2xxx_make_lport,
+	.fabric_drop_wwn		= tcm_qla2xxx_drop_lport,
+	.fabric_make_tpg		= tcm_qla2xxx_make_tpg,
+	.fabric_drop_tpg		= tcm_qla2xxx_drop_tpg,
+	.fabric_post_link		= NULL,
+	.fabric_pre_unlink		= NULL,
+	.fabric_make_np			= NULL,
+	.fabric_drop_np			= NULL,
+	.fabric_make_nodeacl		= tcm_qla2xxx_make_nodeacl,
+	.fabric_drop_nodeacl		= tcm_qla2xxx_drop_nodeacl,
+};
+
+static struct target_core_fabric_ops tcm_qla2xxx_npiv_ops = {
+	.get_fabric_name		= tcm_qla2xxx_npiv_get_fabric_name,
+	.get_fabric_proto_ident		= tcm_qla2xxx_get_fabric_proto_ident,
+	.tpg_get_wwn			= tcm_qla2xxx_npiv_get_fabric_wwn,
+	.tpg_get_tag			= tcm_qla2xxx_get_tag,
+	.tpg_get_default_depth		= tcm_qla2xxx_get_default_depth,
+	.tpg_get_pr_transport_id	= tcm_qla2xxx_get_pr_transport_id,
+	.tpg_get_pr_transport_id_len	= tcm_qla2xxx_get_pr_transport_id_len,
+	.tpg_parse_pr_out_transport_id	= tcm_qla2xxx_parse_pr_out_transport_id,
+	.tpg_check_demo_mode		= tcm_qla2xxx_check_false,
+	.tpg_check_demo_mode_cache	= tcm_qla2xxx_check_true,
+	.tpg_check_demo_mode_write_protect = tcm_qla2xxx_check_true,
+	.tpg_check_prod_mode_write_protect = tcm_qla2xxx_check_false,
+	.tpg_check_demo_mode_login_only	= tcm_qla2xxx_check_true,
+	.tpg_alloc_fabric_acl		= tcm_qla2xxx_alloc_fabric_acl,
+	.tpg_release_fabric_acl		= tcm_qla2xxx_release_fabric_acl,
+	.tpg_get_inst_index		= tcm_qla2xxx_tpg_get_inst_index,
+	.release_cmd			= tcm_qla2xxx_release_cmd,
+	.shutdown_session		= tcm_qla2xxx_shutdown_session,
+	.close_session			= tcm_qla2xxx_close_session,
+	.stop_session			= tcm_qla2xxx_stop_session,
+	.fall_back_to_erl0		= tcm_qla2xxx_reset_nexus,
+	.sess_logged_in			= tcm_qla2xxx_sess_logged_in,
+	.sess_get_index			= tcm_qla2xxx_sess_get_index,
+	.sess_get_initiator_sid		= NULL,
+	.write_pending			= tcm_qla2xxx_write_pending,
+	.write_pending_status		= tcm_qla2xxx_write_pending_status,
+	.set_default_node_attributes	= tcm_qla2xxx_set_default_node_attrs,
+	.get_task_tag			= tcm_qla2xxx_get_task_tag,
+	.get_cmd_state			= tcm_qla2xxx_get_cmd_state,
+	.queue_data_in			= tcm_qla2xxx_queue_data_in,
+	.queue_status			= tcm_qla2xxx_queue_status,
+	.queue_tm_rsp			= tcm_qla2xxx_queue_tm_rsp,
+	.get_fabric_sense_len		= tcm_qla2xxx_get_fabric_sense_len,
+	.set_fabric_sense_len		= tcm_qla2xxx_set_fabric_sense_len,
+	.is_state_remove		= tcm_qla2xxx_is_state_remove,
+	/*
+	 * Setup function pointers for generic logic in target_core_fabric_configfs.c
+	 */
+	.fabric_make_wwn		= tcm_qla2xxx_npiv_make_lport,
+	.fabric_drop_wwn		= tcm_qla2xxx_npiv_drop_lport,
+	.fabric_make_tpg		= tcm_qla2xxx_npiv_make_tpg,
+	.fabric_drop_tpg		= tcm_qla2xxx_drop_tpg,
+	.fabric_post_link		= NULL,
+	.fabric_pre_unlink		= NULL,
+	.fabric_make_np			= NULL,
+	.fabric_drop_np			= NULL,
+	.fabric_make_nodeacl		= tcm_qla2xxx_make_nodeacl,
+	.fabric_drop_nodeacl		= tcm_qla2xxx_drop_nodeacl,
+};
+
+static int tcm_qla2xxx_register_configfs(void)
+{
+	struct target_fabric_configfs *fabric, *npiv_fabric;
+	int ret;
+
+	pr_debug("TCM QLOGIC QLA2XXX fabric module %s on %s/%s"
+		" on "UTS_RELEASE"\n", TCM_QLA2XXX_VERSION, utsname()->sysname,
+		utsname()->machine);
+	/*
+	 * Register the top level struct config_item_type with TCM core
+	 */
+	fabric = target_fabric_configfs_init(THIS_MODULE, "qla2xxx");
+	if (IS_ERR(fabric)) {
+		pr_err("target_fabric_configfs_init() failed\n");
+		return PTR_ERR(fabric);
+	}
+	/*
+	 * Setup fabric->tf_ops from our local tcm_qla2xxx_ops
+	 */
+	fabric->tf_ops = tcm_qla2xxx_ops;
+	/*
+	 * Setup the struct se_task->task_sg[] chaining bit
+	 */
+	fabric->tf_ops.task_sg_chaining = 1;
+	/*
+	 * Setup default attribute lists for various fabric->tf_cit_tmpl
+	 */
+	TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = tcm_qla2xxx_wwn_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = tcm_qla2xxx_tpg_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = tcm_qla2xxx_tpg_attrib_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+	/*
+	 * Register the fabric for use within TCM
+	 */
+	ret = target_fabric_configfs_register(fabric);
+	if (ret < 0) {
+		pr_err("target_fabric_configfs_register() failed"
+				" for TCM_QLA2XXX\n");
+		return ret;
+	}
+	/*
+	 * Setup our local pointer to *fabric
+	 */
+	tcm_qla2xxx_fabric_configfs = fabric;
+	pr_debug("TCM_QLA2XXX[0] - Set fabric -> tcm_qla2xxx_fabric_configfs\n");
+
+	/*
+	 * Register the top level struct config_item_type for NPIV with TCM core
+	 */
+	npiv_fabric = target_fabric_configfs_init(THIS_MODULE, "qla2xxx_npiv");
+	if (IS_ERR(npiv_fabric)) {
+		pr_err("target_fabric_configfs_init() failed\n");
+		ret = PTR_ERR(npiv_fabric);
+		goto out_fabric;
+	}
+	/*
+	 * Setup fabric->tf_ops from our local tcm_qla2xxx_npiv_ops
+	 */
+	npiv_fabric->tf_ops = tcm_qla2xxx_npiv_ops;
+	/*
+	 * Setup default attribute lists for various npiv_fabric->tf_cit_tmpl
+	 */
+	TF_CIT_TMPL(npiv_fabric)->tfc_wwn_cit.ct_attrs = tcm_qla2xxx_wwn_attrs;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_base_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+	TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+	/*
+	 * Register the npiv_fabric for use within TCM
+	 */
+	ret = target_fabric_configfs_register(npiv_fabric);
+	if (ret < 0) {
+		pr_err("target_fabric_configfs_register() failed"
+				" for TCM_QLA2XXX NPIV\n");
+		goto out_fabric;
+	}
+	/*
+	 * Setup our local pointer to *npiv_fabric
+	 */
+	tcm_qla2xxx_npiv_fabric_configfs = npiv_fabric;
+	pr_debug("TCM_QLA2XXX[0] - Set fabric -> tcm_qla2xxx_npiv_fabric_configfs\n");
+
+	tcm_qla2xxx_free_wq = alloc_workqueue("tcm_qla2xxx_free",
+						WQ_MEM_RECLAIM, 0);
+	if (!tcm_qla2xxx_free_wq) {
+		ret = -ENOMEM;
+		goto out_fabric_npiv;
+	}
+
+	tcm_qla2xxx_cmd_wq = alloc_workqueue("tcm_qla2xxx_cmd", 0, 0);
+	if (!tcm_qla2xxx_cmd_wq) {
+		ret = -ENOMEM;
+		goto out_free_wq;
+	}
+
+	return 0;
+
+out_free_wq:
+	destroy_workqueue(tcm_qla2xxx_free_wq);
+out_fabric_npiv:
+	target_fabric_configfs_deregister(tcm_qla2xxx_npiv_fabric_configfs);
+out_fabric:
+	target_fabric_configfs_deregister(tcm_qla2xxx_fabric_configfs);
+	return ret;
+}
+
+static void tcm_qla2xxx_deregister_configfs(void)
+{
+	destroy_workqueue(tcm_qla2xxx_cmd_wq);
+	destroy_workqueue(tcm_qla2xxx_free_wq);
+
+	target_fabric_configfs_deregister(tcm_qla2xxx_fabric_configfs);
+	tcm_qla2xxx_fabric_configfs = NULL;
+	pr_debug("TCM_QLA2XXX[0] - Cleared tcm_qla2xxx_fabric_configfs\n");
+
+	target_fabric_configfs_deregister(tcm_qla2xxx_npiv_fabric_configfs);
+	tcm_qla2xxx_npiv_fabric_configfs = NULL;
+	pr_debug("TCM_QLA2XXX[0] - Cleared tcm_qla2xxx_npiv_fabric_configfs\n");
+}
+
+static int __init tcm_qla2xxx_init(void)
+{
+	int ret;
+
+	ret = tcm_qla2xxx_register_configfs();
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static void __exit tcm_qla2xxx_exit(void)
+{
+	tcm_qla2xxx_deregister_configfs();
+}
+
+MODULE_DESCRIPTION("TCM QLA2XXX series NPIV enabled fabric driver");
+MODULE_LICENSE("GPL");
+module_init(tcm_qla2xxx_init);
+module_exit(tcm_qla2xxx_exit);
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.h b/drivers/scsi/qla2xxx/tcm_qla2xxx.h
new file mode 100644
index 0000000..9dbac00
--- /dev/null
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.h
@@ -0,0 +1,148 @@
+#include <target/target_core_base.h>
+
+#define TCM_QLA2XXX_VERSION	"v0.1"
+/* length of ASCII WWPNs including pad */
+#define TCM_QLA2XXX_NAMELEN	32
+/* length of ASCII NPIV 'WWPN+WWNN' including pad */
+#define TCM_QLA2XXX_NPIV_NAMELEN 66
+
+#include "qla_target.h"
+
+struct tcm_qla2xxx_nacl {
+	/* From struct fc_rport->port_id (24-bit FC Port ID) */
+	u32 nport_id;
+	/* Binary World Wide unique Node Name for remote FC Initiator Nport */
+	u64 nport_wwnn;
+	/* ASCII formatted WWPN for FC Initiator Nport */
+	char nport_name[TCM_QLA2XXX_NAMELEN];
+	/* Pointer to qla_tgt_sess */
+	struct qla_tgt_sess *qla_tgt_sess;
+	/* Pointer to TCM FC nexus */
+	struct se_session *nport_nexus;
+	/* Returned by tcm_qla2xxx_make_nodeacl() */
+	struct se_node_acl se_node_acl;
+};
+
+struct tcm_qla2xxx_tpg_attrib {
+	int generate_node_acls;
+	int cache_dynamic_acls;
+	int demo_mode_write_protect;
+	int prod_mode_write_protect;
+};
+
+struct tcm_qla2xxx_tpg {
+	/* FC lport target portal group tag for TCM */
+	u16 lport_tpgt;
+	/* Atomic bit to determine TPG active status */
+	atomic_t lport_tpg_enabled;
+	/* Pointer back to tcm_qla2xxx_lport */
+	struct tcm_qla2xxx_lport *lport;
+	/* Used by tcm_qla2xxx_tpg_attrib_cit */
+	struct tcm_qla2xxx_tpg_attrib tpg_attrib;
+	/* Returned by tcm_qla2xxx_make_tpg() */
+	struct se_portal_group se_tpg;
+};
+
+#define QLA_TPG_ATTRIB(tpg)	(&(tpg)->tpg_attrib)
+
+/*
+ * Used for the 24-bit lport->lport_fcport_map
+ */
+struct tcm_qla2xxx_fc_al_pa {
+	struct se_node_acl *se_nacl;
+};
+
+struct tcm_qla2xxx_fc_area {
+	struct tcm_qla2xxx_fc_al_pa al_pas[256];
+};
+
+struct tcm_qla2xxx_fc_domain {
+	struct tcm_qla2xxx_fc_area areas[256];
+};
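+
+/*
+ * A 24-bit FC Port ID indexes the nested map above the same way
+ * tcm_qla2xxx_setup_nacl_from_rport() decomposes it:
+ *
+ *   domain = (port_id >> 16) & 0xff  ->  lport_fcport_map[domain]
+ *   area   = (port_id >> 8) & 0xff   ->  .areas[area]
+ *   al_pa  = port_id & 0xff          ->  .al_pas[al_pa].se_nacl
+ */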
+
+struct tcm_qla2xxx_fc_loopid {
+	struct se_node_acl *se_nacl;
+};
+
+struct tcm_qla2xxx_lport {
+	/* SCSI protocol the lport is providing */
+	u8 lport_proto_id;
+	/* Binary World Wide unique Port Name for FC Target Lport */
+	u64 lport_wwpn;
+	/* Binary World Wide unique Port Name for FC NPIV Target Lport */
+	u64 lport_npiv_wwpn;
+	/* Binary World Wide unique Node Name for FC NPIV Target Lport */
+	u64 lport_npiv_wwnn;
+	/* ASCII formatted WWPN for FC Target Lport */
+	char lport_name[TCM_QLA2XXX_NAMELEN];
+	/* ASCII formatted WWPN+WWNN for NPIV FC Target Lport */
+	char lport_npiv_name[TCM_QLA2XXX_NPIV_NAMELEN];
+	/* vmalloc'ed memory for fc_port pointers in 24-bit FC Port ID space */
+	char *lport_fcport_map;
+	/* vmalloc'ed memory for fc_port pointers for 16-bit FC loop ID */
+	char *lport_loopid_map;
+	/* Pointer to struct scsi_qla_host from qla2xxx LLD */
+	struct scsi_qla_host *qla_vha;
+	/* Pointer to struct scsi_qla_host for NPIV VP from qla2xxx LLD */
+	struct scsi_qla_host *qla_npiv_vp;
+	/* Embedded struct qla_tgt for this target lport */
+	struct qla_tgt lport_qla_tgt;
+	/* Pointer to struct fc_vport for NPIV vport from libfc */
+	struct fc_vport *npiv_vport;
+	/* Pointer to TPG=1 for non NPIV mode */
+	struct tcm_qla2xxx_tpg *tpg_1;
+	/* Returned by tcm_qla2xxx_make_lport() */
+	struct se_wwn lport_wwn;
+};
+
+extern int tcm_qla2xxx_check_true(struct se_portal_group *);
+extern int tcm_qla2xxx_check_false(struct se_portal_group *);
+extern ssize_t tcm_qla2xxx_parse_wwn(const char *, u64 *, int);
+extern ssize_t tcm_qla2xxx_format_wwn(char *, size_t, u64);
+extern char *tcm_qla2xxx_get_fabric_name(void);
+extern int tcm_qla2xxx_npiv_parse_wwn(const char *name, size_t, u64 *, u64 *);
+extern ssize_t tcm_qla2xxx_npiv_format_wwn(char *, size_t, u64, u64);
+extern char *tcm_qla2xxx_npiv_get_fabric_name(void);
+extern u8 tcm_qla2xxx_get_fabric_proto_ident(struct se_portal_group *);
+extern char *tcm_qla2xxx_get_fabric_wwn(struct se_portal_group *);
+extern char *tcm_qla2xxx_npiv_get_fabric_wwn(struct se_portal_group *);
+extern u16 tcm_qla2xxx_get_tag(struct se_portal_group *);
+extern u32 tcm_qla2xxx_get_default_depth(struct se_portal_group *);
+extern u32 tcm_qla2xxx_get_pr_transport_id(struct se_portal_group *, struct se_node_acl *,
+			struct t10_pr_registration *, int *, unsigned char *);
+extern u32 tcm_qla2xxx_get_pr_transport_id_len(struct se_portal_group *, struct se_node_acl *,
+			struct t10_pr_registration *, int *);
+extern char *tcm_qla2xxx_parse_pr_out_transport_id(struct se_portal_group *, const char *,
+				u32 *, char **);
+extern int tcm_qla2xxx_check_demo_mode(struct se_portal_group *);
+extern int tcm_qla2xxx_check_demo_mode_cache(struct se_portal_group *);
+extern int tcm_qla2xxx_check_demo_write_protect(struct se_portal_group *);
+extern int tcm_qla2xxx_check_prod_write_protect(struct se_portal_group *);
+extern struct se_node_acl *tcm_qla2xxx_alloc_fabric_acl(struct se_portal_group *);
+extern void tcm_qla2xxx_release_fabric_acl(struct se_portal_group *, struct se_node_acl *);
+extern u32 tcm_qla2xxx_tpg_get_inst_index(struct se_portal_group *);
+extern void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *);
+extern int tcm_qla2xxx_check_stop_free(struct se_cmd *);
+extern void tcm_qla2xxx_release_cmd(struct se_cmd *);
+extern int tcm_qla2xxx_shutdown_session(struct se_session *);
+extern void tcm_qla2xxx_close_session(struct se_session *);
+extern void tcm_qla2xxx_stop_session(struct se_session *, int, int);
+extern void tcm_qla2xxx_reset_nexus(struct se_session *);
+extern int tcm_qla2xxx_sess_logged_in(struct se_session *);
+extern u32 tcm_qla2xxx_sess_get_index(struct se_session *);
+extern int tcm_qla2xxx_write_pending(struct se_cmd *);
+extern int tcm_qla2xxx_write_pending_status(struct se_cmd *);
+extern void tcm_qla2xxx_set_default_node_attrs(struct se_node_acl *);
+extern u32 tcm_qla2xxx_get_task_tag(struct se_cmd *);
+extern int tcm_qla2xxx_get_cmd_state(struct se_cmd *);
+extern int tcm_qla2xxx_handle_cmd(struct scsi_qla_host *, struct qla_tgt_cmd *,
+			unsigned char *, uint32_t, int, int, int);
+extern int tcm_qla2xxx_new_cmd_map(struct se_cmd *);
+extern int tcm_qla2xxx_handle_data(struct qla_tgt_cmd *);
+extern int tcm_qla2xxx_handle_tmr(struct qla_tgt_mgmt_cmd *, uint32_t, uint8_t);
+extern int tcm_qla2xxx_queue_data_in(struct se_cmd *);
+extern int tcm_qla2xxx_queue_status(struct se_cmd *);
+extern int tcm_qla2xxx_queue_tm_rsp(struct se_cmd *);
+extern u16 tcm_qla2xxx_get_fabric_sense_len(void);
+extern u16 tcm_qla2xxx_set_fabric_sense_len(struct se_cmd *, u32);
+extern int tcm_qla2xxx_is_state_remove(struct se_cmd *);
-- 
1.7.2.3

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support
  2011-12-18  2:02 ` [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support Nicholas A. Bellinger
@ 2011-12-19 22:59   ` Roland Dreier
  2011-12-21 21:48     ` Nicholas A. Bellinger
  0 siblings, 1 reply; 19+ messages in thread
From: Roland Dreier @ 2011-12-19 22:59 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

Hi Nick,

> +/* ha->hardware_lock supposed to be held on entry */
> +static void qla_tgt_undelete_sess(struct qla_tgt_sess *sess)
> +{
> +       BUG_ON(!sess->deleted);
> +
> +       list_del(&sess->del_list_entry);
> +       sess->deleted = 0;
> +}

Running with basically this code, we hit the crash below (the BUG is
the one above).  The way to hit this was described as having the target
be artificially slow (slower than the initiator SCSI timeout) responding
to reads, and having the initiator keep resending the read commands after
it aborts them:

[  465.935351] scsi(10): resetting (session ffff880614858060 from port
50:01:43:80:16:7c:80:7a, mcmd fffd, loop_id 129)
[  465.935496] qla_target(0): Unknown task mgmt fn 0xfffd
[  466.016688] scsi(10): resetting (session ffff880614858060 from port
50:01:43:80:16:7c:80:7a, mcmd fffd, loop_id 129)
[  466.016824] qla_target(0): Unknown task mgmt fn 0xfffd
[  466.017102] scsi(10): resetting (session ffff880614858060 from port
50:01:43:80:16:7c:80:7a, mcmd fffd, loop_id 129)
[  466.017236] qla_target(0): Unknown task mgmt fn 0xfffd
[  488.064324] qla_target(0): tgt_ops->handle_tmr() failed: -22
[  488.064422] qla_target(0): Unable to send command to target,
sending BUSY status
[  495.587058] ------------[ cut here ]------------
[  495.587161] kernel BUG at drivers/scsi/qla2xxx/qla_target.c:2591!
[  495.587281] invalid opcode: 0000 [#1] SMP
[  495.587553] last sysfs file:
/sys/devices/pci0000:00/0000:00:07.0/0000:09:00.0/host11/port-11:1/expander-11:1/port-11:1:9/end_device-11:1:9/target11:0:34/11:0:34:0/state
[  495.587756] Dumping ftrace buffer:
[  495.587891]    (ftrace buffer empty)
[  495.588002] CPU 10
[  495.588050] Modules linked in: netconsole vfat msdos fat
target_core_pscsi target_core_file target_core_iblock tcm_qla2xxx
target_core_mod configfs ps_bdrv ipmi_devintf ipmi_si ipmi_msghandler
serio_raw ioatdma i7core_edac dca edac_core ses enclosure rdma_ucm
rdma_cm mlx4_ib usb_storage usbhid ahci mpt2sas qla2xxx uas e1000e hid
iw_cm libahci scsi_transport_sas scsi_transport_fc mlx4_core ib_uverbs
raid_class scsi_tgt ib_umad ib_ipoib ib_cm ib_sa ib_mad ib_core
ib_addr
[  495.591092]
[  495.591167] Pid: 5276, comm: LIO_iblock Tainted: G        W
2.6.39.4-dbg+ #14435 Xyratex Storage Server        /HS-1235T-ATX
[  495.591432] RIP: 0010:[<ffffffffa01e4bed>]  [<ffffffffa01e4bed>]
qla_tgt_free_cmd+0x2d/0x50 [qla2xxx]
[  495.591614] RSP: 0018:ffff8805d5b25bb0  EFLAGS: 00010202
[  495.591700] RAX: 000000000000000a RBX: ffff8805b1c50040 RCX: 0000000000000000
[  495.591793] RDX: 0000000000000000 RSI: ffff8805b1c50000 RDI: ffff8805b1c50000
[  495.591886] RBP: ffff8805d5b25bc0 R08: 0000000000000001 R09: 0000000000000000
[  495.591979] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88062314f1a8
[  495.592072] R13: ffff8805b1c50168 R14: ffff88062314f140 R15: 0000000000000286
[  495.592166] FS:  0000000000000000(0000) GS:ffff880c3ea00000(0000)
knlGS:0000000000000000
[  495.592290] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  495.592387] CR2: 00007f59d60a0330 CR3: 0000000001a03000 CR4: 00000000000006e0
[  495.592486] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  495.592583] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  495.592682] Process LIO_iblock (pid: 5276, threadinfo
ffff8805d5b24000, task ffff8805d5a6c4e0)
[  495.592809] Stack:
[  495.592907]  ffff8805d5b25bd0 0000000000000286 ffff8805d5b25bd0
ffffffffa0144891
[  495.593244]  ffff8805d5b25c10 ffffffffa030a40e 0000000a00000002
ffff8805b1c50168
[  495.593533]  ffffffffa030a390 0000000000000286 0000000000000002
0000000000000000
[  495.593819] Call Trace:
[  495.593901]  [<ffffffffa0144891>] tcm_qla2xxx_release_cmd+0x21/0x30
[tcm_qla2xxx]
[  495.594037]  [<ffffffffa030a40e>] target_release_cmd_kref+0x7e/0xe0
[target_core_mod]
[  495.594169]  [<ffffffffa030a390>] ?
target_splice_sess_cmd_list+0xd0/0xd0 [target_core_mod]
[  495.594300]  [<ffffffff8129ba67>] kref_put+0x37/0x70
[  495.594395]  [<ffffffffa030a48c>] target_put_sess_cmd+0x1c/0x20
[target_core_mod]
[  495.594520]  [<ffffffffa014485f>]
tcm_qla2xxx_check_stop_free+0x4f/0x60 [tcm_qla2xxx]
[  495.594651]  [<ffffffffa0309867>]
transport_cmd_check_stop+0x157/0x210 [target_core_mod]
[  495.594784]  [<ffffffffa0309935>]
transport_cmd_check_stop_to_fabric+0x15/0x20 [target_core_mod]
[  495.594919]  [<ffffffffa030ea2e>]
transport_cmd_finish_abort+0x2e/0x70 [target_core_mod]
[  495.595052]  [<ffffffffa0307185>]
core_tmr_handle_tas_abort+0x35/0x70 [target_core_mod]
[  495.595185]  [<ffffffffa030768c>] core_tmr_lun_reset+0x3dc/0x950
[target_core_mod]
[  495.595313]  [<ffffffff81088469>] ? trace_hardirqs_off_caller+0x29/0xc0
[  495.595407]  [<ffffffff8108850d>] ? trace_hardirqs_off+0xd/0x10
[  495.595509]  [<ffffffff81557810>] ? _raw_spin_unlock_irqrestore+0x40/0x80
[  495.595611]  [<ffffffffa0310a2c>]
transport_generic_do_tmr+0x9c/0xc0 [target_core_mod]
[  495.595743]  [<ffffffffa0310cb8>]
transport_processing_thread+0x268/0x460 [target_core_mod]
[  495.595872]  [<ffffffff810742f0>] ? wake_up_bit+0x40/0x40
[  495.595970]  [<ffffffffa0310a50>] ?
transport_generic_do_tmr+0xc0/0xc0 [target_core_mod]
[  495.596095]  [<ffffffff81073d4e>] kthread+0xbe/0xd0
[  495.596195]  [<ffffffff81557dd4>] ? retint_restore_args+0x13/0x13
[  495.596292]  [<ffffffff8108e51d>] ? trace_hardirqs_on_caller+0x14d/0x190
[  495.596391]  [<ffffffff815611a4>] kernel_thread_helper+0x4/0x10
[  495.596488]  [<ffffffff81557dd4>] ? retint_restore_args+0x13/0x13
[  495.596586]  [<ffffffff81073c90>] ? __init_kthread_worker+0x70/0x70
[  495.596683]  [<ffffffff815611a0>] ? gs_change+0x13/0x13
[  495.596781] Code: 89 e5 48 83 ec 10 0f 1f 44 00 00 0f b6 87 40 05
00 00 48 89 fe a8 02 75 12 a8 04 75 10 48 8b 3d fa 3b 02 00 e8 75 0e
f6 e0 c9 c3 <0f> 0b 48 8b bf 48 05 00 00 48 89 75 f8 e8 81 0c f6 e0 48
8b 75
[  495.599869] RIP  [<ffffffffa01e4bed>] qla_tgt_free_cmd+0x2d/0x50 [qla2xxx]
[  495.600011]  RSP <ffff8805d5b25bb0>
[  495.600316] ---[ end trace 71e099d8f1a84ca8 ]---

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2011-12-18  2:02 [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Nicholas A. Bellinger
                   ` (2 preceding siblings ...)
  2011-12-18  2:02 ` [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target Nicholas A. Bellinger
@ 2011-12-21 17:11 ` Christoph Hellwig
  2011-12-22 22:25   ` Andrew Vasquez
  3 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2011-12-21 17:11 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Roland Dreier, Joern Engel,
	Madhuranath Iyengar

I think the most important item is to sort out the mess around the old
generation qla23xx support.  The way the code currently sprinkles ifs
around for it is hard to maintain.  Given that the qla23xx support has,
as far as I know, zero test coverage and has been EOLed by QLogic, I see
no reason to keep it around.

Anyone disagreeing with that?

The other bit is sorting out handling of the full command queues in the
hardware.  Currently the target core does a blind retry after some delay,
which isn't a good idea.  I had an RFC patch on how to move it into the
driver, but I'll really need some help to find the proper place to wake
up any process waiting once we get free slots.
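
As a very rough sketch of the shape this could take (note that
qla_tgt_ring_has_space() and ha->tgt_slot_wq are hypothetical names
here, not existing qla2xxx symbols):

	/* submission path: sleep instead of blindly retrying */
	ret = wait_event_interruptible(ha->tgt_slot_wq,
				       qla_tgt_ring_has_space(ha));

	/* response-ring completion path: wake up any slot waiters */
	if (qla_tgt_ring_has_space(ha))
		wake_up(&ha->tgt_slot_wq);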

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support
  2011-12-19 22:59   ` Roland Dreier
@ 2011-12-21 21:48     ` Nicholas A. Bellinger
  2011-12-21 22:46       ` Roland Dreier
  0 siblings, 1 reply; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-21 21:48 UTC (permalink / raw)
  To: Roland Dreier
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

On Mon, 2011-12-19 at 14:59 -0800, Roland Dreier wrote:
> Hi Nick,
> 

Hi Roland,

> > +/* ha->hardware_lock supposed to be held on entry */
> > +static void qla_tgt_undelete_sess(struct qla_tgt_sess *sess)
> > +{
> > +       BUG_ON(!sess->deleted);
> > +
> > +       list_del(&sess->del_list_entry);
> > +       sess->deleted = 0;
> > +}
> 
> Running with basically this code, we hit the crash below (the BUG is
> the one above).

So AFAICT the BUG_ON() being triggered below is not actually from
qla_tgt_undelete_sess(), as it is never called from the main release
path via tcm_qla2xxx_release_cmd() in the backtrace below.

>   The way to hit this was described as having the target
> be artificially slow (slower than the initiator SCSI timeout) responding
> to reads, and having the initiator keep resending the read commands after
> it aborts them:
> 

<nod>

> [  465.935351] scsi(10): resetting (session ffff880614858060 from port
> 50:01:43:80:16:7c:80:7a, mcmd fffd, loop_id 129)
> [  465.935496] qla_target(0): Unknown task mgmt fn 0xfffd
> [  466.016688] scsi(10): resetting (session ffff880614858060 from port
> 50:01:43:80:16:7c:80:7a, mcmd fffd, loop_id 129)
> [  466.016824] qla_target(0): Unknown task mgmt fn 0xfffd
> [  466.017102] scsi(10): resetting (session ffff880614858060 from port
> 50:01:43:80:16:7c:80:7a, mcmd fffd, loop_id 129)
> [  466.017236] qla_target(0): Unknown task mgmt fn 0xfffd
> [  488.064324] qla_target(0): tgt_ops->handle_tmr() failed: -22
> [  488.064422] qla_target(0): Unable to send command to target,
> sending BUSY status
> [  495.587058] ------------[ cut here ]------------
> [  495.587161] kernel BUG at drivers/scsi/qla2xxx/qla_target.c:2591!

So looking at qla_tgt-3.3, qla_target.c:2591 lines up with the
following:

void qla_tgt_free_cmd(struct qla_tgt_cmd *cmd)
{
        BUG_ON(cmd->sg_mapped);

        if (unlikely(cmd->free_sg))
                kfree(cmd->sg);
        kmem_cache_free(qla_tgt_cmd_cachep, cmd);
}

I'm not sure at what point the call to qla_tgt_unmap_sg() is being
ignored here, but I think it's likely related to the fact that
qla_target.c is now using target_submit_cmd() to dispatch backend I/O
directly from workqueue context, and not from processing context in
transport_processing_thread().

core_tmr_lun_reset() is currently making some assumptions about this
when it comes to walking the active I/O lists to abort commands, so I'm
thinking this will also need to change to take into account that I/Os
may still be dispatched from a separate workqueue process context.
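
Concretely, something along these lines may be needed in the abort
path before the final free (a sketch only -- the qla_tgt_unmap_sg()
signature is assumed here):

	/*
	 * Tear down any live SG mapping before the command is freed
	 * from the TMR/abort path, so that the BUG_ON(cmd->sg_mapped)
	 * in qla_tgt_free_cmd() cannot trip.
	 */
	if (cmd->sg_mapped)
		qla_tgt_unmap_sg(vha, cmd);	/* assumed signature */
	qla_tgt_free_cmd(cmd);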

In any event, please verify the BUG_ON() you're observing, and I'll take
a look at this over the holidays.

Thanks,

--nab

> [  495.587281] invalid opcode: 0000 [#1] SMP
> [  495.587553] last sysfs file:
> /sys/devices/pci0000:00/0000:00:07.0/0000:09:00.0/host11/port-11:1/expander-11:1/port-11:1:9/end_device-11:1:9/target11:0:34/11:0:34:0/state
> [  495.587756] Dumping ftrace buffer:
> [  495.587891]    (ftrace buffer empty)
> [  495.588002] CPU 10
> [  495.588050] Modules linked in: netconsole vfat msdos fat
> target_core_pscsi target_core_file target_core_iblock tcm_qla2xxx
> target_core_mod configfs ps_bdrv ipmi_devintf ipmi_si ipmi_msghandler
> serio_raw ioatdma i7core_edac dca edac_core ses enclosure rdma_ucm
> rdma_cm mlx4_ib usb_storage usbhid ahci mpt2sas qla2xxx uas e1000e hid
> iw_cm libahci scsi_transport_sas scsi_transport_fc mlx4_core ib_uverbs
> raid_class scsi_tgt ib_umad ib_ipoib ib_cm ib_sa ib_mad ib_core
> ib_addr
> [  495.591092]
> [  495.591167] Pid: 5276, comm: LIO_iblock Tainted: G        W
> 2.6.39.4-dbg+ #14435 Xyratex Storage Server        /HS-1235T-ATX
> [  495.591432] RIP: 0010:[<ffffffffa01e4bed>]  [<ffffffffa01e4bed>]
> qla_tgt_free_cmd+0x2d/0x50 [qla2xxx]
> [  495.591614] RSP: 0018:ffff8805d5b25bb0  EFLAGS: 00010202
> [  495.591700] RAX: 000000000000000a RBX: ffff8805b1c50040 RCX: 0000000000000000
> [  495.591793] RDX: 0000000000000000 RSI: ffff8805b1c50000 RDI: ffff8805b1c50000
> [  495.591886] RBP: ffff8805d5b25bc0 R08: 0000000000000001 R09: 0000000000000000
> [  495.591979] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88062314f1a8
> [  495.592072] R13: ffff8805b1c50168 R14: ffff88062314f140 R15: 0000000000000286
> [  495.592166] FS:  0000000000000000(0000) GS:ffff880c3ea00000(0000)
> knlGS:0000000000000000
> [  495.592290] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  495.592387] CR2: 00007f59d60a0330 CR3: 0000000001a03000 CR4: 00000000000006e0
> [  495.592486] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [  495.592583] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [  495.592682] Process LIO_iblock (pid: 5276, threadinfo
> ffff8805d5b24000, task ffff8805d5a6c4e0)
> [  495.592809] Stack:
> [  495.592907]  ffff8805d5b25bd0 0000000000000286 ffff8805d5b25bd0
> ffffffffa0144891
> [  495.593244]  ffff8805d5b25c10 ffffffffa030a40e 0000000a00000002
> ffff8805b1c50168
> [  495.593533]  ffffffffa030a390 0000000000000286 0000000000000002
> 0000000000000000
> [  495.593819] Call Trace:
> [  495.593901]  [<ffffffffa0144891>] tcm_qla2xxx_release_cmd+0x21/0x30
> [tcm_qla2xxx]
> [  495.594037]  [<ffffffffa030a40e>] target_release_cmd_kref+0x7e/0xe0
> [target_core_mod]
> [  495.594169]  [<ffffffffa030a390>] ?
> target_splice_sess_cmd_list+0xd0/0xd0 [target_core_mod]
> [  495.594300]  [<ffffffff8129ba67>] kref_put+0x37/0x70
> [  495.594395]  [<ffffffffa030a48c>] target_put_sess_cmd+0x1c/0x20
> [target_core_mod]
> [  495.594520]  [<ffffffffa014485f>]
> tcm_qla2xxx_check_stop_free+0x4f/0x60 [tcm_qla2xxx]
> [  495.594651]  [<ffffffffa0309867>]
> transport_cmd_check_stop+0x157/0x210 [target_core_mod]
> [  495.594784]  [<ffffffffa0309935>]
> transport_cmd_check_stop_to_fabric+0x15/0x20 [target_core_mod]
> [  495.594919]  [<ffffffffa030ea2e>]
> transport_cmd_finish_abort+0x2e/0x70 [target_core_mod]
> [  495.595052]  [<ffffffffa0307185>]
> core_tmr_handle_tas_abort+0x35/0x70 [target_core_mod]
> [  495.595185]  [<ffffffffa030768c>] core_tmr_lun_reset+0x3dc/0x950
> [target_core_mod]
> [  495.595313]  [<ffffffff81088469>] ? trace_hardirqs_off_caller+0x29/0xc0
> [  495.595407]  [<ffffffff8108850d>] ? trace_hardirqs_off+0xd/0x10
> [  495.595509]  [<ffffffff81557810>] ? _raw_spin_unlock_irqrestore+0x40/0x80
> [  495.595611]  [<ffffffffa0310a2c>]
> transport_generic_do_tmr+0x9c/0xc0 [target_core_mod]
> [  495.595743]  [<ffffffffa0310cb8>]
> transport_processing_thread+0x268/0x460 [target_core_mod]
> [  495.595872]  [<ffffffff810742f0>] ? wake_up_bit+0x40/0x40
> [  495.595970]  [<ffffffffa0310a50>] ?
> transport_generic_do_tmr+0xc0/0xc0 [target_core_mod]
> [  495.596095]  [<ffffffff81073d4e>] kthread+0xbe/0xd0
> [  495.596195]  [<ffffffff81557dd4>] ? retint_restore_args+0x13/0x13
> [  495.596292]  [<ffffffff8108e51d>] ? trace_hardirqs_on_caller+0x14d/0x190
> [  495.596391]  [<ffffffff815611a4>] kernel_thread_helper+0x4/0x10
> [  495.596488]  [<ffffffff81557dd4>] ? retint_restore_args+0x13/0x13
> [  495.596586]  [<ffffffff81073c90>] ? __init_kthread_worker+0x70/0x70
> [  495.596683]  [<ffffffff815611a0>] ? gs_change+0x13/0x13
> [  495.596781] Code: 89 e5 48 83 ec 10 0f 1f 44 00 00 0f b6 87 40 05
> 00 00 48 89 fe a8 02 75 12 a8 04 75 10 48 8b 3d fa 3b 02 00 e8 75 0e
> f6 e0 c9 c3 <0f> 0b 48 8b bf 48 05 00 00 48 89 75 f8 e8 81 0c f6 e0 48
> 8b 75
> [  495.599869] RIP  [<ffffffffa01e4bed>] qla_tgt_free_cmd+0x2d/0x50 [qla2xxx]
> [  495.600011]  RSP <ffff8805d5b25bb0>
> [  495.600316] ---[ end trace 71e099d8f1a84ca8 ]---


* Re: [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support
  2011-12-21 21:48     ` Nicholas A. Bellinger
@ 2011-12-21 22:46       ` Roland Dreier
  0 siblings, 0 replies; 19+ messages in thread
From: Roland Dreier @ 2011-12-21 22:46 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

On Wed, Dec 21, 2011 at 1:48 PM, Nicholas A. Bellinger
<nab@linux-iscsi.org> wrote:
> So looking at qla_tgt-3.3, qla_target.c:2591 lines up with the
> following:
>
> void qla_tgt_free_cmd(struct qla_tgt_cmd *cmd)
> {
>        BUG_ON(cmd->sg_mapped);
>
>        if (unlikely(cmd->free_sg))
>                kfree(cmd->sg);
>        kmem_cache_free(qla_tgt_cmd_cachep, cmd);
> }

Sorry, yeah, grabbed the wrong chunk when quoting your mail.

This is the real BUG we're hitting.

 - R.


* Re: [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target
  2011-12-18  2:02 ` [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target Nicholas A. Bellinger
@ 2011-12-22  8:10   ` Roland Dreier
  2011-12-23 21:51     ` Nicholas A. Bellinger
  0 siblings, 1 reply; 19+ messages in thread
From: Roland Dreier @ 2011-12-22  8:10 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

Hi Nic,

On Sat, Dec 17, 2011 at 6:02 PM, Nicholas A. Bellinger
<nab@linux-iscsi.org> wrote:
> +/*
> + * Called from qla_target_template->free_cmd(), and will call
> + * tcm_qla2xxx_release_cmd via normal struct target_core_fabric_ops
> + * release callback.  qla_hw_data->hardware_lock is expected to be held
> + */
> +void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *cmd)
> +{
> +       barrier();
> +       /*
> +        * Handle tcm_qla2xxx_init_cmd() -> transport_get_lun_for_cmd()
> +        * failure case where cmd->se_cmd.se_dev was not assigned, and
> +        * a call to transport_generic_free_cmd_intr() is not possible..
> +        */
> +       if (!cmd->se_cmd.se_dev) {
> +               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
> +               transport_generic_free_cmd(&cmd->se_cmd, 0);
> +               return;
> +       }
> +
> +       if (!atomic_read(&cmd->se_cmd.t_transport_complete))
> +               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
> +
> +       INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free);
> +       queue_work(tcm_qla2xxx_free_wq, &cmd->work);
> +}

can you explain why you do the second target_put_sess_cmd()
without a "return" here?  (the one when t_transport_complete == 0)

It seems this leads to use-after-free ... suppose cmd->execute_task in
__transport_execute_tasks() returns an error (eg due to malformed
emulated command from the initiator -- the easiest way to trigger this
is to do something like "sg_raw /dev/sda 12 00 00 00 00 00" on a
tcm_qla2xxx exported LUN).

Then we'll call transport_generic_request_failure()  which will end up
calling transport_cmd_check_stop_to_fabric(), which will call into
tcm_qla2xxx_check_stop_free(), which will do target_put_sess_cmd()
so we'll be down to 1 reference on the cmd.

Then when the HW finishes sending the SCSI status back, we'll
go into qla_tgt_do_ctio_completion(), which will call into ->free_cmd()
and end up in the function quoted above.

Since we failed the command we never call transport_complete_task()
so t_transport_complete will be 0 and we'll call target_put_sess_cmd()
a second time and therefore free the command immediately, and then
go ahead and queue up the work to free it a second time.
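
Spelling that out as a reference-count timeline (my reading of the
sequence above; the starting count of 2 assumes one reference held by
the fabric side and one taken via target_get_sess_cmd()):

/*
 * cmd arrives, references taken               kref = 2
 *
 * cmd->execute_task fails:
 *   transport_generic_request_failure()
 *     -> transport_cmd_check_stop_to_fabric()
 *       -> tcm_qla2xxx_check_stop_free()
 *         -> target_put_sess_cmd()            kref = 1
 *
 * CTIO completion for the SCSI status:
 *   qla_tgt_do_ctio_completion()
 *     -> tcm_qla2xxx_free_cmd()
 *       t_transport_complete == 0, so:
 *         target_put_sess_cmd()               kref = 0, cmd is freed
 *       queue_work(&cmd->work)                <-- touches freed memory
 */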

You can make this 100% reproducible and fatal by booting with
"slub_debug=FZUP" (or whatever the corresponding SLAB config
option is, I forget), and then doing some malformed emulated
command that ends up returning bad SCSI status (like the sg_raw
example above).

I still don't understand the command reference counting and freeing
scheme well enough to know what the right fix is here.  Are we
supposed to return after that put in the transport_complete==0 case?
But I thought we weren't supposed to free commands from interrupt
context, although I don't know what's wrong with doing what ends
up being just a kmem_cache_free() in the end.  So is doing the put in
the free_cmd function (which is called from the CTIO completion
handling interrupt context) OK?

Why do we have that put if t_transport_complete isn't set, anyway?
Doesn't the request failure path drop the reference?  Or is the problem
that we return SCSI status without setting t_transport_complete?

Thoughts?

 - R.


* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2011-12-21 17:11 ` [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Christoph Hellwig
@ 2011-12-22 22:25   ` Andrew Vasquez
  2011-12-23 21:59     ` Nicholas A. Bellinger
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Vasquez @ 2011-12-22 22:25 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Nicholas A. Bellinger, target-devel, linux-scsi,
	Giridhar Malavali, James Bottomley, Roland Dreier, Joern Engel,
	Madhuranath Iyengar

On Wed, 21 Dec 2011, Christoph Hellwig wrote:

> I think the most important item is to sort out the mess around the old
> generation qla23xx support.  The way the code currently sprinkles ifs
> around that is a complete mess.  Given that the qla23xx support has as
> far as I know zero test coverage, and has been EOLed by qlogic I see
> no reason to keep it around.
> 
> Anyone disagreeing with that?
> 

Christoph,

From an engineering and support perspective it doesn't make sense to
keep the (unsupported and untested) pre-ISP24xx code around.  Going
forward, once internal resources can be allocated, QLogic would like
to see support for all ISP24xx and above hardware.

Regards,
Andrew Vasquez



* Re: [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target
  2011-12-22  8:10   ` Roland Dreier
@ 2011-12-23 21:51     ` Nicholas A. Bellinger
  2012-01-02 21:38       ` Roland Dreier
  0 siblings, 1 reply; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-23 21:51 UTC (permalink / raw)
  To: Roland Dreier
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

On Thu, 2011-12-22 at 00:10 -0800, Roland Dreier wrote:
> Hi Nic,
> 
> On Sat, Dec 17, 2011 at 6:02 PM, Nicholas A. Bellinger
> <nab@linux-iscsi.org> wrote:
> > +/*
> > + * Called from qla_target_template->free_cmd(), and will call
> > + * tcm_qla2xxx_release_cmd via normal struct target_core_fabric_ops
> > + * release callback.  qla_hw_data->hardware_lock is expected to be held
> > + */
> > +void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *cmd)
> > +{
> > +       barrier();
> > +       /*
> > +        * Handle tcm_qla2xxx_init_cmd() -> transport_get_lun_for_cmd()
> > +        * failure case where cmd->se_cmd.se_dev was not assigned, and
> > +        * a call to transport_generic_free_cmd_intr() is not possible..
> > +        */
> > +       if (!cmd->se_cmd.se_dev) {
> > +               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
> > +               transport_generic_free_cmd(&cmd->se_cmd, 0);
> > +               return;
> > +       }
> > +
> > +       if (!atomic_read(&cmd->se_cmd.t_transport_complete))
> > +               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
> > +
> > +       INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free);
> > +       queue_work(tcm_qla2xxx_free_wq, &cmd->work);
> > +}
> 
> can you explain why you do the second target_put_sess_cmd()
> without a "return" here?  (the one when t_transport_complete == 0)
> 
> It seems this leads to use-after-free ... suppose cmd->execute_task in
> __transport_execute_tasks() returns an error (eg due to malformed
> emulated command from the initiator -- the easiest way to trigger this
> is to do something like "sg_raw /dev/sda 12 00 00 00 00 00" on a
> tcm_qla2xxx exported LUN).
> 
> Then we'll call transport_generic_request_failure()  which will end up
> calling transport_cmd_check_stop_to_fabric(), which will call into
> tcm_qla2xxx_check_stop_free(), which will do target_put_sess_cmd()
> so we'll be down to 1 reference on the cmd.
> 
> Then when the HW finishes sending the SCSI status back, we'll
> go into qla_tgt_do_ctio_completion(), which will call into ->free_cmd()
> and end up in the function quoted above.
> 
> Since we failed the command we never call transport_complete_task()
> so t_transport_complete will be 0 and we'll call target_put_sess_cmd()
> a second time and therefore free the command immediately, and then
> go ahead and queue up the work to free it a second time.
> 
> You can make this 100% reproducible and fatal by booting with
> "slub_debug=FZUP" (or whatever the corresponding SLAB config
> option is, I forget), and then doing some malformed emulated
> command that ends up returning bad SCSI status (like the sg_raw
> example above).
> 
> I still don't understand the command reference counting and freeing
> scheme well enough to know what the right fix is here.  Are we
> supposed to return after that put in the transport_complete==0 case?
> But I thought we weren't supposed to free commands from interrupt
> context, although I don't know what's wrong with doing what ends
> up being just a kmem_cache_free() in the end.  So is doing the put in
> the free_cmd function (which is called from the CTIO completion
> handling interrupt context) OK?
> 
> Why do we have that put if t_transport_complete isn't set, anyway?
> Doesn't the request failure path drop the reference?  Or is the problem
> that we return SCSI status without setting t_transport_complete?
> 
> Thoughts?
> 

I believe this is actually leftover cruft from when
qla_tgt_cmd->cmd_stop_free had to be explicitly set in the failure
release path in tcm_qla2xxx_free_cmd().  Eg:

        if (!atomic_read(&cmd->se_cmd.t_task->t_transport_complete)) {
                atomic_set(&cmd->cmd_stop_free, 1);
                smp_mb__after_atomic_dec();
        }


So the (t_transport_complete == 0) check causing the issue above should
be safe to remove now..  The same is true for the !cmd->se_cmd.se_dev
check in tcm_qla2xxx_free_cmd() as well.

I'll get this addressed in lio-core/qla_tgt-3.3 shortly.

Thanks,

--nab






* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2011-12-22 22:25   ` Andrew Vasquez
@ 2011-12-23 21:59     ` Nicholas A. Bellinger
  0 siblings, 0 replies; 19+ messages in thread
From: Nicholas A. Bellinger @ 2011-12-23 21:59 UTC (permalink / raw)
  To: Andrew Vasquez
  Cc: Christoph Hellwig, target-devel, linux-scsi, Giridhar Malavali,
	James Bottomley, Roland Dreier, Joern Engel, Madhuranath Iyengar

On Thu, 2011-12-22 at 14:25 -0800, Andrew Vasquez wrote:
> On Wed, 21 Dec 2011, Christoph Hellwig wrote:
> 
> > I think the most important item is to sort out the mess around the old
> > generation qla23xx support.  The way the code currently sprinkles ifs
> > around that is a complete mess.  Given that the qla23xx support has as
> > far as I know zero test coverage, and has been EOLed by qlogic I see
> > no reason to keep it around.
> > 
> > Anyone disagreeing with that?
> > 
> 
> Christoph,
> 
> From an engineering and support perspective it doesn't make sense to
> keep the (unsupported and untested) pre-ISP24xx code around.  Going
> forward, once internal resources can be allocated, QLogic would like
> to see support for all ISP24xx and above hardware.
> 
> Regards,
> Andrew Vasquez
> 

Hi Andrew,

Thanks for the official clarification here from QLogic wrt pre-24xx
hardware support in qla_target.c.  We will go ahead and begin dropping
the legacy support, and plan to have all the old code removed for an
RFC-v5 series to be posted after the holidays.

Thank you,

--nab


* Re: [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target
  2011-12-23 21:51     ` Nicholas A. Bellinger
@ 2012-01-02 21:38       ` Roland Dreier
  2012-01-10  0:24         ` Nicholas A. Bellinger
  0 siblings, 1 reply; 19+ messages in thread
From: Roland Dreier @ 2012-01-02 21:38 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

On Fri, Dec 23, 2011 at 1:51 PM, Nicholas A. Bellinger
<nab@linux-iscsi.org> wrote:
> So the (t_transport_complete == 0) check causing the issue above should
> be safe to remove now..  The same is true for the !cmd->se_cmd.se_dev
> check in tcm_qla2xxx_free_cmd() as well.

Basically you're saying the following is the right fix?

--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -401,21 +401,6 @@ static void tcm_qla2xxx_complete_free(struct
work_struct *work)
  */
 void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *cmd)
 {
-       barrier();
-       /*
-        * Handle tcm_qla2xxx_init_cmd() -> transport_get_lun_for_cmd()
-        * failure case where cmd->se_cmd.se_dev was not assigned, and
-        * a call to transport_generic_free_cmd_intr() is not possible..
-        */
-       if (!cmd->se_cmd.se_dev) {
-               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
-               transport_generic_free_cmd(&cmd->se_cmd, 0);
-               return;
-       }
-
-       if (!atomic_read(&cmd->se_cmd.t_transport_complete))
-               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
-
        INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free);
        queue_work(tcm_qla2xxx_free_wq, &cmd->work);
 }

(I'm deleting the barrier() as part of this too because it's almost certainly
wrong... an indirect call through a function pointer that gets here is just
as much of a compiler optimization barrier anyway).
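
(For the record, barrier() in the kernel is nothing more than a
compiler barrier:

        #define barrier() __asm__ __volatile__("" : : : "memory")

and since the compiler cannot see into an indirect call such as
qla_target_template->free_cmd(), it already has to assume that all
reachable memory may be read or written by the callee, so it cannot
reorder or cache accesses across that call site any more than it
could across barrier() itself.)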

 - R.


* Re: [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target
  2012-01-02 21:38       ` Roland Dreier
@ 2012-01-10  0:24         ` Nicholas A. Bellinger
  0 siblings, 0 replies; 19+ messages in thread
From: Nicholas A. Bellinger @ 2012-01-10  0:24 UTC (permalink / raw)
  To: Roland Dreier
  Cc: target-devel, linux-scsi, Andrew Vasquez, Giridhar Malavali,
	Christoph Hellwig, James Bottomley, Joern Engel,
	Madhuranath Iyengar

Hi Roland & Co,

My apologies for the delayed response here, still catching up after the
holidays..


On Mon, 2012-01-02 at 13:38 -0800, Roland Dreier wrote:
> On Fri, Dec 23, 2011 at 1:51 PM, Nicholas A. Bellinger
> <nab@linux-iscsi.org> wrote:
> > So the (t_transport_complete == 0) check causing the issue above should
> > be safe to remove now..  The same is true for the !cmd->se_cmd.se_dev
> > check in tcm_qla2xxx_free_cmd() as well.
> 
> Basically you're saying the following is the right fix?
> 
> --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
> +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
> @@ -401,21 +401,6 @@ static void tcm_qla2xxx_complete_free(struct
> work_struct *work)
>   */
>  void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *cmd)
>  {
> -       barrier();
> -       /*
> -        * Handle tcm_qla2xxx_init_cmd() -> transport_get_lun_for_cmd()
> -        * failure case where cmd->se_cmd.se_dev was not assigned, and
> -        * a call to transport_generic_free_cmd_intr() is not possible..
> -        */
> -       if (!cmd->se_cmd.se_dev) {
> -               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
> -               transport_generic_free_cmd(&cmd->se_cmd, 0);
> -               return;
> -       }
> -
> -       if (!atomic_read(&cmd->se_cmd.t_transport_complete))
> -               target_put_sess_cmd(cmd->se_cmd.se_sess, &cmd->se_cmd);
> -
>         INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free);
>         queue_work(tcm_qla2xxx_free_wq, &cmd->work);
>  }
> 
> (I'm deleting the barrier() as part of this too because it's almost certainly
> wrong... an indirect call through a function pointer that gets here is just
> as much of a compiler optimization barrier anyway).
> 

Yes, this should be fine.  I still need to verify this internally, and
will push this into lio-core/qla_tgt-3.3 after testing.

Thanks,

--nab




* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2012-05-14 23:12     ` Nicholas A. Bellinger
@ 2012-05-15 14:21       ` Bart Van Assche
  0 siblings, 0 replies; 19+ messages in thread
From: Bart Van Assche @ 2012-05-15 14:21 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: linux-scsi, Roland Dreier, target-devel, James Bottomley

On 05/14/12 23:12, Nicholas A. Bellinger wrote:

> On Mon, 2012-05-14 at 12:50 +0000, Bart Van Assche wrote:
>> Note: in the kernel module with the shared code an interface will have
>> to be added that allows the initiator and the target module to enumerate
>> qla2xxx HBA ports. Maybe it's a good idea to add an interface similar to
>> the add_one() / remove_one() callback functions present in the Linux
>> InfiniBand stack.
> 
> Considering the time scales involved with doing this type of testing
> (esp. the latter, which can take months) and given the amount of LLD
> code involved (40K LOC), a large re-org that affects existing FC
> initiator mode operation is something we'd like to avoid for now.


Let me summarize what I've noticed after I had a (very) short look at
the proposed qla2xxx driver changes:
- The proposed changes do not allow enabling initiator and target mode
  simultaneously on a single FC port.
- Most of the initiator-mode SCSI host sysfs attributes are relevant in
  target mode too (e.g. NV-RAM access) and hence should be moved from
  the SCSI host to somewhere else. A good example is the Linux IB
  stack, where there is a clean separation between HCA-specific sysfs
  attributes and sysfs attributes associated with higher-level services.
- The posted patch does not allow adding more FC functionality in a
  clean way, e.g. IP over FC or FC-VI. (Note: IP over FC has been
  standardized through RFC 4338 - http://tools.ietf.org/html/rfc4338.)

So I'm not sure whether the proposed approach is acceptable for a
mainline kernel driver. However, I'm not a SCSI maintainer and hence I
do not have a decisive voice in this matter.

Bart.


* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2012-05-14 12:50   ` Bart Van Assche
@ 2012-05-14 23:12     ` Nicholas A. Bellinger
  2012-05-15 14:21       ` Bart Van Assche
  0 siblings, 1 reply; 19+ messages in thread
From: Nicholas A. Bellinger @ 2012-05-14 23:12 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-scsi, Roland Dreier, target-devel, James Bottomley

On Mon, 2012-05-14 at 12:50 +0000, Bart Van Assche wrote:
> On 05/14/12 03:29, Nicholas A. Bellinger wrote:
> 
> > That would probably work, but I think it's still just a band-aid on the
> > underlying issue of SCSI LLDs always kicking off initiator mode enable /
> > disable operations from within PCI *_probe_one() / *_remove_one() code.
> 
> 
> Maybe the following makes sense (I'm not familiar with the qla2xxx
> driver nor with FC): split the qla2xxx driver into three kernel modules -
> a kernel module with the code that is shared by initiator and target
> mode, a kernel module with the initiator functionality and a kernel
> module with the target functionality. This will allow users to choose
> which functionality to enable by loading the proper kernel module(s).

Seriously, this involves high-level re-factoring of FC HBA drivers with
10x the moving parts and 100x the mission-critical install base of your
other HW example.

So yes, wanting to re-factor logic out into common code for initiator /
target mode is a good starting point, but without getting into the
real-deal FC HW specifics of how this would work for running code, along
with tangible HBA vendor interest backing it, this type of statement
quickly falls into the realm of wishful thinking.

> Note: in the kernel module with the shared code an interface will have
> to be added that allows the initiator and the target module to enumerate
> qla2xxx HBA ports. Maybe it's a good idea to add an interface similar to
> the add_one() / remove_one() callback functions present in the Linux
> InfiniBand stack.
> 

Roland has had a few thoughts to this end about incremental improvements
post 3.5 merge, but the target team has been primarily interested in
fixing tcm_qla2xxx bugs and ensuring the qla2xxx LLD changes for target
mode do not cause FC initiator mode regressions.  

Considering the time scales involved with doing this type of testing
(esp. the latter, which can take months) and given the amount of LLD
code involved (40K LOC), a large re-org that affects existing FC
initiator mode operation is something we'd like to avoid for now.

Thanks,

--nab


* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2012-05-14  3:29 ` Nicholas A. Bellinger
@ 2012-05-14 12:50   ` Bart Van Assche
  2012-05-14 23:12     ` Nicholas A. Bellinger
  0 siblings, 1 reply; 19+ messages in thread
From: Bart Van Assche @ 2012-05-14 12:50 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: linux-scsi, Roland Dreier, target-devel, James Bottomley

On 05/14/12 03:29, Nicholas A. Bellinger wrote:

> That would probably work, but I think it's still just a band-aid on the
> underlying issue of SCSI LLDs always kicking off initiator mode enable /
> disable operations from within PCI *_probe_one() / *_remove_one() code.


Maybe the following makes sense (I'm not familiar with the qla2xxx
driver nor with FC): split the qla2xxx driver into three kernel modules -
a kernel module with the code that is shared by initiator and target
mode, a kernel module with the initiator functionality and a kernel
module with the target functionality. This will allow users to choose
which functionality to enable by loading the proper kernel module(s).
Note: in the kernel module with the shared code an interface will have
to be added that allows the initiator and the target module to enumerate
qla2xxx HBA ports. Maybe it's a good idea to add an interface similar to
the add_one() / remove_one() callback functions present in the Linux
InfiniBand stack.
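
To make that concrete, here is a minimal sketch of the kind of client
interface I have in mind, modeled on struct ib_client /
ib_register_client() from the IB stack (all of the qla2xxx-side names
below are hypothetical; nothing like this exists in the driver today):

struct qla2xxx_client {
        const char *name;
        /* invoked once for every HBA port known to the shared module */
        void (*add_one)(struct scsi_qla_host *vha);
        /* invoked when a port (or the whole HBA) goes away */
        void (*remove_one)(struct scsi_qla_host *vha);
        struct list_head list;
};

int qla2xxx_register_client(struct qla2xxx_client *client);
void qla2xxx_unregister_client(struct qla2xxx_client *client);

The initiator and target modules would then each register such a
client from their module_init(), similar to how ib_srp and ib_srpt
hook into the IB core today.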

Bart.


* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
  2012-05-13 15:55 Bart Van Assche
@ 2012-05-14  3:29 ` Nicholas A. Bellinger
  2012-05-14 12:50   ` Bart Van Assche
  0 siblings, 1 reply; 19+ messages in thread
From: Nicholas A. Bellinger @ 2012-05-14  3:29 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-scsi, Roland Dreier, target-devel, James Bottomley

On Sun, 2012-05-13 at 15:55 +0000, Bart Van Assche wrote:
> On Sunday December 18, 2011 Nicholas Bellinger wrote:
> > So to get the ball rolling on remaining items, one question is still how to
> > resolve mixed target/initiator mode operation on a HW per port context basis..?
> > 
> > This is currently done with a qla2xxx module parameter, but to do mixed mode
> > properly we will need something smarter between scsi-core and target-core ports.
> > Note we currently set qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED, so by default
> > patch #1 will effectively disable initiator mode by skipping scsi_scan_host()
> > from being called in to avoid scsi-core timeouts when performing immediate
> > transition from initiator mode -> target mode via ISP reset.
> > 
> > What we would like to eventually do is run qla2xxx LLD to allow both initiator
> > and target mode access based on the physical HW port.  We tried some simple
> > qla_target.c changes to make this work, but to really do it properly
> > and address current qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED usage it will
> > need to involve scsi-core so that individual HW port can be configured and
> > dynamically changed across different access modes.
> 
> (sorry for the late reply)
> 
> Dynamically switching between initiator mode, target mode or even using both
> modes simultaneously is already possible for iSCSI over Ethernet and for SRP
> over InfiniBand with the current SCSI core.

Correct, because the implementation of these two examples is not
dependent upon the registering of any LLD logic within scsi-core in
order to function in target mode.  We've always been able to avoid these
types of conflicts between the scsi-core and target-core subsystems up
until now because initiator + target have been implemented as logically
separate drivers.

> Why would the SCSI core have to be modified in order to make the same possible
> for FCP? Am I missing something?

So it's nothing fabric-specific to FCP of course; look at software +
Intel 82599 HW offload in tcm_fc(FCoE) for example, where initiator and
target mode are logically separate drivers based upon common libfc
code..

But the underlying issue with the current implementation of HW FC HBAs
such as qla2xxx is that only a small subset of code is actually made
common (at least in the fabric I/O codepath sense) amongst HW FC LLDs
within libfc.  It's my understanding that since this type of code has
historically been very driver/vendor specific, making it (more) common
for individual HW FC HBAs hasn't been worth pursuing benefit-wise for
vendors, given the amount of re-factoring + regression risk involved
for existing initiator-mode operation.

That being the case, qla_target.c ends up depending upon a lot of
LLD-specific code (some of which is not initiator specific or made
common in libfc), so at least in the current implementation it really
doesn't make sense to try to push this logic back out into tcm_qla2xxx
code..

> If SCSI initiator timeouts during initiator-to-target mode transitions are an
> issue, why not abort pending SCSI commands before performing the transition?
> 

That would probably work, but I think it's still just a band-aid on the
underlying issue of SCSI LLDs always kicking off initiator mode enable /
disable operations from within PCI *_probe_one() / *_remove_one() code.

--nab




* Re: [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module
@ 2012-05-13 15:55 Bart Van Assche
  2012-05-14  3:29 ` Nicholas A. Bellinger
  0 siblings, 1 reply; 19+ messages in thread
From: Bart Van Assche @ 2012-05-13 15:55 UTC (permalink / raw)
  To: linux-scsi, Nicholas A. Bellinger, Roland Dreier, target-devel,
	James Bottomley

On Sunday December 18, 2011 Nicholas Bellinger wrote:
> So to get the ball rolling on remaining items, one question is still how to
> resolve mixed target/initiator mode operation on a HW per port context basis..?
> 
> This is currently done with a qla2xxx module parameter, but to do mixed mode
> properly we will need something smarter between scsi-core and target-core ports.
> Note we currently set qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED, so by default
> patch #1 will effectively disable initiator mode by skipping scsi_scan_host()
> from being called in to avoid scsi-core timeouts when performing immediate
> transition from initiator mode -> target mode via ISP reset.
> 
> What we would like to eventually do is run qla2xxx LLD to allow both initiator
> and target mode access based on the physical HW port.  We tried some simple
> qla_target.c changes to make this work, but to really do it properly
> and address current qlini_mode = QLA2XXX_INI_MODE_STR_DISABLED usage it will
> need to involve scsi-core so that individual HW port can be configured and
> dynamically changed across different access modes.

(sorry for the late reply)

Dynamically switching between initiator mode, target mode or even using both
modes simultaneously is already possible for iSCSI over Ethernet and for SRP
over InfiniBand with the current SCSI core. Why would the SCSI core have to be
modified in order to make the same possible for FCP? Am I missing something?
If SCSI initiator timeouts during initiator-to-target mode transitions are an
issue, why not abort pending SCSI commands before performing the transition?

Bart.


end of thread

Thread overview: 19+ messages
2011-12-18  2:02 [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Nicholas A. Bellinger
2011-12-18  2:02 ` [RFC-v4 1/3] qla2xxx: Add LLD internal target-mode support Nicholas A. Bellinger
2011-12-19 22:59   ` Roland Dreier
2011-12-21 21:48     ` Nicholas A. Bellinger
2011-12-21 22:46       ` Roland Dreier
2011-12-18  2:02 ` [RFC-v4 2/3] qla2xxx: Enable 2xxx series LLD target mode support Nicholas A. Bellinger
2011-12-18  2:02 ` [RFC-v4 3/3] qla2xxx: Add tcm_qla2xxx fabric module for mainline target Nicholas A. Bellinger
2011-12-22  8:10   ` Roland Dreier
2011-12-23 21:51     ` Nicholas A. Bellinger
2012-01-02 21:38       ` Roland Dreier
2012-01-10  0:24         ` Nicholas A. Bellinger
2011-12-21 17:11 ` [RFC-v4 0/3] qla2xxx: v3.4 target mode LLD changes + tcm_qla2xxx fabric module Christoph Hellwig
2011-12-22 22:25   ` Andrew Vasquez
2011-12-23 21:59     ` Nicholas A. Bellinger
2012-05-13 15:55 Bart Van Assche
2012-05-14  3:29 ` Nicholas A. Bellinger
2012-05-14 12:50   ` Bart Van Assche
2012-05-14 23:12     ` Nicholas A. Bellinger
2012-05-15 14:21       ` Bart Van Assche
