* [RFC 00/12] iSCSI target v4.1.0-rc1 series
@ 2011-03-02  3:33 Nicholas A. Bellinger
  2011-03-02  3:33 ` [RFC 01/12] iscsi: Resolve iscsi_proto.h naming conflicts with drivers/target/iscsi Nicholas A. Bellinger
                   ` (11 more replies)
  0 siblings, 12 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

Greetings folks,

This RFC series is a 'for-39' item for the RisingTide Systems iSCSI target fabric
module, which is compatible with the mainline TCM v4.0 / >= .38-rc target
infrastructure code.  The following commits have been broken up into individual
sections to make reviewing easier for those familiar with the iSCSI protocol.

At this point the code has been converted to use the include/scsi/iscsi_proto.h
structure and flag/bit definitions, and the legacy internal iscsi_protocol.h has
been removed.  There were some minor PDU namespace changes/bugfixes to the existing
iscsi_proto.h code, plus changes to the existing libiscsi LLDs currently using
these defs.  These external changes have been included as patch #1.

From there, patches #2 -> #12 should be semi self-explanatory for iscsi-target
from their individual high level commit messages.  This series includes a number
of recent mainline cleanups for the iSCSI target in lio-core-2.6.git/lio-4.1, and
includes new iscsi_target_stat.c code using iSCSI fabric dependent ConfigFS
statistic groups.  Also note that all legacy ProcFS based code has been removed
from this series.

Please note this code is against the latest TCM v4.0 patch series for-39 here:

[PATCH 0/5] target updates for scsi-post-merge .39 (round one, v2)
http://marc.info/?l=linux-scsi&m=129893866530385&w=2

This full RFC series with latest v4.0 'for-39' code is available here:

git://git.kernel.org/pub/scm/linux/kernel/git/nab/scsi-post-merge-2.6.git for-39-iscsi-target

Please review and comment,

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>

Nicholas Bellinger (12):
  iscsi: Resolve iscsi_proto.h naming conflicts with
    drivers/target/iscsi
  iscsi-target: Add primary iSCSI request/response state machine logic
  iscsi-target: Add TCM v4 compatible ConfigFS control plane
  iscsi-target: Add configfs fabric dependent statistics
  iscsi-target: Add TPG and Device logic
  iscsi-target: Add iSCSI Login Negotiation and Parameter logic
  iscsi-target: Add CHAP Authentication support using libcrypto
  iscsi-target: Add Sequence/PDU list + DataIN response logic
  iscsi-target: Add iSCSI Error Recovery Hierarchy support
  iscsi-target: Add support for task management operations
  iscsi-target: Add misc utility and debug logic
  iscsi-target: Add Makefile/Kconfig and update TCM top level

 drivers/infiniband/ulp/iser/iser_initiator.c      |    2 +-
 drivers/scsi/be2iscsi/be_main.h                   |    4 +-
 drivers/scsi/bnx2i/bnx2i_hwi.c                    |    8 +-
 drivers/scsi/bnx2i/bnx2i_iscsi.c                  |    2 +-
 drivers/scsi/libiscsi.c                           |    6 +-
 drivers/target/Kconfig                            |    1 +
 drivers/target/Makefile                           |    1 +
 drivers/target/iscsi/Kconfig                      |   17 +
 drivers/target/iscsi/Makefile                     |   20 +
 drivers/target/iscsi/iscsi_auth_chap.c            |  502 ++
 drivers/target/iscsi/iscsi_auth_chap.h            |   33 +
 drivers/target/iscsi/iscsi_debug.h                |  113 +
 drivers/target/iscsi/iscsi_parameters.c           | 2078 +++++++
 drivers/target/iscsi/iscsi_parameters.h           |  271 +
 drivers/target/iscsi/iscsi_seq_and_pdu_list.c     |  712 +++
 drivers/target/iscsi/iscsi_seq_and_pdu_list.h     |   88 +
 drivers/target/iscsi/iscsi_target.c               | 6043 +++++++++++++++++++++
 drivers/target/iscsi/iscsi_target.h               |   49 +
 drivers/target/iscsi/iscsi_target_configfs.c      | 1617 ++++++
 drivers/target/iscsi/iscsi_target_configfs.h      |    9 +
 drivers/target/iscsi/iscsi_target_core.h          | 1019 ++++
 drivers/target/iscsi/iscsi_target_datain_values.c |  550 ++
 drivers/target/iscsi/iscsi_target_datain_values.h |   16 +
 drivers/target/iscsi/iscsi_target_device.c        |  128 +
 drivers/target/iscsi/iscsi_target_device.h        |    9 +
 drivers/target/iscsi/iscsi_target_erl0.c          | 1086 ++++
 drivers/target/iscsi/iscsi_target_erl0.h          |   19 +
 drivers/target/iscsi/iscsi_target_erl1.c          | 1382 +++++
 drivers/target/iscsi/iscsi_target_erl1.h          |   35 +
 drivers/target/iscsi/iscsi_target_erl2.c          |  535 ++
 drivers/target/iscsi/iscsi_target_erl2.h          |   21 +
 drivers/target/iscsi/iscsi_target_login.c         | 1411 +++++
 drivers/target/iscsi/iscsi_target_login.h         |   15 +
 drivers/target/iscsi/iscsi_target_nego.c          | 1116 ++++
 drivers/target/iscsi/iscsi_target_nego.h          |   20 +
 drivers/target/iscsi/iscsi_target_nodeattrib.c    |  300 +
 drivers/target/iscsi/iscsi_target_nodeattrib.h    |   14 +
 drivers/target/iscsi/iscsi_target_stat.c          |  955 ++++
 drivers/target/iscsi/iscsi_target_stat.h          |   79 +
 drivers/target/iscsi/iscsi_target_tmr.c           |  908 ++++
 drivers/target/iscsi/iscsi_target_tmr.h           |   17 +
 drivers/target/iscsi/iscsi_target_tpg.c           | 1185 ++++
 drivers/target/iscsi/iscsi_target_tpg.h           |   71 +
 drivers/target/iscsi/iscsi_target_util.c          | 2852 ++++++++++
 drivers/target/iscsi/iscsi_target_util.h          |  128 +
 drivers/target/iscsi/iscsi_thread_queue.c         |  635 +++
 drivers/target/iscsi/iscsi_thread_queue.h         |  103 +
 include/scsi/iscsi_proto.h                        |   30 +-
 48 files changed, 26196 insertions(+), 19 deletions(-)
 create mode 100644 drivers/target/iscsi/Kconfig
 create mode 100644 drivers/target/iscsi/Makefile
 create mode 100644 drivers/target/iscsi/iscsi_auth_chap.c
 create mode 100644 drivers/target/iscsi/iscsi_auth_chap.h
 create mode 100644 drivers/target/iscsi/iscsi_debug.h
 create mode 100644 drivers/target/iscsi/iscsi_parameters.c
 create mode 100644 drivers/target/iscsi/iscsi_parameters.h
 create mode 100644 drivers/target/iscsi/iscsi_seq_and_pdu_list.c
 create mode 100644 drivers/target/iscsi/iscsi_seq_and_pdu_list.h
 create mode 100644 drivers/target/iscsi/iscsi_target.c
 create mode 100644 drivers/target/iscsi/iscsi_target.h
 create mode 100644 drivers/target/iscsi/iscsi_target_configfs.c
 create mode 100644 drivers/target/iscsi/iscsi_target_configfs.h
 create mode 100644 drivers/target/iscsi/iscsi_target_core.h
 create mode 100644 drivers/target/iscsi/iscsi_target_datain_values.c
 create mode 100644 drivers/target/iscsi/iscsi_target_datain_values.h
 create mode 100644 drivers/target/iscsi/iscsi_target_device.c
 create mode 100644 drivers/target/iscsi/iscsi_target_device.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl0.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl0.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl1.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl1.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl2.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl2.h
 create mode 100644 drivers/target/iscsi/iscsi_target_login.c
 create mode 100644 drivers/target/iscsi/iscsi_target_login.h
 create mode 100644 drivers/target/iscsi/iscsi_target_nego.c
 create mode 100644 drivers/target/iscsi/iscsi_target_nego.h
 create mode 100644 drivers/target/iscsi/iscsi_target_nodeattrib.c
 create mode 100644 drivers/target/iscsi/iscsi_target_nodeattrib.h
 create mode 100644 drivers/target/iscsi/iscsi_target_stat.c
 create mode 100644 drivers/target/iscsi/iscsi_target_stat.h
 create mode 100644 drivers/target/iscsi/iscsi_target_tmr.c
 create mode 100644 drivers/target/iscsi/iscsi_target_tmr.h
 create mode 100644 drivers/target/iscsi/iscsi_target_tpg.c
 create mode 100644 drivers/target/iscsi/iscsi_target_tpg.h
 create mode 100644 drivers/target/iscsi/iscsi_target_util.c
 create mode 100644 drivers/target/iscsi/iscsi_target_util.h
 create mode 100644 drivers/target/iscsi/iscsi_thread_queue.c
 create mode 100644 drivers/target/iscsi/iscsi_thread_queue.h

-- 
1.7.4.1


* [RFC 01/12] iscsi: Resolve iscsi_proto.h naming conflicts with drivers/target/iscsi
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33 ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch renames the following iscsi_proto.h structures to avoid
namespace issues with drivers/target/iscsi/iscsi_target_core.h:

*) struct iscsi_cmd -> struct iscsi_scsi_cmd
*) struct iscsi_cmd_rsp -> struct iscsi_scsi_rsp
*) struct iscsi_login -> struct iscsi_login_req

This patch also includes the useful ISCSI_FLAG_LOGIN_[CURRENT,NEXT]_STAGE*
and ISCSI_FLAG_SNACK_TYPE_* definitions used by iscsi_target_mod, and
fixes the incorrect definition of struct iscsi_snack to follow
RFC-3720 Section 10.16 (SNACK Request).

Finally, this patch updates libiscsi, iSER, be2iscsi, and bnx2i to
use the updated structure definitions in a handful of locations.
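
As a quick illustration for reviewers (not part of the patch itself), the new
login stage definitions can be used to check whether an initiator is requesting
the transit from LoginOperationalNegotiation (CSG=1) to FullFeaturePhase (NSG=3);
the helper name below is hypothetical:

	/* Sketch only: assumes the <scsi/iscsi_proto.h> definitions added above */
	static int login_requests_ffp(struct iscsi_login_req *login_req)
	{
		uint8_t flags = login_req->flags;

		/* The Transit (T) bit must be set to request a stage change */
		if (!(flags & ISCSI_FLAG_LOGIN_TRANSIT))
			return 0;

		/* CSG == LoginOperationalNegotiation, NSG == FullFeaturePhase */
		return ((flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) ==
				ISCSI_FLAG_LOGIN_CURRENT_STAGE1) &&
		       ((flags & ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK) ==
				ISCSI_FLAG_LOGIN_NEXT_STAGE3);
	}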

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/infiniband/ulp/iser/iser_initiator.c |    2 +-
 drivers/scsi/be2iscsi/be_main.h              |    4 +-
 drivers/scsi/bnx2i/bnx2i_hwi.c               |    8 +++---
 drivers/scsi/bnx2i/bnx2i_iscsi.c             |    2 +-
 drivers/scsi/libiscsi.c                      |    6 ++--
 include/scsi/iscsi_proto.h                   |   30 +++++++++++++++++++-------
 6 files changed, 33 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
index 95a08a8..dd28367 100644
--- a/drivers/infiniband/ulp/iser/iser_initiator.c
+++ b/drivers/infiniband/ulp/iser/iser_initiator.c
@@ -271,7 +271,7 @@ int iser_send_command(struct iscsi_conn *conn,
 	unsigned long edtl;
 	int err;
 	struct iser_data_buf *data_buf;
-	struct iscsi_cmd *hdr =  (struct iscsi_cmd *)task->hdr;
+	struct iscsi_scsi_cmd *hdr =  (struct iscsi_scsi_cmd *)task->hdr;
 	struct scsi_cmnd *sc  =  task->sc;
 	struct iser_tx_desc *tx_desc = &iser_task->desc;
 
diff --git a/drivers/scsi/be2iscsi/be_main.h b/drivers/scsi/be2iscsi/be_main.h
index 90eb74f..f05ada1 100644
--- a/drivers/scsi/be2iscsi/be_main.h
+++ b/drivers/scsi/be2iscsi/be_main.h
@@ -398,7 +398,7 @@ struct amap_pdu_data_out {
 };
 
 struct be_cmd_bhs {
-	struct iscsi_cmd iscsi_hdr;
+	struct iscsi_scsi_cmd iscsi_hdr;
 	unsigned char pad1[16];
 	struct pdu_data_out iscsi_data_pdu;
 	unsigned char pad2[BE_SENSE_INFO_SIZE -
@@ -429,7 +429,7 @@ struct be_nonio_bhs {
 };
 
 struct be_status_bhs {
-	struct iscsi_cmd iscsi_hdr;
+	struct iscsi_scsi_cmd iscsi_hdr;
 	unsigned char pad1[16];
 	/**
 	 * The plus 2 below is to hold the sense info length that gets
diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
index 96505e3..f947c80 100644
--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
+++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
@@ -326,11 +326,11 @@ int bnx2i_send_iscsi_login(struct bnx2i_conn *bnx2i_conn,
 {
 	struct bnx2i_cmd *bnx2i_cmd;
 	struct bnx2i_login_request *login_wqe;
-	struct iscsi_login *login_hdr;
+	struct iscsi_login_req *login_hdr;
 	u32 dword;
 
 	bnx2i_cmd = (struct bnx2i_cmd *)task->dd_data;
-	login_hdr = (struct iscsi_login *)task->hdr;
+	login_hdr = (struct iscsi_login_req *)task->hdr;
 	login_wqe = (struct bnx2i_login_request *)
 						bnx2i_conn->ep->qp.sq_prod_qe;
 
@@ -1288,7 +1288,7 @@ static int bnx2i_process_scsi_cmd_resp(struct iscsi_session *session,
 	struct bnx2i_cmd_response *resp_cqe;
 	struct bnx2i_cmd *bnx2i_cmd;
 	struct iscsi_task *task;
-	struct iscsi_cmd_rsp *hdr;
+	struct iscsi_scsi_rsp *hdr;
 	u32 datalen = 0;
 
 	resp_cqe = (struct bnx2i_cmd_response *)cqe;
@@ -1315,7 +1315,7 @@ static int bnx2i_process_scsi_cmd_resp(struct iscsi_session *session,
 	}
 	bnx2i_iscsi_unmap_sg_list(bnx2i_cmd);
 
-	hdr = (struct iscsi_cmd_rsp *)task->hdr;
+	hdr = (struct iscsi_scsi_rsp *)task->hdr;
 	resp_cqe = (struct bnx2i_cmd_response *)cqe;
 	hdr->opcode = resp_cqe->op_code;
 	hdr->max_cmdsn = cpu_to_be32(resp_cqe->max_cmd_sn);
diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
index f0dce26..8f047ba 100644
--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
+++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
@@ -1203,7 +1203,7 @@ static int bnx2i_task_xmit(struct iscsi_task *task)
 	struct bnx2i_conn *bnx2i_conn = conn->dd_data;
 	struct scsi_cmnd *sc = task->sc;
 	struct bnx2i_cmd *cmd = task->dd_data;
-	struct iscsi_cmd *hdr = (struct iscsi_cmd *) task->hdr;
+	struct iscsi_scsi_cmd *hdr = (struct iscsi_scsi_cmd *) task->hdr;
 
 	/*
 	 * If there is no scsi_cmnd this must be a mgmt task
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index da8b615..61d6420 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -360,7 +360,7 @@ static int iscsi_prep_scsi_cmd_pdu(struct iscsi_task *task)
 	struct iscsi_conn *conn = task->conn;
 	struct iscsi_session *session = conn->session;
 	struct scsi_cmnd *sc = task->sc;
-	struct iscsi_cmd *hdr;
+	struct iscsi_scsi_cmd *hdr;
 	unsigned hdrlength, cmd_len;
 	itt_t itt;
 	int rc;
@@ -374,7 +374,7 @@ static int iscsi_prep_scsi_cmd_pdu(struct iscsi_task *task)
 		if (rc)
 			return rc;
 	}
-	hdr = (struct iscsi_cmd *) task->hdr;
+	hdr = (struct iscsi_scsi_cmd *) task->hdr;
 	itt = hdr->itt;
 	memset(hdr, 0, sizeof(*hdr));
 
@@ -830,7 +830,7 @@ static void iscsi_scsi_cmd_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
 			       struct iscsi_task *task, char *data,
 			       int datalen)
 {
-	struct iscsi_cmd_rsp *rhdr = (struct iscsi_cmd_rsp *)hdr;
+	struct iscsi_scsi_rsp *rhdr = (struct iscsi_scsi_rsp *)hdr;
 	struct iscsi_session *session = conn->session;
 	struct scsi_cmnd *sc = task->sc;
 
diff --git a/include/scsi/iscsi_proto.h b/include/scsi/iscsi_proto.h
index dd0a52c..c3e6d4f 100644
--- a/include/scsi/iscsi_proto.h
+++ b/include/scsi/iscsi_proto.h
@@ -116,7 +116,7 @@ struct iscsi_ahs_hdr {
 #define ISCSI_CDB_SIZE			16
 
 /* iSCSI PDU Header */
-struct iscsi_cmd {
+struct iscsi_scsi_cmd {
 	uint8_t opcode;
 	uint8_t flags;
 	__be16 rsvd2;
@@ -161,7 +161,7 @@ struct iscsi_ecdb_ahdr {
 };
 
 /* SCSI Response Header */
-struct iscsi_cmd_rsp {
+struct iscsi_scsi_rsp {
 	uint8_t opcode;
 	uint8_t flags;
 	uint8_t response;
@@ -406,7 +406,7 @@ struct iscsi_text_rsp {
 };
 
 /* Login Header */
-struct iscsi_login {
+struct iscsi_login_req {
 	uint8_t opcode;
 	uint8_t flags;
 	uint8_t max_version;	/* Max. version supported */
@@ -427,7 +427,13 @@ struct iscsi_login {
 #define ISCSI_FLAG_LOGIN_TRANSIT		0x80
 #define ISCSI_FLAG_LOGIN_CONTINUE		0x40
 #define ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK	0x0C	/* 2 bits */
+#define ISCSI_FLAG_LOGIN_CURRENT_STAGE1		0x04
+#define ISCSI_FLAG_LOGIN_CURRENT_STAGE2		0x08
+#define ISCSI_FLAG_LOGIN_CURRENT_STAGE3		0x0C
 #define ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK	0x03	/* 2 bits */
+#define ISCSI_FLAG_LOGIN_NEXT_STAGE1		0x01
+#define ISCSI_FLAG_LOGIN_NEXT_STAGE2		0x02
+#define ISCSI_FLAG_LOGIN_NEXT_STAGE3		0x03
 
 #define ISCSI_LOGIN_CURRENT_STAGE(flags) \
 	((flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2)
@@ -550,17 +556,25 @@ struct iscsi_logout_rsp {
 struct iscsi_snack {
 	uint8_t opcode;
 	uint8_t flags;
-	uint8_t rsvd2[14];
+	uint8_t rsvd2[2];
+	uint8_t hlength;
+	uint8_t dlength[3];
+	uint8_t lun[8];
 	itt_t	 itt;
+	__be32  ttt;
+	uint8_t rsvd3[4];
+	__be32  exp_statsn;
+	uint8_t rsvd4[8];
 	__be32	begrun;
 	__be32	runlength;
-	__be32	exp_statsn;
-	__be32	rsvd3;
-	__be32	exp_datasn;
-	uint8_t rsvd6[8];
 };
 
 /* SNACK PDU flags */
+#define ISCSI_FLAG_SNACK_TYPE_DATA		0
+#define ISCSI_FLAG_SNACK_TYPE_R2T		0
+#define ISCSI_FLAG_SNACK_TYPE_STATUS		1
+#define ISCSI_FLAG_SNACK_TYPE_DATA_ACK		2
+#define ISCSI_FLAG_SNACK_TYPE_RDATA		3
 #define ISCSI_FLAG_SNACK_TYPE_MASK	0x0F	/* 4 bits */
 
 /* Reject Message Header */
-- 
1.7.4.1


* [RFC 02/12] iscsi-target: Add primary iSCSI request/response state machine logic
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds iscsi_target.[c,h] containing the main iSCSI Request and
Response PDU state machines and accompanying infrastructure code, along with
the base iscsi_target_core.h include for iscsi_target_mod.  This includes
support for all defined iSCSI operation codes from RFC-3720 Section 10.2.1.2
and the primary state machines for the per struct iscsi_conn RX/TX threads.
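
To give reviewers a feel for the RX side, below is a stripped-down, illustrative
sketch (not code from this patch) of the opcode dispatch the per-connection RX
thread performs on each received Basic Header Segment.  The helper name is a
placeholder; the opcode constants are the existing ones from
include/scsi/iscsi_proto.h, and Login PDUs are handled separately by the login
thread:

	/* Sketch only: dispatch across the opcodes the RX state machine handles */
	static int iscsit_rx_dispatch_sketch(struct iscsi_hdr *hdr)
	{
		switch (hdr->opcode & ISCSI_OPCODE_MASK) {
		case ISCSI_OP_SCSI_CMD:		/* SCSI Command */
		case ISCSI_OP_SCSI_DATA_OUT:	/* Data-Out for WRITE payloads */
		case ISCSI_OP_NOOP_OUT:		/* NopOut ping / NopIn response */
		case ISCSI_OP_SCSI_TMFUNC:	/* Task Management Function request */
		case ISCSI_OP_TEXT:		/* Text Request */
		case ISCSI_OP_LOGOUT:		/* Logout Request */
		case ISCSI_OP_SNACK:		/* SNACK Request */
			return 0;	/* each maps to a dedicated PDU handler */
		default:
			return -1;	/* unknown opcode -> Reject per RFC-3720 */
		}
	}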

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target.c      | 6043 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target.h      |   49 +
 drivers/target/iscsi/iscsi_target_core.h | 1019 +++++
 3 files changed, 7111 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target.c
 create mode 100644 drivers/target/iscsi/iscsi_target.h
 create mode 100644 drivers/target/iscsi/iscsi_target_core.h

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
new file mode 100644
index 0000000..99115db
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -0,0 +1,6043 @@
+/*******************************************************************************
+ * This file contains main functions related to the iSCSI Target Core Driver.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/version.h>
+#include <generated/utsrelease.h>
+#include <linux/utsname.h>
+#include <linux/kmod.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/crypto.h>
+#include <asm/unaligned.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_tmr.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_thread_queue.h"
+#include "iscsi_target_configfs.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_tmr.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_stat.h"
+
+struct iscsi_global *iscsi_global;
+
+struct kmem_cache *lio_cmd_cache;
+struct kmem_cache *lio_sess_cache;
+struct kmem_cache *lio_conn_cache;
+struct kmem_cache *lio_qr_cache;
+struct kmem_cache *lio_dr_cache;
+struct kmem_cache *lio_ooo_cache;
+struct kmem_cache *lio_r2t_cache;
+struct kmem_cache *lio_tpg_cache;
+
+static void iscsi_rx_thread_wait_for_TCP(struct iscsi_conn *);
+
+static int iscsi_target_detect(void);
+static int iscsi_target_release(void);
+static int iscsi_handle_immediate_data(struct iscsi_cmd *,
+			unsigned char *buf, __u32);
+static inline int iscsi_send_data_in(struct iscsi_cmd *, struct iscsi_conn *,
+			struct se_unmap_sg *, int *);
+static inline int iscsi_send_logout_response(struct iscsi_cmd *, struct iscsi_conn *);
+static inline int iscsi_send_nopin_response(struct iscsi_cmd *, struct iscsi_conn *);
+static inline int iscsi_send_status(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_send_task_mgt_rsp(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_send_text_rsp(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_send_reject(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_logout_post_handler(struct iscsi_cmd *, struct iscsi_conn *);
+
+struct iscsi_tiqn *core_get_tiqn_for_login(unsigned char *buf)
+{
+	struct iscsi_tiqn *tiqn = NULL;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		if (!(strcmp(tiqn->tiqn, buf))) {
+
+			spin_lock(&tiqn->tiqn_state_lock);
+			if (tiqn->tiqn_state == TIQN_STATE_ACTIVE) {
+				atomic_inc(&tiqn->tiqn_access_count);
+				spin_unlock(&tiqn->tiqn_state_lock);
+				spin_unlock(&iscsi_global->tiqn_lock);
+				return tiqn;
+			}
+			spin_unlock(&tiqn->tiqn_state_lock);
+		}
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	return NULL;
+}
+
+static int core_set_tiqn_shutdown(struct iscsi_tiqn *tiqn)
+{
+	spin_lock(&tiqn->tiqn_state_lock);
+	if (tiqn->tiqn_state == TIQN_STATE_ACTIVE) {
+		tiqn->tiqn_state = TIQN_STATE_SHUTDOWN;
+		spin_unlock(&tiqn->tiqn_state_lock);
+		return 0;
+	}
+	spin_unlock(&tiqn->tiqn_state_lock);
+
+	return -1;
+}
+
+void core_put_tiqn_for_login(struct iscsi_tiqn *tiqn)
+{
+	spin_lock(&tiqn->tiqn_state_lock);
+	atomic_dec(&tiqn->tiqn_access_count);
+	spin_unlock(&tiqn->tiqn_state_lock);
+	return;
+}
+
+/*
+ * Note that IQN formatting is expected to be done in userspace, and
+ * no explict IQN format checks are done here.
+ */
+struct iscsi_tiqn *core_add_tiqn(unsigned char *buf, int *ret)
+{
+	struct iscsi_tiqn *tiqn = NULL;
+
+	if (strlen(buf) > ISCSI_TIQN_LEN) {
+		printk(KERN_ERR "Target IQN exceeds %d bytes\n",
+				ISCSI_TIQN_LEN);
+		*ret = -1;
+		return NULL;
+	}
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		if (!(strcmp(tiqn->tiqn, buf))) {
+			printk(KERN_ERR "Target IQN: %s already exists in Core\n",
+				tiqn->tiqn);
+			spin_unlock(&iscsi_global->tiqn_lock);
+			*ret = -1;
+			return NULL;
+		}
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	tiqn = kzalloc(sizeof(struct iscsi_tiqn), GFP_KERNEL);
+	if (!(tiqn)) {
+		printk(KERN_ERR "Unable to allocate struct iscsi_tiqn\n");
+		*ret = -1;
+		return NULL;
+	}
+
+	sprintf(tiqn->tiqn, "%s", buf);
+	INIT_LIST_HEAD(&tiqn->tiqn_list);
+	INIT_LIST_HEAD(&tiqn->tiqn_tpg_list);
+	spin_lock_init(&tiqn->tiqn_state_lock);
+	spin_lock_init(&tiqn->tiqn_tpg_lock);
+	spin_lock_init(&tiqn->sess_err_stats.lock);
+	spin_lock_init(&tiqn->login_stats.lock);
+	spin_lock_init(&tiqn->logout_stats.lock);
+	tiqn->tiqn_index = iscsi_get_new_index(ISCSI_INST_INDEX);
+	tiqn->tiqn_state = TIQN_STATE_ACTIVE;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_add_tail(&tiqn->tiqn_list, &iscsi_global->g_tiqn_list);
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	printk(KERN_INFO "CORE[0] - Added iSCSI Target IQN: %s\n", tiqn->tiqn);
+
+	return tiqn;
+
+}
+
+int __core_del_tiqn(struct iscsi_tiqn *tiqn)
+{
+	iscsi_disable_tpgs(tiqn);
+	iscsi_remove_tpgs(tiqn);
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_del(&tiqn->tiqn_list);
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	printk(KERN_INFO "CORE[0] - Deleted iSCSI Target IQN: %s\n",
+			tiqn->tiqn);
+	kfree(tiqn);
+
+	return 0;
+}
+
+static void core_wait_for_tiqn(struct iscsi_tiqn *tiqn)
+{
+	/*
+	 * Wait for accesses to said struct iscsi_tiqn to end.
+	 */
+	spin_lock(&tiqn->tiqn_state_lock);
+	while (atomic_read(&tiqn->tiqn_access_count)) {
+		spin_unlock(&tiqn->tiqn_state_lock);
+		msleep(10);
+		spin_lock(&tiqn->tiqn_state_lock);
+	}
+	spin_unlock(&tiqn->tiqn_state_lock);
+}
+
+int core_del_tiqn(struct iscsi_tiqn *tiqn)
+{
+	/*
+	 * core_set_tiqn_shutdown sets tiqn->tiqn_state = TIQN_STATE_SHUTDOWN
+	 * while holding tiqn->tiqn_state_lock.  This means that all subsequent
+	 * attempts to access this struct iscsi_tiqn will fail from both transport
+	 * fabric and control code paths.
+	 */
+	if (core_set_tiqn_shutdown(tiqn) < 0) {
+		printk(KERN_ERR "core_set_tiqn_shutdown() failed\n");
+		return -1;
+	}
+
+	core_wait_for_tiqn(tiqn);
+	return __core_del_tiqn(tiqn);
+}
+
+int core_release_tiqns(void)
+{
+	struct iscsi_tiqn *tiqn, *t_tiqn;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry_safe(tiqn, t_tiqn,
+			&iscsi_global->g_tiqn_list, tiqn_list) {
+
+		spin_lock(&tiqn->tiqn_state_lock);
+		if (tiqn->tiqn_state == TIQN_STATE_ACTIVE) {
+			tiqn->tiqn_state = TIQN_STATE_SHUTDOWN;
+			spin_unlock(&tiqn->tiqn_state_lock);
+			spin_unlock(&iscsi_global->tiqn_lock);
+
+			core_wait_for_tiqn(tiqn);
+			__core_del_tiqn(tiqn);
+
+			spin_lock(&iscsi_global->tiqn_lock);
+			continue;
+		}
+		spin_unlock(&tiqn->tiqn_state_lock);
+
+		spin_lock(&iscsi_global->tiqn_lock);
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	return 0;
+}
+
+int core_access_np(struct iscsi_np *np, struct iscsi_portal_group *tpg)
+{
+	int ret;
+	/*
+	 * Determine if the network portal is accepting storage traffic.
+	 */
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return -1;
+	}
+	if (np->np_login_tpg) {
+		printk(KERN_ERR "np->np_login_tpg() is not NULL!\n");
+		spin_unlock_bh(&np->np_thread_lock);
+		return -1;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+	/*
+	 * Determine if the portal group is accepting storage traffic.
+	 */
+	spin_lock_bh(&tpg->tpg_state_lock);
+	if (tpg->tpg_state != TPG_STATE_ACTIVE) {
+		spin_unlock_bh(&tpg->tpg_state_lock);
+		return -1;
+	}
+	spin_unlock_bh(&tpg->tpg_state_lock);
+
+	/*
+	 * Here we serialize access across the TIQN+TPG Tuple.
+	 */
+	ret = down_interruptible(&tpg->np_login_sem);
+	if ((ret != 0) || signal_pending(current))
+		return -1;
+
+	spin_lock_bh(&tpg->tpg_state_lock);
+	if (tpg->tpg_state != TPG_STATE_ACTIVE) {
+		spin_unlock_bh(&tpg->tpg_state_lock);
+		return -1;
+	}
+	spin_unlock_bh(&tpg->tpg_state_lock);
+
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_login_tpg = tpg;
+	spin_unlock_bh(&np->np_thread_lock);
+
+	return 0;
+}
+
+int core_deaccess_np(struct iscsi_np *np, struct iscsi_portal_group *tpg)
+{
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_login_tpg = NULL;
+	spin_unlock_bh(&np->np_thread_lock);
+
+	up(&tpg->np_login_sem);
+
+	if (tiqn)
+		core_put_tiqn_for_login(tiqn);
+
+	return 0;
+}
+
+void *core_get_np_ip(struct iscsi_np *np)
+{
+	return (np->np_flags & NPF_NET_IPV6) ?
+	       (void *)&np->np_ipv6[0] :
+	       (void *)&np->np_ipv4;
+}
+
+struct iscsi_np *core_get_np(
+	void *ip,
+	u16 port,
+	int network_transport)
+{
+	struct iscsi_np *np;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry(np, &iscsi_global->g_np_list, np_list) {
+		spin_lock(&np->np_state_lock);
+		if (atomic_read(&np->np_shutdown)) {
+			spin_unlock(&np->np_state_lock);
+			continue;
+		}
+		spin_unlock(&np->np_state_lock);
+
+		if (!(memcmp(core_get_np_ip(np), ip, np->np_net_size)) &&
+		    (np->np_port == port) &&
+		    (np->np_network_transport == network_transport)) {
+			spin_unlock(&iscsi_global->np_lock);
+			return np;
+		}
+	}
+	spin_unlock(&iscsi_global->np_lock);
+
+	return NULL;
+}
+
+void *core_get_np_ex_ip(struct iscsi_np_ex *np_ex)
+{
+	return (np_ex->np_ex_net_size == IPV6_ADDRESS_SPACE) ?
+	       (void *)&np_ex->np_ex_ipv6 :
+	       (void *)&np_ex->np_ex_ipv4;
+}
+
+int core_del_np_ex(
+	struct iscsi_np *np,
+	void *ip_ex,
+	u16 port_ex,
+	int network_transport)
+{
+	struct iscsi_np_ex *np_ex, *np_ex_t;
+
+	spin_lock(&np->np_ex_lock);
+	list_for_each_entry_safe(np_ex, np_ex_t, &np->np_nex_list, np_ex_list) {
+		if (!(memcmp(core_get_np_ex_ip(np_ex), ip_ex,
+				np_ex->np_ex_net_size)) &&
+				(np_ex->np_ex_port == port_ex)) {
+			__core_del_np_ex(np, np_ex);
+			spin_unlock(&np->np_ex_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&np->np_ex_lock);
+
+	return -1;
+}
+
+int core_add_np_ex(
+	struct iscsi_np *np,
+	void *ip_ex,
+	u16 port_ex,
+	int net_size)
+{
+	struct iscsi_np_ex *np_ex;
+	unsigned char *ip_buf = NULL, *ip_ex_buf = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf_ipv4_ex[IPV4_BUF_SIZE];
+	u32 ip_ex_ipv4;
+
+	np_ex = kzalloc(sizeof(struct iscsi_np_ex), GFP_KERNEL);
+	if (!(np_ex)) {
+		printk(KERN_ERR "struct iscsi_np_ex memory allocate failed!\n");
+		return -1;
+	}
+
+	if (net_size == IPV6_ADDRESS_SPACE) {
+		ip_buf = (unsigned char *)&np->np_ipv6[0];
+		ip_ex_buf = ip_ex;
+		snprintf(np_ex->np_ex_ipv6, IPV6_ADDRESS_SPACE,
+				"%s", ip_ex_buf);
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		memset(buf_ipv4_ex, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		memcpy((void *)&ip_ex_ipv4, ip_ex, 4);
+		iscsi_ntoa2(buf_ipv4_ex, ip_ex_ipv4);
+		ip_buf = &buf_ipv4[0];
+		ip_ex_buf = &buf_ipv4_ex[0];
+
+		memcpy((void *)&np_ex->np_ex_ipv4, ip_ex, IPV4_ADDRESS_SPACE);
+	}
+
+	np_ex->np_ex_port = port_ex;
+	np_ex->np_ex_net_size = net_size;
+	INIT_LIST_HEAD(&np_ex->np_ex_list);
+	spin_lock_init(&np->np_ex_lock);
+
+	spin_lock(&np->np_ex_lock);
+	list_add_tail(&np_ex->np_ex_list, &np->np_nex_list);
+	spin_unlock(&np->np_ex_lock);
+
+	printk(KERN_INFO "CORE[0] - Added Network Portal: Internal %s:%hu"
+		" External %s:%hu on %s on network device: %s\n", ip_buf,
+		np->np_port, ip_ex_buf, port_ex,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", strlen(np->np_net_dev) ?
+			(char *)np->np_net_dev : "None");
+
+	return 0;
+}
+
+/*
+ * Called with struct iscsi_np->np_ex_lock held.
+ */
+int __core_del_np_ex(
+	struct iscsi_np *np,
+	struct iscsi_np_ex *np_ex)
+{
+	unsigned char *ip_buf = NULL, *ip_ex_buf = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf_ipv4_ex[IPV4_BUF_SIZE];
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		ip_buf = (unsigned char *)&np->np_ipv6[0];
+		ip_ex_buf = (unsigned char *)&np_ex->np_ex_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		memset(buf_ipv4_ex, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		iscsi_ntoa2(buf_ipv4_ex, np_ex->np_ex_ipv4);
+		ip_buf = &buf_ipv4[0];
+		ip_ex_buf = &buf_ipv4_ex[0];
+	}
+
+	list_del(&np_ex->np_ex_list);
+
+	printk(KERN_INFO "CORE[0] - Removed Network Portal: Internal %s:%hu"
+		" External %s:%hu on %s on network device: %s\n",
+		ip_buf, np->np_port, ip_ex_buf, np_ex->np_ex_port,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", strlen(np->np_net_dev) ?
+			(char *)np->np_net_dev : "None");
+	kfree(np_ex);
+
+	return 0;
+}
+
+void core_del_np_all_ex(
+	struct iscsi_np *np)
+{
+	struct iscsi_np_ex *np_ex, *np_ex_t;
+
+	spin_lock(&np->np_ex_lock);
+	list_for_each_entry_safe(np_ex, np_ex_t, &np->np_nex_list, np_ex_list)
+		__core_del_np_ex(np, np_ex);
+	spin_unlock(&np->np_ex_lock);
+}
+
+static struct iscsi_np *core_add_np_locate(
+	void *ip,
+	void *ip_ex,
+	unsigned char *ip_buf,
+	unsigned char *ip_ex_buf,
+	u16 port,
+	u16 port_ex,
+	int network_transport,
+	int net_size,
+	int *ret)
+{
+	struct iscsi_np *np;
+	struct iscsi_np_ex *np_ex;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry(np, &iscsi_global->g_np_list, np_list) {
+		spin_lock(&np->np_state_lock);
+		if (atomic_read(&np->np_shutdown)) {
+			spin_unlock(&np->np_state_lock);
+			continue;
+		}
+		spin_unlock(&np->np_state_lock);
+
+		if (!(memcmp(core_get_np_ip(np), ip, np->np_net_size)) &&
+		    (np->np_port == port) &&
+		    (np->np_network_transport == network_transport)) {
+			if (!ip_ex && !port_ex) {
+				printk(KERN_ERR "Network Portal %s:%hu on %s"
+					" already exists, ignoring request.\n",
+					ip_buf, port,
+					(network_transport == ISCSI_TCP) ?
+					"TCP" : "SCTP");
+				spin_unlock(&iscsi_global->np_lock);
+				*ret = -EEXIST;
+				return NULL;
+			}
+
+			spin_lock(&np->np_ex_lock);
+			list_for_each_entry(np_ex, &np->np_nex_list,
+					np_ex_list) {
+				if (!(memcmp(core_get_np_ex_ip(np_ex), ip_ex,
+				     np_ex->np_ex_net_size)) &&
+				    (np_ex->np_ex_port == port_ex)) {
+					printk(KERN_ERR "Network Portal Inter"
+						"nal: %s:%hu External: %s:%hu"
+						" on %s, ignoring request.\n",
+						ip_buf, port,
+						ip_ex_buf, port_ex,
+						(network_transport == ISCSI_TCP)
+							? "TCP" : "SCTP");
+					spin_unlock(&np->np_ex_lock);
+					spin_unlock(&iscsi_global->np_lock);
+					*ret = -EEXIST;
+					return NULL;
+				}
+			}
+			spin_unlock(&np->np_ex_lock);
+			spin_unlock(&iscsi_global->np_lock);
+
+			*ret = core_add_np_ex(np, ip_ex, port_ex,
+						net_size);
+			if (*ret < 0)
+				return NULL;
+
+			*ret = 0;
+			return np;
+		}
+	}
+	spin_unlock(&iscsi_global->np_lock);
+
+	*ret = 0;
+
+	return NULL;
+}
+
+struct iscsi_np *core_add_np(
+	struct iscsi_np_addr *np_addr,
+	int network_transport,
+	int *ret)
+{
+	struct iscsi_np *np;
+	char *ip_buf = NULL;
+	void *ip;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+	int net_size;
+
+	if (np_addr->np_flags & NPF_NET_IPV6) {
+		ip_buf = &np_addr->np_ipv6[0];
+		ip = (void *)&np_addr->np_ipv6[0];
+		net_size = IPV6_ADDRESS_SPACE;
+	} else {
+		ip = (void *)&np_addr->np_ipv4;
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np_addr->np_ipv4);
+		ip_buf = &buf_ipv4[0];
+		net_size = IPV4_ADDRESS_SPACE;
+	}
+
+	np = core_add_np_locate(ip, NULL, ip_buf, NULL, np_addr->np_port,
+			0, network_transport, net_size, ret);
+	if ((np))
+		return np;
+
+	if (*ret != 0) {
+		*ret = -EINVAL;
+		return NULL;
+	}
+
+	np = kzalloc(sizeof(struct iscsi_np), GFP_KERNEL);
+	if (!(np)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_np\n");
+		*ret = -ENOMEM;
+		return NULL;
+	}
+
+	np->np_flags |= NPF_IP_NETWORK;
+	if (np_addr->np_flags & NPF_NET_IPV6) {
+		np->np_flags |= NPF_NET_IPV6;
+		memcpy(np->np_ipv6, np_addr->np_ipv6, IPV6_ADDRESS_SPACE);
+	} else {
+		np->np_flags |= NPF_NET_IPV4;
+		np->np_ipv4 = np_addr->np_ipv4;
+	}
+	np->np_port		= np_addr->np_port;
+	np->np_network_transport = network_transport;
+	np->np_net_size		= net_size;
+	np->np_index		= iscsi_get_new_index(ISCSI_PORTAL_INDEX);
+	atomic_set(&np->np_shutdown, 0);
+	spin_lock_init(&np->np_state_lock);
+	spin_lock_init(&np->np_thread_lock);
+	spin_lock_init(&np->np_ex_lock);
+	sema_init(&np->np_done_sem, 0);
+	sema_init(&np->np_restart_sem, 0);
+	sema_init(&np->np_shutdown_sem, 0);
+	sema_init(&np->np_start_sem, 0);
+	INIT_LIST_HEAD(&np->np_list);
+	INIT_LIST_HEAD(&np->np_nex_list);
+
+	kernel_thread(iscsi_target_login_thread, np, 0);
+
+	down(&np->np_start_sem);
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
+		spin_unlock_bh(&np->np_thread_lock);
+		printk(KERN_ERR "Unable to start login thread for iSCSI Network"
+			" Portal %s:%hu\n", ip_buf, np->np_port);
+		kfree(np);
+		*ret = -EADDRINUSE;
+		return NULL;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	spin_lock(&iscsi_global->np_lock);
+	list_add_tail(&np->np_list, &iscsi_global->g_np_list);
+	spin_unlock(&iscsi_global->np_lock);
+
+	printk(KERN_INFO "CORE[0] - Added Network Portal: %s:%hu on %s on"
+		" network device: %s\n", ip_buf, np->np_port,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	*ret = 0;
+	return np;
+}
+
+int core_reset_np_thread(
+	struct iscsi_np *np,
+	struct iscsi_tpg_np *tpg_np,
+	struct iscsi_portal_group *tpg,
+	int shutdown)
+{
+	spin_lock_bh(&np->np_thread_lock);
+	if (tpg && tpg_np) {
+		/*
+		 * The reset operation need only be performed when the
+		 * passed struct iscsi_portal_group has a login in progress
+		 * to one of the network portals.
+		 */
+		if (tpg_np->tpg_np->np_login_tpg != tpg) {
+			spin_unlock_bh(&np->np_thread_lock);
+			return 0;
+		}
+	}
+	if (np->np_thread_state == ISCSI_NP_THREAD_INACTIVE) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return 0;
+	}
+
+	np->np_thread_state = ISCSI_NP_THREAD_RESET;
+	if (shutdown)
+		atomic_set(&np->np_shutdown, 1);
+
+	if (np->np_thread) {
+		spin_unlock_bh(&np->np_thread_lock);
+		send_sig(SIGKILL, np->np_thread, 1);
+		down(&np->np_restart_sem);
+		spin_lock_bh(&np->np_thread_lock);
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	return 0;
+}
+
+int core_del_np_thread(struct iscsi_np *np)
+{
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_thread_state = ISCSI_NP_THREAD_SHUTDOWN;
+	atomic_set(&np->np_shutdown, 1);
+	if (np->np_thread) {
+		send_sig(SIGKILL, np->np_thread, 1);
+		spin_unlock_bh(&np->np_thread_lock);
+		up(&np->np_shutdown_sem);
+		down(&np->np_done_sem);
+		return 0;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	return 0;
+}
+
+int core_del_np_comm(struct iscsi_np *np)
+{
+	if (!np->np_socket)
+		return 0;
+
+	/*
+	 * Some network transports set their own FILEIO, see
+	 * if we need to free any additional allocated resources.
+	 */
+	if (np->np_flags & NPF_SCTP_STRUCT_FILE) {
+		kfree(np->np_socket->file);
+		np->np_socket->file = NULL;
+	}
+
+	sock_release(np->np_socket);
+	return 0;
+}
+
+int core_del_np(struct iscsi_np *np)
+{
+	unsigned char *ip = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+
+	core_del_np_thread(np);
+	core_del_np_comm(np);
+	core_del_np_all_ex(np);
+
+	spin_lock(&iscsi_global->np_lock);
+	list_del(&np->np_list);
+	spin_unlock(&iscsi_global->np_lock);
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		ip = &np->np_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+	}
+
+	printk(KERN_INFO "CORE[0] - Removed Network Portal: %s:%hu on %s on"
+		" network device: %s\n", ip, np->np_port,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP",  (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	kfree(np);
+	return 0;
+}
+
+void core_reset_nps(void)
+{
+	struct iscsi_np *np, *t_np;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry_safe(np, t_np, &iscsi_global->g_np_list, np_list) {
+		spin_unlock(&iscsi_global->np_lock);
+		core_reset_np_thread(np, NULL, NULL, 1);
+		spin_lock(&iscsi_global->np_lock);
+	}
+	spin_unlock(&iscsi_global->np_lock);
+}
+
+void core_release_nps(void)
+{
+	struct iscsi_np *np, *t_np;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry_safe(np, t_np, &iscsi_global->g_np_list, np_list) {
+		spin_unlock(&iscsi_global->np_lock);
+		core_del_np(np);
+		spin_lock(&iscsi_global->np_lock);
+	}
+	spin_unlock(&iscsi_global->np_lock);
+}
+
+/* iSCSI mib table index for iscsi_target_stat.c */
+struct iscsi_index_table iscsi_index_table;
+
+/*
+ * Initialize the index table for allocating unique row indexes to various mib
+ * tables
+ */
+static void init_iscsi_index_table(void)
+{
+	memset(&iscsi_index_table, 0, sizeof(iscsi_index_table));
+	spin_lock_init(&iscsi_index_table.lock);
+}
+
+/*
+ * Allocate a new row index for the entry type specified
+ */
+u32 iscsi_get_new_index(iscsi_index_t type)
+{
+	u32 new_index;
+
+	if ((type < 0) || (type >= INDEX_TYPE_MAX)) {
+		printk(KERN_ERR "Invalid index type %d\n", type);
+		return -1;
+	}
+
+	spin_lock(&iscsi_index_table.lock);
+	new_index = ++iscsi_index_table.iscsi_mib_index[type];
+	if (new_index == 0)
+		new_index = ++iscsi_index_table.iscsi_mib_index[type];
+	spin_unlock(&iscsi_index_table.lock);
+
+	return new_index;
+}
+
+/* init_iscsi_target():
+ *
+ * This function is called during module initialization to setup struct iscsi_global.
+ */
+static int init_iscsi_global(struct iscsi_global *global)
+{
+	memset(global, 0, sizeof(struct iscsi_global));
+	sema_init(&global->auth_sem, 1);
+	sema_init(&global->auth_id_sem, 1);
+	spin_lock_init(&global->active_ts_lock);
+	spin_lock_init(&global->check_thread_lock);
+	spin_lock_init(&global->discovery_lock);
+	spin_lock_init(&global->inactive_ts_lock);
+	spin_lock_init(&global->login_thread_lock);
+	spin_lock_init(&global->np_lock);
+	spin_lock_init(&global->shutdown_lock);
+	spin_lock_init(&global->tiqn_lock);
+	spin_lock_init(&global->ts_bitmap_lock);
+	spin_lock_init(&global->g_tpg_lock);
+	INIT_LIST_HEAD(&global->g_tiqn_list);
+	INIT_LIST_HEAD(&global->g_tpg_list);
+	INIT_LIST_HEAD(&global->g_np_list);
+	INIT_LIST_HEAD(&global->active_ts_list);
+	INIT_LIST_HEAD(&global->inactive_ts_list);
+
+	return 0;
+}
+
+static int default_targetname_seq_show(struct seq_file *m, void *p)
+{
+	if (iscsi_global->targetname_set)
+		seq_printf(m, "iSCSI TargetName: %s\n",
+				iscsi_global->targetname);
+
+	return 0;
+}
+
+static int version_info_seq_show(struct seq_file *m, void *p)
+{
+	seq_printf(m, "%s iSCSI Target Core Stack "ISCSI_VERSION" on"
+		" %s/%s on "UTS_RELEASE"\n", ISCSI_VENDOR,
+		utsname()->sysname, utsname()->machine);
+
+	return 0;
+}
+
+static int default_targetname_seq_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, default_targetname_seq_show, PDE(inode)->data);
+}
+
+static const struct file_operations default_targetname = {
+	.open		= default_targetname_seq_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static int version_info_seq_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, version_info_seq_show, PDE(inode)->data);
+}
+
+static const struct file_operations version_info = {
+	.open		= version_info_seq_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+/*	iscsi_target_detect():
+ *
+ *	This function is called upon module_init and does the following
+ *	actions in said order:
+ *
+ *	0) Allocates and initializes the struct iscsi_global structure.
+ *	1) Registers the character device for the IOCTL.
+ *	2) Registers /proc filesystem entries.
+ *	3) Creates a lookaside cache entry for the struct iscsi_cmd and
+ *	   struct iscsi_conn structures.
+ *	4) Allocates threads to handle login requests.
+ *	5) Allocates thread_sets for the thread_set queue.
+ *	6) Creates the default list of iSCSI parameters.
+ *	7) Create server socket and spawn iscsi_target_server_thread to
+ *	   accept connections.
+ *
+ *	Parameters:	Nothing.
+ *	Returns:	0 on success, -1 on error.
+ */
+/*	FIXME:  getaddrinfo for IPv6 will go here.
+ */
+static int iscsi_target_detect(void)
+{
+	int ret = 0;
+
+	printk(KERN_INFO "%s iSCSI Target Core Stack "ISCSI_VERSION" on"
+		" %s/%s on "UTS_RELEASE"\n", ISCSI_VENDOR,
+		utsname()->sysname, utsname()->machine);
+	/*
+	 * Clear out the struct kmem_cache pointers
+	 */
+	lio_cmd_cache = NULL;
+	lio_sess_cache = NULL;
+	lio_conn_cache = NULL;
+	lio_qr_cache = NULL;
+	lio_dr_cache = NULL;
+	lio_ooo_cache = NULL;
+	lio_r2t_cache = NULL;
+	lio_tpg_cache = NULL;
+
+	iscsi_global = kzalloc(sizeof(struct iscsi_global), GFP_KERNEL);
+	if (!(iscsi_global)) {
+		printk(KERN_ERR "Unable to allocate memory for iscsi_global\n");
+		return -1;
+	}
+	init_iscsi_index_table();
+
+	if (init_iscsi_global(iscsi_global) < 0) {
+		kfree(iscsi_global);
+		return -1;
+	}
+
+	iscsi_target_register_configfs();
+	iscsi_thread_set_init();
+
+	if (iscsi_allocate_thread_sets(TARGET_THREAD_SET_COUNT) !=
+			TARGET_THREAD_SET_COUNT) {
+		printk(KERN_ERR "iscsi_allocate_thread_sets() returned"
+			" unexpected value!\n");
+		ret = -1;
+		goto out;
+	}
+
+	lio_cmd_cache = kmem_cache_create("lio_cmd_cache",
+			sizeof(struct iscsi_cmd), __alignof__(struct iscsi_cmd),
+			0, NULL);
+	if (!(lio_cmd_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_cmd_cache\n");
+		goto out;
+	}
+
+	lio_sess_cache = kmem_cache_create("lio_sess_cache",
+			sizeof(struct iscsi_session), __alignof__(struct iscsi_session),
+			0, NULL);
+	if (!(lio_sess_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_sess_cache\n");
+		goto out;
+	}
+
+	lio_conn_cache = kmem_cache_create("lio_conn_cache",
+			sizeof(struct iscsi_conn), __alignof__(struct iscsi_conn),
+			0, NULL);
+	if (!(lio_conn_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_conn_cache\n");
+		goto out;
+	}
+
+	lio_qr_cache = kmem_cache_create("lio_qr_cache",
+			sizeof(struct iscsi_queue_req),
+			__alignof__(struct iscsi_queue_req), 0, NULL);
+	if (!(lio_qr_cache)) {
+		printk(KERN_ERR "nable to kmem_cache_create() for"
+				" lio_qr_cache\n");
+		goto out;
+	}
+
+	lio_dr_cache = kmem_cache_create("lio_dr_cache",
+			sizeof(struct iscsi_datain_req),
+			__alignof__(struct iscsi_datain_req), 0, NULL);
+	if (!(lio_dr_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_dr_cache\n");
+		goto out;
+	}
+
+	lio_ooo_cache = kmem_cache_create("lio_ooo_cache",
+			sizeof(struct iscsi_ooo_cmdsn),
+			__alignof__(struct iscsi_ooo_cmdsn), 0, NULL);
+	if (!(lio_ooo_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_ooo_cache\n");
+		goto out;
+	}
+
+	lio_r2t_cache = kmem_cache_create("lio_r2t_cache",
+			sizeof(struct iscsi_r2t), __alignof__(struct iscsi_r2t),
+			0, NULL);
+	if (!(lio_r2t_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_r2t_cache\n");
+		goto out;
+	}
+
+	lio_tpg_cache = kmem_cache_create("lio_tpg_cache",
+			sizeof(struct iscsi_portal_group),
+			__alignof__(struct iscsi_portal_group),
+			0, NULL);
+	if (!(lio_tpg_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+			" struct iscsi_portal_group\n");
+		goto out;
+	}
+
+	if (core_load_discovery_tpg() < 0)
+		goto out;
+
+	printk("Loading Complete.\n");
+
+	return ret;
+out:
+	if (lio_cmd_cache)
+		kmem_cache_destroy(lio_cmd_cache);
+	if (lio_sess_cache)
+		kmem_cache_destroy(lio_sess_cache);
+	if (lio_conn_cache)
+		kmem_cache_destroy(lio_conn_cache);
+	if (lio_qr_cache)
+		kmem_cache_destroy(lio_qr_cache);
+	if (lio_dr_cache)
+		kmem_cache_destroy(lio_dr_cache);
+	if (lio_ooo_cache)
+		kmem_cache_destroy(lio_ooo_cache);
+	if (lio_r2t_cache)
+		kmem_cache_destroy(lio_r2t_cache);
+	if (lio_tpg_cache)
+		kmem_cache_destroy(lio_tpg_cache);
+	iscsi_deallocate_thread_sets();
+	iscsi_thread_set_free();
+	iscsi_target_deregister_configfs();
+	kfree(iscsi_global);
+	iscsi_global = NULL;
+
+	return -1;
+}
+
+int iscsi_target_release_phase1(int rmmod)
+{
+	spin_lock(&iscsi_global->shutdown_lock);
+	if (!rmmod) {
+		if (iscsi_global->in_shutdown) {
+			printk(KERN_ERR "Module already in shutdown, aborting\n");
+			spin_unlock(&iscsi_global->shutdown_lock);
+			return -1;
+		}
+
+		if (iscsi_global->in_rmmod) {
+			printk(KERN_ERR "Module already in rmmod, aborting\n");
+			spin_unlock(&iscsi_global->shutdown_lock);
+			return -1;
+		}
+	} else
+		iscsi_global->in_rmmod = 1;
+	iscsi_global->in_shutdown = 1;
+	spin_unlock(&iscsi_global->shutdown_lock);
+
+	return 0;
+}
+
+void iscsi_target_release_phase2(void)
+{
+	core_reset_nps();
+	iscsi_disable_all_tpgs();
+	iscsi_deallocate_thread_sets();
+	iscsi_thread_set_free();
+	iscsi_remove_all_tpgs();
+	core_release_nps();
+	core_release_discovery_tpg();
+	core_release_tiqns();
+	kmem_cache_destroy(lio_cmd_cache);
+	kmem_cache_destroy(lio_sess_cache);
+	kmem_cache_destroy(lio_conn_cache);
+	kmem_cache_destroy(lio_qr_cache);
+	kmem_cache_destroy(lio_dr_cache);
+	kmem_cache_destroy(lio_ooo_cache);
+	kmem_cache_destroy(lio_r2t_cache);
+	kmem_cache_destroy(lio_tpg_cache);
+
+	iscsi_global->ti_forcechanoffline = NULL;
+	iscsi_target_deregister_configfs();
+}
+
+/*	iscsi_target_release():
+ *
+ *
+ */
+static int iscsi_target_release(void)
+{
+	int ret = 0;
+
+	if (!iscsi_global)
+		return ret;
+
+	iscsi_target_release_phase1(1);
+	iscsi_target_release_phase2();
+
+	kfree(iscsi_global);
+
+	printk(KERN_INFO "Unloading Complete.\n");
+
+	return ret;
+}
+
+char *iscsi_get_fabric_name(void)
+{
+	return "iSCSI";
+}
+
+struct iscsi_cmd *iscsi_get_cmd(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	return cmd;
+}
+
+u32 iscsi_get_task_tag(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	return cmd->init_task_tag;
+}
+
+int iscsi_get_cmd_state(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	return cmd->i_state;
+}
+
+void iscsi_new_cmd_failure(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	if (cmd->immediate_data || cmd->unsolicited_data)
+		up(&cmd->unsolicited_data_sem);
+}
+
+int iscsi_is_state_remove(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	return (cmd->i_state == ISTATE_REMOVE);
+}
+
+int lio_sess_logged_in(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+	int ret;
+
+	/*
+	 * Called with spin_lock_bh(&se_global->se_tpg_lock); and
+	 * spin_lock(&se_tpg->session_lock); held.
+	 */
+	spin_lock(&sess->conn_lock);
+	ret = (sess->session_state != TARG_SESS_STATE_LOGGED_IN);
+	spin_unlock(&sess->conn_lock);
+
+	return ret;
+}
+
+u32 lio_sess_get_index(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	return sess->session_index;
+}
+
+u32 lio_sess_get_initiator_sid(
+	struct se_session *se_sess,
+	unsigned char *buf,
+	u32 size)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+	/*
+	 * iSCSI Initiator Session Identifier from RFC-3720.
+	 */
+	return snprintf(buf, size, "%02x%02x%02x%02x%02x%02x",
+		sess->isid[0], sess->isid[1], sess->isid[2],
+		sess->isid[3], sess->isid[4], sess->isid[5]);
+}
+
+/*	iscsi_add_nopin():
+ *
+ *
+ */
+int iscsi_add_nopin(
+	struct iscsi_conn *conn,
+	int want_response)
+{
+	u8 state;
+	struct iscsi_cmd *cmd;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return -1;
+
+	cmd->iscsi_opcode = ISCSI_OP_NOOP_IN;
+	state = (want_response) ? ISTATE_SEND_NOPIN_WANT_RESPONSE :
+			ISTATE_SEND_NOPIN_NO_RESPONSE;
+	cmd->init_task_tag = 0xFFFFFFFF;
+	spin_lock_bh(&SESS(conn)->ttt_lock);
+	cmd->targ_xfer_tag = (want_response) ? SESS(conn)->targ_xfer_tag++ :
+			0xFFFFFFFF;
+	if (want_response && (cmd->targ_xfer_tag == 0xFFFFFFFF))
+		cmd->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+	spin_unlock_bh(&SESS(conn)->ttt_lock);
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+	if (want_response)
+		iscsi_start_nopin_response_timer(conn);
+	iscsi_add_cmd_to_immediate_queue(cmd, conn, state);
+
+	return 0;
+}
+
+/*	iscsi_add_reject():
+ *
+ *
+ */
+int iscsi_add_reject(
+	u8 reason,
+	int fail_conn,
+	unsigned char *buf,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+	struct iscsi_reject *hdr;
+	int ret;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return -1;
+
+	cmd->iscsi_opcode = ISCSI_OP_REJECT;
+	if (fail_conn)
+		cmd->cmd_flags |= ICF_REJECT_FAIL_CONN;
+
+	hdr	= (struct iscsi_reject *) cmd->pdu;
+	hdr->reason = reason;
+
+	cmd->buf_ptr = kzalloc(ISCSI_HDR_LEN, GFP_ATOMIC);
+	if (!(cmd->buf_ptr)) {
+		printk(KERN_ERR "Unable to allocate memory for cmd->buf_ptr\n");
+		__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+		return -1;
+	}
+	memcpy(cmd->buf_ptr, buf, ISCSI_HDR_LEN);
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	cmd->i_state = ISTATE_SEND_REJECT;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	ret = down_interruptible(&cmd->reject_sem);
+	if (ret != 0)
+		return -1;
+
+	return (!fail_conn) ? 0 : -1;
+}
+
+/*	iscsi_add_reject_from_cmd():
+ *
+ *
+ */
+int iscsi_add_reject_from_cmd(
+	u8 reason,
+	int fail_conn,
+	int add_to_conn,
+	unsigned char *buf,
+	struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn;
+	struct iscsi_reject *hdr;
+	int ret;
+
+	if (!CONN(cmd)) {
+		printk(KERN_ERR "cmd->conn is NULL for ITT: 0x%08x\n",
+				cmd->init_task_tag);
+		return -1;
+	}
+	conn = CONN(cmd);
+
+	cmd->iscsi_opcode = ISCSI_OP_REJECT;
+	if (fail_conn)
+		cmd->cmd_flags |= ICF_REJECT_FAIL_CONN;
+
+	hdr	= (struct iscsi_reject *) cmd->pdu;
+	hdr->reason = reason;
+
+	cmd->buf_ptr = kzalloc(ISCSI_HDR_LEN, GFP_ATOMIC);
+	if (!(cmd->buf_ptr)) {
+		printk(KERN_ERR "Unable to allocate memory for cmd->buf_ptr\n");
+		__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+		return -1;
+	}
+	memcpy(cmd->buf_ptr, buf, ISCSI_HDR_LEN);
+
+	if (add_to_conn)
+		iscsi_attach_cmd_to_queue(conn, cmd);
+
+	cmd->i_state = ISTATE_SEND_REJECT;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	ret = down_interruptible(&cmd->reject_sem);
+	if (ret != 0)
+		return -1;
+
+	return (!fail_conn) ? 0 : -1;
+}
+
+/* #define iscsi_calculate_map_segment_DEBUG */
+#ifdef iscsi_calculate_map_segment_DEBUG
+#define DEBUG_MAP_SEGMENTS(buf...) PYXPRINT(buf)
+#else
+#define DEBUG_MAP_SEGMENTS(buf...)
+#endif
+
+/*	iscsi_calculate_map_segment():
+ *
+ *
+ */
+static inline void iscsi_calculate_map_segment(
+	u32 *data_length,
+	struct se_offset_map *lm)
+{
+	u32 sg_offset = 0;
+	struct se_mem *se_mem = lm->map_se_mem;
+
+	DEBUG_MAP_SEGMENTS(" START Mapping se_mem: %p, Length: %d"
+		"  Remaining iSCSI Data: %u\n", se_mem, se_mem->se_len,
+		*data_length);
+	/*
+	 * Still working on pages in the current struct se_mem.
+	 */
+	if (!lm->map_reset) {
+		lm->iovec_length = (lm->sg_length > PAGE_SIZE) ?
+					PAGE_SIZE : lm->sg_length;
+		if (*data_length < lm->iovec_length) {
+			DEBUG_MAP_SEGMENTS("LINUX_MAP: Reset lm->iovec_length"
+				" to %d\n", *data_length);
+
+			lm->iovec_length = *data_length;
+		}
+		lm->iovec_base = page_address(lm->sg_page) + sg_offset;
+
+		DEBUG_MAP_SEGMENTS("LINUX_MAP: Set lm->iovec_base to %p from"
+			" lm->sg_page: %p\n", lm->iovec_base, lm->sg_page);
+		return;
+	}
+
+	/*
+	 * First run of an iscsi_linux_map_t.
+	 *
+	 * OR:
+	 *
+	 * Mapped all of the pages in the current scatterlist, move
+	 * on to the next one.
+	 */
+	lm->map_reset = 0;
+	sg_offset = se_mem->se_off;
+	lm->sg_page = se_mem->se_page;
+	lm->sg_length = se_mem->se_len;
+
+	DEBUG_MAP_SEGMENTS("LINUX_MAP1[%p]: Starting to se_mem->se_len: %u,"
+		" se_mem->se_off: %u, se_mem->se_page: %p\n", se_mem,
+		se_mem->se_len, se_mem->se_off, se_mem->se_page);
+	/*
+	 * Get the base and length of the current page for use with the iovec.
+	 */
+recalc:
+	lm->iovec_length = (lm->sg_length > (PAGE_SIZE - sg_offset)) ?
+			   (PAGE_SIZE - sg_offset) : lm->sg_length;
+
+	DEBUG_MAP_SEGMENTS("LINUX_MAP: lm->iovec_length: %u, lm->sg_length: %u,"
+		" sg_offset: %u\n", lm->iovec_length, lm->sg_length, sg_offset);
+	/*
+	 * See if there is any iSCSI offset we need to deal with.
+	 */
+	if (!lm->current_offset) {
+		lm->iovec_base = page_address(lm->sg_page) + sg_offset;
+
+		if (*data_length < lm->iovec_length) {
+			DEBUG_MAP_SEGMENTS("LINUX_MAP1[%p]: Reset"
+				" lm->iovec_length to %d\n", se_mem,
+				*data_length);
+			lm->iovec_length = *data_length;
+		}
+
+		DEBUG_MAP_SEGMENTS("LINUX_MAP2[%p]: No current_offset,"
+			" set iovec_base to %p and set Current Page to %p\n",
+			se_mem, lm->iovec_base, lm->sg_page);
+
+		return;
+	}
+
+	/*
+	 * We know the iSCSI offset is in the next page of the current
+	 * scatterlist.  Increase the lm->sg_page pointer and try again.
+	 */
+	if (lm->current_offset >= lm->iovec_length) {
+		DEBUG_MAP_SEGMENTS("LINUX_MAP3[%p]: Next Page:"
+			" lm->current_offset: %u, iovec_length: %u"
+			" sg_offset: %u\n", se_mem, lm->current_offset,
+			lm->iovec_length, sg_offset);
+
+		lm->current_offset -= lm->iovec_length;
+		lm->sg_length -= lm->iovec_length;
+		lm->sg_page++;
+		sg_offset = 0;
+
+		DEBUG_MAP_SEGMENTS("LINUX_MAP3[%p]: ** Skipping to Next Page,"
+			" updated values: lm->current_offset: %u\n", se_mem,
+			lm->current_offset);
+
+		goto recalc;
+	}
+
+	/*
+	 * The iSCSI offset is in the current page, increment the iovec
+	 * base and reduce iovec length.
+	 */
+	lm->iovec_base = page_address(lm->sg_page);
+
+	DEBUG_MAP_SEGMENTS("LINUX_MAP4[%p]: Set lm->iovec_base to %p\n", se_mem,
+			lm->iovec_base);
+
+	lm->iovec_base += sg_offset;
+	lm->iovec_base += lm->current_offset;
+	DEBUG_MAP_SEGMENTS("****** the OLD lm->iovec_length: %u lm->sg_length:"
+		" %u\n", lm->iovec_length, lm->sg_length);
+
+	if ((lm->iovec_length - lm->current_offset) < *data_length)
+		lm->iovec_length -= lm->current_offset;
+	else
+		lm->iovec_length = *data_length;
+
+	if ((lm->sg_length - lm->current_offset) < *data_length)
+		lm->sg_length -= lm->current_offset;
+	else
+		lm->sg_length = *data_length;
+
+	lm->current_offset = 0;
+
+	DEBUG_MAP_SEGMENTS("****** the NEW lm->iovec_length %u lm->sg_length:"
+		" %u\n", lm->iovec_length, lm->sg_length);
+}
+
+/* #define iscsi_linux_get_iscsi_offset_DEBUG */
+#ifdef iscsi_linux_get_iscsi_offset_DEBUG
+#define DEBUG_GET_ISCSI_OFFSET(buf...) PYXPRINT(buf)
+#else
+#define DEBUG_GET_ISCSI_OFFSET(buf...)
+#endif
+
+/*	get_iscsi_offset():
+ *
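+ *	Walk the command's struct se_mem list to locate the entry containing
+ *	the passed iSCSI offset, and record the offset into that entry.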
+ *
+ */
+static int get_iscsi_offset(
+	struct se_offset_map *lmap,
+	struct se_unmap_sg *usg)
+{
+	u32 current_length = 0, current_iscsi_offset = lmap->iscsi_offset;
+	u32 total_offset = 0;
+	struct se_cmd *cmd = usg->se_cmd;
+	struct se_mem *se_mem;
+
+	list_for_each_entry(se_mem, T_TASK(cmd)->t_mem_list, se_list)
+		break;
+
+	if (!se_mem) {
+		printk(KERN_ERR "Unable to locate se_mem from"
+				" T_TASK(cmd)->t_mem_list\n");
+		return -1;
+	}
+
+	/*
+	 * Locate the current offset from the passed iSCSI Offset.
+	 */
+	while (lmap->iscsi_offset != current_length) {
+		/*
+		 * The iSCSI Offset is within the current struct se_mem.
+		 *
+		 * Or:
+		 *
+		 * The iSCSI Offset is outside of the current struct se_mem.
+		 * Recalculate the values and obtain the next struct se_mem pointer.
+		 */
+		total_offset += se_mem->se_len;
+
+		DEBUG_GET_ISCSI_OFFSET("ISCSI_OFFSET: current_length: %u,"
+			" total_offset: %u, sg->length: %u\n",
+			current_length, total_offset, se_mem->se_len);
+
+		if (total_offset > lmap->iscsi_offset) {
+			current_length += current_iscsi_offset;
+			lmap->orig_offset = lmap->current_offset =
+				usg->t_offset = current_iscsi_offset;
+			DEBUG_GET_ISCSI_OFFSET("ISCSI_OFFSET: Within Current"
+				" struct se_mem: %p, current_length incremented to"
+				" %u\n", se_mem, current_length);
+		} else {
+			current_length += se_mem->se_len;
+			current_iscsi_offset -= se_mem->se_len;
+
+			DEBUG_GET_ISCSI_OFFSET("ISCSI_OFFSET: Outside of"
+				" Current se_mem: %p, current_length"
+				" incremented to %u and current_iscsi_offset"
+				" decremented to %u\n", se_mem, current_length,
+				current_iscsi_offset);
+
+			list_for_each_entry_continue(se_mem,
+					T_TASK(cmd)->t_mem_list, se_list)
+				break;
+
+			if (!se_mem) {
+				printk(KERN_ERR "Unable to locate struct se_mem\n");
+				return -1;
+			}
+		}
+	}
+	lmap->map_orig_se_mem = se_mem;
+	usg->cur_se_mem = se_mem;
+
+	return 0;
+}
+
+/* #define iscsi_OS_set_SG_iovec_ptrs_DEBUG */
+#ifdef iscsi_OS_set_SG_iovec_ptrs_DEBUG
+#define DEBUG_IOVEC_SCATTERLISTS(buf...) PYXPRINT(buf)
+
+static void iscsi_check_iovec_map(
+	u32 iovec_count,
+	u32 map_length,
+	struct se_map_sg *map_sg,
+	struct se_unmap_sg *unmap_sg)
+{
+	u32 i, iovec_map_length = 0;
+	struct se_cmd *cmd = map_sg->se_cmd;
+	struct iovec *iov = map_sg->iov;
+	struct se_mem *se_mem;
+
+	for (i = 0; i < iovec_count; i++)
+		iovec_map_length += iov[i].iov_len;
+
+	if (iovec_map_length == map_length)
+		return;
+
+	printk(KERN_INFO "Calculated iovec_map_length: %u does not match passed"
+		" map_length: %u\n", iovec_map_length, map_length);
+	printk(KERN_INFO "ITT: 0x%08x data_length: %u data_direction %d\n",
+		CMD_TFO(cmd)->get_task_tag(cmd), cmd->data_length,
+		cmd->data_direction);
+
+	iovec_map_length = 0;
+
+	for (i = 0; i < iovec_count; i++) {
+		printk(KERN_INFO "iov[%d].iov_[base,len]: %p / %u bytes------"
+			"-->\n", i, iov[i].iov_base, iov[i].iov_len);
+
+		printk(KERN_INFO "iovec_map_length from %u to %u\n",
+			iovec_map_length, iovec_map_length + iov[i].iov_len);
+		iovec_map_length += iov[i].iov_len;
+
+		printk(KERN_INFO "XXXX_map_length from %u to %u\n", map_length,
+				(map_length - iov[i].iov_len));
+		map_length -= iov[i].iov_len;
+	}
+
+	list_for_each_entry(se_mem, T_TASK(cmd)->t_mem_list, se_list) {
+		printk(KERN_INFO "se_mem[%p]: offset: %u length: %u\n",
+			se_mem, se_mem->se_off, se_mem->se_len);
+	}
+
+	BUG();
+}
+
+#else
+#define DEBUG_IOVEC_SCATTERLISTS(buf...)
+#define iscsi_check_iovec_map(a, b, c, d)
+#endif
+
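+/*	iscsi_set_iovec_ptrs():
+ *
+ *	Build the iovec array for a payload at the passed offset and length
+ *	by walking the command's struct se_mem list.  Returns the number of
+ *	iovecs used, or -1 on failure.
+ */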
+static int iscsi_set_iovec_ptrs(
+	struct se_map_sg *map_sg,
+	struct se_unmap_sg *unmap_sg)
+{
+	u32 i = 0 /* For iovecs */, j = 0 /* For scatterlists */;
+#ifdef iscsi_OS_set_SG_iovec_ptrs_DEBUG
+	u32 orig_map_length = map_sg->data_length;
+#endif
+	struct se_cmd *cmd = map_sg->se_cmd;
+	struct iscsi_cmd *i_cmd = container_of(cmd, struct iscsi_cmd, se_cmd);
+	struct se_offset_map *lmap = &unmap_sg->lmap;
+	struct iovec *iov = map_sg->iov;
+
+	/*
+	 * Used for non scatterlist operations, assume a single iovec.
+	 */
+	if (!T_TASK(cmd)->t_tasks_se_num) {
+		DEBUG_IOVEC_SCATTERLISTS("ITT: 0x%08x No struct se_mem elements"
+			" present\n", CMD_TFO(cmd)->get_task_tag(cmd));
+		iov[0].iov_base = (unsigned char *) T_TASK(cmd)->t_task_buf +
+							map_sg->data_offset;
+		iov[0].iov_len  = map_sg->data_length;
+		return 1;
+	}
+
+	/*
+	 * Set lmap->map_reset = 1 so the first call to
+	 * iscsi_calculate_map_segment() sets up the initial
+	 * values for struct se_offset_map.
+	 */
+	lmap->map_reset = 1;
+
+	DEBUG_IOVEC_SCATTERLISTS("[-------------------] ITT: 0x%08x OS"
+		" Independent Network POSIX defined iovectors to SE Memory"
+		" [-------------------]\n\n", CMD_TFO(cmd)->get_task_tag(cmd));
+
+	/*
+	 * Get a pointer to the first used scatterlist based on the passed
+	 * offset. Also set the rest of the needed values in struct se_offset_map.
+	 */
+	lmap->iscsi_offset = map_sg->data_offset;
+	if (map_sg->sg_kmap_active) {
+		unmap_sg->se_cmd = map_sg->se_cmd;
+		get_iscsi_offset(lmap, unmap_sg);
+		unmap_sg->data_length = map_sg->data_length;
+	} else {
+		lmap->current_offset = lmap->orig_offset;
+	}
+	lmap->map_se_mem = lmap->map_orig_se_mem;
+
+	DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: Total map_sg->data_length: %d,"
+		" lmap->iscsi_offset: %d, i_cmd->orig_iov_data_count: %d\n",
+		map_sg->data_length, lmap->iscsi_offset,
+		i_cmd->orig_iov_data_count);
+
+	while (map_sg->data_length) {
+		/*
+		 * Time to get the virtual address for use with iovec pointers.
+		 * This function will return the expected iovec_base address
+		 * and iovec_length.
+		 */
+		iscsi_calculate_map_segment(&map_sg->data_length, lmap);
+
+		/*
+		 * Set the iov.iov_base and iov.iov_len from the current values
+		 * in struct se_offset_map.
+		 */
+		iov[i].iov_base = lmap->iovec_base;
+		iov[i].iov_len = lmap->iovec_length;
+
+		/*
+		 * Subtract the final iovec length from the total length to be
+		 * mapped, and the length of the current scatterlist.  Also
+		 * perform the paranoid check to make sure we are not going to
+		 * overflow the iovecs allocated for this command in the next
+		 * pass.
+		 */
+		map_sg->data_length -= iov[i].iov_len;
+		lmap->sg_length -= iov[i].iov_len;
+
+		DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: iov[%u].iov_len: %u\n",
+				i, iov[i].iov_len);
+		DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: lmap->sg_length: from %u"
+			" to %u\n", lmap->sg_length + iov[i].iov_len,
+				lmap->sg_length);
+		DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: Changed total"
+			" map_sg->data_length from %u to %u\n",
+			map_sg->data_length + iov[i].iov_len,
+			map_sg->data_length);
+
+		if ((++i + 1) > i_cmd->orig_iov_data_count) {
+			printk(KERN_ERR "Current iovec count %u is greater than"
+				" struct iscsi_cmd->orig_iov_data_count %u, cannot"
+				" continue.\n", i+1, i_cmd->orig_iov_data_count);
+			return -1;
+		}
+
+		/*
+		 * All done mapping this scatterlist's pages, move on to
+		 * the next scatterlist by setting lmap.map_reset = 1;
+		 */
+		if (!lmap->sg_length || !map_sg->data_length) {
+			list_for_each_entry(lmap->map_se_mem,
+					&lmap->map_se_mem->se_list, se_list)
+				break;
+
+			if (!lmap->map_se_mem) {
+				printk(KERN_ERR "Unable to locate next"
+					" lmap->map_se_mem entry\n");
+				return -1;
+			}
+			j++;
+
+			lmap->sg_page = NULL;
+			lmap->map_reset = 1;
+
+			DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: Done with current"
+				" scatterlist, incremented Generic scatterlist"
+				" Counter to %d and reset = 1\n", j);
+		} else
+			lmap->sg_page++;
+	}
+
+	unmap_sg->sg_count = j;
+
+	iscsi_check_iovec_map(i, orig_map_length, map_sg, unmap_sg);
+
+	return i;
+}
+
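+/*	iscsi_map_SG_segments():
+ *
+ *	kmap() each struct se_mem page that the following data transfer will
+ *	touch; iscsi_unmap_SG_segments() undoes the mapping afterwards.
+ */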
+static void iscsi_map_SG_segments(struct se_unmap_sg *unmap_sg)
+{
+	u32 i = 0;
+	struct se_cmd *cmd = unmap_sg->se_cmd;
+	struct se_mem *se_mem = unmap_sg->cur_se_mem;
+
+	if (!(T_TASK(cmd)->t_tasks_se_num))
+		return;
+
+	list_for_each_entry_continue(se_mem, T_TASK(cmd)->t_mem_list, se_list) {
+		kmap(se_mem->se_page);
+
+		if (++i == unmap_sg->sg_count)
+			break;
+	}
+}
+
+static void iscsi_unmap_SG_segments(struct se_unmap_sg *unmap_sg)
+{
+	u32 i = 0;
+	struct se_cmd *cmd = unmap_sg->se_cmd;
+	struct se_mem *se_mem = unmap_sg->cur_se_mem;
+
+	if (!(T_TASK(cmd)->t_tasks_se_num))
+		return;
+
+	list_for_each_entry_continue(se_mem, T_TASK(cmd)->t_mem_list, se_list) {
+		kunmap(se_mem->se_page);
+
+		if (++i == unmap_sg->sg_count)
+			break;
+	}
+}
+
+/*	iscsi_handle_scsi_cmd():
+ *
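+ *	Process an incoming SCSI Command PDU: validate the header, allocate a
+ *	struct iscsi_cmd, hand the CDB to the TCM core, and receive any
+ *	Immediate Data attached to the PDU.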
+ *
+ */
+static inline int iscsi_handle_scsi_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	int	data_direction, cmdsn_ret = 0, immed_ret, ret, transport_ret;
+	int	dump_immediate_data = 0, send_check_condition = 0, payload_length;
+	struct iscsi_cmd	*cmd = NULL;
+	struct iscsi_scsi_cmd *hdr;
+
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->cmd_pdus++;
+	if (SESS_NODE_ACL(SESS(conn))) {
+		spin_lock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+		SESS_NODE_ACL(SESS(conn))->num_cmds++;
+		spin_unlock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+	}
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+
+	hdr			= (struct iscsi_scsi_cmd *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->data_length	= be32_to_cpu(hdr->data_length);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+
+	/* FIXME: Add checks for AdditionalHeaderSegment */
+
+	if (!(hdr->flags & ISCSI_FLAG_CMD_WRITE) &&
+	    !(hdr->flags & ISCSI_FLAG_CMD_FINAL)) {
+		printk(KERN_ERR "ISCSI_FLAG_CMD_WRITE & ISCSI_FLAG_CMD_FINAL"
+				" not set. Bad iSCSI Initiator.\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if (((hdr->flags & ISCSI_FLAG_CMD_READ) ||
+	     (hdr->flags & ISCSI_FLAG_CMD_WRITE)) && !hdr->data_length) {
+		/*
+		 * Vmware ESX v3.0 uses a modified Cisco Initiator (v3.4.2)
+		 * that adds support for RESERVE/RELEASE.  There is a bug
+		 * in this new functionality that sets the R/W bits when
+		 * neither CDB carries any READ or WRITE data payloads.
+		 */
+		if ((hdr->cdb[0] == 0x16) || (hdr->cdb[0] == 0x17)) {
+			hdr->flags &= ~ISCSI_FLAG_CMD_READ;
+			hdr->flags &= ~ISCSI_FLAG_CMD_WRITE;
+			goto done;
+		}
+
+		printk(KERN_ERR "ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE"
+			" set when Expected Data Transfer Length is 0 for"
+			" CDB: 0x%02x. Bad iSCSI Initiator.\n", hdr->cdb[0]);
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+done:
+
+	if (!(hdr->flags & ISCSI_FLAG_CMD_READ) &&
+	    !(hdr->flags & ISCSI_FLAG_CMD_WRITE) && (hdr->data_length != 0)) {
+		printk(KERN_ERR "ISCSI_FLAG_CMD_READ and/or ISCSI_FLAG_CMD_WRITE"
+			" MUST be set if Expected Data Transfer Length is not 0."
+			" Bad iSCSI Initiator\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if ((hdr->flags & ISCSI_FLAG_CMD_READ) &&
+	    (hdr->flags & ISCSI_FLAG_CMD_WRITE)) {
+		printk(KERN_ERR "Bidirectional operations not supported!\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if (hdr->opcode & ISCSI_OP_IMMEDIATE) {
+		printk(KERN_ERR "Illegally set Immediate Bit in iSCSI Initiator"
+				" Scsi Command PDU.\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if (payload_length && !SESS_OPS_C(conn)->ImmediateData) {
+		printk(KERN_ERR "ImmediateData=No but DataSegmentLength=%u,"
+			" protocol error.\n", payload_length);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+#if 0
+	if (!(hdr->flags & ISCSI_FLAG_CMD_FINAL) &&
+	     (hdr->flags & ISCSI_FLAG_CMD_WRITE) && SESS_OPS_C(conn)->InitialR2T) {
+		printk(KERN_ERR "ISCSI_FLAG_CMD_FINAL is not Set and"
+			" ISCSI_FLAG_CMD_WRITE Bit and InitialR2T=Yes,"
+			" protocol error\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+#endif
+	if ((hdr->data_length == payload_length) &&
+	    (!(hdr->flags & ISCSI_FLAG_CMD_FINAL))) {
+		printk(KERN_ERR "Expected Data Transfer Length and Length of"
+			" Immediate Data are the same, but ISCSI_FLAG_CMD_FINAL"
+			" bit is not set, protocol error.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+
+	if (payload_length > hdr->data_length) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" EDTL: %u, protocol error.\n", payload_length,
+				hdr->data_length);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" MaxRecvDataSegmentLength: %u, protocol error.\n",
+			payload_length, CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+
+	if (payload_length > SESS_OPS_C(conn)->FirstBurstLength) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" FirstBurstLength: %u, protocol error.\n",
+			payload_length, SESS_OPS_C(conn)->FirstBurstLength);
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+					buf, conn);
+	}
+
+	data_direction = (hdr->flags & ISCSI_FLAG_CMD_WRITE) ? DMA_TO_DEVICE :
+			 (hdr->flags & ISCSI_FLAG_CMD_READ) ? DMA_FROM_DEVICE :
+			  DMA_NONE;
+
+	cmd = iscsi_allocate_se_cmd(conn, hdr->data_length, data_direction,
+				(hdr->flags & ISCSI_FLAG_CMD_ATTR_MASK));
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES, 1,
+					buf, conn);
+
+	TRACE(TRACE_ISCSI, "Got SCSI Command, ITT: 0x%08x, CmdSN: 0x%08x,"
+		" ExpXferLen: %u, Length: %u, CID: %hu\n", hdr->itt,
+		hdr->cmdsn, hdr->data_length, payload_length, conn->cid);
+
+	cmd->iscsi_opcode	= ISCSI_OP_SCSI_CMD;
+	cmd->i_state		= ISTATE_NEW_CMD;
+	cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	cmd->immediate_data	= (payload_length) ? 1 : 0;
+	cmd->unsolicited_data	= ((!(hdr->flags & ISCSI_FLAG_CMD_FINAL) &&
+				     (hdr->flags & ISCSI_FLAG_CMD_WRITE)) ? 1 : 0);
+	if (cmd->unsolicited_data)
+		cmd->cmd_flags |= ICF_NON_IMMEDIATE_UNSOLICITED_DATA;
+
+	SESS(conn)->init_task_tag = cmd->init_task_tag = hdr->itt;
+	if (hdr->flags & ISCSI_FLAG_CMD_READ) {
+		spin_lock_bh(&SESS(conn)->ttt_lock);
+		cmd->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+		if (cmd->targ_xfer_tag == 0xFFFFFFFF)
+			cmd->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+		spin_unlock_bh(&SESS(conn)->ttt_lock);
+	} else if (hdr->flags & ISCSI_FLAG_CMD_WRITE)
+		cmd->targ_xfer_tag = 0xFFFFFFFF;
+	cmd->cmd_sn		= hdr->cmdsn;
+	cmd->exp_stat_sn	= hdr->exp_statsn;
+	cmd->first_burst_len	= payload_length;
+
+	if (cmd->data_direction == DMA_FROM_DEVICE) {
+		struct iscsi_datain_req *dr;
+
+		dr = iscsi_allocate_datain_req();
+		if (!(dr))
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, 1, buf, cmd);
+
+		iscsi_attach_datain_req(cmd, dr);
+	}
+
+	/*
+	 * The CDB is going to an se_device_t.
+	 */
+	ret = iscsi_get_lun_for_cmd(cmd, hdr->cdb,
+				get_unaligned_le64(&hdr->lun[0]));
+	if (ret < 0) {
+		if (SE_CMD(cmd)->scsi_sense_reason == TCM_NON_EXISTENT_LUN) {
+			TRACE(TRACE_VANITY, "Responding to non-acl'ed,"
+				" non-existent or non-exported iSCSI LUN:"
+				" 0x%016Lx\n", get_unaligned_le64(&hdr->lun[0]));
+		}
+		if (ret == PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES)
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, 1, buf, cmd);
+
+		send_check_condition = 1;
+		goto attach_cmd;
+	}
+	/*
+	 * The Initiator Node has access to the LUN (the addressing method
+	 * is handled inside of iscsi_get_lun_for_cmd()).  Now it's time to
+	 * allocate 1->N transport tasks (depending on sector count and
+	 * maximum request size the physical HBA(s) can handle).
+	 */
+	transport_ret = transport_generic_allocate_tasks(SE_CMD(cmd), hdr->cdb);
+	if (!(transport_ret))
+		goto build_list;
+
+	if (transport_ret == -1) {
+		return iscsi_add_reject_from_cmd(
+				ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				1, 1, buf, cmd);
+	} else if (transport_ret == -2) {
+		/*
+		 * Unsupported SAM Opcode.  CHECK_CONDITION will be sent
+		 * in iscsi_execute_cmd() during the CmdSN OOO Execution
+		 * Mechanism.
+		 */
+		send_check_condition = 1;
+		goto attach_cmd;
+	}
+
+build_list:
+	if (iscsi_decide_list_to_build(cmd, payload_length) < 0)
+		return iscsi_add_reject_from_cmd(
+				ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				1, 1, buf, cmd);
+attach_cmd:
+	iscsi_attach_cmd_to_queue(conn, cmd);
+	/*
+	 * Check if we need to delay processing because of ALUA
+	 * Active/NonOptimized primary access state..
+	 */
+	core_alua_check_nonop_delay(SE_CMD(cmd));
+	/*
+	 * Check the CmdSN against ExpCmdSN/MaxCmdSN here if
+	 * the Immediate Bit is not set, and no Immediate
+	 * Data is attached.
+	 *
+	 * A PDU/CmdSN carrying Immediate Data can only
+	 * be processed after the DataCRC has passed.
+	 * If the DataCRC fails, the CmdSN MUST NOT
+	 * be acknowledged. (See below)
+	 */
+	if (!cmd->immediate_data) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn,
+				cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		    (cmdsn_ret == CMDSN_HIGHER_THAN_EXP))
+			do {} while (0);
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd,
+					conn, cmd->i_state);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	}
+	iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	/*
+	 * If no Immediate Data is attached, it's OK to return now.
+	 */
+	if (!cmd->immediate_data) {
+		if (send_check_condition)
+			return 0;
+
+		if (cmd->unsolicited_data) {
+			iscsi_set_dataout_sequence_values(cmd);
+
+			spin_lock_bh(&cmd->dataout_timeout_lock);
+			iscsi_start_dataout_timer(cmd, CONN(cmd));
+			spin_unlock_bh(&cmd->dataout_timeout_lock);
+		}
+
+		return 0;
+	}
+
+	/*
+	 * Early CHECK_CONDITIONs never make it to the transport processing
+	 * thread.  They are processed in CmdSN order by
+	 * iscsi_check_received_cmdsn() below.
+	 */
+	if (send_check_condition) {
+		immed_ret = IMMEDIDATE_DATA_NORMAL_OPERATION;
+		dump_immediate_data = 1;
+		goto after_immediate_data;
+	}
+
+	/*
+	 * Immediate Data is present, send to the transport and block until
+	 * the underlying transport plugin has allocated the buffer to
+	 * receive the Immediate Write Data into.
+	 */
+	transport_generic_handle_cdb(SE_CMD(cmd));
+
+	down(&cmd->unsolicited_data_sem);
+
+	if (SE_CMD(cmd)->se_cmd_flags & SCF_SE_CMD_FAILED) {
+		immed_ret = IMMEDIDATE_DATA_NORMAL_OPERATION;
+		dump_immediate_data = 1;
+		goto after_immediate_data;
+	}
+
+	immed_ret = iscsi_handle_immediate_data(cmd, buf, payload_length);
+after_immediate_data:
+	if (immed_ret == IMMEDIDATE_DATA_NORMAL_OPERATION) {
+		/*
+		 * A PDU/CmdSN carrying Immediate Data passed
+		 * DataCRC, check against ExpCmdSN/MaxCmdSN if
+		 * Immediate Bit is not set.
+		 */
+		cmdsn_ret = iscsi_check_received_cmdsn(conn,
+				cmd, hdr->cmdsn);
+		/*
+		 * Special case for Unsupported SAM WRITE Opcodes
+		 * and ImmediateData=Yes.
+		 */
+		if (dump_immediate_data) {
+			if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+				return -1;
+		} else if (cmd->unsolicited_data) {
+			iscsi_set_dataout_sequence_values(cmd);
+
+			spin_lock_bh(&cmd->dataout_timeout_lock);
+			iscsi_start_dataout_timer(cmd, CONN(cmd));
+			spin_unlock_bh(&cmd->dataout_timeout_lock);
+		}
+
+		if (cmdsn_ret == CMDSN_NORMAL_OPERATION)
+			return 0;
+		else if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)
+			return 0;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd,
+					conn, cmd->i_state);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	} else if (immed_ret == IMMEDIDATE_DATA_ERL1_CRC_FAILURE) {
+		/*
+		 * Immediate Data failed DataCRC and ERL>=1,
+		 * silently drop this PDU and let the initiator
+		 * plug the CmdSN gap.
+		 *
+		 * FIXME: Send Unsolicited NOPIN with reserved
+		 * TTT here to help the initiator figure out
+		 * the missing CmdSN, although they should be
+		 * intelligent enough to determine the missing
+		 * CmdSN and issue a retry to plug the sequence.
+		 */
+		cmd->i_state = ISTATE_REMOVE;
+		iscsi_add_cmd_to_immediate_queue(cmd, conn, cmd->i_state);
+	} else /* immed_ret == IMMEDIDATE_DATA_CANNOT_RECOVER */
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_handle_data_out():
+ *
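+ *	Process an incoming DataOUT PDU: locate the referenced command, run
+ *	the pre/post DataOUT checks, receive the payload into the command's
+ *	memory and verify the DataDigest when enabled.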
+ *
+ */
+static inline int iscsi_handle_data_out(struct iscsi_conn *conn, unsigned char *buf)
+{
+	int iov_ret, ooo_cmdsn = 0, ret;
+	u8 data_crc_failed = 0, pad_bytes[4];
+	u32 checksum, iov_count = 0, padding = 0, rx_got = 0;
+	u32 rx_size = 0, payload_length;
+	struct iscsi_cmd *cmd = NULL;
+	struct se_cmd *se_cmd;
+	struct se_map_sg map_sg;
+	struct se_unmap_sg unmap_sg;
+	struct iscsi_data *hdr;
+	struct iovec *iov;
+	unsigned long flags;
+
+	hdr			= (struct iscsi_data *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+	hdr->datasn		= be32_to_cpu(hdr->datasn);
+	hdr->offset		= be32_to_cpu(hdr->offset);
+
+	if (!payload_length) {
+		printk(KERN_ERR "DataOUT payload is ZERO, protocol error.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	/* iSCSI write */
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->rx_data_octets += payload_length;
+	if (SESS_NODE_ACL(SESS(conn))) {
+		spin_lock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+		SESS_NODE_ACL(SESS(conn))->write_bytes += payload_length;
+		spin_unlock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+	}
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" MaxRecvDataSegmentLength: %u\n", payload_length,
+			CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	cmd = iscsi_find_cmd_from_itt_or_dump(conn, hdr->itt,
+			payload_length);
+	if (!(cmd))
+		return 0;
+
+	TRACE(TRACE_ISCSI, "Got DataOut ITT: 0x%08x, TTT: 0x%08x,"
+		" DataSN: 0x%08x, Offset: %u, Length: %u, CID: %hu\n",
+		hdr->itt, hdr->ttt, hdr->datasn, hdr->offset,
+		payload_length, conn->cid);
+
+	if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT) {
+		printk(KERN_ERR "Command ITT: 0x%08x received DataOUT after"
+			" last DataOUT received, dumping payload\n",
+			cmd->init_task_tag);
+		return iscsi_dump_data_payload(conn, payload_length, 1);
+	}
+
+	if (cmd->data_direction != DMA_TO_DEVICE) {
+		printk(KERN_ERR "Command ITT: 0x%08x received DataOUT for a"
+			" NON-WRITE command.\n", cmd->init_task_tag);
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
+				1, 0, buf, cmd);
+	}
+	se_cmd = SE_CMD(cmd);
+	iscsi_mod_dataout_timer(cmd);
+
+	if ((hdr->offset + payload_length) > cmd->data_length) {
+		printk(KERN_ERR "DataOut Offset: %u, Length %u greater than"
+			" iSCSI Command EDTL %u, protocol error.\n",
+			hdr->offset, payload_length, cmd->data_length);
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
+				1, 0, buf, cmd);
+	}
+
+	/*
+	 * Whenever a DataOUT or DataIN PDU contains a valid TTT, the
+	 * iSCSI LUN field must be set. iSCSI v20 10.7.4.  Of course,
+	 * Cisco cannot figure this out.
+	 */
+#if 0
+	if (hdr->ttt != 0xFFFFFFFF) {
+		int lun = iscsi_unpack_lun(get_unaligned_le64(&hdr->lun[0]));
+		if (lun != SE_CMD(cmd)->orig_fe_lun) {
+			printk(KERN_ERR "Received LUN: %u does not match iSCSI"
+				" LUN: %u\n", lun, SE_CMD(cmd)->orig_fe_lun);
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_INVALID,
+					1, 0, buf, cmd);
+		}
+	}
+#endif
+	if (cmd->unsolicited_data) {
+		int dump_unsolicited_data = 0, wait_for_transport = 0;
+
+		if (SESS_OPS_C(conn)->InitialR2T) {
+			printk(KERN_ERR "Received unexpected unsolicited data"
+				" while InitialR2T=Yes, protocol error.\n");
+			transport_send_check_condition_and_sense(SE_CMD(cmd),
+					TCM_UNEXPECTED_UNSOLICITED_DATA, 0);
+			return -1;
+		}
+		/*
+		 * Special case for dealing with Unsolicited DataOUT
+		 * and Unsupported SAM WRITE Opcodes and SE resource allocation
+		 * failures;
+		 */
+		spin_lock_irqsave(&T_TASK(se_cmd)->t_state_lock, flags);
+		/*
+		 * Handle cases where we do or do not want to sleep on
+		 * unsolicited_data_sem
+		 *
+		 * First, if TRANSPORT_WRITE_PENDING state has not been reached,
+		 * we need to assume we must wait and sleep..
+		 */
+		wait_for_transport =
+				(se_cmd->t_state != TRANSPORT_WRITE_PENDING);
+		/*
+		 * For the ImmediateData=Yes cases, there will already be
+		 * generic target memory allocated with the original
+		 * ISCSI_OP_SCSI_CMD PDU, so do not sleep for that case.
+		 *
+		 * The last is a check for a delayed TASK_ABORTED status that
+		 * means the data payload will be dropped because
+		 * SCF_SE_CMD_FAILED has been set to indicate that an exception
+		 * condition for this struct se_cmd has occurred in generic target
+		 * code that requires us to drop payload.
+		 */
+		if ((cmd->immediate_data != 0) ||
+		    (atomic_read(&T_TASK(se_cmd)->t_transport_aborted) != 0))
+			wait_for_transport = 0;
+		spin_unlock_irqrestore(&T_TASK(se_cmd)->t_state_lock, flags);
+
+		if (wait_for_transport)
+			down(&cmd->unsolicited_data_sem);
+
+		spin_lock_irqsave(&T_TASK(se_cmd)->t_state_lock, flags);
+		if (!(se_cmd->se_cmd_flags & SCF_SUPPORTED_SAM_OPCODE) ||
+		     (se_cmd->se_cmd_flags & SCF_SE_CMD_FAILED))
+			dump_unsolicited_data = 1;
+		spin_unlock_irqrestore(&T_TASK(se_cmd)->t_state_lock, flags);
+
+		if (dump_unsolicited_data) {
+			/*
+			 * Check if a delayed TASK_ABORTED status needs to
+			 * be sent now if the ISCSI_FLAG_CMD_FINAL has been
+			 * received with the unsolicited data out.
+			 */
+			if (hdr->flags & ISCSI_FLAG_CMD_FINAL)
+				iscsi_stop_dataout_timer(cmd);
+
+			transport_check_aborted_status(se_cmd,
+					(hdr->flags & ISCSI_FLAG_CMD_FINAL));
+			return iscsi_dump_data_payload(conn, payload_length, 1);
+		}
+	} else {
+		/*
+		 * For the normal solicited data path:
+		 *
+		 * Check for a delayed TASK_ABORTED status and dump any
+		 * incoming data out payload if one exists.  Also, when the
+		 * ISCSI_FLAG_CMD_FINAL is set to denote the end of the current
+		 * data out sequence, we decrement outstanding_r2ts.  Once
+		 * outstanding_r2ts reaches zero, go ahead and send the delayed
+		 * TASK_ABORTED status.
+		 */
+		if (atomic_read(&T_TASK(se_cmd)->t_transport_aborted) != 0) {
+			if (hdr->flags & ISCSI_FLAG_CMD_FINAL)
+				if (--cmd->outstanding_r2ts < 1) {
+					iscsi_stop_dataout_timer(cmd);
+					transport_check_aborted_status(
+							se_cmd, 1);
+				}
+
+			return iscsi_dump_data_payload(conn, payload_length, 1);
+		}
+	}
+	/*
+	 * Perform DataSN, DataSequenceInOrder, DataPDUInOrder, and
+	 * within-command recovery checks before receiving the payload.
+	 */
+	ret = iscsi_check_pre_dataout(cmd, buf);
+	if (ret == DATAOUT_WITHIN_COMMAND_RECOVERY)
+		return 0;
+	else if (ret == DATAOUT_CANNOT_RECOVER)
+		return -1;
+
+	rx_size += payload_length;
+	iov = &cmd->iov_data[0];
+
+	memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+	memset((void *)&unmap_sg, 0, sizeof(struct se_unmap_sg));
+	map_sg.fabric_cmd = (void *)cmd;
+	map_sg.se_cmd = SE_CMD(cmd);
+	map_sg.iov = iov;
+	map_sg.sg_kmap_active = 1;
+	map_sg.data_length = payload_length;
+	map_sg.data_offset = hdr->offset;
+	unmap_sg.fabric_cmd = (void *)cmd;
+	unmap_sg.se_cmd = SE_CMD(cmd);
+
+	iov_ret = iscsi_set_iovec_ptrs(&map_sg, &unmap_sg);
+	if (iov_ret < 0)
+		return -1;
+
+	iov_count += iov_ret;
+
+	padding = ((-payload_length) & 3);
+	if (padding != 0) {
+		iov[iov_count].iov_base	= &pad_bytes;
+		iov[iov_count++].iov_len = padding;
+		rx_size += padding;
+		TRACE(TRACE_ISCSI, "Receiving %u padding bytes.\n", padding);
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		iov[iov_count].iov_base = &checksum;
+		iov[iov_count++].iov_len = CRC_LEN;
+		rx_size += CRC_LEN;
+	}
+
+	iscsi_map_SG_segments(&unmap_sg);
+
+	rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);
+
+	iscsi_unmap_SG_segments(&unmap_sg);
+
+	if (rx_got != rx_size)
+		return -1;
+
+	if (CONN_OPS(conn)->DataDigest) {
+		__u32 counter = payload_length, data_crc = 0;
+		struct iovec *iov_ptr = &cmd->iov_data[0];
+		struct scatterlist sg;
+		/*
+		 * Because the IP stack modifies the passed iovecs, we have to
+		 * call iscsi_set_iovec_ptrs() again in order to have an iMD/PSCSI
+		 * agnostic way of doing DataDigest computations.
+		 */
+		memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+		map_sg.fabric_cmd = (void *)cmd;
+		map_sg.se_cmd = SE_CMD(cmd);
+		map_sg.iov = iov_ptr;
+		map_sg.data_length = payload_length;
+		map_sg.data_offset = hdr->offset;
+
+		if (iscsi_set_iovec_ptrs(&map_sg, &unmap_sg) < 0)
+			return -1;
+
+		crypto_hash_init(&conn->conn_rx_hash);
+
+		while (counter > 0) {
+			sg_init_one(&sg, iov_ptr->iov_base,
+					iov_ptr->iov_len);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					iov_ptr->iov_len);
+
+			TRACE(TRACE_DIGEST, "Computed CRC32C DataDigest %zu"
+				" bytes, CRC 0x%08x\n", iov_ptr->iov_len,
+				data_crc);
+			counter -= iov_ptr->iov_len;
+			iov_ptr++;
+		}
+
+		if (padding) {
+			sg_init_one(&sg, (__u8 *)&pad_bytes, padding);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					padding);
+			TRACE(TRACE_DIGEST, "Computed CRC32C DataDigest %d"
+				" bytes of padding, CRC 0x%08x\n",
+				padding, data_crc);
+		}
+		crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+		if (checksum != data_crc) {
+			printk(KERN_ERR "ITT: 0x%08x, Offset: %u, Length: %u,"
+				" DataSN: 0x%08x, CRC32C DataDigest 0x%08x"
+				" does not match computed 0x%08x\n",
+				hdr->itt, hdr->offset, payload_length,
+				hdr->datasn, checksum, data_crc);
+			data_crc_failed = 1;
+		} else {
+			TRACE(TRACE_DIGEST, "Got CRC32C DataDigest 0x%08x for"
+				" %u bytes of Data Out\n", checksum,
+				payload_length);
+		}
+	}
+	/*
+	 * Increment post receive data and CRC values or perform
+	 * within-command recovery.
+	 */
+	ret = iscsi_check_post_dataout(cmd, buf, data_crc_failed);
+	if ((ret == DATAOUT_NORMAL) || (ret == DATAOUT_WITHIN_COMMAND_RECOVERY))
+		return 0;
+	else if (ret == DATAOUT_SEND_R2T) {
+		iscsi_set_dataout_sequence_values(cmd);
+		iscsi_build_r2ts_for_cmd(cmd, conn, 0);
+	} else if (ret == DATAOUT_SEND_TO_TRANSPORT) {
+		/*
+		 * Handle extra special case for out of order
+		 * Unsolicited Data Out.
+		 */
+		spin_lock_bh(&cmd->istate_lock);
+		ooo_cmdsn = (cmd->cmd_flags & ICF_OOO_CMDSN);
+		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
+		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+		spin_unlock_bh(&cmd->istate_lock);
+
+		iscsi_stop_dataout_timer(cmd);
+		return (!ooo_cmdsn) ? transport_generic_handle_data(
+					SE_CMD(cmd)) : 0;
+	} else /* DATAOUT_CANNOT_RECOVER */
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_handle_nop_out():
+ *
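+ *	Process an incoming NOPOUT PDU, receiving any ping data and either
+ *	queueing a NOPIN response or completing an outstanding unsolicited
+ *	NOPIN ping.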
+ *
+ */
+static inline int iscsi_handle_nop_out(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	unsigned char *ping_data = NULL;
+	int cmdsn_ret, niov = 0, ret = 0, rx_got, rx_size;
+	u32 checksum, data_crc, padding = 0, payload_length;
+	u64 lun;
+	struct iscsi_cmd *cmd = NULL;
+	struct iovec *iov = NULL;
+	struct iscsi_nopout *hdr;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_nopout *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	lun			= get_unaligned_le64(&hdr->lun[0]);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+
+	if ((hdr->itt == 0xFFFFFFFF) && !(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		printk(KERN_ERR "NOPOUT ITT is reserved, but Immediate Bit is"
+			" not set, protocol error.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "NOPOUT Ping Data DataSegmentLength: %u is"
+			" greater than MaxRecvDataSegmentLength: %u, protocol"
+			" error.\n", payload_length,
+			CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	TRACE(TRACE_ISCSI, "Got NOPOUT Ping %s ITT: 0x%08x, TTT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, Length: %u\n",
+		(hdr->itt == 0xFFFFFFFF) ? "Response" : "Request",
+		hdr->itt, hdr->ttt, hdr->cmdsn, hdr->exp_statsn,
+		payload_length);
+	/*
+	 * This is not a response to an unsolicited NopIN, which means
+	 * it can either be a NOPOUT ping request (with a valid ITT),
+	 * or a NOPOUT not requesting a NOPIN (with a reserved ITT).
+	 * Either way, make sure we allocate a struct iscsi_cmd, as both
+	 * can contain ping data.
+	 */
+	if (hdr->ttt == 0xFFFFFFFF) {
+		cmd = iscsi_allocate_cmd(conn);
+		if (!(cmd))
+			return iscsi_add_reject(
+					ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, buf, conn);
+
+		cmd->iscsi_opcode	= ISCSI_OP_NOOP_OUT;
+		cmd->i_state		= ISTATE_SEND_NOPIN;
+		cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ?
+						1 : 0);
+		SESS(conn)->init_task_tag = cmd->init_task_tag = hdr->itt;
+		cmd->targ_xfer_tag	= 0xFFFFFFFF;
+		cmd->cmd_sn		= hdr->cmdsn;
+		cmd->exp_stat_sn	= hdr->exp_statsn;
+		cmd->data_direction	= DMA_NONE;
+	}
+
+	if (payload_length && (hdr->ttt == 0xFFFFFFFF)) {
+		rx_size = payload_length;
+		ping_data = kzalloc(payload_length + 1, GFP_KERNEL);
+		if (!(ping_data)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" NOPOUT ping data.\n");
+			ret = -1;
+			goto out;
+		}
+
+		iov = &cmd->iov_misc[0];
+		iov[niov].iov_base	= ping_data;
+		iov[niov++].iov_len	= payload_length;
+
+		padding = ((-payload_length) & 3);
+		if (padding != 0) {
+			TRACE(TRACE_ISCSI, "Receiving %u additional bytes"
+				" for padding.\n", padding);
+			iov[niov].iov_base	= &cmd->pad_bytes;
+			iov[niov++].iov_len	= padding;
+			rx_size += padding;
+		}
+		if (CONN_OPS(conn)->DataDigest) {
+			iov[niov].iov_base	= &checksum;
+			iov[niov++].iov_len	= CRC_LEN;
+			rx_size += CRC_LEN;
+		}
+
+		rx_got = rx_data(conn, &cmd->iov_misc[0], niov, rx_size);
+		if (rx_got != rx_size) {
+			ret = -1;
+			goto out;
+		}
+
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_rx_hash);
+
+			sg_init_one(&sg, (u8 *)ping_data, payload_length);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					payload_length);
+
+			if (padding) {
+				sg_init_one(&sg, (u8 *)&cmd->pad_bytes,
+					padding);
+				crypto_hash_update(&conn->conn_rx_hash, &sg,
+					padding);
+			}
+			crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+			if (checksum != data_crc) {
+				printk(KERN_ERR "Ping data CRC32C DataDigest"
+				" 0x%08x does not match computed 0x%08x\n",
+					checksum, data_crc);
+				if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+					printk(KERN_ERR "Unable to recover from"
+					" NOPOUT Ping DataCRC failure while in"
+						" ERL=0.\n");
+					ret = -1;
+					goto out;
+				} else {
+					/*
+					 * Silently drop this PDU and let the
+					 * initiator plug the CmdSN gap.
+					 */
+					TRACE(TRACE_ERL1, "Dropping NOPOUT"
+					" Command CmdSN: 0x%08x due to"
+					" DataCRC error.\n", hdr->cmdsn);
+					ret = 0;
+					goto out;
+				}
+			} else {
+				TRACE(TRACE_DIGEST, "Got CRC32C DataDigest"
+				" 0x%08x for %u bytes of ping data.\n",
+					checksum, payload_length);
+			}
+		}
+
+		ping_data[payload_length] = '\0';
+		/*
+		 * Attach ping data to struct iscsi_cmd->buf_ptr.
+		 */
+		cmd->buf_ptr = (void *)ping_data;
+		cmd->buf_ptr_size = payload_length;
+
+		TRACE(TRACE_ISCSI, "Got %u bytes of NOPOUT ping"
+			" data.\n", payload_length);
+		TRACE(TRACE_ISCSI, "Ping Data: \"%s\"\n", ping_data);
+	}
+
+	if (hdr->itt != 0xFFFFFFFF) {
+		if (!cmd) {
+			printk(KERN_ERR "Checking CmdSN for NOPOUT,"
+				" but cmd is NULL!\n");
+			return -1;
+		}
+
+		/*
+		 * Initiator is expecting a NopIN ping reply,
+		 */
+		iscsi_attach_cmd_to_queue(conn, cmd);
+
+		iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+		if (hdr->opcode & ISCSI_OP_IMMEDIATE) {
+			iscsi_add_cmd_to_response_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		}
+
+		cmdsn_ret = iscsi_check_received_cmdsn(conn, cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		    (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)) {
+			return 0;
+		} else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			ret = 0;
+			goto ping_out;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+
+		return 0;
+	}
+
+	if (hdr->ttt != 0xFFFFFFFF) {
+		/*
+		 * This was a response to an unsolicited NOPIN ping.
+		 */
+		cmd = iscsi_find_cmd_from_ttt(conn, hdr->ttt);
+		if (!(cmd))
+			return -1;
+
+		iscsi_stop_nopin_response_timer(conn);
+
+		cmd->i_state = ISTATE_REMOVE;
+		iscsi_add_cmd_to_immediate_queue(cmd, conn, cmd->i_state);
+		iscsi_start_nopin_timer(conn);
+	} else {
+		/*
+		 * Initiator is not expecting a NOPIN in response.
+		 * Just ignore for now.
+		 *
+		 * iSCSI v19-91 10.18
+		 * "A NOP-OUT may also be used to confirm a changed
+		 *  ExpStatSN if another PDU will not be available
+		 *  for a long time."
+		 */
+		ret = 0;
+		goto out;
+	}
+
+	return 0;
+out:
+	if (cmd)
+		__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+ping_out:
+	kfree(ping_data);
+	return ret;
+}
+
+/*	iscsi_handle_task_mgt_cmd():
+ *
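+ *	Process an incoming Task Management Request and dispatch the requested
+ *	TMR function to the TCM core, or queue the response directly when the
+ *	request cannot be carried out.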
+ *
+ */
+static inline int iscsi_handle_task_mgt_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_cmd *cmd;
+	struct se_tmr_req *se_tmr;
+	struct iscsi_tmr_req *tmr_req;
+	struct iscsi_tm *hdr;
+	u32 payload_length;
+	int cmdsn_ret, out_of_order_cmdsn = 0, ret;
+	u8 function;
+
+	hdr			= (struct iscsi_tm *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->rtt		= be32_to_cpu(hdr->rtt);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+	hdr->refcmdsn		= be32_to_cpu(hdr->refcmdsn);
+	hdr->exp_datasn		= be32_to_cpu(hdr->exp_datasn);
+	hdr->flags &= ~ISCSI_FLAG_CMD_FINAL;
+	function = hdr->flags;
+
+	TRACE(TRACE_ISCSI, "Got Task Management Request ITT: 0x%08x, CmdSN:"
+		" 0x%08x, Function: 0x%02x, RefTaskTag: 0x%08x, RefCmdSN:"
+		" 0x%08x, CID: %hu\n", hdr->itt, hdr->cmdsn, function,
+		hdr->rtt, hdr->refcmdsn, conn->cid);
+
+	if ((function != ISCSI_TM_FUNC_ABORT_TASK) &&
+	    ((function != ISCSI_TM_FUNC_TASK_REASSIGN) &&
+	     (hdr->rtt != ISCSI_RESERVED_TAG))) {
+		printk(KERN_ERR "RefTaskTag should be set to 0xFFFFFFFF.\n");
+		hdr->rtt = ISCSI_RESERVED_TAG;
+	}
+
+	if ((function == ISCSI_TM_FUNC_TASK_REASSIGN) &&
+			!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		printk(KERN_ERR "Task Management Request TASK_REASSIGN not"
+			" issued as immediate command, bad iSCSI Initiator"
+			" implementation\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+	if ((function != ISCSI_TM_FUNC_ABORT_TASK) &&
+	    (hdr->refcmdsn != ISCSI_RESERVED_TAG))
+		hdr->refcmdsn = ISCSI_RESERVED_TAG;
+
+	cmd = iscsi_allocate_se_cmd_for_tmr(conn, function);
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, buf, conn);
+
+	cmd->iscsi_opcode	= ISCSI_OP_SCSI_TMFUNC;
+	cmd->i_state		= ISTATE_SEND_TASKMGTRSP;
+	cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	cmd->init_task_tag	= hdr->itt;
+	cmd->targ_xfer_tag	= 0xFFFFFFFF;
+	cmd->cmd_sn		= hdr->cmdsn;
+	cmd->exp_stat_sn	= hdr->exp_statsn;
+	se_tmr			= SE_CMD(cmd)->se_tmr_req;
+	tmr_req			= cmd->tmr_req;
+	/*
+	 * Locate the struct se_lun for all TMRs not related to ERL=2 TASK_REASSIGN
+	 */
+	if (function != ISCSI_TM_FUNC_TASK_REASSIGN) {
+		ret = iscsi_get_lun_for_tmr(cmd,
+				get_unaligned_le64(&hdr->lun[0]));
+		if (ret < 0) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			se_tmr->response = ISCSI_TMF_RSP_NO_LUN;
+			goto attach;
+		}
+	}
+
+	switch (function) {
+	case ISCSI_TM_FUNC_ABORT_TASK:
+		se_tmr->response = iscsi_tmr_abort_task(cmd, buf);
+		if (se_tmr->response != ISCSI_TMF_RSP_COMPLETE) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			goto attach;
+		}
+		break;
+	case ISCSI_TM_FUNC_ABORT_TASK_SET:
+	case ISCSI_TM_FUNC_CLEAR_ACA:
+	case ISCSI_TM_FUNC_CLEAR_TASK_SET:
+	case ISCSI_TM_FUNC_LOGICAL_UNIT_RESET:
+		break;
+	case ISCSI_TM_FUNC_TARGET_WARM_RESET:
+		if (iscsi_tmr_task_warm_reset(conn, tmr_req, buf) < 0) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			se_tmr->response = ISCSI_TMF_RSP_AUTH_FAILED;
+			goto attach;
+		}
+		break;
+	case ISCSI_TM_FUNC_TARGET_COLD_RESET:
+		if (iscsi_tmr_task_cold_reset(conn, tmr_req, buf) < 0) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			se_tmr->response = ISCSI_TMF_RSP_AUTH_FAILED;
+			goto attach;
+		}
+		break;
+	case ISCSI_TM_FUNC_TASK_REASSIGN:
+		se_tmr->response = iscsi_tmr_task_reassign(cmd, buf);
+		/*
+		 * Perform sanity checks on the ExpDataSN only if the
+		 * TASK_REASSIGN was successful.
+		 */
+		if (se_tmr->response != ISCSI_TMF_RSP_COMPLETE)
+			break;
+
+		if (iscsi_check_task_reassign_expdatasn(tmr_req, conn) < 0)
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_INVALID, 1, 1,
+					buf, cmd);
+		break;
+	default:
+		printk(KERN_ERR "Unknown TMR function: 0x%02x, protocol"
+			" error.\n", function);
+		SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+		se_tmr->response = ISCSI_TMF_RSP_NOT_SUPPORTED;
+		goto attach;
+	}
+
+	if ((function != ISCSI_TM_FUNC_TASK_REASSIGN) &&
+	    (se_tmr->response == ISCSI_TMF_RSP_COMPLETE))
+		se_tmr->call_transport = 1;
+attach:
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn,
+				cmd, hdr->cmdsn);
+		if (cmdsn_ret == CMDSN_NORMAL_OPERATION)
+			do {} while (0);
+		else if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)
+			out_of_order_cmdsn = 1;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	}
+	iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	if (out_of_order_cmdsn)
+		return 0;
+	/*
+	 * Found the referenced task, send to transport for processing.
+	 */
+	if (se_tmr->call_transport)
+		return transport_generic_handle_tmr(SE_CMD(cmd));
+
+	/*
+	 * Could not find the referenced LUN, task, or Task Management
+	 * command not authorized or supported.  Change state and
+	 * let the tx_thread send the response.
+	 *
+	 * For connection recovery, this is also the default action for
+	 * TMR TASK_REASSIGN.
+	 */
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+	return 0;
+}
+
+/* 	iscsi_handle_text_cmd():
+ *
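+ *	Process an incoming Text Request; only SendTargets=All is currently
+ *	accepted.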
+ *
+ */
+/* #warning FIXME: Support Text Command parameters besides SendTargets */
+static inline int iscsi_handle_text_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	char *text_ptr, *text_in;
+	int cmdsn_ret, niov = 0, rx_got, rx_size;
+	u32 checksum = 0, data_crc = 0, payload_length;
+	u32 padding = 0, pad_bytes = 0, text_length = 0;
+	struct iscsi_cmd *cmd;
+	struct iovec iov[3];
+	struct iscsi_text *hdr;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_text *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "Unable to accept text parameter length: %u"
+			" greater than MaxRecvDataSegmentLength %u.\n",
+		       payload_length, CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	TRACE(TRACE_ISCSI, "Got Text Request: ITT: 0x%08x, CmdSN: 0x%08x,"
+		" ExpStatSN: 0x%08x, Length: %u\n", hdr->itt, hdr->cmdsn,
+		hdr->exp_statsn, payload_length);
+
+	rx_size = text_length = payload_length;
+	if (text_length) {
+		text_in = kzalloc(text_length, GFP_KERNEL);
+		if (!(text_in)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" incoming text parameters\n");
+			return -1;
+		}
+
+		memset(iov, 0, 3 * sizeof(struct iovec));
+		iov[niov].iov_base	= text_in;
+		iov[niov++].iov_len	= text_length;
+
+		padding = ((-payload_length) & 3);
+		if (padding != 0) {
+			iov[niov].iov_base = &pad_bytes;
+			iov[niov++].iov_len  = padding;
+			rx_size += padding;
+			TRACE(TRACE_ISCSI, "Receiving %u additional bytes"
+					" for padding.\n", padding);
+		}
+		if (CONN_OPS(conn)->DataDigest) {
+			iov[niov].iov_base	= &checksum;
+			iov[niov++].iov_len	= CRC_LEN;
+			rx_size += CRC_LEN;
+		}
+
+		rx_got = rx_data(conn, &iov[0], niov, rx_size);
+		if (rx_got != rx_size) {
+			kfree(text_in);
+			return -1;
+		}
+
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_rx_hash);
+
+			sg_init_one(&sg, (u8 *)text_in, text_length);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					text_length);
+
+			if (padding) {
+				sg_init_one(&sg, (u8 *)&pad_bytes, padding);
+				crypto_hash_update(&conn->conn_rx_hash, &sg,
+						padding);
+			}
+			crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+			if (checksum != data_crc) {
+				printk(KERN_ERR "Text data CRC32C DataDigest"
+					" 0x%08x does not match computed"
+					" 0x%08x\n", checksum, data_crc);
+				if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+					printk(KERN_ERR "Unable to recover from"
+					" Text Data digest failure while in"
+						" ERL=0.\n");
+					kfree(text_in);
+					return -1;
+				} else {
+					/*
+					 * Silently drop this PDU and let the
+					 * initiator plug the CmdSN gap.
+					 */
+					TRACE(TRACE_ERL1, "Dropping Text"
+					" Command CmdSN: 0x%08x due to"
+					" DataCRC error.\n", hdr->cmdsn);
+					kfree(text_in);
+					return 0;
+				}
+			} else {
+				TRACE(TRACE_DIGEST, "Got CRC32C DataDigest"
+					" 0x%08x for %u bytes of text data.\n",
+						checksum, text_length);
+			}
+		}
+		text_in[text_length - 1] = '\0';
+		TRACE(TRACE_ISCSI, "Successfully read %d bytes of text"
+				" data.\n", text_length);
+
+		if (strncmp("SendTargets", text_in, 11) != 0) {
+			printk(KERN_ERR "Received Text Data that is not"
+				" SendTargets, cannot continue.\n");
+			kfree(text_in);
+			return -1;
+		}
+		text_ptr = strchr(text_in, '=');
+		if (!(text_ptr)) {
+			printk(KERN_ERR "No \"=\" separator found in Text Data,"
+				" cannot continue.\n");
+			kfree(text_in);
+			return -1;
+		}
+		if (strncmp("=All", text_ptr, 4) != 0) {
+			printk(KERN_ERR "Unable to locate All value for"
+				" SendTargets key, cannot continue.\n");
+			kfree(text_in);
+			return -1;
+		}
+/*#warning Support SendTargets=(iSCSI Target Name/Nothing) values. */
+		kfree(text_in);
+	}
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, buf, conn);
+
+	cmd->iscsi_opcode	= ISCSI_OP_TEXT;
+	cmd->i_state		= ISTATE_SEND_TEXTRSP;
+	cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	SESS(conn)->init_task_tag = cmd->init_task_tag	= hdr->itt;
+	cmd->targ_xfer_tag	= 0xFFFFFFFF;
+	cmd->cmd_sn		= hdr->cmdsn;
+	cmd->exp_stat_sn	= hdr->exp_statsn;
+	cmd->data_direction	= DMA_NONE;
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+	iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn, cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		     (cmdsn_ret == CMDSN_HIGHER_THAN_EXP))
+			return 0;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+						ISTATE_REMOVE);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+
+		return 0;
+	}
+
+	return iscsi_execute_cmd(cmd, 0);
+}
+
+/*	iscsi_logout_closesession():
+ *
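+ *	Handle a Logout Request with reason code CLOSESESSION: mark the
+ *	session and all logged in connections as in logout and queue the
+ *	Logout Response.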
+ *
+ */
+int iscsi_logout_closesession(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_conn *conn_p;
+	struct iscsi_session *sess = SESS(conn);
+
+	TRACE(TRACE_ISCSI, "Received logout request CLOSESESSION on CID: %hu"
+		" for SID: %u.\n", conn->cid, SESS(conn)->sid);
+
+	atomic_set(&sess->session_logout, 1);
+	atomic_set(&conn->conn_logout_remove, 1);
+	conn->conn_logout_reason = ISCSI_LOGOUT_REASON_CLOSE_SESSION;
+
+	iscsi_inc_conn_usage_count(conn);
+	iscsi_inc_session_usage_count(sess);
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn_p, &sess->sess_conn_list, conn_list) {
+		if (conn_p->conn_state != TARG_CONN_STATE_LOGGED_IN)
+			continue;
+
+		TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGOUT.\n");
+		conn_p->conn_state = TARG_CONN_STATE_IN_LOGOUT;
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_logout_closeconnection():
+ *
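+ *	Handle a Logout Request with reason code CLOSECONNECTION, which may
+ *	reference either the receiving connection or a different CID within
+ *	the same session.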
+ *
+ */
+int iscsi_logout_closeconnection(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_conn *l_conn;
+	struct iscsi_session *sess = SESS(conn);
+
+	TRACE(TRACE_ISCSI, "Received logout request CLOSECONNECTION for CID:"
+		" %hu on CID: %hu.\n", cmd->logout_cid, conn->cid);
+
+	/*
+	 * A Logout Request with a CLOSECONNECTION reason code for a CID
+	 * can arrive on a connection with a differing CID.
+	 */
+	if (conn->cid == cmd->logout_cid) {
+		spin_lock_bh(&conn->state_lock);
+		TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGOUT.\n");
+		conn->conn_state = TARG_CONN_STATE_IN_LOGOUT;
+
+		atomic_set(&conn->conn_logout_remove, 1);
+		conn->conn_logout_reason = ISCSI_LOGOUT_REASON_CLOSE_CONNECTION;
+		iscsi_inc_conn_usage_count(conn);
+
+		spin_unlock_bh(&conn->state_lock);
+	} else {
+		/*
+		 * Handle CLOSECONNECTION requests for a different CID in
+		 * iscsi_logout_post_handler_diffcid() so as to give enough
+		 * time for any non-immediate command's CmdSN to be
+		 * acknowledged on the connection in question.
+		 *
+		 * Here we simply make sure the CID is still around.
+		 */
+		l_conn = iscsi_get_conn_from_cid(sess,
+				cmd->logout_cid);
+		if (!(l_conn)) {
+			cmd->logout_response = ISCSI_LOGOUT_CID_NOT_FOUND;
+			iscsi_add_cmd_to_response_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		}
+
+		iscsi_dec_conn_usage_count(l_conn);
+	}
+
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_logout_removeconnforrecovery():
+ *
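+ *	Handle an explicit REMOVECONNFORRECOVERY Logout Request, which is
+ *	only legal while operating in ErrorRecoveryLevel=2.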
+ *
+ */
+int iscsi_logout_removeconnforrecovery(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+	TRACE(TRACE_ERL2, "Received explicit REMOVECONNFORRECOVERY logout for"
+		" CID: %hu on CID: %hu.\n", cmd->logout_cid, conn->cid);
+
+	if (SESS_OPS(sess)->ErrorRecoveryLevel != 2) {
+		printk(KERN_ERR "Received Logout Request REMOVECONNFORRECOVERY"
+			" while ERL!=2.\n");
+		cmd->logout_response = ISCSI_LOGOUT_RECOVERY_UNSUPPORTED;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		return 0;
+	}
+
+	if (conn->cid == cmd->logout_cid) {
+		printk(KERN_ERR "Received Logout Request REMOVECONNFORRECOVERY"
+			" with CID: %hu on CID: %hu, implementation error.\n",
+				cmd->logout_cid, conn->cid);
+		cmd->logout_response = ISCSI_LOGOUT_CLEANUP_FAILED;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		return 0;
+	}
+
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_handle_logout_cmd():
+ *
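+ *	Process an incoming Logout Request PDU and queue the Logout Response,
+ *	executing non-immediate requests in CmdSN order.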
+ *
+ */
+static inline int iscsi_handle_logout_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	int cmdsn_ret, logout_remove = 0;
+	u8 reason_code = 0;
+	struct iscsi_cmd *cmd;
+	struct iscsi_logout *hdr;
+
+	hdr			= (struct iscsi_logout *) buf;
+	reason_code		= (hdr->flags & 0x7f);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->cid		= be16_to_cpu(hdr->cid);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn	= be32_to_cpu(hdr->exp_statsn);
+
+	{
+	struct iscsi_tiqn *tiqn = iscsi_snmp_get_tiqn(conn);
+
+	if (tiqn) {
+		spin_lock(&tiqn->logout_stats.lock);
+		if (reason_code == ISCSI_LOGOUT_REASON_CLOSE_SESSION)
+			tiqn->logout_stats.normal_logouts++;
+		else
+			tiqn->logout_stats.abnormal_logouts++;
+		spin_unlock(&tiqn->logout_stats.lock);
+		}
+	}
+
+	TRACE(TRACE_ISCSI, "Got Logout Request ITT: 0x%08x CmdSN: 0x%08x"
+		" ExpStatSN: 0x%08x Reason: 0x%02x CID: %hu on CID: %hu\n",
+		hdr->itt, hdr->cmdsn, hdr->exp_statsn, reason_code,
+		hdr->cid, conn->cid);
+
+	if (conn->conn_state != TARG_CONN_STATE_LOGGED_IN) {
+		printk(KERN_ERR "Received logout request on connection that"
+			" is not in logged in state, ignoring request.\n");
+		return 0;
+	}
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES, 1,
+					buf, conn);
+
+	cmd->iscsi_opcode       = ISCSI_OP_LOGOUT;
+	cmd->i_state            = ISTATE_SEND_LOGOUTRSP;
+	cmd->immediate_cmd      = ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	SESS(conn)->init_task_tag = cmd->init_task_tag  = hdr->itt;
+	cmd->targ_xfer_tag      = 0xFFFFFFFF;
+	cmd->cmd_sn             = hdr->cmdsn;
+	cmd->exp_stat_sn        = hdr->exp_statsn;
+	cmd->logout_cid         = hdr->cid;
+	cmd->logout_reason      = reason_code;
+	cmd->data_direction     = DMA_NONE;
+
+	/*
+	 * We need to sleep in these cases (by returning 1) until the Logout
+	 * Response gets sent in the tx thread.
+	 */
+	if ((reason_code == ISCSI_LOGOUT_REASON_CLOSE_SESSION) ||
+	   ((reason_code == ISCSI_LOGOUT_REASON_CLOSE_CONNECTION) &&
+	    (hdr->cid == conn->cid)))
+		logout_remove = 1;
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	if (reason_code != ISCSI_LOGOUT_REASON_RECOVERY)
+		iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	/*
+	 * Non-Immediate Logout Commands are executed in CmdSN order..
+	 */
+	if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn, cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		    (cmdsn_ret == CMDSN_HIGHER_THAN_EXP))
+			return logout_remove;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	}
+	/*
+	 * Immediate Logout Commands are executed, well, Immediately.
+	 */
+	if (iscsi_execute_cmd(cmd, 0) < 0)
+		return -1;
+
+	return logout_remove;
+}
+
+/*	iscsi_handle_snack():
+ *
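+ *	Process an incoming SNACK Request and dispatch on the SNACK type;
+ *	only legal when ErrorRecoveryLevel is greater than 0.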
+ *
+ */
+static inline int iscsi_handle_snack(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	u32 debug_type, unpacked_lun;
+	u64 lun;
+	struct iscsi_snack *hdr;
+
+	hdr			= (struct iscsi_snack *) buf;
+	hdr->flags		&= ~ISCSI_FLAG_CMD_FINAL;
+	lun			= get_unaligned_le64(&hdr->lun[0]);
+	unpacked_lun		= iscsi_unpack_lun((unsigned char *)&lun);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+	hdr->begrun		= be32_to_cpu(hdr->begrun);
+	hdr->runlength		= be32_to_cpu(hdr->runlength);
+
+	debug_type = (hdr->flags & 0x02) ? TRACE_ISCSI : TRACE_ERL1;
+	TRACE(debug_type, "Got ISCSI_INIT_SNACK, ITT: 0x%08x, ExpStatSN:"
+		" 0x%08x, Type: 0x%02x, BegRun: 0x%08x, RunLength: 0x%08x,"
+		" CID: %hu\n", hdr->itt, hdr->exp_statsn, hdr->flags,
+			hdr->begrun, hdr->runlength, conn->cid);
+
+	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+		printk(KERN_ERR "Initiator sent SNACK request while in"
+			" ErrorRecoveryLevel=0.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+	/*
+	 * SNACK_DATA and SNACK_R2T are both 0,  so check which function to
+	 * call from inside iscsi_send_recovery_datain_or_r2t().
+	 */
+	switch (hdr->flags & ISCSI_FLAG_SNACK_TYPE_MASK) {
+	case 0:
+		return iscsi_handle_recovery_datain_or_r2t(conn, buf,
+			hdr->itt, hdr->ttt, hdr->begrun, hdr->runlength);
+	case ISCSI_FLAG_SNACK_TYPE_STATUS:
+		return iscsi_handle_status_snack(conn, hdr->itt, hdr->ttt,
+			hdr->begrun, hdr->runlength);
+	case ISCSI_FLAG_SNACK_TYPE_DATA_ACK:
+		return iscsi_handle_data_ack(conn, hdr->ttt, hdr->begrun,
+			hdr->runlength);
+	case ISCSI_FLAG_SNACK_TYPE_RDATA:
+		/* FIXME: Support R-Data SNACK */
+		printk(KERN_ERR "R-Data SNACK Not Supported.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	default:
+		printk(KERN_ERR "Unknown SNACK type 0x%02x, protocol"
+			" error.\n", hdr->flags & 0x0f);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	return 0;
+}
+
+/*	iscsi_handle_immediate_data():
+ *
+ *
+ */
+static int iscsi_handle_immediate_data(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	__u32 length)
+{
+	int iov_ret, rx_got = 0, rx_size = 0;
+	__u32 checksum, iov_count = 0, padding = 0, pad_bytes = 0;
+	struct iscsi_conn *conn = cmd->conn;
+	struct se_map_sg map_sg;
+	struct se_unmap_sg unmap_sg;
+	struct iovec *iov;
+
+	memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+	memset((void *)&unmap_sg, 0, sizeof(struct se_unmap_sg));
+	map_sg.fabric_cmd = (void *)cmd;
+	map_sg.se_cmd = SE_CMD(cmd);
+	map_sg.sg_kmap_active = 1;
+	map_sg.iov = &cmd->iov_data[0];
+	map_sg.data_length = length;
+	map_sg.data_offset = cmd->write_data_done;
+	unmap_sg.fabric_cmd = (void *)cmd;
+	unmap_sg.se_cmd = SE_CMD(cmd);
+
+	iov_ret = iscsi_set_iovec_ptrs(&map_sg, &unmap_sg);
+	if (iov_ret < 0)
+		return IMMEDIDATE_DATA_CANNOT_RECOVER;
+
+	rx_size = length;
+	iov_count = iov_ret;
+	iov = &cmd->iov_data[0];
+
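+	/*
+	 * Pad the incoming Data Segment out to the next 4-byte boundary as
+	 * required by RFC-3720; ((-length) & 3) is equivalent to
+	 * (4 - (length % 4)) % 4.
+	 */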
+	padding = ((-length) & 3);
+	if (padding != 0) {
+		iov[iov_count].iov_base	= &pad_bytes;
+		iov[iov_count++].iov_len = padding;
+		rx_size += padding;
+	}
+
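+	/*
+	 * When DataDigest=CRC32C has been negotiated, a 4-byte digest
+	 * trails the (padded) Data Segment, so receive it into an extra
+	 * iovec and verify it below.
+	 */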
+	if (CONN_OPS(conn)->DataDigest) {
+		iov[iov_count].iov_base 	= &checksum;
+		iov[iov_count++].iov_len 	= CRC_LEN;
+		rx_size += CRC_LEN;
+	}
+
+	iscsi_map_SG_segments(&unmap_sg);
+
+	rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);
+
+	iscsi_unmap_SG_segments(&unmap_sg);
+
+	if (rx_got != rx_size) {
+		iscsi_rx_thread_wait_for_TCP(conn);
+		return IMMEDIDATE_DATA_CANNOT_RECOVER;
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		__u32 counter = length, data_crc;
+		struct iovec *iov_ptr = &cmd->iov_data[0];
+		struct scatterlist sg;
+		/*
+		 * Because the IP stack may have modified the passed iovecs,
+		 * call iscsi_set_iovec_ptrs() again in order to have an
+		 * iMD/PSCSI agnostic way of doing datadigest computations.
+		 */
+		memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+		map_sg.fabric_cmd = (void *)cmd;
+		map_sg.se_cmd = SE_CMD(cmd);
+		map_sg.iov = iov_ptr;
+		map_sg.data_length = length;
+		map_sg.data_offset = cmd->write_data_done;
+
+		if (iscsi_set_iovec_ptrs(&map_sg, &unmap_sg) < 0)
+			return IMMEDIDATE_DATA_CANNOT_RECOVER;
+
+		crypto_hash_init(&conn->conn_rx_hash);
+
+		while (counter > 0) {
+			sg_init_one(&sg, iov_ptr->iov_base,
+					iov_ptr->iov_len);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					iov_ptr->iov_len);
+
+			TRACE(TRACE_DIGEST, "Updated CRC32C DataDigest with"
+				" %zu bytes\n", iov_ptr->iov_len);
+			counter -= iov_ptr->iov_len;
+			iov_ptr++;
+		}
+
+		if (padding) {
+			sg_init_one(&sg, (__u8 *)&pad_bytes, padding);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					padding);
+			TRACE(TRACE_DIGEST, "Updated CRC32C DataDigest with %d"
+				" bytes of padding\n", padding);
+		}
+		crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+		if (checksum != data_crc) {
+			printk(KERN_ERR "ImmediateData CRC32C DataDigest 0x%08x"
+				" does not match computed 0x%08x\n", checksum,
+				data_crc);
+
+			if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+				printk(KERN_ERR "Unable to recover from"
+					" Immediate Data digest failure while"
+					" in ERL=0.\n");
+				iscsi_add_reject_from_cmd(
+						ISCSI_REASON_DATA_DIGEST_ERROR,
+						1, 0, buf, cmd);
+				return IMMEDIDATE_DATA_CANNOT_RECOVER;
+			} else {
+				iscsi_add_reject_from_cmd(
+						ISCSI_REASON_DATA_DIGEST_ERROR,
+						0, 0, buf, cmd);
+				return IMMEDIDATE_DATA_ERL1_CRC_FAILURE;
+			}
+		} else {
+			TRACE(TRACE_DIGEST, "Got CRC32C DataDigest 0x%08x for"
+				" %u bytes of Immediate Data\n", checksum,
+				length);
+		}
+	}
+
+	cmd->write_data_done += length;
+
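+	/*
+	 * If the entire WRITE payload arrived as Immediate Data, mark the
+	 * final Data-Out as already received.
+	 */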
+	if (cmd->write_data_done == cmd->data_length) {
+		spin_lock_bh(&cmd->istate_lock);
+		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
+		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+		spin_unlock_bh(&cmd->istate_lock);
+	}
+
+	return IMMEDIDATE_DATA_NORMAL_OPERATION;
+}
+
+/*	iscsi_send_async_msg():
+ *
+ *	FIXME: Support SCSI AEN.
+ */
+int iscsi_send_async_msg(
+	struct iscsi_conn *conn,
+	u16 cid,
+	u8 async_event,
+	u8 async_vcode)
+{
+	u8 iscsi_hdr[ISCSI_HDR_LEN+CRC_LEN];
+	u32 tx_send = ISCSI_HDR_LEN, tx_sent = 0;
+	struct timer_list async_msg_timer;
+	struct iscsi_async *hdr;
+	struct iovec iov;
+	struct scatterlist sg;
+
+	memset((void *)&iov, 0, sizeof(struct iovec));
+	memset((void *)&iscsi_hdr, 0, ISCSI_HDR_LEN+CRC_LEN);
+
+	hdr		= (struct iscsi_async *)&iscsi_hdr;
+	hdr->opcode	= ISCSI_OP_ASYNC_EVENT;
+	hdr->flags	|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, 0);
+	put_unaligned_le64(0, &hdr->lun[0]);
+	put_unaligned_be64(0xffffffffffffffff, &hdr->rsvd4[0]);
+	hdr->statsn	= cpu_to_be32(conn->stat_sn++);
+	spin_lock(&SESS(conn)->cmdsn_lock);
+	hdr->exp_cmdsn	= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn	= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	spin_unlock(&SESS(conn)->cmdsn_lock);
+	hdr->async_event = async_event;
+	hdr->async_vcode = async_vcode;
+
+	switch (async_event) {
+	case ISCSI_ASYNC_MSG_SCSI_EVENT:
+		printk(KERN_ERR "ISCSI_ASYNC_MSG_SCSI_EVENT: not supported yet.\n");
+		return -1;
+	case ISCSI_ASYNC_MSG_REQUEST_LOGOUT:
+		TRACE(TRACE_STATE, "Moving to"
+				" TARG_CONN_STATE_LOGOUT_REQUESTED.\n");
+		conn->conn_state = TARG_CONN_STATE_LOGOUT_REQUESTED;
+		hdr->param1 = 0;
+		hdr->param2 = 0;
+		hdr->param3 = cpu_to_be16(SECONDS_FOR_ASYNC_LOGOUT);
+		break;
+	case ISCSI_ASYNC_MSG_DROPPING_CONNECTION:
+		hdr->param1 = cpu_to_be16(cid);
+		hdr->param2 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Wait);
+		hdr->param3 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Retain);
+		break;
+	case ISCSI_ASYNC_MSG_DROPPING_ALL_CONNECTIONS:
+		hdr->param1 = 0;
+		hdr->param2 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Wait);
+		hdr->param3 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Retain);
+		break;
+	case ISCSI_ASYNC_MSG_PARAM_NEGOTIATION:
+		hdr->param1 = 0;
+		hdr->param2 = 0;
+		hdr->param3 = cpu_to_be16(SECONDS_FOR_ASYNC_TEXT);
+		break;
+	case ISCSI_ASYNC_MSG_VENDOR_SPECIFIC:
+		printk(KERN_ERR "ISCSI_ASYNC_MSG_VENDOR_SPECIFIC not"
+			" supported yet.\n");
+		return -1;
+	default:
+		printk(KERN_ERR "Unknown AsyncEvent 0x%02x, protocol"
+			" error.\n", async_event);
+		return -1;
+	}
+
+	iov.iov_base	= &iscsi_hdr;
+	iov.iov_len	= ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&iscsi_hdr[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)&iscsi_hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov.iov_len += CRC_LEN;
+		tx_send += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for Async"
+			" Msg PDU 0x%08x\n", *header_digest);
+	}
+
+	TRACE(TRACE_ISCSI, "Built Async Message StatSN: 0x%08x, AsyncEvent:"
+		" 0x%02x, P1: 0x%04x, P2: 0x%04x, P3: 0x%04x\n",
+		ntohl(hdr->statsn), hdr->async_event, ntohs(hdr->param1),
+		ntohs(hdr->param2), ntohs(hdr->param3));
+
+	tx_sent = tx_data(conn, &iov, 1, tx_send);
+	if (tx_sent != tx_send) {
+		printk(KERN_ERR "tx_data returned %d expecting %d\n",
+				tx_sent, tx_send);
+		return -1;
+	}
+
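+	/*
+	 * For a Logout Request Async Message, wait up to
+	 * SECONDS_FOR_ASYNC_LOGOUT for the initiator's Logout Request to
+	 * arrive before dropping all connections and freeing the session.
+	 */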
+	if (async_event == ISCSI_ASYNC_MSG_REQUEST_LOGOUT) {
+		init_timer(&async_msg_timer);
+		SETUP_TIMER(async_msg_timer, SECONDS_FOR_ASYNC_LOGOUT,
+				&SESS(conn)->async_msg_sem,
+				iscsi_async_msg_timer_function);
+		add_timer(&async_msg_timer);
+		down(&SESS(conn)->async_msg_sem);
+		del_timer_sync(&async_msg_timer);
+
+		if (conn->conn_state == TARG_CONN_STATE_LOGOUT_REQUESTED) {
+			printk(KERN_ERR "Asynchronous message timer expired"
+				" without receiving a logout request,  dropping"
+				" iSCSI session.\n");
+			iscsi_send_async_msg(conn, 0,
+				ISCSI_ASYNC_MSG_DROPPING_ALL_CONNECTIONS, 0);
+			iscsi_free_session(SESS(conn));
+		}
+	}
+	return 0;
+}
+
+/*	iscsi_build_conn_drop_async_message():
+ *
+ *	Called with sess->conn_lock held.
+ */
+/* #warning iscsi_build_conn_drop_async_message() only sends out on connections
+	with active network interface */
+static void iscsi_build_conn_drop_async_message(struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+	struct iscsi_conn *conn_p;
+	bool found = false;
+
+	/*
+	 * Only send an Asynchronous Message on connections whose network
+	 * interface is still functional.
+	 */
+	list_for_each_entry(conn_p, &SESS(conn)->sess_conn_list, conn_list) {
+		if ((conn_p->conn_state == TARG_CONN_STATE_LOGGED_IN) &&
+		    (iscsi_check_for_active_network_device(conn_p))) {
+			iscsi_inc_conn_usage_count(conn_p);
+			found = true;
+			break;
+		}
+	}
+
+	if (!found)
+		return;
+
+	cmd = iscsi_allocate_cmd(conn_p);
+	if (!(cmd)) {
+		iscsi_dec_conn_usage_count(conn_p);
+		return;
+	}
+
+	cmd->logout_cid = conn->cid;
+	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
+	cmd->i_state = ISTATE_SEND_ASYNCMSG;
+
+	iscsi_attach_cmd_to_queue(conn_p, cmd);
+	iscsi_add_cmd_to_response_queue(cmd, conn_p, cmd->i_state);
+
+	iscsi_dec_conn_usage_count(conn_p);
+}
+
+/*	iscsi_send_conn_drop_async_message():
+ *
+ *
+ */
+static int iscsi_send_conn_drop_async_message(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_async *hdr;
+	struct scatterlist sg;
+
+	cmd->tx_size = ISCSI_HDR_LEN;
+	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
+
+	hdr			= (struct iscsi_async *) cmd->pdu;
+	hdr->opcode		= ISCSI_OP_ASYNC_EVENT;
+	hdr->flags		= ISCSI_FLAG_CMD_FINAL;
+	cmd->init_task_tag	= 0xFFFFFFFF;
+	cmd->targ_xfer_tag	= 0xFFFFFFFF;
+	put_unaligned_be64(0xffffffffffffffff, &hdr->rsvd4[0]);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+	hdr->exp_cmdsn 		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	hdr->async_event 	= ISCSI_ASYNC_MSG_DROPPING_CONNECTION;
+	hdr->param1		= cpu_to_be16(cmd->logout_cid);
+	hdr->param2		= cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Wait);
+	hdr->param3		= cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Retain);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		cmd->tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest to"
+			" Async Message 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= cmd->tx_size;
+	cmd->iov_misc_count		= 1;
+
+	TRACE(TRACE_ERL2, "Sending Connection Dropped Async Message StatSN:"
+		" 0x%08x, for CID: %hu on CID: %hu\n", cmd->stat_sn,
+			cmd->logout_cid, conn->cid);
+	return 0;
+}
+
+int lio_queue_data_in(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	cmd->i_state = ISTATE_SEND_DATAIN;
+	iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+	return 0;
+}
+
+/*	iscsi_send_data_in():
+ *
+ *
+ */
+static inline int iscsi_send_data_in(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	struct se_unmap_sg *unmap_sg,
+	int *eodr)
+{
+	int iov_ret = 0, set_statsn = 0;
+	u8 *pad_bytes;
+	u32 iov_count = 0, tx_size = 0;
+	u64 lun;
+	struct iscsi_datain datain;
+	struct iscsi_datain_req *dr;
+	struct se_map_sg map_sg;
+	struct iscsi_data_rsp *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	memset(&datain, 0, sizeof(struct iscsi_datain));
+	dr = iscsi_get_datain_values(cmd, &datain);
+	if (!(dr)) {
+		printk(KERN_ERR "iscsi_get_datain_values failed for ITT: 0x%08x\n",
+				cmd->init_task_tag);
+		return -1;
+	}
+
+	/*
+	 * Be paranoid and double check the logic for now.
+	 */
+	if ((datain.offset + datain.length) > cmd->data_length) {
+		printk(KERN_ERR "Command ITT: 0x%08x, datain.offset: %u and"
+			" datain.length: %u exceeds cmd->data_length: %u\n",
+			cmd->init_task_tag, datain.offset, datain.length,
+				cmd->data_length);
+		return -1;
+	}
+
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->tx_data_octets += datain.length;
+	if (SESS_NODE_ACL(SESS(conn))) {
+		spin_lock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+		SESS_NODE_ACL(SESS(conn))->read_bytes += datain.length;
+		spin_unlock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+	}
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+	/*
+	 * Special case for successful execution w/ both DATAIN
+	 * and Sense Data.
+	 */
+	if ((datain.flags & ISCSI_FLAG_DATA_STATUS) &&
+	    (SE_CMD(cmd)->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE))
+		datain.flags &= ~ISCSI_FLAG_DATA_STATUS;
+	else {
+		if ((dr->dr_complete == DATAIN_COMPLETE_NORMAL) ||
+		    (dr->dr_complete == DATAIN_COMPLETE_CONNECTION_RECOVERY)) {
+			iscsi_increment_maxcmdsn(cmd, SESS(conn));
+			cmd->stat_sn = conn->stat_sn++;
+			set_statsn = 1;
+		} else if (dr->dr_complete ==
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY)
+			set_statsn = 1;
+	}
+
+	hdr	= (struct iscsi_data_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode 		= ISCSI_OP_SCSI_DATA_IN;
+	hdr->flags		= datain.flags;
+	if (hdr->flags & ISCSI_FLAG_DATA_STATUS) {
+		if (SE_CMD(cmd)->se_cmd_flags & SCF_OVERFLOW_BIT) {
+			hdr->flags |= ISCSI_FLAG_DATA_OVERFLOW;
+			hdr->residual_count = cpu_to_be32(cmd->residual_count);
+		} else if (SE_CMD(cmd)->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+			hdr->flags |= ISCSI_FLAG_DATA_UNDERFLOW;
+			hdr->residual_count = cpu_to_be32(cmd->residual_count);
+		}
+	}
+	hton24(hdr->dlength, datain.length);
+	lun			= (hdr->flags & ISCSI_FLAG_DATA_ACK) ?
+				   iscsi_pack_lun(SE_CMD(cmd)->orig_fe_lun) :
+				   0xFFFFFFFFFFFFFFFFULL;
+	put_unaligned_le64(lun, &hdr->lun[0]);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= (hdr->flags & ISCSI_FLAG_DATA_ACK) ?
+				   cpu_to_be32(cmd->targ_xfer_tag) :
+				   0xFFFFFFFF;
+	hdr->statsn		= (set_statsn) ? cpu_to_be32(cmd->stat_sn) :
+						0xFFFFFFFF;
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	hdr->datasn		= cpu_to_be32(datain.data_sn);
+	hdr->offset		= cpu_to_be32(datain.offset);
+
+	iov = &cmd->iov_data[0];
+	iov[iov_count].iov_base	= cmd->pdu;
+	iov[iov_count++].iov_len	= ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest"
+			" for DataIN PDU 0x%08x\n", *header_digest);
+	}
+
+	memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+	map_sg.fabric_cmd = (void *)cmd;
+	map_sg.se_cmd = SE_CMD(cmd);
+	map_sg.sg_kmap_active = 1;
+	map_sg.iov = &cmd->iov_data[1];
+	map_sg.data_length = datain.length;
+	map_sg.data_offset = datain.offset;
+
+	iov_ret = iscsi_set_iovec_ptrs(&map_sg, unmap_sg);
+	if (iov_ret < 0)
+		return -1;
+
+	iov_count += iov_ret;
+	tx_size += datain.length;
+
+	unmap_sg->padding = ((-datain.length) & 3);
+	if (unmap_sg->padding != 0) {
+		pad_bytes = kzalloc(unmap_sg->padding * sizeof(__u8),
+					GFP_KERNEL);
+		if (!(pad_bytes)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+					" pad_bytes.\n");
+			return -1;
+		}
+		cmd->buf_ptr = pad_bytes;
+		iov[iov_count].iov_base 	= pad_bytes;
+		iov[iov_count++].iov_len 	= unmap_sg->padding;
+		tx_size += unmap_sg->padding;
+
+		TRACE(TRACE_ISCSI, "Attaching %u padding bytes\n",
+				unmap_sg->padding);
+	}
+	if (CONN_OPS(conn)->DataDigest) {
+		__u32 counter = (datain.length + unmap_sg->padding);
+		struct iovec *iov_ptr = &cmd->iov_data[1];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		while (counter > 0) {
+			sg_init_one(&sg, iov_ptr->iov_base,
+					iov_ptr->iov_len);
+			crypto_hash_update(&conn->conn_tx_hash, &sg,
+					iov_ptr->iov_len);
+
+			TRACE(TRACE_DIGEST, "Updated CRC32C DataDigest with"
+				" %zu bytes\n", iov_ptr->iov_len);
+			counter -= iov_ptr->iov_len;
+			iov_ptr++;
+		}
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)&cmd->data_crc);
+
+		iov[iov_count].iov_base	= &cmd->data_crc;
+		iov[iov_count++].iov_len = CRC_LEN;
+		tx_size += CRC_LEN;
+
+		TRACE(TRACE_DIGEST, "Attached CRC32C DataDigest %d bytes, crc"
+			" 0x%08x\n", datain.length+unmap_sg->padding,
+			cmd->data_crc);
+	}
+
+	cmd->iov_data_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Built DataIN ITT: 0x%08x, StatSN: 0x%08x,"
+		" DataSN: 0x%08x, Offset: %u, Length: %u, CID: %hu\n",
+		cmd->init_task_tag, ntohl(hdr->statsn), ntohl(hdr->datasn),
+		ntohl(hdr->offset), datain.length, conn->cid);
+
+	if (dr->dr_complete) {
+		*eodr = (SE_CMD(cmd)->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ?
+				2 : 1;
+		iscsi_free_datain_req(cmd, dr);
+	}
+
+	return 0;
+}
+
+/*	iscsi_send_logout_response():
+ *
+ *
+ */
+static inline int iscsi_send_logout_response(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int niov = 0, tx_size;
+	struct iscsi_conn *logout_conn = NULL;
+	struct iscsi_conn_recovery *cr = NULL;
+	struct iscsi_session *sess = SESS(conn);
+	struct iovec *iov;
+	struct iscsi_logout_rsp *hdr;
+	struct scatterlist sg;
+	/*
+	 * The actual shutting down of Sessions and/or Connections
+	 * for CLOSESESSION and CLOSECONNECTION Logout Requests
+	 * is done in scsi_logout_post_handler().
+	 */
+	switch (cmd->logout_reason) {
+	case ISCSI_LOGOUT_REASON_CLOSE_SESSION:
+		TRACE(TRACE_ISCSI, "iSCSI session logout successful, setting"
+			" logout response to ISCSI_LOGOUT_SUCCESS.\n");
+		cmd->logout_response = ISCSI_LOGOUT_SUCCESS;
+		break;
+	case ISCSI_LOGOUT_REASON_CLOSE_CONNECTION:
+		if (cmd->logout_response == ISCSI_LOGOUT_CID_NOT_FOUND)
+			break;
+		/*
+		 * For CLOSECONNECTION logout requests carrying
+		 * a matching logout CID -> local CID, the reference
+		 * for the local CID will have been incremented in
+		 * iscsi_logout_closeconnection().
+		 *
+		 * For CLOSECONNECTION logout requests carrying
+		 * a different CID than the connection it arrived
+		 * on, the connection responding to cmd->logout_cid
+		 * is stopped in iscsi_logout_post_handler_diffcid().
+		 */
+
+		TRACE(TRACE_ISCSI, "iSCSI CID: %hu logout on CID: %hu"
+			" successful.\n", cmd->logout_cid, conn->cid);
+		cmd->logout_response = ISCSI_LOGOUT_SUCCESS;
+		break;
+	case ISCSI_LOGOUT_REASON_RECOVERY:
+		if ((cmd->logout_response == ISCSI_LOGOUT_RECOVERY_UNSUPPORTED) ||
+		    (cmd->logout_response == ISCSI_LOGOUT_CLEANUP_FAILED))
+			break;
+		/*
+		 * If the connection is still active from our point of view
+		 * force connection recovery to occur.
+		 */
+		logout_conn = iscsi_get_conn_from_cid_rcfr(sess,
+				cmd->logout_cid);
+		if ((logout_conn)) {
+			iscsi_connection_reinstatement_rcfr(logout_conn);
+			iscsi_dec_conn_usage_count(logout_conn);
+		}
+
+		cr = iscsi_get_inactive_connection_recovery_entry(
+				SESS(conn), cmd->logout_cid);
+		if (!(cr)) {
+			printk(KERN_ERR "Unable to locate CID: %hu for"
+			" REMOVECONNFORRECOVERY Logout Request.\n",
+				cmd->logout_cid);
+			cmd->logout_response = ISCSI_LOGOUT_CID_NOT_FOUND;
+			break;
+		}
+
+		iscsi_discard_cr_cmds_by_expstatsn(cr, cmd->exp_stat_sn);
+
+		TRACE(TRACE_ERL2, "iSCSI REMOVECONNFORRECOVERY logout"
+			" for recovery for CID: %hu on CID: %hu successful.\n",
+				cmd->logout_cid, conn->cid);
+		cmd->logout_response = ISCSI_LOGOUT_SUCCESS;
+		break;
+	default:
+		printk(KERN_ERR "Unknown cmd->logout_reason: 0x%02x\n",
+				cmd->logout_reason);
+		return -1;
+	}
+
+	tx_size = ISCSI_HDR_LEN;
+	hdr			= (struct iscsi_logout_rsp *)cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_LOGOUT_RSP;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hdr->response		= cmd->logout_response;
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+	iov[niov].iov_base	= cmd->pdu;
+	iov[niov++].iov_len	= ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest to"
+			" Logout Response 0x%08x\n", *header_digest);
+	}
+	cmd->iov_misc_count = niov;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Sending Logout Response ITT: 0x%08x StatSN:"
+		" 0x%08x Response: 0x%02x CID: %hu on CID: %hu\n",
+		cmd->init_task_tag, cmd->stat_sn, hdr->response,
+		cmd->logout_cid, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_send_unsolicited_nopin():
+ *
+ *	Unsolicited NOPIN, either requesting a response or not.
+ */
+static inline int iscsi_send_unsolicited_nopin(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	int want_response)
+{
+	int tx_size = ISCSI_HDR_LEN;
+	struct iscsi_nopin *hdr;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_nopin *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_NOOP_IN;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= cpu_to_be32(cmd->targ_xfer_tag);
+	cmd->stat_sn		= conn->stat_sn;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest to"
+			" NopIN 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= tx_size;
+	cmd->iov_misc_count 	= 1;
+	cmd->tx_size		= tx_size;
+
+	TRACE(TRACE_ISCSI, "Sending Unsolicited NOPIN TTT: 0x%08x StatSN:"
+		" 0x%08x CID: %hu\n", hdr->ttt, cmd->stat_sn, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_send_nopin_response():
+ *
+ *
+ */
+static inline int iscsi_send_nopin_response(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int niov = 0, tx_size;
+	__u32 padding = 0;
+	struct iovec *iov;
+	struct iscsi_nopin *hdr;
+	struct scatterlist sg;
+
+	tx_size = ISCSI_HDR_LEN;
+	hdr			= (struct iscsi_nopin *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_NOOP_IN;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, cmd->buf_ptr_size);
+	put_unaligned_le64(0xFFFFFFFFFFFFFFFFULL, &hdr->lun[0]);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= cpu_to_be32(cmd->targ_xfer_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+	iov[niov].iov_base	= cmd->pdu;
+	iov[niov++].iov_len	= ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest"
+			" to NopIn 0x%08x\n", *header_digest);
+	}
+
+	/*
+	 * NOPOUT Ping Data is attached to struct iscsi_cmd->buf_ptr.
+	 * NOPOUT DataSegmentLength is at struct iscsi_cmd->buf_ptr_size.
+	 */
+	if (cmd->buf_ptr_size) {
+		iov[niov].iov_base	= cmd->buf_ptr;
+		iov[niov++].iov_len	= cmd->buf_ptr_size;
+		tx_size += cmd->buf_ptr_size;
+
+		TRACE(TRACE_ISCSI, "Echoing back %u bytes of ping"
+			" data.\n", cmd->buf_ptr_size);
+
+		padding = ((-cmd->buf_ptr_size) & 3);
+		if (padding != 0) {
+			iov[niov].iov_base = &cmd->pad_bytes;
+			iov[niov++].iov_len = padding;
+			tx_size += padding;
+			TRACE(TRACE_ISCSI, "Attaching %u additional"
+				" padding bytes.\n", padding);
+		}
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_tx_hash);
+
+			sg_init_one(&sg, (u8 *)cmd->buf_ptr,
+					cmd->buf_ptr_size);
+			crypto_hash_update(&conn->conn_tx_hash, &sg,
+					cmd->buf_ptr_size);
+
+			if (padding) {
+				sg_init_one(&sg, (u8 *)&cmd->pad_bytes, padding);
+				crypto_hash_update(&conn->conn_tx_hash, &sg,
+						padding);	
+			}
+
+			crypto_hash_final(&conn->conn_tx_hash,
+					(u8 *)&cmd->data_crc);
+
+			iov[niov].iov_base = &cmd->data_crc;
+			iov[niov++].iov_len = CRC_LEN;
+			tx_size += CRC_LEN;
+			TRACE(TRACE_DIGEST, "Attached DataDigest for %u"
+				" bytes of ping data, CRC 0x%08x\n",
+				cmd->buf_ptr_size, cmd->data_crc);
+		}
+	}
+
+	cmd->iov_misc_count = niov;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Sending NOPIN Response ITT: 0x%08x, TTT:"
+		" 0x%08x, StatSN: 0x%08x, Length %u\n", cmd->init_task_tag,
+		cmd->targ_xfer_tag, cmd->stat_sn, cmd->buf_ptr_size);
+
+	return 0;
+}
+
+/*	iscsi_send_r2t():
+ *
+ *
+ */
+int iscsi_send_r2t(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int tx_size = 0;
+	u32 trace_type;
+	u64 lun;
+	struct iscsi_r2t *r2t;
+	struct iscsi_r2t_rsp *hdr;
+	struct scatterlist sg;
+
+	r2t = iscsi_get_r2t_from_list(cmd);
+	if (!(r2t))
+		return -1;
+
+	hdr			= (struct iscsi_r2t_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_R2T;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	lun			= iscsi_pack_lun(SE_CMD(cmd)->orig_fe_lun);
+	put_unaligned_le64(lun, &hdr->lun[0]);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	spin_lock_bh(&SESS(conn)->ttt_lock);
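+	/*
+	 * 0xFFFFFFFF is the reserved Target Transfer Tag, so skip over it
+	 * if the per session TTT counter wraps.
+	 */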
+	r2t->targ_xfer_tag	= SESS(conn)->targ_xfer_tag++;
+	if (r2t->targ_xfer_tag == 0xFFFFFFFF)
+		r2t->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+	spin_unlock_bh(&SESS(conn)->ttt_lock);
+	hdr->ttt		= cpu_to_be32(r2t->targ_xfer_tag);
+	hdr->statsn		= cpu_to_be32(conn->stat_sn);
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	hdr->r2tsn		= cpu_to_be32(r2t->r2t_sn);
+	hdr->data_offset	= cpu_to_be32(r2t->offset);
+	hdr->data_length	= cpu_to_be32(r2t->xfer_len);
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		cmd->iov_misc[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for R2T"
+			" PDU 0x%08x\n", *header_digest);
+	}
+
+	trace_type = (!r2t->recovery_r2t) ? TRACE_ISCSI : TRACE_ERL1;
+	TRACE(trace_type, "Built %sR2T, ITT: 0x%08x, TTT: 0x%08x, StatSN:"
+		" 0x%08x, R2TSN: 0x%08x, Offset: %u, DDTL: %u, CID: %hu\n",
+		(!r2t->recovery_r2t) ? "" : "Recovery ", cmd->init_task_tag,
+		r2t->targ_xfer_tag, ntohl(hdr->statsn), r2t->r2t_sn,
+			r2t->offset, r2t->xfer_len, conn->cid);
+
+	cmd->iov_misc_count = 1;
+	cmd->tx_size = tx_size;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	r2t->sent_r2t = 1;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+/*	iscsi_build_r2ts_for_cmd():
+ *
+ *	type 0: Normal Operation.
+ *	type 1: Called from Storage Transport.
+ *	type 2: Called from iscsi_task_reassign_complete_write() for
+ *	        connection recovery.
+ */
+int iscsi_build_r2ts_for_cmd(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	int type)
+{
+	int first_r2t = 1;
+	__u32 offset = 0, xfer_len = 0;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	if (cmd->cmd_flags & ICF_SENT_LAST_R2T) {
+		spin_unlock_bh(&cmd->r2t_lock);
+		return 0;
+	}
+
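+	/*
+	 * With DataSequenceInOrder=Yes the next R2T offset must never fall
+	 * below the amount of WRITE data already received, except when
+	 * rebuilding R2Ts for connection recovery.
+	 */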
+	if (SESS_OPS_C(conn)->DataSequenceInOrder && (type != 2))
+		if (cmd->r2t_offset < cmd->write_data_done)
+			cmd->r2t_offset = cmd->write_data_done;
+
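+	/*
+	 * Queue R2Ts until MaxOutstandingR2T is reached or the final R2T for
+	 * this command has been built.  With DataSequenceInOrder=Yes each R2T
+	 * solicits up to MaxBurstLength bytes (less any bytes already counted
+	 * against the current burst in the recovery case), clamped to the
+	 * remaining transfer length.
+	 */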
+	while (cmd->outstanding_r2ts < SESS_OPS_C(conn)->MaxOutstandingR2T) {
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			offset = cmd->r2t_offset;
+
+			if (first_r2t && (type == 2)) {
+				xfer_len = ((offset +
+					     (SESS_OPS_C(conn)->MaxBurstLength -
+					     cmd->next_burst_len) >
+					     cmd->data_length) ?
+					    (cmd->data_length - offset) :
+					    (SESS_OPS_C(conn)->MaxBurstLength -
+					     cmd->next_burst_len));
+			} else {
+				xfer_len = ((offset +
+					     SESS_OPS_C(conn)->MaxBurstLength) >
+					     cmd->data_length) ?
+					     (cmd->data_length - offset) :
+					     SESS_OPS_C(conn)->MaxBurstLength;
+			}
+			cmd->r2t_offset += xfer_len;
+
+			if (cmd->r2t_offset == cmd->data_length)
+				cmd->cmd_flags |= ICF_SENT_LAST_R2T;
+		} else {
+			struct iscsi_seq *seq;
+
+			seq = iscsi_get_seq_holder_for_r2t(cmd);
+			if (!(seq)) {
+				spin_unlock_bh(&cmd->r2t_lock);
+				return -1;
+			}
+
+			offset = seq->offset;
+			xfer_len = seq->xfer_len;
+
+			if (cmd->seq_send_order == cmd->seq_count)
+				cmd->cmd_flags |= ICF_SENT_LAST_R2T;
+		}
+		cmd->outstanding_r2ts++;
+		first_r2t = 0;
+
+		if (iscsi_add_r2t_to_list(cmd, offset, xfer_len, 0, 0) < 0) {
+			spin_unlock_bh(&cmd->r2t_lock);
+			return -1;
+		}
+
+		if (cmd->cmd_flags & ICF_SENT_LAST_R2T)
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+int lio_write_pending(
+	struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
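+	/*
+	 * For Immediate or Unsolicited Data, wake up the thread waiting on
+	 * unsolicited_data_sem to receive the WRITE payload; otherwise
+	 * solicit the payload by building R2Ts.
+	 */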
+	if (cmd->immediate_data || cmd->unsolicited_data)
+		up(&cmd->unsolicited_data_sem);
+	else {
+		if (iscsi_build_r2ts_for_cmd(cmd, CONN(cmd), 1) < 0)
+			return PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES;
+	}
+
+	return 0;
+}
+
+int lio_write_pending_status(
+	struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+	int ret;
+
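+	/*
+	 * Report the WRITE as still pending until the final Data-Out
+	 * (ICF_GOT_LAST_DATAOUT) has been received.
+	 */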
+	spin_lock_bh(&cmd->istate_lock);
+	ret = !(cmd->cmd_flags & ICF_GOT_LAST_DATAOUT);
+	spin_unlock_bh(&cmd->istate_lock);
+
+	return ret;
+}
+
+int lio_queue_status(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	cmd->i_state = ISTATE_SEND_STATUS;
+	iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+
+	return 0;
+}
+
+u16 lio_set_fabric_sense_len(struct se_cmd *se_cmd, u32 sense_length)
+{
+	unsigned char *buffer = se_cmd->sense_buffer;
+	/*
+	 * From RFC-3720 10.4.7.  Data Segment - Sense and Response Data Segment
+	 * 16-bit SenseLength.
+	 */
+	buffer[0] = ((sense_length >> 8) & 0xff);
+	buffer[1] = (sense_length & 0xff);
+	/*
+	 * Return two byte offset into allocated sense_buffer.
+	 */
+	return 2;
+}
+
+u16 lio_get_fabric_sense_len(void)
+{
+	/*
+	 * Return two byte offset into allocated sense_buffer.
+	 */
+	return 2;
+}
+
+/*	iscsi_send_status():
+ *
+ *
+ */
+static inline int iscsi_send_status(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	__u8 iov_count = 0, recovery;
+	__u32 padding = 0, trace_type, tx_size = 0;
+	struct iscsi_scsi_rsp *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
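+	/*
+	 * For ISTATE_SEND_STATUS_RECOVERY the previously assigned StatSN is
+	 * reused, so only allocate a new StatSN on the normal status path.
+	 */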
+	recovery = (cmd->i_state != ISTATE_SEND_STATUS);
+	if (!recovery)
+		cmd->stat_sn = conn->stat_sn++;
+
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->rsp_pdus++;
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+
+	hdr			= (struct iscsi_scsi_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_SCSI_CMD_RSP;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	if (SE_CMD(cmd)->se_cmd_flags & SCF_OVERFLOW_BIT) {
+		hdr->flags |= ISCSI_FLAG_CMD_OVERFLOW;
+		hdr->residual_count = cpu_to_be32(cmd->residual_count);
+	} else if (SE_CMD(cmd)->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+		hdr->flags |= ISCSI_FLAG_CMD_UNDERFLOW;
+		hdr->residual_count = cpu_to_be32(cmd->residual_count);
+	}
+	hdr->response		= cmd->iscsi_response;
+	hdr->cmd_status		= SE_CMD(cmd)->scsi_status;
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+	iov[iov_count].iov_base	= cmd->pdu;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	/*
+	 * Attach SENSE DATA payload to iSCSI Response PDU
+	 */
+	if (SE_CMD(cmd)->sense_buffer &&
+	   ((SE_CMD(cmd)->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ||
+	    (SE_CMD(cmd)->se_cmd_flags & SCF_EMULATED_TASK_SENSE))) {
+		padding		= -(SE_CMD(cmd)->scsi_sense_length) & 3;
+		hton24(hdr->dlength, SE_CMD(cmd)->scsi_sense_length);
+		iov[iov_count].iov_base	= SE_CMD(cmd)->sense_buffer;
+		iov[iov_count++].iov_len =
+				(SE_CMD(cmd)->scsi_sense_length + padding);
+		tx_size += SE_CMD(cmd)->scsi_sense_length;
+
+		if (padding) {
+			memset(SE_CMD(cmd)->sense_buffer +
+				SE_CMD(cmd)->scsi_sense_length, 0, padding);
+			tx_size += padding;
+			TRACE(TRACE_ISCSI, "Adding %u bytes of padding to"
+				" SENSE.\n", padding);
+		}
+
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_tx_hash);
+
+			sg_init_one(&sg, (u8 *)SE_CMD(cmd)->sense_buffer,
+				(SE_CMD(cmd)->scsi_sense_length + padding));
+			crypto_hash_update(&conn->conn_tx_hash, &sg,
+				(SE_CMD(cmd)->scsi_sense_length + padding));
+
+			crypto_hash_final(&conn->conn_tx_hash,
+					(u8 *)&cmd->data_crc);
+
+			iov[iov_count].iov_base    = &cmd->data_crc;
+			iov[iov_count++].iov_len     = CRC_LEN;
+			tx_size += CRC_LEN;
+
+			TRACE(TRACE_DIGEST, "Attaching CRC32 DataDigest for"
+				" SENSE, %u bytes CRC 0x%08x\n",
+				(SE_CMD(cmd)->scsi_sense_length + padding),
+				cmd->data_crc);
+		}
+
+		TRACE(TRACE_ISCSI, "Attaching SENSE DATA: %u bytes to iSCSI"
+				" Response PDU\n",
+				SE_CMD(cmd)->scsi_sense_length);
+	}
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for Response"
+				" PDU 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	trace_type = (!recovery) ? TRACE_ISCSI : TRACE_ERL1;
+	TRACE(trace_type, "Built %sSCSI Response, ITT: 0x%08x, StatSN: 0x%08x,"
+		" Response: 0x%02x, SAM Status: 0x%02x, CID: %hu\n",
+		(!recovery) ? "" : "Recovery ", cmd->init_task_tag,
+		cmd->stat_sn, hdr->response, cmd->se_cmd.scsi_status,
+		conn->cid);
+
+	return 0;
+}
+
+int lio_queue_tm_rsp(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	cmd->i_state = ISTATE_SEND_TASKMGTRSP;
+	iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+
+	return 0;
+}
+
+static inline u8 iscsi_convert_tcm_tmr_rsp(struct se_tmr_req *se_tmr)
+{
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		return ISCSI_TMF_RSP_COMPLETE;
+	case TMR_TASK_DOES_NOT_EXIST:
+		return ISCSI_TMF_RSP_NO_TASK;
+	case TMR_LUN_DOES_NOT_EXIST:
+		return ISCSI_TMF_RSP_NO_LUN;
+	case TMR_TASK_MGMT_FUNCTION_NOT_SUPPORTED:
+		return ISCSI_TMF_RSP_NOT_SUPPORTED;
+	case TMR_FUNCTION_AUTHORIZATION_FAILED:
+		return ISCSI_TMF_RSP_AUTH_FAILED;
+	case TMR_FUNCTION_REJECTED:
+	default:
+		return ISCSI_TMF_RSP_REJECTED;
+	}
+}
+
+/*	iscsi_send_task_mgt_rsp():
+ *
+ *
+ */
+static int iscsi_send_task_mgt_rsp(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = SE_CMD(cmd)->se_tmr_req;
+	struct iscsi_tm_rsp *hdr;
+	struct scatterlist sg;
+	u32 tx_size = 0;
+
+	hdr			= (struct iscsi_tm_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_SCSI_TMFUNC_RSP;
+	hdr->response		= iscsi_convert_tcm_tmr_rsp(se_tmr);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		cmd->iov_misc[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for Task"
+			" Mgmt Response PDU 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc_count = 1;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ERL2, "Built Task Management Response ITT: 0x%08x,"
+		" StatSN: 0x%08x, Response: 0x%02x, CID: %hu\n",
+		cmd->init_task_tag, cmd->stat_sn, hdr->response, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_send_text_rsp():
+ *
+ *
+ *	FIXME: Add support for F_BIT and C_BIT when the length is longer than
+ *	MaxRecvDataSegmentLength.
+ */
+static int iscsi_send_text_rsp(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	u8 iov_count = 0;
+	u32 padding = 0, text_length = 0, tx_size = 0;
+	struct iscsi_text_rsp *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	text_length = iscsi_build_sendtargets_response(cmd);
+
+	padding = ((-text_length) & 3);
+	if (padding != 0) {
+		memset((void *) (cmd->buf_ptr + text_length), 0, padding);
+		TRACE(TRACE_ISCSI, "Attaching %u additional bytes for"
+			" padding.\n", padding);
+	}
+
+	hdr			= (struct iscsi_text_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_TEXT_RSP;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, text_length);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= cpu_to_be32(cmd->targ_xfer_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+
+	iov[iov_count].iov_base = cmd->pdu;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+	iov[iov_count].iov_base	= cmd->buf_ptr;
+	iov[iov_count++].iov_len = text_length + padding;
+
+	tx_size += (ISCSI_HDR_LEN + text_length + padding);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for"
+			" Text Response PDU 0x%08x\n", *header_digest);
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)cmd->buf_ptr, (text_length + padding));
+		crypto_hash_update(&conn->conn_tx_hash, &sg,
+				(text_length + padding));
+
+		crypto_hash_final(&conn->conn_tx_hash,
+				(u8 *)&cmd->data_crc);
+
+		iov[iov_count].iov_base	= &cmd->data_crc;
+		iov[iov_count++].iov_len = CRC_LEN;
+		tx_size	+= CRC_LEN;
+
+		TRACE(TRACE_DIGEST, "Attaching DataDigest for %u bytes of text"
+			" data, CRC 0x%08x\n", (text_length + padding),
+			cmd->data_crc);
+	}
+
+	cmd->iov_misc_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Built Text Response: ITT: 0x%08x, StatSN: 0x%08x,"
+		" Length: %u, CID: %hu\n", cmd->init_task_tag, cmd->stat_sn,
+			text_length, conn->cid);
+	return 0;
+}
+
+/*	iscsi_send_reject():
+ *
+ *
+ */
+static int iscsi_send_reject(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	__u32 iov_count = 0, tx_size = 0;
+	struct iscsi_reject *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_reject *) cmd->pdu;
+	hdr->opcode		= ISCSI_OP_REJECT;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, ISCSI_HDR_LEN);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+	hdr->exp_cmdsn	= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn	= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+
+	iov[iov_count].iov_base = cmd->pdu;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+	iov[iov_count].iov_base = cmd->buf_ptr;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+
+	tx_size = (ISCSI_HDR_LEN + ISCSI_HDR_LEN);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for"
+			" REJECT PDU 0x%08x\n", *header_digest);
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)cmd->buf_ptr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg,
+				ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash,
+				(u8 *)&cmd->data_crc);
+
+		iov[iov_count].iov_base = &cmd->data_crc;
+		iov[iov_count++].iov_len  = CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 DataDigest for REJECT"
+				" PDU 0x%08x\n", cmd->data_crc);
+	}
+
+	cmd->iov_misc_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Built Reject PDU StatSN: 0x%08x, Reason: 0x%02x,"
+		" CID: %hu\n", ntohl(hdr->statsn), hdr->reason, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_tx_thread_TCP_timeout():
+ *
+ *
+ */
+static void iscsi_tx_thread_TCP_timeout(unsigned long data)
+{
+	up((struct semaphore *)data);
+}
+
+/*	iscsi_tx_thread_wait_for_TCP():
+ *
+ *
+ */
+static void iscsi_tx_thread_wait_for_TCP(struct iscsi_conn *conn)
+{
+	struct timer_list tx_TCP_timer;
+	int ret;
+
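+	/*
+	 * If the socket has been half-closed, wait up to
+	 * ISCSI_TX_THREAD_TCP_TIMEOUT on tx_half_close_sem, using a timer
+	 * to bound the wait.
+	 */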
+	if ((conn->sock->sk->sk_shutdown & SEND_SHUTDOWN) ||
+	    (conn->sock->sk->sk_shutdown & RCV_SHUTDOWN)) {
+		init_timer(&tx_TCP_timer);
+		SETUP_TIMER(tx_TCP_timer, ISCSI_TX_THREAD_TCP_TIMEOUT,
+			&conn->tx_half_close_sem, iscsi_tx_thread_TCP_timeout);
+		add_timer(&tx_TCP_timer);
+
+		ret = down_interruptible(&conn->tx_half_close_sem);
+
+		del_timer_sync(&tx_TCP_timer);
+	}
+}
+
+#ifdef CONFIG_SMP
+
+void iscsi_thread_get_cpumask(struct iscsi_conn *conn)
+{
+	struct se_thread_set *ts = conn->thread_set;
+	int ord, cpu;
+	/*
+	 * thread_id is assigned from iscsi_global->ts_bitmap from
+	 * within iscsi_thread_set.c:iscsi_allocate_thread_sets()
+	 *
+	 * Here we use thread_id to determine which CPU that this
+	 * iSCSI connection's se_thread_set will be scheduled to
+	 * execute upon.
+	 */
+	ord = ts->thread_id % cpumask_weight(cpu_online_mask);
+#if 0
+	printk(">>>>>>>>>>>>>>>>>>>> Generated ord: %d from thread_id: %d\n",
+			ord, ts->thread_id);
+#endif
+	for_each_online_cpu(cpu) {
+		if (ord-- == 0) {
+			cpumask_set_cpu(cpu, conn->conn_cpumask);
+			return;
+		}
+	}
+	/*
+	 * This should never be reached..
+	 */
+	dump_stack();
+	cpumask_setall(conn->conn_cpumask);
+}
+
+static inline void iscsi_thread_check_cpumask(
+	struct iscsi_conn *conn,
+	struct task_struct *p,
+	int mode)
+{
+	char buf[128];
+	/*
+	 * mode == 1 signals iscsi_target_tx_thread() usage.
+	 * mode == 0 signals iscsi_target_rx_thread() usage.
+	 */
+	if (mode == 1) {
+		if (!(conn->conn_tx_reset_cpumask))
+			return;
+		conn->conn_tx_reset_cpumask = 0;
+	} else {
+		if (!(conn->conn_rx_reset_cpumask))
+			return;
+		conn->conn_rx_reset_cpumask = 0;
+	}
+	/*
+	 * Update the CPU mask for this single kthread so that
+	 * both TX and RX kthreads are scheduled to run on the
+	 * same CPU.
+	 */
+	memset(buf, 0, 128);
+	cpumask_scnprintf(buf, 128, conn->conn_cpumask);
+#if 0
+	printk(">>>>>>>>>>>>>> Calling set_cpus_allowed_ptr(): %s for %s\n",
+			buf, p->comm);
+#endif
+	set_cpus_allowed_ptr(p, conn->conn_cpumask);
+}
+
+#else
+#define iscsi_thread_get_cpumask(X) ({})
+#define iscsi_thread_check_cpumask(X, Y, Z) ({})
+#endif /* CONFIG_SMP */
+
+/*	iscsi_target_tx_thread():
+ *
+ *
+ */
+int iscsi_target_tx_thread(void *arg)
+{
+	u8 state;
+	int eodr = 0, map_sg = 0, ret = 0, sent_status = 0, use_misc = 0;
+	struct iscsi_cmd *cmd = NULL;
+	struct iscsi_conn *conn;
+	struct iscsi_queue_req *qr = NULL;
+	struct se_cmd *se_cmd;
+	struct se_thread_set *ts = (struct se_thread_set *) arg;
+	struct se_unmap_sg unmap_sg;
+
+	{
+	    char name[20];
+
+	    memset(name, 0, 20);
+	    snprintf(name, sizeof(name), "%s/%u", ISCSI_TX_THREAD_NAME,
+			ts->thread_id);
+	    iscsi_daemon(ts->tx_thread, name, SHUTDOWN_SIGS);
+	}
+
+restart:
+	conn = iscsi_tx_thread_pre_handler(ts, TARGET);
+	if (!(conn))
+		goto out;
+
+	eodr = map_sg = ret = sent_status = use_misc = 0;
+
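+	/*
+	 * Main TX loop: wait on tx_sem, then drain the per connection
+	 * immediate queue (R2Ts, NopINs, command removal) before servicing
+	 * the response queue (DataIN, status and other response PDUs).
+	 */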
+	while (1) {
+		/*
+		 * Ensure that both TX and RX per connection kthreads
+		 * are scheduled to run on the same CPU.
+		 */
+		iscsi_thread_check_cpumask(conn, current, 1);
+
+		ret = down_interruptible(&conn->tx_sem);
+
+		if ((ts->status == ISCSI_THREAD_SET_RESET) ||
+		     (ret != 0) || signal_pending(current))
+			goto transport_err;
+
+get_immediate:
+		qr = iscsi_get_cmd_from_immediate_queue(conn);
+		if ((qr)) {
+			atomic_set(&conn->check_immediate_queue, 0);
+			cmd = qr->cmd;
+			state = qr->state;
+			kmem_cache_free(lio_qr_cache, qr);
+
+			spin_lock_bh(&cmd->istate_lock);
+			switch (state) {
+			case ISTATE_SEND_R2T:
+				spin_unlock_bh(&cmd->istate_lock);
+				ret = iscsi_send_r2t(cmd, conn);
+				break;
+			case ISTATE_REMOVE:
+				spin_unlock_bh(&cmd->istate_lock);
+
+				if (cmd->data_direction == DMA_TO_DEVICE)
+					iscsi_stop_dataout_timer(cmd);
+
+				spin_lock_bh(&conn->cmd_lock);
+				iscsi_remove_cmd_from_conn_list(cmd, conn);
+				spin_unlock_bh(&conn->cmd_lock);
+				/*
+				 * Determine if a struct se_cmd is associated with
+				 * this struct iscsi_cmd.
+				 */
+				if (!(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) &&
+				    !(cmd->tmr_req))
+					iscsi_release_cmd_to_pool(cmd);
+				else
+					transport_generic_free_cmd(SE_CMD(cmd),
+								1, 1, 0);
+				goto get_immediate;
+			case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+				spin_unlock_bh(&cmd->istate_lock);
+				iscsi_mod_nopin_response_timer(conn);
+				ret = iscsi_send_unsolicited_nopin(cmd,
+						conn, 1);
+				break;
+			case ISTATE_SEND_NOPIN_NO_RESPONSE:
+				spin_unlock_bh(&cmd->istate_lock);
+				ret = iscsi_send_unsolicited_nopin(cmd,
+						conn, 0);
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+				" 0x%08x, i_state: %d on CID: %hu\n",
+				cmd->iscsi_opcode, cmd->init_task_tag, state,
+				conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+			if (ret < 0) {
+				conn->tx_immediate_queue = 0;
+				goto transport_err;
+			}
+
+			if (iscsi_send_tx_data(cmd, conn, 1) < 0) {
+				conn->tx_immediate_queue = 0;
+				iscsi_tx_thread_wait_for_TCP(conn);
+				goto transport_err;
+			}
+
+			spin_lock_bh(&cmd->istate_lock);
+			switch (state) {
+			case ISTATE_SEND_R2T:
+				spin_unlock_bh(&cmd->istate_lock);
+				spin_lock_bh(&cmd->dataout_timeout_lock);
+				iscsi_start_dataout_timer(cmd, conn);
+				spin_unlock_bh(&cmd->dataout_timeout_lock);
+				break;
+			case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+				cmd->i_state = ISTATE_SENT_NOPIN_WANT_RESPONSE;
+				spin_unlock_bh(&cmd->istate_lock);
+				break;
+			case ISTATE_SEND_NOPIN_NO_RESPONSE:
+				cmd->i_state = ISTATE_SENT_STATUS;
+				spin_unlock_bh(&cmd->istate_lock);
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+					" 0x%08x, i_state: %d on CID: %hu\n",
+					cmd->iscsi_opcode, cmd->init_task_tag,
+					state, conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+			goto get_immediate;
+		} else
+			conn->tx_immediate_queue = 0;
+
+get_response:
+		qr = iscsi_get_cmd_from_response_queue(conn);
+		if ((qr)) {
+			cmd = qr->cmd;
+			state = qr->state;
+			kmem_cache_free(lio_qr_cache, qr);
+
+			spin_lock_bh(&cmd->istate_lock);
+check_rsp_state:
+			switch (state) {
+			case ISTATE_SEND_DATAIN:
+				spin_unlock_bh(&cmd->istate_lock);
+				memset((void *)&unmap_sg, 0,
+						sizeof(struct se_unmap_sg));
+				unmap_sg.fabric_cmd = (void *)cmd;
+				unmap_sg.se_cmd = SE_CMD(cmd);
+				map_sg = 1;
+				ret = iscsi_send_data_in(cmd, conn,
+						&unmap_sg, &eodr);
+				break;
+			case ISTATE_SEND_STATUS:
+			case ISTATE_SEND_STATUS_RECOVERY:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_status(cmd, conn);
+				break;
+			case ISTATE_SEND_LOGOUTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_logout_response(cmd, conn);
+				break;
+			case ISTATE_SEND_ASYNCMSG:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_conn_drop_async_message(
+						cmd, conn);
+				break;
+			case ISTATE_SEND_NOPIN:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_nopin_response(cmd, conn);
+				break;
+			case ISTATE_SEND_REJECT:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_reject(cmd, conn);
+				break;
+			case ISTATE_SEND_TASKMGTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_task_mgt_rsp(cmd, conn);
+				if (ret != 0)
+					break;
+				ret = iscsi_tmr_post_handler(cmd, conn);
+				if (ret != 0)
+					iscsi_fall_back_to_erl0(SESS(conn));
+				break;
+			case ISTATE_SEND_TEXTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_text_rsp(cmd, conn);
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+					" 0x%08x, i_state: %d on CID: %hu\n",
+					cmd->iscsi_opcode, cmd->init_task_tag,
+					state, conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+			if (ret < 0) {
+				conn->tx_response_queue = 0;
+				goto transport_err;
+			}
+
+			se_cmd = &cmd->se_cmd;
+
+			if (map_sg && !CONN_OPS(conn)->IFMarker &&
+			    T_TASK(se_cmd)->t_tasks_se_num) {
+				iscsi_map_SG_segments(&unmap_sg);
+				if (iscsi_fe_sendpage_sg(&unmap_sg, conn) < 0) {
+					conn->tx_response_queue = 0;
+					iscsi_tx_thread_wait_for_TCP(conn);
+					iscsi_unmap_SG_segments(&unmap_sg);
+					goto transport_err;
+				}
+				iscsi_unmap_SG_segments(&unmap_sg);
+				map_sg = 0;
+			} else {
+				if (map_sg)
+					iscsi_map_SG_segments(&unmap_sg);
+				if (iscsi_send_tx_data(cmd, conn, use_misc) < 0) {
+					conn->tx_response_queue = 0;
+					iscsi_tx_thread_wait_for_TCP(conn);
+					if (map_sg)
+						iscsi_unmap_SG_segments(&unmap_sg);
+					goto transport_err;
+				}
+				if (map_sg) {
+					iscsi_unmap_SG_segments(&unmap_sg);
+					map_sg = 0;
+				}
+			}
+
+			spin_lock_bh(&cmd->istate_lock);
+			switch (state) {
+			case ISTATE_SEND_DATAIN:
+				if (!eodr)
+					goto check_rsp_state;
+
+				if (eodr == 1) {
+					cmd->i_state = ISTATE_SENT_LAST_DATAIN;
+					sent_status = 1;
+					eodr = use_misc = 0;
+				} else if (eodr == 2) {
+					cmd->i_state = state =
+							ISTATE_SEND_STATUS;
+					sent_status = 0;
+					eodr = use_misc = 0;
+					goto check_rsp_state;
+				}
+				break;
+			case ISTATE_SEND_STATUS:
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			case ISTATE_SEND_ASYNCMSG:
+			case ISTATE_SEND_NOPIN:
+			case ISTATE_SEND_STATUS_RECOVERY:
+			case ISTATE_SEND_TEXTRSP:
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			case ISTATE_SEND_REJECT:
+				use_misc = 0;
+				if (cmd->cmd_flags & ICF_REJECT_FAIL_CONN) {
+					cmd->cmd_flags &= ~ICF_REJECT_FAIL_CONN;
+					spin_unlock_bh(&cmd->istate_lock);
+					up(&cmd->reject_sem);
+					goto transport_err;
+				}
+				up(&cmd->reject_sem);
+				break;
+			case ISTATE_SEND_TASKMGTRSP:
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			case ISTATE_SEND_LOGOUTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				if (!(iscsi_logout_post_handler(cmd, conn)))
+					goto restart;
+				spin_lock_bh(&cmd->istate_lock);
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+					" 0x%08x, i_state: %d on CID: %hu\n",
+					cmd->iscsi_opcode, cmd->init_task_tag,
+					cmd->i_state, conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+
+			if (sent_status) {
+				cmd->i_state = ISTATE_SENT_STATUS;
+				sent_status = 0;
+			}
+			spin_unlock_bh(&cmd->istate_lock);
+
+			if (atomic_read(&conn->check_immediate_queue))
+				goto get_immediate;
+
+			goto get_response;
+		} else
+			conn->tx_response_queue = 0;
+	}
+
+transport_err:
+	iscsi_take_action_for_connection_exit(conn);
+	goto restart;
+out:
+	ts->tx_thread = NULL;
+	up(&ts->tx_done_sem);
+	return 0;
+}
+
+static void iscsi_rx_thread_TCP_timeout(unsigned long data)
+{
+	up((struct semaphore *)data);
+}
+
+/*	iscsi_rx_thread_wait_for_TCP():
+ *
+ *
+ */
+static void iscsi_rx_thread_wait_for_TCP(struct iscsi_conn *conn)
+{
+	struct timer_list rx_TCP_timer;
+	int ret;
+
+	if ((conn->sock->sk->sk_shutdown & SEND_SHUTDOWN) ||
+	    (conn->sock->sk->sk_shutdown & RCV_SHUTDOWN)) {
+		init_timer(&rx_TCP_timer);
+		SETUP_TIMER(rx_TCP_timer, ISCSI_RX_THREAD_TCP_TIMEOUT,
+			&conn->rx_half_close_sem, iscsi_rx_thread_TCP_timeout);
+		add_timer(&rx_TCP_timer);
+
+		ret = down_interruptible(&conn->rx_half_close_sem);
+
+		del_timer_sync(&rx_TCP_timer);
+	}
+}
+
+/*	iscsi_target_rx_thread():
+ *
+ *
+ */
+int iscsi_target_rx_thread(void *arg)
+{
+	int ret;
+	__u8 buffer[ISCSI_HDR_LEN], opcode;
+	__u32 checksum = 0, digest = 0;
+	struct iscsi_conn *conn = NULL;
+	struct se_thread_set *ts = (struct se_thread_set *) arg;
+	struct iovec iov;
+	struct scatterlist sg;
+
+	{
+	    char name[20];
+
+	    memset(name, 0, 20);
+	    snprintf(name, sizeof(name), "%s/%u", ISCSI_RX_THREAD_NAME,
+			ts->thread_id);
+	    iscsi_daemon(ts->rx_thread, name, SHUTDOWN_SIGS);
+	}
+
+restart:
+	conn = iscsi_rx_thread_pre_handler(ts, TARGET);
+	if (!(conn))
+		goto out;
+
+	while (1) {
+		/*
+		 * Ensure that both TX and RX per connection kthreads
+		 * are scheduled to run on the same CPU.
+		 */
+		iscsi_thread_check_cpumask(conn, current, 0);
+
+		memset((void *)buffer, 0, ISCSI_HDR_LEN);
+		memset((void *)&iov, 0, sizeof(struct iovec));
+
+		iov.iov_base	= buffer;
+		iov.iov_len	= ISCSI_HDR_LEN;
+
+		ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+		if (ret != ISCSI_HDR_LEN) {
+			iscsi_rx_thread_wait_for_TCP(conn);
+			goto transport_err;
+		}
+
+		/*
+		 * Set conn->bad_hdr for use with REJECT PDUs.
+		 */
+		memcpy(&conn->bad_hdr, &buffer, ISCSI_HDR_LEN);
+
+		if (CONN_OPS(conn)->HeaderDigest) {
+			iov.iov_base	= &digest;
+			iov.iov_len	= CRC_LEN;
+
+			ret = rx_data(conn, &iov, 1, CRC_LEN);
+			if (ret != CRC_LEN) {
+				iscsi_rx_thread_wait_for_TCP(conn);
+				goto transport_err;
+			}
+			crypto_hash_init(&conn->conn_rx_hash);
+
+			sg_init_one(&sg, (u8 *)buffer, ISCSI_HDR_LEN);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					ISCSI_HDR_LEN);
+
+			crypto_hash_final(&conn->conn_rx_hash, (u8 *)&checksum);
+
+			if (digest != checksum) {
+				printk(KERN_ERR "HeaderDigest CRC32C failed,"
+					" received 0x%08x, computed 0x%08x\n",
+					digest, checksum);
+				/*
+				 * Set the PDU to 0xff so it will intentionally
+				 * hit default in the switch below.
+				 */
+				memset((void *)buffer, 0xff, ISCSI_HDR_LEN);
+				spin_lock_bh(&SESS(conn)->session_stats_lock);
+				SESS(conn)->conn_digest_errors++;
+				spin_unlock_bh(&SESS(conn)->session_stats_lock);
+			} else {
+				TRACE(TRACE_DIGEST, "Got HeaderDigest CRC32C"
+						" 0x%08x\n", checksum);
+			}
+		}
+
+		if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
+			goto transport_err;
+
+		opcode = buffer[0] & ISCSI_OPCODE_MASK;
+
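+		/*
+		 * A discovery session may only carry Text and Logout
+		 * requests; reject any other opcode.
+		 */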
+		if (SESS_OPS_C(conn)->SessionType &&
+		    (opcode != ISCSI_OP_TEXT) &&
+		    (opcode != ISCSI_OP_LOGOUT)) {
+			printk(KERN_ERR "Received illegal iSCSI Opcode: 0x%02x"
+			" while in Discovery Session, rejecting.\n", opcode);
+			iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buffer, conn);
+			goto transport_err;
+		}
+
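+		/*
+		 * Dispatch the PDU to its opcode specific handler.  Any
+		 * handler failure tears down the connection via
+		 * transport_err.
+		 */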
+		switch (opcode) {
+		case ISCSI_OP_SCSI_CMD:
+			if (iscsi_handle_scsi_cmd(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_SCSI_DATA_OUT:
+			if (iscsi_handle_data_out(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_NOOP_OUT:
+			if (iscsi_handle_nop_out(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_SCSI_TMFUNC:
+			if (iscsi_handle_task_mgt_cmd(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_TEXT:
+			if (iscsi_handle_text_cmd(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_LOGOUT:
+			ret = iscsi_handle_logout_cmd(conn, buffer);
+			if (ret > 0) {
+				down(&conn->conn_logout_sem);
+				goto transport_err;
+			} else if (ret < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_SNACK:
+			if (iscsi_handle_snack(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		default:
+			printk(KERN_ERR "Got unknown iSCSI OpCode: 0x%02x\n",
+					opcode);
+			if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+				printk(KERN_ERR "Cannot recover from unknown"
+				" opcode while ERL=0, closing iSCSI connection"
+				".\n");
+				goto transport_err;
+			}
+			if (!CONN_OPS(conn)->OFMarker) {
+				printk(KERN_ERR "Unable to recover from unknown"
+				" opcode while OFMarker=No, closing iSCSI"
+					" connection.\n");
+				goto transport_err;
+			}
+			if (iscsi_recover_from_unknown_opcode(conn) < 0) {
+				printk(KERN_ERR "Unable to recover from unknown"
+					" opcode, closing iSCSI connection.\n");
+				goto transport_err;
+			}
+			break;
+		}
+	}
+
+transport_err:
+	if (!signal_pending(current))
+		atomic_set(&conn->transport_failed, 1);
+	iscsi_take_action_for_connection_exit(conn);
+	goto restart;
+out:
+	ts->rx_thread = NULL;
+	up(&ts->rx_done_sem);
+	return 0;
+}
+
+/*	iscsi_release_commands_from_conn():
+ *
+ *	Release all struct iscsi_cmd descriptors still attached to a failed
+ *	connection, waiting for the TCM layer to quiesce commands with
+ *	outstanding transport tasks.
+ */
+static void iscsi_release_commands_from_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd = NULL, *cmd_tmp = NULL;
+	struct iscsi_session *sess = SESS(conn);
+	struct se_cmd *se_cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
+		if (!(SE_CMD(cmd)) ||
+		    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD)) {
+
+			list_del(&cmd->i_list);
+			spin_unlock_bh(&conn->cmd_lock);
+			iscsi_increment_maxcmdsn(cmd, sess);
+			se_cmd = SE_CMD(cmd);
+			/*
+			 * Special cases for active iSCSI TMR, and
+			 * transport_get_lun_for_cmd() failing from
+			 * iscsi_get_lun_for_cmd() in iscsi_handle_scsi_cmd().
+			 */
+			if (cmd->tmr_req && se_cmd->transport_wait_for_tasks)
+				se_cmd->transport_wait_for_tasks(se_cmd, 1, 1);
+			else if (SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD)
+				transport_release_cmd_to_pool(se_cmd);
+			else
+				__iscsi_release_cmd_to_pool(cmd, sess);
+
+			spin_lock_bh(&conn->cmd_lock);
+			continue;
+		}
+		list_del(&cmd->i_list);
+		spin_unlock_bh(&conn->cmd_lock);
+
+		iscsi_increment_maxcmdsn(cmd, sess);
+		se_cmd = SE_CMD(cmd);
+
+		if (se_cmd->transport_wait_for_tasks)
+			se_cmd->transport_wait_for_tasks(se_cmd, 1, 1);
+
+		spin_lock_bh(&conn->cmd_lock);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+}
+
+/*	iscsi_stop_timers_for_cmds():
+ *
+ *	Stop the DataOUT timer for any WRITE commands still outstanding
+ *	on this connection.
+ */
+static void iscsi_stop_timers_for_cmds(
+	struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->data_direction == DMA_TO_DEVICE)
+			iscsi_stop_dataout_timer(cmd);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+}
+
+/*	iscsi_close_connection():
+ *
+ *	Tear down a single iSCSI connection: stop timers and the thread set,
+ *	release commands or prepare them for reallegiance, and decide whether
+ *	the owning session must also be failed, reinstated or freed.
+ */
+int iscsi_close_connection(
+	struct iscsi_conn *conn)
+{
+	int conn_logout = (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT);
+	struct iscsi_session	*sess = SESS(conn);
+
+	TRACE(TRACE_ISCSI, "Closing iSCSI connection CID %hu on SID:"
+		" %u\n", conn->cid, sess->sid);
+
+	iscsi_stop_netif_timer(conn);
+
+	/*
+	 * Always up conn_logout_sem just in case the RX Thread is sleeping
+	 * and the logout response never got sent because the connection
+	 * failed.
+	 */
+	up(&conn->conn_logout_sem);
+
+	iscsi_release_thread_set(conn, TARGET);
+
+	iscsi_stop_timers_for_cmds(conn);
+	iscsi_stop_nopin_response_timer(conn);
+	iscsi_stop_nopin_timer(conn);
+	iscsi_free_queue_reqs_for_conn(conn);
+
+	/*
+	 * During Connection recovery drop unacknowledged out of order
+	 * commands for this connection, and prepare the other commands
+	 * for reallegiance.
+	 *
+	 * During normal operation clear the out of order commands (but
+	 * do not free the struct iscsi_ooo_cmdsn's) and release all
+	 * struct iscsi_cmds.
+	 */
+	if (atomic_read(&conn->connection_recovery)) {
+		iscsi_discard_unacknowledged_ooo_cmdsns_for_conn(conn);
+		iscsi_prepare_cmds_for_realligance(conn);
+	} else {
+		iscsi_clear_ooo_cmdsns_for_conn(conn);
+		iscsi_release_commands_from_conn(conn);
+	}
+
+	/*
+	 * Handle decrementing session or connection usage count if
+	 * a logout response was not able to be sent because the
+	 * connection failed.  Fall back to Session Recovery here.
+	 */
+	if (atomic_read(&conn->conn_logout_remove)) {
+		if (conn->conn_logout_reason == ISCSI_LOGOUT_REASON_CLOSE_SESSION) {
+			iscsi_dec_conn_usage_count(conn);
+			iscsi_dec_session_usage_count(sess);
+		}
+		if (conn->conn_logout_reason == ISCSI_LOGOUT_REASON_CLOSE_CONNECTION)
+			iscsi_dec_conn_usage_count(conn);
+
+		atomic_set(&conn->conn_logout_remove, 0);
+		atomic_set(&sess->session_reinstatement, 0);
+		atomic_set(&sess->session_fall_back_to_erl0, 1);
+	}
+
+	spin_lock_bh(&sess->conn_lock);
+	iscsi_remove_conn_from_list(sess, conn);
+
+	/*
+	 * Attempt to let the Initiator know this connection failed by
+	 * sending a Connection Dropped Async Message on another
+	 * active connection.
+	 */
+	if (atomic_read(&conn->connection_recovery))
+		iscsi_build_conn_drop_async_message(conn);
+
+	spin_unlock_bh(&sess->conn_lock);
+
+	/*
+	 * If connection reinstatement is being performed on this connection,
+	 * up the connection reinstatement semaphore that is being blocked on
+	 * in iscsi_cause_connection_reinstatement().
+	 */
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->sleep_on_conn_wait_sem)) {
+		spin_unlock_bh(&conn->state_lock);
+		up(&conn->conn_wait_sem);
+		down(&conn->conn_post_wait_sem);
+		spin_lock_bh(&conn->state_lock);
+	}
+
+	/*
+	 * If connection reinstatement is being performed on this connection
+	 * by receiving a REMOVECONNFORRECOVERY logout request, up the
+	 * connection wait rcfr semaphore that is being blocked on
+	 * an iscsi_connection_reinstatement_rcfr().
+	 */
+	if (atomic_read(&conn->connection_wait_rcfr)) {
+		spin_unlock_bh(&conn->state_lock);
+		up(&conn->conn_wait_rcfr_sem);
+		down(&conn->conn_post_wait_sem);
+		spin_lock_bh(&conn->state_lock);
+	}
+	atomic_set(&conn->connection_reinstatement, 1);
+	spin_unlock_bh(&conn->state_lock);
+
+	/*
+	 * If any other processes are accessing this connection pointer we
+	 * must wait until they have completed.
+	 */
+	iscsi_check_conn_usage_count(conn);
+
+	if (conn->conn_rx_hash.tfm)
+		crypto_free_hash(conn->conn_rx_hash.tfm);
+	if (conn->conn_tx_hash.tfm)
+		crypto_free_hash(conn->conn_tx_hash.tfm);
+
+	if (conn->conn_cpumask)
+		free_cpumask_var(conn->conn_cpumask);
+
+	kfree(conn->conn_ops);
+	conn->conn_ops = NULL;
+
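+	/*
+	 * Release the connection's socket.  For SCTP, free the struct file
+	 * wrapper attached to the socket first.
+	 */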
+	if (conn->sock) {
+		if (conn->conn_flags & CONNFLAG_SCTP_STRUCT_FILE) {
+			kfree(conn->sock->file);
+			conn->sock->file = NULL;
+		}
+		sock_release(conn->sock);
+	}
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_FREE.\n");
+	conn->conn_state = TARG_CONN_STATE_FREE;
+	kmem_cache_free(lio_conn_cache, conn);
+	conn = NULL;
+
+	spin_lock_bh(&sess->conn_lock);
+	atomic_dec(&sess->nconn);
+	printk(KERN_INFO "Decremented iSCSI connection count to %hu from node:"
+		" %s\n", atomic_read(&sess->nconn),
+		SESS_OPS(sess)->InitiatorName);
+	/*
+	 * Make sure that if one connection fails in a non-ERL=2 iSCSI
+	 * Session, they all fail.
+	 */
+	if ((SESS_OPS(sess)->ErrorRecoveryLevel != 2) && !conn_logout &&
+	     !atomic_read(&sess->session_logout))
+		atomic_set(&sess->session_fall_back_to_erl0, 1);
+
+	/*
+	 * If this was not the last connection in the session, and we are
+	 * performing session reinstatement or falling back to ERL=0, call
+	 * iscsi_stop_session() without sleeping to shutdown the other
+	 * active connections.
+	 */
+	if (atomic_read(&sess->nconn)) {
+		if (!atomic_read(&sess->session_reinstatement) &&
+		    !atomic_read(&sess->session_fall_back_to_erl0)) {
+			spin_unlock_bh(&sess->conn_lock);
+			return 0;
+		}
+		if (!atomic_read(&sess->session_stop_active)) {
+			atomic_set(&sess->session_stop_active, 1);
+			spin_unlock_bh(&sess->conn_lock);
+			iscsi_stop_session(sess, 0, 0);
+			return 0;
+		}
+		spin_unlock_bh(&sess->conn_lock);
+		return 0;
+	}
+
+	/*
+	 * If this was the last connection in the session and one of the
+	 * following is occurring:
+	 *
+	 * Session Reinstatement is not being performed and we are falling
+	 * back to ERL=0: call iscsi_close_session().
+	 *
+	 * Session Logout was requested.  iscsi_close_session() will be called
+	 * elsewhere.
+	 *
+	 * Session Continuation is not being performed, start the Time2Retain
+	 * handler and check if sleep_on_sess_wait_sem is active.
+	 */
+	if (!atomic_read(&sess->session_reinstatement) &&
+	     atomic_read(&sess->session_fall_back_to_erl0)) {
+		spin_unlock_bh(&sess->conn_lock);
+		iscsi_close_session(sess);
+
+		return 0;
+	} else if (atomic_read(&sess->session_logout)) {
+		TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FREE.\n");
+		sess->session_state = TARG_SESS_STATE_FREE;
+		spin_unlock_bh(&sess->conn_lock);
+
+		if (atomic_read(&sess->sleep_on_sess_wait_sem))
+			up(&sess->session_wait_sem);
+
+		return 0;
+	} else {
+		TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FAILED.\n");
+		sess->session_state = TARG_SESS_STATE_FAILED;
+
+		if (!atomic_read(&sess->session_continuation)) {
+			spin_unlock_bh(&sess->conn_lock);
+			iscsi_start_time2retain_handler(sess);
+		} else
+			spin_unlock_bh(&sess->conn_lock);
+
+		if (atomic_read(&sess->sleep_on_sess_wait_sem))
+			up(&sess->session_wait_sem);
+
+		return 0;
+	}
+}
+
+/*	iscsi_close_session():
+ *
+ *	Release an iSCSI session once its last connection has been closed,
+ *	deregistering it from the TCM layer and freeing session-wide state.
+ */
+int iscsi_close_session(struct iscsi_session *sess)
+{
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+
+	if (atomic_read(&sess->nconn)) {
+		printk(KERN_ERR "%d connection(s) still exist for iSCSI session"
+			" to %s\n", atomic_read(&sess->nconn),
+			SESS_OPS(sess)->InitiatorName);
+		BUG();
+	}
+
+	spin_lock_bh(&se_tpg->session_lock);
+	atomic_set(&sess->session_logout, 1);
+	atomic_set(&sess->session_reinstatement, 1);
+	iscsi_stop_time2retain_timer(sess);
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	/*
+	 * transport_deregister_session_configfs() will clear the
+	 * struct se_node_acl->nacl_sess pointer now, as an iscsi_np process
+	 * context can set it again with __transport_register_session() in
+	 * iscsi_post_login_handler() after iscsi_stop_session() completes
+	 * in iscsi_np context.
+	 */
+	transport_deregister_session_configfs(sess->se_sess);
+
+	/*
+	 * If any other processes are accessing this session pointer we must
+	 * wait until they have completed.  If we are in interrupt context
+	 * (the time2retain handler) and hold an active session usage count,
+	 * restart the timer and exit.
+	 */
+	if (!in_interrupt()) {
+		if (iscsi_check_session_usage_count(sess) == 1)
+			iscsi_stop_session(sess, 1, 1);
+	} else {
+		if (iscsi_check_session_usage_count(sess) == 2) {
+			atomic_set(&sess->session_logout, 0);
+			iscsi_start_time2retain_handler(sess);
+			return 0;
+		}
+	}
+
+	transport_deregister_session(sess->se_sess);
+
+	if (SESS_OPS(sess)->ErrorRecoveryLevel == 2)
+		iscsi_free_connection_recovery_entires(sess);
+
+	iscsi_free_all_ooo_cmdsns(sess);
+
+	spin_lock_bh(&se_tpg->session_lock);
+	TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FREE.\n");
+	sess->session_state = TARG_SESS_STATE_FREE;
+	printk(KERN_INFO "Released iSCSI session from node: %s\n",
+			SESS_OPS(sess)->InitiatorName);
+	tpg->nsessions--;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_nsessions--;
+
+	printk(KERN_INFO "Decremented number of active iSCSI Sessions on"
+		" iSCSI TPG: %hu to %u\n", tpg->tpgt, tpg->nsessions);
+
+	kfree(sess->sess_ops);
+	sess->sess_ops = NULL;
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	kmem_cache_free(lio_sess_cache, sess);
+	sess = NULL;
+	return 0;
+}
+
+/*	iscsi_logout_post_handler_closesession():
+ *
+ *	Complete a Logout response with reason CLOSESESSION by stopping
+ *	all remaining connections and releasing the session.
+ */
+static void iscsi_logout_post_handler_closesession(
+	struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
+
+	atomic_set(&conn->conn_logout_remove, 0);
+	up(&conn->conn_logout_sem);
+
+	iscsi_dec_conn_usage_count(conn);
+	iscsi_stop_session(sess, 1, 1);
+	iscsi_dec_session_usage_count(sess);
+	iscsi_close_session(sess);
+}
+
+/*	iscsi_logout_post_handler_samecid():
+ *
+ *	Complete a Logout response that closes the connection it was
+ *	received on by triggering connection reinstatement.
+ */
+static void iscsi_logout_post_handler_samecid(
+	struct iscsi_conn *conn)
+{
+	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
+
+	atomic_set(&conn->conn_logout_remove, 0);
+	up(&conn->conn_logout_sem);
+
+	iscsi_cause_connection_reinstatement(conn, 1);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*	iscsi_logout_post_handler_diffcid():
+ *
+ *	Complete a Logout response that closes a connection other than the
+ *	one it was received on, located by its CID.
+ */
+static void iscsi_logout_post_handler_diffcid(
+	struct iscsi_conn *conn,
+	__u16 cid)
+{
+	struct iscsi_conn *l_conn = NULL, *c;
+	struct iscsi_session *sess = SESS(conn);
+
+	if (!sess)
+		return;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(c, &sess->sess_conn_list, conn_list) {
+		if (c->cid == cid) {
+			iscsi_inc_conn_usage_count(c);
+			l_conn = c;
+			break;
+		}
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	if (!l_conn)
+		return;
+
+	if (l_conn->sock)
+		l_conn->sock->ops->shutdown(l_conn->sock, RCV_SHUTDOWN);
+
+	spin_lock_bh(&l_conn->state_lock);
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGOUT.\n");
+	l_conn->conn_state = TARG_CONN_STATE_IN_LOGOUT;
+	spin_unlock_bh(&l_conn->state_lock);
+
+	iscsi_cause_connection_reinstatement(l_conn, 1);
+	iscsi_dec_conn_usage_count(l_conn);
+}
+
+/*	iscsi_logout_post_handler():
+ *
+ *	Return of 0 causes the TX thread to restart.
+ */
+static int iscsi_logout_post_handler(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int ret = 0;
+
+	switch (cmd->logout_reason) {
+	case ISCSI_LOGOUT_REASON_CLOSE_SESSION:
+		switch (cmd->logout_response) {
+		case ISCSI_LOGOUT_SUCCESS:
+		case ISCSI_LOGOUT_CLEANUP_FAILED:
+		default:
+			iscsi_logout_post_handler_closesession(conn);
+			break;
+		}
+		ret = 0;
+		break;
+	case ISCSI_LOGOUT_REASON_CLOSE_CONNECTION:
+		if (conn->cid == cmd->logout_cid) {
+			switch (cmd->logout_response) {
+			case ISCSI_LOGOUT_SUCCESS:
+			case ISCSI_LOGOUT_CLEANUP_FAILED:
+			default:
+				iscsi_logout_post_handler_samecid(conn);
+				break;
+			}
+			ret = 0;
+		} else {
+			switch (cmd->logout_response) {
+			case ISCSI_LOGOUT_SUCCESS:
+				iscsi_logout_post_handler_diffcid(conn,
+					cmd->logout_cid);
+				break;
+			case ISCSI_LOGOUT_CID_NOT_FOUND:
+			case ISCSI_LOGOUT_CLEANUP_FAILED:
+			default:
+				break;
+			}
+			ret = 1;
+		}
+		break;
+	case ISCSI_LOGOUT_REASON_RECOVERY:
+		switch (cmd->logout_response) {
+		case ISCSI_LOGOUT_SUCCESS:
+		case ISCSI_LOGOUT_CID_NOT_FOUND:
+		case ISCSI_LOGOUT_RECOVERY_UNSUPPORTED:
+		case ISCSI_LOGOUT_CLEANUP_FAILED:
+		default:
+			break;
+		}
+		ret = 1;
+		break;
+	default:
+		break;
+
+	}
+	return ret;
+}
+
+/*	iscsi_fail_session():
+ *
+ *	Move every connection in the session to CLEANUP_WAIT and mark the
+ *	session itself as FAILED.
+ */
+void iscsi_fail_session(struct iscsi_session *sess)
+{
+	struct iscsi_conn *conn;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+		TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_CLEANUP_WAIT.\n");
+		conn->conn_state = TARG_CONN_STATE_CLEANUP_WAIT;
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FAILED.\n");
+	sess->session_state = TARG_SESS_STATE_FAILED;
+}
+
+/*	iscsi_free_session():
+ *
+ *	Force reinstatement of every connection in the session, wait for
+ *	them to drain, and then release the session.
+ */
+int iscsi_free_session(struct iscsi_session *sess)
+{
+	u16 conn_count = atomic_read(&sess->nconn);
+	struct iscsi_conn *conn, *conn_tmp;
+
+	spin_lock_bh(&sess->conn_lock);
+	atomic_set(&sess->sleep_on_sess_wait_sem, 1);
+
+	list_for_each_entry_safe(conn, conn_tmp, &sess->sess_conn_list,
+			conn_list) {
+		if (conn_count == 0)
+			break;
+
+		iscsi_inc_conn_usage_count(conn);
+		spin_unlock_bh(&sess->conn_lock);
+		iscsi_cause_connection_reinstatement(conn, 1);
+		spin_lock_bh(&sess->conn_lock);
+
+		iscsi_dec_conn_usage_count(conn);
+		conn_count--;
+	}
+
+	if (atomic_read(&sess->nconn)) {
+		spin_unlock_bh(&sess->conn_lock);
+		down(&sess->session_wait_sem);
+	} else
+		spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_close_session(sess);
+	return 0;
+}
+
+/*	iscsi_stop_session():
+ *
+ *	Force reinstatement of every connection in the session, optionally
+ *	sleeping until the connections and/or the entire session have
+ *	finished shutting down.
+ */
+void iscsi_stop_session(
+	struct iscsi_session *sess,
+	int session_sleep,
+	int connection_sleep)
+{
+	u16 conn_count = atomic_read(&sess->nconn);
+	struct iscsi_conn *conn, *conn_tmp = NULL;
+
+	spin_lock_bh(&sess->conn_lock);
+	if (session_sleep)
+		atomic_set(&sess->sleep_on_sess_wait_sem, 1);
+
+	if (connection_sleep) {
+		list_for_each_entry_safe(conn, conn_tmp, &sess->sess_conn_list,
+				conn_list) {
+			if (conn_count == 0)
+				break;
+
+			iscsi_inc_conn_usage_count(conn);
+			spin_unlock_bh(&sess->conn_lock);
+			iscsi_cause_connection_reinstatement(conn, 1);
+			spin_lock_bh(&sess->conn_lock);
+
+			iscsi_dec_conn_usage_count(conn);
+			conn_count--;
+		}
+	} else {
+		list_for_each_entry(conn, &sess->sess_conn_list, conn_list)
+			iscsi_cause_connection_reinstatement(conn, 0);
+	}
+
+	if (session_sleep && atomic_read(&sess->nconn)) {
+		spin_unlock_bh(&sess->conn_lock);
+		down(&sess->session_wait_sem);
+	} else
+		spin_unlock_bh(&sess->conn_lock);
+}
+
+/*	iscsi_release_sessions_for_tpg():
+ *
+ *	Release every session associated with a Target Portal Group.  Fails
+ *	when active sessions still exist and force is not set.
+ */
+int iscsi_release_sessions_for_tpg(struct iscsi_portal_group *tpg, int force)
+{
+	struct iscsi_session *sess;
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_session *se_sess, *se_sess_tmp;
+	int session_count = 0;
+
+	spin_lock_bh(&se_tpg->session_lock);
+	if (tpg->nsessions && !force) {
+		spin_unlock_bh(&se_tpg->session_lock);
+		return -1;
+	}
+
+	list_for_each_entry_safe(se_sess, se_sess_tmp, &se_tpg->tpg_sess_list,
+			sess_list) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+		spin_lock(&sess->conn_lock);
+		if (atomic_read(&sess->session_fall_back_to_erl0) ||
+		    atomic_read(&sess->session_logout) ||
+		    (sess->time2retain_timer_flags & T2R_TF_EXPIRED)) {
+			spin_unlock(&sess->conn_lock);
+			continue;
+		}
+		atomic_set(&sess->session_reinstatement, 1);
+		spin_unlock(&sess->conn_lock);
+		spin_unlock_bh(&se_tpg->session_lock);
+
+		iscsi_free_session(sess);
+		spin_lock_bh(&se_tpg->session_lock);
+
+		session_count++;
+	}
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	TRACE(TRACE_ISCSI, "Released %d iSCSI Session(s) from Target Portal"
+			" Group: %hu\n", session_count, tpg->tpgt);
+	return 0;
+}
+
+static int iscsi_target_init_module(void)
+{
+	if (!(iscsi_target_detect()))
+		return 0;
+
+	return -1;
+}
+
+static void iscsi_target_cleanup_module(void)
+{
+	iscsi_target_release();
+}
+
+#ifdef MODULE
+MODULE_DESCRIPTION("LIO Target Driver Core 3.x.x Release");
+MODULE_LICENSE("GPL");
+module_init(iscsi_target_init_module);
+module_exit(iscsi_target_cleanup_module);
+#endif /* MODULE */
diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
new file mode 100644
index 0000000..25d56c1
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target.h
@@ -0,0 +1,49 @@
+#ifndef ISCSI_TARGET_H
+#define ISCSI_TARGET_H
+
+extern struct iscsi_tiqn *core_get_tiqn_for_login(unsigned char *);
+extern struct iscsi_tiqn *core_get_tiqn(unsigned char *, int);
+extern void core_put_tiqn_for_login(struct iscsi_tiqn *);
+extern struct iscsi_tiqn *core_add_tiqn(unsigned char *, int *);
+extern int core_del_tiqn(struct iscsi_tiqn *);
+extern int core_access_np(struct iscsi_np *, struct iscsi_portal_group *);
+extern int core_deaccess_np(struct iscsi_np *, struct iscsi_portal_group *);
+extern void *core_get_np_ip(struct iscsi_np *np);
+extern struct iscsi_np *core_get_np(void *, u16, int);
+extern int __core_del_np_ex(struct iscsi_np *, struct iscsi_np_ex *);
+extern struct iscsi_np *core_add_np(struct iscsi_np_addr *, int, int *);
+extern int core_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *,
+				struct iscsi_portal_group *, int);
+extern int core_del_np(struct iscsi_np *);
+extern u32 iscsi_get_new_index(iscsi_index_t);
+extern char *iscsi_get_fabric_name(void);
+extern struct iscsi_cmd *iscsi_get_cmd(struct se_cmd *);
+extern u32 iscsi_get_task_tag(struct se_cmd *);
+extern int iscsi_get_cmd_state(struct se_cmd *);
+extern void iscsi_new_cmd_failure(struct se_cmd *);
+extern int iscsi_is_state_remove(struct se_cmd *);
+extern int lio_sess_logged_in(struct se_session *);
+extern u32 lio_sess_get_index(struct se_session *);
+extern u32 lio_sess_get_initiator_sid(struct se_session *,
+				unsigned char *, u32);
+extern int iscsi_send_async_msg(struct iscsi_conn *, u16, u8, u8);
+extern int lio_queue_data_in(struct se_cmd *);
+extern int iscsi_send_r2t(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_build_r2ts_for_cmd(struct iscsi_cmd *, struct iscsi_conn *, int);
+extern int lio_write_pending(struct se_cmd *);
+extern int lio_write_pending_status(struct se_cmd *);
+extern int lio_queue_status(struct se_cmd *);
+extern u16 lio_set_fabric_sense_len(struct se_cmd *, u32);
+extern u16 lio_get_fabric_sense_len(void);
+extern int lio_queue_tm_rsp(struct se_cmd *);
+extern void iscsi_thread_get_cpumask(struct iscsi_conn *);
+extern int iscsi_target_tx_thread(void *);
+extern int iscsi_target_rx_thread(void *);
+extern int iscsi_close_connection(struct iscsi_conn *);
+extern int iscsi_close_session(struct iscsi_session *);
+extern void iscsi_fail_session(struct iscsi_session *);
+extern int iscsi_free_session(struct iscsi_session *);
+extern void iscsi_stop_session(struct iscsi_session *, int, int);
+extern int iscsi_release_sessions_for_tpg(struct iscsi_portal_group *, int);
+
+#endif   /*** ISCSI_TARGET_H ***/
diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
new file mode 100644
index 0000000..86328dc
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_core.h
@@ -0,0 +1,1019 @@
+#ifndef ISCSI_TARGET_CORE_H
+#define ISCSI_TARGET_CORE_H
+
+#include <linux/in.h>
+#include <linux/configfs.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <scsi/scsi_cmnd.h>
+#include <target/target_core_base.h>
+
+#define ISCSI_VENDOR			"Linux-iSCSI.org"
+#define ISCSI_VERSION			"v4.1.0-rc1"
+#define SHUTDOWN_SIGS	(sigmask(SIGKILL)|sigmask(SIGINT)|sigmask(SIGABRT))
+#define ISCSI_MISC_IOVECS		5
+#define ISCSI_MAX_DATASN_MISSING_COUNT	16
+#define ISCSI_TX_THREAD_TCP_TIMEOUT	2
+#define ISCSI_RX_THREAD_TCP_TIMEOUT	2
+#define ISCSI_IQN_UNIQUENESS		14
+#define ISCSI_IQN_LEN			224
+#define ISCSI_TIQN_LEN			ISCSI_IQN_LEN
+#define SECONDS_FOR_ASYNC_LOGOUT	10
+#define SECONDS_FOR_ASYNC_TEXT		10
+#define IPV6_ADDRESS_SPACE		48
+#define IPV4_ADDRESS_SPACE		4
+#define IPV4_BUF_SIZE			18
+#define RESERVED			0xFFFFFFFF
+/* from target_core_base.h */
+#define ISCSI_MAX_LUNS_PER_TPG		TRANSPORT_MAX_LUNS_PER_TPG
+/* Maximum Target Portal Groups allowed */
+#define ISCSI_MAX_TPGS			64
+/* Size of the Network Device Name Buffer */
+#define ISCSI_NETDEV_NAME_SIZE		12
+/* Size of iSCSI specific sense buffer */
+#define ISCSI_SENSE_BUFFER_LEN		(TRANSPORT_SENSE_BUFFER + 2)
+
+/* struct iscsi_tpg_np->tpg_np_network_transport */
+#define ISCSI_TCP			0
+#define ISCSI_SCTP_TCP			1
+#define ISCSI_SCTP_UDP			2
+#define ISCSI_IWARP_TCP			3
+#define ISCSI_IWARP_SCTP		4
+#define ISCSI_INFINIBAND		5
+
+#define ISCSI_HDR_LEN			48
+#define CRC_LEN				4
+#define MAX_KEY_NAME_LENGTH		63
+#define MAX_KEY_VALUE_LENGTH		255
+#define INITIATOR			1
+#define TARGET				2
+#define WHITE_SPACE			" \t\v\f\n\r"
+
+/* RFC-3720 7.1.3  Standard Connection State Diagram for an Initiator */
+#define INIT_CONN_STATE_FREE			0x1
+#define INIT_CONN_STATE_XPT_WAIT		0x2
+#define INIT_CONN_STATE_IN_LOGIN		0x4
+#define INIT_CONN_STATE_LOGGED_IN		0x5
+#define INIT_CONN_STATE_IN_LOGOUT		0x6
+#define INIT_CONN_STATE_LOGOUT_REQUESTED	0x7
+#define INIT_CONN_STATE_CLEANUP_WAIT		0x8
+
+/* RFC-3720 7.1.4  Standard Connection State Diagram for a Target */
+#define TARG_CONN_STATE_FREE			0x1
+#define TARG_CONN_STATE_XPT_UP			0x3
+#define TARG_CONN_STATE_IN_LOGIN		0x4
+#define TARG_CONN_STATE_LOGGED_IN		0x5
+#define TARG_CONN_STATE_IN_LOGOUT		0x6
+#define TARG_CONN_STATE_LOGOUT_REQUESTED	0x7
+#define TARG_CONN_STATE_CLEANUP_WAIT		0x8
+
+/* RFC-3720 7.2 Connection Cleanup State Diagram for Initiators and Targets */
+#define CLEANUP_STATE_CLEANUP_WAIT		0x1
+#define CLEANUP_STATE_IN_CLEANUP		0x2
+#define CLEANUP_STATE_CLEANUP_FREE		0x3
+
+/* RFC-3720 7.3.1  Session State Diagram for an Initiator */
+#define INIT_SESS_STATE_FREE			0x1
+#define INIT_SESS_STATE_LOGGED_IN		0x3
+#define INIT_SESS_STATE_FAILED			0x4
+
+/* RFC-3720 7.3.2  Session State Diagram for a Target */
+#define TARG_SESS_STATE_FREE			0x1
+#define TARG_SESS_STATE_ACTIVE			0x2
+#define TARG_SESS_STATE_LOGGED_IN		0x3
+#define TARG_SESS_STATE_FAILED			0x4
+#define TARG_SESS_STATE_IN_CONTINUE		0x5
+
+/* struct iscsi_node_attrib sanity values */
+#define NA_DATAOUT_TIMEOUT		3
+#define NA_DATAOUT_TIMEOUT_MAX		60
+#define NA_DATAOUT_TIMEOUT_MIX		2
+#define NA_DATAOUT_TIMEOUT_RETRIES	5
+#define NA_DATAOUT_TIMEOUT_RETRIES_MAX	15
+#define NA_DATAOUT_TIMEOUT_RETRIES_MIN	1
+#define NA_NOPIN_TIMEOUT		5
+#define NA_NOPIN_TIMEOUT_MAX		60
+#define NA_NOPIN_TIMEOUT_MIN		3
+#define NA_NOPIN_RESPONSE_TIMEOUT	5
+#define NA_NOPIN_RESPONSE_TIMEOUT_MAX	60
+#define NA_NOPIN_RESPONSE_TIMEOUT_MIN	3
+#define NA_RANDOM_DATAIN_PDU_OFFSETS	0
+#define NA_RANDOM_DATAIN_SEQ_OFFSETS	0
+#define NA_RANDOM_R2T_OFFSETS		0
+#define NA_DEFAULT_ERL			0
+#define NA_DEFAULT_ERL_MAX		2
+#define NA_DEFAULT_ERL_MIN		0
+
+/* struct iscsi_tpg_attrib sanity values */
+#define TA_AUTHENTICATION		1
+#define TA_LOGIN_TIMEOUT		15
+#define TA_LOGIN_TIMEOUT_MAX		30
+#define TA_LOGIN_TIMEOUT_MIN		5
+#define TA_NETIF_TIMEOUT		2
+#define TA_NETIF_TIMEOUT_MAX		15
+#define TA_NETIF_TIMEOUT_MIN		2
+#define TA_GENERATE_NODE_ACLS		0
+#define TA_DEFAULT_CMDSN_DEPTH		16
+#define TA_DEFAULT_CMDSN_DEPTH_MAX	512
+#define TA_DEFAULT_CMDSN_DEPTH_MIN	1
+#define TA_CACHE_DYNAMIC_ACLS		0
+/* Enabled by default in demo mode (generic_node_acls=1) */
+#define TA_DEMO_MODE_WRITE_PROTECT	1
+/* Disabled by default in production mode w/ explicit ACLs */
+#define TA_PROD_MODE_WRITE_PROTECT	0
+/* Enabled by default with x86 supporting SSE v4.2 */
+#define TA_CRC32C_X86_OFFLOAD		1
+#define TA_CACHE_CORE_NPS		0
+
+/* struct iscsi_data_count->type */
+#define ISCSI_RX_DATA				1
+#define ISCSI_TX_DATA				2
+
+/* struct iscsi_datain_req->dr_done */
+#define DATAIN_COMPLETE_NORMAL			1
+#define DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY 2
+#define DATAIN_COMPLETE_CONNECTION_RECOVERY	3
+
+/* struct iscsi_datain_req->recovery */
+#define DATAIN_WITHIN_COMMAND_RECOVERY		1
+#define DATAIN_CONNECTION_RECOVERY		2
+
+/* struct iscsi_portal_group->state */
+#define TPG_STATE_FREE				0
+#define TPG_STATE_ACTIVE			1
+#define TPG_STATE_INACTIVE			2
+#define TPG_STATE_COLD_RESET			3
+
+/* iscsi_set_device_attribute() states */
+#define ISCSI_DEVATTRIB_ENABLE_DEVICE		1
+#define ISCSI_DEVATTRIB_DISABLE_DEVICE		2
+#define ISCSI_DEVATTRIB_ADD_LUN_ACL		3
+#define ISCSI_DEVATTRIB_DELETE_LUN_ACL		4
+
+/* struct iscsi_tiqn->tiqn_state */
+#define TIQN_STATE_ACTIVE			1
+#define TIQN_STATE_SHUTDOWN			2
+
+/* struct iscsi_cmd->cmd_flags */
+#define ICF_GOT_LAST_DATAOUT			0x00000001
+#define ICF_GOT_DATACK_SNACK			0x00000002
+#define ICF_NON_IMMEDIATE_UNSOLICITED_DATA	0x00000004
+#define ICF_SENT_LAST_R2T			0x00000008
+#define ICF_WITHIN_COMMAND_RECOVERY		0x00000010
+#define ICF_CONTIG_MEMORY			0x00000020
+#define ICF_ATTACHED_TO_RQUEUE			0x00000040
+#define ICF_OOO_CMDSN				0x00000080
+#define ICF_REJECT_FAIL_CONN			0x00000100
+
+/* struct iscsi_cmd->i_state */
+#define ISTATE_NO_STATE				0
+#define ISTATE_NEW_CMD				1
+#define ISTATE_DEFERRED_CMD			2
+#define ISTATE_UNSOLICITED_DATA			3
+#define ISTATE_RECEIVE_DATAOUT			4
+#define ISTATE_RECEIVE_DATAOUT_RECOVERY		5
+#define ISTATE_RECEIVED_LAST_DATAOUT		6
+#define ISTATE_WITHIN_DATAOUT_RECOVERY		7
+#define ISTATE_IN_CONNECTION_RECOVERY		8
+#define ISTATE_RECEIVED_TASKMGT			9
+#define ISTATE_SEND_ASYNCMSG			10
+#define ISTATE_SENT_ASYNCMSG			11
+#define	ISTATE_SEND_DATAIN			12
+#define ISTATE_SEND_LAST_DATAIN			13
+#define ISTATE_SENT_LAST_DATAIN			14
+#define ISTATE_SEND_LOGOUTRSP			15
+#define ISTATE_SENT_LOGOUTRSP			16
+#define ISTATE_SEND_NOPIN			17
+#define ISTATE_SENT_NOPIN			18
+#define ISTATE_SEND_REJECT			19
+#define ISTATE_SENT_REJECT			20
+#define	ISTATE_SEND_R2T				21
+#define ISTATE_SENT_R2T				22
+#define ISTATE_SEND_R2T_RECOVERY		23
+#define ISTATE_SENT_R2T_RECOVERY		24
+#define ISTATE_SEND_LAST_R2T			25
+#define ISTATE_SENT_LAST_R2T			26
+#define ISTATE_SEND_LAST_R2T_RECOVERY		27
+#define ISTATE_SENT_LAST_R2T_RECOVERY		28
+#define ISTATE_SEND_STATUS			29
+#define ISTATE_SEND_STATUS_BROKEN_PC		30
+#define ISTATE_SENT_STATUS			31
+#define ISTATE_SEND_STATUS_RECOVERY		32
+#define ISTATE_SENT_STATUS_RECOVERY		33
+#define ISTATE_SEND_TASKMGTRSP			34
+#define ISTATE_SENT_TASKMGTRSP			35
+#define ISTATE_SEND_TEXTRSP			36
+#define ISTATE_SENT_TEXTRSP			37
+#define ISTATE_SEND_NOPIN_WANT_RESPONSE		38
+#define ISTATE_SENT_NOPIN_WANT_RESPONSE		39
+#define ISTATE_SEND_NOPIN_NO_RESPONSE		40
+#define ISTATE_REMOVE				41
+#define ISTATE_FREE				42
+
+/* Used in struct iscsi_conn->conn_flags */
+#define CONNFLAG_SCTP_STRUCT_FILE		0x01
+
+/* Used for iscsi_recover_cmdsn() return values */
+#define CMDSN_ERROR_CANNOT_RECOVER		-1
+#define CMDSN_NORMAL_OPERATION			0
+#define CMDSN_LOWER_THAN_EXP			1
+#define	CMDSN_HIGHER_THAN_EXP			2
+
+/* Used for iscsi_handle_immediate_data() return values */
+#define IMMEDIDATE_DATA_CANNOT_RECOVER		-1
+#define IMMEDIDATE_DATA_NORMAL_OPERATION	0
+#define IMMEDIDATE_DATA_ERL1_CRC_FAILURE	1
+
+/* Used for iscsi_decide_dataout_action() return values */
+#define DATAOUT_CANNOT_RECOVER			-1
+#define DATAOUT_NORMAL				0
+#define DATAOUT_SEND_R2T			1
+#define DATAOUT_SEND_TO_TRANSPORT		2
+#define DATAOUT_WITHIN_COMMAND_RECOVERY		3
+
+/* Used for struct iscsi_node_auth structure members */
+#define MAX_USER_LEN				256
+#define MAX_PASS_LEN				256
+#define NAF_USERID_SET				0x01
+#define NAF_PASSWORD_SET			0x02
+#define NAF_USERID_IN_SET			0x04
+#define NAF_PASSWORD_IN_SET			0x08
+
+/* Used for struct iscsi_cmd->dataout_timer_flags */
+#define DATAOUT_TF_RUNNING			0x01
+#define DATAOUT_TF_STOP				0x02
+
+/* Used for struct iscsi_conn->netif_timer_flags */
+#define NETIF_TF_RUNNING			0x01
+#define NETIF_TF_STOP				0x02
+
+/* Used for struct iscsi_conn->nopin_timer_flags */
+#define NOPIN_TF_RUNNING			0x01
+#define NOPIN_TF_STOP				0x02
+
+/* Used for struct iscsi_conn->nopin_response_timer_flags */
+#define NOPIN_RESPONSE_TF_RUNNING		0x01
+#define NOPIN_RESPONSE_TF_STOP			0x02
+
+/* Used for struct iscsi_session->time2retain_timer_flags */
+#define T2R_TF_RUNNING				0x01
+#define T2R_TF_STOP				0x02
+#define T2R_TF_EXPIRED				0x04
+
+/* Used for iscsi_tpg_np->tpg_np_login_timer_flags */
+#define TPG_NP_TF_RUNNING			0x01
+#define TPG_NP_TF_STOP				0x02
+
+/* Used for struct iscsi_np->np_flags */
+#define NPF_IP_NETWORK				0x00
+#define NPF_NET_IPV4                            0x01
+#define NPF_NET_IPV6                            0x02
+#define NPF_SCTP_STRUCT_FILE			0x20 /* Bugfix */
+
+/* Used for struct iscsi_np->np_thread_state */
+#define ISCSI_NP_THREAD_ACTIVE			1
+#define ISCSI_NP_THREAD_INACTIVE		2
+#define ISCSI_NP_THREAD_RESET			3
+#define ISCSI_NP_THREAD_SHUTDOWN		4
+#define ISCSI_NP_THREAD_EXIT			5
+
+/* Used for debugging various ERL situations. */
+#define TARGET_ERL_MISSING_CMD_SN			1
+#define TARGET_ERL_MISSING_CMDSN_BATCH			2
+#define TARGET_ERL_MISSING_CMDSN_MIX			3
+#define TARGET_ERL_MISSING_CMDSN_MULTI			4
+#define TARGET_ERL_HEADER_CRC_FAILURE			5
+#define TARGET_ERL_IMMEDIATE_DATA_CRC_FAILURE		6
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE			7
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE_BATCH		8
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE_MIX		9
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE_MULTI		10
+#define TARGET_ERL_DATA_OUT_FAIL			11
+#define TARGET_ERL_DATA_OUT_MISSING			12 /* TODO */
+#define TARGET_ERL_DATA_OUT_MISSING_BATCH		13 /* TODO */
+#define TARGET_ERL_DATA_OUT_MISSING_MIX			14 /* TODO */
+#define TARGET_ERL_DATA_OUT_TIMEOUT			15
+#define TARGET_ERL_FORCE_TX_TRANSPORT_RESET		16
+#define TARGET_ERL_FORCE_RX_TRANSPORT_RESET		17
+
+/*
+ * Threads and timers
+ */
+#define iscsi_daemon(thread, name, sigs)		\
+do {							\
+	daemonize(name);				\
+	current->policy = SCHED_NORMAL;			\
+	set_user_nice(current, -20);			\
+	spin_lock_irq(&current->sighand->siglock);	\
+	siginitsetinv(&current->blocked, (sigs));	\
+	recalc_sigpending();				\
+	(thread) = current;				\
+	spin_unlock_irq(&current->sighand->siglock);	\
+} while (0)
+
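+/*
+ * Timer helpers: MOD_TIMER re-arms an active timer 'exp' seconds from now,
+ * while SETUP_TIMER fills in expires/data/function before add_timer().
+ */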
+#define MOD_TIMER(t, exp) mod_timer(t, (get_jiffies_64() + (exp) * HZ))
+#define SETUP_TIMER(timer, t, d, func)			\
+do {							\
+	(timer).expires	= (get_jiffies_64() + (t) * HZ);\
+	(timer).data	= (unsigned long)(d);		\
+	(timer).function = func;			\
+} while (0)
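+/*
+ * Typical usage, as in iscsi_rx_thread_wait_for_TCP() in iscsi_target.c:
+ *
+ *	init_timer(&rx_TCP_timer);
+ *	SETUP_TIMER(rx_TCP_timer, ISCSI_RX_THREAD_TCP_TIMEOUT,
+ *		&conn->rx_half_close_sem, iscsi_rx_thread_TCP_timeout);
+ *	add_timer(&rx_TCP_timer);
+ */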
+
+struct iscsi_conn_ops {
+	u8	HeaderDigest;			/* [0,1] == [None,CRC32C] */
+	u8	DataDigest;			/* [0,1] == [None,CRC32C] */
+	u32	MaxRecvDataSegmentLength;	/* [512..2**24-1] */
+	u8	OFMarker;			/* [0,1] == [No,Yes] */
+	u8	IFMarker;			/* [0,1] == [No,Yes] */
+	u32	OFMarkInt;			/* [1..65535] */
+	u32	IFMarkInt;			/* [1..65535] */
+};
+
+struct iscsi_sess_ops {
+	char	InitiatorName[224];
+	char	InitiatorAlias[256];
+	char	TargetName[224];
+	char	TargetAlias[256];
+	char	TargetAddress[256];
+	u16	TargetPortalGroupTag;		/* [0..65535] */
+	u16	MaxConnections;			/* [1..65535] */
+	u8	InitialR2T;			/* [0,1] == [No,Yes] */
+	u8	ImmediateData;			/* [0,1] == [No,Yes] */
+	u32	MaxBurstLength;			/* [512..2**24-1] */
+	u32	FirstBurstLength;		/* [512..2**24-1] */
+	u16	DefaultTime2Wait;		/* [0..3600] */
+	u16	DefaultTime2Retain;		/* [0..3600] */
+	u16	MaxOutstandingR2T;		/* [1..65535] */
+	u8	DataPDUInOrder;			/* [0,1] == [No,Yes] */
+	u8	DataSequenceInOrder;		/* [0,1] == [No,Yes] */
+	u8	ErrorRecoveryLevel;		/* [0..2] */
+	u8	SessionType;			/* [0,1] == [Normal,Discovery]*/
+};
+
+struct iscsi_queue_req {
+	int			state;
+	struct se_obj_lun_type_s *queue_se_obj_api;
+	struct iscsi_cmd	*cmd;
+	struct list_head	qr_list;
+} ____cacheline_aligned;
+
+struct iscsi_data_count {
+	int			data_length;
+	int			sync_and_steering;
+	int			type;
+	u32			iov_count;
+	u32			ss_iov_count;
+	u32			ss_marker_count;
+	struct iovec		*iov;
+} ____cacheline_aligned;
+
+struct iscsi_param_list {
+	struct list_head	param_list;
+	struct list_head	extra_response_list;
+} ____cacheline_aligned;
+
+struct iscsi_datain_req {
+	int			dr_complete;
+	int			generate_recovery_values;
+	int			recovery;
+	u32			begrun;
+	u32			runlength;
+	u32			data_length;
+	u32			data_offset;
+	u32			data_offset_end;
+	u32			data_sn;
+	u32			next_burst_len;
+	u32			read_data_done;
+	u32			seq_send_order;
+	struct list_head	dr_list;
+} ____cacheline_aligned;
+
+struct iscsi_ooo_cmdsn {
+	u16			cid;
+	u32			batch_count;
+	u32			cmdsn;
+	u32			exp_cmdsn;
+	struct iscsi_cmd	*cmd;
+	struct list_head	ooo_list;
+} ____cacheline_aligned;
+
+struct iscsi_datain {
+	u8			flags;
+	u32			data_sn;
+	u32			length;
+	u32			offset;
+} ____cacheline_aligned;
+
+struct iscsi_r2t {
+	int			seq_complete;
+	int			recovery_r2t;
+	int			sent_r2t;
+	u32			r2t_sn;
+	u32			offset;
+	u32			targ_xfer_tag;
+	u32			xfer_len;
+	struct list_head	r2t_list;
+} ____cacheline_aligned;
+
+struct iscsi_cmd {
+	u8			dataout_timer_flags;
+	/* DataOUT timeout retries */
+	u8			dataout_timeout_retries;
+	/* Within command recovery count */
+	u8			error_recovery_count;
+	/* iSCSI dependent state for out of order CmdSNs */
+	u8			deferred_i_state;
+	/* iSCSI dependent state */
+	u8			i_state;
+	/* Command is an immediate command (ISCSI_OP_IMMEDIATE set) */
+	u8			immediate_cmd;
+	/* Immediate data present */
+	u8			immediate_data;
+	/* iSCSI Opcode */
+	u8			iscsi_opcode;
+	/* iSCSI Response Code */
+	u8			iscsi_response;
+	/* Logout reason when iscsi_opcode == ISCSI_OP_LOGOUT */
+	u8			logout_reason;
+	/* Logout response code when iscsi_opcode == ISCSI_OP_LOGOUT */
+	u8			logout_response;
+	/* MaxCmdSN has been incremented */
+	u8			maxcmdsn_inc;
+	/* Immediate Unsolicited Dataout */
+	u8			unsolicited_data;
+	/* CID contained in logout PDU when opcode == ISCSI_OP_LOGOUT */
+	u16			logout_cid;
+	/* Command flags */
+	u32			cmd_flags;
+	/* Initiator Task Tag assigned from Initiator */
+	u32 			init_task_tag;
+	/* Target Transfer Tag assigned from Target */
+	u32			targ_xfer_tag;
+	/* CmdSN assigned from Initiator */
+	u32			cmd_sn;
+	/* ExpStatSN assigned from Initiator */
+	u32			exp_stat_sn;
+	/* StatSN assigned to this ITT */
+	u32			stat_sn;
+	/* DataSN Counter */
+	u32			data_sn;
+	/* R2TSN Counter */
+	u32			r2t_sn;
+	/* Last DataSN acknowledged via DataAck SNACK */
+	u32			acked_data_sn;
+	/* Used for echoing NOPOUT ping data */
+	u32			buf_ptr_size;
+	/* Used to store DataDigest */
+	u32			data_crc;
+	/* Total size in bytes associated with command */
+	u32			data_length;
+	/* Counter for MaxOutstandingR2T */
+	u32			outstanding_r2ts;
+	/* Next R2T Offset when DataSequenceInOrder=Yes */
+	u32			r2t_offset;
+	/* Iovec current and orig count for iscsi_cmd->iov_data */
+	u32			iov_data_count;
+	u32			orig_iov_data_count;
+	/* Number of miscellaneous iovecs used for IP stack calls */
+	u32			iov_misc_count;
+	/* Bytes used for 32-bit word padding */
+	u32			pad_bytes;
+	/* Number of struct iscsi_pdu in struct iscsi_cmd->pdu_list */
+	u32			pdu_count;
+	/* Next struct iscsi_pdu to send in struct iscsi_cmd->pdu_list */
+	u32			pdu_send_order;
+	/* Current struct iscsi_pdu in struct iscsi_cmd->pdu_list */
+	u32			pdu_start;
+	u32			residual_count;
+	/* Next struct iscsi_seq to send in struct iscsi_cmd->seq_list */
+	u32			seq_send_order;
+	/* Number of struct iscsi_seq in struct iscsi_cmd->seq_list */
+	u32			seq_count;
+	/* Current struct iscsi_seq in struct iscsi_cmd->seq_list */
+	u32			seq_no;
+	/* Lowest offset in current DataOUT sequence */
+	u32			seq_start_offset;
+	/* Highest offset in current DataOUT sequence */
+	u32			seq_end_offset;
+	/* Total size in bytes received so far of READ data */
+	u32			read_data_done;
+	/* Total size in bytes received so far of WRITE data */
+	u32			write_data_done;
+	/* Counter for FirstBurstLength key */
+	u32			first_burst_len;
+	/* Counter for MaxBurstLength key */
+	u32			next_burst_len;
+	/* Transfer size used for IP stack calls */
+	u32			tx_size;
+	/* Buffer used for various purposes */
+	void			*buf_ptr;
+	/* See include/linux/dma-mapping.h */
+	enum dma_data_direction	data_direction;
+	/* iSCSI PDU Header + CRC */
+	unsigned char		pdu[ISCSI_HDR_LEN + CRC_LEN];
+	/* Number of times struct iscsi_cmd is present in immediate queue */
+	atomic_t		immed_queue_count;
+	atomic_t		response_queue_count;
+	atomic_t		transport_sent;
+	spinlock_t		datain_lock;
+	spinlock_t		dataout_timeout_lock;
+	/* spinlock for protecting struct iscsi_cmd->i_state */
+	spinlock_t		istate_lock;
+	/* spinlock for adding within command recovery entries */
+	spinlock_t		error_lock;
+	/* spinlock for adding R2Ts */
+	spinlock_t		r2t_lock;
+	/* DataIN List */
+	struct list_head	datain_list;
+	/* R2T List */
+	struct list_head	cmd_r2t_list;
+	struct semaphore	reject_sem;
+	/* Semaphore used for allocating buffer */
+	struct semaphore	unsolicited_data_sem;
+	/* Timer for DataOUT */
+	struct timer_list	dataout_timer;
+	/* Iovecs for SCSI data payload RX/TX w/ kernel level sockets */
+	struct iovec		*iov_data;
+	/* Iovecs for miscellaneous purposes */
+	struct iovec		iov_misc[ISCSI_MISC_IOVECS];
+	/* Array of struct iscsi_pdu used for DataPDUInOrder=No */
+	struct iscsi_pdu	*pdu_list;
+	/* Current struct iscsi_pdu used for DataPDUInOrder=No */
+	struct iscsi_pdu	*pdu_ptr;
+	/* Array of struct iscsi_seq used for DataSequenceInOrder=No */
+	struct iscsi_seq	*seq_list;
+	/* Current struct iscsi_seq used for DataSequenceInOrder=No */
+	struct iscsi_seq	*seq_ptr;
+	/* TMR Request when iscsi_opcode == ISCSI_OP_SCSI_TMFUNC */
+	struct iscsi_tmr_req	*tmr_req;
+	/* Connection this command belongs to */
+	struct iscsi_conn 	*conn;
+	/* Pointer to connection recovery entry */
+	struct iscsi_conn_recovery *cr;
+	/* Session the command is part of, used for connection recovery */
+	struct iscsi_session	*sess;
+	/* Next command in the session pool */
+	struct iscsi_cmd	*next;
+	/* list_head for connection list */
+	struct list_head	i_list;
+	/* Next command in DAS transport list */
+	struct iscsi_cmd	*t_next;
+	/* Previous command in DAS transport list */
+	struct iscsi_cmd	*t_prev;
+	/* The TCM I/O descriptor that is accessed via container_of() */
+	struct se_cmd		se_cmd;
+	/* Sense buffer that will be mapped into outgoing status */
+	unsigned char		sense_buffer[ISCSI_SENSE_BUFFER_LEN];
+}  ____cacheline_aligned;
+
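+/* Accessor for the TCM struct se_cmd descriptor embedded in struct iscsi_cmd */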
+#define SE_CMD(cmd)		(&(cmd)->se_cmd)
+
+struct iscsi_tmr_req {
+	bool			task_reassign:1;
+	u32			ref_cmd_sn;
+	u32			exp_data_sn;
+	struct iscsi_conn_recovery *conn_recovery;
+	struct se_tmr_req	*se_tmr_req;
+} ____cacheline_aligned;
+
+struct iscsi_conn {
+	char			net_dev[ISCSI_NETDEV_NAME_SIZE];
+	/* Authentication Successful for this connection */
+	u8			auth_complete;
+	/* State connection is currently in */
+	u8			conn_state;
+	u8			conn_logout_reason;
+	u8			netif_timer_flags;
+	u8			network_transport;
+	u8			nopin_timer_flags;
+	u8			nopin_response_timer_flags;
+	u8			tx_immediate_queue;
+	u8			tx_response_queue;
+	/* Used to know what thread encountered a transport failure */
+	u8			which_thread;
+	/* connection id assigned by the Initiator */
+	u16			cid;
+	/* Remote TCP Port */
+	u16			login_port;
+	int			net_size;
+	u32			auth_id;
+	u32			conn_flags;
+	/* Remote TCP IP address */
+	u32			login_ip;
+	/* Used for iscsi_tx_login_rsp() */
+	u32			login_itt;
+	u32			exp_statsn;
+	/* Per connection status sequence number */
+	u32			stat_sn;
+	/* IFMarkInt's Current Value */
+	u32			if_marker;
+	/* OFMarkInt's Current Value */
+	u32			of_marker;
+	/* Used for calculating OFMarker offset to next PDU */
+	u32			of_marker_offset;
+	/* Complete Bad PDU for sending reject */
+	unsigned char		bad_hdr[ISCSI_HDR_LEN];
+	unsigned char		ipv6_login_ip[IPV6_ADDRESS_SPACE];
+	u16			local_port;
+	u32			local_ip;
+	u32			conn_index;
+	atomic_t		active_cmds;
+	atomic_t		check_immediate_queue;
+	atomic_t		conn_logout_remove;
+	atomic_t		conn_usage_count;
+	atomic_t		conn_waiting_on_uc;
+	atomic_t		connection_exit;
+	atomic_t		connection_recovery;
+	atomic_t		connection_reinstatement;
+	atomic_t		connection_wait;
+	atomic_t		connection_wait_rcfr;
+	atomic_t		sleep_on_conn_wait_sem;
+	atomic_t		transport_failed;
+	struct net_device	*net_if;
+	struct semaphore	conn_post_wait_sem;
+	struct semaphore	conn_wait_sem;
+	struct semaphore	conn_wait_rcfr_sem;
+	struct semaphore	conn_waiting_on_uc_sem;
+	struct semaphore	conn_logout_sem;
+	struct semaphore	rx_half_close_sem;
+	struct semaphore	tx_half_close_sem;
+	/* Semaphore for conn's tx_thread to sleep on */
+	struct semaphore	tx_sem;
+	/* socket used by this connection */
+	struct socket		*sock;
+	struct timer_list	nopin_timer;
+	struct timer_list	nopin_response_timer;
+	struct timer_list	transport_timer;
+	/* Spinlock used for add/deleting cmd's from conn_cmd_list */
+	spinlock_t		cmd_lock;
+	spinlock_t		conn_usage_lock;
+	spinlock_t		immed_queue_lock;
+	spinlock_t		netif_lock;
+	spinlock_t		nopin_timer_lock;
+	spinlock_t		response_queue_lock;
+	spinlock_t		state_lock;
+	/* libcrypto RX and TX contexts for crc32c */
+	struct hash_desc	conn_rx_hash;
+	struct hash_desc	conn_tx_hash;
+	/* Used for scheduling TX and RX connection kthreads */
+	cpumask_var_t		conn_cpumask;
+	unsigned int		conn_rx_reset_cpumask:1;
+	unsigned int		conn_tx_reset_cpumask:1;
+	/* list_head of struct iscsi_cmd for this connection */
+	struct list_head	conn_cmd_list;
+	struct list_head	immed_queue_list;
+	struct list_head	response_queue_list;
+	struct iscsi_conn_ops	*conn_ops;
+	struct iscsi_param_list	*param_list;
+	/* Used for per connection auth state machine */
+	void			*auth_protocol;
+	struct iscsi_login_thread_s *login_thread;
+	struct iscsi_portal_group *tpg;
+	/* Pointer to parent session */
+	struct iscsi_session	*sess;
+	/* Pointer to thread_set in use for this conn's threads */
+	struct se_thread_set	*thread_set;
+	/* list_head for session connection list */
+	struct list_head	conn_list;
+} ____cacheline_aligned;
+
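+/*
+ * Accessors for a command's parent connection and for that connection's
+ * negotiated operational parameters.
+ */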
+#define CONN(cmd)		((struct iscsi_conn *)(cmd)->conn)
+#define CONN_OPS(conn)		((struct iscsi_conn_ops *)(conn)->conn_ops)
+
+struct iscsi_conn_recovery {
+	u16			cid;
+	u32			cmd_count;
+	u32			maxrecvdatasegmentlength;
+	int			ready_for_reallegiance;
+	struct list_head	conn_recovery_cmd_list;
+	spinlock_t		conn_recovery_cmd_lock;
+	struct semaphore		time2wait_sem;
+	struct timer_list		time2retain_timer;
+	struct iscsi_session	*sess;
+	struct list_head	cr_list;
+}  ____cacheline_aligned;
+
+struct iscsi_session {
+	u8			cmdsn_outoforder;
+	u8			initiator_vendor;
+	u8			isid[6];
+	u8			time2retain_timer_flags;
+	u8			version_active;
+	u16			cid_called;
+	u16			conn_recovery_count;
+	u16			tsih;
+	/* state session is currently in */
+	u32			session_state;
+	/* session wide counter: initiator assigned task tag */
+	u32			init_task_tag;
+	/* session wide counter: target assigned task tag */
+	u32			targ_xfer_tag;
+	u32			cmdsn_window;
+	/* session wide counter: expected command sequence number */
+	u32			exp_cmd_sn;
+	/* session wide counter: maximum allowed command sequence number */
+	u32			max_cmd_sn;
+	u32			ooo_cmdsn_count;
+	/* LIO specific session ID */
+	u32			sid;
+	char			auth_type[8];
+	/* unique within the target */
+	u32			session_index;
+	u32			cmd_pdus;
+	u32			rsp_pdus;
+	u64			tx_data_octets;
+	u64			rx_data_octets;
+	u32			conn_digest_errors;
+	u32			conn_timeout_errors;
+	u64			creation_time;
+	spinlock_t		session_stats_lock;
+	/* Number of active connections */
+	atomic_t		nconn;
+	atomic_t		session_continuation;
+	atomic_t		session_fall_back_to_erl0;
+	atomic_t		session_logout;
+	atomic_t		session_reinstatement;
+	atomic_t		session_stop_active;
+	atomic_t		session_usage_count;
+	atomic_t		session_waiting_on_uc;
+	atomic_t		sleep_on_sess_wait_sem;
+	atomic_t		transport_wait_cmds;
+	/* connection list */
+	struct list_head	sess_conn_list;
+	struct list_head	cr_active_list;
+	struct list_head	cr_inactive_list;
+	spinlock_t		cmdsn_lock;
+	spinlock_t		conn_lock;
+	spinlock_t		cr_a_lock;
+	spinlock_t		cr_i_lock;
+	spinlock_t		session_usage_lock;
+	spinlock_t		ttt_lock;
+	struct list_head	sess_ooo_cmdsn_list;
+	struct semaphore	async_msg_sem;
+	struct semaphore	reinstatement_sem;
+	struct semaphore	session_wait_sem;
+	struct semaphore	session_waiting_on_uc_sem;
+	struct timer_list	time2retain_timer;
+	struct iscsi_sess_ops	*sess_ops;
+	struct se_session	*se_sess;
+	struct iscsi_portal_group *tpg;
+} ____cacheline_aligned;
+
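+/*
+ * Accessors for a connection's parent session, the negotiated session-wide
+ * parameters, and the backing struct se_node_acl.
+ */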
+#define SESS(conn)		((struct iscsi_session *)(conn)->sess)
+#define SESS_OPS(sess)		((struct iscsi_sess_ops *)(sess)->sess_ops)
+#define SESS_OPS_C(conn)	((struct iscsi_sess_ops *)(conn)->sess->sess_ops)
+#define SESS_NODE_ACL(sess)	((struct se_node_acl *)(sess)->se_sess->se_node_acl)
+
+struct iscsi_login {
+	u8 auth_complete;
+	u8 checked_for_existing;
+	u8 current_stage;
+	u8 leading_connection;
+	u8 first_request;
+	u8 version_min;
+	u8 version_max;
+	char isid[6];
+	u32 cmd_sn;
+	u32 init_task_tag;
+	u32 initial_exp_statsn;
+	u32 rsp_length;
+	u16 cid;
+	u16 tsih;
+	char *req;
+	char *rsp;
+	char *req_buf;
+	char *rsp_buf;
+} ____cacheline_aligned;
+
+struct iscsi_node_attrib {
+	u32			dataout_timeout;
+	u32			dataout_timeout_retries;
+	u32			default_erl;
+	u32			nopin_timeout;
+	u32			nopin_response_timeout;
+	u32			random_datain_pdu_offsets;
+	u32			random_datain_seq_offsets;
+	u32			random_r2t_offsets;
+	u32			tmr_cold_reset;
+	u32			tmr_warm_reset;
+	struct iscsi_node_acl *nacl;
+} ____cacheline_aligned;
+
+struct se_dev_entry_s;
+
+struct iscsi_node_auth {
+	int			naf_flags;
+	int			authenticate_target;
+	/* Used for iscsi_global->discovery_auth,
+	 * set to zero (auth disabled) by default */
+	int			enforce_discovery_auth;
+	char			userid[MAX_USER_LEN];
+	char			password[MAX_PASS_LEN];
+	char			userid_mutual[MAX_USER_LEN];
+	char			password_mutual[MAX_PASS_LEN];
+} ____cacheline_aligned;
+
+#include "iscsi_target_stat.h"
+
+struct iscsi_node_stat_grps {
+	struct config_group	iscsi_sess_stats_group;
+	struct config_group	iscsi_conn_stats_group;
+};
+
+struct iscsi_node_acl {
+	struct iscsi_node_attrib node_attrib;
+	struct iscsi_node_auth	node_auth;
+	struct iscsi_node_stat_grps node_stat_grps;
+	struct se_node_acl	se_node_acl;
+} ____cacheline_aligned;
+
+#define NODE_STAT_GRPS(nacl)	(&(nacl)->node_stat_grps)
+
+#define ISCSI_NODE_ATTRIB(t)	(&(t)->node_attrib)
+#define ISCSI_NODE_AUTH(t)	(&(t)->node_auth)
+
+struct iscsi_tpg_attrib {
+	u32			authentication;
+	u32			login_timeout;
+	u32			netif_timeout;
+	u32			generate_node_acls;
+	u32			cache_dynamic_acls;
+	u32			default_cmdsn_depth;
+	u32			demo_mode_write_protect;
+	u32			prod_mode_write_protect;
+	/* Used to signal libcrypto crc32-intel offload instruction usage */
+	u32			crc32c_x86_offload;
+	u32			cache_core_nps;
+	struct iscsi_portal_group *tpg;
+}  ____cacheline_aligned;
+
+struct iscsi_np_ex {
+	int			np_ex_net_size;
+	u16			np_ex_port;
+	u32			np_ex_ipv4;
+	unsigned char		np_ex_ipv6[IPV6_ADDRESS_SPACE];
+	struct list_head	np_ex_list;
+} ____cacheline_aligned;
+
+struct iscsi_np {
+	unsigned char		np_net_dev[ISCSI_NETDEV_NAME_SIZE];
+	int			np_network_transport;
+	int			np_thread_state;
+	int			np_login_timer_flags;
+	int			np_net_size;
+	u32			np_exports;
+	u32			np_flags;
+	u32			np_ipv4;
+	unsigned char		np_ipv6[IPV6_ADDRESS_SPACE];
+	u32			np_index;
+	u16			np_port;
+	atomic_t		np_shutdown;
+	spinlock_t		np_ex_lock;
+	spinlock_t		np_state_lock;
+	spinlock_t		np_thread_lock;
+	struct semaphore		np_done_sem;
+	struct semaphore		np_restart_sem;
+	struct semaphore		np_shutdown_sem;
+	struct semaphore		np_start_sem;
+	struct socket		*np_socket;
+	struct task_struct		*np_thread;
+	struct timer_list		np_login_timer;
+	struct iscsi_portal_group *np_login_tpg;
+	struct list_head	np_list;
+	struct list_head	np_nex_list;
+} ____cacheline_aligned;
+
+struct iscsi_tpg_np {
+	u32			tpg_np_index;
+	struct iscsi_np		*tpg_np;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np	*tpg_np_parent;
+	struct list_head	tpg_np_list;
+	struct list_head	tpg_np_child_list;
+	struct list_head	tpg_np_parent_list;
+	struct se_tpg_np	se_tpg_np;
+	spinlock_t		tpg_np_parent_lock;
+} ____cacheline_aligned;
+
+struct iscsi_np_addr {
+	u16		np_port;
+	u32		np_flags;
+	u32		np_ipv4;
+	unsigned char	np_ipv6[IPV6_ADDRESS_SPACE];
+} ____cacheline_aligned;
+
+struct iscsi_portal_group {
+	unsigned char		tpg_chap_id;
+	/* TPG State */
+	u8			tpg_state;
+	/* Target Portal Group Tag */
+	u16			tpgt;
+	/* Id assigned to target sessions */
+	u16			ntsih;
+	/* Number of active sessions */
+	u32			nsessions;
+	/* Number of Network Portals available for this TPG */
+	u32			num_tpg_nps;
+	/* Per TPG LIO specific session ID. */
+	u32			sid;
+	/* Spinlock for adding/removing Network Portals */
+	spinlock_t		tpg_np_lock;
+	spinlock_t		tpg_state_lock;
+	struct se_portal_group tpg_se_tpg;
+	struct semaphore	tpg_access_sem;
+	struct semaphore	np_login_sem;
+	struct iscsi_tpg_attrib	tpg_attrib;
+	/* Pointer to default list of iSCSI parameters for TPG */
+	struct iscsi_param_list	*param_list;
+	struct iscsi_tiqn	*tpg_tiqn;
+	struct list_head 	tpg_gnp_list;
+	struct list_head	tpg_list;
+	struct list_head	g_tpg_list;
+} ____cacheline_aligned;
+
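+/* Accessors for the owning iSCSI Target Portal Group and its embedded members */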
+#define ISCSI_TPG_C(c)		((struct iscsi_portal_group *)(c)->tpg)
+#define ISCSI_TPG_LUN(c, l)  ((iscsi_tpg_list_t *)(c)->tpg->tpg_lun_list_t[l])
+#define ISCSI_TPG_S(s)		((struct iscsi_portal_group *)(s)->tpg)
+#define ISCSI_TPG_ATTRIB(t)	(&(t)->tpg_attrib)
+#define SE_TPG(tpg)		(&(tpg)->tpg_se_tpg)
+
+struct iscsi_wwn_stat_grps {
+	struct config_group	iscsi_stat_group;
+	struct config_group	iscsi_instance_group;
+	struct config_group	iscsi_sess_err_group;
+	struct config_group	iscsi_tgt_attr_group;
+	struct config_group	iscsi_login_stats_group;
+	struct config_group	iscsi_logout_stats_group;
+};
+
+struct iscsi_tiqn {
+	unsigned char		tiqn[ISCSI_TIQN_LEN];
+	int			tiqn_state;
+	u32			tiqn_active_tpgs;
+	u32			tiqn_ntpgs;
+	u32			tiqn_num_tpg_nps;
+	u32			tiqn_nsessions;
+	struct list_head	tiqn_list;
+	struct list_head	tiqn_tpg_list;
+	atomic_t		tiqn_access_count;
+	spinlock_t		tiqn_state_lock;
+	spinlock_t		tiqn_tpg_lock;
+	struct se_wwn		tiqn_wwn;
+	struct iscsi_wwn_stat_grps tiqn_stat_grps;
+	u32			tiqn_index;
+	struct iscsi_sess_err_stats  sess_err_stats;
+	struct iscsi_login_stats     login_stats;
+	struct iscsi_logout_stats    logout_stats;
+} ____cacheline_aligned;
+
+#define WWN_STAT_GRPS(tiqn)	(&(tiqn)->tiqn_stat_grps)
+
+struct iscsi_global {
+	/* iSCSI Node Name */
+	char			targetname[ISCSI_IQN_LEN];
+	/* In module removal */
+	u32			in_rmmod;
+	/* In core shutdown */
+	u32			in_shutdown;
+	/* Is the iSCSI Node name set? */
+	u32			targetname_set;
+	u32			active_ts;
+	/* Unique identifier used for the authentication daemon */
+	u32			auth_id;
+	u32			inactive_ts;
+	/* Thread Set bitmap count */
+	int			ts_bitmap_count;
+	/* Thread Set bitmap pointer */
+	unsigned long		*ts_bitmap;
+	int (*ti_forcechanoffline)(void *);
+	struct list_head	g_tiqn_list;
+	struct list_head	g_tpg_list;
+	struct list_head	tpg_list;
+	struct list_head	g_np_list;
+	spinlock_t		active_ts_lock;
+	spinlock_t		check_thread_lock;
+	/* Spinlock for adding/removing discovery entries */
+	spinlock_t		discovery_lock;
+	spinlock_t		inactive_ts_lock;
+	/* Spinlock for adding/removing login threads */
+	spinlock_t		login_thread_lock;
+	spinlock_t		shutdown_lock;
+	/* Spinlock for adding/removing thread sets */
+	spinlock_t		thread_set_lock;
+	/* Spinlock for iscsi_global->ts_bitmap */
+	spinlock_t		ts_bitmap_lock;
+	/* Spinlock for struct iscsi_tiqn */
+	spinlock_t		tiqn_lock;
+	spinlock_t		g_tpg_lock;
+	/* Spinlock g_np_list */
+	spinlock_t		np_lock;
+	/* Semaphore used for communication to authentication daemon */
+	struct semaphore	auth_sem;
+	/* Semaphore used for allocate of struct iscsi_conn->auth_id */
+	struct semaphore	auth_id_sem;
+	/* Used for iSCSI discovery session authentication */
+	struct iscsi_node_acl	discovery_acl;
+	struct iscsi_portal_group	*discovery_tpg;
+	struct list_head	active_ts_list;
+	struct list_head	inactive_ts_list;
+} ____cacheline_aligned;
+
+#endif /* ISCSI_TARGET_CORE_H */
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 02/12] iscsi-target: Add primary iSCSI request/response state machine logic
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds iscsi_target.[c,h] containing the main iSCSI Request and
Response PDU state machines and accompanying infrastructure code and
base iscsi_target_core.h include for iscsi_target_mod.  This includes
support for all defined iSCSI operation codes from RFC-3720 Section
10.2.1.2 and primary state machines for per struct iscsi_conn RX/TX
threads.

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target.c      | 6043 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target.h      |   49 +
 drivers/target/iscsi/iscsi_target_core.h | 1019 +++++
 3 files changed, 7111 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target.c
 create mode 100644 drivers/target/iscsi/iscsi_target.h
 create mode 100644 drivers/target/iscsi/iscsi_target_core.h

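For reviewers less familiar with the RX side of the code below: the per struct
iscsi_conn RX thread reads each 48-byte Basic Header Segment, masks off the
immediate bit, and dispatches on the RFC-3720 opcode to a per-opcode handler
(iscsi_handle_scsi_cmd() and friends).  A rough sketch of that dispatch follows;
it is illustrative only, and the handler names other than iscsi_handle_scsi_cmd()
are placeholders rather than the exact functions added by this patch:

	u8 opcode = buffer[0] & ISCSI_OPCODE_MASK;

	switch (opcode) {
	case ISCSI_OP_SCSI_CMD:
		ret = iscsi_handle_scsi_cmd(conn, buffer);
		break;
	case ISCSI_OP_SCSI_DATA_OUT:
		ret = iscsi_handle_data_out(conn, buffer);
		break;
	case ISCSI_OP_NOOP_OUT:
		ret = iscsi_handle_nop_out(conn, buffer);
		break;
	case ISCSI_OP_SCSI_TMFUNC:
		ret = iscsi_handle_task_mgt_cmd(conn, buffer);
		break;
	case ISCSI_OP_LOGOUT:
		ret = iscsi_handle_logout_cmd(conn, buffer);
		break;
	default:
		/* Unknown opcode: reject the PDU and fail the connection. */
		ret = iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
				buffer, conn);
		break;
	}
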
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
new file mode 100644
index 0000000..99115db
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -0,0 +1,6043 @@
+/*******************************************************************************
+ * This file contains main functions related to the iSCSI Target Core Driver.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/version.h>
+#include <generated/utsrelease.h>
+#include <linux/utsname.h>
+#include <linux/kmod.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/crypto.h>
+#include <asm/unaligned.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_tmr.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_thread_queue.h"
+#include "iscsi_target_configfs.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_tmr.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_stat.h"
+
+struct iscsi_global *iscsi_global;
+
+struct kmem_cache *lio_cmd_cache;
+struct kmem_cache *lio_sess_cache;
+struct kmem_cache *lio_conn_cache;
+struct kmem_cache *lio_qr_cache;
+struct kmem_cache *lio_dr_cache;
+struct kmem_cache *lio_ooo_cache;
+struct kmem_cache *lio_r2t_cache;
+struct kmem_cache *lio_tpg_cache;
+
+static void iscsi_rx_thread_wait_for_TCP(struct iscsi_conn *);
+
+static int iscsi_target_detect(void);
+static int iscsi_target_release(void);
+static int iscsi_handle_immediate_data(struct iscsi_cmd *,
+			unsigned char *buf, __u32);
+static inline int iscsi_send_data_in(struct iscsi_cmd *, struct iscsi_conn *,
+			struct se_unmap_sg *, int *);
+static inline int iscsi_send_logout_response(struct iscsi_cmd *, struct iscsi_conn *);
+static inline int iscsi_send_nopin_response(struct iscsi_cmd *, struct iscsi_conn *);
+static inline int iscsi_send_status(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_send_task_mgt_rsp(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_send_text_rsp(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_send_reject(struct iscsi_cmd *, struct iscsi_conn *);
+static int iscsi_logout_post_handler(struct iscsi_cmd *, struct iscsi_conn *);
+
+struct iscsi_tiqn *core_get_tiqn_for_login(unsigned char *buf)
+{
+	struct iscsi_tiqn *tiqn = NULL;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		if (!(strcmp(tiqn->tiqn, buf))) {
+
+			spin_lock(&tiqn->tiqn_state_lock);
+			if (tiqn->tiqn_state == TIQN_STATE_ACTIVE) {
+				atomic_inc(&tiqn->tiqn_access_count);
+				spin_unlock(&tiqn->tiqn_state_lock);
+				spin_unlock(&iscsi_global->tiqn_lock);
+				return tiqn;
+			}
+			spin_unlock(&tiqn->tiqn_state_lock);
+		}
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	return NULL;
+}
+
+static int core_set_tiqn_shutdown(struct iscsi_tiqn *tiqn)
+{
+	spin_lock(&tiqn->tiqn_state_lock);
+	if (tiqn->tiqn_state == TIQN_STATE_ACTIVE) {
+		tiqn->tiqn_state = TIQN_STATE_SHUTDOWN;
+		spin_unlock(&tiqn->tiqn_state_lock);
+		return 0;
+	}
+	spin_unlock(&tiqn->tiqn_state_lock);
+
+	return -1;
+}
+
+void core_put_tiqn_for_login(struct iscsi_tiqn *tiqn)
+{
+	spin_lock(&tiqn->tiqn_state_lock);
+	atomic_dec(&tiqn->tiqn_access_count);
+	spin_unlock(&tiqn->tiqn_state_lock);
+	return;
+}
+
+/*
+ * Note that IQN formatting is expected to be done in userspace, and
+ * no explicit IQN format checks are done here.
+ */
+struct iscsi_tiqn *core_add_tiqn(unsigned char *buf, int *ret)
+{
+	struct iscsi_tiqn *tiqn = NULL;
+
+	if (strlen(buf) > ISCSI_TIQN_LEN) {
+		printk(KERN_ERR "Target IQN exceeds %d bytes\n",
+				ISCSI_TIQN_LEN);
+		*ret = -1;
+		return NULL;
+	}
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		if (!(strcmp(tiqn->tiqn, buf))) {
+			printk(KERN_ERR "Target IQN: %s already exists in Core\n",
+				tiqn->tiqn);
+			spin_unlock(&iscsi_global->tiqn_lock);
+			*ret = -1;
+			return NULL;
+		}
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	tiqn = kzalloc(sizeof(struct iscsi_tiqn), GFP_KERNEL);
+	if (!(tiqn)) {
+		printk(KERN_ERR "Unable to allocate struct iscsi_tiqn\n");
+		*ret = -1;
+		return NULL;
+	}
+
+	sprintf(tiqn->tiqn, "%s", buf);
+	INIT_LIST_HEAD(&tiqn->tiqn_list);
+	INIT_LIST_HEAD(&tiqn->tiqn_tpg_list);
+	spin_lock_init(&tiqn->tiqn_state_lock);
+	spin_lock_init(&tiqn->tiqn_tpg_lock);
+	spin_lock_init(&tiqn->sess_err_stats.lock);
+	spin_lock_init(&tiqn->login_stats.lock);
+	spin_lock_init(&tiqn->logout_stats.lock);
+	tiqn->tiqn_index = iscsi_get_new_index(ISCSI_INST_INDEX);
+	tiqn->tiqn_state = TIQN_STATE_ACTIVE;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_add_tail(&tiqn->tiqn_list, &iscsi_global->g_tiqn_list);
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	printk(KERN_INFO "CORE[0] - Added iSCSI Target IQN: %s\n", tiqn->tiqn);
+
+	return tiqn;
+
+}
+
+int __core_del_tiqn(struct iscsi_tiqn *tiqn)
+{
+	iscsi_disable_tpgs(tiqn);
+	iscsi_remove_tpgs(tiqn);
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_del(&tiqn->tiqn_list);
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	printk(KERN_INFO "CORE[0] - Deleted iSCSI Target IQN: %s\n",
+			tiqn->tiqn);
+	kfree(tiqn);
+
+	return 0;
+}
+
+static void core_wait_for_tiqn(struct iscsi_tiqn *tiqn)
+{
+	/*
+	 * Wait for accesses to said struct iscsi_tiqn to end.
+	 */
+	spin_lock(&tiqn->tiqn_state_lock);
+	while (atomic_read(&tiqn->tiqn_access_count)) {
+		spin_unlock(&tiqn->tiqn_state_lock);
+		msleep(10);
+		spin_lock(&tiqn->tiqn_state_lock);
+	}
+	spin_unlock(&tiqn->tiqn_state_lock);
+}
+
+int core_del_tiqn(struct iscsi_tiqn *tiqn)
+{
+	/*
+	 * core_set_tiqn_shutdown sets tiqn->tiqn_state = TIQN_STATE_SHUTDOWN
+	 * while holding tiqn->tiqn_state_lock.  This means that all subsequent
+	 * attempts to access this struct iscsi_tiqn will fail from both transport
+	 * fabric and control code paths.
+	 */
+	if (core_set_tiqn_shutdown(tiqn) < 0) {
+		printk(KERN_ERR "core_set_tiqn_shutdown() failed\n");
+		return -1;
+	}
+
+	core_wait_for_tiqn(tiqn);
+	return __core_del_tiqn(tiqn);
+}
+
+int core_release_tiqns(void)
+{
+	struct iscsi_tiqn *tiqn, *t_tiqn;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry_safe(tiqn, t_tiqn,
+			&iscsi_global->g_tiqn_list, tiqn_list) {
+
+		spin_lock(&tiqn->tiqn_state_lock);
+		if (tiqn->tiqn_state == TIQN_STATE_ACTIVE) {
+			tiqn->tiqn_state = TIQN_STATE_SHUTDOWN;
+			spin_unlock(&tiqn->tiqn_state_lock);
+			spin_unlock(&iscsi_global->tiqn_lock);
+
+			core_wait_for_tiqn(tiqn);
+			__core_del_tiqn(tiqn);
+
+			spin_lock(&iscsi_global->tiqn_lock);
+			continue;
+		}
+		spin_unlock(&tiqn->tiqn_state_lock);
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	return 0;
+}
+
+int core_access_np(struct iscsi_np *np, struct iscsi_portal_group *tpg)
+{
+	int ret;
+	/*
+	 * Determine if the network portal is accepting storage traffic.
+	 */
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return -1;
+	}
+	if (np->np_login_tpg) {
+		printk(KERN_ERR "np->np_login_tpg is not NULL!\n");
+		spin_unlock_bh(&np->np_thread_lock);
+		return -1;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+	/*
+	 * Determine if the portal group is accepting storage traffic.
+	 */
+	spin_lock_bh(&tpg->tpg_state_lock);
+	if (tpg->tpg_state != TPG_STATE_ACTIVE) {
+		spin_unlock_bh(&tpg->tpg_state_lock);
+		return -1;
+	}
+	spin_unlock_bh(&tpg->tpg_state_lock);
+
+	/*
+	 * Here we serialize access across the TIQN+TPG Tuple.
+	 */
+	ret = down_interruptible(&tpg->np_login_sem);
+	if ((ret != 0) || signal_pending(current))
+		return -1;
+
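+	/*
+	 * Re-check the TPG state now that np_login_sem is held; the portal
+	 * group may have been shut down while this context was sleeping.
+	 */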
+	spin_lock_bh(&tpg->tpg_state_lock);
+	if (tpg->tpg_state != TPG_STATE_ACTIVE) {
+		spin_unlock_bh(&tpg->tpg_state_lock);
+		return -1;
+	}
+	spin_unlock_bh(&tpg->tpg_state_lock);
+
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_login_tpg = tpg;
+	spin_unlock_bh(&np->np_thread_lock);
+
+	return 0;
+}
+
+int core_deaccess_np(struct iscsi_np *np, struct iscsi_portal_group *tpg)
+{
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_login_tpg = NULL;
+	spin_unlock_bh(&np->np_thread_lock);
+
+	up(&tpg->np_login_sem);
+
+	if (tiqn)
+		core_put_tiqn_for_login(tiqn);
+
+	return 0;
+}
+
+void *core_get_np_ip(struct iscsi_np *np)
+{
+	return (np->np_flags & NPF_NET_IPV6) ?
+	       (void *)&np->np_ipv6[0] :
+	       (void *)&np->np_ipv4;
+}
+
+struct iscsi_np *core_get_np(
+	void *ip,
+	u16 port,
+	int network_transport)
+{
+	struct iscsi_np *np;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry(np, &iscsi_global->g_np_list, np_list) {
+		spin_lock(&np->np_state_lock);
+		if (atomic_read(&np->np_shutdown)) {
+			spin_unlock(&np->np_state_lock);
+			continue;
+		}
+		spin_unlock(&np->np_state_lock);
+
+		if (!(memcmp(core_get_np_ip(np), ip, np->np_net_size)) &&
+		    (np->np_port == port) &&
+		    (np->np_network_transport == network_transport)) {
+			spin_unlock(&iscsi_global->np_lock);
+			return np;
+		}
+	}
+	spin_unlock(&iscsi_global->np_lock);
+
+	return NULL;
+}
+
+void *core_get_np_ex_ip(struct iscsi_np_ex *np_ex)
+{
+	return (np_ex->np_ex_net_size == IPV6_ADDRESS_SPACE) ?
+	       (void *)&np_ex->np_ex_ipv6 :
+	       (void *)&np_ex->np_ex_ipv4;
+}
+
+int core_del_np_ex(
+	struct iscsi_np *np,
+	void *ip_ex,
+	u16 port_ex,
+	int network_transport)
+{
+	struct iscsi_np_ex *np_ex, *np_ex_t;
+
+	spin_lock(&np->np_ex_lock);
+	list_for_each_entry_safe(np_ex, np_ex_t, &np->np_nex_list, np_ex_list) {
+		if (!(memcmp(core_get_np_ex_ip(np_ex), ip_ex,
+				np_ex->np_ex_net_size)) &&
+				(np_ex->np_ex_port == port_ex)) {
+			__core_del_np_ex(np, np_ex);
+			spin_unlock(&np->np_ex_lock);
+			return 0;
+		}
+	}
+	spin_unlock(&np->np_ex_lock);
+
+	return -1;
+}
+
+int core_add_np_ex(
+	struct iscsi_np *np,
+	void *ip_ex,
+	u16 port_ex,
+	int net_size)
+{
+	struct iscsi_np_ex *np_ex;
+	unsigned char *ip_buf = NULL, *ip_ex_buf = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf_ipv4_ex[IPV4_BUF_SIZE];
+	u32 ip_ex_ipv4;
+
+	np_ex = kzalloc(sizeof(struct iscsi_np_ex), GFP_KERNEL);
+	if (!(np_ex)) {
+		printk(KERN_ERR "struct iscsi_np_ex memory allocate failed!\n");
+		return -1;
+	}
+
+	if (net_size == IPV6_ADDRESS_SPACE) {
+		ip_buf = (unsigned char *)&np->np_ipv6[0];
+		ip_ex_buf = ip_ex;
+		snprintf(np_ex->np_ex_ipv6, IPV6_ADDRESS_SPACE,
+				"%s", ip_ex_buf);
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		memset(buf_ipv4_ex, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		memcpy((void *)&ip_ex_ipv4, ip_ex, 4);
+		iscsi_ntoa2(buf_ipv4_ex, ip_ex_ipv4);
+		ip_buf = &buf_ipv4[0];
+		ip_ex_buf = &buf_ipv4_ex[0];
+
+		memcpy((void *)&np_ex->np_ex_ipv4, ip_ex, IPV4_ADDRESS_SPACE);
+	}
+
+	np_ex->np_ex_port = port_ex;
+	np_ex->np_ex_net_size = net_size;
+	INIT_LIST_HEAD(&np_ex->np_ex_list);
+	spin_lock_init(&np->np_ex_lock);
+
+	spin_lock(&np->np_ex_lock);
+	list_add_tail(&np_ex->np_ex_list, &np->np_nex_list);
+	spin_unlock(&np->np_ex_lock);
+
+	printk(KERN_INFO "CORE[0] - Added Network Portal: Internal %s:%hu"
+		" External %s:%hu on %s on network device: %s\n", ip_buf,
+		np->np_port, ip_ex_buf, port_ex,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", strlen(np->np_net_dev) ?
+			(char *)np->np_net_dev : "None");
+
+	return 0;
+}
+
+/*
+ * Called with struct iscsi_np->np_ex_lock held.
+ */
+int __core_del_np_ex(
+	struct iscsi_np *np,
+	struct iscsi_np_ex *np_ex)
+{
+	unsigned char *ip_buf = NULL, *ip_ex_buf = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf_ipv4_ex[IPV4_BUF_SIZE];
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		ip_buf = (unsigned char *)&np->np_ipv6[0];
+		ip_ex_buf = (unsigned char *)&np_ex->np_ex_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		memset(buf_ipv4_ex, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		iscsi_ntoa2(buf_ipv4_ex, np_ex->np_ex_ipv4);
+		ip_buf = &buf_ipv4[0];
+		ip_ex_buf = &buf_ipv4_ex[0];
+	}
+
+	list_del(&np_ex->np_ex_list);
+
+	printk(KERN_INFO "CORE[0] - Removed Network Portal: Internal %s:%hu"
+		" External %s:%hu on %s on network device: %s\n",
+		ip_buf, np->np_port, ip_ex_buf, np_ex->np_ex_port,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", strlen(np->np_net_dev) ?
+			(char *)np->np_net_dev : "None");
+	kfree(np_ex);
+
+	return 0;
+}
+
+void core_del_np_all_ex(
+	struct iscsi_np *np)
+{
+	struct iscsi_np_ex *np_ex, *np_ex_t;
+
+	spin_lock(&np->np_ex_lock);
+	list_for_each_entry_safe(np_ex, np_ex_t, &np->np_nex_list, np_ex_list)
+		__core_del_np_ex(np, np_ex);
+	spin_unlock(&np->np_ex_lock);
+}
+
+static struct iscsi_np *core_add_np_locate(
+	void *ip,
+	void *ip_ex,
+	unsigned char *ip_buf,
+	unsigned char *ip_ex_buf,
+	u16 port,
+	u16 port_ex,
+	int network_transport,
+	int net_size,
+	int *ret)
+{
+	struct iscsi_np *np;
+	struct iscsi_np_ex *np_ex;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry(np, &iscsi_global->g_np_list, np_list) {
+		spin_lock(&np->np_state_lock);
+		if (atomic_read(&np->np_shutdown)) {
+			spin_unlock(&np->np_state_lock);
+			continue;
+		}
+		spin_unlock(&np->np_state_lock);
+
+		if (!(memcmp(core_get_np_ip(np), ip, np->np_net_size)) &&
+		    (np->np_port == port) &&
+		    (np->np_network_transport == network_transport)) {
+			if (!ip_ex && !port_ex) {
+				printk(KERN_ERR "Network Portal %s:%hu on %s"
+					" already exists, ignoring request.\n",
+					ip_buf, port,
+					(network_transport == ISCSI_TCP) ?
+					"TCP" : "SCTP");
+				spin_unlock(&iscsi_global->np_lock);
+				*ret = -EEXIST;
+				return NULL;
+			}
+
+			spin_lock(&np->np_ex_lock);
+			list_for_each_entry(np_ex, &np->np_nex_list,
+					np_ex_list) {
+				if (!(memcmp(core_get_np_ex_ip(np_ex), ip_ex,
+				     np_ex->np_ex_net_size)) &&
+				    (np_ex->np_ex_port == port_ex)) {
+					printk(KERN_ERR "Network Portal Inter"
+						"nal: %s:%hu External: %s:%hu"
+						" on %s, ignoring request.\n",
+						ip_buf, port,
+						ip_ex_buf, port_ex,
+						(network_transport == ISCSI_TCP)
+							? "TCP" : "SCTP");
+					spin_unlock(&np->np_ex_lock);
+					spin_unlock(&iscsi_global->np_lock);
+					*ret = -EEXIST;
+					return NULL;
+				}
+			}
+			spin_unlock(&np->np_ex_lock);
+			spin_unlock(&iscsi_global->np_lock);
+
+			*ret = core_add_np_ex(np, ip_ex, port_ex,
+						net_size);
+			if (*ret < 0)
+				return NULL;
+
+			*ret = 0;
+			return np;
+		}
+	}
+	spin_unlock(&iscsi_global->np_lock);
+
+	*ret = 0;
+
+	return NULL;
+}
+
+struct iscsi_np *core_add_np(
+	struct iscsi_np_addr *np_addr,
+	int network_transport,
+	int *ret)
+{
+	struct iscsi_np *np;
+	char *ip_buf = NULL;
+	void *ip;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+	int net_size;
+
+	if (np_addr->np_flags & NPF_NET_IPV6) {
+		ip_buf = &np_addr->np_ipv6[0];
+		ip = (void *)&np_addr->np_ipv6[0];
+		net_size = IPV6_ADDRESS_SPACE;
+	} else {
+		ip = (void *)&np_addr->np_ipv4;
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np_addr->np_ipv4);
+		ip_buf = &buf_ipv4[0];
+		net_size = IPV4_ADDRESS_SPACE;
+	}
+
+	np = core_add_np_locate(ip, NULL, ip_buf, NULL, np_addr->np_port,
+			0, network_transport, net_size, ret);
+	if ((np))
+		return np;
+
+	if (*ret != 0) {
+		*ret = -EINVAL;
+		return NULL;
+	}
+
+	np = kzalloc(sizeof(struct iscsi_np), GFP_KERNEL);
+	if (!(np)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_np\n");
+		*ret = -ENOMEM;
+		return NULL;
+	}
+
+	np->np_flags |= NPF_IP_NETWORK;
+	if (np_addr->np_flags & NPF_NET_IPV6) {
+		np->np_flags |= NPF_NET_IPV6;
+		memcpy(np->np_ipv6, np_addr->np_ipv6, IPV6_ADDRESS_SPACE);
+	} else {
+		np->np_flags |= NPF_NET_IPV4;
+		np->np_ipv4 = np_addr->np_ipv4;
+	}
+	np->np_port		= np_addr->np_port;
+	np->np_network_transport = network_transport;
+	np->np_net_size		= net_size;
+	np->np_index		= iscsi_get_new_index(ISCSI_PORTAL_INDEX);
+	atomic_set(&np->np_shutdown, 0);
+	spin_lock_init(&np->np_state_lock);
+	spin_lock_init(&np->np_thread_lock);
+	spin_lock_init(&np->np_ex_lock);
+	sema_init(&np->np_done_sem, 0);
+	sema_init(&np->np_restart_sem, 0);
+	sema_init(&np->np_shutdown_sem, 0);
+	sema_init(&np->np_start_sem, 0);
+	INIT_LIST_HEAD(&np->np_list);
+	INIT_LIST_HEAD(&np->np_nex_list);
+
+	kernel_thread(iscsi_target_login_thread, np, 0);
+
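+	/*
+	 * Wait for the newly spawned login thread to signal startup before
+	 * checking np_thread_state below.
+	 */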
+	down(&np->np_start_sem);
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
+		spin_unlock_bh(&np->np_thread_lock);
+		printk(KERN_ERR "Unable to start login thread for iSCSI Network"
+			" Portal %s:%hu\n", ip_buf, np->np_port);
+		kfree(np);
+		*ret = -EADDRINUSE;
+		return NULL;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	spin_lock(&iscsi_global->np_lock);
+	list_add_tail(&np->np_list, &iscsi_global->g_np_list);
+	spin_unlock(&iscsi_global->np_lock);
+
+	printk(KERN_INFO "CORE[0] - Added Network Portal: %s:%hu on %s on"
+		" network device: %s\n", ip_buf, np->np_port,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	*ret = 0;
+	return np;
+}
+
+int core_reset_np_thread(
+	struct iscsi_np *np,
+	struct iscsi_tpg_np *tpg_np,
+	struct iscsi_portal_group *tpg,
+	int shutdown)
+{
+	spin_lock_bh(&np->np_thread_lock);
+	if (tpg && tpg_np) {
+		/*
+		 * The reset operation need only be performed when the
+		 * passed struct iscsi_portal_group has a login in progress
+		 * to one of the network portals.
+		 */
+		if (tpg_np->tpg_np->np_login_tpg != tpg) {
+			spin_unlock_bh(&np->np_thread_lock);
+			return 0;
+		}
+	}
+	if (np->np_thread_state == ISCSI_NP_THREAD_INACTIVE) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return 0;
+	}
+
+	np->np_thread_state = ISCSI_NP_THREAD_RESET;
+	if (shutdown)
+		atomic_set(&np->np_shutdown, 1);
+
+	if (np->np_thread) {
+		spin_unlock_bh(&np->np_thread_lock);
+		send_sig(SIGKILL, np->np_thread, 1);
+		down(&np->np_restart_sem);
+		spin_lock_bh(&np->np_thread_lock);
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	return 0;
+}
+
+int core_del_np_thread(struct iscsi_np *np)
+{
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_thread_state = ISCSI_NP_THREAD_SHUTDOWN;
+	atomic_set(&np->np_shutdown, 1);
+	if (np->np_thread) {
+		send_sig(SIGKILL, np->np_thread, 1);
+		spin_unlock_bh(&np->np_thread_lock);
+		up(&np->np_shutdown_sem);
+		down(&np->np_done_sem);
+		return 0;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	return 0;
+}
+
+int core_del_np_comm(struct iscsi_np *np)
+{
+	if (!np->np_socket)
+		return 0;
+
+	/*
+	 * Some network transports set their own FILEIO, see
+	 * if we need to free any additional allocated resources.
+	 */
+	if (np->np_flags & NPF_SCTP_STRUCT_FILE) {
+		kfree(np->np_socket->file);
+		np->np_socket->file = NULL;
+	}
+
+	sock_release(np->np_socket);
+	return 0;
+}
+
+int core_del_np(struct iscsi_np *np)
+{
+	unsigned char *ip = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+
+	core_del_np_thread(np);
+	core_del_np_comm(np);
+	core_del_np_all_ex(np);
+
+	spin_lock(&iscsi_global->np_lock);
+	list_del(&np->np_list);
+	spin_unlock(&iscsi_global->np_lock);
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		ip = &np->np_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+	}
+
+	printk(KERN_INFO "CORE[0] - Removed Network Portal: %s:%hu on %s on"
+		" network device: %s\n", ip, np->np_port,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP",  (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	kfree(np);
+	return 0;
+}
+
+void core_reset_nps(void)
+{
+	struct iscsi_np *np, *t_np;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry_safe(np, t_np, &iscsi_global->g_np_list, np_list) {
+		spin_unlock(&iscsi_global->np_lock);
+		core_reset_np_thread(np, NULL, NULL, 1);
+		spin_lock(&iscsi_global->np_lock);
+	}
+	spin_unlock(&iscsi_global->np_lock);
+}
+
+void core_release_nps(void)
+{
+	struct iscsi_np *np, *t_np;
+
+	spin_lock(&iscsi_global->np_lock);
+	list_for_each_entry_safe(np, t_np, &iscsi_global->g_np_list, np_list) {
+		spin_unlock(&iscsi_global->np_lock);
+		core_del_np(np);
+		spin_lock(&iscsi_global->np_lock);
+	}
+	spin_unlock(&iscsi_global->np_lock);
+}
+
+/* iSCSI mib table index for iscsi_target_stat.c */
+struct iscsi_index_table iscsi_index_table;
+
+/*
+ * Initialize the index table for allocating unique row indexes to various mib
+ * tables
+ */
+static void init_iscsi_index_table(void)
+{
+	memset(&iscsi_index_table, 0, sizeof(iscsi_index_table));
+	spin_lock_init(&iscsi_index_table.lock);
+}
+
+/*
+ * Allocate a new row index for the entry type specified
+ */
+u32 iscsi_get_new_index(iscsi_index_t type)
+{
+	u32 new_index;
+
+	if ((type < 0) || (type >= INDEX_TYPE_MAX)) {
+		printk(KERN_ERR "Invalid index type %d\n", type);
+		return -1;
+	}
+
+	spin_lock(&iscsi_index_table.lock);
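+	/* Skip index 0 on u32 wraparound; 0 is never returned as a row index. */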
+	new_index = ++iscsi_index_table.iscsi_mib_index[type];
+	if (new_index == 0)
+		new_index = ++iscsi_index_table.iscsi_mib_index[type];
+	spin_unlock(&iscsi_index_table.lock);
+
+	return new_index;
+}
+
+/*	init_iscsi_global():
+ *
+ * This function is called during module initialization to set up struct iscsi_global.
+ */
+static int init_iscsi_global(struct iscsi_global *global)
+{
+	memset(global, 0, sizeof(struct iscsi_global));
+	sema_init(&global->auth_sem, 1);
+	sema_init(&global->auth_id_sem, 1);
+	spin_lock_init(&global->active_ts_lock);
+	spin_lock_init(&global->check_thread_lock);
+	spin_lock_init(&global->discovery_lock);
+	spin_lock_init(&global->inactive_ts_lock);
+	spin_lock_init(&global->login_thread_lock);
+	spin_lock_init(&global->np_lock);
+	spin_lock_init(&global->shutdown_lock);
+	spin_lock_init(&global->tiqn_lock);
+	spin_lock_init(&global->ts_bitmap_lock);
+	spin_lock_init(&global->g_tpg_lock);
+	INIT_LIST_HEAD(&global->g_tiqn_list);
+	INIT_LIST_HEAD(&global->g_tpg_list);
+	INIT_LIST_HEAD(&global->g_np_list);
+	INIT_LIST_HEAD(&global->active_ts_list);
+	INIT_LIST_HEAD(&global->inactive_ts_list);
+
+	return 0;
+}
+
+static int default_targetname_seq_show(struct seq_file *m, void *p)
+{
+	if (iscsi_global->targetname_set)
+		seq_printf(m, "iSCSI TargetName: %s\n",
+				iscsi_global->targetname);
+
+	return 0;
+}
+
+static int version_info_seq_show(struct seq_file *m, void *p)
+{
+	seq_printf(m, "%s iSCSI Target Core Stack "ISCSI_VERSION" on"
+		" %s/%s on "UTS_RELEASE"\n", ISCSI_VENDOR,
+		utsname()->sysname, utsname()->machine);
+
+	return 0;
+}
+
+static int default_targetname_seq_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, default_targetname_seq_show, PDE(inode)->data);
+}
+
+static const struct file_operations default_targetname = {
+	.open		= default_targetname_seq_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static int version_info_seq_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, version_info_seq_show, PDE(inode)->data);
+}
+
+static const struct file_operations version_info = {
+	.open		= version_info_seq_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+/*	iscsi_target_detect():
+ *
+ *	This function is called upon module_init and does the following
+ *	actions in said order:
+ *
+ *	0) Allocates and initializes the struct iscsi_global structure.
+ *	1) Registers the iSCSI target ConfigFS fabric module.
+ *	2) Allocates the thread sets used by per struct iscsi_conn RX/TX threads.
+ *	3) Creates lookaside caches for struct iscsi_cmd, struct iscsi_session,
+ *	   struct iscsi_conn and related per connection structures.
+ *	4) Loads the default discovery struct iscsi_portal_group.
+ *
+ *	Parameters:	Nothing.
+ *	Returns:	0 on success, -1 on error.
+ */
+/*	FIXME:  getaddrinfo for IPv6 will go here.
+ */
+static int iscsi_target_detect(void)
+{
+	int ret = 0;
+
+	printk(KERN_INFO "%s iSCSI Target Core Stack "ISCSI_VERSION" on"
+		" %s/%s on "UTS_RELEASE"\n", ISCSI_VENDOR,
+		utsname()->sysname, utsname()->machine);
+	/*
+	 * Clear out the struct kmem_cache pointers
+	 */
+	lio_cmd_cache = NULL;
+	lio_sess_cache = NULL;
+	lio_conn_cache = NULL;
+	lio_qr_cache = NULL;
+	lio_dr_cache = NULL;
+	lio_ooo_cache = NULL;
+	lio_r2t_cache = NULL;
+	lio_tpg_cache = NULL;
+
+	iscsi_global = kzalloc(sizeof(struct iscsi_global), GFP_KERNEL);
+	if (!(iscsi_global)) {
+		printk(KERN_ERR "Unable to allocate memory for iscsi_global\n");
+		return -1;
+	}
+	init_iscsi_index_table();
+
+	if (init_iscsi_global(iscsi_global) < 0) {
+		kfree(iscsi_global);
+		return -1;
+	}
+
+	iscsi_target_register_configfs();
+	iscsi_thread_set_init();
+
+	if (iscsi_allocate_thread_sets(TARGET_THREAD_SET_COUNT) !=
+			TARGET_THREAD_SET_COUNT) {
+		printk(KERN_ERR "iscsi_allocate_thread_sets() returned"
+			" unexpected value!\n");
+		ret = -1;
+		goto out;
+	}
+
+	lio_cmd_cache = kmem_cache_create("lio_cmd_cache",
+			sizeof(struct iscsi_cmd), __alignof__(struct iscsi_cmd),
+			0, NULL);
+	if (!(lio_cmd_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_cmd_cache\n");
+		goto out;
+	}
+
+	lio_sess_cache = kmem_cache_create("lio_sess_cache",
+			sizeof(struct iscsi_session), __alignof__(struct iscsi_session),
+			0, NULL);
+	if (!(lio_sess_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_sess_cache\n");
+		goto out;
+	}
+
+	lio_conn_cache = kmem_cache_create("lio_conn_cache",
+			sizeof(struct iscsi_conn), __alignof__(struct iscsi_conn),
+			0, NULL);
+	if (!(lio_conn_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_conn_cache\n");
+		goto out;
+	}
+
+	lio_qr_cache = kmem_cache_create("lio_qr_cache",
+			sizeof(struct iscsi_queue_req),
+			__alignof__(struct iscsi_queue_req), 0, NULL);
+	if (!(lio_qr_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_qr_cache\n");
+		goto out;
+	}
+
+	lio_dr_cache = kmem_cache_create("lio_dr_cache",
+			sizeof(struct iscsi_datain_req),
+			__alignof__(struct iscsi_datain_req), 0, NULL);
+	if (!(lio_dr_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_dr_cache\n");
+		goto out;
+	}
+
+	lio_ooo_cache = kmem_cache_create("lio_ooo_cache",
+			sizeof(struct iscsi_ooo_cmdsn),
+			__alignof__(struct iscsi_ooo_cmdsn), 0, NULL);
+	if (!(lio_ooo_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_ooo_cache\n");
+		goto out;
+	}
+
+	lio_r2t_cache = kmem_cache_create("lio_r2t_cache",
+			sizeof(struct iscsi_r2t), __alignof__(struct iscsi_r2t),
+			0, NULL);
+	if (!(lio_r2t_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+				" lio_r2t_cache\n");
+		goto out;
+	}
+
+	lio_tpg_cache = kmem_cache_create("lio_tpg_cache",
+			sizeof(struct iscsi_portal_group),
+			__alignof__(struct iscsi_portal_group),
+			0, NULL);
+	if (!(lio_tpg_cache)) {
+		printk(KERN_ERR "Unable to kmem_cache_create() for"
+			" struct iscsi_portal_group\n");
+		goto out;
+	}
+
+	if (core_load_discovery_tpg() < 0)
+		goto out;
+
+	printk(KERN_INFO "Loading Complete.\n");
+
+	return ret;
+out:
+	if (lio_cmd_cache)
+		kmem_cache_destroy(lio_cmd_cache);
+	if (lio_sess_cache)
+		kmem_cache_destroy(lio_sess_cache);
+	if (lio_conn_cache)
+		kmem_cache_destroy(lio_conn_cache);
+	if (lio_qr_cache)
+		kmem_cache_destroy(lio_qr_cache);
+	if (lio_dr_cache)
+		kmem_cache_destroy(lio_dr_cache);
+	if (lio_ooo_cache)
+		kmem_cache_destroy(lio_ooo_cache);
+	if (lio_r2t_cache)
+		kmem_cache_destroy(lio_r2t_cache);
+	if (lio_tpg_cache)
+		kmem_cache_destroy(lio_tpg_cache);
+	iscsi_deallocate_thread_sets();
+	iscsi_thread_set_free();
+	iscsi_target_deregister_configfs();
+	kfree(iscsi_global);
+	iscsi_global = NULL;
+
+	return -1;
+}
+
+int iscsi_target_release_phase1(int rmmod)
+{
+	spin_lock(&iscsi_global->shutdown_lock);
+	if (!rmmod) {
+		if (iscsi_global->in_shutdown) {
+			printk(KERN_ERR "Module already in shutdown, aborting\n");
+			spin_unlock(&iscsi_global->shutdown_lock);
+			return -1;
+		}
+
+		if (iscsi_global->in_rmmod) {
+			printk(KERN_ERR "Module already in rmmod, aborting\n");
+			spin_unlock(&iscsi_global->shutdown_lock);
+			return -1;
+		}
+	} else
+		iscsi_global->in_rmmod = 1;
+	iscsi_global->in_shutdown = 1;
+	spin_unlock(&iscsi_global->shutdown_lock);
+
+	return 0;
+}
+
+void iscsi_target_release_phase2(void)
+{
+	core_reset_nps();
+	iscsi_disable_all_tpgs();
+	iscsi_deallocate_thread_sets();
+	iscsi_thread_set_free();
+	iscsi_remove_all_tpgs();
+	core_release_nps();
+	core_release_discovery_tpg();
+	core_release_tiqns();
+	kmem_cache_destroy(lio_cmd_cache);
+	kmem_cache_destroy(lio_sess_cache);
+	kmem_cache_destroy(lio_conn_cache);
+	kmem_cache_destroy(lio_qr_cache);
+	kmem_cache_destroy(lio_dr_cache);
+	kmem_cache_destroy(lio_ooo_cache);
+	kmem_cache_destroy(lio_r2t_cache);
+	kmem_cache_destroy(lio_tpg_cache);
+
+	iscsi_global->ti_forcechanoffline = NULL;
+	iscsi_target_deregister_configfs();
+}
+
+/*	iscsi_target_release():
+ *
+ *
+ */
+static int iscsi_target_release(void)
+{
+	int ret = 0;
+
+	if (!iscsi_global)
+		return ret;
+
+	iscsi_target_release_phase1(1);
+	iscsi_target_release_phase2();
+
+	kfree(iscsi_global);
+
+	printk(KERN_INFO "Unloading Complete.\n");
+
+	return ret;
+}
+
+char *iscsi_get_fabric_name(void)
+{
+	return "iSCSI";
+}
+
+struct iscsi_cmd *iscsi_get_cmd(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	return cmd;
+}
+
+u32 iscsi_get_task_tag(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	return cmd->init_task_tag;
+}
+
+int iscsi_get_cmd_state(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	return cmd->i_state;
+}
+
+void iscsi_new_cmd_failure(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	if (cmd->immediate_data || cmd->unsolicited_data)
+		up(&cmd->unsolicited_data_sem);
+}
+
+int iscsi_is_state_remove(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	return (cmd->i_state == ISTATE_REMOVE);
+}
+
+int lio_sess_logged_in(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+	int ret;
+
+	/*
+	 * Called with spin_lock_bh(&se_global->se_tpg_lock); and
+	 * spin_lock(&se_tpg->session_lock); held.
+	 */
+	spin_lock(&sess->conn_lock);
+	ret = (sess->session_state != TARG_SESS_STATE_LOGGED_IN);
+	spin_unlock(&sess->conn_lock);
+
+	return ret;
+}
+
+u32 lio_sess_get_index(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	return sess->session_index;
+}
+
+u32 lio_sess_get_initiator_sid(
+	struct se_session *se_sess,
+	unsigned char *buf,
+	u32 size)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+	/*
+	 * iSCSI Initiator Session Identifier from RFC-3720.
+	 */
+	return snprintf(buf, size, "%02x%02x%02x%02x%02x%02x",
+		sess->isid[0], sess->isid[1], sess->isid[2],
+		sess->isid[3], sess->isid[4], sess->isid[5]);
+}
+
+/*	iscsi_add_nopin():
+ *
+ *
+ */
+int iscsi_add_nopin(
+	struct iscsi_conn *conn,
+	int want_response)
+{
+	u8 state;
+	struct iscsi_cmd *cmd;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return -1;
+
+	cmd->iscsi_opcode = ISCSI_OP_NOOP_IN;
+	state = (want_response) ? ISTATE_SEND_NOPIN_WANT_RESPONSE :
+			ISTATE_SEND_NOPIN_NO_RESPONSE;
+	cmd->init_task_tag = 0xFFFFFFFF;
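+	/*
+	 * Per RFC-3720, a Target Transfer Tag of 0xFFFFFFFF marks a NopIN
+	 * that does not request a NopOUT response, so skip that value when
+	 * handing out a real tag below.
+	 */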
+	spin_lock_bh(&SESS(conn)->ttt_lock);
+	cmd->targ_xfer_tag = (want_response) ? SESS(conn)->targ_xfer_tag++ :
+			0xFFFFFFFF;
+	if (want_response && (cmd->targ_xfer_tag == 0xFFFFFFFF))
+		cmd->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+	spin_unlock_bh(&SESS(conn)->ttt_lock);
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+	if (want_response)
+		iscsi_start_nopin_response_timer(conn);
+	iscsi_add_cmd_to_immediate_queue(cmd, conn, state);
+
+	return 0;
+}
+
+/*	iscsi_add_reject():
+ *
+ *
+ */
+int iscsi_add_reject(
+	u8 reason,
+	int fail_conn,
+	unsigned char *buf,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+	struct iscsi_reject *hdr;
+	int ret;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return -1;
+
+	cmd->iscsi_opcode = ISCSI_OP_REJECT;
+	if (fail_conn)
+		cmd->cmd_flags |= ICF_REJECT_FAIL_CONN;
+
+	hdr	= (struct iscsi_reject *) cmd->pdu;
+	hdr->reason = reason;
+
+	cmd->buf_ptr = kzalloc(ISCSI_HDR_LEN, GFP_ATOMIC);
+	if (!(cmd->buf_ptr)) {
+		printk(KERN_ERR "Unable to allocate memory for cmd->buf_ptr\n");
+		__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+		return -1;
+	}
+	memcpy(cmd->buf_ptr, buf, ISCSI_HDR_LEN);
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	cmd->i_state = ISTATE_SEND_REJECT;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	ret = down_interruptible(&cmd->reject_sem);
+	if (ret != 0)
+		return -1;
+
+	return (!fail_conn) ? 0 : -1;
+}
+
+/*	iscsi_add_reject_from_cmd():
+ *
+ *
+ */
+int iscsi_add_reject_from_cmd(
+	u8 reason,
+	int fail_conn,
+	int add_to_conn,
+	unsigned char *buf,
+	struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn;
+	struct iscsi_reject *hdr;
+	int ret;
+
+	if (!CONN(cmd)) {
+		printk(KERN_ERR "cmd->conn is NULL for ITT: 0x%08x\n",
+				cmd->init_task_tag);
+		return -1;
+	}
+	conn = CONN(cmd);
+
+	cmd->iscsi_opcode = ISCSI_OP_REJECT;
+	if (fail_conn)
+		cmd->cmd_flags |= ICF_REJECT_FAIL_CONN;
+
+	hdr	= (struct iscsi_reject *) cmd->pdu;
+	hdr->reason = reason;
+
+	cmd->buf_ptr = kzalloc(ISCSI_HDR_LEN, GFP_ATOMIC);
+	if (!(cmd->buf_ptr)) {
+		printk(KERN_ERR "Unable to allocate memory for cmd->buf_ptr\n");
+		__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+		return -1;
+	}
+	memcpy(cmd->buf_ptr, buf, ISCSI_HDR_LEN);
+
+	if (add_to_conn)
+		iscsi_attach_cmd_to_queue(conn, cmd);
+
+	cmd->i_state = ISTATE_SEND_REJECT;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	ret = down_interruptible(&cmd->reject_sem);
+	if (ret != 0)
+		return -1;
+
+	return (!fail_conn) ? 0 : -1;
+}
+
+/* #define iscsi_calculate_map_segment_DEBUG */
+#ifdef iscsi_calculate_map_segment_DEBUG
+#define DEBUG_MAP_SEGMENTS(buf...) PYXPRINT(buf)
+#else
+#define DEBUG_MAP_SEGMENTS(buf...)
+#endif
+
+/*	iscsi_calculate_map_segment():
+ *
+ *
+ */
+static inline void iscsi_calculate_map_segment(
+	u32 *data_length,
+	struct se_offset_map *lm)
+{
+	u32 sg_offset = 0;
+	struct se_mem *se_mem = lm->map_se_mem;
+
+	DEBUG_MAP_SEGMENTS(" START Mapping se_mem: %p, Length: %d"
+		"  Remaining iSCSI Data: %u\n", se_mem, se_mem->se_len,
+		*data_length);
+	/*
+	 * Still working on pages in the current struct se_mem.
+	 */
+	if (!lm->map_reset) {
+		lm->iovec_length = (lm->sg_length > PAGE_SIZE) ?
+					PAGE_SIZE : lm->sg_length;
+		if (*data_length < lm->iovec_length) {
+			DEBUG_MAP_SEGMENTS("LINUX_MAP: Reset lm->iovec_length"
+				" to %d\n", *data_length);
+
+			lm->iovec_length = *data_length;
+		}
+		lm->iovec_base = page_address(lm->sg_page) + sg_offset;
+
+		DEBUG_MAP_SEGMENTS("LINUX_MAP: Set lm->iovec_base to %p from"
+			" lm->sg_page: %p\n", lm->iovec_base, lm->sg_page);
+		return;
+	}
+
+	/*
+	 * First run for this struct se_offset_map.
+	 *
+	 * OR:
+	 *
+	 * Mapped all of the pages in the current scatterlist, move
+	 * on to the next one.
+	 */
+	lm->map_reset = 0;
+	sg_offset = se_mem->se_off;
+	lm->sg_page = se_mem->se_page;
+	lm->sg_length = se_mem->se_len;
+
+	DEBUG_MAP_SEGMENTS("LINUX_MAP1[%p]: Starting to se_mem->se_len: %u,"
+		" se_mem->se_off: %u, se_mem->se_page: %p\n", se_mem,
+		se_mem->se_len, se_mem->se_off, se_mem->se_page);
+	/*
+	 * Get the base and length of the current page for use with the iovec.
+	 */
+recalc:
+	lm->iovec_length = (lm->sg_length > (PAGE_SIZE - sg_offset)) ?
+			   (PAGE_SIZE - sg_offset) : lm->sg_length;
+
+	DEBUG_MAP_SEGMENTS("LINUX_MAP: lm->iovec_length: %u, lm->sg_length: %u,"
+		" sg_offset: %u\n", lm->iovec_length, lm->sg_length, sg_offset);
+	/*
+	 * See if there is any iSCSI offset we need to deal with.
+	 */
+	if (!lm->current_offset) {
+		lm->iovec_base = page_address(lm->sg_page) + sg_offset;
+
+		if (*data_length < lm->iovec_length) {
+			DEBUG_MAP_SEGMENTS("LINUX_MAP1[%p]: Reset"
+				" lm->iovec_length to %d\n", se_mem,
+				*data_length);
+			lm->iovec_length = *data_length;
+		}
+
+		DEBUG_MAP_SEGMENTS("LINUX_MAP2[%p]: No current_offset,"
+			" set iovec_base to %p and set Current Page to %p\n",
+			se_mem, lm->iovec_base, lm->sg_page);
+
+		return;
+	}
+
+	/*
+	 * We know the iSCSI offset is in the next page of the current
+	 * scatterlist.  Increase the lm->sg_page pointer and try again.
+	 */
+	if (lm->current_offset >= lm->iovec_length) {
+		DEBUG_MAP_SEGMENTS("LINUX_MAP3[%p]: Next Page:"
+			" lm->current_offset: %u, iovec_length: %u"
+			" sg_offset: %u\n", se_mem, lm->current_offset,
+			lm->iovec_length, sg_offset);
+
+		lm->current_offset -= lm->iovec_length;
+		lm->sg_length -= lm->iovec_length;
+		lm->sg_page++;
+		sg_offset = 0;
+
+		DEBUG_MAP_SEGMENTS("LINUX_MAP3[%p]: ** Skipping to Next Page,"
+			" updated values: lm->current_offset: %u\n", se_mem,
+			lm->current_offset);
+
+		goto recalc;
+	}
+
+	/*
+	 * The iSCSI offset is in the current page, increment the iovec
+	 * base and reduce iovec length.
+	 */
+	lm->iovec_base = page_address(lm->sg_page);
+
+	DEBUG_MAP_SEGMENTS("LINUX_MAP4[%p]: Set lm->iovec_base to %p\n", se_mem,
+			lm->iovec_base);
+
+	lm->iovec_base += sg_offset;
+	lm->iovec_base += lm->current_offset;
+	DEBUG_MAP_SEGMENTS("****** the OLD lm->iovec_length: %u lm->sg_length:"
+		" %u\n", lm->iovec_length, lm->sg_length);
+
+	if ((lm->iovec_length - lm->current_offset) < *data_length)
+		lm->iovec_length -= lm->current_offset;
+	else
+		lm->iovec_length = *data_length;
+
+	if ((lm->sg_length - lm->current_offset) < *data_length)
+		lm->sg_length -= lm->current_offset;
+	else
+		lm->sg_length = *data_length;
+
+	lm->current_offset = 0;
+
+	DEBUG_MAP_SEGMENTS("****** the NEW lm->iovec_length %u lm->sg_length:"
+		" %u\n", lm->iovec_length, lm->sg_length);
+}
+
+/* #define iscsi_linux_get_iscsi_offset_DEBUG */
+#ifdef iscsi_linux_get_iscsi_offset_DEBUG
+#define DEBUG_GET_ISCSI_OFFSET(buf...) PYXPRINT(buf)
+#else
+#define DEBUG_GET_ISCSI_OFFSET(buf...)
+#endif
+
+/*	get_iscsi_offset():
+ *
+ *
+ */
+static int get_iscsi_offset(
+	struct se_offset_map *lmap,
+	struct se_unmap_sg *usg)
+{
+	u32 current_length = 0, current_iscsi_offset = lmap->iscsi_offset;
+	u32 total_offset = 0;
+	struct se_cmd *cmd = usg->se_cmd;
+	struct se_mem *se_mem;
+
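+	/* Grab the first struct se_mem entry from T_TASK(cmd)->t_mem_list. */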
+	list_for_each_entry(se_mem, T_TASK(cmd)->t_mem_list, se_list)
+		break;
+
+	if (!se_mem) {
+		printk(KERN_ERR "Unable to locate se_mem from"
+				" T_TASK(cmd)->t_mem_list\n");
+		return -1;
+	}
+
+	/*
+	 * Locate the current offset from the passed iSCSI Offset.
+	 */
+	while (lmap->iscsi_offset != current_length) {
+		/*
+		 * The iSCSI Offset is within the current struct se_mem.
+		 *
+		 * Or:
+		 *
+		 * The iSCSI Offset is outside of the current struct se_mem.
+		 * Recalculate the values and obtain the next struct se_mem pointer.
+		 */
+		total_offset += se_mem->se_len;
+
+		DEBUG_GET_ISCSI_OFFSET("ISCSI_OFFSET: current_length: %u,"
+			" total_offset: %u, sg->length: %u\n",
+			current_length, total_offset, se_mem->se_len);
+
+		if (total_offset > lmap->iscsi_offset) {
+			current_length += current_iscsi_offset;
+			lmap->orig_offset = lmap->current_offset =
+				usg->t_offset = current_iscsi_offset;
+			DEBUG_GET_ISCSI_OFFSET("ISCSI_OFFSET: Within Current"
+				" struct se_mem: %p, current_length incremented to"
+				" %u\n", se_mem, current_length);
+		} else {
+			current_length += se_mem->se_len;
+			current_iscsi_offset -= se_mem->se_len;
+
+			DEBUG_GET_ISCSI_OFFSET("ISCSI_OFFSET: Outside of"
+				" Current se_mem: %p, current_length"
+				" incremented to %u and current_iscsi_offset"
+				" decremented to %u\n", se_mem, current_length,
+				current_iscsi_offset);
+
+			list_for_each_entry_continue(se_mem,
+					T_TASK(cmd)->t_mem_list, se_list)
+				break;
+
+			if (!se_mem) {
+				printk(KERN_ERR "Unable to locate struct se_mem\n");
+				return -1;
+			}
+		}
+	}
+	lmap->map_orig_se_mem = se_mem;
+	usg->cur_se_mem = se_mem;
+
+	return 0;
+}
+
+/* #define iscsi_OS_set_SG_iovec_ptrs_DEBUG */
+#ifdef iscsi_OS_set_SG_iovec_ptrs_DEBUG
+#define DEBUG_IOVEC_SCATTERLISTS(buf...) PYXPRINT(buf)
+
+static void iscsi_check_iovec_map(
+	u32 iovec_count,
+	u32 map_length,
+	struct se_map_sg *map_sg,
+	struct se_unmap_sg *unmap_sg)
+{
+	u32 i, iovec_map_length = 0;
+	struct se_cmd *cmd = map_sg->se_cmd;
+	struct iovec *iov = map_sg->iov;
+	struct se_mem *se_mem;
+
+	for (i = 0; i < iovec_count; i++)
+		iovec_map_length += iov[i].iov_len;
+
+	if (iovec_map_length == map_length)
+		return;
+
+	printk(KERN_INFO "Calculated iovec_map_length: %u does not match passed"
+		" map_length: %u\n", iovec_map_length, map_length);
+	printk(KERN_INFO "ITT: 0x%08x data_length: %u data_direction %d\n",
+		CMD_TFO(cmd)->get_task_tag(cmd), cmd->data_length,
+		cmd->data_direction);
+
+	iovec_map_length = 0;
+
+	for (i = 0; i < iovec_count; i++) {
+		printk(KERN_INFO "iov[%d].iov_[base,len]: %p / %u bytes------"
+			"-->\n", i, iov[i].iov_base, iov[i].iov_len);
+
+		printk(KERN_INFO "iovec_map_length from %u to %u\n",
+			iovec_map_length, iovec_map_length + iov[i].iov_len);
+		iovec_map_length += iov[i].iov_len;
+
+		printk(KERN_INFO "XXXX_map_length from %u to %u\n", map_length,
+				(map_length - iov[i].iov_len));
+		map_length -= iov[i].iov_len;
+	}
+
+	list_for_each_entry(se_mem, T_TASK(cmd)->t_mem_list, se_list) {
+		printk(KERN_INFO "se_mem[%p]: offset: %u length: %u\n",
+			se_mem, se_mem->se_off, se_mem->se_len);
+	}
+
+	BUG();
+}
+
+#else
+#define DEBUG_IOVEC_SCATTERLISTS(buf...)
+#define iscsi_check_iovec_map(a, b, c, d)
+#endif
+
+static int iscsi_set_iovec_ptrs(
+	struct se_map_sg *map_sg,
+	struct se_unmap_sg *unmap_sg)
+{
+	u32 i = 0 /* For iovecs */, j = 0 /* For scatterlists */;
+#ifdef iscsi_OS_set_SG_iovec_ptrs_DEBUG
+	u32 orig_map_length = map_sg->data_length;
+#endif
+	struct se_cmd *cmd = map_sg->se_cmd;
+	struct iscsi_cmd *i_cmd = container_of(cmd, struct iscsi_cmd, se_cmd);
+	struct se_offset_map *lmap = &unmap_sg->lmap;
+	struct iovec *iov = map_sg->iov;
+
+	/*
+	 * Used for non-scatterlist operations, assume a single iovec.
+	 */
+	if (!T_TASK(cmd)->t_tasks_se_num) {
+		DEBUG_IOVEC_SCATTERLISTS("ITT: 0x%08x No struct se_mem elements"
+			" present\n", CMD_TFO(cmd)->get_task_tag(cmd));
+		iov[0].iov_base = (unsigned char *) T_TASK(cmd)->t_task_buf +
+							map_sg->data_offset;
+		iov[0].iov_len  = map_sg->data_length;
+		return 1;
+	}
+
+	/*
+	 * Set lmap->map_reset = 1 so the first call to
+	 * iscsi_calculate_map_segment() sets up the initial
+	 * values for struct se_offset_map.
+	 */
+	lmap->map_reset = 1;
+
+	DEBUG_IOVEC_SCATTERLISTS("[-------------------] ITT: 0x%08x OS"
+		" Independent Network POSIX defined iovectors to SE Memory"
+		" [-------------------]\n\n", CMD_TFO(cmd)->get_task_tag(cmd));
+
+	/*
+	 * Get a pointer to the first used scatterlist based on the passed
+	 * offset. Also set the rest of the needed values in struct se_offset_map.
+	 */
+	lmap->iscsi_offset = map_sg->data_offset;
+	if (map_sg->sg_kmap_active) {
+		unmap_sg->se_cmd = map_sg->se_cmd;
+		get_iscsi_offset(lmap, unmap_sg);
+		unmap_sg->data_length = map_sg->data_length;
+	} else {
+		lmap->current_offset = lmap->orig_offset;
+	}
+	lmap->map_se_mem = lmap->map_orig_se_mem;
+
+	DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: Total map_sg->data_length: %d,"
+		" lmap->iscsi_offset: %d, i_cmd->orig_iov_data_count: %d\n",
+		map_sg->data_length, lmap->iscsi_offset,
+		i_cmd->orig_iov_data_count);
+
+	while (map_sg->data_length) {
+		/*
+		 * Time to get the virtual address for use with iovec pointers.
+		 * This function will return the expected iovec_base address
+		 * and iovec_length.
+		 */
+		iscsi_calculate_map_segment(&map_sg->data_length, lmap);
+
+		/*
+		 * Set the iov.iov_base and iov.iov_len from the current values
+		 * in struct se_offset_map.
+		 */
+		iov[i].iov_base = lmap->iovec_base;
+		iov[i].iov_len = lmap->iovec_length;
+
+		/*
+		 * Subtract the final iovec length from the total length to be
+		 * mapped, and the length of the current scatterlist.  Also
+		 * perform the paranoid check to make sure we are not going to
+		 * overflow the iovecs allocated for this command in the next
+		 * pass.
+		 */
+		map_sg->data_length -= iov[i].iov_len;
+		lmap->sg_length -= iov[i].iov_len;
+
+		DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: iov[%u].iov_len: %u\n",
+				i, iov[i].iov_len);
+		DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: lmap->sg_length: from %u"
+			" to %u\n", lmap->sg_length + iov[i].iov_len,
+				lmap->sg_length);
+		DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: Changed total"
+			" map_sg->data_length from %u to %u\n",
+			map_sg->data_length + iov[i].iov_len,
+			map_sg->data_length);
+
+		if ((++i + 1) > i_cmd->orig_iov_data_count) {
+			printk(KERN_ERR "Current iovec count %u is greater than"
+				" struct iscsi_cmd->orig_iov_data_count %u, cannot"
+				" continue.\n", i+1, i_cmd->orig_iov_data_count);
+			return -1;
+		}
+
+		/*
+		 * All done mapping this scatterlist's pages, move on to
+		 * the next scatterlist by setting lmap->map_reset = 1.
+		 */
+		if (!lmap->sg_length || !map_sg->data_length) {
+			list_for_each_entry(lmap->map_se_mem,
+					&lmap->map_se_mem->se_list, se_list)
+				break;
+
+			if (!lmap->map_se_mem) {
+				printk(KERN_ERR "Unable to locate next"
+					" lmap->map_struct se_mem entry\n");
+				return -1;
+			}
+			j++;
+
+			lmap->sg_page = NULL;
+			lmap->map_reset = 1;
+
+			DEBUG_IOVEC_SCATTERLISTS("OS_IOVEC: Done with current"
+				" scatterlist, incremented Generic scatterlist"
+				" Counter to %d and reset = 1\n", j);
+		} else
+			lmap->sg_page++;
+	}
+
+	unmap_sg->sg_count = j;
+
+	iscsi_check_iovec_map(i, orig_map_length, map_sg, unmap_sg);
+
+	return i;
+}
+
+static void iscsi_map_SG_segments(struct se_unmap_sg *unmap_sg)
+{
+	u32 i = 0;
+	struct se_cmd *cmd = unmap_sg->se_cmd;
+	struct se_mem *se_mem = unmap_sg->cur_se_mem;
+
+	if (!(T_TASK(cmd)->t_tasks_se_num))
+		return;
+
+	list_for_each_entry_continue(se_mem, T_TASK(cmd)->t_mem_list, se_list) {
+		kmap(se_mem->se_page);
+
+		if (++i == unmap_sg->sg_count)
+			break;
+	}
+}
+
+static void iscsi_unmap_SG_segments(struct se_unmap_sg *unmap_sg)
+{
+	u32 i = 0;
+	struct se_cmd *cmd = unmap_sg->se_cmd;
+	struct se_mem *se_mem = unmap_sg->cur_se_mem;
+
+	if (!(T_TASK(cmd)->t_tasks_se_num))
+		return;
+
+	list_for_each_entry_continue(se_mem, T_TASK(cmd)->t_mem_list, se_list) {
+		kunmap(se_mem->se_page);
+
+		if (++i == unmap_sg->sg_count)
+			break;
+	}
+}
+
+/*	iscsi_handle_scsi_cmd():
+ *
+ *
+ */
+static inline int iscsi_handle_scsi_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	int	data_direction, cmdsn_ret = 0, immed_ret, ret, transport_ret;
+	int	dump_immediate_data = 0, send_check_condition = 0, payload_length;
+	struct iscsi_cmd	*cmd = NULL;
+	struct iscsi_scsi_cmd *hdr;
+
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->cmd_pdus++;
+	if (SESS_NODE_ACL(SESS(conn))) {
+		spin_lock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+		SESS_NODE_ACL(SESS(conn))->num_cmds++;
+		spin_unlock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+	}
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+
+	hdr			= (struct iscsi_scsi_cmd *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->data_length	= be32_to_cpu(hdr->data_length);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+
+	/* FIXME; Add checks for AdditionalHeaderSegment */
+
+	if (!(hdr->flags & ISCSI_FLAG_CMD_WRITE) &&
+	    !(hdr->flags & ISCSI_FLAG_CMD_FINAL)) {
+		printk(KERN_ERR "ISCSI_FLAG_CMD_WRITE & ISCSI_FLAG_CMD_FINAL"
+				" not set. Bad iSCSI Initiator.\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if (((hdr->flags & ISCSI_FLAG_CMD_READ) ||
+	     (hdr->flags & ISCSI_FLAG_CMD_WRITE)) && !hdr->data_length) {
+		/*
+		 * Vmware ESX v3.0 uses a modified Cisco Initiator (v3.4.2)
+		 * that adds support for RESERVE/RELEASE.  There is a bug
+		 * in this new functionality that sets the R/W bits when
+		 * neither CDB carries any READ or WRITE data payloads.
+		 */
+		if ((hdr->cdb[0] == 0x16) || (hdr->cdb[0] == 0x17)) {
+			hdr->flags &= ~ISCSI_FLAG_CMD_READ;
+			hdr->flags &= ~ISCSI_FLAG_CMD_WRITE;
+			goto done;
+		}
+
+		printk(KERN_ERR "ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE"
+			" set when Expected Data Transfer Length is 0 for"
+			" CDB: 0x%02x. Bad iSCSI Initiator.\n", hdr->cdb[0]);
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+done:
+
+	if (!(hdr->flags & ISCSI_FLAG_CMD_READ) &&
+	    !(hdr->flags & ISCSI_FLAG_CMD_WRITE) && (hdr->data_length != 0)) {
+		printk(KERN_ERR "ISCSI_FLAG_CMD_READ and/or ISCSI_FLAG_CMD_WRITE"
+			" MUST be set if Expected Data Transfer Length is not 0."
+			" Bad iSCSI Initiator.\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if ((hdr->flags & ISCSI_FLAG_CMD_READ) &&
+	    (hdr->flags & ISCSI_FLAG_CMD_WRITE)) {
+		printk(KERN_ERR "Bidirectional operations not supported!\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if (hdr->opcode & ISCSI_OP_IMMEDIATE) {
+		printk(KERN_ERR "Illegally set Immediate Bit in iSCSI Initiator"
+				" Scsi Command PDU.\n");
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+				buf, conn);
+	}
+
+	if (payload_length && !SESS_OPS_C(conn)->ImmediateData) {
+		printk(KERN_ERR "ImmediateData=No but DataSegmentLength=%u,"
+			" protocol error.\n", payload_length);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+#if 0
+	if (!(hdr->flags & ISCSI_FLAG_CMD_FINAL) &&
+	     (hdr->flags & ISCSI_FLAG_CMD_WRITE) && SESS_OPS_C(conn)->InitialR2T) {
+		printk(KERN_ERR "ISCSI_FLAG_CMD_FINAL is not Set and"
+			" ISCSI_FLAG_CMD_WRITE Bit and InitialR2T=Yes,"
+			" protocol error\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+#endif
+	if ((hdr->data_length == payload_length) &&
+	    (!(hdr->flags & ISCSI_FLAG_CMD_FINAL))) {
+		printk(KERN_ERR "Expected Data Transfer Length and Length of"
+			" Immediate Data are the same, but ISCSI_FLAG_CMD_FINAL"
+			" bit is not set, protocol error\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+
+	if (payload_length > hdr->data_length) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" EDTL: %u, protocol error.\n", payload_length,
+				hdr->data_length);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" MaxRecvDataSegmentLength: %u, protocol error.\n",
+			payload_length, CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+				buf, conn);
+	}
+
+	if (payload_length > SESS_OPS_C(conn)->FirstBurstLength) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" FirstBurstLength: %u, protocol error.\n",
+			payload_length, SESS_OPS_C(conn)->FirstBurstLength);
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_INVALID, 1,
+					buf, conn);
+	}
+
+	data_direction = (hdr->flags & ISCSI_FLAG_CMD_WRITE) ? DMA_TO_DEVICE :
+			 (hdr->flags & ISCSI_FLAG_CMD_READ) ? DMA_FROM_DEVICE :
+			  DMA_NONE;
+
+	cmd = iscsi_allocate_se_cmd(conn, hdr->data_length, data_direction,
+				(hdr->flags & ISCSI_FLAG_CMD_ATTR_MASK));
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES, 1,
+					buf, conn);
+
+	TRACE(TRACE_ISCSI, "Got SCSI Command, ITT: 0x%08x, CmdSN: 0x%08x,"
+		" ExpXferLen: %u, Length: %u, CID: %hu\n", hdr->itt,
+		hdr->cmdsn, hdr->data_length, payload_length, conn->cid);
+
+	cmd->iscsi_opcode	= ISCSI_OP_SCSI_CMD;
+	cmd->i_state		= ISTATE_NEW_CMD;
+	cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	cmd->immediate_data	= (payload_length) ? 1 : 0;
+	cmd->unsolicited_data	= ((!(hdr->flags & ISCSI_FLAG_CMD_FINAL) &&
+				     (hdr->flags & ISCSI_FLAG_CMD_WRITE)) ? 1 : 0);
+	if (cmd->unsolicited_data)
+		cmd->cmd_flags |= ICF_NON_IMMEDIATE_UNSOLICITED_DATA;
+
+	SESS(conn)->init_task_tag = cmd->init_task_tag = hdr->itt;
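+	/*
+	 * READs get a real per session Target Transfer Tag (skipping the
+	 * reserved 0xFFFFFFFF value); WRITEs start with the reserved tag.
+	 */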
+	if (hdr->flags & ISCSI_FLAG_CMD_READ) {
+		spin_lock_bh(&SESS(conn)->ttt_lock);
+		cmd->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+		if (cmd->targ_xfer_tag == 0xFFFFFFFF)
+			cmd->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+		spin_unlock_bh(&SESS(conn)->ttt_lock);
+	} else if (hdr->flags & ISCSI_FLAG_CMD_WRITE)
+		cmd->targ_xfer_tag = 0xFFFFFFFF;
+	cmd->cmd_sn		= hdr->cmdsn;
+	cmd->exp_stat_sn	= hdr->exp_statsn;
+	cmd->first_burst_len	= payload_length;
+
+	if (cmd->data_direction == DMA_FROM_DEVICE) {
+		struct iscsi_datain_req *dr;
+
+		dr = iscsi_allocate_datain_req();
+		if (!(dr))
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, 1, buf, cmd);
+
+		iscsi_attach_datain_req(cmd, dr);
+	}
+
+	/*
+	 * The CDB is going to a struct se_device.
+	 */
+	ret = iscsi_get_lun_for_cmd(cmd, hdr->cdb,
+				get_unaligned_le64(&hdr->lun[0]));
+	if (ret < 0) {
+		if (SE_CMD(cmd)->scsi_sense_reason == TCM_NON_EXISTENT_LUN) {
+			TRACE(TRACE_VANITY, "Responding to non-acl'ed,"
+				" non-existent or non-exported iSCSI LUN:"
+				" 0x%016Lx\n", get_unaligned_le64(&hdr->lun[0]));
+		}
+		if (ret == PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES)
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, 1, buf, cmd);
+
+		send_check_condition = 1;
+		goto attach_cmd;
+	}
+	/*
+	 * The Initiator Node has access to the LUN (the addressing method
+	 * is handled inside of iscsi_get_lun_for_cmd()).  Now it's time to
+	 * allocate 1->N transport tasks (depending on sector count and
+	 * maximum request size the physical HBA(s) can handle).
+	 */
+	transport_ret = transport_generic_allocate_tasks(SE_CMD(cmd), hdr->cdb);
+	if (!(transport_ret))
+		goto build_list;
+
+	if (transport_ret == -1) {
+		return iscsi_add_reject_from_cmd(
+				ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				1, 1, buf, cmd);
+	} else if (transport_ret == -2) {
+		/*
+		 * Unsupported SAM Opcode.  CHECK_CONDITION will be sent
+		 * in iscsi_execute_cmd() during the CmdSN OOO Execution
+		 * Mechanism.
+		 */
+		send_check_condition = 1;
+		goto attach_cmd;
+	}
+
+build_list:
+	if (iscsi_decide_list_to_build(cmd, payload_length) < 0)
+		return iscsi_add_reject_from_cmd(
+				ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				1, 1, buf, cmd);
+attach_cmd:
+	iscsi_attach_cmd_to_queue(conn, cmd);
+	/*
+	 * Check if we need to delay processing because of ALUA
+	 * Active/NonOptimized primary access state..
+	 */
+	core_alua_check_nonop_delay(SE_CMD(cmd));
+	/*
+	 * Check the CmdSN against ExpCmdSN/MaxCmdSN here if
+	 * the Immediate Bit is not set, and no Immediate
+	 * Data is attached.
+	 *
+	 * A PDU/CmdSN carrying Immediate Data can only
+	 * be processed after the DataCRC has passed.
+	 * If the DataCRC fails, the CmdSN MUST NOT
+	 * be acknowledged. (See below)
+	 */
+	if (!cmd->immediate_data) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn,
+				cmd, hdr->cmdsn);
+		if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd,
+					conn, cmd->i_state);
+			return 0;
+		} else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) {
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	}
+	iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	/*
+	 * If no Immediate Data is attached, it's OK to return now.
+	 */
+	if (!cmd->immediate_data) {
+		if (send_check_condition)
+			return 0;
+
+		if (cmd->unsolicited_data) {
+			iscsi_set_dataout_sequence_values(cmd);
+
+			spin_lock_bh(&cmd->dataout_timeout_lock);
+			iscsi_start_dataout_timer(cmd, CONN(cmd));
+			spin_unlock_bh(&cmd->dataout_timeout_lock);
+		}
+
+		return 0;
+	}
+
+	/*
+	 * Early CHECK_CONDITIONs never make it to the transport processing
+	 * thread.  They are processed in CmdSN order by
+	 * iscsi_check_received_cmdsn() below.
+	 */
+	if (send_check_condition) {
+		immed_ret = IMMEDIDATE_DATA_NORMAL_OPERATION;
+		dump_immediate_data = 1;
+		goto after_immediate_data;
+	}
+
+	/*
+	 * Immediate Data is present, send to the transport and block until
+	 * the underlying transport plugin has allocated the buffer to
+	 * receive the Immediate Write Data into.
+	 */
+	transport_generic_handle_cdb(SE_CMD(cmd));
+
+	down(&cmd->unsolicited_data_sem);
+
+	if (SE_CMD(cmd)->se_cmd_flags & SCF_SE_CMD_FAILED) {
+		immed_ret = IMMEDIDATE_DATA_NORMAL_OPERATION;
+		dump_immediate_data = 1;
+		goto after_immediate_data;
+	}
+
+	immed_ret = iscsi_handle_immediate_data(cmd, buf, payload_length);
+after_immediate_data:
+	if (immed_ret == IMMEDIDATE_DATA_NORMAL_OPERATION) {
+		/*
+		 * A PDU/CmdSN carrying Immediate Data passed
+		 * DataCRC, check against ExpCmdSN/MaxCmdSN if
+		 * Immediate Bit is not set.
+		 */
+		cmdsn_ret = iscsi_check_received_cmdsn(conn,
+				cmd, hdr->cmdsn);
+		/*
+		 * Special case for Unsupported SAM WRITE Opcodes
+		 * and ImmediateData=Yes.
+		 */
+		if (dump_immediate_data) {
+			if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+				return -1;
+		} else if (cmd->unsolicited_data) {
+			iscsi_set_dataout_sequence_values(cmd);
+
+			spin_lock_bh(&cmd->dataout_timeout_lock);
+			iscsi_start_dataout_timer(cmd, CONN(cmd));
+			spin_unlock_bh(&cmd->dataout_timeout_lock);
+		}
+
+		if (cmdsn_ret == CMDSN_NORMAL_OPERATION)
+			return 0;
+		else if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)
+			return 0;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd,
+					conn, cmd->i_state);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	} else if (immed_ret == IMMEDIDATE_DATA_ERL1_CRC_FAILURE) {
+		/*
+		 * Immediate Data failed DataCRC and ERL>=1,
+		 * silently drop this PDU and let the initiator
+		 * plug the CmdSN gap.
+		 *
+		 * FIXME: Send Unsolicited NOPIN with reserved
+		 * TTT here to help the initiator figure out
+		 * the missing CmdSN, although they should be
+		 * intelligent enough to determine the missing
+		 * CmdSN and issue a retry to plug the sequence.
+		 */
+		cmd->i_state = ISTATE_REMOVE;
+		iscsi_add_cmd_to_immediate_queue(cmd, conn, cmd->i_state);
+	} else /* immed_ret == IMMEDIDATE_DATA_CANNOT_RECOVER */
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_handle_data_out():
+ *
+ *
+ */
+static inline int iscsi_handle_data_out(struct iscsi_conn *conn, unsigned char *buf)
+{
+	int iov_ret, ooo_cmdsn = 0, ret;
+	u8 data_crc_failed = 0, pad_bytes[4];
+	u32 checksum, iov_count = 0, padding = 0, rx_got = 0;
+	u32 rx_size = 0, payload_length;
+	struct iscsi_cmd *cmd = NULL;
+	struct se_cmd *se_cmd;
+	struct se_map_sg map_sg;
+	struct se_unmap_sg unmap_sg;
+	struct iscsi_data *hdr;
+	struct iovec *iov;
+	unsigned long flags;
+
+	hdr			= (struct iscsi_data *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+	hdr->datasn		= be32_to_cpu(hdr->datasn);
+	hdr->offset		= be32_to_cpu(hdr->offset);
+
+	if (!payload_length) {
+		printk(KERN_ERR "DataOUT payload is ZERO, protocol error.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	/* iSCSI write */
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->rx_data_octets += payload_length;
+	if (SESS_NODE_ACL(SESS(conn))) {
+		spin_lock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+		SESS_NODE_ACL(SESS(conn))->write_bytes += payload_length;
+		spin_unlock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+	}
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "DataSegmentLength: %u is greater than"
+			" MaxRecvDataSegmentLength: %u\n", payload_length,
+			CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	cmd = iscsi_find_cmd_from_itt_or_dump(conn, hdr->itt,
+			payload_length);
+	if (!(cmd))
+		return 0;
+
+	TRACE(TRACE_ISCSI, "Got DataOut ITT: 0x%08x, TTT: 0x%08x,"
+		" DataSN: 0x%08x, Offset: %u, Length: %u, CID: %hu\n",
+		hdr->itt, hdr->ttt, hdr->datasn, hdr->offset,
+		payload_length, conn->cid);
+
+	if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT) {
+		printk(KERN_ERR "Command ITT: 0x%08x received DataOUT after"
+			" last DataOUT received, dumping payload\n",
+			cmd->init_task_tag);
+		return iscsi_dump_data_payload(conn, payload_length, 1);
+	}
+
+	if (cmd->data_direction != DMA_TO_DEVICE) {
+		printk(KERN_ERR "Command ITT: 0x%08x received DataOUT for a"
+			" NON-WRITE command.\n", cmd->init_task_tag);
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
+				1, 0, buf, cmd);
+	}
+	se_cmd = SE_CMD(cmd);
+	iscsi_mod_dataout_timer(cmd);
+
+	if ((hdr->offset + payload_length) > cmd->data_length) {
+		printk(KERN_ERR "DataOut Offset: %u, Length %u greater than"
+			" iSCSI Command EDTL %u, protocol error.\n",
+			hdr->offset, payload_length, cmd->data_length);
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
+				1, 0, buf, cmd);
+	}
+
+	/*
+	 * Whenever a DataOUT or DataIN PDU contains a valid TTT, the
+	 * iSCSI LUN field must be set (iSCSI v20 10.7.4).  Some initiators
+	 * (Cisco among them) do not honor this, so the check is disabled.
+	 */
+#if 0
+	if (hdr->ttt != 0xFFFFFFFF) {
+		int lun = iscsi_unpack_lun(get_unaligned_le64(&hdr->lun[0]));
+		if (lun != SE_CMD(cmd)->orig_fe_lun) {
+			printk(KERN_ERR "Received LUN: %u does not match iSCSI"
+				" LUN: %u\n", lun, SE_CMD(cmd)->orig_fe_lun);
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_INVALID,
+					1, 0, buf, cmd);
+		}
+	}
+#endif
+	if (cmd->unsolicited_data) {
+		int dump_unsolicited_data = 0, wait_for_transport = 0;
+
+		if (SESS_OPS_C(conn)->InitialR2T) {
+			printk(KERN_ERR "Received unexpected unsolicited data"
+				" while InitialR2T=Yes, protocol error.\n");
+			transport_send_check_condition_and_sense(SE_CMD(cmd),
+					TCM_UNEXPECTED_UNSOLICITED_DATA, 0);
+			return -1;
+		}
+		/*
+		 * Special case for dealing with Unsolicited DataOUT
+		 * and Unsupported SAM WRITE Opcodes and SE resource allocation
+		 * failures;
+		 */
+		spin_lock_irqsave(&T_TASK(se_cmd)->t_state_lock, flags);
+		/*
+		 * Handle cases where we do or do not want to sleep on
+		 * unsolicited_data_sem.
+		 *
+		 * First, if TRANSPORT_WRITE_PENDING state has not been reached,
+		 * we assume we need to wait and sleep.
+		 *
+		 * For the ImmediateData=Yes cases, there will already be
+		 * generic target memory allocated with the original
+		 * ISCSI_OP_SCSI_CMD PDU, so do not sleep for that case.
+		 *
+		 * The last is a check for a delayed TASK_ABORTED status that
+		 * means the data payload will be dropped because
+		 * SCF_SE_CMD_FAILED has been set to indicate that an exception
+		 * condition for this struct se_cmd has occurred in generic
+		 * target code that requires us to drop payload.
+		 */
+		wait_for_transport =
+				(se_cmd->t_state != TRANSPORT_WRITE_PENDING);
+		if ((cmd->immediate_data != 0) ||
+		    (atomic_read(&T_TASK(se_cmd)->t_transport_aborted) != 0))
+			wait_for_transport = 0;
+		spin_unlock_irqrestore(&T_TASK(se_cmd)->t_state_lock, flags);
+
+		if (wait_for_transport)
+			down(&cmd->unsolicited_data_sem);
+
+		spin_lock_irqsave(&T_TASK(se_cmd)->t_state_lock, flags);
+		if (!(se_cmd->se_cmd_flags & SCF_SUPPORTED_SAM_OPCODE) ||
+		     (se_cmd->se_cmd_flags & SCF_SE_CMD_FAILED))
+			dump_unsolicited_data = 1;
+		spin_unlock_irqrestore(&T_TASK(se_cmd)->t_state_lock, flags);
+
+		if (dump_unsolicited_data) {
+			/*
+			 * Check if a delayed TASK_ABORTED status needs to
+			 * be sent now if the ISCSI_FLAG_CMD_FINAL has been
+			 * received with the unsolicited data out.
+			 */
+			if (hdr->flags & ISCSI_FLAG_CMD_FINAL)
+				iscsi_stop_dataout_timer(cmd);
+
+			transport_check_aborted_status(se_cmd,
+					(hdr->flags & ISCSI_FLAG_CMD_FINAL));
+			return iscsi_dump_data_payload(conn, payload_length, 1);
+		}
+	} else {
+		/*
+		 * For the normal solicited data path:
+		 *
+		 * Check for a delayed TASK_ABORTED status and dump any
+		 * incoming data out payload if one exists.  Also, when the
+		 * ISCSI_FLAG_CMD_FINAL is set to denote the end of the current
+		 * data out sequence, we decrement outstanding_r2ts.  Once
+		 * outstanding_r2ts reaches zero, go ahead and send the delayed
+		 * TASK_ABORTED status.
+		 */
+		if (atomic_read(&T_TASK(se_cmd)->t_transport_aborted) != 0) {
+			if (hdr->flags & ISCSI_FLAG_CMD_FINAL)
+				if (--cmd->outstanding_r2ts < 1) {
+					iscsi_stop_dataout_timer(cmd);
+					transport_check_aborted_status(
+							se_cmd, 1);
+				}
+
+			return iscsi_dump_data_payload(conn, payload_length, 1);
+		}
+	}
+	/*
+	 * Perform DataSN, DataSequenceInOrder, DataPDUInOrder, and
+	 * within-command recovery checks before receiving the payload.
+	 */
+	ret = iscsi_check_pre_dataout(cmd, buf);
+	if (ret == DATAOUT_WITHIN_COMMAND_RECOVERY)
+		return 0;
+	else if (ret == DATAOUT_CANNOT_RECOVER)
+		return -1;
+
+	rx_size += payload_length;
+	iov = &cmd->iov_data[0];
+
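+	/*
+	 * Map the incoming DataOUT payload into this command's scatterlist
+	 * memory via the per-command iovec array before calling rx_data().
+	 */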
+	memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+	memset((void *)&unmap_sg, 0, sizeof(struct se_unmap_sg));
+	map_sg.fabric_cmd = (void *)cmd;
+	map_sg.se_cmd = SE_CMD(cmd);
+	map_sg.iov = iov;
+	map_sg.sg_kmap_active = 1;
+	map_sg.data_length = payload_length;
+	map_sg.data_offset = hdr->offset;
+	unmap_sg.fabric_cmd = (void *)cmd;
+	unmap_sg.se_cmd = SE_CMD(cmd);
+
+	iov_ret = iscsi_set_iovec_ptrs(&map_sg, &unmap_sg);
+	if (iov_ret < 0)
+		return -1;
+
+	iov_count += iov_ret;
+
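+	/*
+	 * Data segments are padded out to a 4-byte word boundary (RFC 3720),
+	 * so receive any pad bytes along with the payload.
+	 */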
+	padding = ((-payload_length) & 3);
+	if (padding != 0) {
+		iov[iov_count].iov_base	= &pad_bytes;
+		iov[iov_count++].iov_len = padding;
+		rx_size += padding;
+		TRACE(TRACE_ISCSI, "Receiving %u padding bytes.\n", padding);
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		iov[iov_count].iov_base = &checksum;
+		iov[iov_count++].iov_len = CRC_LEN;
+		rx_size += CRC_LEN;
+	}
+
+	iscsi_map_SG_segments(&unmap_sg);
+
+	rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);
+
+	iscsi_unmap_SG_segments(&unmap_sg);
+
+	if (rx_got != rx_size)
+		return -1;
+
+	if (CONN_OPS(conn)->DataDigest) {
+		__u32 counter = payload_length, data_crc = 0;
+		struct iovec *iov_ptr = &cmd->iov_data[0];
+		struct scatterlist sg;
+		/*
+		 * The receive path consumes the passed iovecs, so call
+		 * iscsi_set_iovec_ptrs() again in order to have an iMD/PSCSI
+		 * agnostic way of doing DataDigest computations.
+		 */
+		memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+		map_sg.fabric_cmd = (void *)cmd;
+		map_sg.se_cmd = SE_CMD(cmd);
+		map_sg.iov = iov_ptr;
+		map_sg.data_length = payload_length;
+		map_sg.data_offset = hdr->offset;
+
+		if (iscsi_set_iovec_ptrs(&map_sg, &unmap_sg) < 0)
+			return -1;
+
+		crypto_hash_init(&conn->conn_rx_hash);
+
+		while (counter > 0) {
+			sg_init_one(&sg, iov_ptr->iov_base,
+					iov_ptr->iov_len);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					iov_ptr->iov_len);
+
+			TRACE(TRACE_DIGEST, "Computed CRC32C DataDigest %zu"
+				" bytes, CRC 0x%08x\n", iov_ptr->iov_len,
+				data_crc);
+			counter -= iov_ptr->iov_len;
+			iov_ptr++;
+		}
+
+		if (padding) {
+			sg_init_one(&sg, (__u8 *)&pad_bytes, padding);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					padding);
+			TRACE(TRACE_DIGEST, "Computed CRC32C DataDigest %d"
+				" bytes of padding, CRC 0x%08x\n",
+				padding, data_crc);
+		}
+		crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+		if (checksum != data_crc) {
+			printk(KERN_ERR "ITT: 0x%08x, Offset: %u, Length: %u,"
+				" DataSN: 0x%08x, CRC32C DataDigest 0x%08x"
+				" does not match computed 0x%08x\n",
+				hdr->itt, hdr->offset, payload_length,
+				hdr->datasn, checksum, data_crc);
+			data_crc_failed = 1;
+		} else {
+			TRACE(TRACE_DIGEST, "Got CRC32C DataDigest 0x%08x for"
+				" %u bytes of Data Out\n", checksum,
+				payload_length);
+		}
+	}
+	/*
+	 * Increment post receive data and CRC values or perform
+	 * within-command recovery.
+	 */
+	ret = iscsi_check_post_dataout(cmd, buf, data_crc_failed);
+	if ((ret == DATAOUT_NORMAL) || (ret == DATAOUT_WITHIN_COMMAND_RECOVERY))
+		return 0;
+	else if (ret == DATAOUT_SEND_R2T) {
+		iscsi_set_dataout_sequence_values(cmd);
+		iscsi_build_r2ts_for_cmd(cmd, conn, 0);
+	} else if (ret == DATAOUT_SEND_TO_TRANSPORT) {
+		/*
+		 * Handle extra special case for out of order
+		 * Unsolicited Data Out.
+		 */
+		spin_lock_bh(&cmd->istate_lock);
+		ooo_cmdsn = (cmd->cmd_flags & ICF_OOO_CMDSN);
+		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
+		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+		spin_unlock_bh(&cmd->istate_lock);
+
+		iscsi_stop_dataout_timer(cmd);
+		return (!ooo_cmdsn) ? transport_generic_handle_data(
+					SE_CMD(cmd)) : 0;
+	} else /* DATAOUT_CANNOT_RECOVER */
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_handle_nop_out():
+ *
+ *
+ */
+static inline int iscsi_handle_nop_out(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	unsigned char *ping_data = NULL;
+	int cmdsn_ret, niov = 0, ret = 0, rx_got, rx_size;
+	u32 checksum, data_crc, padding = 0, payload_length;
+	u64 lun;
+	struct iscsi_cmd *cmd = NULL;
+	struct iovec *iov = NULL;
+	struct iscsi_nopout *hdr;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_nopout *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	lun			= get_unaligned_le64(&hdr->lun[0]);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+
+	if ((hdr->itt == 0xFFFFFFFF) && !(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		printk(KERN_ERR "NOPOUT ITT is reserved, but Immediate Bit is"
+			" not set, protocol error.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "NOPOUT Ping Data DataSegmentLength: %u is"
+			" greater than MaxRecvDataSegmentLength: %u, protocol"
+			" error.\n", payload_length,
+			CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	TRACE(TRACE_ISCSI, "Got NOPOUT Ping %s ITT: 0x%08x, TTT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, Length: %u\n",
+		(hdr->itt == 0xFFFFFFFF) ? "Response" : "Request",
+		hdr->itt, hdr->ttt, hdr->cmdsn, hdr->exp_statsn,
+		payload_length);
+	/*
+	 * This is not a response to an Unsolicited NopIN, which means
+	 * it can either be a NOPOUT ping request (with a valid ITT),
+	 * or a NOPOUT not requesting a NOPIN (with a reserved ITT).
+	 * Either way, make sure we allocate a struct iscsi_cmd, as both
+	 * can contain ping data.
+	 */
+	if (hdr->ttt == 0xFFFFFFFF) {
+		cmd = iscsi_allocate_cmd(conn);
+		if (!(cmd))
+			return iscsi_add_reject(
+					ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, buf, conn);
+
+		cmd->iscsi_opcode	= ISCSI_OP_NOOP_OUT;
+		cmd->i_state		= ISTATE_SEND_NOPIN;
+		cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ?
+						1 : 0);
+		SESS(conn)->init_task_tag = cmd->init_task_tag = hdr->itt;
+		cmd->targ_xfer_tag	= 0xFFFFFFFF;
+		cmd->cmd_sn		= hdr->cmdsn;
+		cmd->exp_stat_sn	= hdr->exp_statsn;
+		cmd->data_direction	= DMA_NONE;
+	}
+
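+	/*
+	 * Receive the optional ping data into a locally allocated buffer,
+	 * along with any padding and the DataDigest if one was negotiated.
+	 */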
+	if (payload_length && (hdr->ttt == 0xFFFFFFFF)) {
+		rx_size = payload_length;
+		ping_data = kzalloc(payload_length + 1, GFP_KERNEL);
+		if (!(ping_data)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" NOPOUT ping data.\n");
+			ret = -1;
+			goto out;
+		}
+
+		iov = &cmd->iov_misc[0];
+		iov[niov].iov_base	= ping_data;
+		iov[niov++].iov_len	= payload_length;
+
+		padding = ((-payload_length) & 3);
+		if (padding != 0) {
+			TRACE(TRACE_ISCSI, "Receiving %u additional bytes"
+				" for padding.\n", padding);
+			iov[niov].iov_base	= &cmd->pad_bytes;
+			iov[niov++].iov_len	= padding;
+			rx_size += padding;
+		}
+		if (CONN_OPS(conn)->DataDigest) {
+			iov[niov].iov_base	= &checksum;
+			iov[niov++].iov_len	= CRC_LEN;
+			rx_size += CRC_LEN;
+		}
+
+		rx_got = rx_data(conn, &cmd->iov_misc[0], niov, rx_size);
+		if (rx_got != rx_size) {
+			ret = -1;
+			goto out;
+		}
+
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_rx_hash);
+
+			sg_init_one(&sg, (u8 *)ping_data, payload_length);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					payload_length);
+
+			if (padding) {
+				sg_init_one(&sg, (u8 *)&cmd->pad_bytes,
+					padding);
+				crypto_hash_update(&conn->conn_rx_hash, &sg,
+					padding);
+			}
+			crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+			if (checksum != data_crc) {
+				printk(KERN_ERR "Ping data CRC32C DataDigest"
+				" 0x%08x does not match computed 0x%08x\n",
+					checksum, data_crc);
+				if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+					printk(KERN_ERR "Unable to recover from"
+					" NOPOUT Ping DataCRC failure while in"
+						" ERL=0.\n");
+					ret = -1;
+					goto out;
+				} else {
+					/*
+					 * Silently drop this PDU and let the
+					 * initiator plug the CmdSN gap.
+					 */
+					TRACE(TRACE_ERL1, "Dropping NOPOUT"
+					" Command CmdSN: 0x%08x due to"
+					" DataCRC error.\n", hdr->cmdsn);
+					ret = 0;
+					goto out;
+				}
+			} else {
+				TRACE(TRACE_DIGEST, "Got CRC32C DataDigest"
+				" 0x%08x for %u bytes of ping data.\n",
+					checksum, payload_length);
+			}
+		}
+
+		ping_data[payload_length] = '\0';
+		/*
+		 * Attach ping data to struct iscsi_cmd->buf_ptr.
+		 */
+		cmd->buf_ptr = (void *)ping_data;
+		cmd->buf_ptr_size = payload_length;
+
+		TRACE(TRACE_ISCSI, "Got %u bytes of NOPOUT ping"
+			" data.\n", payload_length);
+		TRACE(TRACE_ISCSI, "Ping Data: \"%s\"\n", ping_data);
+	}
+
+	if (hdr->itt != 0xFFFFFFFF) {
+		if (!cmd) {
+			printk(KERN_ERR "Checking CmdSN for NOPOUT,"
+				" but cmd is NULL!\n");
+			return -1;
+		}
+
+		/*
+		 * Initiator is expecting a NopIN ping reply,
+		 */
+		iscsi_attach_cmd_to_queue(conn, cmd);
+
+		iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+		if (hdr->opcode & ISCSI_OP_IMMEDIATE) {
+			iscsi_add_cmd_to_response_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		}
+
+		cmdsn_ret = iscsi_check_received_cmdsn(conn, cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		    (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)) {
+			return 0;
+		} else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			ret = 0;
+			goto ping_out;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+
+		return 0;
+	}
+
+	if (hdr->ttt != 0xFFFFFFFF) {
+		/*
+		 * This was a response to an unsolicited NOPIN ping.
+		 */
+		cmd = iscsi_find_cmd_from_ttt(conn, hdr->ttt);
+		if (!(cmd))
+			return -1;
+
+		iscsi_stop_nopin_response_timer(conn);
+
+		cmd->i_state = ISTATE_REMOVE;
+		iscsi_add_cmd_to_immediate_queue(cmd, conn, cmd->i_state);
+		iscsi_start_nopin_timer(conn);
+	} else {
+		/*
+		 * Initiator is not expecting a NOPIN in response.
+		 * Just ignore for now.
+		 *
+		 * iSCSI v19-91 10.18
+		 * "A NOP-OUT may also be used to confirm a changed
+		 *  ExpStatSN if another PDU will not be available
+		 *  for a long time."
+		 */
+		ret = 0;
+		goto out;
+	}
+
+	return 0;
+out:
+	if (cmd)
+		__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+ping_out:
+	kfree(ping_data);
+	return ret;
+}
+
+/*	iscsi_handle_task_mgt_cmd():
+ *
+ *
+ */
+static inline int iscsi_handle_task_mgt_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_cmd *cmd;
+	struct se_tmr_req *se_tmr;
+	struct iscsi_tmr_req *tmr_req;
+	struct iscsi_tm *hdr;
+	u32 payload_length;
+	int cmdsn_ret, out_of_order_cmdsn = 0, ret;
+	u8 function;
+
+	hdr			= (struct iscsi_tm *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->rtt		= be32_to_cpu(hdr->rtt);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+	hdr->refcmdsn		= be32_to_cpu(hdr->refcmdsn);
+	hdr->exp_datasn		= be32_to_cpu(hdr->exp_datasn);
+	hdr->flags &= ~ISCSI_FLAG_CMD_FINAL;
+	function = hdr->flags;
+
+	TRACE(TRACE_ISCSI, "Got Task Management Request ITT: 0x%08x, CmdSN:"
+		" 0x%08x, Function: 0x%02x, RefTaskTag: 0x%08x, RefCmdSN:"
+		" 0x%08x, CID: %hu\n", hdr->itt, hdr->cmdsn, function,
+		hdr->rtt, hdr->refcmdsn, conn->cid);
+
+	if ((function != ISCSI_TM_FUNC_ABORT_TASK) &&
+	    ((function != ISCSI_TM_FUNC_TASK_REASSIGN) &&
+	     (hdr->rtt != ISCSI_RESERVED_TAG))) {
+		printk(KERN_ERR "RefTaskTag should be set to 0xFFFFFFFF.\n");
+		hdr->rtt = ISCSI_RESERVED_TAG;
+	}
+
+	if ((function == ISCSI_TM_FUNC_TASK_REASSIGN) &&
+			!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		printk(KERN_ERR "Task Management Request TASK_REASSIGN not"
+			" issued as immediate command, bad iSCSI Initiator"
+			" implementation\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+	if ((function != ISCSI_TM_FUNC_ABORT_TASK) &&
+	    (hdr->refcmdsn != ISCSI_RESERVED_TAG))
+		hdr->refcmdsn = ISCSI_RESERVED_TAG;
+
+	cmd = iscsi_allocate_se_cmd_for_tmr(conn, function);
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, buf, conn);
+
+	cmd->iscsi_opcode	= ISCSI_OP_SCSI_TMFUNC;
+	cmd->i_state		= ISTATE_SEND_TASKMGTRSP;
+	cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	cmd->init_task_tag	= hdr->itt;
+	cmd->targ_xfer_tag	= 0xFFFFFFFF;
+	cmd->cmd_sn		= hdr->cmdsn;
+	cmd->exp_stat_sn	= hdr->exp_statsn;
+	se_tmr			= SE_CMD(cmd)->se_tmr_req;
+	tmr_req			= cmd->tmr_req;
+	/*
+	 * Locate the struct se_lun for all TMRs not related to ERL=2 TASK_REASSIGN
+	 */
+	if (function != ISCSI_TM_FUNC_TASK_REASSIGN) {
+		ret = iscsi_get_lun_for_tmr(cmd,
+				get_unaligned_le64(&hdr->lun[0]));
+		if (ret < 0) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			se_tmr->response = ISCSI_TMF_RSP_NO_LUN;
+			goto attach;
+		}
+	}
+
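+	/*
+	 * Dispatch on the TMR function.  Handlers that fail set an iSCSI
+	 * TMF response code and jump to attach; otherwise the request is
+	 * handed to the transport or queued for a response below.
+	 */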
+	switch (function) {
+	case ISCSI_TM_FUNC_ABORT_TASK:
+		se_tmr->response = iscsi_tmr_abort_task(cmd, buf);
+		if (se_tmr->response != ISCSI_TMF_RSP_COMPLETE) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			goto attach;
+		}
+		break;
+	case ISCSI_TM_FUNC_ABORT_TASK_SET:
+	case ISCSI_TM_FUNC_CLEAR_ACA:
+	case ISCSI_TM_FUNC_CLEAR_TASK_SET:
+	case ISCSI_TM_FUNC_LOGICAL_UNIT_RESET:
+		break;
+	case ISCSI_TM_FUNC_TARGET_WARM_RESET:
+		if (iscsi_tmr_task_warm_reset(conn, tmr_req, buf) < 0) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			se_tmr->response = ISCSI_TMF_RSP_AUTH_FAILED;
+			goto attach;
+		}
+		break;
+	case ISCSI_TM_FUNC_TARGET_COLD_RESET:
+		if (iscsi_tmr_task_cold_reset(conn, tmr_req, buf) < 0) {
+			SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+			se_tmr->response = ISCSI_TMF_RSP_AUTH_FAILED;
+			goto attach;
+		}
+		break;
+	case ISCSI_TM_FUNC_TASK_REASSIGN:
+		se_tmr->response = iscsi_tmr_task_reassign(cmd, buf);
+		/*
+		 * Perform sanity checks on the ExpDataSN only if the
+		 * TASK_REASSIGN was successful.
+		 */
+		if (se_tmr->response != ISCSI_TMF_RSP_COMPLETE)
+			break;
+
+		if (iscsi_check_task_reassign_expdatasn(tmr_req, conn) < 0)
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_BOOKMARK_INVALID, 1, 1,
+					buf, cmd);
+		break;
+	default:
+		printk(KERN_ERR "Unknown TMR function: 0x%02x, protocol"
+			" error.\n", function);
+		SE_CMD(cmd)->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+		se_tmr->response = ISCSI_TMF_RSP_NOT_SUPPORTED;
+		goto attach;
+	}
+
+	if ((function != ISCSI_TM_FUNC_TASK_REASSIGN) &&
+	    (se_tmr->response == ISCSI_TMF_RSP_COMPLETE))
+		se_tmr->call_transport = 1;
+attach:
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn,
+				cmd, hdr->cmdsn);
+		if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)
+			out_of_order_cmdsn = 1;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		} else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) {
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	}
+	iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	if (out_of_order_cmdsn)
+		return 0;
+	/*
+	 * Found the referenced task, send to transport for processing.
+	 */
+	if (se_tmr->call_transport)
+		return transport_generic_handle_tmr(SE_CMD(cmd));
+
+	/*
+	 * The referenced LUN or task could not be found, or the Task
+	 * Management function was not authorized or supported.  Change
+	 * state and let the tx_thread send the response.
+	 *
+	 * For connection recovery, this is also the default action for
+	 * TMR TASK_REASSIGN.
+	 */
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+	return 0;
+}
+
+/* 	iscsi_handle_text_cmd():
+ *
+ *
+ */
+/* #warning FIXME: Support Text Command parameters besides SendTargets */
+static inline int iscsi_handle_text_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	char *text_ptr, *text_in;
+	int cmdsn_ret, niov = 0, rx_got, rx_size;
+	u32 checksum = 0, data_crc = 0, payload_length;
+	u32 padding = 0, pad_bytes = 0, text_length = 0;
+	struct iscsi_cmd *cmd;
+	struct iovec iov[3];
+	struct iscsi_text *hdr;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_text *) buf;
+	payload_length		= ntoh24(hdr->dlength);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+
+	if (payload_length > CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "Unable to accept text parameter length: %u"
+			" greater than MaxRecvDataSegmentLength %u.\n",
+		       payload_length, CONN_OPS(conn)->MaxRecvDataSegmentLength);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	TRACE(TRACE_ISCSI, "Got Text Request: ITT: 0x%08x, CmdSN: 0x%08x,"
+		" ExpStatSN: 0x%08x, Length: %u\n", hdr->itt, hdr->cmdsn,
+		hdr->exp_statsn, payload_length);
+
+	rx_size = text_length = payload_length;
+	if (text_length) {
+		text_in = kzalloc(text_length, GFP_KERNEL);
+		if (!(text_in)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" incoming text parameters\n");
+			return -1;
+		}
+
+		memset(iov, 0, 3 * sizeof(struct iovec));
+		iov[niov].iov_base	= text_in;
+		iov[niov++].iov_len	= text_length;
+
+		padding = ((-payload_length) & 3);
+		if (padding != 0) {
+			iov[niov].iov_base = &pad_bytes;
+			iov[niov++].iov_len  = padding;
+			rx_size += padding;
+			TRACE(TRACE_ISCSI, "Receiving %u additional bytes"
+					" for padding.\n", padding);
+		}
+		if (CONN_OPS(conn)->DataDigest) {
+			iov[niov].iov_base	= &checksum;
+			iov[niov++].iov_len	= CRC_LEN;
+			rx_size += CRC_LEN;
+		}
+
+		rx_got = rx_data(conn, &iov[0], niov, rx_size);
+		if (rx_got != rx_size) {
+			kfree(text_in);
+			return -1;
+		}
+
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_rx_hash);
+
+			sg_init_one(&sg, (u8 *)text_in, text_length);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					text_length);
+
+			if (padding) {
+				sg_init_one(&sg, (u8 *)&pad_bytes, padding);
+				crypto_hash_update(&conn->conn_rx_hash, &sg,
+						padding);
+			}
+			crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);	
+
+			if (checksum != data_crc) {
+				printk(KERN_ERR "Text data CRC32C DataDigest"
+					" 0x%08x does not match computed"
+					" 0x%08x\n", checksum, data_crc);
+				if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+					printk(KERN_ERR "Unable to recover from"
+					" Text Data digest failure while in"
+						" ERL=0.\n");
+					kfree(text_in);
+					return -1;
+				} else {
+					/*
+					 * Silently drop this PDU and let the
+					 * initiator plug the CmdSN gap.
+					 */
+					TRACE(TRACE_ERL1, "Dropping Text"
+					" Command CmdSN: 0x%08x due to"
+					" DataCRC error.\n", hdr->cmdsn);
+					kfree(text_in);
+					return 0;
+				}
+			} else {
+				TRACE(TRACE_DIGEST, "Got CRC32C DataDigest"
+					" 0x%08x for %u bytes of text data.\n",
+						checksum, text_length);
+			}
+		}
+		text_in[text_length - 1] = '\0';
+		TRACE(TRACE_ISCSI, "Successfully read %d bytes of text"
+				" data.\n", text_length);
+
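+		/*
+		 * Only the SendTargets=All key is currently accepted in
+		 * Text Requests; anything else is treated as an error.
+		 */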
+		if (strncmp("SendTargets", text_in, 11) != 0) {
+			printk(KERN_ERR "Received Text Data that is not"
+				" SendTargets, cannot continue.\n");
+			kfree(text_in);
+			return -1;
+		}
+		text_ptr = strchr(text_in, '=');
+		if (!(text_ptr)) {
+			printk(KERN_ERR "No \"=\" separator found in Text Data,"
+				"  cannot continue.\n");
+			kfree(text_in);
+			return -1;
+		}
+		if (strncmp("=All", text_ptr, 4) != 0) {
+			printk(KERN_ERR "Unable to locate All value for"
+				" SendTargets key,  cannot continue.\n");
+			kfree(text_in);
+			return -1;
+		}
+/*#warning Support SendTargets=(iSCSI Target Name/Nothing) values. */
+		kfree(text_in);
+	}
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+					1, buf, conn);
+
+	cmd->iscsi_opcode	= ISCSI_OP_TEXT;
+	cmd->i_state		= ISTATE_SEND_TEXTRSP;
+	cmd->immediate_cmd	= ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	SESS(conn)->init_task_tag = cmd->init_task_tag	= hdr->itt;
+	cmd->targ_xfer_tag	= 0xFFFFFFFF;
+	cmd->cmd_sn		= hdr->cmdsn;
+	cmd->exp_stat_sn	= hdr->exp_statsn;
+	cmd->data_direction	= DMA_NONE;
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+	iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn, cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		     (cmdsn_ret == CMDSN_HIGHER_THAN_EXP))
+			return 0;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+						ISTATE_REMOVE);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+
+		return 0;
+	}
+
+	return iscsi_execute_cmd(cmd, 0);
+}
+
+/*	iscsi_logout_closesession():
+ *
+ *
+ */
+int iscsi_logout_closesession(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_conn *conn_p;
+	struct iscsi_session *sess = SESS(conn);
+
+	TRACE(TRACE_ISCSI, "Received logout request CLOSESESSION on CID: %hu"
+		" for SID: %u.\n", conn->cid, SESS(conn)->sid);
+
+	atomic_set(&sess->session_logout, 1);
+	atomic_set(&conn->conn_logout_remove, 1);
+	conn->conn_logout_reason = ISCSI_LOGOUT_REASON_CLOSE_SESSION;
+
+	iscsi_inc_conn_usage_count(conn);
+	iscsi_inc_session_usage_count(sess);
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn_p, &sess->sess_conn_list, conn_list) {
+		if (conn_p->conn_state != TARG_CONN_STATE_LOGGED_IN)
+			continue;
+
+		TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGOUT.\n");
+		conn_p->conn_state = TARG_CONN_STATE_IN_LOGOUT;
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_logout_closeconnection():
+ *
+ *
+ */
+int iscsi_logout_closeconnection(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_conn *l_conn;
+	struct iscsi_session *sess = SESS(conn);
+
+	TRACE(TRACE_ISCSI, "Received logout request CLOSECONNECTION for CID:"
+		" %hu on CID: %hu.\n", cmd->logout_cid, conn->cid);
+
+	/*
+	 * A Logout Request with a CLOSECONNECTION reason code for a CID
+	 * can arrive on a connection with a differing CID.
+	 */
+	if (conn->cid == cmd->logout_cid) {
+		spin_lock_bh(&conn->state_lock);
+		TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGOUT.\n");
+		conn->conn_state = TARG_CONN_STATE_IN_LOGOUT;
+
+		atomic_set(&conn->conn_logout_remove, 1);
+		conn->conn_logout_reason = ISCSI_LOGOUT_REASON_CLOSE_CONNECTION;
+		iscsi_inc_conn_usage_count(conn);
+
+		spin_unlock_bh(&conn->state_lock);
+	} else {
+		/*
+		 * Handle CLOSECONNECTION requests for a different CID in
+		 * iscsi_logout_post_handler_diffcid() so as to give enough
+		 * time for any non-immediate command's CmdSN to be
+		 * acknowledged on the connection in question.
+		 *
+		 * Here we simply make sure the CID is still around.
+		 */
+		l_conn = iscsi_get_conn_from_cid(sess,
+				cmd->logout_cid);
+		if (!(l_conn)) {
+			cmd->logout_response = ISCSI_LOGOUT_CID_NOT_FOUND;
+			iscsi_add_cmd_to_response_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		}
+
+		iscsi_dec_conn_usage_count(l_conn);
+	}
+
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_logout_removeconnforrecovery():
+ *
+ *
+ */
+int iscsi_logout_removeconnforrecovery(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+	TRACE(TRACE_ERL2, "Received explicit REMOVECONNFORRECOVERY logout for"
+		" CID: %hu on CID: %hu.\n", cmd->logout_cid, conn->cid);
+
+	if (SESS_OPS(sess)->ErrorRecoveryLevel != 2) {
+		printk(KERN_ERR "Received Logout Request REMOVECONNFORRECOVERY"
+			" while ERL!=2.\n");
+		cmd->logout_response = ISCSI_LOGOUT_RECOVERY_UNSUPPORTED;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		return 0;
+	}
+
+	if (conn->cid == cmd->logout_cid) {
+		printk(KERN_ERR "Received Logout Request REMOVECONNFORRECOVERY"
+			" with CID: %hu on CID: %hu, implementation error.\n",
+				cmd->logout_cid, conn->cid);
+		cmd->logout_response = ISCSI_LOGOUT_CLEANUP_FAILED;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		return 0;
+	}
+
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_handle_logout_cmd():
+ *
+ *
+ */
+static inline int iscsi_handle_logout_cmd(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	int cmdsn_ret, logout_remove = 0;
+	u8 reason_code = 0;
+	struct iscsi_cmd *cmd;
+	struct iscsi_logout *hdr;
+
+	hdr			= (struct iscsi_logout *) buf;
+	reason_code		= (hdr->flags & 0x7f);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->cid		= be16_to_cpu(hdr->cid);
+	hdr->cmdsn		= be32_to_cpu(hdr->cmdsn);
+	hdr->exp_statsn	= be32_to_cpu(hdr->exp_statsn);
+
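+	/*
+	 * Update the logout statistics for the target (TIQN) this
+	 * connection belongs to.
+	 */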
+	{
+		struct iscsi_tiqn *tiqn = iscsi_snmp_get_tiqn(conn);
+
+		if (tiqn) {
+			spin_lock(&tiqn->logout_stats.lock);
+			if (reason_code == ISCSI_LOGOUT_REASON_CLOSE_SESSION)
+				tiqn->logout_stats.normal_logouts++;
+			else
+				tiqn->logout_stats.abnormal_logouts++;
+			spin_unlock(&tiqn->logout_stats.lock);
+		}
+	}
+
+	TRACE(TRACE_ISCSI, "Got Logout Request ITT: 0x%08x CmdSN: 0x%08x"
+		" ExpStatSN: 0x%08x Reason: 0x%02x CID: %hu on CID: %hu\n",
+		hdr->itt, hdr->cmdsn, hdr->exp_statsn, reason_code,
+		hdr->cid, conn->cid);
+
+	if (conn->conn_state != TARG_CONN_STATE_LOGGED_IN) {
+		printk(KERN_ERR "Received logout request on connection that"
+			" is not in logged in state, ignoring request.\n");
+		return 0;
+	}
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return iscsi_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES, 1,
+					buf, conn);
+
+	cmd->iscsi_opcode       = ISCSI_OP_LOGOUT;
+	cmd->i_state            = ISTATE_SEND_LOGOUTRSP;
+	cmd->immediate_cmd      = ((hdr->opcode & ISCSI_OP_IMMEDIATE) ? 1 : 0);
+	SESS(conn)->init_task_tag = cmd->init_task_tag  = hdr->itt;
+	cmd->targ_xfer_tag      = 0xFFFFFFFF;
+	cmd->cmd_sn             = hdr->cmdsn;
+	cmd->exp_stat_sn        = hdr->exp_statsn;
+	cmd->logout_cid         = hdr->cid;
+	cmd->logout_reason      = reason_code;
+	cmd->data_direction     = DMA_NONE;
+
+	/*
+	 * We need to sleep in these cases (by returning 1) until the Logout
+	 * Response gets sent in the tx thread.
+	 */
+	if ((reason_code == ISCSI_LOGOUT_REASON_CLOSE_SESSION) ||
+	   ((reason_code == ISCSI_LOGOUT_REASON_CLOSE_CONNECTION) &&
+	    (hdr->cid == conn->cid)))
+		logout_remove = 1;
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	if (reason_code != ISCSI_LOGOUT_REASON_RECOVERY)
+		iscsi_ack_from_expstatsn(conn, hdr->exp_statsn);
+
+	/*
+	 * Non-Immediate Logout Commands are executed in CmdSN order..
+	 */
+	if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
+		cmdsn_ret = iscsi_check_received_cmdsn(conn, cmd, hdr->cmdsn);
+		if ((cmdsn_ret == CMDSN_NORMAL_OPERATION) ||
+		    (cmdsn_ret == CMDSN_HIGHER_THAN_EXP))
+			return logout_remove;
+		else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			cmd->i_state = ISTATE_REMOVE;
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			return 0;
+		} else { /* (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) */
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+		}
+	}
+	/*
+	 * Immediate Logout Commands are executed, well, Immediately.
+	 */
+	if (iscsi_execute_cmd(cmd, 0) < 0)
+		return -1;
+
+	return logout_remove;
+}
+
+/*	iscsi_handle_snack():
+ *
+ *
+ */
+static inline int iscsi_handle_snack(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	u32 debug_type, unpacked_lun;
+	u64 lun;
+	struct iscsi_snack *hdr;
+
+	hdr			= (struct iscsi_snack *) buf;
+	hdr->flags		&= ~ISCSI_FLAG_CMD_FINAL;
+	lun			= get_unaligned_le64(&hdr->lun[0]);
+	unpacked_lun		= iscsi_unpack_lun((unsigned char *)&lun);
+	hdr->itt		= be32_to_cpu(hdr->itt);
+	hdr->ttt		= be32_to_cpu(hdr->ttt);
+	hdr->exp_statsn		= be32_to_cpu(hdr->exp_statsn);
+	hdr->begrun		= be32_to_cpu(hdr->begrun);
+	hdr->runlength		= be32_to_cpu(hdr->runlength);
+
+	debug_type = (hdr->flags & 0x02) ? TRACE_ISCSI : TRACE_ERL1;
+	TRACE(debug_type, "Got ISCSI_INIT_SNACK, ITT: 0x%08x, ExpStatSN:"
+		" 0x%08x, Type: 0x%02x, BegRun: 0x%08x, RunLength: 0x%08x,"
+		" CID: %hu\n", hdr->itt, hdr->exp_statsn, hdr->flags,
+			hdr->begrun, hdr->runlength, conn->cid);
+
+	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+		printk(KERN_ERR "Initiator sent SNACK request while in"
+			" ErrorRecoveryLevel=0.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+	/*
+	 * SNACK_DATA and SNACK_R2T are both 0,  so check which function to
+	 * call from inside iscsi_send_recovery_datain_or_r2t().
+	 */
+	switch (hdr->flags & ISCSI_FLAG_SNACK_TYPE_MASK) {
+	case 0:
+		return iscsi_handle_recovery_datain_or_r2t(conn, buf,
+			hdr->itt, hdr->ttt, hdr->begrun, hdr->runlength);
+	case ISCSI_FLAG_SNACK_TYPE_STATUS:
+		return iscsi_handle_status_snack(conn, hdr->itt, hdr->ttt,
+			hdr->begrun, hdr->runlength);
+	case ISCSI_FLAG_SNACK_TYPE_DATA_ACK:
+		return iscsi_handle_data_ack(conn, hdr->ttt, hdr->begrun,
+			hdr->runlength);
+	case ISCSI_FLAG_SNACK_TYPE_RDATA:
+		/* FIXME: Support R-Data SNACK */
+		printk(KERN_ERR "R-Data SNACK Not Supported.\n");
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	default:
+		printk(KERN_ERR "Unknown SNACK type 0x%02x, protocol"
+			" error.\n", hdr->flags & 0x0f);
+		return iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buf, conn);
+	}
+
+	return 0;
+}
+
+/*	iscsi_handle_immediate_data():
+ *
+ *
+ */
+static int iscsi_handle_immediate_data(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	__u32 length)
+{
+	int iov_ret, rx_got = 0, rx_size = 0;
+	__u32 checksum, iov_count = 0, padding = 0, pad_bytes = 0;
+	struct iscsi_conn *conn = cmd->conn;
+	struct se_map_sg map_sg;
+	struct se_unmap_sg unmap_sg;
+	struct iovec *iov;
+
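+	/*
+	 * Map the Immediate Data payload into the command's scatterlist
+	 * memory via iovecs, starting at the current write_data_done offset.
+	 */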
+	memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+	memset((void *)&unmap_sg, 0, sizeof(struct se_unmap_sg));
+	map_sg.fabric_cmd = (void *)cmd;
+	map_sg.se_cmd = SE_CMD(cmd);
+	map_sg.sg_kmap_active = 1;
+	map_sg.iov = &cmd->iov_data[0];
+	map_sg.data_length = length;
+	map_sg.data_offset = cmd->write_data_done;
+	unmap_sg.fabric_cmd = (void *)cmd;
+	unmap_sg.se_cmd = SE_CMD(cmd);
+
+	iov_ret = iscsi_set_iovec_ptrs(&map_sg, &unmap_sg);
+	if (iov_ret < 0)
+		return IMMEDIDATE_DATA_CANNOT_RECOVER;
+
+	rx_size = length;
+	iov_count = iov_ret;
+	iov = &cmd->iov_data[0];
+
+	padding = ((-length) & 3);
+	if (padding != 0) {
+		iov[iov_count].iov_base	= &pad_bytes;
+		iov[iov_count++].iov_len = padding;
+		rx_size += padding;
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		iov[iov_count].iov_base 	= &checksum;
+		iov[iov_count++].iov_len 	= CRC_LEN;
+		rx_size += CRC_LEN;
+	}
+
+	iscsi_map_SG_segments(&unmap_sg);
+
+	rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);
+
+	iscsi_unmap_SG_segments(&unmap_sg);
+
+	if (rx_got != rx_size) {
+		iscsi_rx_thread_wait_for_TCP(conn);
+		return IMMEDIDATE_DATA_CANNOT_RECOVER;
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		__u32 counter = length, data_crc = 0;
+		struct iovec *iov_ptr = &cmd->iov_data[0];
+		struct scatterlist sg;
+		/*
+		 * The receive path consumes the passed iovecs, so call
+		 * iscsi_set_iovec_ptrs() again in order to have an iMD/PSCSI
+		 * agnostic way of doing DataDigest computations.
+		 */
+		memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+		map_sg.fabric_cmd = (void *)cmd;
+		map_sg.se_cmd = SE_CMD(cmd);
+		map_sg.iov = iov_ptr;
+		map_sg.data_length = length;
+		map_sg.data_offset = cmd->write_data_done;
+
+		if (iscsi_set_iovec_ptrs(&map_sg, &unmap_sg) < 0)
+			return IMMEDIDATE_DATA_CANNOT_RECOVER;
+
+		crypto_hash_init(&conn->conn_rx_hash);
+
+		while (counter > 0) {
+			sg_init_one(&sg, iov_ptr->iov_base,
+					iov_ptr->iov_len);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					iov_ptr->iov_len);
+
+			TRACE(TRACE_DIGEST, "Computed CRC32C DataDigest %zu"
+			" bytes, CRC 0x%08x\n", iov_ptr->iov_len, data_crc);
+			counter -= iov_ptr->iov_len;
+			iov_ptr++;
+		}
+
+		if (padding) {
+			sg_init_one(&sg, (__u8 *)&pad_bytes, padding);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					padding);
+			TRACE(TRACE_DIGEST, "Computed CRC32C DataDigest %d"
+			" bytes of padding, CRC 0x%08x\n", padding, data_crc);
+		}
+		crypto_hash_final(&conn->conn_rx_hash, (u8 *)&data_crc);
+
+		if (checksum != data_crc) {
+			printk(KERN_ERR "ImmediateData CRC32C DataDigest 0x%08x"
+				" does not match computed 0x%08x\n", checksum,
+				data_crc);
+
+			if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+				printk(KERN_ERR "Unable to recover from"
+					" Immediate Data digest failure while"
+					" in ERL=0.\n");
+				iscsi_add_reject_from_cmd(
+						ISCSI_REASON_DATA_DIGEST_ERROR,
+						1, 0, buf, cmd);
+				return IMMEDIDATE_DATA_CANNOT_RECOVER;
+			} else {
+				iscsi_add_reject_from_cmd(
+						ISCSI_REASON_DATA_DIGEST_ERROR,
+						0, 0, buf, cmd);
+				return IMMEDIDATE_DATA_ERL1_CRC_FAILURE;
+			}
+		} else {
+			TRACE(TRACE_DIGEST, "Got CRC32C DataDigest 0x%08x for"
+				" %u bytes of Immediate Data\n", checksum,
+				length);
+		}
+	}
+
+	cmd->write_data_done += length;
+
+	if (cmd->write_data_done == cmd->data_length) {
+		spin_lock_bh(&cmd->istate_lock);
+		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
+		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+		spin_unlock_bh(&cmd->istate_lock);
+	}
+
+	return IMMEDIDATE_DATA_NORMAL_OPERATION;
+}
+
+/*	iscsi_send_async_msg():
+ *
+ *	FIXME: Support SCSI AEN.
+ */
+int iscsi_send_async_msg(
+	struct iscsi_conn *conn,
+	u16 cid,
+	u8 async_event,
+	u8 async_vcode)
+{
+	u8 iscsi_hdr[ISCSI_HDR_LEN+CRC_LEN];
+	u32 tx_send = ISCSI_HDR_LEN, tx_sent = 0;
+	struct timer_list async_msg_timer;
+	struct iscsi_async *hdr;
+	struct iovec iov;
+	struct scatterlist sg;
+
+	memset((void *)&iov, 0, sizeof(struct iovec));
+	memset((void *)&iscsi_hdr, 0, ISCSI_HDR_LEN+CRC_LEN);
+
+	hdr		= (struct iscsi_async *)&iscsi_hdr;
+	hdr->opcode	= ISCSI_OP_ASYNC_EVENT;
+	hdr->flags	|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, 0);
+	put_unaligned_le64(0, &hdr->lun[0]);
+	put_unaligned_be64(0xffffffffffffffff, &hdr->rsvd4[0]);
+	hdr->statsn	= cpu_to_be32(conn->stat_sn++);
+	spin_lock(&SESS(conn)->cmdsn_lock);
+	hdr->exp_cmdsn	= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn	= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	spin_unlock(&SESS(conn)->cmdsn_lock);
+	hdr->async_event = async_event;
+	hdr->async_vcode = async_vcode;
+
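+	/*
+	 * Fill in the AsyncEvent specific Parameter1/2/3 fields as defined
+	 * by RFC 3720 for each supported event type.
+	 */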
+	switch (async_event) {
+	case ISCSI_ASYNC_MSG_SCSI_EVENT:
+		printk(KERN_ERR "ISCSI_ASYNC_MSG_SCSI_EVENT: not supported yet.\n");
+		return -1;
+	case ISCSI_ASYNC_MSG_REQUEST_LOGOUT:
+		TRACE(TRACE_STATE, "Moving to"
+				" TARG_CONN_STATE_LOGOUT_REQUESTED.\n");
+		conn->conn_state = TARG_CONN_STATE_LOGOUT_REQUESTED;
+		hdr->param1 = 0;
+		hdr->param2 = 0;
+		hdr->param3 = cpu_to_be16(SECONDS_FOR_ASYNC_LOGOUT);
+		break;
+	case ISCSI_ASYNC_MSG_DROPPING_CONNECTION:
+		hdr->param1 = cpu_to_be16(cid);
+		hdr->param2 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Wait);
+		hdr->param3 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Retain);
+		break;
+	case ISCSI_ASYNC_MSG_DROPPING_ALL_CONNECTIONS:
+		hdr->param1 = 0;
+		hdr->param2 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Wait);
+		hdr->param3 = cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Retain);
+		break;
+	case ISCSI_ASYNC_MSG_PARAM_NEGOTIATION:
+		hdr->param1 = 0;
+		hdr->param2 = 0;
+		hdr->param3 = cpu_to_be16(SECONDS_FOR_ASYNC_TEXT);
+		break;
+	case ISCSI_ASYNC_MSG_VENDOR_SPECIFIC:
+		printk(KERN_ERR "ISCSI_ASYNC_MSG_VENDOR_SPECIFIC not"
+			" supported yet.\n");
+		return -1;
+	default:
+		printk(KERN_ERR "Unknown AsyncEvent 0x%02x, protocol"
+			" error.\n", async_event);
+		return -1;
+	}
+
+	iov.iov_base	= &iscsi_hdr;
+	iov.iov_len	= ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&iscsi_hdr[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+		
+		sg_init_one(&sg, (u8 *)&iscsi_hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov.iov_len += CRC_LEN;
+		tx_send += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest for Async"
+			" Msg PDU 0x%08x\n", *header_digest);
+	}
+
+	TRACE(TRACE_ISCSI, "Built Async Message StatSN: 0x%08x, AsyncEvent:"
+		" 0x%02x, P1: 0x%04x, P2: 0x%04x, P3: 0x%04x\n",
+		ntohl(hdr->statsn), hdr->async_event, ntohs(hdr->param1),
+		ntohs(hdr->param2), ntohs(hdr->param3));
+
+	tx_sent = tx_data(conn, &iov, 1, tx_send);
+	if (tx_sent != tx_send) {
+		printk(KERN_ERR "tx_data returned %d expecting %d\n",
+				tx_sent, tx_send);
+		return -1;
+	}
+
+	if (async_event == ISCSI_ASYNC_MSG_REQUEST_LOGOUT) {
+		init_timer(&async_msg_timer);
+		SETUP_TIMER(async_msg_timer, SECONDS_FOR_ASYNC_LOGOUT,
+				&SESS(conn)->async_msg_sem,
+				iscsi_async_msg_timer_function);
+		add_timer(&async_msg_timer);
+		down(&SESS(conn)->async_msg_sem);
+		del_timer_sync(&async_msg_timer);
+
+		if (conn->conn_state == TARG_CONN_STATE_LOGOUT_REQUESTED) {
+			printk(KERN_ERR "Asynchronous message timer expired"
+				" without receiving a logout request,  dropping"
+				" iSCSI session.\n");
+			iscsi_send_async_msg(conn, 0,
+				ISCSI_ASYNC_MSG_DROPPING_ALL_CONNECTIONS, 0);
+			iscsi_free_session(SESS(conn));
+		}
+	}
+	return 0;
+}
+
+/*	iscsi_build_conn_drop_async_message():
+ *
+ *	Called with sess->conn_lock held.
+ */
+/* #warning iscsi_build_conn_drop_async_message() only sends out on connections
+	with active network interface */
+static void iscsi_build_conn_drop_async_message(struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+	struct iscsi_conn *conn_p;
+	bool found = false;
+
+	/*
+	 * Only send an Asynchronous Message on connections whose network
+	 * interface is still functional.
+	 */
+	list_for_each_entry(conn_p, &SESS(conn)->sess_conn_list, conn_list) {
+		if ((conn_p->conn_state == TARG_CONN_STATE_LOGGED_IN) &&
+		    (iscsi_check_for_active_network_device(conn_p))) {
+			iscsi_inc_conn_usage_count(conn_p);
+			found = true;
+			break;
+		}
+	}
+
+	if (!found)
+		return;
+
+	cmd = iscsi_allocate_cmd(conn_p);
+	if (!(cmd)) {
+		iscsi_dec_conn_usage_count(conn_p);
+		return;
+	}
+
+	cmd->logout_cid = conn->cid;
+	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
+	cmd->i_state = ISTATE_SEND_ASYNCMSG;
+
+	iscsi_attach_cmd_to_queue(conn_p, cmd);
+	iscsi_add_cmd_to_response_queue(cmd, conn_p, cmd->i_state);
+
+	iscsi_dec_conn_usage_count(conn_p);
+}
+
+/*	iscsi_send_conn_drop_async_message():
+ *
+ *
+ */
+static int iscsi_send_conn_drop_async_message(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_async *hdr;
+	struct scatterlist sg;
+
+	cmd->tx_size = ISCSI_HDR_LEN;
+	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
+
+	hdr			= (struct iscsi_async *) cmd->pdu;
+	hdr->opcode		= ISCSI_OP_ASYNC_EVENT;
+	hdr->flags		= ISCSI_FLAG_CMD_FINAL;
+	cmd->init_task_tag	= 0xFFFFFFFF;
+	cmd->targ_xfer_tag	= 0xFFFFFFFF;
+	put_unaligned_be64(0xffffffffffffffff, &hdr->rsvd4[0]);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+	hdr->exp_cmdsn 		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	hdr->async_event 	= ISCSI_ASYNC_MSG_DROPPING_CONNECTION;
+	hdr->param1		= cpu_to_be16(cmd->logout_cid);
+	hdr->param2		= cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Wait);
+	hdr->param3		= cpu_to_be16(SESS_OPS_C(conn)->DefaultTime2Retain);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+		
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		cmd->tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest to"
+			" Async Message 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= cmd->tx_size;
+	cmd->iov_misc_count		= 1;
+
+	TRACE(TRACE_ERL2, "Sending Connection Dropped Async Message StatSN:"
+		" 0x%08x, for CID: %hu on CID: %hu\n", cmd->stat_sn,
+			cmd->logout_cid, conn->cid);
+	return 0;
+}
+
+int lio_queue_data_in(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	cmd->i_state = ISTATE_SEND_DATAIN;
+	iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+	return 0;
+}
+
+/*	iscsi_send_data_in():
+ *
+ *
+ */
+static inline int iscsi_send_data_in(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	struct se_unmap_sg *unmap_sg,
+	int *eodr)
+{
+	int iov_ret = 0, set_statsn = 0;
+	u8 *pad_bytes;
+	u32 iov_count = 0, tx_size = 0;
+	u64 lun;	
+	struct iscsi_datain datain;
+	struct iscsi_datain_req *dr;
+	struct se_map_sg map_sg;
+	struct iscsi_data_rsp *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	memset(&datain, 0, sizeof(struct iscsi_datain));
+	dr = iscsi_get_datain_values(cmd, &datain);
+	if (!(dr)) {
+		printk(KERN_ERR "iscsi_get_datain_values failed for ITT: 0x%08x\n",
+				cmd->init_task_tag);
+		return -1;
+	}
+
+	/*
+	 * Be paranoid and double check the logic for now.
+	 */
+	if ((datain.offset + datain.length) > cmd->data_length) {
+		printk(KERN_ERR "Command ITT: 0x%08x, datain.offset: %u and"
+			" datain.length: %u exceeds cmd->data_length: %u\n",
+			cmd->init_task_tag, datain.offset, datain.length,
+				cmd->data_length);
+		return -1;
+	}
+
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->tx_data_octets += datain.length;
+	if (SESS_NODE_ACL(SESS(conn))) {
+		spin_lock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+		SESS_NODE_ACL(SESS(conn))->read_bytes += datain.length;
+		spin_unlock(&SESS_NODE_ACL(SESS(conn))->stats_lock);
+	}
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+	/*
+	 * Special case for successful execution w/ both DATAIN
+	 * and Sense Data.
+	 */
+	if ((datain.flags & ISCSI_FLAG_DATA_STATUS) &&
+	    (SE_CMD(cmd)->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE))
+		datain.flags &= ~ISCSI_FLAG_DATA_STATUS;
+	else {
+		if ((dr->dr_complete == DATAIN_COMPLETE_NORMAL) ||
+		    (dr->dr_complete == DATAIN_COMPLETE_CONNECTION_RECOVERY)) {
+			iscsi_increment_maxcmdsn(cmd, SESS(conn));
+			cmd->stat_sn = conn->stat_sn++;
+			set_statsn = 1;
+		} else if (dr->dr_complete ==
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY)
+			set_statsn = 1;
+	}
+
+	hdr	= (struct iscsi_data_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode 		= ISCSI_OP_SCSI_DATA_IN;
+	hdr->flags		= datain.flags;
+	if (hdr->flags & ISCSI_FLAG_DATA_STATUS) {
+		if (SE_CMD(cmd)->se_cmd_flags & SCF_OVERFLOW_BIT) {
+			hdr->flags |= ISCSI_FLAG_DATA_OVERFLOW;
+			hdr->residual_count = cpu_to_be32(cmd->residual_count);
+		} else if (SE_CMD(cmd)->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+			hdr->flags |= ISCSI_FLAG_DATA_UNDERFLOW;
+			hdr->residual_count = cpu_to_be32(cmd->residual_count);
+		}
+	}
+	hton24(hdr->dlength, datain.length);
+	lun			= (hdr->flags & ISCSI_FLAG_DATA_ACK) ?
+				   iscsi_pack_lun(SE_CMD(cmd)->orig_fe_lun) :
+				   0xFFFFFFFFFFFFFFFFULL;
+	put_unaligned_le64(lun, &hdr->lun[0]);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= (hdr->flags & ISCSI_FLAG_DATA_ACK) ?
+				   cpu_to_be32(cmd->targ_xfer_tag) :
+				   0xFFFFFFFF;
+	hdr->statsn		= (set_statsn) ? cpu_to_be32(cmd->stat_sn) :
+						0xFFFFFFFF;
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	hdr->datasn		= cpu_to_be32(datain.data_sn);
+	hdr->offset		= cpu_to_be32(datain.offset);
+
+	iov = &cmd->iov_data[0];
+	iov[iov_count].iov_base	= cmd->pdu;
+	iov[iov_count++].iov_len	= ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+
+		TRACE(TRACE_DIGEST, "Attaching CRC32 HeaderDigest"
+			" for DataIN PDU 0x%08x\n", *header_digest);
+	}
+
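+	/*
+	 * Map the DataIN payload from the command's scatterlist into iovecs
+	 * starting at index 1; index 0 already holds the PDU header.
+	 */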
+	memset((void *)&map_sg, 0, sizeof(struct se_map_sg));
+	map_sg.fabric_cmd = (void *)cmd;
+	map_sg.se_cmd = SE_CMD(cmd);
+	map_sg.sg_kmap_active = 1;
+	map_sg.iov = &cmd->iov_data[1];
+	map_sg.data_length = datain.length;
+	map_sg.data_offset = datain.offset;
+
+	iov_ret = iscsi_set_iovec_ptrs(&map_sg, unmap_sg);
+	if (iov_ret < 0)
+		return -1;
+
+	iov_count += iov_ret;
+	tx_size += datain.length;
+
+	unmap_sg->padding = ((-datain.length) & 3);
+	if (unmap_sg->padding != 0) {
+		pad_bytes = kzalloc(unmap_sg->padding * sizeof(__u8),
+					GFP_KERNEL);
+		if (!(pad_bytes)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+					" pad_bytes.\n");
+			return -1;
+		}
+		cmd->buf_ptr = pad_bytes;
+		iov[iov_count].iov_base 	= pad_bytes;
+		iov[iov_count++].iov_len 	= unmap_sg->padding;
+		tx_size += unmap_sg->padding;
+
+		TRACE(TRACE_ISCSI, "Attaching %u padding bytes\n",
+				unmap_sg->padding);
+	}
+	if (CONN_OPS(conn)->DataDigest) {
+		__u32 counter = (datain.length + unmap_sg->padding);
+		struct iovec *iov_ptr = &cmd->iov_data[1];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		while (counter > 0) {
+			sg_init_one(&sg, iov_ptr->iov_base,
+					iov_ptr->iov_len);
+			crypto_hash_update(&conn->conn_tx_hash, &sg,
+					iov_ptr->iov_len);
+
+			TRACE(TRACE_DIGEST, "Updated CRC32C DataDigest with"
+				" %zu bytes\n", iov_ptr->iov_len);
+			counter -= iov_ptr->iov_len;
+			iov_ptr++;
+		}
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)&cmd->data_crc);
+
+		iov[iov_count].iov_base	= &cmd->data_crc;
+		iov[iov_count++].iov_len = CRC_LEN;
+		tx_size += CRC_LEN;
+
+		TRACE(TRACE_DIGEST, "Attached CRC32C DataDigest %d bytes, crc"
+			" 0x%08x\n", datain.length+unmap_sg->padding,
+			cmd->data_crc);
+	}
+
+	cmd->iov_data_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Built DataIN ITT: 0x%08x, StatSN: 0x%08x,"
+		" DataSN: 0x%08x, Offset: %u, Length: %u, CID: %hu\n",
+		cmd->init_task_tag, ntohl(hdr->statsn), ntohl(hdr->datasn),
+		ntohl(hdr->offset), datain.length, conn->cid);
+
+	if (dr->dr_complete) {
+		*eodr = (SE_CMD(cmd)->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ?
+				2 : 1;
+		iscsi_free_datain_req(cmd, dr);
+	}
+
+	return 0;
+}
+
+/*	iscsi_send_logout_response():
+ *
+ *
+ */
+static inline int iscsi_send_logout_response(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int niov = 0, tx_size;
+	struct iscsi_conn *logout_conn = NULL;
+	struct iscsi_conn_recovery *cr = NULL;
+	struct iscsi_session *sess = SESS(conn);
+	struct iovec *iov;
+	struct iscsi_logout_rsp *hdr;
+	struct scatterlist sg;
+	/*
+	 * The actual shutting down of Sessions and/or Connections
+	 * for CLOSESESSION and CLOSECONNECTION Logout Requests
+	 * is done in iscsi_logout_post_handler().
+	 */
+	switch (cmd->logout_reason) {
+	case ISCSI_LOGOUT_REASON_CLOSE_SESSION:
+		TRACE(TRACE_ISCSI, "iSCSI session logout successful, setting"
+			" logout response to ISCSI_LOGOUT_SUCCESS.\n");
+		cmd->logout_response = ISCSI_LOGOUT_SUCCESS;
+		break;
+	case ISCSI_LOGOUT_REASON_CLOSE_CONNECTION:
+		if (cmd->logout_response == ISCSI_LOGOUT_CID_NOT_FOUND)
+			break;
+		/*
+		 * For CLOSECONNECTION logout requests carrying
+		 * a matching logout CID -> local CID, the reference
+		 * for the local CID will have been incremented in
+		 * iscsi_logout_closeconnection().
+		 *
+		 * For CLOSECONNECTION logout requests carrying
+		 * a different CID than the connection it arrived
+		 * on, the connection responding to cmd->logout_cid
+		 * is stopped in iscsi_logout_post_handler_diffcid().
+		 */
+
+		TRACE(TRACE_ISCSI, "iSCSI CID: %hu logout on CID: %hu"
+			" successful.\n", cmd->logout_cid, conn->cid);
+		cmd->logout_response = ISCSI_LOGOUT_SUCCESS;
+		break;
+	case ISCSI_LOGOUT_REASON_RECOVERY:
+		if ((cmd->logout_response == ISCSI_LOGOUT_RECOVERY_UNSUPPORTED) ||
+		    (cmd->logout_response == ISCSI_LOGOUT_CLEANUP_FAILED))
+			break;
+		/*
+		 * If the connection is still active from our point of view
+		 * force connection recovery to occur.
+		 */
+		logout_conn = iscsi_get_conn_from_cid_rcfr(sess,
+				cmd->logout_cid);
+		if ((logout_conn)) {
+			iscsi_connection_reinstatement_rcfr(logout_conn);
+			iscsi_dec_conn_usage_count(logout_conn);
+		}
+
+		cr = iscsi_get_inactive_connection_recovery_entry(
+				SESS(conn), cmd->logout_cid);
+		if (!(cr)) {
+			printk(KERN_ERR "Unable to locate CID: %hu for"
+			" REMOVECONNFORRECOVERY Logout Request.\n",
+				cmd->logout_cid);
+			cmd->logout_response = ISCSI_LOGOUT_CID_NOT_FOUND;
+			break;
+		}
+
+		iscsi_discard_cr_cmds_by_expstatsn(cr, cmd->exp_stat_sn);
+
+		TRACE(TRACE_ERL2, "iSCSI REMOVECONNFORRECOVERY logout"
+			" for recovery for CID: %hu on CID: %hu successful.\n",
+				cmd->logout_cid, conn->cid);
+		cmd->logout_response = ISCSI_LOGOUT_SUCCESS;
+		break;
+	default:
+		printk(KERN_ERR "Unknown cmd->logout_reason: 0x%02x\n",
+				cmd->logout_reason);
+		return -1;
+	}
+
+	tx_size = ISCSI_HDR_LEN;
+	hdr			= (struct iscsi_logout_rsp *)cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_LOGOUT_RSP;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hdr->response		= cmd->logout_response;
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+	iov[niov].iov_base	= cmd->pdu;
+	iov[niov++].iov_len	= ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest to"
+			" Logout Response 0x%08x\n", *header_digest);
+	}
+	cmd->iov_misc_count = niov;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Sending Logout Response ITT: 0x%08x StatSN:"
+		" 0x%08x Response: 0x%02x CID: %hu on CID: %hu\n",
+		cmd->init_task_tag, cmd->stat_sn, hdr->response,
+		cmd->logout_cid, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_send_unsolicited_nopin():
+ *
+ *	Unsolicited NOPIN, either requesting a response or not.
+ */
+static inline int iscsi_send_unsolicited_nopin(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	int want_response)
+{
+	int tx_size = ISCSI_HDR_LEN;
+	struct iscsi_nopin *hdr;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_nopin *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_NOOP_IN;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= cpu_to_be32(cmd->targ_xfer_tag);
+	cmd->stat_sn		= conn->stat_sn;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest to"
+			" NopIN 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= tx_size;
+	cmd->iov_misc_count 	= 1;
+	cmd->tx_size		= tx_size;
+
+	TRACE(TRACE_ISCSI, "Sending Unsolicited NOPIN TTT: 0x%08x StatSN:"
+		" 0x%08x CID: %hu\n", hdr->ttt, cmd->stat_sn, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_send_nopin_response():
+ *
+ *
+ */
+static inline int iscsi_send_nopin_response(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int niov = 0, tx_size;
+	__u32 padding = 0;
+	struct iovec *iov;
+	struct iscsi_nopin *hdr;
+	struct scatterlist sg;
+
+	tx_size = ISCSI_HDR_LEN;
+	hdr			= (struct iscsi_nopin *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_NOOP_IN;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, cmd->buf_ptr_size);
+	put_unaligned_le64(0xFFFFFFFFFFFFFFFFULL, &hdr->lun[0]);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= cpu_to_be32(cmd->targ_xfer_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+	iov[niov].iov_base	= cmd->pdu;
+	iov[niov++].iov_len	= ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest"
+			" to NopIn 0x%08x\n", *header_digest);
+	}
+
+	/*
+	 * NOPOUT Ping Data is attached to struct iscsi_cmd->buf_ptr.
+	 * NOPOUT DataSegmentLength is at struct iscsi_cmd->buf_ptr_size.
+	 */
+	if (cmd->buf_ptr_size) {
+		iov[niov].iov_base	= cmd->buf_ptr;
+		iov[niov++].iov_len	= cmd->buf_ptr_size;
+		tx_size += cmd->buf_ptr_size;
+
+		TRACE(TRACE_ISCSI, "Echoing back %u bytes of ping"
+			" data.\n", cmd->buf_ptr_size);
+
+		padding = ((-cmd->buf_ptr_size) & 3);
+		if (padding != 0) {
+			iov[niov].iov_base = &cmd->pad_bytes;
+			iov[niov++].iov_len = padding;
+			tx_size += padding;
+			TRACE(TRACE_ISCSI, "Attaching %u additional"
+				" padding bytes.\n", padding);
+		}
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_tx_hash);
+
+			sg_init_one(&sg, (u8 *)cmd->buf_ptr,
+					cmd->buf_ptr_size);
+			crypto_hash_update(&conn->conn_tx_hash, &sg,
+					cmd->buf_ptr_size);
+
+			if (padding) {
+				sg_init_one(&sg, (u8 *)&cmd->pad_bytes, padding);
+				crypto_hash_update(&conn->conn_tx_hash, &sg,
+						padding);	
+			}
+
+			crypto_hash_final(&conn->conn_tx_hash,
+					(u8 *)&cmd->data_crc);
+
+			iov[niov].iov_base = &cmd->data_crc;
+			iov[niov++].iov_len = CRC_LEN;
+			tx_size += CRC_LEN;
+			TRACE(TRACE_DIGEST, "Attached DataDigest for %u"
+				" bytes of ping data, CRC 0x%08x\n",
+				cmd->buf_ptr_size, cmd->data_crc);
+		}
+	}
+
+	cmd->iov_misc_count = niov;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Sending NOPIN Response ITT: 0x%08x, TTT:"
+		" 0x%08x, StatSN: 0x%08x, Length %u\n", cmd->init_task_tag,
+		cmd->targ_xfer_tag, cmd->stat_sn, cmd->buf_ptr_size);
+
+	return 0;
+}
+
+/*	iscsi_send_r2t():
+ *
+ *
+ */
+int iscsi_send_r2t(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int tx_size = 0;
+	u32 trace_type;
+	u64 lun;
+	struct iscsi_r2t *r2t;
+	struct iscsi_r2t_rsp *hdr;
+	struct scatterlist sg;
+
+	r2t = iscsi_get_r2t_from_list(cmd);
+	if (!(r2t))
+		return -1;
+
+	hdr			= (struct iscsi_r2t_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_R2T;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	lun			= iscsi_pack_lun(SE_CMD(cmd)->orig_fe_lun);
+	put_unaligned_le64(lun, &hdr->lun[0]);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
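+	/*
+	 * Allocate the next Target Transfer Tag under ttt_lock, skipping
+	 * the reserved value 0xFFFFFFFF.
+	 */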
+	spin_lock_bh(&SESS(conn)->ttt_lock);
+	r2t->targ_xfer_tag	= SESS(conn)->targ_xfer_tag++;
+	if (r2t->targ_xfer_tag == 0xFFFFFFFF)
+		r2t->targ_xfer_tag = SESS(conn)->targ_xfer_tag++;
+	spin_unlock_bh(&SESS(conn)->ttt_lock);
+	hdr->ttt		= cpu_to_be32(r2t->targ_xfer_tag);
+	hdr->statsn		= cpu_to_be32(conn->stat_sn);
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+	hdr->r2tsn		= cpu_to_be32(r2t->r2t_sn);
+	hdr->data_offset	= cpu_to_be32(r2t->offset);
+	hdr->data_length	= cpu_to_be32(r2t->xfer_len);
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		cmd->iov_misc[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest for R2T"
+			" PDU 0x%08x\n", *header_digest);
+	}
+
+	trace_type = (!r2t->recovery_r2t) ? TRACE_ISCSI : TRACE_ERL1;
+	TRACE(trace_type, "Built %sR2T, ITT: 0x%08x, TTT: 0x%08x, StatSN:"
+		" 0x%08x, R2TSN: 0x%08x, Offset: %u, DDTL: %u, CID: %hu\n",
+		(!r2t->recovery_r2t) ? "" : "Recovery ", cmd->init_task_tag,
+		r2t->targ_xfer_tag, ntohl(hdr->statsn), r2t->r2t_sn,
+			r2t->offset, r2t->xfer_len, conn->cid);
+
+	cmd->iov_misc_count = 1;
+	cmd->tx_size = tx_size;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	r2t->sent_r2t = 1;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+/*	iscsi_build_r2ts_for_cmd():
+ *
+ *	type 0: Normal Operation.
+ *	type 1: Called from Storage Transport.
+ *	type 2: Called from iscsi_task_reassign_complete_write() for
+ *	        connection recovery.
+ */
+int iscsi_build_r2ts_for_cmd(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	int type)
+{
+	int first_r2t = 1;
+	__u32 offset = 0, xfer_len = 0;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	if (cmd->cmd_flags & ICF_SENT_LAST_R2T) {
+		spin_unlock_bh(&cmd->r2t_lock);
+		return 0;
+	}
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder && (type != 2))
+		if (cmd->r2t_offset < cmd->write_data_done)
+			cmd->r2t_offset = cmd->write_data_done;
+
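+	/*
+	 * Build R2Ts until MaxOutstandingR2T is reached or the final R2T
+	 * for this command has been queued.
+	 */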
+	while (cmd->outstanding_r2ts < SESS_OPS_C(conn)->MaxOutstandingR2T) {
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			offset = cmd->r2t_offset;
+
+			if (first_r2t && (type == 2)) {
+				xfer_len = ((offset +
+					     (SESS_OPS_C(conn)->MaxBurstLength -
+					     cmd->next_burst_len) >
+					     cmd->data_length) ?
+					    (cmd->data_length - offset) :
+					    (SESS_OPS_C(conn)->MaxBurstLength -
+					     cmd->next_burst_len));
+			} else {
+				xfer_len = ((offset +
+					     SESS_OPS_C(conn)->MaxBurstLength) >
+					     cmd->data_length) ?
+					     (cmd->data_length - offset) :
+					     SESS_OPS_C(conn)->MaxBurstLength;
+			}
+			cmd->r2t_offset += xfer_len;
+
+			if (cmd->r2t_offset == cmd->data_length)
+				cmd->cmd_flags |= ICF_SENT_LAST_R2T;
+		} else {
+			struct iscsi_seq *seq;
+
+			seq = iscsi_get_seq_holder_for_r2t(cmd);
+			if (!(seq)) {
+				spin_unlock_bh(&cmd->r2t_lock);
+				return -1;
+			}
+
+			offset = seq->offset;
+			xfer_len = seq->xfer_len;
+
+			if (cmd->seq_send_order == cmd->seq_count)
+				cmd->cmd_flags |= ICF_SENT_LAST_R2T;
+		}
+		cmd->outstanding_r2ts++;
+		first_r2t = 0;
+
+		if (iscsi_add_r2t_to_list(cmd, offset, xfer_len, 0, 0) < 0) {
+			spin_unlock_bh(&cmd->r2t_lock);
+			return -1;
+		}
+
+		if (cmd->cmd_flags & ICF_SENT_LAST_R2T)
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+int lio_write_pending(
+	struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
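+	/*
+	 * For Immediate or Unsolicited Data-Out, wake the process sleeping
+	 * on the unsolicited data semaphore; otherwise build R2Ts to request
+	 * the remaining WRITE payload from the initiator.
+	 */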
+	if (cmd->immediate_data || cmd->unsolicited_data)
+		up(&cmd->unsolicited_data_sem);
+	else {
+		if (iscsi_build_r2ts_for_cmd(cmd, CONN(cmd), 1) < 0)
+			return PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES;
+	}
+
+	return 0;
+}
+
+int lio_write_pending_status(
+	struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+	int ret;
+
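+	/*
+	 * Report non-zero while the final Data-Out PDU for this WRITE has
+	 * not yet been received.
+	 */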
+	spin_lock_bh(&cmd->istate_lock);
+	ret = !(cmd->cmd_flags & ICF_GOT_LAST_DATAOUT);
+	spin_unlock_bh(&cmd->istate_lock);
+
+	return ret;
+}
+
+int lio_queue_status(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	cmd->i_state = ISTATE_SEND_STATUS;
+	iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+
+	return 0;
+}
+
+u16 lio_set_fabric_sense_len(struct se_cmd *se_cmd, u32 sense_length)
+{
+	unsigned char *buffer = se_cmd->sense_buffer;
+	/*
+	 * From RFC-3720 10.4.7.  Data Segment - Sense and Response Data Segment
+	 * 16-bit SenseLength.
+	 */
+	buffer[0] = ((sense_length >> 8) & 0xff);
+	buffer[1] = (sense_length & 0xff);
+	/*
+	 * Return two byte offset into allocated sense_buffer.
+	 */
+	return 2;
+}
+
+u16 lio_get_fabric_sense_len(void)
+{
+	/*
+	 * Return two byte offset into allocated sense_buffer.
+	 */
+	return 2;
+}
+
+/*	iscsi_send_status():
+ *
+ *
+ */
+static inline int iscsi_send_status(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	__u8 iov_count = 0, recovery;
+	__u32 padding = 0, trace_type, tx_size = 0;
+	struct iscsi_scsi_rsp *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	recovery = (cmd->i_state != ISTATE_SEND_STATUS);
+	if (!recovery)
+		cmd->stat_sn = conn->stat_sn++;
+
+	spin_lock_bh(&SESS(conn)->session_stats_lock);
+	SESS(conn)->rsp_pdus++;
+	spin_unlock_bh(&SESS(conn)->session_stats_lock);
+
+	hdr			= (struct iscsi_scsi_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_SCSI_CMD_RSP;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	if (SE_CMD(cmd)->se_cmd_flags & SCF_OVERFLOW_BIT) {
+		hdr->flags |= ISCSI_FLAG_CMD_OVERFLOW;
+		hdr->residual_count = cpu_to_be32(cmd->residual_count);
+	} else if (SE_CMD(cmd)->se_cmd_flags & SCF_UNDERFLOW_BIT) {
+		hdr->flags |= ISCSI_FLAG_CMD_UNDERFLOW;
+		hdr->residual_count = cpu_to_be32(cmd->residual_count);
+	}
+	hdr->response		= cmd->iscsi_response;
+	hdr->cmd_status		= SE_CMD(cmd)->scsi_status;
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+	iov[iov_count].iov_base	= cmd->pdu;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	/*
+	 * Attach SENSE DATA payload to iSCSI Response PDU
+	 */
+	if (SE_CMD(cmd)->sense_buffer &&
+	   ((SE_CMD(cmd)->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ||
+	    (SE_CMD(cmd)->se_cmd_flags & SCF_EMULATED_TASK_SENSE))) {
+		padding		= -(SE_CMD(cmd)->scsi_sense_length) & 3;
+		hton24(hdr->dlength, SE_CMD(cmd)->scsi_sense_length);
+		iov[iov_count].iov_base	= SE_CMD(cmd)->sense_buffer;
+		iov[iov_count++].iov_len =
+				(SE_CMD(cmd)->scsi_sense_length + padding);
+		tx_size += SE_CMD(cmd)->scsi_sense_length;
+
+		if (padding) {
+			memset(SE_CMD(cmd)->sense_buffer +
+				SE_CMD(cmd)->scsi_sense_length, 0, padding);
+			tx_size += padding;
+			TRACE(TRACE_ISCSI, "Adding %u bytes of padding to"
+				" SENSE.\n", padding);
+		}
+
+		if (CONN_OPS(conn)->DataDigest) {
+			crypto_hash_init(&conn->conn_tx_hash);
+
+			sg_init_one(&sg, (u8 *)SE_CMD(cmd)->sense_buffer,
+				(SE_CMD(cmd)->scsi_sense_length + padding));
+			crypto_hash_update(&conn->conn_tx_hash, &sg,
+				(SE_CMD(cmd)->scsi_sense_length + padding));
+
+			crypto_hash_final(&conn->conn_tx_hash,
+					(u8 *)&cmd->data_crc);
+
+			iov[iov_count].iov_base    = &cmd->data_crc;
+			iov[iov_count++].iov_len     = CRC_LEN;
+			tx_size += CRC_LEN;
+
+			TRACE(TRACE_DIGEST, "Attaching CRC32C DataDigest for"
+				" SENSE, %u bytes CRC 0x%08x\n",
+				(SE_CMD(cmd)->scsi_sense_length + padding),
+				cmd->data_crc);
+		}
+
+		TRACE(TRACE_ISCSI, "Attaching SENSE DATA: %u bytes to iSCSI"
+				" Response PDU\n",
+				SE_CMD(cmd)->scsi_sense_length);
+	}
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+	
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest for Response"
+				" PDU 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	trace_type = (!recovery) ? TRACE_ISCSI : TRACE_ERL1;
+	TRACE(trace_type, "Built %sSCSI Response, ITT: 0x%08x, StatSN: 0x%08x,"
+		" Response: 0x%02x, SAM Status: 0x%02x, CID: %hu\n",
+		(!recovery) ? "" : "Recovery ", cmd->init_task_tag,
+		cmd->stat_sn, hdr->response, cmd->se_cmd.scsi_status, conn->cid);
+
+	return 0;
+}
+
+int lio_queue_tm_rsp(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = iscsi_get_cmd(se_cmd);
+
+	cmd->i_state = ISTATE_SEND_TASKMGTRSP;
+	iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+
+	return 0;
+}
+
+static inline u8 iscsi_convert_tcm_tmr_rsp(struct se_tmr_req *se_tmr)
+{
+	switch (se_tmr->response) {
+	case TMR_FUNCTION_COMPLETE:
+		return ISCSI_TMF_RSP_COMPLETE;
+	case TMR_TASK_DOES_NOT_EXIST:
+		return ISCSI_TMF_RSP_NO_TASK;
+	case TMR_LUN_DOES_NOT_EXIST:
+		return ISCSI_TMF_RSP_NO_LUN;
+	case TMR_TASK_MGMT_FUNCTION_NOT_SUPPORTED:
+		return ISCSI_TMF_RSP_NOT_SUPPORTED;
+	case TMR_FUNCTION_AUTHORIZATION_FAILED:
+		return ISCSI_TMF_RSP_AUTH_FAILED;
+	case TMR_FUNCTION_REJECTED:
+	default:
+		return ISCSI_TMF_RSP_REJECTED;
+	}
+}
+
+/*	iscsi_send_task_mgt_rsp():
+ *
+ *
+ */
+static int iscsi_send_task_mgt_rsp(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = SE_CMD(cmd)->se_tmr_req;
+	struct iscsi_tm_rsp *hdr;
+	struct scatterlist sg;
+	u32 tx_size = 0;
+
+	hdr			= (struct iscsi_tm_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_SCSI_TMFUNC_RSP;
+	hdr->response		= iscsi_convert_tcm_tmr_rsp(se_tmr);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	cmd->iov_misc[0].iov_base	= cmd->pdu;
+	cmd->iov_misc[0].iov_len	= ISCSI_HDR_LEN;
+	tx_size += ISCSI_HDR_LEN;
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];	
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		cmd->iov_misc[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest for Task"
+			" Mgmt Response PDU 0x%08x\n", *header_digest);
+	}
+
+	cmd->iov_misc_count = 1;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ERL2, "Built Task Management Response ITT: 0x%08x,"
+		" StatSN: 0x%08x, Response: 0x%02x, CID: %hu\n",
+		cmd->init_task_tag, cmd->stat_sn, hdr->response, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_send_text_rsp():
+ *
+ *
+ *	FIXME: Add support for F_BIT and C_BIT when the length is longer than
+ *	MaxRecvDataSegmentLength.
+ */
+static int iscsi_send_text_rsp(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	u8 iov_count = 0;
+	u32 padding = 0, text_length = 0, tx_size = 0;
+	struct iscsi_text_rsp *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	text_length = iscsi_build_sendtargets_response(cmd);
+
+	padding = ((-text_length) & 3);
+	if (padding != 0) {
+		memset((void *) (cmd->buf_ptr + text_length), 0, padding);
+		TRACE(TRACE_ISCSI, "Attaching %u additional bytes for"
+			" padding.\n", padding);
+	}
+
+	hdr			= (struct iscsi_text_rsp *) cmd->pdu;
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	hdr->opcode		= ISCSI_OP_TEXT_RSP;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, text_length);
+	hdr->itt		= cpu_to_be32(cmd->init_task_tag);
+	hdr->ttt		= cpu_to_be32(cmd->targ_xfer_tag);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+
+	iscsi_increment_maxcmdsn(cmd, SESS(conn));
+	hdr->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	iov = &cmd->iov_misc[0];
+
+	iov[iov_count].iov_base = cmd->pdu;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+	iov[iov_count].iov_base	= cmd->buf_ptr;
+	iov[iov_count++].iov_len = text_length + padding;
+
+	tx_size += (ISCSI_HDR_LEN + text_length + padding);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest for"
+			" Text Response PDU 0x%08x\n", *header_digest);
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)cmd->buf_ptr, (text_length + padding));
+		crypto_hash_update(&conn->conn_tx_hash, &sg,
+				(text_length + padding));
+
+		crypto_hash_final(&conn->conn_tx_hash,
+				(u8 *)&cmd->data_crc);
+
+		iov[iov_count].iov_base	= &cmd->data_crc;
+		iov[iov_count++].iov_len = CRC_LEN;
+		tx_size	+= CRC_LEN;
+
+		TRACE(TRACE_DIGEST, "Attaching DataDigest for %u bytes of text"
+			" data, CRC 0x%08x\n", (text_length + padding),
+			cmd->data_crc);
+	}
+
+	cmd->iov_misc_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Built Text Response: ITT: 0x%08x, StatSN: 0x%08x,"
+		" Length: %u, CID: %hu\n", cmd->init_task_tag, cmd->stat_sn,
+			text_length, conn->cid);
+	return 0;
+}
+
+/*	iscsi_send_reject():
+ *
+ *
+ */
+static int iscsi_send_reject(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	__u32 iov_count = 0, tx_size = 0;
+	struct iscsi_reject *hdr;
+	struct iovec *iov;
+	struct scatterlist sg;
+
+	hdr			= (struct iscsi_reject *) cmd->pdu;
+	hdr->opcode		= ISCSI_OP_REJECT;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	hton24(hdr->dlength, ISCSI_HDR_LEN);
+	cmd->stat_sn		= conn->stat_sn++;
+	hdr->statsn		= cpu_to_be32(cmd->stat_sn);
+	hdr->exp_cmdsn	= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	hdr->max_cmdsn	= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
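+	/*
+	 * The Reject data segment carries a copy of the offending PDU's
+	 * header (ISCSI_HDR_LEN bytes) from cmd->buf_ptr.
+	 */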
+	iov = &cmd->iov_misc[0];
+
+	iov[iov_count].iov_base = cmd->pdu;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+	iov[iov_count].iov_base = cmd->buf_ptr;
+	iov[iov_count++].iov_len = ISCSI_HDR_LEN;
+
+	tx_size = (ISCSI_HDR_LEN + ISCSI_HDR_LEN);
+
+	if (CONN_OPS(conn)->HeaderDigest) {
+		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
+
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)hdr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg, ISCSI_HDR_LEN); 
+
+		crypto_hash_final(&conn->conn_tx_hash, (u8 *)header_digest);
+
+		iov[0].iov_len += CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C HeaderDigest for"
+			" REJECT PDU 0x%08x\n", *header_digest);
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		crypto_hash_init(&conn->conn_tx_hash);
+
+		sg_init_one(&sg, (u8 *)cmd->buf_ptr, ISCSI_HDR_LEN);
+		crypto_hash_update(&conn->conn_tx_hash, &sg,
+				ISCSI_HDR_LEN);
+
+		crypto_hash_final(&conn->conn_tx_hash,
+				(u8 *)&cmd->data_crc);
+
+		iov[iov_count].iov_base = &cmd->data_crc;
+		iov[iov_count++].iov_len  = CRC_LEN;
+		tx_size += CRC_LEN;
+		TRACE(TRACE_DIGEST, "Attaching CRC32C DataDigest for REJECT"
+				" PDU 0x%08x\n", cmd->data_crc);
+	}
+
+	cmd->iov_misc_count = iov_count;
+	cmd->tx_size = tx_size;
+
+	TRACE(TRACE_ISCSI, "Built Reject PDU StatSN: 0x%08x, Reason: 0x%02x,"
+		" CID: %hu\n", ntohl(hdr->statsn), hdr->reason, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_tx_thread_TCP_timeout():
+ *
+ *
+ */
+static void iscsi_tx_thread_TCP_timeout(unsigned long data)
+{
+	up((struct semaphore *)data);
+}
+
+/*	iscsi_tx_thread_wait_for_TCP():
+ *
+ *
+ */
+static void iscsi_tx_thread_wait_for_TCP(struct iscsi_conn *conn)
+{
+	struct timer_list tx_TCP_timer;
+	int ret;
+
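+	/*
+	 * If the socket has been half-closed, sleep for up to
+	 * ISCSI_TX_THREAD_TCP_TIMEOUT before continuing with connection
+	 * exit handling.
+	 */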
+	if ((conn->sock->sk->sk_shutdown & SEND_SHUTDOWN) ||
+	    (conn->sock->sk->sk_shutdown & RCV_SHUTDOWN)) {
+		init_timer(&tx_TCP_timer);
+		SETUP_TIMER(tx_TCP_timer, ISCSI_TX_THREAD_TCP_TIMEOUT,
+			&conn->tx_half_close_sem, iscsi_tx_thread_TCP_timeout);
+		add_timer(&tx_TCP_timer);
+
+		ret = down_interruptible(&conn->tx_half_close_sem);
+
+		del_timer_sync(&tx_TCP_timer);
+	}
+}
+
+#ifdef CONFIG_SMP
+
+void iscsi_thread_get_cpumask(struct iscsi_conn *conn)
+{
+	struct se_thread_set *ts = conn->thread_set;
+	int ord, cpu;
+	/*
+	 * thread_id is assigned from iscsi_global->ts_bitmap from
+	 * within iscsi_thread_set.c:iscsi_allocate_thread_sets()
+	 *
+	 * Here we use thread_id to determine which CPU that this
+	 * iSCSI connection's se_thread_set will be scheduled to
+	 * execute upon.
+	 */
+	ord = ts->thread_id % cpumask_weight(cpu_online_mask);
+#if 0
+	printk(">>>>>>>>>>>>>>>>>>>> Generated ord: %d from thread_id: %d\n",
+			ord, ts->thread_id);
+#endif
+	for_each_online_cpu(cpu) {
+		if (ord-- == 0) {
+			cpumask_set_cpu(cpu, conn->conn_cpumask);
+			return;
+		}
+	}
+	/*
+	 * This should never be reached.
+	 */
+	dump_stack();
+	cpumask_setall(conn->conn_cpumask);
+}
+
+static inline void iscsi_thread_check_cpumask(
+	struct iscsi_conn *conn,
+	struct task_struct *p,
+	int mode)
+{
+	char buf[128];
+	/*
+	 * mode == 1 signals iscsi_target_tx_thread() usage.
+	 * mode == 0 signals iscsi_target_rx_thread() usage.
+	 */
+	if (mode == 1) {
+		if (!(conn->conn_tx_reset_cpumask))
+			return;
+		conn->conn_tx_reset_cpumask = 0;
+	} else {
+		if (!(conn->conn_rx_reset_cpumask))
+			return;
+		conn->conn_rx_reset_cpumask = 0;
+	}
+	/*
+	 * Update the CPU mask for this single kthread so that
+	 * both TX and RX kthreads are scheduled to run on the
+	 * same CPU.
+	 */
+	memset(buf, 0, 128);
+	cpumask_scnprintf(buf, 128, conn->conn_cpumask);
+#if 0
+	printk(">>>>>>>>>>>>>> Calling set_cpus_allowed_ptr(): %s for %s\n",
+			buf, p->comm);
+#endif
+	set_cpus_allowed_ptr(p, conn->conn_cpumask);
+}
+
+#else
+#define iscsi_thread_get_cpumask(X) ({})
+#define iscsi_thread_check_cpumask(X, Y, Z) ({})
+#endif /* CONFIG_SMP */
+
+/*	iscsi_target_tx_thread():
+ *
+ *
+ */
+int iscsi_target_tx_thread(void *arg)
+{
+	u8 state;
+	int eodr = 0, map_sg = 0, ret = 0, sent_status = 0, use_misc = 0;
+	struct iscsi_cmd *cmd = NULL;
+	struct iscsi_conn *conn;
+	struct iscsi_queue_req *qr = NULL;
+	struct se_cmd *se_cmd;
+	struct se_thread_set *ts = (struct se_thread_set *) arg;
+	struct se_unmap_sg unmap_sg;
+
+	{
+	    char name[20];
+
+	    memset(name, 0, 20);
+	    sprintf(name, "%s/%u", ISCSI_TX_THREAD_NAME, ts->thread_id);
+	    iscsi_daemon(ts->tx_thread, name, SHUTDOWN_SIGS);
+	}
+
+restart:
+	conn = iscsi_tx_thread_pre_handler(ts, TARGET);
+	if (!(conn))
+		goto out;
+
+	eodr = map_sg = ret = sent_status = use_misc = 0;
+
+	while (1) {
+		/*
+		 * Ensure that both TX and RX per connection kthreads
+		 * are scheduled to run on the same CPU.
+		 */
+		iscsi_thread_check_cpumask(conn, current, 1);
+
+		ret = down_interruptible(&conn->tx_sem);
+
+		if ((ts->status == ISCSI_THREAD_SET_RESET) ||
+		     (ret != 0) || signal_pending(current))
+			goto transport_err;
+
+get_immediate:
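+		/*
+		 * Drain the immediate queue (R2Ts, NopINs and command
+		 * removal) before servicing the response queue below.
+		 */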
+		qr = iscsi_get_cmd_from_immediate_queue(conn);
+		if ((qr)) {
+			atomic_set(&conn->check_immediate_queue, 0);
+			cmd = qr->cmd;
+			state = qr->state;
+			kmem_cache_free(lio_qr_cache, qr);
+
+			spin_lock_bh(&cmd->istate_lock);
+			switch (state) {
+			case ISTATE_SEND_R2T:
+				spin_unlock_bh(&cmd->istate_lock);
+				ret = iscsi_send_r2t(cmd, conn);
+				break;
+			case ISTATE_REMOVE:
+				spin_unlock_bh(&cmd->istate_lock);
+
+				if (cmd->data_direction == DMA_TO_DEVICE)
+					iscsi_stop_dataout_timer(cmd);
+
+				spin_lock_bh(&conn->cmd_lock);
+				iscsi_remove_cmd_from_conn_list(cmd, conn);
+				spin_unlock_bh(&conn->cmd_lock);
+				/*
+				 * Determine if a struct se_cmd is associated with
+				 * this struct iscsi_cmd.
+				 */
+				if (!(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) &&
+				    !(cmd->tmr_req))
+					iscsi_release_cmd_to_pool(cmd);
+				else
+					transport_generic_free_cmd(SE_CMD(cmd),
+								1, 1, 0);
+				goto get_immediate;
+			case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+				spin_unlock_bh(&cmd->istate_lock);
+				iscsi_mod_nopin_response_timer(conn);
+				ret = iscsi_send_unsolicited_nopin(cmd,
+						conn, 1);
+				break;
+			case ISTATE_SEND_NOPIN_NO_RESPONSE:
+				spin_unlock_bh(&cmd->istate_lock);
+				ret = iscsi_send_unsolicited_nopin(cmd,
+						conn, 0);
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+				" 0x%08x, i_state: %d on CID: %hu\n",
+				cmd->iscsi_opcode, cmd->init_task_tag, state,
+				conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+			if (ret < 0) {
+				conn->tx_immediate_queue = 0;
+				goto transport_err;
+			}
+
+			if (iscsi_send_tx_data(cmd, conn, 1) < 0) {
+				conn->tx_immediate_queue = 0;
+				iscsi_tx_thread_wait_for_TCP(conn);
+				goto transport_err;
+			}
+
+			spin_lock_bh(&cmd->istate_lock);
+			switch (state) {
+			case ISTATE_SEND_R2T:
+				spin_unlock_bh(&cmd->istate_lock);
+				spin_lock_bh(&cmd->dataout_timeout_lock);
+				iscsi_start_dataout_timer(cmd, conn);
+				spin_unlock_bh(&cmd->dataout_timeout_lock);
+				break;
+			case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+				cmd->i_state = ISTATE_SENT_NOPIN_WANT_RESPONSE;
+				spin_unlock_bh(&cmd->istate_lock);
+				break;
+			case ISTATE_SEND_NOPIN_NO_RESPONSE:
+				cmd->i_state = ISTATE_SENT_STATUS;
+				spin_unlock_bh(&cmd->istate_lock);
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+					" 0x%08x, i_state: %d on CID: %hu\n",
+					cmd->iscsi_opcode, cmd->init_task_tag,
+					state, conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+			goto get_immediate;
+		} else
+			conn->tx_immediate_queue = 0;
+
+get_response:
+		qr = iscsi_get_cmd_from_response_queue(conn);
+		if ((qr)) {
+			cmd = qr->cmd;
+			state = qr->state;
+			kmem_cache_free(lio_qr_cache, qr);
+
+			spin_lock_bh(&cmd->istate_lock);
+check_rsp_state:
+			switch (state) {
+			case ISTATE_SEND_DATAIN:
+				spin_unlock_bh(&cmd->istate_lock);
+				memset((void *)&unmap_sg, 0,
+						sizeof(struct se_unmap_sg));
+				unmap_sg.fabric_cmd = (void *)cmd;
+				unmap_sg.se_cmd = SE_CMD(cmd);
+				map_sg = 1;
+				ret = iscsi_send_data_in(cmd, conn,
+						&unmap_sg, &eodr);
+				break;
+			case ISTATE_SEND_STATUS:
+			case ISTATE_SEND_STATUS_RECOVERY:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_status(cmd, conn);
+				break;
+			case ISTATE_SEND_LOGOUTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_logout_response(cmd, conn);
+				break;
+			case ISTATE_SEND_ASYNCMSG:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_conn_drop_async_message(
+						cmd, conn);
+				break;
+			case ISTATE_SEND_NOPIN:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_nopin_response(cmd, conn);
+				break;
+			case ISTATE_SEND_REJECT:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_reject(cmd, conn);
+				break;
+			case ISTATE_SEND_TASKMGTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_task_mgt_rsp(cmd, conn);
+				if (ret != 0)
+					break;
+				ret = iscsi_tmr_post_handler(cmd, conn);
+				if (ret != 0)
+					iscsi_fall_back_to_erl0(SESS(conn));
+				break;
+			case ISTATE_SEND_TEXTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				use_misc = 1;
+				ret = iscsi_send_text_rsp(cmd, conn);
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+					" 0x%08x, i_state: %d on CID: %hu\n",
+					cmd->iscsi_opcode, cmd->init_task_tag,
+					state, conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+			if (ret < 0) {
+				conn->tx_response_queue = 0;
+				goto transport_err;
+			}
+
+			se_cmd = &cmd->se_cmd;
+
+			if (map_sg && !CONN_OPS(conn)->IFMarker &&
+			    T_TASK(se_cmd)->t_tasks_se_num) {
+				iscsi_map_SG_segments(&unmap_sg);
+				if (iscsi_fe_sendpage_sg(&unmap_sg, conn) < 0) {
+					conn->tx_response_queue = 0;
+					iscsi_tx_thread_wait_for_TCP(conn);
+					iscsi_unmap_SG_segments(&unmap_sg);
+					goto transport_err;
+				}
+				iscsi_unmap_SG_segments(&unmap_sg);
+				map_sg = 0;
+			} else {
+				if (map_sg)
+					iscsi_map_SG_segments(&unmap_sg);
+				if (iscsi_send_tx_data(cmd, conn, use_misc) < 0) {
+					conn->tx_response_queue = 0;
+					iscsi_tx_thread_wait_for_TCP(conn);
+					if (map_sg)
+						iscsi_unmap_SG_segments(&unmap_sg);
+					goto transport_err;
+				}
+				if (map_sg) {
+					iscsi_unmap_SG_segments(&unmap_sg);
+					map_sg = 0;
+				}
+			}
+
+			spin_lock_bh(&cmd->istate_lock);
+			switch (state) {
+			case ISTATE_SEND_DATAIN:
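+				/*
+				 * eodr == 1: the final DataIN PDU carried the
+				 * status; eodr == 2: a separate SCSI Response
+				 * PDU with sense data still needs to be sent.
+				 */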
+				if (!eodr)
+					goto check_rsp_state;
+
+				if (eodr == 1) {
+					cmd->i_state = ISTATE_SENT_LAST_DATAIN;
+					sent_status = 1;
+					eodr = use_misc = 0;
+				} else if (eodr == 2) {
+					cmd->i_state = state =
+							ISTATE_SEND_STATUS;
+					sent_status = 0;
+					eodr = use_misc = 0;
+					goto check_rsp_state;
+				}
+				break;
+			case ISTATE_SEND_STATUS:
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			case ISTATE_SEND_ASYNCMSG:
+			case ISTATE_SEND_NOPIN:
+			case ISTATE_SEND_STATUS_RECOVERY:
+			case ISTATE_SEND_TEXTRSP:
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			case ISTATE_SEND_REJECT:
+				use_misc = 0;
+				if (cmd->cmd_flags & ICF_REJECT_FAIL_CONN) {
+					cmd->cmd_flags &= ~ICF_REJECT_FAIL_CONN;
+					spin_unlock_bh(&cmd->istate_lock);
+					up(&cmd->reject_sem);
+					goto transport_err;
+				}
+				up(&cmd->reject_sem);
+				break;
+			case ISTATE_SEND_TASKMGTRSP:
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			case ISTATE_SEND_LOGOUTRSP:
+				spin_unlock_bh(&cmd->istate_lock);
+				if (!(iscsi_logout_post_handler(cmd, conn)))
+					goto restart;
+				spin_lock_bh(&cmd->istate_lock);
+				use_misc = 0;
+				sent_status = 1;
+				break;
+			default:
+				printk(KERN_ERR "Unknown Opcode: 0x%02x ITT:"
+					" 0x%08x, i_state: %d on CID: %hu\n",
+					cmd->iscsi_opcode, cmd->init_task_tag,
+					cmd->i_state, conn->cid);
+				spin_unlock_bh(&cmd->istate_lock);
+				goto transport_err;
+			}
+
+			if (sent_status) {
+				cmd->i_state = ISTATE_SENT_STATUS;
+				sent_status = 0;
+			}
+			spin_unlock_bh(&cmd->istate_lock);
+
+			if (atomic_read(&conn->check_immediate_queue))
+				goto get_immediate;
+
+			goto get_response;
+		} else
+			conn->tx_response_queue = 0;
+	}
+
+transport_err:
+	iscsi_take_action_for_connection_exit(conn);
+	goto restart;
+out:
+	ts->tx_thread = NULL;
+	up(&ts->tx_done_sem);
+	return 0;
+}
+
+static void iscsi_rx_thread_TCP_timeout(unsigned long data)
+{
+	up((struct semaphore *)data);
+}
+
+/*	iscsi_rx_thread_wait_for_TCP():
+ *
+ *
+ */
+static void iscsi_rx_thread_wait_for_TCP(struct iscsi_conn *conn)
+{
+	struct timer_list rx_TCP_timer;
+	int ret;
+
+	if ((conn->sock->sk->sk_shutdown & SEND_SHUTDOWN) ||
+	    (conn->sock->sk->sk_shutdown & RCV_SHUTDOWN)) {
+		init_timer(&rx_TCP_timer);
+		SETUP_TIMER(rx_TCP_timer, ISCSI_RX_THREAD_TCP_TIMEOUT,
+			&conn->rx_half_close_sem, iscsi_rx_thread_TCP_timeout);
+		add_timer(&rx_TCP_timer);
+
+		ret = down_interruptible(&conn->rx_half_close_sem);
+
+		del_timer_sync(&rx_TCP_timer);
+	}
+}
+
+/*	iscsi_target_rx_thread():
+ *
+ *
+ */
+int iscsi_target_rx_thread(void *arg)
+{
+	int ret;
+	__u8 buffer[ISCSI_HDR_LEN], opcode;
+	__u32 checksum = 0, digest = 0;
+	struct iscsi_conn *conn = NULL;
+	struct se_thread_set *ts = (struct se_thread_set *) arg;
+	struct iovec iov;
+	struct scatterlist sg;
+
+	{
+	    char name[20];
+
+	    memset(name, 0, 20);
+	    sprintf(name, "%s/%u", ISCSI_RX_THREAD_NAME, ts->thread_id);
+	    iscsi_daemon(ts->rx_thread, name, SHUTDOWN_SIGS);
+	}
+
+restart:
+	conn = iscsi_rx_thread_pre_handler(ts, TARGET);
+	if (!(conn))
+		goto out;
+
+	while (1) {
+		/*
+		 * Ensure that both TX and RX per connection kthreads
+		 * are scheduled to run on the same CPU.
+		 */
+		iscsi_thread_check_cpumask(conn, current, 0);
+
+		memset((void *)buffer, 0, ISCSI_HDR_LEN);
+		memset((void *)&iov, 0, sizeof(struct iovec));
+
+		iov.iov_base	= buffer;
+		iov.iov_len	= ISCSI_HDR_LEN;
+
+		ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+		if (ret != ISCSI_HDR_LEN) {
+			iscsi_rx_thread_wait_for_TCP(conn);
+			goto transport_err;
+		}
+
+		/*
+		 * Set conn->bad_hdr for use with REJECT PDUs.
+		 */
+		memcpy(&conn->bad_hdr, &buffer, ISCSI_HDR_LEN);
+
+		if (CONN_OPS(conn)->HeaderDigest) {
+			iov.iov_base	= &digest;
+			iov.iov_len	= CRC_LEN;
+
+			ret = rx_data(conn, &iov, 1, CRC_LEN);
+			if (ret != CRC_LEN) {
+				iscsi_rx_thread_wait_for_TCP(conn);
+				goto transport_err;
+			}
+			crypto_hash_init(&conn->conn_rx_hash);
+
+			sg_init_one(&sg, (u8 *)buffer, ISCSI_HDR_LEN);
+			crypto_hash_update(&conn->conn_rx_hash, &sg,
+					ISCSI_HDR_LEN);
+
+			crypto_hash_final(&conn->conn_rx_hash, (u8 *)&checksum);
+
+			if (digest != checksum) {
+				printk(KERN_ERR "HeaderDigest CRC32C failed,"
+					" received 0x%08x, computed 0x%08x\n",
+					digest, checksum);
+				/*
+				 * Set the PDU to 0xff so it will intentionally
+				 * hit default in the switch below.
+				 */
+				memset((void *)buffer, 0xff, ISCSI_HDR_LEN);
+				spin_lock_bh(&SESS(conn)->session_stats_lock);
+				SESS(conn)->conn_digest_errors++;
+				spin_unlock_bh(&SESS(conn)->session_stats_lock);
+			} else {
+				TRACE(TRACE_DIGEST, "Got HeaderDigest CRC32C"
+						" 0x%08x\n", checksum);
+			}
+		}
+
+		if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
+			goto transport_err;
+
+		opcode = buffer[0] & ISCSI_OPCODE_MASK;
+
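+		/*
+		 * Only Text and Logout requests are legal while operating
+		 * as a Discovery session.
+		 */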
+		if (SESS_OPS_C(conn)->SessionType &&
+		   ((!(opcode & ISCSI_OP_TEXT)) ||
+		    (!(opcode & ISCSI_OP_LOGOUT)))) {
+			printk(KERN_ERR "Received illegal iSCSI Opcode: 0x%02x"
+			" while in Discovery Session, rejecting.\n", opcode);
+			iscsi_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
+					buffer, conn);
+			goto transport_err;
+		}
+
+		switch (opcode) {
+		case ISCSI_OP_SCSI_CMD:
+			if (iscsi_handle_scsi_cmd(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_SCSI_DATA_OUT:
+			if (iscsi_handle_data_out(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_NOOP_OUT:
+			if (iscsi_handle_nop_out(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_SCSI_TMFUNC:
+			if (iscsi_handle_task_mgt_cmd(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_TEXT:
+			if (iscsi_handle_text_cmd(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_LOGOUT:
+			ret = iscsi_handle_logout_cmd(conn, buffer);
+			if (ret > 0) {
+				down(&conn->conn_logout_sem);
+				goto transport_err;
+			} else if (ret < 0)
+				goto transport_err;
+			break;
+		case ISCSI_OP_SNACK:
+			if (iscsi_handle_snack(conn, buffer) < 0)
+				goto transport_err;
+			break;
+		default:
+			printk(KERN_ERR "Got unknown iSCSI OpCode: 0x%02x\n",
+					opcode);
+			if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+				printk(KERN_ERR "Cannot recover from unknown"
+				" opcode while ERL=0, closing iSCSI connection"
+				".\n");
+				goto transport_err;
+			}
+			if (!CONN_OPS(conn)->OFMarker) {
+				printk(KERN_ERR "Unable to recover from unknown"
+				" opcode while OFMarker=No, closing iSCSI"
+					" connection.\n");
+				goto transport_err;
+			}
+			if (iscsi_recover_from_unknown_opcode(conn) < 0) {
+				printk(KERN_ERR "Unable to recover from unknown"
+					" opcode, closing iSCSI connection.\n");
+				goto transport_err;
+			}
+			break;
+		}
+	}
+
+transport_err:
+	if (!signal_pending(current))
+		atomic_set(&conn->transport_failed, 1);
+	iscsi_take_action_for_connection_exit(conn);
+	goto restart;
+out:
+	ts->rx_thread = NULL;
+	up(&ts->rx_done_sem);
+	return 0;
+}
+
+/*	iscsi_release_commands_from_conn():
+ *
+ *
+ */
+static void iscsi_release_commands_from_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd = NULL, *cmd_tmp = NULL;
+	struct iscsi_session *sess = SESS(conn);
+	struct se_cmd *se_cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
+		if (!(SE_CMD(cmd)) ||
+		    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD)) {
+
+			list_del(&cmd->i_list);
+			spin_unlock_bh(&conn->cmd_lock);
+			iscsi_increment_maxcmdsn(cmd, sess);
+			se_cmd = SE_CMD(cmd);
+			/*
+			 * Special cases for active iSCSI TMR, and
+			 * transport_get_lun_for_cmd() failing from
+			 * iscsi_get_lun_for_cmd() in iscsi_handle_scsi_cmd().
+			 */
+			if (cmd->tmr_req && se_cmd->transport_wait_for_tasks)
+				se_cmd->transport_wait_for_tasks(se_cmd, 1, 1);
+			else if (SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD)
+				transport_release_cmd_to_pool(se_cmd);
+			else
+				__iscsi_release_cmd_to_pool(cmd, sess);
+
+			spin_lock_bh(&conn->cmd_lock);
+			continue;
+		}
+		list_del(&cmd->i_list);
+		spin_unlock_bh(&conn->cmd_lock);
+
+		iscsi_increment_maxcmdsn(cmd, sess);
+		se_cmd = SE_CMD(cmd);
+
+		if (se_cmd->transport_wait_for_tasks)
+			se_cmd->transport_wait_for_tasks(se_cmd, 1, 1);
+
+		spin_lock_bh(&conn->cmd_lock);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+}
+
+/*	iscsi_stop_timers_for_cmds():
+ *
+ *
+ */
+static void iscsi_stop_timers_for_cmds(
+	struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->data_direction == DMA_TO_DEVICE)
+			iscsi_stop_dataout_timer(cmd);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+}
+
+/*	iscsi_close_connection():
+ *
+ *
+ */
+int iscsi_close_connection(
+	struct iscsi_conn *conn)
+{
+	int conn_logout = (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT);
+	struct iscsi_session	*sess = SESS(conn);
+
+	TRACE(TRACE_ISCSI, "Closing iSCSI connection CID %hu on SID:"
+		" %u\n", conn->cid, sess->sid);
+
+	iscsi_stop_netif_timer(conn);
+
+	/*
+	 * Always up conn_logout_sem just in case the RX Thread is sleeping
+	 * and the logout response never got sent because the connection
+	 * failed.
+	 */
+	up(&conn->conn_logout_sem);
+
+	iscsi_release_thread_set(conn, TARGET);
+
+	iscsi_stop_timers_for_cmds(conn);
+	iscsi_stop_nopin_response_timer(conn);
+	iscsi_stop_nopin_timer(conn);
+	iscsi_free_queue_reqs_for_conn(conn);
+
+	/*
+	 * During Connection recovery drop unacknowledged out of order
+	 * commands for this connection, and prepare the other commands
+	 * for reallegiance.
+	 *
+	 * During normal operation clear the out of order commands (but
+	 * do not free the struct iscsi_ooo_cmdsn's) and release all
+	 * struct iscsi_cmds.
+	 */
+	if (atomic_read(&conn->connection_recovery)) {
+		iscsi_discard_unacknowledged_ooo_cmdsns_for_conn(conn);
+		iscsi_prepare_cmds_for_realligance(conn);
+	} else {
+		iscsi_clear_ooo_cmdsns_for_conn(conn);
+		iscsi_release_commands_from_conn(conn);
+	}
+
+	/*
+	 * Handle decrementing session or connection usage count if
+	 * a logout response was not able to be sent because the
+	 * connection failed.  Fall back to Session Recovery here.
+	 */
+	if (atomic_read(&conn->conn_logout_remove)) {
+		if (conn->conn_logout_reason == ISCSI_LOGOUT_REASON_CLOSE_SESSION) {
+			iscsi_dec_conn_usage_count(conn);
+			iscsi_dec_session_usage_count(sess);
+		}
+		if (conn->conn_logout_reason == ISCSI_LOGOUT_REASON_CLOSE_CONNECTION)
+			iscsi_dec_conn_usage_count(conn);
+
+		atomic_set(&conn->conn_logout_remove, 0);
+		atomic_set(&sess->session_reinstatement, 0);
+		atomic_set(&sess->session_fall_back_to_erl0, 1);
+	}
+
+	spin_lock_bh(&sess->conn_lock);
+	iscsi_remove_conn_from_list(sess, conn);
+
+	/*
+	 * Attempt to let the Initiator know this connection failed by
+	 * sending a Connection Dropped Async Message on another
+	 * active connection.
+	 */
+	if (atomic_read(&conn->connection_recovery))
+		iscsi_build_conn_drop_async_message(conn);
+
+	spin_unlock_bh(&sess->conn_lock);
+
+	/*
+	 * If connection reinstatement is being performed on this connection,
+	 * up the connection reinstatement semaphore that is being blocked on
+	 * in iscsi_cause_connection_reinstatement().
+	 */
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->sleep_on_conn_wait_sem)) {
+		spin_unlock_bh(&conn->state_lock);
+		up(&conn->conn_wait_sem);
+		down(&conn->conn_post_wait_sem);
+		spin_lock_bh(&conn->state_lock);
+	}
+
+	/*
+	 * If connection reinstatement is being performed on this connection
+	 * by receiving a REMOVECONNFORRECOVERY logout request, up the
+	 * connection wait rcfr semaphore that is being blocked on
+	 * in iscsi_connection_reinstatement_rcfr().
+	 */
+	if (atomic_read(&conn->connection_wait_rcfr)) {
+		spin_unlock_bh(&conn->state_lock);
+		up(&conn->conn_wait_rcfr_sem);
+		down(&conn->conn_post_wait_sem);
+		spin_lock_bh(&conn->state_lock);
+	}
+	atomic_set(&conn->connection_reinstatement, 1);
+	spin_unlock_bh(&conn->state_lock);
+
+	/*
+	 * If any other processes are accessing this connection pointer we
+	 * must wait until they have completed.
+	 */
+	iscsi_check_conn_usage_count(conn);
+
+	if (conn->conn_rx_hash.tfm)
+		crypto_free_hash(conn->conn_rx_hash.tfm);
+	if (conn->conn_tx_hash.tfm)
+		crypto_free_hash(conn->conn_tx_hash.tfm);
+
+	if (conn->conn_cpumask)
+		free_cpumask_var(conn->conn_cpumask);
+
+	kfree(conn->conn_ops);
+	conn->conn_ops = NULL;
+
+	if (conn->sock) {
+		if (conn->conn_flags & CONNFLAG_SCTP_STRUCT_FILE) {
+			kfree(conn->sock->file);
+			conn->sock->file = NULL;
+		}
+		sock_release(conn->sock);
+	}
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_FREE.\n");
+	conn->conn_state = TARG_CONN_STATE_FREE;
+	kmem_cache_free(lio_conn_cache, conn);
+	conn = NULL;
+
+	spin_lock_bh(&sess->conn_lock);
+	atomic_dec(&sess->nconn);
+	printk(KERN_INFO "Decremented iSCSI connection count to %hu from node:"
+		" %s\n", atomic_read(&sess->nconn),
+		SESS_OPS(sess)->InitiatorName);
+	/*
+	 * Make sure that if one connection fails in a non ERL=2 iSCSI
+	 * Session that they all fail.
+	 */
+	if ((SESS_OPS(sess)->ErrorRecoveryLevel != 2) && !conn_logout &&
+	     !atomic_read(&sess->session_logout))
+		atomic_set(&sess->session_fall_back_to_erl0, 1);
+
+	/*
+	 * If this was not the last connection in the session, and we are
+	 * performing session reinstatement or falling back to ERL=0, call
+	 * iscsi_stop_session() without sleeping to shutdown the other
+	 * active connections.
+	 */
+	if (atomic_read(&sess->nconn)) {
+		if (!atomic_read(&sess->session_reinstatement) &&
+		    !atomic_read(&sess->session_fall_back_to_erl0)) {
+			spin_unlock_bh(&sess->conn_lock);
+			return 0;
+		}
+		if (!atomic_read(&sess->session_stop_active)) {
+			atomic_set(&sess->session_stop_active, 1);
+			spin_unlock_bh(&sess->conn_lock);
+			iscsi_stop_session(sess, 0, 0);
+			return 0;
+		}
+		spin_unlock_bh(&sess->conn_lock);
+		return 0;
+	}
+
+	/*
+	 * If this was the last connection in the session and one of the
+	 * following is occurring:
+	 *
+	 * Session Reinstatement is not being performed and we are falling
+	 * back to ERL=0, call iscsi_close_session().
+	 *
+	 * Session Logout was requested.  iscsi_close_session() will be called
+	 * elsewhere.
+	 *
+	 * Session Continuation is not being performed, start the Time2Retain
+	 * handler and check if sleep_on_sess_wait_sem is active.
+	 */
+	if (!atomic_read(&sess->session_reinstatement) &&
+	     atomic_read(&sess->session_fall_back_to_erl0)) {
+		spin_unlock_bh(&sess->conn_lock);
+		iscsi_close_session(sess);
+
+		return 0;
+	} else if (atomic_read(&sess->session_logout)) {
+		TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FREE.\n");
+		sess->session_state = TARG_SESS_STATE_FREE;
+		spin_unlock_bh(&sess->conn_lock);
+
+		if (atomic_read(&sess->sleep_on_sess_wait_sem))
+			up(&sess->session_wait_sem);
+
+		return 0;
+	} else {
+		TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FAILED.\n");
+		sess->session_state = TARG_SESS_STATE_FAILED;
+
+		if (!atomic_read(&sess->session_continuation)) {
+			spin_unlock_bh(&sess->conn_lock);
+			iscsi_start_time2retain_handler(sess);
+		} else
+			spin_unlock_bh(&sess->conn_lock);
+
+		if (atomic_read(&sess->sleep_on_sess_wait_sem))
+			up(&sess->session_wait_sem);
+
+		return 0;
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	return 0;
+}
+
+/*	iscsi_close_session():
+ *
+ *
+ */
+int iscsi_close_session(struct iscsi_session *sess)
+{
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+
+	if (atomic_read(&sess->nconn)) {
+		printk(KERN_ERR "%d connection(s) still exist for iSCSI session"
+			" to %s\n", atomic_read(&sess->nconn),
+			SESS_OPS(sess)->InitiatorName);
+		BUG();
+	}
+
+	spin_lock_bh(&se_tpg->session_lock);
+	atomic_set(&sess->session_logout, 1);
+	atomic_set(&sess->session_reinstatement, 1);
+	iscsi_stop_time2retain_timer(sess);
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	/*
+	 * transport_deregister_session_configfs() will clear the
+	 * struct se_node_acl->nacl_sess pointer now as an iscsi_np process context
+	 * can be setting it again with __transport_register_session() in
+	 * iscsi_post_login_handler() again after the iscsi_stop_session()
+	 * completes in iscsi_np context.
+	 */
+	transport_deregister_session_configfs(sess->se_sess);
+
+	/*
+	 * If any other processes are accessing this session pointer we must
+	 * wait until they have completed.  If we are in an interrupt (the
+	 * time2retain handler) and there is an active session usage count, we
+	 * restart the timer and exit.
+	 */
+	if (!in_interrupt()) {
+		if (iscsi_check_session_usage_count(sess) == 1)
+			iscsi_stop_session(sess, 1, 1);
+	} else {
+		if (iscsi_check_session_usage_count(sess) == 2) {
+			atomic_set(&sess->session_logout, 0);
+			iscsi_start_time2retain_handler(sess);
+			return 0;
+		}
+	}
+
+	transport_deregister_session(sess->se_sess);
+
+	if (SESS_OPS(sess)->ErrorRecoveryLevel == 2)
+		iscsi_free_connection_recovery_entires(sess);
+
+	iscsi_free_all_ooo_cmdsns(sess);
+
+	spin_lock_bh(&se_tpg->session_lock);
+	TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FREE.\n");
+	sess->session_state = TARG_SESS_STATE_FREE;
+	printk(KERN_INFO "Released iSCSI session from node: %s\n",
+			SESS_OPS(sess)->InitiatorName);
+	tpg->nsessions--;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_nsessions--;
+
+	printk(KERN_INFO "Decremented number of active iSCSI Sessions on"
+		" iSCSI TPG: %hu to %u\n", tpg->tpgt, tpg->nsessions);
+
+	kfree(sess->sess_ops);
+	sess->sess_ops = NULL;
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	kmem_cache_free(lio_sess_cache, sess);
+	sess = NULL;
+	return 0;
+}
+
+/*	iscsi_logout_post_handler_closesession():
+ *
+ *
+ */
+static void iscsi_logout_post_handler_closesession(
+	struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
+
+	atomic_set(&conn->conn_logout_remove, 0);
+	up(&conn->conn_logout_sem);
+
+	iscsi_dec_conn_usage_count(conn);
+	iscsi_stop_session(sess, 1, 1);
+	iscsi_dec_session_usage_count(sess);
+	iscsi_close_session(sess);
+}
+
+/*	iscsi_logout_post_handler_samecid():
+ *
+ *
+ */
+static void iscsi_logout_post_handler_samecid(
+	struct iscsi_conn *conn)
+{
+	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
+
+	atomic_set(&conn->conn_logout_remove, 0);
+	up(&conn->conn_logout_sem);
+
+	iscsi_cause_connection_reinstatement(conn, 1);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*	iscsi_logout_post_handler_diffcid():
+ *
+ *
+ */
+static void iscsi_logout_post_handler_diffcid(
+	struct iscsi_conn *conn,
+	__u16 cid)
+{
+	struct iscsi_conn *l_conn = NULL, *l_conn_tmp;
+	struct iscsi_session *sess = SESS(conn);
+
+	if (!sess)
+		return;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(l_conn_tmp, &sess->sess_conn_list, conn_list) {
+		if (l_conn_tmp->cid == cid) {
+			iscsi_inc_conn_usage_count(l_conn_tmp);
+			l_conn = l_conn_tmp;
+			break;
+		}
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	if (!l_conn)
+		return;
+
+	if (l_conn->sock)
+		l_conn->sock->ops->shutdown(l_conn->sock, RCV_SHUTDOWN);
+
+	spin_lock_bh(&l_conn->state_lock);
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGOUT.\n");
+	l_conn->conn_state = TARG_CONN_STATE_IN_LOGOUT;
+	spin_unlock_bh(&l_conn->state_lock);
+
+	iscsi_cause_connection_reinstatement(l_conn, 1);
+	iscsi_dec_conn_usage_count(l_conn);
+}
+
+/*	iscsi_logout_post_handler():
+ *
+ *	Return of 0 causes the TX thread to restart.
+ */
+static int iscsi_logout_post_handler(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int ret = 0;
+
+	switch (cmd->logout_reason) {
+	case ISCSI_LOGOUT_REASON_CLOSE_SESSION:
+		switch (cmd->logout_response) {
+		case ISCSI_LOGOUT_SUCCESS:
+		case ISCSI_LOGOUT_CLEANUP_FAILED:
+		default:
+			iscsi_logout_post_handler_closesession(conn);
+			break;
+		}
+		ret = 0;
+		break;
+	case ISCSI_LOGOUT_REASON_CLOSE_CONNECTION:
+		if (conn->cid == cmd->logout_cid) {
+			switch (cmd->logout_response) {
+			case ISCSI_LOGOUT_SUCCESS:
+			case ISCSI_LOGOUT_CLEANUP_FAILED:
+			default:
+				iscsi_logout_post_handler_samecid(conn);
+				break;
+			}
+			ret = 0;
+		} else {
+			switch (cmd->logout_response) {
+			case ISCSI_LOGOUT_SUCCESS:
+				iscsi_logout_post_handler_diffcid(conn,
+					cmd->logout_cid);
+				break;
+			case ISCSI_LOGOUT_CID_NOT_FOUND:
+			case ISCSI_LOGOUT_CLEANUP_FAILED:
+			default:
+				break;
+			}
+			ret = 1;
+		}
+		break;
+	case ISCSI_LOGOUT_REASON_RECOVERY:
+		switch (cmd->logout_response) {
+		case ISCSI_LOGOUT_SUCCESS:
+		case ISCSI_LOGOUT_CID_NOT_FOUND:
+		case ISCSI_LOGOUT_RECOVERY_UNSUPPORTED:
+		case ISCSI_LOGOUT_CLEANUP_FAILED:
+		default:
+			break;
+		}
+		ret = 1;
+		break;
+	default:
+		break;
+
+	}
+	return ret;
+}
+
+/*	iscsi_fail_session():
+ *
+ *
+ */
+void iscsi_fail_session(struct iscsi_session *sess)
+{
+	struct iscsi_conn *conn;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+		TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_CLEANUP_WAIT.\n");
+		conn->conn_state = TARG_CONN_STATE_CLEANUP_WAIT;
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_FAILED.\n");
+	sess->session_state = TARG_SESS_STATE_FAILED;
+}
+
+/*	iscsi_free_session():
+ *
+ *
+ */
+int iscsi_free_session(struct iscsi_session *sess)
+{
+	u16 conn_count = atomic_read(&sess->nconn);
+	struct iscsi_conn *conn, *conn_tmp;
+
+	spin_lock_bh(&sess->conn_lock);
+	atomic_set(&sess->sleep_on_sess_wait_sem, 1);
+
+	list_for_each_entry_safe(conn, conn_tmp, &sess->sess_conn_list,
+			conn_list) {
+		if (conn_count == 0)
+			break;
+
+		iscsi_inc_conn_usage_count(conn);
+		spin_unlock_bh(&sess->conn_lock);
+		iscsi_cause_connection_reinstatement(conn, 1);
+		spin_lock_bh(&sess->conn_lock);
+
+		iscsi_dec_conn_usage_count(conn);
+		conn_count--;
+	}
+
+	if (atomic_read(&sess->nconn)) {
+		spin_unlock_bh(&sess->conn_lock);
+		down(&sess->session_wait_sem);
+	} else
+		spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_close_session(sess);
+	return 0;
+}
+
+/*	iscsi_stop_session():
+ *
+ *
+ */
+void iscsi_stop_session(
+	struct iscsi_session *sess,
+	int session_sleep,
+	int connection_sleep)
+{
+	u16 conn_count = atomic_read(&sess->nconn);
+	struct iscsi_conn *conn, *conn_tmp = NULL;
+
+	spin_lock_bh(&sess->conn_lock);
+	if (session_sleep)
+		atomic_set(&sess->sleep_on_sess_wait_sem, 1);
+
+	if (connection_sleep) {
+		list_for_each_entry_safe(conn, conn_tmp, &sess->sess_conn_list,
+				conn_list) {
+			if (conn_count == 0)
+				break;
+
+			iscsi_inc_conn_usage_count(conn);
+			spin_unlock_bh(&sess->conn_lock);
+			iscsi_cause_connection_reinstatement(conn, 1);
+			spin_lock_bh(&sess->conn_lock);
+
+			iscsi_dec_conn_usage_count(conn);
+			conn_count--;
+		}
+	} else {
+		list_for_each_entry(conn, &sess->sess_conn_list, conn_list)
+			iscsi_cause_connection_reinstatement(conn, 0);
+	}
+
+	if (session_sleep && atomic_read(&sess->nconn)) {
+		spin_unlock_bh(&sess->conn_lock);
+		down(&sess->session_wait_sem);
+	} else
+		spin_unlock_bh(&sess->conn_lock);
+}
+
+/*	iscsi_release_sessions_for_tpg():
+ *
+ *
+ */
+int iscsi_release_sessions_for_tpg(struct iscsi_portal_group *tpg, int force)
+{
+	struct iscsi_session *sess;
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_session *se_sess, *se_sess_tmp;
+	int session_count = 0;
+
+	spin_lock_bh(&se_tpg->session_lock);
+	if (tpg->nsessions && !force) {
+		spin_unlock_bh(&se_tpg->session_lock);
+		return -1;
+	}
+
+	list_for_each_entry_safe(se_sess, se_sess_tmp, &se_tpg->tpg_sess_list,
+			sess_list) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+		spin_lock(&sess->conn_lock);
+		if (atomic_read(&sess->session_fall_back_to_erl0) ||
+		    atomic_read(&sess->session_logout) ||
+		    (sess->time2retain_timer_flags & T2R_TF_EXPIRED)) {
+			spin_unlock(&sess->conn_lock);
+			continue;
+		}
+		atomic_set(&sess->session_reinstatement, 1);
+		spin_unlock(&sess->conn_lock);
+		spin_unlock_bh(&se_tpg->session_lock);
+
+		iscsi_free_session(sess);
+		spin_lock_bh(&se_tpg->session_lock);
+
+		session_count++;
+	}
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	TRACE(TRACE_ISCSI, "Released %d iSCSI Session(s) from Target Portal"
+			" Group: %hu\n", session_count, tpg->tpgt);
+	return 0;
+}
+
+static int iscsi_target_init_module(void)
+{
+	if (!(iscsi_target_detect()))
+		return 0;
+
+	return -1;
+}
+
+static void iscsi_target_cleanup_module(void)
+{
+	iscsi_target_release();
+}
+
+#ifdef MODULE
+MODULE_DESCRIPTION("LIO Target Driver Core 3.x.x Release");
+MODULE_LICENSE("GPL");
+module_init(iscsi_target_init_module);
+module_exit(iscsi_target_cleanup_module);
+#endif /* MODULE */
diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
new file mode 100644
index 0000000..25d56c1
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target.h
@@ -0,0 +1,49 @@
+#ifndef ISCSI_TARGET_H
+#define ISCSI_TARGET_H
+
+extern struct iscsi_tiqn *core_get_tiqn_for_login(unsigned char *);
+extern struct iscsi_tiqn *core_get_tiqn(unsigned char *, int);
+extern void core_put_tiqn_for_login(struct iscsi_tiqn *);
+extern struct iscsi_tiqn *core_add_tiqn(unsigned char *, int *);
+extern int core_del_tiqn(struct iscsi_tiqn *);
+extern int core_access_np(struct iscsi_np *, struct iscsi_portal_group *);
+extern int core_deaccess_np(struct iscsi_np *, struct iscsi_portal_group *);
+extern void *core_get_np_ip(struct iscsi_np *np);
+extern struct iscsi_np *core_get_np(void *, u16, int);
+extern int __core_del_np_ex(struct iscsi_np *, struct iscsi_np_ex *);
+extern struct iscsi_np *core_add_np(struct iscsi_np_addr *, int, int *);
+extern int core_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *,
+				struct iscsi_portal_group *, int);
+extern int core_del_np(struct iscsi_np *);
+extern u32 iscsi_get_new_index(iscsi_index_t);
+extern char *iscsi_get_fabric_name(void);
+extern struct iscsi_cmd *iscsi_get_cmd(struct se_cmd *);
+extern u32 iscsi_get_task_tag(struct se_cmd *);
+extern int iscsi_get_cmd_state(struct se_cmd *);
+extern void iscsi_new_cmd_failure(struct se_cmd *);
+extern int iscsi_is_state_remove(struct se_cmd *);
+extern int lio_sess_logged_in(struct se_session *);
+extern u32 lio_sess_get_index(struct se_session *);
+extern u32 lio_sess_get_initiator_sid(struct se_session *,
+				unsigned char *, u32);
+extern int iscsi_send_async_msg(struct iscsi_conn *, u16, u8, u8);
+extern int lio_queue_data_in(struct se_cmd *);
+extern int iscsi_send_r2t(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_build_r2ts_for_cmd(struct iscsi_cmd *, struct iscsi_conn *, int);
+extern int lio_write_pending(struct se_cmd *);
+extern int lio_write_pending_status(struct se_cmd *);
+extern int lio_queue_status(struct se_cmd *);
+extern u16 lio_set_fabric_sense_len(struct se_cmd *, u32);
+extern u16 lio_get_fabric_sense_len(void);
+extern int lio_queue_tm_rsp(struct se_cmd *);
+extern void iscsi_thread_get_cpumask(struct iscsi_conn *);
+extern int iscsi_target_tx_thread(void *);
+extern int iscsi_target_rx_thread(void *);
+extern int iscsi_close_connection(struct iscsi_conn *);
+extern int iscsi_close_session(struct iscsi_session *);
+extern void iscsi_fail_session(struct iscsi_session *);
+extern int iscsi_free_session(struct iscsi_session *);
+extern void iscsi_stop_session(struct iscsi_session *, int, int);
+extern int iscsi_release_sessions_for_tpg(struct iscsi_portal_group *, int);
+
+#endif   /*** ISCSI_TARGET_H ***/
diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
new file mode 100644
index 0000000..86328dc
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_core.h
@@ -0,0 +1,1019 @@
+#ifndef ISCSI_TARGET_CORE_H
+#define ISCSI_TARGET_CORE_H
+
+#include <linux/in.h>
+#include <linux/configfs.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <scsi/scsi_cmnd.h>
+#include <target/target_core_base.h>
+
+#define ISCSI_VENDOR			"Linux-iSCSI.org"
+#define ISCSI_VERSION			"v4.1.0-rc1"
+#define SHUTDOWN_SIGS	(sigmask(SIGKILL)|sigmask(SIGINT)|sigmask(SIGABRT))
+#define ISCSI_MISC_IOVECS		5
+#define ISCSI_MAX_DATASN_MISSING_COUNT	16
+#define ISCSI_TX_THREAD_TCP_TIMEOUT	2
+#define ISCSI_RX_THREAD_TCP_TIMEOUT	2
+#define ISCSI_IQN_UNIQUENESS		14
+#define ISCSI_IQN_LEN			224
+#define ISCSI_TIQN_LEN			ISCSI_IQN_LEN
+#define SECONDS_FOR_ASYNC_LOGOUT	10
+#define SECONDS_FOR_ASYNC_TEXT		10
+#define IPV6_ADDRESS_SPACE		48
+#define IPV4_ADDRESS_SPACE		4
+#define IPV4_BUF_SIZE			18
+#define RESERVED			0xFFFFFFFF
+/* from target_core_base.h */
+#define ISCSI_MAX_LUNS_PER_TPG		TRANSPORT_MAX_LUNS_PER_TPG
+/* Maximum Target Portal Groups allowed */
+#define ISCSI_MAX_TPGS			64
+/* Size of the Network Device Name Buffer */
+#define ISCSI_NETDEV_NAME_SIZE		12
+/* Size of iSCSI specific sense buffer */
+#define ISCSI_SENSE_BUFFER_LEN		(TRANSPORT_SENSE_BUFFER + 2)
+
+/* struct iscsi_tpg_np->tpg_np_network_transport */
+#define ISCSI_TCP			0
+#define ISCSI_SCTP_TCP			1
+#define ISCSI_SCTP_UDP			2
+#define ISCSI_IWARP_TCP			3
+#define ISCSI_IWARP_SCTP		4
+#define ISCSI_INFINIBAND		5
+
+#define ISCSI_HDR_LEN			48
+#define CRC_LEN				4
+#define MAX_KEY_NAME_LENGTH		63
+#define MAX_KEY_VALUE_LENGTH		255
+#define INITIATOR			1
+#define TARGET				2
+#define WHITE_SPACE			" \t\v\f\n\r"
+
+/* RFC-3720 7.1.3  Standard Connection State Diagram for an Initiator */
+#define INIT_CONN_STATE_FREE			0x1
+#define INIT_CONN_STATE_XPT_WAIT		0x2
+#define INIT_CONN_STATE_IN_LOGIN		0x4
+#define INIT_CONN_STATE_LOGGED_IN		0x5
+#define INIT_CONN_STATE_IN_LOGOUT		0x6
+#define INIT_CONN_STATE_LOGOUT_REQUESTED	0x7
+#define INIT_CONN_STATE_CLEANUP_WAIT		0x8
+
+/* RFC-3720 7.1.4  Standard Connection State Diagram for a Target */
+#define TARG_CONN_STATE_FREE			0x1
+#define TARG_CONN_STATE_XPT_UP			0x3
+#define TARG_CONN_STATE_IN_LOGIN		0x4
+#define TARG_CONN_STATE_LOGGED_IN		0x5
+#define TARG_CONN_STATE_IN_LOGOUT		0x6
+#define TARG_CONN_STATE_LOGOUT_REQUESTED	0x7
+#define TARG_CONN_STATE_CLEANUP_WAIT		0x8
+
+/* RFC-3720 7.2 Connection Cleanup State Diagram for Initiators and Targets */
+#define CLEANUP_STATE_CLEANUP_WAIT		0x1
+#define CLEANUP_STATE_IN_CLEANUP		0x2
+#define CLEANUP_STATE_CLEANUP_FREE		0x3
+
+/* RFC-3720 7.3.1  Session State Diagram for an Initiator */
+#define INIT_SESS_STATE_FREE			0x1
+#define INIT_SESS_STATE_LOGGED_IN		0x3
+#define INIT_SESS_STATE_FAILED			0x4
+
+/* RFC-3720 7.3.2  Session State Diagram for a Target */
+#define TARG_SESS_STATE_FREE			0x1
+#define TARG_SESS_STATE_ACTIVE			0x2
+#define TARG_SESS_STATE_LOGGED_IN		0x3
+#define TARG_SESS_STATE_FAILED			0x4
+#define TARG_SESS_STATE_IN_CONTINUE		0x5
+
+/* struct iscsi_node_attrib sanity values */
+#define NA_DATAOUT_TIMEOUT		3
+#define NA_DATAOUT_TIMEOUT_MAX		60
+#define NA_DATAOUT_TIMEOUT_MIX		2
+#define NA_DATAOUT_TIMEOUT_RETRIES	5
+#define NA_DATAOUT_TIMEOUT_RETRIES_MAX	15
+#define NA_DATAOUT_TIMEOUT_RETRIES_MIN	1
+#define NA_NOPIN_TIMEOUT		5
+#define NA_NOPIN_TIMEOUT_MAX		60
+#define NA_NOPIN_TIMEOUT_MIN		3
+#define NA_NOPIN_RESPONSE_TIMEOUT	5
+#define NA_NOPIN_RESPONSE_TIMEOUT_MAX	60
+#define NA_NOPIN_RESPONSE_TIMEOUT_MIN	3
+#define NA_RANDOM_DATAIN_PDU_OFFSETS	0
+#define NA_RANDOM_DATAIN_SEQ_OFFSETS	0
+#define NA_RANDOM_R2T_OFFSETS		0
+#define NA_DEFAULT_ERL			0
+#define NA_DEFAULT_ERL_MAX		2
+#define NA_DEFAULT_ERL_MIN		0
+
+/* struct iscsi_tpg_attrib sanity values */
+#define TA_AUTHENTICATION		1
+#define TA_LOGIN_TIMEOUT		15
+#define TA_LOGIN_TIMEOUT_MAX		30
+#define TA_LOGIN_TIMEOUT_MIN		5
+#define TA_NETIF_TIMEOUT		2
+#define TA_NETIF_TIMEOUT_MAX		15
+#define TA_NETIF_TIMEOUT_MIN		2
+#define TA_GENERATE_NODE_ACLS		0
+#define TA_DEFAULT_CMDSN_DEPTH		16
+#define TA_DEFAULT_CMDSN_DEPTH_MAX	512
+#define TA_DEFAULT_CMDSN_DEPTH_MIN	1
+#define TA_CACHE_DYNAMIC_ACLS		0
+/* Enabled by default in demo mode (generate_node_acls=1) */
+#define TA_DEMO_MODE_WRITE_PROTECT	1
+/* Disabled by default in production mode w/ explicit ACLs */
+#define TA_PROD_MODE_WRITE_PROTECT	0
+/* Enabled by default with x86 supporting SSE v4.2 */
+#define TA_CRC32C_X86_OFFLOAD		1
+#define TA_CACHE_CORE_NPS		0
+
+/* struct iscsi_data_count->type */
+#define ISCSI_RX_DATA				1
+#define ISCSI_TX_DATA				2
+
+/* struct iscsi_datain_req->dr_done */
+#define DATAIN_COMPLETE_NORMAL			1
+#define DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY 2
+#define DATAIN_COMPLETE_CONNECTION_RECOVERY	3
+
+/* struct iscsi_datain_req->recovery */
+#define DATAIN_WITHIN_COMMAND_RECOVERY		1
+#define DATAIN_CONNECTION_RECOVERY		2
+
+/* struct iscsi_portal_group->state */
+#define TPG_STATE_FREE				0
+#define TPG_STATE_ACTIVE			1
+#define TPG_STATE_INACTIVE			2
+#define TPG_STATE_COLD_RESET			3
+
+/* iscsi_set_device_attribute() states */
+#define ISCSI_DEVATTRIB_ENABLE_DEVICE		1
+#define ISCSI_DEVATTRIB_DISABLE_DEVICE		2
+#define ISCSI_DEVATTRIB_ADD_LUN_ACL		3
+#define ISCSI_DEVATTRIB_DELETE_LUN_ACL		4
+
+/* struct iscsi_tiqn->tiqn_state */
+#define TIQN_STATE_ACTIVE			1
+#define TIQN_STATE_SHUTDOWN			2
+
+/* struct iscsi_cmd->cmd_flags */
+#define ICF_GOT_LAST_DATAOUT			0x00000001
+#define ICF_GOT_DATACK_SNACK			0x00000002
+#define ICF_NON_IMMEDIATE_UNSOLICITED_DATA	0x00000004
+#define ICF_SENT_LAST_R2T			0x00000008
+#define ICF_WITHIN_COMMAND_RECOVERY		0x00000010
+#define ICF_CONTIG_MEMORY			0x00000020
+#define ICF_ATTACHED_TO_RQUEUE			0x00000040
+#define ICF_OOO_CMDSN				0x00000080
+#define ICF_REJECT_FAIL_CONN			0x00000100
+
+/* struct iscsi_cmd->i_state */
+#define ISTATE_NO_STATE				0
+#define ISTATE_NEW_CMD				1
+#define ISTATE_DEFERRED_CMD			2
+#define ISTATE_UNSOLICITED_DATA			3
+#define ISTATE_RECEIVE_DATAOUT			4
+#define ISTATE_RECEIVE_DATAOUT_RECOVERY		5
+#define ISTATE_RECEIVED_LAST_DATAOUT		6
+#define ISTATE_WITHIN_DATAOUT_RECOVERY		7
+#define ISTATE_IN_CONNECTION_RECOVERY		8
+#define ISTATE_RECEIVED_TASKMGT			9
+#define ISTATE_SEND_ASYNCMSG			10
+#define ISTATE_SENT_ASYNCMSG			11
+#define	ISTATE_SEND_DATAIN			12
+#define ISTATE_SEND_LAST_DATAIN			13
+#define ISTATE_SENT_LAST_DATAIN			14
+#define ISTATE_SEND_LOGOUTRSP			15
+#define ISTATE_SENT_LOGOUTRSP			16
+#define ISTATE_SEND_NOPIN			17
+#define ISTATE_SENT_NOPIN			18
+#define ISTATE_SEND_REJECT			19
+#define ISTATE_SENT_REJECT			20
+#define	ISTATE_SEND_R2T				21
+#define ISTATE_SENT_R2T				22
+#define ISTATE_SEND_R2T_RECOVERY		23
+#define ISTATE_SENT_R2T_RECOVERY		24
+#define ISTATE_SEND_LAST_R2T			25
+#define ISTATE_SENT_LAST_R2T			26
+#define ISTATE_SEND_LAST_R2T_RECOVERY		27
+#define ISTATE_SENT_LAST_R2T_RECOVERY		28
+#define ISTATE_SEND_STATUS			29
+#define ISTATE_SEND_STATUS_BROKEN_PC		30
+#define ISTATE_SENT_STATUS			31
+#define ISTATE_SEND_STATUS_RECOVERY		32
+#define ISTATE_SENT_STATUS_RECOVERY		33
+#define ISTATE_SEND_TASKMGTRSP			34
+#define ISTATE_SENT_TASKMGTRSP			35
+#define ISTATE_SEND_TEXTRSP			36
+#define ISTATE_SENT_TEXTRSP			37
+#define ISTATE_SEND_NOPIN_WANT_RESPONSE		38
+#define ISTATE_SENT_NOPIN_WANT_RESPONSE		39
+#define ISTATE_SEND_NOPIN_NO_RESPONSE		40
+#define ISTATE_REMOVE				41
+#define ISTATE_FREE				42
+
+/* Used in struct iscsi_conn->conn_flags */
+#define CONNFLAG_SCTP_STRUCT_FILE		0x01
+
+/* Used for iscsi_recover_cmdsn() return values */
+#define CMDSN_ERROR_CANNOT_RECOVER		-1
+#define CMDSN_NORMAL_OPERATION			0
+#define CMDSN_LOWER_THAN_EXP			1
+#define	CMDSN_HIGHER_THAN_EXP			2
+
+/* Used for iscsi_handle_immediate_data() return values */
+#define IMMEDIDATE_DATA_CANNOT_RECOVER		-1
+#define IMMEDIDATE_DATA_NORMAL_OPERATION	0
+#define IMMEDIDATE_DATA_ERL1_CRC_FAILURE	1
+
+/* Used for iscsi_decide_dataout_action() return values */
+#define DATAOUT_CANNOT_RECOVER			-1
+#define DATAOUT_NORMAL				0
+#define DATAOUT_SEND_R2T			1
+#define DATAOUT_SEND_TO_TRANSPORT		2
+#define DATAOUT_WITHIN_COMMAND_RECOVERY		3
+
+/* Used for struct iscsi_node_auth structure members */
+#define MAX_USER_LEN				256
+#define MAX_PASS_LEN				256
+#define NAF_USERID_SET				0x01
+#define NAF_PASSWORD_SET			0x02
+#define NAF_USERID_IN_SET			0x04
+#define NAF_PASSWORD_IN_SET			0x08
+
+/* Used for struct iscsi_cmd->dataout_timer_flags */
+#define DATAOUT_TF_RUNNING			0x01
+#define DATAOUT_TF_STOP				0x02
+
+/* Used for struct iscsi_conn->netif_timer_flags */
+#define NETIF_TF_RUNNING			0x01
+#define NETIF_TF_STOP				0x02
+
+/* Used for struct iscsi_conn->nopin_timer_flags */
+#define NOPIN_TF_RUNNING			0x01
+#define NOPIN_TF_STOP				0x02
+
+/* Used for struct iscsi_conn->nopin_response_timer_flags */
+#define NOPIN_RESPONSE_TF_RUNNING		0x01
+#define NOPIN_RESPONSE_TF_STOP			0x02
+
+/* Used for struct iscsi_session->time2retain_timer_flags */
+#define T2R_TF_RUNNING				0x01
+#define T2R_TF_STOP				0x02
+#define T2R_TF_EXPIRED				0x04
+
+/* Used for iscsi_tpg_np->tpg_np_login_timer_flags */
+#define TPG_NP_TF_RUNNING			0x01
+#define TPG_NP_TF_STOP				0x02
+
+/* Used for struct iscsi_np->np_flags */
+#define NPF_IP_NETWORK				0x00
+#define NPF_NET_IPV4                            0x01
+#define NPF_NET_IPV6                            0x02
+#define NPF_SCTP_STRUCT_FILE			0x20 /* Bugfix */
+
+/* Used for struct iscsi_np->np_thread_state */
+#define ISCSI_NP_THREAD_ACTIVE			1
+#define ISCSI_NP_THREAD_INACTIVE		2
+#define ISCSI_NP_THREAD_RESET			3
+#define ISCSI_NP_THREAD_SHUTDOWN		4
+#define ISCSI_NP_THREAD_EXIT			5
+
+/* Used for debugging various ERL situations. */
+#define TARGET_ERL_MISSING_CMD_SN			1
+#define TARGET_ERL_MISSING_CMDSN_BATCH			2
+#define TARGET_ERL_MISSING_CMDSN_MIX			3
+#define TARGET_ERL_MISSING_CMDSN_MULTI			4
+#define TARGET_ERL_HEADER_CRC_FAILURE			5
+#define TARGET_ERL_IMMEDIATE_DATA_CRC_FAILURE		6
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE			7
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE_BATCH		8
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE_MIX		9
+#define TARGET_ERL_DATA_OUT_CRC_FAILURE_MULTI		10
+#define TARGET_ERL_DATA_OUT_FAIL			11
+#define TARGET_ERL_DATA_OUT_MISSING			12 /* TODO */
+#define TARGET_ERL_DATA_OUT_MISSING_BATCH		13 /* TODO */
+#define TARGET_ERL_DATA_OUT_MISSING_MIX			14 /* TODO */
+#define TARGET_ERL_DATA_OUT_TIMEOUT			15
+#define TARGET_ERL_FORCE_TX_TRANSPORT_RESET		16
+#define TARGET_ERL_FORCE_RX_TRANSPORT_RESET		17
+
+/*
+ * Threads and timers
+ */
+#define iscsi_daemon(thread, name, sigs)		\
+do {							\
+	daemonize(name);				\
+	current->policy = SCHED_NORMAL;			\
+	set_user_nice(current, -20);			\
+	spin_lock_irq(&current->sighand->siglock);	\
+	siginitsetinv(&current->blocked, (sigs));	\
+	recalc_sigpending();				\
+	(thread) = current;				\
+	spin_unlock_irq(&current->sighand->siglock);	\
+} while (0)
+
+#define MOD_TIMER(t, exp) mod_timer(t, (get_jiffies_64() + (exp) * HZ))
+#define SETUP_TIMER(timer, t, d, func)			\
+do {							\
+	(timer).expires	= (get_jiffies_64() + (t) * HZ);\
+	(timer).data	= (unsigned long)(d);		\
+	(timer).function = (func);			\
+} while (0)
+
+struct iscsi_conn_ops {
+	u8	HeaderDigest;			/* [0,1] == [None,CRC32C] */
+	u8	DataDigest;			/* [0,1] == [None,CRC32C] */
+	u32	MaxRecvDataSegmentLength;	/* [512..2**24-1] */
+	u8	OFMarker;			/* [0,1] == [No,Yes] */
+	u8	IFMarker;			/* [0,1] == [No,Yes] */
+	u32	OFMarkInt;			/* [1..65535] */
+	u32	IFMarkInt;			/* [1..65535] */
+};
+
+struct iscsi_sess_ops {
+	char	InitiatorName[224];
+	char	InitiatorAlias[256];
+	char	TargetName[224];
+	char	TargetAlias[256];
+	char	TargetAddress[256];
+	u16	TargetPortalGroupTag;		/* [0..65535] */
+	u16	MaxConnections;			/* [1..65535] */
+	u8	InitialR2T;			/* [0,1] == [No,Yes] */
+	u8	ImmediateData;			/* [0,1] == [No,Yes] */
+	u32	MaxBurstLength;			/* [512..2**24-1] */
+	u32	FirstBurstLength;		/* [512..2**24-1] */
+	u16	DefaultTime2Wait;		/* [0..3600] */
+	u16	DefaultTime2Retain;		/* [0..3600] */
+	u16	MaxOutstandingR2T;		/* [1..65535] */
+	u8	DataPDUInOrder;			/* [0,1] == [No,Yes] */
+	u8	DataSequenceInOrder;		/* [0,1] == [No,Yes] */
+	u8	ErrorRecoveryLevel;		/* [0..2] */
+	u8	SessionType;			/* [0,1] == [Normal,Discovery]*/
+};
+
+struct iscsi_queue_req {
+	int			state;
+	struct se_obj_lun_type_s *queue_se_obj_api;
+	struct iscsi_cmd	*cmd;
+	struct list_head	qr_list;
+} ____cacheline_aligned;
+
+struct iscsi_data_count {
+	int			data_length;
+	int			sync_and_steering;
+	int			type;
+	u32			iov_count;
+	u32			ss_iov_count;
+	u32			ss_marker_count;
+	struct iovec		*iov;
+} ____cacheline_aligned;
+
+struct iscsi_param_list {
+	struct list_head	param_list;
+	struct list_head	extra_response_list;
+} ____cacheline_aligned;
+
+struct iscsi_datain_req {
+	int			dr_complete;
+	int			generate_recovery_values;
+	int			recovery;
+	u32			begrun;
+	u32			runlength;
+	u32			data_length;
+	u32			data_offset;
+	u32			data_offset_end;
+	u32			data_sn;
+	u32			next_burst_len;
+	u32			read_data_done;
+	u32			seq_send_order;
+	struct list_head	dr_list;
+} ____cacheline_aligned;
+
+struct iscsi_ooo_cmdsn {
+	u16			cid;
+	u32			batch_count;
+	u32			cmdsn;
+	u32			exp_cmdsn;
+	struct iscsi_cmd	*cmd;
+	struct list_head	ooo_list;
+} ____cacheline_aligned;
+
+struct iscsi_datain {
+	u8			flags;
+	u32			data_sn;
+	u32			length;
+	u32			offset;
+} ____cacheline_aligned;
+
+struct iscsi_r2t {
+	int			seq_complete;
+	int			recovery_r2t;
+	int			sent_r2t;
+	u32			r2t_sn;
+	u32			offset;
+	u32			targ_xfer_tag;
+	u32			xfer_len;
+	struct list_head	r2t_list;
+} ____cacheline_aligned;
+
+struct iscsi_cmd {
+	u8			dataout_timer_flags;
+	/* DataOUT timeout retries */
+	u8			dataout_timeout_retries;
+	/* Within command recovery count */
+	u8			error_recovery_count;
+	/* iSCSI dependent state for out of order CmdSNs */
+	u8			deferred_i_state;
+	/* iSCSI dependent state */
+	u8			i_state;
+	/* Command is an immediate command (ISCSI_OP_IMMEDIATE set) */
+	u8			immediate_cmd;
+	/* Immediate data present */
+	u8			immediate_data;
+	/* iSCSI Opcode */
+	u8			iscsi_opcode;
+	/* iSCSI Response Code */
+	u8			iscsi_response;
+	/* Logout reason when iscsi_opcode == ISCSI_INIT_LOGOUT_CMND */
+	u8			logout_reason;
+	/* Logout response code when iscsi_opcode == ISCSI_INIT_LOGOUT_CMND */
+	u8			logout_response;
+	/* MaxCmdSN has been incremented */
+	u8			maxcmdsn_inc;
+	/* Immediate Unsolicited Dataout */
+	u8			unsolicited_data;
+	/* CID contained in logout PDU when opcode == ISCSI_INIT_LOGOUT_CMND */
+	u16			logout_cid;
+	/* Command flags */
+	u32			cmd_flags;
+	/* Initiator Task Tag assigned from Initiator */
+	u32 			init_task_tag;
+	/* Target Transfer Tag assigned from Target */
+	u32			targ_xfer_tag;
+	/* CmdSN assigned from Initiator */
+	u32			cmd_sn;
+	/* ExpStatSN assigned from Initiator */
+	u32			exp_stat_sn;
+	/* StatSN assigned to this ITT */
+	u32			stat_sn;
+	/* DataSN Counter */
+	u32			data_sn;
+	/* R2TSN Counter */
+	u32			r2t_sn;
+	/* Last DataSN acknowledged via DataAck SNACK */
+	u32			acked_data_sn;
+	/* Used for echoing NOPOUT ping data */
+	u32			buf_ptr_size;
+	/* Used to store DataDigest */
+	u32			data_crc;
+	/* Total size in bytes associated with command */
+	u32			data_length;
+	/* Counter for MaxOutstandingR2T */
+	u32			outstanding_r2ts;
+	/* Next R2T Offset when DataSequenceInOrder=Yes */
+	u32			r2t_offset;
+	/* Iovec current and orig count for iscsi_cmd->iov_data */
+	u32			iov_data_count;
+	u32			orig_iov_data_count;
+	/* Number of miscellaneous iovecs used for IP stack calls */
+	u32			iov_misc_count;
+	/* Bytes used for 32-bit word padding */
+	u32			pad_bytes;
+	/* Number of struct iscsi_pdu in struct iscsi_cmd->pdu_list */
+	u32			pdu_count;
+	/* Next struct iscsi_pdu to send in struct iscsi_cmd->pdu_list */
+	u32			pdu_send_order;
+	/* Current struct iscsi_pdu in struct iscsi_cmd->pdu_list */
+	u32			pdu_start;
+	u32			residual_count;
+	/* Next struct iscsi_seq to send in struct iscsi_cmd->seq_list */
+	u32			seq_send_order;
+	/* Number of struct iscsi_seq in struct iscsi_cmd->seq_list */
+	u32			seq_count;
+	/* Current struct iscsi_seq in struct iscsi_cmd->seq_list */
+	u32			seq_no;
+	/* Lowest offset in current DataOUT sequence */
+	u32			seq_start_offset;
+	/* Highest offset in current DataOUT sequence */
+	u32			seq_end_offset;
+	/* Total size in bytes received so far of READ data */
+	u32			read_data_done;
+	/* Total size in bytes received so far of WRITE data */
+	u32			write_data_done;
+	/* Counter for FirstBurstLength key */
+	u32			first_burst_len;
+	/* Counter for MaxBurstLength key */
+	u32			next_burst_len;
+	/* Transfer size used for IP stack calls */
+	u32			tx_size;
+	/* Buffer used for various purposes */
+	void			*buf_ptr;
+	/* See include/linux/dma-mapping.h */
+	enum dma_data_direction	data_direction;
+	/* iSCSI PDU Header + CRC */
+	unsigned char		pdu[ISCSI_HDR_LEN + CRC_LEN];
+	/* Number of times struct iscsi_cmd is present in immediate queue */
+	atomic_t		immed_queue_count;
+	atomic_t		response_queue_count;
+	atomic_t		transport_sent;
+	spinlock_t		datain_lock;
+	spinlock_t		dataout_timeout_lock;
+	/* spinlock for protecting struct iscsi_cmd->i_state */
+	spinlock_t		istate_lock;
+	/* spinlock for adding within command recovery entries */
+	spinlock_t		error_lock;
+	/* spinlock for adding R2Ts */
+	spinlock_t		r2t_lock;
+	/* DataIN List */
+	struct list_head	datain_list;
+	/* R2T List */
+	struct list_head	cmd_r2t_list;
+	struct semaphore	reject_sem;
+	/* Semaphore used for allocating buffer */
+	struct semaphore	unsolicited_data_sem;
+	/* Timer for DataOUT */
+	struct timer_list	dataout_timer;
+	/* Iovecs for SCSI data payload RX/TX w/ kernel level sockets */
+	struct iovec		*iov_data;
+	/* Iovecs for miscellaneous purposes */
+	struct iovec		iov_misc[ISCSI_MISC_IOVECS];
+	/* Array of struct iscsi_pdu used for DataPDUInOrder=No */
+	struct iscsi_pdu	*pdu_list;
+	/* Current struct iscsi_pdu used for DataPDUInOrder=No */
+	struct iscsi_pdu	*pdu_ptr;
+	/* Array of struct iscsi_seq used for DataSequenceInOrder=No */
+	struct iscsi_seq	*seq_list;
+	/* Current struct iscsi_seq used for DataSequenceInOrder=No */
+	struct iscsi_seq	*seq_ptr;
+	/* TMR Request when iscsi_opcode == ISCSI_OP_SCSI_TMFUNC */
+	struct iscsi_tmr_req	*tmr_req;
+	/* Connection this command is allegiant to */
+	struct iscsi_conn 	*conn;
+	/* Pointer to connection recovery entry */
+	struct iscsi_conn_recovery *cr;
+	/* Session the command is part of, used for connection recovery */
+	struct iscsi_session	*sess;
+	/* Next command in the session pool */
+	struct iscsi_cmd	*next;
+	/* list_head for connection list */
+	struct list_head	i_list;
+	/* Next command in DAS transport list */
+	struct iscsi_cmd	*t_next;
+	/* Previous command in DAS transport list */
+	struct iscsi_cmd	*t_prev;
+	/* The TCM I/O descriptor that is accessed via container_of() */
+	struct se_cmd		se_cmd;
+	/* Sense buffer that will be mapped into outgoing status */
+	unsigned char		sense_buffer[ISCSI_SENSE_BUFFER_LEN];
+}  ____cacheline_aligned;
+
+#define SE_CMD(cmd)		(&(cmd)->se_cmd)
+
+struct iscsi_tmr_req {
+	bool			task_reassign:1;
+	u32			ref_cmd_sn;
+	u32			exp_data_sn;
+	struct iscsi_conn_recovery *conn_recovery;
+	struct se_tmr_req	*se_tmr_req;
+} ____cacheline_aligned;
+
+struct iscsi_conn {
+	char			net_dev[ISCSI_NETDEV_NAME_SIZE];
+	/* Authentication Successful for this connection */
+	u8			auth_complete;
+	/* State connection is currently in */
+	u8			conn_state;
+	u8			conn_logout_reason;
+	u8			netif_timer_flags;
+	u8			network_transport;
+	u8			nopin_timer_flags;
+	u8			nopin_response_timer_flags;
+	u8			tx_immediate_queue;
+	u8			tx_response_queue;
+	/* Used to know what thread encountered a transport failure */
+	u8			which_thread;
+	/* connection id assigned by the Initiator */
+	u16			cid;
+	/* Remote TCP Port */
+	u16			login_port;
+	int			net_size;
+	u32			auth_id;
+	u32			conn_flags;
+	/* Remote TCP IP address */
+	u32			login_ip;
+	/* Used for iscsi_tx_login_rsp() */
+	u32			login_itt;
+	u32			exp_statsn;
+	/* Per connection status sequence number */
+	u32			stat_sn;
+	/* IFMarkInt's Current Value */
+	u32			if_marker;
+	/* OFMarkInt's Current Value */
+	u32			of_marker;
+	/* Used for calculating OFMarker offset to next PDU */
+	u32			of_marker_offset;
+	/* Complete Bad PDU for sending reject */
+	unsigned char		bad_hdr[ISCSI_HDR_LEN];
+	unsigned char		ipv6_login_ip[IPV6_ADDRESS_SPACE];
+	u16			local_port;
+	u32			local_ip;
+	u32			conn_index;
+	atomic_t		active_cmds;
+	atomic_t		check_immediate_queue;
+	atomic_t		conn_logout_remove;
+	atomic_t		conn_usage_count;
+	atomic_t		conn_waiting_on_uc;
+	atomic_t		connection_exit;
+	atomic_t		connection_recovery;
+	atomic_t		connection_reinstatement;
+	atomic_t		connection_wait;
+	atomic_t		connection_wait_rcfr;
+	atomic_t		sleep_on_conn_wait_sem;
+	atomic_t		transport_failed;
+	struct net_device	*net_if;
+	struct semaphore	conn_post_wait_sem;
+	struct semaphore	conn_wait_sem;
+	struct semaphore	conn_wait_rcfr_sem;
+	struct semaphore	conn_waiting_on_uc_sem;
+	struct semaphore	conn_logout_sem;
+	struct semaphore	rx_half_close_sem;
+	struct semaphore	tx_half_close_sem;
+	/* Semaphore for conn's tx_thread to sleep on */
+	struct semaphore	tx_sem;
+	/* socket used by this connection */
+	struct socket		*sock;
+	struct timer_list	nopin_timer;
+	struct timer_list	nopin_response_timer;
+	struct timer_list	transport_timer;
+	/* Spinlock used for add/deleting cmd's from conn_cmd_list */
+	spinlock_t		cmd_lock;
+	spinlock_t		conn_usage_lock;
+	spinlock_t		immed_queue_lock;
+	spinlock_t		netif_lock;
+	spinlock_t		nopin_timer_lock;
+	spinlock_t		response_queue_lock;
+	spinlock_t		state_lock;
+	/* libcrypto RX and TX contexts for crc32c */
+	struct hash_desc	conn_rx_hash;
+	struct hash_desc	conn_tx_hash;
+	/* Used for scheduling TX and RX connection kthreads */
+	cpumask_var_t		conn_cpumask;
+	unsigned int		conn_rx_reset_cpumask:1;
+	unsigned int		conn_tx_reset_cpumask:1;
+	/* list_head of struct iscsi_cmd for this connection */
+	struct list_head	conn_cmd_list;
+	struct list_head	immed_queue_list;
+	struct list_head	response_queue_list;
+	struct iscsi_conn_ops	*conn_ops;
+	struct iscsi_param_list	*param_list;
+	/* Used for per connection auth state machine */
+	void			*auth_protocol;
+	struct iscsi_login_thread_s *login_thread;
+	struct iscsi_portal_group *tpg;
+	/* Pointer to parent session */
+	struct iscsi_session	*sess;
+	/* Pointer to thread_set in use for this conn's threads */
+	struct se_thread_set	*thread_set;
+	/* list_head for session connection list */
+	struct list_head	conn_list;
+} ____cacheline_aligned;
+
+#define CONN(cmd)		((struct iscsi_conn *)(cmd)->conn)
+#define CONN_OPS(conn)		((struct iscsi_conn_ops *)(conn)->conn_ops)
+
+struct iscsi_conn_recovery {
+	u16			cid;
+	u32			cmd_count;
+	u32			maxrecvdatasegmentlength;
+	int			ready_for_reallegiance;
+	struct list_head	conn_recovery_cmd_list;
+	spinlock_t		conn_recovery_cmd_lock;
+	struct semaphore		time2wait_sem;
+	struct timer_list		time2retain_timer;
+	struct iscsi_session	*sess;
+	struct list_head	cr_list;
+}  ____cacheline_aligned;
+
+struct iscsi_session {
+	u8			cmdsn_outoforder;
+	u8			initiator_vendor;
+	u8			isid[6];
+	u8			time2retain_timer_flags;
+	u8			version_active;
+	u16			cid_called;
+	u16			conn_recovery_count;
+	u16			tsih;
+	/* state session is currently in */
+	u32			session_state;
+	/* session wide counter: initiator assigned task tag */
+	u32			init_task_tag;
+	/* session wide counter: target assigned task tag */
+	u32			targ_xfer_tag;
+	u32			cmdsn_window;
+	/* session wide counter: expected command sequence number */
+	u32			exp_cmd_sn;
+	/* session wide counter: maximum allowed command sequence number */
+	u32			max_cmd_sn;
+	u32			ooo_cmdsn_count;
+	/* LIO specific session ID */
+	u32			sid;
+	char			auth_type[8];
+	/* unique within the target */
+	u32			session_index;
+	u32			cmd_pdus;
+	u32			rsp_pdus;
+	u64			tx_data_octets;
+	u64			rx_data_octets;
+	u32			conn_digest_errors;
+	u32			conn_timeout_errors;
+	u64			creation_time;
+	spinlock_t		session_stats_lock;
+	/* Number of active connections */
+	atomic_t		nconn;
+	atomic_t		session_continuation;
+	atomic_t		session_fall_back_to_erl0;
+	atomic_t		session_logout;
+	atomic_t		session_reinstatement;
+	atomic_t		session_stop_active;
+	atomic_t		session_usage_count;
+	atomic_t		session_waiting_on_uc;
+	atomic_t		sleep_on_sess_wait_sem;
+	atomic_t		transport_wait_cmds;
+	/* connection list */
+	struct list_head	sess_conn_list;
+	struct list_head	cr_active_list;
+	struct list_head	cr_inactive_list;
+	spinlock_t		cmdsn_lock;
+	spinlock_t		conn_lock;
+	spinlock_t		cr_a_lock;
+	spinlock_t		cr_i_lock;
+	spinlock_t		session_usage_lock;
+	spinlock_t		ttt_lock;
+	struct list_head	sess_ooo_cmdsn_list;
+	struct semaphore	async_msg_sem;
+	struct semaphore	reinstatement_sem;
+	struct semaphore	session_wait_sem;
+	struct semaphore	session_waiting_on_uc_sem;
+	struct timer_list	time2retain_timer;
+	struct iscsi_sess_ops	*sess_ops;
+	struct se_session	*se_sess;
+	struct iscsi_portal_group *tpg;
+} ____cacheline_aligned;
+
+#define SESS(conn)		((struct iscsi_session *)(conn)->sess)
+#define SESS_OPS(sess)		((struct iscsi_sess_ops *)(sess)->sess_ops)
+#define SESS_OPS_C(conn)	((struct iscsi_sess_ops *)(conn)->sess->sess_ops)
+#define SESS_NODE_ACL(sess)	((struct se_node_acl *)(sess)->se_sess->se_node_acl)
+
+struct iscsi_login {
+	u8 auth_complete;
+	u8 checked_for_existing;
+	u8 current_stage;
+	u8 leading_connection;
+	u8 first_request;
+	u8 version_min;
+	u8 version_max;
+	char isid[6];
+	u32 cmd_sn;
+	u32 init_task_tag;
+	u32 initial_exp_statsn;
+	u32 rsp_length;
+	u16 cid;
+	u16 tsih;
+	char *req;
+	char *rsp;
+	char *req_buf;
+	char *rsp_buf;
+} ____cacheline_aligned;
+
+struct iscsi_node_attrib {
+	u32			dataout_timeout;
+	u32			dataout_timeout_retries;
+	u32			default_erl;
+	u32			nopin_timeout;
+	u32			nopin_response_timeout;
+	u32			random_datain_pdu_offsets;
+	u32			random_datain_seq_offsets;
+	u32			random_r2t_offsets;
+	u32			tmr_cold_reset;
+	u32			tmr_warm_reset;
+	struct iscsi_node_acl *nacl;
+} ____cacheline_aligned;
+
+struct se_dev_entry_s;
+
+struct iscsi_node_auth {
+	int			naf_flags;
+	int			authenticate_target;
+	/* Used for iscsi_global->discovery_auth,
+	 * set to zero (auth disabled) by default */
+	int			enforce_discovery_auth;
+	char			userid[MAX_USER_LEN];
+	char			password[MAX_PASS_LEN];
+	char			userid_mutual[MAX_USER_LEN];
+	char			password_mutual[MAX_PASS_LEN];
+} ____cacheline_aligned;
+
+#include "iscsi_target_stat.h"
+
+struct iscsi_node_stat_grps {
+	struct config_group	iscsi_sess_stats_group;
+	struct config_group	iscsi_conn_stats_group;
+};
+
+struct iscsi_node_acl {
+	struct iscsi_node_attrib node_attrib;
+	struct iscsi_node_auth	node_auth;
+	struct iscsi_node_stat_grps node_stat_grps;
+	struct se_node_acl	se_node_acl;
+} ____cacheline_aligned;
+
+#define NODE_STAT_GRPS(nacl)	(&(nacl)->node_stat_grps)
+
+#define ISCSI_NODE_ATTRIB(t)	(&(t)->node_attrib)
+#define ISCSI_NODE_AUTH(t)	(&(t)->node_auth)
+
+struct iscsi_tpg_attrib {
+	u32			authentication;
+	u32			login_timeout;
+	u32			netif_timeout;
+	u32			generate_node_acls;
+	u32			cache_dynamic_acls;
+	u32			default_cmdsn_depth;
+	u32			demo_mode_write_protect;
+	u32			prod_mode_write_protect;
+	/* Used to signal libcrypto crc32-intel offload instruction usage */
+	u32			crc32c_x86_offload;
+	u32			cache_core_nps;
+	struct iscsi_portal_group *tpg;
+}  ____cacheline_aligned;
+
+struct iscsi_np_ex {
+	int			np_ex_net_size;
+	u16			np_ex_port;
+	u32			np_ex_ipv4;
+	unsigned char		np_ex_ipv6[IPV6_ADDRESS_SPACE];
+	struct list_head	np_ex_list;
+} ____cacheline_aligned;
+
+struct iscsi_np {
+	unsigned char		np_net_dev[ISCSI_NETDEV_NAME_SIZE];
+	int			np_network_transport;
+	int			np_thread_state;
+	int			np_login_timer_flags;
+	int			np_net_size;
+	u32			np_exports;
+	u32			np_flags;
+	u32			np_ipv4;
+	unsigned char		np_ipv6[IPV6_ADDRESS_SPACE];
+	u32			np_index;
+	u16			np_port;
+	atomic_t		np_shutdown;
+	spinlock_t		np_ex_lock;
+	spinlock_t		np_state_lock;
+	spinlock_t		np_thread_lock;
+	struct semaphore		np_done_sem;
+	struct semaphore		np_restart_sem;
+	struct semaphore		np_shutdown_sem;
+	struct semaphore		np_start_sem;
+	struct socket		*np_socket;
+	struct task_struct		*np_thread;
+	struct timer_list		np_login_timer;
+	struct iscsi_portal_group *np_login_tpg;
+	struct list_head	np_list;
+	struct list_head	np_nex_list;
+} ____cacheline_aligned;
+
+struct iscsi_tpg_np {
+	u32			tpg_np_index;
+	struct iscsi_np		*tpg_np;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np	*tpg_np_parent;
+	struct list_head	tpg_np_list;
+	struct list_head	tpg_np_child_list;
+	struct list_head	tpg_np_parent_list;
+	struct se_tpg_np	se_tpg_np;
+	spinlock_t		tpg_np_parent_lock;
+} ____cacheline_aligned;
+
+struct iscsi_np_addr {
+	u16		np_port;
+	u32		np_flags;
+	u32		np_ipv4;
+	unsigned char	np_ipv6[IPV6_ADDRESS_SPACE];
+} ____cacheline_aligned;
+
+struct iscsi_portal_group {
+	unsigned char		tpg_chap_id;
+	/* TPG State */
+	u8			tpg_state;
+	/* Target Portal Group Tag */
+	u16			tpgt;
+	/* Id assigned to target sessions */
+	u16			ntsih;
+	/* Number of active sessions */
+	u32			nsessions;
+	/* Number of Network Portals available for this TPG */
+	u32			num_tpg_nps;
+	/* Per TPG LIO specific session ID. */
+	u32			sid;
+	/* Spinlock for adding/removing Network Portals */
+	spinlock_t		tpg_np_lock;
+	spinlock_t		tpg_state_lock;
+	struct se_portal_group tpg_se_tpg;
+	struct semaphore	tpg_access_sem;
+	struct semaphore	np_login_sem;
+	struct iscsi_tpg_attrib	tpg_attrib;
+	/* Pointer to default list of iSCSI parameters for TPG */
+	struct iscsi_param_list	*param_list;
+	struct iscsi_tiqn	*tpg_tiqn;
+	struct list_head 	tpg_gnp_list;
+	struct list_head	tpg_list;
+	struct list_head	g_tpg_list;
+} ____cacheline_aligned;
+
+#define ISCSI_TPG_C(c)		((struct iscsi_portal_group *)(c)->tpg)
+#define ISCSI_TPG_LUN(c, l)  ((iscsi_tpg_list_t *)(c)->tpg->tpg_lun_list_t[l])
+#define ISCSI_TPG_S(s)		((struct iscsi_portal_group *)(s)->tpg)
+#define ISCSI_TPG_ATTRIB(t)	(&(t)->tpg_attrib)
+#define SE_TPG(tpg)		(&(tpg)->tpg_se_tpg)
+
+struct iscsi_wwn_stat_grps {
+	struct config_group	iscsi_stat_group;
+	struct config_group	iscsi_instance_group;
+	struct config_group	iscsi_sess_err_group;
+	struct config_group	iscsi_tgt_attr_group;
+	struct config_group	iscsi_login_stats_group;
+	struct config_group	iscsi_logout_stats_group;
+};
+
+struct iscsi_tiqn {
+	unsigned char		tiqn[ISCSI_TIQN_LEN];
+	int			tiqn_state;
+	u32			tiqn_active_tpgs;
+	u32			tiqn_ntpgs;
+	u32			tiqn_num_tpg_nps;
+	u32			tiqn_nsessions;
+	struct list_head	tiqn_list;
+	struct list_head	tiqn_tpg_list;
+	atomic_t		tiqn_access_count;
+	spinlock_t		tiqn_state_lock;
+	spinlock_t		tiqn_tpg_lock;
+	struct se_wwn		tiqn_wwn;
+	struct iscsi_wwn_stat_grps tiqn_stat_grps;
+	u32			tiqn_index;
+	struct iscsi_sess_err_stats  sess_err_stats;
+	struct iscsi_login_stats     login_stats;
+	struct iscsi_logout_stats    logout_stats;
+} ____cacheline_aligned;
+
+#define WWN_STAT_GRPS(tiqn)	(&(tiqn)->tiqn_stat_grps)
+
+struct iscsi_global {
+	/* iSCSI Node Name */
+	char			targetname[ISCSI_IQN_LEN];
+	/* In module removal */
+	u32			in_rmmod;
+	/* In core shutdown */
+	u32			in_shutdown;
+	/* Is the iSCSI Node name set? */
+	u32			targetname_set;
+	u32			active_ts;
+	/* Unique identifier used for the authentication daemon */
+	u32			auth_id;
+	u32			inactive_ts;
+	/* Thread Set bitmap count */
+	int			ts_bitmap_count;
+	/* Thread Set bitmap pointer */
+	unsigned long		*ts_bitmap;
+	int (*ti_forcechanoffline)(void *);
+	struct list_head	g_tiqn_list;
+	struct list_head	g_tpg_list;
+	struct list_head	tpg_list;
+	struct list_head	g_np_list;
+	spinlock_t		active_ts_lock;
+	spinlock_t		check_thread_lock;
+	/* Spinlock for adding/removing discovery entries */
+	spinlock_t		discovery_lock;
+	spinlock_t		inactive_ts_lock;
+	/* Spinlock for adding/removing login threads */
+	spinlock_t		login_thread_lock;
+	spinlock_t		shutdown_lock;
+	/* Spinlock for adding/removing thread sets */
+	spinlock_t		thread_set_lock;
+	/* Spinlock for iscsi_global->ts_bitmap */
+	spinlock_t		ts_bitmap_lock;
+	/* Spinlock for struct iscsi_tiqn */
+	spinlock_t		tiqn_lock;
+	spinlock_t		g_tpg_lock;
+	/* Spinlock g_np_list */
+	spinlock_t		np_lock;
+	/* Semaphore used for communication to authentication daemon */
+	struct semaphore	auth_sem;
+	/* Semaphore used for allocate of struct iscsi_conn->auth_id */
+	struct semaphore	auth_id_sem;
+	/* Used for iSCSI discovery session authentication */
+	struct iscsi_node_acl	discovery_acl;
+	struct iscsi_portal_group	*discovery_tpg;
+	struct list_head	active_ts_list;
+	struct list_head	inactive_ts_list;
+} ____cacheline_aligned;
+
+#endif /* ISCSI_TARGET_CORE_H */
-- 
1.7.4.1


* [RFC 03/12] iscsi-target: Add TCM v4 compatiable ConfigFS control plane
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds support for /sys/kernel/config/target/iscsi using
TCM v4.0 compatible calls following target_core_fabric_configfs.c.

This includes a number of iSCSI fabric dependent attributes layered on
top of the struct config_item_types provided by
target_core_fabric_configfs.c, using struct
target_fabric_configfs_template from include/target/target_core_configfs.h.

It also includes iscsi_target_nodeattrib.[c,h] for handling the
lio_target_nacl_attrib_attrs[] show/store handlers for iSCSI fabric
dependent node attributes.
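
As a concrete illustration of how these per-ACL attributes are wired up,
the DEF_NACL_ATTRIB() macro in iscsi_target_configfs.c generates a
show/store pair for each node attribute.  Hand-expanded for
dataout_timeout, it is roughly equivalent to the following sketch (shown
here for review purposes only; the real handlers come from the macro
below):

	static ssize_t iscsi_nacl_attrib_show_dataout_timeout(
		struct se_node_acl *se_nacl,
		char *page)
	{
		struct iscsi_node_acl *nacl = container_of(se_nacl,
				struct iscsi_node_acl, se_node_acl);

		/* Echo the current per-ACL value back through configfs */
		return sprintf(page, "%u\n",
			ISCSI_NODE_ATTRIB(nacl)->dataout_timeout);
	}

	static ssize_t iscsi_nacl_attrib_store_dataout_timeout(
		struct se_node_acl *se_nacl,
		const char *page,
		size_t count)
	{
		struct iscsi_node_acl *nacl = container_of(se_nacl,
				struct iscsi_node_acl, se_node_acl);
		char *endptr;
		u32 val;
		int ret;

		val = simple_strtoul(page, &endptr, 0);
		/*
		 * Range checking and assignment happen in the
		 * iscsi_na_*() helpers from iscsi_target_nodeattrib.c
		 */
		ret = iscsi_na_dataout_timeout(nacl, val);
		if (ret < 0)
			return ret;

		return count;
	}

The sanity limits themselves (NA_DATAOUT_TIMEOUT_MAX and friends from
iscsi_target_core.h) are enforced by the iscsi_na_*() helpers, so the
configfs layer stays a thin text-to-u32 translation.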

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_configfs.c   | 1617 ++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_configfs.h   |    9 +
 drivers/target/iscsi/iscsi_target_nodeattrib.c |  300 +++++
 drivers/target/iscsi/iscsi_target_nodeattrib.h |   14 +
 4 files changed, 1940 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_configfs.c
 create mode 100644 drivers/target/iscsi/iscsi_target_configfs.h
 create mode 100644 drivers/target/iscsi/iscsi_target_nodeattrib.c
 create mode 100644 drivers/target/iscsi/iscsi_target_nodeattrib.h

diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
new file mode 100644
index 0000000..a1058ce
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_configfs.c
@@ -0,0 +1,1617 @@
+/*******************************************************************************
+ * This file contains the configfs implementation for iSCSI Target mode
+ * from the LIO-Target Project.
+ *
+ * Copyright (c) 2008, 2009, 2010 Rising Tide, Inc.
+ * Copyright (c) 2008, 2009, 2010 Linux-iSCSI.org
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ****************************************************************************/
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/version.h>
+#include <generated/utsrelease.h>
+#include <linux/utsname.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/configfs.h>
+#include <linux/inet.h>
+
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/target_core_fabric_ops.h>
+#include <target/target_core_fabric_configfs.h>
+#include <target/target_core_fabric_lib.h>
+#include <target/target_core_device.h>
+#include <target/target_core_tpg.h>
+#include <target/target_core_configfs.h>
+#include <target/configfs_macros.h>
+
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_nodeattrib.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_target_stat.h"
+#include "iscsi_target_configfs.h"
+
+struct target_fabric_configfs *lio_target_fabric_configfs;
+
+struct lio_target_configfs_attribute {
+	struct configfs_attribute attr;
+	ssize_t (*show)(void *, char *);
+	ssize_t (*store)(void *, const char *, size_t);
+};
+
+struct iscsi_portal_group *lio_get_tpg_from_tpg_item(
+	struct config_item *item,
+	struct iscsi_tiqn **tiqn_out)
+{
+	struct se_portal_group *se_tpg = container_of(to_config_group(item),
+					struct se_portal_group, tpg_group);
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+	int ret;
+
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_portal_group "
+			"pointer\n");
+		return NULL;
+	}
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return NULL;
+
+	*tiqn_out = tpg->tpg_tiqn;
+	return tpg;
+}
+
+/* Start items for lio_target_portal_cit */
+
+static ssize_t lio_target_np_show_sctp(
+	struct se_tpg_np *se_tpg_np,
+	char *page)
+{
+	struct iscsi_tpg_np *tpg_np = container_of(se_tpg_np,
+				struct iscsi_tpg_np, se_tpg_np);
+	struct iscsi_tpg_np *tpg_np_sctp;
+	ssize_t rb;
+
+	tpg_np_sctp = iscsi_tpg_locate_child_np(tpg_np, ISCSI_SCTP_TCP);
+	if ((tpg_np_sctp))
+		rb = sprintf(page, "1\n");
+	else
+		rb = sprintf(page, "0\n");
+
+	return rb;
+}
+
+static ssize_t lio_target_np_store_sctp(
+	struct se_tpg_np *se_tpg_np,
+	const char *page,
+	size_t count)
+{
+	struct iscsi_np *np;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np = container_of(se_tpg_np,
+				struct iscsi_tpg_np, se_tpg_np);
+	struct iscsi_tpg_np *tpg_np_sctp = NULL;
+	struct iscsi_np_addr np_addr;
+	char *endptr;
+	u32 op;
+	int ret;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		printk(KERN_ERR "Illegal value for sctp attribute: %u\n", op);
+		return -EINVAL;
+	}
+	np = tpg_np->tpg_np;
+	if (!(np)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_np from"
+				" struct iscsi_tpg_np\n");
+		return -EINVAL;
+	}
+
+	tpg = tpg_np->tpg;
+	if (iscsi_get_tpg(tpg) < 0)
+		return -EINVAL;
+
+	if (op) {
+		memset((void *)&np_addr, 0, sizeof(struct iscsi_np_addr));
+		if (np->np_flags & NPF_NET_IPV6)
+			snprintf(np_addr.np_ipv6, IPV6_ADDRESS_SPACE,
+				"%s", np->np_ipv6);
+		else
+			np_addr.np_ipv4 = np->np_ipv4;
+		np_addr.np_flags = np->np_flags;
+		np_addr.np_port = np->np_port;
+
+		tpg_np_sctp = iscsi_tpg_add_network_portal(tpg, &np_addr,
+					tpg_np, ISCSI_SCTP_TCP);
+		if (!(tpg_np_sctp) || IS_ERR(tpg_np_sctp))
+			goto out;
+	} else {
+		tpg_np_sctp = iscsi_tpg_locate_child_np(tpg_np, ISCSI_SCTP_TCP);
+		if (!(tpg_np_sctp))
+			goto out;
+
+		ret = iscsi_tpg_del_network_portal(tpg, tpg_np_sctp);
+		if (ret < 0)
+			goto out;
+	}
+
+	iscsi_put_tpg(tpg);
+	return count;
+out:
+	iscsi_put_tpg(tpg);
+	return -EINVAL;
+}
+
+TF_NP_BASE_ATTR(lio_target, sctp, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_portal_attrs[] = {
+	&lio_target_np_sctp.attr,
+	NULL,
+};
+
+/* Stop items for lio_target_portal_cit */
+
+/* Start items for lio_target_np_cit */
+
+#define MAX_PORTAL_LEN		256
+
+struct se_tpg_np *lio_target_call_addnptotpg(
+	struct se_portal_group *se_tpg,
+	struct config_group *group,
+	const char *name)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np;
+	char *str, *str2, *end_ptr, *ip_str, *port_str;
+	struct iscsi_np_addr np_addr;
+	u32 ipv4 = 0;
+	int ret;
+	char buf[MAX_PORTAL_LEN];
+
+	if (strlen(name) > MAX_PORTAL_LEN) {
+		printk(KERN_ERR "strlen(name): %d exceeds MAX_PORTAL_LEN: %d\n",
+			(int)strlen(name), MAX_PORTAL_LEN);
+		return ERR_PTR(-EOVERFLOW);
+	}
+	memset(buf, 0, MAX_PORTAL_LEN);
+	snprintf(buf, MAX_PORTAL_LEN, "%s", name);
+
+	memset((void *)&np_addr, 0, sizeof(struct iscsi_np_addr));
+
+	str = strstr(buf, "[");
+	if ((str)) {
+		str2 = strstr(str, "]");
+		if (!(str2)) {
+			printk(KERN_ERR "Unable to locate trailing \"]\""
+				" in IPv6 iSCSI network portal address\n");
+			return ERR_PTR(-EINVAL);
+		}
+		str++; /* Skip over leading "[" */
+		*str2 = '\0'; /* Terminate the IPv6 address */
+		str2 += 1; /* Skip over the "]" */
+		port_str = strstr(str2, ":");
+		if (!(port_str)) {
+			printk(KERN_ERR "Unable to locate \":port\""
+				" in IPv6 iSCSI network portal address\n");
+			return ERR_PTR(-EINVAL);
+		}
+		*port_str = '\0'; /* Terminate string for IP */
+		port_str += 1; /* Skip over ":" */
+		np_addr.np_port = simple_strtoul(port_str, &end_ptr, 0);
+
+		snprintf(np_addr.np_ipv6, IPV6_ADDRESS_SPACE, "%s", str);
+		np_addr.np_flags |= NPF_NET_IPV6;
+	} else {
+		ip_str = &buf[0];
+		port_str = strstr(ip_str, ":");
+		if (!(port_str)) {
+			printk(KERN_ERR "Unable to locate \":port\""
+				" in IPv4 iSCSI network portal address\n");
+			return ERR_PTR(-EINVAL);
+		}
+		*port_str = '\0'; /* Terminate string for IP */
+		port_str += 1; /* Skip over ":" */
+		np_addr.np_port = simple_strtoul(port_str, &end_ptr, 0);
+
+		ipv4 = in_aton(ip_str);
+		np_addr.np_ipv4 = htonl(ipv4);
+		np_addr.np_flags |= NPF_NET_IPV4;
+	}
+	tpg = container_of(se_tpg, struct iscsi_portal_group, tpg_se_tpg);
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return ERR_PTR(-EINVAL);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> %s TPGT: %hu"
+		" PORTAL: %s\n",
+		config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
+		tpg->tpgt, name);
+	/*
+	 * Assume ISCSI_TCP by default.  Other network portals for other
+	 * iSCSI fabrics:
+	 *
+	 * Traditional iSCSI over SCTP (initial support)
+	 * iSER/TCP (TODO, hardware available)
+	 * iSER/SCTP (TODO, software emulation with osc-iwarp)
+	 * iSER/IB (TODO, hardware available)
+	 *
+	 * can be enabled with attributes under
+	 * /sys/kernel/config/target/iscsi/$IQN/$TPG/np/$IP:$PORT/
+	 *
+	 */
+	tpg_np = iscsi_tpg_add_network_portal(tpg, &np_addr, NULL, ISCSI_TCP);
+	if (IS_ERR(tpg_np)) {
+		iscsi_put_tpg(tpg);
+		return ERR_PTR(PTR_ERR(tpg_np));
+	}
+	printk(KERN_INFO "LIO_Target_ConfigFS: addnptotpg done!\n");
+
+	iscsi_put_tpg(tpg);
+	return &tpg_np->se_tpg_np;
+}
+
+static void lio_target_call_delnpfromtpg(
+	struct se_tpg_np *se_tpg_np)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np;
+	struct se_portal_group *se_tpg;
+	int ret = 0;
+
+	tpg_np = container_of(se_tpg_np, struct iscsi_tpg_np, se_tpg_np);
+	tpg = tpg_np->tpg;
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return;
+
+	se_tpg = &tpg->tpg_se_tpg;
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> %s TPGT: %hu"
+		" PORTAL: %s\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
+		tpg->tpgt, config_item_name(&se_tpg_np->tpg_np_group.cg_item));
+
+	ret = iscsi_tpg_del_network_portal(tpg, tpg_np);
+	if (ret < 0)
+		goto out;
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: delnpfromtpg done!\n");
+out:
+	iscsi_put_tpg(tpg);
+}
+
+/* End items for lio_target_np_cit */
+
+/* Start items for lio_target_nacl_attrib_cit */
+
+#define DEF_NACL_ATTRIB(name)						\
+static ssize_t iscsi_nacl_attrib_show_##name(				\
+	struct se_node_acl *se_nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_node_acl *nacl = container_of(se_nacl, struct iscsi_node_acl, \
+					se_node_acl);			\
+	ssize_t rb;							\
+									\
+	rb = sprintf(page, "%u\n", ISCSI_NODE_ATTRIB(nacl)->name);	\
+	return rb;							\
+}									\
+									\
+static ssize_t iscsi_nacl_attrib_store_##name(				\
+	struct se_node_acl *se_nacl,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_node_acl *nacl = container_of(se_nacl, struct iscsi_node_acl, \
+					se_node_acl);			\
+	char *endptr;							\
+	u32 val;							\
+	int ret;							\
+									\
+	val = simple_strtoul(page, &endptr, 0);				\
+	ret = iscsi_na_##name(nacl, val);				\
+	if (ret < 0)							\
+		return ret;						\
+									\
+	return count;							\
+}
+
+#define NACL_ATTR(_name, _mode) TF_NACL_ATTRIB_ATTR(iscsi, _name, _mode);
+/*
+ * Define iscsi_node_attrib_s_dataout_timeout
+ */
+DEF_NACL_ATTRIB(dataout_timeout);
+NACL_ATTR(dataout_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_dataout_timeout_retries
+ */
+DEF_NACL_ATTRIB(dataout_timeout_retries);
+NACL_ATTR(dataout_timeout_retries, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_default_erl
+ */
+DEF_NACL_ATTRIB(default_erl);
+NACL_ATTR(default_erl, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_nopin_timeout
+ */
+DEF_NACL_ATTRIB(nopin_timeout);
+NACL_ATTR(nopin_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_nopin_response_timeout
+ */
+DEF_NACL_ATTRIB(nopin_response_timeout);
+NACL_ATTR(nopin_response_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_random_datain_pdu_offsets
+ */
+DEF_NACL_ATTRIB(random_datain_pdu_offsets);
+NACL_ATTR(random_datain_pdu_offsets, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_random_datain_seq_offsets
+ */
+DEF_NACL_ATTRIB(random_datain_seq_offsets);
+NACL_ATTR(random_datain_seq_offsets, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_random_r2t_offsets
+ */
+DEF_NACL_ATTRIB(random_r2t_offsets);
+NACL_ATTR(random_r2t_offsets, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_nacl_attrib_attrs[] = {
+	&iscsi_nacl_attrib_dataout_timeout.attr,
+	&iscsi_nacl_attrib_dataout_timeout_retries.attr,
+	&iscsi_nacl_attrib_default_erl.attr,
+	&iscsi_nacl_attrib_nopin_timeout.attr,
+	&iscsi_nacl_attrib_nopin_response_timeout.attr,
+	&iscsi_nacl_attrib_random_datain_pdu_offsets.attr,
+	&iscsi_nacl_attrib_random_datain_seq_offsets.attr,
+	&iscsi_nacl_attrib_random_r2t_offsets.attr,
+	NULL,
+};
+
+/* End items for lio_target_nacl_attrib_cit */
+
+/* Start items for lio_target_nacl_auth_cit */
+
+#define __DEF_NACL_AUTH_STR(prefix, name, flags)			\
+static ssize_t __iscsi_##prefix##_show_##name(				\
+	struct iscsi_node_acl *nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_node_auth *auth = &nacl->node_auth;		\
+	ssize_t rb;							\
+									\
+	if (!capable(CAP_SYS_ADMIN))					\
+		return -EPERM;						\
+	rb = snprintf(page, PAGE_SIZE, "%s\n", auth->name);		\
+	return rb;							\
+}									\
+									\
+static ssize_t __iscsi_##prefix##_store_##name(				\
+	struct iscsi_node_acl *nacl,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_node_auth *auth = &nacl->node_auth;		\
+									\
+	if (!capable(CAP_SYS_ADMIN))					\
+		return -EPERM;						\
+									\
+	snprintf(auth->name, PAGE_SIZE, "%s", page);			\
+	if (!(strncmp("NULL", auth->name, 4)))				\
+		auth->naf_flags &= ~flags;				\
+	else								\
+		auth->naf_flags |= flags;				\
+									\
+	if ((auth->naf_flags & NAF_USERID_IN_SET) &&			\
+	    (auth->naf_flags & NAF_PASSWORD_IN_SET))			\
+		auth->authenticate_target = 1;				\
+	else								\
+		auth->authenticate_target = 0;				\
+									\
+	return count;							\
+}
+
+#define __DEF_NACL_AUTH_INT(prefix, name)				\
+static ssize_t __iscsi_##prefix##_show_##name(				\
+	struct iscsi_node_acl *nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_node_auth *auth = &nacl->node_auth;		\
+	ssize_t rb;							\
+									\
+	if (!capable(CAP_SYS_ADMIN))					\
+		return -EPERM;						\
+									\
+	rb = snprintf(page, PAGE_SIZE, "%d\n", auth->name);		\
+	return rb;							\
+}
+
+#define DEF_NACL_AUTH_STR(name, flags)					\
+	__DEF_NACL_AUTH_STR(nacl_auth, name, flags)			\
+static ssize_t iscsi_nacl_auth_show_##name(				\
+	struct se_node_acl *nacl,					\
+	char *page)							\
+{									\
+	return __iscsi_nacl_auth_show_##name(container_of(nacl,		\
+			struct iscsi_node_acl, se_node_acl), page);		\
+}									\
+static ssize_t iscsi_nacl_auth_store_##name(				\
+	struct se_node_acl *nacl,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	return __iscsi_nacl_auth_store_##name(container_of(nacl,	\
+			struct iscsi_node_acl, se_node_acl), page, count);	\
+}
+
+#define DEF_NACL_AUTH_INT(name)						\
+	__DEF_NACL_AUTH_INT(nacl_auth, name)				\
+static ssize_t iscsi_nacl_auth_show_##name(				\
+	struct se_node_acl *nacl,					\
+	char *page)							\
+{									\
+	return __iscsi_nacl_auth_show_##name(container_of(nacl,		\
+			struct iscsi_node_acl, se_node_acl), page);		\
+}
+
+#define AUTH_ATTR(_name, _mode)	TF_NACL_AUTH_ATTR(iscsi, _name, _mode);
+#define AUTH_ATTR_RO(_name) TF_NACL_AUTH_ATTR_RO(iscsi, _name);
+
+/*
+ * One-way authentication userid
+ */
+DEF_NACL_AUTH_STR(userid, NAF_USERID_SET);
+AUTH_ATTR(userid, S_IRUGO | S_IWUSR);
+/*
+ * One-way authentication password
+ */
+DEF_NACL_AUTH_STR(password, NAF_PASSWORD_SET);
+AUTH_ATTR(password, S_IRUGO | S_IWUSR);
+/*
+ * Enforce mutual authentication
+ */
+DEF_NACL_AUTH_INT(authenticate_target);
+AUTH_ATTR_RO(authenticate_target);
+/*
+ * Mutual authentication userid
+ */
+DEF_NACL_AUTH_STR(userid_mutual, NAF_USERID_IN_SET);
+AUTH_ATTR(userid_mutual, S_IRUGO | S_IWUSR);
+/*
+ * Mutual authentication password
+ */
+DEF_NACL_AUTH_STR(password_mutual, NAF_PASSWORD_IN_SET);
+AUTH_ATTR(password_mutual, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_nacl_auth_attrs[] = {
+	&iscsi_nacl_auth_userid.attr,
+	&iscsi_nacl_auth_password.attr,
+	&iscsi_nacl_auth_authenticate_target.attr,
+	&iscsi_nacl_auth_userid_mutual.attr,
+	&iscsi_nacl_auth_password_mutual.attr,
+	NULL,
+};
+
+/* End items for lio_target_nacl_auth_cit */
+
+/* Start items for lio_target_nacl_param_cit */
+
+#define DEF_NACL_PARAM(name)						\
+static ssize_t iscsi_nacl_param_show_##name(				\
+	struct se_node_acl *se_nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_session *sess;						\
+	struct se_session *se_sess;						\
+	ssize_t rb;							\
+									\
+	spin_lock_bh(&se_nacl->nacl_sess_lock);				\
+	se_sess = se_nacl->nacl_sess;					\
+	if (!(se_sess)) {						\
+		rb = snprintf(page, PAGE_SIZE,				\
+			"No Active iSCSI Session\n");			\
+	} else {							\
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;	\
+		rb = snprintf(page, PAGE_SIZE, "%u\n",			\
+			(u32)SESS_OPS(sess)->name);			\
+	}								\
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);			\
+									\
+	return rb;							\
+}
+
+#define NACL_PARAM_ATTR(_name) TF_NACL_PARAM_ATTR_RO(iscsi, _name);
+
+DEF_NACL_PARAM(MaxConnections);
+NACL_PARAM_ATTR(MaxConnections);
+
+DEF_NACL_PARAM(InitialR2T);
+NACL_PARAM_ATTR(InitialR2T);
+
+DEF_NACL_PARAM(ImmediateData);
+NACL_PARAM_ATTR(ImmediateData);
+
+DEF_NACL_PARAM(MaxBurstLength);
+NACL_PARAM_ATTR(MaxBurstLength);
+
+DEF_NACL_PARAM(FirstBurstLength);
+NACL_PARAM_ATTR(FirstBurstLength);
+
+DEF_NACL_PARAM(DefaultTime2Wait);
+NACL_PARAM_ATTR(DefaultTime2Wait);
+
+DEF_NACL_PARAM(DefaultTime2Retain);
+NACL_PARAM_ATTR(DefaultTime2Retain);
+
+DEF_NACL_PARAM(MaxOutstandingR2T);
+NACL_PARAM_ATTR(MaxOutstandingR2T);
+
+DEF_NACL_PARAM(DataPDUInOrder);
+NACL_PARAM_ATTR(DataPDUInOrder);
+
+DEF_NACL_PARAM(DataSequenceInOrder);
+NACL_PARAM_ATTR(DataSequenceInOrder);
+
+DEF_NACL_PARAM(ErrorRecoveryLevel);
+NACL_PARAM_ATTR(ErrorRecoveryLevel);
+
+static struct configfs_attribute *lio_target_nacl_param_attrs[] = {
+	&iscsi_nacl_param_MaxConnections.attr,
+	&iscsi_nacl_param_InitialR2T.attr,
+	&iscsi_nacl_param_ImmediateData.attr,
+	&iscsi_nacl_param_MaxBurstLength.attr,
+	&iscsi_nacl_param_FirstBurstLength.attr,
+	&iscsi_nacl_param_DefaultTime2Wait.attr,
+	&iscsi_nacl_param_DefaultTime2Retain.attr,
+	&iscsi_nacl_param_MaxOutstandingR2T.attr,
+	&iscsi_nacl_param_DataPDUInOrder.attr,
+	&iscsi_nacl_param_DataSequenceInOrder.attr,
+	&iscsi_nacl_param_ErrorRecoveryLevel.attr,
+	NULL,
+};
+
+/* End items for lio_target_nacl_param_cit */
+
+/* Start items for lio_target_acl_cit */
+
+static ssize_t lio_target_nacl_show_info(
+	struct se_node_acl *se_nacl,
+	char *page)
+{
+	struct iscsi_session *sess;
+	struct iscsi_conn *conn;
+	struct se_session *se_sess;
+	unsigned char *ip, buf_ipv4[IPV4_BUF_SIZE];
+	ssize_t rb = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (!(se_sess))
+		rb += sprintf(page+rb, "No active iSCSI Session for Initiator"
+			" Endpoint: %s\n", se_nacl->initiatorname);
+	else {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+		if (SESS_OPS(sess)->InitiatorName)
+			rb += sprintf(page+rb, "InitiatorName: %s\n",
+				SESS_OPS(sess)->InitiatorName);
+		if (SESS_OPS(sess)->InitiatorAlias)
+			rb += sprintf(page+rb, "InitiatorAlias: %s\n",
+				SESS_OPS(sess)->InitiatorAlias);
+
+		rb += sprintf(page+rb, "LIO Session ID: %u   "
+			"ISID: 0x%02x %02x %02x %02x %02x %02x  "
+			"TSIH: %hu  ", sess->sid,
+			sess->isid[0], sess->isid[1], sess->isid[2],
+			sess->isid[3], sess->isid[4], sess->isid[5],
+			sess->tsih);
+		rb += sprintf(page+rb, "SessionType: %s\n",
+				(SESS_OPS(sess)->SessionType) ?
+				"Discovery" : "Normal");
+		rb += sprintf(page+rb, "Session State: ");
+		switch (sess->session_state) {
+		case TARG_SESS_STATE_FREE:
+			rb += sprintf(page+rb, "TARG_SESS_FREE\n");
+			break;
+		case TARG_SESS_STATE_ACTIVE:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_ACTIVE\n");
+			break;
+		case TARG_SESS_STATE_LOGGED_IN:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_LOGGED_IN\n");
+			break;
+		case TARG_SESS_STATE_FAILED:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_FAILED\n");
+			break;
+		case TARG_SESS_STATE_IN_CONTINUE:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_IN_CONTINUE\n");
+			break;
+		default:
+			rb += sprintf(page+rb, "ERROR: Unknown Session"
+					" State!\n");
+			break;
+		}
+
+		rb += sprintf(page+rb, "---------------------[iSCSI Session"
+				" Values]-----------------------\n");
+		rb += sprintf(page+rb, "  CmdSN/WR  :  CmdSN/WC  :  ExpCmdSN"
+				"  :  MaxCmdSN  :     ITT    :     TTT\n");
+		rb += sprintf(page+rb, " 0x%08x   0x%08x   0x%08x   0x%08x"
+				"   0x%08x   0x%08x\n",
+			sess->cmdsn_window,
+			(sess->max_cmd_sn - sess->exp_cmd_sn) + 1,
+			sess->exp_cmd_sn, sess->max_cmd_sn,
+			sess->init_task_tag, sess->targ_xfer_tag);
+		rb += sprintf(page+rb, "----------------------[iSCSI"
+				" Connections]-------------------------\n");
+
+		spin_lock(&sess->conn_lock);
+		list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+			rb += sprintf(page+rb, "CID: %hu  Connection"
+					" State: ", conn->cid);
+			switch (conn->conn_state) {
+			case TARG_CONN_STATE_FREE:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_FREE\n");
+				break;
+			case TARG_CONN_STATE_XPT_UP:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_XPT_UP\n");
+				break;
+			case TARG_CONN_STATE_IN_LOGIN:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_IN_LOGIN\n");
+				break;
+			case TARG_CONN_STATE_LOGGED_IN:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_LOGGED_IN\n");
+				break;
+			case TARG_CONN_STATE_IN_LOGOUT:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_IN_LOGOUT\n");
+				break;
+			case TARG_CONN_STATE_LOGOUT_REQUESTED:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_LOGOUT_REQUESTED\n");
+				break;
+			case TARG_CONN_STATE_CLEANUP_WAIT:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_CLEANUP_WAIT\n");
+				break;
+			default:
+				rb += sprintf(page+rb,
+					"ERROR: Unknown Connection State!\n");
+				break;
+			}
+
+			if (conn->net_size == IPV6_ADDRESS_SPACE)
+				ip = &conn->ipv6_login_ip[0];
+			else {
+				iscsi_ntoa2(buf_ipv4, conn->login_ip);
+				ip = &buf_ipv4[0];
+			}
+			rb += sprintf(page+rb, "   Address %s %s", ip,
+				(conn->network_transport == ISCSI_TCP) ?
+				"TCP" : "SCTP");
+			rb += sprintf(page+rb, "  StatSN: 0x%08x\n",
+				conn->stat_sn);
+		}
+		spin_unlock(&sess->conn_lock);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return rb;
+}
+
+TF_NACL_BASE_ATTR_RO(lio_target, info);
+
+static ssize_t lio_target_nacl_show_cmdsn_depth(
+	struct se_node_acl *se_nacl,
+	char *page)
+{
+	return sprintf(page, "%u\n", se_nacl->queue_depth);
+}
+
+static ssize_t lio_target_nacl_store_cmdsn_depth(
+	struct se_node_acl *se_nacl,
+	const char *page,
+	size_t count)
+{
+	struct se_portal_group *se_tpg = se_nacl->se_tpg;
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	struct config_item *acl_ci, *tpg_ci, *wwn_ci;
+	char *endptr;
+	u32 cmdsn_depth = 0;
+	int ret = 0;
+
+	cmdsn_depth = simple_strtoul(page, &endptr, 0);
+	if (cmdsn_depth > TA_DEFAULT_CMDSN_DEPTH_MAX) {
+		printk(KERN_ERR "Passed cmdsn_depth: %u exceeds"
+			" TA_DEFAULT_CMDSN_DEPTH_MAX: %u\n", cmdsn_depth,
+			TA_DEFAULT_CMDSN_DEPTH_MAX);
+		return -EINVAL;
+	}
+	acl_ci = &se_nacl->acl_group.cg_item;
+	if (!(acl_ci)) {
+		printk(KERN_ERR "Unable to locatel acl_ci\n");
+		return -EINVAL;
+	}
+	tpg_ci = &acl_ci->ci_parent->ci_group->cg_item;
+	if (!(tpg_ci)) {
+		printk(KERN_ERR "Unable to locate tpg_ci\n");
+		return -EINVAL;
+	}
+	wwn_ci = &tpg_ci->ci_group->cg_item;
+	if (!(wwn_ci)) {
+		printk(KERN_ERR "Unable to locate config_item wwn_ci\n");
+		return -EINVAL;
+	}
+
+	if (iscsi_get_tpg(tpg) < 0)
+		return -EINVAL;
+	/*
+	 * iscsi_tpg_set_initiator_node_queue_depth() assumes force=1
+	 */
+	ret = iscsi_tpg_set_initiator_node_queue_depth(tpg,
+				config_item_name(acl_ci), cmdsn_depth, 1);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: %s/%s Set CmdSN Window: %u for"
+		"InitiatorName: %s\n", config_item_name(wwn_ci),
+		config_item_name(tpg_ci), cmdsn_depth,
+		config_item_name(acl_ci));
+
+	iscsi_put_tpg(tpg);
+	return (!ret) ? count : (ssize_t)ret;
+}
+
+TF_NACL_BASE_ATTR(lio_target, cmdsn_depth, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_initiator_attrs[] = {
+	&lio_target_nacl_info.attr,
+	&lio_target_nacl_cmdsn_depth.attr,
+	NULL,
+};
+
+static struct se_node_acl *lio_target_make_nodeacl(
+	struct se_portal_group *se_tpg,
+	struct config_group *group,
+	const char *name)
+{
+	struct config_group *stats_cg;
+	struct iscsi_node_acl *acl;
+	struct se_node_acl *se_nacl_new, *se_nacl;
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	u32 cmdsn_depth;
+
+	se_nacl_new = lio_tpg_alloc_fabric_acl(se_tpg);
+	if (!(se_nacl_new))
+		return ERR_PTR(-ENOMEM);
+
+	acl = container_of(se_nacl_new, struct iscsi_node_acl,
+				se_node_acl);
+
+	cmdsn_depth = ISCSI_TPG_ATTRIB(tpg)->default_cmdsn_depth;
+	/*
+	 * se_nacl_new may be released by core_tpg_add_initiator_node_acl()
+	 * when converting a NodeACL from demo mode -> explicit
+	 */
+	se_nacl = core_tpg_add_initiator_node_acl(se_tpg, se_nacl_new,
+				name, cmdsn_depth);
+	if (IS_ERR(se_nacl))
+		return ERR_PTR(PTR_ERR(se_nacl));
+
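+	/*
+	 * Attach the iscsi_sess_stats group from iscsi_target_stat.c below
+	 * this NodeACL's TCM provided fabric statistics group.
+	 */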
+	stats_cg = &acl->se_node_acl.acl_fabric_stat_group;
+
+	stats_cg->default_groups = kzalloc(sizeof(struct config_group *) * 2,
+				GFP_KERNEL);
+	if (!stats_cg->default_groups) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" stats_cg->default_groups\n");
+		core_tpg_del_initiator_node_acl(se_tpg, se_nacl, 1);
+		kfree(acl);	
+		return ERR_PTR(-ENOMEM);
+	}
+
+	stats_cg->default_groups[0] = &NODE_STAT_GRPS(acl)->iscsi_sess_stats_group;
+	stats_cg->default_groups[1] = NULL;
+	config_group_init_type_name(&NODE_STAT_GRPS(acl)->iscsi_sess_stats_group,
+			"iscsi_sess_stats", &iscsi_stat_sess_cit);
+
+	return se_nacl;
+}
+
+static void lio_target_drop_nodeacl(
+	struct se_node_acl *se_nacl)
+{
+	struct se_portal_group *se_tpg = se_nacl->se_tpg;
+	struct iscsi_node_acl *acl = container_of(se_nacl,
+			struct iscsi_node_acl, se_node_acl);
+	struct config_item *df_item;
+	struct config_group *stats_cg;
+	int i;
+
+	stats_cg = &acl->se_node_acl.acl_fabric_stat_group;
+	for (i = 0; stats_cg->default_groups[i]; i++) {
+		df_item = &stats_cg->default_groups[i]->cg_item;
+		stats_cg->default_groups[i] = NULL;
+		config_item_put(df_item);
+	}
+	kfree(stats_cg->default_groups);
+
+	core_tpg_del_initiator_node_acl(se_tpg, se_nacl, 1);
+	kfree(acl);
+}
+
+/* End items for lio_target_acl_cit */
+
+/* Start items for lio_target_tpg_attrib_cit */
+
+#define DEF_TPG_ATTRIB(name)						\
+									\
+static ssize_t iscsi_tpg_attrib_show_##name(				\
+	struct se_portal_group *se_tpg,				\
+	char *page)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	ssize_t rb;							\
+									\
+	if (iscsi_get_tpg(tpg) < 0)					\
+		return -EINVAL;						\
+									\
+	rb = sprintf(page, "%u\n", ISCSI_TPG_ATTRIB(tpg)->name);	\
+	iscsi_put_tpg(tpg);						\
+	return rb;							\
+}									\
+									\
+static ssize_t iscsi_tpg_attrib_store_##name(				\
+	struct se_portal_group *se_tpg,				\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	char *endptr;							\
+	u32 val;							\
+	int ret;							\
+									\
+	if (iscsi_get_tpg(tpg) < 0)					\
+		return -EINVAL;						\
+									\
+	val = simple_strtoul(page, &endptr, 0);				\
+	ret = iscsi_ta_##name(tpg, val);				\
+	if (ret < 0)							\
+		goto out;						\
+									\
+	iscsi_put_tpg(tpg);						\
+	return count;							\
+out:									\
+	iscsi_put_tpg(tpg);						\
+	return ret;							\
+}
+
+#define TPG_ATTR(_name, _mode) TF_TPG_ATTRIB_ATTR(iscsi, _name, _mode);
+
+/*
+ * Define iscsi_tpg_attrib_s_authentication
+ */
+DEF_TPG_ATTRIB(authentication);
+TPG_ATTR(authentication, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_login_timeout
+ */
+DEF_TPG_ATTRIB(login_timeout);
+TPG_ATTR(login_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_netif_timeout
+ */
+DEF_TPG_ATTRIB(netif_timeout);
+TPG_ATTR(netif_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_generate_node_acls
+ */
+DEF_TPG_ATTRIB(generate_node_acls);
+TPG_ATTR(generate_node_acls, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_default_cmdsn_depth
+ */
+DEF_TPG_ATTRIB(default_cmdsn_depth);
+TPG_ATTR(default_cmdsn_depth, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_cache_dynamic_acls
+ */
+DEF_TPG_ATTRIB(cache_dynamic_acls);
+TPG_ATTR(cache_dynamic_acls, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_demo_mode_write_protect
+ */
+DEF_TPG_ATTRIB(demo_mode_write_protect);
+TPG_ATTR(demo_mode_write_protect, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_prod_mode_write_protect
+ */
+DEF_TPG_ATTRIB(prod_mode_write_protect);
+TPG_ATTR(prod_mode_write_protect, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_crc32c_x86_offload
+ */
+DEF_TPG_ATTRIB(crc32c_x86_offload);
+TPG_ATTR(crc32c_x86_offload, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_tpg_attrib_attrs[] = {
+	&iscsi_tpg_attrib_authentication.attr,
+	&iscsi_tpg_attrib_login_timeout.attr,
+	&iscsi_tpg_attrib_netif_timeout.attr,
+	&iscsi_tpg_attrib_generate_node_acls.attr,
+	&iscsi_tpg_attrib_default_cmdsn_depth.attr,
+	&iscsi_tpg_attrib_cache_dynamic_acls.attr,
+	&iscsi_tpg_attrib_demo_mode_write_protect.attr,
+	&iscsi_tpg_attrib_prod_mode_write_protect.attr,
+	&iscsi_tpg_attrib_crc32c_x86_offload.attr,
+	NULL,
+};
+
+/* End items for lio_target_tpg_attrib_cit */
+
+/* Start items for lio_target_tpg_param_cit */
+
+#define DEF_TPG_PARAM(name)						\
+static ssize_t iscsi_tpg_param_show_##name(				\
+	struct se_portal_group *se_tpg,				\
+	char *page)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	struct iscsi_param *param;						\
+	ssize_t rb;							\
+									\
+	if (iscsi_get_tpg(tpg) < 0)					\
+		return -EINVAL;						\
+									\
+	param = iscsi_find_param_from_key(__stringify(name),		\
+				tpg->param_list);			\
+	if (!(param)) {							\
+		iscsi_put_tpg(tpg);					\
+		return -EINVAL;						\
+	}								\
+	rb = snprintf(page, PAGE_SIZE, "%s\n", param->value);		\
+									\
+	iscsi_put_tpg(tpg);						\
+	return rb;							\
+}									\
+static ssize_t iscsi_tpg_param_store_##name(				\
+	struct se_portal_group *se_tpg,				\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	char *buf;							\
+	int ret;							\
+									\
+	buf = kzalloc(PAGE_SIZE, GFP_KERNEL);				\
+	if (!(buf))							\
+		return -ENOMEM;						\
+	snprintf(buf, PAGE_SIZE, "%s=%s", __stringify(name), page);	\
+	buf[strlen(buf)-1] = '\0'; /* Kill newline */			\
+									\
+	if (iscsi_get_tpg(tpg) < 0) {					\
+		kfree(buf);						\
+		return -EINVAL;						\
+	}								\
+									\
+	ret = iscsi_change_param_value(buf, SENDER_TARGET,		\
+				tpg->param_list, 1);			\
+	if (ret < 0)							\
+		goto out;						\
+									\
+	kfree(buf);							\
+	iscsi_put_tpg(tpg);						\
+	return count;							\
+out:									\
+	kfree(buf);							\
+	iscsi_put_tpg(tpg);						\
+	return -EINVAL;						\
+}
+
+#define TPG_PARAM_ATTR(_name, _mode) TF_TPG_PARAM_ATTR(iscsi, _name, _mode);
+
+DEF_TPG_PARAM(AuthMethod);
+TPG_PARAM_ATTR(AuthMethod, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(HeaderDigest);
+TPG_PARAM_ATTR(HeaderDigest, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DataDigest);
+TPG_PARAM_ATTR(DataDigest, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxConnections);
+TPG_PARAM_ATTR(MaxConnections, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(TargetAlias);
+TPG_PARAM_ATTR(TargetAlias, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(InitialR2T);
+TPG_PARAM_ATTR(InitialR2T, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(ImmediateData);
+TPG_PARAM_ATTR(ImmediateData, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxRecvDataSegmentLength);
+TPG_PARAM_ATTR(MaxRecvDataSegmentLength, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxBurstLength);
+TPG_PARAM_ATTR(MaxBurstLength, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(FirstBurstLength);
+TPG_PARAM_ATTR(FirstBurstLength, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DefaultTime2Wait);
+TPG_PARAM_ATTR(DefaultTime2Wait, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DefaultTime2Retain);
+TPG_PARAM_ATTR(DefaultTime2Retain, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxOutstandingR2T);
+TPG_PARAM_ATTR(MaxOutstandingR2T, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DataPDUInOrder);
+TPG_PARAM_ATTR(DataPDUInOrder, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DataSequenceInOrder);
+TPG_PARAM_ATTR(DataSequenceInOrder, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(ErrorRecoveryLevel);
+TPG_PARAM_ATTR(ErrorRecoveryLevel, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(IFMarker);
+TPG_PARAM_ATTR(IFMarker, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(OFMarker);
+TPG_PARAM_ATTR(OFMarker, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(IFMarkInt);
+TPG_PARAM_ATTR(IFMarkInt, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(OFMarkInt);
+TPG_PARAM_ATTR(OFMarkInt, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_tpg_param_attrs[] = {
+	&iscsi_tpg_param_AuthMethod.attr,
+	&iscsi_tpg_param_HeaderDigest.attr,
+	&iscsi_tpg_param_DataDigest.attr,
+	&iscsi_tpg_param_MaxConnections.attr,
+	&iscsi_tpg_param_TargetAlias.attr,
+	&iscsi_tpg_param_InitialR2T.attr,
+	&iscsi_tpg_param_ImmediateData.attr,
+	&iscsi_tpg_param_MaxRecvDataSegmentLength.attr,
+	&iscsi_tpg_param_MaxBurstLength.attr,
+	&iscsi_tpg_param_FirstBurstLength.attr,
+	&iscsi_tpg_param_DefaultTime2Wait.attr,
+	&iscsi_tpg_param_DefaultTime2Retain.attr,
+	&iscsi_tpg_param_MaxOutstandingR2T.attr,
+	&iscsi_tpg_param_DataPDUInOrder.attr,
+	&iscsi_tpg_param_DataSequenceInOrder.attr,
+	&iscsi_tpg_param_ErrorRecoveryLevel.attr,
+	&iscsi_tpg_param_IFMarker.attr,
+	&iscsi_tpg_param_OFMarker.attr,
+	&iscsi_tpg_param_IFMarkInt.attr,
+	&iscsi_tpg_param_OFMarkInt.attr,
+	NULL,
+};
+
+/* End items for lio_target_tpg_param_cit */
+
+/* Start items for lio_target_tpg_cit */
+
+static ssize_t lio_target_tpg_show_enable(
+	struct se_portal_group *se_tpg,
+	char *page)
+{
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	ssize_t len = 0;
+
+	spin_lock(&tpg->tpg_state_lock);
+	len = sprintf(page, "%d\n",
+			(tpg->tpg_state == TPG_STATE_ACTIVE) ? 1 : 0);
+	spin_unlock(&tpg->tpg_state_lock);
+
+	return len;
+}
+
+static ssize_t lio_target_tpg_store_enable(
+	struct se_portal_group *se_tpg,
+	const char *page,
+	size_t count)
+{
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	char *endptr;
+	u32 op;
+	int ret = 0;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		printk(KERN_ERR "Illegal value for tpg_enable: %u\n", op);
+		return -EINVAL;
+	}
+
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return -EINVAL;
+
+	if (op) {
+		ret = iscsi_tpg_enable_portal_group(tpg);
+		if (ret < 0)
+			goto out;
+	} else {
+		/*
+		 * iscsi_tpg_disable_portal_group() assumes force=1
+		 */
+		ret = iscsi_tpg_disable_portal_group(tpg, 1);
+		if (ret < 0)
+			goto out;
+	}
+
+	iscsi_put_tpg(tpg);
+	return count;
+out:
+	iscsi_put_tpg(tpg);
+	return -EINVAL;
+}
+
+TF_TPG_BASE_ATTR(lio_target, enable, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_tpg_attrs[] = {
+	&lio_target_tpg_enable.attr,
+	NULL,
+};
+
+/* End items for lio_target_tpg_cit */
+
+/* Start items for lio_target_tiqn_cit */
+
+struct se_portal_group *lio_target_tiqn_addtpg(
+	struct se_wwn *wwn,
+	struct config_group *group,
+	const char *name)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tiqn *tiqn;
+	char *tpgt_str, *end_ptr;
+	int ret = 0;
+	unsigned short int tpgt;
+
+	tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn);
+	/*
+	 * Only tpgt_# directory groups can be created below
+	 * target/iscsi/iqn.superturodiskarry/
+	*/
+	tpgt_str = strstr(name, "tpgt_");
+	if (!(tpgt_str)) {
+		printk(KERN_ERR "Unable to locate \"tpgt_#\" directory"
+				" group\n");
+		return NULL;
+	}
+	tpgt_str += 5; /* Skip ahead of "tpgt_" */
+	tpgt = (unsigned short int) simple_strtoul(tpgt_str, &end_ptr, 0);
+
+	tpg = core_alloc_portal_group(tiqn, tpgt);
+	if (!(tpg))
+		return NULL;
+
+	ret = core_tpg_register(
+			&lio_target_fabric_configfs->tf_ops,
+			wwn, &tpg->tpg_se_tpg, (void *)tpg,
+			TRANSPORT_TPG_TYPE_NORMAL);
+	if (ret < 0)
+		return NULL;
+
+	ret = iscsi_tpg_add_portal_group(tiqn, tpg);
+	if (ret != 0)
+		goto out;
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> Allocated TPG: %s\n",
+			name);	
+	return &tpg->tpg_se_tpg;
+out:
+	core_tpg_deregister(&tpg->tpg_se_tpg);
+	kmem_cache_free(lio_tpg_cache, tpg);
+	return NULL;
+}
+
+void lio_target_tiqn_deltpg(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tiqn *tiqn;
+
+	tpg = container_of(se_tpg, struct iscsi_portal_group, tpg_se_tpg);
+	tiqn = tpg->tpg_tiqn;
+	/*
+	 * iscsi_tpg_del_portal_group() assumes force=1
+	 */
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> Releasing TPG\n");
+	iscsi_tpg_del_portal_group(tiqn, tpg, 1);
+}
+
+/* End items for lio_target_tiqn_cit */
+
+/* Start LIO-Target TIQN struct config_item lio_target_cit */
+
+static ssize_t lio_target_wwn_show_attr_lio_version(
+	struct target_fabric_configfs *tf,
+	char *page)
+{
+	return sprintf(page, "Linux-iSCSI.org Target "ISCSI_VERSION""
+		" on %s/%s on "UTS_RELEASE"\n", utsname()->sysname,
+		utsname()->machine);
+}
+
+TF_WWN_ATTR_RO(lio_target, lio_version);
+
+static struct configfs_attribute *lio_target_wwn_attrs[] = {
+	&lio_target_wwn_lio_version.attr,
+	NULL,
+};
+
+struct se_wwn *lio_target_call_coreaddtiqn(
+	struct target_fabric_configfs *tf,
+	struct config_group *group,
+	const char *name)
+{
+	struct config_group *stats_cg;
+	struct iscsi_tiqn *tiqn;
+	int ret = 0;
+
+	tiqn = core_add_tiqn((unsigned char *)name, &ret);
+	if (!(tiqn))
+		return NULL;
+	/*
+	 * Setup struct iscsi_wwn_stat_grps for se_wwn->fabric_stat_group.
+	 */
+	stats_cg = &tiqn->tiqn_wwn.fabric_stat_group;
+
+	stats_cg->default_groups = kzalloc(sizeof(struct config_group *) * 6,
+				GFP_KERNEL);
+	if (!stats_cg->default_groups) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" stats_cg->default_groups\n");		
+		core_del_tiqn(tiqn);
+		return ERR_PTR(-ENOMEM);
+	}
+	
+	stats_cg->default_groups[0] = &WWN_STAT_GRPS(tiqn)->iscsi_instance_group;
+	stats_cg->default_groups[1] = &WWN_STAT_GRPS(tiqn)->iscsi_sess_err_group;
+	stats_cg->default_groups[2] = &WWN_STAT_GRPS(tiqn)->iscsi_tgt_attr_group;
+	stats_cg->default_groups[3] = &WWN_STAT_GRPS(tiqn)->iscsi_login_stats_group;
+	stats_cg->default_groups[4] = &WWN_STAT_GRPS(tiqn)->iscsi_logout_stats_group;
+	stats_cg->default_groups[5] = NULL;
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_instance_group,
+			"iscsi_instance", &iscsi_stat_instance_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_sess_err_group,
+			"iscsi_sess_err", &iscsi_stat_sess_err_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_tgt_attr_group,
+			"iscsi_tgt_attr", &iscsi_stat_tgt_attr_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_login_stats_group,
+			"iscsi_login_stats", &iscsi_stat_login_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_logout_stats_group,
+			"iscsi_logout_stats", &iscsi_stat_logout_cit);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> Allocated Node:"
+			" %s\n", name);
+	return &tiqn->tiqn_wwn;
+}
+
+void lio_target_call_coredeltiqn(
+	struct se_wwn *wwn)
+{
+	struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn);
+	struct config_item *df_item;
+	struct config_group *stats_cg;
+	int i;
+	
+	stats_cg = &tiqn->tiqn_wwn.fabric_stat_group;
+	for (i = 0; stats_cg->default_groups[i]; i++) {
+		df_item = &stats_cg->default_groups[i]->cg_item;
+		stats_cg->default_groups[i] = NULL;
+		config_item_put(df_item);
+	}
+	kfree(stats_cg->default_groups);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> %s\n",
+			tiqn->tiqn);
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> Releasing"
+			" core_del_tiqn()\n");
+	core_del_tiqn(tiqn);
+}
+
+/* End LIO-Target TIQN struct config_item lio_target_cit */
+
+/* Start lio_target_discovery_auth_cit */
+
+#define DEF_DISC_AUTH_STR(name, flags)					\
+	__DEF_NACL_AUTH_STR(disc, name, flags)				\
+static ssize_t iscsi_disc_show_##name(					\
+	struct target_fabric_configfs *tf,				\
+	char *page)							\
+{									\
+	return __iscsi_disc_show_##name(&iscsi_global->discovery_acl,	\
+		page);							\
+}									\
+static ssize_t iscsi_disc_store_##name(					\
+	struct target_fabric_configfs *tf,				\
+	const char *page,						\
+	size_t count)							\
+{									\
+	return __iscsi_disc_store_##name(&iscsi_global->discovery_acl,	\
+		page, count);						\
+}
+
+#define DEF_DISC_AUTH_INT(name)						\
+	__DEF_NACL_AUTH_INT(disc, name)					\
+static ssize_t iscsi_disc_show_##name(					\
+	struct target_fabric_configfs *tf,				\
+	char *page)							\
+{									\
+	return __iscsi_disc_show_##name(&iscsi_global->discovery_acl, 	\
+			page);						\
+}
+
+#define DISC_AUTH_ATTR(_name, _mode) TF_DISC_ATTR(iscsi, _name, _mode)
+#define DISC_AUTH_ATTR_RO(_name) TF_DISC_ATTR_RO(iscsi, _name)
+
+/*
+ * One-way authentication userid
+ */
+DEF_DISC_AUTH_STR(userid, NAF_USERID_SET);
+DISC_AUTH_ATTR(userid, S_IRUGO | S_IWUSR);
+/*
+ * One-way authentication password
+ */
+DEF_DISC_AUTH_STR(password, NAF_PASSWORD_SET);
+DISC_AUTH_ATTR(password, S_IRUGO | S_IWUSR);
+/*
+ * Enforce mutual authentication
+ */
+DEF_DISC_AUTH_INT(authenticate_target);
+DISC_AUTH_ATTR_RO(authenticate_target);
+/*
+ * Mutual authentication userid
+ */
+DEF_DISC_AUTH_STR(userid_mutual, NAF_USERID_IN_SET);
+DISC_AUTH_ATTR(userid_mutual, S_IRUGO | S_IWUSR);
+/*
+ * Mutual authentication password
+ */
+DEF_DISC_AUTH_STR(password_mutual, NAF_PASSWORD_IN_SET);
+DISC_AUTH_ATTR(password_mutual, S_IRUGO | S_IWUSR);
+
+/*
+ * enforce_discovery_auth
+ */
+static ssize_t iscsi_disc_show_enforce_discovery_auth(
+	struct target_fabric_configfs *tf,
+	char *page)
+{
+	struct iscsi_node_auth *discovery_auth = &iscsi_global->discovery_acl.node_auth;
+
+	return sprintf(page, "%d\n", discovery_auth->enforce_discovery_auth);
+}
+
+static ssize_t iscsi_disc_store_enforce_discovery_auth(
+	struct target_fabric_configfs *tf,
+	const char *page,
+	size_t count)
+{
+	struct iscsi_param *param;
+	struct iscsi_portal_group *discovery_tpg = iscsi_global->discovery_tpg;
+	char *endptr;
+	u32 op;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		printk(KERN_ERR "Illegal value for enforce_discovery_auth:"
+				" %u\n", op);
+		return -EINVAL;
+	}
+
+	if (!(discovery_tpg)) {
+		printk(KERN_ERR "iscsi_global->discovery_tpg is NULL\n");
+		return -EINVAL;
+	}
+
+	param = iscsi_find_param_from_key(AUTHMETHOD,
+				discovery_tpg->param_list);
+	if (!(param))
+		return -EINVAL;
+
+	if (op) {
+		/*
+		 * Reset the AuthMethod key to CHAP.
+		 */
+		if (iscsi_update_param_value(param, CHAP) < 0)
+			return -EINVAL;
+
+		discovery_tpg->tpg_attrib.authentication = 1;
+		iscsi_global->discovery_acl.node_auth.enforce_discovery_auth = 1;
+		printk(KERN_INFO "LIO-CORE[0] Successfully enabled"
+			" authentication enforcement for iSCSI"
+			" Discovery TPG\n");
+	} else {
+		/*
+		 * Reset the AuthMethod key to CHAP,None
+		 */
+		if (iscsi_update_param_value(param, "CHAP,None") < 0)
+			return -EINVAL;
+
+		discovery_tpg->tpg_attrib.authentication = 0;
+		iscsi_global->discovery_acl.node_auth.enforce_discovery_auth = 0;
+		printk(KERN_INFO "LIO-CORE[0] Successfully disabled"
+			" authentication enforcement for iSCSI"
+			" Discovery TPG\n");
+	}
+
+	return count;
+}
+
+DISC_AUTH_ATTR(enforce_discovery_auth, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_discovery_auth_attrs[] = {
+	&iscsi_disc_userid.attr,
+	&iscsi_disc_password.attr,
+	&iscsi_disc_authenticate_target.attr,
+	&iscsi_disc_userid_mutual.attr,
+	&iscsi_disc_password_mutual.attr,
+	&iscsi_disc_enforce_discovery_auth.attr,
+	NULL,
+};
+
+/* End lio_target_discovery_auth_cit */
+
+int iscsi_target_register_configfs(void)
+{
+	struct target_fabric_configfs *fabric;
+	int ret;
+
+	lio_target_fabric_configfs = NULL;
+	fabric = target_fabric_configfs_init(THIS_MODULE, "iscsi");
+	if (!(fabric)) {
+		printk(KERN_ERR "target_fabric_configfs_init() for"
+				" LIO-Target failed!\n");
+		return -1;
+	}
+	/*
+	 * Setup the fabric API of function pointers used by target_core_mod.
+	 */
+	fabric->tf_ops.get_fabric_name = &iscsi_get_fabric_name;
+	fabric->tf_ops.get_fabric_proto_ident = &iscsi_get_fabric_proto_ident;
+	fabric->tf_ops.tpg_get_wwn = &lio_tpg_get_endpoint_wwn;
+	fabric->tf_ops.tpg_get_tag = &lio_tpg_get_tag;
+	fabric->tf_ops.tpg_get_default_depth = &lio_tpg_get_default_depth;
+	fabric->tf_ops.tpg_get_pr_transport_id = &iscsi_get_pr_transport_id;
+	fabric->tf_ops.tpg_get_pr_transport_id_len =
+				&iscsi_get_pr_transport_id_len;
+	fabric->tf_ops.tpg_parse_pr_out_transport_id =
+				&iscsi_parse_pr_out_transport_id;
+	fabric->tf_ops.tpg_check_demo_mode = &lio_tpg_check_demo_mode;
+	fabric->tf_ops.tpg_check_demo_mode_cache =
+				&lio_tpg_check_demo_mode_cache;
+	fabric->tf_ops.tpg_check_demo_mode_write_protect =
+				&lio_tpg_check_demo_mode_write_protect;
+	fabric->tf_ops.tpg_check_prod_mode_write_protect =
+				&lio_tpg_check_prod_mode_write_protect;
+	fabric->tf_ops.tpg_alloc_fabric_acl = &lio_tpg_alloc_fabric_acl;
+	fabric->tf_ops.tpg_release_fabric_acl = &lio_tpg_release_fabric_acl;
+	fabric->tf_ops.tpg_get_inst_index = &lio_tpg_get_inst_index;
+	/*
+	 * Use our local iscsi_allocate_iovecs_for_cmd() for the extra
+	 * callback in transport_generic_new_cmd() to allocate
+	 * iscsi_cmd->iov_data[] for Linux/Net kernel sockets operations.
+	 */
+	fabric->tf_ops.alloc_cmd_iovecs = &iscsi_allocate_iovecs_for_cmd;
+	fabric->tf_ops.release_cmd_to_pool = &lio_release_cmd_to_pool;
+	fabric->tf_ops.release_cmd_direct = &lio_release_cmd_direct;
+	fabric->tf_ops.shutdown_session = &lio_tpg_shutdown_session;
+	fabric->tf_ops.close_session = &lio_tpg_close_session;
+	fabric->tf_ops.stop_session = &lio_tpg_stop_session;
+	fabric->tf_ops.fall_back_to_erl0 = &lio_tpg_fall_back_to_erl0;
+	fabric->tf_ops.sess_logged_in = &lio_sess_logged_in;
+	fabric->tf_ops.sess_get_index = &lio_sess_get_index;
+	fabric->tf_ops.sess_get_initiator_sid = &lio_sess_get_initiator_sid;
+	fabric->tf_ops.write_pending = &lio_write_pending;
+	fabric->tf_ops.write_pending_status = &lio_write_pending_status;
+	fabric->tf_ops.set_default_node_attributes =
+				&lio_set_default_node_attributes;
+	fabric->tf_ops.get_task_tag = &iscsi_get_task_tag;
+	fabric->tf_ops.get_cmd_state = &iscsi_get_cmd_state;
+	fabric->tf_ops.new_cmd_failure = &iscsi_new_cmd_failure;
+	fabric->tf_ops.queue_data_in = &lio_queue_data_in;
+	fabric->tf_ops.queue_status = &lio_queue_status;
+	fabric->tf_ops.queue_tm_rsp = &lio_queue_tm_rsp;
+	fabric->tf_ops.set_fabric_sense_len = &lio_set_fabric_sense_len;
+	fabric->tf_ops.get_fabric_sense_len = &lio_get_fabric_sense_len;
+	fabric->tf_ops.is_state_remove = &iscsi_is_state_remove;
+	fabric->tf_ops.pack_lun = &iscsi_pack_lun;
+	/*
+	 * Setup function pointers for generic logic in target_core_fabric_configfs.c
+	 */
+	fabric->tf_ops.fabric_make_wwn = &lio_target_call_coreaddtiqn;
+	fabric->tf_ops.fabric_drop_wwn = &lio_target_call_coredeltiqn;
+	fabric->tf_ops.fabric_make_tpg = &lio_target_tiqn_addtpg;
+	fabric->tf_ops.fabric_drop_tpg = &lio_target_tiqn_deltpg;
+	fabric->tf_ops.fabric_post_link	= NULL;
+	fabric->tf_ops.fabric_pre_unlink = NULL;
+	fabric->tf_ops.fabric_make_np = &lio_target_call_addnptotpg;
+	fabric->tf_ops.fabric_drop_np = &lio_target_call_delnpfromtpg;
+	fabric->tf_ops.fabric_make_nodeacl = &lio_target_make_nodeacl;
+	fabric->tf_ops.fabric_drop_nodeacl = &lio_target_drop_nodeacl;
+	/*
+	 * Setup default attribute lists for various fabric->tf_cit_tmpl
+	 * struct config_item_type's
+	 */
+	TF_CIT_TMPL(fabric)->tfc_discovery_cit.ct_attrs = lio_target_discovery_auth_attrs;
+	TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = lio_target_wwn_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = lio_target_tpg_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = lio_target_tpg_attrib_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = lio_target_tpg_param_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = lio_target_portal_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = lio_target_initiator_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = lio_target_nacl_attrib_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = lio_target_nacl_auth_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = lio_target_nacl_param_attrs;
+
+	ret = target_fabric_configfs_register(fabric);
+	if (ret < 0) {
+		printk(KERN_ERR "target_fabric_configfs_register() for"
+				" LIO-Target failed!\n");
+		target_fabric_configfs_free(fabric);
+		return -1;
+	}
+
+	lio_target_fabric_configfs = fabric;
+	printk(KERN_INFO "LIO_TARGET[0] - Set fabric ->"
+			" lio_target_fabric_configfs\n");
+	return 0;
+}
+
+void iscsi_target_deregister_configfs(void)
+{
+	if (!(lio_target_fabric_configfs))
+		return;
+	/*
+	 * Shutdown discovery sessions and disable discovery TPG
+	 */
+	if (iscsi_global->discovery_tpg)
+		iscsi_tpg_disable_portal_group(iscsi_global->discovery_tpg, 1);
+
+	target_fabric_configfs_deregister(lio_target_fabric_configfs);
+	lio_target_fabric_configfs = NULL;
+	printk(KERN_INFO "LIO_TARGET[0] - Cleared"
+				" lio_target_fabric_configfs\n");
+}
diff --git a/drivers/target/iscsi/iscsi_target_configfs.h b/drivers/target/iscsi/iscsi_target_configfs.h
new file mode 100644
index 0000000..52c5123
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_configfs.h
@@ -0,0 +1,9 @@
+#ifndef ISCSI_TARGET_CONFIGFS_H
+#define ISCSI_TARGET_CONFIGFS_H
+
+extern int iscsi_target_register_configfs(void);
+extern void iscsi_target_deregister_configfs(void);
+
+extern struct kmem_cache *lio_tpg_cache;
+
+#endif /* ISCSI_TARGET_CONFIGFS_H */
diff --git a/drivers/target/iscsi/iscsi_target_nodeattrib.c b/drivers/target/iscsi/iscsi_target_nodeattrib.c
new file mode 100644
index 0000000..23aa7e5
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nodeattrib.c
@@ -0,0 +1,300 @@
+/*******************************************************************************
+ * This file contains the main functions related to Initiator Node Attributes.
+ *
+ * Copyright (c) 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2006-2007 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_nodeattrib.h"
+
+static inline char *iscsi_na_get_initiatorname(
+	struct iscsi_node_acl *nacl)
+{
+	struct se_node_acl *se_nacl = &nacl->se_node_acl;	
+
+	return &se_nacl->initiatorname[0];
+}
+
+/*	iscsi_set_default_node_attribues():
+ *
+ *
+ */
+void iscsi_set_default_node_attribues(
+	struct iscsi_node_acl *acl)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	a->dataout_timeout = NA_DATAOUT_TIMEOUT;
+	a->dataout_timeout_retries = NA_DATAOUT_TIMEOUT_RETRIES;
+	a->nopin_timeout = NA_NOPIN_TIMEOUT;
+	a->nopin_response_timeout = NA_NOPIN_RESPONSE_TIMEOUT;
+	a->random_datain_pdu_offsets = NA_RANDOM_DATAIN_PDU_OFFSETS;
+	a->random_datain_seq_offsets = NA_RANDOM_DATAIN_SEQ_OFFSETS;
+	a->random_r2t_offsets = NA_RANDOM_R2T_OFFSETS;
+	a->default_erl = NA_DEFAULT_ERL;
+}
+
+/*	iscsi_na_dataout_timeout():
+ *
+ *
+ */
+int iscsi_na_dataout_timeout(
+	struct iscsi_node_acl *acl,
+	u32 dataout_timeout)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (dataout_timeout > NA_DATAOUT_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested DataOut Timeout %u larger than"
+			" maximum %u\n", dataout_timeout,
+			NA_DATAOUT_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (dataout_timeout < NA_DATAOUT_TIMEOUT_MIX) {
+		printk(KERN_ERR "Requested DataOut Timeout %u smaller than"
+			" minimum %u\n", dataout_timeout,
+			NA_DATAOUT_TIMEOUT_MIX);
+		return -EINVAL;
+	}
+
+	a->dataout_timeout = dataout_timeout;
+	TRACE(TRACE_NODEATTRIB, "Set DataOut Timeout to %u for Initiator Node"
+		" %s\n", a->dataout_timeout, iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_dataout_timeout_retries():
+ *
+ *
+ */
+int iscsi_na_dataout_timeout_retries(
+	struct iscsi_node_acl *acl,
+	u32 dataout_timeout_retries)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (dataout_timeout_retries > NA_DATAOUT_TIMEOUT_RETRIES_MAX) {
+		printk(KERN_ERR "Requested DataOut Timeout Retries %u larger"
+			" than maximum %u", dataout_timeout_retries,
+				NA_DATAOUT_TIMEOUT_RETRIES_MAX);
+		return -EINVAL;
+	} else if (dataout_timeout_retries < NA_DATAOUT_TIMEOUT_RETRIES_MIN) {
+		printk(KERN_ERR "Requested DataOut Timeout Retries %u smaller"
+			" than minimum %u", dataout_timeout_retries,
+				NA_DATAOUT_TIMEOUT_RETRIES_MIN);
+		return -EINVAL;
+	}
+
+	a->dataout_timeout_retries = dataout_timeout_retries;
+	TRACE(TRACE_NODEATTRIB, "Set DataOut Timeout Retries to %u for"
+		" Initiator Node %s\n", a->dataout_timeout_retries,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_nopin_timeout():
+ *
+ *
+ */
+int iscsi_na_nopin_timeout(
+	struct iscsi_node_acl *acl,
+	u32 nopin_timeout)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+	struct iscsi_session *sess;
+	struct iscsi_conn *conn;
+	struct se_node_acl *se_nacl = &a->nacl->se_node_acl;
+	struct se_session *se_sess;
+	u32 orig_nopin_timeout = a->nopin_timeout;
+
+	if (nopin_timeout > NA_NOPIN_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested NopIn Timeout %u larger than maximum"
+			" %u\n", nopin_timeout, NA_NOPIN_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if ((nopin_timeout < NA_NOPIN_TIMEOUT_MIN) &&
+		   (nopin_timeout != 0)) {
+		printk(KERN_ERR "Requested NopIn Timeout %u smaller than"
+			" minimum %u and not 0\n", nopin_timeout,
+			NA_NOPIN_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->nopin_timeout = nopin_timeout;
+	TRACE(TRACE_NODEATTRIB, "Set NopIn Timeout to %u for Initiator"
+		" Node %s\n", a->nopin_timeout,
+		iscsi_na_get_initiatorname(acl));
+	/*
+	 * Reenable disabled nopin_timeout timer for all iSCSI connections.
+	 */
+	if (!(orig_nopin_timeout)) {
+		spin_lock_bh(&se_nacl->nacl_sess_lock);
+		se_sess = se_nacl->nacl_sess;
+		if (se_sess) {
+			sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+			spin_lock(&sess->conn_lock);
+			list_for_each_entry(conn, &sess->sess_conn_list,
+					conn_list) {
+				if (conn->conn_state !=
+						TARG_CONN_STATE_LOGGED_IN)
+					continue;
+
+				spin_lock(&conn->nopin_timer_lock);
+				__iscsi_start_nopin_timer(conn);
+				spin_unlock(&conn->nopin_timer_lock);
+			}
+			spin_unlock(&sess->conn_lock);
+		}
+		spin_unlock_bh(&se_nacl->nacl_sess_lock);
+	}
+
+	return 0;
+}
+
+/*	iscsi_na_nopin_response_timeout():
+ *
+ *
+ */
+int iscsi_na_nopin_response_timeout(
+	struct iscsi_node_acl *acl,
+	u32 nopin_response_timeout)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (nopin_response_timeout > NA_NOPIN_RESPONSE_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested NopIn Response Timeout %u larger"
+			" than maximum %u\n", nopin_response_timeout,
+				NA_NOPIN_RESPONSE_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (nopin_response_timeout < NA_NOPIN_RESPONSE_TIMEOUT_MIN) {
+		printk(KERN_ERR "Requested NopIn Response Timeout %u smaller"
+			" than minimum %u\n", nopin_response_timeout,
+				NA_NOPIN_RESPONSE_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->nopin_response_timeout = nopin_response_timeout;
+	TRACE(TRACE_NODEATTRIB, "Set NopIn Response Timeout to %u for"
+		" Initiator Node %s\n", a->nopin_timeout,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_random_datain_pdu_offsets():
+ *
+ *
+ */
+int iscsi_na_random_datain_pdu_offsets(
+	struct iscsi_node_acl *acl,
+	u32 random_datain_pdu_offsets)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (random_datain_pdu_offsets != 0 && random_datain_pdu_offsets != 1) {
+		printk(KERN_ERR "Requested Random DataIN PDU Offsets: %u not"
+			" 0 or 1\n", random_datain_pdu_offsets);
+		return -EINVAL;
+	}
+
+	a->random_datain_pdu_offsets = random_datain_pdu_offsets;
+	TRACE(TRACE_NODEATTRIB, "Set Random DataIN PDU Offsets to %u for"
+		" Initiator Node %s\n", a->random_datain_pdu_offsets,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_random_datain_seq_offsets():
+ *
+ *
+ */
+int iscsi_na_random_datain_seq_offsets(
+	struct iscsi_node_acl *acl,
+	u32 random_datain_seq_offsets)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (random_datain_seq_offsets != 0 && random_datain_seq_offsets != 1) {
+		printk(KERN_ERR "Requested Random DataIN Sequence Offsets: %u"
+			" not 0 or 1\n", random_datain_seq_offsets);
+		return -EINVAL;
+	}
+
+	a->random_datain_seq_offsets = random_datain_seq_offsets;
+	TRACE(TRACE_NODEATTRIB, "Set Random DataIN Sequence Offsets to %u for"
+		" Initiator Node %s\n", a->random_datain_seq_offsets,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_random_r2t_offsets():
+ *
+ *
+ */
+int iscsi_na_random_r2t_offsets(
+	struct iscsi_node_acl *acl,
+	u32 random_r2t_offsets)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (random_r2t_offsets != 0 && random_r2t_offsets != 1) {
+		printk(KERN_ERR "Requested Random R2T Offsets: %u not"
+			" 0 or 1\n", random_r2t_offsets);
+		return -EINVAL;
+	}
+
+	a->random_r2t_offsets = random_r2t_offsets;
+	TRACE(TRACE_NODEATTRIB, "Set Random R2T Offsets to %u for"
+		" Initiator Node %s\n", a->random_r2t_offsets,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+int iscsi_na_default_erl(
+	struct iscsi_node_acl *acl,
+	u32 default_erl)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (default_erl != 0 && default_erl != 1 && default_erl != 2) {
+		printk(KERN_ERR "Requested default ERL: %u not 0, 1, or 2\n",
+				default_erl);
+		return -EINVAL;
+	}
+
+	a->default_erl = default_erl;
+	TRACE(TRACE_NODEATTRIB, "Set use ERL0 flag to %u for Initiator"
+		" Node %s\n", a->default_erl,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
diff --git a/drivers/target/iscsi/iscsi_target_nodeattrib.h b/drivers/target/iscsi/iscsi_target_nodeattrib.h
new file mode 100644
index 0000000..ed5884e
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nodeattrib.h
@@ -0,0 +1,14 @@
+#ifndef ISCSI_TARGET_NODEATTRIB_H
+#define ISCSI_TARGET_NODEATTRIB_H
+
+extern void iscsi_set_default_node_attribues(struct iscsi_node_acl *);
+extern int iscsi_na_dataout_timeout(struct iscsi_node_acl *, u32);
+extern int iscsi_na_dataout_timeout_retries(struct iscsi_node_acl *, u32);
+extern int iscsi_na_nopin_timeout(struct iscsi_node_acl *, u32);
+extern int iscsi_na_nopin_response_timeout(struct iscsi_node_acl *, u32);
+extern int iscsi_na_random_datain_pdu_offsets(struct iscsi_node_acl *, u32);
+extern int iscsi_na_random_datain_seq_offsets(struct iscsi_node_acl *, u32);
+extern int iscsi_na_random_r2t_offsets(struct iscsi_node_acl *, u32);
+extern int iscsi_na_default_erl(struct iscsi_node_acl *, u32);
+
+#endif /* ISCSI_TARGET_NODEATTRIB_H */
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 03/12] iscsi-target: Add TCM v4 compatiable ConfigFS control plane
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds support for /sys/kernel/config/target/iscsi using
TCM v4.0 compatible calls following target_core_fabric_configfs.c

This includes a number of iSCSI fabric dependent attributes built on top
of the struct config_item_types that target_core_fabric_configfs.c
provides via struct target_fabric_configfs_template in
include/target/target_core_configfs.h

It also includes iscsi_target_nodeattrib.[c,h] for handling the
lio_target_nacl_attrib_attrs[] store/show for iSCSI fabric dependent
attributes.
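
For reference, the control plane this ends up exposing looks roughly like
the sketch below.  The IQN, TPGT and portal are illustrative examples
only, and the generic directory names (acls/, np/, param/, attrib/,
discovery_auth/, ...) come from the TCM v4 target_core_fabric_configfs.c
infrastructure rather than from this patch:

  /sys/kernel/config/target/iscsi/
  |-- discovery_auth/            (userid, password, ...)
  `-- iqn.2011-03.org.linux-iscsi.example:target0/
      `-- tpgt_1/
          |-- acls/<InitiatorName>/{attrib,auth,param}/
          |-- attrib/            (authentication, default_cmdsn_depth, ...)
          |-- np/192.168.1.1:3260/
          |-- param/             (AuthMethod, MaxBurstLength, ...)
          `-- enable

  # e.g. enable the TPG and set a one-way CHAP userid for one NodeACL:
  echo 1 > .../tpgt_1/enable
  echo someuser > .../tpgt_1/acls/<InitiatorName>/auth/userid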

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_configfs.c   | 1617 ++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_configfs.h   |    9 +
 drivers/target/iscsi/iscsi_target_nodeattrib.c |  300 +++++
 drivers/target/iscsi/iscsi_target_nodeattrib.h |   14 +
 4 files changed, 1940 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_configfs.c
 create mode 100644 drivers/target/iscsi/iscsi_target_configfs.h
 create mode 100644 drivers/target/iscsi/iscsi_target_nodeattrib.c
 create mode 100644 drivers/target/iscsi/iscsi_target_nodeattrib.h

diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
new file mode 100644
index 0000000..a1058ce
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_configfs.c
@@ -0,0 +1,1617 @@
+/*******************************************************************************
+ * This file contains the configfs implementation for iSCSI Target mode
+ * from the LIO-Target Project.
+ *
+ * Copyright (c) 2008, 2009, 2010 Rising Tide, Inc.
+ * Copyright (c) 2008, 2009, 2010 Linux-iSCSI.org
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ****************************************************************************/
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/version.h>
+#include <generated/utsrelease.h>
+#include <linux/utsname.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/configfs.h>
+#include <linux/inet.h>
+
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/target_core_fabric_ops.h>
+#include <target/target_core_fabric_configfs.h>
+#include <target/target_core_fabric_lib.h>
+#include <target/target_core_device.h>
+#include <target/target_core_tpg.h>
+#include <target/target_core_configfs.h>
+#include <target/configfs_macros.h>
+
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_nodeattrib.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_target_stat.h"
+#include "iscsi_target_configfs.h"
+
+struct target_fabric_configfs *lio_target_fabric_configfs;
+
+struct lio_target_configfs_attribute {
+	struct configfs_attribute attr;
+	ssize_t (*show)(void *, char *);
+	ssize_t (*store)(void *, const char *, size_t);
+};
+
+struct iscsi_portal_group *lio_get_tpg_from_tpg_item(
+	struct config_item *item,
+	struct iscsi_tiqn **tiqn_out)
+{
+	struct se_portal_group *se_tpg = container_of(to_config_group(item),
+					struct se_portal_group, tpg_group);
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+	int ret;
+
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_portal_group "
+			"pointer\n");
+		return NULL;
+	}
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return NULL;
+
+	*tiqn_out = tpg->tpg_tiqn;
+	return tpg;
+}
+
+/* Start items for lio_target_portal_cit */
+
+static ssize_t lio_target_np_show_sctp(
+	struct se_tpg_np *se_tpg_np,
+	char *page)
+{
+	struct iscsi_tpg_np *tpg_np = container_of(se_tpg_np,
+				struct iscsi_tpg_np, se_tpg_np);
+	struct iscsi_tpg_np *tpg_np_sctp;
+	ssize_t rb;
+
+	tpg_np_sctp = iscsi_tpg_locate_child_np(tpg_np, ISCSI_SCTP_TCP);
+	if ((tpg_np_sctp))
+		rb = sprintf(page, "1\n");
+	else
+		rb = sprintf(page, "0\n");
+
+	return rb;
+}
+
+static ssize_t lio_target_np_store_sctp(
+	struct se_tpg_np *se_tpg_np,
+	const char *page,
+	size_t count)
+{
+	struct iscsi_np *np;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np = container_of(se_tpg_np,
+				struct iscsi_tpg_np, se_tpg_np);
+	struct iscsi_tpg_np *tpg_np_sctp = NULL;
+	struct iscsi_np_addr np_addr;
+	char *endptr;
+	u32 op;
+	int ret;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		printk(KERN_ERR "Illegal value for tpg_enable: %u\n", op);
+		return -EINVAL;
+	}
+	np = tpg_np->tpg_np;
+	if (!(np)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_np from"
+				" struct iscsi_tpg_np\n");
+		return -EINVAL;
+	}
+
+	tpg = tpg_np->tpg;
+	if (iscsi_get_tpg(tpg) < 0)
+		return -EINVAL;
+
+	if (op) {
+		memset((void *)&np_addr, 0, sizeof(struct iscsi_np_addr));
+		if (np->np_flags & NPF_NET_IPV6)
+			snprintf(np_addr.np_ipv6, IPV6_ADDRESS_SPACE,
+				"%s", np->np_ipv6);
+		else
+			np_addr.np_ipv4 = np->np_ipv4;
+		np_addr.np_flags = np->np_flags;
+		np_addr.np_port = np->np_port;
+
+		tpg_np_sctp = iscsi_tpg_add_network_portal(tpg, &np_addr,
+					tpg_np, ISCSI_SCTP_TCP);
+		if (!(tpg_np_sctp) || IS_ERR(tpg_np_sctp))
+			goto out;
+	} else {
+		tpg_np_sctp = iscsi_tpg_locate_child_np(tpg_np, ISCSI_SCTP_TCP);
+		if (!(tpg_np_sctp))
+			goto out;
+
+		ret = iscsi_tpg_del_network_portal(tpg, tpg_np_sctp);
+		if (ret < 0)
+			goto out;
+	}
+
+	iscsi_put_tpg(tpg);
+	return count;
+out:
+	iscsi_put_tpg(tpg);
+	return -EINVAL;
+}
+
+TF_NP_BASE_ATTR(lio_target, sctp, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_portal_attrs[] = {
+	&lio_target_np_sctp.attr,
+	NULL,
+};
+
+/* End items for lio_target_portal_cit */
+
+/* Start items for lio_target_np_cit */
+
+#define MAX_PORTAL_LEN		256
+
+struct se_tpg_np *lio_target_call_addnptotpg(
+	struct se_portal_group *se_tpg,
+	struct config_group *group,
+	const char *name)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np;
+	char *str, *str2, *end_ptr, *ip_str, *port_str;
+	struct iscsi_np_addr np_addr;
+	u32 ipv4 = 0;
+	int ret;
+	char buf[MAX_PORTAL_LEN];
+
+	if (strlen(name) > MAX_PORTAL_LEN) {
+		printk(KERN_ERR "strlen(name): %d exceeds MAX_PORTAL_LEN: %d\n",
+			(int)strlen(name), MAX_PORTAL_LEN);
+		return ERR_PTR(-EOVERFLOW);
+	}
+	memset(buf, 0, MAX_PORTAL_LEN);
+	snprintf(buf, MAX_PORTAL_LEN, "%s", name);
+
+	memset((void *)&np_addr, 0, sizeof(struct iscsi_np_addr));
+
+	str = strstr(buf, "[");
+	if ((str)) {
+		str2 = strstr(str, "]");
+		if (!(str2)) {
+			printk(KERN_ERR "Unable to locate trailing \"]\""
+				" in IPv6 iSCSI network portal address\n");
+			return ERR_PTR(-EINVAL);
+		}
+		str++; /* Skip over leading "[" */
+		*str2 = '\0'; /* Terminate the IPv6 address */
+		str2 += 1; /* Skip over the "]" */
+		port_str = strstr(str2, ":");
+		if (!(port_str)) {
+			printk(KERN_ERR "Unable to locate \":port\""
+				" in IPv6 iSCSI network portal address\n");
+			return ERR_PTR(-EINVAL);
+		}
+		*port_str = '\0'; /* Terminate string for IP */
+		port_str += 1; /* Skip over ":" */
+		np_addr.np_port = simple_strtoul(port_str, &end_ptr, 0);
+
+		snprintf(np_addr.np_ipv6, IPV6_ADDRESS_SPACE, "%s", str);
+		np_addr.np_flags |= NPF_NET_IPV6;
+	} else {
+		ip_str = &buf[0];
+		port_str = strstr(ip_str, ":");
+		if (!(port_str)) {
+			printk(KERN_ERR "Unable to locate \":port\""
+				" in IPv4 iSCSI network portal address\n");
+			return ERR_PTR(-EINVAL);
+		}
+		*port_str = '\0'; /* Terminate string for IP */
+		port_str += 1; /* Skip over ":" */
+		np_addr.np_port = simple_strtoul(port_str, &end_ptr, 0);
+
+		ipv4 = in_aton(ip_str);
+		np_addr.np_ipv4 = htonl(ipv4);
+		np_addr.np_flags |= NPF_NET_IPV4;
+	}
+	tpg = container_of(se_tpg, struct iscsi_portal_group, tpg_se_tpg);
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return ERR_PTR(-EINVAL);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> %s TPGT: %hu"
+		" PORTAL: %s\n",
+		config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
+		tpg->tpgt, name);
+	/*
+	 * Assume ISCSI_TCP by default.  Other network portals for other
+	 * iSCSI fabrics:
+	 *
+	 * Traditional iSCSI over SCTP (initial support)
+	 * iSER/TCP (TODO, hardware available)
+	 * iSER/SCTP (TODO, software emulation with osc-iwarp)
+	 * iSER/IB (TODO, hardware available)
+	 *
+	 * can be enabled with attributes under
+	 * sys/kernel/config/target/iscsi/$IQN/$TPG/np/$IP:$PORT/
+	 *
+	 */
+	tpg_np = iscsi_tpg_add_network_portal(tpg, &np_addr, NULL, ISCSI_TCP);
+	if (IS_ERR(tpg_np)) {
+		iscsi_put_tpg(tpg);
+		return ERR_PTR(PTR_ERR(tpg_np));
+	}
+	printk(KERN_INFO "LIO_Target_ConfigFS: addnptotpg done!\n");
+
+	iscsi_put_tpg(tpg);
+	return &tpg_np->se_tpg_np;
+}
+
+static void lio_target_call_delnpfromtpg(
+	struct se_tpg_np *se_tpg_np)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np;
+	struct se_portal_group *se_tpg;
+	int ret = 0;
+
+	tpg_np = container_of(se_tpg_np, struct iscsi_tpg_np, se_tpg_np);
+	tpg = tpg_np->tpg;
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return;
+
+	se_tpg = &tpg->tpg_se_tpg;
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> %s TPGT: %hu"
+		" PORTAL: %s\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
+		tpg->tpgt, config_item_name(&se_tpg_np->tpg_np_group.cg_item));
+
+	ret = iscsi_tpg_del_network_portal(tpg, tpg_np);
+	if (ret < 0)
+		goto out;
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: delnpfromtpg done!\n");
+out:
+	iscsi_put_tpg(tpg);
+}
+
+/* End items for lio_target_np_cit */
+
+/* Start items for lio_target_nacl_attrib_cit */
+
+#define DEF_NACL_ATTRIB(name)						\
+static ssize_t iscsi_nacl_attrib_show_##name(				\
+	struct se_node_acl *se_nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_node_acl *nacl = container_of(se_nacl, struct iscsi_node_acl, \
+					se_node_acl);			\
+	ssize_t rb;							\
+									\
+	rb = sprintf(page, "%u\n", ISCSI_NODE_ATTRIB(nacl)->name);	\
+	return rb;							\
+}									\
+									\
+static ssize_t iscsi_nacl_attrib_store_##name(				\
+	struct se_node_acl *se_nacl,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_node_acl *nacl = container_of(se_nacl, struct iscsi_node_acl, \
+					se_node_acl);			\
+	char *endptr;							\
+	u32 val;							\
+	int ret;							\
+									\
+	val = simple_strtoul(page, &endptr, 0);				\
+	ret = iscsi_na_##name(nacl, val);				\
+	if (ret < 0)							\
+		return ret;						\
+									\
+	return count;							\
+}
+
+#define NACL_ATTR(_name, _mode) TF_NACL_ATTRIB_ATTR(iscsi, _name, _mode);
+/*
+ * Define iscsi_node_attrib_s_dataout_timeout
+ */
+DEF_NACL_ATTRIB(dataout_timeout);
+NACL_ATTR(dataout_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_dataout_timeout_retries
+ */
+DEF_NACL_ATTRIB(dataout_timeout_retries);
+NACL_ATTR(dataout_timeout_retries, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_default_erl
+ */
+DEF_NACL_ATTRIB(default_erl);
+NACL_ATTR(default_erl, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_nopin_timeout
+ */
+DEF_NACL_ATTRIB(nopin_timeout);
+NACL_ATTR(nopin_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_nopin_response_timeout
+ */
+DEF_NACL_ATTRIB(nopin_response_timeout);
+NACL_ATTR(nopin_response_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_random_datain_pdu_offsets
+ */
+DEF_NACL_ATTRIB(random_datain_pdu_offsets);
+NACL_ATTR(random_datain_pdu_offsets, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_random_datain_seq_offsets
+ */
+DEF_NACL_ATTRIB(random_datain_seq_offsets);
+NACL_ATTR(random_datain_seq_offsets, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_node_attrib_s_random_r2t_offsets
+ */
+DEF_NACL_ATTRIB(random_r2t_offsets);
+NACL_ATTR(random_r2t_offsets, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_nacl_attrib_attrs[] = {
+	&iscsi_nacl_attrib_dataout_timeout.attr,
+	&iscsi_nacl_attrib_dataout_timeout_retries.attr,
+	&iscsi_nacl_attrib_default_erl.attr,
+	&iscsi_nacl_attrib_nopin_timeout.attr,
+	&iscsi_nacl_attrib_nopin_response_timeout.attr,
+	&iscsi_nacl_attrib_random_datain_pdu_offsets.attr,
+	&iscsi_nacl_attrib_random_datain_seq_offsets.attr,
+	&iscsi_nacl_attrib_random_r2t_offsets.attr,
+	NULL,
+};
+
+/* End items for lio_target_nacl_attrib_cit */
+
+/* Start items for lio_target_nacl_auth_cit */
+
+#define __DEF_NACL_AUTH_STR(prefix, name, flags)			\
+static ssize_t __iscsi_##prefix##_show_##name(				\
+	struct iscsi_node_acl *nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_node_auth *auth = &nacl->node_auth;		\
+	ssize_t rb;							\
+									\
+	if (!capable(CAP_SYS_ADMIN))					\
+		return -EPERM;						\
+	rb = snprintf(page, PAGE_SIZE, "%s\n", auth->name);		\
+	return rb;							\
+}									\
+									\
+static ssize_t __iscsi_##prefix##_store_##name(				\
+	struct iscsi_node_acl *nacl,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_node_auth *auth = &nacl->node_auth;		\
+									\
+	if (!capable(CAP_SYS_ADMIN))					\
+		return -EPERM;						\
+									\
+	snprintf(auth->name, sizeof(auth->name), "%s", page);		\
+	if (!(strncmp("NULL", auth->name, 4)))				\
+		auth->naf_flags &= ~flags;				\
+	else								\
+		auth->naf_flags |= flags;				\
+									\
+	if ((auth->naf_flags & NAF_USERID_IN_SET) &&			\
+	    (auth->naf_flags & NAF_PASSWORD_IN_SET))			\
+		auth->authenticate_target = 1;				\
+	else								\
+		auth->authenticate_target = 0;				\
+									\
+	return count;							\
+}
+
+#define __DEF_NACL_AUTH_INT(prefix, name)				\
+static ssize_t __iscsi_##prefix##_show_##name(				\
+	struct iscsi_node_acl *nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_node_auth *auth = &nacl->node_auth;		\
+	ssize_t rb;							\
+									\
+	if (!capable(CAP_SYS_ADMIN))					\
+		return -EPERM;						\
+									\
+	rb = snprintf(page, PAGE_SIZE, "%d\n", auth->name);		\
+	return rb;							\
+}
+
+#define DEF_NACL_AUTH_STR(name, flags)					\
+	__DEF_NACL_AUTH_STR(nacl_auth, name, flags)			\
+static ssize_t iscsi_nacl_auth_show_##name(				\
+	struct se_node_acl *nacl,					\
+	char *page)							\
+{									\
+	return __iscsi_nacl_auth_show_##name(container_of(nacl,		\
+			struct iscsi_node_acl, se_node_acl), page);		\
+}									\
+static ssize_t iscsi_nacl_auth_store_##name(				\
+	struct se_node_acl *nacl,					\
+	const char *page,						\
+	size_t count)							\
+{									\
+	return __iscsi_nacl_auth_store_##name(container_of(nacl,	\
+			struct iscsi_node_acl, se_node_acl), page, count);	\
+}
+
+#define DEF_NACL_AUTH_INT(name)						\
+	__DEF_NACL_AUTH_INT(nacl_auth, name)				\
+static ssize_t iscsi_nacl_auth_show_##name(				\
+	struct se_node_acl *nacl,					\
+	char *page)							\
+{									\
+	return __iscsi_nacl_auth_show_##name(container_of(nacl,		\
+			struct iscsi_node_acl, se_node_acl), page);		\
+}
+
+#define AUTH_ATTR(_name, _mode)	TF_NACL_AUTH_ATTR(iscsi, _name, _mode);
+#define AUTH_ATTR_RO(_name) TF_NACL_AUTH_ATTR_RO(iscsi, _name);
+
+/*
+ * One-way authentication userid
+ */
+DEF_NACL_AUTH_STR(userid, NAF_USERID_SET);
+AUTH_ATTR(userid, S_IRUGO | S_IWUSR);
+/*
+ * One-way authentication password
+ */
+DEF_NACL_AUTH_STR(password, NAF_PASSWORD_SET);
+AUTH_ATTR(password, S_IRUGO | S_IWUSR);
+/*
+ * Enforce mutual authentication
+ */
+DEF_NACL_AUTH_INT(authenticate_target);
+AUTH_ATTR_RO(authenticate_target);
+/*
+ * Mutual authentication userid
+ */
+DEF_NACL_AUTH_STR(userid_mutual, NAF_USERID_IN_SET);
+AUTH_ATTR(userid_mutual, S_IRUGO | S_IWUSR);
+/*
+ * Mutual authentication password
+ */
+DEF_NACL_AUTH_STR(password_mutual, NAF_PASSWORD_IN_SET);
+AUTH_ATTR(password_mutual, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_nacl_auth_attrs[] = {
+	&iscsi_nacl_auth_userid.attr,
+	&iscsi_nacl_auth_password.attr,
+	&iscsi_nacl_auth_authenticate_target.attr,
+	&iscsi_nacl_auth_userid_mutual.attr,
+	&iscsi_nacl_auth_password_mutual.attr,
+	NULL,
+};
+
+/* End items for lio_target_nacl_auth_cit */
+
+/* Start items for lio_target_nacl_param_cit */
+
+#define DEF_NACL_PARAM(name)						\
+static ssize_t iscsi_nacl_param_show_##name(				\
+	struct se_node_acl *se_nacl,					\
+	char *page)							\
+{									\
+	struct iscsi_session *sess;						\
+	struct se_session *se_sess;						\
+	ssize_t rb;							\
+									\
+	spin_lock_bh(&se_nacl->nacl_sess_lock);				\
+	se_sess = se_nacl->nacl_sess;					\
+	if (!(se_sess)) {						\
+		rb = snprintf(page, PAGE_SIZE,				\
+			"No Active iSCSI Session\n");			\
+	} else {							\
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;	\
+		rb = snprintf(page, PAGE_SIZE, "%u\n",			\
+			(u32)SESS_OPS(sess)->name);			\
+	}								\
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);			\
+									\
+	return rb;							\
+}
+
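+/*
+ * Read-only views of the iSCSI parameters negotiated for this NodeACL's
+ * currently active session; with no active session the show handler
+ * reports "No Active iSCSI Session".
+ */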
+#define NACL_PARAM_ATTR(_name) TF_NACL_PARAM_ATTR_RO(iscsi, _name);
+
+DEF_NACL_PARAM(MaxConnections);
+NACL_PARAM_ATTR(MaxConnections);
+
+DEF_NACL_PARAM(InitialR2T);
+NACL_PARAM_ATTR(InitialR2T);
+
+DEF_NACL_PARAM(ImmediateData);
+NACL_PARAM_ATTR(ImmediateData);
+
+DEF_NACL_PARAM(MaxBurstLength);
+NACL_PARAM_ATTR(MaxBurstLength);
+
+DEF_NACL_PARAM(FirstBurstLength);
+NACL_PARAM_ATTR(FirstBurstLength);
+
+DEF_NACL_PARAM(DefaultTime2Wait);
+NACL_PARAM_ATTR(DefaultTime2Wait);
+
+DEF_NACL_PARAM(DefaultTime2Retain);
+NACL_PARAM_ATTR(DefaultTime2Retain);
+
+DEF_NACL_PARAM(MaxOutstandingR2T);
+NACL_PARAM_ATTR(MaxOutstandingR2T);
+
+DEF_NACL_PARAM(DataPDUInOrder);
+NACL_PARAM_ATTR(DataPDUInOrder);
+
+DEF_NACL_PARAM(DataSequenceInOrder);
+NACL_PARAM_ATTR(DataSequenceInOrder);
+
+DEF_NACL_PARAM(ErrorRecoveryLevel);
+NACL_PARAM_ATTR(ErrorRecoveryLevel);
+
+static struct configfs_attribute *lio_target_nacl_param_attrs[] = {
+	&iscsi_nacl_param_MaxConnections.attr,
+	&iscsi_nacl_param_InitialR2T.attr,
+	&iscsi_nacl_param_ImmediateData.attr,
+	&iscsi_nacl_param_MaxBurstLength.attr,
+	&iscsi_nacl_param_FirstBurstLength.attr,
+	&iscsi_nacl_param_DefaultTime2Wait.attr,
+	&iscsi_nacl_param_DefaultTime2Retain.attr,
+	&iscsi_nacl_param_MaxOutstandingR2T.attr,
+	&iscsi_nacl_param_DataPDUInOrder.attr,
+	&iscsi_nacl_param_DataSequenceInOrder.attr,
+	&iscsi_nacl_param_ErrorRecoveryLevel.attr,
+	NULL,
+};
+
+/* End items for lio_target_nacl_param_cit */
+
+/* Start items for lio_target_acl_cit */
+
+static ssize_t lio_target_nacl_show_info(
+	struct se_node_acl *se_nacl,
+	char *page)
+{
+	struct iscsi_session *sess;
+	struct iscsi_conn *conn;
+	struct se_session *se_sess;
+	unsigned char *ip, buf_ipv4[IPV4_BUF_SIZE];
+	ssize_t rb = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (!(se_sess))
+		rb += sprintf(page+rb, "No active iSCSI Session for Initiator"
+			" Endpoint: %s\n", se_nacl->initiatorname);
+	else {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+		if (SESS_OPS(sess)->InitiatorName)
+			rb += sprintf(page+rb, "InitiatorName: %s\n",
+				SESS_OPS(sess)->InitiatorName);
+		if (SESS_OPS(sess)->InitiatorAlias)
+			rb += sprintf(page+rb, "InitiatorAlias: %s\n",
+				SESS_OPS(sess)->InitiatorAlias);
+
+		rb += sprintf(page+rb, "LIO Session ID: %u   "
+			"ISID: 0x%02x %02x %02x %02x %02x %02x  "
+			"TSIH: %hu  ", sess->sid,
+			sess->isid[0], sess->isid[1], sess->isid[2],
+			sess->isid[3], sess->isid[4], sess->isid[5],
+			sess->tsih);
+		rb += sprintf(page+rb, "SessionType: %s\n",
+				(SESS_OPS(sess)->SessionType) ?
+				"Discovery" : "Normal");
+		rb += sprintf(page+rb, "Session State: ");
+		switch (sess->session_state) {
+		case TARG_SESS_STATE_FREE:
+			rb += sprintf(page+rb, "TARG_SESS_FREE\n");
+			break;
+		case TARG_SESS_STATE_ACTIVE:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_ACTIVE\n");
+			break;
+		case TARG_SESS_STATE_LOGGED_IN:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_LOGGED_IN\n");
+			break;
+		case TARG_SESS_STATE_FAILED:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_FAILED\n");
+			break;
+		case TARG_SESS_STATE_IN_CONTINUE:
+			rb += sprintf(page+rb, "TARG_SESS_STATE_IN_CONTINUE\n");
+			break;
+		default:
+			rb += sprintf(page+rb, "ERROR: Unknown Session"
+					" State!\n");
+			break;
+		}
+
+		rb += sprintf(page+rb, "---------------------[iSCSI Session"
+				" Values]-----------------------\n");
+		rb += sprintf(page+rb, "  CmdSN/WR  :  CmdSN/WC  :  ExpCmdSN"
+				"  :  MaxCmdSN  :     ITT    :     TTT\n");
+		rb += sprintf(page+rb, " 0x%08x   0x%08x   0x%08x   0x%08x"
+				"   0x%08x   0x%08x\n",
+			sess->cmdsn_window,
+			(sess->max_cmd_sn - sess->exp_cmd_sn) + 1,
+			sess->exp_cmd_sn, sess->max_cmd_sn,
+			sess->init_task_tag, sess->targ_xfer_tag);
+		rb += sprintf(page+rb, "----------------------[iSCSI"
+				" Connections]-------------------------\n");
+
+		spin_lock(&sess->conn_lock);
+		list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+			rb += sprintf(page+rb, "CID: %hu  Connection"
+					" State: ", conn->cid);
+			switch (conn->conn_state) {
+			case TARG_CONN_STATE_FREE:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_FREE\n");
+				break;
+			case TARG_CONN_STATE_XPT_UP:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_XPT_UP\n");
+				break;
+			case TARG_CONN_STATE_IN_LOGIN:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_IN_LOGIN\n");
+				break;
+			case TARG_CONN_STATE_LOGGED_IN:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_LOGGED_IN\n");
+				break;
+			case TARG_CONN_STATE_IN_LOGOUT:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_IN_LOGOUT\n");
+				break;
+			case TARG_CONN_STATE_LOGOUT_REQUESTED:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_LOGOUT_REQUESTED\n");
+				break;
+			case TARG_CONN_STATE_CLEANUP_WAIT:
+				rb += sprintf(page+rb,
+					"TARG_CONN_STATE_CLEANUP_WAIT\n");
+				break;
+			default:
+				rb += sprintf(page+rb,
+					"ERROR: Unknown Connection State!\n");
+				break;
+			}
+
+			if (conn->net_size == IPV6_ADDRESS_SPACE)
+				ip = &conn->ipv6_login_ip[0];
+			else {
+				iscsi_ntoa2(buf_ipv4, conn->login_ip);
+				ip = &buf_ipv4[0];
+			}
+			rb += sprintf(page+rb, "   Address %s %s", ip,
+				(conn->network_transport == ISCSI_TCP) ?
+				"TCP" : "SCTP");
+			rb += sprintf(page+rb, "  StatSN: 0x%08x\n",
+				conn->stat_sn);
+		}
+		spin_unlock(&sess->conn_lock);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return rb;
+}
+
+TF_NACL_BASE_ATTR_RO(lio_target, info);
+
+static ssize_t lio_target_nacl_show_cmdsn_depth(
+	struct se_node_acl *se_nacl,
+	char *page)
+{
+	return sprintf(page, "%u\n", se_nacl->queue_depth);
+}
+
+static ssize_t lio_target_nacl_store_cmdsn_depth(
+	struct se_node_acl *se_nacl,
+	const char *page,
+	size_t count)
+{
+	struct se_portal_group *se_tpg = se_nacl->se_tpg;
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	struct config_item *acl_ci, *tpg_ci, *wwn_ci;
+	char *endptr;
+	u32 cmdsn_depth = 0;
+	int ret = 0;
+
+	cmdsn_depth = simple_strtoul(page, &endptr, 0);
+	if (cmdsn_depth > TA_DEFAULT_CMDSN_DEPTH_MAX) {
+		printk(KERN_ERR "Passed cmdsn_depth: %u exceeds"
+			" TA_DEFAULT_CMDSN_DEPTH_MAX: %u\n", cmdsn_depth,
+			TA_DEFAULT_CMDSN_DEPTH_MAX);
+		return -EINVAL;
+	}
+	acl_ci = &se_nacl->acl_group.cg_item;
+	if (!(acl_ci)) {
+		printk(KERN_ERR "Unable to locate acl_ci\n");
+		return -EINVAL;
+	}
+	tpg_ci = &acl_ci->ci_parent->ci_group->cg_item;
+	if (!(tpg_ci)) {
+		printk(KERN_ERR "Unable to locate tpg_ci\n");
+		return -EINVAL;
+	}
+	wwn_ci = &tpg_ci->ci_group->cg_item;
+	if (!(wwn_ci)) {
+		printk(KERN_ERR "Unable to locate config_item wwn_ci\n");
+		return -EINVAL;
+	}
+
+	if (iscsi_get_tpg(tpg) < 0)
+		return -EINVAL;
+	/*
+	 * iscsi_tpg_set_initiator_node_queue_depth() assumes force=1
+	 */
+	ret = iscsi_tpg_set_initiator_node_queue_depth(tpg,
+				config_item_name(acl_ci), cmdsn_depth, 1);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: %s/%s Set CmdSN Window: %u for"
+		" InitiatorName: %s\n", config_item_name(wwn_ci),
+		config_item_name(tpg_ci), cmdsn_depth,
+		config_item_name(acl_ci));
+
+	iscsi_put_tpg(tpg);
+	return (!ret) ? count : (ssize_t)ret;
+}
+
+TF_NACL_BASE_ATTR(lio_target, cmdsn_depth, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_initiator_attrs[] = {
+	&lio_target_nacl_info.attr,
+	&lio_target_nacl_cmdsn_depth.attr,
+	NULL,
+};
+
+static struct se_node_acl *lio_target_make_nodeacl(
+	struct se_portal_group *se_tpg,
+	struct config_group *group,
+	const char *name)
+{
+	struct config_group *stats_cg;
+	struct iscsi_node_acl *acl;
+	struct se_node_acl *se_nacl_new, *se_nacl;
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	u32 cmdsn_depth;
+
+	se_nacl_new = lio_tpg_alloc_fabric_acl(se_tpg);
+
+	acl = container_of(se_nacl_new, struct iscsi_node_acl,
+				se_node_acl);
+
+	cmdsn_depth = ISCSI_TPG_ATTRIB(tpg)->default_cmdsn_depth;
+	/*
+	 * se_nacl_new may be released by core_tpg_add_initiator_node_acl()
+	 * when converting a NodeACL from demo mode -> explicit
+	 */
+	se_nacl = core_tpg_add_initiator_node_acl(se_tpg, se_nacl_new,
+				name, cmdsn_depth);
+	if (IS_ERR(se_nacl))
+		return ERR_PTR(PTR_ERR(se_nacl));
+
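+	/*
+	 * Attach the per-NodeACL iscsi_sess_stats default group so the
+	 * session statistics from iscsi_target_stat.c show up below this
+	 * ACL's fabric statistics directory in configfs.
+	 */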
+	stats_cg = &acl->se_node_acl.acl_fabric_stat_group;
+
+	stats_cg->default_groups = kzalloc(sizeof(struct config_group *) * 2,
+				GFP_KERNEL);
+	if (!stats_cg->default_groups) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" stats_cg->default_groups\n");
+		core_tpg_del_initiator_node_acl(se_tpg, se_nacl, 1);
+		kfree(acl);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	stats_cg->default_groups[0] = &NODE_STAT_GRPS(acl)->iscsi_sess_stats_group;
+	stats_cg->default_groups[1] = NULL;
+	config_group_init_type_name(&NODE_STAT_GRPS(acl)->iscsi_sess_stats_group,
+			"iscsi_sess_stats", &iscsi_stat_sess_cit);
+
+	return se_nacl;
+}
+
+static void lio_target_drop_nodeacl(
+	struct se_node_acl *se_nacl)
+{
+	struct se_portal_group *se_tpg = se_nacl->se_tpg;
+	struct iscsi_node_acl *acl = container_of(se_nacl,
+			struct iscsi_node_acl, se_node_acl);
+	struct config_item *df_item;
+	struct config_group *stats_cg;
+	int i;
+
+	stats_cg = &acl->se_node_acl.acl_fabric_stat_group;
+	for (i = 0; stats_cg->default_groups[i]; i++) {
+		df_item = &stats_cg->default_groups[i]->cg_item;
+		stats_cg->default_groups[i] = NULL;
+		config_item_put(df_item);
+	}
+	kfree(stats_cg->default_groups);
+
+	core_tpg_del_initiator_node_acl(se_tpg, se_nacl, 1);
+	kfree(acl);
+}
+
+/* End items for lio_target_acl_cit */
+
+/* Start items for lio_target_tpg_attrib_cit */
+
+#define DEF_TPG_ATTRIB(name)						\
+									\
+static ssize_t iscsi_tpg_attrib_show_##name(				\
+	struct se_portal_group *se_tpg,				\
+	char *page)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	ssize_t rb;							\
+									\
+	if (iscsi_get_tpg(tpg) < 0)					\
+		return -EINVAL;						\
+									\
+	rb = sprintf(page, "%u\n", ISCSI_TPG_ATTRIB(tpg)->name);	\
+	iscsi_put_tpg(tpg);						\
+	return rb;							\
+}									\
+									\
+static ssize_t iscsi_tpg_attrib_store_##name(				\
+	struct se_portal_group *se_tpg,				\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	char *endptr;							\
+	u32 val;							\
+	int ret;							\
+									\
+	if (iscsi_get_tpg(tpg) < 0)					\
+		return -EINVAL;						\
+									\
+	val = simple_strtoul(page, &endptr, 0);				\
+	ret = iscsi_ta_##name(tpg, val);				\
+	if (ret < 0)							\
+		goto out;						\
+									\
+	iscsi_put_tpg(tpg);						\
+	return count;							\
+out:									\
+	iscsi_put_tpg(tpg);						\
+	return ret;							\
+}
+
+#define TPG_ATTR(_name, _mode) TF_TPG_ATTRIB_ATTR(iscsi, _name, _mode);
+
+/*
+ * Define iscsi_tpg_attrib_s_authentication
+ */
+DEF_TPG_ATTRIB(authentication);
+TPG_ATTR(authentication, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_login_timeout
+ */
+DEF_TPG_ATTRIB(login_timeout);
+TPG_ATTR(login_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_netif_timeout
+ */
+DEF_TPG_ATTRIB(netif_timeout);
+TPG_ATTR(netif_timeout, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_generate_node_acls
+ */
+DEF_TPG_ATTRIB(generate_node_acls);
+TPG_ATTR(generate_node_acls, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_default_cmdsn_depth
+ */
+DEF_TPG_ATTRIB(default_cmdsn_depth);
+TPG_ATTR(default_cmdsn_depth, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_cache_dynamic_acls
+ */
+DEF_TPG_ATTRIB(cache_dynamic_acls);
+TPG_ATTR(cache_dynamic_acls, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_demo_mode_write_protect
+ */
+DEF_TPG_ATTRIB(demo_mode_write_protect);
+TPG_ATTR(demo_mode_write_protect, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_prod_mode_write_protect
+ */
+DEF_TPG_ATTRIB(prod_mode_write_protect);
+TPG_ATTR(prod_mode_write_protect, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_crc32c_x86_offload
+ */
+DEF_TPG_ATTRIB(crc32c_x86_offload);
+TPG_ATTR(crc32c_x86_offload, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_tpg_attrib_attrs[] = {
+	&iscsi_tpg_attrib_authentication.attr,
+	&iscsi_tpg_attrib_login_timeout.attr,
+	&iscsi_tpg_attrib_netif_timeout.attr,
+	&iscsi_tpg_attrib_generate_node_acls.attr,
+	&iscsi_tpg_attrib_default_cmdsn_depth.attr,
+	&iscsi_tpg_attrib_cache_dynamic_acls.attr,
+	&iscsi_tpg_attrib_demo_mode_write_protect.attr,
+	&iscsi_tpg_attrib_prod_mode_write_protect.attr,
+	&iscsi_tpg_attrib_crc32c_x86_offload.attr,
+	NULL,
+};
+
+/* End items for lio_target_tpg_attrib_cit */
+
+/* Start items for lio_target_tpg_param_cit */
+
+#define DEF_TPG_PARAM(name)						\
+static ssize_t iscsi_tpg_param_show_##name(				\
+	struct se_portal_group *se_tpg,				\
+	char *page)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	struct iscsi_param *param;						\
+	ssize_t rb;							\
+									\
+	if (iscsi_get_tpg(tpg) < 0)					\
+		return -EINVAL;						\
+									\
+	param = iscsi_find_param_from_key(__stringify(name),		\
+				tpg->param_list);			\
+	if (!(param)) {							\
+		iscsi_put_tpg(tpg);					\
+		return -EINVAL;						\
+	}								\
+	rb = snprintf(page, PAGE_SIZE, "%s\n", param->value);		\
+									\
+	iscsi_put_tpg(tpg);						\
+	return rb;							\
+}									\
+static ssize_t iscsi_tpg_param_store_##name(				\
+	struct se_portal_group *se_tpg,				\
+	const char *page,						\
+	size_t count)							\
+{									\
+	struct iscsi_portal_group *tpg = container_of(se_tpg,		\
+			struct iscsi_portal_group, tpg_se_tpg);	\
+	char *buf;							\
+	int ret;							\
+									\
+	buf = kzalloc(PAGE_SIZE, GFP_KERNEL);				\
+	if (!(buf))							\
+		return -ENOMEM;						\
+	snprintf(buf, PAGE_SIZE, "%s=%s", __stringify(name), page);	\
+	buf[strlen(buf)-1] = '\0'; /* Kill newline */			\
+									\
+	if (iscsi_get_tpg(tpg) < 0) {					\
+		kfree(buf);						\
+		return -EINVAL;						\
+	}								\
+									\
+	ret = iscsi_change_param_value(buf, SENDER_TARGET,		\
+				tpg->param_list, 1);			\
+	if (ret < 0)							\
+		goto out;						\
+									\
+	kfree(buf);							\
+	iscsi_put_tpg(tpg);						\
+	return count;							\
+out:									\
+	kfree(buf);							\
+	iscsi_put_tpg(tpg);						\
+	return -EINVAL;						\
+}
+
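+/*
+ * The TPG parameter attributes below operate on tpg->param_list, i.e. the
+ * per-TPG keys offered as SENDER_TARGET values during iSCSI login
+ * negotiation; stores go through iscsi_change_param_value().
+ */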
+#define TPG_PARAM_ATTR(_name, _mode) TF_TPG_PARAM_ATTR(iscsi, _name, _mode);
+
+DEF_TPG_PARAM(AuthMethod);
+TPG_PARAM_ATTR(AuthMethod, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(HeaderDigest);
+TPG_PARAM_ATTR(HeaderDigest, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DataDigest);
+TPG_PARAM_ATTR(DataDigest, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxConnections);
+TPG_PARAM_ATTR(MaxConnections, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(TargetAlias);
+TPG_PARAM_ATTR(TargetAlias, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(InitialR2T);
+TPG_PARAM_ATTR(InitialR2T, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(ImmediateData);
+TPG_PARAM_ATTR(ImmediateData, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxRecvDataSegmentLength);
+TPG_PARAM_ATTR(MaxRecvDataSegmentLength, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxBurstLength);
+TPG_PARAM_ATTR(MaxBurstLength, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(FirstBurstLength);
+TPG_PARAM_ATTR(FirstBurstLength, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DefaultTime2Wait);
+TPG_PARAM_ATTR(DefaultTime2Wait, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DefaultTime2Retain);
+TPG_PARAM_ATTR(DefaultTime2Retain, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(MaxOutstandingR2T);
+TPG_PARAM_ATTR(MaxOutstandingR2T, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DataPDUInOrder);
+TPG_PARAM_ATTR(DataPDUInOrder, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(DataSequenceInOrder);
+TPG_PARAM_ATTR(DataSequenceInOrder, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(ErrorRecoveryLevel);
+TPG_PARAM_ATTR(ErrorRecoveryLevel, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(IFMarker);
+TPG_PARAM_ATTR(IFMarker, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(OFMarker);
+TPG_PARAM_ATTR(OFMarker, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(IFMarkInt);
+TPG_PARAM_ATTR(IFMarkInt, S_IRUGO | S_IWUSR);
+
+DEF_TPG_PARAM(OFMarkInt);
+TPG_PARAM_ATTR(OFMarkInt, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_tpg_param_attrs[] = {
+	&iscsi_tpg_param_AuthMethod.attr,
+	&iscsi_tpg_param_HeaderDigest.attr,
+	&iscsi_tpg_param_DataDigest.attr,
+	&iscsi_tpg_param_MaxConnections.attr,
+	&iscsi_tpg_param_TargetAlias.attr,
+	&iscsi_tpg_param_InitialR2T.attr,
+	&iscsi_tpg_param_ImmediateData.attr,
+	&iscsi_tpg_param_MaxRecvDataSegmentLength.attr,
+	&iscsi_tpg_param_MaxBurstLength.attr,
+	&iscsi_tpg_param_FirstBurstLength.attr,
+	&iscsi_tpg_param_DefaultTime2Wait.attr,
+	&iscsi_tpg_param_DefaultTime2Retain.attr,
+	&iscsi_tpg_param_MaxOutstandingR2T.attr,
+	&iscsi_tpg_param_DataPDUInOrder.attr,
+	&iscsi_tpg_param_DataSequenceInOrder.attr,
+	&iscsi_tpg_param_ErrorRecoveryLevel.attr,
+	&iscsi_tpg_param_IFMarker.attr,
+	&iscsi_tpg_param_OFMarker.attr,
+	&iscsi_tpg_param_IFMarkInt.attr,
+	&iscsi_tpg_param_OFMarkInt.attr,
+	NULL,
+};
+
+/* End items for lio_target_tpg_param_cit */
+
+/* Start items for lio_target_tpg_cit */
+
+static ssize_t lio_target_tpg_show_enable(
+	struct se_portal_group *se_tpg,
+	char *page)
+{
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	ssize_t len = 0;
+
+	spin_lock(&tpg->tpg_state_lock);
+	len = sprintf(page, "%d\n",
+			(tpg->tpg_state == TPG_STATE_ACTIVE) ? 1 : 0);
+	spin_unlock(&tpg->tpg_state_lock);
+
+	return len;
+}
+
+static ssize_t lio_target_tpg_store_enable(
+	struct se_portal_group *se_tpg,
+	const char *page,
+	size_t count)
+{
+	struct iscsi_portal_group *tpg = container_of(se_tpg,
+			struct iscsi_portal_group, tpg_se_tpg);
+	char *endptr;
+	u32 op;
+	int ret = 0;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		printk(KERN_ERR "Illegal value for tpg_enable: %u\n", op);
+		return -EINVAL;
+	}
+
+	ret = iscsi_get_tpg(tpg);
+	if (ret < 0)
+		return -EINVAL;
+
+	if (op) {
+		ret = iscsi_tpg_enable_portal_group(tpg);
+		if (ret < 0)
+			goto out;
+	} else {
+		/*
+		 * iscsi_tpg_disable_portal_group() assumes force=1
+		 */
+		ret = iscsi_tpg_disable_portal_group(tpg, 1);
+		if (ret < 0)
+			goto out;
+	}
+
+	iscsi_put_tpg(tpg);
+	return count;
+out:
+	iscsi_put_tpg(tpg);
+	return -EINVAL;
+}
+
+TF_TPG_BASE_ATTR(lio_target, enable, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_tpg_attrs[] = {
+	&lio_target_tpg_enable.attr,
+	NULL,
+};
+
+/* End items for lio_target_tpg_cit */
+
+/* Start items for lio_target_tiqn_cit */
+
+struct se_portal_group *lio_target_tiqn_addtpg(
+	struct se_wwn *wwn,
+	struct config_group *group,
+	const char *name)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tiqn *tiqn;
+	char *tpgt_str, *end_ptr;
+	int ret = 0;
+	unsigned short int tpgt;
+
+	tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn);
+	/*
+	 * Only tpgt_# directory groups can be created below
+	 * target/iscsi/iqn.superturodiskarry/
+	 */
+	tpgt_str = strstr(name, "tpgt_");
+	if (!(tpgt_str)) {
+		printk(KERN_ERR "Unable to locate \"tpgt_#\" directory"
+				" group\n");
+		return NULL;
+	}
+	tpgt_str += 5; /* Skip ahead of "tpgt_" */
+	tpgt = (unsigned short int) simple_strtoul(tpgt_str, &end_ptr, 0);
+
+	tpg = core_alloc_portal_group(tiqn, tpgt);
+	if (!(tpg))
+		return NULL;
+
+	ret = core_tpg_register(
+			&lio_target_fabric_configfs->tf_ops,
+			wwn, &tpg->tpg_se_tpg, (void *)tpg,
+			TRANSPORT_TPG_TYPE_NORMAL);
+	if (ret < 0) {
+		kmem_cache_free(lio_tpg_cache, tpg);
+		return NULL;
+	}
+
+	ret = iscsi_tpg_add_portal_group(tiqn, tpg);
+	if (ret != 0)
+		goto out;
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> Allocated TPG: %s\n",
+			name);
+	return &tpg->tpg_se_tpg;
+out:
+	core_tpg_deregister(&tpg->tpg_se_tpg);
+	kmem_cache_free(lio_tpg_cache, tpg);
+	return NULL;
+}
+
+void lio_target_tiqn_deltpg(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tiqn *tiqn;
+
+	tpg = container_of(se_tpg, struct iscsi_portal_group, tpg_se_tpg);
+	tiqn = tpg->tpg_tiqn;
+	/*
+	 * iscsi_tpg_del_portal_group() assumes force=1
+	 */
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> Releasing TPG\n");
+	iscsi_tpg_del_portal_group(tiqn, tpg, 1);
+}
+
+/* End items for lio_target_tiqn_cit */
+
+/* Start LIO-Target TIQN struct config_item lio_target_cit */
+
+static ssize_t lio_target_wwn_show_attr_lio_version(
+	struct target_fabric_configfs *tf,
+	char *page)
+{
+	return sprintf(page, "Linux-iSCSI.org Target "ISCSI_VERSION""
+		" on %s/%s on "UTS_RELEASE"\n", utsname()->sysname,
+		utsname()->machine);
+}
+
+TF_WWN_ATTR_RO(lio_target, lio_version);
+
+static struct configfs_attribute *lio_target_wwn_attrs[] = {
+	&lio_target_wwn_lio_version.attr,
+	NULL,
+};
+
+struct se_wwn *lio_target_call_coreaddtiqn(
+	struct target_fabric_configfs *tf,
+	struct config_group *group,
+	const char *name)
+{
+	struct config_group *stats_cg;
+	struct iscsi_tiqn *tiqn;
+	int ret = 0;
+
+	tiqn = core_add_tiqn((unsigned char *)name, &ret);
+	if (!(tiqn))
+		return NULL;
+	/*
+	 * Setup struct iscsi_wwn_stat_grps for se_wwn->fabric_stat_group.
+	 */
+	stats_cg = &tiqn->tiqn_wwn.fabric_stat_group;
+
+	stats_cg->default_groups = kzalloc(sizeof(struct config_group *) * 6,
+				GFP_KERNEL);
+	if (!stats_cg->default_groups) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" stats_cg->default_groups\n");
+		core_del_tiqn(tiqn);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	stats_cg->default_groups[0] = &WWN_STAT_GRPS(tiqn)->iscsi_instance_group;
+	stats_cg->default_groups[1] = &WWN_STAT_GRPS(tiqn)->iscsi_sess_err_group;
+	stats_cg->default_groups[2] = &WWN_STAT_GRPS(tiqn)->iscsi_tgt_attr_group;
+	stats_cg->default_groups[3] = &WWN_STAT_GRPS(tiqn)->iscsi_login_stats_group;
+	stats_cg->default_groups[4] = &WWN_STAT_GRPS(tiqn)->iscsi_logout_stats_group;
+	stats_cg->default_groups[5] = NULL;
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_instance_group,
+			"iscsi_instance", &iscsi_stat_instance_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_sess_err_group,
+			"iscsi_sess_err", &iscsi_stat_sess_err_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_tgt_attr_group,
+			"iscsi_tgt_attr", &iscsi_stat_tgt_attr_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_login_stats_group,
+			"iscsi_login_stats", &iscsi_stat_login_cit);
+	config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_logout_stats_group,
+			"iscsi_logout_stats", &iscsi_stat_logout_cit);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
+	printk(KERN_INFO "LIO_Target_ConfigFS: REGISTER -> Allocated Node:"
+			" %s\n", name);
+	return &tiqn->tiqn_wwn;
+}
+
+void lio_target_call_coredeltiqn(
+	struct se_wwn *wwn)
+{
+	struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn);
+	struct config_item *df_item;
+	struct config_group *stats_cg;
+	int i;
+
+	stats_cg = &tiqn->tiqn_wwn.fabric_stat_group;
+	for (i = 0; stats_cg->default_groups[i]; i++) {
+		df_item = &stats_cg->default_groups[i]->cg_item;
+		stats_cg->default_groups[i] = NULL;
+		config_item_put(df_item);
+	}
+	kfree(stats_cg->default_groups);
+
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> %s\n",
+			tiqn->tiqn);
+	printk(KERN_INFO "LIO_Target_ConfigFS: DEREGISTER -> Releasing"
+			" core_del_tiqn()\n");
+	core_del_tiqn(tiqn);
+}
+
+/* End LIO-Target TIQN struct config_item lio_target_cit */
+
+/* Start lio_target_discovery_auth_cit */
+
+#define DEF_DISC_AUTH_STR(name, flags)					\
+	__DEF_NACL_AUTH_STR(disc, name, flags)				\
+static ssize_t iscsi_disc_show_##name(					\
+	struct target_fabric_configfs *tf,				\
+	char *page)							\
+{									\
+	return __iscsi_disc_show_##name(&iscsi_global->discovery_acl,	\
+		page);							\
+}									\
+static ssize_t iscsi_disc_store_##name(					\
+	struct target_fabric_configfs *tf,				\
+	const char *page,						\
+	size_t count)							\
+{									\
+	return __iscsi_disc_store_##name(&iscsi_global->discovery_acl,	\
+		page, count);						\
+}
+
+#define DEF_DISC_AUTH_INT(name)						\
+	__DEF_NACL_AUTH_INT(disc, name)					\
+static ssize_t iscsi_disc_show_##name(					\
+	struct target_fabric_configfs *tf,				\
+	char *page)							\
+{									\
+	return __iscsi_disc_show_##name(&iscsi_global->discovery_acl,	\
+			page);						\
+}
+
+#define DISC_AUTH_ATTR(_name, _mode) TF_DISC_ATTR(iscsi, _name, _mode)
+#define DISC_AUTH_ATTR_RO(_name) TF_DISC_ATTR_RO(iscsi, _name)
+
+/*
+ * One-way authentication userid
+ */
+DEF_DISC_AUTH_STR(userid, NAF_USERID_SET);
+DISC_AUTH_ATTR(userid, S_IRUGO | S_IWUSR);
+/*
+ * One-way authentication password
+ */
+DEF_DISC_AUTH_STR(password, NAF_PASSWORD_SET);
+DISC_AUTH_ATTR(password, S_IRUGO | S_IWUSR);
+/*
+ * Enforce mutual authentication
+ */
+DEF_DISC_AUTH_INT(authenticate_target);
+DISC_AUTH_ATTR_RO(authenticate_target);
+/*
+ * Mutual authentication userid
+ */
+DEF_DISC_AUTH_STR(userid_mutual, NAF_USERID_IN_SET);
+DISC_AUTH_ATTR(userid_mutual, S_IRUGO | S_IWUSR);
+/*
+ * Mutual authentication password
+ */
+DEF_DISC_AUTH_STR(password_mutual, NAF_PASSWORD_IN_SET);
+DISC_AUTH_ATTR(password_mutual, S_IRUGO | S_IWUSR);
+
+/*
+ * enforce_discovery_auth
+ */
+static ssize_t iscsi_disc_show_enforce_discovery_auth(
+	struct target_fabric_configfs *tf,
+	char *page)
+{
+	struct iscsi_node_auth *discovery_auth = &iscsi_global->discovery_acl.node_auth;
+
+	return sprintf(page, "%d\n", discovery_auth->enforce_discovery_auth);
+}
+
+static ssize_t iscsi_disc_store_enforce_discovery_auth(
+	struct target_fabric_configfs *tf,
+	const char *page,
+	size_t count)
+{
+	struct iscsi_param *param;
+	struct iscsi_portal_group *discovery_tpg = iscsi_global->discovery_tpg;
+	char *endptr;
+	u32 op;
+
+	op = simple_strtoul(page, &endptr, 0);
+	if ((op != 1) && (op != 0)) {
+		printk(KERN_ERR "Illegal value for enforce_discovery_auth:"
+				" %u\n", op);
+		return -EINVAL;
+	}
+
+	if (!(discovery_tpg)) {
+		printk(KERN_ERR "iscsi_global->discovery_tpg is NULL\n");
+		return -EINVAL;
+	}
+
+	param = iscsi_find_param_from_key(AUTHMETHOD,
+				discovery_tpg->param_list);
+	if (!(param))
+		return -EINVAL;
+
+	if (op) {
+		/*
+		 * Reset the AuthMethod key to CHAP.
+		 */
+		if (iscsi_update_param_value(param, CHAP) < 0)
+			return -EINVAL;
+
+		discovery_tpg->tpg_attrib.authentication = 1;
+		iscsi_global->discovery_acl.node_auth.enforce_discovery_auth = 1;
+		printk(KERN_INFO "LIO-CORE[0] Successfully enabled"
+			" authentication enforcement for iSCSI"
+			" Discovery TPG\n");
+	} else {
+		/*
+		 * Reset the AuthMethod key to CHAP,None
+		 */
+		if (iscsi_update_param_value(param, "CHAP,None") < 0)
+			return -EINVAL;
+
+		discovery_tpg->tpg_attrib.authentication = 0;
+		iscsi_global->discovery_acl.node_auth.enforce_discovery_auth = 0;
+		printk(KERN_INFO "LIO-CORE[0] Successfully disabled"
+			" authentication enforcement for iSCSI"
+			" Discovery TPG\n");
+	}
+
+	return count;
+}
+
+DISC_AUTH_ATTR(enforce_discovery_auth, S_IRUGO | S_IWUSR);
+
+static struct configfs_attribute *lio_target_discovery_auth_attrs[] = {
+	&iscsi_disc_userid.attr,
+	&iscsi_disc_password.attr,
+	&iscsi_disc_authenticate_target.attr,
+	&iscsi_disc_userid_mutual.attr,
+	&iscsi_disc_password_mutual.attr,
+	&iscsi_disc_enforce_discovery_auth.attr,
+	NULL,
+};
+
+/* End lio_target_discovery_auth_cit */
+
+int iscsi_target_register_configfs(void)
+{
+	struct target_fabric_configfs *fabric;
+	int ret;
+
+	lio_target_fabric_configfs = NULL;
+	fabric = target_fabric_configfs_init(THIS_MODULE, "iscsi");
+	if (!(fabric)) {
+		printk(KERN_ERR "target_fabric_configfs_init() for"
+				" LIO-Target failed!\n");
+		return -1;
+	}
+	/*
+	 * Setup the fabric API of function pointers used by target_core_mod.
+	 */
+	fabric->tf_ops.get_fabric_name = &iscsi_get_fabric_name;
+	fabric->tf_ops.get_fabric_proto_ident = &iscsi_get_fabric_proto_ident;
+	fabric->tf_ops.tpg_get_wwn = &lio_tpg_get_endpoint_wwn;
+	fabric->tf_ops.tpg_get_tag = &lio_tpg_get_tag;
+	fabric->tf_ops.tpg_get_default_depth = &lio_tpg_get_default_depth;
+	fabric->tf_ops.tpg_get_pr_transport_id = &iscsi_get_pr_transport_id;
+	fabric->tf_ops.tpg_get_pr_transport_id_len =
+				&iscsi_get_pr_transport_id_len;
+	fabric->tf_ops.tpg_parse_pr_out_transport_id =
+				&iscsi_parse_pr_out_transport_id;
+	fabric->tf_ops.tpg_check_demo_mode = &lio_tpg_check_demo_mode;
+	fabric->tf_ops.tpg_check_demo_mode_cache =
+				&lio_tpg_check_demo_mode_cache;
+	fabric->tf_ops.tpg_check_demo_mode_write_protect =
+				&lio_tpg_check_demo_mode_write_protect;
+	fabric->tf_ops.tpg_check_prod_mode_write_protect =
+				&lio_tpg_check_prod_mode_write_protect;
+	fabric->tf_ops.tpg_alloc_fabric_acl = &lio_tpg_alloc_fabric_acl;
+	fabric->tf_ops.tpg_release_fabric_acl = &lio_tpg_release_fabric_acl;
+	fabric->tf_ops.tpg_get_inst_index = &lio_tpg_get_inst_index;
+	/*
+	 * Use our local iscsi_allocate_iovecs_for_cmd() for the extra
+	 * callback in transport_generic_new_cmd() to allocate
+	 * iscsi_cmd->iov_data[] for Linux/Net kernel sockets operations.
+	 */
+	fabric->tf_ops.alloc_cmd_iovecs = &iscsi_allocate_iovecs_for_cmd;
+	fabric->tf_ops.release_cmd_to_pool = &lio_release_cmd_to_pool;
+	fabric->tf_ops.release_cmd_direct = &lio_release_cmd_direct;
+	fabric->tf_ops.shutdown_session = &lio_tpg_shutdown_session;
+	fabric->tf_ops.close_session = &lio_tpg_close_session;
+	fabric->tf_ops.stop_session = &lio_tpg_stop_session;
+	fabric->tf_ops.fall_back_to_erl0 = &lio_tpg_fall_back_to_erl0;
+	fabric->tf_ops.sess_logged_in = &lio_sess_logged_in;
+	fabric->tf_ops.sess_get_index = &lio_sess_get_index;
+	fabric->tf_ops.sess_get_initiator_sid = &lio_sess_get_initiator_sid;
+	fabric->tf_ops.write_pending = &lio_write_pending;
+	fabric->tf_ops.write_pending_status = &lio_write_pending_status;
+	fabric->tf_ops.set_default_node_attributes =
+				&lio_set_default_node_attributes;
+	fabric->tf_ops.get_task_tag = &iscsi_get_task_tag;
+	fabric->tf_ops.get_cmd_state = &iscsi_get_cmd_state;
+	fabric->tf_ops.new_cmd_failure = &iscsi_new_cmd_failure;
+	fabric->tf_ops.queue_data_in = &lio_queue_data_in;
+	fabric->tf_ops.queue_status = &lio_queue_status;
+	fabric->tf_ops.queue_tm_rsp = &lio_queue_tm_rsp;
+	fabric->tf_ops.set_fabric_sense_len = &lio_set_fabric_sense_len;
+	fabric->tf_ops.get_fabric_sense_len = &lio_get_fabric_sense_len;
+	fabric->tf_ops.is_state_remove = &iscsi_is_state_remove;
+	fabric->tf_ops.pack_lun = &iscsi_pack_lun;
+	/*
+	 * Setup function pointers for generic logic in target_core_fabric_configfs.c
+	 */
+	fabric->tf_ops.fabric_make_wwn = &lio_target_call_coreaddtiqn;
+	fabric->tf_ops.fabric_drop_wwn = &lio_target_call_coredeltiqn;
+	fabric->tf_ops.fabric_make_tpg = &lio_target_tiqn_addtpg;
+	fabric->tf_ops.fabric_drop_tpg = &lio_target_tiqn_deltpg;
+	fabric->tf_ops.fabric_post_link	= NULL;
+	fabric->tf_ops.fabric_pre_unlink = NULL;
+	fabric->tf_ops.fabric_make_np = &lio_target_call_addnptotpg;
+	fabric->tf_ops.fabric_drop_np = &lio_target_call_delnpfromtpg;
+	fabric->tf_ops.fabric_make_nodeacl = &lio_target_make_nodeacl;
+	fabric->tf_ops.fabric_drop_nodeacl = &lio_target_drop_nodeacl;
+	/*
+	 * Setup default attribute lists for various fabric->tf_cit_tmpl
+	 * struct config_item_type's
+	 */
+	TF_CIT_TMPL(fabric)->tfc_discovery_cit.ct_attrs = lio_target_discovery_auth_attrs;
+	TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = lio_target_wwn_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = lio_target_tpg_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = lio_target_tpg_attrib_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = lio_target_tpg_param_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = lio_target_portal_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = lio_target_initiator_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = lio_target_nacl_attrib_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = lio_target_nacl_auth_attrs;
+	TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = lio_target_nacl_param_attrs;
+
+	ret = target_fabric_configfs_register(fabric);
+	if (ret < 0) {
+		printk(KERN_ERR "target_fabric_configfs_register() for"
+				" LIO-Target failed!\n");
+		target_fabric_configfs_free(fabric);
+		return -1;
+	}
+
+	lio_target_fabric_configfs = fabric;
+	printk(KERN_INFO "LIO_TARGET[0] - Set fabric ->"
+			" lio_target_fabric_configfs\n");
+	return 0;
+}
+
+
+void iscsi_target_deregister_configfs(void)
+{
+	if (!(lio_target_fabric_configfs))
+		return;
+	/*
+	 * Shutdown discovery sessions and disable discovery TPG
+	 */
+	if (iscsi_global->discovery_tpg)
+		iscsi_tpg_disable_portal_group(iscsi_global->discovery_tpg, 1);
+
+	target_fabric_configfs_deregister(lio_target_fabric_configfs);
+	lio_target_fabric_configfs = NULL;
+	printk(KERN_INFO "LIO_TARGET[0] - Cleared"
+				" lio_target_fabric_configfs\n");
+}
diff --git a/drivers/target/iscsi/iscsi_target_configfs.h b/drivers/target/iscsi/iscsi_target_configfs.h
new file mode 100644
index 0000000..52c5123
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_configfs.h
@@ -0,0 +1,9 @@
+#ifndef ISCSI_TARGET_CONFIGFS_H
+#define ISCSI_TARGET_CONFIGFS_H
+
+extern int iscsi_target_register_configfs(void);
+extern void iscsi_target_deregister_configfs(void);
+
+extern struct kmem_cache *lio_tpg_cache;
+
+#endif /* ISCSI_TARGET_CONFIGFS_H */
diff --git a/drivers/target/iscsi/iscsi_target_nodeattrib.c b/drivers/target/iscsi/iscsi_target_nodeattrib.c
new file mode 100644
index 0000000..23aa7e5
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nodeattrib.c
@@ -0,0 +1,300 @@
+/*******************************************************************************
+ * This file contains the main functions related to Initiator Node Attributes.
+ *
+ * Copyright (c) 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2006-2007 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_nodeattrib.h"
+
+static inline char *iscsi_na_get_initiatorname(
+	struct iscsi_node_acl *nacl)
+{
+	struct se_node_acl *se_nacl = &nacl->se_node_acl;
+
+	return &se_nacl->initiatorname[0];
+}
+
+/*	iscsi_set_default_node_attribues():
+ *
+ *
+ */
+void iscsi_set_default_node_attribues(
+	struct iscsi_node_acl *acl)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	a->dataout_timeout = NA_DATAOUT_TIMEOUT;
+	a->dataout_timeout_retries = NA_DATAOUT_TIMEOUT_RETRIES;
+	a->nopin_timeout = NA_NOPIN_TIMEOUT;
+	a->nopin_response_timeout = NA_NOPIN_RESPONSE_TIMEOUT;
+	a->random_datain_pdu_offsets = NA_RANDOM_DATAIN_PDU_OFFSETS;
+	a->random_datain_seq_offsets = NA_RANDOM_DATAIN_SEQ_OFFSETS;
+	a->random_r2t_offsets = NA_RANDOM_R2T_OFFSETS;
+	a->default_erl = NA_DEFAULT_ERL;
+}
+
+/*	iscsi_na_dataout_timeout():
+ *
+ *
+ */
+extern int iscsi_na_dataout_timeout(
+	struct iscsi_node_acl *acl,
+	u32 dataout_timeout)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (dataout_timeout > NA_DATAOUT_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested DataOut Timeout %u larger than"
+			" maximum %u\n", dataout_timeout,
+			NA_DATAOUT_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (dataout_timeout < NA_DATAOUT_TIMEOUT_MIX) {
+		printk(KERN_ERR "Requested DataOut Timeout %u smaller than"
+			" minimum %u\n", dataout_timeout,
+			NA_DATAOUT_TIMEOUT_MIX);
+		return -EINVAL;
+	}
+
+	a->dataout_timeout = dataout_timeout;
+	TRACE(TRACE_NODEATTRIB, "Set DataOut Timeout to %u for Initiator Node"
+		" %s\n", a->dataout_timeout, iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_dataout_timeout_retries():
+ *
+ *
+ */
+extern int iscsi_na_dataout_timeout_retries(
+	struct iscsi_node_acl *acl,
+	u32 dataout_timeout_retries)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (dataout_timeout_retries > NA_DATAOUT_TIMEOUT_RETRIES_MAX) {
+		printk(KERN_ERR "Requested DataOut Timeout Retries %u larger"
+			" than maximum %u\n", dataout_timeout_retries,
+				NA_DATAOUT_TIMEOUT_RETRIES_MAX);
+		return -EINVAL;
+	} else if (dataout_timeout_retries < NA_DATAOUT_TIMEOUT_RETRIES_MIN) {
+		printk(KERN_ERR "Requested DataOut Timeout Retries %u smaller"
+			" than minimum %u\n", dataout_timeout_retries,
+				NA_DATAOUT_TIMEOUT_RETRIES_MIN);
+		return -EINVAL;
+	}
+
+	a->dataout_timeout_retries = dataout_timeout_retries;
+	TRACE(TRACE_NODEATTRIB, "Set DataOut Timeout Retries to %u for"
+		" Initiator Node %s\n", a->dataout_timeout_retries,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_nopin_timeout():
+ *
+ *
+ */
+extern int iscsi_na_nopin_timeout(
+	struct iscsi_node_acl *acl,
+	u32 nopin_timeout)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+	struct iscsi_session *sess;
+	struct iscsi_conn *conn;
+	struct se_node_acl *se_nacl = &a->nacl->se_node_acl;
+	struct se_session *se_sess;
+	u32 orig_nopin_timeout = a->nopin_timeout;
+
+	if (nopin_timeout > NA_NOPIN_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested NopIn Timeout %u larger than maximum"
+			" %u\n", nopin_timeout, NA_NOPIN_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if ((nopin_timeout < NA_NOPIN_TIMEOUT_MIN) &&
+		   (nopin_timeout != 0)) {
+		printk(KERN_ERR "Requested NopIn Timeout %u smaller than"
+			" minimum %u and not 0\n", nopin_timeout,
+			NA_NOPIN_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->nopin_timeout = nopin_timeout;
+	TRACE(TRACE_NODEATTRIB, "Set NopIn Timeout to %u for Initiator"
+		" Node %s\n", a->nopin_timeout,
+		iscsi_na_get_initiatorname(acl));
+	/*
+	 * Reenable disabled nopin_timeout timer for all iSCSI connections.
+	 */
+	if (!(orig_nopin_timeout)) {
+		spin_lock_bh(&se_nacl->nacl_sess_lock);
+		se_sess = se_nacl->nacl_sess;
+		if (se_sess) {
+			sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+			spin_lock(&sess->conn_lock);
+			list_for_each_entry(conn, &sess->sess_conn_list,
+					conn_list) {
+				if (conn->conn_state !=
+						TARG_CONN_STATE_LOGGED_IN)
+					continue;
+
+				spin_lock(&conn->nopin_timer_lock);
+				__iscsi_start_nopin_timer(conn);
+				spin_unlock(&conn->nopin_timer_lock);
+			}
+			spin_unlock(&sess->conn_lock);
+		}
+		spin_unlock_bh(&se_nacl->nacl_sess_lock);
+	}
+
+	return 0;
+}
+
+/*	iscsi_na_nopin_response_timeout():
+ *
+ *
+ */
+extern int iscsi_na_nopin_response_timeout(
+	struct iscsi_node_acl *acl,
+	u32 nopin_response_timeout)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (nopin_response_timeout > NA_NOPIN_RESPONSE_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested NopIn Response Timeout %u larger"
+			" than maximum %u\n", nopin_response_timeout,
+				NA_NOPIN_RESPONSE_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (nopin_response_timeout < NA_NOPIN_RESPONSE_TIMEOUT_MIN) {
+		printk(KERN_ERR "Requested NopIn Response Timeout %u smaller"
+			" than minimum %u\n", nopin_response_timeout,
+				NA_NOPIN_RESPONSE_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->nopin_response_timeout = nopin_response_timeout;
+	TRACE(TRACE_NODEATTRIB, "Set NopIn Response Timeout to %u for"
+		" Initiator Node %s\n", a->nopin_response_timeout,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_random_datain_pdu_offsets():
+ *
+ *
+ */
+extern int iscsi_na_random_datain_pdu_offsets(
+	struct iscsi_node_acl *acl,
+	u32 random_datain_pdu_offsets)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (random_datain_pdu_offsets != 0 && random_datain_pdu_offsets != 1) {
+		printk(KERN_ERR "Requested Random DataIN PDU Offsets: %u not"
+			" 0 or 1\n", random_datain_pdu_offsets);
+		return -EINVAL;
+	}
+
+	a->random_datain_pdu_offsets = random_datain_pdu_offsets;
+	TRACE(TRACE_NODEATTRIB, "Set Random DataIN PDU Offsets to %u for"
+		" Initiator Node %s\n", a->random_datain_pdu_offsets,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_random_datain_seq_offsets():
+ *
+ *
+ */
+extern int iscsi_na_random_datain_seq_offsets(
+	struct iscsi_node_acl *acl,
+	u32 random_datain_seq_offsets)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (random_datain_seq_offsets != 0 && random_datain_seq_offsets != 1) {
+		printk(KERN_ERR "Requested Random DataIN Sequence Offsets: %u"
+			" not 0 or 1\n", random_datain_seq_offsets);
+		return -EINVAL;
+	}
+
+	a->random_datain_seq_offsets = random_datain_seq_offsets;
+	TRACE(TRACE_NODEATTRIB, "Set Random DataIN Sequence Offsets to %u for"
+		" Initiator Node %s\n", a->random_datain_seq_offsets,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+/*	iscsi_na_random_r2t_offsets():
+ *
+ *
+ */
+extern int iscsi_na_random_r2t_offsets(
+	struct iscsi_node_acl *acl,
+	u32 random_r2t_offsets)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (random_r2t_offsets != 0 && random_r2t_offsets != 1) {
+		printk(KERN_ERR "Requested Random R2T Offsets: %u not"
+			" 0 or 1\n", random_r2t_offsets);
+		return -EINVAL;
+	}
+
+	a->random_r2t_offsets = random_r2t_offsets;
+	TRACE(TRACE_NODEATTRIB, "Set Random R2T Offsets to %u for"
+		" Initiator Node %s\n", a->random_r2t_offsets,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
+
+extern int iscsi_na_default_erl(
+	struct iscsi_node_acl *acl,
+	u32 default_erl)
+{
+	struct iscsi_node_attrib *a = &acl->node_attrib;
+
+	if (default_erl != 0 && default_erl != 1 && default_erl != 2) {
+		printk(KERN_ERR "Requested default ERL: %u not 0, 1, or 2\n",
+				default_erl);
+		return -EINVAL;
+	}
+
+	a->default_erl = default_erl;
+	TRACE(TRACE_NODEATTRIB, "Set default ERL to %u for Initiator"
+		" Node %s\n", a->default_erl,
+		iscsi_na_get_initiatorname(acl));
+
+	return 0;
+}
diff --git a/drivers/target/iscsi/iscsi_target_nodeattrib.h b/drivers/target/iscsi/iscsi_target_nodeattrib.h
new file mode 100644
index 0000000..ed5884e
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nodeattrib.h
@@ -0,0 +1,14 @@
+#ifndef ISCSI_TARGET_NODEATTRIB_H
+#define ISCSI_TARGET_NODEATTRIB_H
+
+extern void iscsi_set_default_node_attribues(struct iscsi_node_acl *);
+extern int iscsi_na_dataout_timeout(struct iscsi_node_acl *, u32);
+extern int iscsi_na_dataout_timeout_retries(struct iscsi_node_acl *, u32);
+extern int iscsi_na_nopin_timeout(struct iscsi_node_acl *, u32);
+extern int iscsi_na_nopin_response_timeout(struct iscsi_node_acl *, u32);
+extern int iscsi_na_random_datain_pdu_offsets(struct iscsi_node_acl *, u32);
+extern int iscsi_na_random_datain_seq_offsets(struct iscsi_node_acl *, u32);
+extern int iscsi_na_random_r2t_offsets(struct iscsi_node_acl *, u32);
+extern int iscsi_na_default_erl(struct iscsi_node_acl *, u32);
+
+#endif /* ISCSI_TARGET_NODEATTRIB_H */
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 04/12] iscsi-target: Add configfs fabric dependent statistics
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
                   ` (2 preceding siblings ...)
  2011-03-02  3:33   ` Nicholas A. Bellinger
@ 2011-03-02  3:33 ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds support for iSCSI fabric dependent configfs statistics
using TCM v4 default statistics groups.
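
As an illustration only (directory names match the statistic config groups
wired up in iscsi_target_configfs.c; the parent layout comes from the generic
TCM fabric configfs code, so the exact paths may differ), the new
per-endpoint statistic groups are expected to appear as:

   .../iscsi/$TARGET_IQN/fabric_statistics/iscsi_instance
   .../iscsi/$TARGET_IQN/fabric_statistics/iscsi_sess_err
   .../iscsi/$TARGET_IQN/fabric_statistics/iscsi_tgt_attr
   .../iscsi/$TARGET_IQN/fabric_statistics/iscsi_login_stats
   .../iscsi/$TARGET_IQN/fabric_statistics/iscsi_logout_stats

along with a per-NodeACL iscsi_sess_stats group.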

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_stat.c |  955 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_stat.h |   79 +++
 2 files changed, 1034 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_stat.c
 create mode 100644 drivers/target/iscsi/iscsi_target_stat.h

diff --git a/drivers/target/iscsi/iscsi_target_stat.c b/drivers/target/iscsi/iscsi_target_stat.c
new file mode 100644
index 0000000..bd31792
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_stat.c
@@ -0,0 +1,955 @@
+/*******************************************************************************
+ * Modern ConfigFS group context specific iSCSI statistics based on original
+ * iscsi_target_mib.c code
+ *
+ * Copyright (c) 2011 Rising Tide Systems
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <generated/utsrelease.h>
+#include <linux/utsname.h>
+#include <linux/configfs.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/configfs_macros.h>
+
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_stat.h"
+
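+/*
+ * Fallback matching the INITIAL_JIFFIES definition in include/linux/jiffies.h,
+ * in case it is not visible through the headers included above.
+ */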
+#ifndef INITIAL_JIFFIES
+#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-300*HZ))
+#endif
+
+/* Instance Attributes Table */
+#define ISCSI_INST_NUM_NODES		1
+#define ISCSI_INST_DESCR		"Storage Engine Target"
+#define ISCSI_INST_LAST_FAILURE_TYPE	0
+#define ISCSI_DISCONTINUITY_TIME	0
+
+#define ISCSI_NODE_INDEX		1
+
+#define ISPRINT(a)   ((a >= ' ') && (a <= '~'))
+
+/****************************************************************************
+ * iSCSI MIB Tables
+ ****************************************************************************/
+/*
+ * Instance Attributes Table
+ */
+CONFIGFS_EATTR_STRUCT(iscsi_stat_instance, iscsi_wwn_stat_grps);
+#define ISCSI_STAT_INSTANCE_ATTR(_name, _mode)			\
+static struct iscsi_stat_instance_attribute			\
+			iscsi_stat_instance_##_name =		\
+	__CONFIGFS_EATTR(_name, _mode,				\
+	iscsi_stat_instance_show_attr_##_name,			\
+	iscsi_stat_instance_store_attr_##_name);
+
+#define ISCSI_STAT_INSTANCE_ATTR_RO(_name)			\
+static struct iscsi_stat_instance_attribute			\
+			iscsi_stat_instance_##_name =		\
+	__CONFIGFS_EATTR_RO(_name,				\
+	iscsi_stat_instance_show_attr_##_name);
+
+static ssize_t iscsi_stat_instance_show_attr_inst(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_index);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(inst);
+
+static ssize_t iscsi_stat_instance_show_attr_min_ver(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_DRAFT20_VERSION);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(min_ver);
+
+static ssize_t iscsi_stat_instance_show_attr_max_ver(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_DRAFT20_VERSION);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(max_ver);
+
+static ssize_t iscsi_stat_instance_show_attr_portals(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_num_tpg_nps);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(portals);
+
+static ssize_t iscsi_stat_instance_show_attr_nodes(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_INST_NUM_NODES);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(nodes);
+
+static ssize_t iscsi_stat_instance_show_attr_sessions(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_nsessions);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(sessions);
+
+static ssize_t iscsi_stat_instance_show_attr_fail_sess(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_sess_err_stats *sess_err = &tiqn->sess_err_stats;
+	u32 sess_err_count;
+
+	spin_lock_bh(&sess_err->lock);
+	sess_err_count = (sess_err->digest_errors +
+			  sess_err->cxn_timeout_errors +
+			  sess_err->pdu_format_errors);
+	spin_unlock_bh(&sess_err->lock);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", sess_err_count);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(fail_sess);
+
+static ssize_t iscsi_stat_instance_show_attr_fail_type(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_sess_err_stats *sess_err = &tiqn->sess_err_stats;
+
+	return snprintf(page, PAGE_SIZE, "%u\n",
+			sess_err->last_sess_failure_type);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(fail_type);
+
+static ssize_t iscsi_stat_instance_show_attr_fail_rem_name(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_sess_err_stats *sess_err = &tiqn->sess_err_stats;
+
+	return snprintf(page, PAGE_SIZE, "%s\n",
+			sess_err->last_sess_fail_rem_name[0] ?
+			sess_err->last_sess_fail_rem_name : NONE);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(fail_rem_name);
+
+static ssize_t iscsi_stat_instance_show_attr_disc_time(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_DISCONTINUITY_TIME);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(disc_time);
+
+static ssize_t iscsi_stat_instance_show_attr_description(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%s\n", ISCSI_INST_DESCR);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(description);
+
+static ssize_t iscsi_stat_instance_show_attr_vendor(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%s\n", ISCSI_VENDOR);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(vendor);
+
+static ssize_t iscsi_stat_instance_show_attr_version(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%s on %s/%s\n", ISCSI_VERSION,
+			utsname()->sysname, utsname()->machine);
+}
+ISCSI_STAT_INSTANCE_ATTR_RO(version);
+
+CONFIGFS_EATTR_OPS(iscsi_stat_instance, iscsi_wwn_stat_grps,
+		iscsi_instance_group);
+
+static struct configfs_attribute *iscsi_stat_instance_attrs[] = {
+	&iscsi_stat_instance_inst.attr,
+	&iscsi_stat_instance_min_ver.attr,
+	&iscsi_stat_instance_max_ver.attr,
+	&iscsi_stat_instance_portals.attr,
+	&iscsi_stat_instance_nodes.attr,
+	&iscsi_stat_instance_sessions.attr,
+	&iscsi_stat_instance_fail_sess.attr,
+	&iscsi_stat_instance_fail_type.attr,
+	&iscsi_stat_instance_fail_rem_name.attr,
+	&iscsi_stat_instance_disc_time.attr,
+	&iscsi_stat_instance_description.attr,
+	&iscsi_stat_instance_vendor.attr,
+	&iscsi_stat_instance_version.attr,
+	NULL,
+};
+
+static struct configfs_item_operations iscsi_stat_instance_item_ops = {
+	.show_attribute		= iscsi_stat_instance_attr_show,
+	.store_attribute	= iscsi_stat_instance_attr_store,
+};
+
+struct config_item_type iscsi_stat_instance_cit = {
+	.ct_item_ops		= &iscsi_stat_instance_item_ops,
+	.ct_attrs		= iscsi_stat_instance_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * Instance Session Failure Stats Table
+ */
+CONFIGFS_EATTR_STRUCT(iscsi_stat_sess_err, iscsi_wwn_stat_grps);
+#define ISCSI_STAT_SESS_ERR_ATTR(_name, _mode)			\
+static struct iscsi_stat_sess_err_attribute			\
+			iscsi_stat_sess_err_##_name =		\
+	__CONFIGFS_EATTR(_name, _mode,				\
+	iscsi_stat_sess_err_show_attr_##_name,			\
+	iscsi_stat_sess_err_store_attr_##_name);
+
+#define ISCSI_STAT_SESS_ERR_ATTR_RO(_name)			\
+static struct iscsi_stat_sess_err_attribute			\
+			iscsi_stat_sess_err_##_name =		\
+	__CONFIGFS_EATTR_RO(_name,				\
+	iscsi_stat_sess_err_show_attr_##_name);
+
+static ssize_t iscsi_stat_sess_err_show_attr_inst(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_index);
+}
+ISCSI_STAT_SESS_ERR_ATTR_RO(inst);
+
+static ssize_t iscsi_stat_sess_err_show_attr_digest_errors(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_sess_err_stats *sess_err = &tiqn->sess_err_stats;
+
+	return snprintf(page, PAGE_SIZE, "%u\n", sess_err->digest_errors);
+}
+ISCSI_STAT_SESS_ERR_ATTR_RO(digest_errors);
+
+static ssize_t iscsi_stat_sess_err_show_attr_cxn_errors(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_sess_err_stats *sess_err = &tiqn->sess_err_stats;
+
+	return snprintf(page, PAGE_SIZE, "%u\n", sess_err->cxn_timeout_errors);
+}
+ISCSI_STAT_SESS_ERR_ATTR_RO(cxn_errors);
+
+static ssize_t iscsi_stat_sess_err_show_attr_format_errors(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_sess_err_stats *sess_err = &tiqn->sess_err_stats;
+
+	return snprintf(page, PAGE_SIZE, "%u\n", sess_err->pdu_format_errors);
+}
+ISCSI_STAT_SESS_ERR_ATTR_RO(format_errors);
+
+CONFIGFS_EATTR_OPS(iscsi_stat_sess_err, iscsi_wwn_stat_grps,
+		iscsi_sess_err_group);
+
+static struct configfs_attribute *iscsi_stat_sess_err_attrs[] = {
+	&iscsi_stat_sess_err_inst.attr,
+	&iscsi_stat_sess_err_digest_errors.attr,
+	&iscsi_stat_sess_err_cxn_errors.attr,
+	&iscsi_stat_sess_err_format_errors.attr,
+	NULL,
+};
+
+static struct configfs_item_operations iscsi_stat_sess_err_item_ops = {
+	.show_attribute		= iscsi_stat_sess_err_attr_show,
+	.store_attribute	= iscsi_stat_sess_err_attr_store,
+};
+
+struct config_item_type iscsi_stat_sess_err_cit = {
+	.ct_item_ops		= &iscsi_stat_sess_err_item_ops,
+	.ct_attrs		= iscsi_stat_sess_err_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * Target Attributes Table
+ */
+CONFIGFS_EATTR_STRUCT(iscsi_stat_tgt_attr, iscsi_wwn_stat_grps);
+#define ISCSI_STAT_TGT_ATTR(_name, _mode)			\
+static struct iscsi_stat_tgt_attr_attribute			\
+			iscsi_stat_tgt_attr_##_name =		\
+	__CONFIGFS_EATTR(_name, _mode,				\
+	iscsi_stat_tgt_attr_show_attr_##_name,			\
+	iscsi_stat_tgt_attr_store_attr_##_name);
+
+#define ISCSI_STAT_TGT_ATTR_RO(_name)				\
+static struct iscsi_stat_tgt_attr_attribute			\
+			iscsi_stat_tgt_attr_##_name =		\
+	__CONFIGFS_EATTR_RO(_name,				\
+	iscsi_stat_tgt_attr_show_attr_##_name);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_inst(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_index);
+}
+ISCSI_STAT_TGT_ATTR_RO(inst);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_indx(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_NODE_INDEX);
+}
+ISCSI_STAT_TGT_ATTR_RO(indx);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_login_fails(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	u32 fail_count;
+
+	spin_lock(&lstat->lock);
+	fail_count = (lstat->redirects + lstat->authorize_fails +
+			lstat->authenticate_fails + lstat->negotiate_fails +
+			lstat->other_fails);
+	spin_unlock(&lstat->lock);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", fail_count);
+}
+ISCSI_STAT_TGT_ATTR_RO(login_fails);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_last_fail_time(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	u32 last_fail_time;
+
+	spin_lock(&lstat->lock);
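+	/* Convert jiffies into hundredths of a second since boot. */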
+	last_fail_time = lstat->last_fail_time ?
+			(u32)(((u32)lstat->last_fail_time -
+				INITIAL_JIFFIES) * 100 / HZ) : 0;
+	spin_unlock(&lstat->lock);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", last_fail_time);
+}
+ISCSI_STAT_TGT_ATTR_RO(last_fail_time);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_last_fail_type(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	u32 last_fail_type;
+
+	spin_lock(&lstat->lock);
+	last_fail_type = lstat->last_fail_type;
+	spin_unlock(&lstat->lock);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", last_fail_type);
+}
+ISCSI_STAT_TGT_ATTR_RO(last_fail_type);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_fail_intr_name(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	unsigned char buf[224];
+
+	spin_lock(&lstat->lock);
+	snprintf(buf, 224, "%s", lstat->last_intr_fail_name[0] ?
+				lstat->last_intr_fail_name : NONE);
+	spin_unlock(&lstat->lock);
+
+	return snprintf(page, PAGE_SIZE, "%s\n", buf);
+}
+ISCSI_STAT_TGT_ATTR_RO(fail_intr_name);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_fail_intr_addr_type(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+			struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	unsigned char buf[8];
+
+	spin_lock(&lstat->lock);
+	snprintf(buf, 8, "%s", lstat->last_intr_fail_ip6_addr[0] ?
+				"ipv6" : "ipv4");
+	spin_unlock(&lstat->lock);
+
+	return snprintf(page, PAGE_SIZE, "%s\n", buf);
+}
+ISCSI_STAT_TGT_ATTR_RO(fail_intr_addr_type);
+
+static ssize_t iscsi_stat_tgt_attr_show_attr_fail_intr_addr(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+			struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	unsigned char buf[32];
+
+	spin_lock(&lstat->lock);
+	if (lstat->last_intr_fail_ip6_addr[0])
+		snprintf(buf, 32, "[%s]", lstat->last_intr_fail_ip6_addr);
+	else
+		snprintf(buf, 32, "%08X", lstat->last_intr_fail_addr);
+	spin_unlock(&lstat->lock);
+
+	return snprintf(page, PAGE_SIZE, "%s\n", buf);
+}
+ISCSI_STAT_TGT_ATTR_RO(fail_intr_addr);
+
+CONFIGFS_EATTR_OPS(iscsi_stat_tgt_attr, iscsi_wwn_stat_grps,
+		iscsi_tgt_attr_group);
+
+static struct configfs_attribute *iscsi_stat_tgt_attr_attrs[] = {
+	&iscsi_stat_tgt_attr_inst.attr,
+	&iscsi_stat_tgt_attr_indx.attr,
+	&iscsi_stat_tgt_attr_login_fails.attr,
+	&iscsi_stat_tgt_attr_last_fail_time.attr,
+	&iscsi_stat_tgt_attr_last_fail_type.attr,
+	&iscsi_stat_tgt_attr_fail_intr_name.attr,
+	&iscsi_stat_tgt_attr_fail_intr_addr_type.attr,
+	&iscsi_stat_tgt_attr_fail_intr_addr.attr,
+	NULL,
+};
+
+static struct configfs_item_operations iscsi_stat_tgt_attr_item_ops = {
+	.show_attribute		= iscsi_stat_tgt_attr_attr_show,
+	.store_attribute	= iscsi_stat_tgt_attr_attr_store,
+};
+
+struct config_item_type iscsi_stat_tgt_attr_cit = {
+	.ct_item_ops		= &iscsi_stat_tgt_attr_item_ops,
+	.ct_attrs		= iscsi_stat_tgt_attr_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * Target Login Stats Table
+ */
+CONFIGFS_EATTR_STRUCT(iscsi_stat_login, iscsi_wwn_stat_grps);
+#define ISCSI_STAT_LOGIN(_name, _mode)				\
+static struct iscsi_stat_login_attribute			\
+			iscsi_stat_login_##_name =		\
+	__CONFIGFS_EATTR(_name, _mode,				\
+	iscsi_stat_login_show_attr_##_name,			\
+	iscsi_stat_login_store_attr_##_name);
+
+#define ISCSI_STAT_LOGIN_RO(_name)				\
+static struct iscsi_stat_login_attribute			\
+			iscsi_stat_login_##_name =		\
+	__CONFIGFS_EATTR_RO(_name,				\
+	iscsi_stat_login_show_attr_##_name);
+
+static ssize_t iscsi_stat_login_show_attr_inst(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_index);
+}
+ISCSI_STAT_LOGIN_RO(inst);
+
+static ssize_t iscsi_stat_login_show_attr_indx(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_NODE_INDEX);
+}
+ISCSI_STAT_LOGIN_RO(indx);
+
+static ssize_t iscsi_stat_login_show_attr_accepts(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	ssize_t ret;
+
+	spin_lock(&lstat->lock);
+	ret = snprintf(page, PAGE_SIZE, "%u\n", lstat->accepts);
+	spin_unlock(&lstat->lock);
+
+	return ret;
+}
+ISCSI_STAT_LOGIN_RO(accepts);
+
+static ssize_t iscsi_stat_login_show_attr_other_fails(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	ssize_t ret;
+
+	spin_lock(&lstat->lock);
+	ret = snprintf(page, PAGE_SIZE, "%u\n", lstat->other_fails);
+	spin_unlock(&lstat->lock);
+
+	return ret;
+}
+ISCSI_STAT_LOGIN_RO(other_fails);
+
+static ssize_t iscsi_stat_login_show_attr_redirects(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	ssize_t ret;
+
+	spin_lock(&lstat->lock);
+	ret = snprintf(page, PAGE_SIZE, "%u\n", lstat->redirects);
+	spin_unlock(&lstat->lock);
+
+	return ret;
+}
+ISCSI_STAT_LOGIN_RO(redirects);
+
+static ssize_t iscsi_stat_login_show_attr_authorize_fails(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	ssize_t ret;
+
+	spin_lock(&lstat->lock);
+	ret = snprintf(page, PAGE_SIZE, "%u\n", lstat->authorize_fails);
+	spin_unlock(&lstat->lock);
+
+	return ret;
+}
+ISCSI_STAT_LOGIN_RO(authorize_fails);
+
+static ssize_t iscsi_stat_login_show_attr_authenticate_fails(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	ssize_t ret;
+
+	spin_lock(&lstat->lock);
+	ret = snprintf(page, PAGE_SIZE, "%u\n", lstat->authenticate_fails);
+	spin_unlock(&lstat->lock);
+
+	return ret;
+}
+ISCSI_STAT_LOGIN_RO(authenticate_fails);
+
+static ssize_t iscsi_stat_login_show_attr_negotiate_fails(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+				struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_login_stats *lstat = &tiqn->login_stats;
+	ssize_t ret;
+
+	spin_lock(&lstat->lock);
+	ret = snprintf(page, PAGE_SIZE, "%u\n", lstat->negotiate_fails);
+	spin_unlock(&lstat->lock);
+
+	return ret;
+}
+ISCSI_STAT_LOGIN_RO(negotiate_fails);
+
+CONFIGFS_EATTR_OPS(iscsi_stat_login, iscsi_wwn_stat_grps,
+		iscsi_login_stats_group);
+
+static struct configfs_attribute *iscsi_stat_login_stats_attrs[] = {
+	&iscsi_stat_login_inst.attr,
+	&iscsi_stat_login_indx.attr,
+	&iscsi_stat_login_accepts.attr,
+	&iscsi_stat_login_other_fails.attr,
+	&iscsi_stat_login_redirects.attr,
+	&iscsi_stat_login_authorize_fails.attr,
+	&iscsi_stat_login_authenticate_fails.attr,
+	&iscsi_stat_login_negotiate_fails.attr,
+	NULL,
+};
+
+static struct configfs_item_operations iscsi_stat_login_stats_item_ops = {
+	.show_attribute		= iscsi_stat_login_attr_show,
+	.store_attribute	= iscsi_stat_login_attr_store,
+};
+
+struct config_item_type iscsi_stat_login_cit = {
+	.ct_item_ops		= &iscsi_stat_login_stats_item_ops,
+	.ct_attrs		= iscsi_stat_login_stats_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * Target Logout Stats Table
+ */
+
+CONFIGFS_EATTR_STRUCT(iscsi_stat_logout, iscsi_wwn_stat_grps);
+#define ISCSI_STAT_LOGOUT(_name, _mode)				\
+static struct iscsi_stat_logout_attribute			\
+			iscsi_stat_logout_##_name =		\
+	__CONFIGFS_EATTR(_name, _mode,				\
+	iscsi_stat_logout_show_attr_##_name,			\
+	iscsi_stat_logout_store_attr_##_name);
+
+#define ISCSI_STAT_LOGOUT_RO(_name)				\
+static struct iscsi_stat_logout_attribute			\
+			iscsi_stat_logout_##_name =		\
+	__CONFIGFS_EATTR_RO(_name,				\
+	iscsi_stat_logout_show_attr_##_name);
+
+static ssize_t iscsi_stat_logout_show_attr_inst(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+			struct iscsi_tiqn, tiqn_stat_grps);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_index);
+}
+ISCSI_STAT_LOGOUT_RO(inst);
+
+static ssize_t iscsi_stat_logout_show_attr_indx(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	return snprintf(page, PAGE_SIZE, "%u\n", ISCSI_NODE_INDEX);
+}
+ISCSI_STAT_LOGOUT_RO(indx);
+
+static ssize_t iscsi_stat_logout_show_attr_normal_logouts(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+			struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_logout_stats *lstats = &tiqn->logout_stats;
+
+	return snprintf(page, PAGE_SIZE, "%u\n", lstats->normal_logouts);
+}
+ISCSI_STAT_LOGOUT_RO(normal_logouts);
+
+static ssize_t iscsi_stat_logout_show_attr_abnormal_logouts(
+	struct iscsi_wwn_stat_grps *igrps, char *page)
+{
+	struct iscsi_tiqn *tiqn = container_of(igrps,
+			struct iscsi_tiqn, tiqn_stat_grps);
+	struct iscsi_logout_stats *lstats = &tiqn->logout_stats;
+
+	return snprintf(page, PAGE_SIZE, "%u\n", lstats->abnormal_logouts);
+}
+ISCSI_STAT_LOGOUT_RO(abnormal_logouts);
+
+CONFIGFS_EATTR_OPS(iscsi_stat_logout, iscsi_wwn_stat_grps,
+		iscsi_logout_stats_group);
+
+static struct configfs_attribute *iscsi_stat_logout_stats_attrs[] = {
+	&iscsi_stat_logout_inst.attr,
+	&iscsi_stat_logout_indx.attr,
+	&iscsi_stat_logout_normal_logouts.attr,
+	&iscsi_stat_logout_abnormal_logouts.attr,
+	NULL,
+};
+
+static struct configfs_item_operations iscsi_stat_logout_stats_item_ops = {
+	.show_attribute		= iscsi_stat_logout_attr_show,
+	.store_attribute	= iscsi_stat_logout_attr_store,
+};
+
+struct config_item_type iscsi_stat_logout_cit = {
+	.ct_item_ops		= &iscsi_stat_logout_stats_item_ops,
+	.ct_attrs		= iscsi_stat_logout_stats_attrs,
+	.ct_owner		= THIS_MODULE,
+};
+
+/*
+ * Session Stats Table
+ */
+
+CONFIGFS_EATTR_STRUCT(iscsi_stat_sess, iscsi_node_stat_grps);
+#define ISCSI_STAT_SESS(_name, _mode)				\
+static struct iscsi_stat_sess_attribute				\
+			iscsi_stat_sess_##_name =		\
+	__CONFIGFS_EATTR(_name, _mode,				\
+	iscsi_stat_sess_show_attr_##_name,			\
+	iscsi_stat_sess_store_attr_##_name);
+
+#define ISCSI_STAT_SESS_RO(_name)				\
+static struct iscsi_stat_sess_attribute				\
+			iscsi_stat_sess_##_name =		\
+	__CONFIGFS_EATTR_RO(_name,				\
+	iscsi_stat_sess_show_attr_##_name);
+
+static ssize_t iscsi_stat_sess_show_attr_inst(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_wwn *wwn = acl->se_node_acl.se_tpg->se_tpg_wwn;
+	struct iscsi_tiqn *tiqn = container_of(wwn,
+			struct iscsi_tiqn, tiqn_wwn);
+
+	return snprintf(page, PAGE_SIZE, "%u\n", tiqn->tiqn_index);
+}
+ISCSI_STAT_SESS_RO(inst);
+
+static ssize_t iscsi_stat_sess_show_attr_node(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%u\n",
+				sess->sess_ops->SessionType ? 0 : ISCSI_NODE_INDEX);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(node);
+
+static ssize_t iscsi_stat_sess_show_attr_indx(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%u\n",
+					sess->session_index);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(indx);
+
+static ssize_t iscsi_stat_sess_show_attr_cmd_pdus(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%u\n", sess->cmd_pdus);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(cmd_pdus);
+
+static ssize_t iscsi_stat_sess_show_attr_rsp_pdus(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%u\n", sess->rsp_pdus);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(rsp_pdus);
+
+static ssize_t iscsi_stat_sess_show_attr_txdata_octs(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%llu\n",
+				(unsigned long long)sess->tx_data_octets);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(txdata_octs);
+
+static ssize_t iscsi_stat_sess_show_attr_rxdata_octs(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%llu\n",
+				(unsigned long long)sess->rx_data_octets);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(rxdata_octs);
+
+static ssize_t iscsi_stat_sess_show_attr_conn_digest_errors(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%u\n",
+					sess->conn_digest_errors);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(conn_digest_errors);
+
+static ssize_t iscsi_stat_sess_show_attr_conn_timeout_errors(
+	struct iscsi_node_stat_grps *igrps, char *page)
+{
+	struct iscsi_node_acl *acl = container_of(igrps,
+			struct iscsi_node_acl, node_stat_grps);
+	struct se_node_acl *se_nacl = &acl->se_node_acl;
+	struct iscsi_session *sess;
+	struct se_session *se_sess;
+	ssize_t ret = 0;
+
+	spin_lock_bh(&se_nacl->nacl_sess_lock);
+	se_sess = se_nacl->nacl_sess;
+	if (se_sess) {
+		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (sess)
+			ret = snprintf(page, PAGE_SIZE, "%u\n",
+					sess->conn_timeout_errors);
+	}
+	spin_unlock_bh(&se_nacl->nacl_sess_lock);
+
+	return ret;
+}
+ISCSI_STAT_SESS_RO(conn_timeout_errors);
+
+CONFIGFS_EATTR_OPS(iscsi_stat_sess, iscsi_node_stat_grps,
+		iscsi_sess_stats_group);
+
+static struct configfs_attribute *iscsi_stat_sess_stats_attrs[] = {
+	&iscsi_stat_sess_inst.attr,
+	&iscsi_stat_sess_node.attr,
+	&iscsi_stat_sess_indx.attr,
+	&iscsi_stat_sess_cmd_pdus.attr,
+	&iscsi_stat_sess_rsp_pdus.attr,
+	&iscsi_stat_sess_txdata_octs.attr,
+	&iscsi_stat_sess_rxdata_octs.attr,
+	&iscsi_stat_sess_conn_digest_errors.attr,
+	&iscsi_stat_sess_conn_timeout_errors.attr,
+	NULL,
+};
+
+static struct configfs_item_operations iscsi_stat_sess_stats_item_ops = {
+	.show_attribute		= iscsi_stat_sess_attr_show,
+	.store_attribute	= iscsi_stat_sess_attr_store,
+};
+
+struct config_item_type iscsi_stat_sess_cit = {
+	.ct_item_ops		= &iscsi_stat_sess_stats_item_ops,
+	.ct_attrs		= iscsi_stat_sess_stats_attrs,
+	.ct_owner		= THIS_MODULE,
+};
diff --git a/drivers/target/iscsi/iscsi_target_stat.h b/drivers/target/iscsi/iscsi_target_stat.h
new file mode 100644
index 0000000..6b3ddac
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_stat.h
@@ -0,0 +1,79 @@
+#ifndef ISCSI_TARGET_STAT_H
+#define ISCSI_TARGET_STAT_H
+
+/*
+ * For struct iscsi_tiqn->tiqn_wwn default groups
+ */
+extern struct config_item_type iscsi_stat_instance_cit;
+extern struct config_item_type iscsi_stat_sess_err_cit;
+extern struct config_item_type iscsi_stat_tgt_attr_cit;
+extern struct config_item_type iscsi_stat_login_cit;
+extern struct config_item_type iscsi_stat_logout_cit;
+
+/*
+ * For struct iscsi_session->se_sess default groups
+ */
+extern struct config_item_type iscsi_stat_sess_cit;
+
+/* iSCSI session error types */
+#define ISCSI_SESS_ERR_UNKNOWN		0
+#define ISCSI_SESS_ERR_DIGEST		1
+#define ISCSI_SESS_ERR_CXN_TIMEOUT	2
+#define ISCSI_SESS_ERR_PDU_FORMAT	3
+
+/* iSCSI session error stats */
+struct iscsi_sess_err_stats {
+	spinlock_t	lock;
+	u32		digest_errors;
+	u32		cxn_timeout_errors;
+	u32		pdu_format_errors;
+	u32		last_sess_failure_type;
+	char		last_sess_fail_rem_name[224];
+} ____cacheline_aligned;
+
+/* iSCSI login failure types (sub oids) */
+#define ISCSI_LOGIN_FAIL_OTHER		2
+#define ISCSI_LOGIN_FAIL_REDIRECT	3
+#define ISCSI_LOGIN_FAIL_AUTHORIZE	4
+#define ISCSI_LOGIN_FAIL_AUTHENTICATE	5
+#define ISCSI_LOGIN_FAIL_NEGOTIATE	6
+
+/* iSCSI login stats */
+struct iscsi_login_stats {
+	spinlock_t	lock;
+	u32		accepts;
+	u32		other_fails;
+	u32		redirects;
+	u32		authorize_fails;
+	u32		authenticate_fails;
+	u32		negotiate_fails;	/* used for notifications */
+	u64		last_fail_time;		/* time stamp (jiffies) */
+	u32		last_fail_type;
+	u32		last_intr_fail_addr;
+	unsigned char	last_intr_fail_ip6_addr[IPV6_ADDRESS_SPACE];
+	char		last_intr_fail_name[224];
+} ____cacheline_aligned;
+
+/* iSCSI logout stats */
+struct iscsi_logout_stats {
+	spinlock_t	lock;
+	u32		normal_logouts;
+	u32		abnormal_logouts;
+} ____cacheline_aligned;
+
+/* Structures for table index support */
+typedef enum {
+	ISCSI_INST_INDEX,
+	ISCSI_PORTAL_INDEX,
+	ISCSI_TARGET_AUTH_INDEX,
+	ISCSI_SESSION_INDEX,
+	ISCSI_CONNECTION_INDEX,
+	INDEX_TYPE_MAX
+} iscsi_index_t;
+
+struct iscsi_index_table {
+	spinlock_t	lock;
+	u32 		iscsi_mib_index[INDEX_TYPE_MAX];
+} ____cacheline_aligned;
+
+#endif   /*** ISCSI_TARGET_STAT_H ***/
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 05/12] iscsi-target: Add TPG and Device logic
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds the TPG and device logic used for mapping iscsi-target
abstractions on top of TCM v4 struct se_portal_group and struct se_device
abstractions.
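
The core mapping pattern is that the generic struct se_portal_group carries an
opaque fabric pointer which iscsi-target casts back to its own
struct iscsi_portal_group in each fabric callback.  Below is a minimal sketch
of that pattern; the struct layouts are reduced stand-ins for illustration
only (not the real TCM definitions), and example_tpg_get_tag() is a
hypothetical name mirroring the shape of lio_tpg_get_tag() further down:

    /* Reduced stand-in types, for illustration only. */
    struct se_portal_group {
            void *se_tpg_fabric_ptr;        /* fabric private data */
    };

    struct iscsi_portal_group {
            unsigned short tpgt;            /* iSCSI Target Portal Group Tag */
            struct se_portal_group tpg_se_tpg;
    };

    /* Mirrors the shape of lio_tpg_get_tag() in iscsi_target_tpg.c below. */
    static unsigned short example_tpg_get_tag(struct se_portal_group *se_tpg)
    {
            struct iscsi_portal_group *tpg = se_tpg->se_tpg_fabric_ptr;

            return tpg->tpgt;
    }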

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_device.c |  128 +++
 drivers/target/iscsi/iscsi_target_device.h |    9 +
 drivers/target/iscsi/iscsi_target_tpg.c    | 1185 ++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_tpg.h    |   71 ++
 4 files changed, 1393 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_device.c
 create mode 100644 drivers/target/iscsi/iscsi_target_device.h
 create mode 100644 drivers/target/iscsi/iscsi_target_tpg.c
 create mode 100644 drivers/target/iscsi/iscsi_target_tpg.h

diff --git a/drivers/target/iscsi/iscsi_target_device.c b/drivers/target/iscsi/iscsi_target_device.c
new file mode 100644
index 0000000..635f91a
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_device.c
@@ -0,0 +1,128 @@
+/*******************************************************************************
+ * This file contains the iSCSI Virtual Device and Disk Transport
+ * agnostic related functions.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005-2006 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <target/target_core_base.h>
+#include <target/target_core_device.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+
+/*	iscsi_get_lun():
+ *
+ *
+ */
+int iscsi_get_lun_for_tmr(
+	struct iscsi_cmd *cmd,
+	u64 lun)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_portal_group *tpg = ISCSI_TPG_C(conn);
+	u32 unpacked_lun;
+
+	unpacked_lun = iscsi_unpack_lun((unsigned char *)&lun);
+	if (unpacked_lun > (ISCSI_MAX_LUNS_PER_TPG-1)) {
+		printk(KERN_ERR "iSCSI LUN: %u exceeds ISCSI_MAX_LUNS_PER_TPG"
+			"-1: %u for Target Portal Group: %hu\n", unpacked_lun,
+			ISCSI_MAX_LUNS_PER_TPG-1, tpg->tpgt);
+		return -1;
+	}
+
+	return transport_get_lun_for_tmr(SE_CMD(cmd), unpacked_lun);
+}
+
+/*	iscsi_get_lun_for_cmd():
+ *
+ *	Returns (0) on success
+ * 	Returns (< 0) on failure
+ */
+int iscsi_get_lun_for_cmd(
+	struct iscsi_cmd *cmd,
+	unsigned char *cdb,
+	u64 lun)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_portal_group *tpg = ISCSI_TPG_C(conn);
+	u32 unpacked_lun;
+
+	unpacked_lun = iscsi_unpack_lun((unsigned char *)&lun);
+	if (unpacked_lun > (ISCSI_MAX_LUNS_PER_TPG-1)) {
+		printk(KERN_ERR "iSCSI LUN: %u exceeds ISCSI_MAX_LUNS_PER_TPG"
+			"-1: %u for Target Portal Group: %hu\n", unpacked_lun,
+			ISCSI_MAX_LUNS_PER_TPG-1, tpg->tpgt);
+		return -1;
+	}
+
+	return transport_get_lun_for_cmd(SE_CMD(cmd), cdb, unpacked_lun);
+}
+
+/*	iscsi_determine_maxcmdsn():
+ *
+ *
+ */
+void iscsi_determine_maxcmdsn(struct iscsi_session *sess)
+{
+	struct se_node_acl *se_nacl;
+
+	/*
+	 * This is a discovery session, the single queue slot was already
+	 * assigned in iscsi_login_zero_tsih().  Since only Logout and
+	 * Text Opcodes are allowed during discovery we do not have to worry
+	 * about the HBA's queue depth here.
+	 */
+	if (SESS_OPS(sess)->SessionType)
+		return;
+
+	se_nacl = sess->se_sess->se_node_acl;
+
+	/*
+	 * This is a normal session, set the Session's CmdSN window to the
+	 * struct se_node_acl->queue_depth.  The value in struct se_node_acl->queue_depth
+	 * has already been validated as a legal value in
+	 * core_set_queue_depth_for_node().
+	 */
+	sess->cmdsn_window = se_nacl->queue_depth;
+	sess->max_cmd_sn = (sess->max_cmd_sn + se_nacl->queue_depth) - 1;
+}
+
+/*	iscsi_increment_maxcmdsn();
+ *
+ *
+ */
+void iscsi_increment_maxcmdsn(struct iscsi_cmd *cmd, struct iscsi_session *sess)
+{
+	if (cmd->immediate_cmd || cmd->maxcmdsn_inc)
+		return;
+
+	cmd->maxcmdsn_inc = 1;
+
+	spin_lock(&sess->cmdsn_lock);
+	sess->max_cmd_sn += 1;
+	TRACE(TRACE_ISCSI, "Updated MaxCmdSN to 0x%08x\n", sess->max_cmd_sn);
+	spin_unlock(&sess->cmdsn_lock);
+}
diff --git a/drivers/target/iscsi/iscsi_target_device.h b/drivers/target/iscsi/iscsi_target_device.h
new file mode 100644
index 0000000..f69cf52
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_device.h
@@ -0,0 +1,9 @@
+#ifndef ISCSI_TARGET_DEVICE_H
+#define ISCSI_TARGET_DEVICE_H
+
+extern int iscsi_get_lun_for_tmr(struct iscsi_cmd *, u64);
+extern int iscsi_get_lun_for_cmd(struct iscsi_cmd *, unsigned char *, u64);
+extern void iscsi_determine_maxcmdsn(struct iscsi_session *);
+extern void iscsi_increment_maxcmdsn(struct iscsi_cmd *, struct iscsi_session *);
+
+#endif /* ISCSI_TARGET_DEVICE_H */
diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
new file mode 100644
index 0000000..190741a
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_tpg.c
@@ -0,0 +1,1185 @@
+/*******************************************************************************
+ * This file contains iSCSI Target Portal Group related functions.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/ctype.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/target_core_fabric_ops.h>
+#include <target/target_core_configfs.h>
+#include <target/target_core_tpg.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_nodeattrib.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_parameters.h"
+
+char *lio_tpg_get_endpoint_wwn(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return &tpg->tpg_tiqn->tiqn[0];
+}
+
+u16 lio_tpg_get_tag(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return tpg->tpgt;
+}
+
+u32 lio_tpg_get_default_depth(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->default_cmdsn_depth;
+}
+
+int lio_tpg_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			 (struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->generate_node_acls;
+}
+
+int lio_tpg_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->cache_dynamic_acls;
+}
+
+int lio_tpg_check_demo_mode_write_protect(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->demo_mode_write_protect;
+}
+
+int lio_tpg_check_prod_mode_write_protect(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->prod_mode_write_protect;
+}
+
+struct se_node_acl *lio_tpg_alloc_fabric_acl(
+	struct se_portal_group *se_tpg)
+{
+	struct iscsi_node_acl *acl;
+
+	acl = kzalloc(sizeof(struct iscsi_node_acl), GFP_KERNEL);
+	if (!(acl)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_node_acl\n");
+		return NULL;
+	}
+
+	return &acl->se_node_acl;
+}
+
+void lio_tpg_release_fabric_acl(
+	struct se_portal_group *se_tpg,
+	struct se_node_acl *se_acl)
+{
+	struct iscsi_node_acl *acl = container_of(se_acl,
+			struct iscsi_node_acl, se_node_acl);
+	kfree(acl);
+}
+
+/*
+ * Called with spin_lock_bh(struct se_portal_group->session_lock) held.
+ *
+ * Also, this function calls iscsi_inc_session_usage_count() on the
+ * struct iscsi_session in question.
+ */
+int lio_tpg_shutdown_session(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	spin_lock(&sess->conn_lock);
+	if (atomic_read(&sess->session_fall_back_to_erl0) ||
+	    atomic_read(&sess->session_logout) ||
+	    (sess->time2retain_timer_flags & T2R_TF_EXPIRED)) {
+		spin_unlock(&sess->conn_lock);
+		return 0;
+	}
+	atomic_set(&sess->session_reinstatement, 1);
+	spin_unlock(&sess->conn_lock);
+
+	iscsi_inc_session_usage_count(sess);
+	iscsi_stop_time2retain_timer(sess);
+
+	return 1;
+}
+
+/*
+ * Calls iscsi_dec_session_usage_count() as inverse of
+ * lio_tpg_shutdown_session()
+ */
+void lio_tpg_close_session(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+	/*
+	 * If the iSCSI Session for the iSCSI Initiator Node exists,
+	 * forcefully shutdown the iSCSI NEXUS.
+	 */
+	iscsi_stop_session(sess, 1, 1);
+	iscsi_dec_session_usage_count(sess);
+	iscsi_close_session(sess);
+}
+
+void lio_tpg_stop_session(struct se_session *se_sess, int sess_sleep, int conn_sleep)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	iscsi_stop_session(sess, sess_sleep, conn_sleep);
+}
+
+void lio_tpg_fall_back_to_erl0(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	iscsi_fall_back_to_erl0(sess);
+}
+
+u32 lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return tpg->tpg_tiqn->tiqn_index;
+}
+
+void lio_set_default_node_attributes(struct se_node_acl *se_acl)
+{
+	struct iscsi_node_acl *acl = container_of(se_acl, struct iscsi_node_acl,
+				se_node_acl);
+
+	ISCSI_NODE_ATTRIB(acl)->nacl = acl;
+	iscsi_set_default_node_attribues(acl);
+}
+
+struct iscsi_portal_group *core_alloc_portal_group(struct iscsi_tiqn *tiqn, u16 tpgt)
+{
+	struct iscsi_portal_group *tpg;
+
+	tpg = kmem_cache_zalloc(lio_tpg_cache, GFP_KERNEL);
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to get tpg from lio_tpg_cache\n");
+		return NULL;
+	}
+
+	tpg->tpgt = tpgt;
+	tpg->tpg_state = TPG_STATE_FREE;
+	tpg->tpg_tiqn = tiqn;
+	INIT_LIST_HEAD(&tpg->tpg_gnp_list);
+	INIT_LIST_HEAD(&tpg->g_tpg_list);
+	INIT_LIST_HEAD(&tpg->tpg_list);
+	sema_init(&tpg->tpg_access_sem, 1);
+	sema_init(&tpg->np_login_sem, 1);
+	spin_lock_init(&tpg->tpg_state_lock);
+	spin_lock_init(&tpg->tpg_np_lock);
+
+	return tpg;
+}
+
+static void iscsi_set_default_tpg_attribs(struct iscsi_portal_group *);
+
+int core_load_discovery_tpg(void)
+{
+	struct iscsi_param *param;
+	struct iscsi_portal_group *tpg;
+	int ret;
+
+	tpg = core_alloc_portal_group(NULL, 1);
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to allocate struct iscsi_portal_group\n");
+		return -1;
+	}
+
+	ret = core_tpg_register(
+			&lio_target_fabric_configfs->tf_ops,
+			NULL, &tpg->tpg_se_tpg, (void *)tpg,
+			TRANSPORT_TPG_TYPE_DISCOVERY);
+	if (ret < 0) {
+		kfree(tpg);
+		return -1;
+	}
+
+	tpg->sid = 1; /* First Assigned LIO Session ID */
+	iscsi_set_default_tpg_attribs(tpg);
+
+	if (iscsi_create_default_params(&tpg->param_list) < 0)
+		goto out;
+	/*
+	 * By default we disable authentication for discovery sessions,
+	 * this can be changed with:
+	 *
+	 * /sys/kernel/config/target/iscsi/discovery_auth/enforce_discovery_auth
+	 */
+	param = iscsi_find_param_from_key(AUTHMETHOD, tpg->param_list);
+	if (!(param))
+		goto out;
+
+	if (iscsi_update_param_value(param, "CHAP,None") < 0)
+		goto out;
+
+	tpg->tpg_attrib.authentication = 0;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state  = TPG_STATE_ACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	iscsi_global->discovery_tpg = tpg;
+	printk(KERN_INFO "CORE[0] - Allocated Discovery TPG\n");
+
+	return 0;
+out:
+	if (tpg->sid == 1)
+		core_tpg_deregister(&tpg->tpg_se_tpg);
+	kfree(tpg);
+	return -1;
+}
+
+void core_release_discovery_tpg(void)
+{
+	struct iscsi_portal_group *tpg = iscsi_global->discovery_tpg;
+
+	if (!(tpg))
+		return;
+
+	core_tpg_deregister(&tpg->tpg_se_tpg);
+
+	kmem_cache_free(lio_tpg_cache, tpg);
+	iscsi_global->discovery_tpg = NULL;
+}
+
+struct iscsi_portal_group *core_get_tpg_from_np(
+	struct iscsi_tiqn *tiqn,
+	struct iscsi_np *np)
+{
+	struct iscsi_portal_group *tpg = NULL;
+	struct iscsi_tpg_np *tpg_np;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+
+		spin_lock(&tpg->tpg_state_lock);
+		if (tpg->tpg_state == TPG_STATE_FREE) {
+			spin_unlock(&tpg->tpg_state_lock);
+			continue;
+		}
+		spin_unlock(&tpg->tpg_state_lock);
+
+		spin_lock(&tpg->tpg_np_lock);
+		list_for_each_entry(tpg_np, &tpg->tpg_gnp_list, tpg_np_list) {
+			if (tpg_np->tpg_np == np) {
+				spin_unlock(&tpg->tpg_np_lock);
+				spin_unlock(&tiqn->tiqn_tpg_lock);
+				return tpg;
+			}
+		}
+		spin_unlock(&tpg->tpg_np_lock);
+	}
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	return NULL;
+}
+
+int iscsi_get_tpg(
+	struct iscsi_portal_group *tpg)
+{
+	int ret;
+
+	ret = down_interruptible(&tpg->tpg_access_sem);
+	return ((ret != 0) || signal_pending(current)) ? -1 : 0;
+}
+
+/*	iscsi_put_tpg():
+ *
+ *
+ */
+void iscsi_put_tpg(struct iscsi_portal_group *tpg)
+{
+	up(&tpg->tpg_access_sem);
+}
+
+static void iscsi_clear_tpg_np_login_thread(
+	struct iscsi_tpg_np *tpg_np,
+	struct iscsi_portal_group *tpg,
+	int shutdown)
+{
+	if (!tpg_np->tpg_np) {
+		printk(KERN_ERR "struct iscsi_tpg_np->tpg_np is NULL!\n");
+		return;
+	}
+
+	core_reset_np_thread(tpg_np->tpg_np, tpg_np, tpg, shutdown);
+	return;
+}
+
+/*	iscsi_clear_tpg_np_login_threads():
+ *
+ *
+ */
+void iscsi_clear_tpg_np_login_threads(
+	struct iscsi_portal_group *tpg,
+	int shutdown)
+{
+	struct iscsi_tpg_np *tpg_np;
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_for_each_entry(tpg_np, &tpg->tpg_gnp_list, tpg_np_list) {
+		if (!tpg_np->tpg_np) {
+			printk(KERN_ERR "struct iscsi_tpg_np->tpg_np is NULL!\n");
+			continue;
+		}
+		spin_unlock(&tpg->tpg_np_lock);
+		iscsi_clear_tpg_np_login_thread(tpg_np, tpg, shutdown);
+		spin_lock(&tpg->tpg_np_lock);
+	}
+	spin_unlock(&tpg->tpg_np_lock);
+}
+
+/*	iscsi_tpg_dump_params():
+ *
+ *
+ */
+void iscsi_tpg_dump_params(struct iscsi_portal_group *tpg)
+{
+	iscsi_print_params(tpg->param_list);
+}
+
+/*	iscsi_tpg_free_network_portals():
+ *
+ *
+ */
+static void iscsi_tpg_free_network_portals(struct iscsi_portal_group *tpg)
+{
+	struct iscsi_np *np;
+	struct iscsi_tpg_np *tpg_np, *tpg_np_t;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], *ip;
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_for_each_entry_safe(tpg_np, tpg_np_t, &tpg->tpg_gnp_list,
+				tpg_np_list) {
+		np = tpg_np->tpg_np;
+		list_del(&tpg_np->tpg_np_list);
+		tpg->num_tpg_nps--;
+		tpg->tpg_tiqn->tiqn_num_tpg_nps--;
+
+		if (np->np_net_size == IPV6_ADDRESS_SPACE)
+			ip = &np->np_ipv6[0];
+		else {
+			memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+			iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+			ip = &buf_ipv4[0];
+		}
+
+		printk(KERN_INFO "CORE[%s] - Removed Network Portal: %s:%hu,%hu"
+			" on %s on network device: %s\n", tpg->tpg_tiqn->tiqn,
+			ip, np->np_port, tpg->tpgt,
+			(np->np_network_transport == ISCSI_TCP) ?
+			"TCP" : "SCTP",  (strlen(np->np_net_dev)) ?
+			(char *)np->np_net_dev : "None");
+
+		tpg_np->tpg_np = NULL;
+		kfree(tpg_np);
+		spin_unlock(&tpg->tpg_np_lock);
+
+		spin_lock(&np->np_state_lock);
+		np->np_exports--;
+		printk(KERN_INFO "CORE[%s]_TPG[%hu] - Decremented np_exports to %u\n",
+			tpg->tpg_tiqn->tiqn, tpg->tpgt, np->np_exports);
+		spin_unlock(&np->np_state_lock);
+
+		spin_lock(&tpg->tpg_np_lock);
+	}
+	spin_unlock(&tpg->tpg_np_lock);
+}
+
+/*	iscsi_set_default_tpg_attribs():
+ *
+ *
+ */
+static void iscsi_set_default_tpg_attribs(struct iscsi_portal_group *tpg)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	a->authentication = TA_AUTHENTICATION;
+	a->login_timeout = TA_LOGIN_TIMEOUT;
+	a->netif_timeout = TA_NETIF_TIMEOUT;
+	a->default_cmdsn_depth = TA_DEFAULT_CMDSN_DEPTH;
+	a->generate_node_acls = TA_GENERATE_NODE_ACLS;
+	a->cache_dynamic_acls = TA_CACHE_DYNAMIC_ACLS;
+	a->demo_mode_write_protect = TA_DEMO_MODE_WRITE_PROTECT;
+	a->prod_mode_write_protect = TA_PROD_MODE_WRITE_PROTECT;
+	a->crc32c_x86_offload = TA_CRC32C_X86_OFFLOAD;
+	a->cache_core_nps = TA_CACHE_CORE_NPS;
+}
+
+/*	iscsi_tpg_add_portal_group():
+ *
+ *
+ */
+int iscsi_tpg_add_portal_group(struct iscsi_tiqn *tiqn, struct iscsi_portal_group *tpg)
+{
+	if (tpg->tpg_state != TPG_STATE_FREE) {
+		printk(KERN_ERR "Unable to add iSCSI Target Portal Group: %d"
+			" while not in TPG_STATE_FREE state.\n", tpg->tpgt);
+		return -EEXIST;
+	}
+	iscsi_set_default_tpg_attribs(tpg);
+
+	if (iscsi_create_default_params(&tpg->param_list) < 0)
+		goto err_out;
+
+	ISCSI_TPG_ATTRIB(tpg)->tpg = tpg;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state	= TPG_STATE_INACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_add_tail(&tpg->tpg_list, &tiqn->tiqn_tpg_list);
+	tiqn->tiqn_ntpgs++;
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Added iSCSI Target Portal Group\n",
+			tiqn->tiqn, tpg->tpgt);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	spin_lock_bh(&iscsi_global->g_tpg_lock);
+	list_add_tail(&tpg->g_tpg_list, &iscsi_global->g_tpg_list);
+	spin_unlock_bh(&iscsi_global->g_tpg_lock);
+
+	return 0;
+err_out:
+	if (tpg->param_list) {
+		iscsi_release_param_list(tpg->param_list);
+		tpg->param_list = NULL;
+	}
+	kfree(tpg);
+	return -ENOMEM;
+}
+
+int iscsi_tpg_del_portal_group(
+	struct iscsi_tiqn *tiqn,
+	struct iscsi_portal_group *tpg,
+	int force)
+{
+	u8 old_state = tpg->tpg_state;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state = TPG_STATE_INACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	iscsi_clear_tpg_np_login_threads(tpg, 1);
+
+	if (iscsi_release_sessions_for_tpg(tpg, force) < 0) {
+		printk(KERN_ERR "Unable to delete iSCSI Target Portal Group:"
+			" %hu while active sessions exist, and force=0\n",
+			tpg->tpgt);
+		tpg->tpg_state = old_state;
+		return -EPERM;
+	}
+
+	core_tpg_clear_object_luns(&tpg->tpg_se_tpg);
+	iscsi_tpg_free_network_portals(tpg);
+
+	spin_lock_bh(&iscsi_global->g_tpg_lock);
+	list_del(&tpg->g_tpg_list);
+	spin_unlock_bh(&iscsi_global->g_tpg_lock);
+
+	if (tpg->param_list) {
+		iscsi_release_param_list(tpg->param_list);
+		tpg->param_list = NULL;
+	}
+
+	core_tpg_deregister(&tpg->tpg_se_tpg);
+//	tpg->tpg_se_tpg = NULL;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state = TPG_STATE_FREE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	tiqn->tiqn_ntpgs--;
+	list_del(&tpg->tpg_list);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Deleted iSCSI Target Portal Group\n",
+			tiqn->tiqn, tpg->tpgt);
+
+	kmem_cache_free(lio_tpg_cache, tpg);
+	return 0;
+}
+
+/*	iscsi_tpg_enable_portal_group():
+ *
+ *
+ */
+int iscsi_tpg_enable_portal_group(struct iscsi_portal_group *tpg)
+{
+	struct iscsi_param *param;
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	spin_lock(&tpg->tpg_state_lock);
+	if (tpg->tpg_state == TPG_STATE_ACTIVE) {
+		printk(KERN_ERR "iSCSI target portal group: %hu is already"
+			" active, ignoring request.\n", tpg->tpgt);
+		spin_unlock(&tpg->tpg_state_lock);
+		return -EINVAL;
+	}
+	/*
+	 * Make sure that AuthMethod does not contain None as an option
+	 * unless explicitly disabled.  Set the default to CHAP if authentication
+	 * is enforced (as per default), and remove the NONE option.
+	 */
+	param = iscsi_find_param_from_key(AUTHMETHOD, tpg->param_list);
+	if (!(param)) {
+		spin_unlock(&tpg->tpg_state_lock);
+		return -ENOMEM;
+	}
+
+	if (ISCSI_TPG_ATTRIB(tpg)->authentication) {
+		if (!strcmp(param->value, NONE))
+			if (iscsi_update_param_value(param, CHAP) < 0) {
+				spin_unlock(&tpg->tpg_state_lock);
+				return -ENOMEM;
+			}
+		if (iscsi_ta_authentication(tpg, 1) < 0) {
+			spin_unlock(&tpg->tpg_state_lock);
+			return -ENOMEM;
+		}
+	}
+
+	tpg->tpg_state = TPG_STATE_ACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	tiqn->tiqn_active_tpgs++;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Enabled iSCSI Target Portal Group\n",
+			tpg->tpgt);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	return 0;
+}
+
+/*	iscsi_tpg_disable_portal_group():
+ *
+ *
+ */
+int iscsi_tpg_disable_portal_group(struct iscsi_portal_group *tpg, int force)
+{
+	struct iscsi_tiqn *tiqn;
+	u8 old_state = tpg->tpg_state;
+
+	spin_lock(&tpg->tpg_state_lock);
+	if (tpg->tpg_state == TPG_STATE_INACTIVE) {
+		printk(KERN_ERR "iSCSI Target Portal Group: %hu is already"
+			" inactive, ignoring request.\n", tpg->tpgt);
+		spin_unlock(&tpg->tpg_state_lock);
+		return -EINVAL;
+	}
+	tpg->tpg_state = TPG_STATE_INACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	iscsi_clear_tpg_np_login_threads(tpg, 0);
+
+	if (iscsi_release_sessions_for_tpg(tpg, force) < 0) {
+		spin_lock(&tpg->tpg_state_lock);
+		tpg->tpg_state = old_state;
+		spin_unlock(&tpg->tpg_state_lock);
+		printk(KERN_ERR "Unable to disable iSCSI Target Portal Group:"
+			" %hu while active sessions exist, and force=0\n",
+			tpg->tpgt);
+		return -EPERM;
+	}
+
+	tiqn = tpg->tpg_tiqn;
+	if (!(tiqn) || (tpg == iscsi_global->discovery_tpg))
+		return 0;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	tiqn->tiqn_active_tpgs--;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Disabled iSCSI Target Portal Group\n",
+			tpg->tpgt);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	return 0;
+}
+
+struct iscsi_node_attrib *iscsi_tpg_get_node_attrib(
+	struct iscsi_session *sess)
+{
+	struct se_session *se_sess = sess->se_sess;
+	struct se_node_acl *se_nacl = se_sess->se_node_acl;
+	struct iscsi_node_acl *acl = container_of(se_nacl, struct iscsi_node_acl,
+					se_node_acl);
+
+	return &acl->node_attrib;
+}
+
+struct iscsi_tpg_np *iscsi_tpg_locate_child_np(
+	struct iscsi_tpg_np *tpg_np,
+	int network_transport)
+{
+	struct iscsi_tpg_np *tpg_np_child, *tpg_np_child_tmp;
+
+	spin_lock(&tpg_np->tpg_np_parent_lock);
+	list_for_each_entry_safe(tpg_np_child, tpg_np_child_tmp,
+			&tpg_np->tpg_np_parent_list, tpg_np_child_list) {
+		if (tpg_np_child->tpg_np->np_network_transport ==
+				network_transport) {
+			spin_unlock(&tpg_np->tpg_np_parent_lock);
+			return tpg_np_child;
+		}
+	}
+	spin_unlock(&tpg_np->tpg_np_parent_lock);
+
+	return NULL;
+}
+
+/*	iscsi_tpg_add_network_portal():
+ *
+ *
+ */
+struct iscsi_tpg_np *iscsi_tpg_add_network_portal(
+	struct iscsi_portal_group *tpg,
+	struct iscsi_np_addr *np_addr,
+	struct iscsi_tpg_np *tpg_np_parent,
+	int network_transport)
+{
+	struct iscsi_np *np;
+	struct iscsi_tpg_np *tpg_np;
+	char *ip_buf;
+	void *ip;
+	int ret = 0;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+
+	if (np_addr->np_flags & NPF_NET_IPV6) {
+		ip_buf = (char *)&np_addr->np_ipv6[0];
+		ip = (void *)&np_addr->np_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np_addr->np_ipv4);
+		ip_buf = &buf_ipv4[0];
+		ip = (void *)&np_addr->np_ipv4;
+	}
+	/*
+	 * If the Network Portal does not currently exist, start it up now.
+	 */
+	np = core_get_np(ip, np_addr->np_port, network_transport);
+	if (!(np)) {
+		np = core_add_np(np_addr, network_transport, &ret);
+		if (!(np))
+			return ERR_PTR(ret);
+	}
+
+	tpg_np = kzalloc(sizeof(struct iscsi_tpg_np), GFP_KERNEL);
+	if (!(tpg_np)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_tpg_np.\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	tpg_np->tpg_np_index	= iscsi_get_new_index(ISCSI_PORTAL_INDEX);
+	INIT_LIST_HEAD(&tpg_np->tpg_np_list);
+	INIT_LIST_HEAD(&tpg_np->tpg_np_child_list);
+	INIT_LIST_HEAD(&tpg_np->tpg_np_parent_list);
+	spin_lock_init(&tpg_np->tpg_np_parent_lock);
+	tpg_np->tpg_np		= np;
+	tpg_np->tpg		= tpg;
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_add_tail(&tpg_np->tpg_np_list, &tpg->tpg_gnp_list);
+	tpg->num_tpg_nps++;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_num_tpg_nps++;
+	spin_unlock(&tpg->tpg_np_lock);
+
+	if (tpg_np_parent) {
+		tpg_np->tpg_np_parent = tpg_np_parent;
+		spin_lock(&tpg_np_parent->tpg_np_parent_lock);
+		list_add_tail(&tpg_np->tpg_np_child_list,
+			&tpg_np_parent->tpg_np_parent_list);
+		spin_unlock(&tpg_np_parent->tpg_np_parent_lock);
+	}
+
+	printk(KERN_INFO "CORE[%s] - Added Network Portal: %s:%hu,%hu on %s on"
+		" network device: %s\n", tpg->tpg_tiqn->tiqn, ip_buf,
+		np->np_port, tpg->tpgt,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	spin_lock(&np->np_state_lock);
+	np->np_exports++;
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Incremented np_exports to %u\n",
+		tpg->tpg_tiqn->tiqn, tpg->tpgt, np->np_exports);
+	spin_unlock(&np->np_state_lock);
+
+	return tpg_np;
+}
+
+static int iscsi_tpg_release_np(
+	struct iscsi_tpg_np *tpg_np,
+	struct iscsi_portal_group *tpg,
+	struct iscsi_np *np)
+{
+	char *ip;
+	char buf_ipv4[IPV4_BUF_SIZE];
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE)
+		ip = &np->np_ipv6[0];
+	else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+	}
+
+	iscsi_clear_tpg_np_login_thread(tpg_np, tpg, 1);
+
+	printk(KERN_INFO "CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s"
+		" on network device: %s\n", tpg->tpg_tiqn->tiqn, ip,
+		np->np_port, tpg->tpgt,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP",  (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	tpg_np->tpg_np = NULL;
+	tpg_np->tpg = NULL;
+	kfree(tpg_np);
+
+	/*
+	 * Shutdown Network Portal when last TPG reference is released.
+	 */
+	spin_lock(&np->np_state_lock);
+	if ((--np->np_exports == 0) && !(ISCSI_TPG_ATTRIB(tpg)->cache_core_nps))
+		atomic_set(&np->np_shutdown, 1);
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Decremented np_exports to %u\n",
+		tpg->tpg_tiqn->tiqn, tpg->tpgt, np->np_exports);
+	spin_unlock(&np->np_state_lock);
+
+	if (atomic_read(&np->np_shutdown))
+		core_del_np(np);
+
+	return 0;
+}
+
+/*	iscsi_tpg_del_network_portal():
+ *
+ *
+ */
+int iscsi_tpg_del_network_portal(
+	struct iscsi_portal_group *tpg,
+	struct iscsi_tpg_np *tpg_np)
+{
+	struct iscsi_np *np;
+	struct iscsi_tpg_np *tpg_np_child, *tpg_np_child_tmp;
+	int ret = 0;
+
+	np = tpg_np->tpg_np;
+	if (!(np)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_np from"
+				" struct iscsi_tpg_np\n");
+		return -EINVAL;
+	}
+
+	if (!tpg_np->tpg_np_parent) {
+		/*
+		 * We are the parent tpg network portal.  Release all of the
+		 * child tpg_np's (eg: the non ISCSI_TCP ones) on our parent
+		 * list first.
+		 */
+		list_for_each_entry_safe(tpg_np_child, tpg_np_child_tmp,
+				&tpg_np->tpg_np_parent_list,
+				tpg_np_child_list) {
+			ret = iscsi_tpg_del_network_portal(tpg, tpg_np_child);
+			if (ret < 0)
+				printk(KERN_ERR "iscsi_tpg_del_network_portal()"
+					" failed: %d\n", ret);
+		}
+	} else {
+		/*
+		 * We are not the parent ISCSI_TCP tpg network portal.  Release
+		 * our own network portals from the child list.
+		 */
+		spin_lock(&tpg_np->tpg_np_parent->tpg_np_parent_lock);
+		list_del(&tpg_np->tpg_np_child_list);
+		spin_unlock(&tpg_np->tpg_np_parent->tpg_np_parent_lock);
+	}
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_del(&tpg_np->tpg_np_list);
+	tpg->num_tpg_nps--;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_num_tpg_nps--;
+	spin_unlock(&tpg->tpg_np_lock);
+
+	return iscsi_tpg_release_np(tpg_np, tpg, np);
+}
+
+/*	iscsi_tpg_set_initiator_node_queue_depth():
+ *
+ *
+ */
+int iscsi_tpg_set_initiator_node_queue_depth(
+	struct iscsi_portal_group *tpg,
+	unsigned char *initiatorname,
+	u32 queue_depth,
+	int force)
+{
+	return core_tpg_set_initiator_node_queue_depth(&tpg->tpg_se_tpg,
+		initiatorname, queue_depth, force);
+}
+
+/*	iscsi_ta_authentication():
+ *
+ *
+ */
+int iscsi_ta_authentication(struct iscsi_portal_group *tpg, u32 authentication)
+{
+	unsigned char buf1[256], buf2[256], *none = NULL;
+	int len;
+	struct iscsi_param *param;
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((authentication != 1) && (authentication != 0)) {
+		printk(KERN_ERR "Illegal value for authentication parameter:"
+			" %u, ignoring request.\n", authentication);
+		return -1;
+	}
+
+	memset(buf1, 0, sizeof(buf1));
+	memset(buf2, 0, sizeof(buf2));
+
+	param = iscsi_find_param_from_key(AUTHMETHOD, tpg->param_list);
+	if (!(param))
+		return -EINVAL;
+
+	if (authentication) {
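+		/*
+		 * Authentication is being enforced: strip "None" out of the
+		 * comma separated AuthMethod value in buf1 and update the
+		 * parameter with the result assembled in buf2.
+		 */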
+		snprintf(buf1, sizeof(buf1), "%s", param->value);
+		none = strstr(buf1, NONE);
+		if (!(none))
+			goto out;
+		if (!strncmp(none + 4, ",", 1)) {
+			if (!strcmp(buf1, none))
+				sprintf(buf2, "%s", none+5);
+			else {
+				none--;
+				*none = '\0';
+				len = sprintf(buf2, "%s", buf1);
+				none += 5;
+				sprintf(buf2 + len, "%s", none);
+			}
+		} else {
+			none--;
+			*none = '\0';
+			sprintf(buf2, "%s", buf1);
+		}
+		if (iscsi_update_param_value(param, buf2) < 0)
+			return -EINVAL;
+	} else {
+		snprintf(buf1, sizeof(buf1), "%s", param->value);
+		none = strstr(buf1, NONE);
+		if ((none))
+			goto out;
+		strncat(buf1, ",", strlen(","));
+		strncat(buf1, NONE, strlen(NONE));
+		if (iscsi_update_param_value(param, buf1) < 0)
+			return -EINVAL;
+	}
+
+out:
+	a->authentication = authentication;
+	printk(KERN_INFO "%s iSCSI Authentication Methods for TPG: %hu.\n",
+		a->authentication ? "Enforcing" : "Disabling", tpg->tpgt);
+
+	return 0;
+}
+
+/*	iscsi_ta_login_timeout():
+ *
+ *
+ */
+int iscsi_ta_login_timeout(
+	struct iscsi_portal_group *tpg,
+	u32 login_timeout)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if (login_timeout > TA_LOGIN_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested Login Timeout %u larger than maximum"
+			" %u\n", login_timeout, TA_LOGIN_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (login_timeout < TA_LOGIN_TIMEOUT_MIN) {
+		printk(KERN_ERR "Requested Login Timeout %u smaller than"
+			" minimum %u\n", login_timeout, TA_LOGIN_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->login_timeout = login_timeout;
+	printk(KERN_INFO "Set Login Timeout to %u for Target Portal Group"
+		" %hu\n", a->login_timeout, tpg->tpgt);
+
+	return 0;
+}
+
+/*	iscsi_ta_netif_timeout():
+ *
+ *
+ */
+int iscsi_ta_netif_timeout(
+	struct iscsi_portal_group *tpg,
+	u32 netif_timeout)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if (netif_timeout > TA_NETIF_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested Network Interface Timeout %u larger"
+			" than maximum %u\n", netif_timeout,
+				TA_NETIF_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (netif_timeout < TA_NETIF_TIMEOUT_MIN) {
+		printk(KERN_ERR "Requested Network Interface Timeout %u smaller"
+			" than minimum %u\n", netif_timeout,
+				TA_NETIF_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->netif_timeout = netif_timeout;
+	printk(KERN_INFO "Set Network Interface Timeout to %u for"
+		" Target Portal Group %hu\n", a->netif_timeout, tpg->tpgt);
+
+	return 0;
+}
+
+int iscsi_ta_generate_node_acls(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->generate_node_acls = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Generate Initiator Portal Group ACLs: %s\n",
+		tpg->tpgt, (a->generate_node_acls) ? "Enabled" : "Disabled");
+
+	return 0;
+}
+
+int iscsi_ta_default_cmdsn_depth(
+	struct iscsi_portal_group *tpg,
+	u32 tcq_depth)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if (tcq_depth > TA_DEFAULT_CMDSN_DEPTH_MAX) {
+		printk(KERN_ERR "Requested Default Queue Depth: %u larger"
+			" than maximum %u\n", tcq_depth,
+				TA_DEFAULT_CMDSN_DEPTH_MAX);
+		return -EINVAL;
+	} else if (tcq_depth < TA_DEFAULT_CMDSN_DEPTH_MIN) {
+		printk(KERN_ERR "Requested Default Queue Depth: %u smaller"
+			" than minimum %u\n", tcq_depth,
+				TA_DEFAULT_CMDSN_DEPTH_MIN);
+		return -EINVAL;
+	}
+
+	a->default_cmdsn_depth = tcq_depth;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Set Default CmdSN TCQ Depth to %u\n",
+		tpg->tpgt, a->default_cmdsn_depth);
+
+	return 0;
+}
+
+int iscsi_ta_cache_dynamic_acls(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->cache_dynamic_acls = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Cache Dynamic Initiator Portal Group"
+		" ACLs %s\n", tpg->tpgt, (a->cache_dynamic_acls) ?
+		"Enabled" : "Disabled");
+
+	return 0;
+}
+
+int iscsi_ta_demo_mode_write_protect(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->demo_mode_write_protect = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Demo Mode Write Protect bit: %s\n",
+		tpg->tpgt, (a->demo_mode_write_protect) ? "ON" : "OFF");
+
+	return 0;
+}
+
+int iscsi_ta_prod_mode_write_protect(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->prod_mode_write_protect = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Production Mode Write Protect bit:"
+		" %s\n", tpg->tpgt, (a->prod_mode_write_protect) ?
+		"ON" : "OFF");
+
+	return 0;
+}
+
+int iscsi_ta_crc32c_x86_offload(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->crc32c_x86_offload = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - CRC32C x86 Offload: %s\n",
+		tpg->tpgt, (a->crc32c_x86_offload) ? "ON" : "OFF");
+
+	return 0;
+}
+
+void iscsi_disable_tpgs(struct iscsi_tiqn *tiqn)
+{
+	struct iscsi_portal_group *tpg;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+
+		spin_lock(&tpg->tpg_state_lock);
+		if ((tpg->tpg_state == TPG_STATE_FREE) ||
+		    (tpg->tpg_state == TPG_STATE_INACTIVE)) {
+			spin_unlock(&tpg->tpg_state_lock);
+			continue;
+		}
+		spin_unlock(&tpg->tpg_state_lock);
+		spin_unlock(&tiqn->tiqn_tpg_lock);
+
+		iscsi_tpg_disable_portal_group(tpg, 1);
+
+		spin_lock(&tiqn->tiqn_tpg_lock);
+	}
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+}
+
+/*	iscsi_disable_all_tpgs():
+ *
+ *
+ */
+void iscsi_disable_all_tpgs(void)
+{
+	struct iscsi_tiqn *tiqn;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		spin_unlock(&iscsi_global->tiqn_lock);
+		iscsi_disable_tpgs(tiqn);
+		spin_lock(&iscsi_global->tiqn_lock);
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+}
+
+void iscsi_remove_tpgs(struct iscsi_tiqn *tiqn)
+{
+	struct iscsi_portal_group *tpg, *tpg_tmp;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_for_each_entry_safe(tpg, tpg_tmp, &tiqn->tiqn_tpg_list, tpg_list) {
+
+		spin_lock(&tpg->tpg_state_lock);
+		if (tpg->tpg_state == TPG_STATE_FREE) {
+			spin_unlock(&tpg->tpg_state_lock);
+			continue;
+		}
+		spin_unlock(&tpg->tpg_state_lock);
+		spin_unlock(&tiqn->tiqn_tpg_lock);
+
+		iscsi_tpg_del_portal_group(tiqn, tpg, 1);
+
+		spin_lock(&tiqn->tiqn_tpg_lock);
+	}
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+}
+
+/*	iscsi_remove_all_tpgs():
+ *
+ *
+ */
+void iscsi_remove_all_tpgs(void)
+{
+	struct iscsi_tiqn *tiqn;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		spin_unlock(&iscsi_global->tiqn_lock);
+		iscsi_remove_tpgs(tiqn);
+		spin_lock(&iscsi_global->tiqn_lock);
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+}
diff --git a/drivers/target/iscsi/iscsi_target_tpg.h b/drivers/target/iscsi/iscsi_target_tpg.h
new file mode 100644
index 0000000..bcdfacb
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_tpg.h
@@ -0,0 +1,71 @@
+#ifndef ISCSI_TARGET_TPG_H
+#define ISCSI_TARGET_TPG_H
+
+extern char *lio_tpg_get_endpoint_wwn(struct se_portal_group *);
+extern u16 lio_tpg_get_tag(struct se_portal_group *);
+extern u32 lio_tpg_get_default_depth(struct se_portal_group *);
+extern int lio_tpg_check_demo_mode(struct se_portal_group *);
+extern int lio_tpg_check_demo_mode_cache(struct se_portal_group *);
+extern int lio_tpg_check_demo_mode_write_protect(struct se_portal_group *);
+extern int lio_tpg_check_prod_mode_write_protect(struct se_portal_group *);
+extern struct se_node_acl *lio_tpg_alloc_fabric_acl(struct se_portal_group *);
+extern void lio_tpg_release_fabric_acl(struct se_portal_group *,
+			struct se_node_acl *);
+extern int lio_tpg_shutdown_session(struct se_session *);
+extern void lio_tpg_close_session(struct se_session *);
+extern void lio_tpg_stop_session(struct se_session *, int, int);
+extern void lio_tpg_fall_back_to_erl0(struct se_session *);
+extern u32 lio_tpg_get_inst_index(struct se_portal_group *);
+extern void lio_set_default_node_attributes(struct se_node_acl *);
+
+extern struct iscsi_portal_group *core_alloc_portal_group(struct iscsi_tiqn *, u16);
+extern int core_load_discovery_tpg(void);
+extern void core_release_discovery_tpg(void);
+extern struct iscsi_portal_group *core_get_tpg_from_np(struct iscsi_tiqn *,
+			struct iscsi_np *);
+extern int iscsi_get_tpg(struct iscsi_portal_group *);
+extern void iscsi_put_tpg(struct iscsi_portal_group *);
+extern void iscsi_clear_tpg_np_login_threads(struct iscsi_portal_group *, int);
+extern void iscsi_tpg_dump_params(struct iscsi_portal_group *);
+extern int iscsi_tpg_add_portal_group(struct iscsi_tiqn *, struct iscsi_portal_group *);
+extern int iscsi_tpg_del_portal_group(struct iscsi_tiqn *, struct iscsi_portal_group *,
+			int);
+extern int iscsi_tpg_enable_portal_group(struct iscsi_portal_group *);
+extern int iscsi_tpg_disable_portal_group(struct iscsi_portal_group *, int);
+extern struct iscsi_node_acl *iscsi_tpg_add_initiator_node_acl(
+			struct iscsi_portal_group *, const char *, u32);
+extern void iscsi_tpg_del_initiator_node_acl(struct iscsi_portal_group *,
+			struct se_node_acl *);
+extern struct iscsi_node_attrib *iscsi_tpg_get_node_attrib(struct iscsi_session *);
+extern void iscsi_tpg_del_external_nps(struct iscsi_tpg_np *);
+extern struct iscsi_tpg_np *iscsi_tpg_locate_child_np(struct iscsi_tpg_np *, int);
+extern struct iscsi_tpg_np *iscsi_tpg_add_network_portal(struct iscsi_portal_group *,
+			struct iscsi_np_addr *, struct iscsi_tpg_np *, int);
+extern int iscsi_tpg_del_network_portal(struct iscsi_portal_group *,
+			struct iscsi_tpg_np *);
+extern int iscsi_tpg_set_initiator_node_queue_depth(struct iscsi_portal_group *,
+			unsigned char *, u32, int);
+extern int iscsi_ta_authentication(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_login_timeout(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_netif_timeout(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_generate_node_acls(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_default_cmdsn_depth(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_cache_dynamic_acls(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_demo_mode_write_protect(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_prod_mode_write_protect(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_crc32c_x86_offload(struct iscsi_portal_group *, u32);
+extern void iscsi_disable_tpgs(struct iscsi_tiqn *);
+extern void iscsi_disable_all_tpgs(void);
+extern void iscsi_remove_tpgs(struct iscsi_tiqn *);
+extern void iscsi_remove_all_tpgs(void);
+
+extern struct iscsi_global *iscsi_global;
+extern struct target_fabric_configfs *lio_target_fabric_configfs;
+extern struct kmem_cache *lio_tpg_cache;
+
+extern int iscsi_close_session(struct iscsi_session *);
+extern int iscsi_free_session(struct iscsi_session *);
+extern int iscsi_release_sessions_for_tpg(struct iscsi_portal_group *, int);
+extern int iscsi_ta_authentication(struct iscsi_portal_group *, __u32);
+
+#endif /* ISCSI_TARGET_TPG_H */
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 05/12] iscsi-target: Add TPG and Device logic
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds the TPG and device logic used for mapping iscsi-target
abstractions on top of the TCM v4 struct se_portal_group and struct
se_device abstractions.
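
For readers new to the TCM v4 layering, the sketch below (illustrative
userspace C only, not part of this patch; all struct and function names
are made up) shows the fabric-pointer pattern that the lio_tpg_*()
callbacks in iscsi_target_tpg.c rely on: the generic core portal group
carries an opaque pointer back to the fabric-specific portal group, and
each callback simply dereferences it.

/* Hedged sketch: mirrors the se_tpg_fabric_ptr pattern, nothing more. */
#include <stdio.h>

struct example_core_tpg {
	void *fabric_ptr;		/* opaque pointer back to the fabric TPG */
};

struct example_iscsi_tpg {
	unsigned short tpgt;		/* iSCSI Target Portal Group Tag */
	struct example_core_tpg core;	/* embedded generic core state */
};

/* A fabric callback: recover the fabric TPG and return its tag. */
static unsigned short example_get_tag(struct example_core_tpg *core_tpg)
{
	struct example_iscsi_tpg *tpg = core_tpg->fabric_ptr;

	return tpg->tpgt;
}

int main(void)
{
	struct example_iscsi_tpg tpg = { .tpgt = 1 };

	tpg.core.fabric_ptr = &tpg;
	printf("TPGT: %hu\n", example_get_tag(&tpg.core));
	return 0;
}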

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_device.c |  128 +++
 drivers/target/iscsi/iscsi_target_device.h |    9 +
 drivers/target/iscsi/iscsi_target_tpg.c    | 1185 ++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_tpg.h    |   71 ++
 4 files changed, 1393 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_device.c
 create mode 100644 drivers/target/iscsi/iscsi_target_device.h
 create mode 100644 drivers/target/iscsi/iscsi_target_tpg.c
 create mode 100644 drivers/target/iscsi/iscsi_target_tpg.h

diff --git a/drivers/target/iscsi/iscsi_target_device.c b/drivers/target/iscsi/iscsi_target_device.c
new file mode 100644
index 0000000..635f91a
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_device.c
@@ -0,0 +1,128 @@
+/*******************************************************************************
+ * This file contains the iSCSI Virtual Device and Disk Transport
+ * agnostic related functions.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005-2006 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <target/target_core_base.h>
+#include <target/target_core_device.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+
+/*	iscsi_get_lun_for_tmr():
+ *
+ *
+ */
+int iscsi_get_lun_for_tmr(
+	struct iscsi_cmd *cmd,
+	u64 lun)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_portal_group *tpg = ISCSI_TPG_C(conn);
+	u32 unpacked_lun;
+
+	unpacked_lun = iscsi_unpack_lun((unsigned char *)&lun);
+	if (unpacked_lun > (ISCSI_MAX_LUNS_PER_TPG-1)) {
+		printk(KERN_ERR "iSCSI LUN: %u exceeds ISCSI_MAX_LUNS_PER_TPG"
+			"-1: %u for Target Portal Group: %hu\n", unpacked_lun,
+			ISCSI_MAX_LUNS_PER_TPG-1, tpg->tpgt);
+		return -1;
+	}
+
+	return transport_get_lun_for_tmr(SE_CMD(cmd), unpacked_lun);
+}
+
+/*	iscsi_get_lun_for_cmd():
+ *
+ *	Returns (0) on success
+ * 	Returns (< 0) on failure
+ */
+int iscsi_get_lun_for_cmd(
+	struct iscsi_cmd *cmd,
+	unsigned char *cdb,
+	u64 lun)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_portal_group *tpg = ISCSI_TPG_C(conn);
+	u32 unpacked_lun;
+
+	unpacked_lun = iscsi_unpack_lun((unsigned char *)&lun);
+	if (unpacked_lun > (ISCSI_MAX_LUNS_PER_TPG-1)) {
+		printk(KERN_ERR "iSCSI LUN: %u exceeds ISCSI_MAX_LUNS_PER_TPG"
+			"-1: %u for Target Portal Group: %hu\n", unpacked_lun,
+			ISCSI_MAX_LUNS_PER_TPG-1, tpg->tpgt);
+		return -1;
+	}
+
+	return transport_get_lun_for_cmd(SE_CMD(cmd), cdb, unpacked_lun);
+}
+
+/*	iscsi_determine_maxcmdsn():
+ *
+ *
+ */
+void iscsi_determine_maxcmdsn(struct iscsi_session *sess)
+{
+	struct se_node_acl *se_nacl;
+
+	/*
+	 * This is a discovery session, the single queue slot was already
+	 * assigned in iscsi_login_zero_tsih().  Since only Logout and
+	 * Text Opcodes are allowed during discovery we do not have to worry
+	 * about the HBA's queue depth here.
+	 */
+	if (SESS_OPS(sess)->SessionType)
+		return;
+
+	se_nacl = sess->se_sess->se_node_acl;
+
+	/*
+	 * This is a normal session, set the Session's CmdSN window to the
+	 * struct se_node_acl->queue_depth.  The value in struct se_node_acl->queue_depth
+	 * has already been validated as a legal value in
+	 * core_set_queue_depth_for_node().
+	 */
+	sess->cmdsn_window = se_nacl->queue_depth;
+	sess->max_cmd_sn = (sess->max_cmd_sn + se_nacl->queue_depth) - 1;
+}
+
+/*	iscsi_increment_maxcmdsn():
+ *
+ *
+ */
+void iscsi_increment_maxcmdsn(struct iscsi_cmd *cmd, struct iscsi_session *sess)
+{
+	if (cmd->immediate_cmd || cmd->maxcmdsn_inc)
+		return;
+
+	cmd->maxcmdsn_inc = 1;
+
+	spin_lock(&sess->cmdsn_lock);
+	sess->max_cmd_sn += 1;
+	TRACE(TRACE_ISCSI, "Updated MaxCmdSN to 0x%08x\n", sess->max_cmd_sn);
+	spin_unlock(&sess->cmdsn_lock);
+}
diff --git a/drivers/target/iscsi/iscsi_target_device.h b/drivers/target/iscsi/iscsi_target_device.h
new file mode 100644
index 0000000..f69cf52
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_device.h
@@ -0,0 +1,9 @@
+#ifndef ISCSI_TARGET_DEVICE_H
+#define ISCSI_TARGET_DEVICE_H
+
+extern int iscsi_get_lun_for_tmr(struct iscsi_cmd *, u64);
+extern int iscsi_get_lun_for_cmd(struct iscsi_cmd *, unsigned char *, u64);
+extern void iscsi_determine_maxcmdsn(struct iscsi_session *);
+extern void iscsi_increment_maxcmdsn(struct iscsi_cmd *, struct iscsi_session *);
+
+#endif /* ISCSI_TARGET_DEVICE_H */
diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
new file mode 100644
index 0000000..190741a
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_tpg.c
@@ -0,0 +1,1185 @@
+/*******************************************************************************
+ * This file contains iSCSI Target Portal Group related functions.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ * 
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/ctype.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/target_core_fabric_ops.h>
+#include <target/target_core_configfs.h>
+#include <target/target_core_tpg.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_nodeattrib.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_parameters.h"
+
+char *lio_tpg_get_endpoint_wwn(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return &tpg->tpg_tiqn->tiqn[0];
+}
+
+u16 lio_tpg_get_tag(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return tpg->tpgt;
+}
+
+u32 lio_tpg_get_default_depth(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->default_cmdsn_depth;
+}
+
+int lio_tpg_check_demo_mode(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			 (struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->generate_node_acls;
+}
+
+int lio_tpg_check_demo_mode_cache(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->cache_dynamic_acls;
+}
+
+int lio_tpg_check_demo_mode_write_protect(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->demo_mode_write_protect;
+}
+
+int lio_tpg_check_prod_mode_write_protect(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return ISCSI_TPG_ATTRIB(tpg)->prod_mode_write_protect;
+}
+
+struct se_node_acl *lio_tpg_alloc_fabric_acl(
+	struct se_portal_group *se_tpg)
+{
+	struct iscsi_node_acl *acl;
+
+	acl = kzalloc(sizeof(struct iscsi_node_acl), GFP_KERNEL);
+	if (!(acl)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_node_acl\n");
+		return NULL;
+	}
+
+	return &acl->se_node_acl;
+}
+
+void lio_tpg_release_fabric_acl(
+	struct se_portal_group *se_tpg,
+	struct se_node_acl *se_acl)
+{
+	struct iscsi_node_acl *acl = container_of(se_acl,
+			struct iscsi_node_acl, se_node_acl);
+	kfree(acl);
+}
+
+/*
+ * Called with spin_lock_bh(struct se_portal_group->session_lock) held..
+ *
+ * Also, this function calls iscsi_inc_session_usage_count() on the
+ * struct iscsi_session in question.
+ */
+int lio_tpg_shutdown_session(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	spin_lock(&sess->conn_lock);
+	if (atomic_read(&sess->session_fall_back_to_erl0) ||
+	    atomic_read(&sess->session_logout) ||
+	    (sess->time2retain_timer_flags & T2R_TF_EXPIRED)) {
+		spin_unlock(&sess->conn_lock);
+		return 0;
+	}
+	atomic_set(&sess->session_reinstatement, 1);
+	spin_unlock(&sess->conn_lock);
+
+	iscsi_inc_session_usage_count(sess);
+	iscsi_stop_time2retain_timer(sess);
+
+	return 1;
+}
+
+/*
+ * Calls iscsi_dec_session_usage_count() as inverse of
+ * lio_tpg_shutdown_session()
+ */
+void lio_tpg_close_session(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+	/*
+	 * If the iSCSI Session for the iSCSI Initiator Node exists,
+	 * forcefully shutdown the iSCSI NEXUS.
+	 */
+	iscsi_stop_session(sess, 1, 1);
+	iscsi_dec_session_usage_count(sess);
+	iscsi_close_session(sess);
+}
+
+void lio_tpg_stop_session(struct se_session *se_sess, int sess_sleep, int conn_sleep)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	iscsi_stop_session(sess, sess_sleep, conn_sleep);
+}
+
+void lio_tpg_fall_back_to_erl0(struct se_session *se_sess)
+{
+	struct iscsi_session *sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+
+	iscsi_fall_back_to_erl0(sess);
+}
+
+u32 lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+{
+	struct iscsi_portal_group *tpg =
+			(struct iscsi_portal_group *)se_tpg->se_tpg_fabric_ptr;
+
+	return tpg->tpg_tiqn->tiqn_index;
+}
+
+void lio_set_default_node_attributes(struct se_node_acl *se_acl)
+{
+	struct iscsi_node_acl *acl = container_of(se_acl, struct iscsi_node_acl,
+				se_node_acl);
+
+	ISCSI_NODE_ATTRIB(acl)->nacl = acl;
+	iscsi_set_default_node_attribues(acl);
+}
+
+struct iscsi_portal_group *core_alloc_portal_group(struct iscsi_tiqn *tiqn, u16 tpgt)
+{
+	struct iscsi_portal_group *tpg;
+
+	tpg = kmem_cache_zalloc(lio_tpg_cache, GFP_KERNEL);
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to get tpg from lio_tpg_cache\n");
+		return NULL;
+	}
+
+	tpg->tpgt = tpgt;
+	tpg->tpg_state = TPG_STATE_FREE;
+	tpg->tpg_tiqn = tiqn;
+	INIT_LIST_HEAD(&tpg->tpg_gnp_list);
+	INIT_LIST_HEAD(&tpg->g_tpg_list);
+	INIT_LIST_HEAD(&tpg->tpg_list);
+	sema_init(&tpg->tpg_access_sem, 1);
+	sema_init(&tpg->np_login_sem, 1);
+	spin_lock_init(&tpg->tpg_state_lock);
+	spin_lock_init(&tpg->tpg_np_lock);
+
+	return tpg;
+}
+
+static void iscsi_set_default_tpg_attribs(struct iscsi_portal_group *);
+
+int core_load_discovery_tpg(void)
+{
+	struct iscsi_param *param;
+	struct iscsi_portal_group *tpg;
+	int ret;
+
+	tpg = core_alloc_portal_group(NULL, 1);
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to allocate struct iscsi_portal_group\n");
+		return -1;
+	}
+
+	ret = core_tpg_register(
+			&lio_target_fabric_configfs->tf_ops,
+			NULL, &tpg->tpg_se_tpg, (void *)tpg,
+			TRANSPORT_TPG_TYPE_DISCOVERY);
+	if (ret < 0) {
+		kfree(tpg);
+		return -1;
+	}
+
+	tpg->sid = 1; /* First Assigned LIO Session ID */
+	iscsi_set_default_tpg_attribs(tpg);
+
+	if (iscsi_create_default_params(&tpg->param_list) < 0)
+		goto out;
+	/*
+	 * By default we disable authentication for discovery sessions,
+	 * this can be changed with:
+	 *
+	 * /sys/kernel/config/target/iscsi/discovery_auth/enforce_discovery_auth
+	 */
+	param = iscsi_find_param_from_key(AUTHMETHOD, tpg->param_list);
+	if (!(param))
+		goto out;
+
+	if (iscsi_update_param_value(param, "CHAP,None") < 0)
+		goto out;
+
+	tpg->tpg_attrib.authentication = 0;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state  = TPG_STATE_ACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	iscsi_global->discovery_tpg = tpg;
+	printk(KERN_INFO "CORE[0] - Allocated Discovery TPG\n");
+
+	return 0;
+out:
+	if (tpg->sid == 1)
+		core_tpg_deregister(&tpg->tpg_se_tpg);
+	kfree(tpg);
+	return -1;
+}
+
+void core_release_discovery_tpg(void)
+{
+	struct iscsi_portal_group *tpg = iscsi_global->discovery_tpg;
+
+	if (!(tpg))
+		return;
+
+	core_tpg_deregister(&tpg->tpg_se_tpg);
+
+	kmem_cache_free(lio_tpg_cache, tpg);
+	iscsi_global->discovery_tpg = NULL;
+}
+
+struct iscsi_portal_group *core_get_tpg_from_np(
+	struct iscsi_tiqn *tiqn,
+	struct iscsi_np *np)
+{
+	struct iscsi_portal_group *tpg = NULL;
+	struct iscsi_tpg_np *tpg_np;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+
+		spin_lock(&tpg->tpg_state_lock);
+		if (tpg->tpg_state == TPG_STATE_FREE) {
+			spin_unlock(&tpg->tpg_state_lock);
+			continue;
+		}
+		spin_unlock(&tpg->tpg_state_lock);
+
+		spin_lock(&tpg->tpg_np_lock);
+		list_for_each_entry(tpg_np, &tpg->tpg_gnp_list, tpg_np_list) {
+			if (tpg_np->tpg_np == np) {
+				spin_unlock(&tpg->tpg_np_lock);
+				spin_unlock(&tiqn->tiqn_tpg_lock);
+				return tpg;
+			}
+		}
+		spin_unlock(&tpg->tpg_np_lock);
+	}
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	return NULL;
+}
+
+int iscsi_get_tpg(
+	struct iscsi_portal_group *tpg)
+{
+	int ret;
+
+	ret = down_interruptible(&tpg->tpg_access_sem);
+	return ((ret != 0) || signal_pending(current)) ? -1 : 0;
+}
+
+/*	iscsi_put_tpg():
+ *
+ *
+ */
+void iscsi_put_tpg(struct iscsi_portal_group *tpg)
+{
+	up(&tpg->tpg_access_sem);
+}
+
+static void iscsi_clear_tpg_np_login_thread(
+	struct iscsi_tpg_np *tpg_np,
+	struct iscsi_portal_group *tpg,
+	int shutdown)
+{
+	if (!tpg_np->tpg_np) {
+		printk(KERN_ERR "struct iscsi_tpg_np->tpg_np is NULL!\n");
+		return;
+	}
+
+	core_reset_np_thread(tpg_np->tpg_np, tpg_np, tpg, shutdown);
+	return;
+}
+
+/*	iscsi_clear_tpg_np_login_threads():
+ *
+ *
+ */
+void iscsi_clear_tpg_np_login_threads(
+	struct iscsi_portal_group *tpg,
+	int shutdown)
+{
+	struct iscsi_tpg_np *tpg_np;
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_for_each_entry(tpg_np, &tpg->tpg_gnp_list, tpg_np_list) {
+		if (!tpg_np->tpg_np) {
+			printk(KERN_ERR "struct iscsi_tpg_np->tpg_np is NULL!\n");
+			continue;
+		}
+		spin_unlock(&tpg->tpg_np_lock);
+		iscsi_clear_tpg_np_login_thread(tpg_np, tpg, shutdown);
+		spin_lock(&tpg->tpg_np_lock);
+	}
+	spin_unlock(&tpg->tpg_np_lock);
+}
+
+/*	iscsi_tpg_dump_params():
+ *
+ *
+ */
+void iscsi_tpg_dump_params(struct iscsi_portal_group *tpg)
+{
+	iscsi_print_params(tpg->param_list);
+}
+
+/*	iscsi_tpg_free_network_portals():
+ *
+ *
+ */
+static void iscsi_tpg_free_network_portals(struct iscsi_portal_group *tpg)
+{
+	struct iscsi_np *np;
+	struct iscsi_tpg_np *tpg_np, *tpg_np_t;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], *ip;
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_for_each_entry_safe(tpg_np, tpg_np_t, &tpg->tpg_gnp_list,
+				tpg_np_list) {
+		np = tpg_np->tpg_np;
+		list_del(&tpg_np->tpg_np_list);
+		tpg->num_tpg_nps--;
+		tpg->tpg_tiqn->tiqn_num_tpg_nps--;
+
+		if (np->np_net_size == IPV6_ADDRESS_SPACE)
+			ip = &np->np_ipv6[0];
+		else {
+			memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+			iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+			ip = &buf_ipv4[0];
+		}
+
+		printk(KERN_INFO "CORE[%s] - Removed Network Portal: %s:%hu,%hu"
+			" on %s on network device: %s\n", tpg->tpg_tiqn->tiqn,
+			ip, np->np_port, tpg->tpgt,
+			(np->np_network_transport == ISCSI_TCP) ?
+			"TCP" : "SCTP",  (strlen(np->np_net_dev)) ?
+			(char *)np->np_net_dev : "None");
+
+		tpg_np->tpg_np = NULL;
+		kfree(tpg_np);
+		spin_unlock(&tpg->tpg_np_lock);
+
+		spin_lock(&np->np_state_lock);
+		np->np_exports--;
+		printk(KERN_INFO "CORE[%s]_TPG[%hu] - Decremented np_exports to %u\n",
+			tpg->tpg_tiqn->tiqn, tpg->tpgt, np->np_exports);
+		spin_unlock(&np->np_state_lock);
+
+		spin_lock(&tpg->tpg_np_lock);
+	}
+	spin_unlock(&tpg->tpg_np_lock);
+}
+
+/*	iscsi_set_default_tpg_attribs():
+ *
+ *
+ */
+static void iscsi_set_default_tpg_attribs(struct iscsi_portal_group *tpg)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	a->authentication = TA_AUTHENTICATION;
+	a->login_timeout = TA_LOGIN_TIMEOUT;
+	a->netif_timeout = TA_NETIF_TIMEOUT;
+	a->default_cmdsn_depth = TA_DEFAULT_CMDSN_DEPTH;
+	a->generate_node_acls = TA_GENERATE_NODE_ACLS;
+	a->cache_dynamic_acls = TA_CACHE_DYNAMIC_ACLS;
+	a->demo_mode_write_protect = TA_DEMO_MODE_WRITE_PROTECT;
+	a->prod_mode_write_protect = TA_PROD_MODE_WRITE_PROTECT;
+	a->crc32c_x86_offload = TA_CRC32C_X86_OFFLOAD;
+	a->cache_core_nps = TA_CACHE_CORE_NPS;
+}
+
+/*	iscsi_tpg_add_portal_group():
+ *
+ *
+ */
+int iscsi_tpg_add_portal_group(struct iscsi_tiqn *tiqn, struct iscsi_portal_group *tpg)
+{
+	if (tpg->tpg_state != TPG_STATE_FREE) {
+		printk(KERN_ERR "Unable to add iSCSI Target Portal Group: %d"
+			" while not in TPG_STATE_FREE state.\n", tpg->tpgt);
+		return -EEXIST;
+	}
+	iscsi_set_default_tpg_attribs(tpg);
+
+	if (iscsi_create_default_params(&tpg->param_list) < 0)
+		goto err_out;
+
+	ISCSI_TPG_ATTRIB(tpg)->tpg = tpg;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state	= TPG_STATE_INACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_add_tail(&tpg->tpg_list, &tiqn->tiqn_tpg_list);
+	tiqn->tiqn_ntpgs++;
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Added iSCSI Target Portal Group\n",
+			tiqn->tiqn, tpg->tpgt);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	spin_lock_bh(&iscsi_global->g_tpg_lock);
+	list_add_tail(&tpg->g_tpg_list, &iscsi_global->g_tpg_list);
+	spin_unlock_bh(&iscsi_global->g_tpg_lock);
+
+	return 0;
+err_out:
+	if (tpg->param_list) {
+		iscsi_release_param_list(tpg->param_list);
+		tpg->param_list = NULL;
+	}
+	kfree(tpg);
+	return -ENOMEM;
+}
+
+int iscsi_tpg_del_portal_group(
+	struct iscsi_tiqn *tiqn,
+	struct iscsi_portal_group *tpg,
+	int force)
+{
+	u8 old_state = tpg->tpg_state;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state = TPG_STATE_INACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	iscsi_clear_tpg_np_login_threads(tpg, 1);
+
+	if (iscsi_release_sessions_for_tpg(tpg, force) < 0) {
+		printk(KERN_ERR "Unable to delete iSCSI Target Portal Group:"
+			" %hu while active sessions exist, and force=0\n",
+			tpg->tpgt);
+		tpg->tpg_state = old_state;
+		return -EPERM;
+	}
+
+	core_tpg_clear_object_luns(&tpg->tpg_se_tpg);
+	iscsi_tpg_free_network_portals(tpg);
+
+	spin_lock_bh(&iscsi_global->g_tpg_lock);
+	list_del(&tpg->g_tpg_list);
+	spin_unlock_bh(&iscsi_global->g_tpg_lock);
+
+	if (tpg->param_list) {
+		iscsi_release_param_list(tpg->param_list);
+		tpg->param_list = NULL;
+	}
+
+	core_tpg_deregister(&tpg->tpg_se_tpg);
+//	tpg->tpg_se_tpg = NULL;
+
+	spin_lock(&tpg->tpg_state_lock);
+	tpg->tpg_state = TPG_STATE_FREE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	tiqn->tiqn_ntpgs--;
+	list_del(&tpg->tpg_list);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Deleted iSCSI Target Portal Group\n",
+			tiqn->tiqn, tpg->tpgt);
+
+	kmem_cache_free(lio_tpg_cache, tpg);
+	return 0;
+}
+
+/*	iscsi_tpg_enable_portal_group():
+ *
+ *
+ */
+int iscsi_tpg_enable_portal_group(struct iscsi_portal_group *tpg)
+{
+	struct iscsi_param *param;
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	spin_lock(&tpg->tpg_state_lock);
+	if (tpg->tpg_state == TPG_STATE_ACTIVE) {
+		printk(KERN_ERR "iSCSI target portal group: %hu is already"
+			" active, ignoring request.\n", tpg->tpgt);
+		spin_unlock(&tpg->tpg_state_lock);
+		return -EINVAL;
+	}
+	/*
+	 * Make sure that AuthMethod does not contain None as an option
+	 * unless explicitly disabled.  Set the default to CHAP if authentication
+	 * is enforced (as per default), and remove the NONE option.
+	 */
+	param = iscsi_find_param_from_key(AUTHMETHOD, tpg->param_list);
+	if (!(param)) {
+		spin_unlock(&tpg->tpg_state_lock);
+		return -ENOMEM;
+	}
+
+	if (ISCSI_TPG_ATTRIB(tpg)->authentication) {
+		if (!strcmp(param->value, NONE))
+			if (iscsi_update_param_value(param, CHAP) < 0) {
+				spin_unlock(&tpg->tpg_state_lock);
+				return -ENOMEM;
+			}
+		if (iscsi_ta_authentication(tpg, 1) < 0) {
+			spin_unlock(&tpg->tpg_state_lock);
+			return -ENOMEM;
+		}
+	}
+
+	tpg->tpg_state = TPG_STATE_ACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	tiqn->tiqn_active_tpgs++;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Enabled iSCSI Target Portal Group\n",
+			tpg->tpgt);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	return 0;
+}
+
+/*	iscsi_tpg_disable_portal_group():
+ *
+ *
+ */
+int iscsi_tpg_disable_portal_group(struct iscsi_portal_group *tpg, int force)
+{
+	struct iscsi_tiqn *tiqn;
+	u8 old_state = tpg->tpg_state;
+
+	spin_lock(&tpg->tpg_state_lock);
+	if (tpg->tpg_state == TPG_STATE_INACTIVE) {
+		printk(KERN_ERR "iSCSI Target Portal Group: %hu is already"
+			" inactive, ignoring request.\n", tpg->tpgt);
+		spin_unlock(&tpg->tpg_state_lock);
+		return -EINVAL;
+	}
+	tpg->tpg_state = TPG_STATE_INACTIVE;
+	spin_unlock(&tpg->tpg_state_lock);
+
+	iscsi_clear_tpg_np_login_threads(tpg, 0);
+
+	if (iscsi_release_sessions_for_tpg(tpg, force) < 0) {
+		spin_lock(&tpg->tpg_state_lock);
+		tpg->tpg_state = old_state;
+		spin_unlock(&tpg->tpg_state_lock);
+		printk(KERN_ERR "Unable to disable iSCSI Target Portal Group:"
+			" %hu while active sessions exist, and force=0\n",
+			tpg->tpgt);
+		return -EPERM;
+	}
+
+	tiqn = tpg->tpg_tiqn;
+	if (!(tiqn) || (tpg == iscsi_global->discovery_tpg))
+		return 0;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	tiqn->tiqn_active_tpgs--;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Disabled iSCSI Target Portal Group\n",
+			tpg->tpgt);
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+
+	return 0;
+}
+
+struct iscsi_node_attrib *iscsi_tpg_get_node_attrib(
+	struct iscsi_session *sess)
+{
+	struct se_session *se_sess = sess->se_sess;
+	struct se_node_acl *se_nacl = se_sess->se_node_acl;
+	struct iscsi_node_acl *acl = container_of(se_nacl, struct iscsi_node_acl,
+					se_node_acl);
+
+	return &acl->node_attrib;
+}
+
+struct iscsi_tpg_np *iscsi_tpg_locate_child_np(
+	struct iscsi_tpg_np *tpg_np,
+	int network_transport)
+{
+	struct iscsi_tpg_np *tpg_np_child, *tpg_np_child_tmp;
+
+	spin_lock(&tpg_np->tpg_np_parent_lock);
+	list_for_each_entry_safe(tpg_np_child, tpg_np_child_tmp,
+			&tpg_np->tpg_np_parent_list, tpg_np_child_list) {
+		if (tpg_np_child->tpg_np->np_network_transport ==
+				network_transport) {
+			spin_unlock(&tpg_np->tpg_np_parent_lock);
+			return tpg_np_child;
+		}
+	}
+	spin_unlock(&tpg_np->tpg_np_parent_lock);
+
+	return NULL;
+}
+
+/*	iscsi_tpg_add_network_portal():
+ *
+ *
+ */
+struct iscsi_tpg_np *iscsi_tpg_add_network_portal(
+	struct iscsi_portal_group *tpg,
+	struct iscsi_np_addr *np_addr,
+	struct iscsi_tpg_np *tpg_np_parent,
+	int network_transport)
+{
+	struct iscsi_np *np;
+	struct iscsi_tpg_np *tpg_np;
+	char *ip_buf;
+	void *ip;
+	int ret = 0;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+
+	if (np_addr->np_flags & NPF_NET_IPV6) {
+		ip_buf = (char *)&np_addr->np_ipv6[0];
+		ip = (void *)&np_addr->np_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np_addr->np_ipv4);
+		ip_buf = &buf_ipv4[0];
+		ip = (void *)&np_addr->np_ipv4;
+	}
+	/*
+	 * If the Network Portal does not currently exist, start it up now.
+	 */
+	np = core_get_np(ip, np_addr->np_port, network_transport);
+	if (!(np)) {
+		np = core_add_np(np_addr, network_transport, &ret);
+		if (!(np))
+			return ERR_PTR(ret);
+	}
+
+	tpg_np = kzalloc(sizeof(struct iscsi_tpg_np), GFP_KERNEL);
+	if (!(tpg_np)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_tpg_np.\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	tpg_np->tpg_np_index	= iscsi_get_new_index(ISCSI_PORTAL_INDEX);
+	INIT_LIST_HEAD(&tpg_np->tpg_np_list);
+	INIT_LIST_HEAD(&tpg_np->tpg_np_child_list);
+	INIT_LIST_HEAD(&tpg_np->tpg_np_parent_list);
+	spin_lock_init(&tpg_np->tpg_np_parent_lock);
+	tpg_np->tpg_np		= np;
+	tpg_np->tpg		= tpg;
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_add_tail(&tpg_np->tpg_np_list, &tpg->tpg_gnp_list);
+	tpg->num_tpg_nps++;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_num_tpg_nps++;
+	spin_unlock(&tpg->tpg_np_lock);
+
+	if (tpg_np_parent) {
+		tpg_np->tpg_np_parent = tpg_np_parent;
+		spin_lock(&tpg_np_parent->tpg_np_parent_lock);
+		list_add_tail(&tpg_np->tpg_np_child_list,
+			&tpg_np_parent->tpg_np_parent_list);
+		spin_unlock(&tpg_np_parent->tpg_np_parent_lock);
+	}
+
+	printk(KERN_INFO "CORE[%s] - Added Network Portal: %s:%hu,%hu on %s on"
+		" network device: %s\n", tpg->tpg_tiqn->tiqn, ip_buf,
+		np->np_port, tpg->tpgt,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP", (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	spin_lock(&np->np_state_lock);
+	np->np_exports++;
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Incremented np_exports to %u\n",
+		tpg->tpg_tiqn->tiqn, tpg->tpgt, np->np_exports);
+	spin_unlock(&np->np_state_lock);
+
+	return tpg_np;
+}
+
+static int iscsi_tpg_release_np(
+	struct iscsi_tpg_np *tpg_np,
+	struct iscsi_portal_group *tpg,
+	struct iscsi_np *np)
+{
+	char *ip;
+	char buf_ipv4[IPV4_BUF_SIZE];
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE)
+		ip = &np->np_ipv6[0];
+	else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+	}
+
+	iscsi_clear_tpg_np_login_thread(tpg_np, tpg, 1);
+
+	printk(KERN_INFO "CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s"
+		" on network device: %s\n", tpg->tpg_tiqn->tiqn, ip,
+		np->np_port, tpg->tpgt,
+		(np->np_network_transport == ISCSI_TCP) ?
+		"TCP" : "SCTP",  (strlen(np->np_net_dev)) ?
+		(char *)np->np_net_dev : "None");
+
+	tpg_np->tpg_np = NULL;
+	tpg_np->tpg = NULL;
+	kfree(tpg_np);
+
+	/*
+	 * Shutdown Network Portal when last TPG reference is released.
+	 */
+	spin_lock(&np->np_state_lock);
+	if ((--np->np_exports == 0) && !(ISCSI_TPG_ATTRIB(tpg)->cache_core_nps))
+		atomic_set(&np->np_shutdown, 1);
+	printk(KERN_INFO "CORE[%s]_TPG[%hu] - Decremented np_exports to %u\n",
+		tpg->tpg_tiqn->tiqn, tpg->tpgt, np->np_exports);
+	spin_unlock(&np->np_state_lock);
+
+	if (atomic_read(&np->np_shutdown))
+		core_del_np(np);
+
+	return 0;
+}
+
+/*	iscsi_tpg_del_network_portal():
+ *
+ *
+ */
+int iscsi_tpg_del_network_portal(
+	struct iscsi_portal_group *tpg,
+	struct iscsi_tpg_np *tpg_np)
+{
+	struct iscsi_np *np;
+	struct iscsi_tpg_np *tpg_np_child, *tpg_np_child_tmp;
+	int ret = 0;
+
+	np = tpg_np->tpg_np;
+	if (!(np)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_np from"
+				" struct iscsi_tpg_np\n");
+		return -EINVAL;
+	}
+
+	if (!tpg_np->tpg_np_parent) {
+		/*
+		 * We are the parent tpg network portal.  Release all of the
+		 * child tpg_np's (eg: the non ISCSI_TCP ones) on our parent
+		 * list first.
+		 */
+		list_for_each_entry_safe(tpg_np_child, tpg_np_child_tmp,
+				&tpg_np->tpg_np_parent_list,
+				tpg_np_child_list) {
+			ret = iscsi_tpg_del_network_portal(tpg, tpg_np_child);
+			if (ret < 0)
+				printk(KERN_ERR "iscsi_tpg_del_network_portal()"
+					" failed: %d\n", ret);
+		}
+	} else {
+		/*
+		 * We are not the parent ISCSI_TCP tpg network portal.  Release
+		 * our own network portals from the child list.
+		 */
+		spin_lock(&tpg_np->tpg_np_parent->tpg_np_parent_lock);
+		list_del(&tpg_np->tpg_np_child_list);
+		spin_unlock(&tpg_np->tpg_np_parent->tpg_np_parent_lock);
+	}
+
+	spin_lock(&tpg->tpg_np_lock);
+	list_del(&tpg_np->tpg_np_list);
+	tpg->num_tpg_nps--;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_num_tpg_nps--;
+	spin_unlock(&tpg->tpg_np_lock);
+
+	return iscsi_tpg_release_np(tpg_np, tpg, np);
+}
+
+/*	iscsi_tpg_set_initiator_node_queue_depth():
+ *
+ *
+ */
+int iscsi_tpg_set_initiator_node_queue_depth(
+	struct iscsi_portal_group *tpg,
+	unsigned char *initiatorname,
+	u32 queue_depth,
+	int force)
+{
+	return core_tpg_set_initiator_node_queue_depth(&tpg->tpg_se_tpg,
+		initiatorname, queue_depth, force);
+}
+
+/*	iscsi_ta_authentication():
+ *
+ *
+ */
+int iscsi_ta_authentication(struct iscsi_portal_group *tpg, u32 authentication)
+{
+	unsigned char buf1[256], buf2[256], *none = NULL;
+	int len;
+	struct iscsi_param *param;
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((authentication != 1) && (authentication != 0)) {
+		printk(KERN_ERR "Illegal value for authentication parameter:"
+			" %u, ignoring request.\n", authentication);
+		return -1;
+	}
+
+	memset(buf1, 0, sizeof(buf1));
+	memset(buf2, 0, sizeof(buf2));
+
+	param = iscsi_find_param_from_key(AUTHMETHOD, tpg->param_list);
+	if (!(param))
+		return -EINVAL;
+
+	if (authentication) {
+		snprintf(buf1, sizeof(buf1), "%s", param->value);
+		none = strstr(buf1, NONE);
+		if (!(none))
+			goto out;
+		if (!strncmp(none + 4, ",", 1)) {
+			if (!strcmp(buf1, none))
+				sprintf(buf2, "%s", none+5);
+			else {
+				none--;
+				*none = '\0';
+				len = sprintf(buf2, "%s", buf1);
+				none += 5;
+				sprintf(buf2 + len, "%s", none);
+			}
+		} else {
+			none--;
+			*none = '\0';
+			sprintf(buf2, "%s", buf1);
+		}
+		if (iscsi_update_param_value(param, buf2) < 0)
+			return -EINVAL;
+	} else {
+		snprintf(buf1, sizeof(buf1), "%s", param->value);
+		none = strstr(buf1, NONE);
+		if ((none))
+			goto out;
+		strncat(buf1, ",", strlen(","));
+		strncat(buf1, NONE, strlen(NONE));
+		if (iscsi_update_param_value(param, buf1) < 0)
+			return -EINVAL;
+	}
+
+out:
+	a->authentication = authentication;
+	printk(KERN_INFO "%s iSCSI Authentication Methods for TPG: %hu.\n",
+		a->authentication ? "Enforcing" : "Disabling", tpg->tpgt);
+
+	return 0;
+}
+
+/*	iscsi_ta_login_timeout():
+ *
+ *
+ */
+int iscsi_ta_login_timeout(
+	struct iscsi_portal_group *tpg,
+	u32 login_timeout)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if (login_timeout > TA_LOGIN_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested Login Timeout %u larger than maximum"
+			" %u\n", login_timeout, TA_LOGIN_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (login_timeout < TA_LOGIN_TIMEOUT_MIN) {
+		printk(KERN_ERR "Requested Login Timeout %u smaller than"
+			" minimum %u\n", login_timeout, TA_LOGIN_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->login_timeout = login_timeout;
+	printk(KERN_INFO "Set Login Timeout to %u for Target Portal Group"
+		" %hu\n", a->login_timeout, tpg->tpgt);
+
+	return 0;
+}
+
+/*	iscsi_ta_netif_timeout():
+ *
+ *
+ */
+int iscsi_ta_netif_timeout(
+	struct iscsi_portal_group *tpg,
+	u32 netif_timeout)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if (netif_timeout > TA_NETIF_TIMEOUT_MAX) {
+		printk(KERN_ERR "Requested Network Interface Timeout %u larger"
+			" than maximum %u\n", netif_timeout,
+				TA_NETIF_TIMEOUT_MAX);
+		return -EINVAL;
+	} else if (netif_timeout < TA_NETIF_TIMEOUT_MIN) {
+		printk(KERN_ERR "Requested Network Interface Timeout %u smaller"
+			" than minimum %u\n", netif_timeout,
+				TA_NETIF_TIMEOUT_MIN);
+		return -EINVAL;
+	}
+
+	a->netif_timeout = netif_timeout;
+	printk(KERN_INFO "Set Network Interface Timeout to %u for"
+		" Target Portal Group %hu\n", a->netif_timeout, tpg->tpgt);
+
+	return 0;
+}
+
+int iscsi_ta_generate_node_acls(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->generate_node_acls = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Generate Initiator Portal Group ACLs: %s\n",
+		tpg->tpgt, (a->generate_node_acls) ? "Enabled" : "Disabled");
+
+	return 0;
+}
+
+int iscsi_ta_default_cmdsn_depth(
+	struct iscsi_portal_group *tpg,
+	u32 tcq_depth)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if (tcq_depth > TA_DEFAULT_CMDSN_DEPTH_MAX) {
+		printk(KERN_ERR "Requested Default Queue Depth: %u larger"
+			" than maximum %u\n", tcq_depth,
+				TA_DEFAULT_CMDSN_DEPTH_MAX);
+		return -EINVAL;
+	} else if (tcq_depth < TA_DEFAULT_CMDSN_DEPTH_MIN) {
+		printk(KERN_ERR "Requested Default Queue Depth: %u smaller"
+			" than minimum %u\n", tcq_depth,
+				TA_DEFAULT_CMDSN_DEPTH_MIN);
+		return -EINVAL;
+	}
+
+	a->default_cmdsn_depth = tcq_depth;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Set Default CmdSN TCQ Depth to %u\n",
+		tpg->tpgt, a->default_cmdsn_depth);
+
+	return 0;
+}
+
+int iscsi_ta_cache_dynamic_acls(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->cache_dynamic_acls = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Cache Dynamic Initiator Portal Group"
+		" ACLs %s\n", tpg->tpgt, (a->cache_dynamic_acls) ?
+		"Enabled" : "Disabled");
+
+	return 0;
+}
+
+int iscsi_ta_demo_mode_write_protect(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->demo_mode_write_protect = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Demo Mode Write Protect bit: %s\n",
+		tpg->tpgt, (a->demo_mode_write_protect) ? "ON" : "OFF");
+
+	return 0;
+}
+
+int iscsi_ta_prod_mode_write_protect(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->prod_mode_write_protect = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - Production Mode Write Protect bit:"
+		" %s\n", tpg->tpgt, (a->prod_mode_write_protect) ?
+		"ON" : "OFF");
+
+	return 0;
+}
+
+int iscsi_ta_crc32c_x86_offload(
+	struct iscsi_portal_group *tpg,
+	u32 flag)
+{
+	struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+	if ((flag != 0) && (flag != 1)) {
+		printk(KERN_ERR "Illegal value %d\n", flag);
+		return -EINVAL;
+	}
+
+	a->crc32c_x86_offload = flag;
+	printk(KERN_INFO "iSCSI_TPG[%hu] - CRC32C x86 Offload: %s\n",
+		tpg->tpgt, (a->crc32c_x86_offload) ? "ON" : "OFF");
+
+	return 0;
+}
+
+void iscsi_disable_tpgs(struct iscsi_tiqn *tiqn)
+{
+	struct iscsi_portal_group *tpg;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+
+		spin_lock(&tpg->tpg_state_lock);
+		if ((tpg->tpg_state == TPG_STATE_FREE) ||
+		    (tpg->tpg_state == TPG_STATE_INACTIVE)) {
+			spin_unlock(&tpg->tpg_state_lock);
+			continue;
+		}
+		spin_unlock(&tpg->tpg_state_lock);
+		spin_unlock(&tiqn->tiqn_tpg_lock);
+
+		iscsi_tpg_disable_portal_group(tpg, 1);
+
+		spin_lock(&tiqn->tiqn_tpg_lock);
+	}
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+}
+
+/*	iscsi_disable_all_tpgs():
+ *
+ *
+ */
+void iscsi_disable_all_tpgs(void)
+{
+	struct iscsi_tiqn *tiqn;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		spin_unlock(&iscsi_global->tiqn_lock);
+		iscsi_disable_tpgs(tiqn);
+		spin_lock(&iscsi_global->tiqn_lock);
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+}
+
+void iscsi_remove_tpgs(struct iscsi_tiqn *tiqn)
+{
+	struct iscsi_portal_group *tpg, *tpg_tmp;
+
+	spin_lock(&tiqn->tiqn_tpg_lock);
+	list_for_each_entry_safe(tpg, tpg_tmp, &tiqn->tiqn_tpg_list, tpg_list) {
+
+		spin_lock(&tpg->tpg_state_lock);
+		if (tpg->tpg_state == TPG_STATE_FREE) {
+			spin_unlock(&tpg->tpg_state_lock);
+			continue;
+		}
+		spin_unlock(&tpg->tpg_state_lock);
+		spin_unlock(&tiqn->tiqn_tpg_lock);
+
+		iscsi_tpg_del_portal_group(tiqn, tpg, 1);
+
+		spin_lock(&tiqn->tiqn_tpg_lock);
+	}
+	spin_unlock(&tiqn->tiqn_tpg_lock);
+}
+
+/*	iscsi_remove_all_tpgs():
+ *
+ *
+ */
+void iscsi_remove_all_tpgs(void)
+{
+	struct iscsi_tiqn *tiqn;
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		spin_unlock(&iscsi_global->tiqn_lock);
+		iscsi_remove_tpgs(tiqn);
+		spin_lock(&iscsi_global->tiqn_lock);
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+}
diff --git a/drivers/target/iscsi/iscsi_target_tpg.h b/drivers/target/iscsi/iscsi_target_tpg.h
new file mode 100644
index 0000000..bcdfacb
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_tpg.h
@@ -0,0 +1,71 @@
+#ifndef ISCSI_TARGET_TPG_H
+#define ISCSI_TARGET_TPG_H
+
+extern char *lio_tpg_get_endpoint_wwn(struct se_portal_group *);
+extern u16 lio_tpg_get_tag(struct se_portal_group *);
+extern u32 lio_tpg_get_default_depth(struct se_portal_group *);
+extern int lio_tpg_check_demo_mode(struct se_portal_group *);
+extern int lio_tpg_check_demo_mode_cache(struct se_portal_group *);
+extern int lio_tpg_check_demo_mode_write_protect(struct se_portal_group *);
+extern int lio_tpg_check_prod_mode_write_protect(struct se_portal_group *);
+extern struct se_node_acl *lio_tpg_alloc_fabric_acl(struct se_portal_group *);
+extern void lio_tpg_release_fabric_acl(struct se_portal_group *,
+			struct se_node_acl *);
+extern int lio_tpg_shutdown_session(struct se_session *);
+extern void lio_tpg_close_session(struct se_session *);
+extern void lio_tpg_stop_session(struct se_session *, int, int);
+extern void lio_tpg_fall_back_to_erl0(struct se_session *);
+extern u32 lio_tpg_get_inst_index(struct se_portal_group *);
+extern void lio_set_default_node_attributes(struct se_node_acl *);
+
+extern struct iscsi_portal_group *core_alloc_portal_group(struct iscsi_tiqn *, u16);
+extern int core_load_discovery_tpg(void);
+extern void core_release_discovery_tpg(void);
+extern struct iscsi_portal_group *core_get_tpg_from_np(struct iscsi_tiqn *,
+			struct iscsi_np *);
+extern int iscsi_get_tpg(struct iscsi_portal_group *);
+extern void iscsi_put_tpg(struct iscsi_portal_group *);
+extern void iscsi_clear_tpg_np_login_threads(struct iscsi_portal_group *, int);
+extern void iscsi_tpg_dump_params(struct iscsi_portal_group *);
+extern int iscsi_tpg_add_portal_group(struct iscsi_tiqn *, struct iscsi_portal_group *);
+extern int iscsi_tpg_del_portal_group(struct iscsi_tiqn *, struct iscsi_portal_group *,
+			int);
+extern int iscsi_tpg_enable_portal_group(struct iscsi_portal_group *);
+extern int iscsi_tpg_disable_portal_group(struct iscsi_portal_group *, int);
+extern struct iscsi_node_acl *iscsi_tpg_add_initiator_node_acl(
+			struct iscsi_portal_group *, const char *, u32);
+extern void iscsi_tpg_del_initiator_node_acl(struct iscsi_portal_group *,
+			struct se_node_acl *);
+extern struct iscsi_node_attrib *iscsi_tpg_get_node_attrib(struct iscsi_session *);
+extern void iscsi_tpg_del_external_nps(struct iscsi_tpg_np *);
+extern struct iscsi_tpg_np *iscsi_tpg_locate_child_np(struct iscsi_tpg_np *, int);
+extern struct iscsi_tpg_np *iscsi_tpg_add_network_portal(struct iscsi_portal_group *,
+			struct iscsi_np_addr *, struct iscsi_tpg_np *, int);
+extern int iscsi_tpg_del_network_portal(struct iscsi_portal_group *,
+			struct iscsi_tpg_np *);
+extern int iscsi_tpg_set_initiator_node_queue_depth(struct iscsi_portal_group *,
+			unsigned char *, u32, int);
+extern int iscsi_ta_authentication(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_login_timeout(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_netif_timeout(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_generate_node_acls(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_default_cmdsn_depth(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_cache_dynamic_acls(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_demo_mode_write_protect(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_prod_mode_write_protect(struct iscsi_portal_group *, u32);
+extern int iscsi_ta_crc32c_x86_offload(struct iscsi_portal_group *, u32);
+extern void iscsi_disable_tpgs(struct iscsi_tiqn *);
+extern void iscsi_disable_all_tpgs(void);
+extern void iscsi_remove_tpgs(struct iscsi_tiqn *);
+extern void iscsi_remove_all_tpgs(void);
+
+extern struct iscsi_global *iscsi_global;
+extern struct target_fabric_configfs *lio_target_fabric_configfs;
+extern struct kmem_cache *lio_tpg_cache;
+
+extern int iscsi_close_session(struct iscsi_session *);
+extern int iscsi_free_session(struct iscsi_session *);
+extern int iscsi_release_sessions_for_tpg(struct iscsi_portal_group *, int);
+extern int iscsi_ta_authentication(struct iscsi_portal_group *, __u32);
+
+#endif /* ISCSI_TARGET_TPG_H */
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 06/12] iscsi-target: Add iSCSI Login Negotiation and Parameter logic
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds the principal RFC-3720 compatible iSCSI Login
phase negotiation for iscsi_target_mod.  This also includes the
iscsi_thread_queue.[c,h] code, which is called directly from the
iSCSI login-associated code.
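
As background for the negotiation code below: iSCSI login parameters are
exchanged as "Key=Value" text pairs carried in the Login PDU data segment
(RFC 3720), and iscsi_parameters.c negotiates over those pairs.  The
sketch below (illustrative userspace C only, not part of this patch; the
function name is made up) shows the basic split of one such pair.

/* Hedged sketch: splits one "Key=Value" login text pair in place. */
#include <stdio.h>
#include <string.h>

static int example_split_keyvalue(char *pair, char **key, char **value)
{
	char *sep = strchr(pair, '=');

	if (!sep)
		return -1;	/* malformed pair: no '=' separator */
	*sep = '\0';		/* terminate the key in place */
	*key = pair;
	*value = sep + 1;
	return 0;
}

int main(void)
{
	char pair[] = "MaxRecvDataSegmentLength=8192";
	char *key, *value;

	if (example_split_keyvalue(pair, &key, &value) == 0)
		printf("negotiate %s -> %s\n", key, value);
	return 0;
}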

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_parameters.c   | 2078 +++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_parameters.h   |  271 ++++
 drivers/target/iscsi/iscsi_target_login.c | 1411 ++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_login.h |   15 +
 drivers/target/iscsi/iscsi_target_nego.c  | 1116 ++++++++++++++++
 drivers/target/iscsi/iscsi_target_nego.h  |   20 +
 drivers/target/iscsi/iscsi_thread_queue.c |  635 +++++++++
 drivers/target/iscsi/iscsi_thread_queue.h |  103 ++
 8 files changed, 5649 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_parameters.c
 create mode 100644 drivers/target/iscsi/iscsi_parameters.h
 create mode 100644 drivers/target/iscsi/iscsi_target_login.c
 create mode 100644 drivers/target/iscsi/iscsi_target_login.h
 create mode 100644 drivers/target/iscsi/iscsi_target_nego.c
 create mode 100644 drivers/target/iscsi/iscsi_target_nego.h
 create mode 100644 drivers/target/iscsi/iscsi_thread_queue.c
 create mode 100644 drivers/target/iscsi/iscsi_thread_queue.h

diff --git a/drivers/target/iscsi/iscsi_parameters.c b/drivers/target/iscsi/iscsi_parameters.c
new file mode 100644
index 0000000..81bd7c9
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_parameters.c
@@ -0,0 +1,2078 @@
+/*******************************************************************************
+ * This file contains main functions related to iSCSI Parameter negotiation.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ * 
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_util.h"
+#include "iscsi_parameters.h"
+
+/*	iscsi_login_rx_data():
+ *
+ *
+ */
+int iscsi_login_rx_data(
+	struct iscsi_conn *conn,
+	char *buf,
+	int length,
+	int role)
+{
+	int rx_got;
+	struct iovec iov;
+
+	memset(&iov, 0, sizeof(struct iovec));
+	iov.iov_len	= length;
+	iov.iov_base	= buf;
+
+	/*
+	 * Initial Marker-less Interval.
+	 * Add the values regardless of IFMarker/OFMarker, considering
+	 * it may not be negotiated yet.
+	 */
+	if (role == INITIATOR)
+		conn->if_marker += length;
+	else if (role == TARGET)
+		conn->of_marker += length;
+	else {
+		printk(KERN_ERR "Unknown role: 0x%02x.\n", role);
+		return -1;
+	}
+
+	rx_got = rx_data(conn, &iov, 1, length);
+	if (rx_got != length) {
+		printk(KERN_ERR "rx_data returned %d, expecting %d.\n",
+				rx_got, length);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_login_tx_data():
+ *
+ *
+ */
+int iscsi_login_tx_data(
+	struct iscsi_conn *conn,
+	char *pdu_buf,
+	char *text_buf,
+	int text_length,
+	int role)
+{
+	int length, tx_sent;
+	struct iovec iov[2];
+
+	length = (ISCSI_HDR_LEN + text_length);
+
+	memset(&iov[0], 0, 2 * sizeof(struct iovec));
+	iov[0].iov_len		= ISCSI_HDR_LEN;
+	iov[0].iov_base		= pdu_buf;
+	iov[1].iov_len		= text_length;
+	iov[1].iov_base		= text_buf;
+
+	/*
+	 * Initial Marker-less Interval.
+	 * Add the values regardless of IFMarker/OFMarker, considering
+	 * it may not be negotiated yet.
+	 */
+	if (role == INITIATOR)
+		conn->of_marker += length;
+	else if (role == TARGET)
+		conn->if_marker += length;
+	else {
+		printk(KERN_ERR "Unknown role: 0x%02x.\n", role);
+		return -1;
+	}
+
+	tx_sent = tx_data(conn, &iov[0], 2, length);
+	if (tx_sent != length) {
+		printk(KERN_ERR "tx_data returned %d, expecting %d.\n",
+				tx_sent, length);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_dump_connection_ops():
+ *
+ *
+ */
+void iscsi_dump_conn_ops(struct iscsi_conn_ops *conn_ops)
+{
+	printk(KERN_INFO "HeaderDigest: %s\n", (conn_ops->HeaderDigest) ?
+				"CRC32C" : "None");
+	printk(KERN_INFO "DataDigest: %s\n", (conn_ops->DataDigest) ?
+				"CRC32C" : "None");
+	printk(KERN_INFO "MaxRecvDataSegmentLength: %u\n",
+				conn_ops->MaxRecvDataSegmentLength);
+	printk(KERN_INFO "OFMarker: %s\n", (conn_ops->OFMarker) ? "Yes" : "No");
+	printk(KERN_INFO "IFMarker: %s\n", (conn_ops->IFMarker) ? "Yes" : "No");
+	if (conn_ops->OFMarker)
+		printk(KERN_INFO "OFMarkInt: %u\n", conn_ops->OFMarkInt);
+	if (conn_ops->IFMarker)
+		printk(KERN_INFO "IFMarkInt: %u\n", conn_ops->IFMarkInt);
+}
+
+/*	iscsi_dump_session_ops():
+ *
+ *
+ */
+void iscsi_dump_sess_ops(struct iscsi_sess_ops *sess_ops)
+{
+	printk(KERN_INFO "InitiatorName: %s\n", sess_ops->InitiatorName);
+	printk(KERN_INFO "InitiatorAlias: %s\n", sess_ops->InitiatorAlias);
+	printk(KERN_INFO "TargetName: %s\n", sess_ops->TargetName);
+	printk(KERN_INFO "TargetAlias: %s\n", sess_ops->TargetAlias);
+	printk(KERN_INFO "TargetPortalGroupTag: %hu\n",
+			sess_ops->TargetPortalGroupTag);
+	printk(KERN_INFO "MaxConnections: %hu\n", sess_ops->MaxConnections);
+	printk(KERN_INFO "InitialR2T: %s\n",
+			(sess_ops->InitialR2T) ? "Yes" : "No");
+	printk(KERN_INFO "ImmediateData: %s\n", (sess_ops->ImmediateData) ?
+			"Yes" : "No");
+	printk(KERN_INFO "MaxBurstLength: %u\n", sess_ops->MaxBurstLength);
+	printk(KERN_INFO "FirstBurstLength: %u\n", sess_ops->FirstBurstLength);
+	printk(KERN_INFO "DefaultTime2Wait: %hu\n", sess_ops->DefaultTime2Wait);
+	printk(KERN_INFO "DefaultTime2Retain: %hu\n",
+			sess_ops->DefaultTime2Retain);
+	printk(KERN_INFO "MaxOutstandingR2T: %hu\n",
+			sess_ops->MaxOutstandingR2T);
+	printk(KERN_INFO "DataPDUInOrder: %s\n",
+			(sess_ops->DataPDUInOrder) ? "Yes" : "No");
+	printk(KERN_INFO "DataSequenceInOrder: %s\n",
+			(sess_ops->DataSequenceInOrder) ? "Yes" : "No");
+	printk(KERN_INFO "ErrorRecoveryLevel: %hu\n",
+			sess_ops->ErrorRecoveryLevel);
+	printk(KERN_INFO "SessionType: %s\n", (sess_ops->SessionType) ?
+			"Discovery" : "Normal");
+}
+
+/*	iscsi_print_params():
+ *
+ *
+ */
+void iscsi_print_params(struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	list_for_each_entry(param, &param_list->param_list, p_list)
+		printk(KERN_INFO "%s: %s\n", param->name, param->value);
+}
+
+/*	iscsi_set_default_param():
+ *
+ *
+ */
+static struct iscsi_param *iscsi_set_default_param(struct iscsi_param_list *param_list,
+		char *name, char *value, u8 phase, u8 scope, u8 sender,
+		u16 type_range, u8 use)
+{
+	struct iscsi_param *param = NULL;
+
+	param = kzalloc(sizeof(struct iscsi_param), GFP_KERNEL);
+	if (!(param)) {
+		printk(KERN_ERR "Unable to allocate memory for parameter.\n");
+		goto out;
+	}
+	INIT_LIST_HEAD(&param->p_list);
+
+	param->name = kzalloc(strlen(name) + 1, GFP_KERNEL);
+	if (!(param->name)) {
+		printk(KERN_ERR "Unable to allocate memory for parameter name.\n");
+		goto out;
+	}
+
+	param->value = kzalloc(strlen(value) + 1, GFP_KERNEL);
+	if (!(param->value)) {
+		printk(KERN_ERR "Unable to allocate memory for parameter value.\n");
+		goto out;
+	}
+
+	memcpy(param->name, name, strlen(name));
+	param->name[strlen(name)] = '\0';
+	memcpy(param->value, value, strlen(value));
+	param->value[strlen(value)] = '\0';
+	param->phase		= phase;
+	param->scope		= scope;
+	param->sender		= sender;
+	param->use		= use;
+	param->type_range	= type_range;
+
+	switch (param->type_range) {
+	case TYPERANGE_BOOL_AND:
+		param->type = TYPE_BOOL_AND;
+		break;
+	case TYPERANGE_BOOL_OR:
+		param->type = TYPE_BOOL_OR;
+		break;
+	case TYPERANGE_0_TO_2:
+	case TYPERANGE_0_TO_3600:
+	case TYPERANGE_0_TO_32767:
+	case TYPERANGE_0_TO_65535:
+	case TYPERANGE_1_TO_65535:
+	case TYPERANGE_2_TO_3600:
+	case TYPERANGE_512_TO_16777215:
+		param->type = TYPE_NUMBER;
+		break;
+	case TYPERANGE_AUTH:
+	case TYPERANGE_DIGEST:
+		param->type = TYPE_VALUE_LIST | TYPE_STRING;
+		break;
+	case TYPERANGE_MARKINT:
+		param->type = TYPE_NUMBER_RANGE;
+		param->type_range |= TYPERANGE_1_TO_65535;
+		break;
+	case TYPERANGE_ISCSINAME:
+	case TYPERANGE_SESSIONTYPE:
+	case TYPERANGE_TARGETADDRESS:
+	case TYPERANGE_UTF8:
+		param->type = TYPE_STRING;
+		break;
+	default:
+		printk(KERN_ERR "Unknown type_range 0x%02x\n",
+				param->type_range);
+		goto out;
+	}
+	list_add_tail(&param->p_list, &param_list->param_list);
+
+	return param;
+out:
+	if (param) {
+		kfree(param->value);
+		kfree(param->name);
+		kfree(param);
+	}
+
+	return NULL;
+}
+
+/*	iscsi_set_default_params():
+ *
+ *
+ */
+/* #warning Add extension keys */
+int iscsi_create_default_params(struct iscsi_param_list **param_list_ptr)
+{
+	struct iscsi_param *param = NULL;
+	struct iscsi_param_list *pl;
+
+	pl = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL);
+	if (!(pl)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_param_list.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&pl->param_list);
+	INIT_LIST_HEAD(&pl->extra_response_list);
+
+	/*
+	 * The format for setting the initial parameter definitions is:
+	 *
+	 * Parameter name:
+	 * Initial value:
+	 * Allowable phase:
+	 * Scope:
+	 * Allowable senders:
+	 * Typerange:
+	 * Use:
+	 */
+	param = iscsi_set_default_param(pl, AUTHMETHOD, INITIAL_AUTHMETHOD,
+			PHASE_SECURITY, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_AUTH, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, HEADERDIGEST, INITIAL_HEADERDIGEST,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_DIGEST, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DATADIGEST, INITIAL_DATADIGEST,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_DIGEST, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXCONNECTIONS,
+			INITIAL_MAXCONNECTIONS, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_1_TO_65535, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, SENDTARGETS, INITIAL_SENDTARGETS,
+			PHASE_FFP0, SCOPE_SESSION_WIDE, SENDER_INITIATOR,
+			TYPERANGE_UTF8, 0);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETNAME, INITIAL_TARGETNAME,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_ISCSINAME, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, INITIATORNAME,
+			INITIAL_INITIATORNAME, PHASE_DECLARATIVE,
+			SCOPE_SESSION_WIDE, SENDER_INITIATOR,
+			TYPERANGE_ISCSINAME, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETALIAS, INITIAL_TARGETALIAS,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_TARGET,
+			TYPERANGE_UTF8, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, INITIATORALIAS,
+			INITIAL_INITIATORALIAS, PHASE_DECLARATIVE,
+			SCOPE_SESSION_WIDE, SENDER_INITIATOR, TYPERANGE_UTF8,
+			USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETADDRESS,
+			INITIAL_TARGETADDRESS, PHASE_DECLARATIVE,
+			SCOPE_SESSION_WIDE, SENDER_TARGET,
+			TYPERANGE_TARGETADDRESS, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETPORTALGROUPTAG,
+			INITIAL_TARGETPORTALGROUPTAG,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_TARGET,
+			TYPERANGE_0_TO_65535, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, INITIALR2T, INITIAL_INITIALR2T,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_BOOL_OR, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, IMMEDIATEDATA,
+			INITIAL_IMMEDIATEDATA, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH, TYPERANGE_BOOL_AND,
+			USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXRECVDATASEGMENTLENGTH,
+			INITIAL_MAXRECVDATASEGMENTLENGTH,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_512_TO_16777215, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXBURSTLENGTH,
+			INITIAL_MAXBURSTLENGTH, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_512_TO_16777215, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, FIRSTBURSTLENGTH,
+			INITIAL_FIRSTBURSTLENGTH,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_512_TO_16777215, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DEFAULTTIME2WAIT,
+			INITIAL_DEFAULTTIME2WAIT,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_0_TO_3600, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DEFAULTTIME2RETAIN,
+			INITIAL_DEFAULTTIME2RETAIN,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_0_TO_3600, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXOUTSTANDINGR2T,
+			INITIAL_MAXOUTSTANDINGR2T,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_1_TO_65535, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DATAPDUINORDER,
+			INITIAL_DATAPDUINORDER, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH, TYPERANGE_BOOL_OR,
+			USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DATASEQUENCEINORDER,
+			INITIAL_DATASEQUENCEINORDER,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_BOOL_OR, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, ERRORRECOVERYLEVEL,
+			INITIAL_ERRORRECOVERYLEVEL,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_0_TO_2, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, SESSIONTYPE, INITIAL_SESSIONTYPE,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_INITIATOR,
+			TYPERANGE_SESSIONTYPE, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, IFMARKER, INITIAL_IFMARKER,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_BOOL_AND, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, OFMARKER, INITIAL_OFMARKER,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_BOOL_AND, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, IFMARKINT, INITIAL_IFMARKINT,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_MARKINT, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, OFMARKINT, INITIAL_OFMARKINT,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_MARKINT, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	*param_list_ptr = pl;
+	return 0;
+out:
+	iscsi_release_param_list(pl);
+	return -1;
+}
+
+/*	iscsi_set_keys_to_negotiate():
+ *
+ *
+ */
+int iscsi_set_keys_to_negotiate(
+	int role,
+	int sessiontype,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		param->state = 0;
+		if (!strcmp(param->name, AUTHMETHOD)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, HEADERDIGEST)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DATADIGEST)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXCONNECTIONS)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, TARGETNAME)) {
+			if ((role == INITIATOR) && (sessiontype)) {
+				SET_PSTATE_NEGOTIATE(param);
+				SET_USE_INITIAL_ONLY(param);
+			}
+		} else if (!strcmp(param->name, INITIATORNAME)) {
+			if (role == INITIATOR)
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, TARGETALIAS)) {
+			if ((role == TARGET) && (param->value))
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, INITIATORALIAS)) {
+			if ((role == INITIATOR) && (param->value))
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, TARGETPORTALGROUPTAG)) {
+			if (role == TARGET)
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, INITIALR2T)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, IMMEDIATEDATA)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXBURSTLENGTH)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, FIRSTBURSTLENGTH)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DEFAULTTIME2WAIT)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DEFAULTTIME2RETAIN)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXOUTSTANDINGR2T)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DATAPDUINORDER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DATASEQUENCEINORDER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, ERRORRECOVERYLEVEL)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, SESSIONTYPE)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, IFMARKER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, OFMARKER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, IFMARKINT)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, OFMARKINT)) {
+			SET_PSTATE_NEGOTIATE(param);
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_set_keys_irrelevant_for_discovery():
+ *
+ *
+ */
+int iscsi_set_keys_irrelevant_for_discovery(
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!strcmp(param->name, MAXCONNECTIONS))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, INITIALR2T))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, IMMEDIATEDATA))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, MAXBURSTLENGTH))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, FIRSTBURSTLENGTH))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, MAXOUTSTANDINGR2T))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DATAPDUINORDER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DATASEQUENCEINORDER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, ERRORRECOVERYLEVEL))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DEFAULTTIME2WAIT))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DEFAULTTIME2RETAIN))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, IFMARKER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, OFMARKER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, IFMARKINT))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, OFMARKINT))
+			param->state &= ~PSTATE_NEGOTIATE;
+	}
+
+	return 0;
+}
+
+/*	iscsi_copy_param_list():
+ *
+ *
+ */
+int iscsi_copy_param_list(
+	struct iscsi_param_list **dst_param_list,
+	struct iscsi_param_list *src_param_list,
+	int leading)
+{
+	struct iscsi_param *new_param = NULL, *param = NULL;
+	struct iscsi_param_list *param_list = NULL;
+
+	param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL);
+	if (!(param_list)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_param_list.\n");
+		goto err_out;
+	}
+	INIT_LIST_HEAD(&param_list->param_list);
+	INIT_LIST_HEAD(&param_list->extra_response_list);
+
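+	/*
+	 * For a non-leading connection, only connection scoped keys need
+	 * to be copied; session wide keys (other than TargetName,
+	 * InitiatorName and TargetPortalGroupTag) were already fixed by
+	 * the leading connection and are skipped below.
+	 */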
+	list_for_each_entry(param, &src_param_list->param_list, p_list) {
+		if (!leading && (param->scope & SCOPE_SESSION_WIDE)) {
+			if ((strcmp(param->name, "TargetName") != 0) &&
+			    (strcmp(param->name, "InitiatorName") != 0) &&
+			    (strcmp(param->name, "TargetPortalGroupTag") != 0))
+				continue;
+		}
+
+		new_param = kzalloc(sizeof(struct iscsi_param), GFP_KERNEL);
+		if (!(new_param)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_param.\n");
+			goto err_out;
+		}
+
+		new_param->set_param = param->set_param;
+		new_param->phase = param->phase;
+		new_param->scope = param->scope;
+		new_param->sender = param->sender;
+		new_param->type = param->type;
+		new_param->use = param->use;
+		new_param->type_range = param->type_range;
+
+		new_param->name = kzalloc(strlen(param->name) + 1, GFP_KERNEL);
+		if (!(new_param->name)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" parameter name.\n");
+			goto err_out;
+		}
+
+		new_param->value = kzalloc(strlen(param->value) + 1,
+				GFP_KERNEL);
+		if (!(new_param->value)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" parameter value.\n");
+			goto err_out;
+		}
+
+		memcpy(new_param->name, param->name, strlen(param->name));
+		new_param->name[strlen(param->name)] = '\0';
+		memcpy(new_param->value, param->value, strlen(param->value));
+		new_param->value[strlen(param->value)] = '\0';
+
+		list_add_tail(&new_param->p_list, &param_list->param_list);
+	}
+
+	if (!(list_empty(&param_list->param_list)))
+		*dst_param_list = param_list;
+	else {
+		printk(KERN_ERR "No parameters allocated.\n");
+		goto err_out;
+	}
+
+	return 0;
+
+err_out:
+	iscsi_release_param_list(param_list);
+	return -1;
+}
+
+/*	iscsi_release_extra_responses():
+ *
+ *
+ */
+static void iscsi_release_extra_responses(struct iscsi_param_list *param_list)
+{
+	struct iscsi_extra_response *er, *er_tmp;
+
+	list_for_each_entry_safe(er, er_tmp, &param_list->extra_response_list,
+			er_list) {
+		list_del(&er->er_list);
+		kfree(er);
+	}
+}
+
+/*	iscsi_release_param_list():
+ *
+ *
+ */
+void iscsi_release_param_list(struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param, *param_tmp;
+
+	list_for_each_entry_safe(param, param_tmp, &param_list->param_list,
+			p_list) {
+		list_del(&param->p_list);
+
+		kfree(param->name);
+		param->name = NULL;
+		kfree(param->value);
+		param->value = NULL;
+		kfree(param);
+		param = NULL;
+	}
+
+	iscsi_release_extra_responses(param_list);
+
+	kfree(param_list);
+}
+
+/*	iscsi_find_param_from_key():
+ *
+ *
+ */
+struct iscsi_param *iscsi_find_param_from_key(
+	char *key,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	if (!key || !param_list) {
+		printk(KERN_ERR "Key or parameter list pointer is NULL.\n");
+		return NULL;
+	}
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!strcmp(key, param->name))
+			return param;
+	}
+
+	printk(KERN_ERR "Unable to locate key \"%s\".\n", key);
+	return NULL;
+}
+
+/*	iscsi_extract_key_value():
+ *
+ *
+ */
+int iscsi_extract_key_value(char *textbuf, char **key, char **value)
+{
+	*value = strchr(textbuf, '=');
+	if (!(*value)) {
+		printk(KERN_ERR "Unable to locate \"=\" separator for key,"
+				" ignoring request.\n");
+		return -1;
+	}
+
+	*key = textbuf;
+	**value = '\0';
+	*value = *value + 1;
+
+	return 0;
+}
+
+/*	iscsi_update_param_value():
+ *
+ *
+ */
+int iscsi_update_param_value(struct iscsi_param *param, char *value)
+{
+	kfree(param->value);
+
+	param->value = kzalloc(strlen(value) + 1, GFP_KERNEL);
+	if (!(param->value)) {
+		printk(KERN_ERR "Unable to allocate memory for value.\n");
+		return -1;
+	}
+
+	memcpy(param->value, value, strlen(value));
+	param->value[strlen(value)] = '\0';
+
+	TRACE(TRACE_PARAM, "iSCSI Parameter updated to %s=%s\n",
+			param->name, param->value);
+	return 0;
+}
+
+/*	iscsi_add_notunderstood_response():
+ *
+ *
+ */
+static int iscsi_add_notunderstood_response(
+	char *key,
+	char *value,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_extra_response *extra_response;
+
+	if (strlen(value) > MAX_KEY_VALUE_LENGTH) {
+		printk(KERN_ERR "Value for notunderstood key \"%s\" exceeds %d,"
+			" protocol error.\n", key, MAX_KEY_VALUE_LENGTH);
+		return -1;
+	}
+
+	extra_response = kzalloc(sizeof(struct iscsi_extra_response), GFP_KERNEL);
+	if (!(extra_response)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_extra_response.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&extra_response->er_list);
+
+	strncpy(extra_response->key, key, strlen(key) + 1);
+	strncpy(extra_response->value, NOTUNDERSTOOD,
+			strlen(NOTUNDERSTOOD) + 1);
+
+	list_add_tail(&extra_response->er_list,
+			&param_list->extra_response_list);
+	return 0;
+}
+
+/*	iscsi_check_for_auth_key():
+ *
+ *
+ */
+static int iscsi_check_for_auth_key(char *key)
+{
+	/*
+	 * RFC 1994
+	 */
+	if (!strcmp(key, "CHAP_A") || !strcmp(key, "CHAP_I") ||
+	    !strcmp(key, "CHAP_C") || !strcmp(key, "CHAP_N") ||
+	    !strcmp(key, "CHAP_R"))
+		return 1;
+
+	/*
+	 * RFC 2945
+	 */
+	if (!strcmp(key, "SRP_U") || !strcmp(key, "SRP_N") ||
+	    !strcmp(key, "SRP_g") || !strcmp(key, "SRP_s") ||
+	    !strcmp(key, "SRP_A") || !strcmp(key, "SRP_B") ||
+	    !strcmp(key, "SRP_M") || !strcmp(key, "SRP_HM"))
+		return 1;
+
+	return 0;
+}
+
+/*	iscsi_check_proposer_for_optional_reply():
+ *
+ *
+ */
+static void iscsi_check_proposer_for_optional_reply(struct iscsi_param *param)
+{
+	if (IS_TYPE_BOOL_AND(param)) {
+		if (!strcmp(param->value, NO))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_TYPE_BOOL_OR(param)) {
+		if (!strcmp(param->value, YES))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		 /*
+		  * Required for gPXE iSCSI boot client
+		  */
+		if (!strcmp(param->name, IMMEDIATEDATA))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_TYPE_NUMBER(param)) {
+		if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		/*
+		 * The GlobalSAN iSCSI Initiator for MacOSX does
+		 * not respond to MaxBurstLength, FirstBurstLength,
+		 * DefaultTime2Wait or DefaultTime2Retain parameter keys.
+		 * So, we set them to 'reply optional' here, and assume
+		 * the defaults from iscsi_parameters.h if the initiator
+		 * is not RFC compliant and the keys are not negotiated.
+		 */
+		if (!strcmp(param->name, MAXBURSTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		if (!strcmp(param->name, FIRSTBURSTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		if (!strcmp(param->name, DEFAULTTIME2WAIT))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		if (!strcmp(param->name, DEFAULTTIME2RETAIN))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		/*
+		 * Required for gPXE iSCSI boot client
+		 */
+		if (!strcmp(param->name, MAXCONNECTIONS))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_PHASE_DECLARATIVE(param))
+		SET_PSTATE_REPLY_OPTIONAL(param);
+}
+
+/*	iscsi_check_boolean_value():
+ *
+ *
+ */
+static int iscsi_check_boolean_value(struct iscsi_param *param, char *value)
+{
+	if (strcmp(value, YES) && strcmp(value, NO)) {
+		printk(KERN_ERR "Illegal value for \"%s\", must be either"
+			" \"%s\" or \"%s\".\n", param->name, YES, NO);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_numerical_value():
+ *
+ *
+ */
+static int iscsi_check_numerical_value(struct iscsi_param *param, char *value_ptr)
+{
+	char *tmpptr;
+	int value = 0;
+
+	value = simple_strtoul(value_ptr, &tmpptr, 0);
+
+/* #warning FIXME: Fix this */
+#if 0
+	if (strspn(endptr, WHITE_SPACE) != strlen(endptr)) {
+		printk(KERN_ERR "Illegal value \"%s\" for \"%s\".\n",
+			value, param->name);
+		return -1;
+	}
+#endif
+	if (IS_TYPERANGE_0_TO_2(param)) {
+		if ((value < 0) || (value > 2)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 2.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_0_TO_3600(param)) {
+		if ((value < 0) || (value > 3600)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 3600.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_0_TO_32767(param)) {
+		if ((value < 0) || (value > 32767)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 32767.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_0_TO_65535(param)) {
+		if ((value < 0) || (value > 65535)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 65535.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_1_TO_65535(param)) {
+		if ((value < 1) || (value > 65535)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 1 and 65535.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_2_TO_3600(param)) {
+		if ((value < 2) || (value > 3600)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 2 and 3600.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_512_TO_16777215(param)) {
+		if ((value < 512) || (value > 16777215)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 512 and 16777215.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_numerical_range_value():
+ *
+ *
+ */
+static int iscsi_check_numerical_range_value(struct iscsi_param *param, char *value)
+{
+	char *left_val_ptr = NULL, *right_val_ptr = NULL;
+	char *tilde_ptr = NULL, *tmp_ptr = NULL;
+	u32 left_val, right_val, local_left_val, local_right_val;
+
+	if ((strcmp(param->name, IFMARKINT)) &&
+			(strcmp(param->name, OFMARKINT))) {
+		printk(KERN_ERR "Only parameters \"%s\" or \"%s\" may contain a"
+			" numerical range value.\n", IFMARKINT, OFMARKINT);
+		return -1;
+	}
+
+	if (IS_PSTATE_PROPOSER(param))
+		return 0;
+
+	tilde_ptr = strchr(value, '~');
+	if (!(tilde_ptr)) {
+		printk(KERN_ERR "Unable to locate numerical range indicator"
+			" \"~\" for \"%s\".\n", param->name);
+		return -1;
+	}
+	*tilde_ptr = '\0';
+
+	left_val_ptr = value;
+	right_val_ptr = value + strlen(left_val_ptr) + 1;
+
+	if (iscsi_check_numerical_value(param, left_val_ptr) < 0)
+		return -1;
+	if (iscsi_check_numerical_value(param, right_val_ptr) < 0)
+		return -1;
+
+	left_val = simple_strtoul(left_val_ptr, &tmp_ptr, 0);
+	right_val = simple_strtoul(right_val_ptr, &tmp_ptr, 0);
+	*tilde_ptr = '~';
+
+	if (right_val < left_val) {
+		printk(KERN_ERR "Numerical range for parameter \"%s\" contains"
+			" a right value which is less than the left.\n",
+				param->name);
+		return -1;
+	}
+
+	/*
+	 * For now,  enforce reasonable defaults for [I,O]FMarkInt.
+	 */
+	tilde_ptr = strchr(param->value, '~');
+	if (!(tilde_ptr)) {
+		printk(KERN_ERR "Unable to locate numerical range indicator"
+			" \"~\" for \"%s\".\n", param->name);
+		return -1;
+	}
+	*tilde_ptr = '\0';
+
+	left_val_ptr = param->value;
+	right_val_ptr = param->value + strlen(left_val_ptr) + 1;
+
+	local_left_val = simple_strtoul(left_val_ptr, &tmp_ptr, 0);
+	local_right_val = simple_strtoul(right_val_ptr, &tmp_ptr, 0);
+	*tilde_ptr = '~';
+
+	if (param->set_param) {
+		if ((left_val < local_left_val) ||
+		    (right_val < local_left_val)) {
+			printk(KERN_ERR "Passed value range \"%u~%u\" is below"
+				" minimum left value \"%u\" for key \"%s\","
+				" rejecting.\n", left_val, right_val,
+				local_left_val, param->name);
+			return -1;
+		}
+	} else {
+		if ((left_val < local_left_val) &&
+		    (right_val < local_left_val)) {
+			printk(KERN_ERR "Received value range \"%u~%u\" is"
+				" below minimum left value \"%u\" for key"
+				" \"%s\", rejecting.\n", left_val, right_val,
+				local_left_val, param->name);
+			SET_PSTATE_REJECT(param);
+			if (iscsi_update_param_value(param, REJECT) < 0)
+				return -1;
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_string_or_list_value():
+ *
+ *
+ */
+static int iscsi_check_string_or_list_value(struct iscsi_param *param, char *value)
+{
+	if (IS_PSTATE_PROPOSER(param))
+		return 0;
+
+	if (IS_TYPERANGE_AUTH_PARAM(param)) {
+		if (strcmp(value, KRB5) && strcmp(value, SPKM1) &&
+		    strcmp(value, SPKM2) && strcmp(value, SRP) &&
+		    strcmp(value, CHAP) && strcmp(value, NONE)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" \"%s\", \"%s\", \"%s\", \"%s\", \"%s\""
+				" or \"%s\".\n", param->name, KRB5,
+					SPKM1, SPKM2, SRP, CHAP, NONE);
+			return -1;
+		}
+	}
+	if (IS_TYPERANGE_DIGEST_PARAM(param)) {
+		if (strcmp(value, CRC32C) && strcmp(value, NONE)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" \"%s\" or \"%s\".\n", param->name,
+					CRC32C, NONE);
+			return -1;
+		}
+	}
+	if (IS_TYPERANGE_SESSIONTYPE(param)) {
+		if (strcmp(value, DISCOVERY) && strcmp(value, NORMAL)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" \"%s\" or \"%s\".\n", param->name,
+					DISCOVERY, NORMAL);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_get_value_from_number_range():
+ *
+ *	This function is used to pick a value range number,  currently just
+ *	returns the lesser of both right values.
+ */
+static char *iscsi_get_value_from_number_range(
+	struct iscsi_param *param,
+	char *value)
+{
+	char *end_ptr, *tilde_ptr1 = NULL, *tilde_ptr2 = NULL;
+	u32 acceptor_right_value, proposer_right_value;
+
+	tilde_ptr1 = strchr(value, '~');
+	if (!(tilde_ptr1))
+		return NULL;
+	*tilde_ptr1++ = '\0';
+	proposer_right_value = simple_strtoul(tilde_ptr1, &end_ptr, 0);
+
+	tilde_ptr2 = strchr(param->value, '~');
+	if (!(tilde_ptr2))
+		return NULL;
+	*tilde_ptr2++ = '\0';
+	acceptor_right_value = simple_strtoul(tilde_ptr2, &end_ptr, 0);
+
+	return (acceptor_right_value >= proposer_right_value) ?
+		tilde_ptr1 : tilde_ptr2;
+}
+
+/*	iscsi_check_valuelist_for_support():
+ *
+ *
+ */
+static char *iscsi_check_valuelist_for_support(
+	struct iscsi_param *param,
+	char *value)
+{
+	char *tmp1 = NULL, *tmp2 = NULL;
+	char *acceptor_values = NULL, *proposer_values = NULL;
+
+	acceptor_values = param->value;
+	proposer_values = value;
+
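+	/*
+	 * Walk the proposer's comma separated value list and return the
+	 * first entry that also appears somewhere in the acceptor's list.
+	 * The ',' separators are temporarily NUL terminated while each
+	 * entry is compared and restored afterwards.
+	 */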
+	do {
+		if (!proposer_values)
+			return NULL;
+		tmp1 = strchr(proposer_values, ',');
+		if ((tmp1))
+			*tmp1 = '\0';
+		acceptor_values = param->value;
+		do {
+			if (!acceptor_values) {
+				if (tmp1)
+					*tmp1 = ',';
+				return NULL;
+			}
+			tmp2 = strchr(acceptor_values, ',');
+			if ((tmp2))
+				*tmp2 = '\0';
+			if (!acceptor_values || !proposer_values) {
+				if (tmp1)
+					*tmp1 = ',';
+				if (tmp2)
+					*tmp2 = ',';
+				return NULL;
+			}
+			if (!strcmp(acceptor_values, proposer_values)) {
+				if (tmp2)
+					*tmp2 = ',';
+				goto out;
+			}
+			if (tmp2)
+				*tmp2++ = ',';
+
+			acceptor_values = tmp2;
+			if (!acceptor_values)
+				break;
+		} while (acceptor_values);
+		if (tmp1)
+			*tmp1++ = ',';
+		proposer_values = tmp1;
+	} while (proposer_values);
+
+out:
+	return proposer_values;
+}
+
+/*	iscsi_check_acceptor_state():
+ *
+ *
+ */
+static int iscsi_check_acceptor_state(struct iscsi_param *param, char *value)
+{
+	u8 acceptor_boolean_value = 0, proposer_boolean_value = 0;
+	char *negoitated_value = NULL;
+
+	if (IS_PSTATE_ACCEPTOR(param)) {
+		printk(KERN_ERR "Received key \"%s\" twice, protocol error.\n",
+				param->name);
+		return -1;
+	}
+
+	if (IS_PSTATE_REJECT(param))
+		return 0;
+
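+	/*
+	 * Boolean AND keys only remain "Yes" when both sides offered "Yes",
+	 * while boolean OR keys become "Yes" when either side offered it.
+	 * The reply is marked optional when the proposer's own value already
+	 * determines the result.
+	 */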
+	if (IS_TYPE_BOOL_AND(param)) {
+		if (!strcmp(value, YES))
+			proposer_boolean_value = 1;
+		if (!strcmp(param->value, YES))
+			acceptor_boolean_value = 1;
+		if (acceptor_boolean_value && proposer_boolean_value)
+			do {} while (0);
+		else {
+			if (iscsi_update_param_value(param, NO) < 0)
+				return -1;
+			if (!proposer_boolean_value)
+				SET_PSTATE_REPLY_OPTIONAL(param);
+		}
+	} else if (IS_TYPE_BOOL_OR(param)) {
+		if (!strcmp(value, YES))
+			proposer_boolean_value = 1;
+		if (!strcmp(param->value, YES))
+			acceptor_boolean_value = 1;
+		if (acceptor_boolean_value || proposer_boolean_value) {
+			if (iscsi_update_param_value(param, YES) < 0)
+				return -1;
+			if (proposer_boolean_value)
+				SET_PSTATE_REPLY_OPTIONAL(param);
+		}
+	} else if (IS_TYPE_NUMBER(param)) {
+		char *tmpptr, buf[10];
+		u32 acceptor_value = simple_strtoul(param->value, &tmpptr, 0);
+		u32 proposer_value = simple_strtoul(value, &tmpptr, 0);
+
+		memset(buf, 0, 10);
+
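+		/*
+		 * For most numerical keys the acceptor may only lower the
+		 * proposed value, so the minimum of the two wins.  Per
+		 * RFC-3720, DefaultTime2Wait is instead negotiated to the
+		 * maximum of the offered values.
+		 */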
+		if (!strcmp(param->name, MAXCONNECTIONS) ||
+		    !strcmp(param->name, MAXBURSTLENGTH) ||
+		    !strcmp(param->name, FIRSTBURSTLENGTH) ||
+		    !strcmp(param->name, MAXOUTSTANDINGR2T) ||
+		    !strcmp(param->name, DEFAULTTIME2RETAIN) ||
+		    !strcmp(param->name, ERRORRECOVERYLEVEL)) {
+			if (proposer_value > acceptor_value) {
+				sprintf(buf, "%u", acceptor_value);
+				if (iscsi_update_param_value(param,
+						&buf[0]) < 0)
+					return -1;
+			} else {
+				if (iscsi_update_param_value(param, value) < 0)
+					return -1;
+			}
+		} else if (!strcmp(param->name, DEFAULTTIME2WAIT)) {
+			if (acceptor_value > proposer_value) {
+				sprintf(buf, "%u", acceptor_value);
+				if (iscsi_update_param_value(param,
+						&buf[0]) < 0)
+					return -1;
+			} else {
+				if (iscsi_update_param_value(param, value) < 0)
+					return -1;
+			}
+		} else {
+			if (iscsi_update_param_value(param, value) < 0)
+				return -1;
+		}
+
+		if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_TYPE_NUMBER_RANGE(param)) {
+		negoitated_value = iscsi_get_value_from_number_range(
+					param, value);
+		if (!(negoitated_value))
+			return -1;
+		if (iscsi_update_param_value(param, negoitated_value) < 0)
+			return -1;
+	} else if (IS_TYPE_VALUE_LIST(param)) {
+		negoitated_value = iscsi_check_valuelist_for_support(
+					param, value);
+		if (!(negoitated_value)) {
+			printk(KERN_ERR "Proposer's value list \"%s\" contains"
+				" no valid values from Acceptor's value list"
+				" \"%s\".\n", value, param->value);
+			return -1;
+		}
+		if (iscsi_update_param_value(param, negoitated_value) < 0)
+			return -1;
+	} else if (IS_PHASE_DECLARATIVE(param)) {
+		if (iscsi_update_param_value(param, value) < 0)
+			return -1;
+		SET_PSTATE_REPLY_OPTIONAL(param);
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_proposer_state():
+ *
+ *
+ */
+static int iscsi_check_proposer_state(struct iscsi_param *param, char *value)
+{
+	if (IS_PSTATE_RESPONSE_GOT(param)) {
+		printk(KERN_ERR "Received key \"%s\" twice, protocol error.\n",
+				param->name);
+		return -1;
+	}
+
+	if (IS_TYPE_NUMBER_RANGE(param)) {
+		u32 left_val = 0, right_val = 0, recieved_value = 0;
+		char *left_val_ptr = NULL, *right_val_ptr = NULL;
+		char *tilde_ptr = NULL, *tmp_ptr = NULL;
+
+		if (!strcmp(value, IRRELEVANT) || !strcmp(value, REJECT)) {
+			if (iscsi_update_param_value(param, value) < 0)
+				return -1;
+			return 0;
+		}
+
+		tilde_ptr = strchr(value, '~');
+		if ((tilde_ptr)) {
+			printk(KERN_ERR "Illegal \"~\" in response for \"%s\".\n",
+					param->name);
+			return -1;
+		}
+		tilde_ptr = strchr(param->value, '~');
+		if (!(tilde_ptr)) {
+			printk(KERN_ERR "Unable to locate numerical range"
+				" indicator \"~\" for \"%s\".\n", param->name);
+			return -1;
+		}
+		*tilde_ptr = '\0';
+
+		left_val_ptr = param->value;
+		right_val_ptr = param->value + strlen(left_val_ptr) + 1;
+		left_val = simple_strtoul(left_val_ptr, &tmp_ptr, 0);
+		right_val = simple_strtoul(right_val_ptr, &tmp_ptr, 0);
+		recieved_value = simple_strtoul(value, &tmp_ptr, 0);
+
+		*tilde_ptr = '~';
+
+		if ((recieved_value < left_val) ||
+		    (recieved_value > right_val)) {
+			printk(KERN_ERR "Illegal response \"%s=%u\", value must"
+				" be between %u and %u.\n", param->name,
+				recieved_value, left_val, right_val);
+			return -1;
+		}
+	} else if (IS_TYPE_VALUE_LIST(param)) {
+		char *comma_ptr = NULL, *tmp_ptr = NULL;
+
+		comma_ptr = strchr(value, ',');
+		if ((comma_ptr)) {
+			printk(KERN_ERR "Illegal \",\" in response for \"%s\".\n",
+					param->name);
+			return -1;
+		}
+
+		tmp_ptr = iscsi_check_valuelist_for_support(param, value);
+		if (!(tmp_ptr))
+			return -1;
+	}
+
+	if (iscsi_update_param_value(param, value) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_check_value():
+ *
+ *
+ */
+static int iscsi_check_value(struct iscsi_param *param, char *value)
+{
+	char *comma_ptr = NULL;
+
+	if (!strcmp(value, REJECT)) {
+		if (!strcmp(param->name, IFMARKINT) ||
+		    !strcmp(param->name, OFMARKINT)) {
+			/*
+			 * Reject is not fatal for [I,O]FMarkInt,  and causes
+			 * [I,O]FMarker to be reset to No. (See iSCSI v20 A.3.2)
+			 */
+			SET_PSTATE_REJECT(param);
+			return 0;
+		}
+		printk(KERN_ERR "Received %s=%s\n", param->name, value);
+		return -1;
+	}
+	if (!strcmp(value, IRRELEVANT)) {
+		TRACE(TRACE_LOGIN, "Received %s=%s\n", param->name, value);
+		SET_PSTATE_IRRELEVANT(param);
+		return 0;
+	}
+	if (!strcmp(value, NOTUNDERSTOOD)) {
+		if (!IS_PSTATE_PROPOSER(param)) {
+			printk(KERN_ERR "Received illegal offer %s=%s\n",
+				param->name, value);
+			return -1;
+		}
+
+/* #warning FIXME: Add check for X-ExtensionKey here */
+		printk(KERN_ERR "Standard iSCSI key \"%s\" cannot be answered"
+			" with \"%s\", protocol error.\n", param->name, value);
+		return -1;
+	}
+
+	do {
+		comma_ptr = NULL;
+		comma_ptr = strchr(value, ',');
+
+		if (comma_ptr && !IS_TYPE_VALUE_LIST(param)) {
+			printk(KERN_ERR "Detected value separator \",\", but"
+				" key \"%s\" does not allow a value list,"
+				" protocol error.\n", param->name);
+			return -1;
+		}
+		if (comma_ptr)
+			*comma_ptr = '\0';
+
+		if (strlen(value) > MAX_KEY_VALUE_LENGTH) {
+			printk(KERN_ERR "Value for key \"%s\" exceeds %d,"
+				" protocol error.\n", param->name,
+				MAX_KEY_VALUE_LENGTH);
+			return -1;
+		}
+
+		if (IS_TYPE_BOOL_AND(param) || IS_TYPE_BOOL_OR(param)) {
+			if (iscsi_check_boolean_value(param, value) < 0)
+				return -1;
+		} else if (IS_TYPE_NUMBER(param)) {
+			if (iscsi_check_numerical_value(param, value) < 0)
+				return -1;
+		} else if (IS_TYPE_NUMBER_RANGE(param)) {
+			if (iscsi_check_numerical_range_value(param, value) < 0)
+				return -1;
+		} else if (IS_TYPE_STRING(param) || IS_TYPE_VALUE_LIST(param)) {
+			if (iscsi_check_string_or_list_value(param, value) < 0)
+				return -1;
+		} else {
+			printk(KERN_ERR "Huh? 0x%02x\n", param->type);
+			return -1;
+		}
+
+		if (comma_ptr)
+			*comma_ptr++ = ',';
+
+		value = comma_ptr;
+	} while (value);
+
+	return 0;
+}
+
+/*	__iscsi_check_key()
+ *
+ *
+ */
+static struct iscsi_param *__iscsi_check_key(
+	char *key,
+	int sender,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	if (strlen(key) > MAX_KEY_NAME_LENGTH) {
+		printk(KERN_ERR "Length of key name \"%s\" exceeds %d.\n",
+			key, MAX_KEY_NAME_LENGTH);
+		return NULL;
+	}
+
+	param = iscsi_find_param_from_key(key, param_list);
+	if (!(param))
+		return NULL;
+
+	if ((sender & SENDER_INITIATOR) && !IS_SENDER_INITIATOR(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+			" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "target" : "initiator");
+		return NULL;
+	}
+
+	if ((sender & SENDER_TARGET) && !IS_SENDER_TARGET(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+			" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "initiator" : "target");
+		return NULL;
+	}
+
+	return param;
+}
+
+/*	iscsi_check_key():
+ *
+ *
+ */
+static struct iscsi_param *iscsi_check_key(
+	char *key,
+	int phase,
+	int sender,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	/*
+	 * Key name length must not exceed 63 bytes. (See iSCSI v20 5.1)
+	 */
+	if (strlen(key) > MAX_KEY_NAME_LENGTH) {
+		printk(KERN_ERR "Length of key name \"%s\" exceeds %d.\n",
+			key, MAX_KEY_NAME_LENGTH);
+		return NULL;
+	}
+
+	param = iscsi_find_param_from_key(key, param_list);
+	if (!(param))
+		return NULL;
+
+	if ((sender & SENDER_INITIATOR) && !IS_SENDER_INITIATOR(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+			" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "target" : "initiator");
+		return NULL;
+	}
+	if ((sender & SENDER_TARGET) && !IS_SENDER_TARGET(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+				" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "initiator" : "target");
+		return NULL;
+	}
+
+	if (IS_PSTATE_ACCEPTOR(param)) {
+		printk(KERN_ERR "Key \"%s\" received twice, protocol error.\n",
+				key);
+		return NULL;
+	}
+
+	if (!phase)
+		return param;
+
+	if (!(param->phase & phase)) {
+		printk(KERN_ERR "Key \"%s\" may not be negotiated during ",
+				param->name);
+		switch (phase) {
+		case PHASE_SECURITY:
+			printk(KERN_INFO "Security phase.\n");
+			break;
+		case PHASE_OPERATIONAL:
+			printk(KERN_INFO "Operational phase.\n");
+			break;
+		default:
+			printk(KERN_INFO "Unknown phase.\n");
+		}
+		return NULL;
+	}
+
+	return param;
+}
+
+/*	iscsi_enforce_integrity_rules():
+ *
+ *
+ */
+static int iscsi_enforce_integrity_rules(
+	u8 phase,
+	struct iscsi_param_list *param_list)
+{
+	char *tmpptr;
+	u8 DataSequenceInOrder = 0;
+	u8 ErrorRecoveryLevel = 0, SessionType = 0;
+	u8 IFMarker = 0, OFMarker = 0;
+	u8 IFMarkInt_Reject = 0, OFMarkInt_Reject = 0;
+	u32 FirstBurstLength = 0, MaxBurstLength = 0;
+	struct iscsi_param *param = NULL;
+
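+	/*
+	 * First pass: collect the negotiated values that constrain other
+	 * keys (SessionType, ErrorRecoveryLevel, DataSequenceInOrder,
+	 * MaxBurstLength and the marker keys).
+	 */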
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!(param->phase & phase))
+			continue;
+		if (!strcmp(param->name, SESSIONTYPE))
+			if (!strcmp(param->value, NORMAL))
+				SessionType = 1;
+		if (!strcmp(param->name, ERRORRECOVERYLEVEL))
+			ErrorRecoveryLevel = simple_strtoul(param->value,
+					&tmpptr, 0);
+		if (!strcmp(param->name, DATASEQUENCEINORDER))
+			if (!strcmp(param->value, YES))
+				DataSequenceInOrder = 1;
+		if (!strcmp(param->name, MAXBURSTLENGTH))
+			MaxBurstLength = simple_strtoul(param->value,
+					&tmpptr, 0);
+		if (!strcmp(param->name, IFMARKER))
+			if (!strcmp(param->value, YES))
+				IFMarker = 1;
+		if (!strcmp(param->name, OFMARKER))
+			if (!strcmp(param->value, YES))
+				OFMarker = 1;
+		if (!strcmp(param->name, IFMARKINT))
+			if (!strcmp(param->value, REJECT))
+				IFMarkInt_Reject = 1;
+		if (!strcmp(param->name, OFMARKINT))
+			if (!strcmp(param->value, REJECT))
+				OFMarkInt_Reject = 1;
+	}
+
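+	/*
+	 * Second pass: enforce the RFC-3720 interdependencies, e.g. clamp
+	 * FirstBurstLength to MaxBurstLength, force MaxConnections and
+	 * MaxOutstandingR2T to 1 where required, and reset marker keys
+	 * whose interval was rejected.
+	 */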
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!(param->phase & phase))
+			continue;
+		if (!SessionType && (!IS_PSTATE_ACCEPTOR(param) &&
+		     (strcmp(param->name, IFMARKER) &&
+		      strcmp(param->name, OFMARKER) &&
+		      strcmp(param->name, IFMARKINT) &&
+		      strcmp(param->name, OFMARKINT))))
+			continue;
+		if (!strcmp(param->name, MAXOUTSTANDINGR2T) &&
+		    DataSequenceInOrder && (ErrorRecoveryLevel > 0)) {
+			if (strcmp(param->value, "1")) {
+				if (iscsi_update_param_value(param, "1") < 0)
+					return -1;
+				TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+			}
+		}
+		if (!strcmp(param->name, MAXCONNECTIONS) && !SessionType) {
+			if (strcmp(param->value, "1")) {
+				if (iscsi_update_param_value(param, "1") < 0)
+					return -1;
+				TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+			}
+		}
+		if (!strcmp(param->name, FIRSTBURSTLENGTH)) {
+			FirstBurstLength = simple_strtoul(param->value,
+					&tmpptr, 0);
+			if (FirstBurstLength > MaxBurstLength) {
+				char tmpbuf[10];
+				memset(tmpbuf, 0, 10);
+				sprintf(tmpbuf, "%u", MaxBurstLength);
+				if (iscsi_update_param_value(param, tmpbuf))
+					return -1;
+				TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+			}
+		}
+		if (!strcmp(param->name, IFMARKER) && IFMarkInt_Reject) {
+			if (iscsi_update_param_value(param, NO) < 0)
+				return -1;
+			IFMarker = 0;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+		}
+		if (!strcmp(param->name, OFMARKER) && OFMarkInt_Reject) {
+			if (iscsi_update_param_value(param, NO) < 0)
+				return -1;
+			OFMarker = 0;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					 param->name, param->value);
+		}
+		if (!strcmp(param->name, IFMARKINT) && !IFMarker) {
+			if (!strcmp(param->value, REJECT))
+				continue;
+			param->state &= ~PSTATE_NEGOTIATE;
+			if (iscsi_update_param_value(param, IRRELEVANT) < 0)
+				return -1;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+		}
+		if (!strcmp(param->name, OFMARKINT) && !OFMarker) {
+			if (!strcmp(param->value, REJECT))
+				continue;
+			param->state &= ~PSTATE_NEGOTIATE;
+			if (iscsi_update_param_value(param, IRRELEVANT) < 0)
+				return -1;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_decode_text_input():
+ *
+ *
+ */
+int iscsi_decode_text_input(
+	u8 phase,
+	u8 sender,
+	char *textbuf,
+	u32 length,
+	struct iscsi_param_list *param_list)
+{
+	char *tmpbuf, *start = NULL, *end = NULL;
+
+	tmpbuf = kzalloc(length + 1, GFP_KERNEL);
+	if (!(tmpbuf)) {
+		printk(KERN_ERR "Unable to allocate memory for tmpbuf.\n");
+		return -1;
+	}
+
+	memcpy(tmpbuf, textbuf, length);
+	tmpbuf[length] = '\0';
+	start = tmpbuf;
+	end = (start + length);
+
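+	/*
+	 * The login text payload is a series of NUL terminated key=value
+	 * pairs, so each iteration advances start by strlen(key) +
+	 * strlen(value) + 2 bytes ('=' and '\0').
+	 */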
+	while (start < end) {
+		char *key, *value;
+		struct iscsi_param *param;
+
+		if (iscsi_extract_key_value(start, &key, &value) < 0) {
+			kfree(tmpbuf);
+			return -1;
+		}
+
+		TRACE(TRACE_PARAM, "Got key: %s=%s\n", key, value);
+
+		if (phase & PHASE_SECURITY) {
+			if (iscsi_check_for_auth_key(key) > 0) {
+				char *tmpptr = key + strlen(key);
+				*tmpptr = '=';
+				kfree(tmpbuf);
+				return 1;
+			}
+		}
+
+		param = iscsi_check_key(key, phase, sender, param_list);
+		if (!(param)) {
+			if (iscsi_add_notunderstood_response(key,
+					value, param_list) < 0) {
+				kfree(tmpbuf);
+				return -1;
+			}
+			start += strlen(key) + strlen(value) + 2;
+			continue;
+		}
+		if (iscsi_check_value(param, value) < 0) {
+			kfree(tmpbuf);
+			return -1;
+		}
+
+		start += strlen(key) + strlen(value) + 2;
+
+		if (IS_PSTATE_PROPOSER(param)) {
+			if (iscsi_check_proposer_state(param, value) < 0) {
+				kfree(tmpbuf);
+				return -1;
+			}
+			SET_PSTATE_RESPONSE_GOT(param);
+		} else {
+			if (iscsi_check_acceptor_state(param, value) < 0) {
+				kfree(tmpbuf);
+				return -1;
+			}
+			SET_PSTATE_ACCEPTOR(param);
+		}
+	}
+
+	kfree(tmpbuf);
+	return 0;
+}
+
+/*	iscsi_encode_text_output():
+ *
+ *
+ */
+int iscsi_encode_text_output(
+	u8 phase,
+	u8 sender,
+	char *textbuf,
+	u32 *length,
+	struct iscsi_param_list *param_list)
+{
+	char *output_buf = NULL;
+	struct iscsi_extra_response *er;
+	struct iscsi_param *param;
+
+	output_buf = textbuf + *length;
+
+	if (iscsi_enforce_integrity_rules(phase, param_list) < 0)
+		return -1;
+
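+	/*
+	 * First emit responses for keys the remote side proposed and we
+	 * accepted, then propose any keys still marked for negotiation.
+	 * Each key=value pair is NUL terminated via the extra byte added
+	 * to *length.
+	 */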
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!(param->sender & sender))
+			continue;
+		if (IS_PSTATE_ACCEPTOR(param) &&
+		    !IS_PSTATE_RESPONSE_SENT(param) &&
+		    !IS_PSTATE_REPLY_OPTIONAL(param) &&
+		    (param->phase & phase)) {
+			*length += sprintf(output_buf, "%s=%s",
+				param->name, param->value);
+			*length += 1;
+			output_buf = textbuf + *length;
+			SET_PSTATE_RESPONSE_SENT(param);
+			TRACE(TRACE_PARAM, "Sending key: %s=%s\n",
+				param->name, param->value);
+			continue;
+		}
+		if (IS_PSTATE_NEGOTIATE(param) &&
+		    !IS_PSTATE_ACCEPTOR(param) &&
+		    !IS_PSTATE_PROPOSER(param) &&
+		    (param->phase & phase)) {
+			*length += sprintf(output_buf, "%s=%s",
+				param->name, param->value);
+			*length += 1;
+			output_buf = textbuf + *length;
+			SET_PSTATE_PROPOSER(param);
+			iscsi_check_proposer_for_optional_reply(param);
+			TRACE(TRACE_PARAM, "Sending key: %s=%s\n",
+				param->name, param->value);
+		}
+	}
+
+	list_for_each_entry(er, &param_list->extra_response_list, er_list) {
+		*length += sprintf(output_buf, "%s=%s", er->key, er->value);
+		*length += 1;
+		output_buf = textbuf + *length;
+		TRACE(TRACE_PARAM, "Sending key: %s=%s\n", er->key, er->value);
+	}
+	iscsi_release_extra_responses(param_list);
+
+	return 0;
+}
+
+/*	iscsi_check_negotiated_keys():
+ *
+ *
+ */
+int iscsi_check_negotiated_keys(struct iscsi_param_list *param_list)
+{
+	int ret = 0;
+	struct iscsi_param *param;
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (IS_PSTATE_NEGOTIATE(param) &&
+		    IS_PSTATE_PROPOSER(param) &&
+		    !IS_PSTATE_RESPONSE_GOT(param) &&
+		    !IS_PSTATE_REPLY_OPTIONAL(param) &&
+		    !IS_PHASE_DECLARATIVE(param)) {
+			printk(KERN_ERR "No response for proposed key \"%s\".\n",
+					param->name);
+			ret = -1;
+		}
+	}
+
+	return ret;
+}
+
+/*	iscsi_set_param_value():
+ *
+ *
+ */
+int iscsi_change_param_value(
+	char *keyvalue,
+	int sender,
+	struct iscsi_param_list *param_list,
+	int check_key)
+{
+	char *key = NULL, *value = NULL;
+	struct iscsi_param *param;
+
+	if (iscsi_extract_key_value(keyvalue, &key, &value) < 0)
+		return -1;
+
+	if (!check_key) {
+		param = __iscsi_check_key(keyvalue, sender, param_list);
+		if (!(param))
+			return -1;
+	} else {
+		param = iscsi_check_key(keyvalue, 0, sender, param_list);
+		if (!(param))
+			return -1;
+
+		param->set_param = 1;
+		if (iscsi_check_value(param, value) < 0) {
+			param->set_param = 0;
+			return -1;
+		}
+		param->set_param = 0;
+	}
+
+	if (iscsi_update_param_value(param, value) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_set_connection_parameters():
+ *
+ *
+ */
+void iscsi_set_connection_parameters(
+	struct iscsi_conn_ops *ops,
+	struct iscsi_param_list *param_list)
+{
+	char *tmpptr;
+	struct iscsi_param *param;
+
+	printk(KERN_INFO "---------------------------------------------------"
+			"---------------\n");
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!IS_PSTATE_ACCEPTOR(param) && !IS_PSTATE_PROPOSER(param))
+			continue;
+		if (!strcmp(param->name, AUTHMETHOD)) {
+			printk(KERN_INFO "AuthMethod:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, HEADERDIGEST)) {
+			ops->HeaderDigest = !strcmp(param->value, CRC32C);
+			printk(KERN_INFO "HeaderDigest:                 %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DATADIGEST)) {
+			ops->DataDigest = !strcmp(param->value, CRC32C);
+			printk(KERN_INFO "DataDigest:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH)) {
+			ops->MaxRecvDataSegmentLength =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxRecvDataSegmentLength:     %s\n",
+				param->value);
+		} else if (!strcmp(param->name, OFMARKER)) {
+			ops->OFMarker = !strcmp(param->value, YES);
+			printk(KERN_INFO "OFMarker:                     %s\n",
+				param->value);
+		} else if (!strcmp(param->name, IFMARKER)) {
+			ops->IFMarker = !strcmp(param->value, YES);
+			printk(KERN_INFO "IFMarker:                     %s\n",
+				param->value);
+		} else if (!strcmp(param->name, OFMARKINT)) {
+			ops->OFMarkInt =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "OFMarkInt:                    %s\n",
+				param->value);
+		} else if (!strcmp(param->name, IFMARKINT)) {
+			ops->IFMarkInt =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "IFMarkInt:                    %s\n",
+				param->value);
+		}
+	}
+	printk(KERN_INFO "----------------------------------------------------"
+			"--------------\n");
+}
+
+/*	iscsi_set_session_parameters():
+ *
+ *
+ */
+void iscsi_set_session_parameters(
+	struct iscsi_sess_ops *ops,
+	struct iscsi_param_list *param_list,
+	int leading)
+{
+	char *tmpptr;
+	struct iscsi_param *param;
+
+	printk(KERN_INFO "----------------------------------------------------"
+			"--------------\n");
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!IS_PSTATE_ACCEPTOR(param) && !IS_PSTATE_PROPOSER(param))
+			continue;
+		if (!strcmp(param->name, INITIATORNAME)) {
+			if (!param->value)
+				continue;
+			if (leading)
+				snprintf(ops->InitiatorName,
+						sizeof(ops->InitiatorName),
+						"%s", param->value);
+			printk(KERN_INFO "InitiatorName:                %s\n",
+				param->value);
+		} else if (!strcmp(param->name, INITIATORALIAS)) {
+			if (!param->value)
+				continue;
+			snprintf(ops->InitiatorAlias,
+						sizeof(ops->InitiatorAlias),
+						"%s", param->value);
+			printk(KERN_INFO "InitiatorAlias:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, TARGETNAME)) {
+			if (!param->value)
+				continue;
+			if (leading)
+				snprintf(ops->TargetName,
+						sizeof(ops->TargetName),
+						"%s", param->value);
+			printk(KERN_INFO "TargetName:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, TARGETALIAS)) {
+			if (!param->value)
+				continue;
+			snprintf(ops->TargetAlias, sizeof(ops->TargetAlias),
+					"%s", param->value);
+			printk(KERN_INFO "TargetAlias:                  %s\n",
+				param->value);
+		} else if (!strcmp(param->name, TARGETPORTALGROUPTAG)) {
+			ops->TargetPortalGroupTag =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "TargetPortalGroupTag:         %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXCONNECTIONS)) {
+			ops->MaxConnections =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxConnections:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, INITIALR2T)) {
+			ops->InitialR2T = !strcmp(param->value, YES);
+			 printk(KERN_INFO "InitialR2T:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, IMMEDIATEDATA)) {
+			ops->ImmediateData = !strcmp(param->value, YES);
+			printk(KERN_INFO "ImmediateData:                %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXBURSTLENGTH)) {
+			ops->MaxBurstLength =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxBurstLength:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, FIRSTBURSTLENGTH)) {
+			ops->FirstBurstLength =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "FirstBurstLength:             %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DEFAULTTIME2WAIT)) {
+			ops->DefaultTime2Wait =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "DefaultTime2Wait:             %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DEFAULTTIME2RETAIN)) {
+			ops->DefaultTime2Retain =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "DefaultTime2Retain:           %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXOUTSTANDINGR2T)) {
+			ops->MaxOutstandingR2T =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxOutstandingR2T:            %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DATAPDUINORDER)) {
+			ops->DataPDUInOrder = !strcmp(param->value, YES);
+			printk(KERN_INFO "DataPDUInOrder:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DATASEQUENCEINORDER)) {
+			ops->DataSequenceInOrder = !strcmp(param->value, YES);
+			printk(KERN_INFO "DataSequenceInOrder:          %s\n",
+				param->value);
+		} else if (!strcmp(param->name, ERRORRECOVERYLEVEL)) {
+			ops->ErrorRecoveryLevel =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "ErrorRecoveryLevel:           %s\n",
+				param->value);
+		} else if (!strcmp(param->name, SESSIONTYPE)) {
+			ops->SessionType = !strcmp(param->value, DISCOVERY);
+			printk(KERN_INFO "SessionType:                  %s\n",
+				param->value);
+		}
+	}
+	printk(KERN_INFO "----------------------------------------------------"
+			"--------------\n");
+
+}
+
diff --git a/drivers/target/iscsi/iscsi_parameters.h b/drivers/target/iscsi/iscsi_parameters.h
new file mode 100644
index 0000000..df1de37
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_parameters.h
@@ -0,0 +1,271 @@
+#ifndef ISCSI_PARAMETERS_H
+#define ISCSI_PARAMETERS_H
+
+struct iscsi_extra_response {
+	char key[64];
+	char value[32];
+	struct list_head er_list;
+} ____cacheline_aligned;
+
+struct iscsi_param {
+	char *name;
+	char *value;
+	u8 set_param;
+	u8 phase;
+	u8 scope;
+	u8 sender;
+	u8 type;
+	u8 use;
+	u16 type_range;
+	u32 state;
+	struct list_head p_list;
+} ____cacheline_aligned;
+
+extern struct iscsi_global *iscsi_global;
+
+extern int iscsi_login_rx_data(struct iscsi_conn *, char *, int, int);
+extern int iscsi_login_tx_data(struct iscsi_conn *, char *, char *, int, int);
+extern void iscsi_dump_conn_ops(struct iscsi_conn_ops *);
+extern void iscsi_dump_sess_ops(struct iscsi_sess_ops *);
+extern void iscsi_print_params(struct iscsi_param_list *);
+extern int iscsi_create_default_params(struct iscsi_param_list **);
+extern int iscsi_set_keys_to_negotiate(int, int, struct iscsi_param_list *);
+extern int iscsi_set_keys_irrelevant_for_discovery(struct iscsi_param_list *);
+extern int iscsi_copy_param_list(struct iscsi_param_list **,
+			struct iscsi_param_list *, int);
+extern int iscsi_change_param_value(char *, int, struct iscsi_param_list *, int);
+extern void iscsi_release_param_list(struct iscsi_param_list *);
+extern struct iscsi_param *iscsi_find_param_from_key(char *, struct iscsi_param_list *);
+extern int iscsi_extract_key_value(char *, char **, char **);
+extern int iscsi_update_param_value(struct iscsi_param *, char *);
+extern int iscsi_decode_text_input(u8, u8, char *, u32, struct iscsi_param_list *);
+extern int iscsi_encode_text_output(u8, u8, char *, u32 *,
+			struct iscsi_param_list *);
+extern int iscsi_check_negotiated_keys(struct iscsi_param_list *);
+extern void iscsi_set_connection_parameters(struct iscsi_conn_ops *,
+			struct iscsi_param_list *);
+extern void iscsi_set_session_parameters(struct iscsi_sess_ops *,
+			struct iscsi_param_list *, int);
+
+#define YES				"Yes"
+#define NO				"No"
+#define ALL				"All"
+#define IRRELEVANT			"Irrelevant"
+#define NONE				"None"
+#define NOTUNDERSTOOD			"NotUnderstood"
+#define REJECT				"Reject"
+
+/*
+ * The Parameter Names.
+ */
+#define AUTHMETHOD			"AuthMethod"
+#define HEADERDIGEST			"HeaderDigest"
+#define DATADIGEST			"DataDigest"
+#define MAXCONNECTIONS			"MaxConnections"
+#define SENDTARGETS			"SendTargets"
+#define TARGETNAME			"TargetName"
+#define INITIATORNAME			"InitiatorName"
+#define TARGETALIAS			"TargetAlias"
+#define INITIATORALIAS			"InitiatorAlias"
+#define TARGETADDRESS			"TargetAddress"
+#define TARGETPORTALGROUPTAG		"TargetPortalGroupTag"
+#define INITIALR2T			"InitialR2T"
+#define IMMEDIATEDATA			"ImmediateData"
+#define MAXRECVDATASEGMENTLENGTH	"MaxRecvDataSegmentLength"
+#define MAXBURSTLENGTH			"MaxBurstLength"
+#define FIRSTBURSTLENGTH		"FirstBurstLength"
+#define DEFAULTTIME2WAIT		"DefaultTime2Wait"
+#define DEFAULTTIME2RETAIN		"DefaultTime2Retain"
+#define MAXOUTSTANDINGR2T		"MaxOutstandingR2T"
+#define DATAPDUINORDER  		"DataPDUInOrder"
+#define DATASEQUENCEINORDER		"DataSequenceInOrder"
+#define ERRORRECOVERYLEVEL		"ErrorRecoveryLevel"
+#define SESSIONTYPE			"SessionType"
+#define IFMARKER			"IFMarker"
+#define OFMARKER			"OFMarker"
+#define IFMARKINT			"IFMarkInt"
+#define OFMARKINT			"OFMarkInt"
+#define X_EXTENSIONKEY			"X-com.sbei.version"
+#define X_EXTENSIONKEY_CISCO_NEW	"X-com.cisco.protocol"
+#define X_EXTENSIONKEY_CISCO_OLD	"X-com.cisco.iscsi.draft"
+
+/*
+ * For AuthMethod.
+ */
+#define KRB5				"KRB5"
+#define SPKM1				"SPKM1"
+#define SPKM2				"SPKM2"
+#define SRP				"SRP"
+#define CHAP				"CHAP"
+
+/*
+ * Initial values for Parameter Negotiation.
+ */
+#define INITIAL_AUTHMETHOD			CHAP
+#define INITIAL_HEADERDIGEST			"CRC32C,None"
+#define INITIAL_DATADIGEST			"CRC32C,None"
+#define INITIAL_MAXCONNECTIONS			"1"
+#define INITIAL_SENDTARGETS			ALL
+#define INITIAL_TARGETNAME			"LIO.Target"
+#define INITIAL_INITIATORNAME			"LIO.Initiator"
+#define INITIAL_TARGETALIAS			"LIO Target"
+#define INITIAL_INITIATORALIAS			"LIO Initiator"
+#define INITIAL_TARGETADDRESS			"0.0.0.0:0000,0"
+#define INITIAL_TARGETPORTALGROUPTAG		"1"
+#define INITIAL_INITIALR2T			YES
+#define INITIAL_IMMEDIATEDATA			YES
+#define INITIAL_MAXRECVDATASEGMENTLENGTH	"8192"
+#define INITIAL_MAXBURSTLENGTH			"262144"
+#define INITIAL_FIRSTBURSTLENGTH		"65536"
+#define INITIAL_DEFAULTTIME2WAIT		"2"
+#define INITIAL_DEFAULTTIME2RETAIN		"20"
+#define INITIAL_MAXOUTSTANDINGR2T		"1"
+#define INITIAL_DATAPDUINORDER			YES
+#define INITIAL_DATASEQUENCEINORDER		YES
+#define INITIAL_ERRORRECOVERYLEVEL		"0"
+#define INITIAL_SESSIONTYPE			NORMAL
+#define INITIAL_IFMARKER			NO
+#define INITIAL_OFMARKER			NO
+#define INITIAL_IFMARKINT			"2048~65535"
+#define INITIAL_OFMARKINT			"2048~65535"
+
+/*
+ * For [Header,Data]Digests.
+ */
+#define CRC32C				"CRC32C"
+
+/*
+ * For SessionType.
+ */
+#define DISCOVERY			"Discovery"
+#define NORMAL				"Normal"
+
+/*
+ * struct iscsi_param->use
+ */
+#define USE_LEADING_ONLY		0x01
+#define USE_INITIAL_ONLY		0x02
+#define USE_ALL				0x04
+
+#define IS_USE_LEADING_ONLY(p)		((p)->use & USE_LEADING_ONLY)
+#define IS_USE_INITIAL_ONLY(p)		((p)->use & USE_INITIAL_ONLY)
+#define IS_USE_ALL(p)			((p)->use & USE_ALL)
+
+#define SET_USE_INITIAL_ONLY(p)		((p)->use |= USE_INITIAL_ONLY)
+
+/*
+ * struct iscsi_param->sender
+ */
+#define	SENDER_INITIATOR		0x01
+#define SENDER_TARGET			0x02
+#define SENDER_BOTH			0x03
+/* Used in iscsi_check_key() */
+#define SENDER_RECEIVER			0x04
+
+#define IS_SENDER_INITIATOR(p)		((p)->sender & SENDER_INITIATOR)
+#define IS_SENDER_TARGET(p)		((p)->sender & SENDER_TARGET)
+#define IS_SENDER_BOTH(p)		((p)->sender & SENDER_BOTH)
+
+/*
+ * struct iscsi_param->scope
+ */
+#define SCOPE_CONNECTION_ONLY		0x01
+#define SCOPE_SESSION_WIDE		0x02
+
+#define IS_SCOPE_CONNECTION_ONLY(p)	((p)->scope & SCOPE_CONNECTION_ONLY)
+#define IS_SCOPE_SESSION_WIDE(p)	((p)->scope & SCOPE_SESSION_WIDE)
+
+/*
+ * struct iscsi_param->phase
+ */
+#define PHASE_SECURITY			0x01
+#define PHASE_OPERATIONAL		0x02
+#define PHASE_DECLARATIVE		0x04
+#define PHASE_FFP0			0x08
+
+#define IS_PHASE_SECURITY(p)		((p)->phase & PHASE_SECURITY)
+#define IS_PHASE_OPERATIONAL(p)		((p)->phase & PHASE_OPERATIONAL)
+#define IS_PHASE_DECLARATIVE(p)		((p)->phase & PHASE_DECLARATIVE)
+#define IS_PHASE_FFP0(p)		((p)->phase & PHASE_FFP0)
+
+/*
+ * struct iscsi_param->type
+ */
+#define TYPE_BOOL_AND			0x01
+#define TYPE_BOOL_OR			0x02
+#define TYPE_NUMBER			0x04
+#define TYPE_NUMBER_RANGE		0x08
+#define TYPE_STRING			0x10
+#define TYPE_VALUE_LIST			0x20
+
+#define IS_TYPE_BOOL_AND(p)		((p)->type & TYPE_BOOL_AND)
+#define IS_TYPE_BOOL_OR(p)		((p)->type & TYPE_BOOL_OR)
+#define IS_TYPE_NUMBER(p)		((p)->type & TYPE_NUMBER)
+#define IS_TYPE_NUMBER_RANGE(p)		((p)->type & TYPE_NUMBER_RANGE)
+#define IS_TYPE_STRING(p)		((p)->type & TYPE_STRING)
+#define IS_TYPE_VALUE_LIST(p)		((p)->type & TYPE_VALUE_LIST)
+
+/*
+ * struct iscsi_param->type_range
+ */
+#define TYPERANGE_BOOL_AND		0x0001
+#define TYPERANGE_BOOL_OR		0x0002
+#define TYPERANGE_0_TO_2		0x0004
+#define TYPERANGE_0_TO_3600		0x0008
+#define TYPERANGE_0_TO_32767		0x0010
+#define TYPERANGE_0_TO_65535		0x0020
+#define TYPERANGE_1_TO_65535		0x0040
+#define TYPERANGE_2_TO_3600		0x0080
+#define TYPERANGE_512_TO_16777215	0x0100
+#define TYPERANGE_AUTH			0x0200
+#define TYPERANGE_DIGEST		0x0400
+#define TYPERANGE_ISCSINAME		0x0800
+#define TYPERANGE_MARKINT		0x1000
+#define TYPERANGE_SESSIONTYPE		0x2000
+#define TYPERANGE_TARGETADDRESS		0x4000
+#define TYPERANGE_UTF8			0x8000
+
+#define IS_TYPERANGE_0_TO_2(p)		((p)->type_range & TYPERANGE_0_TO_2)
+#define IS_TYPERANGE_0_TO_3600(p)	((p)->type_range & TYPERANGE_0_TO_3600)
+#define IS_TYPERANGE_0_TO_32767(p)	((p)->type_range & TYPERANGE_0_TO_32767)
+#define IS_TYPERANGE_0_TO_65535(p)	((p)->type_range & TYPERANGE_0_TO_65535)
+#define IS_TYPERANGE_1_TO_65535(p)	((p)->type_range & TYPERANGE_1_TO_65535)
+#define IS_TYPERANGE_2_TO_3600(p)	((p)->type_range & TYPERANGE_2_TO_3600)
+#define IS_TYPERANGE_512_TO_16777215(p)	((p)->type_range & \
+						TYPERANGE_512_TO_16777215)
+#define IS_TYPERANGE_AUTH_PARAM(p)	((p)->type_range & TYPERANGE_AUTH)
+#define IS_TYPERANGE_DIGEST_PARAM(p)	((p)->type_range & TYPERANGE_DIGEST)
+#define IS_TYPERANGE_SESSIONTYPE(p)	((p)->type_range & \
+						TYPERANGE_SESSIONTYPE)
+
+/*
+ * struct iscsi_param->state
+ */
+#define PSTATE_ACCEPTOR			0x01
+#define PSTATE_NEGOTIATE		0x02
+#define PSTATE_PROPOSER			0x04
+#define PSTATE_IRRELEVANT		0x08
+#define PSTATE_REJECT			0x10
+#define PSTATE_REPLY_OPTIONAL		0x20
+#define PSTATE_RESPONSE_GOT		0x40
+#define PSTATE_RESPONSE_SENT		0x80
+
+#define IS_PSTATE_ACCEPTOR(p)		((p)->state & PSTATE_ACCEPTOR)
+#define IS_PSTATE_NEGOTIATE(p)		((p)->state & PSTATE_NEGOTIATE)
+#define IS_PSTATE_PROPOSER(p)		((p)->state & PSTATE_PROPOSER)
+#define IS_PSTATE_IRRELEVANT(p)		((p)->state & PSTATE_IRRELEVANT)
+#define IS_PSTATE_REJECT(p)		((p)->state & PSTATE_REJECT)
+#define IS_PSTATE_REPLY_OPTIONAL(p)	((p)->state & PSTATE_REPLY_OPTIONAL)
+#define IS_PSTATE_RESPONSE_GOT(p)	((p)->state & PSTATE_RESPONSE_GOT)
+#define IS_PSTATE_RESPONSE_SENT(p)	((p)->state & PSTATE_RESPONSE_SENT)
+
+#define SET_PSTATE_ACCEPTOR(p)		((p)->state |= PSTATE_ACCEPTOR)
+#define SET_PSTATE_NEGOTIATE(p)		((p)->state |= PSTATE_NEGOTIATE)
+#define SET_PSTATE_PROPOSER(p)		((p)->state |= PSTATE_PROPOSER)
+#define SET_PSTATE_IRRELEVANT(p)	((p)->state |= PSTATE_IRRELEVANT)
+#define SET_PSTATE_REJECT(p)		((p)->state |= PSTATE_REJECT)
+#define SET_PSTATE_REPLY_OPTIONAL(p)	((p)->state |= PSTATE_REPLY_OPTIONAL)
+#define SET_PSTATE_RESPONSE_GOT(p)	((p)->state |= PSTATE_RESPONSE_GOT)
+#define SET_PSTATE_RESPONSE_SENT(p)	((p)->state |= PSTATE_RESPONSE_SENT)
+
+#endif /* ISCSI_PARAMETERS_H */
diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
new file mode 100644
index 0000000..ab64552
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -0,0 +1,1411 @@
+/*******************************************************************************
+ * This file contains the login functions used by the iSCSI Target driver.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/inet.h>
+#include <linux/crypto.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <net/ipv6.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_thread_queue.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_nego.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_stat.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_parameters.h"
+
+/*	iscsi_login_init_conn():
+ *
+ *
+ */
+static int iscsi_login_init_conn(struct iscsi_conn *conn)
+{
+	INIT_LIST_HEAD(&conn->conn_list);
+	INIT_LIST_HEAD(&conn->conn_cmd_list);
+	INIT_LIST_HEAD(&conn->immed_queue_list);
+	INIT_LIST_HEAD(&conn->response_queue_list);
+	sema_init(&conn->conn_post_wait_sem, 0);
+	sema_init(&conn->conn_wait_sem, 0);
+	sema_init(&conn->conn_wait_rcfr_sem, 0);
+	sema_init(&conn->conn_waiting_on_uc_sem, 0);
+	sema_init(&conn->conn_logout_sem, 0);
+	sema_init(&conn->rx_half_close_sem, 0);
+	sema_init(&conn->tx_half_close_sem, 0);
+	sema_init(&conn->tx_sem, 0);
+	spin_lock_init(&conn->cmd_lock);
+	spin_lock_init(&conn->conn_usage_lock);
+	spin_lock_init(&conn->immed_queue_lock);
+	spin_lock_init(&conn->netif_lock);
+	spin_lock_init(&conn->nopin_timer_lock);
+	spin_lock_init(&conn->response_queue_lock);
+	spin_lock_init(&conn->state_lock);
+
+	if (!(zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL))) {
+		printk(KERN_ERR "Unable to allocate conn->conn_cpumask\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*
+ * Used by iscsi_target_nego.c:iscsi_target_locate_portal() to setup
+ * per struct iscsi_conn libcrypto contexts for crc32c and crc32-intel
+ */
+int iscsi_login_setup_crypto(struct iscsi_conn *conn)
+{
+	struct iscsi_portal_group *tpg = conn->tpg;
+#ifdef CONFIG_X86
+	/*
+	 * Check for the Nehalem optimized crc32c-intel instructions
+	 * This is only currently available while running on bare-metal,
+	 * and is not yet available with QEMU-KVM guests.
+	 */
+	if (cpu_has_xmm4_2 && ISCSI_TPG_ATTRIB(tpg)->crc32c_x86_offload) {
+		conn->conn_rx_hash.flags = 0;
+		conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
+						CRYPTO_ALG_ASYNC);
+		if (IS_ERR(conn->conn_rx_hash.tfm)) {
+			printk(KERN_ERR "crypto_alloc_hash() failed for conn_rx_tfm\n");
+			goto check_crc32c;
+		}
+
+		conn->conn_tx_hash.flags = 0;
+		conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
+						CRYPTO_ALG_ASYNC);
+		if (IS_ERR(conn->conn_tx_hash.tfm)) {
+			printk(KERN_ERR "crypto_alloc_hash() failed for conn_tx_tfm\n");
+			crypto_free_hash(conn->conn_rx_hash.tfm);
+			goto check_crc32c;
+		}
+
+		printk(KERN_INFO "LIO-Target[0]: Using Nehalem crc32c-intel"
+					" offload instructions\n");
+		return 0;
+	}
+check_crc32c:
+#endif /* CONFIG_X86 */
+	/*
+	 * Setup slicing by 1x CRC32C algorithm for RX and TX libcrypto contexts
+	 */
+	conn->conn_rx_hash.flags = 0;
+	conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c", 0,
+						CRYPTO_ALG_ASYNC);
+	if (IS_ERR(conn->conn_rx_hash.tfm)) {
+		printk(KERN_ERR "crypto_alloc_hash() failed for conn_rx_tfm\n");
+		return -ENOMEM;
+	}
+
+	conn->conn_tx_hash.flags = 0;
+	conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c", 0,
+						CRYPTO_ALG_ASYNC);
+	if (IS_ERR(conn->conn_tx_hash.tfm)) {
+		printk(KERN_ERR "crypto_alloc_hash() failed for conn_tx_tfm\n");
+		crypto_free_hash(conn->conn_rx_hash.tfm);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*	iscsi_login_check_initiator_version():
+ *
+ *
+ */
+static int iscsi_login_check_initiator_version(
+	struct iscsi_conn *conn,
+	u8 version_max,
+	u8 version_min)
+{
+	if ((version_max != 0x00) || (version_min != 0x00)) {
+		printk(KERN_ERR "Unsupported iSCSI IETF Pre-RFC Revision,"
+			" version Min/Max 0x%02x/0x%02x, rejecting login.\n",
+			version_min, version_max);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_NO_VERSION);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_for_session_reinstatement():
+ *
+ *
+ */
+int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
+{
+	int sessiontype;
+	struct iscsi_param *initiatorname_param = NULL, *sessiontype_param = NULL;
+	struct iscsi_portal_group *tpg = conn->tpg;
+	struct iscsi_session *sess = NULL, *sess_p = NULL;
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_session *se_sess, *se_sess_tmp;
+
+	initiatorname_param = iscsi_find_param_from_key(
+			INITIATORNAME, conn->param_list);
+	if (!(initiatorname_param))
+		return -1;
+
+	sessiontype_param = iscsi_find_param_from_key(
+			SESSIONTYPE, conn->param_list);
+	if (!(sessiontype_param))
+		return -1;
+
+	sessiontype = (strncmp(sessiontype_param->value, NORMAL, 6)) ? 1 : 0;
+
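+	/*
+	 * Walk the active sessions on this TPG looking for an existing
+	 * session with a matching ISID, InitiatorName and SessionType;
+	 * a match means the initiator is reinstating that session.
+	 */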
+	spin_lock_bh(&se_tpg->session_lock);
+	list_for_each_entry_safe(se_sess, se_sess_tmp, &se_tpg->tpg_sess_list,
+			sess_list) {
+
+		sess_p = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		spin_lock(&sess_p->conn_lock);
+		if (atomic_read(&sess_p->session_fall_back_to_erl0) ||
+		    atomic_read(&sess_p->session_logout) ||
+		    (sess_p->time2retain_timer_flags & T2R_TF_EXPIRED)) {
+			spin_unlock(&sess_p->conn_lock);
+			continue;
+		}
+		if (!memcmp((void *)sess_p->isid, (void *)SESS(conn)->isid, 6) &&
+		   (!strcmp((void *)SESS_OPS(sess_p)->InitiatorName,
+			    (void *)initiatorname_param->value) &&
+		   (SESS_OPS(sess_p)->SessionType == sessiontype))) {
+			atomic_set(&sess_p->session_reinstatement, 1);
+			spin_unlock(&sess_p->conn_lock);
+			iscsi_inc_session_usage_count(sess_p);
+			iscsi_stop_time2retain_timer(sess_p);
+			sess = sess_p;
+			break;
+		}
+		spin_unlock(&sess_p->conn_lock);
+	}
+	spin_unlock_bh(&se_tpg->session_lock);
+	/*
+	 * If the Time2Retain handler has expired, the session is already gone.
+	 */
+	if (!sess)
+		return 0;
+
+	TRACE(TRACE_ERL0, "%s iSCSI Session SID %u is still active for %s,"
+		" performing session reinstatement.\n", (sessiontype) ?
+		"Discovery" : "Normal", sess->sid,
+		SESS_OPS(sess)->InitiatorName);
+
+	spin_lock_bh(&sess->conn_lock);
+	if (sess->session_state == TARG_SESS_STATE_FAILED) {
+		spin_unlock_bh(&sess->conn_lock);
+		iscsi_dec_session_usage_count(sess);
+		return iscsi_close_session(sess);
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_stop_session(sess, 1, 1);
+	iscsi_dec_session_usage_count(sess);
+
+	return iscsi_close_session(sess);
+}
+
+static void iscsi_login_set_conn_values(
+	struct iscsi_session *sess,
+	struct iscsi_conn *conn,
+	u16 cid)
+{
+	conn->sess		= sess;
+	conn->cid 		= cid;
+	/*
+	 * Generate a random Status sequence number (statsn) for the new
+	 * iSCSI connection.
+	 */
+	get_random_bytes(&conn->stat_sn, sizeof(u32));
+
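+	/*
+	 * Assign this connection a unique authentication ID, serialized
+	 * by the global auth_id semaphore.
+	 */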
+	down(&iscsi_global->auth_id_sem);
+	conn->auth_id		= iscsi_global->auth_id++;
+	up(&iscsi_global->auth_id_sem);
+}
+
+/*	iscsi_login_zero_tsih():
+ *
+ *	This is the leading connection of a new session,
+ *	or session reinstatement.
+ */
+static int iscsi_login_zero_tsih_s1(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_session *sess = NULL;
+	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
+
+	sess = kmem_cache_zalloc(lio_sess_cache, GFP_KERNEL);
+	if (!(sess)) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		printk(KERN_ERR "Could not allocate memory for session\n");
+		return -1;
+	}
+
+	iscsi_login_set_conn_values(sess, conn, pdu->cid);
+	sess->init_task_tag	= pdu->itt;
+	memcpy((void *)&sess->isid, (void *)pdu->isid, 6);
+	sess->exp_cmd_sn	= pdu->cmdsn;
+	INIT_LIST_HEAD(&sess->sess_conn_list);
+	INIT_LIST_HEAD(&sess->sess_ooo_cmdsn_list);
+	INIT_LIST_HEAD(&sess->cr_active_list);
+	INIT_LIST_HEAD(&sess->cr_inactive_list);
+	sema_init(&sess->async_msg_sem, 0);
+	sema_init(&sess->reinstatement_sem, 0);
+	sema_init(&sess->session_wait_sem, 0);
+	sema_init(&sess->session_waiting_on_uc_sem, 0);
+	spin_lock_init(&sess->cmdsn_lock);
+	spin_lock_init(&sess->conn_lock);
+	spin_lock_init(&sess->cr_a_lock);
+	spin_lock_init(&sess->cr_i_lock);
+	spin_lock_init(&sess->session_usage_lock);
+	spin_lock_init(&sess->ttt_lock);
+	sess->session_index = iscsi_get_new_index(ISCSI_SESSION_INDEX);
+	sess->creation_time = get_jiffies_64();
+	spin_lock_init(&sess->session_stats_lock);
+	/*
+	 * The FFP CmdSN window values will be allocated from the TPG's
+	 * Initiator Node's ACL once the login has been successfully completed.
+	 */
+	sess->max_cmd_sn	= pdu->cmdsn;
+
+	sess->sess_ops = kzalloc(sizeof(struct iscsi_sess_ops), GFP_KERNEL);
+	if (!(sess->sess_ops)) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_sess_ops.\n");
+		return -1;
+	}
+
+	sess->se_sess = transport_init_session();
+	if (!(sess->se_sess)) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int iscsi_login_zero_tsih_s2(
+	struct iscsi_conn *conn)
+{
+	struct iscsi_node_attrib *na;
+	struct iscsi_session *sess = conn->sess;
+	unsigned char buf[32];
+
+	sess->tpg = conn->tpg;
+
+	/*
+	 * Assign a new TPG Session Handle.  Note this is protected with
+	 * struct iscsi_portal_group->np_login_sem from core_access_np().
+	 */
+	sess->tsih = ++ISCSI_TPG_S(sess)->ntsih;
+	if (!(sess->tsih))
+		sess->tsih = ++ISCSI_TPG_S(sess)->ntsih;
+
+	/*
+	 * Create the default params from user defined values.
+	 */
+	if (iscsi_copy_param_list(&conn->param_list,
+				ISCSI_TPG_C(conn)->param_list, 1) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	iscsi_set_keys_to_negotiate(TARGET, 0, conn->param_list);
+
+	if (SESS_OPS(sess)->SessionType)
+		return iscsi_set_keys_irrelevant_for_discovery(
+				conn->param_list);
+
+	na = iscsi_tpg_get_node_attrib(sess);
+
+	/*
+	 * Need to send TargetPortalGroupTag back in first login response
+	 * on any iSCSI connection where the Initiator provides TargetName.
+	 * See 5.3.1.  Login Phase Start
+	 *
+	 * In our case, we have already located the struct iscsi_tiqn at this point.
+	 */
+	memset(buf, 0, 32);
+	sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt);
+	if (iscsi_change_param_value(buf, TARGET, conn->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	/*
+	 * Workaround for Initiators that have broken connection recovery logic.
+	 *
+	 * "We would really like to get rid of this." Linux-iSCSI.org team
+	 */
+	memset(buf, 0, 32);
+	sprintf(buf, "ErrorRecoveryLevel=%d", na->default_erl);
+	if (iscsi_change_param_value(buf, TARGET, conn->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	if (iscsi_login_disable_FIM_keys(conn->param_list, conn) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Remove PSTATE_NEGOTIATE for the four FIM related keys.
+ * The Initiator node will be able to enable FIM by proposing them itself.
+ */
+int iscsi_login_disable_FIM_keys(
+	struct iscsi_param_list *param_list,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_param *param;
+
+	param = iscsi_find_param_from_key("OFMarker", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" OFMarker failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	param = iscsi_find_param_from_key("OFMarkInt", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" OFMarkInt failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	param = iscsi_find_param_from_key("IFMarker", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" IFMarker failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	param = iscsi_find_param_from_key("IFMarkInt", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" IFMarkInt failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	return 0;
+}
+
+static int iscsi_login_non_zero_tsih_s1(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
+
+	iscsi_login_set_conn_values(NULL, conn, pdu->cid);
+	return 0;
+}
+
+/*	iscsi_login_non_zero_tsih_s2():
+ *
+ *	Add a new connection to an existing session.
+ */
+static int iscsi_login_non_zero_tsih_s2(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_portal_group *tpg = conn->tpg;
+	struct iscsi_session *sess = NULL, *sess_p = NULL;
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_session *se_sess, *se_sess_tmp;
+	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
+
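+	/*
+	 * Locate the existing session that this new connection is being
+	 * added to by matching the ISID and TSIH from the Login Request.
+	 */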
+	spin_lock_bh(&se_tpg->session_lock);
+	list_for_each_entry_safe(se_sess, se_sess_tmp, &se_tpg->tpg_sess_list,
+			sess_list) {
+
+		sess_p = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (atomic_read(&sess_p->session_fall_back_to_erl0) ||
+		    atomic_read(&sess_p->session_logout) ||
+		   (sess_p->time2retain_timer_flags & T2R_TF_EXPIRED))
+			continue;
+		if (!(memcmp((const void *)sess_p->isid,
+		     (const void *)pdu->isid, 6)) &&
+		     (sess_p->tsih == pdu->tsih)) {
+			iscsi_inc_session_usage_count(sess_p);
+			iscsi_stop_time2retain_timer(sess_p);
+			sess = sess_p;
+			break;
+		}
+	}
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	/*
+	 * If the Time2Retain handler has expired, the session is already gone.
+	 */
+	if (!sess) {
+		printk(KERN_ERR "Initiator attempting to add a connection to"
+			" a non-existent session, rejecting iSCSI Login.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_NO_SESSION);
+		return -1;
+	}
+
+	/*
+	 * Stop the Time2Retain timer if this is a failed session, we restart
+	 * the timer if the login is not successful.
+	 */
+	spin_lock_bh(&sess->conn_lock);
+	if (sess->session_state == TARG_SESS_STATE_FAILED)
+		atomic_set(&sess->session_continuation, 1);
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_login_set_conn_values(sess, conn, pdu->cid);
+
+	if (iscsi_copy_param_list(&conn->param_list,
+			ISCSI_TPG_C(conn)->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	iscsi_set_keys_to_negotiate(TARGET, 0, conn->param_list);
+
+	/*
+	 * Need to send TargetPortalGroupTag back in first login response
+	 * on any iSCSI connection where the Initiator provides TargetName.
+	 * See 5.3.1.  Login Phase Start
+	 *
+	 * In our case, we have already located the struct iscsi_tiqn at this point.
+	 */
+	memset(buf, 0, 32);
+	sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt);
+	if (iscsi_change_param_value(buf, TARGET, conn->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	return iscsi_login_disable_FIM_keys(conn->param_list, conn);
+}
+
+/*	iscsi_login_post_auth_non_zero_tsih():
+ *
+ *
+ */
+int iscsi_login_post_auth_non_zero_tsih(
+	struct iscsi_conn *conn,
+	u16 cid,
+	u32 exp_statsn)
+{
+	struct iscsi_conn *conn_ptr = NULL;
+	struct iscsi_conn_recovery *cr = NULL;
+	struct iscsi_session *sess = SESS(conn);
+
+	/*
+	 * By following item 5 in the login table, if we have found
+	 * an existing ISID and a valid/existing TSIH and an existing
+	 * CID we do connection reinstatement.  Currently we do not
+	 * support it, so we send back a non-zero status class to the
+	 * initiator and release the new connection.
+	 */
+	conn_ptr = iscsi_get_conn_from_cid_rcfr(sess, cid);
+	if ((conn_ptr)) {
+		printk(KERN_ERR "Connection exists with CID %hu for %s,"
+			" performing connection reinstatement.\n",
+			conn_ptr->cid, SESS_OPS(sess)->InitiatorName);
+
+		iscsi_connection_reinstatement_rcfr(conn_ptr);
+		iscsi_dec_conn_usage_count(conn_ptr);
+	}
+
+	/*
+	 * Check for any connection recovery entries containing CID.
+	 * We use the original ExpStatSN sent in the first login request
+	 * to acknowledge commands for the failed connection.
+	 *
+	 * Also note that an explicit logout may have already been sent,
+	 * but the response may not be sent due to additional connection
+	 * loss.
+	 */
+	if (SESS_OPS(sess)->ErrorRecoveryLevel == 2) {
+		cr = iscsi_get_inactive_connection_recovery_entry(
+				sess, cid);
+		if ((cr)) {
+			TRACE(TRACE_ERL2, "Performing implicit logout"
+				" for connection recovery on CID: %hu\n",
+					conn->cid);
+			iscsi_discard_cr_cmds_by_expstatsn(cr, exp_statsn);
+		}
+	}
+
+	/*
+	 * Else we follow item 4 from the login table in that we have
+	 * found an existing ISID and a valid/existing TSIH and a new
+	 * CID we go ahead and continue to add a new connection to the
+	 * session.
+	 */
+	TRACE(TRACE_LOGIN, "Adding CID %hu to existing session for %s.\n",
+			cid, SESS_OPS(sess)->InitiatorName);
+
+	if ((atomic_read(&sess->nconn) + 1) > SESS_OPS(sess)->MaxConnections) {
+		printk(KERN_ERR "Adding additional connection to this session"
+			" would exceed MaxConnections %d, login failed.\n",
+				SESS_OPS(sess)->MaxConnections);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_ISID_ERROR);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_post_login_start_timers():
+ *
+ *
+ */
+static void iscsi_post_login_start_timers(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+/* #warning PHY timer is disabled */
+#if 0
+	iscsi_get_network_interface_from_conn(conn);
+
+	spin_lock_bh(&conn->netif_lock);
+	iscsi_start_netif_timer(conn);
+	spin_unlock_bh(&conn->netif_lock);
+#endif
+	if (!SESS_OPS(sess)->SessionType)
+		iscsi_start_nopin_timer(conn);
+}
+
+/*	iscsi_post_login_handler():
+ *
+ *
+ */
+static int iscsi_post_login_handler(
+	struct iscsi_np *np,
+	struct iscsi_conn *conn,
+	u8 zero_tsih)
+{
+	int stop_timer = 0;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf1_ipv4[IPV4_BUF_SIZE];
+	unsigned char *ip, *ip_np;
+	struct iscsi_session *sess = SESS(conn);
+	struct se_session *se_sess = sess->se_sess;
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_thread_set *ts;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	iscsi_collect_login_stats(conn, ISCSI_STATUS_CLS_SUCCESS,
+			ISCSI_LOGIN_STATUS_ACCEPT);
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_LOGGED_IN.\n");
+	conn->conn_state = TARG_CONN_STATE_LOGGED_IN;
+
+	iscsi_set_connection_parameters(conn->conn_ops, conn->param_list);
+	iscsi_set_sync_and_steering_values(conn);
+
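+	/*
+	 * Format the connection's login IP and the portal's IP as strings
+	 * for the login messages logged below.
+	 */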
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		ip = &conn->ipv6_login_ip[0];
+		ip_np = &np->np_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		memset(buf1_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, conn->login_ip);
+		iscsi_ntoa2(buf1_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+		ip_np = &buf1_ipv4[0];
+	}
+
+	/*
+	 * SCSI Initiator -> SCSI Target Port Mapping
+	 */
+	ts = iscsi_get_thread_set(TARGET);
+	if (!zero_tsih) {
+		iscsi_set_session_parameters(sess->sess_ops,
+				conn->param_list, 0);
+		iscsi_release_param_list(conn->param_list);
+		conn->param_list = NULL;
+
+		spin_lock_bh(&sess->conn_lock);
+		atomic_set(&sess->session_continuation, 0);
+		if (sess->session_state == TARG_SESS_STATE_FAILED) {
+			TRACE(TRACE_STATE, "Moving to"
+					" TARG_SESS_STATE_LOGGED_IN.\n");
+			sess->session_state = TARG_SESS_STATE_LOGGED_IN;
+			stop_timer = 1;
+		}
+
+		printk(KERN_INFO "iSCSI Login successful on CID: %hu from %s to"
+			" %s:%hu,%hu\n", conn->cid, ip, ip_np,
+				np->np_port, tpg->tpgt);
+
+		list_add_tail(&conn->conn_list, &sess->sess_conn_list);
+		atomic_inc(&sess->nconn);
+		printk(KERN_INFO "Incremented iSCSI Connection count to %hu"
+			" from node: %s\n", atomic_read(&sess->nconn),
+			SESS_OPS(sess)->InitiatorName);
+		spin_unlock_bh(&sess->conn_lock);
+
+		iscsi_post_login_start_timers(conn);
+		iscsi_activate_thread_set(conn, ts);
+		/*
+		 * Determine CPU mask to ensure connection's RX and TX kthreads
+		 * are scheduled on the same CPU.
+		 */
+		iscsi_thread_get_cpumask(conn);
+		conn->conn_rx_reset_cpumask = 1;
+		conn->conn_tx_reset_cpumask = 1;
+
+		iscsi_dec_conn_usage_count(conn);
+		if (stop_timer) {
+			spin_lock_bh(&se_tpg->session_lock);
+			iscsi_stop_time2retain_timer(sess);
+			spin_unlock_bh(&se_tpg->session_lock);
+		}
+		iscsi_dec_session_usage_count(sess);
+		return 0;
+	}
+
+	iscsi_set_session_parameters(sess->sess_ops, conn->param_list, 1);
+	iscsi_release_param_list(conn->param_list);
+	conn->param_list = NULL;
+
+	iscsi_determine_maxcmdsn(sess);
+
+	spin_lock_bh(&se_tpg->session_lock);
+	__transport_register_session(&sess->tpg->tpg_se_tpg,
+			se_sess->se_node_acl, se_sess, (void *)sess);
+	TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_LOGGED_IN.\n");
+	sess->session_state = TARG_SESS_STATE_LOGGED_IN;
+
+	printk(KERN_INFO "iSCSI Login successful on CID: %hu from %s to %s:%hu,%hu\n",
+		conn->cid, ip, ip_np, np->np_port, tpg->tpgt);
+
+	spin_lock_bh(&sess->conn_lock);
+	list_add_tail(&conn->conn_list, &sess->sess_conn_list);
+	atomic_inc(&sess->nconn);
+	printk(KERN_INFO "Incremented iSCSI Connection count to %hu from node:"
+		" %s\n", atomic_read(&sess->nconn),
+		SESS_OPS(sess)->InitiatorName);
+	spin_unlock_bh(&sess->conn_lock);
+
+	sess->sid = tpg->sid++;
+	if (!sess->sid)
+		sess->sid = tpg->sid++;
+	printk(KERN_INFO "Established iSCSI session from node: %s\n",
+			SESS_OPS(sess)->InitiatorName);
+
+	tpg->nsessions++;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_nsessions++;
+
+	printk(KERN_INFO "Incremented number of active iSCSI sessions to %u on"
+		" iSCSI Target Portal Group: %hu\n", tpg->nsessions, tpg->tpgt);
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	iscsi_post_login_start_timers(conn);
+	iscsi_activate_thread_set(conn, ts);
+	/*
+	 * Determine CPU mask to ensure connection's RX and TX kthreads
+	 * are scheduled on the same CPU.
+	 */
+	iscsi_thread_get_cpumask(conn);
+	conn->conn_rx_reset_cpumask = 1;
+	conn->conn_tx_reset_cpumask = 1;
+
+	iscsi_dec_conn_usage_count(conn);
+
+	return 0;
+}
+
+/*	iscsi_handle_login_thread_timeout():
+ *
+ *
+ */
+static void iscsi_handle_login_thread_timeout(unsigned long data)
+{
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+	struct iscsi_np *np = (struct iscsi_np *) data;
+
+	memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+	spin_lock_bh(&np->np_thread_lock);
+	iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+
+	printk(KERN_ERR "iSCSI Login timeout on Network Portal %s:%hu\n",
+			buf_ipv4, np->np_port);
+
+	if (np->np_login_timer_flags & TPG_NP_TF_STOP) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return;
+	}
+
+	if (np->np_thread)
+		send_sig(SIGKILL, np->np_thread, 1);
+
+	np->np_login_timer_flags &= ~TPG_NP_TF_RUNNING;
+	spin_unlock_bh(&np->np_thread_lock);
+}
+
+/*	iscsi_start_login_thread_timer():
+ *
+ *
+ */
+static void iscsi_start_login_thread_timer(struct iscsi_np *np)
+{
+	/*
+	 * This uses the TA_LOGIN_TIMEOUT constant because at this
+	 * point we do not have access to ISCSI_TPG_ATTRIB(tpg)->login_timeout.
+	 */
+	spin_lock_bh(&np->np_thread_lock);
+	init_timer(&np->np_login_timer);
+	SETUP_TIMER(np->np_login_timer, TA_LOGIN_TIMEOUT, np,
+			iscsi_handle_login_thread_timeout);
+	np->np_login_timer_flags &= ~TPG_NP_TF_STOP;
+	np->np_login_timer_flags |= TPG_NP_TF_RUNNING;
+	add_timer(&np->np_login_timer);
+
+	TRACE(TRACE_LOGIN, "Added timeout timer to iSCSI login request for"
+			" %u seconds.\n", TA_LOGIN_TIMEOUT);
+	spin_unlock_bh(&np->np_thread_lock);
+}
+
+/*	iscsi_stop_login_thread_timer():
+ *
+ *
+ */
+static void iscsi_stop_login_thread_timer(struct iscsi_np *np)
+{
+	spin_lock_bh(&np->np_thread_lock);
+	if (!(np->np_login_timer_flags & TPG_NP_TF_RUNNING)) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return;
+	}
+	np->np_login_timer_flags |= TPG_NP_TF_STOP;
+	spin_unlock_bh(&np->np_thread_lock);
+
+	del_timer_sync(&np->np_login_timer);
+
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_login_timer_flags &= ~TPG_NP_TF_RUNNING;
+	spin_unlock_bh(&np->np_thread_lock);
+}
+
+/*	iscsi_target_setup_login_socket():
+ *
+ *
+ */
+static struct socket *iscsi_target_setup_login_socket(struct iscsi_np *np)
+{
+	const char *end;
+	struct socket *sock;
+	int backlog = 5, ip_proto, sock_type, ret, opt = 0;
+	struct sockaddr_in sock_in;
+	struct sockaddr_in6 sock_in6;
+
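+	/*
+	 * Map the portal's configured network transport onto a socket type
+	 * and IP protocol; only TCP and SCTP are supported here.
+	 */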
+	switch (np->np_network_transport) {
+	case ISCSI_TCP:
+		ip_proto = IPPROTO_TCP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_TCP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_UDP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_SEQPACKET;
+		break;
+	case ISCSI_IWARP_TCP:
+	case ISCSI_IWARP_SCTP:
+	case ISCSI_INFINIBAND:
+	default:
+		printk(KERN_ERR "Unsupported network_transport: %d\n",
+				np->np_network_transport);
+		goto fail;
+	}
+
+	if (sock_create((np->np_flags & NPF_NET_IPV6) ? AF_INET6 : AF_INET,
+			sock_type, ip_proto, &sock) < 0) {
+		printk(KERN_ERR "sock_create() failed.\n");
+		goto fail;
+	}
+	np->np_socket = sock;
+
+	/*
+	 * The SCTP stack needs struct socket->file.
+	 */
+	if ((np->np_network_transport == ISCSI_SCTP_TCP) ||
+	    (np->np_network_transport == ISCSI_SCTP_UDP)) {
+		if (!sock->file) {
+			sock->file = kzalloc(sizeof(struct file), GFP_KERNEL);
+			if (!(sock->file)) {
+				printk(KERN_ERR "Unable to allocate struct"
+						" file for SCTP\n");
+				goto fail;
+			}
+			np->np_flags |= NPF_SCTP_STRUCT_FILE;
+		}
+	}
+
+	if (np->np_flags & NPF_NET_IPV6) {
+		memset(&sock_in6, 0, sizeof(struct sockaddr_in6));
+		sock_in6.sin6_family = AF_INET6;
+		sock_in6.sin6_port = htons(np->np_port);
+#if 1
+		ret = in6_pton(&np->np_ipv6[0], IPV6_ADDRESS_SPACE,
+				(void *)&sock_in6.sin6_addr.in6_u, -1, &end);
+		if (ret <= 0) {
+			printk(KERN_ERR "in6_pton returned: %d\n", ret);
+			goto fail;
+		}
+#else
+		ret = iscsi_pton6(&np->np_ipv6[0],
+				(unsigned char *)&sock_in6.sin6_addr.in6_u);
+		if (ret <= 0) {
+			printk(KERN_ERR "iscsi_pton6() returned: %d\n", ret);
+			goto fail;
+		}
+#endif
+	} else {
+		memset(&sock_in, 0, sizeof(struct sockaddr_in));
+		sock_in.sin_family = AF_INET;
+		sock_in.sin_port = htons(np->np_port);
+		sock_in.sin_addr.s_addr = htonl(np->np_ipv4);
+	}
+
+	/*
+	 * Set SO_REUSEADDR, and disable the Nagle algorithm with TCP_NODELAY.
+	 */
+	opt = 1;
+	if (np->np_network_transport == ISCSI_TCP) {
+		ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
+				(char *)&opt, sizeof(opt));
+		if (ret < 0) {
+			printk(KERN_ERR "kernel_setsockopt() for TCP_NODELAY"
+				" failed: %d\n", ret);
+			goto fail;
+		}
+	}
+	ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
+			(char *)&opt, sizeof(opt));
+	if (ret < 0) {
+		printk(KERN_ERR "kernel_setsockopt() for SO_REUSEADDR"
+			" failed\n");
+		goto fail;
+	}
+
+	if (np->np_flags & NPF_NET_IPV6) {
+		ret = kernel_bind(sock, (struct sockaddr *)&sock_in6,
+				sizeof(struct sockaddr_in6));
+		if (ret < 0) {
+			printk(KERN_ERR "kernel_bind() failed: %d\n", ret);
+			goto fail;
+		}
+	} else {
+		ret = kernel_bind(sock, (struct sockaddr *)&sock_in,
+				sizeof(struct sockaddr));
+		if (ret < 0) {
+			printk(KERN_ERR "kernel_bind() failed: %d\n", ret);
+			goto fail;
+		}
+	}
+
+	if (kernel_listen(sock, backlog)) {
+		printk(KERN_ERR "kernel_listen() failed.\n");
+		goto fail;
+	}
+
+	return sock;
+
+fail:
+	np->np_socket = NULL;
+	if (sock) {
+		if (np->np_flags & NPF_SCTP_STRUCT_FILE) {
+			kfree(sock->file);
+			sock->file = NULL;
+		}
+
+		sock_release(sock);
+	}
+	return NULL;
+}
+
+/*	iscsi_target_login_thread():
+ *
+ *
+ */
+int iscsi_target_login_thread(void *arg)
+{
+	u8 buffer[ISCSI_HDR_LEN], iscsi_opcode, zero_tsih = 0;
+	unsigned char *ip = NULL, *ip_init_buf = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf1_ipv4[IPV4_BUF_SIZE];
+	int err, ret = 0, start = 1, ip_proto;
+	int sock_type, set_sctp_conn_flag = 0;
+	struct iscsi_conn *conn = NULL;
+	struct iscsi_login *login;
+	struct iscsi_portal_group *tpg = NULL;
+	struct socket *new_sock, *sock;
+	struct iscsi_np *np = (struct iscsi_np *) arg;
+	struct iovec iov;
+	struct iscsi_login_req *pdu;
+	struct sockaddr_in sock_in;
+	struct sockaddr_in6 sock_in6;
+
+	{
+	char name[16];
+	memset(name, 0, 16);
+	sprintf(name, "iscsi_np");
+	iscsi_daemon(np->np_thread, name, SHUTDOWN_SIGS);
+	}
+
+	sock = iscsi_target_setup_login_socket(np);
+	if (!(sock)) {
+		up(&np->np_start_sem);
+		return -1;
+	}
+
+get_new_sock:
+	flush_signals(current);
+	ip_proto = sock_type = set_sctp_conn_flag = 0;
+
+	switch (np->np_network_transport) {
+	case ISCSI_TCP:
+		ip_proto = IPPROTO_TCP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_TCP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_UDP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_SEQPACKET;
+		break;
+	case ISCSI_IWARP_TCP:
+	case ISCSI_IWARP_SCTP:
+	case ISCSI_INFINIBAND:
+	default:
+		printk(KERN_ERR "Unsupported network_transport: %d\n",
+			np->np_network_transport);
+		if (start)
+			up(&np->np_start_sem);
+		return -1;
+	}
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN)
+		goto out;
+	else if (np->np_thread_state == ISCSI_NP_THREAD_RESET) {
+		if (atomic_read(&np->np_shutdown)) {
+			spin_unlock_bh(&np->np_thread_lock);
+			up(&np->np_restart_sem);
+			down(&np->np_shutdown_sem);
+			goto out;
+		}
+		np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+		up(&np->np_restart_sem);
+	} else {
+		np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+
+		if (start) {
+			start = 0;
+			up(&np->np_start_sem);
+		}
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	if (kernel_accept(sock, &new_sock, 0) < 0) {
+		if (signal_pending(current)) {
+			spin_lock_bh(&np->np_thread_lock);
+			if (np->np_thread_state == ISCSI_NP_THREAD_RESET) {
+				if (atomic_read(&np->np_shutdown)) {
+					spin_unlock_bh(&np->np_thread_lock);
+					up(&np->np_restart_sem);
+					down(&np->np_shutdown_sem);
+					goto out;
+				}
+				spin_unlock_bh(&np->np_thread_lock);
+				goto get_new_sock;
+			}
+			spin_unlock_bh(&np->np_thread_lock);
+			goto out;
+		}
+		goto get_new_sock;
+	}
+	/*
+	 * The SCTP stack needs struct socket->file.
+	 */
+	if ((np->np_network_transport == ISCSI_SCTP_TCP) ||
+	    (np->np_network_transport == ISCSI_SCTP_UDP)) {
+		if (!new_sock->file) {
+			new_sock->file = kzalloc(
+					sizeof(struct file), GFP_KERNEL);
+			if (!(new_sock->file)) {
+				printk(KERN_ERR "Unable to allocate struct"
+						" file for SCTP\n");
+				sock_release(new_sock);
+				goto get_new_sock;
+			}
+			set_sctp_conn_flag = 1;
+		}
+	}
+
+	iscsi_start_login_thread_timer(np);
+
+	conn = kmem_cache_zalloc(lio_conn_cache, GFP_KERNEL);
+	if (!(conn)) {
+		printk(KERN_ERR "Could not allocate memory for"
+			" new connection\n");
+		if (set_sctp_conn_flag) {
+			kfree(new_sock->file);
+			new_sock->file = NULL;
+		}
+		sock_release(new_sock);
+
+		goto get_new_sock;
+	}
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_FREE.\n");
+	conn->conn_state = TARG_CONN_STATE_FREE;
+	conn->sock = new_sock;
+
+	if (set_sctp_conn_flag)
+		conn->conn_flags |= CONNFLAG_SCTP_STRUCT_FILE;
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_XPT_UP.\n");
+	conn->conn_state = TARG_CONN_STATE_XPT_UP;
+
+	/*
+	 * Allocate conn->conn_ops early, as a failure below will call
+	 * iscsi_tx_login_rsp(), which in turn calls tx_data().
+	 */
+	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
+	if (!(conn->conn_ops)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_conn_ops.\n");
+		goto new_sess_out;
+	}
+	/*
+	 * Perform the remaining iSCSI connection initialization items.
+	 */
+	if (iscsi_login_init_conn(conn) < 0)
+		goto new_sess_out;
+
+	memset(buffer, 0, ISCSI_HDR_LEN);
+	memset(&iov, 0, sizeof(struct iovec));
+	iov.iov_base	= buffer;
+	iov.iov_len	= ISCSI_HDR_LEN;
+
+	if (rx_data(conn, &iov, 1, ISCSI_HDR_LEN) <= 0) {
+		printk(KERN_ERR "rx_data() returned an error.\n");
+		goto new_sess_out;
+	}
+
+	iscsi_opcode = (buffer[0] & ISCSI_OPCODE_MASK);
+	if (!(iscsi_opcode & ISCSI_OP_LOGIN)) {
+		printk(KERN_ERR "First opcode is not login request,"
+			" failing login request.\n");
+		goto new_sess_out;
+	}
+
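+	/*
+	 * Convert the Login Request PDU fields of interest from network to
+	 * host byte order before further processing.
+	 */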
+	pdu			= (struct iscsi_login_req *) buffer;
+	pdu->cid		= be16_to_cpu(pdu->cid);
+	pdu->tsih		= be16_to_cpu(pdu->tsih);
+	pdu->itt		= be32_to_cpu(pdu->itt);
+	pdu->cmdsn		= be32_to_cpu(pdu->cmdsn);
+	pdu->exp_statsn		= be32_to_cpu(pdu->exp_statsn);
+	/*
+	 * Used by iscsi_tx_login_rsp() for Login Response PDUs
+	 * when Status-Class != 0.
+	 */
+	conn->login_itt		= pdu->itt;
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE)
+		ip = &np->np_ipv6[0];
+	else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+	}
+
+	spin_lock_bh(&np->np_thread_lock);
+	if ((atomic_read(&np->np_shutdown)) ||
+	    (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE)) {
+		spin_unlock_bh(&np->np_thread_lock);
+		printk(KERN_ERR "iSCSI Network Portal on %s:%hu currently not"
+			" active.\n", ip, np->np_port);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		goto new_sess_out;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		memset(&sock_in6, 0, sizeof(struct sockaddr_in6));
+
+		if (conn->sock->ops->getname(conn->sock,
+				(struct sockaddr *)&sock_in6, &err, 1) < 0) {
+			printk(KERN_ERR "sock_ops->getname() failed.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			goto new_sess_out;
+		}
+#if 0
+		if (!(iscsi_ntop6((const unsigned char *)
+				&sock_in6.sin6_addr.in6_u,
+				(char *)&conn->ipv6_login_ip[0],
+				IPV6_ADDRESS_SPACE))) {
+			printk(KERN_ERR "iscsi_ntop6() failed\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			goto new_sess_out;
+		}
+#else
+		printk(KERN_INFO "Skipping iscsi_ntop6()\n");
+#endif
+		ip_init_buf = &conn->ipv6_login_ip[0];
+	} else {
+		memset(&sock_in, 0, sizeof(struct sockaddr_in));
+
+		if (conn->sock->ops->getname(conn->sock,
+				(struct sockaddr *)&sock_in, &err, 1) < 0) {
+			printk(KERN_ERR "sock_ops->getname() failed.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			goto new_sess_out;
+		}
+		memset(buf1_ipv4, 0, IPV4_BUF_SIZE);
+		conn->login_ip = ntohl(sock_in.sin_addr.s_addr);
+		conn->login_port = ntohs(sock_in.sin_port);
+		iscsi_ntoa2(buf1_ipv4, conn->login_ip);
+		ip_init_buf = &buf1_ipv4[0];
+	}
+
+	conn->network_transport = np->np_network_transport;
+	snprintf(conn->net_dev, ISCSI_NETDEV_NAME_SIZE, "%s", np->np_net_dev);
+
+	conn->conn_index = iscsi_get_new_index(ISCSI_CONNECTION_INDEX);
+	conn->local_ip = np->np_ipv4;
+	conn->local_port = np->np_port;
+
+	printk(KERN_INFO "Received iSCSI login request from %s on %s Network"
+			" Portal %s:%hu\n", ip_init_buf,
+		(conn->network_transport == ISCSI_TCP) ? "TCP" : "SCTP",
+			ip, np->np_port);
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGIN.\n");
+	conn->conn_state	= TARG_CONN_STATE_IN_LOGIN;
+
+	if (iscsi_login_check_initiator_version(conn, pdu->max_version,
+			pdu->min_version) < 0)
+		goto new_sess_out;
+
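+	/*
+	 * A TSIH of zero in the Login Request marks the leading connection
+	 * of a new session (or session reinstatement).
+	 */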
+	zero_tsih = (pdu->tsih == 0x0000);
+	if ((zero_tsih)) {
+		/*
+		 * This is the leading connection of a new session.
+		 * We wait until after authentication to check for
+		 * session reinstatement.
+		 */
+		if (iscsi_login_zero_tsih_s1(conn, buffer) < 0)
+			goto new_sess_out;
+	} else {
+		/*
+		 * Add a new connection to an existing session.
+		 * We check for a non-existent session in
+		 * iscsi_login_non_zero_tsih_s2() below based
+		 * on ISID/TSIH, but wait until after authentication
+		 * to check for connection reinstatement, etc.
+		 */
+		if (iscsi_login_non_zero_tsih_s1(conn, buffer) < 0)
+			goto new_sess_out;
+	}
+
+	/*
+	 * This will process the first login request, call
+	 * iscsi_target_locate_portal(), and return a valid struct iscsi_login.
+	 */
+	login = iscsi_target_init_negotiation(np, conn, buffer);
+	if (!(login)) {
+		tpg = conn->tpg;
+		goto new_sess_out;
+	}
+
+	tpg = conn->tpg;
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_conn->tpg\n");
+		goto new_sess_out;
+	}
+
+	if (zero_tsih) {
+		if (iscsi_login_zero_tsih_s2(conn) < 0) {
+			iscsi_target_nego_release(login, conn);
+			goto new_sess_out;
+		}
+	} else {
+		if (iscsi_login_non_zero_tsih_s2(conn, buffer) < 0) {
+			iscsi_target_nego_release(login, conn);
+			goto old_sess_out;
+		}
+	}
+
+	if (iscsi_target_start_negotiation(login, conn) < 0)
+		goto new_sess_out;
+
+	if (!SESS(conn)) {
+		printk(KERN_ERR "struct iscsi_conn session pointer is NULL!\n");
+		goto new_sess_out;
+	}
+
+	iscsi_stop_login_thread_timer(np);
+
+	if (signal_pending(current))
+		goto new_sess_out;
+
+	ret = iscsi_post_login_handler(np, conn, zero_tsih);
+
+	if (ret < 0)
+		goto new_sess_out;
+
+	core_deaccess_np(np, tpg);
+	tpg = NULL;
+	goto get_new_sock;
+
+new_sess_out:
+	printk(KERN_ERR "iSCSI Login negotiation failed.\n");
+	iscsi_collect_login_stats(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				  ISCSI_LOGIN_STATUS_INIT_ERR);
+	if (!zero_tsih || !SESS(conn))
+		goto old_sess_out;
+	if (SESS(conn)->se_sess)
+		transport_free_session(SESS(conn)->se_sess);
+	if (SESS(conn)->sess_ops)
+		kfree(SESS(conn)->sess_ops);
+	if (SESS(conn))
+		kmem_cache_free(lio_sess_cache, SESS(conn));
+old_sess_out:
+	iscsi_stop_login_thread_timer(np);
+	/*
+	 * If login negotiation fails check if the Time2Retain timer
+	 * needs to be restarted.
+	 */
+	if (!zero_tsih && SESS(conn)) {
+		spin_lock_bh(&SESS(conn)->conn_lock);
+		if (SESS(conn)->session_state == TARG_SESS_STATE_FAILED) {
+			struct se_portal_group *se_tpg =
+					&ISCSI_TPG_C(conn)->tpg_se_tpg;
+
+			atomic_set(&SESS(conn)->session_continuation, 0);
+			spin_unlock_bh(&SESS(conn)->conn_lock);
+			spin_lock_bh(&se_tpg->session_lock);
+			iscsi_start_time2retain_handler(SESS(conn));
+			spin_unlock_bh(&se_tpg->session_lock);
+		} else
+			spin_unlock_bh(&SESS(conn)->conn_lock);
+		iscsi_dec_session_usage_count(SESS(conn));
+	}
+
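+	/*
+	 * Release any per-connection resources allocated during the failed
+	 * login attempt before freeing the connection itself.
+	 */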
+	if (!IS_ERR(conn->conn_rx_hash.tfm))
+		crypto_free_hash(conn->conn_rx_hash.tfm);
+	if (!IS_ERR(conn->conn_tx_hash.tfm))
+		crypto_free_hash(conn->conn_tx_hash.tfm);
+
+	if (conn->conn_cpumask)
+		free_cpumask_var(conn->conn_cpumask);
+
+	kfree(conn->conn_ops);
+
+	if (conn->param_list) {
+		iscsi_release_param_list(conn->param_list);
+		conn->param_list = NULL;
+	}
+	if (conn->sock) {
+		if (conn->conn_flags & CONNFLAG_SCTP_STRUCT_FILE) {
+			kfree(conn->sock->file);
+			conn->sock->file = NULL;
+		}
+		sock_release(conn->sock);
+	}
+	kmem_cache_free(lio_conn_cache, conn);
+
+	if (tpg) {
+		core_deaccess_np(np, tpg);
+		tpg = NULL;
+	}
+
+	if (!(signal_pending(current)))
+		goto get_new_sock;
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (atomic_read(&np->np_shutdown)) {
+		spin_unlock_bh(&np->np_thread_lock);
+		up(&np->np_restart_sem);
+		down(&np->np_shutdown_sem);
+		goto out;
+	}
+	if (np->np_thread_state != ISCSI_NP_THREAD_SHUTDOWN) {
+		spin_unlock_bh(&np->np_thread_lock);
+		goto get_new_sock;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+out:
+	iscsi_stop_login_thread_timer(np);
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_thread_state = ISCSI_NP_THREAD_EXIT;
+	np->np_thread = NULL;
+	spin_unlock_bh(&np->np_thread_lock);
+	up(&np->np_done_sem);
+	return 0;
+}
diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h
new file mode 100644
index 0000000..c6d56c2
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_login.h
@@ -0,0 +1,15 @@
+#ifndef ISCSI_TARGET_LOGIN_H
+#define ISCSI_TARGET_LOGIN_H
+
+extern int iscsi_login_setup_crypto(struct iscsi_conn *);
+extern int iscsi_check_for_session_reinstatement(struct iscsi_conn *);
+extern int iscsi_login_post_auth_non_zero_tsih(struct iscsi_conn *, u16, u32);
+extern int iscsi_target_login_thread(void *);
+extern int iscsi_login_disable_FIM_keys(struct iscsi_param_list *, struct iscsi_conn *);
+
+extern struct iscsi_global *iscsi_global;
+extern struct kmem_cache *lio_sess_cache;
+extern struct kmem_cache *lio_conn_cache;
+
+#endif   /*** ISCSI_TARGET_LOGIN_H ***/
+
diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
new file mode 100644
index 0000000..5588a3b
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nego.c
@@ -0,0 +1,1116 @@
+/*******************************************************************************
+ * This file contains main functions related to iSCSI Parameter negotiation.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/ctype.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_tpg.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_nego.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_auth_chap.h"
+
+#define MAX_LOGIN_PDUS  7
+#define TEXT_LEN	4096
+
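+/*
+ * Login key=value text arrives as NULL separated pairs; convert the NULL
+ * separators to semicolons so the buffer can be handled as one string.
+ */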
+void convert_null_to_semi(char *buf, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		if (buf[i] == '\0')
+			buf[i] = ';';
+}
+
+int strlen_semi(char *buf)
+{
+	int i = 0;
+
+	while (buf[i] != '\0') {
+		if (buf[i] == ';')
+			return i;
+		i++;
+	}
+
+	return -1;
+}
+
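+/*
+ * Extract the value following "pattern=" from a semicolon terminated
+ * key=value buffer, noting whether it was encoded in hex or decimal.
+ */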
+int extract_param(
+	const char *in_buf,
+	const char *pattern,
+	unsigned int max_length,
+	char *out_buf,
+	unsigned char *type)
+{
+	char *ptr;
+	int len;
+
+	if (!in_buf || !pattern || !out_buf || !type)
+		return -1;
+
+	ptr = strstr(in_buf, pattern);
+	if (!ptr)
+		return -1;
+
+	ptr = strstr(ptr, "=");
+	if (!ptr)
+		return -1;
+
+	ptr += 1;
+	if (*ptr == '0' && (*(ptr+1) == 'x' || *(ptr+1) == 'X')) {
+		ptr += 2; /* skip 0x */
+		*type = HEX;
+	} else
+		*type = DECIMAL;
+
+	len = strlen_semi(ptr);
+	if (len < 0)
+		return -1;
+
+	if (len > max_length) {
+		printk(KERN_ERR "Length of input: %d exceeds max_length:
+			" %d\n", len, max_length);
+		return -1;
+	}
+	memcpy(out_buf, ptr, len);
+	out_buf[len] = '\0';
+
+	return 0;
+}
+
+static u32 iscsi_handle_authentication(
+	struct iscsi_conn *conn,
+	char *in_buf,
+	char *out_buf,
+	int in_length,
+	int *out_length,
+	unsigned char *authtype)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_auth *auth;
+	struct iscsi_node_acl *iscsi_nacl;
+	struct se_node_acl *se_nacl;
+
+	if (!(SESS_OPS(sess)->SessionType)) {
+		/*
+		 * For SessionType=Normal
+		 */
+		se_nacl = SESS(conn)->se_sess->se_node_acl;
+		if (!(se_nacl)) {
+			printk(KERN_ERR "Unable to locate struct se_node_acl for"
+					" CHAP auth\n");
+			return -1;
+		}
+		iscsi_nacl = container_of(se_nacl, struct iscsi_node_acl,
+				se_node_acl);
+		if (!(iscsi_nacl)) {
+			printk(KERN_ERR "Unable to locate struct iscsi_node_acl for"
+					" CHAP auth\n");
+			return -1;
+		}
+
+		auth = ISCSI_NODE_AUTH(iscsi_nacl);
+	} else {
+		/*
+		 * For SessionType=Discovery
+		 */
+		auth = &iscsi_global->discovery_acl.node_auth;
+	}
+
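+	/*
+	 * Record the negotiated AuthMethod for this session, then dispatch
+	 * to the matching authentication handler.  Only CHAP (and None)
+	 * are handled here; the remaining methods fall through as
+	 * unsupported.
+	 */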
+	if (strstr("CHAP", authtype))
+		strcpy(SESS(conn)->auth_type, "CHAP");
+	else
+		strcpy(SESS(conn)->auth_type, NONE);
+
+	if (strstr("None", authtype))
+		return 1;
+#ifdef CANSRP
+	else if (strstr("SRP", authtype))
+		return srp_main_loop(conn, auth, in_buf, out_buf,
+				&in_length, out_length);
+#endif
+	else if (strstr("CHAP", authtype))
+		return chap_main_loop(conn, auth, in_buf, out_buf,
+				&in_length, out_length);
+	else if (strstr("SPKM1", authtype))
+		return 2;
+	else if (strstr("SPKM2", authtype))
+		return 2;
+	else if (strstr("KRB5", authtype))
+		return 2;
+	else
+		return 2;
+}
+
+static void iscsi_remove_failed_auth_entry(struct iscsi_conn *conn)
+{
+	kfree(conn->auth_protocol);
+}
+
+/*	iscsi_target_check_login_request():
+ *
+ *	Sanity check an incoming Login Request PDU against the state saved
+ *	from the initial request (stage, version, ISID, ITT) and the
+ *	supported payload length.
+ */
+static int iscsi_target_check_login_request(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	int req_csg, req_nsg, rsp_csg, rsp_nsg;
+	u32 payload_length;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	switch (login_req->opcode & ISCSI_OPCODE_MASK) {
+	case ISCSI_OP_LOGIN:
+		break;
+	default:
+		printk(KERN_ERR "Received unknown opcode 0x%02x.\n",
+				login_req->opcode & ISCSI_OPCODE_MASK);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if ((login_req->flags & ISCSI_FLAG_LOGIN_CONTINUE) &&
+	    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT)) {
+		printk(KERN_ERR "Login request has both ISCSI_FLAG_LOGIN_CONTINUE"
+			" and ISCSI_FLAG_LOGIN_TRANSIT set, protocol error.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	req_csg = (login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2;
+	rsp_csg = (login_rsp->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2;
+	req_nsg = (login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK);
+	rsp_nsg = (login_rsp->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK);
+
+	if (req_csg != login->current_stage) {
+		printk(KERN_ERR "Initiator unexpectedly changed login stage"
+			" from %d to %d, login failed.\n", login->current_stage,
+			req_csg);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if ((req_nsg == 2) || (req_csg >= 2) ||
+	   ((login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT) &&
+	    (req_nsg <= req_csg))) {
+		printk(KERN_ERR "Illegal login_req->flags Combination, CSG: %d,"
+			" NSG: %d, ISCSI_FLAG_LOGIN_TRANSIT: %d.\n", req_csg,
+			req_nsg, (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT));
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if ((login_req->max_version != login->version_max) ||
+	    (login_req->min_version != login->version_min)) {
+		printk(KERN_ERR "Login request changed Version Max/Min"
+			" unexpectedly to 0x%02x/0x%02x, protocol error\n",
+			login_req->max_version, login_req->min_version);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if (memcmp(login_req->isid, login->isid, 6) != 0) {
+		printk(KERN_ERR "Login request changed ISID unexpectedly,"
+				" protocol error.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if (login_req->itt != login->init_task_tag) {
+		printk(KERN_ERR "Login request changed ITT unexpectedly to"
+			" 0x%08x, protocol error.\n", login_req->itt);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if (payload_length > MAX_KEY_VALUE_PAIRS) {
+		printk(KERN_ERR "Login request payload exceeds default"
+			" MaxRecvDataSegmentLength: %u, protocol error.\n",
+				MAX_KEY_VALUE_PAIRS);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_target_check_first_request():
+ *
+ *
+ */
+static int iscsi_target_check_first_request(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	struct iscsi_param *param = NULL;
+	struct se_node_acl *se_nacl;
+
+	login->first_request = 0;
+
+	list_for_each_entry(param, &conn->param_list->param_list, p_list) {
+		if (!strncmp(param->name, SESSIONTYPE, 11)) {
+			if (!IS_PSTATE_ACCEPTOR(param)) {
+				printk(KERN_ERR "SessionType key not received"
+					" in first login request.\n");
+				iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+				return -1;
+			}
+			if (!(strncmp(param->value, DISCOVERY, 9)))
+				return 0;
+		}
+
+		if (!strncmp(param->name, INITIATORNAME, 13)) {
+			if (!IS_PSTATE_ACCEPTOR(param)) {
+				if (!login->leading_connection)
+					continue;
+
+				printk(KERN_ERR "InitiatorName key not received"
+					" in first login request.\n");
+				iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+				return -1;
+			}
+
+			/*
+			 * For non-leading connections, double check that the
+			 * received InitiatorName matches the existing session's
+			 * struct iscsi_node_acl.
+			 */
+			if (!login->leading_connection) {
+				se_nacl = SESS(conn)->se_sess->se_node_acl;
+				if (!(se_nacl)) {
+					printk(KERN_ERR "Unable to locate"
+						" struct se_node_acl\n");
+					iscsi_tx_login_rsp(conn,
+							ISCSI_STATUS_CLS_INITIATOR_ERR,
+							ISCSI_LOGIN_STATUS_TGT_NOT_FOUND);
+					return -1;
+				}
+
+				if (strcmp(param->value,
+						se_nacl->initiatorname)) {
+					printk(KERN_ERR "Incorrect"
+						" InitiatorName: %s for this"
+						" iSCSI Initiator Node.\n",
+						param->value);
+					iscsi_tx_login_rsp(conn,
+							ISCSI_STATUS_CLS_INITIATOR_ERR,
+							ISCSI_LOGIN_STATUS_TGT_NOT_FOUND);
+					return -1;
+				}
+			}
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_target_do_tx_login_io():
+ *
+ *	Build and transmit a Login Response PDU plus any pending key=value
+ *	text payload for this round of negotiation.
+ */
+static int iscsi_target_do_tx_login_io(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	__u32 padding = 0;
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_login_rsp *login_rsp;
+
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+
+	login_rsp->opcode		= ISCSI_OP_LOGIN_RSP;
+	hton24(login_rsp->dlength, login->rsp_length);
+	memcpy(login_rsp->isid, login->isid, 6);
+	login_rsp->tsih			= cpu_to_be16(login->tsih);
+	login_rsp->itt			= cpu_to_be32(login->init_task_tag);
+	login_rsp->statsn		= cpu_to_be32(conn->stat_sn++);
+	login_rsp->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	login_rsp->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	TRACE(TRACE_LOGIN, "Sending Login Response, Flags: 0x%02x, ITT: 0x%08x,"
+		" ExpCmdSN: 0x%08x, MaxCmdSN: 0x%08x, StatSN: 0x%08x, Length:"
+		" %u\n", login_rsp->flags, ntohl(login_rsp->itt),
+		ntohl(login_rsp->exp_cmdsn), ntohl(login_rsp->max_cmdsn),
+		ntohl(login_rsp->statsn), login->rsp_length);
+
+	padding = ((-ntohl(login->rsp_length)) & 3);
+
+	if (iscsi_login_tx_data(
+			conn,
+			login->rsp,
+			login->rsp_buf,
+			login->rsp_length + padding,
+			TARGET) < 0)
+		return -1;
+
+	login->rsp_length		= 0;
+	login_rsp->tsih			= be16_to_cpu(login_rsp->tsih);
+	login_rsp->itt			= be32_to_cpu(login_rsp->itt);
+	login_rsp->statsn		= be32_to_cpu(login_rsp->statsn);
+	spin_lock(&sess->cmdsn_lock);
+	login_rsp->exp_cmdsn		= be32_to_cpu(sess->exp_cmd_sn);
+	login_rsp->max_cmdsn		= be32_to_cpu(sess->max_cmd_sn);
+	spin_unlock(&sess->cmdsn_lock);
+
+	return 0;
+}
+
+/*	iscsi_target_do_rx_login_io():
+ *
+ *	Receive the next Login Request PDU, validate it, and read in its
+ *	key=value text payload.
+ */
+static int iscsi_target_do_rx_login_io(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	u32 padding = 0, payload_length;
+	struct iscsi_login_req *login_req;
+
+	if (iscsi_login_rx_data(conn, login->req, ISCSI_HDR_LEN, TARGET) < 0)
+		return -1;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	payload_length			= ntoh24(login_req->dlength);
+	login_req->tsih			= be16_to_cpu(login_req->tsih);
+	login_req->itt			= be32_to_cpu(login_req->itt);
+	login_req->cid			= be16_to_cpu(login_req->cid);
+	login_req->cmdsn		= be32_to_cpu(login_req->cmdsn);
+	login_req->exp_statsn		= be32_to_cpu(login_req->exp_statsn);
+
+	TRACE(TRACE_LOGIN, "Got Login Command, Flags 0x%02x, ITT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, CID: %hu, Length: %u\n",
+		 login_req->flags, login_req->itt, login_req->cmdsn,
+		 login_req->exp_statsn, login_req->cid, payload_length);
+
+	if (iscsi_target_check_login_request(conn, login) < 0)
+		return -1;
+
+	padding = ((-payload_length) & 3);
+	memset(login->req_buf, 0, MAX_KEY_VALUE_PAIRS);
+
+	if (iscsi_login_rx_data(
+			conn,
+			login->req_buf,
+			payload_length + padding,
+			TARGET) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*      iscsi_target_do_login_io():
+ *
+ *
+ */
+static int iscsi_target_do_login_io(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	if (iscsi_target_do_tx_login_io(conn, login) < 0)
+		return -1;
+
+	if (iscsi_target_do_rx_login_io(conn, login) < 0)
+		return -1;
+
+	return 0;
+}
+
+static int iscsi_target_get_initial_payload(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	u32 padding = 0, payload_length;
+	struct iscsi_login_req *login_req;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	payload_length = ntoh24(login_req->dlength);
+
+	TRACE(TRACE_LOGIN, "Got Login Command, Flags 0x%02x, ITT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, Length: %u\n",
+		login_req->flags, login_req->itt, login_req->cmdsn,
+		login_req->exp_statsn, payload_length);
+
+	if (iscsi_target_check_login_request(conn, login) < 0)
+		return -1;
+
+	padding = ((-payload_length) & 3);
+
+	if (iscsi_login_rx_data(
+			conn,
+			login->req_buf,
+			payload_length + padding,
+			TARGET) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_target_check_for_existing_instances():
+ *
+ *	NOTE: We check for existing sessions or connections AFTER the initiator
+ *	has been successfully authenticated in order to protect against faked
+ *	ISID/TSIH combinations.
+ */
+static int iscsi_target_check_for_existing_instances(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	if (login->checked_for_existing)
+		return 0;
+
+	login->checked_for_existing = 1;
+
+	if (!login->tsih)
+		return iscsi_check_for_session_reinstatement(conn);
+	else
+		return iscsi_login_post_auth_non_zero_tsih(conn, login->cid,
+				login->initial_exp_statsn);
+}
+
+/*	iscsi_target_do_authentication():
+ *
+ *
+ */
+static int iscsi_target_do_authentication(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	int authret;
+	u32 payload_length;
+	struct iscsi_param *param;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	param = iscsi_find_param_from_key(AUTHMETHOD, conn->param_list);
+	if (!(param))
+		return -1;
+
+	authret = iscsi_handle_authentication(
+			conn,
+			login->req_buf,
+			login->rsp_buf,
+			payload_length,
+			&login->rsp_length,
+			param->value);
+	switch (authret) {
+	case 0:
+		printk(KERN_INFO "Received OK response"
+		" from LIO Authentication, continuing.\n");
+		break;
+	case 1:
+		printk(KERN_INFO "iSCSI security negotiation"
+			" completed successfully.\n");
+		login->auth_complete = 1;
+		if ((login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE1) &&
+		    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT)) {
+			login_rsp->flags |= (ISCSI_FLAG_LOGIN_NEXT_STAGE1 |
+					     ISCSI_FLAG_LOGIN_TRANSIT);
+			login->current_stage = 1;
+		}
+		return iscsi_target_check_for_existing_instances(
+				conn, login);
+	case 2:
+		printk(KERN_ERR "Security negotiation"
+			" failed.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_AUTH_FAILED);
+		return -1;
+	default:
+		printk(KERN_ERR "Received unknown error %d from LIO"
+				" Authentication\n", authret);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_TARGET_ERROR);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_target_handle_csg_zero():
+ *
+ *
+ */
+static int iscsi_target_handle_csg_zero(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	int ret;
+	u32 payload_length;
+	struct iscsi_param *param;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	param = iscsi_find_param_from_key(AUTHMETHOD, conn->param_list);
+	if (!(param))
+		return -1;
+
+	ret = iscsi_decode_text_input(
+			PHASE_SECURITY|PHASE_DECLARATIVE,
+			SENDER_INITIATOR|SENDER_RECEIVER,
+			login->req_buf,
+			payload_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (ret > 0) {
+		if (login->auth_complete) {
+			printk(KERN_ERR "Initiator has already been"
+				" successfully authenticated, but is still"
+				" sending %s keys.\n", param->value);
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_INIT_ERR);
+			return -1;
+		}
+
+		goto do_auth;
+	}
+
+	if (login->first_request)
+		if (iscsi_target_check_first_request(conn, login) < 0)
+			return -1;
+
+	ret = iscsi_encode_text_output(
+			PHASE_SECURITY|PHASE_DECLARATIVE,
+			SENDER_TARGET,
+			login->rsp_buf,
+			&login->rsp_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (!(iscsi_check_negotiated_keys(conn->param_list))) {
+		if (ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication &&
+		    !strncmp(param->value, NONE, 4)) {
+			printk(KERN_ERR "Initiator sent AuthMethod=None but"
+				" Target is enforcing iSCSI Authentication,"
+					" login failed.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_AUTH_FAILED);
+			return -1;
+		}
+
+		if (ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication &&
+		    !login->auth_complete)
+			return 0;
+
+		if (strncmp(param->value, NONE, 4) && !login->auth_complete)
+			return 0;
+
+		if ((login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE1) &&
+		    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT)) {
+			login_rsp->flags |= ISCSI_FLAG_LOGIN_NEXT_STAGE1 |
+					    ISCSI_FLAG_LOGIN_TRANSIT;
+			login->current_stage = 1;
+		}
+	}
+
+	return 0;
+do_auth:
+	return iscsi_target_do_authentication(conn, login);
+}
+
+/*	iscsi_target_handle_csg_one():
+ *
+ *
+ */
+static int iscsi_target_handle_csg_one(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	int ret;
+	u32 payload_length;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	ret = iscsi_decode_text_input(
+			PHASE_OPERATIONAL|PHASE_DECLARATIVE,
+			SENDER_INITIATOR|SENDER_RECEIVER,
+			login->req_buf,
+			payload_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (login->first_request)
+		if (iscsi_target_check_first_request(conn, login) < 0)
+			return -1;
+
+	if (iscsi_target_check_for_existing_instances(conn, login) < 0)
+		return -1;
+
+	ret = iscsi_encode_text_output(
+			PHASE_OPERATIONAL|PHASE_DECLARATIVE,
+			SENDER_TARGET,
+			login->rsp_buf,
+			&login->rsp_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (!(login->auth_complete) &&
+	      ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication) {
+		printk(KERN_ERR "Initiator is requesting CSG: 1, has not been"
+			 " successfully authenticated, and the Target is"
+			" enforcing iSCSI Authentication, login failed.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_AUTH_FAILED);
+		return -1;
+	}
+
+	if (!(iscsi_check_negotiated_keys(conn->param_list)))
+		if ((login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE3) &&
+		    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT))
+			login_rsp->flags |= ISCSI_FLAG_LOGIN_NEXT_STAGE3 |
+					    ISCSI_FLAG_LOGIN_TRANSIT;
+
+	return 0;
+}
+
+/*	iscsi_target_do_login():
+ *
+ *	Main login negotiation loop, bounded by MAX_LOGIN_PDUS exchanges.
+ */
+static int iscsi_target_do_login(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	int pdu_count = 0;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+
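+	/*
+	 * Walk the RFC-3720 login stages keyed off the CSG bits of each
+	 * incoming request: CSG=0 (SecurityNegotiation) and CSG=1
+	 * (LoginOperationalNegotiation).  Once the initiator transits out
+	 * of CSG=1 the final Login Response is sent and we return.
+	 */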
+	while (1) {
+		if (++pdu_count > MAX_LOGIN_PDUS) {
+			printk(KERN_ERR "MAX_LOGIN_PDUS count reached.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			return -1;
+		}
+
+		switch ((login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2) {
+		case 0:
+			login_rsp->flags |= (0 & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK);
+			if (iscsi_target_handle_csg_zero(conn, login) < 0)
+				return -1;
+			break;
+		case 1:
+			login_rsp->flags |= ISCSI_FLAG_LOGIN_CURRENT_STAGE1;
+			if (iscsi_target_handle_csg_one(conn, login) < 0)
+				return -1;
+			if (login_rsp->flags & ISCSI_FLAG_LOGIN_TRANSIT) {
+				login->tsih = SESS(conn)->tsih;
+				if (iscsi_target_do_tx_login_io(conn,
+						login) < 0)
+					return -1;
+				return 0;
+			}
+			break;
+		default:
+			printk(KERN_ERR "Illegal CSG: %d received from"
+				" Initiator, protocol error.\n",
+				(login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK)
+				>> 2);
+			break;
+		}
+
+		if (iscsi_target_do_login_io(conn, login) < 0)
+			return -1;
+
+		if (login_rsp->flags & ISCSI_FLAG_LOGIN_TRANSIT) {
+			login_rsp->flags &= ~ISCSI_FLAG_LOGIN_TRANSIT;
+			login_rsp->flags &= ~ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK;
+		}
+	}
+
+	return 0;
+}
+
+static void iscsi_initiatorname_tolower(
+	char *param_buf)
+{
+	char *c;
+	u32 iqn_size = strlen(param_buf), i;
+
+	for (i = 0; i < iqn_size; i++) {
+		c = (char *)&param_buf[i];
+		if (!(isupper(*c)))
+			continue;
+
+		*c = tolower(*c);
+	}
+}
+
+/*
+ * Process the first Login Request and locate the portal group
+ * for this connection.
+ */
+static int iscsi_target_locate_portal(
+	struct iscsi_np *np,
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	char *i_buf = NULL, *s_buf = NULL, *t_buf = NULL;
+	char *tmpbuf, *start = NULL, *end = NULL, *key, *value;
+	struct iscsi_session *sess = conn->sess;
+	struct iscsi_tiqn *tiqn;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+	u32 payload_length;
+	int sessiontype = 0, ret = 0;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	login->first_request	= 1;
+	login->leading_connection = (!login_req->tsih) ? 1 : 0;
+	login->current_stage	=
+		(login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2;
+	login->version_min	= login_req->min_version;
+	login->version_max	= login_req->max_version;
+	memcpy(login->isid, login_req->isid, 6);
+	login->cmd_sn		= login_req->cmdsn;
+	login->init_task_tag	= login_req->itt;
+	login->initial_exp_statsn = login_req->exp_statsn;
+	login->cid		= login_req->cid;
+	login->tsih		= login_req->tsih;
+
+	if (iscsi_target_get_initial_payload(conn, login) < 0)
+		return -1;
+
+	tmpbuf = kzalloc(payload_length + 1, GFP_KERNEL);
+	if (!(tmpbuf)) {
+		printk(KERN_ERR "Unable to allocate memory for tmpbuf.\n");
+		return -1;
+	}
+
+	memcpy(tmpbuf, login->req_buf, payload_length);
+	tmpbuf[payload_length] = '\0';
+	start = tmpbuf;
+	end = (start + payload_length);
+
+	/*
+	 * Locate the initial keys expected from the Initiator node in
+	 * the first login request in order to progress with the login phase.
+	 */
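+	/*
+	 * For reference, the text payload of a first Login Request carries
+	 * NULL separated key=value pairs along the lines of (hypothetical
+	 * example):
+	 *
+	 *	InitiatorName=iqn.1994-05.com.example:initiator
+	 *	SessionType=Normal
+	 *	TargetName=iqn.2003-01.org.example.target:disk0
+	 */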
+	while (start < end) {
+		if (iscsi_extract_key_value(start, &key, &value) < 0) {
+			ret = -1;
+			goto out;
+		}
+
+		if (!(strncmp(key, "InitiatorName", 13)))
+			i_buf = value;
+		else if (!(strncmp(key, "SessionType", 11)))
+			s_buf = value;
+		else if (!(strncmp(key, "TargetName", 10)))
+			t_buf = value;
+
+		start += strlen(key) + strlen(value) + 2;
+	}
+
+	/*
+	 * See RFC-3720, Section 5.3 (Login Phase).
+	 */
+	if (!i_buf) {
+		printk(KERN_ERR "InitiatorName key not received"
+			" in first login request.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+			ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+		ret = -1;
+		goto out;
+	}
+	/*
+	 * Convert the incoming InitiatorName to lowercase following
+	 * RFC-3720 3.2.6.1. section c) that says that iSCSI IQNs
+	 * are NOT case sensitive.
+	 */
+	iscsi_initiatorname_tolower(i_buf);
+
+	if (!s_buf) {
+		if (!login->leading_connection)
+			goto get_target;
+
+		printk(KERN_ERR "SessionType key not received"
+			" in first login request.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+			ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+		ret = -1;
+		goto out;
+	}
+
+	/*
+	 * Use default portal group for discovery sessions.
+	 */
+	sessiontype = strncmp(s_buf, DISCOVERY, 9);
+	if (!(sessiontype)) {
+		conn->tpg = iscsi_global->discovery_tpg;
+		if (!login->leading_connection)
+			goto get_target;
+
+		SESS_OPS(sess)->SessionType = 1;
+		/*
+		 * Setup crc32c modules from libcrypto
+		 */
+		if (iscsi_login_setup_crypto(conn) < 0) {
+			printk(KERN_ERR "iscsi_login_setup_crypto() failed\n");
+			ret = -1;
+			goto out;
+		}
+		/*
+		 * Serialize access across the discovery struct iscsi_portal_group to
+		 * process login attempt.
+		 */
+		if (core_access_np(np, conn->tpg) < 0) {
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+			ret = -1;
+			goto out;
+		}
+		ret = 0;
+		goto out;
+	}
+
+get_target:
+	if (!t_buf) {
+		printk(KERN_ERR "TargetName key not received"
+			" in first login request while"
+			" SessionType=Normal.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+			ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+		ret = -1;
+		goto out;
+	}
+
+	/*
+	 * Locate Target IQN from Storage Node.
+	 */
+	tiqn = core_get_tiqn_for_login(t_buf);
+	if (!(tiqn)) {
+		printk(KERN_ERR "Unable to locate Target IQN: %s in"
+			" Storage Node\n", t_buf);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		ret = -1;
+		goto out;
+	}
+	printk(KERN_INFO "Located Storage Object: %s\n", tiqn->tiqn);
+
+	/*
+	 * Locate Target Portal Group from Storage Node.
+	 */
+	conn->tpg = core_get_tpg_from_np(tiqn, np);
+	if (!(conn->tpg)) {
+		printk(KERN_ERR "Unable to locate Target Portal Group"
+				" on %s\n", tiqn->tiqn);
+		core_put_tiqn_for_login(tiqn);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		ret = -1;
+		goto out;
+	}
+	printk(KERN_INFO "Located Portal Group Object: %hu\n", conn->tpg->tpgt);
+	/*
+	 * Setup crc32c modules from libcrypto
+	 */
+	if (iscsi_login_setup_crypto(conn) < 0) {
+		printk(KERN_ERR "iscsi_login_setup_crypto() failed\n");
+		ret = -1;
+		goto out;
+	}
+	/*
+	 * Serialize access across the struct iscsi_portal_group to
+	 * process login attempt.
+	 */
+	if (core_access_np(np, conn->tpg) < 0) {
+		core_put_tiqn_for_login(tiqn);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		ret = -1;
+		conn->tpg = NULL;
+		goto out;
+	}
+
+	/*
+	 * SESS(conn)->node_acl will be set when the referenced
+	 * struct iscsi_session is located from received ISID+TSIH in
+	 * iscsi_login_non_zero_tsih_s2().
+	 */
+	if (!login->leading_connection) {
+		ret = 0;
+		goto out;
+	}
+
+	/*
+	 * This value is required in iscsi_login_zero_tsih_s2()
+	 */
+	SESS_OPS(sess)->SessionType = 0;
+
+	/*
+	 * Locate incoming Initiator IQN reference from Storage Node.
+	 */
+	sess->se_sess->se_node_acl = core_tpg_check_initiator_node_acl(
+			&conn->tpg->tpg_se_tpg, i_buf);
+	if (!(sess->se_sess->se_node_acl)) {
+		printk(KERN_ERR "iSCSI Initiator Node: %s is not authorized to"
+			" access iSCSI target portal group: %hu.\n",
+				i_buf, conn->tpg->tpgt);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_TGT_FORBIDDEN);
+		ret = -1;
+		goto out;
+	}
+
+	ret = 0;
+out:
+	kfree(tmpbuf);
+	return ret;
+}
+
+/*	iscsi_target_init_negotiation():
+ *
+ *	Allocate struct iscsi_login, save the received Login Request PDU
+ *	and locate the target portal for this connection.
+ */
+struct iscsi_login *iscsi_target_init_negotiation(
+	struct iscsi_np *np,
+	struct iscsi_conn *conn,
+	char *login_pdu)
+{
+	struct iscsi_login *login;
+
+	login = kzalloc(sizeof(struct iscsi_login), GFP_KERNEL);
+	if (!(login)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_login.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		goto out;
+	}
+
+	login->req = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL);
+	if (!(login->req)) {
+		printk(KERN_ERR "Unable to allocate memory for Login Request.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		goto out;
+	}
+	memcpy(login->req, login_pdu, ISCSI_HDR_LEN);
+
+	login->req_buf = kzalloc(MAX_KEY_VALUE_PAIRS, GFP_KERNEL);
+	if (!(login->req_buf)) {
+		printk(KERN_ERR "Unable to allocate memory for request buffer.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		goto out;
+	}
+	/*
+	 * SessionType: Discovery
+	 *
+	 * 	Locates Default Portal
+	 *
+	 * SessionType: Normal
+	 *
+	 * 	Locates Target Portal from NP -> Target IQN
+	 */
+	if (iscsi_target_locate_portal(np, conn, login) < 0) {
+		printk(KERN_ERR "iSCSI Login negotiation failed.\n");
+		goto out;
+	}
+
+	return login;
+out:
+	kfree(login->req);
+	kfree(login->req_buf);
+	kfree(login);
+
+	return NULL;
+}
+
+int iscsi_target_start_negotiation(
+	struct iscsi_login *login,
+	struct iscsi_conn *conn)
+{
+	int ret = -1;
+
+	login->rsp = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL);
+	if (!(login->rsp)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" Login Response.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		ret = -1;
+		goto out;
+	}
+
+	login->rsp_buf = kzalloc(MAX_KEY_VALUE_PAIRS, GFP_KERNEL);
+	if (!(login->rsp_buf)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" response buffer.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		ret = -1;
+		goto out;
+	}
+
+	ret = iscsi_target_do_login(conn, login);
+out:
+	if (ret != 0)
+		iscsi_remove_failed_auth_entry(conn);
+
+	iscsi_target_nego_release(login, conn);
+	return ret;
+}
+
+void iscsi_target_nego_release(
+	struct iscsi_login *login,
+	struct iscsi_conn *conn)
+{
+	kfree(login->req);
+	kfree(login->rsp);
+	kfree(login->req_buf);
+	kfree(login->rsp_buf);
+	kfree(login);
+}
diff --git a/drivers/target/iscsi/iscsi_target_nego.h b/drivers/target/iscsi/iscsi_target_nego.h
new file mode 100644
index 0000000..75deb10
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nego.h
@@ -0,0 +1,20 @@
+#ifndef ISCSI_TARGET_NEGO_H
+#define ISCSI_TARGET_NEGO_H
+
+#define DECIMAL         0
+#define HEX             1
+
+extern void convert_null_to_semi(char *, int);
+extern int extract_param(const char *, const char *, unsigned int, char *,
+		unsigned char *);
+extern struct iscsi_login *iscsi_target_init_negotiation(
+		struct iscsi_np *, struct iscsi_conn *, char *);
+extern int iscsi_target_start_negotiation(
+		struct iscsi_login *, struct iscsi_conn *);
+extern void iscsi_target_nego_release(
+		struct iscsi_login *, struct iscsi_conn *);
+
+extern struct iscsi_global *iscsi_global;
+
+#endif /* ISCSI_TARGET_NEGO_H */
+
diff --git a/drivers/target/iscsi/iscsi_thread_queue.c b/drivers/target/iscsi/iscsi_thread_queue.c
new file mode 100644
index 0000000..d27b090
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_thread_queue.c
@@ -0,0 +1,635 @@
+/*******************************************************************************
+ * This file contains the iSCSI Login Thread and Thread Queue functions.
+ *
+ * Copyright (c) 2003 PyX Technologies, Inc.
+ * Copyright (c) 2006-2007 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/bitmap.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_thread_queue.h"
+
+/*	iscsi_add_ts_to_active_list():
+ *
+ *
+ */
+static void iscsi_add_ts_to_active_list(struct se_thread_set *ts)
+{
+	spin_lock(&iscsi_global->active_ts_lock);
+	list_add_tail(&ts->ts_list, &iscsi_global->active_ts_list);
+	iscsi_global->active_ts++;
+	spin_unlock(&iscsi_global->active_ts_lock);
+}
+
+/*	iscsi_add_ts_to_inactive_list():
+ *
+ *
+ */
+extern void iscsi_add_ts_to_inactive_list(struct se_thread_set *ts)
+{
+	spin_lock(&iscsi_global->inactive_ts_lock);
+	list_add_tail(&ts->ts_list, &iscsi_global->inactive_ts_list);
+	iscsi_global->inactive_ts++;
+	spin_unlock(&iscsi_global->inactive_ts_lock);
+}
+
+/*	iscsi_del_ts_from_active_list():
+ *
+ *
+ */
+static void iscsi_del_ts_from_active_list(struct se_thread_set *ts)
+{
+	spin_lock(&iscsi_global->active_ts_lock);
+	list_del(&ts->ts_list);
+	iscsi_global->active_ts--;
+	spin_unlock(&iscsi_global->active_ts_lock);
+
+	if (ts->stop_active)
+		up(&ts->stop_active_sem);
+}
+
+/*	iscsi_get_ts_from_inactive_list():
+ *
+ *
+ */
+static struct se_thread_set *iscsi_get_ts_from_inactive_list(void)
+{
+	struct se_thread_set *ts;
+
+	spin_lock(&iscsi_global->inactive_ts_lock);
+	if (list_empty(&iscsi_global->inactive_ts_list)) {
+		spin_unlock(&iscsi_global->inactive_ts_lock);
+		return NULL;
+	}
+
+	list_for_each_entry(ts, &iscsi_global->inactive_ts_list, ts_list)
+		break;
+
+	list_del(&ts->ts_list);
+	iscsi_global->inactive_ts--;
+	spin_unlock(&iscsi_global->inactive_ts_lock);
+
+	return ts;
+}
+
+/*	iscsi_allocate_thread_sets():
+ *
+ *
+ */
+extern int iscsi_allocate_thread_sets(u32 thread_pair_count)
+{
+	int allocated_thread_pair_count = 0, i, thread_id;
+	struct se_thread_set *ts = NULL;
+
+	for (i = 0; i < thread_pair_count; i++) {
+		ts = kzalloc(sizeof(struct se_thread_set), GFP_KERNEL);
+		if (!(ts)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+					" thread set.\n");
+			return allocated_thread_pair_count;
+		}
+		/*
+		 * Locate the next available region in the thread_set_bitmap
+		 */
+		spin_lock(&iscsi_global->ts_bitmap_lock);
+		thread_id = bitmap_find_free_region(iscsi_global->ts_bitmap,
+				iscsi_global->ts_bitmap_count, get_order(1));
+		spin_unlock(&iscsi_global->ts_bitmap_lock);
+		if (thread_id < 0) {
+			printk(KERN_ERR "bitmap_find_free_region() failed for"
+				" thread_set_bitmap\n");
+			kfree(ts);
+			return allocated_thread_pair_count;
+		}
+
+		ts->thread_id = thread_id;
+		ts->status = ISCSI_THREAD_SET_FREE;
+		INIT_LIST_HEAD(&ts->ts_list);
+		spin_lock_init(&ts->ts_state_lock);
+		sema_init(&ts->stop_active_sem, 0);
+		sema_init(&ts->rx_create_sem, 0);
+		sema_init(&ts->tx_create_sem, 0);
+		sema_init(&ts->rx_done_sem, 0);
+		sema_init(&ts->tx_done_sem, 0);
+		sema_init(&ts->rx_post_start_sem, 0);
+		sema_init(&ts->tx_post_start_sem, 0);
+		sema_init(&ts->rx_restart_sem, 0);
+		sema_init(&ts->tx_restart_sem, 0);
+		sema_init(&ts->rx_start_sem, 0);
+		sema_init(&ts->tx_start_sem, 0);
+
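+		/*
+		 * Spawn the RX/TX thread pair.  Each new thread ups its
+		 * *_create_sem from its pre-handler and then sleeps on its
+		 * *_start_sem, so the down() calls below block until both
+		 * threads actually exist.
+		 */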
+		ts->create_threads = 1;
+		kernel_thread(iscsi_target_rx_thread,
+				(void *)ts, 0);
+		down(&ts->rx_create_sem);
+
+		kernel_thread(iscsi_target_tx_thread,
+				(void *)ts, 0);
+		down(&ts->tx_create_sem);
+		ts->create_threads = 0;
+
+		iscsi_add_ts_to_inactive_list(ts);
+		allocated_thread_pair_count++;
+	}
+
+	printk(KERN_INFO "Spawned %d thread set(s) (%d total threads).\n",
+		allocated_thread_pair_count, allocated_thread_pair_count * 2);
+	return allocated_thread_pair_count;
+}
+
+/*	iscsi_deallocate_thread_sets():
+ *
+ *
+ */
+extern void iscsi_deallocate_thread_sets(void)
+{
+	u32 released_count = 0;
+	struct se_thread_set *ts = NULL;
+
+	while ((ts = iscsi_get_ts_from_inactive_list())) {
+
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->status = ISCSI_THREAD_SET_DIE;
+		spin_unlock_bh(&ts->ts_state_lock);
+
+		if (ts->rx_thread) {
+			send_sig(SIGKILL, ts->rx_thread, 1);
+			down(&ts->rx_done_sem);
+		}
+		if (ts->tx_thread) {
+			send_sig(SIGKILL, ts->tx_thread, 1);
+			down(&ts->tx_done_sem);
+		}
+		/*
+		 * Release this thread_id in the thread_set_bitmap
+		 */
+		spin_lock(&iscsi_global->ts_bitmap_lock);
+		bitmap_release_region(iscsi_global->ts_bitmap,
+				ts->thread_id, get_order(1));
+		spin_unlock(&iscsi_global->ts_bitmap_lock);
+
+		released_count++;
+		kfree(ts);
+	}
+
+	if (released_count)
+		printk(KERN_INFO "Stopped %d thread set(s) (%d total threads)."
+			"\n", released_count, released_count * 2);
+}
+
+/*	iscsi_deallocate_extra_thread_sets():
+ *
+ *
+ */
+static void iscsi_deallocate_extra_thread_sets(void)
+{
+	u32 orig_count, released_count = 0;
+	struct se_thread_set *ts = NULL;
+
+	orig_count = TARGET_THREAD_SET_COUNT;
+
+	while ((iscsi_global->inactive_ts + 1) > orig_count) {
+		ts = iscsi_get_ts_from_inactive_list();
+		if (!(ts))
+			break;
+
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->status = ISCSI_THREAD_SET_DIE;
+		spin_unlock_bh(&ts->ts_state_lock);
+
+		if (ts->rx_thread) {
+			send_sig(SIGKILL, ts->rx_thread, 1);
+			down(&ts->rx_done_sem);
+		}
+		if (ts->tx_thread) {
+			send_sig(SIGKILL, ts->tx_thread, 1);
+			down(&ts->tx_done_sem);
+		}
+		/*
+		 * Release this thread_id in the thread_set_bitmap
+		 */
+		spin_lock(&iscsi_global->ts_bitmap_lock);
+		bitmap_release_region(iscsi_global->ts_bitmap,
+				ts->thread_id, get_order(1));
+		spin_unlock(&iscsi_global->ts_bitmap_lock);
+
+		released_count++;
+		kfree(ts);
+	}
+
+	if (released_count) {
+		printk(KERN_INFO "Stopped %d thread set(s) (%d total threads)."
+			"\n", released_count, released_count * 2);
+	}
+}
+
+/*	iscsi_activate_thread_set():
+ *
+ *
+ */
+void iscsi_activate_thread_set(struct iscsi_conn *conn, struct se_thread_set *ts)
+{
+	iscsi_add_ts_to_active_list(ts);
+
+	spin_lock_bh(&ts->ts_state_lock);
+	conn->thread_set = ts;
+	ts->conn = conn;
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	/*
+	 * Start up the RX thread and wait on rx_post_start_sem.  The RX
+	 * Thread will then do the same for the TX Thread in
+	 * iscsi_rx_thread_pre_handler().
+	 */
+	up(&ts->rx_start_sem);
+	down(&ts->rx_post_start_sem);
+}
+
+/*	iscsi_get_thread_set_timeout():
+ *
+ *
+ */
+static void iscsi_get_thread_set_timeout(unsigned long data)
+{
+	up((struct semaphore *)data);
+}
+
+/*	iscsi_get_thread_set():
+ *
+ *	Parameters:	Thread role (currently unused).
+ *	Returns:	Pointer to an available struct se_thread_set.
+ */
+struct se_thread_set *iscsi_get_thread_set(int role)
+{
+	int allocate_ts = 0;
+	struct semaphore sem;
+	struct timer_list timer;
+	struct se_thread_set *ts = NULL;
+
+	/*
+	 * If no inactive thread set is available on the first call to
+	 * iscsi_get_ts_from_inactive_list(), sleep for a second and
+	 * try again.  If still none are available after two attempts,
+	 * allocate a set ourselves.
+	 */
+get_set:
+	ts = iscsi_get_ts_from_inactive_list();
+	if (!(ts)) {
+		if (allocate_ts == 2)
+			iscsi_allocate_thread_sets(1);
+
+		sema_init(&sem, 0);
+		init_timer(&timer);
+		SETUP_TIMER(timer, 1, &sem, iscsi_get_thread_set_timeout);
+		add_timer(&timer);
+
+		down(&sem);
+		del_timer_sync(&timer);
+		allocate_ts++;
+		goto get_set;
+	}
+
+	ts->delay_inactive = 1;
+	ts->signal_sent = ts->stop_active = 0;
+	ts->thread_count = 2;
+	sema_init(&ts->rx_restart_sem, 0);
+	sema_init(&ts->tx_restart_sem, 0);
+
+	return ts;
+}
+
+/*	iscsi_set_thread_clear():
+ *
+ *
+ */
+void iscsi_set_thread_clear(struct iscsi_conn *conn, u8 thread_clear)
+{
+	struct se_thread_set *ts = NULL;
+
+	if (!conn->thread_set) {
+		printk(KERN_ERR "struct iscsi_conn->thread_set is NULL\n");
+		return;
+	}
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->thread_clear &= ~thread_clear;
+
+	if ((thread_clear & ISCSI_CLEAR_RX_THREAD) &&
+	    (ts->blocked_threads & ISCSI_BLOCK_RX_THREAD))
+		up(&ts->rx_restart_sem);
+	else if ((thread_clear & ISCSI_CLEAR_TX_THREAD) &&
+		 (ts->blocked_threads & ISCSI_BLOCK_TX_THREAD))
+		up(&ts->tx_restart_sem);
+	spin_unlock_bh(&ts->ts_state_lock);
+}
+
+/*	iscsi_set_thread_set_signal():
+ *
+ *
+ */
+void iscsi_set_thread_set_signal(struct iscsi_conn *conn, u8 signal_sent)
+{
+	struct se_thread_set *ts = NULL;
+
+	if (!conn->thread_set) {
+		printk(KERN_ERR "struct iscsi_conn->thread_set is NULL\n");
+		return;
+	}
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->signal_sent |= signal_sent;
+	spin_unlock_bh(&ts->ts_state_lock);
+}
+
+/*	iscsi_release_thread_set():
+ *
+ *	Parameters:	iSCSI Connection Pointer.
+ *	Returns:	0 on success, -1 on error.
+ */
+int iscsi_release_thread_set(struct iscsi_conn *conn, int role)
+{
+	int thread_called = 0;
+	struct se_thread_set *ts = NULL;
+
+	if (!conn || !conn->thread_set) {
+		printk(KERN_ERR "connection or thread set pointer is NULL\n");
+		BUG();
+	}
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->status = ISCSI_THREAD_SET_RESET;
+
+	if (!(strncmp(current->comm, ISCSI_RX_THREAD_NAME,
+			strlen(ISCSI_RX_THREAD_NAME))))
+		thread_called = ISCSI_RX_THREAD;
+	else if (!(strncmp(current->comm, ISCSI_TX_THREAD_NAME,
+			strlen(ISCSI_TX_THREAD_NAME))))
+		thread_called = ISCSI_TX_THREAD;
+
+	if (ts->rx_thread && (thread_called == ISCSI_TX_THREAD) &&
+	   (ts->thread_clear & ISCSI_CLEAR_RX_THREAD)) {
+
+		if (!(ts->signal_sent & ISCSI_SIGNAL_RX_THREAD)) {
+			send_sig(SIGABRT, ts->rx_thread, 1);
+			ts->signal_sent |= ISCSI_SIGNAL_RX_THREAD;
+		}
+		ts->blocked_threads |= ISCSI_BLOCK_RX_THREAD;
+		spin_unlock_bh(&ts->ts_state_lock);
+		down(&ts->rx_restart_sem);
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->blocked_threads &= ~ISCSI_BLOCK_RX_THREAD;
+	}
+	if (ts->tx_thread && (thread_called == ISCSI_RX_THREAD) &&
+	   (ts->thread_clear & ISCSI_CLEAR_TX_THREAD)) {
+
+		if (!(ts->signal_sent & ISCSI_SIGNAL_TX_THREAD)) {
+			send_sig(SIGABRT, ts->tx_thread, 1);
+			ts->signal_sent |= ISCSI_SIGNAL_TX_THREAD;
+		}
+		ts->blocked_threads |= ISCSI_BLOCK_TX_THREAD;
+		spin_unlock_bh(&ts->ts_state_lock);
+		down(&ts->tx_restart_sem);
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->blocked_threads &= ~ISCSI_BLOCK_TX_THREAD;
+	}
+
+	conn->thread_set = NULL;
+	ts->conn = NULL;
+	ts->status = ISCSI_THREAD_SET_FREE;
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return 0;
+}
+
+/*	iscsi_thread_set_force_reinstatement():
+ *
+ *
+ */
+int iscsi_thread_set_force_reinstatement(struct iscsi_conn *conn)
+{
+	struct se_thread_set *ts;
+
+	if (!conn->thread_set)
+		return -1;
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	if (ts->status != ISCSI_THREAD_SET_ACTIVE) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		return -1;
+	}
+
+	if (ts->tx_thread && (!(ts->signal_sent & ISCSI_SIGNAL_TX_THREAD))) {
+		send_sig(SIGABRT, ts->tx_thread, 1);
+		ts->signal_sent |= ISCSI_SIGNAL_TX_THREAD;
+	}
+	if (ts->rx_thread && (!(ts->signal_sent & ISCSI_SIGNAL_RX_THREAD))) {
+		send_sig(SIGABRT, ts->rx_thread, 1);
+		ts->signal_sent |= ISCSI_SIGNAL_RX_THREAD;
+	}
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return 0;
+}
+
+/*	iscsi_check_to_add_additional_sets():
+ *
+ *
+ */
+static void iscsi_check_to_add_additional_sets(void)
+{
+	int thread_sets_add;
+
+	spin_lock(&iscsi_global->inactive_ts_lock);
+	thread_sets_add = iscsi_global->inactive_ts;
+	spin_unlock(&iscsi_global->inactive_ts_lock);
+	if (thread_sets_add == 1)
+		iscsi_allocate_thread_sets(1);
+}
+
+/*	iscsi_signal_thread_pre_handler():
+ *
+ *
+ */
+static int iscsi_signal_thread_pre_handler(struct se_thread_set *ts)
+{
+	spin_lock_bh(&ts->ts_state_lock);
+	if ((ts->status == ISCSI_THREAD_SET_DIE) || signal_pending(current)) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		return -1;
+	}
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return 0;
+}
+
+/*	iscsi_rx_thread_pre_handler():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_rx_thread_pre_handler(struct se_thread_set *ts, int role)
+{
+	int ret;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	if (ts->create_threads) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		up(&ts->rx_create_sem);
+		goto sleep;
+	}
+
+	flush_signals(current);
+
+	if (ts->delay_inactive && (--ts->thread_count == 0)) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		iscsi_del_ts_from_active_list(ts);
+
+		if (!iscsi_global->in_shutdown)
+			iscsi_deallocate_extra_thread_sets();
+
+		iscsi_add_ts_to_inactive_list(ts);
+		spin_lock_bh(&ts->ts_state_lock);
+	}
+
+	if ((ts->status == ISCSI_THREAD_SET_RESET) &&
+	    (ts->thread_clear & ISCSI_CLEAR_RX_THREAD))
+		up(&ts->rx_restart_sem);
+
+	ts->thread_clear &= ~ISCSI_CLEAR_RX_THREAD;
+	spin_unlock_bh(&ts->ts_state_lock);
+sleep:
+	ret = down_interruptible(&ts->rx_start_sem);
+	if (ret != 0)
+		return NULL;
+
+	if (iscsi_signal_thread_pre_handler(ts) < 0)
+		return NULL;
+
+	if (!ts->conn) {
+		printk(KERN_ERR "struct se_thread_set->conn is NULL for"
+			" thread_id: %d, going back to sleep\n", ts->thread_id);
+		goto sleep;
+	}
+	iscsi_check_to_add_additional_sets();
+	/*
+	 * The RX Thread starts up the TX Thread and sleeps.
+	 */
+	ts->thread_clear |= ISCSI_CLEAR_RX_THREAD;
+	up(&ts->tx_start_sem);
+	down(&ts->tx_post_start_sem);
+
+	return ts->conn;
+}
+
+/*	iscsi_tx_thread_pre_handler():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_tx_thread_pre_handler(struct se_thread_set *ts, int role)
+{
+	int ret;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	if (ts->create_threads) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		up(&ts->tx_create_sem);
+		goto sleep;
+	}
+
+	flush_signals(current);
+
+	if (ts->delay_inactive && (--ts->thread_count == 0)) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		iscsi_del_ts_from_active_list(ts);
+
+		if (!iscsi_global->in_shutdown)
+			iscsi_deallocate_extra_thread_sets();
+
+		iscsi_add_ts_to_inactive_list(ts);
+		spin_lock_bh(&ts->ts_state_lock);
+	}
+	if ((ts->status == ISCSI_THREAD_SET_RESET) &&
+	    (ts->thread_clear & ISCSI_CLEAR_TX_THREAD))
+		up(&ts->tx_restart_sem);
+
+	ts->thread_clear &= ~ISCSI_CLEAR_TX_THREAD;
+	spin_unlock_bh(&ts->ts_state_lock);
+sleep:
+	ret = down_interruptible(&ts->tx_start_sem);
+	if (ret != 0)
+		return NULL;
+
+	if (iscsi_signal_thread_pre_handler(ts) < 0)
+		return NULL;
+
+	if (!ts->conn) {
+		printk(KERN_ERR "struct se_thread_set->conn is NULL for"
+			" thread_id: %d, going back to sleep\n",
+			ts->thread_id);
+		goto sleep;
+	}
+
+	iscsi_check_to_add_additional_sets();
+	/*
+	 * From the TX thread, up the tx_post_start_sem that the RX Thread is
+	 * sleeping on in iscsi_rx_thread_pre_handler(), then up the
+	 * rx_post_start_sem that iscsi_activate_thread_set() is sleeping on.
+	 */
+	ts->thread_clear |= ISCSI_CLEAR_TX_THREAD;
+	up(&ts->tx_post_start_sem);
+	up(&ts->rx_post_start_sem);
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->status = ISCSI_THREAD_SET_ACTIVE;
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return ts->conn;
+}
+
+int iscsi_thread_set_init(void)
+{
+	int size;
+
+	iscsi_global->ts_bitmap_count = ISCSI_TS_BITMAP_BITS;
+
+	size = BITS_TO_LONGS(iscsi_global->ts_bitmap_count) * sizeof(long);
+	iscsi_global->ts_bitmap = kzalloc(size, GFP_KERNEL);
+	if (!(iscsi_global->ts_bitmap)) {
+		printk(KERN_ERR "Unable to allocate iscsi_global->ts_bitmap\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+void iscsi_thread_set_free(void)
+{
+	kfree(iscsi_global->ts_bitmap);
+}
diff --git a/drivers/target/iscsi/iscsi_thread_queue.h b/drivers/target/iscsi/iscsi_thread_queue.h
new file mode 100644
index 0000000..54089fd
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_thread_queue.h
@@ -0,0 +1,103 @@
+#ifndef ISCSI_THREAD_QUEUE_H
+#define ISCSI_THREAD_QUEUE_H
+
+/*
+ * Defines for thread sets.
+ */
+extern int iscsi_thread_set_force_reinstatement(struct iscsi_conn *);
+extern void iscsi_add_ts_to_inactive_list(struct se_thread_set *);
+extern int iscsi_allocate_thread_sets(u32);
+extern void iscsi_deallocate_thread_sets(void);
+extern void iscsi_activate_thread_set(struct iscsi_conn *, struct se_thread_set *);
+extern struct se_thread_set *iscsi_get_thread_set(int);
+extern void iscsi_set_thread_clear(struct iscsi_conn *, u8);
+extern void iscsi_set_thread_set_signal(struct iscsi_conn *, u8);
+extern int iscsi_release_thread_set(struct iscsi_conn *, int);
+extern struct iscsi_conn *iscsi_rx_thread_pre_handler(struct se_thread_set *, int);
+extern struct iscsi_conn *iscsi_tx_thread_pre_handler(struct se_thread_set *, int);
+extern int iscsi_thread_set_init(void);
+extern void iscsi_thread_set_free(void);
+
+extern int iscsi_target_tx_thread(void *);
+extern int iscsi_target_rx_thread(void *);
+extern struct iscsi_global *iscsi_global;
+
+#define INITIATOR_THREAD_SET_COUNT		4
+#define TARGET_THREAD_SET_COUNT			4
+
+#define ISCSI_RX_THREAD                         1
+#define ISCSI_TX_THREAD                         2
+#define ISCSI_RX_THREAD_NAME			"iscsi_trx"
+#define ISCSI_TX_THREAD_NAME			"iscsi_ttx"
+#define ISCSI_BLOCK_RX_THREAD			0x1
+#define ISCSI_BLOCK_TX_THREAD			0x2
+#define ISCSI_CLEAR_RX_THREAD			0x1
+#define ISCSI_CLEAR_TX_THREAD			0x2
+#define ISCSI_SIGNAL_RX_THREAD			0x1
+#define ISCSI_SIGNAL_TX_THREAD			0x2
+
+/* struct se_thread_set->status */
+#define ISCSI_THREAD_SET_FREE			1
+#define ISCSI_THREAD_SET_ACTIVE			2
+#define ISCSI_THREAD_SET_DIE			3
+#define ISCSI_THREAD_SET_RESET			4
+#define ISCSI_THREAD_SET_DEALLOCATE_THREADS	5
+
+/* By default allow a maximum of 32K iSCSI connections */
+#define ISCSI_TS_BITMAP_BITS			32768
+
+struct se_thread_set {
+	/* flags used for blocking and restarting sets */
+	u8	blocked_threads;
+	/* flag for creating threads */
+	u8	create_threads;
+	/* flag for delaying re-adding to inactive list */
+	u8	delay_inactive;
+	/* status for thread set */
+	u8	status;
+	/* which threads have had signals sent */
+	u8	signal_sent;
+	/* used for stopping active sets during shutdown */
+	u8	stop_active;
+	/* flag for which threads exited first */
+	u8	thread_clear;
+	/* Active threads in the thread set */
+	u8	thread_count;
+	/* Unique thread ID */
+	u32	thread_id;
+	/* pointer to connection if set is active */
+	struct iscsi_conn	*conn;
+	/* used for controlling ts state accesses */
+	spinlock_t	ts_state_lock;
+	/* used for stopping active sets during shutdown */
+	struct semaphore	stop_active_sem;
+	/* used for controlling thread creation */
+	struct semaphore	rx_create_sem;
+	/* used for controlling thread creation */
+	struct semaphore	tx_create_sem;
+	/* used for controlling killing */
+	struct semaphore	rx_done_sem;
+	/* used for controlling killing */
+	struct semaphore	tx_done_sem;
+	/* Used for rx side post startup */
+	struct semaphore	rx_post_start_sem;
+	/* Used for tx side post startup */
+	struct semaphore	tx_post_start_sem;
+	/* used for restarting thread queue */
+	struct semaphore	rx_restart_sem;
+	/* used for restarting thread queue */
+	struct semaphore	tx_restart_sem;
+	/* used for normal unused blocking */
+	struct semaphore	rx_start_sem;
+	/* used for normal unused blocking */
+	struct semaphore	tx_start_sem;
+	/* OS descriptor for rx thread */
+	struct task_struct	*rx_thread;
+	/* OS descriptor for tx thread */
+	struct task_struct	*tx_thread;
+	/* list_head for the active/inactive thread set lists */
+	struct list_head	ts_list;
+} ____cacheline_aligned;
+
+#endif   /*** ISCSI_THREAD_QUEUE_H ***/
+
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 06/12] iscsi-target: Add iSCSI Login Negotiation and Parameter logic
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds the principal RFC-3720 compatible iSCSI Login
phase negotiation for iscsi_target_mod.  It also includes the
iscsi_thread_queue.[c,h] code that is called directly from the
iSCSI login code.

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
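A rough sketch of how the login thread is expected to drive the new
negotiation API (assuming the caller has already read the first Login
Request PDU header into buf; error handling trimmed):

	struct iscsi_login *login;

	login = iscsi_target_init_negotiation(np, conn, buf);
	if (!login)
		return -1;

	return iscsi_target_start_negotiation(login, conn);
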
 drivers/target/iscsi/iscsi_parameters.c   | 2078 +++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_parameters.h   |  271 ++++
 drivers/target/iscsi/iscsi_target_login.c | 1411 ++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_login.h |   15 +
 drivers/target/iscsi/iscsi_target_nego.c  | 1116 ++++++++++++++++
 drivers/target/iscsi/iscsi_target_nego.h  |   20 +
 drivers/target/iscsi/iscsi_thread_queue.c |  635 +++++++++
 drivers/target/iscsi/iscsi_thread_queue.h |  103 ++
 8 files changed, 5649 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_parameters.c
 create mode 100644 drivers/target/iscsi/iscsi_parameters.h
 create mode 100644 drivers/target/iscsi/iscsi_target_login.c
 create mode 100644 drivers/target/iscsi/iscsi_target_login.h
 create mode 100644 drivers/target/iscsi/iscsi_target_nego.c
 create mode 100644 drivers/target/iscsi/iscsi_target_nego.h
 create mode 100644 drivers/target/iscsi/iscsi_thread_queue.c
 create mode 100644 drivers/target/iscsi/iscsi_thread_queue.h

diff --git a/drivers/target/iscsi/iscsi_parameters.c b/drivers/target/iscsi/iscsi_parameters.c
new file mode 100644
index 0000000..81bd7c9
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_parameters.c
@@ -0,0 +1,2078 @@
+/*******************************************************************************
+ * This file contains main functions related to iSCSI Parameter negotiation.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_util.h"
+#include "iscsi_parameters.h"
+
+/*	iscsi_login_rx_data():
+ *
+ *
+ */
+int iscsi_login_rx_data(
+	struct iscsi_conn *conn,
+	char *buf,
+	int length,
+	int role)
+{
+	int rx_got;
+	struct iovec iov;
+
+	memset(&iov, 0, sizeof(struct iovec));
+	iov.iov_len	= length;
+	iov.iov_base	= buf;
+
+	/*
+	 * Initial Marker-less Interval.
+	 * Add the values regardless of IFMarker/OFMarker, considering
+	 * it may not be negotiated yet.
+	 */
+	if (role == INITIATOR)
+		conn->if_marker += length;
+	else if (role == TARGET)
+		conn->of_marker += length;
+	else {
+		printk(KERN_ERR "Unknown role: 0x%02x.\n", role);
+		return -1;
+	}
+
+	rx_got = rx_data(conn, &iov, 1, length);
+	if (rx_got != length) {
+		printk(KERN_ERR "rx_data returned %d, expecting %d.\n",
+				rx_got, length);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_login_tx_data():
+ *
+ *
+ */
+int iscsi_login_tx_data(
+	struct iscsi_conn *conn,
+	char *pdu_buf,
+	char *text_buf,
+	int text_length,
+	int role)
+{
+	int length, tx_sent;
+	struct iovec iov[2];
+
+	length = (ISCSI_HDR_LEN + text_length);
+
+	memset(&iov[0], 0, 2 * sizeof(struct iovec));
+	iov[0].iov_len		= ISCSI_HDR_LEN;
+	iov[0].iov_base		= pdu_buf;
+	iov[1].iov_len		= text_length;
+	iov[1].iov_base		= text_buf;
+
+	/*
+	 * Initial Marker-less Interval.
+	 * Add the values regardless of IFMarker/OFMarker, considering
+	 * it may not be negotiated yet.
+	 */
+	if (role == INITIATOR)
+		conn->of_marker += length;
+	else if (role == TARGET)
+		conn->if_marker += length;
+	else {
+		printk(KERN_ERR "Unknown role: 0x%02x.\n", role);
+		return -1;
+	}
+
+	tx_sent = tx_data(conn, &iov[0], 2, length);
+	if (tx_sent != length) {
+		printk(KERN_ERR "tx_data returned %d, expecting %d.\n",
+				tx_sent, length);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_dump_conn_ops():
+ *
+ *
+ */
+void iscsi_dump_conn_ops(struct iscsi_conn_ops *conn_ops)
+{
+	printk(KERN_INFO "HeaderDigest: %s\n", (conn_ops->HeaderDigest) ?
+				"CRC32C" : "None");
+	printk(KERN_INFO "DataDigest: %s\n", (conn_ops->DataDigest) ?
+				"CRC32C" : "None");
+	printk(KERN_INFO "MaxRecvDataSegmentLength: %u\n",
+				conn_ops->MaxRecvDataSegmentLength);
+	printk(KERN_INFO "OFMarker: %s\n", (conn_ops->OFMarker) ? "Yes" : "No");
+	printk(KERN_INFO "IFMarker: %s\n", (conn_ops->IFMarker) ? "Yes" : "No");
+	if (conn_ops->OFMarker)
+		printk(KERN_INFO "OFMarkInt: %u\n", conn_ops->OFMarkInt);
+	if (conn_ops->IFMarker)
+		printk(KERN_INFO "IFMarkInt: %u\n", conn_ops->IFMarkInt);
+}
+
+/*	iscsi_dump_sess_ops():
+ *
+ *
+ */
+void iscsi_dump_sess_ops(struct iscsi_sess_ops *sess_ops)
+{
+	printk(KERN_INFO "InitiatorName: %s\n", sess_ops->InitiatorName);
+	printk(KERN_INFO "InitiatorAlias: %s\n", sess_ops->InitiatorAlias);
+	printk(KERN_INFO "TargetName: %s\n", sess_ops->TargetName);
+	printk(KERN_INFO "TargetAlias: %s\n", sess_ops->TargetAlias);
+	printk(KERN_INFO "TargetPortalGroupTag: %hu\n",
+			sess_ops->TargetPortalGroupTag);
+	printk(KERN_INFO "MaxConnections: %hu\n", sess_ops->MaxConnections);
+	printk(KERN_INFO "InitialR2T: %s\n",
+			(sess_ops->InitialR2T) ? "Yes" : "No");
+	printk(KERN_INFO "ImmediateData: %s\n", (sess_ops->ImmediateData) ?
+			"Yes" : "No");
+	printk(KERN_INFO "MaxBurstLength: %u\n", sess_ops->MaxBurstLength);
+	printk(KERN_INFO "FirstBurstLength: %u\n", sess_ops->FirstBurstLength);
+	printk(KERN_INFO "DefaultTime2Wait: %hu\n", sess_ops->DefaultTime2Wait);
+	printk(KERN_INFO "DefaultTime2Retain: %hu\n",
+			sess_ops->DefaultTime2Retain);
+	printk(KERN_INFO "MaxOutstandingR2T: %hu\n",
+			sess_ops->MaxOutstandingR2T);
+	printk(KERN_INFO "DataPDUInOrder: %s\n",
+			(sess_ops->DataPDUInOrder) ? "Yes" : "No");
+	printk(KERN_INFO "DataSequenceInOrder: %s\n",
+			(sess_ops->DataSequenceInOrder) ? "Yes" : "No");
+	printk(KERN_INFO "ErrorRecoveryLevel: %hu\n",
+			sess_ops->ErrorRecoveryLevel);
+	printk(KERN_INFO "SessionType: %s\n", (sess_ops->SessionType) ?
+			"Discovery" : "Normal");
+}
+
+/*	iscsi_print_params():
+ *
+ *
+ */
+void iscsi_print_params(struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	list_for_each_entry(param, &param_list->param_list, p_list)
+		printk(KERN_INFO "%s: %s\n", param->name, param->value);
+}
+
+/*	iscsi_set_default_param():
+ *
+ *
+ */
+static struct iscsi_param *iscsi_set_default_param(struct iscsi_param_list *param_list,
+		char *name, char *value, u8 phase, u8 scope, u8 sender,
+		u16 type_range, u8 use)
+{
+	struct iscsi_param *param = NULL;
+
+	param = kzalloc(sizeof(struct iscsi_param), GFP_KERNEL);
+	if (!(param)) {
+		printk(KERN_ERR "Unable to allocate memory for parameter.\n");
+		goto out;
+	}
+	INIT_LIST_HEAD(&param->p_list);
+
+	param->name = kzalloc(strlen(name) + 1, GFP_KERNEL);
+	if (!(param->name)) {
+		printk(KERN_ERR "Unable to allocate memory for parameter name.\n");
+		goto out;
+	}
+
+	param->value = kzalloc(strlen(value) + 1, GFP_KERNEL);
+	if (!(param->value)) {
+		printk(KERN_ERR "Unable to allocate memory for parameter value.\n");
+		goto out;
+	}
+
+	memcpy(param->name, name, strlen(name));
+	param->name[strlen(name)] = '\0';
+	memcpy(param->value, value, strlen(value));
+	param->value[strlen(value)] = '\0';
+	param->phase		= phase;
+	param->scope		= scope;
+	param->sender		= sender;
+	param->use		= use;
+	param->type_range	= type_range;
+
+	switch (param->type_range) {
+	case TYPERANGE_BOOL_AND:
+		param->type = TYPE_BOOL_AND;
+		break;
+	case TYPERANGE_BOOL_OR:
+		param->type = TYPE_BOOL_OR;
+		break;
+	case TYPERANGE_0_TO_2:
+	case TYPERANGE_0_TO_3600:
+	case TYPERANGE_0_TO_32767:
+	case TYPERANGE_0_TO_65535:
+	case TYPERANGE_1_TO_65535:
+	case TYPERANGE_2_TO_3600:
+	case TYPERANGE_512_TO_16777215:
+		param->type = TYPE_NUMBER;
+		break;
+	case TYPERANGE_AUTH:
+	case TYPERANGE_DIGEST:
+		param->type = TYPE_VALUE_LIST | TYPE_STRING;
+		break;
+	case TYPERANGE_MARKINT:
+		param->type = TYPE_NUMBER_RANGE;
+		param->type_range |= TYPERANGE_1_TO_65535;
+		break;
+	case TYPERANGE_ISCSINAME:
+	case TYPERANGE_SESSIONTYPE:
+	case TYPERANGE_TARGETADDRESS:
+	case TYPERANGE_UTF8:
+		param->type = TYPE_STRING;
+		break;
+	default:
+		printk(KERN_ERR "Unknown type_range 0x%02x\n",
+				param->type_range);
+		goto out;
+	}
+	list_add_tail(&param->p_list, &param_list->param_list);
+
+	return param;
+out:
+	if (param) {
+		kfree(param->value);
+		kfree(param->name);
+		kfree(param);
+	}
+
+	return NULL;
+}
+
+/*	iscsi_create_default_params():
+ *
+ *	Allocate a parameter list and populate it with the standard
+ *	RFC 3720 keys and their initial values.
+ */
+/* #warning Add extension keys */
+int iscsi_create_default_params(struct iscsi_param_list **param_list_ptr)
+{
+	struct iscsi_param *param = NULL;
+	struct iscsi_param_list *pl;
+
+	pl = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL);
+	if (!(pl)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_param_list.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&pl->param_list);
+	INIT_LIST_HEAD(&pl->extra_response_list);
+
+	/*
+	 * The format for each initial parameter definition is:
+	 *
+	 * Parameter name:
+	 * Initial value:
+	 * Allowable phase:
+	 * Scope:
+	 * Allowable senders:
+	 * Typerange:
+	 * Use:
+	 */
+	param = iscsi_set_default_param(pl, AUTHMETHOD, INITIAL_AUTHMETHOD,
+			PHASE_SECURITY, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_AUTH, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, HEADERDIGEST, INITIAL_HEADERDIGEST,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_DIGEST, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DATADIGEST, INITIAL_DATADIGEST,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_DIGEST, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXCONNECTIONS,
+			INITIAL_MAXCONNECTIONS, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_1_TO_65535, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, SENDTARGETS, INITIAL_SENDTARGETS,
+			PHASE_FFP0, SCOPE_SESSION_WIDE, SENDER_INITIATOR,
+			TYPERANGE_UTF8, 0);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETNAME, INITIAL_TARGETNAME,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_ISCSINAME, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, INITIATORNAME,
+			INITIAL_INITIATORNAME, PHASE_DECLARATIVE,
+			SCOPE_SESSION_WIDE, SENDER_INITIATOR,
+			TYPERANGE_ISCSINAME, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETALIAS, INITIAL_TARGETALIAS,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_TARGET,
+			TYPERANGE_UTF8, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, INITIATORALIAS,
+			INITIAL_INITIATORALIAS, PHASE_DECLARATIVE,
+			SCOPE_SESSION_WIDE, SENDER_INITIATOR, TYPERANGE_UTF8,
+			USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETADDRESS,
+			INITIAL_TARGETADDRESS, PHASE_DECLARATIVE,
+			SCOPE_SESSION_WIDE, SENDER_TARGET,
+			TYPERANGE_TARGETADDRESS, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, TARGETPORTALGROUPTAG,
+			INITIAL_TARGETPORTALGROUPTAG,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_TARGET,
+			TYPERANGE_0_TO_65535, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, INITIALR2T, INITIAL_INITIALR2T,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_BOOL_OR, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, IMMEDIATEDATA,
+			INITIAL_IMMEDIATEDATA, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH, TYPERANGE_BOOL_AND,
+			USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXRECVDATASEGMENTLENGTH,
+			INITIAL_MAXRECVDATASEGMENTLENGTH,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_512_TO_16777215, USE_ALL);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXBURSTLENGTH,
+			INITIAL_MAXBURSTLENGTH, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_512_TO_16777215, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, FIRSTBURSTLENGTH,
+			INITIAL_FIRSTBURSTLENGTH,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_512_TO_16777215, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DEFAULTTIME2WAIT,
+			INITIAL_DEFAULTTIME2WAIT,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_0_TO_3600, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DEFAULTTIME2RETAIN,
+			INITIAL_DEFAULTTIME2RETAIN,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_0_TO_3600, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, MAXOUTSTANDINGR2T,
+			INITIAL_MAXOUTSTANDINGR2T,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_1_TO_65535, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DATAPDUINORDER,
+			INITIAL_DATAPDUINORDER, PHASE_OPERATIONAL,
+			SCOPE_SESSION_WIDE, SENDER_BOTH, TYPERANGE_BOOL_OR,
+			USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, DATASEQUENCEINORDER,
+			INITIAL_DATASEQUENCEINORDER,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_BOOL_OR, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, ERRORRECOVERYLEVEL,
+			INITIAL_ERRORRECOVERYLEVEL,
+			PHASE_OPERATIONAL, SCOPE_SESSION_WIDE, SENDER_BOTH,
+			TYPERANGE_0_TO_2, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, SESSIONTYPE, INITIAL_SESSIONTYPE,
+			PHASE_DECLARATIVE, SCOPE_SESSION_WIDE, SENDER_INITIATOR,
+			TYPERANGE_SESSIONTYPE, USE_LEADING_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, IFMARKER, INITIAL_IFMARKER,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_BOOL_AND, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, OFMARKER, INITIAL_OFMARKER,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_BOOL_AND, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, IFMARKINT, INITIAL_IFMARKINT,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_MARKINT, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	param = iscsi_set_default_param(pl, OFMARKINT, INITIAL_OFMARKINT,
+			PHASE_OPERATIONAL, SCOPE_CONNECTION_ONLY, SENDER_BOTH,
+			TYPERANGE_MARKINT, USE_INITIAL_ONLY);
+	if (!(param))
+		goto out;
+
+	*param_list_ptr = pl;
+	return 0;
+out:
+	iscsi_release_param_list(pl);
+	return -1;
+}
+
+/*	iscsi_set_keys_to_negotiate():
+ *
+ *
+ */
+int iscsi_set_keys_to_negotiate(
+	int role,
+	int sessiontype,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
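+	/*
+	 * Clear any prior negotiation state and mark each standard key
+	 * for negotiation; a few keys are only proposed for a given role
+	 * (initiator vs. target) or session type.
+	 */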
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		param->state = 0;
+		if (!strcmp(param->name, AUTHMETHOD)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, HEADERDIGEST)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DATADIGEST)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXCONNECTIONS)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, TARGETNAME)) {
+			if ((role == INITIATOR) && (sessiontype)) {
+				SET_PSTATE_NEGOTIATE(param);
+				SET_USE_INITIAL_ONLY(param);
+			}
+		} else if (!strcmp(param->name, INITIATORNAME)) {
+			if (role == INITIATOR)
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, TARGETALIAS)) {
+			if ((role == TARGET) && (param->value))
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, INITIATORALIAS)) {
+			if ((role == INITIATOR) && (param->value))
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, TARGETPORTALGROUPTAG)) {
+			if (role == TARGET)
+				SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, INITIALR2T)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, IMMEDIATEDATA)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXBURSTLENGTH)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, FIRSTBURSTLENGTH)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DEFAULTTIME2WAIT)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DEFAULTTIME2RETAIN)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, MAXOUTSTANDINGR2T)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DATAPDUINORDER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, DATASEQUENCEINORDER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, ERRORRECOVERYLEVEL)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, SESSIONTYPE)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, IFMARKER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, OFMARKER)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, IFMARKINT)) {
+			SET_PSTATE_NEGOTIATE(param);
+		} else if (!strcmp(param->name, OFMARKINT)) {
+			SET_PSTATE_NEGOTIATE(param);
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_set_keys_irrelevant_for_discovery():
+ *
+ *
+ */
+int iscsi_set_keys_irrelevant_for_discovery(
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
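+	/*
+	 * Discovery sessions carry no SCSI data, so the data transfer,
+	 * error recovery, timeout and marker keys are dropped from
+	 * negotiation.
+	 */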
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!strcmp(param->name, MAXCONNECTIONS))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, INITIALR2T))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, IMMEDIATEDATA))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, MAXBURSTLENGTH))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, FIRSTBURSTLENGTH))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, MAXOUTSTANDINGR2T))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DATAPDUINORDER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DATASEQUENCEINORDER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, ERRORRECOVERYLEVEL))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DEFAULTTIME2WAIT))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, DEFAULTTIME2RETAIN))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, IFMARKER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, OFMARKER))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, IFMARKINT))
+			param->state &= ~PSTATE_NEGOTIATE;
+		else if (!strcmp(param->name, OFMARKINT))
+			param->state &= ~PSTATE_NEGOTIATE;
+	}
+
+	return 0;
+}
+
+/*	iscsi_copy_param_list():
+ *
+ *
+ */
+int iscsi_copy_param_list(
+	struct iscsi_param_list **dst_param_list,
+	struct iscsi_param_list *src_param_list,
+	int leading)
+{
+	struct iscsi_param *new_param = NULL, *param = NULL;
+	struct iscsi_param_list *param_list = NULL;
+
+	param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL);
+	if (!(param_list)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_param_list.\n");
+		goto err_out;
+	}
+	INIT_LIST_HEAD(&param_list->param_list);
+	INIT_LIST_HEAD(&param_list->extra_response_list);
+
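+	/*
+	 * For a non-leading connection only connection-scoped keys are
+	 * copied, plus TargetName, InitiatorName and TargetPortalGroupTag,
+	 * which are still required for login on the new connection.
+	 */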
+	list_for_each_entry(param, &src_param_list->param_list, p_list) {
+		if (!leading && (param->scope & SCOPE_SESSION_WIDE)) {
+			if (strcmp(param->name, TARGETNAME) &&
+			    strcmp(param->name, INITIATORNAME) &&
+			    strcmp(param->name, TARGETPORTALGROUPTAG))
+				continue;
+		}
+
+		new_param = kzalloc(sizeof(struct iscsi_param), GFP_KERNEL);
+		if (!(new_param)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_param.\n");
+			goto err_out;
+		}
+
+		new_param->set_param = param->set_param;
+		new_param->phase = param->phase;
+		new_param->scope = param->scope;
+		new_param->sender = param->sender;
+		new_param->type = param->type;
+		new_param->use = param->use;
+		new_param->type_range = param->type_range;
+
+		new_param->name = kzalloc(strlen(param->name) + 1, GFP_KERNEL);
+		if (!(new_param->name)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" parameter name.\n");
+			goto err_out;
+		}
+
+		new_param->value = kzalloc(strlen(param->value) + 1,
+				GFP_KERNEL);
+		if (!(new_param->value)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" parameter value.\n");
+			goto err_out;
+		}
+
+		memcpy(new_param->name, param->name, strlen(param->name));
+		new_param->name[strlen(param->name)] = '\0';
+		memcpy(new_param->value, param->value, strlen(param->value));
+		new_param->value[strlen(param->value)] = '\0';
+
+		list_add_tail(&new_param->p_list, &param_list->param_list);
+	}
+
+	if (!(list_empty(&param_list->param_list)))
+		*dst_param_list = param_list;
+	else {
+		printk(KERN_ERR "No parameters allocated.\n");
+		goto err_out;
+	}
+
+	return 0;
+
+err_out:
+	iscsi_release_param_list(param_list);
+	return -1;
+}
+
+/*	iscsi_release_extra_responses():
+ *
+ *
+ */
+static void iscsi_release_extra_responses(struct iscsi_param_list *param_list)
+{
+	struct iscsi_extra_response *er, *er_tmp;
+
+	list_for_each_entry_safe(er, er_tmp, &param_list->extra_response_list,
+			er_list) {
+		list_del(&er->er_list);
+		kfree(er);
+	}
+}
+
+/*	iscsi_release_param_list():
+ *
+ *
+ */
+void iscsi_release_param_list(struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param, *param_tmp;
+
+	list_for_each_entry_safe(param, param_tmp, &param_list->param_list,
+			p_list) {
+		list_del(&param->p_list);
+
+		kfree(param->name);
+		param->name = NULL;
+		kfree(param->value);
+		param->value = NULL;
+		kfree(param);
+		param = NULL;
+	}
+
+	iscsi_release_extra_responses(param_list);
+
+	kfree(param_list);
+}
+
+/*	iscsi_find_param_from_key():
+ *
+ *
+ */
+struct iscsi_param *iscsi_find_param_from_key(
+	char *key,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	if (!key || !param_list) {
+		printk(KERN_ERR "Key or parameter list pointer is NULL.\n");
+		return NULL;
+	}
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!strcmp(key, param->name))
+			return param;
+	}
+
+	/*
+	 * list_for_each_entry() never leaves the cursor NULL, so a missing
+	 * key must be detected by falling off the end of the list.
+	 */
+	printk(KERN_ERR "Unable to locate key \"%s\".\n", key);
+	return NULL;
+}
+
+/*	iscsi_extract_key_value():
+ *
+ *
+ */
+int iscsi_extract_key_value(char *textbuf, char **key, char **value)
+{
+	*value = strchr(textbuf, '=');
+	if (!(*value)) {
+		printk(KERN_ERR "Unable to locate \"=\" separator for key,"
+				" ignoring request.\n");
+		return -1;
+	}
+
+	*key = textbuf;
+	**value = '\0';
+	*value = *value + 1;
+
+	return 0;
+}
+
+/*	iscsi_update_param_value():
+ *
+ *
+ */
+int iscsi_update_param_value(struct iscsi_param *param, char *value)
+{
+	kfree(param->value);
+
+	param->value = kzalloc(strlen(value) + 1, GFP_KERNEL);
+	if (!(param->value)) {
+		printk(KERN_ERR "Unable to allocate memory for value.\n");
+		return -1;
+	}
+
+	memcpy(param->value, value, strlen(value));
+	param->value[strlen(value)] = '\0';
+
+	TRACE(TRACE_PARAM, "iSCSI Parameter updated to %s=%s\n",
+			param->name, param->value);
+	return 0;
+}
+
+/*	iscsi_add_notunderstood_response():
+ *
+ *
+ */
+static int iscsi_add_notunderstood_response(
+	char *key,
+	char *value,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_extra_response *extra_response;
+
+	if (strlen(value) > MAX_KEY_VALUE_LENGTH) {
+		printk(KERN_ERR "Value for notunderstood key \"%s\" exceeds %d,"
+			" protocol error.\n", key, MAX_KEY_VALUE_LENGTH);
+		return -1;
+	}
+
+	extra_response = kzalloc(sizeof(struct iscsi_extra_response), GFP_KERNEL);
+	if (!(extra_response)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_extra_response.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&extra_response->er_list);
+
+	/*
+	 * Bound the copies by the destination size; copying strlen(key) + 1
+	 * bytes could overflow extra_response->key for an oversized key name.
+	 */
+	strlcpy(extra_response->key, key, sizeof(extra_response->key));
+	strlcpy(extra_response->value, NOTUNDERSTOOD,
+			sizeof(extra_response->value));
+
+	list_add_tail(&extra_response->er_list,
+			&param_list->extra_response_list);
+	return 0;
+}
+
+/*	iscsi_check_for_auth_key():
+ *
+ *
+ */
+static int iscsi_check_for_auth_key(char *key)
+{
+	/*
+	 * RFC 1994
+	 */
+	if (!strcmp(key, "CHAP_A") || !strcmp(key, "CHAP_I") ||
+	    !strcmp(key, "CHAP_C") || !strcmp(key, "CHAP_N") ||
+	    !strcmp(key, "CHAP_R"))
+		return 1;
+
+	/*
+	 * RFC 2945
+	 */
+	if (!strcmp(key, "SRP_U") || !strcmp(key, "SRP_N") ||
+	    !strcmp(key, "SRP_g") || !strcmp(key, "SRP_s") ||
+	    !strcmp(key, "SRP_A") || !strcmp(key, "SRP_B") ||
+	    !strcmp(key, "SRP_M") || !strcmp(key, "SRP_HM"))
+		return 1;
+
+	return 0;
+}
+
+/*	iscsi_check_proposer_for_optional_reply():
+ *
+ *
+ */
+static void iscsi_check_proposer_for_optional_reply(struct iscsi_param *param)
+{
+	if (IS_TYPE_BOOL_AND(param)) {
+		if (!strcmp(param->value, NO))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_TYPE_BOOL_OR(param)) {
+		if (!strcmp(param->value, YES))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		 /*
+		  * Required for gPXE iSCSI boot client
+		  */
+		if (!strcmp(param->name, IMMEDIATEDATA))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_TYPE_NUMBER(param)) {
+		if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		/*
+		 * The GlobalSAN iSCSI Initiator for MacOSX does
+		 * not respond to MaxBurstLength, FirstBurstLength,
+		 * DefaultTime2Wait or DefaultTime2Retain parameter keys.
+		 * So, we set them to 'reply optional' here, and assume
+		 * the defaults from iscsi_parameters.h if the initiator
+		 * is not RFC compliant and the keys are not negotiated.
+		 */
+		if (!strcmp(param->name, MAXBURSTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		if (!strcmp(param->name, FIRSTBURSTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		if (!strcmp(param->name, DEFAULTTIME2WAIT))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		if (!strcmp(param->name, DEFAULTTIME2RETAIN))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+		/*
+		 * Required for gPXE iSCSI boot client
+		 */
+		if (!strcmp(param->name, MAXCONNECTIONS))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_PHASE_DECLARATIVE(param))
+		SET_PSTATE_REPLY_OPTIONAL(param);
+}
+
+/*	iscsi_check_boolean_value():
+ *
+ *
+ */
+static int iscsi_check_boolean_value(struct iscsi_param *param, char *value)
+{
+	if (strcmp(value, YES) && strcmp(value, NO)) {
+		printk(KERN_ERR "Illegal value for \"%s\", must be either"
+			" \"%s\" or \"%s\".\n", param->name, YES, NO);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_numerical_value():
+ *
+ *
+ */
+static int iscsi_check_numerical_value(struct iscsi_param *param, char *value_ptr)
+{
+	char *tmpptr;
+	int value = 0;
+
+	value = simple_strtoul(value_ptr, &tmpptr, 0);
+
+/* #warning FIXME: Fix this */
+#if 0
+	if (strspn(endptr, WHITE_SPACE) != strlen(endptr)) {
+		printk(KERN_ERR "Illegal value \"%s\" for \"%s\".\n",
+			value, param->name);
+		return -1;
+	}
+#endif
+	if (IS_TYPERANGE_0_TO_2(param)) {
+		if ((value < 0) || (value > 2)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 2.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_0_TO_3600(param)) {
+		if ((value < 0) || (value > 3600)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 3600.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_0_TO_32767(param)) {
+		if ((value < 0) || (value > 32767)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 32767.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_0_TO_65535(param)) {
+		if ((value < 0) || (value > 65535)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 0 and 65535.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_1_TO_65535(param)) {
+		if ((value < 1) || (value > 65535)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 1 and 65535.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_2_TO_3600(param)) {
+		if ((value < 2) || (value > 3600)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 2 and 3600.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+	if (IS_TYPERANGE_512_TO_16777215(param)) {
+		if ((value < 512) || (value > 16777215)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" between 512 and 16777215.\n", param->name);
+			return -1;
+		}
+		return 0;
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_numerical_range_value():
+ *
+ *
+ */
+static int iscsi_check_numerical_range_value(struct iscsi_param *param, char *value)
+{
+	char *left_val_ptr = NULL, *right_val_ptr = NULL;
+	char *tilde_ptr = NULL, *tmp_ptr = NULL;
+	u32 left_val, right_val, local_left_val, local_right_val;
+
+	if ((strcmp(param->name, IFMARKINT)) &&
+			(strcmp(param->name, OFMARKINT))) {
+		printk(KERN_ERR "Only parameters \"%s\" or \"%s\" may contain a"
+			" numerical range value.\n", IFMARKINT, OFMARKINT);
+		return -1;
+	}
+
+	if (IS_PSTATE_PROPOSER(param))
+		return 0;
+
+	tilde_ptr = strchr(value, '~');
+	if (!(tilde_ptr)) {
+		printk(KERN_ERR "Unable to locate numerical range indicator"
+			" \"~\" for \"%s\".\n", param->name);
+		return -1;
+	}
+	*tilde_ptr = '\0';
+
+	left_val_ptr = value;
+	right_val_ptr = value + strlen(left_val_ptr) + 1;
+
+	if (iscsi_check_numerical_value(param, left_val_ptr) < 0)
+		return -1;
+	if (iscsi_check_numerical_value(param, right_val_ptr) < 0)
+		return -1;
+
+	left_val = simple_strtoul(left_val_ptr, &tmp_ptr, 0);
+	right_val = simple_strtoul(right_val_ptr, &tmp_ptr, 0);
+	*tilde_ptr = '~';
+
+	if (right_val < left_val) {
+		printk(KERN_ERR "Numerical range for parameter \"%s\" contains"
+			" a right value which is less than the left.\n",
+				param->name);
+		return -1;
+	}
+
+	/*
+	 * For now, enforce reasonable defaults for [I,O]FMarkInt.
+	 */
+	tilde_ptr = strchr(param->value, '~');
+	if (!(tilde_ptr)) {
+		printk(KERN_ERR "Unable to locate numerical range indicator"
+			" \"~\" for \"%s\".\n", param->name);
+		return -1;
+	}
+	*tilde_ptr = '\0';
+
+	left_val_ptr = param->value;
+	right_val_ptr = param->value + strlen(left_val_ptr) + 1;
+
+	local_left_val = simple_strtoul(left_val_ptr, &tmp_ptr, 0);
+	local_right_val = simple_strtoul(right_val_ptr, &tmp_ptr, 0);
+	*tilde_ptr = '~';
+
+	if (param->set_param) {
+		if ((left_val < local_left_val) ||
+		    (right_val < local_left_val)) {
+			printk(KERN_ERR "Passed value range \"%u~%u\" is below"
+				" minimum left value \"%u\" for key \"%s\","
+				" rejecting.\n", left_val, right_val,
+				local_left_val, param->name);
+			return -1;
+		}
+	} else {
+		if ((left_val < local_left_val) &&
+		    (right_val < local_left_val)) {
+			printk(KERN_ERR "Received value range \"%u~%u\" is"
+				" below minimum left value \"%u\" for key"
+				" \"%s\", rejecting.\n", left_val, right_val,
+				local_left_val, param->name);
+			SET_PSTATE_REJECT(param);
+			if (iscsi_update_param_value(param, REJECT) < 0)
+				return -1;
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_string_or_list_value():
+ *
+ *
+ */
+static int iscsi_check_string_or_list_value(struct iscsi_param *param, char *value)
+{
+	if (IS_PSTATE_PROPOSER(param))
+		return 0;
+
+	if (IS_TYPERANGE_AUTH_PARAM(param)) {
+		if (strcmp(value, KRB5) && strcmp(value, SPKM1) &&
+		    strcmp(value, SPKM2) && strcmp(value, SRP) &&
+		    strcmp(value, CHAP) && strcmp(value, NONE)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" \"%s\", \"%s\", \"%s\", \"%s\", \"%s\""
+				" or \"%s\".\n", param->name, KRB5,
+					SPKM1, SPKM2, SRP, CHAP, NONE);
+			return -1;
+		}
+	}
+	if (IS_TYPERANGE_DIGEST_PARAM(param)) {
+		if (strcmp(value, CRC32C) && strcmp(value, NONE)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" \"%s\" or \"%s\".\n", param->name,
+					CRC32C, NONE);
+			return -1;
+		}
+	}
+	if (IS_TYPERANGE_SESSIONTYPE(param)) {
+		if (strcmp(value, DISCOVERY) && strcmp(value, NORMAL)) {
+			printk(KERN_ERR "Illegal value for \"%s\", must be"
+				" \"%s\" or \"%s\".\n", param->name,
+					DISCOVERY, NORMAL);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_get_value_from_number_range():
+ *
+ *
+ *	Picks a value from a numerical range offer; currently it simply
+ *	returns the lesser of the two right-hand values.
+static char *iscsi_get_value_from_number_range(
+	struct iscsi_param *param,
+	char *value)
+{
+	char *end_ptr, *tilde_ptr1 = NULL, *tilde_ptr2 = NULL;
+	u32 acceptor_right_value, proposer_right_value;
+
+	tilde_ptr1 = strchr(value, '~');
+	if (!(tilde_ptr1))
+		return NULL;
+	*tilde_ptr1++ = '\0';
+	proposer_right_value = simple_strtoul(tilde_ptr1, &end_ptr, 0);
+
+	tilde_ptr2 = strchr(param->value, '~');
+	if (!(tilde_ptr2))
+		return NULL;
+	*tilde_ptr2++ = '\0';
+	acceptor_right_value = simple_strtoul(tilde_ptr2, &end_ptr, 0);
+
+	return (acceptor_right_value >= proposer_right_value) ?
+		tilde_ptr1 : tilde_ptr2;
+}
+
+/*	iscsi_check_valuelist_for_support():
+ *
+ *
+ */
+static char *iscsi_check_valuelist_for_support(
+	struct iscsi_param *param,
+	char *value)
+{
+	char *tmp1 = NULL, *tmp2 = NULL;
+	char *acceptor_values = NULL, *proposer_values = NULL;
+
+	acceptor_values = param->value;
+	proposer_values = value;
+
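+	/*
+	 * Walk the proposer's comma separated value list and return the
+	 * first entry that also appears in the acceptor's list, or NULL
+	 * if the two lists share no common value.
+	 */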
+	do {
+		if (!proposer_values)
+			return NULL;
+		tmp1 = strchr(proposer_values, ',');
+		if ((tmp1))
+			*tmp1 = '\0';
+		acceptor_values = param->value;
+		do {
+			if (!acceptor_values) {
+				if (tmp1)
+					*tmp1 = ',';
+				return NULL;
+			}
+			tmp2 = strchr(acceptor_values, ',');
+			if ((tmp2))
+				*tmp2 = '\0';
+			if (!acceptor_values || !proposer_values) {
+				if (tmp1)
+					*tmp1 = ',';
+				if (tmp2)
+					*tmp2 = ',';
+				return NULL;
+			}
+			if (!strcmp(acceptor_values, proposer_values)) {
+				if (tmp2)
+					*tmp2 = ',';
+				goto out;
+			}
+			if (tmp2)
+				*tmp2++ = ',';
+
+			acceptor_values = tmp2;
+			if (!acceptor_values)
+				break;
+		} while (acceptor_values);
+		if (tmp1)
+			*tmp1++ = ',';
+		proposer_values = tmp1;
+	} while (proposer_values);
+
+out:
+	return proposer_values;
+}
+
+/*	iscsi_check_acceptor_state():
+ *
+ *
+ */
+static int iscsi_check_acceptor_state(struct iscsi_param *param, char *value)
+{
+	u8 acceptor_boolean_value = 0, proposer_boolean_value = 0;
+	char *negotiated_value = NULL;
+
+	if (IS_PSTATE_ACCEPTOR(param)) {
+		printk(KERN_ERR "Received key \"%s\" twice, protocol error.\n",
+				param->name);
+		return -1;
+	}
+
+	if (IS_PSTATE_REJECT(param))
+		return 0;
+
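+	/*
+	 * Boolean AND keys resolve to Yes only if both sides offer Yes;
+	 * Boolean OR keys resolve to Yes if either side offers Yes.
+	 * Numerical keys take the minimum or maximum of the two offers
+	 * depending on the key, and value lists pick the first mutually
+	 * supported entry.
+	 */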
+	if (IS_TYPE_BOOL_AND(param)) {
+		if (!strcmp(value, YES))
+			proposer_boolean_value = 1;
+		if (!strcmp(param->value, YES))
+			acceptor_boolean_value = 1;
+		if (!acceptor_boolean_value || !proposer_boolean_value) {
+			if (iscsi_update_param_value(param, NO) < 0)
+				return -1;
+			if (!proposer_boolean_value)
+				SET_PSTATE_REPLY_OPTIONAL(param);
+		}
+	} else if (IS_TYPE_BOOL_OR(param)) {
+		if (!strcmp(value, YES))
+			proposer_boolean_value = 1;
+		if (!strcmp(param->value, YES))
+			acceptor_boolean_value = 1;
+		if (acceptor_boolean_value || proposer_boolean_value) {
+			if (iscsi_update_param_value(param, YES) < 0)
+				return -1;
+			if (proposer_boolean_value)
+				SET_PSTATE_REPLY_OPTIONAL(param);
+		}
+	} else if (IS_TYPE_NUMBER(param)) {
+		char *tmpptr, buf[10];
+		u32 acceptor_value = simple_strtoul(param->value, &tmpptr, 0);
+		u32 proposer_value = simple_strtoul(value, &tmpptr, 0);
+
+		memset(buf, 0, 10);
+
+		if (!strcmp(param->name, MAXCONNECTIONS) ||
+		    !strcmp(param->name, MAXBURSTLENGTH) ||
+		    !strcmp(param->name, FIRSTBURSTLENGTH) ||
+		    !strcmp(param->name, MAXOUTSTANDINGR2T) ||
+		    !strcmp(param->name, DEFAULTTIME2RETAIN) ||
+		    !strcmp(param->name, ERRORRECOVERYLEVEL)) {
+			if (proposer_value > acceptor_value) {
+				sprintf(buf, "%u", acceptor_value);
+				if (iscsi_update_param_value(param,
+						&buf[0]) < 0)
+					return -1;
+			} else {
+				if (iscsi_update_param_value(param, value) < 0)
+					return -1;
+			}
+		} else if (!strcmp(param->name, DEFAULTTIME2WAIT)) {
+			if (acceptor_value > proposer_value) {
+				sprintf(buf, "%u", acceptor_value);
+				if (iscsi_update_param_value(param,
+						&buf[0]) < 0)
+					return -1;
+			} else {
+				if (iscsi_update_param_value(param, value) < 0)
+					return -1;
+			}
+		} else {
+			if (iscsi_update_param_value(param, value) < 0)
+				return -1;
+		}
+
+		if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH))
+			SET_PSTATE_REPLY_OPTIONAL(param);
+	} else if (IS_TYPE_NUMBER_RANGE(param)) {
+		negotiated_value = iscsi_get_value_from_number_range(
+					param, value);
+		if (!(negotiated_value))
+			return -1;
+		if (iscsi_update_param_value(param, negotiated_value) < 0)
+			return -1;
+	} else if (IS_TYPE_VALUE_LIST(param)) {
+		negotiated_value = iscsi_check_valuelist_for_support(
+					param, value);
+		if (!(negotiated_value)) {
+			printk(KERN_ERR "Proposer's value list \"%s\" contains"
+				" no valid values from Acceptor's value list"
+				" \"%s\".\n", value, param->value);
+			return -1;
+		}
+		if (iscsi_update_param_value(param, negotiated_value) < 0)
+			return -1;
+	} else if (IS_PHASE_DECLARATIVE(param)) {
+		if (iscsi_update_param_value(param, value) < 0)
+			return -1;
+		SET_PSTATE_REPLY_OPTIONAL(param);
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_proposer_state():
+ *
+ *
+ */
+static int iscsi_check_proposer_state(struct iscsi_param *param, char *value)
+{
+	if (IS_PSTATE_RESPONSE_GOT(param)) {
+		printk(KERN_ERR "Received key \"%s\" twice, protocol error.\n",
+				param->name);
+		return -1;
+	}
+
+	if (IS_TYPE_NUMBER_RANGE(param)) {
+		u32 left_val = 0, right_val = 0, received_value = 0;
+		char *left_val_ptr = NULL, *right_val_ptr = NULL;
+		char *tilde_ptr = NULL, *tmp_ptr = NULL;
+
+		if (!strcmp(value, IRRELEVANT) || !strcmp(value, REJECT)) {
+			if (iscsi_update_param_value(param, value) < 0)
+				return -1;
+			return 0;
+		}
+
+		tilde_ptr = strchr(value, '~');
+		if ((tilde_ptr)) {
+			printk(KERN_ERR "Illegal \"~\" in response for \"%s\".\n",
+					param->name);
+			return -1;
+		}
+		tilde_ptr = strchr(param->value, '~');
+		if (!(tilde_ptr)) {
+			printk(KERN_ERR "Unable to locate numerical range"
+				" indicator \"~\" for \"%s\".\n", param->name);
+			return -1;
+		}
+		*tilde_ptr = '\0';
+
+		left_val_ptr = param->value;
+		right_val_ptr = param->value + strlen(left_val_ptr) + 1;
+		left_val = simple_strtoul(left_val_ptr, &tmp_ptr, 0);
+		right_val = simple_strtoul(right_val_ptr, &tmp_ptr, 0);
+		received_value = simple_strtoul(value, &tmp_ptr, 0);
+
+		*tilde_ptr = '~';
+
+		if ((received_value < left_val) ||
+		    (received_value > right_val)) {
+			printk(KERN_ERR "Illegal response \"%s=%u\", value must"
+				" be between %u and %u.\n", param->name,
+				received_value, left_val, right_val);
+			return -1;
+		}
+	} else if (IS_TYPE_VALUE_LIST(param)) {
+		char *comma_ptr = NULL, *tmp_ptr = NULL;
+
+		comma_ptr = strchr(value, ',');
+		if ((comma_ptr)) {
+			printk(KERN_ERR "Illegal \",\" in response for \"%s\".\n",
+					param->name);
+			return -1;
+		}
+
+		tmp_ptr = iscsi_check_valuelist_for_support(param, value);
+		if (!(tmp_ptr))
+			return -1;
+	}
+
+	if (iscsi_update_param_value(param, value) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_check_value():
+ *
+ *
+ */
+static int iscsi_check_value(struct iscsi_param *param, char *value)
+{
+	char *comma_ptr = NULL;
+
+	if (!strcmp(value, REJECT)) {
+		if (!strcmp(param->name, IFMARKINT) ||
+		    !strcmp(param->name, OFMARKINT)) {
+			/*
+			 * Reject is not fatal for [I,O]FMarkInt, and causes
+			 * [I,O]FMarker to be reset to No. (See iSCSI v20 A.3.2)
+			 */
+			SET_PSTATE_REJECT(param);
+			return 0;
+		}
+		printk(KERN_ERR "Received %s=%s\n", param->name, value);
+		return -1;
+	}
+	if (!strcmp(value, IRRELEVANT)) {
+		TRACE(TRACE_LOGIN, "Received %s=%s\n", param->name, value);
+		SET_PSTATE_IRRELEVANT(param);
+		return 0;
+	}
+	if (!strcmp(value, NOTUNDERSTOOD)) {
+		if (!IS_PSTATE_PROPOSER(param)) {
+			printk(KERN_ERR "Received illegal offer %s=%s\n",
+				param->name, value);
+			return -1;
+		}
+
+/* #warning FIXME: Add check for X-ExtensionKey here */
+		printk(KERN_ERR "Standard iSCSI key \"%s\" cannot be answered"
+			" with \"%s\", protocol error.\n", param->name, value);
+		return -1;
+	}
+
+	do {
+		comma_ptr = NULL;
+		comma_ptr = strchr(value, ',');
+
+		if (comma_ptr && !IS_TYPE_VALUE_LIST(param)) {
+			printk(KERN_ERR "Detected value separator \",\", but"
+				" key \"%s\" does not allow a value list,"
+				" protocol error.\n", param->name);
+			return -1;
+		}
+		if (comma_ptr)
+			*comma_ptr = '\0';
+
+		if (strlen(value) > MAX_KEY_VALUE_LENGTH) {
+			printk(KERN_ERR "Value for key \"%s\" exceeds %d,"
+				" protocol error.\n", param->name,
+				MAX_KEY_VALUE_LENGTH);
+			return -1;
+		}
+
+		if (IS_TYPE_BOOL_AND(param) || IS_TYPE_BOOL_OR(param)) {
+			if (iscsi_check_boolean_value(param, value) < 0)
+				return -1;
+		} else if (IS_TYPE_NUMBER(param)) {
+			if (iscsi_check_numerical_value(param, value) < 0)
+				return -1;
+		} else if (IS_TYPE_NUMBER_RANGE(param)) {
+			if (iscsi_check_numerical_range_value(param, value) < 0)
+				return -1;
+		} else if (IS_TYPE_STRING(param) || IS_TYPE_VALUE_LIST(param)) {
+			if (iscsi_check_string_or_list_value(param, value) < 0)
+				return -1;
+		} else {
+			printk(KERN_ERR "Unknown type 0x%02x for key \"%s\".\n",
+				param->type, param->name);
+			return -1;
+		}
+
+		if (comma_ptr)
+			*comma_ptr++ = ',';
+
+		value = comma_ptr;
+	} while (value);
+
+	return 0;
+}
+
+/*	__iscsi_check_key()
+ *
+ *
+ */
+static struct iscsi_param *__iscsi_check_key(
+	char *key,
+	int sender,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	if (strlen(key) > MAX_KEY_NAME_LENGTH) {
+		printk(KERN_ERR "Length of key name \"%s\" exceeds %d.\n",
+			key, MAX_KEY_NAME_LENGTH);
+		return NULL;
+	}
+
+	param = iscsi_find_param_from_key(key, param_list);
+	if (!(param))
+		return NULL;
+
+	if ((sender & SENDER_INITIATOR) && !IS_SENDER_INITIATOR(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+			" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "target" : "initiator");
+		return NULL;
+	}
+
+	if ((sender & SENDER_TARGET) && !IS_SENDER_TARGET(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+			" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "initiator" : "target");
+		return NULL;
+	}
+
+	return param;
+}
+
+/*	iscsi_check_key():
+ *
+ *
+ */
+static struct iscsi_param *iscsi_check_key(
+	char *key,
+	int phase,
+	int sender,
+	struct iscsi_param_list *param_list)
+{
+	struct iscsi_param *param;
+
+	/*
+	 * Key name length must not exceed 63 bytes. (See iSCSI v20 5.1)
+	 */
+	if (strlen(key) > MAX_KEY_NAME_LENGTH) {
+		printk(KERN_ERR "Length of key name \"%s\" exceeds %d.\n",
+			key, MAX_KEY_NAME_LENGTH);
+		return NULL;
+	}
+
+	param = iscsi_find_param_from_key(key, param_list);
+	if (!(param))
+		return NULL;
+
+	if ((sender & SENDER_INITIATOR) && !IS_SENDER_INITIATOR(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+			" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "target" : "initiator");
+		return NULL;
+	}
+	if ((sender & SENDER_TARGET) && !IS_SENDER_TARGET(param)) {
+		printk(KERN_ERR "Key \"%s\" may not be sent to %s,"
+				" protocol error.\n", param->name,
+			(sender & SENDER_RECEIVER) ? "initiator" : "target");
+		return NULL;
+	}
+
+	if (IS_PSTATE_ACCEPTOR(param)) {
+		printk(KERN_ERR "Key \"%s\" received twice, protocol error.\n",
+				key);
+		return NULL;
+	}
+
+	if (!phase)
+		return param;
+
+	if (!(param->phase & phase)) {
+		printk(KERN_ERR "Key \"%s\" may not be negotiated during ",
+				param->name);
+		switch (phase) {
+		case PHASE_SECURITY:
+			printk(KERN_INFO "Security phase.\n");
+			break;
+		case PHASE_OPERATIONAL:
+			printk(KERN_INFO "Operational phase.\n");
+			break;
+		default:
+			printk(KERN_INFO "Unknown phase.\n");
+		}
+		return NULL;
+	}
+
+	return param;
+}
+
+/*	iscsi_enforce_integrity_rules():
+ *
+ *
+ */
+static int iscsi_enforce_integrity_rules(
+	u8 phase,
+	struct iscsi_param_list *param_list)
+{
+	char *tmpptr;
+	u8 DataSequenceInOrder = 0;
+	u8 ErrorRecoveryLevel = 0, SessionType = 0;
+	u8 IFMarker = 0, OFMarker = 0;
+	u8 IFMarkInt_Reject = 0, OFMarkInt_Reject = 0;
+	u32 FirstBurstLength = 0, MaxBurstLength = 0;
+	struct iscsi_param *param = NULL;
+
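+	/*
+	 * First pass: collect the values the dependency checks below need.
+	 * Second pass: enforce the inter-key rules, e.g. MaxOutstandingR2T
+	 * forced to 1 when DataSequenceInOrder=Yes and ERL > 0,
+	 * MaxConnections forced to 1 for discovery sessions,
+	 * FirstBurstLength capped at MaxBurstLength, and the marker keys
+	 * reset when marking is rejected or disabled.
+	 */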
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!(param->phase & phase))
+			continue;
+		if (!strcmp(param->name, SESSIONTYPE))
+			if (!strcmp(param->value, NORMAL))
+				SessionType = 1;
+		if (!strcmp(param->name, ERRORRECOVERYLEVEL))
+			ErrorRecoveryLevel = simple_strtoul(param->value,
+					&tmpptr, 0);
+		if (!strcmp(param->name, DATASEQUENCEINORDER))
+			if (!strcmp(param->value, YES))
+				DataSequenceInOrder = 1;
+		if (!strcmp(param->name, MAXBURSTLENGTH))
+			MaxBurstLength = simple_strtoul(param->value,
+					&tmpptr, 0);
+		if (!strcmp(param->name, IFMARKER))
+			if (!strcmp(param->value, YES))
+				IFMarker = 1;
+		if (!strcmp(param->name, OFMARKER))
+			if (!strcmp(param->value, YES))
+				OFMarker = 1;
+		if (!strcmp(param->name, IFMARKINT))
+			if (!strcmp(param->value, REJECT))
+				IFMarkInt_Reject = 1;
+		if (!strcmp(param->name, OFMARKINT))
+			if (!strcmp(param->value, REJECT))
+				OFMarkInt_Reject = 1;
+	}
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!(param->phase & phase))
+			continue;
+		if (!SessionType && (!IS_PSTATE_ACCEPTOR(param) &&
+		     (strcmp(param->name, IFMARKER) &&
+		      strcmp(param->name, OFMARKER) &&
+		      strcmp(param->name, IFMARKINT) &&
+		      strcmp(param->name, OFMARKINT))))
+			continue;
+		if (!strcmp(param->name, MAXOUTSTANDINGR2T) &&
+		    DataSequenceInOrder && (ErrorRecoveryLevel > 0)) {
+			if (strcmp(param->value, "1")) {
+				if (iscsi_update_param_value(param, "1") < 0)
+					return -1;
+				TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+			}
+		}
+		if (!strcmp(param->name, MAXCONNECTIONS) && !SessionType) {
+			if (strcmp(param->value, "1")) {
+				if (iscsi_update_param_value(param, "1") < 0)
+					return -1;
+				TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+			}
+		}
+		if (!strcmp(param->name, FIRSTBURSTLENGTH)) {
+			FirstBurstLength = simple_strtoul(param->value,
+					&tmpptr, 0);
+			if (FirstBurstLength > MaxBurstLength) {
+				char tmpbuf[10];
+				memset(tmpbuf, 0, 10);
+				sprintf(tmpbuf, "%u", MaxBurstLength);
+				if (iscsi_update_param_value(param, tmpbuf))
+					return -1;
+				TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+			}
+		}
+		if (!strcmp(param->name, IFMARKER) && IFMarkInt_Reject) {
+			if (iscsi_update_param_value(param, NO) < 0)
+				return -1;
+			IFMarker = 0;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+		}
+		if (!strcmp(param->name, OFMARKER) && OFMarkInt_Reject) {
+			if (iscsi_update_param_value(param, NO) < 0)
+				return -1;
+			OFMarker = 0;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					 param->name, param->value);
+		}
+		if (!strcmp(param->name, IFMARKINT) && !IFMarker) {
+			if (!strcmp(param->value, REJECT))
+				continue;
+			param->state &= ~PSTATE_NEGOTIATE;
+			if (iscsi_update_param_value(param, IRRELEVANT) < 0)
+				return -1;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+		}
+		if (!strcmp(param->name, OFMARKINT) && !OFMarker) {
+			if (!strcmp(param->value, REJECT))
+				continue;
+			param->state &= ~PSTATE_NEGOTIATE;
+			if (iscsi_update_param_value(param, IRRELEVANT) < 0)
+				return -1;
+			TRACE(TRACE_PARAM, "Reset \"%s\" to \"%s\".\n",
+					param->name, param->value);
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_decode_text_input():
+ *
+ *
+ */
+int iscsi_decode_text_input(
+	u8 phase,
+	u8 sender,
+	char *textbuf,
+	u32 length,
+	struct iscsi_param_list *param_list)
+{
+	char *tmpbuf, *start = NULL, *end = NULL;
+
+	tmpbuf = kzalloc(length + 1, GFP_KERNEL);
+	if (!(tmpbuf)) {
+		printk(KERN_ERR "Unable to allocate memory for tmpbuf.\n");
+		return -1;
+	}
+
+	memcpy(tmpbuf, textbuf, length);
+	tmpbuf[length] = '\0';
+	start = tmpbuf;
+	end = (start + length);
+
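+	/*
+	 * The login/text payload is a sequence of NUL-terminated
+	 * "key=value" pairs; walk each pair, validate it against the
+	 * parameter list, and update the negotiation state accordingly.
+	 */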
+	while (start < end) {
+		char *key, *value;
+		struct iscsi_param *param;
+
+		if (iscsi_extract_key_value(start, &key, &value) < 0) {
+			kfree(tmpbuf);
+			return -1;
+		}
+
+		TRACE(TRACE_PARAM, "Got key: %s=%s\n", key, value);
+
+		if (phase & PHASE_SECURITY) {
+			if (iscsi_check_for_auth_key(key) > 0) {
+				char *tmpptr = key + strlen(key);
+				*tmpptr = '=';
+				kfree(tmpbuf);
+				return 1;
+			}
+		}
+
+		param = iscsi_check_key(key, phase, sender, param_list);
+		if (!(param)) {
+			if (iscsi_add_notunderstood_response(key,
+					value, param_list) < 0) {
+				kfree(tmpbuf);
+				return -1;
+			}
+			start += strlen(key) + strlen(value) + 2;
+			continue;
+		}
+		if (iscsi_check_value(param, value) < 0) {
+			kfree(tmpbuf);
+			return -1;
+		}
+
+		start += strlen(key) + strlen(value) + 2;
+
+		if (IS_PSTATE_PROPOSER(param)) {
+			if (iscsi_check_proposer_state(param, value) < 0) {
+				kfree(tmpbuf);
+				return -1;
+			}
+			SET_PSTATE_RESPONSE_GOT(param);
+		} else {
+			if (iscsi_check_acceptor_state(param, value) < 0) {
+				kfree(tmpbuf);
+				return -1;
+			}
+			SET_PSTATE_ACCEPTOR(param);
+		}
+	}
+
+	kfree(tmpbuf);
+	return 0;
+}
+
+/*	iscsi_encode_text_output():
+ *
+ *
+ */
+int iscsi_encode_text_output(
+	u8 phase,
+	u8 sender,
+	char *textbuf,
+	u32 *length,
+	struct iscsi_param_list *param_list)
+{
+	char *output_buf = NULL;
+	struct iscsi_extra_response *er;
+	struct iscsi_param *param;
+
+	output_buf = textbuf + *length;
+
+	if (iscsi_enforce_integrity_rules(phase, param_list) < 0)
+		return -1;
+
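+	/*
+	 * Emit responses for keys accepted from the peer that still need
+	 * a reply, then propose any keys marked for negotiation that have
+	 * not been sent yet; each "key=value" pair is NUL-terminated.
+	 */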
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!(param->sender & sender))
+			continue;
+		if (IS_PSTATE_ACCEPTOR(param) &&
+		    !IS_PSTATE_RESPONSE_SENT(param) &&
+		    !IS_PSTATE_REPLY_OPTIONAL(param) &&
+		    (param->phase & phase)) {
+			*length += sprintf(output_buf, "%s=%s",
+				param->name, param->value);
+			*length += 1;
+			output_buf = textbuf + *length;
+			SET_PSTATE_RESPONSE_SENT(param);
+			TRACE(TRACE_PARAM, "Sending key: %s=%s\n",
+				param->name, param->value);
+			continue;
+		}
+		if (IS_PSTATE_NEGOTIATE(param) &&
+		    !IS_PSTATE_ACCEPTOR(param) &&
+		    !IS_PSTATE_PROPOSER(param) &&
+		    (param->phase & phase)) {
+			*length += sprintf(output_buf, "%s=%s",
+				param->name, param->value);
+			*length += 1;
+			output_buf = textbuf + *length;
+			SET_PSTATE_PROPOSER(param);
+			iscsi_check_proposer_for_optional_reply(param);
+			TRACE(TRACE_PARAM, "Sending key: %s=%s\n",
+				param->name, param->value);
+		}
+	}
+
+	list_for_each_entry(er, &param_list->extra_response_list, er_list) {
+		*length += sprintf(output_buf, "%s=%s", er->key, er->value);
+		*length += 1;
+		output_buf = textbuf + *length;
+		TRACE(TRACE_PARAM, "Sending key: %s=%s\n", er->key, er->value);
+	}
+	iscsi_release_extra_responses(param_list);
+
+	return 0;
+}
+
+/*	iscsi_check_negotiated_keys():
+ *
+ *
+ */
+int iscsi_check_negotiated_keys(struct iscsi_param_list *param_list)
+{
+	int ret = 0;
+	struct iscsi_param *param;
+
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (IS_PSTATE_NEGOTIATE(param) &&
+		    IS_PSTATE_PROPOSER(param) &&
+		    !IS_PSTATE_RESPONSE_GOT(param) &&
+		    !IS_PSTATE_REPLY_OPTIONAL(param) &&
+		    !IS_PHASE_DECLARATIVE(param)) {
+			printk(KERN_ERR "No response for proposed key \"%s\".\n",
+					param->name);
+			ret = -1;
+		}
+	}
+
+	return ret;
+}
+
+/*	iscsi_change_param_value():
+ *
+ *	Update a parameter in the given list from a "key=value" string,
+ *	optionally validating the key and value first.
+ */
+int iscsi_change_param_value(
+	char *keyvalue,
+	int sender,
+	struct iscsi_param_list *param_list,
+	int check_key)
+{
+	char *key = NULL, *value = NULL;
+	struct iscsi_param *param;
+
+	if (iscsi_extract_key_value(keyvalue, &key, &value) < 0)
+		return -1;
+
+	if (!check_key) {
+		param = __iscsi_check_key(keyvalue, sender, param_list);
+		if (!(param))
+			return -1;
+	} else {
+		param = iscsi_check_key(keyvalue, 0, sender, param_list);
+		if (!(param))
+			return -1;
+
+		param->set_param = 1;
+		if (iscsi_check_value(param, value) < 0) {
+			param->set_param = 0;
+			return -1;
+		}
+		param->set_param = 0;
+	}
+
+	if (iscsi_update_param_value(param, value) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_set_connection_parameters():
+ *
+ *
+ */
+void iscsi_set_connection_parameters(
+	struct iscsi_conn_ops *ops,
+	struct iscsi_param_list *param_list)
+{
+	char *tmpptr;
+	struct iscsi_param *param;
+
+	printk(KERN_INFO "---------------------------------------------------"
+			"---------------\n");
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!IS_PSTATE_ACCEPTOR(param) && !IS_PSTATE_PROPOSER(param))
+			continue;
+		if (!strcmp(param->name, AUTHMETHOD)) {
+			printk(KERN_INFO "AuthMethod:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, HEADERDIGEST)) {
+			ops->HeaderDigest = !strcmp(param->value, CRC32C);
+			printk(KERN_INFO "HeaderDigest:                 %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DATADIGEST)) {
+			ops->DataDigest = !strcmp(param->value, CRC32C);
+			printk(KERN_INFO "DataDigest:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH)) {
+			ops->MaxRecvDataSegmentLength =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxRecvDataSegmentLength:     %s\n",
+				param->value);
+		} else if (!strcmp(param->name, OFMARKER)) {
+			ops->OFMarker = !strcmp(param->value, YES);
+			printk(KERN_INFO "OFMarker:                     %s\n",
+				param->value);
+		} else if (!strcmp(param->name, IFMARKER)) {
+			ops->IFMarker = !strcmp(param->value, YES);
+			printk(KERN_INFO "IFMarker:                     %s\n",
+				param->value);
+		} else if (!strcmp(param->name, OFMARKINT)) {
+			ops->OFMarkInt =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "OFMarkInt:                    %s\n",
+				param->value);
+		} else if (!strcmp(param->name, IFMARKINT)) {
+			ops->IFMarkInt =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "IFMarkInt:                    %s\n",
+				param->value);
+		}
+	}
+	printk(KERN_INFO "----------------------------------------------------"
+			"--------------\n");
+}
+
+/*	iscsi_set_session_parameters():
+ *
+ *
+ */
+void iscsi_set_session_parameters(
+	struct iscsi_sess_ops *ops,
+	struct iscsi_param_list *param_list,
+	int leading)
+{
+	char *tmpptr;
+	struct iscsi_param *param;
+
+	printk(KERN_INFO "----------------------------------------------------"
+			"--------------\n");
+	list_for_each_entry(param, &param_list->param_list, p_list) {
+		if (!IS_PSTATE_ACCEPTOR(param) && !IS_PSTATE_PROPOSER(param))
+			continue;
+		if (!strcmp(param->name, INITIATORNAME)) {
+			if (!param->value)
+				continue;
+			if (leading)
+				snprintf(ops->InitiatorName,
+						sizeof(ops->InitiatorName),
+						"%s", param->value);
+			printk(KERN_INFO "InitiatorName:                %s\n",
+				param->value);
+		} else if (!strcmp(param->name, INITIATORALIAS)) {
+			if (!param->value)
+				continue;
+			snprintf(ops->InitiatorAlias,
+						sizeof(ops->InitiatorAlias),
+						"%s", param->value);
+			printk(KERN_INFO "InitiatorAlias:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, TARGETNAME)) {
+			if (!param->value)
+				continue;
+			if (leading)
+				snprintf(ops->TargetName,
+						sizeof(ops->TargetName),
+						"%s", param->value);
+			printk(KERN_INFO "TargetName:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, TARGETALIAS)) {
+			if (!param->value)
+				continue;
+			snprintf(ops->TargetAlias, sizeof(ops->TargetAlias),
+					"%s", param->value);
+			printk(KERN_INFO "TargetAlias:                  %s\n",
+				param->value);
+		} else if (!strcmp(param->name, TARGETPORTALGROUPTAG)) {
+			ops->TargetPortalGroupTag =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "TargetPortalGroupTag:         %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXCONNECTIONS)) {
+			ops->MaxConnections =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxConnections:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, INITIALR2T)) {
+			ops->InitialR2T = !strcmp(param->value, YES);
+			printk(KERN_INFO "InitialR2T:                   %s\n",
+				param->value);
+		} else if (!strcmp(param->name, IMMEDIATEDATA)) {
+			ops->ImmediateData = !strcmp(param->value, YES);
+			printk(KERN_INFO "ImmediateData:                %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXBURSTLENGTH)) {
+			ops->MaxBurstLength =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxBurstLength:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, FIRSTBURSTLENGTH)) {
+			ops->FirstBurstLength =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "FirstBurstLength:             %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DEFAULTTIME2WAIT)) {
+			ops->DefaultTime2Wait =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "DefaultTime2Wait:             %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DEFAULTTIME2RETAIN)) {
+			ops->DefaultTime2Retain =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "DefaultTime2Retain:           %s\n",
+				param->value);
+		} else if (!strcmp(param->name, MAXOUTSTANDINGR2T)) {
+			ops->MaxOutstandingR2T =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "MaxOutstandingR2T:            %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DATAPDUINORDER)) {
+			ops->DataPDUInOrder = !strcmp(param->value, YES);
+			printk(KERN_INFO "DataPDUInOrder:               %s\n",
+				param->value);
+		} else if (!strcmp(param->name, DATASEQUENCEINORDER)) {
+			ops->DataSequenceInOrder = !strcmp(param->value, YES);
+			printk(KERN_INFO "DataSequenceInOrder:          %s\n",
+				param->value);
+		} else if (!strcmp(param->name, ERRORRECOVERYLEVEL)) {
+			ops->ErrorRecoveryLevel =
+				simple_strtoul(param->value, &tmpptr, 0);
+			printk(KERN_INFO "ErrorRecoveryLevel:           %s\n",
+				param->value);
+		} else if (!strcmp(param->name, SESSIONTYPE)) {
+			ops->SessionType = !strcmp(param->value, DISCOVERY);
+			printk(KERN_INFO "SessionType:                  %s\n",
+				param->value);
+		}
+	}
+	printk(KERN_INFO "----------------------------------------------------"
+			"--------------\n");
+
+}
+
diff --git a/drivers/target/iscsi/iscsi_parameters.h b/drivers/target/iscsi/iscsi_parameters.h
new file mode 100644
index 0000000..df1de37
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_parameters.h
@@ -0,0 +1,271 @@
+#ifndef ISCSI_PARAMETERS_H
+#define ISCSI_PARAMETERS_H
+
+struct iscsi_extra_response {
+	char key[64];
+	char value[32];
+	struct list_head er_list;
+} ____cacheline_aligned;
+
+struct iscsi_param {
+	char *name;
+	char *value;
+	u8 set_param;
+	u8 phase;
+	u8 scope;
+	u8 sender;
+	u8 type;
+	u8 use;
+	u16 type_range;
+	u32 state;
+	struct list_head p_list;
+} ____cacheline_aligned;
+
+extern struct iscsi_global *iscsi_global;
+
+extern int iscsi_login_rx_data(struct iscsi_conn *, char *, int, int);
+extern int iscsi_login_tx_data(struct iscsi_conn *, char *, char *, int, int);
+extern void iscsi_dump_conn_ops(struct iscsi_conn_ops *);
+extern void iscsi_dump_sess_ops(struct iscsi_sess_ops *);
+extern void iscsi_print_params(struct iscsi_param_list *);
+extern int iscsi_create_default_params(struct iscsi_param_list **);
+extern int iscsi_set_keys_to_negotiate(int, int, struct iscsi_param_list *);
+extern int iscsi_set_keys_irrelevant_for_discovery(struct iscsi_param_list *);
+extern int iscsi_copy_param_list(struct iscsi_param_list **,
+			struct iscsi_param_list *, int);
+extern int iscsi_change_param_value(char *, int, struct iscsi_param_list *, int);
+extern void iscsi_release_param_list(struct iscsi_param_list *);
+extern struct iscsi_param *iscsi_find_param_from_key(char *, struct iscsi_param_list *);
+extern int iscsi_extract_key_value(char *, char **, char **);
+extern int iscsi_update_param_value(struct iscsi_param *, char *);
+extern int iscsi_decode_text_input(u8, u8, char *, u32, struct iscsi_param_list *);
+extern int iscsi_encode_text_output(u8, u8, char *, u32 *,
+			struct iscsi_param_list *);
+extern int iscsi_check_negotiated_keys(struct iscsi_param_list *);
+extern void iscsi_set_connection_parameters(struct iscsi_conn_ops *,
+			struct iscsi_param_list *);
+extern void iscsi_set_session_parameters(struct iscsi_sess_ops *,
+			struct iscsi_param_list *, int);
+
+#define YES				"Yes"
+#define NO				"No"
+#define ALL				"All"
+#define IRRELEVANT			"Irrelevant"
+#define NONE				"None"
+#define NOTUNDERSTOOD			"NotUnderstood"
+#define REJECT				"Reject"
+
+/*
+ * The Parameter Names.
+ */
+#define AUTHMETHOD			"AuthMethod"
+#define HEADERDIGEST			"HeaderDigest"
+#define DATADIGEST			"DataDigest"
+#define MAXCONNECTIONS			"MaxConnections"
+#define SENDTARGETS			"SendTargets"
+#define TARGETNAME			"TargetName"
+#define INITIATORNAME			"InitiatorName"
+#define TARGETALIAS			"TargetAlias"
+#define INITIATORALIAS			"InitiatorAlias"
+#define TARGETADDRESS			"TargetAddress"
+#define TARGETPORTALGROUPTAG		"TargetPortalGroupTag"
+#define INITIALR2T			"InitialR2T"
+#define IMMEDIATEDATA			"ImmediateData"
+#define MAXRECVDATASEGMENTLENGTH	"MaxRecvDataSegmentLength"
+#define MAXBURSTLENGTH			"MaxBurstLength"
+#define FIRSTBURSTLENGTH		"FirstBurstLength"
+#define DEFAULTTIME2WAIT		"DefaultTime2Wait"
+#define DEFAULTTIME2RETAIN		"DefaultTime2Retain"
+#define MAXOUTSTANDINGR2T		"MaxOutstandingR2T"
+#define DATAPDUINORDER  		"DataPDUInOrder"
+#define DATASEQUENCEINORDER		"DataSequenceInOrder"
+#define ERRORRECOVERYLEVEL		"ErrorRecoveryLevel"
+#define SESSIONTYPE			"SessionType"
+#define IFMARKER			"IFMarker"
+#define OFMARKER			"OFMarker"
+#define IFMARKINT			"IFMarkInt"
+#define OFMARKINT			"OFMarkInt"
+#define X_EXTENSIONKEY			"X-com.sbei.version"
+#define X_EXTENSIONKEY_CISCO_NEW	"X-com.cisco.protocol"
+#define X_EXTENSIONKEY_CISCO_OLD	"X-com.cisco.iscsi.draft"
+
+/*
+ * For AuthMethod.
+ */
+#define KRB5				"KRB5"
+#define SPKM1				"SPKM1"
+#define SPKM2				"SPKM2"
+#define SRP				"SRP"
+#define CHAP				"CHAP"
+
+/*
+ * Initial values for Parameter Negotiation.
+ */
+#define INITIAL_AUTHMETHOD			CHAP
+#define INITIAL_HEADERDIGEST			"CRC32C,None"
+#define INITIAL_DATADIGEST			"CRC32C,None"
+#define INITIAL_MAXCONNECTIONS			"1"
+#define INITIAL_SENDTARGETS			ALL
+#define INITIAL_TARGETNAME			"LIO.Target"
+#define INITIAL_INITIATORNAME			"LIO.Initiator"
+#define INITIAL_TARGETALIAS			"LIO Target"
+#define INITIAL_INITIATORALIAS			"LIO Initiator"
+#define INITIAL_TARGETADDRESS			"0.0.0.0:0000,0"
+#define INITIAL_TARGETPORTALGROUPTAG		"1"
+#define INITIAL_INITIALR2T			YES
+#define INITIAL_IMMEDIATEDATA			YES
+#define INITIAL_MAXRECVDATASEGMENTLENGTH	"8192"
+#define INITIAL_MAXBURSTLENGTH			"262144"
+#define INITIAL_FIRSTBURSTLENGTH		"65536"
+#define INITIAL_DEFAULTTIME2WAIT		"2"
+#define INITIAL_DEFAULTTIME2RETAIN		"20"
+#define INITIAL_MAXOUTSTANDINGR2T		"1"
+#define INITIAL_DATAPDUINORDER			YES
+#define INITIAL_DATASEQUENCEINORDER		YES
+#define INITIAL_ERRORRECOVERYLEVEL		"0"
+#define INITIAL_SESSIONTYPE			NORMAL
+#define INITIAL_IFMARKER			NO
+#define INITIAL_OFMARKER			NO
+#define INITIAL_IFMARKINT			"2048~65535"
+#define INITIAL_OFMARKINT			"2048~65535"
+
+/*
+ * For [Header,Data]Digests.
+ */
+#define CRC32C				"CRC32C"
+
+/*
+ * For SessionType.
+ */
+#define DISCOVERY			"Discovery"
+#define NORMAL				"Normal"
+
+/*
+ * struct iscsi_param->use
+ */
+#define USE_LEADING_ONLY		0x01
+#define USE_INITIAL_ONLY		0x02
+#define USE_ALL				0x04
+
+#define IS_USE_LEADING_ONLY(p)		((p)->use & USE_LEADING_ONLY)
+#define IS_USE_INITIAL_ONLY(p)		((p)->use & USE_INITIAL_ONLY)
+#define IS_USE_ALL(p)			((p)->use & USE_ALL)
+
+#define SET_USE_INITIAL_ONLY(p)		((p)->use |= USE_INITIAL_ONLY)
+
+/*
+ * struct iscsi_param->sender
+ */
+#define	SENDER_INITIATOR		0x01
+#define SENDER_TARGET			0x02
+#define SENDER_BOTH			0x03
+/* Used in iscsi_check_key() */
+#define SENDER_RECEIVER			0x04
+
+#define IS_SENDER_INITIATOR(p)		((p)->sender & SENDER_INITIATOR)
+#define IS_SENDER_TARGET(p)		((p)->sender & SENDER_TARGET)
+#define IS_SENDER_BOTH(p)		((p)->sender & SENDER_BOTH)
+
+/*
+ * struct iscsi_param->scope
+ */
+#define SCOPE_CONNECTION_ONLY		0x01
+#define SCOPE_SESSION_WIDE		0x02
+
+#define IS_SCOPE_CONNECTION_ONLY(p)	((p)->scope & SCOPE_CONNECTION_ONLY)
+#define IS_SCOPE_SESSION_WIDE(p)	((p)->scope & SCOPE_SESSION_WIDE)
+
+/*
+ * struct iscsi_param->phase
+ */
+#define PHASE_SECURITY			0x01
+#define PHASE_OPERATIONAL		0x02
+#define PHASE_DECLARATIVE		0x04
+#define PHASE_FFP0			0x08
+
+#define IS_PHASE_SECURITY(p)		((p)->phase & PHASE_SECURITY)
+#define IS_PHASE_OPERATIONAL(p)		((p)->phase & PHASE_OPERATIONAL)
+#define IS_PHASE_DECLARATIVE(p)		((p)->phase & PHASE_DECLARATIVE)
+#define IS_PHASE_FFP0(p)		((p)->phase & PHASE_FFP0)
+
+/*
+ * struct iscsi_param->type
+ */
+#define TYPE_BOOL_AND			0x01
+#define TYPE_BOOL_OR			0x02
+#define TYPE_NUMBER			0x04
+#define TYPE_NUMBER_RANGE		0x08
+#define TYPE_STRING			0x10
+#define TYPE_VALUE_LIST			0x20
+
+#define IS_TYPE_BOOL_AND(p)		((p)->type & TYPE_BOOL_AND)
+#define IS_TYPE_BOOL_OR(p)		((p)->type & TYPE_BOOL_OR)
+#define IS_TYPE_NUMBER(p)		((p)->type & TYPE_NUMBER)
+#define IS_TYPE_NUMBER_RANGE(p)		((p)->type & TYPE_NUMBER_RANGE)
+#define IS_TYPE_STRING(p)		((p)->type & TYPE_STRING)
+#define IS_TYPE_VALUE_LIST(p)		((p)->type & TYPE_VALUE_LIST)
+
+/*
+ * struct iscsi_param->type_range
+ */
+#define TYPERANGE_BOOL_AND		0x0001
+#define TYPERANGE_BOOL_OR		0x0002
+#define TYPERANGE_0_TO_2		0x0004
+#define TYPERANGE_0_TO_3600		0x0008
+#define TYPERANGE_0_TO_32767		0x0010
+#define TYPERANGE_0_TO_65535		0x0020
+#define TYPERANGE_1_TO_65535		0x0040
+#define TYPERANGE_2_TO_3600		0x0080
+#define TYPERANGE_512_TO_16777215	0x0100
+#define TYPERANGE_AUTH			0x0200
+#define TYPERANGE_DIGEST		0x0400
+#define TYPERANGE_ISCSINAME		0x0800
+#define TYPERANGE_MARKINT		0x1000
+#define TYPERANGE_SESSIONTYPE		0x2000
+#define TYPERANGE_TARGETADDRESS		0x4000
+#define TYPERANGE_UTF8			0x8000
+
+#define IS_TYPERANGE_0_TO_2(p)		((p)->type_range & TYPERANGE_0_TO_2)
+#define IS_TYPERANGE_0_TO_3600(p)	((p)->type_range & TYPERANGE_0_TO_3600)
+#define IS_TYPERANGE_0_TO_32767(p)	((p)->type_range & TYPERANGE_0_TO_32767)
+#define IS_TYPERANGE_0_TO_65535(p)	((p)->type_range & TYPERANGE_0_TO_65535)
+#define IS_TYPERANGE_1_TO_65535(p)	((p)->type_range & TYPERANGE_1_TO_65535)
+#define IS_TYPERANGE_2_TO_3600(p)	((p)->type_range & TYPERANGE_2_TO_3600)
+#define IS_TYPERANGE_512_TO_16777215(p)	((p)->type_range & \
+						TYPERANGE_512_TO_16777215)
+#define IS_TYPERANGE_AUTH_PARAM(p)	((p)->type_range & TYPERANGE_AUTH)
+#define IS_TYPERANGE_DIGEST_PARAM(p)	((p)->type_range & TYPERANGE_DIGEST)
+#define IS_TYPERANGE_SESSIONTYPE(p)	((p)->type_range & \
+						TYPERANGE_SESSIONTYPE)
+
+/*
+ * struct iscsi_param->state
+ */
+#define PSTATE_ACCEPTOR			0x01
+#define PSTATE_NEGOTIATE		0x02
+#define PSTATE_PROPOSER			0x04
+#define PSTATE_IRRELEVANT		0x08
+#define PSTATE_REJECT			0x10
+#define PSTATE_REPLY_OPTIONAL		0x20
+#define PSTATE_RESPONSE_GOT		0x40
+#define PSTATE_RESPONSE_SENT		0x80
+
+#define IS_PSTATE_ACCEPTOR(p)		((p)->state & PSTATE_ACCEPTOR)
+#define IS_PSTATE_NEGOTIATE(p)		((p)->state & PSTATE_NEGOTIATE)
+#define IS_PSTATE_PROPOSER(p)		((p)->state & PSTATE_PROPOSER)
+#define IS_PSTATE_IRRELEVANT(p)		((p)->state & PSTATE_IRRELEVANT)
+#define IS_PSTATE_REJECT(p)		((p)->state & PSTATE_REJECT)
+#define IS_PSTATE_REPLY_OPTIONAL(p)	((p)->state & PSTATE_REPLY_OPTIONAL)
+#define IS_PSTATE_RESPONSE_GOT(p)	((p)->state & PSTATE_RESPONSE_GOT)
+#define IS_PSTATE_RESPONSE_SENT(p)	((p)->state & PSTATE_RESPONSE_SENT)
+
+#define SET_PSTATE_ACCEPTOR(p)		((p)->state |= PSTATE_ACCEPTOR)
+#define SET_PSTATE_NEGOTIATE(p)		((p)->state |= PSTATE_NEGOTIATE)
+#define SET_PSTATE_PROPOSER(p)		((p)->state |= PSTATE_PROPOSER)
+#define SET_PSTATE_IRRELEVANT(p)	((p)->state |= PSTATE_IRRELEVANT)
+#define SET_PSTATE_REJECT(p)		((p)->state |= PSTATE_REJECT)
+#define SET_PSTATE_REPLY_OPTIONAL(p)	((p)->state |= PSTATE_REPLY_OPTIONAL)
+#define SET_PSTATE_RESPONSE_GOT(p)	((p)->state |= PSTATE_RESPONSE_GOT)
+#define SET_PSTATE_RESPONSE_SENT(p)	((p)->state |= PSTATE_RESPONSE_SENT)
+
+#endif /* ISCSI_PARAMETERS_H */
diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
new file mode 100644
index 0000000..ab64552
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -0,0 +1,1411 @@
+/*******************************************************************************
+ * This file contains the login functions used by the iSCSI Target driver.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/inet.h>
+#include <linux/crypto.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <net/ipv6.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_thread_queue.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_nego.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_stat.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_parameters.h"
+
+/*	iscsi_login_init_conn():
+ *
+ *	Initialize the list heads, semaphores and spinlocks of a newly
+ *	allocated struct iscsi_conn, and allocate its conn_cpumask.
+ */
+static int iscsi_login_init_conn(struct iscsi_conn *conn)
+{
+	INIT_LIST_HEAD(&conn->conn_list);
+	INIT_LIST_HEAD(&conn->conn_cmd_list);
+	INIT_LIST_HEAD(&conn->immed_queue_list);
+	INIT_LIST_HEAD(&conn->response_queue_list);
+	sema_init(&conn->conn_post_wait_sem, 0);
+	sema_init(&conn->conn_wait_sem, 0);
+	sema_init(&conn->conn_wait_rcfr_sem, 0);
+	sema_init(&conn->conn_waiting_on_uc_sem, 0);
+	sema_init(&conn->conn_logout_sem, 0);
+	sema_init(&conn->rx_half_close_sem, 0);
+	sema_init(&conn->tx_half_close_sem, 0);
+	sema_init(&conn->tx_sem, 0);
+	spin_lock_init(&conn->cmd_lock);
+	spin_lock_init(&conn->conn_usage_lock);
+	spin_lock_init(&conn->immed_queue_lock);
+	spin_lock_init(&conn->netif_lock);
+	spin_lock_init(&conn->nopin_timer_lock);
+	spin_lock_init(&conn->response_queue_lock);
+	spin_lock_init(&conn->state_lock);
+
+	if (!(zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL))) {
+		printk(KERN_ERR "Unable to allocate conn->conn_cpumask\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*
+ * Used by iscsi_target_nego.c:iscsi_target_locate_portal() to setup
+ * per struct iscsi_conn libcrypto contexts for crc32c and crc32-intel
+ */
+int iscsi_login_setup_crypto(struct iscsi_conn *conn)
+{
+	struct iscsi_portal_group *tpg = conn->tpg;
+#ifdef CONFIG_X86
+	/*
+	 * Check for the Nehalem optimized crc32c-intel instructions
+	 * This is only currently available while running on bare-metal,
+	 * and is not yet available with QEMU-KVM guests.
+	 */
+	if (cpu_has_xmm4_2 && ISCSI_TPG_ATTRIB(tpg)->crc32c_x86_offload) {
+		conn->conn_rx_hash.flags = 0;
+		conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
+						CRYPTO_ALG_ASYNC);
+		if (IS_ERR(conn->conn_rx_hash.tfm)) {
+			printk(KERN_ERR "crypto_alloc_hash() failed for conn_rx_tfm\n");
+			goto check_crc32c;
+		}
+
+		conn->conn_tx_hash.flags = 0;
+		conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
+						CRYPTO_ALG_ASYNC);
+		if (IS_ERR(conn->conn_tx_hash.tfm)) {
+			printk(KERN_ERR "crypto_alloc_hash() failed for conn_tx_tfm\n");
+			crypto_free_hash(conn->conn_rx_hash.tfm);
+			goto check_crc32c;
+		}
+
+		printk(KERN_INFO "LIO-Target[0]: Using Nehalem crc32c-intel"
+					" offload instructions\n");
+		return 0;
+	}
+check_crc32c:
+#endif /* CONFIG_X86 */
+	/*
+	 * Setup slicing by 1x CRC32C algorithm for RX and TX libcrypto contexts
+	 */
+	conn->conn_rx_hash.flags = 0;
+	conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c", 0,
+						CRYPTO_ALG_ASYNC);
+	if (IS_ERR(conn->conn_rx_hash.tfm)) {
+		printk(KERN_ERR "crypto_alloc_hash() failed for conn_rx_tfm\n");
+		return -ENOMEM;
+	}
+
+	conn->conn_tx_hash.flags = 0;
+	conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c", 0,
+						CRYPTO_ALG_ASYNC);
+	if (IS_ERR(conn->conn_tx_hash.tfm)) {
+		printk(KERN_ERR "crypto_alloc_hash() failed for conn_tx_tfm\n");
+		crypto_free_hash(conn->conn_rx_hash.tfm);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*	iscsi_login_check_initiator_version():
+ *
+ *	Reject the login if the initiator reports anything other than
+ *	VersionMax/VersionMin of 0x00 (RFC 3720).
+ */
+static int iscsi_login_check_initiator_version(
+	struct iscsi_conn *conn,
+	u8 version_max,
+	u8 version_min)
+{
+	if ((version_max != 0x00) || (version_min != 0x00)) {
+		printk(KERN_ERR "Unsupported iSCSI IETF Pre-RFC Revision,"
+			" version Min/Max 0x%02x/0x%02x, rejecting login.\n",
+			version_min, version_max);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_NO_VERSION);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_check_for_session_reinstatement():
+ *
+ *	Look for an existing session with a matching ISID, InitiatorName and
+ *	SessionType, and if one is found shut it down so the new leading
+ *	connection can reinstate it.
+ */
+int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
+{
+	int sessiontype;
+	struct iscsi_param *initiatorname_param = NULL, *sessiontype_param = NULL;
+	struct iscsi_portal_group *tpg = conn->tpg;
+	struct iscsi_session *sess = NULL, *sess_p = NULL;
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_session *se_sess, *se_sess_tmp;
+
+	initiatorname_param = iscsi_find_param_from_key(
+			INITIATORNAME, conn->param_list);
+	if (!(initiatorname_param))
+		return -1;
+
+	sessiontype_param = iscsi_find_param_from_key(
+			SESSIONTYPE, conn->param_list);
+	if (!(sessiontype_param))
+		return -1;
+
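+	/* 0 for SessionType=Normal, 1 for SessionType=Discovery */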
+	sessiontype = (strncmp(sessiontype_param->value, NORMAL, 6)) ? 1 : 0;
+
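+	/*
+	 * Walk the TPG's active sessions looking for an existing session
+	 * with a matching ISID, InitiatorName and SessionType.
+	 */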
+	spin_lock_bh(&se_tpg->session_lock);
+	list_for_each_entry_safe(se_sess, se_sess_tmp, &se_tpg->tpg_sess_list,
+			sess_list) {
+
+		sess_p = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		spin_lock(&sess_p->conn_lock);
+		if (atomic_read(&sess_p->session_fall_back_to_erl0) ||
+		    atomic_read(&sess_p->session_logout) ||
+		    (sess_p->time2retain_timer_flags & T2R_TF_EXPIRED)) {
+			spin_unlock(&sess_p->conn_lock);
+			continue;
+		}
+		if (!memcmp((void *)sess_p->isid, (void *)SESS(conn)->isid, 6) &&
+		   (!strcmp((void *)SESS_OPS(sess_p)->InitiatorName,
+			    (void *)initiatorname_param->value) &&
+		   (SESS_OPS(sess_p)->SessionType == sessiontype))) {
+			atomic_set(&sess_p->session_reinstatement, 1);
+			spin_unlock(&sess_p->conn_lock);
+			iscsi_inc_session_usage_count(sess_p);
+			iscsi_stop_time2retain_timer(sess_p);
+			sess = sess_p;
+			break;
+		}
+		spin_unlock(&sess_p->conn_lock);
+	}
+	spin_unlock_bh(&se_tpg->session_lock);
+	/*
+	 * If the Time2Retain handler has expired, the session is already gone.
+	 */
+	if (!sess)
+		return 0;
+
+	TRACE(TRACE_ERL0, "%s iSCSI Session SID %u is still active for %s,"
+		" performing session reinstatement.\n", (sessiontype) ?
+		"Discovery" : "Normal", sess->sid,
+		SESS_OPS(sess)->InitiatorName);
+
+	spin_lock_bh(&sess->conn_lock);
+	if (sess->session_state == TARG_SESS_STATE_FAILED) {
+		spin_unlock_bh(&sess->conn_lock);
+		iscsi_dec_session_usage_count(sess);
+		return iscsi_close_session(sess);
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_stop_session(sess, 1, 1);
+	iscsi_dec_session_usage_count(sess);
+
+	return iscsi_close_session(sess);
+}
+
+static void iscsi_login_set_conn_values(
+	struct iscsi_session *sess,
+	struct iscsi_conn *conn,
+	u16 cid)
+{
+	conn->sess		= sess;
+	conn->cid 		= cid;
+	/*
+	 * Generate a random Status sequence number (statsn) for the new
+	 * iSCSI connection.
+	 */
+	get_random_bytes(&conn->stat_sn, sizeof(u32));
+
+	down(&iscsi_global->auth_id_sem);
+	conn->auth_id		= iscsi_global->auth_id++;
+	up(&iscsi_global->auth_id_sem);
+}
+
+/*	iscsi_login_zero_tsih_s1():
+ *
+ *	This is the leading connection of a new session,
+ *	or session reinstatement.
+ */
+static int iscsi_login_zero_tsih_s1(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_session *sess = NULL;
+	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
+
+	sess = kmem_cache_zalloc(lio_sess_cache, GFP_KERNEL);
+	if (!(sess)) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		printk(KERN_ERR "Could not allocate memory for session\n");
+		return -1;
+	}
+
+	iscsi_login_set_conn_values(sess, conn, pdu->cid);
+	sess->init_task_tag	= pdu->itt;
+	memcpy((void *)&sess->isid, (void *)pdu->isid, 6);
+	sess->exp_cmd_sn	= pdu->cmdsn;
+	INIT_LIST_HEAD(&sess->sess_conn_list);
+	INIT_LIST_HEAD(&sess->sess_ooo_cmdsn_list);
+	INIT_LIST_HEAD(&sess->cr_active_list);
+	INIT_LIST_HEAD(&sess->cr_inactive_list);
+	sema_init(&sess->async_msg_sem, 0);
+	sema_init(&sess->reinstatement_sem, 0);
+	sema_init(&sess->session_wait_sem, 0);
+	sema_init(&sess->session_waiting_on_uc_sem, 0);
+	spin_lock_init(&sess->cmdsn_lock);
+	spin_lock_init(&sess->conn_lock);
+	spin_lock_init(&sess->cr_a_lock);
+	spin_lock_init(&sess->cr_i_lock);
+	spin_lock_init(&sess->session_usage_lock);
+	spin_lock_init(&sess->ttt_lock);
+	sess->session_index = iscsi_get_new_index(ISCSI_SESSION_INDEX);
+	sess->creation_time = get_jiffies_64();
+	spin_lock_init(&sess->session_stats_lock);
+	/*
+	 * The FFP CmdSN window values will be allocated from the TPG's
+	 * Initiator Node's ACL once the login has been successfully completed.
+	 */
+	sess->max_cmd_sn	= pdu->cmdsn;
+
+	sess->sess_ops = kzalloc(sizeof(struct iscsi_sess_ops), GFP_KERNEL);
+	if (!(sess->sess_ops)) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_sess_ops.\n");
+		return -1;
+	}
+
+	sess->se_sess = transport_init_session();
+	if (!(sess->se_sess)) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int iscsi_login_zero_tsih_s2(
+	struct iscsi_conn *conn)
+{
+	struct iscsi_node_attrib *na;
+	struct iscsi_session *sess = conn->sess;
+	unsigned char buf[32];
+
+	sess->tpg = conn->tpg;
+
+	/*
+	 * Assign a new TPG Session Handle.  Note this is protected with
+	 * struct iscsi_portal_group->np_login_sem from core_access_np().
+	 */
+	sess->tsih = ++ISCSI_TPG_S(sess)->ntsih;
+	if (!(sess->tsih))
+		sess->tsih = ++ISCSI_TPG_S(sess)->ntsih;
+
+	/*
+	 * Create the default params from user defined values.
+	 */
+	if (iscsi_copy_param_list(&conn->param_list,
+				ISCSI_TPG_C(conn)->param_list, 1) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	iscsi_set_keys_to_negotiate(TARGET, 0, conn->param_list);
+
+	if (SESS_OPS(sess)->SessionType)
+		return iscsi_set_keys_irrelevant_for_discovery(
+				conn->param_list);
+
+	na = iscsi_tpg_get_node_attrib(sess);
+
+	/*
+	 * Need to send TargetPortalGroupTag back in first login response
+	 * on any iSCSI connection where the Initiator provides TargetName.
+	 * See 5.3.1.  Login Phase Start
+	 *
+	 * In our case, we have already located the struct iscsi_tiqn at this point.
+	 */
+	memset(buf, 0, 32);
+	sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt);
+	if (iscsi_change_param_value(buf, TARGET, conn->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	/*
+	 * Workaround for Initiators that have broken connection recovery logic.
+	 *
+	 * "We would really like to get rid of this." Linux-iSCSI.org team
+	 */
+	memset(buf, 0, 32);
+	sprintf(buf, "ErrorRecoveryLevel=%d", na->default_erl);
+	if (iscsi_change_param_value(buf, TARGET, conn->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	if (iscsi_login_disable_FIM_keys(conn->param_list, conn) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Remove PSTATE_NEGOTIATE for the four FIM related keys.
+ * The Initiator node will be able to enable FIM by proposing them itself.
+ */
+int iscsi_login_disable_FIM_keys(
+	struct iscsi_param_list *param_list,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_param *param;
+
+	param = iscsi_find_param_from_key("OFMarker", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" OFMarker failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	param = iscsi_find_param_from_key("OFMarkInt", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" OFMarkInt failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	param = iscsi_find_param_from_key("IFMarker", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" IFMarker failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	param = iscsi_find_param_from_key("IFMarkInt", param_list);
+	if (!(param)) {
+		printk(KERN_ERR "iscsi_find_param_from_key() for"
+				" IFMarkInt failed\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+	param->state &= ~PSTATE_NEGOTIATE;
+
+	return 0;
+}
+
+static int iscsi_login_non_zero_tsih_s1(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
+
+	iscsi_login_set_conn_values(NULL, conn, pdu->cid);
+	return 0;
+}
+
+/*	iscsi_login_non_zero_tsih_s2():
+ *
+ *	Add a new connection to an existing session.
+ */
+static int iscsi_login_non_zero_tsih_s2(
+	struct iscsi_conn *conn,
+	unsigned char *buf)
+{
+	struct iscsi_portal_group *tpg = conn->tpg;
+	struct iscsi_session *sess = NULL, *sess_p = NULL;
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_session *se_sess, *se_sess_tmp;
+	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
+
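+	/*
+	 * Locate the existing session by matching ISID and TSIH before
+	 * adding the new connection to it.
+	 */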
+	spin_lock_bh(&se_tpg->session_lock);
+	list_for_each_entry_safe(se_sess, se_sess_tmp, &se_tpg->tpg_sess_list,
+			sess_list) {
+
+		sess_p = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+		if (atomic_read(&sess_p->session_fall_back_to_erl0) ||
+		    atomic_read(&sess_p->session_logout) ||
+		   (sess_p->time2retain_timer_flags & T2R_TF_EXPIRED))
+			continue;
+		if (!(memcmp((const void *)sess_p->isid,
+		     (const void *)pdu->isid, 6)) &&
+		     (sess_p->tsih == pdu->tsih)) {
+			iscsi_inc_session_usage_count(sess_p);
+			iscsi_stop_time2retain_timer(sess_p);
+			sess = sess_p;
+			break;
+		}
+	}
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	/*
+	 * If the Time2Retain handler has expired, the session is already gone.
+	 */
+	if (!sess) {
+		printk(KERN_ERR "Initiator attempting to add a connection to"
+			" a non-existent session, rejecting iSCSI Login.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_NO_SESSION);
+		return -1;
+	}
+
+	/*
+	 * Stop the Time2Retain timer if this is a failed session, we restart
+	 * the timer if the login is not successful.
+	 */
+	spin_lock_bh(&sess->conn_lock);
+	if (sess->session_state == TARG_SESS_STATE_FAILED)
+		atomic_set(&sess->session_continuation, 1);
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_login_set_conn_values(sess, conn, pdu->cid);
+
+	if (iscsi_copy_param_list(&conn->param_list,
+			ISCSI_TPG_C(conn)->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	iscsi_set_keys_to_negotiate(TARGET, 0, conn->param_list);
+
+	/*
+	 * Need to send TargetPortalGroupTag back in first login response
+	 * on any iSCSI connection where the Initiator provides TargetName.
+	 * See 5.3.1.  Login Phase Start
+	 *
+	 * In our case, we have already located the struct iscsi_tiqn at this point.
+	 */
+	memset(buf, 0, 32);
+	sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt);
+	if (iscsi_change_param_value(buf, TARGET, conn->param_list, 0) < 0) {
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return -1;
+	}
+
+	return iscsi_login_disable_FIM_keys(conn->param_list, conn);
+}
+
+/*	iscsi_login_post_auth_non_zero_tsih():
+ *
+ *	After authentication, handle connection reinstatement and ERL=2
+ *	connection recovery for the existing session, and enforce
+ *	MaxConnections before the new CID is added.
+ */
+int iscsi_login_post_auth_non_zero_tsih(
+	struct iscsi_conn *conn,
+	u16 cid,
+	u32 exp_statsn)
+{
+	struct iscsi_conn *conn_ptr = NULL;
+	struct iscsi_conn_recovery *cr = NULL;
+	struct iscsi_session *sess = SESS(conn);
+
+	/*
+	 * By following item 5 in the login table, if we have found
+	 * an existing ISID and a valid/existing TSIH and an existing
+	 * CID we do connection reinstatement.  Currently we do not
+	 * support it, so we send back a non-zero status class to the
+	 * initiator and release the new connection.
+	 */
+	conn_ptr = iscsi_get_conn_from_cid_rcfr(sess, cid);
+	if ((conn_ptr)) {
+		printk(KERN_ERR "Connection exists with CID %hu for %s,"
+			" performing connection reinstatement.\n",
+			conn_ptr->cid, SESS_OPS(sess)->InitiatorName);
+
+		iscsi_connection_reinstatement_rcfr(conn_ptr);
+		iscsi_dec_conn_usage_count(conn_ptr);
+	}
+
+	/*
+	 * Check for any connection recovery entries containing CID.
+	 * We use the original ExpStatSN sent in the first login request
+	 * to acknowledge commands for the failed connection.
+	 *
+	 * Also note that an explicit logout may have already been sent,
+	 * but the response may not be sent due to additional connection
+	 * loss.
+	 */
+	if (SESS_OPS(sess)->ErrorRecoveryLevel == 2) {
+		cr = iscsi_get_inactive_connection_recovery_entry(
+				sess, cid);
+		if ((cr)) {
+			TRACE(TRACE_ERL2, "Performing implicit logout"
+				" for connection recovery on CID: %hu\n",
+					conn->cid);
+			iscsi_discard_cr_cmds_by_expstatsn(cr, exp_statsn);
+		}
+	}
+
+	/*
+	 * Otherwise we follow item 4 from the login table: we have found
+	 * an existing ISID and a valid/existing TSIH with a new CID, so
+	 * we go ahead and add a new connection to the session.
+	 */
+	TRACE(TRACE_LOGIN, "Adding CID %hu to existing session for %s.\n",
+			cid, SESS_OPS(sess)->InitiatorName);
+
+	if ((atomic_read(&sess->nconn) + 1) > SESS_OPS(sess)->MaxConnections) {
+		printk(KERN_ERR "Adding additional connection to this session"
+			" would exceed MaxConnections %d, login failed.\n",
+				SESS_OPS(sess)->MaxConnections);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_ISID_ERROR);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_post_login_start_timers():
+ *
+ *	Start the NopIn timer for Normal sessions once login has completed.
+ */
+static void iscsi_post_login_start_timers(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+/* #warning PHY timer is disabled */
+#if 0
+	iscsi_get_network_interface_from_conn(conn);
+
+	spin_lock_bh(&conn->netif_lock);
+	iscsi_start_netif_timer(conn);
+	spin_unlock_bh(&conn->netif_lock);
+#endif
+	if (!SESS_OPS(sess)->SessionType)
+		iscsi_start_nopin_timer(conn);
+}
+
+/*	iscsi_post_login_handler():
+ *
+ *	Complete the transition to Full Feature Phase: apply the negotiated
+ *	connection/session parameters, register the connection (and for a
+ *	leading login, the new session), then start timers and the RX/TX
+ *	thread set.
+ */
+static int iscsi_post_login_handler(
+	struct iscsi_np *np,
+	struct iscsi_conn *conn,
+	u8 zero_tsih)
+{
+	int stop_timer = 0;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf1_ipv4[IPV4_BUF_SIZE];
+	unsigned char *ip, *ip_np;
+	struct iscsi_session *sess = SESS(conn);
+	struct se_session *se_sess = sess->se_sess;
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+	struct se_thread_set *ts;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	iscsi_collect_login_stats(conn, ISCSI_STATUS_CLS_SUCCESS,
+			ISCSI_LOGIN_STATUS_ACCEPT);
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_LOGGED_IN.\n");
+	conn->conn_state = TARG_CONN_STATE_LOGGED_IN;
+
+	iscsi_set_connection_parameters(conn->conn_ops, conn->param_list);
+	iscsi_set_sync_and_steering_values(conn);
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		ip = &conn->ipv6_login_ip[0];
+		ip_np = &np->np_ipv6[0];
+	} else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		memset(buf1_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, conn->login_ip);
+		iscsi_ntoa2(buf1_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+		ip_np = &buf1_ipv4[0];
+	}
+
+	/*
+	 * SCSI Initiator -> SCSI Target Port Mapping
+	 */
+	ts = iscsi_get_thread_set(TARGET);
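+	/*
+	 * A non-zero TSIH means this connection is joining an existing
+	 * session; a leading (zero TSIH) login falls through below to
+	 * register the new session first.
+	 */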
+	if (!zero_tsih) {
+		iscsi_set_session_parameters(sess->sess_ops,
+				conn->param_list, 0);
+		iscsi_release_param_list(conn->param_list);
+		conn->param_list = NULL;
+
+		spin_lock_bh(&sess->conn_lock);
+		atomic_set(&sess->session_continuation, 0);
+		if (sess->session_state == TARG_SESS_STATE_FAILED) {
+			TRACE(TRACE_STATE, "Moving to"
+					" TARG_SESS_STATE_LOGGED_IN.\n");
+			sess->session_state = TARG_SESS_STATE_LOGGED_IN;
+			stop_timer = 1;
+		}
+
+		printk(KERN_INFO "iSCSI Login successful on CID: %hu from %s to"
+			" %s:%hu,%hu\n", conn->cid, ip, ip_np,
+				np->np_port, tpg->tpgt);
+
+		list_add_tail(&conn->conn_list, &sess->sess_conn_list);
+		atomic_inc(&sess->nconn);
+		printk(KERN_INFO "Incremented iSCSI Connection count to %hu"
+			" from node: %s\n", atomic_read(&sess->nconn),
+			SESS_OPS(sess)->InitiatorName);
+		spin_unlock_bh(&sess->conn_lock);
+
+		iscsi_post_login_start_timers(conn);
+		iscsi_activate_thread_set(conn, ts);
+		/*
+		 * Determine CPU mask to ensure connection's RX and TX kthreads
+		 * are scheduled on the same CPU.
+		 */
+		iscsi_thread_get_cpumask(conn);
+		conn->conn_rx_reset_cpumask = 1;
+		conn->conn_tx_reset_cpumask = 1;
+
+		iscsi_dec_conn_usage_count(conn);
+		if (stop_timer) {
+			spin_lock_bh(&se_tpg->session_lock);
+			iscsi_stop_time2retain_timer(sess);
+			spin_unlock_bh(&se_tpg->session_lock);
+		}
+		iscsi_dec_session_usage_count(sess);
+		return 0;
+	}
+
+	iscsi_set_session_parameters(sess->sess_ops, conn->param_list, 1);
+	iscsi_release_param_list(conn->param_list);
+	conn->param_list = NULL;
+
+	iscsi_determine_maxcmdsn(sess);
+
+	spin_lock_bh(&se_tpg->session_lock);
+	__transport_register_session(&sess->tpg->tpg_se_tpg,
+			se_sess->se_node_acl, se_sess, (void *)sess);
+	TRACE(TRACE_STATE, "Moving to TARG_SESS_STATE_LOGGED_IN.\n");
+	sess->session_state = TARG_SESS_STATE_LOGGED_IN;
+
+	printk(KERN_INFO "iSCSI Login successful on CID: %hu from %s to %s:%hu,%hu\n",
+		conn->cid, ip, ip_np, np->np_port, tpg->tpgt);
+
+	spin_lock_bh(&sess->conn_lock);
+	list_add_tail(&conn->conn_list, &sess->sess_conn_list);
+	atomic_inc(&sess->nconn);
+	printk(KERN_INFO "Incremented iSCSI Connection count to %hu from node:"
+		" %s\n", atomic_read(&sess->nconn),
+		SESS_OPS(sess)->InitiatorName);
+	spin_unlock_bh(&sess->conn_lock);
+
+	sess->sid = tpg->sid++;
+	if (!sess->sid)
+		sess->sid = tpg->sid++;
+	printk(KERN_INFO "Established iSCSI session from node: %s\n",
+			SESS_OPS(sess)->InitiatorName);
+
+	tpg->nsessions++;
+	if (tpg->tpg_tiqn)
+		tpg->tpg_tiqn->tiqn_nsessions++;
+
+	printk(KERN_INFO "Incremented number of active iSCSI sessions to %u on"
+		" iSCSI Target Portal Group: %hu\n", tpg->nsessions, tpg->tpgt);
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	iscsi_post_login_start_timers(conn);
+	iscsi_activate_thread_set(conn, ts);
+	/*
+	 * Determine CPU mask to ensure connection's RX and TX kthreads
+	 * are scheduled on the same CPU.
+	 */
+	iscsi_thread_get_cpumask(conn);
+	conn->conn_rx_reset_cpumask = 1;
+	conn->conn_tx_reset_cpumask = 1;
+
+	iscsi_dec_conn_usage_count(conn);
+
+	return 0;
+}
+
+/*	iscsi_handle_login_thread_timeout():
+ *
+ *	Login timer expiration handler; signals the Network Portal's login
+ *	thread so that a stalled login attempt is aborted.
+ */
+static void iscsi_handle_login_thread_timeout(unsigned long data)
+{
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+	struct iscsi_np *np = (struct iscsi_np *) data;
+
+	memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+	spin_lock_bh(&np->np_thread_lock);
+	iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+
+	printk(KERN_ERR "iSCSI Login timeout on Network Portal %s:%hu\n",
+			buf_ipv4, np->np_port);
+
+	if (np->np_login_timer_flags & TPG_NP_TF_STOP) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return;
+	}
+
+	if (np->np_thread)
+		send_sig(SIGKILL, np->np_thread, 1);
+
+	np->np_login_timer_flags &= ~TPG_NP_TF_RUNNING;
+	spin_unlock_bh(&np->np_thread_lock);
+}
+
+/*	iscsi_start_login_thread_timer():
+ *
+ *
+ */
+static void iscsi_start_login_thread_timer(struct iscsi_np *np)
+{
+	/*
+	 * This uses the TA_LOGIN_TIMEOUT constant because at this point
+	 * we do not have access to ISCSI_TPG_ATTRIB(tpg)->login_timeout.
+	 */
+	spin_lock_bh(&np->np_thread_lock);
+	init_timer(&np->np_login_timer);
+	SETUP_TIMER(np->np_login_timer, TA_LOGIN_TIMEOUT, np,
+			iscsi_handle_login_thread_timeout);
+	np->np_login_timer_flags &= ~TPG_NP_TF_STOP;
+	np->np_login_timer_flags |= TPG_NP_TF_RUNNING;
+	add_timer(&np->np_login_timer);
+
+	TRACE(TRACE_LOGIN, "Added timeout timer to iSCSI login request for"
+			" %u seconds.\n", TA_LOGIN_TIMEOUT);
+	spin_unlock_bh(&np->np_thread_lock);
+}
+
+/*	iscsi_stop_login_thread_timer():
+ *
+ *
+ */
+static void iscsi_stop_login_thread_timer(struct iscsi_np *np)
+{
+	spin_lock_bh(&np->np_thread_lock);
+	if (!(np->np_login_timer_flags & TPG_NP_TF_RUNNING)) {
+		spin_unlock_bh(&np->np_thread_lock);
+		return;
+	}
+	np->np_login_timer_flags |= TPG_NP_TF_STOP;
+	spin_unlock_bh(&np->np_thread_lock);
+
+	del_timer_sync(&np->np_login_timer);
+
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_login_timer_flags &= ~TPG_NP_TF_RUNNING;
+	spin_unlock_bh(&np->np_thread_lock);
+}
+
+/*	iscsi_target_setup_login_socket():
+ *
+ *	Create, bind and listen on the Network Portal's TCP or SCTP
+ *	login socket.
+ */
+static struct socket *iscsi_target_setup_login_socket(struct iscsi_np *np)
+{
+	const char *end;
+	struct socket *sock;
+	int backlog = 5, ip_proto, sock_type, ret, opt = 0;
+	struct sockaddr_in sock_in;
+	struct sockaddr_in6 sock_in6;
+
+	switch (np->np_network_transport) {
+	case ISCSI_TCP:
+		ip_proto = IPPROTO_TCP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_TCP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_UDP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_SEQPACKET;
+		break;
+	case ISCSI_IWARP_TCP:
+	case ISCSI_IWARP_SCTP:
+	case ISCSI_INFINIBAND:
+	default:
+		printk(KERN_ERR "Unsupported network_transport: %d\n",
+				np->np_network_transport);
+		goto fail;
+	}
+
+	if (sock_create((np->np_flags & NPF_NET_IPV6) ? AF_INET6 : AF_INET,
+			sock_type, ip_proto, &sock) < 0) {
+		printk(KERN_ERR "sock_create() failed.\n");
+		goto fail;
+	}
+	np->np_socket = sock;
+
+	/*
+	 * The SCTP stack needs struct socket->file.
+	 */
+	if ((np->np_network_transport == ISCSI_SCTP_TCP) ||
+	    (np->np_network_transport == ISCSI_SCTP_UDP)) {
+		if (!sock->file) {
+			sock->file = kzalloc(sizeof(struct file), GFP_KERNEL);
+			if (!(sock->file)) {
+				printk(KERN_ERR "Unable to allocate struct"
+						" file for SCTP\n");
+				goto fail;
+			}
+			np->np_flags |= NPF_SCTP_STRUCT_FILE;
+		}
+	}
+
+	if (np->np_flags & NPF_NET_IPV6) {
+		memset(&sock_in6, 0, sizeof(struct sockaddr_in6));
+		sock_in6.sin6_family = AF_INET6;
+		sock_in6.sin6_port = htons(np->np_port);
+#if 1
+		ret = in6_pton(&np->np_ipv6[0], IPV6_ADDRESS_SPACE,
+				(void *)&sock_in6.sin6_addr.in6_u, -1, &end);
+		if (ret <= 0) {
+			printk(KERN_ERR "in6_pton returned: %d\n", ret);
+			goto fail;
+		}
+#else
+		ret = iscsi_pton6(&np->np_ipv6[0],
+				(unsigned char *)&sock_in6.sin6_addr.in6_u);
+		if (ret <= 0) {
+			printk(KERN_ERR "iscsi_pton6() returned: %d\n", ret);
+			goto fail;
+		}
+#endif
+	} else {
+		memset(&sock_in, 0, sizeof(struct sockaddr_in));
+		sock_in.sin_family = AF_INET;
+		sock_in.sin_port = htons(np->np_port);
+		sock_in.sin_addr.s_addr = htonl(np->np_ipv4);
+	}
+
+	/*
+	 * Set SO_REUSEADDR, and disable the Nagle algorithm with TCP_NODELAY.
+	 */
+	opt = 1;
+	if (np->np_network_transport == ISCSI_TCP) {
+		ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
+				(char *)&opt, sizeof(opt));
+		if (ret < 0) {
+			printk(KERN_ERR "kernel_setsockopt() for TCP_NODELAY"
+				" failed: %d\n", ret);
+			goto fail;
+		}
+	}
+	ret = kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEADDR,
+			(char *)&opt, sizeof(opt));
+	if (ret < 0) {
+		printk(KERN_ERR "kernel_setsockopt() for SO_REUSEADDR"
+			" failed\n");
+		goto fail;
+	}
+
+	if (np->np_flags & NPF_NET_IPV6) {
+		ret = kernel_bind(sock, (struct sockaddr *)&sock_in6,
+				sizeof(struct sockaddr_in6));
+		if (ret < 0) {
+			printk(KERN_ERR "kernel_bind() failed: %d\n", ret);
+			goto fail;
+		}
+	} else {
+		ret = kernel_bind(sock, (struct sockaddr *)&sock_in,
+				sizeof(struct sockaddr));
+		if (ret < 0) {
+			printk(KERN_ERR "kernel_bind() failed: %d\n", ret);
+			goto fail;
+		}
+	}
+
+	if (kernel_listen(sock, backlog)) {
+		printk(KERN_ERR "kernel_listen() failed.\n");
+		goto fail;
+	}
+
+	return sock;
+
+fail:
+	np->np_socket = NULL;
+	if (sock) {
+		if (np->np_flags & NPF_SCTP_STRUCT_FILE) {
+			kfree(sock->file);
+			sock->file = NULL;
+		}
+
+		sock_release(sock);
+	}
+	return NULL;
+}
+
+/*	iscsi_target_login_thread():
+ *
+ *	Main login thread for a Network Portal: accepts new connections
+ *	and drives iSCSI Login Phase negotiation for each of them.
+ */
+int iscsi_target_login_thread(void *arg)
+{
+	u8 buffer[ISCSI_HDR_LEN], iscsi_opcode, zero_tsih = 0;
+	unsigned char *ip = NULL, *ip_init_buf = NULL;
+	unsigned char buf_ipv4[IPV4_BUF_SIZE], buf1_ipv4[IPV4_BUF_SIZE];
+	int err, ret = 0, start = 1, ip_proto;
+	int sock_type, set_sctp_conn_flag = 0;
+	struct iscsi_conn *conn = NULL;
+	struct iscsi_login *login;
+	struct iscsi_portal_group *tpg = NULL;
+	struct socket *new_sock, *sock;
+	struct iscsi_np *np = (struct iscsi_np *) arg;
+	struct iovec iov;
+	struct iscsi_login_req *pdu;
+	struct sockaddr_in sock_in;
+	struct sockaddr_in6 sock_in6;
+
+	{
+	char name[16];
+	memset(name, 0, 16);
+	sprintf(name, "iscsi_np");
+	iscsi_daemon(np->np_thread, name, SHUTDOWN_SIGS);
+	}
+
+	sock = iscsi_target_setup_login_socket(np);
+	if (!(sock)) {
+		up(&np->np_start_sem);
+		return -1;
+	}
+
+get_new_sock:
+	flush_signals(current);
+	ip_proto = sock_type = set_sctp_conn_flag = 0;
+
+	switch (np->np_network_transport) {
+	case ISCSI_TCP:
+		ip_proto = IPPROTO_TCP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_TCP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_STREAM;
+		break;
+	case ISCSI_SCTP_UDP:
+		ip_proto = IPPROTO_SCTP;
+		sock_type = SOCK_SEQPACKET;
+		break;
+	case ISCSI_IWARP_TCP:
+	case ISCSI_IWARP_SCTP:
+	case ISCSI_INFINIBAND:
+	default:
+		printk(KERN_ERR "Unsupported network_transport: %d\n",
+			np->np_network_transport);
+		if (start)
+			up(&np->np_start_sem);
+		return -1;
+	}
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN)
+		goto out;
+	else if (np->np_thread_state == ISCSI_NP_THREAD_RESET) {
+		if (atomic_read(&np->np_shutdown)) {
+			spin_unlock_bh(&np->np_thread_lock);
+			up(&np->np_restart_sem);
+			down(&np->np_shutdown_sem);
+			goto out;
+		}
+		np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+		up(&np->np_restart_sem);
+	} else {
+		np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+
+		if (start) {
+			start = 0;
+			up(&np->np_start_sem);
+		}
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	if (kernel_accept(sock, &new_sock, 0) < 0) {
+		if (signal_pending(current)) {
+			spin_lock_bh(&np->np_thread_lock);
+			if (np->np_thread_state == ISCSI_NP_THREAD_RESET) {
+				if (atomic_read(&np->np_shutdown)) {
+					spin_unlock_bh(&np->np_thread_lock);
+					up(&np->np_restart_sem);
+					down(&np->np_shutdown_sem);
+					goto out;
+				}
+				spin_unlock_bh(&np->np_thread_lock);
+				goto get_new_sock;
+			}
+			spin_unlock_bh(&np->np_thread_lock);
+			goto out;
+		}
+		goto get_new_sock;
+	}
+	/*
+	 * The SCTP stack needs struct socket->file.
+	 */
+	if ((np->np_network_transport == ISCSI_SCTP_TCP) ||
+	    (np->np_network_transport == ISCSI_SCTP_UDP)) {
+		if (!new_sock->file) {
+			new_sock->file = kzalloc(
+					sizeof(struct file), GFP_KERNEL);
+			if (!(new_sock->file)) {
+				printk(KERN_ERR "Unable to allocate struct"
+						" file for SCTP\n");
+				sock_release(new_sock);
+				goto get_new_sock;
+			}
+			set_sctp_conn_flag = 1;
+		}
+	}
+
+	iscsi_start_login_thread_timer(np);
+
+	conn = kmem_cache_zalloc(lio_conn_cache, GFP_KERNEL);
+	if (!(conn)) {
+		printk(KERN_ERR "Could not allocate memory for"
+			" new connection\n");
+		if (set_sctp_conn_flag) {
+			kfree(new_sock->file);
+			new_sock->file = NULL;
+		}
+		sock_release(new_sock);
+
+		goto get_new_sock;
+	}
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_FREE.\n");
+	conn->conn_state = TARG_CONN_STATE_FREE;
+	conn->sock = new_sock;
+
+	if (set_sctp_conn_flag)
+		conn->conn_flags |= CONNFLAG_SCTP_STRUCT_FILE;
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_XPT_UP.\n");
+	conn->conn_state = TARG_CONN_STATE_XPT_UP;
+
+	/*
+	 * Allocate conn->conn_ops early, as the failure paths below that
+	 * call iscsi_tx_login_rsp() will end up calling tx_data().
+	 */
+	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
+	if (!(conn->conn_ops)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_conn_ops.\n");
+		goto new_sess_out;
+	}
+	/*
+	 * Perform the remaining iSCSI connection initialization items.
+	 */
+	if (iscsi_login_init_conn(conn) < 0)
+		goto new_sess_out;
+
+	memset(buffer, 0, ISCSI_HDR_LEN);
+	memset(&iov, 0, sizeof(struct iovec));
+	iov.iov_base	= buffer;
+	iov.iov_len	= ISCSI_HDR_LEN;
+
+	if (rx_data(conn, &iov, 1, ISCSI_HDR_LEN) <= 0) {
+		printk(KERN_ERR "rx_data() returned an error.\n");
+		goto new_sess_out;
+	}
+
+	iscsi_opcode = (buffer[0] & ISCSI_OPCODE_MASK);
+	if (!(iscsi_opcode & ISCSI_OP_LOGIN)) {
+		printk(KERN_ERR "First opcode is not login request,"
+			" failing login request.\n");
+		goto new_sess_out;
+	}
+
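+	/* Convert the Login Request header fields to host byte order. */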
+	pdu			= (struct iscsi_login_req *) buffer;
+	pdu->cid		= be16_to_cpu(pdu->cid);
+	pdu->tsih		= be16_to_cpu(pdu->tsih);
+	pdu->itt		= be32_to_cpu(pdu->itt);
+	pdu->cmdsn		= be32_to_cpu(pdu->cmdsn);
+	pdu->exp_statsn		= be32_to_cpu(pdu->exp_statsn);
+	/*
+	 * Used by iscsi_tx_login_rsp() for Login Response PDUs
+	 * when Status-Class != 0.
+	 */
+	conn->login_itt		= pdu->itt;
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE)
+		ip = &np->np_ipv6[0];
+	else {
+		memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+		iscsi_ntoa2(buf_ipv4, np->np_ipv4);
+		ip = &buf_ipv4[0];
+	}
+
+	spin_lock_bh(&np->np_thread_lock);
+	if ((atomic_read(&np->np_shutdown)) ||
+	    (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE)) {
+		spin_unlock_bh(&np->np_thread_lock);
+		printk(KERN_ERR "iSCSI Network Portal on %s:%hu currently not"
+			" active.\n", ip, np->np_port);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		goto new_sess_out;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	if (np->np_net_size == IPV6_ADDRESS_SPACE) {
+		memset(&sock_in6, 0, sizeof(struct sockaddr_in6));
+
+		if (conn->sock->ops->getname(conn->sock,
+				(struct sockaddr *)&sock_in6, &err, 1) < 0) {
+			printk(KERN_ERR "sock_ops->getname() failed.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			goto new_sess_out;
+		}
+#if 0
+		if (!(iscsi_ntop6((const unsigned char *)
+				&sock_in6.sin6_addr.in6_u,
+				(char *)&conn->ipv6_login_ip[0],
+				IPV6_ADDRESS_SPACE))) {
+			printk(KERN_ERR "iscsi_ntop6() failed\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			goto new_sess_out;
+		}
+#else
+		printk(KERN_INFO "Skipping iscsi_ntop6()\n");
+#endif
+		ip_init_buf = &conn->ipv6_login_ip[0];
+	} else {
+		memset(&sock_in, 0, sizeof(struct sockaddr_in));
+
+		if (conn->sock->ops->getname(conn->sock,
+				(struct sockaddr *)&sock_in, &err, 1) < 0) {
+			printk(KERN_ERR "sock_ops->getname() failed.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			goto new_sess_out;
+		}
+		memset(buf1_ipv4, 0, IPV4_BUF_SIZE);
+		conn->login_ip = ntohl(sock_in.sin_addr.s_addr);
+		conn->login_port = ntohs(sock_in.sin_port);
+		iscsi_ntoa2(buf1_ipv4, conn->login_ip);
+		ip_init_buf = &buf1_ipv4[0];
+	}
+
+	conn->network_transport = np->np_network_transport;
+	snprintf(conn->net_dev, ISCSI_NETDEV_NAME_SIZE, "%s", np->np_net_dev);
+
+	conn->conn_index = iscsi_get_new_index(ISCSI_CONNECTION_INDEX);
+	conn->local_ip = np->np_ipv4;
+	conn->local_port = np->np_port;
+
+	printk(KERN_INFO "Received iSCSI login request from %s on %s Network"
+			" Portal %s:%hu\n", ip_init_buf,
+		(conn->network_transport == ISCSI_TCP) ? "TCP" : "SCTP",
+			ip, np->np_port);
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_IN_LOGIN.\n");
+	conn->conn_state	= TARG_CONN_STATE_IN_LOGIN;
+
+	if (iscsi_login_check_initiator_version(conn, pdu->max_version,
+			pdu->min_version) < 0)
+		goto new_sess_out;
+
+	zero_tsih = (pdu->tsih == 0x0000);
+	if ((zero_tsih)) {
+		/*
+		 * This is the leading connection of a new session.
+		 * We wait until after authentication to check for
+		 * session reinstatement.
+		 */
+		if (iscsi_login_zero_tsih_s1(conn, buffer) < 0)
+			goto new_sess_out;
+	} else {
+		/*
+		 * Add a new connection to an existing session.
+		 * We check for a non-existent session in
+		 * iscsi_login_non_zero_tsih_s2() below based
+		 * on ISID/TSIH, but wait until after authentication
+		 * to check for connection reinstatement, etc.
+		 */
+		if (iscsi_login_non_zero_tsih_s1(conn, buffer) < 0)
+			goto new_sess_out;
+	}
+
+	/*
+	 * This will process the first login request, and call
+	 * iscsi_target_locate_portal(), and return a valid struct iscsi_login.
+	 */
+	login = iscsi_target_init_negotiation(np, conn, buffer);
+	if (!(login)) {
+		tpg = conn->tpg;
+		goto new_sess_out;
+	}
+
+	tpg = conn->tpg;
+	if (!(tpg)) {
+		printk(KERN_ERR "Unable to locate struct iscsi_conn->tpg\n");
+		goto new_sess_out;
+	}
+
+	if (zero_tsih) {
+		if (iscsi_login_zero_tsih_s2(conn) < 0) {
+			iscsi_target_nego_release(login, conn);
+			goto new_sess_out;
+		}
+	} else {
+		if (iscsi_login_non_zero_tsih_s2(conn, buffer) < 0) {
+			iscsi_target_nego_release(login, conn);
+			goto old_sess_out;
+		}
+	}
+
+	if (iscsi_target_start_negotiation(login, conn) < 0)
+		goto new_sess_out;
+
+	if (!SESS(conn)) {
+		printk(KERN_ERR "struct iscsi_conn session pointer is NULL!\n");
+		goto new_sess_out;
+	}
+
+	iscsi_stop_login_thread_timer(np);
+
+	if (signal_pending(current))
+		goto new_sess_out;
+
+	ret = iscsi_post_login_handler(np, conn, zero_tsih);
+
+	if (ret < 0)
+		goto new_sess_out;
+
+	core_deaccess_np(np, tpg);
+	tpg = NULL;
+	goto get_new_sock;
+
+new_sess_out:
+	printk(KERN_ERR "iSCSI Login negotiation failed.\n");
+	iscsi_collect_login_stats(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				  ISCSI_LOGIN_STATUS_INIT_ERR);
+	if (!zero_tsih || !SESS(conn))
+		goto old_sess_out;
+	if (SESS(conn)->se_sess)
+		transport_free_session(SESS(conn)->se_sess);
+	if (SESS(conn)->sess_ops)
+		kfree(SESS(conn)->sess_ops);
+	if (SESS(conn))
+		kmem_cache_free(lio_sess_cache, SESS(conn));
+old_sess_out:
+	iscsi_stop_login_thread_timer(np);
+	/*
+	 * If login negotiation fails check if the Time2Retain timer
+	 * needs to be restarted.
+	 */
+	if (!zero_tsih && SESS(conn)) {
+		spin_lock_bh(&SESS(conn)->conn_lock);
+		if (SESS(conn)->session_state == TARG_SESS_STATE_FAILED) {
+			struct se_portal_group *se_tpg =
+					&ISCSI_TPG_C(conn)->tpg_se_tpg;
+
+			atomic_set(&SESS(conn)->session_continuation, 0);
+			spin_unlock_bh(&SESS(conn)->conn_lock);
+			spin_lock_bh(&se_tpg->session_lock);
+			iscsi_start_time2retain_handler(SESS(conn));
+			spin_unlock_bh(&se_tpg->session_lock);
+		} else
+			spin_unlock_bh(&SESS(conn)->conn_lock);
+		iscsi_dec_session_usage_count(SESS(conn));
+	}
+
+	if (!IS_ERR(conn->conn_rx_hash.tfm))
+		crypto_free_hash(conn->conn_rx_hash.tfm);
+	if (!IS_ERR(conn->conn_tx_hash.tfm))
+		crypto_free_hash(conn->conn_tx_hash.tfm);
+
+	if (conn->conn_cpumask)
+		free_cpumask_var(conn->conn_cpumask);
+
+	kfree(conn->conn_ops);
+
+	if (conn->param_list) {
+		iscsi_release_param_list(conn->param_list);
+		conn->param_list = NULL;
+	}
+	if (conn->sock) {
+		if (conn->conn_flags & CONNFLAG_SCTP_STRUCT_FILE) {
+			kfree(conn->sock->file);
+			conn->sock->file = NULL;
+		}
+		sock_release(conn->sock);
+	}
+	kmem_cache_free(lio_conn_cache, conn);
+
+	if (tpg) {
+		core_deaccess_np(np, tpg);
+		tpg = NULL;
+	}
+
+	if (!(signal_pending(current)))
+		goto get_new_sock;
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (atomic_read(&np->np_shutdown)) {
+		spin_unlock_bh(&np->np_thread_lock);
+		up(&np->np_restart_sem);
+		down(&np->np_shutdown_sem);
+		goto out;
+	}
+	if (np->np_thread_state != ISCSI_NP_THREAD_SHUTDOWN) {
+		spin_unlock_bh(&np->np_thread_lock);
+		goto get_new_sock;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+out:
+	iscsi_stop_login_thread_timer(np);
+	spin_lock_bh(&np->np_thread_lock);
+	np->np_thread_state = ISCSI_NP_THREAD_EXIT;
+	np->np_thread = NULL;
+	spin_unlock_bh(&np->np_thread_lock);
+	up(&np->np_done_sem);
+	return 0;
+}
diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h
new file mode 100644
index 0000000..c6d56c2
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_login.h
@@ -0,0 +1,15 @@
+#ifndef ISCSI_TARGET_LOGIN_H
+#define ISCSI_TARGET_LOGIN_H
+
+extern int iscsi_login_setup_crypto(struct iscsi_conn *);
+extern int iscsi_check_for_session_reinstatement(struct iscsi_conn *);
+extern int iscsi_login_post_auth_non_zero_tsih(struct iscsi_conn *, u16, u32);
+extern int iscsi_target_login_thread(void *);
+extern int iscsi_login_disable_FIM_keys(struct iscsi_param_list *, struct iscsi_conn *);
+
+extern struct iscsi_global *iscsi_global;
+extern struct kmem_cache *lio_sess_cache;
+extern struct kmem_cache *lio_conn_cache;
+
+#endif   /*** ISCSI_TARGET_LOGIN_H ***/
+
diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
new file mode 100644
index 0000000..5588a3b
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nego.c
@@ -0,0 +1,1116 @@
+/*******************************************************************************
+ * This file contains main functions related to iSCSI Parameter negotiation.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/ctype.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_tpg.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_target_login.h"
+#include "iscsi_target_nego.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+#include "iscsi_auth_chap.h"
+
+#define MAX_LOGIN_PDUS  7
+#define TEXT_LEN	4096
+
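+/*
+ * Login text payloads carry NULL-terminated key=value pairs; the helpers
+ * below convert the NULLs to ';' separators so the buffer can be handled
+ * as a single string.
+ */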
+void convert_null_to_semi(char *buf, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		if (buf[i] == '\0')
+			buf[i] = ';';
+}
+
+int strlen_semi(char *buf)
+{
+	int i = 0;
+
+	while (buf[i] != '\0') {
+		if (buf[i] == ';')
+			return i;
+		i++;
+	}
+
+	return -1;
+}
+
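+/*
+ * Copy the value of "pattern" from in_buf into out_buf (up to the next ';'),
+ * noting in *type whether the value was encoded as hex or decimal.
+ */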
+int extract_param(
+	const char *in_buf,
+	const char *pattern,
+	unsigned int max_length,
+	char *out_buf,
+	unsigned char *type)
+{
+	char *ptr;
+	int len;
+
+	if (!in_buf || !pattern || !out_buf || !type)
+		return -1;
+
+	ptr = strstr(in_buf, pattern);
+	if (!ptr)
+		return -1;
+
+	ptr = strstr(ptr, "=");
+	if (!ptr)
+		return -1;
+
+	ptr += 1;
+	if (*ptr == '0' && (*(ptr+1) == 'x' || *(ptr+1) == 'X')) {
+		ptr += 2; /* skip 0x */
+		*type = HEX;
+	} else
+		*type = DECIMAL;
+
+	len = strlen_semi(ptr);
+	if (len < 0)
+		return -1;
+
+	if (len > max_length) {
+		printk(KERN_ERR "Length of input: %d exceeds max_length:"
+			" %d\n", len, max_length);
+		return -1;
+	}
+	memcpy(out_buf, ptr, len);
+	out_buf[len] = '\0';
+
+	return 0;
+}
+
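+/*
+ * Dispatch to the negotiated authentication method: returns 1 when
+ * AuthMethod=None was used, 2 for unsupported methods, and otherwise the
+ * result of chap_main_loop() (or srp_main_loop() when CANSRP is defined).
+ */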
+static u32 iscsi_handle_authentication(
+	struct iscsi_conn *conn,
+	char *in_buf,
+	char *out_buf,
+	int in_length,
+	int *out_length,
+	unsigned char *authtype)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_auth *auth;
+	struct iscsi_node_acl *iscsi_nacl;
+	struct se_node_acl *se_nacl;
+
+	if (!(SESS_OPS(sess)->SessionType)) {
+		/*
+		 * For SessionType=Normal
+		 */
+		se_nacl = SESS(conn)->se_sess->se_node_acl;
+		if (!(se_nacl)) {
+			printk(KERN_ERR "Unable to locate struct se_node_acl for"
+					" CHAP auth\n");
+			return -1;
+		}
+		iscsi_nacl = container_of(se_nacl, struct iscsi_node_acl,
+				se_node_acl);
+		if (!(iscsi_nacl)) {
+			printk(KERN_ERR "Unable to locate struct iscsi_node_acl for"
+					" CHAP auth\n");
+			return -1;
+		}
+
+		auth = ISCSI_NODE_AUTH(iscsi_nacl);
+	} else {
+		/*
+		 * For SessionType=Discovery
+		 */
+		auth = &iscsi_global->discovery_acl.node_auth;
+	}
+
+	if (strstr("CHAP", authtype))
+		strcpy(SESS(conn)->auth_type, "CHAP");
+	else
+		strcpy(SESS(conn)->auth_type, NONE);
+
+	if (strstr("None", authtype))
+		return 1;
+#ifdef CANSRP
+	else if (strstr("SRP", authtype))
+		return srp_main_loop(conn, auth, in_buf, out_buf,
+				&in_length, out_length);
+#endif
+	else if (strstr("CHAP", authtype))
+		return chap_main_loop(conn, auth, in_buf, out_buf,
+				&in_length, out_length);
+	else if (strstr("SPKM1", authtype))
+		return 2;
+	else if (strstr("SPKM2", authtype))
+		return 2;
+	else if (strstr("KRB5", authtype))
+		return 2;
+	else
+		return 2;
+}
+
+static void iscsi_remove_failed_auth_entry(struct iscsi_conn *conn)
+{
+	kfree(conn->auth_protocol);
+}
+
+/*	iscsi_target_check_login_request():
+ *
+ *	Sanity check a received Login Request PDU against RFC 3720 rules
+ *	and against the values seen in the first request of this login.
+ */
+static int iscsi_target_check_login_request(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	int req_csg, req_nsg, rsp_csg, rsp_nsg;
+	u32 payload_length;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	switch (login_req->opcode & ISCSI_OPCODE_MASK) {
+	case ISCSI_OP_LOGIN:
+		break;
+	default:
+		printk(KERN_ERR "Received unknown opcode 0x%02x.\n",
+				login_req->opcode & ISCSI_OPCODE_MASK);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if ((login_req->flags & ISCSI_FLAG_LOGIN_CONTINUE) &&
+	    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT)) {
+		printk(KERN_ERR "Login request has both ISCSI_FLAG_LOGIN_CONTINUE"
+			" and ISCSI_FLAG_LOGIN_TRANSIT set, protocol error.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
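+	/* CSG occupies bits 2-3 and NSG bits 0-1 of the login flags byte */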
+	req_csg = (login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2;
+	rsp_csg = (login_rsp->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2;
+	req_nsg = (login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK);
+	rsp_nsg = (login_rsp->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK);
+
+	if (req_csg != login->current_stage) {
+		printk(KERN_ERR "Initiator unexpectedly changed login stage"
+			" from %d to %d, login failed.\n", login->current_stage,
+			req_csg);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if ((req_nsg == 2) || (req_csg >= 2) ||
+	   ((login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT) &&
+	    (req_nsg <= req_csg))) {
+		printk(KERN_ERR "Illegal login_req->flags Combination, CSG: %d,"
+			" NSG: %d, ISCSI_FLAG_LOGIN_TRANSIT: %d.\n", req_csg,
+			req_nsg, (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT));
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if ((login_req->max_version != login->version_max) ||
+	    (login_req->min_version != login->version_min)) {
+		printk(KERN_ERR "Login request changed Version Max/Min"
+			" unexpectedly to 0x%02x/0x%02x, protocol error\n",
+			login_req->max_version, login_req->min_version);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if (memcmp(login_req->isid, login->isid, 6) != 0) {
+		printk(KERN_ERR "Login request changed ISID unexpectedly,"
+				" protocol error.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if (login_req->itt != login->init_task_tag) {
+		printk(KERN_ERR "Login request changed ITT unexpectedly to"
+			" 0x%08x, protocol error.\n", login_req->itt);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_INIT_ERR);
+		return -1;
+	}
+
+	if (payload_length > MAX_KEY_VALUE_PAIRS) {
+		printk(KERN_ERR "Login request payload exceeds default"
+			" MaxRecvDataSegmentLength: %u, protocol error.\n",
+				MAX_KEY_VALUE_PAIRS);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_target_check_first_request():
+ *
+ *	Verify that SessionType and InitiatorName were received in the first
+ *	Login Request, and that InitiatorName matches the existing session's
+ *	ACL for non-leading connections.
+ */
+static int iscsi_target_check_first_request(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	struct iscsi_param *param = NULL;
+	struct se_node_acl *se_nacl;
+
+	login->first_request = 0;
+
+	list_for_each_entry(param, &conn->param_list->param_list, p_list) {
+		if (!strncmp(param->name, SESSIONTYPE, 11)) {
+			if (!IS_PSTATE_ACCEPTOR(param)) {
+				printk(KERN_ERR "SessionType key not received"
+					" in first login request.\n");
+				iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+				return -1;
+			}
+			if (!(strncmp(param->value, DISCOVERY, 9)))
+				return 0;
+		}
+
+		if (!strncmp(param->name, INITIATORNAME, 13)) {
+			if (!IS_PSTATE_ACCEPTOR(param)) {
+				if (!login->leading_connection)
+					continue;
+
+				printk(KERN_ERR "InitiatorName key not received"
+					" in first login request.\n");
+				iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+				return -1;
+			}
+
+			/*
+			 * For non-leading connections, double check that the
+			 * received InitiatorName matches the existing session's
+			 * struct iscsi_node_acl.
+			 */
+			if (!login->leading_connection) {
+				se_nacl = SESS(conn)->se_sess->se_node_acl;
+				if (!(se_nacl)) {
+					printk(KERN_ERR "Unable to locate"
+						" struct se_node_acl\n");
+					iscsi_tx_login_rsp(conn,
+							ISCSI_STATUS_CLS_INITIATOR_ERR,
+							ISCSI_LOGIN_STATUS_TGT_NOT_FOUND);
+					return -1;
+				}
+
+				if (strcmp(param->value,
+						se_nacl->initiatorname)) {
+					printk(KERN_ERR "Incorrect"
+						" InitiatorName: %s for this"
+						" iSCSI Initiator Node.\n",
+						param->value);
+					iscsi_tx_login_rsp(conn,
+							ISCSI_STATUS_CLS_INITIATOR_ERR,
+							ISCSI_LOGIN_STATUS_TGT_NOT_FOUND);
+					return -1;
+				}
+			}
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_target_do_tx_login_io():
+ *
+ *	Build and transmit a Login Response PDU carrying the current
+ *	negotiation payload.
+ */
+static int iscsi_target_do_tx_login_io(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	__u32 padding = 0;
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_login_rsp *login_rsp;
+
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+
+	login_rsp->opcode		= ISCSI_OP_LOGIN_RSP;
+	hton24(login_rsp->dlength, login->rsp_length);
+	memcpy(login_rsp->isid, login->isid, 6);
+	login_rsp->tsih			= cpu_to_be16(login->tsih);
+	login_rsp->itt			= cpu_to_be32(login->init_task_tag);
+	login_rsp->statsn		= cpu_to_be32(conn->stat_sn++);
+	login_rsp->exp_cmdsn		= cpu_to_be32(SESS(conn)->exp_cmd_sn);
+	login_rsp->max_cmdsn		= cpu_to_be32(SESS(conn)->max_cmd_sn);
+
+	TRACE(TRACE_LOGIN, "Sending Login Response, Flags: 0x%02x, ITT: 0x%08x,"
+		" ExpCmdSN; 0x%08x, MaxCmdSN: 0x%08x, StatSN: 0x%08x, Length:"
+		" %u\n", login_rsp->flags, ntohl(login_rsp->itt),
+		ntohl(login_rsp->exp_cmdsn), ntohl(login_rsp->max_cmdsn),
+		ntohl(login_rsp->statsn), login->rsp_length);
+
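+	/* iSCSI PDU data segments are padded out to a 4-byte boundary */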
+	padding = ((-login->rsp_length) & 3);
+
+	if (iscsi_login_tx_data(
+			conn,
+			login->rsp,
+			login->rsp_buf,
+			login->rsp_length + padding,
+			TARGET) < 0)
+		return -1;
+
+	login->rsp_length		= 0;
+	login_rsp->tsih			= be16_to_cpu(login_rsp->tsih);
+	login_rsp->itt			= be32_to_cpu(login_rsp->itt);
+	login_rsp->statsn		= be32_to_cpu(login_rsp->statsn);
+	spin_lock(&sess->cmdsn_lock);
+	login_rsp->exp_cmdsn		= be32_to_cpu(sess->exp_cmd_sn);
+	login_rsp->max_cmdsn		= be32_to_cpu(sess->max_cmd_sn);
+	spin_unlock(&sess->cmdsn_lock);
+
+	return 0;
+}
+
+/*	iscsi_target_do_rx_login_io():
+ *
+ *
+ */
+static int iscsi_target_do_rx_login_io(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	u32 padding = 0, payload_length;
+	struct iscsi_login_req *login_req;
+
+	if (iscsi_login_rx_data(conn, login->req, ISCSI_HDR_LEN, TARGET) < 0)
+		return -1;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	payload_length			= ntoh24(login_req->dlength);
+	login_req->tsih			= be16_to_cpu(login_req->tsih);
+	login_req->itt			= be32_to_cpu(login_req->itt);
+	login_req->cid			= be16_to_cpu(login_req->cid);
+	login_req->cmdsn		= be32_to_cpu(login_req->cmdsn);
+	login_req->exp_statsn		= be32_to_cpu(login_req->exp_statsn);
+
+	TRACE(TRACE_LOGIN, "Got Login Command, Flags 0x%02x, ITT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, CID: %hu, Length: %u\n",
+		 login_req->flags, login_req->itt, login_req->cmdsn,
+		 login_req->exp_statsn, login_req->cid, payload_length);
+
+	if (iscsi_target_check_login_request(conn, login) < 0)
+		return -1;
+
+	padding = ((-payload_length) & 3);
+	memset(login->req_buf, 0, MAX_KEY_VALUE_PAIRS);
+
+	if (iscsi_login_rx_data(
+			conn,
+			login->req_buf,
+			payload_length + padding,
+			TARGET) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_target_do_login_io():
+ *
+ *
+ */
+static int iscsi_target_do_login_io(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	if (iscsi_target_do_tx_login_io(conn, login) < 0)
+		return -1;
+
+	if (iscsi_target_do_rx_login_io(conn, login) < 0)
+		return -1;
+
+	return 0;
+}
+
+static int iscsi_target_get_initial_payload(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	u32 padding = 0, payload_length;
+	struct iscsi_login_req *login_req;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	payload_length = ntoh24(login_req->dlength);
+
+	TRACE(TRACE_LOGIN, "Got Login Command, Flags 0x%02x, ITT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, Length: %u\n",
+		login_req->flags, login_req->itt, login_req->cmdsn,
+		login_req->exp_statsn, payload_length);
+
+	if (iscsi_target_check_login_request(conn, login) < 0)
+		return -1;
+
+	padding = ((-payload_length) & 3);
+
+	if (iscsi_login_rx_data(
+			conn,
+			login->req_buf,
+			payload_length + padding,
+			TARGET) < 0)
+		return -1;
+
+	return 0;
+}
+
+/*	iscsi_target_check_for_existing_instances():
+ *
+ *	NOTE: We check for existing sessions or connections AFTER the initiator
+ *	has been successfully authenticated in order to protect against faked
+ *	ISID/TSIH combinations.
+ */
+static int iscsi_target_check_for_existing_instances(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	if (login->checked_for_existing)
+		return 0;
+
+	login->checked_for_existing = 1;
+
+	if (!login->tsih)
+		return iscsi_check_for_session_reinstatement(conn);
+	else
+		return iscsi_login_post_auth_non_zero_tsih(conn, login->cid,
+				login->initial_exp_statsn);
+}
+
+/*	iscsi_target_do_authentication():
+ *
+ *
+ */
+static int iscsi_target_do_authentication(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	int authret;
+	u32 payload_length;
+	struct iscsi_param *param;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	param = iscsi_find_param_from_key(AUTHMETHOD, conn->param_list);
+	if (!(param))
+		return -1;
+
+	authret = iscsi_handle_authentication(
+			conn,
+			login->req_buf,
+			login->rsp_buf,
+			payload_length,
+			&login->rsp_length,
+			param->value);
+	switch (authret) {
+	case 0:
+		printk(KERN_INFO "Received OK response"
+		" from LIO Authentication, continuing.\n");
+		break;
+	case 1:
+		printk(KERN_INFO "iSCSI security negotiation"
+			" completed sucessfully.\n");
+		login->auth_complete = 1;
+		if ((login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE1) &&
+		    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT)) {
+			login_rsp->flags |= (ISCSI_FLAG_LOGIN_NEXT_STAGE1 |
+					     ISCSI_FLAG_LOGIN_TRANSIT);
+			login->current_stage = 1;
+		}
+		return iscsi_target_check_for_existing_instances(
+				conn, login);
+	case 2:
+		printk(KERN_ERR "Security negotiation"
+			" failed.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_AUTH_FAILED);
+		return -1;
+	default:
+		printk(KERN_ERR "Received unknown error %d from LIO"
+				" Authentication\n", authret);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_TARGET_ERROR);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_target_handle_csg_zero():
+ *
+ *
+ */
+static int iscsi_target_handle_csg_zero(
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	int ret;
+	u32 payload_length;
+	struct iscsi_param *param;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	param = iscsi_find_param_from_key(AUTHMETHOD, conn->param_list);
+	if (!(param))
+		return -1;
+
+	ret = iscsi_decode_text_input(
+			PHASE_SECURITY|PHASE_DECLARATIVE,
+			SENDER_INITIATOR|SENDER_RECEIVER,
+			login->req_buf,
+			payload_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (ret > 0) {
+		if (login->auth_complete) {
+			printk(KERN_ERR "Initiator has already been"
+				" successfully authenticated, but is still"
+				" sending %s keys.\n", param->value);
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_INIT_ERR);
+			return -1;
+		}
+
+		goto do_auth;
+	}
+
+	if (login->first_request)
+		if (iscsi_target_check_first_request(conn, login) < 0)
+			return -1;
+
+	ret = iscsi_encode_text_output(
+			PHASE_SECURITY|PHASE_DECLARATIVE,
+			SENDER_TARGET,
+			login->rsp_buf,
+			&login->rsp_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (!(iscsi_check_negotiated_keys(conn->param_list))) {
+		if (ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication &&
+		    !strncmp(param->value, NONE, 4)) {
+			printk(KERN_ERR "Initiator sent AuthMethod=None but"
+				" Target is enforcing iSCSI Authentication,"
+					" login failed.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+					ISCSI_LOGIN_STATUS_AUTH_FAILED);
+			return -1;
+		}
+
+		if (ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication &&
+		    !login->auth_complete)
+			return 0;
+
+		if (strncmp(param->value, NONE, 4) && !login->auth_complete)
+			return 0;
+
+		if ((login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE1) &&
+		    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT)) {
+			login_rsp->flags |= ISCSI_FLAG_LOGIN_NEXT_STAGE1 |
+					    ISCSI_FLAG_LOGIN_TRANSIT;
+			login->current_stage = 1;
+		}
+	}
+
+	return 0;
+do_auth:
+	return iscsi_target_do_authentication(conn, login);
+}
+
+/*	iscsi_target_handle_csg_one():
+ *
+ *
+ */
+static int iscsi_target_handle_csg_one(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	int ret;
+	u32 payload_length;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	ret = iscsi_decode_text_input(
+			PHASE_OPERATIONAL|PHASE_DECLARATIVE,
+			SENDER_INITIATOR|SENDER_RECEIVER,
+			login->req_buf,
+			payload_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (login->first_request)
+		if (iscsi_target_check_first_request(conn, login) < 0)
+			return -1;
+
+	if (iscsi_target_check_for_existing_instances(conn, login) < 0)
+		return -1;
+
+	ret = iscsi_encode_text_output(
+			PHASE_OPERATIONAL|PHASE_DECLARATIVE,
+			SENDER_TARGET,
+			login->rsp_buf,
+			&login->rsp_length,
+			conn->param_list);
+	if (ret < 0)
+		return -1;
+
+	if (!(login->auth_complete) &&
+	      ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication) {
+		printk(KERN_ERR "Initiator is requesting CSG: 1, has not been"
+			 " successfully authenticated, and the Target is"
+			" enforcing iSCSI Authentication, login failed.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_AUTH_FAILED);
+		return -1;
+	}
+
+	if (!(iscsi_check_negotiated_keys(conn->param_list)))
+		if ((login_req->flags & ISCSI_FLAG_LOGIN_NEXT_STAGE3) &&
+		    (login_req->flags & ISCSI_FLAG_LOGIN_TRANSIT))
+			login_rsp->flags |= ISCSI_FLAG_LOGIN_NEXT_STAGE3 |
+					    ISCSI_FLAG_LOGIN_TRANSIT;
+
+	return 0;
+}
+
+/*	iscsi_target_do_login():
+ *
+ *
+ */
+static int iscsi_target_do_login(struct iscsi_conn *conn, struct iscsi_login *login)
+{
+	int pdu_count = 0;
+	struct iscsi_login_req *login_req;
+	struct iscsi_login_rsp *login_rsp;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_login_rsp *) login->rsp;
+
+	while (1) {
+		if (++pdu_count > MAX_LOGIN_PDUS) {
+			printk(KERN_ERR "MAX_LOGIN_PDUS count reached.\n");
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+					ISCSI_LOGIN_STATUS_TARGET_ERROR);
+			return -1;
+		}
+
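+		/*
+		 * CSG 0 is SecurityNegotiation and CSG 1 is
+		 * LoginOperationalNegotiation (RFC 3720 Login stages).
+		 */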
+		switch ((login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2) {
+		case 0:
+			login_rsp->flags |= (0 & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK);
+			if (iscsi_target_handle_csg_zero(conn, login) < 0)
+				return -1;
+			break;
+		case 1:
+			login_rsp->flags |= ISCSI_FLAG_LOGIN_CURRENT_STAGE1;
+			if (iscsi_target_handle_csg_one(conn, login) < 0)
+				return -1;
+			if (login_rsp->flags & ISCSI_FLAG_LOGIN_TRANSIT) {
+				login->tsih = SESS(conn)->tsih;
+				if (iscsi_target_do_tx_login_io(conn,
+						login) < 0)
+					return -1;
+				return 0;
+			}
+			break;
+		default:
+			printk(KERN_ERR "Illegal CSG: %d received from"
+				" Initiator, protocol error.\n",
+				(login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK)
+				>> 2);
+			break;
+		}
+
+		if (iscsi_target_do_login_io(conn, login) < 0)
+			return -1;
+
+		if (login_rsp->flags & ISCSI_FLAG_LOGIN_TRANSIT) {
+			login_rsp->flags &= ~ISCSI_FLAG_LOGIN_TRANSIT;
+			login_rsp->flags &= ~ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK;
+		}
+	}
+
+	return 0;
+}
+
+static void iscsi_initiatorname_tolower(
+	char *param_buf)
+{
+	char *c;
+	u32 iqn_size = strlen(param_buf), i;
+
+	for (i = 0; i < iqn_size; i++) {
+		c = (char *)&param_buf[i];
+		if (!(isupper(*c)))
+			continue;
+
+		*c = tolower(*c);
+	}
+}
+
+/*
+ * Processes the first Login Request.
+ */
+static int iscsi_target_locate_portal(
+	struct iscsi_np *np,
+	struct iscsi_conn *conn,
+	struct iscsi_login *login)
+{
+	char *i_buf = NULL, *s_buf = NULL, *t_buf = NULL;
+	char *tmpbuf, *start = NULL, *end = NULL, *key, *value;
+	struct iscsi_session *sess = conn->sess;
+	struct iscsi_tiqn *tiqn;
+	struct iscsi_login_req *login_req;
+	struct iscsi_targ_login_rsp *login_rsp;
+	u32 payload_length;
+	int sessiontype = 0, ret = 0;
+
+	login_req = (struct iscsi_login_req *) login->req;
+	login_rsp = (struct iscsi_targ_login_rsp *) login->rsp;
+	payload_length = ntoh24(login_req->dlength);
+
+	login->first_request	= 1;
+	login->leading_connection = (!login_req->tsih) ? 1 : 0;
+	login->current_stage	=
+		(login_req->flags & ISCSI_FLAG_LOGIN_CURRENT_STAGE_MASK) >> 2;
+	login->version_min	= login_req->min_version;
+	login->version_max	= login_req->max_version;
+	memcpy(login->isid, login_req->isid, 6);
+	login->cmd_sn		= login_req->cmdsn;
+	login->init_task_tag	= login_req->itt;
+	login->initial_exp_statsn = login_req->exp_statsn;
+	login->cid		= login_req->cid;
+	login->tsih		= login_req->tsih;
+
+	if (iscsi_target_get_initial_payload(conn, login) < 0)
+		return -1;
+
+	tmpbuf = kzalloc(payload_length + 1, GFP_KERNEL);
+	if (!(tmpbuf)) {
+		printk(KERN_ERR "Unable to allocate memory for tmpbuf.\n");
+		return -1;
+	}
+
+	memcpy(tmpbuf, login->req_buf, payload_length);
+	tmpbuf[payload_length] = '\0';
+	start = tmpbuf;
+	end = (start + payload_length);
+
+	/*
+	 * Locate the initial keys expected from the Initiator node in
+	 * the first login request in order to progress with the login phase.
+	 */
+	while (start < end) {
+		if (iscsi_extract_key_value(start, &key, &value) < 0) {
+			ret = -1;
+			goto out;
+		}
+
+		if (!(strncmp(key, "InitiatorName", 13)))
+			i_buf = value;
+		else if (!(strncmp(key, "SessionType", 11)))
+			s_buf = value;
+		else if (!(strncmp(key, "TargetName", 10)))
+			t_buf = value;
+
+		start += strlen(key) + strlen(value) + 2;
+	}
+
+	/*
+	 * See RFC 3720, Section 5.3 (Login Phase).
+	 */
+	if (!i_buf) {
+		printk(KERN_ERR "InitiatorName key not received"
+			" in first login request.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+			ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+		ret = -1;
+		goto out;
+	}
+	/*
+	 * Convert the incoming InitiatorName to lowercase following
+	 * RFC-3720 3.2.6.1. section c) that says that iSCSI IQNs
+	 * are NOT case sensitive.
+	 */
+	iscsi_initiatorname_tolower(i_buf);
+
+	if (!s_buf) {
+		if (!login->leading_connection)
+			goto get_target;
+
+		printk(KERN_ERR "SessionType key not received"
+			" in first login request.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+			ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+		ret = -1;
+		goto out;
+	}
+
+	/*
+	 * Use default portal group for discovery sessions.
+	 */
+	sessiontype = strncmp(s_buf, DISCOVERY, 9);
+	if (!(sessiontype)) {
+		conn->tpg = iscsi_global->discovery_tpg;
+		if (!login->leading_connection)
+			goto get_target;
+
+		SESS_OPS(sess)->SessionType = 1;
+		/*
+		 * Setup crc32c modules from libcrypto
+		 */
+		if (iscsi_login_setup_crypto(conn) < 0) {
+			printk(KERN_ERR "iscsi_login_setup_crypto() failed\n");
+			ret = -1;
+			goto out;
+		}
+		/*
+		 * Serialize access across the discovery struct iscsi_portal_group to
+		 * process login attempt.
+		 */
+		if (core_access_np(np, conn->tpg) < 0) {
+			iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+			ret = -1;
+			goto out;
+		}
+		ret = 0;
+		goto out;
+	}
+
+get_target:
+	if (!t_buf) {
+		printk(KERN_ERR "TargetName key not received"
+			" in first login request while"
+			" SessionType=Normal.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+			ISCSI_LOGIN_STATUS_MISSING_FIELDS);
+		ret = -1;
+		goto out;
+	}
+
+	/*
+	 * Locate Target IQN from Storage Node.
+	 */
+	tiqn = core_get_tiqn_for_login(t_buf);
+	if (!(tiqn)) {
+		printk(KERN_ERR "Unable to locate Target IQN: %s in"
+			" Storage Node\n", t_buf);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		ret = -1;
+		goto out;
+	}
+	printk(KERN_INFO "Located Storage Object: %s\n", tiqn->tiqn);
+
+	/*
+	 * Locate Target Portal Group from Storage Node.
+	 */
+	conn->tpg = core_get_tpg_from_np(tiqn, np);
+	if (!(conn->tpg)) {
+		printk(KERN_ERR "Unable to locate Target Portal Group"
+				" on %s\n", tiqn->tiqn);
+		core_put_tiqn_for_login(tiqn);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		ret = -1;
+		goto out;
+	}
+	printk(KERN_INFO "Located Portal Group Object: %hu\n", conn->tpg->tpgt);
+	/*
+	 * Setup crc32c modules from libcrypto
+	 */
+	if (iscsi_login_setup_crypto(conn) < 0) {
+		printk(KERN_ERR "iscsi_login_setup_crypto() failed\n");
+		ret = -1;
+		goto out;
+	}
+	/*
+	 * Serialize access across the struct iscsi_portal_group to
+	 * process login attempt.
+	 */
+	if (core_access_np(np, conn->tpg) < 0) {
+		core_put_tiqn_for_login(tiqn);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
+		ret = -1;
+		conn->tpg = NULL;
+		goto out;
+	}
+
+	/*
+	 * SESS(conn)->node_acl will be set when the referenced
+	 * struct iscsi_session is located from received ISID+TSIH in
+	 * iscsi_login_non_zero_tsih_s2().
+	 */
+	if (!login->leading_connection) {
+		ret = 0;
+		goto out;
+	}
+
+	/*
+	 * This value is required in iscsi_login_zero_tsih_s2()
+	 */
+	SESS_OPS(sess)->SessionType = 0;
+
+	/*
+	 * Locate incoming Initiator IQN reference from Storage Node.
+	 */
+	sess->se_sess->se_node_acl = core_tpg_check_initiator_node_acl(
+			&conn->tpg->tpg_se_tpg, i_buf);
+	if (!(sess->se_sess->se_node_acl)) {
+		printk(KERN_ERR "iSCSI Initiator Node: %s is not authorized to"
+			" access iSCSI target portal group: %hu.\n",
+				i_buf, conn->tpg->tpgt);
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
+				ISCSI_LOGIN_STATUS_TGT_FORBIDDEN);
+		ret = -1;
+		goto out;
+	}
+
+	ret = 0;
+out:
+	kfree(tmpbuf);
+	return ret;
+}
+
+/*	iscsi_target_init_negotiation():
+ *
+ *
+ */
+struct iscsi_login *iscsi_target_init_negotiation(
+	struct iscsi_np *np,
+	struct iscsi_conn *conn,
+	char *login_pdu)
+{
+	struct iscsi_login *login;
+
+	login = kzalloc(sizeof(struct iscsi_login), GFP_KERNEL);
+	if (!(login)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_login.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		return NULL;
+	}
+
+	login->req = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL);
+	if (!(login->req)) {
+		printk(KERN_ERR "Unable to allocate memory for Login Request.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		goto out;
+	}
+	memcpy(login->req, login_pdu, ISCSI_HDR_LEN);
+
+	login->req_buf = kzalloc(MAX_KEY_VALUE_PAIRS, GFP_KERNEL);
+	if (!(login->req_buf)) {
+		printk(KERN_ERR "Unable to allocate memory for response buffer.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		goto out;
+	}
+	/*
+	 * SessionType: Discovery
+	 *
+	 * 	Locates Default Portal
+	 *
+	 * SessionType: Normal
+	 *
+	 * 	Locates Target Portal from NP -> Target IQN
+	 */
+	if (iscsi_target_locate_portal(np, conn, login) < 0) {
+		printk(KERN_ERR "iSCSI Login negotiation failed.\n");
+		goto out;
+	}
+
+	return login;
+out:
+	kfree(login->req);
+	kfree(login->req_buf);
+	kfree(login);
+
+	return NULL;
+}
+
+int iscsi_target_start_negotiation(
+	struct iscsi_login *login,
+	struct iscsi_conn *conn)
+{
+	int ret = -1;
+
+	login->rsp = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL);
+	if (!(login->rsp)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" Login Response.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		ret = -1;
+		goto out;
+	}
+
+	login->rsp_buf = kzalloc(MAX_KEY_VALUE_PAIRS, GFP_KERNEL);
+	if (!(login->rsp_buf)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" request buffer.\n");
+		iscsi_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+		ret = -1;
+		goto out;
+	}
+
+	ret = iscsi_target_do_login(conn, login);
+out:
+	if (ret != 0)
+		iscsi_remove_failed_auth_entry(conn);
+
+	iscsi_target_nego_release(login, conn);
+	return ret;
+}
+
+void iscsi_target_nego_release(
+	struct iscsi_login *login,
+	struct iscsi_conn *conn)
+{
+	kfree(login->req);
+	kfree(login->rsp);
+	kfree(login->req_buf);
+	kfree(login->rsp_buf);
+	kfree(login);
+}
diff --git a/drivers/target/iscsi/iscsi_target_nego.h b/drivers/target/iscsi/iscsi_target_nego.h
new file mode 100644
index 0000000..75deb10
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_nego.h
@@ -0,0 +1,20 @@
+#ifndef ISCSI_TARGET_NEGO_H
+#define ISCSI_TARGET_NEGO_H
+
+#define DECIMAL         0
+#define HEX             1
+
+extern void convert_null_to_semi(char *, int);
+extern int extract_param(const char *, const char *, unsigned int, char *,
+		unsigned char *);
+extern struct iscsi_login *iscsi_target_init_negotiation(
+		struct iscsi_np *, struct iscsi_conn *, char *);
+extern int iscsi_target_start_negotiation(
+		struct iscsi_login *, struct iscsi_conn *);
+extern void iscsi_target_nego_release(
+		struct iscsi_login *, struct iscsi_conn *);
+
+extern struct iscsi_global *iscsi_global;
+
+#endif /* ISCSI_TARGET_NEGO_H */
+
diff --git a/drivers/target/iscsi/iscsi_thread_queue.c b/drivers/target/iscsi/iscsi_thread_queue.c
new file mode 100644
index 0000000..d27b090
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_thread_queue.c
@@ -0,0 +1,635 @@
+/*******************************************************************************
+ * This file contains the iSCSI Login Thread and Thread Queue functions.
+ *
+ * Copyright (c) 2003 PyX Technologies, Inc.
+ * Copyright (c) 2006-2007 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/bitmap.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_thread_queue.h"
+
+/*	iscsi_add_ts_to_active_list():
+ *
+ *
+ */
+static void iscsi_add_ts_to_active_list(struct se_thread_set *ts)
+{
+	spin_lock(&iscsi_global->active_ts_lock);
+	list_add_tail(&ts->ts_list, &iscsi_global->active_ts_list);
+	iscsi_global->active_ts++;
+	spin_unlock(&iscsi_global->active_ts_lock);
+}
+
+/*	iscsi_add_ts_to_inactive_list():
+ *
+ *
+ */
+void iscsi_add_ts_to_inactive_list(struct se_thread_set *ts)
+{
+	spin_lock(&iscsi_global->inactive_ts_lock);
+	list_add_tail(&ts->ts_list, &iscsi_global->inactive_ts_list);
+	iscsi_global->inactive_ts++;
+	spin_unlock(&iscsi_global->inactive_ts_lock);
+}
+
+/*	iscsi_del_ts_from_active_list():
+ *
+ *
+ */
+static void iscsi_del_ts_from_active_list(struct se_thread_set *ts)
+{
+	spin_lock(&iscsi_global->active_ts_lock);
+	list_del(&ts->ts_list);
+	iscsi_global->active_ts--;
+	spin_unlock(&iscsi_global->active_ts_lock);
+
+	if (ts->stop_active)
+		up(&ts->stop_active_sem);
+}
+
+/*	iscsi_get_ts_from_inactive_list():
+ *
+ *
+ */
+static struct se_thread_set *iscsi_get_ts_from_inactive_list(void)
+{
+	struct se_thread_set *ts;
+
+	spin_lock(&iscsi_global->inactive_ts_lock);
+	if (list_empty(&iscsi_global->inactive_ts_list)) {
+		spin_unlock(&iscsi_global->inactive_ts_lock);
+		return NULL;
+	}
+
+	ts = list_first_entry(&iscsi_global->inactive_ts_list,
+				struct se_thread_set, ts_list);
+
+	list_del(&ts->ts_list);
+	iscsi_global->inactive_ts--;
+	spin_unlock(&iscsi_global->inactive_ts_lock);
+
+	return ts;
+}
+
+/*	iscsi_allocate_thread_sets():
+ *
+ *
+ */
+int iscsi_allocate_thread_sets(u32 thread_pair_count)
+{
+	int allocated_thread_pair_count = 0, i, thread_id;
+	struct se_thread_set *ts = NULL;
+
+	for (i = 0; i < thread_pair_count; i++) {
+		ts = kzalloc(sizeof(struct se_thread_set), GFP_KERNEL);
+		if (!(ts)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+					" thread set.\n");
+			return allocated_thread_pair_count;
+		}
+		/*
+		 * Locate the next available region in the thread_set_bitmap
+		 */
+		spin_lock(&iscsi_global->ts_bitmap_lock);
+		thread_id = bitmap_find_free_region(iscsi_global->ts_bitmap,
+				iscsi_global->ts_bitmap_count, get_order(1));
+		spin_unlock(&iscsi_global->ts_bitmap_lock);
+		if (thread_id < 0) {
+			printk(KERN_ERR "bitmap_find_free_region() failed for"
+				" thread_set_bitmap\n");
+			kfree(ts);
+			return allocated_thread_pair_count;
+		}
+
+		ts->thread_id = thread_id;
+		ts->status = ISCSI_THREAD_SET_FREE;
+		INIT_LIST_HEAD(&ts->ts_list);
+		spin_lock_init(&ts->ts_state_lock);
+		sema_init(&ts->stop_active_sem, 0);
+		sema_init(&ts->rx_create_sem, 0);
+		sema_init(&ts->tx_create_sem, 0);
+		sema_init(&ts->rx_done_sem, 0);
+		sema_init(&ts->tx_done_sem, 0);
+		sema_init(&ts->rx_post_start_sem, 0);
+		sema_init(&ts->tx_post_start_sem, 0);
+		sema_init(&ts->rx_restart_sem, 0);
+		sema_init(&ts->tx_restart_sem, 0);
+		sema_init(&ts->rx_start_sem, 0);
+		sema_init(&ts->tx_start_sem, 0);
+
+		ts->create_threads = 1;
+		kernel_thread(iscsi_target_rx_thread,
+				(void *)ts, 0);
+		down(&ts->rx_create_sem);
+
+		kernel_thread(iscsi_target_tx_thread,
+				(void *)ts, 0);
+		down(&ts->tx_create_sem);
+		ts->create_threads = 0;
+
+		iscsi_add_ts_to_inactive_list(ts);
+		allocated_thread_pair_count++;
+	}
+
+	printk(KERN_INFO "Spawned %d thread set(s) (%d total threads).\n",
+		allocated_thread_pair_count, allocated_thread_pair_count * 2);
+	return allocated_thread_pair_count;
+}
+
+/*	iscsi_deallocate_thread_sets():
+ *
+ *
+ */
+void iscsi_deallocate_thread_sets(void)
+{
+	u32 released_count = 0;
+	struct se_thread_set *ts = NULL;
+
+	while ((ts = iscsi_get_ts_from_inactive_list())) {
+
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->status = ISCSI_THREAD_SET_DIE;
+		spin_unlock_bh(&ts->ts_state_lock);
+
+		if (ts->rx_thread) {
+			send_sig(SIGKILL, ts->rx_thread, 1);
+			down(&ts->rx_done_sem);
+		}
+		if (ts->tx_thread) {
+			send_sig(SIGKILL, ts->tx_thread, 1);
+			down(&ts->tx_done_sem);
+		}
+		/*
+		 * Release this thread_id in the thread_set_bitmap
+		 */
+		spin_lock(&iscsi_global->ts_bitmap_lock);
+		bitmap_release_region(iscsi_global->ts_bitmap,
+				ts->thread_id, get_order(1));
+		spin_unlock(&iscsi_global->ts_bitmap_lock);
+
+		released_count++;
+		kfree(ts);
+	}
+
+	if (released_count)
+		printk(KERN_INFO "Stopped %d thread set(s) (%d total threads)."
+			"\n", released_count, released_count * 2);
+}
+
+/*	iscsi_deallocate_extra_thread_sets():
+ *
+ *
+ */
+static void iscsi_deallocate_extra_thread_sets(void)
+{
+	u32 orig_count, released_count = 0;
+	struct se_thread_set *ts = NULL;
+
+	orig_count = TARGET_THREAD_SET_COUNT;
+
+	while ((iscsi_global->inactive_ts + 1) > orig_count) {
+		ts = iscsi_get_ts_from_inactive_list();
+		if (!(ts))
+			break;
+
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->status = ISCSI_THREAD_SET_DIE;
+		spin_unlock_bh(&ts->ts_state_lock);
+
+		if (ts->rx_thread) {
+			send_sig(SIGKILL, ts->rx_thread, 1);
+			down(&ts->rx_done_sem);
+		}
+		if (ts->tx_thread) {
+			send_sig(SIGKILL, ts->tx_thread, 1);
+			down(&ts->tx_done_sem);
+		}
+		/*
+		 * Release this thread_id in the thread_set_bitmap
+		 */
+		spin_lock(&iscsi_global->ts_bitmap_lock);
+		bitmap_release_region(iscsi_global->ts_bitmap,
+				ts->thread_id, get_order(1));
+		spin_unlock(&iscsi_global->ts_bitmap_lock);
+
+		released_count++;
+		kfree(ts);
+	}
+
+	if (released_count) {
+		printk(KERN_INFO "Stopped %d thread set(s) (%d total threads)."
+			"\n", released_count, released_count * 2);
+	}
+}
+
+/*	iscsi_activate_thread_set():
+ *
+ *
+ */
+void iscsi_activate_thread_set(struct iscsi_conn *conn, struct se_thread_set *ts)
+{
+	iscsi_add_ts_to_active_list(ts);
+
+	spin_lock_bh(&ts->ts_state_lock);
+	conn->thread_set = ts;
+	ts->conn = conn;
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	/*
+	 * Start up the RX thread and wait on rx_post_start_sem.  The RX
+	 * Thread will then do the same for the TX Thread in
+	 * iscsi_rx_thread_pre_handler().
+	 */
+	up(&ts->rx_start_sem);
+	down(&ts->rx_post_start_sem);
+}
+
+/*	iscsi_get_thread_set_timeout():
+ *
+ *
+ */
+static void iscsi_get_thread_set_timeout(unsigned long data)
+{
+	up((struct semaphore *)data);
+}
+
+/*	iscsi_get_thread_set():
+ *
+ *	Parameters:	Thread set role.
+ *	Returns:	iSCSI Thread Set Pointer
+ */
+struct se_thread_set *iscsi_get_thread_set(int role)
+{
+	int allocate_ts = 0;
+	struct semaphore sem;
+	struct timer_list timer;
+	struct se_thread_set *ts = NULL;
+
+	/*
+	 * If no inactive thread set is available on the first call to
+	 * iscsi_get_ts_from_inactive_list(), sleep for a second and
+	 * try again.  If still none are available after two attempts,
+	 * allocate a set ourselves.
+	 */
+get_set:
+	ts = iscsi_get_ts_from_inactive_list();
+	if (!(ts)) {
+		if (allocate_ts == 2)
+			iscsi_allocate_thread_sets(1);
+
+		sema_init(&sem, 0);
+		init_timer(&timer);
+		SETUP_TIMER(timer, 1, &sem, iscsi_get_thread_set_timeout);
+		add_timer(&timer);
+
+		down(&sem);
+		del_timer_sync(&timer);
+		allocate_ts++;
+		goto get_set;
+	}
+
+	ts->delay_inactive = 1;
+	ts->signal_sent = ts->stop_active = 0;
+	ts->thread_count = 2;
+	sema_init(&ts->rx_restart_sem, 0);
+	sema_init(&ts->tx_restart_sem, 0);
+
+	return ts;
+}
+
+/*	iscsi_set_thread_clear():
+ *
+ *
+ */
+void iscsi_set_thread_clear(struct iscsi_conn *conn, u8 thread_clear)
+{
+	struct se_thread_set *ts = NULL;
+
+	if (!conn->thread_set) {
+		printk(KERN_ERR "struct iscsi_conn->thread_set is NULL\n");
+		return;
+	}
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->thread_clear &= ~thread_clear;
+
+	if ((thread_clear & ISCSI_CLEAR_RX_THREAD) &&
+	    (ts->blocked_threads & ISCSI_BLOCK_RX_THREAD))
+		up(&ts->rx_restart_sem);
+	else if ((thread_clear & ISCSI_CLEAR_TX_THREAD) &&
+		 (ts->blocked_threads & ISCSI_BLOCK_TX_THREAD))
+		up(&ts->tx_restart_sem);
+	spin_unlock_bh(&ts->ts_state_lock);
+}
+
+/*	iscsi_set_thread_set_signal():
+ *
+ *
+ */
+void iscsi_set_thread_set_signal(struct iscsi_conn *conn, u8 signal_sent)
+{
+	struct se_thread_set *ts = NULL;
+
+	if (!conn->thread_set) {
+		printk(KERN_ERR "struct iscsi_conn->thread_set is NULL\n");
+		return;
+	}
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->signal_sent |= signal_sent;
+	spin_unlock_bh(&ts->ts_state_lock);
+}
+
+/*	iscsi_release_thread_set():
+ *
+ *	Parameters:	iSCSI Connection Pointer.
+ *	Returns:	0 on success, -1 on error.
+ */
+int iscsi_release_thread_set(struct iscsi_conn *conn, int role)
+{
+	int thread_called = 0;
+	struct se_thread_set *ts = NULL;
+
+	if (!conn || !conn->thread_set) {
+		printk(KERN_ERR "connection or thread set pointer is NULL\n");
+		BUG();
+	}
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->status = ISCSI_THREAD_SET_RESET;
+
+	if (!(strncmp(current->comm, ISCSI_RX_THREAD_NAME,
+			strlen(ISCSI_RX_THREAD_NAME))))
+		thread_called = ISCSI_RX_THREAD;
+	else if (!(strncmp(current->comm, ISCSI_TX_THREAD_NAME,
+			strlen(ISCSI_TX_THREAD_NAME))))
+		thread_called = ISCSI_TX_THREAD;
+
+	if (ts->rx_thread && (thread_called == ISCSI_TX_THREAD) &&
+	   (ts->thread_clear & ISCSI_CLEAR_RX_THREAD)) {
+
+		if (!(ts->signal_sent & ISCSI_SIGNAL_RX_THREAD)) {
+			send_sig(SIGABRT, ts->rx_thread, 1);
+			ts->signal_sent |= ISCSI_SIGNAL_RX_THREAD;
+		}
+		ts->blocked_threads |= ISCSI_BLOCK_RX_THREAD;
+		spin_unlock_bh(&ts->ts_state_lock);
+		down(&ts->rx_restart_sem);
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->blocked_threads &= ~ISCSI_BLOCK_RX_THREAD;
+	}
+	if (ts->tx_thread && (thread_called == ISCSI_RX_THREAD) &&
+	   (ts->thread_clear & ISCSI_CLEAR_TX_THREAD)) {
+
+		if (!(ts->signal_sent & ISCSI_SIGNAL_TX_THREAD)) {
+			send_sig(SIGABRT, ts->tx_thread, 1);
+			ts->signal_sent |= ISCSI_SIGNAL_TX_THREAD;
+		}
+		ts->blocked_threads |= ISCSI_BLOCK_TX_THREAD;
+		spin_unlock_bh(&ts->ts_state_lock);
+		down(&ts->tx_restart_sem);
+		spin_lock_bh(&ts->ts_state_lock);
+		ts->blocked_threads &= ~ISCSI_BLOCK_TX_THREAD;
+	}
+
+	conn->thread_set = NULL;
+	ts->conn = NULL;
+	ts->status = ISCSI_THREAD_SET_FREE;
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return 0;
+}
+
+/*	iscsi_thread_set_force_reinstatement():
+ *
+ *
+ */
+int iscsi_thread_set_force_reinstatement(struct iscsi_conn *conn)
+{
+	struct se_thread_set *ts;
+
+	if (!conn->thread_set)
+		return -1;
+	ts = conn->thread_set;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	if (ts->status != ISCSI_THREAD_SET_ACTIVE) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		return -1;
+	}
+
+	if (ts->tx_thread && (!(ts->signal_sent & ISCSI_SIGNAL_TX_THREAD))) {
+		send_sig(SIGABRT, ts->tx_thread, 1);
+		ts->signal_sent |= ISCSI_SIGNAL_TX_THREAD;
+	}
+	if (ts->rx_thread && (!(ts->signal_sent & ISCSI_SIGNAL_RX_THREAD))) {
+		send_sig(SIGABRT, ts->rx_thread, 1);
+		ts->signal_sent |= ISCSI_SIGNAL_RX_THREAD;
+	}
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return 0;
+}
+
+/*	iscsi_check_to_add_additional_sets():
+ *
+ *
+ */
+static void iscsi_check_to_add_additional_sets(void)
+{
+	int thread_sets_add;
+
+	spin_lock(&iscsi_global->inactive_ts_lock);
+	thread_sets_add = iscsi_global->inactive_ts;
+	spin_unlock(&iscsi_global->inactive_ts_lock);
+	if (thread_sets_add == 1)
+		iscsi_allocate_thread_sets(1);
+}
+
+/*	iscsi_signal_thread_pre_handler():
+ *
+ *
+ */
+static int iscsi_signal_thread_pre_handler(struct se_thread_set *ts)
+{
+	spin_lock_bh(&ts->ts_state_lock);
+	if ((ts->status == ISCSI_THREAD_SET_DIE) || signal_pending(current)) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		return -1;
+	}
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return 0;
+}
+
+/*	iscsi_rx_thread_pre_handler():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_rx_thread_pre_handler(struct se_thread_set *ts, int role)
+{
+	int ret;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	if (ts->create_threads) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		up(&ts->rx_create_sem);
+		goto sleep;
+	}
+
+	flush_signals(current);
+
+	if (ts->delay_inactive && (--ts->thread_count == 0)) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		iscsi_del_ts_from_active_list(ts);
+
+		if (!iscsi_global->in_shutdown)
+			iscsi_deallocate_extra_thread_sets();
+
+		iscsi_add_ts_to_inactive_list(ts);
+		spin_lock_bh(&ts->ts_state_lock);
+	}
+
+	if ((ts->status == ISCSI_THREAD_SET_RESET) &&
+	    (ts->thread_clear & ISCSI_CLEAR_RX_THREAD))
+		up(&ts->rx_restart_sem);
+
+	ts->thread_clear &= ~ISCSI_CLEAR_RX_THREAD;
+	spin_unlock_bh(&ts->ts_state_lock);
+sleep:
+	ret = down_interruptible(&ts->rx_start_sem);
+	if (ret != 0)
+		return NULL;
+
+	if (iscsi_signal_thread_pre_handler(ts) < 0)
+		return NULL;
+
+	if (!ts->conn) {
+		printk(KERN_ERR "struct se_thread_set->conn is NULL for"
+			" thread_id: %d, going back to sleep\n", ts->thread_id);
+		goto sleep;
+	}
+	iscsi_check_to_add_additional_sets();
+	/*
+	 * The RX Thread starts up the TX Thread and sleeps.
+	 */
+	ts->thread_clear |= ISCSI_CLEAR_RX_THREAD;
+	up(&ts->tx_start_sem);
+	down(&ts->tx_post_start_sem);
+
+	return ts->conn;
+}
+
+/*	iscsi_tx_thread_pre_handler():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_tx_thread_pre_handler(struct se_thread_set *ts, int role)
+{
+	int ret;
+
+	spin_lock_bh(&ts->ts_state_lock);
+	if (ts->create_threads) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		up(&ts->tx_create_sem);
+		goto sleep;
+	}
+
+	flush_signals(current);
+
+	if (ts->delay_inactive && (--ts->thread_count == 0)) {
+		spin_unlock_bh(&ts->ts_state_lock);
+		iscsi_del_ts_from_active_list(ts);
+
+		if (!iscsi_global->in_shutdown)
+			iscsi_deallocate_extra_thread_sets();
+
+		iscsi_add_ts_to_inactive_list(ts);
+		spin_lock_bh(&ts->ts_state_lock);
+	}
+	if ((ts->status == ISCSI_THREAD_SET_RESET) &&
+	    (ts->thread_clear & ISCSI_CLEAR_TX_THREAD))
+		up(&ts->tx_restart_sem);
+
+	ts->thread_clear &= ~ISCSI_CLEAR_TX_THREAD;
+	spin_unlock_bh(&ts->ts_state_lock);
+sleep:
+	ret = down_interruptible(&ts->tx_start_sem);
+	if (ret != 0)
+		return NULL;
+
+	if (iscsi_signal_thread_pre_handler(ts) < 0)
+		return NULL;
+
+	if (!ts->conn) {
+		printk(KERN_ERR "struct se_thread_set->conn is NULL for "
+			" thread_id: %d, going back to sleep\n",
+			ts->thread_id);
+		goto sleep;
+	}
+
+	iscsi_check_to_add_additional_sets();
+	/*
+	 * From the TX thread, up the tx_post_start_sem that the RX Thread is
+	 * sleeping on in iscsi_rx_thread_pre_handler(), then up the
+	 * rx_post_start_sem that iscsi_activate_thread_set() is sleeping on.
+	 */
+	ts->thread_clear |= ISCSI_CLEAR_TX_THREAD;
+	up(&ts->tx_post_start_sem);
+	up(&ts->rx_post_start_sem);
+
+	spin_lock_bh(&ts->ts_state_lock);
+	ts->status = ISCSI_THREAD_SET_ACTIVE;
+	spin_unlock_bh(&ts->ts_state_lock);
+
+	return ts->conn;
+}
+
+int iscsi_thread_set_init(void)
+{
+	int size;
+
+	iscsi_global->ts_bitmap_count = ISCSI_TS_BITMAP_BITS;
+
+	size = BITS_TO_LONGS(iscsi_global->ts_bitmap_count) * sizeof(long);
+	iscsi_global->ts_bitmap = kzalloc(size, GFP_KERNEL);
+	if (!(iscsi_global->ts_bitmap)) {
+		printk(KERN_ERR "Unable to allocate iscsi_global->ts_bitmap\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+void iscsi_thread_set_free(void)
+{
+	kfree(iscsi_global->ts_bitmap);
+}
diff --git a/drivers/target/iscsi/iscsi_thread_queue.h b/drivers/target/iscsi/iscsi_thread_queue.h
new file mode 100644
index 0000000..54089fd
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_thread_queue.h
@@ -0,0 +1,103 @@
+#ifndef ISCSI_THREAD_QUEUE_H
+#define ISCSI_THREAD_QUEUE_H
+
+/*
+ * Defines for thread sets.
+ */
+extern int iscsi_thread_set_force_reinstatement(struct iscsi_conn *);
+extern void iscsi_add_ts_to_inactive_list(struct se_thread_set *);
+extern int iscsi_allocate_thread_sets(u32);
+extern void iscsi_deallocate_thread_sets(void);
+extern void iscsi_activate_thread_set(struct iscsi_conn *, struct se_thread_set *);
+extern struct se_thread_set *iscsi_get_thread_set(int);
+extern void iscsi_set_thread_clear(struct iscsi_conn *, u8);
+extern void iscsi_set_thread_set_signal(struct iscsi_conn *, u8);
+extern int iscsi_release_thread_set(struct iscsi_conn *, int);
+extern struct iscsi_conn *iscsi_rx_thread_pre_handler(struct se_thread_set *, int);
+extern struct iscsi_conn *iscsi_tx_thread_pre_handler(struct se_thread_set *, int);
+extern int iscsi_thread_set_init(void);
+extern void iscsi_thread_set_free(void);
+
+extern int iscsi_target_tx_thread(void *);
+extern int iscsi_target_rx_thread(void *);
+extern struct iscsi_global *iscsi_global;
+
+#define INITIATOR_THREAD_SET_COUNT		4
+#define TARGET_THREAD_SET_COUNT			4
+
+#define ISCSI_RX_THREAD                         1
+#define ISCSI_TX_THREAD                         2
+#define ISCSI_RX_THREAD_NAME			"iscsi_trx"
+#define ISCSI_TX_THREAD_NAME			"iscsi_ttx"
+#define ISCSI_BLOCK_RX_THREAD			0x1
+#define ISCSI_BLOCK_TX_THREAD			0x2
+#define ISCSI_CLEAR_RX_THREAD			0x1
+#define ISCSI_CLEAR_TX_THREAD			0x2
+#define ISCSI_SIGNAL_RX_THREAD			0x1
+#define ISCSI_SIGNAL_TX_THREAD			0x2
+
+/* struct se_thread_set->status */
+#define ISCSI_THREAD_SET_FREE			1
+#define ISCSI_THREAD_SET_ACTIVE			2
+#define ISCSI_THREAD_SET_DIE			3
+#define ISCSI_THREAD_SET_RESET			4
+#define ISCSI_THREAD_SET_DEALLOCATE_THREADS	5
+
+/* By default allow a maximum of 32K iSCSI connections */
+#define ISCSI_TS_BITMAP_BITS			32768
+
+struct se_thread_set {
+	/* flags used for blocking and restarting sets */
+	u8	blocked_threads;
+	/* flag for creating threads */
+	u8	create_threads;
+	/* flag for delaying re-adding to inactive list */
+	u8	delay_inactive;
+	/* status for thread set */
+	u8	status;
+	/* which threads have had signals sent */
+	u8	signal_sent;
+	/* used for stopping active sets during shutdown */
+	u8	stop_active;
+	/* flag for which threads exited first */
+	u8	thread_clear;
+	/* Active threads in the thread set */
+	u8	thread_count;
+	/* Unique thread ID */
+	u32	thread_id;
+	/* pointer to connection if set is active */
+	struct iscsi_conn	*conn;
+	/* used for controlling ts state accesses */
+	spinlock_t	ts_state_lock;
+	/* used for stopping active sets during shutdown */
+	struct semaphore	stop_active_sem;
+	/* used for controlling thread creation */
+	struct semaphore	rx_create_sem;
+	/* used for controlling thread creation */
+	struct semaphore	tx_create_sem;
+	/* used for controlling killing */
+	struct semaphore	rx_done_sem;
+	/* used for controlling killing */
+	struct semaphore	tx_done_sem;
+	/* Used for rx side post startup */
+	struct semaphore	rx_post_start_sem;
+	/* Used for tx side post startup */
+	struct semaphore	tx_post_start_sem;
+	/* used for restarting thread queue */
+	struct semaphore	rx_restart_sem;
+	/* used for restarting thread queue */
+	struct semaphore	tx_restart_sem;
+	/* used for normal unused blocking */
+	struct semaphore	rx_start_sem;
+	/* used for normal unused blocking */
+	struct semaphore	tx_start_sem;
+	/* OS descriptor for rx thread */
+	struct task_struct	*rx_thread;
+	/* OS descriptor for tx thread */
+	struct task_struct	*tx_thread;
+	/* struct se_thread_set list head */
+	struct list_head	ts_list;
+} ____cacheline_aligned;
+
+#endif   /*** ISCSI_THREAD_QUEUE_H ***/
+
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 07/12] iscsi-target: Add CHAP Authentication support using libcrypto
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds libcrypto MD5 based iSCSI CHAP authentication support for
iscsi_target_mod.  This includes support for mutual and one-way NodeACL
authentication for SessionType=Normal and SessionType=Discovery via
/sys/kernel/config/target/iscsi.
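
For reference, here is a minimal userspace sketch (not part of this patch)
of the one-way CHAP_R computation that chap_server_compute_md5() verifies,
i.e. MD5 over the CHAP identifier, the configured secret and the decoded
challenge.  It uses OpenSSL's MD5_* API purely for illustration, and the
identifier, secret and challenge values below are hypothetical placeholders:

  #include <stdio.h>
  #include <string.h>
  #include <openssl/md5.h>

  int main(void)
  {
  	unsigned char id = 1;			/* CHAP_I (hypothetical) */
  	const char *secret = "demo-secret";	/* one-way CHAP secret (hypothetical) */
  	unsigned char challenge[16] = { 0 };	/* decoded CHAP_C bytes (hypothetical) */
  	unsigned char digest[MD5_DIGEST_LENGTH];
  	MD5_CTX ctx;
  	int i;

  	MD5_Init(&ctx);
  	MD5_Update(&ctx, &id, 1);
  	MD5_Update(&ctx, secret, strlen(secret));
  	MD5_Update(&ctx, challenge, sizeof(challenge));
  	MD5_Final(digest, &ctx);

  	printf("CHAP_R=0x");
  	for (i = 0; i < MD5_DIGEST_LENGTH; i++)
  		printf("%02x", digest[i]);
  	printf("\n");
  	return 0;
  }

Compile with -lcrypto.  The kernel side performs the equivalent steps with
the crypto_hash API against the NodeACL's configured userid/password.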

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_auth_chap.c |  502 ++++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_auth_chap.h |   33 ++
 2 files changed, 535 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_auth_chap.c
 create mode 100644 drivers/target/iscsi/iscsi_auth_chap.h

diff --git a/drivers/target/iscsi/iscsi_auth_chap.c b/drivers/target/iscsi/iscsi_auth_chap.c
new file mode 100644
index 0000000..c6defc3
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_auth_chap.c
@@ -0,0 +1,502 @@
+/*******************************************************************************
+ * This file houses the main functions for the iSCSI CHAP support
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ * 
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/string.h>
+#include <linux/crypto.h>
+#include <linux/err.h>
+#include <linux/scatterlist.h>
+
+#include "iscsi_target_core.h"
+#include "iscsi_target_nego.h"
+#include "iscsi_auth_chap.h"
+
+#ifdef DEBUG_CHAP
+#define PRINT(x...)		printk(KERN_INFO x)
+#else
+#define PRINT(x...)
+#endif
+
+unsigned char chap_asciihex_to_binaryhex(unsigned char val[2])
+{
+	unsigned char result = 0;
+	/*
+	 * MSB
+	 */
+	if ((val[0] >= 'a') && (val[0] <= 'f'))
+		result = ((val[0] - 'a' + 10) & 0xf) << 4;
+	else
+		if ((val[0] >= 'A') && (val[0] <= 'F'))
+			result = ((val[0] - 'A' + 10) & 0xf) << 4;
+		else /* digit */
+			result = ((val[0] - '0') & 0xf) << 4;
+	/*
+	 * LSB
+	 */
+	if ((val[1] >= 'a') && (val[1] <= 'f'))
+		result |= ((val[1] - 'a' + 10) & 0xf);
+	else
+		if ((val[1] >= 'A') && (val[1] <= 'F'))
+			result |= ((val[1] - 'A' + 10) & 0xf);
+		else /* digit */
+			result |= ((val[1] - '0') & 0xf);
+
+	return result;
+}
+
+int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len)
+{
+	int i = 0, j = 0;
+
+	for (i = 0; i < len; i += 2)
+		dst[j++] = (unsigned char) chap_asciihex_to_binaryhex(&src[i]);
+
+	dst[j] = '\0';
+	return j;
+}
+
+void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
+{
+	int i;
+
+	for (i = 0; i < src_len; i++)
+		sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);
+}
+
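+/*
+ * Fill the challenge buffer with pseudo-random bytes.  Each output byte is
+ * assembled from three get_random_bytes() samples (3 + 3 + 2 bits).
+ */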
+void chap_set_random(char *data, int length)
+{
+	long r;
+	unsigned n;
+
+	while (length > 0) {
+
+		get_random_bytes(&r, sizeof(long));
+		r = r ^ (r >> 8);
+		r = r ^ (r >> 4);
+		n = r & 0x7;
+
+		get_random_bytes(&r, sizeof(long));
+		r = r ^ (r >> 8);
+		r = r ^ (r >> 5);
+		n = (n << 3) | (r & 0x7);
+
+		get_random_bytes(&r, sizeof(long));
+		r = r ^ (r >> 8);
+		r = r ^ (r >> 5);
+		n = (n << 2) | (r & 0x3);
+
+		*data++ = n;
+		 length--;
+	}
+}
+
+static struct iscsi_chap *chap_server_open(
+	struct iscsi_conn *conn,
+	struct iscsi_node_auth *auth,
+	const char *A_str,
+	char *AIC_str,
+	unsigned int *AIC_len)
+{
+	struct iscsi_chap *chap;
+	int ret;
+
+	if (!(auth->naf_flags & NAF_USERID_SET) ||
+	    !(auth->naf_flags & NAF_PASSWORD_SET)) {
+		printk(KERN_ERR "CHAP user or password not set for"
+				" Initiator ACL\n");
+		return NULL;
+	}
+
+	conn->auth_protocol = kzalloc(sizeof(struct iscsi_chap), GFP_KERNEL);
+	if (!(conn->auth_protocol))
+		return NULL;
+
+	chap = (struct iscsi_chap *) conn->auth_protocol;
+	/*
+	 * We only support the MD5 digest algorithm presently.
+	 */
+	if (strncmp(A_str, "CHAP_A=5", 8)) {
+		printk(KERN_ERR "CHAP_A is not MD5.\n");
+		return NULL;
+	}
+	PRINT("[server] Got CHAP_A=5\n");
+	/*
+	 * Send back CHAP_A set to MD5.
+	 */
+	*AIC_len = sprintf(AIC_str, "CHAP_A=5");
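+	/* Account for the NULL separator between key=value pairs */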
+	*AIC_len += 1;
+	chap->digest_type = CHAP_DIGEST_MD5;
+	PRINT("[server] Sending CHAP_A=%d\n", chap->digest_type);
+	/*
+	 * Set Identifier.
+	 */
+	chap->id = ISCSI_TPG_C(conn)->tpg_chap_id++;
+	*AIC_len += sprintf(AIC_str + *AIC_len, "CHAP_I=%d", chap->id);
+	*AIC_len += 1;
+	PRINT("[server] Sending CHAP_I=%d\n", chap->id);
+	/*
+	 * Generate Challenge.
+	 */
+	ret = chap_gen_challenge(conn, 1, AIC_str, AIC_len);
+	if (ret < 0)
+		return NULL;
+
+	return chap;
+}
+
+void chap_close(struct iscsi_conn *conn)
+{
+	kfree(conn->auth_protocol);
+	conn->auth_protocol = NULL;
+}
+
+int chap_gen_challenge(
+	struct iscsi_conn *conn,
+	int caller,
+	char *C_str,
+	unsigned int *C_len)
+{
+	unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
+	struct iscsi_chap *chap = (struct iscsi_chap *) conn->auth_protocol;
+
+	memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
+
+	chap_set_random(chap->challenge, CHAP_CHALLENGE_LENGTH);
+	chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
+					CHAP_CHALLENGE_LENGTH);
+	/*
+	 * Set CHAP_C, and copy the generated challenge into C_str.
+	 */
+	*C_len += sprintf(C_str + *C_len, "CHAP_C=0x%s", challenge_asciihex);
+	*C_len += 1;
+
+	PRINT("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
+			challenge_asciihex);
+	return 0;
+}
+
+int chap_server_compute_md5(
+	struct iscsi_conn *conn,
+	struct iscsi_node_auth *auth,
+	char *NR_in_ptr,
+	char *NR_out_ptr,
+	unsigned int *NR_out_len)
+{
+	char *endptr;
+	unsigned char id, digest[MD5_SIGNATURE_SIZE];
+	unsigned char type, response[MD5_SIGNATURE_SIZE * 2 + 2];
+	unsigned char identifier[10], *challenge, *challenge_binhex;
+	unsigned char client_digest[MD5_SIGNATURE_SIZE];
+	unsigned char server_digest[MD5_SIGNATURE_SIZE];
+	unsigned char chap_n[MAX_CHAP_N_SIZE], chap_r[MAX_RESPONSE_LENGTH];
+	struct iscsi_chap *chap = (struct iscsi_chap *) conn->auth_protocol;
+	struct crypto_hash *tfm;
+	struct hash_desc desc;
+	struct scatterlist sg;
+	int auth_ret = -1, ret, challenge_len;
+
+	memset(identifier, 0, 10);
+	memset(chap_n, 0, MAX_CHAP_N_SIZE);
+	memset(chap_r, 0, MAX_RESPONSE_LENGTH);
+	memset(digest, 0, MD5_SIGNATURE_SIZE);
+	memset(response, 0, MD5_SIGNATURE_SIZE * 2 + 2);
+	memset(client_digest, 0, MD5_SIGNATURE_SIZE);
+	memset(server_digest, 0, MD5_SIGNATURE_SIZE);
+
+	challenge = kzalloc(CHAP_CHALLENGE_STR_LEN, GFP_KERNEL);
+	if (!(challenge)) {
+		printk(KERN_ERR "Unable to allocate challenge buffer\n");
+		return -1;
+	}
+
+	challenge_binhex = kzalloc(CHAP_CHALLENGE_STR_LEN, GFP_KERNEL);
+	if (!(challenge_binhex)) {
+		printk(KERN_ERR "Unable to allocate challenge_binhex buffer\n");
+		kfree(challenge);
+		return -1;
+	}
+	/*
+	 * Extract CHAP_N.
+	 */
+	if (extract_param(NR_in_ptr, "CHAP_N", MAX_CHAP_N_SIZE, chap_n,
+				&type) < 0) {
+		printk(KERN_ERR "Could not find CHAP_N.\n");
+		goto out;
+	}
+	if (type == HEX) {
+		printk(KERN_ERR "Could not find CHAP_N.\n");
+		goto out;
+	}
+
+	if (memcmp(chap_n, auth->userid, strlen(auth->userid)) != 0) {
+		printk(KERN_ERR "CHAP_N values do not match!\n");
+		goto out;
+	}
+	PRINT("[server] Got CHAP_N=%s\n", chap_n);
+	/*
+	 * Extract CHAP_R.
+	 */
+	if (extract_param(NR_in_ptr, "CHAP_R", MAX_RESPONSE_LENGTH, chap_r,
+				&type) < 0) {
+		printk(KERN_ERR "Could not find CHAP_R.\n");
+		goto out;
+	}
+	if (type != HEX) {
+		printk(KERN_ERR "Could not find CHAP_R.\n");
+		goto out;
+	}
+
+	PRINT("[server] Got CHAP_R=%s\n", chap_r);
+	chap_string_to_hex(client_digest, chap_r, strlen(chap_r));
+
+	tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm)) {
+		printk(KERN_ERR "Unable to allocate struct crypto_hash\n");
+		goto out;
+	}
+	desc.tfm = tfm;
+	desc.flags = 0;
+
+	ret = crypto_hash_init(&desc);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_init() failed\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	sg_init_one(&sg, (void *)&chap->id, 1);
+	ret = crypto_hash_update(&desc, &sg, 1);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_update() failed for id\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	sg_init_one(&sg, (void *)&auth->password, strlen(auth->password));
+	ret = crypto_hash_update(&desc, &sg, strlen(auth->password));
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_update() failed for password\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	sg_init_one(&sg, (void *)chap->challenge, strlen(chap->challenge));
+	ret = crypto_hash_update(&desc, &sg, strlen(chap->challenge));
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_update() failed for challenge\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	ret = crypto_hash_final(&desc, server_digest);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_final() failed for server digest\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+	crypto_free_hash(tfm);
+
+	chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);
+	PRINT("[server] MD5 Server Digest: %s\n", response);
+
+	if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
+		PRINT("[server] MD5 Digests do not match!\n\n");
+		goto out;
+	} else
+		PRINT("[server] MD5 Digests match, CHAP connetication"
+				" successful.\n\n");
+	/*
+	 * One way authentication has succeeded, return now if mutual
+	 * authentication is not enabled.
+	 */
+	if (!auth->authenticate_target) {
+		kfree(challenge);
+		kfree(challenge_binhex);
+		return 0;
+	}
+	/*
+	 * Get CHAP_I.
+	 */
+	if (extract_param(NR_in_ptr, "CHAP_I", 10, identifier, &type) < 0) {
+		printk(KERN_ERR "Could not find CHAP_I.\n");
+		goto out;
+	}
+
+	if (type == HEX)
+		id = (unsigned char)simple_strtoul((char *)&identifier[2],
+					&endptr, 0);
+	else
+		id = (unsigned char)simple_strtoul(identifier, &endptr, 0);
+	/*
+	 * RFC 1994 says the Identifier is no more than one octet (8 bits).
+	 */
+	PRINT("[server] Got CHAP_I=%d\n", id);
+	/*
+	 * Get CHAP_C.
+	 */
+	if (extract_param(NR_in_ptr, "CHAP_C", CHAP_CHALLENGE_STR_LEN,
+			challenge, &type) < 0) {
+		printk(KERN_ERR "Could not find CHAP_C.\n");
+		goto out;
+	}
+
+	if (type != HEX) {
+		printk(KERN_ERR "Could not find CHAP_C.\n");
+		goto out;
+	}
+	PRINT("[server] Got CHAP_C=%s\n", challenge);
+	challenge_len = chap_string_to_hex(challenge_binhex, challenge,
+				strlen(challenge));
+	if (!(challenge_len)) {
+		printk(KERN_ERR "Unable to convert incoming challenge\n");
+		goto out;
+	}
+	/*
+	 * Generate CHAP_N and CHAP_R for mutual authentication.
+	 */
+	tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm)) {
+		printk(KERN_ERR "Unable to allocate struct crypto_hash\n");
+		goto out;
+	}
+	desc.tfm = tfm;
+	desc.flags = 0;
+
+	ret = crypto_hash_init(&desc);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_init() failed\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	sg_init_one(&sg, (void *)&id, 1);
+	ret = crypto_hash_update(&desc, &sg, 1);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_update() failed for id\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	sg_init_one(&sg, (void *)auth->password_mutual,
+				strlen(auth->password_mutual));
+	ret = crypto_hash_update(&desc, &sg, strlen(auth->password_mutual));
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_update() failed for"
+				" password_mutual\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+	/*
+	 * Feed the received challenge (converted to binary above) into the
+	 * mutual authentication digest.
+	 */
+	sg_init_one(&sg, (void *)challenge_binhex, challenge_len);
+	ret = crypto_hash_update(&desc, &sg, challenge_len);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_update() failed for ma challenge\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+
+	ret = crypto_hash_final(&desc, digest);
+	if (ret < 0) {
+		printk(KERN_ERR "crypto_hash_final() failed for ma digest\n");
+		crypto_free_hash(tfm);
+		goto out;
+	}
+	crypto_free_hash(tfm);
+	/*
+	 * Generate CHAP_N and CHAP_R.
+	 */
+	*NR_out_len = sprintf(NR_out_ptr, "CHAP_N=%s", auth->userid_mutual);
+	*NR_out_len += 1;
+	PRINT("[server] Sending CHAP_N=%s\n", auth->userid_mutual);
+	/*
+	 * Convert the response from binary hex to ASCII hex.
+	 */
+	chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);
+	*NR_out_len += sprintf(NR_out_ptr + *NR_out_len, "CHAP_R=0x%s",
+			response);
+	*NR_out_len += 1;
+	PRINT("[server] Sending CHAP_R=0x%s\n", response);
+	auth_ret = 0;
+out:
+	kfree(challenge);
+	kfree(challenge_binhex);
+	return auth_ret;
+}
+
+int chap_got_response(
+	struct iscsi_conn *conn,
+	struct iscsi_node_auth *auth,
+	char *NR_in_ptr,
+	char *NR_out_ptr,
+	unsigned int *NR_out_len)
+{
+	struct iscsi_chap *chap = (struct iscsi_chap *) conn->auth_protocol;
+
+	switch (chap->digest_type) {
+	case CHAP_DIGEST_MD5:
+		if (chap_server_compute_md5(conn, auth, NR_in_ptr,
+				NR_out_ptr, NR_out_len) < 0)
+			return -1;
+		break;
+	default:
+		printk(KERN_ERR "Unknown CHAP digest type %d!\n",
+				chap->digest_type);
+		return -1;
+	}
+
+	return 0;
+}
+
+u32 chap_main_loop(
+	struct iscsi_conn *conn,
+	struct iscsi_node_auth *auth,
+	char *in_text,
+	char *out_text,
+	int *in_len,
+	int *out_len)
+{
+	struct iscsi_chap *chap = (struct iscsi_chap *) conn->auth_protocol;
+
+	if (!(chap)) {
+		chap = chap_server_open(conn, auth, in_text, out_text, out_len);
+		if (!(chap))
+			return 2;
+		chap->chap_state = CHAP_STAGE_SERVER_AIC;
+		return 0;
+	} else if (chap->chap_state == CHAP_STAGE_SERVER_AIC) {
+		convert_null_to_semi(in_text, *in_len);
+		if (chap_got_response(conn, auth, in_text, out_text,
+				out_len) < 0) {
+			chap_close(conn);
+			return 2;
+		}
+		if (auth->authenticate_target)
+			chap->chap_state = CHAP_STAGE_SERVER_NR;
+		else
+			*out_len = 0;
+		chap_close(conn);
+		return 1;
+	}
+
+	return 2;
+}
diff --git a/drivers/target/iscsi/iscsi_auth_chap.h b/drivers/target/iscsi/iscsi_auth_chap.h
new file mode 100644
index 0000000..a4492b6
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_auth_chap.h
@@ -0,0 +1,33 @@
+#ifndef _ISCSI_CHAP_H_
+#define _ISCSI_CHAP_H_
+
+#define CHAP_DIGEST_MD5		5
+#define CHAP_DIGEST_SHA		6
+
+#define CHAP_CHALLENGE_LENGTH	16
+#define CHAP_CHALLENGE_STR_LEN	4096
+#define MAX_RESPONSE_LENGTH	64	/* sufficient for MD5 */
+#define	MAX_CHAP_N_SIZE		512
+
+#define MD5_SIGNATURE_SIZE	16	/* 16 bytes in a MD5 message digest */
+
+#define CHAP_STAGE_CLIENT_A	1
+#define CHAP_STAGE_SERVER_AIC	2
+#define CHAP_STAGE_CLIENT_NR	3
+#define CHAP_STAGE_CLIENT_NRIC	4
+#define CHAP_STAGE_SERVER_NR	5
+
+extern int chap_gen_challenge(struct iscsi_conn *, int, char *, unsigned int *);
+extern u32 chap_main_loop(struct iscsi_conn *, struct iscsi_node_auth *, char *, char *,
+				int *, int *);
+
+struct iscsi_chap {
+	unsigned char	digest_type;
+	unsigned char	id;
+	unsigned char	challenge[CHAP_CHALLENGE_LENGTH];
+	unsigned int	challenge_len;
+	unsigned int	authenticate_target;
+	unsigned int	chap_state;
+} ____cacheline_aligned;
+
+#endif   /*** _ISCSI_CHAP_H_ ***/
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
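
For reference, the response computed by chap_server_compute_md5() above
follows RFC 1994: CHAP_R = MD5(Identifier || secret || challenge), with
CHAP_A/CHAP_I/CHAP_C and CHAP_N/CHAP_R exchanged as login key=value text.
A minimal sketch of that single calculation using the same crypto_hash
API as the patch (the helper below is illustrative only and is not part
of this series):

static int chap_calc_md5_response(
	unsigned char id,
	const unsigned char *secret,
	unsigned int secret_len,
	const unsigned char *challenge,
	unsigned int challenge_len,
	unsigned char *digest)		/* MD5_SIGNATURE_SIZE bytes */
{
	struct crypto_hash *tfm;
	struct hash_desc desc;
	struct scatterlist sg;
	int ret;

	tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	desc.tfm = tfm;
	desc.flags = 0;

	ret = crypto_hash_init(&desc);
	if (!ret) {
		/* Identifier (one octet), then secret, then challenge */
		sg_init_one(&sg, (void *)&id, 1);
		ret = crypto_hash_update(&desc, &sg, 1);
	}
	if (!ret) {
		sg_init_one(&sg, (void *)secret, secret_len);
		ret = crypto_hash_update(&desc, &sg, secret_len);
	}
	if (!ret) {
		sg_init_one(&sg, (void *)challenge, challenge_len);
		ret = crypto_hash_update(&desc, &sg, challenge_len);
	}
	if (!ret)
		ret = crypto_hash_final(&desc, digest);

	crypto_free_hash(tfm);
	return ret;
}

The resulting 16 byte digest is what both sides compare, and what
chap_binaryhex_to_asciihex() renders as the "CHAP_R=0x..." value on the
wire.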

* [RFC 08/12] iscsi-target: Add Sequence/PDU list + DataIN response logic
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds Sequence/PDU list logic used by RFC-3720 for
DataSequenceInOrder=[Yes,No] and DataPDUInOrder=[Yes,No].  It also
includes support for these modes when generating iSCSI DataIN response
data from iscsi_target.c:iscsi_send_data_in().

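As a rough worked example of the list building (parameter values below
are purely illustrative): with MaxRecvDataSegmentLength=8192 and
MaxBurstLength=65536, a 196608 byte READ negotiated with
DataSequenceInOrder=No and DataPDUInOrder=No is counted by
iscsi_determine_counts_for_list() as 3 sequences of 65536 bytes, each
covered by 8 DataIN PDUs of 8192 bytes (24 PDUs total).
iscsi_build_pdu_and_seq_list() then fills in the struct iscsi_seq and
struct iscsi_pdu entries, and seq_send_order / pdu_send_order may be
randomized according to the RANDOM_DATAIN_* flags in struct
iscsi_build_list.
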
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_seq_and_pdu_list.c     |  712 +++++++++++++++++++++
 drivers/target/iscsi/iscsi_seq_and_pdu_list.h     |   88 +++
 drivers/target/iscsi/iscsi_target_datain_values.c |  550 ++++++++++++++++
 drivers/target/iscsi/iscsi_target_datain_values.h |   16 +
 4 files changed, 1366 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_seq_and_pdu_list.c
 create mode 100644 drivers/target/iscsi/iscsi_seq_and_pdu_list.h
 create mode 100644 drivers/target/iscsi/iscsi_target_datain_values.c
 create mode 100644 drivers/target/iscsi/iscsi_target_datain_values.h

diff --git a/drivers/target/iscsi/iscsi_seq_and_pdu_list.c b/drivers/target/iscsi/iscsi_seq_and_pdu_list.c
new file mode 100644
index 0000000..9a6603b
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_seq_and_pdu_list.c
@@ -0,0 +1,712 @@
+/*******************************************************************************
+ * This file contains main functions related to iSCSI DataSequenceInOrder=No
+ * and DataPDUInOrder=No.
+ *
+ * Copyright (c) 2003 PyX Technologies, Inc.
+ * Copyright (c) 2006-2007 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/random.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_util.h"
+#include "iscsi_seq_and_pdu_list.h"
+
+#define OFFLOAD_BUF_SIZE	32768
+
+/*	iscsi_dump_seq_list():
+ *
+ *
+ */
+void iscsi_dump_seq_list(struct iscsi_cmd *cmd)
+{
+	int i;
+	struct iscsi_seq *seq;
+
+	printk(KERN_INFO "Dumping Sequence List for ITT: 0x%08x:\n",
+			cmd->init_task_tag);
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		seq = &cmd->seq_list[i];
+		printk(KERN_INFO "i: %d, pdu_start: %d, pdu_count: %d,"
+			" offset: %d, xfer_len: %d, seq_send_order: %d,"
+			" seq_no: %d\n", i, seq->pdu_start, seq->pdu_count,
+			seq->offset, seq->xfer_len, seq->seq_send_order,
+			seq->seq_no);
+	}
+}
+
+/*	iscsi_dump_pdu_list():
+ *
+ *
+ */
+void iscsi_dump_pdu_list(struct iscsi_cmd *cmd)
+{
+	int i;
+	struct iscsi_pdu *pdu;
+
+	printk(KERN_INFO "Dumping PDU List for ITT: 0x%08x:\n",
+			cmd->init_task_tag);
+
+	for (i = 0; i < cmd->pdu_count; i++) {
+		pdu = &cmd->pdu_list[i];
+		printk(KERN_INFO "i: %d, offset: %d, length: %d,"
+			" pdu_send_order: %d, seq_no: %d\n", i, pdu->offset,
+			pdu->length, pdu->pdu_send_order, pdu->seq_no);
+	}
+}
+
+/*	iscsi_ordered_seq_lists():
+ *
+ *
+ */
+static inline void iscsi_ordered_seq_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	u32 i, seq_count = 0;
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		if (cmd->seq_list[i].type != SEQTYPE_NORMAL)
+			continue;
+		cmd->seq_list[i].seq_send_order = seq_count++;
+	}
+}
+
+/*	iscsi_ordered_pdu_lists():
+ *
+ *
+ */
+static inline void iscsi_ordered_pdu_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	u32 i, pdu_send_order = 0, seq_no = 0;
+
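+	/*
+	 * Assign strictly increasing pdu_send_order values, restarting the
+	 * count from zero each time a new seq_no is reached in the flat
+	 * PDU list.
+	 */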
+	for (i = 0; i < cmd->pdu_count; i++) {
+redo:
+		if (cmd->pdu_list[i].seq_no == seq_no) {
+			cmd->pdu_list[i].pdu_send_order = pdu_send_order++;
+			continue;
+		}
+		seq_no++;
+		pdu_send_order = 0;
+		goto redo;
+	}
+}
+
+/*	iscsi_create_random_array():
+ *
+ *	Generate count random values into array.
+ *	Use 0x80000000 to mark already generated values in array[].
+ */
+static inline void iscsi_create_random_array(u32 *array, u32 count)
+{
+	int i, j, k;
+
+	if (count == 1) {
+		array[0] = 0;
+		return;
+	}
+
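+	/*
+	 * Pick a random value for slot i; bit 31 is used as a "generated"
+	 * marker so collisions with already filled slots are detected and
+	 * re-rolled, and the marker is stripped once every slot is filled.
+	 */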
+	for (i = 0; i < count; i++) {
+redo:
+		get_random_bytes(&j, sizeof(u32));
+		j = (1 + (int) (9999 + 1) - j) % count;
+		for (k = 0; k < i + 1; k++) {
+			j |= 0x80000000;
+			if ((array[k] & 0x80000000) && (array[k] == j))
+				goto redo;
+		}
+		array[i] = j;
+	}
+
+	for (i = 0; i < count; i++)
+		array[i] &= ~0x80000000;
+
+	return;
+}
+
+/*	iscsi_randomize_pdu_lists():
+ *
+ *
+ */
+static inline int iscsi_randomize_pdu_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	int i = 0;
+	u32 *array, pdu_count, seq_count = 0, seq_no = 0, seq_offset = 0;
+
+	for (pdu_count = 0; pdu_count < cmd->pdu_count; pdu_count++) {
+redo:
+		if (cmd->pdu_list[pdu_count].seq_no == seq_no) {
+			seq_count++;
+			continue;
+		}
+		array = kzalloc(seq_count * sizeof(u32), GFP_KERNEL);
+		if (!(array)) {
+			printk(KERN_ERR "Unable to allocate memory"
+				" for random array.\n");
+			return -1;
+		}
+		iscsi_create_random_array(array, seq_count);
+
+		for (i = 0; i < seq_count; i++)
+			cmd->pdu_list[seq_offset+i].pdu_send_order = array[i];
+
+		kfree(array);
+
+		seq_offset += seq_count;
+		seq_count = 0;
+		seq_no++;
+		goto redo;
+	}
+
+	if (seq_count) {
+		array = kzalloc(seq_count * sizeof(u32), GFP_KERNEL);
+		if (!(array)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" random array.\n");
+			return -1;
+		}
+		iscsi_create_random_array(array, seq_count);
+
+		for (i = 0; i < seq_count; i++)
+			cmd->pdu_list[seq_offset+i].pdu_send_order = array[i];
+
+		kfree(array);
+	}
+
+	return 0;
+}
+
+/*	iscsi_randomize_seq_lists():
+ *
+ *
+ */
+static inline int iscsi_randomize_seq_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	int i, j = 0;
+	u32 *array, seq_count = cmd->seq_count;
+
+	if ((type == PDULIST_IMMEDIATE) || (type == PDULIST_UNSOLICITED))
+		seq_count--;
+	else if (type == PDULIST_IMMEDIATE_AND_UNSOLICITED)
+		seq_count -= 2;
+
+	if (!seq_count)
+		return 0;
+
+	array = kzalloc(seq_count * sizeof(u32), GFP_KERNEL);
+	if (!(array)) {
+		printk(KERN_ERR "Unable to allocate memory for random array.\n");
+		return -1;
+	}
+	iscsi_create_random_array(array, seq_count);
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		if (cmd->seq_list[i].type != SEQTYPE_NORMAL)
+			continue;
+		cmd->seq_list[i].seq_send_order = array[j++];
+	}
+
+	kfree(array);
+	return 0;
+}
+
+/*	iscsi_determine_counts_for_list():
+ *
+ *
+ */
+static inline void iscsi_determine_counts_for_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_build_list *bl,
+	u32 *seq_count,
+	u32 *pdu_count)
+{
+	int check_immediate = 0;
+	u32 burstlength = 0, offset = 0;
+	u32 unsolicited_data_length = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	if ((bl->type == PDULIST_IMMEDIATE) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		check_immediate = 1;
+
+	if ((bl->type == PDULIST_UNSOLICITED) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		unsolicited_data_length = (cmd->data_length >
+			SESS_OPS_C(conn)->FirstBurstLength) ?
+			SESS_OPS_C(conn)->FirstBurstLength : cmd->data_length;
+
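+	/*
+	 * First pass: walk the expected transfer and count how many PDUs
+	 * and sequences will be needed, honouring immediate data,
+	 * FirstBurstLength for unsolicited data-out and MaxBurstLength
+	 * for everything else.
+	 */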
+	while (offset < cmd->data_length) {
+		*pdu_count += 1;
+
+		if (check_immediate) {
+			check_immediate = 0;
+			offset += bl->immediate_data_length;
+			*seq_count += 1;
+			if (unsolicited_data_length)
+				unsolicited_data_length -=
+					bl->immediate_data_length;
+			continue;
+		}
+		if (unsolicited_data_length > 0) {
+			if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength)
+					>= cmd->data_length) {
+				unsolicited_data_length -=
+					(cmd->data_length - offset);
+				offset += (cmd->data_length - offset);
+				continue;
+			}
+			if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength)
+					>= SESS_OPS_C(conn)->FirstBurstLength) {
+				unsolicited_data_length -=
+					(SESS_OPS_C(conn)->FirstBurstLength -
+					offset);
+				offset += (SESS_OPS_C(conn)->FirstBurstLength -
+					offset);
+				burstlength = 0;
+				*seq_count += 1;
+				continue;
+			}
+
+			offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			unsolicited_data_length -=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			continue;
+		}
+		if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     cmd->data_length) {
+			offset += (cmd->data_length - offset);
+			continue;
+		}
+		if ((burstlength + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			offset += (SESS_OPS_C(conn)->MaxBurstLength -
+					burstlength);
+			burstlength = 0;
+			*seq_count += 1;
+			continue;
+		}
+
+		burstlength += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+	}
+}
+
+
+/*	iscsi_build_pdu_and_seq_list():
+ *
+ *	Builds the PDU and/or Sequence list, called while DataSequenceInOrder=No
+ *	and/or DataPDUInOrder=No.
+ */
+static inline int iscsi_build_pdu_and_seq_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_build_list *bl)
+{
+	int check_immediate = 0, datapduinorder, datasequenceinorder;
+	u32 burstlength = 0, offset = 0, i = 0;
+	u32 pdu_count = 0, seq_no = 0, unsolicited_data_length = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = cmd->pdu_list;
+	struct iscsi_seq *seq = cmd->seq_list;
+
+	datapduinorder = SESS_OPS_C(conn)->DataPDUInOrder;
+	datasequenceinorder = SESS_OPS_C(conn)->DataSequenceInOrder;
+
+	if ((bl->type == PDULIST_IMMEDIATE) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		check_immediate = 1;
+
+	if ((bl->type == PDULIST_UNSOLICITED) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		unsolicited_data_length = (cmd->data_length >
+			SESS_OPS_C(conn)->FirstBurstLength) ?
+			SESS_OPS_C(conn)->FirstBurstLength : cmd->data_length;
+
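+	/*
+	 * Second pass: fill in the struct iscsi_pdu and/or struct iscsi_seq
+	 * entries counted by iscsi_determine_counts_for_list(), using the
+	 * same immediate/unsolicited/MaxBurstLength boundaries.
+	 */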
+	while (offset < cmd->data_length) {
+		pdu_count++;
+		if (!datapduinorder) {
+			pdu[i].offset = offset;
+			pdu[i].seq_no = seq_no;
+		}
+		if (!datasequenceinorder && (pdu_count == 1)) {
+			seq[seq_no].pdu_start = i;
+			seq[seq_no].seq_no = seq_no;
+			seq[seq_no].offset = offset;
+			seq[seq_no].orig_offset = offset;
+		}
+
+		if (check_immediate) {
+			check_immediate = 0;
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_IMMEDIATE;
+				pdu[i++].length = bl->immediate_data_length;
+			}
+			if (!datasequenceinorder) {
+				seq[seq_no].type = SEQTYPE_IMMEDIATE;
+				seq[seq_no].pdu_count = 1;
+				seq[seq_no].xfer_len =
+					bl->immediate_data_length;
+			}
+			offset += bl->immediate_data_length;
+			pdu_count = 0;
+			seq_no++;
+			if (unsolicited_data_length)
+				unsolicited_data_length -=
+					bl->immediate_data_length;
+			continue;
+		}
+		if (unsolicited_data_length > 0) {
+			if ((offset +
+			     CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+			     cmd->data_length) {
+				if (!datapduinorder) {
+					pdu[i].type = PDUTYPE_UNSOLICITED;
+					pdu[i].length =
+						(cmd->data_length - offset);
+				}
+				if (!datasequenceinorder) {
+					seq[seq_no].type = SEQTYPE_UNSOLICITED;
+					seq[seq_no].pdu_count = pdu_count;
+					seq[seq_no].xfer_len = (burstlength +
+						(cmd->data_length - offset));
+				}
+				unsolicited_data_length -=
+						(cmd->data_length - offset);
+				offset += (cmd->data_length - offset);
+				continue;
+			}
+			if ((offset +
+			     CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+					SESS_OPS_C(conn)->FirstBurstLength) {
+				if (!datapduinorder) {
+					pdu[i].type = PDUTYPE_UNSOLICITED;
+					pdu[i++].length =
+					   (SESS_OPS_C(conn)->FirstBurstLength -
+						offset);
+				}
+				if (!datasequenceinorder) {
+					seq[seq_no].type = SEQTYPE_UNSOLICITED;
+					seq[seq_no].pdu_count = pdu_count;
+					seq[seq_no].xfer_len = (burstlength +
+					   (SESS_OPS_C(conn)->FirstBurstLength -
+						offset));
+				}
+				unsolicited_data_length -=
+					(SESS_OPS_C(conn)->FirstBurstLength -
+						offset);
+				offset += (SESS_OPS_C(conn)->FirstBurstLength -
+						offset);
+				burstlength = 0;
+				pdu_count = 0;
+				seq_no++;
+				continue;
+			}
+
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_UNSOLICITED;
+				pdu[i++].length =
+				     CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			}
+			burstlength += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			unsolicited_data_length -=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			continue;
+		}
+		if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     cmd->data_length) {
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_NORMAL;
+				pdu[i].length = (cmd->data_length - offset);
+			}
+			if (!datasequenceinorder) {
+				seq[seq_no].type = SEQTYPE_NORMAL;
+				seq[seq_no].pdu_count = pdu_count;
+				seq[seq_no].xfer_len = (burstlength +
+					(cmd->data_length - offset));
+			}
+			offset += (cmd->data_length - offset);
+			continue;
+		}
+		if ((burstlength + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_NORMAL;
+				pdu[i++].length =
+					(SESS_OPS_C(conn)->MaxBurstLength -
+						burstlength);
+			}
+			if (!datasequenceinorder) {
+				seq[seq_no].type = SEQTYPE_NORMAL;
+				seq[seq_no].pdu_count = pdu_count;
+				seq[seq_no].xfer_len = (burstlength +
+					(SESS_OPS_C(conn)->MaxBurstLength -
+					burstlength));
+			}
+			offset += (SESS_OPS_C(conn)->MaxBurstLength -
+					burstlength);
+			burstlength = 0;
+			pdu_count = 0;
+			seq_no++;
+			continue;
+		}
+
+		if (!datapduinorder) {
+			pdu[i].type = PDUTYPE_NORMAL;
+			pdu[i++].length =
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		}
+		burstlength += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+	}
+
+	if (!datasequenceinorder) {
+		if (bl->data_direction & ISCSI_PDU_WRITE) {
+			if (bl->randomize & RANDOM_R2T_OFFSETS) {
+				if (iscsi_randomize_seq_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_seq_lists(cmd, bl->type);
+		} else if (bl->data_direction & ISCSI_PDU_READ) {
+			if (bl->randomize & RANDOM_DATAIN_SEQ_OFFSETS) {
+				if (iscsi_randomize_seq_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_seq_lists(cmd, bl->type);
+		}
+#if 0
+		iscsi_dump_seq_list(cmd);
+#endif
+	}
+	if (!datapduinorder) {
+		if (bl->data_direction & ISCSI_PDU_WRITE) {
+			if (bl->randomize & RANDOM_DATAOUT_PDU_OFFSETS) {
+				if (iscsi_randomize_pdu_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_pdu_lists(cmd, bl->type);
+		} else if (bl->data_direction & ISCSI_PDU_READ) {
+			if (bl->randomize & RANDOM_DATAIN_PDU_OFFSETS) {
+				if (iscsi_randomize_pdu_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_pdu_lists(cmd, bl->type);
+		}
+#if 0
+		iscsi_dump_pdu_list(cmd);
+#endif
+	}
+
+	return 0;
+}
+
+/*	iscsi_do_build_list():
+ *
+ *	Only called while DataSequenceInOrder=No or DataPDUInOrder=No.
+ */
+int iscsi_do_build_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_build_list *bl)
+{
+	u32 pdu_count = 0, seq_count = 1;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = NULL;
+	struct iscsi_seq *seq = NULL;
+
+	iscsi_determine_counts_for_list(cmd, bl, &seq_count, &pdu_count);
+
+	if (!SESS_OPS_C(conn)->DataSequenceInOrder) {
+		seq = kzalloc(seq_count * sizeof(struct iscsi_seq), GFP_ATOMIC);
+		if (!(seq)) {
+			printk(KERN_ERR "Unable to allocate struct iscsi_seq list\n");
+			return -1;
+		}
+		cmd->seq_list = seq;
+		cmd->seq_count = seq_count;
+	}
+
+	if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+		pdu = kzalloc(pdu_count * sizeof(struct iscsi_pdu), GFP_ATOMIC);
+		if (!(pdu)) {
+			printk(KERN_ERR "Unable to allocate struct iscsi_pdu list.\n");
+			kfree(seq);
+			return -1;
+		}
+		cmd->pdu_list = pdu;
+		cmd->pdu_count = pdu_count;
+	}
+
+	return iscsi_build_pdu_and_seq_list(cmd, bl);
+}
+
+/*	iscsi_get_pdu_holder():
+ *
+ *
+ */
+struct iscsi_pdu *iscsi_get_pdu_holder(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 length)
+{
+	u32 i;
+	struct iscsi_pdu *pdu = NULL;
+
+	if (!cmd->pdu_list) {
+		printk(KERN_ERR "struct iscsi_cmd->pdu_list is NULL!\n");
+		return NULL;
+	}
+
+	pdu = &cmd->pdu_list[0];
+
+	for (i = 0; i < cmd->pdu_count; i++)
+		if ((pdu[i].offset == offset) && (pdu[i].length == length))
+			return &pdu[i];
+
+	printk(KERN_ERR "Unable to locate PDU holder for ITT: 0x%08x, Offset:"
+		" %u, Length: %u\n", cmd->init_task_tag, offset, length);
+	return NULL;
+}
+
+/*	iscsi_get_pdu_holder_for_seq():
+ *
+ *
+ */
+struct iscsi_pdu *iscsi_get_pdu_holder_for_seq(
+	struct iscsi_cmd *cmd,
+	struct iscsi_seq *seq)
+{
+	u32 i;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = NULL;
+
+	if (!cmd->pdu_list) {
+		printk(KERN_ERR "struct iscsi_cmd->pdu_list is NULL!\n");
+		return NULL;
+	}
+
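+	/*
+	 * With DataSequenceInOrder=Yes the sequences themselves are sent in
+	 * order, so scan pdu_list[] from cmd->pdu_start for the next
+	 * pdu_send_order within the current cmd->seq_no.  Otherwise the
+	 * caller supplies the struct iscsi_seq and the search is limited to
+	 * that sequence's PDUs.
+	 */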
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+redo:
+		pdu = &cmd->pdu_list[cmd->pdu_start];
+
+		for (i = 0; pdu[i].seq_no != cmd->seq_no; i++) {
+#if 0
+			printk(KERN_INFO "pdu[i].seq_no: %d, pdu[i].pdu"
+				"_send_order: %d, pdu[i].offset: %d,"
+				" pdu[i].length: %d\n", pdu[i].seq_no,
+				pdu[i].pdu_send_order, pdu[i].offset,
+				pdu[i].length);
+#endif
+			if (pdu[i].pdu_send_order == cmd->pdu_send_order) {
+				cmd->pdu_send_order++;
+				return &pdu[i];
+			}
+		}
+
+		cmd->pdu_start += cmd->pdu_send_order;
+		cmd->pdu_send_order = 0;
+		cmd->seq_no++;
+
+		if (cmd->pdu_start < cmd->pdu_count)
+			goto redo;
+
+		printk(KERN_ERR "Command ITT: 0x%08x unable to locate"
+			" struct iscsi_pdu for cmd->pdu_send_order: %u.\n",
+			cmd->init_task_tag, cmd->pdu_send_order);
+		return NULL;
+	} else {
+		if (!seq) {
+			printk(KERN_ERR "struct iscsi_seq is NULL!\n");
+			return NULL;
+		}
+#if 0
+		printk(KERN_INFO "seq->pdu_start: %d, seq->pdu_count: %d,"
+			" seq->seq_no: %d\n", seq->pdu_start, seq->pdu_count,
+			seq->seq_no);
+#endif
+		pdu = &cmd->pdu_list[seq->pdu_start];
+
+		if (seq->pdu_send_order == seq->pdu_count) {
+			printk(KERN_ERR "Command ITT: 0x%08x seq->pdu_send"
+				"_order: %u equals seq->pdu_count: %u\n",
+				cmd->init_task_tag, seq->pdu_send_order,
+				seq->pdu_count);
+			return NULL;
+		}
+
+		for (i = 0; i < seq->pdu_count; i++) {
+			if (pdu[i].pdu_send_order == seq->pdu_send_order) {
+				seq->pdu_send_order++;
+				return &pdu[i];
+			}
+		}
+
+		printk(KERN_ERR "Command ITT: 0x%08x unable to locate iscsi"
+			"_pdu_t for seq->pdu_send_order: %u.\n",
+			cmd->init_task_tag, seq->pdu_send_order);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/*	iscsi_get_seq_holder():
+ *
+ *
+ */
+struct iscsi_seq *iscsi_get_seq_holder(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 length)
+{
+	u32 i;
+
+	if (!cmd->seq_list) {
+		printk(KERN_ERR "struct iscsi_cmd->seq_list is NULL!\n");
+		return NULL;
+	}
+
+	for (i = 0; i < cmd->seq_count; i++) {
+#if 0
+		printk(KERN_INFO "seq_list[i].orig_offset: %d, seq_list[i]."
+			"xfer_len: %d, seq_list[i].seq_no %u\n",
+			cmd->seq_list[i].orig_offset, cmd->seq_list[i].xfer_len,
+			cmd->seq_list[i].seq_no);
+#endif
+		if ((cmd->seq_list[i].orig_offset +
+				cmd->seq_list[i].xfer_len) >=
+				(offset + length))
+			return &cmd->seq_list[i];
+	}
+
+	printk(KERN_ERR "Unable to locate Sequence holder for ITT: 0x%08x,"
+		" Offset: %u, Length: %u\n", cmd->init_task_tag, offset,
+		length);
+	return NULL;
+}
diff --git a/drivers/target/iscsi/iscsi_seq_and_pdu_list.h b/drivers/target/iscsi/iscsi_seq_and_pdu_list.h
new file mode 100644
index 0000000..7b4c1bd
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_seq_and_pdu_list.h
@@ -0,0 +1,88 @@
+#ifndef ISCSI_SEQ_AND_PDU_LIST_H
+#define ISCSI_SEQ_AND_PDU_LIST_H
+
+/* struct iscsi_pdu->status */
+#define DATAOUT_PDU_SENT			1
+
+/* struct iscsi_seq->type */
+#define SEQTYPE_IMMEDIATE			1
+#define SEQTYPE_UNSOLICITED			2
+#define SEQTYPE_NORMAL				3
+
+/* struct iscsi_seq->status */
+#define DATAOUT_SEQUENCE_GOT_R2T		1
+#define DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY 2
+#define DATAOUT_SEQUENCE_COMPLETE		3
+
+/* iscsi_determine_counts_for_list() type */
+#define PDULIST_NORMAL				1
+#define PDULIST_IMMEDIATE			2
+#define PDULIST_UNSOLICITED			3
+#define PDULIST_IMMEDIATE_AND_UNSOLICITED	4
+
+/* struct iscsi_pdu->type */
+#define PDUTYPE_IMMEDIATE			1
+#define PDUTYPE_UNSOLICITED			2
+#define PDUTYPE_NORMAL				3
+
+/* struct iscsi_pdu->status */
+#define ISCSI_PDU_NOT_RECEIVED			0
+#define ISCSI_PDU_RECEIVED_OK			1
+#define ISCSI_PDU_CRC_FAILED			2
+#define ISCSI_PDU_TIMED_OUT			3
+
+/* struct iscsi_build_list->randomize */
+#define RANDOM_DATAIN_PDU_OFFSETS		0x01
+#define RANDOM_DATAIN_SEQ_OFFSETS		0x02
+#define RANDOM_DATAOUT_PDU_OFFSETS		0x04
+#define RANDOM_R2T_OFFSETS			0x08
+
+/* struct iscsi_build_list->data_direction */
+#define ISCSI_PDU_READ				0x01
+#define ISCSI_PDU_WRITE				0x02
+
+struct iscsi_build_list {
+	u8		data_direction;
+	u8		randomize;
+	u8		type;
+	u32		immediate_data_length;
+} ____cacheline_aligned;
+
+struct iscsi_pdu {
+	int		status;
+	int		type;
+	u8		flags;
+	u32		data_sn;
+	u32		length;
+	u32		offset;
+	u32		pdu_send_order;
+	u32		seq_no;
+} ____cacheline_aligned;
+
+struct iscsi_seq {
+	int		sent;
+	int		status;
+	int		type;
+	u32		data_sn;
+	u32		first_datasn;
+	u32		last_datasn;
+	u32		next_burst_len;
+	u32		pdu_start;
+	u32		pdu_count;
+	u32		offset;
+	u32		orig_offset;
+	u32		pdu_send_order;
+	u32		r2t_sn;
+	u32		seq_send_order;
+	u32		seq_no;
+	u32		xfer_len;
+} ____cacheline_aligned;
+
+extern struct iscsi_global *iscsi_global;
+
+extern int iscsi_do_build_list(struct iscsi_cmd *, struct iscsi_build_list *);
+extern struct iscsi_pdu *iscsi_get_pdu_holder(struct iscsi_cmd *, u32, u32);
+extern struct iscsi_pdu *iscsi_get_pdu_holder_for_seq(struct iscsi_cmd *, struct iscsi_seq *);
+extern struct iscsi_seq *iscsi_get_seq_holder(struct iscsi_cmd *, u32, u32);
+
+#endif /* ISCSI_SEQ_AND_PDU_LIST_H */
diff --git a/drivers/target/iscsi/iscsi_target_datain_values.c b/drivers/target/iscsi/iscsi_target_datain_values.c
new file mode 100644
index 0000000..26ec1d5
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_datain_values.c
@@ -0,0 +1,550 @@
+/*******************************************************************************
+ * This file contains the iSCSI Target DataIN value generation functions.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2006 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/delay.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+#include <linux/in.h>
+#include <scsi/iscsi_proto.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_datain_values.h"
+
+struct iscsi_datain_req *iscsi_allocate_datain_req(void)
+{
+	struct iscsi_datain_req *dr;
+
+	dr = kmem_cache_zalloc(lio_dr_cache, GFP_ATOMIC);
+	if (!(dr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_datain_req\n");
+		return NULL;
+	}
+	INIT_LIST_HEAD(&dr->dr_list);
+
+	return dr;
+}
+
+void iscsi_attach_datain_req(struct iscsi_cmd *cmd, struct iscsi_datain_req *dr)
+{
+	spin_lock(&cmd->datain_lock);
+	list_add_tail(&dr->dr_list, &cmd->datain_list);
+	spin_unlock(&cmd->datain_lock);
+}
+
+void iscsi_free_datain_req(struct iscsi_cmd *cmd, struct iscsi_datain_req *dr)
+{
+	spin_lock(&cmd->datain_lock);
+	list_del(&dr->dr_list);
+	spin_unlock(&cmd->datain_lock);
+
+	kmem_cache_free(lio_dr_cache, dr);
+}
+
+void iscsi_free_all_datain_reqs(struct iscsi_cmd *cmd)
+{
+	struct iscsi_datain_req *dr, *dr_tmp;
+
+	spin_lock(&cmd->datain_lock);
+	list_for_each_entry_safe(dr, dr_tmp, &cmd->datain_list, dr_list) {
+		list_del(&dr->dr_list);
+		kmem_cache_free(lio_dr_cache, dr);
+	}
+	spin_unlock(&cmd->datain_lock);
+}
+
+struct iscsi_datain_req *iscsi_get_datain_req(struct iscsi_cmd *cmd)
+{
+	struct iscsi_datain_req *dr;
+
+	if (list_empty(&cmd->datain_list)) {
+		printk(KERN_ERR "cmd->datain_list is empty for ITT:"
+			" 0x%08x\n", cmd->init_task_tag);
+		return NULL;
+	}
+	list_for_each_entry(dr, &cmd->datain_list, dr_list)
+		break;
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_yes_and_yes():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=Yes and DataPDUInOrder=Yes.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_yes_and_yes(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 next_burst_len, read_data_done, read_data_left;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	next_burst_len = (!dr->recovery) ?
+			cmd->next_burst_len : dr->next_burst_len;
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return NULL;
+	}
+
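+	/*
+	 * Either this DataIN PDU finishes the transfer (F and S bits set),
+	 * continues the current burst with a full MaxRecvDataSegmentLength
+	 * payload, or closes the burst at MaxBurstLength (F bit set).
+	 */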
+	if ((read_data_left <= CONN_OPS(conn)->MaxRecvDataSegmentLength) &&
+	    (read_data_left <= (SESS_OPS_C(conn)->MaxBurstLength -
+	     next_burst_len))) {
+		datain->length = read_data_left;
+
+		datain->flags |= (ISCSI_FLAG_CMD_FINAL | ISCSI_FLAG_DATA_STATUS);
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			datain->flags |= ISCSI_FLAG_DATA_ACK;
+	} else {
+		if ((next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			datain->length =
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			next_burst_len += datain->length;
+		} else {
+			datain->length = (SESS_OPS_C(conn)->MaxBurstLength -
+					  next_burst_len);
+			next_burst_len = 0;
+
+			datain->flags |= ISCSI_FLAG_CMD_FINAL;
+			if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+				datain->flags |= ISCSI_FLAG_DATA_ACK;
+		}
+	}
+
+	datain->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	datain->offset = read_data_done;
+
+	if (!dr->recovery) {
+		cmd->next_burst_len = next_burst_len;
+		cmd->read_data_done += datain->length;
+	} else {
+		dr->next_burst_len = next_burst_len;
+		dr->read_data_done += datain->length;
+	}
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_no_and_yes():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=No and DataPDUInOrder=Yes.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_no_and_yes(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 offset, read_data_done, read_data_left, seq_send_order;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct iscsi_seq *seq;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_no(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+	seq_send_order = (!dr->recovery) ?
+			cmd->seq_send_order : dr->seq_send_order;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return NULL;
+	}
+
+	seq = iscsi_get_seq_holder_for_datain(cmd, seq_send_order);
+	if (!(seq))
+		return NULL;
+
+	seq->sent = 1;
+
+	if (!dr->recovery && !seq->next_burst_len)
+		seq->first_datasn = cmd->data_sn;
+
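+	/*
+	 * Burst accounting lives in the struct iscsi_seq here, since with
+	 * DataSequenceInOrder=No the sequences may be sent out of order.
+	 */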
+	offset = (seq->offset + seq->next_burst_len);
+
+	if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+	     cmd->data_length) {
+		datain->length = (cmd->data_length - offset);
+		datain->offset = offset;
+
+		datain->flags |= ISCSI_FLAG_CMD_FINAL;
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			datain->flags |= ISCSI_FLAG_DATA_ACK;
+
+		seq->next_burst_len = 0;
+		seq_send_order++;
+	} else {
+		if ((seq->next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			datain->length =
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			datain->offset = (seq->offset + seq->next_burst_len);
+
+			seq->next_burst_len += datain->length;
+		} else {
+			datain->length = (SESS_OPS_C(conn)->MaxBurstLength -
+					  seq->next_burst_len);
+			datain->offset = (seq->offset + seq->next_burst_len);
+
+			datain->flags |= ISCSI_FLAG_CMD_FINAL;
+			if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+				datain->flags |= ISCSI_FLAG_DATA_ACK;
+
+			seq->next_burst_len = 0;
+			seq_send_order++;
+		}
+	}
+
+	if ((read_data_done + datain->length) == cmd->data_length)
+		datain->flags |= ISCSI_FLAG_DATA_STATUS;
+
+	datain->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	if (!dr->recovery) {
+		cmd->seq_send_order = seq_send_order;
+		cmd->read_data_done += datain->length;
+	} else {
+		dr->seq_send_order = seq_send_order;
+		dr->read_data_done += datain->length;
+	}
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_CMD_FINAL)
+			seq->last_datasn = datain->data_sn;
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_yes_and_no():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=Yes and DataPDUInOrder=No.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_yes_and_no(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 next_burst_len, read_data_done, read_data_left;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct iscsi_pdu *pdu;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	next_burst_len = (!dr->recovery) ?
+			cmd->next_burst_len : dr->next_burst_len;
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return dr;
+	}
+
+	pdu = iscsi_get_pdu_holder_for_seq(cmd, NULL);
+	if (!(pdu))
+		return dr;
+
+	if ((read_data_done + pdu->length) == cmd->data_length) {
+		pdu->flags |= (ISCSI_FLAG_CMD_FINAL | ISCSI_FLAG_DATA_STATUS);
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			pdu->flags |= ISCSI_FLAG_DATA_ACK;
+
+		next_burst_len = 0;
+	} else {
+		if ((next_burst_len + CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength)
+			next_burst_len += pdu->length;
+		else {
+			pdu->flags |= ISCSI_FLAG_CMD_FINAL;
+			if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+				pdu->flags |= ISCSI_FLAG_DATA_ACK;
+
+			next_burst_len = 0;
+		}
+	}
+
+	pdu->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	if (!dr->recovery) {
+		cmd->next_burst_len = next_burst_len;
+		cmd->read_data_done += pdu->length;
+	} else {
+		dr->next_burst_len = next_burst_len;
+		dr->read_data_done += pdu->length;
+	}
+
+	datain->flags = pdu->flags;
+	datain->length = pdu->length;
+	datain->offset = pdu->offset;
+	datain->data_sn = pdu->data_sn;
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_no_and_no():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=No and DataPDUInOrder=No.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_no_and_no(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 read_data_done, read_data_left, seq_send_order;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct iscsi_pdu *pdu;
+	struct iscsi_seq *seq = NULL;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_no(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+	seq_send_order = (!dr->recovery) ?
+			cmd->seq_send_order : dr->seq_send_order;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return NULL;
+	}
+
+	seq = iscsi_get_seq_holder_for_datain(cmd, seq_send_order);
+	if (!(seq))
+		return NULL;
+
+	seq->sent = 1;
+
+	if (!dr->recovery && !seq->next_burst_len)
+		seq->first_datasn = cmd->data_sn;
+
+	pdu = iscsi_get_pdu_holder_for_seq(cmd, seq);
+	if (!(pdu))
+		return NULL;
+
+	if (seq->pdu_send_order == seq->pdu_count) {
+		pdu->flags |= ISCSI_FLAG_CMD_FINAL;
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			pdu->flags |= ISCSI_FLAG_DATA_ACK;
+
+		seq->next_burst_len = 0;
+		seq_send_order++;
+	} else
+		seq->next_burst_len += pdu->length;
+
+	if ((read_data_done + pdu->length) == cmd->data_length)
+		pdu->flags |= ISCSI_FLAG_DATA_STATUS;
+
+	pdu->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	if (!dr->recovery) {
+		cmd->seq_send_order = seq_send_order;
+		cmd->read_data_done += pdu->length;
+	} else {
+		dr->seq_send_order = seq_send_order;
+		dr->read_data_done += pdu->length;
+	}
+
+	datain->flags = pdu->flags;
+	datain->length = pdu->length;
+	datain->offset = pdu->offset;
+	datain->data_sn = pdu->data_sn;
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_CMD_FINAL)
+			seq->last_datasn = datain->data_sn;
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_get_datain_values():
+ *
+ *
+ */
+struct iscsi_datain_req *iscsi_get_datain_values(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
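+	/*
+	 * Dispatch on the negotiated DataSequenceInOrder / DataPDUInOrder
+	 * combination.
+	 */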
+	if (SESS_OPS_C(conn)->DataSequenceInOrder &&
+	    SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_yes_and_yes(cmd, datain);
+	else if (!SESS_OPS_C(conn)->DataSequenceInOrder &&
+		  SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_no_and_yes(cmd, datain);
+	else if (SESS_OPS_C(conn)->DataSequenceInOrder &&
+		 !SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_yes_and_no(cmd, datain);
+	else if (!SESS_OPS_C(conn)->DataSequenceInOrder &&
+		   !SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_no_and_no(cmd, datain);
+
+	return NULL;
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_datain_values.h b/drivers/target/iscsi/iscsi_target_datain_values.h
new file mode 100644
index 0000000..0534835
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_datain_values.h
@@ -0,0 +1,16 @@
+#ifndef ISCSI_TARGET_DATAIN_VALUES_H
+#define ISCSI_TARGET_DATAIN_VALUES_H
+
+extern struct iscsi_datain_req *iscsi_allocate_datain_req(void);
+extern void iscsi_attach_datain_req(struct iscsi_cmd *, struct iscsi_datain_req *);
+extern void iscsi_free_datain_req(struct iscsi_cmd *, struct iscsi_datain_req *);
+extern void iscsi_free_all_datain_reqs(struct iscsi_cmd *);
+extern struct iscsi_datain_req *iscsi_get_datain_req(struct iscsi_cmd *);
+extern struct iscsi_datain_req *iscsi_get_datain_values(struct iscsi_cmd *,
+			struct iscsi_datain *);
+
+extern struct iscsi_global *iscsi_global;
+extern struct kmem_cache *lio_dr_cache;
+
+#endif   /*** ISCSI_TARGET_DATAIN_VALUES_H ***/
+
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
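
A rough sketch of how an iscsi_send_data_in() style caller is expected to
drive iscsi_get_datain_values() (the wrapper below is illustrative only;
real transmit, recovery and error handling are omitted):

static int send_all_datain_sketch(struct iscsi_cmd *cmd)
{
	struct iscsi_datain datain;
	struct iscsi_datain_req *dr;

	do {
		memset(&datain, 0, sizeof(datain));
		/*
		 * Fills in datain.offset/length for this DataIN payload
		 * slice, datain.data_sn, and datain.flags with
		 * ISCSI_FLAG_CMD_FINAL / ISCSI_FLAG_DATA_STATUS as
		 * computed by the helpers above.
		 */
		dr = iscsi_get_datain_values(cmd, &datain);
		if (!dr)
			return -1;

		/* ... build and queue one DataIN PDU from datain here ... */

	} while (!(datain.flags & ISCSI_FLAG_DATA_STATUS));

	return 0;
}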

* [RFC 08/12] iscsi-target: Add Sequence/PDU list + DataIN response logic
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds Sequence/PDU list logic used by RFC-3720 for
DataSequenceInOrder=[Yes,No] and DataPDUInOrder=[Yes,No].  It also
includes support for generating iSCSI DataIN response data in these
modes from iscsi_target.c:iscsi_send_data_in().
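
The expected consumer is iscsi_target.c:iscsi_send_data_in(), which is
not included in this patch; a minimal sketch of the per-PDU usage, with
the caller's error handling assumed, looks like:

	struct iscsi_datain datain;
	struct iscsi_datain_req *dr;

	memset(&datain, 0, sizeof(struct iscsi_datain));
	dr = iscsi_get_datain_values(cmd, &datain);
	if (!dr)
		return -1;
	/*
	 * datain.offset, .length, .data_sn and .flags now describe the
	 * next DataIN PDU to send, and dr->dr_complete is set once the
	 * final (F+S bit) PDU for normal or recovery DataIN is built.
	 */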

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_seq_and_pdu_list.c     |  712 +++++++++++++++++++++
 drivers/target/iscsi/iscsi_seq_and_pdu_list.h     |   88 +++
 drivers/target/iscsi/iscsi_target_datain_values.c |  550 ++++++++++++++++
 drivers/target/iscsi/iscsi_target_datain_values.h |   16 +
 4 files changed, 1366 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_seq_and_pdu_list.c
 create mode 100644 drivers/target/iscsi/iscsi_seq_and_pdu_list.h
 create mode 100644 drivers/target/iscsi/iscsi_target_datain_values.c
 create mode 100644 drivers/target/iscsi/iscsi_target_datain_values.h

diff --git a/drivers/target/iscsi/iscsi_seq_and_pdu_list.c b/drivers/target/iscsi/iscsi_seq_and_pdu_list.c
new file mode 100644
index 0000000..9a6603b
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_seq_and_pdu_list.c
@@ -0,0 +1,712 @@
+/*******************************************************************************
+ * This file contains the main functions related to iSCSI DataSequenceInOrder=No
+ * and DataPDUInOrder=No.
+ *
+ * Copyright (c) 2003 PyX Technologies, Inc.
+ * Copyright (c) 2006-2007 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/random.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_util.h"
+#include "iscsi_seq_and_pdu_list.h"
+
+#define OFFLOAD_BUF_SIZE	32768
+
+/*	iscsi_dump_seq_list():
+ *
+ *
+ */
+void iscsi_dump_seq_list(struct iscsi_cmd *cmd)
+{
+	int i;
+	struct iscsi_seq *seq;
+
+	printk(KERN_INFO "Dumping Sequence List for ITT: 0x%08x:\n",
+			cmd->init_task_tag);
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		seq = &cmd->seq_list[i];
+		printk(KERN_INFO "i: %d, pdu_start: %d, pdu_count: %d,"
+			" offset: %d, xfer_len: %d, seq_send_order: %d,"
+			" seq_no: %d\n", i, seq->pdu_start, seq->pdu_count,
+			seq->offset, seq->xfer_len, seq->seq_send_order,
+			seq->seq_no);
+	}
+}
+
+/*	iscsi_dump_pdu_list():
+ *
+ *
+ */
+void iscsi_dump_pdu_list(struct iscsi_cmd *cmd)
+{
+	int i;
+	struct iscsi_pdu *pdu;
+
+	printk(KERN_INFO "Dumping PDU List for ITT: 0x%08x:\n",
+			cmd->init_task_tag);
+
+	for (i = 0; i < cmd->pdu_count; i++) {
+		pdu = &cmd->pdu_list[i];
+		printk(KERN_INFO "i: %d, offset: %d, length: %d,"
+			" pdu_send_order: %d, seq_no: %d\n", i, pdu->offset,
+			pdu->length, pdu->pdu_send_order, pdu->seq_no);
+	}
+}
+
+/*	iscsi_ordered_seq_lists():
+ *
+ *
+ */
+static inline void iscsi_ordered_seq_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	u32 i, seq_count = 0;
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		if (cmd->seq_list[i].type != SEQTYPE_NORMAL)
+			continue;
+		cmd->seq_list[i].seq_send_order = seq_count++;
+	}
+}
+
+/*	iscsi_ordered_pdu_lists():
+ *
+ *
+ */
+static inline void iscsi_ordered_pdu_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	u32 i, pdu_send_order = 0, seq_no = 0;
+
+	for (i = 0; i < cmd->pdu_count; i++) {
+redo:
+		if (cmd->pdu_list[i].seq_no == seq_no) {
+			cmd->pdu_list[i].pdu_send_order = pdu_send_order++;
+			continue;
+		}
+		seq_no++;
+		pdu_send_order = 0;
+		goto redo;
+	}
+}
+
+/*	iscsi_create_random_array():
+ *
+ *	Generate count random values into array.
+ *	Use 0x80000000 to mark generated values in array[].
+ */
+static inline void iscsi_create_random_array(u32 *array, u32 count)
+{
+	int i, j, k;
+
+	if (count == 1) {
+		array[0] = 0;
+		return;
+	}
+
+	for (i = 0; i < count; i++) {
+redo:
+		get_random_bytes(&j, sizeof(u32));
+		j = (1 + (int) (9999 + 1) - j) % count;
+		for (k = 0; k < i + 1; k++) {
+			j |= 0x80000000;
+			if ((array[k] & 0x80000000) && (array[k] == j))
+				goto redo;
+		}
+		array[i] = j;
+	}
+
+	for (i = 0; i < count; i++)
+		array[i] &= ~0x80000000;
+
+	return;
+}
+
+/*	iscsi_randomize_pdu_lists():
+ *
+ *
+ */
+static inline int iscsi_randomize_pdu_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	int i = 0;
+	u32 *array, pdu_count, seq_count = 0, seq_no = 0, seq_offset = 0;
+
+	for (pdu_count = 0; pdu_count < cmd->pdu_count; pdu_count++) {
+redo:
+		if (cmd->pdu_list[pdu_count].seq_no == seq_no) {
+			seq_count++;
+			continue;
+		}
+		array = kzalloc(seq_count * sizeof(u32), GFP_KERNEL);
+		if (!(array)) {
+			printk(KERN_ERR "Unable to allocate memory"
+				" for random array.\n");
+			return -1;
+		}
+		iscsi_create_random_array(array, seq_count);
+
+		for (i = 0; i < seq_count; i++)
+			cmd->pdu_list[seq_offset+i].pdu_send_order = array[i];
+
+		kfree(array);
+
+		seq_offset += seq_count;
+		seq_count = 0;
+		seq_no++;
+		goto redo;
+	}
+
+	if (seq_count) {
+		array = kzalloc(seq_count * sizeof(u32), GFP_KERNEL);
+		if (!(array)) {
+			printk(KERN_ERR "Unable to allocate memory for"
+				" random array.\n");
+			return -1;
+		}
+		iscsi_create_random_array(array, seq_count);
+
+		for (i = 0; i < seq_count; i++)
+			cmd->pdu_list[seq_offset+i].pdu_send_order = array[i];
+
+		kfree(array);
+	}
+
+	return 0;
+}
+
+/*	iscsi_randomize_seq_lists():
+ *
+ *
+ */
+static inline int iscsi_randomize_seq_lists(
+	struct iscsi_cmd *cmd,
+	u8 type)
+{
+	int i, j = 0;
+	u32 *array, seq_count = cmd->seq_count;
+
+	if ((type == PDULIST_IMMEDIATE) || (type == PDULIST_UNSOLICITED))
+		seq_count--;
+	else if (type == PDULIST_IMMEDIATE_AND_UNSOLICITED)
+		seq_count -= 2;
+
+	if (!seq_count)
+		return 0;
+
+	array = kzalloc(seq_count * sizeof(u32), GFP_KERNEL);
+	if (!(array)) {
+		printk(KERN_ERR "Unable to allocate memory for random array.\n");
+		return -1;
+	}
+	iscsi_create_random_array(array, seq_count);
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		if (cmd->seq_list[i].type != SEQTYPE_NORMAL)
+			continue;
+		cmd->seq_list[i].seq_send_order = array[j++];
+	}
+
+	kfree(array);
+	return 0;
+}
+
+/*	iscsi_determine_counts_for_list():
+ *
+ *
+ */
+static inline void iscsi_determine_counts_for_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_build_list *bl,
+	u32 *seq_count,
+	u32 *pdu_count)
+{
+	int check_immediate = 0;
+	u32 burstlength = 0, offset = 0;
+	u32 unsolicited_data_length = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	if ((bl->type == PDULIST_IMMEDIATE) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		check_immediate = 1;
+
+	if ((bl->type == PDULIST_UNSOLICITED) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		unsolicited_data_length = (cmd->data_length >
+			SESS_OPS_C(conn)->FirstBurstLength) ?
+			SESS_OPS_C(conn)->FirstBurstLength : cmd->data_length;
+
+	while (offset < cmd->data_length) {
+		*pdu_count += 1;
+
+		if (check_immediate) {
+			check_immediate = 0;
+			offset += bl->immediate_data_length;
+			*seq_count += 1;
+			if (unsolicited_data_length)
+				unsolicited_data_length -=
+					bl->immediate_data_length;
+			continue;
+		}
+		if (unsolicited_data_length > 0) {
+			if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength)
+					>= cmd->data_length) {
+				unsolicited_data_length -=
+					(cmd->data_length - offset);
+				offset += (cmd->data_length - offset);
+				continue;
+			}
+			if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength)
+					>= SESS_OPS_C(conn)->FirstBurstLength) {
+				unsolicited_data_length -=
+					(SESS_OPS_C(conn)->FirstBurstLength -
+					offset);
+				offset += (SESS_OPS_C(conn)->FirstBurstLength -
+					offset);
+				burstlength = 0;
+				*seq_count += 1;
+				continue;
+			}
+
+			offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			unsolicited_data_length -=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			continue;
+		}
+		if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     cmd->data_length) {
+			offset += (cmd->data_length - offset);
+			continue;
+		}
+		if ((burstlength + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			offset += (SESS_OPS_C(conn)->MaxBurstLength -
+					burstlength);
+			burstlength = 0;
+			*seq_count += 1;
+			continue;
+		}
+
+		burstlength += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+	}
+}
+
+
+/*	iscsi_build_pdu_and_seq_list():
+ *
+ *	Builds PDU and/or Sequence list, called while DataSequenceInOrder=No
+ *	and/or DataPDUInOrder=No.
+ */
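+/*
+ * Illustrative example (parameter values assumed): a 128k READ with
+ * MaxRecvDataSegmentLength=8192 and MaxBurstLength=65536 is carved into
+ * 16 DataIN PDUs of 8k each, grouped into 2 sequences of 8 PDUs.  With
+ * DataPDUInOrder=No and/or DataSequenceInOrder=No, pdu_send_order within
+ * each sequence and seq_send_order across sequences are then either kept
+ * in order or randomized below, depending on the bl->randomize flags.
+ */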
+static inline int iscsi_build_pdu_and_seq_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_build_list *bl)
+{
+	int check_immediate = 0, datapduinorder, datasequenceinorder;
+	u32 burstlength = 0, offset = 0, i = 0;
+	u32 pdu_count = 0, seq_no = 0, unsolicited_data_length = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = cmd->pdu_list;
+	struct iscsi_seq *seq = cmd->seq_list;
+
+	datapduinorder = SESS_OPS_C(conn)->DataPDUInOrder;
+	datasequenceinorder = SESS_OPS_C(conn)->DataSequenceInOrder;
+
+	if ((bl->type == PDULIST_IMMEDIATE) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		check_immediate = 1;
+
+	if ((bl->type == PDULIST_UNSOLICITED) ||
+	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
+		unsolicited_data_length = (cmd->data_length >
+			SESS_OPS_C(conn)->FirstBurstLength) ?
+			SESS_OPS_C(conn)->FirstBurstLength : cmd->data_length;
+
+	while (offset < cmd->data_length) {
+		pdu_count++;
+		if (!datapduinorder) {
+			pdu[i].offset = offset;
+			pdu[i].seq_no = seq_no;
+		}
+		if (!datasequenceinorder && (pdu_count == 1)) {
+			seq[seq_no].pdu_start = i;
+			seq[seq_no].seq_no = seq_no;
+			seq[seq_no].offset = offset;
+			seq[seq_no].orig_offset = offset;
+		}
+
+		if (check_immediate) {
+			check_immediate = 0;
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_IMMEDIATE;
+				pdu[i++].length = bl->immediate_data_length;
+			}
+			if (!datasequenceinorder) {
+				seq[seq_no].type = SEQTYPE_IMMEDIATE;
+				seq[seq_no].pdu_count = 1;
+				seq[seq_no].xfer_len =
+					bl->immediate_data_length;
+			}
+			offset += bl->immediate_data_length;
+			pdu_count = 0;
+			seq_no++;
+			if (unsolicited_data_length)
+				unsolicited_data_length -=
+					bl->immediate_data_length;
+			continue;
+		}
+		if (unsolicited_data_length > 0) {
+			if ((offset +
+			     CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+			     cmd->data_length) {
+				if (!datapduinorder) {
+					pdu[i].type = PDUTYPE_UNSOLICITED;
+					pdu[i].length =
+						(cmd->data_length - offset);
+				}
+				if (!datasequenceinorder) {
+					seq[seq_no].type = SEQTYPE_UNSOLICITED;
+					seq[seq_no].pdu_count = pdu_count;
+					seq[seq_no].xfer_len = (burstlength +
+						(cmd->data_length - offset));
+				}
+				unsolicited_data_length -=
+						(cmd->data_length - offset);
+				offset += (cmd->data_length - offset);
+				continue;
+			}
+			if ((offset +
+			     CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+					SESS_OPS_C(conn)->FirstBurstLength) {
+				if (!datapduinorder) {
+					pdu[i].type = PDUTYPE_UNSOLICITED;
+					pdu[i++].length =
+					   (SESS_OPS_C(conn)->FirstBurstLength -
+						offset);
+				}
+				if (!datasequenceinorder) {
+					seq[seq_no].type = SEQTYPE_UNSOLICITED;
+					seq[seq_no].pdu_count = pdu_count;
+					seq[seq_no].xfer_len = (burstlength +
+					   (SESS_OPS_C(conn)->FirstBurstLength -
+						offset));
+				}
+				unsolicited_data_length -=
+					(SESS_OPS_C(conn)->FirstBurstLength -
+						offset);
+				offset += (SESS_OPS_C(conn)->FirstBurstLength -
+						offset);
+				burstlength = 0;
+				pdu_count = 0;
+				seq_no++;
+				continue;
+			}
+
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_UNSOLICITED;
+				pdu[i++].length =
+				     CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			}
+			burstlength += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			unsolicited_data_length -=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			continue;
+		}
+		if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     cmd->data_length) {
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_NORMAL;
+				pdu[i].length = (cmd->data_length - offset);
+			}
+			if (!datasequenceinorder) {
+				seq[seq_no].type = SEQTYPE_NORMAL;
+				seq[seq_no].pdu_count = pdu_count;
+				seq[seq_no].xfer_len = (burstlength +
+					(cmd->data_length - offset));
+			}
+			offset += (cmd->data_length - offset);
+			continue;
+		}
+		if ((burstlength + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			if (!datapduinorder) {
+				pdu[i].type = PDUTYPE_NORMAL;
+				pdu[i++].length =
+					(SESS_OPS_C(conn)->MaxBurstLength -
+						burstlength);
+			}
+			if (!datasequenceinorder) {
+				seq[seq_no].type = SEQTYPE_NORMAL;
+				seq[seq_no].pdu_count = pdu_count;
+				seq[seq_no].xfer_len = (burstlength +
+					(SESS_OPS_C(conn)->MaxBurstLength -
+					burstlength));
+			}
+			offset += (SESS_OPS_C(conn)->MaxBurstLength -
+					burstlength);
+			burstlength = 0;
+			pdu_count = 0;
+			seq_no++;
+			continue;
+		}
+
+		if (!datapduinorder) {
+			pdu[i].type = PDUTYPE_NORMAL;
+			pdu[i++].length =
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		}
+		burstlength += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		offset += CONN_OPS(conn)->MaxRecvDataSegmentLength;
+	}
+
+	if (!datasequenceinorder) {
+		if (bl->data_direction & ISCSI_PDU_WRITE) {
+			if (bl->randomize & RANDOM_R2T_OFFSETS) {
+				if (iscsi_randomize_seq_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_seq_lists(cmd, bl->type);
+		} else if (bl->data_direction & ISCSI_PDU_READ) {
+			if (bl->randomize & RANDOM_DATAIN_SEQ_OFFSETS) {
+				if (iscsi_randomize_seq_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_seq_lists(cmd, bl->type);
+		}
+#if 0
+		iscsi_dump_seq_list(cmd);
+#endif
+	}
+	if (!datapduinorder) {
+		if (bl->data_direction & ISCSI_PDU_WRITE) {
+			if (bl->randomize & RANDOM_DATAOUT_PDU_OFFSETS) {
+				if (iscsi_randomize_pdu_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_pdu_lists(cmd, bl->type);
+		} else if (bl->data_direction & ISCSI_PDU_READ) {
+			if (bl->randomize & RANDOM_DATAIN_PDU_OFFSETS) {
+				if (iscsi_randomize_pdu_lists(cmd, bl->type)
+						< 0)
+					return -1;
+			} else
+				iscsi_ordered_pdu_lists(cmd, bl->type);
+		}
+#if 0
+		iscsi_dump_pdu_list(cmd);
+#endif
+	}
+
+	return 0;
+}
+
+/*	iscsi_do_build_list():
+ *
+ *	Only called while DataSequenceInOrder=No or DataPDUInOrder=No.
+ */
+int iscsi_do_build_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_build_list *bl)
+{
+	u32 pdu_count = 0, seq_count = 1;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = NULL;
+	struct iscsi_seq *seq = NULL;
+
+	iscsi_determine_counts_for_list(cmd, bl, &seq_count, &pdu_count);
+
+	if (!SESS_OPS_C(conn)->DataSequenceInOrder) {
+		seq = kzalloc(seq_count * sizeof(struct iscsi_seq), GFP_ATOMIC);
+		if (!(seq)) {
+			printk(KERN_ERR "Unable to allocate struct iscsi_seq list\n");
+			return -1;
+		}
+		cmd->seq_list = seq;
+		cmd->seq_count = seq_count;
+	}
+
+	if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+		pdu = kzalloc(pdu_count * sizeof(struct iscsi_pdu), GFP_ATOMIC);
+		if (!(pdu)) {
+			printk(KERN_ERR "Unable to allocate struct iscsi_pdu list.\n");
+			kfree(seq);
+			return -1;
+		}
+		cmd->pdu_list = pdu;
+		cmd->pdu_count = pdu_count;
+	}
+
+	return iscsi_build_pdu_and_seq_list(cmd, bl);
+}
+
+/*	iscsi_get_pdu_holder():
+ *
+ *
+ */
+struct iscsi_pdu *iscsi_get_pdu_holder(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 length)
+{
+	u32 i;
+	struct iscsi_pdu *pdu = NULL;
+
+	if (!cmd->pdu_list) {
+		printk(KERN_ERR "struct iscsi_cmd->pdu_list is NULL!\n");
+		return NULL;
+	}
+
+	pdu = &cmd->pdu_list[0];
+
+	for (i = 0; i < cmd->pdu_count; i++)
+		if ((pdu[i].offset == offset) && (pdu[i].length == length))
+			return &pdu[i];
+
+	printk(KERN_ERR "Unable to locate PDU holder for ITT: 0x%08x, Offset:"
+		" %u, Length: %u\n", cmd->init_task_tag, offset, length);
+	return NULL;
+}
+
+/*	iscsi_get_pdu_holder_for_seq():
+ *
+ *
+ */
+struct iscsi_pdu *iscsi_get_pdu_holder_for_seq(
+	struct iscsi_cmd *cmd,
+	struct iscsi_seq *seq)
+{
+	u32 i;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = NULL;
+
+	if (!cmd->pdu_list) {
+		printk(KERN_ERR "struct iscsi_cmd->pdu_list is NULL!\n");
+		return NULL;
+	}
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+redo:
+		pdu = &cmd->pdu_list[cmd->pdu_start];
+
+		for (i = 0; pdu[i].seq_no != cmd->seq_no; i++) {
+#if 0
+			printk(KERN_INFO "pdu[i].seq_no: %d, pdu[i].pdu"
+				"_send_order: %d, pdu[i].offset: %d,"
+				" pdu[i].length: %d\n", pdu[i].seq_no,
+				pdu[i].pdu_send_order, pdu[i].offset,
+				pdu[i].length);
+#endif
+			if (pdu[i].pdu_send_order == cmd->pdu_send_order) {
+				cmd->pdu_send_order++;
+				return &pdu[i];
+			}
+		}
+
+		cmd->pdu_start += cmd->pdu_send_order;
+		cmd->pdu_send_order = 0;
+		cmd->seq_no++;
+
+		if (cmd->pdu_start < cmd->pdu_count)
+			goto redo;
+
+		printk(KERN_ERR "Command ITT: 0x%08x unable to locate"
+			" struct iscsi_pdu for cmd->pdu_send_order: %u.\n",
+			cmd->init_task_tag, cmd->pdu_send_order);
+		return NULL;
+	} else {
+		if (!seq) {
+			printk(KERN_ERR "struct iscsi_seq is NULL!\n");
+			return NULL;
+		}
+#if 0
+		printk(KERN_INFO "seq->pdu_start: %d, seq->pdu_count: %d,"
+			" seq->seq_no: %d\n", seq->pdu_start, seq->pdu_count,
+			seq->seq_no);
+#endif
+		pdu = &cmd->pdu_list[seq->pdu_start];
+
+		if (seq->pdu_send_order == seq->pdu_count) {
+			printk(KERN_ERR "Command ITT: 0x%08x seq->pdu_send"
+				"_order: %u equals seq->pdu_count: %u\n",
+				cmd->init_task_tag, seq->pdu_send_order,
+				seq->pdu_count);
+			return NULL;
+		}
+
+		for (i = 0; i < seq->pdu_count; i++) {
+			if (pdu[i].pdu_send_order == seq->pdu_send_order) {
+				seq->pdu_send_order++;
+				return &pdu[i];
+			}
+		}
+
+		printk(KERN_ERR "Command ITT: 0x%08x unable to locate iscsi"
+			"_pdu_t for seq->pdu_send_order: %u.\n",
+			cmd->init_task_tag, seq->pdu_send_order);
+		return NULL;
+	}
+
+	return NULL;
+}
+
+/*	iscsi_get_seq_holder():
+ *
+ *
+ */
+struct iscsi_seq *iscsi_get_seq_holder(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 length)
+{
+	u32 i;
+
+	if (!cmd->seq_list) {
+		printk(KERN_ERR "struct iscsi_cmd->seq_list is NULL!\n");
+		return NULL;
+	}
+
+	for (i = 0; i < cmd->seq_count; i++) {
+#if 0
+		printk(KERN_INFO "seq_list[i].orig_offset: %d, seq_list[i]."
+			"xfer_len: %d, seq_list[i].seq_no %u\n",
+			cmd->seq_list[i].orig_offset, cmd->seq_list[i].xfer_len,
+			cmd->seq_list[i].seq_no);
+#endif
+		if ((cmd->seq_list[i].orig_offset +
+				cmd->seq_list[i].xfer_len) >=
+				(offset + length))
+			return &cmd->seq_list[i];
+	}
+
+	printk(KERN_ERR "Unable to locate Sequence holder for ITT: 0x%08x,"
+		" Offset: %u, Length: %u\n", cmd->init_task_tag, offset,
+		length);
+	return NULL;
+}
diff --git a/drivers/target/iscsi/iscsi_seq_and_pdu_list.h b/drivers/target/iscsi/iscsi_seq_and_pdu_list.h
new file mode 100644
index 0000000..7b4c1bd
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_seq_and_pdu_list.h
@@ -0,0 +1,88 @@
+#ifndef ISCSI_SEQ_AND_PDU_LIST_H
+#define ISCSI_SEQ_AND_PDU_LIST_H
+
+/* struct iscsi_pdu->status */
+#define DATAOUT_PDU_SENT			1
+
+/* struct iscsi_seq->type */
+#define SEQTYPE_IMMEDIATE			1
+#define SEQTYPE_UNSOLICITED			2
+#define SEQTYPE_NORMAL				3
+
+/* struct iscsi_seq->status */
+#define DATAOUT_SEQUENCE_GOT_R2T		1
+#define DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY 2
+#define DATAOUT_SEQUENCE_COMPLETE		3
+
+/* iscsi_determine_counts_for_list() type */
+#define PDULIST_NORMAL				1
+#define PDULIST_IMMEDIATE			2
+#define PDULIST_UNSOLICITED			3
+#define PDULIST_IMMEDIATE_AND_UNSOLICITED	4
+
+/* struct iscsi_pdu->type */
+#define PDUTYPE_IMMEDIATE			1
+#define PDUTYPE_UNSOLICITED			2
+#define PDUTYPE_NORMAL				3
+
+/* struct iscsi_pdu->status */
+#define ISCSI_PDU_NOT_RECEIVED			0
+#define ISCSI_PDU_RECEIVED_OK			1
+#define ISCSI_PDU_CRC_FAILED			2
+#define ISCSI_PDU_TIMED_OUT			3
+
+/* struct iscsi_build_list->randomize */
+#define RANDOM_DATAIN_PDU_OFFSETS		0x01
+#define RANDOM_DATAIN_SEQ_OFFSETS		0x02
+#define RANDOM_DATAOUT_PDU_OFFSETS		0x04
+#define RANDOM_R2T_OFFSETS			0x08
+
+/* struct iscsi_build_list->data_direction */
+#define ISCSI_PDU_READ				0x01
+#define ISCSI_PDU_WRITE				0x02
+
+struct iscsi_build_list {
+	u8		data_direction;
+	u8		randomize;
+	u8		type;
+	u32		immediate_data_length;
+} ____cacheline_aligned;
+
+struct iscsi_pdu {
+	int		status;
+	int		type;
+	u8		flags;
+	u32		data_sn;
+	u32		length;
+	u32		offset;
+	u32		pdu_send_order;
+	u32		seq_no;
+} ____cacheline_aligned;
+
+struct iscsi_seq {
+	int		sent;
+	int		status;
+	int		type;
+	u32		data_sn;
+	u32		first_datasn;
+	u32		last_datasn;
+	u32		next_burst_len;
+	u32		pdu_start;
+	u32		pdu_count;
+	u32		offset;
+	u32		orig_offset;
+	u32		pdu_send_order;
+	u32		r2t_sn;
+	u32		seq_send_order;
+	u32		seq_no;
+	u32		xfer_len;
+} ____cacheline_aligned;
+
+extern struct iscsi_global *iscsi_global;
+
+extern int iscsi_do_build_list(struct iscsi_cmd *, struct iscsi_build_list *);
+extern struct iscsi_pdu *iscsi_get_pdu_holder(struct iscsi_cmd *, u32, u32);
+extern struct iscsi_pdu *iscsi_get_pdu_holder_for_seq(struct iscsi_cmd *, struct iscsi_seq *);
+extern struct iscsi_seq *iscsi_get_seq_holder(struct iscsi_cmd *, u32, u32);
+
+#endif /* ISCSI_SEQ_AND_PDU_LIST_H */
diff --git a/drivers/target/iscsi/iscsi_target_datain_values.c b/drivers/target/iscsi/iscsi_target_datain_values.c
new file mode 100644
index 0000000..26ec1d5
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_datain_values.c
@@ -0,0 +1,550 @@
+/*******************************************************************************
+ * This file contains the iSCSI Target DataIN value generation functions.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2006 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/delay.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+#include <linux/in.h>
+#include <scsi/iscsi_proto.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_datain_values.h"
+
+struct iscsi_datain_req *iscsi_allocate_datain_req(void)
+{
+	struct iscsi_datain_req *dr;
+
+	dr = kmem_cache_zalloc(lio_dr_cache, GFP_ATOMIC);
+	if (!(dr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_datain_req\n");
+		return NULL;
+	}
+	INIT_LIST_HEAD(&dr->dr_list);
+
+	return dr;
+}
+
+void iscsi_attach_datain_req(struct iscsi_cmd *cmd, struct iscsi_datain_req *dr)
+{
+	spin_lock(&cmd->datain_lock);
+	list_add_tail(&dr->dr_list, &cmd->datain_list);
+	spin_unlock(&cmd->datain_lock);
+}
+
+void iscsi_free_datain_req(struct iscsi_cmd *cmd, struct iscsi_datain_req *dr)
+{
+	spin_lock(&cmd->datain_lock);
+	list_del(&dr->dr_list);
+	spin_unlock(&cmd->datain_lock);
+
+	kmem_cache_free(lio_dr_cache, dr);
+}
+
+void iscsi_free_all_datain_reqs(struct iscsi_cmd *cmd)
+{
+	struct iscsi_datain_req *dr, *dr_tmp;
+
+	spin_lock(&cmd->datain_lock);
+	list_for_each_entry_safe(dr, dr_tmp, &cmd->datain_list, dr_list) {
+		list_del(&dr->dr_list);
+		kmem_cache_free(lio_dr_cache, dr);
+	}
+	spin_unlock(&cmd->datain_lock);
+}
+
+struct iscsi_datain_req *iscsi_get_datain_req(struct iscsi_cmd *cmd)
+{
+	struct iscsi_datain_req *dr;
+
+	if (list_empty(&cmd->datain_list)) {
+		printk(KERN_ERR "cmd->datain_list is empty for ITT:"
+			" 0x%08x\n", cmd->init_task_tag);
+		return NULL;
+	}
+	list_for_each_entry(dr, &cmd->datain_list, dr_list)
+		break;
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_yes_and_yes():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=Yes and DataPDUInOrder=Yes.
+ */
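+/*
+ * Worked example (parameter values assumed): for a 20k read with
+ * MaxRecvDataSegmentLength=8192 and MaxBurstLength=16384, successive
+ * calls produce:
+ *   DataSN 0: offset 0,     length 8192
+ *   DataSN 1: offset 8192,  length 8192, F-bit (end of burst)
+ *   DataSN 2: offset 16384, length 4096, F-bit + ISCSI_FLAG_DATA_STATUS
+ * With ErrorRecoveryLevel > 0 the A-bit (ISCSI_FLAG_DATA_ACK) is also
+ * set on the burst-closing PDUs.
+ */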
+static inline struct iscsi_datain_req *iscsi_set_datain_values_yes_and_yes(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 next_burst_len, read_data_done, read_data_left;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	next_burst_len = (!dr->recovery) ?
+			cmd->next_burst_len : dr->next_burst_len;
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return NULL;
+	}
+
+	if ((read_data_left <= CONN_OPS(conn)->MaxRecvDataSegmentLength) &&
+	    (read_data_left <= (SESS_OPS_C(conn)->MaxBurstLength -
+	     next_burst_len))) {
+		datain->length = read_data_left;
+
+		datain->flags |= (ISCSI_FLAG_CMD_FINAL | ISCSI_FLAG_DATA_STATUS);
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			datain->flags |= ISCSI_FLAG_DATA_ACK;
+	} else {
+		if ((next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			datain->length =
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			next_burst_len += datain->length;
+		} else {
+			datain->length = (SESS_OPS_C(conn)->MaxBurstLength -
+					  next_burst_len);
+			next_burst_len = 0;
+
+			datain->flags |= ISCSI_FLAG_CMD_FINAL;
+			if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+				datain->flags |= ISCSI_FLAG_DATA_ACK;
+		}
+	}
+
+	datain->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	datain->offset = read_data_done;
+
+	if (!dr->recovery) {
+		cmd->next_burst_len = next_burst_len;
+		cmd->read_data_done += datain->length;
+	} else {
+		dr->next_burst_len = next_burst_len;
+		dr->read_data_done += datain->length;
+	}
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_no_and_yes():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=No and DataPDUInOrder=Yes.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_no_and_yes(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 offset, read_data_done, read_data_left, seq_send_order;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct iscsi_seq *seq;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_no(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+	seq_send_order = (!dr->recovery) ?
+			cmd->seq_send_order : dr->seq_send_order;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return NULL;
+	}
+
+	seq = iscsi_get_seq_holder_for_datain(cmd, seq_send_order);
+	if (!(seq))
+		return NULL;
+
+	seq->sent = 1;
+
+	if (!dr->recovery && !seq->next_burst_len)
+		seq->first_datasn = cmd->data_sn;
+
+	offset = (seq->offset + seq->next_burst_len);
+
+	if ((offset + CONN_OPS(conn)->MaxRecvDataSegmentLength) >=
+	     cmd->data_length) {
+		datain->length = (cmd->data_length - offset);
+		datain->offset = offset;
+
+		datain->flags |= ISCSI_FLAG_CMD_FINAL;
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			datain->flags |= ISCSI_FLAG_DATA_ACK;
+
+		seq->next_burst_len = 0;
+		seq_send_order++;
+	} else {
+		if ((seq->next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			datain->length =
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			datain->offset = (seq->offset + seq->next_burst_len);
+
+			seq->next_burst_len += datain->length;
+		} else {
+			datain->length = (SESS_OPS_C(conn)->MaxBurstLength -
+					  seq->next_burst_len);
+			datain->offset = (seq->offset + seq->next_burst_len);
+
+			datain->flags |= ISCSI_FLAG_CMD_FINAL;
+			if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+				datain->flags |= ISCSI_FLAG_DATA_ACK;
+
+			seq->next_burst_len = 0;
+			seq_send_order++;
+		}
+	}
+
+	if ((read_data_done + datain->length) == cmd->data_length)
+		datain->flags |= ISCSI_FLAG_DATA_STATUS;
+
+	datain->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	if (!dr->recovery) {
+		cmd->seq_send_order = seq_send_order;
+		cmd->read_data_done += datain->length;
+	} else {
+		dr->seq_send_order = seq_send_order;
+		dr->read_data_done += datain->length;
+	}
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_CMD_FINAL)
+			seq->last_datasn = datain->data_sn;
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_yes_and_no():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=Yes and DataPDUInOrder=No.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_yes_and_no(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 next_burst_len, read_data_done, read_data_left;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct iscsi_pdu *pdu;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	next_burst_len = (!dr->recovery) ?
+			cmd->next_burst_len : dr->next_burst_len;
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return dr;
+	}
+
+	pdu = iscsi_get_pdu_holder_for_seq(cmd, NULL);
+	if (!(pdu))
+		return dr;
+
+	if ((read_data_done + pdu->length) == cmd->data_length) {
+		pdu->flags |= (ISCSI_FLAG_CMD_FINAL | ISCSI_FLAG_DATA_STATUS);
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			pdu->flags |= ISCSI_FLAG_DATA_ACK;
+
+		next_burst_len = 0;
+	} else {
+		if ((next_burst_len + CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength)
+			next_burst_len += pdu->length;
+		else {
+			pdu->flags |= ISCSI_FLAG_CMD_FINAL;
+			if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+				pdu->flags |= ISCSI_FLAG_DATA_ACK;
+
+			next_burst_len = 0;
+		}
+	}
+
+	pdu->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	if (!dr->recovery) {
+		cmd->next_burst_len = next_burst_len;
+		cmd->read_data_done += pdu->length;
+	} else {
+		dr->next_burst_len = next_burst_len;
+		dr->read_data_done += pdu->length;
+	}
+
+	datain->flags = pdu->flags;
+	datain->length = pdu->length;
+	datain->offset = pdu->offset;
+	datain->data_sn = pdu->data_sn;
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_set_datain_values_no_and_no():
+ *
+ *	For Normal and Recovery DataSequenceInOrder=No and DataPDUInOrder=No.
+ */
+static inline struct iscsi_datain_req *iscsi_set_datain_values_no_and_no(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	__u32 read_data_done, read_data_left, seq_send_order;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct iscsi_pdu *pdu;
+	struct iscsi_seq *seq = NULL;
+
+	dr = iscsi_get_datain_req(cmd);
+	if (!(dr))
+		return NULL;
+
+	if (dr->recovery && dr->generate_recovery_values) {
+		if (iscsi_create_recovery_datain_values_datasequenceinorder_no(
+					cmd, dr) < 0)
+			return NULL;
+
+		dr->generate_recovery_values = 0;
+	}
+
+	read_data_done = (!dr->recovery) ?
+			cmd->read_data_done : dr->read_data_done;
+	seq_send_order = (!dr->recovery) ?
+			cmd->seq_send_order : dr->seq_send_order;
+
+	read_data_left = (cmd->data_length - read_data_done);
+	if (!(read_data_left)) {
+		printk(KERN_ERR "ITT: 0x%08x read_data_left is zero!\n",
+				cmd->init_task_tag);
+		return NULL;
+	}
+
+	seq = iscsi_get_seq_holder_for_datain(cmd, seq_send_order);
+	if (!(seq))
+		return NULL;
+
+	seq->sent = 1;
+
+	if (!dr->recovery && !seq->next_burst_len)
+		seq->first_datasn = cmd->data_sn;
+
+	pdu = iscsi_get_pdu_holder_for_seq(cmd, seq);
+	if (!(pdu))
+		return NULL;
+
+	if (seq->pdu_send_order == seq->pdu_count) {
+		pdu->flags |= ISCSI_FLAG_CMD_FINAL;
+		if (SESS_OPS_C(conn)->ErrorRecoveryLevel > 0)
+			pdu->flags |= ISCSI_FLAG_DATA_ACK;
+
+		seq->next_burst_len = 0;
+		seq_send_order++;
+	} else
+		seq->next_burst_len += pdu->length;
+
+	if ((read_data_done + pdu->length) == cmd->data_length)
+		pdu->flags |= ISCSI_FLAG_DATA_STATUS;
+
+	pdu->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
+	if (!dr->recovery) {
+		cmd->seq_send_order = seq_send_order;
+		cmd->read_data_done += pdu->length;
+	} else {
+		dr->seq_send_order = seq_send_order;
+		dr->read_data_done += pdu->length;
+	}
+
+	datain->flags = pdu->flags;
+	datain->length = pdu->length;
+	datain->offset = pdu->offset;
+	datain->data_sn = pdu->data_sn;
+
+	if (!dr->recovery) {
+		if (datain->flags & ISCSI_FLAG_CMD_FINAL)
+			seq->last_datasn = datain->data_sn;
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS)
+			dr->dr_complete = DATAIN_COMPLETE_NORMAL;
+
+		return dr;
+	}
+
+	if (!dr->runlength) {
+		if (datain->flags & ISCSI_FLAG_DATA_STATUS) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	} else {
+		if ((dr->begrun + dr->runlength) == dr->data_sn) {
+			dr->dr_complete =
+			    (dr->recovery == DATAIN_WITHIN_COMMAND_RECOVERY) ?
+				DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY :
+				DATAIN_COMPLETE_CONNECTION_RECOVERY;
+		}
+	}
+
+	return dr;
+}
+
+/*	iscsi_get_datain_values():
+ *
+ *
+ */
+struct iscsi_datain_req *iscsi_get_datain_values(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain *datain)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder &&
+	    SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_yes_and_yes(cmd, datain);
+	else if (!SESS_OPS_C(conn)->DataSequenceInOrder &&
+		  SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_no_and_yes(cmd, datain);
+	else if (SESS_OPS_C(conn)->DataSequenceInOrder &&
+		 !SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_yes_and_no(cmd, datain);
+	else if (!SESS_OPS_C(conn)->DataSequenceInOrder &&
+		   !SESS_OPS_C(conn)->DataPDUInOrder)
+		return iscsi_set_datain_values_no_and_no(cmd, datain);
+
+	return NULL;
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_datain_values.h b/drivers/target/iscsi/iscsi_target_datain_values.h
new file mode 100644
index 0000000..0534835
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_datain_values.h
@@ -0,0 +1,16 @@
+#ifndef ISCSI_TARGET_DATAIN_VALUES_H
+#define ISCSI_TARGET_DATAIN_VALUES_H
+
+extern struct iscsi_datain_req *iscsi_allocate_datain_req(void);
+extern void iscsi_attach_datain_req(struct iscsi_cmd *, struct iscsi_datain_req *);
+extern void iscsi_free_datain_req(struct iscsi_cmd *, struct iscsi_datain_req *);
+extern void iscsi_free_all_datain_reqs(struct iscsi_cmd *);
+extern struct iscsi_datain_req *iscsi_get_datain_req(struct iscsi_cmd *);
+extern struct iscsi_datain_req *iscsi_get_datain_values(struct iscsi_cmd *,
+			struct iscsi_datain *);
+
+extern struct iscsi_global *iscsi_global;
+extern struct kmem_cache *lio_dr_cache;
+
+#endif   /*** ISCSI_TARGET_DATAIN_VALUES_H ***/
+
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 09/12] iscsi-target: Add iSCSI Error Recovery Hierarchy support
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds RFC-3720 compatible ErrorRecoveryLevel support as
defined in Section 6.1.5.  Error Recovery Hierarchy.

This includes support for iSCSI session reinstatement, within-command
and within-connection recovery, and explicit/implicit connection
recovery (CSM-E and CSM-I) from the state machines in Section 7 of RFC-3720.

These functions are called from iscsi_target.c to handle processing
based on the negotiated session-wide ErrorRecoveryLevel parameter.
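
As a rough illustration of how the negotiated level gates recovery, the
DataOUT checks in iscsi_target_erl0.c below follow this pattern (sketch
taken from the within-command recovery paths in this patch):

	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
		printk(KERN_ERR "Unable to perform within-command recovery"
				" while ERL=0.\n");
		return DATAOUT_CANNOT_RECOVER;
	}
	/* ERL >= 1: dump or re-request the offending payload instead */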

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_erl0.c | 1086 +++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_erl0.h |   19 +
 drivers/target/iscsi/iscsi_target_erl1.c | 1382 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_erl1.h |   35 +
 drivers/target/iscsi/iscsi_target_erl2.c |  535 ++++++++++++
 drivers/target/iscsi/iscsi_target_erl2.h |   21 +
 6 files changed, 3078 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_erl0.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl0.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl1.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl1.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl2.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl2.h

diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
new file mode 100644
index 0000000..57e9442
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl0.c
@@ -0,0 +1,1086 @@
+/******************************************************************************
+ * This file contains error recovery level zero functions used by
+ * the iSCSI Target driver.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_thread_queue.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+
+/*	iscsi_set_dataout_sequence_values():
+ *
+ *	Used to set values in struct iscsi_cmd that iscsi_dataout_check_sequence()
+ *	checks against to determine whether a PDU's Offset+Length is within the
+ *	current DataOUT Sequence.  Used for DataSequenceInOrder=Yes only.
+ */
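+/*
+ * Example (values assumed for illustration): for a solicited 150k WRITE
+ * with MaxBurstLength=65536 and DataSequenceInOrder=Yes, successive calls
+ * advance the DataOUT sequence window to 0-65536, 65536-131072 and finally
+ * 131072-153600, one window per outstanding DataOUT sequence.
+ */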
+void iscsi_set_dataout_sequence_values(
+	struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	/*
+	 * Still set seq_start_offset and seq_end_offset for Unsolicited
+	 * DataOUT, even if DataSequenceInOrder=No.
+	 */
+	if (cmd->unsolicited_data) {
+		cmd->seq_start_offset = cmd->write_data_done;
+		cmd->seq_end_offset = cmd->write_data_done +
+			((cmd->data_length >
+			 SESS_OPS_C(conn)->FirstBurstLength) ?
+			SESS_OPS_C(conn)->FirstBurstLength : cmd->data_length);
+		return;
+	}
+
+	if (!SESS_OPS_C(conn)->DataSequenceInOrder)
+		return;
+
+	if (!cmd->seq_start_offset && !cmd->seq_end_offset) {
+		cmd->seq_start_offset = cmd->write_data_done;
+		cmd->seq_end_offset = (cmd->data_length >
+			SESS_OPS_C(conn)->MaxBurstLength) ?
+			(cmd->write_data_done +
+			SESS_OPS_C(conn)->MaxBurstLength) : cmd->data_length;
+	} else {
+		cmd->seq_start_offset = cmd->seq_end_offset;
+		cmd->seq_end_offset = ((cmd->seq_end_offset +
+			SESS_OPS_C(conn)->MaxBurstLength) >=
+			cmd->data_length) ? cmd->data_length :
+			(cmd->seq_end_offset +
+			 SESS_OPS_C(conn)->MaxBurstLength);
+	}
+}
+
+/*	iscsi_dataout_within_command_recovery_check():
+ *
+ *
+ */
+static inline int iscsi_dataout_within_command_recovery_check(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * We do the within-command recovery checks here as it is
+	 * the first function called in iscsi_check_pre_dataout().
+	 * Basically, if we are in within-command recovery and
+	 * the PDU does not contain the offset the sequence needs,
+	 * dump the payload.
+	 *
+	 * This only applies to DataPDUInOrder=Yes; for
+	 * DataPDUInOrder=No we only re-request the failed PDU
+	 * and check that all PDUs in a sequence are received
+	 * upon end of sequence.
+	 */
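+	/*
+	 * Example: with DataSequenceInOrder=Yes, a DataOUT PDU arriving
+	 * during within-command recovery whose Offset does not match
+	 * cmd->write_data_done is dropped via the dump path below rather
+	 * than failing the connection.
+	 */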
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		if ((cmd->cmd_flags & ICF_WITHIN_COMMAND_RECOVERY) &&
+		    (cmd->write_data_done != hdr->offset))
+			goto dump;
+
+		cmd->cmd_flags &= ~ICF_WITHIN_COMMAND_RECOVERY;
+	} else {
+		struct iscsi_seq *seq;
+
+		seq = iscsi_get_seq_holder(cmd, hdr->offset, payload_length);
+		if (!(seq))
+			return DATAOUT_CANNOT_RECOVER;
+		/*
+		 * Set the struct iscsi_seq pointer to reuse later.
+		 */
+		cmd->seq_ptr = seq;
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			if ((seq->status ==
+			     DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY) &&
+			   ((seq->offset != hdr->offset) ||
+			    (seq->data_sn != hdr->datasn)))
+				goto dump;
+		} else {
+			if ((seq->status ==
+			     DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY) &&
+			    (seq->data_sn != hdr->datasn))
+				goto dump;
+		}
+
+		if (seq->status == DATAOUT_SEQUENCE_COMPLETE)
+			goto dump;
+
+		if (seq->status != DATAOUT_SEQUENCE_COMPLETE)
+			seq->status = 0;
+	}
+
+	return DATAOUT_NORMAL;
+
+dump:
+	printk(KERN_ERR "Dumping DataOUT PDU Offset: %u Length: %d DataSN:"
+		" 0x%08x\n", hdr->offset, payload_length, hdr->datasn);
+	return iscsi_dump_data_payload(conn, payload_length, 1);
+}
+
+/*	iscsi_dataout_check_unsolicited_sequence():
+ *
+ *
+ */
+static inline int iscsi_dataout_check_unsolicited_sequence(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	__u32 first_burst_len;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+
+	if ((hdr->offset < cmd->seq_start_offset) ||
+	   ((hdr->offset + payload_length) > cmd->seq_end_offset)) {
+		printk(KERN_ERR "Command ITT: 0x%08x with Offset: %u,"
+		" Length: %u outside of Unsolicited Sequence %u:%u while"
+		" DataSequenceInOrder=Yes.\n", cmd->init_task_tag,
+		hdr->offset, payload_length, cmd->seq_start_offset,
+			cmd->seq_end_offset);
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	first_burst_len = (cmd->first_burst_len + payload_length);
+
+	if (first_burst_len > SESS_OPS_C(conn)->FirstBurstLength) {
+		printk(KERN_ERR "Total %u bytes exceeds FirstBurstLength: %u"
+			" for this Unsolicited DataOut Burst.\n",
+			first_burst_len, SESS_OPS_C(conn)->FirstBurstLength);
+		transport_send_check_condition_and_sense(SE_CMD(cmd),
+				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	/*
+	 * Perform various MaxBurstLength and ISCSI_FLAG_CMD_FINAL sanity
+	 * checks for the current Unsolicited DataOUT Sequence.
+	 */
+	if (hdr->flags & ISCSI_FLAG_CMD_FINAL) {
+		/*
+		 * Ignore ISCSI_FLAG_CMD_FINAL checks while DataPDUInOrder=No; end of
+		 * sequence checks are handled in
+		 * iscsi_dataout_datapduinorder_no_fbit().
+		 */
+		if (!SESS_OPS_C(conn)->DataPDUInOrder)
+			goto out;
+
+		if ((first_burst_len != cmd->data_length) &&
+		    (first_burst_len != SESS_OPS_C(conn)->FirstBurstLength)) {
+			printk(KERN_ERR "Unsolicited non-immediate data"
+			" received %u does not equal FirstBurstLength: %u, and"
+			" does not equal ExpXferLen %u.\n", first_burst_len,
+				SESS_OPS_C(conn)->FirstBurstLength,
+				cmd->data_length);
+			transport_send_check_condition_and_sense(SE_CMD(cmd),
+					TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+	} else {
+		if (first_burst_len == SESS_OPS_C(conn)->FirstBurstLength) {
+			printk(KERN_ERR "Command ITT: 0x%08x reached"
+			" FirstBurstLength: %u, but ISCSI_FLAG_CMD_FINAL is not set. protocol"
+				" error.\n", cmd->init_task_tag,
+				SESS_OPS_C(conn)->FirstBurstLength);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+		if (first_burst_len == cmd->data_length) {
+			printk(KERN_ERR "Command ITT: 0x%08x reached"
+			" ExpXferLen: %u, but ISCSI_FLAG_CMD_FINAL is not set. protocol"
+			" error.\n", cmd->init_task_tag, cmd->data_length);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+	}
+
+out:
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_check_sequence():
+ *
+ *
+ */
+static inline int iscsi_dataout_check_sequence(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	__u32 next_burst_len;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_seq *seq = NULL;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * For DataSequenceInOrder=Yes: Check that the offset and offset+length
+	 * are within the range defined by iscsi_set_dataout_sequence_values().
+	 *
+	 * For DataSequenceInOrder=No: Check that a struct iscsi_seq exists for
+	 * the offset+length tuple.
+	 */
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		/*
+		 * Due to the possibility of recovery DataOUT sent by the initiator
+		 * fulfilling a Recovery R2T, it's best to just dump the
+		 * payload here, instead of erroring out.
+		 */
+		if ((hdr->offset < cmd->seq_start_offset) ||
+		   ((hdr->offset + payload_length) > cmd->seq_end_offset)) {
+			printk(KERN_ERR "Command ITT: 0x%08x with Offset: %u,"
+			" Length: %u outside of Sequence %u:%u while"
+			" DataSequenceInOrder=Yes.\n", cmd->init_task_tag,
+			hdr->offset, payload_length, cmd->seq_start_offset,
+				cmd->seq_end_offset);
+
+			if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+			return DATAOUT_WITHIN_COMMAND_RECOVERY;
+		}
+
+		next_burst_len = (cmd->next_burst_len + payload_length);
+	} else {
+		seq = iscsi_get_seq_holder(cmd, hdr->offset, payload_length);
+		if (!(seq))
+			return DATAOUT_CANNOT_RECOVER;
+		/*
+		 * Set the struct iscsi_seq pointer to reuse later.
+		 */
+		cmd->seq_ptr = seq;
+
+		if (seq->status == DATAOUT_SEQUENCE_COMPLETE) {
+			if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+			return DATAOUT_WITHIN_COMMAND_RECOVERY;
+		}
+
+		next_burst_len = (seq->next_burst_len + payload_length);
+	}
+
+	if (next_burst_len > SESS_OPS_C(conn)->MaxBurstLength) {
+		printk(KERN_ERR "Command ITT: 0x%08x, NextBurstLength: %u and"
+			" Length: %u exceeds MaxBurstLength: %u. protocol"
+			" error.\n", cmd->init_task_tag,
+			(next_burst_len - payload_length),
+			payload_length, SESS_OPS_C(conn)->MaxBurstLength);
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	/*
+	 * Perform various MaxBurstLength and ISCSI_FLAG_CMD_FINAL sanity
+	 * checks for the current DataOUT Sequence.
+	 */
+	if (hdr->flags & ISCSI_FLAG_CMD_FINAL) {
+		/*
+		 * Ignore ISCSI_FLAG_CMD_FINAL checks while DataPDUInOrder=No; end of
+		 * sequence checks are handled in
+		 * iscsi_dataout_datapduinorder_no_fbit().
+		 */
+		if (!SESS_OPS_C(conn)->DataPDUInOrder)
+			goto out;
+
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if ((next_burst_len <
+			     SESS_OPS_C(conn)->MaxBurstLength) &&
+			   ((cmd->write_data_done + payload_length) <
+			     cmd->data_length)) {
+				printk(KERN_ERR "Command ITT: 0x%08x set ISCSI_FLAG_CMD_FINAL"
+				" before end of DataOUT sequence, protocol"
+				" error.\n", cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		} else {
+			if (next_burst_len < seq->xfer_len) {
+				printk(KERN_ERR "Command ITT: 0x%08x set ISCSI_FLAG_CMD_FINAL"
+				" before end of DataOUT sequence, protocol"
+				" error.\n", cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		}
+	} else {
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if (next_burst_len ==
+					SESS_OPS_C(conn)->MaxBurstLength) {
+				printk(KERN_ERR "Command ITT: 0x%08x reached"
+				" MaxBurstLength: %u, but ISCSI_FLAG_CMD_FINAL is"
+				" not set, protocol error.", cmd->init_task_tag,
+					SESS_OPS_C(conn)->MaxBurstLength);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+			if ((cmd->write_data_done + payload_length) ==
+					cmd->data_length) {
+				printk(KERN_ERR "Command ITT: 0x%08x reached"
+				" last DataOUT PDU in sequence but ISCSI_FLAG_"
+				"CMD_FINAL is not set, protocol error.\n",
+					cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		} else {
+			if (next_burst_len == seq->xfer_len) {
+				printk(KERN_ERR "Command ITT: 0x%08x reached"
+				" last DataOUT PDU in sequence but ISCSI_FLAG_"
+				"CMD_FINAL is not set, protocol error.\n",
+					cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		}
+	}
+
+out:
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_check_datasn():
+ *
+ *	Called from:	iscsi_check_pre_dataout()
+ */
+static inline int iscsi_dataout_check_datasn(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int dump = 0, recovery = 0;
+	__u32 data_sn = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * Considering the target has no method of re-requesting DataOUT
+	 * by DataSN, if we receive a greater DataSN than expected we
+	 * assume the functions for DataPDUInOrder=[Yes,No] below will
+	 * handle it.
+	 *
+	 * If the DataSN is less than expected, dump the payload.
+	 */
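+	/*
+	 * Example: if the expected DataSN is 3, a PDU arriving with DataSN 5
+	 * triggers within-command recovery (requires ERL >= 1), while a PDU
+	 * arriving with DataSN 2 simply has its payload discarded.
+	 */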
+	if (SESS_OPS_C(conn)->DataSequenceInOrder)
+		data_sn = cmd->data_sn;
+	else {
+		struct iscsi_seq *seq = cmd->seq_ptr;
+		data_sn = seq->data_sn;
+	}
+
+	if (hdr->datasn > data_sn) {
+		printk(KERN_ERR "Command ITT: 0x%08x, received DataSN: 0x%08x"
+			" higher than expected 0x%08x.\n", cmd->init_task_tag,
+				hdr->datasn, data_sn);
+		recovery = 1;
+		goto recover;
+	} else if (hdr->datasn < data_sn) {
+		printk(KERN_ERR "Command ITT: 0x%08x, received DataSN: 0x%08x"
+			" lower than expected 0x%08x, discarding payload.\n",
+			cmd->init_task_tag, hdr->datasn, data_sn);
+		dump = 1;
+		goto dump;
+	}
+
+	return DATAOUT_NORMAL;
+
+recover:
+	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+		printk(KERN_ERR "Unable to perform within-command recovery"
+				" while ERL=0.\n");
+		return DATAOUT_CANNOT_RECOVER;
+	}
+dump:
+	if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+		return DATAOUT_CANNOT_RECOVER;
+
+	return (recovery || dump) ? DATAOUT_WITHIN_COMMAND_RECOVERY :
+				DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_pre_datapduinorder_yes():
+ *
+ *
+ */
+static inline int iscsi_dataout_pre_datapduinorder_yes(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int dump = 0, recovery = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * For DataSequenceInOrder=Yes: If the offset is greater than the global
+	 * DataPDUInOrder=Yes offset counter in struct iscsi_cmd, a protocol error
+	 * has occurred, so fail the connection.
+	 *
+	 * For DataSequenceInOrder=No: If the offset is greater than the per
+	 * sequence DataPDUInOrder=Yes offset counter in struct iscsi_seq, a
+	 * protocol error has occurred, so fail the connection.
+	 */
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		if (hdr->offset != cmd->write_data_done) {
+			printk(KERN_ERR "Command ITT: 0x%08x, received offset"
+			" %u different than expected %u.\n", cmd->init_task_tag,
+				hdr->offset, cmd->write_data_done);
+			recovery = 1;
+			goto recover;
+		}
+	} else {
+		struct iscsi_seq *seq = cmd->seq_ptr;
+
+		if (hdr->offset > seq->offset) {
+			printk(KERN_ERR "Command ITT: 0x%08x, received offset"
+			" %u greater than expected %u.\n", cmd->init_task_tag,
+				hdr->offset, seq->offset);
+			recovery = 1;
+			goto recover;
+		} else if (hdr->offset < seq->offset) {
+			printk(KERN_ERR "Command ITT: 0x%08x, received offset"
+			" %u less than expected %u, discarding payload.\n",
+				cmd->init_task_tag, hdr->offset, seq->offset);
+			dump = 1;
+			goto dump;
+		}
+	}
+
+	return DATAOUT_NORMAL;
+
+recover:
+	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+		printk(KERN_ERR "Unable to perform within-command recovery"
+				" while ERL=0.\n");
+		return DATAOUT_CANNOT_RECOVER;
+	}
+dump:
+	if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+		return DATAOUT_CANNOT_RECOVER;
+
+	return (recovery) ? iscsi_recover_dataout_sequence(cmd,
+		hdr->offset, payload_length) :
+	       (dump) ? DATAOUT_WITHIN_COMMAND_RECOVERY : DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_pre_datapduinorder_no():
+ *
+ *	Called from:	iscsi_check_pre_dataout()
+ */
+static inline int iscsi_dataout_pre_datapduinorder_no(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_pdu *pdu;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	pdu = iscsi_get_pdu_holder(cmd, hdr->offset, payload_length);
+	if (!(pdu))
+		return DATAOUT_CANNOT_RECOVER;
+
+	cmd->pdu_ptr = pdu;
+
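+	/*
+	 * A PDU marked not received, CRC failed, or timed out may now be
+	 * (re)received.  A PDU already received OK means the initiator is
+	 * retransmitting payload we already have, so discard it.
+	 */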
+	switch (pdu->status) {
+	case ISCSI_PDU_NOT_RECEIVED:
+	case ISCSI_PDU_CRC_FAILED:
+	case ISCSI_PDU_TIMED_OUT:
+		break;
+	case ISCSI_PDU_RECEIVED_OK:
+		printk(KERN_ERR "Command ITT: 0x%08x received already gotten"
+			" Offset: %u, Length: %u\n", cmd->init_task_tag,
+				hdr->offset, payload_length);
+		return iscsi_dump_data_payload(CONN(cmd), payload_length, 1);
+	default:
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_update_r2t():
+ *
+ *
+ */
+static int iscsi_dataout_update_r2t(struct iscsi_cmd *cmd, u32 offset, u32 length)
+{
+	struct iscsi_r2t *r2t;
+
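+	/*
+	 * Unsolicited DataOUT is not covered by an outstanding R2T, so
+	 * there is nothing to update here.
+	 */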
+	if (cmd->unsolicited_data)
+		return 0;
+
+	r2t = iscsi_get_r2t_for_eos(cmd, offset, length);
+	if (!(r2t))
+		return -1;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	r2t->seq_complete = 1;
+	cmd->outstanding_r2ts--;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+/*	iscsi_dataout_update_datapduinorder_no():
+ *
+ *
+ */
+static int iscsi_dataout_update_datapduinorder_no(
+	struct iscsi_cmd *cmd,
+	u32 data_sn,
+	int f_bit)
+{
+	int ret = 0;
+	struct iscsi_pdu *pdu = cmd->pdu_ptr;
+
+	pdu->data_sn = data_sn;
+
+	switch (pdu->status) {
+	case ISCSI_PDU_NOT_RECEIVED:
+	case ISCSI_PDU_CRC_FAILED:
+	case ISCSI_PDU_TIMED_OUT:
+		pdu->status = ISCSI_PDU_RECEIVED_OK;
+		break;
+	default:
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	if (f_bit) {
+		ret = iscsi_dataout_datapduinorder_no_fbit(cmd, pdu);
+		if (ret == DATAOUT_CANNOT_RECOVER)
+			return ret;
+	}
+
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_post_crc_passed():
+ *
+ *	Called from:	iscsi_check_post_dataout()
+ */
+static inline int iscsi_dataout_post_crc_passed(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int ret, send_r2t = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_seq *seq = NULL;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
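+	/*
+	 * Unsolicited DataOUT is bounded by FirstBurstLength; once the
+	 * accumulated first burst reaches that limit, the remaining data
+	 * is solicited via R2Ts.  Solicited DataOUT is tracked against
+	 * MaxBurstLength (DataSequenceInOrder=Yes) or the per sequence
+	 * xfer_len (DataSequenceInOrder=No).
+	 */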
+	if (cmd->unsolicited_data) {
+		if ((cmd->first_burst_len + payload_length) ==
+		     SESS_OPS_C(conn)->FirstBurstLength) {
+			if (iscsi_dataout_update_r2t(cmd, hdr->offset,
+					payload_length) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+			send_r2t = 1;
+		}
+
+		if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+			ret = iscsi_dataout_update_datapduinorder_no(cmd,
+				hdr->datasn, (hdr->flags & ISCSI_FLAG_CMD_FINAL));
+			if (ret == DATAOUT_CANNOT_RECOVER)
+				return ret;
+		}
+
+		cmd->first_burst_len += payload_length;
+
+		if (SESS_OPS_C(conn)->DataSequenceInOrder)
+			cmd->data_sn++;
+		else {
+			seq = cmd->seq_ptr;
+			seq->data_sn++;
+			seq->offset += payload_length;
+		}
+
+		if (send_r2t) {
+			if (seq)
+				seq->status = DATAOUT_SEQUENCE_COMPLETE;
+			cmd->first_burst_len = 0;
+			cmd->unsolicited_data = 0;
+		}
+	} else {
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if ((cmd->next_burst_len + payload_length) ==
+			     SESS_OPS_C(conn)->MaxBurstLength) {
+				if (iscsi_dataout_update_r2t(cmd, hdr->offset,
+						payload_length) < 0)
+					return DATAOUT_CANNOT_RECOVER;
+				send_r2t = 1;
+			}
+
+			if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+				ret = iscsi_dataout_update_datapduinorder_no(
+						cmd, hdr->datasn,
+						(hdr->flags & ISCSI_FLAG_CMD_FINAL));
+				if (ret == DATAOUT_CANNOT_RECOVER)
+					return ret;
+			}
+
+			cmd->next_burst_len += payload_length;
+			cmd->data_sn++;
+
+			if (send_r2t)
+				cmd->next_burst_len = 0;
+		} else {
+			seq = cmd->seq_ptr;
+
+			if ((seq->next_burst_len + payload_length) ==
+			     seq->xfer_len) {
+				if (iscsi_dataout_update_r2t(cmd, hdr->offset,
+						payload_length) < 0)
+					return DATAOUT_CANNOT_RECOVER;
+				send_r2t = 1;
+			}
+
+			if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+				ret = iscsi_dataout_update_datapduinorder_no(
+						cmd, hdr->datasn,
+						(hdr->flags & ISCSI_FLAG_CMD_FINAL));
+				if (ret == DATAOUT_CANNOT_RECOVER)
+					return ret;
+			}
+
+			seq->data_sn++;
+			seq->offset += payload_length;
+			seq->next_burst_len += payload_length;
+
+			if (send_r2t) {
+				seq->next_burst_len = 0;
+				seq->status = DATAOUT_SEQUENCE_COMPLETE;
+			}
+		}
+	}
+
+	if (send_r2t && SESS_OPS_C(conn)->DataSequenceInOrder)
+		cmd->data_sn = 0;
+
+	cmd->write_data_done += payload_length;
+
+	return (cmd->write_data_done == cmd->data_length) ?
+		DATAOUT_SEND_TO_TRANSPORT : (send_r2t) ?
+		DATAOUT_SEND_R2T : DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_post_crc_failed():
+ *
+ *	Called from:	iscsi_check_post_dataout()
+ */
+static inline int iscsi_dataout_post_crc_failed(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	if (SESS_OPS_C(conn)->DataPDUInOrder)
+		goto recover;
+
+	/*
+	 * The rest of this function only runs when DataPDUInOrder=No.
+	 */
+	pdu = cmd->pdu_ptr;
+
+	switch (pdu->status) {
+	case ISCSI_PDU_NOT_RECEIVED:
+	case ISCSI_PDU_TIMED_OUT:
+		pdu->status = ISCSI_PDU_CRC_FAILED;
+		break;
+	case ISCSI_PDU_CRC_FAILED:
+		break;
+	default:
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+recover:
+	return iscsi_recover_dataout_sequence(cmd, hdr->offset, payload_length);
+}
+
+/*	iscsi_check_pre_dataout():
+ *
+ *	Called from iscsi_handle_data_out() before DataOUT Payload is received
+ *	and CRC computed.
+ */
+extern int iscsi_check_pre_dataout(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int ret;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	ret = iscsi_dataout_within_command_recovery_check(cmd, buf);
+	if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+	    (ret == DATAOUT_CANNOT_RECOVER))
+		return ret;
+
+	ret = iscsi_dataout_check_datasn(cmd, buf);
+	if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+	    (ret == DATAOUT_CANNOT_RECOVER))
+		return ret;
+
+	if (cmd->unsolicited_data) {
+		ret = iscsi_dataout_check_unsolicited_sequence(cmd, buf);
+		if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+		    (ret == DATAOUT_CANNOT_RECOVER))
+			return ret;
+	} else {
+		ret = iscsi_dataout_check_sequence(cmd, buf);
+		if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+		    (ret == DATAOUT_CANNOT_RECOVER))
+			return ret;
+	}
+
+	return (SESS_OPS_C(conn)->DataPDUInOrder) ?
+		iscsi_dataout_pre_datapduinorder_yes(cmd, buf) :
+		iscsi_dataout_pre_datapduinorder_no(cmd, buf);
+}
+
+/*	iscsi_check_post_dataout():
+ *
+ *	Called from iscsi_handle_data_out() after DataOUT Payload is received
+ *	and CRC computed.
+ */
+int iscsi_check_post_dataout(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	__u8 data_crc_failed)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->dataout_timeout_retries = 0;
+
+	if (!data_crc_failed)
+		return iscsi_dataout_post_crc_passed(cmd, buf);
+	else {
+		if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+			printk(KERN_ERR "Unable to recover from DataOUT CRC"
+				" failure while ERL=0, closing session.\n");
+			iscsi_add_reject_from_cmd(ISCSI_REASON_DATA_DIGEST_ERROR,
+					1, 0, buf, cmd);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+
+		iscsi_add_reject_from_cmd(ISCSI_REASON_DATA_DIGEST_ERROR,
+				0, 0, buf, cmd);
+		return iscsi_dataout_post_crc_failed(cmd, buf);
+	}
+}
+
+/*	iscsi_handle_time2retain_timeout():
+ *
+ *
+ */
+static void iscsi_handle_time2retain_timeout(unsigned long data)
+{
+	struct iscsi_session *sess = (struct iscsi_session *) data;
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+
+	spin_lock_bh(&se_tpg->session_lock);
+	if (sess->time2retain_timer_flags & T2R_TF_STOP) {
+		spin_unlock_bh(&se_tpg->session_lock);
+		return;
+	}
+	if (atomic_read(&sess->session_reinstatement)) {
+		printk(KERN_ERR "Exiting Time2Retain handler because"
+				" session_reinstatement=1\n");
+		spin_unlock_bh(&se_tpg->session_lock);
+		return;
+	}
+	sess->time2retain_timer_flags |= T2R_TF_EXPIRED;
+
+	printk(KERN_ERR "Time2Retain timer expired for SID: %u, cleaning up"
+			" iSCSI session.\n", sess->sid);
+	{
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	if (tiqn) {
+		spin_lock(&tiqn->sess_err_stats.lock);
+		strcpy(tiqn->sess_err_stats.last_sess_fail_rem_name,
+			(void *)SESS_OPS(sess)->InitiatorName);
+		tiqn->sess_err_stats.last_sess_failure_type =
+				ISCSI_SESS_ERR_CXN_TIMEOUT;
+		tiqn->sess_err_stats.cxn_timeout_errors++;
+		sess->conn_timeout_errors++;
+		spin_unlock(&tiqn->sess_err_stats.lock);
+	}
+	}
+
+	spin_unlock_bh(&se_tpg->session_lock);
+	iscsi_close_session(sess);
+}
+
+/*	iscsi_start_time2retain_handler():
+ *
+ *
+ */
+extern void iscsi_start_time2retain_handler(struct iscsi_session *sess)
+{
+	int tpg_active;
+
+	/*
+	 * Only start the Time2Retain timer when the associated TPG is still
+	 * in an ACTIVE (i.e. not disabled or shutdown) state.
+	 */
+	spin_lock(&ISCSI_TPG_S(sess)->tpg_state_lock);
+	tpg_active = (ISCSI_TPG_S(sess)->tpg_state == TPG_STATE_ACTIVE);
+	spin_unlock(&ISCSI_TPG_S(sess)->tpg_state_lock);
+
+	if (!(tpg_active))
+		return;
+
+	if (sess->time2retain_timer_flags & T2R_TF_RUNNING)
+		return;
+
+	TRACE(TRACE_TIMER, "Starting Time2Retain timer for %u seconds on"
+		" SID: %u\n", SESS_OPS(sess)->DefaultTime2Retain, sess->sid);
+
+	init_timer(&sess->time2retain_timer);
+	SETUP_TIMER(sess->time2retain_timer, SESS_OPS(sess)->DefaultTime2Retain,
+			sess, iscsi_handle_time2retain_timeout);
+	sess->time2retain_timer_flags &= ~T2R_TF_STOP;
+	sess->time2retain_timer_flags |= T2R_TF_RUNNING;
+	add_timer(&sess->time2retain_timer);
+
+	return;
+}
+
+/*	iscsi_stop_time2retain_timer():
+ *
+ *	Called with spin_lock_bh(&struct se_portal_group->session_lock) held
+ */
+extern int iscsi_stop_time2retain_timer(struct iscsi_session *sess)
+{
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+
+	if (sess->time2retain_timer_flags & T2R_TF_EXPIRED)
+		return -1;
+
+	if (!(sess->time2retain_timer_flags & T2R_TF_RUNNING))
+		return 0;
+
+	sess->time2retain_timer_flags |= T2R_TF_STOP;
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	del_timer_sync(&sess->time2retain_timer);
+
+	spin_lock_bh(&se_tpg->session_lock);
+	sess->time2retain_timer_flags &= ~T2R_TF_RUNNING;
+	TRACE(TRACE_TIMER, "Stopped Time2Retain Timer for SID: %u\n",
+			sess->sid);
+	return 0;
+}
+
+/*	iscsi_connection_reinstatement_rcfr():
+ *
+ *
+ */
+void iscsi_connection_reinstatement_rcfr(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->connection_exit)) {
+		spin_unlock_bh(&conn->state_lock);
+		goto sleep;
+	}
+
+	if (atomic_read(&conn->transport_failed)) {
+		spin_unlock_bh(&conn->state_lock);
+		goto sleep;
+	}
+	spin_unlock_bh(&conn->state_lock);
+
+	iscsi_thread_set_force_reinstatement(conn);
+
+sleep:
+	down(&conn->conn_wait_rcfr_sem);
+	up(&conn->conn_post_wait_sem);
+}
+
+/*	iscsi_cause_connection_reinstatement():
+ *
+ *
+ */
+void iscsi_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
+{
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->connection_exit)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	if (atomic_read(&conn->transport_failed)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	if (atomic_read(&conn->connection_reinstatement)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	if (iscsi_thread_set_force_reinstatement(conn) < 0) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	atomic_set(&conn->connection_reinstatement, 1);
+	if (!sleep) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	atomic_set(&conn->sleep_on_conn_wait_sem, 1);
+	spin_unlock_bh(&conn->state_lock);
+
+	down(&conn->conn_wait_sem);
+	up(&conn->conn_post_wait_sem);
+}
+
+/*	iscsi_fall_back_to_erl0():
+ *
+ *
+ */
+void iscsi_fall_back_to_erl0(struct iscsi_session *sess)
+{
+	TRACE(TRACE_ERL0, "Falling back to ErrorRecoveryLevel=0 for SID:"
+			" %u\n", sess->sid);
+
+	atomic_set(&sess->session_fall_back_to_erl0, 1);
+}
+
+/*	iscsi_handle_connection_cleanup():
+ *
+ *
+ */
+static void iscsi_handle_connection_cleanup(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+	if ((SESS_OPS(sess)->ErrorRecoveryLevel == 2) &&
+	    !atomic_read(&sess->session_reinstatement) &&
+	    !atomic_read(&sess->session_fall_back_to_erl0))
+		iscsi_connection_recovery_transport_reset(conn);
+	else {
+		TRACE(TRACE_ERL0, "Performing cleanup for failed iSCSI"
+			" Connection ID: %hu from %s\n", conn->cid,
+			SESS_OPS(sess)->InitiatorName);
+		iscsi_close_connection(conn);
+	}
+}
+
+/*	iscsi_take_action_for_connection_exit():
+ *
+ *
+ */
+extern void iscsi_take_action_for_connection_exit(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->connection_exit)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+	atomic_set(&conn->connection_exit, 1);
+
+	if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT) {
+		spin_unlock_bh(&conn->state_lock);
+		iscsi_close_connection(conn);
+		return;
+	}
+
+	if (conn->conn_state == TARG_CONN_STATE_CLEANUP_WAIT) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_CLEANUP_WAIT.\n");
+	conn->conn_state = TARG_CONN_STATE_CLEANUP_WAIT;
+	spin_unlock_bh(&conn->state_lock);
+
+	iscsi_handle_connection_cleanup(conn);
+}
+
+/*	iscsi_recover_from_unknown_opcode():
+ *
+ *	This function uses the sync and steering markers to resynchronize
+ *	to the next iSCSI PDU boundary, in the following order:
+ *
+ *	0) Receive conn->of_marker (bytes left until the next OFMarker)
+ *	   bytes into an offload buffer.  Once exactly conn->of_marker bytes
+ *	   have been consumed, iscsi_dump_data_payload() and hence
+ *	   rx_data() will receive the identical __u32 marker values and
+ *	   store them in conn->of_marker_offset;
+ *	1) conn->of_marker_offset now contains the offset to the start
+ *	   of the next iSCSI PDU.  Dump these remaining bytes into another
+ *	   offload buffer.
+ *	2) Done: the next byte in the TCP stream is the start of the next
+ *	   iSCSI PDU.
+ */
+int iscsi_recover_from_unknown_opcode(struct iscsi_conn *conn)
+{
+	/*
+	 * Make sure the remaining byte count to the next marker is sane.
+	 */
+	if (conn->of_marker > (CONN_OPS(conn)->OFMarkInt * 4)) {
+		printk(KERN_ERR "Remaining bytes to OFMarker: %u exceeds"
+			" OFMarkInt bytes: %u.\n", conn->of_marker,
+				CONN_OPS(conn)->OFMarkInt * 4);
+		return -1;
+	}
+
+	TRACE(TRACE_ERL1, "Advancing %u bytes in TCP stream to get to the"
+			" next OFMarker.\n", conn->of_marker);
+
+	if (iscsi_dump_data_payload(conn, conn->of_marker, 0) < 0)
+		return -1;
+
+	/*
+	 * Make sure the offset marker we retrieved is a valid value.
+	 */
+	if (conn->of_marker_offset > (ISCSI_HDR_LEN + (CRC_LEN * 2) +
+	    CONN_OPS(conn)->MaxRecvDataSegmentLength)) {
+		printk(KERN_ERR "OfMarker offset value: %u exceeds limit.\n",
+			conn->of_marker_offset);
+		return -1;
+	}
+
+	TRACE(TRACE_ERL1, "Discarding %u bytes of TCP stream to get to the"
+			" next iSCSI Opcode.\n", conn->of_marker_offset);
+
+	if (iscsi_dump_data_payload(conn, conn->of_marker_offset, 0) < 0)
+		return -1;
+
+	return 0;
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_erl0.h b/drivers/target/iscsi/iscsi_target_erl0.h
new file mode 100644
index 0000000..6619d1e
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl0.h
@@ -0,0 +1,19 @@
+#ifndef ISCSI_TARGET_ERL0_H
+#define ISCSI_TARGET_ERL0_H
+
+extern void iscsi_set_dataout_sequence_values(struct iscsi_cmd *);
+extern int iscsi_check_pre_dataout(struct iscsi_cmd *, unsigned char *);
+extern int iscsi_check_post_dataout(struct iscsi_cmd *, unsigned char *, __u8);
+extern void iscsi_start_time2retain_handler(struct iscsi_session *);
+extern int iscsi_stop_time2retain_timer(struct iscsi_session *);
+extern void iscsi_connection_reinstatement_rcfr(struct iscsi_conn *);
+extern void iscsi_cause_connection_reinstatement(struct iscsi_conn *, int);
+extern void iscsi_fall_back_to_erl0(struct iscsi_session *);
+extern void iscsi_take_action_for_connection_exit(struct iscsi_conn *);
+extern int iscsi_recover_from_unknown_opcode(struct iscsi_conn *);
+
+extern struct iscsi_global *iscsi_global;
+extern int iscsi_add_reject_from_cmd(u8, int, int, unsigned char *,
+			struct iscsi_cmd *);
+
+#endif   /*** ISCSI_TARGET_ERL0_H ***/
diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c
new file mode 100644
index 0000000..50233ff
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl1.c
@@ -0,0 +1,1382 @@
+/*******************************************************************************
+ * This file contains error recovery level one used by the iSCSI Target driver.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2006 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+
+#define OFFLOAD_BUF_SIZE	32768
+
+/*	iscsi_dump_data_payload():
+ *
+ *	Used to dump excess data payload for certain error recovery
+ *	situations.  Receives at most OFFLOAD_BUF_SIZE bytes per rx_data().
+ *
+ *	dump_padding_digest denotes if padding and data digests need
+ *	to be dumped.
+ */
+int iscsi_dump_data_payload(
+	struct iscsi_conn *conn,
+	u32 buf_len,
+	int dump_padding_digest)
+{
+	char *buf, pad_bytes[4];
+	int ret = DATAOUT_WITHIN_COMMAND_RECOVERY, rx_got;
+	u32 length, padding, offset = 0, size;
+	struct iovec iov;
+
+	length = (buf_len > OFFLOAD_BUF_SIZE) ? OFFLOAD_BUF_SIZE : buf_len;
+
+	buf = kzalloc(length, GFP_ATOMIC);
+	if (!(buf)) {
+		printk(KERN_ERR "Unable to allocate %u bytes for offload"
+				" buffer.\n", length);
+		return -1;
+	}
+	memset(&iov, 0, sizeof(struct iovec));
+
+	while (offset < buf_len) {
+		size = ((offset + length) > buf_len) ?
+			(buf_len - offset) : length;
+
+		iov.iov_len = size;
+		iov.iov_base = buf;
+
+		rx_got = rx_data(conn, &iov, 1, size);
+		if (rx_got != size) {
+			ret = DATAOUT_CANNOT_RECOVER;
+			goto out;
+		}
+
+		offset += size;
+	}
+
+	if (!dump_padding_digest)
+		goto out;
+
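+	/*
+	 * iSCSI data segments are padded to a 4 byte boundary; receive and
+	 * discard any padding bytes as well.
+	 */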
+	padding = ((-buf_len) & 3);
+	if (padding != 0) {
+		iov.iov_len = padding;
+		iov.iov_base = pad_bytes;
+
+		rx_got = rx_data(conn, &iov, 1, padding);
+		if (rx_got != padding) {
+			ret = DATAOUT_CANNOT_RECOVER;
+			goto out;
+		}
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		u32 data_crc;
+
+		iov.iov_len = CRC_LEN;
+		iov.iov_base = &data_crc;
+
+		rx_got = rx_data(conn, &iov, 1, CRC_LEN);
+		if (rx_got != CRC_LEN) {
+			ret = DATAOUT_CANNOT_RECOVER;
+			goto out;
+		}
+	}
+
+out:
+	kfree(buf);
+	return ret;
+}
+
+/*	iscsi_send_recovery_r2t_for_snack():
+ *
+ *	Used for retransmitting R2Ts from a R2T SNACK request.
+ */
+static int iscsi_send_recovery_r2t_for_snack(
+	struct iscsi_cmd *cmd,
+	struct iscsi_r2t *r2t)
+{
+	/*
+	 * If the struct iscsi_r2t has not been sent yet, we can safely
+	 * ignore retransmission of the R2TSN in question.
+	 */
+	spin_lock_bh(&cmd->r2t_lock);
+	if (!r2t->sent_r2t) {
+		spin_unlock_bh(&cmd->r2t_lock);
+		return 0;
+	}
+	r2t->sent_r2t = 0;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	iscsi_add_cmd_to_immediate_queue(cmd, CONN(cmd), ISTATE_SEND_R2T);
+
+	return 0;
+}
+
+/*	iscsi_handle_r2t_snack():
+ *
+ *
+ */
+static int iscsi_handle_r2t_snack(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	u32 begrun,
+	u32 runlength)
+{
+	u32 last_r2tsn;
+	struct iscsi_r2t *r2t;
+
+	/*
+	 * Make sure the initiator is not requesting retransmission
+	 * of R2TSNs already acknowledged by a TMR TASK_REASSIGN.
+	 */
+	if ((cmd->cmd_flags & ICF_GOT_DATACK_SNACK) &&
+	    (begrun <= cmd->acked_data_sn)) {
+		printk(KERN_ERR "ITT: 0x%08x, R2T SNACK requesting"
+			" retransmission of R2TSN: 0x%08x to 0x%08x but already"
+			" acked to  R2TSN: 0x%08x by TMR TASK_REASSIGN,"
+			" protocol error.\n", cmd->init_task_tag, begrun,
+			(begrun + runlength), cmd->acked_data_sn);
+
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+	}
+
+	if (runlength) {
+		if ((begrun + runlength) > cmd->r2t_sn) {
+			printk(KERN_ERR "Command ITT: 0x%08x received R2T SNACK"
+			" with BegRun: 0x%08x, RunLength: 0x%08x, exceeds"
+			" current R2TSN: 0x%08x, protocol error.\n",
+			cmd->init_task_tag, begrun, runlength, cmd->r2t_sn);
+			return iscsi_add_reject_from_cmd(
+				ISCSI_REASON_BOOKMARK_INVALID, 1, 0, buf, cmd);
+		}
+		last_r2tsn = (begrun + runlength);
+	} else
+		last_r2tsn = cmd->r2t_sn;
+
+	while (begrun < last_r2tsn) {
+		r2t = iscsi_get_holder_for_r2tsn(cmd, begrun);
+		if (!(r2t))
+			return -1;
+		if (iscsi_send_recovery_r2t_for_snack(cmd, r2t) < 0)
+			return -1;
+
+		begrun++;
+	}
+
+	return 0;
+}
+
+/*	iscsi_create_recovery_datain_values_datasequenceinorder_yes():
+ *
+ *	Generates Offsets and NextBurstLength based on Begrun and Runlength
+ *	carried in a Data SNACK or ExpDataSN in TMR TASK_REASSIGN.
+ *
+ *	For DataSequenceInOrder=Yes and DataPDUInOrder=[Yes,No] only.
+ *
+ *	FIXME: How is this handled for a RData SNACK?
+ */
+int iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain_req *dr)
+{
+	u32 data_sn = 0, data_sn_count = 0;
+	u32 pdu_start = 0, seq_no = 0;
+	u32 begrun = dr->begrun;
+	struct iscsi_conn *conn = CONN(cmd);
+
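+	/*
+	 * Walk forward one DataSN at a time until BegRun is reached,
+	 * accumulating read_data_done and next_burst_len; every time a
+	 * MaxBurstLength boundary is crossed a new sequence begins.
+	 */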
+	while (begrun > data_sn++) {
+		data_sn_count++;
+		if ((dr->next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			dr->read_data_done +=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			dr->next_burst_len +=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		} else {
+			dr->read_data_done +=
+				(SESS_OPS_C(conn)->MaxBurstLength -
+				 dr->next_burst_len);
+			dr->next_burst_len = 0;
+			pdu_start += data_sn_count;
+			data_sn_count = 0;
+			seq_no++;
+		}
+	}
+
+	if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+		cmd->seq_no = seq_no;
+		cmd->pdu_start = pdu_start;
+		cmd->pdu_send_order = data_sn_count;
+	}
+
+	return 0;
+}
+
+/*	iscsi_create_recovery_datain_values_datasequenceinorder_no():
+ *
+ *	Generates Offsets and NextBurstLength based on Begrun and Runlength
+ *	carried in a Data SNACK or ExpDataSN in TMR TASK_REASSIGN.
+ *
+ *	For DataSequenceInOrder=No and DataPDUInOrder=[Yes,No] only.
+ *
+ *	FIXME: How is this handled for a RData SNACK?
+ */
+int iscsi_create_recovery_datain_values_datasequenceinorder_no(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain_req *dr)
+{
+	int found_seq = 0, i;
+	u32 data_sn, read_data_done = 0, seq_send_order = 0;
+	u32 begrun = dr->begrun;
+	u32 runlength = dr->runlength;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_seq *first_seq = NULL, *seq = NULL;
+
+	if (!cmd->seq_list) {
+		printk(KERN_ERR "struct iscsi_cmd->seq_list is NULL!\n");
+		return -1;
+	}
+
+	/*
+	 * Calculate read_data_done for all sequences containing a
+	 * first_datasn and last_datasn less than the BegRun.
+	 *
+	 * Locate the struct iscsi_seq the BegRun lies within and calculate
+	 * NextBurstLength up to the DataSN based on MaxRecvDataSegmentLength.
+	 *
+	 * Also use struct iscsi_seq->seq_send_order to determine where to start.
+	 */
+	for (i = 0; i < cmd->seq_count; i++) {
+		seq = &cmd->seq_list[i];
+
+		if (!seq->seq_send_order)
+			first_seq = seq;
+
+		/*
+		 * No data has been transferred for this DataIN sequence, so the
+		 * seq->first_datasn and seq->last_datasn have not been set.
+		 */
+		if (!seq->sent) {
+#if 0
+			printk(KERN_ERR "Ignoring non-sent sequence 0x%08x ->"
+				" 0x%08x\n\n", seq->first_datasn,
+				seq->last_datasn);
+#endif
+			continue;
+		}
+
+		/*
+		 * This DataIN sequence precedes the received BegRun; add the
+		 * total xfer_len of the sequence to read_data_done and reset
+		 * seq->pdu_send_order.
+		 */
+		if ((seq->first_datasn < begrun) &&
+				(seq->last_datasn < begrun)) {
+#if 0
+			printk(KERN_ERR "Pre BegRun sequence 0x%08x ->"
+				" 0x%08x\n", seq->first_datasn,
+				seq->last_datasn);
+#endif
+			read_data_done += cmd->seq_list[i].xfer_len;
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			continue;
+		}
+
+		/*
+		 * The BegRun lies within this DataIN sequence.
+		 */
+		if ((seq->first_datasn <= begrun) &&
+				(seq->last_datasn >= begrun)) {
+#if 0
+			printk(KERN_ERR "Found sequence begrun: 0x%08x in"
+				" 0x%08x -> 0x%08x\n", begrun,
+				seq->first_datasn, seq->last_datasn);
+#endif
+			seq_send_order = seq->seq_send_order;
+			data_sn = seq->first_datasn;
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			found_seq = 1;
+
+			/*
+			 * For DataPDUInOrder=Yes, while the first DataSN of
+			 * the sequence is less than the received BegRun, add
+			 * the MaxRecvDataSegmentLength to read_data_done and
+			 * to the sequence's next_burst_len;
+			 *
+			 * For DataPDUInOrder=No, while the first DataSN of the
+			 * sequence is less than the received BegRun, find the
+			 * struct iscsi_pdu of the DataSN in question and add the
+			 * MaxRecvDataSegmentLength to read_data_done and to the
+			 * sequence's next_burst_len;
+			 */
+			if (SESS_OPS_C(conn)->DataPDUInOrder) {
+				while (data_sn < begrun) {
+					seq->pdu_send_order++;
+					read_data_done +=
+						CONN_OPS(conn)->MaxRecvDataSegmentLength;
+					seq->next_burst_len +=
+						CONN_OPS(conn)->MaxRecvDataSegmentLength;
+					data_sn++;
+				}
+			} else {
+				int j;
+				struct iscsi_pdu *pdu;
+
+				while (data_sn < begrun) {
+					seq->pdu_send_order++;
+
+					for (j = 0; j < seq->pdu_count; j++) {
+						pdu = &cmd->pdu_list[
+							seq->pdu_start + j];
+						if (pdu->data_sn == data_sn) {
+							read_data_done +=
+								pdu->length;
+							seq->next_burst_len +=
+								pdu->length;
+						}
+					}
+					data_sn++;
+				}
+			}
+			continue;
+		}
+
+		/*
+		 * This DataIN sequence lies beyond the received BegRun;
+		 * reset seq->pdu_send_order and continue.
+		 */
+		if ((seq->first_datasn > begrun) ||
+				(seq->last_datasn > begrun)) {
+#if 0
+			printk(KERN_ERR "Post BegRun sequence 0x%08x -> 0x%08x\n",
+					seq->first_datasn, seq->last_datasn);
+#endif
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			continue;
+		}
+	}
+
+	if (!found_seq) {
+		if (!begrun) {
+			if (!first_seq) {
+				printk(KERN_ERR "ITT: 0x%08x, Begrun: 0x%08x"
+					" but first_seq is NULL\n",
+					cmd->init_task_tag, begrun);
+				return -1;
+			}
+			seq_send_order = first_seq->seq_send_order;
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			goto done;
+		}
+
+		printk(KERN_ERR "Unable to locate struct iscsi_seq for ITT: 0x%08x,"
+			" BegRun: 0x%08x, RunLength: 0x%08x while"
+			" DataSequenceInOrder=No and DataPDUInOrder=%s.\n",
+				cmd->init_task_tag, begrun, runlength,
+			(SESS_OPS_C(conn)->DataPDUInOrder) ? "Yes" : "No");
+		return -1;
+	}
+
+done:
+	dr->read_data_done = read_data_done;
+	dr->seq_send_order = seq_send_order;
+
+	return 0;
+}
+
+/*	iscsi_handle_recovery_datain():
+ *
+ *
+ */
+static inline int iscsi_handle_recovery_datain(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	u32 begrun,
+	u32 runlength)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+
+	if (!(atomic_read(&T_TASK(se_cmd)->t_transport_complete))) {
+		printk(KERN_ERR "Ignoring ITT: 0x%08x Data SNACK\n",
+				cmd->init_task_tag);
+		return 0;
+	}
+
+	/*
+	 * Make sure the initiator is not requesting retransmission
+	 * of DataSNs already acknowledged by a Data ACK SNACK.
+	 */
+	if ((cmd->cmd_flags & ICF_GOT_DATACK_SNACK) &&
+	    (begrun <= cmd->acked_data_sn)) {
+		printk(KERN_ERR "ITT: 0x%08x, Data SNACK requesting"
+			" retransmission of DataSN: 0x%08x to 0x%08x but"
+			" already acked to DataSN: 0x%08x by Data ACK SNACK,"
+			" protocol error.\n", cmd->init_task_tag, begrun,
+			(begrun + runlength), cmd->acked_data_sn);
+
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
+				1, 0, buf, cmd);
+	}
+
+	/*
+	 * Make sure BegRun and RunLength in the Data SNACK are sane.
+	 * Note: (cmd->data_sn - 1) will carry the maximum DataSN sent.
+	 */
+	if ((begrun + runlength) > (cmd->data_sn - 1)) {
+		printk(KERN_ERR "Initiator requesting BegRun: 0x%08x, RunLength"
+			": 0x%08x greater than maximum DataSN: 0x%08x.\n",
+				begrun, runlength, (cmd->data_sn - 1));
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
+				1, 0, buf, cmd);
+	}
+
+	dr = iscsi_allocate_datain_req();
+	if (!(dr))
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				1, 0, buf, cmd);
+
+	dr->data_sn = dr->begrun = begrun;
+	dr->runlength = runlength;
+	dr->generate_recovery_values = 1;
+	dr->recovery = DATAIN_WITHIN_COMMAND_RECOVERY;
+
+	iscsi_attach_datain_req(cmd, dr);
+
+	cmd->i_state = ISTATE_SEND_DATAIN;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_handle_recovery_datain_or_r2t():
+ *
+ *
+ */
+int iscsi_handle_recovery_datain_or_r2t(
+	struct iscsi_conn *conn,
+	unsigned char *buf,
+	u32 init_task_tag,
+	u32 targ_xfer_tag,
+	u32 begrun,
+	u32 runlength)
+{
+	struct iscsi_cmd *cmd;
+
+	cmd = iscsi_find_cmd_from_itt(conn, init_task_tag);
+	if (!(cmd))
+		return 0;
+
+	/*
+	 * FIXME: This will not work for bidi commands.
+	 */
+	switch (cmd->data_direction) {
+	case DMA_TO_DEVICE:
+		return iscsi_handle_r2t_snack(cmd, buf, begrun, runlength);
+	case DMA_FROM_DEVICE:
+		return iscsi_handle_recovery_datain(cmd, buf, begrun,
+				runlength);
+	default:
+		printk(KERN_ERR "Unknown cmd->data_direction: 0x%02x\n",
+				cmd->data_direction);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_send_recovery_status():
+ *
+ *
+ */
+/* #warning FIXME: Status SNACK needs to be dependent on OPCODE!!! */
+int iscsi_handle_status_snack(
+	struct iscsi_conn *conn,
+	u32 init_task_tag,
+	u32 targ_xfer_tag,
+	u32 begrun,
+	u32 runlength)
+{
+	u32 last_statsn;
+	int found_cmd;
+	struct iscsi_cmd *cmd = NULL;
+
+	if (conn->exp_statsn > begrun) {
+		printk(KERN_ERR "Got Status SNACK Begrun: 0x%08x, RunLength:"
+			" 0x%08x but already got ExpStatSN: 0x%08x on CID:"
+			" %hu.\n", begrun, runlength, conn->exp_statsn,
+			conn->cid);
+		return 0;
+	}
+
+	last_statsn = (!runlength) ? conn->stat_sn : (begrun + runlength);
+
+	while (begrun < last_statsn) {
+		found_cmd = 0;
+
+		spin_lock_bh(&conn->cmd_lock);
+		list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+			if (cmd->stat_sn == begrun) {
+				found_cmd = 1;
+				break;
+			}
+		}
+		spin_unlock_bh(&conn->cmd_lock);
+
+		if (!found_cmd) {
+			printk(KERN_ERR "Unable to find StatSN: 0x%08x for"
+				" a Status SNACK, assuming this was a"
+				" protactic SNACK for an untransmitted"
+				" StatSN, ignoring.\n", begrun);
+			begrun++;
+			continue;
+		}
+
+		spin_lock_bh(&cmd->istate_lock);
+		if (cmd->i_state == ISTATE_SEND_DATAIN) {
+			spin_unlock_bh(&cmd->istate_lock);
+			printk(KERN_ERR "Ignoring Status SNACK for BegRun:"
+				" 0x%08x, RunLength: 0x%08x, assuming this was"
+				" a protactic SNACK for an untransmitted"
+				" StatSN\n", begrun, runlength);
+			begrun++;
+			continue;
+		}
+		spin_unlock_bh(&cmd->istate_lock);
+
+		cmd->i_state = ISTATE_SEND_STATUS_RECOVERY;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		begrun++;
+	}
+
+	return 0;
+}
+
+/*	iscsi_handle_data_ack():
+ *
+ *
+ */
+int iscsi_handle_data_ack(
+	struct iscsi_conn *conn,
+	u32 targ_xfer_tag,
+	u32 begrun,
+	u32 runlength)
+{
+	struct iscsi_cmd *cmd = NULL;
+
+	cmd = iscsi_find_cmd_from_ttt(conn, targ_xfer_tag);
+	if (!(cmd)) {
+		printk(KERN_ERR "Data ACK SNACK for TTT: 0x%08x is"
+			" invalid.\n", targ_xfer_tag);
+		return -1;
+	}
+
+	if (begrun <= cmd->acked_data_sn) {
+		printk(KERN_ERR "ITT: 0x%08x Data ACK SNACK BegRUN: 0x%08x is"
+			" less than the already acked DataSN: 0x%08x.\n",
+			cmd->init_task_tag, begrun, cmd->acked_data_sn);
+		return -1;
+	}
+
+	/*
+	 * For Data ACK SNACK, BegRun is the next expected DataSN.
+	 * (see iSCSI v19: 10.16.6)
+	 */
+	cmd->cmd_flags |= ICF_GOT_DATACK_SNACK;
+	cmd->acked_data_sn = (begrun - 1);
+
+	TRACE(TRACE_ISCSI, "Received Data ACK SNACK for ITT: 0x%08x,"
+		" updated acked DataSN to 0x%08x.\n",
+			cmd->init_task_tag, cmd->acked_data_sn);
+
+	return 0;
+}
+
+/*	iscsi_send_recovery_r2t():
+ *
+ *
+ */
+static int iscsi_send_recovery_r2t(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 xfer_len)
+{
+	int ret;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	ret = iscsi_add_r2t_to_list(cmd, offset, xfer_len, 1, 0);
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return ret;
+}
+
+/*	iscsi_dataout_datapduinorder_no_fbit():
+ *
+ *
+ */
+int iscsi_dataout_datapduinorder_no_fbit(
+	struct iscsi_cmd *cmd,
+	struct iscsi_pdu *pdu)
+{
+	int i, send_recovery_r2t = 0, recovery = 0;
+	u32 length = 0, offset = 0, pdu_count = 0, xfer_len = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *first_pdu = NULL;
+
+	/*
+	 * Get a struct iscsi_pdu pointer to the first PDU, and the total
+	 * PDU count of the DataOUT sequence.
+	 */
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		for (i = 0; i < cmd->pdu_count; i++) {
+			if (cmd->pdu_list[i].seq_no == pdu->seq_no) {
+				if (!first_pdu)
+					first_pdu = &cmd->pdu_list[i];
+				xfer_len += cmd->pdu_list[i].length;
+				pdu_count++;
+			} else if (pdu_count)
+				break;
+		}
+	} else {
+		struct iscsi_seq *seq = cmd->seq_ptr;
+
+		first_pdu = &cmd->pdu_list[seq->pdu_start];
+		pdu_count = seq->pdu_count;
+	}
+
+	if (!first_pdu || !pdu_count)
+		return DATAOUT_CANNOT_RECOVER;
+
+	/*
+	 * Loop through the ending DataOUT sequence checking each struct
+	 * iscsi_pdu.  The logic below batches contiguous runs of PDUs that
+	 * were not received into single recovery R2Ts.
+	 */
+	for (i = 0; i < pdu_count; i++) {
+		if (first_pdu[i].status == ISCSI_PDU_RECEIVED_OK) {
+			if (!send_recovery_r2t)
+				continue;
+
+			if (iscsi_send_recovery_r2t(cmd, offset, length) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+
+			send_recovery_r2t = length = offset = 0;
+			continue;
+		}
+		/*
+		 * Set recovery = 1 for any missing, CRC failed, or timed
+		 * out PDUs to let the DataOUT logic know that this sequence
+		 * has not been completed yet.
+		 *
+		 * Also, only send a Recovery R2T for ISCSI_PDU_NOT_RECEIVED.
+		 * We assume if the PDU either failed CRC or timed out
+		 * that a Recovery R2T has already been sent.
+		 */
+		recovery = 1;
+
+		if (first_pdu[i].status != ISCSI_PDU_NOT_RECEIVED)
+			continue;
+
+		if (!offset)
+			offset = first_pdu[i].offset;
+		length += first_pdu[i].length;
+
+		send_recovery_r2t = 1;
+	}
+
+	if (send_recovery_r2t)
+		if (iscsi_send_recovery_r2t(cmd, offset, length) < 0)
+			return DATAOUT_CANNOT_RECOVER;
+
+	return (!recovery) ? DATAOUT_NORMAL : DATAOUT_WITHIN_COMMAND_RECOVERY;
+}
+
+/*	iscsi_recalculate_dataout_values():
+ *
+ *
+ */
+static int iscsi_recalculate_dataout_values(
+	struct iscsi_cmd *cmd,
+	u32 pdu_offset,
+	u32 pdu_length,
+	u32 *r2t_offset,
+	u32 *r2t_length)
+{
+	int i;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = NULL;
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		cmd->data_sn = 0;
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			*r2t_offset = cmd->write_data_done;
+			*r2t_length = (cmd->seq_end_offset -
+					cmd->write_data_done);
+			return 0;
+		}
+
+		*r2t_offset = cmd->seq_start_offset;
+		*r2t_length = (cmd->seq_end_offset - cmd->seq_start_offset);
+
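+		/*
+		 * Back out every PDU already received within this sequence
+		 * from the burst and write_data_done counters, and mark it
+		 * not received so the entire sequence is retransmitted.
+		 */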
+		for (i = 0; i < cmd->pdu_count; i++) {
+			pdu = &cmd->pdu_list[i];
+
+			if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+				continue;
+
+			if ((pdu->offset >= cmd->seq_start_offset) &&
+			   ((pdu->offset + pdu->length) <=
+			     cmd->seq_end_offset)) {
+				if (!cmd->unsolicited_data)
+					cmd->next_burst_len -= pdu->length;
+				else
+					cmd->first_burst_len -= pdu->length;
+
+				cmd->write_data_done -= pdu->length;
+				pdu->status = ISCSI_PDU_NOT_RECEIVED;
+			}
+		}
+	} else {
+		struct iscsi_seq *seq = NULL;
+
+		seq = iscsi_get_seq_holder(cmd, pdu_offset, pdu_length);
+		if (!(seq))
+			return -1;
+
+		*r2t_offset = seq->orig_offset;
+		*r2t_length = seq->xfer_len;
+
+		cmd->write_data_done -= (seq->offset - seq->orig_offset);
+		if (cmd->immediate_data)
+			cmd->first_burst_len = cmd->write_data_done;
+
+		seq->data_sn = 0;
+		seq->offset = seq->orig_offset;
+		seq->next_burst_len = 0;
+		seq->status = DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY;
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder)
+			return 0;
+
+		for (i = 0; i < seq->pdu_count; i++) {
+			pdu = &cmd->pdu_list[i+seq->pdu_start];
+
+			if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+				continue;
+
+			pdu->status = ISCSI_PDU_NOT_RECEIVED;
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_recover_dataout_crc_sequence():
+ *
+ *
+ */
+int iscsi_recover_dataout_sequence(
+	struct iscsi_cmd *cmd,
+	u32 pdu_offset,
+	u32 pdu_length)
+{
+	u32 r2t_length = 0, r2t_offset = 0;
+
+	spin_lock_bh(&cmd->istate_lock);
+	cmd->cmd_flags |= ICF_WITHIN_COMMAND_RECOVERY;
+	spin_unlock_bh(&cmd->istate_lock);
+
+	if (iscsi_recalculate_dataout_values(cmd, pdu_offset, pdu_length,
+			&r2t_offset, &r2t_length) < 0)
+		return DATAOUT_CANNOT_RECOVER;
+
+	iscsi_send_recovery_r2t(cmd, r2t_offset, r2t_length);
+
+	return DATAOUT_WITHIN_COMMAND_RECOVERY;
+}
+
+/*	iscsi_allocate_ooo_cmdsn():
+ *
+ *
+ */
+static inline struct iscsi_ooo_cmdsn *iscsi_allocate_ooo_cmdsn(void)
+{
+	struct iscsi_ooo_cmdsn *ooo_cmdsn = NULL;
+
+	ooo_cmdsn = kmem_cache_zalloc(lio_ooo_cache, GFP_ATOMIC);
+	if (!(ooo_cmdsn)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_ooo_cmdsn.\n");
+		return NULL;
+	}
+	INIT_LIST_HEAD(&ooo_cmdsn->ooo_list);
+
+	return ooo_cmdsn;
+}
+
+/*	iscsi_attach_ooo_cmdsn():
+ *
+ *	Called with sess->cmdsn_lock held.
+ */
+static inline int iscsi_attach_ooo_cmdsn(
+	struct iscsi_session *sess,
+	struct iscsi_ooo_cmdsn *ooo_cmdsn)
+{
+	struct iscsi_ooo_cmdsn *ooo_tail, *ooo_tmp;
+	/*
+	 * We attach the struct iscsi_ooo_cmdsn entry to the out of order
+	 * list in increasing CmdSN order.
+	 * This allows iscsi_execute_ooo_cmdsns() to detect any
+	 * additional CmdSN holes while performing delayed execution.
+	 */
+	if (list_empty(&sess->sess_ooo_cmdsn_list))
+		list_add_tail(&ooo_cmdsn->ooo_list,
+				&sess->sess_ooo_cmdsn_list);
+	else {
+		ooo_tail = list_entry(sess->sess_ooo_cmdsn_list.prev,
+				typeof(*ooo_tail), ooo_list);
+		/*
+		 * CmdSN is greater than the tail of the list.
+		 */
+		if (ooo_tail->cmdsn < ooo_cmdsn->cmdsn)
+			list_add_tail(&ooo_cmdsn->ooo_list,
+					&sess->sess_ooo_cmdsn_list);
+		else {
+			/*
+			 * CmdSN is either lower than the head, or somewhere
+			 * in the middle.  Insert the new entry before the
+			 * first entry with a larger CmdSN to keep the list
+			 * ordered.
+			 */
+			list_for_each_entry(ooo_tmp, &sess->sess_ooo_cmdsn_list,
+						ooo_list) {
+				if (ooo_tmp->cmdsn < ooo_cmdsn->cmdsn)
+					continue;
+
+				list_add(&ooo_cmdsn->ooo_list,
+					ooo_tmp->ooo_list.prev);
+				break;
+			}
+		}
+	}
+	sess->ooo_cmdsn_count++;
+
+	TRACE(TRACE_CMDSN, "Set out of order CmdSN count for SID:"
+		" %u to %hu.\n", sess->sid, sess->ooo_cmdsn_count);
+
+	return 0;
+}
+
+/*	iscsi_remove_ooo_cmdsn()
+ *
+ *	Removes a struct iscsi_ooo_cmdsn from a session's list,
+ *	called with struct iscsi_session->cmdsn_lock held.
+ */
+void iscsi_remove_ooo_cmdsn(
+	struct iscsi_session *sess,
+	struct iscsi_ooo_cmdsn *ooo_cmdsn)
+{
+	list_del(&ooo_cmdsn->ooo_list);
+	kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
+}
+
+/*	iscsi_clear_ooo_cmdsns_for_conn():
+ *
+ *
+ */
+void iscsi_clear_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_ooo_cmdsn *ooo_cmdsn;
+	struct iscsi_session *sess = SESS(conn);
+
+	spin_lock(&sess->cmdsn_lock);
+	list_for_each_entry(ooo_cmdsn, &sess->sess_ooo_cmdsn_list, ooo_list) {
+		if (ooo_cmdsn->cid != conn->cid)
+			continue;
+
+		ooo_cmdsn->cmd = NULL;
+	}
+	spin_unlock(&sess->cmdsn_lock);
+}
+
+/*	iscsi_execute_ooo_cmdsns():
+ *
+ *	Called with sess->cmdsn_lock held.
+ */
+int iscsi_execute_ooo_cmdsns(struct iscsi_session *sess)
+{
+	int ooo_count = 0;
+	struct iscsi_cmd *cmd = NULL;
+	struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
+
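+	/*
+	 * The out of order list is kept in increasing CmdSN order, so
+	 * release every entry whose CmdSN matches ExpCmdSN, advancing
+	 * ExpCmdSN as each hole is filled.
+	 */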
+	list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
+				&sess->sess_ooo_cmdsn_list, ooo_list) {
+		if (ooo_cmdsn->cmdsn != sess->exp_cmd_sn)
+			continue;
+
+		if (!ooo_cmdsn->cmd) {
+			sess->exp_cmd_sn++;
+			iscsi_remove_ooo_cmdsn(sess, ooo_cmdsn);
+			continue;
+		}
+
+		cmd = ooo_cmdsn->cmd;
+		cmd->i_state = cmd->deferred_i_state;
+		ooo_count++;
+		sess->exp_cmd_sn++;
+		TRACE(TRACE_CMDSN, "Executing out of order CmdSN: 0x%08x,"
+			" incremented ExpCmdSN to 0x%08x.\n",
+			cmd->cmd_sn, sess->exp_cmd_sn);
+
+		iscsi_remove_ooo_cmdsn(sess, ooo_cmdsn);
+
+		if (iscsi_execute_cmd(cmd, 1) < 0)
+			return -1;
+
+		continue;
+	}
+
+	return ooo_count;
+}
+
+/*	iscsi_execute_cmd():
+ *
+ *	Called either:
+ *
+ *	1. With sess->cmdsn_lock held from iscsi_execute_ooo_cmdsns()
+ *	or iscsi_check_received_cmdsn().
+ *	2. With no locks held directly from iscsi_handle_XXX_pdu() functions
+ *	for immediate commands.
+ */
+int iscsi_execute_cmd(struct iscsi_cmd *cmd, int ooo)
+{
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+	int lr = 0;
+
+	spin_lock_bh(&cmd->istate_lock);
+	if (ooo)
+		cmd->cmd_flags &= ~ICF_OOO_CMDSN;
+
+	switch (cmd->iscsi_opcode) {
+	case ISCSI_OP_SCSI_CMD:
+		/*
+		 * Go ahead and send the CHECK_CONDITION status for
+		 * any SCSI CDB exceptions that may have occurred, also
+		 * handle the SCF_SCSI_RESERVATION_CONFLICT case here as well.
+		 */
+		if (se_cmd->se_cmd_flags & SCF_SCSI_CDB_EXCEPTION) {
+			if (se_cmd->se_cmd_flags &
+					SCF_SCSI_RESERVATION_CONFLICT) {
+				cmd->i_state = ISTATE_SEND_STATUS;
+				spin_unlock_bh(&cmd->istate_lock);
+				iscsi_add_cmd_to_response_queue(cmd, CONN(cmd),
+						cmd->i_state);
+				return 0;
+			}
+			spin_unlock_bh(&cmd->istate_lock);
+			/*
+			 * Determine if delayed TASK_ABORTED status for WRITEs
+			 * should be sent now if no unsolicited data out
+			 * payloads are expected, or if the delayed status
+			 * should be sent after unsolicited data out with
+			 * ISCSI_FLAG_CMD_FINAL set in iscsi_handle_data_out()
+			 */
+			if (transport_check_aborted_status(se_cmd,
+					(cmd->unsolicited_data == 0)) != 0)
+				return 0;
+			/*
+			 * Otherwise send CHECK_CONDITION and sense for
+			 * exception
+			 */
+			return transport_send_check_condition_and_sense(se_cmd,
+					se_cmd->scsi_sense_reason, 0);
+		}
+		/*
+		 * Special case for delayed CmdSN with Immediate
+		 * Data and/or Unsolicited Data Out attached.
+		 */
+		if (cmd->immediate_data) {
+			if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT) {
+				spin_unlock_bh(&cmd->istate_lock);
+				return transport_generic_handle_data(
+						&cmd->se_cmd);
+			}
+			spin_unlock_bh(&cmd->istate_lock);
+
+			if (!(cmd->cmd_flags &
+					ICF_NON_IMMEDIATE_UNSOLICITED_DATA)) {
+				/*
+				 * Send the delayed TASK_ABORTED status for
+				 * WRITEs if no more unsolicited data is
+				 * expected.
+				 */
+				if (transport_check_aborted_status(se_cmd, 1)
+						!= 0)
+					return 0;
+
+				iscsi_set_dataout_sequence_values(cmd);
+				iscsi_build_r2ts_for_cmd(cmd, CONN(cmd), 0);
+			}
+			return 0;
+		}
+		/*
+		 * The default handler.
+		 */
+		spin_unlock_bh(&cmd->istate_lock);
+
+		if ((cmd->data_direction == DMA_TO_DEVICE) &&
+		    !(cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA)) {
+			/*
+			 * Send the delayed TASK_ABORTED status for WRITEs if
+			 * no more unsolicited data is expected.
+			 */
+			if (transport_check_aborted_status(se_cmd, 1) != 0)
+				return 0;
+
+			iscsi_set_dataout_sequence_values(cmd);
+			spin_lock_bh(&cmd->dataout_timeout_lock);
+			iscsi_start_dataout_timer(cmd, CONN(cmd));
+			spin_unlock_bh(&cmd->dataout_timeout_lock);
+		}
+		return transport_generic_handle_cdb(&cmd->se_cmd);
+
+	case ISCSI_OP_NOOP_OUT:
+	case ISCSI_OP_TEXT:
+		spin_unlock_bh(&cmd->istate_lock);
+		iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+		break;
+	case ISCSI_OP_SCSI_TMFUNC:
+		if (se_cmd->se_cmd_flags & SCF_SCSI_CDB_EXCEPTION) {
+			spin_unlock_bh(&cmd->istate_lock);
+			iscsi_add_cmd_to_response_queue(cmd, CONN(cmd),
+					cmd->i_state);
+			return 0;
+		}
+		spin_unlock_bh(&cmd->istate_lock);
+
+		return transport_generic_handle_tmr(SE_CMD(cmd));
+	case ISCSI_OP_LOGOUT:
+		spin_unlock_bh(&cmd->istate_lock);
+		switch (cmd->logout_reason) {
+		case ISCSI_LOGOUT_REASON_CLOSE_SESSION:
+			lr = iscsi_logout_closesession(cmd, CONN(cmd));
+			break;
+		case ISCSI_LOGOUT_REASON_CLOSE_CONNECTION:
+			lr = iscsi_logout_closeconnection(cmd, CONN(cmd));
+			break;
+		case ISCSI_LOGOUT_REASON_RECOVERY:
+			lr = iscsi_logout_removeconnforrecovery(cmd, CONN(cmd));
+			break;
+		default:
+			printk(KERN_ERR "Unknown iSCSI Logout Request Code:"
+				" 0x%02x\n", cmd->logout_reason);
+			return -1;
+		}
+
+		return lr;
+	default:
+		spin_unlock_bh(&cmd->istate_lock);
+		printk(KERN_ERR "Cannot perform out of order execution for"
+		" unknown iSCSI Opcode: 0x%02x\n", cmd->iscsi_opcode);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_free_all_ooo_cmdsns():
+ *
+ *
+ */
+void iscsi_free_all_ooo_cmdsns(struct iscsi_session *sess)
+{
+	struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
+
+	spin_lock(&sess->cmdsn_lock);
+	list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
+			&sess->sess_ooo_cmdsn_list, ooo_list) {
+
+		list_del(&ooo_cmdsn->ooo_list);
+		kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
+	}
+	spin_unlock(&sess->cmdsn_lock);
+}
+
+/*	iscsi_handle_ooo_cmdsn():
+ *
+ *
+ */
+int iscsi_handle_ooo_cmdsn(
+	struct iscsi_session *sess,
+	struct iscsi_cmd *cmd,
+	u32 cmdsn)
+{
+	int batch = 0;
+	struct iscsi_ooo_cmdsn *ooo_cmdsn = NULL, *ooo_tail = NULL;
+
+	sess->cmdsn_outoforder = 1;
+
+	cmd->deferred_i_state		= cmd->i_state;
+	cmd->i_state			= ISTATE_DEFERRED_CMD;
+	cmd->cmd_flags			|= ICF_OOO_CMDSN;
+
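+	/*
+	 * A new batch begins when the out of order list is empty or this
+	 * CmdSN is not contiguous with the CmdSN at the tail of the list.
+	 */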
+	if (list_empty(&sess->sess_ooo_cmdsn_list))
+		batch = 1;
+	else {
+		ooo_tail = list_entry(sess->sess_ooo_cmdsn_list.prev,
+				typeof(*ooo_tail), ooo_list);
+		if (ooo_tail->cmdsn != (cmdsn - 1))
+			batch = 1;
+	}
+
+	ooo_cmdsn = iscsi_allocate_ooo_cmdsn();
+	if (!(ooo_cmdsn))
+		return CMDSN_ERROR_CANNOT_RECOVER;
+
+	ooo_cmdsn->cmd			= cmd;
+	ooo_cmdsn->batch_count		= (batch) ?
+					  (cmdsn - sess->exp_cmd_sn) : 1;
+	ooo_cmdsn->cid			= CONN(cmd)->cid;
+	ooo_cmdsn->exp_cmdsn		= sess->exp_cmd_sn;
+	ooo_cmdsn->cmdsn		= cmdsn;
+
+	if (iscsi_attach_ooo_cmdsn(sess, ooo_cmdsn) < 0) {
+		kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
+		return CMDSN_ERROR_CANNOT_RECOVER;
+	}
+
+	return CMDSN_HIGHER_THAN_EXP;
+}
+
+/*	 iscsi_set_dataout_timeout_values():
+ *
+ *
+ */
+static int iscsi_set_dataout_timeout_values(
+	struct iscsi_cmd *cmd,
+	u32 *offset,
+	u32 *length)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	int found_r2t = 0;
+	struct iscsi_r2t *r2t;
+
+	if (cmd->unsolicited_data) {
+		*offset = 0;
+		*length = (SESS_OPS_C(conn)->FirstBurstLength >
+			   cmd->data_length) ?
+			   cmd->data_length :
+			   SESS_OPS_C(conn)->FirstBurstLength;
+		return 0;
+	}
+
+	spin_lock_bh(&cmd->r2t_lock);
+	if (list_empty(&cmd->cmd_r2t_list)) {
+		printk(KERN_ERR "cmd->cmd_r2t_list is empty!\n");
+		spin_unlock_bh(&cmd->r2t_lock);
+		return -1;
+	}
+
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if (r2t->sent_r2t && !r2t->recovery_r2t && !r2t->seq_complete) {
+			found_r2t = 1;
+			break;
+		}
+	}
+
+	if (!found_r2t) {
+		printk(KERN_ERR "Unable to locate any incomplete DataOUT"
+			" sequences for ITT: 0x%08x.\n", cmd->init_task_tag);
+		spin_unlock_bh(&cmd->r2t_lock);
+		return -1;
+	}
+
+	*offset = r2t->offset;
+	*length = r2t->xfer_len;
+
+	spin_unlock_bh(&cmd->r2t_lock);
+	return 0;
+}
+
+/*	iscsi_handle_dataout_timeout():
+ *
+ *	NOTE: Called from interrupt (timer) context.
+ */
+static void iscsi_handle_dataout_timeout(unsigned long data)
+{
+	u32 pdu_length = 0, pdu_offset = 0;
+	u32 r2t_length = 0, r2t_offset = 0;
+	struct iscsi_cmd *cmd = (struct iscsi_cmd *) data;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_session *sess = NULL;
+	struct iscsi_node_attrib *na;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	if (cmd->dataout_timer_flags & DATAOUT_TF_STOP) {
+		spin_unlock_bh(&cmd->dataout_timeout_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+	cmd->dataout_timer_flags &= ~DATAOUT_TF_RUNNING;
+	sess = SESS(conn);
+	na = iscsi_tpg_get_node_attrib(sess);
+
+	if (!SESS_OPS(sess)->ErrorRecoveryLevel) {
+		TRACE(TRACE_ERL0, "Unable to recover from DataOut timeout while"
+			" in ERL=0.\n");
+		goto failure;
+	}
+
+	if (++cmd->dataout_timeout_retries == na->dataout_timeout_retries) {
+		TRACE(TRACE_TIMER, "Command ITT: 0x%08x exceeded max retries"
+			" for DataOUT timeout %u, closing iSCSI connection.\n",
+			cmd->init_task_tag, na->dataout_timeout_retries);
+		goto failure;
+	}
+
+	cmd->cmd_flags |= ICF_WITHIN_COMMAND_RECOVERY;
+
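+	/*
+	 * Work out the offset and length of the DataOUT sequence that
+	 * timed out so a recovery R2T covering it can be resent.
+	 */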
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			pdu_offset = cmd->write_data_done;
+			if ((pdu_offset + (SESS_OPS_C(conn)->MaxBurstLength -
+			     cmd->next_burst_len)) > cmd->data_length)
+				pdu_length = (cmd->data_length -
+					cmd->write_data_done);
+			else
+				pdu_length = (SESS_OPS_C(conn)->MaxBurstLength -
+						cmd->next_burst_len);
+		} else {
+			pdu_offset = cmd->seq_start_offset;
+			pdu_length = (cmd->seq_end_offset -
+				cmd->seq_start_offset);
+		}
+	} else {
+		if (iscsi_set_dataout_timeout_values(cmd, &pdu_offset,
+				&pdu_length) < 0)
+			goto failure;
+	}
+
+	if (iscsi_recalculate_dataout_values(cmd, pdu_offset, pdu_length,
+			&r2t_offset, &r2t_length) < 0)
+		goto failure;
+
+	TRACE(TRACE_TIMER, "Command ITT: 0x%08x timed out waiting for"
+		" completion of %sDataOUT Sequence Offset: %u, Length: %u\n",
+		cmd->init_task_tag, (cmd->unsolicited_data) ? "Unsolicited " :
+		"", r2t_offset, r2t_length);
+
+	if (iscsi_send_recovery_r2t(cmd, r2t_offset, r2t_length) < 0)
+		goto failure;
+
+	iscsi_start_dataout_timer(cmd, conn);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+	iscsi_dec_conn_usage_count(conn);
+
+	return;
+
+failure:
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+	iscsi_cause_connection_reinstatement(conn, 0);
+	iscsi_dec_conn_usage_count(conn);
+
+	return;
+}
+
+/*	iscsi_mod_dataout_timer():
+ *
+ *
+ */
+void iscsi_mod_dataout_timer(struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	if (!(cmd->dataout_timer_flags & DATAOUT_TF_RUNNING)) {
+		spin_unlock_bh(&cmd->dataout_timeout_lock);
+		return;
+	}
+
+	MOD_TIMER(&cmd->dataout_timer, na->dataout_timeout);
+	TRACE(TRACE_TIMER, "Updated DataOUT timer for ITT: 0x%08x",
+			cmd->init_task_tag);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+}
+
+/*	iscsi_start_dataout_timer():
+ *
+ *	Called with cmd->dataout_timeout_lock held.
+ */
+void iscsi_start_dataout_timer(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	if (cmd->dataout_timer_flags & DATAOUT_TF_RUNNING)
+		return;
+
+	TRACE(TRACE_TIMER, "Starting DataOUT timer for ITT: 0x%08x on"
+		" CID: %hu.\n", cmd->init_task_tag, conn->cid);
+
+	init_timer(&cmd->dataout_timer);
+	SETUP_TIMER(cmd->dataout_timer, na->dataout_timeout, cmd,
+			iscsi_handle_dataout_timeout);
+	cmd->dataout_timer_flags &= ~DATAOUT_TF_STOP;
+	cmd->dataout_timer_flags |= DATAOUT_TF_RUNNING;
+	add_timer(&cmd->dataout_timer);
+}
+
+/*	iscsi_stop_dataout_timer():
+ *
+ *
+ */
+void iscsi_stop_dataout_timer(struct iscsi_cmd *cmd)
+{
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	if (!(cmd->dataout_timer_flags & DATAOUT_TF_RUNNING)) {
+		spin_unlock_bh(&cmd->dataout_timeout_lock);
+		return;
+	}
+	cmd->dataout_timer_flags |= DATAOUT_TF_STOP;
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+
+	del_timer_sync(&cmd->dataout_timer);
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	cmd->dataout_timer_flags &= ~DATAOUT_TF_RUNNING;
+	TRACE(TRACE_TIMER, "Stopped DataOUT Timer for ITT: 0x%08x\n",
+			cmd->init_task_tag);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_erl1.h b/drivers/target/iscsi/iscsi_target_erl1.h
new file mode 100644
index 0000000..e764ec2
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl1.h
@@ -0,0 +1,35 @@
+#ifndef ISCSI_TARGET_ERL1_H
+#define ISCSI_TARGET_ERL1_H
+
+extern int iscsi_dump_data_payload(struct iscsi_conn *, __u32, int);
+extern int iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+			struct iscsi_cmd *, struct iscsi_datain_req *);
+extern int iscsi_create_recovery_datain_values_datasequenceinorder_no(
+			struct iscsi_cmd *, struct iscsi_datain_req *);
+extern int iscsi_handle_recovery_datain_or_r2t(struct iscsi_conn *, unsigned char *,
+			__u32, __u32, __u32, __u32);
+extern int iscsi_handle_status_snack(struct iscsi_conn *, __u32, __u32,
+			__u32, __u32);
+extern int iscsi_handle_data_ack(struct iscsi_conn *, __u32, __u32, __u32);
+extern int iscsi_dataout_datapduinorder_no_fbit(struct iscsi_cmd *, struct iscsi_pdu *);
+extern int iscsi_recover_dataout_sequence(struct iscsi_cmd *, __u32, __u32);
+extern void iscsi_clear_ooo_cmdsns_for_conn(struct iscsi_conn *);
+extern void iscsi_free_all_ooo_cmdsns(struct iscsi_session *);
+extern int iscsi_execute_ooo_cmdsns(struct iscsi_session *);
+extern int iscsi_execute_cmd(struct iscsi_cmd *, int);
+extern int iscsi_handle_ooo_cmdsn(struct iscsi_session *, struct iscsi_cmd *, __u32);
+extern void iscsi_remove_ooo_cmdsn(struct iscsi_session *, struct iscsi_ooo_cmdsn *);
+extern void iscsi_mod_dataout_timer(struct iscsi_cmd *);
+extern void iscsi_start_dataout_timer(struct iscsi_cmd *, struct iscsi_conn *);
+extern void iscsi_stop_dataout_timer(struct iscsi_cmd *);
+
+extern struct kmem_cache *lio_ooo_cache;
+
+extern int iscsi_add_reject_from_cmd(u8, int, int, unsigned char *,
+			struct iscsi_cmd *);
+extern int iscsi_build_r2ts_for_cmd(struct iscsi_cmd *, struct iscsi_conn *, int);
+extern int iscsi_logout_closesession(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_logout_closeconnection(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_logout_removeconnforrecovery(struct iscsi_cmd *, struct iscsi_conn *);
+
+#endif /* ISCSI_TARGET_ERL1_H */
diff --git a/drivers/target/iscsi/iscsi_target_erl2.c b/drivers/target/iscsi/iscsi_target_erl2.c
new file mode 100644
index 0000000..2e61514
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl2.c
@@ -0,0 +1,535 @@
+/*******************************************************************************
+ * This file contains error recovery level two functions used by
+ * the iSCSI Target driver.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+
+/*	iscsi_create_conn_recovery_datain_values():
+ *
+ *	FIXME: Does RData SNACK apply here as well?
+ */
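+/*
+ *	Worked example (illustrative values only): with
+ *	MaxRecvDataSegmentLength=8192 and MaxBurstLength=65536, each burst
+ *	carries 8 DataIN PDUs.  For exp_data_sn=10 the loop below walks
+ *	DataSN 0-9: the first 8 PDUs complete one burst (read_data_done=65536,
+ *	next_burst_len reset to 0), and the remaining 2 PDUs leave
+ *	read_data_done=81920 and next_burst_len=16384 for the reassigned
+ *	connection to resume from.
+ */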
+void iscsi_create_conn_recovery_datain_values(
+	struct iscsi_cmd *cmd,
+	u32 exp_data_sn)
+{
+	u32 data_sn = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->next_burst_len = 0;
+	cmd->read_data_done = 0;
+
+	while (exp_data_sn > data_sn) {
+		if ((cmd->next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			cmd->read_data_done +=
+			       CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			cmd->next_burst_len +=
+			       CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		} else {
+			cmd->read_data_done +=
+				(SESS_OPS_C(conn)->MaxBurstLength -
+				cmd->next_burst_len);
+			cmd->next_burst_len = 0;
+		}
+		data_sn++;
+	}
+}
+
+/*	iscsi_create_conn_recovery_dataout_values():
+ *
+ *
+ */
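+/*
+ *	Illustrative example (values assumed for clarity): the loop below
+ *	rounds write_data_done down to a MaxBurstLength boundary so the
+ *	reassigned connection restarts at the beginning of the interrupted
+ *	DataOUT burst, e.g. write_data_done=150000 with MaxBurstLength=65536
+ *	becomes 131072, while data_sn and next_burst_len start over from zero.
+ */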
+void iscsi_create_conn_recovery_dataout_values(
+	struct iscsi_cmd *cmd)
+{
+	u32 write_data_done = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->data_sn = 0;
+	cmd->next_burst_len = 0;
+
+	while (cmd->write_data_done > write_data_done) {
+		if ((write_data_done + SESS_OPS_C(conn)->MaxBurstLength) <=
+		     cmd->write_data_done)
+			write_data_done += SESS_OPS_C(conn)->MaxBurstLength;
+		else
+			break;
+	}
+
+	cmd->write_data_done = write_data_done;
+}
+
+/*	iscsi_attach_active_connection_recovery_entry():
+ *
+ *
+ */
+static int iscsi_attach_active_connection_recovery_entry(
+	struct iscsi_session *sess,
+	struct iscsi_conn_recovery *cr)
+{
+	spin_lock(&sess->cr_a_lock);
+	list_add_tail(&cr->cr_list, &sess->cr_active_list);
+	spin_unlock(&sess->cr_a_lock);
+
+	return 0;
+}
+
+/*	iscsi_attach_inactive_connection_recovery():
+ *
+ *
+ */
+static int iscsi_attach_inactive_connection_recovery_entry(
+	struct iscsi_session *sess,
+	struct iscsi_conn_recovery *cr)
+{
+	spin_lock(&sess->cr_i_lock);
+	list_add_tail(&cr->cr_list, &sess->cr_inactive_list);
+
+	sess->conn_recovery_count++;
+	TRACE(TRACE_ERL2, "Incremented connection recovery count to %u for"
+		" SID: %u\n", sess->conn_recovery_count, sess->sid);
+	spin_unlock(&sess->cr_i_lock);
+
+	return 0;
+}
+
+/*	iscsi_get_inactive_connection_recovery_entry():
+ *
+ *
+ */
+struct iscsi_conn_recovery *iscsi_get_inactive_connection_recovery_entry(
+	struct iscsi_session *sess,
+	u16 cid)
+{
+	struct iscsi_conn_recovery *cr;
+
+	spin_lock(&sess->cr_i_lock);
+	list_for_each_entry(cr, &sess->cr_inactive_list, cr_list) {
+		if (cr->cid == cid) {
+			spin_unlock(&sess->cr_i_lock);
+			return cr;
+		}
+	}
+	spin_unlock(&sess->cr_i_lock);
+
+	return NULL;
+}
+
+/*	iscsi_free_connection_recovery_entires():
+ *
+ *
+ */
+void iscsi_free_connection_recovery_entires(struct iscsi_session *sess)
+{
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_conn_recovery *cr, *cr_tmp;
+
+	spin_lock(&sess->cr_a_lock);
+	list_for_each_entry_safe(cr, cr_tmp, &sess->cr_active_list, cr_list) {
+		list_del(&cr->cr_list);
+		spin_unlock(&sess->cr_a_lock);
+
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry_safe(cmd, cmd_tmp,
+				&cr->conn_recovery_cmd_list, i_list) {
+
+			list_del(&cmd->i_list);
+			cmd->conn = NULL;
+			spin_unlock(&cr->conn_recovery_cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, sess);
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 1);
+			spin_lock(&cr->conn_recovery_cmd_lock);
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		spin_lock(&sess->cr_a_lock);
+
+		kfree(cr);
+	}
+	spin_unlock(&sess->cr_a_lock);
+
+	spin_lock(&sess->cr_i_lock);
+	list_for_each_entry_safe(cr, cr_tmp, &sess->cr_inactive_list, cr_list) {
+		list_del(&cr->cr_list);
+		spin_unlock(&sess->cr_i_lock);
+
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry_safe(cmd, cmd_tmp,
+				&cr->conn_recovery_cmd_list, i_list) {
+
+			list_del(&cmd->i_list);
+			cmd->conn = NULL;
+			spin_unlock(&cr->conn_recovery_cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, sess);
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 1);
+			spin_lock(&cr->conn_recovery_cmd_lock);
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		spin_lock(&sess->cr_i_lock);
+
+		kfree(cr);
+	}
+	spin_unlock(&sess->cr_i_lock);
+}
+
+/*	iscsi_remove_active_connection_recovery_entry():
+ *
+ *
+ */
+int iscsi_remove_active_connection_recovery_entry(
+	struct iscsi_conn_recovery *cr,
+	struct iscsi_session *sess)
+{
+	spin_lock(&sess->cr_a_lock);
+	list_del(&cr->cr_list);
+
+	sess->conn_recovery_count--;
+	TRACE(TRACE_ERL2, "Decremented connection recovery count to %u for"
+		" SID: %u\n", sess->conn_recovery_count, sess->sid);
+	spin_unlock(&sess->cr_a_lock);
+
+	kfree(cr);
+
+	return 0;
+}
+
+/*	iscsi_remove_inactive_connection_recovery_entry():
+ *
+ *
+ */
+int iscsi_remove_inactive_connection_recovery_entry(
+	struct iscsi_conn_recovery *cr,
+	struct iscsi_session *sess)
+{
+	spin_lock(&sess->cr_i_lock);
+	list_del(&cr->cr_list);
+	spin_unlock(&sess->cr_i_lock);
+
+	return 0;
+}
+
+/*	iscsi_remove_cmd_from_connection_recovery():
+ *
+ *	Called with cr->conn_recovery_cmd_lock held.
+ */
+int iscsi_remove_cmd_from_connection_recovery(
+	struct iscsi_cmd *cmd,
+	struct iscsi_session *sess)
+{
+	struct iscsi_conn_recovery *cr;
+
+	if (!cmd->cr) {
+		printk(KERN_ERR "struct iscsi_conn_recovery pointer for ITT: 0x%08x"
+			" is NULL!\n", cmd->init_task_tag);
+		BUG();
+	}
+	cr = cmd->cr;
+
+	list_del(&cmd->i_list);
+	return --cr->cmd_count;
+}
+
+/*	iscsi_discard_cr_cmds_by_expstatsn():
+ *
+ *
+ */
+void iscsi_discard_cr_cmds_by_expstatsn(
+	struct iscsi_conn_recovery *cr,
+	u32 exp_statsn)
+{
+	u32 dropped_count = 0;
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_session *sess = cr->sess;
+
+	spin_lock(&cr->conn_recovery_cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp,
+			&cr->conn_recovery_cmd_list, i_list) {
+
+		if (((cmd->deferred_i_state != ISTATE_SENT_STATUS) &&
+		     (cmd->deferred_i_state != ISTATE_REMOVE)) ||
+		     (cmd->stat_sn >= exp_statsn)) {
+			continue;
+		}
+
+		dropped_count++;
+		TRACE(TRACE_ERL2, "Dropping Acknowledged ITT: 0x%08x, StatSN:"
+			" 0x%08x, CID: %hu.\n", cmd->init_task_tag,
+				cmd->stat_sn, cr->cid);
+
+		iscsi_remove_cmd_from_connection_recovery(cmd, sess);
+
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		if (!(SE_CMD(cmd)) ||
+		    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+		    !(SE_CMD(cmd)->transport_wait_for_tasks))
+			__iscsi_release_cmd_to_pool(cmd, sess);
+		else
+			SE_CMD(cmd)->transport_wait_for_tasks(
+					SE_CMD(cmd), 1, 0);
+		spin_lock(&cr->conn_recovery_cmd_lock);
+	}
+	spin_unlock(&cr->conn_recovery_cmd_lock);
+
+	TRACE(TRACE_ERL2, "Dropped %u total acknowledged commands on"
+		" CID: %hu less than old ExpStatSN: 0x%08x\n",
+			dropped_count, cr->cid, exp_statsn);
+
+	if (!cr->cmd_count) {
+		TRACE(TRACE_ERL2, "No commands to be reassigned for failed"
+			" connection CID: %hu on SID: %u\n",
+			cr->cid, sess->sid);
+		iscsi_remove_inactive_connection_recovery_entry(cr, sess);
+		iscsi_attach_active_connection_recovery_entry(sess, cr);
+		printk(KERN_INFO "iSCSI connection recovery successful for CID:"
+			" %hu on SID: %u\n", cr->cid, sess->sid);
+		iscsi_remove_active_connection_recovery_entry(cr, sess);
+	} else {
+		iscsi_remove_inactive_connection_recovery_entry(cr, sess);
+		iscsi_attach_active_connection_recovery_entry(sess, cr);
+	}
+}
+
+/*	iscsi_discard_unacknowledged_ooo_cmdsns_for_conn():
+ *
+ *
+ */
+int iscsi_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
+{
+	u32 dropped_count = 0;
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
+	struct iscsi_session *sess = SESS(conn);
+
+	spin_lock(&sess->cmdsn_lock);
+	list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
+			&sess->sess_ooo_cmdsn_list, ooo_list) {
+
+		if (ooo_cmdsn->cid != conn->cid)
+			continue;
+
+		dropped_count++;
+		TRACE(TRACE_ERL2, "Dropping unacknowledged CmdSN:"
+		" 0x%08x during connection recovery on CID: %hu\n",
+			ooo_cmdsn->cmdsn, conn->cid);
+		iscsi_remove_ooo_cmdsn(sess, ooo_cmdsn);
+	}
+	SESS(conn)->ooo_cmdsn_count -= dropped_count;
+	spin_unlock(&sess->cmdsn_lock);
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
+		if (!(cmd->cmd_flags & ICF_OOO_CMDSN))
+			continue;
+
+		iscsi_remove_cmd_from_conn_list(cmd, conn);
+
+		spin_unlock_bh(&conn->cmd_lock);
+		if (!(SE_CMD(cmd)) ||
+		    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+		    !(SE_CMD(cmd)->transport_wait_for_tasks))
+			__iscsi_release_cmd_to_pool(cmd, sess);
+		else
+			SE_CMD(cmd)->transport_wait_for_tasks(
+					SE_CMD(cmd), 1, 1);
+		spin_lock_bh(&conn->cmd_lock);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	TRACE(TRACE_ERL2, "Dropped %u total unacknowledged commands on CID:"
+		" %hu for ExpCmdSN: 0x%08x.\n", dropped_count, conn->cid,
+				sess->exp_cmd_sn);
+	return 0;
+}
+
+/*	iscsi_prepare_cmds_for_realligance():
+ *
+ *
+ */
+int iscsi_prepare_cmds_for_realligance(struct iscsi_conn *conn)
+{
+	u32 cmd_count = 0;
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_conn_recovery *cr;
+
+	/*
+	 * Allocate a struct iscsi_conn_recovery for this connection.
+	 * Each struct iscsi_cmd contains a struct iscsi_conn_recovery pointer
+	 * (struct iscsi_cmd->cr), so we need to allocate this before preparing
+	 * the connection's command list for connection recovery.
+	 */
+	cr = kzalloc(sizeof(struct iscsi_conn_recovery), GFP_KERNEL);
+	if (!(cr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_conn_recovery.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&cr->cr_list);
+	INIT_LIST_HEAD(&cr->conn_recovery_cmd_list);
+	spin_lock_init(&cr->conn_recovery_cmd_lock);
+	/*
+	 * Only perform connection recovery on ISCSI_OP_SCSI_CMD or
+	 * ISCSI_OP_NOOP_OUT opcodes.  For all other opcodes call
+	 * iscsi_remove_cmd_from_conn_list() to release the command to the
+	 * session pool and remove it from the connection's list.
+	 *
+	 * Also stop the DataOUT timer, which will be restarted after
+	 * sending the TMR response.
+	 */
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
+
+		if ((cmd->iscsi_opcode != ISCSI_OP_SCSI_CMD) &&
+		    (cmd->iscsi_opcode != ISCSI_OP_NOOP_OUT)) {
+			TRACE(TRACE_ERL2, "Not performing realligence on"
+				" Opcode: 0x%02x, ITT: 0x%08x, CmdSN: 0x%08x,"
+				" CID: %hu\n", cmd->iscsi_opcode,
+				cmd->init_task_tag, cmd->cmd_sn, conn->cid);
+
+			iscsi_remove_cmd_from_conn_list(cmd, conn);
+
+			spin_unlock_bh(&conn->cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 0);
+			spin_lock_bh(&conn->cmd_lock);
+			continue;
+		}
+
+		/*
+		 * Special case where commands greater than or equal to
+		 * the session's ExpCmdSN are attached to the connection
+		 * list but not to the out of order CmdSN list.  The
+		 * obvious case is a command with immediate data attached,
+		 * whose CmdSN is only checked against ExpCmdSN after the
+		 * data has been received.  The case below covers the
+		 * connection failing before that data is received, but it
+		 * may also apply to other PDUs, so it has been made
+		 * generic here.
+		 */
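+		/*
+		 * e.g. (hypothetical numbers): ExpCmdSN is 0x10 and a WRITE
+		 * with immediate data arrived with CmdSN 0x12, but the
+		 * connection dropped before its payload was read; the command
+		 * is neither acknowledged nor on the OOO CmdSN list, so it is
+		 * simply released here rather than queued for connection
+		 * recovery.
+		 */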
+		if (!(cmd->cmd_flags & ICF_OOO_CMDSN) && !cmd->immediate_cmd &&
+		     (cmd->cmd_sn >= SESS(conn)->exp_cmd_sn)) {
+			iscsi_remove_cmd_from_conn_list(cmd, conn);
+
+			spin_unlock_bh(&conn->cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 1);
+			spin_lock_bh(&conn->cmd_lock);
+			continue;
+		}
+
+		cmd_count++;
+		TRACE(TRACE_ERL2, "Preparing Opcode: 0x%02x, ITT: 0x%08x,"
+			" CmdSN: 0x%08x, StatSN: 0x%08x, CID: %hu for"
+			" realligence.\n", cmd->iscsi_opcode,
+			cmd->init_task_tag, cmd->cmd_sn, cmd->stat_sn,
+			conn->cid);
+
+		cmd->deferred_i_state = cmd->i_state;
+		cmd->i_state = ISTATE_IN_CONNECTION_RECOVERY;
+
+		if (cmd->data_direction == DMA_TO_DEVICE)
+			iscsi_stop_dataout_timer(cmd);
+
+		cmd->sess = SESS(conn);
+
+		iscsi_remove_cmd_from_conn_list(cmd, conn);
+		spin_unlock_bh(&conn->cmd_lock);
+
+		iscsi_free_all_datain_reqs(cmd);
+
+		if ((SE_CMD(cmd)) &&
+		    (SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) &&
+		     SE_CMD(cmd)->transport_wait_for_tasks)
+			SE_CMD(cmd)->transport_wait_for_tasks(SE_CMD(cmd),
+					0, 0);
+		/*
+		 * Add the struct iscsi_cmd to the connection recovery cmd list
+		 */
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_add_tail(&cmd->i_list, &cr->conn_recovery_cmd_list);
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+
+		spin_lock_bh(&conn->cmd_lock);
+		cmd->cr = cr;
+		cmd->conn = NULL;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	/*
+	 * Fill in the various values in the preallocated struct iscsi_conn_recovery.
+	 */
+	cr->cid = conn->cid;
+	cr->cmd_count = cmd_count;
+	cr->maxrecvdatasegmentlength = CONN_OPS(conn)->MaxRecvDataSegmentLength;
+	cr->sess = SESS(conn);
+
+	iscsi_attach_inactive_connection_recovery_entry(SESS(conn), cr);
+
+	return 0;
+}
+
+/*	iscsi_connection_recovery_transport_reset():
+ *
+ *
+ */
+int iscsi_connection_recovery_transport_reset(struct iscsi_conn *conn)
+{
+	atomic_set(&conn->connection_recovery, 1);
+
+	if (iscsi_close_connection(conn) < 0)
+		return -1;
+
+	return 0;
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_erl2.h b/drivers/target/iscsi/iscsi_target_erl2.h
new file mode 100644
index 0000000..0da7d3c
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl2.h
@@ -0,0 +1,21 @@
+#ifndef ISCSI_TARGET_ERL2_H
+#define ISCSI_TARGET_ERL2_H
+
+extern void iscsi_create_conn_recovery_datain_values(struct iscsi_cmd *, __u32);
+extern void iscsi_create_conn_recovery_dataout_values(struct iscsi_cmd *);
+extern struct iscsi_conn_recovery *iscsi_get_inactive_connection_recovery_entry(
+			struct iscsi_session *, __u16);
+extern void iscsi_free_connection_recovery_entires(struct iscsi_session *);
+extern int iscsi_remove_active_connection_recovery_entry(
+			struct iscsi_conn_recovery *, struct iscsi_session *);
+extern int iscsi_remove_cmd_from_connection_recovery(struct iscsi_cmd *,
+			struct iscsi_session *);
+extern void iscsi_discard_cr_cmds_by_expstatsn(struct iscsi_conn_recovery *, __u32);
+extern int iscsi_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *);
+extern int iscsi_prepare_cmds_for_realligance(struct iscsi_conn *);
+extern int iscsi_connection_recovery_transport_reset(struct iscsi_conn *);
+
+extern int iscsi_close_connection(struct iscsi_conn *);
+
+#endif /*** ISCSI_TARGET_ERL2_H ***/
+
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 09/12] iscsi-target: Add iSCSI Error Recovery Hierarchy support
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds RFC-3720 compatible ErrorRecoveryLevel support as
defined in Section 6.1.5 (Error Recovery Hierarchy).

This includes support for iSCSI session reinstatement, within-command
and within-connection recovery, and explicit/implicit connection
recovery (CSM-E and CSM-I) per the state machines in Section 7 of
RFC-3720.

These functions are called from iscsi_target.c to handle processing
based on the negotiated session-wide ErrorRecoveryLevel parameter.

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_erl0.c | 1086 +++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_erl0.h |   19 +
 drivers/target/iscsi/iscsi_target_erl1.c | 1382 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_erl1.h |   35 +
 drivers/target/iscsi/iscsi_target_erl2.c |  535 ++++++++++++
 drivers/target/iscsi/iscsi_target_erl2.h |   21 +
 6 files changed, 3078 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_erl0.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl0.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl1.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl1.h
 create mode 100644 drivers/target/iscsi/iscsi_target_erl2.c
 create mode 100644 drivers/target/iscsi/iscsi_target_erl2.h

diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
new file mode 100644
index 0000000..57e9442
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl0.c
@@ -0,0 +1,1086 @@
+/******************************************************************************
+ * This file contains error recovery level zero functions used by
+ * the iSCSI Target driver.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ * 
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_thread_queue.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+
+/*	iscsi_set_dataout_sequence_values():
+ *
+ *	Used to set values in struct iscsi_cmd that iscsi_dataout_check_sequence()
+ *	checks against to determine whether a PDU's Offset+Length falls within
+ *	the current DataOUT Sequence.  Used for DataSequenceInOrder=Yes only.
+ */
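+/*
+ *	Example (illustrative values only): for a 1 MB WRITE with
+ *	MaxBurstLength=262144 and no unsolicited data, successive calls set
+ *	the sequence window to [0,262144), [262144,524288), [524288,786432)
+ *	and [786432,1048576); DataOUT PDUs whose Offset+Length fall outside
+ *	the current window are dumped or treated as recovery candidates by
+ *	iscsi_dataout_check_sequence().
+ */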
+void iscsi_set_dataout_sequence_values(
+	struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	/*
+	 * Still set seq_start_offset and seq_end_offset for Unsolicited
+	 * DataOUT, even if DataSequenceInOrder=No.
+	 */
+	if (cmd->unsolicited_data) {
+		cmd->seq_start_offset = cmd->write_data_done;
+		cmd->seq_end_offset = (cmd->write_data_done +
+			(cmd->data_length >
+			 SESS_OPS_C(conn)->FirstBurstLength) ?
+			SESS_OPS_C(conn)->FirstBurstLength : cmd->data_length);
+		return;
+	}
+
+	if (!SESS_OPS_C(conn)->DataSequenceInOrder)
+		return;
+
+	if (!cmd->seq_start_offset && !cmd->seq_end_offset) {
+		cmd->seq_start_offset = cmd->write_data_done;
+		cmd->seq_end_offset = (cmd->data_length >
+			SESS_OPS_C(conn)->MaxBurstLength) ?
+			(cmd->write_data_done +
+			SESS_OPS_C(conn)->MaxBurstLength) : cmd->data_length;
+	} else {
+		cmd->seq_start_offset = cmd->seq_end_offset;
+		cmd->seq_end_offset = ((cmd->seq_end_offset +
+			SESS_OPS_C(conn)->MaxBurstLength) >=
+			cmd->data_length) ? cmd->data_length :
+			(cmd->seq_end_offset +
+			 SESS_OPS_C(conn)->MaxBurstLength);
+	}
+}
+
+/*	iscsi_dataout_within_command_recovery_check():
+ *
+ *
+ */
+static inline int iscsi_dataout_within_command_recovery_check(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * We do the within-command recovery checks here as it is
+	 * the first function called in iscsi_check_pre_dataout().
+	 * Basically, if we are in within-command recovery and
+	 * the PDU does not contain the offset the sequence needs,
+	 * dump the payload.
+	 *
+	 * This only applies to DataPDUInOrder=Yes, for
+	 * DataPDUInOrder=No we only re-request the failed PDU
+	 * and check that all PDUs in a sequence are received
+	 * upon end of sequence.
+	 */
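+	/*
+	 * Example (made-up offsets): while recovering a sequence that needs
+	 * offset 65536, a stale DataOUT PDU at offset 32768 is simply drained
+	 * via iscsi_dump_data_payload() rather than failing the connection.
+	 */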
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		if ((cmd->cmd_flags & ICF_WITHIN_COMMAND_RECOVERY) &&
+		    (cmd->write_data_done != hdr->offset))
+			goto dump;
+
+		cmd->cmd_flags &= ~ICF_WITHIN_COMMAND_RECOVERY;
+	} else {
+		struct iscsi_seq *seq;
+
+		seq = iscsi_get_seq_holder(cmd, hdr->offset, payload_length);
+		if (!(seq))
+			return DATAOUT_CANNOT_RECOVER;
+		/*
+		 * Set the struct iscsi_seq pointer to reuse later.
+		 */
+		cmd->seq_ptr = seq;
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			if ((seq->status ==
+			     DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY) &&
+			   ((seq->offset != hdr->offset) ||
+			    (seq->data_sn != hdr->datasn)))
+				goto dump;
+		} else {
+			if ((seq->status ==
+			     DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY) &&
+			    (seq->data_sn != hdr->datasn))
+				goto dump;
+		}
+
+		if (seq->status == DATAOUT_SEQUENCE_COMPLETE)
+			goto dump;
+
+		if (seq->status != DATAOUT_SEQUENCE_COMPLETE)
+			seq->status = 0;
+	}
+
+	return DATAOUT_NORMAL;
+
+dump:
+	printk(KERN_ERR "Dumping DataOUT PDU Offset: %u Length: %d DataSN:"
+		" 0x%08x\n", hdr->offset, payload_length, hdr->datasn);
+	return iscsi_dump_data_payload(conn, payload_length, 1);
+}
+
+/*	iscsi_dataout_check_unsolicited_sequence():
+ *
+ *
+ */
+static inline int iscsi_dataout_check_unsolicited_sequence(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	__u32 first_burst_len;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+
+	if ((hdr->offset < cmd->seq_start_offset) ||
+	   ((hdr->offset + payload_length) > cmd->seq_end_offset)) {
+		printk(KERN_ERR "Command ITT: 0x%08x with Offset: %u,"
+		" Length: %u outside of Unsolicited Sequence %u:%u while"
+		" DataSequenceInOrder=Yes.\n", cmd->init_task_tag,
+		hdr->offset, payload_length, cmd->seq_start_offset,
+			cmd->seq_end_offset);
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	first_burst_len = (cmd->first_burst_len + payload_length);
+
+	if (first_burst_len > SESS_OPS_C(conn)->FirstBurstLength) {
+		printk(KERN_ERR "Total %u bytes exceeds FirstBurstLength: %u"
+			" for this Unsolicited DataOut Burst.\n",
+			first_burst_len, SESS_OPS_C(conn)->FirstBurstLength);
+		transport_send_check_condition_and_sense(SE_CMD(cmd),
+				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	/*
+	 * Perform various MaxBurstLength and ISCSI_FLAG_CMD_FINAL sanity
+	 * checks for the current Unsolicited DataOUT Sequence.
+	 */
+	if (hdr->flags & ISCSI_FLAG_CMD_FINAL) {
+		/*
+		 * Ignore ISCSI_FLAG_CMD_FINAL checks while DataPDUInOrder=No, end of
+		 * sequence checks are handled in
+		 * iscsi_dataout_datapduinorder_no_fbit().
+		 */
+		if (!SESS_OPS_C(conn)->DataPDUInOrder)
+			goto out;
+
+		if ((first_burst_len != cmd->data_length) &&
+		    (first_burst_len != SESS_OPS_C(conn)->FirstBurstLength)) {
+			printk(KERN_ERR "Unsolicited non-immediate data"
+			" received %u does not equal FirstBurstLength: %u, and"
+			" does not equal ExpXferLen %u.\n", first_burst_len,
+				SESS_OPS_C(conn)->FirstBurstLength,
+				cmd->data_length);
+			transport_send_check_condition_and_sense(SE_CMD(cmd),
+					TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+	} else {
+		if (first_burst_len == SESS_OPS_C(conn)->FirstBurstLength) {
+			printk(KERN_ERR "Command ITT: 0x%08x reached"
+			" FirstBurstLength: %u, but ISCSI_FLAG_CMD_FINAL is not set. protocol"
+				" error.\n", cmd->init_task_tag,
+				SESS_OPS_C(conn)->FirstBurstLength);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+		if (first_burst_len == cmd->data_length) {
+			printk(KERN_ERR "Command ITT: 0x%08x reached"
+			" ExpXferLen: %u, but ISCSI_FLAG_CMD_FINAL is not set. protocol"
+			" error.\n", cmd->init_task_tag, cmd->data_length);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+	}
+
+out:
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_check_sequence():
+ *
+ *
+ */
+static inline int iscsi_dataout_check_sequence(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	__u32 next_burst_len;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_seq *seq = NULL;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * For DataSequenceInOrder=Yes: Check that the offset and offset+length
+	 * is within range as defined by iscsi_set_dataout_sequence_values().
+	 *
+	 * For DataSequenceInOrder=No: Check that an struct iscsi_seq exists for
+	 * offset+length tuple.
+	 */
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		/*
+		 * Due to the possibility of recovery DataOUT sent by the
+		 * initiator fulfilling a Recovery R2T, it's best to just dump
+		 * the payload here instead of erroring out.
+		 */
+		if ((hdr->offset < cmd->seq_start_offset) ||
+		   ((hdr->offset + payload_length) > cmd->seq_end_offset)) {
+			printk(KERN_ERR "Command ITT: 0x%08x with Offset: %u,"
+			" Length: %u outside of Sequence %u:%u while"
+			" DataSequenceInOrder=Yes.\n", cmd->init_task_tag,
+			hdr->offset, payload_length, cmd->seq_start_offset,
+				cmd->seq_end_offset);
+
+			if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+			return DATAOUT_WITHIN_COMMAND_RECOVERY;
+		}
+
+		next_burst_len = (cmd->next_burst_len + payload_length);
+	} else {
+		seq = iscsi_get_seq_holder(cmd, hdr->offset, payload_length);
+		if (!(seq))
+			return DATAOUT_CANNOT_RECOVER;
+		/*
+		 * Set the struct iscsi_seq pointer to reuse later.
+		 */
+		cmd->seq_ptr = seq;
+
+		if (seq->status == DATAOUT_SEQUENCE_COMPLETE) {
+			if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+			return DATAOUT_WITHIN_COMMAND_RECOVERY;
+		}
+
+		next_burst_len = (seq->next_burst_len + payload_length);
+	}
+
+	if (next_burst_len > SESS_OPS_C(conn)->MaxBurstLength) {
+		printk(KERN_ERR "Command ITT: 0x%08x, NextBurstLength: %u and"
+			" Length: %u exceeds MaxBurstLength: %u. protocol"
+			" error.\n", cmd->init_task_tag,
+			(next_burst_len - payload_length),
+			payload_length, SESS_OPS_C(conn)->MaxBurstLength);
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	/*
+	 * Perform various MaxBurstLength and ISCSI_FLAG_CMD_FINAL sanity
+	 * checks for the current DataOUT Sequence.
+	 */
+	if (hdr->flags & ISCSI_FLAG_CMD_FINAL) {
+		/*
+		 * Ignore ISCSI_FLAG_CMD_FINAL checks while DataPDUInOrder=No, end of
+		 * sequence checks are handled in
+		 * iscsi_dataout_datapduinorder_no_fbit().
+		 */
+		if (!SESS_OPS_C(conn)->DataPDUInOrder)
+			goto out;
+
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if ((next_burst_len <
+			     SESS_OPS_C(conn)->MaxBurstLength) &&
+			   ((cmd->write_data_done + payload_length) <
+			     cmd->data_length)) {
+				printk(KERN_ERR "Command ITT: 0x%08x set ISCSI_FLAG_CMD_FINAL"
+				" before end of DataOUT sequence, protocol"
+				" error.\n", cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		} else {
+			if (next_burst_len < seq->xfer_len) {
+				printk(KERN_ERR "Command ITT: 0x%08x set ISCSI_FLAG_CMD_FINAL"
+				" before end of DataOUT sequence, protocol"
+				" error.\n", cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		}
+	} else {
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if (next_burst_len ==
+					SESS_OPS_C(conn)->MaxBurstLength) {
+				printk(KERN_ERR "Command ITT: 0x%08x reached"
+				" MaxBurstLength: %u, but ISCSI_FLAG_CMD_FINAL is"
+				" not set, protocol error.", cmd->init_task_tag,
+					SESS_OPS_C(conn)->MaxBurstLength);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+			if ((cmd->write_data_done + payload_length) ==
+					cmd->data_length) {
+				printk(KERN_ERR "Command ITT: 0x%08x reached"
+				" last DataOUT PDU in sequence but ISCSI_FLAG_"
+				"CMD_FINAL is not set, protocol error.\n",
+					cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		} else {
+			if (next_burst_len == seq->xfer_len) {
+				printk(KERN_ERR "Command ITT: 0x%08x reached"
+				" last DataOUT PDU in sequence but ISCSI_FLAG_"
+				"CMD_FINAL is not set, protocol error.\n",
+					cmd->init_task_tag);
+				return DATAOUT_CANNOT_RECOVER;
+			}
+		}
+	}
+
+out:
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_check_datasn():
+ *
+ *	Called from:	iscsi_check_pre_dataout()
+ */
+static inline int iscsi_dataout_check_datasn(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int dump = 0, recovery = 0;
+	__u32 data_sn = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * Considering the target has no method of re-requesting DataOUT
+	 * by DataSN, if we receive a greater DataSN than expected we
+	 * assume the functions for DataPDUInOrder=[Yes,No] below will
+	 * handle it.
+	 *
+	 * If the DataSN is less than expected, dump the payload.
+	 */
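+	/*
+	 * For example (assumed numbers): expecting DataSN 3, a PDU arriving
+	 * with DataSN 5 takes the recovery path below (failing the connection
+	 * if ERL=0), while a PDU with DataSN 2 is an already-received
+	 * retransmission and its payload is simply discarded.
+	 */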
+	if (SESS_OPS_C(conn)->DataSequenceInOrder)
+		data_sn = cmd->data_sn;
+	else {
+		struct iscsi_seq *seq = cmd->seq_ptr;
+		data_sn = seq->data_sn;
+	}
+
+	if (hdr->datasn > data_sn) {
+		printk(KERN_ERR "Command ITT: 0x%08x, received DataSN: 0x%08x"
+			" higher than expected 0x%08x.\n", cmd->init_task_tag,
+				hdr->datasn, data_sn);
+		recovery = 1;
+		goto recover;
+	} else if (hdr->datasn < data_sn) {
+		printk(KERN_ERR "Command ITT: 0x%08x, received DataSN: 0x%08x"
+			" lower than expected 0x%08x, discarding payload.\n",
+			cmd->init_task_tag, hdr->datasn, data_sn);
+		dump = 1;
+		goto dump;
+	}
+
+	return DATAOUT_NORMAL;
+
+recover:
+	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+		printk(KERN_ERR "Unable to perform within-command recovery"
+				" while ERL=0.\n");
+		return DATAOUT_CANNOT_RECOVER;
+	}
+dump:
+	if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+		return DATAOUT_CANNOT_RECOVER;
+
+	return (recovery || dump) ? DATAOUT_WITHIN_COMMAND_RECOVERY :
+				DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_pre_datapduinorder_yes():
+ *
+ *
+ */
+static inline int iscsi_dataout_pre_datapduinorder_yes(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int dump = 0, recovery = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	/*
+	 * For DataSequenceInOrder=Yes: If the offset is greater than the global
+	 * DataPDUInOrder=Yes offset counter in struct iscsi_cmd, a protocol error
+	 * has occurred, so fail the connection.
+	 *
+	 * For DataSequenceInOrder=No: If the offset is greater than the per
+	 * sequence DataPDUInOrder=Yes offset counter in struct iscsi_seq, a
+	 * protocol error has occurred, so fail the connection.
+	 */
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		if (hdr->offset != cmd->write_data_done) {
+			printk(KERN_ERR "Command ITT: 0x%08x, received offset"
+			" %u different than expected %u.\n", cmd->init_task_tag,
+				hdr->offset, cmd->write_data_done);
+			recovery = 1;
+			goto recover;
+		}
+	} else {
+		struct iscsi_seq *seq = cmd->seq_ptr;
+
+		if (hdr->offset > seq->offset) {
+			printk(KERN_ERR "Command ITT: 0x%08x, received offset"
+			" %u greater than expected %u.\n", cmd->init_task_tag,
+				hdr->offset, seq->offset);
+			recovery = 1;
+			goto recover;
+		} else if (hdr->offset < seq->offset) {
+			printk(KERN_ERR "Command ITT: 0x%08x, received offset"
+			" %u less than expected %u, discarding payload.\n",
+				cmd->init_task_tag, hdr->offset, seq->offset);
+			dump = 1;
+			goto dump;
+		}
+	}
+
+	return DATAOUT_NORMAL;
+
+recover:
+	if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+		printk(KERN_ERR "Unable to perform within-command recovery"
+				" while ERL=0.\n");
+		return DATAOUT_CANNOT_RECOVER;
+	}
+dump:
+	if (iscsi_dump_data_payload(conn, payload_length, 1) < 0)
+		return DATAOUT_CANNOT_RECOVER;
+
+	return (recovery) ? iscsi_recover_dataout_sequence(cmd,
+		hdr->offset, payload_length) :
+	       (dump) ? DATAOUT_WITHIN_COMMAND_RECOVERY : DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_pre_datapduinorder_no():
+ *
+ *	Called from:	iscsi_check_pre_dataout()
+ */
+static inline int iscsi_dataout_pre_datapduinorder_no(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_pdu *pdu;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	pdu = iscsi_get_pdu_holder(cmd, hdr->offset, payload_length);
+	if (!(pdu))
+		return DATAOUT_CANNOT_RECOVER;
+
+	cmd->pdu_ptr = pdu;
+
+	switch (pdu->status) {
+	case ISCSI_PDU_NOT_RECEIVED:
+	case ISCSI_PDU_CRC_FAILED:
+	case ISCSI_PDU_TIMED_OUT:
+		break;
+	case ISCSI_PDU_RECEIVED_OK:
+		printk(KERN_ERR "Command ITT: 0x%08x received already gotten"
+			" Offset: %u, Length: %u\n", cmd->init_task_tag,
+				hdr->offset, payload_length);
+		return iscsi_dump_data_payload(CONN(cmd), payload_length, 1);
+	default:
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_update_r2t():
+ *
+ *
+ */
+static int iscsi_dataout_update_r2t(struct iscsi_cmd *cmd, u32 offset, u32 length)
+{
+	struct iscsi_r2t *r2t;
+
+	if (cmd->unsolicited_data)
+		return 0;
+
+	r2t = iscsi_get_r2t_for_eos(cmd, offset, length);
+	if (!(r2t))
+		return -1;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	r2t->seq_complete = 1;
+	cmd->outstanding_r2ts--;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+/*	iscsi_dataout_update_datapduinorder_no():
+ *
+ *
+ */
+static int iscsi_dataout_update_datapduinorder_no(
+	struct iscsi_cmd *cmd,
+	u32 data_sn,
+	int f_bit)
+{
+	int ret = 0;
+	struct iscsi_pdu *pdu = cmd->pdu_ptr;
+
+	pdu->data_sn = data_sn;
+
+	switch (pdu->status) {
+	case ISCSI_PDU_NOT_RECEIVED:
+		pdu->status = ISCSI_PDU_RECEIVED_OK;
+		break;
+	case ISCSI_PDU_CRC_FAILED:
+		pdu->status = ISCSI_PDU_RECEIVED_OK;
+		break;
+	case ISCSI_PDU_TIMED_OUT:
+		pdu->status = ISCSI_PDU_RECEIVED_OK;
+		break;
+	default:
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+	if (f_bit) {
+		ret = iscsi_dataout_datapduinorder_no_fbit(cmd, pdu);
+		if (ret == DATAOUT_CANNOT_RECOVER)
+			return ret;
+	}
+
+	return DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_post_crc_passed():
+ *
+ *	Called from:	iscsi_check_post_dataout()
+ */
+static inline int iscsi_dataout_post_crc_passed(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int ret, send_r2t = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_seq *seq = NULL;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	if (cmd->unsolicited_data) {
+		if ((cmd->first_burst_len + payload_length) ==
+		     SESS_OPS_C(conn)->FirstBurstLength) {
+			if (iscsi_dataout_update_r2t(cmd, hdr->offset,
+					payload_length) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+			send_r2t = 1;
+		}
+
+		if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+			ret = iscsi_dataout_update_datapduinorder_no(cmd,
+				hdr->datasn, (hdr->flags & ISCSI_FLAG_CMD_FINAL));
+			if (ret == DATAOUT_CANNOT_RECOVER)
+				return ret;
+		}
+
+		cmd->first_burst_len += payload_length;
+
+		if (SESS_OPS_C(conn)->DataSequenceInOrder)
+			cmd->data_sn++;
+		else {
+			seq = cmd->seq_ptr;
+			seq->data_sn++;
+			seq->offset += payload_length;
+		}
+
+		if (send_r2t) {
+			if (seq)
+				seq->status = DATAOUT_SEQUENCE_COMPLETE;
+			cmd->first_burst_len = 0;
+			cmd->unsolicited_data = 0;
+		}
+	} else {
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if ((cmd->next_burst_len + payload_length) ==
+			     SESS_OPS_C(conn)->MaxBurstLength) {
+				if (iscsi_dataout_update_r2t(cmd, hdr->offset,
+						payload_length) < 0)
+					return DATAOUT_CANNOT_RECOVER;
+				send_r2t = 1;
+			}
+
+			if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+				ret = iscsi_dataout_update_datapduinorder_no(
+						cmd, hdr->datasn,
+						(hdr->flags & ISCSI_FLAG_CMD_FINAL));
+				if (ret == DATAOUT_CANNOT_RECOVER)
+					return ret;
+			}
+
+			cmd->next_burst_len += payload_length;
+			cmd->data_sn++;
+
+			if (send_r2t)
+				cmd->next_burst_len = 0;
+		} else {
+			seq = cmd->seq_ptr;
+
+			if ((seq->next_burst_len + payload_length) ==
+			     seq->xfer_len) {
+				if (iscsi_dataout_update_r2t(cmd, hdr->offset,
+						payload_length) < 0)
+					return DATAOUT_CANNOT_RECOVER;
+				send_r2t = 1;
+			}
+
+			if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+				ret = iscsi_dataout_update_datapduinorder_no(
+						cmd, hdr->datasn,
+						(hdr->flags & ISCSI_FLAG_CMD_FINAL));
+				if (ret == DATAOUT_CANNOT_RECOVER)
+					return ret;
+			}
+
+			seq->data_sn++;
+			seq->offset += payload_length;
+			seq->next_burst_len += payload_length;
+
+			if (send_r2t) {
+				seq->next_burst_len = 0;
+				seq->status = DATAOUT_SEQUENCE_COMPLETE;
+			}
+		}
+	}
+
+	if (send_r2t && SESS_OPS_C(conn)->DataSequenceInOrder)
+		cmd->data_sn = 0;
+
+	cmd->write_data_done += payload_length;
+
+	return (cmd->write_data_done == cmd->data_length) ?
+		DATAOUT_SEND_TO_TRANSPORT : (send_r2t) ?
+		DATAOUT_SEND_R2T : DATAOUT_NORMAL;
+}
+
+/*	iscsi_dataout_post_crc_failed():
+ *
+ *	Called from:	iscsi_check_post_dataout()
+ */
+static inline int iscsi_dataout_post_crc_failed(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu;
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	if (SESS_OPS_C(conn)->DataPDUInOrder)
+		goto recover;
+
+	/*
+	 * The rest of this function is only called when DataPDUInOrder=No.
+	 */
+	pdu = cmd->pdu_ptr;
+
+	switch (pdu->status) {
+	case ISCSI_PDU_NOT_RECEIVED:
+		pdu->status = ISCSI_PDU_CRC_FAILED;
+		break;
+	case ISCSI_PDU_CRC_FAILED:
+		break;
+	case ISCSI_PDU_TIMED_OUT:
+		pdu->status = ISCSI_PDU_CRC_FAILED;
+		break;
+	default:
+		return DATAOUT_CANNOT_RECOVER;
+	}
+
+recover:
+	return iscsi_recover_dataout_sequence(cmd, hdr->offset, payload_length);
+}
+
+/*	iscsi_check_pre_dataout():
+ *
+ *	Called from iscsi_handle_data_out() before DataOUT Payload is received
+ *	and CRC computed.
+ */
+int iscsi_check_pre_dataout(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	int ret;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	ret = iscsi_dataout_within_command_recovery_check(cmd, buf);
+	if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+	    (ret == DATAOUT_CANNOT_RECOVER))
+		return ret;
+
+	ret = iscsi_dataout_check_datasn(cmd, buf);
+	if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+	    (ret == DATAOUT_CANNOT_RECOVER))
+		return ret;
+
+	if (cmd->unsolicited_data) {
+		ret = iscsi_dataout_check_unsolicited_sequence(cmd, buf);
+		if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+		    (ret == DATAOUT_CANNOT_RECOVER))
+			return ret;
+	} else {
+		ret = iscsi_dataout_check_sequence(cmd, buf);
+		if ((ret == DATAOUT_WITHIN_COMMAND_RECOVERY) ||
+		    (ret == DATAOUT_CANNOT_RECOVER))
+			return ret;
+	}
+
+	return (SESS_OPS_C(conn)->DataPDUInOrder) ?
+		iscsi_dataout_pre_datapduinorder_yes(cmd, buf) :
+		iscsi_dataout_pre_datapduinorder_no(cmd, buf);
+}
+
+/*	iscsi_check_post_dataout():
+ *
+ *	Called from iscsi_handle_data_out() after DataOUT Payload is received
+ *	and CRC computed.
+ */
+int iscsi_check_post_dataout(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	__u8 data_crc_failed)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->dataout_timeout_retries = 0;
+
+	if (!data_crc_failed)
+		return iscsi_dataout_post_crc_passed(cmd, buf);
+	else {
+		if (!SESS_OPS_C(conn)->ErrorRecoveryLevel) {
+			printk(KERN_ERR "Unable to recover from DataOUT CRC"
+				" failure while ERL=0, closing session.\n");
+			iscsi_add_reject_from_cmd(ISCSI_REASON_DATA_DIGEST_ERROR,
+					1, 0, buf, cmd);
+			return DATAOUT_CANNOT_RECOVER;
+		}
+
+		iscsi_add_reject_from_cmd(ISCSI_REASON_DATA_DIGEST_ERROR,
+				0, 0, buf, cmd);
+		return iscsi_dataout_post_crc_failed(cmd, buf);
+	}
+}
+
+/*	iscsi_handle_time2retain_timeout():
+ *
+ *
+ */
+static void iscsi_handle_time2retain_timeout(unsigned long data)
+{
+	struct iscsi_session *sess = (struct iscsi_session *) data;
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+
+	spin_lock_bh(&se_tpg->session_lock);
+	if (sess->time2retain_timer_flags & T2R_TF_STOP) {
+		spin_unlock_bh(&se_tpg->session_lock);
+		return;
+	}
+	if (atomic_read(&sess->session_reinstatement)) {
+		printk(KERN_ERR "Exiting Time2Retain handler because"
+				" session_reinstatement=1\n");
+		spin_unlock_bh(&se_tpg->session_lock);
+		return;
+	}
+	sess->time2retain_timer_flags |= T2R_TF_EXPIRED;
+
+	printk(KERN_ERR "Time2Retain timer expired for SID: %u, cleaning up"
+			" iSCSI session.\n", sess->sid);
+	{
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	if (tiqn) {
+		spin_lock(&tiqn->sess_err_stats.lock);
+		strcpy(tiqn->sess_err_stats.last_sess_fail_rem_name,
+			(void *)SESS_OPS(sess)->InitiatorName);
+		tiqn->sess_err_stats.last_sess_failure_type =
+				ISCSI_SESS_ERR_CXN_TIMEOUT;
+		tiqn->sess_err_stats.cxn_timeout_errors++;
+		sess->conn_timeout_errors++;
+		spin_unlock(&tiqn->sess_err_stats.lock);
+	}
+	}
+
+	spin_unlock_bh(&se_tpg->session_lock);
+	iscsi_close_session(sess);
+}
+
+/*	iscsi_start_time2retain_handler():
+ *
+ *
+ */
+void iscsi_start_time2retain_handler(struct iscsi_session *sess)
+{
+	int tpg_active;
+
+	/*
+	 * Only start the Time2Retain timer when the associated TPG is still in
+	 * an ACTIVE (i.e. not disabled or shutdown) state.
+	 */
+	spin_lock(&ISCSI_TPG_S(sess)->tpg_state_lock);
+	tpg_active = (ISCSI_TPG_S(sess)->tpg_state == TPG_STATE_ACTIVE);
+	spin_unlock(&ISCSI_TPG_S(sess)->tpg_state_lock);
+
+	if (!(tpg_active))
+		return;
+
+	if (sess->time2retain_timer_flags & T2R_TF_RUNNING)
+		return;
+
+	TRACE(TRACE_TIMER, "Starting Time2Retain timer for %u seconds on"
+		" SID: %u\n", SESS_OPS(sess)->DefaultTime2Retain, sess->sid);
+
+	init_timer(&sess->time2retain_timer);
+	SETUP_TIMER(sess->time2retain_timer, SESS_OPS(sess)->DefaultTime2Retain,
+			sess, iscsi_handle_time2retain_timeout);
+	sess->time2retain_timer_flags &= ~T2R_TF_STOP;
+	sess->time2retain_timer_flags |= T2R_TF_RUNNING;
+	add_timer(&sess->time2retain_timer);
+}
+
+/*	iscsi_stop_time2retain_timer():
+ *
+ *	Called with spin_lock_bh(&struct se_portal_group->session_lock) held
+ */
+int iscsi_stop_time2retain_timer(struct iscsi_session *sess)
+{
+	struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+
+	if (sess->time2retain_timer_flags & T2R_TF_EXPIRED)
+		return -1;
+
+	if (!(sess->time2retain_timer_flags & T2R_TF_RUNNING))
+		return 0;
+
+	sess->time2retain_timer_flags |= T2R_TF_STOP;
+	spin_unlock_bh(&se_tpg->session_lock);
+
+	del_timer_sync(&sess->time2retain_timer);
+
+	spin_lock_bh(&se_tpg->session_lock);
+	sess->time2retain_timer_flags &= ~T2R_TF_RUNNING;
+	TRACE(TRACE_TIMER, "Stopped Time2Retain Timer for SID: %u\n",
+			sess->sid);
+	return 0;
+}
+
+/*	iscsi_connection_reinstatement_rcfr():
+ *
+ *
+ */
+void iscsi_connection_reinstatement_rcfr(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->connection_exit)) {
+		spin_unlock_bh(&conn->state_lock);
+		goto sleep;
+	}
+
+	if (atomic_read(&conn->transport_failed)) {
+		spin_unlock_bh(&conn->state_lock);
+		goto sleep;
+	}
+	spin_unlock_bh(&conn->state_lock);
+
+	iscsi_thread_set_force_reinstatement(conn);
+
+sleep:
+	down(&conn->conn_wait_rcfr_sem);
+	up(&conn->conn_post_wait_sem);
+}
+
+/*	iscsi_cause_connection_reinstatement():
+ *
+ *
+ */
+void iscsi_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
+{
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->connection_exit)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	if (atomic_read(&conn->transport_failed)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	if (atomic_read(&conn->connection_reinstatement)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	if (iscsi_thread_set_force_reinstatement(conn) < 0) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	atomic_set(&conn->connection_reinstatement, 1);
+	if (!sleep) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	atomic_set(&conn->sleep_on_conn_wait_sem, 1);
+	spin_unlock_bh(&conn->state_lock);
+
+	down(&conn->conn_wait_sem);
+	up(&conn->conn_post_wait_sem);
+}
+
+/*	iscsi_fall_back_to_erl0():
+ *
+ *
+ */
+void iscsi_fall_back_to_erl0(struct iscsi_session *sess)
+{
+	TRACE(TRACE_ERL0, "Falling back to ErrorRecoveryLevel=0 for SID:"
+			" %u\n", sess->sid);
+
+	atomic_set(&sess->session_fall_back_to_erl0, 1);
+}
+
+/*	iscsi_handle_connection_cleanup():
+ *
+ *
+ */
+static void iscsi_handle_connection_cleanup(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+
+	if ((SESS_OPS(sess)->ErrorRecoveryLevel == 2) &&
+	    !atomic_read(&sess->session_reinstatement) &&
+	    !atomic_read(&sess->session_fall_back_to_erl0))
+		iscsi_connection_recovery_transport_reset(conn);
+	else {
+		TRACE(TRACE_ERL0, "Performing cleanup for failed iSCSI"
+			" Connection ID: %hu from %s\n", conn->cid,
+			SESS_OPS(sess)->InitiatorName);
+		iscsi_close_connection(conn);
+	}
+}
+
+/*	iscsi_take_action_for_connection_exit():
+ *
+ *
+ */
+void iscsi_take_action_for_connection_exit(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->state_lock);
+	if (atomic_read(&conn->connection_exit)) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+	atomic_set(&conn->connection_exit, 1);
+
+	if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT) {
+		spin_unlock_bh(&conn->state_lock);
+		iscsi_close_connection(conn);
+		return;
+	}
+
+	if (conn->conn_state == TARG_CONN_STATE_CLEANUP_WAIT) {
+		spin_unlock_bh(&conn->state_lock);
+		return;
+	}
+
+	TRACE(TRACE_STATE, "Moving to TARG_CONN_STATE_CLEANUP_WAIT.\n");
+	conn->conn_state = TARG_CONN_STATE_CLEANUP_WAIT;
+	spin_unlock_bh(&conn->state_lock);
+
+	iscsi_handle_connection_cleanup(conn);
+}
+
+/*	iscsi_recover_from_unknown_opcode():
+ *
+ *	This is the simple function that makes the magic of
+ *	sync and steering happen in the following (somewhat paradoxical) order:
+ *
+ *	0) Receive conn->of_marker (bytes left until next OFMarker)
+ *	   bytes into an offload buffer.  When we pass the exact number
+ *	   of bytes in conn->of_marker, iscsi_dump_data_payload() and hence
+ *	   rx_data() will automatically receive the identical __u32 marker
+ *	   values and store the offset in conn->of_marker_offset;
+ *	1) Now conn->of_marker_offset will contain the offset to the start
+ *	   of the next iSCSI PDU.  Dump these remaining bytes into another
+ *	   offload buffer.
+ *	2) We are done!
+ *	   Next byte in the TCP stream will contain the next iSCSI PDU!
+ *	   Cool Huh?!
+ */
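+/*
+ *	Example (assumed values): with OFMarkInt=2048 the markers are 8192
+ *	bytes apart on the wire (OFMarkInt is in 4-byte words), so
+ *	conn->of_marker can never legitimately exceed 8192; likewise
+ *	conn->of_marker_offset is bounded by one header plus two digests plus
+ *	MaxRecvDataSegmentLength, which is why both values are sanity checked
+ *	before being used to discard data.
+ */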
+int iscsi_recover_from_unknown_opcode(struct iscsi_conn *conn)
+{
+	/*
+	 * Make sure the remaining byte count to the next marker is a sane value.
+	 */
+	if (conn->of_marker > (CONN_OPS(conn)->OFMarkInt * 4)) {
+		printk(KERN_ERR "Remaining bytes to OFMarker: %u exceeds"
+			" OFMarkInt bytes: %u.\n", conn->of_marker,
+				CONN_OPS(conn)->OFMarkInt * 4);
+		return -1;
+	}
+
+	TRACE(TRACE_ERL1, "Advancing %u bytes in TCP stream to get to the"
+			" next OFMarker.\n", conn->of_marker);
+
+	if (iscsi_dump_data_payload(conn, conn->of_marker, 0) < 0)
+		return -1;
+
+	/*
+	 * Make sure the offset marker we retrieved is a valid value.
+	 */
+	if (conn->of_marker_offset > (ISCSI_HDR_LEN + (CRC_LEN * 2) +
+	    CONN_OPS(conn)->MaxRecvDataSegmentLength)) {
+		printk(KERN_ERR "OfMarker offset value: %u exceeds limit.\n",
+			conn->of_marker_offset);
+		return -1;
+	}
+
+	TRACE(TRACE_ERL1, "Discarding %u bytes of TCP stream to get to the"
+			" next iSCSI Opcode.\n", conn->of_marker_offset);
+
+	if (iscsi_dump_data_payload(conn, conn->of_marker_offset, 0) < 0)
+		return -1;
+
+	return 0;
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_erl0.h b/drivers/target/iscsi/iscsi_target_erl0.h
new file mode 100644
index 0000000..6619d1e
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl0.h
@@ -0,0 +1,19 @@
+#ifndef ISCSI_TARGET_ERL0_H
+#define ISCSI_TARGET_ERL0_H
+
+extern void iscsi_set_dataout_sequence_values(struct iscsi_cmd *);
+extern int iscsi_check_pre_dataout(struct iscsi_cmd *, unsigned char *);
+extern int iscsi_check_post_dataout(struct iscsi_cmd *, unsigned char *, __u8);
+extern void iscsi_start_time2retain_handler(struct iscsi_session *);
+extern int iscsi_stop_time2retain_timer(struct iscsi_session *);
+extern void iscsi_connection_reinstatement_rcfr(struct iscsi_conn *);
+extern void iscsi_cause_connection_reinstatement(struct iscsi_conn *, int);
+extern void iscsi_fall_back_to_erl0(struct iscsi_session *);
+extern void iscsi_take_action_for_connection_exit(struct iscsi_conn *);
+extern int iscsi_recover_from_unknown_opcode(struct iscsi_conn *);
+
+extern struct iscsi_global *iscsi_global;
+extern int iscsi_add_reject_from_cmd(u8, int, int, unsigned char *,
+			struct iscsi_cmd *);
+
+#endif   /*** ISCSI_TARGET_ERL0_H ***/
diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c
new file mode 100644
index 0000000..50233ff
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl1.c
@@ -0,0 +1,1382 @@
+/*******************************************************************************
+ * This file contains error recovery level one used by the iSCSI Target driver.
+ *
+ * Copyright (c) 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2006 SBE, Inc.  All Rights Reserved.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+
+#define OFFLOAD_BUF_SIZE	32768
+
+/*	iscsi_dump_data_payload():
+ *
+ *	Used to dump excess data payload for certain error recovery
+ *	situations.  Receives at most OFFLOAD_BUF_SIZE bytes per rx_data() call.
+ *
+ *	dump_padding_digest denotes whether padding and data digests need
+ *	to be dumped.
+ */
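+/*
+ *	e.g. (hypothetical size): dumping a 100000 byte payload reads three
+ *	full 32768 byte chunks followed by a final 1696 byte chunk, reusing
+ *	the single offload buffer for every rx_data() call.
+ */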
+int iscsi_dump_data_payload(
+	struct iscsi_conn *conn,
+	u32 buf_len,
+	int dump_padding_digest)
+{
+	char *buf, pad_bytes[4];
+	int ret = DATAOUT_WITHIN_COMMAND_RECOVERY, rx_got;
+	u32 length, padding, offset = 0, size;
+	struct iovec iov;
+
+	length = (buf_len > OFFLOAD_BUF_SIZE) ? OFFLOAD_BUF_SIZE : buf_len;
+
+	buf = kzalloc(length, GFP_ATOMIC);
+	if (!(buf)) {
+		printk(KERN_ERR "Unable to allocate %u bytes for offload"
+				" buffer.\n", length);
+		return -1;
+	}
+	memset(&iov, 0, sizeof(struct iovec));
+
+	while (offset < buf_len) {
+		size = ((offset + length) > buf_len) ?
+			(buf_len - offset) : length;
+
+		iov.iov_len = size;
+		iov.iov_base = buf;
+
+		rx_got = rx_data(conn, &iov, 1, size);
+		if (rx_got != size) {
+			ret = DATAOUT_CANNOT_RECOVER;
+			goto out;
+		}
+
+		offset += size;
+	}
+
+	if (!dump_padding_digest)
+		goto out;
+
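+	/*
+	 * Pad the dumped payload out to a 4-byte boundary, e.g. a 1023 byte
+	 * payload needs ((-1023) & 3) == 1 pad byte, while a 1024 byte
+	 * payload needs none.
+	 */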
+	padding = ((-buf_len) & 3);
+	if (padding != 0) {
+		iov.iov_len = padding;
+		iov.iov_base = pad_bytes;
+
+		rx_got = rx_data(conn, &iov, 1, padding);
+		if (rx_got != padding) {
+			ret = DATAOUT_CANNOT_RECOVER;
+			goto out;
+		}
+	}
+
+	if (CONN_OPS(conn)->DataDigest) {
+		u32 data_crc;
+
+		iov.iov_len = CRC_LEN;
+		iov.iov_base = &data_crc;
+
+		rx_got = rx_data(conn, &iov, 1, CRC_LEN);
+		if (rx_got != CRC_LEN) {
+			ret = DATAOUT_CANNOT_RECOVER;
+			goto out;
+		}
+	}
+
+out:
+	kfree(buf);
+	return ret;
+}
+
+/*	iscsi_send_recovery_r2t_for_snack():
+ *
+ *	Used for retransmitting R2Ts from a R2T SNACK request.
+ */
+static int iscsi_send_recovery_r2t_for_snack(
+	struct iscsi_cmd *cmd,
+	struct iscsi_r2t *r2t)
+{
+	/*
+	 * If the struct iscsi_r2t has not been sent yet, we can safely
+	 * ignore retransmission of the R2TSN in question.
+	 */
+	spin_lock_bh(&cmd->r2t_lock);
+	if (!r2t->sent_r2t) {
+		spin_unlock_bh(&cmd->r2t_lock);
+		return 0;
+	}
+	r2t->sent_r2t = 0;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	iscsi_add_cmd_to_immediate_queue(cmd, CONN(cmd), ISTATE_SEND_R2T);
+
+	return 0;
+}
+
+/*	iscsi_handle_r2t_snack():
+ *
+ *
+ */
+static int iscsi_handle_r2t_snack(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	u32 begrun,
+	u32 runlength)
+{
+	u32 last_r2tsn;
+	struct iscsi_r2t *r2t;
+
+	/*
+	 * Make sure the initiator is not requesting retransmission
+	 * of R2TSNs already acknowledged by a TMR TASK_REASSIGN.
+	 */
+	if ((cmd->cmd_flags & ICF_GOT_DATACK_SNACK) &&
+	    (begrun <= cmd->acked_data_sn)) {
+		printk(KERN_ERR "ITT: 0x%08x, R2T SNACK requesting"
+			" retransmission of R2TSN: 0x%08x to 0x%08x but already"
+			" acked to  R2TSN: 0x%08x by TMR TASK_REASSIGN,"
+			" protocol error.\n", cmd->init_task_tag, begrun,
+			(begrun + runlength), cmd->acked_data_sn);
+
+			return iscsi_add_reject_from_cmd(
+					ISCSI_REASON_PROTOCOL_ERROR,
+					1, 0, buf, cmd);
+	}
+
+	if (runlength) {
+		if ((begrun + runlength) > cmd->r2t_sn) {
+			printk(KERN_ERR "Command ITT: 0x%08x received R2T SNACK"
+			" with BegRun: 0x%08x, RunLength: 0x%08x, exceeds"
+			" current R2TSN: 0x%08x, protocol error.\n",
+			cmd->init_task_tag, begrun, runlength, cmd->r2t_sn);
+			return iscsi_add_reject_from_cmd(
+				ISCSI_REASON_BOOKMARK_INVALID, 1, 0, buf, cmd);
+		}
+		last_r2tsn = (begrun + runlength);
+	} else
+		last_r2tsn = cmd->r2t_sn;
+
+	while (begrun < last_r2tsn) {
+		r2t = iscsi_get_holder_for_r2tsn(cmd, begrun);
+		if (!(r2t))
+			return -1;
+		if (iscsi_send_recovery_r2t_for_snack(cmd, r2t) < 0)
+			return -1;
+
+		begrun++;
+	}
+
+	return 0;
+}
+
+/*	iscsi_create_recovery_datain_values_datasequenceinorder_yes():
+ *
+ *	Generates Offsets and NextBurstLength based on Begrun and Runlength
+ *	carried in a Data SNACK or ExpDataSN in TMR TASK_REASSIGN.
+ *
+ *	For DataSequenceInOrder=Yes and DataPDUInOrder=[Yes,No] only.
+ *
+ *	FIXME: How is this handled for a RData SNACK?
+ */
+int iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain_req *dr)
+{
+	u32 data_sn = 0, data_sn_count = 0;
+	u32 pdu_start = 0, seq_no = 0;
+	u32 begrun = dr->begrun;
+	struct iscsi_conn *conn = CONN(cmd);
+
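+	/*
+	 * For illustration, assuming MaxRecvDataSegmentLength = 8k and
+	 * MaxBurstLength = 32k: a BegRun of 6 leaves read_data_done = 48k,
+	 * next_burst_len = 16k and seq_no = 1, i.e. retransmission resumes
+	 * 16k into the second DataIN sequence.
+	 */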
+	while (begrun > data_sn++) {
+		data_sn_count++;
+		if ((dr->next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			dr->read_data_done +=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			dr->next_burst_len +=
+				CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		} else {
+			dr->read_data_done +=
+				(SESS_OPS_C(conn)->MaxBurstLength -
+				 dr->next_burst_len);
+			dr->next_burst_len = 0;
+			pdu_start += data_sn_count;
+			data_sn_count = 0;
+			seq_no++;
+		}
+	}
+
+	if (!SESS_OPS_C(conn)->DataPDUInOrder) {
+		cmd->seq_no = seq_no;
+		cmd->pdu_start = pdu_start;
+		cmd->pdu_send_order = data_sn_count;
+	}
+
+	return 0;
+}
+
+/*	iscsi_create_recovery_datain_values_datasequenceinorder_no():
+ *
+ *	Generates Offsets and NextBurstLength based on Begrun and Runlength
+ *	carried in a Data SNACK or ExpDataSN in TMR TASK_REASSIGN.
+ *
+ *	For DataSequenceInOrder=No and DataPDUInOrder=[Yes,No] only.
+ *
+ *	FIXME: How is this handled for a RData SNACK?
+ */
+int iscsi_create_recovery_datain_values_datasequenceinorder_no(
+	struct iscsi_cmd *cmd,
+	struct iscsi_datain_req *dr)
+{
+	int found_seq = 0, i;
+	u32 data_sn, read_data_done = 0, seq_send_order = 0;
+	u32 begrun = dr->begrun;
+	u32 runlength = dr->runlength;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_seq *first_seq = NULL, *seq = NULL;
+
+	if (!cmd->seq_list) {
+		printk(KERN_ERR "struct iscsi_cmd->seq_list is NULL!\n");
+		return -1;
+	}
+
+	/*
+	 * Calculate read_data_done for all sequences containing a
+	 * first_datasn and last_datasn less than the BegRun.
+	 *
+	 * Locate the struct iscsi_seq the BegRun lies within and calculate
+	 * NextBurstLength up to the DataSN based on MaxRecvDataSegmentLength.
+	 *
+	 * Also use struct iscsi_seq->seq_send_order to determine where to start.
+	 */
+	for (i = 0; i < cmd->seq_count; i++) {
+		seq = &cmd->seq_list[i];
+
+		if (!seq->seq_send_order)
+			first_seq = seq;
+
+		/*
+		 * No data has been transferred for this DataIN sequence, so the
+		 * seq->first_datasn and seq->last_datasn have not been set.
+		 */
+		if (!seq->sent) {
+#if 0
+			printk(KERN_ERR "Ignoring non-sent sequence 0x%08x ->"
+				" 0x%08x\n\n", seq->first_datasn,
+				seq->last_datasn);
+#endif
+			continue;
+		}
+
+		/*
+		 * This DataIN sequence precedes the received BegRun; add the
+		 * total xfer_len of the sequence to read_data_done and reset
+		 * seq->pdu_send_order.
+		 */
+		if ((seq->first_datasn < begrun) &&
+				(seq->last_datasn < begrun)) {
+#if 0
+			printk(KERN_ERR "Pre BegRun sequence 0x%08x ->"
+				" 0x%08x\n", seq->first_datasn,
+				seq->last_datasn);
+#endif
+			read_data_done += cmd->seq_list[i].xfer_len;
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			continue;
+		}
+
+		/*
+		 * The BegRun lies within this DataIN sequence.
+		 */
+		if ((seq->first_datasn <= begrun) &&
+				(seq->last_datasn >= begrun)) {
+#if 0
+			printk(KERN_ERR "Found sequence begrun: 0x%08x in"
+				" 0x%08x -> 0x%08x\n", begrun,
+				seq->first_datasn, seq->last_datasn);
+#endif
+			seq_send_order = seq->seq_send_order;
+			data_sn = seq->first_datasn;
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			found_seq = 1;
+
+			/*
+			 * For DataPDUInOrder=Yes, while the first DataSN of
+			 * the sequence is less than the received BegRun, add
+			 * the MaxRecvDataSegmentLength to read_data_done and
+			 * to the sequence's next_burst_len;
+			 *
+			 * For DataPDUInOrder=No, while the first DataSN of the
+			 * sequence is less than the received BegRun, find the
+			 * struct iscsi_pdu of the DataSN in question and add the
+			 * MaxRecvDataSegmentLength to read_data_done and to the
+			 * sequence's next_burst_len;
+			 */
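+			/*
+			 * For illustration: if BegRun falls three DataSNs into
+			 * this sequence, DataPDUInOrder=Yes credits three full
+			 * MaxRecvDataSegmentLength payloads, while
+			 * DataPDUInOrder=No sums the recorded length of each
+			 * matching struct iscsi_pdu instead.
+			 */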
+			if (SESS_OPS_C(conn)->DataPDUInOrder) {
+				while (data_sn < begrun) {
+					seq->pdu_send_order++;
+					read_data_done +=
+						CONN_OPS(conn)->MaxRecvDataSegmentLength;
+					seq->next_burst_len +=
+						CONN_OPS(conn)->MaxRecvDataSegmentLength;
+					data_sn++;
+				}
+			} else {
+				int j;
+				struct iscsi_pdu *pdu;
+
+				while (data_sn < begrun) {
+					seq->pdu_send_order++;
+
+					for (j = 0; j < seq->pdu_count; j++) {
+						pdu = &cmd->pdu_list[
+							seq->pdu_start + j];
+						if (pdu->data_sn == data_sn) {
+							read_data_done +=
+								pdu->length;
+							seq->next_burst_len +=
+								pdu->length;
+						}
+					}
+					data_sn++;
+				}
+			}
+			continue;
+		}
+
+		/*
+		 * This DataIN sequence lies beyond the received BegRun,
+		 * reset seq->pdu_send_order and continue.
+		 */
+		if ((seq->first_datasn > begrun) ||
+				(seq->last_datasn > begrun)) {
+#if 0
+			printk(KERN_ERR "Post BegRun sequence 0x%08x -> 0x%08x\n",
+					seq->first_datasn, seq->last_datasn);
+#endif
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			continue;
+		}
+	}
+
+	if (!found_seq) {
+		if (!begrun) {
+			if (!first_seq) {
+				printk(KERN_ERR "ITT: 0x%08x, Begrun: 0x%08x"
+					" but first_seq is NULL\n",
+					cmd->init_task_tag, begrun);
+				return -1;
+			}
+			seq_send_order = first_seq->seq_send_order;
+			seq->next_burst_len = seq->pdu_send_order = 0;
+			goto done;
+		}
+
+		printk(KERN_ERR "Unable to locate struct iscsi_seq for ITT: 0x%08x,"
+			" BegRun: 0x%08x, RunLength: 0x%08x while"
+			" DataSequenceInOrder=No and DataPDUInOrder=%s.\n",
+				cmd->init_task_tag, begrun, runlength,
+			(SESS_OPS_C(conn)->DataPDUInOrder) ? "Yes" : "No");
+		return -1;
+	}
+
+done:
+	dr->read_data_done = read_data_done;
+	dr->seq_send_order = seq_send_order;
+
+	return 0;
+}
+
+/*	iscsi_handle_recovery_datain():
+ *
+ *
+ */
+static inline int iscsi_handle_recovery_datain(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf,
+	u32 begrun,
+	u32 runlength)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+
+	if (!(atomic_read(&T_TASK(se_cmd)->t_transport_complete))) {
+		printk(KERN_ERR "Ignoring ITT: 0x%08x Data SNACK\n",
+				cmd->init_task_tag);
+		return 0;
+	}
+
+	/*
+	 * Make sure the initiator is not requesting retransmission
+	 * of DataSNs already acknowledged by a Data ACK SNACK.
+	 */
+	if ((cmd->cmd_flags & ICF_GOT_DATACK_SNACK) &&
+	    (begrun <= cmd->acked_data_sn)) {
+		printk(KERN_ERR "ITT: 0x%08x, Data SNACK requesting"
+			" retransmission of DataSN: 0x%08x to 0x%08x but"
+			" already acked to DataSN: 0x%08x by Data ACK SNACK,"
+			" protocol error.\n", cmd->init_task_tag, begrun,
+			(begrun + runlength), cmd->acked_data_sn);
+
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
+				1, 0, buf, cmd);
+	}
+
+	/*
+	 * Make sure BegRun and RunLength in the Data SNACK are sane.
+	 * Note: (cmd->data_sn - 1) will carry the maximum DataSN sent.
+	 */
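+	/*
+	 * For illustration: if eight DataIN PDUs (DataSN 0..7) have been
+	 * sent, cmd->data_sn is 8 and any BegRun + RunLength beyond 7 is
+	 * rejected below as an invalid bookmark.
+	 */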
+	if ((begrun + runlength) > (cmd->data_sn - 1)) {
+		printk(KERN_ERR "Initiator requesting BegRun: 0x%08x, RunLength"
+			": 0x%08x greater than maximum DataSN: 0x%08x.\n",
+				begrun, runlength, (cmd->data_sn - 1));
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
+				1, 0, buf, cmd);
+	}
+
+	dr = iscsi_allocate_datain_req();
+	if (!(dr))
+		return iscsi_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				1, 0, buf, cmd);
+
+	dr->data_sn = dr->begrun = begrun;
+	dr->runlength = runlength;
+	dr->generate_recovery_values = 1;
+	dr->recovery = DATAIN_WITHIN_COMMAND_RECOVERY;
+
+	iscsi_attach_datain_req(cmd, dr);
+
+	cmd->i_state = ISTATE_SEND_DATAIN;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+
+	return 0;
+}
+
+/*	iscsi_handle_recovery_datain_or_r2t():
+ *
+ *
+ */
+int iscsi_handle_recovery_datain_or_r2t(
+	struct iscsi_conn *conn,
+	unsigned char *buf,
+	u32 init_task_tag,
+	u32 targ_xfer_tag,
+	u32 begrun,
+	u32 runlength)
+{
+	struct iscsi_cmd *cmd;
+
+	cmd = iscsi_find_cmd_from_itt(conn, init_task_tag);
+	if (!(cmd))
+		return 0;
+
+	/*
+	 * FIXME: This will not work for bidi commands.
+	 */
+	switch (cmd->data_direction) {
+	case DMA_TO_DEVICE:
+		return iscsi_handle_r2t_snack(cmd, buf, begrun, runlength);
+	case DMA_FROM_DEVICE:
+		return iscsi_handle_recovery_datain(cmd, buf, begrun,
+				runlength);
+	default:
+		printk(KERN_ERR "Unknown cmd->data_direction: 0x%02x\n",
+				cmd->data_direction);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_handle_status_snack():
+ *
+ *
+ */
+/* #warning FIXME: Status SNACK needs to be dependent on OPCODE!!! */
+int iscsi_handle_status_snack(
+	struct iscsi_conn *conn,
+	u32 init_task_tag,
+	u32 targ_xfer_tag,
+	u32 begrun,
+	u32 runlength)
+{
+	u32 last_statsn;
+	struct iscsi_cmd *cmd = NULL;
+
+	if (conn->exp_statsn > begrun) {
+		printk(KERN_ERR "Got Status SNACK Begrun: 0x%08x, RunLength:"
+			" 0x%08x but already got ExpStatSN: 0x%08x on CID:"
+			" %hu.\n", begrun, runlength, conn->exp_statsn,
+			conn->cid);
+		return 0;
+	}
+
+	last_statsn = (!runlength) ? conn->stat_sn : (begrun + runlength);
+
+	while (begrun < last_statsn) {
+		spin_lock_bh(&conn->cmd_lock);
+		list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+			if (cmd->stat_sn == begrun)
+				break;
+		}
+		spin_unlock_bh(&conn->cmd_lock);
+
+		if (!cmd) {
+			printk(KERN_ERR "Unable to find StatSN: 0x%08x for"
+				" a Status SNACK, assuming this was a"
+				" proactive SNACK for an untransmitted"
+				" StatSN, ignoring.\n", begrun);
+			begrun++;
+			continue;
+		}
+
+		spin_lock_bh(&cmd->istate_lock);
+		if (cmd->i_state == ISTATE_SEND_DATAIN) {
+			spin_unlock_bh(&cmd->istate_lock);
+			printk(KERN_ERR "Ignoring Status SNACK for BegRun:"
+				" 0x%08x, RunLength: 0x%08x, assuming this was"
+				" a proactive SNACK for an untransmitted"
+				" StatSN\n", begrun, runlength);
+			begrun++;
+			continue;
+		}
+		spin_unlock_bh(&cmd->istate_lock);
+
+		cmd->i_state = ISTATE_SEND_STATUS_RECOVERY;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		begrun++;
+	}
+
+	return 0;
+}
+
+/*	iscsi_handle_data_ack():
+ *
+ *
+ */
+int iscsi_handle_data_ack(
+	struct iscsi_conn *conn,
+	u32 targ_xfer_tag,
+	u32 begrun,
+	u32 runlength)
+{
+	struct iscsi_cmd *cmd = NULL;
+
+	cmd = iscsi_find_cmd_from_ttt(conn, targ_xfer_tag);
+	if (!(cmd)) {
+		printk(KERN_ERR "Data ACK SNACK for TTT: 0x%08x is"
+			" invalid.\n", targ_xfer_tag);
+		return -1;
+	}
+
+	if (begrun <= cmd->acked_data_sn) {
+		printk(KERN_ERR "ITT: 0x%08x Data ACK SNACK BegRUN: 0x%08x is"
+			" less than the already acked DataSN: 0x%08x.\n",
+			cmd->init_task_tag, begrun, cmd->acked_data_sn);
+		return -1;
+	}
+
+	/*
+	 * For Data ACK SNACK, BegRun is the next expected DataSN.
+	 * (see iSCSI v19: 10.16.6)
+	 */
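+	/*
+	 * For illustration: a BegRun of 0x10 acknowledges DataSN 0x0
+	 * through 0xf, so acked_data_sn below becomes 0xf.
+	 */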
+	cmd->cmd_flags |= ICF_GOT_DATACK_SNACK;
+	cmd->acked_data_sn = (begrun - 1);
+
+	TRACE(TRACE_ISCSI, "Received Data ACK SNACK for ITT: 0x%08x,"
+		" updated acked DataSN to 0x%08x.\n",
+			cmd->init_task_tag, cmd->acked_data_sn);
+
+	return 0;
+}
+
+/*	iscsi_send_recovery_r2t():
+ *
+ *
+ */
+static int iscsi_send_recovery_r2t(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 xfer_len)
+{
+	int ret;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	ret = iscsi_add_r2t_to_list(cmd, offset, xfer_len, 1, 0);
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return ret;
+}
+
+/*	iscsi_dataout_datapduinorder_no_fbit():
+ *
+ *
+ */
+int iscsi_dataout_datapduinorder_no_fbit(
+	struct iscsi_cmd *cmd,
+	struct iscsi_pdu *pdu)
+{
+	int i, send_recovery_r2t = 0, recovery = 0;
+	u32 length = 0, offset = 0, pdu_count = 0, xfer_len = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *first_pdu = NULL;
+
+	/*
+	 * Get a struct iscsi_pdu pointer to the first PDU, and the total PDU count
+	 * of the DataOUT sequence.
+	 */
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		for (i = 0; i < cmd->pdu_count; i++) {
+			if (cmd->pdu_list[i].seq_no == pdu->seq_no) {
+				if (!first_pdu)
+					first_pdu = &cmd->pdu_list[i];
+				xfer_len += cmd->pdu_list[i].length;
+				pdu_count++;
+			} else if (pdu_count)
+				break;
+		}
+	} else {
+		struct iscsi_seq *seq = cmd->seq_ptr;
+
+		first_pdu = &cmd->pdu_list[seq->pdu_start];
+		pdu_count = seq->pdu_count;
+	}
+
+	if (!first_pdu || !pdu_count)
+		return DATAOUT_CANNOT_RECOVER;
+
+	/*
+	 * Loop through the ending DataOUT Sequence checking each struct iscsi_pdu.
+	 * The following logic batches PDUs that were not received so that a
+	 * single recovery R2T covers each contiguous missing range.
+	 */
+	for (i = 0; i < pdu_count; i++) {
+		if (first_pdu[i].status == ISCSI_PDU_RECEIVED_OK) {
+			if (!send_recovery_r2t)
+				continue;
+
+			if (iscsi_send_recovery_r2t(cmd, offset, length) < 0)
+				return DATAOUT_CANNOT_RECOVER;
+
+			send_recovery_r2t = length = offset = 0;
+			continue;
+		}
+		/*
+		 * Set recovery = 1 for any missing, CRC failed, or timed
+		 * out PDUs to let the DataOUT logic know that this sequence
+		 * has not been completed yet.
+		 *
+		 * Also, only send a Recovery R2T for ISCSI_PDU_NOT_RECEIVED.
+		 * We assume if the PDU either failed CRC or timed out
+		 * that a Recovery R2T has already been sent.
+		 */
+		recovery = 1;
+
+		if (first_pdu[i].status != ISCSI_PDU_NOT_RECEIVED)
+			continue;
+
+		if (!offset)
+			offset = first_pdu[i].offset;
+		length += first_pdu[i].length;
+
+		send_recovery_r2t = 1;
+	}
+
+	if (send_recovery_r2t)
+		if (iscsi_send_recovery_r2t(cmd, offset, length) < 0)
+			return DATAOUT_CANNOT_RECOVER;
+
+	return (!recovery) ? DATAOUT_NORMAL : DATAOUT_WITHIN_COMMAND_RECOVERY;
+}
+
+/*	iscsi_recalculate_dataout_values():
+ *
+ *
+ */
+static int iscsi_recalculate_dataout_values(
+	struct iscsi_cmd *cmd,
+	u32 pdu_offset,
+	u32 pdu_length,
+	u32 *r2t_offset,
+	u32 *r2t_length)
+{
+	int i;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_pdu *pdu = NULL;
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		cmd->data_sn = 0;
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			*r2t_offset = cmd->write_data_done;
+			*r2t_length = (cmd->seq_end_offset -
+					cmd->write_data_done);
+			return 0;
+		}
+
+		*r2t_offset = cmd->seq_start_offset;
+		*r2t_length = (cmd->seq_end_offset - cmd->seq_start_offset);
+
+		for (i = 0; i < cmd->pdu_count; i++) {
+			pdu = &cmd->pdu_list[i];
+
+			if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+				continue;
+
+			if ((pdu->offset >= cmd->seq_start_offset) &&
+			   ((pdu->offset + pdu->length) <=
+			     cmd->seq_end_offset)) {
+				if (!cmd->unsolicited_data)
+					cmd->next_burst_len -= pdu->length;
+				else
+					cmd->first_burst_len -= pdu->length;
+
+				cmd->write_data_done -= pdu->length;
+				pdu->status = ISCSI_PDU_NOT_RECEIVED;
+			}
+		}
+	} else {
+		struct iscsi_seq *seq = NULL;
+
+		seq = iscsi_get_seq_holder(cmd, pdu_offset, pdu_length);
+		if (!(seq))
+			return -1;
+
+		*r2t_offset = seq->orig_offset;
+		*r2t_length = seq->xfer_len;
+
+		cmd->write_data_done -= (seq->offset - seq->orig_offset);
+		if (cmd->immediate_data)
+			cmd->first_burst_len = cmd->write_data_done;
+
+		seq->data_sn = 0;
+		seq->offset = seq->orig_offset;
+		seq->next_burst_len = 0;
+		seq->status = DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY;
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder)
+			return 0;
+
+		for (i = 0; i < seq->pdu_count; i++) {
+			pdu = &cmd->pdu_list[i+seq->pdu_start];
+
+			if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+				continue;
+
+			pdu->status = ISCSI_PDU_NOT_RECEIVED;
+		}
+	}
+
+	return 0;
+}
+
+/*	iscsi_recover_dataout_sequence():
+ *
+ *
+ */
+int iscsi_recover_dataout_sequence(
+	struct iscsi_cmd *cmd,
+	u32 pdu_offset,
+	u32 pdu_length)
+{
+	u32 r2t_length = 0, r2t_offset = 0;
+
+	spin_lock_bh(&cmd->istate_lock);
+	cmd->cmd_flags |= ICF_WITHIN_COMMAND_RECOVERY;
+	spin_unlock_bh(&cmd->istate_lock);
+
+	if (iscsi_recalculate_dataout_values(cmd, pdu_offset, pdu_length,
+			&r2t_offset, &r2t_length) < 0)
+		return DATAOUT_CANNOT_RECOVER;
+
+	iscsi_send_recovery_r2t(cmd, r2t_offset, r2t_length);
+
+	return DATAOUT_WITHIN_COMMAND_RECOVERY;
+}
+
+/*	iscsi_allocate_ooo_cmdsn():
+ *
+ *
+ */
+static inline struct iscsi_ooo_cmdsn *iscsi_allocate_ooo_cmdsn(void)
+{
+	struct iscsi_ooo_cmdsn *ooo_cmdsn = NULL;
+
+	ooo_cmdsn = kmem_cache_zalloc(lio_ooo_cache, GFP_ATOMIC);
+	if (!(ooo_cmdsn)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_ooo_cmdsn.\n");
+		return NULL;
+	}
+	INIT_LIST_HEAD(&ooo_cmdsn->ooo_list);
+
+	return ooo_cmdsn;
+}
+
+/*	iscsi_attach_ooo_cmdsn():
+ *
+ *	Called with sess->cmdsn_lock held.
+ */
+static inline int iscsi_attach_ooo_cmdsn(
+	struct iscsi_session *sess,
+	struct iscsi_ooo_cmdsn *ooo_cmdsn)
+{
+	struct iscsi_ooo_cmdsn *ooo_tail, *ooo_tmp;
+	/*
+	 * We attach the struct iscsi_ooo_cmdsn entry to the out of order
+	 * list in increasing CmdSN order.
+	 * This allows iscsi_execute_ooo_cmdsns() to detect any
+	 * additional CmdSN holes while performing delayed execution.
+	 */
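+	/*
+	 * For illustration: with the list already holding CmdSN 11 and 13,
+	 * an arriving CmdSN 12 is inserted between them so that delayed
+	 * execution later drains 11, 12 and 13 in order.
+	 */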
+	if (list_empty(&sess->sess_ooo_cmdsn_list))
+		list_add_tail(&ooo_cmdsn->ooo_list,
+				&sess->sess_ooo_cmdsn_list);
+	else {
+		ooo_tail = list_entry(sess->sess_ooo_cmdsn_list.prev,
+				typeof(*ooo_tail), ooo_list);
+		/*
+		 * CmdSN is greater than the tail of the list.
+		 */
+		if (ooo_tail->cmdsn < ooo_cmdsn->cmdsn)
+			list_add_tail(&ooo_cmdsn->ooo_list,
+					&sess->sess_ooo_cmdsn_list);
+		else {
+			/*
+			 * CmdSN is either lower than the head,  or somewhere
+			 * in the middle.
+			 */
+			list_for_each_entry(ooo_tmp, &sess->sess_ooo_cmdsn_list,
+						ooo_list) {
+				if (ooo_tmp->cmdsn < ooo_cmdsn->cmdsn)
+					continue;
+				/*
+				 * Insert before this entry to keep the list
+				 * in increasing CmdSN order.
+				 */
+				list_add_tail(&ooo_cmdsn->ooo_list,
+					&ooo_tmp->ooo_list);
+				break;
+			}
+		}
+	}
+	sess->ooo_cmdsn_count++;
+
+	TRACE(TRACE_CMDSN, "Set out of order CmdSN count for SID:"
+		" %u to %hu.\n", sess->sid, sess->ooo_cmdsn_count);
+
+	return 0;
+}
+
+/*	iscsi_remove_ooo_cmdsn()
+ *
+ *	Removes a struct iscsi_ooo_cmdsn from a session's list,
+ *	called with struct iscsi_session->cmdsn_lock held.
+ */
+void iscsi_remove_ooo_cmdsn(
+	struct iscsi_session *sess,
+	struct iscsi_ooo_cmdsn *ooo_cmdsn)
+{
+	list_del(&ooo_cmdsn->ooo_list);
+	kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
+}
+
+/*	iscsi_clear_ooo_cmdsns_for_conn():
+ *
+ *
+ */
+void iscsi_clear_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_ooo_cmdsn *ooo_cmdsn;
+	struct iscsi_session *sess = SESS(conn);
+
+	spin_lock(&sess->cmdsn_lock);
+	list_for_each_entry(ooo_cmdsn, &sess->sess_ooo_cmdsn_list, ooo_list) {
+		if (ooo_cmdsn->cid != conn->cid)
+			continue;
+
+		ooo_cmdsn->cmd = NULL;
+	}
+	spin_unlock(&sess->cmdsn_lock);
+}
+
+/*	iscsi_execute_ooo_cmdsns():
+ *
+ *	Called with sess->cmdsn_lock held.
+ */
+int iscsi_execute_ooo_cmdsns(struct iscsi_session *sess)
+{
+	int ooo_count = 0;
+	struct iscsi_cmd *cmd = NULL;
+	struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
+
+	list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
+				&sess->sess_ooo_cmdsn_list, ooo_list) {
+		if (ooo_cmdsn->cmdsn != sess->exp_cmd_sn)
+			continue;
+
+		if (!ooo_cmdsn->cmd) {
+			sess->exp_cmd_sn++;
+			iscsi_remove_ooo_cmdsn(sess, ooo_cmdsn);
+			continue;
+		}
+
+		cmd = ooo_cmdsn->cmd;
+		cmd->i_state = cmd->deferred_i_state;
+		ooo_count++;
+		sess->exp_cmd_sn++;
+		TRACE(TRACE_CMDSN, "Executing out of order CmdSN: 0x%08x,"
+			" incremented ExpCmdSN to 0x%08x.\n",
+			cmd->cmd_sn, sess->exp_cmd_sn);
+
+		iscsi_remove_ooo_cmdsn(sess, ooo_cmdsn);
+
+		if (iscsi_execute_cmd(cmd, 1) < 0)
+			return -1;
+
+		continue;
+	}
+
+	return ooo_count;
+}
+
+/*	iscsi_execute_cmd():
+ *
+ *	Called either:
+ *
+ *	1. With sess->cmdsn_lock held from iscsi_execute_ooo_cmdsns()
+ *	or iscsi_check_received_cmdsn().
+ *	2. With no locks held directly from iscsi_handle_XXX_pdu() functions
+ *	for immediate commands.
+ */
+int iscsi_execute_cmd(struct iscsi_cmd *cmd, int ooo)
+{
+	struct se_cmd *se_cmd = &cmd->se_cmd;
+	int lr = 0;
+
+	spin_lock_bh(&cmd->istate_lock);
+	if (ooo)
+		cmd->cmd_flags &= ~ICF_OOO_CMDSN;
+
+	switch (cmd->iscsi_opcode) {
+	case ISCSI_OP_SCSI_CMD:
+		/*
+		 * Go ahead and send the CHECK_CONDITION status for
+		 * any SCSI CDB exceptions that may have occurred, and
+		 * handle the SCF_SCSI_RESERVATION_CONFLICT case here as well.
+		 */
+		if (se_cmd->se_cmd_flags & SCF_SCSI_CDB_EXCEPTION) {
+			if (se_cmd->se_cmd_flags &
+					SCF_SCSI_RESERVATION_CONFLICT) {
+				cmd->i_state = ISTATE_SEND_STATUS;
+				spin_unlock_bh(&cmd->istate_lock);
+				iscsi_add_cmd_to_response_queue(cmd, CONN(cmd),
+						cmd->i_state);
+				return 0;
+			}
+			spin_unlock_bh(&cmd->istate_lock);
+			/*
+			 * Determine if delayed TASK_ABORTED status for WRITEs
+			 * should be sent now if no unsolicited data out
+			 * payloads are expected, or if the delayed status
+			 * should be sent after unsolicited data out with
+			 * ISCSI_FLAG_CMD_FINAL set in iscsi_handle_data_out()
+			 */
+			if (transport_check_aborted_status(se_cmd,
+					(cmd->unsolicited_data == 0)) != 0)
+				return 0;
+			/*
+			 * Otherwise send CHECK_CONDITION and sense for
+			 * exception
+			 */
+			return transport_send_check_condition_and_sense(se_cmd,
+					se_cmd->scsi_sense_reason, 0);
+		}
+		/*
+		 * Special case for delayed CmdSN with Immediate
+		 * Data and/or Unsolicited Data Out attached.
+		 */
+		if (cmd->immediate_data) {
+			if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT) {
+				spin_unlock_bh(&cmd->istate_lock);
+				return transport_generic_handle_data(
+						&cmd->se_cmd);
+			}
+			spin_unlock_bh(&cmd->istate_lock);
+
+			if (!(cmd->cmd_flags &
+					ICF_NON_IMMEDIATE_UNSOLICITED_DATA)) {
+				/*
+				 * Send the delayed TASK_ABORTED status for
+				 * WRITEs if no more unsolicited data is
+				 * expected.
+				 */
+				if (transport_check_aborted_status(se_cmd, 1)
+						!= 0)
+					return 0;
+
+				iscsi_set_dataout_sequence_values(cmd);
+				iscsi_build_r2ts_for_cmd(cmd, CONN(cmd), 0);
+			}
+			return 0;
+		}
+		/*
+		 * The default handler.
+		 */
+		spin_unlock_bh(&cmd->istate_lock);
+
+		if ((cmd->data_direction == DMA_TO_DEVICE) &&
+		    !(cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA)) {
+			/*
+			 * Send the delayed TASK_ABORTED status for WRITEs if
+			 * no more unsolicited data is expected.
+			 */
+			if (transport_check_aborted_status(se_cmd, 1) != 0)
+				return 0;
+
+			iscsi_set_dataout_sequence_values(cmd);
+			spin_lock_bh(&cmd->dataout_timeout_lock);
+			iscsi_start_dataout_timer(cmd, CONN(cmd));
+			spin_unlock_bh(&cmd->dataout_timeout_lock);
+		}
+		return transport_generic_handle_cdb(&cmd->se_cmd);
+
+	case ISCSI_OP_NOOP_OUT:
+	case ISCSI_OP_TEXT:
+		spin_unlock_bh(&cmd->istate_lock);
+		iscsi_add_cmd_to_response_queue(cmd, CONN(cmd), cmd->i_state);
+		break;
+	case ISCSI_OP_SCSI_TMFUNC:
+		if (se_cmd->se_cmd_flags & SCF_SCSI_CDB_EXCEPTION) {
+			spin_unlock_bh(&cmd->istate_lock);
+			iscsi_add_cmd_to_response_queue(cmd, CONN(cmd),
+					cmd->i_state);
+			return 0;
+		}
+		spin_unlock_bh(&cmd->istate_lock);
+
+		return transport_generic_handle_tmr(SE_CMD(cmd));
+	case ISCSI_OP_LOGOUT:
+		spin_unlock_bh(&cmd->istate_lock);
+		switch (cmd->logout_reason) {
+		case ISCSI_LOGOUT_REASON_CLOSE_SESSION:
+			lr = iscsi_logout_closesession(cmd, CONN(cmd));
+			break;
+		case ISCSI_LOGOUT_REASON_CLOSE_CONNECTION:
+			lr = iscsi_logout_closeconnection(cmd, CONN(cmd));
+			break;
+		case ISCSI_LOGOUT_REASON_RECOVERY:
+			lr = iscsi_logout_removeconnforrecovery(cmd, CONN(cmd));
+			break;
+		default:
+			printk(KERN_ERR "Unknown iSCSI Logout Request Code:"
+				" 0x%02x\n", cmd->logout_reason);
+			return -1;
+		}
+
+		return lr;
+	default:
+		spin_unlock_bh(&cmd->istate_lock);
+		printk(KERN_ERR "Cannot perform out of order execution for"
+		" unknown iSCSI Opcode: 0x%02x\n", cmd->iscsi_opcode);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_free_all_ooo_cmdsns():
+ *
+ *
+ */
+void iscsi_free_all_ooo_cmdsns(struct iscsi_session *sess)
+{
+	struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
+
+	spin_lock(&sess->cmdsn_lock);
+	list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
+			&sess->sess_ooo_cmdsn_list, ooo_list) {
+
+		list_del(&ooo_cmdsn->ooo_list);
+		kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
+	}
+	spin_unlock(&sess->cmdsn_lock);
+}
+
+/*	iscsi_handle_ooo_cmdsn():
+ *
+ *
+ */
+int iscsi_handle_ooo_cmdsn(
+	struct iscsi_session *sess,
+	struct iscsi_cmd *cmd,
+	u32 cmdsn)
+{
+	int batch = 0;
+	struct iscsi_ooo_cmdsn *ooo_cmdsn = NULL, *ooo_tail = NULL;
+
+	sess->cmdsn_outoforder = 1;
+
+	cmd->deferred_i_state		= cmd->i_state;
+	cmd->i_state			= ISTATE_DEFERRED_CMD;
+	cmd->cmd_flags			|= ICF_OOO_CMDSN;
+
+	if (list_empty(&sess->sess_ooo_cmdsn_list))
+		batch = 1;
+	else {
+		ooo_tail = list_entry(sess->sess_ooo_cmdsn_list.prev,
+				typeof(*ooo_tail), ooo_list);
+		if (ooo_tail->cmdsn != (cmdsn - 1))
+			batch = 1;
+	}
+
+	ooo_cmdsn = iscsi_allocate_ooo_cmdsn();
+	if (!(ooo_cmdsn))
+		return CMDSN_ERROR_CANNOT_RECOVER;
+
+	ooo_cmdsn->cmd			= cmd;
+	ooo_cmdsn->batch_count		= (batch) ?
+					  (cmdsn - sess->exp_cmd_sn) : 1;
+	ooo_cmdsn->cid			= CONN(cmd)->cid;
+	ooo_cmdsn->exp_cmdsn		= sess->exp_cmd_sn;
+	ooo_cmdsn->cmdsn		= cmdsn;
+
+	if (iscsi_attach_ooo_cmdsn(sess, ooo_cmdsn) < 0) {
+		kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
+		return CMDSN_ERROR_CANNOT_RECOVER;
+	}
+
+	return CMDSN_HIGHER_THAN_EXP;
+}
+
+/*	 iscsi_set_dataout_timeout_values():
+ *
+ *
+ */
+static int iscsi_set_dataout_timeout_values(
+	struct iscsi_cmd *cmd,
+	u32 *offset,
+	u32 *length)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_r2t *r2t;
+
+	if (cmd->unsolicited_data) {
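+		/*
+		 * For unsolicited data the recovery range is capped at
+		 * FirstBurstLength: e.g. a 4k WRITE with FirstBurstLength =
+		 * 64k recovers just the 4k, while a 1M WRITE recovers at
+		 * most the first 64k here.
+		 */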
+		*offset = 0;
+		*length = (SESS_OPS_C(conn)->FirstBurstLength >
+			   cmd->data_length) ?
+			   cmd->data_length :
+			   SESS_OPS_C(conn)->FirstBurstLength;
+		return 0;
+	}
+
+	spin_lock_bh(&cmd->r2t_lock);
+	if (list_empty(&cmd->cmd_r2t_list)) {
+		printk(KERN_ERR "cmd->cmd_r2t_list is empty!\n");
+		spin_unlock_bh(&cmd->r2t_lock);
+		return -1;
+	}
+
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list)
+		if (r2t->sent_r2t && !r2t->recovery_r2t && !r2t->seq_complete)
+			break;
+
+	if (!r2t) {
+		printk(KERN_ERR "Unable to locate any incomplete DataOUT"
+			" sequences for ITT: 0x%08x.\n", cmd->init_task_tag);
+		spin_unlock_bh(&cmd->r2t_lock);
+		return -1;
+	}
+
+	*offset = r2t->offset;
+	*length = r2t->xfer_len;
+
+	spin_unlock_bh(&cmd->r2t_lock);
+	return 0;
+}
+
+/*	iscsi_handle_dataout_timeout():
+ *
+ *	NOTE: Called from interrupt (timer) context.
+ */
+static void iscsi_handle_dataout_timeout(unsigned long data)
+{
+	u32 pdu_length = 0, pdu_offset = 0;
+	u32 r2t_length = 0, r2t_offset = 0;
+	struct iscsi_cmd *cmd = (struct iscsi_cmd *) data;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_session *sess = NULL;
+	struct iscsi_node_attrib *na;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	if (cmd->dataout_timer_flags & DATAOUT_TF_STOP) {
+		spin_unlock_bh(&cmd->dataout_timeout_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+	cmd->dataout_timer_flags &= ~DATAOUT_TF_RUNNING;
+	sess = SESS(conn);
+	na = iscsi_tpg_get_node_attrib(sess);
+
+	if (!SESS_OPS(sess)->ErrorRecoveryLevel) {
+		TRACE(TRACE_ERL0, "Unable to recover from DataOut timeout while"
+			" in ERL=0.\n");
+		goto failure;
+	}
+
+	if (++cmd->dataout_timeout_retries == na->dataout_timeout_retries) {
+		TRACE(TRACE_TIMER, "Command ITT: 0x%08x exceeded max retries"
+			" for DataOUT timeout %u, closing iSCSI connection.\n",
+			cmd->init_task_tag, na->dataout_timeout_retries);
+		goto failure;
+	}
+
+	cmd->cmd_flags |= ICF_WITHIN_COMMAND_RECOVERY;
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			pdu_offset = cmd->write_data_done;
+			if ((pdu_offset + (SESS_OPS_C(conn)->MaxBurstLength -
+			     cmd->next_burst_len)) > cmd->data_length)
+				pdu_length = (cmd->data_length -
+					cmd->write_data_done);
+			else
+				pdu_length = (SESS_OPS_C(conn)->MaxBurstLength -
+						cmd->next_burst_len);
+		} else {
+			pdu_offset = cmd->seq_start_offset;
+			pdu_length = (cmd->seq_end_offset -
+				cmd->seq_start_offset);
+		}
+	} else {
+		if (iscsi_set_dataout_timeout_values(cmd, &pdu_offset,
+				&pdu_length) < 0)
+			goto failure;
+	}
+
+	if (iscsi_recalculate_dataout_values(cmd, pdu_offset, pdu_length,
+			&r2t_offset, &r2t_length) < 0)
+		goto failure;
+
+	TRACE(TRACE_TIMER, "Command ITT: 0x%08x timed out waiting for"
+		" completion of %sDataOUT Sequence Offset: %u, Length: %u\n",
+		cmd->init_task_tag, (cmd->unsolicited_data) ? "Unsolicited " :
+		"", r2t_offset, r2t_length);
+
+	if (iscsi_send_recovery_r2t(cmd, r2t_offset, r2t_length) < 0)
+		goto failure;
+
+	iscsi_start_dataout_timer(cmd, conn);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+	iscsi_dec_conn_usage_count(conn);
+
+	return;
+
+failure:
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+	iscsi_cause_connection_reinstatement(conn, 0);
+	iscsi_dec_conn_usage_count(conn);
+
+	return;
+}
+
+/*	iscsi_mod_dataout_timer():
+ *
+ *
+ */
+void iscsi_mod_dataout_timer(struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	if (!(cmd->dataout_timer_flags & DATAOUT_TF_RUNNING)) {
+		spin_unlock_bh(&cmd->dataout_timeout_lock);
+		return;
+	}
+
+	MOD_TIMER(&cmd->dataout_timer, na->dataout_timeout);
+	TRACE(TRACE_TIMER, "Updated DataOUT timer for ITT: 0x%08x\n",
+			cmd->init_task_tag);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+}
+
+/*	iscsi_start_dataout_timer():
+ *
+ *	Called with cmd->dataout_timeout_lock held.
+ */
+void iscsi_start_dataout_timer(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	if (cmd->dataout_timer_flags & DATAOUT_TF_RUNNING)
+		return;
+
+	TRACE(TRACE_TIMER, "Starting DataOUT timer for ITT: 0x%08x on"
+		" CID: %hu.\n", cmd->init_task_tag, conn->cid);
+
+	init_timer(&cmd->dataout_timer);
+	SETUP_TIMER(cmd->dataout_timer, na->dataout_timeout, cmd,
+			iscsi_handle_dataout_timeout);
+	cmd->dataout_timer_flags &= ~DATAOUT_TF_STOP;
+	cmd->dataout_timer_flags |= DATAOUT_TF_RUNNING;
+	add_timer(&cmd->dataout_timer);
+}
+
+/*	iscsi_stop_dataout_timer():
+ *
+ *
+ */
+void iscsi_stop_dataout_timer(struct iscsi_cmd *cmd)
+{
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	if (!(cmd->dataout_timer_flags & DATAOUT_TF_RUNNING)) {
+		spin_unlock_bh(&cmd->dataout_timeout_lock);
+		return;
+	}
+	cmd->dataout_timer_flags |= DATAOUT_TF_STOP;
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+
+	del_timer_sync(&cmd->dataout_timer);
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	cmd->dataout_timer_flags &= ~DATAOUT_TF_RUNNING;
+	TRACE(TRACE_TIMER, "Stopped DataOUT Timer for ITT: 0x%08x\n",
+			cmd->init_task_tag);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_erl1.h b/drivers/target/iscsi/iscsi_target_erl1.h
new file mode 100644
index 0000000..e764ec2
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl1.h
@@ -0,0 +1,35 @@
+#ifndef ISCSI_TARGET_ERL1_H
+#define ISCSI_TARGET_ERL1_H
+
+extern int iscsi_dump_data_payload(struct iscsi_conn *, __u32, int);
+extern int iscsi_create_recovery_datain_values_datasequenceinorder_yes(
+			struct iscsi_cmd *, struct iscsi_datain_req *);
+extern int iscsi_create_recovery_datain_values_datasequenceinorder_no(
+			struct iscsi_cmd *, struct iscsi_datain_req *);
+extern int iscsi_handle_recovery_datain_or_r2t(struct iscsi_conn *, unsigned char *,
+			__u32, __u32, __u32, __u32);
+extern int iscsi_handle_status_snack(struct iscsi_conn *, __u32, __u32,
+			__u32, __u32);
+extern int iscsi_handle_data_ack(struct iscsi_conn *, __u32, __u32, __u32);
+extern int iscsi_dataout_datapduinorder_no_fbit(struct iscsi_cmd *, struct iscsi_pdu *);
+extern int iscsi_recover_dataout_sequence(struct iscsi_cmd *, __u32, __u32);
+extern void iscsi_clear_ooo_cmdsns_for_conn(struct iscsi_conn *);
+extern void iscsi_free_all_ooo_cmdsns(struct iscsi_session *);
+extern int iscsi_execute_ooo_cmdsns(struct iscsi_session *);
+extern int iscsi_execute_cmd(struct iscsi_cmd *, int);
+extern int iscsi_handle_ooo_cmdsn(struct iscsi_session *, struct iscsi_cmd *, __u32);
+extern void iscsi_remove_ooo_cmdsn(struct iscsi_session *, struct iscsi_ooo_cmdsn *);
+extern void iscsi_mod_dataout_timer(struct iscsi_cmd *);
+extern void iscsi_start_dataout_timer(struct iscsi_cmd *, struct iscsi_conn *);
+extern void iscsi_stop_dataout_timer(struct iscsi_cmd *);
+
+extern struct kmem_cache *lio_ooo_cache;
+
+extern int iscsi_add_reject_from_cmd(u8, int, int, unsigned char *,
+			struct iscsi_cmd *);
+extern int iscsi_build_r2ts_for_cmd(struct iscsi_cmd *, struct iscsi_conn *, int);
+extern int iscsi_logout_closesession(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_logout_closeconnection(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_logout_removeconnforrecovery(struct iscsi_cmd *, struct iscsi_conn *);
+
+#endif /* ISCSI_TARGET_ERL1_H */
diff --git a/drivers/target/iscsi/iscsi_target_erl2.c b/drivers/target/iscsi/iscsi_target_erl2.c
new file mode 100644
index 0000000..2e61514
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl2.c
@@ -0,0 +1,535 @@
+/*******************************************************************************
+ * This file contains error recovery level two functions used by
+ * the iSCSI Target driver.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+
+/*	iscsi_create_conn_recovery_datain_values():
+ *
+ *	FIXME: Does RData SNACK apply here as well?
+ */
+void iscsi_create_conn_recovery_datain_values(
+	struct iscsi_cmd *cmd,
+	u32 exp_data_sn)
+{
+	u32 data_sn = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->next_burst_len = 0;
+	cmd->read_data_done = 0;
+
+	while (exp_data_sn > data_sn) {
+		if ((cmd->next_burst_len +
+		     CONN_OPS(conn)->MaxRecvDataSegmentLength) <
+		     SESS_OPS_C(conn)->MaxBurstLength) {
+			cmd->read_data_done +=
+			       CONN_OPS(conn)->MaxRecvDataSegmentLength;
+			cmd->next_burst_len +=
+			       CONN_OPS(conn)->MaxRecvDataSegmentLength;
+		} else {
+			cmd->read_data_done +=
+				(SESS_OPS_C(conn)->MaxBurstLength -
+				cmd->next_burst_len);
+			cmd->next_burst_len = 0;
+		}
+		data_sn++;
+	}
+}
+
+/*	iscsi_create_conn_recovery_dataout_values():
+ *
+ *
+ */
+void iscsi_create_conn_recovery_dataout_values(
+	struct iscsi_cmd *cmd)
+{
+	u32 write_data_done = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->data_sn = 0;
+	cmd->next_burst_len = 0;
+
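+	/*
+	 * For illustration: with MaxBurstLength = 256k and 300k already
+	 * received, write_data_done is rounded down to 256k so that DataOUT
+	 * recovery restarts on a burst boundary.
+	 */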
+	while (cmd->write_data_done > write_data_done) {
+		if ((write_data_done + SESS_OPS_C(conn)->MaxBurstLength) <=
+		     cmd->write_data_done)
+			write_data_done += SESS_OPS_C(conn)->MaxBurstLength;
+		else
+			break;
+	}
+
+	cmd->write_data_done = write_data_done;
+}
+
+/*	iscsi_attach_active_connection_recovery_entry():
+ *
+ *
+ */
+static int iscsi_attach_active_connection_recovery_entry(
+	struct iscsi_session *sess,
+	struct iscsi_conn_recovery *cr)
+{
+	spin_lock(&sess->cr_a_lock);
+	list_add_tail(&cr->cr_list, &sess->cr_active_list);
+	spin_unlock(&sess->cr_a_lock);
+
+	return 0;
+}
+
+/*	iscsi_attach_inactive_connection_recovery_entry():
+ *
+ *
+ */
+static int iscsi_attach_inactive_connection_recovery_entry(
+	struct iscsi_session *sess,
+	struct iscsi_conn_recovery *cr)
+{
+	spin_lock(&sess->cr_i_lock);
+	list_add_tail(&cr->cr_list, &sess->cr_inactive_list);
+
+	sess->conn_recovery_count++;
+	TRACE(TRACE_ERL2, "Incremented connection recovery count to %u for"
+		" SID: %u\n", sess->conn_recovery_count, sess->sid);
+	spin_unlock(&sess->cr_i_lock);
+
+	return 0;
+}
+
+/*	iscsi_get_inactive_connection_recovery_entry():
+ *
+ *
+ */
+struct iscsi_conn_recovery *iscsi_get_inactive_connection_recovery_entry(
+	struct iscsi_session *sess,
+	u16 cid)
+{
+	struct iscsi_conn_recovery *cr;
+
+	spin_lock(&sess->cr_i_lock);
+	list_for_each_entry(cr, &sess->cr_inactive_list, cr_list) {
+		if (cr->cid == cid) {
+			spin_unlock(&sess->cr_i_lock);
+			return cr;
+		}
+	}
+	spin_unlock(&sess->cr_i_lock);
+
+	return NULL;
+}
+
+/*	iscsi_free_connection_recovery_entires():
+ *
+ *
+ */
+void iscsi_free_connection_recovery_entires(struct iscsi_session *sess)
+{
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_conn_recovery *cr, *cr_tmp;
+
+	spin_lock(&sess->cr_a_lock);
+	list_for_each_entry_safe(cr, cr_tmp, &sess->cr_active_list, cr_list) {
+		list_del(&cr->cr_list);
+		spin_unlock(&sess->cr_a_lock);
+
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry_safe(cmd, cmd_tmp,
+				&cr->conn_recovery_cmd_list, i_list) {
+
+			list_del(&cmd->i_list);
+			cmd->conn = NULL;
+			spin_unlock(&cr->conn_recovery_cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, sess);
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 1);
+			spin_lock(&cr->conn_recovery_cmd_lock);
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		spin_lock(&sess->cr_a_lock);
+
+		kfree(cr);
+	}
+	spin_unlock(&sess->cr_a_lock);
+
+	spin_lock(&sess->cr_i_lock);
+	list_for_each_entry_safe(cr, cr_tmp, &sess->cr_inactive_list, cr_list) {
+		list_del(&cr->cr_list);
+		spin_unlock(&sess->cr_i_lock);
+
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry_safe(cmd, cmd_tmp,
+				&cr->conn_recovery_cmd_list, i_list) {
+
+			list_del(&cmd->i_list);
+			cmd->conn = NULL;
+			spin_unlock(&cr->conn_recovery_cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, sess);
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 1);
+			spin_lock(&cr->conn_recovery_cmd_lock);
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		spin_lock(&sess->cr_i_lock);
+
+		kfree(cr);
+	}
+	spin_unlock(&sess->cr_i_lock);
+}
+
+/*	iscsi_remove_active_connection_recovery_entry():
+ *
+ *
+ */
+int iscsi_remove_active_connection_recovery_entry(
+	struct iscsi_conn_recovery *cr,
+	struct iscsi_session *sess)
+{
+	spin_lock(&sess->cr_a_lock);
+	list_del(&cr->cr_list);
+
+	sess->conn_recovery_count--;
+	TRACE(TRACE_ERL2, "Decremented connection recovery count to %u for"
+		" SID: %u\n", sess->conn_recovery_count, sess->sid);
+	spin_unlock(&sess->cr_a_lock);
+
+	kfree(cr);
+
+	return 0;
+}
+
+/*	iscsi_remove_inactive_connection_recovery_entry():
+ *
+ *
+ */
+int iscsi_remove_inactive_connection_recovery_entry(
+	struct iscsi_conn_recovery *cr,
+	struct iscsi_session *sess)
+{
+	spin_lock(&sess->cr_i_lock);
+	list_del(&cr->cr_list);
+	spin_unlock(&sess->cr_i_lock);
+
+	return 0;
+}
+
+/*	iscsi_remove_cmd_from_connection_recovery():
+ *
+ *	Called with cr->conn_recovery_cmd_lock held.
+ */
+int iscsi_remove_cmd_from_connection_recovery(
+	struct iscsi_cmd *cmd,
+	struct iscsi_session *sess)
+{
+	struct iscsi_conn_recovery *cr;
+
+	if (!cmd->cr) {
+		printk(KERN_ERR "struct iscsi_conn_recovery pointer for ITT: 0x%08x"
+			" is NULL!\n", cmd->init_task_tag);
+		BUG();
+	}
+	cr = cmd->cr;
+
+	list_del(&cmd->i_list);
+	return --cr->cmd_count;
+}
+
+/*	iscsi_discard_cr_cmds_by_expstatsn():
+ *
+ *
+ */
+void iscsi_discard_cr_cmds_by_expstatsn(
+	struct iscsi_conn_recovery *cr,
+	u32 exp_statsn)
+{
+	u32 dropped_count = 0;
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_session *sess = cr->sess;
+
+	spin_lock(&cr->conn_recovery_cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp,
+			&cr->conn_recovery_cmd_list, i_list) {
+
+		if (((cmd->deferred_i_state != ISTATE_SENT_STATUS) &&
+		     (cmd->deferred_i_state != ISTATE_REMOVE)) ||
+		     (cmd->stat_sn >= exp_statsn)) {
+			continue;
+		}
+
+		dropped_count++;
+		TRACE(TRACE_ERL2, "Dropping Acknowledged ITT: 0x%08x, StatSN:"
+			" 0x%08x, CID: %hu.\n", cmd->init_task_tag,
+				cmd->stat_sn, cr->cid);
+
+		iscsi_remove_cmd_from_connection_recovery(cmd, sess);
+
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		if (!(SE_CMD(cmd)) ||
+		    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+		    !(SE_CMD(cmd)->transport_wait_for_tasks))
+			__iscsi_release_cmd_to_pool(cmd, sess);
+		else
+			SE_CMD(cmd)->transport_wait_for_tasks(
+					SE_CMD(cmd), 1, 0);
+		spin_lock(&cr->conn_recovery_cmd_lock);
+	}
+	spin_unlock(&cr->conn_recovery_cmd_lock);
+
+	TRACE(TRACE_ERL2, "Dropped %u total acknowledged commands on"
+		" CID: %hu less than old ExpStatSN: 0x%08x\n",
+			dropped_count, cr->cid, exp_statsn);
+
+	if (!cr->cmd_count) {
+		TRACE(TRACE_ERL2, "No commands to be reassigned for failed"
+			" connection CID: %hu on SID: %u\n",
+			cr->cid, sess->sid);
+		iscsi_remove_inactive_connection_recovery_entry(cr, sess);
+		iscsi_attach_active_connection_recovery_entry(sess, cr);
+		printk(KERN_INFO "iSCSI connection recovery successful for CID:"
+			" %hu on SID: %u\n", cr->cid, sess->sid);
+		iscsi_remove_active_connection_recovery_entry(cr, sess);
+	} else {
+		iscsi_remove_inactive_connection_recovery_entry(cr, sess);
+		iscsi_attach_active_connection_recovery_entry(sess, cr);
+	}
+
+	return;
+}
+
+/*	iscsi_discard_unacknowledged_ooo_cmdsns_for_conn():
+ *
+ *
+ */
+int iscsi_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *conn)
+{
+	u32 dropped_count = 0;
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_ooo_cmdsn *ooo_cmdsn, *ooo_cmdsn_tmp;
+	struct iscsi_session *sess = SESS(conn);
+
+	spin_lock(&sess->cmdsn_lock);
+	list_for_each_entry_safe(ooo_cmdsn, ooo_cmdsn_tmp,
+			&sess->sess_ooo_cmdsn_list, ooo_list) {
+
+		if (ooo_cmdsn->cid != conn->cid)
+			continue;
+
+		dropped_count++;
+		TRACE(TRACE_ERL2, "Dropping unacknowledged CmdSN:"
+		" 0x%08x during connection recovery on CID: %hu\n",
+			ooo_cmdsn->cmdsn, conn->cid);
+		iscsi_remove_ooo_cmdsn(sess, ooo_cmdsn);
+	}
+	SESS(conn)->ooo_cmdsn_count -= dropped_count;
+	spin_unlock(&sess->cmdsn_lock);
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
+		if (!(cmd->cmd_flags & ICF_OOO_CMDSN))
+			continue;
+
+		iscsi_remove_cmd_from_conn_list(cmd, conn);
+
+		spin_unlock_bh(&conn->cmd_lock);
+		if (!(SE_CMD(cmd)) ||
+		    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+		    !(SE_CMD(cmd)->transport_wait_for_tasks))
+			__iscsi_release_cmd_to_pool(cmd, sess);
+		else
+			SE_CMD(cmd)->transport_wait_for_tasks(
+					SE_CMD(cmd), 1, 1);
+		spin_lock_bh(&conn->cmd_lock);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	TRACE(TRACE_ERL2, "Dropped %u total unacknowledged commands on CID:"
+		" %hu for ExpCmdSN: 0x%08x.\n", dropped_count, conn->cid,
+				sess->exp_cmd_sn);
+	return 0;
+}
+
+/*	iscsi_prepare_cmds_for_realligance():
+ *
+ *
+ */
+int iscsi_prepare_cmds_for_realligance(struct iscsi_conn *conn)
+{
+	u32 cmd_count = 0;
+	struct iscsi_cmd *cmd, *cmd_tmp;
+	struct iscsi_conn_recovery *cr;
+
+	/*
+	 * Allocate a struct iscsi_conn_recovery for this connection.
+	 * Each struct iscsi_cmd contains a struct iscsi_conn_recovery pointer
+	 * (struct iscsi_cmd->cr), so we need to allocate this before preparing
+	 * the connection's command list for connection recovery.
+	 */
+	cr = kzalloc(sizeof(struct iscsi_conn_recovery), GFP_KERNEL);
+	if (!(cr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_conn_recovery.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&cr->cr_list);
+	INIT_LIST_HEAD(&cr->conn_recovery_cmd_list);
+	spin_lock_init(&cr->conn_recovery_cmd_lock);
+	/*
+	 * Only perform connection recovery on ISCSI_OP_SCSI_CMD or
+	 * ISCSI_OP_NOOP_OUT opcodes.  For all other opcodes call
+	 * iscsi_remove_cmd_from_conn_list() to release the command to the
+	 * session pool and remove it from the connection's list.
+	 *
+	 * Also stop the DataOUT timer, which will be restarted after
+	 * sending the TMR response.
+	 */
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_list) {
+
+		if ((cmd->iscsi_opcode != ISCSI_OP_SCSI_CMD) &&
+		    (cmd->iscsi_opcode != ISCSI_OP_NOOP_OUT)) {
+			TRACE(TRACE_ERL2, "Not performing realligence on"
+				" Opcode: 0x%02x, ITT: 0x%08x, CmdSN: 0x%08x,"
+				" CID: %hu\n", cmd->iscsi_opcode,
+				cmd->init_task_tag, cmd->cmd_sn, conn->cid);
+
+			iscsi_remove_cmd_from_conn_list(cmd, conn);
+
+			spin_unlock_bh(&conn->cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 0);
+			spin_lock_bh(&conn->cmd_lock);
+			continue;
+		}
+
+		/*
+		 * Special case where commands greater than or equal to
+		 * the session's ExpCmdSN are attached to the connection
+		 * list but not to the out of order CmdSN list.  The one
+		 * obvious case is when a command with immediate data
+		 * attached must only check the CmdSN against ExpCmdSN
+		 * after the data is received.  The special case below
+		 * is when the connection fails before data is received,
+		 * but also may apply to other PDUs, so it has been
+		 * made generic here.
+		 */
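+		/*
+		 * For illustration: a WRITE that arrived with ImmediateData=Yes
+		 * and a CmdSN above ExpCmdSN, but whose connection failed before
+		 * its payload was received, is released back to the pool here
+		 * rather than prepared for task reassignment.
+		 */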
+		if (!(cmd->cmd_flags & ICF_OOO_CMDSN) && !cmd->immediate_cmd &&
+		     (cmd->cmd_sn >= SESS(conn)->exp_cmd_sn)) {
+			iscsi_remove_cmd_from_conn_list(cmd, conn);
+
+			spin_unlock_bh(&conn->cmd_lock);
+			if (!(SE_CMD(cmd)) ||
+			    !(SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) ||
+			    !(SE_CMD(cmd)->transport_wait_for_tasks))
+				__iscsi_release_cmd_to_pool(cmd, SESS(conn));
+			else
+				SE_CMD(cmd)->transport_wait_for_tasks(
+						SE_CMD(cmd), 1, 1);
+			spin_lock_bh(&conn->cmd_lock);
+			continue;
+		}
+
+		cmd_count++;
+		TRACE(TRACE_ERL2, "Preparing Opcode: 0x%02x, ITT: 0x%08x,"
+			" CmdSN: 0x%08x, StatSN: 0x%08x, CID: %hu for"
+			" realligence.\n", cmd->iscsi_opcode,
+			cmd->init_task_tag, cmd->cmd_sn, cmd->stat_sn,
+			conn->cid);
+
+		cmd->deferred_i_state = cmd->i_state;
+		cmd->i_state = ISTATE_IN_CONNECTION_RECOVERY;
+
+		if (cmd->data_direction == DMA_TO_DEVICE)
+			iscsi_stop_dataout_timer(cmd);
+
+		cmd->sess = SESS(conn);
+
+		iscsi_remove_cmd_from_conn_list(cmd, conn);
+		spin_unlock_bh(&conn->cmd_lock);
+
+		iscsi_free_all_datain_reqs(cmd);
+
+		if ((SE_CMD(cmd)) &&
+		    (SE_CMD(cmd)->se_cmd_flags & SCF_SE_LUN_CMD) &&
+		     SE_CMD(cmd)->transport_wait_for_tasks)
+			SE_CMD(cmd)->transport_wait_for_tasks(SE_CMD(cmd),
+					0, 0);
+		/*
+		 * Add the struct iscsi_cmd to the connection recovery cmd list
+		 */
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_add_tail(&cmd->i_list, &cr->conn_recovery_cmd_list);
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+
+		spin_lock_bh(&conn->cmd_lock);
+		cmd->cr = cr;
+		cmd->conn = NULL;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	/*
+	 * Fill in the various values in the preallocated struct iscsi_conn_recovery.
+	 */
+	cr->cid = conn->cid;
+	cr->cmd_count = cmd_count;
+	cr->maxrecvdatasegmentlength = CONN_OPS(conn)->MaxRecvDataSegmentLength;
+	cr->sess = SESS(conn);
+
+	iscsi_attach_inactive_connection_recovery_entry(SESS(conn), cr);
+
+	return 0;
+}
+
+/*	iscsi_connection_recovery_transport_reset():
+ *
+ *
+ */
+int iscsi_connection_recovery_transport_reset(struct iscsi_conn *conn)
+{
+	atomic_set(&conn->connection_recovery, 1);
+
+	if (iscsi_close_connection(conn) < 0)
+		return -1;
+
+	return 0;
+}
+
diff --git a/drivers/target/iscsi/iscsi_target_erl2.h b/drivers/target/iscsi/iscsi_target_erl2.h
new file mode 100644
index 0000000..0da7d3c
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_erl2.h
@@ -0,0 +1,21 @@
+#ifndef ISCSI_TARGET_ERL2_H
+#define ISCSI_TARGET_ERL2_H
+
+extern void iscsi_create_conn_recovery_datain_values(struct iscsi_cmd *, __u32);
+extern void iscsi_create_conn_recovery_dataout_values(struct iscsi_cmd *);
+extern struct iscsi_conn_recovery *iscsi_get_inactive_connection_recovery_entry(
+			struct iscsi_session *, __u16);
+extern void iscsi_free_connection_recovery_entires(struct iscsi_session *);
+extern int iscsi_remove_active_connection_recovery_entry(
+			struct iscsi_conn_recovery *, struct iscsi_session *);
+extern int iscsi_remove_cmd_from_connection_recovery(struct iscsi_cmd *,
+			struct iscsi_session *);
+extern void iscsi_discard_cr_cmds_by_expstatsn(struct iscsi_conn_recovery *, __u32);
+extern int iscsi_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *);
+extern int iscsi_prepare_cmds_for_realligance(struct iscsi_conn *);
+extern int iscsi_connection_recovery_transport_reset(struct iscsi_conn *);
+
+extern int iscsi_close_connection(struct iscsi_conn *);
+
+#endif /*** ISCSI_TARGET_ERL2_H ***/
+
-- 
1.7.4.1

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 10/12] iscsi-target: Add support for task management operations
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:33   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:33 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger


From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds support for iSCSI task management operations called
directly from iscsi_target.c TMR request/response PDU logic, and
interfaces with struct se_lun -> struct se_device for association
of TMRs to TCM backend devices.

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target_tmr.c |  908 +++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_tmr.h |   17 +
 2 files changed, 925 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_target_tmr.c
 create mode 100644 drivers/target/iscsi/iscsi_target_tmr.h

diff --git a/drivers/target/iscsi/iscsi_target_tmr.c b/drivers/target/iscsi/iscsi_target_tmr.c
new file mode 100644
index 0000000..a60218d
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_tmr.c
@@ -0,0 +1,908 @@
+/*******************************************************************************
+ * This file contains the iSCSI Target specific Task Management functions.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <asm/unaligned.h>
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_device.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_tmr.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+
+/*	iscsi_tmr_abort_task():
+ *
+ *	Called from iscsi_handle_task_mgt_cmd().
+ */
+u8 iscsi_tmr_abort_task(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_cmd *ref_cmd;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_tmr_req *tmr_req = cmd->tmr_req;
+	struct se_tmr_req *se_tmr = SE_CMD(cmd)->se_tmr_req;
+	struct iscsi_tm *hdr = (struct iscsi_tm *) buf;
+
+	ref_cmd = iscsi_find_cmd_from_itt(conn, hdr->rtt);
+	if (!(ref_cmd)) {
+		printk(KERN_ERR "Unable to locate RefTaskTag: 0x%08x on CID:"
+			" %hu.\n", hdr->rtt, conn->cid);
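+		/*
+		 * Per RFC 3720 ABORT_TASK handling: if the referenced task was
+		 * never received but its RefCmdSN falls within the valid CmdSN
+		 * window, respond with function complete; otherwise report
+		 * that the task does not exist.
+		 */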
+		return ((hdr->refcmdsn >= SESS(conn)->exp_cmd_sn) &&
+			(hdr->refcmdsn <= SESS(conn)->max_cmd_sn)) ?
+			ISCSI_TMF_RSP_COMPLETE : ISCSI_TMF_RSP_NO_TASK;
+	}
+	if (ref_cmd->cmd_sn != hdr->refcmdsn) {
+		printk(KERN_ERR "RefCmdSN 0x%08x does not equal"
+			" task's CmdSN 0x%08x. Rejecting ABORT_TASK.\n",
+			hdr->refcmdsn, ref_cmd->cmd_sn);
+		return ISCSI_TMF_RSP_REJECTED;
+	}
+
+	se_tmr->ref_task_tag		= hdr->rtt;
+	se_tmr->ref_cmd			= &ref_cmd->se_cmd;
+	se_tmr->ref_task_lun		= get_unaligned_le64(&hdr->lun[0]);
+	tmr_req->ref_cmd_sn		= hdr->refcmdsn;
+	tmr_req->exp_data_sn		= hdr->exp_datasn;
+
+	return ISCSI_TMF_RSP_COMPLETE;
+}
+
+/*	iscsi_tmr_task_warm_reset():
+ *
+ *	Called from iscsi_handle_task_mgt_cmd().
+ */
+int iscsi_tmr_task_warm_reset(
+	struct iscsi_conn *conn,
+	struct iscsi_tmr_req *tmr_req,
+	unsigned char *buf)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+#if 0
+	struct iscsi_init_task_mgt_cmnd *hdr =
+		(struct iscsi_init_task_mgt_cmnd *) buf;
+#endif
+	if (!(na->tmr_warm_reset)) {
+		printk(KERN_ERR "TMR Opcode TARGET_WARM_RESET authorization"
+			" failed for Initiator Node: %s\n",
+			SESS_NODE_ACL(sess)->initiatorname);
+		 return -1;
+	}
+	/*
+	 * Do the real work in transport_generic_do_tmr().
+	 */
+	return 0;
+}
+
+/*	iscsi_tmr_task_cold_reset():
+ *
+ *	Called from iscsi_handle_task_mgt_cmd().
+ */
+int iscsi_tmr_task_cold_reset(
+	struct iscsi_conn *conn,
+	struct iscsi_tmr_req *tmr_req,
+	unsigned char *buf)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	if (!(na->tmr_cold_reset)) {
+		printk(KERN_ERR "TMR Opcode TARGET_COLD_RESET authorization"
+			" failed for Initiator Node: %s\n",
+			SESS_NODE_ACL(sess)->initiatorname);
+		return -1;
+	}
+	/*
+	 * Do the real work in transport_generic_do_tmr().
+	 */
+	return 0;
+}
+
+/*	iscsi_tmr_task_reassign():
+ *
+ *	Called from iscsi_handle_task_mgt_cmd().
+ */
+u8 iscsi_tmr_task_reassign(
+	struct iscsi_cmd *cmd,
+	unsigned char *buf)
+{
+	struct iscsi_cmd *ref_cmd = NULL;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_conn_recovery *cr = NULL;
+	struct iscsi_tmr_req *tmr_req = cmd->tmr_req;
+	struct se_tmr_req *se_tmr = SE_CMD(cmd)->se_tmr_req;
+	struct iscsi_tm *hdr = (struct iscsi_tm *) buf;
+	int ret;
+
+	TRACE(TRACE_ERL2, "Got TASK_REASSIGN TMR ITT: 0x%08x,"
+		" RefTaskTag: 0x%08x, ExpDataSN: 0x%08x, CID: %hu\n",
+		hdr->itt, hdr->rtt, hdr->exp_datasn, conn->cid);
+
+	if (SESS_OPS_C(conn)->ErrorRecoveryLevel != 2) {
+		printk(KERN_ERR "TMR TASK_REASSIGN not supported in ERL<2,"
+				" ignoring request.\n");
+		return ISCSI_TMF_RSP_NOT_SUPPORTED;
+	}
+
+	ret = iscsi_find_cmd_for_recovery(SESS(conn), &ref_cmd, &cr, hdr->rtt);
+	if (ret == -2) {
+		printk(KERN_ERR "Command ITT: 0x%08x is still allegiant to CID:"
+			" %hu\n", ref_cmd->init_task_tag, cr->cid);
+		return ISCSI_TMF_RSP_TASK_ALLEGIANT;
+	} else if (ret == -1) {
+		printk(KERN_ERR "Unable to locate RefTaskTag: 0x%08x in"
+			" connection recovery command list.\n", hdr->rtt);
+		return ISCSI_TMF_RSP_NO_TASK;
+	}
+	/*
+	 * Temporary check to prevent connection recovery for
+	 * connections with a differing MaxRecvDataSegmentLength.
+	 */
+	if (cr->maxrecvdatasegmentlength !=
+	    CONN_OPS(conn)->MaxRecvDataSegmentLength) {
+		printk(KERN_ERR "Unable to perform connection recovery for"
+			" differing MaxRecvDataSegmentLength, rejecting"
+			" TMR TASK_REASSIGN.\n");
+		return ISCSI_TMF_RSP_REJECTED;
+	}
+
+	se_tmr->ref_task_tag		= hdr->rtt;
+	se_tmr->ref_cmd			= &ref_cmd->se_cmd;
+	se_tmr->ref_task_lun		= get_unaligned_le64(&hdr->lun[0]);
+	tmr_req->ref_cmd_sn		= hdr->refcmdsn;
+	tmr_req->exp_data_sn		= hdr->exp_datasn;
+	tmr_req->conn_recovery		= cr;
+	tmr_req->task_reassign		= 1;
+	/*
+	 * Command can now be reassigned to a new connection.
+	 * The task management response must be sent before the
+	 * reassignment actually happens.  See iscsi_tmr_post_handler().
+	 */
+	return ISCSI_TMF_RSP_COMPLETE;
+}
+
+/*      iscsi_task_reassign_remove_cmd():
+ *
+ *
+ */
+static void iscsi_task_reassign_remove_cmd(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn_recovery *cr,
+	struct iscsi_session *sess)
+{
+	int ret;
+
+	spin_lock(&cr->conn_recovery_cmd_lock);
+	ret = iscsi_remove_cmd_from_connection_recovery(cmd, sess);
+	spin_unlock(&cr->conn_recovery_cmd_lock);
+	if (!ret) {
+		printk(KERN_INFO "iSCSI connection recovery successful for CID:"
+			" %hu on SID: %u\n", cr->cid, sess->sid);
+		iscsi_remove_active_connection_recovery_entry(cr, sess);
+	}
+
+	return;
+}
+
+/*	iscsi_task_reassign_complete_nop_out():
+ *
+ *
+ */
+static int iscsi_task_reassign_complete_nop_out(
+	struct iscsi_tmr_req *tmr_req,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = tmr_req->se_tmr_req;
+	struct se_cmd *se_cmd = se_tmr->ref_cmd;
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+	struct iscsi_conn_recovery *cr;
+
+	if (!cmd->cr) {
+		printk(KERN_ERR "struct iscsi_conn_recovery pointer for ITT: 0x%08x"
+			" is NULL!\n", cmd->init_task_tag);
+		return -1;
+	}
+	cr = cmd->cr;
+
+	/*
+	 * Reset the StatSN so a new one for this command's new connection
+	 * will be assigned.
+	 * Reset the ExpStatSN as well so we may receive Status SNACKs.
+	 */
+	cmd->stat_sn = cmd->exp_stat_sn = 0;
+
+	iscsi_task_reassign_remove_cmd(cmd, cr, SESS(conn));
+
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	cmd->i_state = ISTATE_SEND_NOPIN;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+	return 0;
+}
+
+/*	iscsi_task_reassign_complete_write():
+ *
+ *
+ */
+static int iscsi_task_reassign_complete_write(
+	struct iscsi_cmd *cmd,
+	struct iscsi_tmr_req *tmr_req)
+{
+	int no_build_r2ts = 0;
+	u32 length = 0, offset = 0;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct se_cmd *se_cmd = SE_CMD(cmd);
+	/*
+	 * The Initiator must not send an R2T SNACK with a BegRun less than
+	 * the TMR TASK_REASSIGN's ExpDataSN.
+	 */
+	if (!tmr_req->exp_data_sn) {
+		cmd->cmd_flags &= ~ICF_GOT_DATACK_SNACK;
+		cmd->acked_data_sn = 0;
+	} else {
+		cmd->cmd_flags |= ICF_GOT_DATACK_SNACK;
+		cmd->acked_data_sn = (tmr_req->exp_data_sn - 1);
+	}
+
+	/*
+	 * The TMR TASK_REASSIGN's ExpDataSN contains the next R2TSN the
+	 * Initiator is expecting.  The Target controls all WRITE operations
+	 * so if we have received all DataOUT we can safely ignore the Initiator.
+	 */
+	if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT) {
+		if (!atomic_read(&cmd->transport_sent)) {
+			TRACE(TRACE_ERL2, "WRITE ITT: 0x%08x: t_state: %d"
+				" never sent to transport\n",
+				cmd->init_task_tag, cmd->se_cmd.t_state);
+			return transport_generic_handle_data(se_cmd);
+		}
+
+		cmd->i_state = ISTATE_SEND_STATUS;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		return 0;
+	}
+
+	/*
+	 * Special case to deal with DataSequenceInOrder=No and Non-Immediate
+	 * Unsolicited DataOut.
+	 */
+	if (cmd->unsolicited_data) {
+		cmd->unsolicited_data = 0;
+
+		offset = cmd->next_burst_len = cmd->write_data_done;
+
+		if ((SESS_OPS_C(conn)->FirstBurstLength - offset) >=
+		     cmd->data_length) {
+			no_build_r2ts = 1;
+			length = (cmd->data_length - offset);
+		} else
+			length = (SESS_OPS_C(conn)->FirstBurstLength - offset);
+
+		spin_lock_bh(&cmd->r2t_lock);
+		if (iscsi_add_r2t_to_list(cmd, offset, length, 0, 0) < 0) {
+			spin_unlock_bh(&cmd->r2t_lock);
+			return -1;
+		}
+		cmd->outstanding_r2ts++;
+		spin_unlock_bh(&cmd->r2t_lock);
+
+		if (no_build_r2ts)
+			return 0;
+	}
+
+	/*
+	 * iscsi_build_r2ts_for_cmd() can handle the rest from here.
+	 */
+	return iscsi_build_r2ts_for_cmd(cmd, conn, 2);
+}
+
+/*	iscsi_task_reassign_complete_read():
+ *
+ *
+ */
+static int iscsi_task_reassign_complete_read(
+	struct iscsi_cmd *cmd,
+	struct iscsi_tmr_req *tmr_req)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_datain_req *dr;
+	struct se_cmd *se_cmd = SE_CMD(cmd);
+
+	/*
+	 * The Initiator must not send a Data SNACK with a BegRun less than
+	 * the TMR TASK_REASSIGN's ExpDataSN.
+	 */
+	if (!tmr_req->exp_data_sn) {
+		cmd->cmd_flags &= ~ICF_GOT_DATACK_SNACK;
+		cmd->acked_data_sn = 0;
+	} else {
+		cmd->cmd_flags |= ICF_GOT_DATACK_SNACK;
+		cmd->acked_data_sn = (tmr_req->exp_data_sn - 1);
+	}
+
+	if (!atomic_read(&cmd->transport_sent)) {
+		printk(KERN_INFO "READ ITT: 0x%08x: t_state: %d never sent to"
+			" transport\n", cmd->init_task_tag,
+			SE_CMD(cmd)->t_state);
+		transport_generic_handle_cdb(se_cmd);
+		return 0;
+	}
+
+	if (!(atomic_read(&T_TASK(se_cmd)->t_transport_complete))) {
+		printk(KERN_ERR "READ ITT: 0x%08x: t_state: %d, never returned"
+			" from transport\n", cmd->init_task_tag,
+			SE_CMD(cmd)->t_state);
+		return -1;
+	}
+
+	dr = iscsi_allocate_datain_req();
+	if (!(dr))
+		return -1;
+
+	/*
+	 * The TMR TASK_REASSIGN's ExpDataSN contains the next DataSN the
+	 * Initiator is expecting.
+	 */
+	dr->data_sn = dr->begrun = tmr_req->exp_data_sn;
+	dr->runlength = 0;
+	dr->generate_recovery_values = 1;
+	dr->recovery = DATAIN_CONNECTION_RECOVERY;
+
+	iscsi_attach_datain_req(cmd, dr);
+
+	cmd->i_state = ISTATE_SEND_DATAIN;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+	return 0;
+}
+
+/*	iscsi_task_reassign_complete_none():
+ *
+ *
+ */
+static int iscsi_task_reassign_complete_none(
+	struct iscsi_cmd *cmd,
+	struct iscsi_tmr_req *tmr_req)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
+	cmd->i_state = ISTATE_SEND_STATUS;
+	iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+	return 0;
+}
+
+/*	iscsi_task_reassign_complete_scsi_cmnd():
+ *
+ *
+ */
+static int iscsi_task_reassign_complete_scsi_cmnd(
+	struct iscsi_tmr_req *tmr_req,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = tmr_req->se_tmr_req;
+	struct se_cmd *se_cmd = se_tmr->ref_cmd;
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+	struct iscsi_conn_recovery *cr;
+
+	if (!cmd->cr) {
+		printk(KERN_ERR "struct iscsi_conn_recovery pointer for ITT: 0x%08x"
+			" is NULL!\n", cmd->init_task_tag);
+		return -1;
+	}
+	cr = cmd->cr;
+
+	/*
+	 * Reset the StatSN so a new one for this command's new connection
+	 * will be assigned.
+	 * Reset the ExpStatSN as well so we may receive Status SNACKs.
+	 */
+	cmd->stat_sn = cmd->exp_stat_sn = 0;
+
+	iscsi_task_reassign_remove_cmd(cmd, cr, SESS(conn));
+	iscsi_attach_cmd_to_queue(conn, cmd);
+
+	if (se_cmd->se_cmd_flags & SCF_SENT_CHECK_CONDITION) {
+		cmd->i_state = ISTATE_SEND_STATUS;
+		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
+		return 0;
+	}
+
+	switch (cmd->data_direction) {
+	case DMA_TO_DEVICE:
+		return iscsi_task_reassign_complete_write(cmd, tmr_req);
+	case DMA_FROM_DEVICE:
+		return iscsi_task_reassign_complete_read(cmd, tmr_req);
+	case DMA_NONE:
+		return iscsi_task_reassign_complete_none(cmd, tmr_req);
+	default:
+		printk(KERN_ERR "Unknown cmd->data_direction: 0x%02x\n",
+				cmd->data_direction);
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_task_reassign_complete():
+ *
+ *	Called from iscsi_tmr_post_handler().
+ */
+static int iscsi_task_reassign_complete(
+	struct iscsi_tmr_req *tmr_req,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = tmr_req->se_tmr_req;
+	struct se_cmd *se_cmd;
+	struct iscsi_cmd *cmd;
+	int ret = 0;
+
+	if (!se_tmr->ref_cmd) {
+		printk(KERN_ERR "TMR Request is missing a RefCmd struct iscsi_cmd.\n");
+		return -1;
+	}
+	se_cmd = se_tmr->ref_cmd;
+	cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	cmd->conn = conn;
+
+	switch (cmd->iscsi_opcode) {
+	case ISCSI_OP_NOOP_OUT:
+		ret = iscsi_task_reassign_complete_nop_out(tmr_req, conn);
+		break;
+	case ISCSI_OP_SCSI_CMD:
+		ret = iscsi_task_reassign_complete_scsi_cmnd(tmr_req, conn);
+		break;
+	default:
+		printk(KERN_ERR "Illegal iSCSI Opcode 0x%02x during"
+			" command reallegiance\n", cmd->iscsi_opcode);
+		return -1;
+	}
+
+	if (ret != 0)
+		return ret;
+
+	TRACE(TRACE_ERL2, "Completed connection reallegiance for Opcode: 0x%02x,"
+		" ITT: 0x%08x to CID: %hu.\n", cmd->iscsi_opcode,
+			cmd->init_task_tag, conn->cid);
+
+	return 0;
+}
+
+/*	iscsi_tmr_post_handler():
+ *
+ *	Handles special after-the-fact actions related to TMRs.
+ *	Right now the only one that it's really needed for is
+ *	connection recovery related TASK_REASSIGN.
+ */
+int iscsi_tmr_post_handler(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct iscsi_tmr_req *tmr_req = cmd->tmr_req;
+	struct se_tmr_req *se_tmr = SE_CMD(cmd)->se_tmr_req;
+
+	if (tmr_req->task_reassign &&
+	   (se_tmr->response == ISCSI_TMF_RSP_COMPLETE))
+		return iscsi_task_reassign_complete(tmr_req, conn);
+
+	return 0;
+}
+
+/*	iscsi_task_reassign_prepare_read():
+ *
+ *	Nothing to do here, but leave it for good measure. :-)
+ */
+int iscsi_task_reassign_prepare_read(
+	struct iscsi_tmr_req *tmr_req,
+	struct iscsi_conn *conn)
+{
+	return 0;
+}
+
+/*	iscsi_task_reassign_prepare_unsolicited_dataout():
+ *
+ *
+ */
+static void iscsi_task_reassign_prepare_unsolicited_dataout(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	int i, j;
+	struct iscsi_pdu *pdu = NULL;
+	struct iscsi_seq *seq = NULL;
+
+	if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+		cmd->data_sn = 0;
+
+		if (cmd->immediate_data)
+			cmd->r2t_offset += (cmd->first_burst_len -
+				cmd->seq_start_offset);
+
+		if (SESS_OPS_C(conn)->DataPDUInOrder) {
+			cmd->write_data_done -= (cmd->immediate_data) ?
+						(cmd->first_burst_len -
+						 cmd->seq_start_offset) :
+						 cmd->first_burst_len;
+			cmd->first_burst_len = 0;
+			return;
+		}
+
+		for (i = 0; i < cmd->pdu_count; i++) {
+			pdu = &cmd->pdu_list[i];
+
+			if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+				continue;
+
+			if ((pdu->offset >= cmd->seq_start_offset) &&
+			   ((pdu->offset + pdu->length) <=
+			     cmd->seq_end_offset)) {
+				cmd->first_burst_len -= pdu->length;
+				cmd->write_data_done -= pdu->length;
+				pdu->status = ISCSI_PDU_NOT_RECEIVED;
+			}
+		}
+	} else {
+		for (i = 0; i < cmd->seq_count; i++) {
+			seq = &cmd->seq_list[i];
+
+			if (seq->type != SEQTYPE_UNSOLICITED)
+				continue;
+
+			cmd->write_data_done -=
+					(seq->offset - seq->orig_offset);
+			cmd->first_burst_len = 0;
+			seq->data_sn = 0;
+			seq->offset = seq->orig_offset;
+			seq->next_burst_len = 0;
+			seq->status = DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY;
+
+			if (SESS_OPS_C(conn)->DataPDUInOrder)
+				continue;
+
+			for (j = 0; j < seq->pdu_count; j++) {
+				pdu = &cmd->pdu_list[j+seq->pdu_start];
+
+				if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+					continue;
+
+				pdu->status = ISCSI_PDU_NOT_RECEIVED;
+			}
+		}
+	}
+
+	return;
+}
+
+/*	iscsi_task_reassign_prepare_write():
+ *
+ *
+ */
+int iscsi_task_reassign_prepare_write(
+	struct iscsi_tmr_req *tmr_req,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = tmr_req->se_tmr_req;
+	struct se_cmd *se_cmd = se_tmr->ref_cmd;
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+	struct iscsi_pdu *pdu = NULL;
+	struct iscsi_r2t *r2t = NULL, *r2t_tmp;
+	int first_incomplete_r2t = 1, i = 0;
+
+	/*
+	 * The command was in the process of receiving Unsolicited DataOUT when
+	 * the connection failed.
+	 */
+	if (cmd->unsolicited_data)
+		iscsi_task_reassign_prepare_unsolicited_dataout(cmd, conn);
+
+	/*
+	 * The Initiator is requesting R2Ts starting from zero, so skip
+	 * checking acknowledged R2Ts and start checking struct iscsi_r2t
+	 * entries greater than zero.
+	 */
+	if (!tmr_req->exp_data_sn)
+		goto drop_unacknowledged_r2ts;
+
+	/*
+	 * We now check that the PDUs in DataOUT sequences below
+	 * the TMR TASK_REASSIGN ExpDataSN (R2TSN the Initiator is
+	 * expecting next) have all the DataOUT they require to complete
+	 * the DataOUT sequence.  First scan from R2TSN 0 to TMR
+	 * TASK_REASSIGN ExpDataSN-1.
+	 *
+	 * If we have not received all DataOUT in question,  we must
+	 * make sure to make the appropriate changes to values in
+	 * struct iscsi_cmd (and elsewhere depending on session parameters)
+	 * so iscsi_build_r2ts_for_cmd() in iscsi_task_reassign_complete_write()
+	 * will resend a new R2T for the DataOUT sequences in question.
+	 */
+	spin_lock_bh(&cmd->r2t_lock);
+	if (list_empty(&cmd->cmd_r2t_list)) {
+		spin_unlock_bh(&cmd->r2t_lock);
+		return -1;
+	}
+
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+
+		if (r2t->r2t_sn >= tmr_req->exp_data_sn)
+			continue;
+		/*
+		 * Safely ignore Recovery R2Ts and R2Ts that have completed
+		 * DataOUT sequences.
+		 */
+		if (r2t->seq_complete)
+			continue;
+
+		if (r2t->recovery_r2t)
+			continue;
+
+		/*
+		 *                 DataSequenceInOrder=Yes:
+		 *
+		 * Taking into account the iSCSI implementation requirement of
+		 * MaxOutstandingR2T=1 while ErrorRecoveryLevel>0 and
+		 * DataSequenceInOrder=Yes, we must take into consideration
+		 * the following:
+		 *
+		 *                  DataSequenceInOrder=No:
+		 *
+		 * Taking into account that the Initiator controls the (possibly
+		 * random) PDU Order in (possibly random) Sequence Order of
+		 * DataOUT the target requests with R2Ts,  we must take into
+		 * consideration the following:
+		 *
+		 *      DataPDUInOrder=Yes for DataSequenceInOrder=[Yes,No]:
+		 *
+		 * While processing non-complete R2T DataOUT sequence requests
+		 * the Target will re-request only the total sequence length
+		 * minus current received offset.  This is because we must
+		 * assume the initiator will continue sending DataOUT from the
+		 * last PDU before the connection failed.
+		 *
+		 *      DataPDUInOrder=No for DataSequenceInOrder=[Yes,No]:
+		 *
+		 * While processing non-complete R2T DataOUT sequence requests
+		 * the Target will re-request the entire DataOUT sequence if
+		 * any single PDU is missing from the sequence.  This is because
+		 * we have no logical method to determine the next PDU offset,
+		 * and we must assume the Initiator will be sending any random
+		 * PDU offset in the current sequence after TASK_REASSIGN
+		 * has completed.
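+		 *
+		 * For example (illustrative numbers): with DataPDUInOrder=Yes
+		 * and an incomplete R2T covering 16k of which the first 4k was
+		 * received, only the remaining 12k is re-requested; with
+		 * DataPDUInOrder=No, the entire 16k sequence is re-requested,
+		 * since the next PDU offset cannot be predicted.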
+		 */
+		if (SESS_OPS_C(conn)->DataSequenceInOrder) {
+			if (!first_incomplete_r2t) {
+				cmd->r2t_offset -= r2t->xfer_len;
+				goto next;
+			}
+
+			if (SESS_OPS_C(conn)->DataPDUInOrder) {
+				cmd->data_sn = 0;
+				cmd->r2t_offset -= (r2t->xfer_len -
+					cmd->next_burst_len);
+				first_incomplete_r2t = 0;
+				goto next;
+			}
+
+			cmd->data_sn = 0;
+			cmd->r2t_offset -= r2t->xfer_len;
+
+			for (i = 0; i < cmd->pdu_count; i++) {
+				pdu = &cmd->pdu_list[i];
+
+				if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+					continue;
+
+				if ((pdu->offset >= r2t->offset) &&
+				    (pdu->offset < (r2t->offset +
+						r2t->xfer_len))) {
+					cmd->next_burst_len -= pdu->length;
+					cmd->write_data_done -= pdu->length;
+					pdu->status = ISCSI_PDU_NOT_RECEIVED;
+				}
+			}
+
+			first_incomplete_r2t = 0;
+		} else {
+			struct iscsi_seq *seq;
+
+			seq = iscsi_get_seq_holder(cmd, r2t->offset,
+					r2t->xfer_len);
+			if (!(seq)) {
+				spin_unlock_bh(&cmd->r2t_lock);
+				return -1;
+			}
+
+			cmd->write_data_done -=
+					(seq->offset - seq->orig_offset);
+			seq->data_sn = 0;
+			seq->offset = seq->orig_offset;
+			seq->next_burst_len = 0;
+			seq->status = DATAOUT_SEQUENCE_WITHIN_COMMAND_RECOVERY;
+
+			cmd->seq_send_order--;
+
+			if (SESS_OPS_C(conn)->DataPDUInOrder)
+				goto next;
+
+			for (i = 0; i < seq->pdu_count; i++) {
+				pdu = &cmd->pdu_list[i+seq->pdu_start];
+
+				if (pdu->status != ISCSI_PDU_RECEIVED_OK)
+					continue;
+
+				pdu->status = ISCSI_PDU_NOT_RECEIVED;
+			}
+		}
+
+next:
+		cmd->outstanding_r2ts--;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	/*
+	 * We now drop all unacknowledged R2Ts, i.e. from the TMR TASK_REASSIGN's
+	 * ExpDataSN to the last R2T in the list.  We are also careful
+	 * to check that the Initiator is not requesting R2Ts for DataOUT
+	 * sequences it has already completed.
+	 *
+	 * Free each R2T in question and adjust values in struct iscsi_cmd
+	 * accordingly so iscsi_build_r2ts_for_cmd() does the rest of
+	 * the work after the TMR TASK_REASSIGN Response is sent.
+	 */
+drop_unacknowledged_r2ts:
+
+	cmd->cmd_flags &= ~ICF_SENT_LAST_R2T;
+	cmd->r2t_sn = tmr_req->exp_data_sn;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry_safe(r2t, r2t_tmp, &cmd->cmd_r2t_list, r2t_list) {
+		/*
+		 * Skip up to the R2T Sequence number provided by the
+		 * iSCSI TASK_REASSIGN TMR
+		 */
+		if (r2t->r2t_sn < tmr_req->exp_data_sn)
+			continue;
+
+		if (r2t->seq_complete) {
+			printk(KERN_ERR "Initiator is requesting R2Ts from"
+				" R2TSN: 0x%08x, but R2TSN: 0x%08x, Offset: %u,"
+				" Length: %u is already complete."
+				"   BAD INITIATOR ERL=2 IMPLEMENTATION!\n",
+				tmr_req->exp_data_sn, r2t->r2t_sn,
+				r2t->offset, r2t->xfer_len);
+			spin_unlock_bh(&cmd->r2t_lock);
+			return -1;
+		}
+
+		if (r2t->recovery_r2t) {
+			iscsi_free_r2t(r2t, cmd);
+			continue;
+		}
+
+		/*		   DataSequenceInOrder=Yes:
+		 *
+		 * Taking into account the iSCSI implementation requirement of
+		 * MaxOutstandingR2T=1 while ErrorRecoveryLevel>0 and
+		 * DataSequenceInOrder=Yes, it's safe to subtract the R2T's
+		 * entire transfer length from the command's R2T offset marker.
+		 *
+		 *		   DataSequenceInOrder=No:
+		 *
+		 * We subtract the difference between the struct iscsi_seq's
+		 * current offset and its original offset from cmd->write_data_done
+		 * to account for DataOUT PDUs already received.  Then reset
+		 * the current offset to the original and zero out the current
+		 * burst length,  to make sure we re-request the entire DataOUT
+		 * sequence.
+		 */
+		if (SESS_OPS_C(conn)->DataSequenceInOrder)
+			cmd->r2t_offset -= r2t->xfer_len;
+		else
+			cmd->seq_send_order--;
+
+		cmd->outstanding_r2ts--;
+		iscsi_free_r2t(r2t, cmd);
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return 0;
+}
+
+/*	iscsi_check_task_reassign_expdatasn():
+ *
+ *	Performs sanity checks on the TMR TASK_REASSIGN's ExpDataSN for
+ *	a given struct iscsi_cmd.
+ */
+int iscsi_check_task_reassign_expdatasn(
+	struct iscsi_tmr_req *tmr_req,
+	struct iscsi_conn *conn)
+{
+	struct se_tmr_req *se_tmr = tmr_req->se_tmr_req;
+	struct se_cmd *se_cmd = se_tmr->ref_cmd;
+	struct iscsi_cmd *ref_cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	if (ref_cmd->iscsi_opcode != ISCSI_OP_SCSI_CMD)
+		return 0;
+
+	if (se_cmd->se_cmd_flags & SCF_SENT_CHECK_CONDITION)
+		return 0;
+
+	if (ref_cmd->data_direction == DMA_NONE)
+		return 0;
+
+	/*
+	 * For READs the TMR TASK_REASSIGN's ExpDataSN contains the next DataSN
+	 * of DataIN the Initiator is expecting.
+	 *
+	 * Also check that the Initiator is not re-requesting DataIN that has
+	 * already been acknowledged with a DataAck SNACK.
+	 */
+	if (ref_cmd->data_direction == DMA_FROM_DEVICE) {
+		if (tmr_req->exp_data_sn > ref_cmd->data_sn) {
+			printk(KERN_ERR "Received ExpDataSN: 0x%08x for READ"
+				" in TMR TASK_REASSIGN greater than command's"
+				" DataSN: 0x%08x.\n", tmr_req->exp_data_sn,
+				ref_cmd->data_sn);
+			return -1;
+		}
+		if ((ref_cmd->cmd_flags & ICF_GOT_DATACK_SNACK) &&
+		    (tmr_req->exp_data_sn <= ref_cmd->acked_data_sn)) {
+			printk(KERN_ERR "Received ExpDataSN: 0x%08x for READ"
+				" in TMR TASK_REASSIGN for previously"
+				" acknowledged DataIN: 0x%08x,"
+				" protocol error\n", tmr_req->exp_data_sn,
+				ref_cmd->acked_data_sn);
+			return -1;
+		}
+		return iscsi_task_reassign_prepare_read(tmr_req, conn);
+	}
+
+	/*
+	 * For WRITEs the TMR TASK_REASSIGN's ExpDataSN contains the next R2TSN
+	 * for R2Ts the Initiator is expecting.
+	 *
+	 * Do the magic in iscsi_task_reassign_prepare_write().
+	 */
+	if (ref_cmd->data_direction == DMA_TO_DEVICE) {
+		if (tmr_req->exp_data_sn > ref_cmd->r2t_sn) {
+			printk(KERN_ERR "Received ExpDataSN: 0x%08x for WRITE"
+				" in TMR TASK_REASSIGN greater than command's"
+				" R2TSN: 0x%08x.\n", tmr_req->exp_data_sn,
+					ref_cmd->r2t_sn);
+			return -1;
+		}
+		return iscsi_task_reassign_prepare_write(tmr_req, conn);
+	}
+
+	printk(KERN_ERR "Unknown iSCSI data_direction: 0x%02x\n",
+			ref_cmd->data_direction);
+
+	return -1;
+}
diff --git a/drivers/target/iscsi/iscsi_target_tmr.h b/drivers/target/iscsi/iscsi_target_tmr.h
new file mode 100644
index 0000000..ebb4f33
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_tmr.h
@@ -0,0 +1,17 @@
+#ifndef ISCSI_TARGET_TMR_H
+#define ISCSI_TARGET_TMR_H
+
+extern __u8 iscsi_tmr_abort_task(struct iscsi_cmd *, unsigned char *);
+extern int iscsi_tmr_task_warm_reset(struct iscsi_conn *, struct iscsi_tmr_req *,
+			unsigned char *);
+extern int iscsi_tmr_task_cold_reset(struct iscsi_conn *, struct iscsi_tmr_req *,
+			unsigned char *);
+extern __u8 iscsi_tmr_task_reassign(struct iscsi_cmd *, unsigned char *);
+extern int iscsi_tmr_post_handler(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsi_check_task_reassign_expdatasn(struct iscsi_tmr_req *,
+			struct iscsi_conn *);
+
+extern int iscsi_build_r2ts_for_cmd(struct iscsi_cmd *, struct iscsi_conn *, int);
+
+#endif /* ISCSI_TARGET_TMR_H */
+
-- 
1.7.4.1



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 11/12] iscsi-target: Add misc utility and debug logic
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
@ 2011-03-02  3:34   ` Nicholas A. Bellinger
  2011-03-02  3:33   ` Nicholas A. Bellinger
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:34 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds iscsi_target_util.[c,h], containing a number of
miscellaneous utility functions for iscsi_target_mod, including
the following:

*) wrappers to TCM logic from iscsi_target.c for struct iscsi_cmd
allocation
*) received iSCSI Command Sequence Number (CmdSN) processing
*) Code for immediate / TX queues
*) Nopin Response + Response Timeout handlers
*) Primary sock_sendmsg() and sock_recvmsg() calls into Linux/Net
*) iSCSI SendTargets

It also adds the iscsi_debug.h macros used when CONFIG_ISCSI_TARGET_DEBUG
is enabled.
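
As a quick orientation for reviewers, the usual caller pattern for the
TX queue helpers added here (the same pattern used by the TMR and ERL2
code earlier in this series) is sketched below.  example_queue_status()
is a hypothetical caller shown only for illustration and is not part of
this patch:

	/* Hypothetical caller: queue a command's status response. */
	static void example_queue_status(struct iscsi_conn *conn,
					 struct iscsi_cmd *cmd)
	{
		iscsi_attach_cmd_to_queue(conn, cmd);
		cmd->i_state = ISTATE_SEND_STATUS;
		iscsi_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
	}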

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_debug.h       |  113 ++
 drivers/target/iscsi/iscsi_target_util.c | 2852 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_util.h |  128 ++
 3 files changed, 3093 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_debug.h
 create mode 100644 drivers/target/iscsi/iscsi_target_util.c
 create mode 100644 drivers/target/iscsi/iscsi_target_util.h

diff --git a/drivers/target/iscsi/iscsi_debug.h b/drivers/target/iscsi/iscsi_debug.h
new file mode 100644
index 0000000..cf5f57f
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_debug.h
@@ -0,0 +1,113 @@
+#ifndef ISCSI_DEBUG_H
+#define ISCSI_DEBUG_H
+
+/*
+ * Debugging Support
+ */
+
+#define TRACE_DEBUG	0x00000001	/* Verbose debugging */
+#define TRACE_SCSI	0x00000002	/* Stuff related to SCSI Mid-layer */
+#define TRACE_ISCSI	0x00000004	/* Stuff related to iSCSI */
+#define TRACE_NET	0x00000008	/* Stuff related to network code */
+#define TRACE_BUFF	0x00000010	/* For dumping raw data */
+#define TRACE_FILE	0x00000020	/* Used for __FILE__ */
+#define TRACE_LINE	0x00000040	/* Used for __LINE__ */
+#define TRACE_FUNCTION	0x00000080	/* Used for __func__ */
+#define TRACE_SEM	0x00000100	/* Stuff related to semaphores */
+#define TRACE_ENTER_LEAVE 0x00000200	/* For entering/leaving functions */
+#define TRACE_DIGEST	0x00000400	/* For Header/Data Digests */
+#define TRACE_PARAM	0x00000800	/* For parameters in parameters.c */
+#define TRACE_LOGIN	0x00001000	/* For login related code */
+#define TRACE_STATE	0x00002000	/* For conn/sess/cleanup states */
+#define TRACE_ERL0	0x00004000	/* For ErrorRecoveryLevel=0 */
+#define TRACE_ERL1	0x00008000	/* For ErrorRecoveryLevel=1 */
+#define TRACE_ERL2	0x00010000	/* For ErrorRecoveryLevel=2 */
+#define TRACE_TIMER	0x00020000	/* For various ERL timers */
+#define TRACE_R2T	0x00040000	/* For R2T callers */
+#define TRACE_SPINDLE	0x00080000	/* For Spindle callers */
+#define TRACE_SSLR	0x00100000	/* For SyncNSteering RX */
+#define TRACE_SSLT	0x00200000	/* For SyncNSteering TX */
+#define TRACE_CHANNEL	0x00400000	/* For SCSI Channels */
+#define TRACE_CMDSN	0x00800000	/* For Out of Order CmdSN execution */
+#define TRACE_NODEATTRIB 0x01000000	/* For Initiator Nodes */
+
+#define TRACE_VANITY		0x80000000	/* For all Vanity Noise */
+#define TRACE_ALL		0xffffffff	/* Turn on all flags */
+#define TRACE_ENDING		0x00000000	/* End-of-list terminator */
+
+#ifdef CONFIG_ISCSI_TARGET_DEBUG
+/*
+ * TRACE_VANITY, is always last!
+ */
+static unsigned int iscsi_trace =
+/*		TRACE_DEBUG | */
+/*		TRACE_SCSI | */
+/*		TRACE_ISCSI | */
+/*		TRACE_NET | */
+/*		TRACE_BUFF | */
+/*		TRACE_FILE | */
+/*		TRACE_LINE | */
+/*		TRACE_FUNCTION | */
+/*		TRACE_SEM | */
+/*		TRACE_ENTER_LEAVE | */
+/*		TRACE_DIGEST | */
+/*		TRACE_PARAM | */
+/*		TRACE_LOGIN | */
+/*		TRACE_STATE | */
+		TRACE_ERL0 |
+		TRACE_ERL1 |
+		TRACE_ERL2 |
+/*		TRACE_TIMER | */
+/*		TRACE_R2T | */
+/*		TRACE_SPINDLE | */
+/*		TRACE_SSLR | */
+/*		TRACE_SSLT | */
+/*		TRACE_CHANNEL | */
+/*		TRACE_CMDSN | */
+/*		TRACE_NODEATTRIB | */
+		TRACE_VANITY |
+		TRACE_ENDING;
+
+#define TRACE(trace, args...)					\
+{								\
+static char iscsi_trace_buff[256];				\
+								\
+if (iscsi_trace & trace) {					\
+	sprintf(iscsi_trace_buff, args);			\
+	if (iscsi_trace & TRACE_FUNCTION) {			\
+		printk(KERN_INFO "%s:%d: %s",  __func__, __LINE__, \
+			iscsi_trace_buff);			\
+	} else if (iscsi_trace&TRACE_FILE) {			\
+		printk(KERN_INFO "%s::%d: %s", __FILE__, __LINE__, \
+			iscsi_trace_buff);			\
+	} else if (iscsi_trace & TRACE_LINE) {			\
+		printk(KERN_INFO "%d: %s", __LINE__, iscsi_trace_buff);	\
+	} else {						\
+		printk(KERN_INFO "%s", iscsi_trace_buff);	\
+	}							\
+}								\
+}
+
+#define PRINT_BUFF(buff, len)					\
+if (iscsi_trace & TRACE_BUFF) {					\
+	int zzz;						\
+								\
+	printk(KERN_INFO "%d: \n", __LINE__);			\
+	for (zzz = 0; zzz < len; zzz++) {			\
+		if (zzz % 16 == 0) {				\
+			if (zzz)				\
+				printk(KERN_INFO "\n");		\
+			printk(KERN_INFO "%4i: ", zzz);		\
+		}						\
+		printk(KERN_INFO "%02x ", (unsigned char) (buff)[zzz]);	\
+	}							\
+	if ((len + 1) % 16)					\
+		printk(KERN_INFO "\n");				\
+}
+
+#else /* !CONFIG_ISCSI_TARGET_DEBUG */
+#define TRACE(trace, args...)
+#define PRINT_BUFF(buff, len)
+#endif /* CONFIG_ISCSI_TARGET_DEBUG */
+
+#endif   /*** ISCSI_DEBUG_H ***/
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
new file mode 100644
index 0000000..61b9fea
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -0,0 +1,2852 @@
+/*******************************************************************************
+ * This file contains the iSCSI Target specific utility functions.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <scsi/libsas.h> /* For TASK_ATTR_* */
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/target_core_tmr.h>
+#include <target/target_core_fabric_ops.h>
+#include <target/target_core_configfs.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+
+/*	iscsi_attach_cmd_to_queue():
+ *
+ *
+ */
+inline void iscsi_attach_cmd_to_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+{
+	spin_lock_bh(&conn->cmd_lock);
+	list_add_tail(&cmd->i_list, &conn->conn_cmd_list);
+	spin_unlock_bh(&conn->cmd_lock);
+
+	atomic_inc(&conn->active_cmds);
+}
+
+/*	iscsi_remove_cmd_from_conn_list():
+ *
+ *	MUST be called with conn->cmd_lock held.
+ */
+inline void iscsi_remove_cmd_from_conn_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	list_del(&cmd->i_list);
+	atomic_dec(&conn->active_cmds);
+}
+
+
+/*	iscsi_ack_from_expstatsn():
+ *
+ *
+ */
+inline void iscsi_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn)
+{
+	struct iscsi_cmd *cmd;
+
+	conn->exp_statsn = exp_statsn;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+
+		spin_lock(&cmd->istate_lock);
+		if ((cmd->i_state == ISTATE_SENT_STATUS) &&
+		    (cmd->stat_sn < exp_statsn)) {
+			cmd->i_state = ISTATE_REMOVE;
+			spin_unlock(&cmd->istate_lock);
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			continue;
+		}
+		spin_unlock(&cmd->istate_lock);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+}
+
+/*	iscsi_remove_conn_from_list():
+ *
+ *	Called with sess->conn_lock held.
+ */
+void iscsi_remove_conn_from_list(struct iscsi_session *sess, struct iscsi_conn *conn)
+{
+	list_del(&conn->conn_list);
+}
+
+/*	iscsi_add_r2t_to_list():
+ *
+ *	Called with cmd->r2t_lock held.
+ */
+int iscsi_add_r2t_to_list(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 xfer_len,
+	int recovery,
+	u32 r2t_sn)
+{
+	struct iscsi_r2t *r2t;
+
+	r2t = kmem_cache_zalloc(lio_r2t_cache, GFP_ATOMIC);
+	if (!(r2t)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_r2t.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&r2t->r2t_list);
+
+	r2t->recovery_r2t = recovery;
+	r2t->r2t_sn = (!r2t_sn) ? cmd->r2t_sn++ : r2t_sn;
+	r2t->offset = offset;
+	r2t->xfer_len = xfer_len;
+	list_add_tail(&r2t->r2t_list, &cmd->cmd_r2t_list);
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	iscsi_add_cmd_to_immediate_queue(cmd, CONN(cmd), ISTATE_SEND_R2T);
+
+	spin_lock_bh(&cmd->r2t_lock);
+	return 0;
+}
+
+/*	iscsi_get_r2t_for_eos():
+ *
+ *
+ */
+struct iscsi_r2t *iscsi_get_r2t_for_eos(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 length)
+{
+	struct iscsi_r2t *r2t;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if ((r2t->offset <= offset) &&
+		    (r2t->offset + r2t->xfer_len) >= (offset + length))
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	if (!r2t) {
+		printk(KERN_ERR "Unable to locate R2T for Offset: %u, Length:"
+				" %u\n", offset, length);
+		return NULL;
+	}
+
+	return r2t;
+}
+
+/*	iscsi_get_r2t_from_list():
+ *
+ *
+ */
+struct iscsi_r2t *iscsi_get_r2t_from_list(struct iscsi_cmd *cmd)
+{
+	struct iscsi_r2t *r2t;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if (!r2t->sent_r2t)
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	if (!r2t) {
+		printk(KERN_ERR "Unable to locate next R2T to send for ITT:"
+			" 0x%08x.\n", cmd->init_task_tag);
+		return NULL;
+	}
+
+	return r2t;
+}
+
+/*	iscsi_free_r2t():
+ *
+ *	Called with cmd->r2t_lock held.
+ */
+void iscsi_free_r2t(struct iscsi_r2t *r2t, struct iscsi_cmd *cmd)
+{
+	list_del(&r2t->r2t_list);
+	kmem_cache_free(lio_r2t_cache, r2t);
+}
+
+/*	iscsi_free_r2ts_from_list():
+ *
+ *
+ */
+void iscsi_free_r2ts_from_list(struct iscsi_cmd *cmd)
+{
+	struct iscsi_r2t *r2t, *r2t_tmp;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry_safe(r2t, r2t_tmp, &cmd->cmd_r2t_list, r2t_list) {
+		list_del(&r2t->r2t_list);
+		kmem_cache_free(lio_r2t_cache, r2t);
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+}
+
+/*	iscsi_allocate_cmd():
+ *
+ *	May be called from interrupt context.
+ */
+struct iscsi_cmd *iscsi_allocate_cmd(struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+
+	cmd = kmem_cache_zalloc(lio_cmd_cache, GFP_ATOMIC);
+	if (!(cmd)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_cmd.\n");
+		return NULL;
+	}
+
+	cmd->conn	= conn;
+	INIT_LIST_HEAD(&cmd->i_list);
+	INIT_LIST_HEAD(&cmd->datain_list);
+	INIT_LIST_HEAD(&cmd->cmd_r2t_list);
+	sema_init(&cmd->reject_sem, 0);
+	sema_init(&cmd->unsolicited_data_sem, 0);
+	spin_lock_init(&cmd->datain_lock);
+	spin_lock_init(&cmd->dataout_timeout_lock);
+	spin_lock_init(&cmd->istate_lock);
+	spin_lock_init(&cmd->error_lock);
+	spin_lock_init(&cmd->r2t_lock);
+
+	return cmd;
+}
+
+/*
+ * Called from iscsi_handle_scsi_cmd()
+ */
+struct iscsi_cmd *iscsi_allocate_se_cmd(
+	struct iscsi_conn *conn,
+	u32 data_length,
+	int data_direction,
+	int iscsi_task_attr)
+{
+	struct iscsi_cmd *cmd;
+	struct se_cmd *se_cmd;
+	int sam_task_attr;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return NULL;
+
+	cmd->data_direction = data_direction;
+	cmd->data_length = data_length;
+	/*
+	 * Figure out the SAM Task Attribute for the incoming SCSI CDB
+	 */
+	if ((iscsi_task_attr == ISCSI_ATTR_UNTAGGED) ||
+	    (iscsi_task_attr == ISCSI_ATTR_SIMPLE))
+		sam_task_attr = TASK_ATTR_SIMPLE;
+	else if (iscsi_task_attr == ISCSI_ATTR_ORDERED)
+		sam_task_attr = TASK_ATTR_ORDERED;
+	else if (iscsi_task_attr == ISCSI_ATTR_HEAD_OF_QUEUE)
+		sam_task_attr = TASK_ATTR_HOQ;
+	else if (iscsi_task_attr == ISCSI_ATTR_ACA)
+		sam_task_attr = TASK_ATTR_ACA;
+	else {
+		printk(KERN_INFO "Unknown iSCSI Task Attribute: 0x%02x, using"
+			" TASK_ATTR_SIMPLE\n", iscsi_task_attr);
+		sam_task_attr = TASK_ATTR_SIMPLE;
+	}
+
+	se_cmd = &cmd->se_cmd;
+	/*
+	 * Initialize struct se_cmd descriptor from target_core_mod infrastructure
+	 */
+	transport_init_se_cmd(se_cmd, &lio_target_fabric_configfs->tf_ops,
+			SESS(conn)->se_sess, data_length, data_direction,
+			sam_task_attr, &cmd->sense_buffer[0]);
+	return cmd;
+}
+
+/*	iscsi_allocate_tmr_req():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_allocate_se_cmd_for_tmr(
+	struct iscsi_conn *conn,
+	u8 function)
+{
+	struct iscsi_cmd *cmd;
+	struct se_cmd *se_cmd;
+	u8 tcm_function;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return NULL;
+
+	cmd->data_direction = DMA_NONE;
+
+	cmd->tmr_req = kzalloc(sizeof(struct iscsi_tmr_req), GFP_KERNEL);
+	if (!(cmd->tmr_req)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" Task Management command!\n");
+		return NULL;
+	}
+	/*
+	 * TASK_REASSIGN for ERL=2 / connection stays inside of
+	 * LIO-Target $FABRIC_MOD
+	 */
+	if (function == ISCSI_TM_FUNC_TASK_REASSIGN)
+		return cmd;
+
+	se_cmd = &cmd->se_cmd;
+	/*
+	 * Initialize struct se_cmd descriptor from target_core_mod infrastructure
+	 */
+	transport_init_se_cmd(se_cmd, &lio_target_fabric_configfs->tf_ops,
+				SESS(conn)->se_sess, 0, DMA_NONE,
+				TASK_ATTR_SIMPLE, &cmd->sense_buffer[0]);
+
+	switch (function) {
+	case ISCSI_TM_FUNC_ABORT_TASK:
+		tcm_function = TMR_ABORT_TASK;
+		break;
+	case ISCSI_TM_FUNC_ABORT_TASK_SET:
+		tcm_function = TMR_ABORT_TASK_SET;
+		break;
+	case ISCSI_TM_FUNC_CLEAR_ACA:
+		tcm_function = TMR_CLEAR_ACA;
+		break;
+	case ISCSI_TM_FUNC_CLEAR_TASK_SET:
+		tcm_function = TMR_CLEAR_TASK_SET;
+		break;
+	case ISCSI_TM_FUNC_LOGICAL_UNIT_RESET:
+		tcm_function = TMR_LUN_RESET;
+		break;
+	case ISCSI_TM_FUNC_TARGET_WARM_RESET:
+		tcm_function = TMR_TARGET_WARM_RESET;
+		break;
+	case ISCSI_TM_FUNC_TARGET_COLD_RESET:
+		tcm_function = TMR_TARGET_COLD_RESET;
+		break;
+	default:
+		printk(KERN_ERR "Unknown iSCSI TMR Function:"
+			" 0x%02x\n", function);
+		goto out;
+	}
+
+	se_cmd->se_tmr_req = core_tmr_alloc_req(se_cmd,
+				(void *)cmd->tmr_req, tcm_function);
+	if (!(se_cmd->se_tmr_req))
+		goto out;
+
+	cmd->tmr_req->se_tmr_req = se_cmd->se_tmr_req;
+
+	return cmd;
+out:
+	iscsi_release_cmd_to_pool(cmd);
+	if (se_cmd)
+		transport_free_se_cmd(se_cmd);
+	return NULL;
+}
+
+/*	iscsi_decide_list_to_build():
+ *
+ *
+ */
+int iscsi_decide_list_to_build(
+	struct iscsi_cmd *cmd,
+	u32 immediate_data_length)
+{
+	struct iscsi_build_list bl;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na;
+
+	if (SESS_OPS(sess)->DataSequenceInOrder &&
+	    SESS_OPS(sess)->DataPDUInOrder)
+		return 0;
+
+	if (cmd->data_direction == DMA_NONE)
+		return 0;
+
+	na = iscsi_tpg_get_node_attrib(sess);
+	memset(&bl, 0, sizeof(struct iscsi_build_list));
+
+	if (cmd->data_direction == DMA_FROM_DEVICE) {
+		bl.data_direction = ISCSI_PDU_READ;
+		bl.type = PDULIST_NORMAL;
+		if (na->random_datain_pdu_offsets)
+			bl.randomize |= RANDOM_DATAIN_PDU_OFFSETS;
+		if (na->random_datain_seq_offsets)
+			bl.randomize |= RANDOM_DATAIN_SEQ_OFFSETS;
+	} else {
+		bl.data_direction = ISCSI_PDU_WRITE;
+		bl.immediate_data_length = immediate_data_length;
+		if (na->random_r2t_offsets)
+			bl.randomize |= RANDOM_R2T_OFFSETS;
+
+		if (!cmd->immediate_data && !cmd->unsolicited_data)
+			bl.type = PDULIST_NORMAL;
+		else if (cmd->immediate_data && !cmd->unsolicited_data)
+			bl.type = PDULIST_IMMEDIATE;
+		else if (!cmd->immediate_data && cmd->unsolicited_data)
+			bl.type = PDULIST_UNSOLICITED;
+		else if (cmd->immediate_data && cmd->unsolicited_data)
+			bl.type = PDULIST_IMMEDIATE_AND_UNSOLICITED;
+	}
+
+	return iscsi_do_build_list(cmd, &bl);
+}
+
+/*	iscsi_get_seq_holder_for_datain():
+ *
+ *
+ */
+struct iscsi_seq *iscsi_get_seq_holder_for_datain(
+	struct iscsi_cmd *cmd,
+	u32 seq_send_order)
+{
+	u32 i;
+
+	for (i = 0; i < cmd->seq_count; i++)
+		if (cmd->seq_list[i].seq_send_order == seq_send_order)
+			return &cmd->seq_list[i];
+
+	return NULL;
+}
+
+/*	iscsi_get_seq_holder_for_r2t():
+ *
+ *
+ */
+struct iscsi_seq *iscsi_get_seq_holder_for_r2t(struct iscsi_cmd *cmd)
+{
+	u32 i;
+
+	if (!cmd->seq_list) {
+		printk(KERN_ERR "struct iscsi_cmd->seq_list is NULL!\n");
+		return NULL;
+	}
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		if (cmd->seq_list[i].type != SEQTYPE_NORMAL)
+			continue;
+		if (cmd->seq_list[i].seq_send_order == cmd->seq_send_order) {
+			cmd->seq_send_order++;
+			return &cmd->seq_list[i];
+		}
+	}
+
+	return NULL;
+}
+
+/*	iscsi_get_holder_for_r2tsn():
+ *
+ *
+ */
+struct iscsi_r2t *iscsi_get_holder_for_r2tsn(
+	struct iscsi_cmd *cmd,
+	u32 r2t_sn)
+{
+	struct iscsi_r2t *r2t;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if (r2t->r2t_sn == r2t_sn)
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return (r2t) ? r2t : NULL;
+}
+
+#define SERIAL_BITS	31
+#define MAX_BOUND	(u32)2147483647UL
+
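+/*
+ * 32-bit serial number arithmetic comparisons (in the spirit of RFC 1982),
+ * used by the CmdSN checks below so that ordering stays correct across
+ * 32-bit wrap-around.  For example, serial_gt(0x00000001, 0xfffffffe) is
+ * true, since 0x00000001 is considered ahead of 0xfffffffe once the
+ * counter has wrapped.
+ */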
+int serial_lt(u32 x, u32 y)
+{
+	return (x != y) && (((x < y) && ((y - x) < MAX_BOUND)) ||
+		((x > y) && ((x - y) > MAX_BOUND)));
+}
+
+int serial_lte(u32 x, u32 y)
+{
+	return (x == y) ? 1 : serial_lt(x, y);
+}
+
+int serial_gt(u32 x, u32 y)
+{
+	return (x != y) && (((x < y) && ((y - x) > MAX_BOUND)) ||
+		((x > y) && ((x - y) < MAX_BOUND)));
+}
+
+int serial_gte(u32 x, u32 y)
+{
+	return (x == y) ? 1 : serial_gt(x, y);
+}
+
+/*	iscsi_check_received_cmdsn():
+ *
+ *
+ */
+inline int iscsi_check_received_cmdsn(
+	struct iscsi_conn *conn,
+	struct iscsi_cmd *cmd,
+	u32 cmdsn)
+{
+	int ret;
+	/*
+	 * This is the proper method of checking received CmdSN against
+	 * ExpCmdSN and MaxCmdSN values, as well as accounting for out
+	 * of order CmdSNs due to multiple connection sessions and/or
+	 * CRC failures.
+	 */
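+	/*
+	 * Return values: CMDSN_NORMAL_OPERATION once the command has been
+	 * executed or queued, CMDSN_LOWER_THAN_EXP when the received CmdSN
+	 * is below ExpCmdSN and is ignored, and CMDSN_ERROR_CANNOT_RECOVER
+	 * on a protocol error.  Out of order CmdSNs are handed off to
+	 * iscsi_handle_ooo_cmdsn() below.
+	 */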
+	spin_lock(&SESS(conn)->cmdsn_lock);
+	if (serial_gt(cmdsn, SESS(conn)->max_cmd_sn)) {
+		printk(KERN_ERR "Received CmdSN: 0x%08x is greater than"
+			" MaxCmdSN: 0x%08x, protocol error.\n", cmdsn,
+				SESS(conn)->max_cmd_sn);
+		spin_unlock(&SESS(conn)->cmdsn_lock);
+		return CMDSN_ERROR_CANNOT_RECOVER;
+	}
+
+	if (!SESS(conn)->cmdsn_outoforder) {
+		if (cmdsn == SESS(conn)->exp_cmd_sn) {
+			SESS(conn)->exp_cmd_sn++;
+			TRACE(TRACE_CMDSN, "Received CmdSN matches ExpCmdSN,"
+				" incremented ExpCmdSN to: 0x%08x\n",
+					SESS(conn)->exp_cmd_sn);
+			ret = iscsi_execute_cmd(cmd, 0);
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+
+			return (!ret) ? CMDSN_NORMAL_OPERATION :
+					CMDSN_ERROR_CANNOT_RECOVER;
+		} else if (serial_gt(cmdsn, SESS(conn)->exp_cmd_sn)) {
+			TRACE(TRACE_CMDSN, "Received CmdSN: 0x%08x is greater"
+				" than ExpCmdSN: 0x%08x, not acknowledging.\n",
+				cmdsn, SESS(conn)->exp_cmd_sn);
+			goto ooo_cmdsn;
+		} else {
+			printk(KERN_ERR "Received CmdSN: 0x%08x is less than"
+				" ExpCmdSN: 0x%08x, ignoring.\n", cmdsn,
+					SESS(conn)->exp_cmd_sn);
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+			return CMDSN_LOWER_THAN_EXP;
+		}
+	} else {
+		int counter = 0;
+		u32 old_expcmdsn = 0;
+		if (cmdsn == SESS(conn)->exp_cmd_sn) {
+			old_expcmdsn = SESS(conn)->exp_cmd_sn++;
+			TRACE(TRACE_CMDSN, "Got missing CmdSN: 0x%08x matches"
+				" ExpCmdSN, incremented ExpCmdSN to 0x%08x.\n",
+					cmdsn, SESS(conn)->exp_cmd_sn);
+
+			if (iscsi_execute_cmd(cmd, 0) < 0) {
+				spin_unlock(&SESS(conn)->cmdsn_lock);
+				return CMDSN_ERROR_CANNOT_RECOVER;
+			}
+		} else if (serial_gt(cmdsn, SESS(conn)->exp_cmd_sn)) {
+			TRACE(TRACE_CMDSN, "CmdSN: 0x%08x greater than"
+				" ExpCmdSN: 0x%08x, not acknowledging.\n",
+				cmdsn, SESS(conn)->exp_cmd_sn);
+			goto ooo_cmdsn;
+		} else {
+			printk(KERN_ERR "CmdSN: 0x%08x less than ExpCmdSN:"
+				" 0x%08x, ignoring.\n", cmdsn,
+				SESS(conn)->exp_cmd_sn);
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+			return CMDSN_LOWER_THAN_EXP;
+		}
+
+		counter = iscsi_execute_ooo_cmdsns(SESS(conn));
+		if (counter < 0) {
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+			return CMDSN_ERROR_CANNOT_RECOVER;
+		}
+
+		if (counter == SESS(conn)->ooo_cmdsn_count) {
+			if (SESS(conn)->ooo_cmdsn_count == 1) {
+				TRACE(TRACE_CMDSN, "Received final missing"
+					" CmdSN: 0x%08x.\n", old_expcmdsn);
+			} else {
+				TRACE(TRACE_CMDSN, "Received final missing"
+					" CmdSNs: 0x%08x->0x%08x.\n",
+				old_expcmdsn, (SESS(conn)->exp_cmd_sn - 1));
+			}
+
+			SESS(conn)->ooo_cmdsn_count = 0;
+			SESS(conn)->cmdsn_outoforder = 0;
+		} else {
+			SESS(conn)->ooo_cmdsn_count -= counter;
+			TRACE(TRACE_CMDSN, "Still missing %hu CmdSN(s),"
+				" continuing out of order operation.\n",
+				SESS(conn)->ooo_cmdsn_count);
+		}
+		spin_unlock(&SESS(conn)->cmdsn_lock);
+		return CMDSN_NORMAL_OPERATION;
+	}
+
+ooo_cmdsn:
+	ret = iscsi_handle_ooo_cmdsn(SESS(conn), cmd, cmdsn);
+	spin_unlock(&SESS(conn)->cmdsn_lock);
+	return ret;
+}
+
+/*	iscsi_check_unsolicited_dataout():
+ *
+ *
+ */
+int iscsi_check_unsolicited_dataout(struct iscsi_cmd *cmd, unsigned char *buf)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct se_cmd *se_cmd = SE_CMD(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	if (SESS_OPS_C(conn)->InitialR2T) {
+		printk(KERN_ERR "Received unexpected unsolicited data"
+			" while InitialR2T=Yes, protocol error.\n");
+		transport_send_check_condition_and_sense(se_cmd,
+				TCM_UNEXPECTED_UNSOLICITED_DATA, 0);
+		return -1;
+	}
+
+	if ((cmd->first_burst_len + payload_length) >
+	     SESS_OPS_C(conn)->FirstBurstLength) {
+		printk(KERN_ERR "Total %u bytes exceeds FirstBurstLength: %u"
+			" for this Unsolicited DataOut Burst.\n",
+			(cmd->first_burst_len + payload_length),
+				SESS_OPS_C(conn)->FirstBurstLength);
+		transport_send_check_condition_and_sense(se_cmd,
+				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+		return -1;
+	}
+
+	if (!(hdr->flags & ISCSI_FLAG_CMD_FINAL))
+		return 0;
+
+	if (((cmd->first_burst_len + payload_length) != cmd->data_length) &&
+	    ((cmd->first_burst_len + payload_length) !=
+	      SESS_OPS_C(conn)->FirstBurstLength)) {
+		printk(KERN_ERR "Unsolicited non-immediate data received %u"
+			" does not equal FirstBurstLength: %u, and does"
+			" not equal ExpXferLen %u.\n",
+			(cmd->first_burst_len + payload_length),
+			SESS_OPS_C(conn)->FirstBurstLength, cmd->data_length);
+		transport_send_check_condition_and_sense(se_cmd,
+				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+		return -1;
+	}
+	return 0;
+}
+
+/*	iscsi_find_cmd_from_itt():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_find_cmd_from_itt(
+	struct iscsi_conn *conn,
+	u32 init_task_tag)
+{
+	struct iscsi_cmd *cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->init_task_tag == init_task_tag)
+			break;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	if (!cmd) {
+		printk(KERN_ERR "Unable to locate ITT: 0x%08x on CID: %hu",
+			init_task_tag, conn->cid);
+		return NULL;
+	}
+
+	return cmd;
+}
+
+/*	iscsi_find_cmd_from_itt_or_dump():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_find_cmd_from_itt_or_dump(
+	struct iscsi_conn *conn,
+	u32 init_task_tag,
+	u32 length)
+{
+	struct iscsi_cmd *cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->init_task_tag == init_task_tag)
+			break;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	if (!cmd) {
+		printk(KERN_ERR "Unable to locate ITT: 0x%08x on CID: %hu,"
+			" dumping payload\n", init_task_tag, conn->cid);
+		if (length)
+			iscsi_dump_data_payload(conn, length, 1);
+		return NULL;
+	}
+
+	return cmd;
+}
+
+/*	iscsi_find_cmd_from_ttt():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_find_cmd_from_ttt(
+	struct iscsi_conn *conn,
+	u32 targ_xfer_tag)
+{
+	struct iscsi_cmd *cmd = NULL;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->targ_xfer_tag == targ_xfer_tag)
+			break;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	if (!cmd) {
+		printk(KERN_ERR "Unable to locate TTT: 0x%08x on CID: %hu\n",
+			targ_xfer_tag, conn->cid);
+		return NULL;
+	}
+
+	return cmd;
+}
+
+/*	iscsi_find_cmd_for_recovery():
+ *
+ *
+ */
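+/*
+ * Returns -2 with *cr_ptr and *cmd_ptr set if the ITT is found on an
+ * inactive connection recovery list, 0 if it is found on an active
+ * connection recovery list, or -1 if it cannot be located at all.
+ */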
+int iscsi_find_cmd_for_recovery(
+	struct iscsi_session *sess,
+	struct iscsi_cmd **cmd_ptr,
+	struct iscsi_conn_recovery **cr_ptr,
+	u32 init_task_tag)
+{
+	int found_itt = 0;
+	struct iscsi_cmd *cmd = NULL;
+	struct iscsi_conn_recovery *cr;
+
+	/*
+	 * Scan through the inactive connection recovery list's command list.
+	 * If init_task_tag matches, the command is still allegiant to
+	 * the failed connection.
+	 */
+	spin_lock(&sess->cr_i_lock);
+	list_for_each_entry(cr, &sess->cr_inactive_list, cr_list) {
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry(cmd, &cr->conn_recovery_cmd_list, i_list) {
+			if (cmd->init_task_tag == init_task_tag) {
+				found_itt = 1;
+				break;
+			}
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		if (found_itt)
+			break;
+	}
+	spin_unlock(&sess->cr_i_lock);
+
+	if (cmd) {
+		*cr_ptr = cr;
+		*cmd_ptr = cmd;
+		return -2;
+	}
+
+	found_itt = 0;
+
+	/*
+	 * Scan through the active connection recovery list's command list.
+	 * If init_task_tag matches, the command is ready to be reassigned.
+	 */
+	spin_lock(&sess->cr_a_lock);
+	list_for_each_entry(cr, &sess->cr_active_list, cr_list) {
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry(cmd, &cr->conn_recovery_cmd_list, i_list) {
+			if (cmd->init_task_tag == init_task_tag) {
+				found_itt = 1;
+				break;
+			}
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		if (found_itt)
+			break;
+	}
+	spin_unlock(&sess->cr_a_lock);
+
+	if (!cmd || !cr)
+		return -1;
+
+	*cr_ptr = cr;
+	*cmd_ptr = cmd;
+
+	return 0;
+}
+
+/*	iscsi_add_cmd_to_immediate_queue():
+ *
+ *
+ */
+void iscsi_add_cmd_to_immediate_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	u8 state)
+{
+	struct iscsi_queue_req *qr;
+
+	qr = kmem_cache_zalloc(lio_qr_cache, GFP_ATOMIC);
+	if (!(qr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_queue_req\n");
+		return;
+	}
+	INIT_LIST_HEAD(&qr->qr_list);
+	qr->cmd = cmd;
+	qr->state = state;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	list_add_tail(&qr->qr_list, &conn->immed_queue_list);
+	atomic_inc(&cmd->immed_queue_count);
+	atomic_set(&conn->check_immediate_queue, 1);
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	up(&conn->tx_sem);
+}
+
+/*	iscsi_get_cmd_from_immediate_queue():
+ *
+ *
+ */
+struct iscsi_queue_req *iscsi_get_cmd_from_immediate_queue(struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	if (list_empty(&conn->immed_queue_list)) {
+		spin_unlock_bh(&conn->immed_queue_lock);
+		return NULL;
+	}
+	list_for_each_entry(qr, &conn->immed_queue_list, qr_list)
+		break;
+
+	list_del(&qr->qr_list);
+	if (qr->cmd)
+		atomic_dec(&qr->cmd->immed_queue_count);
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	return qr;
+}
+
+static void iscsi_remove_cmd_from_immediate_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr, *qr_tmp;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	if (!(atomic_read(&cmd->immed_queue_count))) {
+		spin_unlock_bh(&conn->immed_queue_lock);
+		return;
+	}
+
+	list_for_each_entry_safe(qr, qr_tmp, &conn->immed_queue_list, qr_list) {
+		if (qr->cmd != cmd)
+			continue;
+
+		atomic_dec(&qr->cmd->immed_queue_count);
+		list_del(&qr->qr_list);
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	if (atomic_read(&cmd->immed_queue_count)) {
+		printk(KERN_ERR "ITT: 0x%08x immed_queue_count: %d\n",
+			cmd->init_task_tag,
+			atomic_read(&cmd->immed_queue_count));
+	}
+}
+
+/*	iscsi_add_cmd_to_response_queue():
+ *
+ *
+ */
+void iscsi_add_cmd_to_response_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	u8 state)
+{
+	struct iscsi_queue_req *qr;
+
+	qr = kmem_cache_zalloc(lio_qr_cache, GFP_ATOMIC);
+	if (!(qr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_queue_req\n");
+		return;
+	}
+	INIT_LIST_HEAD(&qr->qr_list);
+	qr->cmd = cmd;
+	qr->state = state;
+
+	spin_lock_bh(&conn->response_queue_lock);
+	list_add_tail(&qr->qr_list, &conn->response_queue_list);
+	atomic_inc(&cmd->response_queue_count);
+	spin_unlock_bh(&conn->response_queue_lock);
+
+	up(&conn->tx_sem);
+}
+
+/*	iscsi_get_cmd_from_response_queue():
+ *
+ *
+ */
+struct iscsi_queue_req *iscsi_get_cmd_from_response_queue(struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr;
+
+	spin_lock_bh(&conn->response_queue_lock);
+	if (list_empty(&conn->response_queue_list)) {
+		spin_unlock_bh(&conn->response_queue_lock);
+		return NULL;
+	}
+
+	list_for_each_entry(qr, &conn->response_queue_list, qr_list)
+		break;
+
+	list_del(&qr->qr_list);
+	if (qr->cmd)
+		atomic_dec(&qr->cmd->response_queue_count);
+	spin_unlock_bh(&conn->response_queue_lock);
+
+	return qr;
+}
+
+/*	iscsi_remove_cmd_from_response_queue():
+ *
+ *
+ */
+static void iscsi_remove_cmd_from_response_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr, *qr_tmp;
+
+	spin_lock_bh(&conn->response_queue_lock);
+	if (!(atomic_read(&cmd->response_queue_count))) {
+		spin_unlock_bh(&conn->response_queue_lock);
+		return;
+	}
+
+	list_for_each_entry_safe(qr, qr_tmp, &conn->response_queue_list,
+				qr_list) {
+		if (qr->cmd != cmd)
+			continue;
+
+		atomic_dec(&qr->cmd->response_queue_count);
+		list_del(&qr->qr_list);
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->response_queue_lock);
+
+	if (atomic_read(&cmd->response_queue_count)) {
+		printk(KERN_ERR "ITT: 0x%08x response_queue_count: %d\n",
+			cmd->init_task_tag,
+			atomic_read(&cmd->response_queue_count));
+	}
+}
+
+void iscsi_remove_cmd_from_tx_queues(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	iscsi_remove_cmd_from_immediate_queue(cmd, conn);
+	iscsi_remove_cmd_from_response_queue(cmd, conn);
+}
+
+/*	iscsi_free_queue_reqs_for_conn():
+ *
+ *
+ */
+void iscsi_free_queue_reqs_for_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr, *qr_tmp;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	list_for_each_entry_safe(qr, qr_tmp, &conn->immed_queue_list, qr_list) {
+		list_del(&qr->qr_list);
+		if (qr->cmd)
+			atomic_dec(&qr->cmd->immed_queue_count);
+
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	spin_lock_bh(&conn->response_queue_lock);
+	list_for_each_entry_safe(qr, qr_tmp, &conn->response_queue_list,
+			qr_list) {
+		list_del(&qr->qr_list);
+		if (qr->cmd)
+			atomic_dec(&qr->cmd->response_queue_count);
+
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->response_queue_lock);
+}
+
+/*	iscsi_release_cmd_direct():
+ *
+ *
+ */
+void iscsi_release_cmd_direct(struct iscsi_cmd *cmd)
+{
+	iscsi_free_r2ts_from_list(cmd);
+	iscsi_free_all_datain_reqs(cmd);
+
+	kfree(cmd->buf_ptr);
+	kfree(cmd->pdu_list);
+	kfree(cmd->seq_list);
+	kfree(cmd->tmr_req);
+	kfree(cmd->iov_data);
+
+	kmem_cache_free(lio_cmd_cache, cmd);
+}
+
+void lio_release_cmd_direct(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	iscsi_release_cmd_direct(cmd);
+}
+
+/*	__iscsi_release_cmd_to_pool():
+ *
+ *
+ */
+void __iscsi_release_cmd_to_pool(struct iscsi_cmd *cmd, struct iscsi_session *sess)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
+	iscsi_free_r2ts_from_list(cmd);
+	iscsi_free_all_datain_reqs(cmd);
+
+	kfree(cmd->buf_ptr);
+	kfree(cmd->pdu_list);
+	kfree(cmd->seq_list);
+	kfree(cmd->tmr_req);
+	kfree(cmd->iov_data);
+
+	if (conn)
+		iscsi_remove_cmd_from_tx_queues(cmd, conn);
+
+	kmem_cache_free(lio_cmd_cache, cmd);
+}
+
+void iscsi_release_cmd_to_pool(struct iscsi_cmd *cmd)
+{
+	if (!CONN(cmd) && !cmd->sess) {
+		iscsi_release_cmd_direct(cmd);
+	} else {
+		__iscsi_release_cmd_to_pool(cmd, (CONN(cmd)) ?
+			CONN(cmd)->sess : cmd->sess);
+	}
+}
+
+void lio_release_cmd_to_pool(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	iscsi_release_cmd_to_pool(cmd);
+}
+
+/*	iscsi_pack_lun():
+ *
+ *	Routine to pack an ordinary (LINUX) LUN 32-bit number
+ *		into an 8-byte LUN structure
+ *	(see SAM-2, Section 4.12.3 page 39)
+ *	Thanks to UNH for help with this :-).
+ */
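+/*
+ * For example, lun=5 is packed as byte 0 = 0x00 and byte 1 = 0x05 of the
+ * 8-byte LUN field (peripheral device addressing), with bytes 2 through 7
+ * set to zero.
+ */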
+inline u64 iscsi_pack_lun(unsigned int lun)
+{
+	u64	result;
+
+	result = ((lun & 0xff) << 8);	/* LSB of lun into byte 1 big-endian */
+
+	if (0) {
+		/* use flat space addressing method, SAM-2 Section 4.12.4
+			-	high-order 2 bits of byte 0 are 01
+			-	low-order 6 bits of byte 0 are MSB of the lun
+			-	all 8 bits of byte 1 are LSB of the lun
+			-	all other bytes (2 thru 7) are 0
+		 */
+		result |= 0x40 | ((lun >> 8) & 0x3f);
+	}
+	/* else use peripheral device addressing method, Sam-2 Section 4.12.5
+			-	high-order 2 bits of byte 0 are 00
+			-	low-order 6 bits of byte 0 are all 0
+			-	all 8 bits of byte 1 are the lun
+			-	all other bytes (2 thru 7) are 0
+	*/
+
+	return cpu_to_le64(result);
+}
+
+/*	iscsi_unpack_lun():
+ *
+ *	Routine to pack an 8-byte LUN structure into a ordinary (LINUX) 32-bit
+ *	LUN number (see SAM-2, Section 4.12.3 page 39)
+ *	Thanks to UNH for help with this :-).
+ */
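+/*
+ * For example, an 8-byte LUN field of 00 05 00 00 00 00 00 00 (peripheral
+ * device addressing) unpacks back to LUN 5.
+ */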
+inline u32 iscsi_unpack_lun(unsigned char *lun_ptr)
+{
+	u32	result, temp;
+
+	result = *(lun_ptr+1);  /* LSB of lun from byte 1 big-endian */
+
+	switch (temp = ((*lun_ptr)>>6)) { /* high 2 bits of byte 0 big-endian */
+	case 0: /* peripheral device addressing method, Sam-2 Section 4.12.5
+		-	high-order 2 bits of byte 0 are 00
+		-	low-order 6 bits of byte 0 are all 0
+		-	all 8 bits of byte 1 are the lun
+		-	all other bytes (2 thru 7) are 0
+		 */
+		if (*lun_ptr != 0) {
+			printk(KERN_ERR "Illegal Byte 0 in LUN peripheral"
+				" device addressing method %u, expected 0\n",
+				*lun_ptr);
+		}
+		break;
+	case 1: /* flat space addressing method, SAM-2 Section 4.12.4
+		-	high-order 2 bits of byte 0 are 01
+		-	low-order 6 bits of byte 0 are MSB of the lun
+		-	all 8 bits of byte 1 are LSB of the lun
+		-	all other bytes (2 thru 7) are 0
+		 */
+		result += ((*lun_ptr) & 0x3f) << 8;
+		break;
+	default: /* (extended) logical unit addressing */
+		printk(KERN_ERR "Unimplemented LUN addressing method %u, "
+			"PDA method used instead\n", temp);
+		break;
+	}
+
+	return result;
+}
+
+/*	iscsi_check_session_usage_count():
+ *
+ *
+ */
+int iscsi_check_session_usage_count(struct iscsi_session *sess)
+{
+	spin_lock_bh(&sess->session_usage_lock);
+	if (atomic_read(&sess->session_usage_count)) {
+		atomic_set(&sess->session_waiting_on_uc, 1);
+		spin_unlock_bh(&sess->session_usage_lock);
+		if (in_interrupt())
+			return 2;
+
+		down(&sess->session_waiting_on_uc_sem);
+		return 1;
+	}
+	spin_unlock_bh(&sess->session_usage_lock);
+
+	return 0;
+}
+
+/*	iscsi_dec_session_usage_count():
+ *
+ *
+ */
+void iscsi_dec_session_usage_count(struct iscsi_session *sess)
+{
+	spin_lock_bh(&sess->session_usage_lock);
+	atomic_dec(&sess->session_usage_count);
+
+	if (!atomic_read(&sess->session_usage_count) &&
+	     atomic_read(&sess->session_waiting_on_uc))
+		up(&sess->session_waiting_on_uc_sem);
+
+	spin_unlock_bh(&sess->session_usage_lock);
+}
+
+/*	iscsi_inc_session_usage_count():
+ *
+ *
+ */
+void iscsi_inc_session_usage_count(struct iscsi_session *sess)
+{
+
+	spin_lock_bh(&sess->session_usage_lock);
+	atomic_inc(&sess->session_usage_count);
+	spin_unlock_bh(&sess->session_usage_lock);
+}
+
+/*	iscsi_determine_sync_and_steering_counts():
+ *
+ *	Used before iscsi_do[rx,tx]_data() to determine iov and [rx,tx]_marker
+ *	array counts needed for sync and steering.
+ */
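+/*
+ * Each marker interval that falls within the payload adds three iovec
+ * entries (the data segment up to the marker plus the two marker halves)
+ * and two slots in the corresponding marker value array.
+ */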
+static inline int iscsi_determine_sync_and_steering_counts(
+	struct iscsi_conn *conn,
+	struct iscsi_data_count *count)
+{
+	u32 length = count->data_length;
+	u32 marker, markint;
+
+	count->sync_and_steering = 1;
+
+	marker = (count->type == ISCSI_RX_DATA) ?
+			conn->of_marker : conn->if_marker;
+	markint = (count->type == ISCSI_RX_DATA) ?
+			(CONN_OPS(conn)->OFMarkInt * 4) :
+			(CONN_OPS(conn)->IFMarkInt * 4);
+	count->ss_iov_count = count->iov_count;
+
+	while (length > 0) {
+		if (length >= marker) {
+			count->ss_iov_count += 3;
+			count->ss_marker_count += 2;
+
+			length -= marker;
+			marker = markint;
+		} else
+			length = 0;
+	}
+
+	return 0;
+}
+
+/*	iscsi_set_sync_and_steering_values():
+ *
+ * 	Setup conn->if_marker and conn->of_marker values based upon
+ * 	the initial marker-less interval. (see iSCSI v19 A.2)
+ */
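+/*
+ * For example, with OFMarkInt negotiated as 2048 words (8192 bytes) and
+ * 464 bytes of Login PDUs already counted in conn->of_marker, the next
+ * marker is expected 8192 - (464 + 48) = 7680 bytes into the Full Feature
+ * Phase data stream, the extra 48 bytes accounting for the initial Login
+ * header.
+ */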
+int iscsi_set_sync_and_steering_values(struct iscsi_conn *conn)
+{
+	int login_ifmarker_count = 0, login_ofmarker_count = 0, next_marker = 0;
+	/*
+	 * IFMarkInt and OFMarkInt are negotiated as 32-bit words.
+	 */
+	u32 IFMarkInt = (CONN_OPS(conn)->IFMarkInt * 4);
+	u32 OFMarkInt = (CONN_OPS(conn)->OFMarkInt * 4);
+
+	if (CONN_OPS(conn)->OFMarker) {
+		/*
+		 * Account for the first Login Command received not
+		 * via iscsi_recv_msg().
+		 */
+		conn->of_marker += ISCSI_HDR_LEN;
+		if (conn->of_marker <= OFMarkInt) {
+			conn->of_marker = (OFMarkInt - conn->of_marker);
+		} else {
+			login_ofmarker_count = (conn->of_marker / OFMarkInt);
+			next_marker = (OFMarkInt * (login_ofmarker_count + 1)) +
+					(login_ofmarker_count * MARKER_SIZE);
+			conn->of_marker = (next_marker - conn->of_marker);
+		}
+		conn->of_marker_offset = 0;
+		printk(KERN_INFO "Setting OFMarker value to %u based on Initial"
+			" Markerless Interval.\n", conn->of_marker);
+	}
+
+	if (CONN_OPS(conn)->IFMarker) {
+		if (conn->if_marker <= IFMarkInt) {
+			conn->if_marker = (IFMarkInt - conn->if_marker);
+		} else {
+			login_ifmarker_count = (conn->if_marker / IFMarkInt);
+			next_marker = (IFMarkInt * (login_ifmarker_count + 1)) +
+					(login_ifmarker_count * MARKER_SIZE);
+			conn->if_marker = (next_marker - conn->if_marker);
+		}
+		printk(KERN_INFO "Setting IFMarker value to %u based on Initial"
+			" Markerless Interval.\n", conn->if_marker);
+	}
+
+	return 0;
+}
+
+unsigned char *iscsi_ntoa(u32 ip)
+{
+	static unsigned char buf[18];
+
+	memset((void *) buf, 0, 18);
+	sprintf(buf, "%u.%u.%u.%u", ((ip >> 24) & 0xff), ((ip >> 16) & 0xff),
+			((ip >> 8) & 0xff), (ip & 0xff));
+
+	return buf;
+}
+
+void iscsi_ntoa2(unsigned char *buf, u32 ip)
+{
+	memset((void *) buf, 0, 18);
+	sprintf(buf, "%u.%u.%u.%u", ((ip >> 24) & 0xff), ((ip >> 16) & 0xff),
+			((ip >> 8) & 0xff), (ip & 0xff));
+}
+
+#define NS_INT16SZ	 2
+#define NS_INADDRSZ	 4
+#define NS_IN6ADDRSZ	16
+
+/* const char *
+ * inet_ntop4(src, dst, size)
+ *	format an IPv4 address
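+	/*
+	 * i.e. no se_cmd setup or core_tmr_alloc_req() is done here, since
+	 * a TASK_REASSIGN is handled entirely within iscsi-target and never
+	 * reaches target_core_mod.
+	 */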
+ * return:
+ *	`dst' (as a const)
+ * notes:
+ *	(1) uses no statics
+ *	(2) takes a unsigned char* not an in_addr as input
+ * author:
+ *	Paul Vixie, 1996.
+ */
+static const char *iscsi_ntop4(
+	const unsigned char *src,
+	char *dst,
+	size_t size)
+{
+	static const char *fmt = "%u.%u.%u.%u";
+	char tmp[sizeof "255.255.255.255"];
+	size_t len;
+
+	len = snprintf(tmp, sizeof tmp, fmt, src[0], src[1], src[2], src[3]);
+	if (len >= size) {
+		printk(KERN_ERR "len: %d >= size: %d\n", (int)len, (int)size);
+		return NULL;
+	}
+	memcpy(dst, tmp, len + 1);
+
+	return dst;
+}
+
+/* const char *
+ * isc_inet_ntop6(src, dst, size)
+ * convert IPv6 binary address into presentation (printable) format
+ * author:
+ *	Paul Vixie, 1996.
+ */
+const char *iscsi_ntop6(const unsigned char *src, char *dst, size_t size)
+{
+	/*
+	 * Note that int32_t and int16_t need only be "at least" large enough
+	 * to contain a value of the specified size.  On some systems, like
+	 * Crays, there is no such thing as an integer variable with 16 bits.
+	 * Keep this in mind if you think this function should have been coded
+	 * to use pointer overlays.  All the world's not a VAX.
+	 */
+	char tmp[sizeof "ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255"], *tp;
+	struct { int base, len; } best, cur;
+	unsigned int words[NS_IN6ADDRSZ / NS_INT16SZ];
+	int i, inc;
+
+	best.len = best.base = 0;
+	cur.len = cur.base = 0;
+
+	/*
+	 * Preprocess:
+	 *	Copy the input (bytewise) array into a wordwise array.
+	 *	Find the longest run of 0x00's in src[] for :: shorthanding.
+	 */
+	memset(words, '\0', sizeof words);
+	for (i = 0; i < NS_IN6ADDRSZ; i++)
+		words[i / 2] |= (src[i] << ((1 - (i % 2)) << 3));
+	best.base = -1;
+	cur.base = -1;
+	for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) {
+		if (words[i] == 0) {
+			if (cur.base == -1)
+				cur.base = i, cur.len = 1;
+			else
+				cur.len++;
+		} else {
+			if (cur.base != -1) {
+				if (best.base == -1 || cur.len > best.len)
+					best = cur;
+				cur.base = -1;
+			}
+		}
+	}
+	if (cur.base != -1) {
+		if (best.base == -1 || cur.len > best.len)
+			best = cur;
+	}
+	if (best.base != -1 && best.len < 2)
+		best.base = -1;
+
+	/*
+	 * Format the result.
+	 */
+	tp = tmp;
+	for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) {
+		/* Are we inside the best run of 0x00's? */
+		if (best.base != -1 && i >= best.base &&
+		    i < (best.base + best.len)) {
+			if (i == best.base)
+				*tp++ = ':';
+			continue;
+		}
+		/* Are we following an initial run of 0x00s or any real hex? */
+		if (i != 0)
+			*tp++ = ':';
+		/* Is this address an encapsulated IPv4? */
+		if (i == 6 && best.base == 0 &&
+		    (best.len == 6 || (best.len == 5 && words[5] == 0xffff))) {
+			if (!iscsi_ntop4(src+12, tp, sizeof tmp - (tp - tmp)))
+				return NULL;
+			tp += strlen(tp);
+			break;
+		}
+		inc = snprintf(tp, 5, "%x", words[i]);
+		/* A 16-bit word formats to at most 4 hex digits. */
+		if (inc >= 5)
+			return NULL;
+		tp += inc;
+	}
+	/* Was it a trailing run of 0x00's? */
+	if (best.base != -1 && (best.base + best.len) ==
+	    (NS_IN6ADDRSZ / NS_INT16SZ))
+		*tp++ = ':';
+	*tp++ = '\0';
+
+	/*
+	 * Check for overflow, copy, and we're done.
+	 */
+	if ((size_t)(tp - tmp) > size) {
+		printk(KERN_ERR "(size_t)(tp - tmp): %d > size: %d\n",
+			(int)(tp - tmp), (int)size);
+		return NULL;
+	}
+	memcpy(dst, tmp, tp - tmp);
+	return dst;
+}
+
+/* int
+ * inet_pton4(src, dst)
+ *	like inet_aton() but without all the hexadecimal and shorthand.
+ * return:
+ *	1 if `src' is a valid dotted quad, else 0.
+ * notice:
+ *	does not touch `dst' unless it's returning 1.
+ * author:
+ *	Paul Vixie, 1996.
+ */
+static int iscsi_pton4(const char *src, unsigned char *dst)
+{
+	static const char digits[] = "0123456789";
+	int saw_digit, octets, ch;
+	unsigned char tmp[NS_INADDRSZ], *tp;
+
+	saw_digit = 0;
+	octets = 0;
+	*(tp = tmp) = 0;
+	while ((ch = *src++) != '\0') {
+		const char *pch;
+
+		pch = strchr(digits, ch);
+		if (pch != NULL) {
+			unsigned int new = *tp * 10 + (pch - digits);
+
+			if (new > 255)
+				return 0;
+			*tp = new;
+			if (!saw_digit) {
+				if (++octets > 4)
+					return 0;
+				saw_digit = 1;
+			}
+		} else if (ch == '.' && saw_digit) {
+			if (octets == 4)
+				return 0;
+			*++tp = 0;
+			saw_digit = 0;
+		} else
+			return 0;
+	}
+	if (octets < 4)
+		return 0;
+	memcpy(dst, tmp, NS_INADDRSZ);
+	return 1;
+}
+
+/* int
+ * inet_pton6(src, dst)
+ *	convert presentation level address to network order binary form.
+ * return:
+ *	1 if `src' is a valid [RFC1884 2.2] address, else 0.
+ * notice:
+ *	(1) does not touch `dst' unless it's returning 1.
+ *	(2) :: in a full address is silently ignored.
+ * credit:
+ *	inspired by Mark Andrews.
+ * author:
+ *	Paul Vixie, 1996.
+ */
+int iscsi_pton6(const char *src, unsigned char *dst)
+{
+	static const char xdigits_l[] = "0123456789abcdef",
+			  xdigits_u[] = "0123456789ABCDEF";
+	unsigned char tmp[NS_IN6ADDRSZ], *tp, *endp, *colonp;
+	const char *xdigits, *curtok;
+	int ch, saw_xdigit;
+	unsigned int val;
+
+	memset((tp = tmp), '\0', NS_IN6ADDRSZ);
+	endp = tp + NS_IN6ADDRSZ;
+	colonp = NULL;
+	/* Leading :: requires some special handling. */
+	if (*src == ':')
+		if (*++src != ':')
+			return 0;
+	curtok = src;
+	saw_xdigit = 0;
+	val = 0;
+	while ((ch = *src++) != '\0') {
+		const char *pch;
+
+		pch = strchr((xdigits = xdigits_l), ch);
+		if (pch == NULL)
+			pch = strchr((xdigits = xdigits_u), ch);
+		if (pch != NULL) {
+			val <<= 4;
+			val |= (pch - xdigits);
+			if (val > 0xffff)
+				return 0;
+			saw_xdigit = 1;
+			continue;
+		}
+		if (ch == ':') {
+			curtok = src;
+			if (!saw_xdigit) {
+				if (colonp)
+					return 0;
+				colonp = tp;
+				continue;
+			}
+			if (tp + NS_INT16SZ > endp)
+				return 0;
+			*tp++ = (unsigned char) (val >> 8) & 0xff;
+			*tp++ = (unsigned char) val & 0xff;
+			saw_xdigit = 0;
+			val = 0;
+			continue;
+		}
+		if (ch == '.' && ((tp + NS_INADDRSZ) <= endp) &&
+		    iscsi_pton4(curtok, tp) > 0) {
+			tp += NS_INADDRSZ;
+			saw_xdigit = 0;
+			break;	/* '\0' was seen by inet_pton4(). */
+		}
+		return 0;
+	}
+	if (saw_xdigit) {
+		if (tp + NS_INT16SZ > endp)
+			return 0;
+		*tp++ = (unsigned char) (val >> 8) & 0xff;
+		*tp++ = (unsigned char) val & 0xff;
+	}
+	if (colonp != NULL) {
+		/*
+		 * Since some memmove()'s erroneously fail to handle
+		 * overlapping regions, we'll do the shift by hand.
+		 */
+		const int n = tp - colonp;
+		int i;
+
+		for (i = 1; i <= n; i++) {
+			endp[-i] = colonp[n - i];
+			colonp[n - i] = 0;
+		}
+		tp = endp;
+	}
+	if (tp != endp)
+		return 0;
+	memcpy(dst, tmp, NS_IN6ADDRSZ);
+	return 1;
+}
+
+/*	iscsi_get_conn_from_cid():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_get_conn_from_cid(struct iscsi_session *sess, u16 cid)
+{
+	struct iscsi_conn *conn;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+		if ((conn->cid == cid) &&
+		    (conn->conn_state == TARG_CONN_STATE_LOGGED_IN)) {
+			iscsi_inc_conn_usage_count(conn);
+			spin_unlock_bh(&sess->conn_lock);
+			return conn;
+		}
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	return NULL;
+}
+
+/*	iscsi_get_conn_from_cid_rcfr():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_get_conn_from_cid_rcfr(struct iscsi_session *sess, u16 cid)
+{
+	struct iscsi_conn *conn;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+		if (conn->cid == cid) {
+			iscsi_inc_conn_usage_count(conn);
+			spin_lock(&conn->state_lock);
+			atomic_set(&conn->connection_wait_rcfr, 1);
+			spin_unlock(&conn->state_lock);
+			spin_unlock_bh(&sess->conn_lock);
+			return conn;
+		}
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	return NULL;
+}
+
+/*	iscsi_check_conn_usage_count():
+ *
+ *
+ */
+void iscsi_check_conn_usage_count(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->conn_usage_lock);
+	if (atomic_read(&conn->conn_usage_count)) {
+		atomic_set(&conn->conn_waiting_on_uc, 1);
+		spin_unlock_bh(&conn->conn_usage_lock);
+
+		down(&conn->conn_waiting_on_uc_sem);
+		return;
+	}
+	spin_unlock_bh(&conn->conn_usage_lock);
+}
+
+/*	iscsi_dec_conn_usage_count():
+ *
+ *
+ */
+void iscsi_dec_conn_usage_count(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->conn_usage_lock);
+	atomic_dec(&conn->conn_usage_count);
+
+	if (!atomic_read(&conn->conn_usage_count) &&
+	     atomic_read(&conn->conn_waiting_on_uc))
+		up(&conn->conn_waiting_on_uc_sem);
+
+	spin_unlock_bh(&conn->conn_usage_lock);
+}
+
+/*	iscsi_inc_conn_usage_count():
+ *
+ *
+ */
+void iscsi_inc_conn_usage_count(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->conn_usage_lock);
+	atomic_inc(&conn->conn_usage_count);
+	spin_unlock_bh(&conn->conn_usage_lock);
+}
+
+/*	iscsi_async_msg_timer_function():
+ *
+ *
+ */
+void iscsi_async_msg_timer_function(unsigned long data)
+{
+	up((struct semaphore *) data);
+}
+
+/*	iscsi_check_for_active_network_device():
+ *
+ *
+ */
+int iscsi_check_for_active_network_device(struct iscsi_conn *conn)
+{
+	struct net_device *net_dev;
+
+	if (!conn->net_if) {
+		printk(KERN_ERR "struct iscsi_conn->net_if is NULL for CID:"
+			" %hu\n", conn->cid);
+		return 0;
+	}
+	net_dev = conn->net_if;
+
+	return netif_carrier_ok(net_dev);
+}
+
+/*	iscsi_handle_netif_timeout():
+ *
+ *
+ */
+static void iscsi_handle_netif_timeout(unsigned long data)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *) data;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&conn->netif_lock);
+	if (conn->netif_timer_flags & NETIF_TF_STOP) {
+		spin_unlock_bh(&conn->netif_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+	conn->netif_timer_flags &= ~NETIF_TF_RUNNING;
+
+	if (iscsi_check_for_active_network_device((void *)conn)) {
+		iscsi_start_netif_timer(conn);
+		spin_unlock_bh(&conn->netif_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+
+	printk(KERN_ERR "Detected PHY loss on Network Interface: %s for iSCSI"
+		" CID: %hu on SID: %u\n", conn->net_dev, conn->cid,
+			SESS(conn)->sid);
+
+	spin_unlock_bh(&conn->netif_lock);
+
+	iscsi_cause_connection_reinstatement(conn, 0);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*	iscsi_get_network_interface_from_conn():
+ *
+ *
+ */
+void iscsi_get_network_interface_from_conn(struct iscsi_conn *conn)
+{
+	struct net_device *net_dev;
+
+	net_dev = dev_get_by_name(&init_net, conn->net_dev);
+	if (!(net_dev)) {
+		printk(KERN_ERR "Unable to locate active network interface:"
+			" %s\n", strlen(conn->net_dev) ?
+			conn->net_dev : "None");
+		conn->net_if = NULL;
+		return;
+	}
+
+	conn->net_if = net_dev;
+}
+
+/*      iscsi_start_netif_timer():
+ *
+ *	Called with conn->netif_lock held.
+ */
+void iscsi_start_netif_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_portal_group *tpg = ISCSI_TPG_C(conn);
+
+	if (!conn->net_if)
+		return;
+
+	if (conn->netif_timer_flags & NETIF_TF_RUNNING)
+		return;
+
+	init_timer(&conn->transport_timer);
+	SETUP_TIMER(conn->transport_timer, ISCSI_TPG_ATTRIB(tpg)->netif_timeout,
+		conn, iscsi_handle_netif_timeout);
+	conn->netif_timer_flags &= ~NETIF_TF_STOP;
+	conn->netif_timer_flags |= NETIF_TF_RUNNING;
+	add_timer(&conn->transport_timer);
+}
+
+/*	iscsi_stop_netif_timer():
+ *
+ *
+ */
+void iscsi_stop_netif_timer(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->netif_lock);
+	if (!(conn->netif_timer_flags & NETIF_TF_RUNNING)) {
+		spin_unlock_bh(&conn->netif_lock);
+		return;
+	}
+	conn->netif_timer_flags |= NETIF_TF_STOP;
+	spin_unlock_bh(&conn->netif_lock);
+
+	del_timer_sync(&conn->transport_timer);
+
+	spin_lock_bh(&conn->netif_lock);
+	conn->netif_timer_flags &= ~NETIF_TF_RUNNING;
+	spin_unlock_bh(&conn->netif_lock);
+}
+
+/*	iscsi_handle_nopin_response_timeout():
+ *
+ *
+ */
+static void iscsi_handle_nopin_response_timeout(unsigned long data)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *) data;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_STOP) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+
+	TRACE(TRACE_TIMER, "Did not receive response to NOPIN on CID: %hu on"
+		" SID: %u, failing connection.\n", conn->cid,
+			SESS(conn)->sid);
+	conn->nopin_response_timer_flags &= ~NOPIN_RESPONSE_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	{
+	struct iscsi_portal_group *tpg = conn->sess->tpg;
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	if (tiqn) {
+		spin_lock_bh(&tiqn->sess_err_stats.lock);
+		strcpy(tiqn->sess_err_stats.last_sess_fail_rem_name,
+				(void *)SESS_OPS_C(conn)->InitiatorName);
+		tiqn->sess_err_stats.last_sess_failure_type =
+				ISCSI_SESS_ERR_CXN_TIMEOUT;
+		tiqn->sess_err_stats.cxn_timeout_errors++;
+		SESS(conn)->conn_timeout_errors++;
+		spin_unlock_bh(&tiqn->sess_err_stats.lock);
+	}
+	}
+
+	iscsi_cause_connection_reinstatement(conn, 0);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*	iscsi_mod_nopin_response_timer():
+ *
+ *
+ */
+void iscsi_mod_nopin_response_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (!(conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_RUNNING)) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+
+	MOD_TIMER(&conn->nopin_response_timer, na->nopin_response_timeout);
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_start_nopin_response_timer():
+ *
+ *	Called with conn->nopin_timer_lock held.
+ */
+void iscsi_start_nopin_response_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_RUNNING) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+
+	init_timer(&conn->nopin_response_timer);
+	SETUP_TIMER(conn->nopin_response_timer, na->nopin_response_timeout,
+		conn, iscsi_handle_nopin_response_timeout);
+	conn->nopin_response_timer_flags &= ~NOPIN_RESPONSE_TF_STOP;
+	conn->nopin_response_timer_flags |= NOPIN_RESPONSE_TF_RUNNING;
+	add_timer(&conn->nopin_response_timer);
+
+	TRACE(TRACE_TIMER, "Started NOPIN Response Timer on CID: %d to %u"
+		" seconds\n", conn->cid, na->nopin_response_timeout);
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_stop_nopin_response_timer():
+ *
+ *
+ */
+void iscsi_stop_nopin_response_timer(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (!(conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_RUNNING)) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+	conn->nopin_response_timer_flags |= NOPIN_RESPONSE_TF_STOP;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	del_timer_sync(&conn->nopin_response_timer);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	conn->nopin_response_timer_flags &= ~NOPIN_RESPONSE_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_handle_nopin_timeout():
+ *
+ *
+ */
+static void iscsi_handle_nopin_timeout(unsigned long data)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *) data;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_timer_flags & NOPIN_TF_STOP) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+	conn->nopin_timer_flags &= ~NOPIN_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	iscsi_add_nopin(conn, 1);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*
+ * Called with conn->nopin_timer_lock held.
+ */
+void __iscsi_start_nopin_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+	/*
+	 * NOPIN timeout is disabled.
+	 */
+	if (!(na->nopin_timeout))
+		return;
+
+	if (conn->nopin_timer_flags & NOPIN_TF_RUNNING)
+		return;
+
+	init_timer(&conn->nopin_timer);
+	SETUP_TIMER(conn->nopin_timer, na->nopin_timeout, conn,
+		iscsi_handle_nopin_timeout);
+	conn->nopin_timer_flags &= ~NOPIN_TF_STOP;
+	conn->nopin_timer_flags |= NOPIN_TF_RUNNING;
+	add_timer(&conn->nopin_timer);
+
+	TRACE(TRACE_TIMER, "Started NOPIN Timer on CID: %d at %u second"
+		" interval\n", conn->cid, na->nopin_timeout);
+}
+
+/*	iscsi_start_nopin_timer():
+ *
+ *
+ */
+void iscsi_start_nopin_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+	/*
+	 * NOPIN timeout is disabled.
+	 */
+	if (!(na->nopin_timeout))
+		return;
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_timer_flags & NOPIN_TF_RUNNING) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+
+	init_timer(&conn->nopin_timer);
+	SETUP_TIMER(conn->nopin_timer, na->nopin_timeout, conn,
+			iscsi_handle_nopin_timeout);
+	conn->nopin_timer_flags &= ~NOPIN_TF_STOP;
+	conn->nopin_timer_flags |= NOPIN_TF_RUNNING;
+	add_timer(&conn->nopin_timer);
+
+	TRACE(TRACE_TIMER, "Started NOPIN Timer on CID: %d at %u second"
+			" interval\n", conn->cid, na->nopin_timeout);
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_stop_nopin_timer():
+ *
+ *
+ */
+void iscsi_stop_nopin_timer(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (!(conn->nopin_timer_flags & NOPIN_TF_RUNNING)) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+	conn->nopin_timer_flags |= NOPIN_TF_STOP;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	del_timer_sync(&conn->nopin_timer);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	conn->nopin_timer_flags &= ~NOPIN_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+int iscsi_allocate_iovecs_for_cmd(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+	u32 iov_count = (T_TASK(se_cmd)->t_tasks_se_num == 0) ? 1 :
+				T_TASK(se_cmd)->t_tasks_se_num;
+
+	iov_count += TRANSPORT_IOV_DATA_BUFFER;
+
+	cmd->iov_data = kzalloc(iov_count * sizeof(struct iovec), GFP_KERNEL);
+	if (!(cmd->iov_data))
+		return -ENOMEM;
+
+	cmd->orig_iov_data_count = iov_count;
+	return 0;
+}
+
+/*	iscsi_send_tx_data():
+ *
+ *
+ */
+int iscsi_send_tx_data(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	int use_misc)
+{
+	int tx_sent, tx_size;
+	u32 iov_count;
+	struct iovec *iov;
+
+send_data:
+	tx_size = cmd->tx_size;
+
+	if (!use_misc) {
+		iov = &cmd->iov_data[0];
+		iov_count = cmd->iov_data_count;
+	} else {
+		iov = &cmd->iov_misc[0];
+		iov_count = cmd->iov_misc_count;
+	}
+
+	tx_sent = tx_data(conn, &iov[0], iov_count, tx_size);
+	if (tx_size != tx_sent) {
+		if (tx_sent == -EAGAIN) {
+			printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+			goto send_data;
+		} else
+			return -1;
+	}
+	cmd->tx_size = 0;
+
+	return 0;
+}
+
+int iscsi_fe_sendpage_sg(
+	struct se_unmap_sg *u_sg,
+	struct iscsi_conn *conn)
+{
+	int tx_sent;
+	struct iscsi_cmd *cmd = (struct iscsi_cmd *)u_sg->fabric_cmd;
+	struct se_cmd *se_cmd = SE_CMD(cmd);
+	u32 len = cmd->tx_size, pg_len, se_len, se_off, tx_size;
+	struct iovec *iov = &cmd->iov_data[0];
+	struct page *page;
+	struct se_mem *se_mem = u_sg->cur_se_mem;
+
+send_hdr:
+	tx_size = (CONN_OPS(conn)->HeaderDigest) ? ISCSI_HDR_LEN + CRC_LEN :
+			ISCSI_HDR_LEN;
+	tx_sent = tx_data(conn, iov, 1, tx_size);
+	if (tx_size != tx_sent) {
+		if (tx_sent == -EAGAIN) {
+			printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+			goto send_hdr;
+		}
+		return -1;
+	}
+
+	len -= tx_size;
+	len -= u_sg->padding;
+	if (CONN_OPS(conn)->DataDigest)
+		len -= CRC_LEN;
+
+	/*
+	 * Start calculating from the first page of current struct se_mem.
+	 */
+	page = se_mem->se_page;
+	pg_len = (PAGE_SIZE - se_mem->se_off);
+	se_len = se_mem->se_len;
+	if (se_len < pg_len)
+		pg_len = se_len;
+	se_off = se_mem->se_off;
+#if 0
+	printk(KERN_INFO "se: %p page: %p se_len: %d se_off: %d pg_len: %d\n",
+		se_mem, page, se_len, se_off, pg_len);
+#endif
+	/*
+	 * Calculate new se_len and se_off based upon u_sg->t_offset into
+	 * the current struct se_mem and possibly a different page.
+	 */
+	while (u_sg->t_offset) {
+#if 0
+		printk(KERN_INFO "u_sg->t_offset: %d, page: %p se_len: %d"
+			" se_off: %d pg_len: %d\n", u_sg->t_offset, page,
+			se_len, se_off, pg_len);
+#endif
+		if (u_sg->t_offset >= pg_len) {
+			u_sg->t_offset -= pg_len;
+			se_len -= pg_len;
+			se_off = 0;
+			pg_len = PAGE_SIZE;
+			page++;
+		} else {
+			se_off += u_sg->t_offset;
+			se_len -= u_sg->t_offset;
+			u_sg->t_offset = 0;
+		}
+	}
+
+	/*
+	 * Perform sendpage() for each page in the struct se_mem
+	 */
+	while (len) {
+#if 0
+		printk(KERN_INFO "len: %d page: %p se_len: %d se_off: %d\n",
+			len, page, se_len, se_off);
+#endif
+		if (se_len > len)
+			se_len = len;
+send_pg:
+		tx_sent = conn->sock->ops->sendpage(conn->sock,
+				page, se_off, se_len, 0);
+		if (tx_sent != se_len) {
+			if (tx_sent == -EAGAIN) {
+				printk(KERN_ERR "tcp_sendpage() returned"
+						" -EAGAIN\n");
+				goto send_pg;
+			}
+
+			printk(KERN_ERR "tcp_sendpage() failure: %d\n",
+					tx_sent);
+			return -1;
+		}
+
+		len -= se_len;
+		if (!(len))
+			break;
+
+		se_len -= tx_sent;
+		if (!(se_len)) {
+			list_for_each_entry_continue(se_mem,
+					T_TASK(se_cmd)->t_mem_list, se_list)
+				break;
+
+			if (!se_mem) {
+				printk(KERN_ERR "Unable to locate next struct se_mem\n");
+				return -1;
+			}
+
+			se_len = se_mem->se_len;
+			se_off = se_mem->se_off;
+			page = se_mem->se_page;
+		} else {
+			se_len = PAGE_SIZE;
+			se_off = 0;
+			page++;
+		}
+	}
+
+send_padding:
+	if (u_sg->padding) {
+		struct iovec *iov_p =
+			&cmd->iov_data[cmd->iov_data_count-2];
+
+		tx_sent = tx_data(conn, iov_p, 1, u_sg->padding);
+		if (u_sg->padding != tx_sent) {
+			if (tx_sent == -EAGAIN) {
+				printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+				goto send_padding;
+			}
+			return -1;
+		}
+	}
+
+send_datacrc:
+	if (CONN_OPS(conn)->DataDigest) {
+		struct iovec *iov_d =
+			&cmd->iov_data[cmd->iov_data_count-1];
+
+		tx_sent = tx_data(conn, iov_d, 1, CRC_LEN);
+		if (CRC_LEN != tx_sent) {
+			if (tx_sent == -EAGAIN) {
+				printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+				goto send_datacrc;
+			}
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/*      iscsi_tx_login_rsp():
+ *
+ *      This function is mainly used for sending an ISCSI_TARG_LOGIN_RSP PDU
+ *      back to the Initiator when an exception condition occurs, with the
+ *      errors set in status_class and status_detail.
+ *
+ *      Parameters:     iSCSI Connection, Status Class, Status Detail.
+ *      Returns:        0 on success, -1 on error.
+ */
+int iscsi_tx_login_rsp(struct iscsi_conn *conn, u8 status_class, u8 status_detail)
+{
+	u8 iscsi_hdr[ISCSI_HDR_LEN];
+	int err;
+	struct iovec iov;
+	struct iscsi_login_rsp *hdr;
+
+	iscsi_collect_login_stats(conn, status_class, status_detail);
+
+	memset((void *)&iov, 0, sizeof(struct iovec));
+	memset((void *)&iscsi_hdr, 0x0, ISCSI_HDR_LEN);
+
+	hdr	= (struct iscsi_login_rsp *)&iscsi_hdr;
+	hdr->opcode		= ISCSI_OP_LOGIN_RSP;
+	hdr->status_class	= status_class;
+	hdr->status_detail	= status_detail;
+	hdr->itt		= cpu_to_be32(conn->login_itt);
+
+	iov.iov_base		= &iscsi_hdr;
+	iov.iov_len		= ISCSI_HDR_LEN;
+
+	PRINT_BUFF(iscsi_hdr, ISCSI_HDR_LEN);
+
+	err = tx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+	if (err != ISCSI_HDR_LEN) {
+		printk(KERN_ERR "tx_data returned less than expected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_print_session_params():
+ *
+ *
+ */
+void iscsi_print_session_params(struct iscsi_session *sess)
+{
+	struct iscsi_conn *conn;
+
+	printk(KERN_INFO "-----------------------------[Session Params for"
+		" SID: %u]-----------------------------\n", sess->sid);
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list)
+		iscsi_dump_conn_ops(conn->conn_ops);
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_dump_sess_ops(sess->sess_ops);
+}
+
+/*	iscsi_do_rx_data():
+ *
+ *
+ */
+static inline int iscsi_do_rx_data(
+	struct iscsi_conn *conn,
+	struct iscsi_data_count *count)
+{
+	int data = count->data_length, rx_loop = 0, total_rx = 0;
+	u32 rx_marker_val[count->ss_marker_count], rx_marker_iov = 0;
+	struct iovec iov[count->ss_iov_count];
+	mm_segment_t oldfs;
+	struct msghdr msg;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	memset(&msg, 0, sizeof(struct msghdr));
+
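+	/*
+	 * When sync and steering is active, rebuild the receive iovec so the
+	 * 8-byte markers arriving every OFMarkInt 32-bit words are steered
+	 * into rx_marker_val[] instead of the caller's data buffers.
+	 */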
+	if (count->sync_and_steering) {
+		int size = 0;
+		u32 i, orig_iov_count = 0;
+		u32 orig_iov_len = 0, orig_iov_loc = 0;
+		u32 iov_count = 0, per_iov_bytes = 0;
+		u32 *rx_marker, old_rx_marker = 0;
+		struct iovec *iov_record;
+
+		memset((void *)&rx_marker_val, 0,
+				count->ss_marker_count * sizeof(u32));
+		memset((void *)&iov, 0,
+				count->ss_iov_count * sizeof(struct iovec));
+
+		iov_record = count->iov;
+		orig_iov_count = count->iov_count;
+		rx_marker = &conn->of_marker;
+
+		i = 0;
+		size = data;
+		orig_iov_len = iov_record[orig_iov_loc].iov_len;
+		while (size > 0) {
+			TRACE(TRACE_SSLR, "rx_data: #1 orig_iov_len %u,"
+			" orig_iov_loc %u\n", orig_iov_len, orig_iov_loc);
+			TRACE(TRACE_SSLR, "rx_data: #2 rx_marker %u, size"
+				" %u\n", *rx_marker, size);
+
+			if (orig_iov_len >= *rx_marker) {
+				iov[iov_count].iov_len = *rx_marker;
+				iov[iov_count++].iov_base =
+					(iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&rx_marker_val[rx_marker_iov++];
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&rx_marker_val[rx_marker_iov++];
+				old_rx_marker = *rx_marker;
+
+				/*
+				 * OFMarkInt is in 32-bit words.
+				 */
+				*rx_marker = (CONN_OPS(conn)->OFMarkInt * 4);
+				size -= old_rx_marker;
+				orig_iov_len -= old_rx_marker;
+				per_iov_bytes += old_rx_marker;
+
+				TRACE(TRACE_SSLR, "rx_data: #3 new_rx_marker"
+					" %u, size %u\n", *rx_marker, size);
+			} else {
+				iov[iov_count].iov_len = orig_iov_len;
+				iov[iov_count++].iov_base =
+					(iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				per_iov_bytes = 0;
+				*rx_marker -= orig_iov_len;
+				size -= orig_iov_len;
+
+				if (size)
+					orig_iov_len =
+					iov_record[++orig_iov_loc].iov_len;
+
+				TRACE(TRACE_SSLR, "rx_data: #4 new_rx_marker"
+					" %u, size %u\n", *rx_marker, size);
+			}
+		}
+		data += (rx_marker_iov * (MARKER_SIZE / 2));
+
+		msg.msg_iov	= &iov[0];
+		msg.msg_iovlen	= iov_count;
+
+		if (iov_count > count->ss_iov_count) {
+			printk(KERN_ERR "iov_count: %d, count->ss_iov_count:"
+				" %d\n", iov_count, count->ss_iov_count);
+			return -1;
+		}
+		if (rx_marker_iov > count->ss_marker_count) {
+			printk(KERN_ERR "rx_marker_iov: %d, count->ss_marker"
+				"_count: %d\n", rx_marker_iov,
+				count->ss_marker_count);
+			return -1;
+		}
+	} else {
+		msg.msg_iov	= count->iov;
+		msg.msg_iovlen	= count->iov_count;
+	}
+
+	while (total_rx < data) {
+		oldfs = get_fs();
+		set_fs(get_ds());
+
+		conn->sock->sk->sk_allocation = GFP_ATOMIC;
+		rx_loop = sock_recvmsg(conn->sock, &msg,
+				(data - total_rx), MSG_WAITALL);
+
+		set_fs(oldfs);
+
+		if (rx_loop <= 0) {
+			TRACE(TRACE_NET, "rx_loop: %d total_rx: %d\n",
+				rx_loop, total_rx);
+			return rx_loop;
+		}
+		total_rx += rx_loop;
+		TRACE(TRACE_NET, "rx_loop: %d, total_rx: %d, data: %d\n",
+				rx_loop, total_rx, data);
+	}
+
+	if (count->sync_and_steering) {
+		int j;
+		for (j = 0; j < rx_marker_iov; j++) {
+			TRACE(TRACE_SSLR, "rx_data: #5 j: %d, offset: %d\n",
+				j, rx_marker_val[j]);
+			conn->of_marker_offset = rx_marker_val[j];
+		}
+		total_rx -= (rx_marker_iov * (MARKER_SIZE / 2));
+	}
+
+	return total_rx;
+}
+
+/*	iscsi_do_tx_data():
+ *
+ *
+ */
+static inline int iscsi_do_tx_data(
+	struct iscsi_conn *conn,
+	struct iscsi_data_count *count)
+{
+	int data = count->data_length, total_tx = 0, tx_loop = 0;
+	u32 tx_marker_val[count->ss_marker_count], tx_marker_iov = 0;
+	struct iovec iov[count->ss_iov_count];
+	mm_segment_t oldfs;
+	struct msghdr msg;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	if (data <= 0) {
+		printk(KERN_ERR "Data length is: %d\n", data);
+		return -1;
+	}
+
+	memset(&msg, 0, sizeof(struct msghdr));
+
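+	/*
+	 * When sync and steering is active, rebuild the transmit iovec so an
+	 * 8-byte marker (two 32-bit words taken from tx_marker_val[]) is
+	 * interleaved into the outbound stream every IFMarkInt 32-bit words.
+	 */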
+	if (count->sync_and_steering) {
+		int size = 0;
+		u32 i, orig_iov_count = 0;
+		u32 orig_iov_len = 0, orig_iov_loc = 0;
+		u32 iov_count = 0, per_iov_bytes = 0;
+		u32 *tx_marker, old_tx_marker = 0;
+		struct iovec *iov_record;
+
+		memset((void *)&tx_marker_val, 0,
+			count->ss_marker_count * sizeof(u32));
+		memset((void *)&iov, 0,
+			count->ss_iov_count * sizeof(struct iovec));
+
+		iov_record = count->iov;
+		orig_iov_count = count->iov_count;
+		tx_marker = &conn->if_marker;
+
+		i = 0;
+		size = data;
+		orig_iov_len = iov_record[orig_iov_loc].iov_len;
+		while (size > 0) {
+			TRACE(TRACE_SSLT, "tx_data: #1 orig_iov_len %u,"
+			" orig_iov_loc %u\n", orig_iov_len, orig_iov_loc);
+			TRACE(TRACE_SSLT, "tx_data: #2 tx_marker %u, size"
+				" %u\n", *tx_marker, size);
+
+			if (orig_iov_len >= *tx_marker) {
+				iov[iov_count].iov_len = *tx_marker;
+				iov[iov_count++].iov_base =
+					(iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				tx_marker_val[tx_marker_iov] =
+						(size - *tx_marker);
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&tx_marker_val[tx_marker_iov++];
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&tx_marker_val[tx_marker_iov++];
+				old_tx_marker = *tx_marker;
+
+				/*
+				 * IFMarkInt is in 32-bit words.
+				 */
+				*tx_marker = (CONN_OPS(conn)->IFMarkInt * 4);
+				size -= old_tx_marker;
+				orig_iov_len -= old_tx_marker;
+				per_iov_bytes += old_tx_marker;
+
+				TRACE(TRACE_SSLT, "tx_data: #3 new_tx_marker"
+					" %u, size %u\n", *tx_marker, size);
+				TRACE(TRACE_SSLT, "tx_data: #4 offset %u\n",
+					tx_marker_val[tx_marker_iov-1]);
+			} else {
+				iov[iov_count].iov_len = orig_iov_len;
+				iov[iov_count++].iov_base
+					= (iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				per_iov_bytes = 0;
+				*tx_marker -= orig_iov_len;
+				size -= orig_iov_len;
+
+				if (size)
+					orig_iov_len =
+					iov_record[++orig_iov_loc].iov_len;
+
+				TRACE(TRACE_SSLT, "tx_data: #5 new_tx_marker"
+					" %u, size %u\n", *tx_marker, size);
+			}
+		}
+
+		data += (tx_marker_iov * (MARKER_SIZE / 2));
+
+		msg.msg_iov	= &iov[0];
+		msg.msg_iovlen = iov_count;
+
+		if (iov_count > count->ss_iov_count) {
+			printk(KERN_ERR "iov_count: %d, count->ss_iov_count:"
+				" %d\n", iov_count, count->ss_iov_count);
+			return -1;
+		}
+		if (tx_marker_iov > count->ss_marker_count) {
+			printk(KERN_ERR "tx_marker_iov: %d, count->ss_marker"
+				"_count: %d\n", tx_marker_iov,
+				count->ss_marker_count);
+			return -1;
+		}
+	} else {
+		msg.msg_iov	= count->iov;
+		msg.msg_iovlen	= count->iov_count;
+	}
+
+	while (total_tx < data) {
+		oldfs = get_fs();
+		set_fs(get_ds());
+
+		conn->sock->sk->sk_allocation = GFP_ATOMIC;
+		tx_loop = sock_sendmsg(conn->sock, &msg, (data - total_tx));
+
+		set_fs(oldfs);
+
+		if (tx_loop <= 0) {
+			TRACE(TRACE_NET, "tx_loop: %d total_tx %d\n",
+				tx_loop, total_tx);
+			return tx_loop;
+		}
+		total_tx += tx_loop;
+		TRACE(TRACE_NET, "tx_loop: %d, total_tx: %d, data: %d\n",
+					tx_loop, total_tx, data);
+	}
+
+	if (count->sync_and_steering)
+		total_tx -= (tx_marker_iov * (MARKER_SIZE / 2));
+
+	return total_tx;
+}
+
+/*	rx_data():
+ *
+ *
+ */
+int rx_data(
+	struct iscsi_conn *conn,
+	struct iovec *iov,
+	int iov_count,
+	int data)
+{
+	struct iscsi_data_count c;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	memset(&c, 0, sizeof(struct iscsi_data_count));
+	c.iov = iov;
+	c.iov_count = iov_count;
+	c.data_length = data;
+	c.type = ISCSI_RX_DATA;
+
+	if (CONN_OPS(conn)->OFMarker &&
+	   (conn->conn_state >= TARG_CONN_STATE_LOGGED_IN)) {
+		if (iscsi_determine_sync_and_steering_counts(conn, &c) < 0)
+			return -1;
+	}
+
+	return iscsi_do_rx_data(conn, &c);
+}
+
+/*	tx_data():
+ *
+ *
+ */
+int tx_data(
+	struct iscsi_conn *conn,
+	struct iovec *iov,
+	int iov_count,
+	int data)
+{
+	struct iscsi_data_count c;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	memset(&c, 0, sizeof(struct iscsi_data_count));
+	c.iov = iov;
+	c.iov_count = iov_count;
+	c.data_length = data;
+	c.type = ISCSI_TX_DATA;
+
+	if (CONN_OPS(conn)->IFMarker &&
+	   (conn->conn_state >= TARG_CONN_STATE_LOGGED_IN)) {
+		if (iscsi_determine_sync_and_steering_counts(conn, &c) < 0)
+			return -1;
+	}
+
+	return iscsi_do_tx_data(conn, &c);
+}
+
+/*
+ * Collect login statistics
+ */
+void iscsi_collect_login_stats(
+	struct iscsi_conn *conn,
+	u8 status_class,
+	u8 status_detail)
+{
+	struct iscsi_param *intrname = NULL;
+	struct iscsi_tiqn *tiqn;
+	struct iscsi_login_stats *ls;
+
+	tiqn = iscsi_snmp_get_tiqn(conn);
+	if (!(tiqn))
+		return;
+
+	ls = &tiqn->login_stats;
+
+	spin_lock(&ls->lock);
+	if (((conn->login_ip == ls->last_intr_fail_addr) ||
+	    !(memcmp(conn->ipv6_login_ip, ls->last_intr_fail_ip6_addr,
+		IPV6_ADDRESS_SPACE))) &&
+	    ((get_jiffies_64() - ls->last_fail_time) < 10)) {
+		/* We already have the failure info for this login */
+		spin_unlock(&ls->lock);
+		return;
+	}
+
+	if (status_class == ISCSI_STATUS_CLS_SUCCESS)
+		ls->accepts++;
+	else if (status_class == ISCSI_STATUS_CLS_REDIRECT) {
+		ls->redirects++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_REDIRECT;
+	} else if ((status_class == ISCSI_STATUS_CLS_INITIATOR_ERR)  &&
+		 (status_detail == ISCSI_LOGIN_STATUS_AUTH_FAILED)) {
+		ls->authenticate_fails++;
+		ls->last_fail_type =  ISCSI_LOGIN_FAIL_AUTHENTICATE;
+	} else if ((status_class == ISCSI_STATUS_CLS_INITIATOR_ERR)  &&
+		 (status_detail == ISCSI_LOGIN_STATUS_TGT_FORBIDDEN)) {
+		ls->authorize_fails++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_AUTHORIZE;
+	} else if ((status_class == ISCSI_STATUS_CLS_INITIATOR_ERR) &&
+		 (status_detail == ISCSI_LOGIN_STATUS_INIT_ERR)) {
+		ls->negotiate_fails++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_NEGOTIATE;
+	} else {
+		ls->other_fails++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_OTHER;
+	}
+
+	/* Save initiator name, ip address and time, if it is a failed login */
+	if (status_class != ISCSI_STATUS_CLS_SUCCESS) {
+		if (conn->param_list)
+			intrname = iscsi_find_param_from_key(INITIATORNAME,
+							     conn->param_list);
+		strcpy(ls->last_intr_fail_name,
+		       (intrname ? intrname->value : "Unknown"));
+
+		if (conn->ipv6_login_ip != NULL) {
+			memcpy(ls->last_intr_fail_ip6_addr,
+				conn->ipv6_login_ip, IPV6_ADDRESS_SPACE);
+			ls->last_intr_fail_addr = 0;
+		} else {
+			memset(ls->last_intr_fail_ip6_addr, 0,
+				IPV6_ADDRESS_SPACE);
+			ls->last_intr_fail_addr = conn->login_ip;
+		}
+		ls->last_fail_time = get_jiffies_64();
+	}
+
+	spin_unlock(&ls->lock);
+}
+
+struct iscsi_tiqn *iscsi_snmp_get_tiqn(struct iscsi_conn *conn)
+{
+	struct iscsi_portal_group *tpg;
+
+	if (!(conn) || !(conn->sess))
+		return NULL;
+
+	tpg = conn->sess->tpg;
+	if (!(tpg))
+		return NULL;
+
+	if (!(tpg->tpg_tiqn))
+		return NULL;
+
+	return tpg->tpg_tiqn;
+}
+
+int iscsi_build_sendtargets_response(struct iscsi_cmd *cmd)
+{
+	char *ip, *ip_ex, *payload = NULL;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_np_ex *np_ex;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tiqn *tiqn;
+	struct iscsi_tpg_np *tpg_np;
+	int buffer_len, end_of_buf = 0, len = 0, payload_len = 0;
+	unsigned char buf[256];
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+
+	buffer_len = (CONN_OPS(conn)->MaxRecvDataSegmentLength > 32768) ?
+			32768 : CONN_OPS(conn)->MaxRecvDataSegmentLength;
+
+	payload = kzalloc(buffer_len, GFP_KERNEL);
+	if (!(payload)) {
+		printk(KERN_ERR "Unable to allocate memory for sendtargets"
+			" response.\n");
+		return -1;
+	}
+
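+	/*
+	 * Walk every active TIQN/TPG/portal and emit a "TargetName=<iqn>" key
+	 * followed by one "TargetAddress=<ip>:<port>,<tpgt>" key per network
+	 * portal, stopping once the response would exceed buffer_len.
+	 */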
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		memset((void *)buf, 0, 256);
+
+		len = sprintf(buf, "TargetName=%s", tiqn->tiqn);
+		len += 1;
+
+		if ((len + payload_len) > buffer_len) {
+			end_of_buf = 1;
+			goto eob;
+		}
+		memcpy((void *)payload + payload_len, buf, len);
+		payload_len += len;
+
+		spin_lock(&tiqn->tiqn_tpg_lock);
+		list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+
+			spin_lock(&tpg->tpg_state_lock);
+			if ((tpg->tpg_state == TPG_STATE_FREE) ||
+			    (tpg->tpg_state == TPG_STATE_INACTIVE)) {
+				spin_unlock(&tpg->tpg_state_lock);
+				continue;
+			}
+			spin_unlock(&tpg->tpg_state_lock);
+
+			spin_lock(&tpg->tpg_np_lock);
+			list_for_each_entry(tpg_np, &tpg->tpg_gnp_list,
+					tpg_np_list) {
+				memset((void *)buf, 0, 256);
+
+				if (tpg_np->tpg_np->np_flags & NPF_NET_IPV6)
+					ip = &tpg_np->tpg_np->np_ipv6[0];
+				else {
+					memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+					iscsi_ntoa2(buf_ipv4,
+						tpg_np->tpg_np->np_ipv4);
+					ip = &buf_ipv4[0];
+				}
+
+				len = sprintf(buf, "TargetAddress="
+					"%s%s%s:%hu,%hu",
+					(tpg_np->tpg_np->np_flags &
+						NPF_NET_IPV6) ?
+					"[" : "", ip,
+					(tpg_np->tpg_np->np_flags &
+						NPF_NET_IPV6) ?
+					"]" : "", tpg_np->tpg_np->np_port,
+					tpg->tpgt);
+				len += 1;
+
+				if ((len + payload_len) > buffer_len) {
+					spin_unlock(&tpg->tpg_np_lock);
+					spin_unlock(&tiqn->tiqn_tpg_lock);
+					end_of_buf = 1;
+					goto eob;
+				}
+
+				memcpy((void *)payload + payload_len, buf, len);
+				payload_len += len;
+
+				spin_lock(&tpg_np->tpg_np->np_ex_lock);
+				list_for_each_entry(np_ex,
+						&tpg_np->tpg_np->np_nex_list,
+						np_ex_list) {
+					if (tpg_np->tpg_np->np_flags &
+							NPF_NET_IPV6)
+						ip_ex = &np_ex->np_ex_ipv6[0];
+					else {
+						memset(buf_ipv4, 0,
+							IPV4_BUF_SIZE);
+						iscsi_ntoa2(buf_ipv4,
+							np_ex->np_ex_ipv4);
+						ip_ex = &buf_ipv4[0];
+					}
+					len = sprintf(buf, "TargetAddress="
+							"%s%s%s:%hu,%hu",
+						(tpg_np->tpg_np->np_flags &
+							NPF_NET_IPV6) ?
+						"[" : "", ip_ex,
+						(tpg_np->tpg_np->np_flags &
+							NPF_NET_IPV6) ?
+						"]" : "", np_ex->np_ex_port,
+						tpg->tpgt);
+					len += 1;
+
+					if ((len + payload_len) > buffer_len) {
+						spin_unlock(&tpg_np->tpg_np->np_ex_lock);
+						spin_unlock(&tpg->tpg_np_lock);
+						spin_unlock(&tiqn->tiqn_tpg_lock);
+						end_of_buf = 1;
+						goto eob;
+					}
+
+					memcpy((void *)payload + payload_len,
+							buf, len);
+					payload_len += len;
+				}
+				spin_unlock(&tpg_np->tpg_np->np_ex_lock);
+			}
+			spin_unlock(&tpg->tpg_np_lock);
+		}
+		spin_unlock(&tiqn->tiqn_tpg_lock);
+eob:
+		if (end_of_buf)
+			break;
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	cmd->buf_ptr = payload;
+
+	return payload_len;
+}
diff --git a/drivers/target/iscsi/iscsi_target_util.h b/drivers/target/iscsi/iscsi_target_util.h
new file mode 100644
index 0000000..4d0ca53
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_util.h
@@ -0,0 +1,128 @@
+#ifndef ISCSI_TARGET_UTIL_H
+#define ISCSI_TARGET_UTIL_H
+
+#define MARKER_SIZE	8
+
+struct se_cmd;
+
+struct se_offset_map {
+	int                     map_reset;
+	u32                     iovec_length;
+	u32                     iscsi_offset;
+	u32                     current_offset;
+	u32                     orig_offset;
+	u32                     sg_count;
+	u32                     sg_current;
+	u32                     sg_length;
+	struct page		*sg_page;
+	struct se_mem		*map_se_mem;
+	struct se_mem		*map_orig_se_mem;
+	void			*iovec_base;
+} ____cacheline_aligned;
+
+struct se_map_sg {
+	int			sg_kmap_active:1;
+	u32			data_length;
+	u32			data_offset;
+	void			*fabric_cmd;
+	struct se_cmd		*se_cmd;
+	struct iovec		*iov;
+} ____cacheline_aligned;
+
+struct se_unmap_sg {
+	u32			data_length;
+	u32			sg_count;
+	u32			sg_offset;
+	u32			padding;
+	u32			t_offset;
+	void			*fabric_cmd;
+	struct se_cmd		*se_cmd;
+	struct se_offset_map	lmap;
+	struct se_mem		*cur_se_mem;
+} ____cacheline_aligned;
+
+extern void iscsi_attach_cmd_to_queue(struct iscsi_conn *, struct iscsi_cmd *);
+extern void iscsi_remove_cmd_from_conn_list(struct iscsi_cmd *, struct iscsi_conn *);
+extern void iscsi_ack_from_expstatsn(struct iscsi_conn *, __u32);
+extern void iscsi_remove_conn_from_list(struct iscsi_session *, struct iscsi_conn *);
+extern int iscsi_add_r2t_to_list(struct iscsi_cmd *, __u32, __u32, int, __u32);
+extern struct iscsi_r2t *iscsi_get_r2t_for_eos(struct iscsi_cmd *, __u32, __u32);
+extern struct iscsi_r2t *iscsi_get_r2t_from_list(struct iscsi_cmd *);
+extern void iscsi_free_r2t(struct iscsi_r2t *, struct iscsi_cmd *);
+extern void iscsi_free_r2ts_from_list(struct iscsi_cmd *);
+extern struct iscsi_cmd *iscsi_allocate_cmd(struct iscsi_conn *);
+extern struct iscsi_cmd *iscsi_allocate_se_cmd(struct iscsi_conn *, u32, int, int);
+extern struct iscsi_cmd *iscsi_allocate_se_cmd_for_tmr(struct iscsi_conn *, u8);
+extern int iscsi_decide_list_to_build(struct iscsi_cmd *, __u32);
+extern struct iscsi_seq *iscsi_get_seq_holder_for_datain(struct iscsi_cmd *, __u32);
+extern struct iscsi_seq *iscsi_get_seq_holder_for_r2t(struct iscsi_cmd *);
+extern struct iscsi_r2t *iscsi_get_holder_for_r2tsn(struct iscsi_cmd *, __u32);
+extern int iscsi_check_received_cmdsn(struct iscsi_conn *, struct iscsi_cmd *, __u32);
+extern int iscsi_check_unsolicited_dataout(struct iscsi_cmd *, unsigned char *);
+extern struct iscsi_cmd *iscsi_find_cmd_from_itt(struct iscsi_conn *, __u32);
+extern struct iscsi_cmd *iscsi_find_cmd_from_itt_or_dump(struct iscsi_conn *,
+			__u32, __u32);
+extern struct iscsi_cmd *iscsi_find_cmd_from_ttt(struct iscsi_conn *, __u32);
+extern int iscsi_find_cmd_for_recovery(struct iscsi_session *, struct iscsi_cmd **,
+			struct iscsi_conn_recovery **, __u32);
+extern void iscsi_add_cmd_to_immediate_queue(struct iscsi_cmd *, struct iscsi_conn *, u8);
+extern struct iscsi_queue_req *iscsi_get_cmd_from_immediate_queue(struct iscsi_conn *);
+extern void iscsi_add_cmd_to_response_queue(struct iscsi_cmd *, struct iscsi_conn *, u8);
+extern struct iscsi_queue_req *iscsi_get_cmd_from_response_queue(struct iscsi_conn *);
+extern void iscsi_remove_cmd_from_tx_queues(struct iscsi_cmd *, struct iscsi_conn *);
+extern void iscsi_free_queue_reqs_for_conn(struct iscsi_conn *);
+extern void iscsi_release_cmd_direct(struct iscsi_cmd *);
+extern void lio_release_cmd_direct(struct se_cmd *);
+extern void __iscsi_release_cmd_to_pool(struct iscsi_cmd *, struct iscsi_session *);
+extern void iscsi_release_cmd_to_pool(struct iscsi_cmd *);
+extern void lio_release_cmd_to_pool(struct se_cmd *);
+extern __u64 iscsi_pack_lun(unsigned int);
+extern __u32 iscsi_unpack_lun(unsigned char *);
+extern int iscsi_check_session_usage_count(struct iscsi_session *);
+extern void iscsi_dec_session_usage_count(struct iscsi_session *);
+extern void iscsi_inc_session_usage_count(struct iscsi_session *);
+extern int iscsi_set_sync_and_steering_values(struct iscsi_conn *);
+extern unsigned char *iscsi_ntoa(__u32);
+extern void iscsi_ntoa2(unsigned char *, __u32);
+extern const char *iscsi_ntop6(const unsigned char *, char *, size_t);
+extern int iscsi_pton6(const char *, unsigned char *);
+extern struct iscsi_conn *iscsi_get_conn_from_cid(struct iscsi_session *, __u16);
+extern struct iscsi_conn *iscsi_get_conn_from_cid_rcfr(struct iscsi_session *, __u16);
+extern void iscsi_check_conn_usage_count(struct iscsi_conn *);
+extern void iscsi_dec_conn_usage_count(struct iscsi_conn *);
+extern void iscsi_inc_conn_usage_count(struct iscsi_conn *);
+extern void iscsi_async_msg_timer_function(unsigned long);
+extern int iscsi_check_for_active_network_device(struct iscsi_conn *);
+extern void iscsi_get_network_interface_from_conn(struct iscsi_conn *);
+extern void iscsi_start_netif_timer(struct iscsi_conn *);
+extern void iscsi_stop_netif_timer(struct iscsi_conn *);
+extern void iscsi_mod_nopin_response_timer(struct iscsi_conn *);
+extern void iscsi_start_nopin_response_timer(struct iscsi_conn *);
+extern void iscsi_stop_nopin_response_timer(struct iscsi_conn *);
+extern void __iscsi_start_nopin_timer(struct iscsi_conn *);
+extern void iscsi_start_nopin_timer(struct iscsi_conn *);
+extern void iscsi_stop_nopin_timer(struct iscsi_conn *);
+extern int iscsi_allocate_iovecs_for_cmd(struct se_cmd *);
+extern int iscsi_send_tx_data(struct iscsi_cmd *, struct iscsi_conn *, int);
+extern int iscsi_fe_sendpage_sg(struct se_unmap_sg *, struct iscsi_conn *);
+extern int iscsi_tx_login_rsp(struct iscsi_conn *, __u8, __u8);
+extern void iscsi_print_session_params(struct iscsi_session *);
+extern int iscsi_print_dev_to_proc(char *, char **, off_t, int);
+extern int iscsi_print_sessions_to_proc(char *, char **, off_t, int);
+extern int iscsi_print_tpg_to_proc(char *, char **, off_t, int);
+extern int rx_data(struct iscsi_conn *, struct iovec *, int, int);
+extern int tx_data(struct iscsi_conn *, struct iovec *, int, int);
+extern void iscsi_collect_login_stats(struct iscsi_conn *, __u8, __u8);
+extern struct iscsi_tiqn *iscsi_snmp_get_tiqn(struct iscsi_conn *);
+extern int iscsi_build_sendtargets_response(struct iscsi_cmd *);
+
+extern struct target_fabric_configfs *lio_target_fabric_configfs;
+extern struct iscsi_global *iscsi_global;
+extern struct kmem_cache *lio_cmd_cache;
+extern struct kmem_cache *lio_qr_cache;
+extern struct kmem_cache *lio_r2t_cache;
+
+extern int iscsi_add_nopin(struct iscsi_conn *, int);
+
+#endif /*** ISCSI_TARGET_UTIL_H ***/
+
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 11/12] iscsi-target: Add misc utility and debug logic
@ 2011-03-02  3:34   ` Nicholas A. Bellinger
  0 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:34 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds iscsi_target_util.[c,h], containing a number of
miscellaneous utility functions for iscsi_target_mod, including the
following:

*) wrappers to TCM logic from iscsi_target.c for struct iscsi_cmd
allocation
*) received iSCSI Command Sequence Number (CmdSN) processing (see the
serial arithmetic sketch below)
*) Code for the immediate and response TX queues
*) Nopin Response + Response Timeout handlers
*) Primary sock_sendmsg() and sock_recvmsg() calls into Linux/Net
*) iSCSI SendTargets

It also adds iscsi_debug.h, which provides the TRACE() and PRINT_BUFF()
macros used when CONFIG_ISCSI_TARGET_DEBUG is enabled.
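
For reviewers less familiar with the CmdSN window checks, the
serial_lt()/serial_gt() helpers in iscsi_target_util.c follow RFC 1982
style serial number arithmetic, so ExpCmdSN/MaxCmdSN comparisons keep
working across 32-bit wraparound.  A minimal userspace sketch of the
comparison (illustration only; the helper is simply re-declared here
and is not part of the patch):

	#include <stdio.h>

	#define MAX_BOUND	2147483647UL

	/* Same comparison as serial_gt() in iscsi_target_util.c */
	static int serial_gt(unsigned int x, unsigned int y)
	{
		return (x != y) && (((x < y) && ((y - x) > MAX_BOUND)) ||
			((x > y) && ((x - y) < MAX_BOUND)));
	}

	int main(void)
	{
		/* 0x00000001 is "after" 0xfffffffe once the counter wraps */
		printf("%d\n", serial_gt(0x00000001, 0xfffffffeU)); /* 1 */
		printf("%d\n", serial_gt(0xfffffffeU, 0x00000001)); /* 0 */
		return 0;
	}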

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_debug.h       |  113 ++
 drivers/target/iscsi/iscsi_target_util.c | 2852 ++++++++++++++++++++++++++++++
 drivers/target/iscsi/iscsi_target_util.h |  128 ++
 3 files changed, 3093 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/iscsi_debug.h
 create mode 100644 drivers/target/iscsi/iscsi_target_util.c
 create mode 100644 drivers/target/iscsi/iscsi_target_util.h

diff --git a/drivers/target/iscsi/iscsi_debug.h b/drivers/target/iscsi/iscsi_debug.h
new file mode 100644
index 0000000..cf5f57f
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_debug.h
@@ -0,0 +1,113 @@
+#ifndef ISCSI_DEBUG_H
+#define ISCSI_DEBUG_H
+
+/*
+ * Debugging Support
+ */
+
+#define TRACE_DEBUG	0x00000001	/* Verbose debugging */
+#define TRACE_SCSI	0x00000002	/* Stuff related to SCSI Mid-layer */
+#define TRACE_ISCSI	0x00000004	/* Stuff related to iSCSI */
+#define TRACE_NET	0x00000008	/* Stuff related to network code */
+#define TRACE_BUFF	0x00000010	/* For dumping raw data */
+#define TRACE_FILE	0x00000020	/* Used for __FILE__ */
+#define TRACE_LINE	0x00000040	/* Used for __LINE__ */
+#define TRACE_FUNCTION	0x00000080	/* Used for __FUNCTION__ */
+#define TRACE_SEM	0x00000100	/* Stuff related to semaphores */
+#define TRACE_ENTER_LEAVE 0x00000200	/* For entering/leaving functions */
+#define TRACE_DIGEST	0x00000400	/* For Header/Data Digests */
+#define TRACE_PARAM	0x00000800	/* For parameters in parameters.c */
+#define TRACE_LOGIN	0x00001000	/* For login related code */
+#define TRACE_STATE	0x00002000	/* For conn/sess/cleanup states */
+#define TRACE_ERL0	0x00004000	/* For ErrorRecoveryLevel=0 */
+#define TRACE_ERL1	0x00008000	/* For ErrorRecoveryLevel=1 */
+#define TRACE_ERL2	0x00010000	/* For ErrorRecoveryLevel=2 */
+#define TRACE_TIMER	0x00020000	/* For various ERL timers */
+#define TRACE_R2T	0x00040000	/* For R2T callers */
+#define TRACE_SPINDLE	0x00080000	/* For Spindle callers */
+#define TRACE_SSLR	0x00100000	/* For SyncNSteering RX */
+#define TRACE_SSLT	0x00200000	/* For SyncNSteering TX */
+#define TRACE_CHANNEL	0x00400000	/* For SCSI Channels */
+#define TRACE_CMDSN	0x00800000	/* For Out of Order CmdSN execution */
+#define TRACE_NODEATTRIB 0x01000000	/* For Initiator Nodes */
+
+#define TRACE_VANITY		0x80000000	/* For all Vanity Noise */
+#define TRACE_ALL		0xffffffff	/* Turn on all flags */
+#define TRACE_ENDING		0x00000000	/* List terminator */
+
+#ifdef CONFIG_ISCSI_TARGET_DEBUG
+/*
+ * TRACE_VANITY, is always last!
+ */
+static unsigned int iscsi_trace =
+/*		TRACE_DEBUG | */
+/*		TRACE_SCSI | */
+/*		TRACE_ISCSI | */
+/*		TRACE_NET | */
+/*		TRACE_BUFF | */
+/*		TRACE_FILE | */
+/*		TRACE_LINE | */
+/*		TRACE_FUNCTION | */
+/*		TRACE_SEM | */
+/*		TRACE_ENTER_LEAVE | */
+/*		TRACE_DIGEST | */
+/*		TRACE_PARAM | */
+/*		TRACE_LOGIN | */
+/*		TRACE_STATE | */
+		TRACE_ERL0 |
+		TRACE_ERL1 |
+		TRACE_ERL2 |
+/*		TRACE_TIMER | */
+/*		TRACE_R2T | */
+/*		TRACE_SPINDLE | */
+/*		TRACE_SSLR | */
+/*		TRACE_SSLT | */
+/*		TRACE_CHANNEL | */
+/*		TRACE_CMDSN | */
+/*		TRACE_NODEATTRIB | */
+		TRACE_VANITY |
+		TRACE_ENDING;
+
+#define TRACE(trace, args...)					\
+{								\
+static char iscsi_trace_buff[256];				\
+								\
+if (iscsi_trace & trace) {					\
+	sprintf(iscsi_trace_buff, args);			\
+	if (iscsi_trace & TRACE_FUNCTION) {			\
+		printk(KERN_INFO "%s:%d: %s",  __func__, __LINE__, \
+			iscsi_trace_buff);			\
+	} else if (iscsi_trace&TRACE_FILE) {			\
+		printk(KERN_INFO "%s::%d: %s", __FILE__, __LINE__, \
+			iscsi_trace_buff);			\
+	} else if (iscsi_trace & TRACE_LINE) {			\
+		printk(KERN_INFO "%d: %s", __LINE__, iscsi_trace_buff);	\
+	} else {						\
+		printk(KERN_INFO "%s", iscsi_trace_buff);	\
+	}							\
+}								\
+}
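+
+/*
+ * Example: TRACE(TRACE_NET, "rx_loop: %d, total_rx: %d\n", rx_loop, total_rx);
+ * only produces output while TRACE_NET is enabled in the iscsi_trace mask.
+ */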
+
+#define PRINT_BUFF(buff, len)					\
+if (iscsi_trace & TRACE_BUFF) {					\
+	int zzz;						\
+								\
+	printk(KERN_INFO "%d: \n", __LINE__);			\
+	for (zzz = 0; zzz < len; zzz++) {			\
+		if (zzz % 16 == 0) {				\
+			if (zzz)				\
+				printk(KERN_INFO "\n");		\
+			printk(KERN_INFO "%4i: ", zzz);		\
+		}						\
+		printk(KERN_INFO "%02x ", (unsigned char) (buff)[zzz]);	\
+	}							\
+	if ((len + 1) % 16)					\
+		printk(KERN_INFO "\n");				\
+}
+
+#else /* !CONFIG_ISCSI_TARGET_DEBUG */
+#define TRACE(trace, args...)
+#define PRINT_BUFF(buff, len)
+#endif /* CONFIG_ISCSI_TARGET_DEBUG */
+
+#endif   /*** ISCSI_DEBUG_H ***/
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
new file mode 100644
index 0000000..61b9fea
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -0,0 +1,2852 @@
+/*******************************************************************************
+ * This file contains the iSCSI Target specific utility functions.
+ *
+ * Copyright (c) 2002, 2003, 2004, 2005 PyX Technologies, Inc.
+ * Copyright (c) 2005, 2006, 2007 SBE, Inc.
+ * © Copyright 2007-2011 RisingTide Systems LLC.
+ *
+ * Licensed to the Linux Foundation under the General Public License (GPL) version 2.
+ *
+ * Author: Nicholas A. Bellinger <nab@linux-iscsi.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ ******************************************************************************/
+
+#include <linux/timer.h>
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <scsi/libsas.h> /* For TASK_ATTR_* */
+#include <scsi/iscsi_proto.h>
+#include <target/target_core_base.h>
+#include <target/target_core_transport.h>
+#include <target/target_core_tmr.h>
+#include <target/target_core_fabric_ops.h>
+#include <target/target_core_configfs.h>
+
+#include "iscsi_debug.h"
+#include "iscsi_target_core.h"
+#include "iscsi_parameters.h"
+#include "iscsi_seq_and_pdu_list.h"
+#include "iscsi_target_datain_values.h"
+#include "iscsi_target_erl0.h"
+#include "iscsi_target_erl1.h"
+#include "iscsi_target_erl2.h"
+#include "iscsi_target_tpg.h"
+#include "iscsi_target_util.h"
+#include "iscsi_target.h"
+
+/*	iscsi_attach_cmd_to_queue():
+ *
+ *
+ */
+inline void iscsi_attach_cmd_to_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+{
+	spin_lock_bh(&conn->cmd_lock);
+	list_add_tail(&cmd->i_list, &conn->conn_cmd_list);
+	spin_unlock_bh(&conn->cmd_lock);
+
+	atomic_inc(&conn->active_cmds);
+}
+
+/*	iscsi_remove_cmd_from_conn_list():
+ *
+ *	MUST be called with conn->cmd_lock held.
+ */
+inline void iscsi_remove_cmd_from_conn_list(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	list_del(&cmd->i_list);
+	atomic_dec(&conn->active_cmds);
+}
+
+
+/*	iscsi_ack_from_expstatsn():
+ *
+ *
+ */
+inline void iscsi_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn)
+{
+	struct iscsi_cmd *cmd;
+
+	conn->exp_statsn = exp_statsn;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+
+		spin_lock(&cmd->istate_lock);
+		if ((cmd->i_state == ISTATE_SENT_STATUS) &&
+		    (cmd->stat_sn < exp_statsn)) {
+			cmd->i_state = ISTATE_REMOVE;
+			spin_unlock(&cmd->istate_lock);
+			iscsi_add_cmd_to_immediate_queue(cmd, conn,
+					cmd->i_state);
+			continue;
+		}
+		spin_unlock(&cmd->istate_lock);
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+}
+
+/*	iscsi_remove_conn_from_list():
+ *
+ *	Called with sess->conn_lock held.
+ */
+void iscsi_remove_conn_from_list(struct iscsi_session *sess, struct iscsi_conn *conn)
+{
+	list_del(&conn->conn_list);
+}
+
+/*	iscsi_add_r2t_to_list():
+ *
+ *	Called with cmd->r2t_lock held.
+ */
+int iscsi_add_r2t_to_list(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 xfer_len,
+	int recovery,
+	u32 r2t_sn)
+{
+	struct iscsi_r2t *r2t;
+
+	r2t = kmem_cache_zalloc(lio_r2t_cache, GFP_ATOMIC);
+	if (!(r2t)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_r2t.\n");
+		return -1;
+	}
+	INIT_LIST_HEAD(&r2t->r2t_list);
+
+	r2t->recovery_r2t = recovery;
+	r2t->r2t_sn = (!r2t_sn) ? cmd->r2t_sn++ : r2t_sn;
+	r2t->offset = offset;
+	r2t->xfer_len = xfer_len;
+	list_add_tail(&r2t->r2t_list, &cmd->cmd_r2t_list);
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	iscsi_add_cmd_to_immediate_queue(cmd, CONN(cmd), ISTATE_SEND_R2T);
+
+	spin_lock_bh(&cmd->r2t_lock);
+	return 0;
+}
+
+/*	iscsi_get_r2t_for_eos():
+ *
+ *
+ */
+struct iscsi_r2t *iscsi_get_r2t_for_eos(
+	struct iscsi_cmd *cmd,
+	u32 offset,
+	u32 length)
+{
+	struct iscsi_r2t *r2t;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if ((r2t->offset <= offset) &&
+		    (r2t->offset + r2t->xfer_len) >= (offset + length))
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	if (!r2t) {
+		printk(KERN_ERR "Unable to locate R2T for Offset: %u, Length:"
+				" %u\n", offset, length);
+		return NULL;
+	}
+
+	return r2t;
+}
+
+/*	iscsi_get_r2t_from_list():
+ *
+ *
+ */
+struct iscsi_r2t *iscsi_get_r2t_from_list(struct iscsi_cmd *cmd)
+{
+	struct iscsi_r2t *r2t;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if (!r2t->sent_r2t)
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	if (!r2t) {
+		printk(KERN_ERR "Unable to locate next R2T to send for ITT:"
+			" 0x%08x.\n", cmd->init_task_tag);
+		return NULL;
+	}
+
+	return r2t;
+}
+
+/*	iscsi_free_r2t():
+ *
+ *	Called with cmd->r2t_lock held.
+ */
+void iscsi_free_r2t(struct iscsi_r2t *r2t, struct iscsi_cmd *cmd)
+{
+	list_del(&r2t->r2t_list);
+	kmem_cache_free(lio_r2t_cache, r2t);
+}
+
+/*	iscsi_free_r2ts_from_list():
+ *
+ *
+ */
+void iscsi_free_r2ts_from_list(struct iscsi_cmd *cmd)
+{
+	struct iscsi_r2t *r2t, *r2t_tmp;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry_safe(r2t, r2t_tmp, &cmd->cmd_r2t_list, r2t_list) {
+		list_del(&r2t->r2t_list);
+		kmem_cache_free(lio_r2t_cache, r2t);
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+}
+
+/*	iscsi_allocate_cmd():
+ *
+ *	May be called from interrupt context.
+ */
+struct iscsi_cmd *iscsi_allocate_cmd(struct iscsi_conn *conn)
+{
+	struct iscsi_cmd *cmd;
+
+	cmd = kmem_cache_zalloc(lio_cmd_cache, GFP_ATOMIC);
+	if (!(cmd)) {
+		printk(KERN_ERR "Unable to allocate memory for struct iscsi_cmd.\n");
+		return NULL;
+	}
+
+	cmd->conn	= conn;
+	INIT_LIST_HEAD(&cmd->i_list);
+	INIT_LIST_HEAD(&cmd->datain_list);
+	INIT_LIST_HEAD(&cmd->cmd_r2t_list);
+	sema_init(&cmd->reject_sem, 0);
+	sema_init(&cmd->unsolicited_data_sem, 0);
+	spin_lock_init(&cmd->datain_lock);
+	spin_lock_init(&cmd->dataout_timeout_lock);
+	spin_lock_init(&cmd->istate_lock);
+	spin_lock_init(&cmd->error_lock);
+	spin_lock_init(&cmd->r2t_lock);
+
+	return cmd;
+}
+
+/*
+ * Called from iscsi_handle_scsi_cmd()
+ */
+struct iscsi_cmd *iscsi_allocate_se_cmd(
+	struct iscsi_conn *conn,
+	u32 data_length,
+	int data_direction,
+	int iscsi_task_attr)
+{
+	struct iscsi_cmd *cmd;
+	struct se_cmd *se_cmd;
+	int sam_task_attr;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return NULL;
+
+	cmd->data_direction = data_direction;
+	cmd->data_length = data_length;
+	/*
+	 * Figure out the SAM Task Attribute for the incoming SCSI CDB
+	 */
+	if ((iscsi_task_attr == ISCSI_ATTR_UNTAGGED) ||
+	    (iscsi_task_attr == ISCSI_ATTR_SIMPLE))
+		sam_task_attr = TASK_ATTR_SIMPLE;
+	else if (iscsi_task_attr == ISCSI_ATTR_ORDERED)
+		sam_task_attr = TASK_ATTR_ORDERED;
+	else if (iscsi_task_attr == ISCSI_ATTR_HEAD_OF_QUEUE)
+		sam_task_attr = TASK_ATTR_HOQ;
+	else if (iscsi_task_attr == ISCSI_ATTR_ACA)
+		sam_task_attr = TASK_ATTR_ACA;
+	else {
+		printk(KERN_INFO "Unknown iSCSI Task Attribute: 0x%02x, using"
+			" TASK_ATTR_SIMPLE\n", iscsi_task_attr);
+		sam_task_attr = TASK_ATTR_SIMPLE;
+	}
+
+	se_cmd = &cmd->se_cmd;
+	/*
+	 * Initialize struct se_cmd descriptor from target_core_mod infrastructure
+	 */
+	transport_init_se_cmd(se_cmd, &lio_target_fabric_configfs->tf_ops,
+			SESS(conn)->se_sess, data_length, data_direction,
+			sam_task_attr, &cmd->sense_buffer[0]);
+	return cmd;
+}
+
+/*	iscsi_allocate_se_cmd_for_tmr():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_allocate_se_cmd_for_tmr(
+	struct iscsi_conn *conn,
+	u8 function)
+{
+	struct iscsi_cmd *cmd;
+	struct se_cmd *se_cmd;
+	u8 tcm_function;
+
+	cmd = iscsi_allocate_cmd(conn);
+	if (!(cmd))
+		return NULL;
+
+	cmd->data_direction = DMA_NONE;
+
+	cmd->tmr_req = kzalloc(sizeof(struct iscsi_tmr_req), GFP_KERNEL);
+	if (!(cmd->tmr_req)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" Task Management command!\n");
+		return NULL;
+	}
+	/*
+	 * TASK_REASSIGN for ERL=2 / connection stays inside of
+	 * LIO-Target $FABRIC_MOD
+	 */
+	if (function == ISCSI_TM_FUNC_TASK_REASSIGN)
+		return cmd;
+
+	se_cmd = &cmd->se_cmd;
+	/*
+	 * Initialize struct se_cmd descriptor from target_core_mod infrastructure
+	 */
+	transport_init_se_cmd(se_cmd, &lio_target_fabric_configfs->tf_ops,
+				SESS(conn)->se_sess, 0, DMA_NONE,
+				TASK_ATTR_SIMPLE, &cmd->sense_buffer[0]);
+
+	switch (function) {
+	case ISCSI_TM_FUNC_ABORT_TASK:
+		tcm_function = TMR_ABORT_TASK;
+		break;
+	case ISCSI_TM_FUNC_ABORT_TASK_SET:
+		tcm_function = TMR_ABORT_TASK_SET;
+		break;
+	case ISCSI_TM_FUNC_CLEAR_ACA:
+		tcm_function = TMR_CLEAR_ACA;
+		break;
+	case ISCSI_TM_FUNC_CLEAR_TASK_SET:
+		tcm_function = TMR_CLEAR_TASK_SET;
+		break;
+	case ISCSI_TM_FUNC_LOGICAL_UNIT_RESET:
+		tcm_function = TMR_LUN_RESET;
+		break;
+	case ISCSI_TM_FUNC_TARGET_WARM_RESET:
+		tcm_function = TMR_TARGET_WARM_RESET;
+		break;
+	case ISCSI_TM_FUNC_TARGET_COLD_RESET:
+		tcm_function = TMR_TARGET_COLD_RESET;
+		break;
+	default: 
+		printk(KERN_ERR "Unknown iSCSI TMR Function:"
+			" 0x%02x\n", function);
+		goto out;
+	}
+
+	se_cmd->se_tmr_req = core_tmr_alloc_req(se_cmd,
+				(void *)cmd->tmr_req, tcm_function);
+	if (!(se_cmd->se_tmr_req))
+		goto out;
+
+	cmd->tmr_req->se_tmr_req = se_cmd->se_tmr_req;
+
+	return cmd;
+out:
+	iscsi_release_cmd_to_pool(cmd);
+	if (se_cmd)
+		transport_free_se_cmd(se_cmd);
+	return NULL;
+}
+
+/*	iscsi_decide_list_to_build():
+ *
+ *
+ */
+int iscsi_decide_list_to_build(
+	struct iscsi_cmd *cmd,
+	u32 immediate_data_length)
+{
+	struct iscsi_build_list bl;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na;
+
+	if (SESS_OPS(sess)->DataSequenceInOrder &&
+	    SESS_OPS(sess)->DataPDUInOrder)
+		return 0;
+
+	if (cmd->data_direction == DMA_NONE)
+		return 0;
+
+	na = iscsi_tpg_get_node_attrib(sess);
+	memset(&bl, 0, sizeof(struct iscsi_build_list));
+
+	if (cmd->data_direction == DMA_FROM_DEVICE) {
+		bl.data_direction = ISCSI_PDU_READ;
+		bl.type = PDULIST_NORMAL;
+		if (na->random_datain_pdu_offsets)
+			bl.randomize |= RANDOM_DATAIN_PDU_OFFSETS;
+		if (na->random_datain_seq_offsets)
+			bl.randomize |= RANDOM_DATAIN_SEQ_OFFSETS;
+	} else {
+		bl.data_direction = ISCSI_PDU_WRITE;
+		bl.immediate_data_length = immediate_data_length;
+		if (na->random_r2t_offsets)
+			bl.randomize |= RANDOM_R2T_OFFSETS;
+
+		if (!cmd->immediate_data && !cmd->unsolicited_data)
+			bl.type = PDULIST_NORMAL;
+		else if (cmd->immediate_data && !cmd->unsolicited_data)
+			bl.type = PDULIST_IMMEDIATE;
+		else if (!cmd->immediate_data && cmd->unsolicited_data)
+			bl.type = PDULIST_UNSOLICITED;
+		else if (cmd->immediate_data && cmd->unsolicited_data)
+			bl.type = PDULIST_IMMEDIATE_AND_UNSOLICITED;
+	}
+
+	return iscsi_do_build_list(cmd, &bl);
+}
+
+/*	iscsi_get_seq_holder_for_datain():
+ *
+ *
+ */
+struct iscsi_seq *iscsi_get_seq_holder_for_datain(
+	struct iscsi_cmd *cmd,
+	u32 seq_send_order)
+{
+	u32 i;
+
+	for (i = 0; i < cmd->seq_count; i++)
+		if (cmd->seq_list[i].seq_send_order == seq_send_order)
+			return &cmd->seq_list[i];
+
+	return NULL;
+}
+
+/*	iscsi_get_seq_holder_for_r2t():
+ *
+ *
+ */
+struct iscsi_seq *iscsi_get_seq_holder_for_r2t(struct iscsi_cmd *cmd)
+{
+	u32 i;
+
+	if (!cmd->seq_list) {
+		printk(KERN_ERR "struct iscsi_cmd->seq_list is NULL!\n");
+		return NULL;
+	}
+
+	for (i = 0; i < cmd->seq_count; i++) {
+		if (cmd->seq_list[i].type != SEQTYPE_NORMAL)
+			continue;
+		if (cmd->seq_list[i].seq_send_order == cmd->seq_send_order) {
+			cmd->seq_send_order++;
+			return &cmd->seq_list[i];
+		}
+	}
+
+	return NULL;
+}
+
+/*	iscsi_get_holder_for_r2tsn():
+ *
+ *
+ */
+struct iscsi_r2t *iscsi_get_holder_for_r2tsn(
+	struct iscsi_cmd *cmd,
+	u32 r2t_sn)
+{
+	struct iscsi_r2t *r2t;
+
+	spin_lock_bh(&cmd->r2t_lock);
+	list_for_each_entry(r2t, &cmd->cmd_r2t_list, r2t_list) {
+		if (r2t->r2t_sn == r2t_sn)
+			break;
+	}
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	return (r2t) ? r2t : NULL;
+}
+
+#define SERIAL_BITS	31
+#define MAX_BOUND	(u32)2147483647UL
+
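+/*
+ * RFC 1982 style serial number arithmetic over 32-bit values: x is "less
+ * than" y when the forward (wrapping) distance from x to y is below 2^31.
+ * This keeps CmdSN comparisons correct across sequence number wraparound.
+ */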
+int serial_lt(u32 x, u32 y)
+{
+	return (x != y) && (((x < y) && ((y - x) < MAX_BOUND)) ||
+		((x > y) && ((x - y) > MAX_BOUND)));
+}
+
+int serial_lte(u32 x, u32 y)
+{
+	return (x == y) ? 1 : serial_lt(x, y);
+}
+
+int serial_gt(u32 x, u32 y)
+{
+	return (x != y) && (((x < y) && ((y - x) > MAX_BOUND)) ||
+		((x > y) && ((x - y) < MAX_BOUND)));
+}
+
+int serial_gte(u32 x, u32 y)
+{
+	return (x == y) ? 1 : serial_gt(x, y);
+}
+
+/*	iscsi_check_received_cmdsn():
+ *
+ *
+ */
+inline int iscsi_check_received_cmdsn(
+	struct iscsi_conn *conn,
+	struct iscsi_cmd *cmd,
+	u32 cmdsn)
+{
+	int ret;
+	/*
+	 * This is the proper method of checking received CmdSN against
+	 * ExpCmdSN and MaxCmdSN values, as well as accounting for out
+	 * of order CmdSNs due to multiple connection sessions and/or
+	 * CRC failures.
+	 */
+	spin_lock(&SESS(conn)->cmdsn_lock);
+	if (serial_gt(cmdsn, SESS(conn)->max_cmd_sn)) {
+		printk(KERN_ERR "Received CmdSN: 0x%08x is greater than"
+			" MaxCmdSN: 0x%08x, protocol error.\n", cmdsn,
+				SESS(conn)->max_cmd_sn);
+		spin_unlock(&SESS(conn)->cmdsn_lock);
+		return CMDSN_ERROR_CANNOT_RECOVER;
+	}
+
+	if (!SESS(conn)->cmdsn_outoforder) {
+		if (cmdsn == SESS(conn)->exp_cmd_sn) {
+			SESS(conn)->exp_cmd_sn++;
+			TRACE(TRACE_CMDSN, "Received CmdSN matches ExpCmdSN,"
+				" incremented ExpCmdSN to: 0x%08x\n",
+					SESS(conn)->exp_cmd_sn);
+			ret = iscsi_execute_cmd(cmd, 0);
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+
+			return (!ret) ? CMDSN_NORMAL_OPERATION :
+					CMDSN_ERROR_CANNOT_RECOVER;
+		} else if (serial_gt(cmdsn, SESS(conn)->exp_cmd_sn)) {
+			TRACE(TRACE_CMDSN, "Received CmdSN: 0x%08x is greater"
+				" than ExpCmdSN: 0x%08x, not acknowledging.\n",
+				cmdsn, SESS(conn)->exp_cmd_sn);
+			goto ooo_cmdsn;
+		} else {
+			printk(KERN_ERR "Received CmdSN: 0x%08x is less than"
+				" ExpCmdSN: 0x%08x, ignoring.\n", cmdsn,
+					SESS(conn)->exp_cmd_sn);
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+			return CMDSN_LOWER_THAN_EXP;
+		}
+	} else {
+		int counter = 0;
+		u32 old_expcmdsn = 0;
+		if (cmdsn == SESS(conn)->exp_cmd_sn) {
+			old_expcmdsn = SESS(conn)->exp_cmd_sn++;
+			TRACE(TRACE_CMDSN, "Got missing CmdSN: 0x%08x matches"
+				" ExpCmdSN, incremented ExpCmdSN to 0x%08x.\n",
+					cmdsn, SESS(conn)->exp_cmd_sn);
+
+			if (iscsi_execute_cmd(cmd, 0) < 0) {
+				spin_unlock(&SESS(conn)->cmdsn_lock);
+				return CMDSN_ERROR_CANNOT_RECOVER;
+			}
+		} else if (serial_gt(cmdsn, SESS(conn)->exp_cmd_sn)) {
+			TRACE(TRACE_CMDSN, "CmdSN: 0x%08x greater than"
+				" ExpCmdSN: 0x%08x, not acknowledging.\n",
+				cmdsn, SESS(conn)->exp_cmd_sn);
+			goto ooo_cmdsn;
+		} else {
+			printk(KERN_ERR "CmdSN: 0x%08x less than ExpCmdSN:"
+				" 0x%08x, ignoring.\n", cmdsn,
+				SESS(conn)->exp_cmd_sn);
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+			return CMDSN_LOWER_THAN_EXP;
+		}
+
+		counter = iscsi_execute_ooo_cmdsns(SESS(conn));
+		if (counter < 0) {
+			spin_unlock(&SESS(conn)->cmdsn_lock);
+			return CMDSN_ERROR_CANNOT_RECOVER;
+		}
+
+		if (counter == SESS(conn)->ooo_cmdsn_count) {
+			if (SESS(conn)->ooo_cmdsn_count == 1) {
+				TRACE(TRACE_CMDSN, "Received final missing"
+					" CmdSN: 0x%08x.\n", old_expcmdsn);
+			} else {
+				TRACE(TRACE_CMDSN, "Received final missing"
+					" CmdSNs: 0x%08x->0x%08x.\n",
+				old_expcmdsn, (SESS(conn)->exp_cmd_sn - 1));
+			}
+
+			SESS(conn)->ooo_cmdsn_count = 0;
+			SESS(conn)->cmdsn_outoforder = 0;
+		} else {
+			SESS(conn)->ooo_cmdsn_count -= counter;
+			TRACE(TRACE_CMDSN, "Still missing %hu CmdSN(s),"
+				" continuing out of order operation.\n",
+				SESS(conn)->ooo_cmdsn_count);
+		}
+		spin_unlock(&SESS(conn)->cmdsn_lock);
+		return CMDSN_NORMAL_OPERATION;
+	}
+
+ooo_cmdsn:
+	ret = iscsi_handle_ooo_cmdsn(SESS(conn), cmd, cmdsn);
+	spin_unlock(&SESS(conn)->cmdsn_lock);
+	return ret;
+}
+
+/*	iscsi_check_unsolicited_dataout():
+ *
+ *
+ */
+int iscsi_check_unsolicited_dataout(struct iscsi_cmd *cmd, unsigned char *buf)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+	struct se_cmd *se_cmd = SE_CMD(cmd);
+	struct iscsi_data *hdr = (struct iscsi_data *) buf;
+	u32 payload_length = ntoh24(hdr->dlength);
+
+	if (SESS_OPS_C(conn)->InitialR2T) {
+		printk(KERN_ERR "Received unexpected unsolicited data"
+			" while InitialR2T=Yes, protocol error.\n");
+		transport_send_check_condition_and_sense(se_cmd,
+				TCM_UNEXPECTED_UNSOLICITED_DATA, 0);
+		return -1;
+	}
+
+	if ((cmd->first_burst_len + payload_length) >
+	     SESS_OPS_C(conn)->FirstBurstLength) {
+		printk(KERN_ERR "Total %u bytes exceeds FirstBurstLength: %u"
+			" for this Unsolicited DataOut Burst.\n",
+			(cmd->first_burst_len + payload_length),
+				SESS_OPS_C(conn)->FirstBurstLength);
+		transport_send_check_condition_and_sense(se_cmd,
+				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+		return -1;
+	}
+
+	if (!(hdr->flags & ISCSI_FLAG_CMD_FINAL))
+		return 0;
+
+	if (((cmd->first_burst_len + payload_length) != cmd->data_length) &&
+	    ((cmd->first_burst_len + payload_length) !=
+	      SESS_OPS_C(conn)->FirstBurstLength)) {
+		printk(KERN_ERR "Unsolicited non-immediate data received %u"
+			" does not equal FirstBurstLength: %u, and does"
+			" not equal ExpXferLen %u.\n",
+			(cmd->first_burst_len + payload_length),
+			SESS_OPS_C(conn)->FirstBurstLength, cmd->data_length);
+		transport_send_check_condition_and_sense(se_cmd,
+				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
+		return -1;
+	}
+	return 0;
+}
+
+/*	iscsi_find_cmd_from_itt():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_find_cmd_from_itt(
+	struct iscsi_conn *conn,
+	u32 init_task_tag)
+{
+	struct iscsi_cmd *cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->init_task_tag == init_task_tag)
+			break;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	if (!cmd) {
+		printk(KERN_ERR "Unable to locate ITT: 0x%08x on CID: %hu",
+			init_task_tag, conn->cid);
+		return NULL;
+	}
+
+	return cmd;
+}
+
+/*	iscsi_find_cmd_from_itt_or_dump():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_find_cmd_from_itt_or_dump(
+	struct iscsi_conn *conn,
+	u32 init_task_tag,
+	u32 length)
+{
+	struct iscsi_cmd *cmd;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->init_task_tag == init_task_tag)
+			break;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	if (!cmd) {
+		printk(KERN_ERR "Unable to locate ITT: 0x%08x on CID: %hu,"
+			" dumping payload\n", init_task_tag, conn->cid);
+		if (length)
+			iscsi_dump_data_payload(conn, length, 1);
+		return NULL;
+	}
+
+	return cmd;
+}
+
+/*	iscsi_find_cmd_from_ttt():
+ *
+ *
+ */
+struct iscsi_cmd *iscsi_find_cmd_from_ttt(
+	struct iscsi_conn *conn,
+	u32 targ_xfer_tag)
+{
+	struct iscsi_cmd *cmd = NULL;
+
+	spin_lock_bh(&conn->cmd_lock);
+	list_for_each_entry(cmd, &conn->conn_cmd_list, i_list) {
+		if (cmd->targ_xfer_tag == targ_xfer_tag)
+			break;
+	}
+	spin_unlock_bh(&conn->cmd_lock);
+
+	if (!cmd) {
+		printk(KERN_ERR "Unable to locate TTT: 0x%08x on CID: %hu\n",
+			targ_xfer_tag, conn->cid);
+		return NULL;
+	}
+
+	return cmd;
+}
+
+/*	iscsi_find_cmd_for_recovery():
+ *
+ *
+ */
+int iscsi_find_cmd_for_recovery(
+	struct iscsi_session *sess,
+	struct iscsi_cmd **cmd_ptr,
+	struct iscsi_conn_recovery **cr_ptr,
+	u32 init_task_tag)
+{
+	int found_itt = 0;
+	struct iscsi_cmd *cmd = NULL;
+	struct iscsi_conn_recovery *cr;
+
+	/*
+	 * Scan through the inactive connection recovery list's command list.
+	 * If init_task_tag matches, the command is still tied to an inactive
+	 * connection recovery entry and is not yet ready for reassignment.
+	 */
+	spin_lock(&sess->cr_i_lock);
+	list_for_each_entry(cr, &sess->cr_inactive_list, cr_list) {
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry(cmd, &cr->conn_recovery_cmd_list, i_list) {
+			if (cmd->init_task_tag == init_task_tag) {
+				found_itt = 1;
+				break;
+			}
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		if (found_itt)
+			break;
+	}
+	spin_unlock(&sess->cr_i_lock);
+
+	if (cmd) {
+		*cr_ptr = cr;
+		*cmd_ptr = cmd;
+		return -2;
+	}
+
+	found_itt = 0;
+
+	/*
+	 * Scan through the active connection recovery list's command list.
+	 * If init_task_tag matches, the command is ready to be reassigned.
+	 */
+	spin_lock(&sess->cr_a_lock);
+	list_for_each_entry(cr, &sess->cr_active_list, cr_list) {
+		spin_lock(&cr->conn_recovery_cmd_lock);
+		list_for_each_entry(cmd, &cr->conn_recovery_cmd_list, i_list) {
+			if (cmd->init_task_tag == init_task_tag) {
+				found_itt = 1;
+				break;
+			}
+		}
+		spin_unlock(&cr->conn_recovery_cmd_lock);
+		if (found_itt)
+			break;
+	}
+	spin_unlock(&sess->cr_a_lock);
+
+	if (!cmd || !cr)
+		return -1;
+
+	*cr_ptr = cr;
+	*cmd_ptr = cmd;
+
+	return 0;
+}
+
+/*	iscsi_add_cmd_to_immediate_queue():
+ *
+ *
+ */
+void iscsi_add_cmd_to_immediate_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	u8 state)
+{
+	struct iscsi_queue_req *qr;
+
+	qr = kmem_cache_zalloc(lio_qr_cache, GFP_ATOMIC);
+	if (!(qr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+				" struct iscsi_queue_req\n");
+		return;
+	}
+	INIT_LIST_HEAD(&qr->qr_list);
+	qr->cmd = cmd;
+	qr->state = state;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	list_add_tail(&qr->qr_list, &conn->immed_queue_list);
+	atomic_inc(&cmd->immed_queue_count);
+	atomic_set(&conn->check_immediate_queue, 1);
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	up(&conn->tx_sem);
+}
+
+/*	iscsi_get_cmd_from_immediate_queue():
+ *
+ *
+ */
+struct iscsi_queue_req *iscsi_get_cmd_from_immediate_queue(struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	if (list_empty(&conn->immed_queue_list)) {
+		spin_unlock_bh(&conn->immed_queue_lock);
+		return NULL;
+	}
+	list_for_each_entry(qr, &conn->immed_queue_list, qr_list)
+		break;
+
+	list_del(&qr->qr_list);
+	if (qr->cmd)
+		atomic_dec(&qr->cmd->immed_queue_count);
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	return qr;
+}
+
+static void iscsi_remove_cmd_from_immediate_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr, *qr_tmp;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	if (!(atomic_read(&cmd->immed_queue_count))) {
+		spin_unlock_bh(&conn->immed_queue_lock);
+		return;
+	}
+
+	list_for_each_entry_safe(qr, qr_tmp, &conn->immed_queue_list, qr_list) {
+		if (qr->cmd != cmd)
+			continue;
+
+		atomic_dec(&qr->cmd->immed_queue_count);
+		list_del(&qr->qr_list);
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	if (atomic_read(&cmd->immed_queue_count)) {
+		printk(KERN_ERR "ITT: 0x%08x immed_queue_count: %d\n",
+			cmd->init_task_tag,
+			atomic_read(&cmd->immed_queue_count));
+	}
+}
+
+/*	iscsi_add_cmd_to_response_queue():
+ *
+ *
+ */
+void iscsi_add_cmd_to_response_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	u8 state)
+{
+	struct iscsi_queue_req *qr;
+
+	qr = kmem_cache_zalloc(lio_qr_cache, GFP_ATOMIC);
+	if (!(qr)) {
+		printk(KERN_ERR "Unable to allocate memory for"
+			" struct iscsi_queue_req\n");
+		return;
+	}
+	INIT_LIST_HEAD(&qr->qr_list);
+	qr->cmd = cmd;
+	qr->state = state;
+
+	spin_lock_bh(&conn->response_queue_lock);
+	list_add_tail(&qr->qr_list, &conn->response_queue_list);
+	atomic_inc(&cmd->response_queue_count);
+	spin_unlock_bh(&conn->response_queue_lock);
+
+	up(&conn->tx_sem);
+}
+
+/*	iscsi_get_cmd_from_response_queue():
+ *
+ *
+ */
+struct iscsi_queue_req *iscsi_get_cmd_from_response_queue(struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr;
+
+	spin_lock_bh(&conn->response_queue_lock);
+	if (list_empty(&conn->response_queue_list)) {
+		spin_unlock_bh(&conn->response_queue_lock);
+		return NULL;
+	}
+
+	list_for_each_entry(qr, &conn->response_queue_list, qr_list)
+		break;
+
+	list_del(&qr->qr_list);
+	if (qr->cmd)
+		atomic_dec(&qr->cmd->response_queue_count);
+	spin_unlock_bh(&conn->response_queue_lock);
+
+	return qr;
+}
+
+/*	iscsi_remove_cmd_from_response_queue():
+ *
+ *
+ */
+static void iscsi_remove_cmd_from_response_queue(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr, *qr_tmp;
+
+	spin_lock_bh(&conn->response_queue_lock);
+	if (!(atomic_read(&cmd->response_queue_count))) {
+		spin_unlock_bh(&conn->response_queue_lock);
+		return;
+	}
+
+	list_for_each_entry_safe(qr, qr_tmp, &conn->response_queue_list,
+				qr_list) {
+		if (qr->cmd != cmd)
+			continue;
+
+		atomic_dec(&qr->cmd->response_queue_count);
+		list_del(&qr->qr_list);
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->response_queue_lock);
+
+	if (atomic_read(&cmd->response_queue_count)) {
+		printk(KERN_ERR "ITT: 0x%08x response_queue_count: %d\n",
+			cmd->init_task_tag,
+			atomic_read(&cmd->response_queue_count));
+	}
+}
+
+void iscsi_remove_cmd_from_tx_queues(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	iscsi_remove_cmd_from_immediate_queue(cmd, conn);
+	iscsi_remove_cmd_from_response_queue(cmd, conn);
+}
+
+/*	iscsi_free_queue_reqs_for_conn():
+ *
+ *
+ */
+void iscsi_free_queue_reqs_for_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_queue_req *qr, *qr_tmp;
+
+	spin_lock_bh(&conn->immed_queue_lock);
+	list_for_each_entry_safe(qr, qr_tmp, &conn->immed_queue_list, qr_list) {
+		list_del(&qr->qr_list);
+		if (qr->cmd)
+			atomic_dec(&qr->cmd->immed_queue_count);
+
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->immed_queue_lock);
+
+	spin_lock_bh(&conn->response_queue_lock);
+	list_for_each_entry_safe(qr, qr_tmp, &conn->response_queue_list,
+			qr_list) {
+		list_del(&qr->qr_list);
+		if (qr->cmd)
+			atomic_dec(&qr->cmd->response_queue_count);
+
+		kmem_cache_free(lio_qr_cache, qr);
+	}
+	spin_unlock_bh(&conn->response_queue_lock);
+}
+
+/*	iscsi_release_cmd_direct():
+ *
+ *
+ */
+void iscsi_release_cmd_direct(struct iscsi_cmd *cmd)
+{
+	iscsi_free_r2ts_from_list(cmd);
+	iscsi_free_all_datain_reqs(cmd);
+
+	kfree(cmd->buf_ptr);
+	kfree(cmd->pdu_list);
+	kfree(cmd->seq_list);
+	kfree(cmd->tmr_req);
+	kfree(cmd->iov_data);
+
+	kmem_cache_free(lio_cmd_cache, cmd);
+}
+
+void lio_release_cmd_direct(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	iscsi_release_cmd_direct(cmd);
+}
+
+/*	__iscsi_release_cmd_to_pool():
+ *
+ *
+ */
+void __iscsi_release_cmd_to_pool(struct iscsi_cmd *cmd, struct iscsi_session *sess)
+{
+	struct iscsi_conn *conn = CONN(cmd);
+
+	iscsi_free_r2ts_from_list(cmd);
+	iscsi_free_all_datain_reqs(cmd);
+
+	kfree(cmd->buf_ptr);
+	kfree(cmd->pdu_list);
+	kfree(cmd->seq_list);
+	kfree(cmd->tmr_req);
+	kfree(cmd->iov_data);
+
+	if (conn)
+		iscsi_remove_cmd_from_tx_queues(cmd, conn);
+
+	kmem_cache_free(lio_cmd_cache, cmd);
+}
+
+void iscsi_release_cmd_to_pool(struct iscsi_cmd *cmd)
+{
+	if (!CONN(cmd) && !cmd->sess) {
+		iscsi_release_cmd_direct(cmd);
+	} else {
+		__iscsi_release_cmd_to_pool(cmd, (CONN(cmd)) ?
+			CONN(cmd)->sess : cmd->sess);
+	}
+}
+
+void lio_release_cmd_to_pool(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+
+	iscsi_release_cmd_to_pool(cmd);
+}
+
+/*	iscsi_pack_lun():
+ *
+ *	Routine to pack an ordinary (LINUX) LUN 32-bit number
+ *		into an 8-byte LUN structure
+ *	(see SAM-2, Section 4.12.3 page 39)
+ *	Thanks to UNH for help with this :-).
+ */
+inline u64 iscsi_pack_lun(unsigned int lun)
+{
+	u64	result;
+
+	result = ((lun & 0xff) << 8);	/* LSB of lun into byte 1 big-endian */
+
+	if (0) {
+		/* use flat space addressing method, SAM-2 Section 4.12.4
+			-	high-order 2 bits of byte 0 are 01
+			-	low-order 6 bits of byte 0 are MSB of the lun
+			-	all 8 bits of byte 1 are LSB of the lun
+			-	all other bytes (2 thru 7) are 0
+		 */
+		result |= 0x40 | ((lun >> 8) & 0x3f);
+	}
+	/* else use peripheral device addressing method, Sam-2 Section 4.12.5
+			-	high-order 2 bits of byte 0 are 00
+			-	low-order 6 bits of byte 0 are all 0
+			-	all 8 bits of byte 1 are the lun
+			-	all other bytes (2 thru 7) are 0
+	*/
+
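+	/*
+	 * e.g. lun 5 yields the wire-order bytes 00 05 00 00 00 00 00 00
+	 * (peripheral device addressing method).
+	 */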
+	return cpu_to_le64(result);
+}
+
+/*	iscsi_unpack_lun():
+ *
+ *	Routine to unpack an 8-byte LUN structure into an ordinary (LINUX) 32-bit
+ *	LUN number (see SAM-2, Section 4.12.3 page 39)
+ *	Thanks to UNH for help with this :-).
+ */
+inline u32 iscsi_unpack_lun(unsigned char *lun_ptr)
+{
+	u32	result, temp;
+
+	result = *(lun_ptr+1);  /* LSB of lun from byte 1 big-endian */
+
+	switch (temp = ((*lun_ptr)>>6)) { /* high 2 bits of byte 0 big-endian */
+	case 0: /* peripheral device addressing method, Sam-2 Section 4.12.5
+		-	high-order 2 bits of byte 0 are 00
+		-	low-order 6 bits of byte 0 are all 0
+		-	all 8 bits of byte 1 are the lun
+		-	all other bytes (2 thru 7) are 0
+		 */
+		if (*lun_ptr != 0) {
+			printk(KERN_ERR "Illegal Byte 0 in LUN peripheral"
+				" device addressing method %u, expected 0\n",
+				*lun_ptr);
+		}
+		break;
+	case 1: /* flat space addressing method, SAM-2 Section 4.12.4
+		-	high-order 2 bits of byte 0 are 01
+		-	low-order 6 bits of byte 0 are MSB of the lun
+		-	all 8 bits of byte 1 are LSB of the lun
+		-	all other bytes (2 thru 7) are 0
+		 */
+		result += ((*lun_ptr) & 0x3f) << 8;
+		break;
+	default: /* (extended) logical unit addressing */
+		printk(KERN_ERR "Unimplemented LUN addressing method %u, "
+			"PDA method used instead\n", temp);
+		break;
+	}
+
+	return result;
+}
+
+/*	iscsi_check_session_usage_count():
+ *
+ *
+ */
+int iscsi_check_session_usage_count(struct iscsi_session *sess)
+{
+	spin_lock_bh(&sess->session_usage_lock);
+	if (atomic_read(&sess->session_usage_count)) {
+		atomic_set(&sess->session_waiting_on_uc, 1);
+		spin_unlock_bh(&sess->session_usage_lock);
+		if (in_interrupt())
+			return 2;
+
+		down(&sess->session_waiting_on_uc_sem);
+		return 1;
+	}
+	spin_unlock_bh(&sess->session_usage_lock);
+
+	return 0;
+}
+
+/*	iscsi_dec_session_usage_count():
+ *
+ *
+ */
+void iscsi_dec_session_usage_count(struct iscsi_session *sess)
+{
+	spin_lock_bh(&sess->session_usage_lock);
+	atomic_dec(&sess->session_usage_count);
+
+	if (!atomic_read(&sess->session_usage_count) &&
+	     atomic_read(&sess->session_waiting_on_uc))
+		up(&sess->session_waiting_on_uc_sem);
+
+	spin_unlock_bh(&sess->session_usage_lock);
+}
+
+/*	iscsi_inc_session_usage_count():
+ *
+ *
+ */
+void iscsi_inc_session_usage_count(struct iscsi_session *sess)
+{
+
+	spin_lock_bh(&sess->session_usage_lock);
+	atomic_inc(&sess->session_usage_count);
+	spin_unlock_bh(&sess->session_usage_lock);
+}
+
+/*	iscsi_determine_sync_and_steering_counts():
+ *
+ *	Used before iscsi_do[rx,tx]_data() to determine iov and [rx,tx]_marker
+ *	array counts needed for sync and steering.
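+ *
+ *	Each marker that lands within the payload adds two 4-byte marker
+ *	slots and up to three extra iovec entries (the split data segment
+ *	plus the two marker halves), hence the +3/+2 per marker interval
+ *	below.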
+ */
+static inline int iscsi_determine_sync_and_steering_counts(
+	struct iscsi_conn *conn,
+	struct iscsi_data_count *count)
+{
+	u32 length = count->data_length;
+	u32 marker, markint;
+
+	count->sync_and_steering = 1;
+
+	marker = (count->type == ISCSI_RX_DATA) ?
+			conn->of_marker : conn->if_marker;
+	markint = (count->type == ISCSI_RX_DATA) ?
+			(CONN_OPS(conn)->OFMarkInt * 4) :
+			(CONN_OPS(conn)->IFMarkInt * 4);
+	count->ss_iov_count = count->iov_count;
+
+	while (length > 0) {
+		if (length >= marker) {
+			count->ss_iov_count += 3;
+			count->ss_marker_count += 2;
+
+			length -= marker;
+			marker = markint;
+		} else
+			length = 0;
+	}
+
+	return 0;
+}
+
+/*	iscsi_set_sync_and_steering_values():
+ *
+ * 	Setup conn->if_marker and conn->of_marker values based upon
+ * 	the initial marker-less interval. (see iSCSI v19 A.2)
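+ *
+ * 	For example, if OFMarkInt was negotiated as 2048 words (8192 bytes)
+ * 	and conn->of_marker so far only accounts for the 48-byte Login PDU
+ * 	header, of_marker becomes 8192 - 48 = 8144, i.e. the first marker
+ * 	is expected after another 8144 bytes of received data.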
+ */
+int iscsi_set_sync_and_steering_values(struct iscsi_conn *conn)
+{
+	int login_ifmarker_count = 0, login_ofmarker_count = 0, next_marker = 0;
+	/*
+	 * IFMarkInt and OFMarkInt are negotiated as 32-bit words.
+	 */
+	u32 IFMarkInt = (CONN_OPS(conn)->IFMarkInt * 4);
+	u32 OFMarkInt = (CONN_OPS(conn)->OFMarkInt * 4);
+
+	if (CONN_OPS(conn)->OFMarker) {
+		/*
+		 * Account for the first Login Command received not
+		 * via iscsi_recv_msg().
+		 */
+		conn->of_marker += ISCSI_HDR_LEN;
+		if (conn->of_marker <= OFMarkInt) {
+			conn->of_marker = (OFMarkInt - conn->of_marker);
+		} else {
+			login_ofmarker_count = (conn->of_marker / OFMarkInt);
+			next_marker = (OFMarkInt * (login_ofmarker_count + 1)) +
+					(login_ofmarker_count * MARKER_SIZE);
+			conn->of_marker = (next_marker - conn->of_marker);
+		}
+		conn->of_marker_offset = 0;
+		printk(KERN_INFO "Setting OFMarker value to %u based on Initial"
+			" Markerless Interval.\n", conn->of_marker);
+	}
+
+	if (CONN_OPS(conn)->IFMarker) {
+		if (conn->if_marker <= IFMarkInt) {
+			conn->if_marker = (IFMarkInt - conn->if_marker);
+		} else {
+			login_ifmarker_count = (conn->if_marker / IFMarkInt);
+			next_marker = (IFMarkInt * (login_ifmarker_count + 1)) +
+					(login_ifmarker_count * MARKER_SIZE);
+			conn->if_marker = (next_marker - conn->if_marker);
+		}
+		printk(KERN_INFO "Setting IFMarker value to %u based on Initial"
+			" Markerless Interval.\n", conn->if_marker);
+	}
+
+	return 0;
+}
+
+unsigned char *iscsi_ntoa(u32 ip)
+{
+	static unsigned char buf[18];
+
+	memset((void *) buf, 0, 18);
+	sprintf(buf, "%u.%u.%u.%u", ((ip >> 24) & 0xff), ((ip >> 16) & 0xff),
+			((ip >> 8) & 0xff), (ip & 0xff));
+
+	return buf;
+}
+
+void iscsi_ntoa2(unsigned char *buf, u32 ip)
+{
+	memset((void *) buf, 0, 18);
+	sprintf(buf, "%u.%u.%u.%u", ((ip >> 24) & 0xff), ((ip >> 16) & 0xff),
+			((ip >> 8) & 0xff), (ip & 0xff));
+}
+
+#define NS_INT16SZ	 2
+#define NS_INADDRSZ	 4
+#define NS_IN6ADDRSZ	16
+
+/* const char *
+ * inet_ntop4(src, dst, size)
+ *	format an IPv4 address
+ * return:
+ *	`dst' (as a const)
+ * notes:
+ *	(1) uses no statics
+ *	(2) takes a unsigned char* not an in_addr as input
+ * author:
+ *	Paul Vixie, 1996.
+ */
+static const char *iscsi_ntop4(
+	const unsigned char *src,
+	char *dst,
+	size_t size)
+{
+	static const char *fmt = "%u.%u.%u.%u";
+	char tmp[sizeof "255.255.255.255"];
+	size_t len;
+
+	len = snprintf(tmp, sizeof tmp, fmt, src[0], src[1], src[2], src[3]);
+	if (len >= size) {
+		printk(KERN_ERR "len: %d >= size: %d\n", (int)len, (int)size);
+		return NULL;
+	}
+	memcpy(dst, tmp, len + 1);
+
+	return dst;
+}
+
+/* const char *
+ * isc_inet_ntop6(src, dst, size)
+ * convert IPv6 binary address into presentation (printable) format
+ * author:
+ *	Paul Vixie, 1996.
+ */
+const char *iscsi_ntop6(const unsigned char *src, char *dst, size_t size)
+{
+	/*
+	 * Note that int32_t and int16_t need only be "at least" large enough
+	 * to contain a value of the specified size.  On some systems, like
+	 * Crays, there is no such thing as an integer variable with 16 bits.
+	 * Keep this in mind if you think this function should have been coded
+	 * to use pointer overlays.  All the world's not a VAX.
+	 */
+	char tmp[sizeof "ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255"], *tp;
+	struct { int base, len; } best, cur;
+	unsigned int words[NS_IN6ADDRSZ / NS_INT16SZ];
+	int i, inc;
+
+	best.len = best.base = 0;
+	cur.len = cur.base = 0;
+
+	/*
+	 * Preprocess:
+	 *	Copy the input (bytewise) array into a wordwise array.
+	 *	Find the longest run of 0x00's in src[] for :: shorthanding.
+	 */
+	memset(words, '\0', sizeof words);
+	for (i = 0; i < NS_IN6ADDRSZ; i++)
+		words[i / 2] |= (src[i] << ((1 - (i % 2)) << 3));
+	best.base = -1;
+	cur.base = -1;
+	for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) {
+		if (words[i] == 0) {
+			if (cur.base == -1)
+				cur.base = i, cur.len = 1;
+			else
+				cur.len++;
+		} else {
+			if (cur.base != -1) {
+				if (best.base == -1 || cur.len > best.len)
+					best = cur;
+				cur.base = -1;
+			}
+		}
+	}
+	if (cur.base != -1) {
+		if (best.base == -1 || cur.len > best.len)
+			best = cur;
+	}
+	if (best.base != -1 && best.len < 2)
+		best.base = -1;
+
+	/*
+	 * Format the result.
+	 */
+	tp = tmp;
+	for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) {
+		/* Are we inside the best run of 0x00's? */
+		if (best.base != -1 && i >= best.base &&
+		    i < (best.base + best.len)) {
+			if (i == best.base)
+				*tp++ = ':';
+			continue;
+		}
+		/* Are we following an initial run of 0x00s or any real hex? */
+		if (i != 0)
+			*tp++ = ':';
+		/* Is this address an encapsulated IPv4? */
+		if (i == 6 && best.base == 0 &&
+		    (best.len == 6 || (best.len == 5 && words[5] == 0xffff))) {
+			if (!iscsi_ntop4(src+12, tp, sizeof tmp - (tp - tmp)))
+				return NULL;
+			tp += strlen(tp);
+			break;
+		}
+		/* snprintf() returns the formatted length; fail only on truncation */
+		inc = snprintf(tp, 5, "%x", words[i]);
+		if (inc >= 5)
+			return NULL;
+		tp += inc;
+	}
+	/* Was it a trailing run of 0x00's? */
+	if (best.base != -1 && (best.base + best.len) ==
+	    (NS_IN6ADDRSZ / NS_INT16SZ))
+		*tp++ = ':';
+	*tp++ = '\0';
+
+	/*
+	 * Check for overflow, copy, and we're done.
+	 */
+	if ((size_t)(tp - tmp) > size) {
+		printk(KERN_ERR "(size_t)(tp - tmp): %d > size: %d\n",
+			(int)(tp - tmp), (int)size);
+		return NULL;
+	}
+	memcpy(dst, tmp, tp - tmp);
+	return dst;
+}
+
+/* int
+ * inet_pton4(src, dst)
+ *	like inet_aton() but without all the hexadecimal and shorthand.
+ * return:
+ *	1 if `src' is a valid dotted quad, else 0.
+ * notice:
+ *	does not touch `dst' unless it's returning 1.
+ * author:
+ *	Paul Vixie, 1996.
+ */
+static int iscsi_pton4(const char *src, unsigned char *dst)
+{
+	static const char digits[] = "0123456789";
+	int saw_digit, octets, ch;
+	unsigned char tmp[NS_INADDRSZ], *tp;
+
+	saw_digit = 0;
+	octets = 0;
+	*(tp = tmp) = 0;
+	while ((ch = *src++) != '\0') {
+		const char *pch;
+
+		pch = strchr(digits, ch);
+		if (pch != NULL) {
+			unsigned int new = *tp * 10 + (pch - digits);
+
+			if (new > 255)
+				return 0;
+			*tp = new;
+			if (!saw_digit) {
+				if (++octets > 4)
+					return 0;
+				saw_digit = 1;
+			}
+		} else if (ch == '.' && saw_digit) {
+			if (octets == 4)
+				return 0;
+			*++tp = 0;
+			saw_digit = 0;
+		} else
+			return 0;
+	}
+	if (octets < 4)
+		return 0;
+	memcpy(dst, tmp, NS_INADDRSZ);
+	return 1;
+}
+
+/* int
+ * inet_pton6(src, dst)
+ *	convert presentation level address to network order binary form.
+ * return:
+ *	1 if `src' is a valid [RFC1884 2.2] address, else 0.
+ * notice:
+ *	(1) does not touch `dst' unless it's returning 1.
+ *	(2) :: in a full address is silently ignored.
+ * credit:
+ *	inspired by Mark Andrews.
+ * author:
+ *	Paul Vixie, 1996.
+ */
+int iscsi_pton6(const char *src, unsigned char *dst)
+{
+	static const char xdigits_l[] = "0123456789abcdef",
+			  xdigits_u[] = "0123456789ABCDEF";
+	unsigned char tmp[NS_IN6ADDRSZ], *tp, *endp, *colonp;
+	const char *xdigits, *curtok;
+	int ch, saw_xdigit;
+	unsigned int val;
+
+	memset((tp = tmp), '\0', NS_IN6ADDRSZ);
+	endp = tp + NS_IN6ADDRSZ;
+	colonp = NULL;
+	/* Leading :: requires some special handling. */
+	if (*src == ':')
+		if (*++src != ':')
+			return 0;
+	curtok = src;
+	saw_xdigit = 0;
+	val = 0;
+	while ((ch = *src++) != '\0') {
+		const char *pch;
+
+		pch = strchr((xdigits = xdigits_l), ch);
+		if (pch == NULL)
+			pch = strchr((xdigits = xdigits_u), ch);
+		if (pch != NULL) {
+			val <<= 4;
+			val |= (pch - xdigits);
+			if (val > 0xffff)
+				return 0;
+			saw_xdigit = 1;
+			continue;
+		}
+		if (ch == ':') {
+			curtok = src;
+			if (!saw_xdigit) {
+				if (colonp)
+					return 0;
+				colonp = tp;
+				continue;
+			}
+			if (tp + NS_INT16SZ > endp)
+				return 0;
+			*tp++ = (unsigned char) (val >> 8) & 0xff;
+			*tp++ = (unsigned char) val & 0xff;
+			saw_xdigit = 0;
+			val = 0;
+			continue;
+		}
+		if (ch == '.' && ((tp + NS_INADDRSZ) <= endp) &&
+		    iscsi_pton4(curtok, tp) > 0) {
+			tp += NS_INADDRSZ;
+			saw_xdigit = 0;
+			break;	/* '\0' was seen by inet_pton4(). */
+		}
+		return 0;
+	}
+	if (saw_xdigit) {
+		if (tp + NS_INT16SZ > endp)
+			return 0;
+		*tp++ = (unsigned char) (val >> 8) & 0xff;
+		*tp++ = (unsigned char) val & 0xff;
+	}
+	if (colonp != NULL) {
+		/*
+		 * Since some memmove()'s erroneously fail to handle
+		 * overlapping regions, we'll do the shift by hand.
+		 */
+		const int n = tp - colonp;
+		int i;
+
+		for (i = 1; i <= n; i++) {
+			endp[-i] = colonp[n - i];
+			colonp[n - i] = 0;
+		}
+		tp = endp;
+	}
+	if (tp != endp)
+		return 0;
+	memcpy(dst, tmp, NS_IN6ADDRSZ);
+	return 1;
+}
+
+/*	iscsi_get_conn_from_cid():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_get_conn_from_cid(struct iscsi_session *sess, u16 cid)
+{
+	struct iscsi_conn *conn;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+		if ((conn->cid == cid) &&
+		    (conn->conn_state == TARG_CONN_STATE_LOGGED_IN)) {
+			iscsi_inc_conn_usage_count(conn);
+			spin_unlock_bh(&sess->conn_lock);
+			return conn;
+		}
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	return NULL;
+}
+
+/*	iscsi_get_conn_from_cid_rcfr():
+ *
+ *
+ */
+struct iscsi_conn *iscsi_get_conn_from_cid_rcfr(struct iscsi_session *sess, u16 cid)
+{
+	struct iscsi_conn *conn;
+
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
+		if (conn->cid == cid) {
+			iscsi_inc_conn_usage_count(conn);
+			spin_lock(&conn->state_lock);
+			atomic_set(&conn->connection_wait_rcfr, 1);
+			spin_unlock(&conn->state_lock);
+			spin_unlock_bh(&sess->conn_lock);
+			return conn;
+		}
+	}
+	spin_unlock_bh(&sess->conn_lock);
+
+	return NULL;
+}
+
+/*	iscsi_check_conn_usage_count():
+ *
+ *
+ */
+void iscsi_check_conn_usage_count(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->conn_usage_lock);
+	if (atomic_read(&conn->conn_usage_count)) {
+		atomic_set(&conn->conn_waiting_on_uc, 1);
+		spin_unlock_bh(&conn->conn_usage_lock);
+
+		down(&conn->conn_waiting_on_uc_sem);
+		return;
+	}
+	spin_unlock_bh(&conn->conn_usage_lock);
+}
+
+/*	iscsi_dec_conn_usage_count():
+ *
+ *
+ */
+void iscsi_dec_conn_usage_count(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->conn_usage_lock);
+	atomic_dec(&conn->conn_usage_count);
+
+	if (!atomic_read(&conn->conn_usage_count) &&
+	     atomic_read(&conn->conn_waiting_on_uc))
+		up(&conn->conn_waiting_on_uc_sem);
+
+	spin_unlock_bh(&conn->conn_usage_lock);
+}
+
+/*	iscsi_inc_conn_usage_count():
+ *
+ *
+ */
+void iscsi_inc_conn_usage_count(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->conn_usage_lock);
+	atomic_inc(&conn->conn_usage_count);
+	spin_unlock_bh(&conn->conn_usage_lock);
+}
+
+/*	iscsi_async_msg_timer_function():
+ *
+ *
+ */
+void iscsi_async_msg_timer_function(unsigned long data)
+{
+	up((struct semaphore *) data);
+}
+
+/*	iscsi_check_for_active_network_device():
+ *
+ *
+ */
+int iscsi_check_for_active_network_device(struct iscsi_conn *conn)
+{
+	struct net_device *net_dev;
+
+	if (!conn->net_if) {
+		printk(KERN_ERR "struct iscsi_conn->net_if is NULL for CID:"
+			" %hu\n", conn->cid);
+		return 0;
+	}
+	net_dev = conn->net_if;
+
+	return netif_carrier_ok(net_dev);
+}
+
+/*	iscsi_handle_netif_timeout():
+ *
+ *
+ */
+static void iscsi_handle_netif_timeout(unsigned long data)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *) data;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&conn->netif_lock);
+	if (conn->netif_timer_flags & NETIF_TF_STOP) {
+		spin_unlock_bh(&conn->netif_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+	conn->netif_timer_flags &= ~NETIF_TF_RUNNING;
+
+	if (iscsi_check_for_active_network_device((void *)conn)) {
+		iscsi_start_netif_timer(conn);
+		spin_unlock_bh(&conn->netif_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+
+	printk(KERN_ERR "Detected PHY loss on Network Interface: %s for iSCSI"
+		" CID: %hu on SID: %u\n", conn->net_dev, conn->cid,
+			SESS(conn)->sid);
+
+	spin_unlock_bh(&conn->netif_lock);
+
+	iscsi_cause_connection_reinstatement(conn, 0);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*	iscsi_get_network_interface_from_conn():
+ *
+ *
+ */
+void iscsi_get_network_interface_from_conn(struct iscsi_conn *conn)
+{
+	struct net_device *net_dev;
+
+	net_dev = dev_get_by_name(&init_net, conn->net_dev);
+	if (!(net_dev)) {
+		printk(KERN_ERR "Unable to locate active network interface:"
+			" %s\n", strlen(conn->net_dev) ?
+			conn->net_dev : "None");
+		conn->net_if = NULL;
+		return;
+	}
+
+	conn->net_if = net_dev;
+}
+
+/*      iscsi_start_netif_timer():
+ *
+ *	Called with conn->netif_lock held.
+ */
+void iscsi_start_netif_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_portal_group *tpg = ISCSI_TPG_C(conn);
+
+	if (!conn->net_if)
+		return;
+
+	if (conn->netif_timer_flags & NETIF_TF_RUNNING)
+		return;
+
+	init_timer(&conn->transport_timer);
+	SETUP_TIMER(conn->transport_timer, ISCSI_TPG_ATTRIB(tpg)->netif_timeout,
+		conn, iscsi_handle_netif_timeout);
+	conn->netif_timer_flags &= ~NETIF_TF_STOP;
+	conn->netif_timer_flags |= NETIF_TF_RUNNING;
+	add_timer(&conn->transport_timer);
+}
+
+/*	iscsi_stop_netif_timer():
+ *
+ *
+ */
+void iscsi_stop_netif_timer(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->netif_lock);
+	if (!(conn->netif_timer_flags & NETIF_TF_RUNNING)) {
+		spin_unlock_bh(&conn->netif_lock);
+		return;
+	}
+	conn->netif_timer_flags |= NETIF_TF_STOP;
+	spin_unlock_bh(&conn->netif_lock);
+
+	del_timer_sync(&conn->transport_timer);
+
+	spin_lock_bh(&conn->netif_lock);
+	conn->netif_timer_flags &= ~NETIF_TF_RUNNING;
+	spin_unlock_bh(&conn->netif_lock);
+}
+
+/*	iscsi_handle_nopin_response_timeout():
+ *
+ *
+ */
+static void iscsi_handle_nopin_response_timeout(unsigned long data)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *) data;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_STOP) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+
+	TRACE(TRACE_TIMER, "Did not receive response to NOPIN on CID: %hu on"
+		" SID: %u, failing connection.\n", conn->cid,
+			SESS(conn)->sid);
+	conn->nopin_response_timer_flags &= ~NOPIN_RESPONSE_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	{
+	struct iscsi_portal_group *tpg = conn->sess->tpg;
+	struct iscsi_tiqn *tiqn = tpg->tpg_tiqn;
+
+	if (tiqn) {
+		spin_lock_bh(&tiqn->sess_err_stats.lock);
+		strcpy(tiqn->sess_err_stats.last_sess_fail_rem_name,
+				(void *)SESS_OPS_C(conn)->InitiatorName);
+		tiqn->sess_err_stats.last_sess_failure_type =
+				ISCSI_SESS_ERR_CXN_TIMEOUT;
+		tiqn->sess_err_stats.cxn_timeout_errors++;
+		SESS(conn)->conn_timeout_errors++;
+		spin_unlock_bh(&tiqn->sess_err_stats.lock);
+	}
+	}
+
+	iscsi_cause_connection_reinstatement(conn, 0);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*	iscsi_mod_nopin_response_timer():
+ *
+ *
+ */
+void iscsi_mod_nopin_response_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (!(conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_RUNNING)) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+
+	MOD_TIMER(&conn->nopin_response_timer, na->nopin_response_timeout);
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_start_nopin_response_timer():
+ *
+ *	Called with conn->nopin_timer_lock held.
+ */
+void iscsi_start_nopin_response_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_RUNNING) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+
+	init_timer(&conn->nopin_response_timer);
+	SETUP_TIMER(conn->nopin_response_timer, na->nopin_response_timeout,
+		conn, iscsi_handle_nopin_response_timeout);
+	conn->nopin_response_timer_flags &= ~NOPIN_RESPONSE_TF_STOP;
+	conn->nopin_response_timer_flags |= NOPIN_RESPONSE_TF_RUNNING;
+	add_timer(&conn->nopin_response_timer);
+
+	TRACE(TRACE_TIMER, "Started NOPIN Response Timer on CID: %d to %u"
+		" seconds\n", conn->cid, na->nopin_response_timeout);
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_stop_nopin_response_timer():
+ *
+ *
+ */
+void iscsi_stop_nopin_response_timer(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (!(conn->nopin_response_timer_flags & NOPIN_RESPONSE_TF_RUNNING)) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+	conn->nopin_response_timer_flags |= NOPIN_RESPONSE_TF_STOP;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	del_timer_sync(&conn->nopin_response_timer);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	conn->nopin_response_timer_flags &= ~NOPIN_RESPONSE_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_handle_nopin_timeout():
+ *
+ *
+ */
+static void iscsi_handle_nopin_timeout(unsigned long data)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *) data;
+
+	iscsi_inc_conn_usage_count(conn);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_timer_flags & NOPIN_TF_STOP) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		iscsi_dec_conn_usage_count(conn);
+		return;
+	}
+	conn->nopin_timer_flags &= ~NOPIN_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	iscsi_add_nopin(conn, 1);
+	iscsi_dec_conn_usage_count(conn);
+}
+
+/*
+ * Called with conn->nopin_timer_lock held.
+ */
+void __iscsi_start_nopin_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+	/*
+	 * NOPIN timeout is disabled.
+	 */
+	if (!(na->nopin_timeout))
+		return;
+
+	if (conn->nopin_timer_flags & NOPIN_TF_RUNNING)
+		return;
+
+	init_timer(&conn->nopin_timer);
+	SETUP_TIMER(conn->nopin_timer, na->nopin_timeout, conn,
+		iscsi_handle_nopin_timeout);
+	conn->nopin_timer_flags &= ~NOPIN_TF_STOP;
+	conn->nopin_timer_flags |= NOPIN_TF_RUNNING;
+	add_timer(&conn->nopin_timer);
+
+	TRACE(TRACE_TIMER, "Started NOPIN Timer on CID: %d at %u second"
+		" interval\n", conn->cid, na->nopin_timeout);
+}
+
+/*	iscsi_start_nopin_timer():
+ *
+ *
+ */
+void iscsi_start_nopin_timer(struct iscsi_conn *conn)
+{
+	struct iscsi_session *sess = SESS(conn);
+	struct iscsi_node_attrib *na = iscsi_tpg_get_node_attrib(sess);
+	/*
+	 * NOPIN timeout is disabled.
+	 */
+	if (!(na->nopin_timeout))
+		return;
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (conn->nopin_timer_flags & NOPIN_TF_RUNNING) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+
+	init_timer(&conn->nopin_timer);
+	SETUP_TIMER(conn->nopin_timer, na->nopin_timeout, conn,
+			iscsi_handle_nopin_timeout);
+	conn->nopin_timer_flags &= ~NOPIN_TF_STOP;
+	conn->nopin_timer_flags |= NOPIN_TF_RUNNING;
+	add_timer(&conn->nopin_timer);
+
+	TRACE(TRACE_TIMER, "Started NOPIN Timer on CID: %d at %u second"
+			" interval\n", conn->cid, na->nopin_timeout);
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+/*	iscsi_stop_nopin_timer():
+ *
+ *
+ */
+void iscsi_stop_nopin_timer(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&conn->nopin_timer_lock);
+	if (!(conn->nopin_timer_flags & NOPIN_TF_RUNNING)) {
+		spin_unlock_bh(&conn->nopin_timer_lock);
+		return;
+	}
+	conn->nopin_timer_flags |= NOPIN_TF_STOP;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+
+	del_timer_sync(&conn->nopin_timer);
+
+	spin_lock_bh(&conn->nopin_timer_lock);
+	conn->nopin_timer_flags &= ~NOPIN_TF_RUNNING;
+	spin_unlock_bh(&conn->nopin_timer_lock);
+}
+
+int iscsi_allocate_iovecs_for_cmd(struct se_cmd *se_cmd)
+{
+	struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
+	u32 iov_count = (T_TASK(se_cmd)->t_tasks_se_num == 0) ? 1 :
+				T_TASK(se_cmd)->t_tasks_se_num;
+	
+	iov_count += TRANSPORT_IOV_DATA_BUFFER;
+
+	cmd->iov_data = kzalloc(iov_count * sizeof(struct iovec), GFP_KERNEL);
+	if (!(cmd->iov_data))
+		return -ENOMEM;
+	
+	cmd->orig_iov_data_count = iov_count;
+	return 0;
+}
+
+/*	iscsi_send_tx_data():
+ *
+ *
+ */
+int iscsi_send_tx_data(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn,
+	int use_misc)
+{
+	int tx_sent, tx_size;
+	u32 iov_count;
+	struct iovec *iov;
+
+send_data:
+	tx_size = cmd->tx_size;
+
+	if (!use_misc) {
+		iov = &cmd->iov_data[0];
+		iov_count = cmd->iov_data_count;
+	} else {
+		iov = &cmd->iov_misc[0];
+		iov_count = cmd->iov_misc_count;
+	}
+
+	tx_sent = tx_data(conn, &iov[0], iov_count, tx_size);
+	if (tx_size != tx_sent) {
+		if (tx_sent == -EAGAIN) {
+			printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+			goto send_data;
+		} else
+			return -1;
+	}
+	cmd->tx_size = 0;
+
+	return 0;
+}
+
+int iscsi_fe_sendpage_sg(
+	struct se_unmap_sg *u_sg,
+	struct iscsi_conn *conn)
+{
+	int tx_sent;
+	struct iscsi_cmd *cmd = (struct iscsi_cmd *)u_sg->fabric_cmd;
+	struct se_cmd *se_cmd = SE_CMD(cmd);
+	u32 len = cmd->tx_size, pg_len, se_len, se_off, tx_size;
+	struct iovec *iov = &cmd->iov_data[0];
+	struct page *page;
+	struct se_mem *se_mem = u_sg->cur_se_mem;
+
+send_hdr:
+	tx_size = (CONN_OPS(conn)->HeaderDigest) ? ISCSI_HDR_LEN + CRC_LEN :
+			ISCSI_HDR_LEN;
+	tx_sent = tx_data(conn, iov, 1, tx_size);
+	if (tx_size != tx_sent) {
+		if (tx_sent == -EAGAIN) {
+			printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+			goto send_hdr;
+		}
+		return -1;
+	}
+
+	len -= tx_size;
+	len -= u_sg->padding;
+	if (CONN_OPS(conn)->DataDigest)
+		len -= CRC_LEN;
+
+	/*
+	 * Start calculating from the first page of current struct se_mem.
+	 */
+	page = se_mem->se_page;
+	pg_len = (PAGE_SIZE - se_mem->se_off);
+	se_len = se_mem->se_len;
+	if (se_len < pg_len)
+		pg_len = se_len;
+	se_off = se_mem->se_off;
+#if 0
+	printk(KERN_INFO "se: %p page: %p se_len: %d se_off: %d pg_len: %d\n",
+		se_mem, page, se_len, se_off, pg_len);
+#endif
+	/*
+	 * Calculate new se_len and se_off based upon u_sg->t_offset into
+	 * the current struct se_mem and possibly a different page.
+	 */
+	while (u_sg->t_offset) {
+#if 0
+		printk(KERN_INFO "u_sg->t_offset: %d, page: %p se_len: %d"
+			" se_off: %d pg_len: %d\n", u_sg->t_offset, page,
+			se_len, se_off, pg_len);
+#endif
+		if (u_sg->t_offset >= pg_len) {
+			u_sg->t_offset -= pg_len;
+			se_len -= pg_len;
+			se_off = 0;
+			pg_len = PAGE_SIZE;
+			page++;
+		} else {
+			se_off += u_sg->t_offset;
+			se_len -= u_sg->t_offset;
+			u_sg->t_offset = 0;
+		}
+	}
+
+	/*
+	 * Perform sendpage() for each page in the struct se_mem
+	 */
+	while (len) {
+#if 0
+		printk(KERN_INFO "len: %d page: %p se_len: %d se_off: %d\n",
+			len, page, se_len, se_off);
+#endif
+		if (se_len > len)
+			se_len = len;
+send_pg:
+		tx_sent = conn->sock->ops->sendpage(conn->sock,
+				page, se_off, se_len, 0);
+		if (tx_sent != se_len) {
+			if (tx_sent == -EAGAIN) {
+				printk(KERN_ERR "tcp_sendpage() returned"
+						" -EAGAIN\n");
+				goto send_pg;
+			}
+
+			printk(KERN_ERR "tcp_sendpage() failure: %d\n",
+					tx_sent);
+			return -1;
+		}
+
+		len -= se_len;
+		if (!(len))
+			break;
+
+		se_len -= tx_sent;
+		if (!(se_len)) {
+			list_for_each_entry_continue(se_mem,
+					T_TASK(se_cmd)->t_mem_list, se_list)
+				break;
+
+			if (!se_mem) {
+				printk(KERN_ERR "Unable to locate next struct se_mem\n");
+				return -1;
+			}
+
+			se_len = se_mem->se_len;
+			se_off = se_mem->se_off;
+			page = se_mem->se_page;
+		} else {
+			se_len = PAGE_SIZE;
+			se_off = 0;
+			page++;
+		}
+	}
+
+send_padding:
+	if (u_sg->padding) {
+		struct iovec *iov_p =
+			&cmd->iov_data[cmd->iov_data_count-2];
+
+		tx_sent = tx_data(conn, iov_p, 1, u_sg->padding);
+		if (u_sg->padding != tx_sent) {
+			if (tx_sent == -EAGAIN) {
+				printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+				goto send_padding;
+			}
+			return -1;
+		}
+	}
+
+send_datacrc:
+	if (CONN_OPS(conn)->DataDigest) {
+		struct iovec *iov_d =
+			&cmd->iov_data[cmd->iov_data_count-1];
+
+		tx_sent = tx_data(conn, iov_d, 1, CRC_LEN);
+		if (CRC_LEN != tx_sent) {
+			if (tx_sent == -EAGAIN) {
+				printk(KERN_ERR "tx_data() returned -EAGAIN\n");
+				goto send_datacrc;
+			}
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/*      iscsi_tx_login_rsp():
+ *
+ *      This function is mainly used for sending an ISCSI_OP_LOGIN_RSP PDU
+ *      back to the Initiator when an exception condition occurs, with the
+ *      errors set in status_class and status_detail.
+ *
+ *      Parameters:     iSCSI Connection, Status Class, Status Detail.
+ *      Returns:        0 on success, -1 on error.
+ */
+int iscsi_tx_login_rsp(struct iscsi_conn *conn, u8 status_class, u8 status_detail)
+{
+	u8 iscsi_hdr[ISCSI_HDR_LEN];
+	int err;
+	struct iovec iov;
+	struct iscsi_login_rsp *hdr;
+
+	iscsi_collect_login_stats(conn, status_class, status_detail);
+
+	memset((void *)&iov, 0, sizeof(struct iovec));
+	memset((void *)&iscsi_hdr, 0x0, ISCSI_HDR_LEN);
+
+	hdr	= (struct iscsi_login_rsp *)&iscsi_hdr;
+	hdr->opcode		= ISCSI_OP_LOGIN_RSP;
+	hdr->status_class	= status_class;
+	hdr->status_detail	= status_detail;
+	hdr->itt		= cpu_to_be32(conn->login_itt);
+
+	iov.iov_base		= &iscsi_hdr;
+	iov.iov_len		= ISCSI_HDR_LEN;
+
+	PRINT_BUFF(iscsi_hdr, ISCSI_HDR_LEN);
+
+	err = tx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+	if (err != ISCSI_HDR_LEN) {
+		printk(KERN_ERR "tx_data returned less than expected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/*	iscsi_print_session_params():
+ *
+ *
+ */
+void iscsi_print_session_params(struct iscsi_session *sess)
+{
+	struct iscsi_conn *conn;
+
+	printk(KERN_INFO "-----------------------------[Session Params for"
+		" SID: %u]-----------------------------\n", sess->sid);
+	spin_lock_bh(&sess->conn_lock);
+	list_for_each_entry(conn, &sess->sess_conn_list, conn_list)
+		iscsi_dump_conn_ops(conn->conn_ops);
+	spin_unlock_bh(&sess->conn_lock);
+
+	iscsi_dump_sess_ops(sess->sess_ops);
+}
+
+/*	iscsi_do_rx_data():
+ *
+ *
+ */
+static inline int iscsi_do_rx_data(
+	struct iscsi_conn *conn,
+	struct iscsi_data_count *count)
+{
+	int data = count->data_length, rx_loop = 0, total_rx = 0;
+	u32 rx_marker_val[count->ss_marker_count], rx_marker_iov = 0;
+	struct iovec iov[count->ss_iov_count];
+	mm_segment_t oldfs;
+	struct msghdr msg;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	memset(&msg, 0, sizeof(struct msghdr));
+
+	if (count->sync_and_steering) {
+		int size = 0;
+		u32 i, orig_iov_count = 0;
+		u32 orig_iov_len = 0, orig_iov_loc = 0;
+		u32 iov_count = 0, per_iov_bytes = 0;
+		u32 *rx_marker, old_rx_marker = 0;
+		struct iovec *iov_record;
+
+		memset((void *)&rx_marker_val, 0,
+				count->ss_marker_count * sizeof(u32));
+		memset((void *)&iov, 0,
+				count->ss_iov_count * sizeof(struct iovec));
+
+		iov_record = count->iov;
+		orig_iov_count = count->iov_count;
+		rx_marker = &conn->of_marker;
+
+		i = 0;
+		size = data;
+		orig_iov_len = iov_record[orig_iov_loc].iov_len;
+		while (size > 0) {
+			TRACE(TRACE_SSLR, "rx_data: #1 orig_iov_len %u,"
+			" orig_iov_loc %u\n", orig_iov_len, orig_iov_loc);
+			TRACE(TRACE_SSLR, "rx_data: #2 rx_marker %u, size"
+				" %u\n", *rx_marker, size);
+
+			if (orig_iov_len >= *rx_marker) {
+				iov[iov_count].iov_len = *rx_marker;
+				iov[iov_count++].iov_base =
+					(iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&rx_marker_val[rx_marker_iov++];
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&rx_marker_val[rx_marker_iov++];
+				old_rx_marker = *rx_marker;
+
+				/*
+				 * OFMarkInt is in 32-bit words.
+				 */
+				*rx_marker = (CONN_OPS(conn)->OFMarkInt * 4);
+				size -= old_rx_marker;
+				orig_iov_len -= old_rx_marker;
+				per_iov_bytes += old_rx_marker;
+
+				TRACE(TRACE_SSLR, "rx_data: #3 new_rx_marker"
+					" %u, size %u\n", *rx_marker, size);
+			} else {
+				iov[iov_count].iov_len = orig_iov_len;
+				iov[iov_count++].iov_base =
+					(iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				per_iov_bytes = 0;
+				*rx_marker -= orig_iov_len;
+				size -= orig_iov_len;
+
+				if (size)
+					orig_iov_len =
+					iov_record[++orig_iov_loc].iov_len;
+
+				TRACE(TRACE_SSLR, "rx_data: #4 new_rx_marker"
+					" %u, size %u\n", *rx_marker, size);
+			}
+		}
+		data += (rx_marker_iov * (MARKER_SIZE / 2));
+
+		msg.msg_iov	= &iov[0];
+		msg.msg_iovlen	= iov_count;
+
+		if (iov_count > count->ss_iov_count) {
+			printk(KERN_ERR "iov_count: %d, count->ss_iov_count:"
+				" %d\n", iov_count, count->ss_iov_count);
+			return -1;
+		}
+		if (rx_marker_iov > count->ss_marker_count) {
+			printk(KERN_ERR "rx_marker_iov: %d, count->ss_marker"
+				"_count: %d\n", rx_marker_iov,
+				count->ss_marker_count);
+			return -1;
+		}
+	} else {
+		msg.msg_iov	= count->iov;
+		msg.msg_iovlen	= count->iov_count;
+	}
+
+	while (total_rx < data) {
+		oldfs = get_fs();
+		set_fs(get_ds());
+
+		conn->sock->sk->sk_allocation = GFP_ATOMIC;
+		rx_loop = sock_recvmsg(conn->sock, &msg,
+				(data - total_rx), MSG_WAITALL);
+
+		set_fs(oldfs);
+
+		if (rx_loop <= 0) {
+			TRACE(TRACE_NET, "rx_loop: %d total_rx: %d\n",
+				rx_loop, total_rx);
+			return rx_loop;
+		}
+		total_rx += rx_loop;
+		TRACE(TRACE_NET, "rx_loop: %d, total_rx: %d, data: %d\n",
+				rx_loop, total_rx, data);
+	}
+
+	if (count->sync_and_steering) {
+		int j;
+		for (j = 0; j < rx_marker_iov; j++) {
+			TRACE(TRACE_SSLR, "rx_data: #5 j: %d, offset: %d\n",
+				j, rx_marker_val[j]);
+			conn->of_marker_offset = rx_marker_val[j];
+		}
+		total_rx -= (rx_marker_iov * (MARKER_SIZE / 2));
+	}
+
+	return total_rx;
+}
+
+/*	iscsi_do_tx_data():
+ *
+ *
+ */
+static inline int iscsi_do_tx_data(
+	struct iscsi_conn *conn,
+	struct iscsi_data_count *count)
+{
+	int data = count->data_length, total_tx = 0, tx_loop = 0;
+	u32 tx_marker_val[count->ss_marker_count], tx_marker_iov = 0;
+	struct iovec iov[count->ss_iov_count];
+	mm_segment_t oldfs;
+	struct msghdr msg;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	if (data <= 0) {
+		printk(KERN_ERR "Data length is: %d\n", data);
+		return -1;
+	}
+
+	memset(&msg, 0, sizeof(struct msghdr));
+
+	if (count->sync_and_steering) {
+		int size = 0;
+		u32 i, orig_iov_count = 0;
+		u32 orig_iov_len = 0, orig_iov_loc = 0;
+		u32 iov_count = 0, per_iov_bytes = 0;
+		u32 *tx_marker, old_tx_marker = 0;
+		struct iovec *iov_record;
+
+		memset((void *)&tx_marker_val, 0,
+			count->ss_marker_count * sizeof(u32));
+		memset((void *)&iov, 0,
+			count->ss_iov_count * sizeof(struct iovec));
+
+		iov_record = count->iov;
+		orig_iov_count = count->iov_count;
+		tx_marker = &conn->if_marker;
+
+		i = 0;
+		size = data;
+		orig_iov_len = iov_record[orig_iov_loc].iov_len;
+		while (size > 0) {
+			TRACE(TRACE_SSLT, "tx_data: #1 orig_iov_len %u,"
+			" orig_iov_loc %u\n", orig_iov_len, orig_iov_loc);
+			TRACE(TRACE_SSLT, "tx_data: #2 tx_marker %u, size"
+				" %u\n", *tx_marker, size);
+
+			if (orig_iov_len >= *tx_marker) {
+				iov[iov_count].iov_len = *tx_marker;
+				iov[iov_count++].iov_base =
+					(iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				tx_marker_val[tx_marker_iov] =
+						(size - *tx_marker);
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&tx_marker_val[tx_marker_iov++];
+				iov[iov_count].iov_len = (MARKER_SIZE / 2);
+				iov[iov_count++].iov_base =
+					&tx_marker_val[tx_marker_iov++];
+				old_tx_marker = *tx_marker;
+
+				/*
+				 * IFMarkInt is in 32-bit words.
+				 */
+				*tx_marker = (CONN_OPS(conn)->IFMarkInt * 4);
+				size -= old_tx_marker;
+				orig_iov_len -= old_tx_marker;
+				per_iov_bytes += old_tx_marker;
+
+				TRACE(TRACE_SSLT, "tx_data: #3 new_tx_marker"
+					" %u, size %u\n", *tx_marker, size);
+				TRACE(TRACE_SSLT, "tx_data: #4 offset %u\n",
+					tx_marker_val[tx_marker_iov-1]);
+			} else {
+				iov[iov_count].iov_len = orig_iov_len;
+				iov[iov_count++].iov_base
+					= (iov_record[orig_iov_loc].iov_base +
+						per_iov_bytes);
+
+				per_iov_bytes = 0;
+				*tx_marker -= orig_iov_len;
+				size -= orig_iov_len;
+
+				if (size)
+					orig_iov_len =
+					iov_record[++orig_iov_loc].iov_len;
+
+				TRACE(TRACE_SSLT, "tx_data: #5 new_tx_marker"
+					" %u, size %u\n", *tx_marker, size);
+			}
+		}
+
+		data += (tx_marker_iov * (MARKER_SIZE / 2));
+
+		msg.msg_iov	= &iov[0];
+		msg.msg_iovlen = iov_count;
+
+		if (iov_count > count->ss_iov_count) {
+			printk(KERN_ERR "iov_count: %d, count->ss_iov_count:"
+				" %d\n", iov_count, count->ss_iov_count);
+			return -1;
+		}
+		if (tx_marker_iov > count->ss_marker_count) {
+			printk(KERN_ERR "tx_marker_iov: %d, count->ss_marker"
+				"_count: %d\n", tx_marker_iov,
+				count->ss_marker_count);
+			return -1;
+		}
+	} else {
+		msg.msg_iov	= count->iov;
+		msg.msg_iovlen	= count->iov_count;
+	}
+
+	while (total_tx < data) {
+		oldfs = get_fs();
+		set_fs(get_ds());
+
+		conn->sock->sk->sk_allocation = GFP_ATOMIC;
+		tx_loop = sock_sendmsg(conn->sock, &msg, (data - total_tx));
+
+		set_fs(oldfs);
+
+		if (tx_loop <= 0) {
+			TRACE(TRACE_NET, "tx_loop: %d total_tx %d\n",
+				tx_loop, total_tx);
+			return tx_loop;
+		}
+		total_tx += tx_loop;
+		TRACE(TRACE_NET, "tx_loop: %d, total_tx: %d, data: %d\n",
+					tx_loop, total_tx, data);
+	}
+
+	if (count->sync_and_steering)
+		total_tx -= (tx_marker_iov * (MARKER_SIZE / 2));
+
+	return total_tx;
+}
+
+/*	rx_data():
+ *
+ *
+ */
+int rx_data(
+	struct iscsi_conn *conn,
+	struct iovec *iov,
+	int iov_count,
+	int data)
+{
+	struct iscsi_data_count c;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	memset(&c, 0, sizeof(struct iscsi_data_count));
+	c.iov = iov;
+	c.iov_count = iov_count;
+	c.data_length = data;
+	c.type = ISCSI_RX_DATA;
+
+	if (CONN_OPS(conn)->OFMarker &&
+	   (conn->conn_state >= TARG_CONN_STATE_LOGGED_IN)) {
+		if (iscsi_determine_sync_and_steering_counts(conn, &c) < 0)
+			return -1;
+	}
+
+	return iscsi_do_rx_data(conn, &c);
+}
+
+/*	tx_data():
+ *
+ *
+ */
+int tx_data(
+	struct iscsi_conn *conn,
+	struct iovec *iov,
+	int iov_count,
+	int data)
+{
+	struct iscsi_data_count c;
+
+	if (!conn || !conn->sock || !CONN_OPS(conn))
+		return -1;
+
+	memset(&c, 0, sizeof(struct iscsi_data_count));
+	c.iov = iov;
+	c.iov_count = iov_count;
+	c.data_length = data;
+	c.type = ISCSI_TX_DATA;
+
+	if (CONN_OPS(conn)->IFMarker &&
+	   (conn->conn_state >= TARG_CONN_STATE_LOGGED_IN)) {
+		if (iscsi_determine_sync_and_steering_counts(conn, &c) < 0)
+			return -1;
+	}
+
+	return iscsi_do_tx_data(conn, &c);
+}
+
+/*
+ * Collect login statistics
+ */
+void iscsi_collect_login_stats(
+	struct iscsi_conn *conn,
+	u8 status_class,
+	u8 status_detail)
+{
+	struct iscsi_param *intrname = NULL;
+	struct iscsi_tiqn *tiqn;
+	struct iscsi_login_stats *ls;
+
+	tiqn = iscsi_snmp_get_tiqn(conn);
+	if (!(tiqn))
+		return;
+
+	ls = &tiqn->login_stats;
+
+	spin_lock(&ls->lock);
+	if (((conn->login_ip == ls->last_intr_fail_addr) ||
+	    !(memcmp(conn->ipv6_login_ip, ls->last_intr_fail_ip6_addr,
+		IPV6_ADDRESS_SPACE))) &&
+	    ((get_jiffies_64() - ls->last_fail_time) < 10)) {
+		/* We already have the failure info for this login */
+		spin_unlock(&ls->lock);
+		return;
+	}
+
+	if (status_class == ISCSI_STATUS_CLS_SUCCESS)
+		ls->accepts++;
+	else if (status_class == ISCSI_STATUS_CLS_REDIRECT) {
+		ls->redirects++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_REDIRECT;
+	} else if ((status_class == ISCSI_STATUS_CLS_INITIATOR_ERR)  &&
+		 (status_detail == ISCSI_LOGIN_STATUS_AUTH_FAILED)) {
+		ls->authenticate_fails++;
+		ls->last_fail_type =  ISCSI_LOGIN_FAIL_AUTHENTICATE;
+	} else if ((status_class == ISCSI_STATUS_CLS_INITIATOR_ERR)  &&
+		 (status_detail == ISCSI_LOGIN_STATUS_TGT_FORBIDDEN)) {
+		ls->authorize_fails++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_AUTHORIZE;
+	} else if ((status_class == ISCSI_STATUS_CLS_INITIATOR_ERR) &&
+		 (status_detail == ISCSI_LOGIN_STATUS_INIT_ERR)) {
+		ls->negotiate_fails++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_NEGOTIATE;
+	} else {
+		ls->other_fails++;
+		ls->last_fail_type = ISCSI_LOGIN_FAIL_OTHER;
+	}
+
+	/* Save initiator name, ip address and time, if it is a failed login */
+	if (status_class != ISCSI_STATUS_CLS_SUCCESS) {
+		if (conn->param_list)
+			intrname = iscsi_find_param_from_key(INITIATORNAME,
+							     conn->param_list);
+		strcpy(ls->last_intr_fail_name,
+		       (intrname ? intrname->value : "Unknown"));
+
+		if (conn->ipv6_login_ip != NULL) {
+			memcpy(ls->last_intr_fail_ip6_addr,
+				conn->ipv6_login_ip, IPV6_ADDRESS_SPACE);
+			ls->last_intr_fail_addr = 0;
+		} else {
+			memset(ls->last_intr_fail_ip6_addr, 0,
+				IPV6_ADDRESS_SPACE);
+			ls->last_intr_fail_addr = conn->login_ip;
+		}
+		ls->last_fail_time = get_jiffies_64();
+	}
+
+	spin_unlock(&ls->lock);
+}
+
+struct iscsi_tiqn *iscsi_snmp_get_tiqn(struct iscsi_conn *conn)
+{
+	struct iscsi_portal_group *tpg;
+
+	if (!(conn) || !(conn->sess))
+		return NULL;
+
+	tpg = conn->sess->tpg;
+	if (!(tpg))
+		return NULL;
+
+	if (!(tpg->tpg_tiqn))
+		return NULL;
+
+	return tpg->tpg_tiqn;
+}
+
+int iscsi_build_sendtargets_response(struct iscsi_cmd *cmd)
+{
+	char *ip, *ip_ex, *payload = NULL;
+	struct iscsi_conn *conn = CONN(cmd);
+	struct iscsi_np_ex *np_ex;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tiqn *tiqn;
+	struct iscsi_tpg_np *tpg_np;
+	int buffer_len, end_of_buf = 0, len = 0, payload_len = 0;
+	unsigned char buf[256];
+	unsigned char buf_ipv4[IPV4_BUF_SIZE];
+
+	buffer_len = (CONN_OPS(conn)->MaxRecvDataSegmentLength > 32768) ?
+			32768 : CONN_OPS(conn)->MaxRecvDataSegmentLength;
+
+	payload = kzalloc(buffer_len, GFP_KERNEL);
+	if (!(payload)) {
+		printk(KERN_ERR "Unable to allocate memory for sendtargets"
+			" response.\n");
+		return -1;
+	}
+
+	spin_lock(&iscsi_global->tiqn_lock);
+	list_for_each_entry(tiqn, &iscsi_global->g_tiqn_list, tiqn_list) {
+		memset((void *)buf, 0, 256);
+
+		len = sprintf(buf, "TargetName=%s", tiqn->tiqn);
+		len += 1;
+
+		if ((len + payload_len) > buffer_len) {
+			end_of_buf = 1;
+			goto eob;
+		}
+		memcpy((void *)payload + payload_len, buf, len);
+		payload_len += len;
+
+		spin_lock(&tiqn->tiqn_tpg_lock);
+		list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+
+			spin_lock(&tpg->tpg_state_lock);
+			if ((tpg->tpg_state == TPG_STATE_FREE) ||
+			    (tpg->tpg_state == TPG_STATE_INACTIVE)) {
+				spin_unlock(&tpg->tpg_state_lock);
+				continue;
+			}
+			spin_unlock(&tpg->tpg_state_lock);
+
+			spin_lock(&tpg->tpg_np_lock);
+			list_for_each_entry(tpg_np, &tpg->tpg_gnp_list,
+					tpg_np_list) {
+				memset((void *)buf, 0, 256);
+
+				if (tpg_np->tpg_np->np_flags & NPF_NET_IPV6)
+					ip = &tpg_np->tpg_np->np_ipv6[0];
+				else {
+					memset(buf_ipv4, 0, IPV4_BUF_SIZE);
+					iscsi_ntoa2(buf_ipv4,
+						tpg_np->tpg_np->np_ipv4);
+					ip = &buf_ipv4[0];
+				}
+
+				len = sprintf(buf, "TargetAddress="
+					"%s%s%s:%hu,%hu",
+					(tpg_np->tpg_np->np_flags &
+						NPF_NET_IPV6) ?
+					"[" : "", ip,
+					(tpg_np->tpg_np->np_flags &
+						NPF_NET_IPV6) ?
+					"]" : "", tpg_np->tpg_np->np_port,
+					tpg->tpgt);
+				len += 1;
+
+				if ((len + payload_len) > buffer_len) {
+					spin_unlock(&tpg->tpg_np_lock);
+					spin_unlock(&tiqn->tiqn_tpg_lock);
+					end_of_buf = 1;
+					goto eob;
+				}
+
+				memcpy((void *)payload + payload_len, buf, len);
+				payload_len += len;
+
+				spin_lock(&tpg_np->tpg_np->np_ex_lock);
+				list_for_each_entry(np_ex,
+						&tpg_np->tpg_np->np_nex_list,
+						np_ex_list) {
+					if (tpg_np->tpg_np->np_flags &
+							NPF_NET_IPV6)
+						ip_ex = &np_ex->np_ex_ipv6[0];
+					else {
+						memset(buf_ipv4, 0,
+							IPV4_BUF_SIZE);
+						iscsi_ntoa2(buf_ipv4,
+							np_ex->np_ex_ipv4);
+						ip_ex = &buf_ipv4[0];
+					}
+					len = sprintf(buf, "TargetAddress="
+							"%s%s%s:%hu,%hu",
+						(tpg_np->tpg_np->np_flags &
+							NPF_NET_IPV6) ?
+						"[" : "", ip_ex,
+						(tpg_np->tpg_np->np_flags &
+							NPF_NET_IPV6) ?
+						"]" : "", np_ex->np_ex_port,
+						tpg->tpgt);
+					len += 1;
+
+					if ((len + payload_len) > buffer_len) {
+						spin_unlock(&tpg_np->tpg_np->np_ex_lock);
+						spin_unlock(&tpg->tpg_np_lock);
+						spin_unlock(&tiqn->tiqn_tpg_lock);
+						end_of_buf = 1;
+						goto eob;
+					}
+
+					memcpy((void *)payload + payload_len,
+							buf, len);
+					payload_len += len;
+				}
+				spin_unlock(&tpg_np->tpg_np->np_ex_lock);
+			}
+			spin_unlock(&tpg->tpg_np_lock);
+		}
+		spin_unlock(&tiqn->tiqn_tpg_lock);
+eob:
+		if (end_of_buf)
+			break;
+	}
+	spin_unlock(&iscsi_global->tiqn_lock);
+
+	cmd->buf_ptr = payload;
+
+	return payload_len;
+}
diff --git a/drivers/target/iscsi/iscsi_target_util.h b/drivers/target/iscsi/iscsi_target_util.h
new file mode 100644
index 0000000..4d0ca53
--- /dev/null
+++ b/drivers/target/iscsi/iscsi_target_util.h
@@ -0,0 +1,128 @@
+#ifndef ISCSI_TARGET_UTIL_H
+#define ISCSI_TARGET_UTIL_H
+
+#define MARKER_SIZE	8
+
+struct se_cmd;
+
+struct se_offset_map {
+	int                     map_reset;
+	u32                     iovec_length;
+	u32                     iscsi_offset;
+	u32                     current_offset;
+	u32                     orig_offset;
+	u32                     sg_count;
+	u32                     sg_current;
+	u32                     sg_length;
+	struct page		*sg_page;
+	struct se_mem		*map_se_mem;
+	struct se_mem		*map_orig_se_mem;
+	void			*iovec_base;
+} ____cacheline_aligned;
+
+struct se_map_sg {
+	int			sg_kmap_active:1;
+	u32			data_length;
+	u32			data_offset;
+	void			*fabric_cmd;
+	struct se_cmd		*se_cmd;
+	struct iovec		*iov;
+} ____cacheline_aligned;
+
+struct se_unmap_sg {
+	u32			data_length;
+	u32			sg_count;
+	u32			sg_offset;
+	u32			padding;
+	u32			t_offset;
+	void			*fabric_cmd;
+	struct se_cmd		*se_cmd;
+	struct se_offset_map	lmap;
+	struct se_mem		*cur_se_mem;
+} ____cacheline_aligned;
+
+extern void iscsi_attach_cmd_to_queue(struct iscsi_conn *, struct iscsi_cmd *);
+extern void iscsi_remove_cmd_from_conn_list(struct iscsi_cmd *, struct iscsi_conn *);
+extern void iscsi_ack_from_expstatsn(struct iscsi_conn *, __u32);
+extern void iscsi_remove_conn_from_list(struct iscsi_session *, struct iscsi_conn *);
+extern int iscsi_add_r2t_to_list(struct iscsi_cmd *, __u32, __u32, int, __u32);
+extern struct iscsi_r2t *iscsi_get_r2t_for_eos(struct iscsi_cmd *, __u32, __u32);
+extern struct iscsi_r2t *iscsi_get_r2t_from_list(struct iscsi_cmd *);
+extern void iscsi_free_r2t(struct iscsi_r2t *, struct iscsi_cmd *);
+extern void iscsi_free_r2ts_from_list(struct iscsi_cmd *);
+extern struct iscsi_cmd *iscsi_allocate_cmd(struct iscsi_conn *);
+extern struct iscsi_cmd *iscsi_allocate_se_cmd(struct iscsi_conn *, u32, int, int);
+extern struct iscsi_cmd *iscsi_allocate_se_cmd_for_tmr(struct iscsi_conn *, u8);
+extern int iscsi_decide_list_to_build(struct iscsi_cmd *, __u32);
+extern struct iscsi_seq *iscsi_get_seq_holder_for_datain(struct iscsi_cmd *, __u32);
+extern struct iscsi_seq *iscsi_get_seq_holder_for_r2t(struct iscsi_cmd *);
+extern struct iscsi_r2t *iscsi_get_holder_for_r2tsn(struct iscsi_cmd *, __u32);
+extern int iscsi_check_received_cmdsn(struct iscsi_conn *, struct iscsi_cmd *, __u32);
+extern int iscsi_check_unsolicited_dataout(struct iscsi_cmd *, unsigned char *);
+extern struct iscsi_cmd *iscsi_find_cmd_from_itt(struct iscsi_conn *, __u32);
+extern struct iscsi_cmd *iscsi_find_cmd_from_itt_or_dump(struct iscsi_conn *,
+			__u32, __u32);
+extern struct iscsi_cmd *iscsi_find_cmd_from_ttt(struct iscsi_conn *, __u32);
+extern int iscsi_find_cmd_for_recovery(struct iscsi_session *, struct iscsi_cmd **,
+			struct iscsi_conn_recovery **, __u32);
+extern void iscsi_add_cmd_to_immediate_queue(struct iscsi_cmd *, struct iscsi_conn *, u8);
+extern struct iscsi_queue_req *iscsi_get_cmd_from_immediate_queue(struct iscsi_conn *);
+extern void iscsi_add_cmd_to_response_queue(struct iscsi_cmd *, struct iscsi_conn *, u8);
+extern struct iscsi_queue_req *iscsi_get_cmd_from_response_queue(struct iscsi_conn *);
+extern void iscsi_remove_cmd_from_tx_queues(struct iscsi_cmd *, struct iscsi_conn *);
+extern void iscsi_free_queue_reqs_for_conn(struct iscsi_conn *);
+extern void iscsi_release_cmd_direct(struct iscsi_cmd *);
+extern void lio_release_cmd_direct(struct se_cmd *);
+extern void __iscsi_release_cmd_to_pool(struct iscsi_cmd *, struct iscsi_session *);
+extern void iscsi_release_cmd_to_pool(struct iscsi_cmd *);
+extern void lio_release_cmd_to_pool(struct se_cmd *);
+extern __u64 iscsi_pack_lun(unsigned int);
+extern __u32 iscsi_unpack_lun(unsigned char *);
+extern int iscsi_check_session_usage_count(struct iscsi_session *);
+extern void iscsi_dec_session_usage_count(struct iscsi_session *);
+extern void iscsi_inc_session_usage_count(struct iscsi_session *);
+extern int iscsi_set_sync_and_steering_values(struct iscsi_conn *);
+extern unsigned char *iscsi_ntoa(__u32);
+extern void iscsi_ntoa2(unsigned char *, __u32);
+extern const char *iscsi_ntop6(const unsigned char *, char *, size_t);
+extern int iscsi_pton6(const char *, unsigned char *);
+extern struct iscsi_conn *iscsi_get_conn_from_cid(struct iscsi_session *, __u16);
+extern struct iscsi_conn *iscsi_get_conn_from_cid_rcfr(struct iscsi_session *, __u16);
+extern void iscsi_check_conn_usage_count(struct iscsi_conn *);
+extern void iscsi_dec_conn_usage_count(struct iscsi_conn *);
+extern void iscsi_inc_conn_usage_count(struct iscsi_conn *);
+extern void iscsi_async_msg_timer_function(unsigned long);
+extern int iscsi_check_for_active_network_device(struct iscsi_conn *);
+extern void iscsi_get_network_interface_from_conn(struct iscsi_conn *);
+extern void iscsi_start_netif_timer(struct iscsi_conn *);
+extern void iscsi_stop_netif_timer(struct iscsi_conn *);
+extern void iscsi_mod_nopin_response_timer(struct iscsi_conn *);
+extern void iscsi_start_nopin_response_timer(struct iscsi_conn *);
+extern void iscsi_stop_nopin_response_timer(struct iscsi_conn *);
+extern void __iscsi_start_nopin_timer(struct iscsi_conn *);
+extern void iscsi_start_nopin_timer(struct iscsi_conn *);
+extern void iscsi_stop_nopin_timer(struct iscsi_conn *);
+extern int iscsi_allocate_iovecs_for_cmd(struct se_cmd *);
+extern int iscsi_send_tx_data(struct iscsi_cmd *, struct iscsi_conn *, int);
+extern int iscsi_fe_sendpage_sg(struct se_unmap_sg *, struct iscsi_conn *);
+extern int iscsi_tx_login_rsp(struct iscsi_conn *, __u8, __u8);
+extern void iscsi_print_session_params(struct iscsi_session *);
+extern int iscsi_print_dev_to_proc(char *, char **, off_t, int);
+extern int iscsi_print_sessions_to_proc(char *, char **, off_t, int);
+extern int iscsi_print_tpg_to_proc(char *, char **, off_t, int);
+extern int rx_data(struct iscsi_conn *, struct iovec *, int, int);
+extern int tx_data(struct iscsi_conn *, struct iovec *, int, int);
+extern void iscsi_collect_login_stats(struct iscsi_conn *, __u8, __u8);
+extern struct iscsi_tiqn *iscsi_snmp_get_tiqn(struct iscsi_conn *);
+extern int iscsi_build_sendtargets_response(struct iscsi_cmd *);
+
+extern struct target_fabric_configfs *lio_target_fabric_configfs;
+extern struct iscsi_global *iscsi_global;
+extern struct kmem_cache *lio_cmd_cache;
+extern struct kmem_cache *lio_qr_cache;
+extern struct kmem_cache *lio_r2t_cache;
+
+extern int iscsi_add_nopin(struct iscsi_conn *, int);
+
+#endif /*** ISCSI_TARGET_UTIL_H ***/
+
-- 
1.7.4.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
                   ` (10 preceding siblings ...)
  2011-03-02  3:34   ` Nicholas A. Bellinger
@ 2011-03-02  3:34 ` Nicholas A. Bellinger
  2011-03-02  6:32   ` Randy Dunlap
  11 siblings, 1 reply; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02  3:34 UTC (permalink / raw)
  To: linux-scsi, linux-kernel
  Cc: Christoph Hellwig, Mike Christie, Hannes Reinecke,
	FUJITA Tomonori, James Bottomley, Boaz Harrosh, Stephen Rothwell,
	Douglas Gilbert, Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

Add Makefile/Kconfig and update drivers/target/[Makefile,Kconfig]
to include the fabric module.

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/Kconfig        |    1 +
 drivers/target/Makefile       |    1 +
 drivers/target/iscsi/Kconfig  |   17 +++++++++++++++++
 drivers/target/iscsi/Makefile |   20 ++++++++++++++++++++
 4 files changed, 39 insertions(+), 0 deletions(-)
 create mode 100644 drivers/target/iscsi/Kconfig
 create mode 100644 drivers/target/iscsi/Makefile

diff --git a/drivers/target/Kconfig b/drivers/target/Kconfig
index 387d293..798749a 100644
--- a/drivers/target/Kconfig
+++ b/drivers/target/Kconfig
@@ -30,5 +30,6 @@ config TCM_PSCSI
 	passthrough access to Linux/SCSI device
 
 source "drivers/target/tcm_loop/Kconfig"
+source "drivers/target/iscsi/Kconfig"
 
 endif
diff --git a/drivers/target/Makefile b/drivers/target/Makefile
index 60028fe..b038b7d 100644
--- a/drivers/target/Makefile
+++ b/drivers/target/Makefile
@@ -24,3 +24,4 @@ obj-$(CONFIG_TCM_PSCSI)		+= target_core_pscsi.o
 
 # Fabric modules
 obj-$(CONFIG_TCM_LOOP_FABRIC)	+= tcm_loop/
+obj-$(CONFIG_ISCSI_TARGET)	+= iscsi/
diff --git a/drivers/target/iscsi/Kconfig b/drivers/target/iscsi/Kconfig
new file mode 100644
index 0000000..d1eaec4
--- /dev/null
+++ b/drivers/target/iscsi/Kconfig
@@ -0,0 +1,17 @@
+config ISCSI_TARGET
+	tristate "Linux-iSCSI.org iSCSI Target Mode Stack"
+	select CRYPTO
+	select CRYPTO_CRC32C
+	select CRYPTO_CRC32C_INTEL
+	help
+	Say Y here to enable the ConfigFS enabled Linux-iSCSI.org iSCSI
+	Target Mode Stack.
+
+if ISCSI_TARGET
+
+config ISCSI_TARGET_DEBUG
+	bool "LIO-Target iscsi_debug.h ring buffer messages"
+	help
+	Say Y here to enable the legacy DEBUG_ISCSI macros in iscsi_debug.h
+
+endif
diff --git a/drivers/target/iscsi/Makefile b/drivers/target/iscsi/Makefile
new file mode 100644
index 0000000..5ca883d
--- /dev/null
+++ b/drivers/target/iscsi/Makefile
@@ -0,0 +1,20 @@
+iscsi_target_mod-y +=		iscsi_auth_chap.o \
+				iscsi_parameters.o \
+				iscsi_seq_and_pdu_list.o \
+				iscsi_thread_queue.o \
+				iscsi_target_datain_values.o \
+				iscsi_target_device.o \
+				iscsi_target_erl0.o \
+				iscsi_target_erl1.o \
+				iscsi_target_erl2.o \
+				iscsi_target_login.o \
+				iscsi_target_nego.o \
+				iscsi_target_nodeattrib.o \
+				iscsi_target_tmr.o \
+				iscsi_target_tpg.o \
+				iscsi_target_util.o \
+				iscsi_target.o \
+				iscsi_target_configfs.o \
+				iscsi_target_stat.o
+
+obj-$(CONFIG_ISCSI_TARGET)	+= iscsi_target_mod.o
-- 
1.7.4.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-02  3:34 ` [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level Nicholas A. Bellinger
@ 2011-03-02  6:32   ` Randy Dunlap
  2011-03-02 21:32     ` Nicholas A. Bellinger
  0 siblings, 1 reply; 36+ messages in thread
From: Randy Dunlap @ 2011-03-02  6:32 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: linux-scsi, linux-kernel, Christoph Hellwig, Mike Christie,
	Hannes Reinecke, FUJITA Tomonori, James Bottomley, Boaz Harrosh,
	Stephen Rothwell, Douglas Gilbert

On Tue,  1 Mar 2011 19:34:01 -0800 Nicholas A. Bellinger wrote:

> From: Nicholas Bellinger <nab@linux-iscsi.org>
> 
> Add Makefile/Kconfig and update drivers/target/[Makefile,Kconfig]
> to include the fabric module.
> 
> Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
> ---
>  drivers/target/Kconfig        |    1 +
>  drivers/target/Makefile       |    1 +
>  drivers/target/iscsi/Kconfig  |   17 +++++++++++++++++
>  drivers/target/iscsi/Makefile |   20 ++++++++++++++++++++
>  4 files changed, 39 insertions(+), 0 deletions(-)
>  create mode 100644 drivers/target/iscsi/Kconfig
>  create mode 100644 drivers/target/iscsi/Makefile
> 
> diff --git a/drivers/target/Kconfig b/drivers/target/Kconfig
> index 387d293..798749a 100644
> --- a/drivers/target/Kconfig
> +++ b/drivers/target/Kconfig
> @@ -30,5 +30,6 @@ config TCM_PSCSI
>  	passthrough access to Linux/SCSI device
>  
>  source "drivers/target/tcm_loop/Kconfig"
> +source "drivers/target/iscsi/Kconfig"
>  
>  endif
> diff --git a/drivers/target/Makefile b/drivers/target/Makefile
> index 60028fe..b038b7d 100644
> --- a/drivers/target/Makefile
> +++ b/drivers/target/Makefile
> @@ -24,3 +24,4 @@ obj-$(CONFIG_TCM_PSCSI)		+= target_core_pscsi.o
>  
>  # Fabric modules
>  obj-$(CONFIG_TCM_LOOP_FABRIC)	+= tcm_loop/
> +obj-$(CONFIG_ISCSI_TARGET)	+= iscsi/
> diff --git a/drivers/target/iscsi/Kconfig b/drivers/target/iscsi/Kconfig
> new file mode 100644
> index 0000000..d1eaec4
> --- /dev/null
> +++ b/drivers/target/iscsi/Kconfig
> @@ -0,0 +1,17 @@
> +config ISCSI_TARGET
> +	tristate "Linux-iSCSI.org iSCSI Target Mode Stack"
> +	select CRYPTO
> +	select CRYPTO_CRC32C
> +	select CRYPTO_CRC32C_INTEL

  CRYPTO_CRC32C_INTEL depends on X86.  so is ISCSI_TARGET only for X86,
or is this kconfig just mucked up?


> +	help
> +	Say Y here to enable the ConfigFS enabled Linux-iSCSI.org iSCSI
> +	Target Mode Stack.
> +
> +if ISCSI_TARGET
> +
> +config ISCSI_TARGET_DEBUG
> +	bool "LIO-Target iscsi_debug.h ring buffer messages"
> +	help
> +	Say Y here to enable the legacy DEBUG_ISCSI macros in iscsi_debug.h
> +
> +endif
> diff --git a/drivers/target/iscsi/Makefile b/drivers/target/iscsi/Makefile
> new file mode 100644
> index 0000000..5ca883d
> --- /dev/null
> +++ b/drivers/target/iscsi/Makefile
> @@ -0,0 +1,20 @@
> +iscsi_target_mod-y +=		iscsi_auth_chap.o \
> +				iscsi_parameters.o \
> +				iscsi_seq_and_pdu_list.o \
> +				iscsi_thread_queue.o \
> +				iscsi_target_datain_values.o \
> +				iscsi_target_device.o \
> +				iscsi_target_erl0.o \
> +				iscsi_target_erl1.o \
> +				iscsi_target_erl2.o \
> +				iscsi_target_login.o \
> +				iscsi_target_nego.o \
> +				iscsi_target_nodeattrib.o \
> +				iscsi_target_tmr.o \
> +				iscsi_target_tpg.o \
> +				iscsi_target_util.o \
> +				iscsi_target.o \
> +				iscsi_target_configfs.o \
> +				iscsi_target_stat.o
> +
> +obj-$(CONFIG_ISCSI_TARGET)	+= iscsi_target_mod.o
> -- 


---
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-02  6:32   ` Randy Dunlap
@ 2011-03-02 21:32     ` Nicholas A. Bellinger
  2011-03-02 22:45         ` Randy Dunlap
  2011-03-03 14:19       ` Christoph Hellwig
  0 siblings, 2 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02 21:32 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-scsi, linux-kernel, Christoph Hellwig, Mike Christie,
	Hannes Reinecke, FUJITA Tomonori, James Bottomley, Boaz Harrosh,
	Stephen Rothwell, Douglas Gilbert

On Tue, 2011-03-01 at 22:32 -0800, Randy Dunlap wrote:
> On Tue,  1 Mar 2011 19:34:01 -0800 Nicholas A. Bellinger wrote:
> 
> > From: Nicholas Bellinger <nab@linux-iscsi.org>
> > 
> > Add Makefile/Kconfig and update drivers/target/[Makefile,Kconfig]
> > to include the fabric module.
> > 
> > Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
> > ---
> >  drivers/target/Kconfig        |    1 +
> >  drivers/target/Makefile       |    1 +
> >  drivers/target/iscsi/Kconfig  |   17 +++++++++++++++++
> >  drivers/target/iscsi/Makefile |   20 ++++++++++++++++++++
> >  4 files changed, 39 insertions(+), 0 deletions(-)
> >  create mode 100644 drivers/target/iscsi/Kconfig
> >  create mode 100644 drivers/target/iscsi/Makefile
> > 
> > diff --git a/drivers/target/Kconfig b/drivers/target/Kconfig
> > index 387d293..798749a 100644
> > --- a/drivers/target/Kconfig
> > +++ b/drivers/target/Kconfig
> > @@ -30,5 +30,6 @@ config TCM_PSCSI
> >  	passthrough access to Linux/SCSI device
> >  
> >  source "drivers/target/tcm_loop/Kconfig"
> > +source "drivers/target/iscsi/Kconfig"
> >  
> >  endif
> > diff --git a/drivers/target/Makefile b/drivers/target/Makefile
> > index 60028fe..b038b7d 100644
> > --- a/drivers/target/Makefile
> > +++ b/drivers/target/Makefile
> > @@ -24,3 +24,4 @@ obj-$(CONFIG_TCM_PSCSI)		+= target_core_pscsi.o
> >  
> >  # Fabric modules
> >  obj-$(CONFIG_TCM_LOOP_FABRIC)	+= tcm_loop/
> > +obj-$(CONFIG_ISCSI_TARGET)	+= iscsi/
> > diff --git a/drivers/target/iscsi/Kconfig b/drivers/target/iscsi/Kconfig
> > new file mode 100644
> > index 0000000..d1eaec4
> > --- /dev/null
> > +++ b/drivers/target/iscsi/Kconfig
> > @@ -0,0 +1,17 @@
> > +config ISCSI_TARGET
> > +	tristate "Linux-iSCSI.org iSCSI Target Mode Stack"
> > +	select CRYPTO
> > +	select CRYPTO_CRC32C
> > +	select CRYPTO_CRC32C_INTEL
> 
>   CRYPTO_CRC32C_INTEL depends on X86.  so is ISCSI_TARGET only for X86,
> or is this kconfig just mucked up?
> 
> 

Hi Randy,

The kernel code that is specific to using the SSE v4.2
instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
default to using the unoptimized 1x8 slicing soft CRC32C code.  This
particular piece of logic has been tested on powerpc and arm and is
functioning as expected from the kernel level using the arch-independent
soft code.
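
Roughly, the shape of that logic is the following condensed sketch (not
the verbatim driver code; the hash flags setup and some error paths are
trimmed for brevity):

int iscsi_login_setup_crypto(struct iscsi_conn *conn)
{
	struct iscsi_portal_group *tpg = conn->tpg;
#ifdef CONFIG_X86
	/* Prefer the SSE v4.2 based crc32c-intel transform on capable CPUs */
	if (cpu_has_xmm4_2 && ISCSI_TPG_ATTRIB(tpg)->crc32c_x86_offload) {
		conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
						CRYPTO_ALG_ASYNC);
		conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
						CRYPTO_ALG_ASYNC);
		if (!IS_ERR(conn->conn_rx_hash.tfm) &&
		    !IS_ERR(conn->conn_tx_hash.tfm))
			return 0;
		/* otherwise free whatever did allocate and fall back below */
		if (!IS_ERR(conn->conn_rx_hash.tfm))
			crypto_free_hash(conn->conn_rx_hash.tfm);
		if (!IS_ERR(conn->conn_tx_hash.tfm))
			crypto_free_hash(conn->conn_tx_hash.tfm);
	}
#endif /* CONFIG_X86 */
	/* Arch-independent fallback to the plain software crc32c transform */
	conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c", 0,
					CRYPTO_ALG_ASYNC);
	if (IS_ERR(conn->conn_rx_hash.tfm))
		return -ENOMEM;

	conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c", 0,
					CRYPTO_ALG_ASYNC);
	if (IS_ERR(conn->conn_tx_hash.tfm)) {
		crypto_free_hash(conn->conn_rx_hash.tfm);
		return -ENOMEM;
	}
	return 0;
}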

On the kbuild side, I do see the following warning on !CONFIG_X86:

warning: (LIO_TARGET) selects CRYPTO_CRC32C_INTEL which has unmet direct dependencies (CRYPTO && X86)

I looked at trying to fix this at one point, but was unable to
determine a method for adding a CONFIG_$ARCH condition to an individual
'select BAR' section of 'config FOO'..

How would you recommend handling this case..?

Thanks,

--nab



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM  top level
  2011-03-02 21:32     ` Nicholas A. Bellinger
@ 2011-03-02 22:45         ` Randy Dunlap
  2011-03-03 14:19       ` Christoph Hellwig
  1 sibling, 0 replies; 36+ messages in thread
From: Randy Dunlap @ 2011-03-02 22:45 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Randy Dunlap, linux-scsi, linux-kernel, Christoph Hellwig,
	Mike Christie, Hannes Reinecke, FUJITA Tomonori, James Bottomley,
	Boaz Harrosh, Stephen Rothwell, Douglas Gilbert


On Wed, March 2, 2011 1:32 pm, Nicholas A. Bellinger wrote:
> On Tue, 2011-03-01 at 22:32 -0800, Randy Dunlap wrote:
>
>> On Tue,  1 Mar 2011 19:34:01 -0800 Nicholas A. Bellinger wrote:
>>
>>
>>> From: Nicholas Bellinger <nab@linux-iscsi.org>
>>>
>>>
>>> Add Makefile/Kconfig and update drivers/target/[Makefile,Kconfig]
>>> to include the fabric module.
>>>
>>> Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
>>> ---
>>>  drivers/target/Kconfig        |    1 +
>>>  drivers/target/Makefile       |    1 +
>>>  drivers/target/iscsi/Kconfig  |   17 +++++++++++++++++
>>>  drivers/target/iscsi/Makefile |   20 ++++++++++++++++++++
>>>  4 files changed, 39 insertions(+), 0 deletions(-)
>>>  create mode 100644 drivers/target/iscsi/Kconfig
>>>  create mode 100644 drivers/target/iscsi/Makefile
>>>
>>> diff --git a/drivers/target/Kconfig b/drivers/target/Kconfig
>>> index 387d293..798749a 100644
>>> --- a/drivers/target/Kconfig
>>> +++ b/drivers/target/Kconfig
>>> @@ -30,5 +30,6 @@ config TCM_PSCSI
>>>  	passthrough access to Linux/SCSI device
>>>
>>>  source "drivers/target/tcm_loop/Kconfig"
>>> +source "drivers/target/iscsi/Kconfig"
>>>
>>>  endif
>>> diff --git a/drivers/target/Makefile b/drivers/target/Makefile
>>> index 60028fe..b038b7d 100644
>>> --- a/drivers/target/Makefile
>>> +++ b/drivers/target/Makefile
>>> @@ -24,3 +24,4 @@ obj-$(CONFIG_TCM_PSCSI)		+= target_core_pscsi.o
>>>
>>>  # Fabric modules
>>>  obj-$(CONFIG_TCM_LOOP_FABRIC)	+= tcm_loop/
>>> +obj-$(CONFIG_ISCSI_TARGET)	+= iscsi/
>>> diff --git a/drivers/target/iscsi/Kconfig b/drivers/target/iscsi/Kconfig
>>> new file mode 100644
>>> index 0000000..d1eaec4
>>> --- /dev/null
>>> +++ b/drivers/target/iscsi/Kconfig
>>> @@ -0,0 +1,17 @@
>>> +config ISCSI_TARGET
>>> +	tristate "Linux-iSCSI.org iSCSI Target Mode Stack"
>>> +	select CRYPTO
>>> +	select CRYPTO_CRC32C
>>> +	select CRYPTO_CRC32C_INTEL
>>>
>>
>> CRYPTO_CRC32C_INTEL depends on X86.  so is ISCSI_TARGET only for X86,
>> or is this kconfig just mucked up?
>>
>>
>
> Hi Randy,
>
>
> The kernel code that is specific to using the SSE v4.2
> instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
> iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
> default to using the unoptimized 1x8 slicing soft CRC32C code.  This
> particular piece of logic has been tested on powerpc and arm and is
> functioning as expected from the kernel level using the arch-independent
> soft code.
>
> On the kbuild side, I do see the following warning on !CONFIG_X86:
>
>
> warning: (LIO_TARGET) selects CRYPTO_CRC32C_INTEL which has unmet direct
> dependencies (CRYPTO && X86)

Ah, good.

> I looked at trying to fix this at one point, but was unable to
> determine a method for adding a CONFIG_$ARCH condition to an individual
> 'select BAR' section of 'config FOO'..
>
>
> How would you recommend handling this case..?

How about

        select CRYPTO_CRC32C_INTEL if X86

(with s/spaces/tab/)


-- 
~Randy


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM  top level
  2011-03-02 22:45         ` Randy Dunlap
@ 2011-03-02 23:18           ` Nicholas A. Bellinger
  -1 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-02 23:18 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-scsi, linux-kernel, Christoph Hellwig, Mike Christie,
	Hannes Reinecke, FUJITA Tomonori, James Bottomley, Boaz Harrosh,
	Stephen Rothwell, Douglas Gilbert

On Wed, 2011-03-02 at 14:45 -0800, Randy Dunlap wrote:
> On Wed, March 2, 2011 1:32 pm, Nicholas A. Bellinger wrote:
> > On Tue, 2011-03-01 at 22:32 -0800, Randy Dunlap wrote:
> >> On Tue,  1 Mar 2011 19:34:01 -0800 Nicholas A. Bellinger wrote:
> >>> From: Nicholas Bellinger <nab@linux-iscsi.org>

<SNIP>

> >>> --- /dev/null
> >>> +++ b/drivers/target/iscsi/Kconfig
> >>> @@ -0,0 +1,17 @@
> >>> +config ISCSI_TARGET
> >>> +	tristate "Linux-iSCSI.org iSCSI Target Mode Stack"
> >>> +	select CRYPTO
> >>> +	select CRYPTO_CRC32C
> >>> +	select CRYPTO_CRC32C_INTEL
> >>>
> >>
> >> CRYPTO_CRC32C_INTEL depends on X86.  so is ISCSI_TARGET only for X86,
> >> or is this kconfig just mucked up?
> >>
> >>
> >
> > Hi Randy,
> >
> >
> > The kernel code that is specific to using the SSE v4.2
> > instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
> > iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
> > default to using the unoptimized 1x8 slicing soft CRC32C code.  This
> > particular piece of logic has been tested on powerpc and arm and is
> > functioning as expected from the kernel level using the arch-independent
> > soft code.
> >
> > On the kbuild side, I do see the following warning on !CONFIG_X86:
> >
> >
> > warning: (LIO_TARGET) selects CRYPTO_CRC32C_INTEL which has unmet direct
> > dependencies (CRYPTO && X86)
> 
> Ah, good.
> 
> > I looked at trying to fix this at one point, but was unable to
> > determine a method for adding a CONFIG_$ARCH condition to an individual
> > 'select BAR' section of 'config FOO'..
> >
> >
> > How would you recommend handling this case..?
> 
> How about
> 
>         select CRYPTO_CRC32C_INTEL if X86
> 
> (with s/spaces/tab/)

Perfect, this resolves the kconfig warning on a nearby powerpc machine
running LIO upstream .38-rcX iSCSI target code..

Committed as c8bc93f107b into lio-core-2.6.git/lio-4.1, and I will make
sure this is included for the next iscsi-target RFC series rev.

Thanks for your review Randy!

--nab




^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-02 21:32     ` Nicholas A. Bellinger
  2011-03-02 22:45         ` Randy Dunlap
@ 2011-03-03 14:19       ` Christoph Hellwig
  2011-03-03 20:58         ` Nicholas A. Bellinger
  1 sibling, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2011-03-03 14:19 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Randy Dunlap, linux-scsi, linux-kernel, linux-crypto, James Bottomley

On Wed, Mar 02, 2011 at 01:32:11PM -0800, Nicholas A. Bellinger wrote:
> The kernel code that is specific to using the SSE v4.2
> instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
> iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
> default to using the unoptimized 1x8 slicing soft CRC32C code.  This
> particular piece of logic has been tested on powerpc and arm and is
> functioning as expected from the kernel level using the arch-independent
> soft code.

I don't think you need that code at all.  The crypto code is structured
to prefer the optimized implementation if it is present.  Just stripping
the x86-specific code out and always requesting the plain crc32c
algorithm should give you the optimized one if it is present on your
system.

Please give it a try.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-03 14:19       ` Christoph Hellwig
@ 2011-03-03 20:58         ` Nicholas A. Bellinger
  2011-03-04 17:00           ` James Bottomley
  0 siblings, 1 reply; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-03 20:58 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Randy Dunlap, linux-scsi, linux-kernel, linux-crypto, James Bottomley

On Thu, 2011-03-03 at 09:19 -0500, Christoph Hellwig wrote:
> On Wed, Mar 02, 2011 at 01:32:11PM -0800, Nicholas A. Bellinger wrote:
> > The kernel code that is specific to using the SSE v4.2
> > instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
> > iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
> > default to using the unoptimized 1x8 slicing soft CRC32C code.  This
> > particular piece of logic has been tested on powerpc and arm and is
> > functioning as expected from the kernel level using the arch-independent
> > soft code.
> 
> I don't think you need that code at all.  The crypto code is structured
> to prefer the optimized implementation if it is present.  Just stripping
> the x86-specific code out and always requesting the plain crc32c
> algorithm should give you the optimized one if it is present on your
> system.
> 
> Please give it a try.
> 

This is what I originally thought as well, but this ended up not being
the case when the logic was originally coded up.   I just tried again
with .38-rc7 on a 5500 series machine and simply stubbing out the
CONFIG_X86 part from iscsi_login_setup_crypto() and calling:

   crypto_alloc_hash("crc32c", 0, CRYPTO_ALG_ASYNC)

does not automatically load and use crc32c_intel.ko when only requesting
plain crc32c.

The reason for the extra crypto_alloc_hash("crc32c-intel", ...) call in
iscsi_login_setup_crypto() is to load crc32c_intel.ko on-demand for
cpu_has_xmm4_2 capable machines.
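
FWIW, a quick way to double check which implementation libcrypto actually
handed back is to look at the driver name behind the allocated tfm, e.g.
with a throwaway helper along these lines (the function name here is made
up, this is just a sketch):

#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/kernel.h>

/* Throwaway helper: report which driver is backing a plain "crc32c"
 * hash tfm, e.g. "crc32c-intel" vs. the generic software version. */
static int crc32c_report_driver(void)
{
	struct crypto_hash *tfm;

	tfm = crypto_alloc_hash("crc32c", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	printk(KERN_INFO "crc32c backed by: %s\n",
		crypto_tfm_alg_driver_name(crypto_hash_tfm(tfm)));

	crypto_free_hash(tfm);
	return 0;
}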

I should mention this is with the following .config:

CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m

This would seem to indicate that CRC32C_INTEL needs to be compiled in
(or at least manually loaded) for libcrypto to use the optimized
instructions when the plain crc32c is called, correct..?

--nab


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-03 20:58         ` Nicholas A. Bellinger
@ 2011-03-04 17:00           ` James Bottomley
  2011-03-07 23:15             ` Nicholas A. Bellinger
  0 siblings, 1 reply; 36+ messages in thread
From: James Bottomley @ 2011-03-04 17:00 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Christoph Hellwig, Randy Dunlap, linux-scsi, linux-kernel, linux-crypto

On Thu, 2011-03-03 at 12:58 -0800, Nicholas A. Bellinger wrote:
> On Thu, 2011-03-03 at 09:19 -0500, Christoph Hellwig wrote:
> > On Wed, Mar 02, 2011 at 01:32:11PM -0800, Nicholas A. Bellinger wrote:
> > > The kernel code that is specific to using the SSE v4.2
> > > instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
> > > iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
> > > default to using the unoptimized 1x8 slicing soft CRC32C code.  This
> > > particular piece of logic has been tested on powerpc and arm and is
> > > functioning as expected from the kernel level using the arch-independent
> > > soft code.
> > 
> > I don't think you need that code at all.  The crypto code is structured
> > to prefer the optimized implementation if it is present.  Just stripping
> > the x86-specific code out and always requesting the plain crc32c
> > algorithm should give you the optimized one if it is present on your
> > system.
> > 
> > Please give it a try.
> > 
> 
> This is what I originally thought as well, but this ended up not being
> the case when the logic was originally coded up.   I just tried again
> with .38-rc7 on a 5500 series machine and simply stubbing out the
> CONFIG_X86 part from iscsi_login_setup_crypto() and calling:
> 
>    crypto_alloc_hash("crc32c", 0, CRYPTO_ALG_ASYNC)
> 
> does not automatically load and use crc32c_intel.ko when only requesting
> plain crc32c.

It sounds like there might be a bug in the crypto layer, so the Linux
way is to make it work as intended.

It's absolutely not acceptable just to pull other layer workarounds into
drivers.

> The reason for the extra crypto_alloc_hash("crc32c-intel", ...) call in
> iscsi_login_setup_crypto() is to load crc32c_intel.ko on-demand for
> cpu_has_xmm4_2 capable machines.
> 
> I should mention this is with the following .config:
> 
> CONFIG_CRYPTO_CRC32C=y
> CONFIG_CRYPTO_CRC32C_INTEL=m
> 
> This would seem to indicate that CRC32C_INTEL needs to be compiled in
> > (or at least manually loaded) for libcrypto to use the optimized
> instructions when the plain crc32c is called, correct..?

That sounds right.  There's probably not an autoload for this on
recognising SSE instructions.

James

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-04 17:00           ` James Bottomley
@ 2011-03-07 23:15             ` Nicholas A. Bellinger
  2011-03-08  9:33                 ` Herbert Xu
  0 siblings, 1 reply; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-07 23:15 UTC (permalink / raw)
  To: James Bottomley
  Cc: Christoph Hellwig, Randy Dunlap, linux-scsi, linux-kernel, linux-crypto

On Fri, 2011-03-04 at 11:00 -0600, James Bottomley wrote:
> On Thu, 2011-03-03 at 12:58 -0800, Nicholas A. Bellinger wrote:
> > On Thu, 2011-03-03 at 09:19 -0500, Christoph Hellwig wrote:
> > > On Wed, Mar 02, 2011 at 01:32:11PM -0800, Nicholas A. Bellinger wrote:
> > > > The kernel code that is specific to using the SSE v4.2
> > > > instruction for CRC32C offload uses #ifdef CONFIG_X86 stubs in
> > > > iscsi_target_login.c:iscsi_login_setup_crypto(), and !CONFIG_X86 will
> > > > default to using the unoptimized 1x8 slicing soft CRC32C code.  This
> > > > particular piece of logic has been tested on powerpc and arm and is
> > > > functioning as expected from the kernel level using the arch-independent
> > > > soft code.
> > > 
> > > I don't think you need that code at all.  The crypto code is structured
> > > to prefer the optimized implementation if it is present.  Just stripping
> > > the x86-specific code out and always requesting the plain crc32c
> > > algorithm should give you the optimized one if it is present on your
> > > system.
> > > 
> > > Please give it a try.
> > > 
> > 
> > This is what I originally thought as well, but this ended up not being
> > the case when the logic was originally coded up.   I just tried again
> > with .38-rc7 on a 5500 series machine and simply stubbing out the
> > CONFIG_X86 part from iscsi_login_setup_crypto() and calling:
> > 
> >    crypto_alloc_hash("crc32c", 0, CRYPTO_ALG_ASYNC)
> > 
> > does not automatically load and use crc32c_intel.ko when only requesting
> > plain crc32c.
> 
> It sounds like there might be a bug in the crypto layer, so the Linux
> way is to make it work as intended.
> 
> It's absolutely not acceptable just to pull other layer workarounds into
> drivers.
> 
> > The reason for the extra crypto_alloc_hash("crc32c-intel", ...) call in
> > iscsi_login_setup_crypto() is to load crc32c_intel.ko on-demand for
> > cpu_has_xmm4_2 capable machines.
> > 
> > I should mention this is with the following .config:
> > 
> > CONFIG_CRYPTO_CRC32C=y
> > CONFIG_CRYPTO_CRC32C_INTEL=m
> > 
> > This would seem to indicate that CRC32C_INTEL needs to be compiled in
> > (or at least manually loaded) for libcrypto to use the optimized
> > instructions when the plain crc32c is called, correct..?
> 
> That sounds right.  There's probably not an autoload for this on
> recognising sse instructions.
> 

I have been thinking about this some more, and modifying libcrypto to be
aware of optimized offload methods for hardware-specific modules that it
should load does sound useful, but it seems like overkill to me for only
this particular case.

What about the following to simply call request_module("crc32c_intel")
at module_init() time and drop the extra iscsi_login_setup_crypto()
code..?

Thanks,

--nab

[PATCH] iscsi-target: Call request_module("crc32c_intel") during module_init

This patch adds a call during module_init() -> iscsi_target_register_configfs()
to request the loading of crc32c_intel.ko to allow libcrypto to properly use
the optimized offload where available.

It also removes the extra crypto_alloc_hash("crc32c-intel", ...) calls
from iscsi_login_setup_crypto() and removes the unnecessary TPG attribute
crc32c_x86_offload for controlling this offload from configfs.

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/lio-target/iscsi_target_configfs.c |   18 +++++++----
 drivers/target/lio-target/iscsi_target_core.h     |    4 --
 drivers/target/lio-target/iscsi_target_login.c    |   34 ++-------------------
 drivers/target/lio-target/iscsi_target_tpg.c      |   19 -----------
 drivers/target/lio-target/iscsi_target_tpg.h      |    1 -
 5 files changed, 15 insertions(+), 61 deletions(-)

diff --git a/drivers/target/lio-target/iscsi_target_configfs.c b/drivers/target/lio-target/iscsi_target_configfs.c
index 76ee4fc..7ba169a 100644
--- a/drivers/target/lio-target/iscsi_target_configfs.c
+++ b/drivers/target/lio-target/iscsi_target_configfs.c
@@ -927,11 +927,6 @@ TPG_ATTR(demo_mode_write_protect, S_IRUGO | S_IWUSR);
  */
 DEF_TPG_ATTRIB(prod_mode_write_protect);
 TPG_ATTR(prod_mode_write_protect, S_IRUGO | S_IWUSR);
-/*
- * Define iscsi_tpg_attrib_s_crc32c_x86_offload
- */
-DEF_TPG_ATTRIB(crc32c_x86_offload);
-TPG_ATTR(crc32c_x86_offload, S_IRUGO | S_IWUSR);

 static struct configfs_attribute *lio_target_tpg_attrib_attrs[] = {
        &iscsi_tpg_attrib_authentication.attr,
@@ -942,7 +937,6 @@ static struct configfs_attribute *lio_target_tpg_attrib_attrs[] = {
        &iscsi_tpg_attrib_cache_dynamic_acls.attr,
        &iscsi_tpg_attrib_demo_mode_write_protect.attr,
        &iscsi_tpg_attrib_prod_mode_write_protect.attr,
-       &iscsi_tpg_attrib_crc32c_x86_offload.attr,
        NULL,
 };

@@ -1525,6 +1519,18 @@ int iscsi_target_register_configfs(void)
        lio_target_fabric_configfs = fabric;
        printk(KERN_INFO "LIO_TARGET[0] - Set fabric ->"
                        " lio_target_fabric_configfs\n");
+#ifdef CONFIG_X86
+       /*
+        * For cpu_has_xmm4_2 go ahead and load crc32c_intel.ko in order for
+        * iscsi_login_setup_crypto() -> crypto_alloc_hash("crc32c", ...) to
+        * use the offload when available from libcrypto..
+        */
+       if (cpu_has_xmm4_2) {
+               int rc = request_module("crc32c_intel");
+               if (rc < 0)
+                       printk(KERN_ERR "Unable to load crc32c_intel.ko\n");
+       }
+#endif
        return 0;
 }

diff --git a/drivers/target/lio-target/iscsi_target_core.h b/drivers/target/lio-target/iscsi_target_core.h
index b8e87a3..93632f3 100644
--- a/drivers/target/lio-target/iscsi_target_core.h
+++ b/drivers/target/lio-target/iscsi_target_core.h
@@ -83,8 +83,6 @@
 #define TA_DEMO_MODE_WRITE_PROTECT     1
 /* Disabled by default in production mode w/ explict ACLs */
 #define TA_PROD_MODE_WRITE_PROTECT     0
-/* Enabled by default with x86 supporting SSE v4.2 */
-#define TA_CRC32C_X86_OFFLOAD          1
 #define TA_CACHE_CORE_NPS              0

 /* struct iscsi_data_count->type */
@@ -781,8 +779,6 @@ struct iscsi_tpg_attrib {
        u32                     default_cmdsn_depth;
        u32                     demo_mode_write_protect;
        u32                     prod_mode_write_protect;
-       /* Used to signal libcrypto crc32-intel offload instruction usage */
-       u32                     crc32c_x86_offload;
        u32                     cache_core_nps;
        struct iscsi_portal_group *tpg;
 }  ____cacheline_aligned;
diff --git a/drivers/target/lio-target/iscsi_target_login.c b/drivers/target/lio-target/iscsi_target_login.c
index 35d4765..0f098d3 100644
--- a/drivers/target/lio-target/iscsi_target_login.c
+++ b/drivers/target/lio-target/iscsi_target_login.c
@@ -95,38 +95,10 @@ static int iscsi_login_init_conn(struct iscsi_conn *conn)
 int iscsi_login_setup_crypto(struct iscsi_conn *conn)
 {
        struct iscsi_portal_group *tpg = conn->tpg;
-#ifdef CONFIG_X86
        /*
-        * Check for the Nehalem optimized crc32c-intel instructions
-        * This is only currently available while running on bare-metal,
-        * and is not yet available with QEMU-KVM guests.
-        */
-       if (cpu_has_xmm4_2 && ISCSI_TPG_ATTRIB(tpg)->crc32c_x86_offload) {
-               conn->conn_rx_hash.flags = 0;
-               conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
-                                               CRYPTO_ALG_ASYNC);
-               if (IS_ERR(conn->conn_rx_hash.tfm)) {
-                       printk(KERN_ERR "crypto_alloc_hash() failed for conn_rx_tfm\n");
-                       goto check_crc32c;
-               }
-
-               conn->conn_tx_hash.flags = 0;
-               conn->conn_tx_hash.tfm = crypto_alloc_hash("crc32c-intel", 0,
-                                               CRYPTO_ALG_ASYNC);
-               if (IS_ERR(conn->conn_tx_hash.tfm)) {   
-                       printk(KERN_ERR "crypto_alloc_hash() failed for conn_tx_tfm\n");
-                       crypto_free_hash(conn->conn_rx_hash.tfm);
-                       goto check_crc32c;
-               }
-
-               printk(KERN_INFO "LIO-Target[0]: Using Nehalem crc32c-intel"
-                                       " offload instructions\n");
-               return 0;
-       }
-check_crc32c:
-#endif /* CONFIG_X86 */
-       /*
-        * Setup slicing by 1x CRC32C algorithm for RX and TX libcrypto contexts
+        * Setup slicing by CRC32C algorithm for RX and TX libcrypto contexts
+        * which will default to crc32c_intel.ko for cpu_has_xmm4_2, or fallback
+        * to software 1x8 byte slicing from crc32c.ko
         */
        conn->conn_rx_hash.flags = 0;
        conn->conn_rx_hash.tfm = crypto_alloc_hash("crc32c", 0,
diff --git a/drivers/target/lio-target/iscsi_target_tpg.c b/drivers/target/lio-target/iscsi_target_tpg.c
index e851982..212d8c1 100644
--- a/drivers/target/lio-target/iscsi_target_tpg.c
+++ b/drivers/target/lio-target/iscsi_target_tpg.c
@@ -465,7 +465,6 @@ static void iscsi_set_default_tpg_attribs(struct iscsi_portal_group *tpg)
        a->cache_dynamic_acls = TA_CACHE_DYNAMIC_ACLS;
        a->demo_mode_write_protect = TA_DEMO_MODE_WRITE_PROTECT;
        a->prod_mode_write_protect = TA_PROD_MODE_WRITE_PROTECT;
-       a->crc32c_x86_offload = TA_CRC32C_X86_OFFLOAD;
        a->cache_core_nps = TA_CACHE_CORE_NPS;
 }

@@ -1103,24 +1102,6 @@ int iscsi_ta_prod_mode_write_protect(
        return 0;
 }

-int iscsi_ta_crc32c_x86_offload(
-       struct iscsi_portal_group *tpg,
-       u32 flag)
-{
-       struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
-
-       if ((flag != 0) && (flag != 1)) {
-               printk(KERN_ERR "Illegal value %d\n", flag);
-               return -EINVAL;
-       }
-
-       a->crc32c_x86_offload = flag;
-       printk(KERN_INFO "iSCSI_TPG[%hu] - CRC32C x86 Offload: %s\n",
-               tpg->tpgt, (a->crc32c_x86_offload) ? "ON" : "OFF");
-
-       return 0;
-}
-
 void iscsi_disable_tpgs(struct iscsi_tiqn *tiqn)
 {
        struct iscsi_portal_group *tpg;
diff --git a/drivers/target/lio-target/iscsi_target_tpg.h b/drivers/target/lio-target/iscsi_target_tpg.h
index bcdfacb..2553707 100644
--- a/drivers/target/lio-target/iscsi_target_tpg.h
+++ b/drivers/target/lio-target/iscsi_target_tpg.h
@@ -53,7 +53,6 @@ extern int iscsi_ta_default_cmdsn_depth(struct iscsi_portal_group *, u32);
 extern int iscsi_ta_cache_dynamic_acls(struct iscsi_portal_group *, u32);
 extern int iscsi_ta_demo_mode_write_protect(struct iscsi_portal_group *, u32);
 extern int iscsi_ta_prod_mode_write_protect(struct iscsi_portal_group *, u32);
-extern int iscsi_ta_crc32c_x86_offload(struct iscsi_portal_group *, u32);
 extern void iscsi_disable_tpgs(struct iscsi_tiqn *);
 extern void iscsi_disable_all_tpgs(void);
 extern void iscsi_remove_tpgs(struct iscsi_tiqn *);
-- 
1.6.2.2

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-07 23:15             ` Nicholas A. Bellinger
@ 2011-03-08  9:33                 ` Herbert Xu
  0 siblings, 0 replies; 36+ messages in thread
From: Herbert Xu @ 2011-03-08  9:33 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: James.Bottomley, hch, rdunlap, linux-scsi, linux-kernel, linux-crypto

Nicholas A. Bellinger <nab@linux-iscsi.org> wrote:
>
>> > I should mention this is with the following .config:
>> > 
>> > CONFIG_CRYPTO_CRC32C=y
>> > CONFIG_CRYPTO_CRC32C_INTEL=m

This is why you get the unoptimised version.  Had you selected
both as built-in or both as modules, then it would have worked
as intended.
 
> What about the following to simply call request_module("crc32c_intel")
> at module_init() time and drop the extra iscsi_login_setup_crypto()
> code..?

If we're going to do this we should do it in the crypto layer,
and not litter every single crypto API user with such crap.

Currently we don't invoke request_module unless no implementation
is registered for an algorithm.  You can change this so that it
also invokes request_module if we have not yet done so at least
once for that algorithm.

Patches are welcome.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level
  2011-03-08  9:33                 ` Herbert Xu
@ 2011-03-10  8:02                   ` Nicholas A. Bellinger
  -1 siblings, 0 replies; 36+ messages in thread
From: Nicholas A. Bellinger @ 2011-03-10  8:02 UTC (permalink / raw)
  To: Herbert Xu
  Cc: James.Bottomley, hch, rdunlap, linux-scsi, linux-kernel, linux-crypto

On Tue, 2011-03-08 at 17:33 +0800, Herbert Xu wrote:
> Nicholas A. Bellinger <nab@linux-iscsi.org> wrote:
> >
> >> > I should mention this is with the following .config:
> >> > 
> >> > CONFIG_CRYPTO_CRC32C=y
> >> > CONFIG_CRYPTO_CRC32C_INTEL=m
> 
> This is why you get the unoptimised version.  Had you selected
> both as built-in or both as modules, then it would have worked
> as intended.
>  

<nod>

> > What about the following to simply call request_module("crc32c_intel")
> > at module_init() time and drop the extra iscsi_login_setup_crypto()
> > code..?
> 
> If we're going to do this we should do it in the crypto layer,
> and not litter every single crypto API user with such crap.
> 
> Currently we don't invoke request_module unless no implementation
> is registered for an algorithm.  You can change this so that it
> also invokes request_module if we have not yet done so at least
> once for that algorithm.
> 
> Patches are welcome.
> 

Ok, fair enough point..  I have addressed this with a new struct
crypto_alg->cra_check_optimized() callback in order for crc32c.ko to
have a method to call request_module("crc32c_intel") after the base
software alg has been loaded.
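
As a rough sketch of the crc32c.ko side (the callback name and the
no-argument signature here are only my current working assumption until
the actual patches are posted):

#ifdef CONFIG_X86
#include <asm/cpufeature.h>
#endif
#include <linux/kmod.h>

/*
 * Sketch only: invoked once by the crypto core after the generic crc32c
 * algorithm has registered, so the software implementation can pull in
 * an arch-optimized replacement when the hardware supports it.
 */
static void crc32c_cra_check_optimized(void)
{
#ifdef CONFIG_X86
	/* Load the SSE v4.2 based implementation on capable CPUs */
	if (cpu_has_xmm4_2)
		request_module("crc32c_intel");
#endif
}

with the callback wired up through the new ->cra_check_optimized() member
in crc32c's struct crypto_alg.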

This is working with the CONFIG_CRYPTO_CRC32C=y + CONFIG_CRYPTO_CRC32C_INTEL=m
case and should satisfy current (and future) architecture-dependent
cases for CRC32C HW offload.

Sending out a patch series for your comments shortly..

Thanks!

--nab

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2011-03-10  8:09 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-03-02  3:33 [RFC 00/12] iSCSI target v4.1.0-rc1 series Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 01/12] iscsi: Resolve iscsi_proto.h naming conflicts with drivers/target/iscsi Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 02/12] iscsi-target: Add primary iSCSI request/response state machine logic Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 03/12] iscsi-target: Add TCM v4 compatiable ConfigFS control plane Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 04/12] iscsi-target: Add configfs fabric dependent statistics Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 05/12] iscsi-target: Add TPG and Device logic Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 06/12] iscsi-target: Add iSCSI Login Negotiation and Parameter logic Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 07/12] iscsi-target: Add CHAP Authentication support using libcrypto Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 08/12] iscsi-target: Add Sequence/PDU list + DataIN response logic Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 09/12] iscsi-target: Add iSCSI Error Recovery Hierarchy support Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:33 ` [RFC 10/12] iscsi-target: Add support for task management operations Nicholas A. Bellinger
2011-03-02  3:33   ` Nicholas A. Bellinger
2011-03-02  3:34 ` [RFC 11/12] iscsi-target: Add misc utility and debug logic Nicholas A. Bellinger
2011-03-02  3:34   ` Nicholas A. Bellinger
2011-03-02  3:34 ` [RFC 12/12] iscsi-target: Add Makefile/Kconfig and update TCM top level Nicholas A. Bellinger
2011-03-02  6:32   ` Randy Dunlap
2011-03-02 21:32     ` Nicholas A. Bellinger
2011-03-02 22:45       ` Randy Dunlap
2011-03-02 22:45         ` Randy Dunlap
2011-03-02 23:18         ` Nicholas A. Bellinger
2011-03-02 23:18           ` Nicholas A. Bellinger
2011-03-03 14:19       ` Christoph Hellwig
2011-03-03 20:58         ` Nicholas A. Bellinger
2011-03-04 17:00           ` James Bottomley
2011-03-07 23:15             ` Nicholas A. Bellinger
2011-03-08  9:33               ` Herbert Xu
2011-03-08  9:33                 ` Herbert Xu
2011-03-10  8:02                 ` Nicholas A. Bellinger
2011-03-10  8:02                   ` Nicholas A. Bellinger
