* [RFC 00/34] Chelsio iSCSI target offload driver
@ 2016-02-14 17:30 Varun Prakash
  2016-02-14 17:32 ` [RFC 01/34] cxgb4: add new ULD type CXGB4_ULD_ISCSIT Varun Prakash
                   ` (34 more replies)
  0 siblings, 35 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:30 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, dledford, swise, indranil, kxie, hariprasad, varun

This RFC series adds the Chelsio iSCSI target offload
driver (cxgbit.ko).

cxgbit.ko registers with the iSCSI target transport
and offloads several CPU-intensive tasks to
Chelsio T5 adapters.

The Chelsio T5 adapter series provides the following
offload features for iSCSI:
- TCP/IP offload
- iSCSI PDU recovery by reassembling TCP segments
- header and data digest offload
- iSCSI segmentation offload (ISO)
- Direct Data Placement (DDP)

Please review this series.

Thanks

Varun Prakash (34):
  cxgb4: add new ULD type CXGB4_ULD_ISCSIT
  cxgb4: allocate resources for CXGB4_ULD_ISCSIT
  cxgb4: large receive offload support
  cxgb4, iw_cxgb4: move definitions to common header file
  cxgb4, iw_cxgb4, cxgb4i: remove duplicate definitions
  cxgb4, cxgb4i: move struct cpl_rx_data_ddp definition
  cxgb4: add definitions for iSCSI target ULD
  cxgb4: update struct cxgb4_lld_info definition
  cxgb4: move VLAN_NONE macro definition
  cxgb4, iw_cxgb4: move delayed ack macro definitions
  cxgb4: add iSCSI DDP page pod manager
  cxgb4: update Kconfig and Makefile
  iscsi-target: add new transport type
  iscsi-target: export symbols
  iscsi-target: export symbols from iscsi_target.c
  iscsi-target: split iscsit_send_r2t()
  iscsi-target: split iscsit_send_conn_drop_async_message()
  iscsi-target: call complete on conn_logout_comp
  iscsi-target: clear tx_thread_active
  iscsi-target: update struct iscsit_transport definition
  iscsi-target: release transport driver resources
  iscsi-target: call Rx thread function
  iscsi-target: split iscsi_target_rx_thread()
  iscsi-target: validate conn operational parameters
  iscsi-target: move iscsit_thread_check_cpumask()
  iscsi-target: fix seq_end_offset calculation
  cxgbit: add cxgbit.h
  cxgbit: add cxgbit_lro.h
  cxgbit: add cxgbit_main.c
  cxgbit: add cxgbit_cm.c
  cxgbit: add cxgbit_target.c
  cxgbit: add cxgbit_ddp.c
  cxgbit: add Kconfig and Makefile
  iscsi-target: update Kconfig and Makefile

 drivers/infiniband/hw/cxgb4/t4fw_ri_api.h          |   99 -
 drivers/infiniband/ulp/isert/ib_isert.c            |   10 +
 drivers/net/ethernet/chelsio/Kconfig               |   11 +
 drivers/net/ethernet/chelsio/cxgb4/Makefile        |    1 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4.h         |   27 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c |   34 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    |   97 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c     |  464 +++++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h     |  310 +++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h     |   11 +
 drivers/net/ethernet/chelsio/cxgb4/l2t.c           |    2 -
 drivers/net/ethernet/chelsio/cxgb4/l2t.h           |    2 +
 drivers/net/ethernet/chelsio/cxgb4/sge.c           |   13 +-
 drivers/net/ethernet/chelsio/cxgb4/t4_msg.h        |  217 +++
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h      |    7 +
 drivers/scsi/cxgbi/cxgb4i/cxgb4i.h                 |   17 -
 drivers/target/iscsi/Kconfig                       |    2 +
 drivers/target/iscsi/Makefile                      |    1 +
 drivers/target/iscsi/cxgbit/Kconfig                |    7 +
 drivers/target/iscsi/cxgbit/Makefile               |    6 +
 drivers/target/iscsi/cxgbit/cxgbit.h               |  363 ++++
 drivers/target/iscsi/cxgbit/cxgbit_cm.c            | 1893 ++++++++++++++++++
 drivers/target/iscsi/cxgbit/cxgbit_ddp.c           |  374 ++++
 drivers/target/iscsi/cxgbit/cxgbit_lro.h           |   70 +
 drivers/target/iscsi/cxgbit/cxgbit_main.c          |  719 +++++++
 drivers/target/iscsi/cxgbit/cxgbit_target.c        | 2027 ++++++++++++++++++++
 drivers/target/iscsi/iscsi_target.c                |  184 +-
 drivers/target/iscsi/iscsi_target_configfs.c       |   79 +
 drivers/target/iscsi/iscsi_target_datain_values.c  |    3 +
 drivers/target/iscsi/iscsi_target_erl0.c           |    7 +-
 drivers/target/iscsi/iscsi_target_erl1.c           |    1 +
 drivers/target/iscsi/iscsi_target_login.c          |   13 +-
 drivers/target/iscsi/iscsi_target_nego.c           |    1 +
 drivers/target/iscsi/iscsi_target_parameters.c     |    1 +
 drivers/target/iscsi/iscsi_target_util.c           |    7 +
 include/target/iscsi/iscsi_target_core.h           |   27 +
 include/target/iscsi/iscsi_transport.h             |   52 +
 37 files changed, 6928 insertions(+), 231 deletions(-)
 create mode 100644 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c
 create mode 100644 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h
 create mode 100644 drivers/target/iscsi/cxgbit/Kconfig
 create mode 100644 drivers/target/iscsi/cxgbit/Makefile
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit.h
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_cm.c
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_ddp.c
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_lro.h
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_main.c
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_target.c

-- 
2.0.2


* [RFC 01/34] cxgb4: add new ULD type CXGB4_ULD_ISCSIT
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
@ 2016-02-14 17:32 ` Varun Prakash
  2016-02-14 17:32 ` [RFC 02/34] cxgb4: allocate resources for CXGB4_ULD_ISCSIT Varun Prakash
                   ` (33 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:32 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

The Chelsio iSCSI target offload driver
will register with the cxgb4 driver as a ULD of type
CXGB4_ULD_ISCSIT.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
index cf711d5..2f80e32 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
@@ -191,6 +191,7 @@ static inline void set_wr_txq(struct sk_buff *skb, int prio, int queue)
 enum cxgb4_uld {
 	CXGB4_ULD_RDMA,
 	CXGB4_ULD_ISCSI,
+	CXGB4_ULD_ISCSIT,
 	CXGB4_ULD_MAX
 };
 
-- 
2.0.2


* [RFC 02/34] cxgb4: allocate resources for CXGB4_ULD_ISCSIT
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
  2016-02-14 17:32 ` [RFC 01/34] cxgb4: add new ULD type CXGB4_ULD_ISCSIT Varun Prakash
@ 2016-02-14 17:32 ` Varun Prakash
  2016-02-14 17:32 ` [RFC 03/34] cxgb4: large receive offload support Varun Prakash
                   ` (32 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:32 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

Allocate Rx queues for non-T4 adapters and
dump their SGE queue info through debugfs.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4.h         | 11 ++++-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c | 34 +++++++++++++-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    | 53 ++++++++++++++++++++--
 drivers/net/ethernet/chelsio/cxgb4/sge.c           |  1 +
 4 files changed, 92 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index ec6e849..6206de9 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -404,6 +404,9 @@ enum {
 	MAX_CTRL_QUEUES = NCHAN,      /* # of control Tx queues */
 	MAX_RDMA_QUEUES = NCHAN,      /* # of streaming RDMA Rx queues */
 	MAX_RDMA_CIQS = 32,        /* # of  RDMA concentrator IQs */
+
+	/* # of streaming iSCSIT Rx queues */
+	MAX_ISCSIT_QUEUES = MAX_OFLD_QSETS,
 };
 
 enum {
@@ -420,8 +423,8 @@ enum {
 enum {
 	INGQ_EXTRAS = 2,        /* firmware event queue and */
 				/*   forwarded interrupts */
-	MAX_INGQ = MAX_ETH_QSETS + MAX_OFLD_QSETS + MAX_RDMA_QUEUES
-		   + MAX_RDMA_CIQS + INGQ_EXTRAS,
+	MAX_INGQ = MAX_ETH_QSETS + MAX_OFLD_QSETS + MAX_RDMA_QUEUES +
+		   MAX_RDMA_CIQS + MAX_ISCSIT_QUEUES + INGQ_EXTRAS,
 };
 
 struct adapter;
@@ -641,6 +644,7 @@ struct sge {
 
 	struct sge_eth_rxq ethrxq[MAX_ETH_QSETS];
 	struct sge_ofld_rxq iscsirxq[MAX_OFLD_QSETS];
+	struct sge_ofld_rxq iscsitrxq[MAX_ISCSIT_QUEUES];
 	struct sge_ofld_rxq rdmarxq[MAX_RDMA_QUEUES];
 	struct sge_ofld_rxq rdmaciq[MAX_RDMA_CIQS];
 	struct sge_rspq fw_evtq ____cacheline_aligned_in_smp;
@@ -652,9 +656,11 @@ struct sge {
 	u16 ethqsets;               /* # of active Ethernet queue sets */
 	u16 ethtxq_rover;           /* Tx queue to clean up next */
 	u16 iscsiqsets;              /* # of active iSCSI queue sets */
+	u16 niscsitq;               /* # of available iSCSIT Rx queues */
 	u16 rdmaqs;                 /* # of available RDMA Rx queues */
 	u16 rdmaciqs;               /* # of available RDMA concentrator IQs */
 	u16 iscsi_rxq[MAX_OFLD_QSETS];
+	u16 iscsit_rxq[MAX_ISCSIT_QUEUES];
 	u16 rdma_rxq[MAX_RDMA_QUEUES];
 	u16 rdma_ciq[MAX_RDMA_CIQS];
 	u16 timer_val[SGE_NTIMERS];
@@ -681,6 +687,7 @@ struct sge {
 
 #define for_each_ethrxq(sge, i) for (i = 0; i < (sge)->ethqsets; i++)
 #define for_each_iscsirxq(sge, i) for (i = 0; i < (sge)->iscsiqsets; i++)
+#define for_each_iscsitrxq(sge, i) for (i = 0; i < (sge)->niscsitq; i++)
 #define for_each_rdmarxq(sge, i) for (i = 0; i < (sge)->rdmaqs; i++)
 #define for_each_rdmaciq(sge, i) for (i = 0; i < (sge)->rdmaciqs; i++)
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
index e6a4072..0bb41e9 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
@@ -2334,12 +2334,14 @@ static int sge_qinfo_show(struct seq_file *seq, void *v)
 	struct adapter *adap = seq->private;
 	int eth_entries = DIV_ROUND_UP(adap->sge.ethqsets, 4);
 	int iscsi_entries = DIV_ROUND_UP(adap->sge.iscsiqsets, 4);
+	int iscsit_entries = DIV_ROUND_UP(adap->sge.niscsitq, 4);
 	int rdma_entries = DIV_ROUND_UP(adap->sge.rdmaqs, 4);
 	int ciq_entries = DIV_ROUND_UP(adap->sge.rdmaciqs, 4);
 	int ctrl_entries = DIV_ROUND_UP(MAX_CTRL_QUEUES, 4);
 	int i, r = (uintptr_t)v - 1;
 	int iscsi_idx = r - eth_entries;
-	int rdma_idx = iscsi_idx - iscsi_entries;
+	int iscsit_idx = iscsi_idx - iscsi_entries;
+	int rdma_idx = iscsit_idx - iscsit_entries;
 	int ciq_idx = rdma_idx - rdma_entries;
 	int ctrl_idx =  ciq_idx - ciq_entries;
 	int fq_idx =  ctrl_idx - ctrl_entries;
@@ -2453,6 +2455,35 @@ do { \
 		RL("FLLow:", fl.low);
 		RL("FLStarving:", fl.starving);
 
+	} else if (iscsit_idx < iscsit_entries) {
+		const struct sge_ofld_rxq *rx =
+			&adap->sge.iscsitrxq[iscsit_idx * 4];
+		int n = min(4, adap->sge.niscsitq - 4 * iscsit_idx);
+
+		S("QType:", "iSCSIT");
+		R("RspQ ID:", rspq.abs_id);
+		R("RspQ size:", rspq.size);
+		R("RspQE size:", rspq.iqe_len);
+		R("RspQ CIDX:", rspq.cidx);
+		R("RspQ Gen:", rspq.gen);
+		S3("u", "Intr delay:", qtimer_val(adap, &rx[i].rspq));
+		S3("u", "Intr pktcnt:",
+		   adap->sge.counter_val[rx[i].rspq.pktcnt_idx]);
+		R("FL ID:", fl.cntxt_id);
+		R("FL size:", fl.size - 8);
+		R("FL pend:", fl.pend_cred);
+		R("FL avail:", fl.avail);
+		R("FL PIDX:", fl.pidx);
+		R("FL CIDX:", fl.cidx);
+		RL("RxPackets:", stats.pkts);
+		RL("RxImmPkts:", stats.imm);
+		RL("RxNoMem:", stats.nomem);
+		RL("FLAllocErr:", fl.alloc_failed);
+		RL("FLLrgAlcErr:", fl.large_alloc_failed);
+		RL("FLMapErr:", fl.mapping_err);
+		RL("FLLow:", fl.low);
+		RL("FLStarving:", fl.starving);
+
 	} else if (rdma_idx < rdma_entries) {
 		const struct sge_ofld_rxq *rx =
 				&adap->sge.rdmarxq[rdma_idx * 4];
@@ -2543,6 +2574,7 @@ static int sge_queue_entries(const struct adapter *adap)
 {
 	return DIV_ROUND_UP(adap->sge.ethqsets, 4) +
 	       DIV_ROUND_UP(adap->sge.iscsiqsets, 4) +
+	       DIV_ROUND_UP(adap->sge.niscsitq, 4) +
 	       DIV_ROUND_UP(adap->sge.rdmaqs, 4) +
 	       DIV_ROUND_UP(adap->sge.rdmaciqs, 4) +
 	       DIV_ROUND_UP(MAX_CTRL_QUEUES, 4) + 1;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index b8a5fb0..d6cfa90 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -227,7 +227,7 @@ static DEFINE_MUTEX(uld_mutex);
 static LIST_HEAD(adap_rcu_list);
 static DEFINE_SPINLOCK(adap_rcu_lock);
 static struct cxgb4_uld_info ulds[CXGB4_ULD_MAX];
-static const char *uld_str[] = { "RDMA", "iSCSI" };
+static const char *const uld_str[] = { "RDMA", "iSCSI", "iSCSIT" };
 
 static void link_report(struct net_device *dev)
 {
@@ -730,6 +730,10 @@ static void name_msix_vecs(struct adapter *adap)
 		snprintf(adap->msix_info[msi_idx++].desc, n, "%s-iscsi%d",
 			 adap->port[0]->name, i);
 
+	for_each_iscsitrxq(&adap->sge, i)
+		snprintf(adap->msix_info[msi_idx++].desc, n, "%s-iSCSIT%d",
+			 adap->port[0]->name, i);
+
 	for_each_rdmarxq(&adap->sge, i)
 		snprintf(adap->msix_info[msi_idx++].desc, n, "%s-rdma%d",
 			 adap->port[0]->name, i);
@@ -743,6 +747,7 @@ static int request_msix_queue_irqs(struct adapter *adap)
 {
 	struct sge *s = &adap->sge;
 	int err, ethqidx, iscsiqidx = 0, rdmaqidx = 0, rdmaciqqidx = 0;
+	int iscsitqidx = 0;
 	int msi_index = 2;
 
 	err = request_irq(adap->msix_info[1].vec, t4_sge_intr_msix, 0,
@@ -768,6 +773,15 @@ static int request_msix_queue_irqs(struct adapter *adap)
 			goto unwind;
 		msi_index++;
 	}
+	for_each_iscsitrxq(s, iscsitqidx) {
+		err = request_irq(adap->msix_info[msi_index].vec,
+				  t4_sge_intr_msix, 0,
+				  adap->msix_info[msi_index].desc,
+				  &s->iscsitrxq[iscsitqidx].rspq);
+		if (err)
+			goto unwind;
+		msi_index++;
+	}
 	for_each_rdmarxq(s, rdmaqidx) {
 		err = request_irq(adap->msix_info[msi_index].vec,
 				  t4_sge_intr_msix, 0,
@@ -795,6 +809,9 @@ unwind:
 	while (--rdmaqidx >= 0)
 		free_irq(adap->msix_info[--msi_index].vec,
 			 &s->rdmarxq[rdmaqidx].rspq);
+	while (--iscsitqidx >= 0)
+		free_irq(adap->msix_info[--msi_index].vec,
+			 &s->iscsitrxq[iscsitqidx].rspq);
 	while (--iscsiqidx >= 0)
 		free_irq(adap->msix_info[--msi_index].vec,
 			 &s->iscsirxq[iscsiqidx].rspq);
@@ -816,6 +833,9 @@ static void free_msix_queue_irqs(struct adapter *adap)
 	for_each_iscsirxq(s, i)
 		free_irq(adap->msix_info[msi_index++].vec,
 			 &s->iscsirxq[i].rspq);
+	for_each_iscsitrxq(s, i)
+		free_irq(adap->msix_info[msi_index++].vec,
+			 &s->iscsitrxq[i].rspq);
 	for_each_rdmarxq(s, i)
 		free_irq(adap->msix_info[msi_index++].vec, &s->rdmarxq[i].rspq);
 	for_each_rdmaciq(s, i)
@@ -1072,6 +1092,7 @@ freeout:	t4_free_sge_resources(adap);
 } while (0)
 
 	ALLOC_OFLD_RXQS(s->iscsirxq, s->iscsiqsets, j, s->iscsi_rxq);
+	ALLOC_OFLD_RXQS(s->iscsitrxq, s->niscsitq, j, s->iscsit_rxq);
 	ALLOC_OFLD_RXQS(s->rdmarxq, s->rdmaqs, 1, s->rdma_rxq);
 	j = s->rdmaciqs / adap->params.nports; /* rdmaq queues per channel */
 	ALLOC_OFLD_RXQS(s->rdmaciq, s->rdmaciqs, j, s->rdma_ciq);
@@ -2406,6 +2427,9 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
 	} else if (uld == CXGB4_ULD_ISCSI) {
 		lli.rxq_ids = adap->sge.iscsi_rxq;
 		lli.nrxq = adap->sge.iscsiqsets;
+	} else if (uld == CXGB4_ULD_ISCSIT) {
+		lli.rxq_ids = adap->sge.iscsit_rxq;
+		lli.nrxq = adap->sge.niscsitq;
 	}
 	lli.ntxq = adap->sge.iscsiqsets;
 	lli.nchan = adap->params.nports;
@@ -4310,6 +4334,9 @@ static void cfg_queues(struct adapter *adap)
 		s->rdmaciqs = (s->rdmaciqs / adap->params.nports) *
 				adap->params.nports;
 		s->rdmaciqs = max_t(int, s->rdmaciqs, adap->params.nports);
+
+		if (!is_t4(adap->params.chip))
+			s->niscsitq = s->iscsiqsets;
 	}
 
 	for (i = 0; i < ARRAY_SIZE(s->ethrxq); i++) {
@@ -4336,6 +4363,16 @@ static void cfg_queues(struct adapter *adap)
 		r->fl.size = 72;
 	}
 
+	if (!is_t4(adap->params.chip)) {
+		for (i = 0; i < ARRAY_SIZE(s->iscsitrxq); i++) {
+			struct sge_ofld_rxq *r = &s->iscsitrxq[i];
+
+			init_rspq(adap, &r->rspq, 5, 1, 1024, 64);
+			r->rspq.uld = CXGB4_ULD_ISCSIT;
+			r->fl.size = 72;
+		}
+	}
+
 	for (i = 0; i < ARRAY_SIZE(s->rdmarxq); i++) {
 		struct sge_ofld_rxq *r = &s->rdmarxq[i];
 
@@ -4410,9 +4447,13 @@ static int enable_msix(struct adapter *adap)
 
 	want = s->max_ethqsets + EXTRA_VECS;
 	if (is_offload(adap)) {
-		want += s->rdmaqs + s->rdmaciqs + s->iscsiqsets;
+		want += s->rdmaqs + s->rdmaciqs + s->iscsiqsets	+
+			s->niscsitq;
 		/* need nchan for each possible ULD */
-		ofld_need = 3 * nchan;
+		if (is_t4(adap->params.chip))
+			ofld_need = 3 * nchan;
+		else
+			ofld_need = 4 * nchan;
 	}
 #ifdef CONFIG_CHELSIO_T4_DCB
 	/* For Data Center Bridging we need 8 Ethernet TX Priority Queues for
@@ -4444,12 +4485,16 @@ static int enable_msix(struct adapter *adap)
 		if (allocated < want) {
 			s->rdmaqs = nchan;
 			s->rdmaciqs = nchan;
+
+			if (!is_t4(adap->params.chip))
+				s->niscsitq = nchan;
 		}
 
 		/* leftovers go to OFLD */
 		i = allocated - EXTRA_VECS - s->max_ethqsets -
-		    s->rdmaqs - s->rdmaciqs;
+		    s->rdmaqs - s->rdmaciqs - s->niscsitq;
 		s->iscsiqsets = (i / nchan) * nchan;  /* round down */
+
 	}
 	for (i = 0; i < allocated; ++i)
 		adap->msix_info[i].vec = entries[i].vector;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index b4eb468..b3a31af 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -2982,6 +2982,7 @@ void t4_free_sge_resources(struct adapter *adap)
 
 	/* clean up RDMA and iSCSI Rx queues */
 	t4_free_ofld_rxqs(adap, adap->sge.iscsiqsets, adap->sge.iscsirxq);
+	t4_free_ofld_rxqs(adap, adap->sge.niscsitq, adap->sge.iscsitrxq);
 	t4_free_ofld_rxqs(adap, adap->sge.rdmaqs, adap->sge.rdmarxq);
 	t4_free_ofld_rxqs(adap, adap->sge.rdmaciqs, adap->sge.rdmaciq);
 
-- 
2.0.2



* [RFC 03/34] cxgb4: large receive offload support
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
  2016-02-14 17:32 ` [RFC 01/34] cxgb4: add new ULD type CXGB4_ULD_ISCSIT Varun Prakash
  2016-02-14 17:32 ` [RFC 02/34] cxgb4: allocate resources for CXGB4_ULD_ISCSIT Varun Prakash
@ 2016-02-14 17:32 ` Varun Prakash
  2016-02-14 17:34 ` [RFC 04/34] cxgb4, iw_cxgb4: move definitions to common header file Varun Prakash
                   ` (31 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:32 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

Add large receive offload (LRO) support
for upper-layer drivers.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4.h      | 14 ++++++++-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 42 ++++++++++++++++++-------
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h  |  6 ++++
 drivers/net/ethernet/chelsio/cxgb4/sge.c        | 12 +++++--
 4 files changed, 60 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index 6206de9..92086a0 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -511,6 +511,15 @@ struct pkt_gl {
 
 typedef int (*rspq_handler_t)(struct sge_rspq *q, const __be64 *rsp,
 			      const struct pkt_gl *gl);
+typedef void (*rspq_flush_handler_t)(struct sge_rspq *q);
+/* LRO related declarations for ULD */
+struct t4_lro_mgr {
+#define MAX_LRO_SESSIONS		64
+	u8 lro_session_cnt;         /* # of sessions to aggregate */
+	unsigned long lro_pkts;     /* # of LRO super packets */
+	unsigned long lro_merged;   /* # of wire packets merged by LRO */
+	struct sk_buff_head lroq;   /* list of aggregated sessions */
+};
 
 struct sge_rspq {                   /* state for an SGE response queue */
 	struct napi_struct napi;
@@ -535,6 +544,8 @@ struct sge_rspq {                   /* state for an SGE response queue */
 	struct adapter *adap;
 	struct net_device *netdev;  /* associated net device */
 	rspq_handler_t handler;
+	rspq_flush_handler_t flush_handler;
+	struct t4_lro_mgr lro_mgr;
 #ifdef CONFIG_NET_RX_BUSY_POLL
 #define CXGB_POLL_STATE_IDLE		0
 #define CXGB_POLL_STATE_NAPI		BIT(0) /* NAPI owns this poll */
@@ -1114,7 +1125,8 @@ int t4_mgmt_tx(struct adapter *adap, struct sk_buff *skb);
 int t4_ofld_send(struct adapter *adap, struct sk_buff *skb);
 int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
 		     struct net_device *dev, int intr_idx,
-		     struct sge_fl *fl, rspq_handler_t hnd, int cong);
+		     struct sge_fl *fl, rspq_handler_t hnd,
+		     rspq_flush_handler_t flush_handler, int cong);
 int t4_sge_alloc_eth_txq(struct adapter *adap, struct sge_eth_txq *txq,
 			 struct net_device *dev, struct netdev_queue *netdevq,
 			 unsigned int iqid);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index d6cfa90..050f215 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -640,6 +640,13 @@ out:
 	return 0;
 }
 
+/* Flush the aggregated lro sessions */
+static void uldrx_flush_handler(struct sge_rspq *q)
+{
+	if (ulds[q->uld].lro_flush)
+		ulds[q->uld].lro_flush(&q->lro_mgr);
+}
+
 /**
  *	uldrx_handler - response queue handler for ULD queues
  *	@q: the response queue that received the packet
@@ -653,6 +660,7 @@ static int uldrx_handler(struct sge_rspq *q, const __be64 *rsp,
 			 const struct pkt_gl *gl)
 {
 	struct sge_ofld_rxq *rxq = container_of(q, struct sge_ofld_rxq, rspq);
+	int ret;
 
 	/* FW can send CPLs encapsulated in a CPL_FW4_MSG.
 	 */
@@ -660,10 +668,19 @@ static int uldrx_handler(struct sge_rspq *q, const __be64 *rsp,
 	    ((const struct cpl_fw4_msg *)(rsp + 1))->type == FW_TYPE_RSSCPL)
 		rsp += 2;
 
-	if (ulds[q->uld].rx_handler(q->adap->uld_handle[q->uld], rsp, gl)) {
+	if (q->flush_handler)
+		ret = ulds[q->uld].lro_rx_handler(q->adap->uld_handle[q->uld],
+						  rsp, gl, &q->lro_mgr,
+						  &q->napi);
+	else
+		ret = ulds[q->uld].rx_handler(q->adap->uld_handle[q->uld],
+					      rsp, gl);
+
+	if (ret) {
 		rxq->stats.nomem++;
 		return -1;
 	}
+
 	if (gl == NULL)
 		rxq->stats.imm++;
 	else if (gl == CXGB4_MSG_AN)
@@ -980,7 +997,7 @@ static void enable_rx(struct adapter *adap)
 
 static int alloc_ofld_rxqs(struct adapter *adap, struct sge_ofld_rxq *q,
 			   unsigned int nq, unsigned int per_chan, int msi_idx,
-			   u16 *ids)
+			   u16 *ids, bool lro)
 {
 	int i, err;
 
@@ -990,7 +1007,9 @@ static int alloc_ofld_rxqs(struct adapter *adap, struct sge_ofld_rxq *q,
 		err = t4_sge_alloc_rxq(adap, &q->rspq, false,
 				       adap->port[i / per_chan],
 				       msi_idx, q->fl.size ? &q->fl : NULL,
-				       uldrx_handler, 0);
+				       uldrx_handler,
+				       lro ? uldrx_flush_handler : NULL,
+				       0);
 		if (err)
 			return err;
 		memset(&q->stats, 0, sizeof(q->stats));
@@ -1020,7 +1039,7 @@ static int setup_sge_queues(struct adapter *adap)
 		msi_idx = 1;         /* vector 0 is for non-queue interrupts */
 	else {
 		err = t4_sge_alloc_rxq(adap, &s->intrq, false, adap->port[0], 0,
-				       NULL, NULL, -1);
+				       NULL, NULL, NULL, -1);
 		if (err)
 			return err;
 		msi_idx = -((int)s->intrq.abs_id + 1);
@@ -1040,7 +1059,7 @@ static int setup_sge_queues(struct adapter *adap)
 	 *    new/deleted queues.
 	 */
 	err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0],
-			       msi_idx, NULL, fwevtq_handler, -1);
+			       msi_idx, NULL, fwevtq_handler, NULL, -1);
 	if (err) {
 freeout:	t4_free_sge_resources(adap);
 		return err;
@@ -1058,6 +1077,7 @@ freeout:	t4_free_sge_resources(adap);
 			err = t4_sge_alloc_rxq(adap, &q->rspq, false, dev,
 					       msi_idx, &q->fl,
 					       t4_ethrx_handler,
+					       NULL,
 					       t4_get_mps_bg_map(adap,
 								 pi->tx_chan));
 			if (err)
@@ -1083,19 +1103,19 @@ freeout:	t4_free_sge_resources(adap);
 			goto freeout;
 	}
 
-#define ALLOC_OFLD_RXQS(firstq, nq, per_chan, ids) do { \
-	err = alloc_ofld_rxqs(adap, firstq, nq, per_chan, msi_idx, ids); \
+#define ALLOC_OFLD_RXQS(firstq, nq, per_chan, ids, lro) do { \
+	err = alloc_ofld_rxqs(adap, firstq, nq, per_chan, msi_idx, ids, lro); \
 	if (err) \
 		goto freeout; \
 	if (msi_idx > 0) \
 		msi_idx += nq; \
 } while (0)
 
-	ALLOC_OFLD_RXQS(s->iscsirxq, s->iscsiqsets, j, s->iscsi_rxq);
-	ALLOC_OFLD_RXQS(s->iscsitrxq, s->niscsitq, j, s->iscsit_rxq);
-	ALLOC_OFLD_RXQS(s->rdmarxq, s->rdmaqs, 1, s->rdma_rxq);
+	ALLOC_OFLD_RXQS(s->iscsirxq, s->iscsiqsets, j, s->iscsi_rxq, false);
+	ALLOC_OFLD_RXQS(s->iscsitrxq, s->niscsitq, j, s->iscsit_rxq, true);
+	ALLOC_OFLD_RXQS(s->rdmarxq, s->rdmaqs, 1, s->rdma_rxq, false);
 	j = s->rdmaciqs / adap->params.nports; /* rdmaq queues per channel */
-	ALLOC_OFLD_RXQS(s->rdmaciq, s->rdmaciqs, j, s->rdma_ciq);
+	ALLOC_OFLD_RXQS(s->rdmaciq, s->rdmaciqs, j, s->rdma_ciq, false);
 
 #undef ALLOC_OFLD_RXQS
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
index 2f80e32..d97a81f 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
@@ -213,6 +213,7 @@ struct l2t_data;
 struct net_device;
 struct pkt_gl;
 struct tp_tcp_stats;
+struct t4_lro_mgr;
 
 struct cxgb4_range {
 	unsigned int start;
@@ -284,6 +285,11 @@ struct cxgb4_uld_info {
 			  const struct pkt_gl *gl);
 	int (*state_change)(void *handle, enum cxgb4_state new_state);
 	int (*control)(void *handle, enum cxgb4_control control, ...);
+	int (*lro_rx_handler)(void *handle, const __be64 *rsp,
+			      const struct pkt_gl *gl,
+			      struct t4_lro_mgr *lro_mgr,
+			      struct napi_struct *napi);
+	void (*lro_flush)(struct t4_lro_mgr *);
 };
 
 int cxgb4_register_uld(enum cxgb4_uld type, const struct cxgb4_uld_info *p);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index b3a31af..3ba4b0c 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -2157,8 +2157,11 @@ static int process_responses(struct sge_rspq *q, int budget)
 
 	while (likely(budget_left)) {
 		rc = (void *)q->cur_desc + (q->iqe_len - sizeof(*rc));
-		if (!is_new_response(rc, q))
+		if (!is_new_response(rc, q)) {
+			if (q->flush_handler)
+				q->flush_handler(q);
 			break;
+		}
 
 		dma_rmb();
 		rsp_type = RSPD_TYPE_G(rc->type_gen);
@@ -2544,7 +2547,8 @@ static void __iomem *bar2_address(struct adapter *adapter,
  */
 int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
 		     struct net_device *dev, int intr_idx,
-		     struct sge_fl *fl, rspq_handler_t hnd, int cong)
+		     struct sge_fl *fl, rspq_handler_t hnd,
+		     rspq_flush_handler_t flush_hnd, int cong)
 {
 	int ret, flsz = 0;
 	struct fw_iq_cmd c;
@@ -2638,6 +2642,10 @@ int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
 	iq->size--;                           /* subtract status entry */
 	iq->netdev = dev;
 	iq->handler = hnd;
+	iq->flush_handler = flush_hnd;
+
+	memset(&iq->lro_mgr, 0, sizeof(struct t4_lro_mgr));
+	skb_queue_head_init(&iq->lro_mgr.lroq);
 
 	/* set offset to -1 to distinguish ingress queues without FL */
 	iq->offset = fl ? 0 : -1;
-- 
2.0.2


* [RFC 04/34] cxgb4, iw_cxgb4: move definitions to common header file
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (2 preceding siblings ...)
  2016-02-14 17:32 ` [RFC 03/34] cxgb4: large receive offload support Varun Prakash
@ 2016-02-14 17:34 ` Varun Prakash
  2016-02-14 17:34 ` [RFC 05/34] cxgb4, iw_cxgb4, cxgb4i: remove duplicate definitions Varun Prakash
                   ` (30 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:34 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, dledford, swise, indranil, kxie, hariprasad, varun

Move struct tcp_options, struct cpl_pass_accept_req,
the enum defining TCP congestion control algorithms,
and the associated macros to the common header file t4_msg.h.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/infiniband/hw/cxgb4/t4fw_ri_api.h   | 81 -----------------------------
 drivers/net/ethernet/chelsio/cxgb4/t4_msg.h | 81 +++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+), 81 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h b/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
index 343e8daf..5b66da3 100644
--- a/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
+++ b/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
@@ -753,71 +753,6 @@ struct fw_ri_wr {
 #define FW_RI_WR_P2PTYPE_G(x)	\
 	(((x) >> FW_RI_WR_P2PTYPE_S) & FW_RI_WR_P2PTYPE_M)
 
-struct tcp_options {
-	__be16 mss;
-	__u8 wsf;
-#if defined(__LITTLE_ENDIAN_BITFIELD)
-	__u8:4;
-	__u8 unknown:1;
-	__u8:1;
-	__u8 sack:1;
-	__u8 tstamp:1;
-#else
-	__u8 tstamp:1;
-	__u8 sack:1;
-	__u8:1;
-	__u8 unknown:1;
-	__u8:4;
-#endif
-};
-
-struct cpl_pass_accept_req {
-	union opcode_tid ot;
-	__be16 rsvd;
-	__be16 len;
-	__be32 hdr_len;
-	__be16 vlan;
-	__be16 l2info;
-	__be32 tos_stid;
-	struct tcp_options tcpopt;
-};
-
-/* cpl_pass_accept_req.hdr_len fields */
-#define SYN_RX_CHAN_S    0
-#define SYN_RX_CHAN_M    0xF
-#define SYN_RX_CHAN_V(x) ((x) << SYN_RX_CHAN_S)
-#define SYN_RX_CHAN_G(x) (((x) >> SYN_RX_CHAN_S) & SYN_RX_CHAN_M)
-
-#define TCP_HDR_LEN_S    10
-#define TCP_HDR_LEN_M    0x3F
-#define TCP_HDR_LEN_V(x) ((x) << TCP_HDR_LEN_S)
-#define TCP_HDR_LEN_G(x) (((x) >> TCP_HDR_LEN_S) & TCP_HDR_LEN_M)
-
-#define IP_HDR_LEN_S    16
-#define IP_HDR_LEN_M    0x3FF
-#define IP_HDR_LEN_V(x) ((x) << IP_HDR_LEN_S)
-#define IP_HDR_LEN_G(x) (((x) >> IP_HDR_LEN_S) & IP_HDR_LEN_M)
-
-#define ETH_HDR_LEN_S    26
-#define ETH_HDR_LEN_M    0x1F
-#define ETH_HDR_LEN_V(x) ((x) << ETH_HDR_LEN_S)
-#define ETH_HDR_LEN_G(x) (((x) >> ETH_HDR_LEN_S) & ETH_HDR_LEN_M)
-
-/* cpl_pass_accept_req.l2info fields */
-#define SYN_MAC_IDX_S    0
-#define SYN_MAC_IDX_M    0x1FF
-#define SYN_MAC_IDX_V(x) ((x) << SYN_MAC_IDX_S)
-#define SYN_MAC_IDX_G(x) (((x) >> SYN_MAC_IDX_S) & SYN_MAC_IDX_M)
-
-#define SYN_XACT_MATCH_S    9
-#define SYN_XACT_MATCH_V(x) ((x) << SYN_XACT_MATCH_S)
-#define SYN_XACT_MATCH_F    SYN_XACT_MATCH_V(1U)
-
-#define SYN_INTF_S    12
-#define SYN_INTF_M    0xF
-#define SYN_INTF_V(x) ((x) << SYN_INTF_S)
-#define SYN_INTF_G(x) (((x) >> SYN_INTF_S) & SYN_INTF_M)
-
 struct ulptx_idata {
 	__be32 cmd_more;
 	__be32 len;
@@ -836,20 +771,4 @@ struct ulptx_idata {
 #define RX_DACK_CHANGE_V(x) ((x) << RX_DACK_CHANGE_S)
 #define RX_DACK_CHANGE_F    RX_DACK_CHANGE_V(1U)
 
-enum {                     /* TCP congestion control algorithms */
-	CONG_ALG_RENO,
-	CONG_ALG_TAHOE,
-	CONG_ALG_NEWRENO,
-	CONG_ALG_HIGHSPEED
-};
-
-#define CONG_CNTRL_S    14
-#define CONG_CNTRL_M    0x3
-#define CONG_CNTRL_V(x) ((x) << CONG_CNTRL_S)
-#define CONG_CNTRL_G(x) (((x) >> CONG_CNTRL_S) & CONG_CNTRL_M)
-
-#define T5_ISS_S    18
-#define T5_ISS_V(x) ((x) << T5_ISS_S)
-#define T5_ISS_F    T5_ISS_V(1U)
-
 #endif /* _T4FW_RI_API_H_ */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
index a072d34..5753bd4 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
@@ -344,6 +344,87 @@ struct cpl_pass_open_rpl {
 	u8 status;
 };
 
+struct tcp_options {
+	__be16 mss;
+	__u8 wsf;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+	__u8:4;
+	__u8 unknown:1;
+	__u8:1;
+	__u8 sack:1;
+	__u8 tstamp:1;
+#else
+	__u8 tstamp:1;
+	__u8 sack:1;
+	__u8:1;
+	__u8 unknown:1;
+	__u8:4;
+#endif
+};
+
+struct cpl_pass_accept_req {
+	union opcode_tid ot;
+	__be16 rsvd;
+	__be16 len;
+	__be32 hdr_len;
+	__be16 vlan;
+	__be16 l2info;
+	__be32 tos_stid;
+	struct tcp_options tcpopt;
+};
+
+/* cpl_pass_accept_req.hdr_len fields */
+#define SYN_RX_CHAN_S    0
+#define SYN_RX_CHAN_M    0xF
+#define SYN_RX_CHAN_V(x) ((x) << SYN_RX_CHAN_S)
+#define SYN_RX_CHAN_G(x) (((x) >> SYN_RX_CHAN_S) & SYN_RX_CHAN_M)
+
+#define TCP_HDR_LEN_S    10
+#define TCP_HDR_LEN_M    0x3F
+#define TCP_HDR_LEN_V(x) ((x) << TCP_HDR_LEN_S)
+#define TCP_HDR_LEN_G(x) (((x) >> TCP_HDR_LEN_S) & TCP_HDR_LEN_M)
+
+#define IP_HDR_LEN_S    16
+#define IP_HDR_LEN_M    0x3FF
+#define IP_HDR_LEN_V(x) ((x) << IP_HDR_LEN_S)
+#define IP_HDR_LEN_G(x) (((x) >> IP_HDR_LEN_S) & IP_HDR_LEN_M)
+
+#define ETH_HDR_LEN_S    26
+#define ETH_HDR_LEN_M    0x1F
+#define ETH_HDR_LEN_V(x) ((x) << ETH_HDR_LEN_S)
+#define ETH_HDR_LEN_G(x) (((x) >> ETH_HDR_LEN_S) & ETH_HDR_LEN_M)
+
+/* cpl_pass_accept_req.l2info fields */
+#define SYN_MAC_IDX_S    0
+#define SYN_MAC_IDX_M    0x1FF
+#define SYN_MAC_IDX_V(x) ((x) << SYN_MAC_IDX_S)
+#define SYN_MAC_IDX_G(x) (((x) >> SYN_MAC_IDX_S) & SYN_MAC_IDX_M)
+
+#define SYN_XACT_MATCH_S    9
+#define SYN_XACT_MATCH_V(x) ((x) << SYN_XACT_MATCH_S)
+#define SYN_XACT_MATCH_F    SYN_XACT_MATCH_V(1U)
+
+#define SYN_INTF_S    12
+#define SYN_INTF_M    0xF
+#define SYN_INTF_V(x) ((x) << SYN_INTF_S)
+#define SYN_INTF_G(x) (((x) >> SYN_INTF_S) & SYN_INTF_M)
+
+enum {                     /* TCP congestion control algorithms */
+	CONG_ALG_RENO,
+	CONG_ALG_TAHOE,
+	CONG_ALG_NEWRENO,
+	CONG_ALG_HIGHSPEED
+};
+
+#define CONG_CNTRL_S    14
+#define CONG_CNTRL_M    0x3
+#define CONG_CNTRL_V(x) ((x) << CONG_CNTRL_S)
+#define CONG_CNTRL_G(x) (((x) >> CONG_CNTRL_S) & CONG_CNTRL_M)
+
+#define T5_ISS_S    18
+#define T5_ISS_V(x) ((x) << T5_ISS_S)
+#define T5_ISS_F    T5_ISS_V(1U)
+
 struct cpl_pass_accept_rpl {
 	WR_HDR;
 	union opcode_tid ot;
-- 
2.0.2

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [RFC 05/34] cxgb4, iw_cxgb4, cxgb4i: remove duplicate definitions
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (3 preceding siblings ...)
  2016-02-14 17:34 ` [RFC 04/34] cxgb4, iw_cxgb4: move definitions to common header file Varun Prakash
@ 2016-02-14 17:34 ` Varun Prakash
  2016-02-14 17:37 ` [RFC 06/34] cxgb4, cxgb4i: move struct cpl_rx_data_ddp definition Varun Prakash
                   ` (29 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:34 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, dledford, swise, indranil, kxie, hariprasad, varun

move the struct ulptx_idata definition to the
common header file t4_msg.h.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/infiniband/hw/cxgb4/t4fw_ri_api.h   | 9 ---------
 drivers/net/ethernet/chelsio/cxgb4/t4_msg.h | 5 +++++
 drivers/scsi/cxgbi/cxgb4i/cxgb4i.h          | 5 -----
 3 files changed, 5 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h b/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
index 5b66da3..5f47e03 100644
--- a/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
+++ b/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
@@ -753,15 +753,6 @@ struct fw_ri_wr {
 #define FW_RI_WR_P2PTYPE_G(x)	\
 	(((x) >> FW_RI_WR_P2PTYPE_S) & FW_RI_WR_P2PTYPE_M)
 
-struct ulptx_idata {
-	__be32 cmd_more;
-	__be32 len;
-};
-
-#define ULPTX_NSGE_S    0
-#define ULPTX_NSGE_M    0xFFFF
-#define ULPTX_NSGE_V(x) ((x) << ULPTX_NSGE_S)
-
 #define RX_DACK_MODE_S    29
 #define RX_DACK_MODE_M    0x3
 #define RX_DACK_MODE_V(x) ((x) << RX_DACK_MODE_S)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
index 5753bd4..8cfc6a8 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
@@ -1222,6 +1222,11 @@ struct ulptx_sgl {
 	struct ulptx_sge_pair sge[0];
 };
 
+struct ulptx_idata {
+	__be32 cmd_more;
+	__be32 len;
+};
+
 #define ULPTX_NSGE_S    0
 #define ULPTX_NSGE_V(x) ((x) << ULPTX_NSGE_S)
 
diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h
index 22dd8d6..e5f8f65 100644
--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h
+++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h
@@ -25,11 +25,6 @@
 
 #define T5_ISS_VALID		(1 << 18)
 
-struct ulptx_idata {
-	__be32 cmd_more;
-	__be32 len;
-};
-
 struct cpl_rx_data_ddp {
 	union opcode_tid ot;
 	__be16 urg;
-- 
2.0.2


* [RFC 06/34] cxgb4, cxgb4i: move struct cpl_rx_data_ddp definition
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (4 preceding siblings ...)
  2016-02-14 17:34 ` [RFC 05/34] cxgb4, iw_cxgb4, cxgb4i: remove duplicate definitions Varun Prakash
@ 2016-02-14 17:37 ` Varun Prakash
  2016-02-14 17:37 ` [RFC 07/34] cxgb4: add definitions for iSCSI target ULD Varun Prakash
                   ` (28 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:37 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

move the struct cpl_rx_data_ddp definition to the
common header file t4_msg.h.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/t4_msg.h | 15 +++++++++++++++
 drivers/scsi/cxgbi/cxgb4i/cxgb4i.h          | 12 ------------
 2 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
index 8cfc6a8..7279245 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
@@ -899,6 +899,21 @@ struct cpl_iscsi_hdr {
 #define ISCSI_DDP_V(x) ((x) << ISCSI_DDP_S)
 #define ISCSI_DDP_F    ISCSI_DDP_V(1U)
 
+struct cpl_rx_data_ddp {
+	union opcode_tid ot;
+	__be16 urg;
+	__be16 len;
+	__be32 seq;
+	union {
+		__be32 nxt_seq;
+		__be32 ddp_report;
+	};
+	__be32 ulp_crc;
+	__be32 ddpvld;
+};
+
+#define cpl_rx_iscsi_ddp cpl_rx_data_ddp
+
 struct cpl_rx_data {
 	union opcode_tid ot;
 	__be16 rsvd;
diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h
index e5f8f65..2fd9c76 100644
--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h
+++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.h
@@ -25,16 +25,4 @@
 
 #define T5_ISS_VALID		(1 << 18)
 
-struct cpl_rx_data_ddp {
-	union opcode_tid ot;
-	__be16 urg;
-	__be16 len;
-	__be32 seq;
-	union {
-		__be32 nxt_seq;
-		__be32 ddp_report;
-	};
-	__be32 ulp_crc;
-	__be32 ddpvld;
-};
 #endif	/* __CXGB4I_H__ */
-- 
2.0.2


* [RFC 07/34] cxgb4: add definitions for iSCSI target ULD
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (5 preceding siblings ...)
  2016-02-14 17:37 ` [RFC 06/34] cxgb4, cxgb4i: move struct cpl_rx_data_ddp definition Varun Prakash
@ 2016-02-14 17:37 ` Varun Prakash
  2016-02-14 17:37 ` [RFC 08/34] cxgb4: update struct cxgb4_lld_info definition Varun Prakash
                   ` (27 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:37 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

add structure, macro, and constant definitions
for iSCSI Tx and Rx.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/t4_msg.h   | 107 ++++++++++++++++++++++++++
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |   7 ++
 2 files changed, 114 insertions(+)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
index 7279245..03da7a7 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
@@ -51,6 +51,7 @@ enum {
 	CPL_TX_PKT            = 0xE,
 	CPL_L2T_WRITE_REQ     = 0x12,
 	CPL_TID_RELEASE       = 0x1A,
+	CPL_TX_DATA_ISO	      = 0x1F,
 
 	CPL_CLOSE_LISTSRV_RPL = 0x20,
 	CPL_L2T_WRITE_RPL     = 0x23,
@@ -914,6 +915,95 @@ struct cpl_rx_data_ddp {
 
 #define cpl_rx_iscsi_ddp cpl_rx_data_ddp
 
+struct cpl_iscsi_data {
+	union opcode_tid ot;
+	__u8 rsvd0[2];
+	__be16 len;
+	__be32 seq;
+	__be16 urg;
+	__u8 rsvd1;
+	__u8 status;
+};
+
+struct cpl_tx_data_iso {
+	__be32 op_to_scsi;
+	__u8   reserved1;
+	__u8   ahs_len;
+	__be16 mpdu;
+	__be32 burst_size;
+	__be32 len;
+	__be32 reserved2_seglen_offset;
+	__be32 datasn_offset;
+	__be32 buffer_offset;
+	__be32 reserved3;
+
+	/* encapsulated CPL_TX_DATA follows here */
+};
+
+/* cpl_tx_data_iso.op_to_scsi fields */
+#define CPL_TX_DATA_ISO_OP_S	24
+#define CPL_TX_DATA_ISO_OP_M	0xff
+#define CPL_TX_DATA_ISO_OP_V(x)	((x) << CPL_TX_DATA_ISO_OP_S)
+#define CPL_TX_DATA_ISO_OP_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_OP_S) & CPL_TX_DATA_ISO_OP_M)
+
+#define CPL_TX_DATA_ISO_FIRST_S		23
+#define CPL_TX_DATA_ISO_FIRST_M		0x1
+#define CPL_TX_DATA_ISO_FIRST_V(x)	((x) << CPL_TX_DATA_ISO_FIRST_S)
+#define CPL_TX_DATA_ISO_FIRST_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_FIRST_S) & CPL_TX_DATA_ISO_FIRST_M)
+#define CPL_TX_DATA_ISO_FIRST_F	CPL_TX_DATA_ISO_FIRST_V(1U)
+
+#define CPL_TX_DATA_ISO_LAST_S		22
+#define CPL_TX_DATA_ISO_LAST_M		0x1
+#define CPL_TX_DATA_ISO_LAST_V(x)	((x) << CPL_TX_DATA_ISO_LAST_S)
+#define CPL_TX_DATA_ISO_LAST_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_LAST_S) & CPL_TX_DATA_ISO_LAST_M)
+#define CPL_TX_DATA_ISO_LAST_F	CPL_TX_DATA_ISO_LAST_V(1U)
+
+#define CPL_TX_DATA_ISO_CPLHDRLEN_S	21
+#define CPL_TX_DATA_ISO_CPLHDRLEN_M	0x1
+#define CPL_TX_DATA_ISO_CPLHDRLEN_V(x)	((x) << CPL_TX_DATA_ISO_CPLHDRLEN_S)
+#define CPL_TX_DATA_ISO_CPLHDRLEN_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_CPLHDRLEN_S) & CPL_TX_DATA_ISO_CPLHDRLEN_M)
+#define CPL_TX_DATA_ISO_CPLHDRLEN_F	CPL_TX_DATA_ISO_CPLHDRLEN_V(1U)
+
+#define CPL_TX_DATA_ISO_HDRCRC_S	20
+#define CPL_TX_DATA_ISO_HDRCRC_M	0x1
+#define CPL_TX_DATA_ISO_HDRCRC_V(x)	((x) << CPL_TX_DATA_ISO_HDRCRC_S)
+#define CPL_TX_DATA_ISO_HDRCRC_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_HDRCRC_S) & CPL_TX_DATA_ISO_HDRCRC_M)
+#define CPL_TX_DATA_ISO_HDRCRC_F	CPL_TX_DATA_ISO_HDRCRC_V(1U)
+
+#define CPL_TX_DATA_ISO_PLDCRC_S	19
+#define CPL_TX_DATA_ISO_PLDCRC_M	0x1
+#define CPL_TX_DATA_ISO_PLDCRC_V(x)	((x) << CPL_TX_DATA_ISO_PLDCRC_S)
+#define CPL_TX_DATA_ISO_PLDCRC_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_PLDCRC_S) & CPL_TX_DATA_ISO_PLDCRC_M)
+#define CPL_TX_DATA_ISO_PLDCRC_F	CPL_TX_DATA_ISO_PLDCRC_V(1U)
+
+#define CPL_TX_DATA_ISO_IMMEDIATE_S	18
+#define CPL_TX_DATA_ISO_IMMEDIATE_M	0x1
+#define CPL_TX_DATA_ISO_IMMEDIATE_V(x)	((x) << CPL_TX_DATA_ISO_IMMEDIATE_S)
+#define CPL_TX_DATA_ISO_IMMEDIATE_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_IMMEDIATE_S) & CPL_TX_DATA_ISO_IMMEDIATE_M)
+#define CPL_TX_DATA_ISO_IMMEDIATE_F	CPL_TX_DATA_ISO_IMMEDIATE_V(1U)
+
+#define CPL_TX_DATA_ISO_SCSI_S		16
+#define CPL_TX_DATA_ISO_SCSI_M		0x3
+#define CPL_TX_DATA_ISO_SCSI_V(x)	((x) << CPL_TX_DATA_ISO_SCSI_S)
+#define CPL_TX_DATA_ISO_SCSI_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_SCSI_S) & CPL_TX_DATA_ISO_SCSI_M)
+
+/* cpl_tx_data_iso.reserved2_seglen_offset fields */
+#define CPL_TX_DATA_ISO_SEGLEN_OFFSET_S		0
+#define CPL_TX_DATA_ISO_SEGLEN_OFFSET_M		0xffffff
+#define CPL_TX_DATA_ISO_SEGLEN_OFFSET_V(x)	\
+	((x) << CPL_TX_DATA_ISO_SEGLEN_OFFSET_S)
+#define CPL_TX_DATA_ISO_SEGLEN_OFFSET_G(x)	\
+	(((x) >> CPL_TX_DATA_ISO_SEGLEN_OFFSET_S) & \
+	 CPL_TX_DATA_ISO_SEGLEN_OFFSET_M)
+
 struct cpl_rx_data {
 	union opcode_tid ot;
 	__be16 rsvd;
@@ -1184,6 +1274,12 @@ struct cpl_fw4_ack {
 	__be64 rsvd1;
 };
 
+enum {
+	CPL_FW4_ACK_FLAGS_SEQVAL	= 0x1,	/* seqn valid */
+	CPL_FW4_ACK_FLAGS_CH		= 0x2,	/* channel change complete */
+	CPL_FW4_ACK_FLAGS_FLOWC		= 0x4,	/* fw_flowc_wr complete */
+};
+
 struct cpl_fw6_msg {
 	u8 opcode;
 	u8 type;
@@ -1209,6 +1305,17 @@ struct cpl_fw6_msg_ofld_connection_wr_rpl {
 	__u8    rsvd[2];
 };
 
+struct cpl_tx_data {
+	union opcode_tid ot;
+	__be32 len;
+	__be32 rsvd;
+	__be32 flags;
+};
+
+/* cpl_tx_data.flags field */
+#define TX_FORCE_S	13
+#define TX_FORCE_V(x)	((x) << TX_FORCE_S)
+
 enum {
 	ULP_TX_MEM_READ = 2,
 	ULP_TX_MEM_WRITE = 3,
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
index a32de30..7ad6d4e 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
@@ -101,6 +101,7 @@ enum fw_wr_opcodes {
 	FW_RI_BIND_MW_WR               = 0x18,
 	FW_RI_FR_NSMR_WR               = 0x19,
 	FW_RI_INV_LSTAG_WR             = 0x1a,
+	FW_ISCSI_TX_DATA_WR	       = 0x45,
 	FW_LASTC2E_WR                  = 0x70
 };
 
@@ -561,6 +562,12 @@ enum fw_flowc_mnem {
 	FW_FLOWC_MNEM_SNDBUF,
 	FW_FLOWC_MNEM_MSS,
 	FW_FLOWC_MNEM_TXDATAPLEN_MAX,
+	FW_FLOWC_MNEM_TCPSTATE,
+	FW_FLOWC_MNEM_EOSTATE,
+	FW_FLOWC_MNEM_SCHEDCLASS,
+	FW_FLOWC_MNEM_DCBPRIO,
+	FW_FLOWC_MNEM_SND_SCALE,
+	FW_FLOWC_MNEM_RCV_SCALE,
 };
 
 struct fw_flowc_mnemval {
-- 
2.0.2


* [RFC 08/34] cxgb4: update struct cxgb4_lld_info definition
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (6 preceding siblings ...)
  2016-02-14 17:37 ` [RFC 07/34] cxgb4: add definitions for iSCSI target ULD Varun Prakash
@ 2016-02-14 17:37 ` Varun Prakash
  2016-02-14 17:37 ` [RFC 09/34] cxgb4: move VLAN_NONE macro definition Varun Prakash
                   ` (26 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:37 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

add members for iSCSI DDP.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4.h      | 2 ++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 4 ++++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h  | 4 ++++
 3 files changed, 10 insertions(+)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index 92086a0..646076e 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -759,6 +759,8 @@ struct adapter {
 	struct list_head list_node;
 	struct list_head rcu_node;
 
+	void *iscsi_ppm;
+
 	struct tid_info tids;
 	void **tid_release_head;
 	spinlock_t tid_release_lock;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 050f215..1a1f1c8 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -2457,6 +2457,10 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
 	lli.wr_cred = adap->params.ofldq_wr_cred;
 	lli.adapter_type = adap->params.chip;
 	lli.iscsi_iolen = MAXRXDATA_G(t4_read_reg(adap, TP_PARA_REG2_A));
+	lli.iscsi_tagmask = t4_read_reg(adap, ULP_RX_ISCSI_TAGMASK_A);
+	lli.iscsi_pgsz_order = t4_read_reg(adap, ULP_RX_ISCSI_PSZ_A);
+	lli.iscsi_llimit = t4_read_reg(adap, ULP_RX_ISCSI_LLIMIT_A);
+	lli.iscsi_ppm = &adap->iscsi_ppm;
 	lli.cclk_ps = 1000000000 / adap->params.vpd.cclk;
 	lli.udb_density = 1 << adap->params.sge.eq_qpp;
 	lli.ucq_density = 1 << adap->params.sge.iq_qpp;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
index d97a81f..f3c58aa 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
@@ -275,6 +275,10 @@ struct cxgb4_lld_info {
 	unsigned int max_ordird_qp;          /* Max ORD/IRD depth per RDMA QP */
 	unsigned int max_ird_adapter;        /* Max IRD memory per adapter */
 	bool ulptx_memwrite_dsgl;            /* use of T5 DSGL allowed */
+	unsigned int iscsi_tagmask;	     /* iscsi ddp tag mask */
+	unsigned int iscsi_pgsz_order;	     /* iscsi ddp page size orders */
+	unsigned int iscsi_llimit;	     /* chip's iscsi region llimit */
+	void **iscsi_ppm;		     /* iscsi page pod manager */
 	int nodeid;			     /* device numa node id */
 };
 
-- 
2.0.2


* [RFC 09/34] cxgb4: move VLAN_NONE macro definition
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (7 preceding siblings ...)
  2016-02-14 17:37 ` [RFC 08/34] cxgb4: update struct cxgb4_lld_info definition Varun Prakash
@ 2016-02-14 17:37 ` Varun Prakash
  2016-02-14 17:38 ` [RFC 10/34] cxgb4, iw_cxgb4: move delayed ack macro definitions Varun Prakash
                   ` (25 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:37 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

move the VLAN_NONE macro definition from l2t.c
to l2t.h.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/l2t.c | 2 --
 drivers/net/ethernet/chelsio/cxgb4/l2t.h | 2 ++
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/l2t.c b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
index 5b0f3ef..60a2603 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/l2t.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
@@ -48,8 +48,6 @@
 #include "t4_regs.h"
 #include "t4_values.h"
 
-#define VLAN_NONE 0xfff
-
 /* identifies sync vs async L2T_WRITE_REQs */
 #define SYNC_WR_S    12
 #define SYNC_WR_V(x) ((x) << SYNC_WR_S)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/l2t.h b/drivers/net/ethernet/chelsio/cxgb4/l2t.h
index 4e2d47a..79665bd 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/l2t.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/l2t.h
@@ -39,6 +39,8 @@
 #include <linux/if_ether.h>
 #include <linux/atomic.h>
 
+#define VLAN_NONE 0xfff
+
 enum { L2T_SIZE = 4096 };     /* # of L2T entries */
 
 enum {
-- 
2.0.2


* [RFC 10/34] cxgb4, iw_cxgb4: move delayed ack macro definitions
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (8 preceding siblings ...)
  2016-02-14 17:37 ` [RFC 09/34] cxgb4: move VLAN_NONE macro definition Varun Prakash
@ 2016-02-14 17:38 ` Varun Prakash
  2016-02-14 17:39 ` [RFC 11/34] cxgb4: add iSCSI DDP page pod manager Varun Prakash
                   ` (24 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:38 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, dledford, swise, indranil, kxie, hariprasad, varun

move the delayed ACK macro definitions to the
common header file t4_msg.h.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/infiniband/hw/cxgb4/t4fw_ri_api.h   | 9 ---------
 drivers/net/ethernet/chelsio/cxgb4/t4_msg.h | 9 +++++++++
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h b/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
index 5f47e03..1e26669 100644
--- a/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
+++ b/drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
@@ -753,13 +753,4 @@ struct fw_ri_wr {
 #define FW_RI_WR_P2PTYPE_G(x)	\
 	(((x) >> FW_RI_WR_P2PTYPE_S) & FW_RI_WR_P2PTYPE_M)
 
-#define RX_DACK_MODE_S    29
-#define RX_DACK_MODE_M    0x3
-#define RX_DACK_MODE_V(x) ((x) << RX_DACK_MODE_S)
-#define RX_DACK_MODE_G(x) (((x) >> RX_DACK_MODE_S) & RX_DACK_MODE_M)
-
-#define RX_DACK_CHANGE_S    31
-#define RX_DACK_CHANGE_V(x) ((x) << RX_DACK_CHANGE_S)
-#define RX_DACK_CHANGE_F    RX_DACK_CHANGE_V(1U)
-
 #endif /* _T4FW_RI_API_H_ */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
index 03da7a7..a5641e2 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
@@ -1040,6 +1040,15 @@ struct cpl_rx_data_ack {
 #define RX_FORCE_ACK_V(x) ((x) << RX_FORCE_ACK_S)
 #define RX_FORCE_ACK_F    RX_FORCE_ACK_V(1U)
 
+#define RX_DACK_MODE_S    29
+#define RX_DACK_MODE_M    0x3
+#define RX_DACK_MODE_V(x) ((x) << RX_DACK_MODE_S)
+#define RX_DACK_MODE_G(x) (((x) >> RX_DACK_MODE_S) & RX_DACK_MODE_M)
+
+#define RX_DACK_CHANGE_S    31
+#define RX_DACK_CHANGE_V(x) ((x) << RX_DACK_CHANGE_S)
+#define RX_DACK_CHANGE_F    RX_DACK_CHANGE_V(1U)
+
 struct cpl_rx_pkt {
 	struct rss_header rsshdr;
 	u8 opcode;
-- 
2.0.2


* [RFC 11/34] cxgb4: add iSCSI DDP page pod manager
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (9 preceding siblings ...)
  2016-02-14 17:38 ` [RFC 10/34] cxgb4, iw_cxgb4: move delayed ack macro definitions Varun Prakash
@ 2016-02-14 17:39 ` Varun Prakash
  2016-02-14 17:39 ` [RFC 12/34] cxgb4: update Kconfig and Makefile Varun Prakash
                   ` (23 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:39 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

add files for a common page pod manager;
both the iSCSI initiator and target ULDs will
use the common ppod manager for DDP.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c | 464 +++++++++++++++++++++++++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h | 310 +++++++++++++++++
 2 files changed, 774 insertions(+)
 create mode 100644 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c
 create mode 100644 drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c
new file mode 100644
index 0000000..d88a7a7
--- /dev/null
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.c
@@ -0,0 +1,464 @@
+/*
+ * cxgb4_ppm.c: Chelsio common library for T4/T5 iSCSI PagePod Manager
+ *
+ * Copyright (c) 2016 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Written by: Karen Xie (kxie@chelsio.com)
+ */
+
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/debugfs.h>
+#include <linux/export.h>
+#include <linux/list.h>
+#include <linux/skbuff.h>
+#include <linux/pci.h>
+#include <linux/scatterlist.h>
+
+#include "cxgb4_ppm.h"
+
+/* Direct Data Placement -
+ * Directly place the iSCSI Data-In or Data-Out PDU's payload into
+ * pre-posted final destination host-memory buffers based on the
+ * Initiator Task Tag (ITT) in Data-In or Target Task Tag (TTT)
+ * in Data-Out PDUs. The host memory address is programmed into
+ * h/w in the format of pagepod entries. The location of the
+ * pagepod entry is encoded into ddp tag which is used as the base
+ * for ITT/TTT.
+ */
+
+/* Direct-Data Placement page size adjustment
+ */
+int cxgbi_ppm_find_page_index(struct cxgbi_ppm *ppm, unsigned long pgsz)
+{
+	struct cxgbi_tag_format *tformat = &ppm->tformat;
+	int i;
+
+	for (i = 0; i < DDP_PGIDX_MAX; i++) {
+		if (pgsz == 1UL << (DDP_PGSZ_BASE_SHIFT +
+					 tformat->pgsz_order[i])) {
+			pr_debug("%s: %s ppm, pgsz %lu -> idx %d.\n",
+				 __func__, ppm->ndev->name, pgsz, i);
+			return i;
+		}
+	}
+	pr_info("ippm: ddp page size %lu not supported.\n", pgsz);
+	return DDP_PGIDX_MAX;
+}
+
+/* DDP setup & teardown
+ */
+static int ppm_find_unused_entries(unsigned long *bmap,
+				   unsigned int max_ppods,
+				   unsigned int start,
+				   unsigned int nr,
+				   unsigned int align_mask)
+{
+	unsigned long i;
+
+	i = bitmap_find_next_zero_area(bmap, max_ppods, start, nr, align_mask);
+
+	if (unlikely(i >= max_ppods) && (start > nr))
+		i = bitmap_find_next_zero_area(bmap, max_ppods, 0, start - 1,
+					       align_mask);
+	if (unlikely(i >= max_ppods))
+		return -ENOSPC;
+
+	bitmap_set(bmap, i, nr);
+	return (int)i;
+}
+
+static void ppm_mark_entries(struct cxgbi_ppm *ppm, int i, int count,
+			     unsigned long caller_data)
+{
+	struct cxgbi_ppod_data *pdata = ppm->ppod_data + i;
+
+	pdata->caller_data = caller_data;
+	pdata->npods = count;
+
+	if (pdata->color == ((1 << PPOD_IDX_SHIFT) - 1))
+		pdata->color = 0;
+	else
+		pdata->color++;
+}
+
+static int ppm_get_cpu_entries(struct cxgbi_ppm *ppm, unsigned int count,
+			       unsigned long caller_data)
+{
+	struct cxgbi_ppm_pool *pool;
+	unsigned int cpu;
+	int i;
+
+	cpu = get_cpu();
+	pool = per_cpu_ptr(ppm->pool, cpu);
+	spin_lock_bh(&pool->lock);
+	put_cpu();
+
+	i = ppm_find_unused_entries(pool->bmap, ppm->pool_index_max,
+				    pool->next, count, 0);
+	if (i < 0) {
+		pool->next = 0;
+		spin_unlock_bh(&pool->lock);
+		return -ENOSPC;
+	}
+
+	pool->next = i + count;
+	if (pool->next >= ppm->pool_index_max)
+		pool->next = 0;
+
+	spin_unlock_bh(&pool->lock);
+
+	pr_debug("%s: cpu %u, idx %d + %d (%d), next %u.\n",
+		 __func__, cpu, i, count, i + cpu * ppm->pool_index_max,
+		pool->next);
+
+	i += cpu * ppm->pool_index_max;
+	ppm_mark_entries(ppm, i, count, caller_data);
+
+	return i;
+}
+
+static int ppm_get_entries(struct cxgbi_ppm *ppm, unsigned int count,
+			   unsigned long caller_data)
+{
+	int i;
+
+	spin_lock_bh(&ppm->map_lock);
+	i = ppm_find_unused_entries(ppm->ppod_bmap, ppm->bmap_index_max,
+				    ppm->next, count, 0);
+	if (i < 0) {
+		ppm->next = 0;
+		spin_unlock_bh(&ppm->map_lock);
+		pr_debug("ippm: NO suitable entries %u available.\n",
+			 count);
+		return -ENOSPC;
+	}
+
+	ppm->next = i + count;
+	if (ppm->next >= ppm->bmap_index_max)
+		ppm->next = 0;
+
+	spin_unlock_bh(&ppm->map_lock);
+
+	pr_debug("%s: idx %d + %d (%d), next %u, caller_data 0x%lx.\n",
+		 __func__, i, count, i + ppm->pool_rsvd, ppm->next,
+		 caller_data);
+
+	i += ppm->pool_rsvd;
+	ppm_mark_entries(ppm, i, count, caller_data);
+
+	return i;
+}
+
+static void ppm_unmark_entries(struct cxgbi_ppm *ppm, int i, int count)
+{
+	pr_debug("%s: idx %d + %d.\n", __func__, i, count);
+
+	if (i < ppm->pool_rsvd) {
+		unsigned int cpu;
+		struct cxgbi_ppm_pool *pool;
+
+		cpu = i / ppm->pool_index_max;
+		i %= ppm->pool_index_max;
+
+		pool = per_cpu_ptr(ppm->pool, cpu);
+		spin_lock_bh(&pool->lock);
+		bitmap_clear(pool->bmap, i, count);
+
+		if (i < pool->next)
+			pool->next = i;
+		spin_unlock_bh(&pool->lock);
+
+		pr_debug("%s: cpu %u, idx %d, next %u.\n",
+			 __func__, cpu, i, pool->next);
+	} else {
+		spin_lock_bh(&ppm->map_lock);
+
+		i -= ppm->pool_rsvd;
+		bitmap_clear(ppm->ppod_bmap, i, count);
+
+		if (i < ppm->next)
+			ppm->next = i;
+		spin_unlock_bh(&ppm->map_lock);
+
+		pr_debug("%s: idx %d, next %u.\n", __func__, i, ppm->next);
+	}
+}
+
+void cxgbi_ppm_ppod_release(struct cxgbi_ppm *ppm, u32 idx)
+{
+	struct cxgbi_ppod_data *pdata;
+
+	if (idx >= ppm->ppmax) {
+		pr_warn("ippm: idx too big %u > %u.\n", idx, ppm->ppmax);
+		return;
+	}
+
+	pdata = ppm->ppod_data + idx;
+	if (!pdata->npods) {
+		pr_warn("ippm: idx %u, npods 0.\n", idx);
+		return;
+	}
+
+	pr_debug("release idx %u, npods %u.\n", idx, pdata->npods);
+	ppm_unmark_entries(ppm, idx, pdata->npods);
+}
+EXPORT_SYMBOL(cxgbi_ppm_ppod_release);
+
+int cxgbi_ppm_ppods_reserve(struct cxgbi_ppm *ppm, unsigned short nr_pages,
+			    u32 per_tag_pg_idx, u32 *ppod_idx,
+			    u32 *ddp_tag, unsigned long caller_data)
+{
+	struct cxgbi_ppod_data *pdata;
+	unsigned int npods;
+	int idx = -1;
+	unsigned int hwidx;
+	u32 tag;
+
+	npods = (nr_pages + PPOD_PAGES_MAX - 1) >> PPOD_PAGES_SHIFT;
+	if (!npods) {
+		pr_warn("%s: pages %u -> npods %u, full.\n",
+			__func__, nr_pages, npods);
+		return -EINVAL;
+	}
+
+	/* grab from cpu pool first */
+	idx = ppm_get_cpu_entries(ppm, npods, caller_data);
+	/* try the general pool */
+	if (idx < 0)
+		idx = ppm_get_entries(ppm, npods, caller_data);
+	if (idx < 0) {
+		pr_debug("ippm: pages %u, nospc %u, nxt %u, 0x%lx.\n",
+			 nr_pages, npods, ppm->next, caller_data);
+		return idx;
+	}
+
+	pdata = ppm->ppod_data + idx;
+	hwidx = ppm->base_idx + idx;
+
+	tag = cxgbi_ppm_make_ddp_tag(hwidx, pdata->color);
+
+	if (per_tag_pg_idx)
+		tag |= (per_tag_pg_idx << 30) & 0xC0000000;
+
+	*ppod_idx = idx;
+	*ddp_tag = tag;
+
+	pr_debug("ippm: sg %u, tag 0x%x(%u,%u), data 0x%lx.\n",
+		 nr_pages, tag, idx, npods, caller_data);
+
+	return npods;
+}
+EXPORT_SYMBOL(cxgbi_ppm_ppods_reserve);
+
+void cxgbi_ppm_make_ppod_hdr(struct cxgbi_ppm *ppm, u32 tag,
+			     unsigned int tid, unsigned int offset,
+			     unsigned int length,
+			     struct cxgbi_pagepod_hdr *hdr)
+{
+	/* The DDP tag stored in the pagepod must have bits 31:30 cleared.
+	 * The DDP tag sent on the wire to the peer has non-zero bits 31:30.
+	 */
+	tag &= 0x3FFFFFFF;
+
+	hdr->vld_tid = htonl(PPOD_VALID_FLAG | PPOD_TID(tid));
+
+	hdr->rsvd = 0;
+	hdr->pgsz_tag_clr = htonl(tag & ppm->tformat.idx_clr_mask);
+	hdr->max_offset = htonl(length);
+	hdr->page_offset = htonl(offset);
+
+	pr_debug("ippm: tag 0x%x, tid 0x%x, xfer %u, off %u.\n",
+		 tag, tid, length, offset);
+}
+EXPORT_SYMBOL(cxgbi_ppm_make_ppod_hdr);
+
+static void ppm_free(struct cxgbi_ppm *ppm)
+{
+	vfree(ppm);
+}
+
+static void ppm_destroy(struct kref *kref)
+{
+	struct cxgbi_ppm *ppm = container_of(kref,
+					     struct cxgbi_ppm,
+					     refcnt);
+	pr_info("ippm: kref 0, destroy %s ppm 0x%p.\n",
+		ppm->ndev->name, ppm);
+
+	*ppm->ppm_pp = NULL;
+
+	free_percpu(ppm->pool);
+	ppm_free(ppm);
+}
+
+int cxgbi_ppm_release(struct cxgbi_ppm *ppm)
+{
+	if (ppm) {
+		int rv;
+
+		rv = kref_put(&ppm->refcnt, ppm_destroy);
+		return rv;
+	}
+	return 1;
+}
+
+static struct cxgbi_ppm_pool *ppm_alloc_cpu_pool(unsigned int *total,
+						 unsigned int *pcpu_ppmax)
+{
+	struct cxgbi_ppm_pool *pools;
+	unsigned int ppmax = (*total) / num_possible_cpus();
+	unsigned int max = (PCPU_MIN_UNIT_SIZE - sizeof(*pools)) << 3;
+	unsigned int bmap;
+	unsigned int alloc_sz;
+	unsigned int count = 0;
+	unsigned int cpu;
+
+	/* make sure per cpu pool fits into PCPU_MIN_UNIT_SIZE */
+	if (ppmax > max)
+		ppmax = max;
+
+	/* pool size must be multiple of unsigned long */
+	bmap = BITS_TO_LONGS(ppmax);
+	ppmax = (bmap * sizeof(unsigned long)) << 3;
+
+	alloc_sz = sizeof(*pools) + sizeof(unsigned long) * bmap;
+	pools = __alloc_percpu(alloc_sz, __alignof__(struct cxgbi_ppm_pool));
+
+	if (!pools)
+		return NULL;
+
+	for_each_possible_cpu(cpu) {
+		struct cxgbi_ppm_pool *ppool = per_cpu_ptr(pools, cpu);
+
+		memset(ppool, 0, alloc_sz);
+		spin_lock_init(&ppool->lock);
+		count += ppmax;
+	}
+
+	*total = count;
+	*pcpu_ppmax = ppmax;
+
+	return pools;
+}
+
+int cxgbi_ppm_init(void **ppm_pp, struct net_device *ndev,
+		   struct pci_dev *pdev, void *lldev,
+		   struct cxgbi_tag_format *tformat,
+		   unsigned int ppmax,
+		   unsigned int llimit,
+		   unsigned int start,
+		   unsigned int reserve_factor)
+{
+	struct cxgbi_ppm *ppm = (struct cxgbi_ppm *)(*ppm_pp);
+	struct cxgbi_ppm_pool *pool = NULL;
+	unsigned int ppmax_pool = 0;
+	unsigned int pool_index_max = 0;
+	unsigned int alloc_sz;
+	unsigned int ppod_bmap_size;
+
+	if (ppm) {
+		pr_info("ippm: %s, ppm 0x%p,0x%p already initialized, %u/%u.\n",
+			ndev->name, ppm_pp, ppm, ppm->ppmax, ppmax);
+		kref_get(&ppm->refcnt);
+		return 1;
+	}
+
+	if (reserve_factor) {
+		ppmax_pool = ppmax / reserve_factor;
+		pool = ppm_alloc_cpu_pool(&ppmax_pool, &pool_index_max);
+
+		pr_debug("%s: ppmax %u, cpu total %u, per cpu %u.\n",
+			 ndev->name, ppmax, ppmax_pool, pool_index_max);
+	}
+
+	ppod_bmap_size = BITS_TO_LONGS(ppmax - ppmax_pool);
+	alloc_sz = sizeof(struct cxgbi_ppm) +
+			ppmax * (sizeof(struct cxgbi_ppod_data)) +
+			ppod_bmap_size * sizeof(unsigned long);
+
+	ppm = vmalloc(alloc_sz);
+	if (!ppm)
+		goto release_ppm_pool;
+
+	memset(ppm, 0, alloc_sz);
+
+	ppm->ppod_bmap = (unsigned long *)(&ppm->ppod_data[ppmax]);
+
+	if ((ppod_bmap_size >> 3) > (ppmax - ppmax_pool)) {
+		unsigned int start = ppmax - ppmax_pool;
+		unsigned int end = ppod_bmap_size >> 3;
+
+		bitmap_set(ppm->ppod_bmap, ppmax, end - start);
+		pr_info("%s: %u - %u < %u * 8, mask extra bits %u, %u.\n",
+			__func__, ppmax, ppmax_pool, ppod_bmap_size, start,
+			end);
+	}
+
+	spin_lock_init(&ppm->map_lock);
+	kref_init(&ppm->refcnt);
+
+	memcpy(&ppm->tformat, tformat, sizeof(struct cxgbi_tag_format));
+
+	ppm->ppm_pp = ppm_pp;
+	ppm->ndev = ndev;
+	ppm->pdev = pdev;
+	ppm->lldev = lldev;
+	ppm->ppmax = ppmax;
+	ppm->next = 0;
+	ppm->llimit = llimit;
+	ppm->base_idx = start > llimit ?
+			(start - llimit + 1) >> PPOD_SIZE_SHIFT : 0;
+	ppm->bmap_index_max = ppmax - ppmax_pool;
+
+	ppm->pool = pool;
+	ppm->pool_rsvd = ppmax_pool;
+	ppm->pool_index_max = pool_index_max;
+
+	/* check one more time */
+	if (*ppm_pp) {
+		ppm_free(ppm);
+		ppm = (struct cxgbi_ppm *)(*ppm_pp);
+
+		pr_info("ippm: %s, ppm 0x%p,0x%p already initialized, %u/%u.\n",
+			ndev->name, ppm_pp, *ppm_pp, ppm->ppmax, ppmax);
+
+		kref_get(&ppm->refcnt);
+		return 1;
+	}
+	*ppm_pp = ppm;
+
+	ppm->tformat.pgsz_idx_dflt = cxgbi_ppm_find_page_index(ppm, PAGE_SIZE);
+
+	pr_info("ippm %s: ppm 0x%p, 0x%p, base %u/%u, pg %lu,%u, rsvd %u,%u.\n",
+		ndev->name, ppm_pp, ppm, ppm->base_idx, ppm->ppmax, PAGE_SIZE,
+		ppm->tformat.pgsz_idx_dflt, ppm->pool_rsvd,
+		ppm->pool_index_max);
+
+	return 0;
+
+release_ppm_pool:
+	free_percpu(pool);
+	return -ENOMEM;
+}
+EXPORT_SYMBOL(cxgbi_ppm_init);
+
+unsigned int cxgbi_tagmask_set(unsigned int ppmax)
+{
+	unsigned int bits = fls(ppmax);
+
+	if (bits > PPOD_IDX_MAX_SIZE)
+		bits = PPOD_IDX_MAX_SIZE;
+
+	pr_info("ippm: ppmax %u/0x%x -> bits %u, tagmask 0x%x.\n",
+		ppmax, ppmax, bits, 1 << (bits + PPOD_IDX_SHIFT));
+
+	return 1 << (bits + PPOD_IDX_SHIFT);
+}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h
new file mode 100644
index 0000000..d487326
--- /dev/null
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ppm.h
@@ -0,0 +1,310 @@
+/*
+ * cxgb4_ppm.h: Chelsio common library for T4/T5 iSCSI ddp operation
+ *
+ * Copyright (c) 2016 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Written by: Karen Xie (kxie@chelsio.com)
+ */
+
+#ifndef	__CXGB4PPM_H__
+#define	__CXGB4PPM_H__
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/debugfs.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
+#include <linux/vmalloc.h>
+#include <linux/bitmap.h>
+
+struct cxgbi_pagepod_hdr {
+	u32 vld_tid;
+	u32 pgsz_tag_clr;
+	u32 max_offset;
+	u32 page_offset;
+	u64 rsvd;
+};
+
+#define PPOD_PAGES_MAX			4
+struct cxgbi_pagepod {
+	struct cxgbi_pagepod_hdr hdr;
+	u64 addr[PPOD_PAGES_MAX + 1];
+};
+
+/* ddp tag format
+ * for a 32-bit tag:
+ * bit #
+ * 31 .....   .....  0
+ *     X   Y...Y Z...Z, where
+ *     ^   ^^^^^ ^^^^
+ *     |   |      |____ when ddp bit = 0: color bits
+ *     |   |
+ *     |   |____ when ddp bit = 0: idx into the ddp memory region
+ *     |
+ *     |____ ddp bit: 0 - ddp tag, 1 - non-ddp tag
+ *
+ *  [page selector:2] [sw/free bits] [0] [idx] [color:6]
+ */
+
+#define DDP_PGIDX_MAX		4
+#define DDP_PGSZ_BASE_SHIFT	12	/* base page 4K */
+
+struct cxgbi_task_tag_info {
+	unsigned char flags;
+#define CXGBI_PPOD_INFO_FLAG_VALID	0x1
+#define CXGBI_PPOD_INFO_FLAG_MAPPED	0x2
+	unsigned char cid;
+	unsigned short pg_shift;
+	unsigned int npods;
+	unsigned int idx;
+	unsigned int tag;
+	struct cxgbi_pagepod_hdr hdr;
+	int nents;
+	int nr_pages;
+	struct scatterlist *sgl;
+};
+
+struct cxgbi_tag_format {
+	unsigned char pgsz_order[DDP_PGIDX_MAX];
+	unsigned char pgsz_idx_dflt;
+	unsigned char free_bits:4;
+	unsigned char color_bits:4;
+	unsigned char idx_bits;
+	unsigned char rsvd_bits;
+	unsigned int  no_ddp_mask;
+	unsigned int  idx_mask;
+	unsigned int  color_mask;
+	unsigned int  idx_clr_mask;
+	unsigned int  rsvd_mask;
+};
+
+struct cxgbi_ppod_data {
+	unsigned char pg_idx:2;
+	unsigned char color:6;
+	unsigned char chan_id;
+	unsigned short npods;
+	unsigned long caller_data;
+};
+
+/* per cpu ppm pool */
+struct cxgbi_ppm_pool {
+	unsigned int base;		/* base index */
+	unsigned int next;		/* next possible free index */
+	spinlock_t lock;		/* ppm pool lock */
+	unsigned long bmap[0];
+} ____cacheline_aligned_in_smp;
+
+struct cxgbi_ppm {
+	struct kref refcnt;
+	struct net_device *ndev;	/* net_device, 1st port */
+	struct pci_dev *pdev;
+	void *lldev;
+	void **ppm_pp;
+	struct cxgbi_tag_format tformat;
+	unsigned int ppmax;
+	unsigned int llimit;
+	unsigned int base_idx;
+
+	unsigned int pool_rsvd;
+	unsigned int pool_index_max;
+	struct cxgbi_ppm_pool __percpu *pool;
+	/* map lock */
+	spinlock_t map_lock;		/* ppm map lock */
+	unsigned int bmap_index_max;
+	unsigned int next;
+	unsigned long *ppod_bmap;
+	struct cxgbi_ppod_data ppod_data[0];
+};
+
+#define DDP_THRESHOLD		512
+
+#define PPOD_PAGES_SHIFT	2       /*  4 pages per pod */
+
+#define IPPOD_SIZE               sizeof(struct cxgbi_pagepod)  /*  64 */
+#define PPOD_SIZE_SHIFT         6
+
+/* page pods are allocated in groups of this size (must be power of 2) */
+#define PPOD_CLUSTER_SIZE	16U
+
+#define ULPMEM_DSGL_MAX_NPPODS	16	/*  1024/PPOD_SIZE */
+#define ULPMEM_IDATA_MAX_NPPODS	3	/* (PPOD_SIZE * 3 + ulptx hdr) < 256B */
+#define PCIE_MEMWIN_MAX_NPPODS	16	/*  1024/PPOD_SIZE */
+
+#define PPOD_COLOR_SHIFT	0
+#define PPOD_COLOR(x)		((x) << PPOD_COLOR_SHIFT)
+
+#define PPOD_IDX_SHIFT          6
+#define PPOD_IDX_MAX_SIZE       24
+
+#define PPOD_TID_SHIFT		0
+#define PPOD_TID(x)		((x) << PPOD_TID_SHIFT)
+
+#define PPOD_TAG_SHIFT		6
+#define PPOD_TAG(x)		((x) << PPOD_TAG_SHIFT)
+
+#define PPOD_VALID_SHIFT	24
+#define PPOD_VALID(x)		((x) << PPOD_VALID_SHIFT)
+#define PPOD_VALID_FLAG		PPOD_VALID(1U)
+
+#define PPOD_PI_EXTRACT_CTL_SHIFT	31
+#define PPOD_PI_EXTRACT_CTL(x)		((x) << PPOD_PI_EXTRACT_CTL_SHIFT)
+#define PPOD_PI_EXTRACT_CTL_FLAG	PPOD_PI_EXTRACT_CTL(1U)
+
+#define PPOD_PI_TYPE_SHIFT		29
+#define PPOD_PI_TYPE_MASK		0x3
+#define PPOD_PI_TYPE(x)			((x) << PPOD_PI_TYPE_SHIFT)
+
+#define PPOD_PI_CHECK_CTL_SHIFT		27
+#define PPOD_PI_CHECK_CTL_MASK		0x3
+#define PPOD_PI_CHECK_CTL(x)		((x) << PPOD_PI_CHECK_CTL_SHIFT)
+
+#define PPOD_PI_REPORT_CTL_SHIFT	25
+#define PPOD_PI_REPORT_CTL_MASK		0x3
+#define PPOD_PI_REPORT_CTL(x)		((x) << PPOD_PI_REPORT_CTL_SHIFT)
+
+static inline int cxgbi_ppm_is_ddp_tag(struct cxgbi_ppm *ppm, u32 tag)
+{
+	return !(tag & ppm->tformat.no_ddp_mask);
+}
+
+static inline int cxgbi_ppm_sw_tag_is_usable(struct cxgbi_ppm *ppm,
+					     u32 tag)
+{
+	/* the sw tag must be using <= 31 bits */
+	return !(tag & 0x80000000U);
+}
+
+static inline int cxgbi_ppm_make_non_ddp_tag(struct cxgbi_ppm *ppm,
+					     u32 sw_tag,
+					     u32 *final_tag)
+{
+	struct cxgbi_tag_format *tformat = &ppm->tformat;
+
+	if (!cxgbi_ppm_sw_tag_is_usable(ppm, sw_tag)) {
+		pr_info("sw_tag 0x%x NOT usable.\n", sw_tag);
+		return -EINVAL;
+	}
+
+	if (!sw_tag) {
+		*final_tag = tformat->no_ddp_mask;
+	} else {
+		unsigned int shift = tformat->idx_bits + tformat->color_bits;
+		u32 lower = sw_tag & tformat->idx_clr_mask;
+		u32 upper = (sw_tag >> shift) << (shift + 1);
+
+		*final_tag = upper | tformat->no_ddp_mask | lower;
+	}
+	return 0;
+}
+
+static inline u32 cxgbi_ppm_decode_non_ddp_tag(struct cxgbi_ppm *ppm,
+					       u32 tag)
+{
+	struct cxgbi_tag_format *tformat = &ppm->tformat;
+	unsigned int shift = tformat->idx_bits + tformat->color_bits;
+	u32 lower = tag & tformat->idx_clr_mask;
+	u32 upper = (tag >> tformat->rsvd_bits) << shift;
+
+	return upper | lower;
+}
+
+static inline u32 cxgbi_ppm_ddp_tag_get_idx(struct cxgbi_ppm *ppm,
+					    u32 ddp_tag)
+{
+	u32 hw_idx = (ddp_tag >> PPOD_IDX_SHIFT) &
+			ppm->tformat.idx_mask;
+
+	return hw_idx - ppm->base_idx;
+}
+
+static inline u32 cxgbi_ppm_make_ddp_tag(unsigned int hw_idx,
+					 unsigned char color)
+{
+	return (hw_idx << PPOD_IDX_SHIFT) | ((u32)color);
+}
+
+static inline unsigned long
+cxgbi_ppm_get_tag_caller_data(struct cxgbi_ppm *ppm,
+			      u32 ddp_tag)
+{
+	u32 idx = cxgbi_ppm_ddp_tag_get_idx(ppm, ddp_tag);
+
+	return ppm->ppod_data[idx].caller_data;
+}
+
+/* sw bits are the free bits */
+static inline int cxgbi_ppm_ddp_tag_update_sw_bits(struct cxgbi_ppm *ppm,
+						   u32 val, u32 orig_tag,
+						   u32 *final_tag)
+{
+	struct cxgbi_tag_format *tformat = &ppm->tformat;
+	u32 v = val >> tformat->free_bits;
+
+	if (v) {
+		pr_info("sw_bits 0x%x too large, avail bits %u.\n",
+			val, tformat->free_bits);
+		return -EINVAL;
+	}
+	if (!cxgbi_ppm_is_ddp_tag(ppm, orig_tag))
+		return -EINVAL;
+
+	*final_tag = (val << tformat->rsvd_bits) |
+		     (orig_tag & ppm->tformat.rsvd_mask);
+	return 0;
+}
+
+static inline void cxgbi_ppm_ppod_clear(struct cxgbi_pagepod *ppod)
+{
+	ppod->hdr.vld_tid = 0U;
+}
+
+static inline void cxgbi_tagmask_check(unsigned int tagmask,
+				       struct cxgbi_tag_format *tformat)
+{
+	unsigned int bits = fls(tagmask);
+
+	/* reserve top most 2 bits for page selector */
+	tformat->free_bits = 32 - 2 - bits;
+	tformat->rsvd_bits = bits;
+	tformat->color_bits = PPOD_IDX_SHIFT;
+	tformat->idx_bits = bits - 1 - PPOD_IDX_SHIFT;
+	tformat->no_ddp_mask = 1 << (bits - 1);
+	tformat->idx_mask = (1 << tformat->idx_bits) - 1;
+	tformat->color_mask = (1 << PPOD_IDX_SHIFT) - 1;
+	tformat->idx_clr_mask = (1 << (bits - 1)) - 1;
+	tformat->rsvd_mask = (1 << bits) - 1;
+
+	pr_info("ippm: tagmask 0x%x, rsvd %u=%u+%u+1, mask 0x%x,0x%x, "
+		"pg %u,%u,%u,%u.\n",
+		tagmask, tformat->rsvd_bits, tformat->idx_bits,
+		tformat->color_bits, tformat->no_ddp_mask, tformat->rsvd_mask,
+		tformat->pgsz_order[0], tformat->pgsz_order[1],
+		tformat->pgsz_order[2], tformat->pgsz_order[3]);
+}
+
+int cxgbi_ppm_find_page_index(struct cxgbi_ppm *ppm, unsigned long pgsz);
+void cxgbi_ppm_make_ppod_hdr(struct cxgbi_ppm *ppm, u32 tag,
+			     unsigned int tid, unsigned int offset,
+			     unsigned int length,
+			     struct cxgbi_pagepod_hdr *hdr);
+void cxgbi_ppm_ppod_release(struct cxgbi_ppm *, u32 idx);
+int cxgbi_ppm_ppods_reserve(struct cxgbi_ppm *, unsigned short nr_pages,
+			    u32 per_tag_pg_idx, u32 *ppod_idx, u32 *ddp_tag,
+			    unsigned long caller_data);
+int cxgbi_ppm_init(void **ppm_pp, struct net_device *, struct pci_dev *,
+		   void *lldev, struct cxgbi_tag_format *,
+		   unsigned int ppmax, unsigned int llimit,
+		   unsigned int start,
+		   unsigned int reserve_factor);
+int cxgbi_ppm_release(struct cxgbi_ppm *ppm);
+void cxgbi_tagmask_check(unsigned int tagmask, struct cxgbi_tag_format *);
+unsigned int cxgbi_tagmask_set(unsigned int ppmax);
+
+#endif	/*__CXGB4PPM_H__*/
-- 
2.0.2

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [RFC 12/34] cxgb4: update Kconfig and Makefile
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (10 preceding siblings ...)
  2016-02-14 17:39 ` [RFC 11/34] cxgb4: add iSCSI DDP page pod manager Varun Prakash
@ 2016-02-14 17:39 ` Varun Prakash
  2016-03-01 14:47   ` Christoph Hellwig
  2016-02-14 17:42 ` [RFC 13/34] iscsi-target: add new transport type Varun Prakash
                   ` (22 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:39 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, davem, swise, indranil, kxie, hariprasad, varun

Update Kconfig and Makefile to enable building of the
iSCSI DDP page pod manager (cxgb4_ppm.o).

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/net/ethernet/chelsio/Kconfig        | 11 +++++++++++
 drivers/net/ethernet/chelsio/cxgb4/Makefile |  1 +
 2 files changed, 12 insertions(+)

diff --git a/drivers/net/ethernet/chelsio/Kconfig b/drivers/net/ethernet/chelsio/Kconfig
index 4d187f2..4686a85 100644
--- a/drivers/net/ethernet/chelsio/Kconfig
+++ b/drivers/net/ethernet/chelsio/Kconfig
@@ -96,6 +96,17 @@ config CHELSIO_T4_DCB
 
 	  If unsure, say N.
 
+config CHELSIO_T4_UWIRE
+	bool "Unified Wire Support for Chelsio T5 cards"
+	default n
+	depends on CHELSIO_T4
+	---help---
+	  Enable unified-wire offload features.
+	  Say Y here if you want to enable unified-wire over Ethernet
+	  in the driver.
+
+	  If unsure, say N.
+
 config CHELSIO_T4_FCOE
 	bool "Fibre Channel over Ethernet (FCoE) Support for Chelsio T5 cards"
 	default n
diff --git a/drivers/net/ethernet/chelsio/cxgb4/Makefile b/drivers/net/ethernet/chelsio/cxgb4/Makefile
index ace0ab9..85c9282 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/Makefile
+++ b/drivers/net/ethernet/chelsio/cxgb4/Makefile
@@ -7,4 +7,5 @@ obj-$(CONFIG_CHELSIO_T4) += cxgb4.o
 cxgb4-objs := cxgb4_main.o l2t.o t4_hw.o sge.o clip_tbl.o cxgb4_ethtool.o
 cxgb4-$(CONFIG_CHELSIO_T4_DCB) +=  cxgb4_dcb.o
 cxgb4-$(CONFIG_CHELSIO_T4_FCOE) +=  cxgb4_fcoe.o
+cxgb4-$(CONFIG_CHELSIO_T4_UWIRE) +=  cxgb4_ppm.o
 cxgb4-$(CONFIG_DEBUG_FS) += cxgb4_debugfs.o
-- 
2.0.2




* [RFC 13/34] iscsi-target: add new transport type
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (11 preceding siblings ...)
  2016-02-14 17:39 ` [RFC 12/34] cxgb4: update Kconfig and Makefile Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-03-01 14:48   ` Christoph Hellwig
  2016-02-14 17:42 ` [RFC 14/34] iscsi-target: export symbols Varun Prakash
                   ` (21 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Add a new transport type, ISCSI_TCP_CXGB4.

This transport provides the following features:
1. TCP/IP offload.
2. iSCSI PDU recovery by reassembling TCP segments.
3. Header and Data Digest offload.
4. iSCSI segmentation offload (ISO).
5. Direct Data Placement (DDP).
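As a usage illustration of the configfs attribute this patch adds (a hypothetical sketch: the IQN and portal address are made up, and the default configfs mount point is assumed), the new per-portal "cxgb4" attribute would be driven like the existing "iser" one:

```shell
# Made-up identifiers for illustration only
IQN=iqn.2016-02.com.example:target0
PORTAL=10.0.0.1:3260

# Writing 1 requests the cxgbit module and adds an
# ISCSI_TCP_CXGB4 child network portal; writing 0 removes it.
echo 1 > /sys/kernel/config/target/iscsi/$IQN/tpgt_1/np/$PORTAL/cxgb4

# Reading back reports whether the offload portal is active.
cat /sys/kernel/config/target/iscsi/$IQN/tpgt_1/np/$PORTAL/cxgb4
```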

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target_configfs.c | 79 ++++++++++++++++++++++++++++
 include/target/iscsi/iscsi_target_core.h     |  1 +
 2 files changed, 80 insertions(+)

diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
index 2f821de..e09c891 100644
--- a/drivers/target/iscsi/iscsi_target_configfs.c
+++ b/drivers/target/iscsi/iscsi_target_configfs.c
@@ -182,12 +182,91 @@ out:
 	return rc;
 }
 
+static ssize_t lio_target_np_cxgb4_show(struct config_item *item, char *page)
+{
+	struct iscsi_tpg_np *tpg_np = to_iscsi_tpg_np(item);
+	struct iscsi_tpg_np *tpg_np_cxgb4;
+	ssize_t rb;
+
+	tpg_np_cxgb4 = iscsit_tpg_locate_child_np(tpg_np, ISCSI_TCP_CXGB4);
+	if (tpg_np_cxgb4)
+		rb = sprintf(page, "1\n");
+	else
+		rb = sprintf(page, "0\n");
+
+	return rb;
+}
+
+static ssize_t lio_target_np_cxgb4_store(struct config_item *item,
+					 const char *page, size_t count)
+{
+	struct iscsi_tpg_np *tpg_np = to_iscsi_tpg_np(item);
+	struct iscsi_np *np;
+	struct iscsi_portal_group *tpg;
+	struct iscsi_tpg_np *tpg_np_cxgb4 = NULL;
+	u32 op;
+	int rc = 0;
+
+	rc = kstrtou32(page, 0, &op);
+	if (rc)
+		return rc;
+
+	if ((op != 1) && (op != 0)) {
+		pr_err("Illegal value for np_cxgb4: %u\n", op);
+		return -EINVAL;
+	}
+
+	np = tpg_np->tpg_np;
+	if (!np) {
+		pr_err("Unable to locate struct iscsi_np from"
+				" struct iscsi_tpg_np\n");
+		return -EINVAL;
+	}
+
+	tpg = tpg_np->tpg;
+	if (iscsit_get_tpg(tpg) < 0)
+		return -EINVAL;
+
+	if (op) {
+		rc = request_module("cxgbit");
+		if (rc != 0) {
+			pr_warn("Unable to request_module for cxgbit\n");
+			rc = 0;
+		}
+
+		tpg_np_cxgb4 = iscsit_tpg_add_network_portal(tpg,
+							     &np->np_sockaddr,
+							     tpg_np,
+							     ISCSI_TCP_CXGB4);
+		if (IS_ERR(tpg_np_cxgb4)) {
+			rc = PTR_ERR(tpg_np_cxgb4);
+			goto out;
+		}
+	} else {
+		tpg_np_cxgb4 = iscsit_tpg_locate_child_np(tpg_np,
+							  ISCSI_TCP_CXGB4);
+		if (tpg_np_cxgb4) {
+			rc = iscsit_tpg_del_network_portal(tpg, tpg_np_cxgb4);
+			if (rc < 0)
+				goto out;
+		}
+	}
+
+	iscsit_put_tpg(tpg);
+	return count;
+out:
+	iscsit_put_tpg(tpg);
+	return rc;
+}
+
 CONFIGFS_ATTR(lio_target_np_, sctp);
 CONFIGFS_ATTR(lio_target_np_, iser);
+CONFIGFS_ATTR(lio_target_np_, cxgb4);
 
 static struct configfs_attribute *lio_target_portal_attrs[] = {
 	&lio_target_np_attr_sctp,
 	&lio_target_np_attr_iser,
+	&lio_target_np_attr_cxgb4,
 	NULL,
 };
 
diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
index 373d334..0edf18b 100644
--- a/include/target/iscsi/iscsi_target_core.h
+++ b/include/target/iscsi/iscsi_target_core.h
@@ -74,6 +74,7 @@ enum iscsit_transport_type {
 	ISCSI_IWARP_TCP				= 3,
 	ISCSI_IWARP_SCTP			= 4,
 	ISCSI_INFINIBAND			= 5,
+	ISCSI_TCP_CXGB4				= 6,
 };
 
 /* RFC-3720 7.1.4  Standard Connection State Diagram for a Target */
-- 
2.0.2


* [RFC 14/34] iscsi-target: export symbols
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (12 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 13/34] iscsi-target: add new transport type Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-03-01 14:49   ` Christoph Hellwig
  2016-02-14 17:42 ` [RFC 15/34] iscsi-target: export symbols from iscsi_target.c Varun Prakash
                   ` (20 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Export symbols required by the ISCSI_TCP_CXGB4
transport driver.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target_datain_values.c |  3 ++
 drivers/target/iscsi/iscsi_target_erl0.c          |  1 +
 drivers/target/iscsi/iscsi_target_erl1.c          |  1 +
 drivers/target/iscsi/iscsi_target_login.c         |  3 +-
 drivers/target/iscsi/iscsi_target_nego.c          |  1 +
 drivers/target/iscsi/iscsi_target_parameters.c    |  1 +
 drivers/target/iscsi/iscsi_target_util.c          |  4 +++
 include/target/iscsi/iscsi_transport.h            | 34 +++++++++++++++++++++++
 8 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/drivers/target/iscsi/iscsi_target_datain_values.c b/drivers/target/iscsi/iscsi_target_datain_values.c
index fb3b52b..73253e3 100644
--- a/drivers/target/iscsi/iscsi_target_datain_values.c
+++ b/drivers/target/iscsi/iscsi_target_datain_values.c
@@ -55,6 +55,7 @@ void iscsit_free_datain_req(struct iscsi_cmd *cmd, struct iscsi_datain_req *dr)
 
 	kmem_cache_free(lio_dr_cache, dr);
 }
+EXPORT_SYMBOL(iscsit_free_datain_req);
 
 void iscsit_free_all_datain_reqs(struct iscsi_cmd *cmd)
 {
@@ -79,6 +80,7 @@ struct iscsi_datain_req *iscsit_get_datain_req(struct iscsi_cmd *cmd)
 	return list_first_entry(&cmd->datain_list, struct iscsi_datain_req,
 				cmd_datain_node);
 }
+EXPORT_SYMBOL(iscsit_get_datain_req);
 
 /*
  *	For Normal and Recovery DataSequenceInOrder=Yes and DataPDUInOrder=Yes.
@@ -524,3 +526,4 @@ struct iscsi_datain_req *iscsit_get_datain_values(
 
 	return NULL;
 }
+EXPORT_SYMBOL(iscsit_get_datain_values);
diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
index 210f6e4..4a66317 100644
--- a/drivers/target/iscsi/iscsi_target_erl0.c
+++ b/drivers/target/iscsi/iscsi_target_erl0.c
@@ -913,6 +913,7 @@ void iscsit_fall_back_to_erl0(struct iscsi_session *sess)
 
 	atomic_set(&sess->session_fall_back_to_erl0, 1);
 }
+EXPORT_SYMBOL(iscsit_fall_back_to_erl0);
 
 static void iscsit_handle_connection_cleanup(struct iscsi_conn *conn)
 {
diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c
index 9214c9da..42aaaea 100644
--- a/drivers/target/iscsi/iscsi_target_erl1.c
+++ b/drivers/target/iscsi/iscsi_target_erl1.c
@@ -1271,6 +1271,7 @@ void iscsit_start_dataout_timer(
 	cmd->dataout_timer_flags |= ISCSI_TF_RUNNING;
 	add_timer(&cmd->dataout_timer);
 }
+EXPORT_SYMBOL(iscsit_start_dataout_timer);
 
 void iscsit_stop_dataout_timer(struct iscsi_cmd *cmd)
 {
diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
index 96e78c8..616ec9e 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -249,7 +249,7 @@ static void iscsi_login_set_conn_values(
 	mutex_unlock(&auth_id_lock);
 }
 
-static __printf(2, 3) int iscsi_change_param_sprintf(
+__printf(2, 3) int iscsi_change_param_sprintf(
 	struct iscsi_conn *conn,
 	const char *fmt, ...)
 {
@@ -270,6 +270,7 @@ static __printf(2, 3) int iscsi_change_param_sprintf(
 
 	return 0;
 }
+EXPORT_SYMBOL(iscsi_change_param_sprintf);
 
 /*
  *	This is the leading connection of a new session,
diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
index 9fc9117..ca06132 100644
--- a/drivers/target/iscsi/iscsi_target_nego.c
+++ b/drivers/target/iscsi/iscsi_target_nego.c
@@ -269,6 +269,7 @@ int iscsi_target_check_login_request(
 
 	return 0;
 }
+EXPORT_SYMBOL(iscsi_target_check_login_request);
 
 static int iscsi_target_check_first_request(
 	struct iscsi_conn *conn,
diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c
index 3a1f9a7..0efa80b 100644
--- a/drivers/target/iscsi/iscsi_target_parameters.c
+++ b/drivers/target/iscsi/iscsi_target_parameters.c
@@ -680,6 +680,7 @@ struct iscsi_param *iscsi_find_param_from_key(
 	pr_err("Unable to locate key \"%s\".\n", key);
 	return NULL;
 }
+EXPORT_SYMBOL(iscsi_find_param_from_key);
 
 int iscsi_extract_key_value(char *textbuf, char **key, char **value)
 {
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index 428b0d9..5cab517 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -126,6 +126,7 @@ struct iscsi_r2t *iscsit_get_r2t_from_list(struct iscsi_cmd *cmd)
 			" 0x%08x.\n", cmd->init_task_tag);
 	return NULL;
 }
+EXPORT_SYMBOL(iscsit_get_r2t_from_list);
 
 /*
  *	Called with cmd->r2t_lock held.
@@ -514,6 +515,7 @@ void iscsit_add_cmd_to_immediate_queue(
 
 	wake_up(&conn->queues_wq);
 }
+EXPORT_SYMBOL(iscsit_add_cmd_to_immediate_queue);
 
 struct iscsi_queue_req *iscsit_get_cmd_from_immediate_queue(struct iscsi_conn *conn)
 {
@@ -773,6 +775,7 @@ void iscsit_free_cmd(struct iscsi_cmd *cmd, bool shutdown)
 		break;
 	}
 }
+EXPORT_SYMBOL(iscsit_free_cmd);
 
 int iscsit_check_session_usage_count(struct iscsi_session *sess)
 {
@@ -958,6 +961,7 @@ void iscsit_mod_nopin_response_timer(struct iscsi_conn *conn)
 		(get_jiffies_64() + na->nopin_response_timeout * HZ));
 	spin_unlock_bh(&conn->nopin_timer_lock);
 }
+EXPORT_SYMBOL(iscsit_mod_nopin_response_timer);
 
 /*
  *	Called with conn->nopin_timer_lock held.
diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h
index 90e37fa..a17e07a 100644
--- a/include/target/iscsi/iscsi_transport.h
+++ b/include/target/iscsi/iscsi_transport.h
@@ -85,9 +85,11 @@ extern void iscsit_increment_maxcmdsn(struct iscsi_cmd *, struct iscsi_session *
  * From iscsi_target_erl0.c
  */
 extern void iscsit_cause_connection_reinstatement(struct iscsi_conn *, int);
+extern void iscsit_fall_back_to_erl0(struct iscsi_session *);
 /*
  * From iscsi_target_erl1.c
  */
+extern void iscsit_start_dataout_timer(struct iscsi_cmd *, struct iscsi_conn *);
 extern void iscsit_stop_dataout_timer(struct iscsi_cmd *);
 
 /*
@@ -102,3 +104,35 @@ extern struct iscsi_cmd *iscsit_allocate_cmd(struct iscsi_conn *, int);
 extern int iscsit_sequence_cmd(struct iscsi_conn *, struct iscsi_cmd *,
 			       unsigned char *, __be32);
 extern void iscsit_release_cmd(struct iscsi_cmd *);
+extern struct iscsi_r2t *iscsit_get_r2t_from_list(struct iscsi_cmd *);
+extern void iscsit_free_cmd(struct iscsi_cmd *, bool);
+extern void iscsit_mod_nopin_response_timer(struct iscsi_conn *);
+extern void iscsit_add_cmd_to_immediate_queue(struct iscsi_cmd *,
+					      struct iscsi_conn *, u8);
+
+/*
+ * From iscsi_target_datain_values.c
+ */
+extern void iscsit_free_datain_req(struct iscsi_cmd *,
+				   struct iscsi_datain_req *);
+extern struct iscsi_datain_req *iscsit_get_datain_req(struct iscsi_cmd *);
+extern struct iscsi_datain_req *iscsit_get_datain_values(struct iscsi_cmd *,
+							 struct iscsi_datain *);
+
+/*
+ * From iscsi_target_nego.c
+ */
+extern int iscsi_target_check_login_request(struct iscsi_conn *,
+					    struct iscsi_login *);
+
+/*
+ * From iscsi_target_login.c
+ */
+extern __printf(2, 3) int iscsi_change_param_sprintf(
+	struct iscsi_conn *, const char *, ...);
+
+/*
+ * From iscsi_target_parameters.c
+ */
+extern struct iscsi_param *iscsi_find_param_from_key(
+	char *, struct iscsi_param_list *);
-- 
2.0.2



* [RFC 15/34] iscsi-target: export symbols from iscsi_target.c
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (13 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 14/34] iscsi-target: export symbols Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-03-01 14:49   ` Christoph Hellwig
  2016-02-14 17:42 ` [RFC 16/34] iscsi-target: split iscsit_send_r2t() Varun Prakash
                   ` (19 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Export symbols from iscsi_target.c for the
ISCSI_TCP_CXGB4 transport driver.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c    | 17 ++++++++++++-----
 include/target/iscsi/iscsi_transport.h | 10 ++++++++++
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 576a7a4..32af13b 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -481,13 +481,14 @@ int iscsit_del_np(struct iscsi_np *np)
 static int iscsit_immediate_queue(struct iscsi_conn *, struct iscsi_cmd *, int);
 static int iscsit_response_queue(struct iscsi_conn *, struct iscsi_cmd *, int);
 
-static int iscsit_queue_rsp(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+int iscsit_queue_rsp(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 {
 	iscsit_add_cmd_to_response_queue(cmd, cmd->conn, cmd->i_state);
 	return 0;
 }
+EXPORT_SYMBOL(iscsit_queue_rsp);
 
-static void iscsit_aborted_task(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+void iscsit_aborted_task(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 {
 	bool scsi_cmd = (cmd->iscsi_opcode == ISCSI_OP_SCSI_CMD);
 
@@ -498,6 +499,7 @@ static void iscsit_aborted_task(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 
 	__iscsit_free_cmd(cmd, scsi_cmd, true);
 }
+EXPORT_SYMBOL(iscsit_aborted_task);
 
 static enum target_prot_op iscsit_get_sup_prot_ops(struct iscsi_conn *conn)
 {
@@ -634,7 +636,7 @@ static void __exit iscsi_target_cleanup_module(void)
 	kfree(iscsit_global);
 }
 
-static int iscsit_add_reject(
+int iscsit_add_reject(
 	struct iscsi_conn *conn,
 	u8 reason,
 	unsigned char *buf)
@@ -664,6 +666,7 @@ static int iscsit_add_reject(
 
 	return -1;
 }
+EXPORT_SYMBOL(iscsit_add_reject);
 
 static int iscsit_add_reject_from_cmd(
 	struct iscsi_cmd *cmd,
@@ -719,6 +722,7 @@ int iscsit_reject_cmd(struct iscsi_cmd *cmd, u8 reason, unsigned char *buf)
 {
 	return iscsit_add_reject_from_cmd(cmd, reason, false, buf);
 }
+EXPORT_SYMBOL(iscsit_reject_cmd);
 
 /*
  * Map some portion of the allocated scatterlist to an iovec, suitable for
@@ -2333,7 +2337,7 @@ iscsit_handle_logout_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 }
 EXPORT_SYMBOL(iscsit_handle_logout_cmd);
 
-static int iscsit_handle_snack(
+int iscsit_handle_snack(
 	struct iscsi_conn *conn,
 	unsigned char *buf)
 {
@@ -2386,6 +2390,7 @@ static int iscsit_handle_snack(
 
 	return 0;
 }
+EXPORT_SYMBOL(iscsit_handle_snack);
 
 static void iscsit_rx_thread_wait_for_tcp(struct iscsi_conn *conn)
 {
@@ -2581,7 +2586,7 @@ static void iscsit_tx_thread_wait_for_tcp(struct iscsi_conn *conn)
 	}
 }
 
-static void
+void
 iscsit_build_datain_pdu(struct iscsi_cmd *cmd, struct iscsi_conn *conn,
 			struct iscsi_datain *datain, struct iscsi_data_rsp *hdr,
 			bool set_statsn)
@@ -2625,6 +2630,7 @@ iscsit_build_datain_pdu(struct iscsi_cmd *cmd, struct iscsi_conn *conn,
 		cmd->init_task_tag, ntohl(hdr->statsn), ntohl(hdr->datasn),
 		ntohl(hdr->offset), datain->length, conn->cid);
 }
+EXPORT_SYMBOL(iscsit_build_datain_pdu);
 
 static int iscsit_send_datain(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 {
@@ -3164,6 +3170,7 @@ int iscsit_build_r2ts_for_cmd(
 
 	return 0;
 }
+EXPORT_SYMBOL(iscsit_build_r2ts_for_cmd);
 
 void iscsit_build_rsp_pdu(struct iscsi_cmd *cmd, struct iscsi_conn *conn,
 			bool inc_stat_sn, struct iscsi_scsi_rsp *hdr)
diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h
index a17e07a..23695a3 100644
--- a/include/target/iscsi/iscsi_transport.h
+++ b/include/target/iscsi/iscsi_transport.h
@@ -77,6 +77,16 @@ extern void iscsit_build_reject(struct iscsi_cmd *, struct iscsi_conn *,
 extern int iscsit_build_logout_rsp(struct iscsi_cmd *, struct iscsi_conn *,
 				struct iscsi_logout_rsp *);
 extern int iscsit_logout_post_handler(struct iscsi_cmd *, struct iscsi_conn *);
+extern int iscsit_queue_rsp(struct iscsi_conn *, struct iscsi_cmd *);
+extern void iscsit_aborted_task(struct iscsi_conn *, struct iscsi_cmd *);
+extern int iscsit_add_reject(struct iscsi_conn *, u8, unsigned char *);
+extern int iscsit_reject_cmd(struct iscsi_cmd *, u8, unsigned char *);
+extern int iscsit_handle_snack(struct iscsi_conn *, unsigned char *);
+extern void iscsit_build_datain_pdu(struct iscsi_cmd *, struct iscsi_conn *,
+				    struct iscsi_datain *,
+				    struct iscsi_data_rsp *, bool);
+extern int iscsit_build_r2ts_for_cmd(struct iscsi_conn *, struct iscsi_cmd *,
+				     bool);
 /*
  * From iscsi_target_device.c
  */
-- 
2.0.2



* [RFC 16/34] iscsi-target: split iscsit_send_r2t()
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (14 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 15/34] iscsi-target: export symbols from iscsi_target.c Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-02-14 17:42 ` [RFC 17/34] iscsi-target: split iscsit_send_conn_drop_async_message() Varun Prakash
                   ` (18 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Move the code that fills the iscsi_r2t_rsp header into a
new function, iscsit_build_r2t_pdu(), so that the
ISCSI_TCP_CXGB4 and other transport drivers can reuse it.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c    | 34 ++++++++++++++++++++++------------
 include/target/iscsi/iscsi_transport.h |  2 ++
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 32af13b..6137e26 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3019,6 +3019,26 @@ iscsit_send_nopin(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 	return 0;
 }
 
+void iscsit_build_r2t_pdu(struct iscsi_cmd *cmd,
+			  struct iscsi_conn *conn,
+			  struct iscsi_r2t *r2t,
+			  struct iscsi_r2t_rsp *hdr)
+{
+	hdr->opcode		= ISCSI_OP_R2T;
+	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
+	int_to_scsilun(cmd->se_cmd.orig_fe_lun,	(struct scsi_lun *)&hdr->lun);
+	hdr->itt		= cmd->init_task_tag;
+	hdr->ttt		= cpu_to_be32(r2t->targ_xfer_tag);
+	hdr->statsn		= cpu_to_be32(conn->stat_sn);
+	hdr->exp_cmdsn		= cpu_to_be32(conn->sess->exp_cmd_sn);
+	hdr->max_cmdsn		= cpu_to_be32(
+				  (u32)atomic_read(&conn->sess->max_cmd_sn));
+	hdr->r2tsn		= cpu_to_be32(r2t->r2t_sn);
+	hdr->data_offset	= cpu_to_be32(r2t->offset);
+	hdr->data_length	= cpu_to_be32(r2t->xfer_len);
+}
+EXPORT_SYMBOL(iscsit_build_r2t_pdu);
+
 static int iscsit_send_r2t(
 	struct iscsi_cmd *cmd,
 	struct iscsi_conn *conn)
@@ -3034,19 +3054,9 @@ static int iscsit_send_r2t(
 
 	hdr			= (struct iscsi_r2t_rsp *) cmd->pdu;
 	memset(hdr, 0, ISCSI_HDR_LEN);
-	hdr->opcode		= ISCSI_OP_R2T;
-	hdr->flags		|= ISCSI_FLAG_CMD_FINAL;
-	int_to_scsilun(cmd->se_cmd.orig_fe_lun,
-			(struct scsi_lun *)&hdr->lun);
-	hdr->itt		= cmd->init_task_tag;
 	r2t->targ_xfer_tag	= session_get_next_ttt(conn->sess);
-	hdr->ttt		= cpu_to_be32(r2t->targ_xfer_tag);
-	hdr->statsn		= cpu_to_be32(conn->stat_sn);
-	hdr->exp_cmdsn		= cpu_to_be32(conn->sess->exp_cmd_sn);
-	hdr->max_cmdsn		= cpu_to_be32((u32) atomic_read(&conn->sess->max_cmd_sn));
-	hdr->r2tsn		= cpu_to_be32(r2t->r2t_sn);
-	hdr->data_offset	= cpu_to_be32(r2t->offset);
-	hdr->data_length	= cpu_to_be32(r2t->xfer_len);
+
+	iscsit_build_r2t_pdu(cmd, conn, r2t, hdr);
 
 	cmd->iov_misc[0].iov_base	= cmd->pdu;
 	cmd->iov_misc[0].iov_len	= ISCSI_HDR_LEN;
diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h
index 23695a3..746404a 100644
--- a/include/target/iscsi/iscsi_transport.h
+++ b/include/target/iscsi/iscsi_transport.h
@@ -87,6 +87,8 @@ extern void iscsit_build_datain_pdu(struct iscsi_cmd *, struct iscsi_conn *,
 				    struct iscsi_data_rsp *, bool);
 extern int iscsit_build_r2ts_for_cmd(struct iscsi_conn *, struct iscsi_cmd *,
 				     bool);
+extern void iscsit_build_r2t_pdu(struct iscsi_cmd *, struct iscsi_conn *,
+				 struct iscsi_r2t *, struct iscsi_r2t_rsp *);
 /*
  * From iscsi_target_device.c
  */
-- 
2.0.2


* [RFC 17/34] iscsi-target: split iscsit_send_conn_drop_async_message()
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (15 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 16/34] iscsi-target: split iscsit_send_r2t() Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-02-14 17:42 ` [RFC 18/34] iscsi-target: call complete on conn_logout_comp Varun Prakash
                   ` (17 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Move the code that forms the iscsi_async hdr into
a new function, iscsit_build_conn_drop_async_pdu(),
so that ISCSI_TCP_CXGB4 and other transport
drivers can reuse it.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c    | 26 ++++++++++++++++++--------
 include/target/iscsi/iscsi_transport.h |  3 +++
 2 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 6137e26..8bf3cfb 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -2531,16 +2531,11 @@ static void iscsit_build_conn_drop_async_message(struct iscsi_conn *conn)
 	iscsit_dec_conn_usage_count(conn_p);
 }
 
-static int iscsit_send_conn_drop_async_message(
+void iscsit_build_conn_drop_async_pdu(
 	struct iscsi_cmd *cmd,
-	struct iscsi_conn *conn)
+	struct iscsi_conn *conn,
+	struct iscsi_async *hdr)
 {
-	struct iscsi_async *hdr;
-
-	cmd->tx_size = ISCSI_HDR_LEN;
-	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
-
-	hdr			= (struct iscsi_async *) cmd->pdu;
 	hdr->opcode		= ISCSI_OP_ASYNC_EVENT;
 	hdr->flags		= ISCSI_FLAG_CMD_FINAL;
 	cmd->init_task_tag	= RESERVED_ITT;
@@ -2554,6 +2549,21 @@ static int iscsit_send_conn_drop_async_message(
 	hdr->param1		= cpu_to_be16(cmd->logout_cid);
 	hdr->param2		= cpu_to_be16(conn->sess->sess_ops->DefaultTime2Wait);
 	hdr->param3		= cpu_to_be16(conn->sess->sess_ops->DefaultTime2Retain);
+}
+EXPORT_SYMBOL(iscsit_build_conn_drop_async_pdu);
+
+static int iscsit_send_conn_drop_async_message(
+	struct iscsi_cmd *cmd,
+	struct iscsi_conn *conn)
+{
+	struct iscsi_async *hdr;
+
+	cmd->tx_size = ISCSI_HDR_LEN;
+	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
+
+	hdr			= (struct iscsi_async *)cmd->pdu;
+
+	iscsit_build_conn_drop_async_pdu(cmd, conn, hdr);
 
 	if (conn->conn_ops->HeaderDigest) {
 		u32 *header_digest = (u32 *)&cmd->pdu[ISCSI_HDR_LEN];
diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h
index 746404a..a47124b 100644
--- a/include/target/iscsi/iscsi_transport.h
+++ b/include/target/iscsi/iscsi_transport.h
@@ -89,6 +89,9 @@ extern int iscsit_build_r2ts_for_cmd(struct iscsi_conn *, struct iscsi_cmd *,
 				     bool);
 extern void iscsit_build_r2t_pdu(struct iscsi_cmd *, struct iscsi_conn *,
 				 struct iscsi_r2t *, struct iscsi_r2t_rsp *);
+extern void iscsit_build_conn_drop_async_pdu(struct iscsi_cmd *,
+					     struct iscsi_conn *,
+					     struct iscsi_async *);
 /*
  * From iscsi_target_device.c
  */
-- 
2.0.2


* [RFC 18/34] iscsi-target: call complete on conn_logout_comp
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (16 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 17/34] iscsi-target: split iscsit_send_conn_drop_async_message() Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-02-15 17:07   ` Sagi Grimberg
  2016-02-14 17:42 ` [RFC 19/34] iscsi-target: clear tx_thread_active Varun Prakash
                   ` (16 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

The ISCSI_TCP_CXGB4 driver waits on conn_logout_comp
just like the ISCSI_TCP driver, so call complete()
when the transport type is ISCSI_TCP_CXGB4 as well.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 8bf3cfb..858f6e4 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -4265,16 +4265,18 @@ int iscsit_close_connection(
 	pr_debug("Closing iSCSI connection CID %hu on SID:"
 		" %u\n", conn->cid, sess->sid);
 	/*
-	 * Always up conn_logout_comp for the traditional TCP case just in case
-	 * the RX Thread in iscsi_target_rx_opcode() is sleeping and the logout
-	 * response never got sent because the connection failed.
+	 * Always up conn_logout_comp for the traditional TCP and TCP_CXGB4
+	 * case just in case the RX Thread in iscsi_target_rx_opcode() is
+	 * sleeping and the logout response never got sent because the
+	 * connection failed.
 	 *
 	 * However for iser-target, isert_wait4logout() is using conn_logout_comp
 	 * to signal logout response TX interrupt completion.  Go ahead and skip
 	 * this for iser since isert_rx_opcode() does not wait on logout failure,
 	 * and to avoid iscsi_conn pointer dereference in iser-target code.
 	 */
-	if (conn->conn_transport->transport_type == ISCSI_TCP)
+	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
+	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
 		complete(&conn->conn_logout_comp);
 
 	if (!strcmp(current->comm, ISCSI_RX_THREAD_NAME)) {
-- 
2.0.2


* [RFC 19/34] iscsi-target: clear tx_thread_active
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (17 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 18/34] iscsi-target: call complete on conn_logout_comp Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-02-15 17:07   ` Sagi Grimberg
  2016-03-01 14:59   ` Christoph Hellwig
  2016-02-14 17:42 ` [RFC 20/34] iscsi-target: update struct iscsit_transport definition Varun Prakash
                   ` (15 subsequent siblings)
  34 siblings, 2 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Clear tx_thread_active for the ISCSI_TCP_CXGB4
transport in the logout_post_handler functions.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 858f6e4..3dd7ba2 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -4579,7 +4579,8 @@ static void iscsit_logout_post_handler_closesession(
 	 * always sleep waiting for RX/TX thread shutdown to complete
 	 * within iscsit_close_connection().
 	 */
-	if (conn->conn_transport->transport_type == ISCSI_TCP)
+	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
+	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
 		sleep = cmpxchg(&conn->tx_thread_active, true, false);
 
 	atomic_set(&conn->conn_logout_remove, 0);
@@ -4596,7 +4597,8 @@ static void iscsit_logout_post_handler_samecid(
 {
 	int sleep = 1;
 
-	if (conn->conn_transport->transport_type == ISCSI_TCP)
+	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
+	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
 		sleep = cmpxchg(&conn->tx_thread_active, true, false);
 
 	atomic_set(&conn->conn_logout_remove, 0);
-- 
2.0.2



* [RFC 20/34] iscsi-target: update struct iscsit_transport definition
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (18 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 19/34] iscsi-target: clear tx_thread_active Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-02-15 17:09   ` Sagi Grimberg
  2016-02-14 17:42 ` [RFC 21/34] iscsi-target: release transport driver resources Varun Prakash
                   ` (14 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Add new function pointers to support the
ISCSI_TCP_CXGB4 transport:

1. void (*iscsit_rx_pdu)(struct iscsi_conn *);
   The Rx thread uses this for receiving and
   processing iSCSI PDUs in the full feature phase.

2. void (*iscsit_release_cmd)(struct iscsi_conn *,
   struct iscsi_cmd *);
   This function pointer is used for releasing
   transport resources associated with the cmd.

3. int (*iscsit_validate_params)(struct iscsi_conn *);
   This function is used for checking whether the
   default connection operational parameters are
   supported by the transport; if not, the transport
   driver sets a supported value.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 include/target/iscsi/iscsi_transport.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h
index a47124b..83228a7 100644
--- a/include/target/iscsi/iscsi_transport.h
+++ b/include/target/iscsi/iscsi_transport.h
@@ -22,6 +22,9 @@ struct iscsit_transport {
 	int (*iscsit_queue_data_in)(struct iscsi_conn *, struct iscsi_cmd *);
 	int (*iscsit_queue_status)(struct iscsi_conn *, struct iscsi_cmd *);
 	void (*iscsit_aborted_task)(struct iscsi_conn *, struct iscsi_cmd *);
+	void (*iscsit_rx_pdu)(struct iscsi_conn *);
+	void (*iscsit_release_cmd)(struct iscsi_conn *, struct iscsi_cmd *);
+	int (*iscsit_validate_params)(struct iscsi_conn *);
 	enum target_prot_op (*iscsit_get_sup_prot_ops)(struct iscsi_conn *);
 };
 
-- 
2.0.2



* [RFC 21/34] iscsi-target: release transport driver resources
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (19 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 20/34] iscsi-target: update struct iscsit_transport definition Varun Prakash
@ 2016-02-14 17:42 ` Varun Prakash
  2016-03-01 14:59   ` Christoph Hellwig
  2016-02-14 17:45 ` [RFC 22/34] iscsi-target: call Rx thread function Varun Prakash
                   ` (13 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:42 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

A transport driver may allocate resources for an
iSCSI cmd; to free those resources, the iSCSI
target must call the release function registered
by the transport driver.

ISCSI_TCP_CXGB4 frees the DDP resource associated
with a WRITE cmd in this callback.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target_util.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index 5cab517..71240e4 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -727,6 +727,9 @@ void __iscsit_free_cmd(struct iscsi_cmd *cmd, bool scsi_cmd,
 		iscsit_remove_cmd_from_immediate_queue(cmd, conn);
 		iscsit_remove_cmd_from_response_queue(cmd, conn);
 	}
+
+	if (conn && conn->conn_transport->iscsit_release_cmd)
+		conn->conn_transport->iscsit_release_cmd(conn, cmd);
 }
 
 void iscsit_free_cmd(struct iscsi_cmd *cmd, bool shutdown)
-- 
2.0.2



* [RFC 22/34] iscsi-target: call Rx thread function
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (20 preceding siblings ...)
  2016-02-14 17:42 ` [RFC 21/34] iscsi-target: release transport driver resources Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-15 17:16   ` Sagi Grimberg
  2016-03-01 15:01   ` Christoph Hellwig
  2016-02-14 17:45 ` [RFC 23/34] iscsi-target: split iscsi_target_rx_thread() Varun Prakash
                   ` (12 subsequent siblings)
  34 siblings, 2 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Call the Rx thread function if one is registered
by the transport driver, so that transport
drivers can use the iscsi-target Rx thread
for Rx processing.

Update the iSER target driver to use this
interface.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/infiniband/ulp/isert/ib_isert.c | 10 ++++++++++
 drivers/target/iscsi/iscsi_target.c     | 10 ++--------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index f121e61..365aa8c 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -3396,6 +3396,15 @@ static void isert_free_conn(struct iscsi_conn *conn)
 	isert_put_conn(isert_conn);
 }
 
+static void isert_rx_pdu(struct iscsi_conn *conn)
+{
+	struct completion comp;
+
+	init_completion(&comp);
+
+	wait_for_completion_interruptible(&comp);
+}
+
 static struct iscsit_transport iser_target_transport = {
 	.name			= "IB/iSER",
 	.transport_type		= ISCSI_INFINIBAND,
@@ -3414,6 +3423,7 @@ static struct iscsit_transport iser_target_transport = {
 	.iscsit_queue_data_in	= isert_put_datain,
 	.iscsit_queue_status	= isert_put_response,
 	.iscsit_aborted_task	= isert_aborted_task,
+	.iscsit_rx_pdu		= isert_rx_pdu,
 	.iscsit_get_sup_prot_ops = isert_get_sup_prot_ops,
 };
 
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 3dd7ba2..e2ec56f 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -4132,14 +4132,8 @@ int iscsi_target_rx_thread(void *arg)
 	if (rc < 0 || iscsi_target_check_conn_state(conn))
 		return 0;
 
-	if (conn->conn_transport->transport_type == ISCSI_INFINIBAND) {
-		struct completion comp;
-
-		init_completion(&comp);
-		rc = wait_for_completion_interruptible(&comp);
-		if (rc < 0)
-			goto transport_err;
-
+	if (conn->conn_transport->iscsit_rx_pdu) {
+		conn->conn_transport->iscsit_rx_pdu(conn);
 		goto transport_err;
 	}
 
-- 
2.0.2



* [RFC 23/34] iscsi-target: split iscsi_target_rx_thread()
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (21 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 22/34] iscsi-target: call Rx thread function Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-03-01 15:02   ` Christoph Hellwig
  2016-02-14 17:45 ` [RFC 24/34] iscsi-target: validate conn operational parameters Varun Prakash
                   ` (11 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Split iscsi_target_rx_thread() into two parts:

1. iscsi_target_rx_thread() is common to all
   transport drivers; it calls the Rx function
   registered by the transport driver.

2. iscsit_rx_pdu() is the Rx function for the
   ISCSI_TCP transport.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c | 59 +++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 26 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index e2ec56f..485e33a 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -480,6 +480,7 @@ int iscsit_del_np(struct iscsi_np *np)
 
 static int iscsit_immediate_queue(struct iscsi_conn *, struct iscsi_cmd *, int);
 static int iscsit_response_queue(struct iscsi_conn *, struct iscsi_cmd *, int);
+static void iscsit_rx_pdu(struct iscsi_conn *);
 
 int iscsit_queue_rsp(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 {
@@ -521,6 +522,7 @@ static struct iscsit_transport iscsi_target_transport = {
 	.iscsit_queue_data_in	= iscsit_queue_rsp,
 	.iscsit_queue_status	= iscsit_queue_rsp,
 	.iscsit_aborted_task	= iscsit_aborted_task,
+	.iscsit_rx_pdu		= iscsit_rx_pdu,
 	.iscsit_get_sup_prot_ops = iscsit_get_sup_prot_ops,
 };
 
@@ -4112,30 +4114,12 @@ static bool iscsi_target_check_conn_state(struct iscsi_conn *conn)
 	return ret;
 }
 
-int iscsi_target_rx_thread(void *arg)
+static void iscsit_rx_pdu(struct iscsi_conn *conn)
 {
-	int ret, rc;
+	int ret;
 	u8 buffer[ISCSI_HDR_LEN], opcode;
 	u32 checksum = 0, digest = 0;
-	struct iscsi_conn *conn = arg;
 	struct kvec iov;
-	/*
-	 * Allow ourselves to be interrupted by SIGINT so that a
-	 * connection recovery / failure event can be triggered externally.
-	 */
-	allow_signal(SIGINT);
-	/*
-	 * Wait for iscsi_post_login_handler() to complete before allowing
-	 * incoming iscsi/tcp socket I/O, and/or failing the connection.
-	 */
-	rc = wait_for_completion_interruptible(&conn->rx_login_comp);
-	if (rc < 0 || iscsi_target_check_conn_state(conn))
-		return 0;
-
-	if (conn->conn_transport->iscsit_rx_pdu) {
-		conn->conn_transport->iscsit_rx_pdu(conn);
-		goto transport_err;
-	}
 
 	while (!kthread_should_stop()) {
 		/*
@@ -4153,7 +4137,7 @@ int iscsi_target_rx_thread(void *arg)
 		ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN);
 		if (ret != ISCSI_HDR_LEN) {
 			iscsit_rx_thread_wait_for_tcp(conn);
-			goto transport_err;
+			return;
 		}
 
 		if (conn->conn_ops->HeaderDigest) {
@@ -4163,7 +4147,7 @@ int iscsi_target_rx_thread(void *arg)
 			ret = rx_data(conn, &iov, 1, ISCSI_CRC_LEN);
 			if (ret != ISCSI_CRC_LEN) {
 				iscsit_rx_thread_wait_for_tcp(conn);
-				goto transport_err;
+				return;
 			}
 
 			iscsit_do_crypto_hash_buf(&conn->conn_rx_hash,
@@ -4187,7 +4171,7 @@ int iscsi_target_rx_thread(void *arg)
 		}
 
 		if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
-			goto transport_err;
+			return;
 
 		opcode = buffer[0] & ISCSI_OPCODE_MASK;
 
@@ -4198,15 +4182,38 @@ int iscsi_target_rx_thread(void *arg)
 			" while in Discovery Session, rejecting.\n", opcode);
 			iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
 					  buffer);
-			goto transport_err;
+			return;
 		}
 
 		ret = iscsi_target_rx_opcode(conn, buffer);
 		if (ret < 0)
-			goto transport_err;
+			return;
 	}
+}
+
+int iscsi_target_rx_thread(void *arg)
+{
+	int rc;
+	struct iscsi_conn *conn = arg;
+
+	/*
+	 * Allow ourselves to be interrupted by SIGINT so that a
+	 * connection recovery / failure event can be triggered externally.
+	 */
+	allow_signal(SIGINT);
+	/*
+	 * Wait for iscsi_post_login_handler() to complete before allowing
+	 * incoming iscsi/tcp socket I/O, and/or failing the connection.
+	 */
+	rc = wait_for_completion_interruptible(&conn->rx_login_comp);
+	if (rc < 0 || iscsi_target_check_conn_state(conn))
+		return 0;
+
+	if (!conn->conn_transport->iscsit_rx_pdu)
+		return 0;
+
+	conn->conn_transport->iscsit_rx_pdu(conn);
 
-transport_err:
 	if (!signal_pending(current))
 		atomic_set(&conn->transport_failed, 1);
 	iscsit_take_action_for_connection_exit(conn);
-- 
2.0.2



* [RFC 24/34] iscsi-target: validate conn operational parameters
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (22 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 23/34] iscsi-target: split iscsi_target_rx_thread() Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-03-01 15:03   ` Christoph Hellwig
  2016-02-14 17:45 ` [RFC 25/34] iscsi-target: move iscsit_thread_check_cpumask() Varun Prakash
                   ` (10 subsequent siblings)
  34 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Call the validate-params function, if one is
registered by the transport driver, before
starting negotiation, so that the transport
driver can validate and update the parameter
values.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target_login.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
index 616ec9e..83c3643 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -1375,6 +1375,16 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
 			goto old_sess_out;
 	}
 
+	if (conn->conn_transport->iscsit_validate_params) {
+		ret = conn->conn_transport->iscsit_validate_params(conn);
+		if (ret < 0) {
+			if (zero_tsih)
+				goto new_sess_out;
+			else
+				goto old_sess_out;
+		}
+	}
+
 	ret = iscsi_target_start_negotiation(login, conn);
 	if (ret < 0)
 		goto new_sess_out;
-- 
2.0.2


* [RFC 25/34] iscsi-target: move iscsit_thread_check_cpumask()
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (23 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 24/34] iscsi-target: validate conn operational parameters Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 26/34] iscsi-target: fix seq_end_offset calculation Varun Prakash
                   ` (9 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Move iscsit_thread_check_cpumask() to a header
file so that ISCSI_TCP_CXGB4 and other transport
drivers can call it to ensure that the Tx and Rx
threads run on the same CPU.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target.c      | 26 --------------------------
 include/target/iscsi/iscsi_target_core.h | 26 ++++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 485e33a..a3b9397 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3751,32 +3751,6 @@ void iscsit_thread_get_cpumask(struct iscsi_conn *conn)
 	cpumask_setall(conn->conn_cpumask);
 }
 
-static inline void iscsit_thread_check_cpumask(
-	struct iscsi_conn *conn,
-	struct task_struct *p,
-	int mode)
-{
-	/*
-	 * mode == 1 signals iscsi_target_tx_thread() usage.
-	 * mode == 0 signals iscsi_target_rx_thread() usage.
-	 */
-	if (mode == 1) {
-		if (!conn->conn_tx_reset_cpumask)
-			return;
-		conn->conn_tx_reset_cpumask = 0;
-	} else {
-		if (!conn->conn_rx_reset_cpumask)
-			return;
-		conn->conn_rx_reset_cpumask = 0;
-	}
-	/*
-	 * Update the CPU mask for this single kthread so that
-	 * both TX and RX kthreads are scheduled to run on the
-	 * same CPU.
-	 */
-	set_cpus_allowed_ptr(p, conn->conn_cpumask);
-}
-
 static int
 iscsit_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state)
 {
diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
index 0edf18b..7ca3cce 100644
--- a/include/target/iscsi/iscsi_target_core.h
+++ b/include/target/iscsi/iscsi_target_core.h
@@ -891,4 +891,30 @@ static inline u32 session_get_next_ttt(struct iscsi_session *session)
 }
 
 extern struct iscsi_cmd *iscsit_find_cmd_from_itt(struct iscsi_conn *, itt_t);
+
+static inline void iscsit_thread_check_cpumask(
+	struct iscsi_conn *conn,
+	struct task_struct *p,
+	int mode)
+{
+	/*
+	 * mode == 1 signals iscsi_target_tx_thread() usage.
+	 * mode == 0 signals iscsi_target_rx_thread() usage.
+	 */
+	if (mode == 1) {
+		if (!conn->conn_tx_reset_cpumask)
+			return;
+		conn->conn_tx_reset_cpumask = 0;
+	} else {
+		if (!conn->conn_rx_reset_cpumask)
+			return;
+		conn->conn_rx_reset_cpumask = 0;
+	}
+	/*
+	 * Update the CPU mask for this single kthread so that
+	 * both TX and RX kthreads are scheduled to run on the
+	 * same CPU.
+	 */
+	set_cpus_allowed_ptr(p, conn->conn_cpumask);
+}
 #endif /* ISCSI_TARGET_CORE_H */
-- 
2.0.2



* [RFC 26/34] iscsi-target: fix seq_end_offset calculation
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (24 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 25/34] iscsi-target: move iscsit_thread_check_cpumask() Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 27/34] cxgbit: add cxgbit.h Varun Prakash
                   ` (8 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

In the case of unsolicited data, the sequence end
offset for the first sequence must equal the
minimum of FirstBurstLength and the data length.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/iscsi_target_erl0.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
index 4a66317..9345cc2 100644
--- a/drivers/target/iscsi/iscsi_target_erl0.c
+++ b/drivers/target/iscsi/iscsi_target_erl0.c
@@ -44,10 +44,10 @@ void iscsit_set_dataout_sequence_values(
 	 */
 	if (cmd->unsolicited_data) {
 		cmd->seq_start_offset = cmd->write_data_done;
-		cmd->seq_end_offset = (cmd->write_data_done +
-			((cmd->se_cmd.data_length >
+		cmd->seq_end_offset = (cmd->se_cmd.data_length >
 			  conn->sess->sess_ops->FirstBurstLength) ?
-			 conn->sess->sess_ops->FirstBurstLength : cmd->se_cmd.data_length));
+			 conn->sess->sess_ops->FirstBurstLength :
+			 cmd->se_cmd.data_length;
 		return;
 	}
 
-- 
2.0.2



* [RFC 27/34] cxgbit: add cxgbit.h
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (25 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 26/34] iscsi-target: fix seq_end_offset calculation Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 28/34] cxgbit: add cxgbit_lro.h Varun Prakash
                   ` (7 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

This file contains data structure
definitions for cxgbit.ko.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/cxgbit.h | 363 +++++++++++++++++++++++++++++++++++
 1 file changed, 363 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit.h

diff --git a/drivers/target/iscsi/cxgbit/cxgbit.h b/drivers/target/iscsi/cxgbit/cxgbit.h
new file mode 100644
index 0000000..3ceb5ad
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/cxgbit.h
@@ -0,0 +1,363 @@
+/*
+ * Copyright (c) 2016 Chelsio Communications, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CXGBIT_H__
+#define __CXGBIT_H__
+
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/idr.h>
+#include <linux/completion.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/inet.h>
+#include <linux/wait.h>
+#include <linux/kref.h>
+#include <linux/timer.h>
+#include <linux/io.h>
+
+#include <asm/byteorder.h>
+
+#include <net/net_namespace.h>
+
+#include <target/iscsi/iscsi_transport.h>
+#include <iscsi_target_parameters.h>
+#include <iscsi_target_login.h>
+
+#include "t4_regs.h"
+#include "t4_msg.h"
+#include "cxgb4.h"
+#include "cxgb4_uld.h"
+#include "l2t.h"
+#include "cxgb4_ppm.h"
+#include "cxgbit_lro.h"
+
+extern struct mutex cdev_list_lock;
+extern struct list_head cdev_list_head;
+struct cxgbit_np;
+
+struct cxgbit_sock;
+
+struct cxgbit_cmd {
+	struct scatterlist sg;
+	struct cxgbi_task_tag_info ttinfo;
+	bool setup_ddp;
+	bool release;
+};
+
+#define CXGBIT_MAX_ISO_PAYLOAD	\
+	min_t(u32, MAX_SKB_FRAGS * PAGE_SIZE, 65535)
+
+struct cxgbit_iso_info {
+	u8 flags;
+	u32 mpdu;
+	u32 len;
+	u32 burst_len;
+};
+
+enum cxgbit_skcb_flags {
+	SKCBF_TX_NEED_HDR	= (1 << 0), /* packet needs a header */
+	SKCBF_TX_FLAG_COMPL	= (1 << 1), /* wr completion flag */
+	SKCBF_TX_ISO		= (1 << 2), /* iso cpl in tx skb */
+	SKCBF_RX_LRO		= (1 << 3), /* lro skb */
+};
+
+struct cxgbit_skb_rx_cb {
+	u8 opcode;
+	void *pdu_cb;
+	void (*backlog_fn)(struct cxgbit_sock *, struct sk_buff *);
+};
+
+struct cxgbit_skb_tx_cb {
+	u8 submode;
+	u32 extra_len;
+};
+
+union cxgbit_skb_cb {
+	struct {
+		u8 flags;
+		union {
+			struct cxgbit_skb_tx_cb tx;
+			struct cxgbit_skb_rx_cb rx;
+		};
+	};
+
+	struct {
+		/* This member must be first. */
+		struct l2t_skb_cb l2t;
+		struct sk_buff *wr_next;
+	};
+};
+
+#define CXGBIT_SKB_CB(skb)	((union cxgbit_skb_cb *)&((skb)->cb[0]))
+#define cxgbit_skcb_flags(skb)		(CXGBIT_SKB_CB(skb)->flags)
+#define cxgbit_skcb_submode(skb)	(CXGBIT_SKB_CB(skb)->tx.submode)
+#define cxgbit_skcb_tx_wr_next(skb)	(CXGBIT_SKB_CB(skb)->wr_next)
+#define cxgbit_skcb_tx_extralen(skb)	(CXGBIT_SKB_CB(skb)->tx.extra_len)
+#define cxgbit_skcb_rx_opcode(skb)	(CXGBIT_SKB_CB(skb)->rx.opcode)
+#define cxgbit_skcb_rx_backlog_fn(skb)	(CXGBIT_SKB_CB(skb)->rx.backlog_fn)
+#define cxgbit_rx_pdu_cb(skb)		(CXGBIT_SKB_CB(skb)->rx.pdu_cb)
+
+static inline void *cplhdr(struct sk_buff *skb)
+{
+	return skb->data;
+}
+
+enum cxgbit_cdev_flags {
+	CDEV_STATE_UP = 0,
+	CDEV_ISO_ENABLE,
+	CDEV_DDP_ENABLE,
+};
+
+#define NP_INFO_HASH_SIZE 32
+
+struct np_info {
+	struct np_info *next;
+	struct cxgbit_np *cnp;
+	unsigned int stid;
+};
+
+struct cxgbit_list_head {
+	struct list_head list;
+	/* device lock */
+	spinlock_t lock;
+};
+
+struct cxgbit_device {
+	struct list_head list;
+	struct cxgb4_lld_info lldi;
+	struct np_info *np_hash_tab[NP_INFO_HASH_SIZE];
+	/* np lock */
+	spinlock_t np_lock;
+	u8 selectq[MAX_NPORTS][2];
+	struct cxgbit_list_head cskq;
+	u32 mdsl;
+	struct kref kref;
+	unsigned long flags;
+};
+
+struct cxgbit_wr_wait {
+	struct completion completion;
+	int ret;
+};
+
+enum cxgbit_csk_state {
+	CSK_STATE_IDLE = 0,
+	CSK_STATE_LISTEN,
+	CSK_STATE_CONNECTING,
+	CSK_STATE_ESTABLISHED,
+	CSK_STATE_ABORTING,
+	CSK_STATE_CLOSING,
+	CSK_STATE_MORIBUND,
+	CSK_STATE_DEAD,
+};
+
+enum cxgbit_csk_flags {
+	CSK_TX_DATA_SENT = 0,
+	CSK_TX_FIN,
+	CSK_LOGIN_PDU_DONE,
+	CSK_LOGIN_DONE,
+	CSK_DDP_ENABLE,
+};
+
+struct cxgbit_sock_common {
+	struct cxgbit_device *cdev;
+	struct sockaddr_storage local_addr;
+	struct sockaddr_storage remote_addr;
+	struct cxgbit_wr_wait wr_wait;
+	enum cxgbit_csk_state state;
+	unsigned long flags;
+};
+
+struct cxgbit_np {
+	struct cxgbit_sock_common com;
+	wait_queue_head_t accept_wait;
+	struct iscsi_np *np;
+	struct completion accept_comp;
+	struct list_head np_accept_list;
+	/* np accept lock */
+	spinlock_t np_accept_lock;
+	struct kref kref;
+	unsigned int stid;
+};
+
+struct cxgbit_sock {
+	struct cxgbit_sock_common com;
+	struct cxgbit_np *cnp;
+	struct iscsi_conn *conn;
+	struct l2t_entry *l2t;
+	struct dst_entry *dst;
+	struct list_head list;
+	struct sk_buff_head rxq;
+	struct sk_buff_head txq;
+	struct sk_buff_head ppodq;
+	struct sk_buff_head backlogq;
+	struct sk_buff_head skbq;
+	struct sk_buff *wr_pending_head;
+	struct sk_buff *wr_pending_tail;
+	struct sk_buff *skb;
+	struct sk_buff *lro_skb;
+	struct sk_buff *lro_skb_hold;
+	struct list_head accept_node;
+	/* socket lock */
+	spinlock_t lock;
+	wait_queue_head_t waitq;
+	wait_queue_head_t ack_waitq;
+	bool lock_owner;
+	struct kref kref;
+	u32 max_iso_npdu;
+	u32 wr_cred;
+	u32 wr_una_cred;
+	u32 wr_max_cred;
+	u32 snd_una;
+	u32 tid;
+	u32 snd_nxt;
+	u32 rcv_nxt;
+	u32 smac_idx;
+	u32 tx_chan;
+	u32 mtu;
+	u32 write_seq;
+	u32 rx_credits;
+	u32 snd_win;
+	u32 rcv_win;
+	u16 mss;
+	u16 emss;
+	u16 plen;
+	u16 rss_qid;
+	u16 txq_idx;
+	u16 ctrlq_idx;
+	u8 tos;
+	u8 port_id;
+#define CXGBIT_SUBMODE_HCRC 0x1
+#define CXGBIT_SUBMODE_DCRC 0x2
+	u8 submode;
+#ifdef CONFIG_CHELSIO_T4_DCB
+	u8 dcb_priority;
+#endif
+	u8 snd_wscale;
+};
+
+void _cxgbit_free_cdev(struct kref *kref);
+void _cxgbit_free_csk(struct kref *kref);
+void _cxgbit_free_cnp(struct kref *kref);
+
+static inline void cxgbit_get_cdev(struct cxgbit_device *cdev)
+{
+	kref_get(&cdev->kref);
+}
+
+static inline void cxgbit_put_cdev(struct cxgbit_device *cdev)
+{
+	kref_put(&cdev->kref, _cxgbit_free_cdev);
+}
+
+static inline void cxgbit_get_csk(struct cxgbit_sock *csk)
+{
+	kref_get(&csk->kref);
+}
+
+static inline void cxgbit_put_csk(struct cxgbit_sock *csk)
+{
+	kref_put(&csk->kref, _cxgbit_free_csk);
+}
+
+static inline void cxgbit_get_cnp(struct cxgbit_np *cnp)
+{
+	kref_get(&cnp->kref);
+}
+
+static inline void cxgbit_put_cnp(struct cxgbit_np *cnp)
+{
+	kref_put(&cnp->kref, _cxgbit_free_cnp);
+}
+
+static inline void cxgbit_sock_reset_wr_list(struct cxgbit_sock *csk)
+{
+	csk->wr_pending_tail = NULL;
+	csk->wr_pending_head = NULL;
+}
+
+static inline struct sk_buff *
+cxgbit_sock_peek_wr(const struct cxgbit_sock *csk)
+{
+	return csk->wr_pending_head;
+}
+
+static inline void cxgbit_sock_enqueue_wr(struct cxgbit_sock *csk,
+					  struct sk_buff *skb)
+{
+	cxgbit_skcb_tx_wr_next(skb) = NULL;
+
+	/*
+	 * We want to take an extra reference since both us and the driver
+	 * need to free the packet before it's really freed.
+	 */
+	skb_get(skb);
+
+	if (!csk->wr_pending_head)
+		csk->wr_pending_head = skb;
+	else
+		cxgbit_skcb_tx_wr_next(csk->wr_pending_tail) = skb;
+	csk->wr_pending_tail = skb;
+}
+
+static inline struct sk_buff *
+cxgbit_sock_dequeue_wr(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb = csk->wr_pending_head;
+
+	if (likely(skb)) {
+		csk->wr_pending_head = cxgbit_skcb_tx_wr_next(skb);
+		cxgbit_skcb_tx_wr_next(skb) = NULL;
+	}
+	return skb;
+}
+
+typedef void (*cxgbit_cplhandler_func)(struct cxgbit_device *cdev,
+	      struct sk_buff *skb);
+
+int cxgbit_setup_np(struct iscsi_np *np,
+		    struct __kernel_sockaddr_storage *ksockaddr);
+int cxgbit_setup_conn_digest(struct cxgbit_sock *);
+int cxgbit_accept_np(struct iscsi_np *np, struct iscsi_conn *conn);
+void cxgbit_free_np(struct iscsi_np *np);
+void cxgbit_free_conn(struct iscsi_conn *conn);
+extern cxgbit_cplhandler_func cxgbit_cplhandlers[NUM_CPL_CMDS];
+int cxgbit_get_login_rx(struct iscsi_conn *conn,
+			struct iscsi_login *login);
+int cxgbit_rx_data_ack(struct cxgbit_sock *csk);
+int cxgbit_l2t_send(struct cxgbit_device *cdev, struct sk_buff *skb,
+		    struct l2t_entry *l2e);
+void push_tx_frames(struct cxgbit_sock *csk);
+int cxgbit_put_login_tx(struct iscsi_conn *conn, struct iscsi_login *login,
+			u32 length);
+int cxgbit_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
+			   int state);
+int
+cxgbit_response_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
+		      int state);
+u32 send_tx_flowc_wr(struct cxgbit_sock *csk);
+int cxgbit_ofld_send(struct cxgbit_device *cdev, struct sk_buff *skb);
+void cxgbit_rx_pdu(struct iscsi_conn *);
+int cxgbit_validate_params(struct iscsi_conn *);
+
+/* DDP */
+int cxgbit_ddp_init(struct cxgbit_device *);
+int cxgbit_setup_conn_pgidx(struct cxgbit_sock *, u32);
+int cxgbit_reserve_ttt(struct cxgbit_sock *, struct iscsi_cmd *);
+void cxgbit_release_cmd(struct iscsi_conn *, struct iscsi_cmd *);
+
+static inline
+struct cxgbi_ppm *cdev2ppm(struct cxgbit_device *cdev)
+{
+	return (struct cxgbi_ppm *)(*cdev->lldi.iscsi_ppm);
+}
+#endif /* __CXGBIT_H__ */
-- 
2.0.2
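The wr_pending helpers in cxgbit.h above keep unacknowledged work requests in a singly-linked FIFO threaded through each skb's control block, with head/tail pointers in the socket. A minimal userspace analogue (hypothetical `struct pkt`/`struct sock` names; the skb reference counting and locking are omitted):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the wr_pending list: a singly-linked FIFO threaded
 * through each packet, with head/tail pointers kept in the socket.
 */
struct pkt {
	struct pkt *wr_next;		/* mirrors cxgbit_skcb_tx_wr_next() */
};

struct sock {
	struct pkt *wr_pending_head;
	struct pkt *wr_pending_tail;
};

static void enqueue_wr(struct sock *csk, struct pkt *p)
{
	p->wr_next = NULL;
	if (!csk->wr_pending_head)	/* empty list: new node is the head */
		csk->wr_pending_head = p;
	else				/* otherwise append after the tail */
		csk->wr_pending_tail->wr_next = p;
	csk->wr_pending_tail = p;
}

static struct pkt *dequeue_wr(struct sock *csk)
{
	struct pkt *p = csk->wr_pending_head;

	if (p) {
		csk->wr_pending_head = p->wr_next;
		p->wr_next = NULL;
	}
	return p;
}
```

Packets dequeue in the order they were enqueued, which is what the driver relies on when matching credits returned by the firmware against outstanding work requests.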


^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [RFC 28/34] cxgbit: add cxgbit_lro.h
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (26 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 27/34] cxgbit: add cxgbit.h Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 29/34] cxgbit: add cxgbit_main.c Varun Prakash
                   ` (6 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

This file contains the data structure definitions
for LRO (large receive offload) support.
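The per-skb layout these definitions describe is a `cxgbit_lro_cb` at the start of the skb data area, followed by an array of `cxgbit_lro_pdu_cb` slots, one per aggregated PDU. A userspace sketch of that layout (simplified struct fields; `MAX_FRAGS` stands in for the kernel's MAX_SKB_FRAGS):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the headroom layout behind cxgbit_skb_lro_cb() and
 * cxgbit_skb_lro_pdu_cb(): one lro_cb followed by per-PDU control blocks.
 */
#define MAX_FRAGS 17	/* stand-in for MAX_SKB_FRAGS */

struct lro_cb {
	void *csk;
	uint32_t pdu_totallen;
	uint32_t offset;
	uint8_t pdu_cnt;
	uint8_t release;
};

struct lro_pdu_cb {
	uint8_t flags;
	uint32_t seq, pdulen, hlen, dlen;
	void *hdr;
};

#define LRO_SKB_MAX_HEADROOM \
	(sizeof(struct lro_cb) + MAX_FRAGS * sizeof(struct lro_pdu_cb))

/* i-th per-PDU control block, laid out right after the lro_cb */
static struct lro_pdu_cb *pdu_cb(uint8_t *data, int i)
{
	return (struct lro_pdu_cb *)(data + sizeof(struct lro_cb) +
				     i * sizeof(struct lro_pdu_cb));
}
```

Indexing is plain pointer arithmetic over the skb headroom, which is why the driver reserves LRO_SKB_MAX_HEADROOM up front and zeroes it when a new LRO skb is started.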

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/cxgbit_lro.h | 70 ++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_lro.h

diff --git a/drivers/target/iscsi/cxgbit/cxgbit_lro.h b/drivers/target/iscsi/cxgbit/cxgbit_lro.h
new file mode 100644
index 0000000..74c840d
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/cxgbit_lro.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright (c) 2016 Chelsio Communications, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ *
+ */
+
+#ifndef	__CXGBIT_LRO_H__
+#define	__CXGBIT_LRO_H__
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/skbuff.h>
+
+#define LRO_FLUSH_TOTALLEN_MAX	65535
+
+struct cxgbit_lro_cb {
+	struct cxgbit_sock *csk;
+	u32 pdu_totallen;
+	u32 offset;
+	u8 pdu_cnt;
+	bool release;
+};
+
+enum cxgbit_pducb_flags {
+	PDUCBF_RX_HDR		= (1 << 0), /* received pdu header */
+	PDUCBF_RX_DATA		= (1 << 1), /* received pdu payload */
+	PDUCBF_RX_STATUS	= (1 << 2), /* received ddp status */
+	PDUCBF_RX_DATA_DDPD	= (1 << 3), /* pdu payload ddp'd */
+	PDUCBF_RX_HCRC_ERR	= (1 << 4), /* header digest error */
+	PDUCBF_RX_DCRC_ERR	= (1 << 5), /* data digest error */
+};
+
+struct cxgbit_lro_pdu_cb {
+	u8 flags;
+	u8 frags;
+	u8 nr_dfrags;
+	u8 dfrag_index;
+	u32 seq;
+	u32 pdulen;
+	u32 hlen;
+	u32 dlen;
+	u32 doffset;
+	u32 ddigest;
+	void *hdr;
+};
+
+#define LRO_SKB_MAX_HEADROOM  \
+		(sizeof(struct cxgbit_lro_cb) + \
+		 MAX_SKB_FRAGS * sizeof(struct cxgbit_lro_pdu_cb))
+
+#define LRO_SKB_MIN_HEADROOM  \
+		(sizeof(struct cxgbit_lro_cb) + \
+		 sizeof(struct cxgbit_lro_pdu_cb))
+
+#define cxgbit_skb_lro_cb(skb)	((struct cxgbit_lro_cb *)(skb)->data)
+#define cxgbit_skb_lro_pdu_cb(skb, i)	\
+	((struct cxgbit_lro_pdu_cb *)((skb)->data + sizeof(struct cxgbit_lro_cb) \
+					+ (i) * sizeof(struct cxgbit_lro_pdu_cb)))
+
+#define CPL_RX_ISCSI_DDP_STATUS_DDP_SHIFT	16 /* ddp'able */
+#define CPL_RX_ISCSI_DDP_STATUS_PAD_SHIFT	19 /* pad error */
+#define CPL_RX_ISCSI_DDP_STATUS_HCRC_SHIFT	20 /* hcrc error */
+#define CPL_RX_ISCSI_DDP_STATUS_DCRC_SHIFT	21 /* dcrc error */
+
+#endif	/* __CXGBIT_LRO_H__ */
-- 
2.0.2


* [RFC 29/34] cxgbit: add cxgbit_main.c
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (27 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 28/34] cxgbit: add cxgbit_lro.h Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 30/34] cxgbit: add cxgbit_cm.c Varun Prakash
                   ` (5 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

This file contains the code for registering
with the iSCSI target transport and the cxgb4 driver.
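One of the setup steps in this file, cxgbit_set_mdsl(), clamps the maximum data segment length to the smallest of the adapter's usable iSCSI I/O length, the ULP2 packet limit, an 8KB ceiling, and the skb fragment budget. A userspace sketch of that computation (`compute_mdsl`, `PAGE_SZ` and `MAX_FRAGS` are hypothetical stand-ins for the kernel values):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the MDSL clamp in cxgbit_set_mdsl(). */
#define ULP2_MAX_PKT_LEN		16224
#define ISCSI_PDU_NONPAYLOAD_LEN	312
#define PAGE_SZ				4096	/* stand-in for PAGE_SIZE */
#define MAX_FRAGS			17	/* stand-in for MAX_SKB_FRAGS */

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

static uint32_t compute_mdsl(uint32_t iscsi_iolen)
{
	uint32_t mdsl;

	/* payload that fits after the non-payload PDU overhead */
	mdsl = min_u32(iscsi_iolen - ISCSI_PDU_NONPAYLOAD_LEN,
		       ULP2_MAX_PKT_LEN - ISCSI_PDU_NONPAYLOAD_LEN);
	mdsl = min_u32(mdsl, 8192);			/* 8KB ceiling */
	mdsl = min_u32(mdsl, (MAX_FRAGS - 1) * PAGE_SZ);/* frag budget */
	return mdsl;
}
```

With a 16KB iSCSI I/O length the 8KB ceiling wins; with a 4KB I/O length the adapter limit wins.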

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/cxgbit_main.c | 719 ++++++++++++++++++++++++++++++
 1 file changed, 719 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_main.c

diff --git a/drivers/target/iscsi/cxgbit/cxgbit_main.c b/drivers/target/iscsi/cxgbit/cxgbit_main.c
new file mode 100644
index 0000000..ab79289
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/cxgbit_main.c
@@ -0,0 +1,719 @@
+/*
+ * Copyright (c) 2016 Chelsio Communications, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define DRV_NAME "cxgbit"
+#define DRV_VERSION "1.0.0-ko"
+#define pr_fmt(fmt) DRV_NAME ": " fmt
+
+#include "cxgbit.h"
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+#include <net/dcbevent.h>
+#include "cxgb4_dcb.h"
+#endif
+
+LIST_HEAD(cdev_list_head);
+/* cdev list lock */
+DEFINE_MUTEX(cdev_list_lock);
+
+void _cxgbit_free_cdev(struct kref *kref)
+{
+	struct cxgbit_device *cdev;
+
+	cdev = container_of(kref, struct cxgbit_device, kref);
+
+	kfree(cdev);
+}
+
+static void cxgbit_set_mdsl(struct cxgbit_device *cdev)
+{
+	struct cxgb4_lld_info *lldi = &cdev->lldi;
+	u32 mdsl;
+
+#define ULP2_MAX_PKT_LEN 16224
+#define ISCSI_PDU_NONPAYLOAD_LEN 312
+	mdsl = min_t(u32, lldi->iscsi_iolen - ISCSI_PDU_NONPAYLOAD_LEN,
+		     ULP2_MAX_PKT_LEN - ISCSI_PDU_NONPAYLOAD_LEN);
+	mdsl = min_t(u32, mdsl, 8192);
+	mdsl = min_t(u32, mdsl, (MAX_SKB_FRAGS - 1) * PAGE_SIZE);
+
+	cdev->mdsl = mdsl;
+}
+
+static void *cxgbit_uld_add(const struct cxgb4_lld_info *lldi)
+{
+	struct cxgbit_device *cdev;
+
+	if (is_t4(lldi->adapter_type))
+		return ERR_PTR(-ENODEV);
+
+	cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
+	if (!cdev)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&cdev->kref);
+
+	cdev->lldi = *lldi;
+
+	cxgbit_set_mdsl(cdev);
+
+	cxgbit_ddp_init(cdev);
+
+	if (!test_bit(CDEV_DDP_ENABLE, &cdev->flags))
+		pr_info("cdev %s ddp init failed\n",
+			pci_name(lldi->pdev));
+
+	if (lldi->fw_vers >= 0x10d2b00) /* fw 1.13.43.0 */
+		set_bit(CDEV_ISO_ENABLE, &cdev->flags);
+
+	spin_lock_init(&cdev->cskq.lock);
+	INIT_LIST_HEAD(&cdev->cskq.list);
+
+	mutex_lock(&cdev_list_lock);
+	list_add_tail(&cdev->list, &cdev_list_head);
+	mutex_unlock(&cdev_list_lock);
+
+	pr_info("cdev %s added for iSCSI target transport\n",
+		pci_name(lldi->pdev));
+
+	return cdev;
+}
+
+static void cxgbit_close_conn(struct cxgbit_device *cdev)
+{
+	struct cxgbit_sock *csk;
+	struct sk_buff *skb;
+	bool wakeup_thread = false;
+
+	spin_lock_bh(&cdev->cskq.lock);
+	list_for_each_entry(csk, &cdev->cskq.list, list) {
+		skb = alloc_skb(0, GFP_ATOMIC);
+		if (!skb)
+			continue;
+
+		spin_lock_bh(&csk->rxq.lock);
+		__skb_queue_tail(&csk->rxq, skb);
+		if (skb_queue_len(&csk->rxq) == 1)
+			wakeup_thread = true;
+		spin_unlock_bh(&csk->rxq.lock);
+
+		if (wakeup_thread) {
+			wake_up(&csk->waitq);
+			wakeup_thread = false;
+		}
+	}
+	spin_unlock_bh(&cdev->cskq.lock);
+}
+
+static void cxgbit_detach_cdev(struct cxgbit_device *cdev)
+{
+	bool free_cdev = false;
+
+	spin_lock_bh(&cdev->cskq.lock);
+	if (list_empty(&cdev->cskq.list))
+		free_cdev = true;
+	spin_unlock_bh(&cdev->cskq.lock);
+
+	if (free_cdev) {
+		mutex_lock(&cdev_list_lock);
+		list_del(&cdev->list);
+		mutex_unlock(&cdev_list_lock);
+
+		cxgbit_put_cdev(cdev);
+	} else {
+		cxgbit_close_conn(cdev);
+	}
+}
+
+static int cxgbit_uld_state_change(void *handle, enum cxgb4_state state)
+{
+	struct cxgbit_device *cdev = handle;
+
+	switch (state) {
+	case CXGB4_STATE_UP:
+		set_bit(CDEV_STATE_UP, &cdev->flags);
+		pr_info("cdev %s state UP.\n", pci_name(cdev->lldi.pdev));
+		break;
+	case CXGB4_STATE_START_RECOVERY:
+		clear_bit(CDEV_STATE_UP, &cdev->flags);
+		cxgbit_close_conn(cdev);
+		pr_info("cdev %s state RECOVERY.\n", pci_name(cdev->lldi.pdev));
+		break;
+	case CXGB4_STATE_DOWN:
+		pr_info("cdev %s state DOWN.\n", pci_name(cdev->lldi.pdev));
+		break;
+	case CXGB4_STATE_DETACH:
+		clear_bit(CDEV_STATE_UP, &cdev->flags);
+		cxgbit_detach_cdev(cdev);
+		pr_info("cdev %s state DETACH.\n", pci_name(cdev->lldi.pdev));
+		break;
+	default:
+		pr_info("cdev %s unknown state %d.\n",
+			pci_name(cdev->lldi.pdev), state);
+		break;
+	}
+	return 0;
+}
+
+static void cxgbit_proc_ddp_status(unsigned int tid,
+				   struct cpl_rx_data_ddp *cpl,
+				   struct cxgbit_lro_pdu_cb *pdu_cb)
+{
+	unsigned int status = ntohl(cpl->ddpvld);
+
+	pdu_cb->flags |= PDUCBF_RX_STATUS;
+	pdu_cb->ddigest = ntohl(cpl->ulp_crc);
+	pdu_cb->pdulen = ntohs(cpl->len);
+
+	if (status & (1 << CPL_RX_ISCSI_DDP_STATUS_HCRC_SHIFT)) {
+		pr_info("tid 0x%x, status 0x%x, hcrc bad.\n", tid, status);
+		pdu_cb->flags |= PDUCBF_RX_HCRC_ERR;
+	}
+
+	if (status & (1 << CPL_RX_ISCSI_DDP_STATUS_DCRC_SHIFT)) {
+		pr_info("tid 0x%x, status 0x%x, dcrc bad.\n", tid, status);
+		pdu_cb->flags |= PDUCBF_RX_DCRC_ERR;
+	}
+
+	if (status & (1 << CPL_RX_ISCSI_DDP_STATUS_PAD_SHIFT))
+		pr_info("tid 0x%x, status 0x%x, pad bad.\n", tid, status);
+
+	if ((status & (1 << CPL_RX_ISCSI_DDP_STATUS_DDP_SHIFT)) &&
+	    (!(pdu_cb->flags & PDUCBF_RX_DATA))) {
+		pdu_cb->flags |= PDUCBF_RX_DATA_DDPD;
+	}
+}
+
+static void cxgbit_lro_add_packet_rsp(struct sk_buff *skb, u8 op,
+				      const __be64 *rsp)
+{
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb,
+						lro_cb->pdu_cnt);
+	struct cpl_rx_data_ddp *cpl = (struct cpl_rx_data_ddp *)(rsp + 1);
+
+	cxgbit_proc_ddp_status(lro_cb->csk->tid, cpl, pdu_cb);
+
+	lro_cb->pdu_totallen += pdu_cb->pdulen;
+	lro_cb->pdu_cnt++;
+}
+
+static void
+cxgbit_copy_frags(struct sk_buff *skb, const struct pkt_gl *gl,
+		  unsigned int offset)
+{
+	int skb_frag_index = skb_shinfo(skb)->nr_frags;
+	int i;
+
+	/* usually there's just one frag */
+	__skb_fill_page_desc(skb, skb_frag_index, gl->frags[0].page,
+			     gl->frags[0].offset + offset,
+			     gl->frags[0].size - offset);
+	for (i = 1; i < gl->nfrags; i++)
+		__skb_fill_page_desc(skb, skb_frag_index + i,
+				     gl->frags[i].page,
+				     gl->frags[i].offset,
+				     gl->frags[i].size);
+	skb_shinfo(skb)->nr_frags += gl->nfrags;
+
+	/* get a reference to the last page, we don't own it */
+	get_page(gl->frags[gl->nfrags - 1].page);
+}
+
+static void cxgbit_lro_add_packet_gl(struct sk_buff *skb, u8 op,
+				     const struct pkt_gl *gl)
+{
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb,
+						lro_cb->pdu_cnt);
+	unsigned int offset;
+	unsigned int len;
+
+	if (op == CPL_ISCSI_HDR) {
+		struct cpl_iscsi_hdr *cpl = (struct cpl_iscsi_hdr *)gl->va;
+
+		offset = sizeof(struct cpl_iscsi_hdr);
+		pdu_cb->flags |= PDUCBF_RX_HDR;
+		pdu_cb->seq = ntohl(cpl->seq);
+		len = ntohs(cpl->len);
+		pdu_cb->hdr = gl->va + offset;
+		pdu_cb->hlen = len;
+	} else {
+		struct cpl_iscsi_data *cpl = (struct cpl_iscsi_data *)gl->va;
+
+		offset = sizeof(struct cpl_iscsi_data);
+		pdu_cb->flags |= PDUCBF_RX_DATA;
+		len = ntohs(cpl->len);
+		pdu_cb->dlen = len;
+		pdu_cb->doffset = lro_cb->offset;
+		pdu_cb->nr_dfrags = gl->nfrags;
+		pdu_cb->dfrag_index = skb_shinfo(skb)->nr_frags;
+	}
+
+	cxgbit_copy_frags(skb, gl, offset);
+
+	pdu_cb->frags += gl->nfrags;
+	lro_cb->offset += len;
+	skb->len += len;
+	skb->data_len += len;
+	skb->truesize += len;
+}
+
+static struct sk_buff *
+cxgbit_lro_init_skb(struct cxgbit_sock *csk,
+		    u8 op,
+		    const struct pkt_gl *gl,
+		    const __be64 *rsp,
+		    struct napi_struct *napi)
+{
+	struct sk_buff *skb;
+	struct cxgbit_lro_cb *lro_cb;
+
+	skb = napi_alloc_skb(napi, LRO_SKB_MAX_HEADROOM);
+
+	if (unlikely(!skb))
+		return NULL;
+
+	memset(skb->data, 0, LRO_SKB_MAX_HEADROOM);
+
+	lro_cb = cxgbit_skb_lro_cb(skb);
+
+	cxgbit_get_csk(csk);
+
+	lro_cb->csk = csk;
+
+	return skb;
+}
+
+static void cxgbit_queue_lro_skb(struct cxgbit_sock *csk,
+				 struct sk_buff *skb)
+{
+	bool wakeup_thread = false;
+
+	cxgbit_skcb_flags(skb) |= SKCBF_RX_LRO;
+
+	spin_lock(&csk->rxq.lock);
+	__skb_queue_tail(&csk->rxq, skb);
+	if (skb_queue_len(&csk->rxq) == 1)
+		wakeup_thread = true;
+	spin_unlock(&csk->rxq.lock);
+
+	if (wakeup_thread)
+		wake_up(&csk->waitq);
+}
+
+static void cxgbit_lro_flush(struct t4_lro_mgr *lro_mgr,
+			     struct sk_buff *skb)
+{
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+	struct cxgbit_sock *csk = lro_cb->csk;
+
+	csk->lro_skb = NULL;
+
+	__skb_unlink(skb, &lro_mgr->lroq);
+	cxgbit_queue_lro_skb(csk, skb);
+
+	cxgbit_put_csk(csk);
+
+	lro_mgr->lro_pkts++;
+	lro_mgr->lro_session_cnt--;
+}
+
+static void cxgbit_uld_lro_flush(struct t4_lro_mgr *lro_mgr)
+{
+	struct sk_buff *skb;
+
+	while ((skb = skb_peek(&lro_mgr->lroq)))
+		cxgbit_lro_flush(lro_mgr, skb);
+}
+
+static int cxgbit_lro_receive(struct cxgbit_sock *csk, u8 op,
+			      const __be64 *rsp,
+			      const struct pkt_gl *gl,
+			      struct t4_lro_mgr *lro_mgr,
+			      struct napi_struct *napi)
+{
+	struct sk_buff *skb;
+	struct cxgbit_lro_cb *lro_cb;
+
+	if (!csk) {
+		pr_err("%s: csk NULL, op 0x%x.\n", __func__, op);
+		goto out;
+	}
+
+	if (csk->lro_skb)
+		goto add_packet;
+
+start_lro:
+	/* Did we reach the hash size limit */
+	if (lro_mgr->lro_session_cnt >= MAX_LRO_SESSIONS) {
+		cxgbit_uld_lro_flush(lro_mgr);
+		goto start_lro;
+	}
+
+	skb = cxgbit_lro_init_skb(csk, op, gl, rsp, napi);
+	if (unlikely(!skb))
+		goto out;
+
+	csk->lro_skb = skb;
+
+	__skb_queue_tail(&lro_mgr->lroq, skb);
+	lro_mgr->lro_session_cnt++;
+
+	/* continue to add the packet */
+add_packet:
+	skb = csk->lro_skb;
+	lro_cb = cxgbit_skb_lro_cb(skb);
+
+	/* Check if this packet can be aggregated */
+	if ((gl && (skb_shinfo(skb)->nr_frags + gl->nfrags >=
+		    MAX_SKB_FRAGS ||
+		    lro_cb->pdu_totallen >= LRO_FLUSH_TOTALLEN_MAX)) ||
+	    /* lro_cb->pdu_cnt must be less than MAX_SKB_FRAGS */
+	    lro_cb->pdu_cnt >= (MAX_SKB_FRAGS - 1)) {
+		cxgbit_lro_flush(lro_mgr, skb);
+		goto start_lro;
+	}
+
+	if (gl)
+		cxgbit_lro_add_packet_gl(skb, op, gl);
+	else
+		cxgbit_lro_add_packet_rsp(skb, op, rsp);
+	lro_mgr->lro_merged++;
+
+	return 0;
+
+out:
+	return -1;
+}
+
+static int cxgbit_uld_lro_rx_handler(void *hndl, const __be64 *rsp,
+				     const struct pkt_gl *gl,
+				     struct t4_lro_mgr *lro_mgr,
+				     struct napi_struct *napi)
+{
+	struct cxgbit_device *cdev = hndl;
+	struct cxgb4_lld_info *lldi = &cdev->lldi;
+	struct cpl_tx_data *rpl = NULL;
+	struct cxgbit_sock *csk = NULL;
+	unsigned int tid = 0;
+	struct sk_buff *skb;
+	unsigned int op = *(u8 *)rsp;
+	bool lro_flush = true;
+
+	switch (op) {
+	case CPL_ISCSI_HDR:
+	case CPL_ISCSI_DATA:
+	case CPL_RX_ISCSI_DDP:
+	case CPL_FW4_ACK:
+		lro_flush = false;	/* fall through */
+	case CPL_ABORT_RPL_RSS:
+	case CPL_PASS_ESTABLISH:
+	case CPL_PEER_CLOSE:
+	case CPL_CLOSE_CON_RPL:
+	case CPL_ABORT_REQ_RSS:
+	case CPL_SET_TCB_RPL:
+	case CPL_RX_DATA:
+		/* Get the TID of this connection */
+		rpl = gl ? (struct cpl_tx_data *)gl->va :
+			   (struct cpl_tx_data *)(rsp + 1);
+		tid = GET_TID(rpl);
+		csk = lookup_tid(lldi->tids, tid);
+		break;
+	default:
+		break;
+	}
+
+	/*
+	 * Flush the LROed skb on receiving any cpl other than FW4_ACK and
+	 * CPL_ISCSI_XXX
+	 */
+	if (csk && csk->lro_skb && lro_flush)
+		cxgbit_lro_flush(lro_mgr, csk->lro_skb);
+
+	if (!gl) {
+		unsigned int len;
+
+		if (op == CPL_RX_ISCSI_DDP) {
+			if (!cxgbit_lro_receive(csk, op, rsp, NULL, lro_mgr,
+						napi))
+				return 0;
+		}
+
+		len = 64 - sizeof(struct rsp_ctrl) - 8;
+		skb = napi_alloc_skb(napi, len);
+		if (!skb)
+			goto nomem;
+		__skb_put(skb, len);
+		skb_copy_to_linear_data(skb, &rsp[1], len);
+	} else {
+		if (unlikely(op != *(u8 *)gl->va)) {
+			pr_info("? FL 0x%p,RSS%#llx,FL %#llx,len %u.\n",
+				gl->va, be64_to_cpu(*rsp),
+				be64_to_cpu(*(u64 *)gl->va),
+				gl->tot_len);
+			return 0;
+		}
+
+		if (op == CPL_ISCSI_HDR || op == CPL_ISCSI_DATA) {
+			if (!cxgbit_lro_receive(csk, op, rsp, gl, lro_mgr,
+						napi))
+				return 0;
+		}
+
+#define RX_PULL_LEN 128
+		skb = cxgb4_pktgl_to_skb(gl, RX_PULL_LEN, RX_PULL_LEN);
+		if (unlikely(!skb))
+			goto nomem;
+	}
+
+	rpl = (struct cpl_tx_data *)skb->data;
+	op = rpl->ot.opcode;
+	cxgbit_skcb_rx_opcode(skb) = op;
+
+	pr_debug("cdev %p, opcode 0x%x(0x%x,0x%x), skb %p.\n",
+		 cdev, op, rpl->ot.opcode_tid,
+		 ntohl(rpl->ot.opcode_tid), skb);
+
+	if (op < NUM_CPL_CMDS && cxgbit_cplhandlers[op]) {
+		cxgbit_cplhandlers[op](cdev, skb);
+	} else {
+		pr_err("No handler for opcode 0x%x.\n", op);
+		__kfree_skb(skb);
+	}
+	return 0;
+nomem:
+	pr_err("%s OOM bailing out.\n", __func__);
+	return 1;
+}
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+struct cxgbit_dcb_work {
+	struct dcb_app_type dcb_app;
+	struct work_struct work;
+};
+
+static struct cxgbit_device *
+cxgbit_find_device(struct net_device *ndev, u8 *port_id)
+{
+	struct cxgbit_device *cdev;
+	u8 i;
+
+	list_for_each_entry(cdev, &cdev_list_head, list) {
+		struct cxgb4_lld_info *lldi = &cdev->lldi;
+
+		for (i = 0; i < lldi->nports; i++) {
+			if (lldi->ports[i] == ndev) {
+				*port_id = i;
+				return cdev;
+			}
+		}
+	}
+
+	return NULL;
+}
+
+static void cxgbit_update_dcb_priority(struct cxgbit_device *cdev,
+				       u8 port_id, u8 dcb_priority,
+				       u16 port_num)
+{
+	struct cxgbit_sock *csk;
+	struct sk_buff *skb;
+	u16 local_port;
+	bool wakeup_thread = false;
+
+	spin_lock_bh(&cdev->cskq.lock);
+	list_for_each_entry(csk, &cdev->cskq.list, list) {
+		if (csk->port_id != port_id)
+			continue;
+
+		if (csk->com.local_addr.ss_family == AF_INET6) {
+			struct sockaddr_in6 *sock_in6;
+
+			sock_in6 = (struct sockaddr_in6 *)&csk->com.local_addr;
+			local_port = ntohs(sock_in6->sin6_port);
+		} else {
+			struct sockaddr_in *sock_in;
+
+			sock_in = (struct sockaddr_in *)&csk->com.local_addr;
+			local_port = ntohs(sock_in->sin_port);
+		}
+
+		if (local_port != port_num)
+			continue;
+
+		if (csk->dcb_priority == dcb_priority)
+			continue;
+
+		skb = alloc_skb(0, GFP_ATOMIC);
+		if (!skb)
+			continue;
+
+		spin_lock(&csk->rxq.lock);
+		__skb_queue_tail(&csk->rxq, skb);
+		if (skb_queue_len(&csk->rxq) == 1)
+			wakeup_thread = true;
+		spin_unlock(&csk->rxq.lock);
+
+		if (wakeup_thread) {
+			wake_up(&csk->waitq);
+			wakeup_thread = false;
+		}
+	}
+	spin_unlock_bh(&cdev->cskq.lock);
+}
+
+static void cxgbit_dcb_workfn(struct work_struct *work)
+{
+	struct cxgbit_dcb_work *dcb_work;
+	struct net_device *ndev;
+	struct cxgbit_device *cdev = NULL;
+	struct dcb_app_type *iscsi_app;
+	u8 priority, port_id = 0xff;
+
+	dcb_work = container_of(work, struct cxgbit_dcb_work, work);
+	iscsi_app = &dcb_work->dcb_app;
+
+	if (iscsi_app->dcbx & DCB_CAP_DCBX_VER_IEEE) {
+		if (iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY)
+			goto out;
+
+		priority = iscsi_app->app.priority;
+
+	} else if (iscsi_app->dcbx & DCB_CAP_DCBX_VER_CEE) {
+		if (iscsi_app->app.selector != DCB_APP_IDTYPE_PORTNUM)
+			goto out;
+
+		if (!iscsi_app->app.priority)
+			goto out;
+
+		priority = ffs(iscsi_app->app.priority) - 1;
+	} else {
+		goto out;
+	}
+
+	pr_debug("priority for ifid %d is %u\n",
+		 iscsi_app->ifindex, priority);
+
+	ndev = dev_get_by_index(&init_net, iscsi_app->ifindex);
+
+	if (!ndev)
+		goto out;
+
+	mutex_lock(&cdev_list_lock);
+	cdev = cxgbit_find_device(ndev, &port_id);
+
+	dev_put(ndev);
+
+	if (!cdev) {
+		mutex_unlock(&cdev_list_lock);
+		goto out;
+	}
+
+	cxgbit_update_dcb_priority(cdev, port_id, priority,
+				   iscsi_app->app.protocol);
+	mutex_unlock(&cdev_list_lock);
+out:
+	kfree(dcb_work);
+}
+
+static int cxgbit_dcbevent_notify(struct notifier_block *nb,
+				  unsigned long action, void *data)
+{
+	struct cxgbit_dcb_work *dcb_work;
+	struct dcb_app_type *dcb_app = data;
+
+	dcb_work = kzalloc(sizeof(*dcb_work), GFP_ATOMIC);
+	if (!dcb_work)
+		return NOTIFY_DONE;
+
+	dcb_work->dcb_app = *dcb_app;
+	INIT_WORK(&dcb_work->work, cxgbit_dcb_workfn);
+	schedule_work(&dcb_work->work);
+	return NOTIFY_OK;
+}
+#endif
+
+static enum target_prot_op cxgbit_get_sup_prot_ops(struct iscsi_conn *conn)
+{
+	return TARGET_PROT_NORMAL;
+}
+
+static struct iscsit_transport cxgbit_transport = {
+	.name			= DRV_NAME,
+	.transport_type		= ISCSI_TCP_CXGB4,
+	.priv_size		= sizeof(struct cxgbit_cmd),
+	.owner			= THIS_MODULE,
+	.iscsit_setup_np	= cxgbit_setup_np,
+	.iscsit_accept_np	= cxgbit_accept_np,
+	.iscsit_free_np		= cxgbit_free_np,
+	.iscsit_free_conn	= cxgbit_free_conn,
+	.iscsit_get_login_rx	= cxgbit_get_login_rx,
+	.iscsit_put_login_tx	= cxgbit_put_login_tx,
+	.iscsit_immediate_queue	= cxgbit_immediate_queue,
+	.iscsit_response_queue	= cxgbit_response_queue,
+	.iscsit_get_dataout	= iscsit_build_r2ts_for_cmd,
+	.iscsit_queue_data_in	= iscsit_queue_rsp,
+	.iscsit_queue_status	= iscsit_queue_rsp,
+	.iscsit_rx_pdu		= cxgbit_rx_pdu,
+	.iscsit_validate_params	= cxgbit_validate_params,
+	.iscsit_release_cmd	= cxgbit_release_cmd,
+	.iscsit_aborted_task	= iscsit_aborted_task,
+	.iscsit_get_sup_prot_ops = cxgbit_get_sup_prot_ops,
+};
+
+static struct cxgb4_uld_info cxgbit_uld_info = {
+	.name		= DRV_NAME,
+	.add		= cxgbit_uld_add,
+	.state_change	= cxgbit_uld_state_change,
+	.lro_rx_handler = cxgbit_uld_lro_rx_handler,
+	.lro_flush	= cxgbit_uld_lro_flush,
+};
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+static struct notifier_block cxgbit_dcbevent_nb = {
+		.notifier_call = cxgbit_dcbevent_notify,
+};
+#endif
+
+static int __init cxgbit_init(void)
+{
+	cxgb4_register_uld(CXGB4_ULD_ISCSIT, &cxgbit_uld_info);
+	iscsit_register_transport(&cxgbit_transport);
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+	pr_info("%s dcb enabled.\n", DRV_NAME);
+	register_dcbevent_notifier(&cxgbit_dcbevent_nb);
+#endif
+	return 0;
+}
+
+static void __exit cxgbit_exit(void)
+{
+	struct cxgbit_device *cdev, *tmp;
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+	unregister_dcbevent_notifier(&cxgbit_dcbevent_nb);
+#endif
+	mutex_lock(&cdev_list_lock);
+	list_for_each_entry_safe(cdev, tmp, &cdev_list_head, list) {
+		list_del(&cdev->list);
+		cxgbit_put_cdev(cdev);
+	}
+	mutex_unlock(&cdev_list_lock);
+	iscsit_unregister_transport(&cxgbit_transport);
+	cxgb4_unregister_uld(CXGB4_ULD_ISCSIT);
+}
+
+module_init(cxgbit_init);
+module_exit(cxgbit_exit);
+
+MODULE_DESCRIPTION("Chelsio iSCSI target offload driver");
+MODULE_AUTHOR("Chelsio Communications");
+MODULE_VERSION(DRV_VERSION);
+MODULE_LICENSE("GPL");
-- 
2.0.2
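The ddpvld decode in cxgbit_proc_ddp_status() above treats each condition reported by CPL_RX_ISCSI_DDP as a single bit in the 32-bit status word and maps it onto the PDUCBF_* flags from cxgbit_lro.h. A userspace sketch of that mapping (`decode_ddp_status` is a hypothetical helper; the shift values and flag bits are taken from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the CPL_RX_ISCSI_DDP status-bit decode. */
#define DDP_SHIFT	16	/* payload was ddp'able */
#define PAD_SHIFT	19	/* pad error */
#define HCRC_SHIFT	20	/* header digest error */
#define DCRC_SHIFT	21	/* data digest error */

#define PDUCBF_RX_DATA		(1 << 1)
#define PDUCBF_RX_STATUS	(1 << 2)
#define PDUCBF_RX_DATA_DDPD	(1 << 3)
#define PDUCBF_RX_HCRC_ERR	(1 << 4)
#define PDUCBF_RX_DCRC_ERR	(1 << 5)

static uint8_t decode_ddp_status(uint32_t status, uint8_t flags)
{
	flags |= PDUCBF_RX_STATUS;
	if (status & (1u << HCRC_SHIFT))
		flags |= PDUCBF_RX_HCRC_ERR;
	if (status & (1u << DCRC_SHIFT))
		flags |= PDUCBF_RX_DCRC_ERR;
	/* mark DDP'd only when no payload arrived via CPL_ISCSI_DATA */
	if ((status & (1u << DDP_SHIFT)) && !(flags & PDUCBF_RX_DATA))
		flags |= PDUCBF_RX_DATA_DDPD;
	return flags;
}
```

The last condition is the interesting one: a PDU counts as directly placed only when the DDP bit is set and no CPL_ISCSI_DATA payload was seen for it.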


* [RFC 30/34] cxgbit: add cxgbit_cm.c
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (28 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 29/34] cxgbit: add cxgbit_main.c Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 31/34] cxgbit: add cxgbit_target.c Varun Prakash
                   ` (4 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

This file contains the connection
management code.
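Listening endpoints are tracked in a small pointer-keyed hash: np_hashfn() derives the bucket from the `cxgbit_np` pointer itself (shifted to drop allocator alignment bits), and each bucket is a singly-linked chain of np_info entries. A userspace sketch of that lookup scheme (`np_add`/`np_find` are hypothetical analogues of np_hash_add()/np_hash_find(); locking is omitted):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch of the listening-endpoint hash in cxgbit_cm.c. */
#define NP_INFO_HASH_SIZE 32

static int np_hashfn(const void *cnp)
{
	/* drop low alignment bits, mask to the bucket count */
	return ((uintptr_t)cnp >> 10) & (NP_INFO_HASH_SIZE - 1);
}

struct np_info {
	struct np_info *next;
	const void *cnp;
	unsigned int stid;
};

static struct np_info *np_hash_tab[NP_INFO_HASH_SIZE];

static void np_add(struct np_info *p, const void *cnp, unsigned int stid)
{
	int bucket = np_hashfn(cnp);

	p->cnp = cnp;
	p->stid = stid;
	p->next = np_hash_tab[bucket];	/* push onto bucket chain */
	np_hash_tab[bucket] = p;
}

static int np_find(const void *cnp)
{
	struct np_info *p;

	for (p = np_hash_tab[np_hashfn(cnp)]; p; p = p->next)
		if (p->cnp == cnp)
			return (int)p->stid;
	return -1;	/* no server TID for this endpoint */
}
```

The hash answers "which server TID (stid) did the hardware assign to this listening endpoint", which the real code needs when tearing a listener down.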

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/cxgbit_cm.c | 1893 +++++++++++++++++++++++++++++++
 1 file changed, 1893 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_cm.c

diff --git a/drivers/target/iscsi/cxgbit/cxgbit_cm.c b/drivers/target/iscsi/cxgbit/cxgbit_cm.c
new file mode 100644
index 0000000..3288821
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/cxgbit_cm.c
@@ -0,0 +1,1893 @@
+/*
+ * Copyright (c) 2016 Chelsio Communications, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/workqueue.h>
+#include <linux/skbuff.h>
+#include <linux/timer.h>
+#include <linux/notifier.h>
+#include <linux/inetdevice.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/if_vlan.h>
+
+#include <net/neighbour.h>
+#include <net/netevent.h>
+#include <net/route.h>
+#include <net/tcp.h>
+#include <net/ip6_route.h>
+#include <net/addrconf.h>
+
+#include "cxgbit.h"
+#include "clip_tbl.h"
+
+static void cxgbit_init_wr_wait(struct cxgbit_wr_wait *wr_waitp)
+{
+	wr_waitp->ret = 0;
+	reinit_completion(&wr_waitp->completion);
+}
+
+static void cxgbit_wake_up(struct cxgbit_wr_wait *wr_waitp,
+			   const char *func, u8 ret)
+{
+	if (ret == CPL_ERR_NONE)
+		wr_waitp->ret = 0;
+	else
+		wr_waitp->ret = -EIO;
+
+	if (wr_waitp->ret)
+		pr_err("%s: err:%u\n", func, ret);
+
+	complete(&wr_waitp->completion);
+}
+
+static int cxgbit_wait_for_reply(struct cxgbit_device *cdev,
+				 struct cxgbit_wr_wait *wr_waitp,
+				 u32 tid, u32 timeout, const char *func)
+{
+	int ret;
+
+	if (!test_bit(CDEV_STATE_UP, &cdev->flags)) {
+		wr_waitp->ret = -EIO;
+		goto out;
+	}
+
+	ret = wait_for_completion_timeout(&wr_waitp->completion, timeout * HZ);
+	if (!ret) {
+		pr_info("%s - Device %s not responding tid %u\n",
+			func, pci_name(cdev->lldi.pdev), tid);
+		wr_waitp->ret = -ETIMEDOUT;
+	}
+out:
+	if (wr_waitp->ret)
+		pr_info("%s: FW reply %d tid %u\n",
+			pci_name(cdev->lldi.pdev), wr_waitp->ret, tid);
+	return wr_waitp->ret;
+}
+
+/* Returns whether a CPL status conveys negative advice.
+ */
+static int is_neg_adv(unsigned int status)
+{
+	return status == CPL_ERR_RTX_NEG_ADVICE ||
+		status == CPL_ERR_PERSIST_NEG_ADVICE ||
+		status == CPL_ERR_KEEPALV_NEG_ADVICE;
+}
+
+static int np_hashfn(const struct cxgbit_np *cnp)
+{
+	return ((unsigned long)cnp >> 10) & (NP_INFO_HASH_SIZE - 1);
+}
+
+static struct np_info *np_hash_add(struct cxgbit_device *cdev,
+				   struct cxgbit_np *cnp, unsigned int stid)
+{
+	struct np_info *p = kzalloc(sizeof(*p), GFP_KERNEL);
+
+	if (p) {
+		int bucket = np_hashfn(cnp);
+
+		p->cnp = cnp;
+		p->stid = stid;
+		spin_lock(&cdev->np_lock);
+		p->next = cdev->np_hash_tab[bucket];
+		cdev->np_hash_tab[bucket] = p;
+		spin_unlock(&cdev->np_lock);
+	}
+
+	return p;
+}
+
+static int np_hash_find(struct cxgbit_device *cdev, struct cxgbit_np *cnp)
+{
+	int stid = -1, bucket = np_hashfn(cnp);
+	struct np_info *p;
+
+	spin_lock(&cdev->np_lock);
+	for (p = cdev->np_hash_tab[bucket]; p; p = p->next) {
+		if (p->cnp == cnp) {
+			stid = p->stid;
+			break;
+		}
+	}
+	spin_unlock(&cdev->np_lock);
+
+	return stid;
+}
+
+static int np_hash_del(struct cxgbit_device *cdev, struct cxgbit_np *cnp)
+{
+	int stid = -1, bucket = np_hashfn(cnp);
+	struct np_info *p, **prev = &cdev->np_hash_tab[bucket];
+
+	spin_lock(&cdev->np_lock);
+	for (p = *prev; p; prev = &p->next, p = p->next) {
+		if (p->cnp == cnp) {
+			stid = p->stid;
+			*prev = p->next;
+			kfree(p);
+			break;
+		}
+	}
+	spin_unlock(&cdev->np_lock);
+
+	return stid;
+}
+
+void _cxgbit_free_cnp(struct kref *kref)
+{
+	struct cxgbit_np *cnp;
+
+	cnp = container_of(kref, struct cxgbit_np, kref);
+	kfree(cnp);
+}
+
+static int cxgbit_create_server6(struct cxgbit_device *cdev,
+				 unsigned int stid,
+				 struct cxgbit_np *cnp)
+{
+	struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)
+				     &cnp->com.local_addr;
+	int addr_type = 0;
+	int ret;
+
+	pr_debug("%s: dev = %s; stid = %u; sin6_port = %u\n",
+		 __func__, cdev->lldi.ports[0]->name, stid, sin6->sin6_port);
+
+	cxgbit_get_cnp(cnp);
+	cxgbit_init_wr_wait(&cnp->com.wr_wait);
+
+	ret = cxgb4_create_server6(cdev->lldi.ports[0],
+				   stid, &sin6->sin6_addr,
+				   sin6->sin6_port,
+				   cdev->lldi.rxq_ids[0]);
+	if (!ret)
+		ret = cxgbit_wait_for_reply(cdev, &cnp->com.wr_wait,
+					    0, 10, __func__);
+	else if (ret > 0)
+		ret = net_xmit_errno(ret);
+	else
+		cxgbit_put_cnp(cnp);
+
+	if (!ret) {
+		addr_type = ipv6_addr_type((const struct in6_addr *)
+					   &sin6->sin6_addr);
+		if (addr_type != IPV6_ADDR_ANY)
+			cxgb4_clip_get(cdev->lldi.ports[0],
+				       (const u32 *)&sin6->sin6_addr.s6_addr,
+				       1);
+	} else {
+		pr_err("create server6 err %d stid %d laddr %pI6 lport %d\n",
+		       ret, stid, sin6->sin6_addr.s6_addr,
+		       ntohs(sin6->sin6_port));
+	}
+
+	return ret;
+}
+
+static int cxgbit_create_server4(struct cxgbit_device *cdev,
+				 unsigned int stid,
+				 struct cxgbit_np *cnp)
+{
+	struct sockaddr_in *sin = (struct sockaddr_in *)
+				   &cnp->com.local_addr;
+	int ret;
+
+	pr_debug("%s: dev = %s; stid = %u; sin_port = %u\n",
+		 __func__, cdev->lldi.ports[0]->name, stid, sin->sin_port);
+
+	cxgbit_get_cnp(cnp);
+	cxgbit_init_wr_wait(&cnp->com.wr_wait);
+
+	ret = cxgb4_create_server(cdev->lldi.ports[0],
+				  stid, sin->sin_addr.s_addr,
+				  sin->sin_port, 0,
+				  cdev->lldi.rxq_ids[0]);
+	if (!ret)
+		ret = cxgbit_wait_for_reply(cdev,
+					    &cnp->com.wr_wait,
+					    0, 10, __func__);
+	else if (ret > 0)
+		ret = net_xmit_errno(ret);
+	else
+		cxgbit_put_cnp(cnp);
+
+	if (ret)
+		pr_err("create server failed err %d stid %d laddr %pI4 lport %d\n",
+		       ret, stid, &sin->sin_addr, ntohs(sin->sin_port));
+	return ret;
+}
+
+static int cxgbit_setup_cdev_np(struct cxgbit_np *cnp)
+{
+	struct cxgbit_device *cdev;
+	int stid, ret;
+	u32 count = 0;
+	int ss_family = cnp->com.local_addr.ss_family;
+
+	mutex_lock(&cdev_list_lock);
+	list_for_each_entry(cdev, &cdev_list_head, list) {
+		if (np_hash_find(cdev, cnp) >= 0) {
+			mutex_unlock(&cdev_list_lock);
+			return -1;
+		}
+	}
+
+	list_for_each_entry(cdev, &cdev_list_head, list) {
+		if (!test_bit(CDEV_STATE_UP, &cdev->flags))
+			continue;
+
+		stid = cxgb4_alloc_stid(cdev->lldi.tids,
+					ss_family, cnp);
+		if (stid < 0)
+			continue;
+
+		if (!np_hash_add(cdev, cnp, stid)) {
+			cxgb4_free_stid(cdev->lldi.tids, stid,
+					ss_family);
+			continue;
+		}
+
+		if (ss_family == AF_INET)
+			ret = cxgbit_create_server4(cdev, stid, cnp);
+		else
+			ret = cxgbit_create_server6(cdev, stid, cnp);
+
+		if (ret) {
+			if (ret != -ETIMEDOUT)
+				cxgb4_free_stid(cdev->lldi.tids, stid,
+						ss_family);
+			np_hash_del(cdev, cnp);
+
+			if (ret == -ETIMEDOUT)
+				break;
+		}
+
+		count++;
+	}
+
+	mutex_unlock(&cdev_list_lock);
+
+	return count ? 0 : -1;
+}
+
+int cxgbit_setup_np(struct iscsi_np *np,
+		    struct sockaddr_storage *ksockaddr)
+{
+	struct cxgbit_np *cnp;
+
+	cnp = kzalloc(sizeof(*cnp), GFP_KERNEL);
+	if (!cnp)
+		return -ENOMEM;
+
+	init_waitqueue_head(&cnp->accept_wait);
+	init_completion(&cnp->com.wr_wait.completion);
+	init_completion(&cnp->accept_comp);
+	INIT_LIST_HEAD(&cnp->np_accept_list);
+	spin_lock_init(&cnp->np_accept_lock);
+	kref_init(&cnp->kref);
+	memcpy(&np->np_sockaddr, ksockaddr,
+	       sizeof(struct sockaddr_storage));
+	memcpy(&cnp->com.local_addr, &np->np_sockaddr,
+	       sizeof(cnp->com.local_addr));
+	cnp->np = np;
+
+	if (cxgbit_setup_cdev_np(cnp)) {
+		cxgbit_put_cnp(cnp);
+		return -1;
+	}
+
+	np->np_context = cnp;
+	cnp->com.state = CSK_STATE_LISTEN;
+	return 0;
+}
+
+static void
+cxgbit_set_conn_info(struct iscsi_np *np, struct iscsi_conn *conn,
+		     struct cxgbit_sock *csk)
+{
+	conn->login_family = np->np_sockaddr.ss_family;
+	conn->login_sockaddr = csk->com.remote_addr;
+	conn->local_sockaddr = csk->com.local_addr;
+}
+
+int cxgbit_accept_np(struct iscsi_np *np, struct iscsi_conn *conn)
+{
+	struct cxgbit_np *cnp = np->np_context;
+	struct cxgbit_sock *csk;
+	int ret = 0;
+
+accept_wait:
+	ret = wait_for_completion_interruptible(&cnp->accept_comp);
+	if (ret)
+		return -ENODEV;
+
+	spin_lock_bh(&np->np_thread_lock);
+	if (np->np_thread_state >= ISCSI_NP_THREAD_RESET) {
+		spin_unlock_bh(&np->np_thread_lock);
+		/* No point in stalling here when np_thread
+		 * is in state RESET/SHUTDOWN/EXIT - bail
+		 */
+		return -ENODEV;
+	}
+	spin_unlock_bh(&np->np_thread_lock);
+
+	spin_lock_bh(&cnp->np_accept_lock);
+	if (list_empty(&cnp->np_accept_list)) {
+		spin_unlock_bh(&cnp->np_accept_lock);
+		goto accept_wait;
+	}
+
+	csk = list_first_entry(&cnp->np_accept_list,
+			       struct cxgbit_sock,
+			       accept_node);
+
+	list_del_init(&csk->accept_node);
+	spin_unlock_bh(&cnp->np_accept_lock);
+	conn->context = csk;
+	csk->conn = conn;
+
+	cxgbit_set_conn_info(np, conn, csk);
+	return 0;
+}
+
+void cxgbit_free_np(struct iscsi_np *np)
+{
+	struct cxgbit_np *cnp = np->np_context;
+	struct cxgbit_device *cdev;
+	int stid, ret = 0;
+	bool ipv6 = false;
+
+	cnp->com.state = CSK_STATE_DEAD;
+
+	mutex_lock(&cdev_list_lock);
+	list_for_each_entry(cdev, &cdev_list_head, list) {
+		stid = np_hash_del(cdev, cnp);
+		if (stid < 0)
+			continue;
+
+		if (!test_bit(CDEV_STATE_UP, &cdev->flags))
+			continue;
+
+		if (np->np_sockaddr.ss_family == AF_INET6)
+			ipv6 = true;
+
+		cxgbit_get_cnp(cnp);
+		cxgbit_init_wr_wait(&cnp->com.wr_wait);
+		ret = cxgb4_remove_server(cdev->lldi.ports[0], stid,
+					  cdev->lldi.rxq_ids[0], ipv6);
+		if (ret) {
+			cxgbit_put_cnp(cnp);
+			continue;
+		}
+
+		ret = cxgbit_wait_for_reply(cdev, &cnp->com.wr_wait,
+					    0, 10, __func__);
+
+		if (ret == -ETIMEDOUT)
+			continue;
+
+		if (ipv6) {
+			struct sockaddr_in6 *sin6;
+			int addr_type = 0;
+
+			sin6 = (struct sockaddr_in6 *)&cnp->com.local_addr;
+			addr_type = ipv6_addr_type((const struct in6_addr *)
+							&sin6->sin6_addr);
+			if (addr_type != IPV6_ADDR_ANY)
+				cxgb4_clip_release(cdev->lldi.ports[0],
+						   (const u32 *)
+						   &sin6->sin6_addr.s6_addr,
+						   1);
+		}
+
+		cxgb4_free_stid(cdev->lldi.tids, stid,
+				cnp->com.local_addr.ss_family);
+	}
+
+	mutex_unlock(&cdev_list_lock);
+
+	np->np_context = NULL;
+	cxgbit_put_cnp(cnp);
+}
+
+static void cxgbit_send_halfclose(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+	struct cpl_close_con_req *req;
+	unsigned int len = roundup(sizeof(struct cpl_close_con_req), 16);
+
+	skb = alloc_skb(len, GFP_ATOMIC);
+	if (!skb)
+		return;
+
+	req = (struct cpl_close_con_req *)__skb_put(skb, len);
+	memset(req, 0, len);
+
+	set_wr_txq(skb, CPL_PRIORITY_DATA, csk->txq_idx);
+	INIT_TP_WR(req, csk->tid);
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_CLOSE_CON_REQ,
+						    csk->tid));
+	req->rsvd = 0;
+
+	cxgbit_skcb_flags(skb) |= SKCBF_TX_FLAG_COMPL;
+	__skb_queue_tail(&csk->txq, skb);
+	push_tx_frames(csk);
+}
+
+static void arp_failure_discard(void *handle, struct sk_buff *skb)
+{
+	pr_debug("%s cxgbit_device %p\n", __func__, handle);
+	kfree_skb(skb);
+}
+
+static void abort_arp_failure(void *handle, struct sk_buff *skb)
+{
+	struct cxgbit_device *cdev = handle;
+	struct cpl_abort_req *req = cplhdr(skb);
+
+	pr_debug("%s cdev %p\n", __func__, cdev);
+	req->cmd = CPL_ABORT_NO_RST;
+	cxgbit_ofld_send(cdev, skb);
+}
+
+static int cxgbit_send_abort_req(struct cxgbit_sock *csk)
+{
+	struct cpl_abort_req *req;
+	unsigned int len = roundup(sizeof(*req), 16);
+	struct sk_buff *skb;
+
+	pr_debug("%s: csk %p tid %u; state %d\n",
+		 __func__, csk, csk->tid, csk->com.state);
+
+	__skb_queue_purge(&csk->txq);
+
+	if (!test_and_set_bit(CSK_TX_DATA_SENT, &csk->com.flags))
+		send_tx_flowc_wr(csk);
+
+	skb = __skb_dequeue(&csk->skbq);
+	req = (struct cpl_abort_req *)__skb_put(skb, len);
+	memset(req, 0, len);
+
+	set_wr_txq(skb, CPL_PRIORITY_DATA, csk->txq_idx);
+	t4_set_arp_err_handler(skb, csk->com.cdev, abort_arp_failure);
+	INIT_TP_WR(req, csk->tid);
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ABORT_REQ,
+						    csk->tid));
+	req->cmd = CPL_ABORT_SEND_RST;
+	return cxgbit_l2t_send(csk->com.cdev, skb, csk->l2t);
+}
+
+void cxgbit_free_conn(struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	bool release = false;
+
+	pr_debug("%s: state %d\n",
+		 __func__, csk->com.state);
+
+	spin_lock_bh(&csk->lock);
+	switch (csk->com.state) {
+	case CSK_STATE_ESTABLISHED:
+		if (test_bit(CSK_TX_FIN, &csk->com.flags)) {
+			csk->com.state = CSK_STATE_CLOSING;
+			cxgbit_send_halfclose(csk);
+		} else {
+			csk->com.state = CSK_STATE_ABORTING;
+			cxgbit_send_abort_req(csk);
+		}
+		break;
+	case CSK_STATE_CLOSING:
+		csk->com.state = CSK_STATE_MORIBUND;
+		cxgbit_send_halfclose(csk);
+		break;
+	case CSK_STATE_DEAD:
+		release = true;
+		break;
+	default:
+		pr_err("%s: csk %p; state %d\n",
+		       __func__, csk, csk->com.state);
+	}
+	spin_unlock_bh(&csk->lock);
+
+	if (release)
+		cxgbit_put_csk(csk);
+}
+
+static void set_emss(struct cxgbit_sock *csk, u16 opt)
+{
+	csk->emss = csk->com.cdev->lldi.mtus[TCPOPT_MSS_G(opt)] -
+			((csk->com.remote_addr.ss_family == AF_INET) ?
+			sizeof(struct iphdr) : sizeof(struct ipv6hdr)) -
+			sizeof(struct tcphdr);
+	csk->mss = csk->emss;
+	if (TCPOPT_TSTAMP_G(opt))
+		csk->emss -= round_up(TCPOLEN_TIMESTAMP, 4);
+	if (csk->emss < 128)
+		csk->emss = 128;
+	if (csk->emss & 7)
+		pr_info("Warning: misaligned mtu idx %u mss %u emss=%u\n",
+			TCPOPT_MSS_G(opt), csk->mss, csk->emss);
+	pr_debug("%s mss_idx %u mss %u emss=%u\n", __func__, TCPOPT_MSS_G(opt),
+		 csk->mss, csk->emss);
+}
+
+static void cxgbit_free_skb(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+
+	__skb_queue_purge(&csk->txq);
+	__skb_queue_purge(&csk->rxq);
+	__skb_queue_purge(&csk->backlogq);
+	__skb_queue_purge(&csk->ppodq);
+	__skb_queue_purge(&csk->skbq);
+
+	while ((skb = cxgbit_sock_dequeue_wr(csk)))
+		kfree_skb(skb);
+
+	__kfree_skb(csk->lro_skb_hold);
+}
+
+void _cxgbit_free_csk(struct kref *kref)
+{
+	struct cxgbit_sock *csk;
+	struct cxgbit_device *cdev;
+
+	csk = container_of(kref, struct cxgbit_sock, kref);
+
+	pr_debug("%s csk %p state %d\n", __func__, csk, csk->com.state);
+
+	if (csk->com.local_addr.ss_family == AF_INET6) {
+		struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)
+					     &csk->com.local_addr;
+		cxgb4_clip_release(csk->com.cdev->lldi.ports[0],
+				   (const u32 *)
+				   &sin6->sin6_addr.s6_addr, 1);
+	}
+
+	cxgb4_remove_tid(csk->com.cdev->lldi.tids, 0, csk->tid);
+	dst_release(csk->dst);
+	cxgb4_l2t_release(csk->l2t);
+
+	cdev = csk->com.cdev;
+	spin_lock_bh(&cdev->cskq.lock);
+	list_del(&csk->list);
+	spin_unlock_bh(&cdev->cskq.lock);
+
+	cxgbit_free_skb(csk);
+	cxgbit_put_cdev(cdev);
+
+	kfree(csk);
+}
+
+static void get_tuple_info(struct cpl_pass_accept_req *req, int *iptype,
+			   __u8 *local_ip, __u8 *peer_ip,
+			   __be16 *local_port, __be16 *peer_port)
+{
+	u32 eth_len = ETH_HDR_LEN_G(be32_to_cpu(req->hdr_len));
+	u32 ip_len = IP_HDR_LEN_G(be32_to_cpu(req->hdr_len));
+	struct iphdr *ip = (struct iphdr *)((u8 *)(req + 1) + eth_len);
+	struct ipv6hdr *ip6 = (struct ipv6hdr *)((u8 *)(req + 1) + eth_len);
+	struct tcphdr *tcp = (struct tcphdr *)
+			      ((u8 *)(req + 1) + eth_len + ip_len);
+
+	if (ip->version == 4) {
+		pr_debug("%s saddr 0x%x daddr 0x%x sport %u dport %u\n",
+			 __func__,
+			 ntohl(ip->saddr), ntohl(ip->daddr),
+			 ntohs(tcp->source),
+			 ntohs(tcp->dest));
+		*iptype = 4;
+		memcpy(peer_ip, &ip->saddr, 4);
+		memcpy(local_ip, &ip->daddr, 4);
+	} else {
+		pr_debug("%s saddr %pI6 daddr %pI6 sport %u dport %u\n",
+			 __func__,
+			 ip6->saddr.s6_addr, ip6->daddr.s6_addr,
+			 ntohs(tcp->source),
+			 ntohs(tcp->dest));
+		*iptype = 6;
+		memcpy(peer_ip, ip6->saddr.s6_addr, 16);
+		memcpy(local_ip, ip6->daddr.s6_addr, 16);
+	}
+
+	*peer_port = tcp->source;
+	*local_port = tcp->dest;
+}
+
+static struct net_device *get_real_dev(struct net_device *egress_dev)
+{
+	if (egress_dev->priv_flags & IFF_802_1Q_VLAN)
+		return vlan_dev_real_dev(egress_dev);
+	return egress_dev;
+}
+
+static int our_interface(struct cxgbit_device *cdev,
+			 struct net_device *egress_dev)
+{
+	u8 i;
+
+	egress_dev = get_real_dev(egress_dev);
+	for (i = 0; i < cdev->lldi.nports; i++)
+		if (cdev->lldi.ports[i] == egress_dev)
+			return 1;
+	return 0;
+}
+
+static struct dst_entry *find_route6(struct cxgbit_device *cdev,
+				     __u8 *local_ip,
+				     __u8 *peer_ip, __be16 local_port,
+				     __be16 peer_port, u8 tos,
+				     __u32 sin6_scope_id)
+{
+	struct dst_entry *dst = NULL;
+
+	if (IS_ENABLED(CONFIG_IPV6)) {
+		struct flowi6 fl6;
+
+		memset(&fl6, 0, sizeof(fl6));
+		memcpy(&fl6.daddr, peer_ip, 16);
+		memcpy(&fl6.saddr, local_ip, 16);
+		if (ipv6_addr_type(&fl6.daddr) & IPV6_ADDR_LINKLOCAL)
+			fl6.flowi6_oif = sin6_scope_id;
+		dst = ip6_route_output(&init_net, NULL, &fl6);
+		if (!dst)
+			goto out;
+		if (!our_interface(cdev, ip6_dst_idev(dst)->dev) &&
+		    !(ip6_dst_idev(dst)->dev->flags & IFF_LOOPBACK)) {
+			dst_release(dst);
+			dst = NULL;
+		}
+	}
+out:
+	return dst;
+}
+
+static struct dst_entry *find_route(struct cxgbit_device *cdev,
+				    __be32 local_ip,
+				    __be32 peer_ip, __be16 local_port,
+				    __be16 peer_port, u8 tos)
+{
+	struct rtable *rt;
+	struct flowi4 fl4;
+	struct neighbour *n;
+
+	rt = ip_route_output_ports(&init_net, &fl4, NULL, peer_ip,
+				   local_ip,
+				   peer_port, local_port, IPPROTO_TCP,
+				   tos, 0);
+	if (IS_ERR(rt))
+		return NULL;
+	n = dst_neigh_lookup(&rt->dst, &peer_ip);
+	if (!n) {
+		dst_release(&rt->dst);
+		return NULL;
+	}
+	if (!our_interface(cdev, n->dev) &&
+	    !(n->dev->flags & IFF_LOOPBACK)) {
+		neigh_release(n);
+		dst_release(&rt->dst);
+		return NULL;
+	}
+	neigh_release(n);
+	return &rt->dst;
+}
+
+static void set_tcp_window(struct cxgbit_sock *csk, struct port_info *pi)
+{
+	unsigned int linkspeed;
+	u8 scale;
+
+	linkspeed = pi->link_cfg.speed;
+	scale = linkspeed / SPEED_10000;
+
+#define CXGBIT_10G_RCV_WIN (256 * 1024)
+	csk->rcv_win = CXGBIT_10G_RCV_WIN;
+	if (scale)
+		csk->rcv_win *= scale;
+
+#define CXGBIT_10G_SND_WIN (256 * 1024)
+	csk->snd_win = CXGBIT_10G_SND_WIN;
+	if (scale)
+		csk->snd_win *= scale;
+
+	pr_debug("%s snd_win %d rcv_win %d\n",
+		 __func__, csk->snd_win, csk->rcv_win);
+}
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+static u8 get_iscsi_dcb_state(struct net_device *ndev)
+{
+	return ndev->dcbnl_ops->getstate(ndev);
+}
+
+static int select_priority(int pri_mask)
+{
+	if (!pri_mask)
+		return 0;
+
+	return (ffs(pri_mask) - 1);
+}
+
+static u8 get_iscsi_dcb_priority(struct net_device *ndev, u16 local_port)
+{
+	int ret;
+	u8 caps;
+
+	struct dcb_app iscsi_dcb_app = {
+		.protocol = local_port
+	};
+
+	ret = (int)ndev->dcbnl_ops->getcap(ndev, DCB_CAP_ATTR_DCBX, &caps);
+
+	if (ret)
+		return 0;
+
+	if (caps & DCB_CAP_DCBX_VER_IEEE) {
+		iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
+
+		ret = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
+
+	} else if (caps & DCB_CAP_DCBX_VER_CEE) {
+		iscsi_dcb_app.selector = DCB_APP_IDTYPE_PORTNUM;
+
+		ret = dcb_getapp(ndev, &iscsi_dcb_app);
+	}
+
+	pr_info("iSCSI priority is set to %u\n", select_priority(ret));
+
+	return select_priority(ret);
+}
+#endif
+
+static int offload_init(struct cxgbit_sock *csk, int iptype, __u8 *peer_ip,
+			u16 local_port, struct dst_entry *dst,
+			struct cxgbit_device *cdev)
+{
+	struct neighbour *n;
+	int ret, step;
+	struct net_device *pdev;
+	u16 rxq_idx, port_id;
+#ifdef CONFIG_CHELSIO_T4_DCB
+	u8 priority = 0;
+#endif
+
+	n = dst_neigh_lookup(dst, peer_ip);
+	if (!n)
+		return -ENODEV;
+
+	rcu_read_lock();
+	ret = -ENOMEM;
+	if (n->dev->flags & IFF_LOOPBACK) {
+		pdev = NULL;
+		if (iptype == 4) {
+			pdev = ip_dev_find(&init_net, *(__be32 *)peer_ip);
+		} else if (IS_ENABLED(CONFIG_IPV6)) {
+			struct net_device *ndev;
+
+			for_each_netdev_rcu(&init_net, ndev) {
+				if (ipv6_chk_addr(&init_net,
+						  (struct in6_addr *)peer_ip,
+						  ndev, 1)) {
+					/* hold a reference to balance the
+					 * dev_put() below, matching
+					 * ip_dev_find()
+					 */
+					dev_hold(ndev);
+					pdev = ndev;
+					break;
+				}
+			}
+		}
+
+		if (!pdev) {
+			ret = -ENODEV;
+			goto out;
+		}
+		csk->l2t = cxgb4_l2t_get(cdev->lldi.l2t,
+					 n, pdev, 0);
+		if (!csk->l2t)
+			goto out;
+		csk->mtu = pdev->mtu;
+		csk->tx_chan = cxgb4_port_chan(pdev);
+		csk->smac_idx = (cxgb4_port_viid(pdev) & 0x7F) << 1;
+		step = cdev->lldi.ntxq /
+			cdev->lldi.nchan;
+		csk->txq_idx = cxgb4_port_idx(pdev) * step;
+		step = cdev->lldi.nrxq /
+			cdev->lldi.nchan;
+		csk->ctrlq_idx = cxgb4_port_idx(pdev);
+		csk->rss_qid = cdev->lldi.rxq_ids[
+				cxgb4_port_idx(pdev) * step];
+		csk->port_id = cxgb4_port_idx(pdev);
+		set_tcp_window(csk, (struct port_info *)netdev_priv(pdev));
+		dev_put(pdev);
+	} else {
+		pdev = get_real_dev(n->dev);
+#ifdef CONFIG_CHELSIO_T4_DCB
+		if (get_iscsi_dcb_state(pdev))
+			priority = get_iscsi_dcb_priority(pdev, local_port);
+
+		csk->dcb_priority = priority;
+
+		csk->l2t = cxgb4_l2t_get(cdev->lldi.l2t, n, pdev, priority);
+#else
+		csk->l2t = cxgb4_l2t_get(cdev->lldi.l2t, n, pdev, 0);
+#endif
+		if (!csk->l2t)
+			goto out;
+		port_id = cxgb4_port_idx(pdev);
+		csk->mtu = dst_mtu(dst);
+		csk->tx_chan = cxgb4_port_chan(pdev);
+		csk->smac_idx = (cxgb4_port_viid(pdev) & 0x7F) << 1;
+		step = cdev->lldi.ntxq /
+			cdev->lldi.nports;
+		csk->txq_idx = (port_id * step) +
+				(cdev->selectq[port_id][0]++ % step);
+		csk->ctrlq_idx = cxgb4_port_idx(pdev);
+		step = cdev->lldi.nrxq /
+			cdev->lldi.nports;
+		rxq_idx = (port_id * step) +
+				(cdev->selectq[port_id][1]++ % step);
+		csk->rss_qid = cdev->lldi.rxq_ids[rxq_idx];
+		csk->port_id = port_id;
+		set_tcp_window(csk, (struct port_info *)netdev_priv(pdev));
+	}
+	ret = 0;
+out:
+	rcu_read_unlock();
+	neigh_release(n);
+	return ret;
+}
+
+int cxgbit_ofld_send(struct cxgbit_device *cdev, struct sk_buff *skb)
+{
+	int ret = 0;
+
+	if (!test_bit(CDEV_STATE_UP, &cdev->flags)) {
+		kfree_skb(skb);
+		pr_err("%s - device not up - dropping\n", __func__);
+		return -EIO;
+	}
+
+	ret = cxgb4_ofld_send(cdev->lldi.ports[0], skb);
+	if (ret < 0)
+		kfree_skb(skb);
+	return ret < 0 ? ret : 0;
+}
+
+static void cxgbit_release_tid(struct cxgbit_device *cdev, u32 tid)
+{
+	struct cpl_tid_release *req;
+	unsigned int len = roundup(sizeof(*req), 16);
+	struct sk_buff *skb;
+
+	skb = alloc_skb(len, GFP_ATOMIC);
+	if (!skb)
+		return;
+
+	req = (struct cpl_tid_release *)__skb_put(skb, len);
+	memset(req, 0, len);
+
+	INIT_TP_WR(req, tid);
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(
+		   CPL_TID_RELEASE, tid));
+	set_wr_txq(skb, CPL_PRIORITY_SETUP, 0);
+	cxgbit_ofld_send(cdev, skb);
+}
+
+int cxgbit_l2t_send(struct cxgbit_device *cdev, struct sk_buff *skb,
+		    struct l2t_entry *l2e)
+{
+	int ret = 0;
+
+	if (!test_bit(CDEV_STATE_UP, &cdev->flags)) {
+		kfree_skb(skb);
+		pr_err("%s - device not up - dropping\n", __func__);
+		return -EIO;
+	}
+
+	ret = cxgb4_l2t_send(cdev->lldi.ports[0], skb, l2e);
+	if (ret < 0)
+		kfree_skb(skb);
+	return ret < 0 ? ret : 0;
+}
+
+static void best_mtu(const unsigned short *mtus, unsigned short mtu,
+		     unsigned int *idx, int use_ts, int ipv6)
+{
+	unsigned short hdr_size = (ipv6 ? sizeof(struct ipv6hdr) :
+				   sizeof(struct iphdr)) +
+				   sizeof(struct tcphdr) +
+				   (use_ts ? round_up(TCPOLEN_TIMESTAMP,
+				    4) : 0);
+	unsigned short data_size = mtu - hdr_size;
+
+	cxgb4_best_aligned_mtu(mtus, hdr_size, data_size, 8, idx);
+}
+
+static void cxgbit_send_rx_credits(struct cxgbit_sock *csk,
+				   struct sk_buff *skb)
+{
+	if (csk->com.state != CSK_STATE_ESTABLISHED) {
+		__kfree_skb(skb);
+		return;
+	}
+
+	cxgbit_ofld_send(csk->com.cdev, skb);
+}
+
+/*
+ * Send RX credits to the hardware through an RX_DATA_ACK CPL message.
+ * Returns 0 on success, -1 if the skb allocation fails.
+ */
+int cxgbit_rx_data_ack(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+	struct cpl_rx_data_ack *req;
+	unsigned int len = roundup(sizeof(*req), 16);
+
+	skb = alloc_skb(len, GFP_KERNEL);
+	if (!skb)
+		return -1;
+
+	req = (struct cpl_rx_data_ack *)__skb_put(skb, len);
+	memset(req, 0, len);
+
+	set_wr_txq(skb, CPL_PRIORITY_ACK, csk->ctrlq_idx);
+	INIT_TP_WR(req, csk->tid);
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_RX_DATA_ACK,
+						    csk->tid));
+	req->credit_dack = cpu_to_be32(RX_DACK_CHANGE_F | RX_DACK_MODE_V(1) |
+				       RX_CREDITS_V(csk->rx_credits));
+
+	csk->rx_credits = 0;
+
+	spin_lock_bh(&csk->lock);
+	if (csk->lock_owner) {
+		cxgbit_skcb_rx_backlog_fn(skb) = cxgbit_send_rx_credits;
+		__skb_queue_tail(&csk->backlogq, skb);
+		spin_unlock_bh(&csk->lock);
+		return 0;
+	}
+
+	cxgbit_send_rx_credits(csk, skb);
+	spin_unlock_bh(&csk->lock);
+
+	return 0;
+}
+
+#define FLOWC_WR_NPARAMS_MIN	9
+#define FLOWC_WR_NPARAMS_MAX	11
+static int cxgbit_alloc_csk_skb(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+	u32 len, flowclen;
+	u8 i;
+
+	flowclen = offsetof(struct fw_flowc_wr,
+			    mnemval[FLOWC_WR_NPARAMS_MAX]);
+
+	len = max_t(u32, sizeof(struct cpl_abort_req),
+		    sizeof(struct cpl_abort_rpl));
+
+	len = max(len, flowclen);
+	len = roundup(len, 16);
+
+	for (i = 0; i < 3; i++) {
+		skb = alloc_skb(len, GFP_ATOMIC);
+		if (!skb)
+			goto out;
+		__skb_queue_tail(&csk->skbq, skb);
+	}
+
+	skb = alloc_skb(LRO_SKB_MIN_HEADROOM, GFP_ATOMIC);
+	if (!skb)
+		goto out;
+
+	memset(skb->data, 0, LRO_SKB_MIN_HEADROOM);
+	csk->lro_skb_hold = skb;
+
+	return 0;
+out:
+	__skb_queue_purge(&csk->skbq);
+	return -ENOMEM;
+}
+
+static u32 compute_wscale(u32 win)
+{
+	u32 wscale = 0;
+
+	while (wscale < 14 && (65535 << wscale) < win)
+		wscale++;
+	return wscale;
+}
+
+static void cxgbit_pass_accept_rpl(struct cxgbit_sock *csk,
+				   struct cpl_pass_accept_req *req)
+{
+	struct sk_buff *skb;
+	const struct tcphdr *tcph;
+	struct cpl_t5_pass_accept_rpl *rpl5;
+	unsigned int len = roundup(sizeof(*rpl5), 16);
+	unsigned int mtu_idx;
+	u64 opt0;
+	u32 opt2, hlen;
+	u32 wscale;
+	u32 win;
+
+	pr_debug("%s csk %p tid %u\n", __func__, csk, csk->tid);
+
+	skb = alloc_skb(len, GFP_ATOMIC);
+	if (!skb) {
+		cxgbit_put_csk(csk);
+		return;
+	}
+
+	rpl5 = (struct cpl_t5_pass_accept_rpl *)__skb_put(skb, len);
+	memset(rpl5, 0, len);
+
+	INIT_TP_WR(rpl5, csk->tid);
+	OPCODE_TID(rpl5) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL,
+						     csk->tid));
+	best_mtu(csk->com.cdev->lldi.mtus, csk->mtu, &mtu_idx,
+		 req->tcpopt.tstamp,
+		 (csk->com.remote_addr.ss_family == AF_INET) ? 0 : 1);
+	wscale = compute_wscale(csk->rcv_win);
+	/*
+	 * Specify the largest window that will fit in opt0. The
+	 * remainder will be specified in the rx_data_ack.
+	 */
+	win = csk->rcv_win >> 10;
+	if (win > RCV_BUFSIZ_M)
+		win = RCV_BUFSIZ_M;
+	opt0 =  TCAM_BYPASS_F |
+		WND_SCALE_V(wscale) |
+		MSS_IDX_V(mtu_idx) |
+		L2T_IDX_V(csk->l2t->idx) |
+		TX_CHAN_V(csk->tx_chan) |
+		SMAC_SEL_V(csk->smac_idx) |
+		DSCP_V(csk->tos >> 2) |
+		ULP_MODE_V(ULP_MODE_ISCSI) |
+		RCV_BUFSIZ_V(win);
+
+	opt2 = RX_CHANNEL_V(0) |
+		RSS_QUEUE_VALID_F | RSS_QUEUE_V(csk->rss_qid);
+
+	if (req->tcpopt.tstamp)
+		opt2 |= TSTAMPS_EN_F;
+	if (req->tcpopt.sack)
+		opt2 |= SACK_EN_F;
+	if (wscale)
+		opt2 |= WND_SCALE_EN_F;
+
+	hlen = ntohl(req->hdr_len);
+	tcph = (const void *)(req + 1) + ETH_HDR_LEN_G(hlen) +
+		IP_HDR_LEN_G(hlen);
+
+	if (tcph->ece && tcph->cwr)
+		opt2 |= CCTRL_ECN_V(1);
+
+	opt2 |= RX_COALESCE_V(3);
+	opt2 |= CONG_CNTRL_V(CONG_ALG_NEWRENO);
+
+	opt2 |= T5_ISS_F;
+	rpl5->iss = cpu_to_be32((prandom_u32() & ~7UL) - 1);
+
+	opt2 |= T5_OPT_2_VALID_F;
+
+	rpl5->opt0 = cpu_to_be64(opt0);
+	rpl5->opt2 = cpu_to_be32(opt2);
+	set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->ctrlq_idx);
+	t4_set_arp_err_handler(skb, NULL, arp_failure_discard);
+	cxgbit_l2t_send(csk->com.cdev, skb, csk->l2t);
+}
+
+static void cxgbit_pass_accept_req(struct cxgbit_device *cdev,
+				   struct sk_buff *skb)
+{
+	struct cxgbit_sock *csk = NULL;
+	struct cxgbit_np *cnp;
+	struct cpl_pass_accept_req *req = cplhdr(skb);
+	unsigned int stid = PASS_OPEN_TID_G(ntohl(req->tos_stid));
+	struct tid_info *t = cdev->lldi.tids;
+	unsigned int tid = GET_TID(req);
+	u16 peer_mss = ntohs(req->tcpopt.mss);
+	unsigned short hdrs;
+
+	struct dst_entry *dst;
+	__u8 local_ip[16], peer_ip[16];
+	__be16 local_port, peer_port;
+	int ret;
+	int iptype;
+
+	pr_debug("%s: cdev = %p; stid = %u; tid = %u\n",
+		 __func__, cdev, stid, tid);
+
+	cnp = lookup_stid(t, stid);
+	if (!cnp) {
+		pr_err("%s connect request on invalid stid %d\n",
+		       __func__, stid);
+		goto rel_skb;
+	}
+
+	if (cnp->com.state != CSK_STATE_LISTEN) {
+		pr_err("%s - listening parent not in CSK_STATE_LISTEN\n",
+		       __func__);
+		goto reject;
+	}
+
+	csk = lookup_tid(t, tid);
+	if (csk) {
+		pr_err("%s csk not null tid %u\n",
+		       __func__, tid);
+		goto rel_skb;
+	}
+
+	get_tuple_info(req, &iptype, local_ip, peer_ip,
+		       &local_port, &peer_port);
+
+	/* Find output route */
+	if (iptype == 4)  {
+		pr_debug("%s parent sock %p tid %u laddr %pI4 raddr %pI4 lport %d rport %d peer_mss %d\n",
+			 __func__, cnp, tid, local_ip, peer_ip,
+			 ntohs(local_port), ntohs(peer_port), peer_mss);
+		dst = find_route(cdev, *(__be32 *)local_ip,
+				 *(__be32 *)peer_ip,
+				 local_port, peer_port,
+				 PASS_OPEN_TOS_G(ntohl(req->tos_stid)));
+	} else {
+		pr_debug("%s parent sock %p tid %u laddr %pI6 raddr %pI6 lport %d rport %d peer_mss %d\n",
+			 __func__, cnp, tid, local_ip, peer_ip,
+			 ntohs(local_port), ntohs(peer_port), peer_mss);
+		dst = find_route6(cdev, local_ip, peer_ip,
+				  local_port, peer_port,
+				  PASS_OPEN_TOS_G(ntohl(req->tos_stid)),
+				  ((struct sockaddr_in6 *)
+				  &cnp->com.local_addr)->sin6_scope_id);
+	}
+	if (!dst) {
+		pr_err("%s - failed to find dst entry!\n",
+		       __func__);
+		goto reject;
+	}
+
+	csk = kzalloc(sizeof(*csk), GFP_ATOMIC);
+	if (!csk) {
+		dst_release(dst);
+		goto rel_skb;
+	}
+
+	ret = offload_init(csk, iptype, peer_ip, ntohs(local_port), dst, cdev);
+	if (ret) {
+		pr_err("%s - failed to allocate l2t entry!\n",
+		       __func__);
+		dst_release(dst);
+		kfree(csk);
+		goto reject;
+	}
+
+	kref_init(&csk->kref);
+	init_completion(&csk->com.wr_wait.completion);
+
+	INIT_LIST_HEAD(&csk->accept_node);
+
+	hdrs = (iptype == 4 ? sizeof(struct iphdr) : sizeof(struct ipv6hdr)) +
+		sizeof(struct tcphdr) +	(req->tcpopt.tstamp ? 12 : 0);
+	if (peer_mss && csk->mtu > (peer_mss + hdrs))
+		csk->mtu = peer_mss + hdrs;
+
+	csk->com.state = CSK_STATE_CONNECTING;
+	csk->com.cdev = cdev;
+	csk->cnp = cnp;
+	csk->tos = PASS_OPEN_TOS_G(ntohl(req->tos_stid));
+	csk->dst = dst;
+	csk->tid = tid;
+	csk->wr_cred = cdev->lldi.wr_cred -
+			DIV_ROUND_UP(sizeof(struct cpl_abort_req), 16);
+	csk->wr_max_cred = csk->wr_cred;
+	csk->wr_una_cred = 0;
+
+	if (iptype == 4) {
+		struct sockaddr_in *sin = (struct sockaddr_in *)
+					  &csk->com.local_addr;
+		sin->sin_family = AF_INET;
+		sin->sin_port = local_port;
+		sin->sin_addr.s_addr = *(__be32 *)local_ip;
+
+		sin = (struct sockaddr_in *)&csk->com.remote_addr;
+		sin->sin_family = AF_INET;
+		sin->sin_port = peer_port;
+		sin->sin_addr.s_addr = *(__be32 *)peer_ip;
+	} else {
+		struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)
+					    &csk->com.local_addr;
+
+		sin6->sin6_family = PF_INET6;
+		sin6->sin6_port = local_port;
+		memcpy(sin6->sin6_addr.s6_addr, local_ip, 16);
+		cxgb4_clip_get(cdev->lldi.ports[0],
+			       (const u32 *)&sin6->sin6_addr.s6_addr,
+			       1);
+
+		sin6 = (struct sockaddr_in6 *)&csk->com.remote_addr;
+		sin6->sin6_family = PF_INET6;
+		sin6->sin6_port = peer_port;
+		memcpy(sin6->sin6_addr.s6_addr, peer_ip, 16);
+	}
+
+	skb_queue_head_init(&csk->rxq);
+	skb_queue_head_init(&csk->txq);
+	skb_queue_head_init(&csk->ppodq);
+	skb_queue_head_init(&csk->backlogq);
+	skb_queue_head_init(&csk->skbq);
+	cxgbit_sock_reset_wr_list(csk);
+	spin_lock_init(&csk->lock);
+	init_waitqueue_head(&csk->waitq);
+	init_waitqueue_head(&csk->ack_waitq);
+	csk->lock_owner = false;
+
+	if (cxgbit_alloc_csk_skb(csk)) {
+		dst_release(dst);
+		kfree(csk);
+		goto rel_skb;
+	}
+
+	cxgbit_get_cdev(cdev);
+
+	spin_lock(&cdev->cskq.lock);
+	list_add_tail(&csk->list, &cdev->cskq.list);
+	spin_unlock(&cdev->cskq.lock);
+
+	cxgb4_insert_tid(t, csk, tid);
+	cxgbit_pass_accept_rpl(csk, req);
+	goto rel_skb;
+
+reject:
+	cxgbit_release_tid(cdev, tid);
+rel_skb:
+	__kfree_skb(skb);
+}
+
+static u32 tx_flowc_wr_credits(struct cxgbit_sock *csk,
+			       u32 *nparamsp, u32 *flowclenp)
+{
+	u32 nparams, flowclen16, flowclen;
+
+	nparams = FLOWC_WR_NPARAMS_MIN;
+
+	if (csk->snd_wscale)
+		nparams++;
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+	nparams++;
+#endif
+	flowclen = offsetof(struct fw_flowc_wr, mnemval[nparams]);
+	flowclen16 = DIV_ROUND_UP(flowclen, 16);
+	flowclen = flowclen16 * 16;
+	/*
+	 * Return the number of 16-byte credits used by the flowc request.
+	 * Pass back the nparams and actual flowc length if requested.
+	 */
+	if (nparamsp)
+		*nparamsp = nparams;
+	if (flowclenp)
+		*flowclenp = flowclen;
+	return flowclen16;
+}
+
+u32 send_tx_flowc_wr(struct cxgbit_sock *csk)
+{
+	struct cxgbit_device *cdev = csk->com.cdev;
+	struct fw_flowc_wr *flowc;
+	u32 nparams, flowclen16, flowclen;
+	struct sk_buff *skb;
+	u8 index;
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+	u16 vlan = ((struct l2t_entry *)csk->l2t)->vlan;
+#endif
+
+	flowclen16 = tx_flowc_wr_credits(csk, &nparams, &flowclen);
+
+	skb = __skb_dequeue(&csk->skbq);
+	flowc = (struct fw_flowc_wr *)__skb_put(skb, flowclen);
+	memset(flowc, 0, flowclen);
+
+	flowc->op_to_nparams = cpu_to_be32(FW_WR_OP_V(FW_FLOWC_WR) |
+					   FW_FLOWC_WR_NPARAMS_V(nparams));
+	flowc->flowid_len16 = cpu_to_be32(FW_WR_LEN16_V(flowclen16) |
+					  FW_WR_FLOWID_V(csk->tid));
+	flowc->mnemval[0].mnemonic = FW_FLOWC_MNEM_PFNVFN;
+	flowc->mnemval[0].val = cpu_to_be32(FW_PFVF_CMD_PFN_V
+					    (csk->com.cdev->lldi.pf));
+	flowc->mnemval[1].mnemonic = FW_FLOWC_MNEM_CH;
+	flowc->mnemval[1].val = cpu_to_be32(csk->tx_chan);
+	flowc->mnemval[2].mnemonic = FW_FLOWC_MNEM_PORT;
+	flowc->mnemval[2].val = cpu_to_be32(csk->tx_chan);
+	flowc->mnemval[3].mnemonic = FW_FLOWC_MNEM_IQID;
+	flowc->mnemval[3].val = cpu_to_be32(csk->rss_qid);
+	flowc->mnemval[4].mnemonic = FW_FLOWC_MNEM_SNDNXT;
+	flowc->mnemval[4].val = cpu_to_be32(csk->snd_nxt);
+	flowc->mnemval[5].mnemonic = FW_FLOWC_MNEM_RCVNXT;
+	flowc->mnemval[5].val = cpu_to_be32(csk->rcv_nxt);
+	flowc->mnemval[6].mnemonic = FW_FLOWC_MNEM_SNDBUF;
+	flowc->mnemval[6].val = cpu_to_be32(csk->snd_win);
+	flowc->mnemval[7].mnemonic = FW_FLOWC_MNEM_MSS;
+	flowc->mnemval[7].val = cpu_to_be32(csk->emss);
+
+	flowc->mnemval[8].mnemonic = FW_FLOWC_MNEM_TXDATAPLEN_MAX;
+	if (test_bit(CDEV_ISO_ENABLE, &cdev->flags))
+		flowc->mnemval[8].val = cpu_to_be32(CXGBIT_MAX_ISO_PAYLOAD);
+	else
+		flowc->mnemval[8].val = cpu_to_be32(16384);
+
+	index = 9;
+
+	if (csk->snd_wscale) {
+		flowc->mnemval[index].mnemonic = FW_FLOWC_MNEM_RCV_SCALE;
+		flowc->mnemval[index].val = cpu_to_be32(csk->snd_wscale);
+		index++;
+	}
+
+#ifdef CONFIG_CHELSIO_T4_DCB
+	flowc->mnemval[index].mnemonic = FW_FLOWC_MNEM_DCBPRIO;
+	if (vlan == VLAN_NONE) {
+		pr_warn("csk %u without VLAN Tag on DCB Link\n", csk->tid);
+		flowc->mnemval[index].val = cpu_to_be32(0);
+	} else {
+		flowc->mnemval[index].val = cpu_to_be32(
+				(vlan & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT);
+	}
+#endif
+
+	pr_debug("%s: csk %p; tx_chan = %u; rss_qid = %u; snd_seq = %u;"
+		 " rcv_seq = %u; snd_win = %u; emss = %u\n",
+		 __func__, csk, csk->tx_chan, csk->rss_qid, csk->snd_nxt,
+		 csk->rcv_nxt, csk->snd_win, csk->emss);
+	set_wr_txq(skb, CPL_PRIORITY_DATA, csk->txq_idx);
+	cxgbit_ofld_send(csk->com.cdev, skb);
+	return flowclen16;
+}
+
+int cxgbit_setup_conn_digest(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+	struct cpl_set_tcb_field *req;
+	u8 hcrc = csk->submode & 1;
+	u8 dcrc = csk->submode & 2;
+	unsigned int len = roundup(sizeof(*req), 16);
+	int ret;
+
+	skb = alloc_skb(len, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	/* set up ULP submode */
+	req = (struct cpl_set_tcb_field *)__skb_put(skb, len);
+	memset(req, 0, len);
+
+	INIT_TP_WR(req, csk->tid);
+	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_SET_TCB_FIELD, csk->tid));
+	req->reply_ctrl = htons(NO_REPLY_V(0) | QUEUENO_V(csk->rss_qid));
+	req->word_cookie = htons(0);
+	req->mask = cpu_to_be64(0x3 << 4);
+	req->val = cpu_to_be64(((hcrc ? ULP_CRC_HEADER : 0) |
+				(dcrc ? ULP_CRC_DATA : 0)) << 4);
+	set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->ctrlq_idx);
+
+	cxgbit_get_csk(csk);
+	cxgbit_init_wr_wait(&csk->com.wr_wait);
+
+	cxgbit_ofld_send(csk->com.cdev, skb);
+
+	ret = cxgbit_wait_for_reply(csk->com.cdev,
+				    &csk->com.wr_wait,
+				    csk->tid, 5, __func__);
+	if (ret)
+		return -1;
+
+	return 0;
+}
+
+int cxgbit_setup_conn_pgidx(struct cxgbit_sock *csk, u32 pg_idx)
+{
+	struct sk_buff *skb;
+	struct cpl_set_tcb_field *req;
+	unsigned int len = roundup(sizeof(*req), 16);
+	int ret;
+
+	skb = alloc_skb(len, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	req = (struct cpl_set_tcb_field *)__skb_put(skb, len);
+	memset(req, 0, len);
+
+	INIT_TP_WR(req, csk->tid);
+	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_SET_TCB_FIELD, csk->tid));
+	req->reply_ctrl = htons(NO_REPLY_V(0) | QUEUENO_V(csk->rss_qid));
+	req->word_cookie = htons(0);
+	req->mask = cpu_to_be64(0x3 << 8);
+	req->val = cpu_to_be64(pg_idx << 8);
+	set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->ctrlq_idx);
+
+	cxgbit_get_csk(csk);
+	cxgbit_init_wr_wait(&csk->com.wr_wait);
+
+	cxgbit_ofld_send(csk->com.cdev, skb);
+
+	ret = cxgbit_wait_for_reply(csk->com.cdev,
+				    &csk->com.wr_wait,
+				    csk->tid, 5, __func__);
+	if (ret)
+		return -1;
+
+	return 0;
+}
+
+static void cxgbit_pass_open_rpl(struct cxgbit_device *cdev,
+				 struct sk_buff *skb)
+{
+	struct cpl_pass_open_rpl *rpl = cplhdr(skb);
+	struct tid_info *t = cdev->lldi.tids;
+	unsigned int stid = GET_TID(rpl);
+	struct cxgbit_np *cnp = lookup_stid(t, stid);
+
+	pr_debug("%s: cnp = %p; stid = %u; status = %d\n",
+		 __func__, cnp, stid, rpl->status);
+
+	if (!cnp) {
+		pr_info("%s stid %d lookup failure\n", __func__, stid);
+		return;
+	}
+
+	cxgbit_wake_up(&cnp->com.wr_wait, __func__, rpl->status);
+	cxgbit_put_cnp(cnp);
+}
+
+static void cxgbit_close_listsrv_rpl(struct cxgbit_device *cdev,
+				     struct sk_buff *skb)
+{
+	struct cpl_close_listsvr_rpl *rpl = cplhdr(skb);
+	struct tid_info *t = cdev->lldi.tids;
+	unsigned int stid = GET_TID(rpl);
+	struct cxgbit_np *cnp = lookup_stid(t, stid);
+
+	pr_debug("%s: cnp = %p; stid = %u; status = %d\n",
+		 __func__, cnp, stid, rpl->status);
+
+	if (!cnp) {
+		pr_info("%s stid %d lookup failure\n", __func__, stid);
+		return;
+	}
+
+	cxgbit_wake_up(&cnp->com.wr_wait, __func__, rpl->status);
+	cxgbit_put_cnp(cnp);
+}
+
+static void cxgbit_pass_establish(struct cxgbit_device *cdev,
+				  struct sk_buff *skb)
+{
+	struct cpl_pass_establish *req = cplhdr(skb);
+	struct tid_info *t = cdev->lldi.tids;
+	unsigned int tid = GET_TID(req);
+	struct cxgbit_sock *csk;
+	struct cxgbit_np *cnp;
+	u16 tcp_opt = be16_to_cpu(req->tcp_opt);
+	u32 snd_isn = be32_to_cpu(req->snd_isn);
+	u32 rcv_isn = be32_to_cpu(req->rcv_isn);
+
+	csk = lookup_tid(t, tid);
+	if (unlikely(!csk)) {
+		pr_err("can't find connection for tid %u.\n", tid);
+		goto rel_skb;
+	}
+	cnp = csk->cnp;
+
+	pr_debug("%s: csk %p; tid %u; cnp %p\n",
+		 __func__, csk, tid, cnp);
+
+	csk->write_seq = snd_isn;
+	csk->snd_una = snd_isn;
+	csk->snd_nxt = snd_isn;
+
+	csk->rcv_nxt = rcv_isn;
+
+	if (csk->rcv_win > (RCV_BUFSIZ_M << 10))
+		csk->rx_credits = (csk->rcv_win - (RCV_BUFSIZ_M << 10));
+
+	csk->snd_wscale = TCPOPT_SND_WSCALE_G(tcp_opt);
+	set_emss(csk, tcp_opt);
+	dst_confirm(csk->dst);
+	csk->com.state = CSK_STATE_ESTABLISHED;
+	spin_lock_bh(&cnp->np_accept_lock);
+	list_add_tail(&csk->accept_node, &cnp->np_accept_list);
+	spin_unlock_bh(&cnp->np_accept_lock);
+	complete(&cnp->accept_comp);
+rel_skb:
+	__kfree_skb(skb);
+}
+
+static void cxgbit_queue_rx_skb(struct cxgbit_sock *csk,
+				struct sk_buff *skb)
+{
+	cxgbit_skcb_flags(skb) = 0;
+	spin_lock_bh(&csk->rxq.lock);
+	__skb_queue_tail(&csk->rxq, skb);
+	spin_unlock_bh(&csk->rxq.lock);
+	wake_up(&csk->waitq);
+}
+
+static void cxgbit_peer_close(struct cxgbit_sock *csk,
+			      struct sk_buff *skb)
+{
+	pr_debug("%s: csk %p; tid %u; state %d\n",
+		 __func__, csk, csk->tid, csk->com.state);
+
+	switch (csk->com.state) {
+	case CSK_STATE_ESTABLISHED:
+		csk->com.state = CSK_STATE_CLOSING;
+		cxgbit_queue_rx_skb(csk, skb);
+		return;
+	case CSK_STATE_CLOSING:
+		/* simultaneous close */
+		csk->com.state = CSK_STATE_MORIBUND;
+		break;
+	case CSK_STATE_MORIBUND:
+		csk->com.state = CSK_STATE_DEAD;
+		cxgbit_put_csk(csk);
+		break;
+	case CSK_STATE_ABORTING:
+		break;
+	default:
+		pr_info("%s: cpl_peer_close in bad state %d\n",
+			__func__, csk->com.state);
+	}
+
+	__kfree_skb(skb);
+}
+
+static void cxgbit_close_con_rpl(struct cxgbit_sock *csk,
+				 struct sk_buff *skb)
+{
+	pr_debug("%s: csk %p; tid %u; state %d\n",
+		 __func__, csk, csk->tid, csk->com.state);
+
+	switch (csk->com.state) {
+	case CSK_STATE_CLOSING:
+		csk->com.state = CSK_STATE_MORIBUND;
+		break;
+	case CSK_STATE_MORIBUND:
+		csk->com.state = CSK_STATE_DEAD;
+		cxgbit_put_csk(csk);
+		break;
+	case CSK_STATE_ABORTING:
+	case CSK_STATE_DEAD:
+		break;
+	default:
+		pr_info("%s: cpl_close_con_rpl in bad state %d\n",
+			__func__, csk->com.state);
+	}
+
+	__kfree_skb(skb);
+}
+
+static void cxgbit_abort_req_rss(struct cxgbit_sock *csk,
+				 struct sk_buff *skb)
+{
+	struct cpl_abort_req_rss *hdr = cplhdr(skb);
+	unsigned int tid = GET_TID(hdr);
+	struct cpl_abort_rpl *rpl;
+	struct sk_buff *rpl_skb;
+	bool release = false;
+	bool wakeup_thread = false;
+	unsigned int len = roundup(sizeof(*rpl), 16);
+
+	pr_debug("%s: csk %p; tid %u; state %d\n",
+		 __func__, csk, tid, csk->com.state);
+
+	if (is_neg_adv(hdr->status)) {
+		pr_err("%s: got neg advise %d on tid %u\n",
+		       __func__, hdr->status, tid);
+		goto rel_skb;
+	}
+
+	switch (csk->com.state) {
+	case CSK_STATE_CONNECTING:
+	case CSK_STATE_MORIBUND:
+		csk->com.state = CSK_STATE_DEAD;
+		release = true;
+		break;
+	case CSK_STATE_ESTABLISHED:
+		csk->com.state = CSK_STATE_DEAD;
+		wakeup_thread = true;
+		break;
+	case CSK_STATE_CLOSING:
+		csk->com.state = CSK_STATE_DEAD;
+		if (!csk->conn)
+			release = true;
+		break;
+	case CSK_STATE_ABORTING:
+		break;
+	default:
+		pr_info("%s: cpl_abort_req_rss in bad state %d\n",
+			__func__, csk->com.state);
+		csk->com.state = CSK_STATE_DEAD;
+	}
+
+	__skb_queue_purge(&csk->txq);
+
+	if (!test_and_set_bit(CSK_TX_DATA_SENT, &csk->com.flags))
+		send_tx_flowc_wr(csk);
+
+	rpl_skb = __skb_dequeue(&csk->skbq);
+	set_wr_txq(rpl_skb, CPL_PRIORITY_DATA, csk->txq_idx);
+
+	rpl = (struct cpl_abort_rpl *)__skb_put(rpl_skb, len);
+	memset(rpl, 0, len);
+
+	INIT_TP_WR(rpl, csk->tid);
+	OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_ABORT_RPL, tid));
+	rpl->cmd = CPL_ABORT_NO_RST;
+	cxgbit_ofld_send(csk->com.cdev, rpl_skb);
+
+	if (wakeup_thread) {
+		cxgbit_queue_rx_skb(csk, skb);
+		return;
+	}
+
+	if (release)
+		cxgbit_put_csk(csk);
+rel_skb:
+	__kfree_skb(skb);
+}
+
+static void cxgbit_abort_rpl_rss(struct cxgbit_sock *csk,
+				 struct sk_buff *skb)
+{
+	pr_debug("%s: csk %p; tid %u; state %d\n",
+		 __func__, csk, csk->tid, csk->com.state);
+
+	switch (csk->com.state) {
+	case CSK_STATE_ABORTING:
+		csk->com.state = CSK_STATE_DEAD;
+		cxgbit_put_csk(csk);
+		break;
+	default:
+		pr_info("%s: cpl_abort_rpl_rss in state %d\n",
+			__func__, csk->com.state);
+	}
+
+	__kfree_skb(skb);
+}
+
+static bool cxgbit_credit_err(const struct cxgbit_sock *csk)
+{
+	const struct sk_buff *skb = csk->wr_pending_head;
+	u32 credit = 0;
+
+	if (unlikely(csk->wr_cred > csk->wr_max_cred)) {
+		pr_err("csk 0x%p, tid %u, credit %u > %u\n",
+		       csk, csk->tid, csk->wr_cred, csk->wr_max_cred);
+		return true;
+	}
+
+	while (skb) {
+		credit += skb->csum;
+		skb = cxgbit_skcb_tx_wr_next(skb);
+	}
+
+	if (unlikely((csk->wr_cred + credit) != csk->wr_max_cred)) {
+		pr_err("csk 0x%p, tid %u, credit %u + %u != %u.\n",
+		       csk, csk->tid, csk->wr_cred,
+		       credit, csk->wr_max_cred);
+
+		return true;
+	}
+
+	return false;
+}
+
+static void cxgbit_fw4_ack(struct cxgbit_sock *csk, struct sk_buff *skb)
+{
+	struct cpl_fw4_ack *rpl = (struct cpl_fw4_ack *)cplhdr(skb);
+	u32 credits = rpl->credits;
+	u32 snd_una = ntohl(rpl->snd_una);
+
+	csk->wr_cred += credits;
+	if (csk->wr_una_cred > (csk->wr_max_cred - csk->wr_cred))
+		csk->wr_una_cred = csk->wr_max_cred - csk->wr_cred;
+
+	while (credits) {
+		struct sk_buff *p = cxgbit_sock_peek_wr(csk);
+
+		if (unlikely(!p)) {
+			pr_err("csk 0x%p,%u, cr %u,%u+%u, empty.\n",
+			       csk, csk->tid, credits,
+			       csk->wr_cred, csk->wr_una_cred);
+			break;
+		}
+
+		if (unlikely(credits < p->csum)) {
+			pr_warn("csk 0x%p,%u, cr %u,%u+%u, < %u.\n",
+				csk,  csk->tid,
+				credits, csk->wr_cred, csk->wr_una_cred,
+				p->csum);
+			p->csum -= credits;
+			break;
+		}
+
+		cxgbit_sock_dequeue_wr(csk);
+		credits -= p->csum;
+		kfree_skb(p);
+	}
+
+	if (unlikely(cxgbit_credit_err(csk))) {
+		cxgbit_queue_rx_skb(csk, skb);
+		return;
+	}
+
+	if (rpl->seq_vld & CPL_FW4_ACK_FLAGS_SEQVAL) {
+		if (unlikely(before(snd_una, csk->snd_una))) {
+			pr_warn("csk 0x%p,%u, snd_una %u/%u.\n",
+				csk, csk->tid, snd_una,
+				csk->snd_una);
+			goto rel_skb;
+		}
+
+		if (csk->snd_una != snd_una) {
+			csk->snd_una = snd_una;
+			dst_confirm(csk->dst);
+			wake_up(&csk->ack_waitq);
+		}
+	}
+
+	if (skb_queue_len(&csk->txq))
+		push_tx_frames(csk);
+
+rel_skb:
+	__kfree_skb(skb);
+}
+
+static void cxgbit_set_tcb_rpl(struct cxgbit_device *cdev,
+			       struct sk_buff *skb)
+{
+	struct cxgbit_sock *csk;
+	struct cpl_set_tcb_rpl *rpl = (struct cpl_set_tcb_rpl *)skb->data;
+	unsigned int tid = GET_TID(rpl);
+	struct cxgb4_lld_info *lldi = &cdev->lldi;
+	struct tid_info *t = lldi->tids;
+
+	csk = lookup_tid(t, tid);
+	if (unlikely(!csk)) {
+		pr_err("can't find connection for tid %u.\n", tid);
+		return;
+	}
+
+	cxgbit_wake_up(&csk->com.wr_wait, __func__, rpl->status);
+	cxgbit_put_csk(csk);
+}
+
+static void cxgbit_rx_data(struct cxgbit_device *cdev, struct sk_buff *skb)
+{
+	struct cxgbit_sock *csk;
+	struct cpl_rx_data *cpl = cplhdr(skb);
+	unsigned int tid = GET_TID(cpl);
+	struct cxgb4_lld_info *lldi = &cdev->lldi;
+	struct tid_info *t = lldi->tids;
+
+	csk = lookup_tid(t, tid);
+	if (unlikely(!csk)) {
+		pr_err("can't find conn. for tid %u.\n", tid);
+		goto rel_skb;
+	}
+
+	cxgbit_queue_rx_skb(csk, skb);
+	return;
+rel_skb:
+	__kfree_skb(skb);
+}
+
+static void __cxgbit_process_rx_cpl(struct cxgbit_sock *csk,
+				    struct sk_buff *skb)
+{
+	spin_lock(&csk->lock);
+	if (csk->lock_owner) {
+		__skb_queue_tail(&csk->backlogq, skb);
+		spin_unlock(&csk->lock);
+		return;
+	}
+
+	cxgbit_skcb_rx_backlog_fn(skb)(csk, skb);
+	spin_unlock(&csk->lock);
+}
+
+static void cxgbit_process_rx_cpl(struct cxgbit_sock *csk,
+				  struct sk_buff *skb)
+{
+	cxgbit_get_csk(csk);
+	__cxgbit_process_rx_cpl(csk, skb);
+	cxgbit_put_csk(csk);
+}
+
+static void cxgbit_rx_cpl(struct cxgbit_device *cdev, struct sk_buff *skb)
+{
+	struct cxgbit_sock *csk;
+	struct cpl_tx_data *cpl = cplhdr(skb);
+	struct cxgb4_lld_info *lldi = &cdev->lldi;
+	struct tid_info *t = lldi->tids;
+	unsigned int tid = GET_TID(cpl);
+	u8 opcode = cxgbit_skcb_rx_opcode(skb);
+	bool ref = true;
+
+	switch (opcode) {
+	case CPL_FW4_ACK:
+		cxgbit_skcb_rx_backlog_fn(skb) = cxgbit_fw4_ack;
+		ref = false;
+		break;
+	case CPL_PEER_CLOSE:
+		cxgbit_skcb_rx_backlog_fn(skb) = cxgbit_peer_close;
+		break;
+	case CPL_CLOSE_CON_RPL:
+		cxgbit_skcb_rx_backlog_fn(skb) = cxgbit_close_con_rpl;
+		break;
+	case CPL_ABORT_REQ_RSS:
+		cxgbit_skcb_rx_backlog_fn(skb) = cxgbit_abort_req_rss;
+		break;
+	case CPL_ABORT_RPL_RSS:
+		cxgbit_skcb_rx_backlog_fn(skb) = cxgbit_abort_rpl_rss;
+		break;
+	default:
+		goto rel_skb;
+	}
+
+	csk = lookup_tid(t, tid);
+	if (unlikely(!csk)) {
+		pr_err("can't find conn. for tid %u.\n", tid);
+		goto rel_skb;
+	}
+
+	if (ref)
+		cxgbit_process_rx_cpl(csk, skb);
+	else
+		__cxgbit_process_rx_cpl(csk, skb);
+
+	return;
+rel_skb:
+	__kfree_skb(skb);
+}
+
+cxgbit_cplhandler_func cxgbit_cplhandlers[NUM_CPL_CMDS] = {
+	[CPL_PASS_OPEN_RPL]	= cxgbit_pass_open_rpl,
+	[CPL_CLOSE_LISTSRV_RPL] = cxgbit_close_listsrv_rpl,
+	[CPL_PASS_ACCEPT_REQ]	= cxgbit_pass_accept_req,
+	[CPL_PASS_ESTABLISH]	= cxgbit_pass_establish,
+	[CPL_SET_TCB_RPL]	= cxgbit_set_tcb_rpl,
+	[CPL_RX_DATA]		= cxgbit_rx_data,
+	[CPL_FW4_ACK]		= cxgbit_rx_cpl,
+	[CPL_PEER_CLOSE]	= cxgbit_rx_cpl,
+	[CPL_CLOSE_CON_RPL]	= cxgbit_rx_cpl,
+	[CPL_ABORT_REQ_RSS]	= cxgbit_rx_cpl,
+	[CPL_ABORT_RPL_RSS]	= cxgbit_rx_cpl,
+};
-- 
2.0.2



* [RFC 31/34] cxgbit: add cxgbit_target.c
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (29 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 30/34] cxgbit: add cxgbit_cm.c Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 32/34] cxgbit: add cxgbit_ddp.c Varun Prakash
                   ` (3 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

This file contains code for processing
iSCSI PDUs.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/cxgbit_target.c | 2027 +++++++++++++++++++++++++++
 1 file changed, 2027 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_target.c

diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
new file mode 100644
index 0000000..528bd21
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
@@ -0,0 +1,2027 @@
+/*
+ * Copyright (c) 2016 Chelsio Communications, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/workqueue.h>
+#include <linux/kthread.h>
+#include <asm/unaligned.h>
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+#include "cxgbit.h"
+
+struct sge_opaque_hdr {
+	void *dev;
+	dma_addr_t addr[MAX_SKB_FRAGS + 1];
+};
+
+static const u8 cxgbit_digest_len[] = {0, 4, 4, 8};
+
+#define TX_HDR_LEN (sizeof(struct sge_opaque_hdr) + \
+		    sizeof(struct fw_ofld_tx_data_wr))
+
+static struct sk_buff *
+__cxgbit_alloc_skb(struct cxgbit_sock *csk, u32 len, bool iso)
+{
+	struct sk_buff *skb = NULL;
+	u8 submode = 0;
+	int errcode;
+	static const u32 hdr_len = TX_HDR_LEN + ISCSI_HDR_LEN;
+
+	if (len) {
+		skb = alloc_skb_with_frags(hdr_len, len,
+					   0, &errcode,
+					   GFP_KERNEL);
+		if (!skb)
+			return NULL;
+
+		memset(skb->data, 0, hdr_len);
+		skb_reserve(skb, TX_HDR_LEN);
+		skb_reset_transport_header(skb);
+		skb_put(skb, ISCSI_HDR_LEN);
+		skb->data_len = len;
+		skb->len += len;
+		submode |= (csk->submode & CXGBIT_SUBMODE_DCRC);
+
+	} else {
+		u32 iso_len = iso ? sizeof(struct cpl_tx_data_iso) : 0;
+
+		skb = alloc_skb(hdr_len + iso_len, GFP_KERNEL);
+		if (!skb)
+			return NULL;
+
+		memset(skb->data, 0, hdr_len + iso_len);
+		skb_reserve(skb, TX_HDR_LEN + iso_len);
+		skb_reset_transport_header(skb);
+		skb_put(skb, ISCSI_HDR_LEN);
+	}
+
+	submode |= (csk->submode & CXGBIT_SUBMODE_HCRC);
+	cxgbit_skcb_submode(skb) = submode;
+	cxgbit_skcb_tx_extralen(skb) = cxgbit_digest_len[submode];
+	cxgbit_skcb_flags(skb) |= SKCBF_TX_NEED_HDR;
+	return skb;
+}
+
+static struct sk_buff *cxgbit_alloc_skb(struct cxgbit_sock *csk, u32 len)
+{
+	return __cxgbit_alloc_skb(csk, len, false);
+}
+
+/*
+ * is_ofld_imm - check whether a packet can be sent as immediate data
+ * @skb: the packet
+ *
+ * Returns true if a packet can be sent as an offload WR with immediate
+ * data.  We currently use the same limit as for Ethernet packets.
+ */
+static int is_ofld_imm(const struct sk_buff *skb)
+{
+	int length = skb->len;
+
+	if (likely(cxgbit_skcb_flags(skb) & SKCBF_TX_NEED_HDR))
+		length += sizeof(struct fw_ofld_tx_data_wr);
+
+	if (likely(cxgbit_skcb_flags(skb) & SKCBF_TX_ISO))
+		length += sizeof(struct cpl_tx_data_iso);
+
+#define MAX_IMM_TX_PKT_LEN	256
+	return length <= MAX_IMM_TX_PKT_LEN;
+}
+
+/*
+ * sgl_len - calculates the size of an SGL of the given capacity
+ * @n: the number of SGL entries
+ * Calculates the number of flits needed for a scatter/gather list that
+ * can hold the given number of entries.
+ */
+static inline unsigned int sgl_len(unsigned int n)
+{
+	n--;
+	return (3 * n) / 2 + (n & 1) + 2;
+}
+
+/*
+ * calc_tx_flits_ofld - calculate # of flits for an offload packet
+ * @skb: the packet
+ *
+ * Returns the number of flits needed for the given offload packet.
+ * These packets are already fully constructed and no additional headers
+ * will be added.
+ */
+static unsigned int calc_tx_flits_ofld(const struct sk_buff *skb)
+{
+	unsigned int flits, cnt;
+
+	if (is_ofld_imm(skb))
+		return DIV_ROUND_UP(skb->len, 8);
+	flits = skb_transport_offset(skb) / 8;
+	cnt = skb_shinfo(skb)->nr_frags;
+	if (skb_tail_pointer(skb) != skb_transport_header(skb))
+		cnt++;
+	return flits + sgl_len(cnt);
+}
+
+#define CXGBIT_ISO_FSLICE 0x1
+#define CXGBIT_ISO_LSLICE 0x2
+static void make_cpl_tx_data_iso(struct sk_buff *skb,
+				 struct cxgbit_iso_info *iso_info)
+{
+	struct cpl_tx_data_iso *cpl;
+	unsigned int submode = cxgbit_skcb_submode(skb);
+	unsigned int fslice = !!(iso_info->flags & CXGBIT_ISO_FSLICE);
+	unsigned int lslice = !!(iso_info->flags & CXGBIT_ISO_LSLICE);
+
+	cpl = (struct cpl_tx_data_iso *)__skb_push(skb, sizeof(*cpl));
+
+	cpl->op_to_scsi = htonl(CPL_TX_DATA_ISO_OP_V(CPL_TX_DATA_ISO) |
+			CPL_TX_DATA_ISO_FIRST_V(fslice) |
+			CPL_TX_DATA_ISO_LAST_V(lslice) |
+			CPL_TX_DATA_ISO_CPLHDRLEN_V(0) |
+			CPL_TX_DATA_ISO_HDRCRC_V(submode & 1) |
+			CPL_TX_DATA_ISO_PLDCRC_V(((submode >> 1) & 1)) |
+			CPL_TX_DATA_ISO_IMMEDIATE_V(0) |
+			CPL_TX_DATA_ISO_SCSI_V(2));
+
+	cpl->ahs_len = 0;
+	cpl->mpdu = htons(DIV_ROUND_UP(iso_info->mpdu, 4));
+	cpl->burst_size = htonl(DIV_ROUND_UP(iso_info->burst_len, 4));
+	cpl->len = htonl(iso_info->len);
+	cpl->reserved2_seglen_offset = htonl(0);
+	cpl->datasn_offset = htonl(0);
+	cpl->buffer_offset = htonl(0);
+	cpl->reserved3 = 0;
+
+	__skb_pull(skb, sizeof(*cpl));
+}
+
+static void make_tx_data_wr(struct cxgbit_sock *csk,
+			    struct sk_buff *skb, u32 dlen,
+			    u32 len, u32 credits, u32 compl)
+{
+	struct fw_ofld_tx_data_wr *req;
+	u32 submode = cxgbit_skcb_submode(skb);
+	u32 wr_ulp_mode = 0;
+	u32 hdr_size = sizeof(*req);
+	u32 opcode = FW_OFLD_TX_DATA_WR;
+	u32 immlen = 0;
+	u32 force = TX_FORCE_V(!submode);
+
+	if (cxgbit_skcb_flags(skb) & SKCBF_TX_ISO) {
+		opcode = FW_ISCSI_TX_DATA_WR;
+		immlen += sizeof(struct cpl_tx_data_iso);
+		hdr_size += sizeof(struct cpl_tx_data_iso);
+		submode |= 8;
+	}
+
+	if (is_ofld_imm(skb))
+		immlen += dlen;
+
+	req = (struct fw_ofld_tx_data_wr *)__skb_push(skb, hdr_size);
+	req->op_to_immdlen = cpu_to_be32(FW_WR_OP_V(opcode) |
+					FW_WR_COMPL_V(compl) |
+					FW_WR_IMMDLEN_V(immlen));
+	req->flowid_len16 = cpu_to_be32(FW_WR_FLOWID_V(csk->tid) |
+					FW_WR_LEN16_V(credits));
+	req->plen = htonl(len);
+	wr_ulp_mode = FW_OFLD_TX_DATA_WR_ULPMODE_V(ULP_MODE_ISCSI) |
+				FW_OFLD_TX_DATA_WR_ULPSUBMODE_V(submode);
+
+	req->tunnel_to_proxy = htonl((wr_ulp_mode) | force |
+		 FW_OFLD_TX_DATA_WR_SHOVE_V(skb_peek(&csk->txq) ? 0 : 1));
+}
+
+static void arp_failure_skb_discard(void *handle, struct sk_buff *skb)
+{
+	kfree_skb(skb);
+}
+
+void push_tx_frames(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+
+	while (csk->wr_cred && ((skb = skb_peek(&csk->txq)) != NULL)) {
+		u32 dlen = skb->len;
+		u32 len = skb->len;
+		u32 credits_needed;
+		u32 compl = 0;
+		u32 flowclen16 = 0;
+		u32 iso_cpl_len = 0;
+
+		if (cxgbit_skcb_flags(skb) & SKCBF_TX_ISO)
+			iso_cpl_len = sizeof(struct cpl_tx_data_iso);
+
+		if (is_ofld_imm(skb))
+			credits_needed = DIV_ROUND_UP(dlen + iso_cpl_len, 16);
+		else
+			credits_needed = DIV_ROUND_UP((8 *
+					calc_tx_flits_ofld(skb)) +
+					iso_cpl_len, 16);
+
+		if (likely(cxgbit_skcb_flags(skb) & SKCBF_TX_NEED_HDR))
+			credits_needed += DIV_ROUND_UP(
+				sizeof(struct fw_ofld_tx_data_wr), 16);
+		/*
+		 * Assumes the initial credits is large enough to support
+		 * fw_flowc_wr plus largest possible first payload
+		 */
+
+		if (!test_and_set_bit(CSK_TX_DATA_SENT, &csk->com.flags)) {
+			flowclen16 = send_tx_flowc_wr(csk);
+			csk->wr_cred -= flowclen16;
+			csk->wr_una_cred += flowclen16;
+		}
+
+		if (csk->wr_cred < credits_needed) {
+			pr_debug("csk 0x%p, skb %u/%u, wr %d < %u.\n",
+				 csk, skb->len, skb->data_len,
+				 credits_needed, csk->wr_cred);
+			break;
+		}
+		__skb_unlink(skb, &csk->txq);
+		set_wr_txq(skb, CPL_PRIORITY_DATA, csk->txq_idx);
+		skb->csum = credits_needed + flowclen16;
+		csk->wr_cred -= credits_needed;
+		csk->wr_una_cred += credits_needed;
+
+		pr_debug("csk 0x%p, skb %u/%u, wr %d, left %u, unack %u.\n",
+			 csk, skb->len, skb->data_len, credits_needed,
+			 csk->wr_cred, csk->wr_una_cred);
+
+		if (likely(cxgbit_skcb_flags(skb) & SKCBF_TX_NEED_HDR)) {
+			len += cxgbit_skcb_tx_extralen(skb);
+
+			if ((csk->wr_una_cred >= (csk->wr_max_cred / 2)) ||
+			    (!before(csk->write_seq,
+				     csk->snd_una + csk->snd_win))) {
+				compl = 1;
+				csk->wr_una_cred = 0;
+			}
+
+			make_tx_data_wr(csk, skb, dlen, len, credits_needed,
+					compl);
+			csk->snd_nxt += len;
+
+		} else if ((cxgbit_skcb_flags(skb) & SKCBF_TX_FLAG_COMPL) ||
+			   (csk->wr_una_cred >= (csk->wr_max_cred / 2))) {
+			struct cpl_close_con_req *req =
+				(struct cpl_close_con_req *)skb->data;
+			req->wr.wr_hi |= htonl(FW_WR_COMPL_F);
+			csk->wr_una_cred = 0;
+		}
+
+		cxgbit_sock_enqueue_wr(csk, skb);
+		t4_set_arp_err_handler(skb, csk, arp_failure_skb_discard);
+
+		pr_debug("csk 0x%p,%u, skb 0x%p, %u.\n",
+			 csk, csk->tid, skb, len);
+
+		cxgbit_l2t_send(csk->com.cdev, skb, csk->l2t);
+	}
+}
+
+static bool cxgbit_lock_sock(struct cxgbit_sock *csk)
+{
+	spin_lock_bh(&csk->lock);
+
+	if (before(csk->write_seq, csk->snd_una + csk->snd_win))
+		csk->lock_owner = true;
+
+	spin_unlock_bh(&csk->lock);
+
+	return csk->lock_owner;
+}
+
+static void cxgbit_unlock_sock(struct cxgbit_sock *csk)
+{
+	struct sk_buff_head backlogq;
+	struct sk_buff *skb;
+	void (*fn)(struct cxgbit_sock *, struct sk_buff *);
+
+	skb_queue_head_init(&backlogq);
+
+	spin_lock_bh(&csk->lock);
+	while (skb_queue_len(&csk->backlogq)) {
+		skb_queue_splice_init(&csk->backlogq, &backlogq);
+		spin_unlock_bh(&csk->lock);
+
+		while ((skb = __skb_dequeue(&backlogq))) {
+			fn = cxgbit_skcb_rx_backlog_fn(skb);
+			fn(csk, skb);
+		}
+
+		spin_lock_bh(&csk->lock);
+	}
+
+	csk->lock_owner = false;
+	spin_unlock_bh(&csk->lock);
+}
+
+static int
+cxgbit_queue_skb(struct cxgbit_sock *csk, struct sk_buff *skb)
+{
+	int ret = 0;
+
+	wait_event_interruptible(csk->ack_waitq, cxgbit_lock_sock(csk));
+
+	if (unlikely((csk->com.state != CSK_STATE_ESTABLISHED) ||
+		     signal_pending(current))) {
+		__kfree_skb(skb);
+		ret = -1;
+		spin_lock_bh(&csk->lock);
+		if (csk->lock_owner) {
+			spin_unlock_bh(&csk->lock);
+			goto unlock;
+		}
+		spin_unlock_bh(&csk->lock);
+		return ret;
+	}
+
+	csk->write_seq += skb->len +
+			  cxgbit_skcb_tx_extralen(skb);
+
+	skb_queue_splice_tail_init(&csk->ppodq, &csk->txq);
+	__skb_queue_tail(&csk->txq, skb);
+	push_tx_frames(csk);
+
+unlock:
+	cxgbit_unlock_sock(csk);
+	return ret;
+}
+
+static int cxgbit_send_r2t(struct iscsi_cmd *cmd,
+			   struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
+	struct sk_buff *skb;
+	struct iscsi_r2t *r2t;
+	struct iscsi_r2t_rsp *hdr;
+
+	r2t = iscsit_get_r2t_from_list(cmd);
+	if (!r2t)
+		return -1;
+
+	skb = cxgbit_alloc_skb(csk, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	if (ccmd->setup_ddp) {
+		if (test_bit(CSK_DDP_ENABLE, &csk->com.flags))
+			cxgbit_reserve_ttt(csk, cmd);
+
+		ccmd->setup_ddp = false;
+	}
+
+	r2t->targ_xfer_tag = ccmd->ttinfo.tag;
+
+	hdr = (struct iscsi_r2t_rsp *)skb->data;
+	iscsit_build_r2t_pdu(cmd, conn, r2t, hdr);
+
+	pr_debug("Built %sR2T, ITT: 0x%08x, TTT: 0x%08x, StatSN:"
+		" 0x%08x, R2TSN: 0x%08x, Offset: %u, DDTL: %u, CID: %hu\n",
+		(!r2t->recovery_r2t) ? "" : "Recovery ", cmd->init_task_tag,
+		r2t->targ_xfer_tag, ntohl(hdr->statsn), r2t->r2t_sn,
+			r2t->offset, r2t->xfer_len, conn->cid);
+
+	spin_lock_bh(&cmd->r2t_lock);
+	r2t->sent_r2t = 1;
+	spin_unlock_bh(&cmd->r2t_lock);
+
+	if (cxgbit_queue_skb(csk, skb)) {
+		skb_queue_purge(&csk->ppodq);
+		return -1;
+	}
+
+	spin_lock_bh(&cmd->dataout_timeout_lock);
+	iscsit_start_dataout_timer(cmd, conn);
+	spin_unlock_bh(&cmd->dataout_timeout_lock);
+
+	return 0;
+}
+
+static int
+cxgbit_send_unsolicited_nopin(struct iscsi_cmd *cmd,
+			      struct iscsi_conn *conn,
+			      bool want_response)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+
+	skb = cxgbit_alloc_skb(csk, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	iscsit_build_nopin_rsp(cmd, conn, (struct iscsi_nopin *)(skb->data),
+			       false);
+
+	if (cxgbit_queue_skb(csk, skb))
+		return -1;
+
+	spin_lock_bh(&cmd->istate_lock);
+	cmd->i_state = want_response ?
+			ISTATE_SENT_NOPIN_WANT_RESPONSE : ISTATE_SENT_STATUS;
+	spin_unlock_bh(&cmd->istate_lock);
+
+	return 0;
+}
+
+static int cxgbit_send_response(struct iscsi_cmd *cmd,
+				struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+	struct iscsi_scsi_rsp *hdr;
+	u32 padding = 0, tx_size = 0;
+	bool inc_stat_sn = (cmd->i_state == ISTATE_SEND_STATUS);
+
+	/*
+	 * Attach SENSE DATA payload to iSCSI Response PDU
+	 */
+	if (cmd->se_cmd.sense_buffer &&
+	    ((cmd->se_cmd.se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ||
+	    (cmd->se_cmd.se_cmd_flags & SCF_EMULATED_TASK_SENSE))) {
+		put_unaligned_be16(cmd->se_cmd.scsi_sense_length,
+				   cmd->sense_buffer);
+		cmd->se_cmd.scsi_sense_length += sizeof(__be16);
+
+		padding	= -(cmd->se_cmd.scsi_sense_length) & 3;
+		tx_size += cmd->se_cmd.scsi_sense_length;
+
+		if (padding) {
+			memset(cmd->sense_buffer +
+				cmd->se_cmd.scsi_sense_length, 0, padding);
+			tx_size += padding;
+			pr_debug("Adding %u bytes of padding to"
+				" SENSE.\n", padding);
+		}
+
+		pr_debug("Attaching SENSE DATA: %u bytes to iSCSI"
+				" Response PDU\n",
+				cmd->se_cmd.scsi_sense_length);
+	}
+
+	skb = cxgbit_alloc_skb(csk, tx_size);
+	if (!skb)
+		return -ENOMEM;
+
+	hdr = (struct iscsi_scsi_rsp *)(skb->data);
+
+	iscsit_build_rsp_pdu(cmd, conn, inc_stat_sn, hdr);
+
+	if (tx_size) {
+		hton24(hdr->dlength, (u32)cmd->se_cmd.scsi_sense_length);
+		skb_store_bits(skb, ISCSI_HDR_LEN, cmd->sense_buffer, tx_size);
+	}
+
+	return cxgbit_queue_skb(csk, skb);
+}
+
+static int cxgbit_map_skb(struct iscsi_cmd *cmd,
+			  struct sk_buff *skb,
+			  u32 data_offset,
+			  u32 data_length)
+{
+	u32 i = 0, nr_frags = MAX_SKB_FRAGS;
+	u32 padding = ((-data_length) & 3);
+	struct scatterlist *sg;
+	struct page *page;
+	unsigned int page_off;
+
+	if (padding)
+		nr_frags--;
+
+	/*
+	 * We know each entry in t_data_sg contains a page.
+	 */
+	sg = &cmd->se_cmd.t_data_sg[data_offset / PAGE_SIZE];
+	page_off = (data_offset % PAGE_SIZE);
+
+	while (data_length && (i < nr_frags)) {
+		u32 cur_len = min_t(u32, data_length, sg->length - page_off);
+
+		page = sg_page(sg);
+
+		get_page(page);
+		skb_fill_page_desc(skb, i, page, sg->offset + page_off,
+				   cur_len);
+		skb->data_len += cur_len;
+		skb->len += cur_len;
+		skb->truesize += cur_len;
+
+		data_length -= cur_len;
+		page_off = 0;
+		sg = sg_next(sg);
+		i++;
+	}
+
+	if (data_length)
+		return -1;
+
+	if (padding) {
+		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		if (!page)
+			return -1;
+		skb_fill_page_desc(skb, i, page, 0, padding);
+		skb->data_len += padding;
+		skb->len += padding;
+		skb->truesize += padding;
+	}
+
+	return 0;
+}
+
+static int cxgbit_tx_datain_iso(struct cxgbit_sock *csk,
+				struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct sk_buff *skb;
+	struct iscsi_datain datain;
+	struct cxgbit_iso_info iso_info;
+	u32 data_length = cmd->se_cmd.data_length;
+	u32 mrdsl = conn->conn_ops->MaxRecvDataSegmentLength;
+	u32 num_pdu, plen, tx_data = 0;
+	bool task_sense = !!(cmd->se_cmd.se_cmd_flags &
+		SCF_TRANSPORT_TASK_SENSE);
+	bool set_statsn = false;
+	int ret = -1;
+
+	while (data_length) {
+		num_pdu = DIV_ROUND_UP(data_length, mrdsl);
+		if (num_pdu > csk->max_iso_npdu)
+			num_pdu = csk->max_iso_npdu;
+
+		plen = num_pdu * mrdsl;
+		if (plen > data_length)
+			plen = data_length;
+
+		skb = __cxgbit_alloc_skb(csk, 0, true);
+		if (!skb)
+			return -ENOMEM;
+
+		cxgbit_skcb_flags(skb) |= SKCBF_TX_ISO;
+		cxgbit_skcb_submode(skb) |= (csk->submode &
+				CXGBIT_SUBMODE_DCRC);
+		cxgbit_skcb_tx_extralen(skb) = (num_pdu *
+				cxgbit_digest_len[cxgbit_skcb_submode(skb)]) +
+						((num_pdu - 1) * ISCSI_HDR_LEN);
+
+		memset(&datain, 0, sizeof(struct iscsi_datain));
+		memset(&iso_info, 0, sizeof(iso_info));
+
+		if (!tx_data)
+			iso_info.flags |= CXGBIT_ISO_FSLICE;
+
+		if (!(data_length - plen)) {
+			iso_info.flags |= CXGBIT_ISO_LSLICE;
+			if (!task_sense) {
+				datain.flags = ISCSI_FLAG_DATA_STATUS;
+				iscsit_increment_maxcmdsn(cmd, conn->sess);
+				cmd->stat_sn = conn->stat_sn++;
+				set_statsn = true;
+			}
+		}
+
+		iso_info.burst_len = num_pdu * mrdsl;
+		iso_info.mpdu = mrdsl;
+		iso_info.len = ISCSI_HDR_LEN + plen;
+
+		make_cpl_tx_data_iso(skb, &iso_info);
+
+		datain.offset = tx_data;
+		datain.data_sn = cmd->data_sn;
+
+		iscsit_build_datain_pdu(cmd, conn, &datain,
+					(struct iscsi_data_rsp *)skb->data,
+					set_statsn);
+
+		ret = cxgbit_map_skb(cmd, skb, tx_data, plen);
+		if (unlikely(ret)) {
+			__kfree_skb(skb);
+			goto out;
+		}
+
+		ret = cxgbit_queue_skb(csk, skb);
+		if (unlikely(ret))
+			goto out;
+
+		tx_data += plen;
+		data_length -= plen;
+
+		cmd->read_data_done += plen;
+		cmd->data_sn += num_pdu;
+	}
+
+	ret = task_sense ? 2 : 1;
+	return ret;
+
+out:
+	return ret;
+}
+
+static int cxgbit_tx_datain(struct cxgbit_sock *csk, struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct sk_buff *skb;
+	struct iscsi_datain datain;
+	struct iscsi_datain_req *dr;
+	int eodr = 0;
+	bool set_statsn = false;
+	int ret = 0;
+
+	memset(&datain, 0, sizeof(struct iscsi_datain));
+
+	dr = iscsit_get_datain_values(cmd, &datain);
+	if (!dr) {
+		pr_err("iscsit_get_datain_values failed for ITT: 0x%08x\n",
+		       cmd->init_task_tag);
+		return -1;
+	}
+
+	/*
+	 * Be paranoid and double check the logic for now.
+	 */
+	if ((datain.offset + datain.length) > cmd->se_cmd.data_length) {
+		pr_err("Command ITT: 0x%08x, datain.offset: %u and"
+			" datain.length: %u exceeds cmd->data_length: %u\n",
+			cmd->init_task_tag, datain.offset, datain.length,
+			cmd->se_cmd.data_length);
+		return -1;
+	}
+
+	atomic_long_add(datain.length, &conn->sess->tx_data_octets);
+	/*
+	 * Special case for successful execution w/ both DATAIN
+	 * and Sense Data.
+	 */
+	if ((datain.flags & ISCSI_FLAG_DATA_STATUS) &&
+	    (cmd->se_cmd.se_cmd_flags & SCF_TRANSPORT_TASK_SENSE))
+		datain.flags &= ~ISCSI_FLAG_DATA_STATUS;
+	else {
+		if ((dr->dr_complete == DATAIN_COMPLETE_NORMAL) ||
+		    (dr->dr_complete == DATAIN_COMPLETE_CONNECTION_RECOVERY)) {
+			iscsit_increment_maxcmdsn(cmd, conn->sess);
+			cmd->stat_sn = conn->stat_sn++;
+			set_statsn = true;
+		} else if (dr->dr_complete ==
+			   DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY)
+			set_statsn = true;
+	}
+
+	skb = cxgbit_alloc_skb(csk, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	if (datain.length) {
+		cxgbit_skcb_submode(skb) |= (csk->submode &
+				CXGBIT_SUBMODE_DCRC);
+		cxgbit_skcb_tx_extralen(skb) =
+				cxgbit_digest_len[cxgbit_skcb_submode(skb)];
+	}
+
+	iscsit_build_datain_pdu(cmd, conn, &datain,
+				(struct iscsi_data_rsp *)skb->data, set_statsn);
+
+	ret = cxgbit_map_skb(cmd, skb, datain.offset, datain.length);
+	if (ret < 0) {
+		__kfree_skb(skb);
+		return ret;
+	}
+
+	ret = cxgbit_queue_skb(csk, skb);
+	if (ret < 0)
+		return ret;
+
+	if (dr->dr_complete) {
+		eodr = (cmd->se_cmd.se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ?
+				2 : 1;
+		iscsit_free_datain_req(cmd, dr);
+	}
+
+	return eodr;
+}
+
+static int cxgbit_send_datain(struct iscsi_cmd *cmd,
+			      struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	u32 data_length = cmd->se_cmd.data_length;
+	u32 padding = -data_length & 3;
+	u32 mrdsl = conn->conn_ops->MaxRecvDataSegmentLength;
+	struct iscsi_datain_req *dr;
+	int ret = 0;
+
+	dr = iscsit_get_datain_req(cmd);
+	if (!dr) {
+		pr_err("iscsit_get_datain_req failed for ITT: 0x%08x\n",
+		       cmd->init_task_tag);
+		return -1;
+	}
+
+	if ((data_length > mrdsl) && (!dr->recovery) &&
+	    (!padding) && csk->max_iso_npdu) {
+		ret = cxgbit_tx_datain_iso(csk, cmd);
+
+		if (ret > 0)
+			iscsit_free_datain_req(cmd, dr);
+
+		return ret;
+	}
+
+	while (!ret)
+		ret = cxgbit_tx_datain(csk, cmd);
+
+	return ret;
+}
+
+static int
+cxgbit_send_logout_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+	int ret;
+
+	skb = cxgbit_alloc_skb(csk, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	ret = iscsit_build_logout_rsp(cmd, conn,
+				      (struct iscsi_logout_rsp *)skb->data);
+	if (ret < 0) {
+		__kfree_skb(skb);
+		return ret;
+	}
+
+	ret = cxgbit_queue_skb(csk, skb);
+	if (!ret)
+		set_bit(CSK_TX_FIN, &csk->com.flags);
+
+	return ret;
+}
+
+static int
+cxgbit_send_conn_drop_async_message(struct iscsi_cmd *cmd,
+				    struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+
+	skb = cxgbit_alloc_skb(csk, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	cmd->iscsi_opcode = ISCSI_OP_ASYNC_EVENT;
+	iscsit_build_conn_drop_async_pdu(cmd, conn,
+					 (struct iscsi_async *)skb->data);
+
+	pr_debug("Sending Connection Dropped Async Message StatSN:"
+		" 0x%08x, for CID: %hu on CID: %hu\n", cmd->stat_sn,
+			cmd->logout_cid, conn->cid);
+
+	return cxgbit_queue_skb(csk, skb);
+}
+
+static int
+cxgbit_send_nopin(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+	struct iscsi_nopin *hdr = (struct iscsi_nopin *)cmd->pdu;
+	u32 padding;
+
+	iscsit_build_nopin_rsp(cmd, conn, hdr, true);
+
+	padding = (-cmd->buf_ptr_size) & 3;
+	skb = cxgbit_alloc_skb(csk, cmd->buf_ptr_size + padding);
+	if (!skb)
+		return -ENOMEM;
+
+	skb_store_bits(skb, 0, hdr, ISCSI_HDR_LEN);
+	if (cmd->buf_ptr_size) {
+		skb_store_bits(skb, ISCSI_HDR_LEN, cmd->buf_ptr,
+			       cmd->buf_ptr_size);
+		skb_store_bits(skb, ISCSI_HDR_LEN + cmd->buf_ptr_size,
+			       &cmd->pad_bytes, padding);
+	}
+
+	return cxgbit_queue_skb(csk, skb);
+}
+
+static int
+cxgbit_send_task_mgt_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+
+	skb = cxgbit_alloc_skb(csk, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	iscsit_build_task_mgt_rsp(cmd, conn,
+				  (struct iscsi_tm_rsp *)skb->data);
+
+	return cxgbit_queue_skb(csk, skb);
+}
+
+static int
+cxgbit_send_reject(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+
+	skb = cxgbit_alloc_skb(csk, ISCSI_HDR_LEN);
+	if (!skb)
+		return -ENOMEM;
+
+	iscsit_build_reject(cmd, conn,
+			    (struct iscsi_reject *)skb->data);
+
+	skb_store_bits(skb, ISCSI_HDR_LEN, cmd->buf_ptr, ISCSI_HDR_LEN);
+
+	return cxgbit_queue_skb(csk, skb);
+}
+
+static int
+cxgbit_send_text_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct iscsi_text_rsp *hdr =
+		(struct iscsi_text_rsp *)cmd->pdu;
+	struct sk_buff *skb;
+	u32 text_length;
+	int rc;
+
+	memset(hdr, 0, ISCSI_HDR_LEN);
+	rc = iscsit_build_text_rsp(cmd, conn, hdr, ISCSI_TCP_CXGB4);
+	if (rc < 0)
+		return rc;
+	text_length = rc;
+
+	skb = cxgbit_alloc_skb(csk, text_length);
+	if (!skb)
+		return -ENOMEM;
+
+	skb_store_bits(skb, 0, hdr, ISCSI_HDR_LEN);
+	skb_store_bits(skb, ISCSI_HDR_LEN, cmd->buf_ptr, text_length);
+
+	return cxgbit_queue_skb(csk, skb);
+}
+
+int
+cxgbit_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
+		       int state)
+{
+	int ret;
+
+	switch (state) {
+	case ISTATE_SEND_R2T:
+		ret = cxgbit_send_r2t(cmd, conn);
+		if (ret < 0)
+			goto err;
+		break;
+	case ISTATE_REMOVE:
+		spin_lock_bh(&conn->cmd_lock);
+		list_del_init(&cmd->i_conn_node);
+		spin_unlock_bh(&conn->cmd_lock);
+		iscsit_free_cmd(cmd, false);
+		break;
+	case ISTATE_SEND_NOPIN_WANT_RESPONSE:
+		iscsit_mod_nopin_response_timer(conn);
+		ret = cxgbit_send_unsolicited_nopin(cmd, conn, true);
+		if (ret < 0)
+			goto err;
+		break;
+	case ISTATE_SEND_NOPIN_NO_RESPONSE:
+		ret = cxgbit_send_unsolicited_nopin(cmd, conn, false);
+		if (ret < 0)
+			goto err;
+		break;
+	default:
+		pr_err("Unknown Opcode: 0x%02x ITT:"
+		       " 0x%08x, i_state: %d on CID: %hu\n",
+				cmd->iscsi_opcode, cmd->init_task_tag, state,
+				conn->cid);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	return -1;
+}
+
+int
+cxgbit_response_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
+		      int state)
+{
+	int ret;
+
+check_rsp_state:
+	switch (state) {
+	case ISTATE_SEND_DATAIN:
+		ret = cxgbit_send_datain(cmd, conn);
+		if (ret < 0) {
+			goto err;
+		} else if (ret == 1) {
+			/* all done */
+			spin_lock_bh(&cmd->istate_lock);
+			cmd->i_state = ISTATE_SENT_STATUS;
+			spin_unlock_bh(&cmd->istate_lock);
+
+			if (atomic_read(&conn->check_immediate_queue))
+				return 1;
+
+			return 0;
+		} else if (ret == 2) {
+			/* Still must send status,
+			 * SCF_TRANSPORT_TASK_SENSE was set
+			 */
+			spin_lock_bh(&cmd->istate_lock);
+			cmd->i_state = ISTATE_SEND_STATUS;
+			spin_unlock_bh(&cmd->istate_lock);
+			state = ISTATE_SEND_STATUS;
+			goto check_rsp_state;
+		}
+
+		break;
+	case ISTATE_SEND_STATUS:
+	case ISTATE_SEND_STATUS_RECOVERY:
+		ret = cxgbit_send_response(cmd, conn);
+		break;
+	case ISTATE_SEND_LOGOUTRSP:
+		ret = cxgbit_send_logout_rsp(cmd, conn);
+		break;
+	case ISTATE_SEND_ASYNCMSG:
+		ret = cxgbit_send_conn_drop_async_message(
+			cmd, conn);
+		break;
+	case ISTATE_SEND_NOPIN:
+		ret = cxgbit_send_nopin(cmd, conn);
+		break;
+	case ISTATE_SEND_REJECT:
+		ret = cxgbit_send_reject(cmd, conn);
+		break;
+	case ISTATE_SEND_TASKMGTRSP:
+		ret = cxgbit_send_task_mgt_rsp(cmd, conn);
+		if (ret != 0)
+			break;
+
+		ret = iscsit_tmr_post_handler(cmd, conn);
+		if (ret != 0)
+			iscsit_fall_back_to_erl0(conn->sess);
+		break;
+	case ISTATE_SEND_TEXTRSP:
+		ret = cxgbit_send_text_rsp(cmd, conn);
+		break;
+	default:
+		pr_err("Unknown Opcode: 0x%02x ITT:"
+		       " 0x%08x, i_state: %d on CID: %hu\n",
+		       cmd->iscsi_opcode, cmd->init_task_tag,
+		       state, conn->cid);
+		goto err;
+	}
+
+	if (ret < 0)
+		goto err;
+
+	switch (state) {
+	case ISTATE_SEND_LOGOUTRSP:
+		if (!iscsit_logout_post_handler(cmd, conn))
+			return -ECONNRESET;
+		/* fall through */
+	case ISTATE_SEND_STATUS:
+	case ISTATE_SEND_ASYNCMSG:
+	case ISTATE_SEND_NOPIN:
+	case ISTATE_SEND_STATUS_RECOVERY:
+	case ISTATE_SEND_TEXTRSP:
+	case ISTATE_SEND_TASKMGTRSP:
+	case ISTATE_SEND_REJECT:
+		spin_lock_bh(&cmd->istate_lock);
+		cmd->i_state = ISTATE_SENT_STATUS;
+		spin_unlock_bh(&cmd->istate_lock);
+		break;
+	default:
+		pr_err("Unknown Opcode: 0x%02x ITT:"
+		       " 0x%08x, i_state: %d on CID: %hu\n",
+		       cmd->iscsi_opcode, cmd->init_task_tag,
+		       cmd->i_state, conn->cid);
+		goto err;
+	}
+
+	if (atomic_read(&conn->check_immediate_queue))
+		return 1;
+
+	return 0;
+err:
+	return -1;
+}
+
+int cxgbit_validate_params(struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct cxgbit_device *cdev = csk->com.cdev;
+	struct iscsi_param *param;
+	u32 max_xmitdsl;
+
+	param = iscsi_find_param_from_key(MAXXMITDATASEGMENTLENGTH,
+					  conn->param_list);
+	if (!param)
+		return -1;
+
+	if (kstrtou32(param->value, 0, &max_xmitdsl) < 0)
+		return -1;
+
+	if (max_xmitdsl > cdev->mdsl) {
+		if (iscsi_change_param_sprintf(conn,
+					       "MaxXmitDataSegmentLength=%u",
+					       cdev->mdsl))
+			return -1;
+	}
+
+	return 0;
+}
+
+static int cxgbit_set_digest(struct cxgbit_sock *csk)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct iscsi_param *param;
+
+	param = iscsi_find_param_from_key(HEADERDIGEST, conn->param_list);
+	if (!param) {
+		pr_err("param not found key %s\n", HEADERDIGEST);
+		return -1;
+	}
+
+	if (!strcmp(param->value, CRC32C))
+		csk->submode |= CXGBIT_SUBMODE_HCRC;
+
+	param = iscsi_find_param_from_key(DATADIGEST, conn->param_list);
+	if (!param) {
+		csk->submode = 0;
+		pr_err("param not found key %s\n", DATADIGEST);
+		return -1;
+	}
+
+	if (!strcmp(param->value, CRC32C))
+		csk->submode |= CXGBIT_SUBMODE_DCRC;
+
+	if (cxgbit_setup_conn_digest(csk)) {
+		csk->submode = 0;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int cxgbit_set_iso_npdu(struct cxgbit_sock *csk)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct iscsi_conn_ops *conn_ops = conn->conn_ops;
+	struct iscsi_param *param;
+	u32 mrdsl, mbl;
+	u32 max_npdu, max_iso_npdu;
+
+	param = iscsi_find_param_from_key(DATASEQUENCEINORDER,
+					  conn->param_list);
+	if (!param) {
+		pr_err("param not found key %s\n", DATASEQUENCEINORDER);
+		return -1;
+	}
+
+	if (strcmp(param->value, YES))
+		return 0;
+
+	param = iscsi_find_param_from_key(DATAPDUINORDER,
+					  conn->param_list);
+	if (!param) {
+		pr_err("param not found key %s\n", DATAPDUINORDER);
+		return -1;
+	}
+
+	if (strcmp(param->value, YES))
+		return 0;
+
+	param = iscsi_find_param_from_key(MAXBURSTLENGTH,
+					  conn->param_list);
+	if (!param) {
+		pr_err("param not found key %s\n", MAXBURSTLENGTH);
+		return -1;
+	}
+
+	if (kstrtou32(param->value, 0, &mbl) < 0)
+		return -1;
+
+	mrdsl = conn_ops->MaxRecvDataSegmentLength;
+	max_npdu = mbl / mrdsl;
+
+	max_iso_npdu = CXGBIT_MAX_ISO_PAYLOAD /
+			(ISCSI_HDR_LEN + mrdsl +
+			cxgbit_digest_len[csk->submode]);
+
+	csk->max_iso_npdu = min(max_npdu, max_iso_npdu);
+
+	if (csk->max_iso_npdu <= 1)
+		csk->max_iso_npdu = 0;
+
+	return 0;
+}
+
+static int cxgbit_set_params(struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct cxgbit_device *cdev = csk->com.cdev;
+	struct cxgbi_ppm *ppm = *csk->com.cdev->lldi.iscsi_ppm;
+	struct iscsi_conn_ops *conn_ops = conn->conn_ops;
+	struct iscsi_param *param;
+	u8 erl;
+
+	if (conn_ops->MaxRecvDataSegmentLength > cdev->mdsl)
+		conn_ops->MaxRecvDataSegmentLength = cdev->mdsl;
+
+	param = iscsi_find_param_from_key(ERRORRECOVERYLEVEL, conn->param_list);
+	if (!param) {
+		pr_err("param not found key %s\n", ERRORRECOVERYLEVEL);
+		return -1;
+	}
+
+	if (kstrtou8(param->value, 0, &erl) < 0)
+		return -1;
+
+	if (!erl) {
+		if (test_bit(CDEV_ISO_ENABLE, &cdev->flags)) {
+			if (cxgbit_set_iso_npdu(csk))
+				return -1;
+		}
+
+		if (test_bit(CDEV_DDP_ENABLE, &cdev->flags)) {
+			if (cxgbit_setup_conn_pgidx(csk,
+						    ppm->tformat.pgsz_idx_dflt))
+				return -1;
+			set_bit(CSK_DDP_ENABLE, &csk->com.flags);
+		}
+	}
+
+	if (cxgbit_set_digest(csk))
+		return -1;
+
+	return 0;
+}
+
+int cxgbit_put_login_tx(struct iscsi_conn *conn,
+			struct iscsi_login *login,
+			u32 length)
+{
+	struct cxgbit_sock *csk = conn->context;
+	struct sk_buff *skb;
+	u32 padding_buf = 0;
+	u8 padding = (-length) & 3;
+
+	skb = cxgbit_alloc_skb(csk, length + padding);
+	if (!skb)
+		return -ENOMEM;
+	skb_store_bits(skb, 0, login->rsp, ISCSI_HDR_LEN);
+	skb_store_bits(skb, ISCSI_HDR_LEN, login->rsp_buf, length);
+
+	if (padding)
+		skb_store_bits(skb, ISCSI_HDR_LEN + length,
+			       &padding_buf, padding);
+
+	if (login->login_complete) {
+		if (cxgbit_set_params(conn)) {
+			kfree_skb(skb);
+			return -1;
+		}
+
+		set_bit(CSK_LOGIN_DONE, &csk->com.flags);
+	}
+
+	if (cxgbit_queue_skb(csk, skb))
+		return -1;
+
+	if ((!login->login_complete) && (!login->login_failed))
+		schedule_delayed_work(&conn->login_work, 0);
+
+	return 0;
+}
+
+static void skb_copy_to_sg(struct sk_buff *skb, struct scatterlist *sg,
+			   unsigned int nents)
+{
+	struct skb_seq_state st;
+	const u8 *buf;
+	unsigned int consumed = 0, buf_len;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(skb);
+
+	skb_prepare_seq_read(skb, pdu_cb->doffset,
+			     pdu_cb->doffset + pdu_cb->dlen,
+			     &st);
+
+	while (true) {
+		buf_len = skb_seq_read(consumed, &buf, &st);
+		if (!buf_len) {
+			skb_abort_seq_read(&st);
+			break;
+		}
+
+		consumed += sg_pcopy_from_buffer(sg, nents, (void *)buf,
+						 buf_len, consumed);
+	}
+}
+
+static struct iscsi_cmd *cxgbit_allocate_cmd(struct cxgbit_sock *csk)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct cxgbi_ppm *ppm = cdev2ppm(csk->com.cdev);
+	struct cxgbit_cmd *ccmd;
+	struct iscsi_cmd *cmd;
+
+	cmd = iscsit_allocate_cmd(conn, TASK_INTERRUPTIBLE);
+	if (!cmd) {
+		pr_err("Unable to allocate iscsi_cmd + cxgbit_cmd\n");
+		return NULL;
+	}
+
+	ccmd = iscsit_priv_cmd(cmd);
+	ccmd->ttinfo.tag = ppm->tformat.no_ddp_mask;
+	ccmd->setup_ddp = true;
+
+	return cmd;
+}
+
+static int cxgbit_handle_immediate_data(struct iscsi_cmd *cmd,
+					struct iscsi_scsi_req *hdr,
+					u32 length)
+{
+	struct iscsi_conn *conn = cmd->conn;
+	struct cxgbit_sock *csk = conn->context;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+
+	if (pdu_cb->flags & PDUCBF_RX_DCRC_ERR) {
+		pr_err("ImmediateData CRC32C DataDigest error\n");
+		if (!conn->sess->sess_ops->ErrorRecoveryLevel) {
+			pr_err("Unable to recover from"
+			       " Immediate Data digest failure while"
+			       " in ERL=0.\n");
+			iscsit_reject_cmd(cmd, ISCSI_REASON_DATA_DIGEST_ERROR,
+					  (unsigned char *)hdr);
+			return IMMEDIATE_DATA_CANNOT_RECOVER;
+		}
+
+		iscsit_reject_cmd(cmd, ISCSI_REASON_DATA_DIGEST_ERROR,
+				  (unsigned char *)hdr);
+		return IMMEDIATE_DATA_ERL1_CRC_FAILURE;
+	}
+
+	if (cmd->se_cmd.se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
+		struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
+		struct skb_shared_info *ssi = skb_shinfo(csk->skb);
+		skb_frag_t *dfrag = &ssi->frags[pdu_cb->dfrag_index];
+
+		sg_init_table(&ccmd->sg, 1);
+		sg_set_page(&ccmd->sg, dfrag->page.p, skb_frag_size(dfrag),
+			    dfrag->page_offset);
+		get_page(dfrag->page.p);
+
+		cmd->se_cmd.t_data_sg = &ccmd->sg;
+		cmd->se_cmd.t_data_nents = 1;
+
+		ccmd->release = true;
+	} else {
+		struct scatterlist *sg = &cmd->se_cmd.t_data_sg[0];
+		u32 sg_nents = max(1UL, DIV_ROUND_UP(pdu_cb->dlen, PAGE_SIZE));
+
+		skb_copy_to_sg(csk->skb, sg, sg_nents);
+	}
+
+	cmd->write_data_done += pdu_cb->dlen;
+
+	if (cmd->write_data_done == cmd->se_cmd.data_length) {
+		spin_lock_bh(&cmd->istate_lock);
+		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
+		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+		spin_unlock_bh(&cmd->istate_lock);
+	}
+
+	return IMMEDIATE_DATA_NORMAL_OPERATION;
+}
+
+static int
+cxgbit_get_immediate_data(struct iscsi_cmd *cmd,
+			  struct iscsi_scsi_req *hdr,
+			  bool dump_payload)
+{
+	struct iscsi_conn *conn = cmd->conn;
+	int cmdsn_ret = 0, immed_ret = IMMEDIATE_DATA_NORMAL_OPERATION;
+	/*
+	 * Special case for Unsupported SAM WRITE Opcodes and ImmediateData=Yes.
+	 */
+	if (dump_payload)
+		goto after_immediate_data;
+
+	immed_ret = cxgbit_handle_immediate_data(cmd, hdr,
+						 cmd->first_burst_len);
+after_immediate_data:
+	if (immed_ret == IMMEDIATE_DATA_NORMAL_OPERATION) {
+		/*
+		 * A PDU/CmdSN carrying Immediate Data passed
+		 * DataCRC, check against ExpCmdSN/MaxCmdSN if
+		 * Immediate Bit is not set.
+		 */
+		cmdsn_ret = iscsit_sequence_cmd(conn, cmd,
+						(unsigned char *)hdr,
+						hdr->cmdsn);
+		if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
+			return -1;
+
+		if (cmd->sense_reason || cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
+			target_put_sess_cmd(&cmd->se_cmd);
+			return 0;
+		} else if (cmd->unsolicited_data) {
+			iscsit_set_unsoliticed_dataout(cmd);
+		}
+
+	} else if (immed_ret == IMMEDIATE_DATA_ERL1_CRC_FAILURE) {
+		/*
+		 * Immediate Data failed DataCRC and ERL>=1,
+		 * silently drop this PDU and let the initiator
+		 * plug the CmdSN gap.
+		 *
+		 * FIXME: Send Unsolicited NOPIN with reserved
+		 * TTT here to help the initiator figure out
+		 * the missing CmdSN, although they should be
+		 * intelligent enough to determine the missing
+		 * CmdSN and issue a retry to plug the sequence.
+		 */
+		cmd->i_state = ISTATE_REMOVE;
+		iscsit_add_cmd_to_immediate_queue(cmd, conn, cmd->i_state);
+	} else /* immed_ret == IMMEDIATE_DATA_CANNOT_RECOVER */
+		return -1;
+
+	return 0;
+}
+
+static int
+cxgbit_handle_scsi_cmd(struct cxgbit_sock *csk, struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_scsi_req *hdr = (struct iscsi_scsi_req *)pdu_cb->hdr;
+	int rc;
+	bool dump_payload = false;
+
+	rc = iscsit_setup_scsi_cmd(conn, cmd, (unsigned char *)hdr);
+	if (rc < 0)
+		return rc;
+
+	if (pdu_cb->dlen && (pdu_cb->dlen == cmd->se_cmd.data_length) &&
+	    (pdu_cb->nr_dfrags == 1))
+		cmd->se_cmd.se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
+
+	rc = iscsit_process_scsi_cmd(conn, cmd, hdr);
+	if (rc < 0)
+		return 0;
+	else if (rc > 0)
+		dump_payload = true;
+
+	if (!pdu_cb->dlen)
+		return 0;
+
+	return cxgbit_get_immediate_data(cmd, hdr, dump_payload);
+}
+
+static int
+cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+{
+	struct scatterlist *sg_start;
+	struct iscsi_conn *conn = csk->conn;
+	struct iscsi_cmd *cmd = NULL;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_data *hdr = (struct iscsi_data *)pdu_cb->hdr;
+	u32 data_offset = be32_to_cpu(hdr->offset);
+	u32 data_len = pdu_cb->dlen;
+	int rc, sg_nents, sg_off;
+	bool dcrc_err = false;
+
+	rc = iscsit_check_dataout_hdr(conn, (unsigned char *)hdr, &cmd);
+	if (rc < 0)
+		return rc;
+	else if (!cmd)
+		return 0;
+
+	if (pdu_cb->flags & PDUCBF_RX_DCRC_ERR) {
+		pr_err("ITT: 0x%08x, Offset: %u, Length: %u,"
+		       " DataSN: 0x%08x\n",
+		       hdr->itt, hdr->offset, data_len,
+		       hdr->datasn);
+
+		dcrc_err = true;
+		goto check_payload;
+	}
+
+	pr_debug("DataOut data_len: %u, "
+		"write_data_done: %u, data_length: %u\n",
+		  data_len,  cmd->write_data_done,
+		  cmd->se_cmd.data_length);
+
+	if (!(pdu_cb->flags & PDUCBF_RX_DATA_DDPD)) {
+		sg_off = data_offset / PAGE_SIZE;
+		sg_start = &cmd->se_cmd.t_data_sg[sg_off];
+		sg_nents = max(1UL, DIV_ROUND_UP(data_len, PAGE_SIZE));
+
+		skb_copy_to_sg(csk->skb, sg_start, sg_nents);
+	}
+
+check_payload:
+
+	rc = iscsit_check_dataout_payload(cmd, hdr, dcrc_err);
+	if (rc < 0)
+		return rc;
+
+	return 0;
+}
+
+static int
+cxgbit_handle_nop_out(struct cxgbit_sock *csk, struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_nopout *hdr = (struct iscsi_nopout *)pdu_cb->hdr;
+	unsigned char *ping_data = NULL;
+	u32 payload_length = pdu_cb->dlen;
+	int ret;
+
+	ret = iscsit_setup_nop_out(conn, cmd, hdr);
+	if (ret < 0)
+		return 0;
+
+	if (pdu_cb->flags & PDUCBF_RX_DCRC_ERR) {
+		if (!conn->sess->sess_ops->ErrorRecoveryLevel) {
+			pr_err("Unable to recover from"
+			       " NOPOUT Ping DataCRC failure while in"
+			       " ERL=0.\n");
+			ret = -1;
+			goto out;
+		} else {
+			/*
+			 * drop this PDU and let the
+			 * initiator plug the CmdSN gap.
+			 */
+			pr_info("Dropping NOPOUT"
+				" Command CmdSN: 0x%08x due to"
+				" DataCRC error.\n", hdr->cmdsn);
+			ret = 0;
+			goto out;
+		}
+	}
+
+	/*
+	 * Handle NOP-OUT payload for traditional iSCSI sockets
+	 */
+	if (payload_length && hdr->ttt == cpu_to_be32(0xFFFFFFFF)) {
+		ping_data = kzalloc(payload_length + 1, GFP_KERNEL);
+		if (!ping_data) {
+			pr_err("Unable to allocate memory for"
+				" NOPOUT ping data.\n");
+			ret = -1;
+			goto out;
+		}
+
+		skb_copy_bits(csk->skb, pdu_cb->doffset,
+			      ping_data, payload_length);
+
+		ping_data[payload_length] = '\0';
+		/*
+		 * Attach ping data to struct iscsi_cmd->buf_ptr.
+		 */
+		cmd->buf_ptr = ping_data;
+		cmd->buf_ptr_size = payload_length;
+
+		pr_debug("Got %u bytes of NOPOUT ping"
+			" data.\n", payload_length);
+		pr_debug("Ping Data: \"%s\"\n", ping_data);
+	}
+
+	return iscsit_process_nop_out(conn, cmd, hdr);
+out:
+	if (cmd)
+		iscsit_free_cmd(cmd, false);
+	return ret;
+}
+
+static int
+cxgbit_handle_text_cmd(struct cxgbit_sock *csk, struct iscsi_cmd *cmd)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_text *hdr = (struct iscsi_text *)pdu_cb->hdr;
+	u32 payload_length = pdu_cb->dlen;
+	int rc;
+	unsigned char *text_in = NULL;
+
+	rc = iscsit_setup_text_cmd(conn, cmd, hdr);
+	if (rc < 0)
+		return rc;
+
+	if (pdu_cb->flags & PDUCBF_RX_DCRC_ERR) {
+		if (!conn->sess->sess_ops->ErrorRecoveryLevel) {
+			pr_err("Unable to recover from"
+			       " Text Data digest failure while in"
+			       " ERL=0.\n");
+			goto reject;
+		} else {
+			/*
+			 * drop this PDU and let the
+			 * initiator plug the CmdSN gap.
+			 */
+			pr_info("Dropping Text"
+				" Command CmdSN: 0x%08x due to"
+				" DataCRC error.\n", hdr->cmdsn);
+			return 0;
+		}
+	}
+
+	if (payload_length) {
+		text_in = kzalloc(payload_length, GFP_KERNEL);
+		if (!text_in) {
+			pr_err("Unable to allocate text_in of payload_length: %u\n",
+			       payload_length);
+			return -ENOMEM;
+		}
+		skb_copy_bits(csk->skb, pdu_cb->doffset,
+			      text_in, payload_length);
+
+		text_in[payload_length - 1] = '\0';
+
+		cmd->text_in_ptr = text_in;
+	}
+
+	return iscsit_process_text_cmd(conn, cmd, hdr);
+
+reject:
+	return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR,
+				 pdu_cb->hdr);
+}
+
+static int
+cxgbit_target_rx_opcode(struct cxgbit_sock *csk)
+{
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_hdr *hdr = (struct iscsi_hdr *)pdu_cb->hdr;
+	struct iscsi_conn *conn = csk->conn;
+	struct iscsi_cmd *cmd = NULL;
+	u8 opcode = (hdr->opcode & ISCSI_OPCODE_MASK);
+	int ret = -EINVAL;
+
+	switch (opcode) {
+	case ISCSI_OP_SCSI_CMD:
+		cmd = cxgbit_allocate_cmd(csk);
+		if (!cmd)
+			goto reject;
+
+		ret = cxgbit_handle_scsi_cmd(csk, cmd);
+		break;
+	case ISCSI_OP_SCSI_DATA_OUT:
+		ret = cxgbit_handle_iscsi_dataout(csk);
+		break;
+	case ISCSI_OP_NOOP_OUT:
+		if (hdr->ttt == cpu_to_be32(0xFFFFFFFF)) {
+			cmd = cxgbit_allocate_cmd(csk);
+			if (!cmd)
+				goto reject;
+		}
+
+		ret = cxgbit_handle_nop_out(csk, cmd);
+		break;
+	case ISCSI_OP_SCSI_TMFUNC:
+		cmd = cxgbit_allocate_cmd(csk);
+		if (!cmd)
+			goto reject;
+
+		ret = iscsit_handle_task_mgt_cmd(conn, cmd,
+						 (unsigned char *)hdr);
+		break;
+	case ISCSI_OP_TEXT:
+		if (hdr->ttt != cpu_to_be32(0xFFFFFFFF)) {
+			cmd = iscsit_find_cmd_from_itt(conn, hdr->itt);
+			if (!cmd)
+				goto reject;
+		} else {
+			cmd = cxgbit_allocate_cmd(csk);
+			if (!cmd)
+				goto reject;
+		}
+
+		ret = cxgbit_handle_text_cmd(csk, cmd);
+		break;
+	case ISCSI_OP_LOGOUT:
+		cmd = cxgbit_allocate_cmd(csk);
+		if (!cmd)
+			goto reject;
+
+		ret = iscsit_handle_logout_cmd(conn, cmd, (unsigned char *)hdr);
+		if (ret > 0)
+			wait_for_completion_timeout(&conn->conn_logout_comp,
+						    SECONDS_FOR_LOGOUT_COMP
+						    * HZ);
+		break;
+	case ISCSI_OP_SNACK:
+		ret = iscsit_handle_snack(conn, (unsigned char *)hdr);
+		break;
+	default:
+		pr_err("Got unknown iSCSI OpCode: 0x%02x\n", opcode);
+		dump_stack();
+		break;
+	}
+
+	return ret;
+
+reject:
+	return iscsit_add_reject(conn, ISCSI_REASON_BOOKMARK_NO_RESOURCES,
+				 (unsigned char *)hdr);
+}
+
+static int cxgbit_rx_opcode(struct cxgbit_sock *csk)
+{
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_conn *conn = csk->conn;
+	struct iscsi_hdr *hdr = pdu_cb->hdr;
+	u8 opcode;
+
+	if (pdu_cb->flags & PDUCBF_RX_HCRC_ERR) {
+		atomic_long_inc(&conn->sess->conn_digest_errors);
+		goto transport_err;
+	}
+
+	if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
+		goto transport_err;
+
+	opcode = hdr->opcode & ISCSI_OPCODE_MASK;
+
+	if (conn->sess->sess_ops->SessionType &&
+	    (opcode != ISCSI_OP_TEXT) &&
+	    (opcode != ISCSI_OP_LOGOUT)) {
+		pr_err("Received illegal iSCSI Opcode: 0x%02x"
+			" while in Discovery Session, rejecting.\n", opcode);
+		iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
+				  (unsigned char *)hdr);
+		goto transport_err;
+	}
+
+	if (cxgbit_target_rx_opcode(csk) < 0)
+		goto transport_err;
+
+	return 0;
+
+transport_err:
+	return -1;
+}
+
+static int cxgbit_rx_login_pdu(struct cxgbit_sock *csk)
+{
+	struct iscsi_conn *conn = csk->conn;
+	struct iscsi_login *login = conn->login;
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+	struct iscsi_login_req *login_req;
+
+	login_req = (struct iscsi_login_req *)login->req;
+	memcpy(login_req, pdu_cb->hdr, sizeof(*login_req));
+
+	pr_debug("Got Login Command, Flags 0x%02x, ITT: 0x%08x,"
+		" CmdSN: 0x%08x, ExpStatSN: 0x%08x, CID: %hu, Length: %u\n",
+		login_req->flags, login_req->itt, login_req->cmdsn,
+		login_req->exp_statsn, login_req->cid, pdu_cb->dlen);
+	/*
+	 * Setup the initial iscsi_login values from the leading
+	 * login request PDU.
+	 */
+	if (login->first_request) {
+		login_req = (struct iscsi_login_req *)login->req;
+		login->leading_connection = (!login_req->tsih) ? 1 : 0;
+		login->current_stage	= ISCSI_LOGIN_CURRENT_STAGE(
+				login_req->flags);
+		login->version_min	= login_req->min_version;
+		login->version_max	= login_req->max_version;
+		memcpy(login->isid, login_req->isid, 6);
+		login->cmd_sn		= be32_to_cpu(login_req->cmdsn);
+		login->init_task_tag	= login_req->itt;
+		login->initial_exp_statsn = be32_to_cpu(login_req->exp_statsn);
+		login->cid		= be16_to_cpu(login_req->cid);
+		login->tsih		= be16_to_cpu(login_req->tsih);
+	}
+
+	if (iscsi_target_check_login_request(conn, login) < 0)
+		return -1;
+
+	memset(login->req_buf, 0, MAX_KEY_VALUE_PAIRS);
+	skb_copy_bits(csk->skb, pdu_cb->doffset, login->req_buf, pdu_cb->dlen);
+
+	return 0;
+}
+
+static int cxgbit_process_iscsi_pdu(struct cxgbit_sock *csk,
+				    struct sk_buff *skb,
+				    int idx)
+{
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb, idx);
+	int ret;
+
+	cxgbit_rx_pdu_cb(skb) = pdu_cb;
+
+	csk->skb = skb;
+
+	if (!test_bit(CSK_LOGIN_DONE, &csk->com.flags)) {
+		ret = cxgbit_rx_login_pdu(csk);
+		set_bit(CSK_LOGIN_PDU_DONE, &csk->com.flags);
+	} else {
+		ret = cxgbit_rx_opcode(csk);
+	}
+
+	return ret;
+}
+
+static void cxgbit_lro_skb_dump(struct sk_buff *skb)
+{
+	struct skb_shared_info *ssi = skb_shinfo(skb);
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb, 0);
+	int i;
+
+	pr_info("skb 0x%p, head 0x%p, 0x%p, len %u,%u, frags %u.\n",
+		skb, skb->head, skb->data, skb->len, skb->data_len,
+		ssi->nr_frags);
+	pr_info("skb 0x%p, lro_cb, csk 0x%p, pdu %u, %u.\n",
+		skb, lro_cb->csk, lro_cb->pdu_cnt, lro_cb->pdu_totallen);
+
+	for (i = 0; i < lro_cb->pdu_cnt; i++, pdu_cb++)
+		pr_info("skb 0x%p, pdu %d, %u, f 0x%x, seq 0x%x, dcrc 0x%x, "
+			"frags %u.\n",
+			skb, i, pdu_cb->pdulen, pdu_cb->flags, pdu_cb->seq,
+			pdu_cb->ddigest, pdu_cb->frags);
+	for (i = 0; i < ssi->nr_frags; i++)
+		pr_info("skb 0x%p, frag %d, off %u, sz %u.\n",
+			skb, i, ssi->frags[i].page_offset, ssi->frags[i].size);
+}
+
+static void cxgbit_lro_skb_hold_done(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb = csk->lro_skb_hold;
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+
+	if (lro_cb->release) {
+		struct skb_shared_info *ssi = skb_shinfo(skb);
+		int i;
+
+		memset(skb->data, 0, LRO_SKB_MIN_HEADROOM);
+		for (i = 0; i < ssi->nr_frags; i++)
+			put_page(skb_frag_page(&ssi->frags[i]));
+		ssi->nr_frags = 0;
+	}
+}
+
+static void cxgbit_lro_skb_merge(struct cxgbit_sock *csk,
+				 struct sk_buff *skb,
+				 int pdu_idx)
+{
+	struct sk_buff *hskb = csk->lro_skb_hold;
+	struct skb_shared_info *hssi = skb_shinfo(hskb);
+	struct cxgbit_lro_cb *hlro_cb = cxgbit_skb_lro_cb(hskb);
+	struct cxgbit_lro_pdu_cb *hpdu_cb = cxgbit_skb_lro_pdu_cb(hskb, 0);
+	struct skb_shared_info *ssi = skb_shinfo(skb);
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb, pdu_idx);
+	int frag_idx = 0;
+	int hfrag_idx = 0;
+
+	/* either 1st or last */
+	if (pdu_idx)
+		frag_idx = ssi->nr_frags - pdu_cb->frags;
+
+	if (pdu_cb->flags & PDUCBF_RX_HDR) {
+		unsigned int len = 0;
+
+		cxgbit_lro_skb_hold_done(csk);
+
+		hlro_cb->csk = csk;
+		hlro_cb->pdu_cnt = 1;
+		hlro_cb->release = true;
+
+		hpdu_cb->flags = pdu_cb->flags;
+		hpdu_cb->seq = pdu_cb->seq;
+		hpdu_cb->hdr = pdu_cb->hdr;
+		hpdu_cb->hlen = pdu_cb->hlen;
+
+		memcpy(&hssi->frags[hfrag_idx], &ssi->frags[frag_idx],
+		       sizeof(skb_frag_t));
+		get_page(skb_frag_page(&hssi->frags[hfrag_idx]));
+		frag_idx++;
+		hfrag_idx++;
+		hssi->nr_frags = 1;
+		hpdu_cb->frags = 1;
+
+		len = hssi->frags[0].size;
+		hskb->len = len;
+		hskb->data_len = len;
+		hskb->truesize = len;
+	}
+
+	if (pdu_cb->flags & PDUCBF_RX_DATA) {
+		unsigned int len = 0;
+		int i, n;
+
+		hpdu_cb->flags |= pdu_cb->flags;
+
+		for (i = 1, n = hfrag_idx; n < pdu_cb->frags;
+				i++, frag_idx++, n++) {
+			memcpy(&hssi->frags[i], &ssi->frags[frag_idx],
+			       sizeof(skb_frag_t));
+			get_page(skb_frag_page(&hssi->frags[i]));
+			len += hssi->frags[i].size;
+
+			hssi->nr_frags++;
+			hpdu_cb->frags++;
+		}
+
+		hpdu_cb->dlen = pdu_cb->dlen;
+		hpdu_cb->doffset = hpdu_cb->hlen;
+		hpdu_cb->nr_dfrags = pdu_cb->nr_dfrags;
+		hpdu_cb->dfrag_index = 1;
+		hskb->len += len;
+		hskb->data_len += len;
+		hskb->truesize += len;
+	}
+
+	if (pdu_cb->flags & PDUCBF_RX_STATUS) {
+		hpdu_cb->flags |= pdu_cb->flags;
+		hpdu_cb->ddigest = pdu_cb->ddigest;
+		hpdu_cb->pdulen = pdu_cb->pdulen;
+		hlro_cb->pdu_totallen = pdu_cb->pdulen;
+	}
+}
+
+static int cxgbit_process_lro_skb(struct cxgbit_sock *csk,
+				  struct sk_buff *skb)
+{
+	struct sk_buff *hskb = csk->lro_skb_hold;
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb, 0);
+	int last = lro_cb->pdu_cnt - 1;
+	int i = 0;
+	int err = 0;
+	unsigned int offset = 0;
+
+	if (!(pdu_cb->flags & PDUCBF_RX_HDR)) {
+		cxgbit_lro_skb_merge(csk, skb, 0);
+
+		if (pdu_cb->flags & PDUCBF_RX_STATUS) {
+			err = cxgbit_process_iscsi_pdu(csk, hskb, 0);
+			if (err < 0)
+				goto done;
+
+			if (pdu_cb->frags) {
+				struct skb_shared_info *ssi = skb_shinfo(skb);
+				int k;
+
+				for (k = 0; k < pdu_cb->frags; k++)
+					offset += ssi->frags[k].size;
+			}
+		}
+		i = 1;
+	}
+
+	for (; i < last; i++, pdu_cb++) {
+		err = cxgbit_process_iscsi_pdu(csk, skb, i);
+		if (err < 0)
+			goto done;
+	}
+
+	if (i == last) {
+		pdu_cb = cxgbit_skb_lro_pdu_cb(skb, last);
+		if (!(pdu_cb->flags & PDUCBF_RX_STATUS)) {
+			cxgbit_lro_skb_merge(csk, skb, last);
+		} else {
+			err = cxgbit_process_iscsi_pdu(csk, skb, last);
+			if (err < 0)
+				goto done;
+		}
+	}
+
+done:
+	return err;
+}
+
+static int cxgbit_rx_lro_skb(struct cxgbit_sock *csk, struct sk_buff *skb)
+{
+	struct cxgbit_lro_cb *lro_cb = cxgbit_skb_lro_cb(skb);
+	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_skb_lro_pdu_cb(skb, 0);
+	int ret = -1;
+
+	if ((pdu_cb->flags & PDUCBF_RX_HDR) &&
+	    (pdu_cb->seq != csk->rcv_nxt)) {
+		pr_info("csk 0x%p, tid 0x%x, seq 0x%x != 0x%x.\n",
+			csk, csk->tid, pdu_cb->seq, csk->rcv_nxt);
+		cxgbit_lro_skb_dump(skb);
+		return ret;
+	}
+
+	/* partial pdus */
+	if (!lro_cb->pdu_cnt) {
+		lro_cb->pdu_cnt = 1;
+	} else {
+		pdu_cb = cxgbit_skb_lro_pdu_cb(skb, lro_cb->pdu_cnt);
+
+		if ((!(pdu_cb->flags & PDUCBF_RX_STATUS)) &&
+		    pdu_cb->frags)
+			lro_cb->pdu_cnt++;
+	}
+
+	csk->rcv_nxt += lro_cb->pdu_totallen;
+
+	skb_reset_transport_header(skb);
+	ret = cxgbit_process_lro_skb(csk, skb);
+
+	csk->rx_credits += lro_cb->pdu_totallen;
+
+	if (csk->rx_credits >= (csk->rcv_win / 4))
+		cxgbit_rx_data_ack(csk);
+
+	return ret;
+}
+
+static int
+cxgbit_rx_skb(struct cxgbit_sock *csk, struct sk_buff *skb)
+{
+	int ret = -1;
+
+	if (likely(cxgbit_skcb_flags(skb) & SKCBF_RX_LRO))
+		ret = cxgbit_rx_lro_skb(csk, skb);
+
+	__kfree_skb(skb);
+	return ret;
+}
+
+static bool cxgbit_rxq_len(struct cxgbit_sock *csk,
+			   struct sk_buff_head *rxq)
+{
+	spin_lock_bh(&csk->rxq.lock);
+	if (skb_queue_len(&csk->rxq)) {
+		skb_queue_splice_init(&csk->rxq, rxq);
+		spin_unlock_bh(&csk->rxq.lock);
+		return true;
+	}
+	spin_unlock_bh(&csk->rxq.lock);
+	return false;
+}
+
+static int cxgbit_wait_rxq(struct cxgbit_sock *csk)
+{
+	struct sk_buff *skb;
+	struct sk_buff_head rxq;
+
+	skb_queue_head_init(&rxq);
+
+	wait_event_interruptible(csk->waitq, cxgbit_rxq_len(csk, &rxq));
+
+	if (signal_pending(current))
+		goto out;
+
+	while ((skb = __skb_dequeue(&rxq))) {
+		if (cxgbit_rx_skb(csk, skb))
+			goto out;
+	}
+
+	return 0;
+out:
+	__skb_queue_purge(&rxq);
+	return -1;
+}
+
+int cxgbit_get_login_rx(struct iscsi_conn *conn,
+			struct iscsi_login *login)
+{
+	struct cxgbit_sock *csk = conn->context;
+	int ret = -1;
+
+	while (!test_and_clear_bit(CSK_LOGIN_PDU_DONE, &csk->com.flags)) {
+		ret = cxgbit_wait_rxq(csk);
+		if (ret) {
+			clear_bit(CSK_LOGIN_PDU_DONE, &csk->com.flags);
+			break;
+		}
+	}
+
+	return ret;
+}
+
+void cxgbit_rx_pdu(struct iscsi_conn *conn)
+{
+	struct cxgbit_sock *csk = conn->context;
+
+	while (!kthread_should_stop()) {
+		iscsit_thread_check_cpumask(conn, current, 0);
+		if (cxgbit_wait_rxq(csk))
+			return;
+	}
+}
-- 
2.0.2

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [RFC 32/34] cxgbit: add cxgbit_ddp.c
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (30 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 31/34] cxgbit: add cxgbit_target.c Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 33/34] cxgbit: add Kconfig and Makefile Varun Prakash
                   ` (2 subsequent siblings)
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

This file contains code for
Direct Data Placement (DDP).

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/cxgbit_ddp.c | 374 +++++++++++++++++++++++++++++++
 1 file changed, 374 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/cxgbit_ddp.c

diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
new file mode 100644
index 0000000..07e2bc8
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
@@ -0,0 +1,374 @@
+/*
+ * Copyright (c) 2016 Chelsio Communications, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include "cxgbit.h"
+
+/*
+ * functions to program the pagepod in h/w
+ */
+static void ulp_mem_io_set_hdr(struct cxgbit_device *cdev,
+			       struct ulp_mem_io *req,
+			       unsigned int wr_len,
+			       unsigned int dlen,
+			       unsigned int pm_addr,
+			       int tid)
+{
+	struct ulptx_idata *idata = (struct ulptx_idata *)(req + 1);
+
+	INIT_ULPTX_WR(req, wr_len, 0, tid);
+	req->wr.wr_hi = htonl(FW_WR_OP_V(FW_ULPTX_WR) |
+		FW_WR_ATOMIC_V(0));
+	req->cmd = htonl(ULPTX_CMD_V(ULP_TX_MEM_WRITE) |
+		ULP_MEMIO_ORDER_V(0) |
+		T5_ULP_MEMIO_IMM_V(1));
+	req->dlen = htonl(ULP_MEMIO_DATA_LEN_V(dlen >> 5));
+	req->lock_addr = htonl(ULP_MEMIO_ADDR_V(pm_addr >> 5));
+	req->len16 = htonl(DIV_ROUND_UP(wr_len - sizeof(req->wr), 16));
+
+	idata->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_IMM));
+	idata->len = htonl(dlen);
+}
+
+static void cxgbit_set_one_ppod(struct cxgbi_pagepod *ppod,
+				struct cxgbi_task_tag_info *ttinfo,
+				struct scatterlist **sg_pp,
+				unsigned int *sg_off)
+{
+	struct scatterlist *sg = sg_pp ? *sg_pp : NULL;
+	unsigned int offset = sg_off ? *sg_off : 0;
+	dma_addr_t addr = 0UL;
+	unsigned int len = 0;
+	int i;
+
+	memcpy(ppod, &ttinfo->hdr, sizeof(struct cxgbi_pagepod_hdr));
+
+	if (sg) {
+		addr = sg_dma_address(sg);
+		len = sg_dma_len(sg);
+	}
+
+	for (i = 0; i < PPOD_PAGES_MAX; i++) {
+		if (sg) {
+			ppod->addr[i] = cpu_to_be64(addr + offset);
+			offset += PAGE_SIZE;
+			if (offset == (len + sg->offset)) {
+				offset = 0;
+				sg = sg_next(sg);
+				if (sg) {
+					addr = sg_dma_address(sg);
+					len = sg_dma_len(sg);
+				}
+			}
+		} else {
+			ppod->addr[i] = 0ULL;
+		}
+	}
+
+	/*
+	 * the fifth address needs to be repeated in the next ppod, so do
+	 * not move sg
+	 */
+	if (sg_pp) {
+		*sg_pp = sg;
+		*sg_off = offset;
+	}
+
+	if (offset == len) {
+		offset = 0;
+		sg = sg_next(sg);
+		if (sg) {
+			addr = sg_dma_address(sg);
+			len = sg_dma_len(sg);
+		}
+	}
+	ppod->addr[i] = sg ? cpu_to_be64(addr + offset) : 0ULL;
+}
+
+static struct sk_buff *cxgbit_ppod_init_idata(struct cxgbit_device *cdev,
+					      struct cxgbi_ppm *ppm,
+					      unsigned int idx,
+					      unsigned int npods,
+					      unsigned int tid)
+{
+	unsigned int pm_addr = (idx << PPOD_SIZE_SHIFT) + ppm->llimit;
+	unsigned int dlen = npods << PPOD_SIZE_SHIFT;
+	unsigned int wr_len = roundup(sizeof(struct ulp_mem_io) +
+				sizeof(struct ulptx_idata) + dlen, 16);
+	struct sk_buff *skb = alloc_skb(wr_len, GFP_KERNEL);
+
+	if (!skb)
+		return NULL;
+
+	__skb_put(skb, wr_len);
+	ulp_mem_io_set_hdr(cdev, (struct ulp_mem_io *)skb->data, wr_len, dlen,
+			   pm_addr, tid);
+
+	return skb;
+}
+
+static int cxgbit_ppod_write_idata(struct cxgbi_ppm *ppm,
+				   struct cxgbit_sock *csk,
+				   struct cxgbi_task_tag_info *ttinfo,
+				   unsigned int idx, unsigned int npods,
+				   struct scatterlist **sg_pp,
+				   unsigned int *sg_off)
+{
+	struct cxgbit_device *cdev = csk->com.cdev;
+	struct sk_buff *skb = cxgbit_ppod_init_idata(cdev, ppm, idx, npods,
+						csk->tid);
+	struct ulp_mem_io *req;
+	struct ulptx_idata *idata;
+	struct cxgbi_pagepod *ppod;
+	int i;
+
+	if (!skb)
+		return -ENOMEM;
+
+	req = (struct ulp_mem_io *)skb->data;
+	idata = (struct ulptx_idata *)(req + 1);
+	ppod = (struct cxgbi_pagepod *)(idata + 1);
+
+	for (i = 0; i < npods; i++, ppod++)
+		cxgbit_set_one_ppod(ppod, ttinfo, sg_pp, sg_off);
+
+	__skb_queue_tail(&csk->ppodq, skb);
+
+	return 0;
+}
+
+static int cxgbit_ddp_set_map(struct cxgbi_ppm *ppm, struct cxgbit_sock *csk,
+			      struct cxgbi_task_tag_info *ttinfo)
+{
+	unsigned int pidx = ttinfo->idx;
+	unsigned int npods = ttinfo->npods;
+	unsigned int i, cnt;
+	int ret = 0;
+	struct scatterlist *sg = ttinfo->sgl;
+	unsigned int offset = 0;
+
+	ttinfo->cid = csk->port_id;
+
+	for (i = 0; i < npods; i += cnt, pidx += cnt) {
+		cnt = npods - i;
+
+		if (cnt > ULPMEM_IDATA_MAX_NPPODS)
+			cnt = ULPMEM_IDATA_MAX_NPPODS;
+
+		ret = cxgbit_ppod_write_idata(ppm, csk, ttinfo, pidx, cnt,
+					      &sg, &offset);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static void
+cxgbit_dump_sgl(const char *cap, struct scatterlist *sgl, int nents)
+{
+	struct scatterlist *sg;
+	int i;
+
+	if (cap)
+		pr_info("%s: sgl 0x%p, nents %u.\n", cap, sgl, nents);
+	for_each_sg(sgl, sg, nents, i)
+		pr_info("\t%d/%u, 0x%p: len %u, off %u, pg 0x%p, dma 0x%llx, %u\n",
+			i, nents, sg, sg->length, sg->offset, sg_page(sg),
+			sg_dma_address(sg), sg_dma_len(sg));
+}
+
+static int cxgbit_ddp_sgl_check(struct scatterlist *sgl, int nents)
+{
+	int i;
+	int last_sgidx = nents - 1;
+	struct scatterlist *sg = sgl;
+
+	for (i = 1, sg = sg_next(sgl); i < nents; i++, sg = sg_next(sg)) {
+		if ((i && sg->offset) ||
+		    ((i != last_sgidx) &&
+		     ((sg->length + sg->offset) & ((1 << PAGE_SHIFT) - 1)))) {
+			pr_info("%s: sg %u/%u, %u,%u, not page aligned.\n",
+				__func__, i, nents, sg->offset, sg->length);
+			cxgbit_dump_sgl(NULL, sgl, nents);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int cxgbit_ddp_reserve(struct cxgbit_sock *csk,
+			      struct cxgbi_task_tag_info *ttinfo,
+			      unsigned int xferlen)
+{
+	struct cxgbit_device *cdev = csk->com.cdev;
+	struct cxgbi_ppm *ppm = cdev2ppm(cdev);
+	struct scatterlist *sgl = ttinfo->sgl;
+	unsigned int sgcnt = ttinfo->nents;
+	unsigned int sg_offset = sgl->offset;
+	int ret;
+
+	if (!ppm || xferlen < DDP_THRESHOLD || !sgcnt ||
+	    ppm->tformat.pgsz_idx_dflt >= DDP_PGIDX_MAX) {
+		pr_debug("ppm 0x%p, pgidx %u, xfer %u, sgcnt %u, NO ddp.\n",
+			 ppm, ppm ? ppm->tformat.pgsz_idx_dflt :
+			 DDP_PGIDX_MAX,
+			 xferlen, ttinfo->nents);
+		return -EINVAL;
+	}
+
+	/* make sure the buffer is suitable for ddp */
+	if (cxgbit_ddp_sgl_check(sgl, sgcnt) < 0)
+		return -EINVAL;
+
+	ttinfo->nr_pages = (xferlen + sgl->offset +
+			    (1 << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
+
+	/*
+	 * the ddp tag will be used for the ttt in the outgoing r2t pdu
+	 */
+	ret = cxgbi_ppm_ppods_reserve(ppm, ttinfo->nr_pages, 0, &ttinfo->idx,
+				      &ttinfo->tag, 0);
+	if (ret < 0)
+		return ret;
+	ttinfo->npods = ret;
+
+	 /* setup dma from scsi command sgl */
+	sgl->offset = 0;
+	ret = dma_map_sg(&ppm->pdev->dev, sgl, sgcnt, DMA_FROM_DEVICE);
+	sgl->offset = sg_offset;
+	if (!ret) {
+		pr_info("%s: 0x%x, xfer %u, sgl %u dma mapping err.\n",
+			__func__, 0, xferlen, sgcnt);
+		goto rel_ppods;
+	}
+	if (ret != ttinfo->nr_pages) {
+		pr_info("%s: 0x%x, xfer %u, sgl %u, dma count %d.\n",
+			__func__, 0, xferlen, sgcnt, ret);
+		cxgbit_dump_sgl(__func__, sgl, sgcnt);
+	}
+
+	ttinfo->flags |= CXGBI_PPOD_INFO_FLAG_MAPPED;
+	ttinfo->cid = csk->port_id;
+
+	cxgbi_ppm_make_ppod_hdr(ppm, ttinfo->tag, csk->tid, sgl->offset,
+				xferlen, &ttinfo->hdr);
+
+	ttinfo->flags |= CXGBI_PPOD_INFO_FLAG_VALID;
+	cxgbit_ddp_set_map(ppm, csk, ttinfo);
+
+	return 0;
+
+rel_ppods:
+	cxgbi_ppm_ppod_release(ppm, ttinfo->idx);
+
+	if (ttinfo->flags & CXGBI_PPOD_INFO_FLAG_MAPPED) {
+		ttinfo->flags &= ~CXGBI_PPOD_INFO_FLAG_MAPPED;
+		dma_unmap_sg(&ppm->pdev->dev, sgl, sgcnt, DMA_FROM_DEVICE);
+	}
+	return -EINVAL;
+}
+
+int cxgbit_reserve_ttt(struct cxgbit_sock *csk, struct iscsi_cmd *cmd)
+{
+	struct cxgbit_device *cdev = csk->com.cdev;
+	struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
+	struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
+	int ret = -EINVAL;
+
+	ttinfo->sgl = cmd->se_cmd.t_data_sg;
+	ttinfo->nents = cmd->se_cmd.t_data_nents;
+
+	ret = cxgbit_ddp_reserve(csk, ttinfo, cmd->se_cmd.data_length);
+	if (ret < 0) {
+		pr_info("csk 0x%p, cmd 0x%p, xfer len %u, sgcnt %u no ddp.\n",
+			csk, cmd, cmd->se_cmd.data_length, ttinfo->nents);
+
+		ttinfo->sgl = NULL;
+		ttinfo->nents = 0;
+
+		return ret;
+	}
+
+	ccmd->release = true;
+
+	pr_debug("cdev 0x%p, cmd 0x%p, tag 0x%x\n", cdev, cmd, ttinfo->tag);
+
+	return 0;
+}
+
+void cxgbit_release_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+{
+	struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
+
+	if (ccmd->release) {
+		struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
+
+		if (ttinfo->sgl) {
+			struct cxgbit_sock *csk = conn->context;
+			struct cxgbit_device *cdev = csk->com.cdev;
+			struct cxgbi_ppm *ppm = cdev2ppm(cdev);
+
+			cxgbi_ppm_ppod_release(ppm, ttinfo->idx);
+
+			dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
+				     ttinfo->nents, DMA_FROM_DEVICE);
+		} else {
+			put_page(sg_page(&ccmd->sg));
+		}
+
+		ccmd->release = false;
+	}
+}
+
+static void cxgbit_ddp_ppm_setup(void **ppm_pp, struct cxgbit_device *cdev,
+				 struct cxgbi_tag_format *tformat,
+				 unsigned int ppmax,
+				 unsigned int llimit,
+				 unsigned int start)
+{
+	int ret = cxgbi_ppm_init(ppm_pp, cdev->lldi.ports[0], cdev->lldi.pdev,
+				 &cdev->lldi, tformat, ppmax, llimit, start,
+				 2);
+
+	if (ret >= 0) {
+		struct cxgbi_ppm *ppm = (struct cxgbi_ppm *)(*ppm_pp);
+
+		if (ppm->ppmax < 1024 ||
+		    ppm->tformat.pgsz_idx_dflt >= DDP_PGIDX_MAX)
+			return;
+
+		set_bit(CDEV_DDP_ENABLE, &cdev->flags);
+	}
+}
+
+int cxgbit_ddp_init(struct cxgbit_device *cdev)
+{
+	struct cxgb4_lld_info *lldi = &cdev->lldi;
+	struct net_device *ndev = cdev->lldi.ports[0];
+	struct cxgbi_tag_format tformat;
+	unsigned int ppmax;
+	int i;
+
+	if (!lldi->vr->iscsi.size) {
+		pr_warn("%s, iscsi NOT enabled, check config!\n", ndev->name);
+		return -EACCES;
+	}
+
+	ppmax = lldi->vr->iscsi.size >> PPOD_SIZE_SHIFT;
+
+	memset(&tformat, 0, sizeof(struct cxgbi_tag_format));
+	for (i = 0; i < 4; i++)
+		tformat.pgsz_order[i] = (lldi->iscsi_pgsz_order >> (i << 3))
+					 & 0xF;
+	cxgbi_tagmask_check(lldi->iscsi_tagmask, &tformat);
+
+	cxgbit_ddp_ppm_setup(lldi->iscsi_ppm, cdev, &tformat, ppmax,
+			     lldi->iscsi_llimit, lldi->vr->iscsi.start);
+	return 0;
+}
-- 
2.0.2

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [RFC 33/34] cxgbit: add Kconfig and Makefile
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (31 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 32/34] cxgbit: add cxgbit_ddp.c Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-14 17:45 ` [RFC 34/34] iscsi-target: update " Varun Prakash
  2016-02-26  7:29 ` [RFC 00/34] Chelsio iSCSI target offload driver Nicholas A. Bellinger
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/cxgbit/Kconfig  | 7 +++++++
 drivers/target/iscsi/cxgbit/Makefile | 6 ++++++
 2 files changed, 13 insertions(+)
 create mode 100644 drivers/target/iscsi/cxgbit/Kconfig
 create mode 100644 drivers/target/iscsi/cxgbit/Makefile

diff --git a/drivers/target/iscsi/cxgbit/Kconfig b/drivers/target/iscsi/cxgbit/Kconfig
new file mode 100644
index 0000000..cf335b4
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/Kconfig
@@ -0,0 +1,7 @@
+config ISCSI_TARGET_CXGB4
+	tristate "Chelsio iSCSI target offload driver"
+	depends on ISCSI_TARGET && CHELSIO_T4
+	select CHELSIO_T4_UWIRE
+	---help---
+	To compile this driver as module, choose M here: the module
+	will be called cxgbit.
diff --git a/drivers/target/iscsi/cxgbit/Makefile b/drivers/target/iscsi/cxgbit/Makefile
new file mode 100644
index 0000000..bd56c07
--- /dev/null
+++ b/drivers/target/iscsi/cxgbit/Makefile
@@ -0,0 +1,6 @@
+ccflags-y := -Idrivers/net/ethernet/chelsio/cxgb4
+ccflags-y += -Idrivers/target/iscsi
+
+obj-$(CONFIG_ISCSI_TARGET_CXGB4)  += cxgbit.o
+
+cxgbit-y  := cxgbit_main.o cxgbit_cm.o cxgbit_target.o cxgbit_ddp.o
-- 
2.0.2

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [RFC 34/34] iscsi-target: update Kconfig and Makefile
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (32 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 33/34] cxgbit: add Kconfig and Makefile Varun Prakash
@ 2016-02-14 17:45 ` Varun Prakash
  2016-02-26  7:29 ` [RFC 00/34] Chelsio iSCSI target offload driver Nicholas A. Bellinger
  34 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-14 17:45 UTC (permalink / raw)
  To: target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad, varun

Update Kconfig and Makefile to
enable compiling cxgbit.ko.

Signed-off-by: Varun Prakash <varun@chelsio.com>
---
 drivers/target/iscsi/Kconfig  | 2 ++
 drivers/target/iscsi/Makefile | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/target/iscsi/Kconfig b/drivers/target/iscsi/Kconfig
index 8345fb4..bbdbf9c 100644
--- a/drivers/target/iscsi/Kconfig
+++ b/drivers/target/iscsi/Kconfig
@@ -7,3 +7,5 @@ config ISCSI_TARGET
 	help
 	Say M here to enable the ConfigFS enabled Linux-iSCSI.org iSCSI
 	Target Mode Stack.
+
+source	"drivers/target/iscsi/cxgbit/Kconfig"
diff --git a/drivers/target/iscsi/Makefile b/drivers/target/iscsi/Makefile
index 0f43be9..0f18295 100644
--- a/drivers/target/iscsi/Makefile
+++ b/drivers/target/iscsi/Makefile
@@ -18,3 +18,4 @@ iscsi_target_mod-y +=		iscsi_target_parameters.o \
 				iscsi_target_transport.o
 
 obj-$(CONFIG_ISCSI_TARGET)	+= iscsi_target_mod.o
+obj-$(CONFIG_ISCSI_TARGET_CXGB4) += cxgbit/
-- 
2.0.2

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [RFC 18/34] iscsi-target: call complete on conn_logout_comp
  2016-02-14 17:42 ` [RFC 18/34] iscsi-target: call complete on conn_logout_comp Varun Prakash
@ 2016-02-15 17:07   ` Sagi Grimberg
  2016-03-01 14:52     ` Christoph Hellwig
  0 siblings, 1 reply; 69+ messages in thread
From: Sagi Grimberg @ 2016-02-15 17:07 UTC (permalink / raw)
  To: Varun Prakash, target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad



On 14/02/2016 19:42, Varun Prakash wrote:
> The ISCSI_TCP_CXGB4 driver waits on conn_logout_comp like the
> ISCSI_TCP driver does, so call complete if the transport type
> is ISCSI_TCP_CXGB4.
>
> Signed-off-by: Varun Prakash <varun@chelsio.com>
> ---
>   drivers/target/iscsi/iscsi_target.c | 10 ++++++----
>   1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
> index 8bf3cfb..858f6e4 100644
> --- a/drivers/target/iscsi/iscsi_target.c
> +++ b/drivers/target/iscsi/iscsi_target.c
> @@ -4265,16 +4265,18 @@ int iscsit_close_connection(
>   	pr_debug("Closing iSCSI connection CID %hu on SID:"
>   		" %u\n", conn->cid, sess->sid);
>   	/*
> -	 * Always up conn_logout_comp for the traditional TCP case just in case
> -	 * the RX Thread in iscsi_target_rx_opcode() is sleeping and the logout
> -	 * response never got sent because the connection failed.
> +	 * Always up conn_logout_comp for the traditional TCP and TCP_CXGB4
> +	 * case just in case the RX Thread in iscsi_target_rx_opcode() is
> +	 * sleeping and the logout response never got sent because the
> +	 * connection failed.
>   	 *
>   	 * However for iser-target, isert_wait4logout() is using conn_logout_comp
>   	 * to signal logout response TX interrupt completion.  Go ahead and skip
>   	 * this for iser since isert_rx_opcode() does not wait on logout failure,
>   	 * and to avoid iscsi_conn pointer dereference in iser-target code.
>   	 */
> -	if (conn->conn_transport->transport_type == ISCSI_TCP)
> +	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
> +	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
>   		complete(&conn->conn_logout_comp);

IMO, this is an indication that this condition is a bandage in the first
place...

While this is unrelated to the patch set, we should look into the iscsi
termination sequences more closely and look if we can dispatch some
logic to the drivers in places they defer. It will make the code much
less complicated.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 19/34] iscsi-target: clear tx_thread_active
  2016-02-14 17:42 ` [RFC 19/34] iscsi-target: clear tx_thread_active Varun Prakash
@ 2016-02-15 17:07   ` Sagi Grimberg
  2016-03-01 14:59   ` Christoph Hellwig
  1 sibling, 0 replies; 69+ messages in thread
From: Sagi Grimberg @ 2016-02-15 17:07 UTC (permalink / raw)
  To: Varun Prakash, target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad



On 14/02/2016 19:42, Varun Prakash wrote:
> clear tx_thread_active for ISCSI_TCP_CXGB4
> transport in logout_post_handler functions.
>
> Signed-off-by: Varun Prakash <varun@chelsio.com>
> ---
>   drivers/target/iscsi/iscsi_target.c | 6 ++++--
>   1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
> index 858f6e4..3dd7ba2 100644
> --- a/drivers/target/iscsi/iscsi_target.c
> +++ b/drivers/target/iscsi/iscsi_target.c
> @@ -4579,7 +4579,8 @@ static void iscsit_logout_post_handler_closesession(
>   	 * always sleep waiting for RX/TX thread shutdown to complete
>   	 * within iscsit_close_connection().
>   	 */
> -	if (conn->conn_transport->transport_type == ISCSI_TCP)
> +	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
> +	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
>   		sleep = cmpxchg(&conn->tx_thread_active, true, false);

Again, this is not the right way to make progress...

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 20/34] iscsi-target: update struct iscsit_transport definition
  2016-02-14 17:42 ` [RFC 20/34] iscsi-target: update struct iscsit_transport definition Varun Prakash
@ 2016-02-15 17:09   ` Sagi Grimberg
  2016-02-18 12:36     ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Sagi Grimberg @ 2016-02-15 17:09 UTC (permalink / raw)
  To: Varun Prakash, target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad


> 1. void (*iscsit_rx_pdu)(struct iscsi_conn *);
>     Rx thread uses this for receiving and processing
>     iSCSI PDU in full feature phase.

Is iscsit_rx_pdu the best name for this? It sounds like
a function that would handle A pdu, but it's actually the
thread function dequeuing pdus, correct?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 22/34] iscsi-target: call Rx thread function
  2016-02-14 17:45 ` [RFC 22/34] iscsi-target: call Rx thread function Varun Prakash
@ 2016-02-15 17:16   ` Sagi Grimberg
  2016-03-01 15:01   ` Christoph Hellwig
  1 sibling, 0 replies; 69+ messages in thread
From: Sagi Grimberg @ 2016-02-15 17:16 UTC (permalink / raw)
  To: Varun Prakash, target-devel, linux-scsi
  Cc: nab, roland, swise, indranil, kxie, hariprasad


> call Rx thread function if registered
> by transport driver, so that transport
> drivers can use iscsi-target Rx thread
> for Rx processing.
>
> update iSER target driver to use this
> interface.
>
> Signed-off-by: Varun Prakash <varun@chelsio.com>

Other than the handler name, looks harmless.

Acked-by: Sagi Grimberg <sagig@mellanox.com>

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 20/34] iscsi-target: update struct iscsit_transport definition
  2016-02-15 17:09   ` Sagi Grimberg
@ 2016-02-18 12:36     ` Varun Prakash
  0 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-02-18 12:36 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: target-devel, linux-scsi, nab, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Mon, Feb 15, 2016 at 10:39:55PM +0530, Sagi Grimberg wrote:
> 
> > 1. void (*iscsit_rx_pdu)(struct iscsi_conn *);
> >     Rx thread uses this for receiving and processing
> >     iSCSI PDU in full feature phase.
> 
> Is iscsit_rx_pdu the best name for this? It sounds like
> a function that would handle A pdu, but it's actually the
> thread function dequeuing pdus, correct?

iscsit_rx_pdu is for both dequeuing and processing iscsi pdus,
I will rename it to iscsit_get_rx_pdu in the next revision.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 00/34] Chelsio iSCSI target offload driver
  2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
                   ` (33 preceding siblings ...)
  2016-02-14 17:45 ` [RFC 34/34] iscsi-target: update " Varun Prakash
@ 2016-02-26  7:29 ` Nicholas A. Bellinger
  34 siblings, 0 replies; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-02-26  7:29 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, roland, davem, dledford, swise,
	indranil, kxie, hariprasad

Hi Varun & Co,

Apologies for the delayed follow up here.

On Sun, 2016-02-14 at 23:00 +0530, Varun Prakash wrote:
> This RFC series is for Chelsio iSCSI target offload
> driver(cxgbit.ko).
> 
> cxgbit.ko registers with iSCSI target transport
> and offloads multiple CPU intensive tasks to
> Chelsio T5 adapters.
> 
> Chelsio T5 adapter series has following offload
> features for iSCSI -
> -TCP/IP offload.
> -iSCSI PDU recovery by reassembling TCP segments.
> -Header and Data Digest offload.
> -iSCSI segmentation offload(ISO).
> -Direct Data Placement(DDP).
> 
> Please review this series.
> 

After spending time over the last weeks to understand how cxgbit.ko
offloads work in patches #27-34, I think overall the new driver is in
very good shape.

Wrt to the patches to existing iscsi-target code in patches #13-26, I
need more time over the next days to consider the changes and/or better
alternatives, along with Sagi's earlier comments.  AFAICT it's mostly
smaller items, but it would be worth the discussion to see if/where some
larger improvements to existing iscsi-target code can be made beyond the
initial merge.

That said, this series has been added as-is to
target-pending/for-next-merge, to be picked up for the first round of
linux-next integration for drivers/net/ethernet/chelsio/ vs. DaveM's net
tree, and 0-day build testing.

Let's plan a RFC-v2 follow up sometime next week to address individual
comments, and to git squash patches #27-34 into a single merge-able
commit.

Thank you,

--nab


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 12/34] cxgb4: update Kconfig and Makefile
  2016-02-14 17:39 ` [RFC 12/34] cxgb4: update Kconfig and Makefile Varun Prakash
@ 2016-03-01 14:47   ` Christoph Hellwig
  2016-03-02 10:56     ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:47 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, davem, swise, indranil,
	kxie, hariprasad

> +config CHELSIO_T4_UWIRE
> +	bool "Unified Wire Support for Chelsio T5 cards"
> +	default n
> +	depends on CHELSIO_T4
> +	---help---
> +	  Enable unified-wire offload features.
> +	  Say Y here if you want to enable unified-wire over Ethernet
> +	  in the driver.

And what the hell is "unified-wire over Ethernet"?  A little more
explanation would be very helpful for the user facing this question.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 13/34] iscsi-target: add new transport type
  2016-02-14 17:42 ` [RFC 13/34] iscsi-target: add new transport type Varun Prakash
@ 2016-03-01 14:48   ` Christoph Hellwig
  2016-03-02 11:52     ` Varun Prakash
  2016-03-05 21:28     ` Nicholas A. Bellinger
  0 siblings, 2 replies; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:48 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

This really looks like an odd interface.  I think everyone will
be much happier in the long run if you do a generic offload interface
instead of special casing each possible driver.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-02-14 17:42 ` [RFC 14/34] iscsi-target: export symbols Varun Prakash
@ 2016-03-01 14:49   ` Christoph Hellwig
  2016-03-02 12:00     ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:49 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

This looks like pretty random exports and not something like a well
defined interface to me :(

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 15/34] iscsi-target: export symbols from iscsi_target.c
  2016-02-14 17:42 ` [RFC 15/34] iscsi-target: export symbols from iscsi_target.c Varun Prakash
@ 2016-03-01 14:49   ` Christoph Hellwig
  2016-03-02 12:07     ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:49 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

On Sun, Feb 14, 2016 at 11:12:09PM +0530, Varun Prakash wrote:
> export symbols from iscsi_target.c for
> ISCSI_TCP_CXGB4 transport driver.

What exactly is the reason for the split between this and the previous
patch?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 18/34] iscsi-target: call complete on conn_logout_comp
  2016-02-15 17:07   ` Sagi Grimberg
@ 2016-03-01 14:52     ` Christoph Hellwig
  2016-03-05 21:02       ` Nicholas A. Bellinger
  0 siblings, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:52 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Varun Prakash, target-devel, linux-scsi, nab, roland, swise,
	indranil, kxie, hariprasad

On Mon, Feb 15, 2016 at 07:07:19PM +0200, Sagi Grimberg wrote:
> >+	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
> >+	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
> >  		complete(&conn->conn_logout_comp);
> 
> IMO, this is an indication that this condition is a bandage in the first
> place...

Agreed.  Never mind the fact that a spurious complete() is perfectly
harmless..

> While this is unrelated to the patch set, we should look into the iscsi
> termination sequences more closely and look if we can dispatch some
> logic to the drivers in places they defer. It will make the code much
> less complicated.

Yes, all I've seen in this series so far suggest that the integration
is a complete mess.  I think this really needs to go back to the drawing
board.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 19/34] iscsi-target: clear tx_thread_active
  2016-02-14 17:42 ` [RFC 19/34] iscsi-target: clear tx_thread_active Varun Prakash
  2016-02-15 17:07   ` Sagi Grimberg
@ 2016-03-01 14:59   ` Christoph Hellwig
  1 sibling, 0 replies; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:59 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

On Sun, Feb 14, 2016 at 11:12:13PM +0530, Varun Prakash wrote:
> clear tx_thread_active for ISCSI_TCP_CXGB4
> transport in logout_post_handler functions.
> 
> Signed-off-by: Varun Prakash <varun@chelsio.com>
> ---
>  drivers/target/iscsi/iscsi_target.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
> index 858f6e4..3dd7ba2 100644
> --- a/drivers/target/iscsi/iscsi_target.c
> +++ b/drivers/target/iscsi/iscsi_target.c
> @@ -4579,7 +4579,8 @@ static void iscsit_logout_post_handler_closesession(
>  	 * always sleep waiting for RX/TX thread shutdown to complete
>  	 * within iscsit_close_connection().
>  	 */
> -	if (conn->conn_transport->transport_type == ISCSI_TCP)
> +	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
> +	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
>  		sleep = cmpxchg(&conn->tx_thread_active, true, false);

Just move the call to iscsit_response_queue, and this is handled much
more sanely.

>  
>  	atomic_set(&conn->conn_logout_remove, 0);
> @@ -4596,7 +4597,8 @@ static void iscsit_logout_post_handler_samecid(
>  {
>  	int sleep = 1;
>  
> -	if (conn->conn_transport->transport_type == ISCSI_TCP)
> +	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
> +	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
>  		sleep = cmpxchg(&conn->tx_thread_active, true, false);

Same here.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 21/34] iscsi-target: release transport driver resources
  2016-02-14 17:42 ` [RFC 21/34] iscsi-target: release transport driver resources Varun Prakash
@ 2016-03-01 14:59   ` Christoph Hellwig
  2016-03-02 12:15     ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 14:59 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

On Sun, Feb 14, 2016 at 11:12:15PM +0530, Varun Prakash wrote:
> A transport driver may allocate resources for an
> iSCSI cmd; to free those resources, the iSCSI target
> must call the release function registered by the transport
> driver.
> 
> ISCSI_TCP_CXGB4 frees DDP resource associated
> with a WRITE cmd in the callback function.

This should go with the patch introducing the method.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 22/34] iscsi-target: call Rx thread function
  2016-02-14 17:45 ` [RFC 22/34] iscsi-target: call Rx thread function Varun Prakash
  2016-02-15 17:16   ` Sagi Grimberg
@ 2016-03-01 15:01   ` Christoph Hellwig
  2016-03-05 23:16     ` Nicholas A. Bellinger
  1 sibling, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 15:01 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

On Sun, Feb 14, 2016 at 11:15:29PM +0530, Varun Prakash wrote:
> call Rx thread function if registered
> by transport driver, so that transport
> drivers can use iscsi-target Rx thread
> for Rx processing.
> 
> update iSER target driver to use this
> interface.

Not related to your new driver, but what's the point of even starting a
RX thread for iSER?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 23/34] iscsi-target: split iscsi_target_rx_thread()
  2016-02-14 17:45 ` [RFC 23/34] iscsi-target: split iscsi_target_rx_thread() Varun Prakash
@ 2016-03-01 15:02   ` Christoph Hellwig
  0 siblings, 0 replies; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 15:02 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

On Sun, Feb 14, 2016 at 11:15:30PM +0530, Varun Prakash wrote:
> split iscsi_target_rx_thread() into two parts,
> 1. iscsi_target_rx_thread() is common to all
>    transport drivers, it will call Rx function
>    registered by transport driver.
> 
> 2. iscsit_rx_pdu() is Rx function for
>    ISCSI_TCP transport.

It seems like the cleaner approach would be to have thread management
in the driver (or preferably workqueues!) and export a few helpers
for common code.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 24/34] iscsi-target: validate conn operational parameters
  2016-02-14 17:45 ` [RFC 24/34] iscsi-target: validate conn operational parameters Varun Prakash
@ 2016-03-01 15:03   ` Christoph Hellwig
  2016-03-02 12:18     ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Christoph Hellwig @ 2016-03-01 15:03 UTC (permalink / raw)
  To: Varun Prakash
  Cc: target-devel, linux-scsi, nab, roland, swise, indranil, kxie, hariprasad

On Sun, Feb 14, 2016 at 11:15:31PM +0530, Varun Prakash wrote:
> call validate params function if registered
> by transport driver before starting negotiation,
> so that transport driver can validate and update
> the value of parameters.

Should go together with the introduction of the method.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 12/34] cxgb4: update Kconfig and Makefile
  2016-03-01 14:47   ` Christoph Hellwig
@ 2016-03-02 10:56     ` Varun Prakash
  0 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-03-02 10:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: target-devel, linux-scsi, nab, roland, davem, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Tue, Mar 01, 2016 at 08:17:06PM +0530, Christoph Hellwig wrote:
> > +config CHELSIO_T4_UWIRE
> > +	bool "Unified Wire Support for Chelsio T5 cards"
> > +	default n
> > +	depends on CHELSIO_T4
> > +	---help---
> > +	  Enable unified-wire offload features.
> > +	  Say Y here if you want to enable unified-wire over Ethernet
> > +	  in the driver.
> 
> And what the hell is "unified-wire over Ethernet".  A little more
> explanation would be very helpful for the user facing this question.

Ok, I will update the help in v2 series.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 13/34] iscsi-target: add new transport type
  2016-03-01 14:48   ` Christoph Hellwig
@ 2016-03-02 11:52     ` Varun Prakash
  2016-03-05 21:28     ` Nicholas A. Bellinger
  1 sibling, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-03-02 11:52 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: target-devel, linux-scsi, nab, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Tue, Mar 01, 2016 at 08:18:48PM +0530, Christoph Hellwig wrote:
> This really looks like an odd interface.  I think everyone will
> be much happpier in the long run if you do a generic offload interface
> instead of special casing each possible driver.

A common offload transport type is a better option, but we would
have to make many changes in iscsi-target to
support multiple offload drivers simultaneously.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-03-01 14:49   ` Christoph Hellwig
@ 2016-03-02 12:00     ` Varun Prakash
  2016-03-05 21:54       ` Nicholas A. Bellinger
  0 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-03-02 12:00 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: target-devel, linux-scsi, nab, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Tue, Mar 01, 2016 at 08:19:21PM +0530, Christoph Hellwig wrote:
> This looks like pretty random exports and not something like a well
> defined interface to me :(

I have exported functions which work on iscsi-target data structures
and can be reused by offload drivers.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 15/34] iscsi-target: export symbols from iscsi_target.c
  2016-03-01 14:49   ` Christoph Hellwig
@ 2016-03-02 12:07     ` Varun Prakash
  0 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-03-02 12:07 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: target-devel, linux-scsi, nab, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Tue, Mar 01, 2016 at 08:19:48PM +0530, Christoph Hellwig wrote:
> On Sun, Feb 14, 2016 at 11:12:09PM +0530, Varun Prakash wrote:
> > export symbols from iscsi_target.c for
> > ISCSI_TCP_CXGB4 transport driver.
> 
> What exactly is the reason for the split between this and the previous
> patch?

To avoid a single big patch I created two separate patches; also, a
large number of functions are exported from iscsi_target.c compared
to the other files.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 21/34] iscsi-target: release transport driver resources
  2016-03-01 14:59   ` Christoph Hellwig
@ 2016-03-02 12:15     ` Varun Prakash
  0 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-03-02 12:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: target-devel, linux-scsi, nab, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Tue, Mar 01, 2016 at 08:29:53PM +0530, Christoph Hellwig wrote:
> On Sun, Feb 14, 2016 at 11:12:15PM +0530, Varun Prakash wrote:
> > transport driver may allocate resources for an
> > iSCSI cmd, to free that resources iscsi target
> > must call release function registered by transport
> > driver.
> > 
> > ISCSI_TCP_CXGB4 frees DDP resource associated
> > with a WRITE cmd in the callback function.
> 
> This should go with the patch introducing the method.

Ok, I will update it in v2 series.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 24/34] iscsi-target: validate conn operational parameters
  2016-03-01 15:03   ` Christoph Hellwig
@ 2016-03-02 12:18     ` Varun Prakash
  0 siblings, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-03-02 12:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: target-devel, linux-scsi, nab, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Tue, Mar 01, 2016 at 08:33:18PM +0530, Christoph Hellwig wrote:
> On Sun, Feb 14, 2016 at 11:15:31PM +0530, Varun Prakash wrote:
> > call validate params function if registered
> > by transport driver before starting negotiation,
> > so that transport driver can validate and update
> > the value of parameters.
> 
> Should go together with the introduction of the method.

Ok, I will update it in v2 series. 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 18/34] iscsi-target: call complete on conn_logout_comp
  2016-03-01 14:52     ` Christoph Hellwig
@ 2016-03-05 21:02       ` Nicholas A. Bellinger
  0 siblings, 0 replies; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-05 21:02 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, Varun Prakash, target-devel, linux-scsi, roland,
	swise, indranil, kxie, hariprasad

On Tue, 2016-03-01 at 06:52 -0800, Christoph Hellwig wrote:
> On Mon, Feb 15, 2016 at 07:07:19PM +0200, Sagi Grimberg wrote:
> > >+	if ((conn->conn_transport->transport_type == ISCSI_TCP) ||
> > >+	    (conn->conn_transport->transport_type == ISCSI_TCP_CXGB4))
> > >  		complete(&conn->conn_logout_comp);
> > 
> > IMO, this is an indication that this condition is a bandage in the first
> > place...
> 
> Agreed.  Nevermind the fact that a spurious complete() is perfectly
> harmless..
> 

iser-target uses the same completion in isert_wait4logout(), waiting for
a logout_posted response, which is why a spurious wakeup is not OK here.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 13/34] iscsi-target: add new transport type
  2016-03-01 14:48   ` Christoph Hellwig
  2016-03-02 11:52     ` Varun Prakash
@ 2016-03-05 21:28     ` Nicholas A. Bellinger
  2016-03-07 14:55       ` Varun Prakash
  1 sibling, 1 reply; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-05 21:28 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Varun Prakash, target-devel, linux-scsi, roland, swise, indranil,
	kxie, hariprasad

On Tue, 2016-03-01 at 06:48 -0800, Christoph Hellwig wrote:
> This really looks like an odd interface.  I think everyone will
> be much happpier in the long run if you do a generic offload interface
> instead of special casing each possible driver.

Yes, I think iscsit_transport_type should be replaced with an enum
defining three types:

  - TCP using Linux/NET + stateless hw offload
  - TCP using hw iscsi + network offload
  - RDMA using iser-target offload

iscsit_transport_type was originally introduced to support TCP and SCTP
network portals (which are still there btw), and since nobody cares about
SCTP anymore, we can just drop it.

Varun, let's go in this direction for -v2 code, and use this new enum
for existing special cases that -v1 touches.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-03-02 12:00     ` Varun Prakash
@ 2016-03-05 21:54       ` Nicholas A. Bellinger
  2016-03-07 23:22         ` Nicholas A. Bellinger
  0 siblings, 1 reply; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-05 21:54 UTC (permalink / raw)
  To: Varun Prakash
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

Hi Varun & Co,

On Wed, 2016-03-02 at 17:30 +0530, Varun Prakash wrote:
> On Tue, Mar 01, 2016 at 08:19:21PM +0530, Christoph Hellwig wrote:
> > This looks like pretty random exports and not something like a well
> > defined interface to me :(
> 
> I have exported functions which works on iscsi-target data structures
> and can be reused by offload drivers.

The main point here for cxgbit is that, as-is, it duplicates a lot of
code vs. the existing TCP code-paths using Linux/NET.

Many of these symbols are related to ->iscsit_immediate_queue() and
->iscsit_response_queue() support, where cxgbit and the existing TCP
path can both benefit from using common code.

That is, to start, cxgbit should avoid using these two specific callbacks
originally intended for iser-target, and for supporting traditional
iscsi hw offloads we'll need to pursue:

  - New iscsit_transport per PDU callbacks invoked from 
    iscsit_immediate_queue() + iscsit_response_queue() in 
    iscsi_target.c, or

  - New iscsit_transport per PDU callbacks invoked from
    iscsi_send_$PDU() in iscsi_target.c.
   

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 22/34] iscsi-target: call Rx thread function
  2016-03-01 15:01   ` Christoph Hellwig
@ 2016-03-05 23:16     ` Nicholas A. Bellinger
  0 siblings, 0 replies; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-05 23:16 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Varun Prakash, target-devel, linux-scsi, roland, swise, indranil,
	kxie, hariprasad

On Tue, 2016-03-01 at 07:01 -0800, Christoph Hellwig wrote:
> On Sun, Feb 14, 2016 at 11:15:29PM +0530, Varun Prakash wrote:
> > call Rx thread function if registered
> > by transport driver, so that transport
> > drivers can use iscsi-target Rx thread
> > for Rx processing.
> > 
> > update iSER target driver to use this
> > interface.
> 
> Not related to your new driver, but what's the point of even starting a
> RX thread for iSER?

Originally, this was added to avoid having multiple connection shutdown
special cases in now defunct iscsi_target_tq.c code.

With that part out of the way, it should be straightforward to avoid
starting conn->rx_thread altogether for iser-target.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 13/34] iscsi-target: add new transport type
  2016-03-05 21:28     ` Nicholas A. Bellinger
@ 2016-03-07 14:55       ` Varun Prakash
  2016-03-07 20:30         ` Nicholas A. Bellinger
  0 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-03-07 14:55 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, swise,
	indranil, kxie, hariprasad

On Sat, Mar 05, 2016 at 01:28:58PM -0800, Nicholas A. Bellinger wrote:
> On Tue, 2016-03-01 at 06:48 -0800, Christoph Hellwig wrote:
> > This really looks like an odd interface.  I think everyone will
> > be much happpier in the long run if you do a generic offload interface
> > instead of special casing each possible driver.
> 
> Yes, I think iscsit_transport_type should to be replaced with an enum
> defining three types:
> 
>   - TCP using Linux/NET + stateless hw offload
>   - TCP using hw iscsi + network offload
>   - RDMA using iser-target offload
> 
> iscsit_transport_type was originally introduced to support TCP and SCTP
> network portal (which is still there btw), and since nobody cares about
> SCTP anymore, we can just drop it.
> 
> Varun, let's go in this direction for -v2 code, and use this new enum
> for existing special cases that -v1 touches.

Should we allow registration of multiple offload drivers of the same
transport type with iscsi-target?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 13/34] iscsi-target: add new transport type
  2016-03-07 14:55       ` Varun Prakash
@ 2016-03-07 20:30         ` Nicholas A. Bellinger
  0 siblings, 0 replies; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-07 20:30 UTC (permalink / raw)
  To: Varun Prakash
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, swise,
	indranil, kxie, hariprasad

On Mon, 2016-03-07 at 20:25 +0530, Varun Prakash wrote:
> On Sat, Mar 05, 2016 at 01:28:58PM -0800, Nicholas A. Bellinger wrote:
> > On Tue, 2016-03-01 at 06:48 -0800, Christoph Hellwig wrote:
> > > This really looks like an odd interface.  I think everyone will
> > > be much happpier in the long run if you do a generic offload interface
> > > instead of special casing each possible driver.
> > 
> > Yes, I think iscsit_transport_type should to be replaced with an enum
> > defining three types:
> > 
> >   - TCP using Linux/NET + stateless hw offload
> >   - TCP using hw iscsi + network offload
> >   - RDMA using iser-target offload
> > 
> > iscsit_transport_type was originally introduced to support TCP and SCTP
> > network portal (which is still there btw), and since nobody cares about
> > SCTP anymore, we can just drop it.
> > 
> > Varun, let's go in this direction for -v2 code, and use this new enum
> > for existing special cases that -v1 touches.
> 
> Should we allow registration of multiple same transport type
> offload drivers with iscsi-target?

Registration of different hw iscsi + network offload drivers using the
same transport type should be supported, yes.

I don't believe there is a limit wrt multiple drivers/infiniband/hw/
RNICs using different iscsit_transport registrations today, so it
shouldn't be an issue.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-03-05 21:54       ` Nicholas A. Bellinger
@ 2016-03-07 23:22         ` Nicholas A. Bellinger
  2016-03-12  6:28           ` Nicholas A. Bellinger
  0 siblings, 1 reply; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-07 23:22 UTC (permalink / raw)
  To: Varun Prakash
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Sat, 2016-03-05 at 13:54 -0800, Nicholas A. Bellinger wrote:
> Hi Varun & Co,
> 
> On Wed, 2016-03-02 at 17:30 +0530, Varun Prakash wrote:
> > On Tue, Mar 01, 2016 at 08:19:21PM +0530, Christoph Hellwig wrote:
> > > This looks like pretty random exports and not something like a well
> > > defined interface to me :(
> > 
> > I have exported functions which works on iscsi-target data structures
> > and can be reused by offload drivers.
> 
> The main point here for cxgbit, as-is it's duplicating a lot of code vs.
> existing TCP code-paths using Linux/NET.
> 
> Many of these symbols are related to ->iscsit_immediate_queue() and
> ->cxgbit_response_queue() support, which for cxgbit vs. existing TCP can
> both benefit using common code.
> 
> That is, to start cxgbit should avoid using these two specific callbacks
> originally intended for iser-target, and for supporting traditional
> iscsi hw offloads we'll need to pursue:
> 
>   - New iscsit_transport per PDU callbacks invoked from 
>     iscsit_immediate_queue() + iscsit_response_queue() in 
>     iscsi_target.c, or
> 
>   - New iscsit_transport per PDU callbacks invoked from
>     iscsi_send_$PDU() in iscsi_target.c.
>    

So obviously this is going to take longer to sort out, and likely not
end up being v4.6-rc1 material.  That is OK, as the end result will be
better code.  :)

That said, it would make sense to do the initial merge of the
drivers/net/ethernet/chelsio/cxgb4/ + PPM prerequisite changes for
v4.6, and as many basic iscsi-target prerequisites for ISO as
possible in the short time we have.

Wrt the drivers/net/ethernet/chelsio/cxgb4/ changes, please have a
person from Chelsio give their 'Reviewed-by' and 'Signed-off-by' to this
initial series, and make sure they get posted as stand-alone patches
ASAP to net-dev w/ DaveM in the loop.

I'm still happy to push these via target-pending/for-next-merge to get
the ball rolling for post v4.6 development, as long as they have the
extra reviews + signed-off-by from Chelsio folks, and DaveM doesn't have
any objections.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-03-07 23:22         ` Nicholas A. Bellinger
@ 2016-03-12  6:28           ` Nicholas A. Bellinger
  2016-03-13 12:13             ` Varun Prakash
  0 siblings, 1 reply; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-03-12  6:28 UTC (permalink / raw)
  To: Varun Prakash
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

Hi Varun & Co,

On Mon, 2016-03-07 at 15:22 -0800, Nicholas A. Bellinger wrote:

<SNIP>

> So obviously this is going to take longer to sort out, and likely not
> end up being v4.6-rc1 material.  That is OK, as the end result will be
> better code.  :)
> 
> That said, it would make sense do the initial merge of the
> drivers/net/ethernet/chelsio/cxgb4/ + PPM prerequisites changes for
> v4.6, and as many basic prerequisites for iscsi-target for ISO as
> possible in the short time we have.
> 
> Wrt to drivers/net/ethernet/chelsio/cxgb4/ changes, please have person
> from Chelsio give their 'Reviewed-by' and 'Signed-off-by' to this
> initial series, and make sure they get posted as stand-alone patches
> ASAP to net-dev w/ DaveM in the loop.
> 
> I'm still happy to push these via target-pending/for-next-merge to get
> the ball rolling for post v4.6 development, as long as they have the
> extra reviews + signed-off-by from Chelsio folks, and DaveM doesn't have
> any objections.
> 

The drivers/net/ethernet/chelsio/cxgb4/ prerequisites for supporting
iscsi segment offload (ISO) + page pod manager (PPM) are in
target-pending/for-next-merge, and patches #13+ have been dropped
for initial v4.6-rc1 merge.

Also, there haven't been any further linux-next subsystem merge failures
beyond the initial t4fw_api.h conflict as reported by SFR.

https://lkml.org/lkml/2016/2/29/34

So please go ahead and post stand-alone cxgb4 driver changes for
initial ISO + PPM to net-dev + target-devel, and Hariprasad should
review + ack these ASAP.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-03-12  6:28           ` Nicholas A. Bellinger
@ 2016-03-13 12:13             ` Varun Prakash
  2016-04-08  7:16               ` Nicholas A. Bellinger
  0 siblings, 1 reply; 69+ messages in thread
From: Varun Prakash @ 2016-03-13 12:13 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Fri, Mar 11, 2016 at 10:28:33PM -0800, Nicholas A. Bellinger wrote:
> Hi Varun & Co,
> 
> On Mon, 2016-03-07 at 15:22 -0800, Nicholas A. Bellinger wrote:
> 
> <SNIP>
> 
> > So obviously this is going to take longer to sort out, and likely not
> > end up being v4.6-rc1 material.  That is OK, as the end result will be
> > better code.  :)
> > 
> > That said, it would make sense do the initial merge of the
> > drivers/net/ethernet/chelsio/cxgb4/ + PPM prerequisites changes for
> > v4.6, and as many basic prerequisites for iscsi-target for ISO as
> > possible in the short time we have.
> > 
> > Wrt to drivers/net/ethernet/chelsio/cxgb4/ changes, please have person
> > from Chelsio give their 'Reviewed-by' and 'Signed-off-by' to this
> > initial series, and make sure they get posted as stand-alone patches
> > ASAP to net-dev w/ DaveM in the loop.
> > 
> > I'm still happy to push these via target-pending/for-next-merge to get
> > the ball rolling for post v4.6 development, as long as they have the
> > extra reviews + signed-off-by from Chelsio folks, and DaveM doesn't have
> > any objections.
> > 
> 
> The drivers/net/ethernet/chelsio/cxgb4/ prerequisites for supporting
> iscsi segment offload (ISO) + page pod manager (PPM) are in
> target-pending/for-next-merge, and patches #13+ have been dropped
> for initial v4.6-rc1 merge.
> 
> Also, there haven't been any further linux-next subsystem merge failures
> beyond the initial t4fw_api.h conflict as reported by SFR.
> 
> https://lkml.org/lkml/2016/2/29/34
> 
> So please go ahead and post stand-alone cxgb4 driver changes for
> initial ISO + PPM to net-dev + target-devel, and Hariprasad should
> review + ack these ASAP.

Yes, I will post cxgb4 patch series, thanks.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-03-13 12:13             ` Varun Prakash
@ 2016-04-08  7:16               ` Nicholas A. Bellinger
  2016-04-09 12:09                 ` Varun Prakash
  2016-04-10  8:56                 ` Sagi Grimberg
  0 siblings, 2 replies; 69+ messages in thread
From: Nicholas A. Bellinger @ 2016-04-08  7:16 UTC (permalink / raw)
  To: Varun Prakash
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Sun, 2016-03-13 at 17:43 +0530, Varun Prakash wrote:
> On Fri, Mar 11, 2016 at 10:28:33PM -0800, Nicholas A. Bellinger wrote:
> > Hi Varun & Co,
> > 
> > On Mon, 2016-03-07 at 15:22 -0800, Nicholas A. Bellinger wrote:
> > 
> > <SNIP>
> > 
> > > So obviously this is going to take longer to sort out, and likely not
> > > end up being v4.6-rc1 material.  That is OK, as the end result will be
> > > better code.  :)
> > > 
> > > That said, it would make sense do the initial merge of the
> > > drivers/net/ethernet/chelsio/cxgb4/ + PPM prerequisites changes for
> > > v4.6, and as many basic prerequisites for iscsi-target for ISO as
> > > possible in the short time we have.
> > > 
> > > Wrt to drivers/net/ethernet/chelsio/cxgb4/ changes, please have person
> > > from Chelsio give their 'Reviewed-by' and 'Signed-off-by' to this
> > > initial series, and make sure they get posted as stand-alone patches
> > > ASAP to net-dev w/ DaveM in the loop.
> > > 
> > > I'm still happy to push these via target-pending/for-next-merge to get
> > > the ball rolling for post v4.6 development, as long as they have the
> > > extra reviews + signed-off-by from Chelsio folks, and DaveM doesn't have
> > > any objections.
> > > 
> > 
> > The drivers/net/ethernet/chelsio/cxgb4/ prerequisites for supporting
> > iscsi segment offload (ISO) + page pod manager (PPM) are in
> > target-pending/for-next-merge, and patches #13+ have been dropped
> > for initial v4.6-rc1 merge.
> > 
> > Also, there haven't been any further linux-next subsystem merge failures
> > beyond the initial t4fw_api.h conflict as reported by SFR.
> > 
> > https://lkml.org/lkml/2016/2/29/34
> > 
> > So please go ahead and post stand-alone cxgb4 driver changes for
> > initial ISO + PPM to net-dev + target-devel, and Hariprasad should
> > review + ack these ASAP.
> 
> Yes, I will post cxgb4 patch series, thanks.

Great.  Just curious how -v2 is coming along..?

I've got a few cycles over the weekend, and plan to start reviewing as
the series hits the list.

Btw, I asked Sagi to help out with review as well.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-04-08  7:16               ` Nicholas A. Bellinger
@ 2016-04-09 12:09                 ` Varun Prakash
  2016-04-10  8:56                 ` Sagi Grimberg
  1 sibling, 0 replies; 69+ messages in thread
From: Varun Prakash @ 2016-04-09 12:09 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S

On Fri, Apr 08, 2016 at 12:16:43AM -0700, Nicholas A. Bellinger wrote:
> On Sun, 2016-03-13 at 17:43 +0530, Varun Prakash wrote:
> > On Fri, Mar 11, 2016 at 10:28:33PM -0800, Nicholas A. Bellinger wrote:
> > > Hi Varun & Co,
> > > 
> > > On Mon, 2016-03-07 at 15:22 -0800, Nicholas A. Bellinger wrote:
> > > 
> > > <SNIP>
> > > 
> > > The drivers/net/ethernet/chelsio/cxgb4/ prerequisites for supporting
> > > iscsi segment offload (ISO) + page pod manager (PPM) are in
> > > target-pending/for-next-merge, and patches #13+ have been dropped
> > > for initial v4.6-rc1 merge.
> > > 
> > > Also, there haven't been any further linux-next subsystem merge failures
> > > beyond the initial t4fw_api.h conflict as reported by SFR.
> > > 
> > > https://lkml.org/lkml/2016/2/29/34
> > > 
> > > So please go ahead and post stand-alone cxgb4 driver changes for
> > > initial ISO + PPM to net-dev + target-devel, and Hariprasad should
> > > review + ack these ASAP.
> > 
> > Yes, I will post cxgb4 patch series, thanks.
> 
> Great.  Just curious how -v2 is coming along..?
> 
> I've got a few cycles over the weekend, and plan to start reviewing as
> the series hits the list.
>
 
I will post v2 series today, thanks.

> Btw, I asked Sagi to help out with review as well.
>

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [RFC 14/34] iscsi-target: export symbols
  2016-04-08  7:16               ` Nicholas A. Bellinger
  2016-04-09 12:09                 ` Varun Prakash
@ 2016-04-10  8:56                 ` Sagi Grimberg
  1 sibling, 0 replies; 69+ messages in thread
From: Sagi Grimberg @ 2016-04-10  8:56 UTC (permalink / raw)
  To: Nicholas A. Bellinger, Varun Prakash
  Cc: Christoph Hellwig, target-devel, linux-scsi, roland, SWise OGC,
	Indranil Choudhury, Karen Xie, Hariprasad S


> Great.  Just curious how -v2 is coming along..?
>
> I've got a few cycles over the weekend, and plan to start reviewing as
> the series hits the list.
>
> Btw, I asked Sagi to help out with review as well.

If you did, it's lost in my old email address :)

I'll have a look this week.

Cheers,
Sagi.

^ permalink raw reply	[flat|nested] 69+ messages in thread

end of thread, other threads:[~2016-04-10  8:56 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-14 17:30 [RFC 00/34] Chelsio iSCSI target offload driver Varun Prakash
2016-02-14 17:32 ` [RFC 01/34] cxgb4: add new ULD type CXGB4_ULD_ISCSIT Varun Prakash
2016-02-14 17:32 ` [RFC 02/34] cxgb4: allocate resources for CXGB4_ULD_ISCSIT Varun Prakash
2016-02-14 17:32 ` [RFC 03/34] cxgb4: large receive offload support Varun Prakash
2016-02-14 17:34 ` [RFC 04/34] cxgb4, iw_cxgb4: move definitions to common header file Varun Prakash
2016-02-14 17:34 ` [RFC 05/34] cxgb4, iw_cxgb4, cxgb4i: remove duplicate definitions Varun Prakash
2016-02-14 17:37 ` [RFC 06/34] cxgb4, cxgb4i: move struct cpl_rx_data_ddp definition Varun Prakash
2016-02-14 17:37 ` [RFC 07/34] cxgb4: add definitions for iSCSI target ULD Varun Prakash
2016-02-14 17:37 ` [RFC 08/34] cxgb4: update struct cxgb4_lld_info definition Varun Prakash
2016-02-14 17:37 ` [RFC 09/34] cxgb4: move VLAN_NONE macro definition Varun Prakash
2016-02-14 17:38 ` [RFC 10/34] cxgb4, iw_cxgb4: move delayed ack macro definitions Varun Prakash
2016-02-14 17:39 ` [RFC 11/34] cxgb4: add iSCSI DDP page pod manager Varun Prakash
2016-02-14 17:39 ` [RFC 12/34] cxgb4: update Kconfig and Makefile Varun Prakash
2016-03-01 14:47   ` Christoph Hellwig
2016-03-02 10:56     ` Varun Prakash
2016-02-14 17:42 ` [RFC 13/34] iscsi-target: add new transport type Varun Prakash
2016-03-01 14:48   ` Christoph Hellwig
2016-03-02 11:52     ` Varun Prakash
2016-03-05 21:28     ` Nicholas A. Bellinger
2016-03-07 14:55       ` Varun Prakash
2016-03-07 20:30         ` Nicholas A. Bellinger
2016-02-14 17:42 ` [RFC 14/34] iscsi-target: export symbols Varun Prakash
2016-03-01 14:49   ` Christoph Hellwig
2016-03-02 12:00     ` Varun Prakash
2016-03-05 21:54       ` Nicholas A. Bellinger
2016-03-07 23:22         ` Nicholas A. Bellinger
2016-03-12  6:28           ` Nicholas A. Bellinger
2016-03-13 12:13             ` Varun Prakash
2016-04-08  7:16               ` Nicholas A. Bellinger
2016-04-09 12:09                 ` Varun Prakash
2016-04-10  8:56                 ` Sagi Grimberg
2016-02-14 17:42 ` [RFC 15/34] iscsi-target: export symbols from iscsi_target.c Varun Prakash
2016-03-01 14:49   ` Christoph Hellwig
2016-03-02 12:07     ` Varun Prakash
2016-02-14 17:42 ` [RFC 16/34] iscsi-target: split iscsit_send_r2t() Varun Prakash
2016-02-14 17:42 ` [RFC 17/34] iscsi-target: split iscsit_send_conn_drop_async_message() Varun Prakash
2016-02-14 17:42 ` [RFC 18/34] iscsi-target: call complete on conn_logout_comp Varun Prakash
2016-02-15 17:07   ` Sagi Grimberg
2016-03-01 14:52     ` Christoph Hellwig
2016-03-05 21:02       ` Nicholas A. Bellinger
2016-02-14 17:42 ` [RFC 19/34] iscsi-target: clear tx_thread_active Varun Prakash
2016-02-15 17:07   ` Sagi Grimberg
2016-03-01 14:59   ` Christoph Hellwig
2016-02-14 17:42 ` [RFC 20/34] iscsi-target: update struct iscsit_transport definition Varun Prakash
2016-02-15 17:09   ` Sagi Grimberg
2016-02-18 12:36     ` Varun Prakash
2016-02-14 17:42 ` [RFC 21/34] iscsi-target: release transport driver resources Varun Prakash
2016-03-01 14:59   ` Christoph Hellwig
2016-03-02 12:15     ` Varun Prakash
2016-02-14 17:45 ` [RFC 22/34] iscsi-target: call Rx thread function Varun Prakash
2016-02-15 17:16   ` Sagi Grimberg
2016-03-01 15:01   ` Christoph Hellwig
2016-03-05 23:16     ` Nicholas A. Bellinger
2016-02-14 17:45 ` [RFC 23/34] iscsi-target: split iscsi_target_rx_thread() Varun Prakash
2016-03-01 15:02   ` Christoph Hellwig
2016-02-14 17:45 ` [RFC 24/34] iscsi-target: validate conn operational parameters Varun Prakash
2016-03-01 15:03   ` Christoph Hellwig
2016-03-02 12:18     ` Varun Prakash
2016-02-14 17:45 ` [RFC 25/34] iscsi-target: move iscsit_thread_check_cpumask() Varun Prakash
2016-02-14 17:45 ` [RFC 26/34] iscsi-target: fix seq_end_offset calculation Varun Prakash
2016-02-14 17:45 ` [RFC 27/34] cxgbit: add cxgbit.h Varun Prakash
2016-02-14 17:45 ` [RFC 28/34] cxgbit: add cxgbit_lro.h Varun Prakash
2016-02-14 17:45 ` [RFC 29/34] cxgbit: add cxgbit_main.c Varun Prakash
2016-02-14 17:45 ` [RFC 30/34] cxgbit: add cxgbit_cm.c Varun Prakash
2016-02-14 17:45 ` [RFC 31/34] cxgbit: add cxgbit_target.c Varun Prakash
2016-02-14 17:45 ` [RFC 32/34] cxgbit: add cxgbit_ddp.c Varun Prakash
2016-02-14 17:45 ` [RFC 33/34] cxgbit: add Kconfig and Makefile Varun Prakash
2016-02-14 17:45 ` [RFC 34/34] iscsi-target: update " Varun Prakash
2016-02-26  7:29 ` [RFC 00/34] Chelsio iSCSI target offload driver Nicholas A. Bellinger
