netdev.vger.kernel.org archive mirror
* [PATCH 0/8] bna: Update bna driver version to 3.0.2.0
@ 2011-08-09  2:21 Rasesh Mody
  2011-08-09  2:21 ` [PATCH 1/8] bna: MSGQ Implementation Rasesh Mody
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Hi David,

   The following patch set contains changes for driver re-architecture and
   code re-organization. This includes a driver-firmware interface change,
   a Tx and Rx path redesign, and the corresponding changes required to
   use/enable the new code while keeping the patch set bisectable. It also
   removes obsolete files and cleans up unused code.

   This updates the Brocade BNA driver to v3.0.2.0.

   The driver has been compiled and tested against net-next-2.6 (3.0.0-rc7).

Thanks,
Rasesh

Rasesh Mody (8):
  bna: MSGQ Implementation
  bna: Introduce ENET as New Driver and FW Interface
  bna: Tx and Rx Redesign
  bna: Add New HW Defs
  bna: ENET and Tx Rx Redesign Enablement
  bna: Remove Unused Code
  bna: Remove Obsolete Files
  bna: Driver Version changed to 3.0.2.0

 drivers/net/bna/Makefile            |    5 +-
 drivers/net/bna/bfa_cee.c           |    3 -
 drivers/net/bna/bfa_defs.h          |   25 +-
 drivers/net/bna/bfa_defs_mfg_comm.h |   28 +-
 drivers/net/bna/bfa_ioc.c           |  403 +++--
 drivers/net/bna/bfa_ioc.h           |   45 +-
 drivers/net/bna/bfa_ioc_ct.c        |   41 +-
 drivers/net/bna/bfa_msgq.c          |  669 ++++++
 drivers/net/bna/bfa_msgq.h          |  130 ++
 drivers/net/bna/bfi.h               |  195 ++-
 drivers/net/bna/bfi_enet.h          |  901 ++++++++
 drivers/net/bna/bfi_ll.h            |  438 ----
 drivers/net/bna/bna.h               |  337 ++--
 drivers/net/bna/bna_ctrl.c          | 3076 -------------------------
 drivers/net/bna/bna_enet.c          | 2129 ++++++++++++++++++
 drivers/net/bna/bna_hw.h            | 1492 -------------
 drivers/net/bna/bna_hw_defs.h       |  413 ++++
 drivers/net/bna/bna_tx_rx.c         | 3787 +++++++++++++++++++++++++++++++
 drivers/net/bna/bna_txrx.c          | 4185 -----------------------------------
 drivers/net/bna/bna_types.h         |  654 +++----
 drivers/net/bna/bnad.c              |  643 ++++---
 drivers/net/bna/bnad.h              |   38 +-
 drivers/net/bna/bnad_ethtool.c      |   65 +-
 drivers/net/bna/cna.h               |   31 +-
 24 files changed, 9443 insertions(+), 10290 deletions(-)
 create mode 100644 drivers/net/bna/bfa_msgq.c
 create mode 100644 drivers/net/bna/bfa_msgq.h
 create mode 100644 drivers/net/bna/bfi_enet.h
 delete mode 100644 drivers/net/bna/bfi_ll.h
 delete mode 100644 drivers/net/bna/bna_ctrl.c
 create mode 100644 drivers/net/bna/bna_enet.c
 delete mode 100644 drivers/net/bna/bna_hw.h
 create mode 100644 drivers/net/bna/bna_hw_defs.h
 create mode 100644 drivers/net/bna/bna_tx_rx.c
 delete mode 100644 drivers/net/bna/bna_txrx.c



* [PATCH 1/8] bna: MSGQ Implementation
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 2/8] bna: Introduce ENET as New Driver and FW Interface Rasesh Mody
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Currently modules communicate with the FW using 32-byte command and
   response registers, which limits the size of the command and response
   messages exchanged with the FW to 32 bytes. We need a mechanism to
   exchange commands and responses with the FW that exceed 32 bytes.

 - The MSGQ implementation provides that facility. It removes the assumption
   that the command/response queue size is precisely calculated to
   accommodate all concurrent FW commands/responses. The queue depth is now
   variable, defined by a macro. A waiting command list holds commands when
   there is no room in the command queue. A callback per command entry
   notifies the posting module once space becomes available and the command
   is finally posted to the queue (see the usage sketch below). Module/object
   information is embedded in the response for tracking purposes.
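
A minimal usage sketch of the new MSGQ API (illustration only, not part of
the patch): the my_cmd_msg/my_module structures and the MY_MSG_CLASS and
MY_MSG_OPCODE identifiers are made-up placeholders, while the bfa_msgq_*()
and bfi_msgq_*() names are the ones this patch introduces.

/*
 * Hypothetical MSGQ client.  Placeholder names: my_cmd_msg, my_module,
 * MY_MSG_CLASS, MY_MSG_OPCODE.
 */
struct my_cmd_msg {
	struct bfi_msgq_mhdr	mh;
	u32			payload[32];	/* larger than the 32-byte mailbox */
};

struct my_module {
	struct bfa_msgq			*msgq;
	struct my_cmd_msg		msg;
	struct bfa_msgq_cmd_entry	cmd;
};

/* Invoked once the command has been copied into the command queue
 * (BFA_STATUS_OK), or with BFA_STATUS_FAILED if the queue is stopped
 * while the command is still on the pending list.
 */
static void
my_cmd_posted_cbfn(void *cbarg, enum bfa_status status)
{
}

static void
my_module_post_cmd(struct my_module *mod)
{
	bfi_msgq_mhdr_set(mod->msg.mh, MY_MSG_CLASS, MY_MSG_OPCODE, 0, 0);
	mod->msg.mh.num_entries =
		htons(bfi_msgq_num_cmd_entries(sizeof(mod->msg)));

	bfa_msgq_cmd_set(&mod->cmd, my_cmd_posted_cbfn, mod,
			sizeof(mod->msg), &mod->msg.mh);

	/* If the command queue is full, the entry is parked on pending_q
	 * and copied in (and the callback invoked) once the FW advances
	 * the consumer index.
	 */
	bfa_msgq_cmd_post(mod->msgq, &mod->cmd);
}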

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/Makefile   |    3 +-
 drivers/net/bna/bfa_ioc.c  |   14 +-
 drivers/net/bna/bfa_ioc.h  |    4 +-
 drivers/net/bna/bfa_msgq.c |  669 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bna/bfa_msgq.h |  130 +++++++++
 drivers/net/bna/bfi.h      |  101 +++++++
 drivers/net/bna/bna_ctrl.c |    6 +-
 7 files changed, 918 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/bna/bfa_msgq.c
 create mode 100644 drivers/net/bna/bfa_msgq.h

diff --git a/drivers/net/bna/Makefile b/drivers/net/bna/Makefile
index a5d604d..d501f52 100644
--- a/drivers/net/bna/Makefile
+++ b/drivers/net/bna/Makefile
@@ -6,6 +6,7 @@
 obj-$(CONFIG_BNA) += bna.o
 
 bna-objs := bnad.o bnad_ethtool.o bna_ctrl.o bna_txrx.o
-bna-objs += bfa_ioc.o bfa_ioc_ct.o bfa_cee.o cna_fwimg.o
+bna-objs += bfa_msgq.o bfa_ioc.o bfa_ioc_ct.o bfa_cee.o
+bna-objs += cna_fwimg.o
 
 EXTRA_CFLAGS := -Idrivers/net/bna
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index 3cdea65..2d5c4fd 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -1968,18 +1968,22 @@ bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
  * @param[in]	ioc	IOC instance
  * @param[i]	cmd	Mailbox command
  */
-void
-bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
+bool
+bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd,
+			bfa_mbox_cmd_cbfn_t cbfn, void *cbarg)
 {
 	struct bfa_ioc_mbox_mod *mod = &ioc->mbox_mod;
 	u32			stat;
 
+	cmd->cbfn = cbfn;
+	cmd->cbarg = cbarg;
+
 	/**
 	 * If a previous command is pending, queue new command
 	 */
 	if (!list_empty(&mod->cmd_q)) {
 		list_add_tail(&cmd->qe, &mod->cmd_q);
-		return;
+		return true;
 	}
 
 	/**
@@ -1988,7 +1992,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	stat = readl(ioc->ioc_regs.hfn_mbox_cmd);
 	if (stat) {
 		list_add_tail(&cmd->qe, &mod->cmd_q);
-		return;
+		return true;
 	}
 
 	/**
@@ -1996,7 +2000,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	 */
 	bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg));
 
-	return;
+	return false;
 }
 
 /**
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index bda866b..33ba5f4 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -253,7 +253,9 @@ struct bfa_ioc_hwif {
 /**
  * IOC mailbox interface
  */
-void bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd);
+bool bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc,
+			struct bfa_mbox_cmd *cmd,
+			bfa_mbox_cmd_cbfn_t cbfn, void *cbarg);
 void bfa_nw_ioc_mbox_isr(struct bfa_ioc *ioc);
 void bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
 		bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg);
diff --git a/drivers/net/bna/bfa_msgq.c b/drivers/net/bna/bfa_msgq.c
new file mode 100644
index 0000000..ed52187
--- /dev/null
+++ b/drivers/net/bna/bfa_msgq.c
@@ -0,0 +1,669 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfa_msgq.c MSGQ module source file.
+ */
+
+#include "bfi.h"
+#include "bfa_msgq.h"
+#include "bfa_ioc.h"
+
+#define call_cmdq_ent_cbfn(_cmdq_ent, _status)				\
+{									\
+	bfa_msgq_cmdcbfn_t cbfn;					\
+	void *cbarg;							\
+	cbfn = (_cmdq_ent)->cbfn;					\
+	cbarg = (_cmdq_ent)->cbarg;					\
+	(_cmdq_ent)->cbfn = NULL;					\
+	(_cmdq_ent)->cbarg = NULL;					\
+	if (cbfn) {							\
+		cbfn(cbarg, (_status));					\
+	}								\
+}
+
+static void bfa_msgq_cmdq_dbell(struct bfa_msgq_cmdq *cmdq);
+static void bfa_msgq_cmdq_copy_rsp(struct bfa_msgq_cmdq *cmdq);
+
+enum cmdq_event {
+	CMDQ_E_START			= 1,
+	CMDQ_E_STOP			= 2,
+	CMDQ_E_FAIL			= 3,
+	CMDQ_E_POST			= 4,
+	CMDQ_E_INIT_RESP		= 5,
+	CMDQ_E_DB_READY			= 6,
+};
+
+bfa_fsm_state_decl(cmdq, stopped, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, init_wait, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, ready, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, dbell_wait, struct bfa_msgq_cmdq,
+			enum cmdq_event);
+
+static void
+cmdq_sm_stopped_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfa_msgq_cmd_entry *cmdq_ent;
+
+	cmdq->producer_index = 0;
+	cmdq->consumer_index = 0;
+	cmdq->flags = 0;
+	cmdq->token = 0;
+	cmdq->offset = 0;
+	cmdq->bytes_to_copy = 0;
+	while (!list_empty(&cmdq->pending_q)) {
+		bfa_q_deq(&cmdq->pending_q, &cmdq_ent);
+		bfa_q_qe_init(&cmdq_ent->qe);
+		call_cmdq_ent_cbfn(cmdq_ent, BFA_STATUS_FAILED);
+	}
+}
+
+static void
+cmdq_sm_stopped(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_START:
+		bfa_fsm_set_state(cmdq, cmdq_sm_init_wait);
+		break;
+
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		/* No-op */
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_init_wait_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	bfa_wc_down(&cmdq->msgq->init_wc);
+}
+
+static void
+cmdq_sm_init_wait(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	case CMDQ_E_INIT_RESP:
+		if (cmdq->flags & BFA_MSGQ_CMDQ_F_DB_UPDATE) {
+			cmdq->flags &= ~BFA_MSGQ_CMDQ_F_DB_UPDATE;
+			bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(cmdq, cmdq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_ready_entry(struct bfa_msgq_cmdq *cmdq)
+{
+}
+
+static void
+cmdq_sm_ready(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_dbell_wait_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	bfa_msgq_cmdq_dbell(cmdq);
+}
+
+static void
+cmdq_sm_dbell_wait(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	case CMDQ_E_DB_READY:
+		if (cmdq->flags & BFA_MSGQ_CMDQ_F_DB_UPDATE) {
+			cmdq->flags &= ~BFA_MSGQ_CMDQ_F_DB_UPDATE;
+			bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(cmdq, cmdq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bfa_msgq_cmdq_dbell_ready(void *arg)
+{
+	struct bfa_msgq_cmdq *cmdq = (struct bfa_msgq_cmdq *)arg;
+	bfa_fsm_send_event(cmdq, CMDQ_E_DB_READY);
+}
+
+static void
+bfa_msgq_cmdq_dbell(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfi_msgq_h2i_db *dbell =
+		(struct bfi_msgq_h2i_db *)(&cmdq->dbell_mb.msg[0]);
+
+	memset(dbell, 0, sizeof(struct bfi_msgq_h2i_db));
+	bfi_h2i_set(dbell->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_DOORBELL_PI, 0);
+	dbell->mh.mtag.i2htok = 0;
+	dbell->idx.cmdq_pi = htons(cmdq->producer_index);
+
+	if (!bfa_nw_ioc_mbox_queue(cmdq->msgq->ioc, &cmdq->dbell_mb,
+				bfa_msgq_cmdq_dbell_ready, cmdq)) {
+		bfa_msgq_cmdq_dbell_ready(cmdq);
+	}
+}
+
+static void
+__cmd_copy(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq_cmd_entry *cmd)
+{
+	size_t len = cmd->msg_size;
+	int num_entries = 0;
+	size_t to_copy;
+	u8 *src, *dst;
+
+	src = (u8 *)cmd->msg_hdr;
+	dst = (u8 *)cmdq->addr.kva;
+	dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
+
+	while (len) {
+		to_copy = (len < BFI_MSGQ_CMD_ENTRY_SIZE) ?
+				len : BFI_MSGQ_CMD_ENTRY_SIZE;
+		memcpy(dst, src, to_copy);
+		len -= to_copy;
+		src += BFI_MSGQ_CMD_ENTRY_SIZE;
+		BFA_MSGQ_INDX_ADD(cmdq->producer_index, 1, cmdq->depth);
+		dst = (u8 *)cmdq->addr.kva;
+		dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
+		num_entries++;
+	}
+
+}
+
+static void
+bfa_msgq_cmdq_ci_update(struct bfa_msgq_cmdq *cmdq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_db *dbell = (struct bfi_msgq_i2h_db *)mb;
+	struct bfa_msgq_cmd_entry *cmd;
+	int posted = 0;
+
+	cmdq->consumer_index = ntohs(dbell->idx.cmdq_ci);
+
+	/* Walk through pending list to see if the command can be posted */
+	while (!list_empty(&cmdq->pending_q)) {
+		cmd =
+		(struct bfa_msgq_cmd_entry *)bfa_q_first(&cmdq->pending_q);
+		if (ntohs(cmd->msg_hdr->num_entries) <=
+			BFA_MSGQ_FREE_CNT(cmdq)) {
+			list_del(&cmd->qe);
+			__cmd_copy(cmdq, cmd);
+			posted = 1;
+			call_cmdq_ent_cbfn(cmd, BFA_STATUS_OK);
+		} else {
+			break;
+		}
+	}
+
+	if (posted)
+		bfa_fsm_send_event(cmdq, CMDQ_E_POST);
+}
+
+static void
+bfa_msgq_cmdq_copy_next(void *arg)
+{
+	struct bfa_msgq_cmdq *cmdq = (struct bfa_msgq_cmdq *)arg;
+
+	if (cmdq->bytes_to_copy)
+		bfa_msgq_cmdq_copy_rsp(cmdq);
+}
+
+static void
+bfa_msgq_cmdq_copy_req(struct bfa_msgq_cmdq *cmdq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_cmdq_copy_req *req =
+		(struct bfi_msgq_i2h_cmdq_copy_req *)mb;
+
+	cmdq->token = 0;
+	cmdq->offset = ntohs(req->offset);
+	cmdq->bytes_to_copy = ntohs(req->len);
+	bfa_msgq_cmdq_copy_rsp(cmdq);
+}
+
+static void
+bfa_msgq_cmdq_copy_rsp(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfi_msgq_h2i_cmdq_copy_rsp *rsp =
+		(struct bfi_msgq_h2i_cmdq_copy_rsp *)&cmdq->copy_mb.msg[0];
+	int copied;
+	u8 *addr = (u8 *)cmdq->addr.kva;
+
+	memset(rsp, 0, sizeof(struct bfi_msgq_h2i_cmdq_copy_rsp));
+	bfi_h2i_set(rsp->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_CMDQ_COPY_RSP, 0);
+	rsp->mh.mtag.i2htok = htons(cmdq->token);
+	copied = (cmdq->bytes_to_copy >= BFI_CMD_COPY_SZ) ? BFI_CMD_COPY_SZ :
+		cmdq->bytes_to_copy;
+	addr += cmdq->offset;
+	memcpy(rsp->data, addr, copied);
+
+	cmdq->token++;
+	cmdq->offset += copied;
+	cmdq->bytes_to_copy -= copied;
+
+	if (!bfa_nw_ioc_mbox_queue(cmdq->msgq->ioc, &cmdq->copy_mb,
+				bfa_msgq_cmdq_copy_next, cmdq)) {
+		bfa_msgq_cmdq_copy_next(cmdq);
+	}
+}
+
+static void
+bfa_msgq_cmdq_attach(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq *msgq)
+{
+	cmdq->depth = BFA_MSGQ_CMDQ_NUM_ENTRY;
+	INIT_LIST_HEAD(&cmdq->pending_q);
+	cmdq->msgq = msgq;
+	bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+}
+
+static void bfa_msgq_rspq_dbell(struct bfa_msgq_rspq *rspq);
+
+enum rspq_event {
+	RSPQ_E_START			= 1,
+	RSPQ_E_STOP			= 2,
+	RSPQ_E_FAIL			= 3,
+	RSPQ_E_RESP			= 4,
+	RSPQ_E_INIT_RESP		= 5,
+	RSPQ_E_DB_READY			= 6,
+};
+
+bfa_fsm_state_decl(rspq, stopped, struct bfa_msgq_rspq, enum rspq_event);
+bfa_fsm_state_decl(rspq, init_wait, struct bfa_msgq_rspq,
+			enum rspq_event);
+bfa_fsm_state_decl(rspq, ready, struct bfa_msgq_rspq, enum rspq_event);
+bfa_fsm_state_decl(rspq, dbell_wait, struct bfa_msgq_rspq,
+			enum rspq_event);
+
+static void
+rspq_sm_stopped_entry(struct bfa_msgq_rspq *rspq)
+{
+	rspq->producer_index = 0;
+	rspq->consumer_index = 0;
+	rspq->flags = 0;
+}
+
+static void
+rspq_sm_stopped(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_START:
+		bfa_fsm_set_state(rspq, rspq_sm_init_wait);
+		break;
+
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_init_wait_entry(struct bfa_msgq_rspq *rspq)
+{
+	bfa_wc_down(&rspq->msgq->init_wc);
+}
+
+static void
+rspq_sm_init_wait(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_FAIL:
+	case RSPQ_E_STOP:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_INIT_RESP:
+		bfa_fsm_set_state(rspq, rspq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_ready_entry(struct bfa_msgq_rspq *rspq)
+{
+}
+
+static void
+rspq_sm_ready(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_RESP:
+		bfa_fsm_set_state(rspq, rspq_sm_dbell_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_dbell_wait_entry(struct bfa_msgq_rspq *rspq)
+{
+	if (!bfa_nw_ioc_is_disabled(rspq->msgq->ioc))
+		bfa_msgq_rspq_dbell(rspq);
+}
+
+static void
+rspq_sm_dbell_wait(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_RESP:
+		rspq->flags |= BFA_MSGQ_RSPQ_F_DB_UPDATE;
+		break;
+
+	case RSPQ_E_DB_READY:
+		if (rspq->flags & BFA_MSGQ_RSPQ_F_DB_UPDATE) {
+			rspq->flags &= ~BFA_MSGQ_RSPQ_F_DB_UPDATE;
+			bfa_fsm_set_state(rspq, rspq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(rspq, rspq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bfa_msgq_rspq_dbell_ready(void *arg)
+{
+	struct bfa_msgq_rspq *rspq = (struct bfa_msgq_rspq *)arg;
+	bfa_fsm_send_event(rspq, RSPQ_E_DB_READY);
+}
+
+static void
+bfa_msgq_rspq_dbell(struct bfa_msgq_rspq *rspq)
+{
+	struct bfi_msgq_h2i_db *dbell =
+		(struct bfi_msgq_h2i_db *)(&rspq->dbell_mb.msg[0]);
+
+	memset(dbell, 0, sizeof(struct bfi_msgq_h2i_db));
+	bfi_h2i_set(dbell->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_DOORBELL_CI, 0);
+	dbell->mh.mtag.i2htok = 0;
+	dbell->idx.rspq_ci = htons(rspq->consumer_index);
+
+	if (!bfa_nw_ioc_mbox_queue(rspq->msgq->ioc, &rspq->dbell_mb,
+				bfa_msgq_rspq_dbell_ready, rspq)) {
+		bfa_msgq_rspq_dbell_ready(rspq);
+	}
+}
+
+static void
+bfa_msgq_rspq_pi_update(struct bfa_msgq_rspq *rspq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_db *dbell = (struct bfi_msgq_i2h_db *)mb;
+	struct bfi_msgq_mhdr *msghdr;
+	int num_entries;
+	int mc;
+	u8 *rspq_qe;
+
+	rspq->producer_index = ntohs(dbell->idx.rspq_pi);
+
+	while (rspq->consumer_index != rspq->producer_index) {
+		rspq_qe = (u8 *)rspq->addr.kva;
+		rspq_qe += (rspq->consumer_index * BFI_MSGQ_RSP_ENTRY_SIZE);
+		msghdr = (struct bfi_msgq_mhdr *)rspq_qe;
+
+		mc = msghdr->msg_class;
+		num_entries = ntohs(msghdr->num_entries);
+
+		if ((mc > BFI_MC_MAX) || (rspq->rsphdlr[mc].cbfn == NULL))
+			break;
+
+		(rspq->rsphdlr[mc].cbfn)(rspq->rsphdlr[mc].cbarg, msghdr);
+
+		BFA_MSGQ_INDX_ADD(rspq->consumer_index, num_entries,
+				rspq->depth);
+	}
+
+	bfa_fsm_send_event(rspq, RSPQ_E_RESP);
+}
+
+static void
+bfa_msgq_rspq_attach(struct bfa_msgq_rspq *rspq, struct bfa_msgq *msgq)
+{
+	rspq->depth = BFA_MSGQ_RSPQ_NUM_ENTRY;
+	rspq->msgq = msgq;
+	bfa_fsm_set_state(rspq, rspq_sm_stopped);
+}
+
+static void
+bfa_msgq_init_rsp(struct bfa_msgq *msgq,
+		 struct bfi_mbmsg *mb)
+{
+	bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_INIT_RESP);
+	bfa_fsm_send_event(&msgq->rspq, RSPQ_E_INIT_RESP);
+}
+
+static void
+bfa_msgq_init(void *arg)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)arg;
+	struct bfi_msgq_cfg_req *msgq_cfg =
+		(struct bfi_msgq_cfg_req *)&msgq->init_mb.msg[0];
+
+	memset(msgq_cfg, 0, sizeof(struct bfi_msgq_cfg_req));
+	bfi_h2i_set(msgq_cfg->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_INIT_REQ, 0);
+	msgq_cfg->mh.mtag.i2htok = 0;
+
+	bfa_dma_be_addr_set(msgq_cfg->cmdq.addr, msgq->cmdq.addr.pa);
+	msgq_cfg->cmdq.q_depth = htons(msgq->cmdq.depth);
+	bfa_dma_be_addr_set(msgq_cfg->rspq.addr, msgq->rspq.addr.pa);
+	msgq_cfg->rspq.q_depth = htons(msgq->rspq.depth);
+
+	bfa_nw_ioc_mbox_queue(msgq->ioc, &msgq->init_mb, NULL, NULL);
+}
+
+static void
+bfa_msgq_isr(void *cbarg, struct bfi_mbmsg *msg)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)cbarg;
+
+	switch (msg->mh.msg_id) {
+	case BFI_MSGQ_I2H_INIT_RSP:
+		bfa_msgq_init_rsp(msgq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_DOORBELL_PI:
+		bfa_msgq_rspq_pi_update(&msgq->rspq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_DOORBELL_CI:
+		bfa_msgq_cmdq_ci_update(&msgq->cmdq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_CMDQ_COPY_REQ:
+		bfa_msgq_cmdq_copy_req(&msgq->cmdq, msg);
+		break;
+
+	default:
+		BUG_ON(1);
+	}
+}
+
+static void
+bfa_msgq_notify(void *cbarg, enum bfa_ioc_event event)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)cbarg;
+
+	switch (event) {
+	case BFA_IOC_E_ENABLED:
+		bfa_wc_init(&msgq->init_wc, bfa_msgq_init, msgq);
+		bfa_wc_up(&msgq->init_wc);
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_START);
+		bfa_wc_up(&msgq->init_wc);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_START);
+		bfa_wc_wait(&msgq->init_wc);
+		break;
+
+	case BFA_IOC_E_DISABLED:
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_STOP);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_STOP);
+		break;
+
+	case BFA_IOC_E_FAILED:
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_FAIL);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_FAIL);
+		break;
+
+	default:
+		break;
+	}
+}
+
+u32
+bfa_msgq_meminfo(void)
+{
+	return roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ) +
+		roundup(BFA_MSGQ_RSPQ_SIZE, BFA_DMA_ALIGN_SZ);
+}
+
+void
+bfa_msgq_memclaim(struct bfa_msgq *msgq, u8 *kva, u64 pa)
+{
+	msgq->cmdq.addr.kva = kva;
+	msgq->cmdq.addr.pa  = pa;
+
+	kva += roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ);
+	pa += roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ);
+
+	msgq->rspq.addr.kva = kva;
+	msgq->rspq.addr.pa = pa;
+}
+
+void
+bfa_msgq_attach(struct bfa_msgq *msgq, struct bfa_ioc *ioc)
+{
+	msgq->ioc    = ioc;
+
+	bfa_msgq_cmdq_attach(&msgq->cmdq, msgq);
+	bfa_msgq_rspq_attach(&msgq->rspq, msgq);
+
+	bfa_nw_ioc_mbox_regisr(msgq->ioc, BFI_MC_MSGQ, bfa_msgq_isr, msgq);
+	bfa_q_qe_init(&msgq->ioc_notify);
+	bfa_ioc_notify_init(&msgq->ioc_notify, bfa_msgq_notify, msgq);
+	bfa_nw_ioc_notify_register(msgq->ioc, &msgq->ioc_notify);
+}
+
+void
+bfa_msgq_regisr(struct bfa_msgq *msgq, enum bfi_mclass mc,
+		bfa_msgq_mcfunc_t cbfn, void *cbarg)
+{
+	msgq->rspq.rsphdlr[mc].cbfn	= cbfn;
+	msgq->rspq.rsphdlr[mc].cbarg	= cbarg;
+}
+
+void
+bfa_msgq_cmd_post(struct bfa_msgq *msgq,  struct bfa_msgq_cmd_entry *cmd)
+{
+	if (ntohs(cmd->msg_hdr->num_entries) <=
+		BFA_MSGQ_FREE_CNT(&msgq->cmdq)) {
+		__cmd_copy(&msgq->cmdq, cmd);
+		call_cmdq_ent_cbfn(cmd, BFA_STATUS_OK);
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_POST);
+	} else {
+		list_add_tail(&cmd->qe, &msgq->cmdq.pending_q);
+	}
+}
+
+void
+bfa_msgq_rsp_copy(struct bfa_msgq *msgq, u8 *buf, size_t buf_len)
+{
+	struct bfa_msgq_rspq *rspq = &msgq->rspq;
+	size_t len = buf_len;
+	size_t to_copy;
+	int ci;
+	u8 *src, *dst;
+
+	ci = rspq->consumer_index;
+	src = (u8 *)rspq->addr.kva;
+	src += (ci * BFI_MSGQ_RSP_ENTRY_SIZE);
+	dst = buf;
+
+	while (len) {
+		to_copy = (len < BFI_MSGQ_RSP_ENTRY_SIZE) ?
+				len : BFI_MSGQ_RSP_ENTRY_SIZE;
+		memcpy(dst, src, to_copy);
+		len -= to_copy;
+		dst += BFI_MSGQ_RSP_ENTRY_SIZE;
+		BFA_MSGQ_INDX_ADD(ci, 1, rspq->depth);
+		src = (u8 *)rspq->addr.kva;
+		src += (ci * BFI_MSGQ_RSP_ENTRY_SIZE);
+	}
+}
diff --git a/drivers/net/bna/bfa_msgq.h b/drivers/net/bna/bfa_msgq.h
new file mode 100644
index 0000000..a6a565a
--- /dev/null
+++ b/drivers/net/bna/bfa_msgq.h
@@ -0,0 +1,130 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+#ifndef __BFA_MSGQ_H__
+#define __BFA_MSGQ_H__
+
+#include "bfa_defs.h"
+#include "bfi.h"
+#include "bfa_ioc.h"
+#include "bfa_cs.h"
+
+#define BFA_MSGQ_FREE_CNT(_q)						\
+	(((_q)->consumer_index - (_q)->producer_index - 1) & ((_q)->depth - 1))
+
+#define BFA_MSGQ_INDX_ADD(_q_indx, _qe_num, _q_depth)			\
+	((_q_indx) = (((_q_indx) + (_qe_num)) & ((_q_depth) - 1)))
+
+#define BFA_MSGQ_CMDQ_NUM_ENTRY		128
+#define BFA_MSGQ_CMDQ_SIZE						\
+	(BFI_MSGQ_CMD_ENTRY_SIZE * BFA_MSGQ_CMDQ_NUM_ENTRY)
+
+#define BFA_MSGQ_RSPQ_NUM_ENTRY		128
+#define BFA_MSGQ_RSPQ_SIZE						\
+	(BFI_MSGQ_RSP_ENTRY_SIZE * BFA_MSGQ_RSPQ_NUM_ENTRY)
+
+#define bfa_msgq_cmd_set(_cmd, _cbfn, _cbarg, _msg_size, _msg_hdr)	\
+do {									\
+	(_cmd)->cbfn = (_cbfn);						\
+	(_cmd)->cbarg = (_cbarg);					\
+	(_cmd)->msg_size = (_msg_size);					\
+	(_cmd)->msg_hdr = (_msg_hdr);					\
+} while (0)
+
+struct bfa_msgq;
+
+typedef void (*bfa_msgq_cmdcbfn_t)(void *cbarg, enum bfa_status status);
+
+struct bfa_msgq_cmd_entry {
+	struct list_head				qe;
+	bfa_msgq_cmdcbfn_t		cbfn;
+	void				*cbarg;
+	size_t				msg_size;
+	struct bfi_msgq_mhdr *msg_hdr;
+};
+
+enum bfa_msgq_cmdq_flags {
+	BFA_MSGQ_CMDQ_F_DB_UPDATE	= 1,
+};
+
+struct bfa_msgq_cmdq {
+	bfa_fsm_t			fsm;
+	enum bfa_msgq_cmdq_flags flags;
+
+	u16			producer_index;
+	u16			consumer_index;
+	u16			depth; /* FW Q depth is 16 bits */
+	struct bfa_dma addr;
+	struct bfa_mbox_cmd dbell_mb;
+
+	u16			token;
+	int				offset;
+	int				bytes_to_copy;
+	struct bfa_mbox_cmd copy_mb;
+
+	struct list_head		pending_q; /* pending command queue */
+
+	struct bfa_msgq *msgq;
+};
+
+enum bfa_msgq_rspq_flags {
+	BFA_MSGQ_RSPQ_F_DB_UPDATE	= 1,
+};
+
+typedef void (*bfa_msgq_mcfunc_t)(void *cbarg, struct bfi_msgq_mhdr *mhdr);
+
+struct bfa_msgq_rspq {
+	bfa_fsm_t			fsm;
+	enum bfa_msgq_rspq_flags flags;
+
+	u16			producer_index;
+	u16			consumer_index;
+	u16			depth; /* FW Q depth is 16 bits */
+	struct bfa_dma addr;
+	struct bfa_mbox_cmd dbell_mb;
+
+	int				nmclass;
+	struct {
+		bfa_msgq_mcfunc_t	cbfn;
+		void			*cbarg;
+	} rsphdlr[BFI_MC_MAX];
+
+	struct bfa_msgq *msgq;
+};
+
+struct bfa_msgq {
+	struct bfa_msgq_cmdq cmdq;
+	struct bfa_msgq_rspq rspq;
+
+	struct bfa_wc			init_wc;
+	struct bfa_mbox_cmd init_mb;
+
+	struct bfa_ioc_notify ioc_notify;
+	struct bfa_ioc *ioc;
+};
+
+u32 bfa_msgq_meminfo(void);
+void bfa_msgq_memclaim(struct bfa_msgq *msgq, u8 *kva, u64 pa);
+void bfa_msgq_attach(struct bfa_msgq *msgq, struct bfa_ioc *ioc);
+void bfa_msgq_regisr(struct bfa_msgq *msgq, enum bfi_mclass mc,
+		     bfa_msgq_mcfunc_t cbfn, void *cbarg);
+void bfa_msgq_cmd_post(struct bfa_msgq *msgq,
+		       struct bfa_msgq_cmd_entry *cmd);
+void bfa_msgq_rsp_copy(struct bfa_msgq *msgq, u8 *buf, size_t buf_len);
+
+#endif
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 088211c..6a53183 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -192,6 +192,8 @@ enum bfi_mclass {
 
 #define BFI_BOOT_LOADER_OS		0
 
+#define BFI_FWBOOT_ENV_OS		0
+
 #define BFI_BOOT_MEMTEST_RES_ADDR   0x900
 #define BFI_BOOT_MEMTEST_RES_SIG    0xA0A1A2A3
 
@@ -395,6 +397,105 @@ union bfi_ioc_i2h_msg_u {
 	u32			mboxmsg[BFI_IOC_MSGSZ];
 };
 
+/**
+ *----------------------------------------------------------------------
+ *				MSGQ
+ *----------------------------------------------------------------------
+ */
+
+enum bfi_msgq_h2i_msgs {
+	BFI_MSGQ_H2I_INIT_REQ	   = 1,
+	BFI_MSGQ_H2I_DOORBELL_PI	= 2,
+	BFI_MSGQ_H2I_DOORBELL_CI	= 3,
+	BFI_MSGQ_H2I_CMDQ_COPY_RSP      = 4,
+};
+
+enum bfi_msgq_i2h_msgs {
+	BFI_MSGQ_I2H_INIT_RSP	   = BFA_I2HM(BFI_MSGQ_H2I_INIT_REQ),
+	BFI_MSGQ_I2H_DOORBELL_PI	= BFA_I2HM(BFI_MSGQ_H2I_DOORBELL_PI),
+	BFI_MSGQ_I2H_DOORBELL_CI	= BFA_I2HM(BFI_MSGQ_H2I_DOORBELL_CI),
+	BFI_MSGQ_I2H_CMDQ_COPY_REQ      = BFA_I2HM(BFI_MSGQ_H2I_CMDQ_COPY_RSP),
+};
+
+/* Messages (commands/responses/AENs) will have the following header */
+struct bfi_msgq_mhdr {
+	u8	msg_class;
+	u8	msg_id;
+	u16	msg_token;
+	u16	num_entries;
+	u8	enet_id;
+	u8	rsvd[1];
+};
+
+#define bfi_msgq_mhdr_set(_mh, _mc, _mid, _tok, _enet_id) do {	\
+	(_mh).msg_class	 = (_mc);	\
+	(_mh).msg_id	    = (_mid);       \
+	(_mh).msg_token	 = (_tok);       \
+	(_mh).enet_id	   = (_enet_id);   \
+} while (0)
+
+/*
+ * Mailbox  for messaging interface
+ */
+#define BFI_MSGQ_CMD_ENTRY_SIZE	 (64)    /* TBD */
+#define BFI_MSGQ_RSP_ENTRY_SIZE	 (64)    /* TBD */
+
+#define bfi_msgq_num_cmd_entries(_size)				 \
+	(((_size) + BFI_MSGQ_CMD_ENTRY_SIZE - 1) / BFI_MSGQ_CMD_ENTRY_SIZE)
+
+struct bfi_msgq {
+	union bfi_addr_u addr;
+	u16 q_depth;     /* Total num of entries in the queue */
+	u8 rsvd[2];
+};
+
+/* BFI_ENET_MSGQ_CFG_REQ TBD init or cfg? */
+struct bfi_msgq_cfg_req {
+	struct bfi_mhdr mh;
+	struct bfi_msgq cmdq;
+	struct bfi_msgq rspq;
+};
+
+/* BFI_ENET_MSGQ_CFG_RSP */
+struct bfi_msgq_cfg_rsp {
+	struct bfi_mhdr mh;
+	u8 cmd_status;
+	u8 rsvd[3];
+};
+
+/* BFI_MSGQ_H2I_DOORBELL */
+struct bfi_msgq_h2i_db {
+	struct bfi_mhdr mh;
+	union {
+		u16 cmdq_pi;
+		u16 rspq_ci;
+	} idx;
+};
+
+/* BFI_MSGQ_I2H_DOORBELL */
+struct bfi_msgq_i2h_db {
+	struct bfi_mhdr mh;
+	union {
+		u16 rspq_pi;
+		u16 cmdq_ci;
+	} idx;
+};
+
+#define BFI_CMD_COPY_SZ 28
+
+/* BFI_MSGQ_H2I_CMD_COPY_RSP */
+struct bfi_msgq_h2i_cmdq_copy_rsp {
+	struct bfi_mhdr mh;
+	u8	      data[BFI_CMD_COPY_SZ];
+};
+
+/* BFI_MSGQ_I2H_CMD_COPY_REQ */
+struct bfi_msgq_i2h_cmdq_copy_req {
+	struct bfi_mhdr mh;
+	u16     offset;
+	u16     len;
+};
+
 #pragma pack()
 
 #endif /* __BFI_H__ */
diff --git a/drivers/net/bna/bna_ctrl.c b/drivers/net/bna/bna_ctrl.c
index cb2594c..7d95517 100644
--- a/drivers/net/bna/bna_ctrl.c
+++ b/drivers/net/bna/bna_ctrl.c
@@ -183,7 +183,8 @@ bna_ll_isr(void *llarg, struct bfi_mbmsg *msg)
 			if (to_post) {
 				mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
 				bfa_nw_ioc_mbox_queue(&bna->device.ioc,
-							&mb_qe->cmd);
+							&mb_qe->cmd, NULL,
+							NULL);
 			}
 		} else {
 			snprintf(message, BNA_MESSAGE_SIZE,
@@ -234,7 +235,8 @@ bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe)
 	bna->mbox_mod.msg_pending++;
 	if (bna->mbox_mod.state == BNA_MBOX_FREE) {
 		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd);
+		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd,
+					NULL, NULL);
 		bna->mbox_mod.state = BNA_MBOX_POSTED;
 	} else {
 		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-- 
1.7.1



* [PATCH 2/8] bna: Introduce ENET as New Driver and FW Interface
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
  2011-08-09  2:21 ` [PATCH 1/8] bna: MSGQ Implementation Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 3/8] bna: Tx and Rx Redesign Rasesh Mody
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - This patch contains the messages, opcodes and structure formats for the
   messages and responses exchanged between the driver and the FW. In
   addition, it contains the state machine implementations for the Ethport,
   Enet and IOCEth objects (see the FSM sketch below).
 - The Ethport object is responsible for receiving link state events and
   sending port enable/disable commands to the FW.
 - The Enet object is responsible for synchronizing initialization/teardown
   of the Tx and Rx datapath configuration.
 - The IOCEth object is responsible for init/un-init of the IO Controller in
   the adapter which runs the FW.
 - This patch also contains code for initialization and resource assignment
   for the Ethport, Enet, IOCEth, Tx and Rx objects.
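
All three objects are driven with the same bfa_fsm_*() helpers used in the
MSGQ patch. The following two-state machine is only a sketch of that
convention ("demo" and its events are placeholders, not code from the
patch); the real Ethport, Enet and IOCEth machines follow the same pattern
with more states.

enum demo_event {
	DEMO_E_START	= 1,
	DEMO_E_STOP	= 2,
};

struct demo {
	bfa_fsm_t	fsm;
};

bfa_fsm_state_decl(demo, stopped, struct demo, enum demo_event);
bfa_fsm_state_decl(demo, started, struct demo, enum demo_event);

static void
demo_sm_stopped_entry(struct demo *demo)
{
}

static void
demo_sm_stopped(struct demo *demo, enum demo_event event)
{
	switch (event) {
	case DEMO_E_START:
		bfa_fsm_set_state(demo, demo_sm_started);
		break;

	case DEMO_E_STOP:
		/* No-op */
		break;

	default:
		bfa_sm_fault(event);
	}
}

static void
demo_sm_started_entry(struct demo *demo)
{
}

static void
demo_sm_started(struct demo *demo, enum demo_event event)
{
	switch (event) {
	case DEMO_E_STOP:
		bfa_fsm_set_state(demo, demo_sm_stopped);
		break;

	default:
		bfa_sm_fault(event);
	}
}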

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfi_enet.h |  901 +++++++++++++++++++
 drivers/net/bna/bna_enet.c | 2129 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3030 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/bna/bfi_enet.h
 create mode 100644 drivers/net/bna/bna_enet.c

diff --git a/drivers/net/bna/bfi_enet.h b/drivers/net/bna/bfi_enet.h
new file mode 100644
index 0000000..a90f1cf
--- /dev/null
+++ b/drivers/net/bna/bfi_enet.h
@@ -0,0 +1,901 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfi_enet.h BNA Hardware and Firmware Interface
+ */
+
+/**
+ * Skipping statistics collection to avoid clutter.
+ * Command is no longer needed:
+ *	MTU
+ *	TxQ Stop
+ *	RxQ Stop
+ *	RxF Enable/Disable
+ *
+ * HDS-off request is dynamic
+ * keep structures as multiple of 32-bit fields for alignment.
+ * All values must be written in big-endian.
+ */
+#ifndef __BFI_ENET_H__
+#define __BFI_ENET_H__
+
+#include "bfa_defs.h"
+#include "bfi.h"
+
+#pragma pack(1)
+
+#define BFI_ENET_CFG_MAX		32	/* Max resources per PF */
+
+#define BFI_ENET_TXQ_PRIO_MAX		8
+#define BFI_ENET_RX_QSET_MAX		16
+#define BFI_ENET_TXQ_WI_VECT_MAX	4
+
+#define BFI_ENET_VLAN_ID_MAX		4096
+#define BFI_ENET_VLAN_BLOCK_SIZE	512	/* in bits */
+#define BFI_ENET_VLAN_BLOCKS_MAX					\
+	(BFI_ENET_VLAN_ID_MAX / BFI_ENET_VLAN_BLOCK_SIZE)
+#define BFI_ENET_VLAN_WORD_SIZE		32	/* in bits */
+#define BFI_ENET_VLAN_WORDS_MAX						\
+	(BFI_ENET_VLAN_BLOCK_SIZE / BFI_ENET_VLAN_WORD_SIZE)
+
+#define BFI_ENET_RSS_RIT_MAX		64	/* entries */
+#define BFI_ENET_RSS_KEY_LEN		10	/* 32-bit words */
+
+union bfi_addr_be_u {
+	struct {
+		u32	addr_hi;	/* Most Significant 32-bits */
+		u32	addr_lo;	/* Least Significant 32-Bits */
+	} a32;
+};
+
+/**
+ *	T X   Q U E U E   D E F I N E S
+ */
+/* TxQ Vector (a.k.a. Tx-Buffer Descriptor) */
+/* TxQ Entry Opcodes */
+#define BFI_ENET_TXQ_WI_SEND		(0x402)	/* Single Frame Transmission */
+#define BFI_ENET_TXQ_WI_SEND_LSO	(0x403)	/* Multi-Frame Transmission */
+#define BFI_ENET_TXQ_WI_EXTENSION	(0x104)	/* Extension WI */
+
+/* TxQ Entry Control Flags */
+#define BFI_ENET_TXQ_WI_CF_FCOE_CRC	(1 << 8)
+#define BFI_ENET_TXQ_WI_CF_IPID_MODE	(1 << 5)
+#define BFI_ENET_TXQ_WI_CF_INS_PRIO	(1 << 4)
+#define BFI_ENET_TXQ_WI_CF_INS_VLAN	(1 << 3)
+#define BFI_ENET_TXQ_WI_CF_UDP_CKSUM	(1 << 2)
+#define BFI_ENET_TXQ_WI_CF_TCP_CKSUM	(1 << 1)
+#define BFI_ENET_TXQ_WI_CF_IP_CKSUM	(1 << 0)
+
+struct bfi_enet_txq_wi_base {
+	u8			reserved;
+	u8			num_vectors;	/* number of vectors present */
+	u16			opcode;
+			/* BFI_ENET_TXQ_WI_SEND or BFI_ENET_TXQ_WI_SEND_LSO */
+	u16			flags;		/* OR of all the flags */
+	u16			l4_hdr_size_n_offset;
+	u16			vlan_tag;
+	u16			lso_mss;	/* Only 14 LSB are valid */
+	u32			frame_length;	/* Only 24 LSB are valid */
+};
+
+struct bfi_enet_txq_wi_ext {
+	u16			reserved;
+	u16			opcode;		/* BFI_ENET_TXQ_WI_EXTENSION */
+	u32			reserved2[3];
+};
+
+struct bfi_enet_txq_wi_vector {			/* Tx Buffer Descriptor */
+	u16			reserved;
+	u16			length;		/* Only 14 LSB are valid */
+	union bfi_addr_be_u	addr;
+};
+
+/**
+ *  TxQ Entry Structure
+ *
+ */
+struct bfi_enet_txq_entry {
+	union {
+		struct bfi_enet_txq_wi_base	base;
+		struct bfi_enet_txq_wi_ext	ext;
+	} wi;
+	struct bfi_enet_txq_wi_vector vector[BFI_ENET_TXQ_WI_VECT_MAX];
+};
+
+#define wi_hdr		wi.base
+#define wi_ext_hdr	wi.ext
+
+#define BFI_ENET_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \
+		(((_hdr_size) << 10) | ((_offset) & 0x3FF))
+
+/**
+ *   R X   Q U E U E   D E F I N E S
+ */
+struct bfi_enet_rxq_entry {
+	union bfi_addr_be_u  rx_buffer;
+};
+
+/**
+ *   R X   C O M P L E T I O N   Q U E U E   D E F I N E S
+ */
+/* CQ Entry Flags */
+#define	BFI_ENET_CQ_EF_MAC_ERROR	(1 <<  0)
+#define	BFI_ENET_CQ_EF_FCS_ERROR	(1 <<  1)
+#define	BFI_ENET_CQ_EF_TOO_LONG		(1 <<  2)
+#define	BFI_ENET_CQ_EF_FC_CRC_OK	(1 <<  3)
+
+#define	BFI_ENET_CQ_EF_RSVD1		(1 <<  4)
+#define	BFI_ENET_CQ_EF_L4_CKSUM_OK	(1 <<  5)
+#define	BFI_ENET_CQ_EF_L3_CKSUM_OK	(1 <<  6)
+#define	BFI_ENET_CQ_EF_HDS_HEADER	(1 <<  7)
+
+#define	BFI_ENET_CQ_EF_UDP		(1 <<  8)
+#define	BFI_ENET_CQ_EF_TCP		(1 <<  9)
+#define	BFI_ENET_CQ_EF_IP_OPTIONS	(1 << 10)
+#define	BFI_ENET_CQ_EF_IPV6		(1 << 11)
+
+#define	BFI_ENET_CQ_EF_IPV4		(1 << 12)
+#define	BFI_ENET_CQ_EF_VLAN		(1 << 13)
+#define	BFI_ENET_CQ_EF_RSS		(1 << 14)
+#define	BFI_ENET_CQ_EF_RSVD2		(1 << 15)
+
+#define	BFI_ENET_CQ_EF_MCAST_MATCH	(1 << 16)
+#define	BFI_ENET_CQ_EF_MCAST		(1 << 17)
+#define BFI_ENET_CQ_EF_BCAST		(1 << 18)
+#define	BFI_ENET_CQ_EF_REMOTE		(1 << 19)
+
+#define	BFI_ENET_CQ_EF_LOCAL		(1 << 20)
+
+/* CQ Entry Structure */
+struct bfi_enet_cq_entry {
+	u32 flags;
+	u16	vlan_tag;
+	u16	length;
+	u32	rss_hash;
+	u8	valid;
+	u8	reserved1;
+	u8	reserved2;
+	u8	rxq_id;
+};
+
+/**
+ *   E N E T   C O N T R O L   P A T H   C O M M A N D S
+ */
+struct bfi_enet_q {
+	union bfi_addr_u	pg_tbl;
+	union bfi_addr_u	first_entry;
+	u16		pages;	/* # of pages */
+	u16		page_sz;
+};
+
+struct bfi_enet_txq {
+	struct bfi_enet_q	q;
+	u8			priority;
+	u8			rsvd[3];
+};
+
+struct bfi_enet_rxq {
+	struct bfi_enet_q	q;
+	u16		rx_buffer_size;
+	u16		rsvd;
+};
+
+struct bfi_enet_cq {
+	struct bfi_enet_q	q;
+};
+
+struct bfi_enet_ib_cfg {
+	u8		int_pkt_dma;
+	u8		int_enabled;
+	u8		int_pkt_enabled;
+	u8		continuous_coalescing;
+	u8		msix;
+	u8		rsvd[3];
+	u32	coalescing_timeout;
+	u32	inter_pkt_timeout;
+	u8		inter_pkt_count;
+	u8		rsvd1[3];
+};
+
+struct bfi_enet_ib {
+	union bfi_addr_u	index_addr;
+	union {
+		u16	msix_index;
+		u16	intx_bitmask;
+	} intr;
+	u16		rsvd;
+};
+
+/**
+ * ENET command messages
+ */
+enum bfi_enet_h2i_msgs {
+	/* Rx Commands */
+	BFI_ENET_H2I_RX_CFG_SET_REQ = 1,
+	BFI_ENET_H2I_RX_CFG_CLR_REQ = 2,
+
+	BFI_ENET_H2I_RIT_CFG_REQ = 3,
+	BFI_ENET_H2I_RSS_CFG_REQ = 4,
+	BFI_ENET_H2I_RSS_ENABLE_REQ = 5,
+	BFI_ENET_H2I_RX_PROMISCUOUS_REQ = 6,
+	BFI_ENET_H2I_RX_DEFAULT_REQ = 7,
+
+	BFI_ENET_H2I_MAC_UCAST_SET_REQ = 8,
+	BFI_ENET_H2I_MAC_UCAST_CLR_REQ = 9,
+	BFI_ENET_H2I_MAC_UCAST_ADD_REQ = 10,
+	BFI_ENET_H2I_MAC_UCAST_DEL_REQ = 11,
+
+	BFI_ENET_H2I_MAC_MCAST_ADD_REQ = 12,
+	BFI_ENET_H2I_MAC_MCAST_DEL_REQ = 13,
+	BFI_ENET_H2I_MAC_MCAST_FILTER_REQ = 14,
+
+	BFI_ENET_H2I_RX_VLAN_SET_REQ = 15,
+	BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ = 16,
+
+	/* Tx Commands */
+	BFI_ENET_H2I_TX_CFG_SET_REQ = 17,
+	BFI_ENET_H2I_TX_CFG_CLR_REQ = 18,
+
+	/* Port Commands */
+	BFI_ENET_H2I_PORT_ADMIN_UP_REQ = 19,
+	BFI_ENET_H2I_SET_PAUSE_REQ = 20,
+	BFI_ENET_H2I_DIAG_LOOPBACK_REQ = 21,
+
+	/* Get Attributes Command */
+	BFI_ENET_H2I_GET_ATTR_REQ = 22,
+
+	/*  Statistics Commands */
+	BFI_ENET_H2I_STATS_GET_REQ = 23,
+	BFI_ENET_H2I_STATS_CLR_REQ = 24,
+
+	BFI_ENET_H2I_WOL_MAGIC_REQ = 25,
+	BFI_ENET_H2I_WOL_FRAME_REQ = 26,
+
+	BFI_ENET_H2I_MAX = 27,
+};
+
+enum bfi_enet_i2h_msgs {
+	/* Rx Responses */
+	BFI_ENET_I2H_RX_CFG_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_CFG_SET_REQ),
+	BFI_ENET_I2H_RX_CFG_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_CFG_CLR_REQ),
+
+	BFI_ENET_I2H_RIT_CFG_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RIT_CFG_REQ),
+	BFI_ENET_I2H_RSS_CFG_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RSS_CFG_REQ),
+	BFI_ENET_I2H_RSS_ENABLE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RSS_ENABLE_REQ),
+	BFI_ENET_I2H_RX_PROMISCUOUS_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_PROMISCUOUS_REQ),
+	BFI_ENET_I2H_RX_DEFAULT_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_DEFAULT_REQ),
+
+	BFI_ENET_I2H_MAC_UCAST_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_SET_REQ),
+	BFI_ENET_I2H_MAC_UCAST_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_CLR_REQ),
+	BFI_ENET_I2H_MAC_UCAST_ADD_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_ADD_REQ),
+	BFI_ENET_I2H_MAC_UCAST_DEL_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_DEL_REQ),
+
+	BFI_ENET_I2H_MAC_MCAST_ADD_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_ADD_REQ),
+	BFI_ENET_I2H_MAC_MCAST_DEL_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_DEL_REQ),
+	BFI_ENET_I2H_MAC_MCAST_FILTER_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_FILTER_REQ),
+
+	BFI_ENET_I2H_RX_VLAN_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_VLAN_SET_REQ),
+
+	BFI_ENET_I2H_RX_VLAN_STRIP_ENABLE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ),
+
+	/* Tx Responses */
+	BFI_ENET_I2H_TX_CFG_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_TX_CFG_SET_REQ),
+	BFI_ENET_I2H_TX_CFG_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_TX_CFG_CLR_REQ),
+
+	/* Port Responses */
+	BFI_ENET_I2H_PORT_ADMIN_RSP =
+		BFA_I2HM(BFI_ENET_H2I_PORT_ADMIN_UP_REQ),
+
+	BFI_ENET_I2H_SET_PAUSE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_SET_PAUSE_REQ),
+	BFI_ENET_I2H_DIAG_LOOPBACK_RSP =
+		BFA_I2HM(BFI_ENET_H2I_DIAG_LOOPBACK_REQ),
+
+	/*  Attributes Response */
+	BFI_ENET_I2H_GET_ATTR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_GET_ATTR_REQ),
+
+	/* Statistics Responses */
+	BFI_ENET_I2H_STATS_GET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_STATS_GET_REQ),
+	BFI_ENET_I2H_STATS_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_STATS_CLR_REQ),
+
+	BFI_ENET_I2H_WOL_MAGIC_RSP =
+		BFA_I2HM(BFI_ENET_H2I_WOL_MAGIC_REQ),
+	BFI_ENET_I2H_WOL_FRAME_RSP =
+		BFA_I2HM(BFI_ENET_H2I_WOL_FRAME_REQ),
+
+	/* AENs */
+	BFI_ENET_I2H_LINK_DOWN_AEN = BFA_I2HM(BFI_ENET_H2I_MAX),
+	BFI_ENET_I2H_LINK_UP_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 1),
+
+	BFI_ENET_I2H_PORT_ENABLE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 2),
+	BFI_ENET_I2H_PORT_DISABLE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 3),
+
+	BFI_ENET_I2H_BW_UPDATE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 4),
+};
+
+/**
+ *  The following error codes can be returned by the enet commands
+ */
+enum bfi_enet_err {
+	BFI_ENET_CMD_OK		= 0,
+	BFI_ENET_CMD_FAIL	= 1,
+	BFI_ENET_CMD_DUP_ENTRY	= 2,	/* !< Duplicate entry in CAM */
+	BFI_ENET_CMD_CAM_FULL	= 3,	/* !< CAM is full */
+	BFI_ENET_CMD_NOT_OWNER	= 4,	/* !< Not permitted, b'cos not owner */
+	BFI_ENET_CMD_NOT_EXEC	= 5,	/* !< Was not sent to f/w at all */
+	BFI_ENET_CMD_WAITING	= 6,	/* !< Waiting for completion */
+	BFI_ENET_CMD_PORT_DISABLED = 7,	/* !< port in disabled state */
+};
+
+/**
+ * Generic Request
+ *
+ * bfi_enet_req is used by:
+ *	BFI_ENET_H2I_RX_CFG_CLR_REQ
+ *	BFI_ENET_H2I_TX_CFG_CLR_REQ
+ */
+struct bfi_enet_req {
+	struct bfi_msgq_mhdr mh;
+};
+
+/**
+ * Enable/Disable Request
+ *
+ * bfi_enet_enable_req is used by:
+ *	BFI_ENET_H2I_RSS_ENABLE_REQ	(enet_id must be zero)
+ *	BFI_ENET_H2I_RX_PROMISCUOUS_REQ (enet_id must be zero)
+ *	BFI_ENET_H2I_RX_DEFAULT_REQ	(enet_id must be zero)
+ *	BFI_ENET_H2I_RX_MAC_MCAST_FILTER_REQ
+ *	BFI_ENET_H2I_PORT_ADMIN_UP_REQ	(enet_id must be zero)
+ */
+struct bfi_enet_enable_req {
+	struct		bfi_msgq_mhdr mh;
+	u8		enable;		/* 1 = enable;  0 = disable */
+	u8		rsvd[3];
+};
+
+/**
+ * Generic Response
+ */
+struct bfi_enet_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;		/*!< if error see cmd_offset */
+	u8		rsvd;
+	u16		cmd_offset;	/*!< offset to invalid parameter */
+};
+
+/**
+ * GLOBAL CONFIGURATION
+ */
+
+/**
+ * bfi_enet_attr_req is used by:
+ *	BFI_ENET_H2I_GET_ATTR_REQ
+ */
+struct bfi_enet_attr_req {
+	struct bfi_msgq_mhdr	mh;
+};
+
+/**
+ * bfi_enet_attr_rsp is used by:
+ *	BFI_ENET_I2H_GET_ATTR_RSP
+ */
+struct bfi_enet_attr_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;		/*!< if error see cmd_offset */
+	u8		rsvd;
+	u16		cmd_offset;	/*!< offset to invalid parameter */
+	u32		max_cfg;
+	u32		max_ucmac;
+	u32		rit_size;
+};
+
+/**
+ * Tx Configuration
+ *
+ * bfi_enet_tx_cfg is used by:
+ *	BFI_ENET_H2I_TX_CFG_SET_REQ
+ */
+enum bfi_enet_tx_vlan_mode {
+	BFI_ENET_TX_VLAN_NOP	= 0,
+	BFI_ENET_TX_VLAN_INS	= 1,
+	BFI_ENET_TX_VLAN_WI	= 2,
+};
+
+struct bfi_enet_tx_cfg {
+	u8		vlan_mode;	/*!< processing mode */
+	u8		rsvd;
+	u16		vlan_id;
+	u8		admit_tagged_frame;
+	u8		apply_vlan_filter;
+	u8		add_to_vswitch;
+	u8		rsvd1[1];
+};
+
+struct bfi_enet_tx_cfg_req {
+	struct bfi_msgq_mhdr mh;
+	u8			num_queues;	/* # of Tx Queues */
+	u8			rsvd[3];
+
+	struct {
+		struct bfi_enet_txq	q;
+		struct bfi_enet_ib	ib;
+	} q_cfg[BFI_ENET_TXQ_PRIO_MAX];
+
+	struct bfi_enet_ib_cfg	ib_cfg;
+
+	struct bfi_enet_tx_cfg	tx_cfg;
+};
+
+struct bfi_enet_tx_cfg_rsp {
+	struct		bfi_msgq_mhdr mh;
+	u8		error;
+	u8		hw_id;		/* For debugging */
+	u8		rsvd[2];
+	struct {
+		u32	q_dbell;	/* PCI base address offset */
+		u32	i_dbell;	/* PCI base address offset */
+		u8	hw_qid;		/* For debugging */
+		u8	rsvd[3];
+	} q_handles[BFI_ENET_TXQ_PRIO_MAX];
+};
+
+/**
+ * Rx Configuration
+ *
+ * bfi_enet_rx_cfg is used by:
+ *	BFI_ENET_H2I_RX_CFG_SET_REQ
+ */
+enum bfi_enet_rxq_type {
+	BFI_ENET_RXQ_SINGLE		= 1,
+	BFI_ENET_RXQ_LARGE_SMALL	= 2,
+	BFI_ENET_RXQ_HDS		= 3,
+	BFI_ENET_RXQ_HDS_OPT_BASED	= 4,
+};
+
+enum bfi_enet_hds_type {
+	BFI_ENET_HDS_FORCED	= 0x01,
+	BFI_ENET_HDS_IPV6_UDP	= 0x02,
+	BFI_ENET_HDS_IPV6_TCP	= 0x04,
+	BFI_ENET_HDS_IPV4_TCP	= 0x08,
+	BFI_ENET_HDS_IPV4_UDP	= 0x10,
+};
+
+struct bfi_enet_rx_cfg {
+	u8		rxq_type;
+	u8		rsvd[3];
+
+	struct {
+		u8			max_header_size;
+		u8			force_offset;
+		u8			type;
+		u8			rsvd1;
+	} hds;
+
+	u8		multi_buffer;
+	u8		strip_vlan;
+	u8		drop_untagged;
+	u8		rsvd2;
+};
+
+/*
+ * Multicast frames are received on the ql of q-set index zero.
+ * On the completion queue.  RxQ ID = even is for large/data buffer queues
+ * and RxQ ID = odd is for small/header buffer queues.
+ */
+struct bfi_enet_rx_cfg_req {
+	struct bfi_msgq_mhdr mh;
+	u8			num_queue_sets;	/* # of Rx Queue Sets */
+	u8			rsvd[3];
+
+	struct {
+		struct bfi_enet_rxq	ql;	/* large/data/single buffers */
+		struct bfi_enet_rxq	qs;	/* small/header buffers */
+		struct bfi_enet_cq	cq;
+		struct bfi_enet_ib	ib;
+	} q_cfg[BFI_ENET_RX_QSET_MAX];
+
+	struct bfi_enet_ib_cfg	ib_cfg;
+
+	struct bfi_enet_rx_cfg	rx_cfg;
+};
+
+struct bfi_enet_rx_cfg_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;
+	u8		hw_id;	 /* For debugging */
+	u8		rsvd[2];
+	struct {
+		u32	ql_dbell; /* PCI base address offset */
+		u32	qs_dbell; /* PCI base address offset */
+		u32	i_dbell;  /* PCI base address offset */
+		u8		hw_lqid;  /* For debugging */
+		u8		hw_sqid;  /* For debugging */
+		u8		hw_cqid;  /* For debugging */
+		u8		rsvd;
+	} q_handles[BFI_ENET_RX_QSET_MAX];
+};
+
+/**
+ * RIT
+ *
+ * bfi_enet_rit_req is used by:
+ *	BFI_ENET_H2I_RIT_CFG_REQ
+ */
+struct bfi_enet_rit_req {
+	struct	bfi_msgq_mhdr mh;
+	u16	size;			/* number of table-entries used */
+	u8	rsvd[2];
+	u8	table[BFI_ENET_RSS_RIT_MAX];
+};
+
+/**
+ * RSS
+ *
+ * bfi_enet_rss_cfg_req is used by:
+ *	BFI_ENET_H2I_RSS_CFG_REQ
+ */
+enum bfi_enet_rss_type {
+	BFI_ENET_RSS_IPV6	= 0x01,
+	BFI_ENET_RSS_IPV6_TCP	= 0x02,
+	BFI_ENET_RSS_IPV4	= 0x04,
+	BFI_ENET_RSS_IPV4_TCP	= 0x08
+};
+
+struct bfi_enet_rss_cfg {
+	u8	type;
+	u8	mask;
+	u8	rsvd[2];
+	u32	key[BFI_ENET_RSS_KEY_LEN];
+};
+
+struct bfi_enet_rss_cfg_req {
+	struct bfi_msgq_mhdr	mh;
+	struct bfi_enet_rss_cfg	cfg;
+};
+
+/**
+ * MAC Unicast
+ *
+ * bfi_enet_rx_vlan_req is used by:
+ *	BFI_ENET_H2I_MAC_UCAST_SET_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_CLR_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_ADD_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_DEL_REQ
+ */
+struct bfi_enet_ucast_req {
+	struct bfi_msgq_mhdr	mh;
+	mac_t			mac_addr;
+	u8			rsvd[2];
+};
+
+/**
+ * MAC Unicast + VLAN
+ */
+struct bfi_enet_mac_n_vlan_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			vlan_id;
+	mac_t			mac_addr;
+};
+
+/**
+ * MAC Multicast
+ *
+ * bfi_enet_mac_mfilter_add_req is used by:
+ *	BFI_ENET_H2I_MAC_MCAST_ADD_REQ
+ */
+struct bfi_enet_mcast_add_req {
+	struct bfi_msgq_mhdr	mh;
+	mac_t			mac_addr;
+	u8			rsvd[2];
+};
+
+/**
+ * bfi_enet_mac_mfilter_add_rsp is used by:
+ *	BFI_ENET_I2H_MAC_MCAST_ADD_RSP
+ */
+struct bfi_enet_mcast_add_rsp {
+	struct bfi_msgq_mhdr	mh;
+	u8			error;
+	u8			rsvd;
+	u16			cmd_offset;
+	u16			handle;
+	u8			rsvd1[2];
+};
+
+/**
+ * bfi_enet_mac_mfilter_del_req is used by:
+ *	BFI_ENET_H2I_MAC_MCAST_DEL_REQ
+ */
+struct bfi_enet_mcast_del_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			handle;
+	u8			rsvd[2];
+};
+
+/**
+ * VLAN
+ *
+ * bfi_enet_rx_vlan_req is used by:
+ *	BFI_ENET_H2I_RX_VLAN_SET_REQ
+ */
+struct bfi_enet_rx_vlan_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			block_idx;
+	u8			rsvd[3];
+	u32			bit_mask[BFI_ENET_VLAN_WORDS_MAX];
+};
+
+/**
+ * PAUSE
+ *
+ * bfi_enet_set_pause_req is used by:
+ *	BFI_ENET_H2I_SET_PAUSE_REQ
+ */
+struct bfi_enet_set_pause_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			rsvd[2];
+	u8			tx_pause;	/* 1 = enable;  0 = disable */
+	u8			rx_pause;	/* 1 = enable;  0 = disable */
+};
+
+/**
+ * DIAGNOSTICS
+ *
+ * bfi_enet_diag_lb_req is used by:
+ *      BFI_ENET_H2I_DIAG_LOOPBACK
+ */
+struct bfi_enet_diag_lb_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			rsvd[2];
+	u8			mode;		/* cable or Serdes */
+	u8			enable;		/* 1 = enable;  0 = disable */
+};
+
+/**
+ * enum for Loopback opmodes
+ */
+enum {
+	BFI_ENET_DIAG_LB_OPMODE_EXT = 0,
+	BFI_ENET_DIAG_LB_OPMODE_CBL = 1,
+};
+
+/**
+ * STATISTICS
+ *
+ * bfi_enet_stats_req is used by:
+ *    BFI_ENET_H2I_STATS_GET_REQ
+ *    BFI_ENET_I2H_STATS_CLR_REQ
+ */
+struct bfi_enet_stats_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			stats_mask;
+	u8			rsvd[2];
+	u32			rx_enet_mask;
+	u32			tx_enet_mask;
+	union bfi_addr_u	host_buffer;
+};
+
+/**
+ * defines for "stats_mask" above.
+ */
+#define BFI_ENET_STATS_MAC    (1 << 0)    /* !< MAC Statistics */
+#define BFI_ENET_STATS_BPC    (1 << 1)    /* !< Pause Stats from BPC */
+#define BFI_ENET_STATS_RAD    (1 << 2)    /* !< Rx Admission Statistics */
+#define BFI_ENET_STATS_RX_FC  (1 << 3)    /* !< Rx FC Stats from RxA */
+#define BFI_ENET_STATS_TX_FC  (1 << 4)    /* !< Tx FC Stats from TxA */
+
+#define BFI_ENET_STATS_ALL    0x1f
+
+/* TxF Frame Statistics */
+struct bfi_enet_stats_txf {
+	u64 ucast_octets;
+	u64 ucast;
+	u64 ucast_vlan;
+
+	u64 mcast_octets;
+	u64 mcast;
+	u64 mcast_vlan;
+
+	u64 bcast_octets;
+	u64 bcast;
+	u64 bcast_vlan;
+
+	u64 errors;
+	u64 filter_vlan;      /* frames filtered due to VLAN */
+	u64 filter_mac_sa;    /* frames filtered due to SA check */
+};
+
+/* RxF Frame Statistics */
+struct bfi_enet_stats_rxf {
+	u64 ucast_octets;
+	u64 ucast;
+	u64 ucast_vlan;
+
+	u64 mcast_octets;
+	u64 mcast;
+	u64 mcast_vlan;
+
+	u64 bcast_octets;
+	u64 bcast;
+	u64 bcast_vlan;
+	u64 frame_drops;
+};
+
+/* FC Tx Frame Statistics */
+struct bfi_enet_stats_fc_tx {
+	u64 txf_ucast_octets;
+	u64 txf_ucast;
+	u64 txf_ucast_vlan;
+
+	u64 txf_mcast_octets;
+	u64 txf_mcast;
+	u64 txf_mcast_vlan;
+
+	u64 txf_bcast_octets;
+	u64 txf_bcast;
+	u64 txf_bcast_vlan;
+
+	u64 txf_parity_errors;
+	u64 txf_timeout;
+	u64 txf_fid_parity_errors;
+};
+
+/* FC Rx Frame Statistics */
+struct bfi_enet_stats_fc_rx {
+	u64 rxf_ucast_octets;
+	u64 rxf_ucast;
+	u64 rxf_ucast_vlan;
+
+	u64 rxf_mcast_octets;
+	u64 rxf_mcast;
+	u64 rxf_mcast_vlan;
+
+	u64 rxf_bcast_octets;
+	u64 rxf_bcast;
+	u64 rxf_bcast_vlan;
+};
+
+/* RAD Frame Statistics */
+struct bfi_enet_stats_rad {
+	u64 rx_frames;
+	u64 rx_octets;
+	u64 rx_vlan_frames;
+
+	u64 rx_ucast;
+	u64 rx_ucast_octets;
+	u64 rx_ucast_vlan;
+
+	u64 rx_mcast;
+	u64 rx_mcast_octets;
+	u64 rx_mcast_vlan;
+
+	u64 rx_bcast;
+	u64 rx_bcast_octets;
+	u64 rx_bcast_vlan;
+
+	u64 rx_drops;
+};
+
+/* BPC Tx Registers */
+struct bfi_enet_stats_bpc {
+	/* transmit stats */
+	u64 tx_pause[8];
+	u64 tx_zero_pause[8];	/*!< Pause cancellation */
+	/*!<Pause initiation rather than retention */
+	u64 tx_first_pause[8];
+
+	/* receive stats */
+	u64 rx_pause[8];
+	u64 rx_zero_pause[8];	/*!< Pause cancellation */
+	/*!<Pause initiation rather than retention */
+	u64 rx_first_pause[8];
+};
+
+/* MAC Rx Statistics */
+struct bfi_enet_stats_mac {
+	u64 frame_64;		/* both rx and tx counter */
+	u64 frame_65_127;		/* both rx and tx counter */
+	u64 frame_128_255;		/* both rx and tx counter */
+	u64 frame_256_511;		/* both rx and tx counter */
+	u64 frame_512_1023;	/* both rx and tx counter */
+	u64 frame_1024_1518;	/* both rx and tx counter */
+	u64 frame_1519_1522;	/* both rx and tx counter */
+
+	/* receive stats */
+	u64 rx_bytes;
+	u64 rx_packets;
+	u64 rx_fcs_error;
+	u64 rx_multicast;
+	u64 rx_broadcast;
+	u64 rx_control_frames;
+	u64 rx_pause;
+	u64 rx_unknown_opcode;
+	u64 rx_alignment_error;
+	u64 rx_frame_length_error;
+	u64 rx_code_error;
+	u64 rx_carrier_sense_error;
+	u64 rx_undersize;
+	u64 rx_oversize;
+	u64 rx_fragments;
+	u64 rx_jabber;
+	u64 rx_drop;
+
+	/* transmit stats */
+	u64 tx_bytes;
+	u64 tx_packets;
+	u64 tx_multicast;
+	u64 tx_broadcast;
+	u64 tx_pause;
+	u64 tx_deferral;
+	u64 tx_excessive_deferral;
+	u64 tx_single_collision;
+	u64 tx_muliple_collision;
+	u64 tx_late_collision;
+	u64 tx_excessive_collision;
+	u64 tx_total_collision;
+	u64 tx_pause_honored;
+	u64 tx_drop;
+	u64 tx_jabber;
+	u64 tx_fcs_error;
+	u64 tx_control_frame;
+	u64 tx_oversize;
+	u64 tx_undersize;
+	u64 tx_fragments;
+};
+
+/**
+ * Complete statistics, DMAed from fw to host followed by
+ * BFI_ENET_I2H_STATS_GET_RSP
+ */
+struct bfi_enet_stats {
+	struct bfi_enet_stats_mac	mac_stats;
+	struct bfi_enet_stats_bpc	bpc_stats;
+	struct bfi_enet_stats_rad	rad_stats;
+	struct bfi_enet_stats_rad	rlb_stats;
+	struct bfi_enet_stats_fc_rx	fc_rx_stats;
+	struct bfi_enet_stats_fc_tx	fc_tx_stats;
+	struct bfi_enet_stats_rxf	rxf_stats[BFI_ENET_CFG_MAX];
+	struct bfi_enet_stats_txf	txf_stats[BFI_ENET_CFG_MAX];
+};
+
+#pragma pack()
+
+#endif  /* __BFI_ENET_H__ */
diff --git a/drivers/net/bna/bna_enet.c b/drivers/net/bna/bna_enet.c
new file mode 100644
index 0000000..68a275d
--- /dev/null
+++ b/drivers/net/bna/bna_enet.c
@@ -0,0 +1,2129 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+#include "bna.h"
+
+static inline int
+ethport_can_be_up(struct bna_ethport *ethport)
+{
+	int ready = 0;
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		ready = ((ethport->flags & BNA_ETHPORT_F_ADMIN_UP) &&
+			 (ethport->flags & BNA_ETHPORT_F_RX_STARTED) &&
+			 (ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	else
+		ready = ((ethport->flags & BNA_ETHPORT_F_ADMIN_UP) &&
+			 (ethport->flags & BNA_ETHPORT_F_RX_STARTED) &&
+			 !(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	return ready;
+}
+
+#define ethport_is_up ethport_can_be_up
+
+enum bna_ethport_event {
+	ETHPORT_E_START			= 1,
+	ETHPORT_E_STOP			= 2,
+	ETHPORT_E_FAIL			= 3,
+	ETHPORT_E_UP			= 4,
+	ETHPORT_E_DOWN			= 5,
+	ETHPORT_E_FWRESP_UP_OK		= 6,
+	ETHPORT_E_FWRESP_DOWN		= 7,
+	ETHPORT_E_FWRESP_UP_FAIL	= 8,
+};
+
+enum bna_enet_event {
+	ENET_E_START			= 1,
+	ENET_E_STOP			= 2,
+	ENET_E_FAIL			= 3,
+	ENET_E_PAUSE_CFG		= 4,
+	ENET_E_MTU_CFG			= 5,
+	ENET_E_FWRESP_PAUSE		= 6,
+	ENET_E_CHLD_STOPPED		= 7,
+};
+
+enum bna_ioceth_event {
+	IOCETH_E_ENABLE			= 1,
+	IOCETH_E_DISABLE		= 2,
+	IOCETH_E_IOC_RESET		= 3,
+	IOCETH_E_IOC_FAILED		= 4,
+	IOCETH_E_IOC_READY		= 5,
+	IOCETH_E_ENET_ATTR_RESP		= 6,
+	IOCETH_E_ENET_STOPPED		= 7,
+	IOCETH_E_IOC_DISABLED		= 8,
+};
+
+#define bna_stats_copy(_name, _type)					\
+do {									\
+	count = sizeof(struct bfi_enet_stats_ ## _type) / sizeof(u64);	\
+	stats_src = (u64 *)&bna->stats.hw_stats_kva->_name ## _stats;	\
+	stats_dst = (u64 *)&bna->stats.hw_stats._name ## _stats;	\
+	for (i = 0; i < count; i++)					\
+		stats_dst[i] = be64_to_cpu(stats_src[i]);		\
+} while (0)								\
+
+/*
+ * FW response handlers
+ */
+
+static void
+bna_bfi_ethport_enable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->flags |= BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport_can_be_up(ethport))
+		bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+}
+
+static void
+bna_bfi_ethport_disable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport_up)
+		bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+}
+
+static void
+bna_bfi_ethport_admin_rsp(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_enable_req *admin_req =
+		&ethport->bfi_enet_cmd.admin_req;
+	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+
+	switch (admin_req->enable) {
+	case BNA_STATUS_T_ENABLED:
+		if (rsp->error == BFI_ENET_CMD_OK)
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_OK);
+		else {
+			ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_FAIL);
+		}
+		break;
+
+	case BNA_STATUS_T_DISABLED:
+		bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_DOWN);
+		ethport->link_status = BNA_LINK_DOWN;
+		ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+		break;
+	}
+}
+
+static void
+bna_bfi_ethport_lpbk_rsp(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_diag_lb_req *diag_lb_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+
+	switch (diag_lb_req->enable) {
+	case BNA_STATUS_T_ENABLED:
+		if (rsp->error == BFI_ENET_CMD_OK)
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_OK);
+		else {
+			ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_FAIL);
+		}
+		break;
+
+	case BNA_STATUS_T_DISABLED:
+		bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_DOWN);
+		break;
+	}
+}
+
+static void
+bna_bfi_pause_set_rsp(struct bna_enet *enet, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(enet, ENET_E_FWRESP_PAUSE);
+}
+
+static void
+bna_bfi_attr_get_rsp(struct bna_ioceth *ioceth,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_attr_rsp *rsp = (struct bfi_enet_attr_rsp *)msghdr;
+
+	/**
+	 * Store only if not set earlier, since BNAD can override the HW
+	 * attributes
+	 */
+	if (!ioceth->attr.num_txq)
+		ioceth->attr.num_txq = ntohl(rsp->max_cfg);
+	if (!ioceth->attr.num_rxp)
+		ioceth->attr.num_rxp = ntohl(rsp->max_cfg);
+	ioceth->attr.num_ucmac = ntohl(rsp->max_ucmac);
+	ioceth->attr.num_mcmac = BFI_ENET_MAX_MCAM;
+	ioceth->attr.max_rit_size = ntohl(rsp->rit_size);
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_ENET_ATTR_RESP);
+}
+
+static void
+bna_bfi_stats_get_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_stats_req *stats_req = &bna->stats_mod.stats_get;
+	u64 *stats_src;
+	u64 *stats_dst;
+	u32 tx_enet_mask = ntohl(stats_req->tx_enet_mask);
+	u32 rx_enet_mask = ntohl(stats_req->rx_enet_mask);
+	int count;
+	int i;
+
+	bna_stats_copy(mac, mac);
+	bna_stats_copy(bpc, bpc);
+	bna_stats_copy(rad, rad);
+	bna_stats_copy(rlb, rad);
+	bna_stats_copy(fc_rx, fc_rx);
+	bna_stats_copy(fc_tx, fc_tx);
+
+	stats_src = (u64 *)&(bna->stats.hw_stats_kva->rxf_stats[0]);
+
+	/* Copy Rxf stats to SW area, scatter them while copying */
+	for (i = 0; i < BFI_ENET_CFG_MAX; i++) {
+		stats_dst = (u64 *)&(bna->stats.hw_stats.rxf_stats[i]);
+		memset(stats_dst, 0, sizeof(struct bfi_enet_stats_rxf));
+		if (rx_enet_mask & ((u32)(1 << i))) {
+			int k;
+			count = sizeof(struct bfi_enet_stats_rxf) /
+				sizeof(u64);
+			for (k = 0; k < count; k++) {
+				stats_dst[k] = be64_to_cpu(*stats_src);
+				stats_src++;
+			}
+		}
+	}
+
+	/* Copy Txf stats to SW area, scatter them while copying */
+	for (i = 0; i < BFI_ENET_CFG_MAX; i++) {
+		stats_dst = (u64 *)&(bna->stats.hw_stats.txf_stats[i]);
+		memset(stats_dst, 0, sizeof(struct bfi_enet_stats_txf));
+		if (tx_enet_mask & ((u32)(1 << i))) {
+			int k;
+			count = sizeof(struct bfi_enet_stats_txf) /
+				sizeof(u64);
+			for (k = 0; k < count; k++) {
+				stats_dst[k] = be64_to_cpu(*stats_src);
+				stats_src++;
+			}
+		}
+	}
+
+	bna->stats_mod.stats_get_busy = false;
+	bnad_cb_stats_get(bna->bnad, BNA_CB_SUCCESS, &bna->stats);
+}
+
+static void
+bna_bfi_ethport_linkup_aen(struct bna_ethport *ethport,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->link_status = BNA_LINK_UP;
+
+	/* Dispatch events */
+	ethport->link_cbfn(ethport->bna->bnad, ethport->link_status);
+}
+
+static void
+bna_bfi_ethport_linkdown_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->link_status = BNA_LINK_DOWN;
+
+	/* Dispatch events */
+	ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+}
+
+static void
+bna_err_handler(struct bna *bna, u32 intr_status)
+{
+	if (BNA_IS_HALT_INTR(bna, intr_status))
+		bna_halt_clear(bna);
+
+	bfa_nw_ioc_error_isr(&bna->ioceth.ioc);
+}
+
+void
+bna_mbox_handler(struct bna *bna, u32 intr_status)
+{
+	if (BNA_IS_ERR_INTR(bna, intr_status)) {
+		bna_err_handler(bna, intr_status);
+		return;
+	}
+	if (BNA_IS_MBOX_INTR(bna, intr_status))
+		bfa_nw_ioc_mbox_isr(&bna->ioceth.ioc);
+}
+
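+/*
+ * Dispatch FW responses and async events received on the msgq to the
+ * owning Tx, Rx, ethport, enet or ioceth object.
+ */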
+static void
+bna_msgq_rsp_handler(void *arg, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bna *bna = (struct bna *)arg;
+	struct bna_tx *tx;
+	struct bna_rx *rx;
+
+	switch (msghdr->msg_id) {
+	case BFI_ENET_I2H_RX_CFG_SET_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rx_enet_start_rsp(rx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_RX_CFG_CLR_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rx_enet_stop_rsp(rx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_RIT_CFG_RSP:
+	case BFI_ENET_I2H_RSS_CFG_RSP:
+	case BFI_ENET_I2H_RSS_ENABLE_RSP:
+	case BFI_ENET_I2H_RX_PROMISCUOUS_RSP:
+	case BFI_ENET_I2H_RX_DEFAULT_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_SET_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_CLR_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_ADD_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_DEL_RSP:
+	case BFI_ENET_I2H_MAC_MCAST_DEL_RSP:
+	case BFI_ENET_I2H_MAC_MCAST_FILTER_RSP:
+	case BFI_ENET_I2H_RX_VLAN_SET_RSP:
+	case BFI_ENET_I2H_RX_VLAN_STRIP_ENABLE_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rxf_cfg_rsp(&rx->rxf, msghdr);
+		break;
+
+	case BFI_ENET_I2H_MAC_MCAST_ADD_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rxf_mcast_add_rsp(&rx->rxf, msghdr);
+		break;
+
+	case BFI_ENET_I2H_TX_CFG_SET_RSP:
+		bna_tx_from_rid(bna, msghdr->enet_id, tx);
+		if (tx)
+			bna_bfi_tx_enet_start_rsp(tx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_TX_CFG_CLR_RSP:
+		bna_tx_from_rid(bna, msghdr->enet_id, tx);
+		if (tx)
+			bna_bfi_tx_enet_stop_rsp(tx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_ADMIN_RSP:
+		bna_bfi_ethport_admin_rsp(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_DIAG_LOOPBACK_RSP:
+		bna_bfi_ethport_lpbk_rsp(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_SET_PAUSE_RSP:
+		bna_bfi_pause_set_rsp(&bna->enet, msghdr);
+		break;
+
+	case BFI_ENET_I2H_GET_ATTR_RSP:
+		bna_bfi_attr_get_rsp(&bna->ioceth, msghdr);
+		break;
+
+	case BFI_ENET_I2H_STATS_GET_RSP:
+		bna_bfi_stats_get_rsp(bna, msghdr);
+		break;
+
+	case BFI_ENET_I2H_STATS_CLR_RSP:
+		/* No-op */
+		break;
+
+	case BFI_ENET_I2H_LINK_UP_AEN:
+		bna_bfi_ethport_linkup_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_LINK_DOWN_AEN:
+		bna_bfi_ethport_linkdown_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_ENABLE_AEN:
+		bna_bfi_ethport_enable_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_DISABLE_AEN:
+		bna_bfi_ethport_disable_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_BW_UPDATE_AEN:
+		bna_bfi_bw_update_aen(&bna->tx_mod);
+		break;
+
+	default:
+		break;
+	}
+}
+
+/**
+ * ETHPORT
+ */
+#define call_ethport_stop_cbfn(_ethport)				\
+do {									\
+	if ((_ethport)->stop_cbfn) {					\
+		void (*cbfn)(struct bna_enet *);			\
+		cbfn = (_ethport)->stop_cbfn;				\
+		(_ethport)->stop_cbfn = NULL;				\
+		cbfn(&(_ethport)->bna->enet);				\
+	}								\
+} while (0)
+
+#define call_ethport_adminup_cbfn(ethport, status)			\
+do {									\
+	if ((ethport)->adminup_cbfn) {					\
+		void (*cbfn)(struct bnad *, enum bna_cb_status);	\
+		cbfn = (ethport)->adminup_cbfn;				\
+		(ethport)->adminup_cbfn = NULL;				\
+		cbfn((ethport)->bna->bnad, status);			\
+	}								\
+} while (0)
+
+static void
+bna_bfi_ethport_admin_up(struct bna_ethport *ethport)
+{
+	struct bfi_enet_enable_req *admin_up_req =
+		&ethport->bfi_enet_cmd.admin_req;
+
+	bfi_msgq_mhdr_set(admin_up_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_PORT_ADMIN_UP_REQ, 0, 0);
+	admin_up_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	admin_up_req->enable = BNA_STATUS_T_ENABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &admin_up_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_admin_down(struct bna_ethport *ethport)
+{
+	struct bfi_enet_enable_req *admin_down_req =
+		&ethport->bfi_enet_cmd.admin_req;
+
+	bfi_msgq_mhdr_set(admin_down_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_PORT_ADMIN_UP_REQ, 0, 0);
+	admin_down_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	admin_down_req->enable = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &admin_down_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_lpbk_up(struct bna_ethport *ethport)
+{
+	struct bfi_enet_diag_lb_req *lpbk_up_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+
+	bfi_msgq_mhdr_set(lpbk_up_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_DIAG_LOOPBACK_REQ, 0, 0);
+	lpbk_up_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_diag_lb_req)));
+	lpbk_up_req->mode = (ethport->bna->enet.type ==
+				BNA_ENET_T_LOOPBACK_INTERNAL) ?
+				BFI_ENET_DIAG_LB_OPMODE_EXT :
+				BFI_ENET_DIAG_LB_OPMODE_CBL;
+	lpbk_up_req->enable = BNA_STATUS_T_ENABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_diag_lb_req), &lpbk_up_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_lpbk_down(struct bna_ethport *ethport)
+{
+	struct bfi_enet_diag_lb_req *lpbk_down_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+
+	bfi_msgq_mhdr_set(lpbk_down_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_DIAG_LOOPBACK_REQ, 0, 0);
+	lpbk_down_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_diag_lb_req)));
+	lpbk_down_req->enable = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_diag_lb_req), &lpbk_down_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_up(struct bna_ethport *ethport)
+{
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		bna_bfi_ethport_admin_up(ethport);
+	else
+		bna_bfi_ethport_lpbk_up(ethport);
+}
+
+static void
+bna_bfi_ethport_down(struct bna_ethport *ethport)
+{
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		bna_bfi_ethport_admin_down(ethport);
+	else
+		bna_bfi_ethport_lpbk_down(ethport);
+}
+
+bfa_fsm_state_decl(bna_ethport, stopped, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, down, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, up_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, down_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, up, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, last_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+
+static void
+bna_ethport_sm_stopped_entry(struct bna_ethport *ethport)
+{
+	call_ethport_stop_cbfn(ethport);
+}
+
+static void
+bna_ethport_sm_stopped(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_START:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	case ETHPORT_E_STOP:
+		call_ethport_stop_cbfn(ethport);
+		break;
+
+	case ETHPORT_E_FAIL:
+		/* No-op */
+		break;
+
+	case ETHPORT_E_DOWN:
+		/* This event is received due to Rx objects failing */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_down_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_down(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_UP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up_resp_wait);
+		bna_bfi_ethport_up(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_up_resp_wait_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_up_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		break;
+
+	case ETHPORT_E_FAIL:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_INTERRUPT);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down_resp_wait);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_SUCCESS);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	case ETHPORT_E_FWRESP_DOWN:
+		/* down_resp_wait -> up_resp_wait transition on ETHPORT_E_UP */
+		bna_bfi_ethport_up(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_down_resp_wait_entry(struct bna_ethport *ethport)
+{
+	/**
+	 * NOTE: Do not call bna_bfi_ethport_down() here. We get here via the
+	 * up_resp_wait -> down_resp_wait transition on ETHPORT_E_DOWN, so the
+	 * admin-up request is still outstanding on the mbox; the down request
+	 * is issued from the ETHPORT_E_FWRESP_UP_OK handler once that
+	 * response arrives.
+	 */
+}
+
+static void
+bna_ethport_sm_down_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_UP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up_resp_wait);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		/* up_resp_wait->down_resp_wait transition on ETHPORT_E_DOWN */
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+	case ETHPORT_E_FWRESP_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_up_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_up(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down_resp_wait);
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_last_resp_wait_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_last_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		/**
+		 * This event is received due to Rx objects stopping in
+		 * parallel to ethport
+		 */
+		/* No-op */
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		/* up_resp_wait->last_resp_wait transition on ETHPORT_E_STOP */
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+	case ETHPORT_E_FWRESP_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_init(struct bna_ethport *ethport, struct bna *bna)
+{
+	ethport->flags |= (BNA_ETHPORT_F_ADMIN_UP | BNA_ETHPORT_F_PORT_ENABLED);
+	ethport->bna = bna;
+
+	ethport->link_status = BNA_LINK_DOWN;
+	ethport->link_cbfn = bnad_cb_ethport_link_status;
+
+	ethport->rx_started_count = 0;
+
+	ethport->stop_cbfn = NULL;
+	ethport->adminup_cbfn = NULL;
+
+	bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+}
+
+static void
+bna_ethport_uninit(struct bna_ethport *ethport)
+{
+	ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+	ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+
+	ethport->bna = NULL;
+}
+
+static void
+bna_ethport_start(struct bna_ethport *ethport)
+{
+	bfa_fsm_send_event(ethport, ETHPORT_E_START);
+}
+
+static void
+bna_enet_cb_ethport_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+static void
+bna_ethport_stop(struct bna_ethport *ethport)
+{
+	ethport->stop_cbfn = bna_enet_cb_ethport_stopped;
+	bfa_fsm_send_event(ethport, ETHPORT_E_STOP);
+}
+
+static void
+bna_ethport_fail(struct bna_ethport *ethport)
+{
+	/* Reset the physical port status to enabled */
+	ethport->flags |= BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport->link_status != BNA_LINK_DOWN) {
+		ethport->link_status = BNA_LINK_DOWN;
+		ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+	}
+	bfa_fsm_send_event(ethport, ETHPORT_E_FAIL);
+}
+
+/* Should be called only when ethport is disabled */
+void
+bna_ethport_cb_rx_started(struct bna_ethport *ethport)
+{
+	ethport->rx_started_count++;
+
+	if (ethport->rx_started_count == 1) {
+		ethport->flags |= BNA_ETHPORT_F_RX_STARTED;
+
+		if (ethport_can_be_up(ethport))
+			bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+	}
+}
+
+void
+bna_ethport_cb_rx_stopped(struct bna_ethport *ethport)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	ethport->rx_started_count--;
+
+	if (ethport->rx_started_count == 0) {
+		ethport->flags &= ~BNA_ETHPORT_F_RX_STARTED;
+
+		if (ethport_up)
+			bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+	}
+}
+
+/**
+ * ENET
+ */
+#define bna_enet_chld_start(enet)					\
+do {									\
+	enum bna_tx_type tx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;			\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bna_ethport_start(&(enet)->bna->ethport);			\
+	bna_tx_mod_start(&(enet)->bna->tx_mod, tx_type);		\
+	bna_rx_mod_start(&(enet)->bna->rx_mod, rx_type);		\
+} while (0)
+
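+/*
+ * Stop ethport, Tx and Rx in parallel. The wait counter is raised once per
+ * child, so bna_enet_cb_chld_stopped() runs only after all of them have
+ * called back.
+ */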
+#define bna_enet_chld_stop(enet)					\
+do {									\
+	enum bna_tx_type tx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;			\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bfa_wc_init(&(enet)->chld_stop_wc, bna_enet_cb_chld_stopped, (enet));\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_ethport_stop(&(enet)->bna->ethport);			\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_tx_mod_stop(&(enet)->bna->tx_mod, tx_type);			\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_rx_mod_stop(&(enet)->bna->rx_mod, rx_type);			\
+	bfa_wc_wait(&(enet)->chld_stop_wc);				\
+} while (0)
+
+#define bna_enet_chld_fail(enet)					\
+do {									\
+	bna_ethport_fail(&(enet)->bna->ethport);			\
+	bna_tx_mod_fail(&(enet)->bna->tx_mod);				\
+	bna_rx_mod_fail(&(enet)->bna->rx_mod);				\
+} while (0)
+
+#define bna_enet_rx_start(enet)						\
+do {									\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bna_rx_mod_start(&(enet)->bna->rx_mod, rx_type);		\
+} while (0)
+
+#define bna_enet_rx_stop(enet)						\
+do {									\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bfa_wc_init(&(enet)->chld_stop_wc, bna_enet_cb_chld_stopped, (enet));\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_rx_mod_stop(&(enet)->bna->rx_mod, rx_type);			\
+	bfa_wc_wait(&(enet)->chld_stop_wc);				\
+} while (0)
+
+#define call_enet_stop_cbfn(enet)					\
+do {									\
+	if ((enet)->stop_cbfn) {					\
+		void (*cbfn)(void *);					\
+		void *cbarg;						\
+		cbfn = (enet)->stop_cbfn;				\
+		cbarg = (enet)->stop_cbarg;				\
+		(enet)->stop_cbfn = NULL;				\
+		(enet)->stop_cbarg = NULL;				\
+		cbfn(cbarg);						\
+	}								\
+} while (0)
+
+#define call_enet_pause_cbfn(enet)					\
+do {									\
+	if ((enet)->pause_cbfn) {					\
+		void (*cbfn)(struct bnad *);				\
+		cbfn = (enet)->pause_cbfn;				\
+		(enet)->pause_cbfn = NULL;				\
+		cbfn((enet)->bna->bnad);				\
+	}								\
+} while (0)
+
+#define call_enet_mtu_cbfn(enet)					\
+do {									\
+	if ((enet)->mtu_cbfn) {						\
+		void (*cbfn)(struct bnad *);				\
+		cbfn = (enet)->mtu_cbfn;				\
+		(enet)->mtu_cbfn = NULL;				\
+		cbfn((enet)->bna->bnad);				\
+	}								\
+} while (0)
+
+static void bna_enet_cb_chld_stopped(void *arg);
+static void bna_bfi_pause_set(struct bna_enet *enet);
+
+bfa_fsm_state_decl(bna_enet, stopped, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, pause_init_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, last_resp_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, started, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, cfg_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, cfg_stop_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, chld_stop_wait, struct bna_enet,
+			enum bna_enet_event);
+
+static void
+bna_enet_sm_stopped_entry(struct bna_enet *enet)
+{
+	call_enet_pause_cbfn(enet);
+	call_enet_mtu_cbfn(enet);
+	call_enet_stop_cbfn(enet);
+}
+
+static void
+bna_enet_sm_stopped(struct bna_enet *enet, enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_START:
+		bfa_fsm_set_state(enet, bna_enet_sm_pause_init_wait);
+		break;
+
+	case ENET_E_STOP:
+		call_enet_stop_cbfn(enet);
+		break;
+
+	case ENET_E_FAIL:
+		/* No-op */
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		call_enet_pause_cbfn(enet);
+		break;
+
+	case ENET_E_MTU_CFG:
+		call_enet_mtu_cbfn(enet);
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		/**
+		 * This event is received due to Ethport, Tx and Rx objects
+		 * failing
+		 */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_pause_init_wait_entry(struct bna_enet *enet)
+{
+	bna_bfi_pause_set(enet);
+}
+
+static void
+bna_enet_sm_pause_init_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_last_resp_wait);
+		break;
+
+	case ENET_E_FAIL:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		enet->flags |= BNA_ENET_F_PAUSE_CHANGED;
+		break;
+
+	case ENET_E_MTU_CFG:
+		/* No-op */
+		break;
+
+	case ENET_E_FWRESP_PAUSE:
+		if (enet->flags & BNA_ENET_F_PAUSE_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+			bna_bfi_pause_set(enet);
+		} else {
+			bfa_fsm_set_state(enet, bna_enet_sm_started);
+			bna_enet_chld_start(enet);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_last_resp_wait_entry(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+}
+
+static void
+bna_enet_sm_last_resp_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+	case ENET_E_FWRESP_PAUSE:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_started_entry(struct bna_enet *enet)
+{
+	/**
+	 * NOTE: Do not call bna_enet_chld_start() here, since it will be
+	 * inadvertently called during cfg_wait->started transition as well
+	 */
+	call_enet_pause_cbfn(enet);
+	call_enet_mtu_cbfn(enet);
+}
+
+static void
+bna_enet_sm_started(struct bna_enet *enet,
+			enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		bfa_fsm_set_state(enet, bna_enet_sm_chld_stop_wait);
+		break;
+
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_wait);
+		bna_bfi_pause_set(enet);
+		break;
+
+	case ENET_E_MTU_CFG:
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_wait);
+		bna_enet_rx_stop(enet);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_cfg_wait_entry(struct bna_enet *enet)
+{
+}
+
+static void
+bna_enet_sm_cfg_wait(struct bna_enet *enet,
+			enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_stop_wait);
+		break;
+
+	case ENET_E_FAIL:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		enet->flags |= BNA_ENET_F_PAUSE_CHANGED;
+		break;
+
+	case ENET_E_MTU_CFG:
+		enet->flags |= BNA_ENET_F_MTU_CHANGED;
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		bna_enet_rx_start(enet);
+		/* Fall through */
+	case ENET_E_FWRESP_PAUSE:
+		if (enet->flags & BNA_ENET_F_PAUSE_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+			bna_bfi_pause_set(enet);
+		} else if (enet->flags & BNA_ENET_F_MTU_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+			bna_enet_rx_stop(enet);
+		} else {
+			bfa_fsm_set_state(enet, bna_enet_sm_started);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_cfg_stop_wait_entry(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+	enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+}
+
+static void
+bna_enet_sm_cfg_stop_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_FWRESP_PAUSE:
+	case ENET_E_CHLD_STOPPED:
+		bfa_fsm_set_state(enet, bna_enet_sm_chld_stop_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_chld_stop_wait_entry(struct bna_enet *enet)
+{
+	bna_enet_chld_stop(enet);
+}
+
+static void
+bna_enet_sm_chld_stop_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_pause_set(struct bna_enet *enet)
+{
+	struct bfi_enet_set_pause_req *pause_req = &enet->pause_req;
+
+	bfi_msgq_mhdr_set(pause_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_SET_PAUSE_REQ, 0, 0);
+	pause_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_set_pause_req)));
+	pause_req->tx_pause = enet->pause_config.tx_pause;
+	pause_req->rx_pause = enet->pause_config.rx_pause;
+
+	bfa_msgq_cmd_set(&enet->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_set_pause_req), &pause_req->mh);
+	bfa_msgq_cmd_post(&enet->bna->msgq, &enet->msgq_cmd);
+}
+
+static void
+bna_enet_cb_chld_stopped(void *arg)
+{
+	struct bna_enet *enet = (struct bna_enet *)arg;
+
+	bfa_fsm_send_event(enet, ENET_E_CHLD_STOPPED);
+}
+
+static void
+bna_enet_init(struct bna_enet *enet, struct bna *bna)
+{
+	enet->bna = bna;
+	enet->flags = 0;
+	enet->mtu = 0;
+	enet->type = BNA_ENET_T_REGULAR;
+
+	enet->stop_cbfn = NULL;
+	enet->stop_cbarg = NULL;
+
+	enet->pause_cbfn = NULL;
+
+	enet->mtu_cbfn = NULL;
+
+	bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+}
+
+static void
+bna_enet_uninit(struct bna_enet *enet)
+{
+	enet->flags = 0;
+
+	enet->bna = NULL;
+}
+
+static void
+bna_enet_start(struct bna_enet *enet)
+{
+	enet->flags |= BNA_ENET_F_IOCETH_READY;
+	if (enet->flags & BNA_ENET_F_ENABLED)
+		bfa_fsm_send_event(enet, ENET_E_START);
+}
+
+static void
+bna_ioceth_cb_enet_stopped(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_ENET_STOPPED);
+}
+
+static void
+bna_enet_stop(struct bna_enet *enet)
+{
+	enet->stop_cbfn = bna_ioceth_cb_enet_stopped;
+	enet->stop_cbarg = &enet->bna->ioceth;
+
+	enet->flags &= ~BNA_ENET_F_IOCETH_READY;
+	bfa_fsm_send_event(enet, ENET_E_STOP);
+}
+
+static void
+bna_enet_fail(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_IOCETH_READY;
+	bfa_fsm_send_event(enet, ENET_E_FAIL);
+}
+
+void
+bna_enet_cb_tx_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_enet_cb_rx_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+int
+bna_enet_mtu_get(struct bna_enet *enet)
+{
+	return enet->mtu;
+}
+
+void
+bna_enet_enable(struct bna_enet *enet)
+{
+	if (enet->fsm != (bfa_sm_t)bna_enet_sm_stopped)
+		return;
+
+	enet->flags |= BNA_ENET_F_ENABLED;
+
+	if (enet->flags & BNA_ENET_F_IOCETH_READY)
+		bfa_fsm_send_event(enet, ENET_E_START);
+}
+
+void
+bna_enet_disable(struct bna_enet *enet, enum bna_cleanup_type type,
+		 void (*cbfn)(void *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		(*cbfn)(enet->bna->bnad);
+		return;
+	}
+
+	enet->stop_cbfn = cbfn;
+	enet->stop_cbarg = enet->bna->bnad;
+
+	enet->flags &= ~BNA_ENET_F_ENABLED;
+
+	bfa_fsm_send_event(enet, ENET_E_STOP);
+}
+
+void
+bna_enet_pause_config(struct bna_enet *enet,
+		      struct bna_pause_config *pause_config,
+		      void (*cbfn)(struct bnad *))
+{
+	enet->pause_config = *pause_config;
+
+	enet->pause_cbfn = cbfn;
+
+	bfa_fsm_send_event(enet, ENET_E_PAUSE_CFG);
+}
+
+void
+bna_enet_mtu_set(struct bna_enet *enet, int mtu,
+		 void (*cbfn)(struct bnad *))
+{
+	enet->mtu = mtu;
+
+	enet->mtu_cbfn = cbfn;
+
+	bfa_fsm_send_event(enet, ENET_E_MTU_CFG);
+}
+
+void
+bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac)
+{
+	*mac = bfa_nw_ioc_get_mac(&enet->bna->ioceth.ioc);
+}
+
+/**
+ * IOCETH
+ */
+#define enable_mbox_intr(_ioceth)					\
+do {									\
+	u32 intr_status;						\
+	bna_intr_status_get((_ioceth)->bna, intr_status);		\
+	bnad_cb_mbox_intr_enable((_ioceth)->bna->bnad);			\
+	bna_mbox_intr_enable((_ioceth)->bna);				\
+} while (0)
+
+#define disable_mbox_intr(_ioceth)					\
+do {									\
+	bna_mbox_intr_disable((_ioceth)->bna);				\
+	bnad_cb_mbox_intr_disable((_ioceth)->bna->bnad);		\
+} while (0)
+
+#define call_ioceth_stop_cbfn(_ioceth)					\
+do {									\
+	if ((_ioceth)->stop_cbfn) {					\
+		void (*cbfn)(struct bnad *);				\
+		struct bnad *cbarg;					\
+		cbfn = (_ioceth)->stop_cbfn;				\
+		cbarg = (_ioceth)->stop_cbarg;				\
+		(_ioceth)->stop_cbfn = NULL;				\
+		(_ioceth)->stop_cbarg = NULL;				\
+		cbfn(cbarg);						\
+	}								\
+} while (0)
+
+#define bna_stats_mod_uninit(_stats_mod)				\
+do {									\
+} while (0)
+
+#define bna_stats_mod_start(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = true;					\
+} while (0)
+
+#define bna_stats_mod_stop(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = false;				\
+} while (0)
+
+#define bna_stats_mod_fail(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = false;				\
+	(_stats_mod)->stats_get_busy = false;				\
+	(_stats_mod)->stats_clr_busy = false;				\
+} while (0)
+
+static void bna_bfi_attr_get(struct bna_ioceth *ioceth);
+
+bfa_fsm_state_decl(bna_ioceth, stopped, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ioc_ready_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, enet_attr_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ready, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, last_resp_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, enet_stop_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ioc_disable_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, failed, struct bna_ioceth,
+			enum bna_ioceth_event);
+
+static void
+bna_ioceth_sm_stopped_entry(struct bna_ioceth *ioceth)
+{
+	call_ioceth_stop_cbfn(ioceth);
+}
+
+static void
+bna_ioceth_sm_stopped(struct bna_ioceth *ioceth,
+			enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_ENABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_ready_wait);
+		bfa_nw_ioc_enable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ioc_ready_wait_entry(struct bna_ioceth *ioceth)
+{
+	/**
+	 * Do not call bfa_nw_ioc_enable() here. It must be called in the
+	 * previous state due to failed -> ioc_ready_wait transition.
+	 */
+}
+
+static void
+bna_ioceth_sm_ioc_ready_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	case IOCETH_E_IOC_READY:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_enet_attr_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_enet_attr_wait_entry(struct bna_ioceth *ioceth)
+{
+	bna_bfi_attr_get(ioceth);
+}
+
+static void
+bna_ioceth_sm_enet_attr_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_last_resp_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	case IOCETH_E_ENET_ATTR_RESP:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ready_entry(struct bna_ioceth *ioceth)
+{
+	bna_enet_start(&ioceth->bna->enet);
+	bna_stats_mod_start(&ioceth->bna->stats_mod);
+	bnad_cb_ioceth_ready(ioceth->bna->bnad);
+}
+
+static void
+bna_ioceth_sm_ready(struct bna_ioceth *ioceth, enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_enet_stop_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bna_enet_fail(&ioceth->bna->enet);
+		bna_stats_mod_fail(&ioceth->bna->stats_mod);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_last_resp_wait_entry(struct bna_ioceth *ioceth)
+{
+}
+
+static void
+bna_ioceth_sm_last_resp_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_FAILED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		disable_mbox_intr(ioceth);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_ENET_ATTR_RESP:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_enet_stop_wait_entry(struct bna_ioceth *ioceth)
+{
+	bna_stats_mod_stop(&ioceth->bna->stats_mod);
+	bna_enet_stop(&ioceth->bna->enet);
+}
+
+static void
+bna_ioceth_sm_enet_stop_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_FAILED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		disable_mbox_intr(ioceth);
+		bna_enet_fail(&ioceth->bna->enet);
+		bna_stats_mod_fail(&ioceth->bna->stats_mod);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_ENET_STOPPED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ioc_disable_wait_entry(struct bna_ioceth *ioceth)
+{
+}
+
+static void
+bna_ioceth_sm_ioc_disable_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_DISABLED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+		break;
+
+	case IOCETH_E_ENET_STOPPED:
+		/* This event is received due to enet failing */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_failed_entry(struct bna_ioceth *ioceth)
+{
+	bnad_cb_ioceth_failed(ioceth->bna->bnad);
+}
+
+static void
+bna_ioceth_sm_failed(struct bna_ioceth *ioceth,
+			enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_ready_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_attr_get(struct bna_ioceth *ioceth)
+{
+	struct bfi_enet_attr_req *attr_req = &ioceth->attr_req;
+
+	bfi_msgq_mhdr_set(attr_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_GET_ATTR_REQ, 0, 0);
+	attr_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_attr_req)));
+	bfa_msgq_cmd_set(&ioceth->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_attr_req), &attr_req->mh);
+	bfa_msgq_cmd_post(&ioceth->bna->msgq, &ioceth->msgq_cmd);
+}
+
+/* IOC callback functions */
+
+static void
+bna_cb_ioceth_enable(void *arg, enum bfa_status error)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	if (error)
+		bfa_fsm_send_event(ioceth, IOCETH_E_IOC_FAILED);
+	else
+		bfa_fsm_send_event(ioceth, IOCETH_E_IOC_READY);
+}
+
+static void
+bna_cb_ioceth_disable(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_DISABLED);
+}
+
+static void
+bna_cb_ioceth_hbfail(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_FAILED);
+}
+
+static void
+bna_cb_ioceth_reset(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_RESET);
+}
+
+static struct bfa_ioc_cbfn bna_ioceth_cbfn = {
+	bna_cb_ioceth_enable,
+	bna_cb_ioceth_disable,
+	bna_cb_ioceth_hbfail,
+	bna_cb_ioceth_reset
+};
+
+static void
+bna_ioceth_init(struct bna_ioceth *ioceth, struct bna *bna,
+		struct bna_res_info *res_info)
+{
+	u64 dma;
+	u8 *kva;
+
+	ioceth->bna = bna;
+
+	/**
+	 * Attach IOC and claim:
+	 *	1. DMA memory for IOC attributes
+	 *	2. Kernel memory for FW trace
+	 */
+	bfa_nw_ioc_attach(&ioceth->ioc, ioceth, &bna_ioceth_cbfn);
+	bfa_nw_ioc_pci_init(&ioceth->ioc, &bna->pcidev, BFI_PCIFN_CLASS_ETH);
+
+	BNA_GET_DMA_ADDR(
+		&res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].dma, dma);
+	kva = res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].kva;
+	bfa_nw_ioc_mem_claim(&ioceth->ioc, kva, dma);
+
+	kva = res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mdl[0].kva;
+
+	/**
+	 * Attach common modules (Diag, SFP, CEE, Port) and claim respective
+	 * DMA memory.
+	 */
+	BNA_GET_DMA_ADDR(
+		&res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].dma, dma);
+	kva = res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].kva;
+	bfa_nw_cee_attach(&bna->cee, &ioceth->ioc, bna);
+	bfa_nw_cee_mem_claim(&bna->cee, kva, dma);
+	kva += bfa_nw_cee_meminfo();
+	dma += bfa_nw_cee_meminfo();
+
+	bfa_msgq_attach(&bna->msgq, &ioceth->ioc);
+	bfa_msgq_memclaim(&bna->msgq, kva, dma);
+	bfa_msgq_regisr(&bna->msgq, BFI_MC_ENET, bna_msgq_rsp_handler, bna);
+	kva += bfa_msgq_meminfo();
+	dma += bfa_msgq_meminfo();
+
+	ioceth->stop_cbfn = NULL;
+	ioceth->stop_cbarg = NULL;
+
+	bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+}
+
+static void
+bna_ioceth_uninit(struct bna_ioceth *ioceth)
+{
+	bfa_nw_ioc_detach(&ioceth->ioc);
+
+	ioceth->bna = NULL;
+}
+
+void
+bna_ioceth_enable(struct bna_ioceth *ioceth)
+{
+	if (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_ready) {
+		bnad_cb_ioceth_ready(ioceth->bna->bnad);
+		return;
+	}
+
+	if (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_stopped)
+		bfa_fsm_send_event(ioceth, IOCETH_E_ENABLE);
+}
+
+void
+bna_ioceth_disable(struct bna_ioceth *ioceth, enum bna_cleanup_type type)
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		bnad_cb_ioceth_disabled(ioceth->bna->bnad);
+		return;
+	}
+
+	ioceth->stop_cbfn = bnad_cb_ioceth_disabled;
+	ioceth->stop_cbarg = ioceth->bna->bnad;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_DISABLE);
+}
+
+static void
+bna_ucam_mod_init(struct bna_ucam_mod *ucam_mod, struct bna *bna,
+		  struct bna_res_info *res_info)
+{
+	int i;
+
+	ucam_mod->ucmac = (struct bna_mac *)
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&ucam_mod->free_q);
+	for (i = 0; i < bna->ioceth.attr.num_ucmac; i++) {
+		bfa_q_qe_init(&ucam_mod->ucmac[i].qe);
+		list_add_tail(&ucam_mod->ucmac[i].qe, &ucam_mod->free_q);
+	}
+
+	ucam_mod->bna = bna;
+}
+
+static void
+bna_ucam_mod_uninit(struct bna_ucam_mod *ucam_mod)
+{
+	struct list_head *qe;
+	int i = 0;
+
+	list_for_each(qe, &ucam_mod->free_q)
+		i++;
+
+	ucam_mod->bna = NULL;
+}
+
+static void
+bna_mcam_mod_init(struct bna_mcam_mod *mcam_mod, struct bna *bna,
+		  struct bna_res_info *res_info)
+{
+	int i;
+
+	mcam_mod->mcmac = (struct bna_mac *)
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&mcam_mod->free_q);
+	for (i = 0; i < bna->ioceth.attr.num_mcmac; i++) {
+		bfa_q_qe_init(&mcam_mod->mcmac[i].qe);
+		list_add_tail(&mcam_mod->mcmac[i].qe, &mcam_mod->free_q);
+	}
+
+	mcam_mod->mchandle = (struct bna_mcam_handle *)
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&mcam_mod->free_handle_q);
+	for (i = 0; i < bna->ioceth.attr.num_mcmac; i++) {
+		bfa_q_qe_init(&mcam_mod->mchandle[i].qe);
+		list_add_tail(&mcam_mod->mchandle[i].qe,
+				&mcam_mod->free_handle_q);
+	}
+
+	mcam_mod->bna = bna;
+}
+
+static void
+bna_mcam_mod_uninit(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &mcam_mod->free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &mcam_mod->free_handle_q)
+		i++;
+
+	mcam_mod->bna = NULL;
+}
+
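+/*
+ * Request the full HW stats block from FW; the reply
+ * (BFI_ENET_I2H_STATS_GET_RSP) is handled in bna_bfi_stats_get_rsp().
+ */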
+static void
+bna_bfi_stats_get(struct bna *bna)
+{
+	struct bfi_enet_stats_req *stats_req = &bna->stats_mod.stats_get;
+
+	bna->stats_mod.stats_get_busy = true;
+
+	bfi_msgq_mhdr_set(stats_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_STATS_GET_REQ, 0, 0);
+	stats_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_stats_req)));
+	stats_req->stats_mask = htons(BFI_ENET_STATS_ALL);
+	stats_req->tx_enet_mask = htonl(bna->tx_mod.rid_mask);
+	stats_req->rx_enet_mask = htonl(bna->rx_mod.rid_mask);
+	stats_req->host_buffer.a32.addr_hi = bna->stats.hw_stats_dma.msb;
+	stats_req->host_buffer.a32.addr_lo = bna->stats.hw_stats_dma.lsb;
+
+	bfa_msgq_cmd_set(&bna->stats_mod.stats_get_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_stats_req), &stats_req->mh);
+	bfa_msgq_cmd_post(&bna->msgq, &bna->stats_mod.stats_get_cmd);
+}
+
+void
+bna_res_req(struct bna_res_info *res_info)
+{
+	/* DMA memory for COMMON_MODULE */
+	res_info[BNA_RES_MEM_T_COM].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.len = ALIGN(
+				(bfa_nw_cee_meminfo() +
+				bfa_msgq_meminfo()), PAGE_SIZE);
+
+	/* DMA memory for retrieving IOC attributes */
+	res_info[BNA_RES_MEM_T_ATTR].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.len =
+				ALIGN(bfa_nw_ioc_meminfo(), PAGE_SIZE);
+
+	/* Virtual memory for retrieving fw_trc */
+	res_info[BNA_RES_MEM_T_FWTRC].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.num = 0;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.len = 0;
+
+	/* DMA memory for retrieving stats */
+	res_info[BNA_RES_MEM_T_STATS].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.len =
+				ALIGN(sizeof(struct bfi_enet_stats),
+					PAGE_SIZE);
+}
+
+void
+bna_mod_res_req(struct bna *bna, struct bna_res_info *res_info)
+{
+	struct bna_attr *attr = &bna->ioceth.attr;
+
+	/* Virtual memory for Tx objects - stored by Tx module */
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.len =
+		attr->num_txq * sizeof(struct bna_tx);
+
+	/* Virtual memory for TxQ - stored by Tx module */
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.len =
+		attr->num_txq * sizeof(struct bna_txq);
+
+	/* Virtual memory for Rx objects - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.len =
+		attr->num_rxp * sizeof(struct bna_rx);
+
+	/* Virtual memory for RxPath - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.len =
+		attr->num_rxp * sizeof(struct bna_rxp);
+
+	/* Virtual memory for RxQ - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.len =
+		(attr->num_rxp * 2) * sizeof(struct bna_rxq);
+
+	/* Virtual memory for Unicast MAC address - stored by ucam module */
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.len =
+		attr->num_ucmac * sizeof(struct bna_mac);
+
+	/* Virtual memory for Multicast MAC address - stored by mcam module */
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.len =
+		attr->num_mcmac * sizeof(struct bna_mac);
+
+	/* Virtual memory for Multicast handle - stored by mcam module */
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.len =
+		attr->num_mcmac * sizeof(struct bna_mcam_handle);
+}
+
+void
+bna_init(struct bna *bna, struct bnad *bnad,
+		struct bfa_pcidev *pcidev, struct bna_res_info *res_info)
+{
+	bna->bnad = bnad;
+	bna->pcidev = *pcidev;
+
+	bna->stats.hw_stats_kva = (struct bfi_enet_stats *)
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].kva;
+	bna->stats.hw_stats_dma.msb =
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.msb;
+	bna->stats.hw_stats_dma.lsb =
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.lsb;
+
+	bna_reg_addr_init(bna, &bna->pcidev);
+
+	/* Also initializes diag, cee, sfp, phy_port, msgq */
+	bna_ioceth_init(&bna->ioceth, bna, res_info);
+
+	bna_enet_init(&bna->enet, bna);
+	bna_ethport_init(&bna->ethport, bna);
+}
+
+void
+bna_mod_init(struct bna *bna, struct bna_res_info *res_info)
+{
+	bna_tx_mod_init(&bna->tx_mod, bna, res_info);
+
+	bna_rx_mod_init(&bna->rx_mod, bna, res_info);
+
+	bna_ucam_mod_init(&bna->ucam_mod, bna, res_info);
+
+	bna_mcam_mod_init(&bna->mcam_mod, bna, res_info);
+
+	bna->default_mode_rid = BFI_INVALID_RID;
+	bna->promisc_rid = BFI_INVALID_RID;
+
+	bna->mod_flags |= BNA_MOD_F_INIT_DONE;
+}
+
+void
+bna_uninit(struct bna *bna)
+{
+	if (bna->mod_flags & BNA_MOD_F_INIT_DONE) {
+		bna_mcam_mod_uninit(&bna->mcam_mod);
+		bna_ucam_mod_uninit(&bna->ucam_mod);
+		bna_rx_mod_uninit(&bna->rx_mod);
+		bna_tx_mod_uninit(&bna->tx_mod);
+		bna->mod_flags &= ~BNA_MOD_F_INIT_DONE;
+	}
+
+	bna_stats_mod_uninit(&bna->stats_mod);
+	bna_ethport_uninit(&bna->ethport);
+	bna_enet_uninit(&bna->enet);
+
+	bna_ioceth_uninit(&bna->ioceth);
+
+	bna->bnad = NULL;
+}
+
+int
+bna_num_txq_set(struct bna *bna, int num_txq)
+{
+	if (num_txq > 0 && (num_txq <= bna->ioceth.attr.num_txq)) {
+		bna->ioceth.attr.num_txq = num_txq;
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_FAIL;
+}
+
+int
+bna_num_rxp_set(struct bna *bna, int num_rxp)
+{
+	if (num_rxp > 0 && (num_rxp <= bna->ioceth.attr.num_rxp)) {
+		bna->ioceth.attr.num_rxp = num_rxp;
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_FAIL;
+}
+
+struct bna_mac *
+bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&ucam_mod->free_q))
+		return NULL;
+
+	bfa_q_deq(&ucam_mod->free_q, &qe);
+
+	return (struct bna_mac *)qe;
+}
+
+void
+bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod, struct bna_mac *mac)
+{
+	list_add_tail(&mac->qe, &ucam_mod->free_q);
+}
+
+struct bna_mac *
+bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&mcam_mod->free_q))
+		return NULL;
+
+	bfa_q_deq(&mcam_mod->free_q, &qe);
+
+	return (struct bna_mac *)qe;
+}
+
+void
+bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod, struct bna_mac *mac)
+{
+	list_add_tail(&mac->qe, &mcam_mod->free_q);
+}
+
+struct bna_mcam_handle *
+bna_mcam_mod_handle_get(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&mcam_mod->free_handle_q))
+		return NULL;
+
+	bfa_q_deq(&mcam_mod->free_handle_q, &qe);
+
+	return (struct bna_mcam_handle *)qe;
+}
+
+void
+bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod,
+			struct bna_mcam_handle *handle)
+{
+	list_add_tail(&handle->qe, &mcam_mod->free_handle_q);
+}
+
+void
+bna_hw_stats_get(struct bna *bna)
+{
+	if (!bna->stats_mod.ioc_ready) {
+		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
+		return;
+	}
+	if (bna->stats_mod.stats_get_busy) {
+		bnad_cb_stats_get(bna->bnad, BNA_CB_BUSY, &bna->stats);
+		return;
+	}
+
+	bna_bfi_stats_get(bna);
+}
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/8] bna: Tx and Rx Redesign
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
  2011-08-09  2:21 ` [PATCH 1/8] bna: MSGQ Implementation Rasesh Mody
  2011-08-09  2:21 ` [PATCH 2/8] bna: Introduce ENET as New Driver and FW Interface Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 4/8] bna: Add New HW Defs Rasesh Mody
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - This patch contains the changes resulting from the redesign of the Tx and Rx
   data path setup. In the old design, TxQs and RxQs were set up by the driver.
   With the new design, most of the hardware setup steps for the TxQs and RxQs
   are moved to FW. The host driver issues commands to FW through the message
   queue to set up or tear down the Tx and Rx data paths. FW performs the
   necessary steps and responds back to the driver with a status (see the
   sketch below).
 - As a result of this redesign, the state machine implementation for the Tx
   and Rx objects has changed significantly. Instead of doing raw register
   accesses, these state machines mostly send a command to FW, wait for the
   response, and then take the next action. In addition to Tx and Rx data path
   setup, this patch also deals with Rx filter configuration, such as unicast
   addresses, multicast addresses, VLAN filters, and promiscuous mode.
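
   For illustration, every FW interaction in this patch follows the same
   command/response pattern; a minimal sketch, using the unicast MAC request
   from bna_bfi_ucast_req() below as the example:

	/* fill in the msgq header and payload, then post the command to FW */
	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, req_type, 0, rxf->rx->rid);
	req->mh.num_entries = htons(
		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_ucast_req)));
	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
		sizeof(struct bfi_enet_ucast_req), &req->mh);
	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);

	/*
	 * The FW reply arrives in bna_msgq_rsp_handler() and is dispatched to
	 * the owning object's state machine, e.g. as an RXF_E_FW_RESP event.
	 */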

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bna_tx_rx.c | 3787 +++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 3787 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/bna/bna_tx_rx.c

diff --git a/drivers/net/bna/bna_tx_rx.c b/drivers/net/bna/bna_tx_rx.c
new file mode 100644
index 0000000..9221413
--- /dev/null
+++ b/drivers/net/bna/bna_tx_rx.c
@@ -0,0 +1,3787 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+#include "bna.h"
+#include "bfi.h"
+
+/**
+ * IB
+ */
+static void
+bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo)
+{
+	ib->coalescing_timeo = coalescing_timeo;
+	ib->door_bell.doorbell_ack = BNA_DOORBELL_IB_INT_ACK(
+				(u32)ib->coalescing_timeo, 0);
+}
+
+/**
+ * RXF
+ */
+
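+/*
+ * Mark the VLAN table / RSS configuration as pending again so that it is
+ * replayed to FW the next time the configuration is applied.
+ */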
+#define bna_rxf_vlan_cfg_soft_reset(rxf)				\
+do {									\
+	(rxf)->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;		\
+	(rxf)->vlan_strip_pending = true;				\
+} while (0)
+
+#define bna_rxf_rss_cfg_soft_reset(rxf)					\
+do {									\
+	if ((rxf)->rss_status == BNA_STATUS_T_ENABLED)			\
+		(rxf)->rss_pending = (BNA_RSS_F_RIT_PENDING |		\
+				BNA_RSS_F_CFG_PENDING |			\
+				BNA_RSS_F_STATUS_PENDING);		\
+} while (0)
+
+static int bna_rxf_cfg_apply(struct bna_rxf *rxf);
+static void bna_rxf_cfg_reset(struct bna_rxf *rxf);
+static int bna_rxf_fltr_clear(struct bna_rxf *rxf);
+static int bna_rxf_ucast_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_promisc_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_allmulti_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_ucast_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+static int bna_rxf_promisc_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+static int bna_rxf_allmulti_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+
+bfa_fsm_state_decl(bna_rxf, stopped, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, paused, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, cfg_wait, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, started, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, fltr_clr_wait, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, last_resp_wait, struct bna_rxf,
+			enum bna_rxf_event);
+
+static void
+bna_rxf_sm_stopped_entry(struct bna_rxf *rxf)
+{
+	call_rxf_stop_cbfn(rxf);
+}
+
+static void
+bna_rxf_sm_stopped(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_START:
+		if (rxf->flags & BNA_RXF_F_PAUSED) {
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+			call_rxf_start_cbfn(rxf);
+		} else
+			bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
+		break;
+
+	case RXF_E_STOP:
+		call_rxf_stop_cbfn(rxf);
+		break;
+
+	case RXF_E_FAIL:
+		/* No-op */
+		break;
+
+	case RXF_E_CONFIG:
+		call_rxf_cam_fltr_cbfn(rxf);
+		break;
+
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		call_rxf_pause_cbfn(rxf);
+		break;
+
+	case RXF_E_RESUME:
+		rxf->flags &= ~BNA_RXF_F_PAUSED;
+		call_rxf_resume_cbfn(rxf);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_paused_entry(struct bna_rxf *rxf)
+{
+	call_rxf_pause_cbfn(rxf);
+}
+
+static void
+bna_rxf_sm_paused(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_STOP:
+	case RXF_E_FAIL:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_CONFIG:
+		call_rxf_cam_fltr_cbfn(rxf);
+		break;
+
+	case RXF_E_RESUME:
+		rxf->flags &= ~BNA_RXF_F_PAUSED;
+		bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_cfg_wait_entry(struct bna_rxf *rxf)
+{
+	if (!bna_rxf_cfg_apply(rxf)) {
+		/* No more pending config updates */
+		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
+	}
+}
+
+static void
+bna_rxf_sm_cfg_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_STOP:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_last_resp_wait);
+		break;
+
+	case RXF_E_FAIL:
+		bna_rxf_cfg_reset(rxf);
+		call_rxf_start_cbfn(rxf);
+		call_rxf_cam_fltr_cbfn(rxf);
+		call_rxf_resume_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_CONFIG:
+		/* No-op */
+		break;
+
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		call_rxf_start_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_fltr_clr_wait);
+		break;
+
+	case RXF_E_FW_RESP:
+		if (!bna_rxf_cfg_apply(rxf)) {
+			/* No more pending config updates */
+			bfa_fsm_set_state(rxf, bna_rxf_sm_started);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_started_entry(struct bna_rxf *rxf)
+{
+	call_rxf_start_cbfn(rxf);
+	call_rxf_cam_fltr_cbfn(rxf);
+	call_rxf_resume_cbfn(rxf);
+}
+
+static void
+bna_rxf_sm_started(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_STOP:
+	case RXF_E_FAIL:
+		bna_rxf_cfg_reset(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_CONFIG:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
+		break;
+
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		if (!bna_rxf_fltr_clear(rxf))
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+		else
+			bfa_fsm_set_state(rxf, bna_rxf_sm_fltr_clr_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_fltr_clr_wait_entry(struct bna_rxf *rxf)
+{
+}
+
+static void
+bna_rxf_sm_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_FAIL:
+		bna_rxf_cfg_reset(rxf);
+		call_rxf_pause_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_FW_RESP:
+		if (!bna_rxf_fltr_clear(rxf)) {
+			/* No more pending CAM entries to clear */
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_last_resp_wait_entry(struct bna_rxf *rxf)
+{
+}
+
+static void
+bna_rxf_sm_last_resp_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_FAIL:
+	case RXF_E_FW_RESP:
+		bna_rxf_cfg_reset(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
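+/*
+ * Firmware request builders: each routine fills in a BFI ENET request
+ * and posts it on the IOC message queue.
+ */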
+static void
+bna_bfi_ucast_req(struct bna_rxf *rxf, struct bna_mac *mac,
+		enum bfi_enet_h2i_msgs req_type)
+{
+	struct bfi_enet_ucast_req *req = &rxf->bfi_enet_cmd.ucast_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, req_type, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_ucast_req)));
+	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_ucast_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_mcast_add_req(struct bna_rxf *rxf, struct bna_mac *mac)
+{
+	struct bfi_enet_mcast_add_req *req =
+		&rxf->bfi_enet_cmd.mcast_add_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, BFI_ENET_H2I_MAC_MCAST_ADD_REQ,
+		0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_mcast_add_req)));
+	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_mcast_add_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_mcast_del_req(struct bna_rxf *rxf, u16 handle)
+{
+	struct bfi_enet_mcast_del_req *req =
+		&rxf->bfi_enet_cmd.mcast_del_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, BFI_ENET_H2I_MAC_MCAST_DEL_REQ,
+		0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_mcast_del_req)));
+	req->handle = htons(handle);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_mcast_del_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_mcast_filter_req(struct bna_rxf *rxf, enum bna_status status)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_MAC_MCAST_FILTER_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rx_promisc_req(struct bna_rxf *rxf, enum bna_status status)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_PROMISCUOUS_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rx_vlan_filter_set(struct bna_rxf *rxf, u8 block_idx)
+{
+	struct bfi_enet_rx_vlan_req *req = &rxf->bfi_enet_cmd.vlan_req;
+	int i;
+	int j;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_VLAN_SET_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rx_vlan_req)));
+	req->block_idx = block_idx;
+	for (i = 0; i < (BFI_ENET_VLAN_BLOCK_SIZE / 32); i++) {
+		j = (block_idx * (BFI_ENET_VLAN_BLOCK_SIZE / 32)) + i;
+		if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED)
+			req->bit_mask[i] =
+				htonl(rxf->vlan_filter_table[j]);
+		else
+			req->bit_mask[i] = 0xFFFFFFFF;
+	}
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rx_vlan_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_vlan_strip_enable(struct bna_rxf *rxf)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = rxf->vlan_strip_status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rit_cfg(struct bna_rxf *rxf)
+{
+	struct bfi_enet_rit_req *req = &rxf->bfi_enet_cmd.rit_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RIT_CFG_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rit_req)));
+	req->size = htons(rxf->rit_size);
+	memcpy(&req->table[0], rxf->rit, rxf->rit_size);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rit_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rss_cfg(struct bna_rxf *rxf)
+{
+	struct bfi_enet_rss_cfg_req *req = &rxf->bfi_enet_cmd.rss_req;
+	int i;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RSS_CFG_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rss_cfg_req)));
+	req->cfg.type = rxf->rss_cfg.hash_type;
+	req->cfg.mask = rxf->rss_cfg.hash_mask;
+	for (i = 0; i < BFI_ENET_RSS_KEY_LEN; i++)
+		req->cfg.key[i] =
+			htonl(rxf->rss_cfg.toeplitz_hash_key[i]);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rss_cfg_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rss_enable(struct bna_rxf *rxf)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RSS_ENABLE_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = rxf->rss_status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+/* This function gets the multicast MAC that has already been added to CAM */
+static struct bna_mac *
+bna_rxf_mcmac_get(struct bna_rxf *rxf, u8 *mac_addr)
+{
+	struct bna_mac *mac;
+	struct list_head *qe;
+
+	list_for_each(qe, &rxf->mcast_active_q) {
+		mac = (struct bna_mac *)qe;
+		if (BNA_MAC_IS_EQUAL(&mac->addr, mac_addr))
+			return mac;
+	}
+
+	list_for_each(qe, &rxf->mcast_pending_del_q) {
+		mac = (struct bna_mac *)qe;
+		if (BNA_MAC_IS_EQUAL(&mac->addr, mac_addr))
+			return mac;
+	}
+
+	return NULL;
+}
+
+static struct bna_mcam_handle *
+bna_rxf_mchandle_get(struct bna_rxf *rxf, int handle)
+{
+	struct bna_mcam_handle *mchandle;
+	struct list_head *qe;
+
+	list_for_each(qe, &rxf->mcast_handle_q) {
+		mchandle = (struct bna_mcam_handle *)qe;
+		if (mchandle->handle == handle)
+			return mchandle;
+	}
+
+	return NULL;
+}
+
+static void
+bna_rxf_mchandle_attach(struct bna_rxf *rxf, u8 *mac_addr, int handle)
+{
+	struct bna_mac *mcmac;
+	struct bna_mcam_handle *mchandle;
+
+	mcmac = bna_rxf_mcmac_get(rxf, mac_addr);
+	mchandle = bna_rxf_mchandle_get(rxf, handle);
+	if (mchandle == NULL) {
+		mchandle = bna_mcam_mod_handle_get(&rxf->rx->bna->mcam_mod);
+		mchandle->handle = handle;
+		mchandle->refcnt = 0;
+		list_add_tail(&mchandle->qe, &rxf->mcast_handle_q);
+	}
+	mchandle->refcnt++;
+	mcmac->handle = mchandle;
+}
+
+static int
+bna_rxf_mcast_del(struct bna_rxf *rxf, struct bna_mac *mac,
+		enum bna_cleanup_type cleanup)
+{
+	struct bna_mcam_handle *mchandle;
+	int ret = 0;
+
+	mchandle = mac->handle;
+	if (mchandle == NULL)
+		return ret;
+
+	mchandle->refcnt--;
+	if (mchandle->refcnt == 0) {
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_del_req(rxf, mchandle->handle);
+			ret = 1;
+		}
+		list_del(&mchandle->qe);
+		bfa_q_qe_init(&mchandle->qe);
+		bna_mcam_mod_handle_put(&rxf->rx->bna->mcam_mod, mchandle);
+	}
+	mac->handle = NULL;
+
+	return ret;
+}
+
+static int
+bna_rxf_mcast_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+	int ret;
+
+	/* Delete multicast entries previously added */
+	while (!list_empty(&rxf->mcast_pending_del_q)) {
+		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		ret = bna_rxf_mcast_del(rxf, mac, BNA_HARD_CLEANUP);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (ret)
+			return ret;
+	}
+
+	/* Add multicast entries */
+	if (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		list_add_tail(&mac->qe, &rxf->mcast_active_q);
+		bna_bfi_mcast_add_req(rxf, mac);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_vlan_cfg_apply(struct bna_rxf *rxf)
+{
+	u8 vlan_pending_bitmask;
+	int block_idx = 0;
+
+	if (rxf->vlan_pending_bitmask) {
+		vlan_pending_bitmask = rxf->vlan_pending_bitmask;
+		while (!(vlan_pending_bitmask & 0x1)) {
+			block_idx++;
+			vlan_pending_bitmask >>= 1;
+		}
+		rxf->vlan_pending_bitmask &= ~(1 << block_idx);
+		bna_bfi_rx_vlan_filter_set(rxf, block_idx);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_mcast_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct list_head *qe;
+	struct bna_mac *mac;
+	int ret;
+
+	/* Throw away mcast entries pending deletion */
+	while (!list_empty(&rxf->mcast_pending_del_q)) {
+		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		ret = bna_rxf_mcast_del(rxf, mac, cleanup);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (ret)
+			return ret;
+	}
+
+	/* Move active mcast entries to pending_add_q */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		bfa_q_qe_init(qe);
+		list_add_tail(qe, &rxf->mcast_pending_add_q);
+		mac = (struct bna_mac *)qe;
+		if (bna_rxf_mcast_del(rxf, mac, cleanup))
+			return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_rss_cfg_apply(struct bna_rxf *rxf)
+{
+	if (rxf->rss_pending) {
+		if (rxf->rss_pending & BNA_RSS_F_RIT_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_RIT_PENDING;
+			bna_bfi_rit_cfg(rxf);
+			return 1;
+		}
+
+		if (rxf->rss_pending & BNA_RSS_F_CFG_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_CFG_PENDING;
+			bna_bfi_rss_cfg(rxf);
+			return 1;
+		}
+
+		if (rxf->rss_pending & BNA_RSS_F_STATUS_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_STATUS_PENDING;
+			bna_bfi_rss_enable(rxf);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
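+/*
+ * Apply pending configuration in order: ucast, mcast, promisc, allmulti,
+ * VLAN filter, VLAN strip and RSS.  Returns 1 when a firmware request was
+ * posted (the caller waits for RXF_E_FW_RESP), 0 when nothing is pending.
+ */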
+static int
+bna_rxf_cfg_apply(struct bna_rxf *rxf)
+{
+	if (bna_rxf_ucast_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_mcast_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_promisc_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_allmulti_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_vlan_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_vlan_strip_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_rss_cfg_apply(rxf))
+		return 1;
+
+	return 0;
+}
+
+/* Only software reset */
+static int
+bna_rxf_fltr_clear(struct bna_rxf *rxf)
+{
+	if (bna_rxf_ucast_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	if (bna_rxf_mcast_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	if (bna_rxf_promisc_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	if (bna_rxf_allmulti_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	return 0;
+}
+
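+/*
+ * Soft-reset all filter, VLAN and RSS configuration back to the pending
+ * state; no firmware requests are issued here.
+ */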
+static void
+bna_rxf_cfg_reset(struct bna_rxf *rxf)
+{
+	bna_rxf_ucast_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_mcast_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_promisc_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_allmulti_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_vlan_cfg_soft_reset(rxf);
+	bna_rxf_rss_cfg_soft_reset(rxf);
+}
+
+static void
+bna_rit_init(struct bna_rxf *rxf, int rit_size)
+{
+	struct bna_rx *rx = rxf->rx;
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+	int offset = 0;
+
+	rxf->rit_size = rit_size;
+	list_for_each(qe, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe;
+		rxf->rit[offset] = rxp->cq.ccb->id;
+		offset++;
+	}
+}
+
+void
+bna_bfi_rxf_cfg_rsp(struct bna_rxf *rxf, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(rxf, RXF_E_FW_RESP);
+}
+
+void
+bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_mcast_add_req *req =
+		&rxf->bfi_enet_cmd.mcast_add_req;
+	struct bfi_enet_mcast_add_rsp *rsp =
+		(struct bfi_enet_mcast_add_rsp *)msghdr;
+
+	bna_rxf_mchandle_attach(rxf, (u8 *)&req->mac_addr,
+		ntohs(rsp->handle));
+	bfa_fsm_send_event(rxf, RXF_E_FW_RESP);
+}
+
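+/*
+ * Initialize an RXF: set up the MAC/mcast queues, RIT, RSS and VLAN
+ * defaults, and start the state machine in the stopped state.
+ */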
+static void
+bna_rxf_init(struct bna_rxf *rxf,
+		struct bna_rx *rx,
+		struct bna_rx_config *q_config,
+		struct bna_res_info *res_info)
+{
+	rxf->rx = rx;
+
+	INIT_LIST_HEAD(&rxf->ucast_pending_add_q);
+	INIT_LIST_HEAD(&rxf->ucast_pending_del_q);
+	rxf->ucast_pending_set = 0;
+	rxf->ucast_active_set = 0;
+	INIT_LIST_HEAD(&rxf->ucast_active_q);
+	rxf->ucast_pending_mac = NULL;
+
+	INIT_LIST_HEAD(&rxf->mcast_pending_add_q);
+	INIT_LIST_HEAD(&rxf->mcast_pending_del_q);
+	INIT_LIST_HEAD(&rxf->mcast_active_q);
+	INIT_LIST_HEAD(&rxf->mcast_handle_q);
+
+	if (q_config->paused)
+		rxf->flags |= BNA_RXF_F_PAUSED;
+
+	rxf->rit = (u8 *)
+		res_info[BNA_RX_RES_MEM_T_RIT].res_u.mem_info.mdl[0].kva;
+	bna_rit_init(rxf, q_config->num_paths);
+
+	rxf->rss_status = q_config->rss_status;
+	if (rxf->rss_status == BNA_STATUS_T_ENABLED) {
+		rxf->rss_cfg = q_config->rss_config;
+		rxf->rss_pending |= BNA_RSS_F_CFG_PENDING;
+		rxf->rss_pending |= BNA_RSS_F_RIT_PENDING;
+		rxf->rss_pending |= BNA_RSS_F_STATUS_PENDING;
+	}
+
+	rxf->vlan_filter_status = BNA_STATUS_T_DISABLED;
+	memset(rxf->vlan_filter_table, 0,
+			(sizeof(u32) * (BFI_ENET_VLAN_ID_MAX / 32)));
+	rxf->vlan_filter_table[0] |= 1; /* for pure priority tagged frames */
+	rxf->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;
+
+	rxf->vlan_strip_status = q_config->vlan_strip_status;
+
+	bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+}
+
+static void
+bna_rxf_uninit(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac;
+
+	rxf->ucast_pending_set = 0;
+	rxf->ucast_active_set = 0;
+
+	while (!list_empty(&rxf->ucast_pending_add_q)) {
+		bfa_q_deq(&rxf->ucast_pending_add_q, &mac);
+		bfa_q_qe_init(&mac->qe);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+	}
+
+	if (rxf->ucast_pending_mac) {
+		bfa_q_qe_init(&rxf->ucast_pending_mac->qe);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod,
+			rxf->ucast_pending_mac);
+		rxf->ucast_pending_mac = NULL;
+	}
+
+	while (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &mac);
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	rxf->rxmode_pending = 0;
+	rxf->rxmode_pending_bitmask = 0;
+	if (rxf->rx->bna->promisc_rid == rxf->rx->rid)
+		rxf->rx->bna->promisc_rid = BFI_INVALID_RID;
+	if (rxf->rx->bna->default_mode_rid == rxf->rx->rid)
+		rxf->rx->bna->default_mode_rid = BFI_INVALID_RID;
+
+	rxf->rss_pending = 0;
+	rxf->vlan_strip_pending = false;
+
+	rxf->flags = 0;
+
+	rxf->rx = NULL;
+}
+
+static void
+bna_rx_cb_rxf_started(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_RXF_STARTED);
+}
+
+static void
+bna_rxf_start(struct bna_rxf *rxf)
+{
+	rxf->start_cbfn = bna_rx_cb_rxf_started;
+	rxf->start_cbarg = rxf->rx;
+	bfa_fsm_send_event(rxf, RXF_E_START);
+}
+
+static void
+bna_rx_cb_rxf_stopped(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_RXF_STOPPED);
+}
+
+static void
+bna_rxf_stop(struct bna_rxf *rxf)
+{
+	rxf->stop_cbfn = bna_rx_cb_rxf_stopped;
+	rxf->stop_cbarg = rxf->rx;
+	bfa_fsm_send_event(rxf, RXF_E_STOP);
+}
+
+static void
+bna_rxf_fail(struct bna_rxf *rxf)
+{
+	bfa_fsm_send_event(rxf, RXF_E_FAIL);
+}
+
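+/*
+ * Driver-facing filter configuration entry points.  Each one queues the
+ * requested change on the RXF and raises RXF_E_CONFIG; the cam_fltr
+ * callback (when supplied) runs after the change has been applied.
+ */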
+enum bna_cb_status
+bna_rx_ucast_set(struct bna_rx *rx, u8 *ucmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+
+	if (rxf->ucast_pending_mac == NULL) {
+		rxf->ucast_pending_mac =
+				bna_ucam_mod_mac_get(&rxf->rx->bna->ucam_mod);
+		if (rxf->ucast_pending_mac == NULL)
+			return BNA_CB_UCAST_CAM_FULL;
+		bfa_q_qe_init(&rxf->ucast_pending_mac->qe);
+	}
+
+	memcpy(rxf->ucast_pending_mac->addr, ucmac, ETH_ALEN);
+	rxf->ucast_pending_set = 1;
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+
+	return BNA_CB_SUCCESS;
+}
+
+enum bna_cb_status
+bna_rx_mcast_add(struct bna_rx *rx, u8 *addr,
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct bna_mac *mac;
+
+	/* Check if already added or pending addition */
+	if (bna_mac_find(&rxf->mcast_active_q, addr) ||
+		bna_mac_find(&rxf->mcast_pending_add_q, addr)) {
+		if (cbfn)
+			cbfn(rx->bna->bnad, rx);
+		return BNA_CB_SUCCESS;
+	}
+
+	mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
+	if (mac == NULL)
+		return BNA_CB_MCAST_LIST_FULL;
+	bfa_q_qe_init(&mac->qe);
+	memcpy(mac->addr, addr, ETH_ALEN);
+	list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
+
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+
+	return BNA_CB_SUCCESS;
+}
+
+enum bna_cb_status
+bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mclist,
+		     void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct list_head list_head;
+	struct list_head *qe;
+	u8 *mcaddr;
+	struct bna_mac *mac;
+	int i;
+
+	/* Allocate nodes */
+	INIT_LIST_HEAD(&list_head);
+	for (i = 0, mcaddr = mclist; i < count; i++) {
+		mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
+		if (mac == NULL)
+			goto err_return;
+		bfa_q_qe_init(&mac->qe);
+		memcpy(mac->addr, mcaddr, ETH_ALEN);
+		list_add_tail(&mac->qe, &list_head);
+
+		mcaddr += ETH_ALEN;
+	}
+
+	/* Purge the pending_add_q */
+	while (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	/* Schedule active_q entries for deletion */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
+	}
+
+	/* Add the new entries */
+	while (!list_empty(&list_head)) {
+		bfa_q_deq(&list_head, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
+	}
+
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+
+	return BNA_CB_SUCCESS;
+
+err_return:
+	while (!list_empty(&list_head)) {
+		bfa_q_deq(&list_head, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	return BNA_CB_MCAST_LIST_FULL;
+}
+
+void
+bna_rx_vlan_add(struct bna_rx *rx, int vlan_id)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int index = (vlan_id >> BFI_VLAN_WORD_SHIFT);
+	int bit = (1 << (vlan_id & BFI_VLAN_WORD_MASK));
+	int group_id = (vlan_id >> BFI_VLAN_BLOCK_SHIFT);
+
+	rxf->vlan_filter_table[index] |= bit;
+	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
+		rxf->vlan_pending_bitmask |= (1 << group_id);
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+void
+bna_rx_vlan_del(struct bna_rx *rx, int vlan_id)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int index = (vlan_id >> BFI_VLAN_WORD_SHIFT);
+	int bit = (1 << (vlan_id & BFI_VLAN_WORD_MASK));
+	int group_id = (vlan_id >> BFI_VLAN_BLOCK_SHIFT);
+
+	rxf->vlan_filter_table[index] &= ~bit;
+	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
+		rxf->vlan_pending_bitmask |= (1 << group_id);
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+static int
+bna_rxf_ucast_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+
+	/* Delete MAC addresses previously added */
+	if (!list_empty(&rxf->ucast_pending_del_q)) {
+		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		bna_bfi_ucast_req(rxf, mac, BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+		return 1;
+	}
+
+	/* Set default unicast MAC */
+	if (rxf->ucast_pending_set) {
+		rxf->ucast_pending_set = 0;
+		memcpy(rxf->ucast_active_mac.addr,
+			rxf->ucast_pending_mac->addr, ETH_ALEN);
+		rxf->ucast_active_set = 1;
+		bna_bfi_ucast_req(rxf, &rxf->ucast_active_mac,
+			BFI_ENET_H2I_MAC_UCAST_SET_REQ);
+		return 1;
+	}
+
+	/* Add additional MAC entries */
+	if (!list_empty(&rxf->ucast_pending_add_q)) {
+		bfa_q_deq(&rxf->ucast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		list_add_tail(&mac->qe, &rxf->ucast_active_q);
+		bna_bfi_ucast_req(rxf, mac, BFI_ENET_H2I_MAC_UCAST_ADD_REQ);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_ucast_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct list_head *qe;
+	struct bna_mac *mac;
+
+	/* Throw away ucast entries pending deletion */
+	while (!list_empty(&rxf->ucast_pending_del_q)) {
+		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		if (cleanup == BNA_SOFT_CLEANUP)
+			bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+		else {
+			bna_bfi_ucast_req(rxf, mac,
+				BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+			bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+			return 1;
+		}
+	}
+
+	/* Move active ucast entries to pending_add_q */
+	while (!list_empty(&rxf->ucast_active_q)) {
+		bfa_q_deq(&rxf->ucast_active_q, &qe);
+		bfa_q_qe_init(qe);
+		list_add_tail(qe, &rxf->ucast_pending_add_q);
+		if (cleanup == BNA_HARD_CLEANUP) {
+			mac = (struct bna_mac *)qe;
+			bna_bfi_ucast_req(rxf, mac,
+				BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+			return 1;
+		}
+	}
+
+	if (rxf->ucast_active_set) {
+		rxf->ucast_pending_set = 1;
+		rxf->ucast_active_set = 0;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_ucast_req(rxf, &rxf->ucast_active_mac,
+				BFI_ENET_H2I_MAC_UCAST_CLR_REQ);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+
+	/* Enable/disable promiscuous mode */
+	if (is_promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move promisc configuration from pending -> active */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active |= BNA_RXMODE_PROMISC;
+		bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_ENABLED);
+		return 1;
+	} else if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move promisc configuration from pending -> active */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		bna->promisc_rid = BFI_INVALID_RID;
+		bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct bna *bna = rxf->rx->bna;
+
+	/* Clear pending promisc mode disable */
+	if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		bna->promisc_rid = BFI_INVALID_RID;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+			return 1;
+		}
+	}
+
+	/* Move promisc mode config from active -> pending */
+	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
+		promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_allmulti_cfg_apply(struct bna_rxf *rxf)
+{
+	/* Enable/disable allmulti mode */
+	if (is_allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move allmulti configuration from pending -> active */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active |= BNA_RXMODE_ALLMULTI;
+		bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_DISABLED);
+		return 1;
+	} else if (is_allmulti_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* move allmulti configuration from pending -> active */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_allmulti_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	/* Clear pending allmulti mode disable */
+	if (is_allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+			return 1;
+		}
+	}
+
+	/* Move allmulti mode config from active -> pending */
+	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
+		allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_enable(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+	int ret = 0;
+
+	if (is_promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(rxf->rxmode_active & BNA_RXMODE_PROMISC)) {
+		/* Do nothing if pending enable or already enabled */
+	} else if (is_promisc_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending disable command */
+		promisc_inactive(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask);
+	} else {
+		/* Schedule enable */
+		promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		bna->promisc_rid = rxf->rx->rid;
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_promisc_disable(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+	int ret = 0;
+
+	if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(!(rxf->rxmode_active & BNA_RXMODE_PROMISC))) {
+		/* Do nothing if pending disable or already disabled */
+	} else if (is_promisc_enable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending enable command */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		bna->promisc_rid = BFI_INVALID_RID;
+	} else if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
+		/* Schedule disable */
+		promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_allmulti_enable(struct bna_rxf *rxf)
+{
+	int ret = 0;
+
+	if (is_allmulti_enable(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask) ||
+			(rxf->rxmode_active & BNA_RXMODE_ALLMULTI)) {
+		/* Do nothing if pending enable or already enabled */
+	} else if (is_allmulti_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending disable command */
+		allmulti_inactive(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask);
+	} else {
+		/* Schedule enable */
+		allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_allmulti_disable(struct bna_rxf *rxf)
+{
+	int ret = 0;
+
+	if (is_allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(!(rxf->rxmode_active & BNA_RXMODE_ALLMULTI))) {
+		/* Do nothing if pending disable or already disabled */
+	} else if (is_allmulti_enable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending enable command */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+	} else if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
+		/* Schedule disable */
+		allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf)
+{
+	if (rxf->vlan_strip_pending) {
+		rxf->vlan_strip_pending = false;
+		bna_bfi_vlan_strip_enable(rxf);
+		return 1;
+	}
+
+	return 0;
+}
+
+/**
+ * RX
+ */
+
+#define	BNA_GET_RXQS(qcfg)	(((qcfg)->rxp_type == BNA_RXP_SINGLE) ?	\
+	(qcfg)->num_paths : ((qcfg)->num_paths * 2))
+
+#define	SIZE_TO_PAGES(size)	(((size) >> PAGE_SHIFT) + ((((size) &\
+	(PAGE_SIZE - 1)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT))
+
+#define	call_rx_stop_cbfn(rx)						\
+do {									\
+	if ((rx)->stop_cbfn) {						\
+		void (*cbfn)(void *, struct bna_rx *);			\
+		void *cbarg;						\
+		cbfn = (rx)->stop_cbfn;					\
+		cbarg = (rx)->stop_cbarg;				\
+		(rx)->stop_cbfn = NULL;					\
+		(rx)->stop_cbarg = NULL;				\
+		cbfn(cbarg, rx);					\
+	}								\
+} while (0)
+
+#define bfi_enet_datapath_q_init(bfi_q, bna_qpt)			\
+do {									\
+	struct bna_dma_addr cur_q_addr =				\
+		*((struct bna_dma_addr *)((bna_qpt)->kv_qpt_ptr));	\
+	(bfi_q)->pg_tbl.a32.addr_lo = (bna_qpt)->hw_qpt_ptr.lsb;	\
+	(bfi_q)->pg_tbl.a32.addr_hi = (bna_qpt)->hw_qpt_ptr.msb;	\
+	(bfi_q)->first_entry.a32.addr_lo = cur_q_addr.lsb;		\
+	(bfi_q)->first_entry.a32.addr_hi = cur_q_addr.msb;		\
+	(bfi_q)->pages = htons((u16)(bna_qpt)->page_count);	\
+	(bfi_q)->page_sz = htons((u16)(bna_qpt)->page_size);\
+} while (0)
+
+static void bna_bfi_rx_enet_start(struct bna_rx *rx);
+static void bna_rx_enet_stop(struct bna_rx *rx);
+static void bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx);
+
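+/*
+ * RX state machine: start_wait/rxf_start_wait bring up the ENET datapath
+ * and then the RXF; rxf_stop_wait/stop_wait tear them down in reverse
+ * order; cleanup_wait waits for the driver to clean up the queues; failed
+ * and quiesce_wait handle IOC failure and a subsequent restart.
+ */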
+bfa_fsm_state_decl(bna_rx, stopped,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, start_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, rxf_start_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, started,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, rxf_stop_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, stop_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, cleanup_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, failed,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, quiesce_wait,
+	struct bna_rx, enum bna_rx_event);
+
+static void bna_rx_sm_stopped_entry(struct bna_rx *rx)
+{
+	call_rx_stop_cbfn(rx);
+}
+
+static void bna_rx_sm_stopped(struct bna_rx *rx,
+				enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_START:
+		bfa_fsm_set_state(rx, bna_rx_sm_start_wait);
+		break;
+
+	case RX_E_STOP:
+		call_rx_stop_cbfn(rx);
+		break;
+
+	case RX_E_FAIL:
+		/* no-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void bna_rx_sm_start_wait_entry(struct bna_rx *rx)
+{
+	bna_bfi_rx_enet_start(rx);
+}
+
+static void
+bna_rx_sm_stop_wait_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_FAIL:
+	case RX_E_STOPPED:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_STARTED:
+		bna_rx_enet_stop(rx);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void bna_rx_sm_start_wait(struct bna_rx *rx,
+				enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_stop_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
+
+	case RX_E_STARTED:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_start_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void bna_rx_sm_rxf_start_wait_entry(struct bna_rx *rx)
+{
+	rx->rx_post_cbfn(rx->bna->bnad, rx);
+	bna_rxf_start(&rx->rxf);
+}
+
+static void
+bna_rx_sm_rxf_stop_wait_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_rxf_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_RXF_STARTED:
+		bna_rxf_stop(&rx->rxf);
+		break;
+
+	case RX_E_RXF_STOPPED:
+		bfa_fsm_set_state(rx, bna_rx_sm_stop_wait);
+		bna_rx_enet_stop(rx);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_rx_sm_started_entry(struct bna_rx *rx)
+{
+	struct bna_rxp *rxp;
+	struct list_head *qe_rxp;
+	int is_regular = (rx->type == BNA_RX_T_REGULAR);
+
+	/* Start IB */
+	list_for_each(qe_rxp, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe_rxp;
+		bna_ib_start(rx->bna, &rxp->cq.ib, is_regular);
+	}
+
+	bna_ethport_cb_rx_started(&rx->bna->ethport);
+}
+
+static void
+bna_rx_sm_started(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
+		bna_ethport_cb_rx_stopped(&rx->bna->ethport);
+		bna_rxf_stop(&rx->rxf);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		bna_ethport_cb_rx_stopped(&rx->bna->ethport);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void bna_rx_sm_rxf_start_wait(struct bna_rx *rx,
+				enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_RXF_STARTED:
+		bfa_fsm_set_state(rx, bna_rx_sm_started);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_rx_sm_cleanup_wait_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_cleanup_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_FAIL:
+	case RX_E_RXF_STOPPED:
+		/* No-op */
+		break;
+
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_rx_sm_failed_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_failed(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_START:
+		bfa_fsm_set_state(rx, bna_rx_sm_quiesce_wait);
+		break;
+
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		break;
+
+	case RX_E_FAIL:
+	case RX_E_RXF_STARTED:
+	case RX_E_RXF_STOPPED:
+		/* No-op */
+		break;
+
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_rx_sm_quiesce_wait_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_quiesce_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		break;
+
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_start_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
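+/*
+ * Build the RX_CFG_SET request for firmware: one queue-set descriptor per
+ * rx path (small/large RxQ and CQ page tables, IB index address and MSI-X
+ * vector) plus the interrupt-moderation and HDS/VLAN-strip settings.
+ */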
+static void
+bna_bfi_rx_enet_start(struct bna_rx *rx)
+{
+	struct bfi_enet_rx_cfg_req *cfg_req = &rx->bfi_enet_cmd.cfg_req;
+	struct bna_rxp *rxp = NULL;
+	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	struct list_head *rxp_qe;
+	int i;
+
+	bfi_msgq_mhdr_set(cfg_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_CFG_SET_REQ, 0, rx->rid);
+	cfg_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rx_cfg_req)));
+
+	cfg_req->num_queue_sets = rx->num_paths;
+	for (i = 0, rxp_qe = bfa_q_first(&rx->rxp_q);
+		i < rx->num_paths;
+		i++, rxp_qe = bfa_q_next(rxp_qe)) {
+		rxp = (struct bna_rxp *)rxp_qe;
+
+		GET_RXQS(rxp, q0, q1);
+		switch (rxp->type) {
+		case BNA_RXP_SLR:
+		case BNA_RXP_HDS:
+			/* Small RxQ */
+			bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].qs.q,
+						&q1->qpt);
+			cfg_req->q_cfg[i].qs.rx_buffer_size =
+				htons((u16)q1->buffer_size);
+			/* Fall through */
+
+		case BNA_RXP_SINGLE:
+			/* Large/Single RxQ */
+			bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].ql.q,
+						&q0->qpt);
+			q0->buffer_size =
+				bna_enet_mtu_get(&rx->bna->enet);
+			cfg_req->q_cfg[i].ql.rx_buffer_size =
+				htons((u16)q0->buffer_size);
+			break;
+
+		default:
+			BUG_ON(1);
+		}
+
+		bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].cq.q,
+					&rxp->cq.qpt);
+
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_lo =
+			rxp->cq.ib.ib_seg_host_addr.lsb;
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_hi =
+			rxp->cq.ib.ib_seg_host_addr.msb;
+		cfg_req->q_cfg[i].ib.intr.msix_index =
+			htons((u16)rxp->cq.ib.intr_vector);
+	}
+
+	cfg_req->ib_cfg.int_pkt_dma = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.int_enabled = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_pkt_enabled = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.continuous_coalescing = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.msix = (rxp->cq.ib.intr_type == BNA_INTR_T_MSIX)
+				? BNA_STATUS_T_ENABLED :
+				BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.coalescing_timeout =
+			htonl((u32)rxp->cq.ib.coalescing_timeo);
+	cfg_req->ib_cfg.inter_pkt_timeout =
+			htonl((u32)rxp->cq.ib.interpkt_timeo);
+	cfg_req->ib_cfg.inter_pkt_count = (u8)rxp->cq.ib.interpkt_count;
+
+	switch (rxp->type) {
+	case BNA_RXP_SLR:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_LARGE_SMALL;
+		break;
+
+	case BNA_RXP_HDS:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_HDS;
+		cfg_req->rx_cfg.hds.type = rx->hds_cfg.hdr_type;
+		cfg_req->rx_cfg.hds.force_offset = rx->hds_cfg.forced_offset;
+		cfg_req->rx_cfg.hds.max_header_size = rx->hds_cfg.forced_offset;
+		break;
+
+	case BNA_RXP_SINGLE:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_SINGLE;
+		break;
+
+	default:
+		BUG_ON(1);
+	}
+	cfg_req->rx_cfg.strip_vlan = rx->rxf.vlan_strip_status;
+
+	bfa_msgq_cmd_set(&rx->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rx_cfg_req), &cfg_req->mh);
+	bfa_msgq_cmd_post(&rx->bna->msgq, &rx->msgq_cmd);
+}
+
+static void
+bna_bfi_rx_enet_stop(struct bna_rx *rx)
+{
+	struct bfi_enet_req *req = &rx->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_CFG_CLR_REQ, 0, rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));
+	bfa_msgq_cmd_set(&rx->msgq_cmd, NULL, NULL, sizeof(struct bfi_enet_req),
+		&req->mh);
+	bfa_msgq_cmd_post(&rx->bna->msgq, &rx->msgq_cmd);
+}
+
+static void
+bna_rx_enet_stop(struct bna_rx *rx)
+{
+	struct bna_rxp *rxp;
+	struct list_head *qe_rxp;
+
+	/* Stop IB */
+	list_for_each(qe_rxp, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe_rxp;
+		bna_ib_stop(rx->bna, &rxp->cq.ib);
+	}
+
+	bna_bfi_rx_enet_stop(rx);
+}
+
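+/*
+ * Verify that enough free Rx, RxP and RxQ objects remain for the
+ * requested configuration.
+ */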
+static int
+bna_rx_res_check(struct bna_rx_mod *rx_mod, struct bna_rx_config *rx_cfg)
+{
+	if ((rx_mod->rx_free_count == 0) ||
+		(rx_mod->rxp_free_count == 0) ||
+		(rx_mod->rxq_free_count == 0))
+		return 0;
+
+	if (rx_cfg->rxp_type == BNA_RXP_SINGLE) {
+		if ((rx_mod->rxp_free_count < rx_cfg->num_paths) ||
+			(rx_mod->rxq_free_count < rx_cfg->num_paths))
+				return 0;
+	} else {
+		if ((rx_mod->rxp_free_count < rx_cfg->num_paths) ||
+			(rx_mod->rxq_free_count < (2 * rx_cfg->num_paths)))
+			return 0;
+	}
+
+	return 1;
+}
+
+static struct bna_rxq *
+bna_rxq_get(struct bna_rx_mod *rx_mod)
+{
+	struct bna_rxq *rxq = NULL;
+	struct list_head	*qe = NULL;
+
+	bfa_q_deq(&rx_mod->rxq_free_q, &qe);
+	rx_mod->rxq_free_count--;
+	rxq = (struct bna_rxq *)qe;
+	bfa_q_qe_init(&rxq->qe);
+
+	return rxq;
+}
+
+static void
+bna_rxq_put(struct bna_rx_mod *rx_mod, struct bna_rxq *rxq)
+{
+	bfa_q_qe_init(&rxq->qe);
+	list_add_tail(&rxq->qe, &rx_mod->rxq_free_q);
+	rx_mod->rxq_free_count++;
+}
+
+static struct bna_rxp *
+bna_rxp_get(struct bna_rx_mod *rx_mod)
+{
+	struct list_head	*qe = NULL;
+	struct bna_rxp *rxp = NULL;
+
+	bfa_q_deq(&rx_mod->rxp_free_q, &qe);
+	rx_mod->rxp_free_count--;
+	rxp = (struct bna_rxp *)qe;
+	bfa_q_qe_init(&rxp->qe);
+
+	return rxp;
+}
+
+static void
+bna_rxp_put(struct bna_rx_mod *rx_mod, struct bna_rxp *rxp)
+{
+	bfa_q_qe_init(&rxp->qe);
+	list_add_tail(&rxp->qe, &rx_mod->rxp_free_q);
+	rx_mod->rxp_free_count++;
+}
+
+static struct bna_rx *
+bna_rx_get(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
+{
+	struct list_head	*qe = NULL;
+	struct bna_rx *rx = NULL;
+
+	if (type == BNA_RX_T_REGULAR)
+		bfa_q_deq(&rx_mod->rx_free_q, &qe);
+	else
+		bfa_q_deq_tail(&rx_mod->rx_free_q, &qe);
+
+	rx_mod->rx_free_count--;
+	rx = (struct bna_rx *)qe;
+	bfa_q_qe_init(&rx->qe);
+	list_add_tail(&rx->qe, &rx_mod->rx_active_q);
+	rx->type = type;
+
+	return rx;
+}
+
+static void
+bna_rx_put(struct bna_rx_mod *rx_mod, struct bna_rx *rx)
+{
+	struct list_head *prev_qe = NULL;
+	struct list_head *qe;
+
+	bfa_q_qe_init(&rx->qe);
+
+	list_for_each(qe, &rx_mod->rx_free_q) {
+		if (((struct bna_rx *)qe)->rid < rx->rid)
+			prev_qe = qe;
+		else
+			break;
+	}
+
+	if (prev_qe == NULL) {
+		/* This is the first entry */
+		bfa_q_enq_head(&rx_mod->rx_free_q, &rx->qe);
+	} else if (bfa_q_next(prev_qe) == &rx_mod->rx_free_q) {
+		/* This is the last entry */
+		list_add_tail(&rx->qe, &rx_mod->rx_free_q);
+	} else {
+		/* Somewhere in the middle */
+		bfa_q_next(&rx->qe) = bfa_q_next(prev_qe);
+		bfa_q_prev(&rx->qe) = prev_qe;
+		bfa_q_next(prev_qe) = &rx->qe;
+		bfa_q_prev(bfa_q_next(&rx->qe)) = &rx->qe;
+	}
+
+	rx_mod->rx_free_count++;
+}
+
+static void
+bna_rxp_add_rxqs(struct bna_rxp *rxp, struct bna_rxq *q0,
+		struct bna_rxq *q1)
+{
+	switch (rxp->type) {
+	case BNA_RXP_SINGLE:
+		rxp->rxq.single.only = q0;
+		rxp->rxq.single.reserved = NULL;
+		break;
+	case BNA_RXP_SLR:
+		rxp->rxq.slr.large = q0;
+		rxp->rxq.slr.small = q1;
+		break;
+	case BNA_RXP_HDS:
+		rxp->rxq.hds.data = q0;
+		rxp->rxq.hds.hdr = q1;
+		break;
+	default:
+		break;
+	}
+}
+
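+/*
+ * Set up the RxQ page table: the hardware QPT entries get the DMA address
+ * of each queue page, while the RCB shadow QPT keeps the kernel virtual
+ * addresses for the driver's own use.
+ */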
+static void
+bna_rxq_qpt_setup(struct bna_rxq *rxq,
+		struct bna_rxp *rxp,
+		u32 page_count,
+		u32 page_size,
+		struct bna_mem_descr *qpt_mem,
+		struct bna_mem_descr *swqpt_mem,
+		struct bna_mem_descr *page_mem)
+{
+	int	i;
+
+	rxq->qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
+	rxq->qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
+	rxq->qpt.kv_qpt_ptr = qpt_mem->kva;
+	rxq->qpt.page_count = page_count;
+	rxq->qpt.page_size = page_size;
+
+	rxq->rcb->sw_qpt = (void **) swqpt_mem->kva;
+
+	for (i = 0; i < rxq->qpt.page_count; i++) {
+		rxq->rcb->sw_qpt[i] = page_mem[i].kva;
+		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].lsb =
+			page_mem[i].dma.lsb;
+		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].msb =
+			page_mem[i].dma.msb;
+	}
+}
+
+static void
+bna_rxp_cqpt_setup(struct bna_rxp *rxp,
+		u32 page_count,
+		u32 page_size,
+		struct bna_mem_descr *qpt_mem,
+		struct bna_mem_descr *swqpt_mem,
+		struct bna_mem_descr *page_mem)
+{
+	int	i;
+
+	rxp->cq.qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
+	rxp->cq.qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
+	rxp->cq.qpt.kv_qpt_ptr = qpt_mem->kva;
+	rxp->cq.qpt.page_count = page_count;
+	rxp->cq.qpt.page_size = page_size;
+
+	rxp->cq.ccb->sw_qpt = (void **) swqpt_mem->kva;
+
+	for (i = 0; i < rxp->cq.qpt.page_count; i++) {
+		rxp->cq.ccb->sw_qpt[i] = page_mem[i].kva;
+
+		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].lsb =
+			page_mem[i].dma.lsb;
+		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].msb =
+			page_mem[i].dma.msb;
+	}
+}
+
+static void
+bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx)
+{
+	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
+
+	bfa_wc_down(&rx_mod->rx_stop_wc);
+}
+
+static void
+bna_rx_mod_cb_rx_stopped_all(void *arg)
+{
+	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
+
+	if (rx_mod->stop_cbfn)
+		rx_mod->stop_cbfn(&rx_mod->bna->enet);
+	rx_mod->stop_cbfn = NULL;
+}
+
+static void
+bna_rx_start(struct bna_rx *rx)
+{
+	rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+	if (rx->rx_flags & BNA_RX_F_ENABLED)
+		bfa_fsm_send_event(rx, RX_E_START);
+}
+
+static void
+bna_rx_stop(struct bna_rx *rx)
+{
+	rx->rx_flags &= ~BNA_RX_F_ENET_STARTED;
+	if (rx->fsm == (bfa_fsm_t) bna_rx_sm_stopped)
+		bna_rx_mod_cb_rx_stopped(&rx->bna->rx_mod, rx);
+	else {
+		rx->stop_cbfn = bna_rx_mod_cb_rx_stopped;
+		rx->stop_cbarg = &rx->bna->rx_mod;
+		bfa_fsm_send_event(rx, RX_E_STOP);
+	}
+}
+
+static void
+bna_rx_fail(struct bna_rx *rx)
+{
+	/* Indicate Enet is no longer started, and fail the Rx */
+	rx->rx_flags &= ~BNA_RX_F_ENET_STARTED;
+	bfa_fsm_send_event(rx, RX_E_FAIL);
+}
+
+void
+bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
+{
+	struct bna_rx *rx;
+	struct list_head *qe;
+
+	rx_mod->flags |= BNA_RX_MOD_F_ENET_STARTED;
+	if (type == BNA_RX_T_LOOPBACK)
+		rx_mod->flags |= BNA_RX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		rx = (struct bna_rx *)qe;
+		if (rx->type == type)
+			bna_rx_start(rx);
+	}
+}
+
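+/*
+ * Stop every Rx of the given type.  A wait counter tracks the outstanding
+ * Rx objects; once all of them report stopped, the saved enet callback is
+ * invoked from bna_rx_mod_cb_rx_stopped_all().
+ */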
+void
+bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
+{
+	struct bna_rx *rx;
+	struct list_head *qe;
+
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_STARTED;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_LOOPBACK;
+
+	rx_mod->stop_cbfn = bna_enet_cb_rx_stopped;
+
+	bfa_wc_init(&rx_mod->rx_stop_wc, bna_rx_mod_cb_rx_stopped_all, rx_mod);
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		rx = (struct bna_rx *)qe;
+		if (rx->type == type) {
+			bfa_wc_up(&rx_mod->rx_stop_wc);
+			bna_rx_stop(rx);
+		}
+	}
+
+	bfa_wc_wait(&rx_mod->rx_stop_wc);
+}
+
+void
+bna_rx_mod_fail(struct bna_rx_mod *rx_mod)
+{
+	struct bna_rx *rx;
+	struct list_head *qe;
+
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_STARTED;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		rx = (struct bna_rx *)qe;
+		bna_rx_fail(rx);
+	}
+}
+
+void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
+			struct bna_res_info *res_info)
+{
+	int	index;
+	struct bna_rx *rx_ptr;
+	struct bna_rxp *rxp_ptr;
+	struct bna_rxq *rxq_ptr;
+
+	rx_mod->bna = bna;
+	rx_mod->flags = 0;
+
+	rx_mod->rx = (struct bna_rx *)
+		res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.mdl[0].kva;
+	rx_mod->rxp = (struct bna_rxp *)
+		res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mdl[0].kva;
+	rx_mod->rxq = (struct bna_rxq *)
+		res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	/* Initialize the queues */
+	INIT_LIST_HEAD(&rx_mod->rx_free_q);
+	rx_mod->rx_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rxq_free_q);
+	rx_mod->rxq_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rxp_free_q);
+	rx_mod->rxp_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rx_active_q);
+
+	/* Build RX queues */
+	for (index = 0; index < bna->ioceth.attr.num_rxp; index++) {
+		rx_ptr = &rx_mod->rx[index];
+
+		bfa_q_qe_init(&rx_ptr->qe);
+		INIT_LIST_HEAD(&rx_ptr->rxp_q);
+		rx_ptr->bna = NULL;
+		rx_ptr->rid = index;
+		rx_ptr->stop_cbfn = NULL;
+		rx_ptr->stop_cbarg = NULL;
+
+		list_add_tail(&rx_ptr->qe, &rx_mod->rx_free_q);
+		rx_mod->rx_free_count++;
+	}
+
+	/* build RX-path queue */
+	for (index = 0; index < bna->ioceth.attr.num_rxp; index++) {
+		rxp_ptr = &rx_mod->rxp[index];
+		bfa_q_qe_init(&rxp_ptr->qe);
+		list_add_tail(&rxp_ptr->qe, &rx_mod->rxp_free_q);
+		rx_mod->rxp_free_count++;
+	}
+
+	/* build RXQ queue */
+	for (index = 0; index < (bna->ioceth.attr.num_rxp * 2); index++) {
+		rxq_ptr = &rx_mod->rxq[index];
+		bfa_q_qe_init(&rxq_ptr->qe);
+		list_add_tail(&rxq_ptr->qe, &rx_mod->rxq_free_q);
+		rx_mod->rxq_free_count++;
+	}
+}
+
+void
+bna_rx_mod_uninit(struct bna_rx_mod *rx_mod)
+{
+	struct list_head		*qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &rx_mod->rx_free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &rx_mod->rxp_free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &rx_mod->rxq_free_q)
+		i++;
+
+	rx_mod->bna = NULL;
+}
+
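+/*
+ * Firmware response to RX_CFG_SET: record the hardware queue ids, map the
+ * doorbell registers for each CQ/RxQ and reset the producer/consumer
+ * indexes before raising RX_E_STARTED.
+ */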
+void
+bna_bfi_rx_enet_start_rsp(struct bna_rx *rx, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_rx_cfg_rsp *cfg_rsp = &rx->bfi_enet_cmd.cfg_rsp;
+	struct bna_rxp *rxp = NULL;
+	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	struct list_head *rxp_qe;
+	int i;
+
+	bfa_msgq_rsp_copy(&rx->bna->msgq, (u8 *)cfg_rsp,
+		sizeof(struct bfi_enet_rx_cfg_rsp));
+
+	rx->hw_id = cfg_rsp->hw_id;
+
+	for (i = 0, rxp_qe = bfa_q_first(&rx->rxp_q);
+		i < rx->num_paths;
+		i++, rxp_qe = bfa_q_next(rxp_qe)) {
+		rxp = (struct bna_rxp *)rxp_qe;
+		GET_RXQS(rxp, q0, q1);
+
+		/* Setup doorbells */
+		rxp->cq.ccb->i_dbell->doorbell_addr =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].i_dbell);
+		rxp->hw_id = cfg_rsp->q_handles[i].hw_cqid;
+		q0->rcb->q_dbell =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].ql_dbell);
+		q0->hw_id = cfg_rsp->q_handles[i].hw_lqid;
+		if (q1) {
+			q1->rcb->q_dbell =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].qs_dbell);
+			q1->hw_id = cfg_rsp->q_handles[i].hw_sqid;
+		}
+
+		/* Initialize producer/consumer indexes */
+		(*rxp->cq.ccb->hw_producer_index) = 0;
+		rxp->cq.ccb->producer_index = 0;
+		q0->rcb->producer_index = q0->rcb->consumer_index = 0;
+		if (q1)
+			q1->rcb->producer_index = q1->rcb->consumer_index = 0;
+	}
+
+	bfa_fsm_send_event(rx, RX_E_STARTED);
+}
+
+void
+bna_bfi_rx_enet_stop_rsp(struct bna_rx *rx, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(rx, RX_E_STOPPED);
+}
+
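+/*
+ * Fill in the memory and interrupt resources needed for one Rx object:
+ * CCB/RCB structures, completion/data/header queue page tables and pages,
+ * IB index segments, the RIT, and one MSI-X vector per path.
+ */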
+void
+bna_rx_res_req(struct bna_rx_config *q_cfg, struct bna_res_info *res_info)
+{
+	u32 cq_size, hq_size, dq_size;
+	u32 cpage_count, hpage_count, dpage_count;
+	struct bna_mem_info *mem_info;
+	u32 cq_depth;
+	u32 hq_depth;
+	u32 dq_depth;
+
+	dq_depth = q_cfg->q_depth;
+	hq_depth = ((q_cfg->rxp_type == BNA_RXP_SINGLE) ? 0 : q_cfg->q_depth);
+	cq_depth = dq_depth + hq_depth;
+
+	BNA_TO_POWER_OF_2_HIGH(cq_depth);
+	cq_size = cq_depth * BFI_CQ_WI_SIZE;
+	cq_size = ALIGN(cq_size, PAGE_SIZE);
+	cpage_count = SIZE_TO_PAGES(cq_size);
+
+	BNA_TO_POWER_OF_2_HIGH(dq_depth);
+	dq_size = dq_depth * BFI_RXQ_WI_SIZE;
+	dq_size = ALIGN(dq_size, PAGE_SIZE);
+	dpage_count = SIZE_TO_PAGES(dq_size);
+
+	if (BNA_RXP_SINGLE != q_cfg->rxp_type) {
+		BNA_TO_POWER_OF_2_HIGH(hq_depth);
+		hq_size = hq_depth * BFI_RXQ_WI_SIZE;
+		hq_size = ALIGN(hq_size, PAGE_SIZE);
+		hpage_count = SIZE_TO_PAGES(hq_size);
+	} else
+		hpage_count = 0;
+
+	res_info[BNA_RX_RES_MEM_T_CCB].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = sizeof(struct bna_ccb);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_RCB].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = sizeof(struct bna_rcb);
+	mem_info->num = BNA_GET_RXQS(q_cfg);
+
+	res_info[BNA_RX_RES_MEM_T_CQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = cpage_count * sizeof(struct bna_dma_addr);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_CSWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = cpage_count * sizeof(void *);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = PAGE_SIZE;
+	mem_info->num = cpage_count * q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_DQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = dpage_count * sizeof(struct bna_dma_addr);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_DSWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = dpage_count * sizeof(void *);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_DPAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = PAGE_SIZE;
+	mem_info->num = dpage_count * q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_HQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = hpage_count * sizeof(struct bna_dma_addr);
+	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
+
+	res_info[BNA_RX_RES_MEM_T_HSWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = hpage_count * sizeof(void *);
+	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
+
+	res_info[BNA_RX_RES_MEM_T_HPAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = (hpage_count ? PAGE_SIZE : 0);
+	mem_info->num = (hpage_count ? (hpage_count * q_cfg->num_paths) : 0);
+
+	res_info[BNA_RX_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = BFI_IBIDX_SIZE;
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_RIT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_RIT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = BFI_ENET_RSS_RIT_MAX;
+	mem_info->num = 1;
+
+	res_info[BNA_RX_RES_T_INTR].res_type = BNA_RES_T_INTR;
+	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.intr_type = BNA_INTR_T_MSIX;
+	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.num = q_cfg->num_paths;
+}
+
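+/*
+ * Create an Rx object: take Rx/RxP/RxQ structures from the free pools and
+ * wire up the queues, IBs and page tables using the memory reserved
+ * through bna_rx_res_req().
+ */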
+struct bna_rx *
+bna_rx_create(struct bna *bna, struct bnad *bnad,
+		struct bna_rx_config *rx_cfg,
+		struct bna_rx_event_cbfn *rx_cbfn,
+		struct bna_res_info *res_info,
+		void *priv)
+{
+	struct bna_rx_mod *rx_mod = &bna->rx_mod;
+	struct bna_rx *rx;
+	struct bna_rxp *rxp;
+	struct bna_rxq *q0;
+	struct bna_rxq *q1;
+	struct bna_intr_info *intr_info;
+	u32 page_count;
+	struct bna_mem_descr *ccb_mem;
+	struct bna_mem_descr *rcb_mem;
+	struct bna_mem_descr *unmapq_mem;
+	struct bna_mem_descr *cqpt_mem;
+	struct bna_mem_descr *cswqpt_mem;
+	struct bna_mem_descr *cpage_mem;
+	struct bna_mem_descr *hqpt_mem;
+	struct bna_mem_descr *dqpt_mem;
+	struct bna_mem_descr *hsqpt_mem;
+	struct bna_mem_descr *dsqpt_mem;
+	struct bna_mem_descr *hpage_mem;
+	struct bna_mem_descr *dpage_mem;
+	int i, cpage_idx = 0, dpage_idx = 0, hpage_idx = 0;
+	int dpage_count, hpage_count, rcb_idx;
+
+	if (!bna_rx_res_check(rx_mod, rx_cfg))
+		return NULL;
+
+	intr_info = &res_info[BNA_RX_RES_T_INTR].res_u.intr_info;
+	ccb_mem = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info.mdl[0];
+	rcb_mem = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info.mdl[0];
+	unmapq_mem = &res_info[BNA_RX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[0];
+	cqpt_mem = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info.mdl[0];
+	cswqpt_mem = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info.mdl[0];
+	cpage_mem = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.mdl[0];
+	hqpt_mem = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info.mdl[0];
+	dqpt_mem = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info.mdl[0];
+	hsqpt_mem = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info.mdl[0];
+	dsqpt_mem = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info.mdl[0];
+	hpage_mem = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.mdl[0];
+	dpage_mem = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.mdl[0];
+
+	page_count = res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.num /
+			rx_cfg->num_paths;
+
+	dpage_count = res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.num /
+			rx_cfg->num_paths;
+
+	hpage_count = res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.num /
+			rx_cfg->num_paths;
+
+	rx = bna_rx_get(rx_mod, rx_cfg->rx_type);
+	rx->bna = bna;
+	rx->rx_flags = 0;
+	INIT_LIST_HEAD(&rx->rxp_q);
+	rx->stop_cbfn = NULL;
+	rx->stop_cbarg = NULL;
+	rx->priv = priv;
+
+	rx->rcb_setup_cbfn = rx_cbfn->rcb_setup_cbfn;
+	rx->rcb_destroy_cbfn = rx_cbfn->rcb_destroy_cbfn;
+	rx->ccb_setup_cbfn = rx_cbfn->ccb_setup_cbfn;
+	rx->ccb_destroy_cbfn = rx_cbfn->ccb_destroy_cbfn;
+	/* Following callbacks are mandatory */
+	rx->rx_cleanup_cbfn = rx_cbfn->rx_cleanup_cbfn;
+	rx->rx_post_cbfn = rx_cbfn->rx_post_cbfn;
+
+	if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_ENET_STARTED) {
+		switch (rx->type) {
+		case BNA_RX_T_REGULAR:
+			if (!(rx->bna->rx_mod.flags &
+				BNA_RX_MOD_F_ENET_LOOPBACK))
+				rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+			break;
+		case BNA_RX_T_LOOPBACK:
+			if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_ENET_LOOPBACK)
+				rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+			break;
+		}
+	}
+
+	rx->num_paths = rx_cfg->num_paths;
+	for (i = 0, rcb_idx = 0; i < rx->num_paths; i++) {
+		rxp = bna_rxp_get(rx_mod);
+		list_add_tail(&rxp->qe, &rx->rxp_q);
+		rxp->type = rx_cfg->rxp_type;
+		rxp->rx = rx;
+		rxp->cq.rx = rx;
+
+		q0 = bna_rxq_get(rx_mod);
+		if (BNA_RXP_SINGLE == rx_cfg->rxp_type)
+			q1 = NULL;
+		else
+			q1 = bna_rxq_get(rx_mod);
+
+		if (1 == intr_info->num)
+			rxp->vector = intr_info->idl[0].vector;
+		else
+			rxp->vector = intr_info->idl[i].vector;
+
+		/* Setup IB */
+
+		rxp->cq.ib.ib_seg_host_addr.lsb =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
+		rxp->cq.ib.ib_seg_host_addr.msb =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
+		rxp->cq.ib.ib_seg_host_addr_kva =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
+		rxp->cq.ib.intr_type = intr_info->intr_type;
+		if (intr_info->intr_type == BNA_INTR_T_MSIX)
+			rxp->cq.ib.intr_vector = rxp->vector;
+		else
+			rxp->cq.ib.intr_vector = (1 << rxp->vector);
+		rxp->cq.ib.coalescing_timeo = rx_cfg->coalescing_timeo;
+		rxp->cq.ib.interpkt_count = BFI_RX_INTERPKT_COUNT;
+		rxp->cq.ib.interpkt_timeo = BFI_RX_INTERPKT_TIMEO;
+
+		bna_rxp_add_rxqs(rxp, q0, q1);
+
+		/* Setup large Q */
+
+		q0->rx = rx;
+		q0->rxp = rxp;
+
+		q0->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
+		q0->rcb->unmap_q = (void *)unmapq_mem[rcb_idx].kva;
+		rcb_idx++;
+		q0->rcb->q_depth = rx_cfg->q_depth;
+		q0->rcb->rxq = q0;
+		q0->rcb->bnad = bna->bnad;
+		q0->rcb->id = 0;
+		q0->rx_packets = q0->rx_bytes = 0;
+		q0->rx_packets_with_error = q0->rxbuf_alloc_failed = 0;
+
+		bna_rxq_qpt_setup(q0, rxp, dpage_count, PAGE_SIZE,
+			&dqpt_mem[i], &dsqpt_mem[i], &dpage_mem[dpage_idx]);
+		q0->rcb->page_idx = dpage_idx;
+		q0->rcb->page_count = dpage_count;
+		dpage_idx += dpage_count;
+
+		if (rx->rcb_setup_cbfn)
+			rx->rcb_setup_cbfn(bnad, q0->rcb);
+
+		/* Setup small Q */
+
+		if (q1) {
+			q1->rx = rx;
+			q1->rxp = rxp;
+
+			q1->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
+			q1->rcb->unmap_q = (void *)unmapq_mem[rcb_idx].kva;
+			rcb_idx++;
+			q1->rcb->q_depth = rx_cfg->q_depth;
+			q1->rcb->rxq = q1;
+			q1->rcb->bnad = bna->bnad;
+			q1->rcb->id = 1;
+			q1->buffer_size = (rx_cfg->rxp_type == BNA_RXP_HDS) ?
+					rx_cfg->hds_config.forced_offset
+					: rx_cfg->small_buff_size;
+			q1->rx_packets = q1->rx_bytes = 0;
+			q1->rx_packets_with_error = q1->rxbuf_alloc_failed = 0;
+
+			bna_rxq_qpt_setup(q1, rxp, hpage_count, PAGE_SIZE,
+				&hqpt_mem[i], &hsqpt_mem[i],
+				&hpage_mem[hpage_idx]);
+			q1->rcb->page_idx = hpage_idx;
+			q1->rcb->page_count = hpage_count;
+			hpage_idx += hpage_count;
+
+			if (rx->rcb_setup_cbfn)
+				rx->rcb_setup_cbfn(bnad, q1->rcb);
+		}
+
+		/* Setup CQ */
+
+		rxp->cq.ccb = (struct bna_ccb *) ccb_mem[i].kva;
+		rxp->cq.ccb->q_depth =	rx_cfg->q_depth +
+					((rx_cfg->rxp_type == BNA_RXP_SINGLE) ?
+					0 : rx_cfg->q_depth);
+		rxp->cq.ccb->cq = &rxp->cq;
+		rxp->cq.ccb->rcb[0] = q0->rcb;
+		q0->rcb->ccb = rxp->cq.ccb;
+		if (q1) {
+			rxp->cq.ccb->rcb[1] = q1->rcb;
+			q1->rcb->ccb = rxp->cq.ccb;
+		}
+		rxp->cq.ccb->hw_producer_index =
+			(u32 *)rxp->cq.ib.ib_seg_host_addr_kva;
+		rxp->cq.ccb->i_dbell = &rxp->cq.ib.door_bell;
+		rxp->cq.ccb->intr_type = rxp->cq.ib.intr_type;
+		rxp->cq.ccb->intr_vector = rxp->cq.ib.intr_vector;
+		rxp->cq.ccb->rx_coalescing_timeo =
+			rxp->cq.ib.coalescing_timeo;
+		rxp->cq.ccb->pkt_rate.small_pkt_cnt = 0;
+		rxp->cq.ccb->pkt_rate.large_pkt_cnt = 0;
+		rxp->cq.ccb->bnad = bna->bnad;
+		rxp->cq.ccb->id = i;
+
+		bna_rxp_cqpt_setup(rxp, page_count, PAGE_SIZE,
+			&cqpt_mem[i], &cswqpt_mem[i], &cpage_mem[cpage_idx]);
+		rxp->cq.ccb->page_idx = cpage_idx;
+		rxp->cq.ccb->page_count = page_count;
+		cpage_idx += page_count;
+
+		if (rx->ccb_setup_cbfn)
+			rx->ccb_setup_cbfn(bnad, rxp->cq.ccb);
+	}
+
+	rx->hds_cfg = rx_cfg->hds_config;
+
+	bna_rxf_init(&rx->rxf, rx, rx_cfg, res_info);
+
+	bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+
+	rx_mod->rid_mask |= (1 << rx->rid);
+
+	return rx;
+}
+
+void
+bna_rx_destroy(struct bna_rx *rx)
+{
+	struct bna_rx_mod *rx_mod = &rx->bna->rx_mod;
+	struct bna_rxq *q0 = NULL;
+	struct bna_rxq *q1 = NULL;
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+
+	bna_rxf_uninit(&rx->rxf);
+
+	while (!list_empty(&rx->rxp_q)) {
+		bfa_q_deq(&rx->rxp_q, &rxp);
+		GET_RXQS(rxp, q0, q1);
+		if (rx->rcb_destroy_cbfn)
+			rx->rcb_destroy_cbfn(rx->bna->bnad, q0->rcb);
+		q0->rcb = NULL;
+		q0->rxp = NULL;
+		q0->rx = NULL;
+		bna_rxq_put(rx_mod, q0);
+
+		if (q1) {
+			if (rx->rcb_destroy_cbfn)
+				rx->rcb_destroy_cbfn(rx->bna->bnad, q1->rcb);
+			q1->rcb = NULL;
+			q1->rxp = NULL;
+			q1->rx = NULL;
+			bna_rxq_put(rx_mod, q1);
+		}
+		rxp->rxq.slr.large = NULL;
+		rxp->rxq.slr.small = NULL;
+
+		if (rx->ccb_destroy_cbfn)
+			rx->ccb_destroy_cbfn(rx->bna->bnad, rxp->cq.ccb);
+		rxp->cq.ccb = NULL;
+		rxp->rx = NULL;
+		bna_rxp_put(rx_mod, rxp);
+	}
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		if (qe == &rx->qe) {
+			list_del(&rx->qe);
+			bfa_q_qe_init(&rx->qe);
+			break;
+		}
+	}
+
+	rx_mod->rid_mask &= ~(1 << rx->rid);
+
+	rx->bna = NULL;
+	rx->priv = NULL;
+	bna_rx_put(rx_mod, rx);
+}
+
+void
+bna_rx_enable(struct bna_rx *rx)
+{
+	if (rx->fsm != (bfa_sm_t)bna_rx_sm_stopped)
+		return;
+
+	rx->rx_flags |= BNA_RX_F_ENABLED;
+	if (rx->rx_flags & BNA_RX_F_ENET_STARTED)
+		bfa_fsm_send_event(rx, RX_E_START);
+}
+
+void
+bna_rx_disable(struct bna_rx *rx, enum bna_cleanup_type type,
+		void (*cbfn)(void *, struct bna_rx *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		/* h/w should not be accessed. Treat as if we're already stopped */
+		(*cbfn)(rx->bna->bnad, rx);
+	} else {
+		rx->stop_cbfn = cbfn;
+		rx->stop_cbarg = rx->bna->bnad;
+
+		rx->rx_flags &= ~BNA_RX_F_ENABLED;
+
+		bfa_fsm_send_event(rx, RX_E_STOP);
+	}
+}
+
+void
+bna_rx_cleanup_complete(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_CLEANUP_DONE);
+}
+
+enum bna_cb_status
+bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode new_mode,
+		enum bna_rxmode bitmask,
+		void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int need_hw_config = 0;
+
+	/* Error checks */
+
+	if (is_promisc_enable(new_mode, bitmask)) {
+		/* If promisc mode is already enabled elsewhere in the system */
+		if ((rx->bna->promisc_rid != BFI_INVALID_RID) &&
+			(rx->bna->promisc_rid != rxf->rx->rid))
+			goto err_return;
+
+		/* If default mode is already enabled in the system */
+		if (rx->bna->default_mode_rid != BFI_INVALID_RID)
+			goto err_return;
+
+		/* Trying to enable promiscuous and default mode together */
+		if (is_default_enable(new_mode, bitmask))
+			goto err_return;
+	}
+
+	if (is_default_enable(new_mode, bitmask)) {
+		/* If default mode is already enabled elsewhere in the system */
+		if ((rx->bna->default_mode_rid != BFI_INVALID_RID) &&
+			(rx->bna->default_mode_rid != rxf->rx->rid))
+			goto err_return;
+
+		/* If promiscuous mode is already enabled in the system */
+		if (rx->bna->promisc_rid != BFI_INVALID_RID)
+			goto err_return;
+	}
+
+	/* Process the commands */
+
+	if (is_promisc_enable(new_mode, bitmask)) {
+		if (bna_rxf_promisc_enable(rxf))
+			need_hw_config = 1;
+	} else if (is_promisc_disable(new_mode, bitmask)) {
+		if (bna_rxf_promisc_disable(rxf))
+			need_hw_config = 1;
+	}
+
+	if (is_allmulti_enable(new_mode, bitmask)) {
+		if (bna_rxf_allmulti_enable(rxf))
+			need_hw_config = 1;
+	} else if (is_allmulti_disable(new_mode, bitmask)) {
+		if (bna_rxf_allmulti_disable(rxf))
+			need_hw_config = 1;
+	}
+
+	/* Trigger h/w if needed */
+
+	if (need_hw_config) {
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	} else if (cbfn)
+		(*cbfn)(rx->bna->bnad, rx);
+
+	return BNA_CB_SUCCESS;
+
+err_return:
+	return BNA_CB_FAIL;
+}
+
+void
+bna_rx_vlanfilter_enable(struct bna_rx *rx)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+
+	if (rxf->vlan_filter_status == BNA_STATUS_T_DISABLED) {
+		rxf->vlan_filter_status = BNA_STATUS_T_ENABLED;
+		rxf->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+void
+bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo)
+{
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+
+	list_for_each(qe, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe;
+		rxp->cq.ccb->rx_coalescing_timeo = coalescing_timeo;
+		bna_ib_coalescing_timeo_set(&rxp->cq.ib, coalescing_timeo);
+	}
+}
+
+void
+bna_rx_dim_reconfig(struct bna *bna, const u32 vector[][BNA_BIAS_T_MAX])
+{
+	int i, j;
+
+	for (i = 0; i < BNA_LOAD_T_MAX; i++)
+		for (j = 0; j < BNA_BIAS_T_MAX; j++)
+			bna->rx_mod.dim_vector[i][j] = vector[i][j];
+}
+
+void
+bna_rx_dim_update(struct bna_ccb *ccb)
+{
+	struct bna *bna = ccb->cq->rx->bna;
+	u32 load, bias;
+	u32 pkt_rt, small_rt, large_rt;
+	u8 coalescing_timeo;
+
+	if ((ccb->pkt_rate.small_pkt_cnt == 0) &&
+		(ccb->pkt_rate.large_pkt_cnt == 0))
+		return;
+
+	/* Arrive at preconfigured coalescing timeo value based on pkt rate */
+
+	small_rt = ccb->pkt_rate.small_pkt_cnt;
+	large_rt = ccb->pkt_rate.large_pkt_cnt;
+
+	pkt_rt = small_rt + large_rt;
+
+	if (pkt_rt < BNA_PKT_RATE_10K)
+		load = BNA_LOAD_T_LOW_4;
+	else if (pkt_rt < BNA_PKT_RATE_20K)
+		load = BNA_LOAD_T_LOW_3;
+	else if (pkt_rt < BNA_PKT_RATE_30K)
+		load = BNA_LOAD_T_LOW_2;
+	else if (pkt_rt < BNA_PKT_RATE_40K)
+		load = BNA_LOAD_T_LOW_1;
+	else if (pkt_rt < BNA_PKT_RATE_50K)
+		load = BNA_LOAD_T_HIGH_1;
+	else if (pkt_rt < BNA_PKT_RATE_60K)
+		load = BNA_LOAD_T_HIGH_2;
+	else if (pkt_rt < BNA_PKT_RATE_80K)
+		load = BNA_LOAD_T_HIGH_3;
+	else
+		load = BNA_LOAD_T_HIGH_4;
+
+	if (small_rt > (large_rt << 1))
+		bias = 0;
+	else
+		bias = 1;
+
+	ccb->pkt_rate.small_pkt_cnt = 0;
+	ccb->pkt_rate.large_pkt_cnt = 0;
+
+	coalescing_timeo = bna->rx_mod.dim_vector[load][bias];
+	ccb->rx_coalescing_timeo = coalescing_timeo;
+
+	/* Set it to IB */
+	bna_ib_coalescing_timeo_set(&ccb->cq->ib, coalescing_timeo);
+}
+
+const u32 bna_napi_dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX] = {
+	{12, 12},
+	{6, 10},
+	{5, 10},
+	{4, 8},
+	{3, 6},
+	{3, 6},
+	{2, 4},
+	{1, 2},
+};
+
+/**
+ * TX
+ */
+#define call_tx_stop_cbfn(tx)						\
+do {									\
+	if ((tx)->stop_cbfn) {						\
+		void (*cbfn)(void *, struct bna_tx *);		\
+		void *cbarg;						\
+		cbfn = (tx)->stop_cbfn;					\
+		cbarg = (tx)->stop_cbarg;				\
+		(tx)->stop_cbfn = NULL;					\
+		(tx)->stop_cbarg = NULL;				\
+		cbfn(cbarg, (tx));					\
+	}								\
+} while (0)
+
+#define call_tx_prio_change_cbfn(tx)					\
+do {									\
+	if ((tx)->prio_change_cbfn) {					\
+		void (*cbfn)(struct bnad *, struct bna_tx *);	\
+		cbfn = (tx)->prio_change_cbfn;				\
+		(tx)->prio_change_cbfn = NULL;				\
+		cbfn((tx)->bna->bnad, (tx));				\
+	}								\
+} while (0)
+
+static void bna_tx_mod_cb_tx_stopped(void *tx_mod, struct bna_tx *tx);
+static void bna_bfi_tx_enet_start(struct bna_tx *tx);
+static void bna_tx_enet_stop(struct bna_tx *tx);
+
+enum bna_tx_event {
+	TX_E_START			= 1,
+	TX_E_STOP			= 2,
+	TX_E_FAIL			= 3,
+	TX_E_STARTED			= 4,
+	TX_E_STOPPED			= 5,
+	TX_E_PRIO_CHANGE		= 6,
+	TX_E_CLEANUP_DONE		= 7,
+	TX_E_BW_UPDATE			= 8,
+};
+
+bfa_fsm_state_decl(bna_tx, stopped, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, start_wait, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, started, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, stop_wait, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, cleanup_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, prio_stop_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, prio_cleanup_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, failed, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, quiesce_wait, struct bna_tx,
+			enum bna_tx_event);
+
+static void
+bna_tx_sm_stopped_entry(struct bna_tx *tx)
+{
+	call_tx_stop_cbfn(tx);
+}
+
+static void
+bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_START:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
+
+	case TX_E_STOP:
+		call_tx_stop_cbfn(tx);
+		break;
+
+	case TX_E_FAIL:
+		/* No-op */
+		break;
+
+	case TX_E_PRIO_CHANGE:
+		call_tx_prio_change_cbfn(tx);
+		break;
+
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_start_wait_entry(struct bna_tx *tx)
+{
+	bna_bfi_tx_enet_start(tx);
+}
+
+static void
+bna_tx_sm_start_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		tx->flags &= ~(BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED);
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		break;
+
+	case TX_E_FAIL:
+		tx->flags &= ~(BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED);
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	case TX_E_STARTED:
+		if (tx->flags & (BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED)) {
+			tx->flags &= ~(BNA_TX_F_PRIO_CHANGED |
+				BNA_TX_F_BW_UPDATED);
+			bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
+		} else
+			bfa_fsm_set_state(tx, bna_tx_sm_started);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+		tx->flags |=  BNA_TX_F_PRIO_CHANGED;
+		break;
+
+	case TX_E_BW_UPDATE:
+		tx->flags |= BNA_TX_F_BW_UPDATED;
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_started_entry(struct bna_tx *tx)
+{
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+	int is_regular = (tx->type == BNA_TX_T_REGULAR);
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		txq->tcb->priority = txq->priority;
+		/* Start IB */
+		bna_ib_start(tx->bna, &txq->ib, is_regular);
+	}
+	tx->tx_resume_cbfn(tx->bna->bnad, tx);
+}
+
+static void
+bna_tx_sm_started(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		tx->tx_stall_cbfn(tx->bna->bnad, tx);
+		bna_tx_enet_stop(tx);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		tx->tx_stall_cbfn(tx->bna->bnad, tx);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_stop_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_FAIL:
+	case TX_E_STOPPED:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+		break;
+
+	case TX_E_STARTED:
+		/**
+		 * We are here due to start_wait -> stop_wait transition on
+		 * TX_E_STOP event
+		 */
+		bna_tx_enet_stop(tx);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_cleanup_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_cleanup_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_FAIL:
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_prio_stop_wait_entry(struct bna_tx *tx)
+{
+	tx->tx_stall_cbfn(tx->bna->bnad, tx);
+	bna_tx_enet_stop(tx);
+}
+
+static void
+bna_tx_sm_prio_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		call_tx_prio_change_cbfn(tx);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+		break;
+
+	case TX_E_STOPPED:
+		bfa_fsm_set_state(tx, bna_tx_sm_prio_cleanup_wait);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_prio_cleanup_wait_entry(struct bna_tx *tx)
+{
+	call_tx_prio_change_cbfn(tx);
+	tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+}
+
+static void
+bna_tx_sm_prio_cleanup_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_failed_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_failed(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_START:
+		bfa_fsm_set_state(tx, bna_tx_sm_quiesce_wait);
+		break;
+
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
+	case TX_E_FAIL:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_quiesce_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_quiesce_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
+
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_tx_enet_start(struct bna_tx *tx)
+{
+	struct bfi_enet_tx_cfg_req *cfg_req = &tx->bfi_enet_cmd.cfg_req;
+	struct bna_txq *txq = NULL;
+	struct list_head *qe;
+	int i;
+
+	bfi_msgq_mhdr_set(cfg_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_TX_CFG_SET_REQ, 0, tx->rid);
+	cfg_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_tx_cfg_req)));
+
+	cfg_req->num_queues = tx->num_txq;
+	for (i = 0, qe = bfa_q_first(&tx->txq_q);
+		i < tx->num_txq;
+		i++, qe = bfa_q_next(qe)) {
+		txq = (struct bna_txq *)qe;
+
+		bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].q.q, &txq->qpt);
+		cfg_req->q_cfg[i].q.priority = txq->priority;
+
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_lo =
+			txq->ib.ib_seg_host_addr.lsb;
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_hi =
+			txq->ib.ib_seg_host_addr.msb;
+		cfg_req->q_cfg[i].ib.intr.msix_index =
+			htons((u16)txq->ib.intr_vector);
+	}
+
+	cfg_req->ib_cfg.int_pkt_dma = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_enabled = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_pkt_enabled = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.continuous_coalescing = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.msix = (txq->ib.intr_type == BNA_INTR_T_MSIX)
+				? BNA_STATUS_T_ENABLED : BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.coalescing_timeout =
+			htonl((u32)txq->ib.coalescing_timeo);
+	cfg_req->ib_cfg.inter_pkt_timeout =
+			htonl((u32)txq->ib.interpkt_timeo);
+	cfg_req->ib_cfg.inter_pkt_count = (u8)txq->ib.interpkt_count;
+
+	cfg_req->tx_cfg.vlan_mode = BFI_ENET_TX_VLAN_WI;
+	cfg_req->tx_cfg.vlan_id = htons((u16)tx->txf_vlan_id);
+	cfg_req->tx_cfg.admit_tagged_frame = BNA_STATUS_T_DISABLED;
+	cfg_req->tx_cfg.apply_vlan_filter = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_tx_cfg_req), &cfg_req->mh);
+	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
+}
+
+static void
+bna_bfi_tx_enet_stop(struct bna_tx *tx)
+{
+	struct bfi_enet_req *req = &tx->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_TX_CFG_CLR_REQ, 0, tx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));
+	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL, sizeof(struct bfi_enet_req),
+		&req->mh);
+	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
+}
+
+static void
+bna_tx_enet_stop(struct bna_tx *tx)
+{
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+
+	/* Stop IB */
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		bna_ib_stop(tx->bna, &txq->ib);
+	}
+
+	bna_bfi_tx_enet_stop(tx);
+}
+
+static void
+bna_txq_qpt_setup(struct bna_txq *txq, int page_count, int page_size,
+		struct bna_mem_descr *qpt_mem,
+		struct bna_mem_descr *swqpt_mem,
+		struct bna_mem_descr *page_mem)
+{
+	int i;
+
+	txq->qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
+	txq->qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
+	txq->qpt.kv_qpt_ptr = qpt_mem->kva;
+	txq->qpt.page_count = page_count;
+	txq->qpt.page_size = page_size;
+
+	txq->tcb->sw_qpt = (void **) swqpt_mem->kva;
+
+	for (i = 0; i < page_count; i++) {
+		txq->tcb->sw_qpt[i] = page_mem[i].kva;
+
+		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].lsb =
+			page_mem[i].dma.lsb;
+		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].msb =
+			page_mem[i].dma.msb;
+	}
+}
+
+static struct bna_tx *
+bna_tx_get(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct list_head	*qe = NULL;
+	struct bna_tx *tx = NULL;
+
+	if (list_empty(&tx_mod->tx_free_q))
+		return NULL;
+	if (type == BNA_TX_T_REGULAR) {
+		bfa_q_deq(&tx_mod->tx_free_q, &qe);
+	} else {
+		bfa_q_deq_tail(&tx_mod->tx_free_q, &qe);
+	}
+	tx = (struct bna_tx *)qe;
+	bfa_q_qe_init(&tx->qe);
+	tx->type = type;
+
+	return tx;
+}
+
+static void
+bna_tx_free(struct bna_tx *tx)
+{
+	struct bna_tx_mod *tx_mod = &tx->bna->tx_mod;
+	struct bna_txq *txq;
+	struct list_head *prev_qe;
+	struct list_head *qe;
+
+	while (!list_empty(&tx->txq_q)) {
+		bfa_q_deq(&tx->txq_q, &txq);
+		bfa_q_qe_init(&txq->qe);
+		txq->tcb = NULL;
+		txq->tx = NULL;
+		list_add_tail(&txq->qe, &tx_mod->txq_free_q);
+	}
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		if (qe == &tx->qe) {
+			list_del(&tx->qe);
+			bfa_q_qe_init(&tx->qe);
+			break;
+		}
+	}
+
+	tx->bna = NULL;
+	tx->priv = NULL;
+
+	prev_qe = NULL;
+	list_for_each(qe, &tx_mod->tx_free_q) {
+		if (((struct bna_tx *)qe)->rid < tx->rid)
+			prev_qe = qe;
+		else {
+			break;
+		}
+	}
+
+	if (prev_qe == NULL) {
+		/* This is the first entry */
+		bfa_q_enq_head(&tx_mod->tx_free_q, &tx->qe);
+	} else if (bfa_q_next(prev_qe) == &tx_mod->tx_free_q) {
+		/* This is the last entry */
+		list_add_tail(&tx->qe, &tx_mod->tx_free_q);
+	} else {
+		/* Somewhere in the middle */
+		bfa_q_next(&tx->qe) = bfa_q_next(prev_qe);
+		bfa_q_prev(&tx->qe) = prev_qe;
+		bfa_q_next(prev_qe) = &tx->qe;
+		bfa_q_prev(bfa_q_next(&tx->qe)) = &tx->qe;
+	}
+}
+
+static void
+bna_tx_start(struct bna_tx *tx)
+{
+	tx->flags |= BNA_TX_F_ENET_STARTED;
+	if (tx->flags & BNA_TX_F_ENABLED)
+		bfa_fsm_send_event(tx, TX_E_START);
+}
+
+static void
+bna_tx_stop(struct bna_tx *tx)
+{
+	tx->stop_cbfn = bna_tx_mod_cb_tx_stopped;
+	tx->stop_cbarg = &tx->bna->tx_mod;
+
+	tx->flags &= ~BNA_TX_F_ENET_STARTED;
+	bfa_fsm_send_event(tx, TX_E_STOP);
+}
+
+static void
+bna_tx_fail(struct bna_tx *tx)
+{
+	tx->flags &= ~BNA_TX_F_ENET_STARTED;
+	bfa_fsm_send_event(tx, TX_E_FAIL);
+}
+
+void
+bna_bfi_tx_enet_start_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_tx_cfg_rsp *cfg_rsp = &tx->bfi_enet_cmd.cfg_rsp;
+	struct bna_txq *txq = NULL;
+	struct list_head *qe;
+	int i;
+
+	bfa_msgq_rsp_copy(&tx->bna->msgq, (u8 *)cfg_rsp,
+		sizeof(struct bfi_enet_tx_cfg_rsp));
+
+	tx->hw_id = cfg_rsp->hw_id;
+
+	for (i = 0, qe = bfa_q_first(&tx->txq_q);
+		i < tx->num_txq; i++, qe = bfa_q_next(qe)) {
+		txq = (struct bna_txq *)qe;
+
+		/* Setup doorbells */
+		txq->tcb->i_dbell->doorbell_addr =
+			tx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].i_dbell);
+		txq->tcb->q_dbell =
+			tx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].q_dbell);
+		txq->hw_id = cfg_rsp->q_handles[i].hw_qid;
+
+		/* Initialize producer/consumer indexes */
+		(*txq->tcb->hw_consumer_index) = 0;
+		txq->tcb->producer_index = txq->tcb->consumer_index = 0;
+	}
+
+	bfa_fsm_send_event(tx, TX_E_STARTED);
+}
+
+void
+bna_bfi_tx_enet_stop_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(tx, TX_E_STOPPED);
+}
+
+void
+bna_bfi_bw_update_aen(struct bna_tx_mod *tx_mod)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		bfa_fsm_send_event(tx, TX_E_BW_UPDATE);
+	}
+}
+
+void
+bna_tx_res_req(int num_txq, int txq_depth, struct bna_res_info *res_info)
+{
+	u32 q_size;
+	u32 page_count;
+	struct bna_mem_info *mem_info;
+
+	res_info[BNA_TX_RES_MEM_T_TCB].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = sizeof(struct bna_tcb);
+	mem_info->num = num_txq;
+
+	q_size = txq_depth * BFI_TXQ_WI_SIZE;
+	q_size = ALIGN(q_size, PAGE_SIZE);
+	page_count = q_size >> PAGE_SHIFT;
+
+	res_info[BNA_TX_RES_MEM_T_QPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_QPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = page_count * sizeof(struct bna_dma_addr);
+	mem_info->num = num_txq;
+
+	res_info[BNA_TX_RES_MEM_T_SWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_SWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = page_count * sizeof(void *);
+	mem_info->num = num_txq;
+
+	res_info[BNA_TX_RES_MEM_T_PAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = PAGE_SIZE;
+	mem_info->num = num_txq * page_count;
+
+	res_info[BNA_TX_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = BFI_IBIDX_SIZE;
+	mem_info->num = num_txq;
+
+	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_type = BNA_RES_T_INTR;
+	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.intr_type =
+			BNA_INTR_T_MSIX;
+	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.num = num_txq;
+}
+
+struct bna_tx *
+bna_tx_create(struct bna *bna, struct bnad *bnad,
+		struct bna_tx_config *tx_cfg,
+		struct bna_tx_event_cbfn *tx_cbfn,
+		struct bna_res_info *res_info, void *priv)
+{
+	struct bna_intr_info *intr_info;
+	struct bna_tx_mod *tx_mod = &bna->tx_mod;
+	struct bna_tx *tx;
+	struct bna_txq *txq;
+	struct list_head *qe;
+	int page_count;
+	int page_size;
+	int page_idx;
+	int i;
+
+	intr_info = &res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info;
+	page_count = (res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.num) /
+			tx_cfg->num_txq;
+	page_size = res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.len;
+
+	/**
+	 * Get resources
+	 */
+
+	if ((intr_info->num != 1) && (intr_info->num != tx_cfg->num_txq))
+		return NULL;
+
+	/* Tx */
+
+	tx = bna_tx_get(tx_mod, tx_cfg->tx_type);
+	if (!tx)
+		return NULL;
+	tx->bna = bna;
+	tx->priv = priv;
+
+	/* TxQs */
+
+	INIT_LIST_HEAD(&tx->txq_q);
+	for (i = 0; i < tx_cfg->num_txq; i++) {
+		if (list_empty(&tx_mod->txq_free_q))
+			goto err_return;
+
+		bfa_q_deq(&tx_mod->txq_free_q, &txq);
+		bfa_q_qe_init(&txq->qe);
+		list_add_tail(&txq->qe, &tx->txq_q);
+		txq->tx = tx;
+	}
+
+	/*
+	 * Initialize
+	 */
+
+	/* Tx */
+
+	tx->tcb_setup_cbfn = tx_cbfn->tcb_setup_cbfn;
+	tx->tcb_destroy_cbfn = tx_cbfn->tcb_destroy_cbfn;
+	/* Following callbacks are mandatory */
+	tx->tx_stall_cbfn = tx_cbfn->tx_stall_cbfn;
+	tx->tx_resume_cbfn = tx_cbfn->tx_resume_cbfn;
+	tx->tx_cleanup_cbfn = tx_cbfn->tx_cleanup_cbfn;
+
+	list_add_tail(&tx->qe, &tx_mod->tx_active_q);
+
+	tx->num_txq = tx_cfg->num_txq;
+
+	tx->flags = 0;
+	if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_ENET_STARTED) {
+		switch (tx->type) {
+		case BNA_TX_T_REGULAR:
+			if (!(tx->bna->tx_mod.flags &
+				BNA_TX_MOD_F_ENET_LOOPBACK))
+				tx->flags |= BNA_TX_F_ENET_STARTED;
+			break;
+		case BNA_TX_T_LOOPBACK:
+			if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_ENET_LOOPBACK)
+				tx->flags |= BNA_TX_F_ENET_STARTED;
+			break;
+		}
+	}
+
+	/* TxQ */
+
+	i = 0;
+	page_idx = 0;
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		txq->tcb = (struct bna_tcb *)
+		res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info.mdl[i].kva;
+		txq->tx_packets = 0;
+		txq->tx_bytes = 0;
+
+		/* IB */
+		txq->ib.ib_seg_host_addr.lsb =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
+		txq->ib.ib_seg_host_addr.msb =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
+		txq->ib.ib_seg_host_addr_kva =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
+		txq->ib.intr_type = intr_info->intr_type;
+		txq->ib.intr_vector = (intr_info->num == 1) ?
+					intr_info->idl[0].vector :
+					intr_info->idl[i].vector;
+		if (intr_info->intr_type == BNA_INTR_T_INTX)
+			txq->ib.intr_vector = (1 <<  txq->ib.intr_vector);
+		txq->ib.coalescing_timeo = tx_cfg->coalescing_timeo;
+		txq->ib.interpkt_timeo = 0; /* Not used */
+		txq->ib.interpkt_count = BFI_TX_INTERPKT_COUNT;
+
+		/* TCB */
+
+		txq->tcb->q_depth = tx_cfg->txq_depth;
+		txq->tcb->unmap_q = (void *)
+		res_info[BNA_TX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[i].kva;
+		txq->tcb->hw_consumer_index =
+			(u32 *)txq->ib.ib_seg_host_addr_kva;
+		txq->tcb->i_dbell = &txq->ib.door_bell;
+		txq->tcb->intr_type = txq->ib.intr_type;
+		txq->tcb->intr_vector = txq->ib.intr_vector;
+		txq->tcb->txq = txq;
+		txq->tcb->bnad = bnad;
+		txq->tcb->id = i;
+
+		/* QPT, SWQPT, Pages */
+		bna_txq_qpt_setup(txq, page_count, page_size,
+			&res_info[BNA_TX_RES_MEM_T_QPT].res_u.mem_info.mdl[i],
+			&res_info[BNA_TX_RES_MEM_T_SWQPT].res_u.mem_info.mdl[i],
+			&res_info[BNA_TX_RES_MEM_T_PAGE].
+				  res_u.mem_info.mdl[page_idx]);
+		txq->tcb->page_idx = page_idx;
+		txq->tcb->page_count = page_count;
+		page_idx += page_count;
+
+		/* Callback to bnad for setting up TCB */
+		if (tx->tcb_setup_cbfn)
+			(tx->tcb_setup_cbfn)(bna->bnad, txq->tcb);
+
+		if (tx_cfg->num_txq == BFI_TX_MAX_PRIO)
+			txq->priority = txq->tcb->id;
+		else
+			txq->priority = tx_mod->default_prio;
+
+		i++;
+	}
+
+	tx->txf_vlan_id = 0;
+
+	bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+
+	tx_mod->rid_mask |= (1 << tx->rid);
+
+	return tx;
+
+err_return:
+	bna_tx_free(tx);
+	return NULL;
+}
+
+void
+bna_tx_destroy(struct bna_tx *tx)
+{
+	struct bna_txq *txq;
+	struct list_head *qe;
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		if (tx->tcb_destroy_cbfn)
+			(tx->tcb_destroy_cbfn)(tx->bna->bnad, txq->tcb);
+	}
+
+	tx->bna->tx_mod.rid_mask &= ~(1 << tx->rid);
+	bna_tx_free(tx);
+}
+
+void
+bna_tx_enable(struct bna_tx *tx)
+{
+	if (tx->fsm != (bfa_sm_t)bna_tx_sm_stopped)
+		return;
+
+	tx->flags |= BNA_TX_F_ENABLED;
+
+	if (tx->flags & BNA_TX_F_ENET_STARTED)
+		bfa_fsm_send_event(tx, TX_E_START);
+}
+
+void
+bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
+		void (*cbfn)(void *, struct bna_tx *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		(*cbfn)(tx->bna->bnad, tx);
+		return;
+	}
+
+	tx->stop_cbfn = cbfn;
+	tx->stop_cbarg = tx->bna->bnad;
+
+	tx->flags &= ~BNA_TX_F_ENABLED;
+
+	bfa_fsm_send_event(tx, TX_E_STOP);
+}
+
+void
+bna_tx_cleanup_complete(struct bna_tx *tx)
+{
+	bfa_fsm_send_event(tx, TX_E_CLEANUP_DONE);
+}
+
+static void
+bna_tx_mod_cb_tx_stopped(void *arg, struct bna_tx *tx)
+{
+	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+
+	bfa_wc_down(&tx_mod->tx_stop_wc);
+}
+
+static void
+bna_tx_mod_cb_tx_stopped_all(void *arg)
+{
+	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+
+	if (tx_mod->stop_cbfn)
+		tx_mod->stop_cbfn(&tx_mod->bna->enet);
+	tx_mod->stop_cbfn = NULL;
+}
+
+void
+bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
+		struct bna_res_info *res_info)
+{
+	int i;
+
+	tx_mod->bna = bna;
+	tx_mod->flags = 0;
+
+	tx_mod->tx = (struct bna_tx *)
+		res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.mdl[0].kva;
+	tx_mod->txq = (struct bna_txq *)
+		res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&tx_mod->tx_free_q);
+	INIT_LIST_HEAD(&tx_mod->tx_active_q);
+
+	INIT_LIST_HEAD(&tx_mod->txq_free_q);
+
+	for (i = 0; i < bna->ioceth.attr.num_txq; i++) {
+		tx_mod->tx[i].rid = i;
+		bfa_q_qe_init(&tx_mod->tx[i].qe);
+		list_add_tail(&tx_mod->tx[i].qe, &tx_mod->tx_free_q);
+		bfa_q_qe_init(&tx_mod->txq[i].qe);
+		list_add_tail(&tx_mod->txq[i].qe, &tx_mod->txq_free_q);
+	}
+
+	tx_mod->prio_map = BFI_TX_PRIO_MAP_ALL;
+	tx_mod->default_prio = 0;
+	tx_mod->iscsi_over_cee = BNA_STATUS_T_DISABLED;
+	tx_mod->iscsi_prio = -1;
+}
+
+void
+bna_tx_mod_uninit(struct bna_tx_mod *tx_mod)
+{
+	struct list_head		*qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &tx_mod->tx_free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &tx_mod->txq_free_q)
+		i++;
+
+	tx_mod->bna = NULL;
+}
+
+void
+bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	tx_mod->flags |= BNA_TX_MOD_F_ENET_STARTED;
+	if (type == BNA_TX_T_LOOPBACK)
+		tx_mod->flags |= BNA_TX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		if (tx->type == type)
+			bna_tx_start(tx);
+	}
+}
+
+void
+bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_STARTED;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_LOOPBACK;
+
+	tx_mod->stop_cbfn = bna_enet_cb_tx_stopped;
+
+	bfa_wc_init(&tx_mod->tx_stop_wc, bna_tx_mod_cb_tx_stopped_all, tx_mod);
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		if (tx->type == type) {
+			bfa_wc_up(&tx_mod->tx_stop_wc);
+			bna_tx_stop(tx);
+		}
+	}
+
+	bfa_wc_wait(&tx_mod->tx_stop_wc);
+}
+
+void
+bna_tx_mod_fail(struct bna_tx_mod *tx_mod)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_STARTED;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		bna_tx_fail(tx);
+	}
+}
+
+void
+bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo)
+{
+	struct bna_txq *txq;
+	struct list_head *qe;
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		bna_ib_coalescing_timeo_set(&txq->ib, coalescing_timeo);
+	}
+}
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 4/8] bna: Add New HW Defs
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (2 preceding siblings ...)
  2011-08-09  2:21 ` [PATCH 3/8] bna: Tx and Rx Redesign Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 5/8] bna: ENET and Tx Rx Redesign Enablement Rasesh Mody
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Add new file bna_hw_defs.h to support the new MSGQ, ENET and TX RX redesign
   code. This makes bna_hw.h obsolete; it is removed in a later patch.
   bna_hw_defs.h drops all the unused HW register definitions that were part
   of bna_hw.h (an illustrative usage sketch of the interrupt macros it
   provides follows below).
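
For illustration only (not part of this patch): a minimal sketch of how a
driver interrupt handler might consume the status macros defined in
bna_hw_defs.h. The bnad context structure, its embedded struct bna, and the
handler name are assumptions made for this example; only bna_intr_status_get()
and BNA_IS_MBOX_ERR_INTR() come from the new header.

	/* Hypothetical ISR sketch -- not part of the patch. Assumes a driver
	 * private structure 'struct bnad' that embeds 'struct bna', and
	 * <linux/interrupt.h> for irqreturn_t.
	 */
	static irqreturn_t
	sketch_mbox_isr(int irq, void *data)
	{
		struct bnad *bnad = data;
		u32 intr_status;

		/* Reads fn_int_status and acks everything except the mailbox
		 * bits, as the bna_intr_status_get() macro below does.
		 */
		bna_intr_status_get(&bnad->bna, intr_status);
		if (!intr_status)
			return IRQ_NONE;

		if (BNA_IS_MBOX_ERR_INTR(&bnad->bna, intr_status)) {
			/* hand off to the mailbox/error handler here */
		}

		return IRQ_HANDLED;
	}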

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bna_hw_defs.h |  413 +++++++++++++++++++++++++++++++++++++++++
 1 files changed, 413 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/bna/bna_hw_defs.h

diff --git a/drivers/net/bna/bna_hw_defs.h b/drivers/net/bna/bna_hw_defs.h
new file mode 100644
index 0000000..07bb792
--- /dev/null
+++ b/drivers/net/bna/bna_hw_defs.h
@@ -0,0 +1,413 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * File for interrupt macros and functions
+ */
+
+#ifndef __BNA_HW_DEFS_H__
+#define __BNA_HW_DEFS_H__
+
+#include "bfi_reg.h"
+
+/**
+ *
+ * SW imposed limits
+ *
+ */
+
+#define BFI_ENET_MAX_MCAM		256
+
+#define BFI_INVALID_RID			-1
+
+#define BFI_IBIDX_SIZE			4
+
+#define BFI_VLAN_WORD_SHIFT		5	/* 32 bits */
+#define BFI_VLAN_WORD_MASK		0x1F
+#define BFI_VLAN_BLOCK_SHIFT		9	/* 512 bits */
+#define BFI_VLAN_BMASK_ALL		0xFF
+
+#define BFI_COALESCING_TIMER_UNIT	5	/* 5us */
+#define BFI_MAX_COALESCING_TIMEO	0xFF	/* in 5us units */
+#define BFI_MAX_INTERPKT_COUNT		0xFF
+#define BFI_MAX_INTERPKT_TIMEO		0xF	/* in 0.5us units */
+#define BFI_TX_COALESCING_TIMEO		20	/* 20 * 5 = 100us */
+#define BFI_TX_INTERPKT_COUNT		32
+#define	BFI_RX_COALESCING_TIMEO		12	/* 12 * 5 = 60us */
+#define	BFI_RX_INTERPKT_COUNT		6	/* Pkt Cnt = 6 */
+#define	BFI_RX_INTERPKT_TIMEO		3	/* 3 * 0.5 = 1.5us */
+
+#define BFI_TXQ_WI_SIZE			64	/* bytes */
+#define BFI_RXQ_WI_SIZE			8	/* bytes */
+#define BFI_CQ_WI_SIZE			16	/* bytes */
+#define BFI_TX_MAX_WRR_QUOTA		0xFFF
+
+#define BFI_TX_MAX_VECTORS_PER_WI	4
+#define BFI_TX_MAX_VECTORS_PER_PKT	0xFF
+#define BFI_TX_MAX_DATA_PER_VECTOR	0xFFFF
+#define BFI_TX_MAX_DATA_PER_PKT		0xFFFFFF
+
+/* Small Q buffer size */
+#define BFI_SMALL_RXBUF_SIZE		128
+
+#define BFI_TX_MAX_PRIO			8
+#define BFI_TX_PRIO_MAP_ALL		0xFF
+
+/*
+ *
+ * Register definitions and macros
+ *
+ */
+
+#define BNA_PCI_REG_CT_ADDRSZ		(0x40000)
+
+#define ct_reg_addr_init(_bna, _pcidev)					\
+{									\
+	struct bna_reg_offset reg_offset[] =				\
+	{{HOSTFN0_INT_STATUS, HOSTFN0_INT_MSK},				\
+	 {HOSTFN1_INT_STATUS, HOSTFN1_INT_MSK},				\
+	 {HOSTFN2_INT_STATUS, HOSTFN2_INT_MSK},				\
+	 {HOSTFN3_INT_STATUS, HOSTFN3_INT_MSK} };			\
+									\
+	(_bna)->regs.fn_int_status = (_pcidev)->pci_bar_kva +		\
+				reg_offset[(_pcidev)->pci_func].fn_int_status;\
+	(_bna)->regs.fn_int_mask = (_pcidev)->pci_bar_kva +		\
+				reg_offset[(_pcidev)->pci_func].fn_int_mask;\
+}
+
+#define ct_bit_defn_init(_bna, _pcidev)					\
+{									\
+	(_bna)->bits.mbox_status_bits = (__HFN_INT_MBOX_LPU0 |		\
+					__HFN_INT_MBOX_LPU1);		\
+	(_bna)->bits.mbox_mask_bits = (__HFN_INT_MBOX_LPU0 |		\
+					__HFN_INT_MBOX_LPU1);		\
+	(_bna)->bits.error_status_bits = (__HFN_INT_ERR_MASK);		\
+	(_bna)->bits.error_mask_bits = (__HFN_INT_ERR_MASK);		\
+	(_bna)->bits.halt_status_bits = __HFN_INT_LL_HALT;		\
+}
+
+#define ct2_reg_addr_init(_bna, _pcidev)				\
+{									\
+	(_bna)->regs.fn_int_status = (_pcidev)->pci_bar_kva +		\
+				CT2_HOSTFN_INT_STATUS;			\
+	(_bna)->regs.fn_int_mask = (_pcidev)->pci_bar_kva +		\
+				CT2_HOSTFN_INTR_MASK;			\
+}
+
+#define ct2_bit_defn_init(_bna, _pcidev)				\
+{									\
+	(_bna)->bits.mbox_status_bits = (__HFN_INT_MBOX_LPU0_CT2 |	\
+					__HFN_INT_MBOX_LPU1_CT2);	\
+	(_bna)->bits.mbox_mask_bits = (__HFN_INT_MBOX_LPU0_CT2 |	\
+					__HFN_INT_MBOX_LPU1_CT2);	\
+	(_bna)->bits.error_status_bits = (__HFN_INT_ERR_MASK_CT2);	\
+	(_bna)->bits.error_mask_bits = (__HFN_INT_ERR_MASK_CT2);	\
+	(_bna)->bits.halt_status_bits = __HFN_INT_CPQ_HALT_CT2;		\
+	(_bna)->bits.halt_mask_bits = __HFN_INT_CPQ_HALT_CT2;		\
+}
+
+#define bna_reg_addr_init(_bna, _pcidev)				\
+{									\
+	switch ((_pcidev)->device_id) {					\
+	case PCI_DEVICE_ID_BROCADE_CT:					\
+		ct_reg_addr_init((_bna), (_pcidev));			\
+		ct_bit_defn_init((_bna), (_pcidev));			\
+		break;							\
+	}								\
+}
+
+#define bna_port_id_get(_bna) ((_bna)->ioceth.ioc.port_id)
+/**
+ *
+ *  Interrupt related bits, flags and macros
+ *
+ */
+
+#define IB_STATUS_BITS		0x0000ffff
+
+#define BNA_IS_MBOX_INTR(_bna, _intr_status)				\
+	((_intr_status) & (_bna)->bits.mbox_status_bits)
+
+#define BNA_IS_HALT_INTR(_bna, _intr_status)				\
+	((_intr_status) & (_bna)->bits.halt_status_bits)
+
+#define BNA_IS_ERR_INTR(_bna, _intr_status)	\
+	((_intr_status) & (_bna)->bits.error_status_bits)
+
+#define BNA_IS_MBOX_ERR_INTR(_bna, _intr_status)	\
+	(BNA_IS_MBOX_INTR(_bna, _intr_status) |		\
+	BNA_IS_ERR_INTR(_bna, _intr_status))
+
+#define BNA_IS_INTX_DATA_INTR(_intr_status)		\
+		((_intr_status) & IB_STATUS_BITS)
+
+#define bna_halt_clear(_bna)						\
+do {									\
+	u32 init_halt;						\
+	init_halt = readl((_bna)->ioceth.ioc.ioc_regs.ll_halt);	\
+	init_halt &= ~__FW_INIT_HALT_P;					\
+	writel(init_halt, (_bna)->ioceth.ioc.ioc_regs.ll_halt);	\
+	init_halt = readl((_bna)->ioceth.ioc.ioc_regs.ll_halt);	\
+} while (0)
+
+#define bna_intx_disable(_bna, _cur_mask)				\
+{									\
+	(_cur_mask) = readl((_bna)->regs.fn_int_mask);		\
+	writel(0xffffffff, (_bna)->regs.fn_int_mask);		\
+}
+
+#define bna_intx_enable(bna, new_mask)					\
+	writel((new_mask), (bna)->regs.fn_int_mask)
+#define bna_mbox_intr_disable(bna)					\
+do {									\
+	u32 mask;							\
+	mask = readl((bna)->regs.fn_int_mask);				\
+	writel((mask | (bna)->bits.mbox_mask_bits |			\
+		(bna)->bits.error_mask_bits), (bna)->regs.fn_int_mask); \
+	mask = readl((bna)->regs.fn_int_mask);				\
+} while (0)
+
+#define bna_mbox_intr_enable(bna)					\
+do {									\
+	u32 mask;							\
+	mask = readl((bna)->regs.fn_int_mask);				\
+	writel((mask & ~((bna)->bits.mbox_mask_bits |			\
+		(bna)->bits.error_mask_bits)), (bna)->regs.fn_int_mask);\
+	mask = readl((bna)->regs.fn_int_mask);				\
+} while (0)
+
+#define bna_intr_status_get(_bna, _status)				\
+{									\
+	(_status) = readl((_bna)->regs.fn_int_status);			\
+	if (_status) {							\
+		writel(((_status) & ~(_bna)->bits.mbox_status_bits),	\
+			(_bna)->regs.fn_int_status);			\
+	}								\
+}
+
+/*
+ * MAX ACK EVENTS : No. of acks that can be accumulated in driver,
+ * before acking to h/w. The no. of bits is 16 in the doorbell register,
+ * however we keep this limited to 15 bits.
+ * This is because around the edge of 64K boundary (16 bits), one
+ * single poll can make the accumulated ACK counter cross the 64K boundary,
+ * causing problems, when we try to ack with a value greater than 64K.
+ * 15 bits (32K) should be large enough to accumulate anyway, and the max.
+ * acked events to h/w can be (32K + max poll weight) (currently 64).
+ */
+#define	BNA_IB_MAX_ACK_EVENTS		(1 << 15)
+
+/* These macros build the data portion of the TxQ/RxQ doorbell */
+#define BNA_DOORBELL_Q_PRD_IDX(_pi)	(0x80000000 | (_pi))
+#define BNA_DOORBELL_Q_STOP		(0x40000000)
+
+/* These macros build the data portion of the IB doorbell */
+#define BNA_DOORBELL_IB_INT_ACK(_timeout, _events)			\
+	(0x80000000 | ((_timeout) << 16) | (_events))
+#define BNA_DOORBELL_IB_INT_DISABLE	(0x40000000)
+
+/* Set the coalescing timer for the given ib */
+#define bna_ib_coalescing_timer_set(_i_dbell, _cls_timer)		\
+	((_i_dbell)->doorbell_ack = BNA_DOORBELL_IB_INT_ACK((_cls_timer), 0));
+
+/* Acks 'events' # of events for a given ib while disabling interrupts */
+#define bna_ib_ack_disable_irq(_i_dbell, _events)			\
+	(writel(BNA_DOORBELL_IB_INT_ACK(0, (_events)), \
+		(_i_dbell)->doorbell_addr));
+
+/* Acks 'events' # of events for a given ib */
+#define bna_ib_ack(_i_dbell, _events)					\
+	(writel(((_i_dbell)->doorbell_ack | (_events)), \
+		(_i_dbell)->doorbell_addr));
+
+#define bna_ib_start(_bna, _ib, _is_regular)				\
+{									\
+	u32 intx_mask;						\
+	struct bna_ib *ib = _ib;					\
+	if ((ib->intr_type == BNA_INTR_T_INTX)) {			\
+		bna_intx_disable((_bna), intx_mask);			\
+		intx_mask &= ~(ib->intr_vector);			\
+		bna_intx_enable((_bna), intx_mask);			\
+	}								\
+	bna_ib_coalescing_timer_set(&ib->door_bell,			\
+			ib->coalescing_timeo);				\
+	if (_is_regular)						\
+		bna_ib_ack(&ib->door_bell, 0);				\
+}
+
+#define bna_ib_stop(_bna, _ib)						\
+{									\
+	u32 intx_mask;						\
+	struct bna_ib *ib = _ib;					\
+	writel(BNA_DOORBELL_IB_INT_DISABLE,				\
+		ib->door_bell.doorbell_addr);				\
+	if (ib->intr_type == BNA_INTR_T_INTX) {				\
+		bna_intx_disable((_bna), intx_mask);			\
+		intx_mask |= ib->intr_vector;				\
+		bna_intx_enable((_bna), intx_mask);			\
+	}								\
+}
+
+#define bna_txq_prod_indx_doorbell(_tcb)				\
+	(writel(BNA_DOORBELL_Q_PRD_IDX((_tcb)->producer_index), \
+		(_tcb)->q_dbell));
+
+#define bna_rxq_prod_indx_doorbell(_rcb)				\
+	(writel(BNA_DOORBELL_Q_PRD_IDX((_rcb)->producer_index), \
+		(_rcb)->q_dbell));
+
+/**
+ *
+ * TxQ, RxQ, CQ related bits, offsets, macros
+ *
+ */
+
+/* TxQ Entry Opcodes */
+#define BNA_TXQ_WI_SEND			(0x402)	/* Single Frame Transmission */
+#define BNA_TXQ_WI_SEND_LSO		(0x403)	/* Multi-Frame Transmission */
+#define BNA_TXQ_WI_EXTENSION		(0x104)	/* Extension WI */
+
+/* TxQ Entry Control Flags */
+#define BNA_TXQ_WI_CF_FCOE_CRC		(1 << 8)
+#define BNA_TXQ_WI_CF_IPID_MODE		(1 << 5)
+#define BNA_TXQ_WI_CF_INS_PRIO		(1 << 4)
+#define BNA_TXQ_WI_CF_INS_VLAN		(1 << 3)
+#define BNA_TXQ_WI_CF_UDP_CKSUM		(1 << 2)
+#define BNA_TXQ_WI_CF_TCP_CKSUM		(1 << 1)
+#define BNA_TXQ_WI_CF_IP_CKSUM		(1 << 0)
+
+#define BNA_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \
+		(((_hdr_size) << 10) | ((_offset) & 0x3FF))
+
+/*
+ * Completion Q defines
+ */
+/* CQ Entry Flags */
+#define	BNA_CQ_EF_MAC_ERROR	(1 <<  0)
+#define	BNA_CQ_EF_FCS_ERROR	(1 <<  1)
+#define	BNA_CQ_EF_TOO_LONG	(1 <<  2)
+#define	BNA_CQ_EF_FC_CRC_OK	(1 <<  3)
+
+#define	BNA_CQ_EF_RSVD1		(1 <<  4)
+#define	BNA_CQ_EF_L4_CKSUM_OK	(1 <<  5)
+#define	BNA_CQ_EF_L3_CKSUM_OK	(1 <<  6)
+#define	BNA_CQ_EF_HDS_HEADER	(1 <<  7)
+
+#define	BNA_CQ_EF_UDP		(1 <<  8)
+#define	BNA_CQ_EF_TCP		(1 <<  9)
+#define	BNA_CQ_EF_IP_OPTIONS	(1 << 10)
+#define	BNA_CQ_EF_IPV6		(1 << 11)
+
+#define	BNA_CQ_EF_IPV4		(1 << 12)
+#define	BNA_CQ_EF_VLAN		(1 << 13)
+#define	BNA_CQ_EF_RSS		(1 << 14)
+#define	BNA_CQ_EF_RSVD2		(1 << 15)
+
+#define	BNA_CQ_EF_MCAST_MATCH   (1 << 16)
+#define	BNA_CQ_EF_MCAST		(1 << 17)
+#define BNA_CQ_EF_BCAST		(1 << 18)
+#define	BNA_CQ_EF_REMOTE	(1 << 19)
+
+#define	BNA_CQ_EF_LOCAL		(1 << 20)
+
+/**
+ *
+ * Data structures
+ *
+ */
+
+struct bna_reg_offset {
+	u32 fn_int_status;
+	u32 fn_int_mask;
+};
+
+struct bna_bit_defn {
+	u32 mbox_status_bits;
+	u32 mbox_mask_bits;
+	u32 error_status_bits;
+	u32 error_mask_bits;
+	u32 halt_status_bits;
+	u32 halt_mask_bits;
+};
+
+struct bna_reg {
+	void __iomem *fn_int_status;
+	void __iomem *fn_int_mask;
+};
+
+/* TxQ Vector (a.k.a. Tx-Buffer Descriptor) */
+struct bna_dma_addr {
+	u32		msb;
+	u32		lsb;
+};
+
+struct bna_txq_wi_vector {
+	u16		reserved;
+	u16		length;		/* Only 14 LSB are valid */
+	struct bna_dma_addr host_addr; /* Tx-Buf DMA addr */
+};
+
+/**
+ *  TxQ Entry Structure
+ *
+ *  BEWARE:  Load values into this structure with correct endianness.
+ */
+struct bna_txq_entry {
+	union {
+		struct {
+			u8 reserved;
+			u8 num_vectors;	/* number of vectors present */
+			u16 opcode; /* Either */
+						    /* BNA_TXQ_WI_SEND or */
+						    /* BNA_TXQ_WI_SEND_LSO */
+			u16 flags; /* OR of all the flags */
+			u16 l4_hdr_size_n_offset;
+			u16 vlan_tag;
+			u16 lso_mss;	/* Only 14 LSB are valid */
+			u32 frame_length;	/* Only 24 LSB are valid */
+		} wi;
+
+		struct {
+			u16 reserved;
+			u16 opcode; /* Must be */
+						    /* BNA_TXQ_WI_EXTENSION */
+			u32 reserved2[3];	/* Place holder for */
+						/* removed vector (12 bytes) */
+		} wi_ext;
+	} hdr;
+	struct bna_txq_wi_vector vector[4];
+};
+
+/* RxQ Entry Structure */
+struct bna_rxq_entry {		/* Rx-Buffer */
+	struct bna_dma_addr host_addr; /* Rx-Buffer DMA address */
+};
+
+/* CQ Entry Structure */
+struct bna_cq_entry {
+	u32 flags;
+	u16 vlan_tag;
+	u16 length;
+	u32 rss_hash;
+	u8 valid;
+	u8 reserved1;
+	u8 reserved2;
+	u8 rxq_id;
+};
+
+#endif /* __BNA_HW_DEFS_H__ */
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 5/8] bna: ENET and Tx Rx Redesign Enablement
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (3 preceding siblings ...)
  2011-08-09  2:21 ` [PATCH 4/8] bna: Add New HW Defs Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 6/8] bna: Remove Unused Code Rasesh Mody
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
This patch contains additional structure and function definition changes
that are required to enable the new msgq/enet/txrx redesign introduced
by the previous 4 patches.
 - structure and function definition changes to header files as a result
   of Ethport, Enet, IOCEth, Tx, Rx redesign.
 - ethtool changes to use new enet functions and definitions
 - Set number of Tx and Rx queues based on underlying hardware. Define
   separate macros for maximum and supported numbers of Tx and Rx queues
   based on underlying hardware. Take VLAN header into account for MTU
   calculation. Default to INTx mode when pci_enable_msix() fails. Set a
   bit in Rx poll routine, check and wait for that bit to be cleared in
   the cleanup routine before proceeding.
 - The Tx and Rx coalesce settings are programmed in steps of 5 us. Values
   that are not divisible by 5 were rounded down to the next lower multiple,
   so values of 1 to 4 were rounded to 0, which is an invalid setting (see
   the illustrative sketch after this list). When creating Rx and Tx objects,
   we are currently assigning the default values of Rx and Tx
   coalescing_timeo. If these values are changed in the driver, the change is
   lost during operations such as MTU change. To avoid that, pass the
   configured value of coalescing_timeo before Rx and Tx object creation. Fix
   bnad_tx_coalescing_timeo_set() so it applies to all the Tx objects.
 - Reorganize the uninitialization path in case of pci_probe failure.
 - Hardware clock setup changes to pass asic generation, port modes and
   asic mode to firmware as part of the firmware boot parameters.
 - FW mailbox interface changes to define asic-specific mailbox interfaces.
   h/w mailbox interfaces take 8-bit FIDs and 2-bit port id for owner. Cleaned
   up mailbox definitions and usage for new and old HW. Eliminated usage of
   ASIC ID. MSI-X vector assignment and programming done by firmware. Fixed
   host offsets for CPE/RME queue registers.
 - Implement a polling mechanism for FW readiness; this poll mechanism
   replaces the current interrupt-based FW READY method. The timer-based poll
   routine in the IOC queries the ioc_fwstate register to see if there is a
   state change in FW, and sends the READY event. Removed the infrastructure
   needed to support the mbox READY event from FW, as well as the IOC code.
 - Move FW init to HW init. Handle the case where PCI mapping goes away when
   IOCPF state machine is waiting for semaphore.
 - Add an IOC mbox callback to the client indicating that the command has
   been sent.
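
For illustration only (not taken from this patch): a hedged sketch of one way
to convert a user-supplied coalescing value in microseconds into 5 us hardware
units without letting 1-4 us collapse to the invalid value 0.
BFI_COALESCING_TIMER_UNIT and BFI_MAX_COALESCING_TIMEO come from
bna_hw_defs.h; the helper name is made up for this example.

	/* Hypothetical helper, not part of the patch: round small non-zero
	 * values up to one 5 us unit and clamp to the maximum, so a request
	 * of 1-4 us no longer becomes the invalid setting 0.
	 */
	static u32 sketch_usecs_to_coalescing_timeo(u32 usecs)
	{
		u32 timeo = usecs / BFI_COALESCING_TIMER_UNIT; /* 1-4 us -> 0 */

		if (usecs && timeo == 0)
			timeo = 1;	/* round up instead of down */
		if (timeo > BFI_MAX_COALESCING_TIMEO)
			timeo = BFI_MAX_COALESCING_TIMEO;
		return timeo;
	}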

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/Makefile            |    2 +-
 drivers/net/bna/bfa_defs.h          |   25 ++-
 drivers/net/bna/bfa_defs_mfg_comm.h |   20 +-
 drivers/net/bna/bfa_ioc.c           |  389 ++++++++++++++--------
 drivers/net/bna/bfa_ioc.h           |   36 ++-
 drivers/net/bna/bfa_ioc_ct.c        |   41 +--
 drivers/net/bna/bfi.h               |   74 ++++-
 drivers/net/bna/bna.h               |  224 ++++++++++--
 drivers/net/bna/bna_types.h         |  494 ++++++++++++++++-----------
 drivers/net/bna/bnad.c              |  643 +++++++++++++++++++++--------------
 drivers/net/bna/bnad.h              |   36 ++-
 drivers/net/bna/bnad_ethtool.c      |   65 ++--
 drivers/net/bna/cna.h               |   31 ++-
 13 files changed, 1337 insertions(+), 743 deletions(-)

diff --git a/drivers/net/bna/Makefile b/drivers/net/bna/Makefile
index d501f52..74d3abc 100644
--- a/drivers/net/bna/Makefile
+++ b/drivers/net/bna/Makefile
@@ -5,7 +5,7 @@
 
 obj-$(CONFIG_BNA) += bna.o
 
-bna-objs := bnad.o bnad_ethtool.o bna_ctrl.o bna_txrx.o
+bna-objs := bnad.o bnad_ethtool.o bna_enet.o bna_tx_rx.o
 bna-objs += bfa_msgq.o bfa_ioc.o bfa_ioc_ct.o bfa_cee.o
 bna-objs += cna_fwimg.o
 
diff --git a/drivers/net/bna/bfa_defs.h b/drivers/net/bna/bfa_defs.h
index b080b36..205b92b 100644
--- a/drivers/net/bna/bfa_defs.h
+++ b/drivers/net/bna/bfa_defs.h
@@ -124,6 +124,7 @@ enum bfa_ioc_state {
 	BFA_IOC_DISABLED	= 10,	/*!< IOC is disabled */
 	BFA_IOC_FWMISMATCH	= 11,	/*!< IOC f/w different from drivers */
 	BFA_IOC_ENABLING	= 12,	/*!< IOC is being enabled */
+	BFA_IOC_HWFAIL		= 13,	/*!< PCI mapping doesn't exist */
 };
 
 /**
@@ -179,8 +180,19 @@ struct bfa_ioc_attr {
 	struct bfa_adapter_attr adapter_attr;	/*!< HBA attributes */
 	struct bfa_ioc_driver_attr driver_attr;	/*!< driver attr    */
 	struct bfa_ioc_pci_attr pci_attr;
-	u8				port_id;	/*!< port number    */
-	u8				rsvd[7];	/*!< 64bit align    */
+	u8				port_id;	/*!< port number */
+	u8				port_mode;	/*!< enum bfa_mode */
+	u8				cap_bm;		/*!< capability */
+	u8				port_mode_cfg;	/*!< enum bfa_mode */
+	u8				rsvd[4];	/*!< 64bit align */
+};
+
+/**
+ * Adapter capability mask definition
+ */
+enum {
+	BFA_CM_HBA	=	0x01,
+	BFA_CM_CNA	=	0x02,
 };
 
 /**
@@ -228,7 +240,7 @@ struct bfa_mfg_block {
 	mac_t		mfg_mac;	/*!< mac address */
 	u8		num_mac;	/*!< number of mac addresses */
 	u8		rsv2;
-	u32	mfg_type;	/*!< card type */
+	u32		card_type;	/*!< card type */
 	u8		rsv3[108];
 	u8		md5_chksum[BFA_MFG_CHKSUM_SIZE]; /*!< md5 checksum */
 };
@@ -242,5 +254,12 @@ struct bfa_mfg_block {
 #define bfa_asic_id_ct(devid)			\
 	((devid) == PCI_DEVICE_ID_BROCADE_CT ||	\
 	(devid) == PCI_DEVICE_ID_BROCADE_CT_FC)
+#define bfa_asic_id_ctc(devid) (bfa_asic_id_ct(devid))
+
+enum bfa_mode {
+	BFA_MODE_HBA		= 1,
+	BFA_MODE_CNA		= 2,
+	BFA_MODE_NIC		= 3
+};
 
 #endif /* __BFA_DEFS_H__ */
diff --git a/drivers/net/bna/bfa_defs_mfg_comm.h b/drivers/net/bna/bfa_defs_mfg_comm.h
index 885ef3a..f84d8f6 100644
--- a/drivers/net/bna/bfa_defs_mfg_comm.h
+++ b/drivers/net/bna/bfa_defs_mfg_comm.h
@@ -19,11 +19,12 @@
 #define __BFA_DEFS_MFG_COMM_H__
 
 #include "cna.h"
+#include "bfa_defs.h"
 
 /**
  * Manufacturing block version
  */
-#define BFA_MFG_VERSION				2
+#define BFA_MFG_VERSION				3
 #define BFA_MFG_VERSION_UNINIT			0xFF
 
 /**
@@ -95,27 +96,14 @@ enum {
 	(type) == BFA_MFG_TYPE_CNA10P1 || \
 	bfa_mfg_is_mezz(type)))
 
-#define bfa_mfg_adapter_prop_init_flash(card_type, prop)	\
+#define bfa_mfg_adapter_prop_init_flash_ct(mfgblk, prop)	\
 do {								\
-	switch ((card_type)) {					\
-	case BFA_MFG_TYPE_FC8P2:				\
+	switch ((mfgblk)->card_type) {				\
 	case BFA_MFG_TYPE_JAYHAWK:				\
 	case BFA_MFG_TYPE_ASTRA:				\
 		(prop) = BFI_ADAPTER_SETP(NPORTS, 2) |		\
 			BFI_ADAPTER_SETP(SPEED, 8);		\
 		break;						\
-	case BFA_MFG_TYPE_FC8P1:				\
-		(prop) = BFI_ADAPTER_SETP(NPORTS, 1) |		\
-			BFI_ADAPTER_SETP(SPEED, 8);		\
-		break;						\
-	case BFA_MFG_TYPE_FC4P2:				\
-		(prop) = BFI_ADAPTER_SETP(NPORTS, 2) |		\
-			BFI_ADAPTER_SETP(SPEED, 4);		\
-		break;						\
-	case BFA_MFG_TYPE_FC4P1:				\
-		(prop) = BFI_ADAPTER_SETP(NPORTS, 1) |		\
-			BFI_ADAPTER_SETP(SPEED, 4);		\
-		break;						\
 	case BFA_MFG_TYPE_CNA10P2:				\
 	case BFA_MFG_TYPE_WANCHESE:				\
 	case BFA_MFG_TYPE_LIGHTNING_P0:				\
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index 2d5c4fd..029fb52 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -62,6 +62,7 @@ static void bfa_ioc_hw_sem_init(struct bfa_ioc *ioc);
 static void bfa_ioc_hw_sem_get(struct bfa_ioc *ioc);
 static void bfa_ioc_hw_sem_get_cancel(struct bfa_ioc *ioc);
 static void bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force);
+static void bfa_ioc_poll_fwinit(struct bfa_ioc *ioc);
 static void bfa_ioc_send_enable(struct bfa_ioc *ioc);
 static void bfa_ioc_send_disable(struct bfa_ioc *ioc);
 static void bfa_ioc_send_getattr(struct bfa_ioc *ioc);
@@ -78,8 +79,8 @@ static void bfa_ioc_lpu_stop(struct bfa_ioc *ioc);
 static void bfa_ioc_fail_notify(struct bfa_ioc *ioc);
 static void bfa_ioc_pf_enabled(struct bfa_ioc *ioc);
 static void bfa_ioc_pf_disabled(struct bfa_ioc *ioc);
-static void bfa_ioc_pf_initfailed(struct bfa_ioc *ioc);
 static void bfa_ioc_pf_failed(struct bfa_ioc *ioc);
+static void bfa_ioc_pf_hwfailed(struct bfa_ioc *ioc);
 static void bfa_ioc_pf_fwmismatch(struct bfa_ioc *ioc);
 static void bfa_ioc_boot(struct bfa_ioc *ioc, u32 boot_type,
 			 u32 boot_param);
@@ -108,11 +109,11 @@ enum ioc_event {
 	IOC_E_ENABLED		= 5,	/*!< f/w enabled		*/
 	IOC_E_FWRSP_GETATTR	= 6,	/*!< IOC get attribute response	*/
 	IOC_E_DISABLED		= 7,	/*!< f/w disabled		*/
-	IOC_E_INITFAILED	= 8,	/*!< failure notice by iocpf sm	*/
-	IOC_E_PFFAILED		= 9,	/*!< failure notice by iocpf sm	*/
-	IOC_E_HBFAIL		= 10,	/*!< heartbeat failure		*/
-	IOC_E_HWERROR		= 11,	/*!< hardware error interrupt	*/
-	IOC_E_TIMEOUT		= 12,	/*!< timeout			*/
+	IOC_E_PFFAILED		= 8,	/*!< failure notice by iocpf sm	*/
+	IOC_E_HBFAIL		= 9,	/*!< heartbeat failure		*/
+	IOC_E_HWERROR		= 10,	/*!< hardware error interrupt	*/
+	IOC_E_TIMEOUT		= 11,	/*!< timeout			*/
+	IOC_E_HWFAILED		= 12,	/*!< PCI mapping failure notice	*/
 };
 
 bfa_fsm_state_decl(bfa_ioc, uninit, struct bfa_ioc, enum ioc_event);
@@ -124,6 +125,7 @@ bfa_fsm_state_decl(bfa_ioc, fail_retry, struct bfa_ioc, enum ioc_event);
 bfa_fsm_state_decl(bfa_ioc, fail, struct bfa_ioc, enum ioc_event);
 bfa_fsm_state_decl(bfa_ioc, disabling, struct bfa_ioc, enum ioc_event);
 bfa_fsm_state_decl(bfa_ioc, disabled, struct bfa_ioc, enum ioc_event);
+bfa_fsm_state_decl(bfa_ioc, hwfail, struct bfa_ioc, enum ioc_event);
 
 static struct bfa_sm_table ioc_sm_table[] = {
 	{BFA_SM(bfa_ioc_sm_uninit), BFA_IOC_UNINIT},
@@ -135,6 +137,7 @@ static struct bfa_sm_table ioc_sm_table[] = {
 	{BFA_SM(bfa_ioc_sm_fail), BFA_IOC_FAIL},
 	{BFA_SM(bfa_ioc_sm_disabling), BFA_IOC_DISABLING},
 	{BFA_SM(bfa_ioc_sm_disabled), BFA_IOC_DISABLED},
+	{BFA_SM(bfa_ioc_sm_hwfail), BFA_IOC_HWFAIL},
 };
 
 /**
@@ -166,6 +169,7 @@ enum iocpf_event {
 	IOCPF_E_GETATTRFAIL	= 9,	/*!< init fail notice by ioc sm	*/
 	IOCPF_E_SEMLOCKED	= 10,   /*!< h/w semaphore is locked	*/
 	IOCPF_E_TIMEOUT		= 11,   /*!< f/w response timeout	*/
+	IOCPF_E_SEM_ERROR	= 12,   /*!< h/w sem mapping error	*/
 };
 
 /**
@@ -300,11 +304,16 @@ bfa_ioc_sm_enabling(struct bfa_ioc *ioc, enum ioc_event event)
 		/* !!! fall through !!! */
 	case IOC_E_HWERROR:
 		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
-		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail_retry);
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail);
 		if (event != IOC_E_PFFAILED)
 			bfa_iocpf_initfail(ioc);
 		break;
 
+	case IOC_E_HWFAILED:
+		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_hwfail);
+		break;
+
 	case IOC_E_DISABLE:
 		bfa_fsm_set_state(ioc, bfa_ioc_sm_disabling);
 		break;
@@ -343,6 +352,7 @@ bfa_ioc_sm_getattr(struct bfa_ioc *ioc, enum ioc_event event)
 	case IOC_E_FWRSP_GETATTR:
 		del_timer(&ioc->ioc_timer);
 		bfa_ioc_check_attr_wwns(ioc);
+		bfa_ioc_hb_monitor(ioc);
 		bfa_fsm_set_state(ioc, bfa_ioc_sm_op);
 		break;
 
@@ -352,7 +362,7 @@ bfa_ioc_sm_getattr(struct bfa_ioc *ioc, enum ioc_event event)
 		/* fall through */
 	case IOC_E_TIMEOUT:
 		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
-		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail_retry);
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail);
 		if (event != IOC_E_PFFAILED)
 			bfa_iocpf_getattrfail(ioc);
 		break;
@@ -374,7 +384,7 @@ static void
 bfa_ioc_sm_op_entry(struct bfa_ioc *ioc)
 {
 	ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_OK);
-	bfa_ioc_hb_monitor(ioc);
+	bfa_ioc_event_notify(ioc, BFA_IOC_E_ENABLED);
 }
 
 static void
@@ -394,12 +404,13 @@ bfa_ioc_sm_op(struct bfa_ioc *ioc, enum ioc_event event)
 		bfa_ioc_hb_stop(ioc);
 		/* !!! fall through !!! */
 	case IOC_E_HBFAIL:
-		bfa_ioc_fail_notify(ioc);
 		if (ioc->iocpf.auto_recover)
 			bfa_fsm_set_state(ioc, bfa_ioc_sm_fail_retry);
 		else
 			bfa_fsm_set_state(ioc, bfa_ioc_sm_fail);
 
+		bfa_ioc_fail_notify(ioc);
+
 		if (event != IOC_E_PFFAILED)
 			bfa_iocpf_fail(ioc);
 		break;
@@ -435,6 +446,11 @@ bfa_ioc_sm_disabling(struct bfa_ioc *ioc, enum ioc_event event)
 		bfa_iocpf_fail(ioc);
 		break;
 
+	case IOC_E_HWFAILED:
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_hwfail);
+		bfa_ioc_disable_comp(ioc);
+		break;
+
 	default:
 		bfa_sm_fault(event);
 	}
@@ -493,12 +509,14 @@ bfa_ioc_sm_fail_retry(struct bfa_ioc *ioc, enum ioc_event event)
 		 * Initialization retry failed.
 		 */
 		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail);
 		if (event != IOC_E_PFFAILED)
 			bfa_iocpf_initfail(ioc);
 		break;
 
-	case IOC_E_INITFAILED:
-		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail);
+	case IOC_E_HWFAILED:
+		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_hwfail);
 		break;
 
 	case IOC_E_ENABLE:
@@ -552,6 +570,36 @@ bfa_ioc_sm_fail(struct bfa_ioc *ioc, enum ioc_event event)
 	}
 }
 
+static void
+bfa_ioc_sm_hwfail_entry(struct bfa_ioc *ioc)
+{
+}
+
+/**
+ * IOC hardware failure.
+ */
+static void
+bfa_ioc_sm_hwfail(struct bfa_ioc *ioc, enum ioc_event event)
+{
+	switch (event) {
+
+	case IOC_E_ENABLE:
+		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
+		break;
+
+	case IOC_E_DISABLE:
+		ioc->cbfn->disable_cbfn(ioc->bfa);
+		break;
+
+	case IOC_E_DETACH:
+		bfa_fsm_set_state(ioc, bfa_ioc_sm_uninit);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
 /**
  * IOCPF State Machine
  */
@@ -562,7 +610,7 @@ bfa_ioc_sm_fail(struct bfa_ioc *ioc, enum ioc_event event)
 static void
 bfa_iocpf_sm_reset_entry(struct bfa_iocpf *iocpf)
 {
-	iocpf->retry_count = 0;
+	iocpf->fw_mismatch_notified = false;
 	iocpf->auto_recover = bfa_nw_auto_recover;
 }
 
@@ -607,7 +655,6 @@ bfa_iocpf_sm_fwcheck(struct bfa_iocpf *iocpf, enum iocpf_event event)
 	case IOCPF_E_SEMLOCKED:
 		if (bfa_ioc_firmware_lock(ioc)) {
 			if (bfa_ioc_sync_start(ioc)) {
-				iocpf->retry_count = 0;
 				bfa_ioc_sync_join(ioc);
 				bfa_fsm_set_state(iocpf, bfa_iocpf_sm_hwinit);
 			} else {
@@ -622,6 +669,11 @@ bfa_iocpf_sm_fwcheck(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		}
 		break;
 
+	case IOCPF_E_SEM_ERROR:
+		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail);
+		bfa_ioc_pf_hwfailed(ioc);
+		break;
+
 	case IOCPF_E_DISABLE:
 		bfa_ioc_hw_sem_get_cancel(ioc);
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_reset);
@@ -645,10 +697,10 @@ static void
 bfa_iocpf_sm_mismatch_entry(struct bfa_iocpf *iocpf)
 {
 	/* Call only the first time sm enters fwmismatch state. */
-	if (iocpf->retry_count == 0)
+	if (iocpf->fw_mismatch_notified == false)
 		bfa_ioc_pf_fwmismatch(iocpf->ioc);
 
-	iocpf->retry_count++;
+	iocpf->fw_mismatch_notified = true;
 	mod_timer(&(iocpf->ioc)->iocpf_timer, jiffies +
 		msecs_to_jiffies(BFA_IOC_TOV));
 }
@@ -711,6 +763,11 @@ bfa_iocpf_sm_semwait(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		}
 		break;
 
+	case IOCPF_E_SEM_ERROR:
+		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail);
+		bfa_ioc_pf_hwfailed(ioc);
+		break;
+
 	case IOCPF_E_DISABLE:
 		bfa_ioc_hw_sem_get_cancel(ioc);
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_disabling_sync);
@@ -724,8 +781,7 @@ bfa_iocpf_sm_semwait(struct bfa_iocpf *iocpf, enum iocpf_event event)
 static void
 bfa_iocpf_sm_hwinit_entry(struct bfa_iocpf *iocpf)
 {
-	mod_timer(&(iocpf->ioc)->iocpf_timer, jiffies +
-		msecs_to_jiffies(BFA_IOC_TOV));
+	iocpf->poll_time = 0;
 	bfa_ioc_reset(iocpf->ioc, 0);
 }
 
@@ -740,19 +796,11 @@ bfa_iocpf_sm_hwinit(struct bfa_iocpf *iocpf, enum iocpf_event event)
 
 	switch (event) {
 	case IOCPF_E_FWREADY:
-		del_timer(&ioc->iocpf_timer);
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_enabling);
 		break;
 
-	case IOCPF_E_INITFAIL:
-		del_timer(&ioc->iocpf_timer);
-		/*
-		 * !!! fall through !!!
-		 */
-
 	case IOCPF_E_TIMEOUT:
 		bfa_nw_ioc_hw_sem_release(ioc);
-		if (event == IOCPF_E_TIMEOUT)
 			bfa_ioc_pf_failed(ioc);
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_initfail_sync);
 		break;
@@ -774,6 +822,10 @@ bfa_iocpf_sm_enabling_entry(struct bfa_iocpf *iocpf)
 {
 	mod_timer(&(iocpf->ioc)->iocpf_timer, jiffies +
 		msecs_to_jiffies(BFA_IOC_TOV));
+	/**
+	 * Enable Interrupts before sending fw IOC ENABLE cmd.
+	 */
+	iocpf->ioc->cbfn->reset_cbfn(iocpf->ioc->bfa);
 	bfa_ioc_send_enable(iocpf->ioc);
 }
 
@@ -811,21 +863,11 @@ bfa_iocpf_sm_enabling(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_disabling);
 		break;
 
-	case IOCPF_E_FWREADY:
-		bfa_ioc_send_enable(ioc);
-		break;
-
 	default:
 		bfa_sm_fault(event);
 	}
 }
 
-static bool
-bfa_nw_ioc_is_operational(struct bfa_ioc *ioc)
-{
-	return bfa_fsm_cmp_state(ioc, bfa_ioc_sm_op);
-}
-
 static void
 bfa_iocpf_sm_ready_entry(struct bfa_iocpf *iocpf)
 {
@@ -835,8 +877,6 @@ bfa_iocpf_sm_ready_entry(struct bfa_iocpf *iocpf)
 static void
 bfa_iocpf_sm_ready(struct bfa_iocpf *iocpf, enum iocpf_event event)
 {
-	struct bfa_ioc *ioc = iocpf->ioc;
-
 	switch (event) {
 	case IOCPF_E_DISABLE:
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_disabling);
@@ -850,14 +890,6 @@ bfa_iocpf_sm_ready(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail_sync);
 		break;
 
-	case IOCPF_E_FWREADY:
-		bfa_ioc_pf_failed(ioc);
-		if (bfa_nw_ioc_is_operational(ioc))
-			bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail_sync);
-		else
-			bfa_fsm_set_state(iocpf, bfa_iocpf_sm_initfail_sync);
-		break;
-
 	default:
 		bfa_sm_fault(event);
 	}
@@ -881,7 +913,6 @@ bfa_iocpf_sm_disabling(struct bfa_iocpf *iocpf, enum iocpf_event event)
 
 	switch (event) {
 	case IOCPF_E_FWRSP_DISABLE:
-	case IOCPF_E_FWREADY:
 		del_timer(&ioc->iocpf_timer);
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_disabling_sync);
 		break;
@@ -926,6 +957,11 @@ bfa_iocpf_sm_disabling_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_disabled);
 		break;
 
+	case IOCPF_E_SEM_ERROR:
+		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail);
+		bfa_ioc_pf_hwfailed(ioc);
+		break;
+
 	case IOCPF_E_FAIL:
 		break;
 
@@ -951,7 +987,6 @@ bfa_iocpf_sm_disabled(struct bfa_iocpf *iocpf, enum iocpf_event event)
 
 	switch (event) {
 	case IOCPF_E_ENABLE:
-		iocpf->retry_count = 0;
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_semwait);
 		break;
 
@@ -982,20 +1017,15 @@ bfa_iocpf_sm_initfail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 	switch (event) {
 	case IOCPF_E_SEMLOCKED:
 		bfa_ioc_notify_fail(ioc);
-		bfa_ioc_sync_ack(ioc);
-		iocpf->retry_count++;
-		if (iocpf->retry_count >= BFA_IOC_HWINIT_MAX) {
-			bfa_ioc_sync_leave(ioc);
-			bfa_nw_ioc_hw_sem_release(ioc);
-			bfa_fsm_set_state(iocpf, bfa_iocpf_sm_initfail);
-		} else {
-			if (bfa_ioc_sync_complete(ioc))
-				bfa_fsm_set_state(iocpf, bfa_iocpf_sm_hwinit);
-			else {
-				bfa_nw_ioc_hw_sem_release(ioc);
-				bfa_fsm_set_state(iocpf, bfa_iocpf_sm_semwait);
-			}
-		}
+		bfa_ioc_sync_leave(ioc);
+		writel(BFI_IOC_FAIL, ioc->ioc_regs.ioc_fwstate);
+		bfa_nw_ioc_hw_sem_release(ioc);
+		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_initfail);
+		break;
+
+	case IOCPF_E_SEM_ERROR:
+		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail);
+		bfa_ioc_pf_hwfailed(ioc);
 		break;
 
 	case IOCPF_E_DISABLE:
@@ -1020,7 +1050,6 @@ bfa_iocpf_sm_initfail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 static void
 bfa_iocpf_sm_initfail_entry(struct bfa_iocpf *iocpf)
 {
-	bfa_ioc_pf_initfailed(iocpf->ioc);
 }
 
 /**
@@ -1071,11 +1100,11 @@ bfa_iocpf_sm_fail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 
 	switch (event) {
 	case IOCPF_E_SEMLOCKED:
-		iocpf->retry_count = 0;
 		bfa_ioc_sync_ack(ioc);
 		bfa_ioc_notify_fail(ioc);
 		if (!iocpf->auto_recover) {
 			bfa_ioc_sync_leave(ioc);
+			writel(BFI_IOC_FAIL, ioc->ioc_regs.ioc_fwstate);
 			bfa_nw_ioc_hw_sem_release(ioc);
 			bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail);
 		} else {
@@ -1088,6 +1117,11 @@ bfa_iocpf_sm_fail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		}
 		break;
 
+	case IOCPF_E_SEM_ERROR:
+		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_fail);
+		bfa_ioc_pf_hwfailed(ioc);
+		break;
+
 	case IOCPF_E_DISABLE:
 		bfa_ioc_hw_sem_get_cancel(ioc);
 		bfa_fsm_set_state(iocpf, bfa_iocpf_sm_disabling_sync);
@@ -1158,13 +1192,13 @@ bfa_nw_ioc_sem_get(void __iomem *sem_reg)
 
 	r32 = readl(sem_reg);
 
-	while (r32 && (cnt < BFA_SEM_SPINCNT)) {
+	while ((r32 & 1) && (cnt < BFA_SEM_SPINCNT)) {
 		cnt++;
 		udelay(2);
 		r32 = readl(sem_reg);
 	}
 
-	if (r32 == 0)
+	if (!(r32 & 1))
 		return true;
 
 	BUG_ON(!(cnt < BFA_SEM_SPINCNT));
@@ -1210,7 +1244,11 @@ bfa_ioc_hw_sem_get(struct bfa_ioc *ioc)
 	 * will return 1. Semaphore is released by writing 1 to the register
 	 */
 	r32 = readl(ioc->ioc_regs.ioc_sem_reg);
-	if (r32 == 0) {
+	if (r32 == ~0) {
+		bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_SEM_ERROR);
+		return;
+	}
+	if (!(r32 & 1)) {
 		bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_SEMLOCKED);
 		return;
 	}
@@ -1331,7 +1369,7 @@ bfa_nw_ioc_fwver_cmp(struct bfa_ioc *ioc, struct bfi_ioc_image_hdr *fwhdr)
 	int i;
 
 	drv_fwhdr = (struct bfi_ioc_image_hdr *)
-		bfa_cb_image_get_chunk(BFA_IOC_FWIMG_TYPE(ioc), 0);
+		bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), 0);
 
 	for (i = 0; i < BFI_IOC_MD5SUM_SZ; i++) {
 		if (fwhdr->md5sum[i] != drv_fwhdr->md5sum[i])
@@ -1352,12 +1390,12 @@ bfa_ioc_fwver_valid(struct bfa_ioc *ioc, u32 boot_env)
 
 	bfa_nw_ioc_fwver_get(ioc, &fwhdr);
 	drv_fwhdr = (struct bfi_ioc_image_hdr *)
-		bfa_cb_image_get_chunk(BFA_IOC_FWIMG_TYPE(ioc), 0);
+		bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), 0);
 
 	if (fwhdr.signature != drv_fwhdr->signature)
 		return false;
 
-	if (swab32(fwhdr.param) != boot_env)
+	if (swab32(fwhdr.bootenv) != boot_env)
 		return false;
 
 	return bfa_nw_ioc_fwver_cmp(ioc, &fwhdr);
@@ -1388,11 +1426,11 @@ bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force)
 
 	ioc_fwstate = readl(ioc->ioc_regs.ioc_fwstate);
 
-	boot_env = BFI_BOOT_LOADER_OS;
-
 	if (force)
 		ioc_fwstate = BFI_IOC_UNINIT;
 
+	boot_env = BFI_FWBOOT_ENV_OS;
+
 	/**
 	 * check if firmware is valid
 	 */
@@ -1400,7 +1438,8 @@ bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force)
 		false : bfa_ioc_fwver_valid(ioc, boot_env);
 
 	if (!fwvalid) {
-		bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, boot_env);
+		bfa_ioc_boot(ioc, BFI_FWBOOT_TYPE_NORMAL, boot_env);
+		bfa_ioc_poll_fwinit(ioc);
 		return;
 	}
 
@@ -1409,7 +1448,7 @@ bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force)
 	 * just wait for an initialization completion interrupt.
 	 */
 	if (ioc_fwstate == BFI_IOC_INITING) {
-		ioc->cbfn->reset_cbfn(ioc->bfa);
+		bfa_ioc_poll_fwinit(ioc);
 		return;
 	}
 
@@ -1423,7 +1462,6 @@ bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force)
 		 * be flushed. Otherwise MSI-X interrupts are not delivered.
 		 */
 		bfa_ioc_msgflush(ioc);
-		ioc->cbfn->reset_cbfn(ioc->bfa);
 		bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_FWREADY);
 		return;
 	}
@@ -1431,7 +1469,8 @@ bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force)
 	/**
 	 * Initialize the h/w for any other states.
 	 */
-	bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, boot_env);
+	bfa_ioc_boot(ioc, BFI_FWBOOT_TYPE_NORMAL, boot_env);
+	bfa_ioc_poll_fwinit(ioc);
 }
 
 void
@@ -1475,7 +1514,7 @@ bfa_ioc_send_enable(struct bfa_ioc *ioc)
 
 	bfi_h2i_set(enable_req.mh, BFI_MC_IOC, BFI_IOC_H2I_ENABLE_REQ,
 		    bfa_ioc_portid(ioc));
-	enable_req.ioc_class = ioc->ioc_mc;
+	enable_req.clscode = htons(ioc->clscode);
 	do_gettimeofday(&tv);
 	enable_req.tv_sec = ntohl(tv.tv_sec);
 	bfa_ioc_mbox_send(ioc, &enable_req, sizeof(struct bfi_ioc_ctrl_req));
@@ -1548,22 +1587,23 @@ bfa_ioc_download_fw(struct bfa_ioc *ioc, u32 boot_type,
 	u32 loff = 0;
 	u32 chunkno = 0;
 	u32 i;
+	u32 asicmode;
 
 	/**
 	 * Initialize LMEM first before code download
 	 */
 	bfa_ioc_lmem_init(ioc);
 
-	fwimg = bfa_cb_image_get_chunk(BFA_IOC_FWIMG_TYPE(ioc), chunkno);
+	fwimg = bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), chunkno);
 
 	pgnum = bfa_ioc_smem_pgnum(ioc, loff);
 
 	writel(pgnum, ioc->ioc_regs.host_page_num_fn);
 
-	for (i = 0; i < bfa_cb_image_get_size(BFA_IOC_FWIMG_TYPE(ioc)); i++) {
+	for (i = 0; i < bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc)); i++) {
 		if (BFA_IOC_FLASH_CHUNK_NO(i) != chunkno) {
 			chunkno = BFA_IOC_FLASH_CHUNK_NO(i);
-			fwimg = bfa_cb_image_get_chunk(BFA_IOC_FWIMG_TYPE(ioc),
+			fwimg = bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc),
 					BFA_IOC_FLASH_CHUNK_ADDR(chunkno));
 		}
 
@@ -1590,12 +1630,16 @@ bfa_ioc_download_fw(struct bfa_ioc *ioc, u32 boot_type,
 		      ioc->ioc_regs.host_page_num_fn);
 
 	/*
-	 * Set boot type and boot param at the end.
+	 * Set boot type, env and device mode at the end.
 	*/
+	asicmode = BFI_FWBOOT_DEVMODE(ioc->asic_gen, ioc->asic_mode,
+					ioc->port0_mode, ioc->port1_mode);
+	writel(asicmode, ((ioc->ioc_regs.smem_page_start)
+			+ BFI_FWBOOT_DEVMODE_OFF));
 	writel(boot_type, ((ioc->ioc_regs.smem_page_start)
-			+ (BFI_BOOT_TYPE_OFF)));
+			+ (BFI_FWBOOT_TYPE_OFF)));
 	writel(boot_env, ((ioc->ioc_regs.smem_page_start)
-			+ (BFI_BOOT_LOADER_OFF)));
+			+ (BFI_FWBOOT_ENV_OFF)));
 }
 
 static void
@@ -1605,6 +1649,20 @@ bfa_ioc_reset(struct bfa_ioc *ioc, bool force)
 }
 
 /**
+ * BFA ioc enable reply by firmware
+ */
+static void
+bfa_ioc_enable_reply(struct bfa_ioc *ioc, enum bfa_mode port_mode,
+			u8 cap_bm)
+{
+	struct bfa_iocpf *iocpf = &ioc->iocpf;
+
+	ioc->port_mode = ioc->port_mode_cfg = port_mode;
+	ioc->ad_cap_bm = cap_bm;
+	bfa_fsm_send_event(iocpf, IOCPF_E_FWRSP_ENABLE);
+}
+
+/**
  * @brief
  * Update BFA configuration from firmware configuration.
  */
@@ -1644,7 +1702,9 @@ bfa_ioc_mbox_poll(struct bfa_ioc *ioc)
 {
 	struct bfa_ioc_mbox_mod *mod = &ioc->mbox_mod;
 	struct bfa_mbox_cmd *cmd;
-	u32			stat;
+	bfa_mbox_cmd_cbfn_t cbfn;
+	void *cbarg;
+	u32 stat;
 
 	/**
 	 * If no command pending, do nothing
@@ -1664,6 +1724,16 @@ bfa_ioc_mbox_poll(struct bfa_ioc *ioc)
 	 */
 	bfa_q_deq(&mod->cmd_q, &cmd);
 	bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg));
+
+	/**
+	 * Give a callback to the client, indicating that the command is sent
+	 */
+	if (cmd->cbfn) {
+		cbfn = cmd->cbfn;
+		cbarg = cmd->cbarg;
+		cmd->cbfn = NULL;
+		cbfn(cbarg);
+	}
 }
 
 /**
@@ -1702,15 +1772,15 @@ bfa_ioc_pf_disabled(struct bfa_ioc *ioc)
 }
 
 static void
-bfa_ioc_pf_initfailed(struct bfa_ioc *ioc)
+bfa_ioc_pf_failed(struct bfa_ioc *ioc)
 {
-	bfa_fsm_send_event(ioc, IOC_E_INITFAILED);
+	bfa_fsm_send_event(ioc, IOC_E_PFFAILED);
 }
 
 static void
-bfa_ioc_pf_failed(struct bfa_ioc *ioc)
+bfa_ioc_pf_hwfailed(struct bfa_ioc *ioc)
 {
-	bfa_fsm_send_event(ioc, IOC_E_PFFAILED);
+	bfa_fsm_send_event(ioc, IOC_E_HWFAILED);
 }
 
 static void
@@ -1749,10 +1819,9 @@ bfa_ioc_pll_init(struct bfa_ioc *ioc)
  * as the entry vector.
  */
 static void
-bfa_ioc_boot(struct bfa_ioc *ioc, u32 boot_type, u32 boot_env)
+bfa_ioc_boot(struct bfa_ioc *ioc, enum bfi_fwboot_type boot_type,
+		u32 boot_env)
 {
-	void __iomem *rb;
-
 	bfa_ioc_stats(ioc, ioc_boots);
 
 	if (bfa_ioc_pll_init(ioc) != BFA_STATUS_OK)
@@ -1761,22 +1830,16 @@ bfa_ioc_boot(struct bfa_ioc *ioc, u32 boot_type, u32 boot_env)
 	/**
 	 * Initialize IOC state of all functions on a chip reset.
 	 */
-	rb = ioc->pcidev.pci_bar_kva;
-	if (boot_type == BFI_BOOT_TYPE_MEMTEST) {
-		writel(BFI_IOC_MEMTEST, (rb + BFA_IOC0_STATE_REG));
-		writel(BFI_IOC_MEMTEST, (rb + BFA_IOC1_STATE_REG));
+	if (boot_type == BFI_FWBOOT_TYPE_MEMTEST) {
+		writel(BFI_IOC_MEMTEST, ioc->ioc_regs.ioc_fwstate);
+		writel(BFI_IOC_MEMTEST, ioc->ioc_regs.alt_ioc_fwstate);
 	} else {
-		writel(BFI_IOC_INITING, (rb + BFA_IOC0_STATE_REG));
-		writel(BFI_IOC_INITING, (rb + BFA_IOC1_STATE_REG));
+		writel(BFI_IOC_INITING, ioc->ioc_regs.ioc_fwstate);
+		writel(BFI_IOC_INITING, ioc->ioc_regs.alt_ioc_fwstate);
 	}
 
 	bfa_ioc_msgflush(ioc);
 	bfa_ioc_download_fw(ioc, boot_type, boot_env);
-
-	/**
-	 * Enable interrupts just before starting LPU
-	 */
-	ioc->cbfn->reset_cbfn(ioc->bfa);
 	bfa_ioc_lpu_start(ioc);
 }
 
@@ -1789,13 +1852,17 @@ bfa_nw_ioc_auto_recover(bool auto_recover)
 	bfa_nw_auto_recover = auto_recover;
 }
 
-static void
+static bool
 bfa_ioc_msgget(struct bfa_ioc *ioc, void *mbmsg)
 {
 	u32	*msgp = mbmsg;
 	u32	r32;
 	int		i;
 
+	r32 = readl(ioc->ioc_regs.lpu_mbox_cmd);
+	if ((r32 & 1) == 0)
+		return false;
+
 	/**
 	 * read the MBOX msg
 	 */
@@ -1811,6 +1878,8 @@ bfa_ioc_msgget(struct bfa_ioc *ioc, void *mbmsg)
 	 */
 	writel(1, ioc->ioc_regs.lpu_mbox_cmd);
 	readl(ioc->ioc_regs.lpu_mbox_cmd);
+
+	return true;
 }
 
 static void
@@ -1827,12 +1896,10 @@ bfa_ioc_isr(struct bfa_ioc *ioc, struct bfi_mbmsg *m)
 	case BFI_IOC_I2H_HBEAT:
 		break;
 
-	case BFI_IOC_I2H_READY_EVENT:
-		bfa_fsm_send_event(iocpf, IOCPF_E_FWREADY);
-		break;
-
 	case BFI_IOC_I2H_ENABLE_REPLY:
-		bfa_fsm_send_event(iocpf, IOCPF_E_FWRSP_ENABLE);
+		bfa_ioc_enable_reply(ioc,
+			(enum bfa_mode)msg->fw_event.port_mode,
+			msg->fw_event.cap_bm);
 		break;
 
 	case BFI_IOC_I2H_DISABLE_REPLY:
@@ -1878,6 +1945,9 @@ void
 bfa_nw_ioc_detach(struct bfa_ioc *ioc)
 {
 	bfa_fsm_send_event(ioc, IOC_E_DETACH);
+
+	/* Done with detach, empty the notify_q. */
+	INIT_LIST_HEAD(&ioc->notify_q);
 }
 
 /**
@@ -1887,12 +1957,29 @@ bfa_nw_ioc_detach(struct bfa_ioc *ioc)
  */
 void
 bfa_nw_ioc_pci_init(struct bfa_ioc *ioc, struct bfa_pcidev *pcidev,
-		 enum bfi_mclass mc)
+		 enum bfi_pcifn_class clscode)
 {
-	ioc->ioc_mc	= mc;
+	ioc->clscode	= clscode;
 	ioc->pcidev	= *pcidev;
-	ioc->ctdev	= bfa_asic_id_ct(ioc->pcidev.device_id);
-	ioc->cna	= ioc->ctdev && !ioc->fcmode;
+
+	/**
+	 * Initialize IOC and device personality
+	 */
+	ioc->port0_mode = ioc->port1_mode = BFI_PORT_MODE_FC;
+	ioc->asic_mode  = BFI_ASIC_MODE_FC;
+
+	switch (pcidev->device_id) {
+	case PCI_DEVICE_ID_BROCADE_CT:
+		ioc->asic_gen = BFI_ASIC_GEN_CT;
+		ioc->port0_mode = ioc->port1_mode = BFI_PORT_MODE_ETH;
+		ioc->asic_mode  = BFI_ASIC_MODE_ETH;
+		ioc->port_mode = ioc->port_mode_cfg = BFA_MODE_CNA;
+		ioc->ad_cap_bm = BFA_CM_CNA;
+		break;
+
+	default:
+		BUG_ON(1);
+	}
 
 	bfa_nw_ioc_set_ct_hwif(ioc);
 
@@ -2013,21 +2100,28 @@ bfa_nw_ioc_mbox_isr(struct bfa_ioc *ioc)
 	struct bfi_mbmsg m;
 	int				mc;
 
-	bfa_ioc_msgget(ioc, &m);
+	if (bfa_ioc_msgget(ioc, &m)) {
+		/**
+		 * Treat IOC message class as special.
+		 */
+		mc = m.mh.msg_class;
+		if (mc == BFI_MC_IOC) {
+			bfa_ioc_isr(ioc, &m);
+			return;
+		}
 
-	/**
-	 * Treat IOC message class as special.
-	 */
-	mc = m.mh.msg_class;
-	if (mc == BFI_MC_IOC) {
-		bfa_ioc_isr(ioc, &m);
-		return;
+		if ((mc >= BFI_MC_MAX) || (mod->mbhdlr[mc].cbfn == NULL))
+			return;
+
+		mod->mbhdlr[mc].cbfn(mod->mbhdlr[mc].cbarg, &m);
 	}
 
-	if ((mc >= BFI_MC_MAX) || (mod->mbhdlr[mc].cbfn == NULL))
-		return;
+	bfa_ioc_lpu_read_stat(ioc);
 
-	mod->mbhdlr[mc].cbfn(mod->mbhdlr[mc].cbarg, &m);
+	/**
+	 * Try to send pending mailbox commands
+	 */
+	bfa_ioc_mbox_poll(ioc);
 }
 
 void
@@ -2099,24 +2193,18 @@ bfa_ioc_get_adapter_attr(struct bfa_ioc *ioc,
 	ad_attr->asic_rev = ioc_attr->asic_rev;
 
 	bfa_ioc_get_pci_chip_rev(ioc, ad_attr->hw_ver);
-
-	ad_attr->cna_capable = ioc->cna;
-	ad_attr->trunk_capable = (ad_attr->nports > 1) && !ioc->cna;
 }
 
 static enum bfa_ioc_type
 bfa_ioc_get_type(struct bfa_ioc *ioc)
 {
-	if (!ioc->ctdev || ioc->fcmode)
-		return BFA_IOC_TYPE_FC;
-	else if (ioc->ioc_mc == BFI_MC_IOCFC)
-		return BFA_IOC_TYPE_FCoE;
-	else if (ioc->ioc_mc == BFI_MC_LL)
-		return BFA_IOC_TYPE_LL;
-	else {
-		BUG_ON(!(ioc->ioc_mc == BFI_MC_LL));
+	if (ioc->clscode == BFI_PCIFN_CLASS_ETH)
 		return BFA_IOC_TYPE_LL;
-	}
+
+	BUG_ON(!(ioc->clscode == BFI_PCIFN_CLASS_FC));
+
+	return (ioc->attr->port_mode == BFI_PORT_MODE_FC)
+		? BFA_IOC_TYPE_FC : BFA_IOC_TYPE_FCoE;
 }
 
 static void
@@ -2228,6 +2316,10 @@ bfa_nw_ioc_get_attr(struct bfa_ioc *ioc, struct bfa_ioc_attr *ioc_attr)
 
 	ioc_attr->state = bfa_ioc_get_state(ioc);
 	ioc_attr->port_id = ioc->port_id;
+	ioc_attr->port_mode = ioc->port_mode;
+
+	ioc_attr->port_mode_cfg = ioc->port_mode_cfg;
+	ioc_attr->cap_bm = ioc->ad_cap_bm;
 
 	ioc_attr->ioc_type = bfa_ioc_get_type(ioc);
 
@@ -2317,8 +2409,14 @@ void
 bfa_nw_iocpf_timeout(void *ioc_arg)
 {
 	struct bfa_ioc  *ioc = (struct bfa_ioc *) ioc_arg;
+	enum bfa_iocpf_state iocpf_st;
+
+	iocpf_st = bfa_sm_to_state(iocpf_sm_table, ioc->iocpf.fsm);
 
-	bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_TIMEOUT);
+	if (iocpf_st == BFA_IOCPF_HWINIT)
+		bfa_ioc_poll_fwinit(ioc);
+	else
+		bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_TIMEOUT);
 }
 
 void
@@ -2328,3 +2426,22 @@ bfa_nw_iocpf_sem_timeout(void *ioc_arg)
 
 	bfa_ioc_hw_sem_get(ioc);
 }
+
+static void
+bfa_ioc_poll_fwinit(struct bfa_ioc *ioc)
+{
+	u32 fwstate = readl(ioc->ioc_regs.ioc_fwstate);
+
+	if (fwstate == BFI_IOC_DISABLED) {
+		bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_FWREADY);
+		return;
+	}
+
+	if (ioc->iocpf.poll_time >= BFA_IOC_TOV) {
+		bfa_nw_iocpf_timeout(ioc);
+	} else {
+		ioc->iocpf.poll_time += BFA_IOC_POLL_TOV;
+		mod_timer(&ioc->iocpf_timer, jiffies +
+			msecs_to_jiffies(BFA_IOC_POLL_TOV));
+	}
+}
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index 33ba5f4..7514c72 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -27,6 +27,7 @@
 #define BFA_IOC_HWSEM_TOV	500	/* msecs */
 #define BFA_IOC_HB_TOV		500	/* msecs */
 #define BFA_IOC_HWINIT_MAX	5
+#define BFA_IOC_POLL_TOV	200	/* msecs */
 
 /**
  * PCI device information required by IOC
@@ -169,8 +170,9 @@ struct bfa_ioc_hbfail_notify {
 struct bfa_iocpf {
 	bfa_fsm_t		fsm;
 	struct bfa_ioc		*ioc;
-	u32			retry_count;
+	bool			fw_mismatch_notified;
 	bool			auto_recover;
+	u32			poll_time;
 };
 
 struct bfa_ioc {
@@ -186,12 +188,10 @@ struct bfa_ioc {
 	void			*dbg_fwsave;
 	int			dbg_fwsave_len;
 	bool			dbg_fwsave_once;
-	enum bfi_mclass		ioc_mc;
+	enum bfi_pcifn_class	clscode;
 	struct bfa_ioc_regs	ioc_regs;
 	struct bfa_ioc_drv_stats stats;
 	bool			fcmode;
-	bool			ctdev;
-	bool			cna;
 	bool			pllinit;
 	bool			stats_busy;	/*!< outstanding stats */
 	u8			port_id;
@@ -202,10 +202,18 @@ struct bfa_ioc {
 	struct bfa_ioc_mbox_mod	mbox_mod;
 	struct bfa_ioc_hwif	*ioc_hwif;
 	struct bfa_iocpf	iocpf;
+	enum bfi_asic_gen	asic_gen;
+	enum bfi_asic_mode	asic_mode;
+	enum bfi_port_mode	port0_mode;
+	enum bfi_port_mode	port1_mode;
+	enum bfa_mode		port_mode;
+	u8			ad_cap_bm;	/*!< adapter cap bit mask */
+	u8			port_mode_cfg;	/*!< config port mode */
 };
 
 struct bfa_ioc_hwif {
-	enum bfa_status (*ioc_pll_init) (void __iomem *rb, bool fcmode);
+	enum bfa_status (*ioc_pll_init) (void __iomem *rb,
+						enum bfi_asic_mode m);
 	bool		(*ioc_firmware_lock)	(struct bfa_ioc *ioc);
 	void		(*ioc_firmware_unlock)	(struct bfa_ioc *ioc);
 	void		(*ioc_reg_init)	(struct bfa_ioc *ioc);
@@ -219,12 +227,14 @@ struct bfa_ioc_hwif {
 	void		(*ioc_sync_leave)	(struct bfa_ioc *ioc);
 	void		(*ioc_sync_ack)		(struct bfa_ioc *ioc);
 	bool		(*ioc_sync_complete)	(struct bfa_ioc *ioc);
+	bool		(*ioc_lpu_read_stat)	(struct bfa_ioc *ioc);
 };
 
 #define bfa_ioc_pcifn(__ioc)		((__ioc)->pcidev.pci_func)
 #define bfa_ioc_devid(__ioc)		((__ioc)->pcidev.device_id)
 #define bfa_ioc_bar0(__ioc)		((__ioc)->pcidev.pci_bar_kva)
 #define bfa_ioc_portid(__ioc)		((__ioc)->port_id)
+#define bfa_ioc_asic_gen(__ioc)		((__ioc)->asic_gen)
 #define bfa_ioc_fetch_stats(__ioc, __stats) \
 		(((__stats)->drv_stats) = (__ioc)->stats)
 #define bfa_ioc_clr_stats(__ioc)	\
@@ -245,7 +255,8 @@ struct bfa_ioc_hwif {
 	 (((__ioc)->fcmode) ? BFI_IMAGE_CT_FC : BFI_IMAGE_CT_CNA) :	\
 	 BFI_IMAGE_CB_FC)
 #define BFA_IOC_FW_SMEM_SIZE(__ioc)					\
-	(((__ioc)->ctdev) ? BFI_SMEM_CT_SIZE : BFI_SMEM_CB_SIZE)
+	((bfa_ioc_asic_gen(__ioc) == BFI_ASIC_GEN_CB)			\
+	? BFI_SMEM_CB_SIZE : BFI_SMEM_CT_SIZE)
 #define BFA_IOC_FLASH_CHUNK_NO(off)		(off / BFI_FLASH_CHUNK_SZ_WORDS)
 #define BFA_IOC_FLASH_OFFSET_IN_CHUNK(off)	(off % BFI_FLASH_CHUNK_SZ_WORDS)
 #define BFA_IOC_FLASH_CHUNK_ADDR(chunkno)  (chunkno * BFI_FLASH_CHUNK_SZ_WORDS)
@@ -266,13 +277,18 @@ void bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
 
 #define bfa_ioc_pll_init_asic(__ioc) \
 	((__ioc)->ioc_hwif->ioc_pll_init((__ioc)->pcidev.pci_bar_kva, \
-			   (__ioc)->fcmode))
+			   (__ioc)->asic_mode))
 
 #define	bfa_ioc_isr_mode_set(__ioc, __msix)			\
 			((__ioc)->ioc_hwif->ioc_isr_mode_set(__ioc, __msix))
 #define	bfa_ioc_ownership_reset(__ioc)				\
 			((__ioc)->ioc_hwif->ioc_ownership_reset(__ioc))
 
+#define bfa_ioc_lpu_read_stat(__ioc) do {				\
+		if ((__ioc)->ioc_hwif->ioc_lpu_read_stat)		\
+			((__ioc)->ioc_hwif->ioc_lpu_read_stat(__ioc));	\
+} while (0)
+
 void bfa_nw_ioc_set_ct_hwif(struct bfa_ioc *ioc);
 
 void bfa_nw_ioc_attach(struct bfa_ioc *ioc, void *bfa,
@@ -280,7 +296,7 @@ void bfa_nw_ioc_attach(struct bfa_ioc *ioc, void *bfa,
 void bfa_nw_ioc_auto_recover(bool auto_recover);
 void bfa_nw_ioc_detach(struct bfa_ioc *ioc);
 void bfa_nw_ioc_pci_init(struct bfa_ioc *ioc, struct bfa_pcidev *pcidev,
-		enum bfi_mclass mc);
+		enum bfi_pcifn_class clscode);
 u32 bfa_nw_ioc_meminfo(void);
 void bfa_nw_ioc_mem_claim(struct bfa_ioc *ioc,  u8 *dm_kva, u64 dm_pa);
 void bfa_nw_ioc_enable(struct bfa_ioc *ioc);
@@ -311,7 +327,7 @@ void bfa_nw_iocpf_sem_timeout(void *ioc);
 /*
  * F/W Image Size & Chunk
  */
-u32 *bfa_cb_image_get_chunk(int type, u32 off);
-u32 bfa_cb_image_get_size(int type);
+u32 *bfa_cb_image_get_chunk(enum bfi_asic_gen asic_gen, u32 off);
+u32 bfa_cb_image_get_size(enum bfi_asic_gen asic_gen);
 
 #endif /* __BFA_IOC_H__ */
diff --git a/drivers/net/bna/bfa_ioc_ct.c b/drivers/net/bna/bfa_ioc_ct.c
index 209f1f3..b4429bc 100644
--- a/drivers/net/bna/bfa_ioc_ct.c
+++ b/drivers/net/bna/bfa_ioc_ct.c
@@ -46,7 +46,8 @@ static void bfa_ioc_ct_sync_join(struct bfa_ioc *ioc);
 static void bfa_ioc_ct_sync_leave(struct bfa_ioc *ioc);
 static void bfa_ioc_ct_sync_ack(struct bfa_ioc *ioc);
 static bool bfa_ioc_ct_sync_complete(struct bfa_ioc *ioc);
-static enum bfa_status bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode);
+static enum bfa_status bfa_ioc_ct_pll_init(void __iomem *rb,
+				enum bfi_asic_mode asic_mode);
 
 static struct bfa_ioc_hwif nw_hwif_ct;
 
@@ -92,7 +93,7 @@ bfa_ioc_ct_firmware_lock(struct bfa_ioc *ioc)
 	/**
 	 * If bios boot (flash based) -- do not increment usage count
 	 */
-	if (bfa_cb_image_get_size(BFA_IOC_FWIMG_TYPE(ioc)) <
+	if (bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc)) <
 						BFA_IOC_FWIMG_MINSZ)
 		return true;
 
@@ -142,7 +143,7 @@ bfa_ioc_ct_firmware_unlock(struct bfa_ioc *ioc)
 	/**
 	 * If bios boot (flash based) -- do not decrement usage count
 	 */
-	if (bfa_cb_image_get_size(BFA_IOC_FWIMG_TYPE(ioc)) <
+	if (bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc)) <
 						BFA_IOC_FWIMG_MINSZ)
 		return;
 
@@ -165,22 +166,17 @@ bfa_ioc_ct_firmware_unlock(struct bfa_ioc *ioc)
 static void
 bfa_ioc_ct_notify_fail(struct bfa_ioc *ioc)
 {
-	if (ioc->cna) {
-		writel(__FW_INIT_HALT_P, ioc->ioc_regs.ll_halt);
-		writel(__FW_INIT_HALT_P, ioc->ioc_regs.alt_ll_halt);
-		/* Wait for halt to take effect */
-		readl(ioc->ioc_regs.ll_halt);
-		readl(ioc->ioc_regs.alt_ll_halt);
-	} else {
-		writel(~0U, ioc->ioc_regs.err_set);
-		readl(ioc->ioc_regs.err_set);
-	}
+	writel(__FW_INIT_HALT_P, ioc->ioc_regs.ll_halt);
+	writel(__FW_INIT_HALT_P, ioc->ioc_regs.alt_ll_halt);
+	/* Wait for halt to take effect */
+	readl(ioc->ioc_regs.ll_halt);
+	readl(ioc->ioc_regs.alt_ll_halt);
 }
 
 /**
  * Host to LPU mailbox message addresses
  */
-static struct { u32 hfn_mbox, lpu_mbox, hfn_pgn; } iocreg_fnreg[] = {
+static struct { u32 hfn_mbox, lpu_mbox, hfn_pgn; } ct_fnreg[] = {
 	{ HOSTFN0_LPU_MBOX0_0, LPU_HOSTFN0_MBOX0_0, HOST_PAGE_NUM_FN0 },
 	{ HOSTFN1_LPU_MBOX0_8, LPU_HOSTFN1_MBOX0_8, HOST_PAGE_NUM_FN1 },
 	{ HOSTFN2_LPU_MBOX0_0, LPU_HOSTFN2_MBOX0_0, HOST_PAGE_NUM_FN2 },
@@ -215,9 +211,9 @@ bfa_ioc_ct_reg_init(struct bfa_ioc *ioc)
 
 	rb = bfa_ioc_bar0(ioc);
 
-	ioc->ioc_regs.hfn_mbox = rb + iocreg_fnreg[pcifn].hfn_mbox;
-	ioc->ioc_regs.lpu_mbox = rb + iocreg_fnreg[pcifn].lpu_mbox;
-	ioc->ioc_regs.host_page_num_fn = rb + iocreg_fnreg[pcifn].hfn_pgn;
+	ioc->ioc_regs.hfn_mbox = rb + ct_fnreg[pcifn].hfn_mbox;
+	ioc->ioc_regs.lpu_mbox = rb + ct_fnreg[pcifn].lpu_mbox;
+	ioc->ioc_regs.host_page_num_fn = rb + ct_fnreg[pcifn].hfn_pgn;
 
 	if (ioc->port_id == 0) {
 		ioc->ioc_regs.heartbeat = rb + BFA_IOC0_HBEAT_REG;
@@ -323,11 +319,9 @@ bfa_ioc_ct_isr_mode_set(struct bfa_ioc *ioc, bool msix)
 static void
 bfa_ioc_ct_ownership_reset(struct bfa_ioc *ioc)
 {
-	if (ioc->cna) {
-		bfa_nw_ioc_sem_get(ioc->ioc_regs.ioc_usage_sem_reg);
-		writel(0, ioc->ioc_regs.ioc_usage_reg);
-		bfa_nw_ioc_sem_release(ioc->ioc_regs.ioc_usage_sem_reg);
-	}
+	bfa_nw_ioc_sem_get(ioc->ioc_regs.ioc_usage_sem_reg);
+	writel(0, ioc->ioc_regs.ioc_usage_reg);
+	bfa_nw_ioc_sem_release(ioc->ioc_regs.ioc_usage_sem_reg);
 
 	/*
 	 * Read the hw sem reg to make sure that it is locked
@@ -436,9 +430,10 @@ bfa_ioc_ct_sync_complete(struct bfa_ioc *ioc)
 }
 
 static enum bfa_status
-bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode)
+bfa_ioc_ct_pll_init(void __iomem *rb, enum bfi_asic_mode asic_mode)
 {
 	u32	pll_sclk, pll_fclk, r32;
+	bool fcmode = (asic_mode == BFI_ASIC_MODE_FC);
 
 	pll_sclk = __APP_PLL_SCLK_LRESETN | __APP_PLL_SCLK_ENARST |
 		__APP_PLL_SCLK_RSEL200500 | __APP_PLL_SCLK_P0_1(3U) |
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 6a53183..978e1bc 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -43,17 +43,21 @@ struct bfi_mhdr {
 	u8		msg_id;		/*!< msg opcode with in the class   */
 	union {
 		struct {
-			u8	rsvd;
-			u8	lpu_id;	/*!< msg destination		    */
+			u8	qid;
+			u8	fn_lpu;	/*!< msg destination		    */
 		} h2i;
 		u16	i2htok;	/*!< token in msgs to host	    */
 	} mtag;
 };
 
-#define bfi_h2i_set(_mh, _mc, _op, _lpuid) do {		\
+#define bfi_fn_lpu(__fn, __lpu)	((__fn) << 1 | (__lpu))
+#define bfi_mhdr_2_fn(_mh)	((_mh)->mtag.h2i.fn_lpu >> 1)
+#define bfi_mhdr_2_qid(_mh)	((_mh)->mtag.h2i.qid)
+
+#define bfi_h2i_set(_mh, _mc, _op, _fn_lpu) do {		\
 	(_mh).msg_class			= (_mc);		\
 	(_mh).msg_id			= (_op);		\
-	(_mh).mtag.h2i.lpu_id	= (_lpuid);			\
+	(_mh).mtag.h2i.fn_lpu	= (_fn_lpu);			\
 } while (0)
 
 #define bfi_i2h_set(_mh, _mc, _op, _i2htok) do {		\
@@ -149,6 +153,14 @@ struct bfi_mbmsg {
 };
 
 /**
+ * Supported PCI function class codes (personality)
+ */
+enum bfi_pcifn_class {
+	BFI_PCIFN_CLASS_FC	= 0x0c04,
+	BFI_PCIFN_CLASS_ETH	= 0x0200,
+};
+
+/**
  * Message Classes
  */
 enum bfi_mclass {
@@ -203,6 +215,21 @@ enum bfi_mclass {
  *----------------------------------------------------------------------
  */
 
+/**
+ * Different asic generations
+ */
+enum bfi_asic_gen {
+	BFI_ASIC_GEN_CB		= 1,
+	BFI_ASIC_GEN_CT		= 2,
+};
+
+enum bfi_asic_mode {
+	BFI_ASIC_MODE_FC	= 1,	/* FC up to 8G speed		*/
+	BFI_ASIC_MODE_FC16	= 2,	/* FC up to 16G speed		*/
+	BFI_ASIC_MODE_ETH	= 3,	/* Ethernet ports		*/
+	BFI_ASIC_MODE_COMBO	= 4,	/* FC 16G and Ethernet 10G port	*/
+};
+
 enum bfi_ioc_h2i_msgs {
 	BFI_IOC_H2I_ENABLE_REQ		= 1,
 	BFI_IOC_H2I_DISABLE_REQ		= 2,
@@ -215,8 +242,7 @@ enum bfi_ioc_i2h_msgs {
 	BFI_IOC_I2H_ENABLE_REPLY	= BFA_I2HM(1),
 	BFI_IOC_I2H_DISABLE_REPLY	= BFA_I2HM(2),
 	BFI_IOC_I2H_GETATTR_REPLY	= BFA_I2HM(3),
-	BFI_IOC_I2H_READY_EVENT		= BFA_I2HM(4),
-	BFI_IOC_I2H_HBEAT		= BFA_I2HM(5),
+	BFI_IOC_I2H_HBEAT		= BFA_I2HM(4),
 };
 
 /**
@@ -231,7 +257,8 @@ struct bfi_ioc_attr {
 	u64		mfg_pwwn;	/*!< Mfg port wwn	   */
 	u64		mfg_nwwn;	/*!< Mfg node wwn	   */
 	mac_t		mfg_mac;	/*!< Mfg mac		   */
-	u16	rsvd_a;
+	u8		port_mode;	/* enum bfi_port_mode	   */
+	u8		rsvd_a;
 	u64		pwwn;
 	u64		nwwn;
 	mac_t		mac;		/*!< PBC or Mfg mac	   */
@@ -284,19 +311,36 @@ struct bfi_ioc_getattr_reply {
 #define BFI_IOC_MD5SUM_SZ	4
 struct bfi_ioc_image_hdr {
 	u32	signature;	/*!< constant signature */
-	u32	rsvd_a;
+	u8	asic_gen;	/*!< asic generation */
+	u8	asic_mode;
+	u8	port0_mode;	/*!< device mode for port 0 */
+	u8	port1_mode;	/*!< device mode for port 1 */
 	u32	exec;		/*!< exec vector	*/
-	u32	param;		/*!< parameters		*/
+	u32	bootenv;	/*!< firmware boot env */
 	u32	rsvd_b[4];
 	u32	md5sum[BFI_IOC_MD5SUM_SZ];
 };
 
+#define BFI_FWBOOT_DEVMODE_OFF		4
+#define BFI_FWBOOT_TYPE_OFF		8
+#define BFI_FWBOOT_ENV_OFF		12
+#define BFI_FWBOOT_DEVMODE(__asic_gen, __asic_mode, __p0_mode, __p1_mode) \
+	(((u32)(__asic_gen)) << 24 |	\
+	 ((u32)(__asic_mode)) << 16 |	\
+	 ((u32)(__p0_mode)) << 8 |	\
+	 ((u32)(__p1_mode)))
+
 enum bfi_fwboot_type {
 	BFI_FWBOOT_TYPE_NORMAL	= 0,
 	BFI_FWBOOT_TYPE_FLASH	= 1,
 	BFI_FWBOOT_TYPE_MEMTEST	= 2,
 };
 
+enum bfi_port_mode {
+	BFI_PORT_MODE_FC	= 1,
+	BFI_PORT_MODE_ETH	= 2,
+};
+
 /**
  *  BFI_IOC_I2H_READY_EVENT message
  */
@@ -362,8 +406,8 @@ enum {
  */
 struct bfi_ioc_ctrl_req {
 	struct bfi_mhdr mh;
-	u8			ioc_class;
-	u8			rsvd[3];
+	u16			clscode;
+	u16			rsvd;
 	u32		tv_sec;
 };
 
@@ -371,9 +415,11 @@ struct bfi_ioc_ctrl_req {
  * BFI_IOC_I2H_ENABLE_REPLY & BFI_IOC_I2H_DISABLE_REPLY messages
  */
 struct bfi_ioc_ctrl_reply {
-	struct bfi_mhdr mh;		/*!< Common msg header     */
+	struct bfi_mhdr mh;			/*!< Common msg header     */
 	u8			status;		/*!< enable/disable status */
-	u8			rsvd[3];
+	u8			port_mode;	/*!< enum bfa_mode */
+	u8			cap_bm;		/*!< capability bit mask */
+	u8			rsvd;
 };
 
 #define BFI_IOC_MSGSZ   8
@@ -393,7 +439,7 @@ union bfi_ioc_h2i_msg_u {
  */
 union bfi_ioc_i2h_msg_u {
 	struct bfi_mhdr mh;
-	struct bfi_ioc_rdy_event rdy_event;
+	struct bfi_ioc_ctrl_reply fw_event;
 	u32			mboxmsg[BFI_IOC_MSGSZ];
 };
 
diff --git a/drivers/net/bna/bna.h b/drivers/net/bna/bna.h
index 21e9155..f9781a3 100644
--- a/drivers/net/bna/bna.h
+++ b/drivers/net/bna/bna.h
@@ -40,7 +40,7 @@ do {									\
 	(_qe)->cbarg = (_cbarg);					\
 } while (0)
 
-#define bna_is_small_rxq(rcb) ((rcb)->id == 1)
+#define bna_is_small_rxq(_id) ((_id) & 0x1)
 
 #define BNA_MAC_IS_EQUAL(_mac1, _mac2)					\
 	(!memcmp((_mac1), (_mac2), sizeof(mac_t)))
@@ -214,38 +214,59 @@ do {									\
 	}								\
 } while (0)
 
-#define	call_rxf_stop_cbfn(rxf, status)					\
+#define	call_rxf_stop_cbfn(rxf)						\
+do {									\
 	if ((rxf)->stop_cbfn) {						\
-		(*(rxf)->stop_cbfn)((rxf)->stop_cbarg, (status));	\
+		void (*cbfn)(struct bna_rx *);			\
+		struct bna_rx *cbarg;					\
+		cbfn = (rxf)->stop_cbfn;				\
+		cbarg = (rxf)->stop_cbarg;				\
 		(rxf)->stop_cbfn = NULL;				\
 		(rxf)->stop_cbarg = NULL;				\
-	}
+		cbfn(cbarg);						\
+	}								\
+} while (0)
 
-#define	call_rxf_start_cbfn(rxf, status)				\
+#define	call_rxf_start_cbfn(rxf)					\
+do {									\
 	if ((rxf)->start_cbfn) {					\
-		(*(rxf)->start_cbfn)((rxf)->start_cbarg, (status));	\
+		void (*cbfn)(struct bna_rx *);			\
+		struct bna_rx *cbarg;					\
+		cbfn = (rxf)->start_cbfn;				\
+		cbarg = (rxf)->start_cbarg;				\
 		(rxf)->start_cbfn = NULL;				\
 		(rxf)->start_cbarg = NULL;				\
-	}
+		cbfn(cbarg);						\
+	}								\
+} while (0)
 
-#define	call_rxf_cam_fltr_cbfn(rxf, status)				\
+#define	call_rxf_cam_fltr_cbfn(rxf)					\
+do {									\
 	if ((rxf)->cam_fltr_cbfn) {					\
-		(*(rxf)->cam_fltr_cbfn)((rxf)->cam_fltr_cbarg, rxf->rx,	\
-					(status));			\
+		void (*cbfn)(struct bnad *, struct bna_rx *);	\
+		struct bnad *cbarg;					\
+		cbfn = (rxf)->cam_fltr_cbfn;				\
+		cbarg = (rxf)->cam_fltr_cbarg;				\
 		(rxf)->cam_fltr_cbfn = NULL;				\
 		(rxf)->cam_fltr_cbarg = NULL;				\
-	}
+		cbfn(cbarg, rxf->rx);					\
+	}								\
+} while (0)
 
-#define	call_rxf_pause_cbfn(rxf, status)				\
+#define	call_rxf_pause_cbfn(rxf)					\
+do {									\
 	if ((rxf)->oper_state_cbfn) {					\
-		(*(rxf)->oper_state_cbfn)((rxf)->oper_state_cbarg, rxf->rx,\
-					(status));			\
-		(rxf)->rxf_flags &= ~BNA_RXF_FL_OPERSTATE_CHANGED;	\
+		void (*cbfn)(struct bnad *, struct bna_rx *);	\
+		struct bnad *cbarg;					\
+		cbfn = (rxf)->oper_state_cbfn;				\
+		cbarg = (rxf)->oper_state_cbarg;			\
 		(rxf)->oper_state_cbfn = NULL;				\
 		(rxf)->oper_state_cbarg = NULL;				\
-	}
+		cbfn(cbarg, rxf->rx);					\
+	}								\
+} while (0)
 
-#define	call_rxf_resume_cbfn(rxf, status) call_rxf_pause_cbfn(rxf, status)
+#define	call_rxf_resume_cbfn(rxf) call_rxf_pause_cbfn(rxf)
 
 #define is_xxx_enable(mode, bitmask, xxx) ((bitmask & xxx) && (mode & xxx))
 
@@ -331,6 +352,61 @@ do {									\
 	}								\
 } while (0)
 
+#define bna_tx_rid_mask(_bna) ((_bna)->tx_mod.rid_mask)
+
+#define bna_rx_rid_mask(_bna) ((_bna)->rx_mod.rid_mask)
+
+#define bna_tx_from_rid(_bna, _rid, _tx)				\
+do {								    \
+	struct bna_tx_mod *__tx_mod = &(_bna)->tx_mod;	  \
+	struct bna_tx *__tx;					    \
+	struct list_head *qe;					   \
+	_tx = NULL;						     \
+	list_for_each(qe, &__tx_mod->tx_active_q) {		     \
+		__tx = (struct bna_tx *)qe;			     \
+		if (__tx->rid == (_rid)) {			      \
+			(_tx) = __tx;				   \
+			break;					  \
+		}						       \
+	}							       \
+} while (0)
+
+#define bna_rx_from_rid(_bna, _rid, _rx)				\
+do {									\
+	struct bna_rx_mod *__rx_mod = &(_bna)->rx_mod;			\
+	struct bna_rx *__rx;						\
+	struct list_head *qe;						\
+	_rx = NULL;							\
+	list_for_each(qe, &__rx_mod->rx_active_q) {			\
+		__rx = (struct bna_rx *)qe;				\
+		if (__rx->rid == (_rid)) {				\
+			(_rx) = __rx;					\
+			break;						\
+		}							\
+	}								\
+} while (0)
+
+/**
+ *
+ *  Inline functions
+ *
+ */
+
+static inline struct bna_mac *bna_mac_find(struct list_head *q, u8 *addr)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+	list_for_each(qe, q) {
+		if (BNA_MAC_IS_EQUAL(((struct bna_mac *)qe)->addr, addr)) {
+			mac = (struct bna_mac *)qe;
+			break;
+		}
+	}
+	return mac;
+}
+
+#define bna_attr(_bna) (&(_bna)->ioceth.attr)
+
 /**
  *
  * Function prototypes
@@ -341,14 +417,22 @@ do {									\
  * BNA
  */
 
+/* FW response handlers */
+void bna_bfi_stats_clr_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr);
+
 /* APIs for BNAD */
 void bna_res_req(struct bna_res_info *res_info);
+void bna_mod_res_req(struct bna *bna, struct bna_res_info *res_info);
 void bna_init(struct bna *bna, struct bnad *bnad,
 			struct bfa_pcidev *pcidev,
 			struct bna_res_info *res_info);
+void bna_mod_init(struct bna *bna, struct bna_res_info *res_info);
 void bna_uninit(struct bna *bna);
+int bna_num_txq_set(struct bna *bna, int num_txq);
+int bna_num_rxp_set(struct bna *bna, int num_rxp);
 void bna_stats_get(struct bna *bna);
 void bna_get_perm_mac(struct bna *bna, u8 *mac);
+void bna_hw_stats_get(struct bna *bna);
 
 /* APIs for Rx */
 int bna_rit_mod_can_satisfy(struct bna_rit_mod *rit_mod, int seg_size);
@@ -360,6 +444,9 @@ void bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod,
 struct bna_mac *bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod);
 void bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod,
 			  struct bna_mac *mac);
+struct bna_mcam_handle *bna_mcam_mod_handle_get(struct bna_mcam_mod *mod);
+void bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod,
+			  struct bna_mcam_handle *handle);
 struct bna_rit_segment *
 bna_rit_mod_seg_get(struct bna_rit_mod *rit_mod, int seg_size);
 void bna_rit_mod_seg_put(struct bna_rit_mod *rit_mod,
@@ -409,6 +496,14 @@ void bna_port_cb_rx_stopped(struct bna_port *port,
 			    enum bna_cb_status status);
 
 /**
+ * ETHPORT
+ */
+
+/* Callbacks for RX */
+void bna_ethport_cb_rx_started(struct bna_ethport *ethport);
+void bna_ethport_cb_rx_stopped(struct bna_ethport *ethport);
+
+/**
  * IB
  */
 
@@ -420,6 +515,12 @@ void bna_ib_mod_uninit(struct bna_ib_mod *ib_mod);
 /**
  * TX MODULE AND TX
  */
+/* FW response handlers */
+void bna_bfi_tx_enet_start_rsp(struct bna_tx *tx,
+			       struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_tx_enet_stop_rsp(struct bna_tx *tx,
+			      struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_bw_update_aen(struct bna_tx_mod *tx_mod);
 
 /* APIs for BNA */
 void bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
@@ -427,7 +528,7 @@ void bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
 void bna_tx_mod_uninit(struct bna_tx_mod *tx_mod);
 int bna_tx_state_get(struct bna_tx *tx);
 
-/* APIs for PORT */
+/* APIs for ENET */
 void bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
 void bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
 void bna_tx_mod_fail(struct bna_tx_mod *tx_mod);
@@ -444,8 +545,8 @@ struct bna_tx *bna_tx_create(struct bna *bna, struct bnad *bnad,
 void bna_tx_destroy(struct bna_tx *tx);
 void bna_tx_enable(struct bna_tx *tx);
 void bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
-		    void (*cbfn)(void *, struct bna_tx *,
-				 enum bna_cb_status));
+		    void (*cbfn)(void *, struct bna_tx *));
+void bna_tx_cleanup_complete(struct bna_tx *tx);
 void bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo);
 
 /**
@@ -473,6 +574,15 @@ void rxf_reset_packet_filter_promisc(struct bna_rxf *rxf);
 void rxf_reset_packet_filter_default(struct bna_rxf *rxf);
 void rxf_reset_packet_filter_allmulti(struct bna_rxf *rxf);
 
+/* FW response handlers */
+void bna_bfi_rx_enet_start_rsp(struct bna_rx *rx,
+			       struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_rx_enet_stop_rsp(struct bna_rx *rx,
+			      struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_rxf_cfg_rsp(struct bna_rxf *rxf, struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
+			       struct bfi_msgq_mhdr *msghdr);
+
 /* APIs for BNA */
 void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
 		     struct bna_res_info *res_info);
@@ -480,7 +590,7 @@ void bna_rx_mod_uninit(struct bna_rx_mod *rx_mod);
 int bna_rx_state_get(struct bna_rx *rx);
 int bna_rxf_state_get(struct bna_rxf *rxf);
 
-/* APIs for PORT */
+/* APIs for ENET */
 void bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
 void bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
 void bna_rx_mod_fail(struct bna_rx_mod *rx_mod);
@@ -495,42 +605,84 @@ struct bna_rx *bna_rx_create(struct bna *bna, struct bnad *bnad,
 void bna_rx_destroy(struct bna_rx *rx);
 void bna_rx_enable(struct bna_rx *rx);
 void bna_rx_disable(struct bna_rx *rx, enum bna_cleanup_type type,
-		    void (*cbfn)(void *, struct bna_rx *,
-				 enum bna_cb_status));
+		    void (*cbfn)(void *, struct bna_rx *));
+void bna_rx_cleanup_complete(struct bna_rx *rx);
 void bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo);
 void bna_rx_dim_reconfig(struct bna *bna, const u32 vector[][BNA_BIAS_T_MAX]);
 void bna_rx_dim_update(struct bna_ccb *ccb);
 enum bna_cb_status
 bna_rx_ucast_set(struct bna_rx *rx, u8 *ucmac,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status));
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
+enum bna_cb_status
+bna_rx_ucast_add(struct bna_rx *rx, u8 *ucmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
+enum bna_cb_status
+bna_rx_ucast_del(struct bna_rx *rx, u8 *ucmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
 enum bna_cb_status
 bna_rx_mcast_add(struct bna_rx *rx, u8 *mcmac,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status));
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
 enum bna_cb_status
 bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mcmac,
-		     void (*cbfn)(struct bnad *, struct bna_rx *,
-				  enum bna_cb_status));
+		     void (*cbfn)(struct bnad *, struct bna_rx *));
 enum bna_cb_status
 bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode rxmode,
 		enum bna_rxmode bitmask,
-		void (*cbfn)(struct bnad *, struct bna_rx *,
-			     enum bna_cb_status));
+		void (*cbfn)(struct bnad *, struct bna_rx *));
 void bna_rx_vlan_add(struct bna_rx *rx, int vlan_id);
 void bna_rx_vlan_del(struct bna_rx *rx, int vlan_id);
 void bna_rx_vlanfilter_enable(struct bna_rx *rx);
-void bna_rx_hds_enable(struct bna_rx *rx, struct bna_rxf_hds *hds_config,
-		       void (*cbfn)(struct bnad *, struct bna_rx *,
-				    enum bna_cb_status));
+void bna_rx_hds_enable(struct bna_rx *rx, struct bna_hds_config *hds_config,
+		       void (*cbfn)(struct bnad *, struct bna_rx *));
 void bna_rx_hds_disable(struct bna_rx *rx,
-			void (*cbfn)(struct bnad *, struct bna_rx *,
-				     enum bna_cb_status));
+			void (*cbfn)(struct bnad *, struct bna_rx *));
+
+/**
+ * ENET
+ */
+
+/* API for RX */
+int bna_enet_mtu_get(struct bna_enet *enet);
+
+/* Callbacks for TX, RX */
+void bna_enet_cb_tx_stopped(struct bna_enet *enet);
+void bna_enet_cb_rx_stopped(struct bna_enet *enet);
+
+/* API for BNAD */
+void bna_enet_enable(struct bna_enet *enet);
+void bna_enet_disable(struct bna_enet *enet, enum bna_cleanup_type type,
+		      void (*cbfn)(void *));
+void bna_enet_pause_config(struct bna_enet *enet,
+			   struct bna_pause_config *pause_config,
+			   void (*cbfn)(struct bnad *));
+void bna_enet_mtu_set(struct bna_enet *enet, int mtu,
+		      void (*cbfn)(struct bnad *));
+void bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac);
+
+/**
+ * IOCETH
+ */
+
+/* APIs for BNAD */
+void bna_ioceth_enable(struct bna_ioceth *ioceth);
+void bna_ioceth_disable(struct bna_ioceth *ioceth,
+			enum bna_cleanup_type type);
 
 /**
  * BNAD
  */
 
+/* Callbacks for ENET */
+void bnad_cb_ethport_link_status(struct bnad *bnad,
+			      enum bna_link_status status);
+
+/* Callbacks for IOCETH */
+void bnad_cb_ioceth_ready(struct bnad *bnad);
+void bnad_cb_ioceth_failed(struct bnad *bnad);
+void bnad_cb_ioceth_disabled(struct bnad *bnad);
+void bnad_cb_mbox_intr_enable(struct bnad *bnad);
+void bnad_cb_mbox_intr_disable(struct bnad *bnad);
+
 /* Callbacks for BNA */
 void bnad_cb_stats_get(struct bnad *bnad, enum bna_cb_status status,
 		       struct bna_stats *stats);
diff --git a/drivers/net/bna/bna_types.h b/drivers/net/bna/bna_types.h
index 2f89cb2..655eb14 100644
--- a/drivers/net/bna/bna_types.h
+++ b/drivers/net/bna/bna_types.h
@@ -19,8 +19,10 @@
 #define __BNA_TYPES_H__
 
 #include "cna.h"
-#include "bna_hw.h"
+#include "bna_hw_defs.h"
 #include "bfa_cee.h"
+#include "bfi_enet.h"
+#include "bfa_msgq.h"
 
 /**
  *
@@ -28,6 +30,7 @@
  *
  */
 
+struct bna_mcam_handle;
 struct bna_txq;
 struct bna_tx;
 struct bna_rxq;
@@ -35,6 +38,7 @@ struct bna_cq;
 struct bna_rx;
 struct bna_rxf;
 struct bna_port;
+struct bna_enet;
 struct bna;
 struct bnad;
 
@@ -104,13 +108,26 @@ enum bna_res_req_type {
 	BNA_RES_T_MAX
 };
 
+enum bna_mod_res_req_type {
+	BNA_MOD_RES_MEM_T_TX_ARRAY	= 0,
+	BNA_MOD_RES_MEM_T_TXQ_ARRAY	= 1,
+	BNA_MOD_RES_MEM_T_RX_ARRAY	= 2,
+	BNA_MOD_RES_MEM_T_RXP_ARRAY	= 3,
+	BNA_MOD_RES_MEM_T_RXQ_ARRAY	= 4,
+	BNA_MOD_RES_MEM_T_UCMAC_ARRAY	= 5,
+	BNA_MOD_RES_MEM_T_MCMAC_ARRAY	= 6,
+	BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY = 7,
+	BNA_MOD_RES_T_MAX
+};
+
 enum bna_tx_res_req_type {
 	BNA_TX_RES_MEM_T_TCB	= 0,
 	BNA_TX_RES_MEM_T_UNMAPQ	= 1,
 	BNA_TX_RES_MEM_T_QPT	= 2,
 	BNA_TX_RES_MEM_T_SWQPT	= 3,
 	BNA_TX_RES_MEM_T_PAGE	= 4,
-	BNA_TX_RES_INTR_T_TXCMPL = 5,
+	BNA_TX_RES_MEM_T_IBIDX	= 5,
+	BNA_TX_RES_INTR_T_TXCMPL = 6,
 	BNA_TX_RES_T_MAX,
 };
 
@@ -127,8 +144,10 @@ enum bna_rx_mem_type {
 	BNA_RX_RES_MEM_T_DSWQPT		= 9,	/* RX s/w QPT */
 	BNA_RX_RES_MEM_T_DPAGE		= 10,	/* RX s/w QPT */
 	BNA_RX_RES_MEM_T_HPAGE		= 11,	/* RX s/w QPT */
-	BNA_RX_RES_T_INTR		= 12,	/* Rx interrupts */
-	BNA_RX_RES_T_MAX		= 13
+	BNA_RX_RES_MEM_T_IBIDX		= 12,
+	BNA_RX_RES_MEM_T_RIT		= 13,
+	BNA_RX_RES_T_INTR		= 14,	/* Rx interrupts */
+	BNA_RX_RES_T_MAX		= 15
 };
 
 enum bna_mbox_state {
@@ -142,14 +161,15 @@ enum bna_tx_type {
 };
 
 enum bna_tx_flags {
-	BNA_TX_F_PORT_STARTED	= 1,
+	BNA_TX_F_ENET_STARTED	= 1,
 	BNA_TX_F_ENABLED	= 2,
-	BNA_TX_F_PRIO_LOCK	= 4,
+	BNA_TX_F_PRIO_CHANGED	= 4,
+	BNA_TX_F_BW_UPDATED	= 8,
 };
 
 enum bna_tx_mod_flags {
-	BNA_TX_MOD_F_PORT_STARTED	= 1,
-	BNA_TX_MOD_F_PORT_LOOPBACK	= 2,
+	BNA_TX_MOD_F_ENET_STARTED	= 1,
+	BNA_TX_MOD_F_ENET_LOOPBACK	= 2,
 };
 
 enum bna_rx_type {
@@ -165,16 +185,19 @@ enum bna_rxp_type {
 
 enum bna_rxmode {
 	BNA_RXMODE_PROMISC	= 1,
-	BNA_RXMODE_ALLMULTI	= 2
+	BNA_RXMODE_DEFAULT	= 2,
+	BNA_RXMODE_ALLMULTI	= 4
 };
 
 enum bna_rx_event {
 	RX_E_START			= 1,
 	RX_E_STOP			= 2,
 	RX_E_FAIL			= 3,
-	RX_E_RXF_STARTED		= 4,
-	RX_E_RXF_STOPPED		= 5,
-	RX_E_RXQ_STOPPED		= 6,
+	RX_E_STARTED			= 4,
+	RX_E_STOPPED			= 5,
+	RX_E_RXF_STARTED		= 6,
+	RX_E_RXF_STOPPED		= 7,
+	RX_E_CLEANUP_DONE		= 8,
 };
 
 enum bna_rx_state {
@@ -186,14 +209,13 @@ enum bna_rx_state {
 };
 
 enum bna_rx_flags {
-	BNA_RX_F_ENABLE		= 0x01,		/* bnad enabled rxf */
-	BNA_RX_F_PORT_ENABLED	= 0x02,		/* Port object is enabled */
-	BNA_RX_F_PORT_FAILED	= 0x04,		/* Port in failed state */
+	BNA_RX_F_ENET_STARTED	= 1,
+	BNA_RX_F_ENABLED	= 2,
 };
 
 enum bna_rx_mod_flags {
-	BNA_RX_MOD_F_PORT_STARTED	= 1,
-	BNA_RX_MOD_F_PORT_LOOPBACK	= 2,
+	BNA_RX_MOD_F_ENET_STARTED	= 1,
+	BNA_RX_MOD_F_ENET_LOOPBACK	= 2,
 };
 
 enum bna_rxf_oper_state {
@@ -202,25 +224,17 @@ enum bna_rxf_oper_state {
 };
 
 enum bna_rxf_flags {
-	BNA_RXF_FL_STOP_PENDING		= 0x01,
-	BNA_RXF_FL_FAILED		= 0x02,
-	BNA_RXF_FL_RSS_CONFIG_PENDING	= 0x04,
-	BNA_RXF_FL_OPERSTATE_CHANGED	= 0x08,
-	BNA_RXF_FL_RXF_ENABLED		= 0x10,
-	BNA_RXF_FL_VLAN_CONFIG_PENDING	= 0x20,
+	BNA_RXF_F_PAUSED		= 1,
 };
 
 enum bna_rxf_event {
 	RXF_E_START			= 1,
 	RXF_E_STOP			= 2,
 	RXF_E_FAIL			= 3,
-	RXF_E_CAM_FLTR_MOD		= 4,
-	RXF_E_STARTED			= 5,
-	RXF_E_STOPPED			= 6,
-	RXF_E_CAM_FLTR_RESP		= 7,
-	RXF_E_PAUSE			= 8,
-	RXF_E_RESUME			= 9,
-	RXF_E_STAT_CLEARED		= 10,
+	RXF_E_CONFIG			= 4,
+	RXF_E_PAUSE			= 5,
+	RXF_E_RESUME			= 6,
+	RXF_E_FW_RESP			= 7,
 };
 
 enum bna_rxf_state {
@@ -241,6 +255,12 @@ enum bna_port_type {
 	BNA_PORT_T_LOOPBACK_EXTERNAL	= 2,
 };
 
+enum bna_enet_type {
+	BNA_ENET_T_REGULAR		= 0,
+	BNA_ENET_T_LOOPBACK_INTERNAL	= 1,
+	BNA_ENET_T_LOOPBACK_EXTERNAL	= 2,
+};
+
 enum bna_link_status {
 	BNA_LINK_DOWN		= 0,
 	BNA_LINK_UP		= 1,
@@ -253,6 +273,12 @@ enum bna_llport_flags {
 	BNA_LLPORT_F_RX_STARTED		= 4
 };
 
+enum bna_ethport_flags {
+	BNA_ETHPORT_F_ADMIN_UP		= 1,
+	BNA_ETHPORT_F_PORT_ENABLED	= 2,
+	BNA_ETHPORT_F_RX_STARTED	= 4,
+};
+
 enum bna_port_flags {
 	BNA_PORT_F_DEVICE_READY	= 1,
 	BNA_PORT_F_ENABLED	= 2,
@@ -260,6 +286,23 @@ enum bna_port_flags {
 	BNA_PORT_F_MTU_CHANGED	= 8
 };
 
+enum bna_enet_flags {
+	BNA_ENET_F_IOCETH_READY		= 1,
+	BNA_ENET_F_ENABLED		= 2,
+	BNA_ENET_F_PAUSE_CHANGED	= 4,
+	BNA_ENET_F_MTU_CHANGED		= 8
+};
+
+enum bna_rss_flags {
+	BNA_RSS_F_RIT_PENDING		= 1,
+	BNA_RSS_F_CFG_PENDING		= 2,
+	BNA_RSS_F_STATUS_PENDING	= 4,
+};
+
+enum bna_mod_flags {
+	BNA_MOD_F_INIT_DONE		= 1,
+};
+
 enum bna_pkt_rates {
 	BNA_PKT_RATE_10K		= 10000,
 	BNA_PKT_RATE_20K		= 20000,
@@ -289,10 +332,17 @@ enum bna_dim_bias_types {
 	BNA_BIAS_T_MAX			= 2
 };
 
+#define BNA_MAX_NAME_SIZE	64
+struct bna_ident {
+	int			id;
+	char			name[BNA_MAX_NAME_SIZE];
+};
+
 struct bna_mac {
 	/* This should be the first one */
 	struct list_head			qe;
 	u8			addr[ETH_ALEN];
+	struct bna_mcam_handle *handle;
 };
 
 struct bna_mem_descr {
@@ -338,23 +388,29 @@ struct bna_qpt {
 	u32		page_size;
 };
 
+struct bna_attr {
+	int			num_txq;
+	int			num_rxp;
+	int			num_ucmac;
+	int			num_mcmac;
+	int			max_rit_size;
+};
+
 /**
  *
- * Device
+ * IOCEth
  *
  */
 
-struct bna_device {
+struct bna_ioceth {
 	bfa_fsm_t		fsm;
 	struct bfa_ioc ioc;
 
-	enum bna_intr_type intr_type;
-	int			vector;
+	struct bna_attr attr;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	struct bfi_enet_attr_req attr_req;
 
-	void (*ready_cbfn)(struct bnad *bnad, enum bna_cb_status status);
-	struct bnad *ready_cbarg;
-
-	void (*stop_cbfn)(struct bnad *bnad, enum bna_cb_status status);
+	void (*stop_cbfn)(struct bnad *bnad);
 	struct bnad *stop_cbarg;
 
 	struct bna *bna;
@@ -447,6 +503,68 @@ struct bna_port {
 
 /**
  *
+ * Enet
+ *
+ */
+
+struct bna_enet {
+	bfa_fsm_t		fsm;
+	enum bna_enet_flags flags;
+
+	enum bna_enet_type type;
+
+	struct bna_pause_config pause_config;
+	int			mtu;
+
+	/* Callback for bna_enet_disable(), enet_stop() */
+	void (*stop_cbfn)(void *);
+	void			*stop_cbarg;
+
+	/* Callback for bna_enet_pause_config() */
+	void (*pause_cbfn)(struct bnad *);
+
+	/* Callback for bna_enet_mtu_set() */
+	void (*mtu_cbfn)(struct bnad *);
+
+	struct bfa_wc		chld_stop_wc;
+
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	struct bfi_enet_set_pause_req pause_req;
+
+	struct bna *bna;
+};
+
+/**
+ *
+ * Ethport
+ *
+ */
+
+struct bna_ethport {
+	bfa_fsm_t		fsm;
+	enum bna_ethport_flags flags;
+
+	enum bna_link_status link_status;
+
+	int			rx_started_count;
+
+	void (*stop_cbfn)(struct bna_enet *);
+
+	void (*adminup_cbfn)(struct bnad *, enum bna_cb_status);
+
+	void (*link_cbfn)(struct bnad *, enum bna_link_status);
+
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_enable_req admin_req;
+		struct bfi_enet_diag_lb_req lpbk_req;
+	} bfi_enet_cmd;
+
+	struct bna *bna;
+};
+
+/**
+ *
  * Interrupt Block
  *
  */
@@ -478,55 +596,20 @@ struct bna_ib_dbell {
 	u32		doorbell_ack;
 };
 
-/* Interrupt timer configuration */
-struct bna_ib_config {
-	u8		coalescing_timeo;    /* Unit is 5usec. */
-
-	int			interpkt_count;
-	int			interpkt_timeo;
-
-	enum ib_flags ctrl_flags;
-};
-
 /* IB structure */
 struct bna_ib {
-	/* This should be the first one */
-	struct list_head			qe;
-
-	int			ib_id;
-
-	int			ref_count;
-	int			start_count;
-
 	struct bna_dma_addr ib_seg_host_addr;
 	void		*ib_seg_host_addr_kva;
-	u32		idx_mask; /* Size >= BNA_IBIDX_MAX_SEGSIZE */
-
-	struct bna_ibidx_seg *idx_seg;
 
 	struct bna_ib_dbell door_bell;
 
-	struct bna_intr *intr;
-
-	struct bna_ib_config ib_config;
-
-	struct bna *bna;
-};
-
-/* IB module - keeps track of IBs and interrupts */
-struct bna_ib_mod {
-	struct bna_ib *ib;		/* BFI_MAX_IB entries */
-	struct bna_intr *intr;		/* BFI_MAX_IB entries */
-	struct bna_ibidx_seg *idx_seg;	/* BNA_IBIDX_TOTAL_SEGS */
-
-	struct list_head			ib_free_q;
-
-	struct list_head		ibidx_seg_pool[BFI_IBIDX_TOTAL_POOLS];
+	enum bna_intr_type	intr_type;
+	int			intr_vector;
 
-	struct list_head			intr_free_q;
-	struct list_head			intr_active_q;
+	u8			coalescing_timeo;    /* Unit is 5usec. */
 
-	struct bna *bna;
+	int			interpkt_count;
+	int			interpkt_timeo;
 };
 
 /**
@@ -552,6 +635,7 @@ struct bna_tcb {
 	/* Control path */
 	struct bna_txq *txq;
 	struct bnad *bnad;
+	void			*priv; /* BNAD's cookie */
 	enum bna_intr_type intr_type;
 	int			intr_vector;
 	u8			priority; /* Current priority */
@@ -565,68 +649,66 @@ struct bna_txq {
 	/* This should be the first one */
 	struct list_head			qe;
 
-	int			txq_id;
-
 	u8			priority;
 
 	struct bna_qpt qpt;
 	struct bna_tcb *tcb;
-	struct bna_ib *ib;
-	int			ib_seg_offset;
+	struct bna_ib ib;
 
 	struct bna_tx *tx;
 
+	int			hw_id;
+
 	u64		tx_packets;
 	u64		tx_bytes;
 };
 
-/* TxF structure (hardware Tx Function) */
-struct bna_txf {
-	int			txf_id;
-	enum txf_flags ctrl_flags;
-	u16		vlan;
-};
-
 /* Tx object */
 struct bna_tx {
 	/* This should be the first one */
 	struct list_head			qe;
+	int			rid;
+	int			hw_id;
 
 	bfa_fsm_t		fsm;
 	enum bna_tx_flags flags;
 
 	enum bna_tx_type type;
+	int			num_txq;
 
 	struct list_head			txq_q;
-	struct bna_txf txf;
+	u16			txf_vlan_id;
 
 	/* Tx event handlers */
 	void (*tcb_setup_cbfn)(struct bnad *, struct bna_tcb *);
 	void (*tcb_destroy_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_stall_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_resume_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tcb *);
+	void (*tx_stall_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_resume_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tx *);
 
 	/* callback for bna_tx_disable(), bna_tx_stop() */
-	void (*stop_cbfn)(void *arg, struct bna_tx *tx,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(void *arg, struct bna_tx *tx);
 	void			*stop_cbarg;
 
 	/* callback for bna_tx_prio_set() */
-	void (*prio_change_cbfn)(struct bnad *bnad, struct bna_tx *tx,
-				enum bna_cb_status status);
+	void (*prio_change_cbfn)(struct bnad *bnad, struct bna_tx *tx);
 
-	struct bfa_wc		txq_stop_wc;
-
-	struct bna_mbox_qe mbox_qe;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_tx_cfg_req	cfg_req;
+		struct bfi_enet_req		req;
+		struct bfi_enet_tx_cfg_rsp	cfg_rsp;
+	} bfi_enet_cmd;
 
 	struct bna *bna;
 	void			*priv;	/* bnad's cookie */
 };
 
+/* Tx object configuration used during creation */
 struct bna_tx_config {
 	int			num_txq;
 	int			txq_depth;
+	int			coalescing_timeo;
 	enum bna_tx_type tx_type;
 };
 
@@ -635,9 +717,9 @@ struct bna_tx_event_cbfn {
 	void (*tcb_setup_cbfn)(struct bnad *, struct bna_tcb *);
 	void (*tcb_destroy_cbfn)(struct bnad *, struct bna_tcb *);
 	/* Mandatory */
-	void (*tx_stall_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_resume_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tcb *);
+	void (*tx_stall_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_resume_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tx *);
 };
 
 /* Tx module - keeps track of free, active tx objects */
@@ -651,17 +733,19 @@ struct bna_tx_mod {
 	struct list_head			txq_free_q;
 
 	/* callback for bna_tx_mod_stop() */
-	void (*stop_cbfn)(struct bna_port *port,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(struct bna_enet *enet);
 
 	struct bfa_wc		tx_stop_wc;
 
 	enum bna_tx_mod_flags flags;
 
-	int			priority;
-	int			cee_link;
+	u8			prio_map;
+	int			default_prio;
+	int			iscsi_over_cee;
+	int			iscsi_prio;
+	int			prio_reconfigured;
 
-	u32		txf_bmap[2];
+	u32			rid_mask;
 
 	struct bna *bna;
 };
@@ -693,13 +777,6 @@ struct bna_rit_segment {
 	struct bna_rit_entry *rit;
 };
 
-struct bna_rit_mod {
-	struct bna_rit_entry *rit;
-	struct bna_rit_segment *rit_segment;
-
-	struct list_head		rit_seg_pool[BFI_RIT_SEG_TOTAL_POOLS];
-};
-
 /**
  *
  * Rx object
@@ -719,8 +796,9 @@ struct bna_rcb {
 	int			page_count;
 	/* Control path */
 	struct bna_rxq *rxq;
-	struct bna_cq *cq;
+	struct bna_ccb *ccb;
 	struct bnad *bnad;
+	void			*priv; /* BNAD's cookie */
 	unsigned long		flags;
 	int			id;
 };
@@ -728,7 +806,6 @@ struct bna_rcb {
 /* RxQ structure - QPT, configuration */
 struct bna_rxq {
 	struct list_head			qe;
-	int			rxq_id;
 
 	int			buffer_size;
 	int			q_depth;
@@ -739,6 +816,8 @@ struct bna_rxq {
 	struct bna_rxp *rxp;
 	struct bna_rx *rx;
 
+	int			hw_id;
+
 	u64		rx_packets;
 	u64		rx_bytes;
 	u64		rx_packets_with_error;
@@ -784,6 +863,7 @@ struct bna_ccb {
 	/* Control path */
 	struct bna_cq *cq;
 	struct bnad *bnad;
+	void			*priv; /* BNAD's cookie */
 	enum bna_intr_type intr_type;
 	int			intr_vector;
 	u8			rx_coalescing_timeo; /* For NAPI */
@@ -793,46 +873,43 @@ struct bna_ccb {
 
 /* CQ QPT, configuration  */
 struct bna_cq {
-	int			cq_id;
-
 	struct bna_qpt qpt;
 	struct bna_ccb *ccb;
 
-	struct bna_ib *ib;
-	u8			ib_seg_offset;
+	struct bna_ib ib;
 
 	struct bna_rx *rx;
 };
 
 struct bna_rss_config {
-	enum rss_hash_type hash_type;
+	enum bfi_enet_rss_type	hash_type;
 	u8			hash_mask;
-	u32		toeplitz_hash_key[BFI_RSS_HASH_KEY_LEN];
+	u32		toeplitz_hash_key[BFI_ENET_RSS_KEY_LEN];
 };
 
 struct bna_hds_config {
-	enum hds_header_type hdr_type;
-	int			header_size;
+	enum bfi_enet_hds_type	hdr_type;
+	int			forced_offset;
 };
 
-/* This structure is used during RX creation */
+/* Rx object configuration used during creation */
 struct bna_rx_config {
 	enum bna_rx_type rx_type;
 	int			num_paths;
 	enum bna_rxp_type rxp_type;
 	int			paused;
 	int			q_depth;
+	int			coalescing_timeo;
 	/*
 	 * Small/Large (or Header/Data) buffer size to be configured
 	 * for SLR and HDS queue type. Large buffer size comes from
-	 * port->mtu.
+	 * enet->mtu.
 	 */
 	int			small_buff_size;
 
 	enum bna_status rss_status;
 	struct bna_rss_config rss_config;
 
-	enum bna_status hds_status;
 	struct bna_hds_config hds_config;
 
 	enum bna_status vlan_strip_status;
@@ -851,51 +928,35 @@ struct bna_rxp {
 
 	/* MSI-x vector number for configuring RSS */
 	int			vector;
-
-	struct bna_mbox_qe mbox_qe;
-};
-
-/* HDS configuration structure */
-struct bna_rxf_hds {
-	enum hds_header_type hdr_type;
-	int			header_size;
-};
-
-/* RSS configuration structure */
-struct bna_rxf_rss {
-	enum rss_hash_type hash_type;
-	u8			hash_mask;
-	u32		toeplitz_hash_key[BFI_RSS_HASH_KEY_LEN];
+	int			hw_id;
 };
 
 /* RxF structure (hardware Rx Function) */
 struct bna_rxf {
 	bfa_fsm_t		fsm;
-	int			rxf_id;
-	enum rxf_flags ctrl_flags;
-	u16		default_vlan_tag;
-	enum bna_rxf_oper_state rxf_oper_state;
-	enum bna_status hds_status;
-	struct bna_rxf_hds hds_cfg;
-	enum bna_status rss_status;
-	struct bna_rxf_rss rss_cfg;
-	struct bna_rit_segment *rit_segment;
-	struct bna_rx *rx;
-	u32		forced_offset;
-	struct bna_mbox_qe mbox_qe;
-	int			mcast_rxq_id;
+	enum bna_rxf_flags flags;
+
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_enable_req req;
+		struct bfi_enet_rss_cfg_req rss_req;
+		struct bfi_enet_rit_req rit_req;
+		struct bfi_enet_rx_vlan_req vlan_req;
+		struct bfi_enet_mcast_add_req mcast_add_req;
+		struct bfi_enet_mcast_del_req mcast_del_req;
+		struct bfi_enet_ucast_req ucast_req;
+	} bfi_enet_cmd;
 
 	/* callback for bna_rxf_start() */
-	void (*start_cbfn) (struct bna_rx *rx, enum bna_cb_status status);
+	void (*start_cbfn) (struct bna_rx *rx);
 	struct bna_rx *start_cbarg;
 
 	/* callback for bna_rxf_stop() */
-	void (*stop_cbfn) (struct bna_rx *rx, enum bna_cb_status status);
+	void (*stop_cbfn) (struct bna_rx *rx);
 	struct bna_rx *stop_cbarg;
 
-	/* callback for bna_rxf_receive_enable() / bna_rxf_receive_disable() */
-	void (*oper_state_cbfn) (struct bnad *bnad, struct bna_rx *rx,
-			enum bna_cb_status status);
+	/* callback for bna_rx_receive_pause() / bna_rx_receive_resume() */
+	void (*oper_state_cbfn) (struct bnad *bnad, struct bna_rx *rx);
 	struct bnad *oper_state_cbarg;
 
 	/**
@@ -905,25 +966,25 @@ struct bna_rxf {
 	 *	bna_rxf_{ucast/mcast}_del(),
 	 *	bna_rxf_mode_set()
 	 */
-	void (*cam_fltr_cbfn)(struct bnad *bnad, struct bna_rx *rx,
-				enum bna_cb_status status);
+	void (*cam_fltr_cbfn)(struct bnad *bnad, struct bna_rx *rx);
 	struct bnad *cam_fltr_cbarg;
 
-	enum bna_rxf_flags rxf_flags;
-
 	/* List of unicast addresses yet to be applied to h/w */
 	struct list_head			ucast_pending_add_q;
 	struct list_head			ucast_pending_del_q;
+	struct bna_mac *ucast_pending_mac;
 	int			ucast_pending_set;
 	/* ucast addresses applied to the h/w */
 	struct list_head			ucast_active_q;
-	struct bna_mac *ucast_active_mac;
+	struct bna_mac ucast_active_mac;
+	int			ucast_active_set;
 
 	/* List of multicast addresses yet to be applied to h/w */
 	struct list_head			mcast_pending_add_q;
 	struct list_head			mcast_pending_del_q;
 	/* multicast addresses applied to the h/w */
 	struct list_head			mcast_active_q;
+	struct list_head			mcast_handle_q;
 
 	/* Rx modes yet to be applied to h/w */
 	enum bna_rxmode rxmode_pending;
@@ -931,41 +992,58 @@ struct bna_rxf {
 	/* Rx modes applied to h/w */
 	enum bna_rxmode rxmode_active;
 
+	u8			vlan_pending_bitmask;
 	enum bna_status vlan_filter_status;
-	u32		vlan_filter_table[(BFI_MAX_VLAN + 1) / 32];
+	u32	vlan_filter_table[(BFI_ENET_VLAN_ID_MAX) / 32];
+	bool			vlan_strip_pending;
+	enum bna_status		vlan_strip_status;
+
+	enum bna_rss_flags	rss_pending;
+	enum bna_status		rss_status;
+	struct bna_rss_config	rss_cfg;
+	u8			*rit;
+	int			rit_size;
+
+	struct bna_rx		*rx;
 };
 
 /* Rx object */
 struct bna_rx {
 	/* This should be the first one */
 	struct list_head			qe;
+	int			rid;
+	int			hw_id;
 
 	bfa_fsm_t		fsm;
 
 	enum bna_rx_type type;
 
-	/* list-head for RX path objects */
+	int			num_paths;
 	struct list_head			rxp_q;
 
+	struct bna_hds_config	hds_cfg;
+
 	struct bna_rxf rxf;
 
 	enum bna_rx_flags rx_flags;
 
-	struct bna_mbox_qe mbox_qe;
-
-	struct bfa_wc		rxq_stop_wc;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_rx_cfg_req	cfg_req;
+		struct bfi_enet_req		req;
+		struct bfi_enet_rx_cfg_rsp	cfg_rsp;
+	} bfi_enet_cmd;
 
 	/* Rx event handlers */
 	void (*rcb_setup_cbfn)(struct bnad *, struct bna_rcb *);
 	void (*rcb_destroy_cbfn)(struct bnad *, struct bna_rcb *);
 	void (*ccb_setup_cbfn)(struct bnad *, struct bna_ccb *);
 	void (*ccb_destroy_cbfn)(struct bnad *, struct bna_ccb *);
-	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_ccb *);
-	void (*rx_post_cbfn)(struct bnad *, struct bna_rcb *);
+	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_rx *);
+	void (*rx_post_cbfn)(struct bnad *, struct bna_rx *);
 
 	/* callback for bna_rx_disable(), bna_rx_stop() */
-	void (*stop_cbfn)(void *arg, struct bna_rx *rx,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(void *arg, struct bna_rx *rx);
 	void			*stop_cbarg;
 
 	struct bna *bna;
@@ -979,8 +1057,8 @@ struct bna_rx_event_cbfn {
 	void (*ccb_setup_cbfn)(struct bnad *, struct bna_ccb *);
 	void (*ccb_destroy_cbfn)(struct bnad *, struct bna_ccb *);
 	/* Mandatory */
-	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_ccb *);
-	void (*rx_post_cbfn)(struct bnad *, struct bna_rcb *);
+	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_rx *);
+	void (*rx_post_cbfn)(struct bnad *, struct bna_rx *);
 };
 
 /* Rx module - keeps track of free, active rx objects */
@@ -1003,12 +1081,11 @@ struct bna_rx_mod {
 	enum bna_rx_mod_flags flags;
 
 	/* callback for bna_rx_mod_stop() */
-	void (*stop_cbfn)(struct bna_port *port,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(struct bna_enet *enet);
 
 	struct bfa_wc		rx_stop_wc;
 	u32		dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX];
-	u32		rxf_bmap[2];
+	u32		rid_mask;
 };
 
 /**
@@ -1024,9 +1101,18 @@ struct bna_ucam_mod {
 	struct bna *bna;
 };
 
+struct bna_mcam_handle {
+	/* This should be the first one */
+	struct list_head			qe;
+	int			handle;
+	int			refcnt;
+};
+
 struct bna_mcam_mod {
 	struct bna_mac *mcmac;		/* BFI_MAX_MCMAC entries */
+	struct bna_mcam_handle *mchandle;	/* BFI_MAX_MCMAC entries */
 	struct list_head			free_q;
+	struct list_head			free_handle_q;
 
 	struct bna *bna;
 };
@@ -1059,7 +1145,6 @@ struct bna_rx_stats {
 	int			num_active_mcast;
 	int			rxmode_active;
 	int			vlan_filter_status;
-	u32		vlan_filter_table[(BFI_MAX_VLAN + 1) / 32];
 	int			rss_status;
 	int			hds_status;
 };
@@ -1072,15 +1157,22 @@ struct bna_sw_stats {
 	int			priority;
 	int			num_active_tx;
 	int			num_active_rx;
-	struct bna_tx_stats tx_stats[BFI_MAX_TXQ];
-	struct bna_rx_stats rx_stats[BFI_MAX_RXQ];
 };
 
 struct bna_stats {
-	u32		txf_bmap[2];
-	u32		rxf_bmap[2];
-	struct bfi_ll_stats	*hw_stats;
-	struct bna_sw_stats *sw_stats;
+	struct bna_dma_addr	hw_stats_dma;
+	struct bfi_enet_stats	*hw_stats_kva;
+	struct bfi_enet_stats	hw_stats;
+};
+
+struct bna_stats_mod {
+	bool		ioc_ready;
+	bool		stats_get_busy;
+	bool		stats_clr_busy;
+	struct bfa_msgq_cmd_entry stats_get_cmd;
+	struct bfa_msgq_cmd_entry stats_clr_cmd;
+	struct bfi_enet_stats_req stats_get;
+	struct bfi_enet_stats_req stats_clr;
 };
 
 /**
@@ -1090,38 +1182,32 @@ struct bna_stats {
  */
 
 struct bna {
+	struct bna_ident ident;
 	struct bfa_pcidev pcidev;
 
-	int			port_num;
+	struct bna_reg regs;
+	struct bna_bit_defn bits;
 
-	struct bna_chip_regs regs;
-
-	struct bna_dma_addr hw_stats_dma;
 	struct bna_stats stats;
 
-	struct bna_device device;
+	struct bna_ioceth ioceth;
 	struct bfa_cee cee;
+	struct bfa_msgq msgq;
 
-	struct bna_mbox_mod mbox_mod;
-
-	struct bna_port port;
+	struct bna_ethport ethport;
+	struct bna_enet enet;
+	struct bna_stats_mod stats_mod;
 
 	struct bna_tx_mod tx_mod;
-
 	struct bna_rx_mod rx_mod;
-
-	struct bna_ib_mod ib_mod;
-
 	struct bna_ucam_mod ucam_mod;
 	struct bna_mcam_mod mcam_mod;
 
-	struct bna_rit_mod rit_mod;
-
-	int			rxf_promisc_id;
+	enum bna_mod_flags mod_flags;
 
-	struct bna_mbox_qe mbox_qe;
+	int			default_mode_rid;
+	int			promisc_rid;
 
 	struct bnad *bnad;
 };
-
 #endif	/* __BNA_TYPES_H__ */
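[Sketch, not part of the diff: the msgq_cmd entry and bfi_enet_cmd union embedded in each object above replace the old per-object mailbox element. A rough illustration of the intended usage, assuming the MSGQ helpers from patch 1/8 (bfa_msgq_cmd_set()/bfa_msgq_cmd_post()), the bfi_msgq_mhdr_set() macro and the BFI_ENET_H2I_MAC_UCAST_SET_REQ message id from patch 2/8; the example_ function is made up and header sizing is omitted.]

static void
example_rxf_ucast_set(struct bna_rxf *rxf, const u8 *mac_addr)
{
	/* The request buffer lives inside the object itself */
	struct bfi_enet_ucast_req *req = &rxf->bfi_enet_cmd.ucast_req;

	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
			  BFI_ENET_H2I_MAC_UCAST_SET_REQ, 0, rxf->rx->rid);
	memcpy(&req->mac_addr, mac_addr, ETH_ALEN);

	/* One outstanding command per object; the firmware response is
	 * delivered back to the rxf state machine as RXF_E_FW_RESP.
	 */
	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
			 sizeof(struct bfi_enet_ucast_req), &req->mh);
	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
}

Because the command entry and request storage are part of struct bna_rxf, nothing is allocated on this path; the state machine simply serializes one firmware request at a time.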
diff --git a/drivers/net/bna/bnad.c b/drivers/net/bna/bnad.c
index 8e35b25..5ad07ea 100644
--- a/drivers/net/bna/bnad.c
+++ b/drivers/net/bna/bnad.c
@@ -441,11 +441,15 @@ bnad_poll_cq(struct bnad *bnad, struct bna_ccb *ccb, int budget)
 	struct bnad_skb_unmap *unmap_array;
 	struct sk_buff *skb;
 	u32 flags, unmap_cons;
-	u32 qid0 = ccb->rcb[0]->rxq->rxq_id;
 	struct bna_pkt_rate *pkt_rt = &ccb->pkt_rate;
+	struct bnad_rx_ctrl *rx_ctrl = (struct bnad_rx_ctrl *)(ccb->ctrl);
+
+	set_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
 
-	if (!test_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags))
+	if (!test_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags)) {
+		clear_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
 		return 0;
+	}
 
 	prefetch(bnad->netdev);
 	BNA_CQ_QPGE_PTR_GET(ccb->producer_index, ccb->sw_qpt, cmpl,
@@ -455,10 +459,10 @@ bnad_poll_cq(struct bnad *bnad, struct bna_ccb *ccb, int budget)
 		packets++;
 		BNA_UPDATE_PKT_CNT(pkt_rt, ntohs(cmpl->length));
 
-		if (qid0 == cmpl->rxq_id)
-			rcb = ccb->rcb[0];
-		else
+		if (bna_is_small_rxq(cmpl->rxq_id))
 			rcb = ccb->rcb[1];
+		else
+			rcb = ccb->rcb[0];
 
 		unmap_q = rcb->unmap_q;
 		unmap_array = unmap_q->unmap_array;
@@ -518,12 +522,9 @@ bnad_poll_cq(struct bnad *bnad, struct bna_ccb *ccb, int budget)
 		if (flags & BNA_CQ_EF_VLAN)
 			__vlan_hwaccel_put_tag(skb, ntohs(cmpl->vlan_tag));
 
-		if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
-			struct bnad_rx_ctrl *rx_ctrl;
-
-			rx_ctrl = (struct bnad_rx_ctrl *) ccb->ctrl;
+		if (skb->ip_summed == CHECKSUM_UNNECESSARY)
 			napi_gro_receive(&rx_ctrl->napi, skb);
-		} else {
+		else {
 			netif_receive_skb(skb);
 		}
 
@@ -545,6 +546,8 @@ next:
 			bna_ib_ack(ccb->i_dbell, 0);
 	}
 
+	clear_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
+
 	return packets;
 }
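[For reference, not part of the diff: the BNAD_FP_IN_RX_PATH bit set and cleared above pairs with a wait loop added to bnad_cb_rx_cleanup() later in this patch. Condensed, the two sides of the handshake are:]

	/* NAPI poll side (bnad_poll_cq) */
	set_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
	if (!test_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags)) {
		clear_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
		return 0;	/* Rx is being torn down */
	}
	/* ... process completions ... */
	clear_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);

	/* Cleanup side (bnad_cb_rx_cleanup): after clearing BNAD_RXQ_STARTED,
	 * wait for any poller still inside the fast path to drain out.
	 */
	while (test_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags))
		cpu_relax();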
 
@@ -611,7 +614,7 @@ bnad_msix_mbox_handler(int irq, void *data)
 
 	bna_intr_status_get(&bnad->bna, intr_status);
 
-	if (BNA_IS_MBOX_ERR_INTR(intr_status))
+	if (BNA_IS_MBOX_ERR_INTR(&bnad->bna, intr_status))
 		bna_mbox_handler(&bnad->bna, intr_status);
 
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
@@ -628,6 +631,7 @@ bnad_isr(int irq, void *data)
 	struct bnad *bnad = (struct bnad *)data;
 	struct bnad_rx_info *rx_info;
 	struct bnad_rx_ctrl *rx_ctrl;
+	struct bna_tcb *tcb = NULL;
 
 	if (unlikely(test_bit(BNAD_RF_MBOX_IRQ_DISABLED, &bnad->run_flags)))
 		return IRQ_NONE;
@@ -639,7 +643,7 @@ bnad_isr(int irq, void *data)
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 
-	if (BNA_IS_MBOX_ERR_INTR(intr_status))
+	if (BNA_IS_MBOX_ERR_INTR(&bnad->bna, intr_status))
 		bna_mbox_handler(&bnad->bna, intr_status);
 
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
@@ -650,8 +654,11 @@ bnad_isr(int irq, void *data)
 	/* Process data interrupts */
 	/* Tx processing */
 	for (i = 0; i < bnad->num_tx; i++) {
-		for (j = 0; j < bnad->num_txq_per_tx; j++)
-			bnad_tx(bnad, bnad->tx_info[i].tcb[j]);
+		for (j = 0; j < bnad->num_txq_per_tx; j++) {
+			tcb = bnad->tx_info[i].tcb[j];
+			if (tcb && test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags))
+				bnad_tx(bnad, bnad->tx_info[i].tcb[j]);
+		}
 	}
 	/* Rx processing */
 	for (i = 0; i < bnad->num_rx; i++) {
@@ -706,43 +713,49 @@ bnad_set_netdev_perm_addr(struct bnad *bnad)
 
 /* Callbacks */
 void
-bnad_cb_device_enable_mbox_intr(struct bnad *bnad)
+bnad_cb_mbox_intr_enable(struct bnad *bnad)
 {
 	bnad_enable_mbox_irq(bnad);
 }
 
 void
-bnad_cb_device_disable_mbox_intr(struct bnad *bnad)
+bnad_cb_mbox_intr_disable(struct bnad *bnad)
 {
 	bnad_disable_mbox_irq(bnad);
 }
 
 void
-bnad_cb_device_enabled(struct bnad *bnad, enum bna_cb_status status)
+bnad_cb_ioceth_ready(struct bnad *bnad)
+{
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_SUCCESS;
+	complete(&bnad->bnad_completions.ioc_comp);
+}
+
+void
+bnad_cb_ioceth_failed(struct bnad *bnad)
 {
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_FAIL;
 	complete(&bnad->bnad_completions.ioc_comp);
-	bnad->bnad_completions.ioc_comp_status = status;
 }
 
 void
-bnad_cb_device_disabled(struct bnad *bnad, enum bna_cb_status status)
+bnad_cb_ioceth_disabled(struct bnad *bnad)
 {
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_SUCCESS;
 	complete(&bnad->bnad_completions.ioc_comp);
-	bnad->bnad_completions.ioc_comp_status = status;
 }
 
 static void
-bnad_cb_port_disabled(void *arg, enum bna_cb_status status)
+bnad_cb_enet_disabled(void *arg)
 {
 	struct bnad *bnad = (struct bnad *)arg;
 
-	complete(&bnad->bnad_completions.port_comp);
-
 	netif_carrier_off(bnad->netdev);
+	complete(&bnad->bnad_completions.enet_comp);
 }
 
 void
-bnad_cb_port_link_status(struct bnad *bnad,
+bnad_cb_ethport_link_status(struct bnad *bnad,
 			enum bna_link_status link_status)
 {
 	bool link_up = 0;
@@ -750,34 +763,60 @@ bnad_cb_port_link_status(struct bnad *bnad,
 	link_up = (link_status == BNA_LINK_UP) || (link_status == BNA_CEE_UP);
 
 	if (link_status == BNA_CEE_UP) {
+		if (!test_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags))
+			BNAD_UPDATE_CTR(bnad, cee_toggle);
 		set_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags);
-		BNAD_UPDATE_CTR(bnad, cee_up);
-	} else
+	} else {
+		if (test_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags))
+			BNAD_UPDATE_CTR(bnad, cee_toggle);
 		clear_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags);
+	}
 
 	if (link_up) {
 		if (!netif_carrier_ok(bnad->netdev)) {
-			struct bna_tcb *tcb = bnad->tx_info[0].tcb[0];
-			if (!tcb)
-				return;
-			pr_warn("bna: %s link up\n",
+			uint tx_id, tcb_id;
+			printk(KERN_WARNING "bna: %s link up\n",
 				bnad->netdev->name);
 			netif_carrier_on(bnad->netdev);
 			BNAD_UPDATE_CTR(bnad, link_toggle);
-			if (test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags)) {
-				/* Force an immediate Transmit Schedule */
-				pr_info("bna: %s TX_STARTED\n",
-					bnad->netdev->name);
-				netif_wake_queue(bnad->netdev);
-				BNAD_UPDATE_CTR(bnad, netif_queue_wakeup);
-			} else {
-				netif_stop_queue(bnad->netdev);
-				BNAD_UPDATE_CTR(bnad, netif_queue_stop);
+			for (tx_id = 0; tx_id < bnad->num_tx; tx_id++) {
+				for (tcb_id = 0; tcb_id < bnad->num_txq_per_tx;
+				      tcb_id++) {
+					struct bna_tcb *tcb =
+					bnad->tx_info[tx_id].tcb[tcb_id];
+					u32 txq_id;
+					if (!tcb)
+						continue;
+
+					txq_id = tcb->id;
+
+					if (test_bit(BNAD_TXQ_TX_STARTED,
+						     &tcb->flags)) {
+						/*
+						 * Force an immediate
+						 * Transmit Schedule */
+						printk(KERN_INFO "bna: %s %d "
+						      "TXQ_STARTED\n",
+						       bnad->netdev->name,
+						       txq_id);
+						netif_wake_subqueue(
+								bnad->netdev,
+								txq_id);
+						BNAD_UPDATE_CTR(bnad,
+							netif_queue_wakeup);
+					} else {
+						netif_stop_subqueue(
+								bnad->netdev,
+								txq_id);
+						BNAD_UPDATE_CTR(bnad,
+							netif_queue_stop);
+					}
+				}
 			}
 		}
 	} else {
 		if (netif_carrier_ok(bnad->netdev)) {
-			pr_warn("bna: %s link down\n",
+			printk(KERN_WARNING "bna: %s link down\n",
 				bnad->netdev->name);
 			netif_carrier_off(bnad->netdev);
 			BNAD_UPDATE_CTR(bnad, link_toggle);
@@ -786,8 +825,7 @@ bnad_cb_port_link_status(struct bnad *bnad,
 }
 
 static void
-bnad_cb_tx_disabled(void *arg, struct bna_tx *tx,
-			enum bna_cb_status status)
+bnad_cb_tx_disabled(void *arg, struct bna_tx *tx)
 {
 	struct bnad *bnad = (struct bnad *)arg;
 
@@ -864,108 +902,166 @@ bnad_cb_ccb_destroy(struct bnad *bnad, struct bna_ccb *ccb)
 }
 
 static void
-bnad_cb_tx_stall(struct bnad *bnad, struct bna_tcb *tcb)
+bnad_cb_tx_stall(struct bnad *bnad, struct bna_tx *tx)
 {
 	struct bnad_tx_info *tx_info =
-			(struct bnad_tx_info *)tcb->txq->tx->priv;
-
-	if (tx_info != &bnad->tx_info[0])
-		return;
+			(struct bnad_tx_info *)tx->priv;
+	struct bna_tcb *tcb;
+	u32 txq_id;
+	int i;
 
-	clear_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
-	netif_stop_queue(bnad->netdev);
-	pr_info("bna: %s TX_STOPPED\n", bnad->netdev->name);
+	for (i = 0; i < BNAD_MAX_TXQ_PER_TX; i++) {
+		tcb = tx_info->tcb[i];
+		if (!tcb)
+			continue;
+		txq_id = tcb->id;
+		clear_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
+		netif_stop_subqueue(bnad->netdev, txq_id);
+		printk(KERN_INFO "bna: %s %d TXQ_STOPPED\n",
+			bnad->netdev->name, txq_id);
+	}
 }
 
 static void
-bnad_cb_tx_resume(struct bnad *bnad, struct bna_tcb *tcb)
+bnad_cb_tx_resume(struct bnad *bnad, struct bna_tx *tx)
 {
-	struct bnad_unmap_q *unmap_q = tcb->unmap_q;
+	struct bnad_tx_info *tx_info = (struct bnad_tx_info *)tx->priv;
+	struct bna_tcb *tcb;
+	struct bnad_unmap_q *unmap_q;
+	u32 txq_id;
+	int i;
 
-	if (test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags))
-		return;
+	for (i = 0; i < BNAD_MAX_TXQ_PER_TX; i++) {
+		tcb = tx_info->tcb[i];
+		if (!tcb)
+			continue;
+		txq_id = tcb->id;
 
-	clear_bit(BNAD_RF_TX_SHUTDOWN_DELAYED, &bnad->run_flags);
+		unmap_q = tcb->unmap_q;
 
-	while (test_and_set_bit(BNAD_TXQ_FREE_SENT, &tcb->flags))
-		cpu_relax();
+		if (test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags))
+			continue;
 
-	bnad_free_all_txbufs(bnad, tcb);
+		while (test_and_set_bit(BNAD_TXQ_FREE_SENT, &tcb->flags))
+			cpu_relax();
 
-	unmap_q->producer_index = 0;
-	unmap_q->consumer_index = 0;
+		bnad_free_all_txbufs(bnad, tcb);
 
-	smp_mb__before_clear_bit();
-	clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
+		unmap_q->producer_index = 0;
+		unmap_q->consumer_index = 0;
+
+		smp_mb__before_clear_bit();
+		clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
+
+		set_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
+
+		if (netif_carrier_ok(bnad->netdev)) {
+			printk(KERN_INFO "bna: %s %d TXQ_STARTED\n",
+				bnad->netdev->name, txq_id);
+			netif_wake_subqueue(bnad->netdev, txq_id);
+			BNAD_UPDATE_CTR(bnad, netif_queue_wakeup);
+		}
+	}
 
 	/*
-	 * Workaround for first device enable failure & we
+	 * Workaround for first ioceth enable failure & we
 	 * get a 0 MAC address. We try to get the MAC address
 	 * again here.
 	 */
 	if (is_zero_ether_addr(&bnad->perm_addr.mac[0])) {
-		bna_port_mac_get(&bnad->bna.port, &bnad->perm_addr);
+		bna_enet_perm_mac_get(&bnad->bna.enet, &bnad->perm_addr);
 		bnad_set_netdev_perm_addr(bnad);
 	}
-
-	set_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
-
-	if (netif_carrier_ok(bnad->netdev)) {
-		pr_info("bna: %s TX_STARTED\n", bnad->netdev->name);
-		netif_wake_queue(bnad->netdev);
-		BNAD_UPDATE_CTR(bnad, netif_queue_wakeup);
-	}
 }
 
 static void
-bnad_cb_tx_cleanup(struct bnad *bnad, struct bna_tcb *tcb)
+bnad_cb_tx_cleanup(struct bnad *bnad, struct bna_tx *tx)
 {
-	/* Delay only once for the whole Tx Path Shutdown */
-	if (!test_and_set_bit(BNAD_RF_TX_SHUTDOWN_DELAYED, &bnad->run_flags))
-		mdelay(BNAD_TXRX_SYNC_MDELAY);
+	struct bnad_tx_info *tx_info = (struct bnad_tx_info *)tx->priv;
+	struct bna_tcb *tcb;
+	int i;
+
+	for (i = 0; i < BNAD_MAX_TXQ_PER_TX; i++) {
+		tcb = tx_info->tcb[i];
+		if (!tcb)
+			continue;
+	}
+
+	mdelay(BNAD_TXRX_SYNC_MDELAY);
+	bna_tx_cleanup_complete(tx);
 }
 
 static void
-bnad_cb_rx_cleanup(struct bnad *bnad,
-			struct bna_ccb *ccb)
+bnad_cb_rx_cleanup(struct bnad *bnad, struct bna_rx *rx)
 {
-	clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags);
+	struct bnad_rx_info *rx_info = (struct bnad_rx_info *)rx->priv;
+	struct bna_ccb *ccb;
+	struct bnad_rx_ctrl *rx_ctrl;
+	int i;
+
+	mdelay(BNAD_TXRX_SYNC_MDELAY);
+
+	for (i = 0; i < BNAD_MAX_RXPS_PER_RX; i++) {
+		rx_ctrl = &rx_info->rx_ctrl[i];
+		ccb = rx_ctrl->ccb;
+		if (!ccb)
+			continue;
+
+		clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags);
+
+		if (ccb->rcb[1])
+			clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[1]->flags);
 
-	if (ccb->rcb[1])
-		clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[1]->flags);
+		while (test_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags))
+			cpu_relax();
+	}
 
-	if (!test_and_set_bit(BNAD_RF_RX_SHUTDOWN_DELAYED, &bnad->run_flags))
-		mdelay(BNAD_TXRX_SYNC_MDELAY);
+	bna_rx_cleanup_complete(rx);
 }
 
 static void
-bnad_cb_rx_post(struct bnad *bnad, struct bna_rcb *rcb)
+bnad_cb_rx_post(struct bnad *bnad, struct bna_rx *rx)
 {
-	struct bnad_unmap_q *unmap_q = rcb->unmap_q;
-
-	clear_bit(BNAD_RF_RX_SHUTDOWN_DELAYED, &bnad->run_flags);
-
-	if (rcb == rcb->cq->ccb->rcb[0])
-		bnad_cq_cmpl_init(bnad, rcb->cq->ccb);
+	struct bnad_rx_info *rx_info = (struct bnad_rx_info *)rx->priv;
+	struct bna_ccb *ccb;
+	struct bna_rcb *rcb;
+	struct bnad_rx_ctrl *rx_ctrl;
+	struct bnad_unmap_q *unmap_q;
+	int i;
+	int j;
 
-	bnad_free_all_rxbufs(bnad, rcb);
+	for (i = 0; i < BNAD_MAX_RXPS_PER_RX; i++) {
+		rx_ctrl = &rx_info->rx_ctrl[i];
+		ccb = rx_ctrl->ccb;
+		if (!ccb)
+			continue;
 
-	set_bit(BNAD_RXQ_STARTED, &rcb->flags);
+		bnad_cq_cmpl_init(bnad, ccb);
 
-	/* Now allocate & post buffers for this RCB */
-	/* !!Allocation in callback context */
-	if (!test_and_set_bit(BNAD_RXQ_REFILL, &rcb->flags)) {
-		if (BNA_QE_FREE_CNT(unmap_q, unmap_q->q_depth)
-			 >> BNAD_RXQ_REFILL_THRESHOLD_SHIFT)
-			bnad_alloc_n_post_rxbufs(bnad, rcb);
-		smp_mb__before_clear_bit();
-		clear_bit(BNAD_RXQ_REFILL, &rcb->flags);
+		for (j = 0; j < BNAD_MAX_RXQ_PER_RXP; j++) {
+			rcb = ccb->rcb[j];
+			if (!rcb)
+				continue;
+			bnad_free_all_rxbufs(bnad, rcb);
+
+			set_bit(BNAD_RXQ_STARTED, &rcb->flags);
+			unmap_q = rcb->unmap_q;
+
+			/* Now allocate & post buffers for this RCB */
+			/* !!Allocation in callback context */
+			if (!test_and_set_bit(BNAD_RXQ_REFILL, &rcb->flags)) {
+				if (BNA_QE_FREE_CNT(unmap_q, unmap_q->q_depth)
+					>> BNAD_RXQ_REFILL_THRESHOLD_SHIFT)
+					bnad_alloc_n_post_rxbufs(bnad, rcb);
+					smp_mb__before_clear_bit();
+				clear_bit(BNAD_RXQ_REFILL, &rcb->flags);
+			}
+		}
 	}
 }
 
 static void
-bnad_cb_rx_disabled(void *arg, struct bna_rx *rx,
-			enum bna_cb_status status)
+bnad_cb_rx_disabled(void *arg, struct bna_rx *rx)
 {
 	struct bnad *bnad = (struct bnad *)arg;
 
@@ -973,10 +1069,9 @@ bnad_cb_rx_disabled(void *arg, struct bna_rx *rx,
 }
 
 static void
-bnad_cb_rx_mcast_add(struct bnad *bnad, struct bna_rx *rx,
-				enum bna_cb_status status)
+bnad_cb_rx_mcast_add(struct bnad *bnad, struct bna_rx *rx)
 {
-	bnad->bnad_completions.mcast_comp_status = status;
+	bnad->bnad_completions.mcast_comp_status = BNA_CB_SUCCESS;
 	complete(&bnad->bnad_completions.mcast_comp);
 }
 
@@ -995,6 +1090,13 @@ bnad_cb_stats_get(struct bnad *bnad, enum bna_cb_status status,
 		  jiffies + msecs_to_jiffies(BNAD_STATS_TIMER_FREQ));
 }
 
+static void
+bnad_cb_enet_mtu_set(struct bnad *bnad)
+{
+	bnad->bnad_completions.mtu_comp_status = BNA_CB_SUCCESS;
+	complete(&bnad->bnad_completions.mtu_comp);
+}
+
 /* Resource allocation, free functions */
 
 static void
@@ -1073,23 +1175,17 @@ err_return:
 
 /* Free IRQ for Mailbox */
 static void
-bnad_mbox_irq_free(struct bnad *bnad,
-		   struct bna_intr_info *intr_info)
+bnad_mbox_irq_free(struct bnad *bnad)
 {
 	int irq;
 	unsigned long flags;
 
-	if (intr_info->idl == NULL)
-		return;
-
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bnad_disable_mbox_irq(bnad);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	irq = BNAD_GET_MBOX_IRQ(bnad);
 	free_irq(irq, bnad);
-
-	kfree(intr_info->idl);
 }
 
 /*
@@ -1098,32 +1194,22 @@ bnad_mbox_irq_free(struct bnad *bnad,
  * from bna
  */
 static int
-bnad_mbox_irq_alloc(struct bnad *bnad,
-		    struct bna_intr_info *intr_info)
+bnad_mbox_irq_alloc(struct bnad *bnad)
 {
 	int		err = 0;
 	unsigned long	irq_flags, flags;
 	u32	irq;
 	irq_handler_t	irq_handler;
 
-	/* Mbox should use only 1 vector */
-
-	intr_info->idl = kzalloc(sizeof(*(intr_info->idl)), GFP_KERNEL);
-	if (!intr_info->idl)
-		return -ENOMEM;
-
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	if (bnad->cfg_flags & BNAD_CF_MSIX) {
 		irq_handler = (irq_handler_t)bnad_msix_mbox_handler;
 		irq = bnad->msix_table[BNAD_MAILBOX_MSIX_INDEX].vector;
 		irq_flags = 0;
-		intr_info->intr_type = BNA_INTR_T_MSIX;
-		intr_info->idl[0].vector = BNAD_MAILBOX_MSIX_INDEX;
 	} else {
 		irq_handler = (irq_handler_t)bnad_isr;
 		irq = bnad->pcidev->irq;
 		irq_flags = IRQF_SHARED;
-		intr_info->intr_type = BNA_INTR_T_INTX;
 	}
 
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
@@ -1140,11 +1226,6 @@ bnad_mbox_irq_alloc(struct bnad *bnad,
 	err = request_irq(irq, irq_handler, irq_flags,
 			  bnad->mbox_irq_name, bnad);
 
-	if (err) {
-		kfree(intr_info->idl);
-		intr_info->idl = NULL;
-	}
-
 	return err;
 }
 
@@ -1158,7 +1239,7 @@ bnad_txrx_irq_free(struct bnad *bnad, struct bna_intr_info *intr_info)
 /* Allocates Interrupt Descriptor List for MSIX/INT-X vectors */
 static int
 bnad_txrx_irq_alloc(struct bnad *bnad, enum bnad_intr_source src,
-		    uint txrx_id, struct bna_intr_info *intr_info)
+		    u32 txrx_id, struct bna_intr_info *intr_info)
 {
 	int i, vector_start = 0;
 	u32 cfg_flags;
@@ -1241,7 +1322,7 @@ bnad_tx_msix_unregister(struct bnad *bnad, struct bnad_tx_info *tx_info,
  */
 static int
 bnad_tx_msix_register(struct bnad *bnad, struct bnad_tx_info *tx_info,
-			uint tx_id, int num_txqs)
+			u32 tx_id, int num_txqs)
 {
 	int i;
 	int err;
@@ -1294,7 +1375,7 @@ bnad_rx_msix_unregister(struct bnad *bnad, struct bnad_rx_info *rx_info,
  */
 static int
 bnad_rx_msix_register(struct bnad *bnad, struct bnad_rx_info *rx_info,
-			uint rx_id, int num_rxps)
+			u32 rx_id, int num_rxps)
 {
 	int i;
 	int err;
@@ -1338,7 +1419,7 @@ bnad_tx_res_free(struct bnad *bnad, struct bna_res_info *res_info)
 /* Allocates memory and interrupt resources for Tx object */
 static int
 bnad_tx_res_alloc(struct bnad *bnad, struct bna_res_info *res_info,
-		  uint tx_id)
+		  u32 tx_id)
 {
 	int i, err = 0;
 
@@ -1407,7 +1488,7 @@ bnad_ioc_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_ioc_timeout((void *) &bnad->bna.device.ioc);
+	bfa_nw_ioc_timeout((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1418,7 +1499,7 @@ bnad_ioc_hb_check(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_ioc_hb_check((void *) &bnad->bna.device.ioc);
+	bfa_nw_ioc_hb_check((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1429,7 +1510,7 @@ bnad_iocpf_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_iocpf_timeout((void *) &bnad->bna.device.ioc);
+	bfa_nw_iocpf_timeout((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1440,7 +1521,7 @@ bnad_iocpf_sem_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_iocpf_sem_timeout((void *) &bnad->bna.device.ioc);
+	bfa_nw_iocpf_sem_timeout((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1499,7 +1580,7 @@ bnad_stats_timeout(unsigned long data)
 		return;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_stats_get(&bnad->bna);
+	bna_hw_stats_get(&bnad->bna);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1632,7 +1713,7 @@ bnad_napi_disable(struct bnad *bnad, u32 rx_id)
 
 /* Should be held with conf_lock held */
 void
-bnad_cleanup_tx(struct bnad *bnad, uint tx_id)
+bnad_cleanup_tx(struct bnad *bnad, u32 tx_id)
 {
 	struct bnad_tx_info *tx_info = &bnad->tx_info[tx_id];
 	struct bna_res_info *res_info = &bnad->tx_res_info[tx_id].res_info[0];
@@ -1656,6 +1737,7 @@ bnad_cleanup_tx(struct bnad *bnad, uint tx_id)
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	tx_info->tx = NULL;
+	tx_info->tx_id = 0;
 
 	if (0 == tx_id)
 		tasklet_kill(&bnad->tx_free_tasklet);
@@ -1665,7 +1747,7 @@ bnad_cleanup_tx(struct bnad *bnad, uint tx_id)
 
 /* Should be held with conf_lock held */
 int
-bnad_setup_tx(struct bnad *bnad, uint tx_id)
+bnad_setup_tx(struct bnad *bnad, u32 tx_id)
 {
 	int err;
 	struct bnad_tx_info *tx_info = &bnad->tx_info[tx_id];
@@ -1677,10 +1759,13 @@ bnad_setup_tx(struct bnad *bnad, uint tx_id)
 	struct bna_tx *tx;
 	unsigned long flags;
 
+	tx_info->tx_id = tx_id;
+
 	/* Initialize the Tx object configuration */
 	tx_config->num_txq = bnad->num_txq_per_tx;
 	tx_config->txq_depth = bnad->txq_depth;
 	tx_config->tx_type = BNA_TX_T_REGULAR;
+	tx_config->coalescing_timeo = bnad->tx_coalescing_timeo;
 
 	/* Initialize the tx event handlers */
 	tx_cbfn.tcb_setup_cbfn = bnad_cb_tcb_setup;
@@ -1741,14 +1826,15 @@ bnad_init_rx_config(struct bnad *bnad, struct bna_rx_config *rx_config)
 {
 	rx_config->rx_type = BNA_RX_T_REGULAR;
 	rx_config->num_paths = bnad->num_rxp_per_rx;
+	rx_config->coalescing_timeo = bnad->rx_coalescing_timeo;
 
 	if (bnad->num_rxp_per_rx > 1) {
 		rx_config->rss_status = BNA_STATUS_T_ENABLED;
 		rx_config->rss_config.hash_type =
-				(BFI_RSS_T_V4_TCP |
-				 BFI_RSS_T_V6_TCP |
-				 BFI_RSS_T_V4_IP  |
-				 BFI_RSS_T_V6_IP);
+				(BFI_ENET_RSS_IPV6 |
+				 BFI_ENET_RSS_IPV6_TCP |
+				 BFI_ENET_RSS_IPV4 |
+				 BFI_ENET_RSS_IPV4_TCP);
 		rx_config->rss_config.hash_mask =
 				bnad->num_rxp_per_rx - 1;
 		get_random_bytes(rx_config->rss_config.toeplitz_hash_key,
@@ -1768,7 +1854,7 @@ bnad_init_rx_config(struct bnad *bnad, struct bna_rx_config *rx_config)
 
 /* Called with mutex_lock(&bnad->conf_mutex) held */
 void
-bnad_cleanup_rx(struct bnad *bnad, uint rx_id)
+bnad_cleanup_rx(struct bnad *bnad, u32 rx_id)
 {
 	struct bnad_rx_info *rx_info = &bnad->rx_info[rx_id];
 	struct bna_rx_config *rx_config = &bnad->rx_config[rx_id];
@@ -1811,7 +1897,7 @@ bnad_cleanup_rx(struct bnad *bnad, uint rx_id)
 
 /* Called with mutex_lock(&bnad->conf_mutex) held */
 int
-bnad_setup_rx(struct bnad *bnad, uint rx_id)
+bnad_setup_rx(struct bnad *bnad, u32 rx_id)
 {
 	int err;
 	struct bnad_rx_info *rx_info = &bnad->rx_info[rx_id];
@@ -1823,6 +1909,8 @@ bnad_setup_rx(struct bnad *bnad, uint rx_id)
 	struct bna_rx *rx;
 	unsigned long flags;
 
+	rx_info->rx_id = rx_id;
+
 	/* Initialize the Rx object configuration */
 	bnad_init_rx_config(bnad, rx_config);
 
@@ -1978,7 +2066,7 @@ bnad_restore_vlans(struct bnad *bnad, u32 rx_id)
 	u16 vid;
 	unsigned long flags;
 
-	BUG_ON(!(VLAN_N_VID == (BFI_MAX_VLAN + 1)));
+	BUG_ON(!(VLAN_N_VID == BFI_ENET_VLAN_ID_MAX));
 
 	for_each_set_bit(vid, bnad->active_vlans, VLAN_N_VID) {
 		spin_lock_irqsave(&bnad->bna_lock, flags);
@@ -2031,11 +2119,11 @@ bnad_netdev_qstats_fill(struct bnad *bnad, struct rtnl_link_stats64 *stats)
 void
 bnad_netdev_hwstats_fill(struct bnad *bnad, struct rtnl_link_stats64 *stats)
 {
-	struct bfi_ll_stats_mac *mac_stats;
-	u64 bmap;
+	struct bfi_enet_stats_mac *mac_stats;
+	u32 bmap;
 	int i;
 
-	mac_stats = &bnad->stats.bna_stats->hw_stats->mac_stats;
+	mac_stats = &bnad->stats.bna_stats->hw_stats.mac_stats;
 	stats->rx_errors =
 		mac_stats->rx_fcs_error + mac_stats->rx_alignment_error +
 		mac_stats->rx_frame_length_error + mac_stats->rx_code_error +
@@ -2054,13 +2142,12 @@ bnad_netdev_hwstats_fill(struct bnad *bnad, struct rtnl_link_stats64 *stats)
 	stats->rx_crc_errors = mac_stats->rx_fcs_error;
 	stats->rx_frame_errors = mac_stats->rx_alignment_error;
 	/* recv'r fifo overrun */
-	bmap = (u64)bnad->stats.bna_stats->rxf_bmap[0] |
-		((u64)bnad->stats.bna_stats->rxf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+	bmap = bna_rx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1) {
 			stats->rx_fifo_errors +=
 				bnad->stats.bna_stats->
-					hw_stats->rxf_stats[i].frame_drops;
+					hw_stats.rxf_stats[i].frame_drops;
 			break;
 		}
 		bmap >>= 1;
@@ -2158,7 +2245,7 @@ bnad_q_num_init(struct bnad *bnad)
  * Called with bnad->bna_lock held because of cfg_flags access
  */
 static void
-bnad_q_num_adjust(struct bnad *bnad, int msix_vectors)
+bnad_q_num_adjust(struct bnad *bnad, int msix_vectors, int temp)
 {
 	bnad->num_txq_per_tx = 1;
 	if ((msix_vectors >= (bnad->num_tx * bnad->num_txq_per_tx)  +
@@ -2171,76 +2258,72 @@ bnad_q_num_adjust(struct bnad *bnad, int msix_vectors)
 		bnad->num_rxp_per_rx = 1;
 }
 
-/* Enable / disable device */
-static void
-bnad_device_disable(struct bnad *bnad)
+/* Enable / disable ioceth */
+static int
+bnad_ioceth_disable(struct bnad *bnad)
 {
 	unsigned long flags;
-
-	init_completion(&bnad->bnad_completions.ioc_comp);
+	int err = 0;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_device_disable(&bnad->bna.device, BNA_HARD_CLEANUP);
+	init_completion(&bnad->bnad_completions.ioc_comp);
+	bna_ioceth_disable(&bnad->bna.ioceth, BNA_HARD_CLEANUP);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	wait_for_completion(&bnad->bnad_completions.ioc_comp);
+	wait_for_completion_timeout(&bnad->bnad_completions.ioc_comp,
+		msecs_to_jiffies(BNAD_IOCETH_TIMEOUT));
+
+	err = bnad->bnad_completions.ioc_comp_status;
+	return err;
 }
 
 static int
-bnad_device_enable(struct bnad *bnad)
+bnad_ioceth_enable(struct bnad *bnad)
 {
 	int err = 0;
 	unsigned long flags;
 
-	init_completion(&bnad->bnad_completions.ioc_comp);
-
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_device_enable(&bnad->bna.device);
+	init_completion(&bnad->bnad_completions.ioc_comp);
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_WAITING;
+	bna_ioceth_enable(&bnad->bna.ioceth);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	wait_for_completion(&bnad->bnad_completions.ioc_comp);
+	wait_for_completion_timeout(&bnad->bnad_completions.ioc_comp,
+		msecs_to_jiffies(BNAD_IOCETH_TIMEOUT));
 
-	if (bnad->bnad_completions.ioc_comp_status)
-		err = bnad->bnad_completions.ioc_comp_status;
+	err = bnad->bnad_completions.ioc_comp_status;
 
 	return err;
 }
 
 /* Free BNA resources */
 static void
-bnad_res_free(struct bnad *bnad)
+bnad_res_free(struct bnad *bnad, struct bna_res_info *res_info,
+		u32 res_val_max)
 {
 	int i;
-	struct bna_res_info *res_info = &bnad->res_info[0];
 
-	for (i = 0; i < BNA_RES_T_MAX; i++) {
-		if (res_info[i].res_type == BNA_RES_T_MEM)
-			bnad_mem_free(bnad, &res_info[i].res_u.mem_info);
-		else
-			bnad_mbox_irq_free(bnad, &res_info[i].res_u.intr_info);
-	}
+	for (i = 0; i < res_val_max; i++)
+		bnad_mem_free(bnad, &res_info[i].res_u.mem_info);
 }
 
 /* Allocates memory and interrupt resources for BNA */
 static int
-bnad_res_alloc(struct bnad *bnad)
+bnad_res_alloc(struct bnad *bnad, struct bna_res_info *res_info,
+		u32 res_val_max)
 {
 	int i, err;
-	struct bna_res_info *res_info = &bnad->res_info[0];
 
-	for (i = 0; i < BNA_RES_T_MAX; i++) {
-		if (res_info[i].res_type == BNA_RES_T_MEM)
-			err = bnad_mem_alloc(bnad, &res_info[i].res_u.mem_info);
-		else
-			err = bnad_mbox_irq_alloc(bnad,
-						  &res_info[i].res_u.intr_info);
+	for (i = 0; i < res_val_max; i++) {
+		err = bnad_mem_alloc(bnad, &res_info[i].res_u.mem_info);
 		if (err)
 			goto err_return;
 	}
 	return 0;
 
 err_return:
-	bnad_res_free(bnad);
+	bnad_res_free(bnad, res_info, res_val_max);
 	return err;
 }
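[Illustration, not part of the diff: with the table and count now passed explicitly, resource setup in probe happens in two phases, the second only after the IOC reports its attributes. Roughly, with locking and error handling trimmed and not a literal copy of the probe hunks below:]

	bna_res_req(&bnad->res_info[0]);			/* fixed-size needs */
	bnad_res_alloc(bnad, &bnad->res_info[0], BNA_RES_T_MAX);
	bna_init(bna, bnad, &pcidev_info, &bnad->res_info[0]);
	bnad_ioceth_enable(bnad);				/* attributes arrive */
	bna_mod_res_req(bna, &bnad->mod_res_info[0]);		/* sized from attrs */
	bnad_res_alloc(bnad, &bnad->mod_res_info[0], BNA_MOD_RES_T_MAX);
	bna_mod_init(bna, &bnad->mod_res_info[0]);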
 
@@ -2276,7 +2359,7 @@ bnad_enable_msix(struct bnad *bnad)
 
 		spin_lock_irqsave(&bnad->bna_lock, flags);
 		/* ret = #of vectors that we got */
-		bnad_q_num_adjust(bnad, ret);
+		bnad_q_num_adjust(bnad, ret, 0);
 		spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 		bnad->msix_num = (bnad->num_tx * bnad->num_txq_per_tx)
@@ -2284,6 +2367,9 @@ bnad_enable_msix(struct bnad *bnad)
 			* bnad->num_rxp_per_rx) +
 			 BNAD_MAILBOX_MSIX_VECTORS;
 
+		if (bnad->msix_num > ret)
+			goto intx_mode;
+
 		/* Try once more with adjusted numbers */
 		/* If this fails, fall back to INTx */
 		ret = pci_enable_msix(bnad->pcidev, bnad->msix_table,
@@ -2293,6 +2379,9 @@ bnad_enable_msix(struct bnad *bnad)
 
 	} else if (ret < 0)
 		goto intx_mode;
+
+	pci_intx(bnad->pcidev, 0);
+
 	return;
 
 intx_mode:
@@ -2351,12 +2440,12 @@ bnad_open(struct net_device *netdev)
 	pause_config.tx_pause = 0;
 	pause_config.rx_pause = 0;
 
-	mtu = ETH_HLEN + bnad->netdev->mtu + ETH_FCS_LEN;
+	mtu = ETH_HLEN + VLAN_HLEN + bnad->netdev->mtu + ETH_FCS_LEN;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_mtu_set(&bnad->bna.port, mtu, NULL);
-	bna_port_pause_config(&bnad->bna.port, &pause_config, NULL);
-	bna_port_enable(&bnad->bna.port);
+	bna_enet_mtu_set(&bnad->bna.enet, mtu, NULL);
+	bna_enet_pause_config(&bnad->bna.enet, &pause_config, NULL);
+	bna_enet_enable(&bnad->bna.enet);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	/* Enable broadcast */
@@ -2396,14 +2485,14 @@ bnad_stop(struct net_device *netdev)
 	/* Stop the stats timer */
 	bnad_stats_timer_stop(bnad);
 
-	init_completion(&bnad->bnad_completions.port_comp);
+	init_completion(&bnad->bnad_completions.enet_comp);
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_disable(&bnad->bna.port, BNA_HARD_CLEANUP,
-			bnad_cb_port_disabled);
+	bna_enet_disable(&bnad->bna.enet, BNA_HARD_CLEANUP,
+			bnad_cb_enet_disabled);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	wait_for_completion(&bnad->bnad_completions.port_comp);
+	wait_for_completion(&bnad->bnad_completions.enet_comp);
 
 	bnad_cleanup_tx(bnad, 0);
 	bnad_cleanup_rx(bnad, 0);
@@ -2425,19 +2514,18 @@ static netdev_tx_t
 bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	struct bnad *bnad = netdev_priv(netdev);
+	u32 txq_id = 0;
+	struct bna_tcb *tcb = bnad->tx_info[0].tcb[txq_id];
 
 	u16		txq_prod, vlan_tag = 0;
 	u32		unmap_prod, wis, wis_used, wi_range;
 	u32		vectors, vect_id, i, acked;
-	u32		tx_id;
 	int			err;
 
-	struct bnad_tx_info *tx_info;
-	struct bna_tcb *tcb;
-	struct bnad_unmap_q *unmap_q;
+	struct bnad_unmap_q *unmap_q = tcb->unmap_q;
 	dma_addr_t		dma_addr;
 	struct bna_txq_entry *txqent;
-	bna_txq_wi_ctrl_flag_t	flags;
+	u16	flags;
 
 	if (unlikely
 	    (skb->len <= ETH_HLEN || skb->len > BFI_TX_MAX_DATA_PER_PKT)) {
@@ -2445,15 +2533,9 @@ bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 		return NETDEV_TX_OK;
 	}
 
-	tx_id = 0;
-
-	tx_info = &bnad->tx_info[tx_id];
-	tcb = tx_info->tcb[tx_id];
-	unmap_q = tcb->unmap_q;
-
 	/*
 	 * Takes care of the Tx that is scheduled between clearing the flag
-	 * and the netif_stop_queue() call.
+	 * and the netif_stop_subqueue() call.
 	 */
 	if (unlikely(!test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags))) {
 		dev_kfree_skb(skb);
@@ -2467,9 +2549,8 @@ bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 	}
 	wis = BNA_TXQ_WI_NEEDED(vectors);	/* 4 vectors per work item */
 	acked = 0;
-	if (unlikely
-	    (wis > BNA_QE_FREE_CNT(tcb, tcb->q_depth) ||
-	     vectors > BNA_QE_FREE_CNT(unmap_q, unmap_q->q_depth))) {
+	if (unlikely(wis > BNA_QE_FREE_CNT(tcb, tcb->q_depth) ||
+			vectors > BNA_QE_FREE_CNT(unmap_q, unmap_q->q_depth))) {
 		if ((u16) (*tcb->hw_consumer_index) !=
 		    tcb->consumer_index &&
 		    !test_and_set_bit(BNAD_TXQ_FREE_SENT, &tcb->flags)) {
@@ -2602,7 +2683,7 @@ bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
-		u32		size = frag->size;
+		u16		size = frag->size;
 
 		if (++vect_id == BFI_TX_MAX_VECTORS_PER_WI) {
 			vect_id = 0;
@@ -2760,11 +2841,25 @@ bnad_set_mac_address(struct net_device *netdev, void *mac_addr)
 }
 
 static int
-bnad_change_mtu(struct net_device *netdev, int new_mtu)
+bnad_mtu_set(struct bnad *bnad, int mtu)
 {
-	int mtu, err = 0;
 	unsigned long flags;
 
+	init_completion(&bnad->bnad_completions.mtu_comp);
+
+	spin_lock_irqsave(&bnad->bna_lock, flags);
+	bna_enet_mtu_set(&bnad->bna.enet, mtu, bnad_cb_enet_mtu_set);
+	spin_unlock_irqrestore(&bnad->bna_lock, flags);
+
+	wait_for_completion(&bnad->bnad_completions.mtu_comp);
+
+	return bnad->bnad_completions.mtu_comp_status;
+}
+
+static int
+bnad_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	int err, mtu = netdev->mtu;
 	struct bnad *bnad = netdev_priv(netdev);
 
 	if (new_mtu + ETH_HLEN < ETH_ZLEN || new_mtu > BNAD_JUMBO_MTU)
@@ -2774,11 +2869,10 @@ bnad_change_mtu(struct net_device *netdev, int new_mtu)
 
 	netdev->mtu = new_mtu;
 
-	mtu = ETH_HLEN + new_mtu + ETH_FCS_LEN;
-
-	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_mtu_set(&bnad->bna.port, mtu, NULL);
-	spin_unlock_irqrestore(&bnad->bna_lock, flags);
+	mtu = ETH_HLEN + VLAN_HLEN + new_mtu + ETH_FCS_LEN;
+	err = bnad_mtu_set(bnad, mtu);
+	if (err)
+		err = -EBUSY;
 
 	mutex_unlock(&bnad->conf_mutex);
 	return err;
@@ -2968,7 +3062,7 @@ bnad_uninit(struct bnad *bnad)
 
 /*
  * Initialize locks
-	a) Per device mutes used for serializing configuration
+	a) Per ioceth mutex used for serializing configuration
 	   changes from OS interface
 	b) spin lock used to protect bna state machine
  */
@@ -3058,12 +3152,15 @@ bnad_pci_probe(struct pci_dev *pdev,
 	 */
 	netdev = alloc_etherdev(sizeof(struct bnad));
 	if (!netdev) {
-		dev_err(&pdev->dev, "alloc_etherdev failed\n");
+		dev_err(&pdev->dev, "netdev allocation failed\n");
 		err = -ENOMEM;
 		return err;
 	}
 	bnad = netdev_priv(netdev);
 
+	bnad_lock_init(bnad);
+
+	mutex_lock(&bnad->conf_mutex);
 	/*
 	 * PCI initialization
 	 *	Output : using_dac = 1 for 64 bit DMA
@@ -3073,7 +3170,6 @@ bnad_pci_probe(struct pci_dev *pdev,
 	if (err)
 		goto free_netdev;
 
-	bnad_lock_init(bnad);
 	/*
 	 * Initialize bnad structure
 	 * Setup relation between pci_dev & netdev
@@ -3082,21 +3178,22 @@ bnad_pci_probe(struct pci_dev *pdev,
 	err = bnad_init(bnad, pdev, netdev);
 	if (err)
 		goto pci_uninit;
+
 	/* Initialize netdev structure, set up ethtool ops */
 	bnad_netdev_init(bnad, using_dac);
 
 	/* Set link to down state */
 	netif_carrier_off(netdev);
 
-	bnad_enable_msix(bnad);
-
 	/* Get resource requirement form bna */
+	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bna_res_req(&bnad->res_info[0]);
+	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	/* Allocate resources from bna */
-	err = bnad_res_alloc(bnad);
+	err = bnad_res_alloc(bnad, &bnad->res_info[0], BNA_RES_T_MAX);
 	if (err)
-		goto free_netdev;
+		goto drv_uninit;
 
 	bna = &bnad->bna;
 
@@ -3106,69 +3203,102 @@ bnad_pci_probe(struct pci_dev *pdev,
 	pcidev_info.device_id = bnad->pcidev->device;
 	pcidev_info.pci_bar_kva = bnad->bar0;
 
-	mutex_lock(&bnad->conf_mutex);
-
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bna_init(bna, bnad, &pcidev_info, &bnad->res_info[0]);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	bnad->stats.bna_stats = &bna->stats;
 
+	bnad_enable_msix(bnad);
+	err = bnad_mbox_irq_alloc(bnad);
+	if (err)
+		goto res_free;
+
+
 	/* Set up timers */
-	setup_timer(&bnad->bna.device.ioc.ioc_timer, bnad_ioc_timeout,
+	setup_timer(&bnad->bna.ioceth.ioc.ioc_timer, bnad_ioc_timeout,
 				((unsigned long)bnad));
-	setup_timer(&bnad->bna.device.ioc.hb_timer, bnad_ioc_hb_check,
+	setup_timer(&bnad->bna.ioceth.ioc.hb_timer, bnad_ioc_hb_check,
 				((unsigned long)bnad));
-	setup_timer(&bnad->bna.device.ioc.iocpf_timer, bnad_iocpf_timeout,
+	setup_timer(&bnad->bna.ioceth.ioc.iocpf_timer, bnad_iocpf_timeout,
 				((unsigned long)bnad));
-	setup_timer(&bnad->bna.device.ioc.sem_timer, bnad_iocpf_sem_timeout,
+	setup_timer(&bnad->bna.ioceth.ioc.sem_timer, bnad_iocpf_sem_timeout,
 				((unsigned long)bnad));
 
 	/* Now start the timer before calling IOC */
-	mod_timer(&bnad->bna.device.ioc.iocpf_timer,
+	mod_timer(&bnad->bna.ioceth.ioc.iocpf_timer,
 		  jiffies + msecs_to_jiffies(BNA_IOC_TIMER_FREQ));
 
 	/*
 	 * Start the chip
-	 * Don't care even if err != 0, bna state machine will
-	 * deal with it
+	 * If the callback comes back with an error, we bail out.
+	 * This is a catastrophic error.
 	 */
-	err = bnad_device_enable(bnad);
+	err = bnad_ioceth_enable(bnad);
+	if (err) {
+		pr_err("BNA: Initialization failed err=%d\n",
+		       err);
+		goto probe_success;
+	}
+
+	spin_lock_irqsave(&bnad->bna_lock, flags);
+	if (bna_num_txq_set(bna, BNAD_NUM_TXQ + 1) ||
+		bna_num_rxp_set(bna, BNAD_NUM_RXP + 1)) {
+		bnad_q_num_adjust(bnad, bna_attr(bna)->num_txq - 1,
+			bna_attr(bna)->num_rxp - 1);
+		if (bna_num_txq_set(bna, BNAD_NUM_TXQ + 1) ||
+			bna_num_rxp_set(bna, BNAD_NUM_RXP + 1))
+			err = -EIO;
+	}
+	bna_mod_res_req(&bnad->bna, &bnad->mod_res_info[0]);
+	spin_unlock_irqrestore(&bnad->bna_lock, flags);
+
+	err = bnad_res_alloc(bnad, &bnad->mod_res_info[0], BNA_MOD_RES_T_MAX);
+	if (err)
+		goto disable_ioceth;
+
+	spin_lock_irqsave(&bnad->bna_lock, flags);
+	bna_mod_init(&bnad->bna, &bnad->mod_res_info[0]);
+	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	/* Get the burnt-in mac */
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_mac_get(&bna->port, &bnad->perm_addr);
+	bna_enet_perm_mac_get(&bna->enet, &bnad->perm_addr);
 	bnad_set_netdev_perm_addr(bnad);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	mutex_unlock(&bnad->conf_mutex);
-
 	/* Finally, register with net_device layer */
 	err = register_netdev(netdev);
 	if (err) {
 		pr_err("BNA : Registering with netdev failed\n");
-		goto disable_device;
+		goto probe_uninit;
 	}
+	set_bit(BNAD_RF_NETDEV_REGISTERED, &bnad->run_flags);
 
+probe_success:
+	mutex_unlock(&bnad->conf_mutex);
 	return 0;
 
-disable_device:
-	mutex_lock(&bnad->conf_mutex);
-	bnad_device_disable(bnad);
-	del_timer_sync(&bnad->bna.device.ioc.ioc_timer);
-	del_timer_sync(&bnad->bna.device.ioc.sem_timer);
-	del_timer_sync(&bnad->bna.device.ioc.hb_timer);
+probe_uninit:
+	bnad_res_free(bnad, &bnad->mod_res_info[0], BNA_MOD_RES_T_MAX);
+disable_ioceth:
+	bnad_ioceth_disable(bnad);
+	del_timer_sync(&bnad->bna.ioceth.ioc.ioc_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.sem_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.hb_timer);
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bna_uninit(bna);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
-	mutex_unlock(&bnad->conf_mutex);
-
-	bnad_res_free(bnad);
+	bnad_mbox_irq_free(bnad);
 	bnad_disable_msix(bnad);
+res_free:
+	bnad_res_free(bnad, &bnad->res_info[0], BNA_RES_T_MAX);
+drv_uninit:
+	bnad_uninit(bnad);
 pci_uninit:
 	bnad_pci_uninit(pdev);
+	mutex_unlock(&bnad->conf_mutex);
 	bnad_lock_uninit(bnad);
-	bnad_uninit(bnad);
 free_netdev:
 	free_netdev(netdev);
 	return err;
@@ -3189,21 +3319,24 @@ bnad_pci_remove(struct pci_dev *pdev)
 	bnad = netdev_priv(netdev);
 	bna = &bnad->bna;
 
-	unregister_netdev(netdev);
+	if (test_and_clear_bit(BNAD_RF_NETDEV_REGISTERED, &bnad->run_flags))
+		unregister_netdev(netdev);
 
 	mutex_lock(&bnad->conf_mutex);
-	bnad_device_disable(bnad);
-	del_timer_sync(&bnad->bna.device.ioc.ioc_timer);
-	del_timer_sync(&bnad->bna.device.ioc.sem_timer);
-	del_timer_sync(&bnad->bna.device.ioc.hb_timer);
+	bnad_ioceth_disable(bnad);
+	del_timer_sync(&bnad->bna.ioceth.ioc.ioc_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.sem_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.hb_timer);
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bna_uninit(bna);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
-	mutex_unlock(&bnad->conf_mutex);
 
-	bnad_res_free(bnad);
+	bnad_res_free(bnad, &bnad->mod_res_info[0], BNA_MOD_RES_T_MAX);
+	bnad_res_free(bnad, &bnad->res_info[0], BNA_RES_T_MAX);
+	bnad_mbox_irq_free(bnad);
 	bnad_disable_msix(bnad);
 	bnad_pci_uninit(pdev);
+	mutex_unlock(&bnad->conf_mutex);
 	bnad_lock_uninit(bnad);
 	bnad_uninit(bnad);
 	free_netdev(netdev);
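[Worked example, not part of the diff: one easy-to-miss functional change above is that the frame size handed to the ENET block now reserves room for a VLAN tag. For the default 1500-byte MTU:]

	/* ETH_HLEN + VLAN_HLEN + mtu + ETH_FCS_LEN = 14 + 4 + 1500 + 4 */
	int frame_size = ETH_HLEN + VLAN_HLEN + 1500 + ETH_FCS_LEN;	/* 1522 */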
diff --git a/drivers/net/bna/bnad.h b/drivers/net/bna/bnad.h
index 458eb30..a538cf4 100644
--- a/drivers/net/bna/bnad.h
+++ b/drivers/net/bna/bnad.h
@@ -44,6 +44,7 @@
 
 #define BNAD_MAX_RXS		1
 #define BNAD_MAX_RXPS_PER_RX	16
+#define BNAD_MAX_RXQ_PER_RXP	2
 
 /*
  * Control structure pointed to ccb->ctrl, which
@@ -76,6 +77,8 @@ struct bnad_rx_ctrl {
 #define BNAD_STATS_TIMER_FREQ		1000	/* in msecs */
 #define BNAD_DIM_TIMER_FREQ		1000	/* in msecs */
 
+#define BNAD_IOCETH_TIMEOUT	     10000
+
 #define BNAD_MAX_Q_DEPTH		0x10000
 #define BNAD_MIN_Q_DEPTH		0x200
 
@@ -93,6 +96,10 @@ struct bnad_rx_ctrl {
 #define BNAD_RXQ_REFILL			0
 #define BNAD_RXQ_STARTED		1
 
+/* Resource limits */
+#define BNAD_NUM_TXQ			(bnad->num_tx * bnad->num_txq_per_tx)
+#define BNAD_NUM_RXP			(bnad->num_rx * bnad->num_rxp_per_rx)
+
 /*
  * DATA STRUCTURES
  */
@@ -115,7 +122,8 @@ struct bnad_completion {
 	struct completion	tx_comp;
 	struct completion	rx_comp;
 	struct completion	stats_comp;
-	struct completion	port_comp;
+	struct completion	enet_comp;
+	struct completion	mtu_comp;
 
 	u8			ioc_comp_status;
 	u8			ucast_comp_status;
@@ -124,6 +132,7 @@ struct bnad_completion {
 	u8			rx_comp_status;
 	u8			stats_comp_status;
 	u8			port_comp_status;
+	u8			mtu_comp_status;
 };
 
 /* Tx Rx Control Stats */
@@ -145,6 +154,7 @@ struct bnad_drv_stats {
 	u64		netif_rx_dropped;
 
 	u64		link_toggle;
+	u64		cee_toggle;
 	u64		cee_up;
 
 	u64		rxp_info_alloc_failed;
@@ -174,12 +184,14 @@ struct bnad_rx_res_info {
 struct bnad_tx_info {
 	struct bna_tx *tx; /* 1:1 between tx_info & tx */
 	struct bna_tcb *tcb[BNAD_MAX_TXQ_PER_TX];
+	u32 tx_id;
 } ____cacheline_aligned;
 
 struct bnad_rx_info {
 	struct bna_rx *rx; /* 1:1 between rx_info & rx */
 
 	struct bnad_rx_ctrl rx_ctrl[BNAD_MAX_RXPS_PER_RX];
+	u32 rx_id;
 } ____cacheline_aligned;
 
 /* Unmap queues for Tx / Rx cleanup */
@@ -205,13 +217,18 @@ struct bnad_unmap_q {
 /* Defines for run_flags bit-mask */
 /* Set, tested & cleared using xxx_bit() functions */
 /* Values indicated bit positions */
-#define	BNAD_RF_CEE_RUNNING		1
+#define BNAD_RF_CEE_RUNNING		0
+#define BNAD_RF_MTU_SET		1
 #define BNAD_RF_MBOX_IRQ_DISABLED	2
-#define BNAD_RF_RX_STARTED		3
+#define BNAD_RF_NETDEV_REGISTERED	3
 #define BNAD_RF_DIM_TIMER_RUNNING	4
 #define BNAD_RF_STATS_TIMER_RUNNING	5
-#define BNAD_RF_TX_SHUTDOWN_DELAYED	6
-#define BNAD_RF_RX_SHUTDOWN_DELAYED	7
+#define BNAD_RF_TX_PRIO_SET		6
+
+
+/* Define for Fast Path flags */
+/* Defined as bit positions */
+#define BNAD_FP_IN_RX_PATH	      0
 
 struct bnad {
 	struct net_device	*netdev;
@@ -265,6 +282,7 @@ struct bnad {
 
 	/* Control path resources, memory & irq */
 	struct bna_res_info res_info[BNA_RES_T_MAX];
+	struct bna_res_info mod_res_info[BNA_MOD_RES_T_MAX];
 	struct bnad_tx_res_info tx_res_info[BNAD_MAX_TXS];
 	struct bnad_rx_res_info rx_res_info[BNAD_MAX_RXS];
 
@@ -302,10 +320,10 @@ extern void bnad_set_ethtool_ops(struct net_device *netdev);
 extern void bnad_tx_coalescing_timeo_set(struct bnad *bnad);
 extern void bnad_rx_coalescing_timeo_set(struct bnad *bnad);
 
-extern int bnad_setup_rx(struct bnad *bnad, uint rx_id);
-extern int bnad_setup_tx(struct bnad *bnad, uint tx_id);
-extern void bnad_cleanup_tx(struct bnad *bnad, uint tx_id);
-extern void bnad_cleanup_rx(struct bnad *bnad, uint rx_id);
+extern int bnad_setup_rx(struct bnad *bnad, u32 rx_id);
+extern int bnad_setup_tx(struct bnad *bnad, u32 tx_id);
+extern void bnad_cleanup_tx(struct bnad *bnad, u32 tx_id);
+extern void bnad_cleanup_rx(struct bnad *bnad, u32 rx_id);
 
 /* Timer start/stop protos */
 extern void bnad_dim_timer_start(struct bnad *bnad);
diff --git a/drivers/net/bna/bnad_ethtool.c b/drivers/net/bna/bnad_ethtool.c
index 49174f8..1c19dce 100644
--- a/drivers/net/bna/bnad_ethtool.c
+++ b/drivers/net/bna/bnad_ethtool.c
@@ -29,14 +29,14 @@
 
 #define BNAD_NUM_TXF_COUNTERS 12
 #define BNAD_NUM_RXF_COUNTERS 10
-#define BNAD_NUM_CQ_COUNTERS 3
+#define BNAD_NUM_CQ_COUNTERS (3 + 5)
 #define BNAD_NUM_RXQ_COUNTERS 6
 #define BNAD_NUM_TXQ_COUNTERS 5
 
 #define BNAD_ETHTOOL_STATS_NUM						\
 	(sizeof(struct rtnl_link_stats64) / sizeof(u64) +	\
 	sizeof(struct bnad_drv_stats) / sizeof(u64) +		\
-	offsetof(struct bfi_ll_stats, rxf_stats[0]) / sizeof(u64))
+	offsetof(struct bfi_enet_stats, rxf_stats[0]) / sizeof(u64))
 
 static char *bnad_net_stats_strings[BNAD_ETHTOOL_STATS_NUM] = {
 	"rx_packets",
@@ -277,7 +277,7 @@ bnad_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
 	ioc_attr = kzalloc(sizeof(*ioc_attr), GFP_KERNEL);
 	if (ioc_attr) {
 		spin_lock_irqsave(&bnad->bna_lock, flags);
-		bfa_nw_ioc_get_attr(&bnad->bna.device.ioc, ioc_attr);
+		bfa_nw_ioc_get_attr(&bnad->bna.ioceth.ioc, ioc_attr);
 		spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 		strncpy(drvinfo->fw_version, ioc_attr->adapter_attr.fw_ver,
@@ -462,8 +462,8 @@ bnad_get_pauseparam(struct net_device *netdev,
 	struct bnad *bnad = netdev_priv(netdev);
 
 	pauseparam->autoneg = 0;
-	pauseparam->rx_pause = bnad->bna.port.pause_config.rx_pause;
-	pauseparam->tx_pause = bnad->bna.port.pause_config.tx_pause;
+	pauseparam->rx_pause = bnad->bna.enet.pause_config.rx_pause;
+	pauseparam->tx_pause = bnad->bna.enet.pause_config.tx_pause;
 }
 
 static int
@@ -478,12 +478,12 @@ bnad_set_pauseparam(struct net_device *netdev,
 		return -EINVAL;
 
 	mutex_lock(&bnad->conf_mutex);
-	if (pauseparam->rx_pause != bnad->bna.port.pause_config.rx_pause ||
-	    pauseparam->tx_pause != bnad->bna.port.pause_config.tx_pause) {
+	if (pauseparam->rx_pause != bnad->bna.enet.pause_config.rx_pause ||
+	    pauseparam->tx_pause != bnad->bna.enet.pause_config.tx_pause) {
 		pause_config.rx_pause = pauseparam->rx_pause;
 		pause_config.tx_pause = pauseparam->tx_pause;
 		spin_lock_irqsave(&bnad->bna_lock, flags);
-		bna_port_pause_config(&bnad->bna.port, &pause_config, NULL);
+		bna_enet_pause_config(&bnad->bna.enet, &pause_config, NULL);
 		spin_unlock_irqrestore(&bnad->bna_lock, flags);
 	}
 	mutex_unlock(&bnad->conf_mutex);
@@ -495,7 +495,7 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 * string)
 {
 	struct bnad *bnad = netdev_priv(netdev);
 	int i, j, q_num;
-	u64 bmap;
+	u32 bmap;
 
 	mutex_lock(&bnad->conf_mutex);
 
@@ -508,9 +508,8 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 * string)
 			       ETH_GSTRING_LEN);
 			string += ETH_GSTRING_LEN;
 		}
-		bmap = (u64)bnad->bna.tx_mod.txf_bmap[0] |
-			((u64)bnad->bna.tx_mod.txf_bmap[1] << 32);
-		for (i = 0; bmap && (i < BFI_LL_TXF_ID_MAX); i++) {
+		bmap = bna_tx_rid_mask(&bnad->bna);
+		for (i = 0; bmap; i++) {
 			if (bmap & 1) {
 				sprintf(string, "txf%d_ucast_octets", i);
 				string += ETH_GSTRING_LEN;
@@ -540,9 +539,8 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 * string)
 			bmap >>= 1;
 		}
 
-		bmap = (u64)bnad->bna.rx_mod.rxf_bmap[0] |
-			((u64)bnad->bna.rx_mod.rxf_bmap[1] << 32);
-		for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+		bmap = bna_rx_rid_mask(&bnad->bna);
+		for (i = 0; bmap; i++) {
 			if (bmap & 1) {
 				sprintf(string, "rxf%d_ucast_octets", i);
 				string += ETH_GSTRING_LEN;
@@ -663,18 +661,16 @@ bnad_get_stats_count_locked(struct net_device *netdev)
 {
 	struct bnad *bnad = netdev_priv(netdev);
 	int i, j, count, rxf_active_num = 0, txf_active_num = 0;
-	u64 bmap;
+	u32 bmap;
 
-	bmap = (u64)bnad->bna.tx_mod.txf_bmap[0] |
-			((u64)bnad->bna.tx_mod.txf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_TXF_ID_MAX); i++) {
+	bmap = bna_tx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1)
 			txf_active_num++;
 		bmap >>= 1;
 	}
-	bmap = (u64)bnad->bna.rx_mod.rxf_bmap[0] |
-			((u64)bnad->bna.rx_mod.rxf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+	bmap = bna_rx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1)
 			rxf_active_num++;
 		bmap >>= 1;
@@ -787,7 +783,7 @@ bnad_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats,
 	unsigned long flags;
 	struct rtnl_link_stats64 *net_stats64;
 	u64 *stats64;
-	u64 bmap;
+	u32 bmap;
 
 	mutex_lock(&bnad->conf_mutex);
 	if (bnad_get_stats_count_locked(netdev) != stats->n_stats) {
@@ -818,20 +814,20 @@ bnad_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats,
 		buf[bi++] = stats64[i];
 
 	/* Fill hardware stats excluding the rxf/txf into ethtool bufs */
-	stats64 = (u64 *) bnad->stats.bna_stats->hw_stats;
+	stats64 = (u64 *) &bnad->stats.bna_stats->hw_stats;
 	for (i = 0;
-	     i < offsetof(struct bfi_ll_stats, rxf_stats[0]) / sizeof(u64);
+	     i < offsetof(struct bfi_enet_stats, rxf_stats[0]) /
+		sizeof(u64);
 	     i++)
 		buf[bi++] = stats64[i];
 
 	/* Fill txf stats into ethtool buffers */
-	bmap = (u64)bnad->bna.tx_mod.txf_bmap[0] |
-			((u64)bnad->bna.tx_mod.txf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_TXF_ID_MAX); i++) {
+	bmap = bna_tx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1) {
 			stats64 = (u64 *)&bnad->stats.bna_stats->
-						hw_stats->txf_stats[i];
-			for (j = 0; j < sizeof(struct bfi_ll_stats_txf) /
+						hw_stats.txf_stats[i];
+			for (j = 0; j < sizeof(struct bfi_enet_stats_txf) /
 					sizeof(u64); j++)
 				buf[bi++] = stats64[j];
 		}
@@ -839,13 +835,12 @@ bnad_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats,
 	}
 
 	/*  Fill rxf stats into ethtool buffers */
-	bmap = (u64)bnad->bna.rx_mod.rxf_bmap[0] |
-			((u64)bnad->bna.rx_mod.rxf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+	bmap = bna_rx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1) {
 			stats64 = (u64 *)&bnad->stats.bna_stats->
-						hw_stats->rxf_stats[i];
-			for (j = 0; j < sizeof(struct bfi_ll_stats_rxf) /
+						hw_stats.rxf_stats[i];
+			for (j = 0; j < sizeof(struct bfi_enet_stats_rxf) /
 					sizeof(u64); j++)
 				buf[bi++] = stats64[j];
 		}
diff --git a/drivers/net/bna/cna.h b/drivers/net/bna/cna.h
index a679e03..50fce15 100644
--- a/drivers/net/bna/cna.h
+++ b/drivers/net/bna/cna.h
@@ -40,7 +40,7 @@
 
 extern char bfa_version[];
 
-#define	CNA_FW_FILE_CT	"ctfw_cna.bin"
+#define	CNA_FW_FILE_CT	"ctfw.bin"
 #define FC_SYMNAME_MAX	256	/*!< max name server symbolic name size */
 
 #pragma pack(1)
@@ -77,4 +77,33 @@ typedef struct mac { u8 mac[MAC_ADDRLEN]; } mac_t;
 	}								\
 }
 
+/*
+ * bfa_q_deq_tail - dequeue an element from tail of the queue
+ */
+#define bfa_q_deq_tail(_q, _qe) {					\
+	if (!list_empty(_q)) {						\
+		*((struct list_head **) (_qe)) = bfa_q_prev(_q);	\
+		bfa_q_next(bfa_q_prev(*((struct list_head **) _qe))) =  \
+						(struct list_head *) (_q); \
+		bfa_q_prev(_q) = bfa_q_prev(*(struct list_head **) _qe);\
+		bfa_q_qe_init(*((struct list_head **) _qe));		\
+	} else {							\
+		*((struct list_head **) (_qe)) = (struct list_head *) NULL; \
+	}								\
+}
+
+/*
+ * bfa_q_enq_head - enqueue an element at the head of the queue
+ */
+#define bfa_q_enq_head(_q, _qe) {					\
+	if (!(bfa_q_next(_qe) == NULL) && (bfa_q_prev(_qe) == NULL))	\
+		pr_err("Assertion failure: %s:%d: %d",			\
+			__FILE__, __LINE__,				\
+		(bfa_q_next(_qe) == NULL) && (bfa_q_prev(_qe) == NULL));\
+	bfa_q_next(_qe) = bfa_q_next(_q);				\
+	bfa_q_prev(_qe) = (struct list_head *) (_q);			\
+	bfa_q_prev(bfa_q_next(_q)) = (struct list_head *) (_qe);	\
+	bfa_q_next(_q) = (struct list_head *) (_qe);			\
+}
+
 #endif /* __CNA_H__ */
-- 
1.7.1
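
A minimal usage sketch for the two queue helpers added to cna.h above
(illustrative only, not part of the patch; "struct my_qe" and
"reorder_tail_to_head" are hypothetical names, and the element's
struct list_head link is assumed to be its first member, as the
bfa_q_* macros in cna.h expect):

	#include <linux/list.h>
	#include "cna.h"

	struct my_qe {
		struct list_head qe;	/* must be the first member */
		int payload;
	};

	/* Move the most recently queued element to the head of the queue */
	static void reorder_tail_to_head(struct list_head *q)
	{
		struct my_qe *qe;

		/* Dequeue from the tail; qe is set to NULL if the queue
		 * is empty. */
		bfa_q_deq_tail(q, &qe);
		if (qe == NULL)
			return;

		/* Re-queue at the head so a subsequent bfa_q_deq() or
		 * bfa_q_first() picks this element up first. */
		bfa_q_enq_head(q, qe);
	}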


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 6/8] bna: Remove Unused Code
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (4 preceding siblings ...)
  2011-08-09  2:21 ` [PATCH 5/8] bna: ENET and Tx Rx Redesign Enablement Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 7/8] bna: Remove Obsolete Files Rasesh Mody
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Remove unused code.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfa_cee.c           |    3 -
 drivers/net/bna/bfa_defs_mfg_comm.h |    8 --
 drivers/net/bna/bfa_ioc.h           |    5 -
 drivers/net/bna/bfi.h               |   22 ----
 drivers/net/bna/bna.h               |  122 +------------------
 drivers/net/bna/bna_types.h         |  230 +----------------------------------
 6 files changed, 2 insertions(+), 388 deletions(-)

diff --git a/drivers/net/bna/bfa_cee.c b/drivers/net/bna/bfa_cee.c
index 39e5ab9..b45b8eb 100644
--- a/drivers/net/bna/bfa_cee.c
+++ b/drivers/net/bna/bfa_cee.c
@@ -22,9 +22,6 @@
 #include "bfi_cna.h"
 #include "bfa_ioc.h"
 
-#define bfa_ioc_portid(__ioc) ((__ioc)->port_id)
-#define bfa_lpuid(__arg) bfa_ioc_portid(&(__arg)->ioc)
-
 static void bfa_cee_format_lldp_cfg(struct bfa_cee_lldp_cfg *lldp_cfg);
 static void bfa_cee_format_cee_cfg(void *buffer);
 
diff --git a/drivers/net/bna/bfa_defs_mfg_comm.h b/drivers/net/bna/bfa_defs_mfg_comm.h
index f84d8f6..7ddd16f 100644
--- a/drivers/net/bna/bfa_defs_mfg_comm.h
+++ b/drivers/net/bna/bfa_defs_mfg_comm.h
@@ -67,14 +67,6 @@ enum {
 #pragma pack(1)
 
 /**
- * Check if 1-port card
- */
-#define bfa_mfg_is_1port(type) (( \
-	(type) == BFA_MFG_TYPE_FC8P1 || \
-	(type) == BFA_MFG_TYPE_FC4P1 || \
-	(type) == BFA_MFG_TYPE_CNA10P1))
-
-/**
  * Check if Mezz card
  */
 #define bfa_mfg_is_mezz(type) (( \
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index 7514c72..f5a3d4e 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -26,7 +26,6 @@
 #define BFA_IOC_TOV		3000	/* msecs */
 #define BFA_IOC_HWSEM_TOV	500	/* msecs */
 #define BFA_IOC_HB_TOV		500	/* msecs */
-#define BFA_IOC_HWINIT_MAX	5
 #define BFA_IOC_POLL_TOV	200	/* msecs */
 
 /**
@@ -250,10 +249,6 @@ struct bfa_ioc_hwif {
 #define bfa_ioc_stats_hb_count(_ioc, _hb_count)	\
 	((_ioc)->stats.hb_count = (_hb_count))
 #define BFA_IOC_FWIMG_MINSZ	(16 * 1024)
-#define BFA_IOC_FWIMG_TYPE(__ioc)					\
-	(((__ioc)->ctdev) ?						\
-	 (((__ioc)->fcmode) ? BFI_IMAGE_CT_FC : BFI_IMAGE_CT_CNA) :	\
-	 BFI_IMAGE_CB_FC)
 #define BFA_IOC_FW_SMEM_SIZE(__ioc)					\
 	((bfa_ioc_asic_gen(__ioc) == BFI_ASIC_GEN_CB)			\
 	? BFI_SMEM_CB_SIZE : BFI_SMEM_CT_SIZE)
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 978e1bc..19654cc 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -15,7 +15,6 @@
  * All rights reserved
  * www.brocade.com
  */
-
 #ifndef __BFI_H__
 #define __BFI_H__
 
@@ -28,12 +27,6 @@
  */
 #define	BFI_FLASH_CHUNK_SZ			256	/*!< Flash chunk size */
 #define	BFI_FLASH_CHUNK_SZ_WORDS	(BFI_FLASH_CHUNK_SZ/sizeof(u32))
-enum {
-	BFI_IMAGE_CB_FC,
-	BFI_IMAGE_CT_FC,
-	BFI_IMAGE_CT_CNA,
-	BFI_IMAGE_MAX,
-};
 
 /**
  * Msg header common to all msgs
@@ -195,15 +188,6 @@ enum bfi_mclass {
 #define BFI_IOC_MAX_CQS_ASIC	8
 #define BFI_IOC_MSGLEN_MAX	32	/* 32 bytes */
 
-#define BFI_BOOT_TYPE_OFF		8
-#define BFI_BOOT_LOADER_OFF		12
-
-#define BFI_BOOT_TYPE_NORMAL		0
-#define	BFI_BOOT_TYPE_FLASH		1
-#define	BFI_BOOT_TYPE_MEMTEST		2
-
-#define BFI_BOOT_LOADER_OS		0
-
 #define BFI_FWBOOT_ENV_OS		0
 
 #define BFI_BOOT_MEMTEST_RES_ADDR   0x900
@@ -344,12 +328,6 @@ enum bfi_port_mode {
 /**
  *  BFI_IOC_I2H_READY_EVENT message
  */
-struct bfi_ioc_rdy_event {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8			init_status;	/*!< init event status */
-	u8			rsvd[3];
-};
-
 struct bfi_ioc_hbeat {
 	struct bfi_mhdr mh;		/*!< common msg header		*/
 	u32	   hb_count;	/*!< current heart beat count	*/
diff --git a/drivers/net/bna/bna.h b/drivers/net/bna/bna.h
index f9781a3..1f1fa93 100644
--- a/drivers/net/bna/bna.h
+++ b/drivers/net/bna/bna.h
@@ -32,14 +32,6 @@ extern const u32 bna_napi_dim_vector[][BNA_BIAS_T_MAX];
 /* Log string size */
 #define BNA_MESSAGE_SIZE		256
 
-/* MBOX API for PORT, TX, RX */
-#define bna_mbox_qe_fill(_qe, _cmd, _cmd_len, _cbfn, _cbarg)		\
-do {									\
-	memcpy(&((_qe)->cmd.msg[0]), (_cmd), (_cmd_len));	\
-	(_qe)->cbfn = (_cbfn);						\
-	(_qe)->cbarg = (_cbarg);					\
-} while (0)
-
 #define bna_is_small_rxq(_id) ((_id) & 0x1)
 
 #define BNA_MAC_IS_EQUAL(_mac1, _mac2)					\
@@ -177,32 +169,6 @@ do {								\
 #define BNA_Q_IN_USE_COUNT(_q_ptr)					\
 	(BNA_QE_IN_USE_CNT(&(_q_ptr)->q, (_q_ptr)->q.q_depth))
 
-/* These macros build the data portion of the TxQ/RxQ doorbell */
-#define BNA_DOORBELL_Q_PRD_IDX(_pi)	(0x80000000 | (_pi))
-#define BNA_DOORBELL_Q_STOP		(0x40000000)
-
-/* These macros build the data portion of the IB doorbell */
-#define BNA_DOORBELL_IB_INT_ACK(_timeout, _events) \
-	(0x80000000 | ((_timeout) << 16) | (_events))
-#define BNA_DOORBELL_IB_INT_DISABLE	(0x40000000)
-
-/* Set the coalescing timer for the given ib */
-#define bna_ib_coalescing_timer_set(_i_dbell, _cls_timer)		\
-	((_i_dbell)->doorbell_ack = BNA_DOORBELL_IB_INT_ACK((_cls_timer), 0));
-
-/* Acks 'events' # of events for a given ib */
-#define bna_ib_ack(_i_dbell, _events)					\
-	(writel(((_i_dbell)->doorbell_ack | (_events)), \
-		(_i_dbell)->doorbell_addr));
-
-#define bna_txq_prod_indx_doorbell(_tcb)				\
-	(writel(BNA_DOORBELL_Q_PRD_IDX((_tcb)->producer_index), \
-		(_tcb)->q_dbell));
-
-#define bna_rxq_prod_indx_doorbell(_rcb)				\
-	(writel(BNA_DOORBELL_Q_PRD_IDX((_rcb)->producer_index), \
-		(_rcb)->q_dbell));
-
 #define BNA_LARGE_PKT_SIZE		1000
 
 #define BNA_UPDATE_PKT_CNT(_pkt, _len)					\
@@ -435,7 +401,6 @@ void bna_get_perm_mac(struct bna *bna, u8 *mac);
 void bna_hw_stats_get(struct bna *bna);
 
 /* APIs for Rx */
-int bna_rit_mod_can_satisfy(struct bna_rit_mod *rit_mod, int seg_size);
 
 /* APIs for RxF */
 struct bna_mac *bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod);
@@ -447,53 +412,13 @@ void bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod,
 struct bna_mcam_handle *bna_mcam_mod_handle_get(struct bna_mcam_mod *mod);
 void bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod,
 			  struct bna_mcam_handle *handle);
-struct bna_rit_segment *
-bna_rit_mod_seg_get(struct bna_rit_mod *rit_mod, int seg_size);
-void bna_rit_mod_seg_put(struct bna_rit_mod *rit_mod,
-			struct bna_rit_segment *seg);
-
-/**
- * DEVICE
- */
-
-/* APIs for BNAD */
-void bna_device_enable(struct bna_device *device);
-void bna_device_disable(struct bna_device *device,
-			enum bna_cleanup_type type);
 
 /**
  * MBOX
  */
 
-/* APIs for PORT, TX, RX */
-void bna_mbox_handler(struct bna *bna, u32 intr_status);
-void bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe);
-
-/**
- * PORT
- */
-
-/* API for RX */
-int bna_port_mtu_get(struct bna_port *port);
-void bna_llport_rx_started(struct bna_llport *llport);
-void bna_llport_rx_stopped(struct bna_llport *llport);
-
 /* API for BNAD */
-void bna_port_enable(struct bna_port *port);
-void bna_port_disable(struct bna_port *port, enum bna_cleanup_type type,
-		      void (*cbfn)(void *, enum bna_cb_status));
-void bna_port_pause_config(struct bna_port *port,
-			   struct bna_pause_config *pause_config,
-			   void (*cbfn)(struct bnad *, enum bna_cb_status));
-void bna_port_mtu_set(struct bna_port *port, int mtu,
-		      void (*cbfn)(struct bnad *, enum bna_cb_status));
-void bna_port_mac_get(struct bna_port *port, mac_t *mac);
-
-/* Callbacks for TX, RX */
-void bna_port_cb_tx_stopped(struct bna_port *port,
-			    enum bna_cb_status status);
-void bna_port_cb_rx_stopped(struct bna_port *port,
-			    enum bna_cb_status status);
+void bna_mbox_handler(struct bna *bna, u32 intr_status);
 
 /**
  * ETHPORT
@@ -504,15 +429,6 @@ void bna_ethport_cb_rx_started(struct bna_ethport *ethport);
 void bna_ethport_cb_rx_stopped(struct bna_ethport *ethport);
 
 /**
- * IB
- */
-
-/* APIs for BNA */
-void bna_ib_mod_init(struct bna_ib_mod *ib_mod, struct bna *bna,
-		     struct bna_res_info *res_info);
-void bna_ib_mod_uninit(struct bna_ib_mod *ib_mod);
-
-/**
  * TX MODULE AND TX
  */
 /* FW response handlers */
@@ -526,14 +442,11 @@ void bna_bfi_bw_update_aen(struct bna_tx_mod *tx_mod);
 void bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
 		     struct bna_res_info *res_info);
 void bna_tx_mod_uninit(struct bna_tx_mod *tx_mod);
-int bna_tx_state_get(struct bna_tx *tx);
 
 /* APIs for ENET */
 void bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
 void bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
 void bna_tx_mod_fail(struct bna_tx_mod *tx_mod);
-void bna_tx_mod_prio_changed(struct bna_tx_mod *tx_mod, int prio);
-void bna_tx_mod_cee_link_status(struct bna_tx_mod *tx_mod, int cee_link);
 
 /* APIs for BNAD */
 void bna_tx_res_req(int num_txq, int txq_depth,
@@ -553,27 +466,6 @@ void bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo);
  * RX MODULE, RX, RXF
  */
 
-/* Internal APIs */
-void rxf_cb_cam_fltr_mbox_cmd(void *arg, int status);
-void rxf_cam_mbox_cmd(struct bna_rxf *rxf, u8 cmd,
-		const struct bna_mac *mac_addr);
-void __rxf_vlan_filter_set(struct bna_rxf *rxf, enum bna_status status);
-void bna_rxf_adv_init(struct bna_rxf *rxf,
-		struct bna_rx *rx,
-		struct bna_rx_config *q_config);
-int rxf_process_packet_filter_ucast(struct bna_rxf *rxf);
-int rxf_process_packet_filter_promisc(struct bna_rxf *rxf);
-int rxf_process_packet_filter_default(struct bna_rxf *rxf);
-int rxf_process_packet_filter_allmulti(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_ucast(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_promisc(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_default(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_allmulti(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_ucast(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_promisc(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_default(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_allmulti(struct bna_rxf *rxf);
-
 /* FW response handlers */
 void bna_bfi_rx_enet_start_rsp(struct bna_rx *rx,
 			       struct bfi_msgq_mhdr *msghdr);
@@ -587,8 +479,6 @@ void bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
 void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
 		     struct bna_res_info *res_info);
 void bna_rx_mod_uninit(struct bna_rx_mod *rx_mod);
-int bna_rx_state_get(struct bna_rx *rx);
-int bna_rxf_state_get(struct bna_rxf *rxf);
 
 /* APIs for ENET */
 void bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
@@ -687,14 +577,4 @@ void bnad_cb_mbox_intr_disable(struct bnad *bnad);
 void bnad_cb_stats_get(struct bnad *bnad, enum bna_cb_status status,
 		       struct bna_stats *stats);
 
-/* Callbacks for DEVICE */
-void bnad_cb_device_enabled(struct bnad *bnad, enum bna_cb_status status);
-void bnad_cb_device_disabled(struct bnad *bnad, enum bna_cb_status status);
-void bnad_cb_device_enable_mbox_intr(struct bnad *bnad);
-void bnad_cb_device_disable_mbox_intr(struct bnad *bnad);
-
-/* Callbacks for port */
-void bnad_cb_port_link_status(struct bnad *bnad,
-			      enum bna_link_status status);
-
 #endif  /* __BNA_H__ */
diff --git a/drivers/net/bna/bna_types.h b/drivers/net/bna/bna_types.h
index 655eb14..8a6da0c 100644
--- a/drivers/net/bna/bna_types.h
+++ b/drivers/net/bna/bna_types.h
@@ -37,7 +37,6 @@ struct bna_rxq;
 struct bna_cq;
 struct bna_rx;
 struct bna_rxf;
-struct bna_port;
 struct bna_enet;
 struct bna;
 struct bnad;
@@ -90,21 +89,6 @@ enum bna_res_req_type {
 	BNA_RES_MEM_T_ATTR		= 1,
 	BNA_RES_MEM_T_FWTRC		= 2,
 	BNA_RES_MEM_T_STATS		= 3,
-	BNA_RES_MEM_T_SWSTATS		= 4,
-	BNA_RES_MEM_T_IBIDX		= 5,
-	BNA_RES_MEM_T_IB_ARRAY		= 6,
-	BNA_RES_MEM_T_INTR_ARRAY	= 7,
-	BNA_RES_MEM_T_IDXSEG_ARRAY	= 8,
-	BNA_RES_MEM_T_TX_ARRAY		= 9,
-	BNA_RES_MEM_T_TXQ_ARRAY		= 10,
-	BNA_RES_MEM_T_RX_ARRAY		= 11,
-	BNA_RES_MEM_T_RXP_ARRAY		= 12,
-	BNA_RES_MEM_T_RXQ_ARRAY		= 13,
-	BNA_RES_MEM_T_UCMAC_ARRAY	= 14,
-	BNA_RES_MEM_T_MCMAC_ARRAY	= 15,
-	BNA_RES_MEM_T_RIT_ENTRY		= 16,
-	BNA_RES_MEM_T_RIT_SEGMENT	= 17,
-	BNA_RES_INTR_T_MBOX		= 18,
 	BNA_RES_T_MAX
 };
 
@@ -150,11 +134,6 @@ enum bna_rx_mem_type {
 	BNA_RX_RES_T_MAX		= 15
 };
 
-enum bna_mbox_state {
-	BNA_MBOX_FREE		= 0,
-	BNA_MBOX_POSTED		= 1
-};
-
 enum bna_tx_type {
 	BNA_TX_T_REGULAR	= 0,
 	BNA_TX_T_LOOPBACK	= 1,
@@ -200,14 +179,6 @@ enum bna_rx_event {
 	RX_E_CLEANUP_DONE		= 8,
 };
 
-enum bna_rx_state {
-	BNA_RX_STOPPED			= 1,
-	BNA_RX_RXF_START_WAIT		= 2,
-	BNA_RX_STARTED			= 3,
-	BNA_RX_RXF_STOP_WAIT		= 4,
-	BNA_RX_RXQ_STOP_WAIT		= 5,
-};
-
 enum bna_rx_flags {
 	BNA_RX_F_ENET_STARTED	= 1,
 	BNA_RX_F_ENABLED	= 2,
@@ -218,11 +189,6 @@ enum bna_rx_mod_flags {
 	BNA_RX_MOD_F_ENET_LOOPBACK	= 2,
 };
 
-enum bna_rxf_oper_state {
-	BNA_RXF_OPER_STATE_RUNNING	= 0x01, /* rxf operational */
-	BNA_RXF_OPER_STATE_PAUSED	= 0x02,	/* rxf in PAUSED state */
-};
-
 enum bna_rxf_flags {
 	BNA_RXF_F_PAUSED		= 1,
 };
@@ -237,24 +203,6 @@ enum bna_rxf_event {
 	RXF_E_FW_RESP			= 7,
 };
 
-enum bna_rxf_state {
-	BNA_RXF_STOPPED			= 1,
-	BNA_RXF_START_WAIT		= 2,
-	BNA_RXF_CAM_FLTR_MOD_WAIT	= 3,
-	BNA_RXF_STARTED			= 4,
-	BNA_RXF_CAM_FLTR_CLR_WAIT	= 5,
-	BNA_RXF_STOP_WAIT		= 6,
-	BNA_RXF_PAUSE_WAIT		= 7,
-	BNA_RXF_RESUME_WAIT		= 8,
-	BNA_RXF_STAT_CLR_WAIT		= 9,
-};
-
-enum bna_port_type {
-	BNA_PORT_T_REGULAR		= 0,
-	BNA_PORT_T_LOOPBACK_INTERNAL	= 1,
-	BNA_PORT_T_LOOPBACK_EXTERNAL	= 2,
-};
-
 enum bna_enet_type {
 	BNA_ENET_T_REGULAR		= 0,
 	BNA_ENET_T_LOOPBACK_INTERNAL	= 1,
@@ -267,25 +215,12 @@ enum bna_link_status {
 	BNA_CEE_UP		= 2
 };
 
-enum bna_llport_flags {
-	BNA_LLPORT_F_ADMIN_UP		= 1,
-	BNA_LLPORT_F_PORT_ENABLED	= 2,
-	BNA_LLPORT_F_RX_STARTED		= 4
-};
-
 enum bna_ethport_flags {
 	BNA_ETHPORT_F_ADMIN_UP		= 1,
 	BNA_ETHPORT_F_PORT_ENABLED	= 2,
 	BNA_ETHPORT_F_RX_STARTED	= 4,
 };
 
-enum bna_port_flags {
-	BNA_PORT_F_DEVICE_READY	= 1,
-	BNA_PORT_F_ENABLED	= 2,
-	BNA_PORT_F_PAUSE_CHANGED = 4,
-	BNA_PORT_F_MTU_CHANGED	= 8
-};
-
 enum bna_enet_flags {
 	BNA_ENET_F_IOCETH_READY		= 1,
 	BNA_ENET_F_ENABLED		= 2,
@@ -418,32 +353,7 @@ struct bna_ioceth {
 
 /**
  *
- * Mail box
- *
- */
-
-struct bna_mbox_qe {
-	/* This should be the first one */
-	struct list_head			qe;
-
-	struct bfa_mbox_cmd cmd;
-	u32		cmd_len;
-	/* Callback for port, tx, rx, rxf */
-	void (*cbfn)(void *arg, int status);
-	void			*cbarg;
-};
-
-struct bna_mbox_mod {
-	enum bna_mbox_state state;
-	struct list_head			posted_q;
-	u32		msg_pending;
-	u32		msg_ctr;
-	struct bna *bna;
-};
-
-/**
- *
- * Port
+ * Enet
  *
  */
 
@@ -453,60 +363,6 @@ struct bna_pause_config {
 	enum bna_status rx_pause;
 };
 
-struct bna_llport {
-	bfa_fsm_t		fsm;
-	enum bna_llport_flags flags;
-
-	enum bna_port_type type;
-
-	enum bna_link_status link_status;
-
-	int			rx_started_count;
-
-	void (*stop_cbfn)(struct bna_port *, enum bna_cb_status);
-
-	struct bna_mbox_qe mbox_qe;
-
-	struct bna *bna;
-};
-
-struct bna_port {
-	bfa_fsm_t		fsm;
-	enum bna_port_flags flags;
-
-	enum bna_port_type type;
-
-	struct bna_llport llport;
-
-	struct bna_pause_config pause_config;
-	u8			priority;
-	int			mtu;
-
-	/* Callback for bna_port_disable(), port_stop() */
-	void (*stop_cbfn)(void *, enum bna_cb_status);
-	void			*stop_cbarg;
-
-	/* Callback for bna_port_pause_config() */
-	void (*pause_cbfn)(struct bnad *, enum bna_cb_status);
-
-	/* Callback for bna_port_mtu_set() */
-	void (*mtu_cbfn)(struct bnad *, enum bna_cb_status);
-
-	void (*link_cbfn)(struct bnad *, enum bna_link_status);
-
-	struct bfa_wc		chld_stop_wc;
-
-	struct bna_mbox_qe mbox_qe;
-
-	struct bna *bna;
-};
-
-/**
- *
- * Enet
- *
- */
-
 struct bna_enet {
 	bfa_fsm_t		fsm;
 	enum bna_enet_flags flags;
@@ -569,27 +425,6 @@ struct bna_ethport {
  *
  */
 
-/* IB index segment structure */
-struct bna_ibidx_seg {
-	/* This should be the first one */
-	struct list_head			qe;
-
-	u8			ib_seg_size;
-	u8			ib_idx_tbl_offset;
-};
-
-/* Interrupt structure */
-struct bna_intr {
-	/* This should be the first one */
-	struct list_head			qe;
-	int			ref_count;
-
-	enum bna_intr_type intr_type;
-	int			vector;
-
-	struct bna_ib *ib;
-};
-
 /* Doorbell structure */
 struct bna_ib_dbell {
 	void *__iomem doorbell_addr;
@@ -752,33 +587,6 @@ struct bna_tx_mod {
 
 /**
  *
- * Receive Indirection Table
- *
- */
-
-/* One row of RIT table */
-struct bna_rit_entry {
-	u8 large_rxq_id;	/* used for either large or data buffers */
-	u8 small_rxq_id;	/* used for either small or header buffers */
-};
-
-/* RIT segment */
-struct bna_rit_segment {
-	struct list_head			qe;
-
-	u32		rit_offset;
-	u32		rit_size;
-	/**
-	 * max_rit_size: Varies per RIT segment depending on how RIT is
-	 * partitioned
-	 */
-	u32		max_rit_size;
-
-	struct bna_rit_entry *rit;
-};
-
-/**
- *
  * Rx object
  *
  */
@@ -1123,42 +931,6 @@ struct bna_mcam_mod {
  *
  */
 
-struct bna_tx_stats {
-	int			tx_state;
-	int			tx_flags;
-	int			num_txqs;
-	u32		txq_bmap[2];
-	int			txf_id;
-};
-
-struct bna_rx_stats {
-	int			rx_state;
-	int			rx_flags;
-	int			num_rxps;
-	int			num_rxqs;
-	u32		rxq_bmap[2];
-	u32		cq_bmap[2];
-	int			rxf_id;
-	int			rxf_state;
-	int			rxf_oper_state;
-	int			num_active_ucast;
-	int			num_active_mcast;
-	int			rxmode_active;
-	int			vlan_filter_status;
-	int			rss_status;
-	int			hds_status;
-};
-
-struct bna_sw_stats {
-	int			device_state;
-	int			port_state;
-	int			port_flags;
-	int			llport_state;
-	int			priority;
-	int			num_active_tx;
-	int			num_active_rx;
-};
-
 struct bna_stats {
 	struct bna_dma_addr	hw_stats_dma;
 	struct bfi_enet_stats	*hw_stats_kva;
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 7/8] bna: Remove Obsolete Files
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (5 preceding siblings ...)
  2011-08-09  2:21 ` [PATCH 6/8] bna: Remove Unused Code Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-09  2:21 ` [PATCH 8/8] bna: Driver Version changed to 3.0.2.0 Rasesh Mody
  2011-08-11 14:33 ` [PATCH 0/8] bna: Update bna driver version " David Miller
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Removed bfi_ll.h, bna_hw.h, bna_ctrl.c and bna_txrx.c due to the ENET, MSGQ
   and TXRX changes for the new FW driver interface and Tx Rx re-design.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfi_ll.h   |  438 -----
 drivers/net/bna/bna.h      |    1 -
 drivers/net/bna/bna_ctrl.c | 3078 --------------------------------
 drivers/net/bna/bna_hw.h   | 1492 ----------------
 drivers/net/bna/bna_txrx.c | 4185 --------------------------------------------
 5 files changed, 0 insertions(+), 9194 deletions(-)
 delete mode 100644 drivers/net/bna/bfi_ll.h
 delete mode 100644 drivers/net/bna/bna_ctrl.c
 delete mode 100644 drivers/net/bna/bna_hw.h
 delete mode 100644 drivers/net/bna/bna_txrx.c

diff --git a/drivers/net/bna/bfi_ll.h b/drivers/net/bna/bfi_ll.h
deleted file mode 100644
index bee4d05..0000000
--- a/drivers/net/bna/bfi_ll.h
+++ /dev/null
@@ -1,438 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-#ifndef __BFI_LL_H__
-#define __BFI_LL_H__
-
-#include "bfi.h"
-
-#pragma pack(1)
-
-/**
- * @brief
- *	"enums" for all LL mailbox messages other than IOC
- */
-enum {
-	BFI_LL_H2I_MAC_UCAST_SET_REQ = 1,
-	BFI_LL_H2I_MAC_UCAST_ADD_REQ = 2,
-	BFI_LL_H2I_MAC_UCAST_DEL_REQ = 3,
-
-	BFI_LL_H2I_MAC_MCAST_ADD_REQ = 4,
-	BFI_LL_H2I_MAC_MCAST_DEL_REQ = 5,
-	BFI_LL_H2I_MAC_MCAST_FILTER_REQ = 6,
-	BFI_LL_H2I_MAC_MCAST_DEL_ALL_REQ = 7,
-
-	BFI_LL_H2I_PORT_ADMIN_REQ = 8,
-	BFI_LL_H2I_STATS_GET_REQ = 9,
-	BFI_LL_H2I_STATS_CLEAR_REQ = 10,
-
-	BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ = 11,
-	BFI_LL_H2I_RXF_DEFAULT_SET_REQ = 12,
-
-	BFI_LL_H2I_TXQ_STOP_REQ = 13,
-	BFI_LL_H2I_RXQ_STOP_REQ = 14,
-
-	BFI_LL_H2I_DIAG_LOOPBACK_REQ = 15,
-
-	BFI_LL_H2I_SET_PAUSE_REQ = 16,
-	BFI_LL_H2I_MTU_INFO_REQ = 17,
-
-	BFI_LL_H2I_RX_REQ = 18,
-} ;
-
-enum {
-	BFI_LL_I2H_MAC_UCAST_SET_RSP = BFA_I2HM(1),
-	BFI_LL_I2H_MAC_UCAST_ADD_RSP = BFA_I2HM(2),
-	BFI_LL_I2H_MAC_UCAST_DEL_RSP = BFA_I2HM(3),
-
-	BFI_LL_I2H_MAC_MCAST_ADD_RSP = BFA_I2HM(4),
-	BFI_LL_I2H_MAC_MCAST_DEL_RSP = BFA_I2HM(5),
-	BFI_LL_I2H_MAC_MCAST_FILTER_RSP = BFA_I2HM(6),
-	BFI_LL_I2H_MAC_MCAST_DEL_ALL_RSP = BFA_I2HM(7),
-
-	BFI_LL_I2H_PORT_ADMIN_RSP = BFA_I2HM(8),
-	BFI_LL_I2H_STATS_GET_RSP = BFA_I2HM(9),
-	BFI_LL_I2H_STATS_CLEAR_RSP = BFA_I2HM(10),
-
-	BFI_LL_I2H_RXF_PROMISCUOUS_SET_RSP = BFA_I2HM(11),
-	BFI_LL_I2H_RXF_DEFAULT_SET_RSP = BFA_I2HM(12),
-
-	BFI_LL_I2H_TXQ_STOP_RSP = BFA_I2HM(13),
-	BFI_LL_I2H_RXQ_STOP_RSP = BFA_I2HM(14),
-
-	BFI_LL_I2H_DIAG_LOOPBACK_RSP = BFA_I2HM(15),
-
-	BFI_LL_I2H_SET_PAUSE_RSP = BFA_I2HM(16),
-
-	BFI_LL_I2H_MTU_INFO_RSP = BFA_I2HM(17),
-	BFI_LL_I2H_RX_RSP = BFA_I2HM(18),
-
-	BFI_LL_I2H_LINK_DOWN_AEN = BFA_I2HM(19),
-	BFI_LL_I2H_LINK_UP_AEN = BFA_I2HM(20),
-
-	BFI_LL_I2H_PORT_ENABLE_AEN = BFA_I2HM(21),
-	BFI_LL_I2H_PORT_DISABLE_AEN = BFA_I2HM(22),
-} ;
-
-/**
- * @brief bfi_ll_mac_addr_req is used by:
- *        BFI_LL_H2I_MAC_UCAST_SET_REQ
- *        BFI_LL_H2I_MAC_UCAST_ADD_REQ
- *        BFI_LL_H2I_MAC_UCAST_DEL_REQ
- *        BFI_LL_H2I_MAC_MCAST_ADD_REQ
- *        BFI_LL_H2I_MAC_MCAST_DEL_REQ
- */
-struct bfi_ll_mac_addr_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		rxf_id;
-	u8		rsvd1[3];
-	mac_t		mac_addr;
-	u8		rsvd2[2];
-};
-
-/**
- * @brief bfi_ll_mcast_filter_req is used by:
- *	  BFI_LL_H2I_MAC_MCAST_FILTER_REQ
- */
-struct bfi_ll_mcast_filter_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		rxf_id;
-	u8		enable;
-	u8		rsvd[2];
-};
-
-/**
- * @brief bfi_ll_mcast_del_all is used by:
- *	  BFI_LL_H2I_MAC_MCAST_DEL_ALL_REQ
- */
-struct bfi_ll_mcast_del_all_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		   rxf_id;
-	u8		   rsvd[3];
-};
-
-/**
- * @brief bfi_ll_q_stop_req is used by:
- *	BFI_LL_H2I_TXQ_STOP_REQ
- *	BFI_LL_H2I_RXQ_STOP_REQ
- */
-struct bfi_ll_q_stop_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u32	q_id_mask[2];	/* !< bit-mask for queue ids */
-};
-
-/**
- * @brief bfi_ll_stats_req is used by:
- *    BFI_LL_I2H_STATS_GET_REQ
- *    BFI_LL_I2H_STATS_CLEAR_REQ
- */
-struct bfi_ll_stats_req {
-	struct bfi_mhdr mh;	/*!< common msg header */
-	u16 stats_mask;	/* !< bit-mask for non-function statistics */
-	u8	rsvd[2];
-	u32 rxf_id_mask[2];	/* !< bit-mask for RxF Statistics */
-	u32 txf_id_mask[2];	/* !< bit-mask for TxF Statistics */
-	union bfi_addr_u  host_buffer;	/* !< where statistics are returned */
-};
-
-/**
- * @brief defines for "stats_mask" above.
- */
-#define BFI_LL_STATS_MAC	(1 << 0)	/* !< MAC Statistics */
-#define BFI_LL_STATS_BPC	(1 << 1)	/* !< Pause Stats from BPC */
-#define BFI_LL_STATS_RAD	(1 << 2)	/* !< Rx Admission Statistics */
-#define BFI_LL_STATS_RX_FC	(1 << 3)	/* !< Rx FC Stats from RxA */
-#define BFI_LL_STATS_TX_FC	(1 << 4)	/* !< Tx FC Stats from TxA */
-
-#define BFI_LL_STATS_ALL	0x1f
-
-/**
- * @brief bfi_ll_port_admin_req
- */
-struct bfi_ll_port_admin_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		 up;
-	u8		 rsvd[3];
-};
-
-/**
- * @brief bfi_ll_rxf_req is used by:
- *      BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ
- *      BFI_LL_H2I_RXF_DEFAULT_SET_REQ
- */
-struct bfi_ll_rxf_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		rxf_id;
-	u8		enable;
-	u8		rsvd[2];
-};
-
-/**
- * @brief bfi_ll_rxf_multi_req is used by:
- *	BFI_LL_H2I_RX_REQ
- */
-struct bfi_ll_rxf_multi_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u32	rxf_id_mask[2];
-	u8		enable;
-	u8		rsvd[3];
-};
-
-/**
- * @brief enum for Loopback opmodes
- */
-enum {
-	BFI_LL_DIAG_LB_OPMODE_EXT = 0,
-	BFI_LL_DIAG_LB_OPMODE_CBL = 1,
-};
-
-/**
- * @brief bfi_ll_set_pause_req is used by:
- *	BFI_LL_H2I_SET_PAUSE_REQ
- */
-struct bfi_ll_set_pause_req {
-	struct bfi_mhdr mh;
-	u8		tx_pause; /* 1 = enable, 0 =  disable */
-	u8		rx_pause; /* 1 = enable, 0 =  disable */
-	u8		rsvd[2];
-};
-
-/**
- * @brief bfi_ll_mtu_info_req is used by:
- *	BFI_LL_H2I_MTU_INFO_REQ
- */
-struct bfi_ll_mtu_info_req {
-	struct bfi_mhdr mh;
-	u16	mtu;
-	u8		rsvd[2];
-};
-
-/**
- * @brief
- *	  Response header format used by all responses
- *	  For both responses and asynchronous notifications
- */
-struct bfi_ll_rsp {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		error;
-	u8		rsvd[3];
-};
-
-/**
- * @brief bfi_ll_cee_aen is used by:
- *	BFI_LL_I2H_LINK_DOWN_AEN
- *	BFI_LL_I2H_LINK_UP_AEN
- */
-struct bfi_ll_aen {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u32	reason;
-	u8		cee_linkup;
-	u8		prio_map;    /*!< LL priority bit-map */
-	u8		rsvd[2];
-};
-
-/**
- * @brief
- * 	The following error codes can be returned
- *	by the mbox commands
- */
-enum {
-	BFI_LL_CMD_OK 		= 0,
-	BFI_LL_CMD_FAIL 	= 1,
-	BFI_LL_CMD_DUP_ENTRY	= 2,	/* !< Duplicate entry in CAM */
-	BFI_LL_CMD_CAM_FULL	= 3,	/* !< CAM is full */
-	BFI_LL_CMD_NOT_OWNER	= 4,   	/* !< Not permitted, b'cos not owner */
-	BFI_LL_CMD_NOT_EXEC	= 5,   	/* !< Was not sent to f/w at all */
-	BFI_LL_CMD_WAITING	= 6,	/* !< Waiting for completion (VMware) */
-	BFI_LL_CMD_PORT_DISABLED	= 7,	/* !< port in disabled state */
-} ;
-
-/* Statistics */
-#define BFI_LL_TXF_ID_MAX  	64
-#define BFI_LL_RXF_ID_MAX  	64
-
-/* TxF Frame Statistics */
-struct bfi_ll_stats_txf {
-	u64 ucast_octets;
-	u64 ucast;
-	u64 ucast_vlan;
-
-	u64 mcast_octets;
-	u64 mcast;
-	u64 mcast_vlan;
-
-	u64 bcast_octets;
-	u64 bcast;
-	u64 bcast_vlan;
-
-	u64 errors;
-	u64 filter_vlan;      /* frames filtered due to VLAN */
-	u64 filter_mac_sa;    /* frames filtered due to SA check */
-};
-
-/* RxF Frame Statistics */
-struct bfi_ll_stats_rxf {
-	u64 ucast_octets;
-	u64 ucast;
-	u64 ucast_vlan;
-
-	u64 mcast_octets;
-	u64 mcast;
-	u64 mcast_vlan;
-
-	u64 bcast_octets;
-	u64 bcast;
-	u64 bcast_vlan;
-	u64 frame_drops;
-};
-
-/* FC Tx Frame Statistics */
-struct bfi_ll_stats_fc_tx {
-	u64 txf_ucast_octets;
-	u64 txf_ucast;
-	u64 txf_ucast_vlan;
-
-	u64 txf_mcast_octets;
-	u64 txf_mcast;
-	u64 txf_mcast_vlan;
-
-	u64 txf_bcast_octets;
-	u64 txf_bcast;
-	u64 txf_bcast_vlan;
-
-	u64 txf_parity_errors;
-	u64 txf_timeout;
-	u64 txf_fid_parity_errors;
-};
-
-/* FC Rx Frame Statistics */
-struct bfi_ll_stats_fc_rx {
-	u64 rxf_ucast_octets;
-	u64 rxf_ucast;
-	u64 rxf_ucast_vlan;
-
-	u64 rxf_mcast_octets;
-	u64 rxf_mcast;
-	u64 rxf_mcast_vlan;
-
-	u64 rxf_bcast_octets;
-	u64 rxf_bcast;
-	u64 rxf_bcast_vlan;
-};
-
-/* RAD Frame Statistics */
-struct bfi_ll_stats_rad {
-	u64 rx_frames;
-	u64 rx_octets;
-	u64 rx_vlan_frames;
-
-	u64 rx_ucast;
-	u64 rx_ucast_octets;
-	u64 rx_ucast_vlan;
-
-	u64 rx_mcast;
-	u64 rx_mcast_octets;
-	u64 rx_mcast_vlan;
-
-	u64 rx_bcast;
-	u64 rx_bcast_octets;
-	u64 rx_bcast_vlan;
-
-	u64 rx_drops;
-};
-
-/* BPC Tx Registers */
-struct bfi_ll_stats_bpc {
-	/* transmit stats */
-	u64 tx_pause[8];
-	u64 tx_zero_pause[8];	/*!< Pause cancellation */
-	/*!<Pause initiation rather than retention */
-	u64 tx_first_pause[8];
-
-	/* receive stats */
-	u64 rx_pause[8];
-	u64 rx_zero_pause[8];	/*!< Pause cancellation */
-	/*!<Pause initiation rather than retention */
-	u64 rx_first_pause[8];
-};
-
-/* MAC Rx Statistics */
-struct bfi_ll_stats_mac {
-	u64 frame_64;		/* both rx and tx counter */
-	u64 frame_65_127;		/* both rx and tx counter */
-	u64 frame_128_255;		/* both rx and tx counter */
-	u64 frame_256_511;		/* both rx and tx counter */
-	u64 frame_512_1023;	/* both rx and tx counter */
-	u64 frame_1024_1518;	/* both rx and tx counter */
-	u64 frame_1519_1522;	/* both rx and tx counter */
-
-	/* receive stats */
-	u64 rx_bytes;
-	u64 rx_packets;
-	u64 rx_fcs_error;
-	u64 rx_multicast;
-	u64 rx_broadcast;
-	u64 rx_control_frames;
-	u64 rx_pause;
-	u64 rx_unknown_opcode;
-	u64 rx_alignment_error;
-	u64 rx_frame_length_error;
-	u64 rx_code_error;
-	u64 rx_carrier_sense_error;
-	u64 rx_undersize;
-	u64 rx_oversize;
-	u64 rx_fragments;
-	u64 rx_jabber;
-	u64 rx_drop;
-
-	/* transmit stats */
-	u64 tx_bytes;
-	u64 tx_packets;
-	u64 tx_multicast;
-	u64 tx_broadcast;
-	u64 tx_pause;
-	u64 tx_deferral;
-	u64 tx_excessive_deferral;
-	u64 tx_single_collision;
-	u64 tx_muliple_collision;
-	u64 tx_late_collision;
-	u64 tx_excessive_collision;
-	u64 tx_total_collision;
-	u64 tx_pause_honored;
-	u64 tx_drop;
-	u64 tx_jabber;
-	u64 tx_fcs_error;
-	u64 tx_control_frame;
-	u64 tx_oversize;
-	u64 tx_undersize;
-	u64 tx_fragments;
-};
-
-/* Complete statistics */
-struct bfi_ll_stats {
-	struct bfi_ll_stats_mac		mac_stats;
-	struct bfi_ll_stats_bpc		bpc_stats;
-	struct bfi_ll_stats_rad		rad_stats;
-	struct bfi_ll_stats_fc_rx	fc_rx_stats;
-	struct bfi_ll_stats_fc_tx	fc_tx_stats;
-	struct bfi_ll_stats_rxf	rxf_stats[BFI_LL_RXF_ID_MAX];
-	struct bfi_ll_stats_txf	txf_stats[BFI_LL_TXF_ID_MAX];
-};
-
-#pragma pack()
-
-#endif  /* __BFI_LL_H__ */
diff --git a/drivers/net/bna/bna.h b/drivers/net/bna/bna.h
index 1f1fa93..2a587c5 100644
--- a/drivers/net/bna/bna.h
+++ b/drivers/net/bna/bna.h
@@ -16,7 +16,6 @@
 #include "bfa_cs.h"
 #include "bfa_ioc.h"
 #include "cna.h"
-#include "bfi_ll.h"
 #include "bna_types.h"
 
 extern const u32 bna_napi_dim_vector[][BNA_BIAS_T_MAX];
diff --git a/drivers/net/bna/bna_ctrl.c b/drivers/net/bna/bna_ctrl.c
deleted file mode 100644
index 7d95517..0000000
--- a/drivers/net/bna/bna_ctrl.c
+++ /dev/null
@@ -1,3078 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-#include "bna.h"
-#include "bfa_cs.h"
-
-static void bna_device_cb_port_stopped(void *arg, enum bna_cb_status status);
-
-static void
-bna_port_cb_link_up(struct bna_port *port, struct bfi_ll_aen *aen,
-			int status)
-{
-	int i;
-	u8 prio_map;
-
-	port->llport.link_status = BNA_LINK_UP;
-	if (aen->cee_linkup)
-		port->llport.link_status = BNA_CEE_UP;
-
-	/* Compute the priority */
-	prio_map = aen->prio_map;
-	if (prio_map) {
-		for (i = 0; i < 8; i++) {
-			if ((prio_map >> i) & 0x1)
-				break;
-		}
-		port->priority = i;
-	} else
-		port->priority = 0;
-
-	/* Dispatch events */
-	bna_tx_mod_cee_link_status(&port->bna->tx_mod, aen->cee_linkup);
-	bna_tx_mod_prio_changed(&port->bna->tx_mod, port->priority);
-	port->link_cbfn(port->bna->bnad, port->llport.link_status);
-}
-
-static void
-bna_port_cb_link_down(struct bna_port *port, int status)
-{
-	port->llport.link_status = BNA_LINK_DOWN;
-
-	/* Dispatch events */
-	bna_tx_mod_cee_link_status(&port->bna->tx_mod, BNA_LINK_DOWN);
-	port->link_cbfn(port->bna->bnad, BNA_LINK_DOWN);
-}
-
-static inline int
-llport_can_be_up(struct bna_llport *llport)
-{
-	int ready = 0;
-	if (llport->type == BNA_PORT_T_REGULAR)
-		ready = ((llport->flags & BNA_LLPORT_F_ADMIN_UP) &&
-			 (llport->flags & BNA_LLPORT_F_RX_STARTED) &&
-			 (llport->flags & BNA_LLPORT_F_PORT_ENABLED));
-	else
-		ready = ((llport->flags & BNA_LLPORT_F_ADMIN_UP) &&
-			 (llport->flags & BNA_LLPORT_F_RX_STARTED) &&
-			 !(llport->flags & BNA_LLPORT_F_PORT_ENABLED));
-	return ready;
-}
-
-#define llport_is_up llport_can_be_up
-
-enum bna_llport_event {
-	LLPORT_E_START			= 1,
-	LLPORT_E_STOP			= 2,
-	LLPORT_E_FAIL			= 3,
-	LLPORT_E_UP			= 4,
-	LLPORT_E_DOWN			= 5,
-	LLPORT_E_FWRESP_UP_OK		= 6,
-	LLPORT_E_FWRESP_UP_FAIL		= 7,
-	LLPORT_E_FWRESP_DOWN		= 8
-};
-
-static void
-bna_llport_cb_port_enabled(struct bna_llport *llport)
-{
-	llport->flags |= BNA_LLPORT_F_PORT_ENABLED;
-
-	if (llport_can_be_up(llport))
-		bfa_fsm_send_event(llport, LLPORT_E_UP);
-}
-
-static void
-bna_llport_cb_port_disabled(struct bna_llport *llport)
-{
-	int llport_up = llport_is_up(llport);
-
-	llport->flags &= ~BNA_LLPORT_F_PORT_ENABLED;
-
-	if (llport_up)
-		bfa_fsm_send_event(llport, LLPORT_E_DOWN);
-}
-
-/**
- * MBOX
- */
-static int
-bna_is_aen(u8 msg_id)
-{
-	switch (msg_id) {
-	case BFI_LL_I2H_LINK_DOWN_AEN:
-	case BFI_LL_I2H_LINK_UP_AEN:
-	case BFI_LL_I2H_PORT_ENABLE_AEN:
-	case BFI_LL_I2H_PORT_DISABLE_AEN:
-		return 1;
-
-	default:
-		return 0;
-	}
-}
-
-static void
-bna_mbox_aen_callback(struct bna *bna, struct bfi_mbmsg *msg)
-{
-	struct bfi_ll_aen *aen = (struct bfi_ll_aen *)(msg);
-
-	switch (aen->mh.msg_id) {
-	case BFI_LL_I2H_LINK_UP_AEN:
-		bna_port_cb_link_up(&bna->port, aen, aen->reason);
-		break;
-	case BFI_LL_I2H_LINK_DOWN_AEN:
-		bna_port_cb_link_down(&bna->port, aen->reason);
-		break;
-	case BFI_LL_I2H_PORT_ENABLE_AEN:
-		bna_llport_cb_port_enabled(&bna->port.llport);
-		break;
-	case BFI_LL_I2H_PORT_DISABLE_AEN:
-		bna_llport_cb_port_disabled(&bna->port.llport);
-		break;
-	default:
-		break;
-	}
-}
-
-static void
-bna_ll_isr(void *llarg, struct bfi_mbmsg *msg)
-{
-	struct bna *bna = (struct bna *)(llarg);
-	struct bfi_ll_rsp *mb_rsp = (struct bfi_ll_rsp *)(msg);
-	struct bfi_mhdr *cmd_h, *rsp_h;
-	struct bna_mbox_qe *mb_qe = NULL;
-	int to_post = 0;
-	u8 aen = 0;
-	char message[BNA_MESSAGE_SIZE];
-
-	aen = bna_is_aen(mb_rsp->mh.msg_id);
-
-	if (!aen) {
-		mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
-		cmd_h = (struct bfi_mhdr *)(&mb_qe->cmd.msg[0]);
-		rsp_h = (struct bfi_mhdr *)(&mb_rsp->mh);
-
-		if ((BFA_I2HM(cmd_h->msg_id) == rsp_h->msg_id) &&
-		    (cmd_h->mtag.i2htok == rsp_h->mtag.i2htok)) {
-			/* Remove the request from posted_q, update state  */
-			list_del(&mb_qe->qe);
-			bna->mbox_mod.msg_pending--;
-			if (list_empty(&bna->mbox_mod.posted_q))
-				bna->mbox_mod.state = BNA_MBOX_FREE;
-			else
-				to_post = 1;
-
-			/* Dispatch the cbfn */
-			if (mb_qe->cbfn)
-				mb_qe->cbfn(mb_qe->cbarg, mb_rsp->error);
-
-			/* Post the next entry, if needed */
-			if (to_post) {
-				mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
-				bfa_nw_ioc_mbox_queue(&bna->device.ioc,
-							&mb_qe->cmd, NULL,
-							NULL);
-			}
-		} else {
-			snprintf(message, BNA_MESSAGE_SIZE,
-				       "No matching rsp for [%d:%d:%d]\n",
-				       mb_rsp->mh.msg_class, mb_rsp->mh.msg_id,
-				       mb_rsp->mh.mtag.i2htok);
-		pr_info("%s", message);
-		}
-
-	} else
-		bna_mbox_aen_callback(bna, msg);
-}
-
-static void
-bna_err_handler(struct bna *bna, u32 intr_status)
-{
-	u32 init_halt;
-
-	if (intr_status & __HALT_STATUS_BITS) {
-		init_halt = readl(bna->device.ioc.ioc_regs.ll_halt);
-		init_halt &= ~__FW_INIT_HALT_P;
-		writel(init_halt, bna->device.ioc.ioc_regs.ll_halt);
-	}
-
-	bfa_nw_ioc_error_isr(&bna->device.ioc);
-}
-
-void
-bna_mbox_handler(struct bna *bna, u32 intr_status)
-{
-	if (BNA_IS_ERR_INTR(intr_status)) {
-		bna_err_handler(bna, intr_status);
-		return;
-	}
-	if (BNA_IS_MBOX_INTR(intr_status))
-		bfa_nw_ioc_mbox_isr(&bna->device.ioc);
-}
-
-void
-bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe)
-{
-	struct bfi_mhdr *mh;
-
-	mh = (struct bfi_mhdr *)(&mbox_qe->cmd.msg[0]);
-
-	mh->mtag.i2htok = htons(bna->mbox_mod.msg_ctr);
-	bna->mbox_mod.msg_ctr++;
-	bna->mbox_mod.msg_pending++;
-	if (bna->mbox_mod.state == BNA_MBOX_FREE) {
-		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd,
-					NULL, NULL);
-		bna->mbox_mod.state = BNA_MBOX_POSTED;
-	} else {
-		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-	}
-}
-
-static void
-bna_mbox_flush_q(struct bna *bna, struct list_head *q)
-{
-	struct bna_mbox_qe *mb_qe = NULL;
-	struct list_head			*mb_q;
-	void 			(*cbfn)(void *arg, int status);
-	void 			*cbarg;
-
-	mb_q = &bna->mbox_mod.posted_q;
-
-	while (!list_empty(mb_q)) {
-		bfa_q_deq(mb_q, &mb_qe);
-		cbfn = mb_qe->cbfn;
-		cbarg = mb_qe->cbarg;
-		bfa_q_qe_init(mb_qe);
-		bna->mbox_mod.msg_pending--;
-
-		if (cbfn)
-			cbfn(cbarg, BNA_CB_NOT_EXEC);
-	}
-
-	bna->mbox_mod.state = BNA_MBOX_FREE;
-}
-
-static void
-bna_mbox_mod_start(struct bna_mbox_mod *mbox_mod)
-{
-}
-
-static void
-bna_mbox_mod_stop(struct bna_mbox_mod *mbox_mod)
-{
-	bna_mbox_flush_q(mbox_mod->bna, &mbox_mod->posted_q);
-}
-
-static void
-bna_mbox_mod_init(struct bna_mbox_mod *mbox_mod, struct bna *bna)
-{
-	bfa_nw_ioc_mbox_regisr(&bna->device.ioc, BFI_MC_LL, bna_ll_isr, bna);
-	mbox_mod->state = BNA_MBOX_FREE;
-	mbox_mod->msg_ctr = mbox_mod->msg_pending = 0;
-	INIT_LIST_HEAD(&mbox_mod->posted_q);
-	mbox_mod->bna = bna;
-}
-
-static void
-bna_mbox_mod_uninit(struct bna_mbox_mod *mbox_mod)
-{
-	mbox_mod->bna = NULL;
-}
-
-/**
- * LLPORT
- */
-#define call_llport_stop_cbfn(llport, status)\
-do {\
-	if ((llport)->stop_cbfn)\
-		(llport)->stop_cbfn(&(llport)->bna->port, status);\
-	(llport)->stop_cbfn = NULL;\
-} while (0)
-
-static void bna_fw_llport_up(struct bna_llport *llport);
-static void bna_fw_cb_llport_up(void *arg, int status);
-static void bna_fw_llport_down(struct bna_llport *llport);
-static void bna_fw_cb_llport_down(void *arg, int status);
-static void bna_llport_start(struct bna_llport *llport);
-static void bna_llport_stop(struct bna_llport *llport);
-static void bna_llport_fail(struct bna_llport *llport);
-
-enum bna_llport_state {
-	BNA_LLPORT_STOPPED		= 1,
-	BNA_LLPORT_DOWN			= 2,
-	BNA_LLPORT_UP_RESP_WAIT		= 3,
-	BNA_LLPORT_DOWN_RESP_WAIT	= 4,
-	BNA_LLPORT_UP			= 5,
-	BNA_LLPORT_LAST_RESP_WAIT 	= 6
-};
-
-bfa_fsm_state_decl(bna_llport, stopped, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, down, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, up_resp_wait, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, down_resp_wait, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, up, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, last_resp_wait, struct bna_llport,
-			enum bna_llport_event);
-
-static struct bfa_sm_table llport_sm_table[] = {
-	{BFA_SM(bna_llport_sm_stopped), BNA_LLPORT_STOPPED},
-	{BFA_SM(bna_llport_sm_down), BNA_LLPORT_DOWN},
-	{BFA_SM(bna_llport_sm_up_resp_wait), BNA_LLPORT_UP_RESP_WAIT},
-	{BFA_SM(bna_llport_sm_down_resp_wait), BNA_LLPORT_DOWN_RESP_WAIT},
-	{BFA_SM(bna_llport_sm_up), BNA_LLPORT_UP},
-	{BFA_SM(bna_llport_sm_last_resp_wait), BNA_LLPORT_LAST_RESP_WAIT}
-};
-
-static void
-bna_llport_sm_stopped_entry(struct bna_llport *llport)
-{
-	llport->bna->port.link_cbfn((llport)->bna->bnad, BNA_LINK_DOWN);
-	call_llport_stop_cbfn(llport, BNA_CB_SUCCESS);
-}
-
-static void
-bna_llport_sm_stopped(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_START:
-		bfa_fsm_set_state(llport, bna_llport_sm_down);
-		break;
-
-	case LLPORT_E_STOP:
-		call_llport_stop_cbfn(llport, BNA_CB_SUCCESS);
-		break;
-
-	case LLPORT_E_FAIL:
-		break;
-
-	case LLPORT_E_DOWN:
-		/* This event is received due to Rx objects failing */
-		/* No-op */
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-	case LLPORT_E_FWRESP_DOWN:
-		/**
-		 * These events are received due to flushing of mbox when
-		 * device fails
-		 */
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_down_entry(struct bna_llport *llport)
-{
-	bnad_cb_port_link_status((llport)->bna->bnad, BNA_LINK_DOWN);
-}
-
-static void
-bna_llport_sm_down(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_UP:
-		bfa_fsm_set_state(llport, bna_llport_sm_up_resp_wait);
-		bna_fw_llport_up(llport);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_up_resp_wait_entry(struct bna_llport *llport)
-{
-	BUG_ON(!llport_can_be_up(llport));
-	/**
-	 * NOTE: Do not call bna_fw_llport_up() here. That will over step
-	 * mbox due to down_resp_wait -> up_resp_wait transition on event
-	 * LLPORT_E_UP
-	 */
-}
-
-static void
-bna_llport_sm_up_resp_wait(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_last_resp_wait);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_down_resp_wait);
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-		bfa_fsm_set_state(llport, bna_llport_sm_up);
-		break;
-
-	case LLPORT_E_FWRESP_UP_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_down);
-		break;
-
-	case LLPORT_E_FWRESP_DOWN:
-		/* down_resp_wait -> up_resp_wait transition on LLPORT_E_UP */
-		bna_fw_llport_up(llport);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_down_resp_wait_entry(struct bna_llport *llport)
-{
-	/**
-	 * NOTE: Do not call bna_fw_llport_down() here. That will over step
-	 * mbox due to up_resp_wait -> down_resp_wait transition on event
-	 * LLPORT_E_DOWN
-	 */
-}
-
-static void
-bna_llport_sm_down_resp_wait(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_last_resp_wait);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_UP:
-		bfa_fsm_set_state(llport, bna_llport_sm_up_resp_wait);
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-		/* up_resp_wait->down_resp_wait transition on LLPORT_E_DOWN */
-		bna_fw_llport_down(llport);
-		break;
-
-	case LLPORT_E_FWRESP_UP_FAIL:
-	case LLPORT_E_FWRESP_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_down);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_up_entry(struct bna_llport *llport)
-{
-}
-
-static void
-bna_llport_sm_up(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_last_resp_wait);
-		bna_fw_llport_down(llport);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_down_resp_wait);
-		bna_fw_llport_down(llport);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_last_resp_wait_entry(struct bna_llport *llport)
-{
-}
-
-static void
-bna_llport_sm_last_resp_wait(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_DOWN:
-		/**
-		 * This event is received due to Rx objects stopping in
-		 * parallel to llport
-		 */
-		/* No-op */
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-		/* up_resp_wait->last_resp_wait transition on LLPORT_T_STOP */
-		bna_fw_llport_down(llport);
-		break;
-
-	case LLPORT_E_FWRESP_UP_FAIL:
-	case LLPORT_E_FWRESP_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_fw_llport_admin_up(struct bna_llport *llport)
-{
-	struct bfi_ll_port_admin_req ll_req;
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_PORT_ADMIN_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-
-	ll_req.up = BNA_STATUS_T_ENABLED;
-
-	bna_mbox_qe_fill(&llport->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_fw_cb_llport_up, llport);
-
-	bna_mbox_send(llport->bna, &llport->mbox_qe);
-}
-
-static void
-bna_fw_llport_up(struct bna_llport *llport)
-{
-	if (llport->type == BNA_PORT_T_REGULAR)
-		bna_fw_llport_admin_up(llport);
-}
-
-static void
-bna_fw_cb_llport_up(void *arg, int status)
-{
-	struct bna_llport *llport = (struct bna_llport *)arg;
-
-	bfa_q_qe_init(&llport->mbox_qe.qe);
-	if (status == BFI_LL_CMD_FAIL) {
-		if (llport->type == BNA_PORT_T_REGULAR)
-			llport->flags &= ~BNA_LLPORT_F_PORT_ENABLED;
-		else
-			llport->flags &= ~BNA_LLPORT_F_ADMIN_UP;
-		bfa_fsm_send_event(llport, LLPORT_E_FWRESP_UP_FAIL);
-	} else
-		bfa_fsm_send_event(llport, LLPORT_E_FWRESP_UP_OK);
-}
-
-static void
-bna_fw_llport_admin_down(struct bna_llport *llport)
-{
-	struct bfi_ll_port_admin_req ll_req;
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_PORT_ADMIN_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-
-	ll_req.up = BNA_STATUS_T_DISABLED;
-
-	bna_mbox_qe_fill(&llport->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_fw_cb_llport_down, llport);
-
-	bna_mbox_send(llport->bna, &llport->mbox_qe);
-}
-
-static void
-bna_fw_llport_down(struct bna_llport *llport)
-{
-	if (llport->type == BNA_PORT_T_REGULAR)
-		bna_fw_llport_admin_down(llport);
-}
-
-static void
-bna_fw_cb_llport_down(void *arg, int status)
-{
-	struct bna_llport *llport = (struct bna_llport *)arg;
-
-	bfa_q_qe_init(&llport->mbox_qe.qe);
-	bfa_fsm_send_event(llport, LLPORT_E_FWRESP_DOWN);
-}
-
-static void
-bna_port_cb_llport_stopped(struct bna_port *port,
-				enum bna_cb_status status)
-{
-	bfa_wc_down(&port->chld_stop_wc);
-}
-
-static void
-bna_llport_init(struct bna_llport *llport, struct bna *bna)
-{
-	llport->flags |= BNA_LLPORT_F_ADMIN_UP;
-	llport->flags |= BNA_LLPORT_F_PORT_ENABLED;
-	llport->type = BNA_PORT_T_REGULAR;
-	llport->bna = bna;
-
-	llport->link_status = BNA_LINK_DOWN;
-
-	llport->rx_started_count = 0;
-
-	llport->stop_cbfn = NULL;
-
-	bfa_q_qe_init(&llport->mbox_qe.qe);
-
-	bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-}
-
-static void
-bna_llport_uninit(struct bna_llport *llport)
-{
-	llport->flags &= ~BNA_LLPORT_F_ADMIN_UP;
-	llport->flags &= ~BNA_LLPORT_F_PORT_ENABLED;
-
-	llport->bna = NULL;
-}
-
-static void
-bna_llport_start(struct bna_llport *llport)
-{
-	bfa_fsm_send_event(llport, LLPORT_E_START);
-}
-
-static void
-bna_llport_stop(struct bna_llport *llport)
-{
-	llport->stop_cbfn = bna_port_cb_llport_stopped;
-
-	bfa_fsm_send_event(llport, LLPORT_E_STOP);
-}
-
-static void
-bna_llport_fail(struct bna_llport *llport)
-{
-	/* Reset the physical port status to enabled */
-	llport->flags |= BNA_LLPORT_F_PORT_ENABLED;
-	bfa_fsm_send_event(llport, LLPORT_E_FAIL);
-}
-
-static int
-bna_llport_state_get(struct bna_llport *llport)
-{
-	return bfa_sm_to_state(llport_sm_table, llport->fsm);
-}
-
-void
-bna_llport_rx_started(struct bna_llport *llport)
-{
-	llport->rx_started_count++;
-
-	if (llport->rx_started_count == 1) {
-
-		llport->flags |= BNA_LLPORT_F_RX_STARTED;
-
-		if (llport_can_be_up(llport))
-			bfa_fsm_send_event(llport, LLPORT_E_UP);
-	}
-}
-
-void
-bna_llport_rx_stopped(struct bna_llport *llport)
-{
-	int llport_up = llport_is_up(llport);
-
-	llport->rx_started_count--;
-
-	if (llport->rx_started_count == 0) {
-
-		llport->flags &= ~BNA_LLPORT_F_RX_STARTED;
-
-		if (llport_up)
-			bfa_fsm_send_event(llport, LLPORT_E_DOWN);
-	}
-}
-
-/**
- * PORT
- */
-#define bna_port_chld_start(port)\
-do {\
-	enum bna_tx_type tx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bna_llport_start(&(port)->llport);\
-	bna_tx_mod_start(&(port)->bna->tx_mod, tx_type);\
-	bna_rx_mod_start(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define bna_port_chld_stop(port)\
-do {\
-	enum bna_tx_type tx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bna_llport_stop(&(port)->llport);\
-	bna_tx_mod_stop(&(port)->bna->tx_mod, tx_type);\
-	bna_rx_mod_stop(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define bna_port_chld_fail(port)\
-do {\
-	bna_llport_fail(&(port)->llport);\
-	bna_tx_mod_fail(&(port)->bna->tx_mod);\
-	bna_rx_mod_fail(&(port)->bna->rx_mod);\
-} while (0)
-
-#define bna_port_rx_start(port)\
-do {\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bna_rx_mod_start(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define bna_port_rx_stop(port)\
-do {\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bna_rx_mod_stop(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define call_port_stop_cbfn(port, status)\
-do {\
-	if ((port)->stop_cbfn)\
-		(port)->stop_cbfn((port)->stop_cbarg, status);\
-	(port)->stop_cbfn = NULL;\
-	(port)->stop_cbarg = NULL;\
-} while (0)
-
-#define call_port_pause_cbfn(port, status)\
-do {\
-	if ((port)->pause_cbfn)\
-		(port)->pause_cbfn((port)->bna->bnad, status);\
-	(port)->pause_cbfn = NULL;\
-} while (0)
-
-#define call_port_mtu_cbfn(port, status)\
-do {\
-	if ((port)->mtu_cbfn)\
-		(port)->mtu_cbfn((port)->bna->bnad, status);\
-	(port)->mtu_cbfn = NULL;\
-} while (0)
-
-static void bna_fw_pause_set(struct bna_port *port);
-static void bna_fw_cb_pause_set(void *arg, int status);
-static void bna_fw_mtu_set(struct bna_port *port);
-static void bna_fw_cb_mtu_set(void *arg, int status);
-
-enum bna_port_event {
-	PORT_E_START			= 1,
-	PORT_E_STOP			= 2,
-	PORT_E_FAIL			= 3,
-	PORT_E_PAUSE_CFG		= 4,
-	PORT_E_MTU_CFG			= 5,
-	PORT_E_CHLD_STOPPED		= 6,
-	PORT_E_FWRESP_PAUSE		= 7,
-	PORT_E_FWRESP_MTU		= 8
-};
-
-enum bna_port_state {
-	BNA_PORT_STOPPED		= 1,
-	BNA_PORT_MTU_INIT_WAIT		= 2,
-	BNA_PORT_PAUSE_INIT_WAIT	= 3,
-	BNA_PORT_LAST_RESP_WAIT		= 4,
-	BNA_PORT_STARTED		= 5,
-	BNA_PORT_PAUSE_CFG_WAIT		= 6,
-	BNA_PORT_RX_STOP_WAIT		= 7,
-	BNA_PORT_MTU_CFG_WAIT 		= 8,
-	BNA_PORT_CHLD_STOP_WAIT		= 9
-};
-
-bfa_fsm_state_decl(bna_port, stopped, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, mtu_init_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, pause_init_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, last_resp_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, started, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, pause_cfg_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, rx_stop_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, mtu_cfg_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, chld_stop_wait, struct bna_port,
-			enum bna_port_event);
-
-static struct bfa_sm_table port_sm_table[] = {
-	{BFA_SM(bna_port_sm_stopped), BNA_PORT_STOPPED},
-	{BFA_SM(bna_port_sm_mtu_init_wait), BNA_PORT_MTU_INIT_WAIT},
-	{BFA_SM(bna_port_sm_pause_init_wait), BNA_PORT_PAUSE_INIT_WAIT},
-	{BFA_SM(bna_port_sm_last_resp_wait), BNA_PORT_LAST_RESP_WAIT},
-	{BFA_SM(bna_port_sm_started), BNA_PORT_STARTED},
-	{BFA_SM(bna_port_sm_pause_cfg_wait), BNA_PORT_PAUSE_CFG_WAIT},
-	{BFA_SM(bna_port_sm_rx_stop_wait), BNA_PORT_RX_STOP_WAIT},
-	{BFA_SM(bna_port_sm_mtu_cfg_wait), BNA_PORT_MTU_CFG_WAIT},
-	{BFA_SM(bna_port_sm_chld_stop_wait), BNA_PORT_CHLD_STOP_WAIT}
-};
-
-static void
-bna_port_sm_stopped_entry(struct bna_port *port)
-{
-	call_port_pause_cbfn(port, BNA_CB_SUCCESS);
-	call_port_mtu_cbfn(port, BNA_CB_SUCCESS);
-	call_port_stop_cbfn(port, BNA_CB_SUCCESS);
-}
-
-static void
-bna_port_sm_stopped(struct bna_port *port, enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_START:
-		bfa_fsm_set_state(port, bna_port_sm_mtu_init_wait);
-		break;
-
-	case PORT_E_STOP:
-		call_port_stop_cbfn(port, BNA_CB_SUCCESS);
-		break;
-
-	case PORT_E_FAIL:
-		/* No-op */
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		call_port_pause_cbfn(port, BNA_CB_SUCCESS);
-		break;
-
-	case PORT_E_MTU_CFG:
-		call_port_mtu_cbfn(port, BNA_CB_SUCCESS);
-		break;
-
-	case PORT_E_CHLD_STOPPED:
-		/**
-		 * This event is received due to LLPort, Tx and Rx objects
-		 * failing
-		 */
-		/* No-op */
-		break;
-
-	case PORT_E_FWRESP_PAUSE:
-	case PORT_E_FWRESP_MTU:
-		/**
-		 * These events are received due to flushing of mbox when
-		 * device fails
-		 */
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_mtu_init_wait_entry(struct bna_port *port)
-{
-	bna_fw_mtu_set(port);
-}
-
-static void
-bna_port_sm_mtu_init_wait(struct bna_port *port, enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_STOP:
-		bfa_fsm_set_state(port, bna_port_sm_last_resp_wait);
-		break;
-
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		/* No-op */
-		break;
-
-	case PORT_E_MTU_CFG:
-		port->flags |= BNA_PORT_F_MTU_CHANGED;
-		break;
-
-	case PORT_E_FWRESP_MTU:
-		if (port->flags & BNA_PORT_F_MTU_CHANGED) {
-			port->flags &= ~BNA_PORT_F_MTU_CHANGED;
-			bna_fw_mtu_set(port);
-		} else {
-			bfa_fsm_set_state(port, bna_port_sm_pause_init_wait);
-		}
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_pause_init_wait_entry(struct bna_port *port)
-{
-	bna_fw_pause_set(port);
-}
-
-static void
-bna_port_sm_pause_init_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_STOP:
-		bfa_fsm_set_state(port, bna_port_sm_last_resp_wait);
-		break;
-
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		port->flags |= BNA_PORT_F_PAUSE_CHANGED;
-		break;
-
-	case PORT_E_MTU_CFG:
-		port->flags |= BNA_PORT_F_MTU_CHANGED;
-		break;
-
-	case PORT_E_FWRESP_PAUSE:
-		if (port->flags & BNA_PORT_F_PAUSE_CHANGED) {
-			port->flags &= ~BNA_PORT_F_PAUSE_CHANGED;
-			bna_fw_pause_set(port);
-		} else if (port->flags & BNA_PORT_F_MTU_CHANGED) {
-			port->flags &= ~BNA_PORT_F_MTU_CHANGED;
-			bfa_fsm_set_state(port, bna_port_sm_mtu_init_wait);
-		} else {
-			bfa_fsm_set_state(port, bna_port_sm_started);
-			bna_port_chld_start(port);
-		}
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_last_resp_wait_entry(struct bna_port *port)
-{
-}
-
-static void
-bna_port_sm_last_resp_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-	case PORT_E_FWRESP_PAUSE:
-	case PORT_E_FWRESP_MTU:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_started_entry(struct bna_port *port)
-{
-	/**
-	 * NOTE: Do not call bna_port_chld_start() here, since it will be
-	 * inadvertently called during pause_cfg_wait->started transition
-	 * as well
-	 */
-	call_port_pause_cbfn(port, BNA_CB_SUCCESS);
-	call_port_mtu_cbfn(port, BNA_CB_SUCCESS);
-}
-
-static void
-bna_port_sm_started(struct bna_port *port,
-			enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_STOP:
-		bfa_fsm_set_state(port, bna_port_sm_chld_stop_wait);
-		break;
-
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		bfa_fsm_set_state(port, bna_port_sm_pause_cfg_wait);
-		break;
-
-	case PORT_E_MTU_CFG:
-		bfa_fsm_set_state(port, bna_port_sm_rx_stop_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_pause_cfg_wait_entry(struct bna_port *port)
-{
-	bna_fw_pause_set(port);
-}
-
-static void
-bna_port_sm_pause_cfg_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_FWRESP_PAUSE:
-		bfa_fsm_set_state(port, bna_port_sm_started);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_rx_stop_wait_entry(struct bna_port *port)
-{
-	bna_port_rx_stop(port);
-}
-
-static void
-bna_port_sm_rx_stop_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_CHLD_STOPPED:
-		bfa_fsm_set_state(port, bna_port_sm_mtu_cfg_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_mtu_cfg_wait_entry(struct bna_port *port)
-{
-	bna_fw_mtu_set(port);
-}
-
-static void
-bna_port_sm_mtu_cfg_wait(struct bna_port *port, enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_FWRESP_MTU:
-		bfa_fsm_set_state(port, bna_port_sm_started);
-		bna_port_rx_start(port);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_chld_stop_wait_entry(struct bna_port *port)
-{
-	bna_port_chld_stop(port);
-}
-
-static void
-bna_port_sm_chld_stop_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_CHLD_STOPPED:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_fw_pause_set(struct bna_port *port)
-{
-	struct bfi_ll_set_pause_req ll_req;
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_SET_PAUSE_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-
-	ll_req.tx_pause = port->pause_config.tx_pause;
-	ll_req.rx_pause = port->pause_config.rx_pause;
-
-	bna_mbox_qe_fill(&port->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_fw_cb_pause_set, port);
-
-	bna_mbox_send(port->bna, &port->mbox_qe);
-}
-
-static void
-bna_fw_cb_pause_set(void *arg, int status)
-{
-	struct bna_port *port = (struct bna_port *)arg;
-
-	bfa_q_qe_init(&port->mbox_qe.qe);
-	bfa_fsm_send_event(port, PORT_E_FWRESP_PAUSE);
-}
-
-void
-bna_fw_mtu_set(struct bna_port *port)
-{
-	struct bfi_ll_mtu_info_req ll_req;
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_MTU_INFO_REQ, 0);
-	ll_req.mtu = htons((u16)port->mtu);
-
-	bna_mbox_qe_fill(&port->mbox_qe, &ll_req, sizeof(ll_req),
-				bna_fw_cb_mtu_set, port);
-	bna_mbox_send(port->bna, &port->mbox_qe);
-}
-
-void
-bna_fw_cb_mtu_set(void *arg, int status)
-{
-	struct bna_port *port = (struct bna_port *)arg;
-
-	bfa_q_qe_init(&port->mbox_qe.qe);
-	bfa_fsm_send_event(port, PORT_E_FWRESP_MTU);
-}
-
-static void
-bna_port_cb_chld_stopped(void *arg)
-{
-	struct bna_port *port = (struct bna_port *)arg;
-
-	bfa_fsm_send_event(port, PORT_E_CHLD_STOPPED);
-}
-
-static void
-bna_port_init(struct bna_port *port, struct bna *bna)
-{
-	port->bna = bna;
-	port->flags = 0;
-	port->mtu = 0;
-	port->type = BNA_PORT_T_REGULAR;
-
-	port->link_cbfn = bnad_cb_port_link_status;
-
-	port->chld_stop_wc.wc_resume = bna_port_cb_chld_stopped;
-	port->chld_stop_wc.wc_cbarg = port;
-	port->chld_stop_wc.wc_count = 0;
-
-	port->stop_cbfn = NULL;
-	port->stop_cbarg = NULL;
-
-	port->pause_cbfn = NULL;
-
-	port->mtu_cbfn = NULL;
-
-	bfa_q_qe_init(&port->mbox_qe.qe);
-
-	bfa_fsm_set_state(port, bna_port_sm_stopped);
-
-	bna_llport_init(&port->llport, bna);
-}
-
-static void
-bna_port_uninit(struct bna_port *port)
-{
-	bna_llport_uninit(&port->llport);
-
-	port->flags = 0;
-
-	port->bna = NULL;
-}
-
-static int
-bna_port_state_get(struct bna_port *port)
-{
-	return bfa_sm_to_state(port_sm_table, port->fsm);
-}
-
-static void
-bna_port_start(struct bna_port *port)
-{
-	port->flags |= BNA_PORT_F_DEVICE_READY;
-	if (port->flags & BNA_PORT_F_ENABLED)
-		bfa_fsm_send_event(port, PORT_E_START);
-}
-
-static void
-bna_port_stop(struct bna_port *port)
-{
-	port->stop_cbfn = bna_device_cb_port_stopped;
-	port->stop_cbarg = &port->bna->device;
-
-	port->flags &= ~BNA_PORT_F_DEVICE_READY;
-	bfa_fsm_send_event(port, PORT_E_STOP);
-}
-
-static void
-bna_port_fail(struct bna_port *port)
-{
-	port->flags &= ~BNA_PORT_F_DEVICE_READY;
-	bfa_fsm_send_event(port, PORT_E_FAIL);
-}
-
-void
-bna_port_cb_tx_stopped(struct bna_port *port, enum bna_cb_status status)
-{
-	bfa_wc_down(&port->chld_stop_wc);
-}
-
-void
-bna_port_cb_rx_stopped(struct bna_port *port, enum bna_cb_status status)
-{
-	bfa_wc_down(&port->chld_stop_wc);
-}
-
-int
-bna_port_mtu_get(struct bna_port *port)
-{
-	return port->mtu;
-}
-
-void
-bna_port_enable(struct bna_port *port)
-{
-	if (port->fsm != (bfa_sm_t)bna_port_sm_stopped)
-		return;
-
-	port->flags |= BNA_PORT_F_ENABLED;
-
-	if (port->flags & BNA_PORT_F_DEVICE_READY)
-		bfa_fsm_send_event(port, PORT_E_START);
-}
-
-void
-bna_port_disable(struct bna_port *port, enum bna_cleanup_type type,
-		 void (*cbfn)(void *, enum bna_cb_status))
-{
-	if (type == BNA_SOFT_CLEANUP) {
-		(*cbfn)(port->bna->bnad, BNA_CB_SUCCESS);
-		return;
-	}
-
-	port->stop_cbfn = cbfn;
-	port->stop_cbarg = port->bna->bnad;
-
-	port->flags &= ~BNA_PORT_F_ENABLED;
-
-	bfa_fsm_send_event(port, PORT_E_STOP);
-}
-
-void
-bna_port_pause_config(struct bna_port *port,
-		      struct bna_pause_config *pause_config,
-		      void (*cbfn)(struct bnad *, enum bna_cb_status))
-{
-	port->pause_config = *pause_config;
-
-	port->pause_cbfn = cbfn;
-
-	bfa_fsm_send_event(port, PORT_E_PAUSE_CFG);
-}
-
-void
-bna_port_mtu_set(struct bna_port *port, int mtu,
-		 void (*cbfn)(struct bnad *, enum bna_cb_status))
-{
-	port->mtu = mtu;
-
-	port->mtu_cbfn = cbfn;
-
-	bfa_fsm_send_event(port, PORT_E_MTU_CFG);
-}
-
-void
-bna_port_mac_get(struct bna_port *port, mac_t *mac)
-{
-	*mac = bfa_nw_ioc_get_mac(&port->bna->device.ioc);
-}
-
-/**
- * DEVICE
- */
-#define enable_mbox_intr(_device)\
-do {\
-	u32 intr_status;\
-	bna_intr_status_get((_device)->bna, intr_status);\
-	bnad_cb_device_enable_mbox_intr((_device)->bna->bnad);\
-	bna_mbox_intr_enable((_device)->bna);\
-} while (0)
-
-#define disable_mbox_intr(_device)\
-do {\
-	bna_mbox_intr_disable((_device)->bna);\
-	bnad_cb_device_disable_mbox_intr((_device)->bna->bnad);\
-} while (0)
-
-static const struct bna_chip_regs_offset reg_offset[] =
-{{HOST_PAGE_NUM_FN0, HOSTFN0_INT_STATUS,
-	HOSTFN0_INT_MASK, HOST_MSIX_ERR_INDEX_FN0},
-{HOST_PAGE_NUM_FN1, HOSTFN1_INT_STATUS,
-	HOSTFN1_INT_MASK, HOST_MSIX_ERR_INDEX_FN1},
-{HOST_PAGE_NUM_FN2, HOSTFN2_INT_STATUS,
-	HOSTFN2_INT_MASK, HOST_MSIX_ERR_INDEX_FN2},
-{HOST_PAGE_NUM_FN3, HOSTFN3_INT_STATUS,
-	HOSTFN3_INT_MASK, HOST_MSIX_ERR_INDEX_FN3},
-};
-
-enum bna_device_event {
-	DEVICE_E_ENABLE			= 1,
-	DEVICE_E_DISABLE		= 2,
-	DEVICE_E_IOC_READY		= 3,
-	DEVICE_E_IOC_FAILED		= 4,
-	DEVICE_E_IOC_DISABLED		= 5,
-	DEVICE_E_IOC_RESET		= 6,
-	DEVICE_E_PORT_STOPPED		= 7,
-};
-
-enum bna_device_state {
-	BNA_DEVICE_STOPPED		= 1,
-	BNA_DEVICE_IOC_READY_WAIT 	= 2,
-	BNA_DEVICE_READY		= 3,
-	BNA_DEVICE_PORT_STOP_WAIT 	= 4,
-	BNA_DEVICE_IOC_DISABLE_WAIT 	= 5,
-	BNA_DEVICE_FAILED		= 6
-};
-
-bfa_fsm_state_decl(bna_device, stopped, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, ioc_ready_wait, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, ready, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, port_stop_wait, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, ioc_disable_wait, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, failed, struct bna_device,
-			enum bna_device_event);
-
-static struct bfa_sm_table device_sm_table[] = {
-	{BFA_SM(bna_device_sm_stopped), BNA_DEVICE_STOPPED},
-	{BFA_SM(bna_device_sm_ioc_ready_wait), BNA_DEVICE_IOC_READY_WAIT},
-	{BFA_SM(bna_device_sm_ready), BNA_DEVICE_READY},
-	{BFA_SM(bna_device_sm_port_stop_wait), BNA_DEVICE_PORT_STOP_WAIT},
-	{BFA_SM(bna_device_sm_ioc_disable_wait), BNA_DEVICE_IOC_DISABLE_WAIT},
-	{BFA_SM(bna_device_sm_failed), BNA_DEVICE_FAILED},
-};
-
-static void
-bna_device_sm_stopped_entry(struct bna_device *device)
-{
-	if (device->stop_cbfn)
-		device->stop_cbfn(device->stop_cbarg, BNA_CB_SUCCESS);
-
-	device->stop_cbfn = NULL;
-	device->stop_cbarg = NULL;
-}
-
-static void
-bna_device_sm_stopped(struct bna_device *device,
-			enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_ENABLE:
-		if (device->intr_type == BNA_INTR_T_MSIX)
-			bna_mbox_msix_idx_set(device);
-		bfa_nw_ioc_enable(&device->ioc);
-		bfa_fsm_set_state(device, bna_device_sm_ioc_ready_wait);
-		break;
-
-	case DEVICE_E_DISABLE:
-		bfa_fsm_set_state(device, bna_device_sm_stopped);
-		break;
-
-	case DEVICE_E_IOC_RESET:
-		enable_mbox_intr(device);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		bfa_fsm_set_state(device, bna_device_sm_failed);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_ioc_ready_wait_entry(struct bna_device *device)
-{
-	/**
-	 * Do not call bfa_ioc_enable() here. It must be called in the
-	 * previous state due to failed -> ioc_ready_wait transition.
-	 */
-}
-
-static void
-bna_device_sm_ioc_ready_wait(struct bna_device *device,
-				enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_DISABLE:
-		if (device->ready_cbfn)
-			device->ready_cbfn(device->ready_cbarg,
-						BNA_CB_INTERRUPT);
-		device->ready_cbfn = NULL;
-		device->ready_cbarg = NULL;
-		bfa_fsm_set_state(device, bna_device_sm_ioc_disable_wait);
-		break;
-
-	case DEVICE_E_IOC_READY:
-		bfa_fsm_set_state(device, bna_device_sm_ready);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		bfa_fsm_set_state(device, bna_device_sm_failed);
-		break;
-
-	case DEVICE_E_IOC_RESET:
-		enable_mbox_intr(device);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_ready_entry(struct bna_device *device)
-{
-	bna_mbox_mod_start(&device->bna->mbox_mod);
-	bna_port_start(&device->bna->port);
-
-	if (device->ready_cbfn)
-		device->ready_cbfn(device->ready_cbarg,
-					BNA_CB_SUCCESS);
-	device->ready_cbfn = NULL;
-	device->ready_cbarg = NULL;
-}
-
-static void
-bna_device_sm_ready(struct bna_device *device, enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_DISABLE:
-		bfa_fsm_set_state(device, bna_device_sm_port_stop_wait);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		bfa_fsm_set_state(device, bna_device_sm_failed);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_port_stop_wait_entry(struct bna_device *device)
-{
-	bna_port_stop(&device->bna->port);
-}
-
-static void
-bna_device_sm_port_stop_wait(struct bna_device *device,
-				enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_PORT_STOPPED:
-		bna_mbox_mod_stop(&device->bna->mbox_mod);
-		bfa_fsm_set_state(device, bna_device_sm_ioc_disable_wait);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		disable_mbox_intr(device);
-		bna_port_fail(&device->bna->port);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_ioc_disable_wait_entry(struct bna_device *device)
-{
-	bfa_nw_ioc_disable(&device->ioc);
-}
-
-static void
-bna_device_sm_ioc_disable_wait(struct bna_device *device,
-				enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_IOC_DISABLED:
-		disable_mbox_intr(device);
-		bfa_fsm_set_state(device, bna_device_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_failed_entry(struct bna_device *device)
-{
-	disable_mbox_intr(device);
-	bna_port_fail(&device->bna->port);
-	bna_mbox_mod_stop(&device->bna->mbox_mod);
-
-	if (device->ready_cbfn)
-		device->ready_cbfn(device->ready_cbarg,
-					BNA_CB_FAIL);
-	device->ready_cbfn = NULL;
-	device->ready_cbarg = NULL;
-}
-
-static void
-bna_device_sm_failed(struct bna_device *device,
-			enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_DISABLE:
-		bfa_fsm_set_state(device, bna_device_sm_ioc_disable_wait);
-		break;
-
-	case DEVICE_E_IOC_RESET:
-		enable_mbox_intr(device);
-		bfa_fsm_set_state(device, bna_device_sm_ioc_ready_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-/* IOC callback functions */
-
-static void
-bna_device_cb_iocll_ready(void *dev, enum bfa_status error)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	if (error)
-		bfa_fsm_send_event(device, DEVICE_E_IOC_FAILED);
-	else
-		bfa_fsm_send_event(device, DEVICE_E_IOC_READY);
-}
-
-static void
-bna_device_cb_iocll_disabled(void *dev)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	bfa_fsm_send_event(device, DEVICE_E_IOC_DISABLED);
-}
-
-static void
-bna_device_cb_iocll_failed(void *dev)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	bfa_fsm_send_event(device, DEVICE_E_IOC_FAILED);
-}
-
-static void
-bna_device_cb_iocll_reset(void *dev)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	bfa_fsm_send_event(device, DEVICE_E_IOC_RESET);
-}
-
-static struct bfa_ioc_cbfn bfa_iocll_cbfn = {
-	bna_device_cb_iocll_ready,
-	bna_device_cb_iocll_disabled,
-	bna_device_cb_iocll_failed,
-	bna_device_cb_iocll_reset
-};
-
-/* device */
-static void
-bna_adv_device_init(struct bna_device *device, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	u8 *kva;
-	u64 dma;
-
-	device->bna = bna;
-
-	kva = res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mdl[0].kva;
-
-	/**
-	 * Attach common modules (Diag, SFP, CEE, Port) and claim respective
-	 * DMA memory.
-	 */
-	BNA_GET_DMA_ADDR(
-		&res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].dma, dma);
-	kva = res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].kva;
-
-	bfa_nw_cee_attach(&bna->cee, &device->ioc, bna);
-	bfa_nw_cee_mem_claim(&bna->cee, kva, dma);
-	kva += bfa_nw_cee_meminfo();
-	dma += bfa_nw_cee_meminfo();
-
-}
-
-static void
-bna_device_init(struct bna_device *device, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	u64 dma;
-
-	device->bna = bna;
-
-	/**
-	 * Attach IOC and claim:
-	 *	1. DMA memory for IOC attributes
-	 *	2. Kernel memory for FW trace
-	 */
-	bfa_nw_ioc_attach(&device->ioc, device, &bfa_iocll_cbfn);
-	bfa_nw_ioc_pci_init(&device->ioc, &bna->pcidev, BFI_MC_LL);
-
-	BNA_GET_DMA_ADDR(
-		&res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].dma, dma);
-	bfa_nw_ioc_mem_claim(&device->ioc,
-		res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].kva,
-			  dma);
-
-	bna_adv_device_init(device, bna, res_info);
-	/*
-	 * Initialize mbox_mod only after IOC, so that mbox handler
-	 * registration goes through
-	 */
-	device->intr_type =
-		res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.intr_type;
-	device->vector =
-		res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.idl[0].vector;
-	bna_mbox_mod_init(&bna->mbox_mod, bna);
-
-	device->ready_cbfn = device->stop_cbfn = NULL;
-	device->ready_cbarg = device->stop_cbarg = NULL;
-
-	bfa_fsm_set_state(device, bna_device_sm_stopped);
-}
-
-static void
-bna_device_uninit(struct bna_device *device)
-{
-	bna_mbox_mod_uninit(&device->bna->mbox_mod);
-
-	bfa_nw_ioc_detach(&device->ioc);
-
-	device->bna = NULL;
-}
-
-static void
-bna_device_cb_port_stopped(void *arg, enum bna_cb_status status)
-{
-	struct bna_device *device = (struct bna_device *)arg;
-
-	bfa_fsm_send_event(device, DEVICE_E_PORT_STOPPED);
-}
-
-static int
-bna_device_status_get(struct bna_device *device)
-{
-	return device->fsm == (bfa_fsm_t)bna_device_sm_ready;
-}
-
-void
-bna_device_enable(struct bna_device *device)
-{
-	if (device->fsm != (bfa_fsm_t)bna_device_sm_stopped) {
-		bnad_cb_device_enabled(device->bna->bnad, BNA_CB_BUSY);
-		return;
-	}
-
-	device->ready_cbfn = bnad_cb_device_enabled;
-	device->ready_cbarg = device->bna->bnad;
-
-	bfa_fsm_send_event(device, DEVICE_E_ENABLE);
-}
-
-void
-bna_device_disable(struct bna_device *device, enum bna_cleanup_type type)
-{
-	if (type == BNA_SOFT_CLEANUP) {
-		bnad_cb_device_disabled(device->bna->bnad, BNA_CB_SUCCESS);
-		return;
-	}
-
-	device->stop_cbfn = bnad_cb_device_disabled;
-	device->stop_cbarg = device->bna->bnad;
-
-	bfa_fsm_send_event(device, DEVICE_E_DISABLE);
-}
-
-static int
-bna_device_state_get(struct bna_device *device)
-{
-	return bfa_sm_to_state(device_sm_table, device->fsm);
-}
-
-const u32 bna_napi_dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX] = {
-	{12, 12},
-	{6, 10},
-	{5, 10},
-	{4, 8},
-	{3, 6},
-	{3, 6},
-	{2, 4},
-	{1, 2},
-};
-
-/* utils */
-
-static void
-bna_adv_res_req(struct bna_res_info *res_info)
-{
-	/* DMA memory for COMMON_MODULE */
-	res_info[BNA_RES_MEM_T_COM].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.len = ALIGN(
-				bfa_nw_cee_meminfo(), PAGE_SIZE);
-
-	/* Virtual memory for retrieving fw_trc */
-	res_info[BNA_RES_MEM_T_FWTRC].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.num = 0;
-	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.len = 0;
-
-	/* DMA memory for retrieving stats */
-	res_info[BNA_RES_MEM_T_STATS].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.len =
-				ALIGN(BFI_HW_STATS_SIZE, PAGE_SIZE);
-
-	/* Virtual memory for soft stats */
-	res_info[BNA_RES_MEM_T_SWSTATS].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.len =
-				sizeof(struct bna_sw_stats);
-}
-
-static void
-bna_sw_stats_get(struct bna *bna, struct bna_sw_stats *sw_stats)
-{
-	struct bna_tx *tx;
-	struct bna_txq *txq;
-	struct bna_rx *rx;
-	struct bna_rxp *rxp;
-	struct list_head *qe;
-	struct list_head *txq_qe;
-	struct list_head *rxp_qe;
-	struct list_head *mac_qe;
-	int i;
-
-	sw_stats->device_state = bna_device_state_get(&bna->device);
-	sw_stats->port_state = bna_port_state_get(&bna->port);
-	sw_stats->port_flags = bna->port.flags;
-	sw_stats->llport_state = bna_llport_state_get(&bna->port.llport);
-	sw_stats->priority = bna->port.priority;
-
-	i = 0;
-	list_for_each(qe, &bna->tx_mod.tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		sw_stats->tx_stats[i].tx_state = bna_tx_state_get(tx);
-		sw_stats->tx_stats[i].tx_flags = tx->flags;
-
-		sw_stats->tx_stats[i].num_txqs = 0;
-		sw_stats->tx_stats[i].txq_bmap[0] = 0;
-		sw_stats->tx_stats[i].txq_bmap[1] = 0;
-		list_for_each(txq_qe, &tx->txq_q) {
-			txq = (struct bna_txq *)txq_qe;
-			if (txq->txq_id < 32)
-				sw_stats->tx_stats[i].txq_bmap[0] |=
-						((u32)1 << txq->txq_id);
-			else
-				sw_stats->tx_stats[i].txq_bmap[1] |=
-						((u32)1 << (txq->txq_id - 32));
-			sw_stats->tx_stats[i].num_txqs++;
-		}
-
-		sw_stats->tx_stats[i].txf_id = tx->txf.txf_id;
-
-		i++;
-	}
-	sw_stats->num_active_tx = i;
-
-	i = 0;
-	list_for_each(qe, &bna->rx_mod.rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		sw_stats->rx_stats[i].rx_state = bna_rx_state_get(rx);
-		sw_stats->rx_stats[i].rx_flags = rx->rx_flags;
-
-		sw_stats->rx_stats[i].num_rxps = 0;
-		sw_stats->rx_stats[i].num_rxqs = 0;
-		sw_stats->rx_stats[i].rxq_bmap[0] = 0;
-		sw_stats->rx_stats[i].rxq_bmap[1] = 0;
-		sw_stats->rx_stats[i].cq_bmap[0] = 0;
-		sw_stats->rx_stats[i].cq_bmap[1] = 0;
-		list_for_each(rxp_qe, &rx->rxp_q) {
-			rxp = (struct bna_rxp *)rxp_qe;
-
-			sw_stats->rx_stats[i].num_rxqs += 1;
-
-			if (rxp->type == BNA_RXP_SINGLE) {
-				if (rxp->rxq.single.only->rxq_id < 32) {
-					sw_stats->rx_stats[i].rxq_bmap[0] |=
-					((u32)1 <<
-					rxp->rxq.single.only->rxq_id);
-				} else {
-					sw_stats->rx_stats[i].rxq_bmap[1] |=
-					((u32)1 <<
-					(rxp->rxq.single.only->rxq_id - 32));
-				}
-			} else {
-				if (rxp->rxq.slr.large->rxq_id < 32) {
-					sw_stats->rx_stats[i].rxq_bmap[0] |=
-					((u32)1 <<
-					rxp->rxq.slr.large->rxq_id);
-				} else {
-					sw_stats->rx_stats[i].rxq_bmap[1] |=
-					((u32)1 <<
-					(rxp->rxq.slr.large->rxq_id - 32));
-				}
-
-				if (rxp->rxq.slr.small->rxq_id < 32) {
-					sw_stats->rx_stats[i].rxq_bmap[0] |=
-					((u32)1 <<
-					rxp->rxq.slr.small->rxq_id);
-				} else {
-					sw_stats->rx_stats[i].rxq_bmap[1] |=
-					((u32)1 <<
-					(rxp->rxq.slr.small->rxq_id - 32));
-				}
-				sw_stats->rx_stats[i].num_rxqs += 1;
-			}
-
-			if (rxp->cq.cq_id < 32)
-				sw_stats->rx_stats[i].cq_bmap[0] |=
-					(1 << rxp->cq.cq_id);
-			else
-				sw_stats->rx_stats[i].cq_bmap[1] |=
-					(1 << (rxp->cq.cq_id - 32));
-
-			sw_stats->rx_stats[i].num_rxps++;
-		}
-
-		sw_stats->rx_stats[i].rxf_id = rx->rxf.rxf_id;
-		sw_stats->rx_stats[i].rxf_state = bna_rxf_state_get(&rx->rxf);
-		sw_stats->rx_stats[i].rxf_oper_state = rx->rxf.rxf_oper_state;
-
-		sw_stats->rx_stats[i].num_active_ucast = 0;
-		if (rx->rxf.ucast_active_mac)
-			sw_stats->rx_stats[i].num_active_ucast++;
-		list_for_each(mac_qe, &rx->rxf.ucast_active_q)
-			sw_stats->rx_stats[i].num_active_ucast++;
-
-		sw_stats->rx_stats[i].num_active_mcast = 0;
-		list_for_each(mac_qe, &rx->rxf.mcast_active_q)
-			sw_stats->rx_stats[i].num_active_mcast++;
-
-		sw_stats->rx_stats[i].rxmode_active = rx->rxf.rxmode_active;
-		sw_stats->rx_stats[i].vlan_filter_status =
-						rx->rxf.vlan_filter_status;
-		memcpy(sw_stats->rx_stats[i].vlan_filter_table,
-				rx->rxf.vlan_filter_table,
-				sizeof(u32) * ((BFI_MAX_VLAN + 1) / 32));
-
-		sw_stats->rx_stats[i].rss_status = rx->rxf.rss_status;
-		sw_stats->rx_stats[i].hds_status = rx->rxf.hds_status;
-
-		i++;
-	}
-	sw_stats->num_active_rx = i;
-}
-
-static void
-bna_fw_cb_stats_get(void *arg, int status)
-{
-	struct bna *bna = (struct bna *)arg;
-	u64 *p_stats;
-	int i, count;
-	int rxf_count, txf_count;
-	u64 rxf_bmap, txf_bmap;
-
-	bfa_q_qe_init(&bna->mbox_qe.qe);
-
-	if (status == 0) {
-		p_stats = (u64 *)bna->stats.hw_stats;
-		count = sizeof(struct bfi_ll_stats) / sizeof(u64);
-		for (i = 0; i < count; i++)
-			p_stats[i] = cpu_to_be64(p_stats[i]);
-
-		rxf_count = 0;
-		rxf_bmap = (u64)bna->stats.rxf_bmap[0] |
-			((u64)bna->stats.rxf_bmap[1] << 32);
-		for (i = 0; i < BFI_LL_RXF_ID_MAX; i++)
-			if (rxf_bmap & ((u64)1 << i))
-				rxf_count++;
-
-		txf_count = 0;
-		txf_bmap = (u64)bna->stats.txf_bmap[0] |
-			((u64)bna->stats.txf_bmap[1] << 32);
-		for (i = 0; i < BFI_LL_TXF_ID_MAX; i++)
-			if (txf_bmap & ((u64)1 << i))
-				txf_count++;
-
-		p_stats = (u64 *)&bna->stats.hw_stats->rxf_stats[0] +
-				((rxf_count * sizeof(struct bfi_ll_stats_rxf) +
-				txf_count * sizeof(struct bfi_ll_stats_txf))/
-				sizeof(u64));
-
-		/* Populate the TXF stats from the firmware DMAed copy */
-		for (i = (BFI_LL_TXF_ID_MAX - 1); i >= 0; i--)
-			if (txf_bmap & ((u64)1 << i)) {
-				p_stats -= sizeof(struct bfi_ll_stats_txf)/
-						sizeof(u64);
-				memcpy(&bna->stats.hw_stats->txf_stats[i],
-					p_stats,
-					sizeof(struct bfi_ll_stats_txf));
-			}
-
-		/* Populate the RXF stats from the firmware DMAed copy */
-		for (i = (BFI_LL_RXF_ID_MAX - 1); i >= 0; i--)
-			if (rxf_bmap & ((u64)1 << i)) {
-				p_stats -= sizeof(struct bfi_ll_stats_rxf)/
-						sizeof(u64);
-				memcpy(&bna->stats.hw_stats->rxf_stats[i],
-					p_stats,
-					sizeof(struct bfi_ll_stats_rxf));
-			}
-
-		bna_sw_stats_get(bna, bna->stats.sw_stats);
-		bnad_cb_stats_get(bna->bnad, BNA_CB_SUCCESS, &bna->stats);
-	} else
-		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
-}
-
-static void
-bna_fw_stats_get(struct bna *bna)
-{
-	struct bfi_ll_stats_req ll_req;
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_STATS_GET_REQ, 0);
-	ll_req.stats_mask = htons(BFI_LL_STATS_ALL);
-
-	ll_req.rxf_id_mask[0] = htonl(bna->rx_mod.rxf_bmap[0]);
-	ll_req.rxf_id_mask[1] =	htonl(bna->rx_mod.rxf_bmap[1]);
-	ll_req.txf_id_mask[0] =	htonl(bna->tx_mod.txf_bmap[0]);
-	ll_req.txf_id_mask[1] =	htonl(bna->tx_mod.txf_bmap[1]);
-
-	ll_req.host_buffer.a32.addr_hi = bna->hw_stats_dma.msb;
-	ll_req.host_buffer.a32.addr_lo = bna->hw_stats_dma.lsb;
-
-	bna_mbox_qe_fill(&bna->mbox_qe, &ll_req, sizeof(ll_req),
-				bna_fw_cb_stats_get, bna);
-	bna_mbox_send(bna, &bna->mbox_qe);
-
-	bna->stats.rxf_bmap[0] = bna->rx_mod.rxf_bmap[0];
-	bna->stats.rxf_bmap[1] = bna->rx_mod.rxf_bmap[1];
-	bna->stats.txf_bmap[0] = bna->tx_mod.txf_bmap[0];
-	bna->stats.txf_bmap[1] = bna->tx_mod.txf_bmap[1];
-}
-
-void
-bna_stats_get(struct bna *bna)
-{
-	if (bna_device_status_get(&bna->device))
-		bna_fw_stats_get(bna);
-	else
-		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
-}
-
-/* IB */
-static void
-bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo)
-{
-	ib->ib_config.coalescing_timeo = coalescing_timeo;
-
-	if (ib->start_count)
-		ib->door_bell.doorbell_ack = BNA_DOORBELL_IB_INT_ACK(
-				(u32)ib->ib_config.coalescing_timeo, 0);
-}
-
-/* RxF */
-void
-bna_rxf_adv_init(struct bna_rxf *rxf,
-		struct bna_rx *rx,
-		struct bna_rx_config *q_config)
-{
-	switch (q_config->rxp_type) {
-	case BNA_RXP_SINGLE:
-		/* No-op */
-		break;
-	case BNA_RXP_SLR:
-		rxf->ctrl_flags |= BNA_RXF_CF_SM_LG_RXQ;
-		break;
-	case BNA_RXP_HDS:
-		rxf->hds_cfg.hdr_type = q_config->hds_config.hdr_type;
-		rxf->hds_cfg.header_size =
-				q_config->hds_config.header_size;
-		rxf->forced_offset = 0;
-		break;
-	default:
-		break;
-	}
-
-	if (q_config->rss_status == BNA_STATUS_T_ENABLED) {
-		rxf->ctrl_flags |= BNA_RXF_CF_RSS_ENABLE;
-		rxf->rss_cfg.hash_type = q_config->rss_config.hash_type;
-		rxf->rss_cfg.hash_mask = q_config->rss_config.hash_mask;
-		memcpy(&rxf->rss_cfg.toeplitz_hash_key[0],
-			&q_config->rss_config.toeplitz_hash_key[0],
-			sizeof(rxf->rss_cfg.toeplitz_hash_key));
-	}
-}
-
-static void
-rxf_fltr_mbox_cmd(struct bna_rxf *rxf, u8 cmd, enum bna_status status)
-{
-	struct bfi_ll_rxf_req req;
-
-	bfi_h2i_set(req.mh, BFI_MC_LL, cmd, 0);
-
-	req.rxf_id = rxf->rxf_id;
-	req.enable = status;
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &req, sizeof(req),
-			rxf_cb_cam_fltr_mbox_cmd, rxf);
-
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
-}
-
-int
-rxf_process_packet_filter_ucast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* Add additional MAC entries */
-	if (!list_empty(&rxf->ucast_pending_add_q)) {
-		bfa_q_deq(&rxf->ucast_pending_add_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_ADD_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->ucast_active_q);
-		return 1;
-	}
-
-	/* Delete MAC addresses previously added */
-	if (!list_empty(&rxf->ucast_pending_del_q)) {
-		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_DEL_REQ, mac);
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_process_packet_filter_promisc(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-
-	/* Enable/disable promiscuous mode */
-	if (is_promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move promisc configuration from pending -> active */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active |= BNA_RXMODE_PROMISC;
-
-		/* Disable VLAN filter to allow all VLANs */
-		__rxf_vlan_filter_set(rxf, BNA_STATUS_T_DISABLED);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_ENABLED);
-		return 1;
-	} else if (is_promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move promisc configuration from pending -> active */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-
-		/* Revert VLAN filter */
-		__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_process_packet_filter_allmulti(struct bna_rxf *rxf)
-{
-	/* Enable/disable allmulti mode */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move allmulti configuration from pending -> active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active |= BNA_RXMODE_ALLMULTI;
-
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_ENABLED);
-		return 1;
-	} else if (is_allmulti_disable(rxf->rxmode_pending,
-					rxf->rxmode_pending_bitmask)) {
-		/* move allmulti configuration from pending -> active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_clear_packet_filter_ucast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* 1. delete pending ucast entries */
-	if (!list_empty(&rxf->ucast_pending_del_q)) {
-		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_DEL_REQ, mac);
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-		return 1;
-	}
-
-	/* 2. clear active ucast entries; move them to pending_add_q */
-	if (!list_empty(&rxf->ucast_active_q)) {
-		bfa_q_deq(&rxf->ucast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_DEL_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->ucast_pending_add_q);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_clear_packet_filter_promisc(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-
-	/* 6. Execute pending promisc mode disable command */
-	if (is_promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move promisc configuration from pending -> active */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-
-		/* Revert VLAN filter */
-		__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	/* 7. Clear active promisc mode; move it to pending enable */
-	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
-		/* move promisc configuration from active -> pending */
-		promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-
-		/* Revert VLAN filter */
-		__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_clear_packet_filter_allmulti(struct bna_rxf *rxf)
-{
-	/* 10. Execute pending allmulti mode disable command */
-	if (is_allmulti_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move allmulti configuration from pending -> active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	/* 11. Clear active allmulti mode; move it to pending enable */
-	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
-		/* move allmulti configuration from active -> pending */
-		allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-void
-rxf_reset_packet_filter_ucast(struct bna_rxf *rxf)
-{
-	struct list_head *qe;
-	struct bna_mac *mac;
-
-	/* 1. Move active ucast entries to pending_add_q */
-	while (!list_empty(&rxf->ucast_active_q)) {
-		bfa_q_deq(&rxf->ucast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		list_add_tail(qe, &rxf->ucast_pending_add_q);
-	}
-
-	/* 2. Throw away delete pending ucast entries */
-	while (!list_empty(&rxf->ucast_pending_del_q)) {
-		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-	}
-}
-
-void
-rxf_reset_packet_filter_promisc(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-
-	/* 6. Clear pending promisc mode disable */
-	if (is_promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-	}
-
-	/* 7. Move promisc mode config from active -> pending */
-	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
-		promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-	}
-
-}
-
-void
-rxf_reset_packet_filter_allmulti(struct bna_rxf *rxf)
-{
-	/* 10. Clear pending allmulti mode disable */
-	if (is_allmulti_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-	}
-
-	/* 11. Move allmulti mode config from active -> pending */
-	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
-		allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-	}
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed or not.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_promisc_enable(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-	int ret = 0;
-
-	/* There can not be any pending disable command */
-
-	/* Do nothing if pending enable or already enabled */
-	if (is_promisc_enable(rxf->rxmode_pending,
-			rxf->rxmode_pending_bitmask) ||
-			(rxf->rxmode_active & BNA_RXMODE_PROMISC)) {
-		/* Schedule enable */
-	} else {
-		/* Promisc mode should not be active in the system */
-		promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		bna->rxf_promisc_id = rxf->rxf_id;
-		ret = 1;
-	}
-
-	return ret;
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed or not.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_promisc_disable(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-	int ret = 0;
-
-	/* There can not be any pending disable */
-
-	/* Turn off pending enable command, if any */
-	if (is_promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* Promisc mode should not be active */
-		/* system promisc state should be pending */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		/* Remove the promisc state from the system */
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-
-		/* Schedule disable */
-	} else if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
-		/* Promisc mode should be active in the system */
-		promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		ret = 1;
-
-	/* Do nothing if already disabled */
-	} else {
-	}
-
-	return ret;
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed or not.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_allmulti_enable(struct bna_rxf *rxf)
-{
-	int ret = 0;
-
-	/* There can not be any pending disable command */
-
-	/* Do nothing if pending enable or already enabled */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-			rxf->rxmode_pending_bitmask) ||
-			(rxf->rxmode_active & BNA_RXMODE_ALLMULTI)) {
-		/* Schedule enable */
-	} else {
-		allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		ret = 1;
-	}
-
-	return ret;
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed or not.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_allmulti_disable(struct bna_rxf *rxf)
-{
-	int ret = 0;
-
-	/* There can not be any pending disable */
-
-	/* Turn off pending enable command, if any */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* Allmulti mode should not be active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-
-	/* Schedule disable */
-	} else if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
-		allmulti_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		ret = 1;
-	}
-
-	return ret;
-}
-
-/* RxF <- bnad */
-enum bna_cb_status
-bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode new_mode,
-		enum bna_rxmode bitmask,
-		void (*cbfn)(struct bnad *, struct bna_rx *,
-			     enum bna_cb_status))
-{
-	struct bna_rxf *rxf = &rx->rxf;
-	int need_hw_config = 0;
-
-	/* Process the commands */
-
-	if (is_promisc_enable(new_mode, bitmask)) {
-		/* If promisc mode is already enabled elsewhere in the system */
-		if ((rx->bna->rxf_promisc_id != BFI_MAX_RXF) &&
-			(rx->bna->rxf_promisc_id != rxf->rxf_id))
-			goto err_return;
-		if (rxf_promisc_enable(rxf))
-			need_hw_config = 1;
-	} else if (is_promisc_disable(new_mode, bitmask)) {
-		if (rxf_promisc_disable(rxf))
-			need_hw_config = 1;
-	}
-
-	if (is_allmulti_enable(new_mode, bitmask)) {
-		if (rxf_allmulti_enable(rxf))
-			need_hw_config = 1;
-	} else if (is_allmulti_disable(new_mode, bitmask)) {
-		if (rxf_allmulti_disable(rxf))
-			need_hw_config = 1;
-	}
-
-	/* Trigger h/w if needed */
-
-	if (need_hw_config) {
-		rxf->cam_fltr_cbfn = cbfn;
-		rxf->cam_fltr_cbarg = rx->bna->bnad;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	} else if (cbfn)
-		(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-
-	return BNA_CB_SUCCESS;
-
-err_return:
-	return BNA_CB_FAIL;
-}
-
-/* RxF <- bnad */
-void
-bna_rx_vlanfilter_enable(struct bna_rx *rx)
-{
-	struct bna_rxf *rxf = &rx->rxf;
-
-	if (rxf->vlan_filter_status == BNA_STATUS_T_DISABLED) {
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		rxf->vlan_filter_status = BNA_STATUS_T_ENABLED;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	}
-}
-
-/* Rx */
-
-/* Rx <- bnad */
-void
-bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo)
-{
-	struct bna_rxp *rxp;
-	struct list_head *qe;
-
-	list_for_each(qe, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe;
-		rxp->cq.ccb->rx_coalescing_timeo = coalescing_timeo;
-		bna_ib_coalescing_timeo_set(rxp->cq.ib, coalescing_timeo);
-	}
-}
-
-/* Rx <- bnad */
-void
-bna_rx_dim_reconfig(struct bna *bna, const u32 vector[][BNA_BIAS_T_MAX])
-{
-	int i, j;
-
-	for (i = 0; i < BNA_LOAD_T_MAX; i++)
-		for (j = 0; j < BNA_BIAS_T_MAX; j++)
-			bna->rx_mod.dim_vector[i][j] = vector[i][j];
-}
-
-/* Rx <- bnad */
-void
-bna_rx_dim_update(struct bna_ccb *ccb)
-{
-	struct bna *bna = ccb->cq->rx->bna;
-	u32 load, bias;
-	u32 pkt_rt, small_rt, large_rt;
-	u8 coalescing_timeo;
-
-	if ((ccb->pkt_rate.small_pkt_cnt == 0) &&
-		(ccb->pkt_rate.large_pkt_cnt == 0))
-		return;
-
-	/* Arrive at preconfigured coalescing timeo value based on pkt rate */
-
-	small_rt = ccb->pkt_rate.small_pkt_cnt;
-	large_rt = ccb->pkt_rate.large_pkt_cnt;
-
-	pkt_rt = small_rt + large_rt;
-
-	if (pkt_rt < BNA_PKT_RATE_10K)
-		load = BNA_LOAD_T_LOW_4;
-	else if (pkt_rt < BNA_PKT_RATE_20K)
-		load = BNA_LOAD_T_LOW_3;
-	else if (pkt_rt < BNA_PKT_RATE_30K)
-		load = BNA_LOAD_T_LOW_2;
-	else if (pkt_rt < BNA_PKT_RATE_40K)
-		load = BNA_LOAD_T_LOW_1;
-	else if (pkt_rt < BNA_PKT_RATE_50K)
-		load = BNA_LOAD_T_HIGH_1;
-	else if (pkt_rt < BNA_PKT_RATE_60K)
-		load = BNA_LOAD_T_HIGH_2;
-	else if (pkt_rt < BNA_PKT_RATE_80K)
-		load = BNA_LOAD_T_HIGH_3;
-	else
-		load = BNA_LOAD_T_HIGH_4;
-
-	if (small_rt > (large_rt << 1))
-		bias = 0;
-	else
-		bias = 1;
-
-	ccb->pkt_rate.small_pkt_cnt = 0;
-	ccb->pkt_rate.large_pkt_cnt = 0;
-
-	coalescing_timeo = bna->rx_mod.dim_vector[load][bias];
-	ccb->rx_coalescing_timeo = coalescing_timeo;
-
-	/* Set it to IB */
-	bna_ib_coalescing_timeo_set(ccb->cq->ib, coalescing_timeo);
-}
-
-/* Tx */
-/* TX <- bnad */
-void
-bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo)
-{
-	struct bna_txq *txq;
-	struct list_head *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bna_ib_coalescing_timeo_set(txq->ib, coalescing_timeo);
-	}
-}
-
-/*
- * Private data
- */
-
-struct bna_ritseg_pool_cfg {
-	u32	pool_size;
-	u32	pool_entry_size;
-};
-init_ritseg_pool(ritseg_pool_cfg);
-
-/*
- * Private functions
- */
-static void
-bna_ucam_mod_init(struct bna_ucam_mod *ucam_mod, struct bna *bna,
-		  struct bna_res_info *res_info)
-{
-	int i;
-
-	ucam_mod->ucmac = (struct bna_mac *)
-		res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&ucam_mod->free_q);
-	for (i = 0; i < BFI_MAX_UCMAC; i++) {
-		bfa_q_qe_init(&ucam_mod->ucmac[i].qe);
-		list_add_tail(&ucam_mod->ucmac[i].qe, &ucam_mod->free_q);
-	}
-
-	ucam_mod->bna = bna;
-}
-
-static void
-bna_ucam_mod_uninit(struct bna_ucam_mod *ucam_mod)
-{
-	struct list_head *qe;
-	int i = 0;
-
-	list_for_each(qe, &ucam_mod->free_q)
-		i++;
-
-	ucam_mod->bna = NULL;
-}
-
-static void
-bna_mcam_mod_init(struct bna_mcam_mod *mcam_mod, struct bna *bna,
-		  struct bna_res_info *res_info)
-{
-	int i;
-
-	mcam_mod->mcmac = (struct bna_mac *)
-		res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&mcam_mod->free_q);
-	for (i = 0; i < BFI_MAX_MCMAC; i++) {
-		bfa_q_qe_init(&mcam_mod->mcmac[i].qe);
-		list_add_tail(&mcam_mod->mcmac[i].qe, &mcam_mod->free_q);
-	}
-
-	mcam_mod->bna = bna;
-}
-
-static void
-bna_mcam_mod_uninit(struct bna_mcam_mod *mcam_mod)
-{
-	struct list_head *qe;
-	int i = 0;
-
-	list_for_each(qe, &mcam_mod->free_q)
-		i++;
-
-	mcam_mod->bna = NULL;
-}
-
-static void
-bna_rit_mod_init(struct bna_rit_mod *rit_mod,
-		struct bna_res_info *res_info)
-{
-	int i;
-	int j;
-	int count;
-	int offset;
-
-	rit_mod->rit = (struct bna_rit_entry *)
-		res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.mdl[0].kva;
-	rit_mod->rit_segment = (struct bna_rit_segment *)
-		res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.mdl[0].kva;
-
-	count = 0;
-	offset = 0;
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		INIT_LIST_HEAD(&rit_mod->rit_seg_pool[i]);
-		for (j = 0; j < ritseg_pool_cfg[i].pool_size; j++) {
-			bfa_q_qe_init(&rit_mod->rit_segment[count].qe);
-			rit_mod->rit_segment[count].max_rit_size =
-					ritseg_pool_cfg[i].pool_entry_size;
-			rit_mod->rit_segment[count].rit_offset = offset;
-			rit_mod->rit_segment[count].rit =
-					&rit_mod->rit[offset];
-			list_add_tail(&rit_mod->rit_segment[count].qe,
-				&rit_mod->rit_seg_pool[i]);
-			count++;
-			offset += ritseg_pool_cfg[i].pool_entry_size;
-		}
-	}
-}
-
-/*
- * Public functions
- */
-
-/* Called during probe(), before calling bna_init() */
-void
-bna_res_req(struct bna_res_info *res_info)
-{
-	bna_adv_res_req(res_info);
-
-	/* DMA memory for retrieving IOC attributes */
-	res_info[BNA_RES_MEM_T_ATTR].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.len =
-				ALIGN(bfa_nw_ioc_meminfo(), PAGE_SIZE);
-
-	/* DMA memory for index segment of an IB */
-	res_info[BNA_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.len =
-				BFI_IBIDX_SIZE * BFI_IBIDX_MAX_SEGSIZE;
-	res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.num = BFI_MAX_IB;
-
-	/* Virtual memory for IB objects - stored by IB module */
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.len =
-				BFI_MAX_IB * sizeof(struct bna_ib);
-
-	/* Virtual memory for intr objects - stored by IB module */
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.len =
-				BFI_MAX_IB * sizeof(struct bna_intr);
-
-	/* Virtual memory for idx_seg objects - stored by IB module */
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.len =
-			BFI_IBIDX_TOTAL_SEGS * sizeof(struct bna_ibidx_seg);
-
-	/* Virtual memory for Tx objects - stored by Tx module */
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.len =
-			BFI_MAX_TXQ * sizeof(struct bna_tx);
-
-	/* Virtual memory for TxQ - stored by Tx module */
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.len =
-			BFI_MAX_TXQ * sizeof(struct bna_txq);
-
-	/* Virtual memory for Rx objects - stored by Rx module */
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.len =
-			BFI_MAX_RXQ * sizeof(struct bna_rx);
-
-	/* Virtual memory for RxPath - stored by Rx module */
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.len =
-			BFI_MAX_RXQ * sizeof(struct bna_rxp);
-
-	/* Virtual memory for RxQ - stored by Rx module */
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.len =
-			BFI_MAX_RXQ * sizeof(struct bna_rxq);
-
-	/* Virtual memory for Unicast MAC address - stored by ucam module */
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.len =
-			BFI_MAX_UCMAC * sizeof(struct bna_mac);
-
-	/* Virtual memory for Multicast MAC address - stored by mcam module */
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.len =
-			BFI_MAX_MCMAC * sizeof(struct bna_mac);
-
-	/* Virtual memory for RIT entries */
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.len =
-			BFI_MAX_RIT_SIZE * sizeof(struct bna_rit_entry);
-
-	/* Virtual memory for RIT segment table */
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.len =
-			BFI_RIT_TOTAL_SEGS * sizeof(struct bna_rit_segment);
-
-	/* Interrupt resource for mailbox interrupt */
-	res_info[BNA_RES_INTR_T_MBOX].res_type = BNA_RES_T_INTR;
-	res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.intr_type =
-							BNA_INTR_T_MSIX;
-	res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.num = 1;
-}
-
-/* Called during probe() */
-void
-bna_init(struct bna *bna, struct bnad *bnad, struct bfa_pcidev *pcidev,
-		struct bna_res_info *res_info)
-{
-	bna->bnad = bnad;
-	bna->pcidev = *pcidev;
-
-	bna->stats.hw_stats = (struct bfi_ll_stats *)
-		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].kva;
-	bna->hw_stats_dma.msb =
-		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.msb;
-	bna->hw_stats_dma.lsb =
-		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.lsb;
-	bna->stats.sw_stats = (struct bna_sw_stats *)
-		res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.mdl[0].kva;
-
-	bna->regs.page_addr = bna->pcidev.pci_bar_kva +
-				reg_offset[bna->pcidev.pci_func].page_addr;
-	bna->regs.fn_int_status = bna->pcidev.pci_bar_kva +
-				reg_offset[bna->pcidev.pci_func].fn_int_status;
-	bna->regs.fn_int_mask = bna->pcidev.pci_bar_kva +
-				reg_offset[bna->pcidev.pci_func].fn_int_mask;
-
-	if (bna->pcidev.pci_func < 3)
-		bna->port_num = 0;
-	else
-		bna->port_num = 1;
-
-	/* Also initializes diag, cee, sfp, phy_port and mbox_mod */
-	bna_device_init(&bna->device, bna, res_info);
-
-	bna_port_init(&bna->port, bna);
-
-	bna_tx_mod_init(&bna->tx_mod, bna, res_info);
-
-	bna_rx_mod_init(&bna->rx_mod, bna, res_info);
-
-	bna_ib_mod_init(&bna->ib_mod, bna, res_info);
-
-	bna_rit_mod_init(&bna->rit_mod, res_info);
-
-	bna_ucam_mod_init(&bna->ucam_mod, bna, res_info);
-
-	bna_mcam_mod_init(&bna->mcam_mod, bna, res_info);
-
-	bna->rxf_promisc_id = BFI_MAX_RXF;
-
-	/* Mbox q element for posting stat request to f/w */
-	bfa_q_qe_init(&bna->mbox_qe.qe);
-}
-
-void
-bna_uninit(struct bna *bna)
-{
-	bna_mcam_mod_uninit(&bna->mcam_mod);
-
-	bna_ucam_mod_uninit(&bna->ucam_mod);
-
-	bna_ib_mod_uninit(&bna->ib_mod);
-
-	bna_rx_mod_uninit(&bna->rx_mod);
-
-	bna_tx_mod_uninit(&bna->tx_mod);
-
-	bna_port_uninit(&bna->port);
-
-	bna_device_uninit(&bna->device);
-
-	bna->bnad = NULL;
-}
-
-struct bna_mac *
-bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod)
-{
-	struct list_head *qe;
-
-	if (list_empty(&ucam_mod->free_q))
-		return NULL;
-
-	bfa_q_deq(&ucam_mod->free_q, &qe);
-
-	return (struct bna_mac *)qe;
-}
-
-void
-bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod, struct bna_mac *mac)
-{
-	list_add_tail(&mac->qe, &ucam_mod->free_q);
-}
-
-struct bna_mac *
-bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod)
-{
-	struct list_head *qe;
-
-	if (list_empty(&mcam_mod->free_q))
-		return NULL;
-
-	bfa_q_deq(&mcam_mod->free_q, &qe);
-
-	return (struct bna_mac *)qe;
-}
-
-void
-bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod, struct bna_mac *mac)
-{
-	list_add_tail(&mac->qe, &mcam_mod->free_q);
-}
-
-/**
- * Note: This should be called in the same locking context as the call to
- * bna_rit_mod_seg_get()
- */
-int
-bna_rit_mod_can_satisfy(struct bna_rit_mod *rit_mod, int seg_size)
-{
-	int i;
-
-	/* Select the pool for seg_size */
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		if (seg_size <= ritseg_pool_cfg[i].pool_entry_size)
-			break;
-	}
-
-	if (i == BFI_RIT_SEG_TOTAL_POOLS)
-		return 0;
-
-	if (list_empty(&rit_mod->rit_seg_pool[i]))
-		return 0;
-
-	return 1;
-}
-
-struct bna_rit_segment *
-bna_rit_mod_seg_get(struct bna_rit_mod *rit_mod, int seg_size)
-{
-	struct bna_rit_segment *seg;
-	struct list_head *qe;
-	int i;
-
-	/* Select the pool for seg_size */
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		if (seg_size <= ritseg_pool_cfg[i].pool_entry_size)
-			break;
-	}
-
-	if (i == BFI_RIT_SEG_TOTAL_POOLS)
-		return NULL;
-
-	if (list_empty(&rit_mod->rit_seg_pool[i]))
-		return NULL;
-
-	bfa_q_deq(&rit_mod->rit_seg_pool[i], &qe);
-	seg = (struct bna_rit_segment *)qe;
-	bfa_q_qe_init(&seg->qe);
-	seg->rit_size = seg_size;
-
-	return seg;
-}
-
-void
-bna_rit_mod_seg_put(struct bna_rit_mod *rit_mod,
-			struct bna_rit_segment *seg)
-{
-	int i;
-
-	/* Select the pool for seg->max_rit_size */
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		if (seg->max_rit_size == ritseg_pool_cfg[i].pool_entry_size)
-			break;
-	}
-
-	seg->rit_size = 0;
-	list_add_tail(&seg->qe, &rit_mod->rit_seg_pool[i]);
-}
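
The RIT segment helpers above follow a can_satisfy/get/put pattern; per the
comment on bna_rit_mod_can_satisfy(), the availability check and the
allocation are meant to run under the same lock.  A minimal usage sketch
(caller context and rit_size are illustrative, not taken from the driver):

	if (bna_rit_mod_can_satisfy(&bna->rit_mod, rit_size)) {
		struct bna_rit_segment *seg;

		seg = bna_rit_mod_seg_get(&bna->rit_mod, rit_size);
		/* ... program the RIT entries covered by this segment ... */
		bna_rit_mod_seg_put(&bna->rit_mod, seg);
	}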
diff --git a/drivers/net/bna/bna_hw.h b/drivers/net/bna/bna_hw.h
deleted file mode 100644
index 16a5eed..0000000
--- a/drivers/net/bna/bna_hw.h
+++ /dev/null
@@ -1,1492 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-
-/**
- * File for interrupt macros and functions
- */
-
-#ifndef __BNA_HW_H__
-#define __BNA_HW_H__
-
-#include "bfi_reg.h"
-
-/**
- *
- * SW imposed limits
- *
- */
-
-#ifndef BNA_BIOS_BUILD
-
-#define BFI_MAX_TXQ			64
-#define BFI_MAX_RXQ			64
-#define	BFI_MAX_RXF			64
-#define BFI_MAX_IB			128
-#define	BFI_MAX_RIT_SIZE		256
-#define	BFI_RSS_RIT_SIZE		64
-#define	BFI_NONRSS_RIT_SIZE		1
-#define BFI_MAX_UCMAC			256
-#define BFI_MAX_MCMAC			512
-#define BFI_IBIDX_SIZE			4
-#define BFI_MAX_VLAN			4095
-
-/**
- * There are 3 free IB index pools:
- *	pool1: 116 segments of 1 index each
- *	pool2: 2 segments of 2 indexes each
- *	pool8: 1 segment of 8 indexes
- */
-#define BFI_IBIDX_POOL1_SIZE		116
-#define	BFI_IBIDX_POOL1_ENTRY_SIZE	1
-#define BFI_IBIDX_POOL2_SIZE		2
-#define	BFI_IBIDX_POOL2_ENTRY_SIZE	2
-#define	BFI_IBIDX_POOL8_SIZE		1
-#define	BFI_IBIDX_POOL8_ENTRY_SIZE	8
-#define	BFI_IBIDX_TOTAL_POOLS		3
-#define	BFI_IBIDX_TOTAL_SEGS		119 /* (POOL1 + POOL2 + POOL8)_SIZE */
-#define	BFI_IBIDX_MAX_SEGSIZE		8
-#define init_ibidx_pool(name)						\
-static struct bna_ibidx_pool name[BFI_IBIDX_TOTAL_POOLS] =		\
-{									\
-	{ BFI_IBIDX_POOL1_SIZE, BFI_IBIDX_POOL1_ENTRY_SIZE },		\
-	{ BFI_IBIDX_POOL2_SIZE, BFI_IBIDX_POOL2_ENTRY_SIZE },		\
-	{ BFI_IBIDX_POOL8_SIZE, BFI_IBIDX_POOL8_ENTRY_SIZE }		\
-}
-
-/**
- * There are 2 free RIT segment pools:
- *	Pool1: 192 segments of 1 RIT entry each
- *	Pool2: 1 segment of 64 RIT entries
- */
-#define BFI_RIT_SEG_POOL1_SIZE		192
-#define BFI_RIT_SEG_POOL1_ENTRY_SIZE	1
-#define BFI_RIT_SEG_POOLRSS_SIZE	1
-#define BFI_RIT_SEG_POOLRSS_ENTRY_SIZE	64
-#define BFI_RIT_SEG_TOTAL_POOLS		2
-#define BFI_RIT_TOTAL_SEGS		193 /* POOL1_SIZE + POOLRSS_SIZE */
-#define init_ritseg_pool(name)						\
-static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
-{									\
-	{ BFI_RIT_SEG_POOL1_SIZE, BFI_RIT_SEG_POOL1_ENTRY_SIZE },	\
-	{ BFI_RIT_SEG_POOLRSS_SIZE, BFI_RIT_SEG_POOLRSS_ENTRY_SIZE }	\
-}
-
-#else /* BNA_BIOS_BUILD */
-
-#define BFI_MAX_TXQ			1
-#define BFI_MAX_RXQ			1
-#define	BFI_MAX_RXF			1
-#define BFI_MAX_IB			2
-#define	BFI_MAX_RIT_SIZE		2
-#define	BFI_RSS_RIT_SIZE		64
-#define	BFI_NONRSS_RIT_SIZE		1
-#define BFI_MAX_UCMAC			1
-#define BFI_MAX_MCMAC			8
-#define BFI_IBIDX_SIZE			4
-#define BFI_MAX_VLAN			4095
-/* There is one free pool: 2 segments of 1 index each */
-#define BFI_IBIDX_POOL1_SIZE		2
-#define	BFI_IBIDX_POOL1_ENTRY_SIZE	1
-#define	BFI_IBIDX_TOTAL_POOLS		1
-#define	BFI_IBIDX_TOTAL_SEGS		2 /* POOL1_SIZE */
-#define	BFI_IBIDX_MAX_SEGSIZE		1
-#define init_ibidx_pool(name)						\
-static struct bna_ibidx_pool name[BFI_IBIDX_TOTAL_POOLS] =		\
-{									\
-	{ BFI_IBIDX_POOL1_SIZE, BFI_IBIDX_POOL1_ENTRY_SIZE }		\
-}
-
-#define BFI_RIT_SEG_POOL1_SIZE		1
-#define BFI_RIT_SEG_POOL1_ENTRY_SIZE	1
-#define BFI_RIT_SEG_TOTAL_POOLS		1
-#define BFI_RIT_TOTAL_SEGS		1 /* POOL1_SIZE */
-#define init_ritseg_pool(name)						\
-static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
-{									\
-	{ BFI_RIT_SEG_POOL1_SIZE, BFI_RIT_SEG_POOL1_ENTRY_SIZE }	\
-}
-
-#endif /* BNA_BIOS_BUILD */
-
-#define BFI_RSS_HASH_KEY_LEN		10
-
-#define BFI_COALESCING_TIMER_UNIT	5	/* 5us */
-#define BFI_MAX_COALESCING_TIMEO	0xFF	/* in 5us units */
-#define BFI_MAX_INTERPKT_COUNT		0xFF
-#define BFI_MAX_INTERPKT_TIMEO		0xF	/* in 0.5us units */
-#define BFI_TX_COALESCING_TIMEO		20	/* 20 * 5 = 100us */
-#define BFI_TX_INTERPKT_COUNT		32
-#define	BFI_RX_COALESCING_TIMEO		12	/* 12 * 5 = 60us */
-#define	BFI_RX_INTERPKT_COUNT		6	/* Pkt Cnt = 6 */
-#define	BFI_RX_INTERPKT_TIMEO		3	/* 3 * 0.5 = 1.5us */
-
-#define BFI_TXQ_WI_SIZE			64	/* bytes */
-#define BFI_RXQ_WI_SIZE			8	/* bytes */
-#define BFI_CQ_WI_SIZE			16	/* bytes */
-#define BFI_TX_MAX_WRR_QUOTA		0xFFF
-
-#define BFI_TX_MAX_VECTORS_PER_WI	4
-#define BFI_TX_MAX_VECTORS_PER_PKT	0xFF
-#define BFI_TX_MAX_DATA_PER_VECTOR	0xFFFF
-#define BFI_TX_MAX_DATA_PER_PKT		0xFFFFFF
-
-/* Small Q buffer size */
-#define BFI_SMALL_RXBUF_SIZE		128
-
-/* Defined separately since BFA_FLASH_DMA_BUF_SZ is in bfa_flash.c */
-#define BFI_FLASH_DMA_BUF_SZ		0x010000 /* 64K DMA */
-#define BFI_HW_STATS_SIZE		0x4000 /* 16K DMA */
-
-/**
- *
- * HW register offsets, macros
- *
- */
-
-/* DMA Block Register Host Window Start Address */
-#define DMA_BLK_REG_ADDR		0x00013000
-
-/* DMA Block Internal Registers */
-#define DMA_CTRL_REG0			(DMA_BLK_REG_ADDR + 0x000)
-#define DMA_CTRL_REG1			(DMA_BLK_REG_ADDR + 0x004)
-#define DMA_ERR_INT_STATUS		(DMA_BLK_REG_ADDR + 0x008)
-#define DMA_ERR_INT_ENABLE		(DMA_BLK_REG_ADDR + 0x00c)
-#define DMA_ERR_INT_STATUS_SET		(DMA_BLK_REG_ADDR + 0x010)
-
-/* APP Block Register Address Offset from BAR0 */
-#define APP_BLK_REG_ADDR		0x00014000
-
-/* Host Function Interrupt Mask Registers */
-#define HOSTFN0_INT_MASK		(APP_BLK_REG_ADDR + 0x004)
-#define HOSTFN1_INT_MASK		(APP_BLK_REG_ADDR + 0x104)
-#define HOSTFN2_INT_MASK		(APP_BLK_REG_ADDR + 0x304)
-#define HOSTFN3_INT_MASK		(APP_BLK_REG_ADDR + 0x404)
-
-/**
- * Host Function PCIe Error Registers
- * Duplicates "Correctable" & "Uncorrectable"
- * registers in PCIe Config space.
- */
-#define FN0_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x014)
-#define FN1_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x114)
-#define FN2_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x314)
-#define FN3_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x414)
-
-/* Host Function Error Type Status Registers */
-#define FN0_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x018)
-#define FN1_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x118)
-#define FN2_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x318)
-#define FN3_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x418)
-
-/* Host Function Error Type Mask Registers */
-#define FN0_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x01c)
-#define FN1_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x11c)
-#define FN2_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x31c)
-#define FN3_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x41c)
-
-/* Catapult Host Semaphore Status Registers (App block) */
-#define HOST_SEM_STS0_REG		(APP_BLK_REG_ADDR + 0x630)
-#define HOST_SEM_STS1_REG		(APP_BLK_REG_ADDR + 0x634)
-#define HOST_SEM_STS2_REG		(APP_BLK_REG_ADDR + 0x638)
-#define HOST_SEM_STS3_REG		(APP_BLK_REG_ADDR + 0x63c)
-#define HOST_SEM_STS4_REG		(APP_BLK_REG_ADDR + 0x640)
-#define HOST_SEM_STS5_REG		(APP_BLK_REG_ADDR + 0x644)
-#define HOST_SEM_STS6_REG		(APP_BLK_REG_ADDR + 0x648)
-#define HOST_SEM_STS7_REG		(APP_BLK_REG_ADDR + 0x64c)
-
-/* PCIe Misc Register */
-#define PCIE_MISC_REG			(APP_BLK_REG_ADDR + 0x200)
-
-/* Temp Sensor Control Registers */
-#define TEMPSENSE_CNTL_REG		(APP_BLK_REG_ADDR + 0x250)
-#define TEMPSENSE_STAT_REG		(APP_BLK_REG_ADDR + 0x254)
-
-/* APP Block local error registers */
-#define APP_LOCAL_ERR_STAT		(APP_BLK_REG_ADDR + 0x258)
-#define APP_LOCAL_ERR_MSK		(APP_BLK_REG_ADDR + 0x25c)
-
-/* PCIe Link Error registers */
-#define PCIE_LNK_ERR_STAT		(APP_BLK_REG_ADDR + 0x260)
-#define PCIE_LNK_ERR_MSK		(APP_BLK_REG_ADDR + 0x264)
-
-/**
- * FCoE/FIP Ethertype Register
- * 31:16 -- Chip wide value for FIP type
- * 15:0  -- Chip wide value for FCoE type
- */
-#define FCOE_FIP_ETH_TYPE		(APP_BLK_REG_ADDR + 0x280)
-
-/**
- * Reserved Ethertype Register
- * 31:16 -- Reserved
- * 15:0  -- Other ethertype
- */
-#define RESV_ETH_TYPE			(APP_BLK_REG_ADDR + 0x284)
-
-/**
- * Host Command Status Registers
- * Each set consists of 3 registers :
- * clear, set, cmd
- * 16 such register sets in all
- * See catapult_spec.pdf for detailed functionality
- * Put each type in a single macro accessed by _num ?
- */
-#define HOST_CMDSTS0_CLR_REG		(APP_BLK_REG_ADDR + 0x500)
-#define HOST_CMDSTS0_SET_REG		(APP_BLK_REG_ADDR + 0x504)
-#define HOST_CMDSTS0_REG		(APP_BLK_REG_ADDR + 0x508)
-#define HOST_CMDSTS1_CLR_REG		(APP_BLK_REG_ADDR + 0x510)
-#define HOST_CMDSTS1_SET_REG		(APP_BLK_REG_ADDR + 0x514)
-#define HOST_CMDSTS1_REG		(APP_BLK_REG_ADDR + 0x518)
-#define HOST_CMDSTS2_CLR_REG		(APP_BLK_REG_ADDR + 0x520)
-#define HOST_CMDSTS2_SET_REG		(APP_BLK_REG_ADDR + 0x524)
-#define HOST_CMDSTS2_REG		(APP_BLK_REG_ADDR + 0x528)
-#define HOST_CMDSTS3_CLR_REG		(APP_BLK_REG_ADDR + 0x530)
-#define HOST_CMDSTS3_SET_REG		(APP_BLK_REG_ADDR + 0x534)
-#define HOST_CMDSTS3_REG		(APP_BLK_REG_ADDR + 0x538)
-#define HOST_CMDSTS4_CLR_REG		(APP_BLK_REG_ADDR + 0x540)
-#define HOST_CMDSTS4_SET_REG		(APP_BLK_REG_ADDR + 0x544)
-#define HOST_CMDSTS4_REG		(APP_BLK_REG_ADDR + 0x548)
-#define HOST_CMDSTS5_CLR_REG		(APP_BLK_REG_ADDR + 0x550)
-#define HOST_CMDSTS5_SET_REG		(APP_BLK_REG_ADDR + 0x554)
-#define HOST_CMDSTS5_REG		(APP_BLK_REG_ADDR + 0x558)
-#define HOST_CMDSTS6_CLR_REG		(APP_BLK_REG_ADDR + 0x560)
-#define HOST_CMDSTS6_SET_REG		(APP_BLK_REG_ADDR + 0x564)
-#define HOST_CMDSTS6_REG		(APP_BLK_REG_ADDR + 0x568)
-#define HOST_CMDSTS7_CLR_REG		(APP_BLK_REG_ADDR + 0x570)
-#define HOST_CMDSTS7_SET_REG		(APP_BLK_REG_ADDR + 0x574)
-#define HOST_CMDSTS7_REG		(APP_BLK_REG_ADDR + 0x578)
-#define HOST_CMDSTS8_CLR_REG		(APP_BLK_REG_ADDR + 0x580)
-#define HOST_CMDSTS8_SET_REG		(APP_BLK_REG_ADDR + 0x584)
-#define HOST_CMDSTS8_REG		(APP_BLK_REG_ADDR + 0x588)
-#define HOST_CMDSTS9_CLR_REG		(APP_BLK_REG_ADDR + 0x590)
-#define HOST_CMDSTS9_SET_REG		(APP_BLK_REG_ADDR + 0x594)
-#define HOST_CMDSTS9_REG		(APP_BLK_REG_ADDR + 0x598)
-#define HOST_CMDSTS10_CLR_REG		(APP_BLK_REG_ADDR + 0x5A0)
-#define HOST_CMDSTS10_SET_REG		(APP_BLK_REG_ADDR + 0x5A4)
-#define HOST_CMDSTS10_REG		(APP_BLK_REG_ADDR + 0x5A8)
-#define HOST_CMDSTS11_CLR_REG		(APP_BLK_REG_ADDR + 0x5B0)
-#define HOST_CMDSTS11_SET_REG		(APP_BLK_REG_ADDR + 0x5B4)
-#define HOST_CMDSTS11_REG		(APP_BLK_REG_ADDR + 0x5B8)
-#define HOST_CMDSTS12_CLR_REG		(APP_BLK_REG_ADDR + 0x5C0)
-#define HOST_CMDSTS12_SET_REG		(APP_BLK_REG_ADDR + 0x5C4)
-#define HOST_CMDSTS12_REG		(APP_BLK_REG_ADDR + 0x5C8)
-#define HOST_CMDSTS13_CLR_REG		(APP_BLK_REG_ADDR + 0x5D0)
-#define HOST_CMDSTS13_SET_REG		(APP_BLK_REG_ADDR + 0x5D4)
-#define HOST_CMDSTS13_REG		(APP_BLK_REG_ADDR + 0x5D8)
-#define HOST_CMDSTS14_CLR_REG		(APP_BLK_REG_ADDR + 0x5E0)
-#define HOST_CMDSTS14_SET_REG		(APP_BLK_REG_ADDR + 0x5E4)
-#define HOST_CMDSTS14_REG		(APP_BLK_REG_ADDR + 0x5E8)
-#define HOST_CMDSTS15_CLR_REG		(APP_BLK_REG_ADDR + 0x5F0)
-#define HOST_CMDSTS15_SET_REG		(APP_BLK_REG_ADDR + 0x5F4)
-#define HOST_CMDSTS15_REG		(APP_BLK_REG_ADDR + 0x5F8)
-
-/**
- * LPU0 Block Register Address Offset from BAR0
- * Range 0x18000 - 0x18033
- */
-#define LPU0_BLK_REG_ADDR		0x00018000
-
-/**
- * LPU0 Registers
- * Should these be used directly from the host,
- * except for diagnostics?
- * CTL_REG : Control register
- * CMD_REG : Triggers exec. of cmd. in
- *           Mailbox memory
- */
-#define LPU0_MBOX_CTL_REG		(LPU0_BLK_REG_ADDR + 0x000)
-#define LPU0_MBOX_CMD_REG		(LPU0_BLK_REG_ADDR + 0x004)
-#define LPU0_MBOX_LINK_0REG		(LPU0_BLK_REG_ADDR + 0x008)
-#define LPU1_MBOX_LINK_0REG		(LPU0_BLK_REG_ADDR + 0x00c)
-#define LPU0_MBOX_STATUS_0REG		(LPU0_BLK_REG_ADDR + 0x010)
-#define LPU1_MBOX_STATUS_0REG		(LPU0_BLK_REG_ADDR + 0x014)
-#define LPU0_ERR_STATUS_REG		(LPU0_BLK_REG_ADDR + 0x018)
-#define LPU0_ERR_SET_REG		(LPU0_BLK_REG_ADDR + 0x020)
-
-/**
- * LPU1 Block Register Address Offset from BAR0
- * Range 0x18400 - 0x18433
- */
-#define LPU1_BLK_REG_ADDR		0x00018400
-
-/**
- * LPU1 Registers
- * Same as LPU0 registers above
- */
-#define LPU1_MBOX_CTL_REG		(LPU1_BLK_REG_ADDR + 0x000)
-#define LPU1_MBOX_CMD_REG		(LPU1_BLK_REG_ADDR + 0x004)
-#define LPU0_MBOX_LINK_1REG		(LPU1_BLK_REG_ADDR + 0x008)
-#define LPU1_MBOX_LINK_1REG		(LPU1_BLK_REG_ADDR + 0x00c)
-#define LPU0_MBOX_STATUS_1REG		(LPU1_BLK_REG_ADDR + 0x010)
-#define LPU1_MBOX_STATUS_1REG		(LPU1_BLK_REG_ADDR + 0x014)
-#define LPU1_ERR_STATUS_REG		(LPU1_BLK_REG_ADDR + 0x018)
-#define LPU1_ERR_SET_REG		(LPU1_BLK_REG_ADDR + 0x020)
-
-/**
- * PSS Block Register Address Offset from BAR0
- * Range 0x18800 - 0x188DB
- */
-#define PSS_BLK_REG_ADDR		0x00018800
-
-/**
- * PSS Registers
- * For details, see catapult_spec.pdf
- * ERR_STATUS_REG : Indicates error in PSS module
- * RAM_ERR_STATUS_REG : Indicates RAM module that detected error
- */
-#define ERR_STATUS_SET			(PSS_BLK_REG_ADDR + 0x018)
-#define PSS_RAM_ERR_STATUS_REG		(PSS_BLK_REG_ADDR + 0x01C)
-
-/**
- * PSS Semaphore Lock Registers, total 16
- * First read when unlocked returns 0,
- * and is set to 1, atomically.
- * Subsequent reads return 1.
- * To clear, set the value to 0.
- * Range : 0x20 to 0x5c
- */
-#define PSS_SEM_LOCK_REG(_num)		\
-	(PSS_BLK_REG_ADDR + 0x020 + ((_num) << 2))
-
-/**
- * PSS Semaphore Status Registers,
- * corresponding to the lock registers above
- */
-#define PSS_SEM_STATUS_REG(_num)		\
-	(PSS_BLK_REG_ADDR + 0x060 + ((_num) << 2))
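
The read-to-lock / write-zero-to-unlock semantics above map onto a simple
trylock/unlock pair.  A minimal sketch, assuming <linux/types.h> and
<linux/io.h>, a mapped BAR0 pointer, and illustrative helper names that are
not part of the driver:

static bool pss_sem_trylock(void __iomem *bar0, int sem_num)
{
	/* First read returns 0 when the lock was free and sets it atomically */
	return readl(bar0 + PSS_SEM_LOCK_REG(sem_num)) == 0;
}

static void pss_sem_unlock(void __iomem *bar0, int sem_num)
{
	/* Writing 0 releases the semaphore */
	writel(0, bar0 + PSS_SEM_LOCK_REG(sem_num));
}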
-
-/**
- * Catapult CPQ Registers
- * Defines for Mailbox Registers
- * Used to send mailbox commands to firmware from
- * host. The data part is written to the MBox
- * memory, registers are used to indicate that
- * a command is resident in memory.
- *
- * Note : LPU0<->LPU1 mailboxes are not listed here
- */
-#define CPQ_BLK_REG_ADDR		0x00019000
-
-#define HOSTFN0_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x130)
-#define HOSTFN0_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x134)
-#define LPU0_HOSTFN0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x138)
-#define LPU1_HOSTFN0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x13C)
-
-#define HOSTFN1_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x140)
-#define HOSTFN1_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x144)
-#define LPU0_HOSTFN1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x148)
-#define LPU1_HOSTFN1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x14C)
-
-#define HOSTFN2_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x170)
-#define HOSTFN2_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x174)
-#define LPU0_HOSTFN2_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x178)
-#define LPU1_HOSTFN2_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x17C)
-
-#define HOSTFN3_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x180)
-#define HOSTFN3_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x184)
-#define LPU0_HOSTFN3_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x188)
-#define LPU1_HOSTFN3_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x18C)
-
-/* Host Function Force Parity Error Registers */
-#define HOSTFN0_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x120)
-#define HOSTFN1_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x124)
-#define HOSTFN2_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x128)
-#define HOSTFN3_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x12C)
-
-/* LL Port[0|1] Halt Mask Registers */
-#define LL_HALT_MSK_P0			(CPQ_BLK_REG_ADDR + 0x1A0)
-#define LL_HALT_MSK_P1			(CPQ_BLK_REG_ADDR + 0x1B0)
-
-/* LL Port[0|1] Error Mask Registers */
-#define LL_ERR_MSK_P0			(CPQ_BLK_REG_ADDR + 0x1D0)
-#define LL_ERR_MSK_P1			(CPQ_BLK_REG_ADDR + 0x1D4)
-
-/* EMC FLI (Flash Controller) Block Register Address Offset from BAR0 */
-#define FLI_BLK_REG_ADDR		0x0001D000
-
-/* EMC FLI Registers */
-#define FLI_CMD_REG			(FLI_BLK_REG_ADDR + 0x000)
-#define FLI_ADDR_REG			(FLI_BLK_REG_ADDR + 0x004)
-#define FLI_CTL_REG			(FLI_BLK_REG_ADDR + 0x008)
-#define FLI_WRDATA_REG			(FLI_BLK_REG_ADDR + 0x00C)
-#define FLI_RDDATA_REG			(FLI_BLK_REG_ADDR + 0x010)
-#define FLI_DEV_STATUS_REG		(FLI_BLK_REG_ADDR + 0x014)
-#define FLI_SIG_WD_REG			(FLI_BLK_REG_ADDR + 0x018)
-
-/**
- * RO register
- * 31:16 -- Vendor Id
- * 15:0  -- Device Id
- */
-#define FLI_DEV_VENDOR_REG		(FLI_BLK_REG_ADDR + 0x01C)
-#define FLI_ERR_STATUS_REG		(FLI_BLK_REG_ADDR + 0x020)
-
-/**
- * RAD (RxAdm) Block Register Address Offset from BAR0
- * RAD0 Range : 0x20000 - 0x203FF
- * RAD1 Range : 0x20400 - 0x207FF
- */
-#define RAD0_BLK_REG_ADDR		0x00020000
-#define RAD1_BLK_REG_ADDR		0x00020400
-
-/* RAD0 Registers */
-#define RAD0_CTL_REG			(RAD0_BLK_REG_ADDR + 0x000)
-#define RAD0_PE_PARM_REG		(RAD0_BLK_REG_ADDR + 0x004)
-#define RAD0_BCN_REG			(RAD0_BLK_REG_ADDR + 0x008)
-
-/* Default function ID register */
-#define RAD0_DEFAULT_REG		(RAD0_BLK_REG_ADDR + 0x00C)
-
-/* Default promiscuous ID register */
-#define RAD0_PROMISC_REG		(RAD0_BLK_REG_ADDR + 0x010)
-
-#define RAD0_BCNQ_REG			(RAD0_BLK_REG_ADDR + 0x014)
-
-/*
- * This register selects 1 of 8 PM Q's using
- * VLAN pri, for non-BCN packets without a VLAN tag
- */
-#define RAD0_DEFAULTQ_REG		(RAD0_BLK_REG_ADDR + 0x018)
-
-#define RAD0_ERR_STS			(RAD0_BLK_REG_ADDR + 0x01C)
-#define RAD0_SET_ERR_STS		(RAD0_BLK_REG_ADDR + 0x020)
-#define RAD0_ERR_INT_EN			(RAD0_BLK_REG_ADDR + 0x024)
-#define RAD0_FIRST_ERR			(RAD0_BLK_REG_ADDR + 0x028)
-#define RAD0_FORCE_ERR			(RAD0_BLK_REG_ADDR + 0x02C)
-
-#define RAD0_IF_RCVD			(RAD0_BLK_REG_ADDR + 0x030)
-#define RAD0_IF_RCVD_OCTETS_HIGH	(RAD0_BLK_REG_ADDR + 0x034)
-#define RAD0_IF_RCVD_OCTETS_LOW		(RAD0_BLK_REG_ADDR + 0x038)
-#define RAD0_IF_RCVD_VLAN		(RAD0_BLK_REG_ADDR + 0x03C)
-#define RAD0_IF_RCVD_UCAST		(RAD0_BLK_REG_ADDR + 0x040)
-#define RAD0_IF_RCVD_UCAST_OCTETS_HIGH	(RAD0_BLK_REG_ADDR + 0x044)
-#define RAD0_IF_RCVD_UCAST_OCTETS_LOW   (RAD0_BLK_REG_ADDR + 0x048)
-#define RAD0_IF_RCVD_UCAST_VLAN		(RAD0_BLK_REG_ADDR + 0x04C)
-#define RAD0_IF_RCVD_MCAST		(RAD0_BLK_REG_ADDR + 0x050)
-#define RAD0_IF_RCVD_MCAST_OCTETS_HIGH  (RAD0_BLK_REG_ADDR + 0x054)
-#define RAD0_IF_RCVD_MCAST_OCTETS_LOW   (RAD0_BLK_REG_ADDR + 0x058)
-#define RAD0_IF_RCVD_MCAST_VLAN		(RAD0_BLK_REG_ADDR + 0x05C)
-#define RAD0_IF_RCVD_BCAST		(RAD0_BLK_REG_ADDR + 0x060)
-#define RAD0_IF_RCVD_BCAST_OCTETS_HIGH  (RAD0_BLK_REG_ADDR + 0x064)
-#define RAD0_IF_RCVD_BCAST_OCTETS_LOW   (RAD0_BLK_REG_ADDR + 0x068)
-#define RAD0_IF_RCVD_BCAST_VLAN		(RAD0_BLK_REG_ADDR + 0x06C)
-#define RAD0_DROPPED_FRAMES		(RAD0_BLK_REG_ADDR + 0x070)
-
-#define RAD0_MAC_MAN_1H			(RAD0_BLK_REG_ADDR + 0x080)
-#define RAD0_MAC_MAN_1L			(RAD0_BLK_REG_ADDR + 0x084)
-#define RAD0_MAC_MAN_2H			(RAD0_BLK_REG_ADDR + 0x088)
-#define RAD0_MAC_MAN_2L			(RAD0_BLK_REG_ADDR + 0x08C)
-#define RAD0_MAC_MAN_3H			(RAD0_BLK_REG_ADDR + 0x090)
-#define RAD0_MAC_MAN_3L			(RAD0_BLK_REG_ADDR + 0x094)
-#define RAD0_MAC_MAN_4H			(RAD0_BLK_REG_ADDR + 0x098)
-#define RAD0_MAC_MAN_4L			(RAD0_BLK_REG_ADDR + 0x09C)
-
-#define RAD0_LAST4_IP			(RAD0_BLK_REG_ADDR + 0x100)
-
-/* RAD1 Registers */
-#define RAD1_CTL_REG			(RAD1_BLK_REG_ADDR + 0x000)
-#define RAD1_PE_PARM_REG		(RAD1_BLK_REG_ADDR + 0x004)
-#define RAD1_BCN_REG			(RAD1_BLK_REG_ADDR + 0x008)
-
-/* Default function ID register */
-#define RAD1_DEFAULT_REG		(RAD1_BLK_REG_ADDR + 0x00C)
-
-/* Promiscuous function ID register */
-#define RAD1_PROMISC_REG		(RAD1_BLK_REG_ADDR + 0x010)
-
-#define RAD1_BCNQ_REG			(RAD1_BLK_REG_ADDR + 0x014)
-
-/*
- * This register selects 1 of 8 PM Q's using
- * VLAN pri, for non-BCN packets without a VLAN tag
- */
-#define RAD1_DEFAULTQ_REG		(RAD1_BLK_REG_ADDR + 0x018)
-
-#define RAD1_ERR_STS			(RAD1_BLK_REG_ADDR + 0x01C)
-#define RAD1_SET_ERR_STS		(RAD1_BLK_REG_ADDR + 0x020)
-#define RAD1_ERR_INT_EN			(RAD1_BLK_REG_ADDR + 0x024)
-
-/**
- * TXA Block Register Address Offset from BAR0
- * TXA0 Range : 0x21000 - 0x213FF
- * TXA1 Range : 0x21400 - 0x217FF
- */
-#define TXA0_BLK_REG_ADDR		0x00021000
-#define TXA1_BLK_REG_ADDR		0x00021400
-
-/* TXA Registers */
-#define TXA0_CTRL_REG			(TXA0_BLK_REG_ADDR + 0x000)
-#define TXA1_CTRL_REG			(TXA1_BLK_REG_ADDR + 0x000)
-
-/**
- * TSO Sequence # Registers (RO)
- * Total 8 (for 8 queues)
- * Holds the last seq.# for TSO frames
- * See catapult_spec.pdf for more details
- */
-#define TXA0_TSO_TCP_SEQ_REG(_num)		\
-	(TXA0_BLK_REG_ADDR + 0x020 + ((_num) << 2))
-
-#define TXA1_TSO_TCP_SEQ_REG(_num)		\
-	(TXA1_BLK_REG_ADDR + 0x020 + ((_num) << 2))
-
-/**
- * TSO IP ID # Registers (RO)
- * Total 8 (for 8 queues)
- * Holds the last IP ID for TSO frames
- * See catapult_spec.pdf for more details
- */
-#define TXA0_TSO_IP_INFO_REG(_num)		\
-	(TXA0_BLK_REG_ADDR + 0x040 + ((_num) << 2))
-
-#define TXA1_TSO_IP_INFO_REG(_num)		\
-	(TXA1_BLK_REG_ADDR + 0x040 + ((_num) << 2))
-
-/**
- * RXA Block Register Address Offset from BAR0
- * RXA0 Range : 0x21800 - 0x21BFF
- * RXA1 Range : 0x21C00 - 0x21FFF
- */
-#define RXA0_BLK_REG_ADDR		0x00021800
-#define RXA1_BLK_REG_ADDR		0x00021C00
-
-/* RXA Registers */
-#define RXA0_CTL_REG			(RXA0_BLK_REG_ADDR + 0x040)
-#define RXA1_CTL_REG			(RXA1_BLK_REG_ADDR + 0x040)
-
-/**
- * PPLB Block Register Address Offset from BAR0
- * PPLB0 Range : 0x22000 - 0x223FF
- * PPLB1 Range : 0x22400 - 0x227FF
- */
-#define PLB0_BLK_REG_ADDR		0x00022000
-#define PLB1_BLK_REG_ADDR		0x00022400
-
-/**
- * PLB Registers
- * Holds RL timer used time stamps in RLT tagged frames
- */
-#define PLB0_ECM_TIMER_REG		(PLB0_BLK_REG_ADDR + 0x05C)
-#define PLB1_ECM_TIMER_REG		(PLB1_BLK_REG_ADDR + 0x05C)
-
-/* Controls the rate-limiter on each of the priority class */
-#define PLB0_RL_CTL			(PLB0_BLK_REG_ADDR + 0x060)
-#define PLB1_RL_CTL			(PLB1_BLK_REG_ADDR + 0x060)
-
-/**
- * Max byte register, total 8, 0-7
- * see catapult_spec.pdf for details
- */
-#define PLB0_RL_MAX_BC(_num)			\
-	(PLB0_BLK_REG_ADDR + 0x064 + ((_num) << 2))
-#define PLB1_RL_MAX_BC(_num)			\
-	(PLB1_BLK_REG_ADDR + 0x064 + ((_num) << 2))
-
-/**
- * RL Time Unit Register for priority 0-7
- * 4 bits per priority
- * (2^rl_unit)*1us is the actual time period
- */
-#define PLB0_RL_TU_PRIO			(PLB0_BLK_REG_ADDR + 0x084)
-#define PLB1_RL_TU_PRIO			(PLB1_BLK_REG_ADDR + 0x084)
-
-/**
- * RL byte count register,
- * bytes transmitted in (rl_unit*1)us time period
- * 1 per priority, 8 in all, 0-7.
- */
-#define PLB0_RL_BYTE_CNT(_num)			\
-	(PLB0_BLK_REG_ADDR + 0x088 + ((_num) << 2))
-#define PLB1_RL_BYTE_CNT(_num)			\
-	(PLB1_BLK_REG_ADDR + 0x088 + ((_num) << 2))
-
-/**
- * RL Min factor register
- * 2 bits per priority,
- * 4 factors possible: 1, 0.5, 0.25, 0
- * 2'b00 - 0; 2'b01 - 0.25; 2'b10 - 0.5; 2'b11 - 1
- */
-#define PLB0_RL_MIN_REG			(PLB0_BLK_REG_ADDR + 0x0A8)
-#define PLB1_RL_MIN_REG			(PLB1_BLK_REG_ADDR + 0x0A8)
-
-/**
- * RL Max factor register
- * 2 bits per priority,
- * 4 factors possible: 1, 0.5, 0.25, 0
- * 2'b00 - 0; 2'b01 - 0.25; 2'b10 - 0.5; 2'b11 - 1
- */
-#define PLB0_RL_MAX_REG			(PLB0_BLK_REG_ADDR + 0x0AC)
-#define PLB1_RL_MAX_REG			(PLB1_BLK_REG_ADDR + 0x0AC)
-
-/* MAC SERDES Address Paging register */
-#define PLB0_EMS_ADD_REG		(PLB0_BLK_REG_ADDR + 0xD0)
-#define PLB1_EMS_ADD_REG		(PLB1_BLK_REG_ADDR + 0xD0)
-
-/* LL EMS Registers */
-#define LL_EMS0_BLK_REG_ADDR		0x00026800
-#define LL_EMS1_BLK_REG_ADDR		0x00026C00
-
-/**
- * BPC Block Register Address Offset from BAR0
- * BPC0 Range : 0x23000 - 0x233FF
- * BPC1 Range : 0x23400 - 0x237FF
- */
-#define BPC0_BLK_REG_ADDR		0x00023000
-#define BPC1_BLK_REG_ADDR		0x00023400
-
-/**
- * PMM Block Register Address Offset from BAR0
- * PMM0 Range : 0x23800 - 0x23BFF
- * PMM1 Range : 0x23C00 - 0x23FFF
- */
-#define PMM0_BLK_REG_ADDR		0x00023800
-#define PMM1_BLK_REG_ADDR		0x00023C00
-
-/**
- * HQM Block Register Address Offset from BAR0
- * HQM0 Range : 0x24000 - 0x243FF
- * HQM1 Range : 0x24400 - 0x247FF
- */
-#define HQM0_BLK_REG_ADDR		0x00024000
-#define HQM1_BLK_REG_ADDR		0x00024400
-
-/**
- * HQM Control Register
- * Controls some aspects of IB
- * See catapult_spec.pdf for details
- */
-#define HQM0_CTL_REG			(HQM0_BLK_REG_ADDR + 0x000)
-#define HQM1_CTL_REG			(HQM1_BLK_REG_ADDR + 0x000)
-
-/**
- * HQM Stop Q Semaphore Registers.
- * Only one Queue resource can be stopped at
- * any given time. This register controls access
- * to the single stop Q resource.
- * See catapult_spec.pdf for details
- */
-#define HQM0_RXQ_STOP_SEM		(HQM0_BLK_REG_ADDR + 0x028)
-#define HQM0_TXQ_STOP_SEM		(HQM0_BLK_REG_ADDR + 0x02C)
-#define HQM1_RXQ_STOP_SEM		(HQM1_BLK_REG_ADDR + 0x028)
-#define HQM1_TXQ_STOP_SEM		(HQM1_BLK_REG_ADDR + 0x02C)
-
-/**
- * LUT Block Register Address Offset from BAR0
- * LUT0 Range : 0x25800 - 0x25BFF
- * LUT1 Range : 0x25C00 - 0x25FFF
- */
-#define LUT0_BLK_REG_ADDR		0x00025800
-#define LUT1_BLK_REG_ADDR		0x00025C00
-
-/**
- * LUT Registers
- * See catapult_spec.pdf for details
- */
-#define LUT0_ERR_STS			(LUT0_BLK_REG_ADDR + 0x000)
-#define LUT1_ERR_STS			(LUT1_BLK_REG_ADDR + 0x000)
-#define LUT0_SET_ERR_STS		(LUT0_BLK_REG_ADDR + 0x004)
-#define LUT1_SET_ERR_STS		(LUT1_BLK_REG_ADDR + 0x004)
-
-/**
- * TRC (Debug/Trace) Register Offset from BAR0
- * Range : 0x26000 -- 0x263FFF
- */
-#define TRC_BLK_REG_ADDR		0x00026000
-
-/**
- * TRC Registers
- * See catapult_spec.pdf for details of each
- */
-#define TRC_CTL_REG			(TRC_BLK_REG_ADDR + 0x000)
-#define TRC_MODS_REG			(TRC_BLK_REG_ADDR + 0x004)
-#define TRC_TRGC_REG			(TRC_BLK_REG_ADDR + 0x008)
-#define TRC_CNT1_REG			(TRC_BLK_REG_ADDR + 0x010)
-#define TRC_CNT2_REG			(TRC_BLK_REG_ADDR + 0x014)
-#define TRC_NXTS_REG			(TRC_BLK_REG_ADDR + 0x018)
-#define TRC_DIRR_REG			(TRC_BLK_REG_ADDR + 0x01C)
-
-/**
- * TRC Trigger match filters, total 10
- * Determines the trigger condition
- */
-#define TRC_TRGM_REG(_num)		\
-	(TRC_BLK_REG_ADDR + 0x040 + ((_num) << 2))
-
-/**
- * TRC Next State filters, total 10
- * Determines the next state conditions
- */
-#define TRC_NXTM_REG(_num)		\
-	(TRC_BLK_REG_ADDR + 0x080 + ((_num) << 2))
-
-/**
- * TRC Store Match filters, total 10
- * Determines the store conditions
- */
-#define TRC_STRM_REG(_num)		\
-	(TRC_BLK_REG_ADDR + 0x0C0 + ((_num) << 2))
-
-/* DOORBELLS ACCESS */
-
-/**
- * Catapult doorbells
- * Each doorbell-queue set has
- * 1 RxQ, 1 TxQ, 2 IBs in that order
- * Size of each entry is 32 bytes, even though only 1 word
- * is used. For Non-VM case each doorbell-q set is
- * separated by 128 bytes, for VM case it is separated
- * by 4K bytes
- * Non VM case Range : 0x38000 - 0x39FFF
- * VM case Range     : 0x100000 - 0x11FFFF
- * The range applies to both HQMs
- */
-#define HQM_DOORBELL_BLK_BASE_ADDR	0x00038000
-#define HQM_DOORBELL_VM_BLK_BASE_ADDR	0x00100000
-
-/* MEMORY ACCESS */
-
-/**
- * Catapult H/W Block Memory Access Address
- * To the host a memory space of 32K (page) is visible
- * at a time. The address range is from 0x08000 to 0x0FFFF
- */
-#define HW_BLK_HOST_MEM_ADDR		0x08000
-
-/**
- * Catapult LUT Memory Access Page Numbers
- * Range : LUT0 0xa0-0xa1
- *         LUT1 0xa2-0xa3
- */
-#define LUT0_MEM_BLK_BASE_PG_NUM	0x000000A0
-#define LUT1_MEM_BLK_BASE_PG_NUM	0x000000A2
-
-/**
- * Catapult RxFn Database Memory Block Base Offset
- *
- * The Rx function database exists in LUT block.
- * In PCIe space this is accessible as a 256x32
- * bit block. Each entry in this database is 4
- * (4 byte) words. Max. entries is 64.
- * Address of an entry corresponding to a function
- * = base_addr + (function_no. * 16)
- */
-#define RX_FNDB_RAM_BASE_OFFSET		0x0000B400
-
-/**
- * Catapult TxFn Database Memory Block Base Offset Address
- *
- * The Tx function database exists in LUT block.
- * In PCIe space this is accessible as a 64x32
- * bit block. Each entry in this database is 1
- * (4 byte) word. Max. entries is 64.
- * Address of an entry corresponding to a function
- * = base_addr + (function_no. * 4)
- */
-#define TX_FNDB_RAM_BASE_OFFSET		0x0000B800
-
-/**
- * Catapult Unicast CAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Shared by both the LL & FCoE driver.
- * Size is 256x48 bits; mapped to PCIe space
- * 512x32 bit blocks. For each address, bits
- * are written in the order : [47:32] and then
- * [31:0].
- */
-#define UCAST_CAM_BASE_OFFSET		0x0000A800
-
-/**
- * Catapult Unicast RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Shared by both the LL & FCoE driver.
- * Size is 256x9 bits.
- */
-#define UCAST_RAM_BASE_OFFSET		0x0000B000
-
-/**
- * Catapult Multicast CAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Shared by both the LL & FCoE driver.
- * Size is 256x48 bits; mapped to PCIe space
- * 512x32 bit blocks. For each address, bits
- * are written in the order : [47:32] and then
- * [31:0].
- */
-#define MCAST_CAM_BASE_OFFSET		0x0000A000
-
-/**
- * Catapult VLAN RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Size is 4096x66 bits; mapped to PCIe space as
- * 8192x32 bit blocks.
- * All the 4K entries are within the address range
- * 0x0000 to 0x8000, so in the first LUT page.
- */
-#define VLAN_RAM_BASE_OFFSET		0x00000000
-
-/**
- * Catapult Tx Stats RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Size is 1024x33 bits;
- * Each Tx function has 64 bytes of space
- */
-#define TX_STATS_RAM_BASE_OFFSET	0x00009000
-
-/**
- * Catapult Rx Stats RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Size is 1024x33 bits;
- * Each Rx function has 64 bytes of space
- */
-#define RX_STATS_RAM_BASE_OFFSET	0x00008000
-
-/* Catapult RXA Memory Access Page Numbers */
-#define RXA0_MEM_BLK_BASE_PG_NUM	0x0000008C
-#define RXA1_MEM_BLK_BASE_PG_NUM	0x0000008D
-
-/**
- * Catapult Multicast Vector Table Base Offset Address
- *
- * Exists in RxA memory space.
- * Organized as 512x65 bit block.
- * However for each entry 16 bytes allocated (power of 2)
- * Total size 512*16 bytes.
- * There are two logical divisions, 256 entries each :
- * a) Entries 0x00 to 0xff (256) -- Approx. MVT
- *    Offset 0x000 to 0xFFF
- * b) Entries 0x100 to 0x1ff (256) -- Exact MVT
- *    Offsets 0x1000 to 0x1FFF
- */
-#define MCAST_APPROX_MVT_BASE_OFFSET	0x00000000
-#define MCAST_EXACT_MVT_BASE_OFFSET	0x00001000
-
-/**
- * Catapult RxQ Translate Table (RIT) Base Offset Address
- *
- * Exists in RxA memory space
- * Total no. of entries 64
- * Each entry is 1 (4 byte) word.
- * 31:12 -- Reserved
- * 11:0  -- Two 6 bit RxQ Ids
- */
-#define FUNCTION_TO_RXQ_TRANSLATE	0x00002000
-
-/* Catapult RxAdm (RAD) Memory Access Page Numbers */
-#define RAD0_MEM_BLK_BASE_PG_NUM	0x00000086
-#define RAD1_MEM_BLK_BASE_PG_NUM	0x00000087
-
-/**
- * Catapult RSS Table Base Offset Address
- *
- * Exists in RAD memory space.
- * Each entry is 352 bits, but aligned on
- * 64 byte (512 bit) boundary. Accessed as
- * 4 byte words, the whole entry can be
- * broken into 11 word accesses.
- */
-#define RSS_TABLE_BASE_OFFSET		0x00000800
-
-/**
- * Catapult CPQ Block Page Number
- * This value is written to the page number registers
- * to access the memory associated with the mailboxes.
- */
-#define CPQ_BLK_PG_NUM			0x00000005
-
-/**
- * Clarification :
- * LL functions are 2 & 3; can HostFn0/HostFn1
- * <-> LPU0/LPU1 memories be used ?
- */
-/**
- * Catapult HostFn0/HostFn1 to LPU0/LPU1 Mbox memory
- * Per catapult_spec.pdf, the offset of the mbox
- * memory is in the register space at an offset of 0x200
- */
-#define CPQ_BLK_REG_MBOX_ADDR		(CPQ_BLK_REG_ADDR + 0x200)
-
-#define HOSTFN_LPU_MBOX			(CPQ_BLK_REG_MBOX_ADDR + 0x000)
-
-/* Catapult LPU0/LPU1 to HostFn0/HostFn1 Mbox memory */
-#define LPU_HOSTFN_MBOX			(CPQ_BLK_REG_MBOX_ADDR + 0x080)
-
-/**
- * Catapult HQM Block Page Number
- * This is written to the page number register for
- * the appropriate function to access the memory
- * associated with HQM
- */
-#define HQM0_BLK_PG_NUM			0x00000096
-#define HQM1_BLK_PG_NUM			0x00000097
-
-/**
- * Note that TxQ and RxQ entries are interlaced
- * in the HQM memory, i.e. RXQ0, TXQ0, RXQ1, TXQ1, etc.
- */
-
-#define HQM_RXTX_Q_RAM_BASE_OFFSET	0x00004000
-
-/**
- * CQ Memory
- * Exists in HQM Memory space
- * Each entry is 16 (4 byte) words of which
- * only 12 words are used for configuration
- * Total 64 entries per HQM memory space
- */
-#define HQM_CQ_RAM_BASE_OFFSET		0x00006000
-
-/**
- * Interrupt Block (IB) Memory
- * Exists in HQM Memory space
- * Each entry is 8 (4 byte) words of which
- * only 5 words are used for configuration
- * Total 128 entries per HQM memory space
- */
-#define HQM_IB_RAM_BASE_OFFSET		0x00001000
-
-/**
- * Index Table (IT) Memory
- * Exists in HQM Memory space
- * Each entry is 1 (4 byte) word which
- * is used for configuration
- * Total 128 entries per HQM memory space
- */
-#define HQM_INDX_TBL_RAM_BASE_OFFSET	0x00002000
-
-/**
- * PSS Block Memory Page Number
- * This is written to the appropriate page number
- * register to access the CPU memory.
- * Also known as the PSS secondary memory (SMEM).
- * Range : 0x180 to 0x1CF
- * See catapult_spec.pdf for details
- */
-#define PSS_BLK_PG_NUM			0x00000180
-
-/**
- * Offsets of different instances of PSS SMEM
- * 2.5M of continuous 1T memory space : 2 blocks
- * of 1M each (32 pages each, page=32KB) and 4 smaller
- * blocks of 128K each (4 pages each, page=32KB)
- * PSS_LMEM_INST0 is used for firmware download
- */
-#define PSS_LMEM_INST0			0x00000000
-#define PSS_LMEM_INST1			0x00100000
-#define PSS_LMEM_INST2			0x00200000
-#define PSS_LMEM_INST3			0x00220000
-#define PSS_LMEM_INST4			0x00240000
-#define PSS_LMEM_INST5			0x00260000
-
-#define BNA_PCI_REG_CT_ADDRSZ		(0x40000)
-
-#define BNA_GET_PAGE_NUM(_base_page, _offset)   \
-	((_base_page) + ((_offset) >> 15))
-
-#define BNA_GET_PAGE_OFFSET(_offset)    \
-	((_offset) & 0x7fff)
-
-#define BNA_GET_MEM_BASE_ADDR(_bar0, _base_offset)	\
-	((_bar0) + HW_BLK_HOST_MEM_ADDR		\
-	  + BNA_GET_PAGE_OFFSET((_base_offset)))
-
-#define BNA_GET_VLAN_MEM_ENTRY_ADDR(_bar0, _fn_id, _vlan_id)\
-	(_bar0 + (HW_BLK_HOST_MEM_ADDR)  \
-	+ (BNA_GET_PAGE_OFFSET(VLAN_RAM_BASE_OFFSET))	\
-	+ (((_fn_id) & 0x3f) << 9)	  \
-	+ (((_vlan_id) & 0xfe0) >> 3))
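
The macros above capture the paged access scheme: the block offset is split
into a page number (offset >> 15), written to the function's page-address
register, and a 15-bit offset accessed through the fixed 32KB window at
HW_BLK_HOST_MEM_ADDR.  A minimal sketch of reading one word this way (helper
name illustrative, error handling omitted; bna->regs.page_addr is the
register set up in bna_init()):

static u32 hw_mem_read_word(struct bna *bna, u32 base_pg, u32 blk_offset)
{
	/* Select the 32KB page containing blk_offset */
	writel(BNA_GET_PAGE_NUM(base_pg, blk_offset), bna->regs.page_addr);

	/* Read through the fixed host window */
	return readl(BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
					   blk_offset));
}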
-
-/**
- *
- *  Interrupt related bits, flags and macros
- *
- */
-
-#define __LPU02HOST_MBOX0_STATUS_BITS 0x00100000
-#define __LPU12HOST_MBOX0_STATUS_BITS 0x00200000
-#define __LPU02HOST_MBOX1_STATUS_BITS 0x00400000
-#define __LPU12HOST_MBOX1_STATUS_BITS 0x00800000
-
-#define __LPU02HOST_MBOX0_MASK_BITS	0x00100000
-#define __LPU12HOST_MBOX0_MASK_BITS	0x00200000
-#define __LPU02HOST_MBOX1_MASK_BITS	0x00400000
-#define __LPU12HOST_MBOX1_MASK_BITS	0x00800000
-
-#define __LPU2HOST_MBOX_MASK_BITS			 \
-	(__LPU02HOST_MBOX0_MASK_BITS | __LPU02HOST_MBOX1_MASK_BITS |	\
-	  __LPU12HOST_MBOX0_MASK_BITS | __LPU12HOST_MBOX1_MASK_BITS)
-
-#define __LPU2HOST_IB_STATUS_BITS	0x0000ffff
-
-#define BNA_IS_LPU0_MBOX_INTR(_intr_status) \
-	((_intr_status) & (__LPU02HOST_MBOX0_STATUS_BITS | \
-			__LPU02HOST_MBOX1_STATUS_BITS))
-
-#define BNA_IS_LPU1_MBOX_INTR(_intr_status) \
-	((_intr_status) & (__LPU12HOST_MBOX0_STATUS_BITS | \
-		__LPU12HOST_MBOX1_STATUS_BITS))
-
-#define BNA_IS_MBOX_INTR(_intr_status)		\
-	((_intr_status) &			\
-	(__LPU02HOST_MBOX0_STATUS_BITS |	\
-	 __LPU02HOST_MBOX1_STATUS_BITS |	\
-	 __LPU12HOST_MBOX0_STATUS_BITS |	\
-	 __LPU12HOST_MBOX1_STATUS_BITS))
-
-#define __EMC_ERROR_STATUS_BITS		0x00010000
-#define __LPU0_ERROR_STATUS_BITS	0x00020000
-#define __LPU1_ERROR_STATUS_BITS	0x00040000
-#define __PSS_ERROR_STATUS_BITS		0x00080000
-
-#define __HALT_STATUS_BITS		0x01000000
-
-#define __EMC_ERROR_MASK_BITS		0x00010000
-#define __LPU0_ERROR_MASK_BITS		0x00020000
-#define __LPU1_ERROR_MASK_BITS		0x00040000
-#define __PSS_ERROR_MASK_BITS		0x00080000
-
-#define __HALT_MASK_BITS		0x01000000
-
-#define __ERROR_MASK_BITS		\
-	(__EMC_ERROR_MASK_BITS | __LPU0_ERROR_MASK_BITS | \
-	  __LPU1_ERROR_MASK_BITS | __PSS_ERROR_MASK_BITS | \
-	  __HALT_MASK_BITS)
-
-#define BNA_IS_ERR_INTR(_intr_status)	\
-	((_intr_status) &		\
-	(__EMC_ERROR_STATUS_BITS |	\
-	 __LPU0_ERROR_STATUS_BITS |	\
-	 __LPU1_ERROR_STATUS_BITS |	\
-	 __PSS_ERROR_STATUS_BITS  |	\
-	 __HALT_STATUS_BITS))
-
-#define BNA_IS_MBOX_ERR_INTR(_intr_status)	\
-	(BNA_IS_MBOX_INTR((_intr_status)) |	\
-	 BNA_IS_ERR_INTR((_intr_status)))
-
-#define BNA_IS_INTX_DATA_INTR(_intr_status)	\
-	((_intr_status) & __LPU2HOST_IB_STATUS_BITS)
-
-#define BNA_INTR_STATUS_MBOX_CLR(_intr_status)			\
-do {								\
-	(_intr_status) &= ~(__LPU02HOST_MBOX0_STATUS_BITS |	\
-			__LPU02HOST_MBOX1_STATUS_BITS |		\
-			__LPU12HOST_MBOX0_STATUS_BITS |		\
-			__LPU12HOST_MBOX1_STATUS_BITS);		\
-} while (0)
-
-#define BNA_INTR_STATUS_ERR_CLR(_intr_status)		\
-do {							\
-	(_intr_status) &= ~(__EMC_ERROR_STATUS_BITS |	\
-		__LPU0_ERROR_STATUS_BITS |		\
-		__LPU1_ERROR_STATUS_BITS |		\
-		__PSS_ERROR_STATUS_BITS  |		\
-		__HALT_STATUS_BITS);			\
-} while (0)
-
-#define bna_intx_disable(_bna, _cur_mask)		\
-{							\
-	(_cur_mask) = readl((_bna)->regs.fn_int_mask);\
-	writel(0xffffffff, (_bna)->regs.fn_int_mask);\
-}
-
-#define bna_intx_enable(bna, new_mask)			\
-	writel((new_mask), (bna)->regs.fn_int_mask)
-
-#define bna_mbox_intr_disable(bna)		\
-	writel((readl((bna)->regs.fn_int_mask) | \
-	     (__LPU2HOST_MBOX_MASK_BITS | __ERROR_MASK_BITS)), \
-	     (bna)->regs.fn_int_mask)
-
-#define bna_mbox_intr_enable(bna)		\
-	writel((readl((bna)->regs.fn_int_mask) & \
-	     ~(__LPU2HOST_MBOX_MASK_BITS | __ERROR_MASK_BITS)), \
-	     (bna)->regs.fn_int_mask)
-
-#define bna_intr_status_get(_bna, _status)				\
-{									\
-	(_status) = readl((_bna)->regs.fn_int_status);		\
-	if ((_status)) {						\
-		writel((_status) & ~(__LPU02HOST_MBOX0_STATUS_BITS |\
-					  __LPU02HOST_MBOX1_STATUS_BITS |\
-					  __LPU12HOST_MBOX0_STATUS_BITS |\
-					  __LPU12HOST_MBOX1_STATUS_BITS), \
-			      (_bna)->regs.fn_int_status);\
-	}								\
-}
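
Taken together, these macros give the INTx dispatch pattern: snapshot the
status (bna_intr_status_get() also writes back the non-mailbox bits to
acknowledge them), then branch on mailbox/error bits versus per-IB data bits.
A rough, illustrative sketch of such a handler; the function name and the
elided handling are not from the driver:

static irqreturn_t bna_example_isr(int irq, void *data)
{
	struct bna *bna = data;
	u32 intr_status;

	bna_intr_status_get(bna, intr_status);
	if (!intr_status)
		return IRQ_NONE;

	if (BNA_IS_MBOX_ERR_INTR(intr_status)) {
		/* process mailbox messages / error interrupts */
	}

	if (BNA_IS_INTX_DATA_INTR(intr_status)) {
		/* schedule NAPI polling for the signalled IBs */
	}

	return IRQ_HANDLED;
}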
-
-#define bna_intr_status_get_no_clr(_bna, _status)		\
-	(_status) = readl((_bna)->regs.fn_int_status)
-
-#define bna_intr_mask_get(bna, mask)		\
-	(*mask) = readl((bna)->regs.fn_int_mask)
-
-#define bna_intr_ack(bna, intr_bmap)		\
-	writel((intr_bmap), (bna)->regs.fn_int_status)
-
-#define bna_ib_intx_disable(bna, ib_id)		\
-	writel(readl((bna)->regs.fn_int_mask) | \
-	    (1 << (ib_id)), \
-	    (bna)->regs.fn_int_mask)
-
-#define bna_ib_intx_enable(bna, ib_id)		\
-	writel(readl((bna)->regs.fn_int_mask) & \
-	    ~(1 << (ib_id)), \
-	    (bna)->regs.fn_int_mask)
-
-#define bna_mbox_msix_idx_set(_device) \
-do {\
-	writel(((_device)->vector & 0x000001FF), \
-		(_device)->bna->pcidev.pci_bar_kva + \
-		reg_offset[(_device)->bna->pcidev.pci_func].msix_idx);\
-} while (0)
-
-/**
- *
- * TxQ, RxQ, CQ related bits, offsets, macros
- *
- */
-
-#define	BNA_Q_IDLE_STATE	0x00008001
-
-#define BNA_GET_DOORBELL_BASE_ADDR(_bar0)	\
-	((_bar0) + HQM_DOORBELL_BLK_BASE_ADDR)
-
-#define BNA_GET_DOORBELL_ENTRY_OFFSET(_entry)		\
-	((HQM_DOORBELL_BLK_BASE_ADDR)		\
-	+ (_entry << 7))
-
-#define BNA_DOORBELL_IB_INT_ACK(_timeout, _events) \
-		(0x80000000 | ((_timeout) << 16) | (_events))
-
-#define BNA_DOORBELL_IB_INT_DISABLE		(0x40000000)
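
The ack word packs an enable bit (bit 31), the coalescing timeout (bits
23:16) and the number of events being acknowledged.  For illustration,
acknowledging `events` completions on an IB doorbell could look like the
following, with doorbell_addr computed as in bna_ib_mod_init() further below:

	writel(BNA_DOORBELL_IB_INT_ACK(coalescing_timeo, events), doorbell_addr);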
-
-/* TxQ Entry Opcodes */
-#define BNA_TXQ_WI_SEND			(0x402)	/* Single Frame Transmission */
-#define BNA_TXQ_WI_SEND_LSO		(0x403)	/* Multi-Frame Transmission */
-#define BNA_TXQ_WI_EXTENSION		(0x104)	/* Extension WI */
-
-/* TxQ Entry Control Flags */
-#define BNA_TXQ_WI_CF_FCOE_CRC		(1 << 8)
-#define BNA_TXQ_WI_CF_IPID_MODE		(1 << 5)
-#define BNA_TXQ_WI_CF_INS_PRIO		(1 << 4)
-#define BNA_TXQ_WI_CF_INS_VLAN		(1 << 3)
-#define BNA_TXQ_WI_CF_UDP_CKSUM		(1 << 2)
-#define BNA_TXQ_WI_CF_TCP_CKSUM		(1 << 1)
-#define BNA_TXQ_WI_CF_IP_CKSUM		(1 << 0)
-
-#define BNA_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \
-		(((_hdr_size) << 10) | ((_offset) & 0x3FF))
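
As a quick worked example of this packing (arbitrary values):
BNA_TXQ_WI_L4_HDR_N_OFFSET(5, 34) = (5 << 10) | 34 = 0x1422, i.e. the header
size lands in the upper bits and the offset in the low 10 bits.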
-
-/*
- * Completion Q defines
- */
-/* CQ Entry Flags */
-#define	BNA_CQ_EF_MAC_ERROR	(1 <<  0)
-#define	BNA_CQ_EF_FCS_ERROR	(1 <<  1)
-#define	BNA_CQ_EF_TOO_LONG	(1 <<  2)
-#define	BNA_CQ_EF_FC_CRC_OK	(1 <<  3)
-
-#define	BNA_CQ_EF_RSVD1		(1 <<  4)
-#define	BNA_CQ_EF_L4_CKSUM_OK	(1 <<  5)
-#define	BNA_CQ_EF_L3_CKSUM_OK	(1 <<  6)
-#define	BNA_CQ_EF_HDS_HEADER	(1 <<  7)
-
-#define	BNA_CQ_EF_UDP		(1 <<  8)
-#define	BNA_CQ_EF_TCP		(1 <<  9)
-#define	BNA_CQ_EF_IP_OPTIONS	(1 << 10)
-#define	BNA_CQ_EF_IPV6		(1 << 11)
-
-#define	BNA_CQ_EF_IPV4		(1 << 12)
-#define	BNA_CQ_EF_VLAN		(1 << 13)
-#define	BNA_CQ_EF_RSS		(1 << 14)
-#define	BNA_CQ_EF_RSVD2		(1 << 15)
-
-#define	BNA_CQ_EF_MCAST_MATCH   (1 << 16)
-#define	BNA_CQ_EF_MCAST		(1 << 17)
-#define BNA_CQ_EF_BCAST		(1 << 18)
-#define	BNA_CQ_EF_REMOTE	(1 << 19)
-
-#define	BNA_CQ_EF_LOCAL		(1 << 20)
-
-/**
- *
- * Data structures
- *
- */
-
-enum txf_flags {
-	BFI_TXF_CF_ENABLE		= 1 << 0,
-	BFI_TXF_CF_VLAN_FILTER		= 1 << 8,
-	BFI_TXF_CF_VLAN_ADMIT		= 1 << 9,
-	BFI_TXF_CF_VLAN_INSERT		= 1 << 10,
-	BFI_TXF_CF_RSVD1		= 1 << 11,
-	BFI_TXF_CF_MAC_SA_CHECK		= 1 << 12,
-	BFI_TXF_CF_VLAN_WI_BASED	= 1 << 13,
-	BFI_TXF_CF_VSWITCH_MCAST	= 1 << 14,
-	BFI_TXF_CF_VSWITCH_UCAST	= 1 << 15,
-	BFI_TXF_CF_RSVD2		= 0x7F << 1
-};
-
-enum ib_flags {
-	BFI_IB_CF_MASTER_ENABLE		= (1 << 0),
-	BFI_IB_CF_MSIX_MODE		= (1 << 1),
-	BFI_IB_CF_COALESCING_MODE	= (1 << 2),
-	BFI_IB_CF_INTER_PKT_ENABLE	= (1 << 3),
-	BFI_IB_CF_INT_ENABLE		= (1 << 4),
-	BFI_IB_CF_INTER_PKT_DMA		= (1 << 5),
-	BFI_IB_CF_ACK_PENDING		= (1 << 6),
-	BFI_IB_CF_RESERVED1		= (1 << 7)
-};
-
-enum rss_hash_type {
-	BFI_RSS_T_V4_TCP		= (1 << 11),
-	BFI_RSS_T_V4_IP			= (1 << 10),
-	BFI_RSS_T_V6_TCP		= (1 <<  9),
-	BFI_RSS_T_V6_IP			= (1 <<  8)
-};
-enum hds_header_type {
-	BNA_HDS_T_V4_TCP	= (1 << 11),
-	BNA_HDS_T_V4_UDP	= (1 << 10),
-	BNA_HDS_T_V6_TCP	= (1 << 9),
-	BNA_HDS_T_V6_UDP	= (1 << 8),
-	BNA_HDS_FORCED		= (1 << 7),
-};
-enum rxf_flags {
-	BNA_RXF_CF_SM_LG_RXQ			= (1 << 15),
-	BNA_RXF_CF_DEFAULT_VLAN			= (1 << 14),
-	BNA_RXF_CF_DEFAULT_FUNCTION_ENABLE	= (1 << 13),
-	BNA_RXF_CF_VLAN_STRIP			= (1 << 12),
-	BNA_RXF_CF_RSS_ENABLE			= (1 <<  8)
-};
-struct bna_chip_regs_offset {
-	u32 page_addr;
-	u32 fn_int_status;
-	u32 fn_int_mask;
-	u32 msix_idx;
-};
-
-struct bna_chip_regs {
-	void __iomem *page_addr;
-	void __iomem *fn_int_status;
-	void __iomem *fn_int_mask;
-};
-
-struct bna_txq_mem {
-	u32 pg_tbl_addr_lo;
-	u32 pg_tbl_addr_hi;
-	u32 cur_q_entry_lo;
-	u32 cur_q_entry_hi;
-	u32 reserved1;
-	u32 reserved2;
-	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
-					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size;	/* 31:16->entry size */
-					/* 15:0 ->page size */
-	u32 int_blk_n_cns_ptr;	/* 31:24->Int Blk Id;  */
-					/* 23:16->Int Blk Offset */
-					/* 15:0 ->consumer pointer(index?) */
-	u32 cns_ptr2_n_q_state;	/* 31:16->cons. ptr 2; 15:0-> Q state */
-	u32 nxt_qid_n_fid_n_pri;	/* 17:10->next */
-					/* QId;9:3->FID;2:0->Priority */
-	u32 wvc_n_cquota_n_rquota; /* 31:24->WI Vector Count; */
-					/* 23:12->Cfg Quota; */
-					/* 11:0 ->Run Quota */
-	u32 reserved3[4];
-};
-
-struct bna_rxq_mem {
-	u32 pg_tbl_addr_lo;
-	u32 pg_tbl_addr_hi;
-	u32 cur_q_entry_lo;
-	u32 cur_q_entry_hi;
-	u32 reserved1;
-	u32 reserved2;
-	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
-					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size;	/* 31:16->entry size */
-					/* 15:0 ->page size */
-	u32 sg_n_cq_n_cns_ptr;	/* 31:28->reserved; 27:24->sg count */
-					/* 23:16->CQ; */
-					/* 15:0->consumer pointer(index?) */
-	u32 buf_sz_n_q_state;	/* 31:16->buffer size; 15:0-> Q state */
-	u32 next_qid;		/* 17:10->next QId */
-	u32 reserved3;
-	u32 reserved4[4];
-};
-
-struct bna_rxtx_q_mem {
-	struct bna_rxq_mem rxq;
-	struct bna_txq_mem txq;
-};
-
-struct bna_cq_mem {
-	u32 pg_tbl_addr_lo;
-	u32 pg_tbl_addr_hi;
-	u32 cur_q_entry_lo;
-	u32 cur_q_entry_hi;
-
-	u32 reserved1;
-	u32 reserved2;
-	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
-					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size;	/* 31:16->entry size */
-					/* 15:0 ->page size */
-	u32 int_blk_n_cns_ptr;	/* 31:24->Int Blk Id; */
-					/* 23:16->Int Blk Offset */
-					/* 15:0 ->consumer pointer(index?) */
-	u32 q_state;		/* 31:16->reserved; 15:0-> Q state */
-	u32 reserved3[2];
-	u32 reserved4[4];
-};
-
-struct bna_ib_blk_mem {
-	u32 host_addr_lo;
-	u32 host_addr_hi;
-	u32 clsc_n_ctrl_n_msix;	/* 31:24->coalescing; */
-					/* 23:16->coalescing cfg; */
-					/* 15:8 ->control; */
-					/* 7:0 ->msix; */
-	u32 ipkt_n_ent_n_idxof;
-	u32 ipkt_cnt_cfg_n_unacked;
-
-	u32 reserved[3];
-};
-
-struct bna_idx_tbl_mem {
-	u32 idx;	  /* !< 31:16->res;15:0->idx; */
-};
-
-struct bna_doorbell_qset {
-	u32 rxq[0x20 >> 2];
-	u32 txq[0x20 >> 2];
-	u32 ib0[0x20 >> 2];
-	u32 ib1[0x20 >> 2];
-};
-
-struct bna_rx_fndb_ram {
-	u32 rss_prop;
-	u32 size_routing_props;
-	u32 rit_hds_mcastq;
-	u32 control_flags;
-};
-
-struct bna_tx_fndb_ram {
-	u32 vlan_n_ctrl_flags;
-};
-
-/**
- * @brief
- *  Structure which maps to RxFn Indirection Table (RIT)
- *  Size : 1 word
- *  See catapult_spec.pdf, RxA for details
- */
-struct bna_rit_mem {
-	u32 rxq_ids;	/* !< 31:12->res;11:0->two 6 bit RxQ Ids */
-};
-
-/**
- * @brief
- *  Structure which maps to RSS Table entry
- *  Size : 16 words
- *  See catapult_spec.pdf, RAD for details
- */
-struct bna_rss_mem {
-	/*
-	 * 31:12-> res
-	 * 11:8 -> protocol type
-	 *  7:0 -> hash index
-	 */
-	u32 type_n_hash;
-	u32 hash_key[10];  /* !< 40 byte Toeplitz hash key */
-	u32 reserved[5];
-};
-
-/* TxQ Vector (a.k.a. Tx-Buffer Descriptor) */
-struct bna_dma_addr {
-	u32		msb;
-	u32		lsb;
-};
-
-struct bna_txq_wi_vector {
-	u16		reserved;
-	u16		length;		/* Only 14 LSB are valid */
-	struct bna_dma_addr host_addr; /* Tx-Buf DMA addr */
-};
-
-typedef u16 bna_txq_wi_opcode_t;
-
-typedef u16 bna_txq_wi_ctrl_flag_t;
-
-/**
- *  TxQ Entry Structure
- *
- *  BEWARE:  Load values into this structure with correct endianness.
- */
-struct bna_txq_entry {
-	union {
-		struct {
-			u8 reserved;
-			u8 num_vectors;	/* number of vectors present */
-			bna_txq_wi_opcode_t opcode; /* Either */
-						    /* BNA_TXQ_WI_SEND or */
-						    /* BNA_TXQ_WI_SEND_LSO */
-			bna_txq_wi_ctrl_flag_t flags; /* OR of all the flags */
-			u16 l4_hdr_size_n_offset;
-			u16 vlan_tag;
-			u16 lso_mss;	/* Only 14 LSB are valid */
-			u32 frame_length;	/* Only 24 LSB are valid */
-		} wi;
-
-		struct {
-			u16 reserved;
-			bna_txq_wi_opcode_t opcode; /* Must be */
-						    /* BNA_TXQ_WI_EXTENSION */
-			u32 reserved2[3];	/* Place holder for */
-						/* removed vector (12 bytes) */
-		} wi_ext;
-	} hdr;
-	struct bna_txq_wi_vector vector[4];
-};
-#define wi_hdr		hdr.wi
-#define wi_ext_hdr  hdr.wi_ext
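
Per the BEWARE note, the fields must be loaded with the endianness the device
expects.  A simplified sketch of filling a single-vector send WI header,
assuming big-endian values on the wire; the helper name and values are
illustrative, and the real transmit path also fills the vector array and the
checksum/LSO flags:

static void example_fill_send_wi(struct bna_txq_entry *txqent,
				 u32 frame_len, u16 vlan_tag)
{
	txqent->hdr.wi.reserved = 0;
	txqent->hdr.wi.num_vectors = 1;
	txqent->hdr.wi.opcode = htons(BNA_TXQ_WI_SEND);
	txqent->hdr.wi.flags = htons(BNA_TXQ_WI_CF_INS_VLAN);
	txqent->hdr.wi.l4_hdr_size_n_offset = 0;
	txqent->hdr.wi.vlan_tag = htons(vlan_tag);
	txqent->hdr.wi.lso_mss = 0;
	txqent->hdr.wi.frame_length = htonl(frame_len);	/* only 24 LSB valid */
}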
-
-/* RxQ Entry Structure */
-struct bna_rxq_entry {		/* Rx-Buffer */
-	struct bna_dma_addr host_addr; /* Rx-Buffer DMA address */
-};
-
-typedef u32 bna_cq_e_flag_t;
-
-/* CQ Entry Structure */
-struct bna_cq_entry {
-	bna_cq_e_flag_t flags;
-	u16 vlan_tag;
-	u16 length;
-	u32 rss_hash;
-	u8 valid;
-	u8 reserved1;
-	u8 reserved2;
-	u8 rxq_id;
-};
-
-#endif /* __BNA_HW_H__ */
diff --git a/drivers/net/bna/bna_txrx.c b/drivers/net/bna/bna_txrx.c
deleted file mode 100644
index f0983c8..0000000
--- a/drivers/net/bna/bna_txrx.c
+++ /dev/null
@@ -1,4185 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
-  */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-#include "bna.h"
-#include "bfa_cs.h"
-#include "bfi.h"
-
-/**
- * IB
- */
-#define bna_ib_find_free_ibidx(_mask, _pos)\
-do {\
-	(_pos) = 0;\
-	while (((_pos) < (BFI_IBIDX_MAX_SEGSIZE)) &&\
-		((1 << (_pos)) & (_mask)))\
-		(_pos)++;\
-} while (0)
-
-#define bna_ib_count_ibidx(_mask, _count)\
-do {\
-	int pos = 0;\
-	(_count) = 0;\
-	while (pos < (BFI_IBIDX_MAX_SEGSIZE)) {\
-		if ((1 << pos) & (_mask))\
-			(_count) = pos + 1;\
-		pos++;\
-	} \
-} while (0)
-
-#define bna_ib_select_segpool(_count, _q_idx)\
-do {\
-	int i;\
-	(_q_idx) = -1;\
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++) {\
-		if ((_count <= ibidx_pool[i].pool_entry_size)) {\
-			(_q_idx) = i;\
-			break;\
-		} \
-	} \
-} while (0)
-
-struct bna_ibidx_pool {
-	int	pool_size;
-	int	pool_entry_size;
-};
-init_ibidx_pool(ibidx_pool);
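
A quick worked example of the three helpers above, using the non-BIOS pool
configuration defined earlier (pool entry sizes 1, 2 and 8):

	/* idx_mask = 0x5: index slots 0 and 2 are in use */
	bna_ib_find_free_ibidx(0x5, pos);	/* pos   = 1 (first clear bit)  */
	bna_ib_count_ibidx(0x5, count);		/* count = 3 (highest used + 1) */
	bna_ib_select_segpool(3, q_idx);	/* q_idx = 2 (the 8-entry pool) */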
-
-static struct bna_intr *
-bna_intr_get(struct bna_ib_mod *ib_mod, enum bna_intr_type intr_type,
-		int vector)
-{
-	struct bna_intr *intr;
-	struct list_head *qe;
-
-	list_for_each(qe, &ib_mod->intr_active_q) {
-		intr = (struct bna_intr *)qe;
-
-		if ((intr->intr_type == intr_type) &&
-			(intr->vector == vector)) {
-			intr->ref_count++;
-			return intr;
-		}
-	}
-
-	if (list_empty(&ib_mod->intr_free_q))
-		return NULL;
-
-	bfa_q_deq(&ib_mod->intr_free_q, &intr);
-	bfa_q_qe_init(&intr->qe);
-
-	intr->ref_count = 1;
-	intr->intr_type = intr_type;
-	intr->vector = vector;
-
-	list_add_tail(&intr->qe, &ib_mod->intr_active_q);
-
-	return intr;
-}
-
-static void
-bna_intr_put(struct bna_ib_mod *ib_mod,
-		struct bna_intr *intr)
-{
-	intr->ref_count--;
-
-	if (intr->ref_count == 0) {
-		intr->ib = NULL;
-		list_del(&intr->qe);
-		bfa_q_qe_init(&intr->qe);
-		list_add_tail(&intr->qe, &ib_mod->intr_free_q);
-	}
-}
-
-void
-bna_ib_mod_init(struct bna_ib_mod *ib_mod, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	int i;
-	int j;
-	int count;
-	u8 offset;
-	struct bna_doorbell_qset *qset;
-	unsigned long off;
-
-	ib_mod->bna = bna;
-
-	ib_mod->ib = (struct bna_ib *)
-		res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.mdl[0].kva;
-	ib_mod->intr = (struct bna_intr *)
-		res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.mdl[0].kva;
-	ib_mod->idx_seg = (struct bna_ibidx_seg *)
-		res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&ib_mod->ib_free_q);
-	INIT_LIST_HEAD(&ib_mod->intr_free_q);
-	INIT_LIST_HEAD(&ib_mod->intr_active_q);
-
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++)
-		INIT_LIST_HEAD(&ib_mod->ibidx_seg_pool[i]);
-
-	for (i = 0; i < BFI_MAX_IB; i++) {
-		ib_mod->ib[i].ib_id = i;
-
-		ib_mod->ib[i].ib_seg_host_addr_kva =
-		res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
-		ib_mod->ib[i].ib_seg_host_addr.lsb =
-		res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
-		ib_mod->ib[i].ib_seg_host_addr.msb =
-		res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
-
-		qset = (struct bna_doorbell_qset *)0;
-		off = (unsigned long)(&qset[i >> 1].ib0[(i & 0x1)
-					* (0x20 >> 2)]);
-		ib_mod->ib[i].door_bell.doorbell_addr = off +
-			BNA_GET_DOORBELL_BASE_ADDR(bna->pcidev.pci_bar_kva);
-
-		bfa_q_qe_init(&ib_mod->ib[i].qe);
-		list_add_tail(&ib_mod->ib[i].qe, &ib_mod->ib_free_q);
-
-		bfa_q_qe_init(&ib_mod->intr[i].qe);
-		list_add_tail(&ib_mod->intr[i].qe, &ib_mod->intr_free_q);
-	}
-
-	count = 0;
-	offset = 0;
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++) {
-		for (j = 0; j < ibidx_pool[i].pool_size; j++) {
-			bfa_q_qe_init(&ib_mod->idx_seg[count]);
-			ib_mod->idx_seg[count].ib_seg_size =
-					ibidx_pool[i].pool_entry_size;
-			ib_mod->idx_seg[count].ib_idx_tbl_offset = offset;
-			list_add_tail(&ib_mod->idx_seg[count].qe,
-				&ib_mod->ibidx_seg_pool[i]);
-			count++;
-			offset += ibidx_pool[i].pool_entry_size;
-		}
-	}
-}
-
-void
-bna_ib_mod_uninit(struct bna_ib_mod *ib_mod)
-{
-	int i;
-	int j;
-	struct list_head *qe;
-
-	i = 0;
-	list_for_each(qe, &ib_mod->ib_free_q)
-		i++;
-
-	i = 0;
-	list_for_each(qe, &ib_mod->intr_free_q)
-		i++;
-
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++) {
-		j = 0;
-		list_for_each(qe, &ib_mod->ibidx_seg_pool[i])
-			j++;
-	}
-
-	ib_mod->bna = NULL;
-}
-
-static struct bna_ib *
-bna_ib_get(struct bna_ib_mod *ib_mod,
-		enum bna_intr_type intr_type,
-		int vector)
-{
-	struct bna_ib *ib;
-	struct bna_intr *intr;
-
-	if (intr_type == BNA_INTR_T_INTX)
-		vector = (1 << vector);
-
-	intr = bna_intr_get(ib_mod, intr_type, vector);
-	if (intr == NULL)
-		return NULL;
-
-	if (intr->ib) {
-		if (intr->ib->ref_count == BFI_IBIDX_MAX_SEGSIZE) {
-			bna_intr_put(ib_mod, intr);
-			return NULL;
-		}
-		intr->ib->ref_count++;
-		return intr->ib;
-	}
-
-	if (list_empty(&ib_mod->ib_free_q)) {
-		bna_intr_put(ib_mod, intr);
-		return NULL;
-	}
-
-	bfa_q_deq(&ib_mod->ib_free_q, &ib);
-	bfa_q_qe_init(&ib->qe);
-
-	ib->ref_count = 1;
-	ib->start_count = 0;
-	ib->idx_mask = 0;
-
-	ib->intr = intr;
-	ib->idx_seg = NULL;
-	intr->ib = ib;
-
-	ib->bna = ib_mod->bna;
-
-	return ib;
-}
-
-static void
-bna_ib_put(struct bna_ib_mod *ib_mod, struct bna_ib *ib)
-{
-	bna_intr_put(ib_mod, ib->intr);
-
-	ib->ref_count--;
-
-	if (ib->ref_count == 0) {
-		ib->intr = NULL;
-		ib->bna = NULL;
-		list_add_tail(&ib->qe, &ib_mod->ib_free_q);
-	}
-}
-
-/* Returns index offset - starting from 0 */
-static int
-bna_ib_reserve_idx(struct bna_ib *ib)
-{
-	struct bna_ib_mod *ib_mod = &ib->bna->ib_mod;
-	struct bna_ibidx_seg *idx_seg;
-	int idx;
-	int num_idx;
-	int q_idx;
-
-	/* Find the first free index position */
-	bna_ib_find_free_ibidx(ib->idx_mask, idx);
-	if (idx == BFI_IBIDX_MAX_SEGSIZE)
-		return -1;
-
-	/*
-	 * Calculate the total number of indexes held by this IB,
-	 * including the index newly reserved above.
-	 */
-	bna_ib_count_ibidx((ib->idx_mask | (1 << idx)), num_idx);
-
-	/* See if there is a free space in the index segment held by this IB */
-	if (ib->idx_seg && (num_idx <= ib->idx_seg->ib_seg_size)) {
-		ib->idx_mask |= (1 << idx);
-		return idx;
-	}
-
-	if (ib->start_count)
-		return -1;
-
-	/* Allocate a new segment */
-	bna_ib_select_segpool(num_idx, q_idx);
-	while (1) {
-		if (q_idx == BFI_IBIDX_TOTAL_POOLS)
-			return -1;
-		if (!list_empty(&ib_mod->ibidx_seg_pool[q_idx]))
-			break;
-		q_idx++;
-	}
-	bfa_q_deq(&ib_mod->ibidx_seg_pool[q_idx], &idx_seg);
-	bfa_q_qe_init(&idx_seg->qe);
-
-	/* Free the old segment */
-	if (ib->idx_seg) {
-		bna_ib_select_segpool(ib->idx_seg->ib_seg_size, q_idx);
-		list_add_tail(&ib->idx_seg->qe, &ib_mod->ibidx_seg_pool[q_idx]);
-	}
-
-	ib->idx_seg = idx_seg;
-
-	ib->idx_mask |= (1 << idx);
-
-	return idx;
-}
-
-static void
-bna_ib_release_idx(struct bna_ib *ib, int idx)
-{
-	struct bna_ib_mod *ib_mod = &ib->bna->ib_mod;
-	struct bna_ibidx_seg *idx_seg;
-	int num_idx;
-	int cur_q_idx;
-	int new_q_idx;
-
-	ib->idx_mask &= ~(1 << idx);
-
-	if (ib->start_count)
-		return;
-
-	bna_ib_count_ibidx(ib->idx_mask, num_idx);
-
-	/*
-	 * Free the segment, if there are no more indexes in the segment
-	 * held by this IB
-	 */
-	if (!num_idx) {
-		bna_ib_select_segpool(ib->idx_seg->ib_seg_size, cur_q_idx);
-		list_add_tail(&ib->idx_seg->qe,
-			&ib_mod->ibidx_seg_pool[cur_q_idx]);
-		ib->idx_seg = NULL;
-		return;
-	}
-
-	/* See if we can move to a smaller segment */
-	bna_ib_select_segpool(num_idx, new_q_idx);
-	bna_ib_select_segpool(ib->idx_seg->ib_seg_size, cur_q_idx);
-	while (new_q_idx < cur_q_idx) {
-		if (!list_empty(&ib_mod->ibidx_seg_pool[new_q_idx]))
-			break;
-		new_q_idx++;
-	}
-	if (new_q_idx < cur_q_idx) {
-		/* Select the new smaller segment */
-		bfa_q_deq(&ib_mod->ibidx_seg_pool[new_q_idx], &idx_seg);
-		bfa_q_qe_init(&idx_seg->qe);
-		/* Free the old segment */
-		list_add_tail(&ib->idx_seg->qe,
-			&ib_mod->ibidx_seg_pool[cur_q_idx]);
-		ib->idx_seg = idx_seg;
-	}
-}
-
-static int
-bna_ib_config(struct bna_ib *ib, struct bna_ib_config *ib_config)
-{
-	if (ib->start_count)
-		return -1;
-
-	ib->ib_config.coalescing_timeo = ib_config->coalescing_timeo;
-	ib->ib_config.interpkt_timeo = ib_config->interpkt_timeo;
-	ib->ib_config.interpkt_count = ib_config->interpkt_count;
-	ib->ib_config.ctrl_flags = ib_config->ctrl_flags;
-
-	ib->ib_config.ctrl_flags |= BFI_IB_CF_MASTER_ENABLE;
-	if (ib->intr->intr_type == BNA_INTR_T_MSIX)
-		ib->ib_config.ctrl_flags |= BFI_IB_CF_MSIX_MODE;
-
-	return 0;
-}
-
-static void
-bna_ib_start(struct bna_ib *ib)
-{
-	struct bna_ib_blk_mem ib_cfg;
-	struct bna_ib_blk_mem *ib_mem;
-	u32 pg_num;
-	u32 intx_mask;
-	int i;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	ib->start_count++;
-
-	if (ib->start_count > 1)
-		return;
-
-	ib_cfg.host_addr_lo = (u32)(ib->ib_seg_host_addr.lsb);
-	ib_cfg.host_addr_hi = (u32)(ib->ib_seg_host_addr.msb);
-
-	ib_cfg.clsc_n_ctrl_n_msix = (((u32)
-				     ib->ib_config.coalescing_timeo << 16) |
-				((u32)ib->ib_config.ctrl_flags << 8) |
-				(ib->intr->vector));
-	ib_cfg.ipkt_n_ent_n_idxof =
-				((u32)
-				 (ib->ib_config.interpkt_timeo & 0xf) << 16) |
-				((u32)ib->idx_seg->ib_seg_size << 8) |
-				(ib->idx_seg->ib_idx_tbl_offset);
-	ib_cfg.ipkt_cnt_cfg_n_unacked = ((u32)
-					 ib->ib_config.interpkt_count << 24);
-
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + ib->bna->port_num,
-				HQM_IB_RAM_BASE_OFFSET);
-	writel(pg_num, ib->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(ib->bna->pcidev.pci_bar_kva,
-					HQM_IB_RAM_BASE_OFFSET);
-
-	ib_mem = (struct bna_ib_blk_mem *)0;
-	off = (unsigned long)&ib_mem[ib->ib_id].host_addr_lo;
-	writel(htonl(ib_cfg.host_addr_lo), base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].host_addr_hi;
-	writel(htonl(ib_cfg.host_addr_hi), base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].clsc_n_ctrl_n_msix;
-	writel(ib_cfg.clsc_n_ctrl_n_msix, base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].ipkt_n_ent_n_idxof;
-	writel(ib_cfg.ipkt_n_ent_n_idxof, base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].ipkt_cnt_cfg_n_unacked;
-	writel(ib_cfg.ipkt_cnt_cfg_n_unacked, base_addr + off);
-
-	ib->door_bell.doorbell_ack = BNA_DOORBELL_IB_INT_ACK(
-				(u32)ib->ib_config.coalescing_timeo, 0);
-
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + ib->bna->port_num,
-				HQM_INDX_TBL_RAM_BASE_OFFSET);
-	writel(pg_num, ib->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(ib->bna->pcidev.pci_bar_kva,
-					HQM_INDX_TBL_RAM_BASE_OFFSET);
-	for (i = 0; i < ib->idx_seg->ib_seg_size; i++) {
-		off = (unsigned long)
-		((ib->idx_seg->ib_idx_tbl_offset + i) * BFI_IBIDX_SIZE);
-		writel(0, base_addr + off);
-	}
-
-	if (ib->intr->intr_type == BNA_INTR_T_INTX) {
-		bna_intx_disable(ib->bna, intx_mask);
-		intx_mask &= ~(ib->intr->vector);
-		bna_intx_enable(ib->bna, intx_mask);
-	}
-}
-
-static void
-bna_ib_stop(struct bna_ib *ib)
-{
-	u32 intx_mask;
-
-	ib->start_count--;
-
-	if (ib->start_count == 0) {
-		writel(BNA_DOORBELL_IB_INT_DISABLE,
-				ib->door_bell.doorbell_addr);
-		if (ib->intr->intr_type == BNA_INTR_T_INTX) {
-			bna_intx_disable(ib->bna, intx_mask);
-			intx_mask |= (ib->intr->vector);
-			bna_intx_enable(ib->bna, intx_mask);
-		}
-	}
-}
-
-static void
-bna_ib_fail(struct bna_ib *ib)
-{
-	ib->start_count = 0;
-}
-
-/**
- * RXF
- */
-static void rxf_enable(struct bna_rxf *rxf);
-static void rxf_disable(struct bna_rxf *rxf);
-static void __rxf_config_set(struct bna_rxf *rxf);
-static void __rxf_rit_set(struct bna_rxf *rxf);
-static void __bna_rxf_stat_clr(struct bna_rxf *rxf);
-static int rxf_process_packet_filter(struct bna_rxf *rxf);
-static int rxf_clear_packet_filter(struct bna_rxf *rxf);
-static void rxf_reset_packet_filter(struct bna_rxf *rxf);
-static void rxf_cb_enabled(void *arg, int status);
-static void rxf_cb_disabled(void *arg, int status);
-static void bna_rxf_cb_stats_cleared(void *arg, int status);
-static void __rxf_enable(struct bna_rxf *rxf);
-static void __rxf_disable(struct bna_rxf *rxf);
-
-bfa_fsm_state_decl(bna_rxf, stopped, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, start_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, cam_fltr_mod_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, started, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, cam_fltr_clr_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, stop_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, pause_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, resume_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, stat_clr_wait, struct bna_rxf,
-			enum bna_rxf_event);
-
-static struct bfa_sm_table rxf_sm_table[] = {
-	{BFA_SM(bna_rxf_sm_stopped), BNA_RXF_STOPPED},
-	{BFA_SM(bna_rxf_sm_start_wait), BNA_RXF_START_WAIT},
-	{BFA_SM(bna_rxf_sm_cam_fltr_mod_wait), BNA_RXF_CAM_FLTR_MOD_WAIT},
-	{BFA_SM(bna_rxf_sm_started), BNA_RXF_STARTED},
-	{BFA_SM(bna_rxf_sm_cam_fltr_clr_wait), BNA_RXF_CAM_FLTR_CLR_WAIT},
-	{BFA_SM(bna_rxf_sm_stop_wait), BNA_RXF_STOP_WAIT},
-	{BFA_SM(bna_rxf_sm_pause_wait), BNA_RXF_PAUSE_WAIT},
-	{BFA_SM(bna_rxf_sm_resume_wait), BNA_RXF_RESUME_WAIT},
-	{BFA_SM(bna_rxf_sm_stat_clr_wait), BNA_RXF_STAT_CLR_WAIT}
-};
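
Each bfa_fsm_state_decl() above declares a state handler plus a *_entry()
hook, the current state is stored as a function pointer, and rxf_sm_table
maps that pointer back to a flat enum for bna_rxf_state_get(). A reduced
sketch of the same dispatch pattern (all names below are invented for
illustration; this is not the bfa macro machinery itself):

#include <stdio.h>

enum demo_event { E_START, E_STOP };
enum demo_state { ST_STOPPED, ST_STARTED };

struct demo_fsm;
typedef void (*demo_sm_t)(struct demo_fsm *fsm, enum demo_event event);

struct demo_fsm {
	demo_sm_t state;		/* current state == handler function */
};

static void demo_sm_stopped(struct demo_fsm *fsm, enum demo_event event);
static void demo_sm_started(struct demo_fsm *fsm, enum demo_event event);

/* Maps a state handler back to a flat enum, like bfa_sm_to_state(). */
static const struct { demo_sm_t sm; enum demo_state id; } demo_sm_table[] = {
	{ demo_sm_stopped, ST_STOPPED },
	{ demo_sm_started, ST_STARTED },
};

static void demo_sm_stopped(struct demo_fsm *fsm, enum demo_event event)
{
	if (event == E_START)
		fsm->state = demo_sm_started;	/* state transition */
}

static void demo_sm_started(struct demo_fsm *fsm, enum demo_event event)
{
	if (event == E_STOP)
		fsm->state = demo_sm_stopped;
}

int main(void)
{
	struct demo_fsm fsm = { demo_sm_stopped };
	int i;

	fsm.state(&fsm, E_START);	/* "send event" == call the handler */

	for (i = 0; i < 2; i++)		/* table lookup, as in state_get() */
		if (demo_sm_table[i].sm == fsm.state)
			printf("state = %d\n", demo_sm_table[i].id);	/* 1 */
	return 0;
}

In the driver, bfa_fsm_send_event() amounts to calling the current handler in
this way, and bfa_fsm_set_state() also runs the new state's *_entry()
function, which is why every state above defines one.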
-
-static void
-bna_rxf_sm_stopped_entry(struct bna_rxf *rxf)
-{
-	call_rxf_stop_cbfn(rxf, BNA_CB_SUCCESS);
-}
-
-static void
-bna_rxf_sm_stopped(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_START:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_start_wait);
-		break;
-
-	case RXF_E_STOP:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_FAIL:
-		/* No-op */
-		break;
-
-	case RXF_E_CAM_FLTR_MOD:
-		call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-		break;
-
-	case RXF_E_STARTED:
-	case RXF_E_STOPPED:
-	case RXF_E_CAM_FLTR_RESP:
-		/**
-		 * These events are received due to flushing of mbox
-		 * when device fails
-		 */
-		/* No-op */
-		break;
-
-	case RXF_E_PAUSE:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_PAUSED;
-		call_rxf_pause_cbfn(rxf, BNA_CB_SUCCESS);
-		break;
-
-	case RXF_E_RESUME:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_RUNNING;
-		call_rxf_resume_cbfn(rxf, BNA_CB_SUCCESS);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_start_wait_entry(struct bna_rxf *rxf)
-{
-	__rxf_config_set(rxf);
-	__rxf_rit_set(rxf);
-	rxf_enable(rxf);
-}
-
-static void
-bna_rxf_sm_start_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_STOP:
-		/**
-		 * STOP originates from bnad. When this happens, it
-		 * cannot be waiting for a filter update
-		 */
-		call_rxf_start_cbfn(rxf, BNA_CB_INTERRUPT);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stop_wait);
-		break;
-
-	case RXF_E_FAIL:
-		call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-		call_rxf_start_cbfn(rxf, BNA_CB_FAIL);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_CAM_FLTR_MOD:
-		/* No-op */
-		break;
-
-	case RXF_E_STARTED:
-		/**
-		 * Force rxf_process_packet_filter() to go through the
-		 * initial config
-		 */
-		if ((rxf->ucast_active_mac != NULL) &&
-			(rxf->ucast_pending_set == 0))
-			rxf->ucast_pending_set = 1;
-
-		if (rxf->rss_status == BNA_STATUS_T_ENABLED)
-			rxf->rxf_flags |= BNA_RXF_FL_RSS_CONFIG_PENDING;
-
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_mod_wait);
-		break;
-
-	case RXF_E_PAUSE:
-	case RXF_E_RESUME:
-		rxf->rxf_flags |= BNA_RXF_FL_OPERSTATE_CHANGED;
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_cam_fltr_mod_wait_entry(struct bna_rxf *rxf)
-{
-	if (!rxf_process_packet_filter(rxf)) {
-		/* No more pending CAM entries to update */
-		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-	}
-}
-
-static void
-bna_rxf_sm_cam_fltr_mod_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_STOP:
-		/**
-		 * STOP originates from bnad. When this happens, it
-		 * cannot be waiting for a filter update
-		 */
-		call_rxf_start_cbfn(rxf, BNA_CB_INTERRUPT);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_clr_wait);
-		break;
-
-	case RXF_E_FAIL:
-		rxf_reset_packet_filter(rxf);
-		call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-		call_rxf_start_cbfn(rxf, BNA_CB_FAIL);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_CAM_FLTR_MOD:
-		/* No-op */
-		break;
-
-	case RXF_E_CAM_FLTR_RESP:
-		if (!rxf_process_packet_filter(rxf)) {
-			/* No more pending CAM entries to update */
-			call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-			bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-		}
-		break;
-
-	case RXF_E_PAUSE:
-	case RXF_E_RESUME:
-		rxf->rxf_flags |= BNA_RXF_FL_OPERSTATE_CHANGED;
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_started_entry(struct bna_rxf *rxf)
-{
-	call_rxf_start_cbfn(rxf, BNA_CB_SUCCESS);
-
-	if (rxf->rxf_flags & BNA_RXF_FL_OPERSTATE_CHANGED) {
-		if (rxf->rxf_oper_state == BNA_RXF_OPER_STATE_PAUSED)
-			bfa_fsm_send_event(rxf, RXF_E_PAUSE);
-		else
-			bfa_fsm_send_event(rxf, RXF_E_RESUME);
-	}
-
-}
-
-static void
-bna_rxf_sm_started(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_STOP:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_clr_wait);
-		/* Hack to get the FSM to start clearing CAM entries */
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_RESP);
-		break;
-
-	case RXF_E_FAIL:
-		rxf_reset_packet_filter(rxf);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_CAM_FLTR_MOD:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_mod_wait);
-		break;
-
-	case RXF_E_PAUSE:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_pause_wait);
-		break;
-
-	case RXF_E_RESUME:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_resume_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_cam_fltr_clr_wait_entry(struct bna_rxf *rxf)
-{
-	/**
-	 * Note: Do not add rxf_clear_packet_filter here.
-	 * It will overstep mbox when this transition happens:
-	 *	cam_fltr_mod_wait -> cam_fltr_clr_wait on RXF_E_STOP event
-	 */
-}
-
-static void
-bna_rxf_sm_cam_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of stopping, initiated by
-		 * bnad. When this happens, no one can be waiting for
-		 * start or filter update
-		 */
-		rxf_reset_packet_filter(rxf);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_CAM_FLTR_RESP:
-		if (!rxf_clear_packet_filter(rxf)) {
-			/* No more pending CAM entries to clear */
-			bfa_fsm_set_state(rxf, bna_rxf_sm_stop_wait);
-			rxf_disable(rxf);
-		}
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_stop_wait_entry(struct bna_rxf *rxf)
-{
-	/**
-	 * NOTE: Do not add rxf_disable here.
-	 * It will overstep mbox when this transition happens:
-	 *	start_wait -> stop_wait on RXF_E_STOP event
-	 */
-}
-
-static void
-bna_rxf_sm_stop_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of stopping, initiated by
-		 * bnad. When this happens, no one can be waiting for
-		 * start or filter update
-		 */
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_STARTED:
-		/**
-		 * This event is received due to abrupt transition from
-		 * bna_rxf_sm_start_wait state on receiving
-		 * RXF_E_STOP event
-		 */
-		rxf_disable(rxf);
-		break;
-
-	case RXF_E_STOPPED:
-		/**
-		 * FSM was in the process of stopping, initiated by
-		 * bnad. When this happens, no one can be waiting for
-		 * start or filter update
-		 */
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stat_clr_wait);
-		break;
-
-	case RXF_E_PAUSE:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_PAUSED;
-		break;
-
-	case RXF_E_RESUME:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_RUNNING;
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_pause_wait_entry(struct bna_rxf *rxf)
-{
-	rxf->rxf_flags &=
-		~(BNA_RXF_FL_OPERSTATE_CHANGED | BNA_RXF_FL_RXF_ENABLED);
-	__rxf_disable(rxf);
-}
-
-static void
-bna_rxf_sm_pause_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of disabling rxf, initiated by
-		 * bnad.
-		 */
-		call_rxf_pause_cbfn(rxf, BNA_CB_FAIL);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_STOPPED:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_PAUSED;
-		call_rxf_pause_cbfn(rxf, BNA_CB_SUCCESS);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-		break;
-
-	/*
-	 * Since PAUSE/RESUME can only be sent by bnad, we don't expect
-	 * any other event during these states
-	 */
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_resume_wait_entry(struct bna_rxf *rxf)
-{
-	rxf->rxf_flags &= ~(BNA_RXF_FL_OPERSTATE_CHANGED);
-	rxf->rxf_flags |= BNA_RXF_FL_RXF_ENABLED;
-	__rxf_enable(rxf);
-}
-
-static void
-bna_rxf_sm_resume_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of disabling rxf, initiated by
-		 * bnad.
-		 */
-		call_rxf_resume_cbfn(rxf, BNA_CB_FAIL);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_STARTED:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_RUNNING;
-		call_rxf_resume_cbfn(rxf, BNA_CB_SUCCESS);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-		break;
-
-	/*
-	 * Since PAUSE/RESUME can only be sent by bnad, we don't expect
-	 * any other event during these states
-	 */
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_stat_clr_wait_entry(struct bna_rxf *rxf)
-{
-	__bna_rxf_stat_clr(rxf);
-}
-
-static void
-bna_rxf_sm_stat_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-	case RXF_E_STAT_CLEARED:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-__rxf_enable(struct bna_rxf *rxf)
-{
-	struct bfi_ll_rxf_multi_req ll_req;
-	u32 bm[2] = {0, 0};
-
-	if (rxf->rxf_id < 32)
-		bm[0] = 1 << rxf->rxf_id;
-	else
-		bm[1] = 1 << (rxf->rxf_id - 32);
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_RX_REQ, 0);
-	ll_req.rxf_id_mask[0] = htonl(bm[0]);
-	ll_req.rxf_id_mask[1] = htonl(bm[1]);
-	ll_req.enable = 1;
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &ll_req, sizeof(ll_req),
-			rxf_cb_enabled, rxf);
-
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
-}
-
-static void
-__rxf_disable(struct bna_rxf *rxf)
-{
-	struct bfi_ll_rxf_multi_req ll_req;
-	u32 bm[2] = {0, 0};
-
-	if (rxf->rxf_id < 32)
-		bm[0] = 1 << rxf->rxf_id;
-	else
-		bm[1] = 1 << (rxf->rxf_id - 32);
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_RX_REQ, 0);
-	ll_req.rxf_id_mask[0] = htonl(bm[0]);
-	ll_req.rxf_id_mask[1] = htonl(bm[1]);
-	ll_req.enable = 0;
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &ll_req, sizeof(ll_req),
-			rxf_cb_disabled, rxf);
-
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
-}
-
-static void
-__rxf_config_set(struct bna_rxf *rxf)
-{
-	u32 i;
-	struct bna_rss_mem *rss_mem;
-	struct bna_rx_fndb_ram *rx_fndb_ram;
-	struct bna *bna = rxf->rx->bna;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-			RSS_TABLE_BASE_OFFSET);
-
-	rss_mem = (struct bna_rss_mem *)0;
-
-	/* Configure RSS if required */
-	if (rxf->ctrl_flags & BNA_RXF_CF_RSS_ENABLE) {
-		/* configure RSS Table */
-		writel(BNA_GET_PAGE_NUM(RAD0_MEM_BLK_BASE_PG_NUM +
-			bna->port_num, RSS_TABLE_BASE_OFFSET),
-					bna->regs.page_addr);
-
-		/* temporarily disable RSS, while hash value is written */
-		off = (unsigned long)&rss_mem[0].type_n_hash;
-		writel(0, base_addr + off);
-
-		for (i = 0; i < BFI_RSS_HASH_KEY_LEN; i++) {
-			off = (unsigned long)
-			&rss_mem[0].hash_key[(BFI_RSS_HASH_KEY_LEN - 1) - i];
-			writel(htonl(rxf->rss_cfg.toeplitz_hash_key[i]),
-			base_addr + off);
-		}
-
-		off = (unsigned long)&rss_mem[0].type_n_hash;
-		writel(rxf->rss_cfg.hash_type | rxf->rss_cfg.hash_mask,
-			base_addr + off);
-	}
-
-	/* Configure RxF */
-	writel(BNA_GET_PAGE_NUM(
-		LUT0_MEM_BLK_BASE_PG_NUM + (bna->port_num * 2),
-		RX_FNDB_RAM_BASE_OFFSET),
-		bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-		RX_FNDB_RAM_BASE_OFFSET);
-
-	rx_fndb_ram = (struct bna_rx_fndb_ram *)0;
-
-	/* We always use RSS table 0 */
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].rss_prop;
-	writel(rxf->ctrl_flags & BNA_RXF_CF_RSS_ENABLE,
-		base_addr + off);
-
-	/* small / large buffer enable/disable */
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].size_routing_props;
-	writel((rxf->ctrl_flags & BNA_RXF_CF_SM_LG_RXQ) | 0x80,
-		base_addr + off);
-
-	/* RIT offset,  HDS forced offset, multicast RxQ Id */
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].rit_hds_mcastq;
-	writel((rxf->rit_segment->rit_offset << 16) |
-		(rxf->forced_offset << 8) |
-		(rxf->hds_cfg.hdr_type & BNA_HDS_FORCED) | rxf->mcast_rxq_id,
-		base_addr + off);
-
-	/*
-	 * default vlan tag, default function enable, strip vlan bytes,
-	 * HDS type, header size
-	 */
-
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].control_flags;
-	writel(((u32)rxf->default_vlan_tag << 16) |
-		(rxf->ctrl_flags &
-			(BNA_RXF_CF_DEFAULT_VLAN |
-			BNA_RXF_CF_DEFAULT_FUNCTION_ENABLE |
-			BNA_RXF_CF_VLAN_STRIP)) |
-		(rxf->hds_cfg.hdr_type & ~BNA_HDS_FORCED) |
-		rxf->hds_cfg.header_size,
-		base_addr + off);
-}
-
-void
-__rxf_vlan_filter_set(struct bna_rxf *rxf, enum bna_status status)
-{
-	struct bna *bna = rxf->rx->bna;
-	int i;
-
-	writel(BNA_GET_PAGE_NUM(LUT0_MEM_BLK_BASE_PG_NUM +
-			(bna->port_num * 2), VLAN_RAM_BASE_OFFSET),
-			bna->regs.page_addr);
-
-	if (status == BNA_STATUS_T_ENABLED) {
-		/* enable VLAN filtering on this function */
-		for (i = 0; i <= BFI_MAX_VLAN / 32; i++) {
-			writel(rxf->vlan_filter_table[i],
-					BNA_GET_VLAN_MEM_ENTRY_ADDR
-					(bna->pcidev.pci_bar_kva, rxf->rxf_id,
-						i * 32));
-		}
-	} else {
-		/* disable VLAN filtering on this function */
-		for (i = 0; i <= BFI_MAX_VLAN / 32; i++) {
-			writel(0xffffffff,
-					BNA_GET_VLAN_MEM_ENTRY_ADDR
-					(bna->pcidev.pci_bar_kva, rxf->rxf_id,
-						i * 32));
-		}
-	}
-}
-
-static void
-__rxf_rit_set(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-	struct bna_rit_mem *rit_mem;
-	int i;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-			FUNCTION_TO_RXQ_TRANSLATE);
-
-	rit_mem = (struct bna_rit_mem *)0;
-
-	writel(BNA_GET_PAGE_NUM(RXA0_MEM_BLK_BASE_PG_NUM + bna->port_num,
-		FUNCTION_TO_RXQ_TRANSLATE),
-		bna->regs.page_addr);
-
-	for (i = 0; i < rxf->rit_segment->rit_size; i++) {
-		off = (unsigned long)&rit_mem[i + rxf->rit_segment->rit_offset];
-		writel(rxf->rit_segment->rit[i].large_rxq_id << 6 |
-			rxf->rit_segment->rit[i].small_rxq_id,
-			base_addr + off);
-	}
-}
-
-static void
-__bna_rxf_stat_clr(struct bna_rxf *rxf)
-{
-	struct bfi_ll_stats_req ll_req;
-	u32 bm[2] = {0, 0};
-
-	if (rxf->rxf_id < 32)
-		bm[0] = 1 << rxf->rxf_id;
-	else
-		bm[1] = 1 << (rxf->rxf_id - 32);
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_STATS_CLEAR_REQ, 0);
-	ll_req.stats_mask = 0;
-	ll_req.txf_id_mask[0] = 0;
-	ll_req.txf_id_mask[1] =	0;
-
-	ll_req.rxf_id_mask[0] = htonl(bm[0]);
-	ll_req.rxf_id_mask[1] = htonl(bm[1]);
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_rxf_cb_stats_cleared, rxf);
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
-}
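
__rxf_enable(), __rxf_disable() and __bna_rxf_stat_clr() above all build the
same two-word function-ID mask: IDs 0-31 set a bit in bm[0], IDs 32-63 in
bm[1], and each word is byte-swapped with htonl() before it is copied into
the mailbox request. A small worked example of that arithmetic (the helper
name is made up for illustration):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>	/* htonl() */

/* Set the bit for one function ID in a 64-bit mask held as two u32 words. */
static void demo_set_id_mask(uint32_t bm[2], unsigned int id)
{
	if (id < 32)
		bm[0] |= 1u << id;
	else
		bm[1] |= 1u << (id - 32);
}

int main(void)
{
	uint32_t bm[2] = { 0, 0 };

	demo_set_id_mask(bm, 5);	/* bm[0] = 0x00000020 */
	demo_set_id_mask(bm, 40);	/* bm[1] = 0x00000100 */

	/* The htonl() suggests the firmware expects big-endian words; it is
	 * applied before the mask is copied into the bfi_ll request above. */
	printf("%08x %08x\n", htonl(bm[0]), htonl(bm[1]));
	return 0;
}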
-
-static void
-rxf_enable(struct bna_rxf *rxf)
-{
-	if (rxf->rxf_oper_state == BNA_RXF_OPER_STATE_PAUSED)
-		bfa_fsm_send_event(rxf, RXF_E_STARTED);
-	else {
-		rxf->rxf_flags |= BNA_RXF_FL_RXF_ENABLED;
-		__rxf_enable(rxf);
-	}
-}
-
-static void
-rxf_cb_enabled(void *arg, int status)
-{
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
-
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-	bfa_fsm_send_event(rxf, RXF_E_STARTED);
-}
-
-static void
-rxf_disable(struct bna_rxf *rxf)
-{
-	if (rxf->rxf_oper_state == BNA_RXF_OPER_STATE_PAUSED)
-		bfa_fsm_send_event(rxf, RXF_E_STOPPED);
-	else {
-		rxf->rxf_flags &= ~BNA_RXF_FL_RXF_ENABLED;
-		__rxf_disable(rxf);
-	}
-}
-
-static void
-rxf_cb_disabled(void *arg, int status)
-{
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
-
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-	bfa_fsm_send_event(rxf, RXF_E_STOPPED);
-}
-
-void
-rxf_cb_cam_fltr_mbox_cmd(void *arg, int status)
-{
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
-
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-
-	bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_RESP);
-}
-
-static void
-bna_rxf_cb_stats_cleared(void *arg, int status)
-{
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
-
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-	bfa_fsm_send_event(rxf, RXF_E_STAT_CLEARED);
-}
-
-void
-rxf_cam_mbox_cmd(struct bna_rxf *rxf, u8 cmd,
-		const struct bna_mac *mac_addr)
-{
-	struct bfi_ll_mac_addr_req req;
-
-	bfi_h2i_set(req.mh, BFI_MC_LL, cmd, 0);
-
-	req.rxf_id = rxf->rxf_id;
-	memcpy(&req.mac_addr, (void *)&mac_addr->addr, ETH_ALEN);
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &req, sizeof(req),
-				rxf_cb_cam_fltr_mbox_cmd, rxf);
-
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
-}
-
-static int
-rxf_process_packet_filter_mcast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* Add multicast entries */
-	if (!list_empty(&rxf->mcast_pending_add_q)) {
-		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_ADD_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->mcast_active_q);
-		return 1;
-	}
-
-	/* Delete multicast entries previously added */
-	if (!list_empty(&rxf->mcast_pending_del_q)) {
-		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_DEL_REQ, mac);
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-		return 1;
-	}
-
-	return 0;
-}
-
-static int
-rxf_process_packet_filter_vlan(struct bna_rxf *rxf)
-{
-	/* Apply the VLAN filter */
-	if (rxf->rxf_flags & BNA_RXF_FL_VLAN_CONFIG_PENDING) {
-		rxf->rxf_flags &= ~BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		if (!(rxf->rxmode_active & BNA_RXMODE_PROMISC))
-			__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-	}
-
-	/* Apply RSS configuration */
-	if (rxf->rxf_flags & BNA_RXF_FL_RSS_CONFIG_PENDING) {
-		rxf->rxf_flags &= ~BNA_RXF_FL_RSS_CONFIG_PENDING;
-		if (rxf->rss_status == BNA_STATUS_T_DISABLED) {
-			/* RSS is being disabled */
-			rxf->ctrl_flags &= ~BNA_RXF_CF_RSS_ENABLE;
-			__rxf_rit_set(rxf);
-			__rxf_config_set(rxf);
-		} else {
-			/* RSS is being enabled or reconfigured */
-			rxf->ctrl_flags |= BNA_RXF_CF_RSS_ENABLE;
-			__rxf_rit_set(rxf);
-			__rxf_config_set(rxf);
-		}
-	}
-
-	return 0;
-}
-
-/**
- * Processes pending ucast, mcast entry addition/deletion and issues mailbox
- * command. Also processes pending filter configuration - promiscuous mode,
- * default mode, allmulti mode and issues mailbox command or directly applies
- * to h/w
- */
-static int
-rxf_process_packet_filter(struct bna_rxf *rxf)
-{
-	/* Set the default MAC first */
-	if (rxf->ucast_pending_set > 0) {
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_SET_REQ,
-				rxf->ucast_active_mac);
-		rxf->ucast_pending_set--;
-		return 1;
-	}
-
-	if (rxf_process_packet_filter_ucast(rxf))
-		return 1;
-
-	if (rxf_process_packet_filter_mcast(rxf))
-		return 1;
-
-	if (rxf_process_packet_filter_promisc(rxf))
-		return 1;
-
-	if (rxf_process_packet_filter_allmulti(rxf))
-		return 1;
-
-	if (rxf_process_packet_filter_vlan(rxf))
-		return 1;
-
-	return 0;
-}
-
-static int
-rxf_clear_packet_filter_mcast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* 3. delete pending mcast entries */
-	if (!list_empty(&rxf->mcast_pending_del_q)) {
-		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_DEL_REQ, mac);
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-		return 1;
-	}
-
-	/* 4. clear active mcast entries; move them to pending_add_q */
-	if (!list_empty(&rxf->mcast_active_q)) {
-		bfa_q_deq(&rxf->mcast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_DEL_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
-		return 1;
-	}
-
-	return 0;
-}
-
-/**
- * In the rxf stop path, processes pending ucast/mcast delete queue and issues
- * the mailbox command. Moves the active ucast/mcast entries to pending add q,
- * so that they are added to CAM again in the rxf start path. Moves the current
- * filter settings - promiscuous, default, allmulti - to pending filter
- * configuration
- */
-static int
-rxf_clear_packet_filter(struct bna_rxf *rxf)
-{
-	if (rxf_clear_packet_filter_ucast(rxf))
-		return 1;
-
-	if (rxf_clear_packet_filter_mcast(rxf))
-		return 1;
-
-	/* 5. clear active default MAC in the CAM */
-	if (rxf->ucast_pending_set > 0)
-		rxf->ucast_pending_set = 0;
-
-	if (rxf_clear_packet_filter_promisc(rxf))
-		return 1;
-
-	if (rxf_clear_packet_filter_allmulti(rxf))
-		return 1;
-
-	return 0;
-}
-
-static void
-rxf_reset_packet_filter_mcast(struct bna_rxf *rxf)
-{
-	struct list_head *qe;
-	struct bna_mac *mac;
-
-	/* 3. Move active mcast entries to pending_add_q */
-	while (!list_empty(&rxf->mcast_active_q)) {
-		bfa_q_deq(&rxf->mcast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		list_add_tail(qe, &rxf->mcast_pending_add_q);
-	}
-
-	/* 4. Throw away delete pending mcast entries */
-	while (!list_empty(&rxf->mcast_pending_del_q)) {
-		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-	}
-}
-
-/**
- * In the rxf fail path, throws away the ucast/mcast entries pending for
- * deletion, moves all active ucast/mcast entries to pending queue so that
- * they are added back to CAM in the rxf start path. Also moves the current
- * filter configuration to pending filter configuration.
- */
-static void
-rxf_reset_packet_filter(struct bna_rxf *rxf)
-{
-	rxf_reset_packet_filter_ucast(rxf);
-
-	rxf_reset_packet_filter_mcast(rxf);
-
-	/* 5. Turn off ucast set flag */
-	rxf->ucast_pending_set = 0;
-
-	rxf_reset_packet_filter_promisc(rxf);
-
-	rxf_reset_packet_filter_allmulti(rxf);
-}
-
-static void
-bna_rxf_init(struct bna_rxf *rxf,
-		struct bna_rx *rx,
-		struct bna_rx_config *q_config)
-{
-	struct list_head *qe;
-	struct bna_rxp *rxp;
-
-	/* rxf_id is initialized during rx_mod init */
-	rxf->rx = rx;
-
-	INIT_LIST_HEAD(&rxf->ucast_pending_add_q);
-	INIT_LIST_HEAD(&rxf->ucast_pending_del_q);
-	rxf->ucast_pending_set = 0;
-	INIT_LIST_HEAD(&rxf->ucast_active_q);
-	rxf->ucast_active_mac = NULL;
-
-	INIT_LIST_HEAD(&rxf->mcast_pending_add_q);
-	INIT_LIST_HEAD(&rxf->mcast_pending_del_q);
-	INIT_LIST_HEAD(&rxf->mcast_active_q);
-
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-
-	if (q_config->vlan_strip_status == BNA_STATUS_T_ENABLED)
-		rxf->ctrl_flags |= BNA_RXF_CF_VLAN_STRIP;
-
-	rxf->rxf_oper_state = (q_config->paused) ?
-		BNA_RXF_OPER_STATE_PAUSED : BNA_RXF_OPER_STATE_RUNNING;
-
-	bna_rxf_adv_init(rxf, rx, q_config);
-
-	rxf->rit_segment = bna_rit_mod_seg_get(&rxf->rx->bna->rit_mod,
-					q_config->num_paths);
-
-	list_for_each(qe, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe;
-		if (q_config->rxp_type == BNA_RXP_SINGLE)
-			rxf->mcast_rxq_id = rxp->rxq.single.only->rxq_id;
-		else
-			rxf->mcast_rxq_id = rxp->rxq.slr.large->rxq_id;
-		break;
-	}
-
-	rxf->vlan_filter_status = BNA_STATUS_T_DISABLED;
-	memset(rxf->vlan_filter_table, 0,
-			(sizeof(u32) * ((BFI_MAX_VLAN + 1) / 32)));
-
-	/* Set up VLAN 0 for pure priority tagged packets */
-	rxf->vlan_filter_table[0] |= 1;
-
-	bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-}
-
-static void
-bna_rxf_uninit(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-	struct bna_mac *mac;
-
-	bna_rit_mod_seg_put(&rxf->rx->bna->rit_mod, rxf->rit_segment);
-	rxf->rit_segment = NULL;
-
-	rxf->ucast_pending_set = 0;
-
-	while (!list_empty(&rxf->ucast_pending_add_q)) {
-		bfa_q_deq(&rxf->ucast_pending_add_q, &mac);
-		bfa_q_qe_init(&mac->qe);
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-	}
-
-	if (rxf->ucast_active_mac) {
-		bfa_q_qe_init(&rxf->ucast_active_mac->qe);
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod,
-			rxf->ucast_active_mac);
-		rxf->ucast_active_mac = NULL;
-	}
-
-	while (!list_empty(&rxf->mcast_pending_add_q)) {
-		bfa_q_deq(&rxf->mcast_pending_add_q, &mac);
-		bfa_q_qe_init(&mac->qe);
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-	}
-
-	/* Turn off pending promisc mode */
-	if (is_promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* system promisc state should be pending */
-		BUG_ON(!(bna->rxf_promisc_id == rxf->rxf_id));
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-	}
-	/* Promisc mode should not be active */
-	BUG_ON(rxf->rxmode_active & BNA_RXMODE_PROMISC);
-
-	/* Turn off pending all-multi mode */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-	}
-	/* Allmulti mode should not be active */
-	BUG_ON(rxf->rxmode_active & BNA_RXMODE_ALLMULTI);
-
-	rxf->rx = NULL;
-}
-
-static void
-bna_rx_cb_rxf_started(struct bna_rx *rx, enum bna_cb_status status)
-{
-	bfa_fsm_send_event(rx, RX_E_RXF_STARTED);
-	if (rx->rxf.rxf_id < 32)
-		rx->bna->rx_mod.rxf_bmap[0] |= ((u32)1 << rx->rxf.rxf_id);
-	else
-		rx->bna->rx_mod.rxf_bmap[1] |= ((u32)
-				1 << (rx->rxf.rxf_id - 32));
-}
-
-static void
-bna_rxf_start(struct bna_rxf *rxf)
-{
-	rxf->start_cbfn = bna_rx_cb_rxf_started;
-	rxf->start_cbarg = rxf->rx;
-	rxf->rxf_flags &= ~BNA_RXF_FL_FAILED;
-	bfa_fsm_send_event(rxf, RXF_E_START);
-}
-
-static void
-bna_rx_cb_rxf_stopped(struct bna_rx *rx, enum bna_cb_status status)
-{
-	bfa_fsm_send_event(rx, RX_E_RXF_STOPPED);
-	if (rx->rxf.rxf_id < 32)
-		rx->bna->rx_mod.rxf_bmap[0] &= ~(u32)1 << rx->rxf.rxf_id;
-	else
-		rx->bna->rx_mod.rxf_bmap[1] &= ~(u32)
-				1 << (rx->rxf.rxf_id - 32);
-}
-
-static void
-bna_rxf_stop(struct bna_rxf *rxf)
-{
-	rxf->stop_cbfn = bna_rx_cb_rxf_stopped;
-	rxf->stop_cbarg = rxf->rx;
-	bfa_fsm_send_event(rxf, RXF_E_STOP);
-}
-
-static void
-bna_rxf_fail(struct bna_rxf *rxf)
-{
-	rxf->rxf_flags |= BNA_RXF_FL_FAILED;
-	bfa_fsm_send_event(rxf, RXF_E_FAIL);
-}
-
-int
-bna_rxf_state_get(struct bna_rxf *rxf)
-{
-	return bfa_sm_to_state(rxf_sm_table, rxf->fsm);
-}
-
-enum bna_cb_status
-bna_rx_ucast_set(struct bna_rx *rx, u8 *ucmac,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status))
-{
-	struct bna_rxf *rxf = &rx->rxf;
-
-	if (rxf->ucast_active_mac == NULL) {
-		rxf->ucast_active_mac =
-				bna_ucam_mod_mac_get(&rxf->rx->bna->ucam_mod);
-		if (rxf->ucast_active_mac == NULL)
-			return BNA_CB_UCAST_CAM_FULL;
-		bfa_q_qe_init(&rxf->ucast_active_mac->qe);
-	}
-
-	memcpy(rxf->ucast_active_mac->addr, ucmac, ETH_ALEN);
-	rxf->ucast_pending_set++;
-	rxf->cam_fltr_cbfn = cbfn;
-	rxf->cam_fltr_cbarg = rx->bna->bnad;
-
-	bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-
-	return BNA_CB_SUCCESS;
-}
-
-enum bna_cb_status
-bna_rx_mcast_add(struct bna_rx *rx, u8 *addr,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status))
-{
-	struct bna_rxf *rxf = &rx->rxf;
-	struct list_head	*qe;
-	struct bna_mac *mac;
-
-	/* Check if already added */
-	list_for_each(qe, &rxf->mcast_active_q) {
-		mac = (struct bna_mac *)qe;
-		if (BNA_MAC_IS_EQUAL(mac->addr, addr)) {
-			if (cbfn)
-				(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-			return BNA_CB_SUCCESS;
-		}
-	}
-
-	/* Check if pending addition */
-	list_for_each(qe, &rxf->mcast_pending_add_q) {
-		mac = (struct bna_mac *)qe;
-		if (BNA_MAC_IS_EQUAL(mac->addr, addr)) {
-			if (cbfn)
-				(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-			return BNA_CB_SUCCESS;
-		}
-	}
-
-	mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
-	if (mac == NULL)
-		return BNA_CB_MCAST_LIST_FULL;
-	bfa_q_qe_init(&mac->qe);
-	memcpy(mac->addr, addr, ETH_ALEN);
-	list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
-
-	rxf->cam_fltr_cbfn = cbfn;
-	rxf->cam_fltr_cbarg = rx->bna->bnad;
-
-	bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-
-	return BNA_CB_SUCCESS;
-}
-
-enum bna_cb_status
-bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mclist,
-		     void (*cbfn)(struct bnad *, struct bna_rx *,
-				  enum bna_cb_status))
-{
-	struct bna_rxf *rxf = &rx->rxf;
-	struct list_head list_head;
-	struct list_head *qe;
-	u8 *mcaddr;
-	struct bna_mac *mac;
-	struct bna_mac *mac1;
-	int skip;
-	int delete;
-	int need_hw_config = 0;
-	int i;
-
-	/* Allocate nodes */
-	INIT_LIST_HEAD(&list_head);
-	for (i = 0, mcaddr = mclist; i < count; i++) {
-		mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
-		if (mac == NULL)
-			goto err_return;
-		bfa_q_qe_init(&mac->qe);
-		memcpy(mac->addr, mcaddr, ETH_ALEN);
-		list_add_tail(&mac->qe, &list_head);
-
-		mcaddr += ETH_ALEN;
-	}
-
-	/* Schedule for addition */
-	while (!list_empty(&list_head)) {
-		bfa_q_deq(&list_head, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-
-		skip = 0;
-
-		/* Skip if already added */
-		list_for_each(qe, &rxf->mcast_active_q) {
-			mac1 = (struct bna_mac *)qe;
-			if (BNA_MAC_IS_EQUAL(mac1->addr, mac->addr)) {
-				bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod,
-							mac);
-				skip = 1;
-				break;
-			}
-		}
-
-		if (skip)
-			continue;
-
-		/* Skip if pending addition */
-		list_for_each(qe, &rxf->mcast_pending_add_q) {
-			mac1 = (struct bna_mac *)qe;
-			if (BNA_MAC_IS_EQUAL(mac1->addr, mac->addr)) {
-				bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod,
-							mac);
-				skip = 1;
-				break;
-			}
-		}
-
-		if (skip)
-			continue;
-
-		need_hw_config = 1;
-		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
-	}
-
-	/**
-	 * Delete the entries that are in the pending_add_q but not
-	 * in the new list
-	 */
-	while (!list_empty(&rxf->mcast_pending_add_q)) {
-		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		for (i = 0, mcaddr = mclist, delete = 1; i < count; i++) {
-			if (BNA_MAC_IS_EQUAL(mcaddr, mac->addr)) {
-				delete = 0;
-				break;
-			}
-			mcaddr += ETH_ALEN;
-		}
-		if (delete)
-			bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-		else
-			list_add_tail(&mac->qe, &list_head);
-	}
-	while (!list_empty(&list_head)) {
-		bfa_q_deq(&list_head, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
-	}
-
-	/**
-	 * Schedule entries for deletion that are in the active_q but not
-	 * in the new list
-	 */
-	while (!list_empty(&rxf->mcast_active_q)) {
-		bfa_q_deq(&rxf->mcast_active_q, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		for (i = 0, mcaddr = mclist, delete = 1; i < count; i++) {
-			if (BNA_MAC_IS_EQUAL(mcaddr, mac->addr)) {
-				delete = 0;
-				break;
-			}
-			mcaddr += ETH_ALEN;
-		}
-		if (delete) {
-			list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
-			need_hw_config = 1;
-		} else {
-			list_add_tail(&mac->qe, &list_head);
-		}
-	}
-	while (!list_empty(&list_head)) {
-		bfa_q_deq(&list_head, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		list_add_tail(&mac->qe, &rxf->mcast_active_q);
-	}
-
-	if (need_hw_config) {
-		rxf->cam_fltr_cbfn = cbfn;
-		rxf->cam_fltr_cbarg = rx->bna->bnad;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	} else if (cbfn)
-		(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-
-	return BNA_CB_SUCCESS;
-
-err_return:
-	while (!list_empty(&list_head)) {
-		bfa_q_deq(&list_head, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-	}
-
-	return BNA_CB_MCAST_LIST_FULL;
-}
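
bna_rx_mcast_listset() above reconciles the requested list against what is
already active or pending: addresses present on both sides are kept, new ones
are queued on mcast_pending_add_q, and active entries missing from the new
list are queued on mcast_pending_del_q. A reduced sketch of the same
reconciliation over plain integer arrays instead of the driver's MAC queues:

#include <stdio.h>

/* Returns 1 if v is present in arr[0..n-1]. */
static int contains(const int *arr, int n, int v)
{
	int i;

	for (i = 0; i < n; i++)
		if (arr[i] == v)
			return 1;
	return 0;
}

int main(void)
{
	int active[] = { 1, 2, 3 };	/* currently programmed entries */
	int wanted[] = { 2, 3, 4 };	/* newly requested list */
	int i;

	/* Entries in the new list but not currently active get added... */
	for (i = 0; i < 3; i++)
		if (!contains(active, 3, wanted[i]))
			printf("add %d\n", wanted[i]);	/* add 4 */

	/* ...and active entries absent from the new list get deleted. */
	for (i = 0; i < 3; i++)
		if (!contains(wanted, 3, active[i]))
			printf("del %d\n", active[i]);	/* del 1 */

	return 0;
}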
-
-void
-bna_rx_vlan_add(struct bna_rx *rx, int vlan_id)
-{
-	struct bna_rxf *rxf = &rx->rxf;
-	int index = (vlan_id >> 5);
-	int bit = (1 << (vlan_id & 0x1F));
-
-	rxf->vlan_filter_table[index] |= bit;
-	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	}
-}
-
-void
-bna_rx_vlan_del(struct bna_rx *rx, int vlan_id)
-{
-	struct bna_rxf *rxf = &rx->rxf;
-	int index = (vlan_id >> 5);
-	int bit = (1 << (vlan_id & 0x1F));
-
-	rxf->vlan_filter_table[index] &= ~bit;
-	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	}
-}
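
bna_rx_vlan_add() and bna_rx_vlan_del() keep the per-function VLAN filter as
a bitmap of 32-bit words: the word index is vlan_id >> 5 and the bit within
it is vlan_id & 0x1F. A worked example of the mapping (the 4096-entry table
size is an assumption here, inferred from the sizing arithmetic used in
bna_rxf_init()):

#include <stdio.h>
#include <stdint.h>

#define DEMO_MAX_VLAN	4095	/* assumed to mirror BFI_MAX_VLAN */

static uint32_t vlan_table[(DEMO_MAX_VLAN + 1) / 32];	/* 128 words */

static void demo_vlan_add(int vlan_id)
{
	vlan_table[vlan_id >> 5] |= 1u << (vlan_id & 0x1f);
}

static void demo_vlan_del(int vlan_id)
{
	vlan_table[vlan_id >> 5] &= ~(1u << (vlan_id & 0x1f));
}

int main(void)
{
	demo_vlan_add(100);	/* vlan 100 -> word 3, bit 4 */
	printf("word 3 = 0x%08x\n", vlan_table[3]);	/* 0x00000010 */

	demo_vlan_del(100);
	printf("word 3 = 0x%08x\n", vlan_table[3]);	/* 0x00000000 */
	return 0;
}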
-
-/**
- * RX
- */
-#define	RXQ_RCB_INIT(q, rxp, qdepth, bna, _id, unmapq_mem)	do {	\
-	struct bna_doorbell_qset *_qset;				\
-	unsigned long off;						\
-	(q)->rcb->producer_index = (q)->rcb->consumer_index = 0;	\
-	(q)->rcb->q_depth = (qdepth);					\
-	(q)->rcb->unmap_q = unmapq_mem;					\
-	(q)->rcb->rxq = (q);						\
-	(q)->rcb->cq = &(rxp)->cq;					\
-	(q)->rcb->bnad = (bna)->bnad;					\
-	_qset = (struct bna_doorbell_qset *)0;			\
-	off = (unsigned long)&_qset[(q)->rxq_id].rxq[0];		\
-	(q)->rcb->q_dbell = off +					\
-		BNA_GET_DOORBELL_BASE_ADDR((bna)->pcidev.pci_bar_kva);	\
-	(q)->rcb->id = _id;						\
-} while (0)
-
-#define	BNA_GET_RXQS(qcfg)	(((qcfg)->rxp_type == BNA_RXP_SINGLE) ?	\
-	(qcfg)->num_paths : ((qcfg)->num_paths * 2))
-
-#define	SIZE_TO_PAGES(size)	(((size) >> PAGE_SHIFT) + ((((size) &\
-	(PAGE_SIZE - 1)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT))
-
-#define	call_rx_stop_callback(rx, status)				\
-	if ((rx)->stop_cbfn) {						\
-		(*(rx)->stop_cbfn)((rx)->stop_cbarg, rx, (status));	\
-		(rx)->stop_cbfn = NULL;					\
-		(rx)->stop_cbarg = NULL;				\
-	}
-
-/*
- * Since rx_enable is a synchronous callback, there is no start_cbfn required.
- * Instead, we'll call bnad_rx_post(rxp) so that bnad can post the buffers
- * for each rxpath.
- */
-
-#define	call_rx_disable_cbfn(rx, status)				\
-		if ((rx)->disable_cbfn)	{				\
-			(*(rx)->disable_cbfn)((rx)->disable_cbarg,	\
-					status);			\
-			(rx)->disable_cbfn = NULL;			\
-			(rx)->disable_cbarg = NULL;			\
-		}							\
-
-#define	rxqs_reqd(type, num_rxqs)					\
-	(((type) == BNA_RXP_SINGLE) ? (num_rxqs) : ((num_rxqs) * 2))
-
-#define rx_ib_fail(rx)						\
-do {								\
-	struct bna_rxp *rxp;					\
-	struct list_head *qe;						\
-	list_for_each(qe, &(rx)->rxp_q) {				\
-		rxp = (struct bna_rxp *)qe;			\
-		bna_ib_fail(rxp->cq.ib);			\
-	}							\
-} while (0)
-
-static void __bna_multi_rxq_stop(struct bna_rxp *, u32 *);
-static void __bna_rxq_start(struct bna_rxq *rxq);
-static void __bna_cq_start(struct bna_cq *cq);
-static void bna_rit_create(struct bna_rx *rx);
-static void bna_rx_cb_multi_rxq_stopped(void *arg, int status);
-static void bna_rx_cb_rxq_stopped_all(void *arg);
-
-bfa_fsm_state_decl(bna_rx, stopped,
-	struct bna_rx, enum bna_rx_event);
-bfa_fsm_state_decl(bna_rx, rxf_start_wait,
-	struct bna_rx, enum bna_rx_event);
-bfa_fsm_state_decl(bna_rx, started,
-	struct bna_rx, enum bna_rx_event);
-bfa_fsm_state_decl(bna_rx, rxf_stop_wait,
-	struct bna_rx, enum bna_rx_event);
-bfa_fsm_state_decl(bna_rx, rxq_stop_wait,
-	struct bna_rx, enum bna_rx_event);
-
-static const struct bfa_sm_table rx_sm_table[] = {
-	{BFA_SM(bna_rx_sm_stopped), BNA_RX_STOPPED},
-	{BFA_SM(bna_rx_sm_rxf_start_wait), BNA_RX_RXF_START_WAIT},
-	{BFA_SM(bna_rx_sm_started), BNA_RX_STARTED},
-	{BFA_SM(bna_rx_sm_rxf_stop_wait), BNA_RX_RXF_STOP_WAIT},
-	{BFA_SM(bna_rx_sm_rxq_stop_wait), BNA_RX_RXQ_STOP_WAIT},
-};
-
-static void bna_rx_sm_stopped_entry(struct bna_rx *rx)
-{
-	struct bna_rxp *rxp;
-	struct list_head *qe_rxp;
-
-	list_for_each(qe_rxp, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe_rxp;
-		rx->rx_cleanup_cbfn(rx->bna->bnad, rxp->cq.ccb);
-	}
-
-	call_rx_stop_callback(rx, BNA_CB_SUCCESS);
-}
-
-static void bna_rx_sm_stopped(struct bna_rx *rx,
-				enum bna_rx_event event)
-{
-	switch (event) {
-	case RX_E_START:
-		bfa_fsm_set_state(rx, bna_rx_sm_rxf_start_wait);
-		break;
-	case RX_E_STOP:
-		call_rx_stop_callback(rx, BNA_CB_SUCCESS);
-		break;
-	case RX_E_FAIL:
-		/* no-op */
-		break;
-	default:
-		bfa_sm_fault(event);
-		break;
-	}
-
-}
-
-static void bna_rx_sm_rxf_start_wait_entry(struct bna_rx *rx)
-{
-	struct bna_rxp *rxp;
-	struct list_head *qe_rxp;
-	struct bna_rxq *q0 = NULL, *q1 = NULL;
-
-	/* Setup the RIT */
-	bna_rit_create(rx);
-
-	list_for_each(qe_rxp, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe_rxp;
-		bna_ib_start(rxp->cq.ib);
-		GET_RXQS(rxp, q0, q1);
-		q0->buffer_size = bna_port_mtu_get(&rx->bna->port);
-		__bna_rxq_start(q0);
-		rx->rx_post_cbfn(rx->bna->bnad, q0->rcb);
-		if (q1)  {
-			__bna_rxq_start(q1);
-			rx->rx_post_cbfn(rx->bna->bnad, q1->rcb);
-		}
-		__bna_cq_start(&rxp->cq);
-	}
-
-	bna_rxf_start(&rx->rxf);
-}
-
-static void bna_rx_sm_rxf_start_wait(struct bna_rx *rx,
-				enum bna_rx_event event)
-{
-	switch (event) {
-	case RX_E_STOP:
-		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
-		break;
-	case RX_E_FAIL:
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		rx_ib_fail(rx);
-		bna_rxf_fail(&rx->rxf);
-		break;
-	case RX_E_RXF_STARTED:
-		bfa_fsm_set_state(rx, bna_rx_sm_started);
-		break;
-	default:
-		bfa_sm_fault(event);
-		break;
-	}
-}
-
-void
-bna_rx_sm_started_entry(struct bna_rx *rx)
-{
-	struct bna_rxp *rxp;
-	struct list_head *qe_rxp;
-
-	/* Start IB */
-	list_for_each(qe_rxp, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe_rxp;
-		bna_ib_ack(&rxp->cq.ib->door_bell, 0);
-	}
-
-	bna_llport_rx_started(&rx->bna->port.llport);
-}
-
-void
-bna_rx_sm_started(struct bna_rx *rx, enum bna_rx_event event)
-{
-	switch (event) {
-	case RX_E_FAIL:
-		bna_llport_rx_stopped(&rx->bna->port.llport);
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		rx_ib_fail(rx);
-		bna_rxf_fail(&rx->rxf);
-		break;
-	case RX_E_STOP:
-		bna_llport_rx_stopped(&rx->bna->port.llport);
-		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
-		break;
-	default:
-		bfa_sm_fault(event);
-		break;
-	}
-}
-
-void
-bna_rx_sm_rxf_stop_wait_entry(struct bna_rx *rx)
-{
-	bna_rxf_stop(&rx->rxf);
-}
-
-void
-bna_rx_sm_rxf_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
-{
-	switch (event) {
-	case RX_E_RXF_STOPPED:
-		bfa_fsm_set_state(rx, bna_rx_sm_rxq_stop_wait);
-		break;
-	case RX_E_RXF_STARTED:
-		/**
-		 * RxF was in the process of starting up when
-		 * RXF_E_STOP was issued. Ignore this event
-		 */
-		break;
-	case RX_E_FAIL:
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		rx_ib_fail(rx);
-		bna_rxf_fail(&rx->rxf);
-		break;
-	default:
-		bfa_sm_fault(event);
-		break;
-	}
-
-}
-
-void
-bna_rx_sm_rxq_stop_wait_entry(struct bna_rx *rx)
-{
-	struct bna_rxp *rxp = NULL;
-	struct bna_rxq *q0 = NULL;
-	struct bna_rxq *q1 = NULL;
-	struct list_head	*qe;
-	u32 rxq_mask[2] = {0, 0};
-
-	/* Only one call to multi-rxq-stop for all RXPs in this RX */
-	bfa_wc_up(&rx->rxq_stop_wc);
-	list_for_each(qe, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe;
-		GET_RXQS(rxp, q0, q1);
-		if (q0->rxq_id < 32)
-			rxq_mask[0] |= ((u32)1 << q0->rxq_id);
-		else
-			rxq_mask[1] |= ((u32)1 << (q0->rxq_id - 32));
-		if (q1) {
-			if (q1->rxq_id < 32)
-				rxq_mask[0] |= ((u32)1 << q1->rxq_id);
-			else
-				rxq_mask[1] |= ((u32)
-						1 << (q1->rxq_id - 32));
-		}
-	}
-
-	__bna_multi_rxq_stop(rxp, rxq_mask);
-}
-
-void
-bna_rx_sm_rxq_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
-{
-	struct bna_rxp *rxp = NULL;
-	struct list_head	*qe;
-
-	switch (event) {
-	case RX_E_RXQ_STOPPED:
-		list_for_each(qe, &rx->rxp_q) {
-			rxp = (struct bna_rxp *)qe;
-			bna_ib_stop(rxp->cq.ib);
-		}
-		/* Fall through */
-	case RX_E_FAIL:
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		break;
-	default:
-		bfa_sm_fault(event);
-		break;
-	}
-}
-
-void
-__bna_multi_rxq_stop(struct bna_rxp *rxp, u32 * rxq_id_mask)
-{
-	struct bfi_ll_q_stop_req ll_req;
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_RXQ_STOP_REQ, 0);
-	ll_req.q_id_mask[0] = htonl(rxq_id_mask[0]);
-	ll_req.q_id_mask[1] = htonl(rxq_id_mask[1]);
-	bna_mbox_qe_fill(&rxp->mbox_qe, &ll_req, sizeof(ll_req),
-		bna_rx_cb_multi_rxq_stopped, rxp);
-	bna_mbox_send(rxp->rx->bna, &rxp->mbox_qe);
-}
-
-void
-__bna_rxq_start(struct bna_rxq *rxq)
-{
-	struct bna_rxtx_q_mem *q_mem;
-	struct bna_rxq_mem rxq_cfg, *rxq_mem;
-	struct bna_dma_addr cur_q_addr;
-	/* struct bna_doorbell_qset *qset; */
-	struct bna_qpt *qpt;
-	u32 pg_num;
-	struct bna *bna = rxq->rx->bna;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	qpt = &rxq->qpt;
-	cur_q_addr = *((struct bna_dma_addr *)(qpt->kv_qpt_ptr));
-
-	rxq_cfg.pg_tbl_addr_lo = qpt->hw_qpt_ptr.lsb;
-	rxq_cfg.pg_tbl_addr_hi = qpt->hw_qpt_ptr.msb;
-	rxq_cfg.cur_q_entry_lo = cur_q_addr.lsb;
-	rxq_cfg.cur_q_entry_hi = cur_q_addr.msb;
-
-	rxq_cfg.pg_cnt_n_prd_ptr = ((u32)qpt->page_count << 16) | 0x0;
-	rxq_cfg.entry_n_pg_size = ((u32)(BFI_RXQ_WI_SIZE >> 2) << 16) |
-		(qpt->page_size >> 2);
-	rxq_cfg.sg_n_cq_n_cns_ptr =
-		((u32)(rxq->rxp->cq.cq_id & 0xff) << 16) | 0x0;
-	rxq_cfg.buf_sz_n_q_state = ((u32)rxq->buffer_size << 16) |
-		BNA_Q_IDLE_STATE;
-	rxq_cfg.next_qid = 0x0 | (0x3 << 8);
-
-	/* Write the page number register */
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + bna->port_num,
-			HQM_RXTX_Q_RAM_BASE_OFFSET);
-	writel(pg_num, bna->regs.page_addr);
-
-	/* Write to h/w */
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-					HQM_RXTX_Q_RAM_BASE_OFFSET);
-
-	q_mem = (struct bna_rxtx_q_mem *)0;
-	rxq_mem = &q_mem[rxq->rxq_id].rxq;
-
-	off = (unsigned long)&rxq_mem->pg_tbl_addr_lo;
-	writel(htonl(rxq_cfg.pg_tbl_addr_lo), base_addr + off);
-
-	off = (unsigned long)&rxq_mem->pg_tbl_addr_hi;
-	writel(htonl(rxq_cfg.pg_tbl_addr_hi), base_addr + off);
-
-	off = (unsigned long)&rxq_mem->cur_q_entry_lo;
-	writel(htonl(rxq_cfg.cur_q_entry_lo), base_addr + off);
-
-	off = (unsigned long)&rxq_mem->cur_q_entry_hi;
-	writel(htonl(rxq_cfg.cur_q_entry_hi), base_addr + off);
-
-	off = (unsigned long)&rxq_mem->pg_cnt_n_prd_ptr;
-	writel(rxq_cfg.pg_cnt_n_prd_ptr, base_addr + off);
-
-	off = (unsigned long)&rxq_mem->entry_n_pg_size;
-	writel(rxq_cfg.entry_n_pg_size, base_addr + off);
-
-	off = (unsigned long)&rxq_mem->sg_n_cq_n_cns_ptr;
-	writel(rxq_cfg.sg_n_cq_n_cns_ptr, base_addr + off);
-
-	off = (unsigned long)&rxq_mem->buf_sz_n_q_state;
-	writel(rxq_cfg.buf_sz_n_q_state, base_addr + off);
-
-	off = (unsigned long)&rxq_mem->next_qid;
-	writel(rxq_cfg.next_qid, base_addr + off);
-
-	rxq->rcb->producer_index = 0;
-	rxq->rcb->consumer_index = 0;
-}
-
-void
-__bna_cq_start(struct bna_cq *cq)
-{
-	struct bna_cq_mem cq_cfg, *cq_mem;
-	const struct bna_qpt *qpt;
-	struct bna_dma_addr cur_q_addr;
-	u32 pg_num;
-	struct bna *bna = cq->rx->bna;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	qpt = &cq->qpt;
-	cur_q_addr = *((struct bna_dma_addr *)(qpt->kv_qpt_ptr));
-
-	/*
-	 * Fill out structure, to be subsequently written
-	 * to hardware
-	 */
-	cq_cfg.pg_tbl_addr_lo = qpt->hw_qpt_ptr.lsb;
-	cq_cfg.pg_tbl_addr_hi = qpt->hw_qpt_ptr.msb;
-	cq_cfg.cur_q_entry_lo = cur_q_addr.lsb;
-	cq_cfg.cur_q_entry_hi = cur_q_addr.msb;
-
-	cq_cfg.pg_cnt_n_prd_ptr = (qpt->page_count << 16) | 0x0;
-	cq_cfg.entry_n_pg_size =
-		((u32)(BFI_CQ_WI_SIZE >> 2) << 16) | (qpt->page_size >> 2);
-	cq_cfg.int_blk_n_cns_ptr = ((((u32)cq->ib_seg_offset) << 24) |
-			((u32)(cq->ib->ib_id & 0xff)  << 16) | 0x0);
-	cq_cfg.q_state = BNA_Q_IDLE_STATE;
-
-	/* Write the page number register */
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + bna->port_num,
-				  HQM_CQ_RAM_BASE_OFFSET);
-
-	writel(pg_num, bna->regs.page_addr);
-
-	/* H/W write */
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-					HQM_CQ_RAM_BASE_OFFSET);
-
-	cq_mem = (struct bna_cq_mem *)0;
-
-	off = (unsigned long)&cq_mem[cq->cq_id].pg_tbl_addr_lo;
-	writel(htonl(cq_cfg.pg_tbl_addr_lo), base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].pg_tbl_addr_hi;
-	writel(htonl(cq_cfg.pg_tbl_addr_hi), base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].cur_q_entry_lo;
-	writel(htonl(cq_cfg.cur_q_entry_lo), base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].cur_q_entry_hi;
-	writel(htonl(cq_cfg.cur_q_entry_hi), base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].pg_cnt_n_prd_ptr;
-	writel(cq_cfg.pg_cnt_n_prd_ptr, base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].entry_n_pg_size;
-	writel(cq_cfg.entry_n_pg_size, base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].int_blk_n_cns_ptr;
-	writel(cq_cfg.int_blk_n_cns_ptr, base_addr + off);
-
-	off = (unsigned long)&cq_mem[cq->cq_id].q_state;
-	writel(cq_cfg.q_state, base_addr + off);
-
-	cq->ccb->producer_index = 0;
-	*(cq->ccb->hw_producer_index) = 0;
-}
-
-void
-bna_rit_create(struct bna_rx *rx)
-{
-	struct list_head	*qe_rxp;
-	struct bna_rxp *rxp;
-	struct bna_rxq *q0 = NULL;
-	struct bna_rxq *q1 = NULL;
-	int offset;
-
-	offset = 0;
-	list_for_each(qe_rxp, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe_rxp;
-		GET_RXQS(rxp, q0, q1);
-		rx->rxf.rit_segment->rit[offset].large_rxq_id = q0->rxq_id;
-		rx->rxf.rit_segment->rit[offset].small_rxq_id =
-						(q1 ? q1->rxq_id : 0);
-		offset++;
-	}
-}
-
-static int
-_rx_can_satisfy(struct bna_rx_mod *rx_mod,
-		struct bna_rx_config *rx_cfg)
-{
-	if ((rx_mod->rx_free_count == 0) ||
-		(rx_mod->rxp_free_count == 0) ||
-		(rx_mod->rxq_free_count == 0))
-		return 0;
-
-	if (rx_cfg->rxp_type == BNA_RXP_SINGLE) {
-		if ((rx_mod->rxp_free_count < rx_cfg->num_paths) ||
-			(rx_mod->rxq_free_count < rx_cfg->num_paths))
-				return 0;
-	} else {
-		if ((rx_mod->rxp_free_count < rx_cfg->num_paths) ||
-			(rx_mod->rxq_free_count < (2 * rx_cfg->num_paths)))
-			return 0;
-	}
-
-	if (!bna_rit_mod_can_satisfy(&rx_mod->bna->rit_mod, rx_cfg->num_paths))
-		return 0;
-
-	return 1;
-}
-
-static struct bna_rxq *
-_get_free_rxq(struct bna_rx_mod *rx_mod)
-{
-	struct bna_rxq *rxq = NULL;
-	struct list_head	*qe = NULL;
-
-	bfa_q_deq(&rx_mod->rxq_free_q, &qe);
-	if (qe) {
-		rx_mod->rxq_free_count--;
-		rxq = (struct bna_rxq *)qe;
-	}
-	return rxq;
-}
-
-static void
-_put_free_rxq(struct bna_rx_mod *rx_mod, struct bna_rxq *rxq)
-{
-	bfa_q_qe_init(&rxq->qe);
-	list_add_tail(&rxq->qe, &rx_mod->rxq_free_q);
-	rx_mod->rxq_free_count++;
-}
-
-static struct bna_rxp *
-_get_free_rxp(struct bna_rx_mod *rx_mod)
-{
-	struct list_head	*qe = NULL;
-	struct bna_rxp *rxp = NULL;
-
-	bfa_q_deq(&rx_mod->rxp_free_q, &qe);
-	if (qe) {
-		rx_mod->rxp_free_count--;
-
-		rxp = (struct bna_rxp *)qe;
-	}
-
-	return rxp;
-}
-
-static void
-_put_free_rxp(struct bna_rx_mod *rx_mod, struct bna_rxp *rxp)
-{
-	bfa_q_qe_init(&rxp->qe);
-	list_add_tail(&rxp->qe, &rx_mod->rxp_free_q);
-	rx_mod->rxp_free_count++;
-}
-
-static struct bna_rx *
-_get_free_rx(struct bna_rx_mod *rx_mod)
-{
-	struct list_head	*qe = NULL;
-	struct bna_rx *rx = NULL;
-
-	bfa_q_deq(&rx_mod->rx_free_q, &qe);
-	if (qe) {
-		rx_mod->rx_free_count--;
-
-		rx = (struct bna_rx *)qe;
-		bfa_q_qe_init(qe);
-		list_add_tail(&rx->qe, &rx_mod->rx_active_q);
-	}
-
-	return rx;
-}
-
-static void
-_put_free_rx(struct bna_rx_mod *rx_mod, struct bna_rx *rx)
-{
-	bfa_q_qe_init(&rx->qe);
-	list_add_tail(&rx->qe, &rx_mod->rx_free_q);
-	rx_mod->rx_free_count++;
-}
-
-static void
-_rx_init(struct bna_rx *rx, struct bna *bna)
-{
-	rx->bna = bna;
-	rx->rx_flags = 0;
-
-	INIT_LIST_HEAD(&rx->rxp_q);
-
-	rx->rxq_stop_wc.wc_resume = bna_rx_cb_rxq_stopped_all;
-	rx->rxq_stop_wc.wc_cbarg = rx;
-	rx->rxq_stop_wc.wc_count = 0;
-
-	rx->stop_cbfn = NULL;
-	rx->stop_cbarg = NULL;
-}
-
-static void
-_rxp_add_rxqs(struct bna_rxp *rxp,
-		struct bna_rxq *q0,
-		struct bna_rxq *q1)
-{
-	switch (rxp->type) {
-	case BNA_RXP_SINGLE:
-		rxp->rxq.single.only = q0;
-		rxp->rxq.single.reserved = NULL;
-		break;
-	case BNA_RXP_SLR:
-		rxp->rxq.slr.large = q0;
-		rxp->rxq.slr.small = q1;
-		break;
-	case BNA_RXP_HDS:
-		rxp->rxq.hds.data = q0;
-		rxp->rxq.hds.hdr = q1;
-		break;
-	default:
-		break;
-	}
-}
-
-static void
-_rxq_qpt_init(struct bna_rxq *rxq,
-		struct bna_rxp *rxp,
-		u32 page_count,
-		u32 page_size,
-		struct bna_mem_descr *qpt_mem,
-		struct bna_mem_descr *swqpt_mem,
-		struct bna_mem_descr *page_mem)
-{
-	int	i;
-
-	rxq->qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
-	rxq->qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
-	rxq->qpt.kv_qpt_ptr = qpt_mem->kva;
-	rxq->qpt.page_count = page_count;
-	rxq->qpt.page_size = page_size;
-
-	rxq->rcb->sw_qpt = (void **) swqpt_mem->kva;
-
-	for (i = 0; i < rxq->qpt.page_count; i++) {
-		rxq->rcb->sw_qpt[i] = page_mem[i].kva;
-		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].lsb =
-			page_mem[i].dma.lsb;
-		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].msb =
-			page_mem[i].dma.msb;
-
-	}
-}
-
-static void
-_rxp_cqpt_setup(struct bna_rxp *rxp,
-		u32 page_count,
-		u32 page_size,
-		struct bna_mem_descr *qpt_mem,
-		struct bna_mem_descr *swqpt_mem,
-		struct bna_mem_descr *page_mem)
-{
-	int	i;
-
-	rxp->cq.qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
-	rxp->cq.qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
-	rxp->cq.qpt.kv_qpt_ptr = qpt_mem->kva;
-	rxp->cq.qpt.page_count = page_count;
-	rxp->cq.qpt.page_size = page_size;
-
-	rxp->cq.ccb->sw_qpt = (void **) swqpt_mem->kva;
-
-	for (i = 0; i < rxp->cq.qpt.page_count; i++) {
-		rxp->cq.ccb->sw_qpt[i] = page_mem[i].kva;
-
-		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].lsb =
-			page_mem[i].dma.lsb;
-		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].msb =
-			page_mem[i].dma.msb;
-
-	}
-}
-
-static void
-_rx_add_rxp(struct bna_rx *rx, struct bna_rxp *rxp)
-{
-	list_add_tail(&rxp->qe, &rx->rxp_q);
-}
-
-static void
-_init_rxmod_queues(struct bna_rx_mod *rx_mod)
-{
-	INIT_LIST_HEAD(&rx_mod->rx_free_q);
-	INIT_LIST_HEAD(&rx_mod->rxq_free_q);
-	INIT_LIST_HEAD(&rx_mod->rxp_free_q);
-	INIT_LIST_HEAD(&rx_mod->rx_active_q);
-
-	rx_mod->rx_free_count = 0;
-	rx_mod->rxq_free_count = 0;
-	rx_mod->rxp_free_count = 0;
-}
-
-static void
-_rx_ctor(struct bna_rx *rx, int id)
-{
-	bfa_q_qe_init(&rx->qe);
-	INIT_LIST_HEAD(&rx->rxp_q);
-	rx->bna = NULL;
-
-	rx->rxf.rxf_id = id;
-
-	/* FIXME: mbox_qe ctor()?? */
-	bfa_q_qe_init(&rx->mbox_qe.qe);
-
-	rx->stop_cbfn = NULL;
-	rx->stop_cbarg = NULL;
-}
-
-void
-bna_rx_cb_multi_rxq_stopped(void *arg, int status)
-{
-	struct bna_rxp *rxp = (struct bna_rxp *)arg;
-
-	bfa_wc_down(&rxp->rx->rxq_stop_wc);
-}
-
-void
-bna_rx_cb_rxq_stopped_all(void *arg)
-{
-	struct bna_rx *rx = (struct bna_rx *)arg;
-
-	bfa_fsm_send_event(rx, RX_E_RXQ_STOPPED);
-}
-
-static void
-bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx,
-			 enum bna_cb_status status)
-{
-	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
-
-	bfa_wc_down(&rx_mod->rx_stop_wc);
-}
-
-static void
-bna_rx_mod_cb_rx_stopped_all(void *arg)
-{
-	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
-
-	if (rx_mod->stop_cbfn)
-		rx_mod->stop_cbfn(&rx_mod->bna->port, BNA_CB_SUCCESS);
-	rx_mod->stop_cbfn = NULL;
-}
-
-static void
-bna_rx_start(struct bna_rx *rx)
-{
-	rx->rx_flags |= BNA_RX_F_PORT_ENABLED;
-	if (rx->rx_flags & BNA_RX_F_ENABLE)
-		bfa_fsm_send_event(rx, RX_E_START);
-}
-
-static void
-bna_rx_stop(struct bna_rx *rx)
-{
-	rx->rx_flags &= ~BNA_RX_F_PORT_ENABLED;
-	if (rx->fsm == (bfa_fsm_t) bna_rx_sm_stopped)
-		bna_rx_mod_cb_rx_stopped(&rx->bna->rx_mod, rx, BNA_CB_SUCCESS);
-	else {
-		rx->stop_cbfn = bna_rx_mod_cb_rx_stopped;
-		rx->stop_cbarg = &rx->bna->rx_mod;
-		bfa_fsm_send_event(rx, RX_E_STOP);
-	}
-}
-
-static void
-bna_rx_fail(struct bna_rx *rx)
-{
-	/* Indicate port is not enabled, and failed */
-	rx->rx_flags &= ~BNA_RX_F_PORT_ENABLED;
-	rx->rx_flags |= BNA_RX_F_PORT_FAILED;
-	bfa_fsm_send_event(rx, RX_E_FAIL);
-}
-
-void
-bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
-{
-	struct bna_rx *rx;
-	struct list_head *qe;
-
-	rx_mod->flags |= BNA_RX_MOD_F_PORT_STARTED;
-	if (type == BNA_RX_T_LOOPBACK)
-		rx_mod->flags |= BNA_RX_MOD_F_PORT_LOOPBACK;
-
-	list_for_each(qe, &rx_mod->rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		if (rx->type == type)
-			bna_rx_start(rx);
-	}
-}
-
-void
-bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
-{
-	struct bna_rx *rx;
-	struct list_head *qe;
-
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_STARTED;
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_LOOPBACK;
-
-	rx_mod->stop_cbfn = bna_port_cb_rx_stopped;
-
-	/**
-	 * Before calling bna_rx_stop(), increment rx_stop_wc as many times
-	 * as we are going to call bna_rx_stop
-	 */
-	list_for_each(qe, &rx_mod->rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		if (rx->type == type)
-			bfa_wc_up(&rx_mod->rx_stop_wc);
-	}
-
-	if (rx_mod->rx_stop_wc.wc_count == 0) {
-		rx_mod->stop_cbfn(&rx_mod->bna->port, BNA_CB_SUCCESS);
-		rx_mod->stop_cbfn = NULL;
-		return;
-	}
-
-	list_for_each(qe, &rx_mod->rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		if (rx->type == type)
-			bna_rx_stop(rx);
-	}
-}
-
-void
-bna_rx_mod_fail(struct bna_rx_mod *rx_mod)
-{
-	struct bna_rx *rx;
-	struct list_head *qe;
-
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_STARTED;
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_LOOPBACK;
-
-	list_for_each(qe, &rx_mod->rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		bna_rx_fail(rx);
-	}
-}
-
-void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
-			struct bna_res_info *res_info)
-{
-	int	index;
-	struct bna_rx *rx_ptr;
-	struct bna_rxp *rxp_ptr;
-	struct bna_rxq *rxq_ptr;
-
-	rx_mod->bna = bna;
-	rx_mod->flags = 0;
-
-	rx_mod->rx = (struct bna_rx *)
-		res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.mdl[0].kva;
-	rx_mod->rxp = (struct bna_rxp *)
-		res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mdl[0].kva;
-	rx_mod->rxq = (struct bna_rxq *)
-		res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	/* Initialize the queues */
-	_init_rxmod_queues(rx_mod);
-
-	/* Build RX queues */
-	for (index = 0; index < BFI_MAX_RXQ; index++) {
-		rx_ptr = &rx_mod->rx[index];
-		_rx_ctor(rx_ptr, index);
-		list_add_tail(&rx_ptr->qe, &rx_mod->rx_free_q);
-		rx_mod->rx_free_count++;
-	}
-
-	/* build RX-path queue */
-	for (index = 0; index < BFI_MAX_RXQ; index++) {
-		rxp_ptr = &rx_mod->rxp[index];
-		rxp_ptr->cq.cq_id = index;
-		bfa_q_qe_init(&rxp_ptr->qe);
-		list_add_tail(&rxp_ptr->qe, &rx_mod->rxp_free_q);
-		rx_mod->rxp_free_count++;
-	}
-
-	/* build RXQ queue */
-	for (index = 0; index < BFI_MAX_RXQ; index++) {
-		rxq_ptr = &rx_mod->rxq[index];
-		rxq_ptr->rxq_id = index;
-
-		bfa_q_qe_init(&rxq_ptr->qe);
-		list_add_tail(&rxq_ptr->qe, &rx_mod->rxq_free_q);
-		rx_mod->rxq_free_count++;
-	}
-
-	rx_mod->rx_stop_wc.wc_resume = bna_rx_mod_cb_rx_stopped_all;
-	rx_mod->rx_stop_wc.wc_cbarg = rx_mod;
-	rx_mod->rx_stop_wc.wc_count = 0;
-}
-
-void
-bna_rx_mod_uninit(struct bna_rx_mod *rx_mod)
-{
-	struct list_head		*qe;
-	int i;
-
-	i = 0;
-	list_for_each(qe, &rx_mod->rx_free_q)
-		i++;
-
-	i = 0;
-	list_for_each(qe, &rx_mod->rxp_free_q)
-		i++;
-
-	i = 0;
-	list_for_each(qe, &rx_mod->rxq_free_q)
-		i++;
-
-	rx_mod->bna = NULL;
-}
-
-int
-bna_rx_state_get(struct bna_rx *rx)
-{
-	return bfa_sm_to_state(rx_sm_table, rx->fsm);
-}
-
-void
-bna_rx_res_req(struct bna_rx_config *q_cfg, struct bna_res_info *res_info)
-{
-	u32 cq_size, hq_size, dq_size;
-	u32 cpage_count, hpage_count, dpage_count;
-	struct bna_mem_info *mem_info;
-	u32 cq_depth;
-	u32 hq_depth;
-	u32 dq_depth;
-
-	dq_depth = q_cfg->q_depth;
-	hq_depth = ((q_cfg->rxp_type == BNA_RXP_SINGLE) ? 0 : q_cfg->q_depth);
-	cq_depth = dq_depth + hq_depth;
-
-	BNA_TO_POWER_OF_2_HIGH(cq_depth);
-	cq_size = cq_depth * BFI_CQ_WI_SIZE;
-	cq_size = ALIGN(cq_size, PAGE_SIZE);
-	cpage_count = SIZE_TO_PAGES(cq_size);
-
-	BNA_TO_POWER_OF_2_HIGH(dq_depth);
-	dq_size = dq_depth * BFI_RXQ_WI_SIZE;
-	dq_size = ALIGN(dq_size, PAGE_SIZE);
-	dpage_count = SIZE_TO_PAGES(dq_size);
-
-	if (BNA_RXP_SINGLE != q_cfg->rxp_type) {
-		BNA_TO_POWER_OF_2_HIGH(hq_depth);
-		hq_size = hq_depth * BFI_RXQ_WI_SIZE;
-		hq_size = ALIGN(hq_size, PAGE_SIZE);
-		hpage_count = SIZE_TO_PAGES(hq_size);
-	} else {
-		hpage_count = 0;
-	}
-
-	/* CCB structures */
-	res_info[BNA_RX_RES_MEM_T_CCB].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = sizeof(struct bna_ccb);
-	mem_info->num = q_cfg->num_paths;
-
-	/* RCB structures */
-	res_info[BNA_RX_RES_MEM_T_RCB].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = sizeof(struct bna_rcb);
-	mem_info->num = BNA_GET_RXQS(q_cfg);
-
-	/* Completion QPT */
-	res_info[BNA_RX_RES_MEM_T_CQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = cpage_count * sizeof(struct bna_dma_addr);
-	mem_info->num = q_cfg->num_paths;
-
-	/* Completion s/w QPT */
-	res_info[BNA_RX_RES_MEM_T_CSWQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = cpage_count * sizeof(void *);
-	mem_info->num = q_cfg->num_paths;
-
-	/* Completion QPT pages */
-	res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = PAGE_SIZE;
-	mem_info->num = cpage_count * q_cfg->num_paths;
-
-	/* Data QPTs */
-	res_info[BNA_RX_RES_MEM_T_DQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = dpage_count * sizeof(struct bna_dma_addr);
-	mem_info->num = q_cfg->num_paths;
-
-	/* Data s/w QPTs */
-	res_info[BNA_RX_RES_MEM_T_DSWQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = dpage_count * sizeof(void *);
-	mem_info->num = q_cfg->num_paths;
-
-	/* Data QPT pages */
-	res_info[BNA_RX_RES_MEM_T_DPAGE].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = PAGE_SIZE;
-	mem_info->num = dpage_count * q_cfg->num_paths;
-
-	/* Hdr QPTs */
-	res_info[BNA_RX_RES_MEM_T_HQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = hpage_count * sizeof(struct bna_dma_addr);
-	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
-
-	/* Hdr s/w QPTs */
-	res_info[BNA_RX_RES_MEM_T_HSWQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = hpage_count * sizeof(void *);
-	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
-
-	/* Hdr QPT pages */
-	res_info[BNA_RX_RES_MEM_T_HPAGE].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = (hpage_count ? PAGE_SIZE : 0);
-	mem_info->num = (hpage_count ? (hpage_count * q_cfg->num_paths) : 0);
-
-	/* RX Interrupts */
-	res_info[BNA_RX_RES_T_INTR].res_type = BNA_RES_T_INTR;
-	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.intr_type = BNA_INTR_T_MSIX;
-	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.num = q_cfg->num_paths;
-}
-
-struct bna_rx *
-bna_rx_create(struct bna *bna, struct bnad *bnad,
-		struct bna_rx_config *rx_cfg,
-		struct bna_rx_event_cbfn *rx_cbfn,
-		struct bna_res_info *res_info,
-		void *priv)
-{
-	struct bna_rx_mod *rx_mod = &bna->rx_mod;
-	struct bna_rx *rx;
-	struct bna_rxp *rxp;
-	struct bna_rxq *q0;
-	struct bna_rxq *q1;
-	struct bna_intr_info *intr_info;
-	u32 page_count;
-	struct bna_mem_descr *ccb_mem;
-	struct bna_mem_descr *rcb_mem;
-	struct bna_mem_descr *unmapq_mem;
-	struct bna_mem_descr *cqpt_mem;
-	struct bna_mem_descr *cswqpt_mem;
-	struct bna_mem_descr *cpage_mem;
-	struct bna_mem_descr *hqpt_mem;	/* Header/Small Q qpt */
-	struct bna_mem_descr *dqpt_mem;	/* Data/Large Q qpt */
-	struct bna_mem_descr *hsqpt_mem;	/* s/w qpt for hdr */
-	struct bna_mem_descr *dsqpt_mem;	/* s/w qpt for data */
-	struct bna_mem_descr *hpage_mem;	/* hdr page mem */
-	struct bna_mem_descr *dpage_mem;	/* data page mem */
-	int i, cpage_idx = 0, dpage_idx = 0, hpage_idx = 0;
-	int dpage_count, hpage_count, rcb_idx;
-	struct bna_ib_config ibcfg;
-	/* Fail if we don't have enough RXPs, RXQs */
-	if (!_rx_can_satisfy(rx_mod, rx_cfg))
-		return NULL;
-
-	/* Initialize resource pointers */
-	intr_info = &res_info[BNA_RX_RES_T_INTR].res_u.intr_info;
-	ccb_mem = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info.mdl[0];
-	rcb_mem = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info.mdl[0];
-	unmapq_mem = &res_info[BNA_RX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[0];
-	cqpt_mem = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info.mdl[0];
-	cswqpt_mem = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info.mdl[0];
-	cpage_mem = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.mdl[0];
-	hqpt_mem = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info.mdl[0];
-	dqpt_mem = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info.mdl[0];
-	hsqpt_mem = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info.mdl[0];
-	dsqpt_mem = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info.mdl[0];
-	hpage_mem = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.mdl[0];
-	dpage_mem = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.mdl[0];
-
-	/* Compute q depth & page count */
-	page_count = res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.num /
-			rx_cfg->num_paths;
-
-	dpage_count = res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.num /
-			rx_cfg->num_paths;
-
-	hpage_count = res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.num /
-			rx_cfg->num_paths;
-	/* Get RX pointer */
-	rx = _get_free_rx(rx_mod);
-	_rx_init(rx, bna);
-	rx->priv = priv;
-	rx->type = rx_cfg->rx_type;
-
-	rx->rcb_setup_cbfn = rx_cbfn->rcb_setup_cbfn;
-	rx->rcb_destroy_cbfn = rx_cbfn->rcb_destroy_cbfn;
-	rx->ccb_setup_cbfn = rx_cbfn->ccb_setup_cbfn;
-	rx->ccb_destroy_cbfn = rx_cbfn->ccb_destroy_cbfn;
-	/* Following callbacks are mandatory */
-	rx->rx_cleanup_cbfn = rx_cbfn->rx_cleanup_cbfn;
-	rx->rx_post_cbfn = rx_cbfn->rx_post_cbfn;
-
-	if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_PORT_STARTED) {
-		switch (rx->type) {
-		case BNA_RX_T_REGULAR:
-			if (!(rx->bna->rx_mod.flags &
-				BNA_RX_MOD_F_PORT_LOOPBACK))
-				rx->rx_flags |= BNA_RX_F_PORT_ENABLED;
-			break;
-		case BNA_RX_T_LOOPBACK:
-			if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_PORT_LOOPBACK)
-				rx->rx_flags |= BNA_RX_F_PORT_ENABLED;
-			break;
-		}
-	}
-
-	for (i = 0, rcb_idx = 0; i < rx_cfg->num_paths; i++) {
-		rxp = _get_free_rxp(rx_mod);
-		rxp->type = rx_cfg->rxp_type;
-		rxp->rx = rx;
-		rxp->cq.rx = rx;
-
-		/* Get required RXQs, and queue them to rx-path */
-		q0 = _get_free_rxq(rx_mod);
-		if (BNA_RXP_SINGLE == rx_cfg->rxp_type)
-			q1 = NULL;
-		else
-			q1 = _get_free_rxq(rx_mod);
-
-		/* Initialize IB */
-		if (1 == intr_info->num) {
-			rxp->cq.ib = bna_ib_get(&bna->ib_mod,
-					intr_info->intr_type,
-					intr_info->idl[0].vector);
-			rxp->vector = intr_info->idl[0].vector;
-		} else {
-			rxp->cq.ib = bna_ib_get(&bna->ib_mod,
-					intr_info->intr_type,
-					intr_info->idl[i].vector);
-
-			/* Map the MSI-x vector used for this RXP */
-			rxp->vector = intr_info->idl[i].vector;
-		}
-
-		rxp->cq.ib_seg_offset = bna_ib_reserve_idx(rxp->cq.ib);
-
-		ibcfg.coalescing_timeo = BFI_RX_COALESCING_TIMEO;
-		ibcfg.interpkt_count = BFI_RX_INTERPKT_COUNT;
-		ibcfg.interpkt_timeo = BFI_RX_INTERPKT_TIMEO;
-		ibcfg.ctrl_flags = BFI_IB_CF_INT_ENABLE;
-
-		bna_ib_config(rxp->cq.ib, &ibcfg);
-
-		/* Link rxqs to rxp */
-		_rxp_add_rxqs(rxp, q0, q1);
-
-		/* Link rxp to rx */
-		_rx_add_rxp(rx, rxp);
-
-		q0->rx = rx;
-		q0->rxp = rxp;
-
-		/* Initialize RCB for the large / data q */
-		q0->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
-		RXQ_RCB_INIT(q0, rxp, rx_cfg->q_depth, bna, 0,
-			(void *)unmapq_mem[rcb_idx].kva);
-		rcb_idx++;
-		(q0)->rx_packets = (q0)->rx_bytes = 0;
-		(q0)->rx_packets_with_error = (q0)->rxbuf_alloc_failed = 0;
-
-		/* Initialize RXQs */
-		_rxq_qpt_init(q0, rxp, dpage_count, PAGE_SIZE,
-			&dqpt_mem[i], &dsqpt_mem[i], &dpage_mem[dpage_idx]);
-		q0->rcb->page_idx = dpage_idx;
-		q0->rcb->page_count = dpage_count;
-		dpage_idx += dpage_count;
-
-		/* Call bnad to complete rcb setup */
-		if (rx->rcb_setup_cbfn)
-			rx->rcb_setup_cbfn(bnad, q0->rcb);
-
-		if (q1) {
-			q1->rx = rx;
-			q1->rxp = rxp;
-
-			q1->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
-			RXQ_RCB_INIT(q1, rxp, rx_cfg->q_depth, bna, 1,
-				(void *)unmapq_mem[rcb_idx].kva);
-			rcb_idx++;
-			(q1)->buffer_size = (rx_cfg)->small_buff_size;
-			(q1)->rx_packets = (q1)->rx_bytes = 0;
-			(q1)->rx_packets_with_error =
-				(q1)->rxbuf_alloc_failed = 0;
-
-			_rxq_qpt_init(q1, rxp, hpage_count, PAGE_SIZE,
-				&hqpt_mem[i], &hsqpt_mem[i],
-				&hpage_mem[hpage_idx]);
-			q1->rcb->page_idx = hpage_idx;
-			q1->rcb->page_count = hpage_count;
-			hpage_idx += hpage_count;
-
-			/* Call bnad to complete rcb setup */
-			if (rx->rcb_setup_cbfn)
-				rx->rcb_setup_cbfn(bnad, q1->rcb);
-		}
-		/* Setup RXP::CQ */
-		rxp->cq.ccb = (struct bna_ccb *) ccb_mem[i].kva;
-		_rxp_cqpt_setup(rxp, page_count, PAGE_SIZE,
-			&cqpt_mem[i], &cswqpt_mem[i], &cpage_mem[cpage_idx]);
-		rxp->cq.ccb->page_idx = cpage_idx;
-		rxp->cq.ccb->page_count = page_count;
-		cpage_idx += page_count;
-
-		rxp->cq.ccb->pkt_rate.small_pkt_cnt = 0;
-		rxp->cq.ccb->pkt_rate.large_pkt_cnt = 0;
-
-		rxp->cq.ccb->producer_index = 0;
-		rxp->cq.ccb->q_depth =	rx_cfg->q_depth +
-					((rx_cfg->rxp_type == BNA_RXP_SINGLE) ?
-					0 : rx_cfg->q_depth);
-		rxp->cq.ccb->i_dbell = &rxp->cq.ib->door_bell;
-		rxp->cq.ccb->rcb[0] = q0->rcb;
-		if (q1)
-			rxp->cq.ccb->rcb[1] = q1->rcb;
-		rxp->cq.ccb->cq = &rxp->cq;
-		rxp->cq.ccb->bnad = bna->bnad;
-		rxp->cq.ccb->hw_producer_index =
-			((volatile u32 *)rxp->cq.ib->ib_seg_host_addr_kva +
-				      (rxp->cq.ib_seg_offset * BFI_IBIDX_SIZE));
-		*(rxp->cq.ccb->hw_producer_index) = 0;
-		rxp->cq.ccb->intr_type = intr_info->intr_type;
-		rxp->cq.ccb->intr_vector = (intr_info->num == 1) ?
-						intr_info->idl[0].vector :
-						intr_info->idl[i].vector;
-		rxp->cq.ccb->rx_coalescing_timeo =
-					rxp->cq.ib->ib_config.coalescing_timeo;
-		rxp->cq.ccb->id = i;
-
-		/* Call bnad to complete CCB setup */
-		if (rx->ccb_setup_cbfn)
-			rx->ccb_setup_cbfn(bnad, rxp->cq.ccb);
-
-	} /* for each rx-path */
-
-	bna_rxf_init(&rx->rxf, rx, rx_cfg);
-
-	bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-
-	return rx;
-}
-
-void
-bna_rx_destroy(struct bna_rx *rx)
-{
-	struct bna_rx_mod *rx_mod = &rx->bna->rx_mod;
-	struct bna_ib_mod *ib_mod = &rx->bna->ib_mod;
-	struct bna_rxq *q0 = NULL;
-	struct bna_rxq *q1 = NULL;
-	struct bna_rxp *rxp;
-	struct list_head *qe;
-
-	bna_rxf_uninit(&rx->rxf);
-
-	while (!list_empty(&rx->rxp_q)) {
-		bfa_q_deq(&rx->rxp_q, &rxp);
-		GET_RXQS(rxp, q0, q1);
-		/* Callback to bnad for destroying RCB */
-		if (rx->rcb_destroy_cbfn)
-			rx->rcb_destroy_cbfn(rx->bna->bnad, q0->rcb);
-		q0->rcb = NULL;
-		q0->rxp = NULL;
-		q0->rx = NULL;
-		_put_free_rxq(rx_mod, q0);
-		if (q1) {
-			/* Callback to bnad for destroying RCB */
-			if (rx->rcb_destroy_cbfn)
-				rx->rcb_destroy_cbfn(rx->bna->bnad, q1->rcb);
-			q1->rcb = NULL;
-			q1->rxp = NULL;
-			q1->rx = NULL;
-			_put_free_rxq(rx_mod, q1);
-		}
-		rxp->rxq.slr.large = NULL;
-		rxp->rxq.slr.small = NULL;
-		if (rxp->cq.ib) {
-			if (rxp->cq.ib_seg_offset != 0xff)
-				bna_ib_release_idx(rxp->cq.ib,
-						rxp->cq.ib_seg_offset);
-			bna_ib_put(ib_mod, rxp->cq.ib);
-			rxp->cq.ib = NULL;
-		}
-		/* Callback to bnad for destroying CCB */
-		if (rx->ccb_destroy_cbfn)
-			rx->ccb_destroy_cbfn(rx->bna->bnad, rxp->cq.ccb);
-		rxp->cq.ccb = NULL;
-		rxp->rx = NULL;
-		_put_free_rxp(rx_mod, rxp);
-	}
-
-	list_for_each(qe, &rx_mod->rx_active_q) {
-		if (qe == &rx->qe) {
-			list_del(&rx->qe);
-			bfa_q_qe_init(&rx->qe);
-			break;
-		}
-	}
-
-	rx->bna = NULL;
-	rx->priv = NULL;
-	_put_free_rx(rx_mod, rx);
-}
-
-void
-bna_rx_enable(struct bna_rx *rx)
-{
-	if (rx->fsm != (bfa_sm_t)bna_rx_sm_stopped)
-		return;
-
-	rx->rx_flags |= BNA_RX_F_ENABLE;
-	if (rx->rx_flags & BNA_RX_F_PORT_ENABLED)
-		bfa_fsm_send_event(rx, RX_E_START);
-}
-
-void
-bna_rx_disable(struct bna_rx *rx, enum bna_cleanup_type type,
-		void (*cbfn)(void *, struct bna_rx *,
-				enum bna_cb_status))
-{
-	if (type == BNA_SOFT_CLEANUP) {
-		/* h/w should not be accessed. Treat we're stopped */
-		(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-	} else {
-		rx->stop_cbfn = cbfn;
-		rx->stop_cbarg = rx->bna->bnad;
-
-		rx->rx_flags &= ~BNA_RX_F_ENABLE;
-
-		bfa_fsm_send_event(rx, RX_E_STOP);
-	}
-}
-
-/**
- * TX
- */
-#define call_tx_stop_cbfn(tx, status)\
-do {\
-	if ((tx)->stop_cbfn)\
-		(tx)->stop_cbfn((tx)->stop_cbarg, (tx), status);\
-	(tx)->stop_cbfn = NULL;\
-	(tx)->stop_cbarg = NULL;\
-} while (0)
-
-#define call_tx_prio_change_cbfn(tx, status)\
-do {\
-	if ((tx)->prio_change_cbfn)\
-		(tx)->prio_change_cbfn((tx)->bna->bnad, (tx), status);\
-	(tx)->prio_change_cbfn = NULL;\
-} while (0)
-
-static void bna_tx_mod_cb_tx_stopped(void *tx_mod, struct bna_tx *tx,
-					enum bna_cb_status status);
-static void bna_tx_cb_txq_stopped(void *arg, int status);
-static void bna_tx_cb_stats_cleared(void *arg, int status);
-static void __bna_tx_stop(struct bna_tx *tx);
-static void __bna_tx_start(struct bna_tx *tx);
-static void __bna_txf_stat_clr(struct bna_tx *tx);
-
-enum bna_tx_event {
-	TX_E_START			= 1,
-	TX_E_STOP			= 2,
-	TX_E_FAIL			= 3,
-	TX_E_TXQ_STOPPED		= 4,
-	TX_E_PRIO_CHANGE		= 5,
-	TX_E_STAT_CLEARED		= 6,
-};
-
-enum bna_tx_state {
-	BNA_TX_STOPPED			= 1,
-	BNA_TX_STARTED			= 2,
-	BNA_TX_TXQ_STOP_WAIT		= 3,
-	BNA_TX_PRIO_STOP_WAIT		= 4,
-	BNA_TX_STAT_CLR_WAIT		= 5,
-};
-
-bfa_fsm_state_decl(bna_tx, stopped, struct bna_tx,
-			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, started, struct bna_tx,
-			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, txq_stop_wait, struct bna_tx,
-			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, prio_stop_wait, struct bna_tx,
-			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, stat_clr_wait, struct bna_tx,
-			enum bna_tx_event);
-
-static struct bfa_sm_table tx_sm_table[] = {
-	{BFA_SM(bna_tx_sm_stopped), BNA_TX_STOPPED},
-	{BFA_SM(bna_tx_sm_started), BNA_TX_STARTED},
-	{BFA_SM(bna_tx_sm_txq_stop_wait), BNA_TX_TXQ_STOP_WAIT},
-	{BFA_SM(bna_tx_sm_prio_stop_wait), BNA_TX_PRIO_STOP_WAIT},
-	{BFA_SM(bna_tx_sm_stat_clr_wait), BNA_TX_STAT_CLR_WAIT},
-};
-
-static void
-bna_tx_sm_stopped_entry(struct bna_tx *tx)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		(tx->tx_cleanup_cbfn)(tx->bna->bnad, txq->tcb);
-	}
-
-	call_tx_stop_cbfn(tx, BNA_CB_SUCCESS);
-}
-
-static void
-bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
-{
-	switch (event) {
-	case TX_E_START:
-		bfa_fsm_set_state(tx, bna_tx_sm_started);
-		break;
-
-	case TX_E_STOP:
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
-		break;
-
-	case TX_E_FAIL:
-		/* No-op */
-		break;
-
-	case TX_E_PRIO_CHANGE:
-		call_tx_prio_change_cbfn(tx, BNA_CB_SUCCESS);
-		break;
-
-	case TX_E_TXQ_STOPPED:
-		/**
-		 * This event is received due to flushing of mbox when
-		 * device fails
-		 */
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_tx_sm_started_entry(struct bna_tx *tx)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	__bna_tx_start(tx);
-
-	/* Start IB */
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bna_ib_ack(&txq->ib->door_bell, 0);
-	}
-}
-
-static void
-bna_tx_sm_started(struct bna_tx *tx, enum bna_tx_event event)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	switch (event) {
-	case TX_E_STOP:
-		bfa_fsm_set_state(tx, bna_tx_sm_txq_stop_wait);
-		__bna_tx_stop(tx);
-		break;
-
-	case TX_E_FAIL:
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			bna_ib_fail(txq->ib);
-			(tx->tx_stall_cbfn)(tx->bna->bnad, txq->tcb);
-		}
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
-		break;
-
-	case TX_E_PRIO_CHANGE:
-		bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_tx_sm_txq_stop_wait_entry(struct bna_tx *tx)
-{
-}
-
-static void
-bna_tx_sm_txq_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	switch (event) {
-	case TX_E_FAIL:
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
-		break;
-
-	case TX_E_TXQ_STOPPED:
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			bna_ib_stop(txq->ib);
-		}
-		bfa_fsm_set_state(tx, bna_tx_sm_stat_clr_wait);
-		break;
-
-	case TX_E_PRIO_CHANGE:
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_tx_sm_prio_stop_wait_entry(struct bna_tx *tx)
-{
-	__bna_tx_stop(tx);
-}
-
-static void
-bna_tx_sm_prio_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	switch (event) {
-	case TX_E_STOP:
-		bfa_fsm_set_state(tx, bna_tx_sm_txq_stop_wait);
-		break;
-
-	case TX_E_FAIL:
-		call_tx_prio_change_cbfn(tx, BNA_CB_FAIL);
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
-		break;
-
-	case TX_E_TXQ_STOPPED:
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			bna_ib_stop(txq->ib);
-			(tx->tx_cleanup_cbfn)(tx->bna->bnad, txq->tcb);
-		}
-		call_tx_prio_change_cbfn(tx, BNA_CB_SUCCESS);
-		bfa_fsm_set_state(tx, bna_tx_sm_started);
-		break;
-
-	case TX_E_PRIO_CHANGE:
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_tx_sm_stat_clr_wait_entry(struct bna_tx *tx)
-{
-	__bna_txf_stat_clr(tx);
-}
-
-static void
-bna_tx_sm_stat_clr_wait(struct bna_tx *tx, enum bna_tx_event event)
-{
-	switch (event) {
-	case TX_E_FAIL:
-	case TX_E_STAT_CLEARED:
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-__bna_txq_start(struct bna_tx *tx, struct bna_txq *txq)
-{
-	struct bna_rxtx_q_mem *q_mem;
-	struct bna_txq_mem txq_cfg;
-	struct bna_txq_mem *txq_mem;
-	struct bna_dma_addr cur_q_addr;
-	u32 pg_num;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	/* Fill out structure, to be subsequently written to hardware */
-	txq_cfg.pg_tbl_addr_lo = txq->qpt.hw_qpt_ptr.lsb;
-	txq_cfg.pg_tbl_addr_hi = txq->qpt.hw_qpt_ptr.msb;
-	cur_q_addr = *((struct bna_dma_addr *)(txq->qpt.kv_qpt_ptr));
-	txq_cfg.cur_q_entry_lo = cur_q_addr.lsb;
-	txq_cfg.cur_q_entry_hi = cur_q_addr.msb;
-
-	txq_cfg.pg_cnt_n_prd_ptr = (txq->qpt.page_count << 16) | 0x0;
-
-	txq_cfg.entry_n_pg_size = ((u32)(BFI_TXQ_WI_SIZE >> 2) << 16) |
-			(txq->qpt.page_size >> 2);
-	txq_cfg.int_blk_n_cns_ptr = ((((u32)txq->ib_seg_offset) << 24) |
-			((u32)(txq->ib->ib_id & 0xff) << 16) | 0x0);
-
-	txq_cfg.cns_ptr2_n_q_state = BNA_Q_IDLE_STATE;
-	txq_cfg.nxt_qid_n_fid_n_pri = (((tx->txf.txf_id & 0x3f) << 3) |
-			(txq->priority & 0x7));
-	txq_cfg.wvc_n_cquota_n_rquota =
-			((((u32)BFI_TX_MAX_WRR_QUOTA & 0xfff) << 12) |
-			(BFI_TX_MAX_WRR_QUOTA & 0xfff));
-
-	/* Setup the page and write to H/W */
-
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + tx->bna->port_num,
-			HQM_RXTX_Q_RAM_BASE_OFFSET);
-	writel(pg_num, tx->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(tx->bna->pcidev.pci_bar_kva,
-					HQM_RXTX_Q_RAM_BASE_OFFSET);
-	q_mem = (struct bna_rxtx_q_mem *)0;
-	txq_mem = &q_mem[txq->txq_id].txq;
-
-	/*
-	 * The following 4 lines, is a hack b'cos the H/W needs to read
-	 * these DMA addresses as little endian
-	 */
-
-	off = (unsigned long)&txq_mem->pg_tbl_addr_lo;
-	writel(htonl(txq_cfg.pg_tbl_addr_lo), base_addr + off);
-
-	off = (unsigned long)&txq_mem->pg_tbl_addr_hi;
-	writel(htonl(txq_cfg.pg_tbl_addr_hi), base_addr + off);
-
-	off = (unsigned long)&txq_mem->cur_q_entry_lo;
-	writel(htonl(txq_cfg.cur_q_entry_lo), base_addr + off);
-
-	off = (unsigned long)&txq_mem->cur_q_entry_hi;
-	writel(htonl(txq_cfg.cur_q_entry_hi), base_addr + off);
-
-	off = (unsigned long)&txq_mem->pg_cnt_n_prd_ptr;
-	writel(txq_cfg.pg_cnt_n_prd_ptr, base_addr + off);
-
-	off = (unsigned long)&txq_mem->entry_n_pg_size;
-	writel(txq_cfg.entry_n_pg_size, base_addr + off);
-
-	off = (unsigned long)&txq_mem->int_blk_n_cns_ptr;
-	writel(txq_cfg.int_blk_n_cns_ptr, base_addr + off);
-
-	off = (unsigned long)&txq_mem->cns_ptr2_n_q_state;
-	writel(txq_cfg.cns_ptr2_n_q_state, base_addr + off);
-
-	off = (unsigned long)&txq_mem->nxt_qid_n_fid_n_pri;
-	writel(txq_cfg.nxt_qid_n_fid_n_pri, base_addr + off);
-
-	off = (unsigned long)&txq_mem->wvc_n_cquota_n_rquota;
-	writel(txq_cfg.wvc_n_cquota_n_rquota, base_addr + off);
-
-	txq->tcb->producer_index = 0;
-	txq->tcb->consumer_index = 0;
-	*(txq->tcb->hw_consumer_index) = 0;
-
-}
-
-static void
-__bna_txq_stop(struct bna_tx *tx, struct bna_txq *txq)
-{
-	struct bfi_ll_q_stop_req ll_req;
-	u32 bit_mask[2] = {0, 0};
-	if (txq->txq_id < 32)
-		bit_mask[0] = (u32)1 << txq->txq_id;
-	else
-		bit_mask[1] = (u32)1 << (txq->txq_id - 32);
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_TXQ_STOP_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-	ll_req.q_id_mask[0] = htonl(bit_mask[0]);
-	ll_req.q_id_mask[1] = htonl(bit_mask[1]);
-
-	bna_mbox_qe_fill(&tx->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_tx_cb_txq_stopped, tx);
-
-	bna_mbox_send(tx->bna, &tx->mbox_qe);
-}
-
-static void
-__bna_txf_start(struct bna_tx *tx)
-{
-	struct bna_tx_fndb_ram *tx_fndb;
-	struct bna_txf *txf = &tx->txf;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	writel(BNA_GET_PAGE_NUM(LUT0_MEM_BLK_BASE_PG_NUM +
-			(tx->bna->port_num * 2), TX_FNDB_RAM_BASE_OFFSET),
-			tx->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(tx->bna->pcidev.pci_bar_kva,
-					TX_FNDB_RAM_BASE_OFFSET);
-
-	tx_fndb = (struct bna_tx_fndb_ram *)0;
-	off = (unsigned long)&tx_fndb[txf->txf_id].vlan_n_ctrl_flags;
-
-	writel(((u32)txf->vlan << 16) | txf->ctrl_flags,
-			base_addr + off);
-
-	if (tx->txf.txf_id < 32)
-		tx->bna->tx_mod.txf_bmap[0] |= ((u32)1 << tx->txf.txf_id);
-	else
-		tx->bna->tx_mod.txf_bmap[1] |= ((u32)
-						 1 << (tx->txf.txf_id - 32));
-}
-
-static void
-__bna_txf_stop(struct bna_tx *tx)
-{
-	struct bna_tx_fndb_ram *tx_fndb;
-	u32 page_num;
-	u32 ctl_flags;
-	struct bna_txf *txf = &tx->txf;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	/* retrieve the running txf_flags & turn off enable bit */
-	page_num = BNA_GET_PAGE_NUM(LUT0_MEM_BLK_BASE_PG_NUM +
-			(tx->bna->port_num * 2), TX_FNDB_RAM_BASE_OFFSET);
-	writel(page_num, tx->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(tx->bna->pcidev.pci_bar_kva,
-					TX_FNDB_RAM_BASE_OFFSET);
-	tx_fndb = (struct bna_tx_fndb_ram *)0;
-	off = (unsigned long)&tx_fndb[txf->txf_id].vlan_n_ctrl_flags;
-
-	ctl_flags = readl(base_addr + off);
-	ctl_flags &= ~BFI_TXF_CF_ENABLE;
-
-	writel(ctl_flags, base_addr + off);
-
-	if (tx->txf.txf_id < 32)
-		tx->bna->tx_mod.txf_bmap[0] &= ~((u32)1 << tx->txf.txf_id);
-	else
-		tx->bna->tx_mod.txf_bmap[0] &= ~((u32)
-						 1 << (tx->txf.txf_id - 32));
-}
-
-static void
-__bna_txf_stat_clr(struct bna_tx *tx)
-{
-	struct bfi_ll_stats_req ll_req;
-	u32 txf_bmap[2] = {0, 0};
-	if (tx->txf.txf_id < 32)
-		txf_bmap[0] = ((u32)1 << tx->txf.txf_id);
-	else
-		txf_bmap[1] = ((u32)1 << (tx->txf.txf_id - 32));
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_STATS_CLEAR_REQ, 0);
-	ll_req.stats_mask = 0;
-	ll_req.rxf_id_mask[0] = 0;
-	ll_req.rxf_id_mask[1] =	0;
-	ll_req.txf_id_mask[0] =	htonl(txf_bmap[0]);
-	ll_req.txf_id_mask[1] =	htonl(txf_bmap[1]);
-
-	bna_mbox_qe_fill(&tx->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_tx_cb_stats_cleared, tx);
-	bna_mbox_send(tx->bna, &tx->mbox_qe);
-}
-
-static void
-__bna_tx_start(struct bna_tx *tx)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bna_ib_start(txq->ib);
-		__bna_txq_start(tx, txq);
-	}
-
-	__bna_txf_start(tx);
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		txq->tcb->priority = txq->priority;
-		(tx->tx_resume_cbfn)(tx->bna->bnad, txq->tcb);
-	}
-}
-
-static void
-__bna_tx_stop(struct bna_tx *tx)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		(tx->tx_stall_cbfn)(tx->bna->bnad, txq->tcb);
-	}
-
-	__bna_txf_stop(tx);
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bfa_wc_up(&tx->txq_stop_wc);
-	}
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		__bna_txq_stop(tx, txq);
-	}
-}
-
-static void
-bna_txq_qpt_setup(struct bna_txq *txq, int page_count, int page_size,
-		struct bna_mem_descr *qpt_mem,
-		struct bna_mem_descr *swqpt_mem,
-		struct bna_mem_descr *page_mem)
-{
-	int i;
-
-	txq->qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
-	txq->qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
-	txq->qpt.kv_qpt_ptr = qpt_mem->kva;
-	txq->qpt.page_count = page_count;
-	txq->qpt.page_size = page_size;
-
-	txq->tcb->sw_qpt = (void **) swqpt_mem->kva;
-
-	for (i = 0; i < page_count; i++) {
-		txq->tcb->sw_qpt[i] = page_mem[i].kva;
-
-		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].lsb =
-			page_mem[i].dma.lsb;
-		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].msb =
-			page_mem[i].dma.msb;
-
-	}
-}
-
-static void
-bna_tx_free(struct bna_tx *tx)
-{
-	struct bna_tx_mod *tx_mod = &tx->bna->tx_mod;
-	struct bna_txq *txq;
-	struct bna_ib_mod *ib_mod = &tx->bna->ib_mod;
-	struct list_head *qe;
-
-	while (!list_empty(&tx->txq_q)) {
-		bfa_q_deq(&tx->txq_q, &txq);
-		bfa_q_qe_init(&txq->qe);
-		if (txq->ib) {
-			if (txq->ib_seg_offset != -1)
-				bna_ib_release_idx(txq->ib,
-						txq->ib_seg_offset);
-			bna_ib_put(ib_mod, txq->ib);
-			txq->ib = NULL;
-		}
-		txq->tcb = NULL;
-		txq->tx = NULL;
-		list_add_tail(&txq->qe, &tx_mod->txq_free_q);
-	}
-
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		if (qe == &tx->qe) {
-			list_del(&tx->qe);
-			bfa_q_qe_init(&tx->qe);
-			break;
-		}
-	}
-
-	tx->bna = NULL;
-	tx->priv = NULL;
-	list_add_tail(&tx->qe, &tx_mod->tx_free_q);
-}
-
-static void
-bna_tx_cb_txq_stopped(void *arg, int status)
-{
-	struct bna_tx *tx = (struct bna_tx *)arg;
-
-	bfa_q_qe_init(&tx->mbox_qe.qe);
-	bfa_wc_down(&tx->txq_stop_wc);
-}
-
-static void
-bna_tx_cb_txq_stopped_all(void *arg)
-{
-	struct bna_tx *tx = (struct bna_tx *)arg;
-
-	bfa_fsm_send_event(tx, TX_E_TXQ_STOPPED);
-}
-
-static void
-bna_tx_cb_stats_cleared(void *arg, int status)
-{
-	struct bna_tx *tx = (struct bna_tx *)arg;
-
-	bfa_q_qe_init(&tx->mbox_qe.qe);
-
-	bfa_fsm_send_event(tx, TX_E_STAT_CLEARED);
-}
-
-static void
-bna_tx_start(struct bna_tx *tx)
-{
-	tx->flags |= BNA_TX_F_PORT_STARTED;
-	if (tx->flags & BNA_TX_F_ENABLED)
-		bfa_fsm_send_event(tx, TX_E_START);
-}
-
-static void
-bna_tx_stop(struct bna_tx *tx)
-{
-	tx->stop_cbfn = bna_tx_mod_cb_tx_stopped;
-	tx->stop_cbarg = &tx->bna->tx_mod;
-
-	tx->flags &= ~BNA_TX_F_PORT_STARTED;
-	bfa_fsm_send_event(tx, TX_E_STOP);
-}
-
-static void
-bna_tx_fail(struct bna_tx *tx)
-{
-	tx->flags &= ~BNA_TX_F_PORT_STARTED;
-	bfa_fsm_send_event(tx, TX_E_FAIL);
-}
-
-static void
-bna_tx_prio_changed(struct bna_tx *tx, int prio)
-{
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		txq->priority = prio;
-	}
-
-	bfa_fsm_send_event(tx, TX_E_PRIO_CHANGE);
-}
-
-static void
-bna_tx_cee_link_status(struct bna_tx *tx, int cee_link)
-{
-	if (cee_link)
-		tx->flags |= BNA_TX_F_PRIO_LOCK;
-	else
-		tx->flags &= ~BNA_TX_F_PRIO_LOCK;
-}
-
-static void
-bna_tx_mod_cb_tx_stopped(void *arg, struct bna_tx *tx,
-			enum bna_cb_status status)
-{
-	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
-
-	bfa_wc_down(&tx_mod->tx_stop_wc);
-}
-
-static void
-bna_tx_mod_cb_tx_stopped_all(void *arg)
-{
-	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
-
-	if (tx_mod->stop_cbfn)
-		tx_mod->stop_cbfn(&tx_mod->bna->port, BNA_CB_SUCCESS);
-	tx_mod->stop_cbfn = NULL;
-}
-
-void
-bna_tx_res_req(int num_txq, int txq_depth, struct bna_res_info *res_info)
-{
-	u32 q_size;
-	u32 page_count;
-	struct bna_mem_info *mem_info;
-
-	res_info[BNA_TX_RES_MEM_T_TCB].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = sizeof(struct bna_tcb);
-	mem_info->num = num_txq;
-
-	q_size = txq_depth * BFI_TXQ_WI_SIZE;
-	q_size = ALIGN(q_size, PAGE_SIZE);
-	page_count = q_size >> PAGE_SHIFT;
-
-	res_info[BNA_TX_RES_MEM_T_QPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_TX_RES_MEM_T_QPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = page_count * sizeof(struct bna_dma_addr);
-	mem_info->num = num_txq;
-
-	res_info[BNA_TX_RES_MEM_T_SWQPT].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_TX_RES_MEM_T_SWQPT].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_KVA;
-	mem_info->len = page_count * sizeof(void *);
-	mem_info->num = num_txq;
-
-	res_info[BNA_TX_RES_MEM_T_PAGE].res_type = BNA_RES_T_MEM;
-	mem_info = &res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info;
-	mem_info->mem_type = BNA_MEM_T_DMA;
-	mem_info->len = PAGE_SIZE;
-	mem_info->num = num_txq * page_count;
-
-	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_type = BNA_RES_T_INTR;
-	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.intr_type =
-			BNA_INTR_T_MSIX;
-	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.num = num_txq;
-}
-
-struct bna_tx *
-bna_tx_create(struct bna *bna, struct bnad *bnad,
-		struct bna_tx_config *tx_cfg,
-		struct bna_tx_event_cbfn *tx_cbfn,
-		struct bna_res_info *res_info, void *priv)
-{
-	struct bna_intr_info *intr_info;
-	struct bna_tx_mod *tx_mod = &bna->tx_mod;
-	struct bna_tx *tx;
-	struct bna_txq *txq;
-	struct list_head *qe;
-	struct bna_ib_mod *ib_mod = &bna->ib_mod;
-	struct bna_doorbell_qset *qset;
-	struct bna_ib_config ib_config;
-	int page_count;
-	int page_size;
-	int page_idx;
-	int i;
-	unsigned long off;
-
-	intr_info = &res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info;
-	page_count = (res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.num) /
-			tx_cfg->num_txq;
-	page_size = res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.len;
-
-	/**
-	 * Get resources
-	 */
-
-	if ((intr_info->num != 1) && (intr_info->num != tx_cfg->num_txq))
-		return NULL;
-
-	/* Tx */
-
-	if (list_empty(&tx_mod->tx_free_q))
-		return NULL;
-	bfa_q_deq(&tx_mod->tx_free_q, &tx);
-	bfa_q_qe_init(&tx->qe);
-
-	/* TxQs */
-
-	INIT_LIST_HEAD(&tx->txq_q);
-	for (i = 0; i < tx_cfg->num_txq; i++) {
-		if (list_empty(&tx_mod->txq_free_q))
-			goto err_return;
-
-		bfa_q_deq(&tx_mod->txq_free_q, &txq);
-		bfa_q_qe_init(&txq->qe);
-		list_add_tail(&txq->qe, &tx->txq_q);
-		txq->ib = NULL;
-		txq->ib_seg_offset = -1;
-		txq->tx = tx;
-	}
-
-	/* IBs */
-	i = 0;
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-
-		if (intr_info->num == 1)
-			txq->ib = bna_ib_get(ib_mod, intr_info->intr_type,
-						intr_info->idl[0].vector);
-		else
-			txq->ib = bna_ib_get(ib_mod, intr_info->intr_type,
-						intr_info->idl[i].vector);
-
-		if (txq->ib == NULL)
-			goto err_return;
-
-		txq->ib_seg_offset = bna_ib_reserve_idx(txq->ib);
-		if (txq->ib_seg_offset == -1)
-			goto err_return;
-
-		i++;
-	}
-
-	/*
-	 * Initialize
-	 */
-
-	/* Tx */
-
-	tx->tcb_setup_cbfn = tx_cbfn->tcb_setup_cbfn;
-	tx->tcb_destroy_cbfn = tx_cbfn->tcb_destroy_cbfn;
-	/* Following callbacks are mandatory */
-	tx->tx_stall_cbfn = tx_cbfn->tx_stall_cbfn;
-	tx->tx_resume_cbfn = tx_cbfn->tx_resume_cbfn;
-	tx->tx_cleanup_cbfn = tx_cbfn->tx_cleanup_cbfn;
-
-	list_add_tail(&tx->qe, &tx_mod->tx_active_q);
-	tx->bna = bna;
-	tx->priv = priv;
-	tx->txq_stop_wc.wc_resume = bna_tx_cb_txq_stopped_all;
-	tx->txq_stop_wc.wc_cbarg = tx;
-	tx->txq_stop_wc.wc_count = 0;
-
-	tx->type = tx_cfg->tx_type;
-
-	tx->flags = 0;
-	if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_PORT_STARTED) {
-		switch (tx->type) {
-		case BNA_TX_T_REGULAR:
-			if (!(tx->bna->tx_mod.flags &
-				BNA_TX_MOD_F_PORT_LOOPBACK))
-				tx->flags |= BNA_TX_F_PORT_STARTED;
-			break;
-		case BNA_TX_T_LOOPBACK:
-			if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_PORT_LOOPBACK)
-				tx->flags |= BNA_TX_F_PORT_STARTED;
-			break;
-		}
-	}
-	if (tx->bna->tx_mod.cee_link)
-		tx->flags |= BNA_TX_F_PRIO_LOCK;
-
-	/* TxQ */
-
-	i = 0;
-	page_idx = 0;
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		txq->priority = tx_mod->priority;
-		txq->tcb = (struct bna_tcb *)
-		  res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info.mdl[i].kva;
-		txq->tx_packets = 0;
-		txq->tx_bytes = 0;
-
-		/* IB */
-
-		ib_config.coalescing_timeo = BFI_TX_COALESCING_TIMEO;
-		ib_config.interpkt_timeo = 0; /* Not used */
-		ib_config.interpkt_count = BFI_TX_INTERPKT_COUNT;
-		ib_config.ctrl_flags = (BFI_IB_CF_INTER_PKT_DMA |
-					BFI_IB_CF_INT_ENABLE |
-					BFI_IB_CF_COALESCING_MODE);
-		bna_ib_config(txq->ib, &ib_config);
-
-		/* TCB */
-
-		txq->tcb->producer_index = 0;
-		txq->tcb->consumer_index = 0;
-		txq->tcb->hw_consumer_index = (volatile u32 *)
-			((volatile u8 *)txq->ib->ib_seg_host_addr_kva +
-			 (txq->ib_seg_offset * BFI_IBIDX_SIZE));
-		*(txq->tcb->hw_consumer_index) = 0;
-		txq->tcb->q_depth = tx_cfg->txq_depth;
-		txq->tcb->unmap_q = (void *)
-		res_info[BNA_TX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[i].kva;
-		qset = (struct bna_doorbell_qset *)0;
-		off = (unsigned long)&qset[txq->txq_id].txq[0];
-		txq->tcb->q_dbell = off +
-			BNA_GET_DOORBELL_BASE_ADDR(bna->pcidev.pci_bar_kva);
-		txq->tcb->i_dbell = &txq->ib->door_bell;
-		txq->tcb->intr_type = intr_info->intr_type;
-		txq->tcb->intr_vector = (intr_info->num == 1) ?
-					intr_info->idl[0].vector :
-					intr_info->idl[i].vector;
-		txq->tcb->txq = txq;
-		txq->tcb->bnad = bnad;
-		txq->tcb->id = i;
-
-		/* QPT, SWQPT, Pages */
-		bna_txq_qpt_setup(txq, page_count, page_size,
-			&res_info[BNA_TX_RES_MEM_T_QPT].res_u.mem_info.mdl[i],
-			&res_info[BNA_TX_RES_MEM_T_SWQPT].res_u.mem_info.mdl[i],
-			&res_info[BNA_TX_RES_MEM_T_PAGE].
-				  res_u.mem_info.mdl[page_idx]);
-		txq->tcb->page_idx = page_idx;
-		txq->tcb->page_count = page_count;
-		page_idx += page_count;
-
-		/* Callback to bnad for setting up TCB */
-		if (tx->tcb_setup_cbfn)
-			(tx->tcb_setup_cbfn)(bna->bnad, txq->tcb);
-
-		i++;
-	}
-
-	/* TxF */
-
-	tx->txf.ctrl_flags = BFI_TXF_CF_ENABLE | BFI_TXF_CF_VLAN_WI_BASED;
-	tx->txf.vlan = 0;
-
-	/* Mbox element */
-	bfa_q_qe_init(&tx->mbox_qe.qe);
-
-	bfa_fsm_set_state(tx, bna_tx_sm_stopped);
-
-	return tx;
-
-err_return:
-	bna_tx_free(tx);
-	return NULL;
-}
-
-void
-bna_tx_destroy(struct bna_tx *tx)
-{
-	/* Callback to bnad for destroying TCB */
-	if (tx->tcb_destroy_cbfn) {
-		struct bna_txq *txq;
-		struct list_head *qe;
-
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			(tx->tcb_destroy_cbfn)(tx->bna->bnad, txq->tcb);
-		}
-	}
-
-	bna_tx_free(tx);
-}
-
-void
-bna_tx_enable(struct bna_tx *tx)
-{
-	if (tx->fsm != (bfa_sm_t)bna_tx_sm_stopped)
-		return;
-
-	tx->flags |= BNA_TX_F_ENABLED;
-
-	if (tx->flags & BNA_TX_F_PORT_STARTED)
-		bfa_fsm_send_event(tx, TX_E_START);
-}
-
-void
-bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
-		void (*cbfn)(void *, struct bna_tx *, enum bna_cb_status))
-{
-	if (type == BNA_SOFT_CLEANUP) {
-		(*cbfn)(tx->bna->bnad, tx, BNA_CB_SUCCESS);
-		return;
-	}
-
-	tx->stop_cbfn = cbfn;
-	tx->stop_cbarg = tx->bna->bnad;
-
-	tx->flags &= ~BNA_TX_F_ENABLED;
-
-	bfa_fsm_send_event(tx, TX_E_STOP);
-}
-
-int
-bna_tx_state_get(struct bna_tx *tx)
-{
-	return bfa_sm_to_state(tx_sm_table, tx->fsm);
-}
-
-void
-bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	int i;
-
-	tx_mod->bna = bna;
-	tx_mod->flags = 0;
-
-	tx_mod->tx = (struct bna_tx *)
-		res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.mdl[0].kva;
-	tx_mod->txq = (struct bna_txq *)
-		res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&tx_mod->tx_free_q);
-	INIT_LIST_HEAD(&tx_mod->tx_active_q);
-
-	INIT_LIST_HEAD(&tx_mod->txq_free_q);
-
-	for (i = 0; i < BFI_MAX_TXQ; i++) {
-		tx_mod->tx[i].txf.txf_id = i;
-		bfa_q_qe_init(&tx_mod->tx[i].qe);
-		list_add_tail(&tx_mod->tx[i].qe, &tx_mod->tx_free_q);
-
-		tx_mod->txq[i].txq_id = i;
-		bfa_q_qe_init(&tx_mod->txq[i].qe);
-		list_add_tail(&tx_mod->txq[i].qe, &tx_mod->txq_free_q);
-	}
-
-	tx_mod->tx_stop_wc.wc_resume = bna_tx_mod_cb_tx_stopped_all;
-	tx_mod->tx_stop_wc.wc_cbarg = tx_mod;
-	tx_mod->tx_stop_wc.wc_count = 0;
-}
-
-void
-bna_tx_mod_uninit(struct bna_tx_mod *tx_mod)
-{
-	struct list_head		*qe;
-	int i;
-
-	i = 0;
-	list_for_each(qe, &tx_mod->tx_free_q)
-		i++;
-
-	i = 0;
-	list_for_each(qe, &tx_mod->txq_free_q)
-		i++;
-
-	tx_mod->bna = NULL;
-}
-
-void
-bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
-{
-	struct bna_tx *tx;
-	struct list_head		*qe;
-
-	tx_mod->flags |= BNA_TX_MOD_F_PORT_STARTED;
-	if (type == BNA_TX_T_LOOPBACK)
-		tx_mod->flags |= BNA_TX_MOD_F_PORT_LOOPBACK;
-
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		if (tx->type == type)
-			bna_tx_start(tx);
-	}
-}
-
-void
-bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
-{
-	struct bna_tx *tx;
-	struct list_head		*qe;
-
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_STARTED;
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_LOOPBACK;
-
-	tx_mod->stop_cbfn = bna_port_cb_tx_stopped;
-
-	/**
-	 * Before calling bna_tx_stop(), increment tx_stop_wc as many times
-	 * as we are going to call bna_tx_stop
-	 */
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		if (tx->type == type)
-			bfa_wc_up(&tx_mod->tx_stop_wc);
-	}
-
-	if (tx_mod->tx_stop_wc.wc_count == 0) {
-		tx_mod->stop_cbfn(&tx_mod->bna->port, BNA_CB_SUCCESS);
-		tx_mod->stop_cbfn = NULL;
-		return;
-	}
-
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		if (tx->type == type)
-			bna_tx_stop(tx);
-	}
-}
-
-void
-bna_tx_mod_fail(struct bna_tx_mod *tx_mod)
-{
-	struct bna_tx *tx;
-	struct list_head		*qe;
-
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_STARTED;
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_LOOPBACK;
-
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		bna_tx_fail(tx);
-	}
-}
-
-void
-bna_tx_mod_prio_changed(struct bna_tx_mod *tx_mod, int prio)
-{
-	struct bna_tx *tx;
-	struct list_head		*qe;
-
-	if (prio != tx_mod->priority) {
-		tx_mod->priority = prio;
-
-		list_for_each(qe, &tx_mod->tx_active_q) {
-			tx = (struct bna_tx *)qe;
-			bna_tx_prio_changed(tx, prio);
-		}
-	}
-}
-
-void
-bna_tx_mod_cee_link_status(struct bna_tx_mod *tx_mod, int cee_link)
-{
-	struct bna_tx *tx;
-	struct list_head		*qe;
-
-	tx_mod->cee_link = cee_link;
-
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		bna_tx_cee_link_status(tx, cee_link);
-	}
-}
-- 
1.7.1



* [PATCH 8/8] bna: Driver Version changed to 3.0.2.0
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (6 preceding siblings ...)
  2011-08-09  2:21 ` [PATCH 7/8] bna: Remove Obsolete Files Rasesh Mody
@ 2011-08-09  2:21 ` Rasesh Mody
  2011-08-11 14:33 ` [PATCH 0/8] bna: Update bna driver version " David Miller
  8 siblings, 0 replies; 10+ messages in thread
From: Rasesh Mody @ 2011-08-09  2:21 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bnad.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/bna/bnad.h b/drivers/net/bna/bnad.h
index a538cf4..5b5451e 100644
--- a/drivers/net/bna/bnad.h
+++ b/drivers/net/bna/bnad.h
@@ -67,7 +67,7 @@ struct bnad_rx_ctrl {
 #define BNAD_NAME			"bna"
 #define BNAD_NAME_LEN			64
 
-#define BNAD_VERSION			"2.3.2.3"
+#define BNAD_VERSION			"3.0.2.0"
 
 #define BNAD_MAILBOX_MSIX_INDEX		0
 #define BNAD_MAILBOX_MSIX_VECTORS	1
-- 
1.7.1



* Re: [PATCH 0/8] bna: Update bna driver version to 3.0.2.0
  2011-08-09  2:21 [PATCH 0/8] bna: Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (7 preceding siblings ...)
  2011-08-09  2:21 ` [PATCH 8/8] bna: Driver Version changed to 3.0.2.0 Rasesh Mody
@ 2011-08-11 14:33 ` David Miller
  8 siblings, 0 replies; 10+ messages in thread
From: David Miller @ 2011-08-11 14:33 UTC (permalink / raw)
  To: rmody; +Cc: netdev, adapter_linux_open_src_team

From: Rasesh Mody <rmody@brocade.com>
Date: Mon, 8 Aug 2011 19:21:34 -0700

>    The following patch set contains changes for driver re-architecture and
>    code re-organisazion. This includes driver firmware interface change,
>    tx and rx re-design and corresponding changes required to use/enable new
>    code and also keep the patch set bisectable. It also removes obsolete
>    files and cleans up unused code.
> 
>    This updates the Brocade BNA driver to v3.0.2.0.
> 
>    The driver has been compiled & tested against net-next-2.6(3.0.0-rc7)

All applied, thanks.
