netdev.vger.kernel.org archive mirror
* [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture
@ 2011-07-27  2:10 Rasesh Mody
  2011-07-27  2:10 ` [PATCH 1/8] bna: Remove get_regs Ethtool Support Rasesh Mody
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

   This patch set consists of a few fixes and HW register consolidation, and
   adds support for the re-architecture and re-organisation of the driver.

   The driver has been compiled and tested against net-next-2.6 (3.0.0-rc7).

   Please note that the Tx/Rx redesign and ENET changes are not yet enabled.
   They will be enabled in a subsequent submission, once all of the other
   supporting code is ready.

Rasesh Mody (8):
  bna: Remove get_regs Ethtool Support
  bna: Consolidated HW Registers for Supported HWs
  bna: Remove Obsolete File bfi_ctreg.h
  bna: MSGQ Implementation
  bna: Remove Unnecessary CNA Check
  bna: HW Interface Init Update
  bna: Introduce ENET as New Driver and FW Interface
  bna: Tx and Rx Redesign

 drivers/net/bna/Makefile       |    3 +-
 drivers/net/bna/bfa_ioc.c      |   16 +-
 drivers/net/bna/bfa_ioc.h      |    4 +-
 drivers/net/bna/bfa_ioc_ct.c   |  118 +-
 drivers/net/bna/bfa_msgq.c     |  669 +++++++
 drivers/net/bna/bfa_msgq.h     |  130 ++
 drivers/net/bna/bfi.h          |  101 +
 drivers/net/bna/bfi_ctreg.h    |  646 -------
 drivers/net/bna/bfi_enet.h     |  902 +++++++++
 drivers/net/bna/bfi_reg.h      |  452 +++++
 drivers/net/bna/bna_ctrl.c     |    6 +-
 drivers/net/bna/bna_enet.c     | 2199 ++++++++++++++++++++++
 drivers/net/bna/bna_hw.h       |    6 +-
 drivers/net/bna/bna_tx_rx.c    | 3966 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/bna/bnad_ethtool.c |  319 ----
 15 files changed, 8499 insertions(+), 1038 deletions(-)
 create mode 100644 drivers/net/bna/bfa_msgq.c
 create mode 100644 drivers/net/bna/bfa_msgq.h
 delete mode 100644 drivers/net/bna/bfi_ctreg.h
 create mode 100644 drivers/net/bna/bfi_enet.h
 create mode 100644 drivers/net/bna/bfi_reg.h
 create mode 100644 drivers/net/bna/bna_enet.c
 create mode 100644 drivers/net/bna/bna_tx_rx.c


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/8] bna: Remove get_regs Ethtool Support
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 2/8] bna: Consolidated HW Registers for Supported HWs Rasesh Mody
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - This patch removes get_regs support from bnad_ethtool.c. As a result, the
   BNA driver keeps only the minimal register definitions necessary for MBOX
   and interrupt operations (see the sketch below).
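
   For context, the effect on the ethtool interface is that register-dump
   requests now fail for bna devices. A minimal sketch of the core path this
   touches (modeled on the net/core/ethtool.c behavior of this era; the
   _sketch helper is illustrative only and is not part of the patch):

	#include <linux/ethtool.h>
	#include <linux/netdevice.h>

	static int ethtool_get_regs_sketch(struct net_device *dev,
					   char __user *useraddr)
	{
		const struct ethtool_ops *ops = dev->ethtool_ops;

		/* With .get_regs/.get_regs_len gone from bnad_ethtool_ops,
		 * a register dump request (e.g. "ethtool -d ethX") returns
		 * -EOPNOTSUPP for bna interfaces. */
		if (!ops->get_regs || !ops->get_regs_len)
			return -EOPNOTSUPP;

		/* ...otherwise allocate ops->get_regs_len(dev) bytes and
		 * call ops->get_regs() to fill the dump for userspace... */
		return 0;
	}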

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bnad_ethtool.c |  319 ----------------------------------------
 1 files changed, 0 insertions(+), 319 deletions(-)

diff --git a/drivers/net/bna/bnad_ethtool.c b/drivers/net/bna/bnad_ethtool.c
index fea07f1..49174f8 100644
--- a/drivers/net/bna/bnad_ethtool.c
+++ b/drivers/net/bna/bnad_ethtool.c
@@ -288,323 +288,6 @@ bnad_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
 	strncpy(drvinfo->bus_info, pci_name(bnad->pcidev), ETHTOOL_BUSINFO_LEN);
 }
 
-static int
-get_regs(struct bnad *bnad, u32 * regs)
-{
-	int num = 0, i;
-	u32 reg_addr;
-	unsigned long flags;
-
-#define BNAD_GET_REG(addr)					\
-do {								\
-	if (regs)						\
-		regs[num++] = readl(bnad->bar0 + (addr));	\
-	else							\
-		num++;						\
-} while (0)
-
-	spin_lock_irqsave(&bnad->bna_lock, flags);
-
-	/* DMA Block Internal Registers */
-	BNAD_GET_REG(DMA_CTRL_REG0);
-	BNAD_GET_REG(DMA_CTRL_REG1);
-	BNAD_GET_REG(DMA_ERR_INT_STATUS);
-	BNAD_GET_REG(DMA_ERR_INT_ENABLE);
-	BNAD_GET_REG(DMA_ERR_INT_STATUS_SET);
-
-	/* APP Block Register Address Offset from BAR0 */
-	BNAD_GET_REG(HOSTFN0_INT_STATUS);
-	BNAD_GET_REG(HOSTFN0_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN0);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN0);
-	BNAD_GET_REG(FN0_PCIE_ERR_REG);
-	BNAD_GET_REG(FN0_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN0_ERR_TYPE_MSK_STATUS_REG);
-
-	BNAD_GET_REG(HOSTFN1_INT_STATUS);
-	BNAD_GET_REG(HOSTFN1_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN1);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN1);
-	BNAD_GET_REG(FN1_PCIE_ERR_REG);
-	BNAD_GET_REG(FN1_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN1_ERR_TYPE_MSK_STATUS_REG);
-
-	BNAD_GET_REG(PCIE_MISC_REG);
-
-	BNAD_GET_REG(HOST_SEM0_INFO_REG);
-	BNAD_GET_REG(HOST_SEM1_INFO_REG);
-	BNAD_GET_REG(HOST_SEM2_INFO_REG);
-	BNAD_GET_REG(HOST_SEM3_INFO_REG);
-
-	BNAD_GET_REG(TEMPSENSE_CNTL_REG);
-	BNAD_GET_REG(TEMPSENSE_STAT_REG);
-
-	BNAD_GET_REG(APP_LOCAL_ERR_STAT);
-	BNAD_GET_REG(APP_LOCAL_ERR_MSK);
-
-	BNAD_GET_REG(PCIE_LNK_ERR_STAT);
-	BNAD_GET_REG(PCIE_LNK_ERR_MSK);
-
-	BNAD_GET_REG(FCOE_FIP_ETH_TYPE);
-	BNAD_GET_REG(RESV_ETH_TYPE);
-
-	BNAD_GET_REG(HOSTFN2_INT_STATUS);
-	BNAD_GET_REG(HOSTFN2_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN2);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN2);
-	BNAD_GET_REG(FN2_PCIE_ERR_REG);
-	BNAD_GET_REG(FN2_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN2_ERR_TYPE_MSK_STATUS_REG);
-
-	BNAD_GET_REG(HOSTFN3_INT_STATUS);
-	BNAD_GET_REG(HOSTFN3_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN3);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN3);
-	BNAD_GET_REG(FN3_PCIE_ERR_REG);
-	BNAD_GET_REG(FN3_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN3_ERR_TYPE_MSK_STATUS_REG);
-
-	/* Host Command Status Registers */
-	reg_addr = HOST_CMDSTS0_CLR_REG;
-	for (i = 0; i < 16; i++) {
-		BNAD_GET_REG(reg_addr);
-		BNAD_GET_REG(reg_addr + 4);
-		BNAD_GET_REG(reg_addr + 8);
-		reg_addr += 0x10;
-	}
-
-	/* Function ID register */
-	BNAD_GET_REG(FNC_ID_REG);
-
-	/* Function personality register */
-	BNAD_GET_REG(FNC_PERS_REG);
-
-	/* Operation mode register */
-	BNAD_GET_REG(OP_MODE);
-
-	/* LPU0 Registers */
-	BNAD_GET_REG(LPU0_MBOX_CTL_REG);
-	BNAD_GET_REG(LPU0_MBOX_CMD_REG);
-	BNAD_GET_REG(LPU0_MBOX_LINK_0REG);
-	BNAD_GET_REG(LPU1_MBOX_LINK_0REG);
-	BNAD_GET_REG(LPU0_MBOX_STATUS_0REG);
-	BNAD_GET_REG(LPU1_MBOX_STATUS_0REG);
-	BNAD_GET_REG(LPU0_ERR_STATUS_REG);
-	BNAD_GET_REG(LPU0_ERR_SET_REG);
-
-	/* LPU1 Registers */
-	BNAD_GET_REG(LPU1_MBOX_CTL_REG);
-	BNAD_GET_REG(LPU1_MBOX_CMD_REG);
-	BNAD_GET_REG(LPU0_MBOX_LINK_1REG);
-	BNAD_GET_REG(LPU1_MBOX_LINK_1REG);
-	BNAD_GET_REG(LPU0_MBOX_STATUS_1REG);
-	BNAD_GET_REG(LPU1_MBOX_STATUS_1REG);
-	BNAD_GET_REG(LPU1_ERR_STATUS_REG);
-	BNAD_GET_REG(LPU1_ERR_SET_REG);
-
-	/* PSS Registers */
-	BNAD_GET_REG(PSS_CTL_REG);
-	BNAD_GET_REG(PSS_ERR_STATUS_REG);
-	BNAD_GET_REG(ERR_STATUS_SET);
-	BNAD_GET_REG(PSS_RAM_ERR_STATUS_REG);
-
-	/* Catapult CPQ Registers */
-	BNAD_GET_REG(HOSTFN0_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN0_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN0_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN0_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN0_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN0_MBOX1_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN1_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN1_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN1_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN1_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN1_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN1_MBOX1_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN2_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN2_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN2_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN2_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN2_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN2_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN2_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN2_MBOX1_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN3_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN3_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN3_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN3_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN3_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN3_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN3_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN3_MBOX1_CMD_STAT);
-
-	/* Host Function Force Parity Error Registers */
-	BNAD_GET_REG(HOSTFN0_LPU_FORCE_PERR);
-	BNAD_GET_REG(HOSTFN1_LPU_FORCE_PERR);
-	BNAD_GET_REG(HOSTFN2_LPU_FORCE_PERR);
-	BNAD_GET_REG(HOSTFN3_LPU_FORCE_PERR);
-
-	/* LL Port[0|1] Halt Mask Registers */
-	BNAD_GET_REG(LL_HALT_MSK_P0);
-	BNAD_GET_REG(LL_HALT_MSK_P1);
-
-	/* LL Port[0|1] Error Mask Registers */
-	BNAD_GET_REG(LL_ERR_MSK_P0);
-	BNAD_GET_REG(LL_ERR_MSK_P1);
-
-	/* EMC FLI Registers */
-	BNAD_GET_REG(FLI_CMD_REG);
-	BNAD_GET_REG(FLI_ADDR_REG);
-	BNAD_GET_REG(FLI_CTL_REG);
-	BNAD_GET_REG(FLI_WRDATA_REG);
-	BNAD_GET_REG(FLI_RDDATA_REG);
-	BNAD_GET_REG(FLI_DEV_STATUS_REG);
-	BNAD_GET_REG(FLI_SIG_WD_REG);
-
-	BNAD_GET_REG(FLI_DEV_VENDOR_REG);
-	BNAD_GET_REG(FLI_ERR_STATUS_REG);
-
-	/* RxAdm 0 Registers */
-	BNAD_GET_REG(RAD0_CTL_REG);
-	BNAD_GET_REG(RAD0_PE_PARM_REG);
-	BNAD_GET_REG(RAD0_BCN_REG);
-	BNAD_GET_REG(RAD0_DEFAULT_REG);
-	BNAD_GET_REG(RAD0_PROMISC_REG);
-	BNAD_GET_REG(RAD0_BCNQ_REG);
-	BNAD_GET_REG(RAD0_DEFAULTQ_REG);
-
-	BNAD_GET_REG(RAD0_ERR_STS);
-	BNAD_GET_REG(RAD0_SET_ERR_STS);
-	BNAD_GET_REG(RAD0_ERR_INT_EN);
-	BNAD_GET_REG(RAD0_FIRST_ERR);
-	BNAD_GET_REG(RAD0_FORCE_ERR);
-
-	BNAD_GET_REG(RAD0_MAC_MAN_1H);
-	BNAD_GET_REG(RAD0_MAC_MAN_1L);
-	BNAD_GET_REG(RAD0_MAC_MAN_2H);
-	BNAD_GET_REG(RAD0_MAC_MAN_2L);
-	BNAD_GET_REG(RAD0_MAC_MAN_3H);
-	BNAD_GET_REG(RAD0_MAC_MAN_3L);
-	BNAD_GET_REG(RAD0_MAC_MAN_4H);
-	BNAD_GET_REG(RAD0_MAC_MAN_4L);
-
-	BNAD_GET_REG(RAD0_LAST4_IP);
-
-	/* RxAdm 1 Registers */
-	BNAD_GET_REG(RAD1_CTL_REG);
-	BNAD_GET_REG(RAD1_PE_PARM_REG);
-	BNAD_GET_REG(RAD1_BCN_REG);
-	BNAD_GET_REG(RAD1_DEFAULT_REG);
-	BNAD_GET_REG(RAD1_PROMISC_REG);
-	BNAD_GET_REG(RAD1_BCNQ_REG);
-	BNAD_GET_REG(RAD1_DEFAULTQ_REG);
-
-	BNAD_GET_REG(RAD1_ERR_STS);
-	BNAD_GET_REG(RAD1_SET_ERR_STS);
-	BNAD_GET_REG(RAD1_ERR_INT_EN);
-
-	/* TxA0 Registers */
-	BNAD_GET_REG(TXA0_CTRL_REG);
-	/* TxA0 TSO Sequence # Registers (RO) */
-	for (i = 0; i < 8; i++) {
-		BNAD_GET_REG(TXA0_TSO_TCP_SEQ_REG(i));
-		BNAD_GET_REG(TXA0_TSO_IP_INFO_REG(i));
-	}
-
-	/* TxA1 Registers */
-	BNAD_GET_REG(TXA1_CTRL_REG);
-	/* TxA1 TSO Sequence # Registers (RO) */
-	for (i = 0; i < 8; i++) {
-		BNAD_GET_REG(TXA1_TSO_TCP_SEQ_REG(i));
-		BNAD_GET_REG(TXA1_TSO_IP_INFO_REG(i));
-	}
-
-	/* RxA Registers */
-	BNAD_GET_REG(RXA0_CTL_REG);
-	BNAD_GET_REG(RXA1_CTL_REG);
-
-	/* PLB0 Registers */
-	BNAD_GET_REG(PLB0_ECM_TIMER_REG);
-	BNAD_GET_REG(PLB0_RL_CTL);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB0_RL_MAX_BC(i));
-	BNAD_GET_REG(PLB0_RL_TU_PRIO);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB0_RL_BYTE_CNT(i));
-	BNAD_GET_REG(PLB0_RL_MIN_REG);
-	BNAD_GET_REG(PLB0_RL_MAX_REG);
-	BNAD_GET_REG(PLB0_EMS_ADD_REG);
-
-	/* PLB1 Registers */
-	BNAD_GET_REG(PLB1_ECM_TIMER_REG);
-	BNAD_GET_REG(PLB1_RL_CTL);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB1_RL_MAX_BC(i));
-	BNAD_GET_REG(PLB1_RL_TU_PRIO);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB1_RL_BYTE_CNT(i));
-	BNAD_GET_REG(PLB1_RL_MIN_REG);
-	BNAD_GET_REG(PLB1_RL_MAX_REG);
-	BNAD_GET_REG(PLB1_EMS_ADD_REG);
-
-	/* HQM Control Register */
-	BNAD_GET_REG(HQM0_CTL_REG);
-	BNAD_GET_REG(HQM0_RXQ_STOP_SEM);
-	BNAD_GET_REG(HQM0_TXQ_STOP_SEM);
-	BNAD_GET_REG(HQM1_CTL_REG);
-	BNAD_GET_REG(HQM1_RXQ_STOP_SEM);
-	BNAD_GET_REG(HQM1_TXQ_STOP_SEM);
-
-	/* LUT Registers */
-	BNAD_GET_REG(LUT0_ERR_STS);
-	BNAD_GET_REG(LUT0_SET_ERR_STS);
-	BNAD_GET_REG(LUT1_ERR_STS);
-	BNAD_GET_REG(LUT1_SET_ERR_STS);
-
-	/* TRC Registers */
-	BNAD_GET_REG(TRC_CTL_REG);
-	BNAD_GET_REG(TRC_MODS_REG);
-	BNAD_GET_REG(TRC_TRGC_REG);
-	BNAD_GET_REG(TRC_CNT1_REG);
-	BNAD_GET_REG(TRC_CNT2_REG);
-	BNAD_GET_REG(TRC_NXTS_REG);
-	BNAD_GET_REG(TRC_DIRR_REG);
-	for (i = 0; i < 10; i++)
-		BNAD_GET_REG(TRC_TRGM_REG(i));
-	for (i = 0; i < 10; i++)
-		BNAD_GET_REG(TRC_NXTM_REG(i));
-	for (i = 0; i < 10; i++)
-		BNAD_GET_REG(TRC_STRM_REG(i));
-
-	spin_unlock_irqrestore(&bnad->bna_lock, flags);
-#undef BNAD_GET_REG
-	return num;
-}
-static int
-bnad_get_regs_len(struct net_device *netdev)
-{
-	int ret = get_regs(netdev_priv(netdev), NULL) * sizeof(u32);
-	return ret;
-}
-
-static void
-bnad_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *buf)
-{
-	memset(buf, 0, bnad_get_regs_len(netdev));
-	get_regs(netdev_priv(netdev), buf);
-}
-
 static void
 bnad_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wolinfo)
 {
@@ -1192,8 +875,6 @@ static struct ethtool_ops bnad_ethtool_ops = {
 	.get_settings = bnad_get_settings,
 	.set_settings = bnad_set_settings,
 	.get_drvinfo = bnad_get_drvinfo,
-	.get_regs_len = bnad_get_regs_len,
-	.get_regs = bnad_get_regs,
 	.get_wol = bnad_get_wol,
 	.get_link = ethtool_op_get_link,
 	.get_coalesce = bnad_get_coalesce,
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/8] bna: Consolidated HW Registers for Supported HWs
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
  2011-07-27  2:10 ` [PATCH 1/8] bna: Remove get_regs Ethtool Support Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 3/8] bna: Remove Obsolete File bfi_ctreg.h Rasesh Mody
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Introduce a new file, bfi_reg.h, consolidating the register definitions
   for all supported hardware. This file completely replaces bfi_ctreg.h
   (see the example below).
 - Update the IOC code to match the new register definitions.
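
   For illustration, one of the register assignments in bfa_ioc_ct.c reads as
   follows after the switch to the consolidated header. The register and field
   names are taken from the hunks in this patch; the helper itself is only a
   sketch, not driver code:

	#include "bfa_ioc.h"
	#include "bfi_reg.h"	/* replaces #include "bfi_ctreg.h" */

	/* The old frequency-based PLL register names (APP_PLL_425_CTL_REG,
	 * APP_PLL_312_CTL_REG) become clock-based names (LCLK/SCLK), so a
	 * single header can describe every supported ASIC. */
	static void bfa_ioc_ct_pll_regs_sketch(struct bfa_ioc *ioc,
					       void __iomem *rb)
	{
		ioc->ioc_regs.app_pll_fast_ctl_reg = rb + APP_PLL_LCLK_CTL_REG;
		ioc->ioc_regs.app_pll_slow_ctl_reg = rb + APP_PLL_SCLK_CTL_REG;
	}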

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfa_ioc.c    |    2 +-
 drivers/net/bna/bfa_ioc_ct.c |   78 ++++----
 drivers/net/bna/bfi_reg.h    |  452 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bna/bna_hw.h     |    6 +-
 4 files changed, 497 insertions(+), 41 deletions(-)
 create mode 100644 drivers/net/bna/bfi_reg.h

diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index 126b0aa..3cdea65 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -19,7 +19,7 @@
 #include "bfa_ioc.h"
 #include "cna.h"
 #include "bfi.h"
-#include "bfi_ctreg.h"
+#include "bfi_reg.h"
 #include "bfa_defs.h"
 
 /**
diff --git a/drivers/net/bna/bfa_ioc_ct.c b/drivers/net/bna/bfa_ioc_ct.c
index 87aecdf..75ecf7a 100644
--- a/drivers/net/bna/bfa_ioc_ct.c
+++ b/drivers/net/bna/bfa_ioc_ct.c
@@ -19,7 +19,7 @@
 #include "bfa_ioc.h"
 #include "cna.h"
 #include "bfi.h"
-#include "bfi_ctreg.h"
+#include "bfi_reg.h"
 #include "bfa_defs.h"
 
 #define bfa_ioc_ct_sync_pos(__ioc)	\
@@ -178,7 +178,7 @@ bfa_ioc_ct_notify_fail(struct bfa_ioc *ioc)
 		readl(ioc->ioc_regs.ll_halt);
 		readl(ioc->ioc_regs.alt_ll_halt);
 	} else {
-		writel(__PSS_ERR_STATUS_SET, ioc->ioc_regs.err_set);
+		writel(~0U, ioc->ioc_regs.err_set);
 		readl(ioc->ioc_regs.err_set);
 	}
 }
@@ -196,21 +196,21 @@ static struct { u32 hfn_mbox, lpu_mbox, hfn_pgn; } iocreg_fnreg[] = {
 /**
  * Host <-> LPU mailbox command/status registers - port 0
  */
-static struct { u32 hfn, lpu; } iocreg_mbcmd_p0[] = {
-	{ HOSTFN0_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN0_MBOX0_CMD_STAT },
-	{ HOSTFN1_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN1_MBOX0_CMD_STAT },
-	{ HOSTFN2_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN2_MBOX0_CMD_STAT },
-	{ HOSTFN3_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN3_MBOX0_CMD_STAT }
+static struct { u32 hfn, lpu; } ct_p0reg[] = {
+	{ HOSTFN0_LPU0_CMD_STAT, LPU0_HOSTFN0_CMD_STAT },
+	{ HOSTFN1_LPU0_CMD_STAT, LPU0_HOSTFN1_CMD_STAT },
+	{ HOSTFN2_LPU0_CMD_STAT, LPU0_HOSTFN2_CMD_STAT },
+	{ HOSTFN3_LPU0_CMD_STAT, LPU0_HOSTFN3_CMD_STAT }
 };
 
 /**
  * Host <-> LPU mailbox command/status registers - port 1
  */
-static struct { u32 hfn, lpu; } iocreg_mbcmd_p1[] = {
-	{ HOSTFN0_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN0_MBOX0_CMD_STAT },
-	{ HOSTFN1_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN1_MBOX0_CMD_STAT },
-	{ HOSTFN2_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN2_MBOX0_CMD_STAT },
-	{ HOSTFN3_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN3_MBOX0_CMD_STAT }
+static struct { u32 hfn, lpu; } ct_p1reg[] = {
+	{ HOSTFN0_LPU1_CMD_STAT, LPU1_HOSTFN0_CMD_STAT },
+	{ HOSTFN1_LPU1_CMD_STAT, LPU1_HOSTFN1_CMD_STAT },
+	{ HOSTFN2_LPU1_CMD_STAT, LPU1_HOSTFN2_CMD_STAT },
+	{ HOSTFN3_LPU1_CMD_STAT, LPU1_HOSTFN3_CMD_STAT }
 };
 
 static void
@@ -229,16 +229,16 @@ bfa_ioc_ct_reg_init(struct bfa_ioc *ioc)
 		ioc->ioc_regs.heartbeat = rb + BFA_IOC0_HBEAT_REG;
 		ioc->ioc_regs.ioc_fwstate = rb + BFA_IOC0_STATE_REG;
 		ioc->ioc_regs.alt_ioc_fwstate = rb + BFA_IOC1_STATE_REG;
-		ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd_p0[pcifn].hfn;
-		ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd_p0[pcifn].lpu;
+		ioc->ioc_regs.hfn_mbox_cmd = rb + ct_p0reg[pcifn].hfn;
+		ioc->ioc_regs.lpu_mbox_cmd = rb + ct_p0reg[pcifn].lpu;
 		ioc->ioc_regs.ll_halt = rb + FW_INIT_HALT_P0;
 		ioc->ioc_regs.alt_ll_halt = rb + FW_INIT_HALT_P1;
 	} else {
 		ioc->ioc_regs.heartbeat = (rb + BFA_IOC1_HBEAT_REG);
 		ioc->ioc_regs.ioc_fwstate = (rb + BFA_IOC1_STATE_REG);
 		ioc->ioc_regs.alt_ioc_fwstate = rb + BFA_IOC0_STATE_REG;
-		ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd_p1[pcifn].hfn;
-		ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd_p1[pcifn].lpu;
+		ioc->ioc_regs.hfn_mbox_cmd = rb + ct_p1reg[pcifn].hfn;
+		ioc->ioc_regs.lpu_mbox_cmd = rb + ct_p1reg[pcifn].lpu;
 		ioc->ioc_regs.ll_halt = rb + FW_INIT_HALT_P1;
 		ioc->ioc_regs.alt_ll_halt = rb + FW_INIT_HALT_P0;
 	}
@@ -248,8 +248,8 @@ bfa_ioc_ct_reg_init(struct bfa_ioc *ioc)
 	 */
 	ioc->ioc_regs.pss_ctl_reg = (rb + PSS_CTL_REG);
 	ioc->ioc_regs.pss_err_status_reg = (rb + PSS_ERR_STATUS_REG);
-	ioc->ioc_regs.app_pll_fast_ctl_reg = (rb + APP_PLL_425_CTL_REG);
-	ioc->ioc_regs.app_pll_slow_ctl_reg = (rb + APP_PLL_312_CTL_REG);
+	ioc->ioc_regs.app_pll_fast_ctl_reg = (rb + APP_PLL_LCLK_CTL_REG);
+	ioc->ioc_regs.app_pll_slow_ctl_reg = (rb + APP_PLL_SCLK_CTL_REG);
 
 	/*
 	 * IOC semaphore registers and serialization
@@ -446,14 +446,15 @@ bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode)
 {
 	u32	pll_sclk, pll_fclk, r32;
 
-	pll_sclk = __APP_PLL_312_LRESETN | __APP_PLL_312_ENARST |
-		__APP_PLL_312_RSEL200500 | __APP_PLL_312_P0_1(3U) |
-		__APP_PLL_312_JITLMT0_1(3U) |
-		__APP_PLL_312_CNTLMT0_1(1U);
-	pll_fclk = __APP_PLL_425_LRESETN | __APP_PLL_425_ENARST |
-		__APP_PLL_425_RSEL200500 | __APP_PLL_425_P0_1(3U) |
-		__APP_PLL_425_JITLMT0_1(3U) |
-		__APP_PLL_425_CNTLMT0_1(1U);
+	pll_sclk = __APP_PLL_SCLK_LRESETN | __APP_PLL_SCLK_ENARST |
+		__APP_PLL_SCLK_RSEL200500 | __APP_PLL_SCLK_P0_1(3U) |
+		__APP_PLL_SCLK_JITLMT0_1(3U) |
+		__APP_PLL_SCLK_CNTLMT0_1(1U);
+	pll_fclk = __APP_PLL_LCLK_LRESETN | __APP_PLL_LCLK_ENARST |
+		__APP_PLL_LCLK_RSEL200500 | __APP_PLL_LCLK_P0_1(3U) |
+		__APP_PLL_LCLK_JITLMT0_1(3U) |
+		__APP_PLL_LCLK_CNTLMT0_1(1U);
+
 	if (fcmode) {
 		writel(0, (rb + OP_MODE));
 		writel(__APP_EMS_CMLCKSEL |
@@ -474,27 +475,28 @@ bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode)
 	writel(0xffffffffU, (rb + HOSTFN0_INT_MSK));
 	writel(0xffffffffU, (rb + HOSTFN1_INT_MSK));
 	writel(pll_sclk |
-		__APP_PLL_312_LOGIC_SOFT_RESET,
-		rb + APP_PLL_312_CTL_REG);
+		__APP_PLL_SCLK_LOGIC_SOFT_RESET,
+		rb + APP_PLL_SCLK_CTL_REG);
 	writel(pll_fclk |
-		__APP_PLL_425_LOGIC_SOFT_RESET,
-		rb + APP_PLL_425_CTL_REG);
+		__APP_PLL_LCLK_LOGIC_SOFT_RESET,
+		rb + APP_PLL_LCLK_CTL_REG);
 	writel(pll_sclk |
-		__APP_PLL_312_LOGIC_SOFT_RESET | __APP_PLL_312_ENABLE,
-		rb + APP_PLL_312_CTL_REG);
+		__APP_PLL_SCLK_LOGIC_SOFT_RESET | __APP_PLL_SCLK_ENABLE,
+		rb + APP_PLL_SCLK_CTL_REG);
 	writel(pll_fclk |
-		__APP_PLL_425_LOGIC_SOFT_RESET | __APP_PLL_425_ENABLE,
-		rb + APP_PLL_425_CTL_REG);
+		__APP_PLL_LCLK_LOGIC_SOFT_RESET | __APP_PLL_LCLK_ENABLE,
+		rb + APP_PLL_LCLK_CTL_REG);
 	readl(rb + HOSTFN0_INT_MSK);
 	udelay(2000);
 	writel(0xffffffffU, (rb + HOSTFN0_INT_STATUS));
 	writel(0xffffffffU, (rb + HOSTFN1_INT_STATUS));
 	writel(pll_sclk |
-		__APP_PLL_312_ENABLE,
-		rb + APP_PLL_312_CTL_REG);
+		__APP_PLL_SCLK_ENABLE,
+		rb + APP_PLL_SCLK_CTL_REG);
 	writel(pll_fclk |
-		__APP_PLL_425_ENABLE,
-		rb + APP_PLL_425_CTL_REG);
+		__APP_PLL_LCLK_ENABLE,
+		rb + APP_PLL_LCLK_CTL_REG);
+
 	if (!fcmode) {
 		writel(__PMM_1T_RESET_P, (rb + PMM_1T_RESET_REG_P0));
 		writel(__PMM_1T_RESET_P, (rb + PMM_1T_RESET_REG_P1));
diff --git a/drivers/net/bna/bfi_reg.h b/drivers/net/bna/bfi_reg.h
new file mode 100644
index 0000000..efacff3
--- /dev/null
+++ b/drivers/net/bna/bfi_reg.h
@@ -0,0 +1,452 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/*
+ * bfi_reg.h ASIC register defines for all Brocade adapter ASICs
+ */
+
+#ifndef __BFI_REG_H__
+#define __BFI_REG_H__
+
+#define HOSTFN0_INT_STATUS		0x00014000	/* cb/ct	*/
+#define HOSTFN1_INT_STATUS		0x00014100	/* cb/ct	*/
+#define HOSTFN2_INT_STATUS		0x00014300	/* ct		*/
+#define HOSTFN3_INT_STATUS		0x00014400	/* ct		*/
+#define HOSTFN0_INT_MSK			0x00014004	/* cb/ct	*/
+#define HOSTFN1_INT_MSK			0x00014104	/* cb/ct	*/
+#define HOSTFN2_INT_MSK			0x00014304	/* ct		*/
+#define HOSTFN3_INT_MSK			0x00014404	/* ct		*/
+
+#define HOST_PAGE_NUM_FN0		0x00014008	/* cb/ct	*/
+#define HOST_PAGE_NUM_FN1		0x00014108	/* cb/ct	*/
+#define HOST_PAGE_NUM_FN2		0x00014308	/* ct		*/
+#define HOST_PAGE_NUM_FN3		0x00014408	/* ct		*/
+
+#define APP_PLL_LCLK_CTL_REG		0x00014204	/* cb/ct	*/
+#define __P_LCLK_PLL_LOCK		0x80000000
+#define __APP_PLL_LCLK_SRAM_USE_100MHZ	0x00100000
+#define __APP_PLL_LCLK_RESET_TIMER_MK	0x000e0000
+#define __APP_PLL_LCLK_RESET_TIMER_SH	17
+#define __APP_PLL_LCLK_RESET_TIMER(_v)	((_v) << __APP_PLL_LCLK_RESET_TIMER_SH)
+#define __APP_PLL_LCLK_LOGIC_SOFT_RESET	0x00010000
+#define __APP_PLL_LCLK_CNTLMT0_1_MK	0x0000c000
+#define __APP_PLL_LCLK_CNTLMT0_1_SH	14
+#define __APP_PLL_LCLK_CNTLMT0_1(_v)	((_v) << __APP_PLL_LCLK_CNTLMT0_1_SH)
+#define __APP_PLL_LCLK_JITLMT0_1_MK	0x00003000
+#define __APP_PLL_LCLK_JITLMT0_1_SH	12
+#define __APP_PLL_LCLK_JITLMT0_1(_v)	((_v) << __APP_PLL_LCLK_JITLMT0_1_SH)
+#define __APP_PLL_LCLK_HREF		0x00000800
+#define __APP_PLL_LCLK_HDIV		0x00000400
+#define __APP_PLL_LCLK_P0_1_MK		0x00000300
+#define __APP_PLL_LCLK_P0_1_SH		8
+#define __APP_PLL_LCLK_P0_1(_v)		((_v) << __APP_PLL_LCLK_P0_1_SH)
+#define __APP_PLL_LCLK_Z0_2_MK		0x000000e0
+#define __APP_PLL_LCLK_Z0_2_SH		5
+#define __APP_PLL_LCLK_Z0_2(_v)		((_v) << __APP_PLL_LCLK_Z0_2_SH)
+#define __APP_PLL_LCLK_RSEL200500	0x00000010
+#define __APP_PLL_LCLK_ENARST		0x00000008
+#define __APP_PLL_LCLK_BYPASS		0x00000004
+#define __APP_PLL_LCLK_LRESETN		0x00000002
+#define __APP_PLL_LCLK_ENABLE		0x00000001
+#define APP_PLL_SCLK_CTL_REG		0x00014208	/* cb/ct	*/
+#define __P_SCLK_PLL_LOCK		0x80000000
+#define __APP_PLL_SCLK_RESET_TIMER_MK	0x000e0000
+#define __APP_PLL_SCLK_RESET_TIMER_SH	17
+#define __APP_PLL_SCLK_RESET_TIMER(_v)	((_v) << __APP_PLL_SCLK_RESET_TIMER_SH)
+#define __APP_PLL_SCLK_LOGIC_SOFT_RESET	0x00010000
+#define __APP_PLL_SCLK_CNTLMT0_1_MK	0x0000c000
+#define __APP_PLL_SCLK_CNTLMT0_1_SH	14
+#define __APP_PLL_SCLK_CNTLMT0_1(_v)	((_v) << __APP_PLL_SCLK_CNTLMT0_1_SH)
+#define __APP_PLL_SCLK_JITLMT0_1_MK	0x00003000
+#define __APP_PLL_SCLK_JITLMT0_1_SH	12
+#define __APP_PLL_SCLK_JITLMT0_1(_v)	((_v) << __APP_PLL_SCLK_JITLMT0_1_SH)
+#define __APP_PLL_SCLK_HREF		0x00000800
+#define __APP_PLL_SCLK_HDIV		0x00000400
+#define __APP_PLL_SCLK_P0_1_MK		0x00000300
+#define __APP_PLL_SCLK_P0_1_SH		8
+#define __APP_PLL_SCLK_P0_1(_v)		((_v) << __APP_PLL_SCLK_P0_1_SH)
+#define __APP_PLL_SCLK_Z0_2_MK		0x000000e0
+#define __APP_PLL_SCLK_Z0_2_SH		5
+#define __APP_PLL_SCLK_Z0_2(_v)		((_v) << __APP_PLL_SCLK_Z0_2_SH)
+#define __APP_PLL_SCLK_RSEL200500	0x00000010
+#define __APP_PLL_SCLK_ENARST		0x00000008
+#define __APP_PLL_SCLK_BYPASS		0x00000004
+#define __APP_PLL_SCLK_LRESETN		0x00000002
+#define __APP_PLL_SCLK_ENABLE		0x00000001
+#define __ENABLE_MAC_AHB_1		0x00800000	/* ct		*/
+#define __ENABLE_MAC_AHB_0		0x00400000	/* ct		*/
+#define __ENABLE_MAC_1			0x00200000	/* ct		*/
+#define __ENABLE_MAC_0			0x00100000	/* ct		*/
+
+#define HOST_SEM0_REG			0x00014230	/* cb/ct	*/
+#define HOST_SEM1_REG			0x00014234	/* cb/ct	*/
+#define HOST_SEM2_REG			0x00014238	/* cb/ct	*/
+#define HOST_SEM3_REG			0x0001423c	/* cb/ct	*/
+#define HOST_SEM4_REG			0x00014610	/* cb/ct	*/
+#define HOST_SEM5_REG			0x00014614	/* cb/ct	*/
+#define HOST_SEM6_REG			0x00014618	/* cb/ct	*/
+#define HOST_SEM7_REG			0x0001461c	/* cb/ct	*/
+#define HOST_SEM0_INFO_REG		0x00014240	/* cb/ct	*/
+#define HOST_SEM1_INFO_REG		0x00014244	/* cb/ct	*/
+#define HOST_SEM2_INFO_REG		0x00014248	/* cb/ct	*/
+#define HOST_SEM3_INFO_REG		0x0001424c	/* cb/ct	*/
+#define HOST_SEM4_INFO_REG		0x00014620	/* cb/ct	*/
+#define HOST_SEM5_INFO_REG		0x00014624	/* cb/ct	*/
+#define HOST_SEM6_INFO_REG		0x00014628	/* cb/ct	*/
+#define HOST_SEM7_INFO_REG		0x0001462c	/* cb/ct	*/
+
+#define HOSTFN0_LPU0_CMD_STAT		0x00019000	/* cb/ct	*/
+#define HOSTFN0_LPU1_CMD_STAT		0x00019004	/* cb/ct	*/
+#define HOSTFN1_LPU0_CMD_STAT		0x00019010	/* cb/ct	*/
+#define HOSTFN1_LPU1_CMD_STAT		0x00019014	/* cb/ct	*/
+#define HOSTFN2_LPU0_CMD_STAT		0x00019150	/* ct		*/
+#define HOSTFN2_LPU1_CMD_STAT		0x00019154	/* ct		*/
+#define HOSTFN3_LPU0_CMD_STAT		0x00019160	/* ct		*/
+#define HOSTFN3_LPU1_CMD_STAT		0x00019164	/* ct		*/
+#define LPU0_HOSTFN0_CMD_STAT		0x00019008	/* cb/ct	*/
+#define LPU1_HOSTFN0_CMD_STAT		0x0001900c	/* cb/ct	*/
+#define LPU0_HOSTFN1_CMD_STAT		0x00019018	/* cb/ct	*/
+#define LPU1_HOSTFN1_CMD_STAT		0x0001901c	/* cb/ct	*/
+#define LPU0_HOSTFN2_CMD_STAT		0x00019158	/* ct		*/
+#define LPU1_HOSTFN2_CMD_STAT		0x0001915c	/* ct		*/
+#define LPU0_HOSTFN3_CMD_STAT		0x00019168	/* ct		*/
+#define LPU1_HOSTFN3_CMD_STAT		0x0001916c	/* ct		*/
+
+#define PSS_CTL_REG			0x00018800	/* cb/ct	*/
+#define __PSS_I2C_CLK_DIV_MK		0x007f0000
+#define __PSS_I2C_CLK_DIV_SH		16
+#define __PSS_I2C_CLK_DIV(_v)		((_v) << __PSS_I2C_CLK_DIV_SH)
+#define __PSS_LMEM_INIT_DONE		0x00001000
+#define __PSS_LMEM_RESET		0x00000200
+#define __PSS_LMEM_INIT_EN		0x00000100
+#define __PSS_LPU1_RESET		0x00000002
+#define __PSS_LPU0_RESET		0x00000001
+#define PSS_ERR_STATUS_REG		0x00018810	/* cb/ct	*/
+#define ERR_SET_REG			0x00018818	/* cb/ct	*/
+#define PSS_GPIO_OUT_REG		0x000188c0	/* cb/ct	*/
+#define __PSS_GPIO_OUT_REG		0x00000fff
+#define PSS_GPIO_OE_REG			0x000188c8	/* cb/ct	*/
+#define __PSS_GPIO_OE_REG		0x000000ff
+
+#define HOSTFN0_LPU_MBOX0_0		0x00019200	/* cb/ct	*/
+#define HOSTFN1_LPU_MBOX0_8		0x00019260	/* cb/ct	*/
+#define LPU_HOSTFN0_MBOX0_0		0x00019280	/* cb/ct	*/
+#define LPU_HOSTFN1_MBOX0_8		0x000192e0	/* cb/ct	*/
+#define HOSTFN2_LPU_MBOX0_0		0x00019400	/* ct		*/
+#define HOSTFN3_LPU_MBOX0_8		0x00019460	/* ct		*/
+#define LPU_HOSTFN2_MBOX0_0		0x00019480	/* ct		*/
+#define LPU_HOSTFN3_MBOX0_8		0x000194e0	/* ct		*/
+
+#define HOST_MSIX_ERR_INDEX_FN0		0x0001400c	/* ct		*/
+#define HOST_MSIX_ERR_INDEX_FN1		0x0001410c	/* ct		*/
+#define HOST_MSIX_ERR_INDEX_FN2		0x0001430c	/* ct		*/
+#define HOST_MSIX_ERR_INDEX_FN3		0x0001440c	/* ct		*/
+
+#define MBIST_CTL_REG			0x00014220	/* ct		*/
+#define __EDRAM_BISTR_START		0x00000004
+#define MBIST_STAT_REG			0x00014224	/* ct		*/
+#define ETH_MAC_SER_REG			0x00014288	/* ct		*/
+#define __APP_EMS_CKBUFAMPIN		0x00000020
+#define __APP_EMS_REFCLKSEL		0x00000010
+#define __APP_EMS_CMLCKSEL		0x00000008
+#define __APP_EMS_REFCKBUFEN2		0x00000004
+#define __APP_EMS_REFCKBUFEN1		0x00000002
+#define __APP_EMS_CHANNEL_SEL		0x00000001
+#define FNC_PERS_REG			0x00014604	/* ct		*/
+#define __F3_FUNCTION_ACTIVE		0x80000000
+#define __F3_FUNCTION_MODE		0x40000000
+#define __F3_PORT_MAP_MK		0x30000000
+#define __F3_PORT_MAP_SH		28
+#define __F3_PORT_MAP(_v)		((_v) << __F3_PORT_MAP_SH)
+#define __F3_VM_MODE			0x08000000
+#define __F3_INTX_STATUS_MK		0x07000000
+#define __F3_INTX_STATUS_SH		24
+#define __F3_INTX_STATUS(_v)		((_v) << __F3_INTX_STATUS_SH)
+#define __F2_FUNCTION_ACTIVE		0x00800000
+#define __F2_FUNCTION_MODE		0x00400000
+#define __F2_PORT_MAP_MK		0x00300000
+#define __F2_PORT_MAP_SH		20
+#define __F2_PORT_MAP(_v)		((_v) << __F2_PORT_MAP_SH)
+#define __F2_VM_MODE			0x00080000
+#define __F2_INTX_STATUS_MK		0x00070000
+#define __F2_INTX_STATUS_SH		16
+#define __F2_INTX_STATUS(_v)		((_v) << __F2_INTX_STATUS_SH)
+#define __F1_FUNCTION_ACTIVE		0x00008000
+#define __F1_FUNCTION_MODE		0x00004000
+#define __F1_PORT_MAP_MK		0x00003000
+#define __F1_PORT_MAP_SH		12
+#define __F1_PORT_MAP(_v)		((_v) << __F1_PORT_MAP_SH)
+#define __F1_VM_MODE			0x00000800
+#define __F1_INTX_STATUS_MK		0x00000700
+#define __F1_INTX_STATUS_SH		8
+#define __F1_INTX_STATUS(_v)		((_v) << __F1_INTX_STATUS_SH)
+#define __F0_FUNCTION_ACTIVE		0x00000080
+#define __F0_FUNCTION_MODE		0x00000040
+#define __F0_PORT_MAP_MK		0x00000030
+#define __F0_PORT_MAP_SH		4
+#define __F0_PORT_MAP(_v)		((_v) << __F0_PORT_MAP_SH)
+#define __F0_VM_MODE			0x00000008
+#define __F0_INTX_STATUS		0x00000007
+enum {
+	__F0_INTX_STATUS_MSIX = 0x0,
+	__F0_INTX_STATUS_INTA = 0x1,
+	__F0_INTX_STATUS_INTB = 0x2,
+	__F0_INTX_STATUS_INTC = 0x3,
+	__F0_INTX_STATUS_INTD = 0x4,
+};
+
+#define OP_MODE				0x0001460c
+#define __APP_ETH_CLK_LOWSPEED		0x00000004
+#define __GLOBAL_CORECLK_HALFSPEED	0x00000002
+#define __GLOBAL_FCOE_MODE		0x00000001
+#define FW_INIT_HALT_P0			0x000191ac
+#define __FW_INIT_HALT_P		0x00000001
+#define FW_INIT_HALT_P1			0x000191bc
+#define PMM_1T_RESET_REG_P0		0x0002381c
+#define __PMM_1T_RESET_P		0x00000001
+#define PMM_1T_RESET_REG_P1		0x00023c1c
+
+/**
+ * Brocade 1860 Adapter specific defines
+ */
+#define CT2_PCI_CPQ_BASE		0x00030000
+#define CT2_PCI_APP_BASE		0x00030100
+#define CT2_PCI_ETH_BASE		0x00030400
+
+/*
+ * APP block registers
+ */
+#define CT2_HOSTFN_INT_STATUS		(CT2_PCI_APP_BASE + 0x00)
+#define CT2_HOSTFN_INTR_MASK		(CT2_PCI_APP_BASE + 0x04)
+#define CT2_HOSTFN_PERSONALITY0		(CT2_PCI_APP_BASE + 0x08)
+#define __PME_STATUS_			0x00200000
+#define __PF_VF_BAR_SIZE_MODE__MK	0x00180000
+#define __PF_VF_BAR_SIZE_MODE__SH	19
+#define __PF_VF_BAR_SIZE_MODE_(_v)	((_v) << __PF_VF_BAR_SIZE_MODE__SH)
+#define __FC_LL_PORT_MAP__MK		0x00060000
+#define __FC_LL_PORT_MAP__SH		17
+#define __FC_LL_PORT_MAP_(_v)		((_v) << __FC_LL_PORT_MAP__SH)
+#define __PF_VF_ACTIVE_			0x00010000
+#define __PF_VF_CFG_RDY_		0x00008000
+#define __PF_VF_ENABLE_			0x00004000
+#define __PF_DRIVER_ACTIVE_		0x00002000
+#define __PF_PME_SEND_ENABLE_		0x00001000
+#define __PF_EXROM_OFFSET__MK		0x00000ff0
+#define __PF_EXROM_OFFSET__SH		4
+#define __PF_EXROM_OFFSET_(_v)		((_v) << __PF_EXROM_OFFSET__SH)
+#define __FC_LL_MODE_			0x00000008
+#define __PF_INTX_PIN_			0x00000007
+#define CT2_HOSTFN_PERSONALITY1		(CT2_PCI_APP_BASE + 0x0C)
+#define __PF_NUM_QUEUES1__MK		0xff000000
+#define __PF_NUM_QUEUES1__SH		24
+#define __PF_NUM_QUEUES1_(_v)		((_v) << __PF_NUM_QUEUES1__SH)
+#define __PF_VF_QUE_OFFSET1__MK		0x00ff0000
+#define __PF_VF_QUE_OFFSET1__SH		16
+#define __PF_VF_QUE_OFFSET1_(_v)	((_v) << __PF_VF_QUE_OFFSET1__SH)
+#define __PF_VF_NUM_QUEUES__MK		0x0000ff00
+#define __PF_VF_NUM_QUEUES__SH		8
+#define __PF_VF_NUM_QUEUES_(_v)		((_v) << __PF_VF_NUM_QUEUES__SH)
+#define __PF_VF_QUE_OFFSET_		0x000000ff
+#define CT2_HOSTFN_PAGE_NUM		(CT2_PCI_APP_BASE + 0x18)
+#define CT2_HOSTFN_MSIX_VT_INDEX_MBOX_ERR	(CT2_PCI_APP_BASE + 0x38)
+
+/*
+ * Brocade 1860 adapter CPQ block registers
+ */
+#define CT2_HOSTFN_LPU0_MBOX0		(CT2_PCI_CPQ_BASE + 0x00)
+#define CT2_HOSTFN_LPU1_MBOX0		(CT2_PCI_CPQ_BASE + 0x20)
+#define CT2_LPU0_HOSTFN_MBOX0		(CT2_PCI_CPQ_BASE + 0x40)
+#define CT2_LPU1_HOSTFN_MBOX0		(CT2_PCI_CPQ_BASE + 0x60)
+#define CT2_HOSTFN_LPU0_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x80)
+#define CT2_HOSTFN_LPU1_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x84)
+#define CT2_LPU0_HOSTFN_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x88)
+#define CT2_LPU1_HOSTFN_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x8c)
+#define CT2_HOSTFN_LPU0_READ_STAT	(CT2_PCI_CPQ_BASE + 0x90)
+#define CT2_HOSTFN_LPU1_READ_STAT	(CT2_PCI_CPQ_BASE + 0x94)
+#define CT2_LPU0_HOSTFN_MBOX0_MSK	(CT2_PCI_CPQ_BASE + 0x98)
+#define CT2_LPU1_HOSTFN_MBOX0_MSK	(CT2_PCI_CPQ_BASE + 0x9C)
+#define CT2_HOST_SEM0_REG		0x000148f0
+#define CT2_HOST_SEM1_REG		0x000148f4
+#define CT2_HOST_SEM2_REG		0x000148f8
+#define CT2_HOST_SEM3_REG		0x000148fc
+#define CT2_HOST_SEM4_REG		0x00014900
+#define CT2_HOST_SEM5_REG		0x00014904
+#define CT2_HOST_SEM6_REG		0x00014908
+#define CT2_HOST_SEM7_REG		0x0001490c
+#define CT2_HOST_SEM0_INFO_REG		0x000148b0
+#define CT2_HOST_SEM1_INFO_REG		0x000148b4
+#define CT2_HOST_SEM2_INFO_REG		0x000148b8
+#define CT2_HOST_SEM3_INFO_REG		0x000148bc
+#define CT2_HOST_SEM4_INFO_REG		0x000148c0
+#define CT2_HOST_SEM5_INFO_REG		0x000148c4
+#define CT2_HOST_SEM6_INFO_REG		0x000148c8
+#define CT2_HOST_SEM7_INFO_REG		0x000148cc
+
+#define CT2_APP_PLL_LCLK_CTL_REG	0x00014808
+#define __APP_LPUCLK_HALFSPEED		0x40000000
+#define __APP_PLL_LCLK_LOAD		0x20000000
+#define __APP_PLL_LCLK_FBCNT_MK		0x1fe00000
+#define __APP_PLL_LCLK_FBCNT_SH		21
+#define __APP_PLL_LCLK_FBCNT(_v)	((_v) << __APP_PLL_SCLK_FBCNT_SH)
+enum {
+	__APP_PLL_LCLK_FBCNT_425_MHZ = 6,
+	__APP_PLL_LCLK_FBCNT_468_MHZ = 4,
+};
+#define __APP_PLL_LCLK_EXTFB		0x00000800
+#define __APP_PLL_LCLK_ENOUTS		0x00000400
+#define __APP_PLL_LCLK_RATE		0x00000010
+#define CT2_APP_PLL_SCLK_CTL_REG	0x0001480c
+#define __P_SCLK_PLL_LOCK		0x80000000
+#define __APP_PLL_SCLK_REFCLK_SEL	0x40000000
+#define __APP_PLL_SCLK_CLK_DIV2		0x20000000
+#define __APP_PLL_SCLK_LOAD		0x10000000
+#define __APP_PLL_SCLK_FBCNT_MK		0x0ff00000
+#define __APP_PLL_SCLK_FBCNT_SH		20
+#define __APP_PLL_SCLK_FBCNT(_v)	((_v) << __APP_PLL_SCLK_FBCNT_SH)
+enum {
+	__APP_PLL_SCLK_FBCNT_NORM = 6,
+	__APP_PLL_SCLK_FBCNT_10G_FC = 10,
+};
+#define __APP_PLL_SCLK_EXTFB		0x00000800
+#define __APP_PLL_SCLK_ENOUTS		0x00000400
+#define __APP_PLL_SCLK_RATE		0x00000010
+#define CT2_PCIE_MISC_REG		0x00014804
+#define __ETH_CLK_ENABLE_PORT1		0x00000010
+#define CT2_CHIP_MISC_PRG		0x000148a4
+#define __ETH_CLK_ENABLE_PORT0		0x00004000
+#define __APP_LPU_SPEED			0x00000002
+#define CT2_MBIST_STAT_REG		0x00014818
+#define CT2_MBIST_CTL_REG		0x0001481c
+#define CT2_PMM_1T_CONTROL_REG_P0	0x0002381c
+#define __PMM_1T_PNDB_P			0x00000002
+#define CT2_PMM_1T_CONTROL_REG_P1	0x00023c1c
+#define CT2_WGN_STATUS			0x00014990
+#define __A2T_AHB_LOAD			0x00000800
+#define __WGN_READY			0x00000400
+#define __GLBL_PF_VF_CFG_RDY		0x00000200
+#define CT2_NFC_CSR_SET_REG		0x00027424
+#define __HALT_NFC_CONTROLLER		0x00000002
+#define __NFC_CONTROLLER_HALTED		0x00001000
+
+#define CT2_CSI_MAC0_CONTROL_REG	0x000270d0
+#define __CSI_MAC_RESET			0x00000010
+#define __CSI_MAC_AHB_RESET		0x00000008
+#define CT2_CSI_MAC1_CONTROL_REG	0x000270d4
+#define CT2_CSI_MAC_CONTROL_REG(__n) \
+	(CT2_CSI_MAC0_CONTROL_REG + \
+	(__n) * (CT2_CSI_MAC1_CONTROL_REG - CT2_CSI_MAC0_CONTROL_REG))
+
+/*
+ * Name semaphore registers based on usage
+ */
+#define BFA_IOC0_HBEAT_REG		HOST_SEM0_INFO_REG
+#define BFA_IOC0_STATE_REG		HOST_SEM1_INFO_REG
+#define BFA_IOC1_HBEAT_REG		HOST_SEM2_INFO_REG
+#define BFA_IOC1_STATE_REG		HOST_SEM3_INFO_REG
+#define BFA_FW_USE_COUNT		HOST_SEM4_INFO_REG
+#define BFA_IOC_FAIL_SYNC		HOST_SEM5_INFO_REG
+
+/*
+ * CT2 semaphore register locations changed
+ */
+#define CT2_BFA_IOC0_HBEAT_REG		CT2_HOST_SEM0_INFO_REG
+#define CT2_BFA_IOC0_STATE_REG		CT2_HOST_SEM1_INFO_REG
+#define CT2_BFA_IOC1_HBEAT_REG		CT2_HOST_SEM2_INFO_REG
+#define CT2_BFA_IOC1_STATE_REG		CT2_HOST_SEM3_INFO_REG
+#define CT2_BFA_FW_USE_COUNT		CT2_HOST_SEM4_INFO_REG
+#define CT2_BFA_IOC_FAIL_SYNC		CT2_HOST_SEM5_INFO_REG
+
+#define CPE_Q_NUM(__fn, __q)	(((__fn) << 2) + (__q))
+#define RME_Q_NUM(__fn, __q)	(((__fn) << 2) + (__q))
+
+/*
+ * And corresponding host interrupt status bit field defines
+ */
+#define __HFN_INT_CPE_Q0	0x00000001U
+#define __HFN_INT_CPE_Q1	0x00000002U
+#define __HFN_INT_CPE_Q2	0x00000004U
+#define __HFN_INT_CPE_Q3	0x00000008U
+#define __HFN_INT_CPE_Q4	0x00000010U
+#define __HFN_INT_CPE_Q5	0x00000020U
+#define __HFN_INT_CPE_Q6	0x00000040U
+#define __HFN_INT_CPE_Q7	0x00000080U
+#define __HFN_INT_RME_Q0	0x00000100U
+#define __HFN_INT_RME_Q1	0x00000200U
+#define __HFN_INT_RME_Q2	0x00000400U
+#define __HFN_INT_RME_Q3	0x00000800U
+#define __HFN_INT_RME_Q4	0x00001000U
+#define __HFN_INT_RME_Q5	0x00002000U
+#define __HFN_INT_RME_Q6	0x00004000U
+#define __HFN_INT_RME_Q7	0x00008000U
+#define __HFN_INT_ERR_EMC	0x00010000U
+#define __HFN_INT_ERR_LPU0	0x00020000U
+#define __HFN_INT_ERR_LPU1	0x00040000U
+#define __HFN_INT_ERR_PSS	0x00080000U
+#define __HFN_INT_MBOX_LPU0	0x00100000U
+#define __HFN_INT_MBOX_LPU1	0x00200000U
+#define __HFN_INT_MBOX1_LPU0	0x00400000U
+#define __HFN_INT_MBOX1_LPU1	0x00800000U
+#define __HFN_INT_LL_HALT	0x01000000U
+#define __HFN_INT_CPE_MASK	0x000000ffU
+#define __HFN_INT_RME_MASK	0x0000ff00U
+#define __HFN_INT_ERR_MASK	\
+	(__HFN_INT_ERR_EMC | __HFN_INT_ERR_LPU0 | __HFN_INT_ERR_LPU1 | \
+	 __HFN_INT_ERR_PSS | __HFN_INT_LL_HALT)
+#define __HFN_INT_FN0_MASK	\
+	(__HFN_INT_CPE_Q0 | __HFN_INT_CPE_Q1 | __HFN_INT_CPE_Q2 | \
+	 __HFN_INT_CPE_Q3 | __HFN_INT_RME_Q0 | __HFN_INT_RME_Q1 | \
+	 __HFN_INT_RME_Q2 | __HFN_INT_RME_Q3 | __HFN_INT_MBOX_LPU0)
+#define __HFN_INT_FN1_MASK	\
+	(__HFN_INT_CPE_Q4 | __HFN_INT_CPE_Q5 | __HFN_INT_CPE_Q6 | \
+	 __HFN_INT_CPE_Q7 | __HFN_INT_RME_Q4 | __HFN_INT_RME_Q5 | \
+	 __HFN_INT_RME_Q6 | __HFN_INT_RME_Q7 | __HFN_INT_MBOX_LPU1)
+
+/*
+ * Host interrupt status defines for 1860
+ */
+#define __HFN_INT_MBOX_LPU0_CT2	0x00010000U
+#define __HFN_INT_MBOX_LPU1_CT2	0x00020000U
+#define __HFN_INT_ERR_PSS_CT2	0x00040000U
+#define __HFN_INT_ERR_LPU0_CT2	0x00080000U
+#define __HFN_INT_ERR_LPU1_CT2	0x00100000U
+#define __HFN_INT_CPQ_HALT_CT2	0x00200000U
+#define __HFN_INT_ERR_WGN_CT2	0x00400000U
+#define __HFN_INT_ERR_LEHRX_CT2	0x00800000U
+#define __HFN_INT_ERR_LEHTX_CT2	0x01000000U
+#define __HFN_INT_ERR_MASK_CT2	\
+	(__HFN_INT_ERR_PSS_CT2 | __HFN_INT_ERR_LPU0_CT2 | \
+	 __HFN_INT_ERR_LPU1_CT2 | __HFN_INT_CPQ_HALT_CT2 | \
+	 __HFN_INT_ERR_WGN_CT2 | __HFN_INT_ERR_LEHRX_CT2 | \
+	 __HFN_INT_ERR_LEHTX_CT2)
+#define __HFN_INT_FN0_MASK_CT2	\
+	(__HFN_INT_CPE_Q0 | __HFN_INT_CPE_Q1 | __HFN_INT_CPE_Q2 | \
+	 __HFN_INT_CPE_Q3 | __HFN_INT_RME_Q0 | __HFN_INT_RME_Q1 | \
+	 __HFN_INT_RME_Q2 | __HFN_INT_RME_Q3 | __HFN_INT_MBOX_LPU0_CT2)
+#define __HFN_INT_FN1_MASK_CT2	\
+	(__HFN_INT_CPE_Q4 | __HFN_INT_CPE_Q5 | __HFN_INT_CPE_Q6 | \
+	 __HFN_INT_CPE_Q7 | __HFN_INT_RME_Q4 | __HFN_INT_RME_Q5 | \
+	 __HFN_INT_RME_Q6 | __HFN_INT_RME_Q7 | __HFN_INT_MBOX_LPU1_CT2)
+
+/*
+ * asic memory map.
+ */
+#define PSS_SMEM_PAGE_START		0x8000
+#define PSS_SMEM_PGNUM(_pg0, _ma)	((_pg0) + ((_ma) >> 15))
+#define PSS_SMEM_PGOFF(_ma)		((_ma) & 0x7fff)
+
+#endif /* __BFI_REG_H__ */
diff --git a/drivers/net/bna/bna_hw.h b/drivers/net/bna/bna_hw.h
index cad233d..16a5eed 100644
--- a/drivers/net/bna/bna_hw.h
+++ b/drivers/net/bna/bna_hw.h
@@ -14,14 +14,16 @@
  * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
  * All rights reserved
  * www.brocade.com
- *
+ */
+
+/**
  * File for interrupt macros and functions
  */
 
 #ifndef __BNA_HW_H__
 #define __BNA_HW_H__
 
-#include "bfi_ctreg.h"
+#include "bfi_reg.h"
 
 /**
  *
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/8] bna: Remove Obsolete File bfi_ctreg.h
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
  2011-07-27  2:10 ` [PATCH 1/8] bna: Remove get_regs Ethtool Support Rasesh Mody
  2011-07-27  2:10 ` [PATCH 2/8] bna: Consolidated HW Registers for Supported HWs Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 4/8] bna: MSGQ Implementation Rasesh Mody
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Remove the obsolete file bfi_ctreg.h, now that the new bfi_reg.h
   consolidates the HW register definitions, including those for the new HW.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfi_ctreg.h |  646 -------------------------------------------
 1 files changed, 0 insertions(+), 646 deletions(-)
 delete mode 100644 drivers/net/bna/bfi_ctreg.h

diff --git a/drivers/net/bna/bfi_ctreg.h b/drivers/net/bna/bfi_ctreg.h
deleted file mode 100644
index 5130d79..0000000
--- a/drivers/net/bna/bfi_ctreg.h
+++ /dev/null
@@ -1,646 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-
-/*
- * bfi_ctreg.h catapult host block register definitions
- *
- * !!! Do not edit. Auto generated. !!!
- */
-
-#ifndef __BFI_CTREG_H__
-#define __BFI_CTREG_H__
-
-#define HOSTFN0_LPU_MBOX0_0		0x00019200
-#define HOSTFN1_LPU_MBOX0_8		0x00019260
-#define LPU_HOSTFN0_MBOX0_0		0x00019280
-#define LPU_HOSTFN1_MBOX0_8		0x000192e0
-#define HOSTFN2_LPU_MBOX0_0		0x00019400
-#define HOSTFN3_LPU_MBOX0_8		0x00019460
-#define LPU_HOSTFN2_MBOX0_0		0x00019480
-#define LPU_HOSTFN3_MBOX0_8		0x000194e0
-#define HOSTFN0_INT_STATUS		0x00014000
-#define __HOSTFN0_HALT_OCCURRED		0x01000000
-#define __HOSTFN0_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN0_INT_STATUS_LVL_SH	20
-#define __HOSTFN0_INT_STATUS_LVL(_v)	((_v) << __HOSTFN0_INT_STATUS_LVL_SH)
-#define __HOSTFN0_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN0_INT_STATUS_P_SH	16
-#define __HOSTFN0_INT_STATUS_P(_v)	((_v) << __HOSTFN0_INT_STATUS_P_SH)
-#define __HOSTFN0_INT_STATUS_F		0x0000ffff
-#define HOSTFN0_INT_MSK			0x00014004
-#define HOST_PAGE_NUM_FN0		0x00014008
-#define __HOST_PAGE_NUM_FN		0x000001ff
-#define HOST_MSIX_ERR_INDEX_FN0		0x0001400c
-#define __MSIX_ERR_INDEX_FN		0x000001ff
-#define HOSTFN1_INT_STATUS		0x00014100
-#define __HOSTFN1_HALT_OCCURRED		0x01000000
-#define __HOSTFN1_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN1_INT_STATUS_LVL_SH	20
-#define __HOSTFN1_INT_STATUS_LVL(_v)	((_v) << __HOSTFN1_INT_STATUS_LVL_SH)
-#define __HOSTFN1_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN1_INT_STATUS_P_SH	16
-#define __HOSTFN1_INT_STATUS_P(_v)	((_v) << __HOSTFN1_INT_STATUS_P_SH)
-#define __HOSTFN1_INT_STATUS_F		0x0000ffff
-#define HOSTFN1_INT_MSK			0x00014104
-#define HOST_PAGE_NUM_FN1		0x00014108
-#define HOST_MSIX_ERR_INDEX_FN1		0x0001410c
-#define APP_PLL_425_CTL_REG		0x00014204
-#define __P_425_PLL_LOCK		0x80000000
-#define __APP_PLL_425_SRAM_USE_100MHZ	0x00100000
-#define __APP_PLL_425_RESET_TIMER_MK	0x000e0000
-#define __APP_PLL_425_RESET_TIMER_SH	17
-#define __APP_PLL_425_RESET_TIMER(_v)	((_v) << __APP_PLL_425_RESET_TIMER_SH)
-#define __APP_PLL_425_LOGIC_SOFT_RESET	0x00010000
-#define __APP_PLL_425_CNTLMT0_1_MK	0x0000c000
-#define __APP_PLL_425_CNTLMT0_1_SH	14
-#define __APP_PLL_425_CNTLMT0_1(_v)	((_v) << __APP_PLL_425_CNTLMT0_1_SH)
-#define __APP_PLL_425_JITLMT0_1_MK	0x00003000
-#define __APP_PLL_425_JITLMT0_1_SH	12
-#define __APP_PLL_425_JITLMT0_1(_v)	((_v) << __APP_PLL_425_JITLMT0_1_SH)
-#define __APP_PLL_425_HREF		0x00000800
-#define __APP_PLL_425_HDIV		0x00000400
-#define __APP_PLL_425_P0_1_MK		0x00000300
-#define __APP_PLL_425_P0_1_SH		8
-#define __APP_PLL_425_P0_1(_v)		((_v) << __APP_PLL_425_P0_1_SH)
-#define __APP_PLL_425_Z0_2_MK		0x000000e0
-#define __APP_PLL_425_Z0_2_SH		5
-#define __APP_PLL_425_Z0_2(_v)		((_v) << __APP_PLL_425_Z0_2_SH)
-#define __APP_PLL_425_RSEL200500	0x00000010
-#define __APP_PLL_425_ENARST		0x00000008
-#define __APP_PLL_425_BYPASS		0x00000004
-#define __APP_PLL_425_LRESETN		0x00000002
-#define __APP_PLL_425_ENABLE		0x00000001
-#define APP_PLL_312_CTL_REG		0x00014208
-#define __P_312_PLL_LOCK		0x80000000
-#define __ENABLE_MAC_AHB_1		0x00800000
-#define __ENABLE_MAC_AHB_0		0x00400000
-#define __ENABLE_MAC_1			0x00200000
-#define __ENABLE_MAC_0			0x00100000
-#define __APP_PLL_312_RESET_TIMER_MK	0x000e0000
-#define __APP_PLL_312_RESET_TIMER_SH	17
-#define __APP_PLL_312_RESET_TIMER(_v)	((_v) << __APP_PLL_312_RESET_TIMER_SH)
-#define __APP_PLL_312_LOGIC_SOFT_RESET	0x00010000
-#define __APP_PLL_312_CNTLMT0_1_MK	0x0000c000
-#define __APP_PLL_312_CNTLMT0_1_SH	14
-#define __APP_PLL_312_CNTLMT0_1(_v)	((_v) << __APP_PLL_312_CNTLMT0_1_SH)
-#define __APP_PLL_312_JITLMT0_1_MK	0x00003000
-#define __APP_PLL_312_JITLMT0_1_SH	12
-#define __APP_PLL_312_JITLMT0_1(_v)	((_v) << __APP_PLL_312_JITLMT0_1_SH)
-#define __APP_PLL_312_HREF		0x00000800
-#define __APP_PLL_312_HDIV		0x00000400
-#define __APP_PLL_312_P0_1_MK		0x00000300
-#define __APP_PLL_312_P0_1_SH		8
-#define __APP_PLL_312_P0_1(_v)		((_v) << __APP_PLL_312_P0_1_SH)
-#define __APP_PLL_312_Z0_2_MK		0x000000e0
-#define __APP_PLL_312_Z0_2_SH		5
-#define __APP_PLL_312_Z0_2(_v)		((_v) << __APP_PLL_312_Z0_2_SH)
-#define __APP_PLL_312_RSEL200500	0x00000010
-#define __APP_PLL_312_ENARST		0x00000008
-#define __APP_PLL_312_BYPASS		0x00000004
-#define __APP_PLL_312_LRESETN		0x00000002
-#define __APP_PLL_312_ENABLE		0x00000001
-#define MBIST_CTL_REG			0x00014220
-#define __EDRAM_BISTR_START		0x00000004
-#define __MBIST_RESET			0x00000002
-#define __MBIST_START			0x00000001
-#define MBIST_STAT_REG			0x00014224
-#define __EDRAM_BISTR_STATUS		0x00000008
-#define __EDRAM_BISTR_DONE		0x00000004
-#define __MEM_BIT_STATUS		0x00000002
-#define __MBIST_DONE			0x00000001
-#define HOST_SEM0_REG			0x00014230
-#define __HOST_SEMAPHORE		0x00000001
-#define HOST_SEM1_REG			0x00014234
-#define HOST_SEM2_REG			0x00014238
-#define HOST_SEM3_REG			0x0001423c
-#define HOST_SEM0_INFO_REG		0x00014240
-#define HOST_SEM1_INFO_REG		0x00014244
-#define HOST_SEM2_INFO_REG		0x00014248
-#define HOST_SEM3_INFO_REG		0x0001424c
-#define ETH_MAC_SER_REG			0x00014288
-#define __APP_EMS_CKBUFAMPIN		0x00000020
-#define __APP_EMS_REFCLKSEL		0x00000010
-#define __APP_EMS_CMLCKSEL		0x00000008
-#define __APP_EMS_REFCKBUFEN2		0x00000004
-#define __APP_EMS_REFCKBUFEN1		0x00000002
-#define __APP_EMS_CHANNEL_SEL		0x00000001
-#define HOSTFN2_INT_STATUS		0x00014300
-#define __HOSTFN2_HALT_OCCURRED		0x01000000
-#define __HOSTFN2_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN2_INT_STATUS_LVL_SH	20
-#define __HOSTFN2_INT_STATUS_LVL(_v)	((_v) << __HOSTFN2_INT_STATUS_LVL_SH)
-#define __HOSTFN2_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN2_INT_STATUS_P_SH	16
-#define __HOSTFN2_INT_STATUS_P(_v)	((_v) << __HOSTFN2_INT_STATUS_P_SH)
-#define __HOSTFN2_INT_STATUS_F		0x0000ffff
-#define HOSTFN2_INT_MSK			0x00014304
-#define HOST_PAGE_NUM_FN2		0x00014308
-#define HOST_MSIX_ERR_INDEX_FN2		0x0001430c
-#define HOSTFN3_INT_STATUS		0x00014400
-#define __HALT_OCCURRED			0x01000000
-#define __HOSTFN3_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN3_INT_STATUS_LVL_SH	20
-#define __HOSTFN3_INT_STATUS_LVL(_v)	((_v) << __HOSTFN3_INT_STATUS_LVL_SH)
-#define __HOSTFN3_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN3_INT_STATUS_P_SH	16
-#define __HOSTFN3_INT_STATUS_P(_v)	((_v) << __HOSTFN3_INT_STATUS_P_SH)
-#define __HOSTFN3_INT_STATUS_F		0x0000ffff
-#define HOSTFN3_INT_MSK			0x00014404
-#define HOST_PAGE_NUM_FN3		0x00014408
-#define HOST_MSIX_ERR_INDEX_FN3		0x0001440c
-#define FNC_ID_REG			0x00014600
-#define __FUNCTION_NUMBER		0x00000007
-#define FNC_PERS_REG			0x00014604
-#define __F3_FUNCTION_ACTIVE		0x80000000
-#define __F3_FUNCTION_MODE		0x40000000
-#define __F3_PORT_MAP_MK		0x30000000
-#define __F3_PORT_MAP_SH		28
-#define __F3_PORT_MAP(_v)		((_v) << __F3_PORT_MAP_SH)
-#define __F3_VM_MODE			0x08000000
-#define __F3_INTX_STATUS_MK		0x07000000
-#define __F3_INTX_STATUS_SH		24
-#define __F3_INTX_STATUS(_v)		((_v) << __F3_INTX_STATUS_SH)
-#define __F2_FUNCTION_ACTIVE		0x00800000
-#define __F2_FUNCTION_MODE		0x00400000
-#define __F2_PORT_MAP_MK		0x00300000
-#define __F2_PORT_MAP_SH		20
-#define __F2_PORT_MAP(_v)		((_v) << __F2_PORT_MAP_SH)
-#define __F2_VM_MODE			0x00080000
-#define __F2_INTX_STATUS_MK		0x00070000
-#define __F2_INTX_STATUS_SH		16
-#define __F2_INTX_STATUS(_v)		((_v) << __F2_INTX_STATUS_SH)
-#define __F1_FUNCTION_ACTIVE		0x00008000
-#define __F1_FUNCTION_MODE		0x00004000
-#define __F1_PORT_MAP_MK		0x00003000
-#define __F1_PORT_MAP_SH		12
-#define __F1_PORT_MAP(_v)		((_v) << __F1_PORT_MAP_SH)
-#define __F1_VM_MODE			0x00000800
-#define __F1_INTX_STATUS_MK		0x00000700
-#define __F1_INTX_STATUS_SH		8
-#define __F1_INTX_STATUS(_v)		((_v) << __F1_INTX_STATUS_SH)
-#define __F0_FUNCTION_ACTIVE		0x00000080
-#define __F0_FUNCTION_MODE		0x00000040
-#define __F0_PORT_MAP_MK		0x00000030
-#define __F0_PORT_MAP_SH		4
-#define __F0_PORT_MAP(_v)		((_v) << __F0_PORT_MAP_SH)
-#define __F0_VM_MODE		0x00000008
-#define __F0_INTX_STATUS		0x00000007
-enum {
-	__F0_INTX_STATUS_MSIX		= 0x0,
-	__F0_INTX_STATUS_INTA		= 0x1,
-	__F0_INTX_STATUS_INTB		= 0x2,
-	__F0_INTX_STATUS_INTC		= 0x3,
-	__F0_INTX_STATUS_INTD		= 0x4,
-};
-#define OP_MODE				0x0001460c
-#define __APP_ETH_CLK_LOWSPEED		0x00000004
-#define __GLOBAL_CORECLK_HALFSPEED	0x00000002
-#define __GLOBAL_FCOE_MODE		0x00000001
-#define HOST_SEM4_REG			0x00014610
-#define HOST_SEM5_REG			0x00014614
-#define HOST_SEM6_REG			0x00014618
-#define HOST_SEM7_REG			0x0001461c
-#define HOST_SEM4_INFO_REG		0x00014620
-#define HOST_SEM5_INFO_REG		0x00014624
-#define HOST_SEM6_INFO_REG		0x00014628
-#define HOST_SEM7_INFO_REG		0x0001462c
-#define HOSTFN0_LPU0_MBOX0_CMD_STAT	0x00019000
-#define __HOSTFN0_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN0_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN0_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN0_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN0_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN0_LPU1_MBOX0_CMD_STAT	0x00019004
-#define __HOSTFN0_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN0_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN0_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN0_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN0_LPU1_MBOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN0_MBOX0_CMD_STAT	0x00019008
-#define __LPU0_HOSTFN0_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN0_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN0_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN0_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN0_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN0_MBOX0_CMD_STAT	0x0001900c
-#define __LPU1_HOSTFN0_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN0_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN0_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN0_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN1_LPU0_MBOX0_CMD_STAT	0x00019010
-#define __HOSTFN1_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN1_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN1_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN1_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN1_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN1_LPU1_MBOX0_CMD_STAT	0x00019014
-#define __HOSTFN1_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN1_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN1_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN1_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN1_LPU1_MBOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN1_MBOX0_CMD_STAT	0x00019018
-#define __LPU0_HOSTFN1_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN1_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN1_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN1_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN1_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN1_MBOX0_CMD_STAT	0x0001901c
-#define __LPU1_HOSTFN1_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN1_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN1_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN1_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN1_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN2_LPU0_MBOX0_CMD_STAT	0x00019150
-#define __HOSTFN2_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN2_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN2_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN2_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN2_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN2_LPU1_MBOX0_CMD_STAT	0x00019154
-#define __HOSTFN2_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN2_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN2_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN2_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN2_LPU1_MBOX0BOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN2_MBOX0_CMD_STAT	0x00019158
-#define __LPU0_HOSTFN2_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN2_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN2_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN2_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN2_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN2_MBOX0_CMD_STAT	0x0001915c
-#define __LPU1_HOSTFN2_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN2_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN2_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN2_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN2_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN3_LPU0_MBOX0_CMD_STAT	0x00019160
-#define __HOSTFN3_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN3_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN3_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN3_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN3_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN3_LPU1_MBOX0_CMD_STAT	0x00019164
-#define __HOSTFN3_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN3_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN3_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN3_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN3_LPU1_MBOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN3_MBOX0_CMD_STAT	0x00019168
-#define __LPU0_HOSTFN3_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN3_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN3_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN3_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN3_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN3_MBOX0_CMD_STAT	0x0001916c
-#define __LPU1_HOSTFN3_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN3_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN3_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN3_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN3_MBOX0_CMD_STATUS	0x00000001
-#define FW_INIT_HALT_P0			0x000191ac
-#define __FW_INIT_HALT_P		0x00000001
-#define FW_INIT_HALT_P1			0x000191bc
-#define CPE_PI_PTR_Q0			0x00038000
-#define __CPE_PI_UNUSED_MK		0xffff0000
-#define __CPE_PI_UNUSED_SH		16
-#define __CPE_PI_UNUSED(_v)		((_v) << __CPE_PI_UNUSED_SH)
-#define __CPE_PI_PTR			0x0000ffff
-#define CPE_PI_PTR_Q1			0x00038040
-#define CPE_CI_PTR_Q0			0x00038004
-#define __CPE_CI_UNUSED_MK		0xffff0000
-#define __CPE_CI_UNUSED_SH		16
-#define __CPE_CI_UNUSED(_v)		((_v) << __CPE_CI_UNUSED_SH)
-#define __CPE_CI_PTR			0x0000ffff
-#define CPE_CI_PTR_Q1			0x00038044
-#define CPE_DEPTH_Q0			0x00038008
-#define __CPE_DEPTH_UNUSED_MK		0xf8000000
-#define __CPE_DEPTH_UNUSED_SH		27
-#define __CPE_DEPTH_UNUSED(_v)		((_v) << __CPE_DEPTH_UNUSED_SH)
-#define __CPE_MSIX_VEC_INDEX_MK		0x07ff0000
-#define __CPE_MSIX_VEC_INDEX_SH		16
-#define __CPE_MSIX_VEC_INDEX(_v)	((_v) << __CPE_MSIX_VEC_INDEX_SH)
-#define __CPE_DEPTH			0x0000ffff
-#define CPE_DEPTH_Q1			0x00038048
-#define CPE_QCTRL_Q0			0x0003800c
-#define __CPE_CTRL_UNUSED30_MK		0xfc000000
-#define __CPE_CTRL_UNUSED30_SH		26
-#define __CPE_CTRL_UNUSED30(_v)		((_v) << __CPE_CTRL_UNUSED30_SH)
-#define __CPE_FUNC_INT_CTRL_MK		0x03000000
-#define __CPE_FUNC_INT_CTRL_SH		24
-#define __CPE_FUNC_INT_CTRL(_v)		((_v) << __CPE_FUNC_INT_CTRL_SH)
-enum {
-	__CPE_FUNC_INT_CTRL_DISABLE		= 0x0,
-	__CPE_FUNC_INT_CTRL_F2NF		= 0x1,
-	__CPE_FUNC_INT_CTRL_3QUART		= 0x2,
-	__CPE_FUNC_INT_CTRL_HALF		= 0x3,
-};
-#define __CPE_CTRL_UNUSED20_MK		0x00f00000
-#define __CPE_CTRL_UNUSED20_SH		20
-#define __CPE_CTRL_UNUSED20(_v)		((_v) << __CPE_CTRL_UNUSED20_SH)
-#define __CPE_SCI_TH_MK			0x000f0000
-#define __CPE_SCI_TH_SH			16
-#define __CPE_SCI_TH(_v)		((_v) << __CPE_SCI_TH_SH)
-#define __CPE_CTRL_UNUSED10_MK		0x0000c000
-#define __CPE_CTRL_UNUSED10_SH		14
-#define __CPE_CTRL_UNUSED10(_v)		((_v) << __CPE_CTRL_UNUSED10_SH)
-#define __CPE_ACK_PENDING		0x00002000
-#define __CPE_CTRL_UNUSED40_MK		0x00001c00
-#define __CPE_CTRL_UNUSED40_SH		10
-#define __CPE_CTRL_UNUSED40(_v)		((_v) << __CPE_CTRL_UNUSED40_SH)
-#define __CPE_PCIEID_MK			0x00000300
-#define __CPE_PCIEID_SH			8
-#define __CPE_PCIEID(_v)		((_v) << __CPE_PCIEID_SH)
-#define __CPE_CTRL_UNUSED00_MK		0x000000fe
-#define __CPE_CTRL_UNUSED00_SH		1
-#define __CPE_CTRL_UNUSED00(_v)		((_v) << __CPE_CTRL_UNUSED00_SH)
-#define __CPE_ESIZE			0x00000001
-#define CPE_QCTRL_Q1			0x0003804c
-#define __CPE_CTRL_UNUSED31_MK		0xfc000000
-#define __CPE_CTRL_UNUSED31_SH		26
-#define __CPE_CTRL_UNUSED31(_v)		((_v) << __CPE_CTRL_UNUSED31_SH)
-#define __CPE_CTRL_UNUSED21_MK		0x00f00000
-#define __CPE_CTRL_UNUSED21_SH		20
-#define __CPE_CTRL_UNUSED21(_v)		((_v) << __CPE_CTRL_UNUSED21_SH)
-#define __CPE_CTRL_UNUSED11_MK		0x0000c000
-#define __CPE_CTRL_UNUSED11_SH		14
-#define __CPE_CTRL_UNUSED11(_v)		((_v) << __CPE_CTRL_UNUSED11_SH)
-#define __CPE_CTRL_UNUSED41_MK		0x00001c00
-#define __CPE_CTRL_UNUSED41_SH		10
-#define __CPE_CTRL_UNUSED41(_v)		((_v) << __CPE_CTRL_UNUSED41_SH)
-#define __CPE_CTRL_UNUSED01_MK		0x000000fe
-#define __CPE_CTRL_UNUSED01_SH		1
-#define __CPE_CTRL_UNUSED01(_v)		((_v) << __CPE_CTRL_UNUSED01_SH)
-#define RME_PI_PTR_Q0			0x00038020
-#define __LATENCY_TIME_STAMP_MK		0xffff0000
-#define __LATENCY_TIME_STAMP_SH		16
-#define __LATENCY_TIME_STAMP(_v)	((_v) << __LATENCY_TIME_STAMP_SH)
-#define __RME_PI_PTR			0x0000ffff
-#define RME_PI_PTR_Q1			0x00038060
-#define RME_CI_PTR_Q0			0x00038024
-#define __DELAY_TIME_STAMP_MK		0xffff0000
-#define __DELAY_TIME_STAMP_SH		16
-#define __DELAY_TIME_STAMP(_v)		((_v) << __DELAY_TIME_STAMP_SH)
-#define __RME_CI_PTR			0x0000ffff
-#define RME_CI_PTR_Q1			0x00038064
-#define RME_DEPTH_Q0			0x00038028
-#define __RME_DEPTH_UNUSED_MK		0xf8000000
-#define __RME_DEPTH_UNUSED_SH		27
-#define __RME_DEPTH_UNUSED(_v)		((_v) << __RME_DEPTH_UNUSED_SH)
-#define __RME_MSIX_VEC_INDEX_MK		0x07ff0000
-#define __RME_MSIX_VEC_INDEX_SH		16
-#define __RME_MSIX_VEC_INDEX(_v)	((_v) << __RME_MSIX_VEC_INDEX_SH)
-#define __RME_DEPTH			0x0000ffff
-#define RME_DEPTH_Q1			0x00038068
-#define RME_QCTRL_Q0			0x0003802c
-#define __RME_INT_LATENCY_TIMER_MK	0xff000000
-#define __RME_INT_LATENCY_TIMER_SH	24
-#define __RME_INT_LATENCY_TIMER(_v)	((_v) << __RME_INT_LATENCY_TIMER_SH)
-#define __RME_INT_DELAY_TIMER_MK	0x00ff0000
-#define __RME_INT_DELAY_TIMER_SH	16
-#define __RME_INT_DELAY_TIMER(_v)	((_v) << __RME_INT_DELAY_TIMER_SH)
-#define __RME_INT_DELAY_DISABLE		0x00008000
-#define __RME_DLY_DELAY_DISABLE		0x00004000
-#define __RME_ACK_PENDING		0x00002000
-#define __RME_FULL_INTERRUPT_DISABLE	0x00001000
-#define __RME_CTRL_UNUSED10_MK		0x00000c00
-#define __RME_CTRL_UNUSED10_SH		10
-#define __RME_CTRL_UNUSED10(_v)		((_v) << __RME_CTRL_UNUSED10_SH)
-#define __RME_PCIEID_MK			0x00000300
-#define __RME_PCIEID_SH			8
-#define __RME_PCIEID(_v)		((_v) << __RME_PCIEID_SH)
-#define __RME_CTRL_UNUSED00_MK		0x000000fe
-#define __RME_CTRL_UNUSED00_SH		1
-#define __RME_CTRL_UNUSED00(_v)		((_v) << __RME_CTRL_UNUSED00_SH)
-#define __RME_ESIZE			0x00000001
-#define RME_QCTRL_Q1			0x0003806c
-#define __RME_CTRL_UNUSED11_MK		0x00000c00
-#define __RME_CTRL_UNUSED11_SH		10
-#define __RME_CTRL_UNUSED11(_v)		((_v) << __RME_CTRL_UNUSED11_SH)
-#define __RME_CTRL_UNUSED01_MK		0x000000fe
-#define __RME_CTRL_UNUSED01_SH		1
-#define __RME_CTRL_UNUSED01(_v)		((_v) << __RME_CTRL_UNUSED01_SH)
-#define PSS_CTL_REG			0x00018800
-#define __PSS_I2C_CLK_DIV_MK		0x007f0000
-#define __PSS_I2C_CLK_DIV_SH		16
-#define __PSS_I2C_CLK_DIV(_v)		((_v) << __PSS_I2C_CLK_DIV_SH)
-#define __PSS_LMEM_INIT_DONE		0x00001000
-#define __PSS_LMEM_RESET		0x00000200
-#define __PSS_LMEM_INIT_EN		0x00000100
-#define __PSS_LPU1_RESET		0x00000002
-#define __PSS_LPU0_RESET		0x00000001
-#define PSS_ERR_STATUS_REG		0x00018810
-#define __PSS_LPU1_TCM_READ_ERR		0x00200000
-#define __PSS_LPU0_TCM_READ_ERR		0x00100000
-#define __PSS_LMEM5_CORR_ERR		0x00080000
-#define __PSS_LMEM4_CORR_ERR		0x00040000
-#define __PSS_LMEM3_CORR_ERR		0x00020000
-#define __PSS_LMEM2_CORR_ERR		0x00010000
-#define __PSS_LMEM1_CORR_ERR		0x00008000
-#define __PSS_LMEM0_CORR_ERR		0x00004000
-#define __PSS_LMEM5_UNCORR_ERR		0x00002000
-#define __PSS_LMEM4_UNCORR_ERR		0x00001000
-#define __PSS_LMEM3_UNCORR_ERR		0x00000800
-#define __PSS_LMEM2_UNCORR_ERR		0x00000400
-#define __PSS_LMEM1_UNCORR_ERR		0x00000200
-#define __PSS_LMEM0_UNCORR_ERR		0x00000100
-#define __PSS_BAL_PERR			0x00000080
-#define __PSS_DIP_IF_ERR		0x00000040
-#define __PSS_IOH_IF_ERR		0x00000020
-#define __PSS_TDS_IF_ERR		0x00000010
-#define __PSS_RDS_IF_ERR		0x00000008
-#define __PSS_SGM_IF_ERR		0x00000004
-#define __PSS_LPU1_RAM_ERR		0x00000002
-#define __PSS_LPU0_RAM_ERR		0x00000001
-#define ERR_SET_REG			0x00018818
-#define __PSS_ERR_STATUS_SET		0x003fffff
-#define PMM_1T_RESET_REG_P0		0x0002381c
-#define __PMM_1T_RESET_P		0x00000001
-#define PMM_1T_RESET_REG_P1		0x00023c1c
-#define HQM_QSET0_RXQ_DRBL_P0		0x00038000
-#define __RXQ0_ADD_VECTORS_P		0x80000000
-#define __RXQ0_STOP_P			0x40000000
-#define __RXQ0_PRD_PTR_P		0x0000ffff
-#define HQM_QSET1_RXQ_DRBL_P0		0x00038080
-#define __RXQ1_ADD_VECTORS_P		0x80000000
-#define __RXQ1_STOP_P			0x40000000
-#define __RXQ1_PRD_PTR_P		0x0000ffff
-#define HQM_QSET0_RXQ_DRBL_P1		0x0003c000
-#define HQM_QSET1_RXQ_DRBL_P1		0x0003c080
-#define HQM_QSET0_TXQ_DRBL_P0		0x00038020
-#define __TXQ0_ADD_VECTORS_P		0x80000000
-#define __TXQ0_STOP_P			0x40000000
-#define __TXQ0_PRD_PTR_P		0x0000ffff
-#define HQM_QSET1_TXQ_DRBL_P0		0x000380a0
-#define __TXQ1_ADD_VECTORS_P		0x80000000
-#define __TXQ1_STOP_P			0x40000000
-#define __TXQ1_PRD_PTR_P		0x0000ffff
-#define HQM_QSET0_TXQ_DRBL_P1		0x0003c020
-#define HQM_QSET1_TXQ_DRBL_P1		0x0003c0a0
-#define HQM_QSET0_IB_DRBL_1_P0		0x00038040
-#define __IB1_0_ACK_P			0x80000000
-#define __IB1_0_DISABLE_P		0x40000000
-#define __IB1_0_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB1_0_COALESCING_CFG_P_SH	16
-#define __IB1_0_COALESCING_CFG_P(_v)	((_v) << __IB1_0_COALESCING_CFG_P_SH)
-#define __IB1_0_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET1_IB_DRBL_1_P0		0x000380c0
-#define __IB1_1_ACK_P			0x80000000
-#define __IB1_1_DISABLE_P		0x40000000
-#define __IB1_1_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB1_1_COALESCING_CFG_P_SH	16
-#define __IB1_1_COALESCING_CFG_P(_v)	((_v) << __IB1_1_COALESCING_CFG_P_SH)
-#define __IB1_1_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET0_IB_DRBL_1_P1		0x0003c040
-#define HQM_QSET1_IB_DRBL_1_P1		0x0003c0c0
-#define HQM_QSET0_IB_DRBL_2_P0		0x00038060
-#define __IB2_0_ACK_P			0x80000000
-#define __IB2_0_DISABLE_P		0x40000000
-#define __IB2_0_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB2_0_COALESCING_CFG_P_SH	16
-#define __IB2_0_COALESCING_CFG_P(_v)	((_v) << __IB2_0_COALESCING_CFG_P_SH)
-#define __IB2_0_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET1_IB_DRBL_2_P0		0x000380e0
-#define __IB2_1_ACK_P			0x80000000
-#define __IB2_1_DISABLE_P		0x40000000
-#define __IB2_1_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB2_1_COALESCING_CFG_P_SH	16
-#define __IB2_1_COALESCING_CFG_P(_v)	((_v) << __IB2_1_COALESCING_CFG_P_SH)
-#define __IB2_1_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET0_IB_DRBL_2_P1		0x0003c060
-#define HQM_QSET1_IB_DRBL_2_P1		0x0003c0e0
-
-/*
- * These definitions are either in error/missing in spec. Its auto-generated
- * from hard coded values in regparse.pl.
- */
-#define __EMPHPOST_AT_4G_MK_FIX		0x0000001c
-#define __EMPHPOST_AT_4G_SH_FIX		0x00000002
-#define __EMPHPRE_AT_4G_FIX		0x00000003
-#define __SFP_TXRATE_EN_FIX		0x00000100
-#define __SFP_RXRATE_EN_FIX		0x00000080
-
-/*
- * These register definitions are auto-generated from hard coded values
- * in regparse.pl.
- */
-
-/*
- * These register mapping definitions are auto-generated from mapping tables
- * in regparse.pl.
- */
-#define BFA_IOC0_HBEAT_REG		HOST_SEM0_INFO_REG
-#define BFA_IOC0_STATE_REG		HOST_SEM1_INFO_REG
-#define BFA_IOC1_HBEAT_REG		HOST_SEM2_INFO_REG
-#define BFA_IOC1_STATE_REG		HOST_SEM3_INFO_REG
-#define BFA_FW_USE_COUNT		 HOST_SEM4_INFO_REG
-#define BFA_IOC_FAIL_SYNC		HOST_SEM5_INFO_REG
-
-#define CPE_DEPTH_Q(__n) \
-	(CPE_DEPTH_Q0 + (__n) * (CPE_DEPTH_Q1 - CPE_DEPTH_Q0))
-#define CPE_QCTRL_Q(__n) \
-	(CPE_QCTRL_Q0 + (__n) * (CPE_QCTRL_Q1 - CPE_QCTRL_Q0))
-#define CPE_PI_PTR_Q(__n) \
-	(CPE_PI_PTR_Q0 + (__n) * (CPE_PI_PTR_Q1 - CPE_PI_PTR_Q0))
-#define CPE_CI_PTR_Q(__n) \
-	(CPE_CI_PTR_Q0 + (__n) * (CPE_CI_PTR_Q1 - CPE_CI_PTR_Q0))
-#define RME_DEPTH_Q(__n) \
-	(RME_DEPTH_Q0 + (__n) * (RME_DEPTH_Q1 - RME_DEPTH_Q0))
-#define RME_QCTRL_Q(__n) \
-	(RME_QCTRL_Q0 + (__n) * (RME_QCTRL_Q1 - RME_QCTRL_Q0))
-#define RME_PI_PTR_Q(__n) \
-	(RME_PI_PTR_Q0 + (__n) * (RME_PI_PTR_Q1 - RME_PI_PTR_Q0))
-#define RME_CI_PTR_Q(__n) \
-	(RME_CI_PTR_Q0 + (__n) * (RME_CI_PTR_Q1 - RME_CI_PTR_Q0))
-#define HQM_QSET_RXQ_DRBL_P0(__n) \
-	(HQM_QSET0_RXQ_DRBL_P0 + (__n) * \
-		(HQM_QSET1_RXQ_DRBL_P0 - HQM_QSET0_RXQ_DRBL_P0))
-#define HQM_QSET_TXQ_DRBL_P0(__n) \
-	(HQM_QSET0_TXQ_DRBL_P0 + (__n) * \
-		(HQM_QSET1_TXQ_DRBL_P0 - HQM_QSET0_TXQ_DRBL_P0))
-#define HQM_QSET_IB_DRBL_1_P0(__n) \
-	(HQM_QSET0_IB_DRBL_1_P0 + (__n) * \
-		(HQM_QSET1_IB_DRBL_1_P0 - HQM_QSET0_IB_DRBL_1_P0))
-#define HQM_QSET_IB_DRBL_2_P0(__n) \
-	(HQM_QSET0_IB_DRBL_2_P0 + (__n) * \
-		(HQM_QSET1_IB_DRBL_2_P0 - HQM_QSET0_IB_DRBL_2_P0))
-#define HQM_QSET_RXQ_DRBL_P1(__n) \
-	(HQM_QSET0_RXQ_DRBL_P1 + (__n) * \
-		(HQM_QSET1_RXQ_DRBL_P1 - HQM_QSET0_RXQ_DRBL_P1))
-#define HQM_QSET_TXQ_DRBL_P1(__n) \
-	(HQM_QSET0_TXQ_DRBL_P1 + (__n) * \
-		(HQM_QSET1_TXQ_DRBL_P1 - HQM_QSET0_TXQ_DRBL_P1))
-#define HQM_QSET_IB_DRBL_1_P1(__n) \
-	(HQM_QSET0_IB_DRBL_1_P1 + (__n) * \
-		(HQM_QSET1_IB_DRBL_1_P1 - HQM_QSET0_IB_DRBL_1_P1))
-#define HQM_QSET_IB_DRBL_2_P1(__n) \
-	(HQM_QSET0_IB_DRBL_2_P1 + (__n) * \
-		(HQM_QSET1_IB_DRBL_2_P1 - HQM_QSET0_IB_DRBL_2_P1))
-
-#define CPE_Q_NUM(__fn, __q) (((__fn) << 2) + (__q))
-#define RME_Q_NUM(__fn, __q) (((__fn) << 2) + (__q))
-#define CPE_Q_MASK(__q) ((__q) & 0x3)
-#define RME_Q_MASK(__q) ((__q) & 0x3)
-
-/*
- * PCI MSI-X vector defines
- */
-enum {
-	BFA_MSIX_CPE_Q0 = 0,
-	BFA_MSIX_CPE_Q1 = 1,
-	BFA_MSIX_CPE_Q2 = 2,
-	BFA_MSIX_CPE_Q3 = 3,
-	BFA_MSIX_RME_Q0 = 4,
-	BFA_MSIX_RME_Q1 = 5,
-	BFA_MSIX_RME_Q2 = 6,
-	BFA_MSIX_RME_Q3 = 7,
-	BFA_MSIX_LPU_ERR = 8,
-	BFA_MSIX_CT_MAX = 9,
-};
-
-/*
- * And corresponding host interrupt status bit field defines
- */
-#define __HFN_INT_CPE_Q0		0x00000001U
-#define __HFN_INT_CPE_Q1		0x00000002U
-#define __HFN_INT_CPE_Q2		0x00000004U
-#define __HFN_INT_CPE_Q3		0x00000008U
-#define __HFN_INT_CPE_Q4		0x00000010U
-#define __HFN_INT_CPE_Q5		0x00000020U
-#define __HFN_INT_CPE_Q6		0x00000040U
-#define __HFN_INT_CPE_Q7		0x00000080U
-#define __HFN_INT_RME_Q0		0x00000100U
-#define __HFN_INT_RME_Q1		0x00000200U
-#define __HFN_INT_RME_Q2		0x00000400U
-#define __HFN_INT_RME_Q3		0x00000800U
-#define __HFN_INT_RME_Q4		0x00001000U
-#define __HFN_INT_RME_Q5		0x00002000U
-#define __HFN_INT_RME_Q6		0x00004000U
-#define __HFN_INT_RME_Q7		0x00008000U
-#define __HFN_INT_ERR_EMC		0x00010000U
-#define __HFN_INT_ERR_LPU0		0x00020000U
-#define __HFN_INT_ERR_LPU1		0x00040000U
-#define __HFN_INT_ERR_PSS		0x00080000U
-#define __HFN_INT_MBOX_LPU0		0x00100000U
-#define __HFN_INT_MBOX_LPU1		0x00200000U
-#define __HFN_INT_MBOX1_LPU0		0x00400000U
-#define __HFN_INT_MBOX1_LPU1		0x00800000U
-#define __HFN_INT_LL_HALT		0x01000000U
-#define __HFN_INT_CPE_MASK		0x000000ffU
-#define __HFN_INT_RME_MASK		0x0000ff00U
-
-/*
- * catapult memory map.
- */
-#define LL_PGN_HQM0		0x0096
-#define LL_PGN_HQM1		0x0097
-#define PSS_SMEM_PAGE_START	0x8000
-#define PSS_SMEM_PGNUM(_pg0, _ma)	((_pg0) + ((_ma) >> 15))
-#define PSS_SMEM_PGOFF(_ma)	((_ma) & 0x7fff)
-
-/*
- * End of catapult memory map
- */
-
-#endif /* __BFI_CTREG_H__ */
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 4/8] bna: MSGQ Implementation
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
                   ` (2 preceding siblings ...)
  2011-07-27  2:10 ` [PATCH 3/8] bna: Remove Obsolete File bfi_ctreg.h Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 5/8] bna: Remove Unnecessary CNA Check Rasesh Mody
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Currently, modules communicate with the FW through a 32-byte command and
   response register, which limits the size of the command and response
   messages exchanged with the FW to 32 bytes. We need a mechanism to
   exchange commands and responses with the FW that exceed 32 bytes.

 - The MSGQ implementation provides that facility. It removes the assumption
   that the command/response queue size is precisely calculated to accommodate
   all concurrent FW commands/responses. The queue depth is now variable,
   defined by a macro. A pending command list holds commands when there is no
   room in the command queue. Each command entry carries a callback that
   notifies the posting module once space becomes available and the command
   has finally been posted to the queue. Module/object information is embedded
   in the response for tracking purposes. A usage sketch of the new API
   follows below.
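 - The sketch below illustrates how a client module would use the new MSGQ
   API; it is a minimal usage sketch, not part of the patch. The enet_cmd
   structure, enet_cfg_done() and enet_cfg_post() names are made up for
   illustration; bfa_msgq_cmd_set(), bfa_msgq_cmd_post() and the callback
   signature come from bfa_msgq.h in this patch.

	#include "bfa_msgq.h"

	/* Hypothetical MSGQ client: owns one command entry and its payload */
	struct enet_cmd {
		struct bfa_msgq_cmd_entry cmd_ent;
		struct bfi_msgq_mhdr *msg;	/* payload; caller fills in
						 * msg_class, msg_id and
						 * num_entries */
	};

	static void
	enet_cfg_done(void *cbarg, enum bfa_status status)
	{
		/* cbarg is the enet_cmd passed to bfa_msgq_cmd_set(). Called
		 * once the command has been copied into the command queue --
		 * immediately if there was room, otherwise later from the
		 * pending list -- or with an error status if the queue was
		 * stopped before the command could be posted.
		 */
	}

	static void
	enet_cfg_post(struct bfa_msgq *msgq, struct enet_cmd *ec, size_t msg_size)
	{
		bfa_msgq_cmd_set(&ec->cmd_ent, enet_cfg_done, ec, msg_size,
				ec->msg);
		bfa_msgq_cmd_post(msgq, &ec->cmd_ent);
	}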

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/Makefile   |    3 +-
 drivers/net/bna/bfa_ioc.c  |   14 +-
 drivers/net/bna/bfa_ioc.h  |    4 +-
 drivers/net/bna/bfa_msgq.c |  669 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bna/bfa_msgq.h |  130 +++++++++
 drivers/net/bna/bfi.h      |  101 +++++++
 drivers/net/bna/bna_ctrl.c |    6 +-
 7 files changed, 918 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/bna/bfa_msgq.c
 create mode 100644 drivers/net/bna/bfa_msgq.h

diff --git a/drivers/net/bna/Makefile b/drivers/net/bna/Makefile
index a5d604d..5c4629d 100644
--- a/drivers/net/bna/Makefile
+++ b/drivers/net/bna/Makefile
@@ -6,6 +6,7 @@
 obj-$(CONFIG_BNA) += bna.o
 
 bna-objs := bnad.o bnad_ethtool.o bna_ctrl.o bna_txrx.o
-bna-objs += bfa_ioc.o bfa_ioc_ct.o bfa_cee.o cna_fwimg.o
+bna-objs +=	bfa_msgq.o bfa_ioc.o bfa_ioc_ct.o bfa_cee.o
+bna-objs +=	cna_fwimg.o
 
 EXTRA_CFLAGS := -Idrivers/net/bna
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index 3cdea65..2d5c4fd 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -1968,18 +1968,22 @@ bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
  * @param[in]	ioc	IOC instance
  * @param[i]	cmd	Mailbox command
  */
-void
-bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
+bool
+bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd,
+			bfa_mbox_cmd_cbfn_t cbfn, void *cbarg)
 {
 	struct bfa_ioc_mbox_mod *mod = &ioc->mbox_mod;
 	u32			stat;
 
+	cmd->cbfn = cbfn;
+	cmd->cbarg = cbarg;
+
 	/**
 	 * If a previous command is pending, queue new command
 	 */
 	if (!list_empty(&mod->cmd_q)) {
 		list_add_tail(&cmd->qe, &mod->cmd_q);
-		return;
+		return true;
 	}
 
 	/**
@@ -1988,7 +1992,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	stat = readl(ioc->ioc_regs.hfn_mbox_cmd);
 	if (stat) {
 		list_add_tail(&cmd->qe, &mod->cmd_q);
-		return;
+		return true;
 	}
 
 	/**
@@ -1996,7 +2000,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	 */
 	bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg));
 
-	return;
+	return false;
 }
 
 /**
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index bda866b..33ba5f4 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -253,7 +253,9 @@ struct bfa_ioc_hwif {
 /**
  * IOC mailbox interface
  */
-void bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd);
+bool bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc,
+			struct bfa_mbox_cmd *cmd,
+			bfa_mbox_cmd_cbfn_t cbfn, void *cbarg);
 void bfa_nw_ioc_mbox_isr(struct bfa_ioc *ioc);
 void bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
 		bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg);
diff --git a/drivers/net/bna/bfa_msgq.c b/drivers/net/bna/bfa_msgq.c
new file mode 100644
index 0000000..ed52187
--- /dev/null
+++ b/drivers/net/bna/bfa_msgq.c
@@ -0,0 +1,669 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfa_msgq.c MSGQ module source file.
+ */
+
+#include "bfi.h"
+#include "bfa_msgq.h"
+#include "bfa_ioc.h"
+
+#define call_cmdq_ent_cbfn(_cmdq_ent, _status)				\
+{									\
+	bfa_msgq_cmdcbfn_t cbfn;					\
+	void *cbarg;							\
+	cbfn = (_cmdq_ent)->cbfn;					\
+	cbarg = (_cmdq_ent)->cbarg;					\
+	(_cmdq_ent)->cbfn = NULL;					\
+	(_cmdq_ent)->cbarg = NULL;					\
+	if (cbfn) {							\
+		cbfn(cbarg, (_status));					\
+	}								\
+}
+
+static void bfa_msgq_cmdq_dbell(struct bfa_msgq_cmdq *cmdq);
+static void bfa_msgq_cmdq_copy_rsp(struct bfa_msgq_cmdq *cmdq);
+
+enum cmdq_event {
+	CMDQ_E_START			= 1,
+	CMDQ_E_STOP			= 2,
+	CMDQ_E_FAIL			= 3,
+	CMDQ_E_POST			= 4,
+	CMDQ_E_INIT_RESP		= 5,
+	CMDQ_E_DB_READY			= 6,
+};
+
+bfa_fsm_state_decl(cmdq, stopped, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, init_wait, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, ready, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, dbell_wait, struct bfa_msgq_cmdq,
+			enum cmdq_event);
+
+static void
+cmdq_sm_stopped_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfa_msgq_cmd_entry *cmdq_ent;
+
+	cmdq->producer_index = 0;
+	cmdq->consumer_index = 0;
+	cmdq->flags = 0;
+	cmdq->token = 0;
+	cmdq->offset = 0;
+	cmdq->bytes_to_copy = 0;
+	while (!list_empty(&cmdq->pending_q)) {
+		bfa_q_deq(&cmdq->pending_q, &cmdq_ent);
+		bfa_q_qe_init(&cmdq_ent->qe);
+		call_cmdq_ent_cbfn(cmdq_ent, BFA_STATUS_FAILED);
+	}
+}
+
+static void
+cmdq_sm_stopped(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_START:
+		bfa_fsm_set_state(cmdq, cmdq_sm_init_wait);
+		break;
+
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		/* No-op */
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_init_wait_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	bfa_wc_down(&cmdq->msgq->init_wc);
+}
+
+static void
+cmdq_sm_init_wait(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	case CMDQ_E_INIT_RESP:
+		if (cmdq->flags & BFA_MSGQ_CMDQ_F_DB_UPDATE) {
+			cmdq->flags &= ~BFA_MSGQ_CMDQ_F_DB_UPDATE;
+			bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(cmdq, cmdq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_ready_entry(struct bfa_msgq_cmdq *cmdq)
+{
+}
+
+static void
+cmdq_sm_ready(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_dbell_wait_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	bfa_msgq_cmdq_dbell(cmdq);
+}
+
+static void
+cmdq_sm_dbell_wait(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	case CMDQ_E_DB_READY:
+		if (cmdq->flags & BFA_MSGQ_CMDQ_F_DB_UPDATE) {
+			cmdq->flags &= ~BFA_MSGQ_CMDQ_F_DB_UPDATE;
+			bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(cmdq, cmdq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bfa_msgq_cmdq_dbell_ready(void *arg)
+{
+	struct bfa_msgq_cmdq *cmdq = (struct bfa_msgq_cmdq *)arg;
+	bfa_fsm_send_event(cmdq, CMDQ_E_DB_READY);
+}
+
+static void
+bfa_msgq_cmdq_dbell(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfi_msgq_h2i_db *dbell =
+		(struct bfi_msgq_h2i_db *)(&cmdq->dbell_mb.msg[0]);
+
+	memset(dbell, 0, sizeof(struct bfi_msgq_h2i_db));
+	bfi_h2i_set(dbell->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_DOORBELL_PI, 0);
+	dbell->mh.mtag.i2htok = 0;
+	dbell->idx.cmdq_pi = htons(cmdq->producer_index);
+
+	if (!bfa_nw_ioc_mbox_queue(cmdq->msgq->ioc, &cmdq->dbell_mb,
+				bfa_msgq_cmdq_dbell_ready, cmdq)) {
+		bfa_msgq_cmdq_dbell_ready(cmdq);
+	}
+}
+
+static void
+__cmd_copy(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq_cmd_entry *cmd)
+{
+	size_t len = cmd->msg_size;
+	int num_entries = 0;
+	size_t to_copy;
+	u8 *src, *dst;
+
+	src = (u8 *)cmd->msg_hdr;
+	dst = (u8 *)cmdq->addr.kva;
+	dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
+
+	while (len) {
+		to_copy = (len < BFI_MSGQ_CMD_ENTRY_SIZE) ?
+				len : BFI_MSGQ_CMD_ENTRY_SIZE;
+		memcpy(dst, src, to_copy);
+		len -= to_copy;
+		src += BFI_MSGQ_CMD_ENTRY_SIZE;
+		BFA_MSGQ_INDX_ADD(cmdq->producer_index, 1, cmdq->depth);
+		dst = (u8 *)cmdq->addr.kva;
+		dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
+		num_entries++;
+	}
+
+}
+
+static void
+bfa_msgq_cmdq_ci_update(struct bfa_msgq_cmdq *cmdq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_db *dbell = (struct bfi_msgq_i2h_db *)mb;
+	struct bfa_msgq_cmd_entry *cmd;
+	int posted = 0;
+
+	cmdq->consumer_index = ntohs(dbell->idx.cmdq_ci);
+
+	/* Walk through pending list to see if the command can be posted */
+	while (!list_empty(&cmdq->pending_q)) {
+		cmd =
+		(struct bfa_msgq_cmd_entry *)bfa_q_first(&cmdq->pending_q);
+		if (ntohs(cmd->msg_hdr->num_entries) <=
+			BFA_MSGQ_FREE_CNT(cmdq)) {
+			list_del(&cmd->qe);
+			__cmd_copy(cmdq, cmd);
+			posted = 1;
+			call_cmdq_ent_cbfn(cmd, BFA_STATUS_OK);
+		} else {
+			break;
+		}
+	}
+
+	if (posted)
+		bfa_fsm_send_event(cmdq, CMDQ_E_POST);
+}
+
+static void
+bfa_msgq_cmdq_copy_next(void *arg)
+{
+	struct bfa_msgq_cmdq *cmdq = (struct bfa_msgq_cmdq *)arg;
+
+	if (cmdq->bytes_to_copy)
+		bfa_msgq_cmdq_copy_rsp(cmdq);
+}
+
+static void
+bfa_msgq_cmdq_copy_req(struct bfa_msgq_cmdq *cmdq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_cmdq_copy_req *req =
+		(struct bfi_msgq_i2h_cmdq_copy_req *)mb;
+
+	cmdq->token = 0;
+	cmdq->offset = ntohs(req->offset);
+	cmdq->bytes_to_copy = ntohs(req->len);
+	bfa_msgq_cmdq_copy_rsp(cmdq);
+}
+
+static void
+bfa_msgq_cmdq_copy_rsp(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfi_msgq_h2i_cmdq_copy_rsp *rsp =
+		(struct bfi_msgq_h2i_cmdq_copy_rsp *)&cmdq->copy_mb.msg[0];
+	int copied;
+	u8 *addr = (u8 *)cmdq->addr.kva;
+
+	memset(rsp, 0, sizeof(struct bfi_msgq_h2i_cmdq_copy_rsp));
+	bfi_h2i_set(rsp->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_CMDQ_COPY_RSP, 0);
+	rsp->mh.mtag.i2htok = htons(cmdq->token);
+	copied = (cmdq->bytes_to_copy >= BFI_CMD_COPY_SZ) ? BFI_CMD_COPY_SZ :
+		cmdq->bytes_to_copy;
+	addr += cmdq->offset;
+	memcpy(rsp->data, addr, copied);
+
+	cmdq->token++;
+	cmdq->offset += copied;
+	cmdq->bytes_to_copy -= copied;
+
+	if (!bfa_nw_ioc_mbox_queue(cmdq->msgq->ioc, &cmdq->copy_mb,
+				bfa_msgq_cmdq_copy_next, cmdq)) {
+		bfa_msgq_cmdq_copy_next(cmdq);
+	}
+}
+
+static void
+bfa_msgq_cmdq_attach(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq *msgq)
+{
+	cmdq->depth = BFA_MSGQ_CMDQ_NUM_ENTRY;
+	INIT_LIST_HEAD(&cmdq->pending_q);
+	cmdq->msgq = msgq;
+	bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+}
+
+static void bfa_msgq_rspq_dbell(struct bfa_msgq_rspq *rspq);
+
+enum rspq_event {
+	RSPQ_E_START			= 1,
+	RSPQ_E_STOP			= 2,
+	RSPQ_E_FAIL			= 3,
+	RSPQ_E_RESP			= 4,
+	RSPQ_E_INIT_RESP		= 5,
+	RSPQ_E_DB_READY			= 6,
+};
+
+bfa_fsm_state_decl(rspq, stopped, struct bfa_msgq_rspq, enum rspq_event);
+bfa_fsm_state_decl(rspq, init_wait, struct bfa_msgq_rspq,
+			enum rspq_event);
+bfa_fsm_state_decl(rspq, ready, struct bfa_msgq_rspq, enum rspq_event);
+bfa_fsm_state_decl(rspq, dbell_wait, struct bfa_msgq_rspq,
+			enum rspq_event);
+
+static void
+rspq_sm_stopped_entry(struct bfa_msgq_rspq *rspq)
+{
+	rspq->producer_index = 0;
+	rspq->consumer_index = 0;
+	rspq->flags = 0;
+}
+
+static void
+rspq_sm_stopped(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_START:
+		bfa_fsm_set_state(rspq, rspq_sm_init_wait);
+		break;
+
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_init_wait_entry(struct bfa_msgq_rspq *rspq)
+{
+	bfa_wc_down(&rspq->msgq->init_wc);
+}
+
+static void
+rspq_sm_init_wait(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_FAIL:
+	case RSPQ_E_STOP:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_INIT_RESP:
+		bfa_fsm_set_state(rspq, rspq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_ready_entry(struct bfa_msgq_rspq *rspq)
+{
+}
+
+static void
+rspq_sm_ready(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_RESP:
+		bfa_fsm_set_state(rspq, rspq_sm_dbell_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_dbell_wait_entry(struct bfa_msgq_rspq *rspq)
+{
+	if (!bfa_nw_ioc_is_disabled(rspq->msgq->ioc))
+		bfa_msgq_rspq_dbell(rspq);
+}
+
+static void
+rspq_sm_dbell_wait(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_RESP:
+		rspq->flags |= BFA_MSGQ_RSPQ_F_DB_UPDATE;
+		break;
+
+	case RSPQ_E_DB_READY:
+		if (rspq->flags & BFA_MSGQ_RSPQ_F_DB_UPDATE) {
+			rspq->flags &= ~BFA_MSGQ_RSPQ_F_DB_UPDATE;
+			bfa_fsm_set_state(rspq, rspq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(rspq, rspq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bfa_msgq_rspq_dbell_ready(void *arg)
+{
+	struct bfa_msgq_rspq *rspq = (struct bfa_msgq_rspq *)arg;
+	bfa_fsm_send_event(rspq, RSPQ_E_DB_READY);
+}
+
+static void
+bfa_msgq_rspq_dbell(struct bfa_msgq_rspq *rspq)
+{
+	struct bfi_msgq_h2i_db *dbell =
+		(struct bfi_msgq_h2i_db *)(&rspq->dbell_mb.msg[0]);
+
+	memset(dbell, 0, sizeof(struct bfi_msgq_h2i_db));
+	bfi_h2i_set(dbell->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_DOORBELL_CI, 0);
+	dbell->mh.mtag.i2htok = 0;
+	dbell->idx.rspq_ci = htons(rspq->consumer_index);
+
+	if (!bfa_nw_ioc_mbox_queue(rspq->msgq->ioc, &rspq->dbell_mb,
+				bfa_msgq_rspq_dbell_ready, rspq)) {
+		bfa_msgq_rspq_dbell_ready(rspq);
+	}
+}
+
+static void
+bfa_msgq_rspq_pi_update(struct bfa_msgq_rspq *rspq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_db *dbell = (struct bfi_msgq_i2h_db *)mb;
+	struct bfi_msgq_mhdr *msghdr;
+	int num_entries;
+	int mc;
+	u8 *rspq_qe;
+
+	rspq->producer_index = ntohs(dbell->idx.rspq_pi);
+
+	while (rspq->consumer_index != rspq->producer_index) {
+		rspq_qe = (u8 *)rspq->addr.kva;
+		rspq_qe += (rspq->consumer_index * BFI_MSGQ_RSP_ENTRY_SIZE);
+		msghdr = (struct bfi_msgq_mhdr *)rspq_qe;
+
+		mc = msghdr->msg_class;
+		num_entries = ntohs(msghdr->num_entries);
+
+		if ((mc > BFI_MC_MAX) || (rspq->rsphdlr[mc].cbfn == NULL))
+			break;
+
+		(rspq->rsphdlr[mc].cbfn)(rspq->rsphdlr[mc].cbarg, msghdr);
+
+		BFA_MSGQ_INDX_ADD(rspq->consumer_index, num_entries,
+				rspq->depth);
+	}
+
+	bfa_fsm_send_event(rspq, RSPQ_E_RESP);
+}
+
+static void
+bfa_msgq_rspq_attach(struct bfa_msgq_rspq *rspq, struct bfa_msgq *msgq)
+{
+	rspq->depth = BFA_MSGQ_RSPQ_NUM_ENTRY;
+	rspq->msgq = msgq;
+	bfa_fsm_set_state(rspq, rspq_sm_stopped);
+}
+
+static void
+bfa_msgq_init_rsp(struct bfa_msgq *msgq,
+		 struct bfi_mbmsg *mb)
+{
+	bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_INIT_RESP);
+	bfa_fsm_send_event(&msgq->rspq, RSPQ_E_INIT_RESP);
+}
+
+static void
+bfa_msgq_init(void *arg)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)arg;
+	struct bfi_msgq_cfg_req *msgq_cfg =
+		(struct bfi_msgq_cfg_req *)&msgq->init_mb.msg[0];
+
+	memset(msgq_cfg, 0, sizeof(struct bfi_msgq_cfg_req));
+	bfi_h2i_set(msgq_cfg->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_INIT_REQ, 0);
+	msgq_cfg->mh.mtag.i2htok = 0;
+
+	bfa_dma_be_addr_set(msgq_cfg->cmdq.addr, msgq->cmdq.addr.pa);
+	msgq_cfg->cmdq.q_depth = htons(msgq->cmdq.depth);
+	bfa_dma_be_addr_set(msgq_cfg->rspq.addr, msgq->rspq.addr.pa);
+	msgq_cfg->rspq.q_depth = htons(msgq->rspq.depth);
+
+	bfa_nw_ioc_mbox_queue(msgq->ioc, &msgq->init_mb, NULL, NULL);
+}
+
+static void
+bfa_msgq_isr(void *cbarg, struct bfi_mbmsg *msg)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)cbarg;
+
+	switch (msg->mh.msg_id) {
+	case BFI_MSGQ_I2H_INIT_RSP:
+		bfa_msgq_init_rsp(msgq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_DOORBELL_PI:
+		bfa_msgq_rspq_pi_update(&msgq->rspq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_DOORBELL_CI:
+		bfa_msgq_cmdq_ci_update(&msgq->cmdq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_CMDQ_COPY_REQ:
+		bfa_msgq_cmdq_copy_req(&msgq->cmdq, msg);
+		break;
+
+	default:
+		BUG_ON(1);
+	}
+}
+
+static void
+bfa_msgq_notify(void *cbarg, enum bfa_ioc_event event)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)cbarg;
+
+	switch (event) {
+	case BFA_IOC_E_ENABLED:
+		bfa_wc_init(&msgq->init_wc, bfa_msgq_init, msgq);
+		bfa_wc_up(&msgq->init_wc);
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_START);
+		bfa_wc_up(&msgq->init_wc);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_START);
+		bfa_wc_wait(&msgq->init_wc);
+		break;
+
+	case BFA_IOC_E_DISABLED:
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_STOP);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_STOP);
+		break;
+
+	case BFA_IOC_E_FAILED:
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_FAIL);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_FAIL);
+		break;
+
+	default:
+		break;
+	}
+}
+
+u32
+bfa_msgq_meminfo(void)
+{
+	return roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ) +
+		roundup(BFA_MSGQ_RSPQ_SIZE, BFA_DMA_ALIGN_SZ);
+}
+
+void
+bfa_msgq_memclaim(struct bfa_msgq *msgq, u8 *kva, u64 pa)
+{
+	msgq->cmdq.addr.kva = kva;
+	msgq->cmdq.addr.pa  = pa;
+
+	kva += roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ);
+	pa += roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ);
+
+	msgq->rspq.addr.kva = kva;
+	msgq->rspq.addr.pa = pa;
+}
+
+void
+bfa_msgq_attach(struct bfa_msgq *msgq, struct bfa_ioc *ioc)
+{
+	msgq->ioc    = ioc;
+
+	bfa_msgq_cmdq_attach(&msgq->cmdq, msgq);
+	bfa_msgq_rspq_attach(&msgq->rspq, msgq);
+
+	bfa_nw_ioc_mbox_regisr(msgq->ioc, BFI_MC_MSGQ, bfa_msgq_isr, msgq);
+	bfa_q_qe_init(&msgq->ioc_notify);
+	bfa_ioc_notify_init(&msgq->ioc_notify, bfa_msgq_notify, msgq);
+	bfa_nw_ioc_notify_register(msgq->ioc, &msgq->ioc_notify);
+}
+
+void
+bfa_msgq_regisr(struct bfa_msgq *msgq, enum bfi_mclass mc,
+		bfa_msgq_mcfunc_t cbfn, void *cbarg)
+{
+	msgq->rspq.rsphdlr[mc].cbfn	= cbfn;
+	msgq->rspq.rsphdlr[mc].cbarg	= cbarg;
+}
+
+void
+bfa_msgq_cmd_post(struct bfa_msgq *msgq,  struct bfa_msgq_cmd_entry *cmd)
+{
+	if (ntohs(cmd->msg_hdr->num_entries) <=
+		BFA_MSGQ_FREE_CNT(&msgq->cmdq)) {
+		__cmd_copy(&msgq->cmdq, cmd);
+		call_cmdq_ent_cbfn(cmd, BFA_STATUS_OK);
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_POST);
+	} else {
+		list_add_tail(&cmd->qe, &msgq->cmdq.pending_q);
+	}
+}
+
+void
+bfa_msgq_rsp_copy(struct bfa_msgq *msgq, u8 *buf, size_t buf_len)
+{
+	struct bfa_msgq_rspq *rspq = &msgq->rspq;
+	size_t len = buf_len;
+	size_t to_copy;
+	int ci;
+	u8 *src, *dst;
+
+	ci = rspq->consumer_index;
+	src = (u8 *)rspq->addr.kva;
+	src += (ci * BFI_MSGQ_RSP_ENTRY_SIZE);
+	dst = buf;
+
+	while (len) {
+		to_copy = (len < BFI_MSGQ_RSP_ENTRY_SIZE) ?
+				len : BFI_MSGQ_RSP_ENTRY_SIZE;
+		memcpy(dst, src, to_copy);
+		len -= to_copy;
+		dst += BFI_MSGQ_RSP_ENTRY_SIZE;
+		BFA_MSGQ_INDX_ADD(ci, 1, rspq->depth);
+		src = (u8 *)rspq->addr.kva;
+		src += (ci * BFI_MSGQ_RSP_ENTRY_SIZE);
+	}
+}
diff --git a/drivers/net/bna/bfa_msgq.h b/drivers/net/bna/bfa_msgq.h
new file mode 100644
index 0000000..a6a565a
--- /dev/null
+++ b/drivers/net/bna/bfa_msgq.h
@@ -0,0 +1,130 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+#ifndef __BFA_MSGQ_H__
+#define __BFA_MSGQ_H__
+
+#include "bfa_defs.h"
+#include "bfi.h"
+#include "bfa_ioc.h"
+#include "bfa_cs.h"
+
+#define BFA_MSGQ_FREE_CNT(_q)						\
+	(((_q)->consumer_index - (_q)->producer_index - 1) & ((_q)->depth - 1))
+
+#define BFA_MSGQ_INDX_ADD(_q_indx, _qe_num, _q_depth)			\
+	((_q_indx) = (((_q_indx) + (_qe_num)) & ((_q_depth) - 1)))
+
+#define BFA_MSGQ_CMDQ_NUM_ENTRY		128
+#define BFA_MSGQ_CMDQ_SIZE						\
+	(BFI_MSGQ_CMD_ENTRY_SIZE * BFA_MSGQ_CMDQ_NUM_ENTRY)
+
+#define BFA_MSGQ_RSPQ_NUM_ENTRY		128
+#define BFA_MSGQ_RSPQ_SIZE						\
+	(BFI_MSGQ_RSP_ENTRY_SIZE * BFA_MSGQ_RSPQ_NUM_ENTRY)
+
+#define bfa_msgq_cmd_set(_cmd, _cbfn, _cbarg, _msg_size, _msg_hdr)	\
+do {									\
+	(_cmd)->cbfn = (_cbfn);						\
+	(_cmd)->cbarg = (_cbarg);					\
+	(_cmd)->msg_size = (_msg_size);					\
+	(_cmd)->msg_hdr = (_msg_hdr);					\
+} while (0)
+
+struct bfa_msgq;
+
+typedef void (*bfa_msgq_cmdcbfn_t)(void *cbarg, enum bfa_status status);
+
+struct bfa_msgq_cmd_entry {
+	struct list_head				qe;
+	bfa_msgq_cmdcbfn_t		cbfn;
+	void				*cbarg;
+	size_t				msg_size;
+	struct bfi_msgq_mhdr *msg_hdr;
+};
+
+enum bfa_msgq_cmdq_flags {
+	BFA_MSGQ_CMDQ_F_DB_UPDATE	= 1,
+};
+
+struct bfa_msgq_cmdq {
+	bfa_fsm_t			fsm;
+	enum bfa_msgq_cmdq_flags flags;
+
+	u16			producer_index;
+	u16			consumer_index;
+	u16			depth; /* FW Q depth is 16 bits */
+	struct bfa_dma addr;
+	struct bfa_mbox_cmd dbell_mb;
+
+	u16			token;
+	int				offset;
+	int				bytes_to_copy;
+	struct bfa_mbox_cmd copy_mb;
+
+	struct list_head		pending_q; /* pending command queue */
+
+	struct bfa_msgq *msgq;
+};
+
+enum bfa_msgq_rspq_flags {
+	BFA_MSGQ_RSPQ_F_DB_UPDATE	= 1,
+};
+
+typedef void (*bfa_msgq_mcfunc_t)(void *cbarg, struct bfi_msgq_mhdr *mhdr);
+
+struct bfa_msgq_rspq {
+	bfa_fsm_t			fsm;
+	enum bfa_msgq_rspq_flags flags;
+
+	u16			producer_index;
+	u16			consumer_index;
+	u16			depth; /* FW Q depth is 16 bits */
+	struct bfa_dma addr;
+	struct bfa_mbox_cmd dbell_mb;
+
+	int				nmclass;
+	struct {
+		bfa_msgq_mcfunc_t	cbfn;
+		void			*cbarg;
+	} rsphdlr[BFI_MC_MAX];
+
+	struct bfa_msgq *msgq;
+};
+
+struct bfa_msgq {
+	struct bfa_msgq_cmdq cmdq;
+	struct bfa_msgq_rspq rspq;
+
+	struct bfa_wc			init_wc;
+	struct bfa_mbox_cmd init_mb;
+
+	struct bfa_ioc_notify ioc_notify;
+	struct bfa_ioc *ioc;
+};
+
+u32 bfa_msgq_meminfo(void);
+void bfa_msgq_memclaim(struct bfa_msgq *msgq, u8 *kva, u64 pa);
+void bfa_msgq_attach(struct bfa_msgq *msgq, struct bfa_ioc *ioc);
+void bfa_msgq_regisr(struct bfa_msgq *msgq, enum bfi_mclass mc,
+		     bfa_msgq_mcfunc_t cbfn, void *cbarg);
+void bfa_msgq_cmd_post(struct bfa_msgq *msgq,
+		       struct bfa_msgq_cmd_entry *cmd);
+void bfa_msgq_rsp_copy(struct bfa_msgq *msgq, u8 *buf, size_t buf_len);
+
+#endif
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 088211c..6a53183 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -192,6 +192,8 @@ enum bfi_mclass {
 
 #define BFI_BOOT_LOADER_OS		0
 
+#define BFI_FWBOOT_ENV_OS		0
+
 #define BFI_BOOT_MEMTEST_RES_ADDR   0x900
 #define BFI_BOOT_MEMTEST_RES_SIG    0xA0A1A2A3
 
@@ -395,6 +397,105 @@ union bfi_ioc_i2h_msg_u {
 	u32			mboxmsg[BFI_IOC_MSGSZ];
 };
 
+/**
+ *----------------------------------------------------------------------
+ *				MSGQ
+ *----------------------------------------------------------------------
+ */
+
+enum bfi_msgq_h2i_msgs {
+	BFI_MSGQ_H2I_INIT_REQ	   = 1,
+	BFI_MSGQ_H2I_DOORBELL_PI	= 2,
+	BFI_MSGQ_H2I_DOORBELL_CI	= 3,
+	BFI_MSGQ_H2I_CMDQ_COPY_RSP      = 4,
+};
+
+enum bfi_msgq_i2h_msgs {
+	BFI_MSGQ_I2H_INIT_RSP	   = BFA_I2HM(BFI_MSGQ_H2I_INIT_REQ),
+	BFI_MSGQ_I2H_DOORBELL_PI	= BFA_I2HM(BFI_MSGQ_H2I_DOORBELL_PI),
+	BFI_MSGQ_I2H_DOORBELL_CI	= BFA_I2HM(BFI_MSGQ_H2I_DOORBELL_CI),
+	BFI_MSGQ_I2H_CMDQ_COPY_REQ      = BFA_I2HM(BFI_MSGQ_H2I_CMDQ_COPY_RSP),
+};
+
+/* Messages (commands/responses/AENs) will have the following header */
+struct bfi_msgq_mhdr {
+	u8	msg_class;
+	u8	msg_id;
+	u16	msg_token;
+	u16	num_entries;
+	u8	enet_id;
+	u8	rsvd[1];
+};
+
+#define bfi_msgq_mhdr_set(_mh, _mc, _mid, _tok, _enet_id) do {	\
+	(_mh).msg_class	 = (_mc);	\
+	(_mh).msg_id	    = (_mid);       \
+	(_mh).msg_token	 = (_tok);       \
+	(_mh).enet_id	   = (_enet_id);   \
+} while (0)
+
+/*
+ * Mailbox  for messaging interface
+ */
+#define BFI_MSGQ_CMD_ENTRY_SIZE	 (64)    /* TBD */
+#define BFI_MSGQ_RSP_ENTRY_SIZE	 (64)    /* TBD */
+
+#define bfi_msgq_num_cmd_entries(_size)				 \
+	(((_size) + BFI_MSGQ_CMD_ENTRY_SIZE - 1) / BFI_MSGQ_CMD_ENTRY_SIZE)
+
+struct bfi_msgq {
+	union bfi_addr_u addr;
+	u16 q_depth;     /* Total num of entries in the queue */
+	u8 rsvd[2];
+};
+
+/* BFI_ENET_MSGQ_CFG_REQ TBD init or cfg? */
+struct bfi_msgq_cfg_req {
+	struct bfi_mhdr mh;
+	struct bfi_msgq cmdq;
+	struct bfi_msgq rspq;
+};
+
+/* BFI_ENET_MSGQ_CFG_RSP */
+struct bfi_msgq_cfg_rsp {
+	struct bfi_mhdr mh;
+	u8 cmd_status;
+	u8 rsvd[3];
+};
+
+/* BFI_MSGQ_H2I_DOORBELL */
+struct bfi_msgq_h2i_db {
+	struct bfi_mhdr mh;
+	union {
+		u16 cmdq_pi;
+		u16 rspq_ci;
+	} idx;
+};
+
+/* BFI_MSGQ_I2H_DOORBELL */
+struct bfi_msgq_i2h_db {
+	struct bfi_mhdr mh;
+	union {
+		u16 rspq_pi;
+		u16 cmdq_ci;
+	} idx;
+};
+
+#define BFI_CMD_COPY_SZ 28
+
+/* BFI_MSGQ_H2I_CMD_COPY_RSP */
+struct bfi_msgq_h2i_cmdq_copy_rsp {
+	struct bfi_mhdr mh;
+	u8	      data[BFI_CMD_COPY_SZ];
+};
+
+/* BFI_MSGQ_I2H_CMD_COPY_REQ */
+struct bfi_msgq_i2h_cmdq_copy_req {
+	struct bfi_mhdr mh;
+	u16     offset;
+	u16     len;
+};
+
 #pragma pack()
 
 #endif /* __BFI_H__ */
diff --git a/drivers/net/bna/bna_ctrl.c b/drivers/net/bna/bna_ctrl.c
index cb2594c..7d95517 100644
--- a/drivers/net/bna/bna_ctrl.c
+++ b/drivers/net/bna/bna_ctrl.c
@@ -183,7 +183,8 @@ bna_ll_isr(void *llarg, struct bfi_mbmsg *msg)
 			if (to_post) {
 				mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
 				bfa_nw_ioc_mbox_queue(&bna->device.ioc,
-							&mb_qe->cmd);
+							&mb_qe->cmd, NULL,
+							NULL);
 			}
 		} else {
 			snprintf(message, BNA_MESSAGE_SIZE,
@@ -234,7 +235,8 @@ bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe)
 	bna->mbox_mod.msg_pending++;
 	if (bna->mbox_mod.state == BNA_MBOX_FREE) {
 		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd);
+		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd,
+					NULL, NULL);
 		bna->mbox_mod.state = BNA_MBOX_POSTED;
 	} else {
 		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 5/8] bna: Remove Unnecessary CNA Check
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
                   ` (3 preceding siblings ...)
  2011-07-27  2:10 ` [PATCH 4/8] bna: MSGQ Implementation Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 6/8] bna: HW Interface Init Update Rasesh Mody
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - ioc->cna is always set to 1 for Ethernet functions, so remove the check
   that asserts the IOC is in CNA mode from bfa_ioc_ct_firmware_lock() and
   bfa_ioc_ct_firmware_unlock() in bfa_ioc_ct.c.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfa_ioc_ct.c |   12 ------------
 1 files changed, 0 insertions(+), 12 deletions(-)

diff --git a/drivers/net/bna/bfa_ioc_ct.c b/drivers/net/bna/bfa_ioc_ct.c
index 75ecf7a..4de1ad8 100644
--- a/drivers/net/bna/bfa_ioc_ct.c
+++ b/drivers/net/bna/bfa_ioc_ct.c
@@ -84,12 +84,6 @@ bfa_ioc_ct_firmware_lock(struct bfa_ioc *ioc)
 	struct bfi_ioc_image_hdr fwhdr;
 
 	/**
-	 * Firmware match check is relevant only for CNA.
-	 */
-	if (!ioc->cna)
-		return true;
-
-	/**
 	 * If bios boot (flash based) -- do not increment usage count
 	 */
 	if (bfa_cb_image_get_size(BFA_IOC_FWIMG_TYPE(ioc)) <
@@ -140,12 +134,6 @@ bfa_ioc_ct_firmware_unlock(struct bfa_ioc *ioc)
 	u32 usecnt;
 
 	/**
-	 * Firmware lock is relevant only for CNA.
-	 */
-	if (!ioc->cna)
-		return;
-
-	/**
 	 * If bios boot (flash based) -- do not decrement usage count
 	 */
 	if (bfa_cb_image_get_size(BFA_IOC_FWIMG_TYPE(ioc)) <
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 6/8] bna: HW Interface Init Update
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
                   ` (4 preceding siblings ...)
  2011-07-27  2:10 ` [PATCH 5/8] bna: Remove Unnecessary CNA Check Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 7/8] bna: Introduce ENET as New Driver and FW Interface Rasesh Mody
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - Split the HW interface into common and ASIC-specific parts, to make it
   easier to support new ASICs in the future; a sketch of how a future ASIC
   would hook in is included after this list.
 - Fix bfa_ioc_ct_isr_mode_set() to also cover the case where we are already
   in the desired MSI-X mode.
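 - The sketch below shows how a hypothetical future ASIC would hook into the
   split interface; the "ct2" names are invented for illustration and are not
   part of this patch. The ASIC-specific attach routine fills in only the
   ASIC-specific hooks after the common setter has run.

	static struct bfa_ioc_hwif nw_hwif_ct2;	/* hypothetical new ASIC */

	void
	bfa_nw_ioc_set_ct2_hwif(struct bfa_ioc *ioc)
	{
		/* callbacks shared by all ASICs */
		bfa_ioc_set_ctx_hwif(ioc, &nw_hwif_ct2);

		/* only the ASIC-specific hooks remain to be filled in */
		nw_hwif_ct2.ioc_pll_init = bfa_ioc_ct2_pll_init;
		nw_hwif_ct2.ioc_reg_init = bfa_ioc_ct2_reg_init;
		nw_hwif_ct2.ioc_map_port = bfa_ioc_ct2_map_port;
		nw_hwif_ct2.ioc_isr_mode_set = bfa_ioc_ct2_isr_mode_set;

		ioc->ioc_hwif = &nw_hwif_ct2;
	}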

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfa_ioc_ct.c |   28 +++++++++++++++++-----------
 1 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bna/bfa_ioc_ct.c b/drivers/net/bna/bfa_ioc_ct.c
index 4de1ad8..209f1f3 100644
--- a/drivers/net/bna/bfa_ioc_ct.c
+++ b/drivers/net/bna/bfa_ioc_ct.c
@@ -50,26 +50,32 @@ static enum bfa_status bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode);
 
 static struct bfa_ioc_hwif nw_hwif_ct;
 
+static void
+bfa_ioc_set_ctx_hwif(struct bfa_ioc *ioc, struct bfa_ioc_hwif *hwif)
+{
+	hwif->ioc_firmware_lock = bfa_ioc_ct_firmware_lock;
+	hwif->ioc_firmware_unlock = bfa_ioc_ct_firmware_unlock;
+	hwif->ioc_notify_fail = bfa_ioc_ct_notify_fail;
+	hwif->ioc_ownership_reset = bfa_ioc_ct_ownership_reset;
+	hwif->ioc_sync_start = bfa_ioc_ct_sync_start;
+	hwif->ioc_sync_join = bfa_ioc_ct_sync_join;
+	hwif->ioc_sync_leave = bfa_ioc_ct_sync_leave;
+	hwif->ioc_sync_ack = bfa_ioc_ct_sync_ack;
+	hwif->ioc_sync_complete = bfa_ioc_ct_sync_complete;
+}
+
 /**
  * Called from bfa_ioc_attach() to map asic specific calls.
  */
 void
 bfa_nw_ioc_set_ct_hwif(struct bfa_ioc *ioc)
 {
+	bfa_ioc_set_ctx_hwif(ioc, &nw_hwif_ct);
+
 	nw_hwif_ct.ioc_pll_init = bfa_ioc_ct_pll_init;
-	nw_hwif_ct.ioc_firmware_lock = bfa_ioc_ct_firmware_lock;
-	nw_hwif_ct.ioc_firmware_unlock = bfa_ioc_ct_firmware_unlock;
 	nw_hwif_ct.ioc_reg_init = bfa_ioc_ct_reg_init;
 	nw_hwif_ct.ioc_map_port = bfa_ioc_ct_map_port;
 	nw_hwif_ct.ioc_isr_mode_set = bfa_ioc_ct_isr_mode_set;
-	nw_hwif_ct.ioc_notify_fail = bfa_ioc_ct_notify_fail;
-	nw_hwif_ct.ioc_ownership_reset = bfa_ioc_ct_ownership_reset;
-	nw_hwif_ct.ioc_sync_start = bfa_ioc_ct_sync_start;
-	nw_hwif_ct.ioc_sync_join = bfa_ioc_ct_sync_join;
-	nw_hwif_ct.ioc_sync_leave = bfa_ioc_ct_sync_leave;
-	nw_hwif_ct.ioc_sync_ack = bfa_ioc_ct_sync_ack;
-	nw_hwif_ct.ioc_sync_complete = bfa_ioc_ct_sync_complete;
-
 	ioc->ioc_hwif = &nw_hwif_ct;
 }
 
@@ -297,7 +303,7 @@ bfa_ioc_ct_isr_mode_set(struct bfa_ioc *ioc, bool msix)
 	/**
 	 * If already in desired mode, do not change anything
 	 */
-	if (!msix && mode)
+	if ((!msix && mode) || (msix && !mode))
 		return;
 
 	if (msix)
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 7/8] bna: Introduce ENET as New Driver and FW Interface
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
                   ` (5 preceding siblings ...)
  2011-07-27  2:10 ` [PATCH 6/8] bna: HW Interface Init Update Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-27  2:10 ` [PATCH 8/8] bna: Tx and Rx Redesign Rasesh Mody
  2011-07-28  5:32 ` [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture David Miller
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - This patch contains the message opcodes and structure formats for the
   commands and responses exchanged between the driver and the FW. In
   addition, it contains the state machine implementation for the Ethport,
   Enet and IOCEth objects (see the FSM sketch after this list).
 - The Ethport object is responsible for receiving link state events and
   sending port enable/disable commands to the FW.
 - The Enet object is responsible for synchronizing initialization/teardown
   of the tx & rx datapath configuration.
 - The IOCEth object is responsible for init/un-init of the IO Controller in
   the adapter, which runs the FW.
 - This patch also contains code for initialization and resource assignment
   for the Ethport, Enet, IOCEth, Tx and Rx objects.
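 - FSM sketch: the new objects use the same bfa_fsm_*() pattern already used
   by the MSGQ code in patch 4. The state, event and structure names below
   are placeholders chosen for illustration and are not the exact ones added
   by this patch; the sketch assumes the bfa_cs.h FSM helpers and a
   struct bna_ethport that embeds an fsm member.

	enum ethport_event {
		ETHPORT_E_START	= 1,	/* admin up request */
		ETHPORT_E_STOP	= 2,	/* admin down request */
	};

	bfa_fsm_state_decl(ethport, stopped, struct bna_ethport,
				enum ethport_event);

	static void
	ethport_sm_stopped_entry(struct bna_ethport *ethport)
	{
		/* actions taken whenever the stopped state is entered */
	}

	static void
	ethport_sm_stopped(struct bna_ethport *ethport, enum ethport_event event)
	{
		switch (event) {
		case ETHPORT_E_START:
			/* e.g. post BFI_ENET_H2I_PORT_ADMIN_UP_REQ through the
			 * MSGQ and bfa_fsm_set_state() to a response-wait state
			 */
			break;

		case ETHPORT_E_STOP:
			/* No-op */
			break;

		default:
			bfa_sm_fault(event);
		}
	}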

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfi_enet.h |  902 ++++++++++++++++++
 drivers/net/bna/bna_enet.c | 2199 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3101 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/bna/bfi_enet.h
 create mode 100644 drivers/net/bna/bna_enet.c

diff --git a/drivers/net/bna/bfi_enet.h b/drivers/net/bna/bfi_enet.h
new file mode 100644
index 0000000..17b34ab
--- /dev/null
+++ b/drivers/net/bna/bfi_enet.h
@@ -0,0 +1,902 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfi_enet.h BNA Hardware and Firmware Interface
+ */
+
+/**
+ * Skipping statistics collection to avoid clutter.
+ * Commands that are no longer needed:
+ *	MTU
+ *	TxQ Stop
+ *	RxQ Stop
+ *	RxF Enable/Disable
+ *
+ * HDS-off request is dynamic.
+ * Keep structures as multiples of 32-bit fields for alignment.
+ * All values must be written in big-endian.
+ */
+#ifndef __BFI_ENET_H__
+#define __BFI_ENET_H__
+
+#include "bfa_defs.h"
+#include "bfi.h"
+
+#pragma pack(1)
+
+#define BFI_ENET_CFG_MAX		32	/* Max resources per PF */
+
+#define BFI_ENET_TXQ_PRIO_MAX		8
+#define BFI_ENET_RX_QSET_MAX		16
+#define BFI_ENET_TXQ_WI_VECT_MAX	4
+
+#define BFI_ENET_VLAN_ID_MAX		4096
+#define BFI_ENET_VLAN_BLOCK_SIZE	512	/* in bits */
+#define BFI_ENET_VLAN_BLOCKS_MAX					\
+	(BFI_ENET_VLAN_ID_MAX / BFI_ENET_VLAN_BLOCK_SIZE)
+#define BFI_ENET_VLAN_WORD_SIZE		32	/* in bits */
+#define BFI_ENET_VLAN_WORDS_MAX						\
+	(BFI_ENET_VLAN_BLOCK_SIZE / BFI_ENET_VLAN_WORD_SIZE)
+
+#define BFI_ENET_RSS_RIT_MAX		64	/* entries */
+#define BFI_ENET_RSS_KEY_LEN		10	/* 32-bit words */
+
+union bfi_addr_be_u {
+	struct {
+		u32	addr_hi;	/* Most Significant 32-bits */
+		u32	addr_lo;	/* Least Significant 32-Bits */
+	} a32;
+};
+
+/**
+ *	T X   Q U E U E   D E F I N E S
+ */
+/* TxQ Vector (a.k.a. Tx-Buffer Descriptor) */
+/* TxQ Entry Opcodes */
+#define BFI_ENET_TXQ_WI_SEND		(0x402)	/* Single Frame Transmission */
+#define BFI_ENET_TXQ_WI_SEND_LSO	(0x403)	/* Multi-Frame Transmission */
+#define BFI_ENET_TXQ_WI_EXTENSION	(0x104)	/* Extension WI */
+
+/* TxQ Entry Control Flags */
+#define BFI_ENET_TXQ_WI_CF_FCOE_CRC	(1 << 8)
+#define BFI_ENET_TXQ_WI_CF_IPID_MODE	(1 << 5)
+#define BFI_ENET_TXQ_WI_CF_INS_PRIO	(1 << 4)
+#define BFI_ENET_TXQ_WI_CF_INS_VLAN	(1 << 3)
+#define BFI_ENET_TXQ_WI_CF_UDP_CKSUM	(1 << 2)
+#define BFI_ENET_TXQ_WI_CF_TCP_CKSUM	(1 << 1)
+#define BFI_ENET_TXQ_WI_CF_IP_CKSUM	(1 << 0)
+
+struct bfi_enet_txq_wi_base {
+	u8			reserved;
+	u8			num_vectors;	/* number of vectors present */
+	u16			opcode;
+			/* BFI_ENET_TXQ_WI_SEND or BFI_ENET_TXQ_WI_SEND_LSO */
+	u16			flags;		/* OR of all the flags */
+	u16			l4_hdr_size_n_offset;
+	u16			vlan_tag;
+	u16			lso_mss;	/* Only 14 LSB are valid */
+	u32			frame_length;	/* Only 24 LSB are valid */
+};
+
+struct bfi_enet_txq_wi_ext {
+	u16			reserved;
+	u16			opcode;		/* BFI_ENET_TXQ_WI_EXTENSION */
+	u32			reserved2[3];
+};
+
+struct bfi_enet_txq_wi_vector {			/* Tx Buffer Descriptor */
+	u16			reserved;
+	u16			length;		/* Only 14 LSB are valid */
+	union bfi_addr_be_u	addr;
+};
+
+/**
+ *  TxQ Entry Structure
+ *
+ */
+struct bfi_enet_txq_entry {
+	union {
+		struct bfi_enet_txq_wi_base	base;
+		struct bfi_enet_txq_wi_ext	ext;
+	} wi;
+	struct bfi_enet_txq_wi_vector vector[BFI_ENET_TXQ_WI_VECT_MAX];
+};
+
+#define wi_hdr		wi.base
+#define wi_ext_hdr	wi.ext
+
+#define BFI_ENET_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \
+		(((_hdr_size) << 10) | ((_offset) & 0x3FF))
+
+/**
+ *   R X   Q U E U E   D E F I N E S
+ */
+struct bfi_enet_rxq_entry {
+	union bfi_addr_be_u  rx_buffer;
+};
+
+/**
+ *   R X   C O M P L E T I O N   Q U E U E   D E F I N E S
+ */
+/* CQ Entry Flags */
+#define	BFI_ENET_CQ_EF_MAC_ERROR	(1 <<  0)
+#define	BFI_ENET_CQ_EF_FCS_ERROR	(1 <<  1)
+#define	BFI_ENET_CQ_EF_TOO_LONG		(1 <<  2)
+#define	BFI_ENET_CQ_EF_FC_CRC_OK	(1 <<  3)
+
+#define	BFI_ENET_CQ_EF_RSVD1		(1 <<  4)
+#define	BFI_ENET_CQ_EF_L4_CKSUM_OK	(1 <<  5)
+#define	BFI_ENET_CQ_EF_L3_CKSUM_OK	(1 <<  6)
+#define	BFI_ENET_CQ_EF_HDS_HEADER	(1 <<  7)
+
+#define	BFI_ENET_CQ_EF_UDP		(1 <<  8)
+#define	BFI_ENET_CQ_EF_TCP		(1 <<  9)
+#define	BFI_ENET_CQ_EF_IP_OPTIONS	(1 << 10)
+#define	BFI_ENET_CQ_EF_IPV6		(1 << 11)
+
+#define	BFI_ENET_CQ_EF_IPV4		(1 << 12)
+#define	BFI_ENET_CQ_EF_VLAN		(1 << 13)
+#define	BFI_ENET_CQ_EF_RSS		(1 << 14)
+#define	BFI_ENET_CQ_EF_RSVD2		(1 << 15)
+
+#define	BFI_ENET_CQ_EF_MCAST_MATCH	(1 << 16)
+#define	BFI_ENET_CQ_EF_MCAST		(1 << 17)
+#define BFI_ENET_CQ_EF_BCAST		(1 << 18)
+#define	BFI_ENET_CQ_EF_REMOTE		(1 << 19)
+
+#define	BFI_ENET_CQ_EF_LOCAL		(1 << 20)
+
+/* CQ Entry Structure */
+struct bfi_enet_cq_entry {
+	u32 flags;
+	u16	vlan_tag;
+	u16	length;
+	u32	rss_hash;
+	u8	valid;
+	u8	reserved1;
+	u8	reserved2;
+	u8	rxq_id;
+};
+
+/**
+ *   E N E T   C O N T R O L   P A T H   C O M M A N D S
+ */
+struct bfi_enet_q {
+	union bfi_addr_u	pg_tbl;
+	union bfi_addr_u	first_entry;
+	u16		pages;	/* # of pages */
+	u16		page_sz;
+};
+
+struct bfi_enet_txq {
+	struct bfi_enet_q	q;
+	u8			priority;
+	u8			rsvd[3];
+};
+
+struct bfi_enet_rxq {
+	struct bfi_enet_q	q;
+	u16		rx_buffer_size;
+	u16		rsvd;
+};
+
+struct bfi_enet_cq {
+	struct bfi_enet_q	q;
+};
+
+struct bfi_enet_ib_cfg {
+	u8		int_pkt_dma;
+	u8		int_enabled;
+	u8		int_pkt_enabled;
+	u8		continuous_coalescing;
+	u8		msix;
+	u8		rsvd[3];
+	u32	coalescing_timeout;
+	u32	inter_pkt_timeout;
+	u8		inter_pkt_count;
+	u8		rsvd1[3];
+};
+
+struct bfi_enet_ib {
+	union bfi_addr_u	index_addr;
+	union {
+		u16	msix_index;
+		u16	intx_bitmask;
+	} intr;
+	u16		rsvd;
+};
+
+/**
+ * ENET command messages
+ */
+enum bfi_enet_h2i_msgs {
+	/* Rx Commands */
+	BFI_ENET_H2I_RX_CFG_SET_REQ = 1,
+	BFI_ENET_H2I_RX_CFG_CLR_REQ = 2,
+
+	BFI_ENET_H2I_RIT_CFG_REQ = 3,
+	BFI_ENET_H2I_RSS_CFG_REQ = 4,
+	BFI_ENET_H2I_RSS_ENABLE_REQ = 5,
+	BFI_ENET_H2I_RX_PROMISCUOUS_REQ = 6,
+	BFI_ENET_H2I_RX_DEFAULT_REQ = 7,
+
+	BFI_ENET_H2I_MAC_UCAST_SET_REQ = 8,
+	BFI_ENET_H2I_MAC_UCAST_CLR_REQ = 9,
+	BFI_ENET_H2I_MAC_UCAST_ADD_REQ = 10,
+	BFI_ENET_H2I_MAC_UCAST_DEL_REQ = 11,
+
+	BFI_ENET_H2I_MAC_MCAST_ADD_REQ = 12,
+	BFI_ENET_H2I_MAC_MCAST_DEL_REQ = 13,
+	BFI_ENET_H2I_MAC_MCAST_FILTER_REQ = 14,
+
+	BFI_ENET_H2I_RX_VLAN_SET_REQ = 15,
+	BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ = 16,
+
+	/* Tx Commands */
+	BFI_ENET_H2I_TX_CFG_SET_REQ = 17,
+	BFI_ENET_H2I_TX_CFG_CLR_REQ = 18,
+
+	/* Port Commands */
+	BFI_ENET_H2I_PORT_ADMIN_UP_REQ = 19,
+	BFI_ENET_H2I_SET_PAUSE_REQ = 20,
+	BFI_ENET_H2I_DIAG_LOOPBACK_REQ = 21,
+
+	/* Get Attributes Command */
+	BFI_ENET_H2I_GET_ATTR_REQ = 22,
+
+	/*  Statistics Commands */
+	BFI_ENET_H2I_STATS_GET_REQ = 23,
+	BFI_ENET_H2I_STATS_CLR_REQ = 24,
+
+	BFI_ENET_H2I_WOL_MAGIC_REQ = 25,
+	BFI_ENET_H2I_WOL_FRAME_REQ = 26,
+
+	BFI_ENET_H2I_MAX = 27,
+};
+
+enum bfi_enet_i2h_msgs {
+	/* Rx Responses */
+	BFI_ENET_I2H_RX_CFG_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_CFG_SET_REQ),
+	BFI_ENET_I2H_RX_CFG_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_CFG_CLR_REQ),
+
+	BFI_ENET_I2H_RIT_CFG_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RIT_CFG_REQ),
+	BFI_ENET_I2H_RSS_CFG_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RSS_CFG_REQ),
+	BFI_ENET_I2H_RSS_ENABLE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RSS_ENABLE_REQ),
+	BFI_ENET_I2H_RX_PROMISCUOUS_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_PROMISCUOUS_REQ),
+	BFI_ENET_I2H_RX_DEFAULT_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_DEFAULT_REQ),
+
+	BFI_ENET_I2H_MAC_UCAST_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_SET_REQ),
+	BFI_ENET_I2H_MAC_UCAST_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_CLR_REQ),
+	BFI_ENET_I2H_MAC_UCAST_ADD_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_ADD_REQ),
+	BFI_ENET_I2H_MAC_UCAST_DEL_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_DEL_REQ),
+
+	BFI_ENET_I2H_MAC_MCAST_ADD_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_ADD_REQ),
+	BFI_ENET_I2H_MAC_MCAST_DEL_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_DEL_REQ),
+	BFI_ENET_I2H_MAC_MCAST_FILTER_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_FILTER_REQ),
+
+	BFI_ENET_I2H_RX_VLAN_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_VLAN_SET_REQ),
+
+	BFI_ENET_I2H_RX_VLAN_STRIP_ENABLE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ),
+
+	/* Tx Responses */
+	BFI_ENET_I2H_TX_CFG_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_TX_CFG_SET_REQ),
+	BFI_ENET_I2H_TX_CFG_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_TX_CFG_CLR_REQ),
+
+	/* Port Responses */
+	BFI_ENET_I2H_PORT_ADMIN_RSP =
+		BFA_I2HM(BFI_ENET_H2I_PORT_ADMIN_UP_REQ),
+
+	BFI_ENET_I2H_SET_PAUSE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_SET_PAUSE_REQ),
+	BFI_ENET_I2H_DIAG_LOOPBACK_RSP =
+		BFA_I2HM(BFI_ENET_H2I_DIAG_LOOPBACK_REQ),
+
+	/*  Attributes Response */
+	BFI_ENET_I2H_GET_ATTR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_GET_ATTR_REQ),
+
+	/* Statistics Responses */
+	BFI_ENET_I2H_STATS_GET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_STATS_GET_REQ),
+	BFI_ENET_I2H_STATS_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_STATS_CLR_REQ),
+
+	BFI_ENET_I2H_WOL_MAGIC_RSP =
+		BFA_I2HM(BFI_ENET_H2I_WOL_MAGIC_REQ),
+	BFI_ENET_I2H_WOL_FRAME_RSP =
+		BFA_I2HM(BFI_ENET_H2I_WOL_FRAME_REQ),
+
+	/* AENs */
+	BFI_ENET_I2H_LINK_DOWN_AEN = BFA_I2HM(BFI_ENET_H2I_MAX),
+	BFI_ENET_I2H_LINK_UP_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 1),
+
+	BFI_ENET_I2H_PORT_ENABLE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 2),
+	BFI_ENET_I2H_PORT_DISABLE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 3),
+
+	BFI_ENET_I2H_BW_UPDATE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 4),
+};
+
+/**
+ *  The following error codes can be returned by the enet commands
+ */
+enum bfi_enet_err {
+	BFI_ENET_CMD_OK		= 0,
+	BFI_ENET_CMD_FAIL	= 1,
+	BFI_ENET_CMD_DUP_ENTRY	= 2,	/* !< Duplicate entry in CAM */
+	BFI_ENET_CMD_CAM_FULL	= 3,	/* !< CAM is full */
+	BFI_ENET_CMD_NOT_OWNER	= 4,	/* !< Not permitted, because not owner */
+	BFI_ENET_CMD_NOT_EXEC	= 5,	/* !< Was not sent to f/w at all */
+	BFI_ENET_CMD_WAITING	= 6,	/* !< Waiting for completion */
+	BFI_ENET_CMD_PORT_DISABLED = 7,	/* !< port in disabled state */
+};
+
+/**
+ * Generic Request
+ *
+ * bfi_enet_req is used by:
+ *	BFI_ENET_H2I_RX_CFG_CLR_REQ
+ *	BFI_ENET_H2I_TX_CFG_CLR_REQ
+ */
+struct bfi_enet_req {
+	struct bfi_msgq_mhdr mh;
+};
+
+/**
+ * Enable/Disable Request
+ *
+ * bfi_enet_enable_req is used by:
+ *	BFI_ENET_H2I_RSS_ENABLE_REQ	(enet_id must be zero)
+ *	BFI_ENET_H2I_RX_PROMISCUOUS_REQ (enet_id must be zero)
+ *	BFI_ENET_H2I_RX_DEFAULT_REQ	(enet_id must be zero)
+ *	BFI_ENET_H2I_RX_MAC_MCAST_FILTER_REQ
+ *	BFI_ENET_H2I_PORT_ADMIN_UP_REQ	(enet_id must be zero)
+ */
+struct bfi_enet_enable_req {
+	struct		bfi_msgq_mhdr mh;
+	u8		enable;		/* 1 = enable;  0 = disable */
+	u8		rsvd[3];
+};
+
+/**
+ * Generic Response
+ */
+struct bfi_enet_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;		/*!< if error see cmd_offset */
+	u8		rsvd;
+	u16		cmd_offset;	/*!< offset to invalid parameter */
+};
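+
+/*
+ * Illustrative usage sketch (not part of the interface definition): a
+ * consumer normally trusts the response payload only when error is
+ * BFI_ENET_CMD_OK; on failure, cmd_offset locates the offending field in
+ * the original request.  See bna_bfi_ethport_admin_rsp() in bna_enet.c
+ * for a real consumer:
+ *
+ *	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+ *
+ *	if (rsp->error != BFI_ENET_CMD_OK)
+ *		return;
+ */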
+
+/**
+ * GLOBAL CONFIGURATION
+ */
+
+/**
+ * bfi_enet_attr_req is used by:
+ *	BFI_ENET_H2I_GET_ATTR_REQ
+ */
+struct bfi_enet_attr_req {
+	struct bfi_msgq_mhdr	mh;
+};
+
+/**
+ * bfi_enet_attr_rsp is used by:
+ *	BFI_ENET_I2H_GET_ATTR_RSP
+ */
+struct bfi_enet_attr_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;		/*!< if error see cmd_offset */
+	u8		rsvd;
+	u16		cmd_offset;	/*!< offset to invalid parameter */
+	u32		max_cfg;
+	u32		max_ucmac;
+	u32		rit_size;
+};
+
+/**
+ * Tx Configuration
+ *
+ * bfi_enet_tx_cfg is used by:
+ *	BFI_ENET_H2I_TX_CFG_SET_REQ
+ */
+enum bfi_enet_tx_vlan_mode {
+	BFI_ENET_TX_VLAN_NOP	= 0,
+	BFI_ENET_TX_VLAN_INS	= 1,
+	BFI_ENET_TX_VLAN_WI	= 2,
+};
+
+struct bfi_enet_tx_cfg {
+	u8		vlan_mode;	/*!< processing mode */
+	u8		rsvd;
+	u16		vlan_id;
+	u8		admit_tagged_frame;
+	u8		apply_vlan_filter;
+	u8		add_to_vswitch;
+	u8		rsvd1[1];
+};
+
+struct bfi_enet_tx_cfg_req {
+	struct bfi_msgq_mhdr mh;
+	u8			num_queues;	/* # of Tx Queues */
+	u8			rsvd[3];
+
+	struct {
+		struct bfi_enet_txq	q;
+		struct bfi_enet_ib	ib;
+	} q_cfg[BFI_ENET_TXQ_PRIO_MAX];
+
+	struct bfi_enet_ib_cfg	ib_cfg;
+
+	struct bfi_enet_tx_cfg	tx_cfg;
+};
+
+struct bfi_enet_tx_cfg_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;
+	u8		hw_id;		/* For debugging */
+	u8		rsvd[2];
+	struct {
+		u32	q_dbell;	/* PCI base address offset */
+		u32	i_dbell;	/* PCI base address offset */
+		u8	hw_qid;		/* For debugging */
+		u8	rsvd[3];
+	} q_handles[BFI_ENET_TXQ_PRIO_MAX];
+};
+
+/**
+ * Rx Configuration
+ *
+ * bfi_enet_rx_cfg is used by:
+ *	BFI_ENET_H2I_RX_CFG_SET_REQ
+ */
+enum bfi_enet_rxq_type {
+	BFI_ENET_RXQ_SINGLE		= 1,
+	BFI_ENET_RXQ_LARGE_SMALL	= 2,
+	BFI_ENET_RXQ_HDS		= 3,
+	BFI_ENET_RXQ_HDS_OPT_BASED	= 4,
+};
+
+enum bfi_enet_hds_type {
+	BFI_ENET_HDS_FORCED	= 0x01,
+	BFI_ENET_HDS_IPV6_UDP	= 0x02,
+	BFI_ENET_HDS_IPV6_TCP	= 0x04,
+	BFI_ENET_HDS_IPV4_TCP	= 0x08,
+	BFI_ENET_HDS_IPV4_UDP	= 0x10,
+};
+
+struct bfi_enet_rx_cfg {
+	u8		rxq_type;
+	u8		rsvd[3];
+
+	struct {
+		u8			max_header_size;
+		u8			force_offset;
+		u8			type;
+		u8			rsvd1;
+	} hds;
+
+	u8		multi_buffer;
+	u8		strip_vlan;
+	u8		drop_untagged;
+	u8		rsvd2;
+};
+
+/*
+ * Multicast frames are received on the ql (large/data RxQ) of q-set index
+ * zero.  On the completion queue, RxQ ID = even indicates a large/data
+ * buffer queue and RxQ ID = odd indicates a small/header buffer queue.
+ */
+struct bfi_enet_rx_cfg_req {
+	struct bfi_msgq_mhdr mh;
+	u8			num_queue_sets;	/* # of Rx Queue Sets */
+	u8			rsvd[3];
+
+	struct {
+		struct bfi_enet_rxq	ql;	/* large/data/single buffers */
+		struct bfi_enet_rxq	qs;	/* small/header buffers */
+		struct bfi_enet_cq	cq;
+		struct bfi_enet_ib	ib;
+	} q_cfg[BFI_ENET_RX_QSET_MAX];
+
+	struct bfi_enet_ib_cfg	ib_cfg;
+
+	struct bfi_enet_rx_cfg	rx_cfg;
+};
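+
+/*
+ * Illustrative numbering (an assumption based on the even/odd convention
+ * noted above, not mandated by this header): for q-set index i,
+ *
+ *	large/data RxQ ID   = 2 * i
+ *	small/header RxQ ID = 2 * i + 1
+ */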
+
+struct bfi_enet_rx_cfg_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;
+	u8		hw_id;	 /* For debugging */
+	u8		rsvd[2];
+	struct {
+		u32	ql_dbell; /* PCI base address offset */
+		u32	qs_dbell; /* PCI base address offset */
+		u32	i_dbell;  /* PCI base address offset */
+		u8		hw_lqid;  /* For debugging */
+		u8		hw_sqid;  /* For debugging */
+		u8		hw_cqid;  /* For debugging */
+		u8		rsvd;
+	} q_handles[BFI_ENET_RX_QSET_MAX];
+};
+
+/**
+ * RIT
+ *
+ * bfi_enet_rit_req is used by:
+ *	BFI_ENET_H2I_RIT_CFG_REQ
+ */
+struct bfi_enet_rit_req {
+	struct	bfi_msgq_mhdr mh;
+	u16	size;			/* number of table-entries used */
+	u8	rsvd[2];
+	u8	table[BFI_ENET_RSS_RIT_MAX];
+};
+
+/**
+ * RSS
+ *
+ * bfi_enet_rss_cfg_req is used by:
+ *	BFI_ENET_H2I_RSS_CFG_REQ
+ */
+enum bfi_enet_rss_type {
+	BFI_ENET_RSS_IPV6	= 0x01,
+	BFI_ENET_RSS_IPV6_TCP	= 0x02,
+	BFI_ENET_RSS_IPV4	= 0x04,
+	BFI_ENET_RSS_IPV4_TCP	= 0x08
+};
+
+struct bfi_enet_rss_cfg {
+	u8	type;
+	u8	mask;
+	u8	rsvd[2];
+	u32	key[BFI_ENET_RSS_KEY_LEN];
+};
+
+struct bfi_enet_rss_cfg_req {
+	struct bfi_msgq_mhdr	mh;
+	struct bfi_enet_rss_cfg	cfg;
+};
+
+/**
+ * MAC Unicast
+ *
+ * bfi_enet_ucast_req is used by:
+ *	BFI_ENET_H2I_MAC_UCAST_SET_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_CLR_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_ADD_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_DEL_REQ
+ */
+struct bfi_enet_ucast_req {
+	struct bfi_msgq_mhdr	mh;
+	mac_t			mac_addr;
+	u8			rsvd[2];
+};
+
+/**
+ * MAC Unicast + VLAN
+ */
+struct bfi_enet_mac_n_vlan_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			vlan_id;
+	mac_t			mac_addr;
+};
+
+/**
+ * MAC Multicast
+ *
+ * bfi_enet_mcast_add_req is used by:
+ *	BFI_ENET_H2I_MAC_MCAST_ADD_REQ
+ */
+struct bfi_enet_mcast_add_req {
+	struct bfi_msgq_mhdr	mh;
+	mac_t			mac_addr;
+	u8			rsvd[2];
+};
+
+/**
+ * bfi_enet_mcast_add_rsp is used by:
+ *	BFI_ENET_I2H_MAC_MCAST_ADD_RSP
+ */
+struct bfi_enet_mcast_add_rsp {
+	struct bfi_msgq_mhdr	mh;
+	u8			error;
+	u8			rsvd;
+	u16			cmd_offset;
+	u16			handle;
+	u8			rsvd1[2];
+};
+
+/**
+ * bfi_enet_mcast_del_req is used by:
+ *	BFI_ENET_H2I_MAC_MCAST_DEL_REQ
+ */
+struct bfi_enet_mcast_del_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			handle;
+	u8			rsvd[2];
+};
+
+/**
+ * VLAN
+ *
+ * bfi_enet_rx_vlan_req is used by:
+ *	BFI_ENET_H2I_RX_VLAN_SET_REQ
+ */
+struct bfi_enet_rx_vlan_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			block_idx;
+	u8			rsvd[3];
+	u32			bit_mask[BFI_ENET_VLAN_WORDS_MAX];
+};
+
+/**
+ * PAUSE
+ *
+ * bfi_enet_set_pause_req is used by:
+ *	BFI_ENET_H2I_SET_PAUSE_REQ
+ */
+struct bfi_enet_set_pause_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			rsvd[2];
+	u8			tx_pause;	/* 1 = enable;  0 = disable */
+	u8			rx_pause;	/* 1 = enable;  0 = disable */
+};
+
+/**
+ * DIAGNOSTICS
+ *
+ * bfi_enet_diag_lb_req is used by:
+ *	BFI_ENET_H2I_DIAG_LOOPBACK_REQ
+ */
+struct bfi_enet_diag_lb_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			rsvd[2];
+	u8			mode;		/* cable or Serdes */
+	u8			enable;		/* 1 = enable;  0 = disable */
+};
+
+/**
+ * enum for Loopback opmodes
+ */
+enum {
+	BFI_ENET_DIAG_LB_OPMODE_EXT = 0,
+	BFI_ENET_DIAG_LB_OPMODE_CBL = 1,
+};
+
+/**
+ * STATISTICS
+ *
+ * bfi_enet_stats_req is used by:
+ *	BFI_ENET_H2I_STATS_GET_REQ
+ *	BFI_ENET_H2I_STATS_CLR_REQ
+ */
+struct bfi_enet_stats_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			stats_mask;
+	u8			rsvd[2];
+	u32			rx_enet_mask;
+	u32			tx_enet_mask;
+	union bfi_addr_u	host_buffer;
+};
+
+/**
+ * defines for "stats_mask" above.
+ */
+#define BFI_ENET_STATS_MAC    (1 << 0)    /* !< MAC Statistics */
+#define BFI_ENET_STATS_BPC    (1 << 1)    /* !< Pause Stats from BPC */
+#define BFI_ENET_STATS_RAD    (1 << 2)    /* !< Rx Admission Statistics */
+#define BFI_ENET_STATS_RX_FC  (1 << 3)    /* !< Rx FC Stats from RxA */
+#define BFI_ENET_STATS_TX_FC  (1 << 4)    /* !< Tx FC Stats from TxA */
+
+#define BFI_ENET_STATS_ALL    0x1f
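+
+/*
+ * Note: BFI_ENET_STATS_ALL is simply the OR of the five category bits
+ * above (0x01 | 0x02 | 0x04 | 0x08 | 0x10 == 0x1f).
+ */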
+
+/* TxF Frame Statistics */
+struct bfi_enet_stats_txf {
+	u64 ucast_octets;
+	u64 ucast;
+	u64 ucast_vlan;
+
+	u64 mcast_octets;
+	u64 mcast;
+	u64 mcast_vlan;
+
+	u64 bcast_octets;
+	u64 bcast;
+	u64 bcast_vlan;
+
+	u64 errors;
+	u64 filter_vlan;      /* frames filtered due to VLAN */
+	u64 filter_mac_sa;    /* frames filtered due to SA check */
+};
+
+/* RxF Frame Statistics */
+struct bfi_enet_stats_rxf {
+	u64 ucast_octets;
+	u64 ucast;
+	u64 ucast_vlan;
+
+	u64 mcast_octets;
+	u64 mcast;
+	u64 mcast_vlan;
+
+	u64 bcast_octets;
+	u64 bcast;
+	u64 bcast_vlan;
+	u64 frame_drops;
+};
+
+/* Fixme combine fc_tx & fc_rx */
+/* FC Tx Frame Statistics */
+struct bfi_enet_stats_fc_tx {
+	u64 txf_ucast_octets;
+	u64 txf_ucast;
+	u64 txf_ucast_vlan;
+
+	u64 txf_mcast_octets;
+	u64 txf_mcast;
+	u64 txf_mcast_vlan;
+
+	u64 txf_bcast_octets;
+	u64 txf_bcast;
+	u64 txf_bcast_vlan;
+
+	u64 txf_parity_errors;
+	u64 txf_timeout;
+	u64 txf_fid_parity_errors;
+};
+
+/* FC Rx Frame Statistics */
+struct bfi_enet_stats_fc_rx {
+	u64 rxf_ucast_octets;
+	u64 rxf_ucast;
+	u64 rxf_ucast_vlan;
+
+	u64 rxf_mcast_octets;
+	u64 rxf_mcast;
+	u64 rxf_mcast_vlan;
+
+	u64 rxf_bcast_octets;
+	u64 rxf_bcast;
+	u64 rxf_bcast_vlan;
+};
+
+/* RAD Frame Statistics */
+struct bfi_enet_stats_rad {
+	u64 rx_frames;
+	u64 rx_octets;
+	u64 rx_vlan_frames;
+
+	u64 rx_ucast;
+	u64 rx_ucast_octets;
+	u64 rx_ucast_vlan;
+
+	u64 rx_mcast;
+	u64 rx_mcast_octets;
+	u64 rx_mcast_vlan;
+
+	u64 rx_bcast;
+	u64 rx_bcast_octets;
+	u64 rx_bcast_vlan;
+
+	u64 rx_drops;
+};
+
+/* BPC Tx Registers */
+struct bfi_enet_stats_bpc {
+	/* transmit stats */
+	u64 tx_pause[8];
+	u64 tx_zero_pause[8];	/*!< Pause cancellation */
+	u64 tx_first_pause[8];	/*!< Pause initiation rather than retention */
+
+	/* receive stats */
+	u64 rx_pause[8];
+	u64 rx_zero_pause[8];	/*!< Pause cancellation */
+	u64 rx_first_pause[8];	/*!< Pause initiation rather than retention */
+};
+
+/* MAC Rx Statistics */
+struct bfi_enet_stats_mac {
+	u64 frame_64;		/* both rx and tx counter */
+	u64 frame_65_127;		/* both rx and tx counter */
+	u64 frame_128_255;		/* both rx and tx counter */
+	u64 frame_256_511;		/* both rx and tx counter */
+	u64 frame_512_1023;	/* both rx and tx counter */
+	u64 frame_1024_1518;	/* both rx and tx counter */
+	u64 frame_1519_1522;	/* both rx and tx counter */
+
+	/* receive stats */
+	u64 rx_bytes;
+	u64 rx_packets;
+	u64 rx_fcs_error;
+	u64 rx_multicast;
+	u64 rx_broadcast;
+	u64 rx_control_frames;
+	u64 rx_pause;
+	u64 rx_unknown_opcode;
+	u64 rx_alignment_error;
+	u64 rx_frame_length_error;
+	u64 rx_code_error;
+	u64 rx_carrier_sense_error;
+	u64 rx_undersize;
+	u64 rx_oversize;
+	u64 rx_fragments;
+	u64 rx_jabber;
+	u64 rx_drop;
+
+	/* transmit stats */
+	u64 tx_bytes;
+	u64 tx_packets;
+	u64 tx_multicast;
+	u64 tx_broadcast;
+	u64 tx_pause;
+	u64 tx_deferral;
+	u64 tx_excessive_deferral;
+	u64 tx_single_collision;
+	u64 tx_muliple_collision;
+	u64 tx_late_collision;
+	u64 tx_excessive_collision;
+	u64 tx_total_collision;
+	u64 tx_pause_honored;
+	u64 tx_drop;
+	u64 tx_jabber;
+	u64 tx_fcs_error;
+	u64 tx_control_frame;
+	u64 tx_oversize;
+	u64 tx_undersize;
+	u64 tx_fragments;
+};
+
+/**
+ * Complete statistics, DMAed from fw to host followed by
+ * BFI_ENET_I2H_STATS_GET_RSP
+ */
+struct bfi_enet_stats {
+	struct bfi_enet_stats_mac	mac_stats;
+	struct bfi_enet_stats_bpc	bpc_stats;
+	struct bfi_enet_stats_rad	rad_stats;
+	struct bfi_enet_stats_rad	rlb_stats;
+	struct bfi_enet_stats_fc_rx	fc_rx_stats;
+	struct bfi_enet_stats_fc_tx	fc_tx_stats;
+	struct bfi_enet_stats_rxf	rxf_stats[BFI_ENET_CFG_MAX];
+	struct bfi_enet_stats_txf	txf_stats[BFI_ENET_CFG_MAX];
+};
+
+#pragma pack()
+
+#endif  /* __BFI_ENET_H__ */
diff --git a/drivers/net/bna/bna_enet.c b/drivers/net/bna/bna_enet.c
new file mode 100644
index 0000000..668c72e
--- /dev/null
+++ b/drivers/net/bna/bna_enet.c
@@ -0,0 +1,2199 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+#include "bna.h"
+
+static void
+bna_err_handler(struct bna *bna, u32 intr_status)
+{
+	if (BNA_IS_HALT_INTR(bna, intr_status))
+		bna_halt_clear(bna);
+
+	bfa_nw_ioc_error_isr(&bna->ioceth.ioc);
+}
+
+void
+bna_mbox_handler(struct bna *bna, u32 intr_status)
+{
+	if (BNA_IS_ERR_INTR(bna, intr_status)) {
+		bna_err_handler(bna, intr_status);
+		return;
+	}
+	if (BNA_IS_MBOX_INTR(bna, intr_status))
+		bfa_nw_ioc_mbox_isr(&bna->ioceth.ioc);
+}
+
+static void
+bna_msgq_rsp_handler(void *arg, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bna *bna = (struct bna *)arg;
+	struct bna_tx *tx;
+	struct bna_rx *rx;
+
+	switch (msghdr->msg_id) {
+	case BFI_ENET_I2H_RX_CFG_SET_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rx_enet_start_rsp(rx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_RX_CFG_CLR_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rx_enet_stop_rsp(rx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_RIT_CFG_RSP:
+	case BFI_ENET_I2H_RSS_CFG_RSP:
+	case BFI_ENET_I2H_RSS_ENABLE_RSP:
+	case BFI_ENET_I2H_RX_PROMISCUOUS_RSP:
+	case BFI_ENET_I2H_RX_DEFAULT_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_SET_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_CLR_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_ADD_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_DEL_RSP:
+	case BFI_ENET_I2H_MAC_MCAST_DEL_RSP:
+	case BFI_ENET_I2H_MAC_MCAST_FILTER_RSP:
+	case BFI_ENET_I2H_RX_VLAN_SET_RSP:
+	case BFI_ENET_I2H_RX_VLAN_STRIP_ENABLE_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rxf_cfg_rsp(&rx->rxf, msghdr);
+		break;
+
+	case BFI_ENET_I2H_MAC_MCAST_ADD_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rxf_mcast_add_rsp(&rx->rxf, msghdr);
+		break;
+
+	case BFI_ENET_I2H_TX_CFG_SET_RSP:
+		bna_tx_from_rid(bna, msghdr->enet_id, tx);
+		if (tx)
+			bna_bfi_tx_enet_start_rsp(tx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_TX_CFG_CLR_RSP:
+		bna_tx_from_rid(bna, msghdr->enet_id, tx);
+		if (tx)
+			bna_bfi_tx_enet_stop_rsp(tx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_ADMIN_RSP:
+		bna_bfi_ethport_admin_rsp(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_DIAG_LOOPBACK_RSP:
+		bna_bfi_ethport_lpbk_rsp(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_SET_PAUSE_RSP:
+		bna_bfi_pause_set_rsp(&bna->enet, msghdr);
+		break;
+
+	case BFI_ENET_I2H_GET_ATTR_RSP:
+		bna_bfi_attr_get_rsp(&bna->ioceth, msghdr);
+		break;
+
+	case BFI_ENET_I2H_STATS_GET_RSP:
+		bna_bfi_stats_get_rsp(bna, msghdr);
+		break;
+
+	case BFI_ENET_I2H_STATS_CLR_RSP:
+		/* No-op */
+		break;
+
+	case BFI_ENET_I2H_LINK_UP_AEN:
+		bna_bfi_ethport_linkup_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_LINK_DOWN_AEN:
+		bna_bfi_ethport_linkdown_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_ENABLE_AEN:
+		bna_bfi_ethport_enable_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_DISABLE_AEN:
+		bna_bfi_ethport_disable_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_BW_UPDATE_AEN:
+		bna_bfi_bw_update_aen(&bna->tx_mod);
+		break;
+
+	default:
+		break;
+	}
+}
+
+/**
+ * ETHPORT
+ */
+#define call_ethport_stop_cbfn(_ethport)				\
+do {									\
+	if ((_ethport)->stop_cbfn) {					\
+		void (*cbfn)(struct bna_enet *);			\
+		cbfn = (_ethport)->stop_cbfn;				\
+		(_ethport)->stop_cbfn = NULL;				\
+		cbfn(&(_ethport)->bna->enet);				\
+	}								\
+} while (0)
+
+#define call_ethport_adminup_cbfn(ethport, status)			\
+do {									\
+	if ((ethport)->adminup_cbfn) {					\
+		void (*cbfn)(struct bnad *, enum bna_cb_status);	\
+		cbfn = (ethport)->adminup_cbfn;				\
+		(ethport)->adminup_cbfn = NULL;				\
+		cbfn((ethport)->bna->bnad, status);			\
+	}								\
+} while (0)
+
+static inline int
+ethport_can_be_up(struct bna_ethport *ethport)
+{
+	int ready = 0;
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		ready = ((ethport->flags & BNA_ETHPORT_F_ADMIN_UP) &&
+			 (ethport->flags & BNA_ETHPORT_F_RX_STARTED) &&
+			 (ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	else
+		ready = ((ethport->flags & BNA_ETHPORT_F_ADMIN_UP) &&
+			 (ethport->flags & BNA_ETHPORT_F_RX_STARTED) &&
+			 !(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	return ready;
+}
+
+#define ethport_is_up ethport_can_be_up
+
+static void bna_bfi_ethport_up(struct bna_ethport *ethport);
+static void bna_bfi_ethport_down(struct bna_ethport *ethport);
+
+enum bna_ethport_event {
+	ETHPORT_E_START			= 1,
+	ETHPORT_E_STOP			= 2,
+	ETHPORT_E_FAIL			= 3,
+	ETHPORT_E_UP			= 4,
+	ETHPORT_E_DOWN			= 5,
+	ETHPORT_E_FWRESP_UP_OK		= 6,
+	ETHPORT_E_FWRESP_DOWN		= 7,
+	ETHPORT_E_FWRESP_UP_FAIL	= 8,
+};
+
+bfa_fsm_state_decl(bna_ethport, stopped, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, down, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, up_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, down_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, up, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, last_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+
+static void
+bna_ethport_sm_stopped_entry(struct bna_ethport *ethport)
+{
+	call_ethport_stop_cbfn(ethport);
+}
+
+static void
+bna_ethport_sm_stopped(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_START:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	case ETHPORT_E_STOP:
+		call_ethport_stop_cbfn(ethport);
+		break;
+
+	case ETHPORT_E_FAIL:
+		/* No-op */
+		break;
+
+	case ETHPORT_E_DOWN:
+		/* This event is received due to Rx objects failing */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_down_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_down(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_UP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up_resp_wait);
+		bna_bfi_ethport_up(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_up_resp_wait_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_up_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		break;
+
+	case ETHPORT_E_FAIL:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_INTERRUPT);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down_resp_wait);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_SUCCESS);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	case ETHPORT_E_FWRESP_DOWN:
+		/* down_resp_wait -> up_resp_wait transition on ETHPORT_E_UP */
+		bna_bfi_ethport_up(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_down_resp_wait_entry(struct bna_ethport *ethport)
+{
+	/**
+	 * NOTE: Do not call bna_bfi_ethport_down() here. It would overrun
+	 * the mbox, since the request posted before the up_resp_wait ->
+	 * down_resp_wait transition (on event ETHPORT_E_DOWN) is still
+	 * outstanding.
+	 */
+}
+
+static void
+bna_ethport_sm_down_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_UP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up_resp_wait);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		/* up_resp_wait->down_resp_wait transition on ETHPORT_E_DOWN */
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+	case ETHPORT_E_FWRESP_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_up_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_up(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down_resp_wait);
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_last_resp_wait_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_last_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		/**
+		 * This event is received due to Rx objects stopping in
+		 * parallel to ethport
+		 */
+		/* No-op */
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		/* up_resp_wait->last_resp_wait transition on ETHPORT_E_STOP */
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+	case ETHPORT_E_FWRESP_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_ethport_admin_up(struct bna_ethport *ethport)
+{
+	struct bfi_enet_enable_req *admin_up_req =
+		&ethport->bfi_enet_cmd.admin_req;
+
+	bfi_msgq_mhdr_set(admin_up_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_PORT_ADMIN_UP_REQ, 0, 0);
+	admin_up_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	admin_up_req->enable = BNA_STATUS_T_ENABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &admin_up_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_admin_down(struct bna_ethport *ethport)
+{
+	struct bfi_enet_enable_req *admin_down_req =
+		&ethport->bfi_enet_cmd.admin_req;
+
+	bfi_msgq_mhdr_set(admin_down_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_PORT_ADMIN_UP_REQ, 0, 0);
+	admin_down_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	admin_down_req->enable = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &admin_down_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_lpbk_up(struct bna_ethport *ethport)
+{
+	struct bfi_enet_diag_lb_req *lpbk_up_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+
+	bfi_msgq_mhdr_set(lpbk_up_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_DIAG_LOOPBACK_REQ, 0, 0);
+	lpbk_up_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_diag_lb_req)));
+	lpbk_up_req->mode = (ethport->bna->enet.type ==
+				BNA_ENET_T_LOOPBACK_INTERNAL) ?
+				BFI_ENET_DIAG_LB_OPMODE_EXT :
+				BFI_ENET_DIAG_LB_OPMODE_CBL;
+	lpbk_up_req->enable = BNA_STATUS_T_ENABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_diag_lb_req), &lpbk_up_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_lpbk_down(struct bna_ethport *ethport)
+{
+	struct bfi_enet_diag_lb_req *lpbk_down_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+
+	bfi_msgq_mhdr_set(lpbk_down_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_DIAG_LOOPBACK_REQ, 0, 0);
+	lpbk_down_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_diag_lb_req)));
+	lpbk_down_req->enable = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_diag_lb_req), &lpbk_down_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_up(struct bna_ethport *ethport)
+{
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		bna_bfi_ethport_admin_up(ethport);
+	else
+		bna_bfi_ethport_lpbk_up(ethport);
+}
+
+static void
+bna_bfi_ethport_down(struct bna_ethport *ethport)
+{
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		bna_bfi_ethport_admin_down(ethport);
+	else
+		bna_bfi_ethport_lpbk_down(ethport);
+}
+
+void
+bna_ethport_init(struct bna_ethport *ethport, struct bna *bna)
+{
+	ethport->flags |= (BNA_ETHPORT_F_ADMIN_UP | BNA_ETHPORT_F_PORT_ENABLED);
+	ethport->bna = bna;
+
+	ethport->link_status = BNA_LINK_DOWN;
+	ethport->link_cbfn = bnad_cb_ethport_link_status;
+
+	ethport->rx_started_count = 0;
+
+	ethport->stop_cbfn = NULL;
+	ethport->adminup_cbfn = NULL;
+
+	bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+}
+
+void
+bna_ethport_uninit(struct bna_ethport *ethport)
+{
+	ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+	ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+
+	ethport->bna = NULL;
+}
+
+void
+bna_ethport_start(struct bna_ethport *ethport)
+{
+	bfa_fsm_send_event(ethport, ETHPORT_E_START);
+}
+
+void
+bna_ethport_stop(struct bna_ethport *ethport)
+{
+	ethport->stop_cbfn = bna_enet_cb_ethport_stopped;
+	bfa_fsm_send_event(ethport, ETHPORT_E_STOP);
+}
+
+void
+bna_ethport_fail(struct bna_ethport *ethport)
+{
+	/* Reset the physical port status to enabled */
+	ethport->flags |= BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport->link_status != BNA_LINK_DOWN) {
+		ethport->link_status = BNA_LINK_DOWN;
+		ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+	}
+	bfa_fsm_send_event(ethport, ETHPORT_E_FAIL);
+}
+
+void
+bna_ethport_admin_up(struct bna_ethport *ethport,
+		void (*cbfn)(struct bnad *, enum bna_cb_status))
+{
+	ethport->adminup_cbfn = cbfn;
+
+	if (ethport->flags & BNA_ETHPORT_F_ADMIN_UP) {
+		call_ethport_adminup_cbfn(ethport, BNA_CB_SUCCESS);
+		return;
+	}
+
+	if ((ethport->bna->enet.type != BNA_ENET_T_REGULAR) &&
+		(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED)) {
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		return;
+	}
+
+	ethport->flags |= BNA_ETHPORT_F_ADMIN_UP;
+
+	if (ethport_can_be_up(ethport))
+		bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+}
+
+void
+bna_ethport_admin_down(struct bna_ethport *ethport)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	if (!(ethport->flags & BNA_ETHPORT_F_ADMIN_UP))
+		return;
+
+	ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+
+	if (ethport_up)
+		bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+}
+
+/* Should be called only when ethport is disabled */
+void
+bna_ethport_linkcbfn_set(struct bna_ethport *ethport,
+		      void (*linkcbfn)(struct bnad *, enum bna_link_status))
+{
+	ethport->link_cbfn = linkcbfn;
+}
+
+void
+bna_bfi_ethport_enable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->flags |= BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport_can_be_up(ethport))
+		bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+}
+
+void
+bna_bfi_ethport_disable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport_up)
+		bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+}
+
+void
+bna_ethport_cb_rx_started(struct bna_ethport *ethport)
+{
+	ethport->rx_started_count++;
+
+	if (ethport->rx_started_count == 1) {
+		ethport->flags |= BNA_ETHPORT_F_RX_STARTED;
+
+		if (ethport_can_be_up(ethport))
+			bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+	}
+}
+
+void
+bna_ethport_cb_rx_stopped(struct bna_ethport *ethport)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	ethport->rx_started_count--;
+
+	if (ethport->rx_started_count == 0) {
+		ethport->flags &= ~BNA_ETHPORT_F_RX_STARTED;
+
+		if (ethport_up)
+			bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+	}
+}
+
+void
+bna_bfi_ethport_admin_rsp(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_enable_req *admin_req =
+		&ethport->bfi_enet_cmd.admin_req;
+	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+
+	switch (admin_req->enable) {
+	case BNA_STATUS_T_ENABLED:
+		if (rsp->error == BFI_ENET_CMD_OK)
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_OK);
+		else {
+			ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_FAIL);
+		}
+		break;
+
+	case BNA_STATUS_T_DISABLED:
+		bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_DOWN);
+		ethport->link_status = BNA_LINK_DOWN;
+		ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+		break;
+	}
+}
+
+void
+bna_bfi_ethport_lpbk_rsp(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_diag_lb_req *diag_lb_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+
+	switch (diag_lb_req->enable) {
+	case BNA_STATUS_T_ENABLED:
+		if (rsp->error == BFI_ENET_CMD_OK)
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_OK);
+		else {
+			ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_FAIL);
+		}
+		break;
+
+	case BNA_STATUS_T_DISABLED:
+		bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_DOWN);
+		break;
+	}
+}
+
+void
+bna_bfi_ethport_linkup_aen(struct bna_ethport *ethport,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->link_status = BNA_LINK_UP;
+
+	/* Dispatch events */
+	ethport->link_cbfn(ethport->bna->bnad, ethport->link_status);
+}
+
+void
+bna_bfi_ethport_linkdown_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->link_status = BNA_LINK_DOWN;
+
+	/* Dispatch events */
+	ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+}
+
+int
+bna_ethport_is_disabled(struct bna_ethport *ethport)
+{
+	int ret = (!(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	return ret;
+}
+
+/**
+ * ENET
+ */
+#define bna_enet_chld_start(enet)					\
+do {									\
+	enum bna_tx_type tx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;			\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bna_ethport_start(&(enet)->bna->ethport);			\
+	bna_tx_mod_start(&(enet)->bna->tx_mod, tx_type);		\
+	bna_rx_mod_start(&(enet)->bna->rx_mod, rx_type);		\
+} while (0)
+
+#define bna_enet_chld_stop(enet)					\
+do {									\
+	enum bna_tx_type tx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;			\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bfa_wc_init(&(enet)->chld_stop_wc, bna_enet_cb_chld_stopped, (enet));\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_ethport_stop(&(enet)->bna->ethport);			\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_tx_mod_stop(&(enet)->bna->tx_mod, tx_type);			\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_rx_mod_stop(&(enet)->bna->rx_mod, rx_type);			\
+	bfa_wc_wait(&(enet)->chld_stop_wc);				\
+} while (0)
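+
+/*
+ * The bfa_wc_* calls above form a wait counter: each stopped child
+ * (ethport, tx_mod, rx_mod) releases its reference through
+ * bna_enet_cb_*_stopped() -> bfa_wc_down(), and once the counter drains
+ * bna_enet_cb_chld_stopped() posts ENET_E_CHLD_STOPPED to the enet FSM.
+ */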
+
+#define bna_enet_chld_fail(enet)					\
+do {									\
+	bna_ethport_fail(&(enet)->bna->ethport);			\
+	bna_tx_mod_fail(&(enet)->bna->tx_mod);				\
+	bna_rx_mod_fail(&(enet)->bna->rx_mod);				\
+} while (0)
+
+#define bna_enet_rx_start(enet)						\
+do {									\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bna_rx_mod_start(&(enet)->bna->rx_mod, rx_type);		\
+} while (0)
+
+#define bna_enet_rx_stop(enet)						\
+do {									\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bfa_wc_init(&(enet)->chld_stop_wc, bna_enet_cb_chld_stopped, (enet));\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_rx_mod_stop(&(enet)->bna->rx_mod, rx_type);			\
+	bfa_wc_wait(&(enet)->chld_stop_wc);				\
+} while (0)
+
+#define call_enet_stop_cbfn(enet)					\
+do {									\
+	if ((enet)->stop_cbfn) {					\
+		void (*cbfn)(void *);					\
+		void *cbarg;						\
+		cbfn = (enet)->stop_cbfn;				\
+		cbarg = (enet)->stop_cbarg;				\
+		(enet)->stop_cbfn = NULL;				\
+		(enet)->stop_cbarg = NULL;				\
+		cbfn(cbarg);						\
+	}								\
+} while (0)
+
+#define call_enet_pause_cbfn(enet)					\
+do {									\
+	if ((enet)->pause_cbfn) {					\
+		void (*cbfn)(struct bnad *);				\
+		cbfn = (enet)->pause_cbfn;				\
+		(enet)->pause_cbfn = NULL;				\
+		cbfn((enet)->bna->bnad);				\
+	}								\
+} while (0)
+
+#define call_enet_mtu_cbfn(enet)					\
+do {									\
+	if ((enet)->mtu_cbfn) {						\
+		void (*cbfn)(struct bnad *);				\
+		cbfn = (enet)->mtu_cbfn;				\
+		(enet)->mtu_cbfn = NULL;				\
+		cbfn((enet)->bna->bnad);				\
+	}								\
+} while (0)
+
+static void bna_enet_cb_chld_stopped(void *arg);
+static void bna_bfi_pause_set(struct bna_enet *enet);
+
+enum bna_enet_event {
+	ENET_E_START			= 1,
+	ENET_E_STOP			= 2,
+	ENET_E_FAIL			= 3,
+	ENET_E_PAUSE_CFG		= 4,
+	ENET_E_MTU_CFG			= 5,
+	ENET_E_FWRESP_PAUSE		= 6,
+	ENET_E_CHLD_STOPPED		= 7,
+};
+
+bfa_fsm_state_decl(bna_enet, stopped, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, pause_init_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, last_resp_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, started, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, cfg_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, cfg_stop_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, chld_stop_wait, struct bna_enet,
+			enum bna_enet_event);
+
+static void
+bna_enet_sm_stopped_entry(struct bna_enet *enet)
+{
+	call_enet_pause_cbfn(enet);
+	call_enet_mtu_cbfn(enet);
+	call_enet_stop_cbfn(enet);
+}
+
+static void
+bna_enet_sm_stopped(struct bna_enet *enet, enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_START:
+		bfa_fsm_set_state(enet, bna_enet_sm_pause_init_wait);
+		break;
+
+	case ENET_E_STOP:
+		call_enet_stop_cbfn(enet);
+		break;
+
+	case ENET_E_FAIL:
+		/* No-op */
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		call_enet_pause_cbfn(enet);
+		break;
+
+	case ENET_E_MTU_CFG:
+		call_enet_mtu_cbfn(enet);
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		/**
+		 * This event is received due to Ethport, Tx and Rx objects
+		 * failing
+		 */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_pause_init_wait_entry(struct bna_enet *enet)
+{
+	bna_bfi_pause_set(enet);
+}
+
+static void
+bna_enet_sm_pause_init_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_last_resp_wait);
+		break;
+
+	case ENET_E_FAIL:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		enet->flags |= BNA_ENET_F_PAUSE_CHANGED;
+		break;
+
+	case ENET_E_MTU_CFG:
+		/* No-op */
+		break;
+
+	case ENET_E_FWRESP_PAUSE:
+		if (enet->flags & BNA_ENET_F_PAUSE_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+			bna_bfi_pause_set(enet);
+		} else {
+			bfa_fsm_set_state(enet, bna_enet_sm_started);
+			bna_enet_chld_start(enet);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_last_resp_wait_entry(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+}
+
+static void
+bna_enet_sm_last_resp_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+	case ENET_E_FWRESP_PAUSE:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_started_entry(struct bna_enet *enet)
+{
+	/**
+	 * NOTE: Do not call bna_enet_chld_start() here, since it will be
+	 * inadvertently called during cfg_wait->started transition as well
+	 */
+	call_enet_pause_cbfn(enet);
+	call_enet_mtu_cbfn(enet);
+}
+
+static void
+bna_enet_sm_started(struct bna_enet *enet,
+			enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		bfa_fsm_set_state(enet, bna_enet_sm_chld_stop_wait);
+		break;
+
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_wait);
+		bna_bfi_pause_set(enet);
+		break;
+
+	case ENET_E_MTU_CFG:
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_wait);
+		bna_enet_rx_stop(enet);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_cfg_wait_entry(struct bna_enet *enet)
+{
+}
+
+static void
+bna_enet_sm_cfg_wait(struct bna_enet *enet,
+			enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_stop_wait);
+		break;
+
+	case ENET_E_FAIL:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		enet->flags |= BNA_ENET_F_PAUSE_CHANGED;
+		break;
+
+	case ENET_E_MTU_CFG:
+		enet->flags |= BNA_ENET_F_MTU_CHANGED;
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		bna_enet_rx_start(enet);
+		/* Fall through */
+	case ENET_E_FWRESP_PAUSE:
+		if (enet->flags & BNA_ENET_F_PAUSE_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+			bna_bfi_pause_set(enet);
+		} else if (enet->flags & BNA_ENET_F_MTU_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+			bna_enet_rx_stop(enet);
+		} else {
+			bfa_fsm_set_state(enet, bna_enet_sm_started);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_cfg_stop_wait_entry(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+	enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+}
+
+static void
+bna_enet_sm_cfg_stop_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_FWRESP_PAUSE:
+	case ENET_E_CHLD_STOPPED:
+		bfa_fsm_set_state(enet, bna_enet_sm_chld_stop_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_chld_stop_wait_entry(struct bna_enet *enet)
+{
+	bna_enet_chld_stop(enet);
+}
+
+static void
+bna_enet_sm_chld_stop_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_pause_set(struct bna_enet *enet)
+{
+	struct bfi_enet_set_pause_req *pause_req = &enet->pause_req;
+
+	bfi_msgq_mhdr_set(pause_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_SET_PAUSE_REQ, 0, 0);
+	pause_req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_set_pause_req)));
+	pause_req->tx_pause = enet->pause_config.tx_pause;
+	pause_req->rx_pause = enet->pause_config.rx_pause;
+
+	bfa_msgq_cmd_set(&enet->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_set_pause_req), &pause_req->mh);
+	bfa_msgq_cmd_post(&enet->bna->msgq, &enet->msgq_cmd);
+}
+
+static void
+bna_enet_cb_chld_stopped(void *arg)
+{
+	struct bna_enet *enet = (struct bna_enet *)arg;
+
+	bfa_fsm_send_event(enet, ENET_E_CHLD_STOPPED);
+}
+
+void
+bna_enet_init(struct bna_enet *enet, struct bna *bna)
+{
+	enet->bna = bna;
+	enet->flags = 0;
+	enet->mtu = 0;
+	enet->type = BNA_ENET_T_REGULAR;
+
+	enet->stop_cbfn = NULL;
+	enet->stop_cbarg = NULL;
+
+	enet->pause_cbfn = NULL;
+
+	enet->mtu_cbfn = NULL;
+
+	bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+}
+
+void
+bna_enet_uninit(struct bna_enet *enet)
+{
+	enet->flags = 0;
+
+	enet->bna = NULL;
+}
+
+void
+bna_enet_start(struct bna_enet *enet)
+{
+	enet->flags |= BNA_ENET_F_IOCETH_READY;
+	if (enet->flags & BNA_ENET_F_ENABLED)
+		bfa_fsm_send_event(enet, ENET_E_START);
+}
+
+void
+bna_enet_stop(struct bna_enet *enet)
+{
+	enet->stop_cbfn = bna_ioceth_cb_enet_stopped;
+	enet->stop_cbarg = &enet->bna->ioceth;
+
+	enet->flags &= ~BNA_ENET_F_IOCETH_READY;
+	bfa_fsm_send_event(enet, ENET_E_STOP);
+}
+
+void
+bna_enet_fail(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_IOCETH_READY;
+	bfa_fsm_send_event(enet, ENET_E_FAIL);
+}
+
+void
+bna_enet_cb_ethport_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_enet_cb_tx_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_enet_cb_rx_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_bfi_pause_set_rsp(struct bna_enet *enet, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(enet, ENET_E_FWRESP_PAUSE);
+}
+
+int
+bna_enet_mtu_get(struct bna_enet *enet)
+{
+	return enet->mtu;
+}
+
+void
+bna_enet_enable(struct bna_enet *enet)
+{
+	if (enet->fsm != (bfa_sm_t)bna_enet_sm_stopped)
+		return;
+
+	enet->flags |= BNA_ENET_F_ENABLED;
+
+	if (enet->flags & BNA_ENET_F_IOCETH_READY)
+		bfa_fsm_send_event(enet, ENET_E_START);
+}
+
+void
+bna_enet_disable(struct bna_enet *enet, enum bna_cleanup_type type,
+		 void (*cbfn)(void *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		(*cbfn)(enet->bna->bnad);
+		return;
+	}
+
+	enet->stop_cbfn = cbfn;
+	enet->stop_cbarg = enet->bna->bnad;
+
+	enet->flags &= ~BNA_ENET_F_ENABLED;
+
+	bfa_fsm_send_event(enet, ENET_E_STOP);
+}
+
+void
+bna_enet_pause_config(struct bna_enet *enet,
+		      struct bna_pause_config *pause_config,
+		      void (*cbfn)(struct bnad *))
+{
+	enet->pause_config = *pause_config;
+
+	enet->pause_cbfn = cbfn;
+
+	bfa_fsm_send_event(enet, ENET_E_PAUSE_CFG);
+}
+
+void
+bna_enet_mtu_set(struct bna_enet *enet, int mtu,
+		 void (*cbfn)(struct bnad *))
+{
+	enet->mtu = mtu;
+
+	enet->mtu_cbfn = cbfn;
+
+	bfa_fsm_send_event(enet, ENET_E_MTU_CFG);
+}
+
+void
+bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac)
+{
+	*mac = bfa_nw_ioc_get_mac(&enet->bna->ioceth.ioc);
+}
+
+/* Should be called only when enet is disabled */
+void
+bna_enet_type_set(struct bna_enet *enet, enum bna_enet_type type)
+{
+	enet->type = type;
+}
+
+enum bna_enet_type
+bna_enet_type_get(struct bna_enet *enet)
+{
+	return enet->type;
+}
+
+/**
+ * IOCETH
+ */
+#define enable_mbox_intr(_ioceth)					\
+do {									\
+	u32 intr_status;						\
+	bna_intr_status_get((_ioceth)->bna, intr_status);		\
+	bnad_cb_mbox_intr_enable((_ioceth)->bna->bnad);			\
+	bna_mbox_intr_enable((_ioceth)->bna);				\
+} while (0)
+
+#define disable_mbox_intr(_ioceth)					\
+do {									\
+	bna_mbox_intr_disable((_ioceth)->bna);				\
+	bnad_cb_mbox_intr_disable((_ioceth)->bna->bnad);		\
+} while (0)
+
+#define call_ioceth_stop_cbfn(_ioceth)					\
+do {									\
+	if ((_ioceth)->stop_cbfn) {					\
+		void (*cbfn)(struct bnad *);				\
+		struct bnad *cbarg;					\
+		cbfn = (_ioceth)->stop_cbfn;				\
+		cbarg = (_ioceth)->stop_cbarg;				\
+		(_ioceth)->stop_cbfn = NULL;				\
+		(_ioceth)->stop_cbarg = NULL;				\
+		cbfn(cbarg);						\
+	}								\
+} while (0)
+
+#define bna_stats_mod_uninit(_stats_mod)				\
+do {									\
+} while (0)
+
+#define bna_stats_mod_start(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = true;					\
+} while (0)
+
+#define bna_stats_mod_stop(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = false;				\
+} while (0)
+
+#define bna_stats_mod_fail(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = false;				\
+	(_stats_mod)->stats_get_busy = false;				\
+	(_stats_mod)->stats_clr_busy = false;				\
+} while (0)
+
+static void bna_bfi_attr_get(struct bna_ioceth *ioceth);
+
+enum bna_ioceth_event {
+	IOCETH_E_ENABLE			= 1,
+	IOCETH_E_DISABLE		= 2,
+	IOCETH_E_IOC_RESET		= 3,
+	IOCETH_E_IOC_FAILED		= 4,
+	IOCETH_E_IOC_READY		= 5,
+	IOCETH_E_ENET_ATTR_RESP		= 6,
+	IOCETH_E_ENET_STOPPED		= 7,
+	IOCETH_E_IOC_DISABLED		= 8,
+};
+
+bfa_fsm_state_decl(bna_ioceth, stopped, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ioc_ready_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, enet_attr_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ready, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, last_resp_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, enet_stop_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ioc_disable_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, failed, struct bna_ioceth,
+			enum bna_ioceth_event);
+
+static void
+bna_ioceth_sm_stopped_entry(struct bna_ioceth *ioceth)
+{
+	call_ioceth_stop_cbfn(ioceth);
+}
+
+static void
+bna_ioceth_sm_stopped(struct bna_ioceth *ioceth,
+			enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_ENABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_ready_wait);
+		bfa_nw_ioc_enable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ioc_ready_wait_entry(struct bna_ioceth *ioceth)
+{
+	/**
+	 * Do not call bfa_nw_ioc_enable() here. It must be called in the
+	 * previous state due to failed -> ioc_ready_wait transition.
+	 */
+}
+
+static void
+bna_ioceth_sm_ioc_ready_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	case IOCETH_E_IOC_READY:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_enet_attr_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_enet_attr_wait_entry(struct bna_ioceth *ioceth)
+{
+	bna_bfi_attr_get(ioceth);
+}
+
+static void
+bna_ioceth_sm_enet_attr_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_last_resp_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	case IOCETH_E_ENET_ATTR_RESP:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ready_entry(struct bna_ioceth *ioceth)
+{
+	bna_enet_start(&ioceth->bna->enet);
+	bna_stats_mod_start(&ioceth->bna->stats_mod);
+	bnad_cb_ioceth_ready(ioceth->bna->bnad);
+}
+
+static void
+bna_ioceth_sm_ready(struct bna_ioceth *ioceth, enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_enet_stop_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bna_enet_fail(&ioceth->bna->enet);
+		bna_stats_mod_fail(&ioceth->bna->stats_mod);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_last_resp_wait_entry(struct bna_ioceth *ioceth)
+{
+}
+
+static void
+bna_ioceth_sm_last_resp_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_FAILED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		disable_mbox_intr(ioceth);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_ENET_ATTR_RESP:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_enet_stop_wait_entry(struct bna_ioceth *ioceth)
+{
+	bna_stats_mod_stop(&ioceth->bna->stats_mod);
+	bna_enet_stop(&ioceth->bna->enet);
+}
+
+static void
+bna_ioceth_sm_enet_stop_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_FAILED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		disable_mbox_intr(ioceth);
+		bna_enet_fail(&ioceth->bna->enet);
+		bna_stats_mod_fail(&ioceth->bna->stats_mod);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_ENET_STOPPED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ioc_disable_wait_entry(struct bna_ioceth *ioceth)
+{
+}
+
+static void
+bna_ioceth_sm_ioc_disable_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_DISABLED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+		break;
+
+	case IOCETH_E_ENET_STOPPED:
+		/* This event is received due to enet failing */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_failed_entry(struct bna_ioceth *ioceth)
+{
+	bnad_cb_ioceth_failed(ioceth->bna->bnad);
+}
+
+static void
+bna_ioceth_sm_failed(struct bna_ioceth *ioceth,
+			enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_ready_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_attr_get(struct bna_ioceth *ioceth)
+{
+	struct bfi_enet_attr_req *attr_req = &ioceth->attr_req;
+
+	bfi_msgq_mhdr_set(attr_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_GET_ATTR_REQ, 0, 0);
+	attr_req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_attr_req)));
+	bfa_msgq_cmd_set(&ioceth->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_attr_req), &attr_req->mh);
+	bfa_msgq_cmd_post(&ioceth->bna->msgq, &ioceth->msgq_cmd);
+}
+
+/* IOC callback functions */
+
+static void
+bna_cb_ioceth_enable(void *arg, enum bfa_status error)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	if (error)
+		bfa_fsm_send_event(ioceth, IOCETH_E_IOC_FAILED);
+	else
+		bfa_fsm_send_event(ioceth, IOCETH_E_IOC_READY);
+}
+
+static void
+bna_cb_ioceth_disable(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_DISABLED);
+}
+
+static void
+bna_cb_ioceth_hbfail(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_FAILED);
+}
+
+static void
+bna_cb_ioceth_reset(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_RESET);
+}
+
+static struct bfa_ioc_cbfn bna_ioceth_cbfn = {
+	bna_cb_ioceth_enable,
+	bna_cb_ioceth_disable,
+	bna_cb_ioceth_hbfail,
+	bna_cb_ioceth_reset
+};
+
+void
+bna_ioceth_init(struct bna_ioceth *ioceth, struct bna *bna,
+		struct bna_res_info *res_info)
+{
+	u64 dma;
+	u8 *kva;
+
+	ioceth->bna = bna;
+
+	/**
+	 * Attach IOC and claim:
+	 *	1. DMA memory for IOC attributes
+	 *	2. Kernel memory for FW trace
+	 */
+	bfa_nw_ioc_attach(&ioceth->ioc, ioceth, &bna_ioceth_cbfn);
+	bfa_nw_ioc_pci_init(&ioceth->ioc, &bna->pcidev, BFI_PCIFN_CLASS_ETH);
+
+	BNA_GET_DMA_ADDR(
+		&res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].dma, dma);
+	kva = res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].kva;
+	bfa_nw_ioc_mem_claim(&ioceth->ioc, kva, dma);
+
+	kva = res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mdl[0].kva;
+
+	/**
+	 * Attach common modules (Diag, SFP, CEE, Port) and claim respective
+	 * DMA memory.
+	 */
+	BNA_GET_DMA_ADDR(
+		&res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].dma, dma);
+	kva = res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].kva;
+	bfa_nw_cee_attach(&bna->cee, &ioceth->ioc, bna);
+	bfa_nw_cee_mem_claim(&bna->cee, kva, dma);
+	kva += bfa_nw_cee_meminfo();
+	dma += bfa_nw_cee_meminfo();
+
+	bfa_msgq_attach(&bna->msgq, &ioceth->ioc);
+	bfa_msgq_memclaim(&bna->msgq, kva, dma);
+	bfa_msgq_regisr(&bna->msgq, BFI_MC_ENET, bna_msgq_rsp_handler, bna);
+	kva += bfa_msgq_meminfo();
+	dma += bfa_msgq_meminfo();
+
+	ioceth->stop_cbfn = NULL;
+	ioceth->stop_cbarg = NULL;
+
+	bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+}
+
+void
+bna_ioceth_uninit(struct bna_ioceth *ioceth)
+{
+	bfa_nw_ioc_detach(&ioceth->ioc);
+
+	ioceth->bna = NULL;
+}
+
+void
+bna_bfi_attr_get_rsp(struct bna_ioceth *ioceth,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_attr_rsp *rsp = (struct bfi_enet_attr_rsp *)msghdr;
+
+	/**
+	 * Store only if not set earlier, since BNAD can override the HW
+	 * attributes
+	 */
+	if (!ioceth->attr.num_txq)
+		ioceth->attr.num_txq = ntohl(rsp->max_cfg);
+	if (!ioceth->attr.num_rxp)
+		ioceth->attr.num_rxp = ntohl(rsp->max_cfg);
+	ioceth->attr.num_ucmac = ntohl(rsp->max_ucmac);
+	ioceth->attr.num_mcmac = BFI_ENET_MAX_MCAM;
+	ioceth->attr.max_rit_size = ntohl(rsp->rit_size);
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_ENET_ATTR_RESP);
+}
+
+void
+bna_ioceth_cb_enet_stopped(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_ENET_STOPPED);
+}
+
+void
+bna_ioceth_enable(struct bna_ioceth *ioceth)
+{
+	if (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_ready) {
+		bnad_cb_ioceth_ready(ioceth->bna->bnad);
+		return;
+	}
+
+	if (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_stopped)
+		bfa_fsm_send_event(ioceth, IOCETH_E_ENABLE);
+}
+
+void
+bna_ioceth_disable(struct bna_ioceth *ioceth, enum bna_cleanup_type type)
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		bnad_cb_ioceth_disabled(ioceth->bna->bnad);
+		return;
+	}
+
+	ioceth->stop_cbfn = bnad_cb_ioceth_disabled;
+	ioceth->stop_cbarg = ioceth->bna->bnad;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_DISABLE);
+}
+
+bool
+bna_ioceth_state_is_failed(struct bna_ioceth *ioceth)
+{
+	return (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_failed) ?
+		true : false;
+}
+
+static void
+bna_ucam_mod_init(struct bna_ucam_mod *ucam_mod, struct bna *bna,
+		  struct bna_res_info *res_info)
+{
+	int i;
+
+	ucam_mod->ucmac = (struct bna_mac *)
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&ucam_mod->free_q);
+	for (i = 0; i < bna->ioceth.attr.num_ucmac; i++) {
+		bfa_q_qe_init(&ucam_mod->ucmac[i].qe);
+		list_add_tail(&ucam_mod->ucmac[i].qe, &ucam_mod->free_q);
+	}
+
+	ucam_mod->bna = bna;
+}
+
+static void
+bna_ucam_mod_uninit(struct bna_ucam_mod *ucam_mod)
+{
+	struct list_head *qe;
+	int i = 0;
+
+	list_for_each(qe, &ucam_mod->free_q)
+		i++;
+
+	ucam_mod->bna = NULL;
+}
+
+static void
+bna_mcam_mod_init(struct bna_mcam_mod *mcam_mod, struct bna *bna,
+		  struct bna_res_info *res_info)
+{
+	int i;
+
+	mcam_mod->mcmac = (struct bna_mac *)
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&mcam_mod->free_q);
+	for (i = 0; i < bna->ioceth.attr.num_mcmac; i++) {
+		bfa_q_qe_init(&mcam_mod->mcmac[i].qe);
+		list_add_tail(&mcam_mod->mcmac[i].qe, &mcam_mod->free_q);
+	}
+
+	mcam_mod->mchandle = (struct bna_mcam_handle *)
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&mcam_mod->free_handle_q);
+	for (i = 0; i < bna->ioceth.attr.num_mcmac; i++) {
+		bfa_q_qe_init(&mcam_mod->mchandle[i].qe);
+		list_add_tail(&mcam_mod->mchandle[i].qe,
+				&mcam_mod->free_handle_q);
+	}
+
+	mcam_mod->bna = bna;
+}
+
+static void
+bna_mcam_mod_uninit(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &mcam_mod->free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &mcam_mod->free_handle_q)
+		i++;
+
+	mcam_mod->bna = NULL;
+}
+
+static void
+bna_bfi_stats_get(struct bna *bna)
+{
+	struct bfi_enet_stats_req *stats_req = &bna->stats_mod.stats_get;
+
+	bna->stats_mod.stats_get_busy = true;
+
+	bfi_msgq_mhdr_set(stats_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_STATS_GET_REQ, 0, 0);
+	stats_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_stats_req)));
+	stats_req->stats_mask = htons(BFI_ENET_STATS_ALL);
+	stats_req->tx_enet_mask = htonl(bna->tx_mod.rid_mask);
+	stats_req->rx_enet_mask = htonl(bna->rx_mod.rid_mask);
+	stats_req->host_buffer.a32.addr_hi = bna->stats.hw_stats_dma.msb;
+	stats_req->host_buffer.a32.addr_lo = bna->stats.hw_stats_dma.lsb;
+
+	bfa_msgq_cmd_set(&bna->stats_mod.stats_get_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_stats_req), &stats_req->mh);
+	bfa_msgq_cmd_post(&bna->msgq, &bna->stats_mod.stats_get_cmd);
+}
+
+#define bna_stats_copy(_name, _type)					\
+do {									\
+	count = sizeof(struct bfi_enet_stats_ ## _type) / sizeof(u64);	\
+	stats_src = (u64 *)&bna->stats.hw_stats_kva->_name ## _stats;	\
+	stats_dst = (u64 *)&bna->stats.hw_stats._name ## _stats;	\
+	for (i = 0; i < count; i++)					\
+		stats_dst[i] = be64_to_cpu(stats_src[i]);		\
+} while (0)
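+
+/*
+ * For example, bna_stats_copy(mac, mac) below copies
+ * hw_stats_kva->mac_stats into hw_stats.mac_stats one u64 at a time,
+ * converting each word from firmware (big-endian) to CPU byte order.
+ */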
+
+void
+bna_bfi_stats_get_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_stats_req *stats_req = &bna->stats_mod.stats_get;
+	u64 *stats_src;
+	u64 *stats_dst;
+	u32 tx_enet_mask = ntohl(stats_req->tx_enet_mask);
+	u32 rx_enet_mask = ntohl(stats_req->rx_enet_mask);
+	int count;
+	int i;
+
+	bna_stats_copy(mac, mac);
+	bna_stats_copy(bpc, bpc);
+	bna_stats_copy(rad, rad);
+	bna_stats_copy(rlb, rad);
+	bna_stats_copy(fc_rx, fc_rx);
+	bna_stats_copy(fc_tx, fc_tx);
+
+	stats_src = (u64 *)&(bna->stats.hw_stats_kva->rxf_stats[0]);
+
+	/* Copy Rxf stats to SW area, scatter them while copying */
+	for (i = 0; i < BFI_ENET_CFG_MAX; i++) {
+		stats_dst = (u64 *)&(bna->stats.hw_stats.rxf_stats[i]);
+		memset(stats_dst, 0, sizeof(struct bfi_enet_stats_rxf));
+		if (rx_enet_mask & ((u32)(1 << i))) {
+			int k;
+			count = sizeof(struct bfi_enet_stats_rxf) /
+				sizeof(u64);
+			for (k = 0; k < count; k++) {
+				stats_dst[k] = be64_to_cpu(*stats_src);
+				stats_src++;
+			}
+		}
+	}
+
+	/* Copy Txf stats to SW area, scatter them while copying */
+	for (i = 0; i < BFI_ENET_CFG_MAX; i++) {
+		stats_dst = (u64 *)&(bna->stats.hw_stats.txf_stats[i]);
+		memset(stats_dst, 0, sizeof(struct bfi_enet_stats_txf));
+		if (tx_enet_mask & ((u32)(1 << i))) {
+			int k;
+			count = sizeof(struct bfi_enet_stats_txf) /
+				sizeof(u64);
+			for (k = 0; k < count; k++) {
+				stats_dst[k] = be64_to_cpu(*stats_src);
+				stats_src++;
+			}
+		}
+	}
+
+	bna->stats_mod.stats_get_busy = false;
+	bnad_cb_stats_get(bna->bnad, BNA_CB_SUCCESS, &bna->stats);
+}
+
+void
+bna_res_req(struct bna_res_info *res_info)
+{
+	/* DMA memory for COMMON_MODULE */
+	res_info[BNA_RES_MEM_T_COM].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.len = ALIGN(
+				(bfa_nw_cee_meminfo() +
+				bfa_msgq_meminfo()), PAGE_SIZE);
+
+	/* DMA memory for retrieving IOC attributes */
+	res_info[BNA_RES_MEM_T_ATTR].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.len =
+				ALIGN(bfa_nw_ioc_meminfo(), PAGE_SIZE);
+
+	/* Virtual memory for retrieving fw_trc */
+	res_info[BNA_RES_MEM_T_FWTRC].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.num = 0;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.len = 0;
+
+	/* DMA memory for retrieving stats */
+	res_info[BNA_RES_MEM_T_STATS].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.len =
+				ALIGN(sizeof(struct bfi_enet_stats),
+					PAGE_SIZE);
+}
+
+void
+bna_mod_res_req(struct bna *bna, struct bna_res_info *res_info)
+{
+	struct bna_attr *attr = &bna->ioceth.attr;
+
+	/* Virtual memory for Tx objects - stored by Tx module */
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.len =
+		attr->num_txq * sizeof(struct bna_tx);
+
+	/* Virtual memory for TxQ - stored by Tx module */
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.len =
+		attr->num_txq * sizeof(struct bna_txq);
+
+	/* Virtual memory for Rx objects - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.len =
+		attr->num_rxp * sizeof(struct bna_rx);
+
+	/* Virtual memory for RxPath - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.len =
+		attr->num_rxp * sizeof(struct bna_rxp);
+
+	/* Virtual memory for RxQ - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.len =
+		(attr->num_rxp * 2) * sizeof(struct bna_rxq);
+
+	/* Virtual memory for Unicast MAC address - stored by ucam module */
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.len =
+		attr->num_ucmac * sizeof(struct bna_mac);
+
+	/* Virtual memory for Multicast MAC address - stored by mcam module */
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.len =
+		attr->num_mcmac * sizeof(struct bna_mac);
+
+	/* Virtual memory for Multicast handle - stored by mcam module */
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.len =
+		attr->num_mcmac * sizeof(struct bna_mcam_handle);
+}
+
+void
+bna_init(struct bna *bna, struct bnad *bnad,
+		struct bfa_pcidev *pcidev, struct bna_res_info *res_info)
+{
+	bna->bnad = bnad;
+	bna->pcidev = *pcidev;
+
+	bna->stats.hw_stats_kva = (struct bfi_enet_stats *)
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].kva;
+	bna->stats.hw_stats_dma.msb =
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.msb;
+	bna->stats.hw_stats_dma.lsb =
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.lsb;
+
+	bna_reg_addr_init(bna, &bna->pcidev);
+
+	/* Also initializes diag, cee, sfp, phy_port, msgq */
+	bna_ioceth_init(&bna->ioceth, bna, res_info);
+
+	bna_enet_init(&bna->enet, bna);
+	bna_ethport_init(&bna->ethport, bna);
+}
+
+void
+bna_mod_init(struct bna *bna, struct bna_res_info *res_info)
+{
+	bna_tx_mod_init(&bna->tx_mod, bna, res_info);
+
+	bna_rx_mod_init(&bna->rx_mod, bna, res_info);
+
+	bna_ucam_mod_init(&bna->ucam_mod, bna, res_info);
+
+	bna_mcam_mod_init(&bna->mcam_mod, bna, res_info);
+
+	bna->default_mode_rid = BFI_INVALID_RID;
+	bna->promisc_rid = BFI_INVALID_RID;
+
+	bna->mod_flags |= BNA_MOD_F_INIT_DONE;
+}
+
+void
+bna_uninit(struct bna *bna)
+{
+	if (bna->mod_flags & BNA_MOD_F_INIT_DONE) {
+		bna_mcam_mod_uninit(&bna->mcam_mod);
+		bna_ucam_mod_uninit(&bna->ucam_mod);
+		bna_rx_mod_uninit(&bna->rx_mod);
+		bna_tx_mod_uninit(&bna->tx_mod);
+		bna->mod_flags &= ~BNA_MOD_F_INIT_DONE;
+	}
+
+	bna_stats_mod_uninit(&bna->stats_mod);
+	bna_ethport_uninit(&bna->ethport);
+	bna_enet_uninit(&bna->enet);
+
+	bna_ioceth_uninit(&bna->ioceth);
+
+	bna->bnad = NULL;
+}
+
+int
+bna_num_txq_set(struct bna *bna, int num_txq)
+{
+	if (num_txq > 0 && (num_txq <= bna->ioceth.attr.num_txq)) {
+		bna->ioceth.attr.num_txq = num_txq;
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_FAIL;
+}
+
+int
+bna_num_rxp_set(struct bna *bna, int num_rxp)
+{
+	if (num_rxp > 0 && (num_rxp <= bna->ioceth.attr.num_rxp)) {
+		bna->ioceth.attr.num_rxp = num_rxp;
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_FAIL;
+}
+
+struct bna_mac *
+bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&ucam_mod->free_q))
+		return NULL;
+
+	bfa_q_deq(&ucam_mod->free_q, &qe);
+
+	return (struct bna_mac *)qe;
+}
+
+void
+bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod, struct bna_mac *mac)
+{
+	list_add_tail(&mac->qe, &ucam_mod->free_q);
+}
+
+struct bna_mac *
+bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&mcam_mod->free_q))
+		return NULL;
+
+	bfa_q_deq(&mcam_mod->free_q, &qe);
+
+	return (struct bna_mac *)qe;
+}
+
+void
+bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod, struct bna_mac *mac)
+{
+	list_add_tail(&mac->qe, &mcam_mod->free_q);
+}
+
+struct bna_mcam_handle *
+bna_mcam_mod_handle_get(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&mcam_mod->free_handle_q))
+		return NULL;
+
+	bfa_q_deq(&mcam_mod->free_handle_q, &qe);
+
+	return (struct bna_mcam_handle *)qe;
+}
+
+void
+bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod,
+			struct bna_mcam_handle *handle)
+{
+	list_add_tail(&handle->qe, &mcam_mod->free_handle_q);
+}
+
+void
+bna_hw_stats_get(struct bna *bna)
+{
+	if (!bna->stats_mod.ioc_ready) {
+		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
+		return;
+	}
+	if (bna->stats_mod.stats_get_busy) {
+		bnad_cb_stats_get(bna->bnad, BNA_CB_BUSY, &bna->stats);
+		return;
+	}
+
+	bna_bfi_stats_get(bna);
+}
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 8/8] bna: Tx and Rx Redesign
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
                   ` (6 preceding siblings ...)
  2011-07-27  2:10 ` [PATCH 7/8] bna: Introduce ENET as New Driver and FW Interface Rasesh Mody
@ 2011-07-27  2:10 ` Rasesh Mody
  2011-07-28  5:32 ` [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture David Miller
  8 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-27  2:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, Rasesh Mody

Change details:
 - This patch contains the changes resulting from the redesign of the Tx and Rx
   data path setup. In the old design, TxQ and RxQ setup was done in the driver.
   With the new design, most of the hardware setup steps for the TxQs and RxQs
   are moved to FW. The host driver issues commands to FW through the message
   queue to set up and tear down the Tx and Rx data paths; FW performs the
   necessary steps and responds to the driver with a status (a sketch of this
   command/response pattern is included below).
 - As a result of this redesign, the state machine implementation for the Tx
   and Rx objects has changed significantly. Instead of doing raw register
   accesses, these state machines mostly send a command to FW, wait for the
   response and then take the next action. In addition to the Tx and Rx data
   path setup, this patch also covers Rx filter configuration - unicast
   addresses, multicast addresses, VLAN filters, promiscuous mode, etc.
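 - For reference, every bna_bfi_*() helper added below follows roughly the same
   command posting pattern. A minimal illustrative sketch (the function name
   bna_bfi_example_req() is hypothetical; the real helpers are in the diff):

	static void
	bna_bfi_example_req(struct bna_rxf *rxf, enum bna_status status)
	{
		struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;

		/* Build the BFI message header and the request payload */
		bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
			BFI_ENET_H2I_RX_PROMISCUOUS_REQ, 0, rxf->rx->rid);
		req->mh.num_entries = htons(
			bfi_msgq_num_cmd_entries(sizeof(*req)));
		req->enable = status;

		/* Queue the command on the message queue; the FW response
		 * arrives asynchronously and drives the RXF state machine
		 * via RXF_E_FW_RESP (see bna_bfi_rxf_cfg_rsp()).
		 */
		bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
			sizeof(*req), &req->mh);
		bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
	}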

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bna_tx_rx.c | 3966 +++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 3966 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/bna/bna_tx_rx.c

diff --git a/drivers/net/bna/bna_tx_rx.c b/drivers/net/bna/bna_tx_rx.c
new file mode 100644
index 0000000..03799eb
--- /dev/null
+++ b/drivers/net/bna/bna_tx_rx.c
@@ -0,0 +1,3966 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+#include "bna.h"
+#include "bfi.h"
+
+/**
+ * IB
+ */
+void
+bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo)
+{
+	ib->coalescing_timeo = coalescing_timeo;
+	ib->door_bell.doorbell_ack = BNA_DOORBELL_IB_INT_ACK(
+				(u32)ib->coalescing_timeo, 0);
+}
+
+/**
+ * RXF
+ */
+
+#define bna_rxf_vlan_cfg_soft_reset(rxf)				\
+do {									\
+	(rxf)->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;		\
+	(rxf)->vlan_strip_pending = true;				\
+} while (0)
+
+#define bna_rxf_rss_cfg_soft_reset(rxf)					\
+do {									\
+	if ((rxf)->rss_status == BNA_STATUS_T_ENABLED)			\
+		(rxf)->rss_pending = (BNA_RSS_F_RIT_PENDING |		\
+				BNA_RSS_F_CFG_PENDING |			\
+				BNA_RSS_F_STATUS_PENDING);		\
+} while (0)
+
+static int bna_rxf_cfg_apply(struct bna_rxf *rxf);
+static void bna_rxf_cfg_reset(struct bna_rxf *rxf);
+static int bna_rxf_fltr_clear(struct bna_rxf *rxf);
+static int bna_rxf_ucast_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_promisc_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_allmulti_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_ucast_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+static int bna_rxf_promisc_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+static int bna_rxf_allmulti_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+
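+/*
+ * RXF state machine: stopped, paused, cfg_wait (config updates being
+ * pushed to FW one command at a time), started, fltr_clr_wait (CAM
+ * entries being cleared before pause) and last_resp_wait (waiting for
+ * the last FW response before stop).
+ */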
+bfa_fsm_state_decl(bna_rxf, stopped, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, paused, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, cfg_wait, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, started, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, fltr_clr_wait, struct bna_rxf,
+			enum bna_rxf_event);
+bfa_fsm_state_decl(bna_rxf, last_resp_wait, struct bna_rxf,
+			enum bna_rxf_event);
+
+static void
+bna_rxf_sm_stopped_entry(struct bna_rxf *rxf)
+{
+	call_rxf_stop_cbfn(rxf);
+}
+
+static void
+bna_rxf_sm_stopped(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_START:
+		if (rxf->flags & BNA_RXF_F_PAUSED) {
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+			call_rxf_start_cbfn(rxf);
+		} else
+			bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
+		break;
+
+	case RXF_E_STOP:
+		call_rxf_stop_cbfn(rxf);
+		break;
+
+	case RXF_E_FAIL:
+		/* No-op */
+		break;
+
+	case RXF_E_CONFIG:
+		call_rxf_cam_fltr_cbfn(rxf);
+		break;
+
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		call_rxf_pause_cbfn(rxf);
+		break;
+
+	case RXF_E_RESUME:
+		rxf->flags &= ~BNA_RXF_F_PAUSED;
+		call_rxf_resume_cbfn(rxf);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_paused_entry(struct bna_rxf *rxf)
+{
+	call_rxf_pause_cbfn(rxf);
+}
+
+static void
+bna_rxf_sm_paused(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_STOP:
+	case RXF_E_FAIL:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_CONFIG:
+		call_rxf_cam_fltr_cbfn(rxf);
+		break;
+
+	case RXF_E_RESUME:
+		rxf->flags &= ~BNA_RXF_F_PAUSED;
+		bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_cfg_wait_entry(struct bna_rxf *rxf)
+{
+	if (!bna_rxf_cfg_apply(rxf)) {
+		/* No more pending config updates */
+		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
+	}
+}
+
+static void
+bna_rxf_sm_cfg_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_STOP:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_last_resp_wait);
+		break;
+
+	case RXF_E_FAIL:
+		bna_rxf_cfg_reset(rxf);
+		call_rxf_start_cbfn(rxf);
+		call_rxf_cam_fltr_cbfn(rxf);
+		call_rxf_resume_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_CONFIG:
+		/* No-op */
+		break;
+
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		call_rxf_start_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_fltr_clr_wait);
+		break;
+
+	case RXF_E_FW_RESP:
+		if (!bna_rxf_cfg_apply(rxf)) {
+			/* No more pending config updates */
+			bfa_fsm_set_state(rxf, bna_rxf_sm_started);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_started_entry(struct bna_rxf *rxf)
+{
+	call_rxf_start_cbfn(rxf);
+	call_rxf_cam_fltr_cbfn(rxf);
+	call_rxf_resume_cbfn(rxf);
+}
+
+static void
+bna_rxf_sm_started(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_STOP:
+	case RXF_E_FAIL:
+		bna_rxf_cfg_reset(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_CONFIG:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
+		break;
+
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		if (!bna_rxf_fltr_clear(rxf))
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+		else
+			bfa_fsm_set_state(rxf, bna_rxf_sm_fltr_clr_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_fltr_clr_wait_entry(struct bna_rxf *rxf)
+{
+}
+
+static void
+bna_rxf_sm_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_FAIL:
+		bna_rxf_cfg_reset(rxf);
+		call_rxf_pause_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	case RXF_E_FW_RESP:
+		if (!bna_rxf_fltr_clear(rxf)) {
+			/* No more pending CAM entries to clear */
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_rxf_sm_last_resp_wait_entry(struct bna_rxf *rxf)
+{
+}
+
+static void
+bna_rxf_sm_last_resp_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+{
+	switch (event) {
+	case RXF_E_FAIL:
+	case RXF_E_FW_RESP:
+		bna_rxf_cfg_reset(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_ucast_req(struct bna_rxf *rxf, struct bna_mac *mac,
+		enum bfi_enet_h2i_msgs req_type)
+{
+	struct bfi_enet_ucast_req *req = &rxf->bfi_enet_cmd.ucast_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, req_type, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_ucast_req)));
+	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_ucast_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_mcast_add_req(struct bna_rxf *rxf, struct bna_mac *mac)
+{
+	struct bfi_enet_mcast_add_req *req =
+		&rxf->bfi_enet_cmd.mcast_add_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, BFI_ENET_H2I_MAC_MCAST_ADD_REQ,
+		0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_mcast_add_req)));
+	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_mcast_add_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_mcast_del_req(struct bna_rxf *rxf, u16 handle)
+{
+	struct bfi_enet_mcast_del_req *req =
+		&rxf->bfi_enet_cmd.mcast_del_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, BFI_ENET_H2I_MAC_MCAST_DEL_REQ,
+		0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_mcast_del_req)));
+	req->handle = htons(handle);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_mcast_del_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_mcast_filter_req(struct bna_rxf *rxf, enum bna_status status)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_MAC_MCAST_FILTER_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rx_promisc_req(struct bna_rxf *rxf, enum bna_status status)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_PROMISCUOUS_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rx_vlan_filter_set(struct bna_rxf *rxf, u8 block_idx)
+{
+	struct bfi_enet_rx_vlan_req *req = &rxf->bfi_enet_cmd.vlan_req;
+	int i;
+	int j;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_VLAN_SET_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rx_vlan_req)));
+	req->block_idx = block_idx;
+	for (i = 0; i < (BFI_ENET_VLAN_BLOCK_SIZE / 32); i++) {
+		j = (block_idx * (BFI_ENET_VLAN_BLOCK_SIZE / 32)) + i;
+		if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED)
+			req->bit_mask[i] =
+				htonl(rxf->vlan_filter_table[j]);
+		else
+			req->bit_mask[i] = 0xFFFFFFFF;
+	}
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rx_vlan_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_vlan_strip_enable(struct bna_rxf *rxf)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = rxf->vlan_strip_status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rit_cfg(struct bna_rxf *rxf)
+{
+	struct bfi_enet_rit_req *req = &rxf->bfi_enet_cmd.rit_req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RIT_CFG_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rit_req)));
+	req->size = htons(rxf->rit_size);
+	memcpy(&req->table[0], rxf->rit, rxf->rit_size);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rit_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rss_cfg(struct bna_rxf *rxf)
+{
+	struct bfi_enet_rss_cfg_req *req = &rxf->bfi_enet_cmd.rss_req;
+	int i;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RSS_CFG_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rss_cfg_req)));
+	req->cfg.type = rxf->rss_cfg.hash_type;
+	req->cfg.mask = rxf->rss_cfg.hash_mask;
+	for (i = 0; i < BFI_ENET_RSS_KEY_LEN; i++)
+		req->cfg.key[i] =
+			htonl(rxf->rss_cfg.toeplitz_hash_key[i]);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rss_cfg_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+static void
+bna_bfi_rss_enable(struct bna_rxf *rxf)
+{
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RSS_ENABLE_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = rxf->rss_status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
+}
+
+/* This function gets the multicast MAC that has already been added to CAM */
+static struct bna_mac *
+bna_rxf_mcmac_get(struct bna_rxf *rxf, u8 *mac_addr)
+{
+	struct bna_mac *mac;
+	struct list_head *qe;
+
+	list_for_each(qe, &rxf->mcast_active_q) {
+		mac = (struct bna_mac *)qe;
+		if (BNA_MAC_IS_EQUAL(&mac->addr, mac_addr))
+			return mac;
+	}
+
+	list_for_each(qe, &rxf->mcast_pending_del_q) {
+		mac = (struct bna_mac *)qe;
+		if (BNA_MAC_IS_EQUAL(&mac->addr, mac_addr))
+			return mac;
+	}
+
+	return NULL;
+}
+
+static struct bna_mcam_handle *
+bna_rxf_mchandle_get(struct bna_rxf *rxf, int handle)
+{
+	struct bna_mcam_handle *mchandle;
+	struct list_head *qe;
+
+	list_for_each(qe, &rxf->mcast_handle_q) {
+		mchandle = (struct bna_mcam_handle *)qe;
+		if (mchandle->handle == handle)
+			return mchandle;
+	}
+
+	return NULL;
+}
+
+static void
+bna_rxf_mchandle_attach(struct bna_rxf *rxf, u8 *mac_addr, int handle)
+{
+	struct bna_mac *mcmac;
+	struct bna_mcam_handle *mchandle;
+
+	mcmac = bna_rxf_mcmac_get(rxf, mac_addr);
+	mchandle = bna_rxf_mchandle_get(rxf, handle);
+	if (mchandle == NULL) {
+		mchandle = bna_mcam_mod_handle_get(&rxf->rx->bna->mcam_mod);
+		mchandle->handle = handle;
+		mchandle->refcnt = 0;
+		list_add_tail(&mchandle->qe, &rxf->mcast_handle_q);
+	}
+	mchandle->refcnt++;
+	mcmac->handle = mchandle;
+}
+
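+/*
+ * Drop a reference on the multicast handle of "mac". When the last
+ * reference goes away, the handle is freed and, for hard cleanup, a
+ * delete request is posted to FW (return value 1).
+ */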
+static int
+bna_rxf_mcast_del(struct bna_rxf *rxf, struct bna_mac *mac,
+		enum bna_cleanup_type cleanup)
+{
+	struct bna_mcam_handle *mchandle;
+	int ret = 0;
+
+	mchandle = mac->handle;
+	if (mchandle == NULL)
+		return ret;
+
+	mchandle->refcnt--;
+	if (mchandle->refcnt == 0) {
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_del_req(rxf, mchandle->handle);
+			ret = 1;
+		}
+		list_del(&mchandle->qe);
+		bfa_q_qe_init(&mchandle->qe);
+		bna_mcam_mod_handle_put(&rxf->rx->bna->mcam_mod, mchandle);
+	}
+	mac->handle = NULL;
+
+	return ret;
+}
+
+static int
+bna_rxf_mcast_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+	int ret;
+
+	/* Delete multicast entries previously added */
+	while (!list_empty(&rxf->mcast_pending_del_q)) {
+		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		ret = bna_rxf_mcast_del(rxf, mac, BNA_HARD_CLEANUP);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (ret)
+			return ret;
+	}
+
+	/* Add multicast entries */
+	if (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		list_add_tail(&mac->qe, &rxf->mcast_active_q);
+		bna_bfi_mcast_add_req(rxf, mac);
+		return 1;
+	}
+
+	return 0;
+}
+
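+/*
+ * Program the lowest-numbered pending VLAN filter block to FW.
+ * Returns 1 if a request was posted, 0 if no block is pending.
+ */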
+static int
+bna_rxf_vlan_cfg_apply(struct bna_rxf *rxf)
+{
+	u8 vlan_pending_bitmask;
+	int block_idx = 0;
+
+	if (rxf->vlan_pending_bitmask) {
+		vlan_pending_bitmask = rxf->vlan_pending_bitmask;
+		while (!(vlan_pending_bitmask & 0x1)) {
+			block_idx++;
+			vlan_pending_bitmask >>= 1;
+		}
+		rxf->vlan_pending_bitmask &= ~(1 << block_idx);
+		bna_bfi_rx_vlan_filter_set(rxf, block_idx);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_mcast_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct list_head *qe;
+	struct bna_mac *mac;
+	int ret;
+
+	/* Throw away delete pending mcast entries */
+	while (!list_empty(&rxf->mcast_pending_del_q)) {
+		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		ret = bna_rxf_mcast_del(rxf, mac, cleanup);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (ret)
+			return ret;
+	}
+
+	/* Move active mcast entries to pending_add_q */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		bfa_q_qe_init(qe);
+		list_add_tail(qe, &rxf->mcast_pending_add_q);
+		mac = (struct bna_mac *)qe;
+		if (bna_rxf_mcast_del(rxf, mac, cleanup))
+			return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_rss_cfg_apply(struct bna_rxf *rxf)
+{
+	if (rxf->rss_pending) {
+		if (rxf->rss_pending & BNA_RSS_F_RIT_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_RIT_PENDING;
+			bna_bfi_rit_cfg(rxf);
+			return 1;
+		}
+
+		if (rxf->rss_pending & BNA_RSS_F_CFG_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_CFG_PENDING;
+			bna_bfi_rss_cfg(rxf);
+			return 1;
+		}
+
+		if (rxf->rss_pending & BNA_RSS_F_STATUS_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_STATUS_PENDING;
+			bna_bfi_rss_enable(rxf);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
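+/*
+ * Apply the next pending RXF configuration change. Returns 1 if a
+ * command was posted to FW (the state machine then waits in cfg_wait
+ * for RXF_E_FW_RESP), 0 if nothing is pending.
+ */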
+static int
+bna_rxf_cfg_apply(struct bna_rxf *rxf)
+{
+	if (bna_rxf_ucast_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_mcast_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_promisc_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_allmulti_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_vlan_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_vlan_strip_cfg_apply(rxf))
+		return 1;
+
+	if (bna_rxf_rss_cfg_apply(rxf))
+		return 1;
+
+	return 0;
+}
+
+/* Only software reset */
+static int
+bna_rxf_fltr_clear(struct bna_rxf *rxf)
+{
+	if (bna_rxf_ucast_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	if (bna_rxf_mcast_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	if (bna_rxf_promisc_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	if (bna_rxf_allmulti_cfg_reset(rxf, BNA_HARD_CLEANUP))
+		return 1;
+
+	return 0;
+}
+
+static void
+bna_rxf_cfg_reset(struct bna_rxf *rxf)
+{
+	bna_rxf_ucast_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_mcast_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_promisc_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_allmulti_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_vlan_cfg_soft_reset(rxf);
+	bna_rxf_rss_cfg_soft_reset(rxf);
+}
+
+static void
+bna_rit_init(struct bna_rxf *rxf, int rit_size)
+{
+	struct bna_rx *rx = rxf->rx;
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+	int offset = 0;
+
+	rxf->rit_size = rit_size;
+	list_for_each(qe, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe;
+		rxf->rit[offset] = rxp->cq.ccb->id;
+		offset++;
+	}
+}
+
+void
+bna_bfi_rxf_cfg_rsp(struct bna_rxf *rxf, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(rxf, RXF_E_FW_RESP);
+}
+
+void
+bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_mcast_add_req *req =
+		&rxf->bfi_enet_cmd.mcast_add_req;
+	struct bfi_enet_mcast_add_rsp *rsp =
+		(struct bfi_enet_mcast_add_rsp *)msghdr;
+
+	bna_rxf_mchandle_attach(rxf, (u8 *)&req->mac_addr,
+		ntohs(rsp->handle));
+	bfa_fsm_send_event(rxf, RXF_E_FW_RESP);
+}
+
+static void
+bna_rxf_init(struct bna_rxf *rxf,
+		struct bna_rx *rx,
+		struct bna_rx_config *q_config,
+		struct bna_res_info *res_info)
+{
+	rxf->rx = rx;
+
+	INIT_LIST_HEAD(&rxf->ucast_pending_add_q);
+	INIT_LIST_HEAD(&rxf->ucast_pending_del_q);
+	rxf->ucast_pending_set = 0;
+	rxf->ucast_active_set = 0;
+	INIT_LIST_HEAD(&rxf->ucast_active_q);
+	rxf->ucast_pending_mac = NULL;
+
+	INIT_LIST_HEAD(&rxf->mcast_pending_add_q);
+	INIT_LIST_HEAD(&rxf->mcast_pending_del_q);
+	INIT_LIST_HEAD(&rxf->mcast_active_q);
+	INIT_LIST_HEAD(&rxf->mcast_handle_q);
+
+	if (q_config->paused)
+		rxf->flags |= BNA_RXF_F_PAUSED;
+
+	rxf->rit = (u8 *)
+		res_info[BNA_RX_RES_MEM_T_RIT].res_u.mem_info.mdl[0].kva;
+	bna_rit_init(rxf, q_config->num_paths);
+
+	rxf->rss_status = q_config->rss_status;
+	if (rxf->rss_status == BNA_STATUS_T_ENABLED) {
+		rxf->rss_cfg = q_config->rss_config;
+		rxf->rss_pending |= BNA_RSS_F_CFG_PENDING;
+		rxf->rss_pending |= BNA_RSS_F_RIT_PENDING;
+		rxf->rss_pending |= BNA_RSS_F_STATUS_PENDING;
+	}
+
+	rxf->vlan_filter_status = BNA_STATUS_T_DISABLED;
+	memset(rxf->vlan_filter_table, 0,
+			(sizeof(u32) * ((BFI_ENET_VLAN_ID_MAX + 1) / 32)));
+	rxf->vlan_filter_table[0] |= 1; /* for pure priority tagged frames */
+	rxf->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;
+
+	rxf->vlan_strip_status = q_config->vlan_strip_status;
+
+	bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+}
+
+static void
+bna_rxf_uninit(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac;
+
+	rxf->ucast_pending_set = 0;
+	rxf->ucast_active_set = 0;
+
+	while (!list_empty(&rxf->ucast_pending_add_q)) {
+		bfa_q_deq(&rxf->ucast_pending_add_q, &mac);
+		bfa_q_qe_init(&mac->qe);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+	}
+
+	if (rxf->ucast_pending_mac) {
+		bfa_q_qe_init(&rxf->ucast_pending_mac->qe);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod,
+			rxf->ucast_pending_mac);
+		rxf->ucast_pending_mac = NULL;
+	}
+
+	while (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &mac);
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	rxf->rxmode_pending = 0;
+	rxf->rxmode_pending_bitmask = 0;
+	if (rxf->rx->bna->promisc_rid == rxf->rx->rid)
+		rxf->rx->bna->promisc_rid = BFI_INVALID_RID;
+	if (rxf->rx->bna->default_mode_rid == rxf->rx->rid)
+		rxf->rx->bna->default_mode_rid = BFI_INVALID_RID;
+
+	rxf->rss_pending = 0;
+	rxf->vlan_strip_pending = false;
+
+	rxf->flags = 0;
+
+	rxf->rx = NULL;
+}
+
+static void
+bna_rx_cb_rxf_started(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_RXF_STARTED);
+}
+
+static void
+bna_rxf_start(struct bna_rxf *rxf)
+{
+	rxf->start_cbfn = bna_rx_cb_rxf_started;
+	rxf->start_cbarg = rxf->rx;
+	bfa_fsm_send_event(rxf, RXF_E_START);
+}
+
+static void
+bna_rx_cb_rxf_stopped(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_RXF_STOPPED);
+}
+
+static void
+bna_rxf_stop(struct bna_rxf *rxf)
+{
+	rxf->stop_cbfn = bna_rx_cb_rxf_stopped;
+	rxf->stop_cbarg = rxf->rx;
+	bfa_fsm_send_event(rxf, RXF_E_STOP);
+}
+
+static void
+bna_rxf_fail(struct bna_rxf *rxf)
+{
+	bfa_fsm_send_event(rxf, RXF_E_FAIL);
+}
+
+enum bna_cb_status
+bna_rx_ucast_set(struct bna_rx *rx, u8 *ucmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+
+	if (rxf->ucast_pending_mac == NULL) {
+		rxf->ucast_pending_mac =
+				bna_ucam_mod_mac_get(&rxf->rx->bna->ucam_mod);
+		if (rxf->ucast_pending_mac == NULL)
+			return BNA_CB_UCAST_CAM_FULL;
+		bfa_q_qe_init(&rxf->ucast_pending_mac->qe);
+	}
+
+	memcpy(rxf->ucast_pending_mac->addr, ucmac, ETH_ALEN);
+	rxf->ucast_pending_set = 1;
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+
+	return BNA_CB_SUCCESS;
+}
+
+enum bna_cb_status
+bna_rx_mcast_add(struct bna_rx *rx, u8 *addr,
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct bna_mac *mac;
+
+	/* Check if already added or pending addition */
+	if (bna_mac_find(&rxf->mcast_active_q, addr) ||
+		bna_mac_find(&rxf->mcast_pending_add_q, addr)) {
+		if (cbfn)
+			cbfn(rx->bna->bnad, rx);
+		return BNA_CB_SUCCESS;
+	}
+
+	mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
+	if (mac == NULL)
+		return BNA_CB_MCAST_LIST_FULL;
+	bfa_q_qe_init(&mac->qe);
+	memcpy(mac->addr, addr, ETH_ALEN);
+	list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
+
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+
+	return BNA_CB_SUCCESS;
+}
+
+enum bna_cb_status
+bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mclist,
+		     void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct list_head list_head;
+	struct list_head *qe;
+	u8 *mcaddr;
+	struct bna_mac *mac;
+	int i;
+
+	/* Allocate nodes */
+	INIT_LIST_HEAD(&list_head);
+	for (i = 0, mcaddr = mclist; i < count; i++) {
+		mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
+		if (mac == NULL)
+			goto err_return;
+		bfa_q_qe_init(&mac->qe);
+		memcpy(mac->addr, mcaddr, ETH_ALEN);
+		list_add_tail(&mac->qe, &list_head);
+
+		mcaddr += ETH_ALEN;
+	}
+
+	/* Purge the pending_add_q */
+	while (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	/* Schedule active_q entries for deletion */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
+	}
+
+	/* Add the new entries */
+	while (!list_empty(&list_head)) {
+		bfa_q_deq(&list_head, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
+	}
+
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+
+	return BNA_CB_SUCCESS;
+
+err_return:
+	while (!list_empty(&list_head)) {
+		bfa_q_deq(&list_head, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	return BNA_CB_MCAST_LIST_FULL;
+}
+
+void
+bna_rx_vlan_add(struct bna_rx *rx, int vlan_id)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int index = (vlan_id >> BFI_VLAN_WORD_SHIFT);
+	int bit = (1 << (vlan_id & BFI_VLAN_WORD_MASK));
+	int group_id = (vlan_id >> BFI_VLAN_BLOCK_SHIFT);
+
+	rxf->vlan_filter_table[index] |= bit;
+	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
+		rxf->vlan_pending_bitmask |= (1 << group_id);
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+void
+bna_rx_vlan_del(struct bna_rx *rx, int vlan_id)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int index = (vlan_id >> BFI_VLAN_WORD_SHIFT);
+	int bit = (1 << (vlan_id & BFI_VLAN_WORD_MASK));
+	int group_id = (vlan_id >> BFI_VLAN_BLOCK_SHIFT);
+
+	rxf->vlan_filter_table[index] &= ~bit;
+	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
+		rxf->vlan_pending_bitmask |= (1 << group_id);
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+static int
+bna_rxf_ucast_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+
+	/* Delete MAC addresses previously added */
+	if (!list_empty(&rxf->ucast_pending_del_q)) {
+		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		bna_bfi_ucast_req(rxf, mac, BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+		return 1;
+	}
+
+	/* Set default unicast MAC */
+	if (rxf->ucast_pending_set) {
+		rxf->ucast_pending_set = 0;
+		memcpy(rxf->ucast_active_mac.addr,
+			rxf->ucast_pending_mac->addr, ETH_ALEN);
+		rxf->ucast_active_set = 1;
+		bna_bfi_ucast_req(rxf, &rxf->ucast_active_mac,
+			BFI_ENET_H2I_MAC_UCAST_SET_REQ);
+		return 1;
+	}
+
+	/* Add additional MAC entries */
+	if (!list_empty(&rxf->ucast_pending_add_q)) {
+		bfa_q_deq(&rxf->ucast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		list_add_tail(&mac->qe, &rxf->ucast_active_q);
+		bna_bfi_ucast_req(rxf, mac, BFI_ENET_H2I_MAC_UCAST_ADD_REQ);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_ucast_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct list_head *qe;
+	struct bna_mac *mac;
+
+	/* Throw away delete pending ucast entries */
+	while (!list_empty(&rxf->ucast_pending_del_q)) {
+		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		if (cleanup == BNA_SOFT_CLEANUP)
+			bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+		else {
+			bna_bfi_ucast_req(rxf, mac,
+				BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+			bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+			return 1;
+		}
+	}
+
+	/* Move active ucast entries to pending_add_q */
+	while (!list_empty(&rxf->ucast_active_q)) {
+		bfa_q_deq(&rxf->ucast_active_q, &qe);
+		bfa_q_qe_init(qe);
+		list_add_tail(qe, &rxf->ucast_pending_add_q);
+		if (cleanup == BNA_HARD_CLEANUP) {
+			mac = (struct bna_mac *)qe;
+			bna_bfi_ucast_req(rxf, mac,
+				BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+			return 1;
+		}
+	}
+
+	if (rxf->ucast_active_set) {
+		rxf->ucast_pending_set = 1;
+		rxf->ucast_active_set = 0;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_ucast_req(rxf, &rxf->ucast_active_mac,
+				BFI_ENET_H2I_MAC_UCAST_CLR_REQ);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+
+	/* Enable/disable promiscuous mode */
+	if (is_promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move promisc configuration from pending -> active */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active |= BNA_RXMODE_PROMISC;
+		bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_ENABLED);
+		return 1;
+	} else if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move promisc configuration from pending -> active */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		bna->promisc_rid = BFI_INVALID_RID;
+		bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct bna *bna = rxf->rx->bna;
+
+	/* Clear pending promisc mode disable */
+	if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		bna->promisc_rid = BFI_INVALID_RID;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+			return 1;
+		}
+	}
+
+	/* Move promisc mode config from active -> pending */
+	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
+		promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_allmulti_cfg_apply(struct bna_rxf *rxf)
+{
+	/* Enable/disable allmulti mode */
+	if (is_allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move allmulti configuration from pending -> active */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active |= BNA_RXMODE_ALLMULTI;
+		bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_DISABLED);
+		return 1;
+	} else if (is_allmulti_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* move allmulti configuration from pending -> active */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_allmulti_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	/* Clear pending allmulti mode disable */
+	if (is_allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+			return 1;
+		}
+	}
+
+	/* Move allmulti mode config from active -> pending */
+	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
+		allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_enable(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+	int ret = 0;
+
+	if (is_promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(rxf->rxmode_active & BNA_RXMODE_PROMISC)) {
+		/* Do nothing if pending enable or already enabled */
+	} else if (is_promisc_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending disable command */
+		promisc_inactive(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask);
+	} else {
+		/* Schedule enable */
+		promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		bna->promisc_rid = rxf->rx->rid;
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_promisc_disable(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+	int ret = 0;
+
+	if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(!(rxf->rxmode_active & BNA_RXMODE_PROMISC))) {
+		/* Do nothing if pending disable or already disabled */
+	} else if (is_promisc_enable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending enable command */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		bna->promisc_rid = BFI_INVALID_RID;
+	} else if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
+		/* Schedule disable */
+		promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_allmulti_enable(struct bna_rxf *rxf)
+{
+	int ret = 0;
+
+	if (is_allmulti_enable(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask) ||
+			(rxf->rxmode_active & BNA_RXMODE_ALLMULTI)) {
+		/* Do nothing if pending enable or already enabled */
+	} else if (is_allmulti_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending disable command */
+		allmulti_inactive(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask);
+	} else {
+		/* Schedule enable */
+		allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_allmulti_disable(struct bna_rxf *rxf)
+{
+	int ret = 0;
+
+	if (is_allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(!(rxf->rxmode_active & BNA_RXMODE_ALLMULTI))) {
+		/* Do nothing if pending disable or already disabled */
+	} else if (is_allmulti_enable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending enable command */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+	} else if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
+		/* Schedule disable */
+		allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf)
+{
+	if (rxf->vlan_strip_pending) {
+		rxf->vlan_strip_pending = false;
+		bna_bfi_vlan_strip_enable(rxf);
+		return 1;
+	}
+
+	return 0;
+}
+
+enum bna_cb_status
+bna_rx_mcast_del(struct bna_rx *rx, u8 *addr,
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct bna_mac *mac;
+
+	mac = bna_mac_find(&rxf->mcast_pending_add_q, addr);
+	if (mac) {
+		list_del(&mac->qe);
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (cbfn)
+			(*cbfn)(rx->bna->bnad, rx);
+		return BNA_CB_SUCCESS;
+	}
+
+	mac = bna_mac_find(&rxf->mcast_active_q, addr);
+	if (mac) {
+		list_del(&mac->qe);
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_INVALID_MAC;
+}
+
+void
+bna_rx_mcast_delall(struct bna_rx *rx,
+		    void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct list_head *qe;
+	struct bna_mac *mac;
+	int need_hw_config = 0;
+
+	/* Purge all entries from pending_add_q */
+	while (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	/* Schedule all entries in active_q for deletion */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
+		need_hw_config = 1;
+	}
+
+	if (need_hw_config) {
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+		return;
+	}
+
+	if (cbfn)
+		(*cbfn)(rx->bna->bnad, rx);
+}
+
+/**
+ * RX
+ */
+
+#define	BNA_GET_RXQS(qcfg)	(((qcfg)->rxp_type == BNA_RXP_SINGLE) ?	\
+	(qcfg)->num_paths : ((qcfg)->num_paths * 2))
+
+#define	SIZE_TO_PAGES(size)	(((size) >> PAGE_SHIFT) + ((((size) &\
+	(PAGE_SIZE - 1)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT))
+
+#define	call_rx_stop_cbfn(rx)						\
+do {									\
+	if ((rx)->stop_cbfn) {						\
+		void (*cbfn)(void *, struct bna_rx *);			\
+		void *cbarg;						\
+		cbfn = (rx)->stop_cbfn;					\
+		cbarg = (rx)->stop_cbarg;				\
+		(rx)->stop_cbfn = NULL;					\
+		(rx)->stop_cbarg = NULL;				\
+		cbfn(cbarg, rx);					\
+	}								\
+} while (0)
+
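+/*
+ * Initialize a BFI queue descriptor from the driver's queue page table:
+ * page table DMA address, address of the first queue page, page count
+ * and page size.
+ */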
+#define bfi_enet_datapath_q_init(bfi_q, bna_qpt)			\
+do {									\
+	struct bna_dma_addr cur_q_addr =				\
+		*((struct bna_dma_addr *)((bna_qpt)->kv_qpt_ptr));	\
+	(bfi_q)->pg_tbl.a32.addr_lo = (bna_qpt)->hw_qpt_ptr.lsb;	\
+	(bfi_q)->pg_tbl.a32.addr_hi = (bna_qpt)->hw_qpt_ptr.msb;	\
+	(bfi_q)->first_entry.a32.addr_lo = cur_q_addr.lsb;		\
+	(bfi_q)->first_entry.a32.addr_hi = cur_q_addr.msb;		\
+	(bfi_q)->pages = htons((u16)(bna_qpt)->page_count);	\
+	(bfi_q)->page_sz = htons((u16)(bna_qpt)->page_size);\
+} while (0)
+
+static void bna_bfi_rx_enet_start(struct bna_rx *rx);
+static void bna_rx_enet_stop(struct bna_rx *rx);
+static void bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx);
+
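+/*
+ * Rx state machine: the Rx datapath is set up in FW through
+ * BFI_ENET_H2I_RX_CFG_SET_REQ (start_wait) and torn down through
+ * BFI_ENET_H2I_RX_CFG_CLR_REQ (stop_wait); the RXF is started after
+ * the enet start and stopped before the enet stop.
+ */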
+bfa_fsm_state_decl(bna_rx, stopped,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, start_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, rxf_start_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, started,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, rxf_stop_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, stop_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, cleanup_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, failed,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, quiesce_wait,
+	struct bna_rx, enum bna_rx_event);
+
+static void bna_rx_sm_stopped_entry(struct bna_rx *rx)
+{
+	call_rx_stop_cbfn(rx);
+}
+
+static void bna_rx_sm_stopped(struct bna_rx *rx,
+				enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_START:
+		bfa_fsm_set_state(rx, bna_rx_sm_start_wait);
+		break;
+
+	case RX_E_STOP:
+		call_rx_stop_cbfn(rx);
+		break;
+
+	case RX_E_FAIL:
+		/* no-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void bna_rx_sm_start_wait_entry(struct bna_rx *rx)
+{
+	bna_bfi_rx_enet_start(rx);
+}
+
+static void bna_rx_sm_start_wait(struct bna_rx *rx,
+				enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_stop_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
+
+	case RX_E_STARTED:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_start_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void bna_rx_sm_rxf_start_wait_entry(struct bna_rx *rx)
+{
+	rx->rx_post_cbfn(rx->bna->bnad, rx);
+	bna_rxf_start(&rx->rxf);
+}
+
+static void bna_rx_sm_rxf_start_wait(struct bna_rx *rx,
+				enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_RXF_STARTED:
+		bfa_fsm_set_state(rx, bna_rx_sm_started);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+void
+bna_rx_sm_started_entry(struct bna_rx *rx)
+{
+	struct bna_rxp *rxp;
+	struct list_head *qe_rxp;
+	int is_regular = (rx->type == BNA_RX_T_REGULAR);
+
+	/* Start IB */
+	list_for_each(qe_rxp, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe_rxp;
+		bna_ib_start(rx->bna, &rxp->cq.ib, is_regular);
+	}
+
+	bna_ethport_cb_rx_started(&rx->bna->ethport);
+}
+
+void
+bna_rx_sm_started(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
+		bna_ethport_cb_rx_stopped(&rx->bna->ethport);
+		bna_rxf_stop(&rx->rxf);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		bna_ethport_cb_rx_stopped(&rx->bna->ethport);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+void
+bna_rx_sm_rxf_stop_wait_entry(struct bna_rx *rx)
+{
+}
+
+void
+bna_rx_sm_rxf_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_RXF_STARTED:
+		bna_rxf_stop(&rx->rxf);
+		break;
+
+	case RX_E_RXF_STOPPED:
+		bfa_fsm_set_state(rx, bna_rx_sm_stop_wait);
+		bna_rx_enet_stop(rx);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+void
+bna_rx_sm_stop_wait_entry(struct bna_rx *rx)
+{
+}
+
+void
+bna_rx_sm_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_FAIL:
+	case RX_E_STOPPED:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_STARTED:
+		bna_rx_enet_stop(rx);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+void
+bna_rx_sm_cleanup_wait_entry(struct bna_rx *rx)
+{
+}
+
+void
+bna_rx_sm_cleanup_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_FAIL:
+	case RX_E_RXF_STOPPED:
+		/* No-op */
+		break;
+
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_rx_sm_failed_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_failed(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_START:
+		bfa_fsm_set_state(rx, bna_rx_sm_quiesce_wait);
+		break;
+
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		break;
+
+	case RX_E_FAIL:
+	case RX_E_RXF_STARTED:
+	case RX_E_RXF_STOPPED:
+		/* No-op */
+		break;
+
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_rx_sm_quiesce_wait_entry(struct bna_rx *rx)
+{
+}
+
+static void
+bna_rx_sm_quiesce_wait(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		break;
+
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_start_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
+
+static void
+bna_bfi_rx_enet_start(struct bna_rx *rx)
+{
+	struct bfi_enet_rx_cfg_req *cfg_req = &rx->bfi_enet_cmd.cfg_req;
+	struct bna_rxp *rxp = NULL;
+	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	struct list_head *rxp_qe;
+	int i;
+
+	bfi_msgq_mhdr_set(cfg_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_CFG_SET_REQ, 0, rx->rid);
+	cfg_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rx_cfg_req)));
+
+	cfg_req->num_queue_sets = rx->num_paths;
+	for (i = 0, rxp_qe = bfa_q_first(&rx->rxp_q);
+		i < rx->num_paths;
+		i++, rxp_qe = bfa_q_next(rxp_qe)) {
+		rxp = (struct bna_rxp *)rxp_qe;
+
+		GET_RXQS(rxp, q0, q1);
+		switch (rxp->type) {
+		case BNA_RXP_SLR:
+		case BNA_RXP_HDS:
+			/* Small RxQ */
+			bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].qs.q,
+						&q1->qpt);
+			cfg_req->q_cfg[i].qs.rx_buffer_size =
+				htons((u16)q1->buffer_size);
+			/* Fall through */
+
+		case BNA_RXP_SINGLE:
+			/* Large/Single RxQ */
+			bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].ql.q,
+						&q0->qpt);
+			q0->buffer_size =
+				bna_enet_mtu_get(&rx->bna->enet);
+			cfg_req->q_cfg[i].ql.rx_buffer_size =
+				htons((u16)q0->buffer_size);
+			break;
+
+		default:
+			BUG_ON(1);
+		}
+
+		bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].cq.q,
+					&rxp->cq.qpt);
+
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_lo =
+			rxp->cq.ib.ib_seg_host_addr.lsb;
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_hi =
+			rxp->cq.ib.ib_seg_host_addr.msb;
+		cfg_req->q_cfg[i].ib.intr.msix_index =
+			htons((u16)rxp->cq.ib.intr_vector);
+	}
+
+	cfg_req->ib_cfg.int_pkt_dma = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.int_enabled = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_pkt_enabled = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.continuous_coalescing = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.msix = (rxp->cq.ib.intr_type == BNA_INTR_T_MSIX)
+				? BNA_STATUS_T_ENABLED :
+				BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.coalescing_timeout =
+			htonl((u32)rxp->cq.ib.coalescing_timeo);
+	cfg_req->ib_cfg.inter_pkt_timeout =
+			htonl((u32)rxp->cq.ib.interpkt_timeo);
+	cfg_req->ib_cfg.inter_pkt_count = (u8)rxp->cq.ib.interpkt_count;
+
+	switch (rxp->type) {
+	case BNA_RXP_SLR:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_LARGE_SMALL;
+		break;
+
+	case BNA_RXP_HDS:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_HDS;
+		cfg_req->rx_cfg.hds.type = rx->hds_cfg.hdr_type;
+		cfg_req->rx_cfg.hds.force_offset = rx->hds_cfg.forced_offset;
+		cfg_req->rx_cfg.hds.max_header_size = rx->hds_cfg.forced_offset;
+		break;
+
+	case BNA_RXP_SINGLE:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_SINGLE;
+		break;
+
+	default:
+		BUG_ON(1);
+	}
+	cfg_req->rx_cfg.strip_vlan = rx->rxf.vlan_strip_status;
+
+	bfa_msgq_cmd_set(&rx->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rx_cfg_req), &cfg_req->mh);
+	bfa_msgq_cmd_post(&rx->bna->msgq, &rx->msgq_cmd);
+}
+
+static void
+bna_bfi_rx_enet_stop(struct bna_rx *rx)
+{
+	struct bfi_enet_req *req = &rx->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_CFG_CLR_REQ, 0, rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));
+	bfa_msgq_cmd_set(&rx->msgq_cmd, NULL, NULL, sizeof(struct bfi_enet_req),
+		&req->mh);
+	bfa_msgq_cmd_post(&rx->bna->msgq, &rx->msgq_cmd);
+}
+
+static void
+bna_rx_enet_stop(struct bna_rx *rx)
+{
+	struct bna_rxp *rxp;
+	struct list_head		 *qe_rxp;
+
+	/* Stop IB */
+	list_for_each(qe_rxp, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe_rxp;
+		bna_ib_stop(rx->bna, &rxp->cq.ib);
+	}
+
+	bna_bfi_rx_enet_stop(rx);
+}
+
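+/* Check that enough free Rx, RxP and RxQ objects exist for this config */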
+static int
+bna_rx_res_check(struct bna_rx_mod *rx_mod, struct bna_rx_config *rx_cfg)
+{
+	if ((rx_mod->rx_free_count == 0) ||
+		(rx_mod->rxp_free_count == 0) ||
+		(rx_mod->rxq_free_count == 0))
+		return 0;
+
+	if (rx_cfg->rxp_type == BNA_RXP_SINGLE) {
+		if ((rx_mod->rxp_free_count < rx_cfg->num_paths) ||
+			(rx_mod->rxq_free_count < rx_cfg->num_paths))
+				return 0;
+	} else {
+		if ((rx_mod->rxp_free_count < rx_cfg->num_paths) ||
+			(rx_mod->rxq_free_count < (2 * rx_cfg->num_paths)))
+			return 0;
+	}
+
+	return 1;
+}
+
+static struct bna_rxq *
+bna_rxq_get(struct bna_rx_mod *rx_mod)
+{
+	struct bna_rxq *rxq = NULL;
+	struct list_head	*qe = NULL;
+
+	bfa_q_deq(&rx_mod->rxq_free_q, &qe);
+	rx_mod->rxq_free_count--;
+	rxq = (struct bna_rxq *)qe;
+	bfa_q_qe_init(&rxq->qe);
+
+	return rxq;
+}
+
+static void
+bna_rxq_put(struct bna_rx_mod *rx_mod, struct bna_rxq *rxq)
+{
+	bfa_q_qe_init(&rxq->qe);
+	list_add_tail(&rxq->qe, &rx_mod->rxq_free_q);
+	rx_mod->rxq_free_count++;
+}
+
+static struct bna_rxp *
+bna_rxp_get(struct bna_rx_mod *rx_mod)
+{
+	struct list_head	*qe = NULL;
+	struct bna_rxp *rxp = NULL;
+
+	bfa_q_deq(&rx_mod->rxp_free_q, &qe);
+	rx_mod->rxp_free_count--;
+	rxp = (struct bna_rxp *)qe;
+	bfa_q_qe_init(&rxp->qe);
+
+	return rxp;
+}
+
+static void
+bna_rxp_put(struct bna_rx_mod *rx_mod, struct bna_rxp *rxp)
+{
+	bfa_q_qe_init(&rxp->qe);
+	list_add_tail(&rxp->qe, &rx_mod->rxp_free_q);
+	rx_mod->rxp_free_count++;
+}
+
+static struct bna_rx *
+bna_rx_get(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
+{
+	struct list_head	*qe = NULL;
+	struct bna_rx *rx = NULL;
+
+	if (type == BNA_RX_T_REGULAR)
+		bfa_q_deq(&rx_mod->rx_free_q, &qe);
+	else
+		bfa_q_deq_tail(&rx_mod->rx_free_q, &qe);
+
+	rx_mod->rx_free_count--;
+	rx = (struct bna_rx *)qe;
+	bfa_q_qe_init(&rx->qe);
+	list_add_tail(&rx->qe, &rx_mod->rx_active_q);
+	rx->type = type;
+
+	return rx;
+}
+
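+/* Return an Rx object to the free pool, keeping rx_free_q sorted by rid */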
+static void
+bna_rx_put(struct bna_rx_mod *rx_mod, struct bna_rx *rx)
+{
+	struct list_head *prev_qe = NULL;
+	struct list_head *qe;
+
+	bfa_q_qe_init(&rx->qe);
+
+	list_for_each(qe, &rx_mod->rx_free_q) {
+		if (((struct bna_rx *)qe)->rid < rx->rid)
+			prev_qe = qe;
+		else
+			break;
+	}
+
+	if (prev_qe == NULL) {
+		/* This is the first entry */
+		bfa_q_enq_head(&rx_mod->rx_free_q, &rx->qe);
+	} else if (bfa_q_next(prev_qe) == &rx_mod->rx_free_q) {
+		/* This is the last entry */
+		list_add_tail(&rx->qe, &rx_mod->rx_free_q);
+	} else {
+		/* Somewhere in the middle */
+		bfa_q_next(&rx->qe) = bfa_q_next(prev_qe);
+		bfa_q_prev(&rx->qe) = prev_qe;
+		bfa_q_next(prev_qe) = &rx->qe;
+		bfa_q_prev(bfa_q_next(&rx->qe)) = &rx->qe;
+	}
+
+	rx_mod->rx_free_count++;
+}
+
+static void
+bna_rxp_add_rxqs(struct bna_rxp *rxp, struct bna_rxq *q0,
+		struct bna_rxq *q1)
+{
+	switch (rxp->type) {
+	case BNA_RXP_SINGLE:
+		rxp->rxq.single.only = q0;
+		rxp->rxq.single.reserved = NULL;
+		break;
+	case BNA_RXP_SLR:
+		rxp->rxq.slr.large = q0;
+		rxp->rxq.slr.small = q1;
+		break;
+	case BNA_RXP_HDS:
+		rxp->rxq.hds.data = q0;
+		rxp->rxq.hds.hdr = q1;
+		break;
+	default:
+		break;
+	}
+}
+
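+/* Fill the hardware and shadow (software) queue page tables for an RxQ */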
+static void
+bna_rxq_qpt_setup(struct bna_rxq *rxq,
+		struct bna_rxp *rxp,
+		u32 page_count,
+		u32 page_size,
+		struct bna_mem_descr *qpt_mem,
+		struct bna_mem_descr *swqpt_mem,
+		struct bna_mem_descr *page_mem)
+{
+	int	i;
+
+	rxq->qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
+	rxq->qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
+	rxq->qpt.kv_qpt_ptr = qpt_mem->kva;
+	rxq->qpt.page_count = page_count;
+	rxq->qpt.page_size = page_size;
+
+	rxq->rcb->sw_qpt = (void **) swqpt_mem->kva;
+
+	for (i = 0; i < rxq->qpt.page_count; i++) {
+		rxq->rcb->sw_qpt[i] = page_mem[i].kva;
+		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].lsb =
+			page_mem[i].dma.lsb;
+		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].msb =
+			page_mem[i].dma.msb;
+	}
+}
+
+static void
+bna_rxp_cqpt_setup(struct bna_rxp *rxp,
+		u32 page_count,
+		u32 page_size,
+		struct bna_mem_descr *qpt_mem,
+		struct bna_mem_descr *swqpt_mem,
+		struct bna_mem_descr *page_mem)
+{
+	int	i;
+
+	rxp->cq.qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
+	rxp->cq.qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
+	rxp->cq.qpt.kv_qpt_ptr = qpt_mem->kva;
+	rxp->cq.qpt.page_count = page_count;
+	rxp->cq.qpt.page_size = page_size;
+
+	rxp->cq.ccb->sw_qpt = (void **) swqpt_mem->kva;
+
+	for (i = 0; i < rxp->cq.qpt.page_count; i++) {
+		rxp->cq.ccb->sw_qpt[i] = page_mem[i].kva;
+
+		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].lsb =
+			page_mem[i].dma.lsb;
+		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].msb =
+			page_mem[i].dma.msb;
+	}
+}
+
+static void
+bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx)
+{
+	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
+
+	bfa_wc_down(&rx_mod->rx_stop_wc);
+}
+
+static void
+bna_rx_mod_cb_rx_stopped_all(void *arg)
+{
+	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
+
+	if (rx_mod->stop_cbfn)
+		rx_mod->stop_cbfn(&rx_mod->bna->enet);
+	rx_mod->stop_cbfn = NULL;
+}
+
+static void
+bna_rx_start(struct bna_rx *rx)
+{
+	rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+	if (rx->rx_flags & BNA_RX_F_ENABLED)
+		bfa_fsm_send_event(rx, RX_E_START);
+}
+
+static void
+bna_rx_stop(struct bna_rx *rx)
+{
+	rx->rx_flags &= ~BNA_RX_F_ENET_STARTED;
+	if (rx->fsm == (bfa_fsm_t) bna_rx_sm_stopped)
+		bna_rx_mod_cb_rx_stopped(&rx->bna->rx_mod, rx);
+	else {
+		rx->stop_cbfn = bna_rx_mod_cb_rx_stopped;
+		rx->stop_cbarg = &rx->bna->rx_mod;
+		bfa_fsm_send_event(rx, RX_E_STOP);
+	}
+}
+
+static void
+bna_rx_fail(struct bna_rx *rx)
+{
+	/* Indicate Enet is not enabled, and failed */
+	rx->rx_flags &= ~BNA_RX_F_ENET_STARTED;
+	bfa_fsm_send_event(rx, RX_E_FAIL);
+}
+
+void
+bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
+{
+	struct bna_rx *rx;
+	struct list_head *qe;
+
+	rx_mod->flags |= BNA_RX_MOD_F_ENET_STARTED;
+	if (type == BNA_RX_T_LOOPBACK)
+		rx_mod->flags |= BNA_RX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		rx = (struct bna_rx *)qe;
+		if (rx->type == type)
+			bna_rx_start(rx);
+	}
+}
+
+void
+bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
+{
+	struct bna_rx *rx;
+	struct list_head *qe;
+
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_STARTED;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_LOOPBACK;
+
+	rx_mod->stop_cbfn = bna_enet_cb_rx_stopped;
+
+	bfa_wc_init(&rx_mod->rx_stop_wc, bna_rx_mod_cb_rx_stopped_all, rx_mod);
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		rx = (struct bna_rx *)qe;
+		if (rx->type == type) {
+			bfa_wc_up(&rx_mod->rx_stop_wc);
+			bna_rx_stop(rx);
+		}
+	}
+
+	bfa_wc_wait(&rx_mod->rx_stop_wc);
+}
+
+void
+bna_rx_mod_fail(struct bna_rx_mod *rx_mod)
+{
+	struct bna_rx *rx;
+	struct list_head *qe;
+
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_STARTED;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		rx = (struct bna_rx *)qe;
+		bna_rx_fail(rx);
+	}
+}
+
+void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
+			struct bna_res_info *res_info)
+{
+	int	index;
+	struct bna_rx *rx_ptr;
+	struct bna_rxp *rxp_ptr;
+	struct bna_rxq *rxq_ptr;
+
+	rx_mod->bna = bna;
+	rx_mod->flags = 0;
+
+	rx_mod->rx = (struct bna_rx *)
+		res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.mdl[0].kva;
+	rx_mod->rxp = (struct bna_rxp *)
+		res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mdl[0].kva;
+	rx_mod->rxq = (struct bna_rxq *)
+		res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	/* Initialize the queues */
+	INIT_LIST_HEAD(&rx_mod->rx_free_q);
+	rx_mod->rx_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rxq_free_q);
+	rx_mod->rxq_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rxp_free_q);
+	rx_mod->rxp_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rx_active_q);
+
+	/* Build RX queues */
+	for (index = 0; index < bna->ioceth.attr.num_rxp; index++) {
+		rx_ptr = &rx_mod->rx[index];
+
+		bfa_q_qe_init(&rx_ptr->qe);
+		INIT_LIST_HEAD(&rx_ptr->rxp_q);
+		rx_ptr->bna = NULL;
+		rx_ptr->rid = index;
+		rx_ptr->stop_cbfn = NULL;
+		rx_ptr->stop_cbarg = NULL;
+
+		list_add_tail(&rx_ptr->qe, &rx_mod->rx_free_q);
+		rx_mod->rx_free_count++;
+	}
+
+	/* build RX-path queue */
+	for (index = 0; index < bna->ioceth.attr.num_rxp; index++) {
+		rxp_ptr = &rx_mod->rxp[index];
+		bfa_q_qe_init(&rxp_ptr->qe);
+		list_add_tail(&rxp_ptr->qe, &rx_mod->rxp_free_q);
+		rx_mod->rxp_free_count++;
+	}
+
+	/* build RXQ queue */
+	for (index = 0; index < (bna->ioceth.attr.num_rxp * 2); index++) {
+		rxq_ptr = &rx_mod->rxq[index];
+		bfa_q_qe_init(&rxq_ptr->qe);
+		list_add_tail(&rxq_ptr->qe, &rx_mod->rxq_free_q);
+		rx_mod->rxq_free_count++;
+	}
+}
+
+void
+bna_rx_mod_uninit(struct bna_rx_mod *rx_mod)
+{
+	struct list_head		*qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &rx_mod->rx_free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &rx_mod->rxp_free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &rx_mod->rxq_free_q)
+		i++;
+
+	rx_mod->bna = NULL;
+}
+
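+/*
+ * Firmware response to RX_CFG_SET_REQ: copy out the queue handles, program
+ * the doorbell addresses, reset the producer/consumer indexes and report
+ * RX_E_STARTED to the Rx state machine.
+ */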
+void
+bna_bfi_rx_enet_start_rsp(struct bna_rx *rx, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_rx_cfg_rsp *cfg_rsp = &rx->bfi_enet_cmd.cfg_rsp;
+	struct bna_rxp *rxp = NULL;
+	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	struct list_head *rxp_qe;
+	int i;
+
+	bfa_msgq_rsp_copy(&rx->bna->msgq, (u8 *)cfg_rsp,
+		sizeof(struct bfi_enet_rx_cfg_rsp));
+
+	rx->hw_id = cfg_rsp->hw_id;
+
+	for (i = 0, rxp_qe = bfa_q_first(&rx->rxp_q);
+		i < rx->num_paths;
+		i++, rxp_qe = bfa_q_next(rxp_qe)) {
+		rxp = (struct bna_rxp *)rxp_qe;
+		GET_RXQS(rxp, q0, q1);
+
+		/* Setup doorbells */
+		rxp->cq.ccb->i_dbell->doorbell_addr =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].i_dbell);
+		rxp->hw_id = cfg_rsp->q_handles[i].hw_cqid;
+		q0->rcb->q_dbell =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].ql_dbell);
+		q0->hw_id = cfg_rsp->q_handles[i].hw_lqid;
+		if (q1) {
+			q1->rcb->q_dbell =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].qs_dbell);
+			q1->hw_id = cfg_rsp->q_handles[i].hw_sqid;
+		}
+
+		/* Initialize producer/consumer indexes */
+		(*rxp->cq.ccb->hw_producer_index) = 0;
+		rxp->cq.ccb->producer_index = 0;
+		q0->rcb->producer_index = q0->rcb->consumer_index = 0;
+		if (q1)
+			q1->rcb->producer_index = q1->rcb->consumer_index = 0;
+	}
+
+	bfa_fsm_send_event(rx, RX_E_STARTED);
+}
+
+void
+bna_bfi_rx_enet_stop_rsp(struct bna_rx *rx, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(rx, RX_E_STOPPED);
+}
+
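+/*
+ * Compute the memory and interrupt resources needed for an Rx object:
+ * CQ, data-Q and header-Q page tables and pages are sized from the
+ * rounded-up queue depths, and one MSI-X vector is requested per path.
+ */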
+void
+bna_rx_res_req(struct bna_rx_config *q_cfg, struct bna_res_info *res_info)
+{
+	u32 cq_size, hq_size, dq_size;
+	u32 cpage_count, hpage_count, dpage_count;
+	struct bna_mem_info *mem_info;
+	u32 cq_depth;
+	u32 hq_depth;
+	u32 dq_depth;
+
+	dq_depth = q_cfg->q_depth;
+	hq_depth = ((q_cfg->rxp_type == BNA_RXP_SINGLE) ? 0 : q_cfg->q_depth);
+	cq_depth = dq_depth + hq_depth;
+
+	BNA_TO_POWER_OF_2_HIGH(cq_depth);
+	cq_size = cq_depth * BFI_CQ_WI_SIZE;
+	cq_size = ALIGN(cq_size, PAGE_SIZE);
+	cpage_count = SIZE_TO_PAGES(cq_size);
+
+	BNA_TO_POWER_OF_2_HIGH(dq_depth);
+	dq_size = dq_depth * BFI_RXQ_WI_SIZE;
+	dq_size = ALIGN(dq_size, PAGE_SIZE);
+	dpage_count = SIZE_TO_PAGES(dq_size);
+
+	if (BNA_RXP_SINGLE != q_cfg->rxp_type) {
+		BNA_TO_POWER_OF_2_HIGH(hq_depth);
+		hq_size = hq_depth * BFI_RXQ_WI_SIZE;
+		hq_size = ALIGN(hq_size, PAGE_SIZE);
+		hpage_count = SIZE_TO_PAGES(hq_size);
+	} else
+		hpage_count = 0;
+
+	res_info[BNA_RX_RES_MEM_T_CCB].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = sizeof(struct bna_ccb);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_RCB].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = sizeof(struct bna_rcb);
+	mem_info->num = BNA_GET_RXQS(q_cfg);
+
+	res_info[BNA_RX_RES_MEM_T_CQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = cpage_count * sizeof(struct bna_dma_addr);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_CSWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = cpage_count * sizeof(void *);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = PAGE_SIZE;
+	mem_info->num = cpage_count * q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_DQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = dpage_count * sizeof(struct bna_dma_addr);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_DSWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = dpage_count * sizeof(void *);
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_DPAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = PAGE_SIZE;
+	mem_info->num = dpage_count * q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_HQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = hpage_count * sizeof(struct bna_dma_addr);
+	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
+
+	res_info[BNA_RX_RES_MEM_T_HSWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = hpage_count * sizeof(void *);
+	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
+
+	res_info[BNA_RX_RES_MEM_T_HPAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = (hpage_count ? PAGE_SIZE : 0);
+	mem_info->num = (hpage_count ? (hpage_count * q_cfg->num_paths) : 0);
+
+	res_info[BNA_RX_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = BFI_IBIDX_SIZE;
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_RIT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_RIT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = BFI_ENET_RSS_RIT_MAX;
+	mem_info->num = 1;
+
+	res_info[BNA_RX_RES_T_INTR].res_type = BNA_RES_T_INTR;
+	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.intr_type = BNA_INTR_T_MSIX;
+	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.num = q_cfg->num_paths;
+}
+
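+/*
+ * Allocate an Rx object from the free pool and wire up its paths: RxQs,
+ * CQs, interrupt blocks, queue page tables and the RxF, using the
+ * resources described in res_info.
+ */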
+struct bna_rx *
+bna_rx_create(struct bna *bna, struct bnad *bnad,
+		struct bna_rx_config *rx_cfg,
+		struct bna_rx_event_cbfn *rx_cbfn,
+		struct bna_res_info *res_info,
+		void *priv)
+{
+	struct bna_rx_mod *rx_mod = &bna->rx_mod;
+	struct bna_rx *rx;
+	struct bna_rxp *rxp;
+	struct bna_rxq *q0;
+	struct bna_rxq *q1;
+	struct bna_intr_info *intr_info;
+	u32 page_count;
+	struct bna_mem_descr *ccb_mem;
+	struct bna_mem_descr *rcb_mem;
+	struct bna_mem_descr *unmapq_mem;
+	struct bna_mem_descr *cqpt_mem;
+	struct bna_mem_descr *cswqpt_mem;
+	struct bna_mem_descr *cpage_mem;
+	struct bna_mem_descr *hqpt_mem;
+	struct bna_mem_descr *dqpt_mem;
+	struct bna_mem_descr *hsqpt_mem;
+	struct bna_mem_descr *dsqpt_mem;
+	struct bna_mem_descr *hpage_mem;
+	struct bna_mem_descr *dpage_mem;
+	int i, cpage_idx = 0, dpage_idx = 0, hpage_idx = 0;
+	int dpage_count, hpage_count, rcb_idx;
+
+	if (!bna_rx_res_check(rx_mod, rx_cfg))
+		return NULL;
+
+	intr_info = &res_info[BNA_RX_RES_T_INTR].res_u.intr_info;
+	ccb_mem = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info.mdl[0];
+	rcb_mem = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info.mdl[0];
+	unmapq_mem = &res_info[BNA_RX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[0];
+	cqpt_mem = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info.mdl[0];
+	cswqpt_mem = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info.mdl[0];
+	cpage_mem = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.mdl[0];
+	hqpt_mem = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info.mdl[0];
+	dqpt_mem = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info.mdl[0];
+	hsqpt_mem = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info.mdl[0];
+	dsqpt_mem = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info.mdl[0];
+	hpage_mem = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.mdl[0];
+	dpage_mem = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.mdl[0];
+
+	page_count = res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.num /
+			rx_cfg->num_paths;
+
+	dpage_count = res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.num /
+			rx_cfg->num_paths;
+
+	hpage_count = res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.num /
+			rx_cfg->num_paths;
+
+	rx = bna_rx_get(rx_mod, rx_cfg->rx_type);
+	rx->bna = bna;
+	rx->rx_flags = 0;
+	INIT_LIST_HEAD(&rx->rxp_q);
+	rx->stop_cbfn = NULL;
+	rx->stop_cbarg = NULL;
+	rx->priv = priv;
+
+	rx->rcb_setup_cbfn = rx_cbfn->rcb_setup_cbfn;
+	rx->rcb_destroy_cbfn = rx_cbfn->rcb_destroy_cbfn;
+	rx->ccb_setup_cbfn = rx_cbfn->ccb_setup_cbfn;
+	rx->ccb_destroy_cbfn = rx_cbfn->ccb_destroy_cbfn;
+	/* Following callbacks are mandatory */
+	rx->rx_cleanup_cbfn = rx_cbfn->rx_cleanup_cbfn;
+	rx->rx_post_cbfn = rx_cbfn->rx_post_cbfn;
+
+	if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_ENET_STARTED) {
+		switch (rx->type) {
+		case BNA_RX_T_REGULAR:
+			if (!(rx->bna->rx_mod.flags &
+				BNA_RX_MOD_F_ENET_LOOPBACK))
+				rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+			break;
+		case BNA_RX_T_LOOPBACK:
+			if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_ENET_LOOPBACK)
+				rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+			break;
+		}
+	}
+
+	rx->num_paths = rx_cfg->num_paths;
+	for (i = 0, rcb_idx = 0; i < rx->num_paths; i++) {
+		rxp = bna_rxp_get(rx_mod);
+		list_add_tail(&rxp->qe, &rx->rxp_q);
+		rxp->type = rx_cfg->rxp_type;
+		rxp->rx = rx;
+		rxp->cq.rx = rx;
+
+		q0 = bna_rxq_get(rx_mod);
+		if (BNA_RXP_SINGLE == rx_cfg->rxp_type)
+			q1 = NULL;
+		else
+			q1 = bna_rxq_get(rx_mod);
+
+		if (1 == intr_info->num)
+			rxp->vector = intr_info->idl[0].vector;
+		else
+			rxp->vector = intr_info->idl[i].vector;
+
+		/* Setup IB */
+
+		rxp->cq.ib.ib_seg_host_addr.lsb =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
+		rxp->cq.ib.ib_seg_host_addr.msb =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
+		rxp->cq.ib.ib_seg_host_addr_kva =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
+		rxp->cq.ib.intr_type = intr_info->intr_type;
+		if (intr_info->intr_type == BNA_INTR_T_MSIX)
+			rxp->cq.ib.intr_vector = rxp->vector;
+		else
+			rxp->cq.ib.intr_vector = (1 << rxp->vector);
+		rxp->cq.ib.coalescing_timeo = rx_cfg->coalescing_timeo;
+		rxp->cq.ib.interpkt_count = BFI_RX_INTERPKT_COUNT;
+		rxp->cq.ib.interpkt_timeo = BFI_RX_INTERPKT_TIMEO;
+
+		bna_rxp_add_rxqs(rxp, q0, q1);
+
+		/* Setup large Q */
+
+		q0->rx = rx;
+		q0->rxp = rxp;
+
+		q0->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
+		q0->rcb->unmap_q = (void *)unmapq_mem[rcb_idx].kva;
+		rcb_idx++;
+		q0->rcb->q_depth = rx_cfg->q_depth;
+		q0->rcb->rxq = q0;
+		q0->rcb->bnad = bna->bnad;
+		q0->rcb->id = 0;
+		q0->rx_packets = q0->rx_bytes = 0;
+		q0->rx_packets_with_error = q0->rxbuf_alloc_failed = 0;
+
+		bna_rxq_qpt_setup(q0, rxp, dpage_count, PAGE_SIZE,
+			&dqpt_mem[i], &dsqpt_mem[i], &dpage_mem[dpage_idx]);
+		q0->rcb->page_idx = dpage_idx;
+		q0->rcb->page_count = dpage_count;
+		dpage_idx += dpage_count;
+
+		if (rx->rcb_setup_cbfn)
+			rx->rcb_setup_cbfn(bnad, q0->rcb);
+
+		/* Setup small Q */
+
+		if (q1) {
+			q1->rx = rx;
+			q1->rxp = rxp;
+
+			q1->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
+			q1->rcb->unmap_q = (void *)unmapq_mem[rcb_idx].kva;
+			rcb_idx++;
+			q1->rcb->q_depth = rx_cfg->q_depth;
+			q1->rcb->rxq = q1;
+			q1->rcb->bnad = bna->bnad;
+			q1->rcb->id = 1;
+			q1->buffer_size = (rx_cfg->rxp_type == BNA_RXP_HDS) ?
+					rx_cfg->hds_config.forced_offset
+					: rx_cfg->small_buff_size;
+			q1->rx_packets = q1->rx_bytes = 0;
+			q1->rx_packets_with_error = q1->rxbuf_alloc_failed = 0;
+
+			bna_rxq_qpt_setup(q1, rxp, hpage_count, PAGE_SIZE,
+				&hqpt_mem[i], &hsqpt_mem[i],
+				&hpage_mem[hpage_idx]);
+			q1->rcb->page_idx = hpage_idx;
+			q1->rcb->page_count = hpage_count;
+			hpage_idx += hpage_count;
+
+			if (rx->rcb_setup_cbfn)
+				rx->rcb_setup_cbfn(bnad, q1->rcb);
+		}
+
+		/* Setup CQ */
+
+		rxp->cq.ccb = (struct bna_ccb *) ccb_mem[i].kva;
+		rxp->cq.ccb->q_depth =	rx_cfg->q_depth +
+					((rx_cfg->rxp_type == BNA_RXP_SINGLE) ?
+					0 : rx_cfg->q_depth);
+		rxp->cq.ccb->cq = &rxp->cq;
+		rxp->cq.ccb->rcb[0] = q0->rcb;
+		q0->rcb->ccb = rxp->cq.ccb;
+		if (q1) {
+			rxp->cq.ccb->rcb[1] = q1->rcb;
+			q1->rcb->ccb = rxp->cq.ccb;
+		}
+		rxp->cq.ccb->hw_producer_index =
+			(u32 *)rxp->cq.ib.ib_seg_host_addr_kva;
+		rxp->cq.ccb->i_dbell = &rxp->cq.ib.door_bell;
+		rxp->cq.ccb->intr_type = rxp->cq.ib.intr_type;
+		rxp->cq.ccb->intr_vector = rxp->cq.ib.intr_vector;
+		rxp->cq.ccb->rx_coalescing_timeo =
+			rxp->cq.ib.coalescing_timeo;
+		rxp->cq.ccb->pkt_rate.small_pkt_cnt = 0;
+		rxp->cq.ccb->pkt_rate.large_pkt_cnt = 0;
+		rxp->cq.ccb->bnad = bna->bnad;
+		rxp->cq.ccb->id = i;
+
+		bna_rxp_cqpt_setup(rxp, page_count, PAGE_SIZE,
+			&cqpt_mem[i], &cswqpt_mem[i], &cpage_mem[cpage_idx]);
+		rxp->cq.ccb->page_idx = cpage_idx;
+		rxp->cq.ccb->page_count = page_count;
+		cpage_idx += page_count;
+
+		if (rx->ccb_setup_cbfn)
+			rx->ccb_setup_cbfn(bnad, rxp->cq.ccb);
+	}
+
+	rx->hds_cfg = rx_cfg->hds_config;
+
+	bna_rxf_init(&rx->rxf, rx, rx_cfg, res_info);
+
+	bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+
+	rx_mod->rid_mask |= (1 << rx->rid);
+
+	return rx;
+}
+
+void
+bna_rx_destroy(struct bna_rx *rx)
+{
+	struct bna_rx_mod *rx_mod = &rx->bna->rx_mod;
+	struct bna_rxq *q0 = NULL;
+	struct bna_rxq *q1 = NULL;
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+
+	bna_rxf_uninit(&rx->rxf);
+
+	while (!list_empty(&rx->rxp_q)) {
+		bfa_q_deq(&rx->rxp_q, &rxp);
+		GET_RXQS(rxp, q0, q1);
+		if (rx->rcb_destroy_cbfn)
+			rx->rcb_destroy_cbfn(rx->bna->bnad, q0->rcb);
+		q0->rcb = NULL;
+		q0->rxp = NULL;
+		q0->rx = NULL;
+		bna_rxq_put(rx_mod, q0);
+
+		if (q1) {
+			if (rx->rcb_destroy_cbfn)
+				rx->rcb_destroy_cbfn(rx->bna->bnad, q1->rcb);
+			q1->rcb = NULL;
+			q1->rxp = NULL;
+			q1->rx = NULL;
+			bna_rxq_put(rx_mod, q1);
+		}
+		rxp->rxq.slr.large = NULL;
+		rxp->rxq.slr.small = NULL;
+
+		if (rx->ccb_destroy_cbfn)
+			rx->ccb_destroy_cbfn(rx->bna->bnad, rxp->cq.ccb);
+		rxp->cq.ccb = NULL;
+		rxp->rx = NULL;
+		bna_rxp_put(rx_mod, rxp);
+	}
+
+	list_for_each(qe, &rx_mod->rx_active_q) {
+		if (qe == &rx->qe) {
+			list_del(&rx->qe);
+			bfa_q_qe_init(&rx->qe);
+			break;
+		}
+	}
+
+	rx_mod->rid_mask &= ~(1 << rx->rid);
+
+	rx->bna = NULL;
+	rx->priv = NULL;
+	bna_rx_put(rx_mod, rx);
+}
+
+void
+bna_rx_enable(struct bna_rx *rx)
+{
+	if (rx->fsm != (bfa_sm_t)bna_rx_sm_stopped)
+		return;
+
+	rx->rx_flags |= BNA_RX_F_ENABLED;
+	if (rx->rx_flags & BNA_RX_F_ENET_STARTED)
+		bfa_fsm_send_event(rx, RX_E_START);
+}
+
+void
+bna_rx_disable(struct bna_rx *rx, enum bna_cleanup_type type,
+		void (*cbfn)(void *, struct bna_rx *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		/* h/w should not be accessed. Treat it as if we're stopped */
+		(*cbfn)(rx->bna->bnad, rx);
+	} else {
+		rx->stop_cbfn = cbfn;
+		rx->stop_cbarg = rx->bna->bnad;
+
+		rx->rx_flags &= ~BNA_RX_F_ENABLED;
+
+		bfa_fsm_send_event(rx, RX_E_STOP);
+	}
+}
+
+void
+bna_rx_cleanup_complete(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_CLEANUP_DONE);
+}
+
+enum bna_cb_status
+bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode new_mode,
+		enum bna_rxmode bitmask,
+		void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int need_hw_config = 0;
+
+	/* Error checks */
+
+	if (is_promisc_enable(new_mode, bitmask)) {
+		/* If promisc mode is already enabled elsewhere in the system */
+		if ((rx->bna->promisc_rid != BFI_INVALID_RID) &&
+			(rx->bna->promisc_rid != rxf->rx->rid))
+			goto err_return;
+
+		/* If default mode is already enabled in the system */
+		if (rx->bna->default_mode_rid != BFI_INVALID_RID)
+			goto err_return;
+
+		/* Trying to enable promiscuous and default mode together */
+		if (is_default_enable(new_mode, bitmask))
+			goto err_return;
+	}
+
+	if (is_default_enable(new_mode, bitmask)) {
+		/* If default mode is already enabled elsewhere in the system */
+		if ((rx->bna->default_mode_rid != BFI_INVALID_RID) &&
+			(rx->bna->default_mode_rid != rxf->rx->rid)) {
+				goto err_return;
+		}
+
+		/* If promiscuous mode is already enabled in the system */
+		if (rx->bna->promisc_rid != BFI_INVALID_RID)
+			goto err_return;
+	}
+
+	/* Process the commands */
+
+	if (is_promisc_enable(new_mode, bitmask)) {
+		if (bna_rxf_promisc_enable(rxf))
+			need_hw_config = 1;
+	} else if (is_promisc_disable(new_mode, bitmask)) {
+		if (bna_rxf_promisc_disable(rxf))
+			need_hw_config = 1;
+	}
+
+	if (is_allmulti_enable(new_mode, bitmask)) {
+		if (bna_rxf_allmulti_enable(rxf))
+			need_hw_config = 1;
+	} else if (is_allmulti_disable(new_mode, bitmask)) {
+		if (bna_rxf_allmulti_disable(rxf))
+			need_hw_config = 1;
+	}
+
+	/* Trigger h/w if needed */
+
+	if (need_hw_config) {
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	} else if (cbfn)
+		(*cbfn)(rx->bna->bnad, rx);
+
+	return BNA_CB_SUCCESS;
+
+err_return:
+	return BNA_CB_FAIL;
+}
+
+void
+bna_rx_vlanfilter_enable(struct bna_rx *rx)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+
+	if (rxf->vlan_filter_status == BNA_STATUS_T_DISABLED) {
+		rxf->vlan_filter_status = BNA_STATUS_T_ENABLED;
+		rxf->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+void
+bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo)
+{
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+
+	list_for_each(qe, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe;
+		rxp->cq.ccb->rx_coalescing_timeo = coalescing_timeo;
+		bna_ib_coalescing_timeo_set(&rxp->cq.ib, coalescing_timeo);
+	}
+}
+
+void
+bna_rx_dim_reconfig(struct bna *bna, u32 vector[][BNA_BIAS_T_MAX])
+{
+	int i, j;
+
+	for (i = 0; i < BNA_LOAD_T_MAX; i++)
+		for (j = 0; j < BNA_BIAS_T_MAX; j++)
+			bna->rx_mod.dim_vector[i][j] = vector[i][j];
+}
+
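+/*
+ * Dynamic interrupt moderation: bin the observed packet rate into a load
+ * level, bias it by the small/large packet ratio, and pick the new
+ * coalescing timeout from the preconfigured dim_vector table.
+ */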
+void
+bna_rx_dim_update(struct bna_ccb *ccb)
+{
+	struct bna *bna = ccb->cq->rx->bna;
+	u32 load, bias;
+	u32 pkt_rt, small_rt, large_rt;
+	u8 coalescing_timeo;
+
+	if ((ccb->pkt_rate.small_pkt_cnt == 0) &&
+		(ccb->pkt_rate.large_pkt_cnt == 0))
+		return;
+
+	/* Arrive at preconfigured coalescing timeo value based on pkt rate */
+
+	small_rt = ccb->pkt_rate.small_pkt_cnt;
+	large_rt = ccb->pkt_rate.large_pkt_cnt;
+
+	pkt_rt = small_rt + large_rt;
+
+	if (pkt_rt < BNA_PKT_RATE_10K)
+		load = BNA_LOAD_T_LOW_4;
+	else if (pkt_rt < BNA_PKT_RATE_20K)
+		load = BNA_LOAD_T_LOW_3;
+	else if (pkt_rt < BNA_PKT_RATE_30K)
+		load = BNA_LOAD_T_LOW_2;
+	else if (pkt_rt < BNA_PKT_RATE_40K)
+		load = BNA_LOAD_T_LOW_1;
+	else if (pkt_rt < BNA_PKT_RATE_50K)
+		load = BNA_LOAD_T_HIGH_1;
+	else if (pkt_rt < BNA_PKT_RATE_60K)
+		load = BNA_LOAD_T_HIGH_2;
+	else if (pkt_rt < BNA_PKT_RATE_80K)
+		load = BNA_LOAD_T_HIGH_3;
+	else
+		load = BNA_LOAD_T_HIGH_4;
+
+	if (small_rt > (large_rt << 1))
+		bias = 0;
+	else
+		bias = 1;
+
+	ccb->pkt_rate.small_pkt_cnt = 0;
+	ccb->pkt_rate.large_pkt_cnt = 0;
+
+	coalescing_timeo = bna->rx_mod.dim_vector[load][bias];
+	ccb->rx_coalescing_timeo = coalescing_timeo;
+
+	/* Set it to IB */
+	bna_ib_coalescing_timeo_set(&ccb->cq->ib, coalescing_timeo);
+}
+
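+/* Default DIM coalescing timeouts, indexed by [load][bias] */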
+u32 bna_napi_dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX] = {
+	{12, 12},
+	{6, 10},
+	{5, 10},
+	{4, 8},
+	{3, 6},
+	{3, 6},
+	{2, 4},
+	{1, 2},
+};
+
+/**
+ * TX
+ */
+#define call_tx_stop_cbfn(tx)						\
+do {									\
+	if ((tx)->stop_cbfn) {						\
+		void (*cbfn)(void *, struct bna_tx *);		\
+		void *cbarg;						\
+		cbfn = (tx)->stop_cbfn;					\
+		cbarg = (tx)->stop_cbarg;				\
+		(tx)->stop_cbfn = NULL;					\
+		(tx)->stop_cbarg = NULL;				\
+		cbfn(cbarg, (tx));					\
+	}								\
+} while (0)
+
+#define call_tx_prio_change_cbfn(tx)					\
+do {									\
+	if ((tx)->prio_change_cbfn) {					\
+		void (*cbfn)(struct bnad *, struct bna_tx *);	\
+		cbfn = (tx)->prio_change_cbfn;				\
+		(tx)->prio_change_cbfn = NULL;				\
+		cbfn((tx)->bna->bnad, (tx));				\
+	}								\
+} while (0)
+
+static void bna_tx_mod_cb_tx_stopped(void *tx_mod, struct bna_tx *tx);
+static void bna_bfi_tx_enet_start(struct bna_tx *tx);
+static void bna_tx_enet_stop(struct bna_tx *tx);
+
+enum bna_tx_event {
+	TX_E_START			= 1,
+	TX_E_STOP			= 2,
+	TX_E_FAIL			= 3,
+	TX_E_STARTED			= 4,
+	TX_E_STOPPED			= 5,
+	TX_E_PRIO_CHANGE		= 6,
+	TX_E_CLEANUP_DONE		= 7,
+	TX_E_BW_UPDATE			= 8,
+};
+
+bfa_fsm_state_decl(bna_tx, stopped, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, start_wait, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, started, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, stop_wait, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, cleanup_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, prio_stop_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, prio_cleanup_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, failed, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, quiesce_wait, struct bna_tx,
+			enum bna_tx_event);
+
+static void
+bna_tx_sm_stopped_entry(struct bna_tx *tx)
+{
+	call_tx_stop_cbfn(tx);
+}
+
+static void
+bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_START:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
+
+	case TX_E_STOP:
+		call_tx_stop_cbfn(tx);
+		break;
+
+	case TX_E_FAIL:
+		/* No-op */
+		break;
+
+	case TX_E_PRIO_CHANGE:
+		call_tx_prio_change_cbfn(tx);
+		break;
+
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_start_wait_entry(struct bna_tx *tx)
+{
+	bna_bfi_tx_enet_start(tx);
+}
+
+static void
+bna_tx_sm_start_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		tx->flags &= ~(BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED);
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		break;
+
+	case TX_E_FAIL:
+		tx->flags &= ~(BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED);
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	case TX_E_STARTED:
+		if (tx->flags & (BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED)) {
+			tx->flags &= ~(BNA_TX_F_PRIO_CHANGED |
+				BNA_TX_F_BW_UPDATED);
+			bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
+		} else
+			bfa_fsm_set_state(tx, bna_tx_sm_started);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+		tx->flags |=  BNA_TX_F_PRIO_CHANGED;
+		break;
+
+	case TX_E_BW_UPDATE:
+		tx->flags |= BNA_TX_F_BW_UPDATED;
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_started_entry(struct bna_tx *tx)
+{
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+	int is_regular = (tx->type == BNA_TX_T_REGULAR);
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		txq->tcb->priority = txq->priority;
+		/* Start IB */
+		bna_ib_start(tx->bna, &txq->ib, is_regular);
+	}
+	tx->tx_resume_cbfn(tx->bna->bnad, tx);
+}
+
+static void
+bna_tx_sm_started(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		tx->tx_stall_cbfn(tx->bna->bnad, tx);
+		bna_tx_enet_stop(tx);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		tx->tx_stall_cbfn(tx->bna->bnad, tx);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_stop_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_FAIL:
+	case TX_E_STOPPED:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+		break;
+
+	case TX_E_STARTED:
+		/**
+		 * We are here due to start_wait -> stop_wait transition on
+		 * TX_E_STOP event
+		 */
+		bna_tx_enet_stop(tx);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_cleanup_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_cleanup_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_FAIL:
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_prio_stop_wait_entry(struct bna_tx *tx)
+{
+	tx->tx_stall_cbfn(tx->bna->bnad, tx);
+	bna_tx_enet_stop(tx);
+}
+
+static void
+bna_tx_sm_prio_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		call_tx_prio_change_cbfn(tx);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+		break;
+
+	case TX_E_STOPPED:
+		bfa_fsm_set_state(tx, bna_tx_sm_prio_cleanup_wait);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_prio_cleanup_wait_entry(struct bna_tx *tx)
+{
+	call_tx_prio_change_cbfn(tx);
+	tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
+}
+
+static void
+bna_tx_sm_prio_cleanup_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_failed_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_failed(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_START:
+		bfa_fsm_set_state(tx, bna_tx_sm_quiesce_wait);
+		break;
+
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
+	case TX_E_FAIL:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_quiesce_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_quiesce_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
+
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
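+/*
+ * Build the BFI_ENET_H2I_TX_CFG_SET_REQ (TxQ and interrupt block
+ * configuration for every queue) and post it to firmware over the
+ * message queue.
+ */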
+static void
+bna_bfi_tx_enet_start(struct bna_tx *tx)
+{
+	struct bfi_enet_tx_cfg_req *cfg_req = &tx->bfi_enet_cmd.cfg_req;
+	struct bna_txq *txq = NULL;
+	struct list_head *qe;
+	int i;
+
+	bfi_msgq_mhdr_set(cfg_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_TX_CFG_SET_REQ, 0, tx->rid);
+	cfg_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_tx_cfg_req)));
+
+	cfg_req->num_queues = tx->num_txq;
+	for (i = 0, qe = bfa_q_first(&tx->txq_q);
+		i < tx->num_txq;
+		i++, qe = bfa_q_next(qe)) {
+		txq = (struct bna_txq *)qe;
+
+		bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].q.q, &txq->qpt);
+		cfg_req->q_cfg[i].q.priority = txq->priority;
+
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_lo =
+			txq->ib.ib_seg_host_addr.lsb;
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_hi =
+			txq->ib.ib_seg_host_addr.msb;
+		cfg_req->q_cfg[i].ib.intr.msix_index =
+			htons((u16)txq->ib.intr_vector);
+	}
+
+	cfg_req->ib_cfg.int_pkt_dma = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_enabled = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_pkt_enabled = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.continuous_coalescing = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.msix = (txq->ib.intr_type == BNA_INTR_T_MSIX)
+				? BNA_STATUS_T_ENABLED : BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.coalescing_timeout =
+			htonl((u32)txq->ib.coalescing_timeo);
+	cfg_req->ib_cfg.inter_pkt_timeout =
+			htonl((u32)txq->ib.interpkt_timeo);
+	cfg_req->ib_cfg.inter_pkt_count = (u8)txq->ib.interpkt_count;
+
+	cfg_req->tx_cfg.vlan_mode = BFI_ENET_TX_VLAN_WI;
+	cfg_req->tx_cfg.vlan_id = htons((u16)tx->txf_vlan_id);
+	cfg_req->tx_cfg.admit_tagged_frame = BNA_STATUS_T_DISABLED;
+	cfg_req->tx_cfg.apply_vlan_filter = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_tx_cfg_req), &cfg_req->mh);
+	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
+}
+
+static void
+bna_bfi_tx_enet_stop(struct bna_tx *tx)
+{
+	struct bfi_enet_req *req = &tx->bfi_enet_cmd.req;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_TX_CFG_CLR_REQ, 0, tx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));
+	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL, sizeof(struct bfi_enet_req),
+		&req->mh);
+	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
+}
+
+static void
+bna_tx_enet_stop(struct bna_tx *tx)
+{
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+
+	/* Stop IB */
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		bna_ib_stop(tx->bna, &txq->ib);
+	}
+
+	bna_bfi_tx_enet_stop(tx);
+}
+
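+/* Fill the hardware and shadow (software) queue page tables for a TxQ */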
+static void
+bna_txq_qpt_setup(struct bna_txq *txq, int page_count, int page_size,
+		struct bna_mem_descr *qpt_mem,
+		struct bna_mem_descr *swqpt_mem,
+		struct bna_mem_descr *page_mem)
+{
+	int i;
+
+	txq->qpt.hw_qpt_ptr.lsb = qpt_mem->dma.lsb;
+	txq->qpt.hw_qpt_ptr.msb = qpt_mem->dma.msb;
+	txq->qpt.kv_qpt_ptr = qpt_mem->kva;
+	txq->qpt.page_count = page_count;
+	txq->qpt.page_size = page_size;
+
+	txq->tcb->sw_qpt = (void **) swqpt_mem->kva;
+
+	for (i = 0; i < page_count; i++) {
+		txq->tcb->sw_qpt[i] = page_mem[i].kva;
+
+		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].lsb =
+			page_mem[i].dma.lsb;
+		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].msb =
+			page_mem[i].dma.msb;
+	}
+}
+
+static struct bna_tx *
+bna_tx_get(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct list_head	*qe = NULL;
+	struct bna_tx *tx = NULL;
+
+	if (list_empty(&tx_mod->tx_free_q))
+		return NULL;
+	if (type == BNA_TX_T_REGULAR)
+		bfa_q_deq(&tx_mod->tx_free_q, &qe);
+	else
+		bfa_q_deq_tail(&tx_mod->tx_free_q, &qe);
+	tx = (struct bna_tx *)qe;
+	bfa_q_qe_init(&tx->qe);
+	tx->type = type;
+
+	return tx;
+}
+
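+/*
+ * Return the TxQs to the free pool and re-insert the Tx into tx_free_q,
+ * keeping that list sorted by rid.
+ */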
+static void
+bna_tx_free(struct bna_tx *tx)
+{
+	struct bna_tx_mod *tx_mod = &tx->bna->tx_mod;
+	struct bna_txq *txq;
+	struct list_head *prev_qe;
+	struct list_head *qe;
+
+	while (!list_empty(&tx->txq_q)) {
+		bfa_q_deq(&tx->txq_q, &txq);
+		bfa_q_qe_init(&txq->qe);
+		txq->tcb = NULL;
+		txq->tx = NULL;
+		list_add_tail(&txq->qe, &tx_mod->txq_free_q);
+	}
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		if (qe == &tx->qe) {
+			list_del(&tx->qe);
+			bfa_q_qe_init(&tx->qe);
+			break;
+		}
+	}
+
+	tx->bna = NULL;
+	tx->priv = NULL;
+
+	prev_qe = NULL;
+	list_for_each(qe, &tx_mod->tx_free_q) {
+		if (((struct bna_tx *)qe)->rid < tx->rid)
+			prev_qe = qe;
+		else
+			break;
+	}
+
+	if (prev_qe == NULL) {
+		/* This is the first entry */
+		bfa_q_enq_head(&tx_mod->tx_free_q, &tx->qe);
+	} else if (bfa_q_next(prev_qe) == &tx_mod->tx_free_q) {
+		/* This is the last entry */
+		list_add_tail(&tx->qe, &tx_mod->tx_free_q);
+	} else {
+		/* Somewhere in the middle */
+		bfa_q_next(&tx->qe) = bfa_q_next(prev_qe);
+		bfa_q_prev(&tx->qe) = prev_qe;
+		bfa_q_next(prev_qe) = &tx->qe;
+		bfa_q_prev(bfa_q_next(&tx->qe)) = &tx->qe;
+	}
+}
+
+static void
+bna_tx_start(struct bna_tx *tx)
+{
+	tx->flags |= BNA_TX_F_ENET_STARTED;
+	if (tx->flags & BNA_TX_F_ENABLED)
+		bfa_fsm_send_event(tx, TX_E_START);
+}
+
+static void
+bna_tx_stop(struct bna_tx *tx)
+{
+	tx->stop_cbfn = bna_tx_mod_cb_tx_stopped;
+	tx->stop_cbarg = &tx->bna->tx_mod;
+
+	tx->flags &= ~BNA_TX_F_ENET_STARTED;
+	bfa_fsm_send_event(tx, TX_E_STOP);
+}
+
+static void
+bna_tx_fail(struct bna_tx *tx)
+{
+	tx->flags &= ~BNA_TX_F_ENET_STARTED;
+	bfa_fsm_send_event(tx, TX_E_FAIL);
+}
+
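+/*
+ * Move every TxQ of a regular Tx to the current default priority (unless
+ * the Tx already owns all 8 priorities) and kick the Tx state machine.
+ */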
+static void
+bna_tx_prio_changed(struct bna_tx *tx)
+{
+	struct bna_tx_mod *tx_mod = &tx->bna->tx_mod;
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+
+	/* No need of priority reconfiguration for loopback Tx */
+	if (tx->type != BNA_TX_T_REGULAR)
+		return;
+
+	/**
+	 * If there are exactly 8 TxQs, each one occupies one priority.
+	 * In such case, there is nothing to reconfigure, since their
+	 * priorities remain the same.
+	 */
+	if (tx->num_txq != BFI_TX_MAX_PRIO) {
+		list_for_each(qe, &tx->txq_q) {
+			txq = (struct bna_txq *)qe;
+			txq->priority = (u8)tx_mod->default_prio;
+		}
+
+		bfa_fsm_send_event(tx, TX_E_PRIO_CHANGE);
+	}
+}
+
+void
+bna_bfi_tx_enet_start_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_tx_cfg_rsp *cfg_rsp = &tx->bfi_enet_cmd.cfg_rsp;
+	struct bna_txq *txq = NULL;
+	struct list_head *qe;
+	int i;
+
+	bfa_msgq_rsp_copy(&tx->bna->msgq, (u8 *)cfg_rsp,
+		sizeof(struct bfi_enet_tx_cfg_rsp));
+
+	tx->hw_id = cfg_rsp->hw_id;
+
+	for (i = 0, qe = bfa_q_first(&tx->txq_q);
+		i < tx->num_txq; i++, qe = bfa_q_next(qe)) {
+		txq = (struct bna_txq *)qe;
+
+		/* Setup doorbells */
+		txq->tcb->i_dbell->doorbell_addr =
+			tx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].i_dbell);
+		txq->tcb->q_dbell =
+			tx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].q_dbell);
+		txq->hw_id = cfg_rsp->q_handles[i].hw_qid;
+
+		/* Initialize producer/consumer indexes */
+		(*txq->tcb->hw_consumer_index) = 0;
+		txq->tcb->producer_index = txq->tcb->consumer_index = 0;
+	}
+
+	bfa_fsm_send_event(tx, TX_E_STARTED);
+}
+
+void
+bna_bfi_tx_enet_stop_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(tx, TX_E_STOPPED);
+}
+
+void
+bna_bfi_bw_update_aen(struct bna_tx_mod *tx_mod)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		bfa_fsm_send_event(tx, TX_E_BW_UPDATE);
+	}
+}
+
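+/* Compute the memory and interrupt resources needed for a Tx object */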
+void
+bna_tx_res_req(int num_txq, int txq_depth, struct bna_res_info *res_info)
+{
+	u32 q_size;
+	u32 page_count;
+	struct bna_mem_info *mem_info;
+
+	res_info[BNA_TX_RES_MEM_T_TCB].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = sizeof(struct bna_tcb);
+	mem_info->num = num_txq;
+
+	q_size = txq_depth * BFI_TXQ_WI_SIZE;
+	q_size = ALIGN(q_size, PAGE_SIZE);
+	page_count = q_size >> PAGE_SHIFT;
+
+	res_info[BNA_TX_RES_MEM_T_QPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_QPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = page_count * sizeof(struct bna_dma_addr);
+	mem_info->num = num_txq;
+
+	res_info[BNA_TX_RES_MEM_T_SWQPT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_SWQPT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = page_count * sizeof(void *);
+	mem_info->num = num_txq;
+
+	res_info[BNA_TX_RES_MEM_T_PAGE].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = PAGE_SIZE;
+	mem_info->num = num_txq * page_count;
+
+	res_info[BNA_TX_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = BFI_IBIDX_SIZE;
+	mem_info->num = num_txq;
+
+	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_type = BNA_RES_T_INTR;
+	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.intr_type =
+			BNA_INTR_T_MSIX;
+	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.num = num_txq;
+}
+
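+/*
+ * Allocate a Tx object and its TxQs from the free pools and set up the
+ * TCBs, interrupt blocks, queue page tables and priorities, using the
+ * resources described in res_info.
+ */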
+struct bna_tx *
+bna_tx_create(struct bna *bna, struct bnad *bnad,
+		struct bna_tx_config *tx_cfg,
+		struct bna_tx_event_cbfn *tx_cbfn,
+		struct bna_res_info *res_info, void *priv)
+{
+	struct bna_intr_info *intr_info;
+	struct bna_tx_mod *tx_mod = &bna->tx_mod;
+	struct bna_tx *tx;
+	struct bna_txq *txq;
+	struct list_head *qe;
+	int page_count;
+	int page_size;
+	int page_idx;
+	int i;
+
+	intr_info = &res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info;
+	page_count = (res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.num) /
+			tx_cfg->num_txq;
+	page_size = res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.len;
+
+	/**
+	 * Get resources
+	 */
+
+	if ((intr_info->num != 1) && (intr_info->num != tx_cfg->num_txq))
+		return NULL;
+
+	/* Tx */
+
+	tx = bna_tx_get(tx_mod, tx_cfg->tx_type);
+	if (!tx)
+		return NULL;
+	tx->bna = bna;
+	tx->priv = priv;
+
+	/* TxQs */
+
+	INIT_LIST_HEAD(&tx->txq_q);
+	for (i = 0; i < tx_cfg->num_txq; i++) {
+		if (list_empty(&tx_mod->txq_free_q))
+			goto err_return;
+
+		bfa_q_deq(&tx_mod->txq_free_q, &txq);
+		bfa_q_qe_init(&txq->qe);
+		list_add_tail(&txq->qe, &tx->txq_q);
+		txq->tx = tx;
+	}
+
+	/*
+	 * Initialize
+	 */
+
+	/* Tx */
+
+	tx->tcb_setup_cbfn = tx_cbfn->tcb_setup_cbfn;
+	tx->tcb_destroy_cbfn = tx_cbfn->tcb_destroy_cbfn;
+	/* Following callbacks are mandatory */
+	tx->tx_stall_cbfn = tx_cbfn->tx_stall_cbfn;
+	tx->tx_resume_cbfn = tx_cbfn->tx_resume_cbfn;
+	tx->tx_cleanup_cbfn = tx_cbfn->tx_cleanup_cbfn;
+
+	list_add_tail(&tx->qe, &tx_mod->tx_active_q);
+
+	tx->num_txq = tx_cfg->num_txq;
+
+	tx->flags = 0;
+	if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_ENET_STARTED) {
+		switch (tx->type) {
+		case BNA_TX_T_REGULAR:
+			if (!(tx->bna->tx_mod.flags &
+				BNA_TX_MOD_F_ENET_LOOPBACK))
+				tx->flags |= BNA_TX_F_ENET_STARTED;
+			break;
+		case BNA_TX_T_LOOPBACK:
+			if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_ENET_LOOPBACK)
+				tx->flags |= BNA_TX_F_ENET_STARTED;
+			break;
+		}
+	}
+
+	/* TxQ */
+
+	i = 0;
+	page_idx = 0;
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		txq->tcb = (struct bna_tcb *)
+		res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info.mdl[i].kva;
+		txq->tx_packets = 0;
+		txq->tx_bytes = 0;
+
+		/* IB */
+		txq->ib.ib_seg_host_addr.lsb =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
+		txq->ib.ib_seg_host_addr.msb =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
+		txq->ib.ib_seg_host_addr_kva =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
+		txq->ib.intr_type = intr_info->intr_type;
+		txq->ib.intr_vector = (intr_info->num == 1) ?
+					intr_info->idl[0].vector :
+					intr_info->idl[i].vector;
+		if (intr_info->intr_type == BNA_INTR_T_INTX)
+			txq->ib.intr_vector = (1 <<  txq->ib.intr_vector);
+		txq->ib.coalescing_timeo = tx_cfg->coalescing_timeo;
+		txq->ib.interpkt_timeo = 0; /* Not used */
+		txq->ib.interpkt_count = BFI_TX_INTERPKT_COUNT;
+
+		/* TCB */
+
+		txq->tcb->q_depth = tx_cfg->txq_depth;
+		txq->tcb->unmap_q = (void *)
+		res_info[BNA_TX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[i].kva;
+		txq->tcb->hw_consumer_index =
+			(u32 *)txq->ib.ib_seg_host_addr_kva;
+		txq->tcb->i_dbell = &txq->ib.door_bell;
+		txq->tcb->intr_type = txq->ib.intr_type;
+		txq->tcb->intr_vector = txq->ib.intr_vector;
+		txq->tcb->txq = txq;
+		txq->tcb->bnad = bnad;
+		txq->tcb->id = i;
+
+		/* QPT, SWQPT, Pages */
+		bna_txq_qpt_setup(txq, page_count, page_size,
+			&res_info[BNA_TX_RES_MEM_T_QPT].res_u.mem_info.mdl[i],
+			&res_info[BNA_TX_RES_MEM_T_SWQPT].res_u.mem_info.mdl[i],
+			&res_info[BNA_TX_RES_MEM_T_PAGE].
+				  res_u.mem_info.mdl[page_idx]);
+		txq->tcb->page_idx = page_idx;
+		txq->tcb->page_count = page_count;
+		page_idx += page_count;
+
+		/* Callback to bnad for setting up TCB */
+		if (tx->tcb_setup_cbfn)
+			(tx->tcb_setup_cbfn)(bna->bnad, txq->tcb);
+
+		if (tx_cfg->num_txq == BFI_TX_MAX_PRIO)
+			txq->priority = txq->tcb->id;
+		else
+			txq->priority = tx_mod->default_prio;
+
+		i++;
+	}
+
+	tx->txf_vlan_id = 0;
+
+	bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+
+	tx_mod->rid_mask |= (1 << tx->rid);
+
+	return tx;
+
+err_return:
+	bna_tx_free(tx);
+	return NULL;
+}
+
+void
+bna_tx_destroy(struct bna_tx *tx)
+{
+	struct bna_txq *txq;
+	struct list_head *qe;
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		if (tx->tcb_destroy_cbfn)
+			(tx->tcb_destroy_cbfn)(tx->bna->bnad, txq->tcb);
+	}
+
+	tx->bna->tx_mod.rid_mask &= ~(1 << tx->rid);
+	bna_tx_free(tx);
+}
+
+void
+bna_tx_enable(struct bna_tx *tx)
+{
+	if (tx->fsm != (bfa_sm_t)bna_tx_sm_stopped)
+		return;
+
+	tx->flags |= BNA_TX_F_ENABLED;
+
+	if (tx->flags & BNA_TX_F_ENET_STARTED)
+		bfa_fsm_send_event(tx, TX_E_START);
+}
+
+void
+bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
+		void (*cbfn)(void *, struct bna_tx *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		(*cbfn)(tx->bna->bnad, tx);
+		return;
+	}
+
+	tx->stop_cbfn = cbfn;
+	tx->stop_cbarg = tx->bna->bnad;
+
+	tx->flags &= ~BNA_TX_F_ENABLED;
+
+	bfa_fsm_send_event(tx, TX_E_STOP);
+}
+
+void
+bna_tx_cleanup_complete(struct bna_tx *tx)
+{
+	bfa_fsm_send_event(tx, TX_E_CLEANUP_DONE);
+}
+
+void
+bna_tx_prio_set(struct bna_tx *tx, int prio,
+		void (*cbfn)(struct bnad *, struct bna_tx *))
+{
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+
+	tx->prio_change_cbfn = cbfn;
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		txq->priority = (u8)prio;
+	}
+
+	bfa_fsm_send_event(tx, TX_E_PRIO_CHANGE);
+}
+
+static void
+bna_tx_mod_cb_tx_stopped(void *arg, struct bna_tx *tx)
+{
+	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+
+	bfa_wc_down(&tx_mod->tx_stop_wc);
+}
+
+static void
+bna_tx_mod_cb_tx_stopped_all(void *arg)
+{
+	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+
+	if (tx_mod->stop_cbfn)
+		tx_mod->stop_cbfn(&tx_mod->bna->enet);
+	tx_mod->stop_cbfn = NULL;
+}
+
+void
+bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
+		struct bna_res_info *res_info)
+{
+	int i;
+
+	tx_mod->bna = bna;
+	tx_mod->flags = 0;
+
+	tx_mod->tx = (struct bna_tx *)
+		res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.mdl[0].kva;
+	tx_mod->txq = (struct bna_txq *)
+		res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&tx_mod->tx_free_q);
+	INIT_LIST_HEAD(&tx_mod->tx_active_q);
+
+	INIT_LIST_HEAD(&tx_mod->txq_free_q);
+
+	for (i = 0; i < bna->ioceth.attr.num_txq; i++) {
+		tx_mod->tx[i].rid = i;
+		bfa_q_qe_init(&tx_mod->tx[i].qe);
+		list_add_tail(&tx_mod->tx[i].qe, &tx_mod->tx_free_q);
+		bfa_q_qe_init(&tx_mod->txq[i].qe);
+		list_add_tail(&tx_mod->txq[i].qe, &tx_mod->txq_free_q);
+	}
+
+	tx_mod->prio_map = BFI_TX_PRIO_MAP_ALL;
+	tx_mod->default_prio = 0;
+	tx_mod->iscsi_over_cee = BNA_STATUS_T_DISABLED;
+	tx_mod->iscsi_prio = -1;
+}
+
+void
+bna_tx_mod_uninit(struct bna_tx_mod *tx_mod)
+{
+	struct list_head		*qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &tx_mod->tx_free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &tx_mod->txq_free_q)
+		i++;
+
+	tx_mod->bna = NULL;
+}
+
+void
+bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	tx_mod->flags |= BNA_TX_MOD_F_ENET_STARTED;
+	if (type == BNA_TX_T_LOOPBACK)
+		tx_mod->flags |= BNA_TX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		if (tx->type == type)
+			bna_tx_start(tx);
+	}
+}
+
+void
+bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_STARTED;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_LOOPBACK;
+
+	tx_mod->stop_cbfn = bna_enet_cb_tx_stopped;
+
+	bfa_wc_init(&tx_mod->tx_stop_wc, bna_tx_mod_cb_tx_stopped_all, tx_mod);
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		if (tx->type == type) {
+			bfa_wc_up(&tx_mod->tx_stop_wc);
+			bna_tx_stop(tx);
+		}
+	}
+
+	bfa_wc_wait(&tx_mod->tx_stop_wc);
+}
+
+void
+bna_tx_mod_fail(struct bna_tx_mod *tx_mod)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_STARTED;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_LOOPBACK;
+
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		bna_tx_fail(tx);
+	}
+}
+
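+/*
+ * Recompute the Tx priority map from CEE link state and the supplied
+ * priority / iSCSI priority maps, and reconfigure the active TxQs when
+ * anything changed.
+ */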
+void
+bna_tx_mod_prio_reconfig(struct bna_tx_mod *tx_mod, int cee_linkup,
+			u8 prio_map, u8 iscsi_prio_map)
+{
+	struct bna_tx *tx;
+	struct list_head		*qe;
+	int need_txq_reconfig = 0;
+	int iscsi_prio = -1;
+	int default_prio = -1;
+	int i;
+
+	/* Select the priority map */
+	if (!cee_linkup) {
+		/* No CEE. Use all priorities */
+		prio_map = BFI_TX_PRIO_MAP_ALL;
+		iscsi_prio_map = 0;
+		default_prio = 0;
+	} else {
+		/* Select default priority */
+		for (i = 0; i < BFI_TX_MAX_PRIO; i++) {
+			if ((prio_map >> i) & 0x1) {
+				default_prio = i;
+				break;
+			}
+		}
+
+		/* Derive iscsi priority */
+		if (iscsi_prio_map) {
+			for (i = 0; i < BFI_TX_MAX_PRIO; i++) {
+				if ((iscsi_prio_map >> i) & 0x1) {
+					iscsi_prio = i;
+					break;
+				}
+			}
+		}
+
+		/**
+		 * Network traffic priority map is a superset of iSCSI and
+		 * non iSCSI traffic. We are only giving a 'lift' to iSCSI
+		 * traffic by redirecting it to iSCSI priority. Other NW
+		 * traffic is still allowed to use iSCSI priority
+		 */
+		prio_map |= iscsi_prio_map;
+	}
+
+	if ((prio_map != tx_mod->prio_map) ||
+	    (default_prio != tx_mod->default_prio) ||
+	    (iscsi_prio_map && (iscsi_prio != tx_mod->iscsi_prio)))
+		need_txq_reconfig = 1;
+
+	tx_mod->prio_map = prio_map;
+	tx_mod->default_prio = default_prio;
+	tx_mod->iscsi_over_cee = (iscsi_prio_map ? BNA_STATUS_T_ENABLED :
+				BNA_STATUS_T_DISABLED);
+	tx_mod->iscsi_prio = iscsi_prio;
+	tx_mod->prio_reconfigured = need_txq_reconfig;
+
+	/* Reconfigure the TxQs */
+	if (need_txq_reconfig) {
+		list_for_each(qe, &tx_mod->tx_active_q) {
+			tx = (struct bna_tx *)qe;
+			bna_tx_prio_changed(tx);
+		}
+	}
+}
+
+void
+bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo)
+{
+	struct bna_txq *txq;
+	struct list_head *qe;
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		bna_ib_coalescing_timeo_set(&txq->ib, coalescing_timeo);
+	}
+}
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture
  2011-07-27  2:10 [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture Rasesh Mody
                   ` (7 preceding siblings ...)
  2011-07-27  2:10 ` [PATCH 8/8] bna: Tx and Rx Redesign Rasesh Mody
@ 2011-07-28  5:32 ` David Miller
  2011-07-28 19:30   ` Rasesh Mody
  8 siblings, 1 reply; 11+ messages in thread
From: David Miller @ 2011-07-28  5:32 UTC (permalink / raw)
  To: rmody; +Cc: netdev, adapter_linux_open_src_team

From: Rasesh Mody <rmody@brocade.com>
Date: Tue, 26 Jul 2011 19:10:40 -0700

>    This patch-set consists of few fixes, HW reg consolidation and adds support
>    for re-architecture and re-organisation of the driver.

Please do not mix bug fixes and feature changes.

If you do this, I can't put the bug fixes into the current release.

Separate out the real pure bug fixes into a separate series against
the main 'net' GIT tree.

Then you can submit your feature changes and cleanups separately for
'net-next'.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture
  2011-07-28  5:32 ` [PATCH 0/8] bna: Driver Fixes and Support for Re-architecture David Miller
@ 2011-07-28 19:30   ` Rasesh Mody
  0 siblings, 0 replies; 11+ messages in thread
From: Rasesh Mody @ 2011-07-28 19:30 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Adapter Linux Open SRC Team

>From: David Miller [mailto:davem@davemloft.net]
>Sent: Wednesday, July 27, 2011 10:33 PM
>
>From: Rasesh Mody <rmody@brocade.com>
>Date: Tue, 26 Jul 2011 19:10:40 -0700
>
>>    This patch-set consists of few fixes, HW reg consolidation and adds
>support
>>    for re-architecture and re-organisation of the driver.
>
>Please do not mix bug fixes and feature changes.
>
>If you do this, I can't put the bug fixes into the current release.
>
>Seperate out the real pure bug fixes into a seperate series against
>the main 'net' GIT tree.
>
>Then you can submit your feature changes and cleanups seperately for
>'net-next'

We will submit the bug fixes and the feature changes separately.
Thanks,
Rasesh

^ permalink raw reply	[flat|nested] 11+ messages in thread
